yt-4.4.0/CITATION

To cite yt in publications, please use:

Turk, M. J., Smith, B. D., Oishi, J. S., et al. 2011, ApJS, 192, 9

In the body of the text, please add a footnote to the yt webpage: http://yt-project.org/

For LaTeX and BibTeX users:

\bibitem[Turk et al.(2011)]{2011ApJS..192....9T} Turk, M.~J., Smith, B.~D.,
Oishi, J.~S., et al.\ 2011, The Astrophysical Journal Supplement Series, 192, 9

@ARTICLE{2011ApJS..192....9T,
   author = {{Turk}, M.~J. and {Smith}, B.~D. and {Oishi}, J.~S. and
      {Skory}, S. and {Skillman}, S.~W. and {Abel}, T. and {Norman}, M.~L.},
    title = "{yt: A Multi-code Analysis Toolkit for Astrophysical Simulation Data}",
  journal = {The Astrophysical Journal Supplement Series},
  archivePrefix = "arXiv",
   eprint = {1011.3514},
  primaryClass = "astro-ph.IM",
  keywords = {cosmology: theory, methods: data analysis, methods: numerical},
     year = 2011,
    month = jan,
   volume = 192,
      eid = {9},
    pages = {9},
      doi = {10.1088/0067-0049/192/1/9},
   adsurl = {http://adsabs.harvard.edu/abs/2011ApJS..192....9T},
  adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}

yt can also make use of functionality from other packages. If you use ORIGAMI, we ask that you please cite the ORIGAMI paper:

@ARTICLE{2012ApJ...754..126F,
   author = {{Falck}, B.~L. and {Neyrinck}, M.~C. and {Szalay}, A.~S.},
    title = "{ORIGAMI: Delineating Halos Using Phase-space Folds}",
  journal = {\apj},
  archivePrefix = "arXiv",
   eprint = {1201.2353},
  primaryClass = "astro-ph.CO",
  keywords = {dark matter, galaxies: halos, large-scale structure of universe, methods: numerical},
     year = 2012,
    month = aug,
   volume = 754,
      eid = {126},
    pages = {126},
      doi = {10.1088/0004-637X/754/2/126},
   adsurl = {http://adsabs.harvard.edu/abs/2012ApJ...754..126F},
  adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}

The main homepage for ORIGAMI can be found here: http://icg.port.ac.uk/~falckb/origami.html

yt-4.4.0/CONTRIBUTING.rst

.. This document is rendered in HTML with cross-reference links filled in at
   https://yt-project.org/doc/developing/developing.html

.. _getting-involved:

Getting Involved
================

There are *lots* of ways to get involved with yt, as a community and as a technical system -- not all of them just contributing code, but also participating in the community, helping us with designing the websites, adding documentation, and sharing your scripts with others. Coding is only one way to be involved!

Communication Channels
----------------------

There are three main communication channels for yt:

* Many yt developers participate in the yt Slack community. Slack is a free chat service that many teams use to organize their work. You can get an invite to yt's Slack organization by clicking the "Join us @ Slack" button on this page: https://yt-project.org/community.html
* `yt-users <https://mail.python.org/archives/list/yt-users@python.org/>`_ is a relatively high-traffic mailing list where people are encouraged to ask questions about the code, figure things out and so on.
* `yt-dev <https://mail.python.org/archives/list/yt-dev@python.org/>`_ is a much lower-traffic mailing list designed to focus on discussions of improvements to the code, ideas about planning, development issues, and so on.

The easiest way to get involved with yt is to read the mailing lists, hang out in IRC or Slack chat, and participate. If someone asks a question you know the answer to (or have your own question about!) write back and answer it. If you have an idea about something, suggest it! We not only welcome participation, we encourage it.

Documentation
-------------

The yt documentation is constantly being updated, and it is a task we would very much appreciate assistance with. Whether that is adding a section, updating an outdated section, contributing typo or grammatical fixes, adding a FAQ, or increasing coverage of functionality, it would be very helpful if you wanted to help out.

The easiest way to help out is to fork the main yt repository and make changes in it to contribute back to the ``yt-project``. A fork is a copy of a repository; in this case the fork will live in the space under your username on GitHub, rather than under the ``yt-project``. If you have never made a fork of a repository on GitHub, or are unfamiliar with this process, here is a short article about how to do so: https://help.github.com/en/github/getting-started-with-github/fork-a-repo

The documentation for ``yt`` lives in the ``doc`` directory in the root of the yt git repository. To make a contribution to the yt documentation you will make your changes in your own fork of ``yt``. When you are done, issue a pull request through the website for your new fork, and we can comment back and forth and eventually accept your changes. See :ref:`sharing-changes` for more information about contributing your changes to yt on GitHub.

Gallery Images and Videos
-------------------------

If you have an image or video you'd like to display in the image or video galleries, getting it included is easy! You can either fork the yt homepage repository and add it there, or email it to us and we'll add it to the Gallery. We're eager to show off the images and movies you make with yt, so please feel free to drop us a line and let us know if you've got something great!

Technical Contributions
-----------------------

Contributing code is another excellent way to participate -- whether it's bug fixes, new features, analysis modules, or a new code frontend. See :ref:`creating_frontend` for more details.

The process is pretty simple: fork on GitHub, make changes, issue a pull request. We can then go back and forth with comments in the pull request, but usually we end up accepting.

For more information, see :ref:`contributing-code`, where we spell out how to get up and running with a development environment, how to commit, and how to use GitHub. When you're ready to share your changes with the community, refer to :ref:`sharing-changes` to see how to contribute them back upstream.

Online Presence
---------------

Some of these fall under the other items, but if you'd like to help out with the website or any of the other ways yt is presented online, please feel free! Almost everything is kept in git repositories on GitHub, and it is very easy to fork and contribute back changes. Please feel free to dig in and contribute changes.

Word of Mouth
-------------

If you're using yt and it has increased your productivity, please feel encouraged to share that information. Cite our `paper <https://ui.adsabs.harvard.edu/abs/2011ApJS..192....9T>`_, tell your colleagues, and just spread word of mouth.
By telling people about your successes, you'll help bring more eyes and hands to the table -- in this manner, by increasing participation, collaboration, and simply spreading the limits of what the code is asked to do, we hope to help scale the utility and capability of yt with the community size. Feel free to blog about, tweet about, and talk about what you are up to!

Long-Term Projects
------------------

There are some out-there ideas that have been bandied about for the future directions of yt -- stuff like fun new types of visualization, remapping of coordinates, new ways of accessing data, and even new APIs to make life easier. yt is an ambitious project. Let's be ambitious together!

yt Community Code of Conduct
----------------------------

The community of participants in open source scientific projects is made up of members from around the globe with a diverse set of skills, personalities, and experiences. It is through these differences that our community experiences success and continued growth. We expect everyone in our community to follow these guidelines when interacting with others both inside and outside of our community. Our goal is to keep ours a positive, inclusive, successful, and growing community.

As members of the community,

- We pledge to treat all people with respect and provide a harassment- and bullying-free environment, regardless of sex, sexual orientation and/or gender identity, disability, physical appearance, body size, race, nationality, ethnicity, and religion. In particular, sexual language and imagery, sexist, racist, or otherwise exclusionary jokes are not appropriate.
- We pledge to respect the work of others by recognizing acknowledgment/citation requests of original authors. As authors, we pledge to be explicit about how we want our own work to be cited or acknowledged.
- We pledge to welcome those interested in joining the community, and realize that including people with a variety of opinions and backgrounds will only serve to enrich our community. In particular, discussions relating to pros/cons of various technologies, programming languages, and so on are welcome, but these should be done with respect, taking proactive measures to ensure that all participants are heard and feel confident that they can freely express their opinions.
- We pledge to welcome questions and answer them respectfully, paying particular attention to those new to the community. We pledge to provide respectful criticisms and feedback in forums, especially in discussion threads resulting from code contributions.
- We pledge to be conscientious of the perceptions of the wider community and to respond to criticism respectfully. We will strive to model behaviors that encourage productive debate and disagreement, both within our community and where we are criticized. We will treat those outside our community with the same respect as people within our community.
- We pledge to help the entire community follow the code of conduct, and to not remain silent when we see violations of the code of conduct. We will take action when members of our community violate this code, such as by contacting confidential@yt-project.org (all emails sent to this address will be treated with the strictest confidence) or talking privately with the person.

This code of conduct applies to all community situations online and offline, including mailing lists, forums, social media, conferences, meetings, associated social events, and one-to-one interactions.
The yt Community Code of Conduct was adapted from the `Astropy Community Code of Conduct <https://www.astropy.org/code_of_conduct.html>`_, which was partially inspired by the PSF code of conduct.

.. _contributing-code:

How to Develop yt
=================

yt is a community project! We are very happy to accept patches, features, and bugfixes from any member of the community! yt is developed using git, primarily because it enables very easy and straightforward submission of revisions. We're eager to hear from you, and if you are developing yt, we encourage you to subscribe to the `developer mailing list <https://mail.python.org/archives/list/yt-dev@python.org/>`_. Please feel free to hack around, commit changes, and send them upstream.

.. note:: If you already know how to use the `git version control system <https://git-scm.com/>`_ and are comfortable with handling it yourself, the quickest way to contribute to yt is to `fork us on GitHub <https://github.com/yt-project/yt/fork>`_, make your changes, push the changes to your fork and issue a pull request. The rest of this document is just an explanation of how to do that.

See :ref:`code-style-guide` for more information about coding style in yt and :ref:`docstrings` for an example docstring. Please read them before hacking on the codebase, and feel free to email any of the mailing lists for help with the codebase. Keep in touch, and happy hacking!

.. _open-issues:

Open Issues
-----------

If you're interested in participating in yt development, take a look at the `issue tracker on GitHub <https://github.com/yt-project/yt/issues>`_. You can search by labels, indicating estimated level of difficulty or category, to find issues that you would like to contribute to. Good first issues are marked with a label of *new contributor friendly*. While we try to triage the issue tracker regularly to assign appropriate labels to every issue, it may be the case that issues not marked as *new contributor friendly* are actually suitable for new contributors.

Here are some predefined issue searches that might be useful:

* Unresolved issues marked "new contributor friendly".
* All unresolved issues.

Submitting Changes
------------------

We provide a brief introduction to submitting changes here. yt thrives on the strength of its communities (https://arxiv.org/abs/1301.7064 has further discussion) and we encourage contributions from any user. While we do not discuss version control, git, or the advanced usage of GitHub in detail here, we do provide an outline of how to submit changes and we are happy to provide further assistance or guidance.

Licensing
+++++++++

yt is licensed under the BSD 3-clause license. Versions previous to yt-2.6 were released under the GPLv3. All contributed code must be BSD-compatible. If you'd rather not license in this manner, but still want to contribute, please consider creating an external package, which we'll happily link to.

How To Get The Source Code For Editing
++++++++++++++++++++++++++++++++++++++

yt is hosted on GitHub, and you can see all of the yt repositories at https://github.com/yt-project/. To fetch and modify source code, make sure you have followed the steps above for bootstrapping your development (to ensure you have a GitHub account, etc.).

In order to modify the source code for yt, we ask that you make a "fork" of the main yt repository on GitHub. A fork is simply an exact copy of the main repository (along with its history) that you will now own and can modify as you please. You can create a personal fork by visiting the yt GitHub webpage at https://github.com/yt-project/yt/. After logging in, you should see an option near the top right labeled "fork".
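If you use the official GitHub CLI, the fork and the clone described below can be combined into a single step. This is purely optional -- a minimal sketch, assuming ``gh`` is installed and authenticated:

.. code-block:: bash

   # fork yt-project/yt under your account and clone the fork locally
   $ gh repo fork yt-project/yt --clone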
You now have a forked copy of the yt repository for your own personal modification. This forked copy exists on the GitHub repository, so in order to access it locally you must clone it onto your machine from the command line:

.. code-block:: bash

   $ git clone https://github.com/<USERNAME>/yt ./yt-git

This downloads that new forked repository to your local machine, so that you can access it, read it, make modifications, etc. It will put the repository in a local directory of the same name as the repository in the current working directory.

.. code-block:: bash

   $ cd yt-git

Verify that you are on the ``main`` branch of yt by running:

.. code-block:: bash

   $ git branch

You can see any past state of the code by using the git log command. For example, the following command would show you the last 5 revisions (modifications to the code) that were submitted to that repository.

.. code-block:: bash

   $ git log -n 5

Using the revision specifier (the number or hash identifier next to each changeset), you can update the local repository to any past state of the code (a previous changeset or version) by executing the command:

.. code-block:: bash

   $ git checkout revision_specifier

You can always return to the most recent version of the code by executing the same command as above with the most recent revision specifier in the repository. However, using ``git log`` when you're checked out to an older revision specifier will not show more recent changes to the repository. An alternative option is to use ``checkout`` on a branch. In yt the ``main`` branch is our primary development branch, so checking out ``main`` should return you to the tip (or most up-to-date revision specifier) on the ``main`` branch.

.. code-block:: bash

   $ git checkout main

Lastly, if you want to use this new downloaded version of your yt repository as the *active* version of yt on your computer (i.e. the one which is executed when you run yt from the command line or the one that is loaded when you do ``import yt``), then you must "activate" it by building yt from source as described in :ref:`install-from-source`.

.. _reading-source:

How To Read The Source Code
+++++++++++++++++++++++++++

The root directory of the yt git repository contains a number of subdirectories with different components of the code. Most of the yt source code is contained in the ``yt`` subdirectory. This directory itself contains the following subdirectories:

``frontends``
   This is where interfaces to codes are created. Within each subdirectory of ``yt/frontends/`` there must exist the following files, even if empty:

   * ``data_structures.py``, where subclasses of AMRGridPatch, Dataset and AMRHierarchy are defined.
   * ``io.py``, where a subclass of IOHandler is defined.
   * ``fields.py``, where fields we expect to find in datasets are defined.
   * ``misc.py``, where any miscellaneous functions or classes are defined.
   * ``definitions.py``, where any definitions specific to the frontend are defined (i.e., header formats, etc.).

``fields``
   This is where all of the derived fields that ship with yt are defined.

``geometry``
   This is where geometric helper routines are defined. Handlers for grid and oct data, as well as helpers for coordinate transformations, can be found here.

``visualization``
   This is where all visualization modules are stored. This includes plot collections, the volume rendering interface, and pixelization frontends.

``data_objects``
   All objects that handle data, processed or unprocessed, not explicitly defined as visualization are located in here.
   This includes the base classes for data regions, covering grids, time series, and so on. This also includes derived fields and derived quantities.

``units``
   This used to be where all the unit-handling code resided, but as of now it's mostly just a thin wrapper around unyt.

``utilities``
   All broadly useful code that doesn't clearly fit in one of the other categories goes here.

If you're looking for a specific file or function in the yt source code, use the unix find command:

.. code-block:: bash

   $ find <DIRECTORY_TREE_TO_SEARCH> -name '<FILENAME>'

The above command will find the ``<FILENAME>`` in any subdirectory in the ``<DIRECTORY_TREE_TO_SEARCH>``. Alternatively, if you're looking for a function call or a keyword in an unknown file in a directory tree, try:

.. code-block:: bash

   $ grep -R <KEYWORD_OR_FUNCTION> <DIRECTORY_TREE_TO_SEARCH>

This can be very useful for tracking down functions in the yt source.

.. _building-yt:

Building yt
+++++++++++

If you have made changes to any C or Cython (``.pyx``) modules, you have to rebuild yt before your changes are usable. See :ref:`install-from-source`.

.. _requirements-for-code-submission:

Requirements for Code Submission
--------------------------------

Modifications to the code typically fall into one of three categories, each of which have different requirements for acceptance into the code base. These requirements are in place for a few reasons -- to make sure that the code is maintainable, testable, and that we can easily include information about changes in changelogs during the release procedure. (See YTEP-0008 for more detail.)

For all types of contributions, it is required that all tests pass, or that all non-passing tests are specifically accounted for.

* New Features

  * New unit tests (possibly new answer tests) (See :ref:`testing`)
  * Docstrings in the source code for the public API
  * Addition of new feature to the narrative documentation (See :ref:`writing_documentation`)
  * Addition of cookbook recipe (See :ref:`writing_documentation`)

* Extension or Breakage of API in Existing Features

  * Update existing narrative docs and docstrings (See :ref:`writing_documentation`)
  * Update existing cookbook recipes (See :ref:`writing_documentation`)
  * Modify or create new unit tests (See :ref:`testing`)

* Bug fixes

  * A unit test is encouraged, to ensure breakage does not happen again in the future. (See :ref:`testing`)
  * At a minimum, a minimal, self-contained example demonstrating the bug should be included in the body of the pull request, or as part of an independent issue.

When submitting, you will be asked to make sure that your changes meet all of these requirements. They are pretty easy to meet, and we're also happy to help out with them. See :ref:`code-style-guide` for how to easily conform to our style guide.

.. _git-with-yt:

How to Use git with yt
----------------------

If you're new to git, the following resource is pretty great for learning the ins and outs:

* https://git-scm.com/

There also exist a number of step-by-step git tutorials to help you get used to version controlling files with git. Here are a few resources that you may find helpful:

* http://swcarpentry.github.io/git-novice/
* https://git-scm.com/docs/gittutorial
* https://try.github.io/

The commands that are essential for using git include:

* ``git --help`` which provides help for any git command. For example, you can learn more about the ``log`` command by doing ``git log --help``.
* ``git add <paths>`` which stages changes to the specified paths for subsequent committing (see below).
* ``git commit`` which commits staged changes (staged using ``git add`` as above) in the working directory to the repository, creating a new "revision."
* ``git merge <branchname>`` which merges the revisions from the specified branch into the current branch, creating a union of their lines of development. This updates the working directory.
* ``git pull <remote> <branchname>`` which pulls revisions from the specified branch of the specified remote repository into the current local branch. Equivalent to ``git fetch <remote>`` and then ``git merge <remote>/<branchname>``. This updates the working directory.
* ``git push <remote>`` which sends revisions on local branches to matching branches on the specified remote. ``git push <remote> <branchname>`` will only push changes for the specified branch.
* ``git log`` which shows a log of all revisions on the current branch. There are many options you can pass to ``git log`` to get additional information. One example is ``git log --oneline --decorate --graph --all``.

We are happy to answer questions about git use on our IRC, Slack chat or on the mailing list to walk you through any troubles you might have. Here are some general suggestions for using git with yt:

* Although not necessary, a common development work flow is to create a local named branch other than ``main`` to address a feature request or bugfix. If the dev work addresses a specific yt GitHub issue, you may include that issue number in the branch name. For example, if you want to work on issue number X regarding a cool new slice plot feature, you might name the branch: ``cool_new_plot_feature_X``. When you're ready to share your work, push your feature branch to your remote and create a pull request to the ``main`` branch of the yt-project's repository.
* When contributing changes, you might be asked to make a handful of modifications to your source code. We'll work through how to do this with you, and try to make it as painless as possible.
* Your test may fail automated style checks. See :ref:`code-style-guide` for more information about automatically verifying your code style.
* You should only need one fork. To keep it in sync, you can sync from the website. See :ref:`sharing-changes` for a description of the basic workflow and :ref:`multiple-PRs` for a discussion about what to do when you want to have multiple open pull requests at the same time.
* If you run into any troubles, stop by IRC (see :ref:`irc`), Slack, or the mailing list.

.. _sharing-changes:

Making and Sharing Changes
--------------------------

The simplest way to submit changes to yt is to do the following:

* Build yt from the git repository
* Navigate to the root of the yt repository
* Make some changes and commit them
* Fork the `yt repository on GitHub <https://github.com/yt-project/yt>`_
* Push the changesets to your fork
* Issue a pull request.

Here's a more detailed flowchart of how to submit changes.

#. Fork yt on GitHub. (This step only has to be done once.) You can do this at: https://github.com/yt-project/yt/fork.
#. Follow :ref:`install-from-source` for instructions on how to build yt from the git repository. (Below, in :ref:`reading-source`, we describe how to find items of interest.) If you have already forked the repository then you can clone your fork locally::

      git clone https://github.com/<USERNAME>/yt ./yt-git

   This will create a local clone of your fork of yt in a folder named ``yt-git``.
#. Edit the source file you are interested in and test your changes. (See :ref:`testing` for more information.)
#. Create a uniquely named branch to track your work. For example: ``git checkout -b my-first-pull-request``
#. Stage your changes using ``git add <paths>``. This command takes an argument which is a series of filenames whose changes you want to commit. After staging, execute ``git commit -m "<commit message>. Addresses Issue #X"``. Note that supplying an actual GitHub issue number in place of ``X`` will cause your commit to appear in the issue tracker after pushing to your remote. This can be very helpful for others who are interested in what work is being done in connection to that issue.
#. Remember that this is a large development effort and to keep the code accessible to everyone, good documentation is a must. Add in source code comments for what you are doing. Add in docstrings if you are adding a new function or class or keyword to a function. Add documentation to the appropriate section of the online docs so that people other than yourself know how to use your new code.
#. If your changes include new functionality or cover an untested area of the code, add a test. (See :ref:`testing` for more information.) Commit these changes as well.
#. Add your remote repository with a unique name identifier. It can be anything but it is conventional to call it ``origin``. You can see names and URLs of all the remotes you currently have configured with::

      git remote -v

   If you already have an ``origin`` remote, you can set it to your fork with::

      git remote set-url origin https://github.com/<USERNAME>/yt

   If you do not have an ``origin`` remote you will need to add it::

      git remote add origin https://github.com/<USERNAME>/yt

   In addition, it is also useful to add a remote for the main yt repository. By convention we name this remote ``upstream``::

      git remote add upstream https://github.com/yt-project/yt

   Note that if you forked the yt repository on GitHub and then cloned from there you will not need to add the ``origin`` remote.
#. Push your changes to your remote fork using the unique identifier you just created and the command::

      git push origin my-first-pull-request

   where you should substitute the name of the feature branch you are working on for ``my-first-pull-request``.

   .. note:: Note that the above approach uses HTTPS as the transfer protocol between your machine and GitHub. If you prefer to use SSH - or perhaps you're behind a proxy that doesn't play well with SSL via HTTPS - you may want to set up an `SSH key`_ on GitHub. Then, you use the syntax ``ssh://git@github.com/<USERNAME>/yt``, or equivalent, in place of ``https://github.com/<USERNAME>/yt`` in git commands. For consistency, all commands we list in this document will use the HTTPS protocol.

      .. _SSH key: https://help.github.com/en/articles/connecting-to-github-with-ssh/

#. Issue a pull request at https://github.com/yt-project/yt/pull/new/main. A pull request is essentially just asking people to review and accept the modifications you have made to your personal version of the code.

During the course of your pull request you may be asked to make changes. These changes may be related to style issues, correctness issues, or requesting tests. The process for responding to pull request code review is relatively straightforward.

#. Make requested changes, or leave a comment indicating why you don't think they should be made.
#. Commit those changes to your local repository.
#. Push the changes to your fork::

      git push origin my-first-pull-request

#. Your pull request will be automatically updated.
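If ``main`` moves forward while your pull request is open, you may also be asked to bring your branch up to date. A minimal sketch, assuming the ``upstream`` remote from the steps above is configured:

.. code-block:: bash

   # replay the feature branch on top of the latest upstream main
   $ git fetch upstream
   $ git rebase upstream/main my-first-pull-request
   # rebasing rewrites history, so the next push must be forced;
   # --force-with-lease refuses to overwrite work you don't have locally
   $ git push --force-with-lease origin my-first-pull-request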
Once your pull request is merged, sync up with the main yt repository by pulling from the ``upstream`` remote::

   git checkout main
   git pull upstream main

You might also want to sync your fork of yt on GitHub::

   # sync my fork of yt with upstream
   git push origin main

And delete the branch for the merged pull request::

   # delete branch for merged pull request
   git branch -d my-first-pull-request
   git push origin --delete my-first-pull-request

These commands are optional but are nice for keeping your branch list manageable. You can also delete the branch on your fork of yt on GitHub by clicking the "delete branch" button on the page for the merged pull request on GitHub.

.. _multiple-PRs:

Working with Multiple GitHub Pull Requests
------------------------------------------

Dealing with multiple pull requests on GitHub is straightforward. Development on one feature should be isolated in one named branch, say ``feature_1``, while development of another feature should be in another named branch, say ``feature_2``. A push to remote ``feature_1`` will automatically update any active PR for which ``feature_1`` is a pointer to the ``HEAD`` commit. A push to ``feature_1`` *will not* update any pull requests involving ``feature_2``.

.. _code-style-guide:

Coding Style Guide
==================

Automatically checking and fixing code style
--------------------------------------------

We use the `pre-commit <https://pre-commit.com/>`_ framework to validate and automatically fix code styling. It is recommended (though not required) that you install ``pre-commit`` on your machine (see their documentation) and, from the top level of the repo, run

.. code-block:: bash

   $ pre-commit install

so that our hooks will run and update your changes on every commit. If you do not want to/are unable to configure ``pre-commit`` on your machine, note that after opening a pull request, it will still be run as a static checker as part of our CI. Some hooks also come with auto-fixing capabilities, which you can trigger manually in a PR by commenting ``pre-commit.ci autofix``.

We use a combination of ``black``, ``ruff`` and ``cython-lint``. See ``.pre-commit-config.yaml`` and ``pyproject.toml`` for the complete configuration details.

Note that formatters should not be run directly on the command line, as in, for instance:

.. code-block:: bash

   $ black yt

The same result can instead be obtained with:

.. code-block:: bash

   $ pre-commit run black --all-files

The reason is that you may have a specific version of ``black`` installed which can produce different results, while the one that's installed with pre-commit is guaranteed to be in sync with the rest of the contributors.

Below is a list of additional guidelines for coding in yt that are not automatically enforced.

Source code style guide
-----------------------

* In general, follow PEP-8 guidelines: https://www.python.org/dev/peps/pep-0008/
* Classes are ``ConjoinedCapitals``, methods and functions are ``lowercase_with_underscores``.
* Do not use nested classes unless you have a very good reason to, such as requiring a namespace or class-definition modification. Classes should live at the top level. ``__metaclass__`` is exempt from this.
* Avoid copying memory when possible. For example, don't do ``a = a.reshape(3, 4)`` when ``a.shape = (3, 4)`` will do, and ``a = a * 3`` should be ``np.multiply(a, 3, a)``.
* In general, avoid all double-underscore method names: ``__something`` is usually unnecessary.
* When writing a subclass, use the super built-in to access the superclass, rather than naming it explicitly.
  Ex: ``super().__init__()`` rather than ``SpecialGrid.__init__()``.
* Docstrings should describe input, output, behavior, and any state changes that occur on an object. See :ref:`docstrings` below for a fiducial example of a docstring.
* Unless there is a good reason not to (e.g., to avoid circular imports), imports should happen at the top of the file.
* If you are comparing with a numpy boolean array, just refer to the array. Ex: do ``np.all(array)`` instead of ``np.all(array == True)``.
* Only declare local variables if they will be used later. If you do not use the return value of a function, do not store it in a variable.

API Style Guide
---------------

* Internally, only import from source files directly -- instead of ``from yt.visualization.api import ProjectionPlot``, do ``from yt.visualization.plot_window import ProjectionPlot``.
* Import symbols from the module where they are defined; avoid transitive imports.
* Import standard library modules, functions, and classes from builtins; do not import them from other yt files.
* NumPy is to be imported as ``np``.
* Do not use too many keyword arguments. If you have a lot of keyword arguments, then you are doing too much in ``__init__`` and not enough via parameter setting.
* Don't create a new class to replicate the functionality of an old class -- replace the old class. Too many options makes for a confusing user experience.
* Parameter files external to yt are a last resort.
* The usage of the ``**kwargs`` construction should be avoided. If they cannot be avoided, they must be documented, even if they are only to be passed on to a nested function.

.. _docstrings:

Docstrings
----------

The following is an example docstring. You can use it as a template for docstrings in your code and as a guide for how we expect docstrings to look and the level of detail we are looking for. Note that we use NumPy style docstrings written in Sphinx reStructuredText format.

.. code-block:: rest

   r"""A one-line summary that does not use variable names or the function
   name.

   Several sentences providing an extended description. Refer to
   variables using back-ticks, e.g. ``var``.

   Parameters
   ----------
   var1 : array_like
       Array_like means all those objects -- lists, nested lists, etc. --
       that can be converted to an array. We can also refer to
       variables like ``var1``.
   var2 : int
       The type above can either refer to an actual Python type
       (e.g. ``int``), or describe the type of the variable in more
       detail, e.g. ``(N,) ndarray`` or ``array_like``.
   Long_variable_name : {'hi', 'ho'}, optional
       Choices in brackets, default first when optional.

   Returns
   -------
   describe : type
       Explanation
   output : type
       Explanation
   tuple : type
       Explanation
   items : type
       even more explaining

   Other Parameters
   ----------------
   only_seldom_used_keywords : type
       Explanation
   common_parameters_listed_above : type
       Explanation

   Raises
   ------
   BadException
       Because you shouldn't have done that.

   See Also
   --------
   otherfunc : relationship (optional)
   newfunc : Relationship (optional), which could be fairly long, in which
             case the line wraps here.
   thirdfunc, fourthfunc, fifthfunc

   Notes
   -----
   Notes about the implementation algorithm (if needed).

   This can have multiple paragraphs.

   You may include some math:

   .. math:: X(e^{j\omega } ) = x(n)e^{ - j\omega n}

   And even use a greek symbol like :math:`\omega` inline.

   References
   ----------
   Cite the relevant literature, e.g. [1]_. You may also cite these
   references in the notes section above.
McNoleg, "The integration of GIS, remote sensing, expert systems and adaptive co-kriging for environmental habitat modelling of the Highland Haggis using object-oriented, fuzzy-logic and neural-network techniques," Computers & Geosciences, vol. 22, pp. 585-588, 1996. Examples -------- These are written in doctest format, and should illustrate how to use the function. Use the variables 'ds' for the dataset, 'pc' for a plot collection, 'c' for a center, and 'L' for a vector. >>> a = [1, 2, 3] >>> print([x + 3 for x in a]) [4, 5, 6] >>> print("a\n\nb") a b """ Variable Names and Enzo-isms ---------------------------- Avoid Enzo-isms. This includes but is not limited to: * Hard-coding parameter names that are the same as those in Enzo. The following translation table should be of some help. Note that the parameters are now properties on a ``Dataset`` subclass: you access them like ds.refine_by . - ``RefineBy `` => `` refine_by`` - ``TopGridRank `` => `` dimensionality`` - ``TopGridDimensions `` => `` domain_dimensions`` - ``InitialTime `` => `` current_time`` - ``DomainLeftEdge `` => `` domain_left_edge`` - ``DomainRightEdge `` => `` domain_right_edge`` - ``CurrentTimeIdentifier `` => `` unique_identifier`` - ``CosmologyCurrentRedshift `` => `` current_redshift`` - ``ComovingCoordinates `` => `` cosmological_simulation`` - ``CosmologyOmegaMatterNow `` => `` omega_matter`` - ``CosmologyOmegaLambdaNow `` => `` omega_lambda`` - ``CosmologyHubbleConstantNow `` => `` hubble_constant`` * Do not assume that the domain runs from 0 .. 1. This is not true everywhere. * Variable names should be short but descriptive. * No globals! ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/COPYING.txt0000644000175100001770000000653214714401662013444 0ustar00runnerdocker=============================== The yt project licensing terms =============================== yt is licensed under the terms of the Modified BSD License (also known as New or Revised BSD), as follows: Copyright (c) 2013-, yt Development Team Copyright (c) 2006-2013, Matthew Turk All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of the yt Development Team nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
About the yt Development Team
-----------------------------

Matthew Turk began yt in 2006 and remains the project lead. Over time yt has grown to include contributions from a large number of individuals from many diverse institutions, scientific, and technical backgrounds.

Until the fall of 2013, yt was licensed under the GPLv3. However, with consent from all developers and on a public mailing list, yt has been relicensed under the BSD 3-clause under a shared copyright model. For more information, see: https://mail.python.org/archives/list/yt-dev@python.org/thread/G4DJDDGB4PSZFJVPWRSHNOSUMTISXC4X/

All versions of yt prior to this licensing change are available under the GPLv3; all subsequent versions are available under the BSD 3-clause license.

The yt Development Team is the set of all contributors to the yt project. This includes all of the yt subprojects. The core team that coordinates development can be found on GitHub: https://github.com/yt-project/

Our Copyright Policy
--------------------

yt uses a shared copyright model. Each contributor maintains copyright over their contributions to yt. But, it is important to note that these contributions are typically only changes to the repositories. Thus, the yt source code, in its entirety, is not the copyright of any single person or institution. Instead, it is the collective copyright of the entire yt Development Team. If individual contributors want to maintain a record of what changes/contributions they have specific copyright on, they should indicate their copyright in the commit message of the change, when they commit the change to one of the yt repositories.

yt-4.4.0/CREDITS

yt is a group effort.
Contributors:

Tom Abel (tabel@slac.stanford.edu)
Gabriel Altay (gabriel.altay@gmail.com)
Kenza Arraki (karraki@nmsu.edu)
Kirk Barrow (kssbarrow@gatech.edu)
Ricarda Beckmann (ricarda.beckmann@astro.ox.ac.uk)
Christoph Behrens (cbehren2@gwdu101.global.gwdg.cluster)
Elliott Biondo (biondo@wisc.edu)
Alex Bogert (fbogert@ucsc.edu)
Josh Borrow (joshua.borrow@durham.ac.uk)
Robert Bradshaw (robertwb@gmail.com)
André-Patrick Bubel (code@andre-bubel.de)
Corentin Cadiou (corentin.cadiou@iap.fr)
Pengfei Chen (madcpf@gmail.com)
Yi-Hao Chen (ychen@astro.wisc.edu)
Yi-Hao Chen (yihaochentw@gmail.com)
Salvatore Cielo (cielo@iap.fr)
David Collins (dcollins4096@gmail.com)
Marianne Corvellec (marianne.corvellec@ens-lyon.org)
Jared Coughlin (jcoughl2@nd.edu)
Brian Crosby (bcrosby.bd@gmail.com)
Weiguang Cui (weiguang.cui@uwa.edu.au)
Andrew Cunningham (ajcunn@gmail.com)
Bili Dong (qobilidop@gmail.com)
Donald E Willcox (eugene.willcox@gmail.com)
Nicholas Earl (nchlsearl@gmail.com)
Hilary Egan (hilaryye@gmail.com)
Daniel Fenn (df11c@my.fsu.edu)
John Forbes (jcforbes@ucsc.edu)
Enrico Garaldi (egaraldi@uni-bonn.de)
Sam Geen (samgeen@gmail.com)
Austin Gilbert (agilbert39@gatech.edu)
Adam Ginsburg (keflavich@gmail.com)
Nick Gnedin (ngnedin@gmail.com)
Nathan Goldbaum (ngoldbau@illinois.edu)
William Gray (graywilliamj@gmail.com)
Philipp Grete (mail@pgrete.de)
Max Gronke (max.groenke@gmail.com)
Markus Haider (markus.haider@uibk.ac.at)
Eric Hallman (hallman13@gmail.com)
David Hannasch (David.A.Hannasch@gmail.com)
Stephanie Ho (stephaniehkho@gmail.com)
Axel Huebl (a.huebl@hzdr.de)
Cameron Hummels (chummels@gmail.com)
Suoqing Ji (jisuoqing@gmail.com)
Allyson Julian (astrohckr@gmail.com)
Anni Järvenpää (anni.jarvenpaa@gmail.com)
Christian Karch (chiffre@posteo.de)
Max Katz (maximilian.katz@stonybrook.edu)
BW Keller (kellerbw@mcmaster.ca)
Ashley Kelly (a.j.kelly@durham.ac.uk)
Chang-Goo Kim (changgoo@princeton.edu)
Ji-hoon Kim (me@jihoonkim.org)
Steffen Klemer (sklemer@phys.uni-goettingen.de)
Fabian Koller (anokfireball@posteo.de)
Claire Kopenhafer (clairekope@gmail.com)
Kacper Kowalik (xarthisius.kk@gmail.com)
Matthew Krafczyk (krafczyk.matthew@gmail.com)
Mark Krumholz (mkrumhol@ucsc.edu)
Michael Kuhlen (mqk@astro.berkeley.edu)
Avik Laha (al3510@moose.cc.columbia.edu)
Meagan Lang (langmm.astro@gmail.com)
Erwin Lau (ethlau@gmail.com)
Doris Lee (dorislee@berkeley.edu)
Eve Lee (elee@cita.utoronto.ca)
Sam Leitner (sam.leitner@gmail.com)
Yuan Li (yuan@astro.columbia.edu)
Alex Lindsay (al007@illinois.edu)
Yingchao Lu (yingchao.lu@gmail.com)
Yinghe Lu (yinghelu@lbl.gov)
Chris Malone (chris.m.malone@gmail.com)
John McCann (mccann@ucsb.edu)
Jonah Miller (jonah.maxwell.miller@gmail.com)
Joshua Moloney (joshua.moloney@colorado.edu)
Christopher Moody (cemoody@ucsc.edu)
Chris Moody (juxtaposicion@gmail.com)
Stuart Mumford (stuart@mumford.me.uk)
Madicken Munk (madicken.munk@gmail.com)
Andrew Myers (atmyers2@gmail.com)
Jill Naiman (jnaiman@cfa.harvard.edu)
Desika Narayanan (dnarayan@haverford.edu)
Kaylea Nelson (kaylea.nelson@yale.edu)
Brian O'Shea (oshea@msu.edu)
J.S. Oishi (jsoishi@gmail.com)
JC Passy (jcpassy@uvic.ca)
Hugo Pfister (pfister@loginhz02.iap.fr)
David Pérez-Suárez (dps.helio@gmail.com)
John Regan (john.regan@helsinki.fi)
Mark Richardson (Mark.Richardson.Work@gmail.com)
Sherwood Richers (srichers@tapir.caltech.edu)
Thomas Robitaille (thomas.robitaille@gmail.com)
Anna Rosen (rosen@ucolick.org)
Chuck Rozhon (rozhon2@illinois.edu)
Douglas Rudd (drudd@uchicago.edu)
Rafael Ruggiero (rafael.ruggiero@usp.br)
Hsi-Yu Schive (hyschive@gmail.com)
Anthony Scopatz (scopatz@gmail.com)
Noel Scudder (noel.scudder@stonybrook.edu)
Patrick Shriwise (shriwise@wisc.edu)
Devin Silvia (devin.silvia@gmail.com)
Abhishek Singh (abhisheksing@umass.edu)
Sam Skillman (samskillman@gmail.com)
Stephen Skory (s@skory.us)
Joseph Smidt (josephsmidt@gmail.com)
Aaron Smith (asmith@astro.as.utexas.edu)
Britton Smith (brittonsmith@gmail.com)
Geoffrey So (gsiisg@gmail.com)
Josh Soref (jsoref@users.noreply.github.com)
Antoine Strugarek (antoine.strugarek@cea.fr)
Elizabeth Tasker (tasker@astro1.sci.hokudai.ac.jp)
Ben Thompson (bthompson2090@gmail.com)
Benjamin Thompson (bthompson2090@gmail.com)
Robert Thompson (rthompsonj@gmail.com)
Joseph Tomlinson (jmtomlinson95@gmail.com)
Stephanie Tonnesen (stonnes@gmail.com)
Matthew Turk (matthewturk@gmail.com)
Miguel de Val-Borro (miguel.deval@gmail.com)
Kausik Venkat (kvenkat2@illinois.edu)
Casey W. Stark (caseywstark@gmail.com)
Rick Wagner (rwagner@physics.ucsd.edu)
Mike Warren (mswarren@gmail.com)
Charlie Watson (charlie.watson95@gmail.com)
Andrew Wetzel (andrew.wetzel@yale.edu)
John Wise (jwise@physics.gatech.edu)
Michael Zingale (michael.zingale@stonybrook.edu)
John ZuHone (jzuhone@gmail.com)

The PasteBin interface code (as well as the PasteBin itself) was written by the Pocoo collective (pocoo.org). The RamsesRead++ library used for reading RAMSES data was developed by Oliver Hahn.

Thanks to everyone for all your contributions!
yt-4.4.0/MANIFEST.in

include README* CREDITS COPYING.txt CITATION setupext.py CONTRIBUTING.rst
include yt/visualization/mapserver/html/map.js
include yt/visualization/mapserver/html/map_index.html
include yt/visualization/mapserver/html/Leaflet.Coordinates-0.1.5.css
include yt/visualization/mapserver/html/Leaflet.Coordinates-0.1.5.src.js
include yt/utilities/tests/cosmology_answers.yml
include yt/utilities/mesh_types.yaml
exclude yt/utilities/lib/cykdtree/c_kdtree.cpp
prune tests
prune answer-store
recursive-include yt *.py *.pyx *.pxi *.pxd README* *.txt LICENSE* *.cu
recursive-include doc *.rst *.txt *.py *.ipynb *.png *.jpg *.css *.html
recursive-include doc *.h *.c *.sh *.svgz *.pdf *.svg *.pyx
# start with excluding all C/C++ files
recursive-exclude yt *.h *.c *.hpp *.cpp
# then include back every non-generated C/C++ source file
# the list can be generated by the following command
# git ls-files | grep -E '\.(h|c)(pp)?$'
include yt/frontends/artio/artio_headers/artio.c
include yt/frontends/artio/artio_headers/artio.h
include yt/frontends/artio/artio_headers/artio_endian.c
include yt/frontends/artio/artio_headers/artio_endian.h
include yt/frontends/artio/artio_headers/artio_file.c
include yt/frontends/artio/artio_headers/artio_grid.c
include yt/frontends/artio/artio_headers/artio_internal.h
include yt/frontends/artio/artio_headers/artio_mpi.c
include yt/frontends/artio/artio_headers/artio_mpi.h
include yt/frontends/artio/artio_headers/artio_parameter.c
include yt/frontends/artio/artio_headers/artio_particle.c
include yt/frontends/artio/artio_headers/artio_posix.c
include yt/frontends/artio/artio_headers/artio_selector.c
include yt/frontends/artio/artio_headers/artio_sfc.c
include yt/frontends/artio/artio_headers/cosmology.c
include yt/frontends/artio/artio_headers/cosmology.h
include yt/geometry/vectorized_ops.h
include yt/utilities/lib/_octree_raytracing.hpp
include yt/utilities/lib/cykdtree/c_kdtree.cpp
include yt/utilities/lib/cykdtree/c_kdtree.hpp
include yt/utilities/lib/cykdtree/c_utils.cpp
include yt/utilities/lib/cykdtree/c_utils.hpp
include yt/utilities/lib/cykdtree/windows/stdint.h
include yt/utilities/lib/endian_swap.h
include yt/utilities/lib/fixed_interpolator.cpp
include yt/utilities/lib/fixed_interpolator.hpp
include yt/utilities/lib/marching_cubes.h
include yt/utilities/lib/mesh_triangulation.h
include yt/utilities/lib/origami_tags.c
include yt/utilities/lib/origami_tags.h
include yt/utilities/lib/pixelization_constants.cpp
include yt/utilities/lib/pixelization_constants.hpp
include yt/utilities/lib/platform_dep.h
include yt/utilities/lib/platform_dep_math.hpp
include yt/utilities/lib/tsearch.c
include yt/utilities/lib/tsearch.h
include doc/README doc/activate doc/activate.csh doc/cheatsheet.tex
exclude doc/cheatsheet.pdf
include doc/extensions/README doc/Makefile
prune doc/source/reference/api/generated
prune doc/build
prune .tours
recursive-include yt/visualization/volume_rendering/shaders *.fragmentshader *.vertexshader
include yt/sample_data_registry.json
include conftest.py
include yt/py.typed
include yt/default.mplstyle
prune yt/frontends/_skeleton
recursive-include yt/frontends/amrvac *.par
recursive-exclude requirements *.txt
exclude minimal_requirements.txt
exclude .codecov.yml .coveragerc .git-blame-ignore-revs .gitmodules .hgchurn .mailmap
exclude .pre-commit-config.yaml clean.sh
exclude nose_answer.cfg nose_unit.cfg nose_ignores.txt nose_requirements.txt

yt-4.4.0/PKG-INFO

Metadata-Version: 2.1
Name: yt
Version: 4.4.0
Summary: An analysis and visualization toolkit for volumetric data
Author-email: The yt project <yt-dev@python.org>
License: BSD 3-Clause
Project-URL: Homepage, https://yt-project.org/
Project-URL: Documentation, https://yt-project.org/doc/
Project-URL: Source, https://github.com/yt-project/yt/
Project-URL: Tracker, https://github.com/yt-project/yt/issues
Keywords: astronomy astrophysics visualization amr adaptivemeshrefinement
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: Console
Classifier: Framework :: Matplotlib
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: BSD License
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Operating System :: POSIX :: AIX
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: C
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Astronomy
Classifier: Topic :: Scientific/Engineering :: Physics
Classifier: Topic :: Scientific/Engineering :: Visualization
Requires-Python: >=3.10.3
Description-Content-Type: text/markdown
License-File: COPYING.txt
Requires-Dist: cmyt>=1.1.2
Requires-Dist: ewah-bool-utils>=1.2.0
Requires-Dist: matplotlib>=3.5
Requires-Dist: more-itertools>=8.4
Requires-Dist: numpy<3,>=1.21.3
Requires-Dist: numpy!=2.0.1; platform_machine == "arm64" and platform_system == "Darwin"
Requires-Dist: packaging>=20.9
Requires-Dist: pillow>=8.3.2
Requires-Dist: tomli-w>=0.4.0
Requires-Dist: tqdm>=3.4.0
Requires-Dist: unyt>=2.9.2
Requires-Dist: tomli>=1.2.3; python_version < "3.11"
Requires-Dist: typing-extensions>=4.4.0; python_version < "3.12"
Provides-Extra: hdf5
Requires-Dist: h5py!=3.12.0,>=3.1.0; platform_system == "Windows" and extra == "hdf5"
Provides-Extra: netcdf4
Requires-Dist: netCDF4!=1.6.1,>=1.5.3; extra == "netcdf4"
Provides-Extra: fortran
Requires-Dist: f90nml>=1.1; extra == "fortran"
Provides-Extra: adaptahop
Provides-Extra: ahf
Provides-Extra: amrex
Provides-Extra: amrvac
Requires-Dist: yt[Fortran]; extra == "amrvac"
Provides-Extra: art
Provides-Extra: arepo
Requires-Dist: yt[HDF5]; extra == "arepo"
Provides-Extra: artio
Provides-Extra: athena
Provides-Extra: athena-pp
Provides-Extra: boxlib
Provides-Extra: cf-radial
Requires-Dist: xarray>=0.16.1; extra == "cf-radial"
Requires-Dist: arm-pyart>=1.19.2; extra == "cf-radial"
Provides-Extra: chimera
Requires-Dist: yt[HDF5]; extra == "chimera"
Provides-Extra: chombo
Requires-Dist: yt[HDF5]; extra == "chombo"
Provides-Extra: cholla
Requires-Dist: yt[HDF5]; extra == "cholla"
Provides-Extra: eagle
Requires-Dist: yt[HDF5]; extra == "eagle"
Provides-Extra: enzo-e
Requires-Dist: yt[HDF5]; extra == "enzo-e"
Requires-Dist: libconf>=1.0.1; extra == "enzo-e"
Provides-Extra: enzo
Requires-Dist: yt[HDF5]; extra == "enzo"
Requires-Dist: libconf>=1.0.1; extra == "enzo"
Provides-Extra: exodus-ii
Requires-Dist: yt[netCDF4]; extra == "exodus-ii"
Provides-Extra: fits
Requires-Dist: astropy>=4.0.1; extra == "fits"
Requires-Dist: regions>=0.7; extra == "fits"
Provides-Extra: flash
Requires-Dist: yt[HDF5]; extra == "flash"
Provides-Extra: gadget
Requires-Dist: yt[HDF5]; extra == "gadget"
Provides-Extra: gadget-fof
Requires-Dist: yt[HDF5]; extra == "gadget-fof"
Provides-Extra: gamer
Requires-Dist: yt[HDF5]; extra == "gamer"
Provides-Extra: gdf
Requires-Dist: yt[HDF5]; extra == "gdf"
Provides-Extra: gizmo
Requires-Dist: yt[HDF5]; extra == "gizmo"
Provides-Extra: halo-catalog
Requires-Dist: yt[HDF5]; extra == "halo-catalog"
Provides-Extra: http-stream
Requires-Dist: requests>=2.20.0; extra == "http-stream"
Provides-Extra: idefix
Requires-Dist: yt_idefix[HDF5]>=2.3.0; extra == "idefix"
Provides-Extra: moab
Requires-Dist: yt[HDF5]; extra == "moab"
Provides-Extra: nc4-cm1
Requires-Dist: yt[netCDF4]; extra == "nc4-cm1"
Provides-Extra: open-pmd
Requires-Dist: yt[HDF5]; extra == "open-pmd"
Provides-Extra: owls
Requires-Dist: yt[HDF5]; extra == "owls"
Provides-Extra: owls-subfind
Requires-Dist: yt[HDF5]; extra == "owls-subfind"
Provides-Extra: parthenon
Requires-Dist: yt[HDF5]; extra == "parthenon"
Provides-Extra: ramses
Requires-Dist: yt[Fortran]; extra == "ramses"
Requires-Dist: scipy; extra == "ramses"
Provides-Extra: rockstar
Provides-Extra: sdf
Requires-Dist: requests>=2.20.0; extra == "sdf"
Provides-Extra: stream
Provides-Extra: swift
Requires-Dist: yt[HDF5]; extra == "swift"
Provides-Extra: tipsy
Provides-Extra: ytdata
Requires-Dist: yt[HDF5]; extra == "ytdata"
Provides-Extra: full
Requires-Dist: cartopy>=0.22.0; extra == "full"
Requires-Dist: firefly>=3.2.0; extra == "full"
Requires-Dist: glueviz>=0.13.3; extra == "full"
Requires-Dist: ipython>=7.16.2; extra == "full"
Requires-Dist: ipywidgets>=8.0.0; extra == "full"
Requires-Dist: miniballcpp>=0.2.1; extra == "full"
Requires-Dist: mpi4py>=3.0.3; extra == "full"
Requires-Dist: pandas>=1.1.2; extra == "full"
Requires-Dist: pooch>=0.7.0; extra == "full"
Requires-Dist: pyaml>=17.10.0; extra == "full"
Requires-Dist: pykdtree>=1.3.1; extra == "full"
Requires-Dist: pyx>=0.15; extra == "full"
Requires-Dist: scipy>=1.5.0; extra == "full"
Requires-Dist: glue-core!=1.2.4; python_version >= "3.10" and extra == "full"
Requires-Dist: ratarmount~=0.8.1; (platform_system != "Windows" and platform_system != "Darwin") and extra == "full"
Requires-Dist: yt[adaptahop]; extra == "full"
Requires-Dist: yt[ahf]; extra == "full"
Requires-Dist: yt[amrex]; extra == "full"
Requires-Dist: yt[amrvac]; extra == "full"
Requires-Dist: yt[art]; extra == "full"
Requires-Dist: yt[arepo]; extra == "full"
Requires-Dist: yt[artio]; extra == "full"
Requires-Dist: yt[athena]; extra == "full"
Requires-Dist: yt[athena_pp]; extra == "full"
Requires-Dist: yt[boxlib]; extra == "full"
Requires-Dist: yt[cf_radial]; extra == "full"
Requires-Dist: yt[chimera]; extra == "full"
Requires-Dist: yt[chombo]; extra == "full"
Requires-Dist: yt[cholla]; extra == "full"
Requires-Dist: yt[eagle]; extra == "full"
Requires-Dist: yt[enzo_e]; extra == "full"
Requires-Dist: yt[enzo]; extra == "full"
Requires-Dist: yt[exodus_ii]; extra == "full"
Requires-Dist: yt[fits]; extra == "full"
Requires-Dist: yt[flash]; extra == "full"
Requires-Dist: yt[gadget]; extra == "full"
Requires-Dist: yt[gadget_fof]; extra == "full"
Requires-Dist: yt[gamer]; extra == "full"
Requires-Dist: yt[gdf]; extra == "full"
Requires-Dist: yt[gizmo]; extra == "full"
Requires-Dist: yt[halo_catalog]; extra == "full"
Requires-Dist: yt[http_stream]; extra == "full"
Requires-Dist: yt[idefix]; extra == "full"
Requires-Dist: yt[moab]; extra == "full"
Requires-Dist: yt[nc4_cm1]; extra == "full"
Requires-Dist: yt[open_pmd]; extra == "full"
Requires-Dist: yt[owls]; extra == "full"
Requires-Dist: yt[owls_subfind]; extra == "full"
Requires-Dist: yt[parthenon]; extra == "full"
Requires-Dist: yt[ramses]; extra == "full"
Requires-Dist: yt[rockstar]; extra == "full"
Requires-Dist: yt[sdf]; extra == "full"
Requires-Dist: yt[stream]; extra == "full"
Requires-Dist: yt[swift]; extra == "full"
Requires-Dist: yt[tipsy]; extra == "full"
Requires-Dist: yt[ytdata]; extra == "full"
Provides-Extra: mapserver
Requires-Dist: bottle; extra == "mapserver"
Provides-Extra: test
Requires-Dist: pyaml>=17.10.0; extra == "test"
Requires-Dist: pytest>=6.1; extra == "test"
Requires-Dist: pytest-mpl>=0.16.1; extra == "test"
Requires-Dist: sympy!=1.10,!=1.9; extra == "test"
Requires-Dist: imageio!=2.35.0; extra == "test"

# The yt Project

[![PyPI](https://img.shields.io/pypi/v/yt)](https://pypi.org/project/yt)
[![Supported Python Versions](https://img.shields.io/pypi/pyversions/yt)](https://pypi.org/project/yt/)
[![Latest Documentation](https://img.shields.io/badge/docs-latest-brightgreen.svg)](http://yt-project.org/docs/dev/)
[![Users' Mailing List](https://img.shields.io/badge/Users-List-lightgrey.svg)](https://mail.python.org/archives/list/yt-users@python.org//)
[![Devel Mailing List](https://img.shields.io/badge/Devel-List-lightgrey.svg)](https://mail.python.org/archives/list/yt-dev@python.org//)
[![Data Hub](https://img.shields.io/badge/data-hub-orange.svg)](https://hub.yt/)
[![Powered by NumFOCUS](https://img.shields.io/badge/powered%20by-NumFOCUS-orange.svg?style=flat&colorA=E1523D&colorB=007D8A)](http://numfocus.org)
[![Sponsor our Project](https://img.shields.io/badge/donate-to%20yt-blueviolet)](https://numfocus.org/donate-to-yt)
[![Build and Test](https://github.com/yt-project/yt/actions/workflows/build-test.yaml/badge.svg)](https://github.com/yt-project/yt/actions/workflows/build-test.yaml)
[![CI (bleeding edge)](https://github.com/yt-project/yt/actions/workflows/bleeding-edge.yaml/badge.svg)](https://github.com/yt-project/yt/actions/workflows/bleeding-edge.yaml)
[![pre-commit.ci status](https://results.pre-commit.ci/badge/github/yt-project/yt/main.svg)](https://results.pre-commit.ci/latest/github/yt-project/yt/main)
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v2.json)](https://github.com/charliermarsh/ruff)

yt is an open-source, permissively-licensed Python library for analyzing and visualizing volumetric data.

yt supports structured, variable-resolution meshes, unstructured meshes, and discrete or sampled data such as particles. Focused on driving physically-meaningful inquiry, yt has been applied in domains such as astrophysics, seismology, nuclear engineering, molecular dynamics, and oceanography. Composed of a friendly community of users and developers, we want to make it easy to use and develop - we'd love it if you got involved!

We've written a [method paper](https://ui.adsabs.harvard.edu/abs/2011ApJS..192....9T) you may be interested in; if you use yt in the preparation of a publication, please consider citing it.

## Code of Conduct

yt abides by a code of conduct partially modified from the PSF code of conduct, and is found [in our contributing guide](http://yt-project.org/docs/dev/developing/developing.html#yt-community-code-of-conduct).
## Installation You can install the most recent stable version of yt either with conda from [conda-forge](https://conda-forge.org/): ```shell conda install -c conda-forge yt ``` or with pip: ```shell python -m pip install yt ``` More information on the various ways to install yt, and in particular to install from source, can be found on [the project's website](https://yt-project.org/docs/dev/installing.html). ## Getting Started yt is designed to provide meaningful analysis of data. We have some Quickstart example notebooks in the repository: * [Introduction](https://github.com/yt-project/yt/tree/main/doc/source/quickstart/1\)_Introduction.ipynb) * [Data Inspection](https://github.com/yt-project/yt/tree/main/doc/source/quickstart/2\)_Data_Inspection.ipynb) * [Simple Visualization](https://github.com/yt-project/yt/tree/main/doc/source/quickstart/3\)_Simple_Visualization.ipynb) * [Data Objects and Time Series](https://github.com/yt-project/yt/tree/main/doc/source/quickstart/4\)_Data_Objects_and_Time_Series.ipynb) * [Derived Fields and Profiles](https://github.com/yt-project/yt/tree/main/doc/source/quickstart/5\)_Derived_Fields_and_Profiles.ipynb) * [Volume Rendering](https://github.com/yt-project/yt/tree/main/doc/source/quickstart/6\)_Volume_Rendering.ipynb) If you'd like to try these online, you can visit our [yt Hub](https://hub.yt/) and run a notebook next to some of our example data. ## Contributing We love contributions! yt is open source, built on open source, and we'd love to have you hang out in our community. We have developed some [guidelines](CONTRIBUTING.rst) for contributing to yt. **Imposter syndrome disclaimer**: We want your help. No, really. There may be a little voice inside your head that is telling you that you're not ready to be an open source contributor; that your skills aren't nearly good enough to contribute. What could you possibly offer a project like this one? We assure you - the little voice in your head is wrong. If you can write code at all, you can contribute code to open source. Contributing to open source projects is a fantastic way to advance one's coding skills. Writing perfect code isn't the measure of a good developer (that would disqualify all of us!); it's trying to create something, making mistakes, and learning from those mistakes. That's how we all improve, and we are happy to help others learn. Being an open source contributor doesn't just mean writing code, either. You can help out by writing documentation, tests, or even giving feedback about the project (and yes - that includes giving feedback about the contribution process). Some of these contributions may be the most valuable to the project as a whole, because you're coming to the project with fresh eyes, so you can see the errors and assumptions that seasoned contributors have glossed over. (This disclaimer was originally written by [Adrienne Lowe](https://github.com/adriennefriend) for a [PyCon talk](https://www.youtube.com/watch?v=6Uj746j9Heo), and was adapted by yt based on its use in the README file for the [MetPy project](https://github.com/Unidata/MetPy)) ## Resources We have some community and documentation resources available. 
* Our latest documentation is always at http://yt-project.org/docs/dev/ and it includes recipes, tutorials, and API documentation * The [discussion mailing list](https://mail.python.org/archives/list/yt-users@python.org//) should be your first stop for general questions * The [development mailing list](https://mail.python.org/archives/list/yt-dev@python.org//) is better suited for more development issues * You can also join us on Slack at yt-project.slack.com ([request an invite](https://yt-project.org/slack.html)) Is your code compatible with yt? Great! Please consider giving us a shoutout as a shiny badge in your README - markdown ```markdown [![yt-project](https://img.shields.io/static/v1?label="works%20with"&message="yt"&color="blueviolet")](https://yt-project.org) ``` - rst ```reStructuredText |yt-project| .. |yt-project| image:: https://img.shields.io/static/v1?label="works%20with"&message="yt"&color="blueviolet" :target: https://yt-project.org ``` ## Powered by NumFOCUS yt is a fiscally sponsored project of [NumFOCUS](https://numfocus.org/). If you're interested in supporting the active maintenance and development of this project, consider [donating to the project](https://numfocus.salsalabs.org/donate-to-yt/index.html). ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/README.md0000644000175100001770000001630014714401662013044 0ustar00runnerdocker# The yt Project [![PyPI](https://img.shields.io/pypi/v/yt)](https://pypi.org/project/yt) [![Supported Python Versions](https://img.shields.io/pypi/pyversions/yt)](https://pypi.org/project/yt/) [![Latest Documentation](https://img.shields.io/badge/docs-latest-brightgreen.svg)](http://yt-project.org/docs/dev/) [![Users' Mailing List](https://img.shields.io/badge/Users-List-lightgrey.svg)](https://mail.python.org/archives/list/yt-users@python.org//) [![Devel Mailing List](https://img.shields.io/badge/Devel-List-lightgrey.svg)](https://mail.python.org/archives/list/yt-dev@python.org//) [![Data Hub](https://img.shields.io/badge/data-hub-orange.svg)](https://hub.yt/) [![Powered by NumFOCUS](https://img.shields.io/badge/powered%20by-NumFOCUS-orange.svg?style=flat&colorA=E1523D&colorB=007D8A)](http://numfocus.org) [![Sponsor our Project](https://img.shields.io/badge/donate-to%20yt-blueviolet)](https://numfocus.org/donate-to-yt) [![Build and Test](https://github.com/yt-project/yt/actions/workflows/build-test.yaml/badge.svg)](https://github.com/yt-project/yt/actions/workflows/build-test.yaml) [![CI (bleeding edge)](https://github.com/yt-project/yt/actions/workflows/bleeding-edge.yaml/badge.svg)](https://github.com/yt-project/yt/actions/workflows/bleeding-edge.yaml) [![pre-commit.ci status](https://results.pre-commit.ci/badge/github/yt-project/yt/main.svg)](https://results.pre-commit.ci/latest/github/yt-project/yt/main) [![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v2.json)](https://github.com/charliermarsh/ruff) yt is an open-source, permissively-licensed Python library for analyzing and visualizing volumetric data. yt supports structured, variable-resolution meshes, unstructured meshes, and discrete or sampled data such as particles. Focused on driving physically-meaningful inquiry, yt has been applied in domains such as astrophysics, seismology, nuclear engineering, molecular dynamics, and oceanography.
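As a first taste of the API, here is a minimal sketch; `yt.load_sample` fetches a public example dataset ("IsolatedGalaxy"), so it assumes network access and the optional `pooch` dependency:

```python
import yt

# Fetch and load a small public sample dataset (downloads on first use)
ds = yt.load_sample("IsolatedGalaxy")

# Slice the domain along the z-axis and render the gas density
slc = yt.SlicePlot(ds, "z", ("gas", "density"))
slc.save("density_slice.png")
```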
Composed of a friendly community of users and developers, we want to make it easy to use and develop - we'd love it if you got involved! We've written a [method paper](https://ui.adsabs.harvard.edu/abs/2011ApJS..192....9T) you may be interested in; if you use yt in the preparation of a publication, please consider citing it. ## Code of Conduct yt abides by a code of conduct partially modified from the PSF code of conduct, and is found [in our contributing guide](http://yt-project.org/docs/dev/developing/developing.html#yt-community-code-of-conduct). ## Installation You can install the most recent stable version of yt either with conda from [conda-forge](https://conda-forge.org/): ```shell conda install -c conda-forge yt ``` or with pip: ```shell python -m pip install yt ``` More information on the various ways to install yt, and in particular to install from source, can be found on [the project's website](https://yt-project.org/docs/dev/installing.html). ## Getting Started yt is designed to provide meaningful analysis of data. We have some Quickstart example notebooks in the repository: * [Introduction](https://github.com/yt-project/yt/tree/main/doc/source/quickstart/1\)_Introduction.ipynb) * [Data Inspection](https://github.com/yt-project/yt/tree/main/doc/source/quickstart/2\)_Data_Inspection.ipynb) * [Simple Visualization](https://github.com/yt-project/yt/tree/main/doc/source/quickstart/3\)_Simple_Visualization.ipynb) * [Data Objects and Time Series](https://github.com/yt-project/yt/tree/main/doc/source/quickstart/4\)_Data_Objects_and_Time_Series.ipynb) * [Derived Fields and Profiles](https://github.com/yt-project/yt/tree/main/doc/source/quickstart/5\)_Derived_Fields_and_Profiles.ipynb) * [Volume Rendering](https://github.com/yt-project/yt/tree/main/doc/source/quickstart/6\)_Volume_Rendering.ipynb) If you'd like to try these online, you can visit our [yt Hub](https://hub.yt/) and run a notebook next to some of our example data. ## Contributing We love contributions! yt is open source, built on open source, and we'd love to have you hang out in our community. We have developed some [guidelines](CONTRIBUTING.rst) for contributing to yt. **Imposter syndrome disclaimer**: We want your help. No, really. There may be a little voice inside your head that is telling you that you're not ready to be an open source contributor; that your skills aren't nearly good enough to contribute. What could you possibly offer a project like this one? We assure you - the little voice in your head is wrong. If you can write code at all, you can contribute code to open source. Contributing to open source projects is a fantastic way to advance one's coding skills. Writing perfect code isn't the measure of a good developer (that would disqualify all of us!); it's trying to create something, making mistakes, and learning from those mistakes. That's how we all improve, and we are happy to help others learn. Being an open source contributor doesn't just mean writing code, either. You can help out by writing documentation, tests, or even giving feedback about the project (and yes - that includes giving feedback about the contribution process). Some of these contributions may be the most valuable to the project as a whole, because you're coming to the project with fresh eyes, so you can see the errors and assumptions that seasoned contributors have glossed over. 
(This disclaimer was originally written by [Adrienne Lowe](https://github.com/adriennefriend) for a [PyCon talk](https://www.youtube.com/watch?v=6Uj746j9Heo), and was adapted by yt based on its use in the README file for the [MetPy project](https://github.com/Unidata/MetPy)) ## Resources We have some community and documentation resources available. * Our latest documentation is always at http://yt-project.org/docs/dev/ and it includes recipes, tutorials, and API documentation * The [discussion mailing list](https://mail.python.org/archives/list/yt-users@python.org//) should be your first stop for general questions * The [development mailing list](https://mail.python.org/archives/list/yt-dev@python.org//) is better suited for more development issues * You can also join us on Slack at yt-project.slack.com ([request an invite](https://yt-project.org/slack.html)) Is your code compatible with yt? Great! Please consider giving us a shoutout as a shiny badge in your README - markdown ```markdown [![yt-project](https://img.shields.io/static/v1?label="works%20with"&message="yt"&color="blueviolet")](https://yt-project.org) ``` - rst ```reStructuredText |yt-project| .. |yt-project| image:: https://img.shields.io/static/v1?label="works%20with"&message="yt"&color="blueviolet" :target: https://yt-project.org ``` ## Powered by NumFOCUS yt is a fiscally sponsored project of [NumFOCUS](https://numfocus.org/). If you're interested in supporting the active maintenance and development of this project, consider [donating to the project](https://numfocus.salsalabs.org/donate-to-yt/index.html). ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/conftest.py0000644000175100001770000004171614714401662013775 0ustar00runnerdockerimport os import shutil import sys import tempfile from importlib.metadata import version from importlib.util import find_spec from pathlib import Path import pytest import yaml from packaging.version import Version from yt.config import ytcfg from yt.utilities.answer_testing.testing_utilities import ( _compare_raw_arrays, _hash_results, _save_raw_arrays, _save_result, _streamline_for_io, data_dir_load, ) MPL_VERSION = Version(version("matplotlib")) NUMPY_VERSION = Version(version("numpy")) PILLOW_VERSION = Version(version("pillow")) # setuptools does not ship with the standard lib starting in Python 3.12, so we need to # be resilient if it's not available at runtime if find_spec("setuptools") is not None: SETUPTOOLS_VERSION = Version(version("setuptools")) else: SETUPTOOLS_VERSION = None if find_spec("pandas") is not None: PANDAS_VERSION = Version(version("pandas")) else: PANDAS_VERSION = None def pytest_addoption(parser): """ Lets options be passed to test functions. """ parser.addoption( "--with-answer-testing", action="store_true", ) parser.addoption( "--answer-store", action="store_true", ) parser.addoption( "--answer-raw-arrays", action="store_true", ) parser.addoption( "--raw-answer-store", action="store_true", ) parser.addoption( "--force-overwrite", action="store_true", ) parser.addoption( "--no-hash", action="store_true", ) parser.addoption("--local-dir", default=None, help="Where answers are saved.") # Tell pytest about the local-dir option in the ini files.
This # option is used for creating the answer directory on CI parser.addini( "local-dir", default=str(Path(__file__).parent / "answer-store"), help="answer directory.", ) parser.addini( "test_data_dir", default=ytcfg.get("yt", "test_data_dir"), help="Directory where data for tests is stored.", ) def pytest_configure(config): r""" Reads in the tests/tests.yaml file. This file contains a list of each answer test's answer file (including the changeset number). """ # Register custom marks for answer tests and big data config.addinivalue_line("markers", "answer_test: Run the answer tests.") config.addinivalue_line( "markers", "big_data: Run answer tests that require large data files." ) for value in ( # treat most warnings as errors "error", # >>> warnings emitted by testing frameworks, or in testing contexts # we still have some yield-based tests, awaiting for transition into pytest "ignore::pytest.PytestCollectionWarning", # matplotlib warnings related to the Agg backend which is used in CI, not much we can do about it "ignore:Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.:UserWarning", r"ignore:tight_layout.+falling back to Agg renderer:UserWarning", # # >>> warnings from wrong values passed to numpy # these should normally be curated out of the test suite but they are too numerous # to deal with in a reasonable time at the moment. "ignore:invalid value encountered in log10:RuntimeWarning", "ignore:divide by zero encountered in log10:RuntimeWarning", # # >>> there are many places in yt (most notably at the frontend level) # where we open files but never explicitly close them # Although this is in general bad practice, it can be intentional and # justified in contexts where reading speeds should be optimized. # It is not clear at the time of writing how to approach this, # so I'm going to ignore this class of warnings altogether for now. "ignore:unclosed file.*:ResourceWarning", ): config.addinivalue_line("filterwarnings", value) if SETUPTOOLS_VERSION is not None and SETUPTOOLS_VERSION >= Version("67.3.0"): # may be triggered by multiple dependencies # see https://github.com/glue-viz/glue/issues/2364 # see https://github.com/matplotlib/matplotlib/issues/25244 config.addinivalue_line( "filterwarnings", r"ignore:(Deprecated call to `pkg_resources\.declare_namespace\('.*'\)`\.\n)?" r"Implementing implicit namespace packages \(as specified in PEP 420\) " r"is preferred to `pkg_resources\.declare_namespace`\.:DeprecationWarning", ) if SETUPTOOLS_VERSION is not None and SETUPTOOLS_VERSION >= Version("67.5.0"): # may be triggered by multiple dependencies # see https://github.com/glue-viz/glue/issues/2364 # see https://github.com/matplotlib/matplotlib/issues/25244 config.addinivalue_line( "filterwarnings", "ignore:pkg_resources is deprecated as an API:DeprecationWarning", ) if MPL_VERSION < Version("3.5.2") and PILLOW_VERSION >= Version("9.1"): # see https://github.com/matplotlib/matplotlib/pull/22766 config.addinivalue_line( "filterwarnings", r"ignore:NONE is deprecated and will be removed in Pillow 10 \(2023-07-01\)\. " r"Use Resampling\.NEAREST or Dither\.NONE instead\.:DeprecationWarning", ) config.addinivalue_line( "filterwarnings", r"ignore:ADAPTIVE is deprecated and will be removed in Pillow 10 \(2023-07-01\)\. 
" r"Use Palette\.ADAPTIVE instead\.:DeprecationWarning", ) if NUMPY_VERSION >= Version("1.25"): if find_spec("h5py") is not None and ( Version(version("h5py")) < Version("3.9") ): # https://github.com/h5py/h5py/pull/2242 config.addinivalue_line( "filterwarnings", "ignore:`product` is deprecated as of NumPy 1.25.0" ":DeprecationWarning", ) if PANDAS_VERSION is not None and PANDAS_VERSION >= Version("2.2.0"): config.addinivalue_line( "filterwarnings", r"ignore:\s*Pyarrow will become a required dependency of pandas:DeprecationWarning", ) if sys.version_info >= (3, 12): # already patched (but not released) upstream: # https://github.com/dateutil/dateutil/pull/1285 config.addinivalue_line( "filterwarnings", r"ignore:datetime\.datetime\.utcfromtimestamp\(\) is deprecated:DeprecationWarning", ) if find_spec("ratarmount"): # On Python 3.12+, there is a deprecation warning when calling os.fork() # in a multi-threaded process. We use this mechanism to mount archives. config.addinivalue_line( "filterwarnings", r"ignore:This process \(pid=\d+\) is multi-threaded, use of fork\(\) " r"may lead to deadlocks in the child\." ":DeprecationWarning", ) if find_spec("datatree"): # the cf_radial dependency arm-pyart<=1.9.2 installs the now deprecated # xarray-datatree package (which imports as datatree), which triggers # a bunch of runtimewarnings when importing xarray. # https://github.com/yt-project/yt/pull/5042#issuecomment-2457797694 config.addinivalue_line( "filterwarnings", "ignore:" r"Engine.*loading failed.*" ":RuntimeWarning", ) def pytest_collection_modifyitems(config, items): r""" Decide which tests to skip based on command-line options. """ # Set up the skip marks skip_answer = pytest.mark.skip(reason="--with-answer-testing not set.") skip_unit = pytest.mark.skip(reason="Running answer tests, so skipping unit tests.") skip_big = pytest.mark.skip(reason="--answer-big-data not set.") # Loop over every collected test function for item in items: # If it's an answer test and the appropriate CL option hasn't # been set, skip it if "answer_test" in item.keywords and not config.getoption( "--with-answer-testing" ): item.add_marker(skip_answer) # If it's an answer test that requires big data and the CL # option hasn't been set, skip it if ( "big_data" in item.keywords and not config.getoption("--with-answer-testing") and not config.getoption("--answer-big-data") ): item.add_marker(skip_big) if "answer_test" not in item.keywords and config.getoption( "--with-answer-testing" ): item.add_marker(skip_unit) def pytest_itemcollected(item): # Customize pytest-mpl decorator to add sensible defaults mpl_marker = item.get_closest_marker("mpl_image_compare") if mpl_marker is not None: # in a future version, pytest-mpl may gain an option for doing this: # https://github.com/matplotlib/pytest-mpl/pull/181 mpl_marker.kwargs.setdefault("tolerance", 0.5) def _param_list(request): r""" Saves the non-ds, non-fixture function arguments for saving to the answer file. """ # pytest treats parameterized arguments as fixtures, so there's no # clean way to separate them out from other other fixtures (that I # know of), so we do it explicitly blacklist = [ "hashing", "answer_file", "request", "answer_compare", "temp_dir", "orbit_traj", "etc_traj", ] test_params = {} for key, val in request.node.funcargs.items(): if key not in blacklist: # For plotwindow, the callback arg is a tuple and the second # element contains a memory address, so we need to drop it. 
# The first element is the callback name, which is all that's # needed if key == "callback": val = val[0] test_params[key] = str(val) # Convert python-specific data objects (such as tuples) to a more # io-friendly format (in order to not have python-specific anchors # in the answer yaml file) test_params = _streamline_for_io(test_params) return test_params def _get_answer_files(request): """ Gets the path to where the hashed and raw answers are saved. """ answer_file = f"{request.cls.__name__}_{request.cls.answer_version}.yaml" raw_answer_file = f"{request.cls.__name__}_{request.cls.answer_version}.h5" # Add the local-dir aspect of the path. If there's a command line value, # have that override the ini file value clLocalDir = request.config.getoption("--local-dir") iniLocalDir = request.config.getini("local-dir") if clLocalDir is not None: answer_file = os.path.join(os.path.expanduser(clLocalDir), answer_file) raw_answer_file = os.path.join(os.path.expanduser(clLocalDir), raw_answer_file) else: answer_file = os.path.join(os.path.expanduser(iniLocalDir), answer_file) raw_answer_file = os.path.join(os.path.expanduser(iniLocalDir), raw_answer_file) # Make sure we don't overwrite unless we mean to overwrite = request.config.getoption("--force-overwrite") storing = request.config.getoption("--answer-store") raw_storing = request.config.getoption("--raw-answer-store") raw = request.config.getoption("--answer-raw-arrays") if os.path.exists(answer_file) and storing and not overwrite: raise FileExistsError( "Use `--force-overwrite` to overwrite an existing answer file." ) if os.path.exists(raw_answer_file) and raw_storing and raw and not overwrite: raise FileExistsError( "Use `--force-overwrite` to overwrite an existing raw answer file." ) # If we do mean to overwrite, do so here by deleting the original file if os.path.exists(answer_file) and storing and overwrite: os.remove(answer_file) if os.path.exists(raw_answer_file) and raw_storing and raw and overwrite: os.remove(raw_answer_file) print(os.path.abspath(answer_file)) return answer_file, raw_answer_file @pytest.fixture(scope="function") def hashing(request): r""" Handles initialization, generation, and saving of answer test result hashes. """ no_hash = request.config.getoption("--no-hash") store_hash = request.config.getoption("--answer-store") raw = request.config.getoption("--answer-raw-arrays") raw_store = request.config.getoption("--raw-answer-store") # This check is so that, when checking if the answer file exists in # _get_answer_files, we don't continuously fail. With this check, # _get_answer_files is called once per class, despite this having function # scope if request.cls.answer_file is None: request.cls.answer_file, request.cls.raw_answer_file = _get_answer_files( request ) if not no_hash and not store_hash and request.cls.saved_hashes is None: try: with open(request.cls.answer_file) as fd: request.cls.saved_hashes = yaml.safe_load(fd) except FileNotFoundError: module_filename = f"{request.function.__module__.replace('.', os.sep)}.py" with open(f"generate_test_{os.getpid()}.txt", "a") as fp: fp.write(f"{module_filename}::{request.cls.__name__}\n") pytest.fail(msg="Answer file not found.", pytrace=False) request.cls.hashes = {} # Load the saved answers if we're comparing. We don't do this for the raw # answers because those are huge yield # Get arguments and their values passed to the test (e.g., axis, field, etc.) params = _param_list(request) # Hash the test results. 
Don't save to request.cls.hashes so we still have # raw data, in case we want to work with that hashes = _hash_results(request.cls.hashes) # Add the other test parameters hashes.update(params) # Add the function name as the "master" key to the hashes dict hashes = {request.node.name: hashes} # Save hashes if not no_hash and store_hash: _save_result(hashes, request.cls.answer_file) # Compare hashes elif not no_hash and not store_hash: try: for test_name, test_hash in hashes.items(): assert test_name in request.cls.saved_hashes assert test_hash == request.cls.saved_hashes[test_name] except AssertionError: pytest.fail(f"Comparison failure: {request.node.name}", pytrace=False) # Save raw data if raw and raw_store: _save_raw_arrays( request.cls.hashes, request.cls.raw_answer_file, request.node.name ) # Compare raw data. This is done one test at a time because the # arrays can get quite large and storing everything in memory would # be bad if raw and not raw_store: _compare_raw_arrays( request.cls.hashes, request.cls.raw_answer_file, request.node.name ) @pytest.fixture(scope="function") def temp_dir(): r""" Creates a temporary directory needed by certain tests. """ curdir = os.getcwd() if int(os.environ.get("GENERATE_YTDATA", 0)): tmpdir = os.getcwd() else: tmpdir = tempfile.mkdtemp() os.chdir(tmpdir) yield tmpdir os.chdir(curdir) if tmpdir != curdir: shutil.rmtree(tmpdir) @pytest.fixture(scope="class") def ds(request): # data_dir_load can take the cls, args, and kwargs. These optional # arguments, if present, are given in a dictionary as the second # element of the list if isinstance(request.param, str): ds_fn = request.param opts = {} else: ds_fn, opts = request.param try: return data_dir_load( ds_fn, cls=opts.get("cls"), args=opts.get("args"), kwargs=opts.get("kwargs") ) except FileNotFoundError: return pytest.skip(f"Data file: `{request.param}` not found.") @pytest.fixture(scope="class") def field(request): """ Fixture for returning the field. Needed because indirect=True is used for loading the datasets. """ return request.param @pytest.fixture(scope="class") def dobj(request): """ Fixture for returning the ds_obj. Needed because indirect=True is used for loading the datasets. """ return request.param @pytest.fixture(scope="class") def axis(request): """ Fixture for returning the axis. Needed because indirect=True is used for loading the datasets. """ return request.param @pytest.fixture(scope="class") def weight(request): """ Fixture for returning the weight_field. Needed because indirect=True is used for loading the datasets. """ return request.param @pytest.fixture(scope="class") def ds_repr(request): """ Fixture for returning the string representation of a dataset. Needed because indirect=True is used for loading the datasets. """ return request.param @pytest.fixture(scope="class") def Npart(request): """ Fixture for returning the number of particles in a dataset. Needed because indirect=True is used for loading the datasets. """ return request.param ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2151506 yt-4.4.0/doc/0000755000175100001770000000000014714401715012331 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/Makefile0000644000175100001770000001204714714401662013776 0ustar00runnerdocker# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = build # Internal variables. 
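# PAPEROPT_$(PAPER) expands to the matching -D latex_paper_size flag defined
# just below, and ALLSPHINXOPTS is passed to every sphinx-build call in this file.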
PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest help: @echo "Please use \`make <target>' where <target> is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " text to make text files" @echo " man to make manual pages" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" @echo " clean to remove the build directory" @echo " recipeclean to remove files produced by running the cookbook scripts" clean: -rm -rf $(BUILDDIR)/* -rm -rf source/reference/api/yt.* -rm -rf source/reference/api/modules.rst fullclean: clean recipeclean: -rm -rf _temp/*.done source/cookbook/_static/* html: ifneq ($(READTHEDOCS),True) SPHINX_APIDOC_OPTIONS=members,undoc-members,inherited-members,show-inheritance sphinx-apidoc \ -o source/reference/api/ \ -e ../yt $(shell find ../yt -name "*tests*" -type d) ../yt/utilities/voropp* ../yt/analysis_modules/* endif $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/yt.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/yt.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/yt" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/yt" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub."
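# Typical invocations (a sketch; all targets and variables are defined in this file):
#   make html                  # build the HTML docs into $(BUILDDIR)/html
#   make latexpdf PAPER=a4     # build a PDF on A4 paper via PAPEROPT_a4
#   make linkcheck             # check external links for integrity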
latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." make -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/README0000644000175100001770000000056014714401662013213 0ustar00runnerdockerThis directory contains the uncompiled yt documentation. It's written to be used with Sphinx, a tool designed for writing Python documentation. Sphinx is available at this URL: http://www.sphinx-doc.org/en/master/ Because the documentation requires a number of dependencies, we provide pre-built versions online, accessible here: https://yt-project.org/docs/dev/ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/activate0000644000175100001770000000571214714401662014062 0ustar00runnerdocker### Adapted from virtualenv's activate script # This file must be used with "source bin/activate" *from bash* # you cannot run it directly deactivate () { # reset old environment variables if [ -n "$_OLD_VIRTUAL_PATH" ] ; then PATH="$_OLD_VIRTUAL_PATH" export PATH unset _OLD_VIRTUAL_PATH fi if [ -n "$_OLD_VIRTUAL_PYTHONHOME" ] ; then PYTHONHOME="$_OLD_VIRTUAL_PYTHONHOME" export PYTHONHOME unset _OLD_VIRTUAL_PYTHONHOME fi ### Begin extra yt vars if [ -n "$_OLD_VIRTUAL_YT_DEST" ] ; then YT_DEST="$_OLD_VIRTUAL_YT_DEST" export YT_DEST unset _OLD_VIRTUAL_YT_DEST fi if [ -n "$_OLD_VIRTUAL_PYTHONPATH" ] ; then PYTHONPATH="$_OLD_VIRTUAL_PYTHONPATH" export PYTHONPATH unset _OLD_VIRTUAL_PYTHONPATH fi if [ -n "$_OLD_VIRTUAL_LD_LIBRARY_PATH" ] ; then LD_LIBRARY_PATH="$_OLD_VIRTUAL_LD_LIBRARY_PATH" export LD_LIBRARY_PATH unset _OLD_VIRTUAL_LD_LIBRARY_PATH fi ### End extra yt vars # This should detect bash and zsh, which have a hash command that must # be called to get it to forget past commands. Without forgetting # past commands the $PATH changes we made may not be respected if [ -n "$BASH" -o -n "$ZSH_VERSION" ] ; then hash -r fi if [ -n "$_OLD_VIRTUAL_PS1" ] ; then PS1="$_OLD_VIRTUAL_PS1" export PS1 unset _OLD_VIRTUAL_PS1 fi unset VIRTUAL_ENV if [ ! "$1" = "nondestructive" ] ; then # Self destruct!
unset -f deactivate fi } # unset irrelevant variables deactivate nondestructive VIRTUAL_ENV="__YT_DIR__" export VIRTUAL_ENV _OLD_VIRTUAL_PATH="$PATH" PATH="$VIRTUAL_ENV/bin:$PATH" export PATH ### Begin extra env vars for yt _OLD_VIRTUAL_YT_DEST="$YT_DEST" YT_DEST="$VIRTUAL_ENV" export YT_DEST _OLD_VIRTUAL_PYTHONPATH="$PYTHONPATH" _OLD_VIRTUAL_LD_LIBRARY_PATH="$LD_LIBRARY_PATH" LD_LIBRARY_PATH="$VIRTUAL_ENV/lib:$LD_LIBRARY_PATH" export LD_LIBRARY_PATH ### End extra env vars for yt # unset PYTHONHOME if set # this will fail if PYTHONHOME is set to the empty string (which is bad anyway) # could use `if (set -u; : $PYTHONHOME) ;` in bash if [ -n "$PYTHONHOME" ] ; then _OLD_VIRTUAL_PYTHONHOME="$PYTHONHOME" unset PYTHONHOME fi if [ -z "$VIRTUAL_ENV_DISABLE_PROMPT" ] ; then _OLD_VIRTUAL_PS1="$PS1" if [ "x" != x ] ; then PS1="$PS1" else if [ "`basename \"$VIRTUAL_ENV\"`" = "__" ] ; then # special case for Aspen magic directories # see http://www.zetadev.com/software/aspen/ PS1="[`basename \`dirname \"$VIRTUAL_ENV\"\``] $PS1" else PS1="(`basename \"$VIRTUAL_ENV\"`)$PS1" fi fi export PS1 fi # This should detect bash and zsh, which have a hash command that must # be called to get it to forget past commands. Without forgetting # past commands the $PATH changes we made may not be respected if [ -n "$BASH" -o -n "$ZSH_VERSION" ] ; then hash -r fi ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/activate.csh0000644000175100001770000000365414714401662014641 0ustar00runnerdocker# This file must be used with "source bin/activate.csh" *from csh*. # You cannot run it directly. # Created by Davide Di Blasi . alias deactivate 'test $?_OLD_VIRTUAL_PATH != 0 && setenv PATH "$_OLD_VIRTUAL_PATH" && unset _OLD_VIRTUAL_PATH; test $?_OLD_VIRTUAL_YT_DEST != 0 && setenv YT_DEST "$_OLD_VIRTUAL_YT_DEST" && unset _OLD_VIRTUAL_YT_DEST; test $?_OLD_VIRTUAL_PYTHONPATH != 0 && setenv PYTHONPATH "$_OLD_VIRTUAL_PYTHONPATH" && unset _OLD_VIRTUAL_PYTHONPATH; test $?_OLD_VIRTUAL_LD_LIBRARY_PATH != 0 && setenv LD_LIBRARY_PATH "$_OLD_VIRTUAL_LD_LIBRARY_PATH" && unset _OLD_VIRTUAL_LD_LIBRARY_PATH; rehash; test $?_OLD_VIRTUAL_PROMPT != 0 && set prompt="$_OLD_VIRTUAL_PROMPT" && unset _OLD_VIRTUAL_PROMPT; unsetenv VIRTUAL_ENV; test "\!:*" != "nondestructive" && unalias deactivate' # Unset irrelevant variables. 
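# Running the alias in "nondestructive" mode clears any variables left over
# from a previous activation without removing the deactivate alias itself.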
deactivate nondestructive setenv VIRTUAL_ENV "__YT_DIR__" if ($?PATH == 0) then setenv PATH endif set _OLD_VIRTUAL_PATH="$PATH" setenv PATH "${VIRTUAL_ENV}/bin:${PATH}" ### Begin extra yt vars if ($?YT_DEST == 0) then setenv YT_DEST endif set _OLD_VIRTUAL_YT_DEST="$YT_DEST" setenv YT_DEST "${VIRTUAL_ENV}" if ($?PYTHONPATH == 0) then setenv PYTHONPATH endif set _OLD_VIRTUAL_PYTHONPATH="$PYTHONPATH" setenv PYTHONPATH "${VIRTUAL_ENV}/lib/python2.7/site-packages:${PYTHONPATH}" if ($?LD_LIBRARY_PATH == 0) then setenv LD_LIBRARY_PATH endif set _OLD_VIRTUAL_LD_LIBRARY_PATH="$LD_LIBRARY_PATH" setenv LD_LIBRARY_PATH "${VIRTUAL_ENV}/lib:${LD_LIBRARY_PATH}" ### End extra yt vars set _OLD_VIRTUAL_PROMPT="$prompt" if ("" != "") then set env_name = "" else if (`basename "$VIRTUAL_ENV"` == "__") then # special case for Aspen magic directories # see http://www.zetadev.com/software/aspen/ set env_name = `basename \`dirname "$VIRTUAL_ENV"\`` else set env_name = `basename "$VIRTUAL_ENV"` endif endif set prompt = "[$env_name] $prompt" unset env_name rehash ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/cheatsheet.tex0000644000175100001770000004640614714401662015203 0ustar00runnerdocker\documentclass[10pt,landscape]{article} \usepackage{multicol} \usepackage{calc} \usepackage{ifthen} \usepackage[landscape]{geometry} \usepackage[hyphens]{url} % To make this come out properly in landscape mode, do one of the following % 1. % pdflatex cheatsheet.tex % % 2. % latex cheatsheet.tex % dvips -P pdf -t landscape cheatsheet.dvi % ps2pdf cheatsheet.ps % If you're reading this, be prepared for confusion. Making this was % a learning experience for me, and it shows. Much of the placement % was hacked in; if you make it better, let me know... % 2008-04 % Changed page margin code to use the geometry package. Also added code for % conditional page margins, depending on paper size. Thanks to Uwe Ziegenhagen % for the suggestions. % 2006-08 % Made changes based on suggestions from Gene Cooperman. % 2012-11 - Stephen Skory % Converted the latex cheat sheet to a yt cheat sheet, taken from % http://www.stdout.org/~winston/latex/ % This sets page margins to .5 inch if using letter paper, and to 1cm % if using A4 paper. (This probably isn't strictly necessary.) % If using another size paper, use default 1cm margins. 
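% The nested \ifthenelse tests below match letter paper (11in wide, in
% landscape) and A4 (297mm wide); anything else falls through to 1cm margins.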
\ifthenelse{\lengthtest { \paperwidth = 11in}} { \geometry{top=.5in,left=.5in,right=.5in,bottom=0.85in} } {\ifthenelse{ \lengthtest{ \paperwidth = 297mm}} {\geometry{top=1cm,left=1cm,right=1cm,bottom=1cm} } {\geometry{top=1cm,left=1cm,right=1cm,bottom=1cm} } } % Turn off header and footer \pagestyle{empty} % Redefine section commands to use less space \makeatletter \renewcommand{\section}{\@startsection{section}{1}{0mm}% {-1ex plus -.5ex minus -.2ex}% {0.5ex plus .2ex}%x {\normalfont\large\bfseries}} \renewcommand{\subsection}{\@startsection{subsection}{2}{0mm}% {-1explus -.5ex minus -.2ex}% {0.5ex plus .2ex}% {\normalfont\normalsize\bfseries}} \renewcommand{\subsubsection}{\@startsection{subsubsection}{3}{0mm}% {-1ex plus -.5ex minus -.2ex}% {1ex plus .2ex}% {\normalfont\small\bfseries}} \makeatother % Define BibTeX command \def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}} % Don't print section numbers \setcounter{secnumdepth}{0} \setlength{\parindent}{0pt} \setlength{\parskip}{0pt plus 0.5ex} % ----------------------------------------------------------------------- \begin{document} \raggedright \fontsize{3mm}{3mm}\selectfont \begin{multicols}{3} % multicol parameters % These lengths are set only within the two main columns %\setlength{\columnseprule}{0.25pt} \setlength{\premulticols}{1pt} \setlength{\postmulticols}{1pt} \setlength{\multicolsep}{1pt} \setlength{\columnsep}{2pt} \begin{center} \Large{\textbf{yt Cheat Sheet}} \\ \end{center} \subsection{General Info} For everything yt please see \url{http://yt-project.org}. Documentation \url{http://yt-project.org/doc/index.html}. Need help? Start here \url{http://yt-project.org/doc/help/} and then try the IRC chat room \url{http://yt-project.org/irc.html}, or the mailing list \url{https://mail.python.org/archives/list/yt-users@python.org/}. \\ \subsection{Installing yt} The easiest way to install yt is to use the installation script found on the yt homepage or the docs linked above. If you already have python set up with \texttt{numpy}, \texttt{scipy}, \texttt{matplotlib}, \texttt{h5py}, and \texttt{cython}, you can also use \texttt{pip install yt} \subsection{Command Line yt} yt, and its convenience functions, are launched from a command line prompt. Many commands have flags to control behavior. Commands can be followed by {\bf {-}{-}help} (e.g. {\bf yt render {-}{-}help}) for detailed help for that command including a list of the available flags. \texttt{yt load} \textit{dataset} \textemdash\ Load a single dataset. \\ \texttt{yt help} \textemdash\ Print yt help information. \\ \texttt{yt stats} \textit{dataset} \textemdash\ Print stats of a dataset. \\ \texttt{yt update} \textemdash\ Update yt to most recent version.\\ \texttt{yt update --all} \textemdash\ Update yt and dependencies to most recent version. \\ \texttt{yt version} \textemdash\ yt installation information. \\ \texttt{yt upload\_image} \textit{image.png} \textemdash\ Upload PNG image to imgur.com. \\ \texttt{yt upload\_notebook} \textit{notebook.nb} \textemdash\ Upload IPython notebook to \url{https://girder.hub.yt}.\\ \texttt{yt plot} \textit{dataset} \textemdash\ Create a set of images.\\ \texttt{yt render} \textit{dataset} \textemdash\ Create a simple volume rendering. \\ \texttt{yt mapserver} \textit{dataset} \textemdash\ View a plot/projection in a Gmaps-like interface. \\ \texttt{yt pastebin} \textit{text.out} \textemdash\ Post text to the pastebin at paste.yt-project.org. 
\\ \texttt{yt pastebin\_grab} \textit{identifier} \textemdash\ Print content of pastebin to STDOUT. \\ \texttt{yt bugreport} \textemdash\ Report a yt bug. \\ \texttt{yt hop} \textit{dataset} \textemdash\ Run hop on a dataset. \\ \subsection{yt Imports} In order to use yt, Python must load the relevant yt modules into memory. The import commands are entered in the Python/IPython shell or used as part of a script. \newlength{\MyLen} \settowidth{\MyLen}{\texttt{letterpaper}/\texttt{a4paper} \ } \texttt{import yt} \textemdash\ Load yt. \\ \texttt{from yt.config import ytcfg} \textemdash\ Used to set yt configuration options. If used, must be called before importing any other module.\\ \texttt{from yt.analysis\_modules.\emph{halo\_finding}.api import \textasteriskcentered} \textemdash\ Load halo finding modules. Other modules are loaded in a similar way by swapping the \emph{emphasized} text. See the \textbf{Analysis Modules} section for a listing and short descriptions of each. \subsection{YTArray} Simulation data in yt is returned as a YTArray. YTArray is a numpy array that has unit data attached to it and can automatically handle unit conversions and detect unit errors. Just like a numpy array, YTArray provides a wealth of built-in functions to calculate properties of the data in the array. Here is a very brief list of some useful ones. \settowidth{\MyLen}{\texttt{multicol} }\\ \texttt{v = a.in\_cgs()} \textemdash\ Return the array in CGS units \\ \texttt{v = a.in\_units('Msun/pc**3')} \textemdash\ Return the array in solar masses per cubic parsec \\ \texttt{v = a.max(), a.min()} \textemdash\ Return maximum, minimum of \texttt{a}. \\ \texttt{index = a.argmax(), a.argmin()} \textemdash\ Return index of max, min value of \texttt{a}.\\ \texttt{v = a[}\textit{index}\texttt{]} \textemdash\ Select a single value from \texttt{a} at location \textit{index}.\\ \texttt{b = a[}\textit{i:j}\texttt{]} \textemdash\ Select the slice of values from \texttt{a} between locations \textit{i} to \textit{j-1} saved to a new Numpy array \texttt{b} with length \textit{j-i}. \\ \texttt{sel = (a > const)} \textemdash\ Create a new boolean Numpy array \texttt{sel}, of the same shape as \texttt{a}, that marks which values of \texttt{a > const}. Other operators (e.g. \textless, !=, \%) work as well.\\ \texttt{b = a[sel]} \textemdash\ Create a new Numpy array \texttt{b} made up of elements from \texttt{a} that correspond to elements of \texttt{sel} that are \textit{True}. In the above example \texttt{b} would be all elements of \texttt{a} that are greater than \texttt{const}.\\ \texttt{a.write\_hdf5(\textit{filename.h5})} \textemdash\ Save \texttt{a} to the hdf5 file \textit{filename.h5}.\\ \subsection{IPython Tips} \settowidth{\MyLen}{\texttt{multicol} } These tips work if IPython has been loaded, typically either by invoking \texttt{yt load} on the command line. \texttt{Tab complete} \textemdash\ IPython will attempt to auto-complete a variable or function name when the \texttt{Tab} key is pressed, e.g. \textit{HaloFi}\textendash\texttt{Tab} would auto-complete to \textit{HaloFinder}. This also works with imports, e.g. \textit{from numpy.random.}\textendash\texttt{Tab} would give you a list of random functions (note the trailing period before hitting \texttt{Tab}).\\ \texttt{?, ??} \textemdash\ Appending one or two question marks at the end of any object gives you detailed information about it, e.g. 
\textit{variable\_name}?.\\ Below a few IPython ``magics'' are listed, which are IPython-specific shortcut commands.\\ \texttt{\%paste} \textemdash\ Paste content from the system clipboard into the IPython shell.\\ \texttt{\%hist} \textemdash\ Print recent command history.\\ \texttt{\%quickref} \textemdash\ Print IPython quick reference.\\ \texttt{\%pdb} \textemdash\ Automatically enter the Python debugger at an exception.\\ \texttt{\%debug} \textemdash\ Drop into a debugger at the location of the last unhandled exception. \\ \texttt{\%time, \%timeit} \textemdash\ Find running time of expressions for benchmarking.\\ \texttt{\%lsmagic} \textemdash\ List all available IPython magics. Hint: \texttt{?} works with magics.\\ Please see \url{http://ipython.org/documentation.html} for the full IPython documentation. \subsection{Load and Access Data} The first step in using yt is to reference a simulation snapshot. After that, simulation data is generally accessed in yt using \textit{Data Containers} which are Python objects that define a region of simulation space from which data should be selected. \settowidth{\MyLen}{\texttt{multicol} } \texttt{ds = yt.load(}\textit{dataset}\texttt{)} \textemdash\ Reference a single snapshot.\\ \texttt{dd = ds.all\_data()} \textemdash\ Select the entire volume.\\ \texttt{a = dd[}\textit{field\_name}\texttt{]} \textemdash\ Copies the contents of \textit{field} into the YTArray \texttt{a}. Similarly for other data containers.\\ \texttt{ds.field\_list} \textemdash\ A list of available fields in the snapshot. \\ \texttt{ds.derived\_field\_list} \textemdash\ A list of available derived fields in the snapshot. \\ \texttt{val, loc = ds.find\_max("Density")} \textemdash\ Find the \texttt{val}ue of the maximum of the field \texttt{Density} and its \texttt{loc}ation. \\ \texttt{sp = ds.sphere(}\textit{cen}\texttt{,}\textit{radius}\texttt{)} \textemdash\ Create a spherical data container. \textit{cen} may be a coordinate, or ``max'' which centers on the max density point. \textit{radius} may be a float in code units or a tuple of (\textit{length, unit}).\\ \texttt{re = ds.region(\textit{cen}, \textit{left edge}, \textit{right edge})} \textemdash\ Create a rectilinear data container. \textit{cen} is required but not used. \textit{left} and \textit{right edge} are coordinate values that define the region. \texttt{di = ds.disk(\textit{cen}, \textit{normal}, \textit{radius}, \textit{height})} \textemdash\ Create a cylindrical data container centered at \textit{cen} along the direction set by \textit{normal},with total length 2$\times$\textit{height} and with radius \textit{radius}. \\ \texttt{ds.save\_object(sp, \textit{``sp\_for\_later''})} \textemdash\ Save an object (\texttt{sp}) for later use.\\ \texttt{sp = ds.load\_object(\textit{``sp\_for\_later''})} \textemdash\ Recover a saved object.\\ \subsection{Defining New Fields} \texttt{yt} expects on-disk fields, fields generated on-demand and in-memory. 
Fields can either be created before a dataset is loaded using \texttt{add\_field}: \texttt{def \_metal\_mass(\textit{field},\textit{data})}\\ \texttt{\hspace{4 mm} return data["metallicity"]*data["cell\_mass"]}\\ \texttt{add\_field("metal\_mass", units='g', function=\_metal\_mass)}\\ Or added to an existing dataset using \texttt{ds.add\_field}: \texttt{ds.add\_field("metal\_mass", units='g', function=\_metal\_mass)}\\ \subsection{Slices and Projections} \settowidth{\MyLen}{\texttt{multicol} } \texttt{slc = yt.SlicePlot(ds, \textit{axis or normal vector}, \textit{fields}, \textit{center=}, \textit{width=}, \textit{weight\_field=}, \textit{additional parameters})} \textemdash\ Make a slice plot perpendicular to \textit{axis} (specified via 'x', 'y', or 'z') or a normal vector for an off-axis slice of \textit{fields} weighted by \textit{weight\_field} at (code-units) \textit{center} with \textit{width} in code units or a (value, unit) tuple. Hint: try \textit{yt.SlicePlot?} in IPython to see additional parameters.\\ \texttt{slc.save(\textit{file\_prefix})} \textemdash\ Save the slice to a png with name prefix \textit{file\_prefix}. \texttt{.save()} works similarly for the commands below.\\ \texttt{prj = yt.ProjectionPlot(ds, \textit{axis or normal vector}, \textit{fields}, \textit{additional params})} \textemdash\ Same as \texttt{yt.SlicePlot} but for projections.\\ \subsection{Plot Annotations} \settowidth{\MyLen}{\texttt{multicol} } Plot callbacks are functions itemized in a registry that is attached to every plot object. They can be accessed and then called like \texttt{ prj.annotate\_velocity(factor=16, normalize=False)}. Most callbacks also accept a \textit{plot\_args} dict that is fed to the matplotlib annotator. \\ \texttt{velocity(\textit{factor=},\textit{scale=},\textit{scale\_units=}, \textit{normalize=})} \textemdash\ Uses field "x-velocity" to draw quivers\\ \texttt{magnetic\_field(\textit{factor=},\textit{scale=},\textit{scale\_units=}, \textit{normalize=})} \textemdash\ Uses field "Bx" to draw quivers\\ \texttt{quiver(\textit{field\_x},\textit{field\_y},\textit{factor=},\textit{scale=},\textit{scale\_units=}, \textit{normalize=})} \\ \texttt{contour(\textit{field=},\textit{levels=},\textit{factor=},\textit{clim=},\textit{take\_log=}, \textit{additional parameters})} \textemdash Plots a number of contours \textit{ncont} to interpolate \textit{field} optionally using \textit{take\_log}, upper and lower \textit{c}ontour\textit{lim}its and \textit{factor} number of points in the interpolation.\\ \texttt{grids(\textit{alpha=}, \textit{draw\_ids=}, \textit{periodic=}, \textit{min\_level=}, \textit{max\_level=})} \textemdash Add grid boundaries. \\ \texttt{streamlines(\textit{field\_x},\textit{field\_y},\textit{factor=},\textit{density=})}\\ \texttt{clumps(\textit{clumplist})} \textemdash\ Generate \textit{clumplist} using the clump finder and plot. \\ \texttt{arrow(\textit{pos}, \textit{code\_size})} Add an arrow at a \textit{pos}ition. \\ \texttt{point(\textit{pos}, \textit{text})} \textemdash\ Add text at a \textit{pos}ition. \\ \texttt{marker(\textit{pos}, \textit{marker=})} \textemdash\ Add a matplotlib-defined marker at a \textit{pos}ition.
\\ \texttt{sphere(\textit{center}, \textit{radius}, \textit{text=})} \textemdash\ Draw a circle and append \textit{text}.\\ \texttt{hop\_circles(\textit{hop\_output}, \textit{max\_number=}, \textit{annotate=}, \textit{min\_size=}, \textit{max\_size=}, \textit{font\_size=}, \textit{print\_halo\_size=}, \textit{fixed\_radius=}, \textit{min\_mass=}, \textit{print\_halo\_mass=}, \textit{width=})} \textemdash\ Draw a halo, printing its ID, mass, clipping halos depending on number of particles (\textit{size}) and optionally fixing the drawn circle radius to be constant for all halos.\\ \texttt{hop\_particles(\textit{hop\_output},\textit{max\_number=},\textit{p\_size=},\\ \textit{min\_size},\textit{alpha=})} \textemdash\ Draw particle positions for member halos with a certain number of pixels per particle.\\ \texttt{particles(\textit{width},\textit{p\_size=},\textit{col=}, \textit{marker=}, \textit{stride=}, \textit{ptype=}, \textit{stars\_only=}, \textit{dm\_only=}, \textit{minimum\_mass=}, \textit{alpha=})} \textemdash\ Draw particles of \textit{p\_size} pixels in a slab of \textit{width} with \textit{col}or using a matplotlib \textit{marker} plotting only every \textit{stride} number of particles.\\ \texttt{title(\textit{text})}\\ \subsection{The $\sim$/.yt/ Directory} \settowidth{\MyLen}{\texttt{multicol} } yt will automatically check for configuration files in a special directory (\texttt{\$HOME/.yt/}) in the user's home directory. The \texttt{config} file \textemdash\ Settings that control runtime behavior. \\ The \texttt{my\_plugins.py} file \textemdash\ Add functions, derived fields, constants, or other commonly-used Python code to yt. \subsection{Analysis Modules} \settowidth{\MyLen}{\texttt{multicol}} The import name for each module is listed at the end of each description (see \textbf{yt Imports}). \texttt{Absorption Spectrum} \textemdash\ (\texttt{absorption\_spectrum}). \\ \texttt{Clump Finder} \textemdash\ Find clumps defined by density thresholds (\texttt{level\_sets}). \\ \texttt{Halo Finding} \textemdash\ Locate halos of dark matter particles (\texttt{halo\_finding}). \\ \texttt{Light Cone Generator} \textemdash\ Stitch datasets together to perform analysis over cosmological volumes. \\ \texttt{Light Ray Generator} \textemdash\ Analyze the path of light rays.\\ \texttt{Rockstar Halo Finding} \textemdash\ Locate halos of dark matter using the Rockstar halo finder (\texttt{halo\_finding.rockstar}). \\ \texttt{Star Particle Analysis} \textemdash\ Analyze star formation history and assemble spectra (\texttt{star\_analysis}). \\ \texttt{Sunrise Exporter} \textemdash\ Export data to the sunrise visualization format (\texttt{sunrise\_export}). \\ \subsection{Parallel Analysis} \settowidth{\MyLen}{\texttt{multicol}} Nearly all of yt is parallelized using MPI\@. The \textit{mpi4py} package must be installed for parallelism in yt. Running \textit{pip install mpi4py} on the command line usually works. Execute python in parallel like this:\\ \textit{mpirun -n 12 python script.py}\\ The file \texttt{script.py} must call \texttt{yt.enable\_parallelism()} to turn on yt's parallelism. If this doesn't happen, all cores will execute the same serial yt script. This command may differ for each system on which you use yt; please consult the system documentation for details on how to run parallel applications.
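A minimal \texttt{script.py} sketch (the field name is illustrative; derived quantities such as \texttt{extrema} are computed in parallel once parallelism is enabled):\\ \texttt{import yt}\\ \texttt{yt.enable\_parallelism()}\\ \texttt{ds = yt.load(}\textit{dataset}\texttt{)}\\ \texttt{ad = ds.all\_data()}\\ \texttt{print(ad.quantities.extrema(("gas", "density")))}\\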
\texttt{parallel\_objects()} \textemdash\ A way to parallelize analysis over objects (such as halos or clumps).\\ \subsection{Git} \settowidth{\MyLen}{\texttt{multicol}} Please see \url{https://git-scm.com/} for the latest Git documentation. \texttt{git clone https://github.com/yt-project/yt} \textemdash\ Clone the yt repository. \\ \texttt{git status} \textemdash\ Show status of working tree.\\ \texttt{git diff} \textemdash\ Show changed files in the working tree. \\ \texttt{git log} \textemdash\ Show a log of changes in reverse chronological order.\\ \texttt{git revert \textit{commit}} \textemdash\ Revert the changes in an existing commit and create a new commit with reverted changes. \\ \texttt{git add \textit{paths}} \textemdash\ Stage changes in the working tree to the index. \\ \texttt{git commit} \textemdash\ Commit staged changes to the repository. \\ \texttt{git merge \textit{branch}} \textemdash\ Merge the revisions from the specified branch on top of the current branch.\\ \texttt{git push \textit{remote}} \textemdash\ Push changes to the remote repository. \\ \texttt{git push \textit{remote} \textit{branch}} \textemdash\ Push changes in the specified branch to the remote repository. \\ \texttt{git pull \textit{remote} \textit{branch}} \textemdash\ Pull changes from the specified branch of the remote repository. This is equivalent to \texttt{git fetch \textit{remote}} and then \texttt{git merge \textit{remote}/\textit{branch}}.\\ \subsection{FAQ} \settowidth{\MyLen}{\texttt{multicol}} \texttt{slc.set\_log('field', False)} \textemdash\ When plotting \texttt{field}, use linear scaling instead of log scaling. %\rule{0.3\linewidth}{0.25pt} %\scriptsize % Can put some final stuff here like copyright etc... \end{multicols} \end{document} ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/docstring_idioms.txt0000644000175100001770000000322714714401662016437 0ustar00runnerdockerIdioms for Docstrings in yt =========================== For a full list of recognized constructs for marking up docstrings, see the Sphinx documentation: http://www.sphinx-doc.org/en/master/ Specifically, this section: http://www.sphinx-doc.org/en/master/usage/restructuredtext/ http://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#cross-referencing-syntax Variables in Examples --------------------- In order to construct short, useful examples, some variables must be specified. However, because often examples require a bit of setup, here is a list of useful variable names that correspond to specific instances that the user is presupposed to have created. * `ds`: a dataset, loaded successfully * `sp`: a sphere * `c`: a 3-component "center" * `L`: a 3-component vector that corresponds to either angular momentum or a normal vector Cross-Referencing ----------------- To enable sufficient linkages between different sections of the documentation, good cross-referencing is key. To reference a section of the documentation, you can use this construction: For more information, see :ref:`image_writer`. This will insert a link to the section in the documentation which has been identified with `image_writer` as its name. Referencing Classes and Functions --------------------------------- To indicate the return type of a given object, you can reference it using this construction: This function returns a :class:`ProjectionPlot`. To reference a function, you can use: To write out this array, use :func:`save_image`. To reference a method, you can use: To add a projection, use :meth:`ProjectionPlot.set_width`.
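Putting It Together
-------------------

A sketch of a docstring that combines the idioms above (the function and its
parameters are illustrative, not part of yt):

    def angular_momentum_projection(ds, c, L):
        """Project the dataset along L, centered at c.

        This function returns a :class:`ProjectionPlot`. To write the
        resulting image out, use :func:`save_image`; to adjust the plot
        width afterwards, use :meth:`ProjectionPlot.set_width`. For more
        information, see :ref:`image_writer`.
        """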
././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2151506 yt-4.4.0/doc/extensions/0000755000175100001770000000000014714401715014530 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/extensions/README0000644000175100001770000000023114714401662015405 0ustar00runnerdockerThis includes a version of the Numpy Documentation extension that has been slightly modified to emit extra TOC tree items. -- Matt Turk, March 25, 2011 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/extensions/config_help.py0000644000175100001770000000202514714401662017357 0ustar00runnerdockerimport re import subprocess from docutils import statemachine from docutils.parsers.rst import Directive def setup(app): app.add_directive("config_help", GetConfigHelp) setup.app = app setup.config = app.config setup.confdir = app.confdir retdict = dict(version="1.0", parallel_read_safe=True, parallel_write_safe=True) return retdict class GetConfigHelp(Directive): required_arguments = 1 optional_arguments = 0 final_argument_whitespace = True def run(self): rst_file = self.state_machine.document.attributes["source"] data = ( subprocess.check_output(self.arguments[0].split(" ") + ["-h"]) .decode("utf8") .split("\n") ) ind = next( (i for i, val in enumerate(data) if re.match(r"\s{0,3}\{.*\}\s*$", val)) ) lines = [".. code-block:: none", ""] + data[ind + 1 :] self.state_machine.insert_input( statemachine.string2lines("\n".join(lines)), rst_file ) return [] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/extensions/pythonscript_sphinxext.py0000644000175100001770000000532114714401662021764 0ustar00runnerdockerimport errno import glob import os import shutil import subprocess import tempfile import time import uuid from docutils import nodes from docutils.parsers.rst import Directive class PythonScriptDirective(Directive): """Execute an inline python script and display images. This runs an inline python script in a subprocess, copies any images produced by the script, and embeds them in the document along with the script. """ required_arguments = 0 optional_arguments = 0 has_content = True def run(self): cwd = os.getcwd() tmpdir = tempfile.mkdtemp() os.chdir(tmpdir) rst_file = self.state_machine.document.attributes["source"] rst_dir = os.path.abspath(os.path.dirname(rst_file)) image_dir, image_rel_dir = make_image_dir(setup, rst_dir) # Construct script from cell content content = "\n".join(self.content) with open("temp.py", "w") as f: f.write(content) # Use sphinx logger?
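# The print calls below stand in for a real logger: they tag the script with
# a short uuid, echo its contents, run it in a subprocess, and report the
# elapsed wall-clock time.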
uid = uuid.uuid4().hex[:8] print("") print(f">> Contents of the script: {uid}") print(content) print("") start = time.time() subprocess.call(["python", "temp.py"]) print(f">> The execution of the script {uid} took {time.time() - start:f} s") text = "" for im in sorted(glob.glob("*.png")): text += get_image_tag(im, image_dir, image_rel_dir) code = content literal = nodes.literal_block(code, code) literal["language"] = "python" attributes = {"format": "html"} img_node = nodes.raw("", text, **attributes) # clean up os.chdir(cwd) shutil.rmtree(tmpdir, True) return [literal, img_node] def setup(app): app.add_directive("python-script", PythonScriptDirective) setup.app = app setup.config = app.config setup.confdir = app.confdir retdict = dict(version="0.1", parallel_read_safe=True, parallel_write_safe=True) return retdict def get_image_tag(filename, image_dir, image_rel_dir): my_uuid = uuid.uuid4().hex shutil.move(filename, image_dir + os.path.sep + my_uuid + filename) relative_filename = image_rel_dir + os.path.sep + my_uuid + filename return f'<img src="{relative_filename}" width="600"><br>'  # the <img> markup here was stripped when the archive was rendered as text; this tag is a plausible reconstruction
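# Usage sketch (the script body is illustrative, not taken from this
# repository): an .rst page embeds an inline script with
#
#   .. python-script::
#
#      import yt
#      ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
#      yt.SlicePlot(ds, "z", ("gas", "density")).save()
#
# The directive runs the body in a temporary directory and inlines any PNG
# files the script saves.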
def make_image_dir(setup, rst_dir): image_dir = os.path.join(setup.app.builder.outdir, "_images") rel_dir = os.path.relpath(setup.confdir, rst_dir) image_rel_dir = os.path.join(rel_dir, "_images") thread_safe_mkdir(image_dir) return image_dir, image_rel_dir def thread_safe_mkdir(dirname): try: os.makedirs(dirname) except OSError as e: if e.errno != errno.EEXIST: raise ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/extensions/yt_colormaps.py0000644000175100001770000000377514714401662017622 0ustar00runnerdocker# This extension is quite simple: # 1. It accepts a script name # 2. This script is added to the document in a literalinclude # 3. Any _static images found will be added import glob import os import shutil from docutils.parsers.rst import Directive, directives # Some of this magic comes from the matplotlib plot_directive. def setup(app): app.add_directive("yt_colormaps", ColormapScript) setup.app = app setup.config = app.config setup.confdir = app.confdir retdict = dict(version="0.1", parallel_read_safe=True, parallel_write_safe=True) return retdict class ColormapScript(Directive): required_arguments = 1 optional_arguments = 0 def run(self): rst_file = self.state_machine.document.attributes["source"] rst_dir = os.path.abspath(os.path.dirname(rst_file)) script_fn = directives.path(self.arguments[0]) script_bn = os.path.basename(script_fn) # This magic is from matplotlib dest_dir = os.path.abspath( os.path.join(setup.app.builder.outdir, os.path.dirname(script_fn)) ) if not os.path.exists(dest_dir): os.makedirs(dest_dir) # no problem here for me, but just use built-ins rel_dir = os.path.relpath(rst_dir, setup.confdir) place = os.path.join(dest_dir, rel_dir) if not os.path.isdir(place): os.makedirs(place) shutil.copyfile( os.path.join(rst_dir, script_fn), os.path.join(place, script_bn) ) im_path = os.path.join(rst_dir, "_static") images = sorted(glob.glob(os.path.join(im_path, "*.png"))) lines = [] for im in images: im_name = os.path.join("_static", os.path.basename(im)) lines.append(f".. image:: {im_name}") lines.append(" :width: 400") lines.append(f" :target: ../../_images/{os.path.basename(im)}") lines.append("\n") lines.append("\n") self.state_machine.insert_input(lines, rst_file) return [] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/extensions/yt_cookbook.py0000644000175100001770000000547714714401662017436 0ustar00runnerdocker# This extension is quite simple: # 1. It accepts a script name # 2. This script is added to the document in a literalinclude # 3. Any _static images found will be added import glob import os import shutil from docutils.parsers.rst import Directive, directives # Some of this magic comes from the matplotlib plot_directive.
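# Usage sketch (the recipe name is hypothetical): an .rst page would contain
#
#   .. yt_cookbook:: simple_slice.py
#
# which literal-includes the recipe and embeds any images named
# ``simple_slice__*.png`` (plus matching data files) found in the page's
# _static directory.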
def setup(app): app.add_directive("yt_cookbook", CookbookScript) setup.app = app setup.config = app.config setup.confdir = app.confdir retdict = dict(version="0.1", parallel_read_safe=True, parallel_write_safe=True) return retdict data_patterns = ["*.h5", "*.out", "*.dat", "*.mp4"] class CookbookScript(Directive): required_arguments = 1 optional_arguments = 0 def run(self): rst_file = self.state_machine.document.attributes["source"] rst_dir = os.path.abspath(os.path.dirname(rst_file)) script_fn = directives.path(self.arguments[0]) script_bn = os.path.basename(script_fn) script_name = os.path.basename(self.arguments[0]).split(".")[0] # This magic is from matplotlib dest_dir = os.path.abspath( os.path.join(setup.app.builder.outdir, os.path.dirname(script_fn)) ) if not os.path.exists(dest_dir): os.makedirs(dest_dir) # no problem here for me, but just use built-ins rel_dir = os.path.relpath(rst_dir, setup.confdir) place = os.path.join(dest_dir, rel_dir) if not os.path.isdir(place): os.makedirs(place) shutil.copyfile( os.path.join(rst_dir, script_fn), os.path.join(place, script_bn) ) im_path = os.path.join(rst_dir, "_static") images = sorted(glob.glob(os.path.join(im_path, f"{script_name}__*.png"))) lines = [] lines.append(f"(`{script_bn} <{script_fn}>`__)") lines.append("\n") lines.append("\n") lines.append(f".. literalinclude:: {self.arguments[0]}") lines.append("\n") lines.append("\n") for im in images: im_name = os.path.join("_static", os.path.basename(im)) lines.append(f".. image:: {im_name}") lines.append(" :width: 400") lines.append(f" :target: ../_images/{os.path.basename(im)}") lines.append("\n") lines.append("\n") for ext in data_patterns: data_files = sorted( glob.glob(os.path.join(im_path, f"{script_name}__*.{ext}")) ) for df in data_files: df_bn = os.path.basename(df) shutil.copyfile( os.path.join(rst_dir, df), os.path.join(dest_dir, rel_dir, df_bn) ) lines.append(f" * Data: `{df_bn} <{df}>`__)") lines.append("\n") self.state_machine.insert_input(lines, rst_file) return [] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/extensions/yt_showfields.py0000644000175100001770000000147114714401662017771 0ustar00runnerdockerimport subprocess import sys from docutils.parsers.rst import Directive def setup(app): app.add_directive("yt_showfields", ShowFields) setup.app = app setup.config = app.config setup.confdir = app.confdir retdict = dict(version="1.0", parallel_read_safe=True, parallel_write_safe=True) return retdict class ShowFields(Directive): required_arguments = 0 optional_arguments = 0 parallel_read_safe = True parallel_write_safe = True def run(self): rst_file = self.state_machine.document.attributes["source"] lines = subprocess.check_output( [sys.executable, "./helper_scripts/show_fields.py"] ) lines = lines.decode("utf8") lines = lines.split("\n") self.state_machine.insert_input(lines, rst_file) return [] ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2151506 yt-4.4.0/doc/helper_scripts/0000755000175100001770000000000014714401715015357 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/helper_scripts/code_support.py0000644000175100001770000000506214714401662020443 0ustar00runnerdockervals = [ "FluidQuantities", "Particles", "Parameters", "Units", "ReadOnDemand", "LoadRawData", "LevelOfSupport", "ContactPerson", ] class CodeSupport: def __init__(self, **kwargs): self.support = {} for v in vals: 
self.support[v] = "N" for k, v in kwargs.items(): if k in vals: self.support[k] = v Y = "Y" N = "N" code_names = ["Enzo", "Orion", "FLASH", "RAMSES", "Chombo", "Gadget", "ART", "ZEUS"] codes = dict( Enzo=CodeSupport( FluidQuantities=Y, Particles=Y, Parameters=Y, Units=Y, ReadOnDemand=Y, LoadRawData=Y, ContactPerson="Matt Turk", LevelOfSupport="Full", ), Orion=CodeSupport( FluidQuantities=Y, Particles=N, Parameters=Y, Units=Y, ReadOnDemand=Y, LoadRawData=Y, ContactPerson="Jeff Oishi", LevelOfSupport="Full", ), FLASH=CodeSupport( FluidQuantities=Y, Particles=N, Parameters=N, Units=Y, ReadOnDemand=Y, LoadRawData=Y, ContactPerson="John !ZuHone", LevelOfSupport="Partial", ), RAMSES=CodeSupport( FluidQuantities=Y, Particles=N, Parameters=N, Units=N, ReadOnDemand=Y, LoadRawData=Y, ContactPerson="Matt Turk", LevelOfSupport="Partial", ), Chombo=CodeSupport( FluidQuantities=Y, Particles=N, Parameters=N, Units=N, ReadOnDemand=Y, LoadRawData=Y, ContactPerson="Jeff Oishi", LevelOfSupport="Partial", ), Gadget=CodeSupport( FluidQuantities=N, Particles=Y, Parameters=Y, Units=Y, ReadOnDemand=N, LoadRawData=N, ContactPerson="Chris Moody", LevelOfSupport="Partial", ), ART=CodeSupport( FluidQuantities=N, Particles=N, Parameters=N, Units=N, ReadOnDemand=N, LoadRawData=N, ContactPerson="Matt Turk", LevelOfSupport="None", ), ZEUS=CodeSupport( FluidQuantities=N, Particles=N, Parameters=N, Units=N, ReadOnDemand=N, LoadRawData=N, ContactPerson="Matt Turk", LevelOfSupport="None", ), ) print("|| . ||", end=" ") for c in code_names: print(f"{c} || ", end=" ") print() for vn in vals: print(f"|| !{vn} ||", end=" ") for c in code_names: print(f"{codes[c].support[vn]} || ", end=" ") print() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/helper_scripts/parse_cb_list.py0000644000175100001770000000254514714401662020551 0ustar00runnerdockerimport inspect from textwrap import TextWrapper import yt ds = yt.load("RD0005-mine/RedshiftOutput0005") output = open("source/visualizing/_cb_docstrings.inc", "w") template = """ .. function:: %(clsname)s%(sig)s: (This is a proxy for :class:`~%(clsproxy)s`.) %(docstring)s """ tw = TextWrapper(initial_indent=" ", subsequent_indent=" ", width=60) def write_docstring(f, name, cls): if not hasattr(cls, "_type_name") or cls._type_name is None: return for clsi in inspect.getmro(cls): docstring = inspect.getdoc(clsi.__init__) if docstring is not None: break clsname = cls._type_name sig = inspect.formatargspec(*inspect.getargspec(cls.__init__)) sig = sig.replace("**kwargs", "**field_parameters") clsproxy = f"yt.visualization.plot_modifications.{cls.__name__}" # docstring = "\n".join([" %s" % line for line in docstring.split("\n")]) # print(docstring) f.write( template % dict( clsname=clsname, sig=sig, clsproxy=clsproxy, docstring="\n".join(tw.wrap(docstring)), ) ) # docstring = docstring)) for n, c in sorted(yt.visualization.api.callback_registry.items()): write_docstring(output, n, c) print(f".. autoclass:: yt.visualization.plot_modifications.{n}") print(" :members:") print() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/helper_scripts/parse_dq_list.py0000644000175100001770000000202314714401662020560 0ustar00runnerdockerimport inspect from textwrap import TextWrapper import yt ds = yt.load("RD0005-mine/RedshiftOutput0005") output = open("source/analyzing/_dq_docstrings.inc", "w") template = """ .. function:: %(funcname)s%(sig)s: (This is a proxy for :func:`~%(funcproxy)s`.) 
%(docstring)s """ tw = TextWrapper(initial_indent=" ", subsequent_indent=" ", width=60) def write_docstring(f, name, func): docstring = inspect.getdoc(func) funcname = name sig = inspect.formatargspec(*inspect.getargspec(func)) sig = sig.replace("data, ", "") sig = sig.replace("(data)", "()") funcproxy = f"yt.data_objects.derived_quantities.{func.__name__}" docstring = "\n".join(" %s" % line for line in docstring.split("\n")) f.write( template % dict(funcname=funcname, sig=sig, funcproxy=funcproxy, docstring=docstring) ) # docstring = "\n".join(tw.wrap(docstring)))) dd = ds.all_data() for n, func in sorted(dd.quantities.functions.items()): print(n, func) write_docstring(output, n, func[1]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/helper_scripts/parse_object_list.py0000644000175100001770000000204614714401662021427 0ustar00runnerdockerimport inspect from textwrap import TextWrapper import yt ds = yt.load("RD0005-mine/RedshiftOutput0005") output = open("source/analyzing/_obj_docstrings.inc", "w") template = """ .. class:: %(clsname)s%(sig)s: For more information, see :ref:`%(docstring)s` (This is a proxy for :class:`~%(clsproxy)sBase`.) """ tw = TextWrapper(initial_indent=" ", subsequent_indent=" ", width=60) def write_docstring(f, name, cls): for clsi in inspect.getmro(cls): docstring = inspect.getdoc(clsi.__init__) if docstring is not None: break clsname = name sig = inspect.formatargspec(*inspect.getargspec(cls.__init__)) sig = sig.replace("**kwargs", "**field_parameters") clsproxy = f"yt.data_objects.data_containers.{cls.__name__}" f.write( template % dict( clsname=clsname, sig=sig, clsproxy=clsproxy, docstring="physical-object-api" ) ) for n, c in sorted(ds.__dict__.items()): if hasattr(c, "_con_args"): print(n) write_docstring(output, n, c) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/helper_scripts/run_recipes.py0000755000175100001770000000466114714401662020262 0ustar00runnerdocker#!/usr/bin/env python3 import glob import os import shutil import subprocess import sys import tempfile import traceback from multiprocessing import Pool import matplotlib from yt.config import ytcfg matplotlib.use("Agg") FPATTERNS = ["*.png", "*.txt", "*.h5", "*.dat", "*.mp4"] DPATTERNS = ["LC*", "LR", "DD0046"] BADF = [ "cloudy_emissivity.h5", "apec_emissivity.h5", "xray_emissivity.h5", "AMRGridData_Slice_x_density.png", ] CWD = os.getcwd() ytcfg["yt", "serialize"] = False BLACKLIST = ["opengl_ipython", "opengl_vr"] def prep_dirs(): for directory in glob.glob(f"{ytcfg.get('yt', 'test_data_dir')}/*"): os.symlink(directory, os.path.basename(directory)) def run_recipe(payload): (recipe,) = payload module_name, ext = os.path.splitext(os.path.basename(recipe)) dest = os.path.join(os.path.dirname(recipe), "_static", module_name) if module_name in BLACKLIST: return 0 if not os.path.exists(f"{CWD}/_temp/{module_name}.done"): sys.stderr.write(f"Started {module_name}\n") tmpdir = tempfile.mkdtemp() os.chdir(tmpdir) prep_dirs() try: subprocess.check_call(["python", recipe]) except Exception as exc: trace = "".join(traceback.format_exception(*sys.exc_info())) trace += f" in module: {module_name}\n" trace += f" recipe: {recipe}\n" raise Exception(trace) from exc open(f"{CWD}/_temp/{module_name}.done", "wb").close() for pattern in FPATTERNS: for fname in glob.glob(pattern): if fname not in BADF: shutil.move(fname, f"{dest}__{fname}") for pattern in DPATTERNS: for dname in glob.glob(pattern): 
shutil.move(dname, dest) os.chdir(CWD) shutil.rmtree(tmpdir, True) sys.stderr.write(f"Finished with {module_name}\n") return 0 for path in [ "_temp", "source/cookbook/_static", "source/visualizing/colormaps/_static", ]: fpath = os.path.join(CWD, path) if os.path.exists(fpath): shutil.rmtree(fpath) os.makedirs(fpath) os.chdir("_temp") recipes = [] for rpath in ["source/cookbook", "source/visualizing/colormaps"]: fpath = os.path.join(CWD, rpath) sys.path.append(fpath) recipes += glob.glob(f"{fpath}/*.py") WPOOL = Pool(processes=6) RES = WPOOL.map_async(run_recipe, ((recipe,) for recipe in recipes)) RES.get() os.chdir(CWD) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/helper_scripts/show_fields.py0000644000175100001770000002362414714401662020247 0ustar00runnerdockerimport inspect import numpy as np import yt.frontends as frontends_module from yt.config import ytcfg from yt.fields.derived_field import NullFunc from yt.frontends.api import _frontends from yt.frontends.stream.fields import StreamFieldInfo from yt.funcs import obj_length from yt.testing import fake_random_ds from yt.units import dimensions from yt.units.yt_array import Unit from yt.utilities.cosmology import Cosmology fields, units = [], [] for fname, (code_units, _aliases, _dn) in StreamFieldInfo.known_other_fields: fields.append(("gas", fname)) units.append(code_units) base_ds = fake_random_ds(4, fields=fields, units=units) base_ds.index base_ds.cosmological_simulation = 1 base_ds.cosmology = Cosmology() ytcfg["yt", "internals", "within_testing"] = True np.seterr(all="ignore") def _strip_ftype(field): if not isinstance(field, tuple): return field elif field[0] == "all": return field return field[1] np.random.seed(int(0x4D3D3D3)) units = [base_ds._get_field_info(f).units for f in fields] fields = [_strip_ftype(f) for f in fields] ds = fake_random_ds(16, fields=fields, units=units, particles=1) ds.parameters["HydroMethod"] = "streaming" ds.parameters["EOSType"] = 1.0 ds.parameters["EOSSoundSpeed"] = 1.0 ds.conversion_factors["Time"] = 1.0 ds.conversion_factors.update({f: 1.0 for f in fields}) ds.gamma = 5.0 / 3.0 ds.current_redshift = 0.0001 ds.cosmological_simulation = 1 ds.hubble_constant = 0.7 ds.omega_matter = 0.27 ds.omega_lambda = 0.73 ds.cosmology = Cosmology( hubble_constant=ds.hubble_constant, omega_matter=ds.omega_matter, omega_lambda=ds.omega_lambda, unit_registry=ds.unit_registry, ) for my_unit in ["m", "pc", "AU", "au"]: new_unit = f"{my_unit}cm" my_u = Unit(my_unit, registry=ds.unit_registry) ds.unit_registry.add( new_unit, my_u.base_value, dimensions.length, "\\rm{%s}/(1+z)" % my_unit, prefixable=True, ) header = r""" .. _field-list: Field List ========== This is a list of many of the fields available in yt. We have attempted to include most of the fields that are accessible through the plugin system, as well as the fields that are known by the frontends; however, it is possible to generate many more permutations, particularly through vector operations. For more information about the fields framework, see :ref:`fields`. Some fields are recognized by specific frontends only. These are typically fields like density and temperature that have their own names and units in the different frontend datasets. Often, these fields are aliased to their yt-named counterpart fields (typically 'gas' fieldtypes). For example, in the ``FLASH`` frontend, the ``dens`` field (i.e. ``(flash, dens)``) is aliased to the gas field density (i.e. 
``(gas, density)``), similarly ``(flash, velx)`` is aliased to ``(gas, velocity_x)``, and so on. In what follows, if a field is aliased it will be noted. Try using the ``ds.field_list`` and ``ds.derived_field_list`` attributes to view the native and derived fields available for your dataset, respectively. For example, to display the native fields in alphabetical order: .. notebook-cell:: import yt ds = yt.load("Enzo_64/DD0043/data0043") for i in sorted(ds.field_list): print(i) To figure out what all of the field types here mean, see :ref:`known-field-types`. .. contents:: Table of Contents :depth: 1 :local: :backlinks: none .. _yt-fields: Universal Fields ---------------- """ footer = """ Index of Fields --------------- .. contents:: :depth: 3 :backlinks: none """ print(header) seen = [] def fix_units(units, in_cgs=False): unit_object = Unit(units, registry=ds.unit_registry) if in_cgs: unit_object = unit_object.get_cgs_equivalent() latex = unit_object.latex_representation() return latex.replace(r"\ ", "~") def print_all_fields(fl): for fn in sorted(fl): df = fl[fn] f = df._function s = f"{df.name}" print(s) print("^" * len(s)) print() if obj_length(df.units) > 0: # Most universal fields are in CGS except for these special fields if df.name[1] in [ "particle_position", "particle_position_x", "particle_position_y", "particle_position_z", "entropy", "kT", "metallicity", "dx", "dy", "dz", "cell_volume", "x", "y", "z", ]: print(f" * Units: :math:`{fix_units(df.units)}`") else: print(f" * Units: :math:`{fix_units(df.units, in_cgs=True)}`") print(f" * Sampling Method: {df.sampling_type}") print() print("**Field Source**") print() if f == NullFunc: print("No source available.") print() continue else: print(".. code-block:: python") print() for line in inspect.getsource(f).split("\n"): print(" " + line) print() ds.index print_all_fields(ds.field_info) class FieldInfo: """a simple container to hold the information about fields""" def __init__(self, ftype, field, ptype): name = field[0] self.units = "" u = field[1][0] if len(u) > 0: self.units = r":math:`\mathrm{%s}`" % fix_units(u) a = [f"``{f}``" for f in field[1][1] if f] self.aliases = " ".join(a) self.dname = "" if field[1][2] is not None: self.dname = f":math:`{field[1][2]}`" if ftype != "particle_type": ftype = f"'{ftype}'" self.name = f"({ftype}, '{name}')" self.ptype = ptype current_frontends = [f for f in _frontends if f not in ["stream"]] for frontend in current_frontends: this_f = getattr(frontends_module, frontend) field_info_names = [fi for fi in dir(this_f) if "FieldInfo" in fi] dataset_names = [dset for dset in dir(this_f) if "Dataset" in dset] if frontend == "gadget": # Drop duplicate entry for GadgetHDF5, add special case for FieldInfo # entry dataset_names = ["GadgetDataset"] field_info_names = ["GadgetFieldInfo"] elif frontend == "boxlib": field_info_names = [] for d in dataset_names: if "Maestro" in d: field_info_names.append("MaestroFieldInfo") elif "Castro" in d: field_info_names.append("CastroFieldInfo") else: field_info_names.append("BoxlibFieldInfo") elif frontend == "chombo": # remove low dimensional field info containers for ChomboPIC field_info_names = [ f for f in field_info_names if "1D" not in f and "2D" not in f ] for dset_name, fi_name in zip(dataset_names, field_info_names): fi = getattr(this_f, fi_name) nfields = 0 if hasattr(fi, "known_other_fields"): known_other_fields = fi.known_other_fields nfields += len(known_other_fields) else: known_other_fields = [] if hasattr(fi, "known_particle_fields"): known_particle_fields = 
fi.known_particle_fields if "Tipsy" in fi_name: known_particle_fields += tuple(fi.aux_particle_fields.values()) nfields += len(known_particle_fields) else: known_particle_fields = [] if nfields > 0: print(f".. _{dset_name.replace('Dataset', '')}_specific_fields:\n") h = f"{dset_name.replace('Dataset', '')}-Specific Fields" print(h) print("-" * len(h) + "\n") field_stuff = [] for field in known_other_fields: field_stuff.append(FieldInfo(frontend, field, False)) for field in known_particle_fields: if frontend in ["sph", "halo_catalogs", "sdf"]: field_stuff.append(FieldInfo("particle_type", field, True)) else: field_stuff.append(FieldInfo("io", field, True)) # output len_name = 10 len_units = 5 len_aliases = 7 len_part = 9 len_disp = 12 for f in field_stuff: len_name = max(len_name, len(f.name)) len_aliases = max(len_aliases, len(f.aliases)) len_units = max(len_units, len(f.units)) len_disp = max(len_disp, len(f.dname)) fstr = "{nm:{nw}} {un:{uw}} {al:{aw}} {pt:{pw}} {dp:{dw}}" header = fstr.format( nm="field name", nw=len_name, un="units", uw=len_units, al="aliases", aw=len_aliases, pt="particle?", pw=len_part, dp="display name", dw=len_disp, ) div = fstr.format( nm="=" * len_name, nw=len_name, un="=" * len_units, uw=len_units, al="=" * len_aliases, aw=len_aliases, pt="=" * len_part, pw=len_part, dp="=" * len_disp, dw=len_disp, ) print(div) print(header) print(div) for f in field_stuff: print( fstr.format( nm=f.name, nw=len_name, un=f.units, uw=len_units, al=f.aliases, aw=len_aliases, pt=f.ptype, pw=len_part, dp=f.dname, dw=len_disp, ) ) print(div) print("") print(footer) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/helper_scripts/split_auto.py0000644000175100001770000000337014714401662020120 0ustar00runnerdockerimport collections templates = dict( autoclass=r""" %(name)s %(header)s .. autoclass:: %(name)s :members: :inherited-members: :undoc-members: """, autofunction=r""" %(name)s %(header)s .. autofunction:: %(name)s """, index_file=r""" %(title)s %(header)s .. 
autosummary:: :toctree: generated/%(dn)s """, ) file_names = dict( ft=("Field Types", "source/api/field_types/%s.rst"), pt=("Plot Types", "source/api/plot_types/%s.rst"), cl=("Callback List", "source/api/callback_list/%s.rst"), ee=("Extension Types", "source/api/extension_types/%s.rst"), dd=("Derived Datatypes", "source/api/derived_datatypes/%s.rst"), mt=("Miscellaneous Types", "source/api/misc_types/%s.rst"), fl=("Function List", "source/api/function_list/%s.rst"), ds=("Data Sources", "source/api/data_sources/%s.rst"), dq=("Derived Quantities", "source/api/derived_quantities/%s.rst"), ) to_include = collections.defaultdict(list) for line in open("auto_generated.txt"): ftype, name, file_name = (s.strip() for s in line.split("::")) cn = name.split(".")[-1] if cn[0] == "_": cn = cn[1:] # For leading _ fn = file_names[file_name][1] % cn # if not os.path.exists(os.path.dirname(fn)): # os.mkdir(os.path.dirname(fn)) header = "-" * len(name) dd = dict(header=header, name=name) # open(fn, "w").write(templates[ftype] % dd) to_include[file_name].append(name) for key, val in file_names.items(): title, file = val fn = file.rsplit("/", 1)[0] + ".rst" print(fn) f = open(fn, "w") dn = fn.split("/")[-1][:-4] dd = dict(header="=" * len(title), title=title, dn=dn) f.write(templates["index_file"] % dd) for obj in sorted(to_include[key]): f.write(f" {obj}\n") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/helper_scripts/table.py0000644000175100001770000000632014714401662017022 0ustar00runnerdockercontents = [ ( "Getting Started", [ ("welcome/index.html", "Welcome to yt!", "What's yt all about?"), ( "orientation/index.html", "yt Orientation", "Quickly get up and running with yt: zero to sixty.", ), ( "help/index.html", "How to Ask for Help", "Some guidelines on how and where to ask for help with yt", ), ( "workshop.html", "Workshop Tutorials", "Videos, slides and scripts from the 2012 workshop covering many " + "aspects of yt, from beginning to advanced.", ), ], ), ( "Everyday yt", [ ( "analyzing/index.html", "Analyzing Data", "An overview of different ways to handle and process data: loading " + "data, using and manipulating objects and fields, examining and " + "manipulating particles, derived fields, generating processed data, " + "time series analysis.", ), ( "visualizing/index.html", "Visualizing Data", "An overview of different ways to visualize data: making projections, " + "slices, phase plots, streamlines, and volume rendering; modifying " + "plots; the fixed resolution buffer.", ), ( "interacting/index.html", "Interacting with yt", "Different ways -- scripting, GUIs, prompts, explorers -- to explore " + "your data.", ), ], ), ( "Advanced Usage", [ ( "advanced/index.html", "Advanced yt usage", "Advanced topics: parallelism, debugging, ways to drive yt, " + "developing", ), ( "getting_involved/index.html", "Getting Involved", "How to participate in the community, contribute code and share " + "scripts", ), ], ), ( "Reference Materials", [ ( "cookbook/index.html", "The Cookbook", "A bunch of illustrated examples of how to do things", ), ( "reference/index.html", "Reference Materials", "A list of all bundled fields, API documentation, the Change Log...", ), ("faq/index.html", "FAQ", "Frequently Asked Questions: answered for you!"), ], ), ] heading_template = r"""

<h2>%s</h2>
<table class="contentstable" align="center">
%s
</table>
"""

# The HTML tags inside these two templates were stripped when this archive was
# rendered as text; the markup shown here is a plausible reconstruction (the
# %s slots match how the templates are filled below), not the verbatim
# original.
subheading_template = r"""
<tr valign="top">
  <td>
    <p class="biglink">
      <a class="biglink" href="%s">%s</a><br/>
      <span class="linkdescr">%s</span>
    </p>
  </td>
</tr>
"""

t = ""
for heading, items in contents:
    s = ""
    for subheading in items:
        s += subheading_template % subheading
    t += heading_template % (heading, s)
print(t)
././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2151506 yt-4.4.0/doc/source/0000755000175100001770000000000014714401715013631 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2191505 yt-4.4.0/doc/source/_static/0000755000175100001770000000000014714401715015257 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/_static/apiKey01.jpg0000644000175100001770000020345414714401662017355 0ustar00runnerdocker
[... binary JPEG data for yt-4.4.0/doc/source/_static/apiKey01.jpg omitted: not recoverable as text ...]
././@PaxHeader (tar header for yt-4.4.0/doc/source/_static/apiKey02.jpg)
[... binary JPEG data for yt-4.4.0/doc/source/_static/apiKey02.jpg omitted: not recoverable as text ...]
GpA26=`ێ (1_C2TIPnߐ=H17$ wAÜ[JId_u7d;]}@PC2cm:Gv w7+W ^dj'z:QRӫfvKY|Zw^ p4Sdrd uV=A(;'<G'K(tS}.~ɝG67@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@A NvAa' ր }e@Aӿzb)9;9k_BpD;=Ce+zPJ); &$(<9ѹ5ILayc;Biv>߇OOyV>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69V>s3~69Vd~xu fۈrScgX2e+zPJ); &$A 8# 1Id.; Ә8^Յ \ig,VDJZ&3#^j >)XtEm=f Sa1owhk>=H_*_lX{rwK Bq;fxE2D$bk϶+^|Z,yטO9YW49㣭[xu8?Y{WSDgό{e}m~4cG~si}[;2޴Dxo j||dt.ɇ@V8my̪ĹAfb9K4 oC[oiO;1qT &&=۵}+Z3>KRf3vq[*)WΏDϲN={ɩ^ 3 mf3 D0VF^GBd#CIM35;dL* G\qU{Wd߮ZFs3M,~/!U>@E6B ֮Ӓh ˹&)+Lλ>}kʜW˛ẬM9 R#y(tNEig2 `Y/0n_i*ZQ6ᴹ V3=[:q*g1wӻRӬF{D$౭8 ^l6jAcnTSRz?DVԜ;>s'I &V-:@xǩʼV[/ +VqhVxz f[&_Cƌ=\cUDݨ;_Zֶ˞S3/'cM&lge譧5mw&Sx1*PrKfϊ.X#%T26f~S893jf[3U2mx*dӛEM;m5+i~6uێh N$=##_QͲ/!.*-uriZc͍&".{؟62 2=Δ: z7ԼN0mkN'YW dĉmlb;Cw}JӭLOwP~*6Ok b.qݽxb-1hGóggxdA۽./UR53ϊ*sW]  'ȗ}kp1_C2TIPnߐ=H17$ w8;y {ΐ0shmzG0+;YqB ޷%bԊlM"kqdKX2L ᯕ_DG/FPf uxׇcw7LgXHgX]nڝ7uILg\Kmwt־:e[BP01W7`;#JkmWRn;q:gzlo8{J`0avΫzVaZY$V'%Wt2tbb1=Ihh?TÍaX| tmYS4Ů$vbz |YLE8'y',;:7x)#ÚXA|OK^>엖| u+ iM~ig4iGlAc b_"k<|4|$Ō[[#Ǫ㧛*S1K鱹 d'k#K^{!t[O6Z39vJ{Zchh7wx{ұ1U ]4ݬY!?to*bmn'pG3]3a!ItN:yTr[+e::y׈oE4ݑǪ+IC>2'حe r =Ɨ )bs)ds2UkZ\)'^;JEk5܊b&4EQY0iއ]اOJ)+HajɇOw}'Zjҵ{E")1/6h eK-S%-5Hi-$yTӊk>V0kL[uLUz/3͖<H!EŖc1tD!v6S#d'ZtmiTDŦݬؙ rZ2O9޺9YVg[WN|~BR+d#] =âRؘgV !;!au3jtހozܫ\#!bJ Ԑ=kiĴambVkpH$<: <~˗g bm rv#@^YN39L4N%}(̐CĄt#{NXш>h)xL>bY6{dG*BmLxpc_ :̟FKF;Js9)R8e49wKٖb^G?['_궈a#R=7n&жS&䔷4͸piKjWZ/8XBi5D:lLx4dlВ&Ӊv1:5])N'1LgLy](Pee!wïrbf9"E02*;Y!qx@k&Ӷ{>"أڗ"{z)㘘Lg)Vܗϊ.ߣ@'̊c=8cqWdm6WHahᾪkqZrE;g)nqU,6/pY>CgpAܽQ])hdE&<)l$xx.g;7} w*mqD3Lr̿?7䁰k Z펞*t k^ÚX@@Afc٦_ǻRqė1K M3wUmv-H%U qU<<S|--cQ:S6(wJG{l/=gD`65_q-]x9 nb*ջ:xV4>hb15-ЫEqs#lg>|36̕'2WoD_("_y Y PE'qA~@ ߰OQ|_)[dʖZy8Խ];GwvRc +?rrw[1~eg]N*f7<̬˽]]w8 2.'w3AwV~cs.ܻܜUnyߙY{ +?rrqWs1~eg]N*f7)8 2.'w3AwV~cs.ܻܜUnyߙY{ +?rrqWs1~eg]N*f7<̬˽]캕9I<̬˽]]w8 2.'w3AwV~cs.ܻܜUnyߙY{ +?rrqWs1~eg]N*f7<̬˽]yߙY{ 2.'w3AwV~cs.ܻܜUnyߙY{ +?rrqWs1~eg]N*f7<̬˽]]w8 2.'w3AwV~cs.ܻܜUnyߙY{ +?rrqWs1~eg]N*f7<̬˽]]w8 2.'w3AwV~cs.ܻܜUnyߙY{ +?rrqWs1~eg]N*f7<̬˽]]w8 2.'w3AwV~cs.ܻܜQ 2.'nf7<̬˽]yߙY{ 2.'w3AwV~cs.ܻܜUnyߙY{ +?rrqWs1~eg]N*f7<̬˽]]w8 2.'w3AwV~cs.ܻܜUnyߙY{ +?rrqWs1~eg]N*f7<̬˽]]w8 2.'w3AwV~cs.ܻܜUnyߙY{ +?rrqWs1~eg]N*f7<̬˽]]w8 2.'w3AwV~cs.ܻܜUnyߙY{ +?rrqWs1~eg]N*f7<̬˽]tO;?-yТm\yx>8D!?A*(7o j; l6|ځ> 9lϵg}>s@jϜP6|ځ> 9lϵg}>s@jϜP6|ځ> 9lϵg}>s@jϜP6|ځ> 9lϵg}>s@jϜP6|ځ> 9lϵg}>s@jϜP6|ځ> 9lϵg}>s@jϜP6|ځ> 9lϵg}>s@jϜP6|ځ> 9lϵg}>s@jϜP6|ځ> 9lϵxp(vL\@6I+} նkFr?o]wqFƧzm>1ޝw:l|#dcS:*uF|Ƨzm^q2PջR(;l |' f՟&^/|&w頝Z|WV/[CGlzC#^oZ ;:*o'"NO~:*o'"NO~:*o'"NO~:*o'"NO~:*o)礚"&4HyCЭ=k+zWEk63|ANl]84@k%z:E3gW=s}OO3E>??ONei|SQm)Ou~ӨZ_l{R:7m81N:Na|ڷyg_9 Y PE'qA~@ ߰OQezWdj2ID?|{PX@@@@@A0ԭ%m{SZͧ+psC#aBr"DAJkBnNiuIkQ(,<6adJbs1ps\<ݽWU֮J"<@$w}$ 6gߒeQGǣB `ulљhsf e(B@p%%@@@@@Af~xXGrw 2R#VK-4yoh-moV-x1d`(,  o13!^WK3Z8#ֱns}!+hdls5;kRk͕eet/ 222~a2O>3/Mݔ)N%@[xT_ -owoĝտߐ_OgxDA :2$A5?3Z|vKOn5}W/^؀D '.1_C2TIPnߐ=H17$ wA[!0V6V<}w uX*75ZMr1op *3e1X\UHb[4ԏx(;|A8l ܋c>Ea= lk#_1_+G%d GٿwF뽽O^6n }qNOo@6׼oD5YJӚ]ܺ +I~G3CzԺ&j;v Adˎ,ߔ1O2 
H8J1ZJعoab0d4~FȽ*=pac5akgGsyeG\l\{ݞ^cӏ4jOXkˊ,0;[ןyߊhRߎr{V>|[o]Q,ݾRKW_wV'~A|_?'E1BN+M*q+輺2k6)gBDoy?=G\H@9Y;v[u/7ҳ<515iV9GݞZ~o_ɫCc_2>iӺ.o Ŋ1m(/sx ۳݊>Cڱqv<*Z&FW5- ]PzyO6Ofg%^SF'4;;;H2pt3mCb>;O_sO6] 2Af098ӿꂴx+\ d\7n%Ǧp JJN ) ^v=4{APsYƉrrlyGI}Th9ۧ 9nFw jg~bddmho/6A 6>2Żŝ.̀4hhlIA@@@An).eݴ <.hw֐r1|5^fG\{w?:2>+8o}!56|e6a$M'h9K~1?:w<vZLt9Ï$`Z}nAɋWӑOlg23R[ASʼ]a:)#N]7bb+㣑6@ĒOA6ODn$rUqgg & 8л0y q`@@A NvAa'  eLSZ"XVIeKҷZo6?UKczW. wKczuZK~9GUKew vjgv.B_oRLM :NbK5ag39 rjSV7M5-9Ir[zhz}O6~I>~WA'ޔ?Ci^zS u%VQ,qa\tz:Vwxl2V܈Hui;MJ:ѷ'O# "ϴNrr(}wЌ'CC?} a?u?i# POH_q؇=-{ƋkͳദiZ=%w0d=g(%@@@@@@@@@@@@@@@@@@@@@@@@@@@@Aԃ~O? GpAwspta@C6]>J5-5gN^3k??o]&Wsk??oNKcv{w?C4:]x7'Ij\z]!F5:|-"5ko7f/J(:7m ]KO8r?8vrtvO9o:t;'뜏ݷܝw:]GN|_.#m']>/Gobfۭ2GߏEڳ%ӊL1x9g ZTXs;PO?P}k#?᫯iCK?o3ܹڛ7/|}=?|?̟3ܜMϳ7/|}=?|?̟3ܜMϳ7˹¼O~Oo=cH#릖:gZzZ|>b^0d=g(%@@@@@@@@@@@@@@@@@@@@@@@@@@@@Aԃ~O? GpA؞:3cwlb x|늾R6Ѥl?ꃠnV4u+Rv· fҙc^; N{eH1x;׹Ւ]Ih>d*cV;J懰A>X0ްǺ69!84w%c&f|SZY3G-q kmPaV(hyD&,-#l'[ \[-A]'e7 kz܂ ![U֭YH .$$qAؒ;=؅?6>#3d` /N/zJS牭s I%e#f|hp:>p{C7fsxhCFΐKͱ^9l荠h@Abda1VFP^ZOw9NsOGLH kzPIWtsǖ=acpkzA݉n6/)w.{Au6AA J;?Vp7(e+b7I#Κ֍f"3)B1?{\?'Se9+OLH^Hk:8GZEgg11%d-?*/Zi{KN/Y7w :e8$s|ZZ3ZΥ+8DAЎYc A5?7Z{|vKOm5}W/^؀C8=4+zPJ); &$(8Wr2HZL'G]A~:<=4ݒK\ m$ !6Yn\HV}vM4s1ܗ+(8siܤ`#}VsXqy3ԋMg³%sY}?qΌ릷|[\y؉v̆ԝB8<3_lM-Yϟi'Su06)k" $[w󩙊MkݿDMmڜor[̌%n~zI͘ !! -]l_c/91[ f+3:"&؊LKOr)Aiƕ~q-/W]_qU^ݽkH;ڙ|Sjgyst_L&M-<_88*27;kAoƼB|ʮw֐SPcBt.b)cs:è=SnO#_әok-Hȋ] JɃ~> UC/=OQGwr`]'75j~:tA~6r\e-WIyK 4AW.h槱Xj"F.uAZS=3`'=ku>GfjOe1־[')3aނH<*s)Y?^~gx'R{k0{s=ߙg!dfi`ti c̓w/g0ybՂ;IG18k?)Y$9 تŻ b ]z}=}Yj>I*qMxi<zV[_S3_'u9?[-4~o'uowS~?^;MN6r\ FmJ|Vf}H3\mkxk3cNܜݟg$<^wf&|ݾgs/5ښuot}>q| +\l,+)^P_`ݱ]:Z"|ާѵb^),SdI _%ay:/\{= t9UU[;i*O-=?OJ]Ϙb쇬e"ݿ zboI'A2 #B9v"tO-::pӢkZW k\ӽ^2"[-snq>$AnO4`8iZᜫhaKgVX,<:h_ll3>:Q/<| ^b'^iV~<bQX<d}676'F&|p/ kj:6t?GAUzm:dknݱןDnC^>?UFQb×iof&DO7?ROJ7n;P& kvZv#}P@Ƴk-X$&M@(3Ѥx}\^ \'سfok==:`tPTa[5Hpp}:=>?|$nNͳZ8k@ <6QsݰQϾo}ywe?+8kM:m ZRZѶ(a`dlos@8x6+ڻKsܱ9v׮ ֞{0ܹNx+ݥyL((e)R݌-Vtv|:Ou1-fcKח샽wⰰb8ū6KLxs4@|y\iN֍6ӡ}(,-Q^[5Gs]gZ=i=|DR4dy$>d2ruq\^Pn>Uw F)Yoӛ! whB ~7-۽kd1{!6{ΐPV_{Flq;FH߂qp7.rv港^Pn>RpY)f2YVr D]@(:7hWcf>jFc{GOFyrpzݒib`yAio._O~Ec}i3]V혞w}(=}-mtmcJrvݳރ[F*U#kwY'ēh,c̓w/g0yے4a:--x\4Ab/XbV] qS5ޱRtgٲ7e?/i)R֛Neӥ]*E+  '.1_C2TIPnߐ=H17$ w@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@xCiГ2W;ѯwx)aDpt{yLgC ZӍ;|Ԍ>č%߯X =K8kz" zէV3Lo {ivx5] 侃yBȘ^1r={,nhd2[h~7)+zPJ); &$(#D觉nNa1Qa?}OTSh?Oprs>)|0G0j|S9T`a?)}rtSOShOW!<Gi׭V׵LyBd,=`Xbј^&k9i| .GN~?Sm/a:m>?W⟩6CO[KpNG~8~'MGSoJ;#vjhsZ~RLlb&K"c YgtyuٿWῺj}}ˮWῺj}}ˮWῺj}}ˮWῺj}}ˮWbZ[VP5dF"<_R3?5*1_C2TIPnߐ=H17$ w@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@Ax+oBTt.#çzu-J~yCi0󹺟)w+Oმwzr9)w+Oმwzr9uG4͎FK!x!^=Ǫ[Fa9{"ygh4}}:+8mipΧ/O^. }*7W⟬|1<ל_~r?: su~)˧Cʬ)'. }1-3kH<7ծcg3[J3&x'njՙC;M>״"!OfhSR8xC뗊۽FH?Yn~!#K?7?zq[sGy8#e9͋~X"d:)#h#PuE$к`s{s>sKy6sB|h3TĖ= Z9Ĝ] kA{K汣AAA$dl{^?vc|ktyN-{_G5:: w8l}HF a 1Q$/dBG,81وR]sa9q9k uH0#yyvt[ڵ R"bXO\A9vxt=ǪIsbG.)4`J8Zt,n%-փ< kc\\6}H7@@@@@@@@@@@A-Кa kY5(qd*7!QgsC*jL$_=Dyx7ަ5+kpD+0Qo]Q,ݾRKW_wV'~A|_?'EpfH؀v]tZׯC'NI"|2ik -IţڶFaɩV9GݞZ~o_ɫ8>P ]J W,lz]Gu`Tu⁏p-=;҃f8j(-wVDS-0RU\ FY QWVAcQ]JNsRJy1qsUK$ݜ7H dcxŭz'ִ&?ȡR3-vm64һC˾@ҙj־X+[Vo<9Y2O&a_+l<M,E[3Yy3? Bϑ/zy OǹAW Uvoֺ ? -#d絼 / q3psNsA R;F'ig D<@* ㄼs5CwA|m, Pea뷐\v;)#h3X|=9K*PҙHy7v<8~L+1o" #7Taq^2'tt\D#}z1CÙzPdIfW-Y^H]}A#ԃڠ|t+[x5X柠Ac_H YL[9kIsԂXW&>S#S͈V$L֋vy\Gp.}Xߠ4 c~sH׆ D4F gjBZ6tcOxx6!/K ": X"HcJ՛Sxn88{myK3:uyXM$v/#+381 2S#HRM-le;z{s;pMڑGhpNya6NAoŷVlamszwq1[assힽPf77Kq[g`#^Gv? 
j;!aoD;*q>mZhaFh6[A h{:FB9A<x+%"H\$a2Uk8rɲƑͰ~0vzւZ~)ؘ2Vg&Yw,Q}Eg%v!2 ,/[ӺWXXP+ @K͆ oT> as2Om'{vv J;?Vp7alaYj">pK~RÕqyߊ@1Yk澌R~ɧ䦽ӣB20vH#m.GVs^kl⮇:#xYk{|$Nj~O۸z)579=\NS]ز^OFMcC{9rs*4LXy{޼hy?ݞg%ykiW-?H_/'_d~j{`B_OUs#JltV"b&򚗚ZҴnjKR," 4mN>+✾E#FI(: '.1_C2TIPnߐ=H17$ wAGdf)2 `,!@w}At{=zclVcxxbz c{ TK܂:u*D"10@ UZaY{3.}h6~6a\V@}GH*.FߕE4\;vUL՚t zQ$'#}o肵nӆ`G^n&8?ZtA<تfiI's?FA;X5mW7w/dЂLlBVؒIkOC#h9Y ؊1vRYd1ݘnw7ͽ G~aj$]s8[kI7G^ j_CpޚOyzU%~d ~+8|oᇄ^I^glHgs{­Jbf\9;ٿ?aɦݧ6e9Ϛy$;sJ"+s33F xe^7; ?wxn/:fg.n]կ\S9”IbS,.y%r^=V1 VMO:ֺ>8}ݾR}rM_A}GƮW &R!%],]պVf',cCN{O %;GI#w33iZV1,pg? hM?8})w`@@A NvAa' Cb d86Z>+yq&=߯Wy)1'S0bOo`ĞaLtvM';<5 J^9̵ѥR;>Q=#ZKX͡ݰCTֶ}GE'?aaaa>k4YӸ\G1ִWNsg; $-G1 {+R3pr=c`l Pz>y>"'޻ށP{<y>"'޻ށP{<0ܻ:*n-_詹[c! mSs6C~ۓYml*'EMβUnNe>0ܝ7:l|#d>aW:*nu?|[m>|W/J'0ۖ4_',h4_?/@rP9gE0Kxg( b쇬e"ݿ zboI'A2:Ofкk(ϑPAG-ɇZ ]/E@ fjWyC$-#D e{R G?$ah2~?Z.>o/U H (?$x~KǓu 6(!R z W̠RwR M ?H5PGS__e+XmֶZul@'0h$xZ GdXCf*퉻<@>t܎c/㒹r)gc1u3 lO佳mp:-p 4AZ sǖ쎊@}XyIcG w6~$Z:x)Zb :}(+bqo~C|FFQ|b0$]@:gzGŹ;(c i摭 EXfqOBw`)ЀGؽ;hת>8$O{k1b/@Zۺx Qef~m{ 4A>3mo`h/7l_؃k~.^_a܃?Z.>o/U;\;nD^9enh$mxט#?ᾌLoZK.&Tcy@h:o(+3ޙMhwpv̳k>D ˽osw?Bt#͊1Ԛt"cwzuNtL#c1Üt4Xlv f.FUxk۶k?ԍ6QVs15c[`A:Gz|y(  ӣi:p;cViuv'NZf׳05=Rku z(,ϗ sXdaN'×ǫ΃4-_-~)D]2N;ZS:9'``l9ӈ 5kF`;H6(1_C2TIPnߐ=H17$ wAN6ku+˻)kS҂ l!+mbcX#yq.2xFd̠>)>4r3E;Wn'-r3bBz"7Ә^)|m}ra*ݘp-۾1ӿT䏕nMΒz΄9OwA汦Cl}p:` /NuH-7|n+û<@6ܟo"Qw領WŽձyxaǺWHekd`h&MG+"W;pkt/m;5ش{8WLbnlsum;".|c/>:,P-gƦy\l2䃱kz9B+ q}Yl>*$;_@=z  I(, hW}yƈU(GM>9L/= wCDÑoߔ66\<ݧ|M{ f R[ dsSpo`ӌ< o 莈<#r_~N/#^b;KZ=7CozTx'enݾ/ |<.VF81iۆg.6w/|ӏ "mbI֜{9ϩIf&|?gE/LzD8Yy,L#V͎ݙ@} k^:};wE1o]kTT`o'itV-sk?w<+l]w%ŏS[p0~I G#_=>o~g܁[?l;Kriop@-}Ϲ~ vw>g܁[?l;Kriop@-}Ϲ~ vw>g܃W6ylfCI$'@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@A#h4tyd=9>@쇧'(elvCӓzr}NOPS)ed7Dc֚zӜ՞u#s?M[uz4?MNWs`q7߹:]ΓKc ~w:M-&}4?MNWs`q7߹:]ΓKc ~w:M-&'Wil6h W+;3NOP;!d=9>@쇧'(elvCӓɻ;^A" W̠RwR M ?H5PaPb쇬e"ݿ zk7_%p  A!? PG zݿ$zAg#z5 wszLz$ޓ=ށ7arM3}zLz$ޓ=ށ7arM3}zLz$ޓ=ށ7arM3}zLz$ޓ=ށ7arM3}zLz$ޓ=ށ7arM3}zLz$ޓ=ށ7arM3}zLz$ޓ=ށ7arM3}zLz$ޓ=ށ7arM3}zLz$ޓ=ށ7arM3}zLz$ޓ=ށ7arM3}zLz$ޓ=ށ7arM3}zLz$ޓ=ށ7arM3}zLz$ޓ=ށ7arM3}zLz$ޓ=ށ7arM3}zLz$ޓ=ށ7arM3}zLz$ޓ=ށ7arM3}zLz$ޓ=ށ7arM3}zLz$ޓ=ށ7arM3}zLz$ޓ=ށ7arM3}zLz$ޓ=ށ7arM3}zLz$ޓ=ށ7arM3}zLz$ޓ=ށ7arM3}zLz$ޓ=ށ7arM3}zLz$ޓ=ށ1^>hcCGpA PlߒeA3ځHP9j3#@w}ޑs;>gzGځHP9j3#@w}ޑs;>gzGځHP9j3#@w}ޑs;>gzGځHP9j3#@w}ޑs;>gzGځHP9j3#@w}ޑs;>gzGځHP9j3#@w}ޑs;>gzGځHP9j3#@w}ޑs;>gzGځHP9j3#@w}ޑs;>gzGځHP9j3#@w}ޑs;>gzGځHP9j3#@w}ޑs;>gzGځHP9j3#@w}ޑs;>gzGځHP9j3#@w}ޑs;>gzGځHP9j3#A././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/_static/apiKey03.jpg0000644000175100001770000014152314714401662017355 0ustar00runnerdockerJFIFHHC    #%$""!&+7/&)4)!"0A149;>>>%.DIC;C  ;("(;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;W !1AQRT"2Sa4Uqr#35st6B$b%DE&'7Ccd0 Q2!13ARqab"B ? N. 
(c|;sUʢD !zѲ5Z;@l1DV8ng*o_2FSOa(Oj.?EڧTS5j{BE3TM'i{|B3O`T~) =>!Rv*N?Ig;LS'i{|B3O`T~) =>!Rv*N?Ig;LS'i{|B3O`T~) =>!Rv*N?Ig!F֪% [Nj/)j_z{]~)O=>#QvS $WFZRSS٫w.Ϥ<;VDYD"ZfzYVӻӹReGlW"x~PѺ> |2;Xd['MD-+t\UJE ^{ v QAGV_b4 v@ϝ7"q yMn,EcDWxH_Ѩdx%nY#{F)|yGtڼJ!Vas$5nTve!*O*WFE]ȫ@c&d txG|KFMŗܕu9s2fӗ}eoFF*5Ҫf Z쨶P&Jy)ȫs_*&d0_$FUys9TT^ehhsDEZ_tE#E1erEb'WOT,tJKWfur]@V+^ZjZTz"Eȿ } BՉ.YI$̞nrPn,$TGu.طz.c$wG++a%cv#s-Hʉ-a"Η- c72YS]p!%\ MSJv3DT@48fxdUΒMTV}]ѪYwWnvqp1v7'^Wѻ+2s]mp+ZJ{gr*sQ7Ě)Rhn˛2jmk}ֹ(|ê0RΈG7syЄUhhWƪ9 -Rat4SM\\Nw*9(ZL29(Y>#YX䩼DE𘫹v ' P\YʈD]`Z8裉Ί*-ew&ئ-KȑDJʍc˷*I@`U)f[lr} /Ej8|qRY33ɚ*Gpʄg#H?%/xpMH㨥Y$W=QuE&%YDj^QUW {]N"bHܱxVne^en] %$~;$T؍I ip=fg\K@0C㍅:կgiK(0|sT[k4[6;6Ǝr3]6 hH-mg$6KM$q%蔑EfGHƮiv+q(p+jk?4!.Ǻ|hǵߴ^d% 9=&+fF=DTEUTTN}{ *c=4~Mw3G]{>*c=4~ƜKK~MmDE<'5֢@= nH#/聗kE]]tĝSsZY>?y&8g֓v3?kIN5ݧ wnӆz;i7i=s洛ឹZMp\py&8g֓v3?kIN5ݧ wnӆz;i7i=s洛ឹZMp\py&8g֓v3?kIN5ݧ wnӆz;i7i=s洛ឹZMp\py&8g֓v3?kIN5ݧ wnӆz;i7i=s洛ឹZMp\py&8g֓v3?kIN5ݧ wnӆz;i7i=s洛ឹZMp\py&8g֓v3?kIN5ݧ wnӆz;i7i=s洛ឹZMp\py&8g֓v3?kIN5ݧ wnӆzm4{$ {]r3G]s#kUv͛BZIqꡕ-I^xmS o2,ˏ骪#Ø#{&zʫ%ԶΞp թcFXՈw*G$TJKQREDsZDV5)b_Lܩ"X佬UBFUsS,{xV$+v^B~ ʪx)`SV5dWUU~)L1[P5O!UUUUUU] hQKb Όϫ{~kbP*,㾥 j^6Ew B"&Dy΀Ewd7nl{H@@$@ ;lШpj "{/ 1v>3Xԃ>1[>Ж| Ff 'N 37@foY8޲qd3zf 'N 37@foY8N b62q$}K^ڬsASK%#vֱuUMPMb쎯m5};^MKZ 5zSSG'.cs7K0Hac%+'UnEluo#NƢyMOŌWIO**la_[3ɣTBV4"+U2o FM*U%]`[CIiaI-iU3}>}v/۾/I. ECNxkJzuu#=Z,dvȢc|g*[_b'8i)TS29T1oeEjX OҸV6#eå6GQ$Mtb\uD@U"TXEP(A'ȈK"b@$[ ZsQة~u41oE\!o'Ezrx<^!o'Ezrx<^!o'Ezrx<^!o'E*x?;!PhVb2HRdW1[p&UqaRĮzFy,.;4 mջ#a3[C)*㩑ȳMsr 3)jJSǟXUVꊾQU7V0gi&3e|Hǿ%ѨD@GuĦJf%3i.7TSUxf֢:ɒJފnŵz@tF]IY^舊ڿH/:-ҽQ`,K4&TW5%K%%+EP`;3OZFDnE~]6I4+*EīfrNl;.5[_N#(~ec/'RӺSUxzf_ ~ f!$,dKJ#om_WE=rƒk⪉Ek~@0 |Sҫ9bU<UebI{kUr}>`;B}7Kuӵs~:bse]纁Ҫl6]%ñG]. QGuE;ԉ*nUEkhϣI=C!F*,FKm}hy ЬE:95Z Yr?\BpZ@UT sR/w Z"*.P nVD:&=;QڗK*TT]5j"5pEsQU7*nW $DMB ʗ6^rt*\ F5ȖNdN`MDMBnMd@$ހ%FDdLlP+լ2:N8\[\41Yc,:ʖ6MlڪT6. Q/_@$@q9 m]h1Q-.avW mEcfgʩe%YW $S[^=STEEInK)vuNkl@68!I*$dId̮r"_QѤH">+&!f˛"ZnK4PFM##bos܈,9I"{^~)TVSku-n7bQږŽr***p> rۀ2 %Wi:_%"΂yY)dig5bUVȻ@aՍdز#SXjGJ~{Tr\ 9j9v]-ΜifWcd D&=.*$lzjv$pIL]Rkks]'P.b9 U3+ΞdDU2"lD`@q9 m]hCMVɑ&˱2S tuZMU;QU.mꩱQQQQ@Gvo  A'W@ᬞDpFiљL[ViM+h'B^ej*Y.SJzXh3l{QU_}Ӥ e]tT4QMKWQͻӥmd@Īj}<c*鹫-c|Unr]f}OEt0Wց8zHDٖ {j#**=#DT]8b8tм sǤZ?dfj5ɑ6[5[H0:&b]T¤|mie׭")SD5 #t"l`-e8m*Ʈ^:<5+ {|/5] .j)[+WHZWsnYԋxrWj&ft58j*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{Sj*{SjjSuSַ<4 zU/ ?@)ud *x?;0N )>Iodp$Mlff[frDCF}. r?\BpZ@UT sR/w a/(R|Yjۅ>ȕ,b;U6}s/QXi–w9[u£,:Pߑ;[[u {+jzG &9əo}k&8*NH1\FÇH2bEIR4Evg.ԾKZ7ո~'QbRC U ]S^ͳo6͠ya**j5vĈ Vvߤbx 2$Zf"ZEUtn.+&ʧĈ͙,M>R:ؤ$hE޻6-SpCF}. 
r?\BpZ@UT sR/w a/(R|TOHe[ĈwЊ SOUG˟#|u3l{#j8[Yd5vѱI6+-ELkfJY쏶]mmv5Z;GU<:JGPcQ-w'Јk_ I٪ۙ$EٕPI:iRX!tdKb'FMnz3wX rۀ2.Vi*tpLsv=:xY\cw-\?;Ql,{6^VP7]Ͼ_H7܁x܉Q@{$̗F*_rufgY@c:Xλx3 5뷈 c:Xλx3 5뷈 c:Xλx3 5뷈 c:Xλx3 5뷈 c:Xλx3 5뷈 c:Xλx3 5뷈 c:Xλx3 5뷈 c:Xλx3 5뷈 c:Xλx3 5뷈 c:Xλx3 5뷈 c:Xλx3 5뷈 c:Xλx3 5뷈 c:Xλx3 5뷈 c:Xλx3 5뷈 c:Xλx3 5뷈 c:Xλx3 5뷈 c:Xλx3 5뷈 c:Xλx3 5뷈 c:Xλx3 5뷈 c:Xλx3 5뷈 c:Xλx3 5뷈 c:Xλx3 B%DO8THH˥X rۀ2!PZe.T[*gHc}E$k.LDȋ{`URGW*ZEGdX=3܈ͼ 3^7 GHlm][ #4}mud*s2lD@-VCQ@I,[o [5sVUSMI VVwYx뢎X.W/2-_ McG2]e[[㖺u3Z9w-ӿH[=+藒DD:yaVMUK|P6վUu*]%g[ޮ+ @4O s~fJ.v*uԸ-=- 4V-WgHTCGIS+adh9QʪP<ʈ^X]5dh&""~p5f)EIoEb0rh5LR^рr 7q9 g62q^޲q,'N 3@goY8޲qd;zv 'N 3@T]ʋ(Gc6ιoevJVQҲ*[λ`~'"rt@HH@ -/\:ɱ8u 8t'NМY: 'BpdN, 8t'!Ȉ,*x?;1r*Q+ҪM0K>]w'gk쵶{UQR+8ɩO-us;= J1<"h⪯/Tڊշ7QU~3[AKC@97wP5pGaa3j!|h=M<{bzsKcXK0v5v#|Ȉ{~H4{XV%&P5xޚb`V*Mdןe@趕 SZV 몭wlHzLʳ@6DDD{YKut}lYvDO߀ixUbĠlXVn9fRT]UTZ>FE6͋ݭK- pb?,Ϊl@׭;Nط@/ k9 wGg{JL'n5uu2@Bf#]ɷoZM&)a|2,R*YrB&HZNr@''T۵S~mRm #,'s::24"V]G[ }WC;#ֺ*]ളN<2_DFɳ2-Rop%f]OQN˥;QʋUʫ W^xzhEW Z|.]P VScԸBE$T +-8nܜT4ݘ\MqI;^۳p4GnYQ#iR{Ex} O*1ۑY&rx]ȉp"M*~'"r;ëfI+!N*ѭ*)ZR5Nr>۝ЗP5!w?M%RWl aG_ͳoถ ./i0ڸJeѪZNg N |'ќjzFTS[6YpwM+I+iek#[|/sl: ѹiejr2VlOM`Z{V`0b5I$nDMUܗEZ_ܦ:Jziܖ{}QXFUCI+⼨j.[uvt T)$ZA6JzI5,1jQYYbXdH{9v&@:"51l&7BS"+}@&Ee[' k 9Q6_ ?k_ϬZ릣&lٳt&[t:)}%>Z9R@ ue$6|ED6ڷڞ`&wuZA#(\N^hCt힮GAl}(iRh:)cc^ǥնP>|3H0f<j%|U2ڷ3glD_(&DR zMβnP=SEj߀iIJFvd|Gm`+6v!Vů vB݊H"/N a،]G6=ܮ T[Do둪Pb.ETUP(Im)èġ̒,[9`4ˆb i;QCjSIS 4ؾh"x)UEب^soԇWWUWKlEl~-k( ќN-6u<.!P|\e Qc7Ok*&5Q̶0B-%(luqS[_|^ii-s|'2[O:Ʈ]m z9WSi7RMe%-Vjv=dzU_ e}y9NRIL|-Fb[iEPOQ r?\>ru &C$-U^DVO &nǵյ uT-e"1j[{QQoڀ{nzʺlTjUlXfTBXF.W@="VP7]Ͼ_H7܀ $}UM+\rL~<ذA-VꪫuU^uP.SM e<Xjr]ZXhb#&XM-ŌaU8t{#X3z"谜2ttѤms'Hk0*z^F9j_e n%WF\4+=Q6nkai⒏ZEm&pJ֦z+z+ D2WafZKV5ֹ*ۦ k1Z+k MT+H\,elD[.mg+k-{喅Lۢ_rߤ ճEKY_hڧ1/eDTnUo8xu|ˋ>j*a}2ʎV6%Q@`=HP٭j|/ jo#kZ:1P=Wx*x?;!@v^g5{<g{@rpOhVn ׳=9Zy'+^7k{<g{@rpOhVn ׳=9Zy'+^7k{<g{@rpOhVn ׳=9Zy'+^7k{<g{@rpOhVn ׳=9Zy'+^7k{<g{@rpOhVn ׳=9Zy'+^7k{<g{@rpOhVn ׳=9Zy'+^7k{<g{@rpOhVn ׳=9Zy'+^7k{<g{@rpOhVn ׳=9Zy'+^7k{<g{@rpOhVn ׳=9Zy'+^7k{<g{@rpOhVn ׳=9Zy'+^7k{<g{@rpOhVn ׳=9Zy'+^7k{<g{@r秙>alGnC]3*eUT\uuukS$MVĚb6ɽTzI)*([9j&ݪz P516Vd˳}p=TS{]-p5`Oo%j&s].U[3z u ԫPX6DWZْu#OZxWTUrZeDuZjdj~DӸ n!k]Q:±/ѭDUMc 0NPJ[#]._bo NNt.l ԳU7X2xZ@w>~'"r~B%$;?؟`````@`ddd`````````````````````````````````````N<(~Hϥ5t0W܀]n2 dU *}OE %Ӿ OC52:*idnʟb$$ZfwEݗ ZZɑ&msd~›lTK5V|TD_0+`Y[,ZkEܿ@iĽ**YfL{i"G^ֿlU2.|&΄ yˊEG4HEk{5yA<5,߻Yfm{"&, *N\|+"*F);gDm؄ \\DDM%jUf3G52^ZCF}. r\CUSTʪH MsfKn[uq:j炂dl݋{\ Ī䭚5g6DG*_*l[T Y%-$sI hI"_sSf"si5mn]eߙ꩒*Ȍ|cr W,wN[ QXDv[z-+ڈU%*fsvzQS`H=gr"fRꊜSeT 1bL}3h[OSVRG%_p,b׷Uʱ5 _SG~kd dU *}OE %Ӿ OCc6c>@5pMC&̶0fŅZ5o۵@>L>ZGDexjlڛ:JEBK"1<ȋ}[X 5)C:aPMc*ꙹcSK[ժ+.K*}YVK+]Lo54*)MƭLE˹2}TGZ9hVeYu_}φ/,s:#-QRwzWʭDd3m%t3-E=L,Ir,wmI(jfrf=#̞ ZMdP/b*#Wu:XO z:[YvslƎKccW>l*o>ͫRHsE _bs*}BzebZH--P,wDG;GcF;+[8J\>#-.>7QE_խ=jZIeE_rQ6M#7Qo6Y1Ik]ǹԶzeZ)~hUe%\X5=fKՙov&âhi tMVgk3Mu%]|knlip]%ejodrv}`|KtTVI,29$V&j&Ժ|Jm=+&*ʍHZTro]^饣)DI"E]2xZ@w>~'"rB'!` 6` 61r"eN 4&D`<(~Hߥߚ`[ O@]@Q/_@DT]!4FG;Q˪''%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxW%O~2TxWWR5#@61z@KDdz9SƖcdĨDʷNԁFmz@EY l+,Ոsjyƨ)gt3H\sN^eub<Dex~yaثYԕ[u؈g0:Jr9"9k.M^JP+.֧pSCT,ΑܶTz"+]dc'YlURɷ@tj&9j&cZjs7ff/EC.y][">-6z"Ȼ dU *}OE%*mIHLjD0idXܑTL69Wyoū竡fL=--U6 #2F- Ϙ m鰼f k*5 ^)-5O-`lTI$EU} +1=C<^ #}.rM1ic2'*jGfWKn670M ڍ[[@v1UUdtnc*]ZS l. wHY#"Yʞp-"VP7]Ͼ_H7܀廠}6. 
h u_c'z UT sR/w BwH+i_b22]\ƹ⯂ff#"K/Ӹ 4ke-2KfM&dդWs%`0uC]XfɵQ7_ 6#Q][_F|͞ȎeTf#\_2**q}{竢+dEעgCXj䏒4|=躵^e]Xl-|Ȼ8bHsndH6TC"+ލr-'_K3Y.d&#ZxUDU YDWnE]2hnh׶V':6=3QȪJ_"Bf9ި讐Wijӹt &&y@G4r4ok{]tMcn:@X% ʱk.k&L:v7gI-v{ucssdt\9bu8 äG'Ȏ"[HVg{WtG.,zm[>>_excr5[ YQ WV9r*'##r]- n@$㻧r-93c\R*oMK02xZ@w>~'"rjl7K&**uuO\\*'Z/lk*hccgQ*'YSvp%N#hCPh)%$ j@S@\uf*l{yd\K[r"lO EfAO1:fTnk*I]F6{ضM(س*.TW7;aQ*%ɑ(__ihtBq/ <V)$DQܶƚjh4LE|oV[}Mb6Apfnek^D*xz2}5p竰)\ڎW67v/j©4CFqF2ձXUm߱tUVӶ)],/bNFcU`E6WSS̯lԾ{`XObl,sYgTz>Ӹ :Pb>b-S:Uk(ԿNϰ bWҹ-PzUEU]o_8G#LCM(9))RHZ+9-e.`* _,X+95Y*QWkک~;;C1-|y0:VmT`},Xgr%ԍu4x!+r*.ܜtXtphj(12E{Q|Wn y2(v3RSI[,t*6$wm̷uH=GV#-EFN/aHh߈+- G+sYnAOwEǰ= !G*;W|;IISr| ڋW{TP:,JL.fHbʈUUٱvN'Iet99<'%%r:y0)dW;:=~D>/ǏGtOeISc쩵vh*xkcô er_E+Zj5dD'@?u4mx$tpPSH3W&%.W:Ze/F>w6HU؈݈v ͑m7R9JY8+%3U1ۜ:'s âɕ^[t"Ɋ m~$mvh%!YSStUzKD[/L4zǴy6zi]ʑrkeEޖE +}>E7FjdY#G>WFUTg6 TεҢQnolhO@]@Q/_@@d@t0WԔMjT;WF)GII$rxrtv. JUW.F6#Q[nmv T:O,SFQH()nBǹzW**WI_]8lɚڶk3Y6=ij_u,p6Wkr3lDU5Du0Թ5$5-Ȋne dU *}OE @$@t - ?@ર1":dhV΀_m (*­FuEv#Ddsٹ nE\T]P;jSLV׺)R2ε:xˀSLj>*J9"Er+Ҟ$(o#cYuK^oMRdeU$MW2{fmUUUrjT,h BcG˽@(@j4x6C+1.i1Y$N5Eܶ_ tuӺ7LeR6+({W O dnu5UCvV<۾+E\4|T9c$Zދl KL[M1aL΍~J}{.3+0-L3Ke+/e֢oE:.i*'̪:gd>(ؖJ 6MtUF d,*%%躁aU8,1ɪ)"VDnM[n/f)FR+]P,j0L.*gQQS+U삚%S{Πy?Jp` g6$OUbzg7z**AU HjXXSֱ~bRaTxuC*s1Un@0!ĪlU411$XjXUdEޗz}/j⁜#E BgrŷM҅ƱjҶ-V2V]Lzr^}o'ǩb髠*E)aYȩ{7%@cDz!W_4Q/G1Y$N5Eܶ_WX0ˊSGO^I\W#E9\p'UHK5\9VDڪ:-$|$pTAӹ9r_fp5_SN#U>"IW+2ӵKoDmo:ҊB\?SWQWm@۳-N*Npgҽcڨlu-z/ 6)MU]YE&sRVҖpR:*E{rDUKۣf*H`u,dGE9Z'>&e;i*tI2K1w9{/`Wi S5T=S5SC Uj"* xDxU>&%-D~E_cz*eT^F*OQUU,oeP-[@.QSb*m`$(@j rյGSM*Ī6G<Ȼ uEfױ*0VU:k@5NWG0,B'6vm:FrS*˶VPpU##.Kj뫑`2 cs2Z"h`|TIds5WzdF%5t؎ %lz8ΈVډ:Ly[m,#= Fž o{72%ST[Ѫ8 s̏s{2~Sj*%(RJ*kfi*w=)*QșS}ꖰʚ- fcLI*"QK_P4Xi$TQk]@uaN)>}tڷ ˬʹskQm}:YOPj `m1.dT 6%G',.XDr" : h4OuV+|E 7NY7Q5+VʨxU ܪ)eI4$k/ݴ `ڼ::ɊӮsHn.:8 z(R"XyмcGւS6S=Z9z:meLutE~'"rZ z>h^E@2EH#F_z@2.,e\P|MX@T`#/ T y[e62[` |jSňUI4TWRFKfUM[^ul,F_8;ti.ŤmU7;n88Ė*#F"1Xܩ~kgcZHI,ܮvտ 8c'z UT sR/w H .H  .H\ 9na?Kd(@jI"|jȗO:\CgV]J;VZl,GI*Nv\_+ޗ#QlHo#V57'm;2UMږAcu]!S=U5hW~5FlTKW>-DYdPUb.vT\󁰨Z({u[o]Kx5edTU?\=m{5Um̊QauNVGI7"W"SҿH2FGWSGc=^ۢx(\uM*i3lͅ=+{.TKm^LlOV1UlZ[-HN(ΧX2xJd0|+cED{sS{DEOf+RQ+櫘S=zʈBCbUSF繺B$QoUks 4ZAbTM F>4Ȋh x!]U orѨLo֯J0:<4j& $ljTENn=d1]djmhiSG1yI햛V޻rƈWDK^yA'u4ν:^h|ik\cϪ%؜əRڛ7q* 9Beݕ+Vw%>ACPbOVU剱qڪu]ޣO'!lUk5jPܙ+cm bҌ*ji"ȱ#t} a8fT˪"j.2ۊQ!5fEIRhڻ;r/)Q^_:MT=X+[Xñ5;1uNE#Z|dkUQ-`=hZJZysE"G;rer.tpw@q8] O@]@Q/_@FsSb*  5dvLoBKy'dtr'B"W] ojs$+(NrC#\L܊*ȴK;48\r9U|j5ۦ%mW V]F羵mjm$J4;s]tط4ˏR5,ͲKq:+ ZiSSTyǴ zgZQ*o*=R.i u%Lsu^fkʊ[ߜ NGtv/*r#g1ʒ#FEDފwfLfDE2Ql}kB4A2+in]0 RGqn,^U'G1ʓFj"5Q6**X 摪;E5MG/(4 EujYQ`kv"[yT2*[.:Hƶ99r6QpjZ(i5Xy6E^TeVIv #;eжq5EuEZjlD|TZ%U|b6ybPXM"_r,rDDj3""/]ę1MRع葨S=軑jX ~ShbS1IC[3lUTdU@ض,_1L=VZ&a+N鑪[dZXw95f["np3Ʊ*l#1'|%$J-+R퉱-v_"f:Nۨ~P8hR,RT5UXՉZTf LXƓbxsZ(vd\F'BblZ1])cuPm?SL9IU.ustܪXi9YԷڠj0fh9/t]]zTv5MMؓ૫T_UW+j۰ ƞ#Cwf *:|'H4aȯħY᪵EګEU9y`5Cd*~'"r q @"VP7]Ͼ_H7܀SdͶȉds*lr% k1N*#{FmkoEl)e^bRx㖖쌑LR6:x $-1koL&G?{5UUn-vWM-^nM9tM[Qr_nۮ2b0;mRFM ]f%kAHob TZlDF̛n*-%H^x?&IW6*7kЛ|dM,Nط_ʷ)&M2&TftdoJYT$E jFߣ~&YQ<:_VڼðjdXXldK1Qmo#2dǽ[5vxlbѮTضHpdszmU`7jPhSk"%di:Sb3!1>B槰wYM7{j{ }N~;{j{wO|wpxoO`N|wpxoO`N|wpxoO`N|wpxoO`N껄31Zm#UcdEkLg*(o UP=Wx*x?;u%;ܮtLU_0@#Sr`*]-{yKBi$4Y$nD 6?bYo=0 ɸ! y?ǛHS{tcPO@]@Q/_@H݋~N,ŘldJB@D7O+ơ"VP7]Ͼ_H7܀F,GQۑ`XeGoEj%ZiisVn{䏳jY=(*5G&\XW7/ŘF9ZI^U*qX"t&΍Vƶu<*Y&©e&w^:zD扎j*WM3i_$m[Godbeu01I4Db[oР{Ɉ2(rR.l*O9 1ۖp]ؿѱ[|Ω|Y^MĤ$o4G orj(@j,c=}c}N>,c=}c}N>,c=}c}N>,c=}c}N>,c}c}F>,c=}c}FOc_XhѹG0LFIiqnLڻWv+]5SKꉇXfTO@]@Q/_@hr79PhkͩUT```k 9otz7/+oUze)|򞶦eΛcMqjҺ/Þwoe:I\Ժ:5W\5x3޷M1 V5VAIMM&YQ'(jan*3ͯs8SGE)75t[>SS;ok|Ølw`QlO1NA,{7UDmK5G[ZuS-c( dU *}OE". 
H5tf+'-s>q~'"r@[451Q41MT_ @SmTun$y[]Rڧ̍kbvm}:|YJ絓K+ZvY:v=9sQ#2jdդk.)1x*h9[bkZ~dTFm`e5V_vj c]3[5UW̉@GU槅[[/ Ndr;3[Z9nEMyQU$5uxMRʱDΉ+UNn{* q|k'{[/DS\= y17 D&<~ٱs1]' :]OPkޖO@]@Q/_@EuTWlת Q#U^s9!R,,-``@@D@;3N(ȶ;xyU7B.Þpa(4ك2͉-k dU *}OEHt&(棓DZ"b'HMM?`ExTycV뭦G!+~X%[Ʊ2DSD"fgT[]rvvLgښkmb!cg\kiOh\O|EؠH4?.A68<v(=R{\OkZƣZkSb"%\1P=Wx*x?;31#F%3=#VUG[OKJUҵUVkt} eK]+GUcc׷u8:)GS܍Eޠa-KQY-+$nxdl8q:*1T9 /r1*""]Uw ĩ*sj#z.Eܝ?@-m:E;2LLf5cj9ȫla#IQ2Do?ڋ$U)O%Dmګn&U&QQ˷n@\N*cloWnWv+>!UTh]EU@5WNf$"]"mJ(69ʨYkG5QQR@;jhJ7Բ30I%檷tDlIdU[S68\=W5.vflE1 .=]R6g8kENy6amQkyMOjֻWhhO]ɢr?wg&6j{Tќ(?)S{Fs4rjw)2U,_ns)2mXc'z UT sR/w m:} 55OI<|'? +Eh[:v Չ\Աڈ?o T2E\dj̜n PȮD]bc{QW'joV<&G#%V+n-ݶ;Y-o|=)dJ9 DOT [C F%楺2. ccG[KUMg$inEEU۲e mwR2]2¦<>ijY*ki\W-翘 4j؃\.P(S&Z\ѵ{UQX)8"_:3r%y"*;&މ}8sj$j]R5Sz&hYnb*|؇S|,pa?(׃O6'>x4񪯬چ߆ipDW*""@L"*ȫd .|hoKXc'z UT sR/w [1KQEUDK5mDDFźl=YQIG-{Y?,9^9%<"'?^/?/(.Ǒ{-sе蛊VaO^}?l؞XcƪjnsK~犉% ġu4i$*?Q7T$Z"uCxwGhꜞ]UL\e{IO9? ^;=?mk dU *}OE1fD[/+/8QԲD*or '/|f5UPs[?GS4Zyxx1P=Wx*x?;wt_G8"\AyRr,&7$ZܶE^}3jeUD4Xx l}v@5kc뷊o([]P>x l}v@5kc뷊o([]P>x l}v@5kc뷊o([]P>x l}v@5kc뷊o([]P>x l}v@5kc뷊o([]P>x l}v@5kc뷊o([]P>x l}v@5kc뷊o([]P>x l}v@5kc뷊o([]P>x l}v@5kc뷊o([]P>x l}v@5kc뷊o([]P>x l}v@5kc뷊o([]P>x l}v@5令7hMWr83U9CuE5g/ŠbT)p '/|f5UPs[?׉:]OPkޖO@]@Q]|7oU8ުpT#zFS N27doU8ުpT#zFS N27doU8ުpT#zFS N27doU8ުpT#zFS N27doU8ުpT#zFS N27doU8ުpT#zFS N27doU8ުpT#zFS N27doU8ުpT#zFS N27doU8ުpT#zFS N27doU8ުpT#zFS N27doU8ުpT#zFS N27doU8ުpT#zFS N27廠`QWg"}S9|sw+E鉮fʘꛊVaO^}?l؞XcƪjnsK~犉%Ji9/w[g?uFz%8-è*i⠨ O*oݷoh''n5S7#ř^K#rj-vybycAi-n柞*' UEEETT@/e]@:]OPkޖO@]@Q/_@sbϕlWY/k*~G@꼄~G@꼄~G@꼄~G@꼄~i؊S~WxXvr-RyTGM-DFEUr"5NP\Tڒ*C9sʪEL\23a-3OA#os[gt鑷-3:t:dmw?{26;GA{?#Lo~cF|s?w#os[gt鑷-3:t:dm6sڧgt67@lUkroE1\6- ð?cr'3FPUSp7V?3% ujxWD^9dV#'N4ճOϚ?4ձdzGO5lq+#M[{>hBX`*{>xt 4OIە?6[K[i}rMd61P=Wx*x?;xR~(@_# 3_ ?OB?]Tb'+r2=njBm2VI_RrtչQ@:@$ _#3I?OB7{ 1ܾLi@>*ʚ"T浺ԻZv..阊e E5DPVUc&& V7yq"WznMaIc%MCZT0]E4Zl.%Z+u^_rJ-ѮjF6#[CW23vrU7jWAO%<"t57uŦjfQNf*$9=j@Y6M&9bs·jV+ݕv^EIS۟ۚ( *h릖x`IeF٬::TLĺ339LIv&[ցc'z UT sR/w P=_YpM,ZY5й+Sjnpi$x8Uh,܍Fṍr]ʉP)`lY4z$Ī+SZLmFݩd6_'rZ t&jSū\ȺUm8'V3G!!tKbET^tH1HJ'*j**XuSQ1ݒ"5ʋ6sS. 
L vv%Ήט Lx ŒGw/.xRb8V'CRuRoi LUETHS&u+]~{N*LOwZxu5=[x$Z*Tn[*]m`&gu8̢٥+ñzEVȩTU._z ,W,~Lɩeb*VT3Nk%55uLN~TYzSa(׵(DT\n-FSV{vzfZ0l%M]|D5ek[ftW3jftD(꠆̡竫&z^GlDmӛuK.UV^*LSL_\-tIKO$K-D|Hz5eU:{ 曑ʚWaRQ+*ѮkUȊ~(*LNɦfi"f7sTu3T,k+Uzķbj")jjfUN>eo)W={ʪDDFPffs$t No0 '@ '@} SyXxzx>Iv&>?b[k]WE{l7CSq8FSS;w'TEDt$>$ dU *}OE"r1s]n,zhv4D^tݢ/Mrn9]7h@EWM"+zh.r׻[4k$ dU *}OE {~N-Ϊ+7Pii*;I=+-̋p$ dU @'Gje:{Z[l/H.";3{;W|]]<ywuw}3{;W|]]<ywuw}3{;W|]]<ywwϯE ]=g.@^m.@wϯ;:h>t|ýf@wί;:h>t|ýf@wί;:h>t|ýf@wί;:h>t|ýf@wί;:h>t|ýf@wί;:h>t|ýf@wί;:h>t|ýf@wί;:h>t|ýf@wί;:h>t|ýf@wί;:h>t|ýf@wί;:h>t|ýf@wί;:h>t|ýf@wί;:h>t|ýf@wί;:h>t|ýf@wί;:h>t|ýf@wί;:h>t|ýf@wί;:h>t|ýf@wί;:h>t|ýf@wί;:h>t|ýf@wί;:h>t|ýf@\7i5 (pcKTNbf;1ڣO[Pz'][NMOНunhgxuOGI DT"j|Sd>x5l7 [:VΣxճ5l7 [:VΣx(ַDO >>%.DIC;C  ;("(;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;8X  !1AQ"2SUaqRTr#345Bst$V6bC%7cu&DG5Q!123ARq"a4#BC ?*c4AAAM4n&b!<='Oj'{YC'{YC'{YC'{YC,;Sph ~eK`g .~2x$F{P>YC'K`g .~2x$F{P>YC'K`g .~2x$F{P>YC'K`g .~2x$F{P>YC'K`g .~2x$F{P>YC'K`g .~2x$F{PܚM 3چXo >C'{YC'{YC'YC/vTٮ2ՁďBM 5<51Eu )K-4ӖEv=A٭m6jvN:0:sAMW C\82}'bvbtعmR'#NEJ[l4mp= /wLES:Y^ZIcR<0W^u^1לa@֚RPYQcۄo(!2׼9n;(;?uEfoi;>SRCTQ*` [͒4DQ]C2Ԡ@@@AKotp0 a@0 a@i?$'AJGt# ʏ*;umm $.h˄Q`z@湏,{K\ӂ0ADw,lcX7ÖGZ!i Ԑ9 "B d"HY[Eн0B^5a UC1;NW^esO/1ۼў\ Ie9{4']s;= ;瞴K_cl,۬tUϥ }.5Òׄ1`'L'LCu*/rF#D.F^)ࠤ|$4F9r>(EM%!;>AD D mV<6'H@y eVo5[IYGQT,`E Ntegd.w[\͸n*N KaAKLЀe{7N2PW[ }Do״('uɍئyc8/z ]]վGɻ 9Ӓ*R΂JcLj Ap# B=05cԂ(k=K㊝1]PGٯ G㘑;^x88(7}T9ido 4z48( %^#l}jy`h9 OB TJO-WJ#c7]H9:7%N"w|d|L|5;iޜ !+!iO |u%g*t5,q8=l{>kڳ {M߸KtuGGe 퉊JC{B{0>(y=LPLH=+AǸL펹6t 'wch< :%*SjNja?P햆= Zx2S?~#>B!ewU6<أ~p>"^N#Cuu[SvUSF!dՀhA:{iM%YZޝ~*Q]C2ԡQLiczA(<=k_@D2T`iCts˜Y>qW\I| _Jk)LM~c8; dJNVSEԼ=9F1YT:k0ÛQ Cv]Ѣ cscbsoQr:TpqtyޑDcA%w$iz jTq]1q/xӖT.ze#2=8cނ)kE+`øόzƟ)u A: 5Ad#-&&H@9-:T>RSݬu|_n.qnOP#}E]Xƚ6OA: ]4: Jc*e5LMlp; 8`Zk|&tk(XΈOCm5nMMOfD?=' DF؍mCUW  @$E\/+ёM 59˜=YDr>$OsÖQ+w}?s]*dt=OvQӚzkTs998D*΍sZZArAfpꚠÖsO%_lA[=30LRփcyZnU}[f8?w9A +jULʝ3^CyyA<7TʙL'A^wTUI<\OPD{Gw bԠ@@@AKote0:0 ` a@@esg(s |gg*?/.aP$@D"D@X7 L)MeA pVpWD:V+fW]i.n!|*IrKԩ=CKԩ=CKԩ=CKԩ=C ҝ !_ wR Tr<-I0x[R`{'nJO!ꔟC)?܆ wR Tr<-I0x[R`{'nJO!ꔟC)?܆ wR Tr<-I0x[R`{'nJO!ꔟC ;zsKM!`0~hh 3R*O{R*O{V*O{\U7{nNU a@0 a@0 a@0 a@0 a@0 a@0 a@0 a@0 a@0 a@0 a@0 a@0 a@0 a@0 a@0 a@0 a@0>S{ڥSz{P<짡z{P<짡z{P<짡z{P<짡z{P<짡z{P<짡z{P<짡z{P<짡z{P<짡z{P<짡z{P<짡z{P<짡z{P<짡e} ݨ4;wSx?OBwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vx?OCAwj4vh0PD-ЂT@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@A'ymÜ@dЃHjaitG+A,pp0ʺye1G|178@xfR2Fc0PP-Yf::}mu(jI|ݺeM@~#w>b ;o K=oy?~xv:j8p; ˱ؽ;qnk<9[wFKR혒:8]I$3˗6ͳ}qttintA gM&K5~{`aShzGK5!|s~G35ƨ)Cop]l[P@q89gM9 m&ղ<PRpLgzUDY6[k_l_DAs=}^-ΪuR0u+ޚPRUTm]‹f-<R @!97+b~-؏ 0#&o X$Y$ר:uimocCK*Hxzۃi & bYc(.]J S6w73Aᮂ'KMn<iCYQT=DS^x1P[AA("whA* oz)AK6wek沖Ats@oH m}VWi%u$H"<˾鲵;$&FG $ߢ8`DlYvJil{p;ǣA!M;OgMcc}&J꺖{Ano'oyO{kq>-ZE,lܪhAчgiPUTmkkbw\s˚*?LP48Ӄ"8A@0-t'w)AȫaVJIkf9$7XA%ZRGyt=.;5C M#k44 Ի1bp )#s5siiZ"*khoU_g.-de$c<Ι($ص@Z?7 4Vr}! 
G :8(>ךk͎'BD"&h@B \I8Aumeރ4ҿP%##wԂRZ%%ˤ5ΤaV6pJ6VFK PzHFF4arǼ*j1P␳;$`m4Q02Ww\NPp?SXm=wiS_=tyazW`o`1A]jCƢ;;-$h:6-}% ֪l/ÜFqn)l]L]KU='2ܜ6yAwZnmElR9A_+a/% q:e_|)XɤJYbu1I,Nk7njsA em*Y斂ssboۧf3uh յtP6G$ Cz3P\jvUO]\c.fg8 t:6-E-}URʊ62cf#`փTN.돈&qw?ygAveAuնY=auß=A&TRU{eCwt5 2I'9AJi osы4r/csr8A$vk(-suLRKh QցKL7z*$ c l準;]-frN$KEulltDbcVh3흭ipܐw胛Uaㆎq}$rWzw6I7N@rPe J(0(09 F9s"@$  J 䦤x3<0pHǭj{UqH3 BY%FiAZGیt@  [ڦ,wSљlbArz?PBR3LL(>7-zmU0= lur2uh:]֙opz#[ ٗJKAŊLOy4s$QW;ǽC 8G$vYbpI{(1QrWGGF* ޔ3CH:9΍{7@%R `[}+5=#䧁4cO,r5VN:P#IA}L1׶Fi';]ӝtpJ6sNcC!!OR4ID >64}CT[抚*esP#BTvN$Ϻ ӱy8se~km-w ps$o-i;X754ƽ6ygQ G 8rƨ, AB Peu0elQnrh.W6T6M(r#(.Y3$667d4 NAh$R6F8ei>ƒd  ;o iwyveõ2ځw@̿vq]_;P8/Wy+jeõ2ځw@̿vq]_;P`FuPlXNs(1w@̿vq]_;P8/Wy+jeõ2ځw@̿vq]_;P8/Wy+jeõ2ځwAwH-v3S7Z9s@?I;P83ğõ1'@?I;P83ğõ1'@?I;P83ğõ1'@?I;P83ğõ 7u  dd0SJ&#:p>*k)cs &;Bq؃ |?L{ZoTRyϰ[>)UAqlQѢCIUt(ڀ vCc8k=;M #mo-q;pynoZ-԰%E/7ӽ]Z4֖2\*tK+! f닩}#lӸuR 0PD-ЂTGA" A^?H, ^ 7G̠A0r>J䐞!PD痝縸I%Ѿl[Ao9[Cicyo]&Y.]|F.`y-ޜԖ=]h6)xTf5zB kt^hY z~ CCm\s(lQEs:qcDr{dw-wpynM+ x3|4038AjwldS:@ծAmsms 'uCO_=Pz("whA* %Р3UScQYLB&2k  d ,r#^:r(ekZ <PeWnv謬$=%^vRڕ+1KѶHױ-sNAYĭP!PapkG2N8g9AؽA7e*(luU@y-@},zBP2kEPeg0)^ds]pj @`` a a AB Pi/*]+;SWJUKmf_9oW 祚Ds[ 2ѝ1W1XDy1:6gIzk-GW,*zuox]IK^Sxͨ/Ogd[P uԷG3e6c=OwSZidVӹ2;suWiN&mlDƥ+u: zzwՈDӹ8G' 2rsЩټr{glG{zWlil-XT|S5#BR+^rռJ{ktGwR ;k0[`g#Щ9-lwx\VKeϽ=c$KMsZ\dv*ߩuKUJʺ*BڈA&r NK+ ugUA^/P``hwIsc2n:סm=XҬR1Si |u46*kD}\&wNvmmT:wV4JLwu#EC?8t+ +Yj6@K&Z1jloOR+is <@*Ӯٵ}WEm=]l삕kc#n0:&)NjZm\2EOJ)3+0Hn09r^bgSRלw}MEf+'s;8 ʴhkgZb޳W =zjVK=jћDu֊־ XTʷnF.,~2Dҳ|ctxj⊒&MP襕>Sה+ZkkLGtKyVvfq Fn&1^W}-:V+]K[Jmg=\6D{OZoi WV C.{ uu=q}C.q>(vy47 V>|_ꂥc}W8K}ެ̸'/.wFH#:fGQfj\-.82]eEG Fށ9q9qh$Aķ\ESrTV%]@nz?Y;j*8{#;:㤠ߴ-[IH] #dڙn)RY'iv($㠂@(,A   JI|($@A UB{ֱq&-i{z)1Ay'\w}"s^M3CWG[Ow7*(QńC,NpIk$[ViӄZzJl5;ݕqϨ7Zj5"cZi1>2OC;U ~۸ӚäfovvX"|p>9NrӧH?q]Ѿf<&0ɝ) sqw im8  lU ;ǜNUWRK~+)꧞Xt!t-gL\e5Zfa.o1dBIwHZwaRi).sUYi)!HXN[۽kfQYsm% t{Ȟ Kq@{WkjWeºsb# oV q3Mv23 &Ms1KM:_Ww6(Mjcxy8g>IIs-kh,S \'Cސ4?#o)mh?cJwχg*d cӽ}#֭mx_զ옷K"q5G'Œ^RuTJDcwvT3R8|ϊ@5-ۏtD/JLnϾeǖylqANbh-ˍod hz3.̍I8Ξ.t?w`!k#[dؽA7e>eDu(᧩qiqI#v6Cb w8cQohsX4 փ8Dxlo@rc$=tϨq$AK5\ȤM.㤐zuo0KfI,N/#ϩlTh,1675t6r ˥+beǻ4$I @: swEsef*^ iNރ;I *:h(姍ЂhlԶkdX38gm6gL{"k,'FxAAAA" ;o QPHe2+܂ a29?{H5̠0e9m43\#w}`rΧTj kxH(`lSG#j pkZ2I8Cy9!1@@@@@@@@@@@@@@@@A'ym_*? 
W{O߸ A GG|e(@P2 e(@P2 e(@P2 e(@P2 e(@P2 e(@P2 e(2Φ*K|!04d1A($6S-š cYp urA(^UDyn7wF:zc=j4~u-hs<ϱjwgJ|Voq8hAȆߵ52gO+\DRZiMҭGC7Zh:=Q cIczvqe5=QI'KKAx\]s}橑 '0Βjښ6KZL PrX#w<8~}:Jq/2g$3k^#r9'sN3 0PD-ЂT0Ӝy |Bx!r_<~/?@ܗ nK7% |Bx!r_<~/?@ܗ nIh pħL^iw5$@b]禋t$-9VO-`,2WH_^s mq8)QB0񟻒 6hDwAwNݔCI98޵Ǻ:!Ej9in HaA_dE/13 ~ ZlW%ʊ${w!-DZfxqVHJavoiחA䃶N%@@AAAA.[_Qpzys%yxqg\rAW&aAZqR>A#Y': @ 2 J;^r+~ lVP#5,H sNA,Oᙍ75 C[mSjihe`"79  !r nv eHI!5c`[}4p\qp G(&up~"gQ G(&up~"gQ G(&up~"gQ G(&up~"gQ G(&up~"gQ G(&up~"gQ G(&up~"gQ G(&up~"gQ G(&up~"gQ G(&up~"gQ G(&up~"gQ G(&up~"gQ G(&up~"gQ G(&up~"gQ G(&up~"gQ G(&up~"gQ G(&up~"gQ G(&up~"gQ G(&up~"gQ G(&up~"gQ G(&up~"gQ G(&up~"gQ G(&up~"gQ G(&up~"gQ G(&up~"gQ G(&dx\]'Р J9yA?v~HN%A_Р FqP @$dj 9 )+$ἱۮ0}h$@@@@@@@A.8Ⳍ[s0(4FCc7Pelc \AA@@A@@ANdmsA{CPl487QA $J\FA <2I$qǾ"kXH0(#lgXc-,ee|xx柊0n{ 35{C摐A(6@A@AAtA|$Q qIAfAstۏ~kF02s uBIr4>V>;F>vta{ț\)jOnucS:s<2:μqya%Vt` PiW<5Vh.# ͚l 5uh3WIYg@MҕTb,qxܽ-.\(1ګ|͍;494#kx7$78Mwk}7h7 ,u60ɺ\(P]|r=DtI kp ;r⬽SNqK\惗y'#e_juX d͖'i#9ܱӀ4Ahorj-MDs<;u6v~hfmceM?sy8]:P]eM}UHʊaI,N1>!{Fӟ,amE*_`ANmUD4t횆)x;אA&m+˱^^[ Ȍg8uuIht!9XB H:(9vt)CUJ ^X8 $k* QRZ) -`PL6LVXaܩRTۀtn;T uSFkc:Qxӈθ>.7[g2LqQpuUSKT`{7_7uyGihgmRZ]ǎ';$ ҂jJԻup`&751\i:rA~\-WQSg)* WwCnju# Cs+\VVj플q!׸ raꊍ .U66Bat_s(-]nkKiゑk󁽼It]l6JP[iadzAR۵s>uDJlxeI]OR zt m> g Pm-9O vO>AGl颋_O/'At׭KqYE݌S𙓺ZNH<΁ [=C*dUtuc , pATꮗjU{3VA{CZI=h+ffFwݑAN mlteCqYONƌ`roJa1uUvڈ&MfspZsQ:0/4[m*"t1g<9 coQO,Bvƻx8?nB[[F؝KYk G惓Y&Q\ᤧzFćZK]Hz ԗ[=tu(9PE;IB(YдX^uh'nm\4xPoOy}hM74eMT`"i/l-n'9(&I9A]CyPSQI$H]Ɲ=`A=U{ 3_Bj5F#=.ݵ7 kjH[m˸8;ph,4S0CӜ KسYL&TO5HsƗI< =GqDJQOysO<Fyg\GtAQZѸTӾhd5-s[vȜJ]2B2V2jfuHY4TQ A<h: 4;%ye#0MHs.c9v5Y؊=?v=wFu J_`iLOr@9#_J ke-gsRwQS]Ƌxt:#ene4*-0w]lT<7y#\*.;QbTamqa$SՁփlj.O#.%l/;wX:Y-WS;'y-nZI\iK[# %\Cvv4-^H>K76&&=8(<҇\oV :Z,<7fF$A%f6E ,1 dn)Zl[{fp;b67҂~G%ɳ:r:0;t=;҃=S5dTMm41J'#Λzs.g I2x!nIn$ _{x6Ke=Cļj. 1rIגIs73qpD?v~HN%A_Р ^ &8ƙ 팶Gfe5ʊF;*L@vKqNkoᩰY<2:O42$84RKW_dl&ZfykZ'@N$UKilMlEUe΢%w N 'q p4g^h`8K YԠ.!ܬ2 L琻$t I:DNKu-< DيYv*($ZzL~aAaoh'q]N@$T7Pcl1N#!Ǡ4]sK:tJ(#s=#]AҹUnCgXDQB<9 0KON9 dhuž$3.gNh"YAăû qygaQy))CeykwACm{<0>6*gqy n<46~]{$ 3"$1;uS;lwfI6I񀌳Ӭk%LkX|A d89xW.tԛ?%5Gp REKx[mbqEXKxr5kNAiնk-Zm,h\ȤkN22K`(k `|  7Kuǧԁp\b ͨSI,x*82tơ Zk LWײw ӺޜPzh3am繲FZt9g/RYɦ6XjZq \n07[.4U-ʐSH5njrA‡d/ Zȶ:ZNjeݓ8=%z}ܞ ch'xn})] vGPe309FKASd jKq}t$sFAvxLu 0=Ұ9#{ hyiЁYl_TWVQRHS4ue{o{(,lN }9sO0ޱANvh"Tۨ #=ԹMZWEP?togB-U h.ecRܝs˭+qO$m.8 A磰^)mhH.[F99 fܨn6[md5 ܵɀiN3^05RA#㸴OPs.;+[YM} ®*K\t(,[! CLFaOKp3 it3[ @`2[*g87{;s< G.M~= W=MJ:|A!|.cho5ԷmjggFNZ05R*H>'qcZ7tӄhl[F%5N69- 4A֣O+ \01AWEp-9#C\YQ_.Nh)c-#:Ku<29L3lN7۸2 F[e~I' ϹWjqAAYjH;4g} QxN-.2J ZU=̆ -qrJ 5ޝ$c,RuߴWMDJ: .fq0H:Cvn0n LY']%[j_pVyNq_#<.ӵQW g-@$g_d8j+j*XdVoc9  6_1S"%QF:mIUMSL *w4Ё$2sAɃj$lt*mn<142̓sef4BZEX.dA٤d.mRcA% AϹms!ګmԳpd`=@n >(.m4POW4\n #g,\ tmMkFiVe廧9, er 0ӷtNZj9ܮ&hk+! 
201Ic{k]trXȑ(3C5UUiZUzNO)*hkq]-\!-'_W4ZZR>Yc2*s-8> fEu1Moyb| D:sqFQ)mk* Bg'9s!eh&\.]$2]Dw@/1̇u]͕W冖 K?p Y1 ,!vyN.Ks=z+oO_GxY)➳4ͅ6oi 7l)P>sp4qѩ;CKUemҖ pc2Z[MO[n7R]##\x'>v-=|4Fր܆qj4胙vK9)P~RH:5AԢ:*zelNUS7]AАA(5--}ltx̔n?hh$@@@@@@@@@@@@@@@@@@@@@@@@@@A'ym^qPaٚi[EDu5NE#\:pnܭw*sX׶ i\7@$T7n ._$yՍ"s/h{KIiy W5jKo> n4i^5AΡc-4 s)ay\VQn[?oe|Ɔ1UIOֿsBRۻ;Ѳ7q 7]5A㤢ګMSGO6:Sx9'к9-;fH='BOmATo7)d[ܿLVy 6nLlfScx'=Z]UU/^TDZ`1U1AŽTvc$̧tN}ˋ2TيIԵ!ϋg#}4?/9ihfLVX)t]IUNApNKhӃԃTbkIT8 :N:Tt;CKMzS<2H] 8#R [5CI䦎R~X 2cˌuSWSDM1ku A3-V׊Y OFF㌍;Aʹw++ER2'E-`9-{Y3SZ:XN F1ΨkZe]: ]P 4xb.7s>Q?~B [jG Qɡ*n8} (-3H4 ΅1(6Jiu0,Pkݐ<`胒Wnzuz @BLRG_ ;\ekԴ#{Y)nah~yA-WYo\d *)w<v<`n[ќ -KBcd2Zx;ڞgmxB*xkI =(80ߩ[`S45ԕ4e2Б b74<0܌kfm}L)r98=8(%کn0إ}K`q6=X:\[-+UCѺQp/%i#գզua}w$;:Ey6MN pw'9E+TKP+(9bhx@t<Έ)շb&Zf3&wp:sւuJkURdt:!8sY [h eÄu =i)v2ztAzy[(mu#i89it O4WQUOIpitxFsysAҮkLwϤc飢cIl  z=j%O<-a|`8`Ϛ D̼6$Ok4Є4!]-RO$:p%7N]q+Ǹjifx8~t:|AMZ˭ o7ZgHdp\3˥ E9\YCUŊS4ms]#aZ;g+`QV6h4Z1Pٻ BU;d'x-iM6кpW2"[7.I=.\+ J. xM{?i8Aъk[*IfF Zqשݨ_/i;ˏPh5aԘ^ 60KwK=x PYᭂ/S:0 gM@OM^(+b9+n}H7U[^ mlwsdw{tz"d5tkI ' *}l=6w>{&231tۍ҂MQ:keM S$mpuSZև8 Hθ:Pt#)wR7<:A-3* <]3C]9A+Hk$^v kI_G^: > sMEwQW#$sc/ m]bNuiփH*i  2fug8 6I+N\yu ;[PȍL6o< g9q j}'AMOEEu4P$kF 8(,1푍{ Gz wki4f%jn44Mc)&2և{2uAPBe8Á-/xdKh)Ƙ#$7szIӯɮz_MlOyϨm*@ /q p :B[L.YFcg PUt]s5 U$72"D0`zP;]e%5Ed,u[\83Imgf(dn IJڦV@CDVNqP)̱#f98#$qk\ԃݧUAq*dH7 xnϨ}Pq*[QLɟNF.h' q 7*9#),iw ֗8'@sWmI515ۻ'AbCY#㥭mVU뺙tT5SnӪ *7Fʪx/Ē̝Po=e5/Khc'^Ak*i eSdӞDe=Nڪ.5]DT0gPj(k))1vpf>S H 1 ]|ъReh?kރxnU i۝Y+K9ܐb+.J &=`h_QvfGKCZrs(-Ct$WSK_{&kiDouQFKHˆtAfu\-aÌ2}x(, AD("whA* %p=(>NmpeEcdi>:fj#4dxq=(9Gyn-kl8N1k8nt4 H.USUV x%ه qick:[4WUM Ӯr9{HmtW [CMPlkX39j@AGxP6vHj%a$ͧ9$u i6ːԽ1l37<E{fC5AydjV0zH"lQ6{E@>XVwۋp=MPzPT`|[k \SRG uSKxk?VkM)P[K~/27ϧ4[+lIV+h\Xp'R\fhvP[Jי>玌mN6f[|UcIA}kump47~$J k-C64v<裦gdMc@JpORudIM\=4Zaq R)a9e/9X'{[3"M7'. ;Zud31 a.gtΈ9UD&s8:8i s(&TKc^|Wc"~>{iWnk)-2 {[cm";@#gԃJ3il+mME=l. ! ;pdr-44IM p[K\%8mkr2Z_%TӻbLc (pm^\ '{'#Fk*kmWѶ'A@ƵH$랟PAdIzn=Ay- @o}9`dg!ltO۪W|]Cw77N{FXؒV?68Ǔ;AZ;|]e3iB*CN {{{>k!{A֭Sx1wxp v>YQ 8oVH9ǩJICYzvF* ttjխڶfک{.F0FF>YAڅihs\l$AngZEΔM7Ä1 Zk_A s7$9K[+SôDBþs9t P[dGYW9wO^tApҚ͋uDp:FтD-umvJhg-m4m48=h8v{dc6Y+%y NuxpSw4AVZ Z_Cm[S]$yt6R*)l,lJ%d ކty9AթAT.2IIjMW3yn ir@$ fy s|P#דyiA*H,.<\}8e\M\.hcQ7y;v((H7*)48C aFEёT14L˄ql}UrEWZM 8t#dv鶓hٔϪmC{4 {J #lo}m;#@is]8vlvz@8܃3[W6ey@{Yƛ=H9{?j` iPsjxsq9ۜNݠP\rᦦ ^u=t/Lr;PyƲ]6]({Nww?GQpSmp~ং! 4:26;jq^#RCI7:iDY''mrl>߰ 4&cp͸733cւZ|u[WElk?rC l@n 1p &Y8S6T,???Pz}u>Rfjꩽn>T4Q`lpfUԀ@s[ }%lM=4q*h%k@~]y Ithk ~)anwJk$,tp67L[.3_qlM w9AүfM͕osO;ysAȇyꙮMk׀||=dE-U~m u4FssX77zyaz^-]L\p9 3Y-U=NC O u7M[(&up>к0XI9'yQ9|&0g;gL#5YVvz(,2PX٤>@9d h)bScv2NI:,_I{~ wלsmqW[jk}n[AMU8n2%ڙqi+Y,vᆸO1 ;l1zFe#j89bClL̅Pc(@P2G$dngqjp5Ξ[Cl7xcAMt4SE(zN{.:i̠ (@@tP22UΞ@%sȋF@-nV-  ZZjIܚӇ2Fy/|Wc ' 2O! 9@@@P22-8.B;o xq KΞHښKCpd vJ y#Pc(5|1k]!N 37Jsm4;#3P2VEAE5\8[;5(&k33 Ѷf^#\H=" IiL/ao:gטAc(5h 281q=+ mXSFqn yM6Xdlei#Fft!249(7AP\7Z;iA8wKy eP GA" ;o B<cs.֪5gW{L`iG0Pu5A7hdV۟1EI7o|-fHo{M]n}ʲ!IsH)+*M3SRQ2GwK#H{CPj DdluR2ϑ PK;*rKmʶGvpdZq嵛FVRK.㋸^Q<: ֆpIĩ1T9` ;EO]uh_ he+`pGkzk=ޢfnU52җ5Λp[:u t5m=di1ar Yy;hiIkSArH fO GO[WySs: tr$3nIQ+ւ[6Mlϖ ,E8qփUU;6H } tA+ 2շ:6Ddaho3AvUQ g)JFzPS{EihDO׻ōkgO%;]8*J#,W)K@#= ؃K4צmsE fNq^ `p`WY]Ys{i.5Ad`hq,jRKp.!c(4CKew ?`g<ꃨ'$.n47H3p c.V&z[};u7[ڷ<Q%E{)\giˏ;LA{gt<. 
cke{%AiYDvbGr:jumow²(s˸p toI6ZY+$j`;~wsCF*)-}Ĵ {r $r#=h5 ͲSMK%\CG>c\ Or~ise θסF –]Y=#jMlvsARQWU[*fp}CxnK[t؋6Xe'x!hw.6ᵴSҾƕI^cjخrl5Dt"=A~9g=ZE5PިQ@8(d}mSp6 ѻ.u(: H)R(S=4o-R94zSߤ Ʈ0'!GKG@AE\flkdd"L@ֵeyTs[S-M& cY8AnQWG]jG5֪O$Hj& PAZM"GRK':= Hw*ʨ+if|ëhSa]"Ėls4k=?hh$@@@@@@@@@@@@@@@@@@@@@@@@@@A'ym^qPanQvsK1;{O'\'ArgYSpYnĠrp##j[V6YCW5;ۙZ9o |fgc*J"T/nx k.h6; +)!.&p9t` =m5 i讕q4 {\(- 0 HY4ip/8t AxUB KLsnFF3G$S)^(s[۹w8Aߑ7G#CZ摐Aq7a{r@eFchnww;D/@fxp󻻿g{wL6q]GqMPҹqB ۝-LiNIv'惋[;W1cc;awAԃv2QKT*)*i s@[0C }mlrH+ qiJ}2Y-s:hI#.q6s,Y[p)\44VZmOp[MYJ Ik֜x<ϩce_^;jɨQZٞќ 3#()}O9羦g0Fy#iA׸㯫*ʚcӹs 냁g^Z -o}KQtK|OڙsYԁEqc_YYYN;Acg9Avkoe ;'.qq:ASOKQK' a# vbGWK»k5Iq2cDRlS[WQrx ;T[9- zP-:d)IXьN%eS'd{T;|<䁎Z)gK лuokGW< \oK$ncd>#@;Aa0PqvJ*(g]VsM{p[ѝ3g=DT WU;uir#A{-C5RZw 0`cL ڇgK^ꛅe¢uSxM<$OxFn=G,N|OB oQo4uJgq$fK@F 5G=jX!t cݽm]osG-뻜 p[Z+*h*L[9pp !QWQt7ϕdZTQLuLyjJe` ` lMA#]^\*\4zZ5ei{zJG<o7#tAzIIiW^d|loG0ނElN%B@7˒ tC5βcƩ-@\/Ev/(h#wusӭQة %ܹ$FZ^ӻW^PUe2MM_YG5-(BH 촌\ GV"j $uf$q;q#%ժ *vb;\i'2C飈z m;|h{4\BݟmB$sY&92 ;wOvr:͛SA[UGQC9sw0k@&mW o\eH2A#w?yAЩUt `wtAK\4UձC%s-n܂ :{u00ǿoA-!鐂^z2S2ᖻ-8rB ᨻR\Ԍ@vzF0|\Pǝ㖀j [l+#cKYCZA'J o4$elK ѽ#ރh2m(.5܂)E=-Dl>Y"dtc}GFPn36I+Na8h$@@@@@@@@@A֌jqj+$:!|IKϨh, ea0 AHA" \ hs FY"dt}GB GL^"1A($A^*9d 'ˍ{G04OH#$x`vKsA"?hh$@@@@@@@@@@@@@@@@@@@@@@@@@@A'ym^qPa]oWEu$T1Z;.F5ť pfwݮUL Hh$Gtr2Oq8Dˏw/tՆ1zq˥Wi/w+9x,W6i2@9p^*6-E]$9|V3ރ[O}IQ6ۆIP :2s NJ{$PqkUd(aI>{q'UD$kDѐz?G%]:|g20iւ]M k]MS$.g"[sv^[TTKˠ{X}$e]۶7:0{p;rqfPv~UqmlCUIIcd / ),'59#O^NX\.89 9 Րv%Gih{8y9.Ѐ5vtYCx:k5) \9g>ZtTmuM:!1=xN86Kk:)YQn$c@Ptxݮ6z-Iyo0;w篳DkӿV:Kj#,wO"3<Ų--mcNqp9P\mTݪu6wQFHWgqьy-;RUK˄>; F5WInU9e;jdcNq TmU|VzI 8n uxPKU&r*x!kSPZ\Oy`׵UZUOSN`b1XzsUzjI䒮HᴶYftŸss 4K%y濁-&-pw$R,j!)ˌ.AS$(Yf;][%UG} ;q9=H/Cm5U/3E)t`kI=!-e}6:6+ɥa̎}/[vg]}T¢9]xRAh,O1!p4u\ggD+6֢dCÞBrF:zh=:jwdI ]v-V:~ JIy@AvmXu;hw=#,qŻ`Avl."wFU]-C 8ktMtz+MUrL2t8sԂ:͘Uzd ²|i`nn5j1,H]cT[iO,R5h$8ȷ &\hv;elUIiNƘA=eʊmeb4ӽΎ7qȠgUꋝEHdO n5ނg'*62'214 5J a=Dh|F l9c _N\=+.<7~Լ4dhqB+m}6tRA,q]F؂{=z 򹅵|xI$7qאPslw/J4T,UMAq[&l|-.g#ǎ1:gh8:筹R;]p!p@=xРpSsU泍cxoh]r1c{Xɩbfxntf@0+G<⃿ihǫik3@%!;87]4G,> c(+6VKwYyROӞ_ўH,Xj^R>'qc[O< S9tsL #u.pPsn;3[\62xpdӑAAӿ)QA,]KgtgQˠtlj ʶF78zI\'ii-/NArmjkim;e)[!en1D]WI6nu1Zi"tSo9A 핪ZAڋMEܴΌH٣6xrcj!']㡧cSs4AjlܭG9!i `UU%;;Hp pH} Yv==a) s գwNu<Ů= Y#,FN@ אͿcQjkm3i.3MMz3˒>ֺHyi`#_nlkl{Գ%䆻VH.Q[.uw6'Nzx)s q])?u1sI70`t ln:=XKwCtAfVEaBRQ$dsæuBvF]]Co$Q#x.6˔7m-S4T=o5FNAA'[=IYSuUlRH-:u,8K뭶V5qs쎶zj핸7d-ԳTQUCǃ8s\qts,lo|UWѶCKEtvslE=lu['HM^)v~z/6wFIf#,$uq\iSܢd5!xsZΠ`t GA" ;o B29ٻ$fWLoƮ,te9mT\#KIgނ J)l|Y+G̟YA: @됹:#X'-ˀAKGOD2׽84ssI i,Q ╁񽥮iACCnSjh;=h, AD("whA* %MGR3'>C&q]3g[uO|%o0s>ΞESӾUpqd3W*ZzJ؄q~tGPYZK}{u5fisI`>m,6Hg\] ,x`s:q PPASUT/ {G, <w*kнQ+Y #R=ztAVCJZ9*]OP q m&ԽEG+*|ӂI.3蝮ǙY<~ /WIpO4߾;썮  JJ)ga|0u: Rm-%}Ts:9 6/4;:gc=]A=3Vuۥ 4k=}MsKJ*{G> ]$m hTBǷsW}UL*AU!vzpqA+v[zjᛅ7 ,-q΍ۡ{_U=% )|O5.a-2G6QVO"G=p9Meݦedq`P9ž=.8AA{WnufSw * ,Jnќ#(-n@}mQogAAR'J($sooA:g:exv=DLU6yx0:A#`;=$0Pm_UTY;YQ,ﲜs BjTTJ饒㮹A@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@g0PD-ЂTK?  
k[@8:gJy^\  A9A;uu4tYe0~Cc<{H6= %[1{.2 g=*RV,Q_ Kh4n c8:sA݊o٪/S#!%ܫ}ePAJ6ǹ.sV_f5%KLbtns D [pfC ;fun1 =TEv6ML4G|K24ASvSiau4yLl1=h= AshÚh!ꭌGN:4 0x=Ne~_l: QQ%E; C2I͓ .7y.<(4LZ+㧖xk,ͅ ,sI oH(ݦ%{팉JwNN#A)䭚iuY䣑m|/h^:jQ)86mW %ҺaF{=M,7mX_fid SSh8!|-cKh:AUgQUGY==M%,;TIVK&sCt5T6*v e9  {!yetQNY3 Ǔ@ 0URaؘ$;G3׌h)O55-M ܪ7:$H/Eilt4X1,0[ ڊxe5lq:B1Ig)oT&=#\@8' HSd˴P\7jŋ\ wKZFf&kibk<Ξ+#[<5$c>h*f2o[S-UTҺ#' pѤp4}vSTsk*FY=4N晈y.^Gh!As6n2ie;=:JE5t1WSo1k fZ W(NTR (s.4:q 3S-gv~QAUƞ>b8ԃ AD("whA* % -L$438tA}eee.e+fָ;6n3Ung!K@N$AAA@A?v~HN%A_Р AD("whA* %ݟ4 AB Pi/8(=H#tVclX\ ݟ4 AB Pi/8(5{# :QYN9' XvFCom;.u{#gՠh䯸k4U7sx8zikg\mW*H~H!~ֺ=hۙ&ᘸHqA~xI4[d4IPzqЂ׺gd"DN?4vLIj\l3qA?AǞkLII 7];N؂jGOwUM],}$ HA؊HjZ%W!{˵fY^KtڶWԲEmKr J;\M7owPt]{EKm4zȄToyYWnޥ dk{=ZK,/͝nuz?v~HN%A_Р {OG,Uc:g?ldFOGO4mM%\7mL8(\s>Ee_G5%+!F&nq^A;]=-52Vk|LFI ڢߣJzaI9Tdv]eim -"&:Y`TAлϼ i)MgsV0G#kh{Apad>=I=\O}at4T 4D2Y. eKHh>Zz(~ JIy@@@@@@@@@@@@@@@@A½E$Tn+yj8 L6 hrA?hh$@@@@@@@@@@@@@@@@@@@@@@@@@@A'ym^qPES;)i<8^ #3|#]W+Gfu\Nd\N'*}u_ f9W<#Ӫ;5ʹgW)ٯNU?AN2rxGU vks>:S_̜|d\N'*}u_ f9W<#Ӫ;5ʹgW)ٯNU?AN2rxGU vks>:S_̜|d\N'*}u_ f9W<#Ӫ;5ʹgW)ٯNU?AN2rxGU vks>:S_̜|d\N'*}u_ f9W<#Ӫ;5ʹgW)ٯNU?AN2rxGU vks>:S_̜|d\N'*}u_ f9W<#Ӫ;5ʹgW)ٯNU?AN2rxGU vks>:S_̜|d\N'*}u_ f9W<#Ӫ;5ʹgW)ٯNU?AN2rxGU vks>:S_̜|d\N'*}u_ f9W<#Ӫ;5ʹgW)ٯNU?AN2rxGU vks>:S_̜|d\N'*}u_ f9W<#Ӫ;5ʹgW)ٯNU?AN2rxGU vks>:S_̜|d\N'*}u_ f9W<#Ӫ;5ʹgW)ٯNU?AN2rxGU vks>:S_̜|d\N'*}u_ f9W<#Ӫ;5ʹgW)ٯNU?AN2rxGU vks>:S_̜|d\N'*}u_ f9W<#Ӫ;5ʹgW)ٯNU?AN2rxGU vks>:S_̜|d\N'*}u_ f9W<#Ӫ;5ʹgW)ٯNU?AN2rxGU vks>:S_̜|d\~9Vzr2? `[}4ko_W!W -7id{2@^ZkDxʺI,т^ƍZ>ěDFfIed @@@@@@@@@Ab[}d4KM# 8B4V-83 -Ҷ*i)5$um&'YY" im=4/W1%DDfLxWV_E-8yKƄTV-बmWmKG+)$xbuT"}pWOfs,ή=8_R"' ţ G-:ڋ- y"GU%#[Ov7SLt˺l?Dz9Ʒx٧E}s39re+h[=qyhvn 9]-kXfD.Ͳ<&Jj$9V5mZ[>1(1Y-Xg*gN)̬0֧N3VӺkvMIM%m\4 i' &q&p[YOr3@(I!#Fk箞RՉg,1y+m%ױVY_Q=O3ӡZmk_SdN!ffbWZ  fWc ٖ_SZe[;Zz\CO[z31ZiZ!fڪ5S1h Tт?.|E&'*L{٬Xe 9婩sa~QF u^}Ls km%I7r-ܺ֯^3Y #d;/#в޵׹P-{;Av[UsiWYLךq6sgCg:<@_ w>йm(ꥭD˅}RJ[S髜58gFgZ7ҶS5JZzvCU &9p='(\JJkEb.vVE<8dzumIN!f|BZw2.3@D`s{M Z͖Ӝz--ӱw:kk%68sړXaYMmIz<=XW*kLƞahrE޹uLsOZt^bXwKW_B<}F_dJMiVvx࠺(i-9vufǔ?%>1*)$x5 d{g"ՎwK> TIqүcW'3øl-u9S!oԚs3)we˷mt[-qWP6j*2:2rZֶDN"ayM!SlYo̮ rt 'ܝ9K-a"}gV~՟٣}%Y(LUo TT+O-uj͞ge@蠨 Xs)}KmZӈ*mk#(fe1{%.A?RDʷw=Ʀ MGncq c'kLF&rXKWZ-3extovF3M[Vx?Z2=ͱOLN؜O_VoyӬU[lJ*-lK_ie*zn3飔JAC?M&y)beOhxᠡEä5q:Tc rcѶj+`X)[:t\m7z6e'wsL4@@@A6 k) Ki4hqO⮾Pj^ptUt@ ,{ij浞RlB;&2_yJPI!9+i3lMw[)kjiGVw=:GxÁ='L㒭bgZ{ȉz9Uge)vL9vIy|չ]14%ݗKS(ΆvE#KR\b =UFT0sH*i5ӂ)1pUveq[q>FnSg|UmH4PE$p"s'W6̣¥MU5Zz mIe,yCUh}mv-.3Mq:7F \+yx{ӌlSgGkSG+7Ny*; ڹv2|/A:֩]((Dg>nKKԐUZdr.IM9L{u- u&C;۾ε^Dxg{,ͤ*'ĊnOw=TwkDGwrzͯS]!:sHankzA5gq"_("Q37'ʚR՘ͲEf=2글{5❓KX3gN^[Wtafiw+y;pg#[iVkHfj4Tuj3GDsU.J&ioڃogSWrJA7KtG|{4c(Ж6haDk챌:Fy-3lo2X  Dy'%NgSy4涽u Ke]  '”Biۢq%9kjQCGhc^e^11 u)/RX+m ec!q!.sź+5͢VT+lvu=$&$=YUm'39flkU^%9Źֱc(EIӌaYC<-sJz$ ܸy7erj7]bwK錖ʭtշr"%I6n !8f2w:鏽^)[o%*`d&8D@1Ăuښt+]A eE(@fVY"iJWc;ilDK^s裓6#v[WlKIMRDNC}yل-)R_%S| a5q%MNyrW6|Vq`m¶G’beG]W{wc.}imZ;>r}-L#}fFȝRj㖜u)<_~SnMwRڇ[ird\&֜n:>.:9qߦfY ^ڙΘg YӍe勽K5@؅ <ܔٙVf\cXu^#cGY5iQAsEGAT[l'-o.\2ts39Lz,uJjS7z ӧ8a;gJݳHoN9pL:ug*95ٰpܪ᭬tQEGb$z]kX;Te,*y-Pg${:kfb|T>m=UҶfepF410i+i3.2({B#X@g0PD-ЂTK?  
WZ5u42I*1ܰ?:ؽ]ۺ:z:Mn7/GUMwÿ7שܽWw&7Aޛ^;ru_؛zn^;}zMboStz:Mn7/GUMwÿ7שܽWw&7Aޛ^;ru_؛zn^;}zMboSt:Mn7?GV~Mwÿ7שܽWw&7Aޛ^;ru_؛zn^;}zMboStu;s;Y}z7/GV~MMboStz:Mn7/GUMwÿ7שܽWw&7Aޛ^;ru_؛zn^;}zMboStz:Mn7/GUMwÿ7שܽWw&7Aޛ^;ru_؛zn^;}zMboStz:Mn7/GUMwÿ7שܽWw&7Aޛ^;ru_؛zn^;}zMboStz:Mn7/GUMwÿ7שܽWw&7Aޛ^;ru_؛zn^;}zMboStz:Mn7/GUMwÿ7שܽWw&7Aޛ^;ru_؛zn^;}zMboStz:Mn7/GUMwÿ7שܽWw&7Aޛ^;ru_؛zn^;}zMbn7/GV~wש:6QMwÿ7שNYw&7Wޛk?^;smg؛^zn^;}zޛk?^wÿN7GS7?F~MMѵboStu;smg؛NYw&7GS7?F~7ש::Mnzn~;Stu;smg؛NYw&7GS7?F~MMѵboStu;rugأ}zޛ; rWQMay ,AD("whA* %g^Y@׬kP5( ze^Y@׬kP5( ze^Y@׬kP5( ze^Y@׬kP5( ze^Y@׬kP5( ze^Y@׬kP5( ze^Y@׬kP5( ze^Y@׬kP5( ze^Y@׬kP5( ze^Y@׬kPyک,/Q+KɐִtjI?ss3,DCFRZ:/lH<#]E)T'lH<#]E'bY;eAJ/;:/QIةN~]m7Jc|E՝V}~4뺲aܺ_BGp Иc3/_V&m=Ujv?ktYwj~%>Ywj~%>Ywj~%>Ywj~%>ݶ-]Hf5Ѫվ>N:4iueOi[,SSB&N[\΅m8N xZq\E7O?_Vjv|Sըڝzj?h^?o~}Z-+C⟷;3 < 8:yjm-Myo>&^<^2? `[}4kA}}5Jʹ8PǍd9zA: *'xb~ ǒH f>b31a(!uMeDTJ֌5( Z>;}ʖCGY 9c}@STw4>,wA(0(0(!U^Tr篏2yrS=uU{v==SV[rVW3s6ۻ.n⪜nt\PoU47tTZ9Oxq:KU5ONN7A"t [h? C$LI H m4w.FEkHntnVJ YnZN{"8:6}vv"y7M$s\pKuw9AۺV"-}8"zQEqeTJA%c+H7.s&{^j-1"G"3=GDmme2ntM\@(*m y2==y9rm%}]g疱KU`cpp.#t۴^+}PX. xo O胫EϴۉAql|-c⑃{M Hϳ=?+Y3և[؝+TS\VB״pz4\u"&ՉI<=MqeV͒Èѣϒ9rjiV'mSm ڦS3,n[GBIf2x-.k7ϽC/_/m/ZV쪯dRy::дpqs0+[n(gߘjhkKbcԼ3JzzwVF7>)Z+ Oqg/_}'CͮQ$Q0 g=ޜ)g[;|}jv^:m'^yWZʦw{n&918c3?";џZ8sޮqZ(Klo bpK{:l|? `[}4k{Ŗz+h s1x#PQvǹ'̚I\7s1޴*v~5W r K ;X|Sy%Lfy$dꃠUh5J#m8k9{?]Zk7 iKou]x~1-|c ֘QY<)`0[GкsiSD-rU37rWn jUmEc WO9TfXn Tj1jӯ3} i˯IN:8i!0ijlD*_-] 31qx7{xv1PK+Ū9sc۞D{r˄uuO=62FӝOTeDֻv+kxv3փuvU[sTNi9mme}ߺ]m$DSZY^ZӒ ,ڷuCos;_LZ\r8 g%݅--TyaŎq bAqݛB 'Jq3wu$cD-1hą.0=ZqݙMMON ,L]g#_VFlhB>:vn;'P\AˡHN;.u-hsCKyA^d[ =3o[u>!}0M2gtAVn5wdpT22BZr_CJ@_j땶@0惟wOxm8f!JQ+X.1gVO[MJcmdĉct6UWUoH5QpCp$Fւ{#5% :ئЇ7=XACGP\jFPh`. c ݪ hWMUZwH-<ƍ=eFcƽU<`DְNu$vi+^yi ~3qJ5jًզ>;E8_3ԹԃwwՔp:Ý߿ڙ-ն/.Vdpßqw3єguoYoӉӐ[g62UZftgS߽G;yspq:V=‹fi([M\+\FG2NA=%UU=F0nOXoU* șC 􂃁rg2I;kǸe'Ilr-U\Hr)x@5kZ;g}a]nBjf9Y1ԃ?oO/V~o3%8`8=#sfؘ^O}&2RUr#5C\\gvcӛ\crjWU5Lߙ,ޱ";(v76P_Ye>}/JI{hཛྷrלڒwNɣsWԝ;ERk.(8y0ɿtǽ{*m௞q*WP]m[j| =8ӮB:.ٿ;J_g}ˏo9zGa??mpz)TG1pt18zqx71ws~|ù~N^n74Y cX {ޥcmb2B^`?g[c?v~HN%A_Р d8PlW1s\Z 5 FƽN2ֆ591@@@A0=8z|??< h'k(p9oRX{M_/k/rל f+_E៊3.>ރ4Q^4/OYޠ~iƜ +OK~[ GA" ;o BoDġ- 5W5;CT]{V_ϳit=IZit=IZit=IZit=IZitZvgSnH<}kGJO兺*zƁ<{ۼpBǭto?U{oO\?}۵uA;rvN׭xi18~y;^_'`}z~wuB-E $KҜ;55#J;ƶ&7 [z5t':s;*Y?NMz?*Y?SOĸJ}VO?9.'}#?R_UNEBkg ]L͘ki*[C#ct,}W>?&?eW>?&?eW>?&?eW>?&?eW{YsGA_3j8v\=Bq=)Mik^3080ܾo⯥h;sЦMw7yٴ~?obv#Aٴ~?obv#Aٴ~?obv#Aٴ~?obv#Aٴ~zuH3е|^6^s~#i9_5ʸh؜8$3A^8M=Z;󿭗IÛ~OpOgbso7#Y؜; }lHv'6N|y?[/?%οSp^d/GTSW9IZ `]u-lļHpz1[iK-,@@@g0PD-ЂTK?  6zXg|6a@dy I3iJy)gdZ UӱxcIޭ5ŢfcҪ e9ARtP:l-sZw^Iq N6-Y#`j):B2PtPqڊHmb*xdyז\v4AAA.W*kMkAk rqu=w:ZibY/ =  ϵ֪i{'J哂g/9H.4VDjݻ Q0$nUAEIheΦ:b{apxnc:a)>L3.\Xlj(\cn܂PaP`֗J56Zc.ݧZg)4vk}4y$I:A ' ՟=hy555%El*h]+֎[Z+ZN!ilsDւInީsݕJSzq%`{ D+Ţ|Foce囊SX{M?ԯ -y͢ Q)sr5v8s@.ٿ;J_g}ˏo9zGa??mpzUu3=-{ű~ JIy@@Aֆ:ENq`#C>h)ީ娺m,6Ux_8tC)N9I7Jb<M׾gRU\,tTUHZ79 Fy+ZK^LXjwͥ: I,A)]֏۫> Wcl {)\{Z2ˈӚ);[<馴2wϩ{;sas׈=κ9f}~ޗ&6Xx UmD9T;/>ǭVnE K}tQ cpYwqF IA.}K5o%\U.g4'07kփ٠ HQtRw 4΄ j܅u4JvQNpAzdkmI$L KH n suZueuGsxPpA-z:k8k}H7\(l7J:Y9'y6J ޿rd֞䫊D惆׭AA_|֎Rڊ+|zH'.= cMq&PKb{8 ;wax+:j)*22ד9AܚIoAIm VXSb2Ʈn]e:gMy`!pw339*%fة3/8 vRT"pxk*bkw02A͎IwuvLN)wNx![ijiY5$K 2I)70ju<-]+.RҾ)(K ;*K \ $deDMuu~*5`#n+O{G av|_R#^>_8/g??6"kmH[Z"YMta5 [d g:8w+OqqG8:wΟ{5.ٿ;J_g}ˏo9zGa??mpz)تb{5:NJDGYm.wNrx<u9l=F8ɣXg_MXb3[[C??/'ο7[> AD("whA*PSAq}#/mD{ǎ=cS W4ƢqKJʌD1z@(:kXƱ5t > WW,bN f0r4pFWX՘.TLfVvooո:L24Ro3DLU^ }=]3)2]Aʿ6ۦ|TWlWj;\lI3?{$sƧ ԛcMt?vzgZe9>s@:כ5l:0Bx#1F֎339XF!^lF)jK|FI;-pp!BRWQArH*X(9T'IOYIW%eu\N&TO Kwqc۠5tn3p*L?]CZ4AЭC5SU&QUj"x{q1jf4WVWRI>R?L̻3J넕nI⦟r9\1:1 ͚ Nh@\0uƙ(69\"9T`=u5A5)+! 
buA__4 {e 4T wX҂ TVj+_5ә9q09 Cu5}Ŕz:>,cǫ8@;/O%[&TfSMQ\F MT!TR쾕D3  5H&RTS9d{Zy Avj+k(ijk)we3ᜉ^̹;c5CeMC<@ Ou|Yq0O&ّ3 ێt9h=(%ijkjib{DHdya4 yisւN\MLʐȩ;K~CGUW5 ⛌y7k8N:(;H2Ic0d~c)-Q!W==h-\vbW-OtRv,1;G :8Aէ*zxqD1 ͟Yj^thIv4 kvz^iE<`eaVdI$$gBd伻d }H4zZ[=uuts#}9j t6k}Ԑ:PG9`jtAZg jk |G0H H%k,sL&zW=;^FA#(8׍j*p|bxd|~Kyt❷>{x79teInlo2^3hL٧k'39#}?Wzyg&,־nG+wc*[ VU|-cXuk<as{Jey4.|.sI|G|p^~9kmmF\ӐUi樴E Ny[G{g|{WK+ey׽--Va˶oh}R?Wc?[^4/O\H'x۲Fg)[V/YTmU\wo<{?WNuehxLҾy,y8Ǭ{+XEc[[C??/'ο7[> ADKPȈ%֌>gGj S?;P;U5OAxC;PIOA8u{P;UO2*/n222矤x.rᎀ;!^1a^C#oa2:2:2:2:wv*7ɵt$F#p\JZxXΤ=Ђ+"q){WzJ?O|du5@@@Az 3N:4p1"N49y0SnD~#O'FagzGX@a#"^ut~Dotg$ܴ3/)D?v~HW;j<>ZO,Ђ   $2  JXˎN0O^4AePCQKOW9ݑTŦ(^Gv+soON]:}YGv'6grw?÷9?SNcDͿYt&Nm)t,RQRCOυn}ʳiBY"fƽN\J1hi|Nm>>)?,'f#s>)?,'f#s>)?,'f#s>)?,'f#s>)`&G{*ӥ=X¶9kQKOW g=?O[ړiUob.{V?Y;iUobr=_㟬*79tv~w?ӡڵ9+O>C{H#ks)EO !`44+DDwCkm9JQݟ4 c-DP` e[e*;&xq3m\+h' k?  k?  k?  k?  k?  k?  k?  k? ?M#Xim-b-(/xbKGXm-b-(/xbKGXm-b xaۏKz](v$_@÷g"? P<0ǥHn=.E ;qv~/xaۏK/m;٦#ɏ(ӏ1¾q&?xWN? o_m8P<+c}LA~v EPE7l s{cn> xL? gpT؁7l s{<&Sobͳ*}@Ov7뱩oK'AOm!}< [-ޒ7A2(0Pr,Pt v7n8#\ӭfaԵf"%qgڷ>tT}bo|Se>QcSOMcwEG&Nn?Y6SᏢ~2J@rp@u[}NtVg19f⴫n]+ !w ťĒ9q518oFhiړ{Fg8y~WBYOW#|+!YrtA HV~!z?>w¿jn'OgڧtO'O=n]+*e2652;y B9xޔӤVgV ADAAA^O{ @sFLOg8ZMobX tq6I] STVd;;{LIqw3Jܿ[*"ZYPsG3T]v76P_Ye>}/JI{hཛྷrל$Dπ,-ᕶ8ypڴ-p!B,sh%vV?g\}+=fE Pkmr#tc> [[C??/'ο7[> ADA B-D *+it l9=#il7?P\NFsԃ} Ej*SzA]cQJ ,0^u}c7[PYdvxkC$ [=1u%[O{MMbqS]^n3927\n햞J:)MR)d$4  PykR)A櫬߲:VS^;LE=  1Z ES<Z5Y ϔVMV| π+O~[<Z}5[؁ ߖVMV xjU->l@iob+O~[o1鋇v_LW,@evOL\=@dv_LW,@edhA{5O0}MUu[GemE(S0nT`Uv 6l,ˀo2 ==mq8M9Ќ&Q)5 ; w9̞Aeɢ|R49ikzAЄX'f{{LrAתZj,34F; QiKMGݢ CJʹj󱬒Lj8ٓARfWIn@{G{s qPRoGO0?iCm'RAZ9` tg̵H7gm4:&ZёsP[[)[KEN!i$1>ARfU s<1N:J=as쬻[!lTV #: he5]A;F@'DKmhkž88/-.:|I9^q<ĚS-4n:GQBjţm5ؿȪ"6>.|>s\Iq'+sXR;[(}G,W>|G|p^~9kmd4FOtps}"VO 3({8ұͧ25P7>)Z+ Oqg/__i_6=F{{^!މĬּW듺H:ح\>^W9yssrI9$DDF!`?gz[c GA" o%{c5,>P=D (0(1{X5$ [c#?H Eum-kb{;{:ף^"&L<^wW`&wS0m)6Ƀ]d.Jf [EQ6ARȟ/yieVN.e8{ N`YHIR+٬߃>r3!>r3)0Y"Jw#v4an)i֋DwFYxDiur~S`0p'Ĵz+R{;+3>|%wP>|%>|%9=][sa{ ' W]JV=S GA"jeD/lhDt܇7}i$vwUM#_Zo0jXߒ;Piݗ=;0;PoW/7Ysn*Gjv\ϒ;P;|ځݗ/3}i$vweL#._Zgr>H@˗֙Gjv\ϒ;P;|ځݗ/3}i$vweL#._Zgr>H@˗֙Gjv\ϒ;P;ڂ+ |?Վ9dIƌPq$W33}矩r>H@˗֙Gjv\ϒ;P;|ځݗ/3}i$vweL#._Zgr>H@˗֙Gjv\ϒ;P;|ځݗ/3}i$vweL#._Zgr>H@˗֙Gjv\ϒ;P;|ځݗ/3}i$vweL#._Zgr>HAUXmLάkCA@,@@@@@@@@@@@@@@@@@@@@@@@@@@@A\ND]?"N;|ڛgub/;Sl7TEjmO`MV쑒Fsظ4H֢f#VyAFꝳ7 :ggo|u6C8߈&mqM=5p\"b|1k\N38}#F.$> :t`0&rN˷I;Gn.%#\״9*a?hh$@@@@@@@@@@@@@@@@@@@A#!2BL3;u w(@QԀiF9 ҍ ^H#f=? rH:;u w(@QԁܣGRrH:;u w(@QԁܣGRrH06DET`:r,K}J%i#_ܣGRrH:;u w(:;u w(@QԁܣGRrH:;u w(@QԁܣGRrH:;u w(A#)AaA y]p/< s{=zpQye1ggp<>gg8q|!yzhv&k#G'Fвquҙjm<{={UD=z|g3Zx(qGy{cvg;#ܛcs=MM7G6C3o U#.q5à镻^"=xΔu֞&&\\F}zDhEV#J֏O+ͳ ;8Q|! Sg8Q|! Sg{ `䌅jr!ݟ4  ` ` ` ~A.@@A mkh؉A#A̼Mz>XdOծv9a{؃ISO]LʚY[,2 Wm}ƶa!7ݫ j/Jk+|p.L̠ٷ{s͸H?8NVn'-ROrۮeVE]LsAڋ 3ڗz]'1g׮eKŲJjdq83[DZ&3沕͠tNag-F'Nc8fHid7ߺscLLx1>~]'YWKRMŊ&RC|w^KU(cl&$t)BMO T >PyEcյi./NYÎ=cִImٮ&]j@GX Mz{miVw@R^٨+[EUq';זzAumm-mƞ 'Ah1|4@Kݩx'u ބg::=h.]ǓzA@@@@2˶nq[-45x sAҵin6& sd8NmzkN3+&mCkm^.wz6AŹāԃI}Q[K_ ~'=ZjghゞgF@te{B$P-m1Ý÷sI䃳lR]UG3eKNw]H(. 'U;icq{XH"[H>բʽ7kH׸ hkA Q:+;c߽\N=Y$ =e==DKPؘN dz Ahu =ƞZpGsFLOg8ZMobZeWOtۈ]ٙPgxS,ь}/JI{hཛྷrל6:YspSZͧ≘f]oY30?N=6sxùʖ ״zM-5emEUlk3>o?ҷ~#i_6=FXH48 IBg-uЊI{ 9[mws7F1ŮUvσ=Vy+GKψ~ JIy@@A綨w s+;Al0TrZ5PTWhݒ74Ajm~S5wdhqAHA6ƂQQ -[%KBMN]` U;94u ȐFް]zuA.~X?gcx =AĻɷ YAٻ 6pcfp1F=(\e )34pr`ttb [Nۜn"(f 7V܂s׌ءgjI s^܃k)-yYQt[֐;}3 7 VvnU&6nt $rC =]m7stVejzf_*v)3> $x+3UeÍ'OT"='V=>Rg(ģ.y7IͥN1XRff]8iZ溶٦m^_Oډ@ `uכui6]biÃs Nz?UQ:X{kkj56!2$2QR:I\dqe$!r: lo*} GBZ@:  mʛ-ҮKsfxkkᑟ(cc\-mIjsUShKik5XK;i*-X[Wդr2AMm<_u] =XFqGa>b}7s:x] zGH0)fdmw[_3xLGs6W:F]]4:. 
>#7ZJ55ܝO[D$ѱX("UUm< 'Z+FyU'7Z>PA[oS˴IE,n+ts Ƽz.m$⛑aN@j 1r> ՟=hy5ҵ$78acа0d$\u&"ՙZFkhX#mTt7D>(jO5Zy<4cdgO=MުJQ 9 z(b0ͩ37ʒlolq^|8_kk?'_Y{9P9v#"Dd8 O%WZ&Yʹ飮.?$3pG_=ڟhy/^dAhk+YcOXlk3>o?ҷ~#i_6=Dmհ69FqgC5czXͼ0kzt+sY ϻhY +j~P.Χ=+ϜzsYM|Yη1Z8Z^O~ox>|@@@g0PD-ЂTK? 6[n7{ NjCխiփ",odc:VTQC0ŸfZTcOX-3KYa">&:?r mwF4 ɍŠr$j`6(փ&64$r <7/qǕAtB 3ϒ 1|6n}яr K3%`f@zl!M Hy0oNeiq a1F77F1AdL c&`81p|6n{09"X{ 9G-<#AwV^(60ZZca9#tjzm :h"7V_>Vct^X IM6I g0a@ LG!o{%tH[䗰=@r E)o"cނ@1Pj1h'A O976Mx`?z PEq8z 08<#{SeBx.7Ft-dQǍ5 jZxˋ $0 j  hQC ,&Fs44gA" X45 hAY43膀=5 pD0:H6lq{qnAs@'#vk@98 |I9^q<ُ|ncE}&3fa! U92F[uVTAC9f⽔phW+=%s^shLGۺpp.ٿ;J_g}ˏo9zGa??mpz q8Go9(;Hlo bpK{:lx?hh$@@@@@@@@@@@@@@@@@@@@@@@@@@A'ym^qPxW*ihu@df7kqҷpդMm8bⴭy~Y~Oߢ+>Zs#🫷DV||G#S?Wo~iF~YӟO}JhÏF iYxzZk-6|CQ< f@|{/HxMZ&9] {>-/No;>';K⃽SNϭI}oS|s(;[9/W6ybtl/BI[x.R5bD3q:ԚMk9˝Jki uCD\72=\I:yKZCx=:Ny^=W]=Nա5-6[{VSWDnjw蚿-Z}N^}Wӗnj*[A%M]d.ƱВI N-\̼Iq:zZRsZ@? `[}4kC`u!0:Hb 1 2Ԁ(~ JIy@@AiA#>~ go8mM@3F}6AϦ߈ q#>~ go8mM@3F}6AϦ߈ q#>~ godFAzeݟ4 AB Pi/8(4ő=ZOA4И_^.p%O#rܴb? w-?O#rܴb? w-?O#rܴb? w-?O#rܴb? w-?O#rUn6@C4 5{}J@&R3g'?@82y |!'?@82y nLm:OOi|[7jS5x}>C;˥,W]ggbv].y;NSh>ϐ]>v^}u!t}:O}>C;tuzA|v'e?] N˥'i~Y؝KNӫ;o;tW锢Yb籮>(B-z՜&,pd~ϟ pdnsKmHyA wu=e}ƞFڪR3|z-٣ uP)Oݔt0 sгkgCq8MVܪ)D%d9y936Ŧ>uʨq: sۜ5բmyu_ T黙rKTg$789:N#ø:ͫϊފ Jzs r=lGT'!|M;Zfb~ÞkRY*^(4kG+Lz(լn+NG.*ü(>AA|i{aSh]v9SGj:Bs|ExOփzuVHl-sĸ^9s؂mʲ ms77]N6/xt4:oAN\wꋢ=ȳ#^z n7Y('1eawux=Zj#B[<,ܑƅ*Peݟ4 XAu٥KYGtZOYYo{C$@(u($?WWQ%S4rM s`a[=H'3RSgSL״cPAMA1 U*Y *J x40z 3TYw  1ƚzCf6j'ii ާw#g(.Z/>GtsR4IQjKr{&d7H~3=A'g-56*#yH uhKmglTKc`4}%-me͑ۡqp hA;|cdacs\0AzzWoCalyvP#:kwX4?8x˸#ނ:I᪩2:VD488hymt8p>)GXǵSTr{I|gZzZiPH-Xm5چJ #7G$r <^Mm{N'-7q nl[ } |;&F{N`2;VPy~o{4-{r(,g?hh$@@@@@@@@@@@@@@@@@@@@@@@@@@@An\R G/8(#/? a>n+ϭL~H%@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@A0PJAD@@@@@@A?H;?+ Oɏ^0 ( ) aR })_?oZ^=XXUX@@@@@AI6@@g#kC,cv9Z93{zG[jG[jG[jG[jG[jG[jG[jG[jG[jG[jG[jG[jG[jG[jG[jG[jG[jG[j $y{q fɍ^i^i^i^i^i^i^iT[RC3kʵmjk8VՋF-W^*~v?U9: }/HNv?S^*jS9: }/HNv?S^*jS9: }/HNv?S^*jS9: }/HNv?SуR?hHNv''O@ֆ <=======2أiXzA?hh$@@@@@@@@@@@@@@@@@@@@@@@@@@@@@A_Р ADM? 4 n ӄ:؁wwv p]'y{b u؁wwv p]'y{b u؁wwv p]'y{b u؁wwv p]'y{b u؁wwv p]'y{b u؁wwv p]'y{b u؁wwv p]'y{b u؁wwv p]'y{b u؁wwv p]'y{b u؁wwv p]'y{b u؁wwv p]'y{b u؁wwv p]'y{b u؁wwv p]'y{b u؁wwv p]'y{b u؁wwv p]'y{b u؁wwv p]'y{b u؁wwv p]'y{b u؁wwv p]'y{b u؁wwv p]'y{b u؁wwv p]'y{b u؁wwv p]'y{b u؁wwv p]'y{b u؁wwv p]'y{b u؁wwv p]'y{b u؁wwv p]'y{b u؁wwv pI# P././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/_static/custom.css0000644000175100001770000000355214714401662017311 0ustar00runnerdockerblockquote { font-size: 16px; border-left: none; } dd { margin-left: 30px; } /* Collapse the navbar when its width is less than 1200 pixels. This may need to be adjusted if the navbar menu changes. */ @media (max-width: 1200px) { .navbar-header { float: none; } .navbar-toggle { display: block; } .navbar-collapse { border-top: 1px solid transparent; box-shadow: inset 0 1px 0 rgba(255,255,255,0.1); } .navbar-collapse.collapse { display: none!important; } .navbar-nav { float: none!important; margin: 7.5px -15px; } .navbar-nav>li { float: none; } .navbar-nav>li>a { padding-top: 10px; padding-bottom: 10px; } /* since 3.1.0 */ .navbar-collapse.collapse.in { display: block!important; } .collapsing { overflow: hidden!important; } } /* Sphinx code literals conflict with the notebook code tag, so we special-case literals that are inside text. 
*/ p code { color: #d14; white-space: nowrap; font-size: 90%; background-color: #f9f2f4; font-family: Menlo, Monaco, Consolas, 'Courier New', monospace; } /* Nicer, controllable formatting for tables that have multi-line headers. */ th.head { white-space: pre; } /* labels have a crappy default color that is almost invisible in our doc theme so we use a darker color. */ .label { color: #333333; } /* Hack to prevent internal link targets being positioned behind the navbar. See: https://github.com/twbs/bootstrap/issues/1768 */ *[id]:before :not(p) { display: block; content: " "; margin-top: -45px; height: 45px; visibility: hidden; } /* Make tables span only half the page. */ .table { width: 50% } .navbar-form.navbar-right:last-child { margin-right: -60px; float: left !important; } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/_static/yt_icon.png0000644000175100001770000000765114714401662017443 0ustar00runnerdockerPNG  IHDR2+:gAMA a cHRMz&u0`:pQ<bKGDtIME. }IDATh͙k]Uk}ͽw;~'~%v8qT$iB@K BTDUP'>!DQ>D* E-A%4JiN&]qI3sιq<~EbKG}tٿ_{nԟxȍ4qA( 40n ;[ ? yA>8 _6ܕ*AMmX>j@ & @M@ ܙ߇@f e|h#9젭W@A~ [E":'Dԁ.px]V2gA`5'ޚhPMhiϸkTG&N4:6 <h[̓kv&D5~<{#՝[V`s pO= fN~d:yeIJ}Gfn g'I ngR`#*Io81$^36sO]ϩtnj@ cyZb_ _hS_ynЇN6XD nq9_u'q<؞5/<>[Dq g4뼺K/=5IŢ)|y1ך+)OCR?# (ĉSk֝$tzGxrYDΫm_D k+A ^5]8aOdX*uC׋ a@x,ƸRI q~91D|=۝% $caUh YB7ZI!/7ScU~zkURYTׄWZ{nich;ۉQ Q"ŕ̿g@QTs%u8Ddta*B+(Wg*"Bއn mW}mѹ\ `- vQ=d!{hf1"_;7;.ş6۟9|bYH3N EܩEODPU6lfp!{}/6NQT_zmnmۨ'>h{ߧ׊ E_9MSgߑS(1̥H}x0EToLDDq w;Nz>VۍXEԔIDAʕȵˆ!ph0qߞ<;WvUJ8B9I]u- &9?mF/&w~hU63S]Äx${ .憝8q"E`@rUoVN;9*IUv&ٔ\]2cUac`5w183<;R뽹U~,'qQsO|^(R~pzl3 _){Rjs(O S9t!I,Px\?0 ?k;L-5_U"`FG K6_2䪜̶CGK nZѴhHõ/)gɉIiK=^q( Av*m97/4Z5w*Ei;`Tĭ2҆D3wQP \wli%TKM{(w̡fTQ.%w6"WXI"Y;TX<g_[jB FH&,Xk;td?y[';N*Q1}43wks\HeiE0sVT>Cq\ArU;k]g](Yr@@rbɀBʯ[Re$xӥp2Ci8.*"FcN)4]˫270 zYv~8R@=lTЩmbR11~tKɢ)*֐_@#xlm,n.͕ya5l7^GvX\E+QE'%L\ 8wM坱`j9^U,\YyKC,'Kduy*0w37OKYLqCȽy7]d>SM]+xd^Cd P>"sdG? Ob_6/ZϑAqrAeA`P _+/%tEXtdate:create2016-06-14T20:46:12-05:00 Lh%tEXtdate:modify2016-06-14T20:46:12-05:00QtEXtSoftwareAdobe ImageReadyqe<IENDB`././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/_static/yt_logo.png0000644000175100001770000016516114714401662017454 0ustar00runnerdockerPNG  IHDR i1bKGD pHYs B(xtIME 2X IDATxw}/9HZUTPolD/Ƹ`bmvI+=ʽ}\ -ۡju+ ]>;[9NJ`;;;<륗(<3DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD"X"""* y'}`)#ec % ""B> !nrg^f'K@DDDDD DDDDDBDDDDDBDDDDD DDDDDD DDDDDBDDDDD DDDDDD DDDDDBDDDDDBDDDDD DDDDDBDDDDDBDDDDD DDDDDDb lYp/|eAJC{p!T6!<ߍ\Ɇ4ZgOdZK0T eBWB4:Hcߔ Dih@Z! iv'aZ)G=L ˲~Ȧ'b!;ZŅYV-LBA:(Qz@T@28fu,9eBF`7b*Ń@@@ -Up[M p""P2aF!Eiŗo101kORSLD3 ,̥ʄ %Iv>9nPc =E&D;h;\aq?35JԱ(%! ۡ>yܻ2ϚmRL+ΪS4jqJ9k۸2101Pi᳍BPr" RWt8 KOocat-!B ,# 8 O.Nr'"";kx /aS9@3P$< @BD D 4zWA%˂EY@Ǡsqܟ@gf(p^J C:9$D DB/kB WCBZǡqޣ܏@lͶzAep 4 @!|nTZ>} 0wk(D D TN^ JT9@pA|8BBBv̀S_p>y_#¢101ՍP&.K6CpP-""*EOnZB AetJmx #D D 4~l#A,7Ї 䛜NBBV7Ak 0* !֛8VR OBzx5S)!䛸#a nop=`1101YlhӀb1C`7U[ϳD D 4ZKb{TTXba!b[ƃ\@aADEV&7+,101ХA჈.OB;,ba!b>Mh@o*W""\Qz sVY@ \9cOD4Wm@ap9@@أ@OXАb.X xxmB;KP@?-,V0cJq"b)5Ȥ ܅O@@Jd @kl~"kA[0WX>VϢQɅvVoDTbWx [:X*$zbL(BQѐr1X@, khg F-U0d "{\5W΢"bk|dGAb3mba!b՛nM,ٛ;pһKH-o@I u? 
././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2191505
yt-4.4.0/doc/source/_templates/0000755000175100001770000000000014714401715015766 5ustar00runnerdocker
././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2191505
yt-4.4.0/doc/source/_templates/autosummary/0000755000175100001770000000000014714401715020354 5ustar00runnerdocker
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0
yt-4.4.0/doc/source/_templates/autosummary/class.rst0000644000175100001770000000061014714401662022211 0ustar00runnerdocker
{% extends "!autosummary/class.rst" %}

{% block methods %}
{% if methods %}

.. autosummary::
   :toctree:
{% for item in methods %}
   ~{{ name }}.{{ item }}
{%- endfor %}

{% endif %}
{% endblock %}

{% block attributes %}
{% if attributes %}

.. autosummary::
   :toctree:
{% for item in attributes %}
   ~{{ name }}.{{ item }}
{%- endfor %}

{% endif %}
{% endblock %}

././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0
yt-4.4.0/doc/source/_templates/layout.html0000644000175100001770000000313314714401662020172 0ustar00runnerdocker
{% extends '!layout.html' %}

{%- block linktags %}
    {{ super() }}
{%- endblock %}

{%- block extrahead %}
    {{ super() }}
{%- endblock %}

{%- block footer %}

{%- endblock %}

{# Custom CSS overrides #}
{% set bootswatch_css_custom = ['_static/custom.css'] %}

././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2191505
yt-4.4.0/doc/source/about/0000755000175100001770000000000014714401715014743 5ustar00runnerdocker
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0
yt-4.4.0/doc/source/about/index.rst0000644000175100001770000000657614714401662016613 0ustar00runnerdocker
.. _aboutyt:

About yt
========

.. contents::
   :depth: 1
   :local:
   :backlinks: none

What is yt?
-----------

yt is a toolkit for analyzing and visualizing quantitative data. Originally
written to analyze 3D grid-based astrophysical simulation data, it has grown
to handle any kind of data represented in a 2D or 3D volume. yt is a
Python-based open source project and is open for anyone to use or contribute
code. The entire source code and history is available to all at
https://github.com/yt-project/yt .

.. _who-is-yt:

Who is yt?
----------

As an open-source project, yt has a large number of user-developers. In
September of 2014, the yt developer community collectively decided to endow
the title of *member* on individuals who had contributed in a significant way
to the project. For a list of those members and a description of their
contributions to the code, see `our members website. `_

History of yt
-------------

yt was originally created to study datasets generated by cosmological
simulations of galaxy and star formation conducted by the simulation code
Enzo. After expanding to address data output by other simulation platforms,
it further broadened to include alternate, grid-free methods of simulating --
particularly, particles and unstructured meshes.

With the release of yt 4.0, we are proud that the community has continued to
expand, that yt continues to participate in the broader ecosystem, and that
the development process is continuing to improve in both inclusivity and
openness. For a more personal retrospective by the original author, Matthew
Turk, you can see this `blog post from 2017 `_.

How do I contact yt?
--------------------

If you have any questions about the code, please contact the `yt users email
list `_. If you're having other problems, please follow the steps in
:ref:`asking-for-help`, particularly including Slack and GitHub issues.

How do I cite yt?
-----------------

If you use yt in a publication, we'd very much appreciate a citation! You
should feel free to cite the `ApJS paper `_ with the following BibTeX
entry: ::

   @ARTICLE{2011ApJS..192....9T,
   author = {{Turk}, M.~J. and {Smith}, B.~D. and {Oishi}, J.~S. and
   {Skory}, S. and {Skillman}, S.~W. and {Abel}, T. and {Norman}, M.~L.},
   title = "{yt: A Multi-code Analysis Toolkit for Astrophysical Simulation Data}",
   journal = {The Astrophysical Journal Supplement Series},
   archivePrefix = "arXiv",
   eprint = {1011.3514},
   primaryClass = "astro-ph.IM",
   keywords = {cosmology: theory, methods: data analysis, methods: numerical},
   year = 2011,
   month = jan,
   volume = 192,
   eid = {9},
   pages = {9},
   doi = {10.1088/0067-0049/192/1/9},
   adsurl = {https://ui.adsabs.harvard.edu/abs/2011ApJS..192....9T},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
   }

While this paper is somewhat out of date -- and certainly does not include
the appropriate list of authors -- we are preparing a new method paper as
well as preparing a new strategy for ensuring equal credit distribution for
contributors.
Some of this work can be found at the `yt-4.0-paper `_ repository. ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2191505 yt-4.4.0/doc/source/analyzing/0000755000175100001770000000000014714401715015625 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/analyzing/Particle_Trajectories.ipynb0000644000175100001770000003063414714401662023160 0ustar00runnerdocker{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Particle Trajectories" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One can create particle trajectories from a `DatasetSeries` object for a specified list of particles identified by their unique indices using the `particle_trajectories` method. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "%matplotlib inline\n", "import glob\n", "from os.path import join\n", "\n", "import yt\n", "from yt.config import ytcfg\n", "\n", "path = ytcfg.get(\"yt\", \"test_data_dir\")\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, let's start off with a FLASH dataset containing only two particles in a mutual circular orbit. We can get the list of filenames this way:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "my_fns = glob.glob(join(path, \"Orbit\", \"orbit_hdf5_chk_00[0-9][0-9]\"))\n", "my_fns.sort()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And let's define a list of fields that we want to include in the trajectories. The position fields will be included by default, so let's just ask for the velocity fields:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "fields = [\"particle_velocity_x\", \"particle_velocity_y\", \"particle_velocity_z\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are only two particles, but for consistency's sake let's grab their indices from the dataset itself:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "ds = yt.load(my_fns[0])\n", "dd = ds.all_data()\n", "indices = dd[\"all\", \"particle_index\"].astype(\"int\")\n", "print(indices)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "which is what we expected them to be. 
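Note that `particle_trajectories` identifies the same particles across the different outputs by these indices, so the index list should contain no duplicates. A quick sanity check (a minimal sketch, assuming `numpy` is imported as `np`):

```python
import numpy as np

# every index should identify exactly one particle
assert len(np.unique(indices)) == len(indices)
```
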
Now we're ready to create a `DatasetSeries` object and use it to create particle trajectories: " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "ts = yt.DatasetSeries(my_fns)\n", "# suppress_logging=True cuts down on a lot of noise\n", "trajs = ts.particle_trajectories(indices, fields=fields, suppress_logging=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `ParticleTrajectories` object `trajs` is essentially a dictionary-like container for the particle fields along the trajectory, and can be accessed as such:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "print(trajs[\"all\", \"particle_position_x\"])\n", "print(trajs[\"all\", \"particle_position_x\"].shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that each field is a 2D NumPy array with the different particle indices along the first dimension and the times along the second dimension. As such, we can access them individually by indexing the field:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "plt.figure(figsize=(6, 6))\n", "plt.plot(trajs[\"all\", \"particle_position_x\"][0], trajs[\"all\", \"particle_position_y\"][0])\n", "plt.plot(trajs[\"all\", \"particle_position_x\"][1], trajs[\"all\", \"particle_position_y\"][1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And we can plot the velocity fields as well:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "plt.figure(figsize=(6, 6))\n", "plt.plot(trajs[\"all\", \"particle_velocity_x\"][0], trajs[\"all\", \"particle_velocity_y\"][0])\n", "plt.plot(trajs[\"all\", \"particle_velocity_x\"][1], trajs[\"all\", \"particle_velocity_y\"][1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we want to access the time along the trajectory, we use the key `\"particle_time\"`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "plt.figure(figsize=(6, 6))\n", "plt.plot(trajs[\"particle_time\"], trajs[\"particle_velocity_x\"][1])\n", "plt.plot(trajs[\"particle_time\"], trajs[\"particle_velocity_y\"][1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Alternatively, if we know the particle index we'd like to examine, we can get an individual trajectory corresponding to that index:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "particle1 = trajs.trajectory_from_index(1)\n", "plt.figure(figsize=(6, 6))\n", "plt.plot(particle1[\"all\", \"particle_time\"], particle1[\"all\", \"particle_position_x\"])\n", "plt.plot(particle1[\"all\", \"particle_time\"], particle1[\"all\", \"particle_position_y\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's look at a more complicated (and fun!) example. We'll use an Enzo cosmology dataset. First, we'll find the maximum density in the domain, and obtain the indices of the particles within some radius of the center. 
First, let's have a look at what we're getting:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "ds = yt.load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n", "slc = yt.SlicePlot(\n", " ds,\n", " \"x\",\n", " [(\"gas\", \"density\"), (\"gas\", \"dark_matter_density\")],\n", " center=\"max\",\n", " width=(3.0, \"Mpc\"),\n", ")\n", "slc.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So far, so good--it looks like we've centered on a galaxy cluster. Let's grab all of the dark matter particles within a sphere of 0.5 Mpc (identified by `\"particle_type == 1\"`):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "sp = ds.sphere(\"max\", (0.5, \"Mpc\"))\n", "indices = sp[\"all\", \"particle_index\"][sp[\"all\", \"particle_type\"] == 1]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next we'll get the list of datasets we want, and create trajectories for these particles:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "my_fns = glob.glob(join(path, \"enzo_tiny_cosmology/DD*/*.hierarchy\"))\n", "my_fns.sort()\n", "ts = yt.DatasetSeries(my_fns)\n", "trajs = ts.particle_trajectories(indices, fields=fields, suppress_logging=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Matplotlib can make 3D plots, so let's pick three particle trajectories at random and look at them in the volume:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "fig = plt.figure(figsize=(8.0, 8.0))\n", "ax = fig.add_subplot(111, projection=\"3d\")\n", "ax.plot(\n", " trajs[\"all\", \"particle_position_x\"][100],\n", " trajs[\"all\", \"particle_position_y\"][100],\n", " trajs[\"all\", \"particle_position_z\"][100],\n", ")\n", "ax.plot(\n", " trajs[\"all\", \"particle_position_x\"][8],\n", " trajs[\"all\", \"particle_position_y\"][8],\n", " trajs[\"all\", \"particle_position_z\"][8],\n", ")\n", "ax.plot(\n", " trajs[\"all\", \"particle_position_x\"][25],\n", " trajs[\"all\", \"particle_position_y\"][25],\n", " trajs[\"all\", \"particle_position_z\"][25],\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It looks like these three different particles fell into the cluster along different filaments. We can also look at their x-positions only as a function of time:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "plt.figure(figsize=(6, 6))\n", "plt.plot(trajs[\"all\", \"particle_time\"], trajs[\"all\", \"particle_position_x\"][100])\n", "plt.plot(trajs[\"all\", \"particle_time\"], trajs[\"all\", \"particle_position_x\"][8])\n", "plt.plot(trajs[\"all\", \"particle_time\"], trajs[\"all\", \"particle_position_x\"][25])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Suppose we wanted to know the gas density along the particle trajectory, but there wasn't a particle field corresponding to that in our dataset. Never fear! If the field exists as a grid field, yt will interpolate this field to the particle positions and add the interpolated field to the trajectory. 
To add such a field (or any field, including additional particle fields) we can call the `add_fields` method:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "trajs.add_fields([(\"gas\", \"density\")])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We also could have included `\"density\"` in our original field list. Now, plot up the gas density for each particle as a function of time:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "plt.figure(figsize=(6, 6))\n", "plt.plot(trajs[\"all\", \"particle_time\"], trajs[\"gas\", \"density\"][100])\n", "plt.plot(trajs[\"all\", \"particle_time\"], trajs[\"gas\", \"density\"][8])\n", "plt.plot(trajs[\"all\", \"particle_time\"], trajs[\"gas\", \"density\"][25])\n", "plt.yscale(\"log\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, the particle trajectories can be written to disk. Two options are provided: ASCII text files with a column for each field and the time, and HDF5 files:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "trajs.write_out(\n", "    \"halo_trajectories\"\n", ")  # This will write a separate file for each trajectory\n", "trajs.write_out_h5(\n", "    \"halo_trajectories.h5\"\n", ")  # This will write all trajectories to a single file" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" } }, "nbformat": 4, "nbformat_minor": 4 }
././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2191505
yt-4.4.0/doc/source/analyzing/_images/0000755000175100001770000000000014714401715017231 5ustar00runnerdocker
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0
yt-4.4.0/doc/source/analyzing/_images/fields_ipywidget.png0000644000175100001770000005762714714401662023304 0ustar00runnerdocker
[binary PNG image data omitted]
+l`־}{=%+_"8KԿkڴX2*tt5@' 6|meeen$&e۳-q3F,y -2-c 2r!%L@.ȹy2٧P$I6D"`  3f}k'@t ,"A͘ȹ8g9$P(I$ iD˗^|1X6` +ټya!ܭ.TPC=zp`F+ p bA%5E(El#o-@Kb0b`0cb9\hkٲ11+qq^+ xR%Z%$I$f*0HXaG!xݱmp}I">aߖ[Mя޿_߿}^MXrBx29WcY3ڌ`>}1ӕLx 3̒}?1v裏{ %}"EKxקB3wK?" #5K41^Q3`1{bx9>֣(,h@DR 'HCHL8 _BRD;7qp.r\8wg:9Nc39?ц8n>s!O>s$I$lȬއ~IB#= 9lu$d& C#Rܙ*.:Xŧ`!% cUHx7!y0Bwyǁ9X1vRҰo*9@48# #i]^X9ޏ?U" Jx1Oh90RpFIiäF@;yBECB ||Y8?s3HC|g8+Vp;d}.#qJ$IkkV!ݼ}"hr p[vP V Q-V"hǻE Ҁ*Xhi\y$E  /qV˛nɍ8^c{\;|rGY8l,|h` &88 9>b, qȘ.7{ ,q-MX}m'x073rJxه,$I+`0JbUy#JĥJ (t\,O` ayob%U2%ї&,d[@ hx9nNeƴnݺ0 I̱_o$b)…[b=ŢF?@XUlu'3vh9n>K$n0f'Zb_QYn YU= QJ^l4kىou^|{99GK`u9O/IP1 !`)ŋ @n0\T D!\@$y".c¯)n:c>ܣ?5~؏@MD|ٖ>Kn|[79@ha9VXq=U0†sUS 8#g }xr {ĢWSX%b9\XL`.u/wr826*ZVSa=>`=@Ϛ?@z,*٬S~? /++/Fobź+˯}Y+k8eYjrCKЋ Lc4nphT3 ~vjO8~$ oh'iN"P .W nGK֨~h2Q­ (W"nL5}L6\xc 7 WkVTVX_eOzJ[ϵ'<1p" f< lzK%3}==fG>,IXqEk!,q93q6|Nw ~yk>`[軭|>1- Җ1na6uٌfIf|14b!wϜj6"`k^埾e[6D{HU8ɌL#"5woDE5; k1چ9RL".ZLrr/ ,C iĴp-GK bf)叴M@" kvd @R#E&{$ $D".b%ZؙQHS:"~ dv+`f؎faJ/Ua ק| G13FY4űT/\"ͅ"]O ͱ3Vƍ &:9a⁵1s96n|&d\P8bXJ@;|84#\ 73]e_ cogv$O\8sW~"ֶ>w-/$I}@.Ȁ*f@\ ~! 2C-W3 EK0hP^^bn$t&K/Eͱ njB_xc܌-̸SX(I$ oP8Hȑo/Z"h,r^@,@@uJkJb%f&Ox# + (Y~iD@#V8 ZX@CcD7 E "q116~7 %VїXHC1/P$I /@raM )@['a- F1f( [6mVHd%68-j*8݋76P+>ry;py_ q.!9_K0O#8,0 Sk#UAp%cs`V]Ɏ7w=f%1x%_j1e_oe %A 66;"D&$sAD%q|q]62+ĈI, w{WbI<EUtAn|-3-ѻyv9s8uXY [.zӲtc]wu}qbQ)]ok EXKmX<d1L:__[Y|Meu p\ a3/\|`k vCC0 0`pK:Bmڻ.hM2k:ö;gK޵+ê0 QƚG^Q?W2"sD%χekt˥K-*;:lK7Bwڜn\x˂1uh'a uryu V\?Ν-wc㻐?z>9j@Ox6wq&$XfXK,/Hرss'鏴/LI΋6 8C?lӵ*nlAmyze[b{'tދUT^ύ:ZXtpbU[FK(7-$-"'\طs.4$`#@ȩB\<~݃@hfcpZtd\ƒ:r-.6'd7Op#W#`?R܌<2vr?7+XKl$hI Sӊ# g u>uÖʃ-ڤ+ ǪC\-Zs5؉/$ϷVǩ*+O_Z*Xր+.b&ʓc . hEcĺ5A_HgBt<# [ ! 帛VUPn0J,b{ $$I7$Up"ArJ @> J ;DEXc=怍vX8..?ɑU Dg aH$ /"*ZŊ 5"0n`,s"1ݺus8%}PfƺH_p9Sp# T.6vFb6Xq־{/H.- /Uۏo/x`ٜΞ үmeJT4ϕ$~&+*4jRf]Q=CK-_|*`Rҝtb3C&d~_Rߤ$bIU[y)>,pJq">Y~Xy'XqbU׏&9r?@%a{>}c\Y0ڤ׹%g]:@3D]~߀ݱSE6kӱ /օeS]d+1ueH:hkNuR$` b.To(%.7 -+̂5g}g{t]^oȨ;mC=Ҵ!Vl~KLtBL\:YW[b'>ӆ,@5Dt#_4,ibP$I($` ="gHd! ī4ĸb go%DI E\o?Zy6lQa>939V@n`\;z4`A9V˱yb 07iuR*>}'m݁aem8fy2( {튘)Nmم|6$ WKIRcެ/ŧ흙/X~it1@^{c=IDM8}j2 pd&<&kVݿ?h$8' kWه,X[m>&q>M_Wa9.2&ٹ\G0J-Ή3۷8eZ{e:DT'`q_30XD5=6pAb"'xln֮ Dn뿱뿵tS"3AMi$I $HF'c%…ٻ4e/.ܾ}{czövd-C ˗/w5H1aI}0C`xK0IҵR,$ vq@d ЉGb/ L }?p)7w'\|@.r"ޢb R%I(]2 ]B$ L!U@pi{υX2%e=`c'xYOz-\z \P 9$$I @X혩"GulHd/$VK Kš~;{K ,xc?s;v! }nhIJ$I@Dm\nW C, < l2\UM0A%.cDr,%F8`oE>AL ?ck(I7*C ކmYbWzft-d\}C*BK ҂QqxCgx<&q$8gyqԉ4#wFDu}δ-v0#:lWV;$ @n>F&XE.~>8$-[8 zm"uɶl] wm?):s٢ó-.k喆.TAu`,̺<46Wm 1і쟭`[\\un\*JKïOYcVpoYq▛>Ǭٺw`fs_58t㨛,xsNSK?a/}V$IX+^QOƒt# 4n8BO|'5 1XհITTfvtG[jV, 5)VFy ;_i6u"k=#l6l{_+,+"l2?~YnD1P}x^Mpb&Mr5}9Ț4|UGvۤ͝kGB?`*CXؒFܪ B'O^G`LݻΞ>m>}ᶭvjj;ܧwlkY8AxP QrwV<wB^V|[_~9n$ᶯfXŏ o~lwPjN} *eWg>e㿷80Q I6,JHs .u$K*Wzt0~FUR욊\7}\;sv,SLrN:<1syq3la`HnRC0IRDIPlWXov|I*k즖r樃!B{g ljnv)+>WlɁɚ׃fP8==ݥmHPuXkӦMn޼^VV?dE\YuΞζ|vXuh/HӦ8ztc 5@߇A}g}Ybi۶n xo+gG CᡀR`W5?úY**CGRsm꾱{+*+gJ$IZ*s LjU2&ZJLU\AuXư̕\ھqށ¹8Qiִ^:PeT6Ըg͚喑I;vWغv8 (NJJ7lUPq?nڣCZyaaa6e8e"GABBv=-y(L8nu&D`ep>2u3U\ŵ)}|;Qe/r>յ}3`7vtpX Ֆ9zs+Z pC̼%33sBRp!)U;HB07u΅<q+Wt3S믿v3~k}cj!#Gt9T] 6,sX ؖy{4*HJRCB? 
$H[SI66LG>~Co\~\뮻\B7̀$;ƃՏ>yd3SO/Wͬ!$I$`0=W793>u?;;w/?rxi7sUCuAq[3P^ eÐ$I6^D^O]f"f_BS5 o˖-KsM7׻I ŵN#شi/恒u%]i:k״Fa}wu[~KGrI5VXzii6p~Ta[?gyEeJV^q$Xǩm {S6quRuV^ߥ :^,K)ʣK³p%I6jg1aJD9^{Y$~K$lv{'IBHD*lhcfOXyb~JSp2rJ -=Wa9Ŗy$&pUTwOuAŖ[XCɭ,҂/b _jsۜdfҋw9:kj@bybQOF]wZhN^NPY}`< tc*6 ?Hs;O;`뫎1?u>ta~=}}Jn-7<8lkfmVrQ%IpSQoێ_ $jz `G%ܽĪdp_ ׿խĸrļF8 8ޢU{3m#v,~|b'\4b.N:9K9V;>]bGm@F@XRfyxO,;~&m vx[/ C~5~;QKÐ; 5pT&m˱+f6/28szb99KMܒ0Ɗ$ >QoOm=vo/" Q!o~Mi.&ņodbI%fcCȥ_:w0clP^$k3 x́nGvc^]6qg;W}~mδ*+O2@+Hcu}4d[䬌r u&qoy[ɎtrnBU>ߜx::5Gaϙ.qnJ=9_I{GZ ߹!eUVr$ltx1o>6|j{/lU4k=GX6I1^#9m֦4eq[qzm{7/ҖMb}mO5zV?j@鲄Q %I(ڋo}bF^kfr%?g*&6}}>Bp6mf}u uG5E;SmSvt#F %I$` \wu}ƤB{O߱ךiw_6&~[nȚ9=/~sqk6{O&$I Tپ/.Ye>6z}EYągي4bݰȅ -*YFs#qIN?w!$lkmʕnVo}yJ23-u$;ܷ% .ʲ+>ׇڷd+VYU H;w~c߭6(>5myKn6}~ ڕر~w2ȹ@+ZУv iaJ"40[l;v8XbȲe'@2ovE ;ߖ'.هD]gI9h2\FG;R?d쌙_|ێϔ#U wXnӦ++yA `$MdcکSթt~%?g(-'±|IG< KJ֭o_ғ'-Sp?eqm竒vBweٙ;:t%d9AgZE1!~hqqnٮ>V~[x$ w.+͖=cV^ Ī:Xiy=:N?wѾM]5.뾲cɺK0:M.]iMkW^y/^,|֭+G-E99s +B[m:?nՀuV)+=qHk]ku;UZlmm4W!ˣ౩!0!֒Ŗ %pKgHLs,38p_0;>c (MO?eeeSuS(Ƃ1GK-}+=:2N?6mjsF|U5͓Gv๧'"H_ kgW[Q y݊Ebgϵ{/IxJIRC.אQ^&3m…nuScX`'x[k^ ӣ@Jn YЩ}G]TVI_*en6 YnVǏ Ǐ\{*~+/??u:vh7o YKVhK٬,[s-18_ᱩS޽'b9=fM}$L͵+W_0~RƏ#c3y`|KjkZOm -K0[ٹmejda3ېV]dCX$HX[PÔ*W5##Æ&&&\JϞ=ï 0ZÇv7n֭[K^wk˻wvr &'v8\BdmznhoڴlnYZ1{{_:Ďe l̮!LF ~o{Uh:ш#\2k_?I nmUڶfw+@pM[\o\X -(%N^,z2ܲ/w^o{+N]cqYarR3-]| CaƺVf5mN+}+ҶZo:In疖\ I@*,sX5>ɻqpSr(N]쏿ɹ& ̯C>1.%Rk_v!Qc@r^V tј"a~[od?v@TЗsM0U?M8Y4nPWm"*WmQʰ~cb־M"p#xUxDS/>>0ej{2~2Źm f%bˤ#ymKt՘r5l;ג*V B7O,o ff,ܣ #& {3s;SU.= 3/v(IV$X/>_˖-HaÆ{$L@7u3<.%mk ʰzp*Ke2V^Ub[\KL7VO?]TNB=H0@cukϙ3 7m۶/X1Qcz8k ܗſ?1gϾ&n0guh+Vpcm֬YyU>}믿}?q}`ogZ#;X%t}bI;w;\~_-kHSK`ek MXqe`BBBET3c>H)<b+PG9xe?u ƽ;ܱ`=#U]~'O#w$$I${M+w0 71 !r9 `X!,aX؟~YRwmVX#'jժۿ"n֭[;Jz؃>,X,>˗/wA裏ڤI\?5Vޝ ` PwW/ǀpԩ6c g}'],$$I$֔P3VyD5K0֑շ jc]\K,)Skm/@r1"Y'rm5ρo\E74N5NdhGAI$%`CK1s+U|/~ܰ4}̥1Ѯ (~}o[q0suZFobg=,navܒ-qmn+ g=^sz®4auXKM~&>h?Ɉ^Yń,`]'$$I J;OlYϵc=ִ[-nXU_;K:}{o'MmHwǭ.8گzk1 sЯڎQZS$18[Ҭ v, ZI$P64=ww 2fN,ra̻^.ζ6?o XHKĤ(}.Q9z/$I / poֹq}$/tejH8C/HSW3fG3ED"fswثӞwf` BwNxȞssSJ'5wxS\7`% 97}$o#u_fK$  -)@O CZw^= @YӧOwd9^a9lIꓹ↢_f S@ɤd-qyf%|YVR33~I 1cPd?PLW\im۶u%-J]wek֬q3dھt`n[Ʋw-b.ݩ{ƹ>w8+Ύ妸U&H_ʂƝ<uL^{t", %Ix" ,_d~mpXAŁw5AS9H)YUn}+yI?C " 7B$?RpL=zpAǺ@n@$_"ފs /s˜vqW_T8 $I@oqQX-Z8K!/*T  bNn~{,{XJMA> -4ֿ 8 @Ǖ sN@9VƯ@I$I(WD 9.,ǧ.`\ p/*>_@֥{jn}'.+/^bU_\ BQf $Fّ?nh>بd,X,DBkƋqI@I$I(<|a­ \>pge=m[m(rNw}\r&Rn&~;ʬ*H"4A!K1~@}Ǐwd7& |ϵ_"`9*.$I23m*J;9RX)F<9&]$$I$^"-XpFHaj+JXsLWsP$I^w(I@I$IH'Kf3~=ejq31IL`I(I$ Qg%\Qɻw ,`gc8}ڥ@>\H@I$IxC 6ҩ`5# 3I 2?{ױcG7M6nf-_ѭ[7HL\j"f˒?uYBSt+ia|%*o0q1r|<<>ҡC>bId<8B 9V$$I@ܨca$Mn֬+uR~_;x pLM2i*kyXAc_R2kYt0,#_ nby>}@5qaӨQ hR׸6 X%$I$nQ" G]_/oRGfQ׻]k yѼy=$֕ `Ȃ \l qc^#; t0@I$Ix Ɏ^>@9p[m[vn_VZo;wܯĚA Yߝw:XY~Wj=fRcl>+ q?^0NI(I$ oI d5'S\o ue|2?db vZ?`"-y>/!۰=@9&ڕ^$I`C XE.RW>}^-Q.rLI(I$ ;$$I$JP$I6dibiVHJMܳgOv܂$$I?02x8&FlܸѥW!e 3b3?  `&>o %$$O<|۶msaej| 2sSNn[en2 .͛,^&yŊ.!I≠َ1^̚5ˍI!'Nt315(U=DJ$IAefⒶbB3H󒜜4ns׽{w[{vi`HҤIsm۶uUS}%%%*! _?r}n,ڵsc}x`_ō4KA^7mԎ? 
(2&6̥$I@,_>*e`AFm~+SB7|& ̫f"h7o=cQ+ST,#XG^B`c,T:}כnYC%GO|XMMMMMMMM[qqq@I$I$$$I$IJ$I$I׳tjgl^IENDB`././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2231507 yt-4.4.0/doc/source/analyzing/_static/0000755000175100001770000000000014714401715017253 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/analyzing/_static/axes.c0000644000175100001770000000102114714401662020352 0ustar00runnerdocker#include "axes.h" void calculate_axes(ParticleCollection *part, double *ax1, double *ax2, double *ax3) { int i; for (i = 0; i < part->npart; i++) { if (ax1[0] > part->xpos[i]) ax1[0] = part->xpos[i]; if (ax2[0] > part->ypos[i]) ax2[0] = part->ypos[i]; if (ax3[0] > part->zpos[i]) ax3[0] = part->zpos[i]; if (ax1[1] < part->xpos[i]) ax1[1] = part->xpos[i]; if (ax2[1] < part->ypos[i]) ax2[1] = part->ypos[i]; if (ax3[1] < part->zpos[i]) ax3[1] = part->zpos[i]; } } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/analyzing/_static/axes.h0000644000175100001770000000035214714401662020365 0ustar00runnerdockertypedef struct structParticleCollection { long npart; double *xpos; double *ypos; double *zpos; } ParticleCollection; void calculate_axes(ParticleCollection *part, double *ax1, double *ax2, double *ax3); ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/analyzing/_static/axes_calculator.pyx0000644000175100001770000000206014714401662023165 0ustar00runnerdockerimport numpy as np cimport numpy as np cdef extern from "axes.h": ctypedef struct ParticleCollection: long npart double *xpos double *ypos double *zpos void calculate_axes(ParticleCollection *part, double *ax1, double *ax2, double *ax3) def examine_axes(np.ndarray[np.float64_t, ndim=1] xpos, np.ndarray[np.float64_t, ndim=1] ypos, np.ndarray[np.float64_t, ndim=1] zpos): cdef double ax1[3] cdef double ax2[3] cdef double ax3[3] cdef ParticleCollection particles cdef int i particles.npart = len(xpos) particles.xpos = xpos.data particles.ypos = ypos.data particles.zpos = zpos.data for i in range(particles.npart): particles.xpos[i] = xpos[i] particles.ypos[i] = ypos[i] particles.zpos[i] = zpos[i] calculate_axes(&particles, ax1, ax2, ax3) return ( (ax1[0], ax1[1], ax1[2]), (ax2[0], ax2[1], ax2[2]), (ax3[0], ax3[1], ax3[2]) ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/analyzing/_static/axes_calculator_setup.txt0000644000175100001770000000112714714401662024407 0ustar00runnerdockerNAME = "axes_calculator" EXT_SOURCES = ["axes.c"] EXT_LIBRARIES = [] EXT_LIBRARY_DIRS = [] EXT_INCLUDE_DIRS = [] DEFINES = [] from distutils.core import setup from distutils.extension import Extension from Cython.Distutils import build_ext ext_modules = [Extension(NAME, [NAME+".pyx"] + EXT_SOURCES, libraries = EXT_LIBRARIES, library_dirs = EXT_LIBRARY_DIRS, include_dirs = EXT_INCLUDE_DIRS, define_macros = DEFINES) ] setup( name = NAME, cmdclass = {'build_ext': build_ext}, ext_modules = ext_modules ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/analyzing/astropy_integrations.rst0000644000175100001770000000504514714401662022653 0ustar00runnerdocker.. _astropy-integrations: AstroPy Integrations ==================== yt enables a number of integrations with the AstroPy package. 
FITS Image File Reading and Writing
-----------------------------------

Reading and writing FITS files is supported in yt using `AstroPy's FITS file
handling. `_ yt has basic support for reading two and three-dimensional image
data from FITS files. Some limited ability to parse certain types of data
(e.g., spectral cubes, images with sky coordinates, images written using the
:class:`~yt.visualization.fits_image.FITSImageData` class described below) is
possible. See :ref:`loading-fits-data` for more information.

Fixed-resolution two-dimensional images generated from datasets using yt
(such as slices or projections) and fixed-resolution three-dimensional grids
can be written to FITS files using yt's
:class:`~yt.visualization.fits_image.FITSImageData` class and its subclasses.
Multiple images can be combined into a single file, operations can be
performed on the images and their coordinates, etc. See
:doc:`../visualizing/FITSImageData` for more information.

Converting Field Container and 1D Profile Data to AstroPy Tables
----------------------------------------------------------------

Data in field containers, such as spheres, rectangular regions, rays,
cylinders, etc., are represented as 1D YTArrays. A set of these arrays can
then be exported to an `AstroPy Table `_ object, specifically a `QTable `_.
``QTable`` is unit-aware, and can be manipulated in a number of ways and
written to disk in several formats, including ASCII text or FITS files.

Similarly, 1D profile objects can also be exported to AstroPy ``QTable``,
optionally writing all of the profile bins or only the ones which are used.
For more details, see :ref:`profile-astropy-export`.

././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2231507
yt-4.4.0/doc/source/analyzing/domain_analysis/0000755000175100001770000000000014714401715020777 5ustar00runnerdocker
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0
yt-4.4.0/doc/source/analyzing/domain_analysis/XrayEmissionFields.ipynb0000644000175100001770000002201714714401662025626 0ustar00runnerdocker
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# X-ray Emission Fields" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> Note: If you came here trying to figure out how to create simulated X-ray photons and observations,\n", " you should go [here](http://hea-www.cfa.harvard.edu/~jzuhone/pyxsim/) instead." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This functionality provides the ability to create metallicity-dependent X-ray luminosity, emissivity, and photon emissivity fields for a given photon energy range. This works by interpolating from emission tables created from the photoionization code [Cloudy](https://www.nublado.org/) or the collisional ionization database [AtomDB](http://www.atomdb.org).
These can be downloaded from https://yt-project.org/data from the command line like so:\n", "\n", "`# Put the data in a directory you specify` \n", "`yt download cloudy_emissivity_v2.h5 /path/to/data`\n", "\n", "`# Put the data in the location set by \"supp_data_dir\"` \n", "`yt download apec_emissivity_v3.h5 supp_data_dir`\n", "\n", "The data path can be a directory on disk, or it can be \"supp_data_dir\", which will download the data to the directory specified by the `\"supp_data_dir\"` yt configuration entry. It is easiest to put these files in the directory from which you will be running yt or `\"supp_data_dir\"`, but see the note below about putting them in alternate locations." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Emission fields can be made for any energy interval between 0.1 keV and 100 keV, and will always be created for luminosity $(\\rm{erg~s^{-1}})$, emissivity $\\rm{(erg~s^{-1}~cm^{-3})}$, and photon emissivity $\\rm{(photons~s^{-1}~cm^{-3})}$. The only required arguments are the\n", "dataset object, and the minimum and maximum energies of the energy band. However, typically one needs to decide what will be used for the metallicity. This can either be a floating-point value representing a spatially constant metallicity, or a prescription for a metallicity field, e.g. `(\"gas\", \"metallicity\")`. For this first example, where the dataset has no metallicity field, we'll just assume $Z = 0.3~Z_\\odot$ everywhere:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import yt\n", "\n", "ds = yt.load(\n", " \"GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150\", default_species_fields=\"ionized\"\n", ")\n", "\n", "xray_fields = yt.add_xray_emissivity_field(\n", " ds, 0.5, 7.0, table_type=\"apec\", metallicity=0.3\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> Note: If you place the HDF5 emissivity tables in a location other than the current working directory or the location \n", " specified by the \"supp_data_dir\" configuration value, you will need to specify it in the call to \n", " `add_xray_emissivity_field`: \n", " `xray_fields = yt.add_xray_emissivity_field(ds, 0.5, 7.0, data_dir=\"/path/to/data\", table_type='apec', metallicity=0.3)`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Having made the fields, one can see which fields were made:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(xray_fields)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The luminosity field is useful for summing up in regions like this:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sp = ds.sphere(\"c\", (2.0, \"Mpc\"))\n", "print(sp.quantities.total_quantity((\"gas\", \"xray_luminosity_0.5_7.0_keV\")))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Whereas the emissivity fields may be useful in derived fields or for plotting:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "slc = yt.SlicePlot(\n", " ds,\n", " \"z\",\n", " [\n", " (\"gas\", \"xray_emissivity_0.5_7.0_keV\"),\n", " (\"gas\", \"xray_photon_emissivity_0.5_7.0_keV\"),\n", " ],\n", " width=(0.75, \"Mpc\"),\n", ")\n", "slc.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The emissivity and the luminosity fields take the values one would see in the frame of the source. 
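As a quick check (a minimal sketch, reusing the `sp` sphere from above), the fields carry exactly the units quoted at the start of this notebook:

```python
print(sp["gas", "xray_emissivity_0.5_7.0_keV"].units)  # erg/(cm**3*s)
print(sp["gas", "xray_luminosity_0.5_7.0_keV"].units)  # erg/s
```
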
However, if one wishes to make projections of the X-ray emission from a cosmologically distant object, the energy band will be redshifted. For this case, one can supply a `redshift` parameter and a `Cosmology` object (either from the dataset or one made on your own) to compute X-ray intensity fields along with the emissivity and luminosity fields.\n", "\n", "This example shows how to do that, where we also use a spatially dependent metallicity field and the Cloudy tables instead of the APEC tables we used previously:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ds2 = yt.load(\"D9p_500/10MpcBox_HartGal_csf_a0.500.d\", default_species_fields=\"ionized\")\n", "\n", "# In this case, use the redshift and cosmology from the dataset,\n", "# but in theory you could put in something different\n", "xray_fields2 = yt.add_xray_emissivity_field(\n", "    ds2,\n", "    0.5,\n", "    2.0,\n", "    redshift=ds2.current_redshift,\n", "    cosmology=ds2.cosmology,\n", "    metallicity=(\"gas\", \"metallicity\"),\n", "    table_type=\"cloudy\",\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, one can see that two new fields have been added, corresponding to X-ray intensity / surface brightness when projected:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(xray_fields2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note also that the energy range now corresponds to the *observer* frame, whereas in the source frame the energy range is between `emin*(1+redshift)` and `emax*(1+redshift)`. Let's zoom in on a galaxy and make a projection of the energy intensity field:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prj = yt.ProjectionPlot(\n", "    ds2, \"x\", (\"gas\", \"xray_intensity_0.5_2.0_keV\"), center=\"max\", width=(40, \"kpc\")\n", ")\n", "prj.set_zlim(\"xray_intensity_0.5_2.0_keV\", 1.0e-32, 5.0e-24)\n", "prj.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> Warning: The X-ray fields depend on the number density of hydrogen atoms, given by the yt field\n", " `H_nuclei_density`. In the case of the APEC model, this assumes that all of the hydrogen in your\n", " dataset is ionized, whereas in the Cloudy model the ionization level is taken into account. If \n", " this field is not defined (either in the dataset or by the user), it will be constructed using\n", " abundance information from your dataset. Finally, if your dataset contains no abundance information,\n", " a primordial hydrogen mass fraction (X = 0.76) will be assumed." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, if you want to place the source at a local, non-cosmological distance, you can forego the `redshift` and `cosmology` arguments and supply a `dist` argument instead, which is either a `(value, unit)` tuple or a `YTQuantity`. Note that here the redshift is assumed to be zero.
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "xray_fields3 = yt.add_xray_emissivity_field(\n", " ds2,\n", " 0.5,\n", " 2.0,\n", " dist=(1.0, \"Mpc\"),\n", " metallicity=(\"gas\", \"metallicity\"),\n", " table_type=\"cloudy\",\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prj = yt.ProjectionPlot(\n", " ds2,\n", " \"x\",\n", " (\"gas\", \"xray_photon_intensity_0.5_2.0_keV\"),\n", " center=\"max\",\n", " width=(40, \"kpc\"),\n", ")\n", "prj.set_zlim(\"xray_photon_intensity_0.5_2.0_keV\", 1.0e-24, 5.0e-16)\n", "prj.show()" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" } }, "nbformat": 4, "nbformat_minor": 4 } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/analyzing/domain_analysis/clump_finding.rst0000644000175100001770000001710614714401662024355 0ustar00runnerdocker.. _clump_finding: Clump Finding ============= The clump finder uses a contouring algorithm to identified topologically disconnected structures within a dataset. This works by first creating a single contour over the full range of the contouring field, then continually increasing the lower value of the contour until it reaches the maximum value of the field. As disconnected structures are identified as separate contours, the routine continues recursively through each object, creating a hierarchy of clumps. Individual clumps can be kept or removed from the hierarchy based on the result of user-specified functions, such as checking for gravitational boundedness. A sample recipe can be found in :ref:`cookbook-find_clumps`. Setting up the Clump Finder --------------------------- The clump finder requires a data object (see :ref:`data-objects`) and a field over which the contouring is to be performed. The data object is then used to create the initial :class:`~yt.data_objects.level_sets.clump_handling.Clump` object that acts as the base for clump finding. .. code:: python import yt from yt.data_objects.level_sets.api import * ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") data_source = ds.disk([0.5, 0.5, 0.5], [0.0, 0.0, 1.0], (8, "kpc"), (1, "kpc")) master_clump = Clump(data_source, ("gas", "density")) Clump Validators ---------------- At this point, every isolated contour will be considered a clump, whether this is physical or not. Validator functions can be added to determine if an individual contour should be considered a real clump. These functions are specified with the :func:`~yt.data_objects.level_sets.clump_handling.Clump.add_validator` function. Current, two validators exist: a minimum number of cells and gravitational boundedness. .. code:: python master_clump.add_validator("min_cells", 20) master_clump.add_validator("gravitationally_bound", use_particles=False) As many validators as desired can be added, and a clump is only kept if all return True. If not, a clump is remerged into its parent. Custom validators can easily be added. A validator function must only accept a ``Clump`` object and either return True or False. .. 
code:: python def _minimum_gas_mass(clump, min_mass): return clump["gas", "mass"].sum() >= min_mass add_validator("minimum_gas_mass", _minimum_gas_mass) The :func:`~yt.data_objects.level_sets.clump_validators.add_validator` function adds the validator to a registry that can be accessed by the clump finder. Then, the validator can be added to the clump finding just like the others. .. code:: python master_clump.add_validator("minimum_gas_mass", ds.quan(1.0, "Msun")) Running the Clump Finder ------------------------ Clump finding then proceeds by calling the :func:`~yt.data_objects.level_sets.clump_handling.find_clumps` function. This function accepts the :class:`~yt.data_objects.level_sets.clump_handling.Clump` object, the initial minimum and maximum of the contouring field, and the step size. The lower value of the contour will be continually multiplied by the step size. .. code:: python c_min = data_source["gas", "density"].min() c_max = data_source["gas", "density"].max() step = 2.0 find_clumps(master_clump, c_min, c_max, step) Calculating Clump Quantities ---------------------------- By default, a number of quantities will be calculated for each clump when the clump finding process has finished. The default quantities are: ``total_cells``, ``mass``, ``mass_weighted_jeans_mass``, ``volume_weighted_jeans_mass``, ``max_grid_level``, ``min_number_density``, and ``max_number_density``. Additional items can be added with the :func:`~yt.data_objects.level_sets.clump_handling.Clump.add_info_item` function. .. code:: python master_clump.add_info_item("total_cells") Just like the validators, custom info items can be added by defining functions that minimally accept a :class:`~yt.data_objects.level_sets.clump_handling.Clump` object and return a format string to be printed and the value. These are then added to the list of available info items by calling :func:`~yt.data_objects.level_sets.clump_info_items.add_clump_info`: .. code:: python def _mass_weighted_jeans_mass(clump): jeans_mass = clump.data.quantities.weighted_average_quantity( "jeans_mass", ("gas", "mass") ).in_units("Msun") return "Jeans Mass (mass-weighted): %.6e Msolar." % jeans_mass add_clump_info("mass_weighted_jeans_mass", _mass_weighted_jeans_mass) Then, add it to the list: .. code:: python master_clump.add_info_item("mass_weighted_jeans_mass") Once you have run the clump finder, you should be able to access the data for the info item you have defined via the ``info`` attribute of a ``Clump`` object: .. code:: python clump = leaf_clumps[0] print(clump.info["mass_weighted_jeans_mass"]) Besides the quantities calculated by default, the following are available: ``center_of_mass`` and ``distance_to_main_clump``. Working with Clumps ------------------- After the clump finding has finished, the master clump will represent the top of a hierarchy of clumps. The ``children`` attribute within a :class:`~yt.data_objects.level_sets.clump_handling.Clump` object contains a list of all sub-clumps. Each sub-clump is also a :class:`~yt.data_objects.level_sets.clump_handling.Clump` object with its own ``children`` attribute, and so on. .. code:: python print(master_clump["gas", "density"]) print(master_clump.children) print(master_clump.children[0]["gas", "density"]) The entire clump tree can be traversed with a loop syntax: .. code:: python for clump in master_clump: print(clump.clump_id) The ``leaves`` attribute of a ``Clump`` object will return a list of the individual clumps that have no children of their own (the leaf clumps). .. 
code:: python # Get a list of just the leaf nodes. leaf_clumps = master_clump.leaves print(leaf_clumps[0]["gas", "density"]) print(leaf_clumps[0]["all", "particle_mass"]) print(leaf_clumps[0].quantities.total_mass()) Visualizing Clumps ------------------ Clumps can be visualized using the ``annotate_clumps`` callback. .. code:: python prj = yt.ProjectionPlot(ds, 2, ("gas", "density"), center="c", width=(20, "kpc")) prj.annotate_clumps(leaf_clumps) prj.save("clumps") Saving and Reloading Clump Data ------------------------------- The clump tree can be saved as a reloadable dataset with the :func:`~yt.data_objects.level_sets.clump_handling.Clump.save_as_dataset` function. This will save all info items that have been calculated as well as any field values specified with the *fields* keyword. This function can be called for any clump in the tree, saving that clump and all those below it. .. code:: python fn = master_clump.save_as_dataset(fields=["density", "particle_mass"]) The clump tree can then be reloaded as a regular dataset. The ``tree`` attribute associated with the dataset provides access to the clump tree. The tree can be iterated over in the same fashion as the original tree. .. code:: python ds_clumps = yt.load(fn) for clump in ds_clumps.tree: print(clump.clump_id) The ``leaves`` attribute returns a list of all leaf clumps. .. code:: python print(ds_clumps.leaves) Info items for each clump can be accessed with the ``"clump"`` field type. Gas or grid fields should be accessed using the ``"grid"`` field type and particle fields should be accessed using the specific particle type. .. code:: python my_clump = ds_clumps.leaves[0] print(my_clump["clump", "mass"]) print(my_clump["grid", "density"]) print(my_clump["all", "particle_mass"]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/analyzing/domain_analysis/cosmology_calculator.rst0000644000175100001770000000555714714401662025766 0ustar00runnerdocker.. _cosmology-calculator: Cosmology Calculator ==================== The cosmology calculator can be used to calculate cosmological distances and times given a set of cosmological parameters. A cosmological dataset, ``ds``, will automatically have a cosmology calculator configured with the correct parameters associated with it as ``ds.cosmology``. A standalone :class:`~yt.utilities.cosmology.Cosmology` calculator object can be created in the following way: .. code-block:: python from yt.utilities.cosmology import Cosmology co = Cosmology( hubble_constant=0.7, omega_matter=0.3, omega_lambda=0.7, omega_curvature=0.0, omega_radiation=0.0, ) Once created, various distance calculations as well as conversions between redshift and time are available: .. 
notebook-cell:: from yt.utilities.cosmology import Cosmology co = Cosmology() # Hubble distance (c / h) print("hubble distance", co.hubble_distance()) # distance from z = 0 to 0.5 print("comoving radial distance", co.comoving_radial_distance(0, 0.5).in_units("Mpccm/h")) # transverse distance print("transverse distance", co.comoving_transverse_distance(0, 0.5).in_units("Mpccm/h")) # comoving volume print("comoving volume", co.comoving_volume(0, 0.5).in_units("Gpccm**3")) # angular diameter distance print("angular diameter distance", co.angular_diameter_distance(0, 0.5).in_units("Mpc/h")) # angular scale print("angular scale", co.angular_scale(0, 0.5).in_units("Mpc/degree")) # luminosity distance print("luminosity distance", co.luminosity_distance(0, 0.5).in_units("Mpc/h")) # time between two redshifts print("lookback time", co.lookback_time(0, 0.5).in_units("Gyr")) # critical density print("critical density", co.critical_density(0)) # Hubble parameter at a given redshift print("hubble parameter", co.hubble_parameter(0).in_units("km/s/Mpc")) # convert time after Big Bang to redshift my_t = co.quan(8, "Gyr") print("z from t", co.z_from_t(my_t)) # convert redshift to time after Big Bang print("t from z", co.t_from_z(0.5).in_units("Gyr")) .. warning:: Cosmological distance calculations return values that are either in the comoving or proper frame, depending on the specific quantity. For simplicity, the proper and comoving frames are set equal to each other within the cosmology calculator. This means that for some distance value, x, x.to("Mpc") and x.to("Mpccm") will be the same. The user should take care to understand which reference frame is correct for the given calculation. The helper functions ``co.quan`` and ``co.arr`` exist to create unitful ``YTQuantity`` and ``YTArray`` objects with the unit registry of the cosmology calculator. For more information on the usage and meaning of each calculation, consult the reference documentation at :ref:`cosmology-calculator-ref`. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/analyzing/domain_analysis/index.rst0000644000175100001770000000427114714401662022645 0ustar00runnerdocker.. _domain-analysis: Domain-Specific Analysis ======================== yt powers a number of modules that provide specialized analysis tools relevant to one or a few domains. Some of these are internal to yt, but many exist as external packages, either maintained by the yt project or independently. Internal Analysis Modules ------------------------- These modules exist within yt itself. .. note:: As of yt version 3.5, most of the astrophysical analysis tools have been moved to the :ref:`yt-astro` and :ref:`attic` packages. See below for more information. .. toctree:: :maxdepth: 2 cosmology_calculator clump_finding XrayEmissionFields xray_data_README External Analysis Modules ------------------------- These are external packages maintained by the yt project. .. _yt-astro: yt Astro Analysis ^^^^^^^^^^^^^^^^^ Source: https://github.com/yt-project/yt_astro_analysis Documentation: https://yt-astro-analysis.readthedocs.io/ The ``yt_astro_analysis`` package houses most of the astrophysical analysis tools that were formerly in the ``yt.analysis_modules`` import. These include halo finding, custom halo analysis, synthetic observations, and exports to radiative transfer codes. See :ref:`yt_astro_analysis:modules` for a list of available functionality. .. 
_attic: yt Attic ^^^^^^^^ Source: https://github.com/yt-project/yt_attic Documentation: https://yt-attic.readthedocs.io/ The ``yt_attic`` package contains former yt analysis modules that have fallen by the wayside. These may have small bugs or were simply not kept up to date as yt evolved. Tools in here are looking for a new owner and a new home. If you find something in here that you'd like to bring back to life, either by adding it to :ref:`yt-astro` or as part of your own package, you are welcome to it! If you'd like any help, let us know! See :ref:`yt_attic:attic-modules` for an inventory of the attic. Extensions ---------- There are a number of independent, yt-related packages for things like visual effects, interactive widgets, synthetic absorption spectra, X-ray observations, and merger-trees. See the `yt Extensions `_ page for a list of available extension packages. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/analyzing/domain_analysis/xray_data_README.rst0000644000175100001770000000423114714401662024523 0ustar00runnerdocker.. _xray_data_README: Auxiliary Data Files for use with yt's Photon Simulator ======================================================= Included in the `xray_data `_ package are a number of files that you may find useful when working with yt's X-ray `photon_simulator `_ analysis module. They have been tested to give spectral fitting results consistent with input parameters. Spectral Model Tables --------------------- * tbabs_table.h5: Tabulated values of the galactic absorption cross-section in HDF5 format, generated from the routines at http://pulsar.sternwarte.uni-erlangen.de/wilms/research/tbabs/ ARFs and RMFs ------------- We have tested the following ARFs and RMFs with the photon simulator. These can be used to generate a very simplified representation of an X-ray observation, using a uniform, on-axis response. For more accurate models of X-ray observations we suggest using MARX or SIMX (detailed below_). * Chandra: chandra_ACIS-S3_onaxis_arf.fits, chandra_ACIS-S3_onaxis_rmf.fits Generated from the CIAO tools, on-axis on the ACIS-S3 chip. * XMM-Newton: pn-med.arf, pn-med.rmf EPIC pn CCDs (medium filter), taken from SIMX * Astro-H: sxt-s_100208_ts02um_intall.arf, ah_sxs_7ev_basefilt_20090216.rmf SXT-S+SXS responses taken from http://astro-h.isas.jaxa.jp/researchers/sim/response.html * NuSTAR: nustarA.arf, nustarA.rmf Averaged responses for NuSTAR telescope A generated by Dan Wik (NASA/GSFC) .. _below: Other Useful Things Not Included Here ------------------------------------- * AtomDB: http://www.atomdb.org FITS table data for emission lines and continuum emission. You must have it installed to use the ``TableApecModel`` spectral model. * PyXspec: https://heasarc.gsfc.nasa.gov/xanadu/xspec/python/html/ Python interface to the XSPEC spectral-fitting program. Two of the spectral models for the photon simulator use it. * MARX: https://space.mit.edu/ASC/MARX/ Detailed ray-trace simulations of Chandra. * SIMX: http://hea-www.harvard.edu/simx/ Simulates a photon-counting detector's response to an input source, including simplified models of telescopes. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/analyzing/fields.rst0000644000175100001770000011536014714401662017634 0ustar00runnerdocker.. _fields: Fields in yt ============ Fields are spatially-dependent quantities associated with a parent dataset. 
Examples of fields are gas density, gas temperature, particle mass, etc. The fundamental way to query data in yt is to access a field, either in its raw form (by examining a data container) or a processed form (derived quantities, projections, aggregations, and so on). "Field" is something of a loaded word, as it can refer to quantities that are defined everywhere, which we refer to as "mesh" or "fluid" fields, or discrete points that populate the domain, traditionally thought of as "particle" fields. The word "particle" here is gradually falling out of favor, as these discrete fields can be any type of sparsely populated data. If you are developing a frontend or need to customize what yt thinks of as the fields for a given dataset, see both :ref:`per-field-plotconfig` and :ref:`per-field-config` for information on how to change the display units, on-disk units, display name, etc. .. _what-are-fields: What are fields? ---------------- Fields in yt are denoted by a two-element string tuple, of the form ``(field_type, field_name)``. The first element, the "field type," is a category for a field. Possible field types used in yt include ``'gas'`` (for fluid fields defined on a mesh) or ``'io'`` (for fields defined at particle locations). Field types can also correspond to distinct particle or fluid types in a single simulation. For example, a plasma physics simulation using the Particle in Cell method might have particle types corresponding to ``'electrons'`` and ``'ions'``. See :ref:`known-field-types` below for more info about field types in yt. The second element of field tuples, the ``field_name``, denotes the specific field to select, given the field type. Possible field names include ``'density'``, ``'velocity_x'`` or ``'pressure'`` --- these three fields are examples of field names that might be used for a fluid defined on a mesh. Examples of particle fields include ``'particle_mass'``, ``'particle_position'`` or ``'particle_velocity_x'``. In general, particle field names are prefixed by ``particle_``, which makes it easy to distinguish between a particle field and a mesh field when no field type is provided. What fields are available? -------------------------- We provide a full list of fields that yt recognizes by default at :ref:`field-list`. If you want to create additional custom derived fields, see :ref:`creating-derived-fields`. Every dataset has an attribute, ``ds.fields``. This attribute possesses attributes itself, each of which is a "field type," and each field type has as its attributes the fields themselves. When one of these is printed, it returns information about the field and things like units and so on. You can use this for tab-completing as well as easier access to information. Additionally, if you have `ipywidgets `_ installed and are in a `Jupyter environment `_, you can view the rich representation of the fields (including source code) by either typing ``ds.fields`` as the last item in a cell or by calling ``display(ds.fields)``. The resulting output will have tabs and source: .. image:: _images/fields_ipywidget.png :scale: 50% As an example, you might browse the available fields like so: .. code-block:: python print(dir(ds.fields)) print(dir(ds.fields.gas)) print(ds.fields.gas.density) On an Enzo dataset, the result from the final command would look something like this:: Alias Field for ('enzo', 'Density') ('gas', 'density'): (units: 'g/cm**3') You can use this to easily explore available fields, particularly through tab-completion in Jupyter/IPython. 
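As a quick illustration of exploring fields this way, here is a minimal sketch that surveys every field type on a dataset and counts the fields each one provides (the ``IsolatedGalaxy`` sample dataset stands in for your own data; the exact types and counts will differ):

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")

    # each public attribute of ds.fields is a field type; each field type
    # has the fields themselves as its attributes
    for ftype in dir(ds.fields):
        if ftype.startswith("_"):
            continue  # skip Python internals
        fnames = [f for f in dir(getattr(ds.fields, ftype)) if not f.startswith("_")]
        print(ftype, len(fnames))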
It's also possible to iterate over the list of fields associated with each field type. For example, to print all of the ``'gas'`` fields, one might do: .. code-block:: python for field in ds.fields.gas: print(field) You can also check if a given field is associated with a field type using standard python syntax: .. code-block:: python # these examples evaluate to True for a dataset that has ('gas', 'density') "density" in ds.fields.gas ("gas", "density") in ds.fields.gas ds.fields.gas.density in ds.fields.gas For a more programmatic method of accessing fields, you can utilize the ``ds.field_list``, ``ds.derived_field_list`` and some accessor methods to gain information about fields. The full list of fields available for a dataset can be found as the attribute ``field_list`` for native, on-disk fields and ``derived_field_list`` for derived fields (``derived_field_list`` is a superset of ``field_list``). You can view these lists by examining a dataset like this: .. code-block:: python ds = yt.load("my_data") print(ds.field_list) print(ds.derived_field_list) By using the ``field_info`` attribute, one can access information about a given field, like its default units or the source code for it. .. code-block:: python ds = yt.load("my_data") ds.index print(ds.field_info["gas", "pressure"].get_units()) print(ds.field_info["gas", "pressure"].get_source()) Using fields to access data --------------------------- .. warning:: These *specific* operations will load the entire field -- which can be extremely memory intensive with large datasets! If you are looking to compute quantities, see :ref:`Data-objects` for methods for computing aggregates, averages, subsets, regriddings, etc. The primary *use* of fields in yt is to access data from a dataset. For example, if I want to use a data object (see :ref:`Data-objects` for more detail about data objects) to access the ``('gas', 'density')`` field, one can do any of the following: .. code-block:: python ad = ds.all_data() # just a field name density = ad["density"] # field tuple with no parentheses density = ad["gas", "density"] # full field tuple density = ad[("gas", "density")] # through the ds.fields object density = ad[ds.fields.gas.density] The first data access example is the simplest. In that example, the field type is inferred from the name of the field. However, an error will be raised if there are multiple field names that could be meant by this simple string access. The next two examples use the field type explicitly; this might be necessary if there is more than one field type with a ``'density'`` field defined in the same dataset. The third example is slightly more verbose but is syntactically identical to the second example due to the way indexing works in the Python language. The final example uses the ``ds.fields`` object described above. This way of accessing fields lends itself to interactive use, especially if you make heavy use of IPython's tab completion features. Any of these ways of denoting the ``('gas', 'density')`` field can be used when supplying a field name to a yt data object, analysis routines, or plotting and visualization functions. Accessing Fields without a Field Type ------------------------------------- In previous versions of yt, there was a single mechanism of accessing fields on a data container -- by their name, which was mandated to be a single string, and which often varied between different code frontends. 
yt 3.0 allows for datasets containing multiple different types of fluid fields, mesh fields, particles (with overlapping or disjoint lists of fields). However, to preserve backward compatibility and make interactive use simpler, yt 4.1 and newer will still accept field names given as a string *if and only if they match exactly one existing field*. As an example, we may be in a situation where we have multiple types of particles which possess the ``'particle_position'`` field. In the case where a data container, here called ``ad`` (short for "all data"), contains a field, we can specify which particular particle type we want to query: .. code-block:: python print(ad["dark_matter", "particle_position"]) print(ad["stars", "particle_position"]) print(ad["black_holes", "particle_position"]) Each of these three fields may have different sizes. In order to enable falling back on asking only for a field by the name, yt will use the most recently requested field type for subsequent queries. (By default, if no field has been queried, it will look for the special field ``'all'``, which concatenates all particle types.) For example, if I were to then query for the velocity: .. code-block:: python print(ad["particle_velocity"]) it would select ``black_holes`` as the field type, since the last field accessed used that field type. The same operations work for fluid and mesh fields. As an example, in some cosmology simulations, we may want to examine the mass of particles in a region versus the mass of gas. We can do so by examining the special "deposit" field types (described below) versus the gas fields: .. code-block:: python print(ad["deposit", "dark_matter_density"] / ad["gas", "density"]) The ``'deposit'`` field type is a mesh field, so it will have the same shape as the gas density. If we weren't using ``'deposit'``, and instead directly querying a particle field, this *wouldn't* work, as they are different shapes. This is the primary difference, in practice, between mesh and particle fields -- they will be different shapes and so cannot be directly compared without translating one to the other, typically through a "deposition" or "smoothing" step. How are fields implemented? --------------------------- There are two classes of fields in yt. The first are those fields that exist external to yt, which are immutable and can be queried -- most commonly, these are fields that exist on disk. These will often be returned in units that are not in a known, external unit system (except possibly by design, on the part of the code that wrote the data), and yt will make every effort to use the names by which they are referred to by the data producer. The default field type for mesh fields that are "on-disk" is the name of the code frontend. (For example, ``'art'``, ``'enzo'``, ``'pyne'``, and so on.) The default name for particle fields, if they do not have a particle type affiliated with them, is ``'io'``. The second class of field is the "derived field." These are fields that are functionally defined, either *ab initio* or as a transformation or combination of other fields. For example, when dealing with simulation codes, often the fields that are evolved and output to disk are not the fields that are the most relevant to researchers. Rather than examining the internal gas energy, it is more convenient to think of the temperature. By applying one or multiple functions to on-disk quantities, yt can construct new derived fields from them. 
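As a concrete sketch of that last point, the following defines a hypothetical derived field by combining two existing ``'gas'`` fields (the field name ``my_thermal_energy_density`` is made up purely for illustration, and the ``GasSloshing`` sample dataset is assumed; the full mechanics are described in :ref:`creating-derived-fields`):

.. code-block:: python

    import yt


    def _my_thermal_energy_density(field, data):
        # multiply two existing 'gas' fields; units are carried through
        # automatically (g/cm**3 * erg/g -> erg/cm**3)
        return data["gas", "density"] * data["gas", "specific_thermal_energy"]


    ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
    ds.add_field(
        ("gas", "my_thermal_energy_density"),
        function=_my_thermal_energy_density,
        sampling_type="cell",
        units="erg/cm**3",
    )

    ad = ds.all_data()
    print(ad["gas", "my_thermal_energy_density"])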
Derived fields do not always have to relate to the data found on disk; special fields such as ``'x'``, ``'y'``, ``'phi'`` and ``'dz'`` all relate exclusively to the geometry of the mesh, and provide information about the mesh that can be used elsewhere for further transformations. For more information, see :ref:`creating-derived-fields`. There is a third, borderline class of field in yt, as well. This is the "alias" type, where a field on disk (for example, (``'*frontend*'``, ``'Density'``)) is aliased into an internal yt-name (for example, (``'gas'``, ``'density'``)). The aliasing process allows universally-defined derived fields to take advantage of internal names, and it also provides an easy way to address what units something should be returned in. If an aliased field is requested (and aliased fields will always be lowercase, with underscores separating words) it will be returned in the units specified by the unit system of the dataset, whereas if the frontend-specific field is requested, it will not undergo any unit conversions from its natural units. (This rule is occasionally violated for fields which are mesh-dependent, specifically particle masses in some cosmology codes.) .. _known-field-types: Field types known to yt ----------------------- Recall that fields are formally accessed in two parts: ``('*field type*', '*field name*')``. Here we describe the different field types you will encounter: * frontend-name -- Mesh or fluid fields that exist on-disk default to having the name of the frontend as their type name (e.g., ``'enzo'``, ``'flash'``, ``'pyne'`` and so on). The units of these types are whatever units are designated by the source frontend when it writes the data. * ``'index'`` -- This field type refers to characteristics of the mesh, whether that mesh is defined by the simulation or internally by an octree indexing of particle data. A few handy fields are ``'x'``, ``'y'``, ``'z'``, ``'theta'``, ``'phi'``, ``'radius'``, ``'dx'``, ``'dy'``, ``'dz'`` and so on. Default units are in CGS. * ``'gas'`` -- This is the usual default for simulation frontends for fluid types. These fields are typically aliased to the frontend-specific mesh fields for grid-based codes or to the deposit fields for particle-based codes. Default units are in the unit system of the dataset. * particle type -- These are particle fields that exist on-disk as written by individual frontends. If the frontend designates names for these particles (i.e. particle type) those names are the field types. Additionally, any particle unions or filters will be accessible as field types. Examples of particle types are ``'Stars'``, ``'DM'``, ``'io'``, etc. Like the frontend-specific mesh or fluid fields, the units of these fields are whatever was designated by the source frontend when written to disk. * ``'io'`` -- If a data frontend does not have a set of multiple particle types, this is the default for all particles. * ``'all'`` and ``'nbody'`` -- These are special particle field types that represent a concatenation of several particle field types using :ref:`particle-unions`. ``'all'`` contains every base particle type, while ``'nbody'`` contains only the ones for which a ``'particle_mass'`` field is defined. * ``'deposit'`` -- This field type refers to the deposition of particles (discrete data) onto a mesh, typically to compute smoothing kernels, local density estimates, counts, and the like. See :ref:`deposited-particle-fields` for more information. 
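To make these distinctions concrete, here is a minimal sketch that touches several of the field types above on a single dataset (the ``IsolatedGalaxy`` Enzo sample dataset is assumed, so the frontend-specific type is ``'enzo'``; other codes will use their own frontend names and on-disk field names):

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    ad = ds.all_data()

    # frontend-specific on-disk field, returned in its on-disk units
    print(ad["enzo", "Density"])

    # the 'gas' alias of the same field, in the dataset's unit system
    print(ad["gas", "density"])

    # an 'index' field describing the mesh itself
    print(ad["index", "dx"])

    # a particle field, accessed through the special 'all' union
    print(ad["all", "particle_mass"])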
While it is best to be explicit and access fields by their full names (i.e. ``('*field type*', '*field name*')``), yt provides an abbreviated interface for accessing common fields (i.e. ``'*field name*'``). In the abbreviated case, yt will assume you want the last *field type* accessed. If you haven't previously accessed a *field type*, it will default to *field type* = ``'all'`` in the case of particle fields and *field type* = ``'gas'`` in the case of mesh fields. Field Plugins ------------- Derived fields are organized via plugins. Inside yt are a number of field plugins, which take information about fields in a dataset and then construct derived fields on top of them. This allows them to take into account variations in naming system, units, data representations, and most importantly, allows only the fields that are relevant to be added. This system will be expanded in future versions to enable much deeper semantic awareness of the data types being analyzed by yt. The field plugin system works in this order: * Available, inherent fields are identified by yt * The list of enabled field plugins is iterated over. Each is called, and new derived fields are added as relevant. * Any fields which are not available, or which throw errors, are discarded. * Remaining fields are added to the list of derived fields available for a dataset * Dependencies for every derived field are identified, to enable data preloading Field plugins can be loaded dynamically, although at present this is not particularly useful. Plans for extending field plugins to dynamically load, to enable simple definition of common types (divergence, curl, etc), and to more verbosely describe available fields, have been put in place for future versions. The field plugins currently available include: * Angular momentum fields for particles and fluids * Astrophysical fields, such as those related to cosmology * Vector fields for fluid fields, such as gradients and divergences * Particle vector fields * Magnetic field-related fields * Species fields, such as for chemistry species (yt can recognize the entire periodic table in field names and construct ionization fields as need be) Field Labeling -------------- By default yt formats field labels nicely for plots. To adjust the chosen format you can use the ``ds.set_field_label_format`` method like so: .. code-block:: python ds = yt.load("my_data") ds.set_field_label_format("ionization_label", "plus_minus") The first argument accepts a ``format_property``, or specific aspect of the labeling, and the second sets the corresponding ``value``. Currently available format properties are * ``ionization_label``: sets how the ionization states of ions are labeled. Available options are ``"plus_minus"`` and ``"roman_numeral"`` .. _efields: Energy and Momentum Fields -------------------------- Fields in yt representing energy and momentum quantities follow a specific naming convention (as of yt-4.x). In hydrodynamic simulations, the relevant quantities are often energy per unit mass or volume, momentum, or momentum density. To distinguish clearly between the different types of fields, the following naming convention is adhered to: * Energy per unit mass fields are named as ``'specific_*_energy'`` * Energy per unit volume fields are named as ``'*_energy_density'`` * Momentum fields should be named ``'momentum_density_*'`` for momentum per unit volume, or ``'momentum_*'`` for momentum, where the ``*`` indicates one of three coordinate axes in any supported coordinate system. 
For example, in the case of kinetic energy, the fields should be ``'kinetic_energy_density'`` and ``'specific_kinetic_energy'``. In versions of yt previous to v4.0.0, these conventions were not adopted, and so energy fields in particular could be ambiguous with respect to units. For example, the ``'kinetic_energy'`` field was actually kinetic energy per unit volume, whereas the ``'thermal_energy'`` field, usually defined by various frontends, was typically thermal energy per unit mass. The above scheme rectifies these problems. For a transition period, the previous field names were mapped to the current field naming scheme with a deprecation warning; these aliases were removed in yt v4.1.0. .. _bfields: Magnetic Fields --------------- Magnetic fields require special handling, because their dimensions are different in different systems of units, in particular between the CGS and MKS (SI) systems of units. Superficially, it would appear that they have the same dimensions, since the units of the magnetic field in the CGS and MKS system are gauss (:math:`\rm{G}`) and tesla (:math:`\rm{T}`), respectively, and numerically :math:`1~\rm{G} = 10^{-4}~\rm{T}`. However, if we examine the base units, we find that they do indeed have different dimensions: .. math:: \rm{1~G = 1~\frac{\sqrt{g}}{\sqrt{cm}\cdot{s}}} \\ \rm{1~T = 1~\frac{kg}{A\cdot{s^2}}} It is easier to see the difference between the dimensionality of the magnetic field in the two systems in terms of the definition of the magnetic pressure and the Alfvén speed: .. math:: p_B = \frac{B^2}{8\pi}~\rm{(cgs)} \\ p_B = \frac{B^2}{2\mu_0}~\rm{(MKS)} .. math:: v_A = \frac{B}{\sqrt{4\pi\rho}}~\rm{(cgs)} \\ v_A = \frac{B}{\sqrt{\mu_0\rho}}~\rm{(MKS)} where :math:`\mu_0 = 4\pi \times 10^{-7}~\rm{N/A^2}` is the vacuum permeability. This different normalization in the definition of the magnetic field may show up in other relevant quantities as well. For certain frontends, a third definition of the magnetic field and the magnetic pressure may be useful. In many MHD simulations and in some physics areas (such as particle physics/GR) it is more common to use the "Lorentz-Heaviside" convention, which results in: .. math:: p_B = \frac{B^2}{2} \\ v_A = \frac{B}{\sqrt{\rho}} Using this convention is currently only available for :ref:`Athena`, :ref:`Athena++`, and :ref:`AthenaPK` datasets, though it will likely be available for more datasets in the future. yt automatically detects on a per-frontend basis what units the magnetic field should be in, and allows conversion between different magnetic field units in the different unit systems as well. To determine how to set up special magnetic field handling when designing a new frontend, check out :ref:`bfields-frontend`. .. _species-fields: Species Fields -------------- For many types of data, yt is able to detect different chemical elements and molecules within the dataset, as well as their abundances and ionization states. Examples include: * CO (Carbon monoxide) * Co (Cobalt) * OVI (Oxygen ionized five times) * H:math:`_2^+` (Molecular Hydrogen ionized once) * H:math:`^{-}` (Hydrogen atom with an additional electron) The naming scheme for the fields starts with prefixes in the form ``MM[_[mp][NN]]``. ``MM`` is the molecule, defined as a concatenation of atomic symbols and numbers, with no spaces or underscores. The second sequence is only required if ionization states are present in the dataset, and is of the form ``p`` and ``m`` to indicate "plus" or "minus" respectively, followed by the number. 
If a given species has no ionization states given, the prefix is simply ``MM``. For the examples above, the prefixes would be: * ``CO`` * ``Co`` * ``O_p5`` * ``H2_p1`` * ``H_m1`` The name ``El`` is used for electron fields, as it is unambiguous and will not be utilized elsewhere. Neutral ionic species (e.g. H I, O I) are represented as ``MM_p0``. Additionally, the isotope of :math:`^2`H will be included as ``D``. Finally, in those frontends which are single-fluid, these fields for each species are defined: * ``MM[_[mp][NN]]_fraction`` * ``MM[_[mp][NN]]_number_density`` * ``MM[_[mp][NN]]_density`` * ``MM[_[mp][NN]]_mass`` To refer to the number density of the entirety of a single atom or molecule (regardless of its ionization state), please use the ``MM_nuclei_density`` fields. Many datasets do not have species defined, but there may be an underlying assumption of primordial abundances of H and He which are either fully ionized or fully neutral. This will also determine the value of the mean molecular weight of the gas, which will determine the value of the temperature if derived from another quantity like the pressure or thermal energy. To allow for these possibilities, there is a keyword argument ``default_species_fields`` which can be passed to :func:`~yt.loaders.load`: .. code-block:: python import yt ds = yt.load( "GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150", default_species_fields="ionized" ) By default, the value of this optional argument is ``None``, which will not initialize any default species fields. If the ``default_species_fields`` argument is not set to ``None``, then the following fields are defined: * ``H_nuclei_density`` * ``He_nuclei_density`` More specifically, if ``default_species_fields="ionized"``, then these additional fields are defined: * ``H_p1_number_density`` (Ionized hydrogen: equal to the value of ``H_nuclei_density``) * ``He_p2_number_density`` (Doubly ionized helium: equal to the value of ``He_nuclei_density``) * ``El_number_density`` (Free electrons: assuming full ionization) Whereas if ``default_species_fields="neutral"``, then these additional fields are defined: * ``H_p0_number_density`` (Neutral hydrogen: equal to the value of ``H_nuclei_density``) * ``He_p0_number_density`` (Neutral helium: equal to the value of ``He_nuclei_density``) In this latter case, because the gas is neutral, ``El_number_density`` is not defined. The ``mean_molecular_weight`` field will be constructed from the abundances of the elements in the dataset. If no element or molecule fields are defined, the value of this field is determined by the value of ``default_species_fields``. If it is set to ``None`` or ``"ionized"``, the ``mean_molecular_weight`` field is set to :math:`\mu \approx 0.6`, whereas if ``default_species_fields`` is set to ``"neutral"``, then the ``mean_molecular_weight`` field is set to :math:`\mu \approx 1.14`. Some frontends do not directly store the gas temperature in their datasets, in which case it must be computed from the pressure and/or thermal energy as well as the mean molecular weight, so check this carefully! Particle Fields --------------- Naturally, particle fields contain properties of particles rather than grid cells. By examining the particle field in detail, you can see that each element of the field array represents a single particle, whereas in mesh fields each element represents a single mesh cell. 
This means that, for the most part, operations such as filters (see :ref:`filtering-data`) cannot operate on particle fields and mesh fields simultaneously in the same way. However, many of the particle fields have corresponding mesh fields that can be populated by "depositing" the particle values onto a yt grid as described below. .. _field_parameters: Field Parameters ---------------- Certain fields require external information in order to be calculated. For example, the radius field has to be defined based on some point of reference and the radial velocity field needs to know the bulk velocity of the data object so that it can be subtracted. This information is passed into a field function by setting field parameters, which are user-specified data that can be associated with a data object. The :meth:`~yt.data_objects.data_containers.YTDataContainer.set_field_parameter` and :meth:`~yt.data_objects.data_containers.YTDataContainer.get_field_parameter` functions are used to set and retrieve field parameter values for a given data object. In the cases above, the field parameters are ``center`` and ``bulk_velocity`` respectively -- the two most commonly used field parameters. .. code-block:: python ds = yt.load("my_data") ad = ds.all_data() ad.set_field_parameter("wickets", 13) print(ad.get_field_parameter("wickets")) If a field parameter is not set, ``get_field_parameter`` will return None. Within a field function, these can then be retrieved and used in the same way. .. code-block:: python def _wicket_density(field, data): n_wickets = data.get_field_parameter("wickets") if n_wickets is None: # use a default if unset n_wickets = 88 return data["gas", "density"] * n_wickets For a practical application of this, see :ref:`cookbook-radial-velocity`. .. _gradient_fields: Gradient Fields --------------- yt provides a way to compute gradients of spatial fields using the :meth:`~yt.data_objects.static_output.Dataset.add_gradient_fields` method. If you have a spatially-based field such as density or temperature, and want to calculate the gradient of that field, you can do it like so: .. code-block:: python ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150") grad_fields = ds.add_gradient_fields(("gas", "temperature")) where ``grad_fields`` will now be a list of new field names that can be used in calculations, representing the 3 different components of the field and the magnitude of the gradient, e.g., ``"temperature_gradient_x"``, ``"temperature_gradient_y"``, ``"temperature_gradient_z"``, and ``"temperature_gradient_magnitude"``. To see an example of how to create and use these fields, see :ref:`cookbook-complicated-derived-fields`. .. _relative_fields: Relative Vector Fields ---------------------- yt makes use of "relative" fields for certain vector fields, which are fields defined relative to a particular origin in the space of that field. For example, relative particle positions can be specified relative to a center coordinate, and relative velocities can be specified relative to a bulk velocity. These origin points are specified by setting field parameters as detailed below (see :ref:`field_parameters` for more information). 
The relative fields which are currently supported for gas fields are: * ``('gas', 'relative_velocity_x')``, defined by setting the ``'bulk_velocity'`` field parameter * ``('gas', 'relative_magnetic_field_x')``, defined by setting the ``'bulk_magnetic_field'`` field parameter Note that fields ending in ``'_x'`` are defined for each component. For particle fields, for a given particle type ``ptype``, the relative fields which are supported are: * ``(*ptype*, 'relative_particle_position')``, defined by setting the ``'center'`` field parameter * ``(*ptype*, 'relative_particle_velocity')``, defined by setting the ``'bulk_velocity'`` field parameter * ``(*ptype*, 'relative_particle_position_x')``, defined by setting the ``'center'`` field parameter * ``(*ptype*, 'relative_particle_velocity_x')``, defined by setting the ``'bulk_velocity'`` field parameter These fields are in use when defining magnitude fields, line-of-sight fields, etc. The ``'bulk_*'`` field parameters are ``[0.0, 0.0, 0.0]`` by default, and the ``'center'`` field parameter depends on the data container in use. There is currently no mechanism to create new relative fields, but one may be added at a later time. .. _los_fields: Line of Sight Fields -------------------- In astrophysics applications, one often wants to know the component of a vector field along a given line of sight. If you are doing a projection of a vector field along an axis, or just want to obtain the values of a vector field component along an axis, you can use a line-of-sight field. For projections, this will be handled automatically: .. code-block:: python prj = yt.ProjectionPlot( ds, "z", fields=("gas", "velocity_los"), weight_field=("gas", "density"), ) This, because the axis is ``'z'``, will give you the same result as if you had projected the ``'velocity_z'`` field. This also works for off-axis projections, using an arbitrary normal vector: .. code-block:: python prj = yt.ProjectionPlot( ds, [0.1, -0.2, 0.3], fields=("gas", "velocity_los"), weight_field=("gas", "density"), ) This shows that the projection axis can be along a principal axis of the domain or an arbitrary off-axis 3-vector (which will be automatically normalized). If you want to examine a line-of-sight vector within a 3-D data object, set the ``'axis'`` field parameter: .. code-block:: python dd = ds.all_data() # Set to one of [0, 1, 2] for ["x", "y", "z"] axes dd.set_field_parameter("axis", 1) print(dd["gas", "magnetic_field_los"]) # Set to a three-vector for an off-axis component dd.set_field_parameter("axis", [0.3, 0.4, -0.7]) print(dd["gas", "velocity_los"]) # particle fields are supported too! print(dd["all", "particle_velocity_los"]) .. warning:: If you need to change the axis of the line of sight on the *same* data container (sphere, box, cylinder, or whatever), you will need to delete the field using (e.g.) ``del dd['velocity_los']`` and re-generate it. At this time, this functionality is enabled for the velocity and magnetic vector fields, ``('gas', 'velocity_los')`` and ``('gas', 'magnetic_field_los')`` for the ``"gas"`` field type, as well as every particle type with a velocity field, e.g. ``("all", "particle_velocity_los")``. 
The following fields built into yt make use of these line-of-sight fields: * ``('gas', 'sz_kinetic')`` uses ``('gas', 'velocity_los')`` * ``('gas', 'rotation_measure')`` uses ``('gas', 'magnetic_field_los')`` General Particle Fields ----------------------- Every particle will contain both a ``'particle_position'`` and ``'particle_velocity'`` that tracks the position and velocity (respectively) in code units. .. _deposited-particle-fields: Deposited Particle Fields ------------------------- In order to turn particle (discrete) fields into fields that are deposited in some regular, space-filling way (even if that space is empty, it is defined everywhere) yt provides mechanisms for depositing particles onto a mesh. These are in the special field-type space ``'deposit'``, and are typically of the form ``('deposit', 'particletype_depositiontype')`` where ``depositiontype`` is the mechanism by which the field is deposited, and ``particletype`` is the particle type of the particles being deposited. If you are attempting to examine the cloud-in-cell (``cic``) deposition of the ``all`` particle type, you would access the field ``('deposit', 'all_cic')``. yt defines a few particular types of deposition internally, and creating new ones can be done by modifying the files ``yt/geometry/particle_deposit.pyx`` and ``yt/fields/particle_fields.py``, although that is an advanced topic somewhat outside the scope of this section. The default deposition types available are: * ``count`` - this field counts the total number of particles of a given type in a given mesh zone. Note that because, in general, the mesh for particle datasets is defined by the number of particles in a region, this may not be the most useful metric. This may be made more useful by depositing particle data onto an :ref:`arbitrary-grid`. * ``density`` - this field takes the total sum of ``particle_mass`` in a given mesh zone and divides by the volume. * ``mass`` - this field takes the total sum of ``particle_mass`` in each mesh zone. * ``cic`` - this field performs cloud-in-cell interpolation (see `Section 2.2 `_ for more information) of the density of particles in a given mesh zone. * ``smoothed`` - this is a special deposition type. See discussion below for more information, in :ref:`sph-fields`. You can also directly use the :meth:`~yt.data_objects.static_output.Dataset.add_deposited_particle_field` function defined on each dataset to deposit any particle field onto the mesh like so: .. code-block:: python import yt ds = yt.load("output_00080/info_00080.txt") fname = ds.add_deposited_particle_field( ("all", "particle_velocity_x"), method="nearest" ) print(f"The velocity of the particles is stored in {fname}") print(ds.r[fname]) .. note:: In this example, we are using the returned field name as our input. You *could* also access it directly, but it might take a slightly different form than you expect -- in this particular case, the field name will be ``("deposit", "all_nn_velocity_x")``, which has removed the prefix ``particle_`` from the deposited name! Possible deposition methods are: * ``'simple_smooth'`` - perform an SPH-like deposition of the field onto the mesh optionally accepting a ``kernel_name``. * ``'sum'`` - sums the value of the particle field for all particles found in each cell. * ``'std'`` - computes the standard deviation of the value of the particle field for all particles found in each cell. 
* ``'cic'`` - performs cloud-in-cell interpolation (see `Section 2.2 `_ for more information) of the particle field on a given mesh zone. * ``'weighted_mean'`` - computes the mean of the particle field, weighted by the field passed into ``weight_field`` (by default, it uses the particle mass). * ``'count'`` - counts the number of particles in each cell. * ``'nearest'`` - assign to each cell the value of the closest particle. In addition, the :meth:`~yt.data_objects.static_output.Dataset.add_deposited_particle_field` function returns the name of the newly created field. Deposited particle fields can be useful for visualizing particle data, including particles without defined smoothing lengths. See :ref:`particle-plotting-workarounds` for more information. .. _mesh-sampling-particle-fields: Mesh Sampling Particle Fields ----------------------------- In order to turn mesh fields into discrete particle fields, yt provides a mechanism to sample mesh fields at particle locations. This operation is the inverse operation of :ref:`deposited-particle-fields`: for each particle the cell containing the particle is found and the value of the field in the cell is assigned to the particle. This is useful, for example, when using tracer particles to access the Eulerian information for Lagrangian particles. The particle fields are named ``('*ptype*', 'cell_*ftype*_*fname*')`` where ``ptype`` is the particle type onto which the deposition occurs, ``ftype`` is the mesh field type (e.g. ``'gas'``) and ``fname`` is the field (e.g. ``'temperature'``, ``'density'``, ...). You can directly use the :meth:`~yt.data_objects.static_output.Dataset.add_mesh_sampling_particle_field` function defined on each dataset to sample a field at the particle positions like so: .. code-block:: python import yt ds = yt.load("output_00080/info_00080.txt") ds.add_mesh_sampling_particle_field(("gas", "temperature"), ptype="all") print("The temperature at the location of the particles is") print(ds.r["all", "cell_gas_temperature"]) For octree codes (e.g. RAMSES), you can trigger the build of an index so that the next sampling operations will be much faster: .. code-block:: python import yt ds = yt.load("output_00080/info_00080.txt") ds.add_mesh_sampling_particle_field(("gas", "temperature"), ptype="all") ad = ds.all_data() ad[ "all", "cell_index" ] # Trigger the build of the index of the cell containing the particles ad["all", "cell_gas_temperature"] # This is now much faster .. _sph-fields: SPH Fields ---------- See :ref:`yt4differences`. In previous versions of yt, there were ways of computing the distance to the N-th nearest neighbor of a particle, as well as computing the nearest particle value on a mesh. Unfortunately, because of changes to the way that particles are regarded in yt, these are not currently available. We hope that this will be rectified in future versions and are tracking this in `Issue 3301 `_. You can read a bit more about the way yt now handles particles in the section :ref:`demeshening`. **But!** It is possible to compute the smoothed values from SPH particles on grids. For example, one can construct a covering grid that extends over the entire domain of a simulation, with resolution 256x256x256, and compute the gas density with this reasonably terse command: .. 
code-block:: python import yt ds = yt.load("snapshot_033/snap_033.0.hdf5") cg = ds.r[::256j, ::256j, ::256j] smoothed_values = cg["gas", "density"] This will work for any smoothed field; any field that is under the ``'gas'`` field type will be a smoothed field in an SPH-based simulation. Here we have used the ``ds.r[]`` notation, as described in :ref:`quickly-selecting-data` for creating what's called an "arbitrary grid" (:class:`~yt.data_objects.construction_data_containers.YTArbitraryGrid`). You can, of course, also supply left and right edges to make the grid take up a much smaller portion of the domain, as well, by supplying the arguments as detailed in :ref:`arbitrary-grid-selection` and supplying the bounds as the first and second elements in each element of the slice. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/analyzing/filtering.rst0000644000175100001770000003113514714401662020346 0ustar00runnerdocker.. _filtering-data: Filtering your Dataset ====================== Large datasets are oftentimes too overwhelming to deal with in their entirety, and it can be useful and faster to analyze subsets of these datasets. Furthermore, filtering the dataset based on some field condition can reveal subtle information not easily accessible by looking at the whole dataset. Filters can be generated based on spatial position, say in a sphere in the center of your dataset space, or more generally they can be defined by the properties of any field in the simulation. Because *mesh fields* are internally different from *particle fields*, there are different ways of filtering each type as indicated below; however, filtering fields by spatial location (i.e. geometric objects) will apply to both types equally. .. _filtering-mesh: Filtering Mesh Fields ---------------------- Mesh fields can be filtered by two methods: cut region objects (:class:`~yt.data_objects.selection_data_containers.YTCutRegion`) and NumPy boolean masks. Boolean masks are simpler, but they only work for examining datasets, whereas cut region objects create wholly new data objects suitable for full analysis (data examination, image generation, etc.) Boolean Masks ^^^^^^^^^^^^^ NumPy boolean masks can be used with any NumPy array simply by passing the array a conditional. As a general example of this: .. notebook-cell:: import numpy as np a = np.arange(5) bigger_than_two = a > 2 print("Original Array: a = \n%s" % a) print("Boolean Mask: bigger_than_two = \n%s" % bigger_than_two) print("Masked Array: a[bigger_than_two] = \n%s" % a[bigger_than_two]) Similarly, if you've created a yt data object (e.g. a region, a sphere), you can examine its field values as a NumPy array by simply indexing it with the field name. Thus, it too can be masked using a NumPy boolean mask. Let's set a simple mask based on the contents of one of our fields. .. notebook-cell:: import yt ds = yt.load("Enzo_64/DD0042/data0042") ad = ds.all_data() hot = ad["gas", "temperature"].in_units("K") > 1e6 print( 'Temperature of all data: ad["gas", "temperature"] = \n%s' % ad["gas", "temperature"] ) print("Boolean Mask: hot = \n%s" % hot) print( 'Temperature of "hot" data: ad["gas", "temperature"][hot] = \n%s' % ad["gas", "temperature"][hot] ) This was a simple example, but one can make the conditionals that define a boolean mask have multiple parts, and one can stack masks together to make very complex cuts on one's data. 
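As a minimal sketch of stacking masks (the density threshold here is an arbitrary value chosen purely for illustration), two precomputed masks can be combined with NumPy's elementwise operators:

.. notebook-cell::

   import yt

   ds = yt.load("Enzo_64/DD0042/data0042")
   ad = ds.all_data()

   # two independent boolean masks over the same data object
   hot = ad["gas", "temperature"].in_units("K") > 1e6
   dense = ad["gas", "density"] > ds.quan(1e-28, "g/cm**3")

   # & is elementwise AND; | (OR) and ~ (NOT) work the same way
   hot_and_dense = hot & dense
   print(ad["gas", "density"][hot_and_dense])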
Once the data is filtered, it can be used if you simply need to access the NumPy arrays: .. notebook-cell:: import yt ds = yt.load("Enzo_64/DD0042/data0042") ad = ds.all_data() overpressure_and_fast = ( (ad["gas", "pressure"] > 1e-14) & (ad["gas", "velocity_magnitude"].in_units("km/s") > 1e2) ) density = ad["gas", "density"] print('Density of all data: ad["gas", "density"] = \n%s' % density) print( 'Density of "overpressure and fast" data: ad["gas", "density"][overpressure_and_fast] = \n%s' % density[overpressure_and_fast] ) .. _cut-regions: Cut Regions ^^^^^^^^^^^ Cut regions are a more general solution to filtering mesh fields. The output of a cut region is an entirely new data object, which can be treated like any other data object to generate images, examine its values, etc. See `this `_. In addition to inputting string parameters into ``cut_region`` to specify filters, wrapper functions exist that allow the user to use a simplified syntax for filtering out unwanted regions. Such wrapper functions are methods of ``YTSelectionContainer3D``. .. notebook-cell:: import yt ds = yt.load("Enzo_64/DD0042/data0042") ad = ds.all_data() overpressure_and_fast = ad.include_above(("gas", "pressure"), 1e-14) # You can chain include_xx and exclude_xx to produce the intersection of cut regions overpressure_and_fast = overpressure_and_fast.include_above( ("gas", "velocity_magnitude"), 1e2, "km/s" ) print('Density of all data: ad["gas", "density"] = \n%s' % ad["gas", "density"]) print( 'Density of "overpressure and fast" data: overpressure_and_fast["gas", "density"] = \n%s' % overpressure_and_fast["gas", "density"] ) The following exclude and include functions are supported: - :func:`~yt.data_objects.data_containers.YTSelectionContainer3D.include_equal` - Only include values equal to given value - :func:`~yt.data_objects.data_containers.YTSelectionContainer3D.exclude_equal` - Exclude values equal to given value - :func:`~yt.data_objects.data_containers.YTSelectionContainer3D.include_inside` - Only include values inside closed interval - :func:`~yt.data_objects.data_containers.YTSelectionContainer3D.exclude_inside` - Exclude values inside closed interval - :func:`~yt.data_objects.data_containers.YTSelectionContainer3D.include_outside` - Only include values outside closed interval - :func:`~yt.data_objects.data_containers.YTSelectionContainer3D.exclude_outside` - Exclude values outside closed interval - :func:`~yt.data_objects.data_containers.YTSelectionContainer3D.exclude_nan` - Exclude NaN values - :func:`~yt.data_objects.data_containers.YTSelectionContainer3D.include_above` - Only include values above given value - :func:`~yt.data_objects.data_containers.YTSelectionContainer3D.exclude_above` - Exclude values above given value - :func:`~yt.data_objects.data_containers.YTSelectionContainer3D.include_below` - Only include values below given value - :func:`~yt.data_objects.data_containers.YTSelectionContainer3D.exclude_below` - Exclude values below given value .. warning:: Cut regions are unstable when used on particle fields. Though you can create a cut region using a mesh field or fields as a filter and then obtain a particle field within that region, you cannot create a cut region using particle fields in the filter, as yt will currently raise an error. If you want to filter particle fields, see the next section :ref:`filtering-particles` instead. .. 
.. warning::

    Cut regions are unstable when used on particle fields. Though you can
    create a cut region using a mesh field or fields as a filter and then
    obtain a particle field within that region, you cannot create a cut
    region using particle fields in the filter, as yt will currently raise
    an error. If you want to filter particle fields, see the next section
    :ref:`filtering-particles` instead.

.. _filtering-particles:

Filtering Particle Fields
-------------------------

Particle filters create new particle fields based on the manipulation and
cuts on existing particle fields. You can apply cuts to them to effectively
mask out everything except the particles with which you are concerned.

Creating a particle filter takes a few steps. You must first define a
function which accepts a data object (e.g. all_data, sphere, etc.) as its
argument. It uses the fields and information in this geometric object in
order to produce some sort of conditional mask that is then returned to
create a new particle type.

Here is a particle filter to create a new ``star`` particle type. For Enzo
simulations, stars have ``particle_type`` set to 2, so our filter will select
only the particles with ``particle_type`` (i.e. field =
``('all', 'particle_type')``) equal to 2.

.. code-block:: python

    @yt.particle_filter(requires=["particle_type"], filtered_type="all")
    def stars(pfilter, data):
        filter = data[pfilter.filtered_type, "particle_type"] == 2
        return filter

The :func:`~yt.data_objects.particle_filters.particle_filter` decorator takes
a few options. You must specify the names of the particle fields that are
required in order to define the filter --- in this case the
``particle_type`` field. Additionally, you must specify the particle type to
be filtered --- in this case we filter all the particles in the dataset by
specifying the ``all`` particle type.

In addition, you may specify a name for the newly defined particle type. If
no name is specified, the name for the particle type will be inferred from
the name of the filter definition --- in this case the inferred name will be
``stars``.

As an alternative syntax, you can also define a new particle filter via the
:func:`~yt.data_objects.particle_filters.add_particle_filter` function.

.. code-block:: python

    def stars(pfilter, data):
        filter = data[pfilter.filtered_type, "particle_type"] == 2
        return filter


    yt.add_particle_filter(
        "stars", function=stars, filtered_type="all", requires=["particle_type"]
    )

This is equivalent to our use of the ``particle_filter`` decorator above. The
choice to use either the ``particle_filter`` decorator or the
``add_particle_filter`` function is a purely stylistic choice.

Lastly, the filter must be applied to our dataset of choice. Note that this
filter can be added to as many datasets as we wish. It will only actually
create new filtered fields if the dataset has the required fields, though.

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    ds.add_particle_filter("stars")

And that's it! We can now access all of the ('stars', field) fields from our
dataset ``ds`` and treat them as any other particle field. In addition, this
creates some ``deposit`` fields, where the particles are deposited onto the
grid as mesh fields.

We can create additional filters building on top of the filters we have. For
example, we can identify the young stars based on their age, which is the
difference between the current time and their creation_time.

.. code-block:: python

    import numpy as np


    def young_stars(pfilter, data):
        age = data.ds.current_time - data[pfilter.filtered_type, "creation_time"]
        filter = np.logical_and(age.in_units("Myr") <= 5, age >= 0)
        return filter


    yt.add_particle_filter(
        "young_stars",
        function=young_stars,
        filtered_type="stars",
        requires=["creation_time"],
    )

If we properly define all the filters using the decorator
``yt.particle_filter`` or the function ``yt.add_particle_filter`` in advance,
we can add the filters we need to the dataset. If the ``filtered_type`` is
itself a filter that has been defined but not yet added to the dataset, it
will automatically be added first. For example, if we add the
``young_stars`` filter, which is filtered from ``stars``, to the dataset, it
will also add the ``stars`` filter to the dataset.

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    ds.add_particle_filter("young_stars")

Additional examples of particle filters can be found in the
`notebook `_.

.. _particle-unions:

Particle Unions
---------------

Multiple types of particles can be combined into a single, conceptual type.
As an example, the NMSU-ART code has multiple "species" of dark matter, which
we union into a single ``darkmatter`` particle type. The ``all`` particle
type is a special case of this.

To create a particle union, you need to import the ``ParticleUnion`` class
from ``yt.data_objects.unions``, which you then create and pass into
``add_particle_union`` on a dataset object.

Here is an example, where we union the ``halo`` and ``disk`` particle types
into a single type, ``star``. yt will then determine which fields are
accessible to this new particle type and it will add them.

.. code-block:: python

    from yt.data_objects.unions import ParticleUnion

    u = ParticleUnion("star", ["halo", "disk"])
    ds.add_particle_union(u)

.. _filtering-by-location:

Filtering Fields by Spatial Location: Geometric Objects
--------------------------------------------------------

Creating geometric objects for a dataset provides a means for filtering a
field based on spatial location. The most commonly used of these are spheres,
regions (3D prisms), ellipsoids, disks, and rays. The ``all_data`` object
which gets used throughout this documentation section is an example of a
geometric object, but it defaults to including all the data in the dataset
volume. To see all of the geometric objects available, see
:ref:`available-objects`.

Consult the object documentation section for all of the different objects one
can use, but here is a simple example using a sphere object to filter a
dataset. Let's filter out everything not within 10 Mpc of some random
location, say [0.2, 0.5, 0.1], in the simulation volume. The resulting object
will only contain grid cells with centers falling inside of our defined
sphere, which may look offset based on the presence of different resolution
elements distributed throughout the dataset.

.. notebook-cell::

    import yt

    ds = yt.load("Enzo_64/DD0042/data0042")

    center = [0.20, 0.50, 0.10]
    sp = ds.sphere(center, (10, "Mpc"))

    prj = yt.ProjectionPlot(
        ds, "x", ("gas", "density"), center=center, width=(50, "Mpc"), data_source=sp
    )

    # Mark the center with a big X
    prj.annotate_marker(center, "x", s=100)

    prj.show()

    slc = yt.SlicePlot(
        ds, "x", ("gas", "density"), center=center, width=(50, "Mpc"), data_source=sp
    )

    slc.show()

././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/analyzing/generating_processed_data.rst0000644000175100001770000003415014714401662023546 0ustar00runnerdocker.. _generating-processed-data:

Generating Processed Data
=========================

Although yt provides a number of built-in visualization methods that can
process data and construct plots from it, it is often useful to generate the
data by hand and construct plots which can then be combined with other plots,
modified in some way, or even (gasp) created and modified in some other tool
or program.
.. _exporting-container-data:

Exporting Container Data
------------------------

Fields from data containers such as regions, spheres, cylinders, etc. can be
exported to tabular format using either a :class:`~pandas.DataFrame` or an
:class:`~astropy.table.QTable`.

To export to a :class:`~pandas.DataFrame`, use
:meth:`~yt.data_objects.data_containers.YTDataContainer.to_dataframe`:

.. code-block:: python

    sp = ds.sphere("c", (0.2, "unitary"))
    df2 = sp.to_dataframe([("gas", "density"), ("gas", "temperature")])

To export to a :class:`~astropy.table.QTable`, use
:meth:`~yt.data_objects.data_containers.YTDataContainer.to_astropy_table`:

.. code-block:: python

    sp = ds.sphere("c", (0.2, "unitary"))
    at2 = sp.to_astropy_table(fields=[("gas", "density"), ("gas", "temperature")])

For exports to :class:`~pandas.DataFrame` objects, the unit information is
lost, but for exports to :class:`~astropy.table.QTable` objects, the
:class:`~yt.units.yt_array.YTArray` objects are converted to
:class:`~astropy.units.Quantity` objects.

.. _generating-2d-image-arrays:

2D Image Arrays
---------------

When making a slice, a projection or an oblique slice in yt, the resultant
:class:`~yt.data_objects.data_containers.YTSelectionContainer2D` object is
created and contains flattened arrays of the finest available data. This
means a set of arrays for the x, y, (possibly z), dx, dy, (possibly dz) and
data values, for every point that constitutes the object.

This presents something of a challenge for visualization, as it will require
the transformation of a variable mesh of points consisting of positions and
sizes into a fixed-size array that appears like an image. This process is
that of pixelization, which yt handles transparently internally. You can
access this functionality by constructing a
:class:`~yt.visualization.fixed_resolution.FixedResolutionBuffer` and
supplying to it your
:class:`~yt.data_objects.data_containers.YTSelectionContainer2D` object, as
well as some information about how you want the final image to look. You can
specify both the bounds of the image (in the appropriate x-y plane) and the
resolution of the output image. You can then have yt pixelize any field you
like.

.. note:: In previous versions of yt, there was a special class of
    FixedResolutionBuffer for off-axis slices. This is still used for
    off-axis SPH data projections: OffAxisFixedResolutionBuffer.

To create :class:`~yt.data_objects.data_containers.YTSelectionContainer2D`
objects, you can access them as described in :ref:`data-objects`,
specifically the section :ref:`available-objects`. Here is an example of how
to window into a slice of resolution (512, 512) with bounds of (0.3, 0.5) and
(0.6, 0.8). The next step is to generate the actual 2D image array, which is
accomplished by accessing the desired field.

.. code-block:: python

    from yt.visualization.fixed_resolution import FixedResolutionBuffer

    sl = ds.slice(0, 0.5)
    frb = FixedResolutionBuffer(sl, (0.3, 0.5, 0.6, 0.8), (512, 512))
    my_image = frb["gas", "density"]

This image may then be used in a hand-constructed Matplotlib image, for
instance using :func:`~matplotlib.pyplot.imshow`.

The buffer arrays can be saved out to disk in either HDF5 or FITS format:

.. code-block:: python

    frb.save_as_dataset("my_images.h5", fields=[("gas", "density"), ("gas", "temperature")])
    frb.export_fits(
        "my_images.fits",
        fields=[("gas", "density"), ("gas", "temperature")],
        clobber=True,
        units="kpc",
    )

In the HDF5 case, the created file can be reloaded just like a regular
dataset with ``yt.load`` and will, itself, be a first-class dataset. For more
information on this, see :ref:`saving-grid-data-containers`.
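For example, a minimal sketch of that round trip, re-reading the file
written above (the variable names here are just illustrative):

.. code-block:: python

    import yt

    # The saved buffer is itself a first-class dataset
    ds_img = yt.load("my_images.h5")
    ad = ds_img.all_data()
    print(ad["gas", "density"])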
In the FITS case, there is an option for setting the ``units`` of the
coordinate system in the file. If you want to overwrite a file with the same
name, set ``clobber=True``.

The :class:`~yt.visualization.fixed_resolution.FixedResolutionBuffer` (and
its
:class:`~yt.visualization.fixed_resolution.OffAxisProjectionFixedResolutionBuffer`
subclass) can even be exported as a 2D dataset itself, which may be operated
on in the same way as any other dataset in yt:

.. code-block:: python

    ds_frb = frb.export_dataset(
        fields=[("gas", "density"), ("gas", "temperature")], nprocs=8
    )
    sp = ds_frb.sphere("c", (100.0, "kpc"))

where the ``nprocs`` parameter can be used to decompose the image into
``nprocs`` number of grids.

.. _generating-profiles-and-histograms:

Profiles and Histograms
-----------------------

Profiles and histograms can also be generated using the
:class:`~yt.visualization.profile_plotter.ProfilePlot` and
:class:`~yt.visualization.profile_plotter.PhasePlot` functions (described in
:ref:`how-to-make-1d-profiles` and :ref:`how-to-make-2d-profiles`). These
generate profiles transparently, but the objects they handle and create can
be handled manually, as well, for more control and access. The
:func:`~yt.data_objects.profiles.create_profile` function can be used to
generate 1, 2, and 3D profiles.

Profile objects can be created from any data object (see :ref:`data-objects`,
specifically the section :ref:`available-objects` for more information) and
are best thought of as distribution calculations. They can either sum up or
average one quantity with respect to one or more other quantities, and they
do this over all the data contained in their source object. When calculating
average values, the standard deviation will also be calculated.

To generate a profile, one need only specify the binning fields and the field
to be profiled. The binning fields are given together in a list. The
:func:`~yt.data_objects.profiles.create_profile` function will guess the
dimensionality of the profile based on the number of fields given. For
example, a one-dimensional profile of the mass-weighted average temperature
as a function of density within a sphere can be created in the following way:

.. code-block:: python

    import yt

    ds = yt.load("galaxy0030/galaxy0030")
    source = ds.sphere("c", (10, "kpc"))
    profile = source.profile(
        [("gas", "density")],  # the bin field
        [
            ("gas", "temperature"),  # profile field
            ("gas", "radial_velocity"),  # profile field
        ],
        weight_field=("gas", "mass"),
    )

The binning, weight, and profile data can now be accessed as:

.. code-block:: python

    print(profile.x)  # bin field
    print(profile.weight)  # weight field
    print(profile["gas", "temperature"])  # profile field
    print(profile["gas", "radial_velocity"])  # profile field

The ``profile.used`` attribute gives a boolean array of the bins which
actually have data.

.. code-block:: python

    print(profile.used)

If a weight field was given, the profile data will represent the weighted
mean of a field. In this case, the weighted standard deviation will be
calculated automatically and can be accessed via the
``profile.standard_deviation`` attribute.

.. code-block:: python

    print(profile.standard_deviation["gas", "temperature"])
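Because these are ordinary unit-carrying arrays, they can also be handed
straight to Matplotlib by hand; here is a minimal sketch, continuing from the
``profile`` object above (the axis labels are just illustrative):

.. code-block:: python

    import matplotlib.pyplot as plt

    # profile.x holds the bin centers; the profiled field is indexed by name
    plt.loglog(profile.x, profile["gas", "temperature"])
    plt.xlabel("Density")
    plt.ylabel("Temperature")
    plt.savefig("temperature_profile.png")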
A two-dimensional profile of the total gas mass in bins of density and
temperature can be created as follows:

.. code-block:: python

    profile2d = source.profile(
        [
            ("gas", "density"),  # the x bin field
            ("gas", "temperature"),  # the y bin field
        ],
        [("gas", "mass")],  # the profile field
        weight_field=None,
    )

Accessing the x, y, and profile fields works just as with one-dimensional
profiles:

.. code-block:: python

    print(profile2d.x)
    print(profile2d.y)
    print(profile2d["gas", "mass"])

One of the more interesting things that is enabled with this approach is the
generation of 1D profiles that correspond to 2D profiles. For instance, one
can make a phase plot that shows the distribution of mass in the
density-temperature plane, with the average temperature overplotted. The
:func:`~matplotlib.pyplot.pcolormesh` function can be used to manually plot
the 2D profile. If you want to generate a default profile plot, you can
simply call::

    profile.plot()

Three-dimensional profiles can be generated and accessed following the same
procedures. Additional keyword arguments are available to control the
following for each of the bin fields: the number of bins, min and max, units,
whether to use a log or linear scale, and whether or not to do accumulation
to create a cumulative distribution function. For more information, see the
API documentation on the :func:`~yt.data_objects.profiles.create_profile`
function.

For custom bins the other keyword arguments can be overridden using the
``override_bins`` keyword argument. This accepts a dictionary with an array
for each bin field or ``None`` to use the default settings.

.. code-block:: python

    import numpy as np

    custom_bins = np.array([1e-27, 1e-25, 2e-25, 5e-25, 1e-23])
    profile2d = source.profile(
        [("gas", "density"), ("gas", "temperature")],
        [("gas", "mass")],
        override_bins={("gas", "density"): custom_bins, ("gas", "temperature"): None},
    )

.. _profile-dataframe-export:

Exporting Profiles to DataFrame
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

One-dimensional profile data can be exported to a :class:`~pandas.DataFrame`
object using the :meth:`yt.data_objects.profiles.Profile1D.to_dataframe`
method. Bins which do not have data will have their fields filled with
``NaN``, except for the bin field itself. If you only want to export the bins
which are used, set ``only_used=True``, and if you want to export the
standard deviation of the profile as well, set ``include_std=True``:

.. code-block:: python

    # Adds all of the data to the DataFrame, but non-used bins are filled with NaNs
    df = profile.to_dataframe()
    # Only adds the used bins to the DataFrame
    df_used = profile.to_dataframe(only_used=True)
    # Only adds the density and temperature fields
    df2 = profile.to_dataframe(fields=[("gas", "density"), ("gas", "temperature")])
    # Include standard deviation
    df3 = profile.to_dataframe(include_std=True)

The :class:`~pandas.DataFrame` can then be analyzed and/or written to disk
using pandas methods. Note that unit information is lost in this export.

.. _profile-astropy-export:

Exporting Profiles to QTable
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

One-dimensional profile data also can be exported to an AstroPy
:class:`~astropy.table.QTable` object. This table can then be written to disk
in a number of formats, such as ASCII text or FITS files, and manipulated in
a number of ways. Bins which do not have data will have their mask values set
to ``False``. If you only want to export the bins which are used, set
``only_used=True``. If you want to include the standard deviation of the
field in the export, set ``include_std=True``. Units are preserved in the
table by converting each :class:`~yt.units.yt_array.YTArray` to an
:class:`~astropy.units.Quantity`.
To export the 1D profile to a Table object, simply call
:meth:`yt.data_objects.profiles.Profile1D.to_astropy_table`:

.. code-block:: python

    # Adds all of the data to the Table, but non-used bins are masked
    t = profile.to_astropy_table()
    # Only adds the used bins to the Table
    t_used = profile.to_astropy_table(only_used=True)
    # Only adds the density and temperature fields
    t2 = profile.to_astropy_table(fields=[("gas", "density"), ("gas", "temperature")])
    # Export the standard deviation
    t3 = profile.to_astropy_table(include_std=True)

.. _generating-line-queries:

Line Queries and Planar Integrals
---------------------------------

To calculate the values along a line connecting two points in a simulation,
you can use the object
:class:`~yt.data_objects.selection_data_containers.YTRay`, accessible as the
``ray`` property on an index. (See :ref:`data-objects` for more information
on this.) To do so, you can supply two points and access fields within the
returned object. For instance, this code will generate a ray between the
points (0.3, 0.5, 0.9) and (0.1, 0.8, 0.5) and examine the density along that
ray:

.. code-block:: python

    ray = ds.ray((0.3, 0.5, 0.9), (0.1, 0.8, 0.5))
    print(ray["gas", "density"])

The points are not ordered, so you may need to sort the data (see the example
in the :class:`~yt.data_objects.selection_data_containers.YTRay` docs). Also
note, the ray is traversing cells of varying length, as well as taking a
varying distance to cross each cell. To determine the distance traveled by
the ray within each cell (for instance, for integration) the field ``dts`` is
available; this field will sum to 1.0, as the ray's path will be normalized
to 1.0, independent of how far it travels through the domain. To determine
the value of ``t`` at which the ray enters each cell, the field ``t`` is
available. For instance:

.. code-block:: python

    print(ray["dts"].sum())
    print(ray["t"])

These can be used as inputs to, for instance, the Matplotlib function
:func:`~matplotlib.pyplot.plot`, or they can be saved to disk.

The volume rendering functionality in yt can also be used to calculate
off-axis plane integrals, using the
:class:`~yt.visualization.volume_rendering.transfer_functions.ProjectionTransferFunction`
in a manner similar to that described in :ref:`volume_rendering`.

.. _generating-xarray:

Regular Grids to xarray
-----------------------

Objects that subclass from
:class:`~yt.data_objects.construction_data_containers.YTCoveringGrid` are
able to export to `xarray `_. This enables interoperability with anything
that can take xarray data. The classes that can do this are
:class:`~yt.data_objects.construction_data_containers.YTCoveringGrid`,
:class:`~yt.data_objects.construction_data_containers.YTArbitraryGrid`, and
:class:`~yt.data_objects.construction_data_containers.YTSmoothedCoveringGrid`.
For example, you can:

.. code-block:: python

    grid = ds.r[::256j, ::256j, ::256j]
    obj = grid.to_xarray(fields=[("gas", "density"), ("gas", "temperature")])

The returned object, ``obj``, will now have the correct labelled axes and so
forth.

././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/analyzing/index.rst0000644000175100001770000000146014714401662017470 0ustar00runnerdocker.. _analyzing:

General Data Analysis
=====================

This documentation describes much of the yt infrastructure for manipulating
one's data to extract the relevant information. Fields, data objects, and
units are at the heart of how yt represents data.
Beyond this, we provide a full description for how to filter your datasets
based on specific criteria, how to analyze chronological datasets from the
same underlying simulation or source (i.e. time series analysis), and how to
run yt in parallel on multiple processors to accomplish tasks faster.

.. toctree::
    :maxdepth: 2

    fields
    ../developing/creating_derived_fields
    objects
    units
    filtering
    generating_processed_data
    saving_data
    time_series_analysis
    Particle_Trajectories
    parallel_computation
    astropy_integrations

././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/analyzing/ionization_cube.py0000644000175100001770000000267414714401662021362 0ustar00runnerdockerimport time

import h5py
import numpy as np

import yt
from yt.utilities.parallel_tools.parallel_analysis_interface import communication_system


@yt.derived_field(
    name="IonizedHydrogen", units="", display_name=r"\frac{\rho_{HII}}{\rho_H}"
)
def IonizedHydrogen(field, data):
    return data["gas", "HII_Density"] / (
        data["gas", "HI_Density"] + data["gas", "HII_Density"]
    )


ts = yt.DatasetSeries("SED800/DD*/*.index", parallel=8)

ionized_z = np.zeros(ts[0].domain_dimensions, dtype="float32")

t1 = time.time()
for ds in ts.piter():
    z = ds.current_redshift
    for g in yt.parallel_objects(ds.index.grids, njobs=16):
        i1, j1, k1 = g.get_global_startindex()  # Index into our domain
        i2, j2, k2 = g.get_global_startindex() + g.ActiveDimensions
        # Look for the newly ionized gas
        newly_ion = (g["IonizedHydrogen"] > 0.999) & (
            ionized_z[i1:i2, j1:j2, k1:k2] < z
        )
        ionized_z[i1:i2, j1:j2, k1:k2][newly_ion] = z
        g.clear_data()
    print(f"Iteration completed {time.time() - t1:0.3e}")

comm = communication_system.communicators[-1]
for i in range(ionized_z.shape[0]):
    ionized_z[i, :, :] = comm.mpi_allreduce(ionized_z[i, :, :], op="max")
    # ionized_z records the highest redshift at which each cell was ionized
    print("Slab % 3i has maximum z of %0.3e" % (i, ionized_z[i, :, :].max()))
t2 = time.time()
print(f"Completed. {t2 - t1:0.3e}")

if comm.rank == 0:
    f = h5py.File("IonizationCube.h5", mode="w")
    f.create_dataset("/z", data=ionized_z)
    f.close()

././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/analyzing/mesh_filter.ipynb0000644000175100001770000001517614714401662021204 0ustar00runnerdocker{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Filtering Grid Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us demonstrate this with an example using the same dataset as we used with the boolean masks." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import yt\n", "\n", "ds = yt.load(\"Enzo_64/DD0042/data0042\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The only argument to a cut region is a conditional on field output from a data object. The only catch is that you *must* denote the data object in the conditional as \"obj\" regardless of the actual object's name. \n", "\n", "Here we create three new data objects which are copies of the all_data object (a region object covering the entire spatial domain of the simulation), but we've filtered on just \"hot\" material, the \"dense\" material, and the \"overpressure and fast\" material."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ad = ds.all_data()\n", "hot_ad = ad.cut_region(['obj[\"gas\", \"temperature\"] > 1e6'])\n", "dense_ad = ad.cut_region(['obj[\"gas\", \"density\"] > 5e-30'])\n", "\n", "# you can chain cut regions in two ways:\n", "dense_and_cool_ad = dense_ad.cut_region(['obj[\"gas\", \"temperature\"] < 1e5'])\n", "overpressure_and_fast_ad = ad.cut_region(\n", " [\n", " '(obj[\"gas\", \"pressure\"] > 1e-14) & (obj[\"gas\", \"velocity_magnitude\"].in_units(\"km/s\") > 1e2)'\n", " ]\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can also construct a cut_region using the include_ and exclude_ functions as well." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ad = ds.all_data()\n", "hot_ad = ad.include_above((\"gas\", \"temperature\"), 1e6)\n", "dense_ad = ad.include_above((\"gas\", \"density\"), 5e-30)\n", "\n", "# These can be chained as well\n", "dense_and_cool_ad = dense_ad.include_below((\"gas\", \"temperature\"), 1e5)\n", "overpressure_and_fast_ad = ad.include_above((\"gas\", \"pressure\"), 1e-14)\n", "overpressure_and_fast_ad = overpressure_and_fast_ad.include_above(\n", " (\"gas\", \"velocity_magnitude\"), 1e2, \"km/s\"\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Upon inspection of our \"hot_ad\" object, we can still get the same results as we got with the boolean masks example above:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\n", " \"Temperature of all cells:\\n ad['temperature'] = \\n%s\\n\" % ad[\"gas\", \"temperature\"]\n", ")\n", "print(\n", " \"Temperatures of all \\\"hot\\\" cells:\\n hot_ad['temperature'] = \\n%s\"\n", " % hot_ad[\"gas\", \"temperature\"]\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\n", " \"Density of dense, cool material:\\n dense_and_cool_ad['density'] = \\n%s\\n\"\n", " % dense_and_cool_ad[\"gas\", \"density\"]\n", ")\n", "print(\n", " \"Temperature of dense, cool material:\\n dense_and_cool_ad['temperature'] = \\n%s\"\n", " % dense_and_cool_ad[\"gas\", \"temperature\"]\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we've constructed a `cut_region`, we can use it as a data source for further analysis. 
To create a plot based on a `cut_region`, use the `data_source` keyword argument provided by yt's plotting objects.\n", "\n", "Here's an example using projections:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "proj1 = yt.ProjectionPlot(ds, \"x\", (\"gas\", \"density\"), weight_field=(\"gas\", \"density\"))\n", "proj1.annotate_title(\"No Cuts\")\n", "proj1.set_figure_size(5)\n", "proj1.show()\n", "\n", "proj2 = yt.ProjectionPlot(\n", " ds, \"x\", (\"gas\", \"density\"), weight_field=(\"gas\", \"density\"), data_source=hot_ad\n", ")\n", "proj2.annotate_title(\"Hot Gas\")\n", "proj2.set_zlim((\"gas\", \"density\"), 3e-31, 3e-27)\n", "proj2.set_figure_size(5)\n", "proj2.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `data_source` keyword argument is also accepted by `SlicePlot`, `ProfilePlot` and `PhasePlot`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "slc1 = yt.SlicePlot(ds, \"x\", (\"gas\", \"density\"), center=\"m\")\n", "slc1.set_zlim((\"gas\", \"density\"), 3e-31, 3e-27)\n", "slc1.annotate_title(\"No Cuts\")\n", "slc1.set_figure_size(5)\n", "slc1.show()\n", "\n", "slc2 = yt.SlicePlot(ds, \"x\", (\"gas\", \"density\"), center=\"m\", data_source=dense_ad)\n", "slc2.set_zlim((\"gas\", \"density\"), 3e-31, 3e-27)\n", "slc2.annotate_title(\"Dense Gas\")\n", "slc2.set_figure_size(5)\n", "slc2.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ph1 = yt.PhasePlot(\n", " ad, (\"gas\", \"density\"), (\"gas\", \"temperature\"), (\"gas\", \"mass\"), weight_field=None\n", ")\n", "ph1.set_xlim(3e-31, 3e-27)\n", "ph1.annotate_title(\"No Cuts\")\n", "ph1.set_figure_size(5)\n", "ph1.show()\n", "\n", "ph1 = yt.PhasePlot(\n", " dense_ad,\n", " (\"gas\", \"density\"),\n", " (\"gas\", \"temperature\"),\n", " (\"gas\", \"mass\"),\n", " weight_field=None,\n", ")\n", "ph1.set_xlim(3e-31, 3e-27)\n", "ph1.annotate_title(\"Dense Gas\")\n", "ph1.set_figure_size(5)\n", "ph1.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.3" } }, "nbformat": 4, "nbformat_minor": 1 } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/analyzing/objects.rst0000644000175100001770000010415414714401662020016 0ustar00runnerdocker.. _Data-objects: Data Objects ============ What are Data Objects in yt? ---------------------------- Data objects (also called *Data Containers*) are used in yt as convenience structures for grouping data in logical ways that make sense in the context of the dataset as a whole. Some of the data objects are geometrical groupings of data (e.g. sphere, box, cylinder, etc.). Others represent data products derived from your dataset (e.g. slices, streamlines, surfaces). Still other data objects group multiple objects together or filter them (e.g. data collection, cut region). To generate standard plots, objects rarely need to be directly constructed. 
However, for detailed data inspection as well as hand-crafted derived data,
objects can be exceptionally useful and even necessary.

How to Create and Use an Object
-------------------------------

To create an object, you usually only need a loaded dataset, the name of the
object type, and the relevant parameters for your object. Here is a common
example for creating a ``Region`` object that covers all of your data volume.

.. code-block:: python

    import yt

    ds = yt.load("RedshiftOutput0005")
    ad = ds.all_data()

Alternatively, we could create a sphere object of radius 1 kpc at location
[0.5, 0.5, 0.5]:

.. code-block:: python

    import yt

    ds = yt.load("RedshiftOutput0005")
    sp = ds.sphere([0.5, 0.5, 0.5], (1, "kpc"))

After an object has been created, it can be used as a data_source for certain
tasks like ``ProjectionPlot`` (see
:class:`~yt.visualization.plot_window.ProjectionPlot`), one can compute the
bulk quantities associated with that object (see :ref:`derived-quantities`),
or the data can be examined directly. For example, if you want to figure out
the temperature at all indexed locations in the central sphere of your
dataset you could:

.. code-block:: python

    import yt

    ds = yt.load("RedshiftOutput0005")
    sp = ds.sphere([0.5, 0.5, 0.5], (1, "kpc"))

    # Show all temperature values
    print(sp["gas", "temperature"])

    # Print things in a more human-friendly manner: one temperature at a time
    print("(x, y, z) Temperature")
    print("-----------------------")
    for i in range(sp["gas", "temperature"].size):
        print(
            "(%f, %f, %f) %f"
            % (
                sp["gas", "x"][i],
                sp["gas", "y"][i],
                sp["gas", "z"][i],
                sp["gas", "temperature"][i],
            )
        )

Data objects can also be cloned; for instance:

.. code-block:: python

    import yt

    ds = yt.load("RedshiftOutput0005")
    sp = ds.sphere([0.5, 0.5, 0.5], (1, "kpc"))
    sp_copy = sp.clone()

This can be useful when manually chunking data or exploring different field
parameters.

.. _quickly-selecting-data:

Slicing Syntax for Selecting Data
---------------------------------

yt provides a mechanism for easily selecting data while doing interactive
work on the command line. This allows for region selection based on the full
domain of the object. Selecting in this manner is exposed through a
slice-like syntax. All of these attributes are exposed through the
``RegionExpression`` object, which is an attribute of a ``Dataset`` object,
called ``r``.

Getting All The Data
^^^^^^^^^^^^^^^^^^^^

The ``.r`` attribute serves as a persistent means of accessing the full data
from a dataset. You can access this shorthand operation by querying any field
on the ``.r`` object, like so:

.. code-block:: python

    ds = yt.load("RedshiftOutput0005")
    rho = ds.r["gas", "density"]

This will return a *flattened* array of data. The region expression object
(``r``) doesn't have any derived quantities on it. This is completely
equivalent to this set of statements:

.. code-block:: python

    ds = yt.load("RedshiftOutput0005")
    dd = ds.all_data()
    rho = dd["gas", "density"]

.. warning::

    One thing to keep in mind with accessing data in this way is that it is
    *persistent*. It is loaded into memory, and then retained until the
    dataset is deleted or garbage collected.

Selecting Multiresolution Regions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To select rectilinear regions, where the data is selected the same way that
it is selected in a :ref:`region-reference`, you can utilize slice-like
syntax, supplying start and stop, but not supplying a step argument. This
requires that three components of the slice be specified.
These take a start and a stop, and are for the three axes in simulation order
(if your data is ordered z, y, x for instance, this would be in z, y, x
order). The slices can have both position and, optionally, unit values. These
define the value with respect to the ``domain_left_edge`` of the dataset. So
for instance, you could specify it like so:

.. code-block:: python

    ds.r[(100, "kpc"):(200, "kpc"), :, :]

This would return a region that included everything between 100 kpc from the
left edge of the dataset to 200 kpc from the left edge of the dataset in the
first dimension, and which spans the entire dataset in the second and third
dimensions. By default, if the units are unspecified, they are in the
"native" code units of the dataset.

This works in all types of datasets, as well. For instance, if you have a
geographic dataset (which is usually ordered latitude, longitude, altitude)
you can easily select, for instance, one hemisphere with a region selection:

.. code-block:: python

    ds.r[:, -180:0, :]

If you specify a single slice, it will be repeated along all three
dimensions. For instance, this will give all data:

.. code-block:: python

    ds.r[:]

And this will select a box running from 0.4 to 0.6 along all three
dimensions:

.. code-block:: python

    ds.r[0.4:0.6]

.. _arbitrary-grid-selection:

Selecting Fixed Resolution Regions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

yt also provides functionality for selecting regions that have been turned
into voxels. This returns an :ref:`arbitrary-grid` object. It can be created
by specifying a complex slice "step", where the start and stop follow the
same rules as above. This is similar to how the numpy ``mgrid`` operation
works. For instance, this code block will generate a grid covering the full
domain, but converted to being 21x35x100 dimensions:

.. code-block:: python

    region = ds.r[::21j, ::35j, ::100j]

The left and right edges, as above, can be specified to provide bounds as
well. For instance, to select a 10 meter cube, with 24 cells in each
dimension, we could supply:

.. code-block:: python

    region = ds.r[(20, "m"):(30, "m"):24j, (30, "m"):(40, "m"):24j, (7, "m"):(17, "m"):24j]

This can select both particles and mesh fields. Mesh fields will be 3D
arrays, and generated through volume-weighted overlap calculations.

Selecting Slices
^^^^^^^^^^^^^^^^

If one dimension is specified as a single value, that will be the dimension
along which a slice is made. This provides a simple means of generating a
slice from a subset of the data. For instance, to create a slice of a
dataset, you can very simply specify the full domain along two axes:

.. code-block:: python

    sl = ds.r[:, :, 0.25]

This can also be very easily plotted:

.. code-block:: python

    sl = ds.r[:, :, 0.25]
    sl.plot()

This accepts arguments the same way:

.. code-block:: python

    sl = ds.r[(20.1, "km"):(31.0, "km"), (504.143, "m"):(1000.0, "m"), (900.1, "m")]
    sl.plot()

Making Image Buffers
^^^^^^^^^^^^^^^^^^^^

Using the slicing syntax above for choosing a slice, if you also provide an
imaginary step value you can obtain a
:class:`~yt.visualization.api.FixedResolutionBuffer` of the chosen
resolution. For instance, to obtain a 1024 by 1024 buffer covering the
entire domain but centered at 0.5 in code units, you can do:

.. code-block:: python

    frb = ds.r[0.5, ::1024j, ::1024j]

This ``frb`` object then can be queried like a normal fixed resolution
buffer, and it will return arrays of shape (1024, 1024).
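As a quick sketch of how such a buffer might be used by hand (assuming the
``IsolatedGalaxy`` sample dataset used elsewhere in this documentation):

.. code-block:: python

    import matplotlib.pyplot as plt

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    frb = ds.r[0.5, ::1024j, ::1024j]

    # Querying a field returns a (1024, 1024) array suitable for imshow
    plt.imshow(frb["gas", "density"].d, origin="lower")
    plt.savefig("density_buffer.png")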
Making Rays
^^^^^^^^^^^

The slicing syntax can also be used to select 1D rays of points, whether
along an axis or off-axis. To create a ray along an axis:

.. code-block:: python

    ortho_ray = ds.r[(500.0, "kpc"), (200, "kpc"):(300.0, "kpc"), (-2.0, "Mpc")]

To create a ray off-axis, use a single slice between the start and end points
of the ray:

.. code-block:: python

    start = [0.1, 0.2, 0.3]  # interpreted in code_length
    end = [0.4, 0.5, 0.6]  # interpreted in code_length
    ray = ds.r[start:end]

As for the other slicing options, combinations of unitful quantities, even
with different units, can be used. Here's a somewhat convoluted (yet working)
example:

.. code-block:: python

    start = ((500.0, "kpc"), (0.2, "Mpc"), (100.0, "kpc"))
    end = ((1.0, "Mpc"), (300.0, "kpc"), (0.0, "kpc"))
    ray = ds.r[start:end]

Making Fixed-Resolution Rays
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Rays can also be constructed to have fixed resolution if an imaginary step
value is provided, similar to the 2 and 3-dimensional cases described above.
This works for rays directed along an axis:

.. code-block:: python

    ortho_ray = ds.r[0.1:0.6:500j, 0.3, 0.2]

or off-axis rays as well:

.. code-block:: python

    start = [0.1, 0.2, 0.3]  # interpreted in code_length
    end = [0.4, 0.5, 0.6]  # interpreted in code_length
    ray = ds.r[start:end:100j]

Selecting Points
^^^^^^^^^^^^^^^^

Finally, you can quickly select a single point within the domain by providing
a single coordinate for every axis:

.. code-block:: python

    pt = ds.r[(10.0, "km"), (200, "m"), (1.0, "km")]

Querying this object for fields will give you the value of the field at that
point.

.. _available-objects:

Available Objects
-----------------

As noted above, there are numerous types of objects. Here we group them into:

* *Geometric Objects*
  Data is selected based on spatial shapes in the dataset
* *Filtering Objects*
  Data is selected based on other field criteria
* *Collection Objects*
  Multiple objects grouped together
* *Construction Objects*
  Objects represent some sort of data product constructed by additional analysis

If you want to create your own custom data object type, see
:ref:`creating-objects`.

.. _geometric-objects:

Geometric Objects
^^^^^^^^^^^^^^^^^

For 0D, 1D, and 2D geometric objects, if the extent of the object intersects
a grid cell, then the cell is included in the object; however, for 3D objects
the *center* of the cell must be within the object in order for the grid cell
to be incorporated.

0D Objects
""""""""""

**Point**
    | Class :class:`~yt.data_objects.selection_data_containers.YTPoint`
    | Usage: ``point(coord, ds=None, field_parameters=None, data_source=None)``
    | A point defined by a single cell at specified coordinates.

1D Objects
""""""""""

**Ray (Axis-Aligned)**
    | Class :class:`~yt.data_objects.selection_data_containers.YTOrthoRay`
    | Usage: ``ortho_ray(axis, coord, ds=None, field_parameters=None, data_source=None)``
    | A line (of data cells) stretching through the full domain aligned with
      one of the x,y,z axes. Defined by an axis and a point to be
      intersected. Please see this
      :ref:`note about ray data value ordering `.

**Ray (Arbitrarily-Aligned)**
    | Class :class:`~yt.data_objects.selection_data_containers.YTRay`
    | Usage: ``ray(start_coord, end_coord, ds=None, field_parameters=None, data_source=None)``
    | A line (of data cells) defined by arbitrary start and end coordinates.
      Please see this
      :ref:`note about ray data value ordering `.
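As a quick illustration of the 0D and 1D selectors above, here is a sketch
(coordinates are in code units; the dataset is the ``IsolatedGalaxy`` sample
used elsewhere in this documentation):

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")

    # A point at the domain center
    pt = ds.point([0.5, 0.5, 0.5])

    # An axis-aligned ray along x, intersecting (y, z) = (0.5, 0.5)
    oray = ds.ortho_ray(0, (0.5, 0.5))

    # An arbitrarily-aligned ray between two coordinates
    ray = ds.ray([0.1, 0.2, 0.3], [0.4, 0.5, 0.6])
    print(ray["gas", "density"])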
2D Objects
""""""""""

**Slice (Axis-Aligned)**
    | Class :class:`~yt.data_objects.selection_data_containers.YTSlice`
    | Usage: ``slice(axis, coord, center=None, ds=None, field_parameters=None, data_source=None)``
    | A plane normal to one of the axes and intersecting a particular
      coordinate.

**Slice (Arbitrarily-Aligned)**
    | Class :class:`~yt.data_objects.selection_data_containers.YTCuttingPlane`
    | Usage: ``cutting(normal, coord, north_vector=None, ds=None, field_parameters=None, data_source=None)``
    | A plane normal to a specified vector and intersecting a particular
      coordinate.

.. _region-reference:

3D Objects
""""""""""

**All Data**
    | Function :meth:`~yt.data_objects.static_output.Dataset.all_data`
    | Usage: ``all_data(find_max=False)``
    | ``all_data()`` is a wrapper on the Box Region class which defaults to
      creating a Region covering the entire dataset domain. It is effectively
      ``ds.region(ds.domain_center, ds.domain_left_edge, ds.domain_right_edge)``.

**Box Region**
    | Class :class:`~yt.data_objects.selection_data_containers.YTRegion`
    | Usage: ``region(center, left_edge, right_edge, fields=None, ds=None, field_parameters=None, data_source=None)``
    | Alternatively: ``box(left_edge, right_edge, fields=None, ds=None, field_parameters=None, data_source=None)``
    | A box-like region aligned with the grid axis orientation. It is
      defined by a left_edge, a right_edge, and a center. The left_edge and
      right_edge are the minimum and maximum bounds in the three axes
      respectively. The center is arbitrary and must only be contained
      within the left_edge and right_edge. By using the ``box`` wrapper, the
      center is assumed to be the midpoint between the left and right edges.

**Disk/Cylinder**
    | Class: :class:`~yt.data_objects.selection_data_containers.YTDisk`
    | Usage: ``disk(center, normal, radius, height, fields=None, ds=None, field_parameters=None, data_source=None)``
    | A cylinder defined by a point at the center of one of the circular
      bases, a normal vector to it defining the orientation of the length of
      the cylinder, and radius and height values for the cylinder's
      dimensions. Note: ``height`` is the distance from midplane to the top
      or bottom of the cylinder, i.e., ``height`` is half that of the
      cylinder object that is created.

**Ellipsoid**
    | Class :class:`~yt.data_objects.selection_data_containers.YTEllipsoid`
    | Usage: ``ellipsoid(center, semi_major_axis_length, semi_medium_axis_length, semi_minor_axis_length, semi_major_vector, tilt, fields=None, ds=None, field_parameters=None, data_source=None)``
    | An ellipsoid with axis magnitudes set by ``semi_major_axis_length``,
      ``semi_medium_axis_length``, and ``semi_minor_axis_length``.
      ``semi_major_vector`` sets the direction of the ``semi_major_axis``.
      ``tilt`` defines the orientation of the semi-medium and semi-minor
      axes.

**Sphere**
    | Class :class:`~yt.data_objects.selection_data_containers.YTSphere`
    | Usage: ``sphere(center, radius, ds=None, field_parameters=None, data_source=None)``
    | A sphere defined by a central coordinate and a radius.

**Minimal Bounding Sphere**
    | Class :class:`~yt.data_objects.selection_data_containers.YTMinimalSphere`
    | Usage: ``minimal_sphere(points, ds=None, field_parameters=None, data_source=None)``
    | A sphere that contains all the points passed as argument.

.. _collection-objects:

Filtering and Collection Objects
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

See also the section on :ref:`filtering-data`.
**Intersecting Regions**
    | Most Region objects provide a data_source parameter, which allows you
      to subselect one region from another (in the coordinate system of the
      DataSet). Note, this can easily lead to empty data for
      non-intersecting regions.
    | Usage: ``slice(axis, coord, ds, data_source=sph)``

**Union Regions**
    | Usage: ``union()``
    | See :ref:`boolean_data_objects`.

**Intersection Regions**
    | Usage: ``intersection()``
    | See :ref:`boolean_data_objects`.

**Filter**
    | Class :class:`~yt.data_objects.selection_data_containers.YTCutRegion`
    | Usage: ``cut_region(base_object, conditionals, ds=None, field_parameters=None)``
    | A ``cut_region`` is a filter which can be applied to any other data
      object. The filter is defined by the conditionals present, which apply
      cuts to the data in the object. A ``cut_region`` will work for either
      particle fields or mesh fields, but not on both simultaneously. For
      more detailed information and examples, see :ref:`cut-regions`.

**Collection of Data Objects**
    | Class :class:`~yt.data_objects.selection_data_containers.YTDataCollection`
    | Usage: ``data_collection(center, obj_list, ds=None, field_parameters=None)``
    | A ``data_collection`` is a list of data objects that can be sampled
      and processed as a whole in a single data object.

.. _construction-objects:

Construction Objects
^^^^^^^^^^^^^^^^^^^^

**Fixed-Resolution Region**
    | Class :class:`~yt.data_objects.construction_data_containers.YTCoveringGrid`
    | Usage: ``covering_grid(level, left_edge, dimensions, fields=None, ds=None, num_ghost_zones=0, use_pbar=True, field_parameters=None)``
    | A 3D region with all data extracted to a single, specified resolution.
      See :ref:`examining-grid-data-in-a-fixed-resolution-array`.

**Fixed-Resolution Region with Smoothing**
    | Class :class:`~yt.data_objects.construction_data_containers.YTSmoothedCoveringGrid`
    | Usage: ``smoothed_covering_grid(level, left_edge, dimensions, fields=None, ds=None, num_ghost_zones=0, use_pbar=True, field_parameters=None)``
    | A 3D region with all data extracted and interpolated to a single,
      specified resolution. Identical to covering_grid, except that it
      interpolates as necessary from coarse regions to fine. See
      :ref:`examining-grid-data-in-a-fixed-resolution-array`.

**Fixed-Resolution Region for Particle Deposition**
    | Class :class:`~yt.data_objects.construction_data_containers.YTArbitraryGrid`
    | Usage: ``arbitrary_grid(left_edge, right_edge, dimensions, ds=None, field_parameters=None)``
    | When particles are deposited onto mesh fields, they use the existing
      mesh structure, but this may have too much or too little resolution
      relative to the particle locations (or it may not exist at all!). An
      ``arbitrary_grid`` provides a means for generating a new independent
      mesh structure for particle deposition and simple mesh field
      interpolation. See :ref:`arbitrary-grid` for more information.

**Projection**
    | Class :class:`~yt.data_objects.construction_data_containers.YTQuadTreeProj`
    | Usage: ``proj(field, axis, weight_field=None, center=None, ds=None, data_source=None, method="integrate", field_parameters=None)``
    | A 2D projection of a 3D volume along one of the axis directions. By
      default, this is a line integral through the entire simulation volume
      (although it can be a subset of that volume specified by a data object
      with the ``data_source`` keyword). Alternatively, one can specify a
      weight_field and different ``method`` values to change the nature of
      the projection outcome. See :ref:`projection-types` for more
      information.
**Streamline**
    | Class :class:`~yt.data_objects.construction_data_containers.YTStreamline`
    | Usage: ``streamline(coord_list, length, fields=None, ds=None, field_parameters=None)``
    | A ``streamline`` can be traced out by identifying a starting
      coordinate (or list of coordinates) and allowing it to trace a vector
      field, like gas velocity. See :ref:`streamlines` for more information.

**Surface**
    | Class :class:`~yt.data_objects.construction_data_containers.YTSurface`
    | Usage: ``surface(data_source, field, field_value)``
    | The surface defined by an isocontour in any mesh field. An existing
      data object must be provided as the source, as well as a mesh field
      and the value of the field at which you desire the isocontour. See
      :ref:`extracting-isocontour-information`.

.. _derived-quantities:

Processing Objects: Derived Quantities
--------------------------------------

Derived quantities are a way of calculating some bulk quantities associated
with all of the grid cells contained in a data object. Derived quantities
can be accessed via the ``quantities`` interface. Here is an example of how
to get the angular momentum vector calculated from all the cells contained
in a sphere at the center of our dataset.

.. code-block:: python

    import yt

    ds = yt.load("my_data")
    sp = ds.sphere("c", (10, "kpc"))
    print(sp.quantities.angular_momentum_vector())

Some quantities can be calculated for a specific particle type only. For
example, to get the center of mass of only the stars within the sphere:

.. code-block:: python

    import yt

    ds = yt.load("my_data")
    sp = ds.sphere("c", (10, "kpc"))
    print(
        sp.quantities.center_of_mass(
            use_gas=False, use_particles=True, particle_type="star"
        )
    )

Quickly Processing Data
^^^^^^^^^^^^^^^^^^^^^^^

Most data objects now have multiple numpy-like methods that allow you to
quickly process data. More of these methods will be added over time and
added to this list. Most, if not all, of these map to other yt operations
and are designed as syntactic sugar to slightly simplify otherwise somewhat
obtuse pipelines. These operations are parallelized.

You can compute the extrema of a field by using the ``max`` or ``min``
functions. This will cache the extrema in between, so calling ``min`` right
after ``max`` will be considerably faster. Here is an example.

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    reg = ds.r[0.3:0.6, 0.2:0.4, 0.9:0.95]
    min_rho = reg.min(("gas", "density"))
    max_rho = reg.max(("gas", "density"))

This is equivalent to:

.. code-block:: python

    min_rho, max_rho = reg.quantities.extrema(("gas", "density"))

The ``max`` operation can also compute the maximum intensity projection:

.. code-block:: python

    proj = reg.max(("gas", "density"), axis="x")
    proj.plot()

This is equivalent to:

.. code-block:: python

    proj = ds.proj(("gas", "density"), "x", data_source=reg, method="max")
    proj.plot()

The same can be done with the ``min`` operation, computing a minimum
intensity projection:

.. code-block:: python

    proj = reg.min(("gas", "density"), axis="x")
    proj.plot()

This is equivalent to:

.. code-block:: python

    proj = ds.proj(("gas", "density"), "x", data_source=reg, method="min")
    proj.plot()

You can also compute the ``mean`` value, which accepts a field, axis, and
weight function. If the axis is not specified, it will return the average
value of the specified field, weighted by the weight argument. The weight
argument defaults to ``ones``, which performs an arithmetic average. For
instance:
.. code-block:: python

    mean_rho = reg.mean(("gas", "density"))
    rho_by_vol = reg.mean(("gas", "density"), weight=("gas", "cell_volume"))

This is equivalent to:

.. code-block:: python

    mean_rho = reg.quantities.weighted_average_quantity(
        ("gas", "density"), weight=("index", "ones")
    )
    rho_by_vol = reg.quantities.weighted_average_quantity(
        ("gas", "density"), weight=("gas", "cell_volume")
    )

If an axis is provided, it will project along that axis and return it to you:

.. code-block:: python

    rho_proj = reg.mean(("gas", "temperature"), axis="y", weight=("gas", "density"))
    rho_proj.plot()

You can also compute the ``std`` (standard deviation), which accepts a
field, axis, and weight function. If the axis is not specified, it will
return the standard deviation of the specified field, weighted by the weight
argument. The weight argument defaults to ``ones``. For instance:

.. code-block:: python

    std_rho = reg.std(("gas", "density"))
    std_rho_by_vol = reg.std(("gas", "density"), weight=("gas", "cell_volume"))

This is equivalent to:

.. code-block:: python

    std_rho = reg.quantities.weighted_standard_deviation(
        ("gas", "density"), weight=("index", "ones")
    )
    std_rho_by_vol = reg.quantities.weighted_standard_deviation(
        ("gas", "density"), weight=("gas", "cell_volume")
    )

If an axis is provided, it will project along that axis and return it to you:

.. code-block:: python

    vy_std = reg.std(("gas", "velocity_y"), axis="y", weight=("gas", "density"))
    vy_std.plot()

The ``sum`` function will add all the values in the data object. It accepts
a field and, optionally, an axis. If the axis is left unspecified, it will
sum the values in the object:

.. code-block:: python

    vol = reg.sum(("gas", "cell_volume"))

If the axis is specified, it will compute a projection using the method
``sum`` (which does *not* take into account varying path length!) and return
that to you.

.. code-block:: python

    cell_count = reg.sum(("index", "ones"), axis="z")
    cell_count.plot()

To compute a projection where the path length *is* taken into account, you
can use the ``integrate`` function:

.. code-block:: python

    proj = reg.integrate(("gas", "density"), "x")

All of these projections supply the data object as their base input.

Often, it can be useful to sample a field at the minimum and maximum of a
different field. You can use the ``argmax`` and ``argmin`` operations to do
this.

.. code-block:: python

    reg.argmin(("gas", "density"), axis=("gas", "temperature"))

This will return the temperature at the minimum density. If you don't
specify an ``axis``, it will return the spatial position of the extremum of
the queried field (the maximum for ``argmax``, the minimum for ``argmin``).
Here is an example::

    x, y, z = reg.argmin(("gas", "density"))

Available Derived Quantities
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

**Angular Momentum Vector**
    | Class :class:`~yt.data_objects.derived_quantities.AngularMomentumVector`
    | Usage: ``angular_momentum_vector(use_gas=True, use_particles=True, particle_type='all')``
    | The mass-weighted average angular momentum vector of the particles,
      gas, or both. The quantity can be calculated for all particles or a
      given particle_type only.

**Bulk Velocity**
    | Class :class:`~yt.data_objects.derived_quantities.BulkVelocity`
    | Usage: ``bulk_velocity(use_gas=True, use_particles=True, particle_type='all')``
    | The mass-weighted average velocity of the particles, gas, or both. The
      quantity can be calculated for all particles or a given particle_type
      only.
**Center of Mass**
    | Class :class:`~yt.data_objects.derived_quantities.CenterOfMass`
    | Usage: ``center_of_mass(use_cells=True, use_particles=False, particle_type='all')``
    | The location of the center of mass. By default, it computes the center
      of mass of the *non-particle* data in the object, but it can be used
      on particles, gas, or both. The quantity can be calculated for all
      particles or a given particle_type only.

**Extrema**
    | Class :class:`~yt.data_objects.derived_quantities.Extrema`
    | Usage: ``extrema(fields, non_zero=False)``
    | The extrema of a field or list of fields.

**Maximum Location Sampling**
    | Class :class:`~yt.data_objects.derived_quantities.SampleAtMaxFieldValues`
    | Usage: ``sample_at_max_field_values(fields, sample_fields)``
    | The value of sample_fields at the maximum value in fields.

**Minimum Location Sampling**
    | Class :class:`~yt.data_objects.derived_quantities.SampleAtMinFieldValues`
    | Usage: ``sample_at_min_field_values(fields, sample_fields)``
    | The value of sample_fields at the minimum value in fields.

**Minimum Location**
    | Class :class:`~yt.data_objects.derived_quantities.MinLocation`
    | Usage: ``min_location(fields)``
    | The minimum of a field or list of fields as well as the x,y,z location
      of that minimum.

**Maximum Location**
    | Class :class:`~yt.data_objects.derived_quantities.MaxLocation`
    | Usage: ``max_location(fields)``
    | The maximum of a field or list of fields as well as the x,y,z location
      of that maximum.

**Spin Parameter**
    | Class :class:`~yt.data_objects.derived_quantities.SpinParameter`
    | Usage: ``spin_parameter(use_gas=True, use_particles=True, particle_type='all')``
    | The spin parameter for the baryons using the particles, gas, or both.
      The quantity can be calculated for all particles or a given
      particle_type only.

**Total Mass**
    | Class :class:`~yt.data_objects.derived_quantities.TotalMass`
    | Usage: ``total_mass()``
    | The total mass of the object as a tuple of (total gas, total particle)
      mass.

**Total of a Field**
    | Class :class:`~yt.data_objects.derived_quantities.TotalQuantity`
    | Usage: ``total_quantity(fields)``
    | The sum of a given field (or list of fields) over the entire object.

**Weighted Average of a Field**
    | Class :class:`~yt.data_objects.derived_quantities.WeightedAverageQuantity`
    | Usage: ``weighted_average_quantity(fields, weight)``
    | The weighted average of a field (or list of fields) over an entire
      data object. If you want an unweighted average, then set your weight
      to be the field: ``ones``.

**Weighted Standard Deviation of a Field**
    | Class :class:`~yt.data_objects.derived_quantities.WeightedStandardDeviation`
    | Usage: ``weighted_standard_deviation(fields, weight)``
    | The weighted standard deviation of a field (or list of fields) over an
      entire data object and the weighted mean. If you want an unweighted
      standard deviation, then set your weight to be the field: ``ones``.

.. _arbitrary-grid:

Arbitrary Grid Objects
----------------------

The covering grid and smoothed covering grid objects mandate that they be
exactly aligned with the mesh. This is a holdover from the time when yt was
used exclusively for data that came in regularly structured grid patches,
and does not necessarily work as well for data that is composed of discrete
objects like particles. To augment this, the
:class:`~yt.data_objects.construction_data_containers.YTArbitraryGrid`
object was created, which enables construction of meshes (onto which
particles can be deposited or smoothed) in arbitrary regions.
.. _arbitrary-grid:

Arbitrary Grid Objects
----------------------

The covering grid and smoothed covering grid objects mandate that they be
exactly aligned with the mesh. This is a holdover from the time when yt was
used exclusively for data that came in regularly structured grid patches, and
does not necessarily work as well for data that is composed of discrete
objects like particles. To augment this, the
:class:`~yt.data_objects.construction_data_containers.YTArbitraryGrid` object
was created, which enables construction of meshes (onto which particles can
be deposited or smoothed) in arbitrary regions. This eliminates any
assumptions on yt's part about how the data is organized, and allows for more
fine-grained control over visualizations.

An example of creating an arbitrary grid would be to construct one, then query
the deposited particle density, like so:

.. code-block:: python

    import yt

    ds = yt.load("snapshot_010.hdf5")

    obj = ds.arbitrary_grid([0.0, 0.0, 0.0], [0.99, 0.99, 0.99], dims=[128, 128, 128])
    print(obj["deposit", "all_density"])

While these cannot yet be used as input to projections or slices, slices and
projections can be taken of the data in them and visualized by hand.

These objects, as of yt 3.3, are now also able to "voxelize" mesh fields.
This means that you can query the "density" field and it will return the
density field as deposited, identically to how it would be deposited in a
fixed resolution buffer. Note that this means that contributions from
misaligned or partially-overlapping cells are added in a volume-weighted way,
which makes it inappropriate for some types of analysis.

.. _boolean_data_objects:

Combining Objects: Boolean Data Objects
---------------------------------------

A special type of data object is the *boolean* data object, which works with
data selection objects of any dimension. It is built by relating already
existing data objects with the bitwise operators for AND, OR and XOR, as well
as the subtraction operator. These are created by using the operators ``&``
for an intersection ("AND"), ``|`` for a union ("OR"), ``^`` for an exclusive
or ("XOR"), and ``+`` and ``-`` for addition ("OR") and subtraction ("NEG").
Here are some examples:

.. code-block:: python

    import yt

    ds = yt.load("snapshot_010.hdf5")

    sp1 = ds.sphere("c", (0.1, "unitary"))
    sp2 = ds.sphere(sp1.center + 2.0 * sp1.radius, (0.2, "unitary"))
    sp3 = ds.sphere("c", (0.05, "unitary"))

    new_obj = sp1 + sp2
    cutout = sp1 - sp3
    sp4 = sp1 ^ sp2
    sp5 = sp1 & sp2

Note that the ``+`` operation and the ``|`` operation are identical. When
multiple objects are to be combined in an intersection or a union, there are
the data objects ``intersection`` and ``union`` which can be called, and
which will yield slightly higher performance than a sequence of calls to
``+`` or ``&``. For instance:

.. code-block:: python

    import yt

    ds = yt.load("Enzo_64/DD0043/data0043")

    sp1 = ds.sphere((0.1, 0.2, 0.3), (0.05, "unitary"))
    sp2 = ds.sphere((0.2, 0.2, 0.3), (0.10, "unitary"))
    sp3 = ds.sphere((0.3, 0.2, 0.3), (0.15, "unitary"))

    isp = ds.intersection([sp1, sp2, sp3])
    usp = ds.union([sp1, sp2, sp3])

The ``isp`` and ``usp`` objects will act the same as a set of chained ``&``
and ``|`` operations (respectively) but are somewhat easier to construct.
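If you want to convince yourself of that equivalence, a quick sketch
(continuing from the spheres defined above; cell ordering aside, the two
objects select the same cells):

.. code-block:: python

    chained = sp1 & sp2 & sp3

    # both objects select the same number of cells
    assert isp["index", "ones"].size == chained["index", "ones"].size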
.. _extracting-connected-sets:

Connected Sets and Clump Finding
--------------------------------

The underlying machinery used in :ref:`clump_finding` is accessible from any
data object. This includes the ability to obtain and examine topologically
connected sets. These sets are identified by examining cells between two
threshold values and connecting them. What is returned to the user is a list
of the intervals of values found, and extracted regions that contain only
those cells that are connected.

To use this, call
:meth:`~yt.data_objects.data_containers.YTSelectionContainer3D.extract_connected_sets`
on any 3D data object. This requests a field, the number of levels of level
sets to extract, the min and the max value between which sets will be
identified, and whether or not to conduct it in log space.

.. code-block:: python

    sp = ds.sphere("max", (1.0, "pc"))
    contour_values, connected_sets = sp.extract_connected_sets(
        ("gas", "density"), 3, 1e-30, 1e-20
    )

The first item, ``contour_values``, will be an array of the min value for
each set of level sets. The second (``connected_sets``) will be a dict of
dicts. The key for the first (outer) dict is the level of the contour,
corresponding to ``contour_values``. The inner dict returned is keyed by the
contour ID. It contains
:class:`~yt.data_objects.selection_data_containers.YTCutRegion` objects.
These can be queried just as any other data object.

The clump finder (:ref:`clump_finding`) differs from the above method in that
the contour identification is performed recursively within each individual
structure, and structures can be kept or remerged later based on additional
criteria, such as gravitational boundedness.

.. _object-serialization:

Storing and Loading Objects
---------------------------

Often, when operating interactively or via the scripting interface, it is
convenient to save an object to disk and then restart the calculation later
or transfer the data from a container to another filesystem. This can be
particularly useful when working with extremely large datasets. Field data
can be saved to disk in a format that allows for it to be reloaded just like
a regular dataset. For information on how to do this, see
:ref:`saving-data-containers`.

.. _parallel-computation:

Parallel Computation With yt
============================

yt has been instrumented with the ability to compute many -- most, even --
quantities in parallel. This utilizes the package
`mpi4py <https://mpi4py.readthedocs.io/>`_ to parallelize using the Message
Passing Interface, typically installed on clusters.

.. _capabilities:

Capabilities
------------

Currently, yt is able to perform the following actions in parallel:

* Projections (:ref:`projection-plots`)
* Slices (:ref:`slice-plots`)
* Cutting planes (oblique slices) (:ref:`off-axis-slices`)
* Covering grids (:ref:`examining-grid-data-in-a-fixed-resolution-array`)
* Derived Quantities (total mass, angular momentum, etc)
* 1-, 2-, and 3-D profiles (:ref:`generating-profiles-and-histograms`)
* Halo analysis (:ref:`halo-analysis`)
* Volume rendering (:ref:`volume_rendering`)
* Isocontours & flux calculations (:ref:`extracting-isocontour-information`)

This list covers just about every action yt can take! Additionally, almost
all scripts will benefit from parallelization with minimal modification. The
goal of Parallel-yt has been to retain API compatibility and abstract all
parallelism.

Setting Up Parallel yt
----------------------

To run scripts in parallel, you must first install
`mpi4py <https://mpi4py.readthedocs.io/>`_ as well as an MPI library, if one
is not already available on your system. Instructions for doing so are
provided on the mpi4py website, but you may have luck by just running:

.. code-block:: bash

    $ python -m pip install mpi4py

If you have an Anaconda installation of yt and there is no MPI library on the
system you are using try:

.. code-block:: bash

    $ conda install mpi4py

This will install `MPICH2 <https://www.mpich.org/>`_ and will interfere with
other MPI libraries that are already installed. Therefore, it is preferable
to use the ``pip`` installation method.

Once mpi4py has been installed, you're all done!
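If you want to confirm that MPI and mpi4py are working before bringing yt
into the picture, a minimal sanity check (the file name ``check_mpi.py`` here
is arbitrary) is:

.. code-block:: python

    # Save as check_mpi.py and run with, e.g.: mpirun -np 4 python check_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    print(f"Hello from rank {comm.rank} of {comm.size}")

Each MPI process should print a distinct rank.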
You just need to launch your scripts with ``mpirun`` (or equivalent) and
signal to yt that you want to run them in parallel by invoking the
``yt.enable_parallelism()`` function in your script. In general, that's all
it takes to get a speed benefit on a multi-core machine. Here is an example
on an 8-core desktop:

.. code-block:: bash

    $ mpirun -np 8 python script.py

Throughout its normal operation, yt keeps you aware of what is happening with
regular messages to stderr, usually prefaced with:

.. code-block:: bash

    yt : [INFO ] YYYY-MM-DD HH:MM:SS

However, when operating in parallel mode, yt outputs information from each of
your processors to this log, as in:

.. code-block:: bash

    P000 yt : [INFO ] YYYY-MM-DD HH:MM:SS
    P001 yt : [INFO ] YYYY-MM-DD HH:MM:SS

in the case of two cores being used.

It's important to note that all of the processes listed in
:ref:`capabilities` work in parallel -- and no additional work is necessary
to parallelize those processes.

Running a yt Script in Parallel
-------------------------------

Many basic yt operations will run in parallel if yt's parallelism is enabled
at startup. For example, the following script finds the maximum density
location in the simulation and then makes a plot of the projected density:

.. code-block:: python

    import yt

    yt.enable_parallelism()

    ds = yt.load("RD0035/RedshiftOutput0035")
    v, c = ds.find_max(("gas", "density"))
    print(v, c)
    p = yt.ProjectionPlot(ds, "x", ("gas", "density"))
    p.save()

If this script is run in parallel, two of the most expensive operations
(finding the maximum density and computing the projection) will be calculated
in parallel. If we save the script as ``my_script.py``, we would run it on 16
MPI processes using the following Bash command:

.. code-block:: bash

    $ mpirun -np 16 python my_script.py

.. note::

    If you run into problems, you can use :ref:`remote-debugging` to examine
    what went wrong.

How do I run my yt job on a subset of available processes?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

You can set the ``communicator`` keyword in the
:func:`~yt.utilities.parallel_tools.parallel_analysis_interface.enable_parallelism`
call to a specific MPI communicator to specify a subset of available MPI
processes. If none is specified, it defaults to ``COMM_WORLD``.
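As a sketch of what that can look like (the two-way split here is purely
illustrative; any communicator produced by mpi4py will do):

.. code-block:: python

    from mpi4py import MPI

    import yt

    # Split COMM_WORLD in half; yt will only parallelize over the
    # processes in the communicator it is handed.
    comm = MPI.COMM_WORLD
    subcomm = comm.Split(color=comm.rank % 2, key=comm.rank)
    yt.enable_parallelism(communicator=subcomm)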
Creating Parallel and Serial Sections in a Script
+++++++++++++++++++++++++++++++++++++++++++++++++

Many yt operations will automatically run in parallel (see the next section
for a full enumeration), however some operations, particularly ones that
print output or save data to the filesystem, will be run by all processors in
a parallel script. For example, in the script above the lines ``print(v, c)``
and ``p.save()`` will be run on all 16 processors. This means that your
terminal output will contain 16 repetitions of the output of the print
statement and the plot will be saved to disk 16 times (overwritten each
time).

yt provides two convenience functions that make it easier to run most of a
script in parallel but run some subset of the script on only one processor.
The first, :func:`~yt.funcs.is_root`, returns ``True`` if run on the 'root'
processor (the processor with MPI rank 0) and ``False`` otherwise. One could
rewrite the above script to take advantage of :func:`~yt.funcs.is_root` like
so:

.. code-block:: python

    import yt

    yt.enable_parallelism()

    ds = yt.load("RD0035/RedshiftOutput0035")
    v, c = ds.find_max(("gas", "density"))
    p = yt.ProjectionPlot(ds, "x", ("gas", "density"))
    if yt.is_root():
        print(v, c)
        p.save()

The second function, :func:`~yt.funcs.only_on_root`, accepts the name of a
function as well as a set of parameters and keyword arguments to pass to the
function. This is useful when the serial component of your parallel script
would clutter the script or if you like writing your scripts as a series of
isolated function calls. I can rewrite the example from the beginning of this
section once more using :func:`~yt.funcs.only_on_root` to give you the flavor
of how to use it:

.. code-block:: python

    import yt

    yt.enable_parallelism()


    def print_and_save_plot(v, c, plot, verbose=True):
        if verbose:
            print(v, c)
        plot.save()


    ds = yt.load("RD0035/RedshiftOutput0035")
    v, c = ds.find_max(("gas", "density"))
    p = yt.ProjectionPlot(ds, "x", ("gas", "density"))
    yt.only_on_root(print_and_save_plot, v, c, p, verbose=True)

Types of Parallelism
--------------------

In order to divide up the work, yt will attempt to send different tasks to
different processors. However, to minimize inter-process communication, yt
will decompose the information in different ways based on the task.

Spatial Decomposition
+++++++++++++++++++++

During this process, the index will be decomposed along either all three axes
or along an image plane, if the process is that of projection. This type of
parallelism is overall less efficient than grid-based parallelism, but it has
been shown to obtain good results overall.

The following operations use spatial decomposition:

* :ref:`halo-analysis`
* :ref:`volume_rendering`

Grid Decomposition
++++++++++++++++++

The alternative to spatial decomposition is a simple round-robin of data
chunks, which could be grids, octs, or whatever chunking mechanism is used by
the code frontend being used. This process allows yt to pool data access to a
given data file, which ultimately results in faster read times and better
parallelism.

The following operations use chunk decomposition:

* Projections (see :ref:`available-objects`)
* Slices (see :ref:`available-objects`)
* Cutting planes (see :ref:`available-objects`)
* Covering grids (see :ref:`construction-objects`)
* Derived Quantities (see :ref:`derived-quantities`)
* 1-, 2-, and 3-D profiles (see :ref:`generating-profiles-and-histograms`)
* Isocontours & flux calculations (see :ref:`surfaces`)

Parallelization over Multiple Objects and Datasets
++++++++++++++++++++++++++++++++++++++++++++++++++

If you have a set of computational steps that need to apply identically and
independently to several different objects or datasets, a so-called
`embarrassingly parallel <https://en.wikipedia.org/wiki/Embarrassingly_parallel>`_
task, yt can do that easily. See the sections below on
:ref:`parallelizing-your-analysis` and :ref:`parallel-time-series-analysis`.

Use of ``piter()``
^^^^^^^^^^^^^^^^^^

If you use parallelism over objects or datasets, you will encounter the
:func:`~yt.data_objects.time_series.DatasetSeries.piter` function.
:func:`~yt.data_objects.time_series.DatasetSeries.piter` is a parallel
iterator, which effectively doles out each item of a DatasetSeries object to
a different processor. In serial processing, you might iterate over a
DatasetSeries by:

.. code-block:: python

    for dataset in dataset_series:
        ...  # process

But in parallel, you can use ``piter()`` to force each dataset to go to a
different processor:

.. code-block:: python

    yt.enable_parallelism()

    for dataset in dataset_series.piter():
        ...  # process

In order to store information from the parallel processing step to a data
structure that exists on all of the processors operating in parallel, we
offer the ``storage`` keyword in the
:func:`~yt.data_objects.time_series.DatasetSeries.piter` function. You may
define an empty dictionary and include it as the keyword argument ``storage``
to :func:`~yt.data_objects.time_series.DatasetSeries.piter`. Then, during the
processing step, you can access this dictionary as the ``sto`` object. After
the loop is finished, the dictionary is re-aggregated from all of the
processors, and you can access the contents:

.. code-block:: python

    yt.enable_parallelism()

    my_dictionary = {}
    for sto, dataset in dataset_series.piter(storage=my_dictionary):
        ...  # process
        sto.result = ...  # some information processed for this dataset
        sto.result_id = ...  # some identifier for this dataset

    print(my_dictionary)

By default, the dataset series will be divided as equally as possible among
the cores. Often some datasets will require more work than others. We offer
the ``dynamic`` keyword in the
:func:`~yt.data_objects.time_series.DatasetSeries.piter` function to enable
dynamic load balancing with a task queue. Dynamic load balancing works best
with more cores and a variable workload. Here, one process will act as a
server to assign the next available dataset to any free client. For example,
a 16 core job will have 15 cores analyzing the data with 1 core acting as the
task manager.

.. _parallelizing-your-analysis:

Parallelizing over Multiple Objects
-----------------------------------

It is easy within yt to parallelize a list of tasks, as long as those tasks
are independent of one another. Using object-based parallelism, the function
:func:`~yt.utilities.parallel_tools.parallel_analysis_interface.parallel_objects`
will automatically split up a list of tasks over the specified number of
processors (or cores). Please see this heavily-commented example:

.. code-block:: python

    # As always...
    import yt

    yt.enable_parallelism()

    import glob

    # The number 4, below, is the number of processes to parallelize over,
    # which is generally equal to the number of MPI tasks the job is
    # launched with.
    # If num_procs is set to zero or a negative number, the for loop below
    # will be run such that each iteration of the loop is done by a single
    # MPI task. Put another way, setting it to zero means that no matter
    # how many MPI tasks the job is run with, num_procs will default to the
    # number of MPI tasks automatically.
    num_procs = 4

    # fns is a list of all the simulation data files in the current
    # directory.
    fns = glob.glob("./plot*")
    fns.sort()

    # This dict will store information collected in the loop, below.
    # Inside the loop each task will have a local copy of the dict, but
    # the dict will be combined once the loop finishes.
    my_storage = {}

    # In this example, because the storage option is used in the
    # parallel_objects function, the loop yields a tuple, which gets used
    # as (sto, fn) inside the loop.
    # In the loop, sto is essentially my_storage, but a local copy of it.
    # If data does not need to be combined after the loop is done, the line
    # would look like:
    #       for fn in parallel_objects(fns, num_procs):
    for sto, fn in yt.parallel_objects(fns, num_procs, storage=my_storage):
        # Open a data file, remembering that fn is different on each task.
        ds = yt.load(fn)
        dd = ds.all_data()

        # This copies fn and the min/max of density to the local copy of
        # my_storage
        sto.result_id = fn
        sto.result = dd.quantities.extrema(("gas", "density"))

        # Makes and saves a plot of the gas density.
        p = yt.ProjectionPlot(ds, "x", ("gas", "density"))
        p.save()

    # At this point, as the loop exits, the local copies of my_storage are
    # combined such that all tasks now have an identical and full version
    # of my_storage. Until this point, each task is unaware of what the
    # other tasks have produced.
    # Below, the values in my_storage are printed by only one task. The
    # other tasks do nothing.
    if yt.is_root():
        for fn, vals in sorted(my_storage.items()):
            print(fn, vals)

The example above can be modified to loop over anything that can be saved to
a Python list: halos, data files, arrays, and more.

.. _parallel-time-series-analysis:

Parallelization over Multiple Datasets (including Time Series)
--------------------------------------------------------------

The same ``parallel_objects`` machinery discussed above is turned on by
default when using a :class:`~yt.data_objects.time_series.DatasetSeries`
object (see :ref:`time-series-analysis`) to iterate over simulation outputs.
The syntax for this is very simple. As an example, we can use the following
script to find the angular momentum vector in a 1 pc sphere centered on the
maximum density cell in a large number of simulation outputs:

.. code-block:: python

    import yt

    yt.enable_parallelism()

    # Load all of the DD*/output_* files into a DatasetSeries object
    # in this case it is a Time Series
    ts = yt.load("DD*/output_*")

    # Define an empty storage dictionary for collecting information
    # in parallel through processing
    storage = {}

    # Use piter() to iterate over the time series, one proc per dataset
    # and store the resulting information from each dataset in
    # the storage dictionary
    for sto, ds in ts.piter(storage=storage):
        sphere = ds.sphere("max", (1.0, "pc"))
        sto.result = sphere.quantities.angular_momentum_vector()
        sto.result_id = str(ds)

    # Print out the angular momentum vector for all of the datasets
    for L in sorted(storage.items()):
        print(L)

Note that this script can be run in serial or parallel with an arbitrary
number of processors. When running in parallel, each output is given to a
different processor.

You can also request a fixed number of processors to calculate each angular
momentum vector. For example, the following script will calculate each
angular momentum vector using 4 workgroups, splitting up the pool of
available processors. Note that ``parallel=1`` implies that the analysis
will be run using 1 workgroup, whereas ``parallel=True`` will run with
Nprocs workgroups.

.. code-block:: python

    import yt

    yt.enable_parallelism()

    ts = yt.DatasetSeries("DD*/output_*", parallel=4)

    for ds in ts.piter():
        sphere = ds.sphere("max", (1.0, "pc"))
        L_vecs = sphere.quantities.angular_momentum_vector()

If you do not want to use ``parallel_objects`` parallelism when using a
DatasetSeries object, set ``parallel = False``. When running python in
parallel, this will use all of the available processors to evaluate the
requested operation on each simulation output. Some care and possibly trial
and error might be necessary to estimate the correct settings for your
simulation outputs.

Note, when iterating over several large datasets, running out of memory may
become an issue as the internal data structures associated with each dataset
may not be properly de-allocated at the end of an iteration.
If memory use becomes a problem, it may be necessary to manually delete some
of the larger data structures.

.. code-block:: python

    import yt

    yt.enable_parallelism()

    ts = yt.DatasetSeries("DD*/output_*", parallel=4)

    for ds in ts.piter():
        # do analysis here

        ds.index.clear_all_data()

Multi-level Parallelism
-----------------------

By default, the
:func:`~yt.utilities.parallel_tools.parallel_analysis_interface.parallel_objects`
and :func:`~yt.data_objects.time_series.DatasetSeries.piter` functions will
allocate a single processor to each iteration of the parallelized loop.
However, there may be situations in which it is advantageous to have multiple
processors working together on each loop iteration. Like with any traditional
for loop, nested loops with multiple calls to
:func:`~yt.utilities.parallel_tools.parallel_analysis_interface.parallel_objects`
can be used to parallelize the functionality within a given loop iteration.

In the example below, we will create projections along the x, y, and z axis
of the density and temperature fields. We will assume a total of 6 processors
are available, allowing us to allocate two processors to each axis and
project each field with a separate processor.

.. code-block:: python

    import yt

    yt.enable_parallelism()

    # load a dataset to project (any dataset will do; this one is the
    # example output used earlier in this document)
    ds = yt.load("RD0035/RedshiftOutput0035")

    # assume 6 total cores
    # allocate 3 work groups of 2 cores each
    for ax in yt.parallel_objects("xyz", njobs=3):
        # project each field with one of the two cores in the workgroup
        for field in yt.parallel_objects([("gas", "density"), ("gas", "temperature")]):
            p = yt.ProjectionPlot(ds, ax, field, weight_field=("gas", "density"))
            p.save("figures/")

Note, in the above example, if the inner
:func:`~yt.utilities.parallel_tools.parallel_analysis_interface.parallel_objects`
call were removed from the loop, the two-processor work group would work
together to project each of the density and temperature fields. This is
because the projection functionality itself is parallelized internally.

The :func:`~yt.data_objects.time_series.DatasetSeries.piter` function can
also be used in the above manner with nested
:func:`~yt.utilities.parallel_tools.parallel_analysis_interface.parallel_objects`
loops to allocate multiple processors to work on each dataset. As discussed
above in :ref:`parallel-time-series-analysis`, the ``parallel`` keyword is
used to control the number of workgroups created for iterating over multiple
datasets.
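A minimal sketch of that combination (assuming, say, 8 MPI processes, so
that ``parallel=2`` creates 2 workgroups of 4 processors each; the file
pattern is the same one used above):

.. code-block:: python

    import yt

    yt.enable_parallelism()

    ts = yt.DatasetSeries("DD*/output_*", parallel=2)

    for ds in ts.piter():
        # within a workgroup, give the fields to different processors
        for field in yt.parallel_objects([("gas", "density"), ("gas", "temperature")]):
            p = yt.ProjectionPlot(ds, "x", field)
            p.save("figures/")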
Parallel Performance, Resources, and Tuning
-------------------------------------------

Optimizing parallel jobs in yt is difficult; there are many parameters that
affect how well and quickly the job runs. In many cases, the only way to find
out what the minimum (or optimal) number of processors is, or amount of
memory needed, is through trial and error. However, this section will attempt
to provide some insight into what are good starting values for a given
parallel task.

Chunk Decomposition
+++++++++++++++++++

In general, these types of parallel calculations scale very well with number
of processors. They are also fairly memory-conservative. The two limiting
factors are therefore the number of chunks in the dataset, and the speed of
the disk the data is stored on. There is no point in running a parallel job
of this kind with more processors than chunks, because the extra processors
will do absolutely nothing, and will in fact probably just serve to slow down
the whole calculation due to the extra overhead.

The speed of the disk is also a consideration: if it is not a high-end
parallel file system, adding more tasks will not speed up the calculation if
the disk is already swamped with activity. The best advice for these sorts of
calculations is to run with just a few processors and go from there, seeing
if the runtime improves noticeably.

**Projections, Slices, Cutting Planes and Covering Grids**

Projections, slices and cutting planes are the most common methods of
creating two-dimensional representations of data. All three have been
parallelized in a chunk-based fashion.

* **Projections**: projections are parallelized utilizing a quad-tree
  approach. Data is loaded for each processor, typically by a process that
  consolidates open/close/read operations, and each grid is then iterated
  over and cells are deposited into a data structure that stores values
  corresponding to positions in the two-dimensional plane. This provides
  excellent load balancing, and in serial is quite fast. However, the
  operation by which quadtrees are joined across processors scales poorly;
  while memory consumption scales well, the time to completion does not. As
  such, projections can often be done very fast when operating only on a
  single processor! The quadtree algorithm can be used inline (and, indeed,
  it is for this reason that it is slow). It is recommended that you attempt
  to project in serial before projecting in parallel; even for the very
  largest datasets (Enzo 1024^3 root grid with 7 levels of refinement) in
  the absence of IO the quadtree algorithm takes only three minutes or so on
  a decent processor.

* **Slices**: to generate a slice, chunks that intersect a given slice are
  iterated over and their finest-resolution cells are deposited. The chunks
  are decomposed via standard load balancing. While this operation is
  parallel, **it is almost never necessary to slice a dataset in parallel**,
  as all data is loaded on demand anyway. The slice operation has been
  parallelized so as to enable slicing when running *in situ*.

* **Cutting planes**: cutting planes are parallelized exactly as slices are.
  However, in contrast to slices, because the data-selection operation can
  be much more time consuming, cutting planes often benefit from parallelism.

* **Covering Grids**: covering grids are parallelized exactly as slices are.

Object-Based
++++++++++++

Like chunk decomposition, it does not help to run with more processors than
the number of objects to be iterated over. There is also the matter of the
kind of work being done on each object, and whether it is disk-intensive,
cpu-intensive, or memory-intensive. It is up to the user to figure out what
limits the performance of their script, and use the correct amount of
resources, accordingly.

Disk-intensive jobs are limited by the speed of the file system, as above,
and extra processors beyond its capability are likely counter-productive. It
may require some testing or research (e.g. supercomputer documentation) to
find out what the file system is capable of.

If it is cpu-intensive, it's best to use as many processors as possible and
practical.

For a memory-intensive job, each processor needs to be able to allocate
enough memory, which may mean using fewer than the maximum number of tasks
per compute node, and increasing the number of nodes. The memory used per
processor should be calculated and compared to the memory available on each
compute node, which dictates how many tasks can run per node.
After that, the number of processors used overall is dictated by the disk
system or CPU-intensity of the job.

Domain Decomposition
++++++++++++++++++++

The various types of analysis that utilize domain decomposition use it in
different enough ways that they are discussed separately.

**Halo-Finding**

Halo finding, along with the merger tree that uses halo finding, operates on
the particles in the volume, and is therefore mostly chunk-agnostic.
Generally, the biggest concern for halo finding is the amount of memory
needed. There is a subtle art in estimating the amount of memory needed for
halo finding, but a rule of thumb is that the HOP halo finder
(:func:`HaloFinder`) is the most memory intensive, with Friends of Friends
(:func:`FOFHaloFinder`) being the most memory-conservative. For more
information, see :ref:`halo-analysis`.

**Volume Rendering**

The simplest way to think about volume rendering is that it load-balances
over the i/o chunks in the dataset. Each processor is given roughly the same
sized volume to operate on. In practice, there are just a few things to keep
in mind when doing volume rendering. First, it only uses a power of two
number of processors. If the job is run with 100 processors, only 64 of them
will actually do anything. Second, the absolute maximum number of processors
is the number of chunks. In order to keep work distributed evenly, typically
the number of processors should be no greater than one-eighth or one-quarter
the number of processors that were used to produce the dataset. For more
information, see :ref:`volume_rendering`.

Additional Tips
---------------

* Don't be afraid to change how a parallel job is run. Change the number of
  processors, or memory allocated, and see if things work better or worse.
  After all, it's just a computer, it doesn't pass moral judgment!

* Similarly, human time is more valuable than computer time. Try increasing
  the number of processors, and see if the runtime drops significantly.
  There will be a sweet spot between speed of run and the waiting time in
  the job scheduler queue; it may be worth trying to find it.

* If you are using object-based parallelism but doing CPU-intensive
  computations on each object, you may find that setting ``num_procs``
  equal to the number of processors per compute node can lead to significant
  speedups. By default, most mpi implementations will assign tasks to
  processors on a 'by-slot' basis, so this setting will tell yt to do
  computations on a single object using only the processors on a single
  compute node. A nice application for this type of parallelism is
  calculating a list of derived quantities for a large number of simulation
  outputs.

* It is impossible to tune a parallel operation without understanding what's
  going on. Read the documentation, look at the underlying code, or talk to
  other yt users. Get informed!

* Sometimes it is difficult to know if a job is cpu, memory, or disk
  intensive, especially if the parallel job utilizes several of the kinds of
  parallelism discussed above. In this case, it may be worthwhile to put
  some simple timers in your script (as below) around different parts.
  .. code-block:: python

      import time

      import yt

      yt.enable_parallelism()

      ds = yt.load("DD0152")

      t0 = time.time()
      bigstuff, hugestuff = StuffFinder(ds)
      BigHugeStuffParallelFunction(ds, bigstuff, hugestuff)
      t1 = time.time()
      for i in range(1000000):
          tinystuff, ministuff = GetTinyMiniStuffOffDisk("in%06d.txt" % i)
          array = TinyTeensyParallelFunction(ds, tinystuff, ministuff)
          SaveTinyMiniStuffToDisk("out%06d.txt" % i, array)
      t2 = time.time()

      if yt.is_root():
          print(
              "BigStuff took {:.5e} sec, TinyStuff took {:.5e} sec".format(
                  t1 - t0, t2 - t1
              )
          )

* Remember that if the script handles disk IO explicitly, and does not use a
  built-in yt function to write data to disk, care must be taken to avoid
  `race-conditions <https://en.wikipedia.org/wiki/Race_condition>`_. Be
  explicit about which MPI task writes to disk using a construction
  something like this:

  .. code-block:: python

      if yt.is_root():
          file = open("out.txt", "w")
          file.write(stuff)
          file.close()

* Many supercomputers allow users to ssh into the nodes that their job is
  running on. Many job schedulers send the names of the nodes that are used
  in the notification emails, or a command like ``qstat -f NNNN``, where
  ``NNNN`` is the job ID, will also show this information. By ssh-ing into
  nodes, the memory usage of each task can be viewed in real-time as the job
  runs (using ``top``, for example), and can give valuable feedback about
  the resources the task requires.

An Advanced Worked Example
--------------------------

Below is a script used to calculate the redshift of first 99.9% ionization in
a simulation. This script was designed to analyze a set of 100 outputs on
Gordon, running on 128 processors. This script goes through five phases:

#. Define a new derived field, which calculates the fraction of ionized
   hydrogen as a function only of the total hydrogen density.

#. Load a time series up, specifying ``parallel = 8``. This means that it
   will decompose into 8 jobs. So if we ran on 128 processors, we would have
   16 processors assigned to each output in the time series.

#. Create a big cube that will hold our results for this set of processors.
   Note that this will be only for each output considered by this processor,
   and this cube will not necessarily be filled in every cell.

#. For each output, distribute the grids to each of the sixteen processors
   working on that output. Each of these takes the max of the ionized
   redshift in their zone versus the accumulation cube.

#. Iterate over slabs and find the maximum redshift in each slab of our
   accumulation cube.

At the end, the root processor (of the global calculation) writes out an
ionization cube that contains the redshift of first reionization for each
zone across all outputs.

.. literalinclude:: ionization_cube.py

{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Filtering Particle Data\n", "Let us go through a full worked example. Here we have a Tipsy SPH dataset. By general\n", "inspection, we see that there are stars present in the dataset, since\n", "there are fields with field type: `Stars` in the `ds.field_list`. Let's look \n", "at the `derived_field_list` for all of the `Stars` fields."
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n", "\n", "import yt\n", "\n", "ds = yt.load(\"TipsyGalaxy/galaxy.00300\")\n", "for field in ds.derived_field_list:\n", " if field[0] == \"Stars\":\n", " print(field)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will filter these into young stars and old stars by masking on the ('Stars', 'creation_time') field. \n", "\n", "In order to do this, we first make a function which applies our desired cut. This function must accept two arguments: `pfilter` and `data`. The first argument is a `ParticleFilter` object that contains metadata about the filter its self. The second argument is a yt data container.\n", "\n", "Let's call \"young\" stars only those stars with ages less 5 million years. Since Tipsy assigns a very large `creation_time` for stars in the initial conditions, we need to also exclude stars with negative ages. \n", "\n", "Conversely, let's define \"old\" stars as those stars formed dynamically in the simulation with ages greater than 5 Myr. We also include stars with negative ages, since these stars were included in the simulation initial conditions.\n", "\n", "We make use of `pfilter.filtered_type` so that the filter definition will use the same particle type as the one specified in the call to `add_particle_filter` below. This makes the filter definition usable for arbitrary particle types. Since we're only filtering the `\"Stars\"` particle type in this example, we could have also replaced `pfilter.filtered_type` with `\"Stars\"` and gotten the same result." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def young_stars(pfilter, data):\n", " age = data.ds.current_time - data[pfilter.filtered_type, \"creation_time\"]\n", " filter = np.logical_and(age.in_units(\"Myr\") <= 5, age >= 0)\n", " return filter\n", "\n", "\n", "def old_stars(pfilter, data):\n", " age = data.ds.current_time - data[pfilter.filtered_type, \"creation_time\"]\n", " filter = np.logical_or(age.in_units(\"Myr\") >= 5, age < 0)\n", " return filter" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we define these as particle filters within the yt universe with the\n", "`add_particle_filter()` function." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "yt.add_particle_filter(\n", " \"young_stars\",\n", " function=young_stars,\n", " filtered_type=\"Stars\",\n", " requires=[\"creation_time\"],\n", ")\n", "\n", "yt.add_particle_filter(\n", " \"old_stars\", function=old_stars, filtered_type=\"Stars\", requires=[\"creation_time\"]\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us now apply these filters specifically to our dataset.\n", "\n", "Let's double check that it worked by looking at the derived_field_list for any new fields created by our filter." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ds.add_particle_filter(\"young_stars\")\n", "ds.add_particle_filter(\"old_stars\")\n", "\n", "for field in ds.derived_field_list:\n", " if \"young_stars\" in field or \"young_stars\" in field[1]:\n", " print(field)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We see all of the new `young_stars` fields as well as the 4 deposit fields. 
"These deposit fields are `mesh` fields generated by depositing particle fields on the grid. Let's generate a couple of projections of where the young and old stars reside in this simulation by accessing some of these new fields." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "p = yt.ProjectionPlot(\n", "    ds,\n", "    \"z\",\n", "    [(\"deposit\", \"young_stars_cic\"), (\"deposit\", \"old_stars_cic\")],\n", "    width=(40, \"kpc\"),\n", "    center=\"m\",\n", ")\n", "p.set_figure_size(5)\n", "p.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We see that young stars are concentrated in regions of active star formation, while old stars are more spatially extended." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.1" } }, "nbformat": 4, "nbformat_minor": 0 }

.. _saving_data:

Saving Reloadable Data
======================

Most of the data loaded into or generated with yt can be saved to a format
that can be reloaded as a first-class dataset. This includes the following:

* geometric data containers (regions, spheres, disks, rays, etc.)
* grid data containers (covering grids, arbitrary grids, fixed resolution buffers)
* spatial plots (projections, slices, cutting planes)
* profiles
* generic array data

In the case of projections, slices, and profiles, reloaded data can be used
to remake plots. For information on this, see :ref:`remaking-plots`.

.. _saving-data-containers:

Geometric Data Containers
-------------------------

Data from geometric data containers can be saved with the
:func:`~yt.data_objects.data_containers.YTDataContainer.save_as_dataset`
function.

.. notebook-cell::

   import yt

   ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")

   sphere = ds.sphere([0.5] * 3, (10, "Mpc"))
   fn = sphere.save_as_dataset(fields=[("gas", "density"), ("all", "particle_mass")])
   print(fn)

This function will return the name of the file to which the dataset was
saved. The filename will be a combination of the name of the original dataset
and the type of data container. Optionally, a specific filename can be given
with the ``filename`` keyword. If no fields are given, the fields that have
previously been queried will be saved.

The newly created dataset can be loaded like all other supported data through
``yt.load``. Once loaded, field data can be accessed through the traditional
data containers or through the ``data`` attribute, which will be a data
container configured like the original data container used to make the
dataset. Grid data is accessed by the ``grid`` data type and particle data is
accessed with the original particle type. As with the original dataset, grid
positions and cell sizes are accessible with, for example, ("grid", "x") and
("grid", "dx"). Particle positions are accessible as
(<particle type>, "particle_position_x"). All original simulation parameters
are accessible in the ``parameters`` dictionary, normally associated with all
datasets.

.. code-block:: python

    sphere_ds = yt.load("DD0046_sphere.h5")

    # use the original data container
    print(sphere_ds.data["grid", "density"])

    # create a new data container
    ad = sphere_ds.all_data()

    # grid data
    print(ad["grid", "density"])
    print(ad["grid", "x"])
    print(ad["grid", "dx"])

    # particle data
    print(ad["all", "particle_mass"])
    print(ad["all", "particle_position_x"])

Note that because field data queried from geometric containers is returned as
unordered 1D arrays, data container datasets are treated, effectively, as
particle data. Thus, 3D indexing of grid data from these datasets is not
possible.

.. _saving-grid-data-containers:

Grid Data Containers
--------------------

Data containers that return field data as multidimensional arrays can be
saved so as to preserve this type of access. This includes covering grids,
arbitrary grids, and fixed resolution buffers. Saving data from these
containers works just as with geometric data containers, and once reloaded,
field data can again be accessed through geometric data containers.

.. code-block:: python

    cg = ds.covering_grid(level=0, left_edge=[0.25] * 3, dims=[16] * 3)
    fn = cg.save_as_dataset(fields=[("gas", "density"), ("all", "particle_mass")])

    cg_ds = yt.load(fn)
    ad = cg_ds.all_data()
    print(ad["grid", "density"])

Multidimensional indexing of field data is also available through the
``data`` attribute.

.. code-block:: python

    print(cg_ds.data["grid", "density"])

Fixed resolution buffers work just the same.

.. code-block:: python

    my_proj = ds.proj(("gas", "density"), "x", weight_field=("gas", "density"))
    frb = my_proj.to_frb(1.0, (800, 800))
    fn = frb.save_as_dataset(fields=[("gas", "density")])

    frb_ds = yt.load(fn)
    print(frb_ds.data["gas", "density"])

.. _saving-spatial-plots:

Spatial Plots
-------------

Spatial plots, such as projections, slices, and off-axis slices (cutting
planes) can also be saved and reloaded.

.. code-block:: python

    proj = ds.proj(("gas", "density"), "x", weight_field=("gas", "density"))
    proj.save_as_dataset()

Once reloaded, they can be handed to their associated plotting functions to
make images.

.. code-block:: python

    proj_ds = yt.load("DD0046_proj.h5")
    p = yt.ProjectionPlot(
        proj_ds, "x", ("gas", "density"), weight_field=("gas", "density")
    )
    p.save()

.. _saving-profile-data:

Profiles
--------

Profiles created with :func:`~yt.data_objects.profiles.create_profile`,
:class:`~yt.visualization.profile_plotter.ProfilePlot`, and
:class:`~yt.visualization.profile_plotter.PhasePlot` can be saved with the
:func:`~yt.data_objects.profiles.save_as_dataset` function, which works just
as above. Profile datasets are a type of non-spatial grid dataset. Geometric
selection is not possible, but data can be accessed through the ``.data``
attribute.

.. notebook-cell::

   import yt

   ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
   ad = ds.all_data()

   profile_2d = yt.create_profile(ad, [("gas", "density"), ("gas", "temperature")],
                                  ("gas", "mass"), weight_field=None,
                                  n_bins=(128, 128))
   profile_2d.save_as_dataset()

   prof_2d_ds = yt.load("DD0046_Profile2D.h5")
   print (prof_2d_ds.data["gas", "mass"])

The x, y (if at least 2D), and z (if 3D) bin fields can be accessed as 1D
arrays with "x", "y", and "z".

.. code-block:: python

    print(prof_2d_ds.data["gas", "x"])

The bin fields can also be returned with the same shape as the profile data
by accessing them with their original names. This allows for boolean masking
of profile data using the bin fields.

.. code-block:: python

    # density is the x bin field
    print(prof_2d_ds.data["gas", "density"])

For 1, 2, and 3D profile datasets, a fake profile object will be constructed
by accessing the ".profile" attribute. This is used primarily in the case of
1 and 2D profiles to create figures using
:class:`~yt.visualization.profile_plotter.ProfilePlot` and
:class:`~yt.visualization.profile_plotter.PhasePlot`.

.. code-block:: python

    p = yt.PhasePlot(
        prof_2d_ds.data,
        ("gas", "density"),
        ("gas", "temperature"),
        ("gas", "mass"),
        weight_field=None,
    )
    p.save()

.. _saving-array-data:

Generic Array Data
------------------

Generic arrays can be saved and reloaded as non-spatial data using the
:func:`~yt.frontends.ytdata.utilities.save_as_dataset` function, also
available as ``yt.save_as_dataset``. As with profiles, geometric selection is
not possible, but the data can be accessed through the ``.data`` attribute.

.. notebook-cell::

   import yt

   ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")

   region = ds.box([0.25]*3, [0.75]*3)
   sphere = ds.sphere(ds.domain_center, (10, "Mpc"))
   my_data = {}
   my_data["region_density"] = region["gas", "density"]
   my_data["sphere_density"] = sphere["gas", "density"]
   yt.save_as_dataset(ds, "test_data.h5", my_data)

   array_ds = yt.load("test_data.h5")
   print (array_ds.data["data", "region_density"])
   print (array_ds.data["data", "sphere_density"])

Array data can be saved with or without a dataset loaded. If no dataset has
been loaded, a fake dataset can be provided as a dictionary.

.. notebook-cell::

   import numpy as np
   import yt

   my_data = {"density": yt.YTArray(np.random.random(10), "g/cm**3"),
              "temperature": yt.YTArray(np.random.random(10), "K")}
   fake_ds = {"current_time": yt.YTQuantity(10, "Myr")}
   yt.save_as_dataset(fake_ds, "random_data.h5", my_data)

   new_ds = yt.load("random_data.h5")
   print (new_ds.data["data", "density"])

.. _time-series-analysis:

Time Series Analysis
====================

Often, one wants to analyze a continuous set of outputs from a simulation in
a uniform manner. A simple example would be to calculate the peak density in
a set of outputs that were written out. The problem with time series analysis
in yt is generally an issue of verbosity and clunkiness. Typically, one sets
up a loop:

.. code-block:: python

    import yt

    for dsi in range(30):
        fn = "DD%04i/DD%04i" % (dsi, dsi)
        ds = yt.load(fn)
        process_output(ds)

But this is not very nice, and it ends up requiring a lot of maintenance.
The :class:`~yt.data_objects.time_series.DatasetSeries` object has been
designed to remove some of this clunkiness and present an easier, more
unified approach to analyzing sets of data. Even better,
:class:`~yt.data_objects.time_series.DatasetSeries` works in parallel by
default (see :ref:`parallel-computation`), so you can use a ``DatasetSeries``
object to quickly and easily parallelize your analysis. Since doing the same
analysis task on many simulation outputs is 'embarrassingly' parallel, this
naturally allows for almost arbitrary speedup, limited only by the number of
available processors and the number of simulation outputs.

The idea behind the current implementation of time series analysis is that
the underlying data and the operators that act on that data can and should be
distinct.
There are several operators provided, as well as facilities for creating your
own, and these operators can be applied either to datasets as a whole or to
subregions of individual datasets.

The simplest mechanism for creating a ``DatasetSeries`` object is to pass a
glob pattern to the ``yt.load`` function.

.. code-block:: python

    import yt

    ts = yt.load("DD????/DD????")

This will create a new time series, populated with all datasets that match
the pattern "DD" followed by four digits. This object, here called ``ts``,
can now be analyzed in bulk. Alternately, you can specify an already
formatted list of filenames directly to the
:class:`~yt.data_objects.time_series.DatasetSeries` initializer:

.. code-block:: python

    import yt

    ts = yt.DatasetSeries(["DD0030/DD0030", "DD0040/DD0040"])

Analyzing Each Dataset In Sequence
----------------------------------

The :class:`~yt.data_objects.time_series.DatasetSeries` object has two
primary methods of iteration. The first is a very simple iteration, where
each object is returned for iteration:

.. code-block:: python

    import yt

    ts = yt.load("*/*.index")

    for ds in ts:
        print(ds.current_time)

This can also operate in parallel, using
:meth:`~yt.data_objects.time_series.DatasetSeries.piter`. For more examples,
see:

* :ref:`parallel-time-series-analysis`
* The cookbook recipe for :ref:`cookbook-time-series-analysis`
* :class:`~yt.data_objects.time_series.DatasetSeries`

In addition, the :class:`~yt.data_objects.time_series.DatasetSeries` object
allows you to select an output based on its time or its redshift (if defined)
as follows:

.. code-block:: python

    import yt

    ts = yt.load("*/*.index")

    # Get output at 3 Gyr
    ds = ts.get_by_time((3, "Gyr"))

    # This will fail if no output is found within 100 Myr
    ds = ts.get_by_time((3, "Gyr"), tolerance=(100, "Myr"))

    # Get the output at the time right before and after 3 Gyr
    ds_before = ts.get_by_time((3, "Gyr"), prefer="smaller")
    ds_after = ts.get_by_time((3, "Gyr"), prefer="larger")

    # For cosmological simulations, you can also select an output by its
    # redshift with the same options as above
    ds = ts.get_by_redshift(0.5)

For more information, see
:meth:`~yt.data_objects.time_series.DatasetSeries.get_by_time` and
:meth:`~yt.data_objects.time_series.DatasetSeries.get_by_redshift`.

.. _analyzing-an-entire-simulation:

Analyzing an Entire Simulation
------------------------------

.. note::

    Implemented for the Enzo, Gadget, OWLS, and Exodus II frontends.

The parameter file used to run a simulation contains all the information
necessary to know what datasets should be available. The ``simulation``
convenience function allows one to create a ``DatasetSeries`` object of all
or a subset of all data created by a single simulation. To instantiate, give
the parameter file and the simulation type.

.. code-block:: python

    import yt

    my_sim = yt.load_simulation("enzo_tiny_cosmology/32Mpc_32.enzo", "Enzo")

Then, create a ``DatasetSeries`` object with the
:meth:`frontends.enzo.simulation_handling.EnzoSimulation.get_time_series`
function. With no additional keywords, the time series will include every
dataset. If the ``find_outputs`` keyword is set to ``True``, a search of the
simulation directory will be performed looking for potential datasets. These
datasets will be temporarily loaded in order to figure out the time and
redshift associated with them. This can be used when simulation data was
created in a non-standard way, making it difficult to guess the corresponding
time and redshift information.

.. code-block:: python

    my_sim.get_time_series()

After this, time series analysis can be done normally.

.. code-block:: python

    for ds in my_sim.piter():
        all_data = ds.all_data()
        print(all_data.quantities.extrema(("gas", "density")))

Additional keywords can be given to
:meth:`frontends.enzo.simulation_handling.EnzoSimulation.get_time_series` to
select a subset of the total data (an example follows this list):

* ``time_data`` (*bool*): Whether or not to include time outputs when
  gathering datasets for time series. Default: True. (Enzo only)

* ``redshift_data`` (*bool*): Whether or not to include redshift outputs
  when gathering datasets for time series. Default: True. (Enzo only)

* ``initial_time`` (*float*): The earliest time for outputs to be included.
  If None, the initial time of the simulation is used. This can be used in
  combination with either ``final_time`` or ``final_redshift``.
  Default: None.

* ``final_time`` (*float*): The latest time for outputs to be included. If
  None, the final time of the simulation is used. This can be used in
  combination with either ``initial_time`` or ``initial_redshift``.
  Default: None.

* ``times`` (*list*): A list of times for which outputs will be found.
  Default: None.

* ``initial_redshift`` (*float*): The earliest redshift for outputs to be
  included. If None, the initial redshift of the simulation is used. This
  can be used in combination with either ``final_time`` or
  ``final_redshift``. Default: None.

* ``final_redshift`` (*float*): The latest redshift for outputs to be
  included. If None, the final redshift of the simulation is used. This can
  be used in combination with either ``initial_time`` or
  ``initial_redshift``. Default: None.

* ``redshifts`` (*list*): A list of redshifts for which outputs will be
  found. Default: None.

* ``initial_cycle`` (*float*): The earliest cycle for outputs to be
  included. If None, the initial cycle of the simulation is used. This can
  only be used with final_cycle. Default: None. (Enzo only)

* ``final_cycle`` (*float*): The latest cycle for outputs to be included.
  If None, the final cycle of the simulation is used. This can only be used
  in combination with initial_cycle. Default: None. (Enzo only)

* ``tolerance`` (*float*): Used in combination with ``times`` or
  ``redshifts`` keywords, this is the tolerance within which outputs are
  accepted given the requested times or redshifts. If None, the nearest
  output is always taken. Default: None.

* ``parallel`` (*bool*/*int*): If True, the generated ``DatasetSeries`` will
  divide the work such that a single processor works on each dataset. If an
  integer is supplied, the work will be divided into that number of jobs.
  Default: True.
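For example, a sketch restricting the series to outputs between two redshifts
(the values here are arbitrary):

.. code-block:: python

    # only gather outputs between z = 1 and z = 0
    my_sim.get_time_series(initial_redshift=1.0, final_redshift=0.0)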
.. _units:

Symbolic Units
==============

This section describes yt's symbolic unit capabilities. This is provided as a
quick introduction for those who are already familiar with yt but want to
learn more about the unit system. Please see :ref:`analyzing` and
:ref:`visualizing` for more detail about querying, analyzing, and visualizing
data in yt.

Originally the unit system was a part of yt proper but since the yt 4.0
release, the unit system has been split off into
`its own library <https://github.com/yt-project/unyt>`_, ``unyt``. For a
detailed discussion of how to use ``unyt``, we suggest taking a look at the
unyt documentation available at https://unyt.readthedocs.io/. However, yt
adds additional capabilities above and beyond what is provided by ``unyt``
alone; we describe those capabilities below.

Selecting data from a data object
---------------------------------

The data returned by yt will have units attached to it. For example, let's
query a data object for the ``('gas', 'density')`` field:

>>> import yt
>>> ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
>>> dd = ds.all_data()
>>> dd['gas', 'density']
unyt_array([4.92775113e-31, 4.94005233e-31, 4.93824694e-31, ...,
            1.12879234e-25, 1.59561490e-25, 1.09824903e-24], 'g/cm**3')

We can see how we get back a ``unyt_array`` instance. A ``unyt_array`` is a
subclass of NumPy's ndarray type that has units attached to it:

>>> dd['gas', 'density'].units
g/cm**3

It is straightforward to convert data to different units:

>>> dd['gas', 'density'].to('Msun/kpc**3')
unyt_array([7.28103608e+00, 7.29921182e+00, 7.29654424e+00, ...,
            1.66785569e+06, 2.35761291e+06, 1.62272618e+07], 'Msun/kpc**3')

For more details about working with ``unyt_array``, see the
`unyt documentation <https://unyt.readthedocs.io/>`__.

Applying Units to Data
----------------------

A ``unyt_array`` can be created from a list, tuple, or NumPy array using
multiplication with a ``Unit`` object. For convenience, each yt dataset has a
``units`` attribute one can use to obtain unit objects for this purpose:

>>> data = np.random.random((100, 100))
>>> data_with_units = data * ds.units.gram

All units known to the dataset will be available via ``ds.units``, including
code units and comoving units.

Derived Field Units
-------------------

Special care often needs to be taken to ensure the result of a derived field
will come out in the correct units. The yt unit system will double-check for
you to make sure you are not accidentally making a unit conversion mistake.
To see what that means in practice, let's define a derived field
corresponding to the square root of the gas density:

>>> import yt
>>> import numpy as np
>>> def root_density(field, data):
...     return np.sqrt(data['gas', 'density'])
>>> ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
>>> ds.add_field(("gas", "root_density"), units="(g/cm**3)**(1/2)",
...              function=root_density, sampling_type='cell')
>>> ad = ds.all_data()
>>> ad['gas', 'root_density']
unyt_array([7.01979425e-16, 7.02855059e-16, 7.02726614e-16, ...,
            3.35975050e-13, 3.99451486e-13, 1.04797377e-12], 'sqrt(g)/cm**(3/2)')

No special unit logic needs to happen inside of the function: the result of
``np.sqrt`` will have the correct units:

>>> np.sqrt(ad['gas', 'density'])
unyt_array([7.01979425e-16, 7.02855059e-16, 7.02726614e-16, ...,
            3.35975050e-13, 3.99451486e-13, 1.04797377e-12], 'sqrt(g)/cm**(3/2)')

One could also specify any other units that have dimensions of square root of
density and yt would automatically convert the return value of the field
function to the specified units. An error would be raised if the units are
not dimensionally equivalent to the return value of the field function.
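For instance, a sketch of the same field registered with MKS-based units
(the field name here is made up purely for illustration):

>>> ds.add_field(("gas", "root_density_mks"), units="(kg/m**3)**(1/2)",
...              function=root_density, sampling_type='cell')

The values returned for this field would be converted to the requested units
automatically.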
Derived Field Units ------------------- Special care often needs to be taken to ensure the result of a derived field will come out in the correct units. The yt unit system will double-check for you to make sure you are not accidentally making a unit conversion mistake. To see what that means in practice, let's define a derived field corresponding to the square root of the gas density: >>> import yt >>> import numpy as np >>> def root_density(field, data): ... return np.sqrt(data['gas', 'density']) >>> ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030') >>> ds.add_field(("gas", "root_density"), units="(g/cm**3)**(1/2)", ... function=root_density, sampling_type='cell') >>> ad = ds.all_data() >>> ad['gas', 'root_density'] unyt_array([7.01979425e-16, 7.02855059e-16, 7.02726614e-16, ..., 3.35975050e-13, 3.99451486e-13, 1.04797377e-12], 'sqrt(g)/cm**(3/2)') No special unit logic needs to happen inside of the function: the result of ``np.sqrt`` will have the correct units: >>> np.sqrt(ad['gas', 'density']) unyt_array([7.01979425e-16, 7.02855059e-16, 7.02726614e-16, ..., 3.35975050e-13, 3.99451486e-13, 1.04797377e-12], 'sqrt(g)/cm**(3/2)') One could also specify any other units that have dimensions of square root of density and yt would automatically convert the return value of the field function to the specified units. An error would be raised if the units are not dimensionally equivalent to the return value of the field function. Code Units ---------- All yt datasets are associated with a "code" unit system that corresponds to whatever unit system the data is represented in on-disk. Let's take a look at the data in an Enzo simulation, specifically the ``("enzo", "Density")`` field: >>> import yt >>> ds = yt.load('Enzo_64/DD0043/data0043') >>> ad = ds.all_data() >>> ad["enzo", "Density"] unyt_array([6.74992726e-02, 6.12111635e-02, 8.92988636e-02, ..., 9.09875931e+01, 5.66932465e+01, 4.27780263e+01], 'code_mass/code_length**3') we see we get back data from yt in units of ``code_mass/code_length**3``. This is the density unit formed out of the base units of mass and length in the internal unit system in the simulation. We can see the values of these units by looking at the ``length_unit`` and ``mass_unit`` attributes of the dataset object: >>> ds.length_unit unyt_quantity(128, 'Mpccm/h') >>> ds.mass_unit unyt_quantity(4.89045159e+50, 'g') And we can see that both of these have values of 1 in the code unit system: >>> ds.length_unit.to('code_length') unyt_quantity(1., 'code_length') >>> ds.mass_unit.to('code_mass') unyt_quantity(1., 'code_mass') In addition to ``length_unit`` and ``mass_unit``, there are also ``time_unit``, ``velocity_unit``, and ``magnetic_unit`` attributes for this dataset. Some frontends also define a ``density_unit``, ``pressure_unit``, ``temperature_unit``, and ``specific_energy_unit`` attribute. If these are not defined then the corresponding unit is calculated from the base length, mass, and time unit. Each of these attributes corresponds to a unit in the code unit system: >>> [un for un in dir(ds.units) if un.startswith('code')] ['code_density', 'code_length', 'code_magnetic', 'code_mass', 'code_metallicity', 'code_pressure', 'code_specific_energy', 'code_temperature', 'code_time', 'code_velocity'] You can use these unit names to convert arbitrary data into a dataset's code unit system: >>> u = ds.units >>> data = 10**-30 * u.g / u.cm**3 >>> data.to('code_density') unyt_quantity(0.36217187, 'code_density') Note how in this example we used ``ds.units`` instead of the top-level ``unyt`` namespace or ``yt.units``. This is because the units from ``ds.units`` know about the dataset's code unit system and can convert data into it. Unit objects from ``unyt`` or ``yt.units`` will not know about any particular dataset's unit system. .. _cosmological-units: Comoving units for Cosmological Simulations ------------------------------------------- The length unit of the dataset I used above uses a cosmological unit: >>> print(ds.length_unit) 128 Mpccm/h In English, this says that the length unit is 128 megaparsecs in the comoving frame, scaled as if the Hubble constant were 100 km/s/Mpc. Although :math:`h` isn't really a unit, yt treats it as one for the purposes of the unit system. As an aside, `Darren Croton's research note `_ on the history, use, and interpretation of :math:`h` as it appears in the astronomical literature is pretty much required reading for anyone who has to deal with factors of :math:`h` every now and then. In yt, comoving length unit symbols are named following the pattern ``< length unit >cm``, e.g., ``pccm`` for comoving parsec or ``mcm`` for a comoving meter. A comoving length unit is different from the normal length unit by a factor of :math:`(1+z)`: >>> u = ds.units >>> print((1*u.Mpccm)/(1*u.Mpc)) 0.9986088499304777 dimensionless >>> 1 / (1 + ds.current_redshift) 0.9986088499304776
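The same factor appears if we convert a proper length into this dataset's comoving frame, since a proper megaparsec corresponds to :math:`(1+z)` comoving megaparsecs (a small sketch using the low-redshift dataset loaded above; the value shown follows from its redshift of about 0.0014):

>>> ds.quan(1.0, 'Mpc').to('Mpccm')
unyt_quantity(1.00139309, 'Mpccm')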
As we saw before, :math:`h` is treated like any other unit symbol. It has dimensionless units, just like a scalar: >>> (1*u.Mpc)/(1*u.Mpc/u.h) unyt_quantity(0.71, '(dimensionless)') >>> ds.hubble_constant 0.71 Using parsec as an example, * ``pc`` Proper parsecs, :math:`\rm{pc}`. * ``pccm`` Comoving parsecs, :math:`\rm{pc}/(1+z)`. * ``pccm/h`` Comoving parsecs normalized by the scaled Hubble constant, :math:`\rm{pc}/h/(1+z)`. * ``pc/h`` Proper parsecs, normalized by the scaled Hubble constant, :math:`\rm{pc}/h`. Overriding Code Unit Definitions -------------------------------- On occasion, you might have a dataset for a supported frontend that does not have the conversions to code units accessible or you may want to change them outright. ``yt`` provides a mechanism so that one may provide their own code unit definitions to ``yt.load``, which override the default rules for a given frontend for defining code units. This is provided through the ``units_override`` argument to ``yt.load``. We'll use an example of an Athena dataset. First, a call to ``yt.load`` without ``units_override``: >>> ds = yt.load("MHDSloshing/virgo_low_res.0054.vtk") >>> ds.length_unit unyt_quantity(1., 'cm') >>> ds.mass_unit unyt_quantity(1., 'g') >>> ds.time_unit unyt_quantity(1., 's') >>> sp = ds.sphere("c", (0.1, "unitary")) >>> print(sp["gas", "density"]) [0.05134981 0.05134912 0.05109047 ... 0.14608461 0.14489453 0.14385277] g/cm**3 This particular simulation is of a galaxy cluster merger so these density values are way, way too high. This is happening because Athena does not encode any information about the unit system used in the simulation or the output data, so yt cannot infer that information and must make an educated guess. In this case it incorrectly assumes the data are in CGS units. However, we know *a priori* what the unit system *should* be, and we can supply a ``units_override`` dictionary to ``yt.load`` to override the incorrect assumptions yt is making about this dataset. Let's define: >>> units_override = {"length_unit": (1.0, "Mpc"), ... "time_unit": (1.0, "Myr"), ... "mass_unit": (1.0e14, "Msun")} The ``units_override`` dictionary can take the following keys: * ``length_unit`` * ``time_unit`` * ``mass_unit`` * ``magnetic_unit`` * ``temperature_unit`` and the associated values can be ``(value, "unit")`` tuples, ``unyt_quantity`` instances, or floats (in the latter case they are assumed to have the corresponding cgs unit). Now let's reload the dataset using our ``units_override`` dict: >>> ds = yt.load("MHDSloshing/virgo_low_res.0054.vtk", ... units_override=units_override) >>> sp = ds.sphere("c", (0.1, "unitary")) >>> print(sp["gas", "density"]) [3.47531683e-28 3.47527018e-28 3.45776515e-28 ... 9.88689766e-28 9.80635384e-28 9.73584863e-28] g/cm**3 and we see how the data now have much more sensible values for a galaxy cluster merger simulation.
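As noted above, the values in ``units_override`` may also be ``unyt_quantity`` instances, so the same override could equivalently have been written as (a sketch of that alternate form):

>>> from unyt import unyt_quantity
>>> units_override = {"length_unit": unyt_quantity(1.0, "Mpc"),
...                   "time_unit": unyt_quantity(1.0, "Myr"),
...                   "mass_unit": unyt_quantity(1.0e14, "Msun")}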
Comparing Units From Different Simulations ------------------------------------------ The code units from different simulations will have different conversions to physical coordinates. This can get confusing when working with data from more than one simulation or from a single simulation where the units change with time. As an example, let's load up two Enzo datasets from different redshifts in the same cosmology simulation, one from high redshift: >>> ds1 = yt.load('Enzo_64/DD0002/data0002') >>> ds1.current_redshift 7.8843748886903 >>> ds1.length_unit unyt_quantity(128, 'Mpccm/h') >>> ds1.length_unit.in_cgs() unyt_quantity(6.26145538e+25, 'cm') And another from low redshift: >>> ds2 = yt.load('Enzo_64/DD0043/data0043') >>> ds2.current_redshift 0.0013930880640796 >>> ds2.length_unit unyt_quantity(128, 'Mpccm/h') >>> ds2.length_unit.in_cgs() unyt_quantity(5.55517285e+26, 'cm') Now, despite the fact that ``'Mpccm/h'`` means different things for the two datasets, it's still a well-defined operation to take the ratio of the two length units: >>> ds2.length_unit / ds1.length_unit unyt_quantity(8.87201539, '(dimensionless)') Because code units and comoving units are defined relative to a physical unit system, ``unyt`` is able to give the correct answer here. So long as the result comes out dimensionless or in a physical unit then the answer will be well-defined. However, if we want the answer to come out in the internal units of one particular dataset, additional care must be taken. For an example where this might be an issue, let's try to compute the sum of two comoving distances from each simulation: >>> d1 = 12 * ds1.units.Mpccm >>> d2 = 12 * ds2.units.Mpccm >>> d1 + d2 unyt_quantity(118.46418468, 'Mpccm') >>> d2 + d1 unyt_quantity(13.35256754, 'Mpccm') So this is definitely weird -- addition appears not to be commutative anymore! However, both answers are correct; the confusion arises because ``"Mpccm"`` is ambiguous in these expressions. In situations like this, ``unyt`` will use the definition for units from the leftmost term in an expression, so the first example is returning data in high-redshift comoving megaparsecs, while the second example returns data in low-redshift comoving megaparsecs. Wherever possible it's best to do calculations in physical units when working with more than one dataset. If you need to use comoving units or code units then extra care must be taken in your code to avoid ambiguity. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/conf.py0000644000175100001770000002111214714401662015126 0ustar00runnerdocker# # yt documentation build configuration file, created by # sphinx-quickstart on Tue Jan 11 09:46:53 2011. # # This file is execfile()d with the current directory set to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import glob import os import sys import sphinx_bootstrap_theme on_rtd = os.environ.get("READTHEDOCS", None) == "True" # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.abspath("../extensions/")) # -- General configuration ----------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. # needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [ "sphinx.ext.autodoc", "sphinx.ext.intersphinx", "sphinx.ext.mathjax", "sphinx.ext.viewcode", "sphinx.ext.napoleon", "yt_cookbook", "yt_colormaps", "config_help", "yt_showfields", "nbsphinx", ] if not on_rtd: extensions.append("sphinx.ext.autosummary") extensions.append("pythonscript_sphinxext") # Add any paths that contain templates here, relative to this directory. templates_path = ["_templates"] # The suffix of source filenames. source_suffix = ".rst" # The encoding of source files. # source_encoding = 'utf-8-sig' # The master toctree document. master_doc = "index" # General information about the project. project = "The yt Project" copyright = "2013-2021, the yt Project" # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = "4.4" # The full version, including alpha/beta/rc tags. release = "4.4.0" # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). # add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = "sphinx" # A list of ignored prefixes for module index sorting. # modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = "bootstrap" html_theme_path = sphinx_bootstrap_theme.get_html_theme_path() # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. html_theme_options = dict( bootstrap_version="3", bootswatch_theme="readable", navbar_links=[ ("", ""), # see https://github.com/yt-project/yt/pull/3423 ("How to get help", "help/index"), ("Quickstart notebooks", "quickstart/index"), ("Cookbook", "cookbook/index"), ], navbar_sidebarrel=False, globaltoc_depth=2, ) # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. html_logo = "_static/yt_icon.png" # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. 
# html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ["_static", "analyzing/_static"] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. # html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. html_domain_indices = False # If false, no index is generated. html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. html_show_sourcelink = False # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. # html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = "ytdoc" # -- Options for LaTeX output -------------------------------------------------- # The paper size ('letter' or 'a4'). # latex_paper_size = 'letter' # The font size ('10pt', '11pt' or '12pt'). # latex_font_size = '10pt' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ("index", "yt.tex", "yt Documentation", "The yt Project", "manual"), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # If true, show page references after internal links. # latex_show_pagerefs = False # If true, show URL addresses after external links. # latex_show_urls = False # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_domain_indices = True # -- Options for manual page output -------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [("index", "yt", "yt Documentation", ["The yt Project"], 1)] nbsphinx_allow_errors = True # Example configuration for intersphinx: refer to the Python standard library. 
intersphinx_mapping = { "python": ("https://docs.python.org/3/", None), "ipython": ("https://ipython.readthedocs.io/en/stable/", None), "numpy": ("https://numpy.org/doc/stable/", None), "matplotlib": ("https://matplotlib.org/stable/", None), "astropy": ("https://docs.astropy.org/en/stable", None), "pandas": ("https://pandas.pydata.org/pandas-docs/stable", None), "trident": ("https://trident.readthedocs.io/en/latest/", None), "yt_astro_analysis": ("https://yt-astro-analysis.readthedocs.io/en/latest/", None), "yt_attic": ("https://yt-attic.readthedocs.io/en/latest/", None), "pytest": ("https://docs.pytest.org/en/stable", None), } if not on_rtd: autosummary_generate = glob.glob("reference/api/api.rst") # as of Sphinx 3.1.2 this is the supported way to link custom style sheets def setup(app): app.add_css_file("custom.css") ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.239151 yt-4.4.0/doc/source/cookbook/0000755000175100001770000000000014714401715015437 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/amrkdtree_downsampling.py0000644000175100001770000000506614714401662022561 0ustar00runnerdocker# Using AMRKDTree Homogenized Volumes to examine large datasets # at lower resolution. # In this example we will show how to use the AMRKDTree to take a simulation # with 8 levels of refinement and only use levels 0-3 to render the dataset. # Currently this cookbook is flawed in that the data that is covered by the # higher resolution data gets masked during the rendering. This should be # fixed by changing either the data source or the code in # yt/utilities/amr_kdtree.py where data is being masked for the partitioned # grid. Right now the quick fix is to create a data_collection, but this # will only work for patch based simulations that have ds.index.grids. # We begin by loading up yt, and importing the AMRKDTree import numpy as np import yt from yt.utilities.amr_kdtree.api import AMRKDTree # Load up a dataset and define the kdtree ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") im, sc = yt.volume_render(ds, ("gas", "density"), fname="v0.png") sc.camera.set_width(ds.arr(100, "kpc")) render_source = sc.get_source() kd = render_source.volume # Print out specifics of KD Tree print("Total volume of all bricks = %i" % kd.count_volume()) print("Total number of cells = %i" % kd.count_cells()) new_source = ds.all_data() new_source.max_level = 3 kd_low_res = AMRKDTree(ds, data_source=new_source) print(kd_low_res.count_volume()) print(kd_low_res.count_cells()) # Now we pass this in as the volume to our camera, and render the snapshot # again. render_source.set_volume(kd_low_res) render_source.set_field(("gas", "density")) sc.save("v1.png", sigma_clip=6.0) # This operation was substantially faster. Now lets modify the low resolution # rendering until we find something we like. tf = render_source.transfer_function tf.clear() tf.add_layers( 4, 0.01, col_bounds=[-27.5, -25.5], alpha=np.ones(4, dtype="float64"), colormap="RdBu_r", ) sc.save("v2.png", sigma_clip=6.0) # This looks better. Now let's try turning on opacity. tf.grey_opacity = True sc.save("v3.png", sigma_clip=6.0) # ## That seemed to pick out some interesting structures. Now let's bump up the ## opacity. 
# tf.clear() tf.add_layers( 4, 0.01, col_bounds=[-27.5, -25.5], alpha=10.0 * np.ones(4, dtype="float64"), colormap="RdBu_r", ) sc.save("v4.png", sigma_clip=6.0) # ## This looks pretty good, now let's go back to the full resolution AMRKDTree # render_source.set_volume(kd) sc.save("v5.png", sigma_clip=6.0) # This looks great! ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/annotate_timestamp_and_scale.py0000644000175100001770000000041414714401662023676 0ustar00runnerdockerimport yt ts = yt.load("enzo_tiny_cosmology/DD000?/DD000?") for ds in ts: p = yt.ProjectionPlot(ds, "z", ("gas", "density")) p.annotate_timestamp(corner="upper_left", redshift=True, draw_inset_box=True) p.annotate_scale(corner="upper_right") p.save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/annotations.py0000644000175100001770000000130114714401662020342 0ustar00runnerdockerimport yt ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046") p = yt.ProjectionPlot(ds, "z", ("gas", "density")) p.annotate_sphere([0.54, 0.72], radius=(1, "Mpc"), coord_system="axis", text="Halo #7") p.annotate_sphere( [0.65, 0.38, 0.3], radius=(1.5, "Mpc"), coord_system="data", circle_args={"color": "green", "linewidth": 4, "linestyle": "dashed"}, ) p.annotate_arrow([0.87, 0.59, 0.2], coord_system="data", color="red") p.annotate_text([10, 20], "Some halos", coord_system="plot") p.annotate_marker([0.45, 0.1, 0.4], coord_system="data", color="yellow", s=500) p.annotate_line([0.2, 0.4], [0.3, 0.9], coord_system="axis") p.annotate_timestamp(redshift=True) p.annotate_scale() p.save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/average_value.py0000644000175100001770000000105214714401662020616 0ustar00runnerdockerimport yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # load data field = ("gas", "temperature") # The field to average weight = ("gas", "mass") # The weight for the average ad = ds.all_data() # This is a region describing the entire box, # but note it doesn't read anything in yet! # We now use our 'quantities' call to get the average quantity average_value = ad.quantities.weighted_average_quantity(field, weight) print( "Average %s (weighted by %s) is %0.3e %s" % (field[1], weight[1], average_value, average_value.units) ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/calculating_information.rst0000644000175100001770000000763714714401662023072 0ustar00runnerdockerCalculating Dataset Information ------------------------------- These recipes demonstrate methods of calculating quantities in a simulation, either for later visualization or for understanding properties of fluids and particles in the simulation. Average Field Value ~~~~~~~~~~~~~~~~~~~ This recipe is a very simple method of calculating the global average of a given field, as weighted by another field. See :ref:`derived-quantities` for more information. .. yt_cookbook:: average_value.py Mass Enclosed in a Sphere ~~~~~~~~~~~~~~~~~~~~~~~~~ This recipe constructs a sphere and then sums the total mass in particles and fluids in the sphere. See :ref:`available-objects` and :ref:`derived-quantities` for more information. ..
yt_cookbook:: sum_mass_in_sphere.py Global Phase Plot ~~~~~~~~~~~~~~~~~ This is a simple recipe to show how to open a dataset and then plot a couple global phase diagrams, save them, and quit. See :ref:`how-to-make-2d-profiles` for more information. .. yt_cookbook:: global_phase_plots.py .. _cookbook-radial-velocity: Radial Velocity Profile ~~~~~~~~~~~~~~~~~~~~~~~ This recipe demonstrates how to subtract off a bulk velocity on a sphere before calculating the radial velocity within that sphere. See :ref:`how-to-make-1d-profiles` for more information on creating profiles and :ref:`field_parameters` for an explanation of how the bulk velocity is provided to the radial velocity field function. .. yt_cookbook:: rad_velocity.py Simulation Analysis ~~~~~~~~~~~~~~~~~~~ This uses :class:`~yt.data_objects.time_series.DatasetSeries` to calculate the extrema of a series of outputs, whose names it guesses in advance. This will run in parallel and take advantage of multiple MPI tasks. See :ref:`parallel-computation` and :ref:`time-series-analysis` for more information. .. yt_cookbook:: simulation_analysis.py .. _cookbook-time-series-analysis: Time Series Analysis ~~~~~~~~~~~~~~~~~~~~ This recipe shows how to calculate a number of quantities on a set of parameter files. Note that it is parallel aware, and that if you only wanted to run in serial the operation ``for pf in ts:`` would also have worked identically. See :ref:`parallel-computation` and :ref:`time-series-analysis` for more information. .. yt_cookbook:: time_series.py .. _cookbook-simple-derived-fields: Simple Derived Fields ~~~~~~~~~~~~~~~~~~~~~ This recipe demonstrates how to create a simple derived field, ``thermal_energy_density``, and then generate a projection from it. See :ref:`creating-derived-fields` and :ref:`projection-plots` for more information. .. yt_cookbook:: derived_field.py .. _cookbook-complicated-derived-fields: Complicated Derived Fields ~~~~~~~~~~~~~~~~~~~~~~~~~~ This recipe demonstrates how to use the :meth:`~yt.frontends.flash.data_structures.FLASHDataset.add_gradient_fields` method to generate gradient fields and use them in a more complex derived field. .. yt_cookbook:: hse_field.py Using Particle Filters to Calculate Star Formation Rates ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This recipe demonstrates how to use a particle filter to calculate the star formation rate in a galaxy evolution simulation. See :ref:`filtering-particles` for more information. .. yt_cookbook:: particle_filter_sfr.py Making a Turbulent Kinetic Energy Power Spectrum ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This recipe shows how to use ``yt`` to read data and put it on a uniform grid to interface with the NumPy FFT routines and create a turbulent kinetic energy power spectrum. (Note: the dataset used here is of low resolution, so the turbulence is not very well-developed. The spike at high wavenumbers is due to non-periodicity in the z-direction). .. yt_cookbook:: power_spectrum_example.py Downsampling an AMR Dataset ~~~~~~~~~~~~~~~~~~~~~~~~~~~ This recipe shows how to use the ``max_level`` attribute of a yt data object to only select data up to a maximum AMR level. .. 
yt_cookbook:: downsampling_amr.py ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/camera_movement.py0000644000175100001770000000130014714401662021150 0ustar00runnerdockerimport numpy as np import yt ds = yt.load("MOOSE_sample_data/out.e-s010") sc = yt.create_scene(ds) cam = sc.camera # save an image at the starting position frame = 0 sc.save("camera_movement_%04i.png" % frame) frame += 1 # Zoom out by a factor of 2 over 5 frames for _ in cam.iter_zoom(0.5, 5): sc.save("camera_movement_%04i.png" % frame) frame += 1 # Move to the position [-10.0, 10.0, -10.0] over 5 frames pos = ds.arr([-10.0, 10.0, -10.0], "code_length") for _ in cam.iter_move(pos, 5): sc.save("camera_movement_%04i.png" % frame) frame += 1 # Rotate by 180 degrees over 5 frames for _ in cam.iter_rotate(np.pi, 5): sc.save("camera_movement_%04i.png" % frame) frame += 1 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/changing_label_formats.py0000644000175100001770000000052614714401662022465 0ustar00runnerdockerimport yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # Set the format of the ionization_label to be `plus_minus` # instead of the default `roman_numeral` ds.set_field_label_format("ionization_label", "plus_minus") slc = yt.SlicePlot(ds, "x", ("gas", "H_p1_number_density")) slc.save("plus_minus_ionization_format_sliceplot.png") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/colormaps.py0000644000175100001770000000115014714401662020006 0ustar00runnerdockerimport yt # Load the dataset ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # Create a projection and save it with the default colormap ('cmyt.arbre') p = yt.ProjectionPlot(ds, "z", ("gas", "density"), width=(100, "kpc")) p.save() # Change the colormap to 'cmyt.dusk' and save again. We must specify # a different filename here or it will save it over the top of # our first projection. p.set_cmap(field=("gas", "density"), cmap="cmyt.dusk") p.save("proj_with_dusk_cmap.png") # Change the colormap to 'hot' and save again. p.set_cmap(field=("gas", "density"), cmap="hot") p.save("proj_with_hot_cmap.png") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/complex_plots.rst0000644000175100001770000003777214714401662021074 0ustar00runnerdockerA Few Complex Plots ------------------- The built-in plotting functionality covers the very simple use cases that are most common. These scripts will demonstrate how to construct more complex plots or publication-quality plots. In many cases these show how to make multi-panel plots. Multi-Width Image ~~~~~~~~~~~~~~~~~ This is a simple recipe to show how to open a dataset and then plot slices through it at varying widths. See :ref:`slice-plots` for more information. .. yt_cookbook:: multi_width_image.py .. _image-resolution-primer: Varying the resolution of an image ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This illustrates the various parameters that control the resolution of an image, including the (deprecated) refinement level, the size of the :class:`~yt.visualization.fixed_resolution.FixedResolutionBuffer`, and the number of pixels in the output image. In brief, there are three parameters that control the final resolution, with a fourth entering for particle data that is deposited onto a mesh (i.e. pre-4.0). Those are: 1.
``buff_size``, which can be altered with :meth:`~yt.visualization.plot_window.PlotWindow.set_buff_size`, which is inherited by :class:`~yt.visualization.plot_window.AxisAlignedSlicePlot`, :class:`~yt.visualization.plot_window.OffAxisSlicePlot`, :class:`~yt.visualization.plot_window.AxisAlignedProjectionPlot`, and :class:`~yt.visualization.plot_window.OffAxisProjectionPlot`. This controls the number of resolution elements in the :class:`~yt.visualization.fixed_resolution.FixedResolutionBuffer`, which can be thought of as the number of individually colored squares (on a side) in a 2D image. ``buff_size`` can be set after creating the image with :meth:`~yt.visualization.plot_window.PlotWindow.set_buff_size`, or during image creation with the ``buff_size`` argument to any of the four preceding classes. 2. ``figure_size``, which can be altered with either :meth:`~yt.visualization.plot_container.PlotContainer.set_figure_size`, or can be set during image creation with the ``window_size`` argument. This sets the size of the final image (including the visualization and, if applicable, the axes and colorbar as well) in inches. 3. ``dpi``, i.e. the dots-per-inch in your final file, which can also be thought of as the actual resolution of your image. This can only be set on save via the ``mpl_kwargs`` parameter to :meth:`~yt.visualization.plot_container.PlotContainer.save`. The ``dpi`` and ``figure_size`` together set the true resolution of your image (final image will be ``dpi`` :math:`*` ``figure_size`` pixels on a side), so if these are set too low, then your ``buff_size`` will not matter. On the other hand, increasing these without increasing ``buff_size`` accordingly will simply blow up your resolution elements to fill several real pixels. 4. (only for meshed particle data) ``n_ref``, the maximum number of particles in a cell in the oct-tree allowed before it is refined (removed in yt-4.0 as particle data is no longer deposited onto an oct-tree). For particle data, ``n_ref`` effectively sets the underlying resolution of your simulation. Regardless, for either grid data or deposited particle data, your image will never be higher resolution than your simulation data. In other words, if you are visualizing a region 50 kpc across that includes data that reaches a resolution of 100 pc, then there's no reason to set a ``buff_size`` (or a ``dpi`` :math:`*` ``figure_size``) above 50 kpc/ 100 pc = 500. The below script demonstrates how each of these can be varied. .. yt_cookbook:: image_resolution.py Multipanel with Axes Labels ~~~~~~~~~~~~~~~~~~~~~~~~~~~ This illustrates how to use a SlicePlot to control a multipanel plot. This plot uses axes labels to illustrate the length scales in the plot. See :ref:`slice-plots` and the `Matplotlib AxesGrid Object `_ for more information. .. yt_cookbook:: multiplot_2x2.py The above example gives you full control over the plots, but for most purposes, the ``export_to_mpl_figure`` method is a simpler option, allowing us to make a similar plot as: .. yt_cookbook:: multiplot_export_to_mpl.py Multipanel with PhasePlot ~~~~~~~~~~~~~~~~~~~~~~~~~~~ This illustrates how to use PhasePlot in a multipanel plot. See :ref:`how-to-make-2d-profiles` and the `Matplotlib AxesGrid Object `_ for more information. .. yt_cookbook:: multiplot_phaseplot.py Time Series Multipanel ~~~~~~~~~~~~~~~~~~~~~~ This illustrates how to create a multipanel plot of a time series dataset. 
See :ref:`projection-plots`, :ref:`time-series-analysis`, and the `Matplotlib AxesGrid Object `_ for more information. .. yt_cookbook:: multiplot_2x2_time_series.py Multiple Slice Multipanel ~~~~~~~~~~~~~~~~~~~~~~~~~ This illustrates how to create a multipanel plot of slices along the coordinate axes. To focus on what's happening in the x-y plane, we make an additional Temperature slice for the bottom-right subpanel. See :ref:`slice-plots` and the `Matplotlib AxesGrid Object `_ for more information. .. yt_cookbook:: multiplot_2x2_coordaxes_slice.py Multi-Plot Slice and Projections ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This shows how to combine multiple slices and projections into a single image, with detailed control over colorbars, titles, and color limits. See :ref:`slice-plots` and :ref:`projection-plots` for more information. .. yt_cookbook:: multi_plot_slice_and_proj.py .. _advanced-multi-panel: Advanced Multi-Plot Multi-Panel ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This produces a series of slices of multiple fields with different color maps and zlimits, and makes use of the FixedResolutionBuffer. While this is more complex than the equivalent plot collection-based solution, it allows for a *lot* more flexibility. Every part of the script uses matplotlib commands, allowing its full power to be exercised. See :ref:`slice-plots` and :ref:`projection-plots` for more information. .. yt_cookbook:: multi_plot_3x2_FRB.py Time Series Movie ~~~~~~~~~~~~~~~~~ This shows how to use matplotlib's animation framework with yt plots. .. yt_cookbook:: matplotlib-animation.py .. _cookbook-offaxis_projection: Off-Axis Projection (an alternate method) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This recipe demonstrates how to take an image-plane line integral along an arbitrary axis in a simulation. This uses alternate machinery, rather than the standard :ref:`PlotWindow interface `, to create an off-axis projection as demonstrated in this :ref:`recipe `. .. yt_cookbook:: offaxis_projection.py Off-Axis Projection with a Colorbar (an alternate method) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This recipe shows how to generate a colorbar with a projection of a dataset from an arbitrary projection angle (so you are not confined to the x, y, and z axes). This uses alternate machinery, rather than the standard :ref:`PlotWindow interface `, to create an off-axis projection as demonstrated in this :ref:`recipe `. .. yt_cookbook:: offaxis_projection_colorbar.py .. _thin-slice-projections: Thin-Slice Projections ~~~~~~~~~~~~~~~~~~~~~~ This recipe is an example of how to project through only a given data object, in this case a thin region, and then display the result. See :ref:`projection-plots` and :ref:`available-objects` for more information. .. yt_cookbook:: thin_slice_projection.py Plotting Particles Over Fluids ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This recipe demonstrates how to overplot particles on top of a fluid image. See :ref:`annotate-particles` for more information. .. yt_cookbook:: overplot_particles.py Plotting Grid Edges Over Fluids ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This recipe demonstrates how to overplot grid boxes on top of a fluid image. Each level is represented with a different color from white (low refinement) to black (high refinement). One can change the colormap used for the grid colors by using the cmap keyword (or set it to None to get all grid edges as black). See :ref:`annotate-grids` for more information. ..
yt_cookbook:: overplot_grids.py Overplotting Velocity Vectors ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This recipe demonstrates how to plot velocity vectors on top of a slice. See :ref:`annotate-velocity` for more information. .. yt_cookbook:: velocity_vectors_on_slice.py Overplotting Contours ~~~~~~~~~~~~~~~~~~~~~ This is a simple recipe to show how to open a dataset, plot a slice through it, and add contours of another quantity on top. See :ref:`annotate-contours` for more information. .. yt_cookbook:: contours_on_slice.py Simple Contours in a Slice ~~~~~~~~~~~~~~~~~~~~~~~~~~ Sometimes it is useful to plot just a few contours of a quantity in a dataset. This shows how one does this by first making a slice, adding contours, and then hiding the colormap plot of the slice to leave the plot containing only the contours that one has added. See :ref:`annotate-contours` for more information. .. yt_cookbook:: simple_contour_in_slice.py Styling Radial Profile Plots ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This recipe demonstrates a method of calculating radial profiles for several quantities, styling them and saving out the resultant plot. See :ref:`how-to-make-1d-profiles` for more information. .. yt_cookbook:: radial_profile_styles.py Customized Profile Plot ~~~~~~~~~~~~~~~~~~~~~~~ This recipe demonstrates how to create a fully customized 1D profile object using the :func:`~yt.data_objects.profiles.create_profile` function and then create a :class:`~yt.visualization.profile_plotter.ProfilePlot` using the customized profile. This illustrates how a ``ProfilePlot`` created this way inherits the properties of the profile it is constructed from. See :ref:`how-to-make-1d-profiles` for more information. .. yt_cookbook:: customized_profile_plot.py Customized Phase Plot ~~~~~~~~~~~~~~~~~~~~~ Similar to the recipe above, this demonstrates how to create a fully customized 2D profile object using the :func:`~yt.data_objects.profiles.create_profile` function and then create a :class:`~yt.visualization.profile_plotter.PhasePlot` using the customized profile object. This illustrates how a ``PhasePlot`` created this way inherits the properties of the profile object from which it is constructed. See :ref:`how-to-make-2d-profiles` for more information. .. yt_cookbook:: customized_phase_plot.py .. _cookbook-camera_movement: Moving a Volume Rendering Camera ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In this recipe, we move a camera through a domain and take multiple volume rendering snapshots. This recipe uses an unstructured mesh dataset (see :ref:`unstructured_mesh_rendering`), which makes it easier to visualize what the Camera is doing, but you can manipulate the Camera for other dataset types in exactly the same manner. See :ref:`camera_movement` for more information. .. yt_cookbook:: camera_movement.py Volume Rendering with Custom Camera ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In this recipe we modify the :ref:`cookbook-simple_volume_rendering` recipe to use customized camera properties. See :ref:`volume_rendering` for more information. .. yt_cookbook:: custom_camera_volume_rendering.py .. _cookbook-custom-transfer-function: Volume Rendering with a Custom Transfer Function ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In this recipe we modify the :ref:`cookbook-simple_volume_rendering` recipe to use a customized transfer function. See :ref:`volume_rendering` for more information. .. yt_cookbook:: custom_transfer_function_volume_rendering.py ..
_cookbook-sigma_clip: Volume Rendering with Sigma Clipping ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In this recipe we output several images with different values of sigma_clip set in order to change the contrast of the resulting image. See :ref:`sigma_clip` for more information. .. yt_cookbook:: sigma_clip.py Zooming into an Image ~~~~~~~~~~~~~~~~~~~~~ This is a recipe that takes a slice through the most dense point, then creates a bunch of frames as it zooms in. It's important to note that this particular recipe is provided to show how to be more flexible and add annotations and the like -- the base system of a zoom-in is provided by the "yt zoomin" command on the command line. See :ref:`slice-plots` and :ref:`callbacks` for more information. .. yt_cookbook:: zoomin_frames.py .. _cookbook-various_lens: Various Lens Types for Volume Rendering ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This example illustrates the usage and features of different lenses for volume rendering. .. yt_cookbook:: various_lens.py .. _cookbook-opaque_rendering: Opaque Volume Rendering ~~~~~~~~~~~~~~~~~~~~~~~ This recipe demonstrates how to make semi-opaque volume renderings, but also how to step through and try different things to identify the type of volume rendering you want. See :ref:`opaque_rendering` for more information. .. yt_cookbook:: opaque_rendering.py Volume Rendering Multiple Fields ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ You can render multiple fields by adding new ``VolumeSource`` objects to the scene for each field you want to render. .. yt_cookbook:: render_two_fields.py .. _cookbook-amrkdtree_downsampling: Downsampling Data for Volume Rendering ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This recipe demonstrates how to downsample data in a simulation to speed up volume rendering. See :ref:`volume_rendering` for more information. .. yt_cookbook:: amrkdtree_downsampling.py .. _cookbook-volume_rendering_annotations: Volume Rendering with Bounding Box and Overlaid Grids ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This recipe demonstrates how to overplot a bounding box on a volume rendering as well as overplotting grids representing the level of refinement achieved in different regions of the code. See :ref:`volume_rendering_annotations` for more information. .. yt_cookbook:: rendering_with_box_and_grids.py Volume Rendering with Annotation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This recipe demonstrates how to write the simulation time, show an axis triad indicating the direction of the coordinate system, and show the transfer function on a volume rendering. Please note that this recipe relies on the old volume rendering interface. While one can continue to use this interface, it may be incompatible with some of the new developments and the infrastructure described in :ref:`volume_rendering`. .. yt_cookbook:: vol-annotated.py .. _cookbook-render_two_fields_tf: Volume Rendering Multiple Fields And Annotation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This recipe shows how to display the transfer functions when rendering multiple fields in a volume render. .. yt_cookbook:: render_two_fields_tf.py .. _cookbook-vol-points: Volume Rendering with Points ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This recipe demonstrates how to make a volume rendering composited with point sources. This could represent star or dark matter particles, for example. .. yt_cookbook:: vol-points.py ..
_cookbook-vol-lines: Volume Rendering with Lines ~~~~~~~~~~~~~~~~~~~~~~~~~~~ This recipe demonstrates how to make a volume rendering composited with line sources. .. yt_cookbook:: vol-lines.py Plotting Streamlines ~~~~~~~~~~~~~~~~~~~~ This recipe demonstrates how to display streamlines in a simulation. (Note: streamlines can also be queried for values!) See :ref:`streamlines` for more information. .. yt_cookbook:: streamlines.py Plotting Isocontours ~~~~~~~~~~~~~~~~~~~~ This recipe demonstrates how to extract an isocontour and then plot it in matplotlib, coloring the surface by a second quantity. See :ref:`surfaces` for more information. .. yt_cookbook:: surface_plot.py Plotting Isocontours and Streamlines ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This recipe plots both isocontours and streamlines simultaneously. Note that this will not include any blending, so streamlines that are occluded by the surface will still be visible. See :ref:`streamlines` and :ref:`surfaces` for more information. .. yt_cookbook:: streamlines_isocontour.py ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/constructing_data_objects.rst0000644000175100001770000000232714714401662023422 0ustar00runnerdockerConstructing Data Objects ------------------------- These recipes demonstrate a few uncommon methods of constructing data objects from a simulation. Creating Particle Filters ~~~~~~~~~~~~~~~~~~~~~~~~~ Create particle filters based on the age of star particles in an isolated disk galaxy simulation. Determine the total mass of each stellar age bin in the simulation. Generate projections for each of the stellar age bins. .. yt_cookbook:: particle_filter.py .. _cookbook-find_clumps: Identifying Clumps ~~~~~~~~~~~~~~~~~~ This is a recipe to show how to find topologically connected sets of cells inside a dataset. It returns these clumps and they can be inspected or visualized as would any other data object. More detail on this method can be found in `Smith et al. 2009 `_. .. yt_cookbook:: find_clumps.py .. _extract_frb: Extracting Fixed Resolution Data ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This is a recipe to show how to open a dataset and extract it to a file at a fixed resolution with no interpolation or smoothing. Additionally, this recipe shows how to insert a dataset into an external HDF5 file using h5py. .. yt_cookbook:: extract_fixed_resolution_data.py ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/contours_on_slice.py0000644000175100001770000000067014714401662021544 0ustar00runnerdockerimport yt # first add density contours on a density slice ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150") # add density contours on the density slice. 
p = yt.SlicePlot(ds, "x", ("gas", "density")) p.annotate_contour(("gas", "density")) p.save() # then add temperature contours on the same density slice p = yt.SlicePlot(ds, "x", ("gas", "density")) p.annotate_contour(("gas", "temperature")) p.save(str(ds) + "_T_contour") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/count.sh0000644000175100001770000000017514714401662017127 0ustar00runnerdockerfor fn in *.py do COUNT=`cat *.rst | grep --count ${fn}` if [ $COUNT -lt 1 ] then echo ${fn} fi done ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/custom_camera_volume_rendering.py0000644000175100001770000000121614714401662024260 0ustar00runnerdockerimport yt # Load the dataset ds = yt.load("Enzo_64/DD0043/data0043") # Create a volume rendering sc = yt.create_scene(ds, field=("gas", "density")) # Now increase the resolution sc.camera.resolution = (1024, 1024) # Set the camera focus to a position that is offset from the center of # the domain sc.camera.focus = ds.arr([0.3, 0.3, 0.3], "unitary") # Move the camera position to the other side of the dataset sc.camera.position = ds.arr([0, 0, 0], "unitary") # save to disk with a custom filename and apply sigma clipping to eliminate # very bright pixels, producing an image with better contrast. sc.render() sc.save("custom.png", sigma_clip=4) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/custom_colorbar_tickmarks.ipynb0000644000175100001770000000541414714401662023754 0ustar00runnerdocker{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Custom Colorbar Tickmarks" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import yt" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ds = yt.load(\"IsolatedGalaxy/galaxy0030/galaxy0030\")\n", "slc = yt.SlicePlot(ds, \"x\", (\"gas\", \"density\"))\n", "slc" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`PlotWindow` plots are containers for plots, keyed to field names. Below, we get a copy of the plot for the `Density` field." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "plot = slc.plots[\"gas\", \"density\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The plot has a few attributes that point to underlying `matplotlib` plot primitives. For example, the `colorbar` object corresponds to the `cb` attribute of the plot." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "colorbar = plot.cb" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we call `_setup_plots()` to ensure the plot is properly initialized. Without this, the custom tickmarks we are adding will be ignored." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "slc._setup_plots()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To set custom tickmarks, simply call the `matplotlib` [`set_ticks`](https://matplotlib.org/stable/api/colorbar_api.html#matplotlib.colorbar.ColorbarBase.set_ticks) and [`set_ticklabels`](https://matplotlib.org/stable/api/colorbar_api.html#matplotlib.colorbar.ColorbarBase.set_ticklabels) functions." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "colorbar.set_ticks([1e-28])\n", "colorbar.set_ticklabels([\"$10^{-28}$\"])\n", "slc" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.1" } }, "nbformat": 4, "nbformat_minor": 0 } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/custom_transfer_function_volume_rendering.py0000644000175100001770000000123614714401662026563 0ustar00runnerdockerimport numpy as np import yt # Load the dataset ds = yt.load("Enzo_64/DD0043/data0043") # Create a volume rendering sc = yt.create_scene(ds, field=("gas", "density")) # Modify the transfer function # First get the render source, in this case the entire domain, # with field ('gas','density') render_source = sc.get_source() # Clear the transfer function render_source.transfer_function.clear() # Map a range of density values (in log space) to the RdBu_r colormap render_source.transfer_function.map_to_colormap( np.log10(ds.quan(5.0e-31, "g/cm**3")), np.log10(ds.quan(1.0e-29, "g/cm**3")), scale=30.0, colormap="RdBu_r", ) sc.save("new_tf.png") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/customized_phase_plot.py0000644000175100001770000000133414714401662022417 0ustar00runnerdockerimport yt import yt.units as u ds = yt.load("HiresIsolatedGalaxy/DD0044/DD0044") center = [0.53, 0.53, 0.53] normal = [0, 0, 1] radius = 40 * u.kpc height = 2 * u.kpc disk = ds.disk(center, [0, 0, 1], radius, height) profile = yt.create_profile( data_source=disk, bin_fields=[("index", "radius"), ("gas", "velocity_cylindrical_theta")], fields=[("gas", "mass")], n_bins=256, units=dict(radius="kpc", velocity_cylindrical_theta="km/s", mass="Msun"), logs=dict(radius=False, velocity_cylindrical_theta=False), weight_field=None, extrema=dict(radius=(0, 40), velocity_cylindrical_theta=(-250, 250)), ) plot = yt.PhasePlot.from_profile(profile) plot.set_cmap(("gas", "mass"), "YlOrRd") plot.save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/customized_profile_plot.py0000644000175100001770000000132314714401662022755 0ustar00runnerdockerimport yt import yt.units as u ds = yt.load("HiresIsolatedGalaxy/DD0044/DD0044") center = [0.53, 0.53, 0.53] normal = [0, 0, 1] radius = 40 * u.kpc height = 5 * u.kpc disk = ds.disk(center, [0, 0, 1], radius, height) profile = yt.create_profile( data_source=disk, bin_fields=[("index", "radius")], fields=[("gas", "velocity_cylindrical_theta")], n_bins=256, units=dict(radius="kpc", velocity_cylindrical_theta="km/s"), logs=dict(radius=False), weight_field=("gas", "mass"), extrema=dict(radius=(0, 40)), ) plot = yt.ProfilePlot.from_profiles(profile) plot.set_log(("gas", "velocity_cylindrical_theta"), False) plot.set_ylim(("gas", "velocity_cylindrical_theta"), 60, 160) plot.save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/derived_field.py0000644000175100001770000000164714714401662020603 0ustar00runnerdockerimport yt # Load the dataset.
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # You can create a derived field by manipulating any existing derived fields # in any way you choose. In this case, let's just make a simple one: # thermal_energy_density = 3/2 nkT # First create a function which yields your new derived field def thermal_energy_dens(field, data): return (3 / 2) * data["gas", "number_density"] * data["gas", "kT"] # Then add it to your dataset and define the units ds.add_field( ("gas", "thermal_energy_density"), units="erg/cm**3", function=thermal_energy_dens, sampling_type="cell", ) # It will now show up in your derived_field_list for i in sorted(ds.derived_field_list): print(i) # Let's use it to make a projection ad = ds.all_data() yt.ProjectionPlot( ds, "x", ("gas", "thermal_energy_density"), weight_field=("gas", "density"), width=(200, "kpc"), ).save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/downsampling_amr.py0000644000175100001770000000146514714401662021361 0ustar00runnerdockerimport yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # The maximum refinement level of this dataset is 8 print(ds.max_level) # If we ask for *all* of the AMR data, we get back field # values sampled at about 3.6 million AMR zones ad = ds.all_data() print(ad["gas", "density"].shape) # Let's only sample data up to AMR level 2 ad.max_level = 2 # Now we only sample from about 200,000 zones print(ad["gas", "density"].shape) # Note that this includes data at level 2 that would # normally be masked out. There aren't any "holes" in # the downsampled AMR mesh, the volume still sums to # the volume of the domain: print(ad["gas", "volume"].sum()) print(ds.domain_width.prod()) # Now let's make a downsampled plot plot = yt.SlicePlot(ds, "z", ("gas", "density"), data_source=ad) plot.save("downsampled.png") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/extract_fixed_resolution_data.py0000644000175100001770000000205414714401662024120 0ustar00runnerdocker# For this example we will use h5py to write to our output file. import h5py import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") level = 2 dims = ds.domain_dimensions * ds.refine_by**level # We construct an object that describes the data region and structure we want # In this case, we want all data up to the maximum "level" of refinement # across the entire simulation volume. Higher levels than this will not # contribute to our covering grid. cube = ds.covering_grid( level, left_edge=[0.0, 0.0, 0.0], dims=dims, # And any fields to preload (this is optional!) fields=[("gas", "density")], ) # Now we open our output file using h5py # Note that we open with 'w' (write), which will overwrite existing files! f = h5py.File("my_data.h5", mode="w") # We create a dataset at the root, calling it "density" f.create_dataset("/density", data=cube["gas", "density"]) # We close our file f.close() # If we want to then access this datacube in the h5 file, we can now... 
f = h5py.File("my_data.h5", mode="r") print(f["density"][()]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/find_clumps.py0000644000175100001770000000465614714401662020320 0ustar00runnerdockerimport numpy as np import yt from yt.data_objects.level_sets.api import Clump, find_clumps ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") data_source = ds.disk([0.5, 0.5, 0.5], [0.0, 0.0, 1.0], (8, "kpc"), (1, "kpc")) # the field to be used for contouring field = ("gas", "density") # This is the multiplicative interval between contours. step = 2.0 # Now we set some sane min/max values between which we want to find contours. # This is how we tell the clump finder what to look for -- it won't look for # contours connected below or above these threshold values. c_min = 10 ** np.floor(np.log10(data_source[field]).min()) c_max = 10 ** np.floor(np.log10(data_source[field]).max() + 1) # Now get our 'base' clump -- this one just covers the whole domain. master_clump = Clump(data_source, field) # Add a "validator" to weed out clumps with less than 20 cells. # As many validators can be added as you want. master_clump.add_validator("min_cells", 20) # Calculate center of mass for all clumps. master_clump.add_info_item("center_of_mass") # Begin clump finding. find_clumps(master_clump, c_min, c_max, step) # Save the clump tree as a reloadable dataset fn = master_clump.save_as_dataset(fields=[("gas", "density"), ("all", "particle_mass")]) # We can traverse the clump hierarchy to get a list of all of the 'leaf' clumps leaf_clumps = master_clump.leaves # Get total cell and particle masses for each leaf clump leaf_masses = [leaf.quantities.total_mass() for leaf in leaf_clumps] # If you'd like to visualize these clumps, a list of clumps can be supplied to # the "clumps" callback on a plot. First, we create a projection plot: prj = yt.ProjectionPlot(ds, 2, field, center="c", width=(20, "kpc")) # Next we annotate the plot with contours on the borders of the clumps prj.annotate_clumps(leaf_clumps) # Save the plot to disk. prj.save("clumps") # Reload the clump dataset. cds = yt.load(fn) # Clump annotation can also be done with the reloaded clump dataset. # Remove the original clump annotation prj.clear_annotations() # Get the leaves and add the callback. leaf_clumps_reloaded = cds.leaves prj.annotate_clumps(leaf_clumps_reloaded) prj.save("clumps_reloaded") # Query fields for clumps in the tree. print(cds.tree["clump", "center_of_mass"]) print(cds.tree.children[0]["grid", "density"]) print(cds.tree.children[1]["all", "particle_mass"]) # Get all of the leaf clumps. print(cds.leaves) print(cds.leaves[0]["clump", "cell_mass"]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/fits_radio_cubes.ipynb0000644000175100001770000003057614714401662022012 0ustar00runnerdocker{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Analyzing FITS Radio Cubes" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%matplotlib inline\n", "import yt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This notebook demonstrates some of the capabilities of yt on some FITS \"position-position-spectrum\" cubes of radio data.\n", "\n", "Note that it depends on some external dependencies, including `astropy` and `regions`."
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## M33 VLA Image" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The dataset `\"m33_hi.fits\"` has `NaN`s in it, so we'll mask them out by setting `nan_mask` = 0:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ds = yt.load(\"radio_fits/m33_hi.fits\", nan_mask=0.0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, we'll take a slice of the data along the z-axis, which is the velocity axis of the FITS cube:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "slc = yt.SlicePlot(ds, \"z\", (\"fits\", \"intensity\"), origin=\"native\")\n", "slc.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The x and y axes are in units of the image pixel. When making plots of FITS data, to see the image coordinates as they are in the file, it is helpful to set the keyword `origin = \"native\"`. If you want to see the celestial coordinates along the axes, you can import the `PlotWindowWCS` class and feed it the `SlicePlot`. For this to work, a version of AstroPy >= 1.3 needs to be installed." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from yt.frontends.fits.misc import PlotWindowWCS\n", "\n", "PlotWindowWCS(slc)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Generally, it is best to get the plot in the shape you want it before feeding it to `PlotWindowWCS`. Once it looks the way you want, you can save it just like a normal `PlotWindow` plot:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "slc.save()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also take slices of this dataset at a few different values along the \"z\" axis (corresponding to the velocity), so let's try a few. 
To pick specific velocity values for slices, we will need to use the dataset's `spec2pixel` method to determine which pixels to slice on:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import yt.units as u\n", "\n", "new_center = ds.domain_center\n", "new_center[2] = ds.spec2pixel(-250000.0 * u.m / u.s)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can use this new center to create a new slice:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "slc = yt.SlicePlot(ds, \"z\", (\"fits\", \"intensity\"), center=new_center, origin=\"native\")\n", "slc.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can do this a few more times for different values of the velocity:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "new_center[2] = ds.spec2pixel(-100000.0 * u.m / u.s)\n", "slc = yt.SlicePlot(ds, \"z\", (\"fits\", \"intensity\"), center=new_center, origin=\"native\")\n", "slc.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "new_center[2] = ds.spec2pixel(-150000.0 * u.m / u.s)\n", "slc = yt.SlicePlot(ds, \"z\", (\"fits\", \"intensity\"), center=new_center, origin=\"native\")\n", "slc.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "These slices demonstrate the intensity of the radio emission at different line-of-sight velocities. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also make a projection of all the emission along the line of sight:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "prj = yt.ProjectionPlot(ds, \"z\", (\"fits\", \"intensity\"), origin=\"native\")\n", "prj.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also look at the slices perpendicular to the other axes, which will show us the structure along the velocity axis:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "slc = yt.SlicePlot(ds, \"x\", (\"fits\", \"intensity\"), origin=\"native\", window_size=(8, 8))\n", "slc.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "slc = yt.SlicePlot(ds, \"y\", (\"fits\", \"intensity\"), origin=\"native\", window_size=(8, 8))\n", "slc.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In these cases, we needed to explicitly declare a square `window_size` to get a figure that looks good. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## $^{13}$CO GRS Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This next example uses one of the cubes from the [Boston University Galactic Ring Survey](http://www.bu.edu/galacticring/new_index.htm). " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ds = yt.load(\"radio_fits/grs-50-cube.fits\", nan_mask=0.0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can use the `quantities` methods to determine derived quantities of the dataset. 
For example, we could find the maximum and minimum temperature:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "dd = ds.all_data() # A region containing the entire dataset\n", "extrema = dd.quantities.extrema((\"fits\", \"temperature\"))\n", "print(extrema)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can compute the average temperature along the \"velocity\" axis for all positions by making a `ProjectionPlot`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "prj = yt.ProjectionPlot(\n", " ds, \"z\", (\"fits\", \"temperature\"), origin=\"native\", weight_field=(\"index\", \"ones\")\n", ") # \"ones\" weights each cell by 1\n", "prj.set_zlim((\"fits\", \"temperature\"), zmin=(1e-3, \"K\"))\n", "prj.set_log((\"fits\", \"temperature\"), True)\n", "prj.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also make a histogram of the temperature field of this region:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "pplot = yt.ProfilePlot(\n", " dd, (\"fits\", \"temperature\"), [(\"index\", \"ones\")], weight_field=None, n_bins=128\n", ")\n", "pplot.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see from this histogram and our calculation of the dataset's extrema that there is a lot of noise. Suppose we wanted to make a projection, but instead make it only of the cells which had a positive temperature value. We can do this by doing a \"field cut\" on the data:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fc = dd.cut_region(['obj[\"fits\", \"temperature\"] > 0'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's check the extents of this region:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(fc.quantities.extrema((\"fits\", \"temperature\")))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Looks like we were successful in filtering out the negative temperatures. 
To compute the average temperature of this new region:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fc.quantities.weighted_average_quantity((\"fits\", \"temperature\"), (\"index\", \"ones\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, let's make a projection of the dataset, using the field cut `fc` as a `data_source`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "prj = yt.ProjectionPlot(\n", " ds,\n", " \"z\",\n", " [(\"fits\", \"temperature\")],\n", " data_source=fc,\n", " origin=\"native\",\n", " weight_field=(\"index\", \"ones\"),\n", ") # \"ones\" weights each cell by 1\n", "prj.set_log((\"fits\", \"temperature\"), True)\n", "prj.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we can also take an existing [ds9](http://ds9.si.edu/site/Home.html) region and use it to create a \"cut region\" as well, using `ds9_region` (the [regions](https://astropy-regions.readthedocs.io/) package needs to be installed for this):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from yt.frontends.fits.misc import ds9_region" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For this example we'll create a ds9 region from scratch and load it up:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "region = 'galactic;box(+49:26:35.150,-0:30:04.410,1926.1927\",1483.3701\",0.0)'\n", "box_reg = ds9_region(ds, region)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This region may now be used to compute derived quantities:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(box_reg.quantities.extrema((\"fits\", \"temperature\")))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Or in projections:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "prj = yt.ProjectionPlot(\n", " ds,\n", " \"z\",\n", " (\"fits\", \"temperature\"),\n", " origin=\"native\",\n", " data_source=box_reg,\n", " weight_field=(\"index\", \"ones\"),\n", ") # \"ones\" weights each cell by 1\n", "prj.set_zlim((\"fits\", \"temperature\"), 1.0e-2, 1.5)\n", "prj.set_log((\"fits\", \"temperature\"), True)\n", "prj.show()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.1" } }, "nbformat": 4, "nbformat_minor": 0 } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/fits_xray_images.ipynb0000644000175100001770000003312414714401662022043 0ustar00runnerdocker{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# X-ray FITS Images" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%matplotlib inline\n", "import numpy as np\n", "\n", "import yt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This notebook shows how to use yt to make plots and examine FITS X-ray images and events files. 
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Sloshing, Shocks, and Bubbles in Abell 2052" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This example uses data provided by [Scott Randall](http://hea-www.cfa.harvard.edu/~srandall/), presented originally in [Blanton, E.L., Randall, S.W., Clarke, T.E., et al. 2011, ApJ, 737, 99](https://ui.adsabs.harvard.edu/abs/2011ApJ...737...99B). They consist of two files, a \"flux map\" in counts/s/pixel between 0.3 and 2 keV, and a spectroscopic temperature map in keV. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ds = yt.load(\n", " \"xray_fits/A2052_merged_0.3-2_match-core_tmap_bgecorr.fits\",\n", " auxiliary_files=[\"xray_fits/A2052_core_tmap_b1_m2000_.fits\"],\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since the flux and projected temperature images are in two different files, we had to use one of them (in this case the \"flux\" file) as a master file, and pass in the \"temperature\" file with the `auxiliary_files` keyword to `load`. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, let's derive some new fields for the number of counts, the \"pseudo-pressure\", and the \"pseudo-entropy\":" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def _counts(field, data):\n", " exposure_time = data.get_field_parameter(\"exposure_time\")\n", " return data[\"fits\", \"flux\"] * data[\"fits\", \"pixel\"] * exposure_time\n", "\n", "\n", "ds.add_field(\n", " (\"gas\", \"counts\"),\n", " function=_counts,\n", " sampling_type=\"cell\",\n", " units=\"counts\",\n", " take_log=False,\n", ")\n", "\n", "\n", "def _pp(field, data):\n", " return np.sqrt(data[\"gas\", \"counts\"]) * data[\"fits\", \"projected_temperature\"]\n", "\n", "\n", "ds.add_field(\n", " (\"gas\", \"pseudo_pressure\"),\n", " function=_pp,\n", " sampling_type=\"cell\",\n", " units=\"sqrt(counts)*keV\",\n", " take_log=False,\n", ")\n", "\n", "\n", "def _pe(field, data):\n", " return data[\"fits\", \"projected_temperature\"] * data[\"gas\", \"counts\"] ** (-1.0 / 3.0)\n", "\n", "\n", "ds.add_field(\n", " (\"gas\", \"pseudo_entropy\"),\n", " function=_pe,\n", " sampling_type=\"cell\",\n", " units=\"keV*(counts)**(-1/3)\",\n", " take_log=False,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, we're deriving a \"counts\" field from the \"flux\" field by passing it a `field_parameter` for the exposure time of the time and multiplying by the pixel scale. Second, we use the fact that the surface brightness is strongly dependent on density ($S_X \\propto \\rho^2$) to use the counts in each pixel as a \"stand-in\". Next, we'll grab the exposure time from the primary FITS header of the flux file and create a `YTQuantity` from it, to be used as a `field_parameter`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "exposure_time = ds.quan(ds.primary_header[\"exposure\"], \"s\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we can make the `SlicePlot` object of the fields we want, passing in the `exposure_time` as a `field_parameter`. We'll also set the width of the image to 250 pixels." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "slc = yt.SlicePlot(\n", " ds,\n", " \"z\",\n", " [\n", " (\"fits\", \"flux\"),\n", " (\"fits\", \"projected_temperature\"),\n", " (\"gas\", \"pseudo_pressure\"),\n", " (\"gas\", \"pseudo_entropy\"),\n", " ],\n", " origin=\"native\",\n", " field_parameters={\"exposure_time\": exposure_time},\n", ")\n", "slc.set_log((\"fits\", \"flux\"), True)\n", "slc.set_zlim((\"fits\", \"flux\"), 1e-5)\n", "slc.set_log((\"gas\", \"pseudo_pressure\"), False)\n", "slc.set_log((\"gas\", \"pseudo_entropy\"), False)\n", "slc.set_width(250.0)\n", "slc.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To add the celestial coordinates to the image, we can use `PlotWindowWCS`, if you have a recent version of AstroPy (>= 1.3) installed:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from yt.frontends.fits.misc import PlotWindowWCS\n", "\n", "wcs_slc = PlotWindowWCS(slc)\n", "wcs_slc.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can make use of yt's facilities for profile plotting as well." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "v, c = ds.find_max((\"fits\", \"flux\")) # Find the maximum flux and its center\n", "my_sphere = ds.sphere(c, (100.0, \"code_length\")) # Radius of 150 pixels\n", "my_sphere.set_field_parameter(\"exposure_time\", exposure_time)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Such as a radial profile plot:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "radial_profile = yt.ProfilePlot(\n", " my_sphere,\n", " \"radius\",\n", " [\"counts\", \"pseudo_pressure\", \"pseudo_entropy\"],\n", " n_bins=30,\n", " weight_field=\"ones\",\n", ")\n", "radial_profile.set_log(\"counts\", True)\n", "radial_profile.set_log(\"pseudo_pressure\", True)\n", "radial_profile.set_log(\"pseudo_entropy\", True)\n", "radial_profile.set_xlim(3, 100.0)\n", "radial_profile.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Or a phase plot:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "phase_plot = yt.PhasePlot(\n", " my_sphere, \"pseudo_pressure\", \"pseudo_entropy\", [\"counts\"], weight_field=None\n", ")\n", "phase_plot.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we can also take an existing [ds9](http://ds9.si.edu/site/Home.html) region and use it to create a \"cut region\", using `ds9_region` (the [regions](https://astropy-regions.readthedocs.io/) package needs to be installed for this):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from yt.frontends.fits.misc import ds9_region\n", "\n", "reg_file = [\n", " \"# Region file format: DS9 version 4.1\\n\",\n", " \"global color=green dashlist=8 3 width=3 include=1 source=1\\n\",\n", " \"FK5\\n\",\n", " 'circle(15:16:44.817,+7:01:19.62,34.6256\")',\n", "]\n", "f = open(\"circle.reg\", \"w\")\n", "f.writelines(reg_file)\n", "f.close()\n", "circle_reg = ds9_region(\n", " ds, \"circle.reg\", field_parameters={\"exposure_time\": exposure_time}\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This region may now be used to compute derived quantities:" ] }, { "cell_type": "code", 
"execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(\n", " circle_reg.quantities.weighted_average_quantity(\"projected_temperature\", \"counts\")\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Or used in projections:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "prj = yt.ProjectionPlot(\n", " ds,\n", " \"z\",\n", " [\n", " (\"fits\", \"flux\"),\n", " (\"fits\", \"projected_temperature\"),\n", " (\"gas\", \"pseudo_pressure\"),\n", " (\"gas\", \"pseudo_entropy\"),\n", " ],\n", " origin=\"native\",\n", " field_parameters={\"exposure_time\": exposure_time},\n", " data_source=circle_reg,\n", " method=\"sum\",\n", ")\n", "prj.set_log((\"fits\", \"flux\"), True)\n", "prj.set_zlim((\"fits\", \"flux\"), 1e-5)\n", "prj.set_log((\"gas\", \"pseudo_pressure\"), False)\n", "prj.set_log((\"gas\", \"pseudo_entropy\"), False)\n", "prj.set_width(250.0)\n", "prj.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Bullet Cluster" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This example uses an events table file from a ~100 ks exposure of the \"Bullet Cluster\" from the [Chandra Data Archive](http://cxc.harvard.edu/cda/). In this case, the individual photon events are treated as particle fields in yt. However, you can make images of the object in different energy bands using the `setup_counts_fields` function. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from yt.frontends.fits.api import setup_counts_fields" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`load` will handle the events file as FITS image files, and will set up a grid using the WCS information in the file. Optionally, the events may be reblocked to a new resolution. by setting the `\"reblock\"` parameter in the `parameters` dictionary in `load`. `\"reblock\"` must be a power of 2. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ds2 = yt.load(\"xray_fits/acisf05356N003_evt2.fits.gz\", parameters={\"reblock\": 2})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`setup_counts_fields` will take a list of energy bounds (emin, emax) in keV and create a new field from each where the photons in that energy range will be deposited onto the image grid. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ebounds = [(0.1, 2.0), (2.0, 5.0)]\n", "setup_counts_fields(ds2, ebounds)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The \"x\", \"y\", \"energy\", and \"time\" fields in the events table are loaded as particle fields. 
Each one has a name given by \"event\\_\" plus the name of the field:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "dd = ds2.all_data()\n", "print(dd[\"io\", \"event_x\"])\n", "print(dd[\"io\", \"event_y\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we'll make a plot of the two counts fields we made, and pan and zoom to the bullet:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "slc = yt.SlicePlot(\n", " ds2, \"z\", [(\"gas\", \"counts_0.1-2.0\"), (\"gas\", \"counts_2.0-5.0\")], origin=\"native\"\n", ")\n", "slc.pan((100.0, 100.0))\n", "slc.set_width(500.0)\n", "slc.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The counts fields can take the field parameter `\"sigma\"` and use [AstroPy's convolution routines](https://astropy.readthedocs.io/en/latest/convolution/) to smooth the data with a Gaussian:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "slc = yt.SlicePlot(\n", " ds2,\n", " \"z\",\n", " [(\"gas\", \"counts_0.1-2.0\"), (\"gas\", \"counts_2.0-5.0\")],\n", " origin=\"native\",\n", " field_parameters={\"sigma\": 2.0},\n", ") # This value is in pixel scale\n", "slc.pan((100.0, 100.0))\n", "slc.set_width(500.0)\n", "slc.set_zlim((\"gas\", \"counts_0.1-2.0\"), 0.01, 100.0)\n", "slc.set_zlim((\"gas\", \"counts_2.0-5.0\"), 0.01, 50.0)\n", "slc.show()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.1" } }, "nbformat": 4, "nbformat_minor": 0 } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/geographic_xforms_and_projections.ipynb0000644000175100001770000003534614714401662025465 0ustar00runnerdocker{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Geographic Transforms and Projections" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Loading the GEOS data " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For this analysis we'll be loading some global climate data into yt. A frontend does not exist for this dataset yet, so we'll load it in as a uniform grid with netcdf4." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "import re\n", "\n", "import netCDF4 as nc4\n", "import numpy as np\n", "\n", "import yt" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_data_path(arg):\n", " if os.path.exists(arg):\n", " return arg\n", " else:\n", " return os.path.join(yt.config.ytcfg.get(\"yt\", \"test_data_dir\"), arg)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "n = nc4.Dataset(get_data_path(\"geos/GEOS.fp.asm.inst3_3d_aer_Nv.20180822_0900.V01.nc4\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using the loaded data we'll fill arrays with the data dimensions and limits. We'll also rename `vertical level` to `altitude` to be clearer. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dims = []\n", "sizes = []\n", "bbox = []\n", "ndims = len(n.dimensions)\n", "for dim in n.dimensions.keys():\n", " size = n.variables[dim].size\n", " if size > 1:\n", " bbox.append([n.variables[dim][:].min(), n.variables[dim][:].max()])\n", " dims.append(n.variables[dim].long_name)\n", " sizes.append(size)\n", "dims.reverse() # Fortran ordering\n", "sizes.reverse()\n", "bbox.reverse()\n", "dims = [f.replace(\"vertical level\", \"altitude\") for f in dims]\n", "bbox = np.array(bbox)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll also load the data into a container dictionary and create a lookup for the short to the long names " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "w_regex = re.compile(r\"([a-zA-Z]+)(.*)\")\n", "\n", "\n", "def regex_parser(s):\n", " try:\n", " return \"**\".join(filter(None, w_regex.search(s).groups()))\n", " except AttributeError:\n", " return s" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data = {}\n", "names = {}\n", "for field, d in n.variables.items():\n", " if d.ndim != ndims:\n", " continue\n", " units = n.variables[field].units\n", " units = \" * \".join(map(regex_parser, units.split()))\n", " data[field] = (np.squeeze(d), str(units))\n", " names[field] = n.variables[field].long_name.replace(\"_\", \" \")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now the data can be loaded with yt's `load_uniform_grid` function. We also need to say that the geometry is a `geographic` type. This will ensure that the axes created are matplotlib GeoAxes and that the transform functions are available to use for projections. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ds = yt.load_uniform_grid(\n", " data, sizes, 1.0, geometry=\"geographic\", bbox=bbox, axis_order=dims\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Default projection with geographic geometry" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that the data is loaded, we can plot it with a yt SlicePlot along the altitude. This will create a figure with latitude and longitude as the plot axes and the colormap will correspond to the air density. Because no projection type has been set, the geographic geometry type assumes that the data is of the `PlateCarree` form. The resulting figure will be a `Mollweide` plot. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p = yt.SlicePlot(ds, \"altitude\", \"AIRDENS\")\n", "p.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that this doesn't have a lot of contextual information. We can add annotations for the coastlines just as we would with matplotlib. Before the annotations are set, we need to call `p._setup_plots` to make the axes available for annotation. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p = yt.SlicePlot(ds, \"altitude\", \"AIRDENS\")\n", "p._setup_plots()\n", "p.plots[\"AIRDENS\"].axes.set_global()\n", "p.plots[\"AIRDENS\"].axes.coastlines()\n", "p.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Using geographic transforms to project data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If a projection other than the default `Mollweide` is desired, then we can pass an argument to the `set_mpl_projection()` function to set a different projection than the default. This will set the projection to a Robinson projection. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p = yt.SlicePlot(ds, \"altitude\", \"AIRDENS\")\n", "p.set_mpl_projection(\"Robinson\")\n", "p._setup_plots()\n", "p.plots[\"AIRDENS\"].axes.set_global()\n", "p.plots[\"AIRDENS\"].axes.coastlines()\n", "p.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`geo_projection` accepts a string or a 2- to 3- length sequence describing the projection the second item in the sequence are the args and the third item is the kwargs. This can be used for further customization of the projection. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p = yt.SlicePlot(ds, \"altitude\", \"AIRDENS\")\n", "p.set_mpl_projection((\"Robinson\", (37.5,)))\n", "p._setup_plots()\n", "p.plots[\"AIRDENS\"].axes.set_global()\n", "p.plots[\"AIRDENS\"].axes.coastlines()\n", "p.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We don't actually need to keep creating a SlicePlot to change the projection type. We can use the function `set_mpl_projection()` and pass in a string of the transform type that we desire after an existing `SlicePlot` instance has been created. This will set the figure to an `Orthographic` projection. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p.set_mpl_projection(\"Orthographic\")\n", "p._setup_plots()\n", "p.plots[\"AIRDENS\"].axes.set_global()\n", "p.plots[\"AIRDENS\"].axes.coastlines()\n", "p.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`set_mpl_projection()` can be used in a number of ways to customize the projection type. \n", "* If a **string** is passed, then the string must correspond to the transform name, which is exclusively cartopy transforms at this time. This looks like: `set_mpl_projection('ProjectionType')`\n", "\n", "* If a **tuple** is passed, the first item of the tuple is a string of the transform name and the second two items are args and kwargs. These can be used to further customize the transform (by setting the latitude and longitude, for example. This looks like: \n", " * `set_mpl_projection(('ProjectionType', (args)))`\n", " * `set_mpl_projection(('ProjectionType', (args), {kwargs}))`\n", "* A **transform object** can also be passed. This can be any transform type -- a cartopy transform or a matplotlib transform. This allows users to either pass the same transform object around between plots or define their own transform and use that in yt's plotting functions. 
With a standard cartopy transform, this would look like:\n", " * `set_mpl_projection(cartopy.crs.PlateCarree())`\n", " \n", "To summarize:\n", "The function `set_mpl_projection` can take one of several input types:\n", "* `set_mpl_projection('ProjectionType')`\n", "* `set_mpl_projection(('ProjectionType', (args)))`\n", "* `set_mpl_projection(('ProjectionType', (args), {kwargs}))`\n", "* `set_mpl_projection(cartopy.crs.MyTransform())`\n", "\n", "For example, we can make the same Orthographic projection and pass in the central latitude and longitude for the projection: " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p.set_mpl_projection((\"Orthographic\", (90, 45)))\n", "p._setup_plots()\n", "p.plots[\"AIRDENS\"].axes.set_global()\n", "p.plots[\"AIRDENS\"].axes.coastlines()\n", "p.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Or we can pass in the arguments to this function as kwargs by passing a three element tuple. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p.set_mpl_projection(\n", " (\"Orthographic\", (), {\"central_latitude\": -45, \"central_longitude\": 275})\n", ")\n", "p._setup_plots()\n", "p.plots[\"AIRDENS\"].axes.set_global()\n", "p.plots[\"AIRDENS\"].axes.coastlines()\n", "p.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### A few examples of different projections" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This next section will show a few of the different projections that one can use. This isn't meant to be complete, but it'll give you a visual idea of how these transforms can be used to illustrate geographic data for different purposes. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p.set_mpl_projection((\"RotatedPole\", (177.5, 37.5)))\n", "p._setup_plots()\n", "p.plots[\"AIRDENS\"].axes.set_global()\n", "p.plots[\"AIRDENS\"].axes.coastlines()\n", "p.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p.set_mpl_projection(\n", " (\"RotatedPole\", (), {\"pole_latitude\": 37.5, \"pole_longitude\": 177.5})\n", ")\n", "p._setup_plots()\n", "p.plots[\"AIRDENS\"].axes.set_global()\n", "p.plots[\"AIRDENS\"].axes.coastlines()\n", "p.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p.set_mpl_projection(\"NorthPolarStereo\")\n", "p._setup_plots()\n", "p.plots[\"AIRDENS\"].axes.set_global()\n", "p.plots[\"AIRDENS\"].axes.coastlines()\n", "p.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p.set_mpl_projection(\"AlbersEqualArea\")\n", "p._setup_plots()\n", "p.plots[\"AIRDENS\"].axes.set_global()\n", "p.plots[\"AIRDENS\"].axes.coastlines()\n", "p.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p.set_mpl_projection(\"InterruptedGoodeHomolosine\")\n", "p._setup_plots()\n", "p.plots[\"AIRDENS\"].axes.set_global()\n", "p.plots[\"AIRDENS\"].axes.coastlines()\n", "p.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p.set_mpl_projection(\"Robinson\")\n", "p._setup_plots()\n", "p.plots[\"AIRDENS\"].axes.set_global()\n", "p.plots[\"AIRDENS\"].axes.coastlines()\n", "p.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p.set_mpl_projection(\"Gnomonic\")\n", 
"p._setup_plots()\n", "p.plots[\"AIRDENS\"].axes.set_global()\n", "p.plots[\"AIRDENS\"].axes.coastlines()\n", "p.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Modifying the data transform" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While the data projection modifies how the data is displayed in our plot, the data transform describes the coordinate system that the data is actually described by. By default, the data is assumed to have a `PlateCarree` data transform. If you would like to change this, you can access the dictionary in the coordinate handler and set it to something else. The dictionary is structured such that each axis has its own default transform, so be sure to set the axis you intend to change. This next example changes the transform to a Miller type. Because our data is not in Miller coordinates, it will be skewed. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ds.coordinates.data_transform[\"altitude\"] = \"Miller\"\n", "p = yt.SlicePlot(ds, \"altitude\", \"AIRDENS\")\n", "p.plots[\"AIRDENS\"].axes.set_global()\n", "p.plots[\"AIRDENS\"].axes.coastlines()\n", "p.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Because the transform type shouldn't change as we make subsequent figures, once it is changed it will be the same for all other figures made with the same dataset object. Note that this particular dataset is not actually in a Miller system, which is why the data now doesn't span the entire globe. Setting the new projection to Robinson results in Miller-skewed data in our next figure. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p.set_mpl_projection(\"Robinson\")\n", "p._setup_plots()\n", "p.plots[\"AIRDENS\"].axes.set_global()\n", "p.plots[\"AIRDENS\"].axes.coastlines()\n", "p.show()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.10" } }, "nbformat": 4, "nbformat_minor": 4 } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/global_phase_plots.py0000644000175100001770000000062214714401662021653 0ustar00runnerdockerimport yt # load the dataset ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # This is an object that describes the entire box ad = ds.all_data() # We plot the average velocity magnitude (mass-weighted) in our object # as a function of density and temperature plot = yt.PhasePlot( ad, ("gas", "density"), ("gas", "temperature"), ("gas", "velocity_magnitude") ) # save the plot plot.save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/hse_field.py0000644000175100001770000000301414714401662017732 0ustar00runnerdockerimport numpy as np import yt # Open a dataset from when there's a lot of sloshing going on. 
ds = yt.load("GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0350") # Define the components of the gravitational acceleration vector field by # taking the gradient of the gravitational potential grad_fields = ds.add_gradient_fields(("gas", "gravitational_potential")) # We don't need to do the same for the pressure field because yt already # has pressure gradient fields. Now, define the "degree of hydrostatic # equilibrium" field. def _hse(field, data): # Remember that g is the negative of the potential gradient gx = -data["gas", "density"] * data["gas", "gravitational_potential_gradient_x"] gy = -data["gas", "density"] * data["gas", "gravitational_potential_gradient_y"] gz = -data["gas", "density"] * data["gas", "gravitational_potential_gradient_z"] hx = data["gas", "pressure_gradient_x"] - gx hy = data["gas", "pressure_gradient_y"] - gy hz = data["gas", "pressure_gradient_z"] - gz h = np.sqrt((hx * hx + hy * hy + hz * hz) / (gx * gx + gy * gy + gz * gz)) return h ds.add_field( ("gas", "HSE"), function=_hse, units="", take_log=False, display_name="Hydrostatic Equilibrium", sampling_type="cell", ) # The gradient operator requires periodic boundaries. This dataset has # open boundary conditions. ds.force_periodicity() # Take a slice through the center of the domain slc = yt.SlicePlot(ds, 2, [("gas", "density"), ("gas", "HSE")], width=(1, "Mpc")) slc.save("hse") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/image_background_colors.py0000644000175100001770000000231314714401662022653 0ustar00runnerdockerimport yt # This shows how to save ImageArray objects, such as those returned from # volume renderings, to pngs with varying backgrounds. # First we use the simple_volume_rendering.py recipe from above to generate # a standard volume rendering. ds = yt.load("Enzo_64/DD0043/data0043") im, sc = yt.volume_render(ds, ("gas", "density")) im.write_png("original.png", sigma_clip=8.0) # Our image array can now be transformed to include different background # colors. By default, the background color is black. The following # modifications can be used on any image array. # write_png accepts a background keyword argument that defaults to 'black'. # Other choices include: # black (0.,0.,0.,1.) # white (1.,1.,1.,1.) # None (0.,0.,0.,0.) <-- Transparent! # any rgba list/array: [r,g,b,a], bounded by 0..1 # We include the sigma_clip=8 keyword here to bring out more contrast between # the background and foreground, but it is entirely optional. im.write_png("black_bg.png", background="black", sigma_clip=8.0) im.write_png("white_bg.png", background="white", sigma_clip=8.0) im.write_png("green_bg.png", background=[0.0, 1.0, 0.0, 1.0], sigma_clip=8.0) im.write_png("transparent_bg.png", background=None, sigma_clip=8.0) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/image_resolution.py0000644000175100001770000000512114714401662021356 0ustar00runnerdockerimport numpy as np import yt # Load the dataset. We'll work with a some Gadget data to illustrate all # the different ways in which the effective resolution can vary. Specifically, # we'll use the GadgetDiskGalaxy dataset available at # http://yt-project.org/data/GadgetDiskGalaxy.tar.gz # load the data with a refinement criteria of 2 particle per cell # n.b. -- in yt-4.0, n_ref no longer exists as the data is no longer # deposited only a grid. 
At present (03/15/2019), there is no way to # handle non-gas data in Gadget snapshots, though that is work in progress if int(yt.__version__[0]) < 4: # increasing n_ref will result in a "lower resolution" (but faster) image, # while decreasing it will go the opposite way ds = yt.load("GadgetDiskGalaxy/snapshot_200.hdf5", n_ref=16) else: ds = yt.load("GadgetDiskGalaxy/snapshot_200.hdf5") # Create projections of the density (max value in each resolution element in the image): prj = yt.ProjectionPlot( ds, "x", ("gas", "density"), method="max", center="max", width=(100, "kpc") ) # neaten up the plot by using a better interpolation: plot = prj.plots[list(prj.plots)[0]] ax = plot.axes img = ax.images[0] img.set_interpolation("bicubic") # neaten up the plot by setting the background color to the minimum of the colorbar prj.set_background_color(("gas", "density")) # vary the buff_size -- the number of resolution elements in the actual visualization # set it to 2000x2000 buff_size = 2000 prj.set_buff_size(buff_size) # set the figure size in inches figure_size = 10 prj.set_figure_size(figure_size) # if the image does not fill the plot (as is default, since the axes and # colorbar contribute as well), then figuring out the proper dpi for a given # buff_size and figure_size is non-trivial -- it requires finding the bbox # for the actual image: bounding_box = ax.get_position() # we're going to scale to the larger of the two sides image_size = figure_size * max([bounding_box.width, bounding_box.height]) # now save with a dpi that's scaled to the buff_size: dpi = np.rint(np.ceil(buff_size / image_size)) prj.save("with_axes_colorbar.png", mpl_kwargs=dict(dpi=dpi)) # in the case where the image fills the entire plot (i.e. if the axes and colorbar # are turned off), it's trivial to figure out the correct dpi from the buff_size and # figure_size (or vice versa): # hide the colorbar: prj.hide_colorbar() # hide the axes, while still keeping the background color correct: prj.hide_axes(draw_frame=True) # save with a dpi that makes sense: dpi = np.rint(np.ceil(buff_size / figure_size)) prj.save("no_axes_colorbar.png", mpl_kwargs=dict(dpi=dpi)) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/index.rst0000644000175100001770000000262314714401662017304 0ustar00runnerdocker.. _cookbook: The Cookbook ============ yt provides a great deal of functionality to the user, but sometimes it can be a bit complex. This section of the documentation lays out example recipes for how to do a variety of tasks. Most of the early, simple code demonstrations are small scripts which you can easily copy and paste into your own code; however, as we move to more complex tasks, the recipes move to IPython notebooks to display intermediate steps. All of these recipes are available for download in a link next to the recipe. Getting the Sample Data ----------------------- All of the data used in the cookbook is freely available `here <https://yt-project.org/data/>`_, where you will find links to download individual datasets. .. note:: To contribute your own recipes, please follow the instructions on how to contribute documentation code: :ref:`writing_documentation`. Example Scripts --------------- .. toctree:: :maxdepth: 2 simple_plots calculating_information complex_plots constructing_data_objects .. _example-notebooks: Example Notebooks ----------------- ..
toctree:: :maxdepth: 1 notebook_tutorial custom_colorbar_tickmarks yt_gadget_analysis yt_gadget_owls_analysis ../visualizing/TransferFunctionHelper_Tutorial fits_radio_cubes fits_xray_images geographic_xforms_and_projections tipsy_and_yt ../visualizing/Volume_Rendering_Tutorial ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/matplotlib-animation.py0000644000175100001770000000127114714401662022137 0ustar00runnerdockerfrom matplotlib import rc_context from matplotlib.animation import FuncAnimation import yt ts = yt.load("GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_*") plot = yt.SlicePlot(ts[0], "z", ("gas", "density")) plot.set_zlim(("gas", "density"), 8e-29, 3e-26) fig = plot.plots["gas", "density"].figure # animate must accept an integer frame number. We use the frame number # to identify which dataset in the time series we want to load def animate(i): ds = ts[i] plot._switch_ds(ds) animation = FuncAnimation(fig, animate, frames=len(ts)) # Override matplotlib's defaults to get a nicer-looking font with rc_context({"mathtext.fontset": "stix"}): animation.save("animation.mp4") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/multi_plot_3x2_FRB.py0000644000175100001770000000477714714401662021376 0ustar00runnerdockerimport numpy as np from matplotlib.colors import LogNorm import yt from yt.visualization.api import get_multi_plot fn = "Enzo_64/RD0006/RedshiftOutput0006" # dataset to load # load data and get center value and center location as maximum density location ds = yt.load(fn) v, c = ds.find_max(("gas", "density")) # set up our Fixed Resolution Buffer parameters: a width, resolution, and center width = (1.0, "unitary") res = [1000, 1000] # get_multi_plot returns a containing figure, a list-of-lists of axes # into which we can place plots, and some axes into which we'll put # colorbars. # it accepts: # of x-axis plots, # of y-axis plots, and how the # colorbars are oriented (this also determines where they go: below # in the case of 'horizontal', on the right in the case of # 'vertical'); bw is the base-width in inches (4 is about right for # most cases) orient = "horizontal" fig, axes, colorbars = get_multi_plot(2, 3, colorbar=orient, bw=6) # Now we follow the method of "multi_plot.py" but we're going to iterate # over the columns, which will become axes of slicing. plots = [] for ax in range(3): sli = ds.slice(ax, c[ax]) frb = sli.to_frb(width, res) den_axis = axes[ax][0] temp_axis = axes[ax][1] # here, we turn off the axes labels and ticks, but you could # customize further. (The inner loop variable is named "axis" so it # does not shadow the outer loop's "ax".) for axis in (den_axis, temp_axis): axis.xaxis.set_visible(False) axis.yaxis.set_visible(False) # converting our fixed resolution buffers to NDarray so matplotlib can # render them dens = np.array(frb["gas", "density"]) temp = np.array(frb["gas", "temperature"]) plots.append(den_axis.imshow(dens, norm=LogNorm())) plots[-1].set_clim((5e-32, 1e-29)) plots[-1].set_cmap("bds_highcontrast") plots.append(temp_axis.imshow(temp, norm=LogNorm())) plots[-1].set_clim((1e3, 1e8)) plots[-1].set_cmap("hot") # Each 'cax' is a colorbar-container, into which we'll put a colorbar. # the zip command creates triples from each element of the three lists. # Note that it cuts off after the shortest iterator is exhausted, # in this case, titles.
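# The raw strings below are LaTeX-formatted colorbar labels; they get applied via cbar.set_label in the loop that follows.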
titles = [ r"$\mathrm{density}\ (\mathrm{g\ cm^{-3}})$", r"$\mathrm{temperature}\ (\mathrm{K})$", ] for p, cax, t in zip(plots, colorbars, titles): # Now we make a colorbar, using the 'image' we stored in plots # above. note this is what is *returned* by the imshow method of # the plots. cbar = fig.colorbar(p, cax=cax, orientation=orient) cbar.set_label(t) # And now we're done! fig.savefig(f"{ds}_3x2.png") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/multi_plot_slice_and_proj.py0000644000175100001770000000562514714401662023245 0ustar00runnerdockerimport numpy as np from matplotlib.colors import LogNorm import yt from yt.visualization.base_plot_types import get_multi_plot fn = "GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150" # dataset to load orient = "horizontal" ds = yt.load(fn) # load data # There's a lot in here: # From this we get a containing figure, a list-of-lists of axes into which we # can place plots, and some axes that we'll put colorbars. # We feed it: # Number of plots on the x-axis, number of plots on the y-axis, and how we # want our colorbars oriented. (This governs where they will go, too. # bw is the base-width in inches, but 4 is about right for most cases. fig, axes, colorbars = get_multi_plot(3, 2, colorbar=orient, bw=4) slc = yt.SlicePlot( ds, "z", fields=[("gas", "density"), ("gas", "temperature"), ("gas", "velocity_magnitude")], ) proj = yt.ProjectionPlot(ds, "z", ("gas", "density"), weight_field=("gas", "density")) slc_frb = slc.data_source.to_frb((1.0, "Mpc"), 512) proj_frb = proj.data_source.to_frb((1.0, "Mpc"), 512) dens_axes = [axes[0][0], axes[1][0]] temp_axes = [axes[0][1], axes[1][1]] vels_axes = [axes[0][2], axes[1][2]] for dax, tax, vax in zip(dens_axes, temp_axes, vels_axes): dax.xaxis.set_visible(False) dax.yaxis.set_visible(False) tax.xaxis.set_visible(False) tax.yaxis.set_visible(False) vax.xaxis.set_visible(False) vax.yaxis.set_visible(False) # Converting our Fixed Resolution Buffers to numpy arrays so that matplotlib # can render them slc_dens = np.array(slc_frb["gas", "density"]) proj_dens = np.array(proj_frb["gas", "density"]) slc_temp = np.array(slc_frb["gas", "temperature"]) proj_temp = np.array(proj_frb["gas", "temperature"]) slc_vel = np.array(slc_frb["gas", "velocity_magnitude"]) proj_vel = np.array(proj_frb["gas", "velocity_magnitude"]) plots = [ dens_axes[0].imshow(slc_dens, origin="lower", norm=LogNorm()), dens_axes[1].imshow(proj_dens, origin="lower", norm=LogNorm()), temp_axes[0].imshow(slc_temp, origin="lower"), temp_axes[1].imshow(proj_temp, origin="lower"), vels_axes[0].imshow(slc_vel, origin="lower", norm=LogNorm()), vels_axes[1].imshow(proj_vel, origin="lower", norm=LogNorm()), ] plots[0].set_clim((1.0e-27, 1.0e-25)) plots[0].set_cmap("bds_highcontrast") plots[1].set_clim((1.0e-27, 1.0e-25)) plots[1].set_cmap("bds_highcontrast") plots[2].set_clim((1.0e7, 1.0e8)) plots[2].set_cmap("hot") plots[3].set_clim((1.0e7, 1.0e8)) plots[3].set_cmap("hot") plots[4].set_clim((1e6, 1e8)) plots[4].set_cmap("gist_rainbow") plots[5].set_clim((1e6, 1e8)) plots[5].set_cmap("gist_rainbow") titles = [ r"$\mathrm{Density}\ (\mathrm{g\ cm^{-3}})$", r"$\mathrm{Temperature}\ (\mathrm{K})$", r"$\mathrm{Velocity Magnitude}\ (\mathrm{cm\ s^{-1}})$", ] for p, cax, t in zip(plots[0:6:2], colorbars, titles): cbar = fig.colorbar(p, cax=cax, orientation=orient) cbar.set_label(t) # And now we're done! 
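# (fig.savefig is given no file extension here, so matplotlib appends its default savefig format -- "png" unless rcParams["savefig.format"] has been changed)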
fig.savefig(f"{ds}_3x2") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/multi_width_image.py0000644000175100001770000000162614714401662021512 0ustar00runnerdockerimport yt # Load the dataset. ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # Create a slice plot for the dataset. With no additional arguments, # the width will be the size of the domain and the center will be the # center of the simulation box slc = yt.SlicePlot(ds, "z", ("gas", "density")) # Create a list of a couple of widths and units. # (N.B. Mpc (megaparsec) != mpc (milliparsec) widths = [(1, "Mpc"), (15, "kpc")] # Loop through the list of widths and units. for width, unit in widths: # Set the width. slc.set_width(width, unit) # Write out the image with a unique name. slc.save("%s_%010d_%s" % (ds, width, unit)) zoomFactors = [2, 4, 5] # recreate the original slice slc = yt.SlicePlot(ds, "z", ("gas", "density")) for zoomFactor in zoomFactors: # zoom in slc.zoom(zoomFactor) # Write out the image with a unique name. slc.save("%s_%i" % (ds, zoomFactor)) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/multiplot_2x2.py0000644000175100001770000000276114714401662020544 0ustar00runnerdockerimport matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import AxesGrid import yt fn = "IsolatedGalaxy/galaxy0030/galaxy0030" ds = yt.load(fn) # load data fig = plt.figure() # See http://matplotlib.org/mpl_toolkits/axes_grid/api/axes_grid_api.html # These choices of keyword arguments produce a four panel plot that includes # four narrow colorbars, one for each plot. Axes labels are only drawn on the # bottom left hand plot to avoid repeating information and make the plot less # cluttered. grid = AxesGrid( fig, (0.075, 0.075, 0.85, 0.85), nrows_ncols=(2, 2), axes_pad=1.0, label_mode="1", share_all=True, cbar_location="right", cbar_mode="each", cbar_size="3%", cbar_pad="0%", ) fields = [ ("gas", "density"), ("gas", "velocity_x"), ("gas", "velocity_y"), ("gas", "velocity_magnitude"), ] # Create the plot. Since SlicePlot accepts a list of fields, we need only # do this once. p = yt.SlicePlot(ds, "z", fields) # Velocity is going to be both positive and negative, so let's make these # slices use a linear colorbar scale p.set_log(("gas", "velocity_x"), False) p.set_log(("gas", "velocity_y"), False) p.zoom(2) # For each plotted field, force the SlicePlot to redraw itself onto the AxesGrid # axes. for i, field in enumerate(fields): plot = p.plots[field] plot.figure = fig plot.axes = grid[i].axes plot.cax = grid.cbar_axes[i] # Finally, redraw the plot on the AxesGrid axes. p.render() plt.savefig("multiplot_2x2.png") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/multiplot_2x2_coordaxes_slice.py0000644000175100001770000000340414714401662023765 0ustar00runnerdockerimport matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import AxesGrid import yt fn = "IsolatedGalaxy/galaxy0030/galaxy0030" ds = yt.load(fn) # load data fig = plt.figure() # See http://matplotlib.org/mpl_toolkits/axes_grid/api/axes_grid_api.html # These choices of keyword arguments produce two colorbars, both drawn on the # right hand side. This means there are only two colorbar axes, one for Density # and another for temperature. In addition, axes labels will be drawn for all # plots. 
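# (with cbar_mode="edge" and cbar_location="right", AxesGrid draws one colorbar per row along the grid's right-hand edge rather than one per panel -- hence the two colorbar axes mentioned below)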
grid = AxesGrid( fig, (0.075, 0.075, 0.85, 0.85), nrows_ncols=(2, 2), axes_pad=1.0, label_mode="all", share_all=True, cbar_location="right", cbar_mode="edge", cbar_size="5%", cbar_pad="0%", ) cuts = ["x", "y", "z", "z"] fields = [ ("gas", "density"), ("gas", "density"), ("gas", "density"), ("gas", "temperature"), ] for i, (direction, field) in enumerate(zip(cuts, fields)): # Load the data and create a single plot p = yt.SlicePlot(ds, direction, field) p.zoom(40) # This forces the SlicePlot to redraw itself on the AxesGrid axes. plot = p.plots[field] plot.figure = fig plot.axes = grid[i].axes # Since there are only two colorbar axes, we need to make sure we don't try # to set the temperature colorbar to cbar_axes[4], which would happen if we used i # to index cbar_axes, yielding a plot without a temperature colorbar. # This unnecessarily redraws the Density colorbar three times, but that has # no effect on the final plot. if field == ("gas", "density"): plot.cax = grid.cbar_axes[0] elif field == ("gas", "temperature"): plot.cax = grid.cbar_axes[1] # Finally, redraw the plot. p.render() plt.savefig("multiplot_2x2_coordaxes_slice.png") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/multiplot_2x2_time_series.py0000644000175100001770000000256314714401662023134 0ustar00runnerdockerimport matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import AxesGrid import yt fns = [ "Enzo_64/DD0005/data0005", "Enzo_64/DD0015/data0015", "Enzo_64/DD0025/data0025", "Enzo_64/DD0035/data0035", ] fig = plt.figure() # See http://matplotlib.org/mpl_toolkits/axes_grid/api/axes_grid_api.html # These choices of keyword arguments produce a four panel plot with a single # shared narrow colorbar on the right hand side of the multipanel plot. Axes # labels are drawn for all plots since we're slicing along different directions # for each plot. grid = AxesGrid( fig, (0.075, 0.075, 0.85, 0.85), nrows_ncols=(2, 2), axes_pad=0.05, label_mode="L", share_all=True, cbar_location="right", cbar_mode="single", cbar_size="3%", cbar_pad="0%", ) for i, fn in enumerate(fns): # Load the data and create a single plot ds = yt.load(fn) # load data p = yt.ProjectionPlot(ds, "z", ("gas", "density"), width=(55, "Mpccm")) # Ensure the colorbar limits match for all plots p.set_zlim(("gas", "density"), 1e-4, 1e-2) # This forces the ProjectionPlot to redraw itself on the AxesGrid axes. plot = p.plots["gas", "density"] plot.figure = fig plot.axes = grid[i].axes plot.cax = grid.cbar_axes[i] # Finally, this actually redraws the plot.
p.render() plt.savefig("multiplot_2x2_time_series.png") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/multiplot_export_to_mpl.py0000644000175100001770000000077514714401662023027 0ustar00runnerdockerimport yt ds = yt.load_sample("IsolatedGalaxy") fields = [ ("gas", "density"), ("gas", "velocity_x"), ("gas", "velocity_y"), ("gas", "velocity_magnitude"), ] p = yt.SlicePlot(ds, "z", fields) p.set_log(("gas", "velocity_x"), False) p.set_log(("gas", "velocity_y"), False) # this returns a matplotlib figure with an ImageGrid and the slices # added to the grid of axes (in this case, 2x2) fig = p.export_to_mpl_figure((2, 2)) fig.tight_layout() fig.savefig("multiplot_export_to_mpl.png") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/multiplot_phaseplot.py0000644000175100001770000000265214714401662022127 0ustar00runnerdockerimport matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import AxesGrid import yt fig = plt.figure() # See http://matplotlib.org/mpl_toolkits/axes_grid/api/axes_grid_api.html grid = AxesGrid( fig, (0.085, 0.085, 0.83, 0.83), nrows_ncols=(1, 2), axes_pad=0.05, label_mode="L", share_all=True, cbar_location="right", cbar_mode="single", cbar_size="3%", cbar_pad="0%", aspect=False, ) for i, SnapNum in enumerate([10, 40]): # Load the data and create a single plot ds = yt.load("enzo_tiny_cosmology/DD00%02d/DD00%02d" % (SnapNum, SnapNum)) ad = ds.all_data() p = yt.PhasePlot( ad, ("gas", "density"), ("gas", "temperature"), [ ("gas", "mass"), ], weight_field=None, ) # Ensure the axes and colorbar limits match for all plots p.set_xlim(1.0e-32, 8.0e-26) p.set_ylim(1.0e1, 2.0e7) p.set_zlim(("gas", "mass"), 1e42, 1e46) # This forces the PhasePlot to redraw itself on the AxesGrid axes. plot = p.plots["gas", "mass"] plot.figure = fig plot.axes = grid[i].axes if i == 0: plot.cax = grid.cbar_axes[i] # Actually redraws the plot. p.render() # Modify the axes properties **after** p.render() so that they # are not overwritten. plot.axes.xaxis.set_minor_locator(plt.LogLocator(base=10.0, subs=[2.0, 5.0, 8.0])) plt.savefig("multiplot_phaseplot.png") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/notebook_tutorial.rst0000644000175100001770000000254014714401662021736 0ustar00runnerdocker.. _notebook-tutorial: Notebook Tutorial ----------------- The IPython notebook is a powerful system for literate coding - a style of writing code that embeds input, output, and explanatory text into one document. yt has deep integration with the IPython notebook, explained in-depth in the other example notebooks and the rest of the yt documentation. This page is here to give a brief introduction to the notebook itself. To start the notebook, enter the following command at the bash command line: .. code-block:: bash $ ipython notebook Depending on your default web browser and system setup, this will open a web browser and direct you to the notebook dashboard. If it does not, you might need to connect to the notebook manually. See the `IPython documentation `_ for more details. For the notebook tutorial, we rely on example notebooks that are part of the IPython documentation. We link to static nbviewer versions of the 'evaluated' versions of these example notebooks. 
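These notebooks can also be served by the newer Jupyter frontend; assuming the ``jupyter`` package is installed (an assumption, not a requirement of yt), the equivalent launch command is:

.. code-block:: bash

   $ jupyter notebook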
If you would like to run them locally on your own computer, simply download the notebook by clicking the 'Download Notebook' link in the top right corner of each page. 1. `IPython Notebook Tutorials `_ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/offaxis_projection.py0000644000175100001770000000241514714401662021707 0ustar00runnerdockerimport numpy as np import yt # Load the dataset. ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # Choose a center for the render. c = [0.5, 0.5, 0.5] # Our image plane will be normal to some vector. For things like collapsing # objects, you could set it the way you would a cutting plane -- but for this # dataset, we'll just choose an off-axis value at random. This gets normalized # automatically. L = [1.0, 0.0, 0.0] # Our "width" is the width of the image plane as well as the depth. # The first element is the left to right width, the second is the # top-bottom width, and the last element is the back-to-front width # (all in code units) W = [0.04, 0.04, 0.4] # The number of pixels along one side of the image. # The final image will have Npixels^2 pixels. Npixels = 512 # Create the off axis projection. # Setting no_ghost to True would speed up the process, but it makes a # slightly lower quality image; here we keep no_ghost=False. image = yt.off_axis_projection(ds, c, L, W, Npixels, ("gas", "density"), no_ghost=False) # Write out the final image and give it a name # relating to what our dataset is called. # We save the log of the values so that the colors do not span # many orders of magnitude. Try it without and see what happens. yt.write_image(np.log10(image), f"{ds}_offaxis_projection.png") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/offaxis_projection_colorbar.py0000644000175100001770000000324014714401662023567 0ustar00runnerdockerimport yt fn = "IsolatedGalaxy/galaxy0030/galaxy0030" # dataset to load ds = yt.load(fn) # load data # Now we need a center of our volume to render. Here we'll just use # 0.5,0.5,0.5, because volume renderings are not periodic. c = [0.5, 0.5, 0.5] # Our image plane will be normal to some vector. For things like collapsing # objects, you could set it the way you would a cutting plane -- but for this # dataset, we'll just choose an off-axis value at random. This gets normalized # automatically. L = [0.5, 0.4, 0.7] # Our "width" is the width of the image plane as well as the depth. # The first element is the left to right width, the second is the # top-bottom width, and the last element is the back-to-front width # (all in code units) W = [0.04, 0.04, 0.4] # The number of pixels along one side of the image. # The final image will have Npixels^2 pixels. Npixels = 512 # Now we call the off_axis_projection function, which handles the rest. # Note that we set no_ghost equal to False, so that we *do* include ghost # zones in our data. This takes longer to calculate, but the results look # much cleaner than when you ignore the ghost zones. # Also note that we set the field which we want to project as "density", but # really we could use any arbitrary field like "temperature", "metallicity" # or whatever. image = yt.off_axis_projection(ds, c, L, W, Npixels, ("gas", "density"), no_ghost=False) # Image is now an NxN array representing the intensities of the various pixels.
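# Before saving, it is worth a quick look at the image's dynamic range --
# that range is what the log scaling below compensates for. This check is
# optional and purely illustrative (numpy is not imported above, so we
# import it here):
import numpy as np
print("image spans %.3e to %.3e" % (float(np.nanmin(image)), float(np.nanmax(image))))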
# And now, we call our direct image saver. We save the log of the result. yt.write_projection( image, "offaxis_projection_colorbar.png", colorbar_label="Column Density (cm$^{-2}$)", ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/opaque_rendering.py0000644000175100001770000000403414714401662021342 0ustar00runnerdockerimport numpy as np import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # We start by building a default volume rendering scene im, sc = yt.volume_render(ds, field=("gas", "density"), fname="v0.png", sigma_clip=6.0) sc.camera.set_width(ds.arr(0.1, "code_length")) tf = sc.get_source().transfer_function tf.clear() tf.add_layers( 4, 0.01, col_bounds=[-27.5, -25.5], alpha=np.logspace(-3, 0, 4), colormap="RdBu_r" ) sc.save("v1.png", sigma_clip=6.0) # In this case, the default alphas used (np.logspace(-3,0,Nbins)) do not # accentuate the outer regions of the galaxy. Let's start by bringing the # alpha value for each contour up to 1.0 tf = sc.get_source().transfer_function tf.clear() tf.add_layers( 4, 0.01, col_bounds=[-27.5, -25.5], alpha=np.logspace(0, 0, 4), colormap="RdBu_r" ) sc.save("v2.png", sigma_clip=6.0) # Now let's set the grey_opacity to True. This should make the inner portions # start to be obscured tf.grey_opacity = True sc.save("v3.png", sigma_clip=6.0) # That looks pretty good, but let's start bumping up the opacity. tf.clear() tf.add_layers( 4, 0.01, col_bounds=[-27.5, -25.5], alpha=10.0 * np.ones(4, dtype="float64"), colormap="RdBu_r", ) sc.save("v4.png", sigma_clip=6.0) # Let's bump up again to see if we can obscure the inner contour. tf.clear() tf.add_layers( 4, 0.01, col_bounds=[-27.5, -25.5], alpha=30.0 * np.ones(4, dtype="float64"), colormap="RdBu_r", ) sc.save("v5.png", sigma_clip=6.0) # Now we are losing sight of everything. Let's see if we can obscure the next # layer tf.clear() tf.add_layers( 4, 0.01, col_bounds=[-27.5, -25.5], alpha=100.0 * np.ones(4, dtype="float64"), colormap="RdBu_r", ) sc.save("v6.png", sigma_clip=6.0) # That is very opaque! Now let's go back and see what it would look like with # grey_opacity = False tf.grey_opacity = False sc.save("v7.png", sigma_clip=6.0) # That looks pretty different, but the main thing is that you can see that the # inner contours are somewhat visible again. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/overplot_grids.py0000644000175100001770000000070714714401662021060 0ustar00runnerdockerimport yt # Load the dataset. ds = yt.load("Enzo_64/DD0043/data0043") # Make a density projection. p = yt.ProjectionPlot(ds, "y", ("gas", "density")) # Modify the projection by overplotting the grid boundaries. p.annotate_grids() # Save the image. # Optionally, give a string as an argument # to name files with a keyword. p.save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/overplot_particles.py0000644000175100001770000000120314714401662021726 0ustar00runnerdockerimport yt # Load the dataset. ds = yt.load("Enzo_64/DD0043/data0043")
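# A center can be given in several ways. Besides the "m" (maximum density)
# shorthand used below, an explicit coordinate works too; this variable is
# only an illustration and is not used by the rest of the recipe:
explicit_center = ds.arr([0.5, 0.5, 0.5], "code_length")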
p = yt.ProjectionPlot(ds, "y", ("gas", "density"), center="m", width=(10, "Mpc")) # Modify the projection # The argument specifies the region along the line of sight # for which particles will be gathered. # 1.0 signifies the entire domain in the line of sight # p.annotate_particles(1.0) # but in this case we only go 10 Mpc in depth p.annotate_particles((10, "Mpc")) # Save the image. # Optionally, give a string as an argument # to name files with a keyword. p.save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/particle_filter.py0000644000175100001770000000447614714401662021175 0ustar00runnerdockerimport numpy as np import yt from yt.data_objects.particle_filters import add_particle_filter # Define filter functions for our particle filters based on stellar age. # In this dataset particles in the initial conditions are given creation # times arbitrarily far into the future, so stars with negative ages belong # in the old stars filter. def stars_10Myr(pfilter, data): age = data.ds.current_time - data["Stars", "creation_time"] filter = np.logical_and(age >= 0, age.in_units("Myr") < 10) return filter def stars_100Myr(pfilter, data): age = (data.ds.current_time - data["Stars", "creation_time"]).in_units("Myr") filter = np.logical_and(age >= 10, age < 100) return filter def stars_old(pfilter, data): age = data.ds.current_time - data["Stars", "creation_time"] filter = np.logical_or(age < 0, age.in_units("Myr") >= 100) return filter # Create the particle filters add_particle_filter( "stars_young", function=stars_10Myr, filtered_type="Stars", requires=["creation_time"], ) add_particle_filter( "stars_medium", function=stars_100Myr, filtered_type="Stars", requires=["creation_time"], ) add_particle_filter( "stars_old", function=stars_old, filtered_type="Stars", requires=["creation_time"] ) # Load a dataset and apply the particle filters filename = "TipsyGalaxy/galaxy.00300" ds = yt.load(filename) ds.add_particle_filter("stars_young") ds.add_particle_filter("stars_medium") ds.add_particle_filter("stars_old") # What are the total masses of different ages of star in the whole simulation # volume? 
ad = ds.all_data() mass_young = ad["stars_young", "particle_mass"].in_units("Msun").sum() mass_medium = ad["stars_medium", "particle_mass"].in_units("Msun").sum() mass_old = ad["stars_old", "particle_mass"].in_units("Msun").sum() print(f"Mass of young stars = {mass_young:g}") print(f"Mass of medium stars = {mass_medium:g}") print(f"Mass of old stars = {mass_old:g}") # Generate 4 projections: gas density, young stars, medium stars, old stars fields = [ ("stars_young", "particle_mass"), ("stars_medium", "particle_mass"), ("stars_old", "particle_mass"), ] prj1 = yt.ProjectionPlot(ds, "z", ("gas", "density"), center="max", width=(100, "kpc")) prj1.save() prj2 = yt.ParticleProjectionPlot(ds, "z", fields, center="max", width=(100, "kpc")) prj2.save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/particle_filter_sfr.py0000644000175100001770000000205714714401662022040 0ustar00runnerdockerimport numpy as np from matplotlib import pyplot as plt import yt from yt.data_objects.particle_filters import add_particle_filter def formed_star(pfilter, data): filter = data["all", "creation_time"] > 0 return filter add_particle_filter( "formed_star", function=formed_star, filtered_type="all", requires=["creation_time"] ) filename = "IsolatedGalaxy/galaxy0030/galaxy0030" ds = yt.load(filename) ds.add_particle_filter("formed_star") ad = ds.all_data() masses = ad["formed_star", "particle_mass"].in_units("Msun") formation_time = ad["formed_star", "creation_time"].in_units("yr") time_range = [0, 5e8] # years n_bins = 1000 hist, bins = np.histogram( formation_time, bins=n_bins, range=time_range, ) inds = np.digitize(formation_time, bins=bins) time = (bins[:-1] + bins[1:]) / 2 sfr = np.array( [masses[inds == j + 1].sum() / (bins[j + 1] - bins[j]) for j in range(len(time))] ) sfr[sfr == 0] = np.nan plt.plot(time / 1e6, sfr) plt.xlabel("Time [Myr]") plt.ylabel(r"SFR [M$_\odot$ yr$^{-1}$]") plt.savefig("filter_sfr.png") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/particle_one_color_plot.py0000644000175100001770000000042714714401662022715 0ustar00runnerdockerimport yt # load the dataset ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # create our plot p = yt.ParticlePlot( ds, ("all", "particle_position_x"), ("all", "particle_position_y"), color="b" ) # zoom in a little bit p.set_width(500, "kpc") # save result p.save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/particle_xvz_plot.py0000644000175100001770000000066614714401662021572 0ustar00runnerdockerimport yt # load the dataset ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # create our plot p = yt.ParticlePlot( ds, ("all", "particle_position_x"), ("all", "particle_velocity_z"), [("all", "particle_mass")], ) # pick some appropriate units p.set_unit(("all", "particle_position_x"), "Mpc") p.set_unit(("all", "particle_velocity_z"), "km/s") p.set_unit(("all", "particle_mass"), "Msun") # save result p.save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/particle_xy_plot.py0000644000175100001770000000057414714401662021401 0ustar00runnerdockerimport yt # load the dataset ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # create our plot p = yt.ParticlePlot( ds, ("all", "particle_position_x"), ("all", "particle_position_y"), ("all", "particle_mass"), 
width=(0.5, 0.5), ) # pick some appropriate units p.set_axes_unit("kpc") p.set_unit(("all", "particle_mass"), "Msun") # save result p.save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/power_spectrum_example.py0000644000175100001770000000570414714401662022611 0ustar00runnerdockerimport matplotlib.pyplot as plt import numpy as np import yt """ Make a turbulent KE power spectrum. Since we are stratified, we use a rho**(1/3) scaling to the velocity to get something that would look Kolmogorov (if the turbulence were fully developed). Ultimately, we aim to compute: E(k) = integral over a shell of radius k of (1/2) V^(k) . V^*(k) dS, where V = rho**n U (with n = nindex_rho = 1/3 below) is the density-weighted velocity field, V^ is the FFT of V, and V^* its complex conjugate. (Note: sometimes we normalize by 1/volume to get a spectral energy density spectrum). """ def doit(ds): # an FFT operates on uniformly gridded data. We'll use the yt # covering grid for this. max_level = ds.index.max_level ref = int(np.prod(ds.ref_factors[0:max_level])) low = ds.domain_left_edge dims = ds.domain_dimensions * ref nx, ny, nz = dims nindex_rho = 1.0 / 3.0 Kk = np.zeros((nx // 2 + 1, ny // 2 + 1, nz // 2 + 1)) for vel in [("gas", "velocity_x"), ("gas", "velocity_y"), ("gas", "velocity_z")]: Kk += 0.5 * fft_comp( ds, ("gas", "density"), vel, nindex_rho, max_level, low, dims ) # wavenumbers L = (ds.domain_right_edge - ds.domain_left_edge).d kx = np.fft.rfftfreq(nx) * nx / L[0] ky = np.fft.rfftfreq(ny) * ny / L[1] kz = np.fft.rfftfreq(nz) * nz / L[2] # physical limits to the wavenumbers kmin = np.min(1.0 / L) kmax = np.min(0.5 * dims / L) kbins = np.arange(kmin, kmax, kmin) N = len(kbins) # bin the Fourier KE into radial kbins kx3d, ky3d, kz3d = np.meshgrid(kx, ky, kz, indexing="ij") k = np.sqrt(kx3d**2 + ky3d**2 + kz3d**2) whichbin = np.digitize(k.flat, kbins) ncount = np.bincount(whichbin) E_spectrum = np.zeros(len(ncount) - 1) for n in range(1, len(ncount)): E_spectrum[n - 1] = np.sum(Kk.flat[whichbin == n]) k = 0.5 * (kbins[0 : N - 1] + kbins[1:N]) E_spectrum = E_spectrum[1:N] index = np.argmax(E_spectrum) kmax = k[index] Emax = E_spectrum[index] plt.loglog(k, E_spectrum) plt.loglog(k, Emax * (k / kmax) ** (-5.0 / 3.0), ls=":", color="0.5") plt.xlabel(r"$k$") plt.ylabel(r"$E(k)dk$") plt.savefig("spectrum.png") def fft_comp(ds, irho, iu, nindex_rho, level, low, delta): cube = ds.covering_grid(level, left_edge=low, dims=delta, fields=[irho, iu]) rho = cube[irho].d u = cube[iu].d nx, ny, nz = rho.shape # do the FFTs -- note that since our data is real, there will be # too much information here. fftn puts the positive freq terms in # the first half of the axes -- that's what we keep. Our # normalization has an '8' to account for this clipping to one # octant. ru = np.fft.fftn(rho**nindex_rho * u)[ 0 : nx // 2 + 1, 0 : ny // 2 + 1, 0 : nz // 2 + 1 ] ru = 8.0 * ru / (nx * ny * nz) return np.abs(ru) ** 2 ds = yt.load("maestro_xrb_lores_23437") doit(ds) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/profile_with_standard_deviation.py0000644000175100001770000000211214714401662024423 0ustar00runnerdockerimport matplotlib.pyplot as plt import yt # Load the dataset. ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # Create a sphere of radius 1 Mpc centered on the max density location. sp = ds.sphere("max", (1, "Mpc"))
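# The next step subtracts the sphere's bulk motion, since velocity magnitudes
# are frame-dependent. A minimal numpy sketch of the idea with made-up
# velocities (not dataset values; numpy is imported here just for the sketch):
import numpy as np
_v_cells = np.array([[250.0, 10.0, -5.0], [240.0, -20.0, 15.0]])  # km/s per cell
_v_bulk = _v_cells.mean(axis=0)  # crude, unweighted stand-in for bulk_velocity
_v_rest = _v_cells - _v_bulk  # velocities in the sphere's rest frame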
# Calculate and store the bulk velocity for the sphere. bulk_velocity = sp.quantities.bulk_velocity() sp.set_field_parameter("bulk_velocity", bulk_velocity) # Create a 1D profile object for profiles over radius # and add a velocity profile. prof = yt.create_profile( sp, "radius", ("gas", "velocity_magnitude"), units={"radius": "kpc"}, extrema={"radius": ((0.1, "kpc"), (1000.0, "kpc"))}, weight_field=("gas", "mass"), ) # Create arrays to plot. radius = prof.x mean = prof["gas", "velocity_magnitude"] std = prof.standard_deviation["gas", "velocity_magnitude"] # Plot the average velocity magnitude. plt.loglog(radius, mean, label="Mean") # Plot the standard deviation of the velocity magnitude. plt.loglog(radius, std, label="Standard Deviation") plt.xlabel("r [kpc]") plt.ylabel("v [cm/s]") plt.legend() plt.savefig("velocity_profiles.png") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/rad_velocity.py0000644000175100001770000000241414714401662020477 0ustar00runnerdockerimport matplotlib.pyplot as plt import yt ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150") # Get the first sphere sp0 = ds.sphere(ds.domain_center, (500.0, "kpc")) # Compute the bulk velocity from the cells in this sphere bulk_vel = sp0.quantities.bulk_velocity() # Get the second sphere sp1 = ds.sphere(ds.domain_center, (500.0, "kpc")) # Set the bulk velocity field parameter sp1.set_field_parameter("bulk_velocity", bulk_vel) # Radial profile without correction rp0 = yt.create_profile( sp0, ("index", "radius"), ("gas", "radial_velocity"), units={("index", "radius"): "kpc"}, logs={("index", "radius"): False}, ) # Radial profile with correction for bulk velocity rp1 = yt.create_profile( sp1, ("index", "radius"), ("gas", "radial_velocity"), units={("index", "radius"): "kpc"}, logs={("index", "radius"): False}, ) # Make a plot using matplotlib fig = plt.figure() ax = fig.add_subplot(111) ax.plot( rp0.x.value, rp0["gas", "radial_velocity"].in_units("km/s").value, rp1.x.value, rp1["gas", "radial_velocity"].in_units("km/s").value, ) ax.set_xlabel(r"$\mathrm{r\ (kpc)}$") ax.set_ylabel(r"$\mathrm{v_r\ (km/s)}$") ax.legend(["Without Correction", "With Correction"]) fig.savefig(f"{ds}_profiles.png") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/radial_profile_styles.py0000644000175100001770000000236614714401662022400 0ustar00runnerdockerimport matplotlib.pyplot as plt import yt ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150") # Get a sphere object sp = ds.sphere(ds.domain_center, (500.0, "kpc")) # Bin up the data from the sphere into a radial profile rp = yt.create_profile( sp, "radius", [("gas", "density"), ("gas", "temperature")], units={"radius": "kpc"}, logs={"radius": False}, ) # Make plots using matplotlib fig = plt.figure() ax = fig.add_subplot(111) # Plot the density as a log-log plot using the default settings dens_plot = ax.loglog(rp.x.value, rp["gas", "density"].value) # Here we set the labels of the plot axes ax.set_xlabel(r"$\mathrm{r\ (kpc)}$") ax.set_ylabel(r"$\mathrm{\rho\ (g\ cm^{-3})}$") # Save the default plot fig.savefig("density_profile_default.png") # The "dens_plot" object is a list of plot objects. In our case we only have one, # so we index the list by '0' to get it. 
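# Equivalently, tuple unpacking grabs the same Line2D object directly; this
# line is illustrative, and the recipe keeps using dens_plot[0] below:
(dens_line,) = dens_plot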
# Plot using dashed red lines dens_plot[0].set_linestyle("--") dens_plot[0].set_color("red") fig.savefig("density_profile_dashed_red.png") # Increase the line width and add points in the shape of x's dens_plot[0].set_linewidth(5) dens_plot[0].set_marker("x") dens_plot[0].set_markersize(10) fig.savefig("density_profile_thick_with_xs.png") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/render_two_fields.py0000644000175100001770000000131314714401662021506 0ustar00runnerdockerimport yt from yt.visualization.volume_rendering.api import Scene, create_volume_source filePath = "Sedov_3d/sedov_hdf5_chk_0003" ds = yt.load(filePath) ds.force_periodicity() sc = Scene() # set up camera cam = sc.add_camera(ds, lens_type="perspective") cam.resolution = [400, 400] cam.position = ds.arr([1, 1, 1], "cm") cam.switch_orientation() # add rendering of density field dens = create_volume_source(ds, field=("flash", "dens")) dens.use_ghost_zones = True sc.add_source(dens) sc.save("density.png", sigma_clip=6) # add rendering of x-velocity field vel = create_volume_source(ds, field=("flash", "velx")) vel.use_ghost_zones = True sc.add_source(vel) sc.save("density_any_velocity.png", sigma_clip=6) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/render_two_fields_tf.py0000644000175100001770000000206514714401662022204 0ustar00runnerdockerimport numpy as np import yt from yt.visualization.volume_rendering.api import Scene, create_volume_source ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # create a scene and add volume sources to it sc = Scene() # Add density field = "density" vol = create_volume_source(ds, field=field) vol.use_ghost_zones = True tf = yt.ColorTransferFunction([-28, -25]) tf.clear() tf.add_layers(4, 0.02, alpha=np.logspace(-3, -1, 4), colormap="winter") vol.set_transfer_function(tf) sc.add_source(vol) # Add temperature field = "temperature" vol2 = create_volume_source(ds, field=field) vol2.use_ghost_zones = True tf = yt.ColorTransferFunction([4.5, 7.5]) tf.clear() tf.add_layers(4, 0.02, alpha=np.logspace(-0.2, 0, 4), colormap="autumn") vol2.set_transfer_function(tf) sc.add_source(vol2) # setup the camera cam = sc.add_camera(ds, lens_type="perspective") cam.resolution = (1600, 900) cam.zoom(20.0) # Render the image. sc.render() sc.save_annotated( "render_two_fields_tf.png", sigma_clip=6.0, tf_rect=[0.88, 0.15, 0.03, 0.8], render=False, ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/rendering_with_box_and_grids.py0000644000175100001770000000111014714401662023675 0ustar00runnerdockerimport yt # Load the dataset. ds = yt.load("Enzo_64/DD0043/data0043") sc = yt.create_scene(ds, ("gas", "density")) # You may need to adjust the alpha values to get a rendering with good contrast # For annotate_domain, the fourth color value is alpha. 
# Draw the domain boundary sc.annotate_domain(ds, color=[1, 1, 1, 0.01]) sc.save(f"{ds}_vr_domain.png", sigma_clip=4) # Draw the grid boundaries sc.annotate_grids(ds, alpha=0.01) sc.save(f"{ds}_vr_grids.png", sigma_clip=4) # Draw a coordinate axes triad sc.annotate_axes(alpha=0.01) sc.save(f"{ds}_vr_coords.png", sigma_clip=4) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/show_hide_axes_colorbar.py0000644000175100001770000000047714714401662022676 0ustar00runnerdockerimport yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") slc = yt.SlicePlot(ds, "x", ("gas", "density")) slc.save("default_sliceplot.png") slc.hide_axes() slc.save("no_axes_sliceplot.png") slc.hide_colorbar() slc.save("no_axes_no_colorbar_sliceplot.png") slc.show_axes() slc.save("no_colorbar_sliceplot.png") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/sigma_clip.py0000644000175100001770000000115114714401662020117 0ustar00runnerdockerimport yt # Load the dataset. ds = yt.load("enzo_tiny_cosmology/RD0009/RD0009") # Create a volume rendering, which will determine data bounds, use the first # acceptable field in the field_list, and set up a default transfer function. # Render and save output images with different levels of sigma clipping. # Sigma clipping removes the highest intensity pixels in a volume render, # which affects the overall contrast of the image. sc = yt.create_scene(ds, field=("gas", "density")) sc.save("clip_0.png") sc.save("clip_2.png", sigma_clip=2) sc.save("clip_4.png", sigma_clip=4) sc.save("clip_6.png", sigma_clip=6) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/simple_1d_line_plot.py0000644000175100001770000000066414714401662021742 0ustar00runnerdockerimport yt # Load the dataset ds = yt.load("SecondOrderTris/RZ_p_no_parts_do_nothing_bcs_cone_out.e", step=-1) # Create a line plot of the variables 'u' and 'v' with 1000 sampling points evenly # spaced between the coordinates (0, 0, 0) and (0, 1, 0) plot = yt.LinePlot( ds, [("all", "v"), ("all", "u")], (0.0, 0.0, 0.0), (0.0, 1.0, 0.0), 1000 ) # Add a legend plot.annotate_legend(("all", "v")) # Save the line plot plot.save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/simple_contour_in_slice.py0000644000175100001770000000165114714401662022724 0ustar00runnerdockerimport yt # Load the data file. ds = yt.load("Sedov_3d/sedov_hdf5_chk_0002") # Make a traditional slice plot. sp = yt.SlicePlot(ds, "x", ("gas", "density")) # Overlay the slice plot with thick red contours of density. sp.annotate_contour( ("gas", "density"), levels=3, clim=(1e-2, 1e-1), label=True, plot_args={"colors": "red", "linewidths": 2}, ) # What about some nice temperature contours in blue? sp.annotate_contour( ("gas", "temperature"), levels=3, clim=(1e-8, 1e-6), label=True, plot_args={"colors": "blue", "linewidths": 2}, ) # This is the plot object. po = sp.plots["gas", "density"] # Turn off the colormap image, leaving just the contours. po.axes.images[0].set_visible(False) # Remove the colorbar and its label. po.figure.delaxes(po.figure.axes[1]) # Save it and ask for a close fit to get rid of the space used by the colorbar. 
sp.save(mpl_kwargs={"bbox_inches": "tight"}) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/simple_off_axis_projection.py0000644000175100001770000000110314714401662023410 0ustar00runnerdockerimport yt # Load the dataset. ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # Create a 15 kpc radius sphere, centered on the center of the sim volume sp = ds.sphere("center", (15.0, "kpc")) # Get the angular momentum vector for the sphere. L = sp.quantities.angular_momentum_vector() print(f"Angular momentum vector: {L}") # Create an off-axis ProjectionPlot of density centered on the object with the L # vector as its normal and a width of 25 kpc on a side p = yt.ProjectionPlot( ds, L, fields=("gas", "density"), center=sp.center, width=(25, "kpc") ) p.save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/simple_off_axis_slice.py0000644000175100001770000000104414714401662022337 0ustar00runnerdockerimport yt # Load the dataset. ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # Create a 15 kpc radius sphere, centered on the center of the sim volume sp = ds.sphere("center", (15.0, "kpc")) # Get the angular momentum vector for the sphere. L = sp.quantities.angular_momentum_vector() print(f"Angular momentum vector: {L}") # Create an OffAxisSlicePlot of density centered on the object with the L # vector as its normal and a width of 25 kpc on a side p = yt.OffAxisSlicePlot(ds, L, ("gas", "density"), sp.center, (25, "kpc")) p.save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/simple_pdf.py0000644000175100001770000000107014714401662020132 0ustar00runnerdockerimport yt # Load the dataset. ds = yt.load("GalaxyClusterMerger/fiducial_1to3_b0.273d_hdf5_plt_cnt_0175") # Create a data object that represents the whole box. ad = ds.all_data() # This is identical to the simple phase plot, except we supply # the fractional=True keyword to divide the profile data by the sum. plot = yt.PhasePlot( ad, ("gas", "density"), ("gas", "temperature"), ("gas", "mass"), weight_field=None, fractional=True, ) # Save the image. # Optionally, give a string as an argument # to name files with a keyword. plot.save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/simple_phase.py0000644000175100001770000000127514714401662020470 0ustar00runnerdockerimport yt # Load the dataset. ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # Create a sphere of radius 100 kpc in the center of the domain. my_sphere = ds.sphere("c", (100.0, "kpc")) # Create a PhasePlot object. # Setting weight to None will calculate a sum. # Setting weight to a field will calculate an average # weighted by that field. plot = yt.PhasePlot( my_sphere, ("gas", "density"), ("gas", "temperature"), ("gas", "mass"), weight_field=None, ) # Set the units of mass to be in solar masses (not the default in cgs) plot.set_unit(("gas", "mass"), "Msun") # Save the image. # Optionally, give a string as an argument # to name files with a keyword. 
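# Passing a keyword does exactly that. This extra save is optional and only
# illustrative; the "galaxy_phase" prefix is arbitrary, and the exact suffix
# yt appends depends on the plot type:
plot.save("galaxy_phase")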
plot.save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/simple_plots.rst0000644000175100001770000002255514714401662020705 0ustar00runnerdockerMaking Simple Plots ------------------- One of the easiest ways to interact with yt is by creating simple visualizations of your data. Below we show how to do this, as well as how to extend these plots to be ready for publication. Simple Slices ~~~~~~~~~~~~~ This script shows the simplest way to make a slice through a dataset. See :ref:`slice-plots` for more information. .. yt_cookbook:: simple_slice.py Simple Projections (Non-Weighted) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This is the simplest way to make a projection through a dataset. There are several different :ref:`projection-types`, but non-weighted line integrals and weighted line integrals are the two most common. Here we create density projections (non-weighted line integral). See :ref:`projection-plots` for more information. .. yt_cookbook:: simple_projection.py Simple Projections (Weighted) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ And here we produce density-weighted temperature projections (weighted line integral) for the same dataset as the non-weighted projections above. See :ref:`projection-plots` for more information. .. yt_cookbook:: simple_projection_weighted.py Simple Projections (Methods) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ And here we illustrate different methods for projection plots (integrate, minimum, maximum). .. yt_cookbook:: simple_projection_methods.py Simple Projections (Weighted Standard Deviation) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ And here we produce a density-weighted projection (weighted line integral) of the line-of-sight velocity from the same dataset (see :ref:`projection-plots` for more information). .. yt_cookbook:: simple_projection_stddev.py Simple Phase Plots ~~~~~~~~~~~~~~~~~~ This demonstrates how to make a phase plot. Phase plots can be thought of as two-dimensional histograms, where the value is either the weighted-average or the total accumulation in a cell. See :ref:`how-to-make-2d-profiles` for more information. .. yt_cookbook:: simple_phase.py Simple 1D Line Plotting ~~~~~~~~~~~~~~~~~~~~~~~ This script shows how to make a ``LinePlot`` through a dataset. See :ref:`manual-line-plots` for more information. .. yt_cookbook:: simple_1d_line_plot.py .. note:: Not every data type has support for ``yt.LinePlot`` yet. Currently, this operation is supported for grid-based data with Cartesian geometry. Simple Probability Distribution Functions ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Often, one wants to examine the distribution of one variable as a function of another. This shows how to see the distribution of mass in a simulation, with respect to the total mass in the simulation. See :ref:`how-to-make-2d-profiles` for more information. .. yt_cookbook:: simple_pdf.py Simple 1D Histograms (Profiles) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This is a "profile," which is a 1D histogram. This can be thought of as either the total accumulation (when weight_field is set to ``None``) or the average (when a weight_field is supplied). See :ref:`how-to-make-1d-profiles` for more information. .. yt_cookbook:: simple_profile.py Simple Radial Profiles ~~~~~~~~~~~~~~~~~~~~~~ This shows how to make a profile of a quantity with respect to the radius. See :ref:`how-to-make-1d-profiles` for more information. .. 
yt_cookbook:: simple_radial_profile.py 1D Profiles Over Time ~~~~~~~~~~~~~~~~~~~~~ This is a simple example of overplotting multiple 1D profiles from a number of datasets to show how they evolve over time. See :ref:`how-to-make-1d-profiles` for more information. .. yt_cookbook:: time_series_profiles.py .. _cookbook-profile-stddev: Profiles with Standard Deviation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This shows how to plot a 1D profile with error bars indicating the standard deviation of the field values in each profile bin. In this example, we manually create a 1D profile object, which gives us access to the standard deviation data. See :ref:`how-to-make-1d-profiles` for more information. .. yt_cookbook:: profile_with_standard_deviation.py Making Plots of Multiple Fields Simultaneously ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ By adding multiple fields to a single :class:`~yt.visualization.plot_window.SlicePlot` or :class:`~yt.visualization.plot_window.ProjectionPlot` some of the overhead of creating the data object can be reduced, and better performance squeezed out. This recipe shows how to add multiple fields to a single plot. See :ref:`slice-plots` and :ref:`projection-plots` for more information. .. yt_cookbook:: simple_slice_with_multiple_fields.py Off-Axis Slicing ~~~~~~~~~~~~~~~~ One can create slices from any arbitrary angle, not just those aligned with the x,y,z axes. See :ref:`off-axis-slices` for more information. .. yt_cookbook:: simple_off_axis_slice.py .. _cookbook-simple-off-axis-projection: Off-Axis Projection ~~~~~~~~~~~~~~~~~~~ Like off-axis slices, off-axis projections can be created from any arbitrary viewing angle. See :ref:`off-axis-projections` for more information. .. yt_cookbook:: simple_off_axis_projection.py .. _cookbook-simple-particle-plot: Simple Particle Plot ~~~~~~~~~~~~~~~~~~~~ You can also use yt to make particle-only plots. This script shows how to plot all the particle x and y positions in a dataset, using the particle mass to set the color scale. See :ref:`particle-plots` for more information. .. yt_cookbook:: particle_xy_plot.py .. _cookbook-non-spatial-particle-plot: Non-spatial Particle Plots ~~~~~~~~~~~~~~~~~~~~~~~~~~ You are not limited to plotting spatial fields on the x and y axes. This example shows how to plot the particle x-coordinates versus their z-velocities, again using the particle mass to set the colorbar. See :ref:`particle-plots` for more information. .. yt_cookbook:: particle_xvz_plot.py .. _cookbook-single-color-particle-plot: Single-color Particle Plots ~~~~~~~~~~~~~~~~~~~~~~~~~~~ If you don't want to display a third field on the color bar axis, simply pass in a color string instead of a particle field. See :ref:`particle-plots` for more information. .. yt_cookbook:: particle_one_color_plot.py .. _cookbook-simple_volume_rendering: Simple Volume Rendering ~~~~~~~~~~~~~~~~~~~~~~~ Volume renderings are 3D projections rendering isocontours in any arbitrary field (e.g. density, temperature, pressure, etc.) See :ref:`volume_rendering` for more information. .. yt_cookbook:: simple_volume_rendering.py .. _show-hide-axes-colorbar: Showing and Hiding Axis Labels and Colorbars ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This example illustrates how to create a SlicePlot and then suppress the axes labels and colorbars. This is useful when you don't care about the physical scales and just want to take a closer look at the raw plot data. See :ref:`hiding-colorbar-and-axes` for more information. .. yt_cookbook:: show_hide_axes_colorbar.py .. 
_cookbook_label_formats: Setting Field Label Formats ~~~~~~~~~~~~~~~~~~~~~~~~~~~ This example illustrates how to change the label format for ion species from the default Roman numeral style. .. yt_cookbook:: changing_label_formats.py .. _matplotlib-primitives: Accessing and Modifying Plots Directly ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ While the Plot Window and its affiliated :ref:`callbacks` can often cover normal use cases, sometimes more direct access to the underlying Matplotlib engine is necessary. This recipe shows how to modify the plot window :class:`matplotlib.axes.Axes` object directly. See :ref:`matplotlib-customization` for more information. .. yt_cookbook:: simple_slice_matplotlib_example.py Changing the Colormap used in a Plot ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ yt has sensible defaults for colormaps, but there are over a hundred available for customizing your plots. Here we generate a projection and then change its colormap. See :ref:`colormaps` for a list and for images of all the available colormaps. .. yt_cookbook:: colormaps.py Image Background Colors ~~~~~~~~~~~~~~~~~~~~~~~ Here we see how to take an image and save it using different background colors. In this case we use the :ref:`cookbook-simple_volume_rendering` recipe to generate the image, but it works for any NxNx4 image array (3 colors and 1 opacity channel). See :ref:`volume_rendering` for more information. .. yt_cookbook:: image_background_colors.py .. _annotations-recipe: Annotating Plots to Include Lines, Text, Shapes, etc. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ It can be useful to add annotations to plots to show off certain features and make it easier for your audience to understand the plot's purpose. There are a variety of available :ref:`plot modifications ` one can use to add annotations to their plots. Below we include just a handful, but please look at the other :ref:`plot modifications ` to get a full description of what you can do to highlight your figures. .. yt_cookbook:: annotations.py Annotating Plots with a Timestamp and Physical Scale ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ When creating movies of multiple outputs from the same simulation (see :ref:`time-series-analysis`), it can be helpful to include a timestamp and the physical scale of each individual output. This is simply achieved using the :ref:`annotate_timestamp() ` and :ref:`annotate_scale() ` callbacks on your plots. For more information about similar plot modifications using other callbacks, see the section on :ref:`Plot Modifications `. .. yt_cookbook:: annotate_timestamp_and_scale.py ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/simple_profile.py0000644000175100001770000000106314714401662021023 0ustar00runnerdockerimport yt # Load the dataset. ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # Create a 1D profile within a sphere of radius 100 kpc # of the average temperature and average velocity_x # vs. density, weighted by mass. sphere = ds.sphere("c", (100.0, "kpc")) plot = yt.ProfilePlot( sphere, ("gas", "density"), [("gas", "temperature"), ("gas", "velocity_x")], weight_field=("gas", "mass"), ) plot.set_log(("gas", "velocity_x"), False) # Save the image. # Optionally, give a string as an argument # to name files with a keyword. 
plot.save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/simple_projection.py0000644000175100001770000000052614714401662021542 0ustar00runnerdockerimport yt # Load the dataset. ds = yt.load("GalaxyClusterMerger/fiducial_1to3_b0.273d_hdf5_plt_cnt_0175") # Create projections of the gas density (non-weighted line integrals). yt.ProjectionPlot(ds, "x", ("gas", "density")).save() yt.ProjectionPlot(ds, "y", ("gas", "density")).save() yt.ProjectionPlot(ds, "z", ("gas", "density")).save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/simple_projection_methods.py0000644000175100001770000000052414714401662023263 0ustar00runnerdockerimport yt # Load the dataset. ds = yt.load("GalaxyClusterMerger/fiducial_1to3_b0.273d_hdf5_plt_cnt_0175") # Create projections of temperature (with different methods) for method in ["integrate", "min", "max"]: proj = yt.ProjectionPlot(ds, "x", ("gas", "temperature"), method=method) proj.save(f"projection_method_{method}.png") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/simple_projection_stddev.py0000644000175100001770000000061414714401662023111 0ustar00runnerdockerimport yt # Load the dataset. ds = yt.load("GalaxyClusterMerger/fiducial_1to3_b0.273d_hdf5_plt_cnt_0175") # Create density-weighted projections of standard deviation of the velocity # (weighted line integrals) for normal in "xyz": yt.ProjectionPlot( ds, normal, ("gas", f"velocity_{normal}"), weight_field=("gas", "density"), moment=2, ).save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/simple_projection_weighted.py0000644000175100001770000000050114714401662023413 0ustar00runnerdockerimport yt # Load the dataset. ds = yt.load("GalaxyClusterMerger/fiducial_1to3_b0.273d_hdf5_plt_cnt_0175") # Create density-weighted projections of temperature (weighted line integrals) for normal in "xyz": yt.ProjectionPlot( ds, normal, ("gas", "temperature"), weight_field=("gas", "density") ).save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/simple_radial_profile.py0000644000175100001770000000111414714401662022334 0ustar00runnerdockerimport yt # Load the dataset. ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # Create a sphere of radius 100 kpc in the center of the box. my_sphere = ds.sphere("c", (100.0, "kpc")) # Create a profile of the average density vs. radius. plot = yt.ProfilePlot( my_sphere, ("index", "radius"), ("gas", "density"), weight_field=("gas", "mass"), ) # Change the units of the radius into kpc (and not the default in cgs) plot.set_unit(("index", "radius"), "kpc") # Save the image. # Optionally, give a string as an argument # to name files with a keyword. plot.save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/simple_slice.py0000644000175100001770000000054214714401662020463 0ustar00runnerdockerimport yt # Load the dataset. ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150") # Create gas density slices in all three axes. 
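# The same thing can be written as a loop, matching the pattern used in
# simple_projection_weighted.py -- shown for comparison only:
#
#   for normal in "xyz":
#       yt.SlicePlot(ds, normal, ("gas", "density"), width=(800.0, "kpc")).save()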
yt.SlicePlot(ds, "x", ("gas", "density"), width=(800.0, "kpc")).save() yt.SlicePlot(ds, "y", ("gas", "density"), width=(800.0, "kpc")).save() yt.SlicePlot(ds, "z", ("gas", "density"), width=(800.0, "kpc")).save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/simple_slice_matplotlib_example.py0000644000175100001770000000204314714401662024423 0ustar00runnerdockerimport numpy as np import yt # Load the dataset. ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150") # Create a slice object slc = yt.SlicePlot(ds, "x", ("gas", "density"), width=(800.0, "kpc")) # Rendering should be performed explicitly *before* any modification is # performed directly with matplotlib. slc.render() # Get a reference to the matplotlib axes object for the plot ax = slc.plots["gas", "density"].axes # Let's adjust the x axis tick labels for label in ax.xaxis.get_ticklabels(): label.set_color("red") label.set_fontsize(16) # Get a reference to the matplotlib figure object for the plot fig = slc.plots["gas", "density"].figure # And create a mini-panel of a gaussian histogram inside the plot rect = (0.2, 0.2, 0.2, 0.2) new_ax = fig.add_axes(rect) n, bins, patches = new_ax.hist( np.random.randn(1000) + 20, 50, facecolor="black", edgecolor="black" ) # Make sure its visible new_ax.tick_params(colors="white") # And label it la = new_ax.set_xlabel("Dinosaurs per furlong") la.set_color("white") slc.save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/simple_slice_with_multiple_fields.py0000644000175100001770000000046414714401662024762 0ustar00runnerdockerimport yt # Load the dataset ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150") # Create gas density slices of several fields along the x axis simultaneously yt.SlicePlot( ds, "x", [("gas", "density"), ("gas", "temperature"), ("gas", "pressure")], width=(800.0, "kpc"), ).save() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/simple_volume_rendering.py0000644000175100001770000000054114714401662022727 0ustar00runnerdockerimport yt # Load the dataset. ds = yt.load("Enzo_64/DD0043/data0043") # Create a volume rendering, which will determine data bounds, use the first # acceptable field in the field_list, and set up a default transfer function. # This will save a file named 'data0043_Render_density.png' to disk. im, sc = yt.volume_render(ds, field=("gas", "density")) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/simulation_analysis.py0000644000175100001770000000240414714401662022101 0ustar00runnerdockerimport yt yt.enable_parallelism() # Enable parallelism in the script (assuming it was called with # `mpirun -np ` ) yt.enable_parallelism() # By using wildcards such as ? and * with the load command, we can load up a # Time Series containing all of these datasets simultaneously. 
ts = yt.load("enzo_tiny_cosmology/DD????/DD????") # Calculate and store density extrema for all datasets along with redshift # in a data dictionary with entries as tuples # Create an empty dictionary data = {} # Iterate through each dataset in the Time Series (using piter allows it # to happen in parallel automatically across available processors) for ds in ts.piter(): ad = ds.all_data() extrema = ad.quantities.extrema(("gas", "density")) # Fill the dictionary with extrema and redshift information for each dataset data[ds.basename] = (extrema, ds.current_redshift) # Sort dict by keys data = {k: v for k, v in sorted(data.items())} # Print out all the values we calculated. print("Dataset Redshift Density Min Density Max") print("---------------------------------------------------------") for key, val in data.items(): print( "%s %05.3f %5.3g g/cm^3 %5.3g g/cm^3" % (key, val[1], val[0][0], val[0][1]) ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/streamlines.py0000644000175100001770000000234214714401662020341 0ustar00runnerdockerimport matplotlib.pyplot as plt import numpy as np from mpl_toolkits.mplot3d import Axes3D import yt from yt.units import Mpc from yt.visualization.api import Streamlines # Load the dataset ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # Define c: the center of the box, N: the number of streamlines, # scale: the spatial scale of the streamlines relative to the boxsize, # and then pos: the random positions of the streamlines. c = ds.domain_center N = 100 scale = ds.domain_width[0] pos_dx = np.random.random((N, 3)) * scale - scale / 2.0 pos = c + pos_dx # Create streamlines of the 3D vector velocity and integrate them through # the box defined above streamlines = Streamlines( ds, pos, ("gas", "velocity_x"), ("gas", "velocity_y"), ("gas", "velocity_z"), length=1.0 * Mpc, get_magnitude=True, ) streamlines.integrate_through_volume() # Create a 3D plot, trace the streamlines through the 3D volume of the plot fig = plt.figure() ax = Axes3D(fig, auto_add_to_figure=False) fig.add_axes(ax) for stream in streamlines.streamlines: stream = stream[np.all(stream != 0.0, axis=1)] ax.plot3D(stream[:, 0], stream[:, 1], stream[:, 2], alpha=0.1) # Save the plot to disk. plt.savefig("streamlines.png") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/streamlines_isocontour.py0000644000175100001770000000467214714401662022635 0ustar00runnerdockerimport matplotlib.pyplot as plt import numpy as np from mpl_toolkits.mplot3d import Axes3D from mpl_toolkits.mplot3d.art3d import Poly3DCollection import yt from yt.visualization.api import Streamlines # Load the dataset ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # Define c: the center of the box, N: the number of streamlines, # scale: the spatial scale of the streamlines relative to the boxsize, # and then pos: the random positions of the streamlines. 
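# The seed positions below are drawn at random; fixing numpy's RNG first
# makes the figure reproducible from run to run (the seed value 42 is
# arbitrary):
np.random.seed(42)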
c = ds.arr([0.5] * 3, "code_length") N = 30 scale = ds.quan(15, "kpc").in_units("code_length") # 15 kpc in code units pos_dx = np.random.random((N, 3)) * scale - scale / 2.0 pos = c + pos_dx # Create the streamlines from these positions with the velocity fields as the # fields to be traced streamlines = Streamlines( ds, pos, ("gas", "velocity_x"), ("gas", "velocity_y"), ("gas", "velocity_z"), length=1.0, ) streamlines.integrate_through_volume() # Create a 3D matplotlib figure for visualizing the streamlines fig = plt.figure() ax = Axes3D(fig, auto_add_to_figure=False) fig.add_axes(ax) # Trace the streamlines through the volume of the 3D figure for stream in streamlines.streamlines: stream = stream[np.all(stream != 0.0, axis=1)] # Make the colors of each stream vary continuously from blue to red # from low-x to high-x of the stream start position (each color is R, G, B) # can omit and just set streamline colors to a fixed color x_start_pos = ds.arr(stream[0, 0], "code_length") x_start_pos -= ds.arr(0.5, "code_length") x_start_pos /= scale x_start_pos += 0.5 color = np.array([x_start_pos, 0, 1 - x_start_pos]) # Plot the stream in 3D ax.plot3D(stream[:, 0], stream[:, 1], stream[:, 2], alpha=0.3, color=color) # Create a sphere object centered on the highest density point in the simulation # with radius = 1 Mpc sphere = ds.sphere("max", (1.0, "Mpc")) # Identify the isodensity surface in this sphere with density = 1e-24 g/cm^3 surface = ds.surface(sphere, ("gas", "density"), 1e-24) # Color this isodensity surface according to the log of the temperature field colors = yt.apply_colormap(np.log10(surface["gas", "temperature"]), cmap_name="hot") # Render this surface p3dc = Poly3DCollection(surface.triangles, linewidth=0.0) colors = colors[0, :, :] / 255.0 # scale to [0,1] colors[:, 3] = 0.3 # alpha = 0.3 p3dc.set_facecolors(colors) ax.add_collection(p3dc) # Save the figure plt.savefig("streamlines_isocontour.png") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/sum_mass_in_sphere.py0000644000175100001770000000120414714401662021672 0ustar00runnerdockerimport yt # Load the dataset. ds = yt.load("Enzo_64/DD0029/data0029") # Create a 1 Mpc radius sphere, centered on the max density. sp = ds.sphere("max", (1.0, "Mpc")) # Use the total_quantity derived quantity to sum up the # values of the mass and particle_mass fields # within the sphere. 
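# total_quantity also accepts a single field; this standalone call is only
# an illustration and its result is not used below:
gas_mass_only = sp.quantities.total_quantity(("gas", "mass"))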
baryon_mass, particle_mass = sp.quantities.total_quantity( [("gas", "mass"), ("all", "particle_mass")] ) print( "Total mass in sphere is %0.3e Msun (gas = %0.3e Msun, particles = %0.3e Msun)" % ( (baryon_mass + particle_mass).in_units("Msun"), baryon_mass.in_units("Msun"), particle_mass.in_units("Msun"), ) ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/surface_plot.py0000644000175100001770000000274214714401662020505 0ustar00runnerdockerimport matplotlib.pyplot as plt import numpy as np from mpl_toolkits.mplot3d import Axes3D # noqa: F401 from mpl_toolkits.mplot3d.art3d import Poly3DCollection import yt # Load the dataset ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # Create a sphere object centered on the highest density point in the simulation # with radius 1 Mpc sphere = ds.sphere("max", (1.0, "Mpc")) # Identify the isodensity surface in this sphere with density = 1e-24 g/cm^3 surface = ds.surface(sphere, ("gas", "density"), 1e-24) # Color this isodensity surface according to the log of the temperature field colors = yt.apply_colormap(np.log10(surface["gas", "temperature"]), cmap_name="hot") # Create a 3D matplotlib figure for visualizing the surface fig = plt.figure() ax = fig.add_subplot(projection="3d") p3dc = Poly3DCollection(surface.triangles, linewidth=0.0) # Set the surface colors in the right scaling [0,1] p3dc.set_facecolors(colors[0, :, :] / 255.0) ax.add_collection(p3dc) # Let's keep the axis ratio fixed in all directions by taking the maximum # extent in one dimension and make it the bounds in all dimensions max_extent = (surface.vertices.max(axis=1) - surface.vertices.min(axis=1)).max() centers = (surface.vertices.max(axis=1) + surface.vertices.min(axis=1)) / 2 bounds = np.zeros([3, 2]) bounds[:, 0] = centers[:] - max_extent / 2 bounds[:, 1] = centers[:] + max_extent / 2 ax.auto_scale_xyz(bounds[0, :], bounds[1, :], bounds[2, :]) # Save the figure plt.savefig(f"{ds}_Surface.png") ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.239151 yt-4.4.0/doc/source/cookbook/tests/0000755000175100001770000000000014714401715016601 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/tests/test_cookbook.py0000644000175100001770000000226414714401662022025 0ustar00runnerdocker"""Module for cookbook testing This test should be run from main yt directory. 
Example:

$ sed -e '/where/d' -i nose.cfg setup.cfg
$ nosetests doc/source/cookbook/tests/test_cookbook.py -P -v
"""

import subprocess
import sys
from pathlib import Path

BLACKLIST = [
    "matplotlib-animation.py",
]


def test_recipe():
    """Dummy test grabbing all of the cookbook's recipes"""
    COOKBOOK_DIR = Path("doc", "source", "cookbook")
    for fname in sorted(COOKBOOK_DIR.glob("*.py")):
        if fname.name in BLACKLIST:
            continue
        check_recipe.description = f"Testing recipe: {fname.name}"
        yield check_recipe, ["python", str(fname)]


def check_recipe(cmd):
    """Run a single recipe"""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if out:
        sys.stdout.write(out.decode("utf8"))
    if err:
        sys.stderr.write(err.decode("utf8"))

    if proc.returncode != 0:
        retstderr = " ".join(cmd)
        retstderr += "\n\nTHIS IS THE REAL CAUSE OF THE FAILURE:\n"
        retstderr += err.decode("UTF-8") + "\n"
        raise subprocess.CalledProcessError(proc.returncode, retstderr)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/thin_slice_projection.py0000644000175100001770000000215314714401662022370 0ustar00runnerdockerimport yt

# Load the dataset.
ds = yt.load("Enzo_64/DD0030/data0030")

# Make a projection that is the full width of the domain,
# but only 5 Mpc in depth.  This is done by creating a
# region object with this exact geometry and providing it
# as a data_source for the projection.

# Center on the domain center
center = ds.domain_center.copy()

# First make the left and right corner of the region based
# on the full domain.
left_corner = ds.domain_left_edge.copy()
right_corner = ds.domain_right_edge.copy()

# Now adjust the size of the region along the line of sight (x axis).
depth = ds.quan(5.0, "Mpc")
left_corner[0] = center[0] - 0.5 * depth
right_corner[0] = center[0] + 0.5 * depth

# Create the region
region = ds.box(left_corner, right_corner)

# Create a density projection and supply the region we have just created.
# Only cells within the region will be included in the projection.
# Try with another data container, like a sphere or disk.
plot = yt.ProjectionPlot(
    ds, "x", ("gas", "density"), weight_field=("gas", "density"), data_source=region
)

# Save the image, using the argument as the filename prefix.
plot.save("Thin_Slice")
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/time_series.py0000644000175100001770000000324414714401662020325 0ustar00runnerdockerimport matplotlib.pyplot as plt
import numpy as np

import yt

# Enable parallelism in the script (assuming it was called with
# `mpirun -np ` )
yt.enable_parallelism()

# By using wildcards such as ? and * with the load command, we can load up a
# Time Series containing all of these datasets simultaneously.
# The "entropy" field that we will use below depends on the electron number
# density, which is not in these datasets by default, so we assume full
# ionization using the "default_species_fields" kwarg.
ts = yt.load(
    "GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0*",
    default_species_fields="ionized",
)

storage = {}

# By using the piter() function, we can iterate on every dataset in
# the TimeSeries object.  By using the storage keyword, we can populate
# a dictionary where the dataset is the key, and store.result is the value
# for later use when the loop is complete.

# The serial equivalent of piter() here is just "for ds in ts:" .
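# When this script runs under MPI, each process handles its own subset of the
# datasets, and the storage dictionary is combined across all processes once
# the loop completes, so every process ends up with the full set of results.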
for store, ds in ts.piter(storage=storage):
    # Create a sphere of radius 100 kpc at the center of the dataset volume
    sphere = ds.sphere("c", (100.0, "kpc"))
    # Calculate the entropy within that sphere
    entr = sphere["gas", "entropy"].sum()
    # Store the current time and sphere entropy for this dataset in our
    # storage dictionary as a tuple
    store.result = (ds.current_time.in_units("Gyr"), entr)

# Convert the storage dictionary values to an Nx2 array, so they can be easily
# plotted
arr = np.array(list(storage.values()))

# Plot the results: time versus entropy
plt.semilogy(arr[:, 0], arr[:, 1], "r-")
plt.xlabel("Time (Gyr)")
plt.ylabel("Entropy (ergs/K)")
plt.savefig("time_versus_entropy.png")
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/time_series_profiles.py0000644000175100001770000000155414714401662022232 0ustar00runnerdockerimport yt

# Create a time-series object.
sim = yt.load_simulation("enzo_tiny_cosmology/32Mpc_32.enzo", "Enzo")
sim.get_time_series(redshifts=[5, 4, 3, 2, 1, 0])

# Lists to hold profiles, labels, and plot specifications.
profiles = []
labels = []
plot_specs = []

# Loop over each dataset in the time-series.
for ds in sim:
    # Create a data container to hold the whole dataset.
    ad = ds.all_data()
    # Create a 1d profile of density vs. temperature.
    profiles.append(
        yt.create_profile(ad, [("gas", "density")], fields=[("gas", "temperature")])
    )
    # Add labels and linestyles.
    labels.append(f"z = {ds.current_redshift:.2f}")
    plot_specs.append(dict(linewidth=2, alpha=0.7))

# Create the profile plot from the list of profiles.
plot = yt.ProfilePlot.from_profiles(profiles, labels=labels, plot_specs=plot_specs)

# Save the image.
plot.save()
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/tipsy_and_yt.ipynb0000644000175100001770000001161314714401662021213 0ustar00runnerdocker{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Loading Tipsy Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Alright, let's start with some basics. Before we do anything, we will need to load a snapshot. You can do this using the ```load_sample``` convenience function. yt will autodetect that you want a tipsy snapshot and download it from the yt hub." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import yt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will be looking at a fairly low resolution dataset." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ">This dataset is available for download at https://yt-project.org/data/TipsyGalaxy.tar.gz (10 MB)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ds = yt.load_sample(\"TipsyGalaxy\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now have a `TipsyDataset` object called `ds`. Let's see what fields it has." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ds.field_list" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "yt also defines so-called \"derived\" fields. These fields are functions of the on-disk fields that live in the `field_list`.
There is a `derived_field_list` attribute attached to the `Dataset` object - let's take a look at the derived fields in this dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ds.derived_field_list" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All of the fields in the `field_list` are arrays containing the values for the associated particles. These haven't been smoothed or gridded in any way. We can grab the array-data for these particles using `ds.all_data()`. For example, let's take a look at a temperature-colored scatterplot of the gas particles in this output." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "import numpy as np" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ad = ds.all_data()\n", "xcoord = ad[\"Gas\", \"Coordinates\"][:, 0].v\n", "ycoord = ad[\"Gas\", \"Coordinates\"][:, 1].v\n", "logT = np.log10(ad[\"Gas\", \"Temperature\"])\n", "plt.scatter(\n", "    xcoord, ycoord, c=logT, s=2 * logT, marker=\"o\", edgecolor=\"none\", vmin=2, vmax=6\n", ")\n", "plt.xlim(-20, 20)\n", "plt.ylim(-20, 20)\n", "cb = plt.colorbar()\n", "cb.set_label(r\"$\\log_{10}$ Temperature\")\n", "plt.gcf().set_size_inches(15, 10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Making Smoothed Images" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "yt will automatically generate smoothed versions of these fields that you can use to plot. Let's make a temperature slice and a density projection." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "yt.SlicePlot(ds, \"z\", (\"gas\", \"density\"), width=(40, \"kpc\"), center=\"m\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "yt.ProjectionPlot(ds, \"z\", (\"gas\", \"density\"), width=(40, \"kpc\"), center=\"m\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Not only are the values in the tipsy snapshot read and automatically smoothed, the auxiliary files that have physical significance are also smoothed. Let's look at a slice of the iron mass fraction." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "yt.SlicePlot(ds, \"z\", (\"gas\", \"Fe_fraction\"), width=(40, \"kpc\"), center=\"m\")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.1" } }, "nbformat": 4, "nbformat_minor": 0 }././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/various_lens.py0000644000175100001770000001111214714401662020517 0ustar00runnerdockerimport numpy as np

import yt
from yt.visualization.volume_rendering.api import Scene, create_volume_source

field = ("gas", "density")

# normal_vector points from camera to the center of the final projection.
# Now we look at the positive x direction.
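# (The direction vector does not need to be normalized; the camera normalizes
# it internally when the orientation is set.)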
normal_vector = [1.0, 0.0, 0.0]
# north_vector defines the "top" direction of the projection, which is
# the positive z direction here.
north_vector = [0.0, 0.0, 1.0]

# Follow the simple_volume_rendering cookbook for the first part of this.
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
sc = Scene()
vol = create_volume_source(ds, field=field)
tf = vol.transfer_function
tf.grey_opacity = True

# Plane-parallel lens
cam = sc.add_camera(ds, lens_type="plane-parallel")
# Set the resolution of the final projection.
cam.resolution = [250, 250]
# Set the location of the camera to be (x=0.2, y=0.5, z=0.5)
# For the plane-parallel lens, the location info along the normal_vector (here
# x=0.2) is ignored.
cam.position = ds.arr(np.array([0.2, 0.5, 0.5]), "code_length")
# Set the orientation of the camera.
cam.switch_orientation(normal_vector=normal_vector, north_vector=north_vector)
# Set the width of the camera, where width[0] and width[1] specify the length and
# height of the final projection, while width[2] in the plane-parallel lens is not used.
cam.set_width(ds.domain_width * 0.5)
sc.add_source(vol)
sc.save("lens_plane-parallel.png", sigma_clip=6.0)

# Perspective lens
cam = sc.add_camera(ds, lens_type="perspective")
cam.resolution = [250, 250]
# Standing at (x=0.2, y=0.5, z=0.5), we look at the area of x>0.2 (with some open angle
# specified by camera width) along the positive x direction.
cam.position = ds.arr([0.2, 0.5, 0.5], "code_length")
cam.switch_orientation(normal_vector=normal_vector, north_vector=north_vector)
# Set the width of the camera, where width[0] and width[1] specify the length and
# height of the final projection, while width[2] specifies the distance between the
# camera and the final image.
cam.set_width(ds.domain_width * 0.5)
sc.add_source(vol)
sc.save("lens_perspective.png", sigma_clip=6.0)

# Stereo-perspective lens
cam = sc.add_camera(ds, lens_type="stereo-perspective")
# Set the size ratio of the final projection to be 2:1, since the stereo-perspective
# lens generates the final image with the left-eye and right-eye views joined together.
cam.resolution = [500, 250]
cam.position = ds.arr([0.2, 0.5, 0.5], "code_length")
cam.switch_orientation(normal_vector=normal_vector, north_vector=north_vector)
cam.set_width(ds.domain_width * 0.5)
# Set the distance between the left eye and the right eye.
cam.lens.disparity = ds.domain_width[0] * 1.0e-3
sc.add_source(vol)
sc.save("lens_stereo-perspective.png", sigma_clip=6.0)

# Fisheye lens
dd = ds.sphere(ds.domain_center, ds.domain_width[0] / 10)
cam = sc.add_camera(dd, lens_type="fisheye")
cam.resolution = [250, 250]
v, c = ds.find_max(field)
cam.set_position(c - 0.0005 * ds.domain_width)
cam.switch_orientation(normal_vector=normal_vector, north_vector=north_vector)
cam.set_width(ds.domain_width)
cam.lens.fov = 360.0
sc.add_source(vol)
sc.save("lens_fisheye.png", sigma_clip=6.0)

# Spherical lens
cam = sc.add_camera(ds, lens_type="spherical")
# Set the size ratio of the final projection to be 2:1, since the spherical lens
# generates the final image with a length of 2*pi and a height of pi.
# Recommended resolution for YouTube 360-degree videos is [3840, 2160]
cam.resolution = [500, 250]
# Standing at (x=0.4, y=0.5, z=0.5), we look in all the radial directions
# from this point in spherical coordinates.
cam.position = ds.arr([0.4, 0.5, 0.5], "code_length")
cam.switch_orientation(normal_vector=normal_vector, north_vector=north_vector)
# In the (stereo)spherical camera, the camera width is not used since the
# entire volume will be rendered.
sc.add_source(vol)
sc.save("lens_spherical.png", sigma_clip=6.0)

# Stereo-spherical lens
cam = sc.add_camera(ds, lens_type="stereo-spherical")
# Set the size ratio of the final projection to be 1:1, since the stereo-spherical
# lens generates the final image with the left-eye and right-eye views joined
# together, with the left-eye image on top and the right-eye image on the bottom.
# Recommended resolution for YouTube virtual reality videos is [3840, 2160]
cam.resolution = [500, 500]
cam.position = ds.arr([0.4, 0.5, 0.5], "code_length")
cam.switch_orientation(normal_vector=normal_vector, north_vector=north_vector)
# In the (stereo)spherical camera, the camera width is not used since the
# entire volume will be rendered.
# Set the distance between the left eye and the right eye.
cam.lens.disparity = ds.domain_width[0] * 1.0e-3
sc.add_source(vol)
sc.save("lens_stereo-spherical.png", sigma_clip=6.0)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/velocity_vectors_on_slice.py0000644000175100001770000000033714714401662023273 0ustar00runnerdockerimport yt

# Load the dataset.
ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
p = yt.SlicePlot(ds, "x", ("gas", "density"))

# Draw a velocity vector every 16 pixels.
p.annotate_velocity(factor=16)
p.save()
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/vol-annotated.py0000644000175100001770000000167214714401662020573 0ustar00runnerdockerimport yt

ds = yt.load("Enzo_64/DD0043/data0043")

sc = yt.create_scene(ds, lens_type="perspective")

source = sc[0]
source.set_field(("gas", "density"))
source.set_log(True)

# Set up the camera parameters: focus, width, resolution, and image orientation
sc.camera.focus = ds.domain_center
sc.camera.resolution = 1024
sc.camera.north_vector = [0, 0, 1]
sc.camera.position = [1.7, 1.7, 1.7]

# You may need to adjust the alpha values to get an image with good contrast.
# For the annotate_domain call, the fourth value in the color tuple is the
# alpha value.
sc.annotate_axes(alpha=0.02)
sc.annotate_domain(ds, color=[1, 1, 1, 0.01])

text_string = f"T = {float(ds.current_time.to('Gyr'))} Gyr"

# save an annotated version of the volume rendering including a representation
# of the transfer function and a nice label showing the simulation time.
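# Each entry in text_annotate below is [position, string]; the position is
# given in normalized (0 to 1) image coordinates, so (0.1, 0.95) lands near
# the upper-left corner.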
sc.save_annotated( "vol_annotated.png", sigma_clip=6, text_annotate=[[(0.1, 0.95), text_string]] ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/vol-lines.py0000644000175100001770000000072514714401662017726 0ustar00runnerdockerimport numpy as np import yt from yt.units import kpc from yt.visualization.volume_rendering.api import LineSource ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") sc = yt.create_scene(ds) np.random.seed(1234567) nlines = 50 vertices = (np.random.random([nlines, 2, 3]) - 0.5) * 200 * kpc colors = np.random.random([nlines, 4]) colors[:, 3] = 0.1 lines = LineSource(vertices, colors) sc.add_source(lines) sc.camera.width = 300 * kpc sc.save(sigma_clip=4.0) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/vol-points.py0000644000175100001770000000114014714401662020120 0ustar00runnerdockerimport numpy as np import yt from yt.units import kpc from yt.visualization.volume_rendering.api import PointSource ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") sc = yt.create_scene(ds) np.random.seed(1234567) npoints = 1000 # Random particle positions vertices = np.random.random([npoints, 3]) * 200 * kpc # Random colors colors = np.random.random([npoints, 4]) # Set alpha value to something that produces a good contrast with the volume # rendering colors[:, 3] = 0.1 points = PointSource(vertices, colors=colors) sc.add_source(points) sc.camera.width = 300 * kpc sc.save(sigma_clip=5) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/yt_gadget_analysis.ipynb0000644000175100001770000001425314714401662022362 0ustar00runnerdocker{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Loading Gadget data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First we set up our imports:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n", "\n", "import yt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First we load the data set, specifying both the unit length/mass/velocity, as well as the size of the bounding box (which should encapsulate all the particles in the data set)\n", "\n", "At the end, we flatten the data into \"ad\" in case we want access to the raw simulation data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ">This dataset is available for download at https://yt-project.org/data/GadgetDiskGalaxy.tar.gz (430 MB)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fname = \"GadgetDiskGalaxy/snapshot_200.hdf5\"\n", "\n", "unit_base = {\n", "    \"UnitLength_in_cm\": 3.08568e21,\n", "    \"UnitMass_in_g\": 1.989e43,\n", "    \"UnitVelocity_in_cm_per_s\": 100000,\n", "}\n", "\n", "bbox_lim = 1e5  # kpc\n", "\n", "bbox = [[-bbox_lim, bbox_lim], [-bbox_lim, bbox_lim], [-bbox_lim, bbox_lim]]\n", "\n", "ds = yt.load(fname, unit_base=unit_base, bounding_box=bbox)\n", "ds.index\n", "ad = ds.all_data()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's make a projection plot to look at the entire volume" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "px = yt.ProjectionPlot(ds, \"x\", (\"gas\", \"density\"))\n", "px.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's print some quantities about the domain, as well as the physical properties of the simulation\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(\"left edge: \", ds.domain_left_edge)\n", "print(\"right edge: \", ds.domain_right_edge)\n", "print(\"center: \", ds.domain_center)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also see the fields that are available to query in the dataset" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "sorted(ds.field_list)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's create a data object that represents the full simulation domain, and find the total mass in gas and dark matter particles contained in it:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ad = ds.all_data()\n", "\n", "# total_mass returns a list, representing the total gas and dark matter + stellar mass, respectively\n", "print([tm.in_units(\"Msun\") for tm in ad.quantities.total_mass()])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's say we want to zoom in on the box (since clearly the bounding box we chose initially is much larger than the volume containing the gas particles!), and center on wherever the highest gas density peak is.
First, let's find this peak:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "density = ad[\"PartType0\", \"density\"]\n", "wdens = np.where(density == np.max(density))\n", "coordinates = ad[\"PartType0\", \"Coordinates\"]\n", "center = coordinates[wdens][0]\n", "print(\"center = \", center)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Set up the box to zoom into" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "new_box_size = ds.quan(250, \"code_length\")\n", "\n", "left_edge = center - new_box_size / 2\n", "right_edge = center + new_box_size / 2\n", "\n", "print(new_box_size.in_units(\"Mpc\"))\n", "print(left_edge.in_units(\"Mpc\"))\n", "print(right_edge.in_units(\"Mpc\"))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ad2 = ds.region(center=center, left_edge=left_edge, right_edge=right_edge)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using this new data object, let's confirm that we're only looking at a subset of the domain by first calculating the total mass in gas and particles contained in the subvolume:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print([tm.in_units(\"Msun\") for tm in ad2.quantities.total_mass()])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And then by visualizing what the new zoomed region looks like" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "px = yt.ProjectionPlot(ds, \"x\", (\"gas\", \"density\"), center=center, width=new_box_size)\n", "px.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Cool - there's a disk galaxy there!" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.1" } }, "nbformat": 4, "nbformat_minor": 0 }././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/yt_gadget_owls_analysis.ipynb0000644000175100001770000001220714714401662023423 0ustar00runnerdocker{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Loading Gadget OWLS Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The first thing you will need to run these examples is a working installation of yt. The author of these examples followed the instructions under \"Get yt: from source\" at https://yt-project.org/ to install an up-to-date development version of yt.\n", "\n", "We will be working with an OWLS snapshot: snapshot_033" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we will tell the notebook that we want figures produced inline.
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Loading" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import yt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we will load the snapshot." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ds = yt.load_sample(\"snapshot_033\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Set a ``YTRegion`` that contains all the data." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ad = ds.all_data()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Inspecting " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The dataset can tell us what fields it knows about, " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ds.field_list" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ds.derived_field_list" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that the ion fields follow the naming convention described in YTEP-0003 http://ytep.readthedocs.org/en/latest/YTEPs/YTEP-0003.html#molecular-and-atomic-species-names" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Accessing Particle Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The raw particle data can be accessed using the particle types. This corresponds directly with what is in the hdf5 snapshots. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ad[\"PartType0\", \"Coordinates\"]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ad[\"PartType4\", \"IronFromSNIa\"]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ad[\"PartType1\", \"ParticleIDs\"]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ad[\"PartType0\", \"Hydrogen\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Projection Plots" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The projection plots make use of derived fields that store the smoothed particle data (particles smoothed onto an oct-tree). Below we make a projection of all hydrogen gas followed by only the neutral hydrogen gas. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "pz = yt.ProjectionPlot(ds, \"z\", (\"gas\", \"H_density\"))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "pz.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "pz = yt.ProjectionPlot(ds, \"z\", (\"gas\", \"H_p0_density\"))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "pz.show()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.1" } }, "nbformat": 4, "nbformat_minor": 0 } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/cookbook/zoomin_frames.py0000644000175100001770000000233514714401662020665 0ustar00runnerdockerimport numpy as np import yt # load data fn = "IsolatedGalaxy/galaxy0030/galaxy0030" ds = yt.load(fn) # This is the number of frames to make -- below, you can see how this is used. n_frames = 5 # This is the minimum size in smallest_dx of our last frame. # Usually it should be set to something like 400, but for THIS # dataset, we actually don't have that great of resolution. min_dx = 40 frame_template = "frame_%05i" # Template for frame filenames p = yt.SlicePlot(ds, "z", ("gas", "density")) # Add our slice, along z p.annotate_contour(("gas", "temperature")) # We'll contour in temperature # What we do now is a bit fun. "enumerate" returns a tuple for every item -- # the index of the item, and the item itself. This saves us having to write # something like "i = 0" and then inside the loop "i += 1" for ever loop. The # argument to enumerate is the 'logspace' function, which takes a minimum and a # maximum and the number of items to generate. It returns 10^power of each # item it generates. for i, v in enumerate( np.logspace(0, np.log10(ds.index.get_smallest_dx() * min_dx), n_frames) ): # We set our width as necessary for this frame p.set_width(v, "unitary") # save p.save(frame_template % (i)) ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.239151 yt-4.4.0/doc/source/developing/0000755000175100001770000000000014714401715015765 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/developing/building_the_docs.rst0000644000175100001770000002176314714401662022176 0ustar00runnerdocker.. _documentation: Documentation ============= .. _writing_documentation: How to Write Documentation -------------------------- Writing documentation is one of the most important but often overlooked tasks for increasing yt's impact in the community. It is the way in which the world will understand how to use our code, so it needs to be done concisely and understandably. Typically, when a developer submits some piece of code with new functionality, the developer should also include documentation on how to use that functionality (as per :ref:`requirements-for-code-submission`). 
Depending on the nature of the code addition, this could be a new narrative docs
section describing how the new code works and how to use it; it could include a
recipe in the cookbook section; or it could simply be a note added in the relevant
docs text somewhere.

The documentation exists in the main code repository for yt in the ``doc``
directory (i.e. ``$YT_GIT/doc/source`` where ``$YT_GIT`` is the path of the yt
git repository). It is organized hierarchically into the main categories of:

* Visualizing
* Analyzing
* Analysis Modules
* Examining
* Cookbook
* Quickstart
* Developing
* Reference
* FAQ
* Help

You will have to figure out where your new/modified doc fits into this, but
browsing through the existing documentation is a good way to sort that out.

All the source for the documentation is written in `Sphinx `_, which uses ReST
for markup. ReST is very straightforward to mark up in a text editor, and if you
are new to it, we recommend just using other .rst files in the existing yt
documentation as templates or checking out the `ReST reference documentation `_.

New cookbook recipes (see :ref:`cookbook`) are very helpful for the community as
they provide simple annotated recipes on how to use specific functionality. To
add one, create a concise Python script which demonstrates some functionality
and pare it down to its minimum. Add some comment lines to describe what it is
that you're doing along the way. Place this ``.py`` file in the
``source/cookbook/`` directory, and then link to it explicitly in one of the
relevant ``.rst`` files in that directory (e.g. ``complex_plots.rst``, etc.),
and add some description of what the script actually does. We recommend that
you use one of the `sample data sets `_ in your recipe. When the full docs are
built, each of the cookbook recipes is executed dynamically on a system which
has access to all of the sample datasets. Any output images generated by your
script will then be attached inline in the built documentation directly
following your script.

After you have made your modifications to the docs, you will want to make sure
that they render the way you expect them to. For more information on this, see
the section on :ref:`docs_build`. Unless you're contributing cookbook recipes
or notebooks which require a dynamic build, you can probably get away with just
doing a 'quick' docs build.

When you have completed your documentation additions, commit your changes to
your repository and make a pull request in the same way you would contribute a
change to the codebase, as described in the section on :ref:`sharing-changes`.

.. _docs_build:

Building the Documentation
--------------------------

The yt documentation makes heavy use of the Sphinx documentation automation
suite. Sphinx, written in Python, was originally created for the documentation
of the Python project and has many nice capabilities for managing the
documentation of Python code.

While much of the yt documentation is static text, we make heavy use of
cross-referencing with API documentation that is automatically generated at
build time by Sphinx. We also use Sphinx to run code snippets (e.g. the
cookbook and the notebooks) and embed resulting images and example data.

Essential tools for building the docs can be installed alongside yt itself.
From the top level of a local copy, run

.. code-block:: bash

   $ python -m pip install -e . -r requirements/docs.txt

Quick versus Full Documentation Builds
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Building the entire set of yt documentation is a laborious task, since you need
a large number of packages in order to successfully execute and render all of
the notebooks and yt recipes drawing from every corner of the yt source. As a
quick alternative, one can do a ``quick`` build of the documentation, which
eschews the need for downloading all of these dependencies, but only produces
the static docs. The static docs do not include the cookbook outputs and the
notebooks, but this is good enough for most cases where people are testing
whether or not their documentation contributions look OK before submitting them
to the yt repository.

If you want to create the full documentation locally, then you'll need to
follow the instructions for building the ``full`` docs, so that you can
dynamically execute and render the cookbook recipes, the notebooks, etc.

Building the Docs (Quick)
^^^^^^^^^^^^^^^^^^^^^^^^^

In order to tell Sphinx not to do all of the dynamic building, you must set the
``$READTHEDOCS`` environment variable to ``True``. From the command line (using
bash syntax, for example), run

.. code-block:: bash

   export READTHEDOCS=True

This variable is set for automated builds on the free ReadTheDocs service but
can be used by anyone to force a quick, minimal build.

Now all you need to do is execute Sphinx on the yt doc source. Go to the
documentation directory and build the docs:

.. code-block:: bash

   cd $YT_GIT/doc
   make html

This will produce an html version of the documentation locally in the
``$YT_GIT/doc/build/html`` directory. You can now go there and open up
``index.html`` or whatever file you wish in your web browser.

Building the Docs (Full)
^^^^^^^^^^^^^^^^^^^^^^^^

As alluded to earlier, building the full documentation is a bit more involved
than simply building the static documentation. The full documentation makes
heavy use of custom Sphinx extensions to transform recipes, notebooks, and
inline code snippets into Python scripts, IPython_ notebooks, or notebook cells
that are executed when the docs are built. To do this, we use Jupyter's
nbconvert module to transform notebooks into HTML. To simplify versioning of
the notebook JSON format, we store notebooks in an unevaluated state.

To build the full documentation, you will need yt, jupyter, and all
dependencies needed for yt's analysis modules installed. The following
dependencies were used to generate the yt documentation during the release of
yt 3.2 in 2015.

* Sphinx_ 1.3.1
* Jupyter 1.0.0
* RunNotebook 0.1
* pandoc_ 1.13.2
* Rockstar halo finder 0.99.6
* SZpack_ 1.1.1
* ffmpeg_ 2.7.1 (compiled with libvpx support)
* Astropy_ 0.4.4

.. _SZpack: http://www.jb.man.ac.uk/~jchluba/Science/SZpack/SZpack.html
.. _Astropy: https://www.astropy.org/
.. _Sphinx: http://www.sphinx-doc.org/en/master/
.. _pandoc: https://pandoc.org/
.. _ffmpeg: http://www.ffmpeg.org/
.. _IPython: https://ipython.org/

You will also need the full yt suite of `yt test data `_, including the larger
datasets that are not used in the answer tests. You will need to ensure that
your testing setup is properly configured and that all of the yt test data is
in the testing directory. See :ref:`run_answer_testing` for more details on how
to set up the testing configuration.

Now that you have everything set up properly, go to the documentation directory
and build it using Sphinx:

.. code-block:: bash

   cd $YT_GIT/doc
   make html

If all of the dependencies are installed and all of the test data is in the
testing directory, this should churn away for a while (several hours) and
eventually generate a docs build. We suggest setting
:code:`suppress_stream_logging = True` in your yt configuration (see
:ref:`configuration-file`) to suppress large amounts of debug output from yt.

To clean the docs build, use :code:`make clean`.

Building the Docs (Hybrid)
^^^^^^^^^^^^^^^^^^^^^^^^^^

It's also possible to create a custom Sphinx build that builds a restricted set
of notebooks or scripts. This can be accomplished by editing the Sphinx
:code:`conf.py` file included in the :code:`source` directory at the top level
of the docs. The extensions included in the build are contained in the
:code:`extensions` list. To disable an extension, simply remove it from the
list. Doing so will raise a warning when Sphinx encounters the directive in the
docs and will prevent Sphinx from evaluating the directive.

As a concrete example, if one wanted to include the :code:`notebook` and
:code:`notebook-cell` directives, but not the :code:`python-script` or
:code:`autosummary` directives, one would just need to comment out the lines
that append these extensions to the :code:`extensions` list. The resulting docs
build will be significantly quicker since it would avoid executing the lengthy
API autodocumentation as well as a large number of Python script snippets in
the narrative docs.
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/developing/creating_datatypes.rst0000644000175100001770000000511414714401662022373 0ustar00runnerdocker.. _creating-objects:

Creating Data Objects
=====================

The three-dimensional datatypes in yt follow a fairly simple protocol. The
basic principle is that if you want to define a region in space, that region
must be identifiable from some sort of cut applied against the cells --
typically, in yt, this is done by examining the geometry.

Creating a new data object requires modifications to two different files, one
of which is in Python and the other in Cython. First, a subclass of
:class:`~yt.data_objects.data_containers.YTDataContainer` must be defined;
typically you actually want to subclass one of
:class:`~yt.data_objects.data_containers.YTSelectionContainer0D`,
:class:`~yt.data_objects.data_containers.YTSelectionContainer1D`,
:class:`~yt.data_objects.data_containers.YTSelectionContainer2D`, or
:class:`~yt.data_objects.data_containers.YTSelectionContainer3D`.

The following attributes must be defined:

* ``_type_name`` - this is the short name by which the object type will be
  known. Remember this for later, as we will have to use it when defining the
  underlying selector.
* ``_con_args`` - this is the set of arguments passed to the object, and their
  names as attributes on the data object.
* ``_container_fields`` - any fields that are generated by the object, rather
  than by another derived field in yt.

The rest of the object can be defined in Cython, in the file
``yt/geometry/selection_routines.pyx``. You must define a subclass of
``SelectorObject``, which will require implementation of the following methods:

* ``fill_mask`` - this takes a grid object and fills a mask of which zones
  should be included. It must take into account the child mask of the grid.
* ``select_cell`` - this routine accepts a position and a width, and returns
  either zero or one for whether or not that cell is included in the selector.
* ``select_sphere`` - this routine returns zero or one for whether a sphere
  (point and radius) is included in the selector.
* ``select_point`` - this identifies whether or not a point is included in the
  selector. It should be identical to selecting a cell or a sphere with zero
  extent.
* ``select_bbox`` - this returns whether or not a bounding box (i.e., grid) is
  included in the selector.
* ``_hash_vals`` - this must return some combination of parameters that
  semi-uniquely identifies the selector.

Once the object has been defined, it must then be aliased within
``selection_routines.pyx`` as ``typename_selector``. For instance,
``ray_selector`` or ``sphere_selector`` for ``_type_name`` values of ``ray``
and ``sphere``, respectively.
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/developing/creating_derived_fields.rst0000644000175100001770000003771114714401662023355 0ustar00runnerdocker.. _creating-derived-fields:

Creating Derived Fields
=======================

One of the more powerful means of extending yt is through the usage of derived
fields. These are fields that describe a value at each cell in a simulation.

Defining a New Field
--------------------

Once a new field has been conceived of, the best way to create it is to
construct a function that performs an array operation -- operating on a
collection of data, neutral to its size, shape, and type.

A simple example of this is the pressure field, which demonstrates the ease of
this approach.

.. code-block:: python

    import yt


    def _pressure(field, data):
        return (
            (data.ds.gamma - 1.0)
            * data["gas", "density"]
            * data["gas", "specific_thermal_energy"]
        )

Note that we do a couple of different things here. We access the ``gamma``
parameter from the dataset, we access the ``density`` field, and we access the
``specific_thermal_energy`` field. ``specific_thermal_energy`` is, in fact,
another derived field! We don't do any loops, we don't do any type-checking, we
can simply multiply the three items together.

In this example, the ``density`` field will return data with units of
``g/cm**3`` and the ``specific_thermal_energy`` field will return data with
units of ``erg/g``, so the result will automatically have units of pressure,
``erg/cm**3``. This assumes the unit system is set to the default, which is
CGS: if a different unit system is selected, the result will be in the same
dimensions of pressure but different units. See :ref:`units` for more
information.

Once we've defined our function, we need to notify yt that the field is
available. The :func:`add_field` function is the means of doing this; it has a
number of fairly specific parameters that can be passed in, but here we'll only
look at the most basic ones needed for a simple scalar baryon field.

.. note::

   There are two different :func:`add_field` functions. For the differences,
   see :ref:`faq-add-field-diffs`.

.. code-block:: python

    yt.add_field(
        name=("gas", "pressure"),
        function=_pressure,
        sampling_type="local",
        units="dyne/cm**2",
    )

We feed it the name of the field, the name of the function, the sampling type,
and the units. The ``sampling_type`` keyword determines which elements are used
to make the field (i.e., grid cells or particles) and controls how volume is
calculated. It can be set to "cell" for grid/mesh fields, "particle" for
particle and SPH fields, or "local" to use the primary format of the loaded
dataset.
In most cases, "local" is sufficient, but "cell" and "particle" can be used to specify the source for datasets that have both grids and particles. In a dataset with both grids and particles, using "cell" will ensure a field is created with a value for every grid cell, while using "particle" will result in a field with a value for every particle. The units parameter is a "raw" string, in the format that yt uses in its :ref:`symbolic units implementation ` (e.g., employing only unit names, numbers, and mathematical operators in the string, and using ``"**"`` for exponentiation). For cosmological datasets and fields, see :ref:`cosmological-units `. We suggest that you name the function that creates a derived field with the intended field name prefixed by a single underscore, as in the ``_pressure`` example above. Field definitions return array data with units. If the field function returns data in a dimensionally equivalent unit (e.g. a ``"dyne"`` versus a ``"N"``), the field data will be converted to the units specified in ``add_field`` before being returned in a data object selection. If the field function returns data with dimensions that are incompatible with units specified in ``add_field``, you will see an error. To clear this error, you must ensure that your field function returns data in the correct units. Often, this means applying units to a dimensionless float or array. If your field definition includes physical constants rather than defining a constant as a float, you can import it from ``yt.units`` to get a predefined version of the constant with the correct units. If you know the units your data is supposed to have ahead of time, you can also import unit symbols like ``g`` or ``cm`` from the ``yt.units`` namespace and multiply the return value of your field function by the appropriate combination of unit symbols for your field's units. You can also convert floats or NumPy arrays into :class:`~yt.units.yt_array.YTArray` or :class:`~yt.units.yt_array.YTQuantity` instances by making use of the :func:`~yt.data_objects.static_output.Dataset.arr` and :func:`~yt.data_objects.static_output.Dataset.quan` convenience functions. Lastly, if you do not know the units of your field ahead of time, you can specify ``units='auto'`` in the call to ``add_field`` for your field. This will automatically determine the appropriate units based on the units of the data returned by the field function. This is also a good way to let your derived fields be automatically converted to the units of the unit system in your dataset. If ``units='auto'`` is set, it is also required to set the ``dimensions`` keyword argument so that error-checking can be done on the derived field to make sure that the dimensionality of the returned array and the field are the same: .. code-block:: python import yt from yt.units import dimensions def _pressure(field, data): return ( (data.ds.gamma - 1.0) * data["gas", "density"] * data["gas", "specific_thermal_energy"] ) yt.add_field( ("gas", "pressure"), function=_pressure, sampling_type="local", units="auto", dimensions=dimensions.pressure, ) If ``dimensions`` is not set, an error will be thrown. The ``dimensions`` keyword can be a SymPy ``symbol`` object imported from ``yt.units.dimensions``, a compound dimension of these, or a string corresponding to one of these objects. :func:`add_field` can be invoked in two other ways. The first is by the function decorator :func:`derived_field`. The following code is equivalent to the previous example: .. 
code-block:: python from yt import derived_field @derived_field(name="pressure", sampling_type="cell", units="dyne/cm**2") def _pressure(field, data): return ( (data.ds.gamma - 1.0) * data["gas", "density"] * data["gas", "specific_thermal_energy"] ) The :func:`derived_field` decorator takes the same arguments as :func:`add_field`, and is often a more convenient shorthand in cases where you want to quickly set up a new field. Defining derived fields in the above fashion must be done before a dataset is loaded, in order for the dataset to recognize it. If you want to set up a derived field after you have loaded a dataset, or if you only want to set up a derived field for a particular dataset, there is an :func:`~yt.data_objects.static_output.Dataset.add_field` method that hangs off dataset objects. The calling syntax is the same: .. code-block:: python ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100") ds.add_field( ("gas", "pressure"), function=_pressure, sampling_type="cell", units="dyne/cm**2", ) If you specify fields in this way, you can take advantage of the dataset's unit system to define the units for you, so that the units will be returned in the units of that system: .. code-block:: python ds.add_field( ("gas", "pressure"), function=_pressure, sampling_type="cell", units=ds.unit_system["pressure"], ) Since the :class:`yt.units.unit_systems.UnitSystem` object returns a :class:`yt.units.unit_object.Unit` object when queried, you're not limited to specifying units in terms of those already available. You can specify units for fields using basic arithmetic if necessary: .. code-block:: python ds.add_field( ("gas", "my_acceleration"), function=_my_acceleration, sampling_type="cell", units=ds.unit_system["length"] / ds.unit_system["time"] ** 2, ) If you find yourself using the same custom-defined fields over and over, you should put them in your plugins file as described in :ref:`plugin-file`. A More Complicated Example -------------------------- But what if we want to do something a bit more fancy? Here's an example of getting parameters from the data object and using those to define the field; specifically, here we obtain the ``center`` and ``bulk_velocity`` parameters and use those to define a field for radial velocity (there is already a ``radial_velocity`` field in yt, but we create this one here just as a transparent and simple example). .. 
code-block:: python

    import numpy as np

    from yt.fields.api import ValidateParameter


    def _my_radial_velocity(field, data):
        if data.has_field_parameter("bulk_velocity"):
            bv = data.get_field_parameter("bulk_velocity").in_units("cm/s")
        else:
            bv = data.ds.arr(np.zeros(3), "cm/s")
        xv = data["gas", "velocity_x"] - bv[0]
        yv = data["gas", "velocity_y"] - bv[1]
        zv = data["gas", "velocity_z"] - bv[2]
        center = data.get_field_parameter("center")
        x_hat = data["gas", "x"] - center[0]
        y_hat = data["gas", "y"] - center[1]
        z_hat = data["gas", "z"] - center[2]
        r = np.sqrt(x_hat * x_hat + y_hat * y_hat + z_hat * z_hat)
        x_hat /= r
        y_hat /= r
        z_hat /= r
        return xv * x_hat + yv * y_hat + zv * z_hat


    yt.add_field(
        ("gas", "my_radial_velocity"),
        function=_my_radial_velocity,
        sampling_type="cell",
        units="cm/s",
        take_log=False,
        validators=[ValidateParameter(["center", "bulk_velocity"])],
    )

Note that we have added a few optional arguments to ``yt.add_field``; we
specify that we do not wish to display this field as logged, and that we
require both the ``bulk_velocity`` and ``center`` field parameters to be
present in a given data object we wish to calculate this for. This is done
through the parameter *validators*, which accepts a list of
:class:`~yt.fields.derived_field.FieldValidator` objects. These objects define
the way in which the field is generated, and when it is able to be created. In
this case, we mandate that the parameters ``center`` and ``bulk_velocity`` are
set before creating the field. These are set via
:meth:`~yt.data_objects.data_containers.set_field_parameter`, which can be
called on any object that has fields:

.. code-block:: python

    ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
    sp = ds.sphere("max", (200.0, "kpc"))
    sp.set_field_parameter("bulk_velocity", yt.YTArray([-100.0, 200.0, 300.0], "km/s"))

In this case, we already know what the ``center`` of the sphere is, so we do
not set it. Also, note that ``center`` and ``bulk_velocity`` need to be
:class:`~yt.units.yt_array.YTArray` objects with units.

If you are writing a derived field that uses a field parameter to change the
behavior of the field, you can have yt test that the field handles all possible
values of that parameter using a special form of the ``ValidateParameter``
field validator. In particular, ``ValidateParameter`` supports an optional
second argument, which takes a dictionary mapping from parameter names to
parameter values that you would like yt to test. This is useful when a field
will select different fields to access based on the value of a field parameter.
This option allows you to force yt to select *all* needed dependent fields for
your derived field definition at field detection time. This can avoid errors
related to missing fields.

For example, let's write a field that depends on a field parameter named
``'axis'``:

.. code-block:: python

    def my_axis_field(field, data):
        axis = data.get_field_parameter("axis")
        if axis == 0:
            return data["gas", "velocity_x"]
        elif axis == 1:
            return data["gas", "velocity_y"]
        elif axis == 2:
            return data["gas", "velocity_z"]
        else:
            raise ValueError


    ds.add_field(
        "my_axis_field",
        function=my_axis_field,
        units="cm/s",
        validators=[ValidateParameter("axis", {"axis": [0, 1, 2]})],
    )

In this example, we've told yt's field system that the data object we are
querying ``my_axis_field`` must have the ``axis`` field parameter set.
In addition, it forces yt to recognize that this field might depend on any one
of ``velocity_x``, ``velocity_y``, or ``velocity_z``. Specifying that ``axis``
might be 0, 1, or 2 in the ``ValidateParameter`` call ensures that this field
will only be valid and available for datasets that have all three fields
available.

Other examples for creating derived fields can be found in the cookbook recipe
:ref:`cookbook-simple-derived-fields`.

.. _derived-field-options:

Field Options
-------------

The arguments to :func:`add_field` are passed on to the constructor of
:class:`DerivedField`. There are a number of options available, but the only
mandatory ones are ``name``, ``units``, and ``function``.

``name``
   This is the name of the field -- how you refer to it. For instance,
   ``pressure`` or ``magnetic_field_strength``.
``function``
   This is a function handle that defines the field.
``units``
   This is a string that describes the units, or a query to a UnitSystem
   object, e.g. ``ds.unit_system["energy"]``. Powers must be in Python syntax
   (``**`` instead of ``^``). Alternatively, it may be set to ``"auto"`` to
   have the units determined automatically. In this case, the ``dimensions``
   keyword must be set to the correct dimensions of the field.
``display_name``
   This is a name used in the plots, for instance ``"Divergence of Velocity"``.
   If not supplied, the ``name`` value is used.
``take_log``
   This is *True* or *False* and describes whether the field should be logged
   when plotted.
``particle_type``
   Is this field a *particle* field?
``validators``
   (*Advanced*) This is a list of :class:`FieldValidator` objects, for instance
   to mandate spatial data.
``display_field``
   (*Advanced*) Should this field appear in the dropdown box in Reason?
``not_in_all``
   (*Advanced*) If this is *True*, the field may not be in all the grids.
``output_units``
   (*Advanced*) For fields that exist on disk, which we may want to convert to
   other fields or that get aliased to themselves, we can specify a different
   desired output unit than the unit found on disk.
``force_override``
   (*Advanced*) Overrides the definition of an old field if a field with the
   same name has already been defined.
``dimensions``
   Set this if ``units="auto"``. Can be either a string or a dimension object
   from ``yt.units.dimensions``.

Debugging a Derived Field
-------------------------

If your derived field is not behaving as you would like, you can insert a call
to ``data._debug()`` to spawn an interactive interpreter whenever that line is
reached. Note that this is slightly different from calling ``pdb.set_trace()``,
as it will *only* trigger when the derived field is being called on an actual
data object, rather than during the field detection phase. The starting
position will be one function lower in the stack than you are likely interested
in, but you can either step back up to the derived field function or simply
type ``u`` to go up a level in the stack.

For instance, if you had defined this derived field:

.. code-block:: python

    @yt.derived_field(name=("gas", "funthings"))
    def funthings(field, data):
        return data["sillythings"] + data["humorousthings"] ** 2.0

And you wanted to debug it, you could do:

.. code-block:: python

    @yt.derived_field(name=("gas", "funthings"))
    def funthings(field, data):
        data._debug()
        return data["sillythings"] + data["humorousthings"] ** 2.0

And now, when that derived field is actually used, you will be placed into a
debugger.
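With the ``data._debug()`` call above in place, nothing happens at field
registration or detection time; the interpreter spawns the first time the field
is evaluated on an actual data object. A minimal sketch of what triggers it
(the dataset name is just an example, and ``sillythings`` and
``humorousthings`` are the hypothetical fields from the snippet above):

.. code-block:: python

    ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
    sp = ds.sphere("max", (100.0, "kpc"))

    # This access calls the derived field function on real data, so this is
    # the line that actually drops you into the interactive interpreter.
    funthings = sp["gas", "funthings"]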
.. _creating_frontend:

Creating A New Code Frontend
============================

yt is designed to support analysis and visualization of data from multiple
different simulation codes. For a list of codes and the level of support they
enjoy, see :ref:`code-support`.

We'd like to support a broad range of codes, both Adaptive Mesh Refinement
(AMR)-based and otherwise. To add support for a new code, a few things need to
be put into place. These necessary structures can be classified into a couple
of categories:

* Data meaning: This is the set of parameters that convert the data into
  physically relevant units; things like spatial and mass conversions, time
  units, and so on.
* Data localization: These are structures that help make a "first pass" at
  data loading. Essentially, we need to be able to make a first pass at
  guessing where data in a given physical region would be located on disk.
  With AMR data, this is typically quite easy: the grid patches are the
  "first pass" at localization.
* Data reading: This is the set of routines that actually perform a read of
  either all data in a region or a subset of that data.

Note that a frontend can be built as an external package. This is useful to
develop and maintain a maturing frontend at your own pace. For technical
details, see :ref:`frontends-as-extensions`.

If you are interested in adding a new code, be sure to drop us a line on
`yt-dev <https://mail.python.org/archives/list/yt-dev@python.org/>`_!

Bootstrapping a new frontend
----------------------------

To get started:

* make a new directory in ``yt/frontends`` with the name of your code and add
  the name into ``yt/frontends/api.py:_frontends`` (in alphabetical order).
* copy the contents of the ``yt/frontends/_skeleton`` directory, and replace
  every occurrence of ``Skeleton`` with your frontend's name (preserving
  case). This adds a lot of boilerplate for the required classes and methods
  that are needed.

Data Meaning Structures
-----------------------

You will need to create a subclass of ``Dataset`` in the
``data_structures.py`` file. This subclass will need to handle conversion
between the different physical units and the code units (typically in the
``_set_code_unit_attributes()`` method), read in metadata describing the
overall data on disk (via the ``_parse_parameter_file()`` method), and provide
a ``classmethod`` called ``_is_valid()`` that lets the ``yt.load`` method help
identify an input file as belonging to *this* particular ``Dataset`` subclass
(see :ref:`data-format-detection`). For the most part, the examples of
``yt.frontends.amrex.data_structures.OrionDataset`` and
``yt.frontends.enzo.data_structures.EnzoDataset`` should be followed, but
``yt.frontends.chombo.data_structures.ChomboDataset``, as a slightly newer
addition, can also be used as an instructive example.
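As a point of orientation, a minimal ``_set_code_unit_attributes()`` might
look like the sketch below. The numerical values are placeholders; a real
frontend would read the conversion factors from its own on-disk metadata:

.. code-block:: python

    from yt.data_objects.static_output import Dataset


    class MyCodeDataset(Dataset):
        def _set_code_unit_attributes(self):
            # Placeholder conversion factors -- a real frontend reads these
            # from the dataset's metadata.
            self.length_unit = self.quan(3.0857e24, "cm")  # e.g. 1 Mpc
            self.mass_unit = self.quan(1.989e33, "g")  # e.g. 1 Msun
            self.time_unit = self.quan(3.156e13, "s")  # e.g. 1 Myr
            self.velocity_unit = self.length_unit / self.time_unit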
A new set of fields must be added in the file ``fields.py`` in your new
directory. For the most part this means subclassing ``FieldInfoContainer`` and
adding the necessary fields specific to your code. Here is a snippet from the
base BoxLib field container (defined in ``yt.frontends.amrex.fields``):

.. code-block:: python

    from yt.fields.field_info_container import FieldInfoContainer


    class BoxlibFieldInfo(FieldInfoContainer):
        known_other_fields = (
            ("density", (rho_units, ["density"], None)),
            ("eden", (eden_units, ["energy_density"], None)),
            ("xmom", (mom_units, ["momentum_x"], None)),
            ("ymom", (mom_units, ["momentum_y"], None)),
            ("zmom", (mom_units, ["momentum_z"], None)),
            ("temperature", ("K", ["temperature"], None)),
            ("Temp", ("K", ["temperature"], None)),
            ("x_velocity", ("cm/s", ["velocity_x"], None)),
            ("y_velocity", ("cm/s", ["velocity_y"], None)),
            ("z_velocity", ("cm/s", ["velocity_z"], None)),
            ("xvel", ("cm/s", ["velocity_x"], None)),
            ("yvel", ("cm/s", ["velocity_y"], None)),
            ("zvel", ("cm/s", ["velocity_z"], None)),
        )

        known_particle_fields = (
            ("particle_mass", ("code_mass", [], None)),
            ("particle_position_x", ("code_length", [], None)),
            ("particle_position_y", ("code_length", [], None)),
            ("particle_position_z", ("code_length", [], None)),
            ("particle_momentum_x", (mom_units, [], None)),
            ("particle_momentum_y", (mom_units, [], None)),
            ("particle_momentum_z", (mom_units, [], None)),
            ("particle_angmomen_x", ("code_length**2/code_time", [], None)),
            ("particle_angmomen_y", ("code_length**2/code_time", [], None)),
            ("particle_angmomen_z", ("code_length**2/code_time", [], None)),
            ("particle_id", ("", ["particle_index"], None)),
            ("particle_mdot", ("code_mass/code_time", [], None)),
        )

(Here ``rho_units``, ``eden_units``, and ``mom_units`` are unit strings
defined elsewhere in the same module.) The tuples ``known_other_fields`` and
``known_particle_fields`` contain entries that are tuples of the form
``("name", ("units", ["fields", "to", "alias"], "display_name"))``. ``"name"``
is the name of a field stored on-disk in the dataset. ``"units"`` corresponds
to the units of that field. The list ``["fields", "to", "alias"]`` allows you
to specify additional aliases to this particular field; for example, if your
on-disk field for the x-direction velocity were ``"x-direction-velocity"``,
maybe you'd prefer to alias to the more terse name of ``"xvel"``.

By convention in yt we use a set of "universal" fields. Currently these
fields are enumerated in the stream frontend. If you take a look at
``yt/frontends/stream/fields.py``, you will see a listing of fields following
the format described above with field names that will be recognized by the
rest of the built-in yt field system. In the example from the boxlib frontend
above many of the fields in the ``known_other_fields`` tuple follow this
convention. If you would like your frontend to mesh nicely with the rest of
yt's built-in fields, it is probably a good idea to alias your frontend's
field names to the yt "universal" field names.

Finally, ``"display_name"`` is an optional parameter that can be used to
specify how you want the field to be displayed on a plot; this can be LaTeX
code, for example the density field could have a display name of ``r"\rho"``.
Omitting the ``"display_name"`` will result in using a capitalized version of
the ``"name"``.

.. _data-format-detection:

How to make ``yt.load`` magically detect your data format?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

``yt.load`` takes in a file or directory name, as well as any number of
positional and keyword arguments. On call, ``yt.load`` attempts to determine
what ``Dataset`` subclasses are compatible with the set of arguments it
received. It does so by passing its arguments to *every* ``Dataset``
subclass's ``_is_valid`` method. These methods are intended to be heuristics
that quickly determine whether the arguments (in particular the
file/directory) can be loaded with their respective classes. In some cases,
more than one class might be detected as valid. If all candidate classes are
siblings, ``yt.load`` will select the most specialized one.

When writing a new frontend, it is important to write ``_is_valid`` methods to
be as specific as possible, otherwise one might constrain the design space for
future frontends or in some cases deny their ability to leverage ``yt.load``'s
magic. Performance is also critical, since the method is going to get called
every single time ``yt.load`` is invoked, even for unrelated data formats.
Note that ``yt.load`` knows about every ``Dataset`` subclass because they are
automatically registered on creation.
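For concreteness, here is a sketch of what a fast, specific ``_is_valid``
might look like for a hypothetical HDF5-based format. The ``"format_name"``
marker attribute is an assumption for illustration, not a convention taken
from a real frontend:

.. code-block:: python

    from yt.data_objects.static_output import Dataset
    from yt.utilities.on_demand_imports import _h5py as h5py


    class MyCodeDataset(Dataset):
        @classmethod
        def _is_valid(cls, filename, *args, **kwargs):
            # Do the cheap checks first: this runs on every yt.load() call.
            if not filename.endswith(".h5"):
                return False
            try:
                with h5py.File(filename, mode="r") as f:
                    # Hypothetical marker written by the simulation code.
                    return f.attrs.get("format_name") == "my_code"
            except OSError:
                return False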
.. _bfields-frontend:

Creating Aliases for Magnetic Fields
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Setting up access to the magnetic fields in your dataset requires special
handling, because in different unit systems magnetic fields have different
dimensions (see :ref:`bfields` for an explanation). If your dataset includes
magnetic fields, you should include them in ``known_other_fields``, but do not
set up aliases for them -- instead use the special handling function
:meth:`~yt.fields.magnetic_field.setup_magnetic_field_aliases`. It takes as
arguments the ``FieldInfoContainer`` instance, the field type of the frontend,
and the list of magnetic fields from the frontend. Here is an example of how
this is implemented in the FLASH frontend:

.. code-block:: python

    class FLASHFieldInfo(FieldInfoContainer):
        known_other_fields = (
            ("magx", (b_units, [], "B_x")),  # Note there is no alias here
            ("magy", (b_units, [], "B_y")),
            ("magz", (b_units, [], "B_z")),
            ...,
        )

        def setup_fluid_fields(self):
            from yt.fields.magnetic_field import setup_magnetic_field_aliases

            ...
            setup_magnetic_field_aliases(self, "flash", ["mag%s" % ax for ax in "xyz"])

This function should always be imported and called from within the
``setup_fluid_fields`` method of the ``FieldInfoContainer``. If this function
is used, converting between magnetic fields in different unit systems will be
handled automatically.

Data Localization Structures
----------------------------

These functions and classes let yt know about how the arrangement of data on
disk corresponds to the physical arrangement of data within the simulation.
yt has grid datastructures for handling both patch-based and octree-based AMR
codes. The terms 'patch-based' and 'octree-based' are used somewhat loosely
here. For example, traditionally, the FLASH code used the paramesh AMR
library, which is based on a tree structure, but the FLASH frontend in yt
utilizes yt's patch-based datastructures. It is up to the frontend developer
to determine which yt datastructures best match the datastructures of their
simulation code.

Both approaches -- patch-based and octree-based -- have a concept of a
*Hierarchy* or *Index* (used somewhat interchangeably in the code) of
datastructures and something that describes the elements that make up the
Hierarchy or Index. For patch-based codes, the Index is a collection of
``AMRGridPatch`` objects that describe a block of zones. For octree-based
codes, the Index contains datastructures that hold information about the
individual octs, namely an ``OctreeContainer``.
Hierarchy or Index
^^^^^^^^^^^^^^^^^^

To set up data localization, a ``GridIndex`` subclass for patch-based codes or
an ``OctreeIndex`` subclass for octree-based codes must be added in the file
``data_structures.py``. Examples of these different types of ``Index`` can be
found in, for example, the
``yt.frontends.chombo.data_structures.ChomboHierarchy`` for patch-based codes
and ``yt.frontends.ramses.data_structures.RAMSESIndex`` for octree-based
codes.

For the most part, the ``GridIndex`` subclass must override (at a minimum) the
following methods:

* ``_detect_output_fields()``: ``self.field_list`` must be populated as a
  list of strings corresponding to "native" fields in the data files.
* ``_count_grids()``: this must set ``self.num_grids`` to be the total number
  of grids (equivalently ``AMRGridPatch``'es) in the simulation.
* ``_parse_index()``: this must fill in ``grid_left_edge``,
  ``grid_right_edge``, ``grid_particle_count``, ``grid_dimensions`` and
  ``grid_levels`` with the appropriate information. Each of these variables
  is an array, with an entry for each of the ``self.num_grids`` grids.
  Additionally, ``grids`` must be an array of ``AMRGridPatch`` objects that
  already know their IDs.
* ``_populate_grid_objects()``: this initializes the grids by calling
  ``_prepare_grid()`` and ``_setup_dx()`` on all of them. Additionally, it
  should set up ``Children`` and ``Parent`` lists on each grid object.

The ``OctreeIndex`` has somewhat analogous methods, but often with different
names; both ``OctreeIndex`` and ``GridIndex`` are subclasses of the ``Index``
class. In particular, for the ``OctreeIndex``, the method
``_initialize_oct_handler()`` sets up much of the oct metadata that is
analogous to the grid metadata created in the ``GridIndex`` methods
``_count_grids()``, ``_parse_index()``, and ``_populate_grid_objects()``.

Grids
^^^^^

.. note:: This section only applies to the approach using yt's patch-based
   datastructures. For the octree-based approach, one does not create a grid
   object, but rather an ``OctreeSubset``, which has methods for filling out
   portions of the octree structure. Again, see the code in
   ``yt.frontends.ramses.data_structures`` for an example of the octree
   approach.

A new grid object, subclassing ``AMRGridPatch``, will also have to be added in
``data_structures.py``. For the most part, this may be all that is needed:

.. code-block:: python

    class ChomboGrid(AMRGridPatch):
        _id_offset = 0
        __slots__ = ["_level_id"]

        def __init__(self, id, index, level=-1):
            AMRGridPatch.__init__(self, id, filename=index.index_filename, index=index)
            self.Parent = None
            self.Children = []
            self.Level = level

Even one of the more complex grid objects, ``yt.frontends.amrex.BoxlibGrid``,
is still relatively simple.

Data Reading Functions
----------------------

In ``io.py``, there are a number of IO handlers that handle the mechanisms by
which data is read off disk. To implement a new data reader, you must
subclass ``BaseIOHandler``. The various frontend IO handlers are stored in an
IO registry -- essentially a dictionary that uses the name of the frontend as
a key, and the specific IO handler as a value. It is important, therefore, to
set the ``dataset_type`` attribute of your subclass, which is what is used as
the key in the IO registry. For example:

.. code-block:: python

    class IOHandlerBoxlib(BaseIOHandler):
        _dataset_type = "boxlib_native"
        ...

At a minimum, one should also override the following methods:

* ``_read_fluid_selection()``: this receives a collection of data "chunks", a
  selector describing which "chunks" you are concerned with, a list of
  fields, and the size of the data to read. It should create and return a
  dictionary whose keys are the fields, and whose values are numpy arrays
  containing the data. The data should actually be read via the
  ``_read_chunk_data()`` method.
* ``_read_chunk_data()``: this method receives a "chunk" of data along with a
  list of fields we want to read. It loops over all the grid objects within
  the "chunk" of data and reads from disk the specific fields, returning a
  dictionary whose keys are the fields and whose values are numpy arrays of
  the data.

If your dataset has particle information, you'll want to override the
``_read_particle_coords()`` and ``_read_particle_fields()`` methods as well.
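To make the division of labor between these two methods concrete, here is a
rough sketch of a ``_read_fluid_selection()`` implementation for a
hypothetical frontend. The ``_read_chunk_data()`` call is assumed to return a
``{grid_id: {field: ndarray}}`` mapping, and the frontend name is made up:

.. code-block:: python

    import numpy as np

    from yt.utilities.io_handler import BaseIOHandler


    class IOHandlerMyCode(BaseIOHandler):
        _dataset_type = "mycode_native"  # hypothetical registry key

        def _read_fluid_selection(self, chunks, selector, fields, size):
            # One flat output buffer per requested field.
            rv = {field: np.empty(size, dtype="float64") for field in fields}
            ind = 0
            for chunk in chunks:
                data = self._read_chunk_data(chunk, fields)
                for grid in chunk.objs:
                    for field in fields:
                        # grid.select() applies the selector to this grid's
                        # data, copies the selected zones into rv[field]
                        # starting at ind, and returns the number of zones.
                        nzones = grid.select(
                            selector, data[grid.id][field], rv[field], ind
                        )
                    ind += nzones
            return rv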
Each code is going to read data from disk in a different fashion, but the
``yt.frontends.amrex.io.IOHandlerBoxlib`` is a decent place to start.

And that just about covers it. Please feel free to email
`yt-users <https://mail.python.org/archives/list/yt-users@python.org/>`_ or
`yt-dev <https://mail.python.org/archives/list/yt-dev@python.org/>`_ with any
questions, or to let us know you're thinking about adding a new code to yt.

How to add extra dependencies?
------------------------------

.. note:: This section covers the technical details of how optional runtime
   dependencies are implemented and used in yt. If your frontend has specific
   or complicated dependencies other than yt's, we advise writing your
   frontend as an extension package; see :ref:`frontends-as-extensions`.

It is required that a specific target be added to ``pyproject.toml`` to define
a list of additional requirements (even if empty), see
:ref:`install-additional`.

At runtime, extra third party dependencies should be loaded lazily, meaning
their import needs to be delayed until actually needed. This is achieved by
importing a wrapper from ``yt.utilities.on_demand_imports``, instead of the
actual package, like so:

.. code-block:: python

    from yt.utilities.on_demand_imports import _mypackage as mypackage

Such import statements can live at the top of a module without generating
overhead or errors in case the actual package isn't installed. If the extra
third party dependency is new, a new import wrapper must also be added. To do
so, follow the example of the existing wrappers in
``yt.utilities.on_demand_imports``.
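The wrappers all follow the same basic idea: defer the real import until an
attribute is first accessed. A stripped-down sketch of that pattern (not the
actual implementation in ``yt.utilities.on_demand_imports``) looks like this:

.. code-block:: python

    import importlib


    class OnDemandImport:
        """Proxy that imports a package on first attribute access."""

        def __init__(self, name):
            self._name = name
            self._module = None

        def __getattr__(self, attr):
            if self._module is None:
                # An ImportError is raised here, at first use, rather than
                # when the wrapper itself is imported.
                self._module = importlib.import_module(self._name)
            return getattr(self._module, attr)


    # Hypothetical usage: behaves like the real package once it is needed.
    _mypackage = OnDemandImport("mypackage")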
.. _debug-drive:

Debugging yt
============

There are several different convenience functions that allow you to control yt
in perhaps unexpected and unorthodox manners. These will allow you to conduct
in-depth debugging of processes that may be running in parallel on multiple
processors, as well as providing a mechanism of signalling to yt that you need
more information about a running process. Additionally, yt has a built-in
mechanism for optional reporting of errors to a central server. All of these
allow for more rapid development and debugging of any problems you might
encounter.

Additionally, yt is able to leverage existing developments in the IPython
community for parallel, interactive analysis. This allows you to initialize
multiple yt processes through ``mpirun`` and interact with all of them from a
single, unified interactive prompt. This enables and facilitates parallel
analysis without sacrificing interactivity and flexibility.

.. _pastebin:

Pastebin
--------

A pastebin is a website where you can easily copy source code and error
messages to share with yt developers or your collaborators. At
http://paste.yt-project.org/ a pastebin is available for placing scripts.
With yt the script ``yt_lodgeit.py`` is distributed and wrapped with the
``pastebin`` and ``pastebin_grab`` commands, which allow for commandline
uploading and downloading of pasted snippets. To upload a script you would
supply it to the command:

.. code-block:: bash

   $ yt pastebin some_script.py

The URL will be returned. If you'd like it to be marked 'private' and not show
up in the list of pasted snippets, supply the argument ``--private``. All
snippets are given either numbers or hashes. To download a pasted snippet, you
would use the ``pastebin_grab`` option:

.. code-block:: bash

   $ yt pastebin_grab 1768

The snippet will be output to the window, so output redirection can be used to
store it in a file.

Use the Python Debugger
-----------------------

yt is almost entirely composed of python code, so it makes sense to use the
`python debugger`_ as your first stop in trying to debug it.

.. _python debugger: https://docs.python.org/3/library/pdb.html

Signaling yt to Do Something
----------------------------

During startup, yt inserts handlers for two operating system-level signals.
These provide two diagnostic methods for interacting with a running process.
Signalling the python process that is running your script with these signals
will induce the requested behavior.

SIGUSR1
   This will cause the python code to print a stack trace, showing exactly
   where in the function stack it is currently executing.
SIGUSR2
   This will cause the python code to insert an IPython session wherever it
   currently is, with all local variables in the local namespace. It should
   allow you to change the state variables.

If your yt-running process has PID 5829, you can signal it to print a
traceback with:

.. code-block:: bash

   $ kill -SIGUSR1 5829

Note, however, that if the code is currently inside a C function, the signal
will not be handled, and the stacktrace will not be printed, until it returns
from that function.
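The same signal can also be sent from another Python process using only the
standard library (the PID below is, of course, a placeholder):

.. code-block:: python

    import os
    import signal

    # Ask the yt process with PID 5829 to print a stack trace.
    os.kill(5829, signal.SIGUSR1)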
.. _remote-debugging:

Remote and Disconnected Debugging
---------------------------------

If you are running a parallel job that fails, often it can be difficult to do
a post-mortem analysis to determine what went wrong. To facilitate this, yt
has implemented an `XML-RPC <https://en.wikipedia.org/wiki/XML-RPC>`_
interface to the Python debugger (``pdb``) event loop. Running with the
``--rpdb`` command will cause any uncaught exception during execution to spawn
this interface, which will sit and wait for commands, exposing the full Python
debugger. Additionally, a frontend to this is provided through the yt
command. So if you run the command:

.. code-block:: bash

   $ mpirun -np 4 python some_script.py --parallel --rpdb

and it reaches an error or an exception, it will launch the debugger.
Additionally, instructions will be printed for connecting to the debugger.
Each of the four processes will be accessible via:

.. code-block:: bash

   $ yt rpdb 0

where ``0`` here indicates the process 0.

For security reasons, this will only work on local processes; to connect on a
cluster, you will have to execute the command ``yt rpdb`` on the node on which
that process was launched.

.. _how-to-deprecate:

How to deprecate a feature
--------------------------

Since the 4.0.0 release, deprecation happens on a per-release basis. A
functionality can be marked as deprecated using
``~yt._maintenance.deprecation.issue_deprecation_warning``, which takes a
warning message and two version numbers, indicating the earliest release
deprecating the feature and the one in which it will be removed completely.

The message should indicate a viable alternative to replace the deprecated
feature at the user level. ``since`` and ``removal`` arguments should indicate
in which release something was first deprecated, and when it's expected to be
removed. While ``since`` is required, ``removal`` is optional. Here's an
example call:

.. code-block:: python

    def old_function(*args, **kwargs):
        from yt._maintenance.deprecation import issue_deprecation_warning

        issue_deprecation_warning(
            "`old_function` is deprecated, use `replacement_function` instead.",
            stacklevel=3,
            since="4.0",
            removal="4.1.0",
        )
        ...

If a whole function or class is marked as deprecated, it should be removed
from ``doc/source/reference/api/api.rst``.

Deprecating Derived Fields
--------------------------

Occasionally, one may want to deprecate a derived field in yt, normally
because naming conventions for fields have changed, or simply because a field
has outlived its usefulness. There are two ways to mark fields as deprecated
in yt. The first way is if you simply want to mark a specific derived field as
deprecated. In that case, you call
:meth:`~yt.fields.field_info_container.FieldInfoContainer.add_deprecated_field`:

.. code-block:: python

    def _cylindrical_radial_absolute(field, data):
        """This field is deprecated and will be removed in a future version"""
        return np.abs(data[ftype, f"{basename}_cylindrical_radius"])


    registry.add_deprecated_field(
        (ftype, f"cylindrical_radial_{basename}_absolute"),
        sampling_type="local",
        function=_cylindrical_radial_absolute,
        since="4.0",
        removal="4.1.0",
        units=field_units,
        validators=[ValidateParameter("normal")],
    )

Note that the signature for
:meth:`~yt.fields.field_info_container.FieldInfoContainer.add_deprecated_field`
is the same as
:meth:`~yt.fields.field_info_container.FieldInfoContainer.add_field`, with the
exception of the ``since`` and ``removal`` arguments, which indicate in what
version the field was deprecated and in what version it will be removed. The
effect is to add a warning to the logger when the field is first used:

.. code-block:: python

    import yt

    ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
    sp = ds.sphere("c", (100.0, "kpc"))
    print(sp["gas", "cylindrical_radial_velocity_absolute"])

.. code-block:: pycon

    yt : [WARNING  ] 2021-03-09 16:30:47,460 The Derived Field
    ('gas', 'cylindrical_radial_velocity_absolute') is deprecated as of yt
    v4.0.0 and will be removed in yt v4.1.0

The second way to deprecate a derived field is to take an existing field
definition and change its name. In order to mark the original name as
deprecated, use the
:meth:`~yt.fields.field_info_container.FieldInfoContainer.alias` method and
pass the ``since`` and ``removal`` arguments (see above) as a tuple in the
``deprecate`` keyword argument:

.. code-block:: python

    registry.alias(
        (ftype, "kinetic_energy"),
        (ftype, "kinetic_energy_density"),
        deprecate=("4.0.0", "4.1.0"),
    )

Note that the old field name which is to be deprecated goes first, and the
new, replacement field name goes second. In this case, the log message reports
to the user what field they should use:
.. code-block:: python

    print(sp["gas", "kinetic_energy"])

.. code-block:: pycon

    yt : [WARNING  ] 2021-03-09 16:29:12,911 The Derived Field
    ('gas', 'kinetic_energy') is deprecated as of yt v4.0.0 and will be
    removed in yt v4.1.0 Use ('gas', 'kinetic_energy_density') instead.

In most cases, the ``since`` and ``removal`` arguments should have a delta of
one minor release, and that should be the minimum value. However, the
developer is free to use their judgment about whether or not the delta should
be multiple minor releases if the field has a long provenance.

.. include:: ../../../CONTRIBUTING.rst

.. _extensions:

Extension Packages
==================

.. note:: For some additional discussion, see
   `YTEP-0029 <https://ytep.readthedocs.io/en/latest/YTEPs/YTEP-0029.html>`_,
   where this plan was designed.

As of version 3.3 of yt, we have put into place new methods for easing the
process of developing "extensions" to yt. Extensions might be analysis
packages, visualization tools, or other software projects that use yt as a
base engine but that are versioned, developed and distributed separately.
This brings with it the advantage of retaining control over the versioning,
contribution guidelines, scope, etc, while also providing a mechanism for
disseminating information about it, and potentially a method of interacting
with other extensions.

We have created a few pieces of infrastructure for developing extensions,
making them discoverable, and distributing them to collaborators. If you have
a module you would like to retain some external control over, or that you
don't feel would fit into yt, we encourage you to build it as an extension
module and distribute and version it independently.

Hooks for Extensions
--------------------

Starting with version 3.3 of yt, any package named with the prefix ``yt_`` is
importable from the namespace ``yt.extensions``. For instance, the
``yt_interaction`` package ( https://bitbucket.org/data-exp-lab/yt_interaction )
is importable as ``yt.extensions.interaction``.

In subsequent versions, we plan to include in yt a catalog of known extensions
and where to find them; this will put discoverability directly into the code
base.

.. _frontends-as-extensions:

Frontends as extensions
-----------------------

Starting with version 4.2 of yt, any externally installed package that exports
a :class:`~yt.data_objects.static_output.Dataset` subclass as an entrypoint in
the ``yt.frontends`` namespace in ``setup.py`` or ``pyproject.toml`` will be
automatically loaded and immediately available in :func:`~yt.loaders.load`.

To add an entrypoint in an external project's ``setup.py``:

.. code-block:: python

    setup(
        # ...,
        entry_points={
            "yt.frontends": [
                "myFrontend = my_frontend.api:MyFrontendDataset",
                "myOtherFrontend = my_frontend.api:MyOtherFrontendDataset",
            ]
        },
    )

or ``pyproject.toml``:
.. code-block:: toml

    [project.entry-points."yt.frontends"]
    myFrontend = "my_frontend.api:MyFrontendDataset"
    myOtherFrontend = "my_frontend.api:MyOtherFrontendDataset"

Extension Template
------------------

A template for starting an extension module (or converting an existing set of
code to an extension module) can be found at
https://github.com/yt-project/yt_extension_template .

To get started, download a zipfile of the template
( https://codeload.github.com/yt-project/yt_extension_template/zip/master )
and follow the directions in ``README.md`` to modify the metadata.

Distributing Extensions
-----------------------

We encourage you to version on your choice of hosting platform (Bitbucket,
GitHub, etc), and to distribute your extension widely. We are presently
working on deploying a method for listing extension modules on the yt webpage.

.. _external-analysis-tools:

Using yt with External Analysis Tools
=====================================

yt can be used as a ``glue`` code between simulation data and other methods of
analyzing data. Its facilities for understanding units, disk IO and data
selection set it up ideally to use other mechanisms for analyzing, processing
and visualizing data.

Calling External Python Codes
-----------------------------

Calling external Python codes is very straightforward. For instance, if you
had a Python code that accepted a set of structured meshes and then
post-processed them to apply radiative feedback, one could imagine calling it
directly:

.. code-block:: python

    import radtrans

    import yt

    ds = yt.load("DD0010/DD0010")
    rt_grids = []

    for grid in ds.index.grids:
        rt_grid = radtrans.RegularBox(
            grid.LeftEdge,
            grid.RightEdge,
            grid["density"],
            grid["temperature"],
            grid["metallicity"],
        )
        rt_grids.append(rt_grid)
        grid.clear_data()

    radtrans.process(rt_grids)

Or if you wanted to run a population synthesis module on a set of star
particles (and you could fit them all into memory) it might look something
like this:

.. code-block:: python

    import pop_synthesis

    import yt

    ds = yt.load("DD0010/DD0010")
    ad = ds.all_data()
    star_masses = ad["StarMassMsun"]
    star_metals = ad["StarMetals"]

    pop_synthesis.CalculateSED(star_masses, star_metals)

If you have a code that's written in Python that you are having trouble
getting data into from yt, please feel encouraged to email the users list and
we'll help out.

Calling Non-Python External Codes
---------------------------------

Independent of its ability to process, analyze and visualize data, yt can also
serve as a mechanism for reading and selecting simulation data. In this way,
it can be used to supply data to an external analysis routine written in
Fortran, C or C++. This document describes how to supply that data, using the
example of a simple code that calculates the best axes that describe a
distribution of particles as a starting point. (The underlying method is left
as an exercise for the reader; we're only currently interested in the function
specification and structs.)

If you have written a piece of code that performs some analysis function and
you would like to include it in the base distribution of yt, we would be happy
to do so; drop us a line or see :ref:`contributing-code` for more information.
To accomplish the process of linking Python with our external code, we will be
using a language called `Cython <https://cython.org/>`_, which is essentially
a superset of Python that compiles down to C. It is aware of NumPy arrays, and
it is able to massage data between the interpreted language Python and C,
Fortran or C++. It will be much easier to utilize routines and analysis code
that have been separated into subroutines that accept data structures, so we
will assume that our halo axis calculator accepts a set of structs.

Our Example Code
++++++++++++++++

Here is the ``axes.h`` file in our imaginary code, which we will then wrap:

.. code-block:: c

   typedef struct structParticleCollection {
       long npart;
       double *xpos;
       double *ypos;
       double *zpos;
   } ParticleCollection;

   void calculate_axes(ParticleCollection *part,
                       double *ax1, double *ax2, double *ax3);

There are several components to this analysis routine which we will have to
wrap.

#. We have to wrap the creation of an instance of ``ParticleCollection``.
#. We have to transform a set of NumPy arrays into pointers to doubles.
#. We have to create a set of doubles into which ``calculate_axes`` will be
   placing the values of the axes it calculates.
#. We have to turn the return values back into Python objects.

Each of these steps can be handled in turn, and we'll be doing it using Cython
as our interface code.

Setting Up and Building Our Wrapper
+++++++++++++++++++++++++++++++++++

To get started, we'll need to create two files:

.. code-block:: bash

   axes_calculator.pyx
   axes_calculator_setup.py

These can go anywhere, but it might be useful to put them in their own
directory. The contents of ``axes_calculator.pyx`` will be left for the next
section, but we will need to put some boilerplate code into
``axes_calculator_setup.py``. As a quick sidenote, you should call these
whatever is most appropriate for the external code you are wrapping;
``axes_calculator`` is probably not the best bet.

Here's a rough outline of what should go in ``axes_calculator_setup.py``:

.. code-block:: python

    NAME = "axes_calculator"
    EXT_SOURCES = []
    EXT_LIBRARIES = ["axes_utils", "m"]
    EXT_LIBRARY_DIRS = ["/home/rincewind/axes_calculator/"]
    EXT_INCLUDE_DIRS = []
    DEFINES = []

    from distutils.core import setup
    from distutils.extension import Extension

    from Cython.Distutils import build_ext

    ext_modules = [
        Extension(
            NAME,
            [NAME + ".pyx"] + EXT_SOURCES,
            libraries=EXT_LIBRARIES,
            library_dirs=EXT_LIBRARY_DIRS,
            include_dirs=EXT_INCLUDE_DIRS,
            define_macros=DEFINES,
        )
    ]

    setup(name=NAME, cmdclass={"build_ext": build_ext}, ext_modules=ext_modules)

The only variables you should have to change in this are the first six, and
possibly only the first one. We'll go through these variables one at a time.

``NAME``
    This is the name of our source file, minus the ``.pyx``. We're also
    mandating that it be the name of the module we import. You're free to
    modify this.
``EXT_SOURCES``
    Any additional sources can be listed here. For instance, if you are only
    linking against a single ``.c`` file, you could list it here -- if our
    axes calculator were fully contained within a file called
    ``calculate_my_axes.c`` we could link against it using this variable, and
    then we would not have to specify any libraries. This is usually the
    simplest way to do things, and in fact, yt makes use of this itself for
    things like HEALPix and interpolation functions.
``EXT_LIBRARIES``
    Any libraries that will need to be linked against (like ``m``!) should be
    listed here.
    Note that these are the names of the libraries minus the leading ``lib``
    and without the trailing ``.so``. So ``libm.so`` would become ``m`` and
    ``libluggage.so`` would become ``luggage``.
``EXT_LIBRARY_DIRS``
    If the libraries listed in ``EXT_LIBRARIES`` reside in some other
    directory or directories, those directories should be listed here. For
    instance, ``["/usr/local/lib", "/home/rincewind/luggage/"]`` .
``EXT_INCLUDE_DIRS``
    If any header files have been included that live in external directories,
    those directories should be included here.
``DEFINES``
    Any define macros that should be passed to the C compiler should be
    listed here; if they just need to be defined, then they should be
    specified to be defined as "None." For instance, if you wanted to pass
    ``-DTWOFLOWER``, you would set this to equal: ``[("TWOFLOWER", None)]``.

To build our extension, we would run:

.. code-block:: bash

   $ python axes_calculator_setup.py build_ext -i

Note that since we don't yet have an ``axes_calculator.pyx``, this will fail.
But once we have it, it ought to run.

Writing and Calling our Wrapper
+++++++++++++++++++++++++++++++

Now we begin the tricky part, of writing our wrapper code. We've already
figured out how to build it, which is halfway to being able to test that it
works, and we now need to start writing Cython code.

For a more detailed introduction to Cython, see the Cython documentation at
http://docs.cython.org/en/latest/ . We'll cover a few of the basics for
wrapping code however.

To start out with, we need to open up and edit our file,
``axes_calculator.pyx``. Open this in your favorite version of vi (mine is
vim) and we will get started by declaring the struct we need to pass in. But
first, we need to include some header information:

.. code-block:: cython

   import numpy as np

   cimport cython
   cimport numpy as np
   from libc.stdlib cimport free, malloc

These lines simply import and "Cython import" some common routines. For more
information about what is already available, see the Cython documentation.
For now, we need to start translating our data.

To do so, we tell Cython both where the struct should come from, and then we
describe the struct itself. One fun thing to note is that if you don't need
to set or access all the values in a struct, and it just needs to be passed
around opaquely, you don't have to include them in the definition. For an
example of this, see the ``png_writer.pyx`` file in the yt repository. Here's
the syntax for pulling in (from a file called ``axes_calculator.h``) a struct
like the one described above:

.. code-block:: cython

   cdef extern from "axes_calculator.h":
       ctypedef struct ParticleCollection:
           long npart
           double *xpos
           double *ypos
           double *zpos

So far, pretty easy! We've basically just translated the declaration from the
``.h`` file. Now that we have done so, any other Cython code can create and
manipulate these ``ParticleCollection`` structs -- which we'll do shortly.

Next up, we need to declare the function we're going to call, which looks
nearly exactly like the one in the ``.h`` file. (One common problem is that
Cython doesn't know what ``const`` means, so just remove it wherever you see
it.) Declare it like so:

.. code-block:: cython

       void calculate_axes(ParticleCollection *part,
                           double *ax1, double *ax2, double *ax3)

Note that this is indented one level, to indicate that it, too, comes from
``axes_calculator.h``.

The next step is to create a function that accepts arrays and converts them
to the format the struct likes.
We declare our function just like we would a normal Python function, using
``def``. You can also use ``cdef`` if you only want to call a function from
within Cython. We want to call it from Python, too, so we just use ``def``.
Note that we don't here specify types for the various arguments. In a moment
we'll refine this to have better argument types.

.. code-block:: cython

   def examine_axes(xpos, ypos, zpos):
       cdef double ax1[3], ax2[3], ax3[3]
       cdef ParticleCollection particles
       cdef int i

       particles.npart = len(xpos)
       particles.xpos = <double *> malloc(particles.npart * sizeof(double))
       particles.ypos = <double *> malloc(particles.npart * sizeof(double))
       particles.zpos = <double *> malloc(particles.npart * sizeof(double))

       for i in range(particles.npart):
           particles.xpos[i] = xpos[i]
           particles.ypos[i] = ypos[i]
           particles.zpos[i] = zpos[i]

       calculate_axes(&particles, ax1, ax2, ax3)

       free(particles.xpos)
       free(particles.ypos)
       free(particles.zpos)

       return ( (ax1[0], ax1[1], ax1[2]),
                (ax2[0], ax2[1], ax2[2]),
                (ax3[0], ax3[1], ax3[2]) )

This does the rest. Note that we've weaved in C-type declarations (ax1, ax2,
ax3) and Python access to the variables fed in. This function will probably
be quite slow -- because it doesn't know anything about the variables xpos,
ypos, zpos, it won't be able to speed up access to them. Now we will see what
we can do by declaring them to be of array-type before we start handling them
at all. We can do that by annotating in the function argument list. But
first, let's test that it works. From the directory in which you placed these
files, run:

.. code-block:: bash

   $ python axes_calculator_setup.py build_ext -i

Now, create a sample file that feeds in the particles:

.. code-block:: python

    import axes_calculator

    axes_calculator.examine_axes(xpos, ypos, zpos)

Most of the time in that function is spent in converting the data. So now we
can go back and we'll try again, rewriting our converter function to believe
that it's being fed arrays from NumPy:

.. code-block:: cython

   def examine_axes(np.ndarray[np.float64_t, ndim=1] xpos,
                    np.ndarray[np.float64_t, ndim=1] ypos,
                    np.ndarray[np.float64_t, ndim=1] zpos):
       cdef double ax1[3], ax2[3], ax3[3]
       cdef ParticleCollection particles
       cdef int i

       particles.npart = len(xpos)
       particles.xpos = <double *> malloc(particles.npart * sizeof(double))
       particles.ypos = <double *> malloc(particles.npart * sizeof(double))
       particles.zpos = <double *> malloc(particles.npart * sizeof(double))

       for i in range(particles.npart):
           particles.xpos[i] = xpos[i]
           particles.ypos[i] = ypos[i]
           particles.zpos[i] = zpos[i]

       calculate_axes(&particles, ax1, ax2, ax3)

       free(particles.xpos)
       free(particles.ypos)
       free(particles.zpos)

       return ( (ax1[0], ax1[1], ax1[2]),
                (ax2[0], ax2[1], ax2[2]),
                (ax3[0], ax3[1], ax3[2]) )

This should be substantially faster, assuming you feed it arrays. Now,
there's one last thing we can try. If we know our function won't modify our
arrays, and they are C-Contiguous, we can simply grab pointers to the data,
which avoids allocating and copying the data entirely:

.. code-block:: cython

   def examine_axes(np.ndarray[np.float64_t, ndim=1] xpos,
                    np.ndarray[np.float64_t, ndim=1] ypos,
                    np.ndarray[np.float64_t, ndim=1] zpos):
       cdef double ax1[3], ax2[3], ax3[3]
       cdef ParticleCollection particles

       particles.npart = len(xpos)
       particles.xpos = <double *> xpos.data
       particles.ypos = <double *> ypos.data
       particles.zpos = <double *> zpos.data

       calculate_axes(&particles, ax1, ax2, ax3)

       return ( (ax1[0], ax1[1], ax1[2]),
                (ax2[0], ax2[1], ax2[2]),
                (ax3[0], ax3[1], ax3[2]) )

But note! This will break or do weird things if you feed it arrays that are
non-contiguous.
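One defensive option on the calling side, sketched below, is to coerce the
inputs before handing them to the wrapper; ``np.ascontiguousarray`` copies
only when a copy is actually needed:

.. code-block:: python

    import numpy as np

    import axes_calculator  # the compiled module built above

    xpos, ypos, zpos = np.random.random((3, 1024))

    # Guarantee C-contiguous float64 input for the pointer-grabbing version.
    args = [np.ascontiguousarray(a, dtype=np.float64) for a in (xpos, ypos, zpos)]
    ax1, ax2, ax3 = axes_calculator.examine_axes(*args)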
At this point, you should have a mostly working piece of wrapper code. And it
was pretty easy! Let us know if you run into any problems, or if you are
interested in distributing your code with yt.

A complete set of files is available with this documentation. These are
slightly different, so that the whole thing will simply compile, but they
provide a useful example.

* `axes.c <../_static/axes.c>`_
* `axes.h <../_static/axes.h>`_
* `axes_calculator.pyx <../_static/axes_calculator.pyx>`_
* `axes_calculator_setup.py <../_static/axes_calculator_setup.txt>`_

Exporting Data from yt
----------------------

yt is installed alongside h5py. If you need to export your data from yt, to
share it with people or to use it inside another code, h5py is a good way to
do so. You can write out complete datasets with just a few commands. You have
to import, and then save things out into a file.

.. code-block:: python

    import h5py

    f = h5py.File("some_file.h5", mode="w")
    f.create_dataset("/data", data=some_data)

This will create ``some_file.h5`` if necessary and add a new dataset
(``/data``) to it.

Writing out in ASCII should be relatively straightforward. For instance:

.. code-block:: python

    f = open("my_file.txt", "w")
    for halo in halos:
        x, y, z = halo.center_of_mass()
        f.write("%0.2f %0.2f %0.2f\n" % (x, y, z))
    f.close()

This example could be extended to work with any data object's fields, as
well.
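Combining the two ideas, a short sketch of dumping fields from a yt data
object into an HDF5 file might look like the following (the dataset path and
the field selection are just examples):

.. code-block:: python

    import h5py
    import numpy as np

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    sp = ds.sphere("c", (50.0, "kpc"))

    with h5py.File("sphere_fields.h5", mode="w") as f:
        for field in ("density", "temperature"):
            # np.asarray strips the units and yields a plain ndarray.
            f.create_dataset(field, data=np.asarray(sp["gas", field]))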
Developing in yt
================

yt is an open-source project with a community of contributing scientists.
While you can use the existing framework within yt to help answer questions
about your own datasets, yt thrives by the addition of new functionality by
users just like yourself. Maybe you have a new data format that you would
like supported, a new derived quantity that you feel should be included, or a
new way of visualizing data--please add them to the code base! We are eager
to help you make it happen.

There are many ways to get involved with yt -- participating in the mailing
list, helping people out in IRC, providing suggestions for the documentation,
and contributing code!

.. toctree::
   :maxdepth: 2

   developing
   building_the_docs
   testing
   extensions
   debugdrive
   releasing
   creating_datatypes
   creating_derived_fields
   creating_frontend
   external_analysis
   deprecating_features

How to Do a Release
-------------------

Periodically, the yt development community issues new releases. yt loosely
follows `semantic versioning <https://semver.org/>`_. The type of release can
be read off from the version number used. Version numbers should follow the
scheme ``MAJOR.MINOR.PATCH``. There are three kinds of possible releases:

* Bugfix releases

  These releases should contain only fixes for bugs discovered in earlier
  releases and should not contain new features or API changes. Bugfix
  releases only increment the ``PATCH`` version number. Bugfix releases
  should *not* be generated by merging from the ``main`` branch, instead
  bugfix pull requests should be backported to a dedicated branch. See
  :ref:`doing-a-bugfix-release`. Version ``3.2.2`` is a bugfix release.

* Minor releases

  These releases happen when new features are deemed ready to be merged into
  the ``stable`` branch and should not happen on a regular schedule. Minor
  releases can also include fixes for bugs if the fix is determined to be too
  invasive for a bugfix release. Minor releases should *not* include
  backwards-incompatible changes and should not change APIs. If an API change
  is deemed to be necessary, the old API should continue to function but
  might trigger deprecation warnings. Minor releases should happen by merging
  the ``main`` branch into the ``stable`` branch. Minor releases should
  increment the ``MINOR`` version number and reset the ``PATCH`` version
  number to zero. Version ``3.3.0`` is a minor release.

* Major releases

  These releases happen when the development community decides to make major
  backwards-incompatible changes intentionally. In principle a major version
  release could include arbitrary changes to the library. Major version
  releases should only happen after extensive discussion and vetting among
  the developer and user community. Like minor releases, a major release
  should happen by merging the ``main`` branch into the ``stable`` branch.
  Major releases should increment the ``MAJOR`` version number and reset the
  ``MINOR`` and ``PATCH`` version numbers to zero. If it ever happens,
  version ``4.0.0`` will be a major release.

The job of doing a release differs depending on the kind of release. Below,
we describe the necessary steps for each kind of release in detail.

.. _doing-a-bugfix-release:

Doing a Bugfix Release
~~~~~~~~~~~~~~~~~~~~~~

As described above, bugfix releases are regularly scheduled updates for minor
releases to ensure fixes for bugs make their way out to users in a timely
manner. Since bugfix releases should not include new features, we do not
issue bugfix releases by simply merging from the development ``main`` branch
into the ``stable`` branch. Instead, commits are cherry-picked from the
``main`` branch to a backport branch, which is itself merged into ``stable``
when a release happens.

Backport branches are named after the minor version they descend from,
followed by an ``x``. For instance, ``yt-4.0.x`` is the backport branch for
all releases in the 4.0 series.

Backporting bugfixes can be done automatically using the
`MeeseeksBox bot <https://meeseeksbox.github.io/>`_. This necessitates having
a Github milestone dedicated to the release, configured with a comment in its
description such as ``on-merge: backport to yt-4.0.x``. Then, every PR that
was triaged into the milestone will be replicated as a backport PR by the bot
when it's merged into main.

Some backports are non-trivial and require human attention; if conflicts
occur, the bot will provide detailed instructions to perform the task
manually. In short, a manual backport consists of 4 steps:

- checking out the backport branch locally
- creating a new branch from there
- cherry-picking the merge commit from the original PR with
  ``git cherry-pick -m1 <merge-commit-sha>``
- opening a PR to the backport branch

Doing a Minor or Major Release
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is much simpler than a bugfix release. First, make sure that every
deprecated feature targeted for removal in the new release is removed from
the ``main`` branch, ideally in a single PR. Such a PR can be issued at any
point between the previous minor or major release and the new one.
Then, all that needs to happen is that the ``main`` branch gets merged into
the ``stable`` branch, and any conflicts that happen must be resolved, almost
always in favor of the state of the code on the ``main`` branch.

Incrementing Version Numbers and Tagging a Release
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Before creating the tag for the release, you must increment the version
numbers that are hard-coded in a few files in the yt source so that version
metadata for the code is generated correctly. This includes things like
``yt.__version__`` and the version that gets read by the Python Package Index
(PyPI) infrastructure.

The paths relative to the root of the repository for the three files that
need to be edited are:

* ``doc/source/conf.py``

  The ``version`` and ``release`` variables need to be updated.

* ``setup.py``

  The ``VERSION`` variable needs to be updated.

* ``yt/__init__.py``

  The ``__version__`` variable must be updated.

Once these files have been updated, commit these updates. This is the commit
we will tag for the release.

To actually create the tag, issue the following command from the ``stable``
branch:

.. code-block:: bash

   git tag <tag-name>

Where ``<tag-name>`` follows the project's naming scheme for tags (e.g.
``yt-3.2.1``). Once you are done, you will need to push the tag to github::

   git push origin --tags

This assumes that you have configured the remote ``origin`` to point at the
main yt git repository. If you are doing a minor or major version number
release, you will also need to update back to the development branch and
update the development version numbers in the same files.

Uploading to yt-project.org
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Before uploading the release to the Python Package Index (pypi.org) we will
first upload the package to yt-project.org. This facilitates building binary
wheels for pypi and binary conda packages on conda-forge before doing the
"official" release. This also ensures that there isn't a period of time when
users do ``pip install yt`` and end up downloading the source distribution
instead of one of the binary wheels.

To create the source distribution, issue the following command in the root of
the yt repository::

   $ python setup.py sdist

This will generate a tarball in a ``dist/`` directory located in the root of
the repository.

Access to yt-project.org is mediated via SSH login. Please contact one of the
current yt developers for access to the webserver running yt-project.org if
you do not already have it. You will need a copy of your SSH public key so
that your key can be added to the list of authorized keys. Once you login,
use e.g. ``scp`` to upload a copy of the source distribution tarball to
https://yt-project.org/sdist, like so::

   $ scp dist/yt-3.5.1.tar.gz yt_analysis@dickenson.dreamhost.com:yt-project.org/sdist

You may find it helpful to set up an ssh config for dickenson to make this
command a bit easier to execute.

Publishing
~~~~~~~~~~

We distribute yt on two main channels: PyPI.org and conda-forge, in this
order.

PyPI
++++

The publication process for PyPI is automated for the most part, via Github
actions, using ``.github/workflows/wheels.yaml``. Specifically, a release is
pushed to PyPI when a new git tag starting with ``yt-`` is pushed to the main
repo. Let's review the details here.

PyPI releases contain the source code (as a tarball), and wheels. Wheels are
compiled distributions of the source code. They are OS specific as well as
Python-version specific.
Producing wheels for every supported combination of OS and Python versions is
done with `cibuildwheel <https://cibuildwheel.readthedocs.io/>`_. Upload to
PyPI is automated via the Github Actions
`upload-artifact <https://github.com/actions/upload-artifact>`_ and
`download-artifact <https://github.com/actions/download-artifact>`_ actions.
Note that automated uploads are currently performed using Matt Turk's
credentials.

If that worked, you can skip to the next section. Otherwise, upload can be
performed manually by first downloading the artifacts ``wheels`` and
``tarball`` from the workflow webpage, then at the command line (make sure
that the ``dist`` directory doesn't exist or is empty):

.. code-block:: bash

   unzip tarball.zip -d dist
   unzip wheels.zip -d dist
   python -m pip install --upgrade twine
   twine upload dist/*

You will be prompted for your PyPI credentials and then the package should
upload. Note that for this to complete successfully, you will need an account
on PyPI and that account will need to be registered as an "owner" or
"maintainer" of the yt package.

``conda-forge``
+++++++++++++++

Conda-forge packages for yt are managed via the yt feedstock, located at
https://github.com/conda-forge/yt-feedstock.

When a release is pushed to PyPI, a bot should detect the new version and
issue a PR to the feedstock with the new version automatically. When this
feedstock is updated, make sure that the SHA256 hash of the tarball matches
the one you uploaded to PyPI and that the version number matches the one that
is being released. In case the automated PR fails CI, feedstock maintainers
are allowed to push to the bot's branch with any fixes required.

Should you need to update the feedstock manually, you will need to update the
``meta.yaml`` file located in the ``recipe`` folder in the root of the
feedstock repository. Most likely you will only need to update the version
number and the SHA256 hash of the tarball. If yt's dependencies change you
may also need to update the recipe. Once you have updated the recipe, propose
a pull request on github and merge it once all builds pass.

Announcing
~~~~~~~~~~

After the release is uploaded to `PyPI <https://pypi.org/project/yt/>`_ and
`conda-forge <https://anaconda.org/conda-forge/yt>`_, you should send out an
announcement e-mail to the yt mailing lists as well as other possibly
interested mailing lists for all but bugfix releases.

Creating a Github release attached to the tag also offers a couple of
advantages. Auto-generated release notes can be a good starting point, though
it's best to edit out PRs that do not directly affect users, and these notes
can be edited before (draft mode) and after the release, so errors can be
corrected after the fact.

.. _testing:

Testing
=======

yt includes a testing suite that one can run on the code base to ensure that
no breaks in functionality have occurred. This suite is based on the
`pytest <https://docs.pytest.org/en/latest/>`_ testing framework and consists
of two types of tests:

* Unit tests. These make sure that small pieces of code run (or fail) as
  intended in predictable contexts. See :ref:`unit_testing`.
* Answer tests. These generate outputs from the user-facing yt API and
  compare them against the outputs produced using an older, "known good",
  version of yt. See :ref:`answer_testing`. These tests ensure consistency in
  results as development proceeds.

We recommend that developers run tests locally on changed features when
developing to help ensure that the new code does not break any existing
functionality.
To further this goal and ensure that changes do not propagate errors or have
unintentional consequences on the rest of the codebase, the full test suite is
run through our continuous integration (CI) servers. CI is run on every push
to open pull requests on a variety of computational platforms using Github
Actions and a `continuous integration server <https://tests.yt-project.org>`_
at the University of Illinois. The full test suite may take several hours to
run, so we do not recommend running it locally.

.. _unit_testing:

Unit Testing
------------

What Do Unit Tests Do
^^^^^^^^^^^^^^^^^^^^^

Unit tests are tests that operate on some small piece of code and verify that
it behaves as intended. In practice, this means that we write simple
assertions (or ``assert`` statements) about results and then let pytest go
through them. A test is considered a success when no error (in particular
``AssertionError``) occurs.

How to Run the Unit Tests
^^^^^^^^^^^^^^^^^^^^^^^^^

One can run the unit tests by navigating to the base directory of the yt git
repository and invoking ``pytest``:

.. code-block:: bash

   $ cd $YT_GIT
   $ pytest

where ``$YT_GIT`` is the path to the root of the yt git repository.

If you only want to run tests in a specific file, you can do so by specifying
the path of the test relative to the ``$YT_GIT/`` directory. For example, if
you want to run the ``plot_window`` tests, you'd run:

.. code-block:: bash

   $ pytest yt/visualization/tests/test_plotwindow.py

from the yt source code root directory. Additionally, if you only want to run
a specific test in a test file (rather than all of the tests contained in the
file), such as ``test_all_fields`` in ``plot_window.py``, you can do so by
running:

.. code-block:: bash

   $ pytest yt/visualization/tests/test_plotwindow.py::test_all_fields

from the yt source code root directory. See the pytest documentation for more
on how to `invoke pytest <https://docs.pytest.org/en/latest/how-to/usage.html>`_
and `select tests <https://docs.pytest.org/en/latest/how-to/usage.html#specifying-which-tests-to-run>`_.

Unit Test Tools
^^^^^^^^^^^^^^^

yt provides several helper functions and decorators to write unit tests.
These tools all reside in the ``yt.testing`` module. Describing them all in
detail is outside the scope of this document, as in some cases they belong to
other packages.

How To Write New Unit Tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^

To create new unit tests:

#. Create a new ``tests/`` directory next to the file containing the
   functionality you want to test and add an empty ``__init__.py`` file to
   it. If a ``tests/`` directory already exists, there is no need to create a
   new one.
#. Inside this new ``tests/`` directory, create a new python file prefixed
   with ``test_`` and including the name of the functionality or source file
   being tested. If a file testing the functionality you're interested in
   already exists, please add your tests there.
#. Inside this new ``test_`` file, create functions prefixed with ``test_``
   that accept no arguments.
#. Each test function should do some work that tests some functionality and
   should also verify that the results are correct using assert statements or
   functions.
#. If a dataset is needed, use ``fake_random_ds``, ``fake_amr_ds``, or
   ``fake_particle_ds`` (the former two of which have support for particles
   that may be utilized) and be sure to test for several combinations of
   ``nprocs`` so that domain decomposition can be tested as well.
#. To iterate over multiple options, or combinations of options, use the
   `@pytest.mark.parametrize decorator
   <https://docs.pytest.org/en/latest/how-to/parametrize.html>`_; a short
   sketch follows this list.
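Putting these guidelines together, a minimal test in this style might look
like the sketch below; the grid size and the choice of assertions are
illustrative only:

.. code-block:: python

    import numpy as np
    import pytest

    from yt.testing import fake_random_ds


    @pytest.mark.parametrize("nprocs", [1, 2, 4, 8])
    def test_total_density(nprocs):
        # Build a small fake dataset, varying the domain decomposition.
        ds = fake_random_ds(16, nprocs=nprocs)
        ad = ds.all_data()
        # The zone-by-zone sum should be finite and positive regardless of
        # how the domain was decomposed.
        total = ad["gas", "density"].sum()
        assert np.isfinite(total)
        assert total > 0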
For an example of how to write unit tests, look at the file ``yt/data_objects/tests/test_covering_grid.py``, which covers a great deal of functionality.

Debugging Failing Tests
^^^^^^^^^^^^^^^^^^^^^^^

When writing new tests, one often exposes bugs or writes a test incorrectly, causing an exception to be raised or a failed test. To help debug issues like this, ``pytest`` can `drop into a debugger `_ whenever a test fails or raises an exception.

In addition, one can debug more crudely using print statements. To do this, you can add print statements to the code as normal. However, the test runner will capture all print output by default. To ensure that output gets printed to your terminal while the tests are running, pass ``-s`` (which disables stdout and stderr capturing) to the ``pytest`` executable:

.. code-block:: bash

   $ pytest -s

Lastly, to quickly debug a specific failing test, it is best to only run that one test during your testing session. This can be accomplished by explicitly passing the name of the test function or class to ``pytest``, as in the following example:

.. code-block:: bash

   $ pytest yt/visualization/tests/test_plotwindow.py::TestSetWidth

This pytest invocation will only run the tests defined by the ``TestSetWidth`` class. See the `pytest documentation `_ for more on the various ways to invoke pytest.

Finally, to determine which test is failing while the tests are running, it helps to run the tests in "verbose" mode. This can be done by passing the ``-v`` option to the ``pytest`` executable:

.. code-block:: bash

   $ pytest -v

All of the above ``pytest`` options can be combined. So, for example, to run the ``TestSetWidth`` tests with verbose output, letting the output of print statements come out on the terminal prompt, and enabling pdb debugging on errors or test failures, one would do:

.. code-block:: bash

   $ pytest yt/visualization/tests/test_plotwindow.py::TestSetWidth -v -s --pdb

More pytest options can be found by using the ``--help`` flag:

.. code-block:: bash

   $ pytest --help

.. _answer_testing:

Answer Testing
--------------

.. note::

   This section documents answer tests as they are run with ``pytest``. The plan is to switch to ``pytest`` for answer tests at some point in the future, but currently (July 2024) answer tests are still implemented and run with ``nose``. We generally encourage developers to use ``pytest`` for any new tests, but if you need to change or update one of the older ``nose`` tests, or are, e.g., writing a new frontend, an `older version of this documentation `_ describes how the ``nose`` tests work.

.. note::

   Given that nose never had support for Python 3.10 (which as of yt 4.4 is our oldest supported version), it is necessary to patch it to get tests running. This is the command we run on CI to this end:
   ``find .venv/lib/python3.10/site-packages/nose -name '*.py' -exec sed -i -e s/collections.Callable/collections.abc.Callable/g '{}' ';'``

What Do Answer Tests Do
^^^^^^^^^^^^^^^^^^^^^^^

Answer tests use `actual data `_ to test reading, writing, and various manipulations of that data. Answer tests are how we test frontends (as opposed to operations) in yt. In order to ensure that each of these operations is performed correctly, we store gold standard versions of yaml files called answer files. An answer file contains the results of having run the answer tests; comparing it against a reference enables us to verify that results do not drift over time.
.. _run_answer_testing:

How to Run the Answer Tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^

In order to run the answer tests locally:

* Create a directory to hold the data you'll be using for the answer tests you'll be writing or running. This directory should be outside the yt git repository, in a place that is logical to where you would normally store data.
* Add folders of the required data to this directory. Other yt data, such as ``IsolatedGalaxy``, can be downloaded to this directory as well.
* Tell yt where it can find the data. This is done by setting the config parameter ``test_data_dir`` to the path of the directory with the test data downloaded from https://yt-project.org/data/. For example,

  .. code-block:: bash

     $ yt config set yt test_data_dir /Users/tomservo/src/yt-data

  This should only need to be done once (unless you change where you're storing the data, in which case you'll need to repeat this step so yt looks in the right place).
* Generate or obtain a set of gold standard answer files. In order to generate gold standard answer files, switch to a "known good" version of yt and then run the answer tests as described below. Once done, switch back to the version of yt you wish to test.
* Now you're ready to run the answer tests!

As an example, let's focus on running the answer tests for the tipsy frontend. Let's also assume that we need to generate a gold standard answer file. To do this, we first switch to a "known good" version of yt and run the following command from the top of the yt git directory (i.e., ``$YT_GIT``) in order to generate the gold standard answer file:

.. note::

   It's possible to run the answer tests for **all** the frontends, but due to the large number of test datasets we currently use, this is not normally done except on the yt project's continuous integration server.

.. code-block:: bash

   $ cd $YT_GIT
   $ pytest --with-answer-testing --answer-store --local-dir="$HOME/Documents/test" -k "TestTipsy"

The ``--with-answer-testing`` option tells pytest that we want to run answer tests. Without this option, the unit tests will be run instead of the answer tests. The ``--answer-store`` option tells pytest to save the results produced by each test to a local gold standard answer file. Omitting this option is how we tell pytest to compare the results to a gold standard. The ``--local-dir`` option specifies where the gold standard answer file will be saved (or is already located, in the case that ``--answer-store`` is omitted). The ``-k`` option tells pytest that we only want to run tests whose name matches the given pattern.

.. note::

   The path specified by ``--local-dir`` can, but does not have to be, the same directory as the ``test_data_dir`` configuration variable. It is best practice, however, to keep the data that serves as input to yt separate from the answers produced by yt's tests.

.. note::

   The value given to the ``-k`` option (e.g., ``"TestTipsy"``) is the name of the class containing the answer tests. You do not need to specify the path.

The newly generated gold standard answer file will be named ``tipsy_answers_xyz.yaml``, where ``xyz`` denotes the version number of the gold standard answers. The answer version number is determined by the ``answer_version`` attribute of the class being tested (e.g., ``TestTipsy.answer_version``).

.. note::

   Changes made to yt sometimes result in known, expected changes to the way certain operations behave. This necessitates updating the gold standard answer files.
   This process is accomplished by changing the version number specified in each answer test class; the answer version for each test class can be found as the ``answer_version`` attribute of that class (e.g., ``TestTipsy.answer_version``).

Once the gold standard answer file has been generated, we switch back to the version of yt we want to test, recompile if necessary, and run the tests using the following command:

.. code-block:: bash

   $ pytest --with-answer-testing --local-dir="$HOME/Documents/test" -k "TestTipsy"

The result of each test is printed to STDOUT. If a test passes, pytest prints a period. If a test fails, encounters an exception, or errors out for some reason, then an F is printed. Explicit descriptions for each test are also printed if you pass ``-v`` to the ``pytest`` executable. Similar to the unit tests, the ``-s`` and ``--pdb`` options can be passed as well.

How to Write Answer Tests
^^^^^^^^^^^^^^^^^^^^^^^^^

To add a new answer test:

#. Create a new directory called ``tests`` inside the directory where the component you want to test resides and add an empty ``__init__.py`` file to it.
#. Create a new file in the ``tests`` directory that will hold the new answer tests. The name of the file should begin with ``test_``.
#. Create a new class whose name begins with ``Test`` (e.g., ``TestTipsy``).
#. Decorate the class with ``pytest.mark.answer_test``. This decorator is used to tell pytest which tests are answer tests; tests that do not have this decorator are considered to be unit tests.
#. Add the following three attributes to the class: ``answer_file=None``, ``saved_hashes=None``, and ``answer_version=000``. These attributes are used by the ``hashing`` fixture (discussed below) to automate the creation of new answer files as well as facilitate the comparison to existing answers.
#. Add methods to the class that test a number of different fields and data objects. If these methods are performing calculations or data manipulation, they should store the result in an ``ndarray``, if possible. This array should be added to the ``hashes`` dictionary (see below) like so: ``self.hashes.update({<test_name>: <result>})``, where ``<test_name>`` is the name of the function from ``yt/utilities/answer_testing/answer_tests.py`` that is being used and ``<result>`` is the ``ndarray`` holding the result.

If you are adding to a frontend that has tests already, simply add methods to the existing test class.

There are several things that can make the test writing process easier:

* ``yt/utilities/answer_testing/testing_utilities.py`` contains a large number of helper functions.
* Most frontends end up needing to test much of the same functionality as other frontends. As such, a list of functions that perform such work can be found in ``yt/utilities/answer_testing/answer_tests.py``.
* `Fixtures `_! You can find the set of fixtures that have already been built for yt in ``$YT_GIT/conftest.py``. If you need/want to add additional fixtures, please add them there.
* The `parametrize decorator `_ is extremely useful for performing iteration over various combinations of test parameters. It should be used whenever possible. The use of this decorator also allows pytest to write the names and values of the test parameters to the generated answer files, which can make debugging failing tests easier, since one can easily see exactly which combination of parameters was used for a given test.
* It is also possible to employ the ``requires_ds`` decorator to ensure that a test does not run unless a specific dataset is found, though this is not necessary.
  If the dataset is parametrized over, then the ``ds`` fixture found in the root ``conftest.py`` file performs the same check and marks the test as failed if the dataset isn't found.

Here is what a minimal example might look like for a new frontend:

.. code-block:: python

   # Content of yt/frontends/new_frontend/tests/test_outputs.py
   import pytest

   from yt.utilities.answer_testing.answer_tests import field_values

   # Parameters to test with
   ds1 = "my_first_dataset"
   ds2 = "my_second_dataset"
   field1 = ("Gas", "Density")
   field2 = ("Gas", "Temperature")
   obj1 = None
   obj2 = ("sphere", ("c", (0.1, "unitary")))


   @pytest.mark.answer_test
   class TestNewFrontend:
       answer_file = None
       saved_hashes = None
       answer_version = "000"

       @pytest.mark.usefixtures("hashing")
       @pytest.mark.parametrize("ds", [ds1, ds2], indirect=True)
       @pytest.mark.parametrize("field", [field1, field2], indirect=True)
       @pytest.mark.parametrize("dobj", [obj1, obj2], indirect=True)
       def test_fields(self, ds, field, dobj):
           self.hashes.update({"field_values": field_values(ds, field, dobj)})

Answer test examples can be found in ``yt/frontends/enzo/tests/test_outputs.py``.

.. _update_image_tests:

Creating and Updating Image Baselines for pytest-mpl Tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

We use `pytest-mpl `_ for image comparison tests. These tests take the form of functions, which must be decorated with ``@pytest.mark.mpl_image_compare`` and return a ``matplotlib.figure.Figure`` object (a minimal sketch is given at the end of this section). The collection of reference images is kept as a git submodule in ``tests/pytest_mpl_baseline/``.

There are 4 situations where updating reference images may be necessary:

- adding new tests
- bugfixes
- intentional change of style in yt
- old baseline fails with a new version of matplotlib, but changes are not noticeable to the human eye

The process of updating images is the same in all cases. It involves opening two Pull Requests (PRs) that we'll number PR1 and PR2.

1. open a Pull Request (PR1) to yt's main repo with the code changes
2. wait for test jobs to complete
3. go to the "Checks" tab on the PR page (``https://github.com/yt-project/yt/pull//checks``)
4. if all tests passed, you're done!
5. if tests other than image tests failed, fix them, and go back to step 2. Otherwise, if only image tests failed, navigate to the "Build and Tests" job summary page.
6. at the bottom of the page, you'll find "Artifacts". Download ``yt_pytest_mpl_results.zip``, unzip it, and open ``fig_comparison.html`` therein; this document is an interactive report of the test job. Inspect failed test results and verify that any differences are either intended or insignificant. If they are not, fix the code and go back to step 2.
7. clone ``https://github.com/yt-project/yt_pytest_mpl_baseline.git``
8. download the other artifact (``yt_pytest_mpl_new_baseline.zip``) and unzip it within your clone of ``yt_pytest_mpl_baseline``
9. create a branch, commit all changes, and open a Pull Request (PR2) to ``https://github.com/yt-project/yt_pytest_mpl_baseline`` (PR2 should link to PR1)
10. wait for this second PR to be merged
11. now it's time to update PR1: navigate back to your local copy of ``yt``'s main repository
12. run the following commands

    .. code-block:: bash

       $ git submodule update --init
       $ cd tests/pytest_mpl_baseline
       $ git checkout main
       $ git pull
       $ cd ../
       $ git add pytest_mpl_baseline
       $ git commit -m "update image test baseline"
       $ git push

13. go back to step 2. This time everything should pass. If not, ask for help!
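For reference, here is a minimal sketch of what such an image comparison test might look like (the test name and the plotted data are hypothetical, chosen purely for illustration; the ``mpl_image_compare`` marker and the returned ``Figure`` are the parts ``pytest-mpl`` requires):

.. code-block:: python

   import matplotlib.pyplot as plt
   import numpy as np
   import pytest


   @pytest.mark.mpl_image_compare
   def test_sine_wave_image():
       # pytest-mpl renders the returned figure and compares it, pixel by
       # pixel, against the stored baseline image for this test
       fig, ax = plt.subplots()
       x = np.linspace(0, 2 * np.pi, 100)
       ax.plot(x, np.sin(x))
       return fig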
.. note::

   Though it is technically possible to (re)generate reference images locally, it is best not to, because at a pixel level, matplotlib's behaviour is platform-dependent. By letting CI runners generate images, we ensure pixel-perfect comparison is possible in CI, which is where image comparison tests are most often run.

.. _deprecated_generic_image:

How to Write Image Comparison Tests (deprecated API)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. warning::

   This section describes a deprecated API. New test code should follow :ref:`update_image_tests`.

Many of yt's operations involve creating and manipulating images. As such, we have a number of tests designed to compare images. These tests employ functionality from matplotlib to automatically compare images and detect differences, if any. Image comparison tests are used in the plotting and volume rendering machinery.

The easiest way to use the image comparison tests is to make use of the ``generic_image`` function. As an argument, this function takes a function the test machinery can call which will save an image to disk. The test will then find any images that get created and compare them with the stored "correct" answer. Here is an example test function (from ``yt/visualization/tests/test_raw_field_slices.py``):

.. code-block:: python

   import pytest

   import yt
   from yt.utilities.answer_testing.answer_tests import generic_image
   from yt.utilities.answer_testing.testing_utilities import data_dir_load, requires_ds

   # Test data
   raw_fields = "Laser/plt00015"


   def compare(ds, field):
       def slice_image(im_name):
           sl = yt.SlicePlot(ds, "z", field)
           sl.set_log("all", False)
           image_file = sl.save(im_name)
           return image_file

       gi = generic_image(slice_image)
       # generic_image returns a list. In this case, there's only one entry,
       # which is a np array with the data we want
       assert len(gi) == 1
       return gi[0]


   @pytest.mark.answer_test
   @pytest.mark.usefixtures("temp_dir")
   class TestRawFieldSlices:
       answer_file = None
       saved_hashes = None
       answer_version = "000"

       @pytest.mark.usefixtures("hashing")
       @requires_ds(raw_fields)
       def test_raw_field_slices(self, field):
           ds = data_dir_load(raw_fields)
           gi = compare(ds, field)
           self.hashes.update({"generic_image": gi})

.. note::

   The inner function ``slice_image`` can create any number of images, as long as the corresponding filenames conform to the prefix.

Another good example of an image comparison test is the ``plot_window_attribute`` function defined in ``yt/utilities/answer_testing/answer_tests.py`` and used in ``yt/visualization/tests/test_plotwindow.py``. This sort of image comparison test is more useful if you find yourself writing a ton of boilerplate code to get your image comparison test working. The ``generic_image`` function is more useful if you only need to do a one-off image comparison test.

Updating Answers
~~~~~~~~~~~~~~~~

In order to regenerate answers for a particular set of tests, it is sufficient to change the ``answer_version`` attribute in the desired test class.

When adding tests to an existing set of answers (like ``local_owls_000.yaml`` or ``local_varia_000.yaml``), it is considered best practice to first submit a pull request adding the tests WITHOUT incrementing the version number. Then, allow the tests to run (resulting in "no old answer" errors for the missing answers). If no other failures are present, you can then increment the version number to regenerate the answers. This way, we can avoid accidentally covering up test breakages.
.. _handling_dependencies:

Handling yt Dependencies
------------------------

Our dependencies are specified in ``pyproject.toml``. Hard dependencies are found in ``project.dependencies``, while optional dependencies are specified in ``project.optional-dependencies``. The ``full`` target contains the specs to run our test suite, which are intended to be as modern as possible (we don't set upper limits to versions unless we need to). The ``test`` target specifies the tools needed to run the tests, but not needed by yt itself. Documentation and typechecking requirements are found in ``requirements/``, and used in ``tests/ci_install.sh``.

**Python version support.** We vow to follow numpy's deprecation plan regarding our supported versions for Python and numpy, defined formally in `NEP 29 `_, but generally support larger version intervals than recommended in this document.

**Third party dependencies.** We attempt to make yt compatible with a wide variety of upstream software versions. However, sometimes a specific version of a project that yt depends on causes some breakage and must be blacklisted in the tests, or a more experimental project that yt depends on optionally might change sufficiently that the yt community decides not to support an old version of that project.

**Note.** Some of our optional dependencies are not trivial to install and their support may vary across platforms.

If you would like to add a new dependency for yt (even an optional dependency), or would like to update a version of a yt dependency, you must edit the ``pyproject.toml`` file. For new dependencies, simply append the name of the new dependency to the end of the relevant dependency list, along with a pin to the latest version number of the package. To update a package's version, simply update the version number in the entry for that package.

Finally, we also run a set of tests with "minimal" dependencies installed. When adding tests that depend on an optional dependency, you can wrap the test with the ``yt.testing.requires_module`` decorator to ensure it does not run during the minimal dependency tests (see ``yt/frontends/amrvac/tests/test_read_amrvac_namelist.py`` for a good example). If for some reason you need to update the listing of packages that are installed for the "minimal" dependency tests, you will need to update ``minimal_requirements.txt``.

././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.243151 yt-4.4.0/doc/source/examining/0000755000175100001770000000000014714401715015610 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/examining/Loading_Data_via_Functions.ipynb0000644000175100001770000002545414714401662024063 0ustar00runnerdocker
{ "cells": [ { "cell_type": "markdown", "id": "35cb9c22-04eb-476c-a7b6-afd27690be4d", "metadata": {}, "source": [ "# Loading Data via Functions\n", "\n", "One of the things that yt provides is the notion of \"derived fields.\" These are fields that exist only in memory, and are defined by a functional form, typically transforming other fields. Starting with version 4.1 of yt, however, we have made it much, much easier to define so-called \"on-disk\" fields through their functional forms, akin to how derived fields are generated. **At present, this is only available for grid-based frontends. 
Extension to other types is anticipated in future versions of yt.**\n", "\n", "What this means is that if you have a way to grab data -- at any resolution -- but don't want to either load it into memory in advance or write a complete \"frontend\", you can just write some functions and use those to construct a fully-functional dataset using the existing `load_amr_grids` and `load_uniform_grid` functions, supplying *functions* instead of arrays.\n", "\n", "There are a few immediate use cases that can be seen for this:\n", "\n", " - Data is accessible through another library, for instance if a library exists that reads subsets of data (or regularizes that data to given regions) or if you are calling yt from an *in situ* analysis library\n", " - Data can be remotely accessed on-demand\n", " - You have a straightforward data format for which a frontend does not exist\n", " - All of the data can be generated through an analytical definition\n", " \n", "The last one is what we'll use to demonstrate this. Let's imagine that I had a grid structure that I wanted to explore, but I wanted all of my data to be generated through functions that were exclusively dependent on the spatial position." ] }, { "cell_type": "code", "execution_count": null, "id": "fd2b8726-5e3b-4641-8b13-7c1163336c8b", "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import numpy as np\n", "\n", "import yt" ] }, { "cell_type": "markdown", "id": "2f2074a7-a71a-4032-8e92-11e908e0f15c", "metadata": {}, "source": [ "The example we've cooked up is going to be a bit silly, and we'll demonstrate it a little bit with one and two dimensional setups before getting into the full yt demonstration. (If you have ideas for a better one, please do suggest them!) We'll start with some overlapping trigonometric functions, which we'll attenuate by their distance from the center.\n", "\n", "So we'll set up some coefficients for different periods of the functions (the `coefficients` variable) and we'll sum up those functions. The next thing we'll do, so that we have some global attenuation we can see, is use a Gaussian function centered at the center of our domain." ] }, { "cell_type": "code", "execution_count": null, "id": "27180398-def2-4a65-bc3f-c807ee0c13a4", "metadata": {}, "outputs": [], "source": [ "x = np.mgrid[0.0:1.0:512j]\n", "coefficients = (100, 50, 30, 10)\n", "y = sum(c * np.sin(2 ** (2 + i) * (x * np.pi * 2)) for i, c in enumerate(coefficients))\n", "atten = np.exp(-20 * (1.1 * (x - 0.5)) ** 2)" ] }, { "cell_type": "markdown", "id": "22c9b8e5-32ca-4402-95e6-452b3d71418c", "metadata": {}, "source": [ "Now let's plot it! The top right is the attenuation, bottom left is the base sum of trig functions, and then the bottom right is the product." ] }, { "cell_type": "code", "execution_count": null, "id": "dbd391fb-10ff-4f2e-8b3e-bbd11f4cc4a6", "metadata": {}, "outputs": [], "source": [ "fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, dpi=150)\n", "ax4.plot(x, y * atten)\n", "ax2.plot(x, atten)\n", "ax3.plot(x, y)\n", "ax1.axis(False);" ] }, { "cell_type": "markdown", "id": "958c652a-987a-40cf-86ff-fa0bd7050996", "metadata": {}, "source": [ "Well, that looks like it might have some structure at different scales! We should be able to use something like this to show sampling errors and so on in AMR, and it'll have structure down to a reasonably high level of detail. Let's briefly demonstrate in 2D before moving on to 3D, using similar functions. 
This is all basically the same as the previous cells, except we're overlaying along a couple different dimensions." ] }, { "cell_type": "code", "execution_count": null, "id": "9e7897e1-b593-43bc-aeaa-1e86ff94566a", "metadata": {}, "outputs": [], "source": [ "x, y = np.mgrid[0.0:1.0:512j, 0.0:1.0:512j]\n", "\n", "x_coefficients = (100, 50, 30, 10)\n", "y_coefficients = (20, 90, 80, 30)\n", "\n", "z = sum(\n", " c * np.sin(2 ** (1 + i) * (x * np.pi * 2 + 2**i))\n", " for i, c in enumerate(x_coefficients)\n", ") * sum(\n", " c * np.sin(2 ** (1 + i) * (y * np.pi * 2 + 2**i))\n", " for i, c in enumerate(y_coefficients)\n", ")\n", "r = np.sqrt(((x - 0.5) ** 2) + ((y - 0.5) ** 2))\n", "atten = np.exp(-20 * (1.1 * r**2))\n", "\n", "plt.pcolormesh(x, y, z * atten)" ] }, { "cell_type": "markdown", "id": "2fc53e45-ff35-4d15-a2eb-b74a32be9609", "metadata": {}, "source": [ "This is an image of the full dataset, but what happens if we coarsely sample, as we would in AMR simulations? We can stride along the axes (let's say, every 32nd point) to get an idea of what that looks like." ] }, { "cell_type": "code", "execution_count": null, "id": "797c884d-3ccb-412b-af13-a46de54e5ab9", "metadata": {}, "outputs": [], "source": [ "plt.pcolormesh(x[::32, ::32], y[::32, ::32], (z * atten)[::32, ::32])" ] }, { "cell_type": "markdown", "id": "9355880c-8579-4dd9-b24f-bf931f1d84f3", "metadata": {}, "source": [ "For moving to 3D, I'm going to add on some higher-frequency modes, which I'll also amplify a bit more. We'll use the standard attenuation (although a directionally-dependent attenuation would be nice, wouldn't it?)\n", "\n", "And this time, we'll write them into a *special* function. This is the function we'll use to supply to our `load_amr_grids` function -- it has different arguments than a derived field; because it is assumed to always return three-dimensional data, it accepts a proper grid object (which may have spatial or other attributes) and it also receives the field name.\n", "\n", "Using this, we will compute the cell-centers for all of the cells in the grid, and use them to compute our overlapping functions and apply the attenuation. In doing so, we should be able to see structure at different levels. This is the same way you would write a function that loaded a file from disk, or received over HTTP, or used a library to read data; by having access to the grid and the field name, it should be completely determinative of how to access the data." 
] }, { "cell_type": "code", "execution_count": null, "id": "9ce0d056-2e08-4ff2-bddf-1594cfa4a801", "metadata": {}, "outputs": [], "source": [ "x_coefficients = (100, 50, 30, 10, 20)\n", "y_coefficients = (20, 90, 80, 30, 30)\n", "z_coefficients = (50, 10, 90, 40, 40)\n", "\n", "\n", "def my_function(grid, field_name):\n", " # We want N points from the cell-center to the cell-center on the other side\n", " x, y, z = (\n", " np.linspace(\n", " grid.LeftEdge[i] + grid.dds[i] / 2,\n", " grid.RightEdge[i] - grid.dds[i] / 2,\n", " grid.ActiveDimensions[i],\n", " )\n", " for i in (0, 1, 2)\n", " )\n", " r = np.sqrt(\n", " ((x.d - 0.5) ** 2)[:, None, None]\n", " + ((y.d - 0.5) ** 2)[None, :, None]\n", " + ((z.d - 0.5) ** 2)[None, None, :]\n", " )\n", " atten = np.exp(-20 * (1.1 * r**2))\n", " xv = sum(\n", " c * np.sin(2 ** (1 + i) * (x.d * np.pi * 2))\n", " for i, c in enumerate(x_coefficients)\n", " )\n", " yv = sum(\n", " c * np.sin(2 ** (1 + i) * (y.d * np.pi * 2))\n", " for i, c in enumerate(y_coefficients)\n", " )\n", " zv = sum(\n", " c * np.sin(2 ** (1 + i) * (z.d * np.pi * 2))\n", " for i, c in enumerate(z_coefficients)\n", " )\n", " return atten * (xv[:, None, None] * yv[None, :, None] * zv[None, None, :])" ] }, { "cell_type": "markdown", "id": "479d7e97-a3ad-409e-a067-e722b3acab0f", "metadata": {}, "source": [ "We'll use a standard grid hierarchy -- which is used internally in yt testing -- and fill it up with a single field that provides this function rather than any arrays. We'll then use `load_amr_grids` to read it; note that we're not creating any arrays ahead of time." ] }, { "cell_type": "code", "execution_count": null, "id": "132c3f99-5b6e-4042-8fc5-49acb8cd9d8c", "metadata": {}, "outputs": [], "source": [ "from yt.testing import _amr_grid_index\n", "\n", "grid_data = []\n", "for level, le, re, dims in _amr_grid_index:\n", " grid_data.append(\n", " {\n", " \"level\": level,\n", " \"left_edge\": le,\n", " \"right_edge\": re,\n", " \"dimensions\": dims,\n", " \"density\": my_function,\n", " }\n", " )\n", "ds = yt.load_amr_grids(\n", " grid_data, [32, 32, 32], bbox=np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])\n", ")" ] }, { "cell_type": "markdown", "id": "2972eb49-823b-41d2-a132-4a0fd4be2904", "metadata": {}, "source": [ "And finally, we'll demonstrate it with a slice along the y axis." ] }, { "cell_type": "code", "execution_count": null, "id": "d35407c6-da5d-47e1-93bd-54672c33fbe8", "metadata": {}, "outputs": [], "source": [ "p = ds.r[:, 0.5, :].plot(\"density\").set_log(\"density\", False)" ] }, { "cell_type": "markdown", "id": "cfd5d56e-5d3a-4537-98a5-22f659268f80", "metadata": {}, "source": [ "And with a quick zoom, we can see that the structure is indeed present *and* subject to the sampling effects we discussed earlier." 
] }, { "cell_type": "code", "execution_count": null, "id": "aa420efd-536f-45e2-90ae-51bc8dc68c5f", "metadata": {}, "outputs": [], "source": [ "p.zoom(4)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" } }, "nbformat": 4, "nbformat_minor": 5 } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/examining/Loading_Generic_Array_Data.ipynb0000644000175100001770000006465714714401662023766 0ustar00runnerdocker
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Loading Generic Array Data\n", "\n", "Even if your data is not strictly related to fields commonly used in\n", "astrophysical codes or your code is not supported yet, you can still feed it to\n", "yt to use its advanced visualization and analysis facilities. The only\n", "requirement is that your data can be represented as three-dimensional NumPy arrays with a consistent grid structure. What follows are some common examples of loading in generic array data that you may find useful. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Generic Unigrid Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The simplest case is that of a single grid of data spanning the domain, with one or more fields. The data could be generated from a variety of sources; we'll just give three common examples:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Data generated \"on-the-fly\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The most common example is that of data that is generated in memory from the currently running script or notebook. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "jupyter": { "outputs_hidden": true } }, "outputs": [], "source": [ "import numpy as np\n", "from numpy.random import default_rng # we'll be generating random numbers here\n", "\n", "import yt\n", "\n", "prng = default_rng(seed=42)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this example, we'll just create a 3-D array of random floating-point data using NumPy:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "jupyter": { "outputs_hidden": true } }, "outputs": [], "source": [ "arr = prng.random(size=(64, 64, 64))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To load this data into yt, we need to associate it with a field. The `data` dictionary consists of one or more fields, each consisting of a tuple of a NumPy array and a unit string. 
Then, we can call `load_uniform_grid`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "data = {\"density\": (arr, \"g/cm**3\")}\n", "bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])\n", "ds = yt.load_uniform_grid(data, arr.shape, length_unit=\"Mpc\", bbox=bbox, nprocs=64)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`load_uniform_grid` takes the following arguments and optional keywords:\n", "\n", "* `data` : This is a dict of numpy arrays, where the keys are the field names\n", "* `domain_dimensions` : The domain dimensions of the unigrid\n", "* `length_unit` : The unit that corresponds to `code_length`, can be a string, tuple, or floating-point number\n", "* `bbox` : Size of computational domain in units of `code_length`\n", "* `nprocs` : If greater than 1, will create this number of subarrays out of data\n", "* `sim_time` : The simulation time in seconds\n", "* `mass_unit` : The unit that corresponds to `code_mass`, can be a string, tuple, or floating-point number\n", "* `time_unit` : The unit that corresponds to `code_time`, can be a string, tuple, or floating-point number\n", "* `velocity_unit` : The unit that corresponds to `code_velocity`\n", "* `magnetic_unit` : The unit that corresponds to `code_magnetic`, i.e. the internal units used to represent magnetic field strengths. NOTE: if you want magnetic field units to be in the SI unit system, you must specify it here, e.g. `magnetic_unit=(1.0, \"T\")`\n", "* `periodicity` : A tuple of booleans that determines whether the data will be treated as periodic along each axis\n", "* `geometry` : The geometry of the dataset, can be `cartesian`, `cylindrical`, `polar`, `spherical`, `geographic` or `spectral_cube`\n", "* `default_species_fields` : if set to `ionized` or `neutral`, default species fields are accordingly created for H and He which also set mean molecular weight\n", "* `axis_order` : The order of the axes in the data array, e.g. `(\"z\", \"y\", \"x\")` with cartesian geometry\n", "* `cell_widths` : If set, specify the cell widths along each dimension. Must be consistent with the `domain_dimensions` argument\n", "* `parameters` : A dictionary of dataset parameters, useful for storing dataset metadata\n", "* `dataset_name` : The name of the dataset. Stream datasets will use this value in place of a filename (in image prefixing, etc.)\n", "\n", "This example creates a yt-native dataset `ds` that will treat your array as a\n", "density field in a cubic domain of 3 Mpc edge size and simultaneously divide the \n", "domain into `nprocs` = 64 chunks, so that you can take advantage\n", "of the underlying parallelism. \n", "\n", "The optional unit keyword arguments allow for the default units of the dataset to be set. They can be:\n", "* A string, e.g. `length_unit=\"Mpc\"`\n", "* A tuple, e.g. `mass_unit=(1.0e14, \"Msun\")`\n", "* A floating-point value, e.g. `time_unit=3.1557e13`\n", "\n", "In the latter case, the unit is assumed to be cgs. 
\n", "\n", "The resulting `ds` functions exactly like any other dataset yt can handle--it can be sliced, and we can show the grid boundaries:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "slc = yt.SlicePlot(ds, \"z\", (\"gas\", \"density\"))\n", "slc.set_cmap((\"gas\", \"density\"), \"Blues\")\n", "slc.annotate_grids(cmap=None)\n", "slc.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Particle fields are detected as one-dimensional fields, and are added as one-dimensional arrays in\n", "a similar manner to the three-dimensional grid fields:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "posx_arr = prng.uniform(low=-1.5, high=1.5, size=10000)\n", "posy_arr = prng.uniform(low=-1.5, high=1.5, size=10000)\n", "posz_arr = prng.uniform(low=-1.5, high=1.5, size=10000)\n", "data = {\n", "    \"density\": (prng.random(size=(64, 64, 64)), \"Msun/kpc**3\"),\n", "    \"particle_position_x\": (posx_arr, \"code_length\"),\n", "    \"particle_position_y\": (posy_arr, \"code_length\"),\n", "    \"particle_position_z\": (posz_arr, \"code_length\"),\n", "}\n", "bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])\n", "ds = yt.load_uniform_grid(\n", "    data,\n", "    data[\"density\"][0].shape,\n", "    length_unit=(1.0, \"Mpc\"),\n", "    mass_unit=(1.0, \"Msun\"),\n", "    bbox=bbox,\n", "    nprocs=4,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this example only the particle position fields have been assigned. If no particle arrays are supplied, then the number of particles is assumed to be zero. Take a slice, and overlay particle positions:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "slc = yt.SlicePlot(ds, \"z\", (\"gas\", \"density\"))\n", "slc.set_cmap((\"gas\", \"density\"), \"Blues\")\n", "slc.annotate_particles(0.25, p_size=12.0, col=\"Red\")\n", "slc.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### HDF5 data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "HDF5 is a convenient format to store data. If you have unigrid data stored in an HDF5 file, it is possible to load it into memory and then use `load_uniform_grid` to get it into yt:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "jupyter": { "outputs_hidden": true } }, "outputs": [], "source": [ "from os.path import join\n", "\n", "import h5py\n", "\n", "from yt.config import ytcfg\n", "\n", "data_dir = ytcfg.get(\"yt\", \"test_data_dir\")\n", "from yt.utilities.physical_ratios import cm_per_kpc\n", "\n", "f = h5py.File(\n", "    join(data_dir, \"UnigridData\", \"turb_vels.h5\"), \"r\"\n", ")  # Read-only access to the file" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The HDF5 file handle's keys correspond to the datasets stored in the file:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "print(f.keys())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We need to add some unit information. It may be stored in the file somewhere, or we may know it from another source. 
In this case, the units are simply cgs:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "jupyter": { "outputs_hidden": true } }, "outputs": [], "source": [ "units = [\n", "    \"gauss\",\n", "    \"gauss\",\n", "    \"gauss\",\n", "    \"g/cm**3\",\n", "    \"erg/cm**3\",\n", "    \"K\",\n", "    \"cm/s\",\n", "    \"cm/s\",\n", "    \"cm/s\",\n", "    \"cm/s\",\n", "    \"cm/s\",\n", "    \"cm/s\",\n", "]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can iterate over the items in the file handle and the units to get the data into a dictionary, which we will then load:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "jupyter": { "outputs_hidden": true } }, "outputs": [], "source": [ "data = {k: (v[()], u) for (k, v), u in zip(f.items(), units)}\n", "bbox = np.array([[-0.5, 0.5], [-0.5, 0.5], [-0.5, 0.5]])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "ds = yt.load_uniform_grid(\n", "    data,\n", "    data[\"Density\"][0].shape,\n", "    length_unit=250.0 * cm_per_kpc,\n", "    bbox=bbox,\n", "    nprocs=8,\n", "    periodicity=(False, False, False),\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this case, the data came from a simulation which was 250 kpc on a side. An example projection of three fields:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "prj = yt.ProjectionPlot(\n", "    ds, \"z\", [\"z-velocity\", \"Temperature\", \"Bx\"], weight_field=\"Density\"\n", ")\n", "prj.set_log(\"z-velocity\", False)\n", "prj.set_log(\"Bx\", False)\n", "prj.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Volume Rendering Loaded Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Volume rendering requires defining a `TransferFunction` to map data to color and opacity and a `camera` to create a viewport and render the image." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "jupyter": { "outputs_hidden": true } }, "outputs": [], "source": [ "# Find the min and max of the field\n", "mi, ma = ds.all_data().quantities.extrema(\"Temperature\")\n", "# Reduce the dynamic range\n", "mi = mi.value + 1.5e7\n", "ma = ma.value - 0.81e7" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Define the properties and size of the `camera` viewport:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "jupyter": { "outputs_hidden": true } }, "outputs": [], "source": [ "# Choose a vector representing the viewing direction.\n", "L = [0.5, 0.5, 0.5]\n", "# Define the center of the camera to be the domain center\n", "c = ds.domain_center[0]\n", "# Define the width of the image\n", "W = 1.5 * ds.domain_width[0]\n", "# Define the number of pixels to render\n", "Npixels = 512" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create a `camera` object and set up the scene and transfer function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "sc = yt.create_scene(ds, \"Temperature\")\n", "dd = ds.all_data()\n", "\n", "source = sc[0]\n", "\n", "source.log_field = False\n", "\n", "tf = yt.ColorTransferFunction((mi, ma), grey_opacity=False)\n", "tf.map_to_colormap(mi, ma, scale=15.0, colormap=\"cmyt.algae\")\n", "\n", "source.set_transfer_function(tf)\n", "\n", "sc.add_source(source)\n", "\n", "cam = sc.add_camera()\n", "cam.width = W\n", "cam.center = c\n", "cam.normal_vector = L\n", "cam.north_vector = [0, 0, 1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "sc.show(sigma_clip=4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### FITS image data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The FITS file format is a common astronomical format for 2-D images, but it can store three-dimensional data as well. The [AstroPy](https://www.astropy.org) project has modules for FITS reading and writing." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "jupyter": { "outputs_hidden": true } }, "outputs": [], "source": [ "import astropy.io.fits as pyfits" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using `pyfits` we can open a FITS file. If we call `info()` on the file handle, we can figure out some information about the file's contents. The file in this example has a primary HDU (header-data-unit) with no data, and three HDUs with 3-D data. In this case, the data consists of three velocity fields:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "f = pyfits.open(join(data_dir, \"UnigridData\", \"velocity_field_20.fits\"))\n", "f.info()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can put it into a dictionary in the same way as before, but we slice the file handle `f` so that we don't use the `PrimaryHDU`. `hdu.name` is the field name and `hdu.data` is the actual data. Each of these velocity fields is in km/s. We can check that we got the correct fields. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "data = {}\n", "for hdu in f:\n", " name = hdu.name.lower()\n", " data[name] = (hdu.data, \"km/s\")\n", "print(data.keys())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The velocity field names in this case are slightly different than the standard yt field names for velocity fields, so we will reassign the field names:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "jupyter": { "outputs_hidden": true } }, "outputs": [], "source": [ "data[\"velocity_x\"] = data.pop(\"x-velocity\")\n", "data[\"velocity_y\"] = data.pop(\"y-velocity\")\n", "data[\"velocity_z\"] = data.pop(\"z-velocity\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we load the data into yt. Let's assume that the box size is a Mpc. Since these are velocity fields, we can overlay velocity vectors on slices, just as if we had loaded in data from a supported code. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "ds = yt.load_uniform_grid(data, data[\"velocity_x\"][0].shape, length_unit=(1.0, \"Mpc\"))\n", "slc = yt.SlicePlot(\n", " ds, \"x\", [(\"gas\", \"velocity_x\"), (\"gas\", \"velocity_y\"), (\"gas\", \"velocity_z\")]\n", ")\n", "for ax in \"xyz\":\n", " slc.set_log((\"gas\", f\"velocity_{ax}\"), False)\n", "slc.annotate_velocity()\n", "slc.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Generic AMR Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In a similar fashion to unigrid data, data gridded into rectangular patches at varying levels of resolution may also be loaded into yt. In this case, a list of grid dictionaries should be provided, with the requisite information about each grid's properties. This example sets up two grids: a top-level grid (`level == 0`) covering the entire domain and a subgrid at `level == 1`. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "jupyter": { "outputs_hidden": true } }, "outputs": [], "source": [ "grid_data = [\n", " {\n", " \"left_edge\": [0.0, 0.0, 0.0],\n", " \"right_edge\": [1.0, 1.0, 1.0],\n", " \"level\": 0,\n", " \"dimensions\": [32, 32, 32],\n", " },\n", " {\n", " \"left_edge\": [0.25, 0.25, 0.25],\n", " \"right_edge\": [0.75, 0.75, 0.75],\n", " \"level\": 1,\n", " \"dimensions\": [32, 32, 32],\n", " },\n", "]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll just fill each grid with random density data, with a scaling with the grid refinement level." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "jupyter": { "outputs_hidden": true } }, "outputs": [], "source": [ "for g in grid_data:\n", " g[\"density\"] = (prng.random(g[\"dimensions\"]) * 2 ** g[\"level\"], \"g/cm**3\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Particle fields are supported by adding 1-dimensional arrays to each `grid`. 
If a grid has no particles, the particle fields still have to be defined since they are defined elsewhere; set them to empty NumPy arrays:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "jupyter": { "outputs_hidden": true } }, "outputs": [], "source": [ "grid_data[0][\"particle_position_x\"] = (\n", " np.array([]),\n", " \"code_length\",\n", ") # No particles, so set empty arrays\n", "grid_data[0][\"particle_position_y\"] = (np.array([]), \"code_length\")\n", "grid_data[0][\"particle_position_z\"] = (np.array([]), \"code_length\")\n", "grid_data[1][\"particle_position_x\"] = (\n", " prng.uniform(low=0.25, high=0.75, size=1000),\n", " \"code_length\",\n", ")\n", "grid_data[1][\"particle_position_y\"] = (\n", " prng.uniform(low=0.25, high=0.75, size=1000),\n", " \"code_length\",\n", ")\n", "grid_data[1][\"particle_position_z\"] = (\n", " prng.uniform(low=0.25, high=0.75, size=1000),\n", " \"code_length\",\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then, call `load_amr_grids`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "ds = yt.load_amr_grids(grid_data, [32, 32, 32])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`load_amr_grids` also takes the same keywords `bbox` and `sim_time` as `load_uniform_grid`. We could have also specified the length, time, velocity, and mass units in the same manner as before. Let's take a slice:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "slc = yt.SlicePlot(ds, \"z\", (\"gas\", \"density\"))\n", "slc.annotate_particles(0.25, p_size=15.0, col=\"Pink\")\n", "slc.show()" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "source": [ "## Species fields" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "source": [ "One can also supply species fields to a stream dataset, in the form of mass fractions. These will then be used to generate derived fields for mass, number, and nuclei densities of the separate species. The naming conventions for the mass fractions should correspond to the format specified in [the yt documentation for species fields](https://yt-project.org/doc/analyzing/fields.html#species-fields)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "arr = prng.random(size=(64, 64, 64))\n", "data = {\n", " \"density\": (arr, \"g/cm**3\"),\n", " \"H_p0_fraction\": (0.37 * np.ones_like(arr), \"dimensionless\"),\n", " \"H_p1_fraction\": (0.37 * np.ones_like(arr), \"dimensionless\"),\n", " \"He_fraction\": (0.24 * np.ones_like(arr), \"dimensionless\"),\n", " \"CO_fraction\": (0.02 * np.ones_like(arr), \"dimensionless\"),\n", "}\n", "bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])\n", "ds = yt.load_uniform_grid(data, arr.shape, length_unit=\"Mpc\", bbox=bbox, nprocs=64)\n", "dd = ds.all_data()\n", "print(dd.mean((\"gas\", \"CO_density\")))\n", "print(dd.min((\"gas\", \"H_nuclei_density\")))\n", "print(dd.max((\"gas\", \"He_number_density\")))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Multiple Particle Types" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For both uniform grid data and AMR data, one can specify particle fields with multiple types if the particle field names are given as field tuples instead of strings (the default particle type is `\"io\"`):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "posxr_arr = prng.uniform(low=-1.5, high=1.5, size=10000)\n", "posyr_arr = prng.uniform(low=-1.5, high=1.5, size=10000)\n", "poszr_arr = prng.uniform(low=-1.5, high=1.5, size=10000)\n", "posxb_arr = prng.uniform(low=-1.5, high=1.5, size=20000)\n", "posyb_arr = prng.uniform(low=-1.5, high=1.5, size=20000)\n", "poszb_arr = prng.uniform(low=-1.5, high=1.5, size=20000)\n", "data = {\n", " (\"gas\", \"density\"): (prng.random(size=(64, 64, 64)), \"Msun/kpc**3\"),\n", " (\"red\", \"particle_position_x\"): (posxr_arr, \"code_length\"),\n", " (\"red\", \"particle_position_y\"): (posyr_arr, \"code_length\"),\n", " (\"red\", \"particle_position_z\"): (poszr_arr, \"code_length\"),\n", " (\"blue\", \"particle_position_x\"): (posxb_arr, \"code_length\"),\n", " (\"blue\", \"particle_position_y\"): (posyb_arr, \"code_length\"),\n", " (\"blue\", \"particle_position_z\"): (poszb_arr, \"code_length\"),\n", "}\n", "bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])\n", "ds = yt.load_uniform_grid(\n", " data,\n", " data[\"gas\", \"density\"][0].shape,\n", " length_unit=(1.0, \"Mpc\"),\n", " mass_unit=(1.0, \"Msun\"),\n", " bbox=bbox,\n", " nprocs=4,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now see we have multiple particle types:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dd = ds.all_data()\n", "print(ds.particle_types)\n", "print(dd[\"red\", \"particle_position_x\"].size)\n", "print(dd[\"blue\", \"particle_position_x\"].size)\n", "print(dd[\"all\", \"particle_position_x\"].size)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Caveats for Loading Generic Array Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Particles may be difficult to integrate.\n", "* Data must already reside in memory before loading it in to yt, whether it is generated at runtime or loaded from disk. 
\n", "* Some functions may behave oddly, and parallelism will be disappointing or non-existent in most cases.\n", "* No consistency checks are performed on the hierarchy.\n", "* Consistency between particle positions and grids is not checked; `load_amr_grids` assumes that particle positions associated with one grid are not bounded within another grid at a higher level, so this must be ensured by the user prior to loading the grid data. " ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" } }, "nbformat": 4, "nbformat_minor": 4 } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/examining/Loading_Generic_Particle_Data.ipynb0000644000175100001770000001724214714401662024447 0ustar00runnerdocker
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Loading Generic Particle Data\n", "\n", "This example creates a fake in-memory particle dataset and then loads it as a yt dataset using the `load_particles` function.\n", "\n", "Our \"fake\" dataset will be numpy arrays filled with normally distributed random particle positions and uniform particle masses. Since real data is often scaled, I arbitrarily multiply by 1e6 to show how to deal with scaled data." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n", "\n", "rng = np.random.default_rng()\n", "n_particles = 5_000_000\n", "\n", "ppx, ppy, ppz = 1e6 * rng.normal(size=(3, n_particles))\n", "\n", "ppm = np.ones(n_particles)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `load_particles` function accepts a dictionary populated with particle data fields loaded in memory as numpy arrays or python lists:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "data = {\n", "    (\"io\", \"particle_position_x\"): ppx,\n", "    (\"io\", \"particle_position_y\"): ppy,\n", "    (\"io\", \"particle_position_z\"): ppz,\n", "    (\"io\", \"particle_mass\"): ppm,\n", "}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To hook up with yt's internal field system, the dictionary keys must be 'particle_position_x', 'particle_position_y', 'particle_position_z', and 'particle_mass', as well as any other particle field provided by one of the particle frontends." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `load_particles` function transforms the `data` dictionary into an in-memory yt `Dataset` object, providing an interface for further analysis with yt. The example below illustrates how to load the data dictionary we created above." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import yt\n", "from yt.units import Msun, parsec\n", "\n", "bbox = 1.1 * np.array(\n", "    [[min(ppx), max(ppx)], [min(ppy), max(ppy)], [min(ppz), max(ppz)]]\n", ")\n", "\n", "ds = yt.load_particles(data, length_unit=1.0 * parsec, mass_unit=1e8 * Msun, bbox=bbox)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `length_unit` and `mass_unit` are the conversion from the units used in the `data` dictionary to CGS. 
I've arbitrarily chosen one parsec and 10^8 Msun for this example. \n", "\n", "The `n_ref` parameter controls how many particles it takes to accumulate in an oct-tree cell to trigger refinement. Larger `n_ref` will decrease Poisson noise at the cost of resolution in the octree. \n", "\n", "Finally, the `bbox` parameter is a bounding box in the units of the dataset that contains all of the particles. This is used to set the size of the base octree block." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This new dataset acts like any other yt `Dataset` object, and can be used to create data objects and query for yt fields." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ad = ds.all_data()\n", "print(ad.mean((\"io\", \"particle_position_x\")))\n", "print(ad.sum((\"io\", \"particle_mass\")))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can project the particle mass field like so:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "prj = yt.ParticleProjectionPlot(ds, \"z\", (\"io\", \"particle_mass\"))\n", "prj.set_width((8, \"Mpc\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, one can specify multiple particle types in the `data` dictionary by setting the field names to be field tuples (if a field type is not specified, the default particle type `\"io\"` is assumed):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "n_gas_particles = 1_000_000\n", "n_star_particles = 1_000_000\n", "n_dm_particles = 2_000_000\n", "\n", "ppxg, ppyg, ppzg = 1e6 * rng.normal(size=(3, n_gas_particles))\n", "ppmg = np.ones(n_gas_particles)\n", "hsml = 10000 * np.ones(n_gas_particles)\n", "dens = 2.0e-4 * np.ones(n_gas_particles)\n", "\n", "ppxd, ppyd, ppzd = 1e6 * rng.normal(size=(3, n_dm_particles))\n", "ppmd = np.ones(n_dm_particles)\n", "\n", "ppxs, ppys, ppzs = 5e5 * rng.normal(size=(3, n_star_particles))\n", "ppms = 0.1 * np.ones(n_star_particles)\n", "\n", "bbox = 1.1 * np.array(\n", " [\n", " [\n", " min(ppxg.min(), ppxd.min(), ppxs.min()),\n", " max(ppxg.max(), ppxd.max(), ppxs.max()),\n", " ],\n", " [\n", " min(ppyg.min(), ppyd.min(), ppys.min()),\n", " max(ppyg.max(), ppyd.max(), ppys.max()),\n", " ],\n", " [\n", " min(ppzg.min(), ppzd.min(), ppzs.min()),\n", " max(ppzg.max(), ppzd.max(), ppzs.max()),\n", " ],\n", " ]\n", ")\n", "\n", "data2 = {\n", " (\"gas\", \"particle_position_x\"): ppxg,\n", " (\"gas\", \"particle_position_y\"): ppyg,\n", " (\"gas\", \"particle_position_z\"): ppzg,\n", " (\"gas\", \"particle_mass\"): ppmg,\n", " (\"gas\", \"smoothing_length\"): hsml,\n", " (\"gas\", \"density\"): dens,\n", " (\"dm\", \"particle_position_x\"): ppxd,\n", " (\"dm\", \"particle_position_y\"): ppyd,\n", " (\"dm\", \"particle_position_z\"): ppzd,\n", " (\"dm\", \"particle_mass\"): ppmd,\n", " (\"star\", \"particle_position_x\"): ppxs,\n", " (\"star\", \"particle_position_y\"): ppys,\n", " (\"star\", \"particle_position_z\"): ppzs,\n", " (\"star\", \"particle_mass\"): ppms,\n", "}\n", "\n", "ds2 = yt.load_particles(\n", " data2, length_unit=1.0 * parsec, mass_unit=1e8 * Msun, bbox=bbox\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now have separate `\"gas\"`, `\"dm\"`, and `\"star\"` particles."
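] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick check (this cell is an illustrative addition, but `particle_types` is a standard yt `Dataset` attribute), we can confirm which particle types the new dataset contains:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# list the particle types (union types such as \"all\" may also appear)\n", "print(ds2.particle_types)" ] },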
{ "cell_type": "markdown", "metadata": {}, "source": [ "Since the `\"gas\"` particles have `\"density\"` and `\"smoothing_length\"` fields, they are recognized as SPH particles:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ad = ds2.all_data()\n", "c = np.array([ad.mean((\"gas\", ax)).to(\"code_length\") for ax in \"xyz\"])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "slc = yt.SlicePlot(ds2, \"z\", (\"gas\", \"density\"), center=c)\n", "slc.set_zlim((\"gas\", \"density\"), 1e-19, 2.0e-18)\n", "slc.set_width((4, \"Mpc\"))\n", "slc.show()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.0" } }, "nbformat": 4, "nbformat_minor": 0 } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/examining/Loading_Spherical_Data.ipynb0000644000175100001770000001643514714401662023165 0ustar00runnerdocker{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Loading Spherical Data\n", "\n", "Support in yt for non-Cartesian geometries is partial and is still being extended, but here is an example of how to load spherical data from a regularly-spaced grid. For irregularly spaced grids, a similar setup can be used, either by supplying the `cell_widths` parameter to `load_uniform_grid` (see [Stretched Grid Data](https://yt-project.org/docs/dev/examining/loading_data.html#stretched-grid-data) for more details), or by using the `load_hexahedral_mesh` method.\n", "\n", "Note that in yt, \"spherical\" means that it is ordered $r$, $\\theta$, $\\phi$, where $\\theta$ is the declination from the azimuth (running from $0$ to $\\pi$) and $\\phi$ is the angle around the zenith (running from $0$ to $2\\pi$).\n", "\n", "We first start out by loading yt." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n", "\n", "import yt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we create a few derived fields. The first three are just straight translations of the Cartesian coordinates, so that we can see where we are located in the data, and understand what we're seeing. The final one is just a fun field that is some combination of the three coordinates, and will vary in all dimensions."
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "@yt.derived_field(name=\"sphx\", units=\"cm\", take_log=False, sampling_type=\"cell\")\n", "def sphx(field, data):\n", " return np.cos(data[\"phi\"]) * np.sin(data[\"theta\"]) * data[\"r\"]\n", "\n", "\n", "@yt.derived_field(name=\"sphy\", units=\"cm\", take_log=False, sampling_type=\"cell\")\n", "def sphy(field, data):\n", " return np.sin(data[\"phi\"]) * np.sin(data[\"theta\"]) * data[\"r\"]\n", "\n", "\n", "@yt.derived_field(name=\"sphz\", units=\"cm\", take_log=False, sampling_type=\"cell\")\n", "def sphz(field, data):\n", " return np.cos(data[\"theta\"]) * data[\"r\"]\n", "\n", "\n", "@yt.derived_field(name=\"funfield\", units=\"cm\", take_log=False, sampling_type=\"cell\")\n", "def funfield(field, data):\n", " return (np.sin(data[\"phi\"]) ** 2 + np.cos(data[\"theta\"]) ** 2) * (\n", " 1.0 * data[\"r\"].uq + data[\"r\"]\n", " )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Loading Data\n", "\n", "Now we can actually load our data. We use the `load_uniform_grid` function here. Normally, the first argument would be a dictionary of field data, where the keys were the field names and the values the field data arrays. Here, we're just going to look at derived fields, so we supply an empty one.\n", "\n", "The next few arguments are the number of dimensions, the bounds, and we then specify the geometry as spherical." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ds = yt.load_uniform_grid(\n", " {},\n", " [128, 128, 128],\n", " bbox=np.array([[0.0, 1.0], [0.0, np.pi], [0.0, 2 * np.pi]]),\n", " geometry=\"spherical\",\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Looking at Data\n", "\n", "Now we can take slices. The first thing we will try is making a slice of data along the \"phi\" axis, here $\\pi/2$, which will be along the y axis in the positive direction. We use the `.slice` attribute, which creates a slice, and then we convert this into a plot window. Note that here 2 is used to indicate the third axis (0-indexed) which for spherical data is $\\phi$.\n", "\n", "This is the manual way of creating a plot -- below, we'll use the standard, automatic ways. Note that the coordinates run from $-r$ to $r$ along the $z$ axis and from $0$ to $r$ along the $R$ axis. We use the capital $R$ to indicate that it's the $R$ along the $x-y$ plane." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "s = ds.slice(2, np.pi / 2)\n", "p = s.to_pw(\"funfield\", origin=\"native\")\n", "p.set_zlim(\"all\", 0.0, 4.0)\n", "p.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also slice along $r$. For now, this creates a regular grid with *incorrect* units for phi and theta. We are currently exploring two other options -- a simple aitoff projection, and fixing it to use the correct units as-is." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "s = yt.SlicePlot(ds, \"r\", \"funfield\")\n", "s.set_zlim(\"all\", 0.0, 4.0)\n", "s.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also slice at constant $\\theta$. But, this is a weird thing! We're slicing at a constant declination from the azimuth. What this means is that when thought of in a Cartesian domain, this slice is actually a cone. 
The axes have been labeled appropriately, to indicate that these are not exactly the $x$ and $y$ axes, but instead differ by a factor of $\\sin(\\theta)$." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "s = yt.SlicePlot(ds, \"theta\", \"funfield\")\n", "s.set_zlim(\"all\", 0.0, 4.0)\n", "s.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We've seen lots of the `funfield` plots, but we can also look at the Cartesian axes. This next plot shows the Cartesian $x$, $y$ and $z$ values on a $\\theta$ slice. Because we're not supplying an argument to the `center` parameter, yt will place it at the center of the $\\theta$ axis, which will be at $\\pi/2$, where it will be aligned with the $x-y$ plane. The slight change in `sphz` results from the cells themselves migrating, and plotting the center of those cells." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "s = yt.SlicePlot(ds, \"theta\", [\"sphx\", \"sphy\", \"sphz\"])\n", "s.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can do the same with the $\\phi$ axis." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "s = yt.SlicePlot(ds, \"phi\", [\"sphx\", \"sphy\", \"sphz\"])\n", "s.show()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.1" } }, "nbformat": 4, "nbformat_minor": 0 } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/examining/index.rst0000644000175100001770000000136614714401662017460 0ustar00runnerdocker.. _examining-data: Loading and Examining Data ========================== Nominally, one should just be able to run ``yt.load()`` on a dataset and start computing; however, there may be additional notes associated with different data formats as described below. Furthermore, we provide methods for loading data from unsupported data formats in :doc:`Loading_Generic_Array_Data`, :doc:`Loading_Generic_Particle_Data`, and :doc:`Loading_Spherical_Data`. Lastly, if you want to examine the raw data for your particular dataset, visit :ref:`low-level-data-inspection`. .. toctree:: :maxdepth: 2 loading_data Loading_Generic_Array_Data Loading_Generic_Particle_Data Loading_Data_via_Functions Loading_Spherical_Data low_level_inspection ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/examining/loading_data.rst0000644000175100001770000036661714714401662020764 0ustar00runnerdocker.. _loading-data: Loading Data ============ This section contains information on how to load data into yt, as well as some important caveats about different data formats. .. _loading-sample-data: Sample Data ----------- The yt community has provided a large number of sample datasets, which are accessible from https://yt-project.org/data/ . yt also provides a helper function, ``yt.load_sample``, that can load from a set of sample datasets. The quickstart notebooks in this documentation utilize this. The files are, in general, named identically to their listings on the data catalog page.
For instance, you can load ``IsolatedGalaxy`` by executing: .. code-block:: python import yt ds = yt.load_sample("IsolatedGalaxy") To find a list of all available datasets, you can call ``load_sample`` without any arguments, and it will return a list of the names that can be supplied: .. code-block:: python import yt yt.load_sample() This will return a list of possible filenames; more information can be accessed on the data catalog. .. _loading-archived-data: Archived Data ------------- If your data is stored as a (compressed) tar file, you can access the contained dataset directly without extracting the tar file. This can be achieved using the ``load_archive`` function: .. code-block:: python import yt ds = yt.load_archive("IsolatedGalaxy.tar.gz", "IsolatedGalaxy/galaxy0030/galaxy0030") The first argument is the path to the archive file, the second one is the path to the file to load in the archive. Subsequent arguments are passed to ``yt.load``. The functionality requires the package `ratarmount `_ to be installed. Under the hood, yt will mount the archive as a (read-only) filesystem. Note that this requires the entire archive to be read once to compute the location of each file in the archive; subsequent accesses will be much faster. All archive formats supported by `ratarmount `__ should be loadable, provided the dependencies are installed; this includes the ``tar``, ``tar.gz`` and ``tar.bz2`` formats. .. _loading-hdf5-data: Simple HDF5 Data ---------------- .. note:: This wrapper takes advantage of the functionality described in :doc:`Loading_Data_via_Functions` but the basics of setting up function handlers, guessing fields, etc., are handled by yt. Using the function :func:`yt.loaders.load_hdf5_file`, you can load a generic set of fields from an HDF5 file and have a fully-operational yt dataset. For instance, in the yt sample data repository, we have the `UniGrid Data `_ dataset (~1.6GB). This dataset includes the file ``turb_vels.h5`` with this structure: .. code-block:: bash $ h5ls -r ./UnigridData/turb_vels.h5 / Group /Bx Dataset {256, 256, 256} /By Dataset {256, 256, 256} /Bz Dataset {256, 256, 256} /Density Dataset {256, 256, 256} /MagneticEnergy Dataset {256, 256, 256} /Temperature Dataset {256, 256, 256} /turb_x-velocity Dataset {256, 256, 256} /turb_y-velocity Dataset {256, 256, 256} /turb_z-velocity Dataset {256, 256, 256} /x-velocity Dataset {256, 256, 256} /y-velocity Dataset {256, 256, 256} /z-velocity Dataset {256, 256, 256} In versions of yt prior to 4.1, these could be loaded into memory individually and then accessed *en masse* by the :func:`yt.loaders.load_uniform_grid` function. Introduced in version 4.1, however, was the ability to provide the filename and then allow yt to identify the available fields and even subset them into chunks to preserve memory. Only those requested fields will be loaded at the time of the request, and they will be subset into chunks to avoid over-allocating for reduction operations. To use the auto-loader, call :func:`~yt.loaders.load_hdf5_file` with the name of the file. Optionally, you can specify the root node of the file to probe for fields -- for instance, if all of the fields are stored under ``/grid`` (as they are in output from the ytdata frontend). You can also provide the expected bounding box, which will otherwise default to 0..1 in all dimensions, the names of fields to make available (by default yt will probe for them) and the number of chunks to subdivide the file into. If the number of chunks is not specified, it defaults to trying to keep the size of each individual chunk to no more than :math:`64^3` zones. To load the above file, we would use the function as follows: .. code-block:: python import yt ds = yt.load_hdf5_file("UnigridData/turb_vels.h5")
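The optional arguments described above can be combined in a single call. The following is a minimal sketch -- the keyword names follow the description above and the values are purely illustrative, so check the :func:`~yt.loaders.load_hdf5_file` API reference before relying on them:

.. code-block:: python

    import numpy as np

    import yt

    # illustrative values: probe under "/", expose two named fields,
    # supply a bounding box, and split the file into 8 chunks
    bbox = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])
    ds = yt.load_hdf5_file(
        "UnigridData/turb_vels.h5",
        root_node="/",
        fields=["Density", "Temperature"],
        bbox=bbox,
        nchunks=8,
    )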
At this point, we now have a dataset that we can do all of our normal operations on, and all of the known yt derived fields will be available. .. _loading-amrvac-data: AMRVAC Data ----------- To load data into yt, simply use: .. code-block:: python import yt ds = yt.load("output0010.dat") .. rubric:: Dataset geometry & periodicity Starting from AMRVAC 2.2, and datfile format 5, a geometry flag (e.g. "Cartesian_2.5D", "Polar_2D", "Cylindrical_1.5D"...) was added to the datfile header. yt will fall back to a Cartesian mesh if the geometry flag is not found. For older datfiles, however, it is possible to provide it externally with the ``geometry_override`` parameter. .. code-block:: python # examples ds = yt.load("output0010.dat", geometry_override="polar") ds = yt.load("output0010.dat", geometry_override="cartesian") Note that ``geometry_override`` has priority over any ``geometry`` flag present in recent datfiles, which means it can be used to force ``r`` VS ``theta`` 2D plots in polar geometries (for example), but this may produce unpredictable behaviour and comes with no guarantee. A ``ndim``-long ``periodic`` boolean array was also added to improve compatibility with yt. See http://amrvac.org/md_doc_fileformat.html for details. .. rubric:: Auto-setup for derived fields yt will attempt to mimic the way AMRVAC internally defines kinetic energy, pressure, and sound speed. To see a complete list of fields that are defined after loading, one can simply type .. code-block:: python print(ds.derived_field_list) Note that for adiabatic (magneto-)hydrodynamics, i.e. ``(m)hd_energy = False`` in AMRVAC, additional input data is required in order to set up some of these fields. This is done by passing the corresponding parfile(s) at load time .. code-block:: python # example using a single parfile ds = yt.load("output0010.dat", parfiles="amrvac.par") # ... or using multiple parfiles ds = yt.load("output0010.dat", parfiles=["amrvac.par", "modifier.par"]) In case more than one parfile is passed, yt will create a single namelist by replicating AMRVAC's rules (see "Using multiple par files" http://amrvac.org/md_doc_commandline.html). .. rubric:: Unit System AMRVAC only supports dimensionless fields and as such, no unit system is ever attached to any given dataset. yt, however, defines physical quantities and gives them units. As is customary in yt, the default unit system is ``cgs``, e.g. lengths are read as "cm" unless specified otherwise. The user has two ways to control displayed units, through ``unit_system`` (``"cgs"``, ``"mks"`` or ``"code"``) and ``units_override``. Example: .. code-block:: python units_override = dict(length_unit=(100.0, "au"), mass_unit=yt.units.mass_sun) ds = yt.load("output0010.dat", units_override=units_override, unit_system="mks") To ensure consistency with normalisations as used in AMRVAC we only allow overriding a maximum of three units. Allowed unit combinations at the moment are .. code-block:: none {numberdensity_unit, temperature_unit, length_unit} {mass_unit, temperature_unit, length_unit} {mass_unit, time_unit, length_unit} {numberdensity_unit, velocity_unit, length_unit} {mass_unit, velocity_unit, length_unit} Appropriate errors are thrown for other combinations.
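For instance, a number density based combination from the list above could be supplied as follows -- a minimal sketch in which the numerical values are purely illustrative:

.. code-block:: python

    import yt

    # {numberdensity_unit, temperature_unit, length_unit} is one of the
    # allowed combinations; the values here are arbitrary examples
    units_override = dict(
        numberdensity_unit=(1.0e9, "cm**-3"),
        temperature_unit=(1.0e6, "K"),
        length_unit=(1.0, "au"),
    )
    ds = yt.load("output0010.dat", units_override=units_override)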
.. rubric:: Partially supported and unsupported features * a maximum of 100 dust species can be read by yt at the moment. If your application needs this limit increased, please report an issue at https://github.com/yt-project/yt/issues * particle data: currently not supported (but might come later) * staggered grids (AMRVAC 2.2 and later): yt logs a warning if you load staggered datasets, but the flag is currently ignored. * "stretched grids" are being implemented in yt, but are not yet fully-supported. (Previous versions of this file suggested they would "never" be supported, which we hope to prove incorrect once we finish implementing stretched grids in AMR. At present, stretched grids are only supported on a single level of refinement.) .. note:: Ghost cells exist in .dat files but are never read by yt. .. _loading-art-data: ART Data -------- ART data has been supported in the past by Christopher Moody and is currently cared for by Kenza Arraki. Please contact the ``yt-dev`` mailing list if you are interested in using yt for ART data, or if you are interested in assisting with development of yt to work with ART data. To load an ART dataset you can use the ``yt.load`` command and provide it the gas mesh file: .. code-block:: python import yt ds = yt.load("D9p_500/10MpcBox_HartGal_csf_a0.500.d") It will search for and attempt to find the complementary dark matter and stellar particle header and data files. However, your simulations may not follow the same naming convention. For example, the single snapshot given in the sample data has a series of files that look like this: .. code-block:: none 10MpcBox_HartGal_csf_a0.500.d #Gas mesh PMcrda0.500.DAT #Particle header PMcrs0a0.500.DAT #Particle data (positions,velocities) stars_a0.500.dat #Stellar data (metallicities, ages, etc.) The ART frontend tries to find the associated files matching the above, but if that fails you can specify ``file_particle_header``, ``file_particle_data``, and ``file_particle_stars``, in addition to specifying the gas mesh. Note that the ``pta0.500.dat`` or ``pt.dat`` file containing particle time steps is not loaded by yt. You also have the option of gridding particles and assigning them onto the meshes. This process is in beta, and for the time being, it's probably best to leave ``do_grid_particles=False`` as the default. To speed up the loading of an ART file, you have a few options. You can turn off the particles entirely by setting ``discover_particles=False``. You can also only grid octs up to a certain level, ``limit_level=5``, which is useful when debugging by artificially creating a 'smaller' dataset to work with. Finally, when stellar ages are computed we 'spread' the ages evenly within a smoothing window. By default this is turned on and set to 10Myr. To turn this off you can set ``spread=False``, and you can tweak the age smoothing window by specifying the window in seconds, ``spread=1.0e7*365*24*3600``.
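Putting several of these options together, the following sketch shows a faster exploratory load; both keyword arguments are the ones described above, and the values are illustrative:

.. code-block:: python

    import yt

    # skip particle discovery and cap the octree at level 5 for a
    # quicker, artificially smaller dataset
    ds = yt.load(
        "D9p_500/10MpcBox_HartGal_csf_a0.500.d",
        discover_particles=False,
        limit_level=5,
    )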
There is currently preliminary support for dark matter only ART data. To load a dataset use the ``yt.load`` command and provide it the particle data file. It will search for the complementary particle header file. .. code-block:: python import yt ds = yt.load("PMcrs0a0.500.DAT") Important: This should not be used for loading just the dark matter data for a 'regular' hydrodynamical data set as the units and IO are different! .. _loading-artio-data: ARTIO Data ---------- ARTIO data has a well-specified internal parameter system and has few free parameters. However, for optimization purposes, the parameter that provides the most guidance to yt as to how to manage ARTIO data is ``max_range``. This governs the maximum number of space-filling curve cells that will be used in a single "chunk" of data read from disk. For small datasets, setting this number very large will enable more data to be loaded into memory at any given time; for very large datasets, this parameter can be left alone safely. By default it is set to 1024; it can in principle be set as high as the total number of SFC cells. To load ARTIO data, you can specify a command such as this: .. code-block:: python import yt ds = yt.load("./A11QR1/s11Qzm1h2_a1.0000.art")
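Since ``max_range`` is an ordinary load-time parameter, tuning it is a one-line change. A sketch, with a deliberately large illustrative value for a small dataset:

.. code-block:: python

    import yt

    # allow many more space-filling curve cells per chunk (value illustrative)
    ds = yt.load("./A11QR1/s11Qzm1h2_a1.0000.art", max_range=1024 * 1024)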
.. _loading-athena-data: Athena Data ----------- Athena 4.x VTK data is supported and cared for by John ZuHone. Both uniform grid and SMR datasets are supported. .. note:: yt also recognizes Fargo3D data written to VTK files as Athena data, but support for Fargo3D data is preliminary. Loading Athena datasets is slightly different depending on whether your dataset came from a serial or a parallel run. If the data came from a serial run or you have joined the VTK files together using the Athena tool ``join_vtk``, you can load the data like this: .. code-block:: python import yt ds = yt.load("kh.0010.vtk") The filename corresponds to the file on SMR level 0, whereas if there are multiple levels the corresponding files will be picked up automatically, assuming they are laid out in ``lev*`` subdirectories under the directory where the base file is located. For parallel datasets, yt assumes that they are laid out in directories named ``id*``, one for each processor number, each with ``lev*`` subdirectories for additional refinement levels. To load this data, call ``load`` with the base file in the ``id0`` directory: .. code-block:: python import yt ds = yt.load("id0/kh.0010.vtk") which will pick up all of the files in the different ``id*`` directories for the entire dataset. The default unit system in yt is cgs ("Gaussian") units, but Athena data is not normally stored in these units, so the code unit system is the default unit system for Athena data. This means that answers to field queries from data objects and plots of data will be expressed in code units. Note that the default conversions from these units will still be in terms of cgs units, e.g. 1 ``code_length`` equals 1 cm, and so on. If you would like to provide different conversions, you may supply conversions for length, time, and mass to ``load`` using the ``units_override`` functionality: .. code-block:: python import yt units_override = { "length_unit": (1.0, "Mpc"), "time_unit": (1.0, "Myr"), "mass_unit": (1.0e14, "Msun"), } ds = yt.load("id0/cluster_merger.0250.vtk", units_override=units_override) This means that the yt fields, e.g. ``("gas","density")``, ``("gas","velocity_x")``, ``("gas","magnetic_field_x")``, will be in cgs units (or whatever unit system was specified), but the Athena fields, e.g., ``("athena","density")``, ``("athena","velocity_x")``, ``("athena","cell_centered_B_x")``, will be in code units. The default normalization for various magnetic-related quantities such as magnetic pressure, Alfven speed, etc., as well as the conversion between magnetic code units and other units, is Gaussian/CGS, meaning that factors of :math:`4\pi` or :math:`\sqrt{4\pi}` will appear in these quantities, e.g. :math:`p_B = B^2/8\pi`. To use the Lorentz-Heaviside normalization instead, in which the factors of :math:`4\pi` are dropped (:math:`p_B = B^2/2`, for example), set ``magnetic_normalization="lorentz_heaviside"`` in the call to ``yt.load``: .. code-block:: python ds = yt.load( "id0/cluster_merger.0250.vtk", units_override=units_override, magnetic_normalization="lorentz_heaviside", ) Some 3D Athena outputs may have large grids (especially parallel datasets subsequently joined with the ``join_vtk`` script), and may benefit from being subdivided into "virtual grids". For this purpose, one can pass in the ``nprocs`` parameter: .. code-block:: python import yt ds = yt.load("sloshing.0000.vtk", nprocs=8) which will subdivide each original grid into ``nprocs`` grids. Note that this parameter is independent of the number of MPI tasks assigned to analyze the data set in parallel (see :ref:`parallel-computation`), and ideally should be (much) larger than this. .. note:: Virtual grids are only supported (and really only necessary) for 3D data. Alternative values for the following simulation parameters may be specified using a ``parameters`` dict, accepting the following keys: * ``gamma``: ratio of specific heats, Type: Float. If not specified, :math:`\gamma = 5/3` is assumed. * ``geometry``: Geometry type, currently accepts ``"cartesian"`` or ``"cylindrical"``. Default is ``"cartesian"``. * ``periodicity``: Is the domain periodic? Type: Tuple of boolean values corresponding to each dimension. Defaults to ``True`` in all directions. * ``mu``: mean molecular weight, Type: Float. If not specified, :math:`\mu = 0.6` (for a fully ionized primordial plasma) is assumed. .. code-block:: python import yt parameters = { "gamma": 4.0 / 3.0, "geometry": "cylindrical", "periodicity": (False, False, False), } ds = yt.load("relativistic_jet_0000.vtk", parameters=parameters) .. rubric:: Caveats * yt primarily works with primitive variables. If the Athena dataset contains conservative variables, the yt primitive fields will be generated from the conserved variables on disk. * Special relativistic datasets may be loaded, but at this time not all of their fields are fully supported. In particular, the relationships between quantities such as pressure and thermal energy will be incorrect, as it is currently assumed that their relationship is that of an ideal :math:`\gamma`-law equation of state. This will be rectified in a future release. * Domains may be visualized assuming periodicity. * Particle list data is currently unsupported. .. _loading-athena-pp-data: Athena++ Data ------------- Athena++ HDF5 data is supported and cared for by John ZuHone. Uniform-grid, SMR, and AMR datasets in Cartesian coordinates are fully supported. Support for curvilinear coordinates and/or non-constant grid cell sizes exists, but is preliminary. The default unit system in yt is cgs ("Gaussian") units, but Athena++ data is not normally stored in these units, so the code unit system is the default unit system for Athena++ data. This means that answers to field queries from data objects and plots of data will be expressed in code units. Note that the default conversions from these units will still be in terms of cgs units, e.g. 1 ``code_length`` equals 1 cm, and so on. If you would like to provide different conversions, you may supply conversions for length, time, and mass to ``load`` using the ``units_override`` functionality: .. code-block:: python import yt units_override = { "length_unit": (1.0, "Mpc"), "time_unit": (1.0, "Myr"), "mass_unit": (1.0e14, "Msun"), } ds = yt.load("AM06/AM06.out1.00400.athdf", units_override=units_override) This means that the yt fields, e.g. ``("gas","density")``, ``("gas","velocity_x")``, ``("gas","magnetic_field_x")``, will be in cgs units (or whatever unit system was specified), but the Athena++ fields, e.g., ``("athena_pp","density")``, ``("athena_pp","vel1")``, ``("athena_pp","Bcc1")``, will be in code units. The default normalization for various magnetic-related quantities such as magnetic pressure, Alfven speed, etc., as well as the conversion between magnetic code units and other units, is Gaussian/CGS, meaning that factors of :math:`4\pi` or :math:`\sqrt{4\pi}` will appear in these quantities, e.g. :math:`p_B = B^2/8\pi`. To use the Lorentz-Heaviside normalization instead, in which the factors of :math:`4\pi` are dropped (:math:`p_B = B^2/2`, for example), set ``magnetic_normalization="lorentz_heaviside"`` in the call to ``yt.load``: .. code-block:: python ds = yt.load( "AM06/AM06.out1.00400.athdf", units_override=units_override, magnetic_normalization="lorentz_heaviside", ) Alternative values for the following simulation parameters may be specified using a ``parameters`` dict, accepting the following keys: * ``gamma``: ratio of specific heats, Type: Float. If not specified, :math:`\gamma = 5/3` is assumed. * ``geometry``: Geometry type, currently accepts ``"cartesian"`` or ``"cylindrical"``. Default is ``"cartesian"``. * ``periodicity``: Is the domain periodic? Type: Tuple of boolean values corresponding to each dimension. Defaults to ``True`` in all directions. * ``mu``: mean molecular weight, Type: Float. If not specified, :math:`\mu = 0.6` (for a fully ionized primordial plasma) is assumed.
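These keys can be passed exactly as in the Athena example above; a sketch with illustrative values:

.. code-block:: python

    import yt

    # illustrative parameter overrides for an Athena++ dataset
    parameters = {
        "gamma": 4.0 / 3.0,
        "periodicity": (False, False, False),
    }
    ds = yt.load("AM06/AM06.out1.00400.athdf", parameters=parameters)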
.. rubric:: Caveats * yt primarily works with primitive variables. If the Athena++ dataset contains conservative variables, the yt primitive fields will be generated from the conserved variables on disk. * Special relativistic datasets may be loaded, but at this time not all of their fields are fully supported. In particular, the relationships between quantities such as pressure and thermal energy will be incorrect, as it is currently assumed that their relationship is that of an ideal :math:`\gamma`-law equation of state. This will be rectified in a future release. * Domains may be visualized assuming periodicity. .. _loading-parthenon-data: Parthenon Data -------------- Parthenon HDF5 data is supported and cared for by Forrest Glines and Philipp Grete. The Parthenon framework is the basis for various downstream codes, e.g., `AthenaPK `_, `Phoebus `_, `KHARMA `_, RIOT, and the `parthenon-hydro `_ miniapp. Support for these codes is handled through the common Parthenon frontend with specifics described in the following. Note that only AthenaPK data is currently automatically converted to the standard fields known by yt. For other codes, the raw data of the fields stored in the output file is accessible and a conversion between those fields and yt standard fields needs to be done manually. .. rubric:: Caveats * Reading particle data from Parthenon output is currently not supported. * Spherical and cylindrical coordinates only work for AthenaPK data. * Only periodic boundary conditions are properly handled. Calculating quantities requiring larger stencils (like derivatives) will be incorrect at mesh boundaries that are not periodic.
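As with other frontends, a Parthenon output is opened with ``yt.load``; in this minimal sketch the filename is illustrative, and the raw on-disk fields appear under the ``("parthenon", ...)`` field type:

.. code-block:: python

    import yt

    ds = yt.load("parthenon.prim.00000.phdf")  # filename illustrative
    print(ds.field_list)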
AthenaPK ^^^^^^^^ Fluid data on uniform-grid, SMR, and AMR datasets in Cartesian coordinates are fully supported. AthenaPK data may contain information on units in the output (when specified via the ``<units>`` block in the input file when the simulation was originally run). If that information is present, it will be used by yt. Otherwise the default unit system will be the code unit system with conversion of 1 ``code_length`` equalling 1 cm, and so on (given yt's default cgs/"Gaussian" unit system). If you would like to provide different conversions, you may supply conversions for length, time, and mass to ``load`` using the ``units_override`` functionality: .. code-block:: python import yt units_override = { "length_unit": (1.0, "Mpc"), "time_unit": (1.0, "Myr"), "mass_unit": (1.0e14, "Msun"), } ds = yt.load("parthenon.restart.final.rhdf", units_override=units_override) This means that the yt fields, e.g. ``("gas","density")``, ``("gas","velocity_x")``, ``("gas","magnetic_field_x")``, will be in cgs units (or whatever unit system was specified), but the AthenaPK fields, e.g., ``("parthenon","prim_density")``, ``("parthenon","prim_velocity_1")``, ``("parthenon","prim_magnetic_field_1")``, will be in code units. The default normalization for various magnetic-related quantities such as magnetic pressure, Alfven speed, etc., as well as the conversion between magnetic code units and other units, is Gaussian/CGS, meaning that factors of :math:`4\pi` or :math:`\sqrt{4\pi}` will appear in these quantities, e.g. :math:`p_B = B^2/8\pi`. To use the Lorentz-Heaviside normalization instead, in which the factors of :math:`4\pi` are dropped (:math:`p_B = B^2/2`, for example), set ``magnetic_normalization="lorentz_heaviside"`` in the call to ``yt.load``: .. code-block:: python ds = yt.load( "parthenon.restart.final.rhdf", units_override=units_override, magnetic_normalization="lorentz_heaviside", ) Alternative values (i.e., overriding the default ones stored in the simulation output) for the following simulation parameters may be specified using a ``parameters`` dict, accepting the following keys: * ``gamma``: ratio of specific heats, Type: Float. If not specified, :math:`\gamma = 5/3` is assumed. * ``mu``: mean molecular weight, Type: Float. If not specified, :math:`\mu = 0.6` (for a fully ionized primordial plasma) is assumed. Other Parthenon based codes ^^^^^^^^^^^^^^^^^^^^^^^^^^^ As mentioned above, a default conversion from code fields to yt fields (e.g., from a density field to ``("gas","density")``) is currently not available -- though more specialized frontends may be added in the future. All raw data of a Parthenon-based simulation output is available through the ``("parthenon","NAME")`` fields where ``NAME`` varies between codes and the respective code documentation should be consulted. One option to manually convert those raw fields to the standard yt fields is by adding derived fields, e.g., for the field named "``mass.density``" that is stored in cgs units on disk: .. code-block:: python import yt from yt import derived_field @derived_field(name="density", units="g*cm**-3", sampling_type="cell") def _density(field, data): return data[("parthenon", "mass.density")] * yt.units.g / yt.units.cm**3 Moreover, an ideal equation of state is assumed with the following parameters, which may be specified using a ``parameters`` dict, accepting the following keys: * ``gamma``: ratio of specific heats, Type: Float. If not specified, :math:`\gamma = 5/3` is assumed. * ``mu``: mean molecular weight, Type: Float.
If not specified, :math:`\mu = 0.6` (for a fully ionized primordial plasma) is assumed. .. _loading-orion-data: AMReX / BoxLib Data ------------------- AMReX and BoxLib share a frontend, since the file format is nearly identical. yt has been tested with AMReX/BoxLib data generated by Orion, Nyx, Maestro, Castro, IAMR, and WarpX. Currently it is cared for by a combination of Andrew Myers, Matthew Turk, and Mike Zingale. To load an AMReX/BoxLib dataset, you can use the ``yt.load`` command on the plotfile directory name. In general, you must also have the ``inputs`` file in the base directory, but Maestro, Castro, Nyx, and WarpX will get all the necessary parameter information from the ``job_info`` file in the plotfile directory. For instance, if you were in a directory with the following files: .. code-block:: none inputs pltgmlcs5600/ pltgmlcs5600/Header pltgmlcs5600/Level_0 pltgmlcs5600/Level_0/Cell_H pltgmlcs5600/Level_1 pltgmlcs5600/Level_1/Cell_H pltgmlcs5600/Level_2 pltgmlcs5600/Level_2/Cell_H pltgmlcs5600/Level_3 pltgmlcs5600/Level_3/Cell_H pltgmlcs5600/Level_4 pltgmlcs5600/Level_4/Cell_H You would feed it the filename ``pltgmlcs5600``: .. code-block:: python import yt ds = yt.load("pltgmlcs5600") For Maestro, Castro, Nyx, and WarpX, you would not need the ``inputs`` file, and you would have a ``job_info`` file in the plotfile directory. .. rubric:: Caveats * yt does not read the Maestro base state (although you can have Maestro map it to a full Cartesian state variable before writing the plotfile to get around this). E-mail the dev list if you need this support. * yt supports AMReX/BoxLib particle data stored in the standard format used by Nyx and WarpX, and optionally Castro. It currently does not support the ASCII particle data used by Maestro and Castro. * For Maestro, yt aliases either ``tfromp`` or ``tfromh`` to ``temperature`` depending on the value of the ``use_tfromp`` runtime parameter. * For Maestro, some velocity fields like ``velocity_magnitude`` or ``mach_number`` will always use the on-disk value, and not have yt derive it, due to the complex interplay of the base state velocity. Viewing raw fields in WarpX ^^^^^^^^^^^^^^^^^^^^^^^^^^^ Most AMReX/BoxLib codes output cell-centered data. If the underlying discretization is not cell-centered, then fields are typically averaged to cell centers before they are written to plot files for visualization. WarpX, however, has the option to output the raw (i.e., not averaged to cell centers) data as well. If you run your WarpX simulation with ``warpx.plot_raw_fields = 1`` in your inputs file, then you should get an additional ``raw_fields`` subdirectory inside your plot file. When you load this dataset, yt will have additional on-disk fields defined, with the "raw" field type: .. code-block:: python import yt ds = yt.load("Laser/plt00015/") print(ds.field_list) The raw fields in WarpX are nodal in at least one direction. We define a field to be "nodal" in a given direction if the field data is defined at the "low" and "high" sides of the cell in that direction, rather than at the cell center. Instead of returning one field value per cell selected, nodal fields return a number of values, depending on their centering. This centering is marked by a ``nodal_flag`` that describes whether the field is nodal in each dimension. ``nodal_flag = [0, 0, 0]`` means that the field is cell-centered, while ``nodal_flag = [0, 0, 1]`` means that the field is nodal in the z direction and cell centered in the others, i.e.
it is defined on the z faces of each cell. ``nodal_flag = [1, 1, 0]`` would mean that the field is centered in the z direction, but nodal in the other two, i.e. it lives on the four cell edges that are normal to the z direction. .. code-block:: python ds.index ad = ds.all_data() print(ds.field_info["raw", "Ex"].nodal_flag) print(ad["raw", "Ex"].shape) print(ds.field_info["raw", "Bx"].nodal_flag) print(ad["raw", "Bx"].shape) Here, the field ``('raw', 'Ex')`` is nodal in two directions, so four values per cell are returned, corresponding to the four edges in each cell on which the variable is defined. ``('raw', 'Bx')`` is nodal in one direction, so two values are returned per cell. The standard, averaged-to-cell-centers fields are still available. Currently, slices and data selection are implemented for nodal fields. Projections, volume rendering, and many of the analysis modules will not work. .. _loading-pluto-data: Pluto Data (AMR) ---------------- Support for Pluto AMR data is provided through the Chombo frontend, which is currently maintained by Andrew Myers. Pluto output files that don't use the Chombo HDF5 format are currently not supported. To load a Pluto dataset, you can use the ``yt.load`` command on the ``*.hdf5`` files. For example, the KelvinHelmholtz sample dataset is a directory that contains the following files: .. code-block:: none data.0004.hdf5 pluto.ini To load it, you can navigate into that directory and do: .. code-block:: python import yt ds = yt.load("data.0004.hdf5") The ``pluto.ini`` file must also be present alongside the HDF5 file. By default, all of the Pluto fields will be in code units. .. _loading-idefix-data: Idefix, Pluto VTK and Pluto XDMF Data ------------------------------------- Support for Idefix ``.dmp``, ``.vtk`` data is provided through the ``yt_idefix`` extension. It also supports monogrid ``.vtk`` and ``.h5`` data from Pluto. See `the PyPI page `_ for details. .. _loading-enzo-data: Enzo Data --------- Enzo data is fully supported and cared for by Matthew Turk. To load an Enzo dataset, you can use the ``yt.load`` command and provide it the dataset name. This would be the name of the output file, and it contains no extension. For instance, if you have the following files: .. code-block:: none DD0010/ DD0010/data0010 DD0010/data0010.index DD0010/data0010.cpu0000 DD0010/data0010.cpu0001 DD0010/data0010.cpu0002 DD0010/data0010.cpu0003 You would feed the ``load`` command the filename ``DD0010/data0010`` as mentioned. .. code-block:: python import yt ds = yt.load("DD0010/data0010") .. rubric:: Caveats * There are no major caveats for Enzo usage. * Units should be correct, if you utilize standard unit-setting routines. yt will notify you if it cannot determine the units, although this notification will be passive. * 2D and 1D data are supported, but the extraneous dimensions are set to be of length 1.0 in "code length" which may produce strange results for volume quantities. Enzo MHDCT data ^^^^^^^^^^^^^^^ The electric and magnetic fields for Enzo MHDCT simulations are defined on cell faces, unlike other Enzo fields which are defined at cell centers. In yt, we call face-centered fields like this "nodal". We define a field to be nodal in a given direction if the field data is defined at the "low" and "high" sides of the cell in that direction, rather than at the cell center.
Instead of returning one field value per cell selected, nodal fields return a number of values, depending on their centering. This centering is marked by a ``nodal_flag`` that describes whether the field is nodal in each dimension. ``nodal_flag = [0, 0, 0]`` means that the field is cell-centered, while ``nodal_flag = [0, 0, 1]`` means that the field is nodal in the z direction and cell centered in the others, i.e. it is defined on the z faces of each cell. ``nodal_flag = [1, 1, 0]`` would mean that the field is centered in the z direction, but nodal in the other two, i.e. it lives on the four cell edges that are normal to the z direction. .. code-block:: python ds.index ad = ds.all_data() print(ds.field_info["enzo", "Ex"].nodal_flag) print(ad["enzo", "Ex"].shape) print(ds.field_info["enzo", "BxF"].nodal_flag) print(ad["enzo", "BxF"].shape) print(ds.field_info["enzo", "Bx"].nodal_flag) print(ad["enzo", "Bx"].shape) Here, the field ``('enzo', 'Ex')`` is nodal in two directions, so four values per cell are returned, corresponding to the four edges in each cell on which the variable is defined. ``('enzo', 'BxF')`` is nodal in one direction, so two values are returned per cell. The standard, non-nodal field ``('enzo', 'Bx')`` is also available. Currently, slices and data selection are implemented for nodal fields. Projections, volume rendering, and many of the analysis modules will not work. .. _loading-enzoe-data: Enzo-E Data ----------- Enzo-E outputs have three types of files. .. code-block:: none hello-0200/ hello-0200/hello-0200.block_list hello-0200/hello-0200.file_list hello-0200/hello-0200.hello-c0020-p0000.h5 To load Enzo-E data into yt, provide the block list file: .. code-block:: python import yt ds = yt.load("hello-0200/hello-0200.block_list") Mesh and particle fields are fully supported for 1, 2, and 3D datasets. Enzo-E supports arbitrary particle types defined by the user. The available particle types will be known as soon as the dataset index is created. .. code-block:: python ds = yt.load("ENZOP_DD0140/ENZOP_DD0140.block_list") ds.index print(ds.particle_types) print(ds.particle_type_counts) print(ds.r["dark", "particle_position"]) .. _loading-exodusii-data: Exodus II Data -------------- .. note:: To load Exodus II data, you need to have the `netcdf4 `_ python interface installed. Exodus II is a file format for Finite Element datasets that is used by the MOOSE framework for file IO. Support for this format (and for unstructured mesh data in general) is a new feature as of yt 3.3, so while we aim to fully support it, we also expect there to be some buggy features at present. Currently, yt can visualize quads, hexes, triangles, and tetrahedral element types at first order. Additionally, there is experimental support for the high-order visualization of 20-node hex elements. Development of more high-order visualization capability is a work in progress. To load an Exodus II dataset, you can use the ``yt.load`` command on the Exodus II file: .. code-block:: python import yt ds = yt.load("MOOSE_sample_data/out.e-s010", step=0) Because Exodus II datasets can have multiple steps (which can correspond to time steps, Picard iterations, non-linear solve iterations, etc...), you can also specify a step argument when you load an Exodus II dataset that defines the index at which to look when you read data from the file. Omitting this argument is the same as passing in 0, and setting ``step=-1`` selects the last time output in the file.
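For example, the same file can be opened at its first or last stored step:

.. code-block:: python

    import yt

    # step=0 (the default) is the first output; step=-1 is the last
    ds_first = yt.load("MOOSE_sample_data/out.e-s010", step=0)
    ds_last = yt.load("MOOSE_sample_data/out.e-s010", step=-1)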
You can access the connectivity information directly by doing: .. code-block:: python import yt ds = yt.load("MOOSE_sample_data/out.e-s010", step=-1) print(ds.index.meshes[0].connectivity_coords) print(ds.index.meshes[0].connectivity_indices) print(ds.index.meshes[1].connectivity_coords) print(ds.index.meshes[1].connectivity_indices) This particular dataset has two meshes in it, both of which are made of 8-node hexes. yt uses a field name convention to access these different meshes in plots and data objects. To see all the fields found in a particular dataset, you can do: .. code-block:: python import yt ds = yt.load("MOOSE_sample_data/out.e-s010") print(ds.field_list) This will give you a list of field names like ``('connect1', 'diffused')`` and ``('connect2', 'convected')``. Here, fields labelled with ``'connect1'`` correspond to the first mesh, and those with ``'connect2'`` to the second, and so on. To grab the value of the ``'convected'`` variable at all the nodes in the first mesh, for example, you would do: .. code-block:: python import yt ds = yt.load("MOOSE_sample_data/out.e-s010") ad = ds.all_data()  # geometric selection, this just grabs everything print(ad["connect1", "convected"]) In this dataset, ``('connect1', 'convected')`` is a nodal field, meaning that the field values are defined at the vertices of the elements. If we examine the shape of the returned array: .. code-block:: python import yt ds = yt.load("MOOSE_sample_data/out.e-s010") ad = ds.all_data() print(ad["connect1", "convected"].shape) we see that this mesh has 12480 8-node hexahedral elements, and that we get 8 field values for each element. To get the vertex positions at which these field values are defined, we can do, for instance: .. code-block:: python import yt ds = yt.load("MOOSE_sample_data/out.e-s010") ad = ds.all_data() print(ad["connect1", "vertex_x"]) If we instead look at an element-centered field, like ``('connect1', 'conv_indicator')``: .. code-block:: python import yt ds = yt.load("MOOSE_sample_data/out.e-s010") ad = ds.all_data() print(ad["connect1", "conv_indicator"].shape) we get only one field value per element. For information about visualizing unstructured mesh data, including Exodus II datasets, please see :ref:`unstructured-mesh-slices` and :ref:`unstructured_mesh_rendering`. Displacement Fields ^^^^^^^^^^^^^^^^^^^ Finite element codes often solve for the displacement of each vertex from its original position as a node variable, rather than updating the actual vertex positions with time. For analysis and visualization, it is often useful to turn these displacements on or off, and to be able to scale them arbitrarily to emphasize certain features of the solution. To allow this, if ``yt`` detects displacement fields in an Exodus II dataset (using the convention that they will be named ``disp_x``, ``disp_y``, etc...), it will optionally add these to the mesh vertex positions for the purposes of visualization. Displacement fields can be controlled when a dataset is loaded by passing in an optional dictionary to the ``yt.load`` command. This feature is turned off by default, meaning that a dataset loaded as .. code-block:: python import yt ds = yt.load("MOOSE_sample_data/mps_out.e") will not include the displacements in the vertex positions. The displacements can be turned on separately for each mesh in the file by passing in a tuple of (scale, offset) pairs for the meshes you want to enable displacements for.
For example, the following code snippet turns displacements on for the second mesh, but not the first: .. code-block:: python import yt ds = yt.load( "MOOSE_sample_data/mps_out.e", step=10, displacements={"connect2": (1.0, [0.0, 0.0, 0.0])}, ) The displacements can also be scaled by an arbitrary factor before they are added to the vertex positions. The following code turns on displacements for both ``connect1`` and ``connect2``, scaling the former by a factor of 5.0 and the latter by a factor of 10.0: .. code-block:: python import yt ds = yt.load( "MOOSE_sample_data/mps_out.e", step=10, displacements={ "connect1": (5.0, [0.0, 0.0, 0.0]), "connect2": (10.0, [0.0, 0.0, 0.0]), }, ) Finally, we can also apply an arbitrary offset to the mesh vertices after the scale factor is applied. For example, the following code scales all displacements in the second mesh by a factor of 5.0, and then shifts each vertex in the mesh by 1.0 unit in the z-direction: .. code-block:: python import yt ds = yt.load( "MOOSE_sample_data/mps_out.e", step=10, displacements={"connect2": (5.0, [0.0, 0.0, 1.0])}, ) .. _loading-fits-data: FITS Data --------- FITS data is *mostly* supported and cared for by John ZuHone. In order to read FITS data, `AstroPy `_ must be installed. FITS data cubes can be loaded by yt in the same way as other datasets. yt can read FITS image files that have the following (case-insensitive) suffixes: * fits * fts * fits.gz * fts.gz yt can currently read two kinds of FITS files: FITS image files and FITS binary table files containing positions, times, and energies of X-ray events. These are described in more detail below. Types of FITS Datasets Supported by yt ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ yt FITS Data Standard """"""""""""""""""""" yt has facilities for creating 2 and 3-dimensional FITS images from derived, fixed-resolution data products from other datasets. These include images produced from slices, projections, and 3D covering grids. The resulting FITS images are fully self-describing, in that unit, parameter, and coordinate information is passed from the original dataset. These can be created via the :class:`~yt.visualization.fits_image.FITSImageData` class and its subclasses. For information about how to use these special classes, see :doc:`../visualizing/FITSImageData`. Once you have produced a FITS file in this fashion, you can load it using yt and it will be detected as a ``YTFITSDataset`` object, and it can be analyzed in the same way as any other dataset in yt. Astronomical Image Data """"""""""""""""""""""" These files are one of three types: * Generic two-dimensional FITS images in sky coordinates * Three or four-dimensional "spectral cubes" * *Chandra* event files These FITS images typically are in celestial or galactic coordinates, and for 3D spectral cubes the third axis is typically in velocity, wavelength, or frequency units. For these datasets, since yt does not yet recognize non-spatial axes, the coordinates are in units of the image pixels. The coordinates of these pixels in the WCS coordinate systems will be available in separate fields. Often, the aspect ratio of 3D spectral cubes can be far from unity. Because yt sets the pixel scale as the ``code_length``, certain visualizations (such as volume renderings) may look extended or distended in ways that are undesirable.
To adjust the width in ``code_length`` of the spectral axis, set ``spectral_factor`` equal to a constant which gives the desired scaling, or set it to ``"auto"`` to make the width the same as the largest axis in the sky plane: .. code-block:: python ds = yt.load("m33_hi.fits.gz", spectral_factor=0.1) For 4D spectral cubes, the fourth axis is assumed to be composed of different fields altogether (e.g., Stokes parameters for radio data). *Chandra* X-ray event data, which is in tabular form, will be loaded as particle fields in yt, but a grid will be constructed from the WCS information in the FITS header. There is a helper function, ``setup_counts_fields``, which may be used to make deposited image fields from the event data for different energy bands (for an example see :doc:`../cookbook/fits_xray_images`). Generic FITS Images """"""""""""""""""" If the FITS file contains images but does not have adequate header information to fall into one of the above categories, yt will still load the data, but the resulting field and/or coordinate information will necessarily be incomplete. Field names may not be descriptive, and units may be incorrect. To get the full use out of yt for FITS files, make sure that the file is sufficiently self-describing to fall into one of the above categories. Making the Most of yt for FITS Data ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ yt will load data without WCS information and/or some missing header keywords, but the resulting field and/or coordinate information will necessarily be incomplete. For example, field names may not be descriptive, and units will not be correct. To get the full use out of yt for FITS files, make sure that for each image HDU the following standard header keywords have sensible values: * ``CDELTx``: The pixel width along axis ``x`` * ``CRVALx``: The coordinate value at the reference position along axis ``x`` * ``CRPIXx``: The reference pixel along axis ``x`` * ``CTYPEx``: The projection type of axis ``x`` * ``CUNITx``: The units of the coordinate along axis ``x`` * ``BTYPE``: The type of the image, this will be used as the field name * ``BUNIT``: The units of the image FITS header keywords can easily be updated using AstroPy. For example, to set the ``BTYPE`` and ``BUNIT`` keywords: .. code-block:: python from astropy.io import fits f = fits.open("xray_flux_image.fits", mode="update") f[0].header["BUNIT"] = "cts/s/pixel" f[0].header["BTYPE"] = "flux" f.flush() f.close() FITS Data Decomposition ^^^^^^^^^^^^^^^^^^^^^^^ Though a FITS image is composed of a single array in the FITS file, upon being loaded into yt it is automatically decomposed into grids: .. code-block:: python import yt ds = yt.load("m33_hi.fits") ds.print_stats() .. parsed-literal:: level # grids # cells # cells^3 ---------------------------------------------- 0 512 981940800 994 ---------------------------------------------- 512 981940800 For 3D spectral-cube data, the decomposition into grids will be done along the spectral axis since this will speed up many common operations for this particular type of dataset. yt will generate its own domain decomposition, but the number of grids can be set manually by passing the ``nprocs`` parameter to the ``load`` call: .. code-block:: python ds = yt.load("m33_hi.fits", nprocs=64) Fields in FITS Datasets ^^^^^^^^^^^^^^^^^^^^^^^ Multiple fields can be included in a FITS dataset in several different ways. The first way, and the simplest, is if more than one image HDU is contained within the same file.
Fields in FITS Datasets
^^^^^^^^^^^^^^^^^^^^^^^

Multiple fields can be included in a FITS dataset in several different ways. The first way, and the simplest, is if more than one image HDU is contained within the same file. The field names will be determined by the value of ``BTYPE`` in the header, and the field units will be determined by the value of ``BUNIT``.

The second way is if a dataset has a fourth axis, with each slice along this axis corresponding to a different field. In this case, the field names will be determined by the value of the ``CTYPE4`` keyword and the index of the slice. So, for example, if ``BTYPE`` = ``"intensity"`` and ``CTYPE4`` = ``"stokes"``, then the fields will be named ``"intensity_stokes_1"``, ``"intensity_stokes_2"``, and so on.

The third way is if auxiliary files are included along with the main file, like so:

.. code-block:: python

    ds = yt.load("flux.fits", auxiliary_files=["temp.fits", "metal.fits"])

The image blocks in each of these files will be loaded as a separate field, provided they have the same dimensions as the image blocks in the main file.

Additionally, fields corresponding to the WCS coordinates will be generated based on the corresponding ``CTYPEx`` keywords. When queried, these fields will be generated from the pixel coordinates in the file using the WCS transformations provided by AstroPy.

.. note::

    Each FITS image from a single dataset, whether from one file or from one of multiple files, must have the same dimensions and WCS information as the first image in the primary file. If this is not the case, yt will raise a warning and will not load this field.

.. _additional_fits_options:

Additional Options
^^^^^^^^^^^^^^^^^^

The following are additional options that may be passed to the ``load`` command when analyzing FITS data:

``nan_mask``
""""""""""""

FITS image data may include ``NaNs``. If you wish to mask this data out, you may supply a ``nan_mask`` parameter, which may either be a single floating-point number (applies to all fields) or a Python dictionary containing different mask values for different fields:

.. code-block:: python

    # passing a single float for all images
    ds = yt.load("m33_hi.fits", nan_mask=0.0)

    # passing a dict
    ds = yt.load("m33_hi.fits", nan_mask={"intensity": -1.0, "temperature": 0.0})

``suppress_astropy_warnings``
"""""""""""""""""""""""""""""

Generally, AstroPy may generate a lot of warnings about individual FITS files, many of which you may want to ignore. If you want to see these warnings, set ``suppress_astropy_warnings = False``.

Miscellaneous Tools for Use with FITS Data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A number of tools have been prepared for use with FITS data that enhance yt's visualization and analysis capabilities for this particular type of data. These are included in the ``yt.frontends.fits.misc`` module, and can be imported like so:

.. code-block:: python

    from yt.frontends.fits.misc import PlotWindowWCS, ds9_region, setup_counts_fields

``setup_counts_fields``
"""""""""""""""""""""""

This function can be used to create image fields from X-ray counts data in different energy bands:

.. code-block:: python

    ebounds = [(0.1, 2.0), (2.0, 5.0)]  # Energies are in keV
    setup_counts_fields(ds, ebounds)

which would make two fields, ``"counts_0.1-2.0"`` and ``"counts_2.0-5.0"``, and add them to the field registry for the dataset ``ds``.

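Once created, these fields can be queried like any other field. As a quick sketch (reusing the dataset ``ds`` and the energy bands from above):

.. code-block:: python

    ad = ds.all_data()

    # total number of deposited events in the soft band
    print(ad["counts_0.1-2.0"].sum())
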
``ds9_region``
""""""""""""""

This function takes a `ds9 <https://ds9.si.edu/>`_ region and creates a "cut region" data container from it, which can be used to select the cells in the FITS dataset that fall within the region. To use this functionality, the `regions <https://github.com/astropy/regions>`_ package must be installed:

.. code-block:: python

    ds = yt.load("m33_hi.fits")
    circle_region = ds9_region(ds, "circle.reg")
    print(circle_region.quantities.extrema("flux"))

``PlotWindowWCS``
"""""""""""""""""

This class takes an on-axis ``SlicePlot`` or ``ProjectionPlot`` of FITS data and adds celestial coordinates to the plot axes. To use it, a version of AstroPy >= 1.3 must be installed:

.. code-block:: python

    # slc is an on-axis SlicePlot of a FITS dataset
    wcs_slc = PlotWindowWCS(slc)
    wcs_slc.show()  # for Jupyter notebooks
    wcs_slc.save()

``WCSAxes`` is still in an experimental state, but as its functionality improves it will be utilized more here.

``create_spectral_slabs``
"""""""""""""""""""""""""

.. note::

    The following functionality requires the `spectral-cube <https://spectral-cube.readthedocs.io/>`_ library to be installed.

If you have a spectral intensity dataset of some sort, and would like to extract emission in particular slabs along the spectral axis of a certain width, ``create_spectral_slabs`` can be used to generate a dataset with these slabs as different fields. In this example, we use it to extract individual lines from an intensity cube:

.. code-block:: python

    slab_centers = {
        "13CN": (218.03117, "GHz"),
        "CH3CH2CHO": (218.284256, "GHz"),
        "CH3NH2": (218.40956, "GHz"),
    }
    slab_width = (0.05, "GHz")
    ds = create_spectral_slabs(
        "intensity_cube.fits", slab_centers, slab_width, nan_mask=0.0
    )

All keyword arguments to ``create_spectral_slabs`` are passed on to ``load`` when creating the dataset (see :ref:`additional_fits_options` above). In the returned dataset, the different slabs will be different fields, with the field names taken from the keys in ``slab_centers``. The WCS coordinates on the spectral axis are reset so that the center of the domain along this axis is zero, and the left and right edges of the domain along this axis are :math:`\pm` ``0.5*slab_width``.

Examples of Using FITS Data
^^^^^^^^^^^^^^^^^^^^^^^^^^^

The following Jupyter notebooks show examples of working with FITS data in yt, which we recommend you look at in the following order:

* :doc:`../cookbook/fits_radio_cubes`
* :doc:`../cookbook/fits_xray_images`
* :doc:`../visualizing/FITSImageData`

.. _loading-flash-data:

FLASH Data
----------

FLASH HDF5 data is *mostly* supported and cared for by John ZuHone. To load a FLASH dataset, you can use the ``yt.load`` command and provide it the file name of a plot file, checkpoint file, or particle file. Particle files require special handling depending on the situation, the main issue being that they typically lack grid information. The first case is when you have a plotfile and a particle file that you would like to load together. In the simplest case, this occurs automatically. For instance, if you were in a directory with the following files:

.. code-block:: none

    radio_halo_1kpc_hdf5_plt_cnt_0100  # plotfile
    radio_halo_1kpc_hdf5_part_0100     # particle file

where the plotfile and the particle file were created at the same time (therefore having particle data consistent with the grid structure of the former). Notice also that the prefix ``"radio_halo_1kpc_"`` and the file number ``100`` are the same. In this special case, the particle file will be loaded automatically when ``yt.load`` is called on the plotfile. This also works when loading a number of files in a time series.

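For example, using the file names above, no extra arguments are needed in this case:

.. code-block:: python

    import yt

    # radio_halo_1kpc_hdf5_part_0100 is detected and loaded automatically
    ds = yt.load("radio_halo_1kpc_hdf5_plt_cnt_0100")
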
If the two files do not have the same prefix and number, but they nevertheless have the same grid structure and are at the same simulation time, the particle data may be loaded with the ``particle_filename`` optional argument to ``yt.load``:

.. code-block:: python

    import yt

    ds = yt.load(
        "radio_halo_1kpc_hdf5_plt_cnt_0100",
        particle_filename="radio_halo_1kpc_hdf5_part_0100",
    )

However, if you don't have a corresponding plotfile for a particle file, but would still like to load the particle data, you can still call ``yt.load`` on the file. In this case, the grid information will not be available, and the particle data will be loaded in a fashion similar to other particle-based datasets in yt.

Mean Molecular Weight and Number Density Fields
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The way the mean molecular weight and number density fields are defined depends on what type of simulation you are running. If you are running a simulation without species and a :math:`\gamma`-law equation of state, then the mean molecular weight is defined using the ``eos_singleSpeciesA`` parameter in the FLASH dataset. If you have multiple species and your dataset contains the FLASH field ``"abar"``, then this is used as the mean molecular weight. In either case, the number density field is calculated using this weight.

If you are running a FLASH simulation where the fields ``"sumy"`` and ``"ye"`` are present, then the mean molecular weight is the inverse of ``"sumy"``, and the fields ``"El_number_density"``, ``"ion_number_density"``, and ``"number_density"`` are defined using the following mathematical definitions:

* ``"El_number_density"`` :math:`n_e = N_AY_e\rho`
* ``"ion_number_density"`` :math:`n_i = N_A\rho/\bar{A}`
* ``"number_density"`` :math:`n = n_e + n_i`

where :math:`n_e` and :math:`n_i` are the electron and ion number densities, :math:`\rho` is the mass density, :math:`Y_e` is the electron number per baryon, :math:`\bar{A}` is the mean molecular weight, and :math:`N_A` is Avogadro's number.

.. rubric:: Caveats

* Please be careful that the units are correctly utilized; yt assumes cgs by default, but conversion to other unit systems is also possible.

.. _loading-gadget-data:

Gadget Data
-----------

.. note::

    For more information about how yt indexes and reads particle data, see the section :ref:`demeshening`.

yt has support for reading Gadget data in both raw binary and HDF5 formats. It is able to access the particles as it would any other particle dataset, and it can apply smoothing kernels to the data to produce both quantitative analysis and visualization. See :ref:`loading-sph-data` for more details and :doc:`../cookbook/yt_gadget_analysis` for a detailed example of loading, analyzing, and visualizing a Gadget dataset. An example which makes use of a Gadget snapshot from the OWLS project can be found in :doc:`../cookbook/yt_gadget_owls_analysis`.

.. note::

    If you are loading a multi-file dataset with Gadget, you can either supply the *zeroth* file to the ``load`` command or the directory containing all of the files. For instance, to load the *zeroth* file: ``yt.load("snapshot_061.0.hdf5")``. To give just the directory, if you have all of your ``snapshot_000.*`` files in a directory called ``snapshot_000``, do: ``yt.load("/path/to/snapshot_000")``.

Gadget data in HDF5 format can be loaded with the ``load`` command:

.. code-block:: python

    import yt

    ds = yt.load("snapshot_061.hdf5")

Gadget data in raw binary format can also be loaded with the ``load`` command. This is supported for snapshots created with the ``SnapFormat`` parameter set to 1 or 2:

.. code-block:: python

    import yt

    ds = yt.load("snapshot_061")

.. _particle-bbox:

Units and Bounding Boxes
^^^^^^^^^^^^^^^^^^^^^^^^

There are two additional pieces of information that may be needed. If your simulation is cosmological, yt can often guess the bounding box and the units of the simulation. However, for isolated simulations and for cosmological simulations with non-standard units, these must be supplied by the user. For example, if a length unit of 1.0 corresponds to a kiloparsec, you can supply this in the constructor. yt can accept units such as ``Mpc``, ``kpc``, ``cm``, ``Mpccm/h`` and so on. In particular, note that ``Mpc/h`` and ``Mpccm/h`` (``cm`` for comoving here) are usable unit definitions.

yt will attempt to use units for ``mass``, ``length``, ``time``, and ``magnetic`` as supplied in the argument ``unit_base``. The ``bounding_box`` argument is a list of two-item tuples or lists that describe the left and right extents of the particles. In this example we load a dataset with a custom bounding box and units:

.. code-block:: python

    bbox = [[-600.0, 600.0], [-600.0, 600.0], [-600.0, 600.0]]
    unit_base = {
        "length": (1.0, "kpc"),
        "velocity": (1.0, "km/s"),
        "mass": (1.0, "Msun"),
    }

    ds = yt.load("snap_004", unit_base=unit_base, bounding_box=bbox)

.. warning::

    If a ``bounding_box`` argument is supplied and the original dataset has periodic boundaries, it will no longer have periodic boundaries after the bounding box is applied.

In addition, you can use ``UnitLength_in_cm``, ``UnitVelocity_in_cm_per_s``, ``UnitMass_in_g``, and ``UnitMagneticField_in_gauss`` as keys for the ``unit_base`` dictionary. These names come from the names used in the Gadget runtime parameter file. This example will initialize a dataset with the same units as the example above:

.. code-block:: python

    unit_base = {
        "UnitLength_in_cm": 3.09e21,
        "UnitVelocity_in_cm_per_s": 1e5,
        "UnitMass_in_g": 1.989e33,
    }

    ds = yt.load("snap_004", unit_base=unit_base, bounding_box=bbox)

.. _gadget-field-spec:

Field Specifications
^^^^^^^^^^^^^^^^^^^^

Binary Gadget outputs often have additional fields or particle types that are non-standard from the default Gadget distribution format. These can be specified in the call to ``GadgetDataset`` by either supplying one of the sets of field specifications as a string or by supplying a field specification itself. As an example, yt has built-in definitions for ``default`` (the default), ``agora_unlv``, ``group0000``, and ``magneticum_box2_hr``. They can be used like this:

.. code-block:: python

    ds = yt.load("snap_100", field_spec="group0000")

Field specifications must be tuples, and must be of this format:

.. code-block:: python

    default = (
        "Coordinates",
        "Velocities",
        "ParticleIDs",
        "Mass",
        ("InternalEnergy", "Gas"),
        ("Density", "Gas"),
        ("SmoothingLength", "Gas"),
    )

This is the default specification used by the Gadget frontend. It means that the fields are, in order, Coordinates, Velocities, ParticleIDs, Mass, and the fields InternalEnergy, Density and SmoothingLength *only* for Gas particles. So for example, if you have defined a Metallicity field for the particle type Halo, which comes right after ParticleIDs in the file, you could define it like this:

.. code-block:: python

    import yt

    my_field_def = (
        "Coordinates",
        "Velocities",
        "ParticleIDs",
        ("Metallicity", "Halo"),
        "Mass",
        ("InternalEnergy", "Gas"),
        ("Density", "Gas"),
        ("SmoothingLength", "Gas"),
    )

    ds = yt.load("snap_100", field_spec=my_field_def)

To save time, you can utilize the plugins file for yt and use it to add items to the dictionary where these definitions are stored. You could do this like so:

.. code-block:: python

    import yt
    from yt.frontends.gadget.definitions import gadget_field_specs

    gadget_field_specs["my_field_def"] = my_field_def

    ds = yt.load("snap_100", field_spec="my_field_def")

Please also feel free to issue a pull request with any new field specifications, as we're happy to include them in the main distribution!

Magneticum halos downloaded using the SIMCUT method from the `Cosmological Web Portal <https://c2papcosmosim.uc.lrz.de/>`_ can be loaded using the ``"magneticum_box2_hr"`` value for the ``field_spec`` argument. However, this is strictly only true for halos downloaded after May 14, 2021, since before then the halos had the following signature (with the ``"StellarAge"`` field for the ``"Bndry"`` particles missing):

.. code-block:: python

    magneticum_box2_hr = (
        "Coordinates",
        "Velocities",
        "ParticleIDs",
        "Mass",
        ("InternalEnergy", "Gas"),
        ("Density", "Gas"),
        ("SmoothingLength", "Gas"),
        ("ColdFraction", "Gas"),
        ("Temperature", "Gas"),
        ("StellarAge", "Stars"),
        "Potential",
        ("InitialMass", "Stars"),
        ("ElevenMetalMasses", ("Gas", "Stars")),
        ("StarFormationRate", "Gas"),
        ("TrueMass", "Bndry"),
        ("AccretionRate", "Bndry"),
    )

and before November 20, 2020, the field specification had the ``"ParticleIDs"`` and ``"Mass"`` fields swapped:

.. code-block:: python

    magneticum_box2_hr = (
        "Coordinates",
        "Velocities",
        "Mass",
        "ParticleIDs",
        ("InternalEnergy", "Gas"),
        ("Density", "Gas"),
        ("SmoothingLength", "Gas"),
        ("ColdFraction", "Gas"),
        ("Temperature", "Gas"),
        ("StellarAge", "Stars"),
        "Potential",
        ("InitialMass", "Stars"),
        ("ElevenMetalMasses", ("Gas", "Stars")),
        ("StarFormationRate", "Gas"),
        ("TrueMass", "Bndry"),
        ("AccretionRate", "Bndry"),
    )

In general, to determine what fields are in your Gadget binary file, it may be useful to inspect them with the `g3read <https://github.com/aragagnin/g3read>`_ code first.

.. _gadget-species-fields:

Gadget Species Fields
^^^^^^^^^^^^^^^^^^^^^

Gas and star particles in Gadget binary and HDF5 files can have fields corresponding to different species fractions or masses. The following field definitions are supported, in the sense that they are automatically detected and will be used to construct species fractions, densities, and number densities after the manner specified in :ref:`species-fields`.

For Gadget binary files, the following fields (as specified in the ``field_spec`` argument) are supported:

* ``"ElevenMetalMasses"``: 11 mass fields: He, C, Ca, O, N, Ne, Mg, S, Si, Fe, Ej
* ``"FourMetalFractions"``: 4 fraction fields: C, O, Si, Fe

For Gadget HDF5 files, the fields ``"MetalMasses"`` or ``"Mass Of Metals"`` are supported, with the number of species determined by the size of the dataset's second dimension in the file. Four different numbers of species in these fields are supported, corresponding to the following species:

* 7, corresponding to C, N, O, Mg, Si, Fe, Ej
* 8, corresponding to He, C, O, Mg, S, Si, Fe, Ej
* 11, corresponding to He, C, Ca, O, N, Ne, Mg, S, Si, Fe, Ej
* 15, corresponding to He, C, Ca, O, N, Ne, Mg, S, Si, Fe, Na, Al, Ar, Ni, Ej

Two points should be noted about the above: the "Ej" species corresponds to the remaining mass of elements heavier than hydrogen and not enumerated, and in the case of 8, 11, and 15 species, hydrogen is assumed to be the remaining mass fraction.

Finally, for Gadget HDF5 files, element fields which are of the form ``"X_fraction"`` are also supported, and correspond to the mass fraction of element X.

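To check which species fields were actually constructed for a given snapshot, you can inspect the derived field list (a minimal sketch, reusing the HDF5 snapshot name from above):

.. code-block:: python

    import yt

    ds = yt.load("snapshot_061.hdf5")

    # species fields show up in the derived field list, e.g. as number densities
    print([f for f in ds.derived_field_list if "number_density" in f[1]])
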
.. _gadget-long-ids:

Long Particle IDs
^^^^^^^^^^^^^^^^^

Some Gadget binary files use 64-bit integers for particle IDs. To use these, simply set ``long_ids=True`` when loading the dataset:

.. code-block:: python

    import yt

    ds = yt.load("snap_100", long_ids=True)

This is needed, for example, for Magneticum halos downloaded using the SIMCUT method from the `Cosmological Web Portal <https://c2papcosmosim.uc.lrz.de/>`_.

.. _gadget-ptype-spec:

Particle Type Definitions
^^^^^^^^^^^^^^^^^^^^^^^^^

In some cases, research groups add new particle types or re-order them. You can supply alternate particle types by using the keyword ``ptype_spec`` to the ``GadgetDataset`` call. The default for Gadget binary data is:

.. code-block:: python

    ("Gas", "Halo", "Disk", "Bulge", "Stars", "Bndry")

You can specify alternate names, but note that this may cause problems with the field specification if none of the names match old names.

.. _gadget-header-spec:

Header Specification
^^^^^^^^^^^^^^^^^^^^

If you have modified the header in your Gadget binary file, you can specify an alternate header specification with the keyword ``header_spec``. This can either be a list of strings corresponding to individual header types known to yt, or it can be a combination of strings and header specifications. The default header specification (found in ``yt/frontends/gadget/definitions.py``) is:

.. code-block:: python

    default = (
        ("Npart", 6, "i"),
        ("Massarr", 6, "d"),
        ("Time", 1, "d"),
        ("Redshift", 1, "d"),
        ("FlagSfr", 1, "i"),
        ("FlagFeedback", 1, "i"),
        ("Nall", 6, "i"),
        ("FlagCooling", 1, "i"),
        ("NumFiles", 1, "i"),
        ("BoxSize", 1, "d"),
        ("Omega0", 1, "d"),
        ("OmegaLambda", 1, "d"),
        ("HubbleParam", 1, "d"),
        ("FlagAge", 1, "i"),
        ("FlagMEtals", 1, "i"),
        ("NallHW", 6, "i"),
        ("unused", 16, "i"),
    )

These items will all be accessible inside the object ``ds.parameters``, which is a dictionary. You can add combinations of new items, specified in the same way, or alternately other types of headers. The other string keys defined are ``pad32``, ``pad64``, ``pad128``, and ``pad256``, each of which corresponds to an empty padding in bytes. For example, if you have an additional 256 bytes of padding at the end, you can specify this with:

.. code-block:: python

    header_spec = "default+pad256"

Note that a single string like this means a single header block. To specify multiple header blocks, use a list of strings instead:

.. code-block:: python

    header_spec = ["default", "pad256"]

This can then be supplied to the constructor. Note that you can also define header items manually, for instance with:

.. code-block:: python

    from yt.frontends.gadget.definitions import gadget_header_specs

    gadget_header_specs["custom"] = (("some_value", 8, "d"), ("another_value", 1, "i"))
    header_spec = "default+custom"

The letters correspond to data types from the Python struct module. Please feel free to submit alternate header types to the main yt repository.

.. _specifying-gadget-units:

Specifying Units
^^^^^^^^^^^^^^^^

If you are running a cosmology simulation, yt will be able to guess the units with some reliability. However, if you are not and you do not supply a ``unit_base``, yt will not be able to, and will use the defaults of length being 1.0 Mpc/h (comoving), velocity being in cm/s, and mass being in 10^10 Msun/h. You can specify alternate units by supplying the ``unit_base`` keyword argument of this form:

.. code-block:: python

    unit_base = {"length": (1.0, "cm"), "mass": (1.0, "g"), "time": (1.0, "s")}

yt will utilize length, mass and time to set up all other units.

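For example (a sketch, reusing the binary snapshot name from above):

.. code-block:: python

    import yt

    unit_base = {"length": (1.0, "cm"), "mass": (1.0, "g"), "time": (1.0, "s")}
    ds = yt.load("snapshot_061", unit_base=unit_base)
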
.. _loading-swift-data:

SWIFT Data
----------

.. note::

    For more information about how yt indexes and reads particle data, see the section :ref:`demeshening`.

yt has support for reading in SWIFT data from the HDF5 file format. It is able to access all particles and fields which are stored on-disk, and it is also able to generate derived fields (e.g., linear momentum) from on-disk fields.

It is also possible to smooth the data onto a grid or an octree. This interpolation can be done with an SPH kernel using either the scatter or gather approach. The SWIFT frontend is supported and cared for by Ashley Kelly.

SWIFT data in HDF5 format can be loaded with the ``load`` command:

.. code-block:: python

    import yt

    ds = yt.load("EAGLE_6/eagle_0005.hdf5")

.. _arepo-data:

Arepo Data
----------

.. note::

    For more information about how yt indexes and reads discrete data, see the section :ref:`demeshening`.

Arepo data is currently treated as SPH data. The gas cells have smoothing lengths assigned using the following prescription for a given gas cell :math:`i`:

.. math::

    h_{\rm sml} = \alpha\left(\frac{3}{4\pi}\frac{m_i}{\rho_i}\right)^{1/3}

where :math:`\alpha` is a constant factor. By default, :math:`\alpha = 2`. In practice, smoothing lengths are only used for creating slices and projections, and this value of :math:`\alpha` works well for this purpose. However, this value can be changed when loading an Arepo dataset by setting the ``smoothing_factor`` parameter:

.. code-block:: python

    import yt

    ds = yt.load("snapshot_100.hdf5", smoothing_factor=1.5)

Currently, only Arepo HDF5 snapshots are supported. If the "GFM" metal fields are present in your dataset (in the on-disk ``"GFM_Metals"`` field), they will be loaded in and aliased to the appropriate species fields. For more information, see the `Illustris TNG documentation `_. If passive scalar fields are present in your dataset, they will be loaded in and aliased to fields with the naming convention ``"PassiveScalars_XX"``, where ``XX`` is the number of the passive scalar array, e.g. ``"00"``, ``"01"``, etc.

HDF5 snapshots will be detected as Arepo data if they have the ``"GFM_Metals"`` field present, or if they have a ``"Config"`` group in the header. If neither of these are the case, and your snapshot *is* Arepo data, you can fix this with the following:

.. code-block:: python

    import h5py

    # saved_filename is the path to your Arepo snapshot
    with h5py.File(saved_filename, "r+") as f:
        f.create_group("Config")
        f["/Config"].attrs["VORONOI"] = 1

.. _loading-gamer-data:

GAMER Data
----------

GAMER HDF5 data is supported and cared for by Hsi-Yu Schive and John ZuHone. Datasets using hydrodynamics, particles, magnetohydrodynamics, wave dark matter, and special relativistic hydrodynamics are supported. You can load the data like this:

.. code-block:: python

    import yt

    ds = yt.load("InteractingJets/jet_000002")

For simulations without units (i.e., ``OPT__UNIT = 0``), you can supply conversions for length, time, and mass to ``load`` using the ``units_override`` functionality:

.. code-block:: python

    import yt

    code_units = {
        "length_unit": (1.0, "kpc"),
        "time_unit": (3.08567758096e13, "s"),
        "mass_unit": (1.4690033e36, "g"),
    }
    ds = yt.load("InteractingJets/jet_000002", units_override=code_units)

Particle data are supported and are always stored in the same file as the grid data.

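For example (a minimal sketch; it assumes the simulation being loaded actually wrote particles):

.. code-block:: python

    import yt

    ds = yt.load("InteractingJets/jet_000002")
    ad = ds.all_data()

    # particle fields are read from the same HDF5 file as the grid fields
    print(ad["all", "particle_mass"])
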
For special relativistic simulations, both the gamma-law and Taub-Mathews EOSes are supported, and the following fields are defined:

* ``("gas", "density")``: Comoving rest-mass density :math:`\rho`
* ``("gas", "frame_density")``: Coordinate-frame density :math:`D = \gamma\rho`
* ``("gas", "gamma")``: Ratio of specific heats :math:`\Gamma`
* ``("gas", "four_velocity_[txyz]")``: Four-velocity fields :math:`U_t, U_x, U_y, U_z`
* ``("gas", "lorentz_factor")``: Lorentz factor :math:`\gamma = \sqrt{1+U_iU^i/c^2}` (where :math:`i` runs over the spatial indices)
* ``("gas", "specific_reduced_enthalpy")``: Specific reduced enthalpy :math:`\tilde{h} = \epsilon + p/\rho`
* ``("gas", "specific_enthalpy")``: Specific enthalpy :math:`h = c^2 + \epsilon + p/\rho`

These, and other fields following them (3-velocity, energy densities, etc.), are computed in the same manner as in the `GAMER-SR paper `_ to avoid catastrophic cancellations.

All of the special relativistic fields will only be available if the ``Temp`` and ``Enth`` fields are present in the dataset, which can be ensured if the runtime options ``OPT__OUTPUT_TEMP = 1`` and ``OPT__OUTPUT_ENTHALPY = 1`` are set in the ``Input__Parameter`` file when running the simulation. This greatly speeds up calculations of the above derived fields in yt.

.. rubric:: Caveats

* GAMER data in raw binary format (i.e., ``OPT__OUTPUT_TOTAL = "C-binary"``) is not supported.

.. _loading-amr-data:

Generic AMR Data
----------------

See :doc:`Loading_Generic_Array_Data` and :func:`~yt.frontends.stream.data_structures.load_amr_grids` for more detail.

.. note::

    It is now possible to load data using *only functions*, rather than using the fully-in-memory method presented here. For more information and examples, see :doc:`Loading_Data_via_Functions`.

It is possible to create a native yt dataset from a Python dictionary that describes a set of rectangular patches of data of possibly varying resolution:

.. code-block:: python

    import numpy as np

    import yt

    grid_data = [
        dict(
            left_edge=[0.0, 0.0, 0.0],
            right_edge=[1.0, 1.0, 1.0],
            level=0,
            dimensions=[32, 32, 32],
        ),
        dict(
            left_edge=[0.25, 0.25, 0.25],
            right_edge=[0.75, 0.75, 0.75],
            level=1,
            dimensions=[32, 32, 32],
        ),
    ]

    for g in grid_data:
        g["density"] = np.random.random(g["dimensions"]) * 2 ** g["level"]

    ds = yt.load_amr_grids(grid_data, [32, 32, 32], 1.0)

.. note::

    yt only supports a block structure where the grid edges on the ``n``-th refinement level are aligned with the cell edges on the ``n-1``-th level.

Particle fields are supported by adding 1-dimensional arrays to each ``grid``'s dict:

.. code-block:: python

    for g in grid_data:
        g["particle_position_x"] = np.random.random(size=100000)

.. rubric:: Caveats

* Some functions may behave oddly, and parallelism will be disappointing or non-existent in most cases.
* No consistency checks are performed on the index.
* Data must already reside in memory.
* Consistency between particle positions and grids is not checked; ``load_amr_grids`` assumes that particle positions associated with one grid are not bounded within another grid at a higher level, so this must be ensured by the user prior to loading the grid data.

Generic Array Data
------------------

See :doc:`Loading_Generic_Array_Data` and :func:`~yt.frontends.stream.data_structures.load_uniform_grid` for more detail.

Even if your data is not strictly related to fields commonly used in astrophysical codes or your code is not supported yet, you can still feed it to yt to use its advanced visualization and analysis facilities.

The only requirement is that your data can be represented as one or more uniform, three-dimensional numpy arrays. Assuming that you have your data in ``arr``, the following code:

.. code-block:: python

    import numpy as np

    import yt

    data = dict(Density=arr)
    bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
    ds = yt.load_uniform_grid(data, arr.shape, 3.08e24, bbox=bbox, nprocs=12)

will create a yt-native dataset ``ds`` that treats your array as a density field in a cubic domain of 3 Mpc edge size (3 * 3.08e24 cm) and simultaneously divides the domain into 12 chunks, so that you can take advantage of the underlying parallelism.

Particle fields are added as one-dimensional arrays in a similar manner as the three-dimensional grid fields:

.. code-block:: python

    import numpy as np

    import yt

    data = dict(
        Density=dens,
        particle_position_x=posx_arr,
        particle_position_y=posy_arr,
        particle_position_z=posz_arr,
    )
    bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
    ds = yt.load_uniform_grid(data, arr.shape, 3.08e24, bbox=bbox, nprocs=12)

where in this example the particle position fields have been assigned. If no particle fields are supplied, then the number of particles is assumed to be zero.

.. rubric:: Caveats

* Particles may be difficult to integrate.
* Data must already reside in memory.

.. _loading-semi-structured-mesh-data:

Semi-Structured Grid Data
-------------------------

.. note::

    With the release of yt-4.1, functionality has been added to allow loading "stretched" grids that are operated on in a more efficient way. This is done via the :func:`~yt.frontends.stream.data_structures.load_uniform_grid` operation, supplying the ``cell_widths`` argument. Using the hexahedral mesh is no longer suggested for situations where the mesh can be adequately described with three arrays of cell widths. See :ref:`loading-stretched-grids` for more information.

See :doc:`Loading_Generic_Array_Data`, :func:`~yt.frontends.stream.data_structures.hexahedral_connectivity`, and :func:`~yt.frontends.stream.data_structures.load_hexahedral_mesh` for more detail.

In addition to uniform grids as described above, you can load in data with non-uniform spacing between datapoints. To load this type of data, you must first specify a hexahedral mesh, a mesh of six-sided cells, on which it will live. You define this by specifying the x, y, and z locations of the corners of the hexahedral cells. The following code:

.. code-block:: python

    import numpy
    import yt

    xgrid = numpy.array([-1, -0.65, 0, 0.65, 1])
    ygrid = numpy.array([-1, 0, 1])
    zgrid = numpy.array([-1, -0.447, 0.447, 1])

    coordinates, connectivity = yt.hexahedral_connectivity(xgrid, ygrid, zgrid)

will define the (x,y,z) coordinates of the hexahedral cells and information about that cell's neighbors such that the cell corners will be a grid of points constructed as the Cartesian product of xgrid, ygrid, and zgrid.

Then, to load your data, which should be defined on the interiors of the hexahedral cells and thus should have the shape ``(len(xgrid)-1, len(ygrid)-1, len(zgrid)-1)``, you can use the following code:

.. code-block:: python

    bbox = numpy.array(
        [
            [numpy.min(xgrid), numpy.max(xgrid)],
            [numpy.min(ygrid), numpy.max(ygrid)],
            [numpy.min(zgrid), numpy.max(zgrid)],
        ]
    )
    data = {"density": arr}
    ds = yt.load_hexahedral_mesh(data, connectivity, coordinates, 1.0, bbox=bbox)

to load your data into the dataset ``ds`` as described above, where we have assumed your data is stored in the three-dimensional array ``arr``.

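For convenience, here is the above assembled into a single self-contained sketch, with random data standing in for ``arr``:

.. code-block:: python

    import numpy as np

    import yt

    xgrid = np.array([-1, -0.65, 0, 0.65, 1])
    ygrid = np.array([-1, 0, 1])
    zgrid = np.array([-1, -0.447, 0.447, 1])
    coordinates, connectivity = yt.hexahedral_connectivity(xgrid, ygrid, zgrid)

    # one (cell-centered) value per hexahedral cell
    arr = np.random.random((xgrid.size - 1, ygrid.size - 1, zgrid.size - 1))
    bbox = np.array(
        [
            [xgrid.min(), xgrid.max()],
            [ygrid.min(), ygrid.max()],
            [zgrid.min(), zgrid.max()],
        ]
    )
    ds = yt.load_hexahedral_mesh(
        {"density": arr}, connectivity, coordinates, 1.0, bbox=bbox
    )
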
.. rubric:: Caveats

* Integration is not implemented.
* Some functions may behave oddly or not work at all.
* Data must already reside in memory.

.. _loading-stretched-grids:

Stretched Grid Data
-------------------

.. warning::

    API consistency for loading stretched grids is not guaranteed until at least yt 4.2! There may be changes in between then and now, as this is a preliminary feature.

With version 4.1, yt has the ability to specify cell widths for grids. This allows situations where a grid has a functional form for cell widths, or where widths are provided in advance.

.. note::

    At present, stretched grids are restricted to a single level of refinement. Future versions of yt will have more complete and flexible support!

To load a stretched grid, you use the standard (and now rather poorly named) ``load_uniform_grid`` function, but supplying a ``cell_widths`` argument. This argument should be a list of three arrays, corresponding to the first, second and third index-direction cell widths. (For instance, in a "standard" cartesian dataset, this would be x, y, z.) This script demonstrates loading a simple "random" dataset with a random set of cell widths:

.. code-block:: python

    import yt
    import numpy as np

    N = 8
    data = {"density": np.random.random((N, N, N))}

    cell_widths = []
    for i in range(3):
        widths = np.random.random(N)
        widths /= widths.sum()  # Normalize to span 0 .. 1.
        cell_widths.append(widths)

    ds = yt.load_uniform_grid(
        data,
        [N, N, N],
        bbox=np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]),
        cell_widths=cell_widths,
    )

This can be modified to load data from a file, as well as to use more (or fewer) cells. Like with a standard uniform grid, providing ``nprocs>1`` will decompose the domain into multiple grids (without refinement).

Unstructured Grid Data
----------------------

See :doc:`Loading_Generic_Array_Data` and :func:`~yt.frontends.stream.data_structures.load_unstructured_mesh` for more detail.

In addition to the above grid types, you can also load data stored on unstructured meshes. This type of mesh is used, for example, in many finite element calculations. Currently, hexahedral and tetrahedral mesh elements are supported.

To load an unstructured mesh, you need to specify the following. First, you need to have a coordinates array, which should be an (L, 3) array that stores the (x, y, z) positions of all of the vertices in the mesh. Second, you need to specify a connectivity array, which describes how those vertices are connected into mesh elements. The connectivity array should be (N, M), where N is the number of elements and M is the connectivity length, i.e. the number of vertices per element. Finally, you must also specify a data dictionary, where the keys should be the names of the fields and the values should be numpy arrays that contain the field data. These arrays can either supply the cell-averaged data for each element, in which case they would be (N, 1), or they can have node-centered data, in which case they would also be (N, M).

Here is an example of how to load an in-memory, unstructured mesh dataset:

.. code-block:: python

    import numpy as np

    import yt

    coords = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]], dtype=np.float64)

    connect = np.array([[0, 1, 3], [1, 2, 3]], dtype=np.int64)

    data = {}
    data["connect1", "test"] = np.array(
        [[0.0, 1.0, 3.0], [1.0, 2.0, 3.0]], dtype=np.float64
    )

Here, we have made up a simple, 2D unstructured mesh dataset consisting of two triangles and one node-centered data field.

This data can be loaded as an in-memory dataset as follows:

.. code-block:: python

    ds = yt.load_unstructured_mesh(connect, coords, data)

The in-memory dataset can then be visualized as usual, e.g.:

.. code-block:: python

    sl = yt.SlicePlot(ds, "z", ("connect1", "test"))
    sl.annotate_mesh_lines()

Note that ``load_unstructured_mesh`` can take either a single mesh or a list of meshes. To load multiple meshes, you can do:

.. code-block:: python

    import numpy as np

    import yt

    coordsMulti = np.array(
        [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]], dtype=np.float64
    )

    connect1 = np.array(
        [
            [0, 1, 3],
        ],
        dtype=np.int64,
    )
    connect2 = np.array(
        [
            [1, 2, 3],
        ],
        dtype=np.int64,
    )

    data1 = {}
    data2 = {}
    data1["connect1", "test"] = np.array(
        [
            [0.0, 1.0, 3.0],
        ],
        dtype=np.float64,
    )
    data2["connect2", "test"] = np.array(
        [
            [1.0, 2.0, 3.0],
        ],
        dtype=np.float64,
    )

    connectList = [connect1, connect2]
    dataList = [data1, data2]

    ds = yt.load_unstructured_mesh(connectList, coordsMulti, dataList)

    # only plot the first mesh
    sl = yt.SlicePlot(ds, "z", ("connect1", "test"))

    # only plot the second
    sl = yt.SlicePlot(ds, "z", ("connect2", "test"))

    # plot both
    sl = yt.SlicePlot(ds, "z", ("all", "test"))

Note that you must respect the field naming convention that fields on the first mesh will have the type ``connect1``, fields on the second will have ``connect2``, and so on.

.. rubric:: Caveats

* Integration is not implemented.
* Some functions may behave oddly or not work at all.
* Data must already reside in memory.

Generic Particle Data
---------------------

.. note::

    For more information about how yt indexes and reads particle data, see the section :ref:`demeshening`.

See :doc:`Loading_Generic_Particle_Data` and :func:`~yt.frontends.stream.data_structures.load_particles` for more detail.

You can also load generic particle data using the same ``stream`` functionality discussed above to load in-memory grid data. For example, if your particle positions and masses are stored in ``positions`` and ``masses``, a vertically-stacked array of particle x, y, and z positions, and a 1D array of particle masses respectively, you would load them like this:

.. code-block:: python

    import yt

    data = dict(particle_position=positions, particle_mass=masses)
    ds = yt.load_particles(data)

You can also load data using 1D x, y, and z position arrays:

.. code-block:: python

    import yt

    data = dict(
        particle_position_x=posx,
        particle_position_y=posy,
        particle_position_z=posz,
        particle_mass=masses,
    )
    ds = yt.load_particles(data)

The ``load_particles`` function also accepts the following keyword parameters:

``length_unit``
    The units used for particle positions.

``mass_unit``
    The units of the particle masses.

``time_unit``
    The units used to represent times. This is optional and is only used if your data contains a ``creation_time`` field or a ``particle_velocity`` field.

``velocity_unit``
    The units used to represent velocities. This is optional and is only used if you supply a velocity field. If this is not supplied, it is inferred from the length and time units.

``bbox``
    The bounding box for the particle positions.

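These can be combined in a single call; for example (a sketch with randomly placed particles and made-up units):

.. code-block:: python

    import numpy as np

    import yt

    n_particles = 10000
    data = dict(
        particle_position_x=np.random.random(n_particles),
        particle_position_y=np.random.random(n_particles),
        particle_position_z=np.random.random(n_particles),
        particle_mass=np.ones(n_particles),
    )
    bbox = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])
    ds = yt.load_particles(
        data, length_unit=(1.0, "Mpc"), mass_unit=(1e10, "Msun"), bbox=bbox
    )
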
.. _smooth-non-sph:

Adding Smoothing Lengths for Non-SPH Particles
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A novel use of the ``load_particles`` function is to facilitate SPH visualization of non-SPH particles. See the example below:

.. code-block:: python

    import yt

    # Load dataset and center on the dense region
    ds = yt.load("FIRE_M12i_ref11/snapshot_600.hdf5")
    _, center = ds.find_max(("PartType0", "density"))

    # Reload DM particles into a stream dataset
    ad = ds.all_data()
    pt = "PartType1"
    fields = ["particle_mass"] + [f"particle_position_{ax}" for ax in "xyz"]
    data = {field: ad[pt, field] for field in fields}
    ds_dm = yt.load_particles(data, data_source=ad)

    # Generate the missing SPH fields
    ds_dm.add_sph_fields()

    # Make the SPH projection plot
    p = yt.ProjectionPlot(ds_dm, "z", ("io", "density"), center=center, width=(1, "Mpc"))
    p.set_unit(("io", "density"), "Msun/kpc**2")
    p.show()

Here we see two new things. First, ``load_particles`` accepts a ``data_source`` argument to infer parameters like code units, which could be tedious to provide otherwise. Second, the returned :class:`~yt.frontends.stream.data_structures.StreamParticleDataset` has an :meth:`~yt.frontends.stream.data_structures.StreamParticleDataset.add_sph_fields` method, to create the ``smoothing_length`` and ``density`` fields required for SPH visualization to work.

.. _loading-gizmo-data:

Gizmo Data
----------

.. note::

    For more information about how yt indexes and reads particle data, see the section :ref:`demeshening`.

Gizmo datasets, including FIRE outputs, can be loaded into yt in the usual manner. Like other SPH data formats, yt loads Gizmo data as particle fields and then uses smoothing kernels to deposit those fields to an underlying grid structure as spatial fields as described in :ref:`loading-gadget-data`. To load Gizmo datasets using the standard HDF5 output format::

    import yt

    ds = yt.load("snapshot_600.hdf5")

Because the Gizmo output format is similar to the Gadget format, yt may load Gizmo datasets as Gadget depending on the circumstances, but this should not pose a problem in most situations. FIRE outputs will be loaded accordingly due to the number of metallicity fields found (11 or 17).

If ``("PartType0", "MagneticField")`` is present in the output, it will be loaded and aliased to ``("PartType0", "particle_magnetic_field")``. The corresponding component fields, like ``("PartType0", "particle_magnetic_field_x")``, will be added automatically.

Note that the ``("PartType4", "StellarFormationTime")`` field has different meanings depending on whether it is a cosmological simulation. For cosmological runs this is the scale factor at the redshift when the star particle formed. For non-cosmological runs it is the time when the star particle formed. (See the `GIZMO User Guide <http://www.tapir.caltech.edu/~phopkins/Site/GIZMO_files/gizmo_documentation.html>`_.) For this reason, ``("PartType4", "StellarFormationTime")`` is loaded as a dimensionless field. We define two related fields, ``("PartType4", "creation_time")`` and ``("PartType4", "age")``, with physical units for your convenience.

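For example (a sketch; it assumes star particles are present as ``PartType4``):

.. code-block:: python

    import yt

    ds = yt.load("snapshot_600.hdf5")
    ad = ds.all_data()

    # ages computed from StellarFormationTime, in physical units
    print(ad["PartType4", "age"].to("Gyr"))
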
For Gizmo outputs written as raw binary outputs, you may have to specify a bounding box, field specification, and units as are done for standard Gadget outputs. See :ref:`loading-gadget-data` for more information.

.. _halo-catalog-data:

Halo Catalog Data
-----------------

.. note::

    For more information about how yt indexes and reads particle data, see the section :ref:`demeshening`.

yt has support for reading halo catalogs produced by AdaptaHOP, Amiga Halo Finder (AHF), Rockstar, and the inline FOF/SUBFIND halo finders of Gadget and OWLS. The halo catalogs are treated as particle datasets where each particle represents a single halo. For example, this means that the ``"particle_mass"`` field refers to the mass of the halos. For Gadget FOF/SUBFIND catalogs, the member particles for a given halo can be accessed by creating ``halo`` data containers. See :ref:`halo_containers` for more information.

If you have access to both the halo catalog and the simulation snapshot from the same redshift, additional analysis can be performed for each halo using :ref:`halo-analysis`. The resulting product can be reloaded in a similar manner to the other halo catalogs shown here.

AdaptaHOP
^^^^^^^^^

`AdaptaHOP `_ halo catalogs are loaded by providing the path to the ``tree_bricksXXX`` file. As the halo catalog does not contain all the information about the simulation (for example the cosmological parameters), you also need to pass the parent dataset for it to load correctly. Some fields of note available from AdaptaHOP are:

+---------------------+---------------------------+
| AdaptaHOP field     | yt field name             |
+=====================+===========================+
| halo id             | particle_identifier       |
+---------------------+---------------------------+
| halo mass           | particle_mass             |
+---------------------+---------------------------+
| virial mass         | virial_mass               |
+---------------------+---------------------------+
| virial radius       | virial_radius             |
+---------------------+---------------------------+
| virial temperature  | virial_temperature        |
+---------------------+---------------------------+
| halo position       | particle_position_(x,y,z) |
+---------------------+---------------------------+
| halo velocity       | particle_velocity_(x,y,z) |
+---------------------+---------------------------+

Numerous other AdaptaHOP fields exist. To see them, check the field list by typing ``ds.field_list`` for a dataset loaded as ``ds``. Like all other datasets, fields must be accessed through :ref:`Data-objects`.

.. code-block:: python

    import yt

    parent_ds = yt.load("output_00080/info_00080.txt")
    ds = yt.load("output_00080_halos/tree_bricks080", parent_ds=parent_ds)
    ad = ds.all_data()

    # halo masses
    print(ad["halos", "particle_mass"])

    # halo radii
    print(ad["halos", "virial_radius"])

Halo Data Containers
""""""""""""""""""""

Halo member particles are accessed by creating halo data containers with the halo id and the type of the particles. Scalar values for halos can be accessed in the same way. Halos also have mass, position, velocity, and member ids attributes.

.. code-block:: python

    halo = ds.halo(1, ptype="io")

    # member particles for this halo
    print(halo.member_ids)

    # masses of the halo particles
    print(halo["io", "particle_mass"])

    # halo mass
    print(halo.mass)

In addition, the halo container contains a sphere container, which is the smallest sphere that contains all of the halo's particles:

.. code-block:: python

    halo = ds.halo(1, ptype="io")
    sp = halo.sphere

    # Density in halo
    sp["gas", "density"]

    # Entropy in halo
    sp["gas", "entropy"]

.. _ahf:

Amiga Halo Finder
^^^^^^^^^^^^^^^^^

Amiga Halo Finder (AHF) halo catalogs are loaded by providing the path to the .parameter files. The corresponding .log and .AHF_halos files must exist for data loading to succeed. The field type for all fields is "halos". Some fields of note available from AHF are:

+----------------+---------------------------+
| AHF field      | yt field name             |
+================+===========================+
| ID             | particle_identifier       |
+----------------+---------------------------+
| Mvir           | particle_mass             |
+----------------+---------------------------+
| Rvir           | virial_radius             |
+----------------+---------------------------+
| (X,Y,Z)c       | particle_position_(x,y,z) |
+----------------+---------------------------+
| V(X,Y,Z)c      | particle_velocity_(x,y,z) |
+----------------+---------------------------+

Numerous other AHF fields exist. To see them, check the field list by typing ``ds.field_list`` for a dataset loaded as ``ds``. Like all other datasets, fields must be accessed through :ref:`Data-objects`.

.. code-block:: python

    import yt

    ds = yt.load("ahf_halos/snap_N64L16_135.parameter", hubble_constant=0.7)
    ad = ds.all_data()

    # halo masses
    print(ad["halos", "particle_mass"])

    # halo radii
    print(ad["halos", "virial_radius"])

.. note::

    Currently the dimensionless Hubble parameter that yt needs is not provided in AHF outputs. So users need to provide the ``hubble_constant`` (which defaults to 1.0) while loading datasets, as shown above.

.. _rockstar:

Rockstar
^^^^^^^^

Rockstar halo catalogs are loaded by providing the path to one of the .bin files. In the case where multiple files were produced, one need only provide the path to a single one of them. The field type for all fields is "halos". Some fields of note available from Rockstar are:

+----------------+---------------------------+
| Rockstar field | yt field name             |
+================+===========================+
| halo id        | particle_identifier       |
+----------------+---------------------------+
| virial mass    | particle_mass             |
+----------------+---------------------------+
| virial radius  | virial_radius             |
+----------------+---------------------------+
| halo position  | particle_position_(x,y,z) |
+----------------+---------------------------+
| halo velocity  | particle_velocity_(x,y,z) |
+----------------+---------------------------+

Numerous other Rockstar fields exist. To see them, check the field list by typing ``ds.field_list`` for a dataset loaded as ``ds``. Like all other datasets, fields must be accessed through :ref:`Data-objects`.

.. code-block:: python

    import yt

    ds = yt.load("rockstar_halos/halos_0.0.bin")
    ad = ds.all_data()

    # halo masses
    print(ad["halos", "particle_mass"])

    # halo radii
    print(ad["halos", "virial_radius"])

.. _gadget_fof:

Gadget FOF/SUBFIND
^^^^^^^^^^^^^^^^^^

Gadget FOF/SUBFIND halo catalogs work in the same way as those created by :ref:`rockstar`, except there are two field types: ``FOF`` for friend-of-friends groups and ``Subhalo`` for halos found with the SUBFIND substructure finder. Also like Rockstar, there are a number of fields specific to these halo catalogs.

+-------------------+---------------------------+
| FOF/SUBFIND field | yt field name             |
+===================+===========================+
| halo id           | particle_identifier       |
+-------------------+---------------------------+
| halo mass         | particle_mass             |
+-------------------+---------------------------+
| halo position     | particle_position_(x,y,z) |
+-------------------+---------------------------+
| halo velocity     | particle_velocity_(x,y,z) |
+-------------------+---------------------------+
| num. of particles | particle_number           |
+-------------------+---------------------------+
| num. of subhalos  | subhalo_number (FOF only) |
+-------------------+---------------------------+

Many other fields exist, especially for SUBFIND subhalos. Check the field list by typing ``ds.field_list`` for a dataset loaded as ``ds``. Like all other datasets, fields must be accessed through :ref:`Data-objects`.

.. code-block:: python

    import yt

    ds = yt.load("gadget_fof_halos/groups_042/fof_subhalo_tab_042.0.hdf5")
    ad = ds.all_data()

    # The halo mass
    print(ad["Group", "particle_mass"])
    print(ad["Subhalo", "particle_mass"])

    # Halo ID
    print(ad["Group", "particle_identifier"])
    print(ad["Subhalo", "particle_identifier"])

    # positions
    print(ad["Group", "particle_position_x"])

    # velocities
    print(ad["Group", "particle_velocity_x"])

Multidimensional fields can be accessed through the field name followed by an underscore and the index:

.. code-block:: python

    # x component of the spin
    print(ad["Subhalo", "SubhaloSpin_0"])

.. _halo_containers:

Halo Data Containers
""""""""""""""""""""

Halo member particles are accessed by creating halo data containers with the type of halo ("Group" or "Subhalo") and the halo id. Scalar values for halos can be accessed in the same way. Halos also have mass, position, and velocity attributes.

.. code-block:: python

    halo = ds.halo("Group", 0)

    # member particles for this halo
    print(halo["member_ids"])

    # halo virial radius
    print(halo["Group_R_Crit200"])

    # halo mass
    print(halo.mass)

Subhalo containers can be created using either their absolute ids or their subhalo ids:

.. code-block:: python

    # first subhalo of the first halo
    subhalo = ds.halo("Subhalo", (0, 0))

    # this subhalo's absolute id
    print(subhalo.group_identifier)

    # member particles
    print(subhalo["member_ids"])

OWLS FOF/SUBFIND
^^^^^^^^^^^^^^^^

OWLS halo catalogs have a very similar structure to regular Gadget halo catalogs. The two field types are ``FOF`` and ``SUBFIND``. See :ref:`gadget_fof` for more information. At this time, halo member particles cannot be loaded.

.. code-block:: python

    import yt

    ds = yt.load("owls_fof_halos/groups_008/group_008.0.hdf5")
    ad = ds.all_data()

    # The halo mass
    print(ad["FOF", "particle_mass"])

.. _halocatalog:

YTHaloCatalog
^^^^^^^^^^^^^

These are catalogs produced by the analysis discussed in :ref:`halo-analysis`. In the case where multiple files were produced, one need only provide the path to a single one of them. The field type for all fields is "halos". The fields available here are similar to other catalogs. Any additional :ref:`halo_catalog_quantities` will also be accessible as fields.

+-------------------+---------------------------+
| HaloCatalog field | yt field name             |
+===================+===========================+
| halo id           | particle_identifier       |
+-------------------+---------------------------+
| virial mass       | particle_mass             |
+-------------------+---------------------------+
| virial radius     | virial_radius             |
+-------------------+---------------------------+
| halo position     | particle_position_(x,y,z) |
+-------------------+---------------------------+
| halo velocity     | particle_velocity_(x,y,z) |
+-------------------+---------------------------+

.. code-block:: python

    import yt

    ds = yt.load("tiny_fof_halos/DD0046/DD0046.0.h5")
    ad = ds.all_data()

    # The halo mass
    print(ad["halos", "particle_mass"])

Halo Data Containers
""""""""""""""""""""

Halo particles can be accessed by creating halo data containers with the type of halo ("halos") and the halo id and then querying the "member_ids" field. Halo containers have mass, radius, position, and velocity attributes.

Additional fields for which there will be one value per halo can be accessed in the same manner as conventional data containers:

.. code-block:: python

    halo = ds.halo("halos", 0)

    # particles for this halo
    print(halo["member_ids"])

    # halo properties
    print(halo.mass, halo.radius, halo.position, halo.velocity)

.. _loading-openpmd-data:

openPMD Data
------------

`openPMD <https://www.openpmd.org>`_ is an open source meta-standard and naming scheme for mesh based data and particle data. It does not actually define a file format. HDF5 containers respecting the minimal set of meta information from versions 1.0.0 and 1.0.1 of the standard are compatible. Support for the ED-PIC extension is not available. Mesh data in cartesian coordinates and particle data can be read by this frontend.

To load the first in-file iteration of an openPMD dataset using the standard HDF5 output format:

.. code-block:: python

    import yt

    ds = yt.load("example-3d/hdf5/data00000100.h5")

If you operate on large files, you may want to modify the virtual chunking behaviour through ``open_pmd_virtual_gridsize``. The supplied value is an estimate of the size (in bytes) of a single read request for each particle attribute/mesh:

.. code-block:: python

    import yt

    ds = yt.load("example-3d/hdf5/data00000100.h5", open_pmd_virtual_gridsize=10e4)
    sp = yt.SlicePlot(ds, "x", ("openPMD", "rho"))
    sp.show()

Particle data is fully supported:

.. code-block:: python

    import yt

    ds = yt.load("example-3d/hdf5/data00000100.h5")
    ad = ds.all_data()
    ppp = yt.ParticlePhasePlot(
        ad,
        ("all", "particle_position_y"),
        ("all", "particle_momentum_y"),
        ("all", "particle_weighting"),
    )
    ppp.show()

.. rubric:: Caveats

* 1D, 2D and 3D data is compatible, but lower dimensional data might yield strange results since it gets padded and treated as 3D. Extraneous dimensions are set to be of length 1.0m and have a width of one cell.
* The frontend has hardcoded logic for renaming the openPMD ``position`` of particles to ``positionCoarse``.

.. _loading-pyne-data:

PyNE Data
---------

`PyNE <https://pyne.io>`_ is an open source nuclear engineering toolkit maintained by the PyNE development team (pyne-dev@googlegroups.com). PyNE meshes utilize the Mesh-Oriented datABase `(MOAB) `_ and can be Cartesian or tetrahedral. In addition to field data, pyne meshes store pyne Material objects which provide a rich set of capabilities for nuclear engineering tasks. PyNE Cartesian (Hex8) meshes are supported by yt.

To create a pyne mesh:

.. code-block:: python

    from numpy import linspace
    from pyne.mesh import Mesh

    num_divisions = 50
    coords = linspace(-1, 1, num_divisions)
    m = Mesh(structured=True, structured_coords=[coords, coords, coords])

Field data can then be added:

.. code-block:: python

    from pyne.mesh import IMeshTag

    m.neutron_flux = IMeshTag()
    # neutron_flux_data is a list or numpy array of size num_divisions^3
    m.neutron_flux[:] = neutron_flux_data

Any field data or material data on the mesh can then be viewed just like any other yt dataset!

.. code-block:: python

    import yt

    pf = yt.frontends.moab.data_structures.PyneMoabHex8Dataset(m)
    s = yt.SlicePlot(pf, "z", "neutron_flux")
    s.show()

.. _loading-ramses-data:

RAMSES Data
-----------

In yt-4.x, RAMSES data is fully supported. If you are interested in taking a development or stewardship role, please contact the yt-dev mailing list.

To load a RAMSES dataset, you can use the ``yt.load`` command and provide it the ``info*.txt`` filename. For instance, if you were in a directory with the following files:

.. code-block:: none

    output_00007
    output_00007/amr_00007.out00001
    output_00007/grav_00007.out00001
    output_00007/hydro_00007.out00001
    output_00007/info_00007.txt
    output_00007/part_00007.out00001

you would feed it the filename ``output_00007/info_00007.txt``:

.. code-block:: python

    import yt

    ds = yt.load("output_00007/info_00007.txt")

yt will attempt to guess the fields in the file. For more control over the hydro fields or the particle fields, see :ref:`loading-ramses-data-args`.

yt also supports the new way particles are handled introduced after version ``stable_17_09`` (the version introduced after the 2017 Ramses User Meeting). In this case, the file ``part_file_descriptor.txt`` containing the different fields in the particle files will be read. If you use a custom version of RAMSES, make sure this file is up-to-date and reflects the true layout of the particles.

yt supports outputs made by the mainline ``RAMSES`` code as well as the ``RAMSES-RT`` fork. Files produced by ``RAMSES-RT`` are recognized as such based on the presence of an ``info_rt_*.txt`` file in the output directory.

.. note::

    For backward compatibility, particles from the ``part_XXXXX.outYYYYY`` files have the particle type ``io`` by default (including dark matter, stars, tracer particles, ...). Sink particles have the particle type ``sink``.

.. _loading-ramses-data-args:

Arguments passed to the load function
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to provide extra arguments to the load function when loading RAMSES datasets. Here is a list of the ones specific to RAMSES:

``fields``
    A list of fields to read from the hydro files. For example, in a pure hydro simulation with an extra custom field named ``my-awesome-field``, one would specify the fields argument following this example:

    .. code-block:: python

        import yt

        fields = [
            "Density",
            "x-velocity",
            "y-velocity",
            "z-velocity",
            "Pressure",
            "my-awesome-field",
        ]
        ds = yt.load("output_00123/info_00123.txt", fields=fields)
        "my-awesome-field" in ds.field_list  # is True

``extra_particle_fields``
    A list of tuples describing extra particle fields to read in. By default, yt will try to detect as many fields as possible, assuming the extra ones to be double precision floats. This argument is useful if you have extra fields besides the particle mass, position, and velocity fields that yt cannot detect automatically. For example, for a dataset containing two extra particle integer fields named ``family`` and ``info``, one would do:

    .. code-block:: python

        import yt

        extra_fields = [("family", "I"), ("info", "I")]
        ds = yt.load("output_00001/info_00001.txt", extra_particle_fields=extra_fields)
        # ('all', 'family') and ('all', 'info') now in ds.field_list

    The format of the ``extra_particle_fields`` argument is as follows: ``[('field_name_1', 'type_1'), ..., ('field_name_n', 'type_n')]``, where the second element of each tuple follows the `Python struct format convention <https://docs.python.org/3/library/struct.html#format-characters>`_.

    Note that if ``extra_particle_fields`` is defined, yt will not assume that the ``particle_birth_time`` and ``particle_metallicity`` fields are present in the dataset. If these fields are present, they must be explicitly enumerated in the ``extra_particle_fields`` argument.

``cosmological``
    Force yt to consider a simulation to be cosmological or not. This may be useful for some specific simulations, e.g., ones that run down to negative redshifts.

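    For instance (a sketch; the output name is a placeholder):

    .. code-block:: python

        import yt

        ds = yt.load("output_00001/info_00001.txt", cosmological=False)
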
    This is especially useful for large simulations or zoom-in simulations,
    where you don't want to have access to data outside of a small region of
    interest. This argument will prevent yt from loading AMR files outside
    the subbox and will hence spare memory and time. For example, one could
    use

    .. code-block:: python

       import yt

       # Only load a small cube of size (0.1)**3
       bbox = [[0.0, 0.0, 0.0], [0.1, 0.1, 0.1]]
       ds = yt.load("output_00001/info_00001.txt", bbox=bbox)

       # See the note below for the following examples
       ds.domain_right_edge == [1, 1, 1]  # is True

       ad = ds.all_data()
       ad["all", "particle_position_x"].max() > 0.1  # _may_ be True

       bb = ds.box(left_edge=bbox[0], right_edge=bbox[1])
       bb["all", "particle_position_x"].max() < 0.1  # is True

    .. note::

       When using the bbox argument, yt will read all the CPUs intersecting
       with the subbox. However, it may also read some data *outside* the
       selected region. This is due to the fact that domains have a
       complicated shape when using Hilbert ordering. Internally, yt will
       hence assume the loaded dataset covers the entire simulation. If you
       only want the data from the selected region, you may want to use
       ``ds.box(...)``.

    .. note::

       The ``bbox`` feature is only available for datasets using Hilbert
       ordering.

``max_level, max_level_convention``
    This will set the deepest level to be read from file. Both arguments
    have to be set, where the convention can be either "ramses" or "yt". In
    the "ramses" convention, levels go from 1 (the root grid) to levelmax,
    such that the finest cells have a size of ``boxsize/2**levelmax``. In
    the "yt" convention, levels are numbered from 0 (the coarsest uniform
    grid at RAMSES' ``levelmin``) to ``max_level``, such that the finest
    cells are ``2**max_level`` smaller than the coarsest.

    .. code-block:: python

       import yt

       # Assuming RAMSES' levelmin=6, i.e. the structure is full
       # down to levelmin=6
       ds_all = yt.load("output_00080/info_00080.txt")
       ds_yt = yt.load(
           "output_00080/info_00080.txt", max_level=2, max_level_convention="yt"
       )
       ds_ramses = yt.load(
           "output_00080/info_00080.txt",
           max_level=8,
           max_level_convention="ramses",
       )

       any(ds_all.r["index", "grid_level"] > 2)  # True
       all(ds_yt.r["index", "grid_level"] <= 2)  # True
       all(ds_ramses.r["index", "grid_level"] <= 2)  # True

Adding custom particle fields
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There are three ways to make yt detect all the particle fields. For example,
if you wish to make yt detect the birth time and metallicity of your
particles, use one of the following methods:

1. ``yt.load`` method. Whenever loading a dataset, add the extra particle
   fields as a keyword argument to the ``yt.load`` call.

   .. code-block:: python

      import yt

      epf = [("particle_birth_time", "d"), ("particle_metallicity", "d")]
      ds = yt.load("dataset", extra_particle_fields=epf)
      ("io", "particle_birth_time") in ds.derived_field_list  # is True
      ("io", "particle_metallicity") in ds.derived_field_list  # is True

2. yt config method. If you don't want to pass the arguments for each call
   of ``yt.load``, you can add the following to your configuration

   .. code-block:: none

      [ramses-particles]
      fields = """
         particle_position_x, d
         particle_position_y, d
         particle_position_z, d
         particle_velocity_x, d
         particle_velocity_y, d
         particle_velocity_z, d
         particle_mass, d
         particle_identifier, i
         particle_refinement_level, I
         particle_birth_time, d
         particle_metallicity, d
      """

   Each line should contain the name of the field and its data type (``d``
   for double precision, ``f`` for single precision, ``i`` for integer and
   ``l`` for long integer).
   You can also configure the auto-detected fields for fluid types by adding
   a section ``ramses-hydro``, ``ramses-grav`` or ``ramses-rt`` in the
   config file. For example, if you customized your gravity files so that
   they contain the potential, the potential in the previous timestep and
   the x, y and z accelerations, you can use:

   .. code-block:: none

      [ramses-grav]
      fields = [
         "Potential",
         "Potential-old",
         "x-acceleration",
         "y-acceleration",
         "z-acceleration"
      ]

3. New RAMSES way. Recent versions of RAMSES automatically write in their
   output a ``hydro_file_descriptor.txt`` file that gives information about
   which field is where. If you wish, you can simply create such a file in
   the folder containing the ``info_xxxxx.txt`` file

   .. code-block:: none

      # version: 1
      # ivar, variable_name, variable_type
      1, position_x, d
      2, position_y, d
      3, position_z, d
      4, velocity_x, d
      5, velocity_y, d
      6, velocity_z, d
      7, mass, d
      8, identity, i
      9, levelp, i
      10, birth_time, d
      11, metallicity, d

   It is important to note that this file should not end with an empty line;
   here, the last line should be ``11, metallicity, d``.

   .. note::

      The kind (``i``, ``d``, ``I``, ...) of each field follows the `python
      convention `_.

Customizing the particle type association
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In versions of RAMSES more recent than December 2017, particles carry along
a ``family`` array. The value of this array gives the kind of the particle,
e.g. 1 for dark matter. It is possible to customize the association between
particle type and family by customizing the yt config (see
:ref:`configuration-file`), adding

.. code-block:: none

   [ramses-families]
   gas_tracer = 100
   star_tracer = 101
   dm = 0
   star = 1

Particle ages and formation times
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For non-cosmological simulations, particle ages are stored in physical units
on disk. To access the birth time for the particles, use the
``particle_birth_time`` field. The time recorded in this field is relative
to the beginning of the simulation. Particles that were present in the
initial conditions will have negative values for ``particle_birth_time``.

For cosmological simulations that include star particles, RAMSES stores
particle formation times as conformal times. To access the formation time
field data in conformal units, use the ``conformal_birth_time`` field. This
will return the formation times of particles in the simulation in conformal
units as a dimensionless array. To access the formation time in physical
units, use the ``particle_birth_time`` field. Finally, to access the ages of
star particles in your simulation, use the ``star_age`` field. Note that
this field is defined for all particle types but will only make sense for
star particles.

For simulations conducted in Newtonian coordinates, with no cosmology or
comoving expansion, the time is equal to zero at the beginning of the
simulation. That means that particles present in the initial conditions may
have negative birth times. This can happen, for example, in idealized
isolated galaxy simulations, where star particles are included in the
initial conditions. For simulations conducted in cosmological comoving
units, the time is equal to zero at the big bang, and all particles should
have positive values for the ``particle_birth_time`` field.
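For example, here is a hedged sketch of accessing these fields, assuming a
cosmological RAMSES output that contains a ``star`` particle type (the
output path is hypothetical):

.. code-block:: python

   import yt

   ds = yt.load("output_00080/info_00080.txt")
   ad = ds.all_data()

   # dimensionless conformal formation times (cosmological runs only)
   print(ad["star", "conformal_birth_time"])
   # physical formation times and ages
   print(ad["star", "particle_birth_time"].to("Gyr"))
   print(ad["star", "star_age"].to("Myr"))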
To help clarify the above discussion, the following table describes the
meaning of the various particle formation time and age fields:

+------------------+--------------------------+--------------------------------+
| Simulation type  | Field name               | Description                    |
+==================+==========================+================================+
| cosmological     | ``conformal_birth_time`` | Formation time in conformal    |
|                  |                          | units (dimensionless)          |
+------------------+--------------------------+--------------------------------+
| any              | ``particle_birth_time``  | The time relative to the       |
|                  |                          | beginning of the simulation    |
|                  |                          | when the particle was formed.  |
|                  |                          | For non-cosmological           |
|                  |                          | simulations, this field will   |
|                  |                          | have positive values for       |
|                  |                          | particles formed during the    |
|                  |                          | simulation and negative for    |
|                  |                          | particles of finite age in the |
|                  |                          | initial conditions. For        |
|                  |                          | cosmological simulations this  |
|                  |                          | is the time the particle       |
|                  |                          | formed relative to the big     |
|                  |                          | bang, therefore the value of   |
|                  |                          | this field should be between   |
|                  |                          | 0 and 13.7 Gyr.                |
+------------------+--------------------------+--------------------------------+
| any              | ``star_age``             | Age of the particle.           |
|                  |                          | Only physically meaningful for |
|                  |                          | stars and particles that       |
|                  |                          | formed dynamically during the  |
|                  |                          | simulation.                    |
+------------------+--------------------------+--------------------------------+

RAMSES datasets produced by a version of the code newer than November 2017
contain the metadata necessary for yt to automatically distinguish between
star particles and other particle types. If you are working with a dataset
produced by a version of RAMSES older than November 2017, yt will only
automatically recognize a single particle type, ``io``. It may be convenient
to define a particle filter in your scripts to distinguish between particles
present in the initial conditions and particles that formed dynamically
during the simulation by filtering particles with ``"conformal_birth_time"``
values equal to zero and not equal to zero. An example particle filter
definition for dynamically formed stars might look like this:

.. code-block:: python

   @yt.particle_filter(requires=["conformal_birth_time"], filtered_type="io")
   def stars(pfilter, data):
       filter = data[pfilter.filtered_type, "conformal_birth_time"] != 0
       return filter

For a cosmological simulation, this filter will distinguish between stars
and dark matter particles.

.. _loading-sph-data:

SPH Particle Data
-----------------

.. note::

   For more information about how yt indexes and reads particle data, see
   the section :ref:`demeshening`.

For all of the SPH frontends, yt uses Cython-based SPH smoothing onto an
in-memory octree to create deposited mesh fields from individual SPH
particle fields. This uses a standard M4 smoothing kernel and the
``smoothing_length`` field to calculate SPH sums, filling in the mesh
fields. This gives you the ability both to track individual particles
(useful for tasks like following contiguous clouds of gas that would require
a clump finder in grid data) and to do standard grid-based analysis (i.e.
slices, projections, and profiles).

The ``smoothing_length`` variable is also useful for determining which
particles can interact with each other, since particles more distant than
twice the smoothing length do not typically see each other in SPH
simulations. By changing the value of the ``smoothing_length`` and then
re-depositing particles onto the grid, you can also effectively mimic what
your data would look like at lower resolution.
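For instance, SPH particle fields can be deposited onto a fixed resolution
grid with an arbitrary grid object. Here is a minimal sketch, assuming the
``GadgetDiskGalaxy`` sample snapshot:

.. code-block:: python

   import yt

   ds = yt.load("GadgetDiskGalaxy/snapshot_200.hdf5")
   # smooth the SPH gas density onto a 128^3 fixed-resolution grid
   ag = ds.arbitrary_grid(
       ds.domain_left_edge, ds.domain_right_edge, dims=[128, 128, 128]
   )
   print(ag["gas", "density"].shape)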
.. _loading-tipsy-data:

Tipsy Data
----------

.. note::

   For more information about how yt indexes and reads particle data, see
   the section :ref:`demeshening`.

See :doc:`../cookbook/tipsy_and_yt` and :ref:`loading-sph-data` for more
details.

yt also supports loading Tipsy data. Many of its characteristics are similar
to how Gadget data is loaded.

.. code-block:: python

   import yt

   ds = yt.load("./halo1e11_run1.00400")

.. _specifying-cosmology-tipsy:

Specifying Tipsy Cosmological Parameters and Setting Default Units
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Cosmological parameters can be specified to Tipsy to enable computation of
default units. For example, to load a Tipsy dataset whose path is stored in
the variable ``my_filename`` with specified cosmology parameters, do the
following:

.. code-block:: python

   import yt

   cosmology_parameters = {
       "current_redshift": 0.0,
       "omega_lambda": 0.728,
       "omega_matter": 0.272,
       "hubble_constant": 0.702,
   }

   ds = yt.load(my_filename, cosmology_parameters=cosmology_parameters)

If you wish to set the unit system directly, you can do so by using the
``unit_base`` keyword in the load statement.

.. code-block:: python

   import yt

   ds = yt.load(filename, unit_base={"length": (1.0, "Mpc")})

See the documentation for the
:class:`~yt.frontends.tipsy.data_structures.TipsyDataset` class for more
information.

Loading Cosmological Simulations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you are not using a parameter file (i.e. non-Gasoline users), then you
must use the keyword ``cosmology_parameters`` when loading your data set to
indicate to yt that it is a cosmological data set. If you do not wish to set
any non-default cosmological parameters, you may pass an empty dictionary.

.. code-block:: python

   import yt

   ds = yt.load(filename, cosmology_parameters={})

.. _loading-cfradial-data:

CfRadial Data
-------------

Cf/Radial is a CF-compliant netCDF convention for radial data from radar and
lidar platforms that supports both airborne and ground-based sensors.
Because of its CF compliance, CfRadial allows researchers familiar with CF
to read the data into a wide variety of analysis tools, models, etc. For
more, see `CfRadialDoc.v1.4.20160801.pdf
<https://github.com/NCAR/CfRadial/blob/d4562a995d0589cea41f4f6a4165728077c9fc9b/docs/CfRadialDoc.v1.4.20160801.pdf>`_.

yt provides support for loading Cartesian-gridded CfRadial netCDF-4 files as
well as polar coordinate CfRadial netCDF-4 files. When loading a standard
CfRadial dataset in polar coordinates, yt will first build a sample on a
cartesian grid (see :ref:`cfradial_gridding`).

To load a CfRadial data file:

.. code-block:: python

   import yt

   ds = yt.load("CfRadialGrid/grid1.nc")

.. _cfradial_gridding:

Gridding Behavior
^^^^^^^^^^^^^^^^^

When you load a CfRadial dataset in polar coordinates (elevation, azimuth
and range), yt will first build a sample by mapping the data onto a
cartesian grid using the Python-ARM Radar Toolkit (`pyart `_). Grid points
are found by interpolation of all data points within a specified radius of
influence. This data, now in the x, y, z coordinate domain, is then saved as
a new dataset, and subsequent loads of the original native CfRadial dataset
will use the gridded file. Mapping the data from spherical to Cartesian
coordinates is useful for 3D volume rendering the data using yt.
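As a quick sanity check after loading, you can inspect what was read; a
minimal sketch (the available fields depend on the radar moments stored in
the file):

.. code-block:: python

   import yt

   ds = yt.load("CfRadialGrid/grid1.nc")
   # list the radar fields that were mapped onto the cartesian grid
   print(ds.field_list)
   print(ds.domain_left_edge, ds.domain_right_edge)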
See the documentation for the
:class:`~yt.frontends.cf_radial.data_structures.CFRadialDataset` class for a
description of how to adjust the gridding parameters and storage of the
gridded file.

.. _low-level-data-inspection:

Low-Level Data Inspection: Accessing Raw Data
=============================================

yt can not only provide high-level access to data, such as through slices,
projections, object queries and the like, but it can also provide low-level
access to the raw data.

.. note::

   This section is tuned for patch- or block-based simulations. Future
   versions of yt will enable more direct access to particle and oct based
   simulations. For now, these are represented as patches, with the
   attendant properties.

For a more basic introduction, see :ref:`quickstart` and more specifically
:doc:`../quickstart/2)_Data_Inspection`.

.. _examining-grid-hierarchies:

Examining Grid Hierarchies
--------------------------

yt organizes grids in a hierarchical fashion; a coarser grid that contains
(or overlaps with) a finer grid is referred to as its parent. yt organizes
these only a single level of refinement at a time. To access grids, use the
``grids`` attribute on a
:class:`~yt.geometry.grid_geometry_handler.GridIndex` object. (For fast
operations, a number of additional arrays prefixed with ``grid`` are also
available, such as ``grid_left_edges`` and so on.) This returns an instance
of :class:`~yt.data_objects.grid_patch.AMRGridPatch`, which can be queried
for either data or index information. The
:class:`~yt.data_objects.grid_patch.AMRGridPatch` object itself provides the
following attributes:

* ``Children``: a list of grids contained within this one, of one higher
  level of refinement
* ``Parent``: a single object or a list of objects this grid is contained
  within, one level of refinement coarser
* ``child_mask``: a mask of 0's and 1's, representing where no finer data is
  available in refined grids (1) or where this grid is covered by finer
  regions (0). Note that to get back the final data contained within a
  grid, one can multiply a field by this attribute.
* ``child_indices``: a mask of booleans, where False indicates no finer data
  is available. This is essentially the inverse of ``child_mask``.
* ``child_index_mask``: a mask of indices into the ``ds.index.grids`` array
  of the child grids.
* ``LeftEdge``: the left edge, in native code coordinates, of this grid
* ``RightEdge``: the right edge, in native code coordinates, of this grid
* ``dds``: the width of a cell in this grid
* ``id``: the id (not necessarily the index) of this grid. Defined such that
  subtracting the property ``_id_offset`` gives the index into
  ``ds.index.grids``.
* ``NumberOfParticles``: the number of particles in this grid
* ``OverlappingSiblings``: a list of sibling grids that this grid overlaps
  with. Likely only defined for Octree-based codes.

In addition, the method
:meth:`~yt.data_objects.grid_patch.AMRGridPatch.get_global_startindex` can
be used to get the integer coordinates of the upper left edge. These
integer coordinates are defined with respect to the current level; this
means that they are the offset of the left edge, with respect to the left
edge of the domain, divided by the local ``dds``.
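For instance, here is a minimal sketch that uses these attributes to zero
out the zones of a grid that are covered by finer data, via ``child_mask``
(assuming the ``Enzo_64`` sample dataset used later in this section):

.. code-block:: python

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")
   g = ds.index.grids[0]
   # zero out zones that are covered by finer grids
   density = g["gas", "density"] * g.child_mask
   print(g.LeftEdge, g.RightEdge, g.dds)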
To traverse a series of grids, this type of construction can be used:

.. code-block:: python

   g = ds.index.grids[1043]
   g2 = g.Children[1].Children[0]
   print(g2.LeftEdge)

.. _examining-grid-data:

Examining Grid Data
-------------------

Once you have identified a grid you wish to inspect, there are two ways to
examine data. You can either ask the grid to read the data and pass it to
you as normal, or you can manually intercept the data from the IO handler
and examine it before it has been unit converted. This allows for much more
raw data inspection.

To access data that has been read in the typical fashion and unit-converted
as normal, you can access the grid as you would a normal object:

.. code-block:: python

   g = ds.index.grids[1043]
   print(g["gas", "density"])
   print(g["gas", "density"].min())

To access the raw data (as found in the file), use

.. code-block:: python

   g = ds.index.grids[1043]
   rho = g["gas", "density"].in_base("code")

.. _finding-data-at-fixed-points:

Finding Data at Fixed Points
----------------------------

One of the most common questions asked of data is, what is the value *at
this specific point*. While there are several ways to find out the answer
to this question, a few helper routines are provided as well. To identify
the finest-resolution (i.e., most canonical) data at a given point, use the
point data object::

   from yt.units import kpc

   point_obj = ds.point([30, 75, 80] * kpc)
   density_at_point = point_obj["gas", "density"]

The point data object works just like any other yt data object. It is
special because it is the only zero-dimensional data object: it will only
return data at the exact point specified when creating the point data
object. For more information about yt data objects, see :ref:`Data-objects`.

If you need to find field values at many points, the
:meth:`~yt.data_objects.static_output.Dataset.find_field_values_at_points`
function may be more efficient. This function returns a nested list of
field values at multiple points in the simulation volume. For example, if
one wanted to find the value of a mesh field at the location of the
particles in a simulation, one could do::

   ad = ds.all_data()
   ppos = ad["all", "particle_position"]
   ppos_den_vel = ds.find_field_values_at_points(
       [("gas", "density"), ("gas", "velocity_x")], ppos
   )

In this example, ``ppos_den_vel`` will be a list of arrays. The first array
will contain the density values at the particle positions, the second will
contain the x velocity values at the particle positions.

.. _examining-grid-data-in-a-fixed-resolution-array:

Examining Grid Data in a Fixed Resolution Array
-----------------------------------------------

If you have a dataset, either AMR or single resolution, and you want to
just stick it into a fixed resolution numpy array for later examination,
then you want to use a :ref:`Covering Grid `. You must specify the maximum
level at which to sample the data, a left edge of the data where you will
start, and the resolution at which you want to sample.

For example, let's use the :ref:`sample dataset ` ``Enzo_64``. This dataset
is at a resolution of 64^3 with 5 levels of AMR, so if we want a 64^3 array
covering the entire volume and sampling just the lowest level data, we run:

.. code-block:: python

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")
   all_data_level_0 = ds.covering_grid(
       level=0, left_edge=[0, 0.0, 0.0], dims=[64, 64, 64]
   )

Note that we can also get the same result and rely on the dataset to know
its own underlying dimensions:
.. code-block:: python

   all_data_level_0 = ds.covering_grid(
       level=0, left_edge=[0, 0.0, 0.0], dims=ds.domain_dimensions
   )

We can now access our underlying data at the lowest level by specifying
what :ref:`field ` we want to examine:

.. code-block:: python

   print(all_data_level_0["gas", "density"].shape)
   # (64, 64, 64)

   print(all_data_level_0["gas", "density"])
   # array([[[ 1.92588925e-31, 1.74647692e-31, 2.54787518e-31, ...,

   print(all_data_level_0["gas", "temperature"].shape)
   # (64, 64, 64)

If you create a covering grid that spans two child grids of a single parent
grid, it will fill those zones covered by a zone of a child grid with the
data from that child grid. Where it is covered only by the parent grid, the
cells from the parent grid will be duplicated (appropriately) to fill the
covering grid.

Let's say we now want to look at that entire data volume and sample it at a
higher resolution (i.e. level 2). As stated above, we'll be oversampling
under-refined regions, but that's OK. We must also increase the resolution
of our output array by a factor of 2^2 in each direction to hold this new
larger dataset:

.. code-block:: python

   all_data_level_2 = ds.covering_grid(
       level=2, left_edge=[0, 0.0, 0.0], dims=ds.domain_dimensions * 2**2
   )

And let's see what's the density in the central location:

.. code-block:: python

   print(all_data_level_2["gas", "density"].shape)
   # (256, 256, 256)

   print(all_data_level_2["gas", "density"][128, 128, 128])
   # 1.7747457571203124e-31

There are two different types of covering grids: unsmoothed and smoothed.
Smoothed grids will be filled through a cascading interpolation process;
they will be filled at level 0, interpolated to level 1, filled at level 1,
interpolated to level 2, filled at level 2, etc. This will help to reduce
edge effects. Unsmoothed covering grids will not be interpolated, but
rather values will be duplicated multiple times.

To sample our dataset from above with a smoothed covering grid in order to
reduce edge effects, it is a nearly identical process:

.. code-block:: python

   all_data_level_2_s = ds.smoothed_covering_grid(
       2, [0.0, 0.0, 0.0], ds.domain_dimensions * 2**2
   )

   print(all_data_level_2_s["gas", "density"].shape)
   # (256, 256, 256)

   print(all_data_level_2_s["gas", "density"][128, 128, 128])
   # 1.763744852165591e-31

Covering grids can also accept a ``data_source`` argument, in which case
only the cells of the covering grid that are contained by the
``data_source`` will be filled. This can be useful to create regularized
arrays of more complex geometries. For example, if we provide a sphere, we
see that the covering grid shape is the same, but the number of cells with
data is less:

.. code-block:: python

   sp = ds.sphere(ds.domain_center, (0.25, "code_length"))
   cg_sp = ds.covering_grid(
       level=0, left_edge=[0, 0.0, 0.0], dims=ds.domain_dimensions, data_source=sp
   )

   print(cg_sp["gas", "density"].shape)
   # (64, 64, 64)

   print(cg_sp["gas", "density"].size)
   # 262144

   print(cg_sp["gas", "density"][cg_sp["gas", "density"] != 0].size)
   # 17256

The ``data_source`` can be any :ref:`3D Data Container `. Also note that
the ``data_source`` argument is only available for the ``covering_grid`` at
present (not the ``smoothed_covering_grid``).

.. _examining-image-data-in-a-fixed-resolution-array:

Examining Image Data in a Fixed Resolution Array
------------------------------------------------

In the same way that one can sample a multi-resolution 3D dataset by
placing it into a fixed resolution 3D array as a :ref:`Covering Grid `, one
can also access the raw image data that is returned from various yt
functions directly as a fixed resolution array. This provides a means for
bypassing the yt method for generating plots, and allows the user the
freedom to use whatever interface they wish for displaying and saving their
image data. You can use the
:class:`~yt.visualization.fixed_resolution.FixedResolutionBuffer` to
accomplish this as described in :ref:`fixed-resolution-buffers`.
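For instance, here is a minimal sketch of pulling a 2D array out of a slice
through a fixed resolution buffer (assuming the ``Enzo_64`` sample dataset
from above):

.. code-block:: python

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")
   slc = ds.slice("z", 0.5)
   # sample the slice onto an 800x800 fixed resolution buffer
   frb = slc.to_frb((1.0, "unitary"), 800)
   density_image = frb["gas", "density"]  # a 2D array with units
   print(density_image.shape)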
High-level Information about Particles
--------------------------------------

There are a number of high-level helpers attached to ``Dataset`` objects to
find out information about the particles in an output file. First, one can
check if there are any particles in a dataset at all by examining
``ds.particles_exist``. This will be ``True`` for datasets that include
particles and ``False`` otherwise.

One can also see which particle types are available in a dataset. Particle
types that are available in the dataset's on-disk output are known as "raw"
particle types, and they will appear in ``ds.particle_types_raw``. Particle
types that are dynamically defined via a particle filter or a particle
union will also appear in the ``ds.particle_types`` list. If the simulation
only has one particle type on-disk, its name will be ``'io'``. If there is
more than one particle type, the names of the particle types will be
inferred from the output file. For example, Gadget HDF5 files have particle
type names like ``PartType0`` and ``PartType1``, while Enzo data, which
usually only has one particle type, will only have a particle named ``io``.

Finally, one can see the number of each particle type by inspecting
``ds.particle_type_counts``. This will be a dictionary mapping the names of
particle types in ``ds.particle_types_raw`` to the number of each particle
type in a simulation output.
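A minimal sketch of these helpers in action (assuming a Gadget HDF5
snapshot; the exact type names and counts depend on the dataset):

.. code-block:: python

   import yt

   ds = yt.load("GadgetDiskGalaxy/snapshot_200.hdf5")
   print(ds.particles_exist)  # True if the output contains particles
   print(ds.particle_types_raw)  # on-disk types, e.g. ('PartType0', ...)
   print(ds.particle_type_counts)  # a dict mapping type names to counts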
.. _faq:

Frequently Asked Questions
==========================

.. contents::
   :depth: 2
   :local:
   :backlinks: none

Version & Installation
----------------------

.. _determining-version:

How can I tell what version of yt I'm using?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you run into problems with yt and you're writing to the mailing list or
contacting developers on Slack, they will likely want to know what version
of yt you're using. Oftentimes, you'll want to know both the yt version as
well as the last changeset that was committed to the branch you're using.
To reveal this, go to a command line and type:

.. code-block:: bash

   $ yt version

The result will look something like this:

.. code-block:: bash

   yt module located at:
       /Users/mitchell/src/yt-conda/src/yt-git
   The current version of yt is:
   ---
   Version = 4.0.dev0
   Changeset = 9f947a930ab4
   ---
   This installation CAN be automatically updated.

For more information on this topic, see :ref:`updating`.

.. _yt-3.0-problems:

I upgraded to yt 4.0 but my code no longer works. What do I do?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

We've tried to keep the number of backward-incompatible changes to a
minimum with the release of yt-4.0, but because of the wide-reaching
changes to how yt manages data, there may be updates you have to make. You
can see many of the changes in :ref:`yt4differences`, and in
:ref:`transitioning-to-4.0` there are helpful tips on how to modify your
scripts to update them.

Code Errors and Failures
------------------------

Python fails saying that it cannot import yt modules
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is commonly exhibited with an error about not being able to import
code that is part of yt. This is likely because the code that is failing to
import needs to be compiled or recompiled. This error tends to occur when
there are changes in the underlying Cython files that need to be rebuilt,
like after a major code update or when switching between distant branches.
This is solved by running the install command again. See
:ref:`install-from-source`.

.. _faq-mpi4py:

yt complains that it needs the mpi4py module
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For yt to be able to incorporate parallelism in any of its analysis (see
:ref:`parallel-computation`), it needs to be able to use MPI libraries.
This requires the ``mpi4py`` module to be installed in your version of
python. Unfortunately, installation of ``mpi4py`` is *just* tricky enough
to elude the yt batch installer. So if you get an error in yt complaining
about mpi4py like:

.. code-block:: bash

   ImportError: No module named mpi4py

then you should install ``mpi4py``. The easiest way to install it is
through the pip interface. At the command line, type:

.. code-block:: bash

   $ python -m pip install mpi4py

This finds your default installation of Python and installs the mpi4py
module in it. If this action is successful, you should never have to worry
about your aforementioned problems again. If, on the other hand, this
installation fails (as it does on such machines as NICS Kraken, NASA
Pleiades and more), then you will have to take matters into your own hands.
Usually when it fails, it is due to pip being unable to find your MPI C/C++
compilers (look at the error message). If this is the case, you can specify
them explicitly as per:

.. code-block:: bash

   $ env MPICC=/path/to/MPICC python -m pip install mpi4py

So for example, on Kraken, I switch to the gnu C compilers (because yt
doesn't work with the portland group C compilers), then I discover that cc
is the mpi-enabled C compiler (and it is in my path), so I run:

.. code-block:: bash

   $ module swap PrgEnv-pgi PrgEnv-gnu
   $ env MPICC=cc python -m pip install mpi4py

And voila! It installs! If this *still* fails for you, then you can build
and install from source and specify the mpi-enabled c and c++ compilers in
the mpi.cfg file. See the `mpi4py installation page `_ for details.

Units
-----

.. _conversion-factors:

How do I convert between code units and physical units for my dataset?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Starting with yt-3.0, and continuing to yt-4.0, yt uses an internal
symbolic unit system. In yt-3.0 this was bundled with the main yt codebase,
and with yt-4.0 it is now available as a separate package called `unyt `_.
Conversion factors are tied up in the ``length_unit``, ``time_unit``,
``mass_unit``, and ``velocity_unit`` attributes, which can be converted to
any arbitrary desired physical unit:

.. code-block:: python

   print("Length unit: ", ds.length_unit)
   print("Time unit: ", ds.time_unit)
   print("Mass unit: ", ds.mass_unit)
   print("Velocity unit: ", ds.velocity_unit)

   print("Length unit: ", ds.length_unit.in_units("code_length"))
   print("Time unit: ", ds.time_unit.in_units("code_time"))
   print("Mass unit: ", ds.mass_unit.in_units("kg"))
   print("Velocity unit: ", ds.velocity_unit.in_units("Mpc/year"))

So to accomplish the example task of converting a scalar variable ``x`` in
code units to kpc in yt-4.0, you can do one of two things. If ``x`` is
already a YTQuantity with units in ``code_length``, you can run:

.. code-block:: python

   x.in_units("kpc")

However, if ``x`` is just a numpy array or native python variable without
units, you can convert it to a YTQuantity with units of ``kpc`` by running:

.. code-block:: python

   x = x * ds.length_unit.in_units("kpc")

For more information about unit conversion, see :ref:`units`.

How do I make a YTQuantity tied to a specific dataset's units?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you want to create a variable or array that is tied to a particular
dataset (and its specific conversion factor to code units), use the
``ds.quan`` (for individual variables) and ``ds.arr`` (for arrays) methods:

.. code-block:: python

   import yt

   ds = yt.load(filename)
   one_Mpc = ds.quan(1, "Mpc")
   x_vector = ds.arr([1, 0, 0], "code_length")

You can then naturally exploit the units system:

.. code-block:: python

   print("One Mpc in code_units:", one_Mpc.in_units("code_length"))
   print("One Mpc in AU:", one_Mpc.in_units("AU"))
   print("One Mpc in comoving kpc:", one_Mpc.in_units("kpccm"))

For more information about unit conversion, see :ref:`units`.

.. _accessing-unitless-data:

How do I access the unitless data in a YTQuantity or YTArray?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

While there are numerous benefits to having units tied to individual
quantities in yt, they can also produce issues when simply trying to
combine YTQuantities with numpy arrays or native python floats that lack
units. A simple example of this is::

   # Create a YTQuantity that is 1 kpc in length and tied to the units of
   # dataset ds
   >>> x = ds.quan(1, 'kpc')

   # Try to add this to some non-dimensional quantity
   >>> print(x + 1)

   YTUnitOperationError: The addition operator for YTArrays with units
   (kpc) and (1) is not well defined.

The solution to this means using the YTQuantity and YTArray objects for all
of one's computations, but this isn't always feasible. A quick fix for this
is to just grab the unitless data out of a YTQuantity or YTArray object
with the ``value`` and ``v`` attributes, which return a copy, or with the
``d`` attribute, which returns the data itself:

.. code-block:: python

   x = ds.quan(1, "kpc")
   x_val = x.v
   print(x_val)
   # array(1.0)

   # Now adding it to some non-dimensional quantity works
   print(x_val + 1)
   # 2.0

For more information about this functionality with units, see :ref:`units`.

Fields
------

.. _faq-handling-log-vs-linear-space:

How do I modify whether or not yt takes the log of a particular field?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

yt sets up defaults for many fields for whether or not a field is presented
in log or linear space. To override this behavior, you can modify the
``field_info`` dictionary.
For example, if you prefer that ``density`` not be logged, you could type:

.. code-block:: python

   ds = yt.load("my_data")
   ds.index
   ds.field_info["gas", "density"].take_log = False

From that point forward, data products such as slices, projections, etc.,
would be presented in linear space. Note that you have to instantiate
``ds.index`` before you can access ``ds.field_info``. For more information
see the documentation on :ref:`fields` and :ref:`creating-derived-fields`.

.. _faq-new-field:

I added a new field to my simulation data, can yt see it?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Yes! yt identifies all the fields in the simulation's output file and will
add them to its ``field_list`` even if they aren't listed in
:ref:`field-list`. These can then be accessed in the usual manner. For
example, if you have created a field for the potential called
``PotentialField``, you could type:

.. code-block:: python

   ds = yt.load("my_data")
   ad = ds.all_data()
   potential_field = ad["PotentialField"]

The same applies to fields you might derive inside your yt script via
:ref:`creating-derived-fields`. To check what fields are available, look at
the properties ``field_list`` and ``derived_field_list``:

.. code-block:: python

   print(ds.field_list)
   print(ds.derived_field_list)

or for a more legible version, try:

.. code-block:: python

   for field in ds.derived_field_list:
       print(field)

.. _faq-add-field-diffs:

What is the difference between ``yt.add_field()`` and ``ds.add_field()``?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The global ``yt.add_field()``
(:meth:`~yt.fields.field_info_container.FieldInfoContainer.add_field`)
function is for adding a field for every subsequent dataset that is loaded
in a particular python session, whereas ``ds.add_field()``
(:meth:`~yt.data_objects.static_output.Dataset.add_field`) will only add it
to dataset ``ds``.
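As a hedged sketch of the difference (the derived field here is a made-up
example):

.. code-block:: python

   import yt


   def _doubled_density(field, data):
       # hypothetical derived field, purely for illustration
       return 2 * data["gas", "density"]


   # registered for every dataset loaded afterwards in this session
   yt.add_field(
       ("gas", "doubled_density"),
       function=_doubled_density,
       sampling_type="cell",
       units="g/cm**3",
   )

   # registered only for this specific dataset
   ds = yt.load("my_data")
   ds.add_field(
       ("gas", "doubled_density"),
       function=_doubled_density,
       sampling_type="cell",
       units="g/cm**3",
   )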
Data Objects
------------

.. _ray-data-ordering:

Why are the values in my Ray object out of order?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Using the Ray objects
(:class:`~yt.data_objects.selection_data_containers.YTOrthoRay` and
:class:`~yt.data_objects.selection_data_containers.YTRay`) with AMR data
gives non-contiguous cell information in the Ray's data array. The
higher-resolution cells are appended to the end of the array.
Unfortunately, due to how data is loaded by chunks for data containers,
there is really no easy way to fix this internally. However, there is an
easy workaround.

One can sort the ``Ray`` array data by the ``t`` field, which is the value
of the parametric variable that goes from 0 at the start of the ray to 1 at
the end. That way the data will always be ordered correctly. As an example
you can:

.. code-block:: python

   import numpy as np

   my_ray = ds.ray(...)
   ray_sort = np.argsort(my_ray["t"])
   density = my_ray["gas", "density"][ray_sort]

There is also a full example in the :ref:`manual-line-plots` section of the
docs.

Developing
----------

.. _making-a-PR:

Someone asked me to make a Pull Request (PR) to yt. How do I do that?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A pull request is the action by which you contribute code to yt. You make
modifications in your local copy of the source code, then *request* that
other yt developers review and accept your changes to the main code base.
For a full description of the steps necessary to successfully contribute
code and issue a pull request (or manage multiple versions of the source
code) please see :ref:`sharing-changes`.

.. _making-an-issue:

Someone asked me to file an issue or a bug report for a bug I found. How?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

See :ref:`reporting-a-bug` and :ref:`sharing-changes`.

Miscellaneous
-------------

.. _getting-sample-data:

How can I get some sample data for yt?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Many different sample datasets can be found at
https://yt-project.org/data/ . These can be downloaded and unarchived, and
they will each create their own directory. It is generally straightforward
to load these datasets, but if you have any questions about loading data
from a code with which you are unfamiliar, please visit
:ref:`loading-data`.

To make these sample datasets easier to load, you can add the parent
directory of your downloaded sample data to your *yt path*. If you set the
option ``test_data_dir``, in the section ``[yt]``, in
``~/.config/yt/yt.toml``, yt will search this path for them. This means you
can download these datasets to ``/big_drive/data_for_yt``, add the
appropriate item to ``~/.config/yt/yt.toml``, and no matter which directory
you are in when running yt, it will also check in *that* directory.

In many cases, these are also available using the ``load_sample`` command,
described in :ref:`loading-sample-data`.

.. _faq-scroll-up:

I can't scroll-up to previous commands inside python
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If the up-arrow key does not recall the most recent commands, there is
probably an issue with the readline library. To ensure the yt python
environment can use readline, run the following command:

.. code-block:: bash

   $ python -m pip install gnureadline

.. _faq-old-data:

.. _faq-log-level:

How can I change yt's log level?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

yt's default log level is ``INFO``. However, you may want less voluminous
logging, especially if you are in an IPython notebook or running a long or
parallel script. On the other hand, you may want it to output a lot more,
since you can't figure out exactly what's going wrong, and you want to
output some debugging information. The default yt log level can be changed
using the :ref:`configuration-file`, either by setting it in the
``$HOME/.config/yt/yt.toml`` file:

.. code-block:: bash

   $ yt config set yt log_level 10  # This sets the log level to "DEBUG"

which would produce debug (as well as info, warning, and error) messages,
or at runtime:

.. code-block:: python

   yt.set_log_level("error")

This is the same as doing:

.. code-block:: python

   yt.set_log_level(40)

which in this case would suppress everything below error messages. For
reference, the numerical values corresponding to different log levels are:

.. csv-table::
   :header: Level, Numeric Value
   :widths: 10, 10

   ``CRITICAL``,50
   ``ERROR``,40
   ``WARNING``,30
   ``INFO``,20
   ``DEBUG``,10
   ``NOTSET``,0

Can I always load custom data objects, fields, quantities, and colormaps with every dataset?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The :ref:`plugin-file` provides a means for always running custom code
whenever yt is loaded up. This custom code can be new data objects, or
fields, or colormaps, which will then be accessible in any future session
without having modified the source code directly. See the description in
:ref:`plugin-file` for more details.
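As a hedged sketch of what such a plugin file might contain (the derived
field is a made-up example; see :ref:`plugin-file` for the file's exact
location and for activating it with ``yt.enable_plugins()``):

.. code-block:: python

   # contents of the yt plugin file (e.g. ~/.config/yt/my_plugins.py)
   def _dust_density(field, data):
       # hypothetical field: assume 1% of the gas density is dust
       return 0.01 * data["gas", "density"]


   add_field(
       ("gas", "dust_density"),
       function=_dust_density,
       sampling_type="cell",
       units="g/cm**3",
   )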
How do I cite yt?
^^^^^^^^^^^^^^^^^

If you use yt in a publication, we'd very much appreciate a citation! You
should feel free to cite the `ApJS paper `_ with the following BibTeX
entry: ::

   @ARTICLE{2011ApJS..192....9T,
      author = {{Turk}, M.~J. and {Smith}, B.~D. and {Oishi}, J.~S. and
         {Skory}, S. and {Skillman}, S.~W. and {Abel}, T. and {Norman}, M.~L.},
       title = "{yt: A Multi-code Analysis Toolkit for Astrophysical Simulation Data}",
     journal = {The Astrophysical Journal Supplement Series},
     archivePrefix = "arXiv",
     eprint = {1011.3514},
     primaryClass = "astro-ph.IM",
     keywords = {cosmology: theory, methods: data analysis, methods: numerical},
     year = 2011,
     month = jan,
     volume = 192,
     eid = {9},
     pages = {9},
     doi = {10.1088/0067-0049/192/1/9},
     adsurl = {https://ui.adsabs.harvard.edu/abs/2011ApJS..192....9T},
     adsnote = {Provided by the SAO/NASA Astrophysics Data System}
   }

.. _asking-for-help:

What to do if you run into problems
===================================

If you run into problems with yt, there are a number of steps to follow to
come to a solution. The first handful of options are things you can do on
your own, but if those don't yield results, we have provided a number of
ways to connect with our community of users and developers to solve the
problem together.

To summarize, here are the steps in order:

.. contents::
   :depth: 1
   :local:
   :backlinks: none

.. _dont-panic:

Don't panic and don't give up
-----------------------------

This may seem silly, but it's effective. While yt is a robust code with
lots of functionality, like all actively-developed codes sometimes there
are bugs. Chances are good that your problems have a quick fix, either
because someone encountered it before and fixed it, the documentation is
out of date, or some other simple solution. Don't give up! We want to help
you succeed!

.. _update-the-code:

Update to the latest version
----------------------------

Sometimes the pace of development is pretty fast on yt, particularly in the
development branch, so a fix to your problem may have already been
developed by the time you encounter it. Many users' problems can simply be
corrected by updating to the latest version of the code and/or its
dependencies. If you have installed the latest stable release of yt then
you should update yt using the package manager appropriate for your python
installation. See :ref:`updating`.

.. _search-the-documentation:

Search the documentation, FAQ, and mailing lists
------------------------------------------------

The documentation has a lot of the answers to everyday problems. This
doesn't mean you have to read all of the docs top-to-bottom, but you should
at least run a search to see if relevant topics have been answered in the
docs. Click on the search field to the right of this window and enter your
text. Another good place to look for answers in the documentation is our
:ref:`faq` page.

OK, so there was no obvious solution to your problem in the documentation.
It is possible that someone else experienced the problem before you did,
and wrote to the mailing list about it. You can easily check the mailing
list archive with the other search field to the right of this window.
.. _look-at-the-source:

Look at the source code
-----------------------

We've done our best to make the source clean, and it is easily searchable
from your computer. If you have not done so already, clone a copy of the yt
git repository and make it the 'active' installation (see
:ref:`install-from-source`).

Once inside the yt git repository, you can then search for the class,
function, or keyword which is giving you problems with ``grep -r *``, which
will recursively search throughout the code base. (For a much faster and
cleaner experience, we recommend ``grin`` instead of ``grep -r *``. To
install ``grin`` with python, just type ``python -m pip install grin``.)

So let's say that ``SlicePlot`` is giving you problems still, and you want
to look at the source to figure out what is going on.

.. code-block:: bash

   $ cd $YT_GIT/yt
   $ grep -r SlicePlot *         (or $ grin SlicePlot)

This will print a number of locations in the yt source tree where
``SlicePlot`` is mentioned. You can now follow up on this and open up the
files that have references to ``SlicePlot`` (particularly the one that
defines SlicePlot) and inspect their contents for problems or
clarification.

.. _isolate_and_document:

Isolate and document your problem
---------------------------------

As you gear up to take your question to the rest of the community, try to
distill your problem down to the fewest number of steps needed to produce
it in a script. This can help you (and us) to identify the basic problem.
Follow these steps:

* Identify what it is that went wrong, and how you knew it went wrong.
* Put your script, errors, inputs and outputs online:

  * ``$ yt pastebin script.py`` - pastes script.py online
  * ``$ yt upload_image image.png`` - pastes image online
  * ``$ yt upload my_input.tar`` - pastes my_input.tar online

* Identify which version of the code you're using.

  * ``$ yt version`` - provides version information, including changeset
    hash

It may be that through the mere process of doing this, you end up solving
the problem!

.. _irc:

Go on Slack to ask a question
-----------------------------

If you want a fast, interactive experience, you could try jumping into our
Slack to get your questions answered in a chatroom style environment. To
join our slack channel you will need to request an invite by going to
https://yt-project.org/development.html, click the "Join us @ Slack"
button, and fill out the form. You will get an invite as soon as an
administrator approves your request.

.. _mailing-list:

Ask the mailing list
--------------------

If you still haven't yet found a solution, feel free to write to the
mailing list regarding your problems. There are two mailing lists,
`yt-users `_ and `yt-dev `_. The first should be used for asking for help,
suggesting features and so on, and the latter has more chatter about the
way the code is developed and discussions of changes and feature
improvements.

If you email ``yt-users`` asking for help, remember to include the
information about your problem you identified in :ref:`this step `. When
you email the list, providing this information can help the developers
understand what you did, how it went wrong, and any potential fixes or
similar problems they have seen in the past. Without this context, it can
be very difficult to help out!

.. _reporting-a-bug:

Submit a bug report
-------------------

If you have gone through all of the above steps, and you're still
encountering problems, then you have found a bug.
To submit a bug report, you can either directly create one through the
GitHub `web interface `_, or you can email the ``yt-users`` mailing list
and we will construct a new ticket in your stead. Remember to include the
information about your problem you identified in :ref:`this step `.

Special Issues
--------------

Installation Issues
^^^^^^^^^^^^^^^^^^^

If you are having installation issues and nothing from the
:ref:`installation instructions ` seems to work, you should *definitely*
email the ``yt-users`` email list. You should provide information about the
host, the version of the code you are using, and the output of
``yt_install.log`` from your installation. We are very interested in making
sure that yt installs everywhere!

Customization and Scripting Issues
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you have customized yt in some way, or created your own plugins file (as
described in :ref:`plugin-file`), then it may be necessary to supply users
willing to help you (or the mailing list) with both your patches to the
source, the plugin file, and perhaps even the datafile on which you're
running.

yt Overview
===========

yt is a community-developed analysis and visualization toolkit for
volumetric data. yt has been applied mostly to astrophysical simulation
data, but it can be applied to many different types of data including
seismology, radio telescope data, weather simulations, and nuclear
engineering simulations. yt is developed in Python under the open-source
model.

yt supports :ref:`many different code formats `, and we provide
:ref:`sample data for each format ` with :ref:`instructions on how to load
and examine each data type `.

Table of Contents
-----------------

* **Introduction to yt**: What does yt offer? How can I use it? How to
  think in yt?
* **yt 4.0**: How yt-4.0 differs from past versions
* **yt 3.0**: How yt-3.0 differs from past versions
* **Installation**: Getting, installing, and updating yt
* **yt Quickstart**: Demonstrations of what yt can do
* **Loading and Examining Data**: How to load all dataset types in yt and
  examine raw data
* **The Cookbook**: Example recipes for how to accomplish a variety of
  tasks
* **Visualizing Data**: Make plots, projections, volume renderings,
  movies, and more
* **General Data Analysis**: The nuts and bolts of manipulating yt
  datasets
* **Domain-Specific Analysis**: Astrophysical analysis, clump finding,
  cosmology calculations, and more
* **Developer Guide**: Catering yt to work for your exact use case
* **Reference Materials**: Lists of fields, quantities, classes,
  functions, and more
* **Frequently Asked Questions**: Solutions for common questions and
  problems
* **Getting help**: What to do if you run into problems
* **About yt**: What is yt?
.. toctree::
   :hidden:

   intro/index
   installing
   yt Quickstart
   yt4differences
   yt3differences
   cookbook/index
   visualizing/index
   analyzing/index
   analyzing/domain_analysis/index
   examining/index
   developing/index
   reference/index
   faq/index
   Getting Help
   about/index

.. _installing-yt:

Getting and Installing yt
=========================

.. contents::
   :depth: 2
   :local:
   :backlinks: none

.. _getting-yt:

Disclaimer
----------

The Python ecosystem offers many viable tools to set up isolated Python
environments, including but not restricted to

- `venv `_ (part of the Python standard library)
- `Anaconda/conda `_
- `virtualenv `_

We strongly recommend you choose and learn one. However, it is beyond the
scope of this page to cover every situation. We will show you how to
install a stable release or from source, using conda or pip, and we will
*assume* that you do so in an isolated environment.

Also note that each yt release supports a limited range of Python versions.
Here's a summary for the most recent releases:

+------------+------------+-------------+-----------------+
| yt release | Python 2.7 | Python3 min | Python3 max     |
+============+============+=============+=================+
| 4.4.x      | no         | 3.10.3      | 3.13 (expected) |
+------------+------------+-------------+-----------------+
| 4.3.x      | no         | 3.9.2       | 3.12            |
+------------+------------+-------------+-----------------+
| 4.2.x      | no         | 3.8         | 3.11            |
+------------+------------+-------------+-----------------+
| 4.1.x      | no         | 3.7         | 3.11            |
+------------+------------+-------------+-----------------+
| 4.0.x      | no         | 3.6         | 3.10            |
+------------+------------+-------------+-----------------+
| 3.6.x      | no         | 3.5         | 3.8             |
+------------+------------+-------------+-----------------+
| 3.5.x      | yes        | 3.4         | 3.5             |
+------------+------------+-------------+-----------------+

Minimum Python versions are strict requirements, while the maximum
indicates the newest version for which the yt development team provides
pre-compiled binaries via PyPI and conda-forge. It may be possible to
compile existing yt versions under more recent Python versions, though this
is never guaranteed. yt also adheres to `SPEC 0 `_ as a soft guideline for
our support policy of core dependencies (Python, numpy, matplotlib, ...).

Getting yt
----------

In this document we describe several methods for installing yt. The method
that will work best for you depends on your precise situation:

* If you need a stable build, see :ref:`install-stable`
* If you want to build the development version of yt, see
  :ref:`install-from-source`.

.. _install-stable:

Installing a stable release
+++++++++++++++++++++++++++

The latest stable release can be obtained from PyPI with pip

.. code-block:: bash

   $ python -m pip install --upgrade pip
   $ python -m pip install --user yt

Or using the Anaconda/Miniconda Python distributions

.. code-block:: bash

   $ conda install --channel conda-forge yt

.. _install-additional:

Additional requirements (pip only)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

yt knows about several data file formats. In many cases (e.g. HDF5),
additional dependencies that are not installed with yt by default are
needed to enable parsing. In order to install all required packages for a
specific format alongside yt, one can specify them as, for instance
.. code-block:: bash

   $ python -m pip install --upgrade pip
   $ python -m pip install --user "yt[ramses]"

Extra requirements can be combined, separated by commas (say
``yt[ramses,enzo_e]``). Note that all format names are normalized to lower
case.

.. _install-from-source:

Building from source
++++++++++++++++++++

To build yt from source, you need ``git`` and a C compiler (such as ``gcc``
or ``clang``). Then run

.. code-block:: bash

   $ git clone https://github.com/yt-project/yt
   $ cd yt
   $ python -m pip install --upgrade pip
   $ python -m pip install --user -e .

.. _optional-runtime-deps:

Leveraging optional yt runtime dependencies
+++++++++++++++++++++++++++++++++++++++++++

Some relatively heavy runtime dependencies are not included in your build
by default, as they may be irrelevant in your workflow. Common examples
include h5py, mpi4py, astropy or scipy. yt implements an on-demand import
mechanism that allows it to run even when they are not installed, *until
they're needed*, in which case it will raise an ``ImportError`` pointing to
the missing requirement. If you wish to get everything from the start, you
may specify it when building yt by appending ``[full]`` to the target name
when calling pip, i.e.,

.. code-block:: bash

   $ # stable release
   $ python -m pip install --user yt[full]
   $ # from source
   $ python -m pip install --user -e .[full]

.. _testing-installation:

Testing Your Installation
+++++++++++++++++++++++++

To make sure everything is installed properly, try running yt at the
command line:

.. code-block:: bash

   $ python -c "import yt"

If this runs without raising errors, you have successfully installed yt.
Congratulations! Otherwise, read the error message carefully and follow any
instructions it gives you to resolve the issue. Do not hesitate to
:ref:`contact us ` so we can help you figure it out.

.. _updating:

Updating yt
+++++++++++

For pip-based installations:

.. code-block:: bash

   $ python -m pip install --upgrade yt

For conda-based installations:

.. code-block:: bash

   $ conda update yt

For git-based installations (yt installed from source), we provide the
following one-liner facility

.. code-block:: bash

   $ yt update

This will pull any changes from GitHub, and recompile yt if necessary.

Uninstalling yt
+++++++++++++++

If you've installed via pip (either from PyPI or from source)

.. code-block:: bash

   $ python -m pip uninstall yt

Or with conda

.. code-block:: bash

   $ conda uninstall yt

Troubleshooting
---------------

If you are unable to locate the yt executable (i.e. executing ``yt
version`` at the bash command line fails), then you likely need to add
``$HOME/.local/bin`` (or the equivalent on your OS) to your PATH. Some
Linux distributions do not include this directory in the default search
path.

Additional Resources
--------------------

.. _distro-packages:

yt Distribution Packages
++++++++++++++++++++++++

Some operating systems have yt pre-built packages that can be installed
with the system package manager. Note that the packages in some of these
distributions may be out of date.

.. note::

   Since the third-party packages listed below are not officially supported
   by yt developers, support should not be sought out on the project
   mailing lists or Slack channels. All support requests related to these
   packages should be directed to their official maintainers.

While we recommend installing yt with either pip or conda, a number of
third-party packages exist for the distributions listed below.
image:: https://repology.org/badge/vertical-allrepos/python:yt.svg?header=yt%20packaging%20status :target: https://repology.org/project/python:yt/versions Intel Distribution for Python +++++++++++++++++++++++++++++ A viable alternative to the installation based on Anaconda is the use of the `Intel Distribution for Python `_. For `Parallel Computation `_ on Intel architectures, especially on supercomputers, a large `performance and scalability improvement `_ over several common tasks has been demonstrated. See `Parallel Computation `_ for a discussion on using yt in parallel. Leveraging this specialized distribution for yt requires that you install some dependencies from the intel conda channel before installing yt itself, like so .. code-block:: bash $ conda install -c intel numpy scipy mpi4py cython git sympy ipython matplotlib netCDF4 $ python -m pip install --user yt ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.243151 yt-4.4.0/doc/source/intro/0000755000175100001770000000000014714401715014764 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/intro/index.rst0000644000175100001770000001771014714401662016634 0ustar00runnerdockerIntroduction to yt ================== Herein, we present a brief introduction to yt's capabilities and infrastructure with numerous links to relevant portions of the documentation on each topic. It is our hope that readers will not only gain insight into what can be done with yt, but also learn how to *think in yt* to solve science questions, learn some of the yt jargon, and figure out where to go in the docs for help. .. contents:: :depth: 2 :local: :backlinks: none Fields ^^^^^^ yt is an analysis toolkit operating on multidimensional datasets for :ref:`a variety of data formats `. It represents quantities varying over a multidimensional space as :ref:`fields ` such as gas density, gas temperature, etc. Many fields are defined when yt :ref:`loads the external dataset ` into "native fields" as defined by individual frontends for each code format. However, yt additionally creates many "derived fields" by manipulating and combining native fields. yt comes with a large existing :ref:`set of derived fields `, but you can also :ref:`create your own `. Objects ^^^^^^^ Central to yt's infrastructure are :ref:`data objects `, which act as a means of :ref:`filtering data ` based on :ref:`spatial location ` (e.g. lines, spheres, boxes, cylinders), based on :ref:`field values ` (e.g. all gas > 10^6 K), or for :ref:`constructing new data products ` (e.g. projections, slices, isosurfaces). Furthermore, yt can calculate the :ref:`bulk quantities ` associated with these data objects (e.g. total mass, bulk velocity, angular momentum). General Analysis ^^^^^^^^^^^^^^^^ The documentation section on :ref:`analyzing data ` has a full description of :ref:`fields `, :ref:`data objects `, and :ref:`filters `. It also includes an explanation of how the :ref:`units system ` works to tag every individual field and quantity with a physical unit (e.g. cm, AU, kpc, Mpc, etc.), and it describes ways of analyzing multiple chronological data outputs from the same underlying dataset, known as :ref:`time series `. Lastly, it includes information on how to enable yt to operate :ref:`in parallel over multiple processors simultaneously `.
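To make the fields, data objects, and units described above concrete, here is a minimal sketch (assuming the ``IsolatedGalaxy`` sample dataset, which the quickstart below also uses):

.. code-block:: python

    import yt

    ds = yt.load_sample("IsolatedGalaxy")
    ad = ds.all_data()

    # every field comes back as an array tagged with physical units
    dens = ad["gas", "density"]
    print(dens.max())                         # CGS by default (g/cm**3)
    print(dens.max().in_units("Msun/pc**3"))  # explicit unit conversion

The same ``in_units`` call works on any field or derived quantity, so unit bookkeeping carries through an entire analysis.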
Datasets can be analyzed by simply :ref:`examining raw source data `, or they can be processed in a number of ways to extract relevant information and to explore the data, including :ref:`visualizing data `. Visualization ^^^^^^^^^^^^^ yt provides many tools for :ref:`visualizing data `, and herein we highlight a few of them. yt can create :ref:`slice plots `, wherein a three-dimensional volume (or any of the :ref:`data objects `) is *sliced* by a plane to return the two-dimensional field data intersected by that plane. Similarly, yt can generate :ref:`line queries (i.e. rays) ` of a single line intersecting a three-dimensional dataset. :ref:`Projection plots ` are generated by projecting a three-dimensional volume into two dimensions either :ref:`by summing or integrating ` the field along each pixel's line of sight with or without a weighting field. Slices, projections, and rays can be made to align with the primary axes of the simulation (e.g. x, y, z) or at any arbitrary angle throughout the volume. For these operations, a number of :ref:`"callbacks" ` exist that will annotate your figures with field contours, velocity vectors, particle and halo positions, streamlines, simple shapes, and text. yt can examine correlations between two or three fields simultaneously with :ref:`profile plots ` and :ref:`phase plots `. By querying field data for two separate fields at each position in your dataset or :ref:`data object `, yt can show the relationship between those two fields in a :ref:`profile plot ` (e.g. average gas density as a function of radius). Similarly, a :ref:`phase plot ` correlates two fields as described above, but it weights those fields by a third field. Phase plots commonly use mass as the weighting field and are oftentimes used to relate gas density and temperature. More advanced visualization functionality in yt includes generating :ref:`streamlines ` to track the velocity flow in your datasets, creating photorealistic isocontour images of your data called :ref:`volume renderings `, and :ref:`visualizing isosurfaces in an external interactive tool `. yt even has a special web-based tool for exploring your data with a :ref:`google-maps-like interface `. Executing and Scripting yt ^^^^^^^^^^^^^^^^^^^^^^^^^^ yt is written almost entirely in Python and functions as a library that you can import into your Python scripts. There is full docstring documentation for all of the major classes and functions in the :ref:`API docs `. yt has support for running in IPython and for running IPython notebooks for fully interactive sessions both locally and on remote supercomputers. yt also has a number of ways it can be :ref:`executed at the command line ` for simple tasks like automatically loading a dataset, updating the yt source code, starting an IPython notebook, or uploading scripts and images to public locations. There is an optional :ref:`yt configuration file ` you can modify for controlling local settings like colors, logging, and output settings. There is also an optional :ref:`yt plugin file ` you can create to automatically load certain datasets, custom derived fields, derived quantities, and more. Cookbook and Quickstart ^^^^^^^^^^^^^^^^^^^^^^^ yt contains a number of example recipes for demonstrating simple and complex tasks in :ref:`the cookbook ` including many of the topics discussed above. The cookbook also contains :ref:`more lengthy notebooks ` to demonstrate more sophisticated machinery on a variety of topics.
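As a small taste of the visualization workflow described above, here is another minimal sketch (again assuming the ``IsolatedGalaxy`` sample dataset) that produces a projection plot and a phase plot:

.. code-block:: python

    import yt

    ds = yt.load_sample("IsolatedGalaxy")

    # an axis-aligned projection of gas density along z
    p = yt.ProjectionPlot(ds, "z", ("gas", "density"))
    p.save()

    # total gas mass binned in density-temperature space
    ph = yt.PhasePlot(
        ds.all_data(),
        ("gas", "density"),
        ("gas", "temperature"),
        ("gas", "mass"),
        weight_field=None,
    )
    ph.save()

Passing ``weight_field=None`` makes the phase plot sum the binned mass rather than average it, which is the usual convention when the binned field is itself a mass.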
If you're new to yt and you just want to see a broad demonstration of some of the things yt can do, check out the :ref:`yt quickstart `. Developing in yt ^^^^^^^^^^^^^^^^ yt is an open source development project, with only scientist-developers like you to support it, add code, add documentation, etc. As such, we welcome members of the public to join :ref:`our community ` by contributing code, bug reports, documentation, and helping to :ref:`support the code in a number of ways `. Sooner or later, you'll want to :ref:`add your own derived field `, :ref:`data object `, :ref:`code frontend `, or :ref:`make yt compatible with an external code `. We have detailed instructions on how to :ref:`contribute code `, :ref:`documentation `, and :ref:`tests `, and how to :ref:`debug this code `. Getting Help ^^^^^^^^^^^^ We have all been there, where something is going wrong and we cannot understand why. Check out our :ref:`frequently asked questions ` and the documentation section :ref:`asking-for-help` to get solutions for your problems. Getting Started ^^^^^^^^^^^^^^^ We have detailed :ref:`installation instructions ` and support for a number of platforms including Unix, Linux, MacOS, and Windows. If you are new to yt, check out the :ref:`yt Quickstart ` and the :ref:`cookbook ` for a demonstration of yt's capabilities. If you previously used yt version 2, check out our guide on :ref:`how to make your scripts work in yt 3 `. So what are you waiting for? Good luck and welcome to the yt community. ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.243151 yt-4.4.0/doc/source/quickstart/0000755000175100001770000000000014714401715016023 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/quickstart/1)_Introduction.ipynb0000644000175100001770000000601314714401662022041 0ustar00runnerdocker{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Welcome to the yt quickstart!\n", "\n", "In this brief tutorial, we'll go over how to load up data, analyze things, inspect your data, and make some visualizations.\n", "\n", "Our documentation page can provide information on a variety of the commands that are used here, both in narrative documentation as well as recipes for specific functionality in our cookbook. The documentation exists at https://yt-project.org/doc/. If you encounter problems, look for help here: https://yt-project.org/doc/help/index.html." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Acquiring the datasets for this tutorial\n", "\n", "If you are executing these tutorials interactively, you need some sample datasets on which to run the code. You can download these datasets at https://yt-project.org/data/, or you can use the built-in yt sample data loader (using [pooch](https://www.fatiando.org/pooch/latest/api/index.html) under the hood) to automatically download the data for you.\n", "\n", "The datasets necessary for each lesson are noted next to the corresponding tutorial, and by default it will use the pooch-based dataset downloader. If you would like to supply your own paths, you can choose to do so.\n", "\n", "## Using the Automatic Downloader\n", "\n", "For the purposes of this tutorial, or whenever you want to use sample data, you can use the `load_sample` command to utilize the pooch auto-downloader.
For instance:\n", "\n", "```python\n", "ds = yt.load_sample(\"IsolatedGalaxy\")\n", "```\n", "\n", "## Using manual loading\n", "\n", "The way you will *most frequently* interact with `yt` is using the standard `load` command. This accepts a path and optional arguments. For instance:\n", "\n", "```python\n", "ds = yt.load(\"IsolatedGalaxy/galaxy0030/galaxy0030\")\n", "```\n", "\n", "would load the `IsolatedGalaxy` dataset by supplying the full path to the parameter file." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## What's Next?\n", "\n", "The Notebooks are meant to be explored in this order:\n", "\n", "1. Introduction (this file!)\n", "2. Data Inspection (IsolatedGalaxy dataset)\n", "3. Simple Visualization (enzo_tiny_cosmology & Enzo_64 datasets)\n", "4. Data Objects and Time Series (IsolatedGalaxy dataset)\n", "5. Derived Fields and Profiles (IsolatedGalaxy dataset)\n", "6. Volume Rendering (IsolatedGalaxy dataset)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.3" } }, "nbformat": 4, "nbformat_minor": 0 } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/quickstart/2)_Data_Inspection.ipynb0000644000175100001770000002630714714401662022435 0ustar00runnerdocker{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Starting Out and Loading Data\n", "\n", "We're going to get started by loading up yt. This next command brings all of the libraries into memory and sets up our environment." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "import yt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we've loaded yt, we can load up some data. Let's load the `IsolatedGalaxy` dataset." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "ds = yt.load_sample(\"IsolatedGalaxy\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Fields and Facts\n", "\n", "When you call the `load` function, yt tries to do very little -- this is designed to be a fast operation, just setting up some information about the simulation. Now, the first time you access the \"index\" it will read and load the mesh and then determine where data is placed in the physical domain and on disk. 
Once it knows that, yt can tell you some statistics about the simulation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "ds.print_stats()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "yt can also tell you the fields it found on disk:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "ds.field_list" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And, all of the fields it thinks it knows how to generate:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "ds.derived_field_list" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "yt can also transparently generate fields. However, we encourage you to examine exactly what yt is doing when it generates those fields. To see, you can ask for the source of a given field." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "print(ds.field_info[\"gas\", \"vorticity_x\"].get_source())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "yt stores information about the domain of the simulation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "ds.domain_width" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "yt can also convert this into various units:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "print(ds.domain_width.in_units(\"kpc\"))\n", "print(ds.domain_width.in_units(\"au\"))\n", "print(ds.domain_width.in_units(\"mile\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we can get basic information about the particle types and number of particles in a simulation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "print(ds.particle_types)\n", "print(ds.particle_types_raw)\n", "print(ds.particle_type_counts)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For this dataset, we see that there are two particle types defined (`io` and `all`), but that only one of these particle types is in `ds.particle_types_raw`. The `ds.particle_types` list contains *all* particle types in the simulation, including ones that are dynamically defined like particle unions. The `ds.particle_types_raw` list includes only particle types that are in the output file we loaded the dataset from.\n", "\n", "We can also see that there are a bit more than 1.1 million particles in this simulation. Only particle types in `ds.particle_types_raw` will appear in the `ds.particle_type_counts` dictionary." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Mesh Structure\n", "\n", "If you're using a simulation type that has grids (for instance, here we're using an Enzo simulation), you can examine the structure of the mesh. For the most part, you probably won't have to use this unless you're debugging a simulation or examining in detail what is going on."
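] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sketch of what the index exposes (assuming the standard `num_grids` and `max_level` attributes of grid indexes), we can count the grids and check the deepest refinement level before printing their edges below." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "# number of grids and deepest AMR level in this dataset\n", "print(ds.index.num_grids)\n", "print(ds.index.max_level)"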
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "print(ds.index.grid_left_edge)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But, you may have to access information about individual grid objects! Each grid object mediates accessing data from the disk and has a number of attributes that tell you about it. The index (`ds.index` here) has an attribute `grids` which is all of the grid objects." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "ds.index.grids[1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "g = ds.index.grids[1]\n", "print(g)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Grids have dimensions, extents, level, and even a list of Child grids." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "g.ActiveDimensions" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "g.LeftEdge, g.RightEdge" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "g.Level" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "g.Children" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Advanced Grid Inspection\n", "\n", "If we want to examine grids only at a given level, we can! Not only that, but we can load data and take a look at various fields.\n", "\n", "*This section can be skipped!*" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "gs = ds.index.select_grids(ds.index.max_level)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "g2 = gs[0]\n", "print(g2)\n", "print(g2.Parent)\n", "print(g2.get_global_startindex())" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "g2[\"density\"][:, :, 0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "print((g2.Parent.child_mask == 0).sum() * 8)\n", "print(g2.ActiveDimensions.prod())" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "for f in ds.field_list:\n", " fv = g[f]\n", " if fv.size == 0:\n", " continue\n", " print(f, fv.min(), fv.max())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Examining Data in Regions\n", "\n", "yt provides data object selectors. In subsequent notebooks we'll examine these in more detail, but we can select a sphere of data and perform a number of operations on it. 
yt makes it easy to operate on fluid fields in an object in *bulk*, but you can also examine individual field values.\n", "\n", "This creates a sphere selector positioned at the most dense point in the simulation that has a radius of 10 kpc." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "sp = ds.sphere(\"max\", (10, \"kpc\"))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "sp" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can calculate a bunch of bulk quantities. Here's that list, but there's a list in the docs, too!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "list(sp.quantities.keys())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at the total mass. This is how you call a given quantity. yt calls these \"Derived Quantities\". We'll talk about a few in a later notebook." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "sp.quantities.total_mass()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.3" } }, "nbformat": 4, "nbformat_minor": 4 } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/quickstart/3)_Simple_Visualization.ipynb0000644000175100001770000001760414714401662023544 0ustar00runnerdocker{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Simple Visualizations of Data\n", "\n", "Just like in our first notebook, we have to load yt and then some data." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "import yt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For this notebook, we'll load up a cosmology dataset." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "ds = yt.load_sample(\"enzo_tiny_cosmology\")\n", "print(\"Redshift =\", ds.current_redshift)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the terms that yt uses, a projection is a line integral through the domain. This can either be unweighted (in which case a column density is returned) or weighted, in which case an average value is returned. Projections are, like all other data objects in yt, full-fledged data objects that churn through data and present that to you. However, we also provide a simple method of creating Projections and plotting them in a single step. This is called a Plot Window, here specifically known as a `ProjectionPlot`. One thing to note is that in yt, we project all the way through the entire domain at a single time. 
This means that the first call to projecting can be somewhat time consuming, but panning, zooming and plotting are all quite fast.\n", "\n", "yt is designed to make it easy to make nice plots and straightforward to modify those plots directly. The cookbook in the documentation includes detailed examples of this." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "p = yt.ProjectionPlot(ds, \"y\", (\"gas\", \"density\"))\n", "p.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `show` command simply sends the plot to the IPython notebook. You can also call `p.save()` which will save the plot to the file system. This function accepts an argument, which will be prepended to the filename and can be used to name it based on the width or to supply a location.\n", "\n", "Now we'll zoom and pan a bit." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "p.zoom(2.0)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "p.pan_rel((0.1, 0.0))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "p.zoom(10.0)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "p.pan_rel((-0.25, -0.5))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "p.zoom(0.1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we specify multiple fields, each time we call `show` we get multiple plots back. Same for `save`!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "p = yt.ProjectionPlot(\n", " ds,\n", " \"z\",\n", " [(\"gas\", \"density\"), (\"gas\", \"temperature\")],\n", " weight_field=(\"gas\", \"density\"),\n", ")\n", "p.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can adjust the colormap on a field-by-field basis." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "p.set_cmap((\"gas\", \"temperature\"), \"hot\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And, we can re-center the plot on different locations. One possible use of this would be to make a single `ProjectionPlot` which you move around to look at different regions in your simulation, saving at each one." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "v, c = ds.find_max((\"gas\", \"density\"))\n", "p.set_center((c[0], c[1]))\n", "p.zoom(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Okay, let's load up a bigger simulation (from `Enzo_64` this time) and make a slice plot." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "ds = yt.load_sample(\"Enzo_64/DD0043/data0043\")\n", "s = yt.SlicePlot(\n", " ds, \"z\", [(\"gas\", \"density\"), (\"gas\", \"velocity_magnitude\")], center=\"max\"\n", ")\n", "s.set_cmap((\"gas\", \"velocity_magnitude\"), \"cmyt.pastel\")\n", "s.zoom(10.0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can adjust the logging of various fields:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "s.set_log((\"gas\", \"velocity_magnitude\"), True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "yt provides many different annotations for your plots. You can see all of these in the documentation, or if you type `s.annotate_` and press tab, a list will show up here. We'll annotate with velocity arrows." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "s.annotate_velocity()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Contours can also be overlaid:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "s = yt.SlicePlot(ds, \"x\", (\"gas\", \"density\"), center=\"max\")\n", "s.annotate_contour((\"gas\", \"temperature\"))\n", "s.zoom(2.5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we can save out to the file system." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "s.save()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.3" } }, "nbformat": 4, "nbformat_minor": 4 } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/quickstart/4)_Data_Objects_and_Time_Series.ipynb0000644000175100001770000003003014714401662025013 0ustar00runnerdocker{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Data Objects and Time Series Data\n", "\n", "Just like before, we will load up yt. Since we'll be using pyplot to plot some data in this notebook, we additionally tell matplotlib to place plots inline inside the notebook." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "\n", "import yt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Time Series Data\n", "\n", "Unlike before, instead of loading a single dataset, this time we'll load a bunch which we'll examine in sequence. This command creates a `DatasetSeries` object, which can be iterated over (including in parallel, which is outside the scope of this quickstart) and analyzed. There are some other helpful operations it can provide, but we'll stick to the basics here.\n", "\n", "Note that you can specify either a list of filenames, or a glob (i.e., asterisk) pattern in this." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ts = yt.load(\"enzo_tiny_cosmology/DD????/DD????\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Simple Time Series\n", "\n", "As a simple example of how we can use this functionality, let's find the min and max of the density as a function of time in this simulation. To do this we use the construction `for ds in ts` where `ds` means \"Dataset\" and `ts` is the \"Time Series\" we just loaded up. For each dataset, we'll create an object (`ad`) that covers the entire domain. (`all_data` is a shorthand function for this.) We'll then call the `extrema` Derived Quantity, and append the min and max to our extrema outputs. Lastly, we're turn down yt's logging to only show \"error\"s so as to not produce too much logging text, as it loads each individual dataset below." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "yt.set_log_level(\"error\")\n", "rho_ex = []\n", "times = []\n", "for ds in ts:\n", " ad = ds.all_data()\n", " rho_ex.append(ad.quantities.extrema(\"density\"))\n", " times.append(ds.current_time.in_units(\"Gyr\"))\n", "rho_ex = np.array(rho_ex)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we plot the minimum and the maximum:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "fig, ax = plt.subplots()\n", "ax.set(\n", " xlabel=\"Time (Gyr)\",\n", " ylabel=\"Density ($g/cm^3$)\",\n", " yscale=\"log\",\n", " ylim=(1e-32, 1e-21),\n", ")\n", "ax.plot(times, rho_ex[:, 0], \"-xk\", label=\"Minimum\")\n", "ax.plot(times, rho_ex[:, 1], \"-xr\", label=\"Maximum\")\n", "ax.legend()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data Objects\n", "\n", "Time series data have many applications, but most of them rely on examining the underlying data in some way. Below, we'll see how to use and manipulate data objects.\n", "\n", "### Ray Queries\n", "\n", "yt provides the ability to examine rays, or lines, through the domain. Note that these are not periodic, unlike most other data objects. We create a ray object and can then examine quantities of it. Rays have the special fields `t` and `dts`, which correspond to the time the ray enters a given cell and the distance it travels through that cell.\n", "\n", "To create a ray, we specify the start and end points.\n", "\n", "Note that we need to convert these arrays to numpy arrays due to a bug in matplotlib 1.3.1." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "fig, ax = plt.subplots()\n", "ray = ds.ray([0.1, 0.2, 0.3], [0.9, 0.8, 0.7])\n", "ax.semilogy(np.array(ray[\"t\"]), np.array(ray[\"density\"]))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "print(ray[\"dts\"])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "print(ray[\"t\"])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "print(ray[\"gas\", \"x\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Slice Queries\n", "\n", "While slices are often used for visualization, they can be useful for other operations as well. yt regards slices as multi-resolution objects. They are an array of cells that are not all the same size; it only returns the cells at the highest resolution that it intersects. (This is true for all yt data objects.) Slices and projections have the special fields `px`, `py`, `pdx` and `pdy`, which correspond to the coordinates and half-widths in the pixel plane." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "ds = yt.load_sample(\"IsolatedGalaxy\")\n", "v, c = ds.find_max((\"gas\", \"density\"))\n", "sl = ds.slice(2, c[0])\n", "print(sl[\"index\", \"x\"])\n", "print(sl[\"index\", \"z\"])\n", "print(sl[\"pdx\"])\n", "print(sl[\"gas\", \"density\"].shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we want to do something interesting with a `Slice`, we can turn it into a `FixedResolutionBuffer`. This object can be queried and will return a 2D array of values." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "frb = sl.to_frb((50.0, \"kpc\"), 1024)\n", "print(frb[\"gas\", \"density\"].shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "yt provides a few functions for writing arrays to disk, particularly in image form. Here we'll write out the log of `density`, and then use IPython to display it back here. Note that for the most part, you will probably want to use a `PlotWindow` for this, but in the case that it is useful you can directly manipulate the data." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "yt.write_image(np.log10(frb[\"gas\", \"density\"]), \"temp.png\")\n", "from IPython.display import Image\n", "\n", "Image(filename=\"temp.png\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Off-Axis Slices\n", "\n", "yt provides not only slices, but off-axis slices that are sometimes called \"cutting planes.\" These are specified by (in order) a normal vector and a center. Here we've set the normal vector to `[0.2, 0.3, 0.5]` and the center to be the point of maximum density.\n", "\n", "We can then turn these directly into plot windows using `to_pw`. Note that the `to_pw` and `to_frb` methods are available on slices, off-axis slices, and projections, and can be used on any of them." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "cp = ds.cutting([0.2, 0.3, 0.5], \"max\")\n", "pw = cp.to_pw(fields=[(\"gas\", \"density\")])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Once we have our plot window from our cutting plane, we can show it here." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "pw.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "pw.zoom(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can, as noted above, do the same with our slice:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "pws = sl.to_pw(fields=[(\"gas\", \"density\")])\n", "pws.show()\n", "print(list(pws.plots.keys()))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Covering Grids\n", "\n", "If we want to access a 3D array of data that spans multiple resolutions in our simulation, we can use a covering grid. This will return a 3D array of data, drawing from up to the resolution level specified when creating the data. For example, if you create a covering grid that spans two child grids of a single parent grid, it will fill those zones covered by a zone of a child grid with the data from that child grid. Where it is covered only by the parent grid, the cells from the parent grid will be duplicated (appropriately) to fill the covering grid.\n", "\n", "There are two different types of covering grids: unsmoothed and smoothed. Smoothed grids will be filled through a cascading interpolation process; they will be filled at level 0, interpolated to level 1, filled at level 1, interpolated to level 2, filled at level 2, etc. This will help to reduce edge effects. Unsmoothed covering grids will not be interpolated, but rather values will be duplicated multiple times.\n", "\n", "For SPH datasets, the covering grid gives the SPH-interpolated value of a field at each grid cell center. This is done for unsmoothed grids; smoothed grids are not available for SPH data.\n", "\n", "Here we create an unsmoothed covering grid at level 2, with the left edge at `[0.0, 0.0, 0.0]` and with dimensions equal to those that would cover the entire domain at level 2. We can then ask for the Density field, which will be a 3D array." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "cg = ds.covering_grid(2, [0.0, 0.0, 0.0], ds.domain_dimensions * 2**2)\n", "print(cg[\"density\"].shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this example, we do exactly the same thing: except we ask for a *smoothed* covering grid, which will reduce edge effects." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "scg = ds.smoothed_covering_grid(2, [0.0, 0.0, 0.0], ds.domain_dimensions * 2**2)\n", "print(scg[\"density\"].shape)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3.9.5 64-bit ('yt-dev': pyenv)", "metadata": { "interpreter": { "hash": "14363bd97bed451d1329fb3e06aa057a9e955a9421c5343dd7530f5497723a41" } }, "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.5" } }, "nbformat": 4, "nbformat_minor": 4 } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/quickstart/5)_Derived_Fields_and_Profiles.ipynb0000644000175100001770000002320014714401662024716 0ustar00runnerdocker{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Derived Fields and Profiles\n", "\n", "One of the most powerful features in yt is the ability to create derived fields that act and look exactly like fields that exist on disk. This means that they will be generated on demand and can be used anywhere a field that exists on disk would be used. Additionally, you can create them by just writing python functions." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "\n", "import yt\n", "from yt import derived_field" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Derived Fields\n", "\n", "This is an example of the simplest possible way to create a derived field. All derived fields are defined by a function and some metadata; that metadata can include units, LaTeX-friendly names, conversion factors, and so on. Fields can be defined in the way in the next cell. What this does is create a function which accepts two arguments and then provide the units for that field. In this case, our field is `dinosaurs` and our units are `K*cm/s`. The function itself can access any fields that are in the simulation, and it does so by requesting data from the object called `data`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "@derived_field(name=\"dinosaurs\", units=\"K * cm/s\", sampling_type=\"cell\")\n", "def _dinos(field, data):\n", " return data[\"gas\", \"temperature\"] * data[\"gas\", \"velocity_magnitude\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One important thing to note is that derived fields must be defined *before* any datasets are loaded. Let's load up our data and take a look at some quantities." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "ds = yt.load_sample(\"IsolatedGalaxy\")\n", "dd = ds.all_data()\n", "print(list(dd.quantities.keys()))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One interesting question is, what are the minimum and maximum values of dinosaur production rates in our isolated galaxy? 
We can do that by examining the `extrema` quantity -- the exact same way that we would for density, temperature, and so on." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "print(dd.quantities.extrema((\"gas\", \"dinosaurs\")))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can do the same for the average quantities as well." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "print(\n", " dd.quantities.weighted_average_quantity(\n", " (\"gas\", \"dinosaurs\"), weight=(\"gas\", \"temperature\")\n", " )\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## A Few Other Quantities\n", "\n", "We can ask other quantities of our data, as well. For instance, this sequence of operations will find the most dense point, center a sphere on it, calculate the bulk velocity of that sphere, calculate the baryonic angular momentum vector, and then the density extrema. All of this is done in a memory conservative way: if you have an absolutely enormous dataset, yt will split that dataset into pieces, apply intermediate reductions and then a final reduction to calculate your quantity." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "sp = ds.sphere(\"max\", (10.0, \"kpc\"))\n", "bv = sp.quantities.bulk_velocity()\n", "L = sp.quantities.angular_momentum_vector()\n", "rho_min, rho_max = sp.quantities.extrema((\"gas\", \"density\"))\n", "print(bv)\n", "print(L)\n", "print(rho_min, rho_max)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Profiles\n", "\n", "yt provides the ability to bin in 1, 2 and 3 dimensions. This means discretizing in one or more dimensions of phase space (density, temperature, etc.) and then calculating either the total value of a field in each bin or the average value of a field in each bin.\n", "\n", "We do this using the objects `Profile1D`, `Profile2D`, and `Profile3D`. The first two are the most common since they are the easiest to visualize.\n", "\n", "This first set of commands manually creates a profile object from the sphere we created earlier, binned in 32 bins according to density between `rho_min` and `rho_max`, and then takes the mass-weighted average of the fields `temperature` and (previously-defined) `dinosaurs`. We then plot it in a log-log plot." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "prof = yt.Profile1D(\n", " sp, (\"gas\", \"density\"), 32, rho_min, rho_max, True, weight_field=(\"gas\", \"mass\")\n", ")\n", "prof.add_fields([(\"gas\", \"temperature\"), (\"gas\", \"dinosaurs\")])\n", "\n", "fig, ax = plt.subplots()\n", "ax.loglog(np.array(prof.x), np.array(prof[\"gas\", \"temperature\"]), \"-x\")\n", "ax.set(\n", " xlabel=\"Density $(g/cm^3)$\",\n", " ylabel=\"Temperature $(K)$\",\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we plot the `dinosaurs` field."
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "fig, ax = plt.subplots()\n", "ax.loglog(np.array(prof.x), np.array(prof[\"gas\", \"dinosaurs\"]), \"-x\")\n", "ax.set(\n", " xlabel=\"Density $(g/cm^3)$\",\n", " ylabel=\"Dinosaurs $(K cm / s)$\",\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we want to see the total mass in every bin, we profile the `mass` field with no weight. Specifying `weight=None` will simply take the total value in every bin and add that up." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "prof = yt.Profile1D(\n", " sp, (\"gas\", \"density\"), 32, rho_min, rho_max, True, weight_field=None\n", ")\n", "prof.add_fields([(\"gas\", \"mass\")])\n", "\n", "fig, ax = plt.subplots()\n", "ax.loglog(np.array(prof.x), np.array(prof[\"gas\", \"mass\"].in_units(\"Msun\")), \"-x\")\n", "ax.set(\n", " xlabel=\"Density $(g/cm^3)$\",\n", " ylabel=r\"Cell mass $(M_\\odot)$\",\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In addition to the low-level `ProfileND` interface, it's also quite straightforward to quickly create plots of profiles using the `ProfilePlot` class. Let's redo the last plot using `ProfilePlot`" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "prof = yt.ProfilePlot(sp, (\"gas\", \"density\"), (\"gas\", \"mass\"), weight_field=None)\n", "prof.set_unit((\"gas\", \"mass\"), \"Msun\")\n", "prof.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Field Parameters\n", "\n", "Field parameters are a method of passing information to derived fields. For instance, you might pass in information about a vector you want to use as a basis for a coordinate transformation. yt often uses things like `bulk_velocity` to identify velocities that should be subtracted off. Here we show how that works:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "sp_small = ds.sphere(\"max\", (50.0, \"kpc\"))\n", "bv = sp_small.quantities.bulk_velocity()\n", "\n", "sp = ds.sphere(\"max\", (0.1, \"Mpc\"))\n", "rv1 = sp.quantities.extrema((\"gas\", \"radial_velocity\"))\n", "\n", "sp.clear_data()\n", "sp.set_field_parameter(\"bulk_velocity\", bv)\n", "rv2 = sp.quantities.extrema((\"gas\", \"radial_velocity\"))\n", "\n", "print(bv)\n", "print(rv1)\n", "print(rv2)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.3" } }, "nbformat": 4, "nbformat_minor": 4 } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/quickstart/6)_Volume_Rendering.ipynb0000644000175100001770000000700414714401662022632 0ustar00runnerdocker{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# A Brief Demo of Volume Rendering\n", "\n", "This shows a small amount of volume rendering. Really, just enough to get your feet wet!" 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "import yt\n", "\n", "ds = yt.load_sample(\"IsolatedGalaxy\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To create a volume rendering, we need a camera and a transfer function. We'll use the `ColorTransferFunction`, which accepts (in log space) the minimum and maximum bounds of our transfer function. This means behavior for data outside these values is undefined.\n", "\n", "We then add on \"layers\" like an onion. This function can accept a width (here specified) in data units, and also a color map. Here we add on four layers.\n", "\n", "Finally, we create a camera. The focal point is `[0.5, 0.5, 0.5]`, the width is 20 kpc (including front-to-back integration) and we specify a transfer function. Once we've done that, we call `show` to actually cast our rays and display them inline." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "sc = yt.create_scene(ds)\n", "\n", "sc.camera.set_width(ds.quan(20, \"kpc\"))\n", "\n", "source = sc.sources[\"source_00\"]\n", "\n", "tf = yt.ColorTransferFunction((-28, -24))\n", "tf.add_layers(4, w=0.01)\n", "\n", "source.set_transfer_function(tf)\n", "\n", "sc.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we want to apply a clipping, we can specify the `sigma_clip`. This will clip the upper bounds to this value times the standard deviation of the values in the image array." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "sc.show(sigma_clip=4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are several other options we can specify. Note that here we have turned on the use of ghost zones, shortened the data interval for the transfer function, and widened our gaussian layers." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "sc = yt.create_scene(ds)\n", "\n", "sc.camera.set_width(ds.quan(20, \"kpc\"))\n", "\n", "source = sc.sources[\"source_00\"]\n", "\n", "source.field = \"density\"\n", "\n", "tf = yt.ColorTransferFunction((-28, -25))\n", "tf.add_layers(4, w=0.03)\n", "\n", "source.transfer_function = tf\n", "\n", "sc.show(sigma_clip=4.0)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.3" } }, "nbformat": 4, "nbformat_minor": 4 } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/quickstart/index.rst0000644000175100001770000000540314714401662017667 0ustar00runnerdocker.. _quickstart: yt Quickstart ============= The quickstart is a series of worked examples of how to use much of the functionality of yt. These are simple, short introductions to give you a taste of what the code can do and are not meant to be detailed walkthroughs. There are two ways in which you can go through the quickstart: interactively and non-interactively. 
We recommend the interactive method, but if you're pressed for time, you can non-interactively go through the linked pages below and view the worked examples. To execute the quickstart interactively, you have a couple of options: 1) run the notebook from your own system or 2) run it from the URL https://girder.hub.yt/#raft/5b5b4686323d12000122aa8a. Option 1 requires an existing installation of yt (see :ref:`installing-yt`), a copy of the yt source (which you may already have depending on your installation choice), and a download of the tutorial datasets (total about 3 GB). If you know you are going to be a yt user and have the time to download the datasets, option 1 is a good choice. However, if you're only interested in getting a feel for yt and its capabilities, or you already have yt but don't want to spend time downloading the data, go ahead to https://girder.hub.yt/#raft/5b5b4686323d12000122aa8a. If you're running the tutorial from your own system and you do not already have the yt repository, the easiest way to get the repository is to clone it using git: .. code-block:: bash git clone https://github.com/yt-project/yt Now start JupyterLab from within the repository (we presume you have yt and `jupyterlab <https://jupyterlab.readthedocs.io/en/latest/>`_ installed): .. code-block:: bash cd yt/doc/source/quickstart jupyter lab This command will give you information about the notebook server and how to access it. You will basically just pick a password (for security reasons) and then redirect your web browser to point to the notebook server. Once you have done so, choose "Introduction" from the list of notebooks, which includes an introduction and information about how to download the sample data. .. warning:: The pre-filled out notebooks are *far* less fun than running them yourselves! Check out the repo and give it a try. Here are the notebooks, which have been filled in for inspection: .. toctree:: :maxdepth: 1 1)_Introduction 2)_Data_Inspection 3)_Simple_Visualization 4)_Data_Objects_and_Time_Series 5)_Derived_Fields_and_Profiles 6)_Volume_Rendering .. note:: The notebooks use sample datasets that are available for download at https://yt-project.org/data. See :doc:`1)_Introduction` for more details. Let us know if you would like to contribute other example notebooks, or have any suggestions for how these can be improved. ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2471511 yt-4.4.0/doc/source/reference/0000755000175100001770000000000014714401715015567 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2471511 yt-4.4.0/doc/source/reference/_images/0000755000175100001770000000000014714401715017173 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/reference/_images/yt3_p0010_proj_density_None_x_z002.png0000644000175100001770000051730114714401662026122 0ustar00runnerdocker[binary PNG image data omitted: yt3_p0010_proj_density_None_x_z002.png]
oQ)A`ݻr/hc5w| aze+xǿ}%eN&3ٖmt4~l6hJcGRQ޲pY-lfnBsW!EoHEkQ8_nu|v1nޠv^_ 1伩[ 8*LzD빞CQEQE9Wߘ)CGRE:>3de&*ńX,}m"#*Hns͠,yȂv~_(a0N~_|kGuJf3J0>;HE.DIR$Y`e?-Qy@8&Y~y;$iTD<WY%Ahcܫ`JutO+Ϗy``(单*>SBYVñZ42Wfug0 gB^SAAؘ!aTXC\}_kEC"չ1`B(bJo(w؇,i<`R5?KaB;+EG:uڰ6LPW9Nz~RV_YSUۘ+;-JjP:$0M(9ؾk`zSOdSsyT?od &t((Mk<`۫9J5Kd2ɀm30 nP7 nc}7/cc2xP"Eff+k#N P*XPNc8ć@\}=@f?r9VőnDb˖/TLg$%oiʙB4 aq$E9t,3-L3|/|~0UO\ &kD{6X`|a&lZCc:ބ JIEQ =ܐ L(ʩp.sylv:{c;3뤧1EQQO3?qܽ{ػ @~/%}//c~WuACÆq%F3ioHJCl;W5%RJZ.UZ4\!X( HHu#ֻ"8\A !:lcL~$&Ӧxn._XTYls~rZ_*H%\)M [Uf*m\^MVe [aIUGݷW?0N(3vF _B  /Aafear'M!h @gϔ񏓔|tla~)#^x??K.9ʯ`:=:QQcN%D IDAT3c$c3j}'%ժ\臉,oG>fPνL0`O_4}ցm`I()&ã^OԟJkb3Zb(Iwف(ڟSO;?3c0AB˔0:4Xl.>KCA 9t?v|^F31OQXk~}wu<xGGk.N6Ƶ䴔OQCl8+ cPpxpuc.ccߗ^!yqi?1I9=AW=δCQNrLkxgz׻=s=}c/ԜB܀6U'Oݬ:Id4VL hvg<2#xPt@l p;ؘXl원Y}zR+9hFO8}=QFr¹KkЪqDiEܜ+^2t^ZBP b89Ā.S_"ՓÈ* &UU*{" j@Xա)c MuEE΄cn-sIL%%i҉٧3KGb]̇w1>4; gn]gfM. 0L%(\RbYn!W2BbPm(1 i_#H=jZbRh*Vt}ngl;Tv#0.M7;]噫h03<:Qk1YxqPK%a-Tv_MVsGzK7gϋ*FkH*m]+\^wIq0!쿪,$-՚*|@hKjG1˜ZQ1/}c%yG_0e_=1=tѤ ɶroQ9X(7qzv5`rЊd୅ 0]f.|.۟y O|`8) -Qz& DH4JYuDnUdchln\3yd 3md2>x`Pڋ#N Y*WYZFTҧp@E|洙,/ %oI*wMK2 .hD@{{tʃgCW/q#w .;D[qLn>˗/^}c-dVj2G,iyގWF}p ^  a\TGVAۈڧݾJP3"GdJck,Imb<쮴40ƀ@G &su3wx$4CBFHH,?)Q,{d0!m++q!cn liqbL1iݠLd03W~Ħ}r]m;A'`gLfK3YFPE/߷7 oQNj8&y.]6n!kYƃmU UN'cJ;})9-̬- ;l.9tl1v޴8w/EQ.ұ;3j-N9Q1wDf/4zw.^IݓH*R6@.v >;y> bobS +5 6!Be;%DQV̾/NjŃmL7{P| j8d;om/~9/Ƿ5QYrYS$N z &4(/ywKuc9T]>Ϟ@+&(4/+߻R(=ji܀ZwkC#Rriy<*Y*JDYNf2ei,UHȘ.Jܣ&sAe[#8Iyژio|"ӀOoh9Id<SHo2UAzpJ$"bRTa(YܵAԻI (&IW쟤R~RNj8&]>z*iب^~pk'{;\q>Ƚ3FKmO5$}YNQk |G[j.~ꤧ9O4|IKyjd_NܤrN-sݣс,{mU# ] 7ks@35 P!"BFMrĒ1&BE`*gϐc &6(* -sC7M޳q};ooZ[3}ϰ>k~F^Ϳ,eL(rMaȌ vfR12Y퉢:Pqz=X?(Edҗs=>>Ojne]Ÿ袋oF?#_*w8?뮻uVi$euf6]Aky CI"ʵa1Ķa k.=2+-p6` Hva_v`a3H;Ԍ Ikk3A4xh4)3 Q veJB1\H-3m98Gk$p6 >iS[_\ ,<_B(\<*7$S(Ӥ4`-EFl: _=g<gqַK.7 E'C9/zыjqO:8 ks=wIYdȂvo9nhC0CLI4A){x}tg%T{KJ~ Yy=!A^3\M2cK0yfk <_KOrdz jn "53e; JL-AEK0+#f5qI`ˆ\s@6f  ɺҸZå 6[uS8}0 vڠ5*Dp{mWMVܣ 9cOKQmnѽoϓI2Í]YԂ㫚:JmO?Oaw;c-[R5N[cIMvkbZ@^^b1U(Pcu;T^V>*e:0Z6G>5<"٢ҍ('+{5^ՔB =XY)xl/H~V9o.JcQP+zYc$3lʘnxشM@ odYC* LBhUd FoGqcH4,ERtΩHRt6lzB)fM5e葧aLizeFe`{kGt1ЗDeHzm LxB G5&EQEYCa&{)F%60\@mc#x<}k!N{k1;s)dCP=ҋ]gȗ0XpVSMҩ\8&*i6V6ꁳƠ^ؚpl_SXc@>%TIR oݖdž0.BkО":}(,D*Aol2/kߪ, ڼ3>تMv˱oV U!2 X.xxwnƦls@q Ȗ!-WQjPdiY,w X dj|yz|I]Ш!X!zoՎQ㰕Liȹ{i2OVƢ;,wMCつC&Ue31S7jyg03.Df!Ce<Lv^P=bcصKjNjPV>*H&oKH[t;`e6P$IxFk0^.αڈn{|-_ 'i6Y! >"w2:33:ŒpޥPU06Pdׇ{7SOKt; _c `0c}S~O93$.RT ;2튜{|8*;|bmhU@aRl 6`њda vl`Bӎݢ27˿eZlKPߘc^C*SP a/Bsl̺*%(IJEV1CbZlq)l'SF/keՠF51Ĥ,]eF +2blEؙ`ch;d#Γ!P\EzK,  wabzYxP#bņ ت^dY$g&-Q3Y8Ѐ7\Z1,jٗҐEޗƌGGҩ7̸l`;|ge-T'=ތoMs =w!?k\^&?x_at7}h ÏoCջ(5Nh>by;H{P& rh䱷BG voN2g@}G|$v7q^`_Kl>{@ ,eFEY75Og}&BĀ͖ăsP6܆tTjPEQfd 0|dd3I6!&= uR<;{`0u,uueVCY!ȡ'e˪U)~{<숼G+*d%mgл#4z,#M\:6:bʨr6)XEb I/fd5% NFs'YoؕM/X̭vѾMNrAx w$A(Kg$2Ef3Gd0l%t)`lF<ø<GpS{C l f^'e\>bfSۇz\(+5z(ގq' A/[[_ė^ac OJaFn+pJɨzZ ļ{a*o2w,7ܣ`_3t=b TE=}h`tۅ''cjI `<Bъ ;| O\SeC.7c'W!df{ٗ<Yc>CQgabLtେ3fF&qB`N 憎b&S0:cc0q-߭{{((}! `CH%6t؆IOq0>Y>M'c,Ws0DlWQ3jX=S`BRvzzIY;},Ypc)jc u"#d掐LXJdKTIt !GR2s{_3zL&F84q0!|ؔjI*aU2ci~DO;kS-(%]VDg`s"y<!|0:=e۸q =%"+SrAkãߜB$wTiT$=M ω9S0MNjIh͟A#w }!YqA:Tlp*QbؽRPG\Eg}4gƪGBqxr nIڧya (m Z,iʒ'aOMG~qC$ Mf#'sQ&9څ|Dh=$ L]ՅǢ$$/%9s%)DL\ERu녻$ϼ$ OvjR!jX0w2)Y@蓹3>0_2tɬ. 
X3xk#t܊f F \J7̏*>Qzx,&) M%Tw1O=Qg1FURmg9n}mߑV3.IT(#=1Ulj\b; 6J Lc  .A8Le.fс{s`QEq|\'݋E>WlЫW &PjELgQPJ2Pzː& Bf0s^^CG&++@,KYyCQVdrЬTe !4 cB vߪv' dYckV)}a.Y;O0xrHm7=S)dXVEQĹ3LaS ,Q{ hu$͔[Vz(PO5(#n)헰_"⃜L*GJ|Ƣ%Z0a\^XS(]~L*YM՝KBQﭦzH j'<:޷y4gec ݣ/#%f$ Z'gEyX1kuGS4^0 1#3hLK.l ( O@ CH}68&yS*ϖ'V`S r3oFO * -6>j֍a1Dy, dAІe|$c m!j}d&{טu{Ṣlܐ2F!*&T6v15vċq&IYI̘ϽeK#_Cj, ]j@ "4 HP ^wF7>W:;!M0 8\Ro< xFeي>T">C;j,6dd@nAiNq(AARXIWc/ϯh@5~u,GX^*v~It/ޢA0NqaM-LaS 08q4g@T#9u׌2JҸC EQ FR} |G=< g7~ώ00{DQ=ntM;q]wa֭=PiuLTKjSd:{)5igpF*nF t0O157wGm]6Hvv7EQVuI֪'7U~Ig65)pv_m@B[Zw˞f.^'' 74 SPw7椰lP䲰:+P@Swn{^Hsu"ܠdYK 7܀׾5i fƺup'#-[`dd "crrv]w݅ q__3ğٟ%/yRGk96ߪ^L>c0L`O'\10. >A' ֙,^/LT@)=^ {؄1=,(2glaS (UHV vz*gz7djG<-/[^]_Y?s|_E$8묳pYg>9҃ ok/}Kqg8?EK=eB>ODYR3&0,ų͝g;ЖQ;CAKo.%s|NLŇEQ1&hu#5n!qӀbJDw& `wWUu{oy裏G׽uسg _~9+y睷_zԐ3fo,ݬ9Ep݆*s 4R u@=Ze0۹#3H؄4 uiwLwuƗ-\;Ic*- 6(^jLcandAfTZ]lfT[6ҥ/R5^!H&˳SfKGӆ<",kާ,\.TZI.Q[I 5T 0dmx&]ս=њcm |:VM)-"yBm>lwQKC =dm{qnɂEDzqq{QvaUS?y.>ebmd|9 W< U6qJ_[O|袋\pǃll ox׿W^y%[n?E)oh/i-eA>xQ slrodlxM W!c&'5Cc)+ҥ8>w Gm:(7bۻ&4Q5Ft._i5`ݥ嘀OT5*d簙?3B =DeG-v Gs@^8+/j0),p8# l( sV&4W =JoydkCQ#>@ oἭ"}`|zoϹ j,.S5 ?1̾YKV" GldS^Ck>|gNCY Xp-&3XUBap0sk@8 *I&.TUpɓN>dEy&Al5x#{ nʤ,%F]ؗYEegU4)M qÆ)MsYrV}tnzWV\%s侥unt|!i2AarP lKvznߙ[{{>=y<؎ҡ2Iy:օҰdxݲ^ /W!?d`vV%d̰kd]oM} shë Uvy@qC1Q,ɜr!tTV6(&6I78cHQEY]#X ,-uf~ḧ㻠馞0%0Q$_=n݊c9/yKwy'zva8Ӗ=e*^n9L'I/g { QI}k3l;,<ղ++`xb޷O~dU݇!T]aW'^DQeRvf $/ ʰm6mLn\Gn;rϾxa{90 W B|F2eisR1Ptw_< CȏDF@hK@ym*ߗQiΔx~W^y%~a^}C%҃S;u^阴֔áڰG2Ek2b'5:f*&J0dc`tش &u056SmEH,'7]%=I \ʚrYnbt:&An ǝGwq&W/aRqea~۷׾[nh=n݊_Ξ˛n x;|:=EߨxpWgW fA>_<֗g9[EQ;>۷c=-<- 8`|-/O~y=8G@x8cpo;w.O}SKG>9|Ygc-ݚwJ\MJ}xMviVsL3Fe);f1"Y)smXy} oMAmwpF\Ki}`Fc3\fIfRdKD2K,惒سFGgt<V'+j [ +0ubRLfkJ[*h<Hקb$搰"P+#>kҽO'~Nx,^0vc} }[l%46k+{Gua-} BG6'4=] Q)qEgC4O``҉M[$ ] \=at6.b1vJh.M޿7 >ԃ`?#;nx)PZAwcsY2FFFJG]pg/ZGy$*_5 'pg衬l;/ԶQ7R=2ifoKKF#\c׃Q˓1z1ep !zoGV/fzpP ٚ]?"8BHeo(O!U]PWA%2hP[׶0iSoqoqQg8}Z%Z_Q;"e&eVGoNE}f̐?ck.0vhn.;lhEő}y[P11ngN >CZvsծ@vzrN^gQTV}d\km6)ۂHBkpjc 5zp%%9\>B!爗OfNk9媹[~/Ʒm_u_ߎ[022)wyx[2zn۶ 7|39?^c۶m>c=o~3?iPCQE8p a0{ sREQAft,K믿~tMxss9?juVuY|feر_~9>O`||Gy$.9qQG TgvmxXZtDRQiic0y7(է$n,,Y Eal;Nl?j:êأ(JŖ,{vt(w"!p'*1o$Ԯ"]!T/7n__/kxɣ>s9IZNû.|??̺}y{8|:3P.¥¼Pucg:ėB0]OhqVcikJ|'t֔<x& (2hCj/y6@- #m[Vfu>y6U5 1V0m,4^sV =c|SWׯ{x{ރn7 aW*BD)kGqڽyg?Ë^"< O(=B:(y޼EQe&5H: JuEQVoqhhg? 83g;pQGa۶m9眃SO=ܦ $ N9唾|1>:o\<9xի^xq饗K= QȖgV|IO3{T3dT@Y⫐L+豟wG;+PP-ٻJPӤ0-bF|+$8uGEޟ:͐[KA\l @qEYay=bSf?.3JpvZϠ=vGP(Rr6$Y\BV%BBe':%3DĴ5Y_0}x>Co~8ۜp ضmogx{śnя~/OOq _N9Y(v^^j`AQRMP$ X0euWf"V*Y!C;.f7B8x  sR(HrGL[&YAsNl +DT 3J;4zcccf>..W{y)QOEQ*̈(VwpJb|eŒ[|fHpIU+abj}׉e W&,gtkH?sX+0略Ҋ2:&(&paBȹo$<ƬߋsS=}U$4r.VJ_KwW6T]I},-#&^1= JIj|cb{j5Սk+ JNv[iQB]*7 $XP5e0F7ˊhR0KOpTɉǒ岛E%sϝwK=EYz,e/4(($կ7ߌ}{3(xvmؼy3^.Q 3N?t~ r;Kj^ d%)EYb Qx;o̥((̗w[kj.pUW{w \8qM7;mߎ?K]y ՔlaZ&kʰЕmwʒ˴r0>-{Fp/U?%|pSZAYW]0e}:rNYYYv?ZO6aR?뮻uVi$o޼_җpǧ?iZ-p xIlj']w{YXW'}Q 9ySƳS @4\C]:QʅP7,vH=/Jo>ФʊE!\w? 
|,s8S_Yk|0N;4˿Km: _=ox30::q\r%8a9 K_RLLLdxoߎM6/}lc p_a,9 .x-EQEQer-/ǮqO~oZG9qW/R_7 W\qFFF{N:ik8w( ia|MsS 0[*`f‚R`7ގf*]8\l!E[:~gc-$`p =ߣPI Z,\YBS % ,5yQt1S)i&:F*(`Q^F츮&yu9ܭ[Q.2s"zδ*Ñ-H{\=L&SAZ"$ŌH#Tr+1* cF,4a3=$EV{ Q{W݁x\n`;= q%)t3!]LxLjl/ïk|ǙgY.op饗}=EY>H"Jd (Aʲ58A'nBPт"/Ï9(k6H^.fk5Wk4#v`^%0Ab ױ+w/4uvvs1#u_V;t/_?SH2`5>ƬM80.x͛ IF)(YZXB6iuȯbv&3v o羝a4/n m0 2V+{K4|FEQEQUÑ$æ-#z(algxɧh, wuWtnfaQ(u(!Y EgKQz003 3 |&ծљW2y4( g՟;i @$ە\K]<\͟1SA#9FH 1 @-e)^k}Y] 7&Q$_;ok7?(ʚ㞉Nݎt֗1EQV PJ0t(ʪF*_|1ַ뮃zֳp,R=̙+dsŒ$q"FVX}w2;4wD2wT$a{DaRTv+&sոwy pD=`Pz l~o:OǘK_|аAV9$NHrVSE/f=M[l ik[Ța|vWݧV @!\.J%3 Chl(CѾ?1s≯>( 'ޣPʖHn$`i!peоHӲ}bdI`/*sH4}qŷečR_D>Phw%5$xۗ~ҷr9b}T!/tk^=:q L i r} /UW]fmBCo|#o,I fQEQ 2љi; ^ɥ878iF)nW*KOCCH!!KPEQpA~68 ~/g?^HCQ5L }^D֗x@.^ Fkf Lga2zzBҴ*OVf|b-dmgSZVINvNQV gxUhAR|&';ã.2-68pI'//q.ǎ;p%_"//|ᢖبCQKMv2EzAa&IP*-@>gFE'WPi0zmc_Bmc<0Nn\=pց@#9f?Ѻ,EY|f1As+\֧M 4f 8qqW ._zr! Zέފ}sg>-[ 'e,:zPEQb֭ < _|1N:$NO_p ڝύyPcMuy) HȔ\qPɇs2YP"R;l:99^Ғ'IAJ7g7䳴Ɣ,l.\˜ryZHΥYTo"|v8_ /MY/0sJkv}6+ĉRce/2&x~ e811B)7M>2)] 3t,y8w.a;:61DT[hefTRf:r˒>+ą^+_??o=C?x bxx{LMMarr> ^q̌}z׻pC]ʟCQVABxIc/rQґ ? dF/:_q}?dc[."o%G.^2с;`@u5 {X $뺳X.nji %>GBBARY:1)_MZ- m[$3xL3F&G03 O 0qcd,k jjG UC1Bm:?n;nmS; e J0Mp$B_+K1B.eX)ۡZ^-Eha}a8Ecŧ>)|#7M|_=l߾۷W;qge/{^^(څ;8&Nˌ(m1:CtV'u>6 ˌ._~EY0|itO{WwoilDTe&֭+_JIr-;}v>Zuxb6k 31B$]}M29o?':Ijz,<'2S Bў$ӈ{!C& R2&mCŐ'_t9P?.Fap=[m} /I lT~ej%lu J>=&rTx_ AݫgL!&lC +QNVΜB9$f{r`t'lI'=㷺{6gN&atR43ԟNUvA|`.k%:daE}h;umR\$9O" KY04|ÐS݇5V&Y"ϯ0. }3iVbY)n1dYC( p)О&-<e6$0`a<Iyyd<+,G 6T=VVjPe^[s\s]nɆr"}} ,!Rk^,wE<$pa#*33Mu|-y3E ;:v=aH߃ޠtJ'S-\3ljalj 33ZSN<9hzCR6\4eég<|w{wt]Vm;-ț2l({F2lTEY&](fXjy%FT"O3їy !/th[I}nȒ Qʲ0TNBFU!n3,Y;=EQl=O\SLM \%so٤A$[A.]**(40Yy*5zHΐR6r,+QK(ہ}GGSpf cn5>poy(A/0v,r.<[疽]?tB6<| _*u 9FzE*%J:`x1sjyߺJ/o{A]ac* GA)ak.+i`ƍ0sާs2h73/`~0='HnQ w}7&{7\YGj ]sI 562ah8qHs0-_SrzXp> IDATؔrJ+[102Ȳ=EÏMb˽Zh1gͣ,ʻzn 00E4!N+* 5Q._] d3BsȫFqB0b3hT@Ǫ{랳LܻSug}33ϳD@ 0BNri?-a4Fѩ8zHن=!@`ApGNm8)OŽg< '~'7B]X%S-[fNJygV>2z} "}o&wޖ ?d{UOﵶ6S˓ ;^H@`zyT;3to< i0<= c$Q{mDІ%TK`󊯓GA#gz+N Yv~V_Ny6MtsFEJM(}ۅ?_='N UB`!-{q`a|\ ½ F>y 1BJBBkb%<<+-hˀsPpʾ ' 8lw}(as |5,/zc =Z 62Xch V}}>⇿'$!i` Fe_=yHt%Q@HJ,kL ZhR[2I @)Oڟ TϯZAqmR/gUB~l$ŕT3{!I?&j",FTb*30r qL#{/efJ^8KM},V!26 (m%QN:!DT0xߎ+rY#0.tvb*& oBqt>RłTb 6/(ldB>RkaËX?BԶ0XF<` ԍ_ڄہ h_Hbwzv:ªdl)5E $ Ҕ @634uÐ;T iQeQ*9=yp )D<\ {qeu;Jt rЌJt.yy! *SJA' Vioqdy{0%DA'>ƻ-D=i:7n8.Rp.H2٤(Cj-֦×T5':D# +lDu:_>U Q1G@%2C6%U@ -<|:ixGcdn&&5; Sc0x`t:~җTHX4nMaYG)ڀZNH{pG<9ˉ,ۊ 0>C\ʙM' tryRBlr6I޼6}& z]8oaV*zZx-"L@&K %/٤{Z[N%كᆰd)23j)!93KIF*SERVwGNe(쒢Gyת,$}m զjQ-*Tv3&=;'/y̑_ ^2]j䄩B):37\v$]GW8\D',fn8gvoF%)QxhT.&2e^@Y C2!ͺBBIǺKK :YGȣRXxw w|=sH K8Vbu(\LQp:&lMe1ݑVk/MiBv6zxM}2ˆ}TT((*rB=Ȧt<𫃱~j ^P5Y\gΙ4e9 Z@T Ylc:[s)3ʳj4L7>_S)ٯ→HH&"ǨArY}K2\'稺2gG@.Xۼ%)Q srvuQҏ )G6as0Hd o1=QDcU>MK('qcÊ5}y RkC7Ij;`@{?>I}l([gZae<)ATjj=cc>Ϧ0>WK褔ϓi4|@%t6q8%T0,.\s ^.;wcrr\p^@48Xtcp݉v(ڈκg}v=ݹ HjPfƧh+1Q#u76jFvm'VclN΃t&:5V8'z: gun݊~8𒗼^x!9!rx`12 /GG2ł{ؽ[NMI튢˒d֣mu2 1W K :;5Z=7V*FoE@t! ^VОVDS 6 P6I6{F@{&Mz1YHlUJ 'Zb.gMbR|:e볭=ܾH`4WU\/~|8_ׇ]E%xz 7Oe?X`Nwp><;ĹwBj1YE2&Pv.sx#JK#-218NAb䒵)=3ߖVk.scrVbE6 CO[8 X$ٝ }sܖUgYq,e bkY'VrxUr𖱮$[v`ᄺ,#3#rnh,@ Qtp+3]Xy$z'0'&xer#`˖-'>뮻{R /| qEs?O}SG>. 
gqp G`a"9 pj@To`FM`fQr!q-qAo;.BlڴXwӦM𶷽 ox055s=w_x##` "!Rwg\vX2<[J׾8tg7@ 0P'Wx*$2RhnTxq ZzC*MjB@ ;Fq꫱/}jtF@`/C٥|5_o̪ f!j-R&`]61 M.[>WU]UO;^ս1[XŅU"nI1rc"ysݒMCp*6,dYL/RCl1ӟNGu(ʜNC I '⡧LDDc6PX:a2 TBZ*emm"F vw w]ܐH !U^W7Z_)7 S$=eH|'{-dp=$r9dǸ1lp_P Èo{ MW%aĿҁ H11%XtpQ?=㝨'hD ̙E6mVs;1QFv1[1==M6#vqF0z@` lcO'ܗ%Y #p܍?}{yVHP׾g|xUW]>z%.x*4*[Yqpǿn{1y(}Y>W1p g[T>khO/Wʲ~P2o{BwRP${h2r֫!:HY%5B۟Q 4oP]IAQB>F"H*Uz"nBh -q1od MGm`ruzlX2QIZVrW6AN畫J]V]TUEm!PxyźDR*ZeeBT]!"=λw$rFAR,fRT#V+ۨUh+G}\5WQ!U+ou1^&U9?`z^ Ͻ];WַX#C0z4Rgo^ͧA㞷s*wq,,Ov>CǓqŕ>hh.&,=g$`AuǨ+G)3[;ÊIu[חX<í͗I34G>OQ:o8PܲrF7.F&. @,1 @j` "iP@"جJQbK[*l3) ߀ʀS+ƐfF3~L~ջWy=00:yfRsA.l9I/%R'zܿ\R/8sقkE`t%(oA9ߦ}TyJ]aV2.gZe\"\ٵM <&&7Ġˈ2ޜǣ,ڵk#,BiF`HP0TjNU 67dF"Q b՜V1R?(B!` ,+tc>{7r}lق-[T=#k<اaflݺ_W~*HN6z@N8Eݓ;`!;:HUuZ;^J@ب i N#Y؁bXp<,^V[X`Bϫ6 FU6-NTa]*[ÅH;+'q澳 ;]Q62vFUxF֎-8])[iL71@$7tSNa|;Yuy8k&M h} f7㺪&ԅ$Q;rcF2mQ([>V,S꒒AMTЉb ;pEᘮr<& 6 M 1B.ه &K|tYBn`!TQ*Pl>me"3S&WM'(ԝߤ7ƛ@Tu@lDgEd-F@.VEʽ*uD &灤]{')İ/aif x|XebGa ((&Li"S&!nYpV4K0P$W DPfJP$m1A$%sa/U $;A:JyϹcKZ]%ᄫ9}eTA g)2[Y @TR|ט B#ZC>:SJK]2XdծxzpvpyJC9%p(=h֖54b6P军f*RtMLkTOdt'De ~(x/%2m<'t^ #M0z,Z %uH?5$ Ms7#ۯAhJ6SCd 1h۠-fu\Zj^d~vZmƆm=]PE~G IDAT(AQ E$)G4 dJ%I&WC]Ԗ 1Br#Mrl\>ن`ZlňMefzA6njs5XQj;FA"[Ͱ> Gc#5y8;g (Gj<סreߧlA2h8ƓA:W~'9}e~ r=dbkZoofO~x \cDu6~=b-n>\@" ra"J`Z v I;.~LMNuZI^d`2 g @O6B"I#E]4" u o$L:vo܁)6 }ȶ/KuƱ:1rV`5Ē{)=BX9L())xaDÐJJ[,T#*T/B=< `!)\]:v5V8ZE/zn݊w 0}{Oo^G \(bLhz:k@dZR<2VgGj l_pBSRZig,C1lfPHnt},ώ@ B kVD +؉hh^)@3N;4|_iV,[n^z)9h7رwbuFe?Pp3q~g8 QiktaوbSYÏyH* f;l/v7ɶ2:״RqGI FPq+.%\rcgJ,;iĥg*<yXBֺ|G°ʂu45bpô Z6t`*M"0v0,3I+o~™ (%Q۫\ͶFc>YU[F}Er' 3nE`#S<Ћc!飜,u4 \L-UloI]vUTݵ as?]vzUPmӨL{~͏ISU@#n<X~}L)}}{jժc=v82oy%CÎ)Hh׸'KTbRP\=DyFE4ZŮ;Yg,K_lᬯ {UW6rqIJhl0s JPAhI]o@3eTw=@ Q(!L-yrV!HZU*.C^laܰdN.s~ 62C0 r[la"kl)(K0BIHC0sulSQzm\N~pP/C5TH]((KPa*s DcP7mq ?\v lMڡb}H~H1?57T1;5SfV|os 쀚v[x-wA/vRbn5t%!͖KD* vw(t]Tloj$ke/DF`(fUIL2i[{{ru`ҋz^@ ɍ*ލ1do~I}8裏=n&lذ]v.!t8G S6eUxnW_‘G 8sq+ĺuggC.F%>,@ოRn uˑ~ XEDa#70l2LXjUc@E(Kё6@+mŠ<87r8vn| I4~Q eȎܮXn$yc/晻ŻSq@@-ż!؎7tDs%:75nrGx{n)x˘(o7o|;m݆|KIP;ei{@ :h;:npf$Y^) !%-R<_2~lQvo%xz@`TL_Z1ZlB-H4I@J@FC&@ waBmo6~8蠃*˧%[FAYtf$2w&ԖcGQ2r~E+kI9pl*4n.W{ ցZOtYld,\uGfgN~|nZȆN ;׍:QYݕ&KJPV{TÓ ꂤb!SZS%f D,@BANIە/U[{bʊ˃6 &%@T"sWHW[ixj {<1 H9B]l~~T+Y+H1:U]4 (= 6/{ ဠ"=o+KҕR4^"epP%%}7R1FfL2Q!@j(ژ9쿩2G`RN$z7O_qAGX>xJ{{M> tj1[o j{ 7wI~/Љzd@k`0$>g8bcm$N6w։߃%cc}݇'?:q{ػ -@`v*1PMʨBV e4tA *ݧUYKn).[Y  'R_oġ@<9y.qx[ fk^;/{~pd7>@  +Z#0Bַۿg_W8Sg>~;w C!#Z &V-tהNPP"274rwr*Tl"lB`-J[l A * RPO,!y}T2N|(h:FBC'r[*-{y]bTBA8bHqa@6/aPJa4#zˑ`UF: ga=!A*ad`Ns}g VATEq&A P`e*1Ÿq3Yw`]ؗsc{&}ъqRW);qlL!2"S_[ʵʫ擪 2bzQ;`_yFdgDfS1`r"5`Ɍo'WXר7I4-›\l` P%*(& +azV[(aŅbէ'*51CL^Xފ#{Zt~lZ!g\1PIqEl^>?sT)x%)DA~.# WȲ_YٯWb+Ou$,YGAPOg4=~By4LU<;FhcԩQPί5鼙D짐sa֭q#Iwڅ;c%>,}|7'GCr5ƃYïgc Lzva@PO8VIMMxK%<'2r^eGt!IbVUjL&|vARGȥdqey_V1HGL]hK^1İ*ݑ\ v=7 o\%::_GO',W <5i)3nJ)vLm>8#HߓuCnW$o[B9sEYe1OPci?# 592zb%\Bx{.vQ  o@OI=lB2X Uo[maZN|`*=fH˥bJL ,Gf 9 z~a2:օmnVQ 8G`]‘GopΝx;9 `=}K o6E^1t*Iq8JD M=Ilqw 4TahA5Pj Ho=` nn2tZ tB&5,DIDA"(bh&p0`fXfPnPFyrC 2q۽]\e!-* iev7-oUlɼ>pa3(#&@ y+Tm]F8R3[nŖ-[^T'?H2ӀMA6N=<8vtG4?$Ð}8t ݍǁ?=CY8# e LGRmnb)eΌ r34dԸALz)%%bFGheN*S[' @ E'Ƒ2D_\lK޲~3<"f-@`vFӅwC%S^1O< &&ylI^ # ?QLY.ʔYUٔ+tfhb:^ 8Bdti0*ۑ,R .^\@<Dh/tH@^1#TeO("ݗDJ0˾. 
3 j H *>Ցr&5MJ&& u3b,9Pllv2͈އbR]ZqE=x1z֟oDD>%9YB(`0Lqvq wk*=9bXbAw6@P1](ϳuyURhrÂBdRFY$Qu?L R <=ELҒ)hOBLXՋRQy}0SEN%ЖB%1e"SkTsc>NUy|(CRvJؗ<'<׶˞r9X:j؏D64>2Pzs )Qb 'Us֘raQ;fͧ?iyx󞇃:Gujbݸ뮻TRf (X&8fN=>,m*Sk+KscC2QjnXeRos`%W\m۶#'?w Y{!'9eA׆2YIpVP|t=J g";=eZJ\WNP>ZrX d^8 B+X I)V[:r~ƌء|6vyx z#RTtHdDlA]JGOmk  'x"!;wy睷% cYF w>%I+;VNŚ';6'23q(??+ks0zT2Þ _$ i#u]-^z 9X 0v44pfHbDy:['Y')jwڽh$5| r[s~Eh3@`GƔؖ5/9mbӐ@`9[7/g]ޱH%-zr=SOO~CO`Iwx{0ӷ8B' QM*ZZv8;cI}冷!   VF{A@jM#@ Dԧ>SN9eiJ3zOg?N?t/-܂Vxs[oԧo?ExBőêN`uT,sC]]$/en;J9Q% (s&$Q7 =G-M%eOJj@vy PIRmp d ,|+V|i j`WTZQ1yGޒTr ',8d-)TP9$WK?Xf2:/a|Mq]'|B,wּ&-m[L-)!_nRʅwiN.#Gy$`zz?яt''o|0֭[:jܹ/x E>J+͛7K/>l޼~e?1 ϜU'Ɔx4XG:B2F<ЉEԍAƺk` o~qNeqsl('(102uuYVf˻G.&㺧\LrM/lWVȬ.8I$`y؊2JT%|aP ]PYLO+ȋ)QS/>U =F &2| q]We!|[^Rk CJ|$Qo*. pƀq9tv\2\ Y6Z+IJukF/^-. kmS@qJ;]0|滨RU6 s󮽿*d6H-;k#u"-׹F)iQr?^ 9 > IDATyw}sK}vz衸{n\p^UzՐK9Bx<^җbՕr]l&ULVkHVq@ 0^TO.Kύ&e@ Q.Bl޼W]u6lPv'_*lقniKqQGoX7~:N9<Dž^8U? 83~;qGwEOOtP(4L R++r'$Q !&d0Δ_H2_5([N-y҅s;h^jC;Føk} oy[jժu>J7|Lxwߍs=_|1.Bࡇ›&\s5`lݺw}7Owg<{q7V :ss #Vx޸Fy_Lİ%hU h9F n)\)AILJqr[,udS8-\P0["706uVly[t|Tu\62-$: mI!. l6N^99U+CZt!8ʔ1$i(ǿ+pחuJ涻] fKPoY\&''WvvNgd<=9p +g=0lڴ ]wתu]]gӦM;s3&RH1>pv1=H \<H_И@ }gV= \d=?%\8$wVX;nH%Ly'<9k֬DYC+Ҵ{ݦkgn6~VxN^ JJP@ԂRk@!KIP21 eܿ^HCl9m켩 oB{ς.Իuקej6FV'*וr5\ĩC\ s!T=h@4T1k8_!5r)yYh\4! uDy[]PW=#) iJn.˱X,F7M8묳pW/|!(~'p7㳟,K:?spw///|?'|rqD!DZ`X!7N:u)𐏬D¯vDZ.\̛~kC4O1ڶ/Hy@ 4q{W<}Xɋ_b|+_J(z҈wWOb)ǒ o/Y1|lܸo+<N;4\~^W(+1*=riy ۞(֙9N K/5Re&{  Rx`]lB^@4u>#8I`zž;.X2w{]\r 8 {(5\;1"3y{HU8ZU C!2ҕ$1] 06zbGP-nJ.Cyd(:.l݋ޞsW`ک2ӠxWlPPhHQ8 .22;OnxPN5tFs ILl\%HS e׎)mqبo!$ AEc-z0W vkО@ F?>><8qA XCgxz<7ߌSO=qx~;oׯ/?+y ^)6&Y bFԳh^jhO%~urA~^u3g`(,y- |Bc܁<#AXkюNP<7Aꅯo@`@BvQ&,va?`X2k_r%Zo\ɼ/ǖ-[}/{*=կ~sY4A{ֈjXE`& g"0`%xe s@ 4'M7݄-[իԧ>~qa x#1z̖g=Y.^㪫E|C$k`"Y1iӦ9Gƥ@*a{h땡ǡ@%=0)p7nVn< t78Č<$&)pQ;qej#m0g9Q܎ڥdSw[[0 ҕt}DʿXylCwG Dzq-t\̫5a؁id:=럤]I/W~xP4:BgS]Uĕ4 5g75 Ni<{rSsFzr4@.M*Z h@[N;I\Hn?veȌZZ'Yw g~H#짹`G:_ьbaYЖlZNp 8puס끙7 tIk]䡲yz ysp7 N9Xxߏ7к㤓NC=>sO`PnPAϼd'qOGb|l% $3Xaao0^Wʕ+q}.a}-djw؁{7x#>Ou{&&&| 豀wq8z+V-#b{I{_9ˣ*77-wf{LRcHv^8 9BV=4-r L{-T25~odLPoqi'Q*:kV܁Yð .Nq=t(q04vXҩC ^:uYaÆuq͛q%կ~573?KNR#7K<Ʃ_ -tTqQhf L4e^7+$7,fv8uǬЗ1UhiLR8Q`Kӝ20J8n ̊ lm }Q”4'Pm";`=zYRC۝w{ diѧ'> q75xXz5z}ѸI0z!(3&u)!XnÜv-%d= $ d@ `pa'$+_ >4NqƇ?a|k_[6!e9`v$tCmpCv\wZZ+0suGx% YDL=F gD6anv#1(y?|6Hꔛ9H#yV(լ56 C`$ڢ$R>6",#;c 3הP(ﮨF]ӂJIzw-eE"-sFI:w׳ý*?I!.f=U\xxTBZn_)g}띗  'c㽇L/hAԂR~ԪSgӤ{-6F5vG䗺EBH8Sgi>c@#oyCC`YDؒj4MQW%tU2TCk~P_'Y"Q,HMfjEi< "T}PJ1Ss¡ Æ8=ahp=oÄFc+)؁c}L(Y (FA]!vi2tE^u5?~/{;\-{shB> Mů 5Y18'SKiɔӚ 1{ge&Da{Y=i >G13dB `_/c.N5% $ndL`[KBN_Sf4.=]x{2 UY(ٽ{)gϞuBQ/`GmP4 &ζ+ai"럟Ze  &6L f!z)2zȀؤ@Aq65;GVEQݮg7+jP Jc~L06@ރIQ3l*VuIO֯b`E[B.Mŏ#*UQRl-' װAyHJ^IL#I_Ӿ GKrv" ;r؍\Bd2Xm[ӆWW *-BRUiD# v dŁ+Ch2r|LTQ@x `Cp^p0ܔN$d =bΕ=eA^-Pt"z>E*&W㼜Dq-4Q8,if\5al\ǘye*MKULLec? 
b~1@UZ_^7oNbêʭ_ )nyB쏥?*QWbjGkd%u_[gv>ޗ@ho8gÀFlX9x8w=y/T}x[ߊOӣQ Kܽ |$[v=Gop-C|Lz?NSzCR҂U#[6p~٠Lܛ헳m7 v369G՟{aĮoqgM#<ʖaqb튄v^(Wzf/}iJw߉Z6ߔF!5x]kVQJ(d@a00d`@gqM6VIlu7jcTiVp@,DW9@| pj~IevmT~o՗U剑=ǗcX: bddX{K95 d8o)Scn| ?~`S<.$nW˗݃/:|h2,kT,--u0<εcu5Eo09m8z֓a`Hj((^Ym̡i&܅KcE[eW>0 7I`׮]xއ}xk_7⾿[E]űD6b7mb2=ͤ݊(l\,)ߠ_>J|#g>}o\s5آF 24[Gds(' \ bl{Tg|R3#کZ ;)i&1-4#zb2*ՑgR䱚s<=ؤҼk-|ew|0I¾2<Q ̃N+(Ī[3;xQx^1 *#pD9#,cNBbGX1> I @@sZQcX:S%`8;XJbC.['8p6z-ALCalCm$S̖gdVtӤ-g F-rTy+%$>>`^ȱ=7q%v2FC󖊑TĄ&7Ȱ{rRm^PJXq5=2$-!9o5ޡbŤs ^kR9Hj~_tG9/K N8+k s Rx>1IppD}>T"ycٟYq+98z@[5zLmQIruXl|W!|Kp(a.<|qR]<,+3xpтk68EcOdR|yrϏÃ>~݇{_lbNeەxvcXA.!nX0MBA"20T"B-Aw>q` +8;& o؞:z*/0>5hOa>J /Bֵ,g̅~*[KmLh!/1Dp%6~Z`q*F2NWmC0TX*& eJ7Ԋ\MYA2K6=xkUaYs53jA-OP]_O6I-Hwb$=$U9U*Wc"0"^na"C/~CT)ybGXK)^ނ Pobaa9X[edVペE-}Mp7x?]ݏgfEG~p]Î?MqCӅXrHZ\ãXbanǷ-Ķe,6]ّ35מREQvEN20_ׁLVQzֳ𶷽rw$zz(2\!LESwh+|%N6m ]\g!/ݦCM B,),MF6|,d}.&9Uք@gql.y$d㤟ro6g_*^K.Ν;{r >я;uUG=Ex¼!p`bS|PiS0f0p0i΂̅IN:WIU[RC m )V0i%p Vg<"wuAăA 3X۟;&b>hojmFQBꆱ(Jga[`v8?- />5?? 7~7gvGzE-ASFw8u;=60MHluX]®x‡8Tnӕdl߬( .#ݡLW_}5>+ k- |s^QWqd2tWMtQ %]U_7xUs؁٤ǼOUP ŐhY5UAWUȍ@JP9fH?m_2@y `@0a'gJsc3:1Cd&)Kpo.y u =ObkPVTEtbLPYr땐MbRBQ_] U01ѧxɄ2+zotNFEH\Y78Rh1TFE2Zj3U`>r?S#`wf؂ryR %:*~ػwoerIA)#)y.5)%e߉ޫTYThЌ .8xoɦ ة"l`҃)7e븃\aRÑb+gX ?.]TqQgKI2TEޞM>9ƗsDʶag.͕ aO?qKsw0ȤF0L&e+| dcj?#15-6+j xLF=##e5FWDJDTNPi4< >+@D+&+(e\lzAzW,&R*dW)S+&(P/pXhDdq> iǺԮ/*!JCVJ-}WkPxc:qDp[h[rVVVp]waii W^y%_::,ڵkĵޢ(lZȧ*5A#Ͼ~5h4H,3 ,~-1=s##CmXhcj(~83p+///5y >vE 5z(([<ѵ#3zN  g?ܘ% 2)&yކ{fSp}>EQNBahZXZ$N(l9'o &ĩgG1He`leUPEY;w\v'jPʐ9}8< Yŝ6^LIYIPOe2LT]0 #ŌKwiGcfOl(g|z,C#jGBkvqXFgx m]/+ה/'Ej\cBVBT0R/je@u2lRXi$yoN۠N:p V̠6\ř@xej1ixKzL2)u];l04 /'Moy!z}db8r/ݵ(vI?\(Y+ ?I.̥vS %{tG](@Vb$V44I0>mOp_eZu*u5)Ŋ^(CTPyB~.v/C!RUO44$׹Ri=V[mf D jOJۅ? `^*ƒ@ݐˣ#*Q q{>#}/9aWm,D! {:>9pQWGQ6Uz10`zzN]z.rtCQ0ϣ%ւ WULbTEw7"!>uŦghv B#!x4L9\%Mz*06g=fhvMv\&N& 7>KΘ F÷zqsF=s.%fC" 06m3Ds8t ca&:m}XBP@e0{ , F==ƍo^z)nw}Hpo^ptCQ2`^r E9~YQƂ1Ruc{.ꓛ{{4;@E8t8F*0+=#뮇b-{(|` D)a8v0,p03Ia5\s5kg3w5z(ʖĥ &T#5!znƏ~%35yx+.5*&K0ЙҜVSN|'PPrB`vY0'+T >J,}<Ѷ2{8s|<N?twyD`p9kLb=;4pqԬG_`漤_{V!ZK'HL:! 넌$dg6XABp0̅ s2eAh{G:+.@ZX__n%3 4 U6ݹֆ-yrSht37 ט "&7;  l9~kqׂIu46JԻf׋˞o/DIh%cjHe/3]K>DE )W%?I!Ƙҹ D-X*!XhfO PW]8m#HQSiF)RZ&-Cz>Ͽ1*%x#%E\T#ʘ!=&XsxȰ(xk_??X/~#0]W $=ea0a2R6$&'=2oVų)wZuy0d$}Y7ފχjACZIv 9#a[ȲcUe q/z;xQtl=J3{T\΁ I:$ݱm;ʪItdSNfԲs5a1̲Ƒ~ox='UAEQEQEQeKVcJET|pC~7]g<\IA#p E¥{F}T*oL/7mmu7y¼NvikD*PɟտTeH<NqW>̌{PC((( Kq饗ox++18s?l] U qOWM(H쩉ڙ!dDv5Qދpm٫5z#@L0P,;YPD%-CP@D[m7-UKȒbr'w jG{Yˬ$܏t8`b{XC/Sզ~=g؃)2PK+5&r ;LɻBWQ ȴ!Rv`}<ȂxfW%Sl塵0\4Qcnobۭ [u6gԱ$*$dLAF=5E +dr3AԬ"KWyKތ*5C a Zuc[j)'^pU2;z a$O%Tn)ޏUmUs֫4ƦO\yWxʩ#?# -o:c=&C0i$Q{8Rv5q⇼p0eU\;fa`,ԏ?`UNT:؂ItH`&bJZQ)SHWU Ʈ{0Bȇp2ȰSI27T!FcPBei$q %LJ2ԝ/K4?n kPbEbt2CbV$n?X/-xeYYj#RmzF*IeX[DLS|KEʾ[WKͺOq ?dazһh:`vl-; >(K ۞уIRx sCdץl2 ۺ=>m$Y3ch@ބagm0[Xָg????j5j} QWGQEQeĹc *) IDATwZQWGQ֝/:y$>9ʺboo{yܹ3Q G=&.ohۍ_ML]ED٣CZBOu6DlU{#dQNDp_^ǂkW =QE-: %^hd 1{xታ1Ydqȡ% Yw腥o[Jo*B)k塇 _B<˿˒478Z 4S# #`L,ѐ?/$Wi|afcZeX>P 2ol ]cQ3`{aX }e a#pRN2 4=^x٘{;B "u`*y#>y?R(4$wss8|pi駟|طo>eGg((1_GTfWQu6>Ok_6o||54eRzn67Xb{<++'J(du7[v蚽(~u+a8Rj`W(1U#!%生I1Zf0T~>c-}# {Ve aIBWLBegPM^KP4UtVGPcOɧ8nU $gPwYQJdi@~9^U(6)7:5+T̋W0`v,OXeP'„}"Y~ZJf.͆K˥c-.Y;?@cVQ8jg>S܆('le1'{ e@7˃ u_C/Д Srl($%ȃlQ&ZВPy)qGPRLn$|m^t95H%qd?&vy ]RH䚤raøxb&cw@у\*5'SJ 跱@q5*Yw~)|W"݆eA7ϲ XPU~꽷+ռDCjԦ"</ܿ$PYMxI&Nl҈iܭxn@NY;ɸ7,Y8rĉn|?LŠ'dHEQEBͤ{?EQSw~w=mH5/Cصk=/}Kx{ރ/0ꪎ5z(ahe&y~wF]5bA luKGPѣ~}Ah,l5;VlEQEZ'N`۱(TE/2. &%/>9\{POe(xLd^=%EQeîP1G?#(ʖK.߀qAٳ=&Fhؙ2'T޿^pt d!:wgO yL_z{墲JR@Lcc \W|pu3sWVVTB-s{HW* o KJ@큊l½ e[ˬN JT{ ra$>;;z­&6P]q+kLd-6Eg6xL_BWLxdJkW(Pa^ 5%+++~8y*TvT2r,}R_RSV"cݕka DIrW 4uUR!lGmYUVST|V,T>a>.ޖTv7_ݛ C\!~=yˁ2S)DEV=*4RUJL5 u! 
'^Cil aY[ KeKVԆYˡ-.xgL";Ue1@C'V<u5SEQ&6+3+3[HF)(&2 0;3'N^( _'e_qH=VCGg(DBރ '/g-byւmcU =@DEQEwh0s.ۃ?38_~9b2^Ck7Ztt6"υ :p]H2}fKi6eMH}BK?xb*-{`6lszF h?ү3 Kvb~n&ݻXweq#Мyl !RR&[s0\໦ToK΃ݓVI/$5ۨx?N-FiPa[^y S2tp: y8 rf ޳gFM®4 ɲr2Q ׷Ue4%CSŽ)٨VX}?JU.O@,V[')dy H6l?m߁Gћ&ɠ~D6"*cþJVա*L&{Ngndsyc74*{w}8̃{^s9îX0ItepcݙHwceil;=lXe,1vl)a/KEQEQF ;{ k}{N`z낎 g?7I β)ǰ&l>oW]u7qMGzzLy(C-{9g g~oyN`>h%V'1(ެM E fþ4hԦĥg([5xIp|V4Ly"4x }[x  - BMO}b^y%ؕeL/nc<={AOWǕ/9>Uz10`zzN~]e]6Z5zL2E{oxL-nj9C2>0$Y/7x#^CJg!Xz<]exGtPQEQMEn8F%%7xd [ϺA}XaeSK`yoQɸkp5?38gy樫5r1I( <7"Iσ0]'uY 5YP'*USHX^P5*OGt^\RR3Kx))&8T9PBP3ۼF<uCl'ՕmLY ٶHf[Mx“i0{KH'&e`ZRJCQ 1 f Xb_,'{G Gg6֢y_NhZl n7UWEDMmpSxǯREHl-!&d0l6OvP<=SF18w8G"k'5^DJ%u\g0#aa> &4*+/zW q%bFH~y>%|gdy xaVvvJvgЉRbR{ fz/V bZgPJK.nT  C sŹ;j jL |Q8@`#Ft68 |L+~(QǝoY|oSbjD}x5g $٭0HH҉6C44EEHD&B>m1n{.Y#OjbVj %=Flt&S E)Lmy; [].i$-|1k]͚ndBa /}ߑR C(C*$]Z76$ֵ+<ϲף{ˍ-RH"?=p[Fkm GVEO 蛰CO*ߐ?1G3no}}@x6q@d\WT <,qn7Ik!=4uSo箨 (Vһ4.9WhӮm(K)Quk4fIE !d+0%n7qsuv`+FEٌP BI Q"ўOwScAՌ(R5>хodcafat7BCEQFwѓ05ss7lEQƏg?__bOO-eff-]ͱCdaqe#>>`w)\3&eTXչ߅^b>HMHYXBk [#܁F9ցmz6K;ht__ą^,..6;voowU 5zL#Q\La(+1lwdGF̂瀺Gw&r >JqéUS~rk!1 g3&^Z ӸTx}- UfCiz .0}PYv jT e PR"<" n8(_0x (5GFca2r"4L/2lF90˛>Onë^.GkGϩCQg9gamť"VQDǾ} fo&@J 6.MS+ xܻ$?)>E)%F9IddAPÅ^_Ǻpl&$BcC4YgHՋ=V'2xyRӒ;+OԌme8!iڼhb+iAYKi3}DYRf##%EJK: 1э&djeW3'PN~2URA;ˋaPɛ{dbKU5 H^XVXHde-n># k[KXv}nNԼEP Ke[i1%s{!`zj ]^jm]Erge\Zu]x ^s9j|P@IueWX JeD%d{i*(|8yg*In3 Wvz Ib$T_qA8NP#:~,xL.8Ao;pu 92 t14H->AGH G CPa* '>uh{7GL]fB"iY̘AI\f烆!9-{+QIn~z9*w ձ2ȏ{j!G]=%(.I?ҴKöS|bxT$^Vo2L)ԡ*诂~ & Rg.J(FBm7Ѕ,swU =Xv,P/kp;Eq`_$ccI1Vo*TYf< a0̲ssshZ8rַ:=&.F1U6)ŒUz sL9| ?}mSEQCAS'BjOm FE0k_z!۷W^y%۽?iXs=cϼ 8󈄷nDd2A! euNdUX~a\uUW-nfz뭥PQTs$c?.0eߨ_z*@< `c8..D-,Halx8IaZY*|ʡ{kk`~aA;!|vQD6b_4lFFƬ)V/Ҹ%!(C\9Wxu!mZI2%01ql7Tq,,uC IjTy@TG'ߝ1wxsvzːYXXW\fƛ&}h68x ~J}5zLOGwzZuJUjq0+@+YuEQeS7_.{J_+cB&ɛO](g<G>UxqOO#h|`tkѴqw>7KԷ-@9_OǀC#(lF!AҲHZ65L@fҼf܅ {|Rs뭷=yO1>OfM+p_x"hyt\uE.,8t˰x pcN 2P`ZqL`xBRpl /OB0Xjdjnƾx$ ){ g{l 3E1/_Pl$t!$\` t*I:J7v.֏D~,oH \mQ)AR6|HƓ * />;?_P9I{lR(? IDATDA)4 }wx*][YF`}=WszPu5X4OU0|T%ET`gE͞oyoWp- 0!t@Ű X/]6 a.Z/;;˜ύǣ˫na1G"#(s4igtSRJ@!w;p~撬4 J{stgP%0W^^ qdjj ;wZFc2?'A[Zĩ<*(-Đʦ n 86Ԉ oRs7/q(cyyuFz~M)/[4U5N<(2)MgNt$0e@Z!5Nqlǡ'4EE@6=e#E9)dF:,|򓟬;!hPOeH0p`IS'\3e!p@,l%tV۸ҝIٕ-JH[Vfzl"R n;{{!Ҷ cKIj7ZӓN<||Ky_ۏMҐ<)]lĮ-ala?A̼ȪEWiQҗx;W{)=?o8Z5z(!VD p%՗a!/(-kAن66+\&:yzF%@si HsxC0A̽?Q ךEcpj\݂xȒ]v=^cjj V p.m_7p猸A[ёt 4 KNIQeCY޶%w>7wˑ4R/oJq/6J #y<<|fI=ŗ`yiޢoxpmx1??$I<G𖷼eU1xIk w1&IΛZl`iubI<:Ë 11˂w w  װ}{-^"c)QzC'@,@fV^.$%% ijKLK-*$DlLlpL+خwl(߀MzOn鸥bair ޿'ǤN+^[A@XN<꩷'~v{I8l7g_u,+?*ncHݒZ0JKmZPԌ*sP"X6M!Fnc4̃OtW@m;+[;J3$}N#4QN@|Z7*sl XܶbL*Ty~؁N_T^T ဈ_ \qxGCa8F]F ; &Yzl\f|aWQ[ _~:XAZ6x.G(T>T5:+/ l6hnØXx/NnEk44sNo\oPwM1$ac @iyno1zju*cHe7}S5)uN'zY,XޓjSϏ C@]Fp^!7"))z{&ĀM oBx}.UG }t|5١vDeݶȖ&ꪌ%jP1ʳy$~O 2! 
9׽DS$„gGIw:Ey.);&|&H `W.fNv'oJự꬏hv~,ͮf'5x4:^zKXpDֶH~ ^6Ks+=\|k_î]*{^}CLeO5IsσW_+y.~P;U~R"ҿ2/D[JC>yk+L LLB>h{߆F.Gadz?m3멷l>gcm3o̹>ܧ}l @aw IռRQ" nD ]V7-R'jB`&*1-B)tZشc7E;qů8}^sc\kϱ^?s=\c}Eͻ*{Ї><;wE-Yh({ς``{GEQ}H=sAFGqMke˹?/qq\s58z((y'nc+ zد[ i(g̑锻hv#ޑZ εDW嚥H5JAxHEb7ۚ2pb ]L'۴.)e|&ޯ?s;$gfWy+-r2-W ߾bd7{  i"H5%"k2< &/BRHO_~>=N!+ cjxH.M}t˔L% D; ĝ\G}CyiZ 6 ɕlcː!s.lR9$h(WrvS!hsͅScѵF.:GCP\[uV@\7|R0Z{~yOY;N)HZg2]<e?=}rI.}k ຨaoo`f<qЃOhcy5(&@_Kw@0Hbao]I`C,ۮ!A~U&Ǡ!+ц=AKRD ylPWS G&_Am%= M]d,&C,=^FInm!_ ;*S,eP|m[K gc;CʥqaGύ`tOkC E4GnaBѵ(|B5ia/,tg0][dG.lu.pښn"(eICHA8mzt= go@.K$ B9DMsN)+}%j'I>ر%^|:ym}Wˮ 4A|~bcc`[F\ŠGz]>^*XtU r];K//zыpM7!\N>[ne[3Р(;/= ,Cm\ @4D62XdJİ;Š`EQ%Peua({K.7s[Ԣ=]Ag6_|o_!O:Iu-Cv[e-o$4޷9qA,Kq!~3}#ۣ=(C* fg|F (oC 2+ c^OBH!keb" 2a\֝Gʼ`Q (KLur@7xl ZР;:V %0T}2`2 |#qY I4Ku0DQYPe%&7"a8'?*\-2VxEc8KPֲrū_ 5>+(.Dxf| GN8<'qaWz/~kkVt4H)8M ^1p@Etlb7Ni!SbJQLN?h RD,v?}d& Dz,Ҏ= l nk7,P0:mk%M!1/DRw)A>ߍė>_ȥXX0`G9;l]R7Tmg|HKK]/p#k0yh]kQXpR[$ "z^%ա<0ll$q?1,sym/&AEry$z.-^e'J}Tǵa:L{;J] n>CSH9zqvk\哠-Ϧ}FhPE,ΰŻXGlH=txGqﯾ_e|_űcǶe; z~NR'4JA d i|9PTr$fJ"6|3AO|>l#Ǣ2.LBP:Nt\Q's<ЦSs?C!;H->t؃(!~3I?Fug'Uz#YQR@`wp.3>|f@?A;,mE O&xg4WxG17^vj14C%Ӌzm6"@pcrM~>G:Nj=lmX]8N&UƔ9+ʀ"X]+>OW?K} ۶6fB9!R쁤em(Hk!vE= p^p1~ 9^Ñ'l ^dyLM}kZ&:i*-Ԣ8>fMQ4s^%H A6 ͫ =[Vk;qs_6lgEQEQES,9{hOYbE٣|s#<~{;6onww$j{Qd 2/nonT0>e"n)PXQ e gƥ.$ľ!≯؅ea{ؘgpQQ&Nں΢߶zDϩhVv`^K_/׫﮺*5_oo|; z(8;RݠVkgCȇe:#!+[;&YT#ͤˍBQnNo,S?9(8We.z}lFj+$ؘMNuesrLjݨ( E%)5f(wȏǤY #Kv3.9h:n'_<*uX=l\Xg+LҬZO%V%Tn0J"z_>\7bv#So?c51뮻n[;EQEQEQv8.C]e;;[74Çߎ9}4(((ʎ@;W}saa)Dɏ~ {7-JX1O̓~zSsazgTnHj.k3pY^6H#V^L|jmq҆gY]M Hܕ.^>݉ºRǤmn;\K~mޭtF5@z=9(ۧ4d:cb "Fc Qo_{ %-=>.1鷶ޟF_Eg]r{ SM>2G!qu n\Uq=du M%rbêaST.+>o! u\1ҥg;Yn: a)ڎr?A삕WvJ^痮Cܰlr]gWU\qֆowmۅfz({6&<\[dpe@?eRPAEQEQ62|CpYKxqm!=zZnE=ˀ2 P')DM3;?2(âEҕ)((v )˷-={7pæK_^K.9}x{#k_>u=z!1~`ii nig?÷m|?ynM/|a{"3m Р>࿻ŢًpNHW|5-s~iaqy !=uy@うd ʲN\Qvx00pڮQuf+RLiEz8֓";O%b +Ч;B57 Jը\^Cl-ARh?-vK~/UG~/#ҲvSNnÓO>ku{n} ;qu{>~fv:\zOĉ׿>v&4IJgp փ ĸͷL2\l_>EKW`ƧSEQE_m"BW6%l*MgNmњNFS:/sӵ-Sp4M9OtKs䉃zCtKF$ A1 uI$]]׆l·݄t_}LIA q}b4KZ@GY  /]XUP@wY9 ~5. EJǍɊ/Mȳ~<,c2X9< 387er#x, @'n9kmɧ?i_߮ \yxo|oy[p| _jS=~7лL峟l=nEʞ< ™"Lj!qEQ66x1MO7((d1X9sGWp!hXc-qI*o%NVO*1f{ط7Te[׬>a.7v#АWmCkCQz>:*zMǙDC qg׎1p`%id?#jq4IL^GI^wZ6Uɋh+ʎ򈃍@3SZ- X%l*ԩXpjL3aO-{aDUIˢ7fZzD!S<&Ra+2"i_:{7 io1O2SM.-}0&F4H2I}IJ|vAUBM`b1Y &?zPQB~Y m e` (?}sYR=L&dÁ=ʶA牜ۿʅgρis"X:-w2=%Ɛ CPy;;8p9K[@M?`0:]V}2~kH3FiIV)Wah?׃[mn_ӵtDKĠk2] הt̅@ ` [P׺c!?8\|)[LgI}V`czuRУV?9a0F֒wjC-֖YnP o1o޵5O )/IɊ=!}T>!' $b2kbZdbPףi90-x& &o{/ޘ_`N_ GW*8e܁';`=f%D"gxrJۉcܵ.~y疢`CXQQbCff1OJo啓 0q>c\zQ+٧f1(KsL}s[}^^^ƱcǦY^^+AEQe` ,_x|v7KQEc!Q}qVd ~ t/})̌'N$'N\s5[7uk![elwP@>yΠjhxx\fBiBJqͰaaeڰX~n/Z-3= %tҙoeAl@;UD4f$vʩ,2 цKuv\kf?YN;Yd_6cp< l/oxyЧɠRƢ.vJ?tqoWb؜;:RvFVumM͐GEjcbU\('.r~nl㺟2UOʤ'Mh{>MK熓 Iteϧ6[ywwmo{['x> ;7Ms]9ﵺ(Js8!y((0y\!;#a0ʑwG0c)yM7/ps|3}oCTٹh~&)[̳^hԊk\Zq1}J:ZJhĆͮ؄l0?G.0RE\OmܱodReX*1*/'GTRlA)FM`2 x)kC;"w̦Lk&,SؖW7LVZ(XpH^tNo/Id3a_kA /i v\ ],.ǕIhH}UP:Ev=l!]Tm2%bMf{* n<[Hv=,QI.")~s.Ȑ}gpwNy,k>;v ַ7|3>Ob0{G?Q\[РǾgH[-/QIK^~oM.8t!wLz~2> |><.M"ZIX aSJd ! 0DFt\D F8ZT;$MаAt=I#Y*z0n$ك :4\AZ-uaB*h0>G /X !6Jo͋o:J_Kt;AYC00Lp#jGZ(- F?2j:@z0Y-N B ذR,3d!ٖ6eyJ|O##8fs(y 6< ^pIaoξ6![!D[btJaKVR ez.-kHƀl7ۑ ٲv#vҸ'@bbpC h`BEnTIIvM6 pXDp!`fܪ2ov=tc9Wя~p7. ǏǗ]w.|#^*8p+++?oРc 0Hk P6GQݯ,=cj]((҅zsoߤ)[=L(HH&Id=@dz:lTbYB>N0YBF2# ^xseA(k1X靯(ęqB<*6VD;EVAe[!˜#SS`e =odڼ`"dgjg{(](P-Ve"̫w _UZ1eH+`wz۰i!UIqa /v`'{R|EY3A'p;Xbŀַ+{f)7i*1 6@n=x+9TLe0Zrr.MכȽee}^y(h'EV'xi)= xv\dc[X ngZ[XhC19/k^(8F1<1\x?yECKDQEQEPv]p6 ZJ(¼u-2p-~<EQEQ6 z'`(QOA_˜E-L0{ KPMft]\(@Y Wjks+_dE,I+K~y{1gkMG ,}%9(H) -D:æEvs8?ښKK _樰O}N]˖.k]6d KqݦXKF2q7Tiw,Φbǖ lnf8}N<}k.3{tzzsu.iH;tI`d7bbQִ]IaQ.I G.6`gk7~:=`ɱGXwlthT*VgA+Q|'(rPbf$, }gt ֙@. 
mq9_h}EEqkU ˡfjג  SBUw!ַWW EQe&&)EQEQe~hcy衇w /׼x+pmmhY{/:oĽ;*([?EQEQesM>^nwߍ`;]w|Si9Nm݆'|wu&ZQeCgSKOe,`F à(5=.jS>.C*{1S0m1|`.Ek"Ԋ(h7)}4ơ*aTe(D)-.J*{ zlO 7܀,A;~8[ow{e=z>(nl(YZ%-7oĿukl:a3(rE r-7P:M~ӟ~ppS0o|dW6Q=(]\DEQL(!Oڢ((h&O׿~+W^y%~a|[-nv!"# r%tc ȶ7H*;QK +"n_zA~z@=_R۶Jo ȈoDёYApUSNq" ش KדdJr ئ/|o\^9y4V!w&q U߷3J9-<c[Byb*a|ZUˊ!r#bvup,1mT۰~\aܠHRoJ~$\#E Kb~#$KbspE $](cхYp$c7c"}M9m9''&]iG)K^SaL)mK}\Ҹ͠\O}E~\o/O'sTZ#m4 r솂ś8##7RAK2hiROS@Es=6!iw/!1= t rxbuMDdKײHh'uvBx ؈=l_z}!>Kv]<@#{;k@z@D̐BE#5:u9m|ߍ3}јG﮳_omɉ(hy&pW/4]v?ᖴIQe??{\:!88E (((NZfl^;5u,YMύd4) N0rr?A4kN(QmDsupN_[XXx~㍗ҦWQe+)ZGo{\Sȇ Ivp+9ç:Rie$jݏHc}RW7'U ΍k6`҆iho|1(~FJc-rzۊgmm#uʜ _/'v[j{Jjd |-w`mz (Jy=odOa>uĂ@WCQEQů^6y3~oE2=|{r->>w8vԴK/t !H%\ Z9}s a8! Yai h};fo@ȴSK2MK|T!:^w+EOw^v?I$C: B`1nDɥk^&? ,!u1pZN;$TM7N+֡۞-6kgt fZj5NfXpYt_-MvvF&?]Zˤi(]3Ƴ]صI0Ee;S  $Um ䷅FƎus$)i-n @G}A2F}?}_YIcޢu#mixt1Nr:H^7Hcܕϱ @na7؟c l=& }0&Hx%Jk1@=^rEN4 /K^]r% "03N8 z8qp5ל_cgeIĐ(5SJ sӳHJ<؎:}NzrѢIҢ\0$l]n4γv, sk, ([Ƶz${@k0UeҒ (Oƚ4)'<! N>13s_ؾ!؜ϹMj282r0i td0>PYJ-\T 5Lu&lO%QUFCa ݑWV̦vM^C9)!'q1V nM%1 J:v:<`AG XKK+&[)#`'#'%Ik7pe\hHr:Gu7.LHNvcƉDc rj1c >˂cZn˦axq$B[4o;oxcǎk}݇x (^7lx=({ 6 l f`E((}n0y哀QZQv0$wwmokOıc7iZ "[RMىo9͉㈿~o;g5Hc7\Q13K jy ; IDATm>E.SKI_BJBch&Y@,Te}x0]Rf`#زL6gEQeD.9VƋc?. eC"LqM7/ps|3}o%zoox宮o/w#dbWMӚ.ԯN)Bn~(5Ϙ_Ln0_\3iV|>6.܅೎Z$,sXZvX\g xuh#Uøa7W_}5~W~?877+_JG?JH/7. ǏǗ-ݮMe tv$*w+h L, Hksy+w׉ Y>bȢd]G򠏝XLص gt(0x!,d{KW[UHjM:@aPkg,-^gr05)57e/Ԙ,!Ģ:gQ _1S0 /5W8.o!J|м>]b~IQ S=]+`EQľH g+!;D)N}]D %qi bR O|cs㸊.u_87,͆XsrM7&*uq#dؑgMF L74Byhckq;݁pw'{޼3UD;\Cx#:= K\/ZK,hqD`Op`YkkiS QL,p8R9/J\|gWgZ 8#w#ye}۰~ m ZVyɠY8&,RI"m eE/e3#M.p!.#,/,>N層½>I4nW\oge(=zi8C]~C`)h&$HpJJ=hw99$C +Ɨs)vc@f [MiYD&[tu >Niu̵T@<}_ק F/{Kc5H.p,{ ;E17* $:HS؀yQ=E;09c8ZI6 01lN0 a(( z(;8:>˛ƛDunVq'C?nw 乑7ބL|)cSqlv+ Va|U0L SR@]BSb]Zwsʈ:+XI2KGrZEyhC^J.ꛦz, = e B2ACÄe2dLˇGKLLx@uIj<o[J;6Zo$J[hkxĥ'meS8[|ݥBu.14e%e,Yj}q!.L1>8WUz~Ċ(J71Ytoސ^BM`+ץ>4YQMz֭Tڔj1Z`t` ?yOiJ;Reo!e((4CQMEvQQgpx8yi}i\a +((F]kf!{5xv Oz*{Diڅ5G`N%u4eOdםDf::1%%9brL4`zRve?VvR2-Tu˚be\xC؜!>*{J>LY2L +5.OKByDі־I0,ݽ,o]fE7t2rSYzfPB'l[>`dJs90&^b\dzCdݬrl ap eTaٜò3KL8.jM68[FZNu͓ %bPv1/U(<-{p=NyQe dӫ:6paXZ{褱c ,Oʒx }z>.dޯ4\LrHHe&ʄj6,kN9ITq ,fg̀h . *EN4(8._36%)҃I=Q+PfNd4L#Zk VLӚ Lg,v 1LiU OpѣkC xĎ,,l|*MTVy #E`MmmMp) U}hT d}hd:~^N.=}i5Wn=^'=z h>d[It]yFb53s OJ/q6mhRv>PEgA܆j1+O<2_L8hr0pe>6uG򃅁3U/ M#>AȴK1Ln00cGjn((Р(ʌ7 ^o2eL[P΀0gKXXs͒M'؈K3@-'.?[gS%:C"#9G22> [WFYog)`x[x#_Y2S%;QQTU8f|k߶2ra?`}qo;Ҽe({L.蛑(Р(ʆؐbZv.? r|4\PilG⥆4-3pP4C*JYol1s|̛XRRfw.q)!3AqK [3D|()^<K RBTiK1&HA*f?|Gv1~4H#Yū稼\MV^YBP-A(JG<C?wxۿ{5=O춤|Fx?\w䘇s/.atdc]ooi *cW.Q8#!&CFy0 .t7Rbk ǟM~_1](( zK8m5O_Flzz$QaŞG Ae_bPa{sIA0+HkǶmu+9Cz5ZH9?v'.}9 m8nJln1- Al\o#}?M.5%&/z(lrhoɹ \|bj?DLm,N-z-%ڴQ) x4t9Rccޕ з)JCa J'SBt5H<49&qc6t¸gKe>hc`hd+;_iLv^ )AOahS/".-ݜl 4UҲ-d?Y{? ֪fb`6 Eܴ-?gn| "G :k0jLqa BsćL֓==59Gozh1&.iO>o0= D'>XoVٖ5m]g[$=Iu@G3,pat lˆEX_d㯒ƴl6v8xN ojHa{MoJZe7J`+R=@lO[?* o~GBADhA0` ã%-bb} pKM25c pjn   SHcO$걜T]ۅ 0:+N…d+Xs`*`LpQv둲-LR IsYBUnkzFYzmE[=W2xs~ 藷9quLh'?e/I٠)RZ~.2<|@6")B=7> @mX![PXGKWf 3'4\r /ε4EQ+ή*jxd5wݟ}Ċhco_t#m_1Le!u6 I1\S [~ AC-ɂ7[0 ⥉7R!ɈveBxnGlW) ,}{ ӭ۔z4A*Ҳ6.kih}pdS`<`͑ y-7.*q aS,7ڗQIHak\j·2VH vs6L/W7>A6`+8-v!L*{|߲r{$P#u놶3s7D"Y=PbI7d+L}m9K؏]_oW 2ʊ@-~F50F0ߺpi_؃EH)6rƀxMsߊܷړ͞VEא]V}vֻx[;<[q)dGhD~r'>X7}׊!h:A2P jjPR, Fп~on}p.3`Ta}< Dls2 P)\az p1l`R1qI.fa/bASC ,62rL ! /J \]!jWĥ]S{M˂p*|t*PF# `aalܚ=!\1@Õc;^p/#UBpi"KV.t?Vy^GH*qr]3xML>Z_K;mH_i v{0 NpCYDp: n*'bc/L#8:U%{fٍCv+rLAK̡*vskO Y __~vm{χqOBpdDLw+|[[bJPE9OQ\_ pnUb;0.&C`p1LbkP"= ن=mifx`N04 Laó 900S)M}xo8l2Ҙh[Յ Rᡝ,!c8^]6'0EE/^…YpC b}eos`(;gF1VNvq7}d& EH9fN&gCD /Ur18UŦ53$&D$!_ E;Ƒdp=vTA& . 
IDATdBvH\6YqObf+jN2_=d)Qx⯼Gz؍0 wY60@ˠmlH9Csn^}[}sT{-?vd5WP8q"c;V}:HYH*gh{ɪ ~xK?Xݹ;̂k0Š /H9 IߴA:jy>g_ Cz{׭_e,„= aBM/vjِ@M a5ώe0u/=r_+ iSz!'I# kcK ҿ7e,ҔtYlZ[GI)rj T>BCc)V\i*QTN)ĥ=veY?$ԔèU@iC=eq32 ź-No]\Kl-fY&PŹzs- |?Hh8@O@^p~TShݦK^ V PRFh (\Խ[=qNIX:C@'h\0W܅EjQ47DBd0[2B8\33?h<\?8މ/[*I#n 2mȃ.6<}!2f w[_8@:93F!>OA}PƑE{ jaՊ4kF`2cAtǹnx8)S5w:uӹUh3=B-?o*)#;fKlDLzcPR()QqD AA ?'ڪ)?m_pj^N=qNF ;x_/{-viKz}1j5=y&=޳sl?l?qaQ::!<&ͣb>bلg񉵰vAɵ=v3Ֆ›a`)cj#twl ,^I o7 K?,̋ai6]P"T^7V]NKl9`xIBڧ~t:R)2aGx_P-}ץ@A=Asx~xuaq8^v*bhVf@=cU<6H+ٖxZѮ~]]=։]+siJ[ڗlQBTƱVx:RgXIvx}I_vC4ͫ}*-%-yT qi6YZǾT1a8, X.8bjXuQ*,2UB0cn_ULm 2K1P)W@g. 0<%>oSY s vy/ȷGpFs F6M*'1(f+Ïx9b\O@l ,iKVcC.e.YPq# qJ*,D€e`4 f/䖵5 x«݋G)ܡ+[ԹCX>?y[HFhh9**<0% uۅ=SCf]m}^Ơ`x s.TS:j?<* N5wn+26^z>͗1?:M;[& \eHre/mw!MJTQϥX2*=a灋G.N;(4TMً(%̀`JQ98CXxm=}hKZד?։UhK6Ly3)AO?g_o}@)v;m \z@! Vw n /S޿zN$!B*ຩ?TYf-35rm[B 0>FܴsV^GX2?(Qr=tXfPUvzח"Ww o?ӿ͋2)G'Nj8X>J\ ZŶ"mlaDY!U[O]ƆDE(wN-=`L1 %mn=|OAQB\#nCݼ]oƝ3B#U=u@!5 CP&6u"ΜʔBwNx`5AF/P:2\T}M RPPw˦7 kR_Ǵ[:Wz8szr&-7޹[Y+ |g<pp887M>ۃ^ d$8!U +P#vPT5{ * mdz?Jes Ӹ^A ]|Ys%b| |\/{qIb|qnILyV3628AԨ.ӗ}/}_~L^ўĮ(*'3I(cLPYZ~Kɩڎ|˷)Ŭ6n/,8O;|˿s]w%.\)jjޗTuz 5d#>boK22۽r8nQ̐}4,yx@ O~-.fᲷ%Yဢ4յ+=A,1$4mCDNU ²}ҋR>MiB<7Š~wTN7& uXw i$K:2lד}FuiG_aˍ`,B;źp ۛgDF<oB6(x~2p"eғ# 4=˥r{~rGh QҼPS3c#:ϯn {};!! +bE%əI1?V5X3塣U'HDEI@JקƖgHie Ь"luK%hqn#pNMknUz+wUXSf7EaV\b}ۢ-ethݷ 8UR穯nHӤA]r6J7& PAjӭ[0_#TK˦vIe1hj7G, i Ԁ6V]jKS^<;]c=sdV>k0uպ6meM{a֝O^Dl{hG yjN@ "u^v.`H;*̓EtK=2YoV!z#/]1M SzwkXXx94BH| ].R=*ǹՐ„Z+vKKVa5Knmڕ$,"!^.ҷNV2CY6T͡)es?|џV8zMFew/eK10τ=$rJ@[TH!e%, IүK’hgnJ_HA):VÃ62hYZM\TIf0;,-X隈iWܲ(%Xv7Leoe5\y07|w^Ƴ7^wZ~'usC9$T0^X {~ A8=9b+Tqwޜ\bֶ (;˿M;pΌV,8S5"{_(ȱBV]'c,9Q <і@XlHM?}K~{5:E%K:&$}9)58 J͸\^4la(%9kIy-MVGXL# u<@G6pHkslzx<׵4^,69+ ^O@l ٹ9]ZB= _O;|Gw'=̾q|ϟ~ uӛ -{/^[8ײ-d9*!uX+ƝAk/dž!eV0EҔ&K#FtU" <Ӄ +5ISC>{ז, dY/w2>ǹho ?[Go}/΅[C >Gxq%EXU3?+U}q.zD [F5 vZ[Ь;]D&[}Jz& 8UL];&'SȗR fZuy1ϟ`d_󯕹 &\s"ʧ2r X  `_rX/rʌ(3˲?˽Z0$&=ޜ^c 4Ԧf|YDZ!xBlfn|<3"ς@ɸ& ]FA6GE34'=R9 rYap}P$kû]̗3̖UH%/Y,rw|nJL?v,;ɲoZvJұkp~U,}}PnlpZ[Gr0-:+-%Jjl (m'Fy?[Vq3xLp=A0O@ᡱ~ILw1_>+ NEq "Cp< O|WǑq`%T eVB]UE]F ۴ծqnt=@/fڻ!;fñXaFL{-?-Nգ`ÃCxSM<-\q=RbY+a$x3mYcd퇱{6)˨~&m0bq14 M;-t<+a-:XGOyY݈%e <(  ZD=_Ɗ<+reL{3])$f;bN燅+E='y-yUܼS% nZI|L )]lJVJ5-a_3UCi9-j{ZHoAqǹDڀB6IYI1RM@HidzP t4  q{ XlǹބZp 5Œ!f?EDG !Z9s~xqڐ-3\_sǗ3h$8Eїcet7DMe-yBYAS9RՓ!ӶEI 7Y ͪl=t(dZ-YFG6dOd̖UW"wAyK#k7?yRuM'dW ;#pE~3qAȲ3g3(B-ӽA9`b(ߌ2nL6]F<{ C1_8=q2]g8 Ґ[Bgr3 3퍉:5n^lk󒰩 [P}8iYD h2,b^uYQW*)DHSL zTZ=a ~ 3>Zj4Eڲ>ъUSe-ժ"%;UOg Uhl+ْ%-$[ycXnyy@Wa f#A4bk&+ ۮ P ʄ2@9ݯX_jk۠}rMޖk[yFgO@]욣*yq,<88H+ 3 R۠t]_3]mKV )YyP=涮tqn&\7eN)Q1u_\+e~^7?x /yvAԘQ]"[9h`Sj),vr ׂQlqL 8J/86˹ ǷQ _˚x=(*Os!j|JKcb=d3'V%et䢦5w0!2bV*epZҕ&ɽK_Mm2;p1(m܁2ܽ%.-gXZÒ QcQ \J]~y@$>.nVNcy1Pje̴2-w{'s< ^k#IHse2A4v=2y{}1wm܍l;@Ή5Z|6eYPk^ILq.z8gUy,N(i{4~bLB%&, ӎoj@Ŷ5.ˈKӪbG@9P@3/ATJnbh4/$,*VW:0i hkAgEAI\첖mB]2R 5&b%gLb7] ) Ai$KX2% ظY.CZ̗}#wPxm{{,I,?JWY)`\˩a4g왻{uYl@{yٷٯN#ceQ\tӁq Bq֫o:Bx{8^].BirFf`J*eSYH@D)A"u$2E62?գnTp^6$>Ѵu(H H͊mFmJK#-Uƒ878+46xz?CBFKmu .ь_u!˯o>qǹ6xq窑t2-i=0.,g# į/ͯ/Օ|LMȢ%!X+oI|ŲRJSt-&[N,7XLɗQ*aq.EDK2/AcE,0R3V=HSYKG7>qƃ8W\C^d)M͓ ;<;f:ݮn;C1;zp ~6Vf5aB J]YÒ5Z`]Ab7Xv!=1#/cA$QХ%-@*oi1[-w(u[/2^?*/R8BfZ*B_Df-El gdꨬYNǣ짡NI'ĬK7 IDAT|RF8pǹi\ ~8'fgO1N63=xMp&S{]kqq8w#0~q2C>D3H|?{C(J(rbbýAea>-~RByN(csœ+18hEk#}[I.mK4Ot7[V^w@fEnqZ׆Uy*sg:ׁoƃjmURW_Ua?: kyXFg?dşJs_`VE˹~\Os48\g4.z8qIxù$rjqǹ t:8/} ]V98[IG[)^)S95ъ8H;[ݰsu>QWxti)퐉)U6ԟ츹J^Ғ`xXF^UPr[[%a-"$aSάkקϷw8"e,4G@Z*n[-.im:Kw-?-wQˋbmK;nsK\k1Fۑ;s{s39/|S<̳N<0h1xWF*}9+z5*t.Rqny0DR ZY*zc=ѭOʁ2@i;uI]u"D.@m g7r B0!|$Q\+9us2^\]|3O>+8\g˄%88Ngz8"yR.iXJ l*̀x/[8xW Pa-cYuˆfݶdJNU߃?׎Q?nTz7l8f>(9@@x=j-MN\~YEuX+@F PVVLtQ@u1||SS, %珄rz "E<"ߐ<[[EB^~,uі-e_Z,%DF`%'_UCEa՜s(uѸvlKK~ܖ# [{7u0n4KYs 
w]\Z.̏1vZn.gRP0ILzUדȰv8SNm5ʘrBCTv!(OgL\/KnO0ۮnRaqjOL3:v )-uZQM2r)w>@6lvVd m;۱]qzIp8 52ۏ|ƲSfKL м&t\^m{RJ)ޝ ԧ*uZOisƺ9P)uNi`eەqVz2r h "TKi^T,VZz`-iJnDA~^1bh]_lvnIKYCe{U, \mȴ6;+-ddz/& V`[׼ub֋;O^Csh $}D>ȹpEϝ=b^ߔf ͠aq6{ёJqqƃ΍D1xaZ. ːú7)/QN)" 5 \bv|(7ӿ3 ZooEh3flfxԽKKR>"VMY= /ɳ3+}B}rZLb~xOv dL`3>C D6D]Zu%:ogv\q EEBF]?(/LW]3,Tc*PkDF 9nq7Br BSlݽ1r΂z{M-%sn'rNs z8fPL i4/4mЉ<8MN+88s-@h6H^ro e*pLY(8Ez8PnQDe #WB)`>v` -0812䟌4F,tP'[ʋ{A1MoYd|HbY@.2}[DulRU,J~1}iƲ}{űT֫iL\Jh33-78u)[RA^_ΰEb Ѯ ~N9Ϻ@h_/;bE85w/y:|Y#4߮P.-$:/mwHLNڳ`a]:`öy-Ʊºdqש=DC6\6@\v|3-U㎂ץhW ?]dRQ0 m-G2KOun@Tvc0_- T:4,qdʼn >>x &N7˖Zwqs z٣[$P  ae+[bbl?NkF7-klFPP#[)Bb:7p(`.%c] zbth n0PSeT?u&9=q9zex88p΂jXڨJ)(F7'*O!D[!aKDg鱳"a~>jGIv K-Ǣ(Y(ZOí{JoS A;3LӒ[\m6sJY#yo_ѭp1#Х^c/;OtpڋI6ʳZ&8AH4qs )c,a< 80"|w0탹9[8ن7'9`LxLNYo6XP,v:*:-Z[^]d4[`ڭH-&buUF25"Юvn*kǠClxPjm~l'YnnR%ө(9Ʋ Zc$lKZ6pp(So/V}КVXARf8lU G++ciĉNPq٦i ̕gyo7oe:X/Ae` %Ǭ[B2Iro), Y7^֍yѿ>n*yI00lqG˕ute!hK]v}D{`s6K9@ziRxrf[prpqNAnY@΅f{\bepq#)0G} Dsg?!x>]cK zAs*!/Ch|K.պ *M =ƒw<`}b 6]vF832 ='(vz$5Tu/*L[eN88W@#.4TBE\Y1IՏ3tǹxqٙzwUwb ^j9,qqnjZ*/R(T`\OV^1\jYYJG)v]F;*uSAqޝ\`?S ,>m>J+W, Z8\tfjiip3J?~]GGM+T]|}\ᰌ]LBיH"K (oFcٝbԆ:g-Q#61>.Z$F4mγ#]Tէڥ\^<u0)[XKoݱfO;lJ v3 } j}g3r1]&?ƞRpI]UE籴vYgw|^{Z*`ObFwpPb+>tuo_ҝ8oa-9D-PGy$>3k6ݽŹJ8Bb|.{3KVмy)-qz8s QO_?|ٛ\!(hYEm!hx/<(%Jh;͸9{77>@wR  hPJ2#!q=q/9*Kݖ< $!~J5^ F ${NHI+qgA:9kz88-*YcOX Dq:Dу'8'0U;e 'tҎ7c0]A_+q0a^tCP Ln[LM놩~MbkV@]r.*i@2!^Jpm9 xZLc#miש@[Ⲷ5^W˅SJy|rV1 轤0!ֽ6I1b|whg&ˁhV!e%- Bs,H~@VxB{l@[tuV_ܨ{唖3 NIE,,7Y1r?HMWk ] mH:qNiu/ǔέ{y?CHǹYn>DZOt888H<ùLjs_U0UvfpSxZuc޴ Q&DVΓ!i3`LCYAbq n 蘣!V ,V:oG~ WovK~Ac3>gXK AEGEX!˜V3{o>GUlFBcURf&2-2*xucew=jacc_)i*C'r ƃUnٻ|B==Hiq*.Mnf後˜g,h'LB.~j90u"cZ!SbX={%u8s-PV>g>DZ8\i^n88sz<8PQru( D8IJ9xuXx{0??ă<#ϴMG[52\N|q)hBmL kd2"1"wkTcQSu0)Kr( ^ie⵹0b6c.oD( #bvM]ϲ#TUh\Um39 ݩM_c8Wz83e6 )>ұ5K˶jDKTqYc_6u IDATP52m]l{ϑ'lZ. ;:iW.+PnkFdy`{$]fQܖd:bռBX"#Ԍ*,$,Y@B+Ch$F0ܮj t+U,}qz^mxeԺw\3TW u X]%(Bº/HUѾf&Ϣ+&\ew|f(5Qi֛{|bӥr0[&EB G2iM$XHSmUWKQD8K1_3t \U}\nS ._k#.-8Wz8lIC%I?|q,eE <|Ya/Ā0 cqqaqma\D 2k )q&[RG ƳX"b]nh39<mrXiJYyeqǹx8Ζp{K|?9HNF e3e{FP4шZ@dISɹWA,׾(vAgPV,8[~q .s1 #Wk$qN w&/4+8ָ{s8ΖP}aO~̀L +HRBHiCqqz8l ;2 /MqNW>u}/^[`o1~)VS88-|1n=vn*޴Y&ڶwP#C0ߙ $gi^![vRIG9%@TTc wսl;ڙ<CfP:>}{vRqYbnSee6aXmal{7ʗY&xlZ ЬAX}]QLp|&KOP-Ӌ6 H#5X%v:Oe]PY */6=%TP-;XH|6mi s~X|C|P^Os$Zֻbln u>BE1>IЬxT(k8Xd5X"/OoAX.-!+nl}ĺTiiW0}ڏS%l<G4-woq^8s4 8w~.=LB"c~\!ԄPo>ٳ<qq&AqxR?7~,ঔ%Ԅj8x1lpq9qKrO*t]b~E]U @ƪqpRJ: I !JJ5k57Y{cPCxD@W DꠈL8:)mL ĀJZ8F(vI]n("Iۃs 6++1(MS[kc;#t/j_4zQM6#8eZ^r`_z)fu٥r,}&uO*{@sYivEH/ X >9=ǹpO~$9H_qqǹxq[N$Afs(sMk _/ j|̃)qb_ePɝY_hM%pʒk|tRxLúJ.s ._y7ꓠ_KO!;iX^t)q5<8Rs>27^4Ss5~͞Mƀ"7ጱc>LG繹i߼ӽdkD䦸97 z~ݯv˟.=y{Cuzc*NL|LlxV{ě/"{RKcȮddEkTzݚ>֍kmm e8suQM4sz8siѲRYA9}|k[q`0tqqxqω]Ә.(`wz .Jr4grv)c%pf,ol1Фt$BR>DJ! 
>*9D+ËE)dX)P1Erf6m:JdBbyض ^[y +dqky}.MxelKLb^8z8s#n6 K/͑l.ރ4IuvyD;!UӉ"^M別 J,O'bu>kmtЙo?'4Y/R\tǧZΡO",n!c8fpE".+3!+@,Wi.;* ..س:Ƃ$Rtf)ܙfb;Ym}k2 Ui9h\ݴPy$dUB9}ց7 :w^s vqf-(=f88p0(Wxv_K#|Kx}9Aؖ88nxqۋKHaS`Ş5.rc(:ڼb %}>O656q-?#98y1ix888=q@g98\8^r pV]ij.[5mpg* H.OŶG& CEb~,.tby׍~0M9BQT疴t/OFEs{-%S:_%~P:TlK>l>exX֒CV{cfԶ1!ZF7u{ ޛ4O_VyNX.4Il٤ILNmͺ_s887?eo88Υ8s4pqǹMxqq2BSrO 0ss8{P i0OCqZt1˱9;<8kֻg8\0e[:Wz88 "hK[pqǹ x6PTfD<=YYV>;8GXJ_te9[b:f;lN8gvZ^=~kY(;` F&8݌b湚m8V 1nbrMXk9DIpp6AҟC ڜ'N\Zʛ0tuZs6L# H*ͨIuaM `9ݔPxhN.rXXϏ1Lww@os3wiJn1ɡi!$~ ڞAgeEF3:MpAӖ2$mV/=(f`L55qQe g~,VfleltBw*ǎ3AT_yfgRG.>CuRyT.O 骔G{Xm"=Fj'}Pm4[.߯FIV6i#[<"^tQv,|Om@W4q.z88DJ(ƤA||^8I^2'rz.+\x;=qqq.Op,_9o888b//~^F+__Wf3| _//_QYnDUL.v RV aE2DT5>+GƉ X6 TaB(P&+)|G`:"e@dsmt$DG1ׇe OucQ=:rmcrn|jDec<-wTt}ڽ\ŵP` _W j1FM%%Edϊ)~(KUe (.ݽsf;;μ_U^fzttGKXj]GQ{]ܨ*c^&-|U2V3_AR##dBXLh%J*8ALx?J#*WE\{iatH;I-@Fues,mQ T3,rq.$Dۏ39:,0vζkcʟj ٽk[7zO yS49C^[X=.B\yرc ~.Ν;qm9͛} q9i?~tMy-L`lv y-G sd$&\~Qb2NF|Π*1!ps c3#9Oa9O9\eq'xj8b. ePq0 8UOr~|us"^rvRDZҶI9O"uLb+ Z)a|ʽxY|!]ShjO#TԲ.* nClWr͌r^pȸ^lyV%un(_P(>0мPFpm;uqw[:FNϝ+]Lh72ߔ?ȗO+`Z"XqW"c|(ѦMp5`җ4|n6޽?011|;ذak޽{-cN]fDU^s"L-QYOr-xwM]}O|(| _=܃ /qcdd_~9n {YADlUbh '_+:4tjs&paW4&j<߿Z{&[~~}jCe0r Gm߾pEvꩧbÆ ػw/>uO>$.R~-mܸo|2N0*baY D0ѪC:{ g0"FhzU_vUfa""/HàG8pO=Ns9^u>6l^~i֯_<)?U~o— JJ妤DDDDL;hO94֭<| B=H /AomsI3dfU&R"KsYµJKPT{;R%%k m[y YI ^i@JIC\j 5gO%G׺b0cgM7Hj#ٜƈRe޹=$9eqN\U*S;,qܝX||5V%^z饶ژc=|8WnAez^AJi^]"GevkeUR^3-[r/-9Ѣb#eؼysù瞋]v'6j^.yw>J2ĥv)얈Aꪫk.<x{ڋ/={`ll ^zw]waΝ[W\9W.ͳ1Fțv0 IDAT[$S+\émT-SfE<*){2Ę_TdpbycE}zK\PIDƎ((=Y6tkJ{d DvEn\Jd:N00pFyQ̶Ag絯5+ԲJx0Hy>9|Bn o|C0|mEUZ[[I^myRfOpxNQ~#l1vb}{lx'*Ϯ6 SK8w9k<;3hUWo{Df YxZ $T RaQ];Vki$\!ƻIx<9N?Œ)#:TsFrdm6'J!.W"[I/8ölقk;l~;ضm[=s}{i&۷i{7q7S,O/}""""""eA×e;v [nWkٹs'n\~>VGWU<;?x׻ޅ rJ|UW]蟑h9peᡇ_7MAXu]["]>lطoك+SSSl޼+W\BDD,DDD\1@=\ysN722;v4=׽版(ddZ4uCDDDDDDD===z2gU(1Jcsg 7&\!`Js(E1aZ%;KHbd-zjH\UjHmWӾUqJK(Eq [{J![#֠<9lŢ0buZ]@Td@ZH6/mTЪhǤ 1\NLy)ֆ1خU ItSDx_𹫞dS.X ҝ1z5형u4Q`Ѓ2<N>J%o0O؍ e1n@h!p1&F tB?G;1 WV)Y;r>66Ebт!jA_Qq}߂G$ފDM|Z&/]j1`A LTsGĀv֮ %\"BtyCJi=^B#R@h@":IJ`"'[Q"+ OMFQ}'\(]k;wE6h`X}1=2UDy˺ /_(5M_j 1U#RLi=yR9Ʈ@Dr URSvڑ{l4VJюKr8H}V%2Lc+{88k[O­Ln"uy g4|WVɠMFsϝ:o%8QeɪDl×s/W[r&qnBߜzD"DK*w刅! A`1ٍk#)",^:v$ >D0fQt]cpn&¢?ڪ`!X;>tο~)AqUOrUK@( s.ʼ ԴS-'j&_ `%0E:fc XefK\%x0,q $en `zأP66*eL-by(9ݘ])FbSj 5Loj5 })ۜrl׶kQ.ll2x\1hM`F"ᠲvL4v(xae`dxx ƮDm4J^-H9/Wc\{JQ[Bc8=0ԊdB@I8'yVeŸ_m|n t`rS!ȈL.%xӖ EǺg zA V[vi[@q>'ُ#'T~3"cЃ_"CA&Q2 Hy;SwWf@m<)>ZEѣC$r4QaL!  qӔ`vwz4/\zMU~f.Y\Z)aԦ,7e=W:hp#o}yђ-0A'.' gL`KHY\st IEDDD -{Zr.F\.xI*&g%Lt $Ճ$otH@6-j$?#|;;C:ZI}uh @jp2e,W'eZi;[&s^WFh]󘯺LYiGV%GGySa|u|5&kѾϣ Fz<oJ04^9o -t-Ymh6T|pgȾU$ W 5G$"ZemlҵJ"h*ilV+gM_Fq~:|1u>6jEdw\ -tWk+dmZ 32*H9~gm|G3]ӼzJi?<˜uը"f,WFWkl!w5(Ķ|/mH|J]hS!x#R0A=QN`DDۤ4-j)Z07-3 zPo2l+3@hh&oD xP \jXR'[[DDԯV0x M]3֧&Ku=%lJ,l<9.갖a,y=x4F-" x7$Y$"!ԓx5"\e8"-+2Ԓ򄒈=7i ӓ^ DD9D%w%GDDDA>`X;t.O6_ґ\ӗ̏>h(W6}W>ΰ-T¥JY̘o`7X W&r-RνkUʁm1cypzeJ(U>-F%=!w޿;*+Dy*HTaםcO pjt)eS:6$mF%VKfJw"8|cP~Hㅁi ]ͣKiEzS<3*-Gl9PGUPQI@S*ށ2/Ux҈(=T7|=*`9[c *+S\&K:SX;,m,rHSE:FeW:lCeZ2 Jd TN} ~&orH:?yOf6rc`mD0 ;89ϫsϕ=&_-!nqI3eADDD AFh9/,{#(NË7G7y#]N""RqƃSU&h)3`LȎQD!p(n/ZOy{`e@U! ~47mTjmgV3f0-? 
A*C#p'?]e2af$76B) 4xԇ?ieiu׬ /$ ʇq TcpS-`lx[ -2)%~Ӭ+ U߀^ ƚdj~/Htn{ #+\e#s6Z Pe҇*dJ<w)8;KY)몝gyU*;mxc=S LɂZ.ïŝnQ>_R66[r: Ug|NJ@Z?W MT?k.yA&o``CO: ?s} 4}y2ŷ$ɳ9ܱբrhɕ+ƪڄrӞO~qlOU:{Yګҥ_HLs|-4=1fAt'{݉ g97k%hm4xlc4YH%Z1QX\(m#(sCʻũvbkP='쩋V2PSPNKZSZ?y2 z4>Ʋ3^@ǥV\|)3.>aHX%PmG: &ɿ.Nbmcð1*4OokqQ*ɶtoNSl{7I=XmK=>U2F2RYu""Ķ<cJ7RV7T֡hALQ.U^̐rd3ʋjU%ەhh'Zq=ټ7%ux8#u}=wU vݜjdcϞ=83(}ai'""""lݰVoFDQGyۋ7Ӄ99쉖ڶkCO."V|)ݣC!a9Ѩ(Am->aoV_eFcxI7  --7x#nV ]^Q[Oj.ՠBKΚjq-T @8" vx+M򀇯++QԆWKyJX1͇@;CSU1 |u4߿n/J5Jg%QR-;7tv//H}==PV˞n L&3u՞OҦ}F%NJYN\,dIrIcW&ZD$[AIjm[̪"[S1߀B:7N%#kHΠVMLXOdۻZH],>[IDx&856umj9Oupt NǎÕW^}G>~WUႶK z-"(? '6 ǘ^R?ɏo]!J:;|'#е^˼/rZXz=ryǯS'JpCu4."L'lľ`/#*> q3?oev[~#4:LO%HrRNj!f= 6M[LK!P,o8RvhMOcAˤ"*`HKrA (Tbb b`b`ޔ* ?DϪAIz"[}nkWK4[wg1އ(-IM.6jլ6n>!bܕvv0vV U/N Vo#ooqk z<{?0}[lw{o} }xgvyχ U"" </~fƮVWK.b fFbL zX0)T>beiEKIe?(~;Obbe.p""ۑ#GWzh{1wyر> G׾I|[jkOzO=lrߌp5},B7*!`"ViX[ayX\,(\K,ƛCo/R{E."=LD`a<wB1DD4?WG?Q|iqW"c|(h6m5\۷K_R}ʕ+돍7袋׾/bG>͍[zLSRq$5W@IK-b!#1|`2E1Ï_D4,Viϰ)@Ն5 Dk_@2d!. rZ %*WK{ԫQ%[ Ӫ[sa˖-mzꫯƧ>)|> \y啸 Ԃ$5kओNs3 "%-I&Xn &W8!a*B0*Kn-7"FP]=)XXxNhڮiӟ>+% MR۷oʘSOņ _ wctt%89z(ymo"S %"%.I )~}Y4:/BT|ŠL,b52 .XDտM;Fȣ'=>:#x`́Q% x""_SO= r9yرx;mpuaݺu\jž}pv~-[_)~O^# .#;uh6‡[x7|ƫm^%kl2VqE%~IH^5u098w]4f(X5GL w"^]Ny%6^;GJ7Me1J QcVO<Ɣv㚇Õb\ھa!\[kD\s]2f<qIz`I^DFt/ 7ڨqGq|1&Va0xazEt6֍Lڋ_[;Q``RB7g->%Y^B5ߙyXj~ rF%BBD:(Wk `UVlv9|ϘBR>H;+h(~lW&{r7 noW}Ih79Bvչ`N9J>ovX-qܽ{wSN9%8M-`gk*c/IDATժUxw{׼5?#\|[^DDȢb Os\qt*t#݇F R ,0"Z|֎_ _Fi@V f8 8Í*qڐU/}3?ܹ0ADDؕIo2봻ŴE"D|_O#'aUQ)T`Mzy8<;cW NFETz 4~`jqѻy{zxoG=9Y;0988(M cЃ:N펎dhU}x n )5Dk`(R4E<,{Z/Z\I)|)l*T{koa/VjupXSH@`E%q񽋑zOb#bg CX,ö fE!D>/"2eVOqC4wۋF}$6Y`pr |I撚|C; ".p)|1fI.d92sT5=`oV+.c.v.-Oڬr̓af!ُ>K'\{bbccc-LL$QXvvy`ЃԪT:_ljJ!8QHMcQBD GW^?x嬳΂1"lܸq"R3kٽm[csGm8ۻvy.g6~ ۨ%li"0.@{$BUDDF*W{L#{2k]fZd*\@0X"NFJeo@Dq3ʅ7Q ^%[Z[=/|j؋u5)D \3{5PRۤmmF9RMb^G ||ʚv9$Bo@w-ٻ*~k%iIxY2TE2L9$P;D466s=vO< l* ^ VO@;,Llp"- è'~P $Qy'Ӊ/\oO\MKK0FzHJ91RƐVg|L1Db#SD3".f$4m^Nhߙn ߊ j0=\nRƂN2;жeft#|"@Kir# 8[ફ®]=yOk/"ك1\zmh.Iz}F=F*Cyܘ&DDDij*}8Xe]wygS5ǶmۚŞ}7 oQIu'-I.6 -!s%/MIM=7GzhK-aG:a11=/+z8qqc]W \F i,6lm 7pKLxG2?R~;<)AСCя~J8n,×e֭[q뭷P(`ΝpcX7GxGyC"~S `3ݲ\&ƛ.ڽ8aZ1m B1+h֣"MATCW"g}QV2d^0)i58n|jE\yqs _b}֭æMo|i. =zMo":lݺQ A""" ][-5[rl}@\, (R nov L=b.v0iєtVBPo >-RӥvPJdj}zV5K7/[1, /;g$Mۆe"hUA:Zd Qh"OB߇*>uʺ$w>s0W?S.g)>H&oK1Aj3J1ȜGe\iV*y(' XRQ3<xu[{"&Br^>G^ ^x$e lcq2!ܶ.J'GyeT=Y.3SΒ]8 lgI *ՠ7RϝaءRpڽ=S+QKۦDRۨ&ȥ+XX$]6)jcyMu(-%6܌ "'TYC )jl[`Q8H#c=۩4u +[c, xJٴmA-نߵYG;Ay kQ6:<},aMKDDD}x9GTr30b /R/G+1__MlZP z~"ap3DDD_ ""3a!^$iiCsFÓJ>ZToǬ!.4zrD@$SqUHZc`NQKfj ԃ(A""g|#4:DSm+v`ʶ|c|GS~z#GM):C]=<?,M&^q)J*քVs"C<|2_H A"">g}@H0^I4 #WձFXo`'GMx\"""`Ѓh VO&H*Uc &qs% IjCՖ N ',4ZO!wEWe%Ty$>:tR!MMGpQMp(RFLd2`HWr\K/w'oTkaiSL|nN&=Z/t g(ev$ZL2??R2OUo) gbL/N6@ZP'$Z{(nCrf+`?_F9YйpT3(R;֟v[[RrDW2b6H/| FO0+>7{חbUln*la24m?dW+)ZŷV6oBN!smTP  ҂!ǔF e:~l9?z|]{YpbV#`<8yD *~(-Tq3xWf?I?Hd!G.vp * \쒡/Q:i뼼ZS5!cfP~K*^*^)N*p'Qa g0Ap")w'*]QN3`s^Ze("b ]5"jdjCm\Q`r8|򢓺‰_wX-Y zPǦ-?<<]mc~=tѮ?jAW `69QS lԇs+o`uQz*TV,$/1+O=SA*[[ï<'%\C?KN]]m'wtn˿j?9xyOWy䙮m;p#nD =(c3]/ `Wrw{{?|m?=XW~_k#]%]xB"Q%? # O?9e`&Iϟ?x|*w|yϺC}Cj;t91lW9&2a>RӋfx_Ia{l[I|05&gfsQ'p"S̄#^s@E &]#zr+;ĚynJX8%Us,`$ƴJ2?p"$h`/a^ՄV$J\m#cMɨW.ٟ%i*/+Նn{$AټxLWʱzxii/mrS@R0 Jd2*e_j,.rpWIWpr#"Tx S.W{L[x;K̗-LT]&.'UgfJ+TBDp]A*O»?N5^f税}_S({4缻>ƤY`wf{$wD½AEM^g9=!ToYL:iZ*L)Q/+x:{=/ .y}| sϷ}c'i}Ebn{?SO=ۋADDDDDN>d<3[Ez={}attfKFQ8p'x"]L  xPDDDDDDDԓȔzDDDDDDDԓ "qqK)񍈈 QO?|moy睇/~Í7ވ7 ƪUpE;-5Nl5<.2-o#<%SV'Q/*V??R$""?яdŊ~p^˻. 
q1F7tB~Z: ~Vm&Vr.RrЩmSu A"9r䈜z266&GmzSo}zrg/rYŢ|;ߑ 6~>-gE͛7uj6J7[s~7FGG^'P.3/|s.Bq\~Q(033{g>-_SSV'Q/uO>$.R~-mܸo|2So6Hҩm(u `ЃSO=8쳃Ӝs9{Wφ pתvi׷ԣ: ֩m(u &uݻr)i֭[x  ^xx[R6H4[-n.n;-<#w߳{֬Z K/ {1| 'N>&&&mԞ?餓r/֭[я~\rI RԶmmwDTDYs|%ଳ΂1"clleHJ|gqlْ{hX Nm[F]v[x~GD5 zвyfl޼sŮ]Oj/yw> uj6JC;""W]uhy_Ğ={066K/t^뮻sNv K= QM-n.n;-<#"A"-[vZyMپo۶m'r}/6mڄ}5M=&nv">O^XCR')~Nm[yCTu  D$""_%ggR*DDdǎj*˥RԧݹsC(b 9vآ>Z: $."۷7bϞ=8q6j?BZgq ./Kk׮ŦMo|cQ?- ض̇h6ߨ[x~G """"""ĜDDDDDDDԓ """"""Ġ$='1ADDDDDDD=A"""""""I zQObЃzDDDDDDDԓ """"""Ġ$='1ADDDDDDD=A"""""""I zQObЃzDDD}oYgc 1___;044c KKDDDBQw}ƍ7ވضm[k/n?N9.-!Q~ADDDXkq]wv!\ xѲàa˰cns( Z,"""1s8q۷ꪫذaC(7 """@E? |+_K/ &"ضm~aT*n/QG0ADDDDDDD=}W'1ADDDDDDD=A"""""""I zQObЃzDDDDDDDԓ """"""Ġ$='1ADDDDDDD=A"""""""I zQObЃzDDDDDDDԓ """"""Ġ$='1ADDDDDDD=A"""""""I zQObЃzDDDDDDDԓ """"""Ġ$='1ADDDDDDD=A"""""""I zQObЃzDDDDDDDԓ """"""Ġ$='?H IENDB`././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/reference/_images/yt4_p0010_proj_density_None_x_z002.png0000644000175100001770000121212614714401662026121 0ustar00runnerdockerPNG  IHDR=ւsBIT|d pHYsaa?i8tEXtSoftwarematplotlib version3.1.2, http://matplotlib.org/%J IDATx{nuK=hJ)Ů#/,K%JX(&n(#[ԑ6j0EbPT.lnT`ǵzд#َbU)i).yy?5k֚sxs~{^7Xk-n0t']C 7$pC 7$pC 7$pC 7$pC 7$pC 7$pC 7$pC 7$pC 7$pC 7$pC 7$pC 7$pC 7$pC 7$Νta}.]1椋pZ/xs;ŋǚg3}wuIx|3iGCFztG_=9?-3ƌX;==uә [qBli 6G_jZ<ɽ= G'B75r~I_kRlHhE{c[)9ߏs|Le(t{Wa ~o!L i gE!|y>s5}Ws0t[sе:O3[ކ q32S_wm_`uߗaH!]<}oC=\Pgdydw<yF/e {5^#;B;be?|w7-+䳿q;wCaONߧ&MBv?`]'wh'0kU:`؍UǺzږP%u-?~>cB6:?kov8BdP:ϑiMw]#3ڏ~h4|zl^y(F?3C?I?Jǔ.0}6yl 6QzS <%9ud&kiΙLw7bxֿa4F!"e0`M~.GV$Q^ ׻ i<1}sBL Pi 6(?D±cB9 x{o94K kH -1);K|oENZM|s^IF|T5^gST US pF@މ{CorlBSߒlq}bTcB6T1铴vca<2vabہc>>7iڀ1[|?y5^+#;~ηDP1P/*k{saGx olM&{,"UBgݫn{IV6UgXm D1K;C>;BrHkyI9_r7$k۞q雳z.==\bGǖW38H%nVq TS>nk  ZYؘ񶻰YժJ > A2I'>=H>"ȒcfLBAc{& ~#Ι {pM`'/':r="#7CifA 5TPNbcO[buGQB-y(\-@3H~Jaz~bidٷ`L͍\# \ !%>0;zch2W5u=oA%h =b%OL|1bmB|#Gv̎ǕuxA'xnx#ʻX:I sI9#[#1mn+`Od+GfJp*]\y۝Lp 29' q78UGkiTУ4hY@E]w0[ Ox;CDұ+ۣk vc1#Lڹz&Cb}dDU&ziMXQ axG4q#-}B@j ;(e f*mD4꧸Z$pU`+=#3/U+ Jiݛvfqm/Iׄ@|,rsaDxq"\ȪocGM,k*Dƒ)|}lAxWϫU}0;M-Nq~cG掂J9R!QB^ >VEVUI3׽yJƆSFzt[މns:,0|>zwb᱊Lc6K|8i00-2Ao%r…/z%xN[O`ž>3U'eP~ wE )  '%\*5T0K ǥPOGHtytEhg9L$>p; bbn.Iҩlj}Kpvi pu≎袾t[m9sć'@hpkqw/R5H1LH`nZKH)-$G={).:ۡ BSatŔ\QXt|:t\[uiNqu7ċ7xBYN!N*DYܝ94y;rMlzbjf`Y,MƳ+dubwɈJX>5VWy1W.,ܭ%΋dY?_W .5.\I+6[N8DqwNvTg _RɩlH cCb?UmbfS]l8&<՟XpS|pk5t^.T2Y&WFrmH8!m߀-KQ`7cy8;J/]6HñRHzq^AW BL1X饡4dH3kc.װ}. 81؃"5E减;kv@|$ؗY[xHdb%Ț%Ѡ# ˫H}"rۊC\Frx cߚW\¹(ž?E@H%;m=^f˙< -NQ{evHiB|Yۗ:"EC1guX#VyoI9jQaI.ը9U}(/[rZp})B>O4`3*\T|S{IPGmR)IF@jXeܹ%\CιW)-c6\=ƭH^8sH.#i3# =,Y?!QeVK?7;#q }/9?)+_Tvb*_gBj.F5. _Lt-MaҠgurXU H$@-) UbhyjF1!>WjIdr%ʪ>~5Դ`|ҭ^up#?|5%g|oHPM"3@xTM<|fϻabijfRKW4 H·IEpVS|kKE).qF BUg4 I[Y;A?`ƶISD >y0v{Bd*il \17idݲPD*Fx[NC4y,تVKXu]mׯBh U)ůqF1Ea8n:F7',֒v] !/sBv$ 2;dT(E2꣝u;_,[|l%ʨӢ`?r=:O&ڭo=lph:n/'nUl"l캌:QINdm5LDmv9![{m}&!3"[vCtQF,-MD}b0Ro#CJvƒ(,Tx9GU{TS;_撛TgwOq?'v;;ANuܐؽE$;2FYK׸Tf fj`?0 7 &]#|lBxW?[!NT5 52]WVpj..}_Q|>k3O/K'YZsoi8Mh赸Ѝ'ɟ8 dX!MΨo7ܗ&5vTb|X ć?&LL@dutJyP3)[}-)?_ tbK㜴2_NNȓue$i+hKىDT*ve"`XGx9Bܜ3SKЋI,sp-׏]_hh8q&PP;U @v8DDxs""A^!O}2+}|xLG[ٓ@,Ңߗ+0N1һrKjڣ\T|"1ԋ{%6%Y =744,kMxп_Թ=g$Pʻ -;PVlA'a;MKdbo$J,4%>'Y#JfFU˝d}mE Uy,,OYwiB|]]de\xrL! 
Q0-!<0!ns;^kOa9Jqucxm.)\: \G)xl6ir(*3/\+hbg6vaL~~\p~ㆴ)E1A Gax0lDZ_C!H`]ZR`ٸ_%{Lv=$!f"VMRM=7!U,b`E\lLơD| `/痌fn@(MLk@+o|=T4-MZQw0EXFgǤ]yv4Me숷uJ }~|(}yMw{}D|4P>7~?ݦ6"iW>PtcyWU-|ԎMH./[BTB@䩝> DE8uǭ!&'@2N1Oqrg*uXٽeh- 4 %a5Uy6;zҊ!<4xVGCU!Ge 0Q>N6> ^3BjRQ&F})|HXI,X[g̋QMɽ07@G6Rԭ,5fqx_u٤8NȭX5}l#1ѻScBxBdwOZw.CGt6ӵqG.D|Ȑ"X4( m*i@VPL<`Z7eG$<4B2g^<8$"+Q}D+Pvw?¶ᙂFzA%<{t0JjyZBxUOsN$GuQC&mn1[G Ul⁍O7YP&>s]|\~9(N3j%m]֘neN&>r{}>b wE ]󉂹._ wYVyVOUSH12oF>^PU+ST3&\?`JHGLY;Xx5۠v}4FL$wOXM|@ͤG zzգ6~]s<#?j\(j$ #DŽjex2ocݤF$\ $!gėY*eVc6 AQw<aMkti';-X,&vq I[ˏ2c3o+X*>ߘ_x(i`co~T b¶~pe*P.'\(},)=1N-AHv%?g$B>:徭@vjp4*$#EA UAHsZ9*qGIpH mRgV`dϘiyFqLR#籃?3X_~-OybԦTtER£ ڢ` jO|B2FIjaZjdܜ\^E!*>? hY s0}3~'ZL71Aeu={_`Jʕ/f; !}DxpxPK|DebOd(gxӸ%|.}Q\;9Q- _c{X'C$ZG+4qn(shӠdR=Fи! mW LjFz4$-ъ܂T>jRE+ 8TQT;!>8p_NvRd%]0ZXά|kBJćuLݐO[ 2i"OD8&āBS&y(0V HVԦ4r%&=ϯ-RrKc騮sj")n9$LT2}zrL*KR/D?x܃hE.T ##B|HȩڦvSrDp$0m[oޢxHn|* ebޓ$@L$)~3n-u7k펥VBGFƧDmJ1;iIwgq8I7q0*#^I+5/NV\up uO.Uyޫ q}]5.ռq$9q>7MYWȋF{'sZw+$/O.}OdD!R4sRk< ThLh(vxV+B{? *ߑۦڭPw[L`3! Qok 7 [V}"پvJë=;kwP+ry3%Ň޲o$juL6A2uV#rmH.klIHx3.{xS79ÙfeŔm=9K pP_yRSRAڐ[p둶L(/%pBaxvJkJRߡ C@I}dPOR1g p1 8c;qp L1[}.`õ+i''L x .'igūԦ=i ,dORL3J>< ab`Fiu DŜ I6؛]FVTr$85@5oXѤh9[TDf24nBNҟ-M݀2ѠQZ3\[A!6@z#M` e1=@ &2CrvcDH-fzG<+WE6AnnLIdBϊ0FJ>GMBO%W_.#F)S>nVV`$>^fz{Xr<Űd#u`g}vOj1z;Cs@^Cc8X۝c_`+߄c-AR]}=2$>4i,fb8cb..S y-=X_=REE(`??巑NyK|'DK}ĪԶ*\Zr`ᜃ :KPf} b;'4c@gcP \ݮQ_6GuyX!.prc/@=8Et%fbU_2qIq_-K^TB+n9<kO¢UԉE_A'JW"ҵ{N-j%&#A s,7~7pe<{~/_k^ww}{~gF pp4,!autn9eMj*$(ug,m5";~.EOlM\F4]ne*?쿣bb nr|߭Y' SJ82XݵemH.oXLxm.J/ _n5HTi_)=h ^ĕ+~of5:T|.M" '~RwDAKECyL3љ0tߔвPÄ4)n{@%AewT7=V\з8 oJJDV=%UiPtH(a{ x<y><#k?Oo~3^Wⵯ}-4FSz[J8(B,uNH(kxsI sӄ}kx/} ?x_xGnG?Qs_Sfoħ%}B$ #ƒ'g{ Ó8yo)<*!<&Czoy*JGa"$B٭OХ1"?"EFdƖZ9&[J2,e+;RyTp;Դn |;g_' /+sǙ̍d18H]C`速"2)䞣I3 .+"sDZ+ODub ],770P(],9CI-V%7p]46@WqLwc\|($W4` \YsM1j`V3c6z$,p%S{$=\V6w1O mv2*wsɍ"lWT|63Bk[uR]U##BSTGEU `z3Y6k ұS<RBŋkw'ꍷ-n7k/x O>dt/|~o 'oA4؄I΋oI&= !6q:fȑёWW1s󨾤f{N!IU!eU%5y *ْEJvXe2̓.d\Z/']=XR]Ypvo9Ο/ x_.^__om݆߉?H^8C(?Y\Hxh{K(NHAإ$Aڢi֑uHje/0.5Dί+t1uˣNn?KI# `I dQ;q0Y,e' ~~]5 &B#$a!EIOwVC+ %Cy%qW. kj$F"G?] $#;kG4:WO5;O||'z! n}Q| %@gQ;;!e׺U 3CUP8mEC+.n50fmyOC:,H G2gߔvm9h7zI#BQB2-M<&&Ƹ2NFKBxd-ȏ9 lu0L/ ;r*(+2JⳈܚQj'>'4q}E}Zޯ">*UxP8A<_%ld넾ff+kwѼY[x:.cJ;Pj #t8:ӗŃ}ۭ婧ݿwRHO#\pla C|2˭N>&\AdZ=&1i~+@5@5עd)l#m #UypUJ~> ; % fR\ @<āo+S{%￟\M|>\P}s=bdfM[Uw 0\P\J]KLa3N1|%;  cf!F~1BMD a9gXs}zA(F|ܟE*ӻ^#O֍+Xws]\{{oCj˩!;ܮ-7|*y68KȬ!L(%1&tb?`;sUFVĞR&3 zr"p[hǠ*W$pɄM&|J$>8 ^usQWcזowFLbkV1kȳ=`Mnjʊj}ޫH|!0;T9oe xN!6VٺʣpOSxȏ XCM `Tna1Ec̘-`;X˒O6a+#?Ƃ~n0mk 8оć7UЀd jrZi[зp`%3:pO>o=mE붪m8hY ʠHZٮ[%Mz2 rL&կЫ$m2>lK}"LVnąۢ?CPo<4M0LңnfYW6ӻL,x_j+hFÁe#jBJ.'W$r͢uwRۍ5LLn.^0\.yBdѽMY1bH[FI"rdVQĨ*F hGZ9cKjK51;^BD|(#?ܣ2jRr.=ww2Ctgjl~T5L ;Tt| ;Q8;]ݷ *FG;E%l£G/{`E<xj68x1[]ymY)wo$y;HPbr9(5sCbQ\dkxbc,+o'rȒEW{')Gf D~tD!cMRU\M DHԦ缃VBxJڂ+PuF0BȊۭ΂Q =')*1";!b0 9bZkI\s%"-x.";|ڋ[:chrTYyEȯ"y+x@?=Hl^{װG5~/[xEBx]4 qJүXZn?ۺjWeԸhAWKxt:TE.a~-l͜WdLi H~z,>7 ~.׭p?voY"ƺl{ւJ#VLh]9 (*[0<>EVUHTnk.b{!hO)sN@$di)*w FLdGJxl&CyPĜ\a*^Mml xq!}ΣmFJwXA0l o4~d/W{<ᇞM{o~~+9/~ԧprEkXFz4P!&'ZdrXϠR5qٓ@QQ"J#Ot_QB);sP89(+P'fl6F ԕFm|q땮N hVo)ى`44<$U'؎qD$Ka:UBvBQ%óϧ qIlekV:%D0Vb^{~N1܏ȎJ£8;hy>ԁm%gXrn `?gCf駟 2mo{ '//r}w⣀ErI1TI1;  X2?qZʣ{mtφ.mZ Uz񤢯_Xzѻ  mG1ِխ""$d4>P.%8fHPӺ/4-fs;mfsW묢 }"~iK4;voUEÍ5}ͩ>㽿W<8:J]nv}˗я~?3?7M{ Gq }M|hmɯ!9\nrTe(/ Wtn?8;"K++wmȫ$b##VNdf|eCJkmvliWJzJ] ctH}j!Mmj!1P%+ 6}[mk/-r|Ajdȱ9-.+ph#)6@qkR ?Mo?3}}QO|4؜oƚR3]kU>Ȱ *vLI h}L)8&r磱,wq fLk; uFL q2C];1L;Nm&0&``qmibH?{9zL{lc|)!X ꘱I|^k>C=뻾 Gz ;׽+]0~ykp7㩧o۱ٜ8#();]aVKG9 1ć`@OJpDໜXZǝxRxp#wBBF AM٩pi1. N6 >ͺۿ@%j8.4ңAEΑν. 
3w'S.B~=\)ⱒ.*Xt 1A6Y3w1ʶ:,>QcQQkGIds/{5e5tKZb<衅թ%Ԁ$ b%a~/YS hG_ |R?D.1cy;gRr5<>}C"vxyK^zl|HW$lC 0`NbQ][|̏Z eՌg\RlqEV&DyaV}7{Ui`` "[, XV^dH1dB^iխ"Cc2 8¢owrh=J Uv"I@靉mF=Bv*ㄳ;q@?jbL'j kL:& Q{7-,O֓H11[;j"nq+c7*`DhJ뒹,A XPܕy1@쀱Z /=y>E $/nǕg8UٛgݚcDbciߋ;sȲrYzի||;ɹO}SG>k׮/}nr@ ާA/A0kIap?1VE 6/D.ayĺfָ29M.~`_(Oy 3MB LLR.%T(2V@iG)BjaP;b!3<$Q?#obe1֭..c]h"g]80X cBbl.(Ӑ" ?ݺ V~z)+嬰d')C2n]Ir0ʪ`*S@WB´)09M;(Y*kg.l~ D Ǒ;(sUm:/ {ı9 t?K WaFKRR^YJnL4`ѳ5xG?d1>2c]?̻ǟ# >a{k^[VOd-oy a^$&W6cW>|9hs<ҋS%C݁CkҌj7+?Ib6Ak\`u&À1琰=:-CVL2H[$0 Ę/\RG .ZV[dVL=-Ӽ6ׁxTi~<(X!Vӣ$SW UGΨJHfsRf# ^딜#Gw`X+h(\WTyPrU0.*5%gT^`$x{ek ln (L u)abUʯwxUQ]]C| c k ^>~>2,{}Ƣ,>[L͑)4ad֧=o}ڃ8O#æk~a!]zn߾KRڵk_e|w~'^~prrzox??lr@MCn.bmHAO&ٵ٤)|@6Q{I)k#NuW<S`e{hKën['/iV"XϤ;gMY(u=j0g5$4stHbd)"Xl`{FW|Pc}L3ICy}# 3bXrf~5C"90Wt} ct}bVW1VQ#ٲ}:%X7%;+Gspv  k-`9V& I%8;֧efmeNLf(A3d1=J-2;6zZ~'FalC7κp oaٍ*qRVx ^ø~}4goo$_~7~'~/| qwQnwFMzlI QTHDYre'Ȓ!1(Mluw:*Ip2Q8c*I7D P>sPR.Y}H ~|=X"CZ)/̻tQcbE YAg$t==r79?}'\Vv<#1!a%@;IfH:)} v6:,8in:[sև-r i;=+6c, 2kX`ӎ3<<i1pH3|L#kdWѝݓq GEXB`` xw.})\wn 0w({yַH5|j+}和2+\iEsz]$?4X?OԞѢԱ:ܮh2ߏgtOO%r91RZ J}b@:vHM_Z[?Ӷ<1 }> 1* :S9A8Eːf2KA(p݊g(]h>=u|xI,g h,'쓾'i7>W0}qj/ӱ&Vy%á?;|jjڒk]rOp;d:$T^xމQ*d98<3L+I囷g 0y;Su~_tP]Y!MՒ#>:?k&3r]A'?W|.`0+0 mcxCfi=%]zmZ^6H1=.PKrne+^3?k%Sc vʏSGǗ{3Zp:~T0MV|_-cr ܏vK(! 2 àZg!QC xFMxJ~ >H[&{4 ˏumzcf1*˕E,%`R%v)1,?3фpdN1gK{,gpːE`qK)$F1E76RW?QddЉ-žդ@Ps=3ҝt*ЁYSs_?;%h ə)l(BD_Pӱ5 oה1L.-p ?\/V޳׾)zn>y%Ydd=6*"Q$9JjЦ&N W 1&FeQd# %]l-3R%l_l)/YiiY1K|n箢sW; `T4O`u$`n&cL@pzb7/@R{ elcgQB!oe4N<)&,hU79W2? \A\X BAyeyWG &ˣfCSIbI洳kxp<69(Qb{&HzqP\h=X'0A8jm&ngTE(`t1;a)p?>N؇K}W>6{TNg"VV,o_(_>& Y x }+"5v ӥ}]r|V|18p7m9ǹ;xӒ+Yw \w5~`80@6mm:鷟5hGˊ*J||%\h=Ju/ |.ge5D6T2Ԃb ‘ؾѺLNJi/<8O'H69Dj~!S6Q–k"rc  FX|Qlj](5&eR%=HԆv k=#=5{۟0$\ -b* VsTϭ |H ~"b^TG_t?3x(ZHWW8sWa(܅|KsYrK{FO6ЁPx`7;"(CwO"yIJV Vg# n- 9ViͿPAQ>$g~$ ]J.}+[dR냬Ww6 d&q9V|Hםi̕lH  xV:˧70/[ɂ)ֿ|̲,;@6mǚcP[ Ly x`~E!^J9㜁&]9h- # v;s#iUIh׍qGE,kx]|b-"OOpPcT n*ФK`d*=5jTjXA]['3[9cb{ |B}DAhMGla^]XYi&kcTV[ݕ|TPvܵ/+ ZZc=%m.Up}ƹt*872>#vWlp7.e>pXi+BOF<" Aޏrlskk1/|9X.c,ƫu`; ݤ4c8E}w! \=ׁNw6.,M6Qd=.xqBo?bY\,qqYCkDM_xc{Gl jtHwߐG>\! NJ;3.JX7uO-lg&W}a|O*L|H> B;.)[/\S' aNT&%}#ߜ}wMx<5]{Ċ[Nz/H#._)VZuv:05Dz\j>Jk)Zߐ6H3Ř+IQ9`,XH  U kTu]Rx] c'qpᛮލuf "^\PJjlcLΪD R댶w(BK$%NXxQ@9H"D`yTܨ9{#}t(\Hi vu)ۃeQ=1+3qXLޒ J~ ǻ(bu)NTKNGx15 |d-,8--ر0<+S]e8o`!@WP0f 2]s$/l]hYÆ%K !w1 ;$-;Z,~N|FX#= z\/)R13Xa 9ۃbX]8,&( {EKƪ XFC=˺1- :pŀ \eVR'pq;, %g{b)f]"i7<4pj'(߀]~|.N1~NS!ҵgxaAӬ3/[}VmUZ]AHkה7"bƢU+nt:o:*!=ϿSQMMF]l&5*K; U K-X٤O-SFe$;fi91A.jBXvP,tUR6o^局OE T۹tn8w1GxR/p_S2\zK̍L}Juq9c.QSL+u]kMy4\wpU ձ}llEgTu\5Z-`Je*  q4<}B엶 H<ݸS95m{/ܻ8HwIPm#5TɃ2ۄQ`bUby\`w3Sg,g̅)c-<2,n`"EOI x\iIYA)catz.50< QB`+ IDATaIT%]A:>zfCR+Қ/(P 쌱_4hrr*c U2KkQmX>[?ه0qs)2iqojDۃuVҲh>0ias=݊q08Ȣi7]K1nW[Շ:^k/(~VwI X%e'XOʖb3*o|s ^ߡnQ% _؁Escǖ2~\ NxhsjJÐ)X&ͲŒz*FzZS1#uNK Ld;-IfPl1n+R))KKq0 F)2<w!`/̢ 50nRځh=]MB6xd9?[ۓwdv'/`qX@P/J P|$(I$FC8>kw~F-Q[ĺVSt)1F2#uAڅiLO @d%Hw^cʨ=| ^I/uIE<տ&ya7!I^&-~aG11_)2sJXO4w̺Qx|J7zlr,x_ZDEUhEQ1 ,³v{eBGz_|w'_8r$.ˢz;߅&sR:s9JV\[F 3ayj@y<4^[t汊N4i>T~`zsY;Xj@Pe-)ͳQA&n.Kx.kbqC]Z;_ Td 8,9/E!g.Xu^qEd;-qL+|1BKZsj"Ҽh@X43K( KcxnEq;xLy,%nRZ,he{Jbi-[ٖ'H [/ ' Ffwq=hcy#9G1~F`w'r~P 9%ug0ΙwްgynRwQNKIB Od{F!'ؤQJHޠ@Qʬ)ۄέx H$K˜ddXZxX]ZZf5½Iұ&|kYO6"Q-Ka&n0O&+j5'+0@aL<1ץK %.K6U&䭻.MLJ2g':h]h:KESjktc-@!l8pM=_7 @Ǫ ڠmkTYG4[e`GuFE$ l+aL&8?<~Cm))Ν'V w6 KKRb{Hl)]2T-Ukmv@E&=`_c;1`9Quc银alkvg&&4z\0*6Pk[qhR&URm#=jy1X'DzBSb 9 P\p'1hb/w,?$}eDV.PK #֧-|e!1G*E#i49XqdY"2N'pnSTeJ. Vp2 $Cbtc\X6ƹOqئ,[a)RW sBu~"! 
g⚬9I5uw 8tت{Tw?qe=6YUlXWe- 튨B1)vPp/&hk)P)#1]|O J|X..X }[ m;A8u T V 'J,@6@p&%%wu L >Iyp@{ANdˮUq; fڿXTQۂVFd8$̴<ˮ0Awd|2=kc2IɵԬtYKE*u箚;g+R fLq7um3 rQ\\Q^o 8o>/^&OL@M' eo+" =erc(f>+l$; nx_}8|He*t@ķJe_,2x,!<7Ci[ >Hve]. ~l _D&Ti( B.]?[⤈ q_d uvcMQLngrEqlGKw pNnޝ4)FcvZ^jlV:voDuqT9vTPp1k̞}M#7Q\[PkqA7i±`Q>}zpx¸-; =Qgy+VctS3 S/v< Je`{ iB-ԛxBHRXd`p-gyR]LXsݓ8w!Jʟ~t?.1݅,i@=6m ie&T=΀ +Km)K%kk|ySʳU8Glu>"[ck}(`Vq=ޏњos,m`Iّ"IҰ1^ aLc>%[B v:\kx.](w'׹|ym{6lDq2; 0_?$֫x,Na>1Akl)R C. -ׯ Bzc:ӿ]πGKD\;MdxrxXۿaeQBswkb 0:YL vHqM'0nw==>?SceMX쎳x,,AVrZK4Xa5AĵLy30J-1 -.91!: O]`}d #և^F\u)\PCDL[r-ZZU3xuh9'ld,piFh5X,FFoCƛbGmMΓl&GP;8~䃨GldtEMq%ncBОQngpY6vqєqBdOk~cd#)[|# uGv0zo,i}?<~WӁtHޠ{w?߇z+:?ydR@NO[)n.3ֆzlQj~͵ODh}q,R$8k+J=4(H.,qvK%>oxX:d&cɮM\6]sǨUk[,͑ZyY);1*B1y5-%'.I|UL2d'&8 [R@R6tz[jeH63B @B ;Xk>6veGd{k %CZHLV%vX~TߤM.lfcf䈀ELuv-3KE` KہQ0 }8y(h^e  <3L=㼟~m0=(n.1^gbaRLcλd]QrECAXJ@TeRýIK9XJ~7xw\hϔp\Sz,5RYiG;>8NV],d DGsDۊ@ڮpNIʜ]67eM^6?q^D Ф0SkKrϕy7mvb j)yQZL/`0a/Au}]Geb@b'!{-Nq伶&C6AGbӀ*% c)!$j (J+b1M sWw//h;`gVQɳ2P/kb(èLnFW#Mڙ. G)_.o[ZxʖY/z-miAͨM[܀a8Uc|-F)I[H- !B$N#1]x~9ЧKAt>ِ_]Zr x@uX\|n$t.8 sw00pгOp-6KƐbx\J 6w,j_Ϟ&@&&W|Rarl2xduZLAaWW>>B:XͯV"ۣ0Hq tNsWѹa2 IDATS=8NL”ԓ\"pE#<_Nd}|`??eZ:^nD[~z7_'G܅ tnIh)g6> Xae3L\w7vO.tSG]qa&VȥgK\:1eQ,~93iul,㢈(_[eAc ĀI *tgkz K~fgV&`+$Xt㰛>523e^SP 7.JS 1!lJ@H\  e>]6>4t\j@y;~c)*(ڻQ[emC+cQ [c u"F$H.TdLiٓ[s\hDiy^VόAДbT'H ߜy= (n*hICrV3[ } fPh۽e"q=Kp TSQȵS*ku=:GfAJ)s=wèNҢ#W -Ţ4{J ͨH9GA^LWɚ@%bL&uh| B8 .=Zw_`xm9e`}b3 _jп>3>S*-%HtWA+w!sA?-IrÙ^> ܋~:4 iU3< I^~ЋPsuzeGe=&kIb6CQEjq/2< _}?8b7٤]6I-Yu.,5Kyn-]^r5?dXsHLF%tw)cTT[*G<'(\R1>pahŴ% J@/l FSO.33\fǼ ʜf î}\؁l3hƀJdu c>W m~gVDQJr0,e{/y2Q,j8 kJxChSƘI҇P@`V=wDd <19 }BdF.2 |Lc[˭d(qA(+Y:5m;0vnu1*X2ʖ,-Xvm$ Uor|_>6.6R +qzb)tL"z3MRrqJn.S:qQp=aV M@<Τ">$"uyxrh%\S 9<,T Pލ1~GX7轆aw{%S E0f́sXU3F'H!qZM6Qd=."״- ,q158 ;ZW>Yn˓ F$XE%l-^Jq}[-.W!-.@5n #Z~|^ax$9/DFQ!$ᇉ]ߊ&RQXfrm9(+dnAAD}>|XymeKg w*dqAtᡀG=v!/I%e'ilR^/ CˤIqv'FRHȘL~OpSie^e C$>Bk,A!·*ds(qAE\[J*Wj*t#'$9%J\(wP,b ~[M 0(bɹېZ״DBKftYvrѥ4{K\Gc))m*4GK+t% G-W$wCjKXNHxhLYer,͵/>JGeLvH|I ';Y~e7O-e9e47U 'cv!&ɺKXw /Jw &#@$;ŤUoمY,ēq2F>{M]\T#) =Y ^$q>*) .9( 4bPG*S CC/-N| H,XF=/C> $8b̎Y;v~LSzH DeK+S8.?6k<8 CSkt-[x ߆]: P̲/v;ٍb!Y5CK(B !1l} Q_DCbF@ޜˁ.XE|HuW Ѐm.֛:1/|%.d=.Zrw'dQGE~a|l%fN3ʕY`KiE);FPcKvnL@ ZN'/{6ϾW>w(aXRfd7ۣ |v`e(eAD`k{Jy$K]]|Wq{TE{!h*j{Gj 詺Ƙ^>brCf{6g7$jt*e-p[u@(ԣeSQvgc2y~1G֯–! \Qq7((Q\ k<`i>R0W\(@ ٷ|2[]fp<E˙mZiߒBc_T%Ѫ,3F=m#n x;HՒ .uQgR``m -S%€~sw Zly`C T>S'8zh;$Q*..HʩypbmCR+`e`G˰{k׉_GJ@M ӁGd `.g:Z:nlC^0dA+\M)K&\-EqfT*ER?ͼO׾/8o 3-{'l_F#iW_eT:/YQVKA H PUA P\ #`jcS~*qRe|cJ|-1VLm<Hrϱw"0Ȅ:0>ԅJb&`Yp{^$x=;:PB+%& 0K1>ĊؘsيX`Wm玘OJqpZcUR{;."5-n9Ɂ-kJSά`Tq+l}#J!(V%_&k)2iَ4=fF">u DdyHh_j4(.Ve21f=46hD?fp7i.W17dlEˁ E U'ɷh EDA&E@G^RLAzN2RK2Esl"LA膭i.$ waC(,DYD>`1ۥr.9]8d&2!ɌE c{]4W/9AbgSs2, piljYL񻲱;k&Xd=.1:$lc8Sqז0 8tr Em;b^u UDwS+qw4z|6HV4s3Ѽl#!^\ YNE'}4'N@F,[c>=޹1N5n1i-e=1怆VxsZ~B#qg l@)=12`@R\[ꖧ$5{.T2za/QsVךA;2sSD)`EZ9#_ @ 7匕iuGA,=gcwMz\qAfa_,+]nji`}*]rj~>Kz69,fnAE|<@n-7V% ̠> |тEFE7p X~l 7EsG^m۝v=)FK|P @b>tm"X1is@9l<@ϟX~g+?.h=*ogaL) U.y[K$Xxcs|;帔eq(.Jח a\7hR|;۟4\w?k.+!ȑx)9f+/:NK]gsYy Hi[eܪ!Bu6߃ x$!׋yLc(x֦(O4m e2dyےURnM61=.Љl;6_50;V"Ls52> O$eG.M#qkOv߃b>4/ُZte.3]y l  i򐪧$(=J wgƱl Պ2YF}l]h[;T |,Z{"K3u2FJI1ش|bu)FA?/ĕ*Z.^ DD4dlJl˜T2c=3Լ~OEF3yd MF C/dYGTAc #iɹ%쒹i}Scr(][%x ҼPO1VlwE ~NUhy\Ya='еOwy79@(4NΖX 贈(Zt/&n%GVW%nq EvE2q4q"яV>{VEQܢ"Vèr%b9-!d!-9 =WCI1Y s}dL*W1k]'g{>0e!>$cvX,vlNy} Uʴ\]C9E+ĵAe֕ϓ\ju X]W-YSS DfAW7sA}X\ns#uoR5۳mey : víӏisؼTZ#-|9 7M@ƝHUnP$K_>P=R0Iůql7I^2Q/_dGxTƳ5 IDATgۮq+ņD_XcÎ7}yՔEYػ]2~ BIYES .gE>$"E{Z$P)s<77>LyU@LXZS^9G 1W H3\:6QarsX9V65. 
s3 Sb+ IE v4vH>YgaW身|k7~ySGG#([݅*;&4z\}\il"Zp&|%ibVC\dQZQ8O*_d/K@RjͲ ;U!?rП嬅fIAo[U On< hwuG=62HRC KdWelM^xn+ L Ȁ ,,i>X~h_k/"5և6t;-`&G< soLűvZDW=->X(-,QvYt%fT HѪ@4bC"\T0S 9v))C-A`ఱ=i[|c{pB{ۣSi࿁JY 0[ `@;s(#qK]7<`| &bπHRѺ:W,R NH|ϾEJkTБSdk55 Q HpmM]c.?$㠑> 8~,^&ȲA4k!@X@;:'sh$tZ](s'pqSẫqx/"l/Ki-SƬi`oXt D |zVYn+0J8hay~ ϴ f#npjaU*XpQ)8X{2e/yP}ӑȘpenxxn{-!SWgy,4`mNH(Y[Bz6+ u0/J6] pZd{ˁz}X%OFET^̓Lq>, d6H{A$KxQ>DʍJ866]fΙkTd|ֳ-)R[iAgq``>0? ~БH.-;ArpӤ205ESlnPIy;' kl,I#"A_b|I;eU|E_r嶩14iswZ84+h&-Y4 :?/vp ]w2Υ@²q9MWR%.ea=/w1DZgH11)3wϞlJ{XɌPLgiS >r@`[Z1&>ḢSx<aOas';sӺo׎d] ;}Lڹ;,: mi?y֥4Ǣ1M`2&g$/zыW}{qrrw]x^zozӛyxWkK.{xwgg}}x'b5>Iw#@ \^b!U`a>ɓ)Zb (2Z-QIhzX P ~(ҒufO$NΕb0ku_F *?'٪4JRc~Y [W@މT \\'{o_Gt`ArlE9 lsw, ?^98sO)̶wX_C`{$n#y'[3gYG2 P#;9iYrCE#;0|]7R5k5EvҊ;|^Ǥxa ^o~3~~ϛ&|c=mww'8$EXj X{Jj4,]PAA$K*|qe5o`IEb0Vguh !Q;A N4S:'vhI{,͒zl5|/NIٌQn^nAQuL1@e{z}q'4(V9Ճ~4#zzA?C1O̎8Au!/V&fO>yMv}2K0g>GW]SQbtXP몸~?O3!?O~w_5yѝ&XX o|]߅{79~.?xߎEtn/oo7ov[d-Q` M _~ G0{MA).4N"*Q:O=5?9(ARh,9+ #U^\P&KcPfi+8P>X˔ׄҳ4kc>:|vOg;KOMx+MOK$ܱ U=᯿_4 n%9rKxs(UO@H `_w'rh  wf]`H~!2 +ugpD3)8qkqQ`hk1&."C~tl`&,,o~/~syspjs\<<æ:ir YAScb}d?/YEd?7Ƙ"MbyT „ݴVJh J .X!X R< Qf~s4VC/S%WbcT3KF>HFҦgp|J?lz(gя~_booߏQٗ}˿˗VU,N~'@#0>$V~RQYW,P=`mX#MIWteW??2$adƭ`}%1{*tq,ڀPu R|.`N)90 HǬJy&<)nj!)Tѷ>TŭI[keTҷ @DZ耇fP4j I> 0b,dI@+?KK4zֳsrrcHO~W\/)Β0 bާ?ms;s*Í[xf.m|?i0 0ILb`!B,E[ hZcLi$k֧NC\4Y5vF2/,B=Z% VVxn PJͱ(AߞsvI>6!(u[&9>i߷X/iIǑkfJY-ׄOgkC?0Jy%nW_MnkK)1bxB?^`@jը9eR{ByݝLrMt#ڭՒq<ؔ&lNJ}.A⻘(ӹ^U@Y@ƪxhL/*eTdc?/yשK \nl̽4x:hEs1ЏV Kb|=V Edfa|4E*&Ym?'!\?4a1O6e=V7nߗ/_\rpEe:NNN7|/'r׽x/Jq5Ir䁙s6#酌 iz߫plPAil1 QW88V  S6?A o0R C2Ӏ>㾀S  5ݛlIxlk1& J*6C P-c}lKi]: 1瓏7w] ,}jg>/w~چR~1uV< Rlc^<:Jn 3lk?EKuY%TGYŠa hN=xףdlNJr]w^>®-w}wsׯ_W~W,L\&9u<|,HTRBZ~$E.;4fl.-j޹44A[yK>d^>띹JjtJ7%caEv 7?>Ƒ{PV(..bT8 ZKrZS`VHUq!ء?Q,Bd11S&#^&K0 P(ۥ{.5E+ hߖ6 2:~6< ;|%x⺢<w']XFv!X< R%z3b} oƿ&wBVv{bF8???x3z2g<{k_Ƿ}۷5]wಛ?;ȾvkJt1(oΩ;0t})!Vh'g<$6u\#T'ȋ9(÷B,8뚭nE`!p9Ncoc}Xβvp8.9=Gi ~Z:.puqPwg̻Ą4n|8wt'si0Y2ozd,Ɨkt,T@,QRC e+E#Vvg78ֹ㚴T/TSg3~_[k!說c%J gM4٘D'kk{ӟ:8'> \v-KO>Me8ykW5o@+i +Ȝ` y u,F\[ U1^j"#j7\)KqI$j;41Efe­%vv LrpL1׀yk0-xvnI9Cܪ8柲}JFh0YƁx3w’4 ZںqQY Ǎ%m23vu%f~vM6YK6Ѓ+_J\|k>F%/y97xGS?S5ZIۏ=)A#ౄH]Z{?6񔨖ج`-H(23孖KʎZ}E3VRjTuY,94ЏWg>  (8!D& w On/Mâ@D xlQB_6ܗpp DU1>h;mh&>- "a,8խe^(n+ q\#DZ?D4Cy{Y+I5Cژ&g [\Y^Wwf>O#]~W= xoJrdh=э~ī%<&+.ZSVLPӴcKMj,Be1)HI93s@Oҿooo~r_'?Ye1׋V&KX@k-ā x(GTCPvM~:j@cDc<|0ۀmթ<]vLP5)Z$U )S x\^11Oyl#Qz¯'Y[w(Ec5+;1  %sÁ"z \!+AQr5zR% p`PxW!X*AKWƼQ wk11Fc1Z{Ws=t܃Ytl[Ex ?2dELa[AdPUU7&h=:DCv縱B꭪} < hD\]|±8xWXہ!aSj*Ҧs6L0.R\qꪫ0q7=y.mo3oV||#W^ S:>4MSN9%S %4\숞ϴE}.މYܚ1ꯀ@pg{$ktsۥNʇz'jԕI(uktŸh7gKZh_)9+I_N}"A>[KRC8+"cE{'v7΍E&ۭtqoH2-R

 IDAT;OO'seSOMWschy1JeE7PtEN&==OȌc;mHpT Cd$1vFVyVw;;H>2Ix8uKr5 59LQulDWOPU)Ԝ&3T?*O}T0b= c=>%DGpO|sRm1,᱌pРu*VUlI`UcE8q&۷7ps9y}UUm4۱1Pw3;{< OmeU%+c@ث&IU ȋF܈TxsѷUNRM¨7D 9=eݭ|щA@"sɶ9ؠ xn3*¬'AQ^"F7˩GCkd*v D]|VvDH6,uI~u\I!D 3'w].ƏK%Z$;r"_ ^ctGe-B8%mYhm%XʵXpCHa[[|K`yOI yYekvj\RۤPzJIT}UT֍ R\BA< ݏVK5LUI˹=M٘yÚ7! a캌5Z.Ɣ ;Jju}Q8P:jyKElL@|rCKۨ ~#BFd16#KΤ5Rc6bԊNI pw(U4䡍t2KxlCA+䬰S)X+#m<nS,$>{K^NV,-e4Uto+Un>U!!Ml;slpi .rW*ƪ커dzٮ]GnׄHºuy\\!cL(1 ,"]`2Z5';trxsñ2UaL>::d@|îSK6buU|?DGYHb4>ێn՚ ,s6X M[418&F*#9m~fPT3`WG܋#a~be;</+% v 2.;N0?6"ptƜ=A)eg +K$ƱBtrκz$e%Əދ0$&&HQ, dJ*$!ɐcn!öb)<|"+^$h+ b;iRsH*VEU=zqbIOntpe8 p'y} u= ]ylբ ܿ{HTAn'Ps*,W%Ub:LbT@]!E#3Gp^ #q~% ꚬ`JF $jBlάFs >\7\^գa4|0MxX9I)>(DoLpFbԵ+; $;@' ;!40tJ"ScdriT3;]p+%?!',`D-35Rq<a J~D` pU `8 [9Irsa}|(?MVF` HLju(_'>&U浴BxtJ'&NKe za./fmEUZ>"õiwfpg[0nn`Orȭioj PmW1,B9#6RєE EjHn2X " ǔd C`3\Z6E|\}ldmNouXIxI!=,rTJ@x*I6"?Ƣ掿Yi]7: ʽa[U3TnT|@J-XKa" pPh5\SXiJѕW8$V| 2`qL`,}h7/H*rWsʕPgŠYuu'R h'uNS$R~\^GD=bq Y זh&/Xg^x opvıMTXKƔPuDZ"-+EjT*i"' Y#ǝĮ̼͎߱ij ! ;Bz,;ȐB#IvLIƲeu֊j.6œlZzwFG| J?;3HU%>xG\Ws ~ 1@^jf$ᗕ3\`&ɱz2!$< .Qj.]ffbBjhH"vSTO`C>{mU{t7JBvO=,1x?c">s@Dy\>@(d  ᄇ(NI@^USSNc^P0A!Pυć! ~rMZvl6g2ݻw~܏UۓVjC.QԵv`3hccU;L(ϪlVjo=ۏo?sT|UD&\0KJ=ȕ`5G$%Ͻԅh v}V!ӈv2#Z(2&{D ?b:h՘VyQէA==i8Ϳ2fxnUm8qYkXGnSoXJK&2j&j|lNN')%'Kv1y('9(mm%&Ȏ9-M|5Mp ?(x\lP.VNc]PNxlnzn~ZWQ `cU/FؼJ *7~Ju}x'މ'6Bwadίݚ!>{6$#ExVɞ511A[Z.˝fFeEOҔ.={~ǏM@ RȝA~bK~xpZqOyvhM*o\Z|. a=j{1:f']}<`b Gz[X'<<=2k}nF\0fMv`~zmOy±EyT[Իćz&XW;7$?w :hΘjۀ8n[]602X*ҼcL=زP> ZGxm{]'g辷C|2] u['`|y 9'Z D d;,7oĴԋeߧ :I037HQU+Kƽu`^G躹AM3vr\F8uIdsv1MxTZׇ6N%n-:wjvsR pʹ XrtG\}wŒyM~ ejT"[ՏAƥ[MqD4oxm#k[5smBn %c'@5mILCxLAp$)57 ##Acm]eXCoD@:3H7*1G,;1Z5{1R20y3*mhZ%iT}c 9A|t!i e ? y #ʱײA!IR>j"Uk^S"<6`fPQsuB=:e_7͜דf~w\0H@9maH}d|9C&kgi[|"BmRcCM|YWN|_B:feօRvI¹{ҽK.Dd2J u#C"[ǸeI=ՆtE;ۺv9ijT_NfPEb^+ #'}@rsrbK~ ,L_ԨT+(NTl`,ّe P \kJ⟓"@ Tj6e ]>(.O ` Ng !*g9w3ROp}ɾmg~eV#!Q{N>e ҝ&y}bVP& 7:Y&빶;aN5}4I8y#Ȍ=N61>{YJ`X}Cl@=|~f ?'W]ȐF)R]mpCVT|UZXIRT D!`Qœ'; 28lO1xojd]ʄc[lt,kG Ruge_U%egL=㵫SXRq+@ $򉏜WA>e U'ܰfRu29E| }M|-sfw3XϺucxäKOʝ]}I*LP|&LIABN¸9 =EVڗ= huǭ:>bGz fJ \3֤.91 Y/|xGwkw\~p\Z6]K( uz1=4/S?JMEPd`BH()f" # Aַ4鑁.f1`\aIz;H > I?E~l a1E] c()RjSRNJ `%%>8#L!C999wx@Q?>$ɏW)W]Jmld#D7lcjM.-g)ƒuPtEj,w B FLbU*2W%uqAK_Kb T{9:VaJ$h H̩'m~S` B<>d Q82B£M~숸w ~BeVuwa+ioXVjAh]>r [sTjG.gk?-,#xVOM|xV 7ikW"}#9A[vKs'$<!=FO՝g#ExؿU >=o~& Fr&@ 9+In7[gE jZj >5G? V}:"g O"e 4 RIxD^UEԥuuUVX*(R#>P@?D=VKa#?,X-%+:)!~7>48VMm7/L"ćSeB9z˕J_ tp??Y軋$&1z{;&1{FI1꧳(uUlܭkd"}Pt{`>tGn}xx<>|XߑRrph7QA]r&a,%RD9\ Bz|QUӮ]x}x#OuQ؄?Jx8G^T@HX[ڃ:m81Wx1lY0H? eL2KVUi*<wC|Zp` jCN~l(h.!<%| ?ɷ  K2j[ZH΂=a\[RBY1C̍&! IDAT -,i('u)"I08@2uK +3zm7YCD9  dWrJgT2fs8,E|ƌ$O U@V5^ my~mX|ȓ)}wѴB!e/RZITwt#]j2"uIV,"> c4%_ksvQ{r )٪4̲$ԡCΜc^6(%#>Gy@_f\$JA,eTBRMGvyftEl㲊#?3+'ӓV[a C0S~` >2_1$Z 1fb€>";Ѣ*L6n1h7C!>HlL2>.#(qiþbR]"uEYWzBǦb~s{1c ^xxlc_FQ X fKIpu&l!gs}DU.Wj{[7҃K{RH_IbW3^C QU蔺?zuDA 5mr+@̿; BLDwCA,"S,τ|ٹ1"$SfLB:3DJTp&u5} NN`]L>nj";FǦݜmǸ\y ~ N=&VG/oP:FUxK4R BֳU=*<6 iԭ\2"wQ~)>Ka%' ءXƵ%R4\͎S{x" i8ɢ[ %;iXM",+yl;;ߏ\W` ~/$z 跏zRﬢdlwvn;|w͎q ;&O@xTsty͡U= 6=f5cBa+#잯(jҐP{injwqDJsUk7ޖX~ Q׌Q)+ZF\˻uL~~?_^X+}dnˎ-I5F'ovkTƒRyFJdP9#;-E ˹'%<|Ex=ǂ<@?Scs`z+Z.QcfjsGOLc1?HdgRِN3fgndL,'* Xcy qj@MPtvEqFDy'|#޻#l>.j툭~\]渧,k/tږMxT|Vqw2- .vcF} xKv0raB"hL#&;KJ3Wc3'1c2ɎI!1)-ܝTq!^xSa={0[=-|%ϲ@`Yb +[a;zpQalۧ+ջc5^(y!r\Y]2qu:,`ؙHjf:ww!=v*F4#J[)D<پЪGTlN#a+džN÷dK;! ݠiLEv'>rskl[C&`<ƒd|1N\( &b^A9j1n%#1YN[5c aqb'nےL[#]miz^3zC  vjeׯ"nJL` ֈIQ(QWMM|fk .ނ*ZrClEфG9A{Tރ@Ln ʣqҺI~u rwa5烂qVYg" l+>lwGpLF|Z$0퉲 A]T>ugpgqcSLZɬ!o>13=⟗}ܹ*&4btZ;^v oHm 906Al2..ۙD\|VpߣV.Bz\`Ew ѝ{M43J @GOx{?a1] p@)Vߘ[٠ 56ShI9:'|jn;fŨ4ZC|3CLp/Iw*%O2wWVkb&O]=y]qe|)#ꓼ6AOƒ:QE1Iß}L%^1u%#l:8! 
_`K\6C0䔮3xn.ݱm?E$Ga>Fw]D@8•X`t*"!=1*2:ťpМi\vkJϿHO `'a|9Ec |3;gDQ^`7)_=w3x o qXm$<}webh6pl0ҟ)ң'<<"WUCݳk+-V{0VyP*0qiQx&싸>e&6d#>'a;[,je!b}T'IRXc9cR.Z9Bǹ8v[Cf)L{<ۋs+BԮ'EծЙXe q\ 1 U P:0jt]`vhGQcn%(#J?$6їtt+ %Ȏ.~O3(ӷ}MRQ>YIkθ8G"P$څ5ATլwCƲ-%T&">@-dC`6:ݔc$j[)519.ሴحj߼6g +>7\}aޭz:1?r> !4gЪ])=^<>:xxG /AB.ΟRy8WP(38b/tƽKlOL4;(3c2򧸆8~zc # w7#h6}:6gM|Gv`_Ulv4>AmO~yuI ts_>KS$W%3ͨOISe)X3fQ$2 ~c?G oG]ר UUa6[nj{ D&4Vp;)d"bVï)/%&;"̮$V4H9J(/}f@帎ctS6SFeA;'SӒ:? 3/wMV>E<j9e:Kvu= 9 ա4li3oe:RA^iuf8_0U)Q甁mռ>:]'[cg<P M&WIPNB;> IL8%7׈`Prk׺) \\r(.6Ǟ$v-Yٸ pr0ѳbk aZ(,`] H̥_m΃(0c]-e'_=.>\uUػw/`޽-΁'h9s*-&;!?(r&9CUM(7(8nap_&T{ɔ8eGU4hX3`U1I=Kv$>߁(ᬾ9~}P9!>v灺7JE-pڃ^12̅!̶}(lXo}}bHルs&5lwT zeWwHخy/6Cxn7,ƌEd,lj(qȞĄ$ŐG8QD_;9ɜRgaКʹ&41%cǮ2v'Rg1%TVDxwn̎1zC.BgK2qs=xG[] !=v( oSWF Exx>(v[eMչӴnZde\bU_uSDmgXHSz}yO'Pl7n(cSԝSMf Th-_ IDAT2Z$dm3L'ȯ3?t_BUo';BKV?#JMYKH!/?+ dw6 QWne-?d({u 0ow;] Qr}ğLO`,C;p6+qNWV%bMܫ0O@}/Dή9;U9_ FZoe;MÐɟyڌi9W,(pX6U͹j.{l=#?yG$dǒ6gTS$,l'-܂//˓Do ox^/})^CPQy??m݆뮻r ^_28[`iF{jx4`؊TG)VsP 4gVS'Xmh@ @=Ƹ]lńud/W9uCXvH ,L%@KT3TX[;ug߁V{dz마ƛ X^jdNlArh{j?Ϭ/<;g Oݍf l lSzPd.9f7R|U-yT9jw'8 Ov=O^Qϭw&_}2m݆PJne!򖷌S?S˿K=z73*A9NV+ )"12j*eϺdab;žﰞMxj8S~H؄:u+:Ⱞ4ԳSQW{v2cu_1{8bb{'PY$[V=Kk٨gVJWcRL$M ebM|1}&~ix{ߋWѴ>(^`mm 0w /ooꫯƟ韚gSO=wyl]tgiH Sn)u+&< @A68UE:4îԆujŢFule뷾d uQܹT}@[42c[0g=wjítnIu=7klFm`()/OT IS\8X.ET:TG7}jжG.}kWlO\IN?Wz/w z|vm먛 Fжg[ahUP'4n' yĐ.Zp'T;"lU;c T{ E,}5b8DXb84!qkWa>Ru'l v[N5䩉5U-T{gr=cAf:Nǥɽ)JIMǞDu:?L\G 6AyߖU5l;gϞh~뮻pcoo|wo׿ykؤ$ '=IxS`!0 8PJГ1v-" ւaB=i8LJ[j bNPlMP0Q1SiLnYT7Yyz+l5 21 &8GxVsLƒ9}.CkeQz 9Z`8ǐ' VDuyJN['RXJm@qYgƷ-|3_j߿? G8SGU:>Ek_EUsmY +MdmK)sYW^.pows *ivU> .-cpn {yp KJ]g1a1C075a1>86e]rVz'"5(w{F)5G(R%wZڣ[7ϿeJo7~$&>t6)Uh1sفK$pv>GG ZkT U- CO8:w gCXLgFbd*[ɪJG&< y% 2"AY7Qbl׾X ÕoP EOxz!sY9pn4>馛&`GDUUx}x} QA ڰTI<(VII# Ъ &f߷wmz235FO2EU[BN(>^z&v EV~r=]I_ j>C7a @8Y%n[WY aqx%y=z|޵kf#Gi!ϓ;fȌVNvyr.- uEu&0TˌXBw6m*c.e3b'[,v f=Aӹuؓ^-'$+^Iì*BcH'=L *)#E"|O^{țVǮ5YZX6ՆÊb^(Y|>|0<9|Sȶ'ةZ^łWҒG|t,/z4B\ ۄ4w3ҘVlF& CkS:qȟh#sq鶇'v5H#hs9 钫ȞAM'>򷭦 N';YMddK+TjUզY;RVGĎ#ZL4xBj̃]ra k_Il^dC$ؖ4E%]d`/]vR3˩f~^X$U<OiZn?aJ'~}@墪f>2;\09쿧aKͥv?}.h7UGx*篝͆Q8[{_=p\prao8D9q)16, K#ˌNOY _fAD~^8MFaHzݷLj{-cːP 䓿|^v};Wr&7dDz+,&z:KfL29U?!6Q솜,p3yI٢7Kɓ)0*BKCM;3H.y |.#7bd>dRHN (|sm v1C}֤&>zCw$ZtuT1G٠a!7ڵhg37}Iq5Cu_wR6$G|6ds0puXiL=/%;h7!# h j,lDƂ ւe-(ɷ{m{CG.:'n3_~9N?t\{n.p5נm[\yNS 3 +PwΫ<֝XE Oٱ٫.Zo\T ? M|UN+QS{x.Yգ&(AlI[; 7|Pϧa~w]l5 <}85ߞC|g6͠@j׻?tn'UQsTh׆g5ujPmclyYPAL P# j7cɉKrCeR'X'>#/o;CbjSIEIRićAIkޅ󧝖K> //acc#HsA|?+r7=y.mo.Sc'"5i(5bǸ: ZzǴ1($qܬ0RbBeW9AH+պIT]YG7(>8*1*{NS&ljڃJsE,qn*MX ȏÌ';b/i+ޣ`L0J2w_SLA\ip|&:g?38^zi/_#G%/y ~Gkw]0mwMB qNj]NQ_, qmV%>`}^xRR|d];ń tG2PT'P:[1I5K+2XT T;iQw}@. Y yj3ກ4nijquЮu@*T-ޠ]hUcj$~7k j]^mځtTߎ #uT v͈q4ʮ S_|@L apPr6uiHD:w|3E l0pM7sk]Q!=KcUG1An%:tuN[uF#Ld>K>Fbu 5~MX׵%9º}߇]L3}y73CrQ]0ڠeIpG̎-4:"h ہ!E,c":L^ѮUX쪠tV+4k5fn.J)լܭ۹k7]{@d+[U-$8;J׶Ԗ\{$I~=t~ꭀemUjJ˜&fb{oi"dn0G `3wx^c1oc>VU56—>a3EvbWZ1Hr)jF$1| @>o4Fko MhgC>y.)D]a6kI)׎92dhA@ر>` )]M3 p;_CK5rz9;D# `["XnףE|0s#l{~@`4~b4n{ 2o@s߈ (MܘbP y3 y* 3 /0Brp (ڕ>R1+GfZ_%߻z#8w~+M{ңM86=`W*1%sej\ },܉kHt *}k9}J}RhRJI.i$Շ!<>;9$~"m6΂CCO$Vp9 IDATAT/I4UUF|{<_ .CmrD *Gx*lSո4ćue+nznϽnP{ t WR6QqroL6I:DzJ s{. 
yt-4.4.0/doc/source/reference/api/api.rst

.. _api-reference:

API Reference
=============

Plots and the Plotting Interface
--------------------------------

SlicePlot and ProjectionPlot
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. autosummary::

   ~yt.visualization.plot_window.SlicePlot
   ~yt.visualization.plot_window.AxisAlignedSlicePlot
   ~yt.visualization.plot_window.OffAxisSlicePlot
   ~yt.visualization.plot_window.ProjectionPlot
   ~yt.visualization.plot_window.AxisAlignedProjectionPlot
   ~yt.visualization.plot_window.OffAxisProjectionPlot
   ~yt.visualization.plot_window.WindowPlotMPL
   ~yt.visualization.plot_window.PlotWindow
   ~yt.visualization.plot_window.plot_2d

ProfilePlot and PhasePlot
^^^^^^^^^^^^^^^^^^^^^^^^^

.. autosummary::

   ~yt.visualization.profile_plotter.ProfilePlot
   ~yt.visualization.profile_plotter.PhasePlot
   ~yt.visualization.profile_plotter.PhasePlotMPL

Particle Plots
^^^^^^^^^^^^^^

.. autosummary::

   ~yt.visualization.particle_plots.ParticleProjectionPlot
   ~yt.visualization.particle_plots.ParticlePhasePlot
   ~yt.visualization.particle_plots.ParticlePlot

Fixed Resolution Pixelization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. autosummary::

   ~yt.visualization.fixed_resolution.FixedResolutionBuffer
   ~yt.visualization.fixed_resolution.ParticleImageBuffer
   ~yt.visualization.fixed_resolution.CylindricalFixedResolutionBuffer
   ~yt.visualization.fixed_resolution.OffAxisProjectionFixedResolutionBuffer

Writing FITS images
^^^^^^^^^^^^^^^^^^^

.. autosummary::

   ~yt.visualization.fits_image.FITSImageData
   ~yt.visualization.fits_image.FITSSlice
   ~yt.visualization.fits_image.FITSProjection
   ~yt.visualization.fits_image.FITSOffAxisSlice
   ~yt.visualization.fits_image.FITSOffAxisProjection
   ~yt.visualization.fits_image.FITSParticleProjection
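As a quick orientation to the plotting interface, here is a minimal sketch of
making a slice and a projection; the dataset path is the sample dataset used
throughout the yt documentation and is a placeholder for your own data.

.. code-block:: python

    import yt

    # Placeholder: the IsolatedGalaxy sample dataset; substitute your own.
    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")

    # Axis-aligned slice through the domain center.
    slc = yt.SlicePlot(ds, "z", ("gas", "density"))
    slc.save("slice_density.png")

    # Line-of-sight projection of the same field.
    prj = yt.ProjectionPlot(ds, "z", ("gas", "density"))
    prj.save("projection_density.png")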
Data Sources
------------

.. _physical-object-api:

Physical Objects
^^^^^^^^^^^^^^^^

These are the objects that act as physical selections of data, describing a
region in space. These are not typically addressed directly; see
:ref:`available-objects` for more information.

Base Classes
++++++++++++

These will almost never need to be instantiated on their own.

.. autosummary::

   ~yt.data_objects.data_containers.YTDataContainer
   ~yt.data_objects.selection_objects.data_selection_objects.YTSelectionContainer
   ~yt.data_objects.selection_objects.data_selection_objects.YTSelectionContainer0D
   ~yt.data_objects.selection_objects.data_selection_objects.YTSelectionContainer1D
   ~yt.data_objects.selection_objects.data_selection_objects.YTSelectionContainer2D
   ~yt.data_objects.selection_objects.data_selection_objects.YTSelectionContainer3D

Selection Objects
+++++++++++++++++

These objects are defined by some selection method or mechanism. Most are
geometric.

.. autosummary::

   ~yt.data_objects.selection_objects.point.YTPoint
   ~yt.data_objects.selection_objects.ray.YTOrthoRay
   ~yt.data_objects.selection_objects.ray.YTRay
   ~yt.data_objects.selection_objects.slices.YTSlice
   ~yt.data_objects.selection_objects.slices.YTCuttingPlane
   ~yt.data_objects.selection_objects.disk.YTDisk
   ~yt.data_objects.selection_objects.region.YTRegion
   ~yt.data_objects.selection_objects.object_collection.YTDataCollection
   ~yt.data_objects.selection_objects.spheroids.YTSphere
   ~yt.data_objects.selection_objects.spheroids.YTEllipsoid
   ~yt.data_objects.selection_objects.cut_region.YTCutRegion
   ~yt.data_objects.index_subobjects.grid_patch.AMRGridPatch
   ~yt.data_objects.index_subobjects.octree_subset.OctreeSubset
   ~yt.data_objects.index_subobjects.particle_container.ParticleContainer
   ~yt.data_objects.index_subobjects.unstructured_mesh.UnstructuredMesh
   ~yt.data_objects.index_subobjects.unstructured_mesh.SemiStructuredMesh

Construction Objects
++++++++++++++++++++

These objects typically require some effort to build. Often this means
integrating through the simulation in some way, or creating some large or
expensive set of intermediate data.

.. autosummary::

   ~yt.data_objects.construction_data_containers.YTStreamline
   ~yt.data_objects.construction_data_containers.YTQuadTreeProj
   ~yt.data_objects.construction_data_containers.YTCoveringGrid
   ~yt.data_objects.construction_data_containers.YTArbitraryGrid
   ~yt.data_objects.construction_data_containers.YTSmoothedCoveringGrid
   ~yt.data_objects.construction_data_containers.YTSurface

Time Series Objects
^^^^^^^^^^^^^^^^^^^

These are objects that either contain and represent or operate on series of
datasets.

.. autosummary::

   ~yt.data_objects.time_series.DatasetSeries
   ~yt.data_objects.time_series.DatasetSeriesObject
   ~yt.data_objects.time_series.SimulationTimeSeries
   ~yt.data_objects.time_series.TimeSeriesQuantitiesContainer
   ~yt.data_objects.time_series.AnalysisTaskProxy
   ~yt.data_objects.particle_trajectories.ParticleTrajectories

Geometry Handlers
-----------------

These objects generate an "index" into multiresolution data.

.. autosummary::

   ~yt.geometry.geometry_handler.Index
   ~yt.geometry.grid_geometry_handler.GridIndex
   ~yt.geometry.oct_geometry_handler.OctreeIndex
   ~yt.geometry.particle_geometry_handler.ParticleIndex
   ~yt.geometry.unstructured_mesh_handler.UnstructuredIndex

Units
-----

yt's symbolic unit handling system is now based on the external library unyt.
In complement, Dataset objects support the following methods to build arrays
and scalars with physical dimensions.

.. autosummary::

   yt.data_objects.static_output.Dataset.arr
   yt.data_objects.static_output.Dataset.quan
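For instance, a minimal sketch of building unit-aware quantities from a
dataset; the dataset path is again a placeholder.

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # placeholder path

    # ds.quan builds a scalar with units; ds.arr builds an array.
    radius = ds.quan(100.0, "kpc")
    velocities = ds.arr([1.0, 2.0, 3.0], "km/s")

    # Unit conversions are symbolic and exact.
    print(radius.to("cm"))
    print(velocities.in_units("pc/Myr"))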
Frontends
---------

.. autosummary::

AMRVAC
^^^^^^

.. autosummary::

   ~yt.frontends.amrvac.data_structures.AMRVACGrid
   ~yt.frontends.amrvac.data_structures.AMRVACHierarchy
   ~yt.frontends.amrvac.data_structures.AMRVACDataset
   ~yt.frontends.amrvac.fields.AMRVACFieldInfo
   ~yt.frontends.amrvac.io.AMRVACIOHandler
   ~yt.frontends.amrvac.io.read_amrvac_namelist

ARTIO
^^^^^

.. autosummary::

   ~yt.frontends.artio.data_structures.ARTIOIndex
   ~yt.frontends.artio.data_structures.ARTIOOctreeSubset
   ~yt.frontends.artio.data_structures.ARTIORootMeshSubset
   ~yt.frontends.artio.data_structures.ARTIODataset
   ~yt.frontends.artio.definitions.ARTIOconstants
   ~yt.frontends.artio.fields.ARTIOFieldInfo
   ~yt.frontends.artio.io.IOHandlerARTIO

Athena
^^^^^^

.. autosummary::

   ~yt.frontends.athena.data_structures.AthenaGrid
   ~yt.frontends.athena.data_structures.AthenaHierarchy
   ~yt.frontends.athena.data_structures.AthenaDataset
   ~yt.frontends.athena.fields.AthenaFieldInfo
   ~yt.frontends.athena.io.IOHandlerAthena

AMReX/Boxlib
^^^^^^^^^^^^

.. autosummary::

   ~yt.frontends.amrex.data_structures.BoxlibGrid
   ~yt.frontends.amrex.data_structures.BoxlibHierarchy
   ~yt.frontends.amrex.data_structures.BoxlibDataset
   ~yt.frontends.amrex.data_structures.CastroDataset
   ~yt.frontends.amrex.data_structures.MaestroDataset
   ~yt.frontends.amrex.data_structures.NyxHierarchy
   ~yt.frontends.amrex.data_structures.NyxDataset
   ~yt.frontends.amrex.data_structures.OrionHierarchy
   ~yt.frontends.amrex.data_structures.OrionDataset
   ~yt.frontends.amrex.fields.BoxlibFieldInfo
   ~yt.frontends.amrex.io.IOHandlerBoxlib
   ~yt.frontends.amrex.io.IOHandlerOrion

CfRadial
^^^^^^^^

.. autosummary::

   ~yt.frontends.cf_radial.data_structures.CFRadialGrid
   ~yt.frontends.cf_radial.data_structures.CFRadialHierarchy
   ~yt.frontends.cf_radial.data_structures.CFRadialDataset
   ~yt.frontends.cf_radial.fields.CFRadialFieldInfo
   ~yt.frontends.cf_radial.io.CFRadialIOHandler

Chombo
^^^^^^

.. autosummary::

   ~yt.frontends.chombo.data_structures.ChomboGrid
   ~yt.frontends.chombo.data_structures.ChomboHierarchy
   ~yt.frontends.chombo.data_structures.ChomboDataset
   ~yt.frontends.chombo.data_structures.Orion2Hierarchy
   ~yt.frontends.chombo.data_structures.Orion2Dataset
   ~yt.frontends.chombo.io.IOHandlerChomboHDF5
   ~yt.frontends.chombo.io.IOHandlerOrion2HDF5

Enzo
^^^^

.. autosummary::

   ~yt.frontends.enzo.answer_testing_support.ShockTubeTest
   ~yt.frontends.enzo.data_structures.EnzoGrid
   ~yt.frontends.enzo.data_structures.EnzoGridGZ
   ~yt.frontends.enzo.data_structures.EnzoGridInMemory
   ~yt.frontends.enzo.data_structures.EnzoHierarchy1D
   ~yt.frontends.enzo.data_structures.EnzoHierarchy2D
   ~yt.frontends.enzo.data_structures.EnzoHierarchy
   ~yt.frontends.enzo.data_structures.EnzoHierarchyInMemory
   ~yt.frontends.enzo.data_structures.EnzoDatasetInMemory
   ~yt.frontends.enzo.data_structures.EnzoDataset
   ~yt.frontends.enzo.fields.EnzoFieldInfo
   ~yt.frontends.enzo.io.IOHandlerInMemory
   ~yt.frontends.enzo.io.IOHandlerPacked1D
   ~yt.frontends.enzo.io.IOHandlerPacked2D
   ~yt.frontends.enzo.io.IOHandlerPackedHDF5
   ~yt.frontends.enzo.io.IOHandlerPackedHDF5GhostZones
   ~yt.frontends.enzo.simulation_handling.EnzoCosmology
   ~yt.frontends.enzo.simulation_handling.EnzoSimulation

FITS
^^^^

.. autosummary::

   ~yt.frontends.fits.data_structures.FITSGrid
   ~yt.frontends.fits.data_structures.FITSHierarchy
   ~yt.frontends.fits.data_structures.FITSDataset
   ~yt.frontends.fits.fields.FITSFieldInfo
   ~yt.frontends.fits.io.IOHandlerFITS
FLASH
^^^^^

.. autosummary::

   ~yt.frontends.flash.data_structures.FLASHGrid
   ~yt.frontends.flash.data_structures.FLASHHierarchy
   ~yt.frontends.flash.data_structures.FLASHDataset
   ~yt.frontends.flash.fields.FLASHFieldInfo
   ~yt.frontends.flash.io.IOHandlerFLASH

GDF
^^^

.. autosummary::

   ~yt.frontends.gdf.data_structures.GDFGrid
   ~yt.frontends.gdf.data_structures.GDFHierarchy
   ~yt.frontends.gdf.data_structures.GDFDataset
   ~yt.frontends.gdf.io.IOHandlerGDFHDF5

Halo Catalogs
^^^^^^^^^^^^^

.. autosummary::

   ~yt.frontends.ahf.data_structures.AHFHalosDataset
   ~yt.frontends.ahf.fields.AHFHalosFieldInfo
   ~yt.frontends.ahf.io.IOHandlerAHFHalos
   ~yt.frontends.gadget_fof.data_structures.GadgetFOFDataset
   ~yt.frontends.gadget_fof.data_structures.GadgetFOFHDF5File
   ~yt.frontends.gadget_fof.data_structures.GadgetFOFHaloDataset
   ~yt.frontends.gadget_fof.io.IOHandlerGadgetFOFHDF5
   ~yt.frontends.gadget_fof.io.IOHandlerGadgetFOFHaloHDF5
   ~yt.frontends.gadget_fof.fields.GadgetFOFFieldInfo
   ~yt.frontends.gadget_fof.fields.GadgetFOFHaloFieldInfo
   ~yt.frontends.halo_catalog.data_structures.YTHaloCatalogFile
   ~yt.frontends.halo_catalog.data_structures.YTHaloCatalogDataset
   ~yt.frontends.halo_catalog.fields.YTHaloCatalogFieldInfo
   ~yt.frontends.halo_catalog.io.IOHandlerYTHaloCatalog
   ~yt.frontends.owls_subfind.data_structures.OWLSSubfindParticleIndex
   ~yt.frontends.owls_subfind.data_structures.OWLSSubfindHDF5File
   ~yt.frontends.owls_subfind.data_structures.OWLSSubfindDataset
   ~yt.frontends.owls_subfind.fields.OWLSSubfindFieldInfo
   ~yt.frontends.owls_subfind.io.IOHandlerOWLSSubfindHDF5
   ~yt.frontends.rockstar.data_structures.RockstarBinaryFile
   ~yt.frontends.rockstar.data_structures.RockstarDataset
   ~yt.frontends.rockstar.fields.RockstarFieldInfo
   ~yt.frontends.rockstar.io.IOHandlerRockstarBinary

MOAB
^^^^

.. autosummary::

   ~yt.frontends.moab.data_structures.MoabHex8Hierarchy
   ~yt.frontends.moab.data_structures.MoabHex8Mesh
   ~yt.frontends.moab.data_structures.MoabHex8Dataset
   ~yt.frontends.moab.data_structures.PyneHex8Mesh
   ~yt.frontends.moab.data_structures.PyneMeshHex8Hierarchy
   ~yt.frontends.moab.data_structures.PyneMoabHex8Dataset
   ~yt.frontends.moab.io.IOHandlerMoabH5MHex8
   ~yt.frontends.moab.io.IOHandlerMoabPyneHex8

OpenPMD
^^^^^^^

.. autosummary::

   ~yt.frontends.open_pmd.data_structures.OpenPMDGrid
   ~yt.frontends.open_pmd.data_structures.OpenPMDHierarchy
   ~yt.frontends.open_pmd.data_structures.OpenPMDDataset
   ~yt.frontends.open_pmd.fields.OpenPMDFieldInfo
   ~yt.frontends.open_pmd.io.IOHandlerOpenPMDHDF5
   ~yt.frontends.open_pmd.misc.parse_unit_dimension
   ~yt.frontends.open_pmd.misc.is_const_component
   ~yt.frontends.open_pmd.misc.get_component

RAMSES
^^^^^^

.. autosummary::

   ~yt.frontends.ramses.data_structures.RAMSESDomainFile
   ~yt.frontends.ramses.data_structures.RAMSESDomainSubset
   ~yt.frontends.ramses.data_structures.RAMSESIndex
   ~yt.frontends.ramses.data_structures.RAMSESDataset
   ~yt.frontends.ramses.fields.RAMSESFieldInfo
   ~yt.frontends.ramses.io.IOHandlerRAMSES
SPH and Particle Codes
^^^^^^^^^^^^^^^^^^^^^^

.. autosummary::

   ~yt.frontends.gadget.data_structures.GadgetBinaryFile
   ~yt.frontends.gadget.data_structures.GadgetHDF5Dataset
   ~yt.frontends.gadget.data_structures.GadgetDataset
   ~yt.frontends.http_stream.data_structures.HTTPParticleFile
   ~yt.frontends.http_stream.data_structures.HTTPStreamDataset
   ~yt.frontends.owls.data_structures.OWLSDataset
   ~yt.frontends.sph.data_structures.ParticleDataset
   ~yt.frontends.tipsy.data_structures.TipsyFile
   ~yt.frontends.tipsy.data_structures.TipsyDataset
   ~yt.frontends.sph.fields.SPHFieldInfo
   ~yt.frontends.gadget.io.IOHandlerGadgetBinary
   ~yt.frontends.gadget.io.IOHandlerGadgetHDF5
   ~yt.frontends.http_stream.io.IOHandlerHTTPStream
   ~yt.frontends.owls.io.IOHandlerOWLS
   ~yt.frontends.tipsy.io.IOHandlerTipsyBinary

Stream
^^^^^^

.. autosummary::

   ~yt.frontends.stream.data_structures.StreamDictFieldHandler
   ~yt.frontends.stream.data_structures.StreamGrid
   ~yt.frontends.stream.data_structures.StreamHandler
   ~yt.frontends.stream.data_structures.StreamHexahedralHierarchy
   ~yt.frontends.stream.data_structures.StreamHexahedralMesh
   ~yt.frontends.stream.data_structures.StreamHexahedralDataset
   ~yt.frontends.stream.data_structures.StreamHierarchy
   ~yt.frontends.stream.data_structures.StreamOctreeHandler
   ~yt.frontends.stream.data_structures.StreamOctreeDataset
   ~yt.frontends.stream.data_structures.StreamOctreeSubset
   ~yt.frontends.stream.data_structures.StreamParticleFile
   ~yt.frontends.stream.data_structures.StreamParticleIndex
   ~yt.frontends.stream.data_structures.StreamParticlesDataset
   ~yt.frontends.stream.data_structures.StreamDataset
   ~yt.frontends.stream.fields.StreamFieldInfo
   ~yt.frontends.stream.io.IOHandlerStream
   ~yt.frontends.stream.io.IOHandlerStreamHexahedral
   ~yt.frontends.stream.io.IOHandlerStreamOctree
   ~yt.frontends.stream.io.StreamParticleIOHandler

ytdata
^^^^^^

.. autosummary::

   ~yt.frontends.ytdata.data_structures.YTDataContainerDataset
   ~yt.frontends.ytdata.data_structures.YTSpatialPlotDataset
   ~yt.frontends.ytdata.data_structures.YTGridDataset
   ~yt.frontends.ytdata.data_structures.YTGridHierarchy
   ~yt.frontends.ytdata.data_structures.YTGrid
   ~yt.frontends.ytdata.data_structures.YTNonspatialDataset
   ~yt.frontends.ytdata.data_structures.YTNonspatialHierarchy
   ~yt.frontends.ytdata.data_structures.YTNonspatialGrid
   ~yt.frontends.ytdata.data_structures.YTProfileDataset
   ~yt.frontends.ytdata.data_structures.YTClumpTreeDataset
   ~yt.frontends.ytdata.data_structures.YTClumpContainer
   ~yt.frontends.ytdata.fields.YTDataContainerFieldInfo
   ~yt.frontends.ytdata.fields.YTGridFieldInfo
   ~yt.frontends.ytdata.io.IOHandlerYTDataContainerHDF5
   ~yt.frontends.ytdata.io.IOHandlerYTGridHDF5
   ~yt.frontends.ytdata.io.IOHandlerYTSpatialPlotHDF5
   ~yt.frontends.ytdata.io.IOHandlerYTNonspatialhdf5

Loading Data
------------

.. autosummary::

   ~yt.loaders.load
   ~yt.loaders.load_uniform_grid
   ~yt.loaders.load_amr_grids
   ~yt.loaders.load_particles
   ~yt.loaders.load_octree
   ~yt.loaders.load_hexahedral_mesh
   ~yt.loaders.load_unstructured_mesh
   ~yt.loaders.load_sample
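As an illustration of the in-memory loaders, here is a minimal, self-contained
sketch using ``load_uniform_grid``; the field values, grid size, and length
unit are arbitrary choices.

.. code-block:: python

    import numpy as np

    import yt

    # Build a fake 64^3 density cube and wrap it as a yt dataset.
    data = {("gas", "density"): np.random.random((64, 64, 64))}
    ds = yt.load_uniform_grid(data, data[("gas", "density")].shape, length_unit="Mpc")

    ad = ds.all_data()
    print(ad[("gas", "density")].mean())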
Derived Datatypes
-----------------

Profiles and Histograms
^^^^^^^^^^^^^^^^^^^^^^^

These types are used to sum data up and either return that sum or return an
average. Typically they are more easily used through the ``ProfilePlot`` and
``PhasePlot`` interfaces. We also provide the ``create_profile`` function to
create these objects in a uniform manner.

.. autosummary::

   ~yt.data_objects.profiles.ProfileND
   ~yt.data_objects.profiles.Profile1D
   ~yt.data_objects.profiles.Profile2D
   ~yt.data_objects.profiles.Profile3D
   ~yt.data_objects.profiles.ParticleProfile
   ~yt.data_objects.profiles.create_profile
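For instance, a minimal sketch of ``create_profile`` on a sphere; the dataset
path is a placeholder and the field choices are arbitrary.

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # placeholder path
    sphere = ds.sphere("c", (100.0, "kpc"))

    # 1D profile of temperature versus density, weighted by cell mass.
    profile = yt.create_profile(
        sphere,
        bin_fields=[("gas", "density")],
        fields=[("gas", "temperature")],
        weight_field=("gas", "mass"),
    )
    print(profile[("gas", "temperature")])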
.. _clump_finding_ref:

Clump Finding
^^^^^^^^^^^^^

The ``Clump`` object and associated functions can be used for identification
of topologically disconnected structures, i.e., clump finding.

.. autosummary::

   ~yt.data_objects.level_sets.clump_handling.Clump
   ~yt.data_objects.level_sets.clump_handling.Clump.add_info_item
   ~yt.data_objects.level_sets.clump_handling.Clump.add_validator
   ~yt.data_objects.level_sets.clump_handling.Clump.save_as_dataset
   ~yt.data_objects.level_sets.clump_handling.find_clumps
   ~yt.data_objects.level_sets.clump_info_items.add_clump_info
   ~yt.data_objects.level_sets.clump_validators.add_validator

X-ray Emission Fields
^^^^^^^^^^^^^^^^^^^^^

This can be used to create derived fields of X-ray emission in different
energy bands.

.. autosummary::

   ~yt.fields.xray_emission_fields.XrayEmissivityIntegrator
   ~yt.fields.xray_emission_fields.add_xray_emissivity_field

Field Types
-----------

.. autosummary::

   ~yt.fields.field_info_container.FieldInfoContainer
   ~yt.fields.derived_field.DerivedField
   ~yt.fields.derived_field.ValidateDataField
   ~yt.fields.derived_field.ValidateGridType
   ~yt.fields.derived_field.ValidateParameter
   ~yt.fields.derived_field.ValidateProperty
   ~yt.fields.derived_field.ValidateSpatial

Field Functions
---------------

.. autosummary::

   ~yt.fields.field_info_container.FieldInfoContainer.add_field
   ~yt.data_objects.static_output.Dataset.add_field
   ~yt.data_objects.static_output.Dataset.add_deposited_particle_field
   ~yt.data_objects.static_output.Dataset.add_mesh_sampling_particle_field
   ~yt.data_objects.static_output.Dataset.add_gradient_fields
   ~yt.frontends.stream.data_structures.StreamParticlesDataset.add_sph_fields

Particle Filters
----------------

.. autosummary::

   ~yt.data_objects.particle_filters.add_particle_filter
   ~yt.data_objects.particle_filters.particle_filter
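To make the field machinery concrete, here is a minimal sketch of registering
a derived field with ``Dataset.add_field``; the field definition itself is an
arbitrary example and the dataset path is a placeholder.

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # placeholder path


    def _density_squared(field, data):
        # Arbitrary example: the square of an existing field.
        return data["gas", "density"] ** 2


    ds.add_field(
        ("gas", "density_squared"),
        function=_density_squared,
        sampling_type="cell",
        units="g**2/cm**6",
    )

    ad = ds.all_data()
    print(ad["gas", "density_squared"].max())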
Image Handling
--------------

For volume renderings and fixed resolution buffers the image object returned
is an ``ImageArray`` object, which has useful functions for image saving and
writing to bitmaps.

.. autosummary::

   ~yt.data_objects.image_array.ImageArray

Volume Rendering
^^^^^^^^^^^^^^^^

See also :ref:`volume_rendering`.

Here are the primary entry points and the main classes involved in the Scene
infrastructure:

.. autosummary::

   ~yt.visualization.volume_rendering.volume_rendering.volume_render
   ~yt.visualization.volume_rendering.volume_rendering.create_scene
   ~yt.visualization.volume_rendering.off_axis_projection.off_axis_projection
   ~yt.visualization.volume_rendering.scene.Scene
   ~yt.visualization.volume_rendering.camera.Camera
   ~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree

The different kinds of sources:

.. autosummary::

   ~yt.visualization.volume_rendering.render_source.RenderSource
   ~yt.visualization.volume_rendering.render_source.VolumeSource
   ~yt.visualization.volume_rendering.render_source.PointSource
   ~yt.visualization.volume_rendering.render_source.LineSource
   ~yt.visualization.volume_rendering.render_source.BoxSource
   ~yt.visualization.volume_rendering.render_source.GridSource
   ~yt.visualization.volume_rendering.render_source.CoordinateVectorSource
   ~yt.visualization.volume_rendering.render_source.MeshSource

The different kinds of transfer functions:

.. autosummary::

   ~yt.visualization.volume_rendering.transfer_functions.TransferFunction
   ~yt.visualization.volume_rendering.transfer_functions.ColorTransferFunction
   ~yt.visualization.volume_rendering.transfer_functions.ProjectionTransferFunction
   ~yt.visualization.volume_rendering.transfer_functions.PlanckTransferFunction
   ~yt.visualization.volume_rendering.transfer_functions.MultiVariateTransferFunction
   ~yt.visualization.volume_rendering.transfer_function_helper.TransferFunctionHelper

The different kinds of lenses:

.. autosummary::

   ~yt.visualization.volume_rendering.lens.Lens
   ~yt.visualization.volume_rendering.lens.PlaneParallelLens
   ~yt.visualization.volume_rendering.lens.PerspectiveLens
   ~yt.visualization.volume_rendering.lens.StereoPerspectiveLens
   ~yt.visualization.volume_rendering.lens.FisheyeLens
   ~yt.visualization.volume_rendering.lens.SphericalLens
   ~yt.visualization.volume_rendering.lens.StereoSphericalLens
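A minimal sketch of the Scene entry point, assuming a dataset with a gas
density field; the dataset path is a placeholder.

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # placeholder path

    # create_scene returns a Scene with a default VolumeSource and Camera.
    sc = yt.create_scene(ds, field=("gas", "density"))
    sc.camera.zoom(1.5)
    sc.save("rendering.png")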
Streamlining
^^^^^^^^^^^^

See also :ref:`streamlines`.

.. autosummary::

   ~yt.visualization.streamlines.Streamlines

Image Writing
^^^^^^^^^^^^^

These functions are all used for fast writing of images directly to disk,
without calling matplotlib. This can be very useful for high-cadence outputs
where colorbars are unnecessary or for volume rendering.

.. autosummary::

   ~yt.visualization.image_writer.multi_image_composite
   ~yt.visualization.image_writer.write_bitmap
   ~yt.visualization.image_writer.write_projection
   ~yt.visualization.image_writer.write_image
   ~yt.visualization.image_writer.map_to_colors
   ~yt.visualization.image_writer.strip_colormap_data
   ~yt.visualization.image_writer.splat_points
   ~yt.visualization.image_writer.scale_image

We also provide a module that is very good for generating EPS figures,
particularly with complicated layouts.

.. autosummary::

   ~yt.visualization.eps_writer.DualEPS
   ~yt.visualization.eps_writer.single_plot
   ~yt.visualization.eps_writer.multiplot
   ~yt.visualization.eps_writer.multiplot_yt
   ~yt.visualization.eps_writer.return_colormap

.. _derived-quantities-api:

Derived Quantities
------------------

See :ref:`derived-quantities`.

.. autosummary::

   ~yt.data_objects.derived_quantities.DerivedQuantity
   ~yt.data_objects.derived_quantities.DerivedQuantityCollection
   ~yt.data_objects.derived_quantities.WeightedAverageQuantity
   ~yt.data_objects.derived_quantities.AngularMomentumVector
   ~yt.data_objects.derived_quantities.BulkVelocity
   ~yt.data_objects.derived_quantities.CenterOfMass
   ~yt.data_objects.derived_quantities.Extrema
   ~yt.data_objects.derived_quantities.MaxLocation
   ~yt.data_objects.derived_quantities.MinLocation
   ~yt.data_objects.derived_quantities.SpinParameter
   ~yt.data_objects.derived_quantities.TotalMass
   ~yt.data_objects.derived_quantities.TotalQuantity
   ~yt.data_objects.derived_quantities.WeightedAverageQuantity

.. _callback-api:

Callback List
-------------

See also :ref:`callbacks`.

.. autosummary::

   ~yt.visualization.plot_window.PWViewerMPL.clear_annotations
   ~yt.visualization.plot_modifications.ArrowCallback
   ~yt.visualization.plot_modifications.CellEdgesCallback
   ~yt.visualization.plot_modifications.ClumpContourCallback
   ~yt.visualization.plot_modifications.ContourCallback
   ~yt.visualization.plot_modifications.CuttingQuiverCallback
   ~yt.visualization.plot_modifications.GridBoundaryCallback
   ~yt.visualization.plot_modifications.ImageLineCallback
   ~yt.visualization.plot_modifications.LinePlotCallback
   ~yt.visualization.plot_modifications.MagFieldCallback
   ~yt.visualization.plot_modifications.MarkerAnnotateCallback
   ~yt.visualization.plot_modifications.ParticleCallback
   ~yt.visualization.plot_modifications.PointAnnotateCallback
   ~yt.visualization.plot_modifications.QuiverCallback
   ~yt.visualization.plot_modifications.RayCallback
   ~yt.visualization.plot_modifications.ScaleCallback
   ~yt.visualization.plot_modifications.SphereCallback
   ~yt.visualization.plot_modifications.StreamlineCallback
   ~yt.visualization.plot_modifications.TextLabelCallback
   ~yt.visualization.plot_modifications.TimestampCallback
   ~yt.visualization.plot_modifications.TitleCallback
   ~yt.visualization.plot_modifications.TriangleFacetsCallback
   ~yt.visualization.plot_modifications.VelocityCallback

Colormap Functions
------------------

See also :ref:`colormaps`.

.. autosummary::

   ~yt.visualization.color_maps.add_colormap
   ~yt.visualization.color_maps.make_colormap
   ~yt.visualization.color_maps.show_colormaps

Function List
-------------

.. autosummary::

   ~yt.frontends.ytdata.utilities.save_as_dataset
   ~yt.data_objects.data_containers.YTDataContainer.save_as_dataset
   ~yt.data_objects.static_output.Dataset.all_data
   ~yt.data_objects.static_output.Dataset.box
   ~yt.funcs.enable_plugins
   ~yt.funcs.get_pbar
   ~yt.funcs.humanize_time
   ~yt.funcs.insert_ipython
   ~yt.funcs.is_root
   ~yt.funcs.is_sequence
   ~yt.funcs.iter_fields
   ~yt.funcs.just_one
   ~yt.funcs.only_on_root
   ~yt.funcs.paste_traceback
   ~yt.funcs.pdb_run
   ~yt.funcs.print_tb
   ~yt.funcs.rootonly
   ~yt.funcs.time_execution
   ~yt.data_objects.level_sets.contour_finder.identify_contours
   ~yt.utilities.parallel_tools.parallel_analysis_interface.enable_parallelism
   ~yt.utilities.parallel_tools.parallel_analysis_interface.parallel_blocking_call
   ~yt.utilities.parallel_tools.parallel_analysis_interface.parallel_objects
   ~yt.utilities.parallel_tools.parallel_analysis_interface.parallel_passthrough
   ~yt.utilities.parallel_tools.parallel_analysis_interface.parallel_root_only
   ~yt.utilities.parallel_tools.parallel_analysis_interface.parallel_simple_proxy
   ~yt.data_objects.data_containers.YTDataContainer.get_field_parameter
   ~yt.data_objects.data_containers.YTDataContainer.set_field_parameter

Math Utilities
--------------

.. autosummary::

   ~yt.utilities.math_utils.periodic_position
   ~yt.utilities.math_utils.periodic_dist
   ~yt.utilities.math_utils.euclidean_dist
   ~yt.utilities.math_utils.rotate_vector_3D
   ~yt.utilities.math_utils.modify_reference_frame
   ~yt.utilities.math_utils.compute_rotational_velocity
   ~yt.utilities.math_utils.compute_parallel_velocity
   ~yt.utilities.math_utils.compute_radial_velocity
   ~yt.utilities.math_utils.compute_cylindrical_radius
   ~yt.utilities.math_utils.ortho_find
   ~yt.utilities.math_utils.quartiles
   ~yt.utilities.math_utils.get_rotation_matrix
   ~yt.utilities.math_utils.get_sph_r
   ~yt.utilities.math_utils.resize_vector
   ~yt.utilities.math_utils.get_sph_theta
   ~yt.utilities.math_utils.get_sph_phi
   ~yt.utilities.math_utils.get_cyl_r
   ~yt.utilities.math_utils.get_cyl_z
   ~yt.utilities.math_utils.get_cyl_theta
   ~yt.utilities.math_utils.get_cyl_r_component
   ~yt.utilities.math_utils.get_cyl_theta_component
   ~yt.utilities.math_utils.get_cyl_z_component
   ~yt.utilities.math_utils.get_sph_r_component
   ~yt.utilities.math_utils.get_sph_phi_component
   ~yt.utilities.math_utils.get_sph_theta_component

Miscellaneous Types
-------------------

.. autosummary::

   ~yt.config.YTConfig
   ~yt.utilities.parameter_file_storage.ParameterFileStore
   ~yt.utilities.parallel_tools.parallel_analysis_interface.ObjectIterator
   ~yt.utilities.parallel_tools.parallel_analysis_interface.ParallelAnalysisInterface
   ~yt.utilities.parallel_tools.parallel_analysis_interface.ParallelObjectIterator

.. _cosmology-calculator-ref:

Cosmology Calculator
--------------------

.. autosummary::

   ~yt.utilities.cosmology.Cosmology
   ~yt.utilities.cosmology.Cosmology.hubble_distance
   ~yt.utilities.cosmology.Cosmology.comoving_radial_distance
   ~yt.utilities.cosmology.Cosmology.comoving_transverse_distance
   ~yt.utilities.cosmology.Cosmology.comoving_volume
   ~yt.utilities.cosmology.Cosmology.angular_diameter_distance
   ~yt.utilities.cosmology.Cosmology.angular_scale
   ~yt.utilities.cosmology.Cosmology.luminosity_distance
   ~yt.utilities.cosmology.Cosmology.lookback_time
   ~yt.utilities.cosmology.Cosmology.critical_density
   ~yt.utilities.cosmology.Cosmology.hubble_parameter
   ~yt.utilities.cosmology.Cosmology.expansion_factor
   ~yt.utilities.cosmology.Cosmology.z_from_t
   ~yt.utilities.cosmology.Cosmology.t_from_z
   ~yt.utilities.cosmology.Cosmology.get_dark_factor
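For instance, a minimal sketch of the cosmology calculator; default
cosmological parameters are assumed here, so pass ``hubble_constant``,
``omega_matter``, etc. explicitly to match a particular simulation.

.. code-block:: python

    from yt.utilities.cosmology import Cosmology

    co = Cosmology()

    # Distances and times come back as unit-aware quantities.
    print(co.hubble_distance().to("Mpc"))
    print(co.t_from_z(2.0).to("Gyr"))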
Testing Infrastructure
----------------------

The core set of testing functions are re-exported from NumPy, and are
deprecated (prefer using `numpy.testing `_ directly).

.. autosummary::

   ~yt.testing.assert_array_equal
   ~yt.testing.assert_almost_equal
   ~yt.testing.assert_approx_equal
   ~yt.testing.assert_array_almost_equal
   ~yt.testing.assert_equal
   ~yt.testing.assert_array_less
   ~yt.testing.assert_string_equal
   ~yt.testing.assert_array_almost_equal_nulp
   ~yt.testing.assert_allclose
   ~yt.testing.assert_raises

`unyt.testing `_ also provides some specialized functions for comparing
arrays in a units-aware fashion.

Finally, yt provides the following functions:

.. autosummary::

   ~yt.testing.assert_rel_equal
   ~yt.testing.amrspace
   ~yt.testing.expand_keywords
   ~yt.testing.fake_random_ds
   ~yt.testing.fake_amr_ds
   ~yt.testing.fake_particle_ds
   ~yt.testing.fake_tetrahedral_ds
   ~yt.testing.fake_hexahedral_ds
   ~yt.testing.small_fake_hexahedral_ds
   ~yt.testing.fake_stretched_ds
   ~yt.testing.fake_vr_orientation_test_ds
   ~yt.testing.fake_sph_orientation_ds
   ~yt.testing.fake_sph_grid_ds
   ~yt.testing.fake_octree_ds

These are for the pytest infrastructure:

.. autosummary::

   ~conftest.hashing
   ~yt.utilities.answer_testing.answer_tests.grid_hierarchy
   ~yt.utilities.answer_testing.answer_tests.parentage_relationships
   ~yt.utilities.answer_testing.answer_tests.grid_values
   ~yt.utilities.answer_testing.answer_tests.projection_values
   ~yt.utilities.answer_testing.answer_tests.field_values
   ~yt.utilities.answer_testing.answer_tests.pixelized_projection_values
   ~yt.utilities.answer_testing.answer_tests.small_patch_amr
   ~yt.utilities.answer_testing.answer_tests.big_patch_amr
   ~yt.utilities.answer_testing.answer_tests.generic_array
   ~yt.utilities.answer_testing.answer_tests.sph_answer
   ~yt.utilities.answer_testing.answer_tests.get_field_size_and_mean
   ~yt.utilities.answer_testing.answer_tests.plot_window_attribute
   ~yt.utilities.answer_testing.answer_tests.phase_plot_attribute
   ~yt.utilities.answer_testing.answer_tests.generic_image
   ~yt.utilities.answer_testing.answer_tests.axial_pixelization
   ~yt.utilities.answer_testing.answer_tests.extract_connected_sets
   ~yt.utilities.answer_testing.answer_tests.VR_image_comparison
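To show how the fake-dataset helpers are typically used in tests, here is a
small sketch that requires no external data; the field name and comparison
are arbitrary examples.

.. code-block:: python

    from yt.testing import assert_rel_equal, fake_random_ds

    # Build a small in-memory dataset with random field values.
    ds = fake_random_ds(16, fields=("density",), units=("g/cm**3",))
    ad = ds.all_data()

    # Compare two quantities to 8 significant digits.
    total = ad["gas", "density"].sum()
    assert_rel_equal(total, total, 8)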
yt-4.4.0/doc/source/reference/changelog.rst

.. _changelog:

ChangeLog
=========

This is a non-comprehensive log of changes to yt over its many releases.

Contributors
------------

The `CREDITS file `_ contains the most up-to-date list of everyone who has
contributed to the yt source code.

yt 4.0
------

Welcome to yt 4.0! This release is the result of several years' worth of
developer effort and has been in progress since the mid 3.x series. Please
keep in mind that this release **will** have breaking changes. Please see the
yt 4.0 differences page for how you can expect behavior to differ from the
3.x series.

This is a manually curated list of pull requests that went into yt 4.0,
representing a subset of `the full list `__.

New Functions
^^^^^^^^^^^^^

- ``yt.load_sample`` (PR #\ `2417 `__, PR #\ `2496 `__, PR #\ `2875 `__,
  PR #\ `2877 `__, PR #\ `2894 `__, PR #\ `3262 `__, PR #\ `3263 `__,
  PR #\ `3277 `__, PR #\ `3309 `__, and PR #\ `3336 `__); a short usage
  sketch appears at the end of this section
- ``yt.set_log_level`` (PR #\ `2869 `__ and PR #\ `3094 `__)
- ``list_annotations`` method for plots (PR #\ `2562 `__)

API improvements
^^^^^^^^^^^^^^^^

- ``yt.load`` with support for ``os.PathLike`` objects, improved UX, and a
  move to the new ``yt.loaders`` module, along with sibling functions
  (PR #\ `2405 `__, PR #\ `2722 `__, PR #\ `2695 `__, PR #\ `2818 `__,
  PR #\ `2831 `__, and PR #\ `2832 `__)
- ``Dataset`` now has a more useful repr (PR #\ `3217 `__)
- Explicit JPEG export support (PR #\ `2549 `__)
- ``annotate_clear`` is now ``clear_annotations`` (PR #\ `2569 `__)
- Throw an error if field access is ambiguous (PR #\ `2967 `__)

Newly supported data formats
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Arepo
~~~~~

- PR #\ `1807 `__
- PR #\ `2236 `__
- PR #\ `2244 `__
- PR #\ `2344 `__
- PR #\ `2434 `__
- PR #\ `3258 `__
- PR #\ `3265 `__
- PR #\ `3291 `__

Swift
~~~~~

- PR #\ `1962 `__

Improved support and frontend specific bugfixes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

adaptahop
~~~~~~~~~

- PR #\ `2678 `__

AMRVAC
~~~~~~

- PR #\ `2541 `__
- PR #\ `2745 `__
- PR #\ `2746 `__
- PR #\ `3215 `__

ART
~~~

- PR #\ `2688 `__

ARTIO
~~~~~

- PR #\ `2613 `__

Athena++
~~~~~~~~

- PR #\ `2985 `__

Boxlib
~~~~~~

- PR #\ `2807 `__
- PR #\ `2814 `__
- PR #\ `2938 `__ (AMReX)

Enzo-E (formerly Enzo-P)
~~~~~~~~~~~~~~~~~~~~~~~~

- PR #\ `3273 `__
- PR #\ `3274 `__
- PR #\ `3290 `__
- PR #\ `3372 `__

fits
~~~~

- PR #\ `2246 `__
- PR #\ `2345 `__

Gadget
~~~~~~

- PR #\ `2145 `__
- PR #\ `3233 `__
- PR #\ `3258 `__

Gadget FOF Halo
~~~~~~~~~~~~~~~

- PR #\ `2296 `__

GAMER
~~~~~

- PR #\ `3033 `__

Gizmo
~~~~~

- PR #\ `3234 `__

MOAB
~~~~

- PR #\ `2856 `__

Owls
~~~~

- PR #\ `3325 `__

Ramses
~~~~~~

- PR #\ `2679 `__
- PR #\ `2714 `__
- PR #\ `2960 `__
- PR #\ `3017 `__
- PR #\ `3018 `__

Tipsy
~~~~~

- PR #\ `2193 `__

Octree Frontends
~~~~~~~~~~~~~~~~

- Ghost zone access (PR #\ `2425 `__ and PR #\ `2958 `__)
- Volume Rendering (PR #\ `2610 `__)

Configuration file
^^^^^^^^^^^^^^^^^^

- Config files are now in `TOML `__ (PR #\ `2981 `__)
- Allow a local plugin file (PR #\ `2534 `__)
- Allow per-field local config (PR #\ `1931 `__)

yt CLI
^^^^^^

- Fix broken command-line options (PR #\ `3361 `__)
- Drop yt hub command (PR #\ `3363 `__)

Deprecations
^^^^^^^^^^^^

- Smoothed fields are no longer necessary (PR #\ `2194 `__)
- Energy and momentum field names are more accurate (PR #\ `3059 `__)
- Incorrectly-named ``WeightedVariance`` is now ``WeightedStandardDeviation``
  and the old name has been deprecated (PR #\ `3132 `__)
- Colormap auto-registration has been changed and yt 4.1 will not register
  ``cmocean`` (PR #\ `3175 `__ and PR #\ `3214 `__)

Removals
~~~~~~~~

- ``analysis_modules`` has been `extracted `__ (PR #\ `2081 `__)
- Interactive volume rendering has been `extracted `__ (PR #\ `2896 `__)
- The bundled version of ``poster`` has been removed (PR #\ `2783 `__)
- The deprecated ``particle_position_relative`` field has been removed
  (PR #\ `2901 `__)
- Deprecated functions have been removed (PR #\ `3007 `__)
- Vendored packages have been removed (PR #\ `3008 `__)
- ``yt.pmods`` has been removed (PR #\ `3061 `__)
- yt now utilizes unyt as an external package (PR #\ `2219 `__,
  PR #\ `2300 `__, and PR #\ `2303 `__)
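As referenced above, a minimal sketch of the two new convenience functions;
``load_sample`` downloads and caches a dataset from the yt sample-data
collection, so the name used here must match an available sample.

.. code-block:: python

    import yt

    # Reduce logging chatter before loading.
    yt.set_log_level("warning")

    # Fetch a named sample dataset; "IsolatedGalaxy" is one of the samples
    # distributed by the yt project.
    ds = yt.load_sample("IsolatedGalaxy")
    print(ds.domain_width)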
Version 3.6.1
-------------

Version 3.6.1 is a bugfix release. It includes the following backport:

- hotfix: support matplotlib 3.3.0. See `PR 2754 `__.

Version 3.6.0
-------------

Version 3.6.0 is our next major release since 3.5.1, which was in February
2019. It includes roughly 180 pull requests contributed by 39 contributors,
22 of whom committed to the project for the first time. We have also updated
our project governance and contribution guidelines, which you can
`view here `_.

We'd like to thank all of the individuals who contributed to this release.
There are lots of new features and we're excited to share them with the
community.

Breaking Changes
^^^^^^^^^^^^^^^^

The following breaking change was introduced. Please be aware that this could
impact your code if you use this feature.

- The angular momentum has been reversed compared to previous versions of yt.
  See `PR 2043 `__.

Major Changes and New Features
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- New frontend support for the code AMRVAC. Many thanks to Clément Robert and
  Niels Claes, who were major contributors to this initiative. Relevant PRs
  include:

  - Initial PR to support AMRVAC native data files `PR 2321 `__.
  - added support for dust fields and derived fields `PR 2387 `__.
  - added support for derived fields for hydro runs `PR 2381 `__.
  - API documentation and docstrings for AMRVAC frontend `PR 2384 `__,
    `PR 2380 `__, `PR 2382 `__.
  - testing-related PRs for AMRVAC: `PR 2379 `__, `PR 2360 `__.
  - add verbosity to logging of geometry or ``geometry_override``
    `PR 2421 `__.
  - add attribute to ``_code_unit_attributes`` specific to AMRVAC to ensure
    consistent renormalisation of AMRVAC datasets. See `PR 2357 `__.
  - parse AMRVAC's parfiles if user-provided `PR 2369 `__.
  - ensure that min_level reflects dataset that has refinement `PR 2475 `__.
  - fix derived unit parsing `PR 2362 `__.
  - update energy field to be ``energy_density`` and have units of code
    pressure `PR 2376 `__.

- Support for the AdaptaHOP halo finder code `PR 2385 `__.
- yt now supports geographic transforms and projections of data with cartopy,
  with support from `PR 1966 `__.
- Annotations used to work for only a single point; they now work for
  multiple points on a plot, see `PR 2122 `__.
- Cosmology calculations now have support for the relativistic energy density
  of the universe, see `PR 1714 `__. This feature is accessible to cosmology
  datasets and was added to the Enzo frontend.
- The eps writer now allows for arrow rotation. This is accessible with the
  ``rotate`` kwarg in the ``arrow`` function. See `PR 2151 `__.
- allow for dynamic load balancing with parallel loading of timeseries data
  using the ``dynamic`` kwarg. `PR 2149 `__.
- show/hide colorbar and show/hide axes are now available for
  ``ProfilePlot``\ s. These functions were also moved from the PlotWindow to
  the PlotContainer class. `PR 2169 `__.
- add support for ipywidgets with an ``__ipython_display__`` method on the
  FieldTypeContainer. Field variables, source, and the field array can be
  viewed with this widget. See PRs `PR 1844 `__ and `PR 1848 `__, or try
  ``display(ds.fields)`` in a Jupyter notebook.
- cut regions can now be made with ``exclude_`` and ``include_`` methods on a
  number of objects, selecting above and below values, inside or outside
  regions, equal values, or nans (see the sketch after this list). See
  `PR 1964 `__ and supporting documentation fix at `PR 2262 `__.
- previously aliased fluid vector fields in curvilinear geometries were not
  converted to curvilinear coordinates; this was addressed in `PR 2105 `__.
- 2d polar and 3d cylindrical geometries now support quiver, streamline, and line integral convolution annotations, see `PR 2105 `__. - add support for exporting data to firefly `PR 2190 `__. - gradient fields are now supported in curvilinear geometries. See `PR 2483 `__. - plotwindow colorbars now utilize mathtext in their labels, from `PR 2516 `__. - raise deprecation warning when using ``mylog.warn``. Instead use ``mylog.warning``. See `PR 2285 `__. - extend support of the ``marker``, ``text``, ``line`` and ``sphere`` annotation callbacks to polar geometries `PR 2466 `__. - Support MHD in the GAMER frontend `PR 2306 `__. - Export data container and profile fields to AstroPy QTables and pandas DataFrames `PR 2418 `__. - Add turbo colormap, a colorblind safe version of jet. See `PR 2339 `__. - Enable exporting regular grids (i.e., covering grids, arbitrary grids and smoothed grids) to ``xarray`` `PR 2294 `__. - add automatic loading of ``namelist.txt``, which contains the parameter file RAMSES uses to produce output `PR 2347 `__. - add support for a nearest neighbor value field, accessible with the ``add_nearest_neighbor_value_field`` function for particle fields. See `PR 2301 `__. - speed up mesh deposition (uses caching) `PR 2136 `__. - speed up ghost zone generation. `PR 2403 `__. - ensure that a series dataset has kwargs passed down to data objects `PR 2366 `__. Documentation Changes ^^^^^^^^^^^^^^^^^^^^^ Our documentation has received some attention in the following PRs: - include donation/funding links in README `PR 2520 `__. - Included instructions on how to install yt on the Intel Distribution `PR 2355 `__. - include documentation on package vendors `PR 2494 `__. - update links to yt hub cookbooks `PR 2477 `__. - include relevant API docs in .gitignore `PR 2467 `__. - added docstrings for volume renderer cython code. see `PR 2456 `__ and `PR 2449 `__. - update documentation install recommendations to include newer python versions `PR 2452 `__. - update custom CSS on docs to sphinx >=1.6.1. See `PR 2199 `__. - enhancing the contribution documentation on git, see `PR 2420 `__. - update documentation to correctly reference issues suitable for new contributors `PR 2346 `__. - fix URLs and spelling errors in a number of the cookbook notebooks `PR 2341 `__. - update release docs to include information about building binaries, tagging, and various upload locations. See `PR 2156 `__ and `PR 2160 `__. - ensuring the ``load_octree`` API docs are rendered `PR 2088 `__. - fixing doc build errors, see: `PR 2077 `__. - add an instruction to the doc about continuous mesh colormap `PR 2358 `__. - Fix minor typo `PR 2327 `__. - Fix some docs examples `PR 2316 `__. - fix sphinx formatting `PR 2409 `__. - Improve doc and fix docstring in deposition `PR 2453 `__. - Update documentation to reflect usage of rcfile (no brackets allowed), including strings. See `PR 2440 `__. Minor Enhancements and Bugfixes ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - update pressure units in artio frontend (they were unitless previously) `PR 2521 `__. - ensure that modules supported by ``on_demand_imports`` are imported with that functionality `PR 2436 `__. - fix issues with groups in python3 in Ramses frontend `PR 2092 `__. - add tests to ytdata frontend api `PR 2075 `__. - update internal field usage from ``particle_{}_relative`` to ``relative_particle_{}`` so particle-based fields don't see deprecation warnings, see `PR 2073 `__. - update save of ``field_data`` in clump finder, see `PR 2079 `__.
- ensure map.js is included in the sdist for mapserver. See `PR 2158 `__. - add wrapping around ``yt_astro_analysis`` where it is used, in case it isn't installed `PR 2159 `__. - the contour finder now uses a maximum data value supplied by the user, rather than assuming the maximum value in the data container. Previously this caused issues in the clump finder. See `PR 2170 `__. - previously ramses data with non-hilbert ordering crashed. fixed by `PR 2200 `__. - fix an issue related to creating a ds9 region with FITS `PR 2335 `__. - add a check to see if pluginfilename is specified in ytrc `PR 2319 `__. - sort .so input file list so that the yt package builds in a reproducible way `PR 2206 `__. - update ``stack`` ufunc usage to include ``axis`` kwarg. See `PR 2204 `__. - extend support for field names in RAMSES descriptor file to include all names that don't include a comma. See `PR 2202 `__. - ``set_buff_size`` now works for ``OffAxisProjectionPlot``, see `PR 2239 `__. - fix chunking for chained cut regions. previously chunking commands would only look at the most recent cut region conditionals, and not any of the previous cut regions. See `PR 2234 `__. - update git command in Castro frontend to include ``git describe`` `PR 2235 `__. - in datasets with a single oct, correctly guess the shape of the array `PR 2241 `__. - update ``get_yt_version`` function to support python 3. See `PR 2226 `__. - the ``"stream"`` frontend now correctly returns ``min_level`` for the mesh refinement. `PR 2519 `__. - region expressions (``ds.r[]``) can now be used on 2D datasets `PR 2482 `__. - background colors in cylindrical coordinate plots are now set correctly `PR 2517 `__. - Utilize current matplotlib interface for the ``_png`` module to write images to disk `PR 2514 `__. - fix issue with fortran utils where empty records were not supported `PR 2259 `__. - add support for python 3.7 in iterator used by dynamic parallel loading `PR 2265 `__. - add support to handle boxlib data where ``raw_fields`` contain ghost zones `PR 2255 `__. - update quiver fields to use native units, not assuming cgs `PR 2292 `__. - fix annotations on semi-structured mesh data with exodus II `PR 2274 `__. - extend support for loading exodus II data `PR 2274 `__. - add support for yt to load data generated by WarpX code that includes ``rigid_injected`` species `PR 2289 `__. - fix issue in GAMER frontend where periodic boundary conditions were not identified `PR 2287 `__. - fix issue in ytdata frontend where data size was calculated to have size ``(nparticles, dimensions)``. Now updated to use ``(nparticles, nparticles, dimensions)``. see `PR 2280 `__. - extend support for OpenPMD frontend to load data containing no particles, see `PR 2270 `__. - raise a meaningful error on negative and zero zooming factors, see `PR 2443 `__. - ensure Datasets are consistent in their ``min_level`` attribute. See `PR 2478 `__. - adding matplotlib to trove classifiers `PR 2473 `__. - Add support for saving additional formats supported by matplotlib `PR 2318 `__. - add support for numpy 1.18.1 and help ensure consistency with unyt `PR 2448 `__. - add support for spherical geometries in ``plot_2d``. See `PR 2371 `__. - add support for sympy 1.5 `PR 2407 `__. - backporting unyt PR 102 for clip `PR 2329 `__. - allow code units in fields ``jeans_mass`` and ``dynamical_time``. See `PR 2454 `__. - fix for the case where boxlib nghost is different in different directions `PR 2343 `__. - bugfix for numpy 1.18 `PR 2419 `__.
- Invoke ``_setup_dx`` in the enzo inline analysis. See `PR 2460 `__. - Update annotate_timestamp to work with ``"code"`` unit system. See `PR 2435 `__. - use ``dict.get`` to pull attributes that may not exist in ytdata frontend `PR 2471 `__. - solved bug related to slicing out ghost cells in chombo `PR 2388 `__. - correctly register reversed versions of cmocean cmaps `PR 2390 `__. - correctly set plot axes units to ``"code length"`` for datasets loaded with ``unit_system="code"`` `PR 2354 `__. - deprecate ``ImagePlotContainer.set_cbar_minorticks``. See `PR 2444 `__. - enzo-e frontend bugfix for single block datasets. See `PR 2424 `__. - explicitly default to solid lines in contour callback. See `PR 2330 `__. - replace all bare ``except`` statements `PR 2474 `__. - fix an inconsistency between ``argmax`` and ``argmin`` methods in YTDataContainer class `PR 2457 `__. - fixed extra extension added by ``ImageArray.save()``. See `PR 2364 `__. - replace incorrect usage of ``is`` comparisons with ``==`` comparisons throughout the codebase `PR 2351 `__. - fix streamlines ``_con_args`` attribute `PR 2470 `__. - fix python 3.8 warnings `PR 2386 `__. - fix some invalid escape sequences. `PR 2488 `__. - fix typo in ``_vorticity_z`` field definition. See `PR 2398 `__. - fix an inconsistency in annotate_sphere callback. See `PR 2464 `__. - initialize unstructured mesh visualization background to ``nan`` `PR 2308 `__. - set ``symlog`` scaling to ``log`` if ``vmin > 0``. See `PR 2485 `__. - skip blank lines when reading parameters. See `PR 2406 `__. - Update magnetic field handling for RAMSES. See `PR 2377 `__. - Update ARTIO frontend to support compressed files. See `PR 2314 `__. - Use mirror copy of SDF data `PR 2334 `__. - Use sorted glob in athena to ensure reproducible ordering of grids `PR 2363 `__. - fix cartopy failures by ensuring data is in lat/lon when passed to cartopy `PR 2378 `__. - enforce unit consistency in plot callbacks, which fixes some unexpected behaviour in the plot annotation callbacks that use the plot window width or the data width `PR 2524 `__. Separate from our list of minor enhancements and bugfixes, we've grouped PRs related to infrastructure and testing in the next three subsections. Testing and Infrastructure ~~~~~~~~~~~~~~~~~~~~~~~~~~ - infrastructure to change our testing from nose to pytest, see `PR 2401 `__. - Adding test_requirements and test_minimum requirements files to put bounds on the versions of packages installed for testing `PR 2083 `__. - Update the test failure report to include all failed tests related to a single test specification `PR 2084 `__. - add required dependencies for docs testing on Jenkins. See `PR 2090 `__. - suppress pyyaml warning that pops up when running tests `PR 2182 `__. - add tests for pre-existing ytdata datasets. See `PR 2229 `__. - add a test to check if cosmology calculator and cosmology dataset share the same unit registry `PR 2230 `__. - fix kh2d test name `PR 2342 `__. - disable OSNI projection answer test to remove cartopy errors `PR 2350 `__. CI-related support ~~~~~~~~~~~~~~~~~~ - disable coverage on OSX to speed up travis testing and avoid timeouts `PR 2076 `__. - update travis base images on Linux and MacOSX `PR 2093 `__.
- add ``W504`` and ``W605`` to ignored flake8 errors, see `PR 2078 `__. - update pyyaml version in ``test_requirements.txt`` file to address github warning `PR 2148 `__. - fix travis build errors resulting from numpy and cython being unavailable `PR 2171 `__. - fix appveyor build failures `PR 2231 `__. - Add Python 3.7 and Python 3.8 to CI test jobs. See `PR 2450 `__. - fix build failure on Windows `PR 2333 `__. - fix warnings due to travis configuration file. See `PR 2451 `__. - install pyyaml on appveyor `PR 2367 `__. - install sympy 1.4 on appveyor to work around regression in 1.5 `PR 2395 `__. - update CI recipes to fix recent failures `PR 2489 `__. Other Infrastructure ~~~~~~~~~~~~~~~~~~~~ - Added a welcomebot to our github page for new contributors, see `PR 2181 `__. - Added a pep8 bot to pre-run before tests, see `PR 2179 `__, `PR 2184 `__ and `PR 2185 `__. Version 3.5.0 ------------- Version 3.5.0 is the first major release of yt since August 2017. It includes 328 pull requests from 41 contributors, including 22 new contributors. Major Changes ^^^^^^^^^^^^^ - ``yt.analysis_modules`` has been deprecated in favor of the new ``yt_astro_analysis`` package. New features and new astronomy-specific analysis modules will go into ``yt_astro_analysis`` and importing from ``yt.analysis_modules`` will raise a noisy warning. We will remove ``yt.analysis_modules`` in a future release. See `PR 1938 `__. - Vector fields and derived fields depending on vector fields have been systematically updated to account for a bulk correction field parameter. For example, for the velocity field, all derived fields that depend on velocity will now account for the ``"bulk_velocity"`` field parameter. In addition, we have defined ``"relative_velocity"`` and ``"relative_magnetic_field"`` fields that include the bulk correction. Both of these are vector fields; to access the components, use e.g. ``"relative_velocity_x"``. The ``"particle_position_relative"`` and ``"particle_velocity_relative"`` fields have been deprecated. See `PR 1693 `__ and `PR 2022 `__. - Aliases to spatial fields with the ``"gas"`` field type will now be returned in the default unit system for the dataset. As an example the ``"x"`` field might resolve to the field tuples ``("index", "x")`` or ``("gas", "x")``. Accessing the former will return data in code units while the latter will return data in whatever unit system the dataset is configured to use (CGS, by default). This means that to ensure the units of a spatial field will always be consistent, one must access the field as a tuple, explicitly specifying the field type. Accessing a spatial field using a string field name may return data in either code units or the dataset's default unit system depending on the history of field accesses prior to accessing that field. In the future accessing fields using an ambiguous field name will raise an error. See `PR 1799 `__ and `PR 1850 `__. - The ``max_level`` and ``min_level`` attributes of yt data objects now correctly update the state of the underlying data objects when set. In addition we have added an example to the cookbook that shows how to downsample AMR data using this functionality. See `PR 1737 `__. - It is now possible to customize the formatting of labels for ion species fields. Rather than using the default spectroscopic notation, one can call ``ds.set_field_label_format("ionization_label", "plus_minus")`` to use the more traditional notation where ionization state is indicated with ``+`` and ``-`` symbols. See `PR 1867 `__.
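  As a minimal sketch of this label-format hook (the dataset path below is illustrative; any dataset with ion species fields works the same way)::

      import yt

      ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
      # switch ion species labels from spectroscopic notation (e.g. "H II")
      # to plus/minus notation (e.g. "H+") on subsequently created plots
      ds.set_field_label_format("ionization_label", "plus_minus")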
Improvements to the RAMSES frontend ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ We would particularly like to recognize Corentin Cadiou for his tireless work over the past year on improving support for RAMSES and octree AMR data in yt. - Added support for reading RAMSES sink particles. See `PR 1548 `__. - Add support for the new self-describing Ramses particle output format. See `PR 1616 `__. - It is now possible to restrict the domain of a loaded Ramses dataset by passing a ``bbox`` keyword argument to ``yt.load()``. If passed, this corresponds to the coordinates of the top-left and bottom-right hand corner of the subvolume to load. Data outside the bounding box will be ignored. This is useful for loading very large Ramses datasets where yt currently has poor scaling. See `PR 1637 `__. - The Ramses ``"particle_birth_time"`` field now contains the time when star particles form in a simulation in CGS units; formerly these times were only accessible via the incorrectly named ``"particle_age"`` field in conformal units. Correspondingly the ``"particle_age"`` field has been deprecated. The conformal birth time is now available via the ``"conformal_birth_time"`` field. See `PR 1649 `__. - Substantial performance improvement for reading RAMSES AMR data. See `PR 1671 `__. - The RAMSES frontend will now produce less voluminous logging feedback when loading the dataset or reading data. This is particularly noticeable for very large datasets with many CPU files. See `PR 1738 `__. - Avoid repeated parsing of RAMSES particle and RT descriptors. See `PR 1739 `__. - Added support for reading the RAMSES gravitational potential field. See `PR 1751 `__. - Add support for RAMSES datasets that use the ``groupsize`` feature. See `PR 1769 `__. - Dramatically improve the overall performance of the RAMSES frontend. See `PR 1771 `__. Additional Improvements ^^^^^^^^^^^^^^^^^^^^^^^ - Added support for particle data in the Enzo-E frontend. See `PR 1490 `__. - Added an ``equivalence`` keyword argument to ``YTArray.in_units()`` and ``YTArray.to()``. This makes it possible to specify an equivalence when converting data to a new unit. Also added ``YTArray.to_value()`` which allows converting to a new unit, then stripping off the units to return a plain numpy array. See `PR 1563 `__. - Rather than crashing, yt will now assume default values for cosmology parameters in Gadget HDF5 data if it cannot find the relevant header information. See `PR 1578 `__. - Improve detection for OpenMP support at compile-time, including adding support for detecting OpenMP on Windows. See `PR 1591 `__, `PR 1695 `__ and `PR 1696 `__. - Add support for 2D cylindrical data for most plot callbacks. See `PR 1598 `__. - Particles outside the domain are now ignored by ``load_uniform_grid()`` and ``load_amr_grids()``. See `PR 1602 `__. - Fix incorrect units for the Gadget internal energy field in cosmology simulations. See `PR 1611 `__. - Add support for calculating covering grids in parallel. See `PR 1612 `__. - The number of particles in a dataset loaded by the stream frontend (e.g. via ``load_uniform_grid``) no longer needs to be explicitly provided via the ``number_of_particles`` keyword argument; using the ``number_of_particles`` keyword will now generate a deprecation warning. See `PR 1620 `__. - Add support for non-cartesian GAMER data. See `PR 1622 `__. - If a particle filter depends on another particle filter, both particle filters will be registered for a dataset if the dependent particle filter is registered with a dataset. See `PR 1624 `__.
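  A hedged sketch of what a dependent filter looks like; the field names (``"particle_type"``, ``"age"``), the threshold, and the dataset path are all illustrative and frontend-dependent::

      import yt

      def stars(pfilter, data):
          # select star particles out of the filtered type
          return data[(pfilter.filtered_type, "particle_type")] == 2

      def young_stars(pfilter, data):
          # select stars younger than 10 Myr
          return data[(pfilter.filtered_type, "age")] < data.ds.quan(10, "Myr")

      yt.add_particle_filter("stars", function=stars, filtered_type="all",
                             requires=["particle_type"])
      # "young_stars" is defined on top of "stars", so registering it with a
      # dataset pulls in "stars" as well
      yt.add_particle_filter("young_stars", function=young_stars,
                             filtered_type="stars", requires=["age"])

      ds = yt.load("output_00080/info_00080.txt")
      ds.add_particle_filter("young_stars")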
- The ``save()`` method of the various yt plot objects can now optionally accept a tuple of strings instead of a string. If a tuple is supplied, the elements are joined with ``os.sep`` to form a path. See `PR 1630 `__. - The quiver callback now accepts a ``plot_args`` keyword argument that allows passing keyword arguments to matplotlib to allow for customization of the quiver plot. See `PR 1636 `__. - Updates and improvements for the OpenPMD frontend. See `PR 1645 `__. - The mapserver now works correctly under Python3 and has new features like a colormap selector and plotting multiple fields via layers. See `PR 1654 `__ and `PR 1668 `__. - Substantial performance improvement for calculating the gravitational potential in the clump finder. See `PR 1684 `__. - Added new methods to ``ProfilePlot``: ``set_xlabel()``, ``set_ylabel()``, ``annotate_title()``, and ``annotate_text()``. See `PR 1700 `__ and `PR 1705 `__. - Speedup for parallel halo finding operation for the FOF and HOP halo finders. See `PR 1724 `__. - Add support for halo finding using the rockstar halo finder on Python3. See `PR 1740 `__. - The ``ValidateParameter`` field validator has gained the ability for users to explicitly specify the values of field parameters during field detection. This makes it possible to write fields that access different sets of fields depending on the value of the field parameter. For example, a field might define an ``'axis'`` field parameter that can be either ``'x'``, ``'y'`` or ``'z'``. One can now explicitly tell the field detection system to access the field using all three values of ``'axis'``. This improvement avoids errors one would previously see where only one value or an invalid value of the field parameter would be tested by yt. See `PR 1741 `__. - It is now legal to pass a dataset instance as the first argument to ``ProfilePlot`` and ``PhasePlot``. This is equivalent to passing ``ds.all_data()``. - Functions that accept a ``(length, unit)`` tuple (e.g. ``(3, 'km')`` for 3 kilometers) will not raise an error if ``length`` is a ``YTQuantity`` instance with units attached. See `PR 1749 `__. - The ``annotate_timestamp`` plot annotation now optionally accepts a ``time_offset`` keyword argument that sets the zero point of the time scale. Additionally, the ``annotate_scale`` plot annotation now accepts a ``format`` keyword argument, allowing custom formatting of the scale annotation. See `PR 1755 `__. - Add support for magnetic field variables and creation time fields in the GIZMO frontend. See `PR 1756 `__ and `PR 1914 `__. - ``ParticleProjectionPlot`` now supports the ``annotate_particles`` plot callback. See `PR 1765 `__. - Optimized the performance of off-axis projections for octree AMR data. See `PR 1766 `__. - Added support for several radiative transfer fields in the ARTIO frontend. See `PR 1804 `__. - Performance improvement for Boxlib datasets that don't use AMR. See `PR 1834 `__. - It is now possible to set custom profile bin edges. See `PR 1837 `__. - Dropped support for Python 3.4. See `PR 1840 `__. - Add support for reading RAMSES cooling fields. See `PR 1853 `__. - Add support for NumPy 1.15. See `PR 1854 `__. - Ensure that functions defined in the plugins file are available in the yt namespace. See `PR 1855 `__. - Creating a profile with log-scaled bins where the bin edges are negative or zero now raises an error instead of silently generating a corrupt, incorrect answer. See `PR 1856 `__. - Systematically added validation for inputs to data object initializers. See `PR 1871 `__.
- It is now possible to select only a specific particle type in the particle trajectories analysis module. See `PR 1887 `__. - Substantially improve the performance of selecting particle fields with a ``cut_region`` data object. See `PR 1892 `__. - The ``iyt`` command-line entry-point into IPython now installs yt-specific tab-completions. See `PR 1900 `__. - Derived quantities have been systematically updated to accept a ``particle_type`` keyword argument, allowing easier analysis of only a single particle type. See `PR 1902 `__ and `PR 1922 `__. - The ``annotate_streamlines()`` function now accepts a ``display_threshold`` keyword argument. This suppresses drawing streamlines over any region of a dataset where the field being displayed is less than the threshold. See `PR 1922 `__. - Add support for 2D nodal data. See `PR 1923 `__. - Add support for GAMER outputs that use patch groups. This substantially reduces the memory requirements for loading large GAMER datasets. See `PR 1935 `__. - Add a ``data_source`` keyword argument to the ``annotate_particles`` plot callback. See `PR 1937 `__. - Define species fields in the NMSU Art frontend. See `PR 1981 `__. - Added a ``__format__`` implementation for ``YTArray``. See `PR 1985 `__. - Derived fields that use a particle filter now only need to be derived for the particle filter type, not for the particle types used to define the particle filter. See `PR 1993 `__. - Added support for periodic visualizations using ``ParticleProjectionPlot``. See `PR 1996 `__. - Added ``YTArray.argsort()``. See `PR 2002 `__. - Calculate the header size from the header specification in the Gadget frontend to allow reading from Gadget binary datasets with nonstandard headers. See `PR 2005 `__ and `PR 2036 `__. - Save the standard deviation in ``profile.save_as_dataset()``. See `PR 2008 `__. - Allow the ``color`` keyword argument to be passed to matplotlib in the ``annotate_clumps`` callback to control the color of the clump annotation. See `PR 2019 `__. - Raise an exception when profiling fields of unequal shape. See `PR 2025 `__. - The clump info dictionary is now populated as clumps get created instead of during ``clump.save_as_dataset()``. See `PR 2053 `__. - Avoid segmentation fault in slice selector by clipping slice integer coordinates. See `PR 2055 `__. Minor Enhancements and Bugfixes ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - Fix incorrect use of floating point division in the parallel analysis framework. See `PR 1538 `__. - Fix integration with the matplotlib QT backend for interactive plotting. See `PR 1540 `__. - Add support for the particle creation time field in the GAMER frontend. See `PR 1546 `__. - Various minor improvements to the docs. See `PR 1542 `__ and `PR 1547 `__. - Add better error handling for invalid tipsy aux files. See `PR 1549 `__. - Fix typo in default Gadget header specification. See `PR 1550 `__. - Use the git version in the get_yt_version function. See `PR 1551 `__. - Assume dimensionless units for fields from FITS datasets when we can't infer the units. See `PR 1553 `__. - Autodetect ramses extra particle fields. See `PR 1555 `__. - Fix issue with handling unitless halo quantities in HaloCatalog. See `PR 1558 `__. - Track the halo catalog creation process using a parallel-safe progress bar. See `PR 1559 `__. - The PPV Cube functionality no longer crashes if there is no temperature field in the dataset. See `PR 1562 `__. - Fix crash caused by saving the ``'x'``, ``'y'``, or ``'z'`` fields in clump.save_as_dataset(). See `PR 1567 `__.
- Accept both string and tuple field names in ``ProfilePlot.set_unit()`` and ``PhasePlot.set_unit()``. See `PR 1568 `__. - Fix issues with some arbitrary grid attributes not being reloaded properly after being saved with ``save_as_dataset()``. See `PR 1569 `__. - Fix units issue in the light cone projection operation. See `PR 1574 `__. - Use ``astropy.wcsaxes`` instead of the independent ``wcsaxes`` project. See `PR 1577 `__. - Correct typo in WarpX field definitions. See `PR 1583 `__. - Avoid crashing when loading an Enzo dataset with a parameter file that has commented out parameters. See `PR 1586 `__. - Fix a corner case in the clump finding machinery where the reference to the parent clump is invalid after pruning a child clump that has no siblings. See `PR 1587 `__. - Fix issues with setting up yt fields for the magnetic and velocity field components and associated derived fields in curvilinear coordinate systems. See `PR 1588 `__ and `PR 1687 `__. - Fix incorrect profile values when the profile weight field has values equal to zero. See `PR 1590 `__. - Fix issues with making matplotlib animations of a ``ParticleProjectionPlot``. See `PR 1594 `__. - The ``Scene.annotate_axes()`` function will now use the correct colors for drawing the axes annotation. See `PR 1596 `__. - Fix incorrect default plot bounds for a zoomed-in slice plot of a 2D cylindrical dataset. See `PR 1597 `__. - Fix issue where field accesses on 2D grids would return data with incorrect shapes. See `PR 1603 `__. - Added a cookbook example for a multipanel phase plot. See `PR 1605 `__. - Boolean simulation parameters in the Boxlib frontend will now be interpreted correctly. See `PR 1619 `__. - The ``ds.particle_type_counts`` attribute will now be populated correctly for AMReX data. - The ``"rad"`` unit (added for compatibility with astropy) now has the correct dimensions of angle instead of solid angle. See `PR 1628 `__. - Fix units issues in several plot callbacks. See `PR 1633 `__ and `PR 1674 `__. - Various fixes for how WarpX fields are interpreted. See `PR 1634 `__. - Fix incorrect units in the automatically deposited particle fields. See `PR 1638 `__. - It is now possible to set the axes background color after calling ``plot.hide_axes()``. See `PR 1662 `__. - Fix a typo in the name of the ``colors`` keyword argument passed to matplotlib for the contour callback. See `PR 1664 `__. - Add support for Enzo Active Particle fields that are arrays. See `PR 1665 `__. - Avoid crash when generating halo catalogs from the rockstar halo finder for small simulation domains. See `PR 1679 `__. - The clump callback now functions correctly for a reloaded clump dataset. See `PR 1683 `__. - Fix incorrect calculation for tangential components of vector fields. See `PR 1688 `__. - Allow halo finders to run in parallel on Python3. See `PR 1690 `__. - Fix issues with Gadget particle IDs for simulations with large numbers of particles being incorrectly rounded. See `PR 1692 `__. - ``ParticlePlot`` no longer needs to be passed spatial fields in a particular order to ensure that a ``ParticleProjectionPlot`` is returned. See `PR 1697 `__. - Accessing data from a FLASH grid directly now returns float64 data. See `PR 1708 `__. - Fix periodicity check in ``YTPoint`` data object. See `PR 1712 `__. - Avoid crash on matplotlib 2.2.0 when generating yt plots with symlog colorbars. See `PR 1720 `__. - Avoid crash when FLASH ``"unitsystem"`` parameter is quoted in the HDF5 file. See `PR 1722 `__.
- Avoid issues with creating custom particle filters for OWLS/EAGLE datasets. See `PR 1723 `__. - Adapt to behavior change in matplotlib that caused plot inset boxes for annotated text to be drawn when none was requested. See `PR 1731 `__ and `PR 1827 `__. - Fix clump finder ignoring field parameters. See `PR 1732 `__. - Avoid generating NaNs in x-ray emission fields. See `PR 1742 `__. - Fix compatibility with Sphinx 1.7 when building the docs. See `PR 1743 `__. - Eliminate usage of deprecated ``"clobber"`` keyword argument for various usages of astropy in yt. See `PR 1744 `__. - Fix incorrect definition of the ``"d"`` unit (an alias of ``"day"``). See `PR 1746 `__. - ``PhasePlot.set_log()`` now correctly handles tuple field names as well as string field names. See `PR 1787 `__. - Fix incorrect axis order in aitoff pixelizer. See `PR 1791 `__. - Fix crash when exporting a surface as a ply model. See `PR 1792 `__ and `PR 1817 `__. - Fix crash in scene.save_annotated() in newer numpy versions. See `PR 1793 `__. - Many tests no longer depend on real datasets. See `PR 1801 `__, `PR 1805 `__, `PR 1809 `__, `PR 1883 `__, and `PR 1941 `__. - New tests were added to improve test coverage or the performance of the tests. See `PR 1820 `__, `PR 1831 `__, `PR 1833 `__, `PR 1841 `__, `PR 1842 `__, `PR 1885 `__, `PR 1886 `__, `PR 1952 `__, `PR 1953 `__, `PR 1955 `__, and `PR 1957 `__. - The particle trajectories machinery will raise an error if it is asked to analyze a set of particles with duplicated particle IDs. See `PR 1818 `__. - Fix incorrect velocity unit in the ``gadget_fof`` frontend. See `PR 1829 `__. - Making an off-axis projection of a cut_region data object with an octree AMR dataset now works correctly. See `PR 1858 `__. - Replace hard-coded constants in Enzo frontend with calculations to improve agreement with Enzo's internal constants and improve clarity. See `PR 1873 `__. - Correct issues with Enzo magnetic units in cosmology simulations. See `PR 1876 `__. - Use the species names from the dataset rather than hardcoding species names in the WarpX frontend. See `PR 1884 `__. - Fix issue with masked I/O for unstructured mesh data. See `PR 1918 `__. - Fix crash when reading DM-only Enzo datasets where some grids have no particles. See `PR 1919 `__. - Fix crash when loading pure-hydro Nyx dataset. See `PR 1950 `__. - Avoid crashes when plotting fields that contain NaN. See `PR 1951 `__. - Avoid crashes when loading NMSU ART data. See `PR 1960 `__. - Avoid crash when loading WarpX dataset with no particles. See `PR 1979 `__. - Adapt to API change in glue to fix the ``to_glue()`` method on yt data objects. See `PR 1991 `__. - Fix incorrect width calculation in the ``annotate_halos()`` plot callback. See `PR 1995 `__. - Don't try to read from files containing zero halos in the ``gadget_fof`` frontend. See `PR 2001 `__. - Fix incorrect calculation in ``get_ortho_base()``. See `PR 2013 `__. - Avoid issues with the axes background color being inconsistently set. See `PR 2018 `__. - Fix issue with reading multiple fields at once for octree AMR data sometimes returning data for another field for one of the requested fields. See `PR 2020 `__. - Fix incorrect domain annotation for ``Scene.annotate_domain()`` when using the plane-parallel camera. See `PR 2024 `__. - Avoid crash when particles are on the domain edges for ``gadget_fof`` data. See `PR 2034 `__. - Avoid stripping code units when processing units through a dataset's unit system. See `PR 2035 `__.
- Avoid incorrectly rescaling units of metallicity fields. See `PR 2038 `__. - Fix incorrect units for FLASH ``"divb"`` field. See `PR 2062 `__. Version 3.4 ----------- Version 3.4 is the first major release of yt since July 2016. It includes 450 pull requests from 44 contributors, including 18 new contributors. - yt now supports displaying plots using the interactive matplotlib backends. To enable this functionality call ``yt.toggle_interactivity()``. This is currently supported at an experimental level; please let us know if you come across issues using it. See `Bitbucket PR 2294 `__. - The yt configuration file should now be located in a location following the XDG\_CONFIG convention (usually ``~/.config/yt/ytrc``) rather than the old default location (usually ``~/.yt/config``). You can use ``yt config migrate`` at the bash command line to migrate your configuration file to the new location. See `Bitbucket PR 2343 `__. - Added ``yt.LinePlot``, a new plotting class for creating 1D plots along lines through a dataset. See `Github PR 1509 `__ and `Github PR 1440 `__. - Added ``yt.define_unit`` to easily define new units in yt's unit system. See `Bitbucket PR 2485 `__. - Added ``yt.plot_2d``, a wrapper around SlicePlot for plotting 2D datasets. See `Github PR 1476 `__. - We have restored support for boolean data objects. Boolean objects are data objects that are defined in terms of boolean operations on other data objects. See `Bitbucket PR 2257 `__. - Datasets now have a ``fields`` attribute that allows access to fields via a python object. For example, instead of using a tuple field name like ``('gas', 'density')``, one can now use ``ds.fields.gas.density``. See `Bitbucket PR 2459 `__. - It is now possible to create a wider variety of data objects via ``ds.r``, including rays, fixed resolution rays, points, and images. See `Github PR 1518 `__ and `Github PR 1393 `__. - ``add_field`` and ``ds.add_field`` must now be called with a ``sampling_type`` keyword argument. Possible values are currently ``cell`` and ``particle``. We have also deprecated the ``particle_type`` keyword argument in favor of ``sampling_type='particle'``. For now a ``'cell'`` ``sampling_type`` is assumed if ``sampling_type`` is not specified, but in the future ``sampling_type`` will always need to be specified. - Added support for the ``Athena++`` code. See `Bitbucket PR 2149 `__. - Added support for the ``Enzo-E`` code. See `Github PR 1447 `__, `Github PR 1443 `__ and `Github PR 1439 `__. - Added support for the ``AMReX`` code. See `Bitbucket PR 2530 `__. - Added support for the ``openPMD`` output format. See `Bitbucket PR 2376 `__. - Added support for reading face-centered and vertex-centered fields for block AMR codes. See `Bitbucket PR 2575 `__. - Added support for loading outputs from the Amiga Halo Finder. See `Github PR 1477 `__. - Added support for particle fields for Boxlib data. See `Bitbucket PR 2510 `__ and `Bitbucket PR 2497 `__. - Added support for custom RAMSES particle fields. See `Github PR 1470 `__. - Added support for RAMSES-RT data. See `Github PR 1456 `__ and `Github PR 1449 `__. - Added support for Enzo MHDCT fields. See `Github PR 1438 `__. - Added support for units and particle fields to the GAMER frontend. See `Bitbucket PR 2366 `__ and `Bitbucket PR 2408 `__. - Added support for type 2 Gadget binary outputs. See `Bitbucket PR 2355 `__. - Added the ability to detect and read double precision Gadget data. See `Bitbucket PR 2537 `__. - Added the ability to detect and read in big endian Gadget data.
See `Github PR 1353 `__. - Added support for Nyx datasets that do not contain particles. See `Bitbucket PR 2571 `__ - A number of untested and unmaintained modules have been deprecated and moved to the `yt attic repository `__. This includes the functionality for calculating two point functions, the Sunrise exporter, the star analysis module, and the functionality for calculating halo mass functions. If you are interested in working on restoring the functionality in these modules, we welcome contributions. Please contact us on the mailing list or by opening an issue on GitHub if you have questions. - The particle trajectories functionality has been removed from the analysis modules API and added as a method of the ``DatasetSeries`` object. You can now create a ``ParticleTrajectories`` object using ``ts.particle_trajectories()`` where ``ts`` is a time series of datasets. - The ``spectral_integrator`` analysis module is now available via ``yt.fields.xray_emission_fields``. See `Bitbucket PR 2465 `__. - The ``photon_simulator`` analysis module has been deprecated in favor of the ``pyXSIM`` package, available separately from ``yt``. See `Bitbucket PR 2441 `__. - ``yt.utilities.fits_image`` is now available as ``yt.visualization.fits_image``. In addition classes that were in the ``yt.utilities.fits_image`` namespace are now available in the main ``yt`` namespace. - The ``profile.variance`` attribute has been deprecated in favor of ``profile.standard_deviation``. - The ``number_of_particles`` key no longer needs to be defined when loading data via the stream frontend. See `Github PR 1428 `__. - The install script now only supports installing via miniconda. We have removed support for compiling python and yt's dependencies from source. See `Github PR 1459 `__. - Added ``plot.set_background_color`` for ``PlotWindow`` and ``PhasePlot`` plots. This lets users specify a color to fill in the background of a plot instead of the default color, white. See `Bitbucket PR 2513 `__. - ``PlotWindow`` plots can now optionally use a right-handed coordinate system. See `Bitbucket PR 2318 `__. - The isocontour API has been overhauled to make use of units. See `Bitbucket PR 2453 `__. - ``Dataset`` instances now have a ``checksum`` property, which can be accessed via ``ds.checksum``. This provides a unique identifier that is guaranteed to be the same from session to session. See `Bitbucket PR 2503 `__. - Added a ``data_source`` keyword argument to ``OffAxisProjectionPlot``. See `Bitbucket PR 2490 `__. - Added a ``yt download`` command-line helper to download test data from https://yt-project.org/data. For more information see ``yt download --help`` at the bash command line. See `Bitbucket PR 2495 `__ and `Bitbucket PR 2471 `__. - Added a ``yt upload`` command-line helper to upload files to the `yt curldrop `__ at the bash command line. See `Github PR 1471 `__. - If it's installed, colormaps from the `cmocean package `__ will be made available as yt colormaps. See `Bitbucket PR 2439 `__. - It is now possible to visualize unstructured mesh fields defined on multiple mesh blocks. See `Bitbucket PR 2487 `__. - Add support for second-order interpolation when slicing tetrahedral unstructured meshes. See `Bitbucket PR 2550 `__. - Add support for volume rendering second-order tetrahedral meshes. See `Bitbucket PR 2401 `__. - Add support for QUAD9 mesh elements. See `Bitbucket PR 2549 `__. - Add support for second-order triangle mesh elements. See `Bitbucket PR 2378 `__. 
- Added support for dynamical dark energy parameterizations to the ``Cosmology`` object. See `Bitbucket PR 2572 `__. - ``ParticleProfile`` can now handle log-scaled bins and data with negative values. See `Bitbucket PR 2564 `__ and `Github PR 1510 `__. - Cut region data objects can now be saved as reloadable datasets using ``save_as_dataset``. See `Bitbucket PR 2541 `__. - Clump objects can now be saved as reloadable datasets using ``save_as_dataset``. See `Bitbucket PR 2326 `__. - It is now possible to specify the field to use for the size of the circles in the ``annotate_halos`` plot modifying function. See `Bitbucket PR 2493 `__. - The ``ds.max_level`` attribute is now a property that is computed on demand. The more verbose ``ds.index.max_level`` will continue to work. See `Bitbucket PR 2461 `__. - The ``PointSource`` volume rendering source now optionally accepts a ``radius`` keyword argument to draw spatially extended points. See `Bitbucket PR 2404 `__. - It is now possible to save volume rendering images in eps, ps, and pdf format. See `Github PR 1504 `__. Minor Enhancements and Bugfixes ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - Fixed issue selecting and visualizing data at very high AMR levels. See `Github PR 1521 `__ and `Github PR 1433 `__. - Print a more descriptive error message when defining a particle filter fails with missing fields. See `Github PR 1517 `__. - Removed grid edge rounding from the FLASH frontend. This fixes a number of pernicious visualization artifacts for FLASH data. See `Github PR 1493 `__. - Parallel projections no longer error if there are fewer io chunks than MPI tasks. See `Github PR 1488 `__. - A memory leak in the volume renderer has been fixed. See `Github PR 1485 `__ and `Github PR 1435 `__. - The ``force_override`` keyword argument now raises an error when used with on-disk fields. See `Github PR 1516 `__. - Restore support for making plots from reloaded plots. See `Github PR 1514 `__. - Don't ever try to read inputs or probin files for Castro and Maestro. See `Github PR 1445 `__. - Fixed issue that caused visualization artifacts when creating an off-axis projection for particle or octree AMR data. See `Github PR 1434 `__. - Fix i/o for the Enzo ``'Dark_Matter_Density'`` field. See `Github PR 1360 `__. - Create the ``'particle_ones'`` field even if we don't have a particle mass field. See `Github PR 1424 `__. - Fixed issues with minor colorbar ticks with symlog colorbar scaling. See `Github PR 1423 `__. - Using the rockstar halo finder is now supported under Python3. See `Github PR 1414 `__. - Fixed issues with orientations of volume renderings when compositing multiple sources. See `Github PR 1411 `__. - Added a check for valid AMR structure in ``load_amr_grids``. See `Github PR 1408 `__. - Fix bug in handling of periodic boundary conditions in the ``annotate_halos`` plot modifying function. See `Github PR 1351 `__. - Add support for plots with non-unit aspect ratios to the ``annotate_scale`` plot modifying function. See `Bitbucket PR 2551 `__. - Fixed issue with saving light ray datasets. See `Bitbucket PR 2589 `__. - Added support for 2D WarpX data. See `Bitbucket PR 2583 `__. - Ensure the ``particle_radius`` field is always accessed with the correct field type. See `Bitbucket PR 2562 `__. - It is now possible to use a covering grid to access particle filter fields. See `Bitbucket PR 2569 `__. - The x limits of a ``ProfilePlot`` will now snap exactly to the limits specified in calls to ``ProfilePlot.set_xlim``. See `Bitbucket PR 2546 `__.
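  For reference, a minimal sketch of the behavior this fixes (the dataset path and limits are illustrative)::

      import yt

      ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
      plot = yt.ProfilePlot(ds.all_data(), "density", "temperature",
                            weight_field="cell_mass")
      # the x-axis now spans exactly [1e-30, 1e-24] rather than snapping
      # to the nearest bin edges
      plot.set_xlim(1e-30, 1e-24)
      plot.save()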
- Added a cookbook example showing how to make movies using matplotlib's animation framework. See `Bitbucket PR 2544 `__. - Use a parallel-safe wrapper around mkdir when creating new directories. See `Bitbucket PR 2570 `__. - Removed ``yt.utilities.spatial``. This was a forked version of ``scipy.spatial`` with support for a periodic KD-tree. Scipy now has a periodic KD-tree, so we have removed the forked version from yt. Please use ``scipy.spatial`` if you were relying on ``yt.utilities.spatial``. See `Bitbucket PR 2576 `__. - Improvements for the ``HaloCatalog``. See `Bitbucket PR 2536 `__ and `Bitbucket PR 2535 `__. - Removed ``'log'`` in colorbar label in annotated volume rendering. See `Bitbucket PR 2548 `__ - Fixed a crash triggered by depositing particle data onto a covering grid. See `Bitbucket PR 2545 `__. - Ensure field type guessing is deterministic on Python3. See `Bitbucket PR 2559 `__. - Removed unused yt.utilities.exodusII\_reader module. See `Bitbucket PR 2533 `__. - The ``cell_volume`` field in curvilinear coordinates now uses an exact rather than an approximate definition. See `Bitbucket PR 2466 `__. Version 3.3 ----------- Version 3.3 is the first major release of yt since July 2015. It includes more than 3000 commits from 41 contributors, including 12 new contributors. Major enhancements ^^^^^^^^^^^^^^^^^^ * Raw and processed data from selections, projections, profiles and so forth can now be saved in a ytdata format and loaded back in by yt. See :ref:`saving_data`. * Totally re-worked volume rendering API. The old API is still available for users who prefer it, however. See :ref:`volume_rendering`. * Support for unstructured mesh visualization. See :ref:`unstructured-mesh-slices` and :ref:`unstructured_mesh_rendering`. * Interactive Data Visualization for AMR and unstructured mesh datasets. See :ref:`interactive_data_visualization`. * Several new colormaps, including a new default, 'arbre'. The other new colormaps are named 'octarine', 'kelp', and 'dusk'. All these new colormaps were generated using the `viscm package `_ and should do a better job of representing the data for colorblind viewers and when printed out in grayscale. See :ref:`colormaps` for more detail. * New frontends for the :ref:`ExodusII `, :ref:`GAMER `, and :ref:`Gizmo ` data formats. * The unit system associated with a dataset is now customizable, defaulting to CGS. * Enhancements and usability improvements for analysis modules, especially the ``absorption_spectrum``, ``photon_simulator``, and ``light_ray`` modules. See :ref:`synthetic-observations`. * Data objects can now be created via an alternative Numpy-like API. See :ref:`quickly-selecting-data`. * A line integral convolution plot modification. See :ref:`annotate-line-integral-convolution`. * Many speed optimizations, including to the volume rendering, units, tests, covering grids, the absorption spectrum and photon simulator analysis modules, and ghost zone generation. * Packaging and release-related improvements: better install and setup scripts, automated PR backporting. * Readability improvements to the codebase, including linting, removing dead code, and refactoring much of the Cython. * Improvements to the CI infrastructure, including more extensible answer tests and automated testing for Python 3 and Windows. * Numerous documentation improvements, including formatting tweaks, bugfixes, and many new cookbook recipes. * Support for geographic (lat/lon) coordinates. 
* Several improvements for SPH codes, including alternative smoothing kernels, an ``add_smoothed_particle_field`` function, and particle type-aware octree construction for Gadget data. * Roundtrip conversions between Pint and yt units. * Added halo data containers for gadget_fof frontend. * Enabled support for spherical datasets in the BoxLib frontend. * Many new tests have been added. * Better hashing for Selector objects. Minor enhancements and bugfixes ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ * Fixed many bugs related to Python 3 compatibility * Fixed bugs related to compatibility issues with newer versions of numpy * Added the ability to export data objects to a Pandas dataframe * Added support for the fabs ufunc to YTArray * Fixed two licensing issues * Fixed a number of bugs related to Windows compatibility. * We now avoid hard-to-decipher tracebacks when loading empty files or directories * Fixed a bug related to ART star particle creation time field * Fixed a bug caused by using the wrong int type for indexing in particle deposit * Fixed a NameError bug in comparing temperature units with offsets * Fixed an API bug in YTArray casting during coercion from YTQuantity * Added loadtxt and savetxt convenience functions for ``YTArray`` * Fixed an issue caused by not sorting species names with Enzo * Fixed a units bug for RAMSES when ``boxlen > 1``. * Fixed ``process_chunk`` function for non-cartesian geometry. * Added ``scale_factor`` attribute to cosmological simulation datasets * Fixed a bug where "center" vectors are used instead of "normal" vectors in get_sph_phi(), etc. * Fixed issues involving invalid FRBs when users called _setup_plots in their scripts * Added a ``text_args`` keyword to ``annotate_scale()`` callback * Added a print_stats function for RAMSES * Fixed a number of bugs in the Photon Simulator * Added support for particle fields to the [Min,Max]Location derived quantities * Fixed some units bugs for Gadget cosmology simulations * Fixed a bug with Gadget/GIZMO StarFormationRate units * Fixed an issue in TimeSeriesData where all the filenames were getting passed to ``load`` on each processor.
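  For context, this is the parallel time-series pattern the fix applies to; with ``piter()`` each MPI task should now only load its own subset of outputs. A sketch (the glob pattern is illustrative)::

      import yt

      yt.enable_parallelism()
      ts = yt.DatasetSeries("DD????/DD????")
      for ds in ts.piter():
          # executed once per output, distributed across MPI tasks
          print(ds.current_time)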
* Fixed a units bug in the Tipsy frontend * Ensured that ARTIOIndex.get_smallest_dx() returns a quantity with units * Ensured that plots are valid after invalidating the figure * Fixed a bug regarding code unit labels * Fixed a bug with reading Tipsy Aux files * Added an effective redshift field to the Light Ray analysis module for use in AbsorptionSpectrum * Fixed a bug with the redshift calculation in LightRay analysis module * Fixed a bug in the Orion frontend when you had more than 10 on-disk particle fields in the file * Detect more types of ART files * Update derived_field_list in add_volume_weighted_smoothed_field * Fixed casting issues for 1D and 2D Enzo simulations * Avoid type indirection when setting up data object entry points * Fixed issues with SIMPUT files * Fixed loading athena data in python3 with provided parameters * Tipsy cosmology unit fixes * Fixed bad unit labels for compound units * Making the xlim and ylim of the PhasePlot plot axes controllable * Adding grid_arrays to grid_container * An Athena and a GDF bugfix * A small bugfix and some small enhancements for sunyaev_zeldovich * Defer to coordinate handlers for width * Make array_like_field return same units as get_data * Fixing bug in ray "dts" and "t" fields * Check against string_types not str * Closed a loophole that allowed improper LightRay use * Enabling AbsorptionSpectrum to deposit unresolved spectral lines * Fixed an ART byte/string/array issue * Changing AbsorptionSpectrum attribute lambda_bins to be lambda_field for consistency * No longer require user to save to disk when generating an AbsorptionSpectrum * ParticlePlot FRBs can now use save_as_dataset and save attributes properly * Added checks to assure ARTIO creates a metal_density field from existing metal fields. * Added mask to LightRay to assure output elements have non-zero density (a problem in some SPH datasets) * Added a "fields" attribute to datasets * Updated the TransferFunctionHelper to work with new profiles * Fixed a bug where the field_units kwarg to load_amr_grids didn't do anything * Changed photon_simulator's output file structure * Fixed a bug related to setting output_units. * Implemented ptp operation. * Added effects of transverse doppler redshift to LightRay * Fixed a casting error for float and int64 multiplication in sdf class * Added ability to read and write YTArrays to and from groups within HDF5 files * Made ftype of "on-disk" stream fields "stream" * Fixed a strings decoding issue in the photon simulator * Fixed an incorrect docstring in load_uniform_grid * Made PlotWindow show/hide helpers for axes and colorbar return self * Made Profile objects store field metadata. * Ensured GDF unit names are strings * Taught off_axis_projection about its resolution keyword. * Reintroduced sanitize_width for polar/cyl coordinates. * We now fail early when load_uniform_grid is passed data with an incorrect shape * Replaced progress bar with tqdm * Fixed redshift scaling of "Overdensity" field in yt-2.x * Fixed several bugs in the eps_writer * Fixed bug affecting 2D BoxLib simulations. * Implemented to_json and from_json for the UnitRegistry object * Fixed a number of issues with ds.find_field_values_at_point[s] * Fixed a bug where sunrise_exporter was using wrong imports * Import HUGE from utilities.physical_ratios * Fixed bug in ARTIO table look ups * Adding support for longitude and latitude * Adding halo data containers for gadget_fof frontend. 
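  A hedged sketch of these halo containers, following the pattern in the yt halo catalog documentation (the file path is illustrative)::

      import yt

      ds = yt.load("groups_042/fof_subhalo_tab_042.0.hdf5")
      # a data container for a single FOF group; its fields are the
      # member particles of that halo
      halo = ds.halo("Group", 0)
      print(halo["member_ids"])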
* Can now compare YTArrays without copying them * Fixed several bugs related to active particle datasets * Angular_momentum_vector now only includes space for particle fields if they exist. * Image comparison tests now print a meaningful error message if they fail. * Fixed numpy 1.11 compatibility issues. * Changed _skip_cache to be True by default. * Enable support for spherical datasets in the BoxLib frontend. * Fixed a bug in add_deposited_particle_field. * Fixed issues with input sanitization in the point data object. * Fixed a copy/paste error introduced by refactoring WeightedMeanParticleField * Fixed many formatting issues in the docs build * Now avoid creating particle unions for particle types that have no common fields * Patched ParticlePlot to work with filtered particle fields. * Fixed a couple of corner cases in the gadget_fof frontend * We now properly normalise all normal vectors in functions that take a normal vector (e.g. get_sph_theta) * Fixed a bug where the transfer function features were not always getting cleared properly. * Made the Chombo frontend is_valid method smarter. * Added a get_hash() function to yt/funcs.py which returns a hash for a file * Added Sievert to the default unit symbol table * Corrected an issue with periodic "wiggle" in AbsorptionSpectrum instances * Made ``ds.field_list`` sorted by default * Bug fixes for the Nyx frontend * Fixed a bug where the index needed to be created before calling derived quantities * Made latex_repr a property, computed on-demand * Fixed a bug in off-axis slice deposition * Fixed a bug with some types of octree block traversal * Ensured that mpi operations retain ImageArray type instead of downgrading to YTArray parent class * Added a call to _setup_plots in the custom colorbar tickmark example * Fixed two minor bugs in save_annotated * Added ability to specify that DatasetSeries is not a mixed data type * Fixed a memory leak in ARTIO * Fixed copy/paste error in to_frb method.
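  For reference, ``to_frb`` is typically used like this (the path, width, and field are illustrative)::

      import numpy as np
      import yt

      ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
      slc = ds.slice("z", 0.5)
      # a 512x512 fixed-resolution buffer, 500 kpc on a side
      frb = slc.to_frb((500.0, "kpc"), 512)
      image = np.array(frb["density"])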
* Ensured that particle dataset max_level is consistent with the index max_level * Fixed an issue where fields were getting added multiple times to field_info.field_list * Enhanced annotate_ray and annotate_arrow callbacks * Added GDF answer tests * Made the YTFieldTypeNotFound exception more informative * Added a new function, fake_vr_orientation_test_ds(), for use in testing * Ensured that instances of subclasses of YTArray have the correct type * Re-enabled max_level for projections, ProjectionPlot, and OffAxisProjectionPlot * Fixed a bug in the Orion 2 field definitions * Fixed a bug caused by matplotlib not being added to install_requires * Edited PhasePlot class to have an annotate_title method * Implemented annotate_cell_edges * Handled KeyboardInterrupt in volume rendering Cython loop * Made old halo finders accept ptype * Updated the latex commands in yt cheatsheet * Fixed a circular dependency loop bug in abar field definition for FLASH datasets * Added neutral species aliases as described in YTEP 0003 * Fixed a logging issue: don't create a StreamHandler unless we will use it * Correcting how theta and phi are calculated in ``_particle_velocity_spherical_radius``, ``_particle_velocity_spherical_theta``, ``_particle_velocity_cylindrical_radius``, and ``_particle_velocity_cylindrical_theta`` * Fixed a bug related to the field dictionary in ``load_particles`` * Allowed for the special case of supplying width as a tuple of tuples * Made yt compile with MSVC on Windows * Fixed a bug involving mask for dt in octree * Merged the get_yt.sh and install_script.sh into one * Added tests for the install script * Allowed use of axis names instead of dimensions for spherical pixelization * Fixed a bug where close() wasn't being called in HDF5FileHandler * Enhanced commandline image upload/delete * Added get_brewer_cmap to get brewer colormaps without importing palettable at the top level * Fixed a bug where a parallel_root_only function was getting called inside another parallel_root_only function * Exit the install script early if python can't import '_ssl' module * Make PlotWindow's annotate_clear method invalidate the plot * Adding int wrapper to avoid deprecation warning from numpy * Automatically create vector fields for magnetic_field * Allow users to completely specify the filename of a 1D profile * Force nose to produce meaningful traceback for cookbook recipes' tests * Fixed x-ray display_name and documentation * Try to guess and load particle file for FLASH dataset * Sped up top-level yt import * Set the field type correctly for fields added as particle fields * Added a position location method for octrees * Fixed a copy/paste error in uhstack function * Made trig functions give correct results when supplied data with dimensions of angle but units that aren't radian * Print out some useful diagnostic information if check_for_openmp() fails * Give user-added derived fields a default field type * Added support for periodicity in annotate_particles. * Added a check for whether returned field has units in volume-weighted smoothed fields * Casting array indices as ints in colormaps infrastructure * Fixed a bug where the standard particle fields weren't getting set up correctly for the Orion frontends * Enabled LightRay to accept loaded datasets instead of just filenames * Allowed for adding or subtracting arrays filled with zeros without checking units. * Fixed a bug in selection for semistructured meshes.
* Removed 'io' from enzo particle types for active particle datasets * Added support for FLASH particle datasets. * Silenced a deprecation warning from IPython * Eliminated segfaults in KDTree construction * Fixed add_field handling when passed a tuple * Ensure field parameters are correct for fields that need ghost zones * Made it possible to use DerivedField instances to access data * Added ds.particle_type_counts * Bug fix and improvement for generating Google Cardboard VR in StereoSphericalLens * Made DarkMatterARTDataset more robust in its _is_valid * Added Earth radius to units * Deposit hydrogen fields to grid in gizmo frontend * Switch to index values being int64 * ValidateParameter ensures parameter values are used during field detection * Switched to using cythonize to manage dependencies in the setup script * ProfilePlot style changes and refactoring * Cancel terms with identical LaTeX representations in a LaTeX representation of a unit * Only return early from comparison validation if base values are equal * Enabled particle fields for clump objects * Added validation checks for data types in callbacks * Enabled modification of image axis names in coordinate handlers * Only add OWLS/EAGLE ion fields if they are present * Ensured that PlotWindow plots continue to look the same under matplotlib 2.0 * Fixed bug in quiver callbacks for off-axis slice plots * Only visit octree children if going to next level * Check that CIC always gets at least two cells * Fixed compatibility with matplotlib 1.4.3 and earlier * Fixed two EnzoSimulation bugs * Moved extraction code from YTSearchCmd to its own utility module * Changed amr_kdtree functions to be Node class methods * Sort block indices in order of ascending levels to match order of grid patches * MKS code unit system fixes * Disabled bounds checking on pixelize_element_mesh * Updated light_ray.py for domain width != 1 * Implemented a DOAP file generator * Fixed bugs for 2D and 1D enzo IO * Converted mutable Dataset attributes to be properties that return copies * Allowing LightRay segments to extend further than one box length * Fixed a divide-by-zero error that occasionally happens in triangle_plane_intersect * Make sure we have an index in subclassed derived quantities * Added an initial draft of an extensions document * Made it possible to pass field tuples to command-line plotting * Ensured the positions of coordinate vector lines are in code units * Added a minus sign to definition of sz_kinetic field * Added grid_levels and grid_indices fields to octrees * Added a morton_index derived field * Added Exception to AMRKDTree in the case of particle or oct-based data Version 3.2 ----------- Major enhancements ^^^^^^^^^^^^^^^^^^ * Particle-Only Plots - a series of new plotting functions for visualizing particle data. See here for more information. * Late-stage beta support for Python 3 - unit tests and answer tests pass for all the major frontends under python 3.4, and yt should now be mostly if not fully usable. Because many of the yt developers are still on Python 2 at this point, this should be considered a "late stage beta" as there may be remaining issues yet to be identified or worked out. * Now supporting Gadget Friend-of-Friends/Subfind catalogs - see here to learn how to load halo catalogs as regular yt datasets. * Custom colormaps can now be easily defined and added - see here to learn how!
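  A minimal sketch using the documented ``yt.make_colormap`` helper (the color names, segment counts, and colormap name are illustrative)::

      import yt

      # each tuple is (color, relative width); this registers a colormap
      # named "french_flag" that plots can then use via set_cmap
      yt.make_colormap([("blue", 20), ("white", 20), ("red", 20)],
                       name="french_flag", interpolate=False)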
* Now supporting Fargo3D data * Performance improvements throughout the code base for memory and speed Minor enhancements ^^^^^^^^^^^^^^^^^^ * Various updates to the following frontends: ART, Athena, Castro, Chombo, Gadget, GDF, Maestro, Pluto, RAMSES, Rockstar, SDF, Tipsy * Numerous documentation updates * Generic hexahedral mesh pixelizer * Adding annotate_ray() callback for plots * AbsorptionSpectrum returned to full functionality and now using faster SciPy Voigt profile * Add a color_field argument to annotate_streamline * Smoothing lengths auto-calculated for Tipsy Datasets * Adding SimulationTimeSeries support for Gadget and OWLS. * Generalizing derived quantity outputs to all be YTArrays or lists of YTArrays as appropriate * Star analysis returned to full functionality * FITS image writing refactor * Adding gradient fields on the fly * Adding support for Gadget Nx4 metallicity fields * Updating value of solar metal mass fraction to be consistent with Cloudy. * Gadget raw binary snapshot handling & non-cosmological simulation units * Adding support for LightRay class to work with Gadget+Tipsy * Add support for subclasses of frontends * Dependencies updated * Serialization for projections using minimal representation * Adding Grid visitors in Cython * Improved semantics for derived field units * Add a yaw() method for the PerspectiveCamera + switch back to LHS * Adding annotate_clear() function to remove previous callbacks from a plot * Added documentation for hexahedral mesh on website * Speed up nearest neighbor evaluation * Add a convenience method to create deposited particle fields * UI and docs updates for 3D streamlines * Ensure particle fields are tested in the field unit tests * Allow a suffix to be specified to save() * Add profiling using airspeed velocity * Various plotting enhancements and bugfixes * Use hglib to update * Various minor updates to halo_analysis toolkit * Docker-based tests for install_script.sh * Adding support for single and non-cosmological datasets to LightRay * Adding the Pascal unit * Add weight_field to PPVCube * FITS reader: allow HDU in auxiliary * Fixing electromagnetic units * Specific Angular Momentum [xyz] computed relative to a normal vector Bugfixes ^^^^^^^^ * Adding ability to create union fields from alias fields * Small fix to allow enzo AP datasets to load in parallel when no APs present * Use proper cell dimension in gradient function. 
* Minor memory optimization for smoothed particle fields * Fix thermal_energy for Enzo HydroMethod==6 * Make sure annotate_particles handles unitful widths properly * Improvements for add_particle_filter and particle_filter * Specify registry in off_axis_projection's image finalization * Apply fix for particle momentum units to the boxlib frontend * Avoid traceback in "yt version" when python-hglib is not installed * Expose no_ghost from export_sketchfab down to _extract_isocontours_from_grid * Fix broken magnetic_unit attribute * Fixing an off-by-one error in the set x/y lim methods for profile plots * Providing better error messages to PlotWindow callbacks * Updating annotate_timestamp to avoid auto-override * Updating callbacks to consistently define coordinate system * Fixing species fields for OWLS and tipsy * Fix extrapolation for vertex-centered data * Fix periodicity check in FRBs * Rewrote project_to_plane() in PerspectiveCamera for draw_domain() * Fix intermittent failure in test_add_deposited_particle_field * Improve minorticks for a symlog plot with one-sided data * Fix smoothed covering grid cell computation * Absorption spectrum generator now 3.0 compliant * Fix off-by-one-or-more in particle smallest dx * Fix dimensionality mismatch error in covering grid * Fix curvature term in cosmology calculator * Fix geographic axes and pixelization * Ensure axes aspect ratios respect the user-selected plot aspect ratio * Avoid clobbering field_map when calling profile.add_fields * Fixing the arbitrary grid deposit code * Fix spherical plotting centering * Make the behavior of to_frb consistent with the docstring * Ensure projected units are initialized when there are no chunks. * Removing "field already exists" warnings from the Owls and Gadget frontends * Various photon simulator bugs * Fixed use of LaTeX math mode * Fix upload_image * Enforce plot width in CSS when displayed in a notebook * Fix cStringIO.StringIO -> cStringIO in png_writer * Add some input sanitizing and error checking to covering_grid initializer * Fix for geographic plotting * Use the correct filename template for single-file OWLS datasets. * Fix Enzo IO performance for 32 bit datasets * Adding a number density field for Enzo MultiSpecies=0 datasets. * Fix RAMSES block ordering * Updating ragged array tests for NumPy 1.9.1 * Force returning lists for HDF5FileHandler Version 3.1 ----------- This is a scheduled feature release. Below are the itemized, aggregate changes since version 3.0. Major changes: ^^^^^^^^^^^^^^ * The RADMC-3D export analysis module has been updated. `PR 1358 `_, `PR 1332 `_. * Performance improvements for grid frontends. `PR 1350 `_. `PR 1382 `_, `PR 1322 `_. * Added a frontend for Dark Matter-only NMSU Art simulations. `PR 1258 `_. * The absorption spectrum generator has been updated. `PR 1356 `_. * The PerspectiveCamera has been updated and a new SphericalCamera has been added. `PR 1346 `_, `PR 1299 `_. * The unit system now supports unit equivalencies and has improved support for MKS units. See :ref:`unit_equivalencies`. `PR 1291 `_, `PR 1286 `_. * Data object selection can now be chained, allowing selecting based on multiple constraints. `PR 1264 `_. * Added the ability to manually override the simulation unit system. `PR 1236 `_. * The documentation has been reorganized and has seen substantial improvements. `PR 1383 `_, `PR 1373 `_, `PR 1364 `_, `PR 1351 `_, `PR 1345 `_. 
`PR 1333 `_, `PR 1342 `_, `PR 1338 `_, `PR 1330 `_, `PR 1326 `_, `PR 1323 `_, `PR 1315 `_, `PR 1305 `_, `PR 1289 `_, `PR 1276 `_. Minor or bugfix changes: ^^^^^^^^^^^^^^^^^^^^^^^^ * The Ampere unit now accepts SI prefixes. `PR 1393 `_. * The Gadget InternalEnergy and StarFormationRate fields are now read in with the correct units. `PR 1392 `_, `PR 1379 `_. * Substantial improvements for the PPVCube analysis module and support for FITS datasets. `PR 1390 `_, `PR 1367 `_, `PR 1347 `_, `PR 1326 `_, `PR 1280 `_, `PR 1336 `_. * The center of a PlotWindow plot can now be set to the maximum or minimum of any field. `PR 1280 `_. * Fixes for the yt testing infrastructure. `PR 1388 `_, `PR 1348 `_. * Projections are now performed using an explicit path length field for all coordinate systems. `PR 1307 `_. * An example notebook for simulations using the OWLS data format has been added to the documentation. `PR 1386 `_. * Fix for the camera.draw_line function. `PR 1380 `_. * Minor fixes and improvements for yt plots. `PR 1376 `_, `PR 1374 `_, `PR 1288 `_, `PR 1290 `_. * Significant documentation reorganization and improvement. `PR 1375 `_, `PR 1359 `_. * Fixed a conflict in the CFITSIO library used by the x-ray analysis module. `PR 1365 `_. * Miscellaneous code cleanup. `PR 1371 `_, `PR 1361 `_. * yt now hooks up to the python logging infrastructure in a more standard fashion, avoiding issues with yt logging showing up when using other libraries. `PR 1355 `_, `PR 1362 `_, `PR 1360 `_. * The docstring for the projection data object has been corrected. `PR 1366 `_ * A bug in the calculation of the plot bounds for off-axis slice plots has been fixed. `PR 1357 `_. * Improvements for the yt-rockstar interface. `PR 1352 `_, `PR 1317 `_. * Fix issues with plot positioning when saving to postscript or encapsulated postscript. `PR 1353 `_. * It is now possible to supply a default value for get_field_parameter. `PR 1343 `_. * A bug in the interpretation of the units of RAMSES simulations has been fixed. `PR 1335 `_. * Plot callbacks are now only executed once before the plot is saved. `PR 1328 `_. * Performance improvements for smoothed covering grid alias fields. `PR 1331 `_. * Improvements and bugfixes for the halo analysis framework. `PR 1349 `_, `PR 1325 `_. * Fix issues with the default setting for the ``center`` field parameter. `PR 1327 `_. * Avoid triggering warnings in numpy and matplotlib. `PR 1334 `_, `PR 1300 `_. * Updates for the field list reference. `PR 1344 `_, `PR 1321 `_, `PR 1318 `_. * yt can now be run in parallel on a subset of available processors using an MPI subcommunicator. `PR 1340 `_ * Fix for incorrect units when loading an Athena simulation as a time series. `PR 1341 `_. * Improved support for Enzo 3.0 simulations that have not produced any active particles. `PR 1329 `_. * Fix for parsing OWLS outputs with periods in the file path. `PR 1320 `_. * Fix for periodic radius vector calculation. `PR 1311 `_. * Improvements for the Maestro and Castro frontends. `PR 1319 `_. * Clump finding is now supported for more generic types of data. `PR 1314 `_ * Fix unit consistency issue when mixing dimensionless unit symbols. `PR 1300 `_. * Improved memory footprint in the photon_simulator. `PR 1304 `_. * Large grids in Athena datasets produced by the join_vtk script can now be optionally split, improving parallel performance. `PR 1304 `_. * Slice plots now accept a ``data_source`` keyword argument. `PR 1310 `_. * Corrected inconsistent octrees in the RAMSES frontend.
`PR 1302 `_ * Nearest neighbor distance field added. `PR 1138 `_. * Improvements for the ORION2 frontend. `PR 1303 `_ * Enzo 3.0 frontend can now read active particle attributes that are arrays of any shape. `PR 1248 `_. * Answer tests added for halo finders. `PR 1253 `_ * A ``setup_function`` has been added to the LightRay initializer. `PR 1295 `_. * The SPH code frontends have been reorganized into separate frontend directories. `PR 1281 `_. * Fixes for accessing deposit fields for FLASH data. `PR 1294 `_ * Added tests for ORION datasets containing sink and star particles. `PR 1252 `_ * Fix for field names in the particle generator. `PR 1278 `_. * Added wrapper functions for numpy array manipulation functions. `PR 1287 `_. * Added support for packed HDF5 Enzo datasets. `PR 1282 `_. Version 3.0 ----------- This release of yt features an entirely rewritten infrastructure for data ingestion, indexing, and representation. While past versions of yt were focused on analysis and visualization of data structured as regular grids, this release features full support for particle (discrete point) data such as N-body and SPH data, irregular hexahedral mesh data, and data organized via octrees. This infrastructure will be extended in future versions for high-fidelity representation of unstructured mesh datasets. Highlighted changes in yt 3.0: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ * Units now permeate the code base, enabling self-consistent unit transformations of all arrays and quantities returned by yt. * Particle data is now supported using a lightweight octree. SPH data can be smoothed onto an adaptively-defined mesh using standard SPH smoothing. * Support for octree AMR codes * Preliminary support for non-Cartesian data, such as cylindrical, spherical, and geographical * Revamped analysis framework for halos and halo catalogs, including direct ingestion and analysis of halo catalogs of several different formats * Support for multi-fluid datasets and datasets containing multiple particle types * Flexible support for dynamically defining new particle types using filters on existing particle types or by combining different particle types. * Vastly improved support for loading generic grid, AMR, hexahedral mesh, and particle data without hand-coding a frontend for a particular data format. * New frontends for ART, ARTIO, Boxlib, Chombo, FITS, GDF, Subfind, Rockstar, Pluto, RAMSES, SDF, Gadget, OWLS, PyNE, Tipsy, as well as rewritten frontends for Enzo, FLASH, Athena, and generic data. * First release to support installation of yt on Windows * Extended capabilities for construction of simulated observations, and new facilities for analyzing and visualizing FITS images and cube data * Many performance improvements This release is the first of several; while most functionality from the previous generation of yt has been updated to work with yt 3.0, it does not yet have feature parity in all respects. While the core of yt is stable, we suggest the support for analysis modules and volume rendering be viewed as a late-stage beta, with a series of additional releases (3.1, 3.2, etc.) appearing over the course of the next year to improve support in these areas. For a description of how to bring your 2.x scripts up to date for 3.0, and a summary of common gotchas in this transition, please see :ref:`yt3differences`. Version 2.6 ----------- This is a scheduled release, bringing to a close the development in the 2.x series. Below are the itemized, aggregate changes since version 2.5.
Major changes: ^^^^^^^^^^^^^^ * yt is now licensed under the 3-clause BSD license. * HEALPix has been removed for the time being, as a result of licensing incompatibility. * The addition of a frontend for the Pluto code * The addition of an OBJ exporter to enable transparent and multi-surface exports of surfaces to Blender and Sketchfab * New absorption spectrum analysis module with documentation * Adding ability to draw lines with Grey Opacity in volume rendering * Updated physical constants to reflect 2010 CODATA data * Dependency updates (including IPython 1.0) * Better notebook support for yt plots * Considerably (10x+) faster kD-tree building for volume rendering * yt can now export to RADMC3D * Athena frontend now supports Static Mesh Refinement and units ( http://hub.yt-project.org/nb/7l1zua ) * Fix long-standing bug for plotting arrays with range of zero * Adding option to have interpolation based on non-uniform bins in interpolator code * Upgrades to most of the dependencies in the install script * ProjectionPlot now accepts a data_source keyword argument Minor or bugfix changes: ^^^^^^^^^^^^^^^^^^^^^^^^ * Fix for volume rendering on the command line * map_to_colormap will no longer return out-of-bounds errors * Fixes for dds in covering grid calculations * Library searching for the build process is now more reliable * Unit fix for "VorticityGrowthTimescale" field * Pyflakes stylistic fixes * Number density added to FLASH * Many fixes for the Athena frontend * Radius and ParticleRadius now work for reduced-dimensionality datasets * Source distributions now work again! * Athena data now 64 bits everywhere * Grids displayed on plots are now shaded to reflect the level of refinement * show_colormaps() is a new function for displaying all known colormaps * PhasePlotter by default now adds a colormap. * System build fix for POSIX systems * Fixing domain offsets for halo centers-of-mass * Removing some Enzo-specific terminology in the Halo Mass Function * Addition of coordinate vectors on volume render * Pickling fix for extracted regions * Addition of some tracer particle annotation functions * Better error message for "yt" command * Fix for radial vs poloidal fields * Piernik 2D data handling fix * Fixes for FLASH current redshift * PlotWindows now have a set_font function and a new default font setting * Colorbars less likely to extend off the edge of a PlotWindow * Clumps overplotted on PlotWindows are now correctly contoured * Many fixes to light ray and profiles for integrated cosmological analysis * Improvements to OpenMP compilation * Typo in value for km_per_pc (not used elsewhere in the code base) has been fixed * Enable parallel IPython notebook sessions ( http://hub.yt-project.org/nb/qgn19h ) * Change (~1e-6) to particle_density deposition, enabling it to be used by FLASH and other frontends * Addition of is_root function for convenience in parallel analysis sessions * Additions to Orion particle reader * Fixing TotalMass for the case when particles are not present * Fixing the density threshold for HOP and pHOP to match the merger tree * Reason can now plot with the latest plot window * Issues with VelocityMagnitude and aliases with velo have been corrected in the FLASH frontend * Halo radii are calculated correctly for domains that do not start at 0,0,0. * Halo mass function now works for non-Enzo frontends.
* Bug fixes for directory creation, typos in docstrings * Speed improvements to ellipsoidal particle detection * Updates to FLASH fields * CASTRO frontend bug fixes * Fisheye camera bug fixes * Answer testing now includes plot window answer testing * Athena data serialization * load_uniform_grid can now decompose dims >= 1024. (#537) * Axis unit setting works correctly for unit names (#534) * ThermalEnergy is now calculated correctly for Enzo MHD simulations (#535) * Radius fields had an asymmetry in periodicity calculation (#531) * Boolean regions can now be pickled (#517) Version 2.5 ----------- Many below-the-surface changes happened in yt 2.5 to improve reliability, fidelity of the answers, and streamline the user interface. The major change in this release has been the immense expansion in testing of yt. We now have over 2000 unit tests (run on every commit, thanks to both Kacper Kowalik and Shining Panda) as well as answer testing for FLASH, Enzo, Chombo and Orion data. The Stream frontend, which can construct datasets in memory, has been improved considerably. It's now easier than ever to load data from disk. If you know how to get volumetric data into Python, you can use either the ``load_uniform_grid`` function or the ``load_amr_grids`` function to create an in-memory dataset that yt can analyze. yt now supports the Athena code. yt is now focusing on providing first class support for the IPython notebook. In this release, plots can be displayed inline. The Reason HTML5 GUI will be merged with the IPython notebook in a future release. Install Script Changes: ^^^^^^^^^^^^^^^^^^^^^^^ * SciPy can now be installed * Rockstar can now be installed * Dependencies can be updated with "yt update --all" * Cython has been upgraded to 0.17.1 * Python has been upgraded to 2.7.3 * h5py has been upgraded to 2.1.0 * hdf5 has been upgraded to 1.8.9 * matplotlib has been upgraded to 1.2.0 * IPython has been upgraded to 0.13.1 * Forthon has been upgraded to 0.8.10 * nose has been added * sympy has been added * python-hglib has been added We've also improved support for installing on OSX, Ubuntu and OpenSUSE. Most Visible Improvements ^^^^^^^^^^^^^^^^^^^^^^^^^ * Nearly 200 pull requests and over 1000 changesets have been merged since yt 2.4 was released on August 2nd, 2012. * numpy is now imported as np, not na. na will continue to work for the foreseeable future. * You can now get a `yt cheat sheet `_! * yt can now load simulation data created by Athena. * The Rockstar halo finder can now be installed by the install script * SciPy can now be installed by the install script * Data can now be written out in two ways: * Sidecar files containing expensive derived fields can be written and implicitly loaded from. * GDF files, which are portable yt-specific representations of full simulations, can be created from any dataset. Work is underway on a pure C library that can be linked against to load these files into simulations. * The "Stream" frontend, for loading raw data in memory, has been greatly expanded and now includes initial conditions generation functionality, particle fields, and simple loading of AMR grids with ``load_amr_grids``. * Spherical and Cylindrical fields have been sped up and made to have a uniform interface. These fields can be the building blocks of more advanced fields. * Coordinate transformations have been sped up and streamlined. It is now possible to convert any scalar or vector field to a new cartesian, spherical, or cylindrical coordinate system with an arbitrary orientation.
This makes it possible to do novel analyses like profiling the toroidal and poloidal velocity as a function of radius in an inclined disk. * Many improvements to the EnzoSimulation class, which can now find many different types of data. * Image data is now encapsulated in an ImageArray class, which carries with it provenance information about its trajectory through yt. * Streamlines now query at every step along the streamline, not just at every cell. * Surfaces can now be extracted and examined, as well as uploaded to Sketchfab.com for interactive visualization in a web browser. * allsky_projection can now accept a datasource, making it easier to cut out regions to examine. * Many, many improvements to PlotWindow. If you're still using PlotCollection, check out ``ProjectionPlot``, ``SlicePlot``, ``OffAxisProjectionPlot`` and ``OffAxisSlicePlot``. * PlotWindow can now accept a timeseries instead of a dataset. * Many fixes for 1D and 2D data, especially in FLASH datasets. * Vast improvements to the particle file handling for FLASH datasets. * Particles can now be created ex nihilo with CICSample_3. * Rockstar halo finding is now a targeted goal. Support for using Rockstar has improved dramatically. * Increased support for tracking halos across time using the FOF halo finder. * The command ``yt notebook`` has been added to spawn an IPython notebook server, and the ``yt.imods`` module can replace ``yt.mods`` in the IPython Notebook to enable better integration. * Metallicity-dependent X-ray fields have now been added. * Grid lines can now be added to volume renderings. * Volume rendering backend has been updated to use an alpha channel, fixing parallel opaque volume renderings. This also enables easier blending of multiple images and annotations to the rendering. Users are encouraged to look at the capabilities of the ``ImageArray`` for writing out renders, as updated in the cookbook examples. Volume renders can now be saved with an arbitrary background color. * Periodicity, or alternately non-periodicity, is now a part of radius calculations. * The AMRKDTree has been rewritten. This allows parallelism with other than power-of-2 MPI processes, arbitrary sets of grids, and splitting of unigrids. * Fixed Resolution Buffers and volume rendering images now utilize a new ImageArray class that stores information such as data source, field names, and other information in a .info dictionary. See the ``ImageArray`` docstrings for more information on how they can be used to save to a bitmap or hdf5 file. Version 2.4 ----------- The 2.4 release was particularly large, encompassing nearly a thousand changesets and a number of new features. To help you get up to speed, we've made an IPython notebook file demonstrating a few of the changes to the scripting API. You can `download it here `_. Most Visible Improvements ^^^^^^^^^^^^^^^^^^^^^^^^^ * Threaded volume renderer, completely refactored from the ground up for speed and parallelism. * The Plot Window (see :ref:`simple-inspection`) is now fully functional! No more PlotCollections, and full, easy access to Matplotlib axes objects. * Many improvements to Time Series analysis: * EnzoSimulation now integrates with TimeSeries analysis! 
* Auto-parallelization of analysis and parallel iteration * Memory usage when iterating over datasets reduced substantially * Many improvements to Reason, the yt GUI * Addition of "yt reason" as a startup command * Keyboard shortcuts in projection & slice mode: z, Z, x, X for zooms, hjkl, HJKL for motion * Drag to move in projection & slice mode * Contours and vector fields in projection & slice mode * Color map selection in projection & slice mode * 3D Scene * Integration with the all new yt Hub ( http://hub.yt-project.org/ ): upload variable resolution projections, slices, project information, vertices and plot collections right from the yt command line! Other Changes ^^^^^^^^^^^^^ * :class:`~yt.visualization.plot_window.ProjectionPlot` and :class:`~yt.visualization.plot_window.SlicePlot` supplant the functionality of PlotCollection. * Camera path creation from keyframes and splines * Ellipsoidal data containers and ellipsoidal parameter calculation for halos * PyX and ZeroMQ now available in the install script * Consolidation of unit handling * HDF5 updated to 1.8.7, Mercurial updated to 2.2, IPython updated to 0.12 * Preview of integration with Rockstar halo finder * Improvements to merger tree speed and memory usage * Sunrise exporter now compatible with Sunrise 4.0 * Particle trajectory calculator now available! * Speed and parallel scalability improvements in projections, profiles and HOP * New Vorticity-related fields * Vast improvements to the ART frontend * Many improvements to the FLASH frontend, including full parameter reads, speedups, and support for more corner cases of FLASH 2, 2.5 and 3 data. * Integration of the Grid Data Format frontend, and a converter for Athena data to this format. * Improvements to command line parsing * Parallel import improvements on parallel filesystems (``from yt.pmods import *``) * proj_style keyword for projections, for Maximum Intensity Projections (``proj_style = "mip"``) * Fisheye rendering for planetarium rendering * Profiles now provide \*_std fields for standard deviation of values * Generalized Orientation class, providing 6DOF motion control * parallel_objects iteration now more robust, provides optional barrier. (Also now being used as underlying iteration mechanism in many internal routines.) * Dynamic load balancing in parallel_objects iteration. * Parallel-aware objects can now be pickled. * Many new colormaps included * Numerous improvements to the PyX-based eps_writer module * FixedResolutionBuffer to FITS export. * Generic image to FITS export. * Multi-level parallelism for extremely large cameras in volume rendering * Light cone and light ray updates to fit with current best practices for parallelism Version 2.3 ----------- `(yt 2.3 docs) `_ * Multi-level parallelism * Real, extensive answer tests * Boolean data regions (see :ref:`boolean_data_objects`) * Isocontours / flux calculations (see :ref:`extracting-isocontour-information`) * Field reorganization * PHOP memory improvements * Bug fixes for tests * Parallel data loading for RAMSES, along with other speedups and improvements there * WebGL interface for isocontours and a pannable map widget added to Reason * Performance improvements for volume rendering * Adaptive HEALPix support * Column density calculations * Massive speedup for 1D profiles * Lots more, bug fixes etc. * Substantial improvements to the documentation, including :ref:`manual-plotting` and a revamped orientation. 
Version 2.2 ----------- `(yt 2.2 docs) `_ * Command-line submission to the yt Hub (http://hub.yt-project.org/) * Initial release of the web-based GUI Reason, designed for efficient remote usage over SSH tunnels * Absorption line spectrum generator for cosmological simulations (see :ref:`absorption_spectrum`) * Interoperability with ParaView for volume rendering, slicing, and so forth * Support for the Nyx code * An order of magnitude speed improvement in the RAMSES support * Quad-tree projections, speeding up the process of projecting by up to an order of magnitude and providing better load balancing * "mapserver" for in-browser, Google Maps-style slice and projection visualization (see :ref:`mapserver`) * Many bug fixes and performance improvements * Halo loader Version 2.1 ----------- `(yt 2.1 docs) `_ * HEALPix-based volume rendering for 4pi, allsky volume rendering * libconfig is now included * SQLite3 and Forthon now included by default in the install script * Development guide has been lengthened substantially and a development bootstrap script is now included. * Installation script now installs Python 2.7 and HDF5 1.8.6 * iyt now tab-completes field names * Halos can now be stored on-disk much more easily between HaloFinding runs. * Halos found inline in Enzo can be loaded and merger trees calculated * Support for CASTRO particles has been added * Chombo support updated and fixed * New code contributions * Contour finder has been sped up by a factor of a few * Constrained two-point functions are now possible, for LOS power spectra * Time series analysis (:ref:`time-series-analysis`) now much easier * Stream Lines now a supported 1D data type * Stream Lines now able to be calculated and plotted (:ref:`streamlines`) * In situ Enzo visualization now much faster * "gui" source directory reorganized and cleaned up * Cython now a compile-time dependency, reducing the size of source tree updates substantially * ``yt-supplemental`` repository now checked out by default, containing cookbook, documentation, handy mercurial extensions, and advanced plotting examples and helper scripts. * Pasteboards now supported and available * Parallel yt efficiency improved by removal of barriers and improvement of collective operations Version 2.0 ----------- * Major reorganization of the codebase for speed, ease of modification, and maintainability * Re-organization of documentation and addition of Orientation Session * Support for FLASH code * Preliminary support for MAESTRO, CASTRO, ART, and RAMSES (contributions welcome!) * Perspective projection for volume rendering * Exporting to Sunrise * Preliminary particle rendering in volume rendering visualization * Drastically improved parallel volume rendering, via kD-tree decomposition * Simple merger tree calculation for FOF catalogs * New and greatly expanded documentation, with a "source" button Version 1.7 ----------- * Direct writing of PNGs * Multi-band image writing * Parallel halo merger tree (see :ref:`merger_tree`) * Parallel structure function generator (see :ref:`two_point_functions`) * Image pan and zoom object and display widget. * Parallel volume rendering (see :ref:`volume_rendering`) * Multivariate volume rendering, allowing for multiple forms of emission and absorption, including approximate scattering and Planck emissions. 
(see :ref:`volume_rendering`) * Added Camera interface to volume rendering (See :ref:`volume_rendering`) * Off-axis projection (See :ref:`volume_rendering`) * Stereo (toe-in) volume rendering (See :ref:`volume_rendering`) * DualEPS extension for better EPS construction * yt now uses Distribute instead of SetupTools * Better ``iyt`` initialization for GUI support * Rewritten, memory conservative and speed-improved contour finding algorithm * Speed improvements to volume rendering * Preliminary support for the Tiger code * Default colormap is now ``algae`` * Lightweight projection loading with ``projload`` * Improvements to ``yt.data_objects.time_series`` * Improvements to :class:`yt.extensions.EnzoSimulation` (See :ref:`analyzing-an-entire-simulation`) * Removed ``direct_ray_cast`` * Fixed bug causing double data-read in projections * Added Cylinder support to ParticleIO * Fixes for 1- and 2-D Enzo datasets * Preliminary, largely non-functional Gadget support * Speed improvements to basic HOP * Added physical constants module * Beginning to standardize and enforce docstring requirements, changing to ``autosummary``-based API documentation. Version 1.6.1 ------------- * Critical fixes to ParticleIO * Halo mass function fixes for comoving coordinates * Fixes to halo finding * Fixes to the installation script * "yt instinfo" command to report current installation information as well as auto-update some types of installations * Optimizations to the volume renderer (2x-26x reported speedups) Version 1.6 ----------- Version 1.6 is a point release, primarily notable for the new parallel halo finder (see :ref:`halo-analysis`) * (New) Parallel HOP ( https://arxiv.org/abs/1001.3411 , :ref:`halo-analysis` ) * (Beta) Software ray casting and volume rendering (see :ref:`volume_rendering`) * Rewritten, faster and better contouring engine for clump identification * Spectral Energy Distribution calculation for stellar populations (see :ref:`synthetic_spectrum`) * Optimized data structures such as the index * Star particle analysis routines (see :ref:`star_analysis`) * Halo mass function routines * Completely rewritten, massively faster and more memory efficient Particle IO * Fixes for plots, including normalized phase plots * Better collective communication in parallel routines * Consolidation of optimized C routines into ``amr_utils`` * Many bug fixes and minor optimizations Version 1.5 ----------- Version 1.5 features many new improvements, most prominently that of the addition of parallel computing abilities (see :ref:`parallel-computation`) and generalization for multiple AMR data formats, specifically both Enzo and Orion. 
* Rewritten documentation * Fully parallel slices, projections, cutting planes, profiles, quantities * Parallel HOP * Friends-of-friends halo finder * Object storage and serialization * Major performance improvements to the clump finder (factor of five) * Generalized domain sizes * Generalized field info containers * Dark Matter-only simulations * 1D and 2D simulations * Better IO for HDF5 sets * Support for the Orion AMR code * Spherical re-gridding * Halo profiler * Disk image stacker * Light cone generator * Callback interface improved * Several new callbacks * New data objects -- ortho and non-ortho rays, limited ray-tracing * Fixed resolution buffers * Spectral integrator for CLOUDY data * Substantially better interactive interface * Performance improvements *everywhere* * Command-line interface to *many* common tasks * Isolated plot handling, independent of PlotCollections Version 1.0 ----------- * Initial release!

.. _code-support: Code Support ============ Levels of Support for Various Codes ----------------------------------- yt provides frontends to support several different simulation code formats as inputs. Below is a table showing what level of support is provided for each code. See :ref:`loading-data` for examples of loading a dataset from each supported output format using yt.

+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Capability ►          | Fluid      | Particles | Parameters | Units | Read on  | Load Raw | Part of    | Level of    |
| Code/Format ▼         | Quantities |           |            |       | Demand   | Data     | test suite | Support     |
+=======================+============+===========+============+=======+==========+==========+============+=============+
| AMRVAC                | Y          | N         | Y          | Y     | Y        | Y        | Y          | Partial     |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| AREPO                 | Y          | Y         | Y          | Y     | Y        | Y        | Y          | Full [#f4]_ |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| ART                   | Y          | Y         | Y          | Y     | Y [#f2]_ | Y        | N          | Full        |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| ARTIO                 | Y          | Y         | Y          | Y     | Y        | Y        | Y          | Full        |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Athena                | Y          | N         | Y          | Y     | Y        | Y        | Y          | Full        |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Athena++              | Y          | N         | Y          | Y     | Y        | Y        | Y          | Partial     |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Castro                | Y          | Y [#f3]_  | Partial    | Y     | Y        | Y        | N          | Full        |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| CfRadial              | Y          | N/A       | Y          | Y     | Y        | Y        | Y          | [#f5]_      |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| CHOLLA                | Y          | N/A       | Y          | Y     | Y        | Y        | Y          | Full        |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Chombo                | Y          | Y         | Y          | Y     | Y        | Y        | Y          | Full        |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Enzo                  | Y          | Y         | Y          | Y     | Y        | Y        | Y          | Full        |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Enzo-E                | Y          | Y         | Y          | Y     | Y        | Y        | Y          | Full        |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Exodus II             | ?          | ?         | ?          | ?     | ?        | ?        | ?          | ?           |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| FITS                  | Y          | N/A       | Y          | Y     | Y        | Y        | Y          | Full        |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| FLASH                 | Y          | Y         | Y          | Y     | Y        | Y        | Y          | Full        |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Gadget                | Y          | Y         | Y          | Y     | Y        | Y        | Y          | Full        |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| GAMER                 | Y          | Y         | Y          | Y     | Y        | Y        | Y          | Full        |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Gasoline              | Y          | Y         | Y          | Y     | Y        | Y        | Y          | Full        |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Gizmo                 | Y          | Y         | Y          | Y     | Y        | Y        | Y          | Full        |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Grid Data Format (GDF)| Y          | N/A       | Y          | Y     | Y        | Y        | Y          | Full        |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| IAMR                  | ?          | ?         | ?          | ?     | ?        | ?        | ?          | ?           |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Maestro               | Y [#f1]_   | N         | Y          | Y     | Y        | Y        | N          | Partial     |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| MOAB                  | Y          | N/A       | Y          | Y     | Y        | Y        | Y          | Full        |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Nyx                   | Y          | Y         | Y          | Y     | Y        | Y        | Y          | Full        |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| openPMD               | Y          | Y         | N          | Y     | Y        | Y        | N          | Partial     |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Orion                 | Y          | Y         | Y          | Y     | Y        | Y        | Y          | Full        |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| OWLS/EAGLE            | Y          | Y         | Y          | Y     | Y        | Y        | Y          | Full        |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Parthenon             | Y          | N         | Y          | Y     | Y        | Y        | Y          | Partial     |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Piernik               | Y          | N/A       | Y          | Y     | Y        | Y        | Y          | Full        |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Pluto                 | Y          | N         | Y          | Y     | Y        | Y        | Y          | Partial     |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| RAMSES                | Y          | Y         | Y          | Y     | Y [#f2]_ | Y        | Y          | Full        |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Tipsy                 | Y          | Y         | Y          | Y     | Y        | Y        | Y          | Full        |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| WarpX                 | Y          | Y         | Y          | Y     | Y        | Y        | Y          | Full        |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+

.. [#f1] The one-dimensional base state is not currently read in.
.. [#f2] These handle mesh fields using an in-memory octree that has not been parallelized. Datasets larger than approximately 1024^3 will not scale well.
.. [#f3] Newer versions of Castro that use BoxLib's standard particle format are supported. The older ASCII format is not.
.. [#f4] The Voronoi cells are currently treated as SPH-like particles, with a smoothing length proportional to the cube root of the cell volume.
.. [#f5] yt provides support for Cartesian-gridded CfRadial datasets. Data in native CfRadial coordinates will be gridded on load; see :ref:`loading-cfradial-data`.

If you have a dataset that uses an output format not yet supported by yt, you can either input your data following :doc:`../examining/Loading_Generic_Array_Data` or :doc:`../examining/Loading_Generic_Particle_Data`, or help us by :ref:`creating_frontend` for this new format.
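As a minimal sketch of the generic-array route, a NumPy array can be wrapped up as an in-memory dataset with ``yt.load_uniform_grid``; the array contents, units, and box size here are placeholder values, not a real simulation:

.. code-block:: python

    import numpy as np

    import yt

    # A made-up 64^3 density cube; any volumetric NumPy array works here.
    arr = np.random.random((64, 64, 64))
    data = {("gas", "density"): (arr, "g/cm**3")}

    # bbox gives the domain edges in code units; length_unit sets their scale.
    bbox = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])
    ds = yt.load_uniform_grid(data, arr.shape, length_unit="Mpc", bbox=bbox)

    yt.SlicePlot(ds, "z", ("gas", "density")).save()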
.. _command-line: Command-Line Usage ------------------ Command-line Functions ~~~~~~~~~~~~~~~~~~~~~~ The :code:`yt` command-line tool allows you to access some of yt's basic functionality without opening a python interpreter. The tool is a collection of subcommands. These can quickly make plots of slices and projections through a dataset, update yt's codebase, print basic statistics about a dataset, launch an IPython notebook session, and more. To get a quick list of what is available, just type: .. code-block:: bash yt -h This will print the list of available subcommands: .. config_help:: yt To execute any such function, simply run: .. code-block:: bash yt <subcommand> Finally, to identify the options associated with any of these subcommands, run: .. code-block:: bash yt <subcommand> -h Plotting from the command line ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ First, we'll discuss plotting from the command line, then we will give a brief summary of the functionality provided by each command-line subcommand. This example uses the :code:`DD0010/moving7_0010` dataset distributed in the yt git repository. First let's see what our options are for plotting: .. code-block:: bash $ yt plot --help There are many! We can choose whether we want a slice (default) or a projection (``-p``), the field, the colormap, the center of the image, the width and unit of width of the image, the limits, the weighting field for projections, and on and on. By default the plotting command will execute the same thing along all three axes, so keep that in mind if it takes three times as long as you'd like! The center of a slice defaults to the center of the domain, so let's just give that a shot and see what it looks like: .. code-block:: bash $ yt plot DD0010/moving7_0010 Well, that looks pretty bad! What has happened here is that the center of the domain only has some minor shifts in density, so the plot is essentially incomprehensible. Let's try it again, but instead of slicing, let's project. This is a line integral through the domain, and for the density field this becomes a column density: .. code-block:: bash $ yt plot -p DD0010/moving7_0010 Now that looks much better! Note that all three axes' projections appear nearly indistinguishable, because of how the two spheres are located in the domain. We could center our domain on one of the spheres and take a slice, as well. Now let's see what the domain looks like with grids overlaid, using the ``--show-grids`` option: .. code-block:: bash $ yt plot --show-grids -p DD0010/moving7_0010 We can now see all the grids in the field of view. If you want to annotate your plot with a scale bar, you can use the ``--show-scale-bar`` option: .. code-block:: bash $ yt plot --show-scale-bar -p DD0010/moving7_0010
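Everything the command-line tool does is also available from a script. As a rough, illustrative equivalent of that last projection (same dataset path as above, with a grid annotation standing in for ``--show-grids``):

.. code-block:: python

    import yt

    # Load the same sample dataset used in the command-line examples.
    ds = yt.load("DD0010/moving7_0010")

    # An on-axis projection of density, with grid edges overlaid.
    p = yt.ProjectionPlot(ds, "x", ("gas", "density"))
    p.annotate_grids()
    p.save()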
Command-line subcommand summary ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ help ++++ Help lists all of the various command-line options in yt. instinfo and version ++++++++++++++++++++ This gives information about where your yt installation is, what version and changeset you're using, and more. mapserver +++++++++ Ever wanted to interact with your data using a `google maps `_-style interface? Now you can by using the yt mapserver. See :ref:`mapserver` for more details. pastebin and pastebin_grab ++++++++++++++++++++++++++ The `pastebin `_ is an online location where you can anonymously post code snippets and error messages to share with other users in a quick, informal way. It is often useful for debugging code or co-developing. By running the ``pastebin`` subcommand with a text file, you send the contents of that file to an anonymous pastebin: .. code-block:: bash yt pastebin my_script.py By running the ``pastebin_grab`` subcommand with a pastebin number (e.g. 1768), it will grab the contents of that pastebin (e.g. the website http://paste.yt-project.org/show/1768 ) and send it to STDOUT for local use. See :ref:`pastebin` for more information. .. code-block:: bash yt pastebin_grab 1768 upload ++++++ Upload a file to a public curldrop instance. Curldrop is a simple web application that allows you to upload and download files straight from your Terminal with an HTTP client such as curl. It was initially developed by `Kevin Kennell `_ and later forked and adjusted for yt’s needs. After a successful upload you will receive a url that can be used to share the data with other people. .. code-block:: bash yt upload my_file.tar.gz plot ++++ This command generates one or many simple plots for a single dataset. By specifying the axis, center, width, etc. (run ``yt help plot`` for details), you can create slices and projections easily at the command line. rpdb ++++ Connect to a currently running (on localhost) rpdb session. See :ref:`remote-debugging` for more info. notebook ++++++++ Launches a Jupyter notebook server and prints out instructions on how to open an ssh tunnel to connect to the notebook server with a web browser. This is most useful when you want to run a Jupyter notebook using CPUs on a remote host. stats +++++ This subcommand provides you with some basic statistics on a given dataset: the number of grids and cells in each level, the time of the dataset, and the resolution. It is tantamount to calling the ``Dataset.print_stats`` method. Additionally, there is the option to print the minimum, maximum, or both for a given field. The field is assumed to be density by default: .. code-block:: bash yt stats GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150 --max --min or a different field can be specified using the ``-f`` flag: .. code-block:: bash yt stats GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150 --max --min -f gas,temperature The field-related stats output from this command can be directed to a file using the ``-o`` flag: .. code-block:: bash yt stats GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150 --max -o out_stats.dat update ++++++ This subcommand updates the yt installation to the most recent version for your repository (e.g. stable, 2.0, development, etc.). Adding the ``--all`` flag will update the dependencies as well. .. _upload-image: upload_image ++++++++++++ Images are often worth a thousand words, so when you're trying to share a piece of code that generates an image, or you're trying to debug image-generation scripts, it can be useful to send your co-authors a link to the image. This subcommand makes such sharing a breeze. By specifying the image to share, ``upload_image`` automatically uploads it anonymously to the website `imgur.com `_ and provides you with a link to share with your collaborators. Note that the image *must* be in the PNG format in order to use this function.
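A minimal invocation looks like the following, where ``my_image.png`` is a placeholder for whatever PNG you want to share:

.. code-block:: bash

    yt upload_image my_image.png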
delete_image ++++++++++++ The image uploaded using ``upload_image`` is assigned a unique hash that can be used to remove it. This subcommand provides an easy way to send a delete request directly to `imgur.com `_. download ++++++++ This subcommand downloads a file from https://yt-project.org/data. Using ``yt download``, one can download a file to: * ``"test_data_dir"``: Save the file to the location specified in the ``"test_data_dir"`` configuration entry for test data. * ``"supp_data_dir"``: Save the file to the location specified in the ``"supp_data_dir"`` configuration entry for supplemental data. * Any valid path to a location on disk, e.g. ``/home/jzuhone/data``. Examples: .. code-block:: bash $ yt download apec_emissivity_v2.h5 supp_data_dir .. code-block:: bash $ yt download GasSloshing.tar.gz test_data_dir .. code-block:: bash $ yt download ZeldovichPancake.tar.gz /Users/jzuhone/workspace If the configuration values ``"test_data_dir"`` or ``"supp_data_dir"`` have not been set by the user, an error will be thrown. Config helper ~~~~~~~~~~~~~ The :code:`yt config` command-line tool allows you to modify and access yt's configuration without manually locating and opening the config file in an editor. To get a quick list of available commands, just type: .. code-block:: bash yt config -h This will print the list of available subcommands: .. config_help:: yt config Since yt version 4, the configuration file is located in ``$XDG_CONFIG_HOME/yt/yt.toml``, adhering to the `XDG Base Directory Specification `_. Unless customized, this defaults to ``$HOME/.config/`` on Unix-like systems (macOS, Linux, ...). The old configuration file (``$XDG_CONFIG_HOME/yt/ytrc``) is deprecated. In order to perform an automatic migration of the old config, you are encouraged to run: .. code-block:: bash yt config migrate This will convert your old config file to the toml format. The original file will be moved to ``$XDG_CONFIG_HOME/yt/ytrc.bak``. Examples ++++++++ Listing the current content of the config file: .. code-block:: bash $ yt config list [yt] log_level = 50 Obtaining a single config value by name: .. code-block:: bash $ yt config get yt log_level 50 Changing a single config value: .. code-block:: bash $ yt config set yt log_level 10 Removing a single config entry: .. code-block:: bash $ yt config rm yt log_level Customizing yt: The Configuration and Plugin Files ================================================== yt features ways to customize it to your personal preferences in terms of how much output it displays, loading custom fields, loading custom colormaps, accessing test datasets regardless of where you are in the file system, etc. This customization is done through :ref:`configuration-file` and :ref:`plugin-file`, both of which exist in your ``$HOME/.config/yt`` directory. .. _configuration-file: The Configuration ----------------- The configuration is stored in simple text files (in the `toml `_ format). The files allow you to set internal yt variables to custom default values to be used in future sessions. The configuration can either be stored :ref:`globally ` or :ref:`locally `. .. _global-conf: Global Configuration ^^^^^^^^^^^^^^^^^^^^ If no local configuration file exists, yt will look for and recognize the file ``$HOME/.config/yt/yt.toml`` as a configuration file, containing several options that can be modified and adjusted to control runtime behavior. For example, a sample ``$HOME/.config/yt/yt.toml`` file could look like: .. code-block:: none [yt] log_level = 1 maximum_stored_datasets = 10000 This configuration file would set the logging threshold much lower, enabling much more voluminous output from yt. Additionally, it increases the number of datasets tracked between instantiations of yt. The configuration file can be managed using the ``yt config --global`` helper. It can list, add, modify and remove options from the configuration file, e.g.:
.. code-block:: none $ yt config -h $ yt config list $ yt config set yt log_level 1 $ yt config rm yt maximum_stored_datasets .. _local-conf: Local Configuration ^^^^^^^^^^^^^^^^^^^ yt will look for a file named ``yt.toml`` in the current directory, and upwards in the file tree until a match is found. If so, its options are loaded and any global configuration is ignored. Local configuration files can contain the same options as the global one. Local configuration files can either be edited manually or managed using ``yt config --local``. It can list, add, modify and remove options, and display the path to the local configuration file, e.g.: .. code-block:: none $ yt config -h $ yt config list --local $ yt config set --local yt log_level 1 $ yt config rm --local yt maximum_stored_datasets $ yt config print-path --local If no local configuration file is present, these commands will create an (empty) one in the current working directory. Configuration Options At Runtime ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ In addition to setting parameters in the configuration file itself, you can set them at runtime. .. warning:: Several parameters are only accessed when yt starts up; therefore, if you want to modify any configuration parameters at runtime, you should execute the appropriate commands at the *very top* of your script! This involves importing the configuration object and then setting a given parameter to be equal to a specific string. Note that even for items that accept integers, floating points and other non-string types, you *must* set them to be a string or else the configuration object will consider them broken. Here is an example script, where we adjust the logging at startup: .. code-block:: python import yt yt.set_log_level(1) ds = yt.load("my_data0001") ds.print_stats() This has the same effect as setting ``log_level = 1`` in the configuration file. Note that a log level of 1 means that all log messages are printed to stdout. To disable logging, set the log level to 50. .. _config-options: Available Configuration Options ^^^^^^^^^^^^^^^^^^^^^^^^^^^ The following external parameters are available. A number of parameters are used internally. * ``colored_logs`` (default: ``False``): Should logs be colored? * ``default_colormap`` (default: ``cmyt.arbre``): What colormap should be used by default for yt-produced images? * ``plugin_filename`` (default: ``my_plugins.py``): The name of our plugin file. * ``log_level`` (default: ``20``): What is the threshold (0 to 50) for outputting log files? * ``test_data_dir`` (default: ``/does/not/exist``): The default path the ``load()`` function searches for datasets when it cannot find a dataset in the current directory. * ``reconstruct_index`` (default: ``True``): If true, grid edges for patch AMR datasets will be adjusted such that they fall as close as possible to an integer multiple of the local cell width. If you are working with a dataset with a large number of grids, setting this to False can speed up loading your dataset, possibly at the cost of grid-aligned artifacts showing up in slice visualizations. * ``requires_ds_strict`` (default: ``True``): If true, answer tests wrapped with :func:`~yt.utilities.answer_testing.framework.requires_ds` will raise :class:`~yt.utilities.exceptions.YTUnidentifiedDataType` rather than consuming it if the required dataset is not present. * ``serialize`` (default: ``False``): If true, perform automatic :ref:`object serialization ` * ``sketchfab_api_key`` (default: empty): API key for https://sketchfab.com/ for uploading AMRSurface objects. * ``suppress_stream_logging`` (default: ``False``): If true, execution mode will be quiet. * ``stdout_stream_logging`` (default: ``False``): If true, logging is directed to stdout rather than stderr * ``skip_dataset_cache`` (default: ``False``): If true, automatic caching of datasets is turned off. * ``supp_data_dir`` (default: ``/does/not/exist``): The default path certain submodules of yt look in for supplemental data files.
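As an illustrative sketch of how a few of these options fit together in a ``yt.toml`` file (the values shown here are arbitrary choices, not recommendations):

.. code-block:: toml

    [yt]
    colored_logs = true              # colorize terminal log output
    log_level = 40                   # only show errors and critical messages
    test_data_dir = "/data/yt_data"  # hypothetical path to downloaded datasets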
.. _per-field-plotconfig: Available per-field Plot Options ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ It is possible to customize the default behaviour of plots using per-field configuration. The default options for plotting a given field can be specified in the configuration file in ``[plot.field_type.field_name]`` blocks. The available keys are: * ``cmap`` (default: ``yt.default_colormap``, see :ref:`config-options`): the colormap to use for the field. * ``log`` (default: ``True``): use a log scale (or symlog if ``linthresh`` is also set). * ``linthresh`` (default: ``None``): if set to a float other than ``None`` and ``log`` is ``True``, use a symlog normalization with the given linear threshold. * ``units`` (defaults to the units of the field): the units to use to represent the field. * ``path_length_units`` (default: ``cm``): the unit of the integration length when doing e.g. projections. This always has the dimensions of a length. Note that this will only be used if ``units`` is also set for the field. The final units will then be ``units*path_length_units``. You can also set defaults for all fields of a given field type by omitting the field name, as illustrated below in the deposit block. .. code-block:: toml [plot.gas.density] cmap = "plasma" log = true units = "mp/cm**3" [plot.gas.velocity_divergence] cmap = "bwr" # use a diverging colormap log = false # and a linear scale [plot.deposit] path_length_units = "kpc" # use kpc for deposition projections .. _per-field-config: Available per-Field Configuration Options ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ It is possible to set attributes for fields that would typically be set by the frontend source code, such as the aliases for a field, the units the field should be expected in, and the display name. This allows individuals to customize what yt expects of a given dataset without modifying the yt source code. For instance, if your dataset has an on-disk field called "particle_extra_field_1", you could specify its units, display name, and what yt should think of it as with: .. code-block:: toml [fields.nbody.particle_extra_field_1] aliases = ["particle_other_fancy_name", "particle_alternative_fancy_name"] units = "code_time" display_name = "Dinosaurs Density" .. _plugin-file: Plugin Files ------------ Plugin files are a means of creating custom fields, quantities, data objects, colormaps, and other executable functions or classes to be used in future yt sessions without modifying the source code directly. To enable a plugin file, call the function :func:`~yt.funcs.enable_plugins` at the top of your script. Global system plugin file ^^^^^^^^^^^^^^^^^^^^^^^^^ yt will look for and recognize the file ``$HOME/.config/yt/my_plugins.py`` as a plugin file. It is possible to use a different file name by defining ``plugin_filename`` in your ``yt.toml`` file, as mentioned above.
.. _per-field-config: Available per-Field Configuration Options ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ It is possible to set attributes for fields that would typically be set by the frontend source code, such as the aliases for a field, the units the field should be expected in, and the display name. This allows individuals to customize what yt expects of a given dataset without modifying the yt source code. For instance, if your dataset has an on-disk field called "particle_extra_field_1" you could specify its units, display name, and what yt should think of it as with: .. code-block:: toml [fields.nbody.particle_extra_field_1] aliases = ["particle_other_fancy_name", "particle_alternative_fancy_name"] units = "code_time" display_name = "Dinosaurs Density" .. _plugin-file: Plugin Files ------------ Plugin files are a means of creating custom fields, quantities, data objects, colormaps, and other executable functions or classes to be used in future yt sessions without modifying the source code directly. To enable a plugin file, call the function :func:`~yt.funcs.enable_plugins` at the top of your script. Global system plugin file ^^^^^^^^^^^^^^^^^^^^^^^^^ yt will look for and recognize the file ``$HOME/.config/yt/my_plugins.py`` as a plugin file. It is possible to rename this file to ``$HOME/.config/yt/<plugin_filename>`` by defining ``plugin_filename`` in your ``yt.toml`` file, as mentioned above. .. note:: You can tell that your system plugin file is being parsed by watching for a logging message when you import yt. Note that the ``yt load`` command line entry point parses the plugin file. Local project plugin file ^^^^^^^^^^^^^^^^^^^^^^^^^ Optionally, :func:`~yt.funcs.enable_plugins` can be passed an argument to specify a custom location for a plugin file. This can be useful to define project-wide customizations. In that use case, any system-level plugin file will be ignored. Plugin File Format ^^^^^^^^^^^^^^^^^^ Plugin files should contain pure Python code. When accessing yt functions and classes, the ``yt.`` prefix is not required, because of how plugin files are loaded. For example, if one created a plugin file containing: .. code-block:: python import numpy as np def _myfunc(field, data): return np.random.random(data["density"].shape) add_field( "random", function=_myfunc, sampling_type="cell", dimensions="dimensionless", units="auto", ) then all of my data objects would have access to the field ``random``. You can also define other convenience functions in your plugin file. For instance, you could define some variables or functions, and even import common modules: .. code-block:: python import os HOMEDIR = "/home/username/" RUNDIR = "/scratch/runs/" def load_run(fn): if not os.path.exists(RUNDIR + fn): return None return load(RUNDIR + fn) In this case, we've written ``load_run`` to look in a specific directory to see if it can find an output with the given name. So now we can write scripts that use this function: .. code-block:: python import yt yt.enable_plugins() my_run = yt.load_run("hotgasflow/DD0040/DD0040") And because we have used ``yt.enable_plugins`` we have access to the ``load_run`` function defined in our plugin file. .. note:: if your convenience function's name collides with an existing object in yt's namespace, it will be ignored. Note that using the plugins file implies that your script is no longer fully reproducible. If you share your script with someone else and use some of the functionality of your plugins file, you will also need to share your plugins file for someone else to re-run your script properly. Adding Custom Colormaps ^^^^^^^^^^^^^^^^^^^^^^^ To add custom :ref:`colormaps` to your plugin file, you must use the :func:`~yt.visualization.color_maps.make_colormap` function to generate a colormap of your choice and then add it to the plugin file. You can see an example of this in :ref:`custom-colormaps`. Remember that you don't need to prefix commands in your plugin file with ``yt.``, but you'll only be able to access the colormaps when you load the ``yt.mods`` module, not simply ``yt``.
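For example, a minimal sketch of such a plugin-file entry (the colormap name and color stops here are invented for illustration):

.. code-block:: python

    # In my_plugins.py -- no "yt." prefix is needed inside a plugin file.
    make_colormap(
        [("black", 10), ("blue", 20), ("white", 10)],
        name="my_dark_blues",  # hypothetical colormap name
    )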
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/reference/demeshening.rst0000644000175100001770000002553514714401662020612 0ustar00runnerdocker.. _demeshening: How Particles are Indexed ========================= With yt-4.0, the method by which particles are indexed changed considerably. Whereas in previous versions, particles were indexed based on their position in an octree (the structure of which was determined by particle number density), in yt-4.0 this system was overhauled to utilize a `bitmap index `_ based on a space-filling curve, using an `enhanced word-aligned hybrid `_ boolean array as its backend. .. note:: You may see scattered references to "the demeshening" (including in the filename for this document!). This was a humorous name used in the yt development process to refer to removing a global (octree) mesh for particle codes. By avoiding the use of octrees as a base mesh, yt is able to create *much* more accurate SPH visualizations. We have a `gallery demonstrating this `_ but even in this side-by-side comparison the differences can be seen quite easily, with the left image being from the old, octree-based approach and the right image from the new, meshless approach. .. image:: _images/yt3_p0010_proj_density_None_x_z002.png :width: 45 % .. image:: _images/yt4_p0010_proj_density_None_x_z002.png :width: 45 % Effectively, what "the demeshening" does is allow yt to treat the particles as discrete objects (or objects with an area of influence) and use their positions in a multi-level index to optimize and minimize the disk operations necessary to load only those particles it needs. .. note:: The theory and implementation of yt's bitmap indexing system is described in some detail in the `yt 4.0 paper `_ in the section entitled `Indexing Discrete-Point Datasets `_. In brief, however, what this relies on is two numbers, ``index_order1`` and ``index_order2``. These control the "coarse" and "refined" sets of indices, and they are supplied to any particle dataset ``load()`` in the form of a tuple as the argument ``index_order``. By default these are set to 5 and 7, respectively, but it is entirely possible that a different set of values will work better for your purposes. For example, if you were to use the sample Gadget-3 dataset, you could override the default values and use values of 5 and 5 by specifying this argument to the ``load_sample`` function; this works with ``load`` as well. .. code-block:: python ds = yt.load_sample("Gadget3-snap-format2", index_order=(5, 5)) So this is how you *change* the index order, but it doesn't explain precisely what this "index order" actually is. Indexing and Why yt Does it --------------------------- yt is based on the idea that data should be selected and read only when it is needed. So for instance, if you only want particles or grid cells from a small region in the center of your dataset, yt wants to avoid any reading of the data *outside* of that region. Now, in practice, this isn't entirely possible -- particularly with particles, you can't actually tell when something is inside or outside of a region *until* you read it, because the particle locations are *stored in the dataset*. One way to avoid this is to have an index of the data, so that yt can know that some of the data that is located *here* in space is located *there* in the file or files on disk. So if you're able to say, I only care about data in "region A", you can look for those files that contain data within "region A," read those, and discard the parts of them that are *not* within "region A." The finer-grained the index, the longer it takes to build that index -- and the larger the index is, the longer it takes to query. The cost of having too *coarse* an index, on the other hand, is that the IO conducted to read a given region is likely to be *too much*, and more particles will be discarded after being read, before being "selected" by the data selector (sphere, region, etc). An important note about all of this is that the index system is not meant to *replace* the positions stored on disk, but instead to speed up queries of those positions -- the index is meant to be lossy in representation, and only provides a means of generating IO information.
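To make this concrete, here is a sketch (the snapshot name is a placeholder) of the kind of selection this indexing is designed to accelerate:

.. code-block:: python

    import yt

    ds = yt.load("snapshot_200.hdf5", index_order=(5, 7))

    # Only the file chunks whose index bits overlap this sphere are read;
    # any particles that are read but fall outside the sphere are discarded
    # by the selector afterwards.
    sp = ds.sphere(ds.domain_center, (500.0, "kpc"))
    print(sp["gas", "density"].size)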
Additionally, the atomic unit that yt considers when conducting IO or selection queries is called a "chunk" internally. For situations where the individual *files* are very, very large, yt will "sub-chunk" these into smaller bits, which by default are set to :math:`64^3` particles. Whenever indexing is done, it is done at this granular level, with offsets to individual particle collections stored. For instance, if you had a (single) file with :math:`1024^3` particles in it, yt would instead regard this as a series of :math:`64^3`-particle sub-chunks, and index each one individually. Index Order ----------- The bitmap index system is based on a two-level scheme for assigning positions in three-dimensional space to integer values. What this means is that each particle is assigned a "coarse" index, which is global to the full domain of the collection of particles, and *if necessary* an additional "refined" index is assigned to the particle, within that coarse index. The index "order" values refer to the number of bins along each side that each level of the index is allowed. For instance, if we allow the particles to be subdivided into 8 "bins" in each direction, this would correspond to an index order of 3 (as :math:`2^3 = 8`); correspondingly, an index order of 5 would be 32 bins in each direction, and an index order of 7 would be 128 bins in each direction. Each particle is then assigned a set of ``i, j, k`` values for the bin value in each dimension, and these ``i, j, k`` values are combined into a single (64-bit) integer according to a space-filling curve. The process by which this is done by yt is as follows: 1. For each "chunk" of data -- which may be a file, or a subset of a file in which particles are contained -- assign each particle to an integer value according to the space-filling curve and the coarse index order. Set the "bit" in an array of boolean values that each of these integers corresponds to. Note that this is almost certainly *reductive* -- there will be fewer bits set than there are particles, which is *by design*. 2. Once all chunks or files have been assigned an array of bits that correspond to the places where, according to the coarse indexing scheme, they have particles, identify all those "bits" that have been set by more than one chunk. All of these bits correspond to locations where more than one file contains particles -- so if you want to select something from this region, you'd need to read more than one file. 3. For each "collision" location, apply a *second-order* index, to identify which sub-regions are touched by more than one file. At the end of this process, each file will be associated with a single "coarse" index (which covers the entire domain of the data), as well as a set of "collision" locations, and in each "collision" location a set of bitarrays that correspond to that subregion. When reading data, yt will identify which "coarse" index regions are necessary to read. If any of those coarse index regions are covered by more than one file, it will examine the "refined" index for those regions and see if it is able to subset more efficiently. Because all of these operations can be done with logical operations, this considerably reduces the amount of data that needs to be read from disk before expensive selection operations are conducted. For those situations that involve particles with regions of influence -- such as smoothed particle hydrodynamics, where particles have associated smoothing lengths -- these are taken into account when constructing the index.
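As a quick sanity check on this arithmetic (pure illustration; no yt API is involved):

.. code-block:: python

    # Bins per side implied by an index order, and the effective resolution
    # when a refined index is nested inside a coarse "collision" zone.
    order1, order2 = 5, 7  # yt's defaults
    coarse_bins = 2**order1   # 32 bins per side across the full domain
    refined_bins = 2**order2  # 128 sub-bins per side within a collision zone
    print(coarse_bins, refined_bins, coarse_bins * refined_bins)  # 32 128 4096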
Efficiency of Index Orders -------------------------- What this can lead to, however, is situations where (particularly at the edges of regions populated by SPH particles) the indexing system identifies collisions, but the relatively small number of particles and correspondingly large "smoothing lengths" result in a large number of "refined" index values that need to be set. Counterintuitively, this actually means that occasionally the "refined" indexing process can take an inordinately long time for *small* datasets, rather than large datasets. In these situations, it is typically sufficient to set the "refined" index order to be much lower than its default value. For instance, setting the ``index_order`` to (5, 3) means that the full domain will be subdivided into 32 bins in each dimension, and any "collision" zones will be further subdivided into 8 bins in each dimension (corresponding to an effective 256 bins across the full domain). If you are experiencing very long index times, this may be a productive parameter to modify. For instance, if you are seeing very rapid "coarse" indexing followed by very, very slow "refined" indexing, this likely plays a part; often this will be most obvious in small-ish (i.e., :math:`256^3` or smaller) datasets. Index Caching ------------- The index values are cached between instantiations, in a sidecar file named with the name of the dataset file and the suffix ``.indexII_JJ.ewah``, where ``II`` and ``JJ`` are ``index_order1`` and ``index_order2``. So for instance, if ``index_order`` is set to (5, 7), and you are loading a dataset file named "snapshot_200.hdf5", after indexing, you will have an index sidecar file named ``snapshot_200.hdf5.index5_7.ewah``. On subsequent loads, this index file will be reused, rather than re-generated. By *default* these sidecars are stored next to the dataset itself, in the same directory. However, the filename scheme (and thus location) can be changed by supplying an alternate filename to the ``load`` command with the argument ``index_filename``. For instance, if you are accessing data in a read-only location, you can specify that the index will be cached in a location that is write-accessible to you.
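For example (a sketch; both paths here are placeholders), that might look like:

.. code-block:: python

    import yt

    # The snapshot lives on a read-only filesystem, so we redirect the
    # cached EWAH index sidecar to a scratch space we can write to.
    ds = yt.load(
        "/archive/read_only/snapshot_200.hdf5",
        index_filename="/scratch/username/snapshot_200.hdf5.index5_7.ewah",
    )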
These files contain the *compressed* bitmap index values, along with some metadata that describes the version of the indexing system they use and so forth. If the version of the index that yt uses has changed, they will be regenerated; in general this will not vary very often (and should change much less frequently than, for instance, yt releases) and yt will provide a message to let you know when it is doing so. The file size of these cached index files can be difficult to estimate; because it is based on compressed bitmap arrays, it will depend on the spatial organization of the particles it is indexing, and how co-located they are according to the space-filling curve. For very small datasets it will be small, but we do not expect these index files to grow beyond a few hundred megabytes even in the extreme case of large datasets that have little to no coherence in their clustering. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/reference/field_list.rst0000644000175100001770000000005214714401662020435 0ustar00runnerdocker.. _available-fields: .. yt_showfields:: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/reference/index.rst0000644000175100001770000000101114714401662017422 0ustar00runnerdockerReference Materials =================== Here we include reference materials for yt with a list of all the code formats supported, a description of how to use yt at the command line, a detailed listing of individual classes and functions, a description of the useful config file, and finally a list of changes between each major release of the code. .. toctree:: :maxdepth: 2 code_support command-line api/api api/modules configuration python_introduction field_list demeshening changelog ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/reference/python_introduction.rst0000644000175100001770000007433714714401662022452 0ustar00runnerdockerA Brief Introduction to Python ------------------------------ All scripts that use yt are really Python scripts that use yt as a library. The great thing about Python is that the standard set of libraries that come with it are very extensive -- Python comes with everything you need to write and run mail servers and web servers, create Logo-style turtle graphics, do arbitrary precision math, interact with the operating system, and many other things. In addition to that, efforts by the scientific community to improve Python for computational science have created libraries for fast array computation, GPGPU operations, distributed computing, and visualization. So when you use yt through the scripting interface, you get for free the ability to interlink it with any of the other libraries available for Python. In the past, this has been used to create new types of visualization using OpenGL, data management using Google Docs, and even a simulation that sends an SMS when it has new data to report on. But, this also means learning a little bit of Python! This next section presents a short tutorial of how to start up and think about Python, and then moves on to how to use Python with yt. Starting Python +++++++++++++++ Python has two different modes of execution: interactive execution, and scripted execution. We'll start with interactive execution and then move on to how to write and use scripts. Before we get started, we should briefly touch upon the commands ``help`` and ``dir``. These two commands provide a level of introspection: ``help(something)`` will return the internal documentation on ``something``, including how it can be used and all of the possible "methods" that can be called on it. ``dir()`` will return the available commands and objects that can be directly called, and ``dir(something)`` will return information about all the commands that ``something`` provides. This probably sounds a bit opaque, but it will become clearer with time -- it's also probably helpful to call ``help`` on any or all of the objects we create during this orientation. To start up Python, at your prompt simply type: .. code-block:: bash $ python This will open up Python and give you a simple prompt of three greater-than signs. Let's inaugurate the occasion appropriately -- type this:: >>> print("Hello, world.") As you can see, this printed out the string "Hello, world." just as we expected. Now let's try a more advanced string, one with a number in it. For this we'll use an "f-string", which is the preferred way to format strings in modern Python.
We'll print pi, but only with three digits of accuracy.:: >>> print(f"Pi is precisely {3.1415926:0.2f}") This took the number we fed it (3.1415926) and printed it out as a floating point number with two decimal places. Now let's try something a bit different -- let's print out both the name of the number and its value.:: >>> print(f"{'pi'} is precisely {3.1415926:0.2f}") And there you have it -- the very basics of starting up Python, and some very simple mechanisms for printing values out. Now let's explore a few types of data that Python can store and manipulate. Data Types ++++++++++ Python provides a number of datatypes, but the main ones that we'll concern ourselves with at first are lists, tuples, strings, numbers, and dictionaries. Most of these can be instantiated in a couple different ways, and we'll look at a few of them. Some of these objects can be modified in place, which is called being mutable, and some are immutable and cannot be modified in place. We'll talk below about what that means. Perhaps most importantly, though, is an idea about how Python works in terms of names and bindings -- this is called the "object model." Creating an object is independent of binding it to a name -- think of this like pointers in C. This also operates a bit differently for mutable and immutable types. We'll talk a bit more about this later, but it's handy to initially think of things in terms of references. (This is also, not coincidentally, how Python thinks of things internally as well!) When you create an object, initially it has no references to it -- there's nothing that points to it. When you bind it to a name, then you are making a reference, and so its "reference count" is now 1. When something else makes a reference to it, the reference count goes up to 2. When the reference count returns to 0, the object is deleted -- but not before. This concept of reference counting comes up from time to time, but it's not going to be a focus of this orientation. The two easiest datatypes are simply strings and numbers. We can make a string very easily:: >>> my_string = "Hello there" >>> print(my_string) We can also take a look at each individual part of a string. We'll use the 'slicing' notation for this. As a brief note, slicing is 0-indexed, so that element 0 corresponds to the first element. If we wanted to see the third element of our string:: >>> print(my_string[2]) We can also take the third through the fifth elements:: >>> print(my_string[2:5]) But note that if you try to change an element directly, Python objects and it won't let you -- that's because strings are immutable. (But, note that because of how the += operator works, we can do "my_string += '1'" without issue.) To create a number, we do something similar:: >>> a = 10 >>> print(a) This works for floating points as well. Now we can do math on these numbers:: >>> print(a**2) >>> print(a + 5) >>> print(a + 5.1) >>> print(a / 2.0) Now that we have a couple primitive datatypes, we can move on to sequences -- lists and tuples. These two objects are very similar, in that they are collections of arbitrary data types. We'll only look at collections of strings and numbers for now, but these can be filled with arbitrary datatypes (including objects that yt provides, like spheres, datasets, grids, and so on.)
The easiest way to create a list is to simply construct one:: >>> my_list = [] At this point, you can find out how long it is, you can append elements, and you can access them at will:: >>> my_list.append(1) >>> my_list.append(my_string) >>> print(my_list[0]) >>> print(my_list[-1]) >>> print(len(my_list)) You can also create a list already containing an initial set of elements:: >>> my_list = [1, 2, 3, "four"] >>> my_list[2] = "three!!" Lists are very powerful objects, which we'll talk about a bit below when discussing how iteration works in Python. A tuple is like a list, in that it's a sequence of objects, and it can be sliced and examined piece by piece. But unlike a list, it's immutable: whatever a tuple contains at instantiation is what it contains for the rest of its existence. Creating a tuple is just like creating a list, except that you use parentheses instead of brackets:: >>> my_tuple = (1, "a", 62.6) Tuples show up very commonly when handling arguments to Python functions and when dealing with multiple return values from a function. They can also be unpacked:: >>> v1, v2, v3 = my_tuple will assign 1, "a", and 62.6 to v1, v2, and v3, respectively. Mutables vs Immutables and Is Versus Equals +++++++++++++++++++++++++++++++++++++++++++ This section is not a "must read" -- it's more of an exploration of how Python's objects work. At some point this is something you may want to be familiar with, but it's not strictly necessary on your first pass. Python provides the operator ``is`` as well as the comparison operator ``==``. The operator ``is`` determines whether two objects are in fact the same object, whereas the operator ``==`` determines if they are equal, according to some arbitrarily defined equality operation. Think of this like comparing the serial numbers on two pictures of a dollar bill (the ``is`` operator) versus comparing the values of two pieces of currency (the ``==`` operator). This digs into the idea of how the Python object model works, so let's test some things out. For instance, let's take a look at comparing two floating point numbers:: >>> a = 10.1 >>> b = 10.1 >>> print(a == b) >>> print(a is b) The first one returned True, but the second one returned False. Even though both numbers are equal, they point to different places in memory. Now let's try assigning things a bit differently:: >>> b = a >>> print(a is b) This time it's true -- they point to the same part of memory. Try incrementing one and seeing what happens. Now let's try this with a string:: >>> a = "Hi there" >>> b = a >>> print(a is b) Okay, so our intuition here works the same way, and it returns True. But what happens if we modify the string?:: >>> a += "!" >>> print(a) >>> print(b) >>> print(a is b) As you can see, now not only does a contain the value "Hi there!", but it also is a different value than what b contains, and it also points to a different region in memory. That's because strings are immutable -- the act of adding on "!" actually creates an entirely new string and assigns that entirely new string to the variable a, leaving the string pointed to by b untouched. With lists, which are mutable, we have a bit more liberty with how we modify the items and how that modifies the object and its pointers. A list is really just a pointer to a collection; the list object itself does not have any special knowledge of what constitutes that list. So when we initialize a and b:: >>> a = [1, 5, 1094.154] >>> b = a We end up with two pointers to the same set of objects.
(We can also have a list inside a list, which adds another fun layer.) Now when we modify a, it shows up in b:: >>> a.append("hat wobble") >>> print(b[-1]) This also works with the concatenation operator:: >>> a += ["beta sequences"] >>> print(a[-1], b[-1]) But we can force a break in this by slicing the list when we initialize:: >>> a = [1, 2, 3, 4] >>> b = a[:] >>> a.append(5) >>> print(b[-1], a[-1]) Here they are different, because we have sliced the list when initializing b. The coolest datatype available in Python, however, is the dictionary. This is a mapping object of key:value pairs, where one value is used to look up another value. We can instantiate a dictionary in a variety of ways, but for now we'll only look at one of the simplest mechanisms for doing so:: >>> my_dict = {} >>> my_dict["A"] = 1.0 >>> my_dict["B"] = 154.014 >>> my_dict[14001] = "This number is great" >>> print(my_dict["A"]) As you can see, one value can be used to look up another. Almost all datatypes (with a few notable exceptions, but for the most part these are quite uncommon) can be used as a key, and you can use any object as a value. We won't spend too much time discussing dictionaries explicitly, but I will leave you with a word on their efficiency: the Python dictionary lookup algorithm is known for its hand-tuned optimization and speed, and it's very common to use dictionaries to look up hundreds of thousands or even millions of elements and to expect the lookups to be responsive. Looping +++++++ Looping in Python is both different and more powerful than in lower-level languages. Rather than looping based exclusively on conditionals (which is possible in Python), the fundamental mode of looping in Python is iterating over objects. In C, one might construct a loop where some counter variable is initialized, and at each iteration of the loop it is incremented and compared against a reference value; when the counter variable reaches the reference variable, the loop is terminated. In Python, on the other hand, to accomplish iteration through a set of sequential integers, one actually constructs a sequence of those integers, and iterates over that sequence. For more discussion of this, and some very, very powerful ways of accomplishing this iteration process, look through the Python documentation for the words 'iterable' and 'generator.' To see this in action, let's first take a look at the built-in function ``range``. :: >>> print(range(10)) What the function ``range`` returns is not a list but a ``range`` object: a lazy sequence of integers, starting at zero, that is as long as the argument to the ``range`` function. In practice, this means that iterating over ``range(N)`` yields the integers ``0, 1, 2, ... N-1``. So now we can execute a for loop, but first, an important interlude: Control blocks in Python are delimited by white space. This means that, unlike in C with its brackets, you indicate an isolated control block for conditionals, function declarations, loops and other things with an indentation. When that control block ends, you dedent the text. In yt, we use four spaces -- I recommend you do the same -- which can be inserted by a text editor in place of tab characters. Let's try this out with a for loop. First type ``for i in range(10):`` and press enter. This will change the prompt to be three periods, instead of three greater-than signs, and you will be expected to hit the tab key to indent. Then type "print(i)", press enter, and then instead of indenting again, press enter again. The entire entry should look like this:: >>> for i in range(10): ...
print(i) ... As you can see, it prints out each integer in turn. So far this feels a lot like C. (It won't, once you start leaning on iterables -- ``range`` itself is one: rather than handing back an already-created list, it returns the promise of a sequence, whose elements aren't created until they are requested.) Let's try it with our earlier list:: >>> my_sequence = ["a", "b", 4, 110.4] >>> for i in my_sequence: ... print(i) ... This time it prints out every item in the sequence. A common idiom is to figure out which index the loop is at. The first time this is written, it usually goes something like this:: >>> index = 0 >>> my_sequence = ["a", "b", 4, 110.4] >>> for i in my_sequence: ... print("%s = %s" % (index, i)) ... index += 1 ... This does what you would expect: it prints out the index we're at, then the value of that index in the list. But there's an easier way to do this, less prone to error -- and a bit cleaner! You can use the ``enumerate`` function to accomplish this:: >>> my_sequence = ["a", "b", 4, 110.4] >>> for index, val in enumerate(my_sequence): ... print("%s = %s" % (index, val)) ... This does the exact same thing, but we didn't have to keep track of the counter variable ourselves. You can use the function ``reversed`` to reverse a sequence in a similar fashion. Try this out:: >>> my_sequence = range(10) >>> for val in reversed(my_sequence): ... print(val) ... We can even combine the two!:: >>> my_sequence = range(10) >>> for index, val in enumerate(reversed(my_sequence)): ... print("%s = %s" % (index, val)) ... The most fun of all the built-in functions that operate on iterables, however, is the ``zip`` function. This function will combine two sequences (but only up to the shorter of the two -- so if one has 16 elements and the other 1000, the zipped sequence will only have 16) and produce iterators over both. As an example, let's say you have two sequences of values, and you want to produce a single combined sequence from them.:: >>> seq1 = ["Hello", "What's up", "I'm fine"] >>> seq2 = ["!", "?", "."] >>> seq3 = [] >>> for v1, v2 in zip(seq1, seq2): ... seq3.append(v1 + v2) ... >>> print(seq3) As you can see, this is much easier than constructing index values by hand and then drawing from the two sequences using those index values. I should note that while this is great in some instances, for numeric operations, NumPy arrays (discussed below) will invariably be faster. Conditionals ++++++++++++ Conditionals, like loops, are delimited by indentation. They follow a relatively simple structure, with an "if" statement, followed by the conditional itself, and then a block of indented text to be executed in the event of the success of that conditional. For subsequent conditionals, the word "elif" is used, and for the default, the word "else" is used. As a brief aside, the case/switch statement in Python is typically executed using an if/elif/else block; this can be done using more complicated dictionary-type statements with functions, but that typically only adds unnecessary complexity. For a simple example of how to do an if/else statement, we'll return to the idea of iterating over a loop of numbers. We'll use the ``%`` operator, which is a binary modulus operation: it divides the first number by the second and then returns the remainder. Our first pass will examine the remainders from dividing by 2, and print out all the even numbers.
(There are of course easier ways of determining which numbers are multiples of 2 -- particularly using NumPy, as we'll do below.):: >>> for val in range(100): ... if val % 2 == 0: ... print("%s is a multiple of 2" % (val)) ... Now we'll add on an ``else`` statement, so that we print out all the odd numbers as well, with the caveat that they are not multiples of 2.:: >>> for val in range(100): ... if val % 2 == 0: ... print("%s is a multiple of 2" % (val)) ... else: ... print("%s is not a multiple of 2" % (val)) ... Let's extend this to check the remainders of division with both 2 and 3, and determine which numbers are multiples of 2, 3, or neither. We'll do this for all numbers between 0 and 99.:: >>> for val in range(100): ... if val % 2 == 0: ... print("%s is a multiple of 2" % (val)) ... elif val % 3 == 0: ... print("%s is a multiple of 3" % (val)) ... else: ... print("%s is not a multiple of 2 or 3" % (val)) ... This should print out which numbers are multiples of 2 or 3 -- but note that we're not catching all the multiples of 6, which are multiples of both 2 and 3. To do that, we have a couple options, but we can start with just changing the first if statement to encompass both, using the ``and`` operator:: >>> for val in range(100): ... if val % 2 == 0 and val % 3 == 0: ... print("%s is a multiple of 6" % (val)) ... elif val % 2 == 0: ... print("%s is a multiple of 2" % (val)) ... elif val % 3 == 0: ... print("%s is a multiple of 3" % (val)) ... else: ... print("%s is not a multiple of 2 or 3" % (val)) ... In addition to the ``and`` statement, the ``or`` and ``not`` statements work in the expected manner. There are also several built-in functions, including ``any`` and ``all``, that operate on sequences of conditionals, but those are perhaps better saved for later. Array Operations ++++++++++++++++ In general, iteration over sequences carries with it some substantial overhead: each value is selected, bound to a local name, and then its type is determined when it is acted upon. This is, regrettably, the price of the generality that Python brings with it. While this overhead is minimal for operations acting on a handful of values, if you have a million floating point elements in a sequence and you want to simply add 1.2 to all of them, or multiply them by 2.5, or exponentiate them, this carries with it a substantial performance hit. To accommodate this, the NumPy library has been created to provide very fast operations on arrays of numerical elements. When you create a NumPy array, you are creating a shaped array of (potentially) sequential locations in memory which can be operated on at the C level, rather than at the interpreted Python level. For this reason, while NumPy arrays can act like Python sequences -- and can thus be iterated over, modified in place, and sliced -- they can also be addressed as a monolithic block. All of the fluid and particle quantities used in yt will be expressed as NumPy arrays, allowing for both efficient computation and a minimal memory footprint. For instance, the following operation will not work in standard Python:: >>> vals = list(range(10)) >>> vals *= 2.0 (Note that multiplying vals by the integer 2 will not do what you think: rather than multiplying each value by 2.0, it will simply double the length of the list!) To get started with array operations, let's first import the NumPy library. This is the first time we've seen an import in this orientation, so we'll dwell for a moment on what this means.
When a library is imported, it is read from disk, the functions are loaded into memory, and they are made available to the user. So when we execute:: >>> import numpy The ``numpy`` module is loaded, and then can be accessed:: >>> numpy.arange(10) This calls the ``arange`` function that belongs to the ``numpy`` module's "namespace." We'll use the term namespace to refer to the variables, functions, and submodules that belong to a given conceptual region. We can also extend our current namespace with the contents of the ``numpy`` module, so that we don't have to prefix all of our calls to ``numpy`` functions with ``numpy.``, but we will not do so here, so as to preserve the distinction between the built-in Python functions and the NumPy-provided functions. To get started, let's perform the NumPy version of getting a sequence of numbers from 0 to 99:: >>> my_array = numpy.arange(100) >>> print(my_array) >>> print(my_array * 2.0) >>> print(my_array * 2) As you can see, each of these operations does exactly what we think it ought to. And, in fact, so does this one:: >>> my_array *= 2 (Note that we multiplied by the integer ``2`` here: an *in-place* multiplication by ``2.0`` would fail, because the result of a float multiplication can no longer be stored in the array's integer datatype.) So far we've only examined what happens when we operate on a single array of a given shape -- specifically, an array that is N elements long, but only one dimensional. NumPy arrays are, for the most part, defined by their data, their shape, and their data type. We can examine both the shape (which includes dimensionality) and the size (strictly the total number of elements) in an array by looking at a couple properties of the array:: >>> print(my_array.size) >>> print(my_array.shape) Note that size must be the product of the components of the shape. In this case, both are 100. We can obtain a new array of a different shape by calling the ``reshape`` method on an array:: >>> print(my_array.reshape((10, 10))) In this case, we have not modified ``my_array`` but instead created a new array containing the same elements, but with a different dimensionality and shape. You can modify an array's shape in place, as well, but that should be done with care and the explanation of how that works and its caveats can come a bit later. There are a few other important characteristics of arrays, and ways to create them. We can see what kind of datatype an array is by examining its ``dtype`` attribute:: >>> print(my_array.dtype) This can be changed by calling ``astype`` with another datatype. Datatypes include, but are not limited to, ``int32``, ``int64``, ``float32``, ``float64``.:: >>> float_array = my_array.astype("float64") Arrays can also be operated on together, in lieu of something like an iteration using the ``zip`` function. To show this, we'll use the ``numpy.random.random`` function to generate a random set of values of length 100, and then we'll multiply our original array against those random values.:: >>> rand_array = numpy.random.random(100) >>> print(rand_array * my_array) There are a number of functions you can call on arrays, as well. For instance:: >>> print(rand_array.sum()) >>> print(rand_array.mean()) >>> print(rand_array.min()) >>> print(rand_array.max()) Indexing in NumPy is very fun, and also provides some advanced functionality for selecting values. You can slice and dice arrays:: >>> print(my_array[50:60]) >>> print(my_array[::2]) >>> print(my_array[:-10]) But NumPy also provides the ability to construct boolean arrays, which are the result of conditionals.
For example, let's say that you wanted to generate a random set of values, and select only those less than 0.2:: >>> rand_array = numpy.random.random(100) >>> print(rand_array < 0.2) What is returned is a long list of booleans. Boolean arrays can be used as indices -- what this means is that you can construct an index array and then use that to select only those values where that index array is true. In this example we also use the ``numpy.all`` and ``numpy.any`` functions, which do exactly what you might think -- they evaluate a statement and see if all elements satisfy it, and if any individual element satisfies it, respectively.:: >>> ind_array = rand_array < 0.2 >>> print(rand_array[ind_array]) >>> print(numpy.all(rand_array[ind_array] < 0.2)) You can even skip the creation of the variable ``ind_array`` completely, and instead just coalesce the statements into a single statement:: >>> print(numpy.all(rand_array[rand_array < 0.2] < 0.2)) >>> print(numpy.any(rand_array[rand_array < 0.2] > 0.2)) You might look at these and wonder why this is useful -- we've already selected those elements that are less than 0.2, so why do we want to re-evaluate it? But the interesting component to this is that a conditional applied to one array can be used to index another array. For instance:: >>> print(my_array[rand_array < 0.2]) Here we've identified those elements in our random number array that are less than 0.2, and printed the corresponding elements from our original sequential array of integers. This is actually a great way of selecting a random sample of a dataset -- in this case we get back approximately 20% of the dataset ``my_array``, selected at random. To create arrays from nothing, several options are available. The command ``numpy.array`` will create an array from any arbitrary sequence:: >>> my_sequence = [1.0, 510.42, 1789532.01482] >>> my_array = numpy.array(my_sequence) Additionally, arrays full of ones and zeros can be created, with whatever datatype you like:: >>> my_integer_ones = numpy.ones(100, dtype="int64") >>> my_float_ones = numpy.ones(100, dtype="float64") >>> my_integer_zeros = numpy.zeros(100, dtype="int64") >>> my_float_zeros = numpy.zeros(100, dtype="float64") The function ``numpy.concatenate`` is also useful, but outside the scope of this orientation. The NumPy documentation has a number of more advanced mechanisms for combining arrays; the documentation for "broadcasting" in particular is very useful, and covers mechanisms for combining arrays of different shapes and sizes, which can be tricky but also extremely powerful. We won't discuss the idea of broadcasting here, simply because I don't know that I could do it justice! The NumPy Docs have a great `section on broadcasting `_.
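That said, here is a two-line taste of it (just a sketch) -- adding a "column" to a "row" builds the full 2D table of pairwise sums::

>>> col = numpy.arange(3).reshape((3, 1))
>>> row = numpy.arange(4).reshape((1, 4))
>>> print((col + row).shape)

This prints ``(3, 4)``: the two shapes were "broadcast" against one another.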
Scripted Usage ++++++++++++++ We've now explored Python interactively. However, for long-running analysis tasks or analysis tasks meant to be run on a compute cluster non-interactively, we will want to utilize its scripting interface. Let's start by quitting out of the interpreter. If you have not already done so, you can quit by pressing "Ctrl-D", which will free all memory used by Python and return you to your shell's command prompt. At this point, open up a text editor and edit a file called ``my_first_script.py``. Python scripts typically end in the extension ``.py``. We'll start our scripting tests by doing some timing of array operations versus sequence operations. Into this file, type this text:: import numpy import time my_array = numpy.arange(1000000, dtype="float64") t1 = time.time() my_array_squared = my_array**2.0 t2 = time.time() print("It took me %0.3e seconds to square the array using NumPy" % (t2-t1)) t1 = time.time() my_sequence_squared = [] for i in range(1000000): my_sequence_squared.append(i**2.0) t2 = time.time() print("It took me %0.3e seconds to square the sequence without NumPy" % (t2-t1)) Now save this file, and return to the command prompt. We can execute it by supplying it to Python: .. code-block:: bash $ python my_first_script.py It should run, display two pieces of information, and terminate, leaving you back at the command prompt. On my laptop, the array operation is approximately 42 times faster than the sequence operation! Of course, depending on the operation conducted, this number can go up quite substantially. If you want to run a Python script and then be given a Python interpreter prompt, you can call the ``python`` command with the option ``-i``: .. code-block:: bash $ python -i my_first_script.py Python will execute the script and when it has reached the end it will give you a command prompt. At this point, all of the variables you have set up and created will be available to you -- so you can, for instance, print out the contents of ``my_array_squared``:: >>> print(my_array_squared) The scripting interface for Python is quite powerful, and by combining it with interactive execution, you can, for instance, set up variables and functions for interactive exploration of data. Functions and Objects +++++++++++++++++++++ Functions and objects are the easiest way to perform very complex, powerful actions in Python. For the most part we will not discuss them; in fact, the standard Python tutorial that comes with the Python documentation is a very good explanation of how to create and use objects and functions, and attempting to replicate it here would simply be futile. yt provides many objects and functions for your usage, and it is through the usage and combination of functions and objects that you will be able to create plots, manipulate data, and visualize your data. And with that, we conclude our brief introduction to Python. I recommend checking out the standard Python tutorial or browsing some of the NumPy documentation. If you're looking for a book to buy, the only book I've personally ever been completely satisfied with has been David Beazley's book on Python Essentials and the Python standard library, but I've also heard good things about many of the others, including those by Alex Martelli and Wesley Chun. We'll now move on to talking more about how to use yt, both from a scripting perspective and interactively.
Python and Related References +++++++++++++++++++++++++++++ * `Python quickstart `_ * `Learn Python the Hard Way `_ * `Byte of Python `_ * `Dive Into Python `_ * `NumPy docs `_ * `Matplotlib docs `_ ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.251151 yt-4.4.0/doc/source/visualizing/0000755000175100001770000000000014714401715016175 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/visualizing/FITSImageData.ipynb0000644000175100001770000005167214714401662021556 0ustar00runnerdocker{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Writing FITS Images" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "yt has capabilities for writing 2D and 3D uniformly gridded data generated from datasets to FITS files. This is done via the `FITSImageData` class. We'll test these capabilities out on an Athena dataset." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import yt" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "units_override = {\n", " \"length_unit\": (1.0, \"Mpc\"),\n", " \"mass_unit\": (1.0e14, \"Msun\"),\n", " \"time_unit\": (1.0, \"Myr\"),\n", "}\n", "ds = yt.load(\"MHDSloshing/virgo_low_res.0054.vtk\", units_override=units_override)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Creating FITS images from Slices and Projections" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are several ways to make a `FITSImageData` instance. The most intuitive ways are to use the `FITSSlice`, `FITSProjection`, `FITSOffAxisSlice`, and `FITSOffAxisProjection` classes to write slices and projections directly to FITS. To demonstrate a useful example of creating a FITS file, let's first make a `ProjectionPlot`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prj = yt.ProjectionPlot(\n", " ds,\n", " \"z\",\n", " (\"gas\", \"temperature\"),\n", " weight_field=(\"gas\", \"density\"),\n", " width=(500.0, \"kpc\"),\n", ")\n", "prj.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Suppose that we wanted to write this projection to a FITS file for analysis and visualization in other programs, such as ds9. We can do that using `FITSProjection`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prj_fits = yt.FITSProjection(\n", " ds, \"z\", (\"gas\", \"temperature\"), weight_field=(\"gas\", \"density\")\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "which took the same parameters as `ProjectionPlot` except the width, because `FITSProjection` and `FITSSlice` always make slices and projections spanning the full width of the domain, at the finest resolution available in the simulation, in a unit determined to be appropriate for the physical size of the dataset." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also set the width manually in `FITSProjection`. For example, set the width to 500 kiloparsecs to get a FITS file of the same projection plot as discussed above."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prj_fits = yt.FITSProjection(\n", " ds,\n", " \"z\",\n", " (\"gas\", \"temperature\"),\n", " weight_field=(\"gas\", \"density\"),\n", " width=(500.0, \"kpc\"),\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you want the center coordinates of the image in either a slice or a projection to be (0,0) instead of the domain coordinates, set `origin=\"image\"`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prj_fits_img = yt.FITSProjection(\n", " ds,\n", " \"z\",\n", " (\"gas\", \"temperature\"),\n", " weight_field=(\"gas\", \"density\"),\n", " width=(500.0, \"kpc\"),\n", " origin=\"image\",\n", ")" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "source": [ "## Making FITS images from Particle Projections" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "source": [ "To create a FITS image from a particle field which is smeared onto the image, we can use\n", "`FITSParticleProjection`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "dsp = yt.load(\"gizmo_64/output/snap_N64L16_135.hdf5\")\n", "prjp_fits = yt.FITSParticleProjection(\n", " dsp, \"x\", (\"PartType1\", \"particle_mass\"), deposition=\"cic\"\n", ")\n", "prjp_fits.writeto(\"prjp.fits\", overwrite=True)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "source": [ "Note that we used the \"Cloud-In-Cell\" interpolation method (`\"cic\"`) instead of the default\n", "\"Nearest-Grid-Point\" (`\"ngp\"`) method. \n", "\n", "If you want the projection to be divided by the pixel area (to make a projection of mass density, \n", "for example), supply the ``density`` keyword argument:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "prjpd_fits = yt.FITSParticleProjection(\n", " dsp, \"x\", (\"PartType1\", \"particle_mass\"), density=True, deposition=\"cic\"\n", ")\n", "prjpd_fits.writeto(\"prjpd.fits\", overwrite=True)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "source": [ "`FITSParticleOffAxisProjection` can be used to make a projection along any arbitrary sight line:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "L = [1, -1, 1] # normal or \"line of sight\" vector\n", "N = [0, 0, 1] # north or \"up\" vector\n", "poff_fits = yt.FITSParticleOffAxisProjection(\n", " dsp, L, (\"PartType1\", \"particle_mass\"), deposition=\"cic\", north_vector=N\n", ")\n", "poff_fits.writeto(\"poff.fits\", overwrite=True)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "source": [ "## Using `HDUList` Methods" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can call a number of the [AstroPy `HDUList`](https://astropy.readthedocs.io/en/latest/io/fits/api/hdulists.html) class's methods from a `FITSImageData` object. 
For example, `info` shows us the contents of the virtual FITS file:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prj_fits.info()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also look at the header for a particular field:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prj_fits[\"temperature\"].header" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "where we can see that the units of the temperature field are Kelvin and the cell widths are in kiloparsecs. Note that the length, time, mass, velocity, and magnetic field units of the dataset have been copied into the header " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " If we want the raw image data with units, we can use the `data` attribute of this field:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prj_fits[\"temperature\"].data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Changing Aspects of the Images" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can use the `set_unit` method to change the units of a particular field:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prj_fits.set_unit(\"temperature\", \"R\")\n", "prj_fits[\"temperature\"].data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The length units of the image (and its coordinate system), as well as the resolution of the image, can be adjusted when creating it using the `length_unit` and `image_res` keyword arguments, respectively:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# length_unit defaults to that from the dataset\n", "# image_res defaults to 512\n", "slc_fits = yt.FITSSlice(\n", " ds, \"z\", (\"gas\", \"density\"), width=(500, \"kpc\"), length_unit=\"ly\", image_res=256\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now check that this worked by looking at the header, notice in particular the `NAXIS[12]` and `CUNIT[12]` keywords (the `CDELT[12]` and `CRPIX[12]` values also change):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "slc_fits[\"density\"].header" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Saving and Loading Images" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The image can be written to disk using the `writeto` method:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prj_fits.writeto(\"sloshing.fits\", overwrite=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since yt can read FITS image files, it can be loaded up just like any other dataset. 
Since we created this FITS file with `FITSImageData`, the image will contain information about the units and the current time of the dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ds2 = yt.load(\"sloshing.fits\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "and we can make a `SlicePlot` of the 2D image, which shows the same data as the previous image:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "slc2 = yt.SlicePlot(ds2, \"z\", (\"gas\", \"temperature\"), width=(500.0, \"kpc\"))\n", "slc2.set_log(\"temperature\", True)\n", "slc2.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Creating `FITSImageData` Instances Directly from FRBs, PlotWindow instances, and 3D Grids" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you want more fine-grained control over what goes into the FITS file, you can call `FITSImageData` directly, with various kinds of inputs. For example, you could use a `FixedResolutionBuffer`, and specify you want the units in parsecs instead:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "slc3 = ds.slice(0, 0.0)\n", "frb = slc3.to_frb((500.0, \"kpc\"), 800)\n", "fid_frb = frb.to_fits_data(\n", " fields=[(\"gas\", \"density\"), (\"gas\", \"temperature\")], length_unit=\"pc\"\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If one creates a `PlotWindow` instance, e.g. `SlicePlot`, `ProjectionPlot`, etc., you can also call this same method there:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fid_pw = prj.to_fits_data(\n", " fields=[(\"gas\", \"density\"), (\"gas\", \"temperature\")], length_unit=\"pc\"\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A 3D FITS cube can also be created from regularly gridded 3D data. In yt, there are covering grids and \"arbitrary grids\". 
The easiest way to make an arbitrary grid object is using `ds.r`, where we can index the dataset like a NumPy array, creating a grid of 1.0 Mpc on a side, centered on the origin, with 64 cells on a side:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "grid = ds.r[\n", " (-0.5, \"Mpc\"):(0.5, \"Mpc\"):64j,\n", " (-0.5, \"Mpc\"):(0.5, \"Mpc\"):64j,\n", " (-0.5, \"Mpc\"):(0.5, \"Mpc\"):64j,\n", "]\n", "fid_grid = grid.to_fits_data(\n", " fields=[(\"gas\", \"density\"), (\"gas\", \"temperature\")], length_unit=\"Mpc\"\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Other `FITSImageData` Methods" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Creating Images from Others" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A `FITSImageData` instance can be generated from one previously written to disk using the `from_file` classmethod:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fid = yt.FITSImageData.from_file(\"sloshing.fits\")\n", "fid.info()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Multiple `FITSImageData` can be combined to create a new one, provided that the coordinate information is the same:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prj_fits2 = yt.FITSProjection(ds, \"z\", (\"gas\", \"density\"), width=(500.0, \"kpc\"))\n", "prj_fits3 = yt.FITSImageData.from_images([prj_fits, prj_fits2])\n", "prj_fits3.info()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Alternatively, individual fields can be popped as well to produce new instances of `FITSImageData`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dens_fits = prj_fits3.pop(\"density\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So this new instance would only have the `\"density\"` field:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dens_fits.info()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "and the old one has the `\"density\"` field removed:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prj_fits3.info()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Adding Sky Coordinates to Images" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So far, the FITS images we have shown have linear spatial coordinates. We can see this by looking at the header for one of the fields, and examining the `CTYPE1` and `CTYPE2` keywords:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prj_fits[\"temperature\"].header" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `WCSNAME` keyword is set to `\"yt\"` by default. \n", "\n", "However, one may want to take a projection of an object and make a crude mock observation out of it, with celestial coordinates. For this, we can use the `create_sky_wcs` method. 
Specify a center (RA, Dec) coordinate in degrees, as well as a linear scale in terms of angle per distance:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sky_center = [30.0, 45.0] # in degrees\n", "sky_scale = (2.5, \"arcsec/kpc\") # could also use a YTQuantity\n", "prj_fits.create_sky_wcs(sky_center, sky_scale, ctype=[\"RA---TAN\", \"DEC--TAN\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By default, a tangent RA/Dec projection is used, but another projection can be chosen using the `ctype` keyword. We can now look at the header and see that it has the appropriate WCS. The old `\"yt\"` WCS has been moved to a second WCS in the header, where the parameters have an `\"A\"` appended to them:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prj_fits[\"temperature\"].header" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "and now the `WCSNAME` has been set to `\"celestial\"`. If you want the original WCS to remain in the primary place, call `create_sky_wcs` with `replace_old_wcs=False`, which will put the new celestial WCS in the second slot instead:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "jupyter": { "outputs_hidden": true } }, "outputs": [], "source": [ "prj_fits3.create_sky_wcs(\n", " sky_center, sky_scale, ctype=[\"RA---TAN\", \"DEC--TAN\"], replace_old_wcs=False\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prj_fits3[\"temperature\"].header" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Updating Header Parameters" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also add header keywords to a single field, or to all fields in the FITS image, using `update_header`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fid_frb.update_header(\"all\", \"time\", 0.1) # Update all the fields\n", "fid_frb.update_header(\"temperature\", \"scale\", \"Rankine\") # Update just one field" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(fid_frb[\"density\"].header[\"time\"])\n", "print(fid_frb[\"temperature\"].header[\"scale\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Changing Image Names" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can use the `change_image_name` method to change the name of an image in a `FITSImageData` instance:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fid_frb.change_image_name(\"density\", \"mass_per_volume\")\n", "fid_frb.info() # now \"density\" should be gone and \"mass_per_volume\" should be in its place" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Convolving FITS Images" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, you can convolve an image inside a `FITSImageData` instance with a kernel, either a Gaussian with a specific standard deviation, or any kernel provided by AstroPy. See AstroPy's [Convolution and filtering](http://docs.astropy.org/en/stable/convolution/index.html) for more details."
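 ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For the AstroPy-kernel form, a minimal sketch is shown below, assuming `convolve` accepts a kernel instance as described above; it is applied to a different image so that the density comparison that follows is unaffected:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# A hedged sketch: pass an AstroPy kernel object instead of a width.\n", "from astropy.convolution import Gaussian2DKernel\n", "\n", "kernel = Gaussian2DKernel(x_stddev=3.0) # stddev in pixels, an illustrative choice\n", "prj_fits.convolve(\"temperature\", kernel)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Below, the simpler form is used: a Gaussian whose standard deviation is given as a length (here 3.0 kpc), writing the image to disk before and after convolving:"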
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dens_fits.writeto(\"not_convolved.fits\", overwrite=True)\n", "# Gaussian kernel with standard deviation of 3.0 kpc\n", "dens_fits.convolve(\"density\", 3.0)\n", "dens_fits.writeto(\"convolved.fits\", overwrite=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's load these up as datasets and see the difference:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ds0 = yt.load(\"not_convolved.fits\")\n", "dsc = yt.load(\"convolved.fits\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "slc3 = yt.SlicePlot(ds0, \"z\", (\"gas\", \"density\"), width=(500.0, \"kpc\"))\n", "slc3.set_log(\"density\", True)\n", "slc3.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "slc4 = yt.SlicePlot(dsc, \"z\", (\"gas\", \"density\"), width=(500.0, \"kpc\"))\n", "slc4.set_log(\"density\", True)\n", "slc4.show()" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" } }, "nbformat": 4, "nbformat_minor": 4 } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/visualizing/TransferFunctionHelper_Tutorial.ipynb0000644000175100001770000001733214714401662025564 0ustar00runnerdocker{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Transfer Function Helper Tutorial" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, we explain how to use TransferFunctionHelper to visualize and interpret yt volume rendering transfer functions. Creating a custom transfer function is a process that usually involves some trial-and-error. TransferFunctionHelper is a utility class designed to help you visualize the probability density functions of yt fields that you might want to volume render. This makes it easier to choose a nice transfer function that highlights interesting physical regimes.\n", "\n", "First, we set up our namespace and define a convenience function to display volume renderings inline in the notebook. Using `%matplotlib inline` makes it so matplotlib plots display inline in the notebook." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "import numpy as np\n", "from IPython.core.display import Image\n", "\n", "import yt\n", "from yt.visualization.volume_rendering.transfer_function_helper import (\n", " TransferFunctionHelper,\n", ")\n", "\n", "\n", "def showme(im):\n", " # screen out NaNs\n", " im[im != im] = 0.0\n", "\n", " # Create an RGBA bitmap to display\n", " imb = yt.write_bitmap(im, None)\n", " return Image(imb)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we load up a low resolution Enzo cosmological simulation." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "ds = yt.load(\"Enzo_64/DD0043/data0043\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have the dataset loaded, let's create a `TransferFunctionHelper` to visualize the dataset and transfer function we'd like to use." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "tfh = TransferFunctionHelper(ds)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`TransferFunctionHelpler` will intelligently choose transfer function bounds based on the data values. Use the `plot()` method to take a look at the transfer function." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "# Build a transfer function that is a multivariate gaussian in temperature\n", "tfh = TransferFunctionHelper(ds)\n", "tfh.set_field((\"gas\", \"temperature\"))\n", "tfh.set_log(True)\n", "tfh.set_bounds()\n", "tfh.build_transfer_function()\n", "tfh.tf.add_layers(5)\n", "tfh.plot()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's also look at the probability density function of the `mass` field as a function of `temperature`. This might give us an idea where there is a lot of structure. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "tfh.plot(profile_field=(\"gas\", \"mass\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It looks like most of the gas is hot but there is still a lot of low-density cool gas. Let's construct a transfer function that highlights both the rarefied hot gas and the dense cool gas simultaneously." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "tfh = TransferFunctionHelper(ds)\n", "tfh.set_field((\"gas\", \"temperature\"))\n", "tfh.set_bounds()\n", "tfh.set_log(True)\n", "tfh.build_transfer_function()\n", "tfh.tf.add_layers(\n", " 8,\n", " w=0.01,\n", " mi=4.0,\n", " ma=8.0,\n", " col_bounds=[4.0, 8.0],\n", " alpha=np.logspace(-1, 2, 7),\n", " colormap=\"RdBu_r\",\n", ")\n", "tfh.tf.map_to_colormap(6.0, 8.0, colormap=\"Reds\")\n", "tfh.tf.map_to_colormap(-1.0, 6.0, colormap=\"Blues_r\")\n", "\n", "tfh.plot(profile_field=(\"gas\", \"mass\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's take a look at the volume rendering. First use the helper function to create a default rendering, then we override this with the transfer function we just created." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "im, sc = yt.volume_render(ds, [(\"gas\", \"temperature\")])\n", "\n", "source = sc.get_source()\n", "source.set_transfer_function(tfh.tf)\n", "im2 = sc.render()\n", "\n", "showme(im2[:, :, :3])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That looks okay, but the red gas (associated with temperatures between 1e6 and 1e8 K) is a bit hard to see in the image. To fix this, we can make that gas contribute a larger alpha value to the image by using the ``scale`` keyword argument in ``map_to_colormap``." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "tfh2 = TransferFunctionHelper(ds)\n", "tfh2.set_field((\"gas\", \"temperature\"))\n", "tfh2.set_bounds()\n", "tfh2.set_log(True)\n", "tfh2.build_transfer_function()\n", "tfh2.tf.add_layers(\n", " 8,\n", " w=0.01,\n", " mi=4.0,\n", " ma=8.0,\n", " col_bounds=[4.0, 8.0],\n", " alpha=np.logspace(-1, 2, 7),\n", " colormap=\"RdBu_r\",\n", ")\n", "tfh2.tf.map_to_colormap(6.0, 8.0, colormap=\"Reds\", scale=5.0)\n", "tfh2.tf.map_to_colormap(-1.0, 6.0, colormap=\"Blues_r\", scale=1.0)\n", "\n", "tfh2.plot(profile_field=(\"gas\", \"mass\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that the height of the red portion of the transfer function has increased by a factor of 5.0. If we use this transfer function to make the final image:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "source.set_transfer_function(tfh2.tf)\n", "im3 = sc.render()\n", "\n", "showme(im3[:, :, :3])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The red gas is now much more prominent in the image. We can clearly see that the hot gas is mostly associated with bound structures while the cool gas is associated with low-density voids." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" } }, "nbformat": 4, "nbformat_minor": 4 } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/visualizing/Volume_Rendering_Tutorial.ipynb0000644000175100001770000002521514714401662024375 0ustar00runnerdocker{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Volume Rendering Tutorial \n", "\n", "This notebook shows how to use the new (in version 3.3) Scene interface to create custom volume renderings. The tutorial proceeds in the following steps: \n", "\n", "1. [Creating the Scene](#1.-Creating-the-Scene)\n", "2. [Displaying the Scene](#2.-Displaying-the-Scene)\n", "3. [Adjusting Transfer Functions](#3.-Adjusting-Transfer-Functions)\n", "4. [Saving an Image](#4.-Saving-an-Image)\n", "5. [Adding Annotations](#5.-Adding-Annotations)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Creating the Scene \n", "\n", "To begin, we load up a dataset and use the `yt.create_scene` method to set up a basic Scene. We store the Scene in a variable called `sc` and render the default `('gas', 'density')` field." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "import yt\n", "from yt.visualization.volume_rendering.transfer_function_helper import (\n", " TransferFunctionHelper,\n", ")\n", "\n", "ds = yt.load(\"IsolatedGalaxy/galaxy0030/galaxy0030\")\n", "sc = yt.create_scene(ds)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that to render a different field, we would use pass the field name to `yt.create_scene` using the `field` argument. 
\n", "\n", "Now we can look at some information about the Scene we just created using the python print keyword:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(sc)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This prints out information about the Sources, Camera, and Lens associated with this Scene. Each of these can also be printed individually. For example, to print only the information about the first (and currently, only) Source, we can do:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(sc.get_source())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. Displaying the Scene \n", "\n", "We can see that the `yt.create_source` method has created a `VolumeSource` with default values for the center, bounds, and transfer function. Now, let's see what this Scene looks like. In the notebook, we can do this by calling `sc.show()`. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sc.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That looks okay, but it's a little too zoomed-out. To fix this, let's modify the Camera associated with our Scene. This next bit of code will zoom in the camera (i.e. decrease the width of the view) by a factor of 3." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sc.camera.zoom(3.0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now when we print the Scene, we see that the Camera width has decreased by a factor of 3:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(sc)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To see what this looks like, we re-render the image and display the scene again. Note that we don't actually have to call `sc.show()` here - we can just have Ipython evaluate the Scene and that will display it automatically." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sc.render()\n", "sc" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That's better! The image looks a little washed-out though, so we use the `sigma_clip` argument to `sc.show()` to improve the contrast:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sc.show(sigma_clip=4.0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Applying different values of `sigma_clip` with `sc.show()` is a relatively fast process because `sc.show()` will pull the most recently rendered image and apply the contrast adjustment without rendering the scene again. While this is useful for quickly testing the affect of different values of `sigma_clip`, it can lead to confusion if we don't remember to render after making changes to the camera. 
For example, if we zoom in again and simply call `sc.show()`, then we get the same image as before:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sc.camera.zoom(3.0)\n", "sc.show(sigma_clip=4.0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For the change to the camera to take effect, we have to explicitly render again: " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sc.render()\n", "sc.show(sigma_clip=4.0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a general rule, any changes to the scene itself, such as adjusting the camera or changing transfer functions, require rendering again. Before moving on, let's undo the last zoom:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sc.camera.zoom(1.0 / 3.0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. Adjusting Transfer Functions\n", "\n", "Next, we demonstrate how to change the mapping between the field values and the colors in the image. We use the TransferFunctionHelper to create a new transfer function using the `gist_rainbow` colormap, and then re-create the image as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Set up a custom transfer function using the TransferFunctionHelper.\n", "# We use 10 Gaussians evenly spaced logarithmically between the min and max\n", "# field values.\n", "tfh = TransferFunctionHelper(ds)\n", "tfh.set_field(\"density\")\n", "tfh.set_log(True)\n", "tfh.set_bounds()\n", "tfh.build_transfer_function()\n", "tfh.tf.add_layers(10, colormap=\"gist_rainbow\")\n", "\n", "# Grab the first render source and set it to use the new transfer function\n", "render_source = sc.get_source()\n", "render_source.transfer_function = tfh.tf\n", "\n", "sc.render()\n", "sc.show(sigma_clip=4.0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, let's try using a different lens type. We can give a sense of depth to the image by using the perspective lens. To do so, we create a new Camera below. We also demonstrate how to switch the camera to a new position and orientation." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cam = sc.add_camera(ds, lens_type=\"perspective\")\n", "\n", "# Standing at (x=0.05, y=0.5, z=0.5), we look at the area of x>0.05 (with some open angle\n", "# specified by camera width) along the positive x direction.\n", "cam.position = ds.arr([0.05, 0.5, 0.5], \"code_length\")\n", "\n", "normal_vector = [1.0, 0.0, 0.0]\n", "north_vector = [0.0, 0.0, 1.0]\n", "cam.switch_orientation(normal_vector=normal_vector, north_vector=north_vector)\n", "\n", "# The width determines the opening angle\n", "cam.set_width(ds.domain_width * 0.5)\n", "\n", "print(sc.camera)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The resulting image looks like this:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sc.render()\n", "sc.show(sigma_clip=4.0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. 
Saving an Image\n", "\n", "To save a volume rendering to an image file at any point, we can use `sc.save` as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sc.save(\"volume_render.png\", render=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Including the keyword argument `render=False` indicates that the most recently rendered image will be saved (otherwise, `sc.save()` will trigger a call to `sc.render()`). This behavior differs from `sc.show()`, which always uses the most recently rendered image. \n", "\n", "An additional caveat is that if we used `sigma_clip` in our call to `sc.show()`, then we must **also** pass it to `sc.save()` as sigma clipping is applied on top of a rendered image array. In that case, we would do the following: " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sc.save(\"volume_render_clip4.png\", sigma_clip=4.0, render=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5. Adding Annotations\n", "\n", "Finally, the next cell restores the lens and the transfer function to the defaults, moves the camera, and adds an opaque source that shows the axes of the simulation coordinate system." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# set the lens type back to plane-parallel\n", "sc.camera.set_lens(\"plane-parallel\")\n", "\n", "# move the camera to the left edge of the domain\n", "sc.camera.set_position(ds.domain_left_edge)\n", "sc.camera.switch_orientation()\n", "\n", "# add an opaque source to the scene\n", "sc.annotate_axes()\n", "\n", "sc.render()\n", "sc.show(sigma_clip=4.0)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" } }, "nbformat": 4, "nbformat_minor": 4 } ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2551513 yt-4.4.0/doc/source/visualizing/_images/0000755000175100001770000000000014714401715017601 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/visualizing/_images/all_colormaps.png0000644000175100001770000062610714714401662023153 0ustar00runnerdockerPNG  IHDRX =:EbKGD pHYsaa?itIME  5G[ IDATxypǹs$: !t}1, fd+^Ʃ87T/UMK᪐ʽs"c! c_K$esFg z9L[IB!~CaB!"BE!BB!"BE!BB!"BE!BB!"BE!BB!!BE!BB!!BE!BB!!BE!BB!!BE!r0 TEEE~6GɷE{e\d WOgd`0Ưbmg:Qߏ D:aC'2ҝ:eKnx{au-~4 ss\tΙ󏜸x(@dGfT`6X,BVo~ç26 `#ݟ+B@OM-ɓ'qܸq>,LBmm-t]ѣvZ$$I?z(/ߺu+_-#w|I̘1#$ݻЀ8r;!wRRR o"55VbK0d!t"55û6Mss3 GNN"##q5LwS999fxP^^b`9N|gXb!gx ,E4LdXlOϓ^ii)x,_KJJf3bcc͝;.F>3H)1sc*&&IB"澔x"~a[hhh@bb",Yv*? TTT`ٲeFxEEN(6 ,B!dhrM8N{,Ν;3gBQ̞=;jnnƥKz*6m6k,i`B!ߴ)SpB@jj*._O> Yfaɒ%aSsʼn'_flyRRRPXX0ƣE! 
N( RRRB˗/ߕHX=/LTTT ++ ϟCҩ/߄}BȐGUU) %s?7n`Μ9li .@ɰt٘^kiiACC,a@-Zߐ͆//^ĺuzraZ\.bӧOǾ}w^L8!"n%66b$4zMFx2 ܚzB@fϞԩS?Grr2~`nnnl6#s΅|>qD?4f3f̘saܹaillĖ-[ )a2_O駟ҥK h`% p5'Î=l{aܹ=8xG#/YV~~>靖- ~,ޟ8E!4!B ,B!XB!4!BH7_B!#"Bgl,;p&H!(^_@PpH/ '^(RSVHYJ@҃nH￳+=+^?L (NPiW%`ȓe!n8E]Ix.PD/&K& 䇍^B(/;V¤U/$4 R@&4u@.vnM77Aޝ^=.wwZ_*˥ݺl GF*\Z72p\|om۶ :|M߿Dh`BZ㡇B]]nܸ !ao"d p:p~ӟnK,1]CRJIII<{ >}͈ԩSf@WWJJJ_@4;+WDjj*ѣw]9r?ڵ G}!_l8XÀ!BաRvOCg2 .`ԨQHNNƬY~joo֭[a/"22/_{={,JJJb dggpҥK;v ""?яO>_ꫯbl _Cgg'v؁'N 77VBSSF\H)Gk  gCJ-eň%и>Z^^Yf&MÁZdee?FTT~i(j=~8~a}:ˑW"330y&nܸVvCC^pۍ<110 667od``qp)~%pkHLp9H)e˖pUUzjL=?z}gΜASSqĉ1~xl߾555hmmťKpaYD|7hmmEGG?8EHe7X" )rL0!qӦMéSގ^xěo !RSS3g4MӧqDGGcڴi{ݻqM"33}sѢEصk?vs~ fCu([p*ǭrUV9d)BB!X݇RSTvO )B_kVBh`yՋԕ~/1욕X|#Bh`B!fiޥOv!$-(A`z^_r]*bB!ҏ k^QQQ7!! ;{-ͽW9w#E;q7z/Wލok}/`ߒ7|*Xd 2!>=vkvڅ b ==+V1cz .ŋɓx&ܺu+ǁg}SL śo7"**8=z4^y啀6޼y3VZ9s~ߣ-O<z[o ?OBfՑπӑs ,2H) 'Fvv6 nGii)mۆ 6'++ Jmm-P[[kXn~-ΝۯrKK *++ c|7o^@x@mmm|2z!;w.:4 عs'X|9;}`]2 ﱾ*bbbT,^1f˗r8hhhŋQ[[k|24M ,X#G@Ӵ^EDD 666́R^^ɓ';.\ccʔ)8q"jjjyh`fNh{w}*++aZc,8.]dL:|a"11_7gB`…ugΜky5kRRR`ZիtT9pBȐŋشit"..INNF||h4- N+>>8.//nj33fhiiARRRH9N>}`ԩ<4!c"??7oFYYrssO,u?-W\Aff& 33eeeȀlDoEQݻwcaDGGjDUUt]ٳgp)%ˑx Dyy]}Oh`B oٌdᅬjXӧOǩSp{y׫ӧJ!,Y̜9K5g;!# ~PM*nnɓ{͗|Y%$$ ..eee}jmmECCCs:a˗etsgzpڴi5jFms͛հS1;},%3B}{oTWWc˖-<`ڵƔ_ol6TVV|Ȟ>XB[.l;l6l6|W!29#G͟?իW㏇䉊„ p9dggEQ,X'Oă> EEE:˭rUN`ҝ[pnírSoׄpBo roc"B!4!Bh`B!"B! 2 B!`B!3{BW`NNf89}%x;|T/}yoNX |e~DJP×t4s=(wDzۗpK;ƹw.k H(0I^gIq~:֗>Bnyoru;w>OL.Ϲ!O6Cp?/\"8O-#Gp:t+~g&A`?@7Pxz tOTO C]cO~"> G +i! !vڅ b ==+V1cxfE}YL2%$EEߏ7kmmo"~cܸql B wA4"Cl@J ݎRl۶ 6lrx1jԨh6-8EH@P}:*VAll,RSSxb_FbAlllS>. ,'CjATZY"N>FH%C/bӦMӉ8ޓ򗿄lk ,Bbh"}sTfC^^ٳgQ\\#!!_zg¾A N>")U==|ٌ$XV;p\(++eH\.@TTT@x||ܸAu!"P=!!3Г2J{Gݰk.TTTXbƌɳ>)S䯨qƐ8|x7Bqƅѣ8zQt\\|rX,#{~Gx#"",,\0`SjBB8El@J ݎRl۶ 6lߗ!cԨQѽ=z4^x躎F޽O?t\̛7/ oddd].]>Bl6v! ! zTUELL bccŋ2 %, bccT111Ä 0}tԄm6Öɓ'^@zz:v )BB8TVVjrdi hiiAuu5TU7 .D[["$2x"6mp:Caa=)/=^{&JJJB[xgpQݻ7n@||B7HHHfC~~M-+**Kn͞3337{fdxo!Bh`B!"BE!BȝB!d,B!~f*zs?pIS 'Vio'NuAubWT@^)|yDOq$ tEn@: .NhyTP|nYF^Wet4zCڠhP6AG*t qW5LTUHKI6}qCQ$L&*z*Bx\U}+ޕ@}zmB#^AJ"&.&ݝ _BupBL8. a6oތ28N;O= nk͚5kUU?y[?%pe#!HE!ۍ4|gFdddt/_F^^&MhkkCGG?f ɬXr FG lZ W$4MnnGcc#˅ɓ'c֬YQWW|طoɨDcc#{0͆,ddd`ǎAKK KTWW"MOYþsWtmزe "")))Xv-233֭áCc8f#ZO<ك?Oǣ>w\.cBE!QPP^&55?O¦MpnXO?g̙3' lʔ)"a!m*V){d+2 ,NRO6-uu ?*!GE!2Я%EEE%BG8E! 2 o} SG @{e9„t煟 9@x}E:c/aJ/z >,(~00ҷjNJO0#0oP=҄ze~<"LT_c/\&ld[t ݀p; 4' 9]PN(K BsA敡kA5n AP5@U t ?u@>s 4@hXRBxst'ƱaïZ)=]NJ(2Dv7KuЮ˯;Aaẍ 6Ӈ]M%zWw!փ3$hQ]=WZ[[o_Fjj*aśo7"**jPk.tuugEH/%{}IC8X,HOONJ+0f̘>3H2BH) 6oތ2M)BӉ`/籱=x饗*};#|>(, .]l6c޼y4i.^hc+WĂ –t:=}Ͼ,Z'Gcܹ>cǎSO=]ULL̰֊֠!n;lxtt4GHTCCC@4455ji29- B}}=Xrv…tw|;8tʰ`cH5jL&x:0&&pU!d`̙0LHKKg}hDFFoWu\r#nN!dHiv;v;o>\.L<9lql6СChnne˖Gŵk؈ .^FhV+t]G}TVV, Q]]ףFcJ/͛7t:Ea@ss3\3gΠOz ӧO(fBtt4oߎ:믿ƾ}x衇p TUU?a_95`pM!]Pљ|w e@DDRRRvڀiAł'|Ĺs0a`Ϟ=FI&ǎɓ'*RRR0o޼o^֩JMMʕ+qI>|X|9viu|lc+>>9998tvm,Ӑ8q---BZZ,Y'jFQ` f3֭[Caǎp8f3F-Zݎ]vAsbԩ͞U0*G=oír׉n[ rB¸U!d)BB!XNNKa'? 9Bh`J2 BB!d1`4X4T7rj{CH*hg 3[p?B!!Be+HРC ] 7:F't't'4I#:9=4Mq@x8!n )4 ]hB: T* BBB XS~zIVR^B\!( ED(" E!`*"" ((" " śτh@&X"&X`B4 <P&J3TJ8J'8aPe'T '"P*`*4pCHvAH7n'PA?\.CV0N oV!  
v b*0A**tS#+^LЕhj45͗V1EAS-s{Z2v­vBS;VYq@S<[鄦tzӿB.h.H$.н7g ) zT!ts[pTӼR: u hR=CjnH]4cQ]jf( HnG@PH2GNJ{OL~ " HI@&"(P *aR$TUIuävBU;`2uBUP'MN.q bvyPLnUP5ϱIbvCPn@n8(BB4xnu) HB*4C&Hx{BDt ДHDBfM'<&"-pk4 4-Rt l㦻z7/T>4!k׮}- ӱb 3WFqq1?5d2E /fi`B!l@J ݎRl۶ 6l0ܫ}ddd@UUbƌFnn"11_d2al<XBDUUbccxblݺor`ƍFXUUoߎ"}p1466"..gҥK()GDD`رVmm-222 Dmm-̙c7&Q[YY3g눈fêU)%~cҥwcwO/~ $$$ %%%/iƎ+WS ؿ?!@rr20vXv X bH&l|@ee%VkX y?;wĚ5kfٳB,[,l,q^[[,b`͛7H:rss7ovs=!f̘O?4O;G'|xWaX{!-- B"d 5O[/^ĦMN'PXXxW2;%K`D=X6 'NnGll,jkkC4|'f!++7w\8)) VŸg8NDDD`֬Y8}4ڐ)%.\`QWWz4{ UUU1|Frr2j"Bzf!//RJtuuٳ(..ۥ/_Ǎ0]סi\.ЕtǏEQP[[1cv#-- lƸq|8z(^N{65 HII~ŋ7oĴiW^믿FKK .vލJL0ӦMESlXCil6#))8͛QVV 3EiZDNNNHOO_ddd@UU1~x|wY&ӉbL4 O=ֆf͚eX~)&MbȈúuB>揊<#9s&K|8z(~iL2!GDWn;lxtt4GHTCCC@4455(fÅ 0 Z|Kuutvvb刏|!rgΜR?G~~~vBC3d$''c…rXWr'2$4 vvطo\.&O6q`6q!477kiePYYGڵkhllą PZZګ.YYYhjjBMMM*}UUq*`ZGbb"ƏRʀk8q"Ə۷t>z\.|51j(v#X!xPŪƖ-[xKHIIڵk}M~ӂO>$#|֭d755aݨGRRVZCBw*ÁJXVDGGty=LaΝXf 233܌={@e˖I]QVVf}-ĻヒX'?AWWߦ9 aEXŋشit"..w,رcXd fϞ HLLDNN6oތ2̛7/ 3զiZDNNN,Z(.111ZZx'o__ozm2K\UKM l5ڣ3z#==iii8~8V^'٣FB{{;v17aV-!Tihllľ}r0y䐴ƍlơC܌˖-Cee%=k׮.\@iimpBƍ}=aXVܹ r#XB=زe "")))Xv-2330cXO8w&LٳH3i$رc8y$TUEJJJtcf);;III8~8R>,}Fbb"V^͏܇FQQр1 ͞Ms37{f왐!pUK!"2(gڄ"B' GEEz1.@X c3_S|B,B!XB!"߯RJs 0eJs>>,?=ACyXw^vCx]ʐza[Ȓ޿^ey:;?$[ѩ[|zOKCӣ]dP~姫*ӣwy>Y[non[`֙} yA 4c}Q߱`A֗(;(ot`ʄ#XTSWr(d;BF!vՍbAzz:VX1cxlbGh`HHũ+l@J ݎRl۶ 6l4dX !{rkFЊ5"鯢P<%]{8TVVj"::N3e[^WW;wb͚5Dss3!-[> }y5 vW^ei`NJ*ŋi&D\\ Xޱcǰd̞=o&MlL<٘>$dDX"zrwsŜ"d f3|l޼eee7o^viN999:IB IDATujزꫯ⫯BMM >:u ֭EFE!ddvC¢p8r`6 d4455j4>իWƊ'#SC@OcE zu@4v@gg'>c\.L<9$q`6q! ;YT >'!B#6ua |w"9!N__Yq40E8ϱ҆ .$M ( @"<ǾpEs{+HA28bȁP7ܗ[S 9{#u7dk>'|o^_x J/\2d KDx4׃/0ե4\a⻝֩[ #XC.IW2t,{`uh^_tl//p=P/?8].Aiz[.%WK8zK;3u1&ޝx4L8EH(WŚe_&"Bpgҏp]ۮ.e_&b׮]{ Z,cŊ3fLjkkoB)%L&pB̟?? nLJ~/툊j̙31gchBh`~NnCH!64OK1{%;;Rn۶mÆ ¦4 ꫯ"""n_| hii_X,,_GvvSiBh`{}57P1np^>-UUŋuVtttt7O?goE^^ш [H"h+a!n"Bzڵkq8)) 7p. 2Wjkk~455AJ p8ل$m%?d`3f ¶m׿/{㭷W_};HMMťK_bΝ]OH\;XB>͒ЊKXWAdemnv˸r Μ9W^yo8}4^|E<CaѢEu6l)kע ÇG}׋ &`֬Y\ħ❵_2lxqّ1dkq,vp k%q!vRJ\tI?i܄ p9|W4r]`! k,du]W&T0mw5^. ڊ}vرcq9޽G[ooHIIŋq!~H`uyOd+¾}++Ez1X`ِ bĈcxr& .'ݵk4MCZZ PYY"%}{˫"kZƵb-B}]X5!!!dPsH~ya1dulÊ.^w'VLݿE8x9Ke*3:X/B,B!ʀ}E_$ƌ.0!>؂E! X V4 !@FЅnҖ=|][DҥDJ GGM^Edk/eЗB$>e 1|2.<<8!K|Q?'= Mz@|I^.{AMv i )$Lؤ32R!qǕ>HO`6SOݡuJL#M bԣISOiD"KiLQ['L؂E7=R%"B,c8_GF?k׮Ł㤤$p 뮻0dȐhhhU >|8fΜLf2Ùeq3 L``̘1. ۶m_W3'fùs Wc=!d3˘{>o@1iX1XцHNNFJJ pW\'P]]6]ӨŋMzv;RRR0b?@SSΟ?/OeP]]o< z88x v;a[kX|8^GvͼvZ-GHMMŢEZߥKG!-- YYYtR^B`E,[ĢWEx`d 0gH)ֆ?*.]ګgij> )%<pB &t!VܹsO!n?;>$''fz2`DzL &Mk9+Vxnp ==;v@yy9Ν;ݻwщjŰa"##/_ƶmۘWBcǎaŊXb? .Ĉ#*ϟӉ_|vByyywĮ[^h^z 1c f<yaŞ2{sgs^bcgg_L,ll{bτfťɴB/I⧲Q:XB!1}E_\/<2h !|f2B!!BHt3`]_!^ ڔ0Y_/B~ >8\1!Yҋ(P  EHtnР@-(Wͺ@!;uAtu7ۦ7=}TxcUhPh+:ϫ0]W@<)6 ͿI@h< @  HH! ?(-8W5߇ƽb+J9Mn/:uy Ap@(0+ߦe`gooA>>:e[R&<`pSZL`pHz?raΝCKK RRR0daԨQx܌c„ /PUUεPZZʂ&t! ~.^+W")) 3gDnn.4Mñcǰa<B ==09X~-\.WBnƺu됖ѣG3 ,B b8r`Z֯_!,Yժ-ܢO8wFKK ĉq6 )))Sb׮]8~`8p6m~#GKK ƌy/;ֆor<޽{t:aPPPJ$''j*,Z[nŹs琗eEp;kR#X%H?e(sǏɓMUDwrr2Fn7:dr;Fkk+T<х޽{`e}EH1>lT.Br}24r{ PXX}v[pI;4F+1~xlٲu:hmmEEEl\В~~ڊs+"BٳgCJÇq9455aϞ=Xre|NN~_{U:eee8z(t{… /sΰ|q];v, |1Xakoc]'vmt4 'ܹ7onǐ!CpwtrrrPXX۷*[QUU[b޽:t(n^\B6m瑗?+FrR9\*'֗Qsp8Z*K\+ xW]$`!!2 lE⼝3>l.̤EH3EzIp^[P%qm02rHwcF b؂Eվ}2 H<{V|B`B! @}EH!2Xa !BH3`gx+@H Po `^9 ſ7Y0]VӠJoN}=/"5:hOnb{`$Hŷ o$v%p~&TN8Bפˢԕ>XidH[鑠hҗP57o28a&paٹdL8F]$4PBhkT ^۫Hy׿U8YRB iiWmHWmtyC!=9:k E}g+~yy:0RYW@.ke >5D?]9MJK^ G :pC:o-]^hZ13 h2Mgu/+;H$ _V*&B;AR EmS`aA]:3 ,B!Ё III2dQRRI9r1N> ׋L>Gf:X&v X$3f i\.;7ø(4Gɓ'QXX $&&bXz5,Y<V+Nup]fc:ЛbR˻uhUU HMMСC1l0+8pJKK{n߿.\@RRp]wf8M6a޼yزe Ν;!i:u NS>c |ꫯ`hvⴥaΎ}cu,=e^Z\Ţ #GBs= {ٳ8nvCvi P__7x3gDiiiKHJJA"Ad"d֐x ;;/^aȑ@AA_5Mٳ}YYYZzؑ#Gcܹ+صk:::0~xi LvFmpPߺ8Ǐ?D{{;4M)UU1dȐ=~-= bرϰsN!_D.b+N"Rct:/b՘4if̘$|Xn^`Y,{vǘ1cjoXp! 
A.a!!gΜq)%n 6 YYYhii.ݎŋx7iZsn:̟?cƌaXp{AEc`(PO%Xy^\.Ν;믣7|34MÞ={pvVXEQ5S5W wƱ\ -XQGد~\vKH`=d`_K1z=;ݵkB@J_B״kjjm6u]2e w$,:W$[hw\Ovƌ*cߎ"`EnEbu-XaqHNN܌ 6ĉB`ј5kRRROƦM!0gGǃ;7|}{xr!;Xz5#@4{x뭷~m :s΅O]ݿ?( 駟"t! =˗q;P__g'DZZ`޼yx؈|477cԩ+qa?#/2f͚!t!>3gi VRojӉtݹ$&& w}ĨQ0nܸ.?C !==_|nE!$Z̼ӧOGqq1PWW;v`;vlXٳRbt,Bxdgg---z+ٳgֆ].++ YYY(++[ouΜ9F<#HLLϷbժUp:fƓxp ihw\@|izrLEnGaa!rssfTVVbÆ (((@~~>n7}7hiiAcc#ƍ6O?7pO?3g住ăʼnFWQKLcz"cǎaŊsYYYx?6n܈_~B3f+Ww˗aqM7aa?~{X;ƍݻw%}ɠw\د~lݱr=׻ UUU]ʤ *ϟߣTUůSNԩSy#]2\oI׋|3[nrO,B!:XB!!B,B!GTWWs"!BH,B!>fwsЀ'NV44͂6*P#a+6^ Y{װ@"$fS!(6@Xsǰ]W \%g A:ؽ4mSų s.ZW'mVC-l`iv@dŬD }[$ۀd+`KB3l^s6HP+F]^/^@ U! pv÷8 c津{ovƵ)|aA_Ha\M: %Юmn\ptz5_v I H׿$R*X ҍ{?PׁMwU2pteНQtc;h;vj7{cJJu5ʄ!#^sDtvZ8pBH+! K=RRR0zh̜9III,BBH3fTUUX,BRܹsXn6mڄy@,-^녪HNN܌ 6ĉB`ј5kRRROƦM!0gGi())CX!EHG#*իGy[o~ocС;w.8}4TUq---ꫯ0l0VBau}Q&.9*,Gbswqٳx'7o^x466"??͘:u*4lقm۶A4x< 6 3gd t^ 0g`sӉtݹ$&& w}ĨQ0nܸn)S r֭x裏Bb:X/"qX寷#jZyObա;v1v؈qv9TVVON8QF"pQ‡U_.Ae7uvv6Ң;{,ڐBYY~`رؿyV+J@-XBb en%Pv"77k֬Aee%^/6l؀vǸq㐑466bܸq])%e$''{ ";`pPzݳ|t1Xt.++ ?8ƍ/C1c`֬Y#v;|2v;n&L>4oߎ۷up O~‰F ,‡l,y"aUUIOO}6LUU̟?Wi>,hrp !B,oQ ݬ׃ EHz"GB`·XzM!t!BËZuu5[z !B`B!16Vuuuaa/8ItLOlWעZl|ʸb8VFN3\t(tAJ8[ >6ǻۻnm|E@_D@*S @@a_?\P z(-yEpakУLdP{5pJcfm &ۻ׀ސ| O=ʄ-X$_3 ,7B؂Eɻ8k^Cx*7˅աp8(..FII V+{9477V+233QVVR]OCCVZ!,~+`ǎرcGLvv4/'Ob0a.gٳ3"Ȅ<npV\$TTT 77ٳEZZ @yy9JKKvqa[iii=zəz'`L餤sssxb(IjŶm0n8S!t_Dm1x?/CUU,]VU?"fSb׮]8~|7'&&FLSQ$''wiׄ W_&Mtf!hʕ+ɓMUΧ8|0Z[[jؖiӦ>fa`BjΟ?)%Lyx<ɓQQQزe mM`Mc׳>k:g3g|rĉ1gӹn {ݻ1m4E!$vYd Xfh)SPRR˅͛7cҤIp8B<裦1X\Xhi VBBBbwމ7nc:XB!N|ff&tv88,X\FFFcTU莉'⣏>Ν;B#EH!i*5jqN?~EkF\3kl-°p-°l-B-\h]gdVN1x?B`B!"BE!B`B!\?+BB! [!Bh4ɰX-B'RV 2Jy%(Nc :m U*-(O4oȷ.2W< JǶW0ߕ+V[EH"}U}8ۇ;V qT)j>=֩G1X4 K׌zT)H{/&QNDJ1}\m>Mdtp>.$l[A ;M H[ҶNE@'O Ly[!:ٗ!l"B˅աp8(..FII V+{9477V+233QVVR]OCCVZ!,~+`ǎرcGLvv6q/'Ob0a.gٳO>dk1B -- cǎEyy9l6.^˖-C^^~_l_|cǎVB^^*++M67oVBp}D… Xr%PQQ\gϢiii(**(--Çn:a&g'09*ŋMqQi!V+mۆqut8^/]sс>Hwz۱{n,ZLtǞdI.X~=TUҥKaZ󙙙(**2l6Q:u*vڅǏ,HLL(HNNҮ &૯Bmmm Nw񨯯W_};XF.ɓ{nL4[`Æ CX`Ab!!zrc&Fkk+TU0m4|pפbFp8ԥi֬Y#GGs5. ۅ!DVV3<ײSQQزe mM`Mc׳>k:g3g|rĉvكݻwcڴiWuQPPХ3fիaekkk!eːJD >xYu%K@J5kL2%%%p\ؼy3&Ma.-!裏`reggcѢEł;7nmf knn /q;Li4 7x#n{>|8mۆaeFӧOc۶m?~LJ:Xm$hE,Nt>zmhp8p8`D~~>rrrLr]RU5b Q0'NG};w"##C?e˖III!Ƭʕ+1eʔṹ9s&7Ă d]gۄBݎQFa߾}甞c˖-2 f̘?/^|*9p222z\*pM7uy]yyyx衇pI4<>V(KJٳgC4K8tt:qA8.[gpQ466v-)qe\.fdiZH劘΍7ވaÆO> q ;wK'kot9-"$O,"d]f-[lݺ---X,)S"!''ؾ};x_L΋?O/b bSO=1 ;U㬬,r-O3d,^]__UCyUWW}kr-BEȵ[7!!!Q GQ[R"$"d<"9XRAB,BH6!B,B!{AAB!`B!16Vi_O%B?6RqÇGN+qo-#zk{kא KϋNއl /ǡ/L+7B8Y/KU JPzME !!(BoTL,k<Ӂ0cHWu!驝3g8Hhb#"خ^|"z"+D i l"$JhwV,BΥMJ7NBMM ҂D8V=DYYJKKu= XjrsscZ~iTVV"OZz'Յ rJ$%%PUgϞEmm-PTT!QZZ ۍÇcݺuHKKѣCt|8,X7)CPQP^^{ҥKlBBH|`1j(۷nWq1~xlٲ%㑓;v0 ,B/I8a4 /:&8N>9r$~߅'?ae$}ɠk $d'!B,B!BB!!BBTWWs$!BH,B!>fpD [wa6 6a6@QX5Tw~{|Ty;pcF`}tK;WbT7xf T [AL?"b7Ǔ~}qߩf8giޔva ]Ww.yj1#$M%l>ueN۬D(rT_BCIx[]d s^" 2±6bXaT7 NYbSl:K6XiT:PT{> [!D ]! ,B!7\.Ԡ---HLL@qq1JJJ`Zsϡ`Z2zj*3-O%%%{`$$$ ++ ӦMرcڵk!0{hXSO֮]@!0dzD7P7nf̘Ţ۱i& cժUCee|w>l޼ou ;::Ocxu'NW^/~ dffGYYL:w؁/˖-?/7|3`BbcgZ2A`ɐ\p+WDRR*** UUqY"-- EEEBp8|0֭[4=:Dt"1o<1x ,[ Lbb"x 43UUU4 . ǎƍqa].bmC4>}k׮fÝwٯ1rHn466bذa'O"%%Nѝddd 33.] M| x <l "zaCUU,]pDQQIf!%%0uTڵ Ǐo$: IDATq&O۷ʞ %%سgN8ar 99K=2:t( W^y0ueaر<jhhرcq |9r~>vwRR~.111f,`EkS}T8Ӎ{•+WP__ɓ'BJÇ5āB a޽Wmi՝yyy8rH9N8qBwx#GA?8RnNBAAAk0,B]*<ϟYYY< <_kTEE`˖-ضm<4MndZ[nŭފ۴f!v!Dff&ƏoikkMFx[8sL4 EEEۯK`ӦM4 nOȑ#zuo i ׋:X|dIC?a%K@J5kL2%%%p\ؼy3&MVGii)vލ?3f0_}~ooJ… Yf]Yl2SLO[औx!i&ۘ?~ɓ'ksEqqiVkk+`1rHxz9sƍE+ollԻdV)|!d #PWWSNz;,H2!׎Sk}nfϞ MK/СChjjt:Π GEcccg^ {Op%yzp\hiiw};w_GQQnnmkk¥KЀ>YYYeS8&Ip9lܸgΜG}/"b#G믿ƙ3gBZh֣oxvFݖ]$Ex=J]q8XljjjuVb ''SLI"Aaa!o߮0ワL:fdffbΝ={6+V09 BWһَ;+V@Q$&&"//sOBwՏSRR0rHNۍ6sdffG֭[^Xp! 
٦#FC7DrQ]]= lEȵ@E="T"ٵEU]Zj}aK"$$va,*vkea4 Q?$.B2K:XC HZ=z%"BWفN!2Xa !BH3``'tN&C5C=.ٍ#ۮ t5i poдAW$[i} ʄ-XVB[!BMM ҂D8VD` $m$㴕D… Xr%PQQ\gϢiii(** z&D!bh>2K$j}Y~=TUҥKM-U&ǪgƱcP__Sb8s }|װZ(,,Dee%vϧ~!jkkriӦaܸqZ =lق&{Evv6شi!@VV̙|V:X1,c);W Hgo<%?cɑUʕ+GEEE?TTT _p뭷b֬Ypxob|;w.N<~1b{۶ma۱~z[>(СC1w\!pi"qt#@XXcCJ,gy0ydTTT>&+Ν;1tP?{o\qlw (b'fX7T%)}oݨororխ\&k` X !#l@##,K-GZ93ClU>tOOwk;3[n^ٹs'ӧO{jkk9y!EaÆ C=믿bηm#Ry"& x,)Oa“O>;|gvTUtꫯzJBB_\\;44."""XbyyyBjj,XYT!Bk"::EQhnn6w8̝;7'¸r ?я IZ, 9N Z뉍2I!{~!IdP*DGGOS\\Çbʕ+Ydɰaaa?k\."##={a~zBBB8vv@XjuL&Epp0gڵT&#HNN=ZZM"G`^Z^! "2D8Q:gMdy^D`x"R(fA%LAy`&K.,AA=߿K eB6*aR#=X  {փ}1'JۙѸs+ǺUwofuFF6#s+.h>rqFnFs#=ii7owMFͶ~oiڨzu#weAzAFxFEa` h줸J::: $::4GF a3vԩS1\zSNܹs}⩪,"n EQ<.!h{ ^8lGOUTT!;;*.]ķm֮]˕+W8x XVf͚ŦMB~_{{ȶmhllzE!&&-[(#eB>nxrSmO/BfҥK,]tDÀ| 痿% .W_}Dz)vAWWo6W_}eEee% .w!""zzzHzƄ"C 4NLLۿrXt);Y=z֯_omݺ^xbbbHKKTˉ$99z!#R1X2D(H= 'DuyCh>vTWWvbbb뿸vaaa{+VGyy9)))&!BAQD0񈎎FQ=| seƍ>m0,,  !֮]KZZTVVRTT?μy󤂄/AGpp0)))|g,[l1$$$py"""08##R:::HII!<<#<&&/_ξ}(++% ˄z%R;1۞^ Bvv6׿/&)//iҥo>Z[[b2--N>mt:)((6jkk'66V*F" xDpDGGOS\\Çbʕ+Ydɰ;aaa?k\."##={}`` TVVzLL&ͥ`ϟڵkb!AKhh(7of sO=7 f\*@2D8,L'"WXse7Ln" (^z^6n3ߖ Lz%E2=p[1I  ʜyAAH  ,AA={_h`Lڇbq('(VG ڿY],:fM3ٌfM覡 dE1 d 06E(V E l +`CWMӭ l u6Ь[Q4 f݄Y7cLuVՄUSj&,Yf եaS5ͥbS]X].l.ͅYua~wͪp]=ƦՉrpshӝh~@S]h.'Bu:q:\N'. 4TUapX>FiYiQpzf,f̮^c9n˅PUEEsUu\Nѡ5M ׀k6kVBUblbaYpYV 6shh $@555olC]f BYQMVTنl@9Kf&+}f }& Błl%l!l!d"l&d"d"DPB&RtMm Q4@E#nE;.'mPtn8~ҫҧѧT݉;th֋a֜5f݉Uu:Su9U.U`M%XU UUB][KOչA_o.hqBZ\44 fT3.LFJFJJj%e!e#@3@g 6 fB̘43 &L =X8Fl'8KLF1Vɡ桭BD` e>555;q dcX8B'8KLFq7cIq16Ѩ(uAZTrtwwSPP˗%**UVfٳSb28{,qqqܹ򨯯'**M6G}ŋq\ƒʹihmm?ɔ)S$%%ň?Eʹs b,ZȰ4773uTVZ呇_|ŋrJï^zg}h3%_r`Kÿb+{w\.Z Fee%DGG3m4îŋH7$44'|^ =z/2?$44F￟Lf3ݻF:ǏgݺuZ/LLL {2k,a>/ ׭[ꫯXt?gc!zǏS]]MLL gϞEun݊b!66}#~zz:Goaڴi?[8ZHE@j.gΜɖ-[X,#,,9sPVVƴiꫯPUT%r@!aZF^ %%%8q͛73uTV+>/{ (-׿U~TWWb᭷9],Ann.6m̙3<7uNG"A=_5#--8hiiaNïݧ8̛7SB[[ۨ˕+Wp\fΜ9l6>sXpT;0=ȉ" xkV{LL /^믿{C4 GJJ ˗=l 7ޠΗ_~iΟ?Occ#JKKCQijj… |駾7m p!bbb|X?Q! Qx2D8NYT&o^^{5^y˜?LQؾ};.P IDATo{lذl6^uPRRbreeewٳIHHUm6?z*/Gxߝ TUޫ1|E( i}Qw3o~t~ӟa׮]~DDDoHv7 =O~c>~wf3 ,!kLhdpB !Bp\tuuQTTķ-BBBPD`M; @*B*>v/#!!{L D0G QxR4 %==t)qL  prrrGXA6"=X  { 4\'.&B\؂t4]Cu4]Guts\m}:pw5]G׆O_4?y9o:F7pMQ9^vk6y3oyΣV7n7HUwk{ LL: : `RtEp}\?67 ýqӀLKo۠ˀ4zqޕQϸ)Wunr Lp8Gʂ` F:A%  K6ԉ0Qؿ?oMM 999J! "$/'E ʵKd 5hW/T~+SKu2] tYpez{{bժUƟgEW$''k^|ÇS__OHHcÆ l6)`ar ,A#u"L&a:Ԧ\.Z Fee%DGG3m4"""mwvvꫯ @kk+?Æ xGꢠ>~RXO&y^'Fjnumopy?Mӌ\_t)UUU;wiӦ( ۻw/III]cǎl2ٴi{!;;E#I,dPgGQT-]dF̜9-[x\5Vqq1ΝڵkXVߏ{իW9{=)SHC&A&&V(wII 'N`L:Jaa!zOtO>ǻUE|rar ,"<ze cNյ۞_ͼyu]X/ѣ?k 455 2i!BuE ҵKޭŋijjޣr /DGGo}K W%C={j{u2]&8>_d~۷o6tӯ{Dv! R-=MG0雏  ,A'}RR' "AA&  Qڹ2Iv&(YJ.VHCە+¤BzAAD`  mS\ `5\Efpu (VtbSnX]`Z[6hg2lln~{ l0~&tń궹~kk~PK>GwsU@nqtӍpw,uGQPq (f{Շni 4?E_oq!;ExW>7•O1z 鐝>G:i gn⹯( U1 +*>zw=W8`|+Mk_.rW{!>>M68NNN۷og޼yR]` dGׯn\FRnؿ?>s]԰gvM``;~WWGNg͚5$%%(3gPXXݻo'loޭrUΟ?ɓ'~xX pEߝW}o&m6줺qj ])[pp}o}{oLU #t]znXFH/QTTDEEO?WZZJii)=w)?N[[,[%KMZvŌ3`ڴivǏ NPPseƍl6jjjCQrrrP5kְvQkϧ06lCX|9˗/7uwwopEbܹ+( P 裏gX,>}ŋ=ˇ~W_}$&&e,566RXXH}}=Ö-[HLLjkkQU(6nȜ9s hkkOr/X ¸kSONqffngϞlihhf7pwl66 Mby"## JSSm+XiWnqsSU`C{ep?i7˕eISS'O+L&۶m#???y3{"##Y~= ;;lrʤK.SO֭[y}lyذa'Noaưcpp;XqqqYh>3.]DJJ /_7hzᇩ/dѢEo#A:::HMMeԩDEEMk  =a ݎ3ge^:rssGᠵ<=^?>s̡:*++)))a֭"/^رc477ׇi`6 q5(`.m6~{ 44ԈwarX?EQF4zx0v֠`]v-iiiTVVRYYIQQ?8###ٳgsCfeetRXciWnqsS!;$ʌ!p/^ŋ9tN_= 0DArr2;vT" Yn|̞=E}}=X'n~m.\H\\6z>SC$DGGi̝;ZN:Ndd$K.jgK4a8(bcc}̙3g˖-L&>#VHDD_}sjeYfoƍJϟOll,$55H:::'55BfϞMLL ===;===с477944T֝|ڕ^1Ӎ G@nwώ ޑ<*cݺ X)..ѣrJjbZxI)--MgѢEZ x())̘1LÒXx148N^z%(}Y{<^~eBCCٰaW^Fb+<L^^]]]2c BCC1LtwwKWW̟?8oM(((̙ W_}WXPy⎶˜{IdY*g/[qxR9~{TЊ7TX^x;w2sL)AaTWWp8:u*׮]DEE "n;"c+vdpe <*NLTUvHJJWQ]1s&|;r,j2{lfϞ-1y+G@NIHX  ïAA&*҃% pg/? 
>ƾmosnD^ WnçO-=]|#w *s,*ۜLG׽'ozxH~yhv0᚛;y{lo ׏*❦Cᆫzao(1;\\ 1I1 f`>2c/0cƤni 1lM f3mr7S1c7QthnŢKSԁpUw)ꀽ[p{:;6mU3d;j3ONM+={ϦMF'''۷3 X` $ʥ}ˆؿ?l߾ÿ={{ncq;AWWGNg͚5$%%(3gPXXݻo'0]ܭrTU?N`` )))dff&KAn]7D4mFTTTWW=4nBAAAt]';NYv-qqqw^~_mRip;)**~+--{;uǏH-[ƒ%KKmm-v2fG`ڴivǏ NPPseƍl6jjjCQrrrP"vTWWƆ 8t˗/X_7x/FVVsΥW^yEQ(Z{!..ӧ1,^#߽||WJbb"YYYHaa!(BLL [l!116 EUUظq#s9@v#vcjXw) AAygez;rea7CE{^Fٳg)**";;xxl,Xfa٨`ڴi({߃y"## JSSӨʥ_҃u\qd{pǒwյN6C}i]pOӴ2^'OX&m۶NBB<vE_l6D@HHm+&.]SO=EBB[n_MOO`Æ 8qoٳgÎ>b%..5kg}ƥKHIIM?0|,Zvoc1HGGL:СC KEw,i0sLl0]WWGnnp8G~~P^/3gQYYIII [n%==/r1C4TUtzNZZZ0͆0r6=]׋jĻr }}}?v+VGyy9)))"kٲe8p*#xPUz EQUB/ ޣ^!ApZV~F#[,RRRHIIaSTTDzz:v{d6l@PP磪X7pƮ]|k׮%--J*++)**g޼yddd0{l.\`Ԭ,.]z]qoΝ;dՄX2Dx<{]Xյ"Wn'--햎5e***hhh@u/l6߶/ i F/VKK ===JgpxoKHHEQ;CLL ˗/g߾}C,^ŋs!N:5Wv;wޑ/*oK 0QرcgQUUan:9q---\r2?7n^yΞ=˕+W;wO? hFii)vrN:Ndd$K.ݍ4Mckjj+fΜI~~>| 8p:iꫯ2znĬYHJJ2NlkkÇS__餠#X nS__OMMO\[444citvvzRdp,bc[hu!Byk"Cw?Hll,sQRSSYrjRRRZyfc锖ڊih"VZ@||c=F^^/2lذWz|x#κu8tyyy4 #G?&//.BCC1cL&ͥ`ϟoiA@@stڵk\p4ĵ(ܹ.bϲT^K{R9h4nT>,#KR9ioo^`Ν̜9S DzAA-8Nʵk8x QQQƤ[!»1 NXdPÇ?~ï"X#_Ǚ0Ea3/fϞٳ "o'30\ڇ ,AA }"AAH  ,AA=_s~?KyMbϞ܁0 ڙ [Іow[[b㸖 x O`*fT@U\.tU?p+\Jc0us,-Lӈ~ 9^dq+n[qV:flU;TCf/WɳZsN8;n?᧢ o*n~pz M9 罯pp_!{ B\&T~+P0 Lm=L(|pc70c20rokx`>a3O}Esx+^i,矗m:tRI`-a5icrX߿^o_SSÞ={ؽ{7w]]]9rJ:;; ">>5k֐44Μ9Caa!wO?ѣ׿XӟYl٘/SNQ__OOO/:u,AƵ.UcÛoil۶(:;;{ix paΟ?OZZGعsPU|pL`ƌri"F E L&#FJQQQO?mRZZs=g:uǏFdd$˖-cɒ%~]vDDD0m4ǏSVVn'((sqFl6555( 999(w琐~|֙3g7oAAAP^^no޼ydffbǻヒWPP@ss3?O X\\ӧdʔ)Y[HD=N\n؂BL[iMz7gRTTDvv6444{aX`ffQQQi|"Ry"##_C\|;vvfl"""8pEQؼy IDATM'|—_~֭[wy;wx(T5~(ww]2~{2lL_dw@ xExMFNQQYYY̛7H8y_e2ضm|$$$<@\\a|rwdd$ׯdggc6wBBBYfFYYu"""HIIx֭餤]v… |2'O%kAdO rɹ)fΜɖ-[l "e,<j2"ʰ1evLgg_cc;440vKeʔ)TTTЀdee_|񅇽l;c̘1ӧObP__(<ÆٳgoX6uuu>eԩS1ʹpS  &Z)((رcRUUEUU!"֭[|@@@grQ__Ooo/+VI~ fO?x+::M(--eܹr)t"##q8\txVu{222GQ}Q*}sˆ3gRZZٳg6m477=yX>UUIJJZvuf4M#44PX!Bi2#br yzw>kn&dggS\\ѣGIMMeʕb'##JII jҸ;6ӧSZZJkk+΢EXjdeeQRRÇ1c%%%xbGOOiܙ?>L&C ƍ)..$''MgժU|GJFF> ---Mff&c $118O?oBvՓsOR9/Y*GʹR9f/1Tΐ,3XF,,  n"C#M\O_  ,a24{||A(sA%O0jAD`  {  DEzAAn3llm{8=~+-n[Wuǀn3}pM,>wWsausquo;?k ugk`)#ɏp[ih#=h3䇑| nsp }`+:f}?yeAt𿱽i;▆{A;c_n pՁ?$u 9`qWTm;ݎ뾹۪:8lH+I10]6ri:o[)$N2t:ѳ~z{{پ}MM {aޱwuuq*++$((x֬Y3uΜ9Caa!wO?ѣ׿Y`t?׳lٲ1Q/.?*cYf3‹A*o&m6줺1,XÇ9bΝCUa8^ڵkx I L4DDQQQO?mRZZs=g:uǏFdd$˖-cɒ%~]v1c """E9~8eeev;w.7nfQSSC^^(~" 溺G`9sy@II ͛Gff&6y\.~477Ŝ>}NLš5k?4yfk׮&k_Z[[|2;v0f3DDD`9py.O>/[Muu5;wPhoo/![Xw/m<3nH,^.36ԉp+\p4mży'OX&m۶NBB<v˗/7~GFF~z8@vv6f!!!߬Y :s  {uQXXxtRRR®]HLL`…\|'OH`9N>̃>j%V@Dp̜9-[xZՑ;4呟!Ԯ2|̙Cmm-uuuTVVRRR֭[IOOŋ;vf4 UUq:& p֮]딗aWUUEIITUl6|[ZZp:ٳ_UUP[olGUs+C(qA%V(ۀ.n#LO iXHII!%%իWOQQv˒%KذaAAA֒Yp!ǎM0ݻe˖I`` 5558p`X5ҲٱcO/pâtvvsa?E`Gd"=2D8ԉp ïJXXv%2e***hhh@u/l6hf̘ӧHII1GQ~aٳ7,: `ԩfG`Ν+CK zH$''űchmm>~ݺus ZZZr eee?~oݼ+={+W`9w~Wtt4QZZnSNyҥKtwwt:{^? hTU>nsC ̙3ٳlb >imm'N +TU7իzPSS'|Bcc#.0>`69s ޽[*L`k TQ/`ӦMX,/?"##񛚚xX|97ojŦu]q&&G?LA두n󤥥py"##}VUUGի(BRR6m"::ڰ࣏>ŋ\.ciӦŋiEEE1{lw+//EQAQ֬Y(;w_/a޼y[n;vSNɔ)SXz5D`g1,\1^n4j)鈶Lv: .XeeeSSSap8Xbq|M ?!466 ʵk׸|23f pEEE<3躎fO>zhhhի?{b|h._̻KHHc "Eq,t3Kw,݊&Mǜ> »I_iii:tvt]믿X޽=[n߹z*SNٳtwwSOH Gjj*/^dϞ=0}tRRRX`f##fc֬YdÚ.bvӍ|rIX"(g)p 溺 9sc‘#GoFuESr Qt#L&wY~=Q\\̱c/~Ahhq322'++ EQڊW_WUi" 2Dx&C dPCWt Pl6:QQQlݺ0t]??PUg_XX> > ׯ_ɓ]v8sl6SQQdBӴaߧr8G?",,mD` ޞbR"$={6b25kOxww7---|޻GWU䉿7ɓH(y-@I^ӭj;ӯ~s{{v~eL3WW/W]L˲ii%%@eHd]ssɋ@BrUٻUUJU}}LNN| ]]]$$$\񤤤Fi2 ŋ'`& .X&Áb` µ0 ~?MINN櫯⭷ފ-..۷n:RRR/1c;)?Ϝ?"~?e駟=@ZZ@?3gzz@pXj{qΝKOOO&>>ŋKk2|ĺM&72D8jIM"..nh]_'337uVW4Myx ^z%!++55k ޽v|>KAҥK=znf)1&2D8ѺM&72D8jI?"_$%%QZZzBb`MY覦nWS )+\ abW\  AA%&l[ )kxCN*u ? 
i0  0LXS*i0 ౻0.L ՅǾwӅtb:(, m  ՋRհrP2~K̫AR±Lt{=hD[6Ѷs}űM{p(/R*K%blUJW%cob ]ت N,ՅE;tK;p!MG%,݁%,} KwaN,݉qtM\/Ӌ{qZx1Q!Q./9,;F;6cZ@J} `[^a8a8<:S'q]d<:>L|:/xt ^1DNWM|H#iCgvrwn2!H ~AAAK&7@ 4;wFS[neϞ=RXW \)^/®5mK#"+ ,E μyhii{@xgg'tww)))]vq)9tJ)~sIx ~ӟi԰}vwwޡ+Vp֚Aff&Wo dga v wR[?\3fD[E^^%%%|>jkkٹs'̚5 6Lvv6wqIIICEoiiz >ooHOO?$%%quI%] 䧰3g;PZZ6c VZX'NPUUŬY4M^/ɣ۶m>]G}ٳg9}4dPR0Uc q*++۶mwlOKKs+hv!+% ;\wu,X}q7{=>ƍgϞ'6D8XhC-n|G^  )źuMFF{BwR֚f\4:򁐘HOOuu&Át4*-S-ZÇsx>:::" 4Μ9W_}#11ٳgzٷo+W̙39rdDXj{qΝKOOO&>>ŋKE% wѣGիWʶmz,[uVZŮ]x,Ǥ;W_Q;3$3.]Djj*%%%,Y/7|ӧOz?>6lp{N8pJ)̙Æ HOO૯gÜ={t6mĜ9s>A)7 )SJDWWuuur-qGrr2-r%ܹs'G{?|>oK^^O=?0.]bǎnz@[ozG}osm ^yǑ)Y< z[p2KhnnFkMffᙙtwwsY(++_M7~Hnn.wy'̜9RN˲"Xp!999h9uԠrRSSIOOp^NYz5dff9 A ,Aa p=`6/"Q[[ˋ/Hjj*wy'iii,^VN:EUU+V?={N8]Z@bb"1---|ݻwA%=U CU< 4k*9X0dddO~vAWWfw M6[oQQQAWWLq|MmۆeY`׈zxydƍlݺ5yK ֤@fd]bǵ0 RxX~=ׯ4<==|p#?wE8'2D(  @WWmFڭ b`aA1a,hA ,AA!qV^^.=  c`  1}+uo? \aNG%*!d. m &L0rt)hd^`ᗗmcQ!ːѣ ׫OP>:tuy-O_>RXâT F7i,<򌈙6PQQ RSS9s&կx{ygYd ---PTT_Grr2y:t'OFX:t˲IJJ5+'α7"k{;UtgDL.[\㪏d-ZѣGs|3g>k֬8|nѣGyw뮻x'IIIᣏ>"!p477&33sLtz^yRsM7fҥ,Y;3?˲in`0Yި"C<#d ϧ7x7FjlldqfϞͩS…Q#CިN]m ==766Ηz%%%7Ɏ;^/AKAV$&&2o<>u|g,\pTiΚ5\8ٳrgΜ ".c*F7/w*ψism/R__O[[⋤nݺQ?vV\'|'|Bss3LFbj!7"{S'NWi1222x'ٿ?;v젫dXf #<~8… imme߾}XEQQr 'N F,//GE{E({"{2h޲ !BL޽+;Utgtj ֘b`|*ψlA ,a~7  % 0L_qB1mf(_b TBzAAƘ G:`e\<L0y X!x|4\ LHr0u– @]qPyFe>hCqkD]:L^GJS\}! ?.DD@y:|%"#V [y"RGر2E(24l<vMMM vmTWW3sL6leY[=znrrr뮻ꫯx8}4m)((X pꢮuֹU,ZGrf֮]q}̅ ӟD\\v+±cwCWW/2;7ޠ#))_0sL7^{&oBuu5۶mHOOqؼy3^F׀L 0innFkMffᙙtwws%[~V^ٳIKKk_Vj@Zeeedee1w\/^{>S֯_O~~>|q7W_}ő#GַܹsZs'pEΝKvv6~}k\wuRSAyyy=zÇJ q"d"z]q"Ҏ0.\8GDmW\ݻ9qo$''G*M ,AAፍǓohh#wq'>>>> Bo>VJG&`O=*j>mҥ,XǏSWWY~=+Vb05 R'”!11yGaYVDX{{;} .2~CCiiiGzz:_}ըtss477ǹht.99ٕ1c˗/[o?Jk# µDi s=ض͋/H}}=mmm/ʺu놌A[[GCQSS3Xx1{ɓDy[Aqq1;wVΜ9Cee%ٳ'Nʹs8uYYYP2 V]Z/ӐO>ٱc]]]$''STTĚ5k2 7׿u^{5lۦ5k;J 6{n~߹eee8p{rE={67pkqE(((`P!l#ًP"v/BF^A]׽ {(ia ga,YD, Az ?zldQq 5kݼn LmKAƑfL$//͛7>X pSOIA,     f†/^`,QoBL6[A7  0"փNG_0Jk(e 7 00Txx\97 eQa"rJPa@J)PˆdD #j8R(EaDɆn Vx}  ݈ݣɠDd UkTeQe+LX9 ݔ#_юNh'H!?Bvk4||`;n_'$9A`|a;$PXD^N0dDO`XSZZǩ#%%׋& 5gU<u W}nW_NA}{<lp/QΊ+.{we…<+%@ۨٳgY`T ֤@'"d0oj"W?Ζ-[^b֮]ٳgQJyf^/gΜM6O<֚W^y5 ҥK1t_c{\U.׸tU_Po"X2D8i& %%ÈᒖF||<tttc{1QJ :ln¸ sAI:fŊ\p 8~޼y|Ja b`MO>"^떤Gӗ'G_ES_ޓ^7Ɩ.]s{/ /pQijjO?iDMF NI'`|wab :DUU7t|ߧz/xnc1&LvY^^>!XE-{^Wk`]^}by/ˆ= e/BًP"8':|2D8Iu/$"7-XDF'1iSYDoAKO>aZҨz4^A1AA0&ז7:Ƥե´ 8'Ra7L[AHKAa+ьreCQ燓*&6݇k2GDzFșA_9},sppYk>tdAs-2./3L!dlspg WQADH|1txDaxΩᕿJR^hJ4i0Xգ pNG f#oX-o]F^,"LGAzAD(KA ,A-z LdARH0صkGqwHMMeŔpinnŔȬY뮻ɉHzK.~jkk !!3gf̙ۿ<… #tzhlloY*IK6҃%LM (++ò,jkk4MfϞR|>ٻw//?я0=p}ɓtvv(Ejj*G0Μ9CGG>O*F"A&=iDjj*˗/g޼y;v OLL$99\nV.^HSSӠiuwwsinzRSS5k~;7pCE8u/^t}',ZÐW  0x<ض=5aIJJb9r^=ʒ%Kۨ KQWWʕ+< @B233Y0 >?G_… #mq7w^V^MUU̜9S*BKAm?Ζ-[^b֮]ٳgQJyf^/gΜM6 ^QQ>}3gP[[{GiiI_ؽ{7Nȑ#,]T*DKA}ٴia2`SZZddd;xdžz<̛7yzj?; 0 `ѢE;={KDA%S YAx^~?#N._b.\fTydffC,Yz FDzaJ!4Ӱk=[t)藺p|gg';v`ɒ%8wYYY?^W \K҃%LV+Vp!馛"|>gСC83f`ٲe OBB  LoyKr v|T~ aݺu[n| ӟT*G%    X  b`  W*//٧  c`  1LJq4:鵃蠌YCoX;sPD[v֘Fcjq~{3$oj<xL'B/,&q=nǣem,'َ ;!ciZeaavh6b&=x1!.|O\0^_/$whP>LmoY^ mY`YhFv0̶QQn:n^(T(( 9/"e"<gf4Cq ؠ9} Xpy'(%ciA+ qAІ/+𢕷_Fh1A cġU[W>V^8Ʊuw0g;8ƶaB#,rC8ƱlK8<7]qxox}x}f(덌.Wc8&hpVз+ 뵱ᾅ:m>?\ ๾zm/قF,aQشPUk֭[ٳgϸk.o.,L,AA\#ȑ#(0 T/^LIIɈ<))5ѢشPU#XpPVVeYRQQi~#i۶." 
0ki$%%|r9v˖-zP\\ݺu+3gdÆ <,Yjjj(**6K]]J)ΝƍIKKqˑ#G0 %K hZ&Bjj4I=KNjgal7X{y<,6m0 RRRܿlܸl^/{q 6yyyU%%%\+a*X 0 744PXXNjZLVV֨ͥ!RRR8{,]w_+$ `RW!55dPUcq3Ԍ hhhW_}Qh"پ};rI^u.^ʕ+9x 555455QQQAwwG <[G NW4@c3`մm6^/˖-hԆc߾}0c UV]PJdɒ+KKA&!x0'%''alذ]OKCu*}Ɣ2D3i 2dPin %LoF%e(} X  ]^^.}  c`  1L|BQ8}Rh pPhЮ2p\~_eA<ןwEt˳/'VY4}iFJ;|Ž%;L1a:t+wtht *Flt@sC3\UEX-9A嶴-A,<*ݾ8)?\qlc>aqumEa(Au ?= 0}8305Pi0΀gV7N-&3Hca#W<3Vk2Ks*U` BR1qVW%4Pc\!x{阸b c;ڵ۷=& ɔZ=vrғ\K݇jYڃw׮]9raŋ)))0N{G{/vz! k XVLGָ&w#\ &I5c i9vK2,ˢ Lo9s&K.XE( L~L$)) ˗S]]ͱcXlݍ理b7nUU.---x^rss{G믧{RWWRsqF҆4 Bz2WAKAŗCWWeGII >ZvIzz:f͢W^yo|GkͪUhll$PVV֚lf۶m̙3͛7c`۶m<Ә) % uuuՑOJJ V"''ϊ+X`UUUttt4[||>^;iTUUl233)--SNI E  Lv?Ζ-[mb֮]8TVVRUUE{{;mc6^y?>o$!!aȼΟ?OKK [l8oYR42 ؘ*@k2Şt9lڴ 0HIIqzÇqFzٳ5 G:>C~mx!Ş&#! N8 zQ׋'555bi )..v ğ3gk׮婧4Mh#*77fIOOpqqqr_ Fǫ\eQ!v h9YȠyWpϜ9Cee%Νj:;; --/&:;;mE۩'OsA?/VΟ?O[[܇YhttFǹ\Yht իimme۶mx^-[FQQQ__ϡC!--׳`.]ʩSxuixطo/2===̘1!{Ν;֭[Q*޽{Xx1eeer/NOyy<\pPQ qPE*WW4aR]DHe6Fw$t.`VKDEa׮]l߾] B|=X p̑#GPJa,^} 4nz衫Jjz̙3L@1AhKlPVVeYRQQi~BzV\I^^o>^|E~z]c 5Fcd/bfG5X9O6 QƜŝ4MX|9;ve˖QQQA}}=~JJJ(..vVUU҂%77oۼ{nXyy9J)w/¶6K]]J)ΝƍIKKT~eeeܹs\wubmYcdrȀN&51.غ&F7'ZOCWWeGII >ZvIzz:f͢W^yo|GkͪUhll$PVV֚lf۶m̙3͛7c`۶m<Ә9~(HHHKc"a"AD]]uuu瓒ªUb ,X@UUh)**"--ln|>>%''c&UUUh)--%;;LJKKikkԩSQ={0w\¦)2D8a ḵr!q(WdpRFqz9~8[lmYv-PYYIUUضmy?>og󴴴e˖e:466㏋! !U},C}MP): M~~>6m0 RRRܿlܸl^/{q 10xGhhh?~'xbT@<~tԅθ#%%Ei  ׋'555bi )..v ğ3gk׮婧4Mh#*77fIOOpqqqWǎGx9!˜Qx*8ux\Zrv8ߍ+K hhhW_}7̙3TVVr9ڨ,/ijj۶Yhl߾zZ[[9y$:/^Tݻwgq蠣^4)2D8a ḵrV 6A42D8zV^Mkk+۶ml2 ..z:DOOiii_ tRN: /@ooLc=ƾ}xaƌك?[n8o~oYi*//D"aEƗe/p}a C kux\Zrv8ߍ2D(b`Mw~l085:uj\%Cz&!mBAAgU)}AA$҃% 0LX]Af0?ڍ>iAZÅqi\i~іxUH%)-UR;x(8Y=|Y|ZG ~;6\Lx1H V!g2}N *d!{{/#tq0BN`X=T'J3Dxosݢ t 9u57~Q].oeAzVrJd*U*XdgzTTTWvۥ !H,S* HN1sRAjj*/$b+MzH 0j R#S)fPVVeYRQQi~1AJ0M$/_Nuu5ǎcٲeTTTP__Oww7~ݸUUUz63V^^R칭{RWWRsqFo<^TX #//.,"//|>ܹtf͚E{{; 7(,,$P__֚UVH  5 ضͶmۘ3g7o0 8mx1M3BA:V\IJJ VrVX'b֬YtttT]y׋mn_֔JKKN:#)A ,AA?~-[`6Ŭ]qvlƶmwh.''|yϟIHH2–-["[Ekk+CKA1)gӦMAJJ׃>|7eϞ=!f< oO 9*?`/+C:Xe 4HU"m@tBz~RSS#fhhhbrrr477?g֮]SO=iTWWFTnn.$&&q<1a>?eTI6 U:ȠyWpϜ9Cee%Νj:;; --/&:;;mE۩'Os:0=!BaB; HWm۶zYlEEEtwwG}}=4֯_ς Xt)N^]c߾}0c )A ,AYTt YeeeC%$$ {VVw OJJp>99y|FC  X  b`  %  \9\f  !҃% 0L2 埄~%@I 2i"F2iG.P7F'Y,ɤ,j`5p;=BJO#fwe3&l PB3⯡J&dH6PcXpW*=58ƹ007wPD:mnX&M0łAO#F\uQu!OjD~F=|^kuf̰2++'<3ml 1/A!2 @tw_,!sή}NW:N:oٳgIII;&!!k2sL.\ѣGILLsZWVVb =Jcc#N{iӦY2W\a˖-444bw}vrrrػw/6gyk2k,9rIII;W_}'O<@NN"ܴi=555׳`nJ[[\illDTJ3X ykH!3nC5J"zl6^/---ZO>7|3D~Ν35kPZZʦMCdnJAAw1c_x"Zno||pn:^Lss3O<>u|݌?5k0ejwCzz:ׯ[ZZ8z(==l߾˗3n8f͚\.i(` Bl444@^^۷oy摑qX|9uuux<+;d޼yL6]v[PP̙3dҥpALdʕ9,V\ɕ+WhllHHH`ʕdggmgdddp]wMFff&\xϽlƏOQQ$&&&:vRSSQJ~!@L004[lرcTUUYE3f`7>?dMye;vlH~cǎcϟ󴴴PUU"xt钵?jԨ]5;55#GZRRR0Mׯ[ᤥYÞ|_.C,ahFiB^^h鴾d̟?rYgg'999f.\5.ӧOGz3*СC$''p8t]k!` )d EQZ455op97kjjbǎ477g>Cd>Lmm-lݺgZ_GN:ťK8y$o&W^HKK\|7n5H֐B~LlҲ5jO=o6o0M B,Xٳg!11˗3iҤŋsA6n܈dժU֋vzzW^yFA^^-hE{󾔾pB~iA1>''=8~eNgKwq>lı>d?---Xqq1!ŋC̟??.33o~` 2D8ahFiق % 2" (-[3JcdPAD '|N>/҃% ǨJ}AACKAwFc׺Q RPS 4_B))w,(ޒSOXAyjMBi]B-T&<>(2(] +L4(s/?]o_uj( Z@ο.HWe5*輺*O!7%3!镕yF) !B' !V|iдB4⼾cA2zCM 0n̠xeCH #X70Bt7Lӊ /4Mf7z6|y?oE 3 meByBH 3(_gX#d 3(>7-+P&ߌܕs2̾ZoAKA›:^x10ia0aխ#_Juu5uuu;4 EQQeeeL>|}]=Ν㗿%_ט2e 'Od׮]>}NFANNsa„ btAhV IDATJ?]P#]{> sʃIx9lo|***x<?~7:l[YYYǎcƍZ Kuu5EEEsg|MxȠ'OyfoK `?+ѷt5A븄q0s9-0@2ۈ뤤PRR‘#G8z(lڴ{MW¿saMƶmhooK_W\a,X{r1jԨA,A  -߼l6"geeqwqFؾ};_q8>|0XpQ+֧cѷt!p2D8 za\7o^S__/͵⚛q8Z>LuuO3rHcAӱ 2D_+2D8zs7MرcTUUz1c/СCQ-ZK/DYYYyO<5kpU~a&qAL^^hDze(.IGGV/VBBh柰W>?Cf,"G]ez>h *n9Ovq\:W=1m44McǎQ~\ } 12DO+2D8z4^q\{lڴ7nP\\Lzz:mmm8pkKA0͛Gvv6v3$%%1n8qy]+XhǸ"T_|E8hRQQm\qq1sssyM7qD&N((C}0Ǝ"zs5|.  
O.8X  -3``]}0eb+ \(@4(1 H  8X  "ǟV)kSt4 tjʮ6]iA :*AG9 :4Mݗݗg5oe}a@[)@Ct740P][Cy4[o^ ;5T:+Tx@ޭ:ʭd:{47 6جa:ʴuR  @Q6(e3P ڼ(Ko6OE%fl& neLM2AC@E0}S ( 5m:Mp(x`Sn:N: 1|;/t{{}A<׍}EӋfvpNRTJi(t͍fC7Jn솿NLnBgf|?PxZo<}~^1ӄN:_^T+:=Ay>Bw'ǍrQN0(ӵ({P:PjYhz)J_f+EcQ{Pv.h n·a-뫓/ׁ?߰ |V:P)5U5SSA W zN|;>: hovn7ow7x׃itbz;0L x1M#nܕ|&cKv{ /!A:kEQQeeebA,A,iJ&?? <ǏgƍNjjGKq{°U5,Z뤤PRR‘#G8z(gkiia͜>}MVV˖- YgϞ=޽Wp80a_Woȑ#4:t]f>|T>0 ^{'OڊbΜ9̟?_8X0z zY(͌3s%m F[[[NL²eu|rqY6mă>ȸqhkkԩS!y8p;3oW~!'N ==p̙BA,A  a/} 6_dJKK9r555x<233Yj7Wgܹs/A)ә;w `  ۸b4V^"3gOv_g}6?u#xxBdnaL "f_lA,Anf}(g( Z5 8Xpm![[AKAATYY)=  }`  16M]K  .8aafԴy=^6c7zݜk>ay.͕-ތbw4#u k}%ʦ'(qFо˹[PaQ!خFXڀl@ưۈ{p~2jFoyZeN7Z~afv5mn?x3JH~ȏ1JM~Mfw# ɛ&F Ado {{6ވ!weAzF4qAqAAhLST BC)i\.(++Cz'ip85jK.%77W ,Om-T m񡹨8,@?]pou ?? <ǏgƍNiiiLGի1M6v~p8>>gH ?xT`x xHq墤'rQjjj/~"{n֮]z4RRRHMM%;;%KIss3/_sYi۩Q[V_DEA ĈfRZCmm-IIIdffv=(C6:NFѶpV1@?]p:@CC͋9穪vp8XjUiJ;CW_3XE"v*OZܱcǨzf̘ŋٱcGL鳲xG1MN<+“O>INNԧ  0ˣMp:׃ц Nzz?zhٽ{7>`B w" 4>UqL:h6C|rBfHNN5D6Eڵk!{Y8XWWV>QZ"Z @nn.ׯ_ggN8!g̻ŋ;,nرl߾ .;#,22D( -٬X{m۶1m4.\Ⱦ}B.\O~˙JOOBKW__$++{^zI,r@CcQY/ܵPp`mV3N{ZܘAsZ9O<,d>Z9$YY‘!B!6F6G u1 rAAKAap3`CNQ⻍5tes|Of" 2}4sim4#ucpLtM-@SpSDϟ:;]͈ hdi*^ 3Hw3hF׾Ҙ~INg훑偉avzҭ>@SXN.d 3b-1]BC\.e{ݧ}arWKk+BAA,AA TWWSWWR Mp\QVV.awPSSR 4q85K{>|{駟xHKKcܸq̝;1cH ` .dBLAϧٸq#SZZS#GzjLӤ;w?8_ʮ]?>K,rq ?o?.% =+ttDm%Y H3@]III#GpQ<Yƒݽ{7wgifOMMeɒ%LNN/_fڵYѣG /O>Inn.MMMر￟sZy\Y`[n|V\IBBT0@TwAl`eab|_no{͆뽥Z yHp8())=c=c=Fcc#۷o.mTL wYO444ybNsyp8VZ2$999̙3ӧY~}=(---d8055ׯKe VQ}aƗJo(//G4NՋmX/С뤧[GݻwƔOff&MMMaHbb"W^Hcd'0yAAn'==$''"{ܹkX\v-$`ktvvw J7Fl%J}a@0l߾iӦq N81a#x"eee7vXoNZZׯ_w cܸq,X͛7seNʈ#hmmֺ sd?"*v_*qfŊ{l۶iӦpB"w~XTzz:Z2<*/"YYYs=K!{| _>Mjj*&L駟9!lWYY9 WY0|-B^ure-BYP"!!Vdl!^T* U>rk lS)L Bi0 0ÎF]eAiot |JO/0B 7 3"02]tF?4.X@9rWk"h1kHdAAAd7iJ/VܬTjah\.(++ Y'ڨV`$%%q-_ی|E(A \~۷ž={8qDwMjj*/Ǐʕ+466n: C }  qKvv6+Vc۶mL6 o>K&99~w}_VFqóq_.CnN!B?***z/))$XYYY~rr2ww_y=bpO!B7&rx*]]~ς 5u{{\ҒAKAa1``{ 7!u;CXa=6Y[%  % 0x![7xƞ7i 2Sڨyk^=U E߃nύnt']MP t7QT*gBf|{(ߚ04:/#8޿AF*8>8@|Pڈ 9Fw;9_C'h?ʊ7.ԞݍnVx9jjA#,yqaF 3Gоt48$J~F߄1{#4Sܕ sSYYI}}B,AAjPJi."B%쎚jjjN',[L Ì DZTTTx8~87nDuJKKcJ?rHV^a\p 6Uĸ8X {*567^5ǥrn9NJJ [ȑ#=zC}}=k֬dwݻygcYN'ֆ̆ 8{,,_\~@8X{\\lrT F[[-t'N@ǟ'RSSַE{{;6maX;XO<"# 4440o޼Ӝ?* K_RH<pwn:1  7^e0^رcTUUz1c/fǎ1Gvr9Νk_x#FXرc$2D(*pU:Gyy9t: `t]'==e˖/SSSҥKG" 2 0rBfHNN5Dܹsh"vɵkի!y>}Z$?8->>%v0rss~:۷o={pĉ^Ӎ7QF{0qD222X~=ΝԩS;"u܅Xګ'u#CGvv6+V`޽ٳ,\0 ,`\z<_Wk}n_YY9 YP"ߵ YKp0E d]?YpE0E(k `!C  7^ep]A[aWYڄ)NY|-=R7 %  ȀM4 taQu1G֞< BEAJ222͵#q5= IDAT֘1 Wamtp\ARH$)OQ0xߋI7͍RH )8,@FPz7nڹA;i`Oo'?-?{_{v)HN z*Ӵt:\uxIEÉF*ʯ{צBn 3(f3 W2V <?L +Ou=+@oo]{$86 50䄤01%R>8/ﵮM7 ے lt|<7Zז.oKsM -fl׮˾up:`l<;(mxwXS|lkp_0H6zί k^4LlNYO5-ʴ]uͷ%$D*z5ft5vqBHLl"yo p6*Oxx5(ֺ6WGm:ZZɵVмN ٰL5krõv/W<\kjvU307]y kkknNc8I]~/ F*++yG;~մ#ąQWWǦMx6#Ô@աB4\.EEE,/研bĈdl;޷o{ҥKhFZZӧOӧ/=X 0SQQlܸ]׭zu}@rY֯_Re˖V=ꨫ'I~lڴ￟ &z9<}-fŅS) Ed!AAuRRR())ȑ#=zRnѣGYf%{nvͳ> tw^l6< k׮e̙\pGHYYsV+WePJ1~x>b>#F0i$B6 -[PWWi̜93<}˱cǘ>}:3gδegg[Xs(Xz5iii]UVw^Μ9Cyy9@aMM m6ژ2e +WpX۷]vqeҘ7os̱M6qIMMpqۀݥޅanfͧMÏ?O<r|Ν,Z%Kp 6mDVV'Nn:ƍ7 4Mc۶m[~1?O>$)%;wr***bΝדoNMMԩSc… pTTT`&III\v ~{w1c`٢.ÇyhoogÆ lܸ|?VXѣOyHHHݻws1կrr W^,aw8]v* 4440o޼Jʕ+#swISSv`ȥN&@9z̘1ŋTF4v؈?jϟҥK:u_~:/3fGyy9ڵ MBsw{{;׮] _i999=+W~f )N'78uMMM_W_! 
8W:0MfhiiaÆ !瑘@qq17&O̔)Sztv+e.ˊMp:!_F" 8bϭGgg'999ideeҥKVo`40uT^~ebr]A{rss~:۷o={{IMMMرfÇ?~TBȩSt'O7߼闪 4={Ĝyؾ};\x7ޯ}ywO|5D¸q;QϟŋܸqAfQ]]͹s8uo&;]K,ߧS[[ˮ]صk}/^ŋ:tq@zJb;Eα͊+xضmӦMc…۷/ ,ٳԐC 6;v,WYfȋ/i+##ujoogʔ)!̚5 Ύ;_ngԨQ#;hiiK{M%x!wG!BѹWJJJBS~>d?555&b9Ґz[4/_oٞ,?u^LII{z*?Z3cƌngϞٳoFAAn?2D8, dP,+An"bwA,;[C`)aXxMi6ؑ,AH 8X  RAAAAKAap3`_"[0i~h&țEŤ{wtś=1c~=ě 7ojFěI3nu7#C 54mP|abv[7ە{УgGvrzs_dnP`f/uc׀z'Zǔow kқ';0"FOt]TrWKk䓱>STVVR__/KAb:Rh墨24M q9ԚbNAC|***x<?~7z~zuBoJw>8j*J!:)))oA#GpQ<Yƒݽ{7wfh݋fgaڵ̜9 .pQ)++cܹqlBCCJ)Ə}GZZ'O䭷>CuFC=F#鵈/*g- _˹@f'9sN-ZĒ%K8q6m"++'FzYnƍm6֭[w]ROb٬Z ˙3gG( NuAC7o^iXre;̤]vEu|G-,,$99?:uK.qI|M^ʥKx뭷hjj˜8q8 !Aa_:d *ׁllVX{Ƕmۘ6m .d߾}1_`gϞD/_ΤI>vz)z-^y:::1byyy8n7/^ܹs)))p|_E_YO"ʗe-¸ZH{!kk׮eX AW7SO2D(vq*S4M@d  zǥ)` rz!   B3`=X?o * ma!=X  }̀`UVVZ4 = :w\u4+QVI _߶зMBݿͰx&:?@@D0aņ 5+!^븆  ޔ#Hþ9Cpy-hS^0y=,hݞ 4k#,<o{66wX~F^7,?O|Aǽa}*^E^bir韣Fdzj~4/(o'aMsM&97`O5n-P ?Aza!N4UA,AAtTWWSWWR #F%K`^{5?̴iBjjjg͚5~MM %%%[rΝ>,iii\|kZr(..fѢE!eY崶ɓYx1. >-[sYVwvv /0~x|I+'Ogy&Z8XзȗB4? 󩨨rY֯_Re˖Y2nRZZ#7AZ.\HFFu<|jO<x^>6l؀d̙s_Ɨe|2/"O?4v9{SHMM̙3x<qlll$--M8D(I뤤0b&MDCCC̡C9r$:uWoVV=;ʦIRR\.f̘O-~VV^ɓq\L0]y72SSSill666rwӧCJK8[BT'kRXX ??[lٳ1p>S4M36l@{{;{]wӟF0azlGdRSS-9|^Fڸ;&L9s9sH)?:;Xb;hƞ={%??QF1rHk+((pGŔgjj* ,ߏGƍrU?O^^ロT^z%?Ε+Whlldݺugnn.| ϟyj„ ۷0+,!~ 2D(4~4Ν_W4M㡇RL:pBk}K/dt:2e K.dyywyimm%))||wxZmG,%jPM ۿ@ӂ Sh*T.8$Z9Zh9=[yRN-Xw7unnpZ/+ Tlt !."0yM&MX~'])GӢwSwOf&N YFK(UY =XB`?t bةJ)FAAAK,f뺕EQJC1}tok-@zz:&Mb8ΨPSSR*BJ)y֯_ρBRL4ʕ+Zӧ .PQQAqqq<ʢu>o8uor(..,K||["''0я~ԣ ,YرcCBd_RΝkpŢEh߿?jD ?KkUl'?TTTz9{,ׯG)ŲeB0iҤc=+;O?eǎ߿z#GFpBJJJ_|f͚d v@.ӴF8%J).]ʬY`Ν/aĈ7#Gz25M c,]ɓ'yWIJJ 9/+Wļyؿ`i??Xr}rMMMQz<:::xyWXf ٖLrrrH9sõ*< ~ ]NJJ #F;`ҤI444D9RSSC`'pNMM%33ӧo|^e'$$iZıp=]$?NII MMM\~OIIav9x 3gƭi躎O:Ņ گB÷N^/G)g: ;v* i+V}{\˟w$g0eee;e-ZġC`=xصk8eR^^nݻzzvϟɓW0o-[F~~~s裏 v-ZD[[eܸq;6Dӧ[6>}:[lҥKKw}r%6mĊ+"Tr~2)  aۭ<sjkk#RRRȸ/\@ZZZQXXHMM gΜGVv…[bze:IOO?)cǎeĉQ__aݻ7񬭭eҥ1xa=yJMM%## V\GC7O,A9^tn eeel޼3fD~nτ "^OfΜɮ]>}zLrr-9ݑyزe k֬?g탢߽ͧ4W7[Bg$DLƈdk$f&q*5LT[NYkmM53135L2L4Q@N dm4(I 6\nQ_a IDATn{ys9s}b)jll t:-tttrHII "55w}ׇuK?2UYŸR^^NMM ^!@o|嗸nnܸ餪ׯۋ4R<ϐUU jl64M'k+W;xbjkkYx{GΟ?7,X-[[o(l6)//'33Yf >??SPPhO ,a" F)ɇQYY=Cdd$pС!q׮]իYE'?ax%%%1o<}y~KII1~3Hn+W:밆׿u***HLL-[ ܹsyGd`lڴSNoݍlf޼yC-X©S(../OQ8꽩tB#AUU׿Naa!RWWǛo!iKKK7… bϞ=FXll,O=#v駟rQL&HQ'xywogKMM'LYYn͛7r#&&~Ƚ!B DWA-w<0M&qqqFOn/Ntt4fo`ƌfRRRXd ַȑ##!!!+W2w\>ћNll,fT֬YÍ7hii@q͹|k_p E\\_ypX$x_]1(WS4iԄdmY\fQV}}}֢(Mg,2D(H!BAZhhh`׮]hFUU6mC}YmZqOcc#Zꖮ痿%ׇ̚5s熌t:9{,#Σ]voo%LdP&q˝&Cl޼Euu5h"86lbl: )¥KxWbҥ~F@/ҥKynz{1V+픗uVTP0*5MclܸqyDGGsNHU1i~i?$%%uV^x}VXaĉ#99yԲ\@bb"QQQܹ3Cbbb(,,$//رc\xEQ7oŘfz{{ٽ{7< ݻZ<|?~^*Y ,AA8hjj"118Ͷm0ʹsaBʹx"رk׮kΊ+8UUUY"Ο?ϛojeܹξ}橧B4=ٱc111p8Ƞ EQ|2.(.]DVVTX pihh`׮]hFUU6mdYHAAΝeƍ(je\p3g!#%%f;w..\{mO$##L8v;4551o<W KA۷ogΝx@EQ}dffrp:p`VVÁb!99Y*X ,A#nB-EI( 8qMss3Arr2]]]cWKKːc j#^l6111RSSdjIkk+ dffJ%BhMT%!77EQ!%%nΝ;ǵk8}4cOss3\z>crZ[[iiil222 YYY={0bccl|G2<( ׭ PKraRU򨬬dܹs1^|EZZZ${޽{9u6ln?8K7!99G}OFVV=XSMii鄼fPbbϲ3س,,= `  %  %    Tb^E't"j4L8&4#} 7CLn4T#7 }޽w_-ʻEzjS4ɏP~ gk~Wj`xw}>[ow >0Md{B5EoÅgHkd@j"1DT&2|k&ڥT{& O t?0/PM7nlwiO+pg\O***gΝ#Ng /̙3ٰaΑ)  L eeeա( `Xͥ? Xj   PRRtrAEaݺu#5%_fٲe,[LX pL&X,v rl6tRxTqaYY̙3*<K,Hp '|BLL ;v/( ͣLoo/wg!##]ٽ{7V~>ǏϏ[iƱc0L\|0djjkk$66 CMMM8q> ٳyG"yf.]* ´Ehjj"11|tt4۶ml6Ç )ŋdzc]kFzz:+V0TUUf8IeMSصkvQUxfwbb";wnX+667( Vs?cΜ9fܹ\pv{9, ۶m?)N 233q8p8tttļyp8^z\/!!pB[[! 
[binary PNG image data omitted]
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/visualizing/_images/allsky.png0000644000175100001770000040770014714401662021617 0ustar00runnerdocker[binary PNG image data omitted: allsky.png]
~{RhKoq;/pXw0TLq{+/ç> l-NBlÙ tE^[{b ;,-Rখ:N-92#Ix* W""Luttvt=L{[/χ:H&p 姈3S.էx)+9oZ"9NE7tYSUBtbD;\ZGwϊ~N+6Ά"Z=bAM߱[Gd{˛lag+N4C3|,{u\GUjQȚ%95Euv {C"?W|aƑaw+*>>+%vTOfX 8RXHS1楴HBxBRf6vv`8R ;4ؐLfuɝ-Bw:*yIjh;N]w.;i q53d@e , wI,=Q m[ 6%V0< + Ĺ,׿k(.=Edoqٸ|v3xgJxr}{)=n~B{]r!>h-B8~\|">:} s6mv8L=PF\pJ{=0 X_LqoKb{4lM±X\yѥ%xި켲w8̳ΐGs+E˝GTzN=m׃ ,=8[9e:,Щ,gejYvZj3'|rg^sY)\t,:eS'HQS`EuF85sC ,wY=LxFuԆ{l)tb16ږ@k(r_:۵欳N_ǼᲱ]gB)r;5smT^>y>s$F׃Wnn1rJn7^\_w^|>0R"ѻ&t|۝{yJ )^RblXK輌?[e&=z^,R m:ynr5\ }as̚(ޏ)Rǃ3O`.mqnIO~7],3Qp5wcZߎ?yx^O<aGa%9x 7UcuR$Zp|qD.-kMw;"k6-(8'4)IRI(Ly]23T"Pf~'Ϗ:yt4K!X1r^J{^Ny]#%RtĈ,'(4#)d9yyj_mF{9Фvz~a:. |Ϝ˙I-e9F*Z =]_|¢o/|!l2á t0RTU.nflb;~ 9 %(YD=ra77eQ9! ]^=Vzu!q댽t->S)1eS!jܴ-'U#:~iGx*gz ;9-sH >tC¼ t6|h#)b\U2Y!]N7}e}7զ 8}t'W~6<aƼ@7 .>&/a;bЅ) /Åӟ4Gb:px"k#WGuH!Oq_fIF-W2 ī"}yա\|.kIP&25HH1j{݂xl5˦ѽ@}Xp>Qdcvklfs{2V9E}–u ͼ8׉sKBKVg65x F-- $<ݝw]!e(ߥ42I!K+ R_9}*}NrXDpD{Jħp.?7~70y1nƆ+/[țu'!eADs:aԙO:іN:7 +4Wm-ٌEN%mttכq|7Wn<vo涷 *gtSh(-m v}+7xBEPqJj~6N2Oa٤ Ev{Q@Sg=$Dus6g2DKxjuߪqxУ;ϲQ0RxإJ JA qm_2~|屸<)%C1Kiqmi)!6F:Ķ J[b^H`[&=\F'C}^ )BΙ_4::v^\Y4jҖ YASr5<(Cx\fJ;Yi5fr@kO,NGKqYg{a܃0nƝFYx_q/,/wuAٝL84[,</&m3Xl?ZLKv$t"7:~j7X2ކ.}xsypsNmvޗ)<5xK&֩CKLB / .yATjR3DI5\2÷6 _|k#YEw꾄 =̛MVݥm+.ͫ!ž>uWjP?Hly_kZnƫ^G_5nF t07g?' <Kl jBHqwʦ;^U%k!yv*+;8d N9u`y~myJA=FjP#M9V1BY͞*̝Dw TYO䥘.4>rmT&Бa·CN@U.tDr< C>^JT-nƢZ' G֝a].^wl |bTYΑsu8f{0ԼdC'*6 IDATl5x@H]HHI\xm[ѭ +w!펾{ʺ6ăC6ѵcx Vq9]<#iaqƟٟ/~ XKQa]oc taL,IҞX,@wB@H=1CO tGRI$Kr>᝕Pas%Kh~o%J1"Hc:Npkk^s ^ ֒(0 & 8vmx/?1O/#f qCDEեyU>2Kk![KqUd2w]Nw\y\><*wNթ0S=]OfRH [:eNȅGe>tYn$~3cqMO~egC?fVf=xo[qqc0n~[o9Hk=wgxj')vjkC|=X:#4d(y\uBev,>3OO#ni/ϛF}su s甡[f9OyĮIHח$.4'("gYϓS3h} Q箛v~Q*{|!A}I&Jt+zlw,LPj!|9@3CۻNs8AG++ߍ8eaCcLꞃC*`SX' pc0֝W{WM@稰 'VąЁ !QbP#G,!q hML&s;4naɢ+ wwd'g(oe׆pS9ZFiOs}۬~r %4yi-)rn*c{ o{-]#~ni0cזIxS,9YF4S8C1qٱ0yy>]M:/ ~>y>O-B>;pn@_=#:E<Gȼ}kLlr,׼u}tal̃nƺ"Cs\/ u[{a(5,f2w9m HG3kW(lY!!AnE!;u52"ŏ9k/gt+NfEʌr 6_$sT6nmʓӳX \aLszr28Zi\9ѺrsڌG<|w ϒLǽrt M3W2pg}se {dQ{i/ڦZ]NsӁa)H d{ޖu5/%HxWbJ䵎v .iۉ {( eX_n9x.vzDs,Kwøb0uCCPٓV4zafg-(l6|;V24w/ 3:++\.eQm!@~N.4o1D *.x9  &x ]d3F,IO żldDE?Y6AGf|XR&ow m$}"''z`S R^iiRwCy=Y=L<hjuDl3!";Q2Mi:aBu@7361c%}"}whDf yj!qa{𒗼; 'Y:{="ӒRԌgXsWGzf9om!z'7G3u{SYeb*ƒۮ tjɫ4C_'b)Ks{]󹚠ͦ]\|`Ou ~(D-Ďkqr>qЗ}xS?G9B1,S9Rpv\J4n;E}sГyAQ =A :h{kM&㻄`clrfXX)9)Dzyg !3PV@%1~7έOؽaa_E\v;bdvηԡN@^vLۥS*i|๥yP_h*8u^vkd'I9}z|o!Rn9Mݭ}(S{#$i7Ѷ6aX{z$\H ݎǢ 9M.,FYRr*fU d.j9!y4YXKﳹ+F>ö6~վaҶH/Xd/IWb 5C^cF݉{Է~,-BV!)FJ简-}zlc_SY\Sm;^"𨰌*މg=K,..^06>& X37to_?vVe6B*;kcaBN&suiys,AxPSz]gǦ}łE )9%0zGz].C1bfl8^mRt17B|[Q-KE(UAQZכJ`G^ɺukN6+1?6?z&vv{=a1D4*б6:$qz<=0w-OxC:q ø+`0y{ߋ~{WA~# Ψ Os-낓9{:d$DPsz{U0ŵdx~cE F}K@mK'PR-bEG$6DO̳4DF\֮kZ)_Nt:"ΝWX| ߈-Z?3d]::?dCLOb{9lHh<'ٵ$٢KNS7hTjbw2o{C#2փ%F*Q۹MӢi>)ן9MC&3 ߦZ@=xOySf0& ykҗ~-ݶ$ow{赕t"@AW"]Nptb/҇h׻LdEKf0X0Eurp8;MᐂI]^!~:\'\]۳ݿ;G` x>`Ӏ@ϢĈQ+;eB]molpdHk|Rܼ1¤j1Y&+·Zl2V+Q⣸KȒab:y{.R8<P ޚ8Cќk0Jh<:ZNK786/|Il~}KyUv qYœŢ˶Vۤ;}Ѷ_^țj #!#LJY7ؓ\g6|uZ^jA&YWG-{МXtʔ-k'oAƹ!RR9~-c3DD"Y&򶌲Q 6"e"6GcsM{+z({+cAK݁ήኍmoJX>lÇ+΁@w{lS5Vm9_b`aadyyO3Oπ#8P8KBm-K Nlk3pgB̻v'xDǏvay$Cv4w\T(5XcR5FzGg-Lhr+-ّW\t97/>^{)I<Ej ][mWުIdUPfe0YV4 _ $,K t9mMolQȣL%g|`BHE0x-|!DKfQ{?65Nj.)w$)[79[8;>+t#Ud 2$gFm];ϕgbRy(Z9U xYx^z<ϴҁh̦kCw,^XJ*$m PTůSvښma뮻~秮M%Cqm2"ǿ[V)!>>)n5 Q,9iyr#TL/YԜR;8]&<3s~YTo?EzwfvAQ.R6\c~tˊ&sрA+gDƆ<}""ў>u.BE¹V\wJ 4ZH8#-tx{w-(b]oov/k+#}zg7m#5C; ΅}PþOES= #/ r% w)Е͊H/{+if"~{,>o:qB9f[,bۢВ=я~wy\06& h0nm3J?1`Z'I>9^笫ܧM&+$[gd^ ¾e#i1j&W)j`U4ב:s: $F_&A]@WzK0AGN x\>^/S {mS{$nhjFB~OItGrŁ'ֹs˳sʺHGLH<d(콻"}Onk0'`|1o-8'0h3LSH!XrKJdHE^&|= "5eiȱЪ Yet9O#GUI-;_B9@?r.|*aw'^ @{ɪ7p |8|3OaE0n`Ϟ=xc={Jp' ʌ)/dN5IǒeStBoˌ$觋llDYផputM'B &)bOt \!MuBW{Nʒ=.EN뭫&?mioܖ˵'N9O˺ѱ27>N&/.=zƥp^@_)/zHaZpe}6B{jyTCꞸMY)ΛSCSpb@&ߋ2£ı)g7!zJIe+'DHpW<=ؔN('] 
8@s9"R=0+x?SNI7& ފ{< γS"kHFWSGvlNMorpS7m"Utiw}-Z3/#EgŊThH(xvQSPnimwnjЉjz?D׆y<ԈJ~=b;v7" l6va:fΒ͡ŌQ]$lO:O:}"=%|˴ޤZ:pH Y6tY׋/鱳μr!kao^[]pvfh@jCy14\Fǝ_ 78보א8Sp.,[q(ֆ2 .+o층k\R zz7&odB89fuol a<~8g<'SA'}z`tHC Vu,Uxc}زeKaܹ@7{(xS+(*nR><؛ѷY9}miP*,΁OyЅ%C"8Bp7SP--+pv6@wػ.nG/Rv>n2}?FmY7ؑP>Pj0{*oDX,ۊh'y—6)"j,.V7IȖ\,2mtΒ =`(qtYgu h9=7:kVDOpEtYgURndp5ZY}X;Pm9FQ0%V]|~shd eґ ^v<\Se^9|$,Sn혞WVsYp*6f˟1?ok@&T&VVXVY_WU,WXZ {~`Tong@OBjۛ{uʒ(۔TVa*fly/" (g>pPʂԼu6uNch++rݭXBq8sA4iL gH{F ڤ}2|RabqqN-!n}S~?c?߂Rwhξ>:}Cx@C R =0pׄA:^fZdrݶļT V0=vVy]~[!! P俓]@w*A%Rq8Rp[m%8ie-L e#oVN1_0^iU *k `Gﭲ;ۛt+ɺ;q!qUrz*8#(}5۷vH {񘚀[ Hj@w+瓮!tYg_zg}'> 6`~>ґmŘgB:̶Ͷwoziv ,$VM N,npA}P5s᱆egV*̠6uܾ!8 <۷i}[ u rs7fz:8h6hSM hBb>]NM+KUUo@.%9ڊj5y1.US~(W;TO!Ϫ18\<zobNryUU`˓`o&:@`Z9Sѿ4Ϡn~~-k|Ul[w8i}=M ,F4 >8E%ҠQ̼o(♲[ksv.Ϗt^G咬PZ-g͵W-`s %0o=߃::'|*&oUz pgV,pONιt'2#>^i[r2Fv9Pj؝@dzi2Nl Os+^|y e2!RDzA'oC'7UWW΁Kװ߱%bT14 :y/yfT:dǝXl[Ŝ9o7@Mmmiûߙ[;do 2@g@Π\枧΅#93_IPuOphhlAQJ:<+8kX]1V=˾69(āM]G/jc\6ΫD&eA;t3B8qP'@-%y4d]?,:Z;ʲ~?F[@_v=3T;_ ѡg蛭ܒQoc ~ Y|&薶w1鼦º&v49;&\sY e@?@U)#Ԑ|RuDdNOtHD/Kiv[ Ndaxlb=olmUMj/5-󺊵>}اAox"$9I'-9= ] /Ʒ ޮ-r  `IΆsQ= Εڢ]ZʰkFz Stpkοf:G3ߋNJy)AR}L a/o'>scguw7W|I&tSp>HGtcH[m1$ $ѡDZ H0!7 NЙ }%, t{4AR跅q>pJ}Rs)/a+O)ܖ]+P[mcS`cp@>6/A%nWg=\mr`v Kb αom˓yz|^5qT CYDK :msNIhrϦ)m܆kjFPeÅ +- 6ɹ2uS|t ^@zS[Mpݸ&Ϟ[“TY񬪚 >._:f@דּ׹}wwOBV%X Rf%gJRUmw&¶IqsJdI➨@-hBy7SVRa@]ye|VryPMHs8=|ĻW~];묳e@דּױݸqOypc摡pZH]KF+ѕo#0ud=ٗ\PNlEL:mJҶ@7Rt 2StӌA˰8&9mF&cǚSL^lwtXY22&xbe }9/Wi:Y Ao-b15^dmA_5oo|3_n{lA}EI *9%^3Wͻ; PMAůq>|->ܳ*tW;Qlq1VH vo+v\IrK|YRU1rrT6@6A_~s{uWguvW:{s?6>@ENWg~. ;r^aQ۴5F51ͩi~M)*yNxӭ|2gf>iAm>8pN$ DCʠPIg`I6q9|VVc`*qFeq觙7|wtsU$w)Zti1|3>-`K Y[VKܵRe>T,2itM:́${__d@]NKBEoڼ y|{~u$Y zHmmtEcp2ĺ~oVa9 (' 7U7^yW`wqm.mw̨ [ !HQʀO^a.?'3:e}Ju:|#x{ l"0|{uNi(&~2:%&*]| ME `|P9*&~ '@9@<ުl+ fAX ~w6x2^P[ r=yr7.9SU xʶlozHS#_4m&u[ N?zޢ/whH0C2E6=kmDGYB)6XJx\rh`]lv 6+8l1D8 *1>IW}ڶ]1y_hj/@k/N}= s=gTs O 6@mg,oV@Š[(틮Bn-t  Y`y{5kmMҶp@|شr2k(Yqԁܦ cgʿ~/{k)jϩSKG>OD묳ʬu:/~oy "] 1 ҃A4@my_|4ov X; p>[ c }lEK0l/T= @-j/ei7PIU 8<j$ccUttue7<|dϙy~'2kO#y&BUb`wkm2W6Sjy$AnGZqnЮ)90Xp#g!|aӮK˱FטI @3fwtyv#旧$K9IaBf۝F+`rݯ4Rp';$&l!h![S^3$-B׼~yȬx-d!"(eg̤tYc*. ]lfJkWkq~(?~/{}aث3?3<{guY;ufo{۷gz=PrHb '^W w,`<߁ih2=,jbm_x#\VE7$̦H;2pd?\!o`Vl^;YQS!!f6T1X8w͠x>Vj0ʻoY} ܖ12A^p5i4ȶe)ܖP YGE2b1xPdyg ~9l&oB͑ZJS -Ml`<6TF'ldlדK>SݦcZraK#y9M׊׺*m!U*hA^8 Ay^ df! *$*b.ށ[٦(RbkKƂJb9<\\14Ag_.:lUewYMVqEa8IA4!5MŹvQl;Κmٳ>|O:h@דּי}/ 7V=b'[@o)gv+>zdKŘȶ@oZ`m|~U @*/ڦVYLX!ڷmTk(3̎ 'vRk,ɘ<1mD\wA \ߑm`ǠF V4wVK`9u%ONUԆ18"P`e0cu2٢(y<_ƏIU̎7ϳ6)R[a[FFAYD9m%r25  1_}FcZ osѹ@"< SpΠYp!vm`p> Ucqɠ+'怘Fzxno/]\:X;ud?<{M>)Ge1!Ǥ7&$-/ p]LSꃁK`MtX)lYlD3 ,ǜXbL~hq9NwX{/Wu@pz؁z^P{@ϲHg =a=Mv%,m:P%) `IrޚYp`EG+gU>xa[D@lI.mne6dʂy ٜL2 s,T6A"7> əHiU{ 8OtuY\e=QgA YR 0A;NXteܚ ]ުOku"/Wі;{UAכׁpߦ w݂rjQh8/)B3A.$R qWmW/OPfpKhK- /[` IDAT*tBE%3kYG/ڕ}9-~ nُmq!KЛ@ћ (rqFxJ]Lz,&hL"J%f'*SPj-m>8Ĉ3p|{;H]TC0XR;M3aMVE9L) rWT<,k 2_ Y#KlmdҶ{f>d(etW^ F ΡAC Q< .-ʨ:ʝ\?#ʋJ\9ҶK0lt~-mS;R.*^棸h vY8s. %>¬8_'\9/.J_<^x}cQH*fЃj4bq3`{8 KE K1I Uy`cR)֘~n/c|Rz|_YŁ;ktfjnig,P 9y qt4b;=h1;=`@(&:"~=;묳W:Yg?7')k_m ]*r 鶨EsZ6,?#SrG,[`fm1p 4{SL E3^)ω^ K _J1HU27ȇ_39Ē5 ca6~*s>ϼ ,ǀ[{eR,sҹ +b%;cit7&7 l 9dUBHYZfМ|Rz2$X Rhu A`4)(Ys,R!EW> 癯m sy}T/Կ@Ɛ#]מ |ǻӯiguu:t7=8^,`3Q1rϧ[ۺϣR:01t|V2|7aZ&f΀@9\=c[6Ʋ/XP(NKG Y@ϧW!Ġ j >V>.RI~ZX< 8h ?K&LiMwZvyHAR` FzPvq{1ZyPrnds Z}a%m*뻵FEp֨!ejq4 a~~0nD[L6 @@/:vqKl 4#Y5]j sL}y>gt^/֨W/8W zLYX`E&9XY|H?Q1ڧH"]lt! 
3r4@.s5'E ;Νu˲w~~ ~ `O5+qΧ{'T(n@VVqgj+elؐC>蜗y1T "۞ϣڀ׫k1` k_Y\^t\OJY6YH짵mZ gmI>NDܲwtׄff~>Gx ~/l3$Wo~[6u١cF] 8̈˜|n`@ ~t,u7Z{Eo?&ؚ; `@S0 06 c9o8a{,Kn}L eym[ RY'a&7@A' uf+\I^ZpUx:G)v^'_- _i2@PE̖UUhJ+IA!_\t3IAJ^_@ XJ'H+էbCSeR(͠hC}uNϩMEQߌɠ;S&_AzYk2[kֺy):U}WD^9鈹{s6\#ҝ2d`G'3ŋw5;묳;wk~~ 1gw;&G"9t@Sb 2maυfkɓ@b7֨W- @G46mU܃υ8ޱ* ʹ>a}AEt瞑lRETY~9| #wp9U|M[.j);ƭd orSb$ Z2Amk e7n Y%!V+fʶڧiB|%8w#dH+Hn-oށsRZm>]mU! Ιk6IAgkSI6v6(QGxcJp̧|cL;ň+d䢛 dewo:b 9 */ü&1("UB ?N_ZS[_9H"\ׄ {U|>_YtT,C_唞??8 V93ݢ;ŐuߚEyBby ca|9 mrw tT G:묳e@דּװ x衇PCL(B䁊M  0lms8 uʢ<+3p \8hYrgBE4_;u,/'^/O~njbLmYdYdMmgS6Qee` W@*9b<  r@լ+P<;9]TJ0C[Z(d%8WrP~S*ॶ5i~at>BJ>`|<ԋ=y[bL98MsOBgySpί`VɇӞXr 3^#ݯy#ky b=`PcMk-2d&sפA͡+uQ>10١@$R#a[L.p|o*玜| >JqLOk_]rNv)L5uXt.ȶO\堩2е!e ̶pѿ&~K7J90"zgp^>9d_}n۵:+wkoi|PU&1NYhdpgr0)9Im( PZ Qq2( PV{ *`Lz1.fpeUpV9DZn4{$/;H1@$f[ԱClBl3mLElw۲zڠ0([@8F<ϲY8VJ(U?)ٯ"ŒoW'z/v`q5k.46,dY.gD2;2i'( ۞ qsDPMvNkhwbt^>8VosvUt=>90-bFrKf4A,|+Ķ>ϗx }N]S& !)Md5Tdsq΀|@7^aN9潍Ӱ54+~-9 /Kp0 .ulݓęB=Fsp vY o}t,A췱 [ V8 ൵e&l=N]"h)J,"NUMisH"fƓչa%#+90uv<>B"mLO<^`.7r%Fjdl\Ie!:N+NXF7>=??buYg/:YgAc|k^uzO~37z ƬP A9qj<X†ՓgGs/k\۾'G5vGsG,N_cK53*A+`)/_R8; T<F9Ih ZAmGT('Q/RVXiV ?mKa4.u:R߈o`1u/*G7ZsQS`թIk+*gۀ=jXۼP֌/ za^ ua1BArO6ENcɹk1@3@N$os7_#xt㮣Y\.eLbhT2SγuFtWhnXJ2\\7@PxNse-sFCPԽS?ƢtbX9 C:,Kj *!Os^lKH=-yPkBЌw(dRvAfhz;PtZ ίi\:@>¢rXa w3~PYF j*DWEȏeg@iE~gRR8PXѯ ^.;^'1jJk)tY 9*XȲ͇(֎0yPpx_~|}q謳^u^c6qC|Q[HHxNvX/4 =C@bg@fA@ea1N,f´s伟,ߜ8l nN,z%ĠC| `Fᓅ$vV;ZtF2sW@"xZ c1ZxA~^QyM,~e9LK(BS XSbx GrY_Q;! Hv CənvcA+b<[q"WRH(-B{ ,&WeuFyd#C("Hr.4Qsdk[k?Vț@VLR#"P(y|p%yQss>CP+suEHP$䙅6$-Ai%Hd`)%Z)viEkhb՚ZPJ]_h=83u5[3RL|ew҉@aA9L@/ep}=  @%`:֩õBh㏵\U4fh-1 ˚FֺpMBI ACO爐0+jj=_(Dמ IDATTe(XЦ5)<\qꡑˆ9lc>fΥYqܯbECZ.I9~V@A:o0^q\܃N @ȃO?lap.a3kdeLs[АWKn"Xt58!߸=\hk'/ z*`q;\:rxpҸęT_ P56:ԟKWP/u˶wk>яl\TxYgk<7*E2۪ssS'} ZrU.r4MJS5TV j|eփcJs6A`Z;:^:sb}[<~N¨ogT7L(ƿg4Sծh9] pu]z]; g.(k_;RYP"$ƧغwJK@we?'D98. M€ڧx6JY,6lp~]Y-ᤔ:HX(Ni[.*?ŶeAoZ|{MY0qlCCV\r{ϡn T~ù#;Oݬr|JGw nM.k ]__+r|Z6W[}J{\{?}z}X(~~hדּ:Yg1{n~w 5[8ʰkp.'9݊ k <㞝 8Q`B)eEUaN; a{c[G6G%Ƴ ^ҾìX884vjs jiiYtZ2-]3 lnipn7}>wgWgMS{p5~o?R,٫޸cpbk`_xM{75ro|ldדּ:Yg!{ꩧ裏7o_8p:v t`{lauDk^ӨQglS7u8ΨEsΜzpupƓjwu5.nv{C5{5.] 8H&٣_6bA̺sbJ5wr1bMZN)q?Lt>Ao^3eF5OP6/ RKڿ'ڹ5H攮ꏝe\0|R;YidpvQYL+_/g2|wܟ *Kf8^P-^:hx(ƨg#b#Ϩ`Cd0j1ij*bAk~t5+ p[ w*;8Et1{Tx>\U k(S2Ex!;Xdu( EQh9r f30|1mEmځuLr9-~o9Tk zP Kkd0΀π (Lᐃlk4^[?xifW(jF34 gG Yds~@o|FWN8(Eu/߬3rDoƥO1"zP㉽ _N jCOsa_@c{JG= <38;Ky4vHl_\sJu8@AfHQ_^=ggЩ-@} ܿ5ݥ_׸>-D=&p{fZ<_:B7&)K*7JuQpT;}.A챃հ,;tZo?{~+Ygu^#V%.{;OkD'<>`rX  ɭ}cGJ=R@逜N>0;Ren;TD=N9E 6p;5` 5u4;_*fޜZl4^8qϚc )7Q5Ϯp}L9/׻ EPj\9!WܚRP CcoO+xa}=9?dp,nN,r1[d>w>bڧfܺSzMHcvŏ}Y躇|cL= ¢VAeT(w 8puLhNճI^K`bpX+\=o ڮoy:OwX5 45+4LCʳۜ,53A^T͜o_hkίЍXnh=Cy( bzaX Dy8Z@]=5$k9ߟKs!@Km9K۪p?w E1ȸiM`/ s:{5_C-DWL(k14 S-nMq>.Yl@TT*g/[X ~#\ty*lAYg:{ا>)|w}>!!C#goY`3|O@@nc t脞8`G[%t|3[knf'翳&?|h|ĂaǷN.;}#s&-Y*O-^:4- E?g8+ty* t݃Mum0,,cyfz -JR%@߃]>޽ ^"N67ԦV"]5B.6-8+1(ܥL$ H5z^'1c=E s@jTr.ƪN \3 mVV@X^>vpm|>.7b R Z`JI3B%quhM]>T,9X2m9 ;ϏJW zoj3vM%n˩SرsѸ=JͩPkJ8< ЊnM+R-xOZ~%uREE Vd #QWO,psJX!5@`pqturzN_ Kr9ԴUTko* MW, S/krDSg#?__Eguʬu0x׹KޛZ]aͯl6͡IZɖ,# ɟ? 
D8 d;I AHPq,GE&$qjNM6dwWo9{zMtWϞډ341)E|N0`^t:dۃN "+T(G!Y %n5Z#5+IQkNiK2PI''E.'8!/@BRU9YLҭ+&,D5gTimyTyl\T֫:;17m xhRP:H5gVj>@j\g9d|iL?bsVi}ګ~\,뢔}k{N^'pa2L LDRnh*]@reClڽTJ sN5{G/}m@dR;é;>ch`XLRfeꟻ /l6QV\(4v^ҳmQQUT+'')Qߝ6nKߚDˏ f]2;/x0MNs{yvݳJ cp,hZG S5fܑd[l8l鹱V9i,ϟ |#_Zֲ=Yρxgoy{rH["jV I IDATNYᩑ z@[BtWNI蚶'jIg[yNP*;`m觲%g2=مLu+dS+R4&l&>=}2hri)0L9ؔA0]Gdo/+@.PsVIJ\FS?Mj+;Y)nM-8O~F0j{,R}+i0%A'@OLy8W\ڰ3v9Ƌxg/xOb^%KY-poM"!e7ݹ&%}+JKb=~l<ƚcH`)CØ?z`k߃ iY,j6I*GZ;4Vqu,*nx_yFn.xߜym׻o dO_pteWSo4mQ{E!3lj<TB͸2Mm"@&=?r~>2@# ǚCk)*~Ш0l hf<[7dM @VnY[ `L 1o+`hf!GfƂ$.ƚ Zvo97Pk:7ӃHKuƏ峲B64?,> ?xOc-kYw/k9__[m@WkHZa;%E(<(^Mɞ% Tz2Qwj,ɫCAj[Qdz.%o>iCäTl'п/9YGRy9R,@ ׅ "摞.-**B:'LU?IQWUT̴9mr{-YQxvl$%=XQϨ>ƚ)ls k_cU:G|xEJ7#Brb.Ho5:R)VHJ+Kv 1ԟ^' wO2{z!eҸ%R">K1hSIWR)CZZ!S>3#Ury$Ӄ<7DGC]2=@~_6`GԤXJ㼠h*pN5%qf@ ,%3fXy)t'MKx_?qgWUsߝ5xnqJ$b@Z`b{h=fϼlHTuAItj#IR5MsB"*]2+m"HhOp%T 0/*=X,[׿\ˮ^ |@Olm=l/@wlV=$WqJc8,wsijפ^CYƕGpSMlG~7V8}ؿ3.ە5@_Zs9::¹s~|B˶Qv>#\d5"R c2ޓvfYByOJ* *T樘t2c189Tʾ+KEqLӢk2RH;$d1/xU| d@Yܓf8\ 18UIc'YZ'xfY\.? =hteY U}OC'k\59ֲ|wkY}.G/ޱgO`RRĘSBCŋr$v!+2@V<@ =_tz* Mvim+ ^9HtXKY|1EJ&AmE"{qꍚ3Y+L4|l"&T5Dm'"Jy9e?A;b'̀oWxrj¸C;ѝxbiG@+ KGH|3 ^8KGM3^!ֳh@"y0!!E{<ϼNn͊0:2ij7cwMu[[JO \\-3Y)~L(#W!Nn(rFYAgIxT3G}\ѕλJod8V9E`L4W@%_14n{OwS/դ>}=5Ā@~y|qdfy@/~uil8D[\Nhj_Nx{F9u`ym{m}5m﷤ig1hFwG}kYZ;)"kYZMg>l #BfaIn)L9OJrzgRA ,cֽAeh{=l{:MISl4}]LQ^\ِyzC|` o}&u ́͟S%Ip87(RdwOz}ay .9cF//=wTHA qz 9wi&Dz PVhڞJkxLG!Ҽ`"'&ks f\ t/5,QҸ{PG:^V9CJ151p`00Dƾ 'w²^.eޥ|W-.M<[ݥp{ 9$eOLm?! dǜdux)cn9si3}ݯ|ps ~,oU|uei#|/ C:=ǩ&$8|'vKƶ%h0G=wz}rO{jC4D&t1l!9Yma R=K 1`*KA|3kke-|s(.3a! +| Xyt;N>d5>+T)E;L_U{yk]oT^*I| {*IKAC(=lv'Lxyzy:y**4+*m==.= '^nvFż[Y07 [p01Z\vH=+dP>ci6P~+zmdN5@/ 8Gi#ݐt&pk{y]|py;ۚ}\Yp2cd VxCRc E|m#3?{vFN|{NFwXm.N: z41zz/cF@KwF#RF:e[yt_ϒݤSSڋO>(^f=G |[Zge-|tR6%`wPsʿ٧0%R S-xѦ"Rf `%RVȱKGt܉`lJzϳ#5+@61AsY`dׁK9쇮b:ͮKo(F9 Ny/!UL) ?I kcv-%S_xh?sel7+1{DV^qR|Y0ǝK ?ǰHe9)P׫n" fuQpc=9ZLw/Ҏo*h{9ZzC@2099#lDdad-\|6 {37JE`U}]@i+t6x!CvgL׏ϩRE38=Üe ?bn~&d-iG^r4{= ~g9w8~_zUi8ì R.'{c?Zֲ^}-k/}UihAS20;GYO)aD]KJ i|3X D ā|a4# Z"2W3H^`쫋;D;{ 9:zMJebb#*)dl.HegK}R G܍wt U3NRMiL/]M}ӃOؕ6 z0l*7@RցP{K佉e( -.6wdt,`pe=yc75k\ LdXz6;rr,C[?<ÚpLqy_FMc)?Oᜡa]Kf_+W3nr\ g!Z];C6О?Ù݇TlX5/v6Jsi, y 3X80i3MMX>/`Wp||me-ke ײXy%G a '/쌮1%d;){ $*Y)l>SgFV@7xfES&I$LbV];}+${Qڔ.hV {_$D5+Zs#Z_]Rk@hxtob;9^M|nLYGʧXTSrFO]7uNy㟶K(஭Q^,ҵ eTɩAz%29 gPo@y]lW9N NsY 8xUy~hm_\{=mJe ~ϘZ^EO>~8,  5MqpǙls1 dBǐ  qk'Od}>붗0iP `mL{Cjddv E^ >1_Wޓ/~)U5b+J9>"{&֍O\~+lpmcg7uHX뾲X{@;~̿b<~>$@Ìi~lCsdC_x#?#XZe-|ǰ:YœVCB0H0 x!gs`sGV)K=ojPC%+J+竏V4 *{hf:雍rB wߓ{ QI*E3EfϺ b42GX3hTdž[mLPEτ2zhFy= <}E1s=wBVv~JQ%#DPF@Z%ISP=`@y1C o {ct h"ߍÌ9d` {j#%u}Ki{ K= +O:&a".c94Qtf* Wy!}E*x7՜{Tv޿g\x }{!\HŹ^i> dKNrx,ak!fc\k#O5{;D;Z0ӘۛgV2e-ߥZr˕+WnrEPT;QO*R^M@XM1!+ Ȱ 5p1@ǛVv\?u1t4V+^JtHm;IF9q곻1L!㠼Ͼ## z2|B&, ǭ`L byq.&͖d1J~{\d(Tp=],E m`#Ȋ7t})\(6@K3`{|:G9[ۆ{˜EfugG$kO;+g[>8pn>=(5'mH[R׹V7PƐ\CW;ik4cǖ1ѶR]Ue·H41Ǟu3μ棵_ K3ܼo7 oL^M0;KkN!b?42H!y'ftl#6S]cIC[yB-cHς}VuVHyXyPwdo1}7OϹ0t/ ijv.|rslloZ (v^!f ?ogU3Hg97p+n y%ԏ'15CIl2 ʕ+XZe-\v wcg0%$iG1'jk(I1Ӄ h[ phDͮMAz2pBcީ ߨG x0ǖ1nI`iQ7\$ecC2 SZ*j-]ͱ>P_Oɵ>?&b&<̀ FCc8={dp'Uti;McƛC*u0n$d|)YWY1ٖdeۉZ[xT*G3Xsf;oսs06S]H'y^o)Ti &I{ӓE!0! 
0Jw) {{ +מlSO~ٸ Rݳ7>1V0ƝK`Qih(TH6@7i;zD!}A _=>1rNx;+c!ȭsRpLsPa{`y%vճl38"m`,7$q ܕy-[Ocfc3" i>n\(pk<77 Jp(Kd3 o XÏT7">Pi_ӕMz IDAT+o=ːc_=P|7aT"YZmk Lٿ+iO~=k1V8niH;pAF$FKc=Hg ׮]ZֲNDֲOcQcW͞O g`MR?3L 1 i/l Ce,rb61XJ^{(wMtff.)Ul'5w@~WrI<zɆߛ/C-_; K87#YQPЦt{Fm#)/K>zV'=߷yƉ6oY,?'@ bH )xfi ϸ^+A6ܞ zl?\>oȵe-J>kW_s${Ѩ$J*ߥ`@ T`88HD+z3i{S&(h%½YSr]J& ;!>J{)٬wzSDќ/g%$bC77zp U;bsn6Z2ļWg&Sz4w$j|?8hvK}VɶQGS^ڴ}֝d]dC9ac ضY@Q<`s_w9e8|ys| 2@c;EcU؞rN|5y9n&& }ؾӇѭ0?g0^Wjl [\5` ?BlVi'Q:Tф٠fssLy Nt]zݛxyd'P[%m:=^:Q7"o_}-ⱑRFLX=cK0F^4{`țuvDڰT 2Shl8HW_ [Zd ײX޸~UL\ 0)=&D36"['/_X&ƻGI9i` SuIVzT4Ǥ8GDDˣs%}џ*h+،[gÕ>g6hru&ؘУR2^OԬ2L揕R+2@rsyXzH]i$Y$'I<(ԥ h{KFp>J ߕ?c=ăsR!݇Oqg'6[.o]+}^9Jsb4>fo>(:עh d?#'pI]4qq tAnӞ8CO Gg)g4SүbL\&0TR3T)rk%{j' Cfaf~$hz\}x@?{p)5#1q?`rUN4K f|f?MԀQyy3~9>;d܊֧_o[*ujR1߽1*?P/*z!YNXx^4% a}?[4|6 I' Jc0\"`[4V?}/kYZ;Y>d .zl[@ _㙽:#_^=PjJrrMn.`Z⟥$:=})IJO&%bR5=I 9]Myv7= 22X 8v')-٫N0$`84' }d=4NAҚ 6S^ rMFƋ.,~dB㤕Nӫ7l9;kA&gB2̀S)z77ɹd.8x0H?握 ML3pkLr3 rߟ"WCfS.R9A5S\BO㡽zRukH:.CYM$ ec>,%p=kFKsG7s h'"`KеJ jm?ic"9Oqsd`ct\Z34~Y=I#W䤟ݘ 5sO{-i@#p C;C>q' 툍b]i=Zª)?RйN-pa$FG/|-n^l<=ɆQ2wXF-Iy?R4`h>2z7l(.b:b<c-kYw&kܧrM;c0FA#l@"X3=} +׊=0Ks>X*@IMP2eJ)VnNIRrr|v=52?YddmO{흥)"D0E`>TmK#z( SH;dqFɒJcF |U8-R3x>HuZRݑ[xJLltS/ou#Jw*A~b&W=z R2Xo0 Sש}ޒAPߞf= * FמuzKG1Ξi0 . K9aOSHgzkW qffDfwn2|mHz=u{nh3շƍ Æ?Q3L"% ֡_^(,!1 mH.#7>|@@`?60NwplACkVO2}4q4JϪE>e9ĉO+5=; }= tĢ;r>@VC Ы5\HY)}R7n'8{]e-kySY'[ZSy3?G!FevE|k1K6e#@GB%[o6  8Za]Y+=-fu9Y > >y0$Hю%8C[h?#؇>Y"' {F8cfʽg?zSH!}J@8ކvF݇? EځS[^r'c?3QֶNeJ~N#:կU ͵χg xޛOBnCg̋ǩ녥;Ȁop׆ϰ4Adc.yg{iQ̯WP#/ Wi̢m? I9H;Kq&w~$p v+L#24v&pP+:$$;w#Pn d3G RGd#4'{԰Yh1 w"DaF cN<2Kb!O5x%bqYP܌8d8_Q\}2-p/TDK5ݤOxk^|Eؽ<{K{}oi?YX}<] ΁_(Q\8`w.P=T˂)`(pk=/+6"YbXDƦ)_zcfkb!8 6w/QN5+kb$Q#x8B cIb`Y >g r +LQ\Rt^YzCff,CC-.UzNo&, 1V4FF$&M@ڏ[w\R<`|1^v?qt7+Pɽh}_C(rNU!ƫsLrh^*quw@o8{c[H{bρRl(xam~K@/33/PB0y8`u^ >P/=`.ٷ /9R ŀx5u[ U0>rZm +l>q:W<T o?W|ry|Q^8 ^a[<O xp[ї)È/DD߫tֲ|ge-QkY՝/"ݞJCopqp/8K0.zC1zo+ vޞM1^;j1+X<J%'RyNI0hTQҍ hˡ ؎vS}{پ)۰m`͓p=}_Ze#U*T?9v*-o:ڭghT H3W ~9+S`Pﱝ`JԔIz#2%Ѥ?^=yGݤ_4ʼ)qANٮ4 HA'ʗ%>6)ɌifhE~{AFS<;9# S? !2?*p<ś}i؆`\@/wXc czӽ-9y:MiљH_\͵h};2tM=wż6^^+cGE^{^Ta?kxg'|΅|]AåJv$n ?ٹm <  >ll=AnĝЛG|iA'.+:T譀wN0b(*l?5W5vW<^a\ \+K\[)>~P {2 %K{'xt N?`Y 8{PQSXTwuysѕەu Zrʗ%]B xq ,kEEY lJ%"u孀;ౝ'v lxp;`# bpJ(LKV+,ZMeLZ)S>8UEj6-xKskLQg $0.h{ ͤx%^Rf be  <.QEe {Uu5u ) PɑYJ 6{ho$-ln S8V;·R @;\ Ѡ<.u߃|9SEGLNE|0!lycvjf5kqQM[꠳r p>e?4%}G*K:K't-Y/8tEZJ AieH=۶za!йB/JFY! 
@@zeBo3K{@zt[Pp\dtg&}Y uqj*XvBx:K}iV2m:24lK~!Cߩf"t el:f?L*Ag$9]w> :vhm}Ju8܌Jm=_u{Uz Zdf:jF(@Lo,*ẂjK] N\QK_}Ye-kdA_ZS)K[|ӈ*BigN~sT|Ə>oTo+a_|3ڇ7??%>8*pu(jROy d=AxR *pUF_ "yDOYUMQx.|x\L*Ņ॑WO>-">lۿqA,8wWr[O6#~.i#eѱb |>xߥ|q"bQ[5޺_OVK O~1.ӕڑpvӈ b'8+J˽tQHVl ̢bPXlે5TLXC5SG1, TH=@s w=8I.zI:V(6 p d[= i J:AvLwҦUmwoG.ގ;IF;{ܲzEEʼ)݋fG4`PX: m0pAJqTj#:+`F~:oܱ1:bE6;Kv㮻ﻟ5G5S/AwxC:[bٷ&nCTE~g \hRS 9xlK_l;_ xuŋ %9UL \qcKW]ퟪx +s%p,+n\g OG;x]Gǎa=p牀/'g IDATG]br y,SMNV@y$@iOabi_ QϪ`}8d XЯ6EܜFj`Nf9듈`(p03> 1?(wVjyN$9wYn ī5Ώ #wjxb/ r(QX젷};!n_B= v!m1=`8l&)كV,g\ygu4ETSk5@j gk홮2Hv?$g(H'6`%Ml0`oJoLK SK#%nL")0*v+ʌ/7+C;sSŲ/(Ot8VT#|=рMl[ЛK0) +瀷/f DzxR\DgAp{"W ]C};_0e ^FPihƫC=>ͱ@ UA&ѳ5)׊y 3K3p"TJȨvwBqnPF#2tixYhvx5fRNC٫hAG35@p9l=3ng_Xf[V2w~)~ٳ3[+Pܙ$Za/b(QiDUT5b9f;@j9TF^l@Ybm~@ճz~Ҽ!YC4ʺ P.2V?6y~ w1L0;5@_Zc]#؃S6*{8<:VCb]`؏@P@58\Ϝ7=K5 ;B?2xVw_4c;fSESt0xߥV>;>gߨ O񃏖x(bTe?!5?oo>=?zR |;/x0' SN**+Ű4p9L "HsM*\8yeeetcW ,Qc %pN22\0lFQECF@L!Xz) ͈OEˀ)OK6X%**9زs*})(o*7ǰvI Fņ/Ft }TEHf(A@aHf=kI,`YXTnP° ؎K0rHn*z4 }''\ƶN`뺎 WxsCmb_kEϑ\kM3e3#ķ_$?ð {?,sг+" w2h ڀ%d "u<}?O }Y#?%%0.{#ō'v <6PD@r7zh f8Hs`- 3$,jm}x;e6TӚؘT {,>pwf`sd^?vj{U 85yOTVY `^)ԟtܞF<]IK'K`gh]lvT~1q#}yk{~Y]k@YNi͐ej~LNʜw*mٙ9(lC%,bbb:a}8x4K4LFgʶpdzC8J!~= xRWW1x#K{1|/>㯼ys#{ψ\>wy__H_xn߽7+o^/xEYW?;wxW_P]CV_č7w=?7Kϝ <%0qpܩY}_ԛwc@nmd=_m\: %8Y xTAC.^CV4xa퍬Ud۽v9rs(la+kV>q=MgG5s!z7{%W|7n/c@XmZx0޺Wq7_+@|}YAt͑0g7uy 3>t ,L@6=0,?[sѷX [Nt`Ay蓿}Sx|}/>᧦nf6f{۟{۟ |# =Ð9Es^tNc!įq";lc HLI2"!Ghՠ5׃B֍eT!b=ζ!ء1ó \~zgB #,On`u5gpᾇ?=+ױYiFfD>⽸s<+=ϽW?HhOaq }Ghc' pvMϟ'|ne_+V%B G 4#cU}-,I ;h{j p&,6-FN{7>|O=s=p_<ƥ۾Icw!\瞽W?{ .o lxXmuU`X[ Jo`.I^y%9h Ⱦȿ-{b+=vY@Jc3lf>l`N"ƿzy#?$ݭ$wq[Vw+A=Dwx~r~hO! ^B0og]X9~̷7z^;mpr0l";i?.b89qs#㠋Gq@w 4-n6-xv<7$Fmxr#t.3Lv`T~[c ;E>,Ql):s` Lϙ vzVm1@\n-j$!gD! !) @:g!e*0\q,O9yxC3}D֫u~&l`O}9:Cp[Kl󸐇,A%\zлaf°:Fwxau _}9Pcp"p}^|0ȧff.lWrU}Ƕщk';ml÷@W:i쾱tf5rem(FHqV%z@&:?µ=vIWCTr~x=wKf 󤭴| ~W#q׳ɭ'& ЗKޣga9ѶZᘋKl5_=j>q==ePP術y4a^?⿶ڕ\v8?Z ukloF4>;A Q➊iI``? s."}9ceV֒@ձCr):[0%=zoqQs Y)d>G`^ci{by,ڥiE~DVV`q?0 ̀8?iFfH C4 %B̃sN^Kag}9φ9!wOlo \0j/p> "S%tk=6@H^櫃v+*$[@p1V6 RujQ?qϠd&{s,y}C A@l;v98ykiC ^3Ф|o־ٚ9egrww+qGzw~vlvG7(mW=#om!fG!:ȩwGeƅq(:16nq{aȾ!8<{:M|+~->.fl.+@U)3.1$('_yyKM%l!xtv# xDŽ 0 D4Ag?%HiXܖ鑲*o?"G`qh̪`=n #m1pP$nKz]/\8\Ij۠k"AGLh{oN@y IDAT{)\}DtA(dx 6|\b`)6Sl5@߰%vua\BX]kxzeq)m8j6L?o?͉\_ DG_ޖv+\k7p[lGHet'rF+QE&\ΚchG.ʽGE wKQ(ZO ar6r "ȏb |_! hz\ۆę5Aͽ=f%q'^2g(q&L@ fu+h~@27\܃CIJ裀_28=Flo#kĨsx{C#9{@JݲG 9Og>Xu-1D:G ݢ_qs߱12N+nwA9[>=zM5A.BEgfP͹g%)Sy%3}s"c5籝NЙF>@H^ g{j9gfYP[gxG0l4f{~mok{j-QжmL$Hy4#"Kܘ 2 BΥ#k}ǥ<|{"N9Yqa!Fs)-x4tgY9: `[qb-Lbp[V١s*38qd\g˘73+5\#o Aڰ,oV۵$M H)rg6'&O4 y]_-z|2btiTɸ y-y7^5t/YV7_< ؎sn87Z^?lp2Qm7yQƅQ -m~VXS^sM'>8džRW+Upm4ЗaΫxLYZB*H15}ZlԦ*du5I3i9w y ۸1-kά_cn<_ְVr-IFM5sΫ+2&yNu:BpA$FrzWמBb5 eg0B#ʊ+B\x$2 :1`,LO3?.i |vpZR]M^r\; XbPUhM\1mu~?6ۤ}^v \z'_. 6+u :InFl`XIJ_*-I"k?v1kzq+]0dς"_ ]2luLLPsTd&3PdLyӫq*'J#&c^y :|&7>{Y(B8W~'sEvZjɬo<,Kr:)t~Dya =躽[&=mW vvxH+%(޻Ym:d ݙ!3WͶlx_%䀳+齙X.Lَ`ʞ*doP0qOU2 쌯$>OR#]ؖiBVnkH6fe1\;n$Y3dzkY4ܱj-gDJ1B>)]CI9Ю,b0A,J:q oSҘ*o^C-z:n.u>*R~r6gK9nsN=Ǐԛ^V`"4k2V5OBc9=EteF B =|nܸ˗/ޣ6m /}qQ3_@#$iVFn2^I2t6Զ헥3!tmaǁ_h}[2:ϴ>&񠑙l[5cgd_dD_s,֘w`WRe7'6'"iR NCP_YVuv#ѭ0ɠmO z}ԹVW~`ή77AO -eYڙ'NkX);vvQ*۷&9j@y96:7c*')C7;=j׏Ms^&Uwx@}9h#zĀ!Vcޯ~~>~9 گ3mt@z0Qg2d Eol~؀M.~Üb:1K Ge2tP>[m/ۘƩ"G:th\yNǐKLJjp`WIH-ۖXaskٔh1޲[:ֶw87s` MTjvCîß(Pu@" :YPmoMˠ ^kWv^߮{V5Bi'+I:w\}j<Λg<>7st$1Ondޏ| ʩ 卹mƤuݑJ)r<Ʈ\ =4bEbA+|sT|rQڲx\neڦgqJu\ #2\ Î]4K5Ə'rUVY?%x>5 4ucf{6f{[?@߉Hs1ˋGͣu: . 
R8 e痠qa[լx*Ete9L- k!'tϊ҉uR*K%Cl=m(wHeL)p|l˖Xt\7/:S}x`P߫>tŔnTXe(̂D ´ת y,bm5G FϋVb>7:7F'C+RPAt\wv| 9]z^0{f/&3X3SAt P[)Txbmy)~<#O l=d0ɿy-0QWsM2S)(|LT׉G9Ɖ#X=.8d^9Ѣ9.65lJkKԯo̳fZ>91A)THgWfaHL0mqW?hs3!/# |o-oy fmn3@mWoE{ |Ih  ;Nh,hS'sw:6X1+~DFωY]g.͓"a"a'H'H,C)o {<7Ud #|>a)2S  !)HFT ygW}ASV{ 9`qH5?'f-8& {ftj7-U Gb!Ӟ {{GERtCg;M@v{,k*7jJ.\gj'9E $Q}"w#t1b x42P[hirχ?}m~U=>-;Sפzm[`9!giM ٹ-@Cgb77ljg9eۃ|۷o}[1ltsgn7n݇e_ gN;L ˂SJͶi}9 aF kjw}mzEsV?pE? *0gωhBք2A|nȤn :ҙA;1 fNLY$2xYk ==hyꀙߍtxr|nl[+6_m`iw^@yleku]wo35،D * Ze[yM~~2xLPϵPcL_Eʒtk=Nf|1_76=3߆KY?ȠJчC4F5k9~l`@%mA+>B䢒3RXuE*=Hl%086?ڝ9~gߕ㱯>CO5>/b s \f$:!K2(n3P'd]5pUQ^?7TmSV ݯzJ@fǡ*TQXSO y vQC޽cü|aSޮ5Ry\`[lJs-XL1Sp''yY<>cIs3h-476i3@mW8]Qq!>à۶[ʙ bY?=uò@:;//nb0<зH'u긦*%ۘ$ЃwVqcdbV3D'l6?$߅tﱁ^ c#Ti>;lm(P(^Z|q4E`H!GhrKB^v` L܅\m]Kvܪb -n#1,fn<`8/AМ7uNyƔ }yW^1/)}DUa@XE Ě>O x#F34'67*Te]+6@32PfaYqTňj뛪[F+:)a JdmS38(E SZIJ-@a U/m9t[eP:`tȷ'y  ?Eh$?;6҇Zj{G7&9j]ivJy) t3Ŏ8j\(k PZ[_`G 0e~L#Ҟܦhۍ|O[qF娔;"WmfƇ.Y_A'ܰ@kE>6`HyL9[ 0^f1;5)LJc]T%X幵")WĪq!Mozyr6۫bxOn,㛮("fny J}0=9ɹ?2Vq\܍9j2+"^<^$s7q-QV.0^D)7a2o)]uȲ9vI#BU:FfsУ9mC~&ke> 2Wԁ9y;MNHr$Ĺ5 *Kχs;R0=g63+l{͊ೕ&d|Pۼ]$𢡊6>я\|gmI%*0Ob(׋$G~\Q6Cy8(KKzV'{M/ݭEb_Hy" kϲp6ȨUy-->99x`姖ѱ\(h)>G~9ɀl6㪣s"6N}ٹ>Htv\C" !k?Wm`[vqs苂Vj n;ss ?P@.f2uʞ_nHd_5r:뀜OK)^YQn VXZ=lfsLyF^=䔉vNKm`FԼYZHx-G a8s9Va&#hQH63vH^gbNnmS!jƗ0Ԙ `㲏=Gy'@ms pgb,Z}7`=GS7$=ףq|w{f۱6۫ǁ/_ fLjfŭ΀hOÖ.1 > >"NkAebWݳ9! y Mv8қ\e| eY HD K|e 3 SmU~ʴɊ v}9c.`RK99tK"Wg/Zy?w2u|I n@N3q%NL:8VmC@@64`@ t= I6g۠(+5cXfL9;QR@"0ۢF:eF㜫-u |1 I?`sϱ+6'ys:۽+\`1 )mХZ)p._dʇ|{omWYoG Ls;Fk8}&1mْf{Xxµ+C{ԥi=e X9tz\6|܂Tvk,OLT3p%e Q-qNe3=ˊsF=d>Sq&cuMA.,!y"j&W s)d wh`mcEJYfMU(6a h zOm6I5xǹ9Wp--LThs?y/ՙX IDAT"8'H_9zo|.ҩxMᬓ6Xa>g`Ze](\^ɊFv4mG4e#*nҊ`a0VTG9d~EͶY~ǭ}rz`k ~]W/6]̠6۫ĺûW70nmf6GNUN8y`oK-i ]A7l$q΃GClN(2+2(@wHY4b y0`Qej"+~(@n땍h7yR:7n۪28|MmٖT]]<㚱sQnE>5=y#8uQN0sm6Yer,lx ^Tiۜ*R4@Eҁn w۬Ю[h}n)FY"*k}-/ĝծy^ˆ7U"r4 LMQ(_lK __k@}dH!(յA!B8 jT`]p;{!bޯGR`1+)5Po;z˝0KD4A@{?焝Wi̓ u=4XgH PUrۭX1v7zLkU,::: kjtMʜZSs R@|Lym*L +t3yR@ ^(s]8Csw]1f{lfgUdO=7/C~84+[-=jCm,Zi+`p#!"sΟ"g,bW9sE{0S{N`؟*jk\8T cAt#=9Δ_'024jQZf ;TwꚄQ9uqzAۑAK{wۇ⻛껍T~3hI0\b ҇\Egsc'_[P^IΪdmm鶌清kb{M2Grfz찺;IgyTZS}Oz@Q(1.7Z3%7mҮiL{ji֢=E\.+^'W1Ru}`E͊OٺpmV˺i,LRokrgz)<fm}^eyo<pGYd0GJU}('nPPޠ ﴧt_b:ImkGWrWIosm"۰h06ϪN-}"0U: {aOf걬,'&䙏 3_d@6+eh" "9nkg~Low#a~DZ[M\OϹn%"|rʑTk@u׌4v4YM[ޫuN3#K=iK}M<0$u|ة0yO;.اN'~A)pnNev-۟-ϕϫ5V1th>c_~W~yz\fmY>l2~oc9UtTJv"JY`p1"" $']b&`L,زLbPFAYfidp 2IVkچ.9R:9h2G,isp%垶iYi$)m )řLaG;mcGmANy@;E`e۱f15:ŅܗL\QɀƩr$sM" U 0<M o,B_u9tcP`,'@YMstmGy|U$, "RmDDJ|̴  R I9rtC|}8ӫlpRK|۴z柙r'׊y`n] nÛ n2[p^f$v>>g 6ձjU]DqoH 5wf%̠6۫6 |\g ``tA@Q~G)e2h] D+L/&2b ,IGPup `d*]wG9>(Ջm+3'v%:9*rB 4cp"W^+P-䔪SY"keB}06ZT*ioW^;@#j{xLJ9/&Lѹ IȺVσ2| =oT%Үu>2̿G's=oK] [)`NR|F4.3 iE4f]qe: 8gtaj&1;?y,'K&uj'^T?.3Sѷ6/~'|rK{z|n\Anni<^`լSꖩo;LA ϓT@Uǵ ,lQ5B}7֬Ֆax&? z>3]s4ϟXE{smp7g>b1աf%̠6̖۫%?-=xis2tj;xWiǀzu-PavWN-sۇMu= PG:=b6Z/9ȎɠVWra42)ϗ@ycfzόP(ZݣœH_rbcO[,:4Y܇v0[F*[7Y#cC_%'ޛv9?m2T3=qܲ)'ҀsїR9azt+y`.ɉ^dp*`t.;̩0ߩ*~ ^%P:g$q7Sp߃7mߏ{Az/\t ` NslK v #X࿮ڸ\ 8g{8_IR j7ȅ90eapk7#4yO LW3a q-kF6nlJyPZ(`Q&! 
檀  HJk LGʏ84_w{fp>l/ l xO`4?뽠̾m|XcT'[Xv319<6^EOlW]6YxG_ aXlMCiAn>37w1#iwMa7G@ 8)u3 ȓ,~I@fpY0e\VJv+}%BL~^LGKÝ=πȝ}M`y׿a6kx8R PR3WiXւ((LXY6GZ ZMϏi "F՜G^s3^3!O2IfopRmX6/km}_ϜkӣX7x9,g=`)(dT)oJQ ObG[e4L+Ou3 x c5nxQbۘ>OgȴiKg<~9G>{챻lv l ûƸ1혽IxMl0\P_rР7=NQ|,940@0:F)b!:hFF %p}I198~$fN1VPh=`Ӽ*'E3SgڒOBDbv)w~nm*u.DwJ@1WFjOq_Ӷ B`ڦ|N4O]n3(Ɯ} .Y^ω9tS :O| DIn ̦C_M>ӎzv\9ŪieVT=xg\Osm1<3 v'w*wOޣU4nk&k qqN֛k xfܣ <nSV\}}T 6<&Jy:橿-s71U:r7ƭмYoR!Q,tA3HUCJ=2Am@uR#LC Y3l}lfgUjO?4 xAxs]J|s L2@+skgn0,8ٍ$}"8ۮ3p#(!N9QaLJh[Hf 78|r{fg3>lR{Gw =w#qo VJ̋ =;,FəD[',,HU?߷ R$m/ ,󼹄lΉ-q9[-oOWqe]P@V  FM.ԭB)Hd2?Jc 0Q *d'fTQ)c=?Y'߲H~Ba `bT)olr߶'„@>ͱ Jz3 UP.[OAߓ_ hk^]AK9l[ X.̙kF8{HLCpP25~#>*T6k(X;_+h!FN?Csm 'n/4}3>.-VKs{9i-}I\8 LY׋ؔt@ʛck1 ztTYkFS4!l`!/6x9^ Q^|m?w|پ@63*gyox{V rlx0k9Őhl58|gLU1H@"6:S-d*Z ޴++K) nň-)g 28|m fZAL !`A8n9vW p.Jׁ\ŝ&cIFʪ X/(AR5ܩ\K:>;ԉY`9.¢0ijf۱6۫oE".KQ@8 X:Agz _@%VJ]Õ NAΕ{+RpCe'8;r#:XaDW9XQ2k0ZkCV&vۜ۩|lcJo:uƹ!}inmoeElH?v.ش[)RtCX]FJ"36qT\?r@/OYz^;Jww3|ƻ[TJkwg[(/rgkZ֞Cl^Tw Чl[}s>qYw|s:kszk}V3AgA?H-]VX1]\34o{OO^\fm6۫>O oxC&V:8%4snb:ΥCXt @pȹv [؏Nnh!¼ ۣ[DX62 6 7 #~)n&`qoȠ65^^ܘN5l_`f"j/7ӟ| \xqqwYq96ϙY}Jn?8h(EH}bvo9G۾mtI<#)& 7?Cg*-rUɬyj Xo_̿QKt+Y:?R.4p/yZ2?͠85eSLw-i;YbTmL?ضĀ ƾ𼦊٢SN BXR@ !}=ϛ>zIp.wޠ0k/k1ijiP+w&` (*rn;vf zQ8!L+K׹ua=J@}z-Lx265,}˞4N.c^2N`^$`Yt^}S$s*ŞۋmfЛ5_ /#6Ik ϑm](}Mq)ʫtnp{*E`Fjv2ۍ<>P'kvwYodf{lEd< /ðK8Xm:dg0}#-]VlM 駭+L=}mj1/]׼YHLtkLW^G3M۽H8r?ha9v S"`v빢ufN[e 9-p\ f@攸]4< YQC0,e˴@FWVtvkYnAݘһ-`6 s= @yĔbK6Sclpk/r^? Pd!);ɽׁ:%cqnetXОrc5s?1;i3B40N*sA9QZ۷yuIT~O؛Z@5}h6Hs'r漖3?59P0,/S/y[fcY>l_D׿? B&JM &k|R-+"ѣi+[-q[AsKeYRګ܃ކuE.bhz((A|0*B(ն,8T Eg58A.1(۲?.*eEX׹\hx<隯EE!fp>l63Efo}pFʯKr}q 3+9A/7nt-\C F=2ǭXUiMlG̰-s2cV.{15vmHg>S8tb&5<wsdƇLﳴEX97.XN}gMکԮ<6>tA=$|P` [$ޝՂ)Odw6&߯[[el8)G8Tr AVL.2i_[q?zs{to{s!K܋*G*}g"L:h hS? Ɲ\ӊY~H);>N{}'nV^,ߡm 5R `'gpe6G`f%5|uTRxbB]JJY]ǀ:AqΑ9g$>"o{yntQ~m??6f1)ˀ%h7~#mweG ~ߎCjoqLTgytsVVNݏR] ec>x6Kx}͏Mq8nX7lyBqV<$A w`A{ k`^ڧ<]>߲=+GӶnǼ>9HeqP Ȋ8[5ml~G-BKڗm((ڇ1+v̧%٦ue L.i`'>+wa2;?w~hS:B-8z/9 2z2tvLWAůb#9#}s@[4`Pk?@Rϔv[xg {|A(@Nv-S7eY֞u$V;V`]_s- ӺDv zæ^չԮ` ?'E0fm;Ud.U^|ؚc#Vۛnbע-'/Z T[(vIm$PZi}{L_f f3@m/R1[?_ť'5 @Eژ]/.U+nU0eY"j.v@\Q*96\]ۦl;Lp9ؘ>ŨM)mAu F+n.8 $=si1vYxݶ rrw9̸ٱjǷ^ vY[ ݎÝA:s VW5SG"1VvY5P&@$I - Bqwlu Ɋ܁,68` >d?>g Y𗣭U,oF/pbf6=_p~H쪕`NΟqwۖNk y\db 4]AǕOd;Ӧ~ߟFD KPg\޻ۖTs9G7ݼB M a4/`ccb@$((`B E1 QC6"`^?h.tkYQ7j}笵YUsׯQIWu%*6Q-v᜜QZ !l /U7I'޾~q5qB΁es%un ||"I[{ e*3)\DçF[c5791.-ɹǙm94 )%>G1]"V_>6*!3n%w҂<)Vp0bx>:G<."\,\_o1u>tGWsb.뱝n\okh77\r*tIoapTqCC-Fq ?JKul v"ʓas-w4IOkSb#کjݛmJy"#+;Emω9O-Q;LZV pȉ90q[kyR,'lI'jlk,{8ѵ8Z|Y#HFj-S'͟);2c 544Rhi>{/%,?TZt":&Z+)ӔJrk.5^3j$'G>6'M mVO /4z+hUŚ[3bwGBm)ҿViO"~A_ +"L3s!.0־j'b}XЧ8{½TvnגK&кhWn\SaZ/i]?. GHBp"G[g9/U{{IWJ }p!ƪLQRY] O z+x"q8tBp{%HNɭ,\8U9o].U3>uuyb{çCO.OPdiy$WT(\Phx^j-[L\v+pUW5rp+YX.o'pO̯TӕWܲ`4elUdY'QgӠgU[Zy_fEQuemօ5[-IkMu҂tPg.}^ۋ 9:l[t!7SLmwy$|by/0k"9W7λViٝYks(L]_Nιuo/ ^8uysa&< = }=O֥Iz8sh,I5^$l+9'Wuro;sw[SYgh _dy̢ucK.OI\7S޹kRuS\zwnM4p 7܀ox73[v*e+AN98ǽQPN\E|-'b)Im'6\&\Y˙XG)|IPmWh ;jL|5®:綎XTDm z7O\]ɸJ~Xv8DsyBJ*n[텲"9?[Fkz;xC}IN*.t.|>'z y?ތyW Wkw=۲5k\ݺtBu,{ j񌠳}AXw4AvbҴu8wrPmU}Vᨿq(M%<]Ky5 z8M[}pm/AohottutMVhI\ur+4ͭ夕SYN r[l0IѲ@Pep$aҒy.XSk9c{|*έx^I?PW9+GR87\I4GsK0$F 9[V;yAwἏ)w&gyk:jELL[8[wtAlI.o }WHwB{΃.iQӘjBB! 
EH r-UrUxg=AHxydzݡ椾zn蹇:[K KO_v>wU<8O_|1n}4p;|w.xJAuaf&?39ߔ~H'۠;3 %<.~xl+iX%]&nY vO@{-0bYU,o+'A܂Iz^#+M |tRY$uZr]Hض "qrH>F.́ǟn?/&oH#trBa^RRdn6#+<;\\iO)zTn.&d%d p\A='pIc =A:€05ѩw(۰~a^nWWvc{`ggmhhhzCO.=Ɵ.2Aebh"Ă9<D׷ s#, IDAT9s OyeK.1lމeoq+ ܂.袕HM / 7ǠUĠe%e~"2*{HTIz|VmvmYӂ&u.AiHέ[nic0\gq\^׆sA9Ԭ̽ 8\K7LYǠrfͦNRY j g/ !)"Xܠ yqR)N.9MY1x  o^Z 3fA$ߛ YԪ>/q_wpћ 7AohG?Wp iOr٪^Y9V汤lb^a$]bKJ^&ւ>%fvi)͕l&:9ӥ9=ջ<,'nbu_GΣN0g9_#__)b7[iWV~ iy+r񶄘ψWo*~m^ ͕eyPK^|[,cr,s^PBqi:XMӺ1pQ07GsK*U'͝50I)u8l3(k7n^@:j\lئmvLf^+=erJ9܎ѡK2wIs/bp]$2s:U!Ϋ~Osf(GMr7(K9`Bh4nۿ?cz^ZTpf6b3h;og+zȐcuZӬE]k083<>>kE# |3ߊ){ +DY`;,@VXTݹt&j\ےt%2/13|Oֹe[ ]!v!M}jɆg yc:5TbKvtj"qK%&,5=WN+*)4U[\nS^sNFߣy \AUj߲_xnEy̑if%5uv%mM8,A׶'kgy^'k`..M]V=-9rKhPJy+:a{xxkcBF0nf4 I)]dBn>W*>7Ξasysl[y< ﳇk>^:klhhҡ􆆆#[pzޫw! cLܕȗoLn=zJT,01f^#]vH};t|U Nl=' `cZ92Ƕ)r}.*]0RL+ȵ-/Ɖ>F]y #Twu\\ϑ vMm (y"n9ɭ!܏֑s'֓tWs nm۞t勀s<4sn5[R.)/,ƹ F%l 7MFZЏK%E6*b}7 itLFk! 6P4[kBx+l$7Rf]m[&\3O ws _Z4`pG}0>gFww8`b: ܺXTo~\aו'-NU=lS+&P8&\'ӺЙ)e+!0G9=to7)˄[b]½9Kʴ@ҕ BVn͖܆bLd|NFvK;\Pm= +$]Iȹ|1$硬,խEeA&NcXD%w#ɋe'1ns2,Y̺ٖB1y.J׍mÝCs$Vw/>644|izCCCO}Sx=/>g((Q!YGݯ[ #֭yBK0\9_*wń<fmH9DIF)'}c!q]SUX(ȹ;r.t:&u㐹b*Hϑ 3x׻bnAW6) <7O׃{!emֲpβu\<-@bSm+wT&Bb~8MbӬ!֑XYL^l/uϓ }ߢ*ɝȹVj9vvq_ Zj=Bƻ[΃=,|5X ަtcɎGc.9R]M)'pߥ`{gOP_vq|9uQ|6oQ[zҤ%,))XGWiH*,-:>JINb͌m|DC&̚V3:>Qo}fm:Fu2K|^|}74N\ĉxp<0MN~jOb\m]K2vOƬ%c;=OUV\TXn]`=3>b'E0+>MH󤱪 rN-Ul*/;@g**bpy9YSwxA7C ('+]V̸H\]u{_[-ѣYHN=]žMB4'8^+v̒~Ή1廅[uEŔQs=tR!ޡ1n.&ilzq2~Fi,x;bn9whN,)X4,L=alUEyv8&6|Kb9e67}NVSlJ{G8,.[ַwӝ6\sCCm744lٳg'= m _ៈയf-'HĹF.eslۉt=}.6u3 A]Ҥqܩ fn]IWB-ȅ?&Y]1 %m;].-m9iKd6^oв.8'kXŗ{;})}7qd% 1b0h%#.luwFk1HB}Ke.$7J>qJrrN˸DHֹט]`b- ]CVO^R2-7ym\O>dmƖ84Vx"c[.ʸm Hi \y"Ÿ\$<&J3kwcݖ`_d=,`eIY'}vzCCVXVxӟ׼5'2Da=9%xsve8O,9׺Qmkm[+Zub&H4O}!9tTѤ<9BQtZzv_Q]Ҭu_Cܗ>kndv)gI]l}bA?8Vq~ EΩr [ z6]wAWZWZP+?P+;=R. ; 6y(f\^^!zW!*RO[I}lsuNm3)oիd| a@$#:r*-d[E?v x\:(ѩEJD];xL8-^WaTCm744laW_5_<矈 W4O&RW&Hu2TB˩7 1]ky6*q3Rqx, BB}Bڧ>T>yU}^Dg}K8֚wmew,%9\|/h]zsmvyʹ| 5qA^sp[C744o{3^]w-ݓU9e<9eyOdJyXG,hZFaG$)Zݠp+wҞ['ʷ AODeԟ 5B}S;Wv˟ _}"b%* )O.Mp^T=Ā\Rx6\3剾ٕR;gA/]г[O݅'e~ okpl쀵yfr !Q\q~ijC~LsDi]]>s۬L!crao4qdg\@ᩒDܤD}\ iKHZ^>)5Zzswrme.m9_=Ζ 1aǶu_/fl+I۠%uUňǩg/7]]klhh}􆆆sG?Q|_wQJJ$#@I=ro.v |-c+o' {n1::'OSdvA'.Bp=wTŊ ۓZ*n[[3$M= ^B"VH1FiW] @ ΋+<K"XVЉn3AʹY\,[elYsrO"K"*! OH瞳[Kq:rn˭U^0v[׷uaa^|cNvrB kQX85[\tѝ_=z _4pθ=O-.]=yRqZGYD^ɫNu5KjVc4ntDHLFHJ zץ{bUɾ:LY MD\7d3'%1b}7b(l.V>Gd\W;a4"_TȹZ,:luނ ~yȅc>n%O}%,y`!8F.E]&[jdN!b TM\oE0N/wxv`r5OZV!,"fc5\5MYö-WKdrKLi^@P>Ay&赔MLi|d<1bA4FI,gmjDzYb%wAV<3B<۬lUwr,ɭЍ4 VUb{֯&:B^ v.{ޏ]?C 'z)Oy*~~hhhDkhhC\}q-7>ĥ%L`&s` M!(Q8JI]-.N戁L8Ե>q4NZZѹn[04-7g7Kمp;b=,9g5(Uŕv[&=$7˶cL"rJd*=#!G6mS;'\n~[_eoH{/;NjeXu[s++nL'nc$G&N &G2HZi@KN7\q'w4}Z$9BMȹ|,ŷ|uR֖aKjꞲ֢1Λς. Rla!՝se*49I7$^/QWӺK%@.zdW O] ^p( ?^'Y=Ƙ~}wd\Ä H\7R)g9IK19x1T1X zتpb alr/!<7\rs444zCC͊믿O}Se)X 8+N@-$L&v\ҧIˬlߖI| %.他]tIŨev% |-8L &[MQ׍[$ŔOI();'bOVuP7)+޾ڶU 򈺵zj'*9TJ]J6YB2B,$%')tK;CNq3ptɯY9ǡn]cȹQ*ƋLشTW.zi$}W 1#x{`t)&>$m @vHP]ܶ€+Cw^Os)&^W7mXm qS,1+j,겿-W~6)ք_bZ8Ӱ8q XГ:ųʿm}%I}IR.rUUOx^;3@(l"/=mr2On+U+)';Ykf =kUݞ~nwNЕԎ{&|w= IDAT񑏟rmGj]TT2IINe,.33;~<[2o9m!Mnr/|M3F!Ŵ *"!!Pp ǁBgeJ;"o^Oʅ>2ѻ.–|*yKraa-r :VsT 'F|c%~_>sYihh}􆆆[ wx >YvzLe"S|S T^F-xJ"/:s-Zٵ ZVD$g=eqXss:6ϫ2{$aA9hE_sb뼵Boo%Q,ub:]\ݣ\k"d:`zP9k-й8:fXuZv`b[E8Zk ۂ-~9<#&S%hv "ȹYYG^͛+sNKEd\]Xd!ĔSuQ߽Jm-LI)0hU[Lj.iy{mxvA^ٖ>pƉޥ74܁zCC- vyǽq9z<uec8u"279ҮV0VbWXdזOʵNXS eDS[vN;g I-1Q=nt [#2Ϩm3 r$xO!Ƙ{LDžJeUSY>uc IyYtUu9m] |7z:W6OזcAs\nIP1^$x4#"ѷ,*ua,`CFx=,RNr ÄTcZadye.4TV6_sm+V#^SxGsiohhh4Mx+_zQg=3e.zmdE"Juu$ݶnE:]c'ʲ蛐o)т>@,85ںs*J>"pRcӓ,!-2>&tIA-a`ED/p|pqm. 
wp4%__iOA|fKl۸nK+P9w.ZEl Y".n*qB ȁ)X<p4]go;< !i0VrKĵ W(bx5zs ;k& >42WwxXs>v+kB-{,K*AcW]-Yk_Oܒ c"b:By;-Nؖ+4$~Hǂ Y:)B]u-|]dxL=apH 3D,Fp,)hCSA+nL(m\n Bqx>]C.oBh碱Ž+4^^ޗ:<t~=< $&+}o$~<7Tihh􆆆/)q+^  W{}8f­S=KOC2_XE^[@`UcVYq`SY2Á'> @{H\ON-A+ġ6`rׄAV}9|A$2eyEu h9-PX\,n}~cdTNmPXLy]96qIK*:NծW'#JOO~*>[q!Έ)mg'uzte$RX͸}5(@󕇉d'BTFDc09{'S QҨ݄<sOt:_!U\\egq}$ gէ RWӪ!aߦnž/=KC}ר՝Xi?iB^|`=G|+Y|gK81P%ƶqyr䞮nvڤv":{=F(M8ކ=׽__}ihh􆆆[ /yK8\}#5"#+%\ml}We"TcTgw!߉6ʽw"w7vu`E:rW\j#Awu3r8][w}By#bS5tkt9O8]kBqaNsr[~]:v-O6:='rnP+%wef]3" VT/غH[j'/5gl xlKicԅ sB&ĉ0x * P9ᎎFnu|cӞO!߄x 86KȌ; ~v[W Wg&;O=&M^`wv;Q'HHw\(]bb%ŋ#c#!T2X|ȅwlsJY(*'rSېs=--]`˯"w6XMnd\ q=Q.ٱ-^lٙbdZ߆TF7' bRwpPwYߴ_-m?A~祉)·mkMԢ.{~AkIzSÈ8~'Ї> }k=hyh6qW Ͽ'f=n |OζG09Oy_Wj:G2A-b湠$ ctƺ{*6aWD"N1*NiӢծ_}G:!k͈F;+MVqq;`qMq^CHGļ$ېsOKy0\]Xkl}\n^.o(W=dAIbV1EAG@CLw,4'a+\,uNE!SŊe܎9'w S">I/6," SQqlnW*@'ץn5:򐊼5 qob bt#F)'~q)nr A8tk')]bqSXG2p!SȻ}RN,,"DHhn 1ֽUH7¦&w9kvyhbimEk f'..]"C$L׏߶#V/l/So h\;L—@Bǒ9VxYѥqN>v~ Tܥ=J;D|7+9X#j}o•vnLA+Oɻ,npH] uyhQ} N3x򓟂%袋knhhh􆆆$Q_~oxz4t0`rǢ'#`'eƥq8C"by~&i%Jމ@._nFEQIx+(J_# OD(Su{Iēx&ƛ>WxXKt>~ʱѶZosqթ{p=:ltaVpn:ݒ\IuCP|$0&oDkfW 53K| A^&< UҊE @A,L:-Qm,լ+OW֑s]-$_}(<>]v"2}s>' #sgroW}WW?K/]S24p^/iqX}C4wkg,9gƺzCdũ4H\8^dqIU\7 d֡tEtCнU^p|AJp xY=%mwJW!YD;"uY]h&Z>y9ȉ[|I Xkwqgsj--@ήgf.-˺tN&! ~*f+c]Cj!'dxLR\\OYzqxW9b˭?qL꘩?fs!.Kԩ/j5:Ɣڱ->åc_佝$poS"a9Be4CHvƂ? uc qe3$-z1ֹϡsCYox_W+p744nw}[7ōr7VyT.rseZs17ƶ]kL:`,6.X[}E՞ֹܣ|&"ј#}!nJrJqv"XDD⾙PɃ~sbv Yx%z:`UsH(x, Ծ:O=e>Ŭbs`^nܗru〈v[>sd)ˮsy/Kc6aKUuɽ'x~}c:WCCC:4p8x__/}w߄  8V-n\A&;ܵ=ϯ"A(k=1~Lvϝ6ٰo]]S ܺ}b}B]/6.Xғ|G>KokA$@ҍ6'D}k\,|͢5Te}Y7ʹy!ͺ'l,]ZM'DZfWHfKš!ko/hEyng2mliǬo"`*7p~sqݷ^5W|UW7p}8VrE]'?MfC# K>}/oͿp.AGDZy_\͊LY!8IdH7UT[-4 t+oR<8 %(R b紺1Ts8!>'Y-> .I*es="9C,dZ-V QsΪz^m/˖&:Jmn ZL=_Qp%FϣC qlwY;BF,q*eYzJe!cLu cƍ y.'|?X0!۾'[m..Wbo eAOi:xp'Pql_N]8d_~_~69R=GCCCù􆆆5>E/z^`u9%+&[i-&+OlH^oT~u;wW{W )|SR}Vk-@({%XZ/pzsz$?E&?X=$F"~@7’uemUWw1]$c*I%ۖwÊMtD~7^l˺Hdr1..EK0ҒZ{2^φl=Wc]uE֩(Ӎ&gbAtOB|RD.ܷZYm<=ǺmR?}n_Gyuv$LJ|3^xl _ Aohh^oy[pl7.X=.5vUKZ5)}Jqy\ ÁV\!*b:O$zzŚ7a' P0uruZ);Ų!&a. Ff%؎X]q <5T Kpʭnb]f$mc8[o_V%[RY2GŔ[-W JnMH^%"(#.Ѝu&ޡje|>"9գLfEi%}>$ ;AzoJWKyyO3;_`tNp|x+ά=ߋ|jhhh9zCC×Ok=Ɲr_%T &CTk.Zɺ.lc0]䕼A7ޭ}b(wD."5z*/$\=G!~һ\BK1##ūOmuHe"QQTn;S#!Po*JYC?#'g Th+ٹy8[=\Eʳ6DOBY1UM\Bx SY,q]~1⻐@&}ں}znDA߇X'ZL 욯z rhzzJ/! 
AsY;S8o_я~ ~g_K.d9n.4e??OǝqHs nZUzh/9uڄƌ-m\JxtvOхv.ՓyI,nյ}f |Izo J̞ʳ{'iȎtlxru-zzsb\#򛬡h.( f9痷;O-Qqb}!n,e0!&Łp )I\';y MNe[Xb\Dm߀\ KXӬ~M,r9jV!FͶ+[2s w}=IE]7Ys{j}s✘ ٞ ɤNdNV8}(z+D q<8'_Si`4ϧY=,p#zRxQnjm ꨑtgK_885~͗~Ex |5444܌h{ _x߇c}q ) IDATRKQlBg\gdBBeJZ\6g9׸oug4flj"K֭R9Iw@?R/m J?%.mDZZ>U"S}y `-_sk^tn}d -ص}rtz ΔakfӱLܕ/ Szy!.FnU4p>ϼ~pd签 GHt K&.Mb6׸:iǟ˴6Ʀ'r@nR+Vq{ kKX$>Yyg${\|-V,f,"dsty&kթ埵 yj\Lug\|.'RnmY[oN❢J7H#+[!`1xf!Jcx'Kk=Ly0*s~9e>se罃SI*Ȏ ;VIp?^ 0w2L8c8gVã/hhhhR􆆆;$/}K;o 1/yx4:L8VQ&I,{}*V[E |aMW^簉+(gv@7q7#[\ܓHWL k\")nÇJ{Xm>j@hR;<O/9~W ˃N߂p7r!!@֕x\,x:Tce2A2]X+sc f0%X9[DD)1ZH\W{tǘaH†zb[BnǝȅrW =L=|nzغlu~iRmISC))bW ;EiyK¦+z # I90&<#5O]Ipőv(,^%1&X.,>>};; gkmhhhzCCCnzի_z_cCqlNkᠮMueu<г{e 9_Gnn*.9Iak)zXWMo$q[‚Iȼ` O7_\ɿ8#rltheiq_~shOxlCC,:²mz# YQOV ;yf!\]ҤcyqANgѹ qúo<TzbΟf5ݙ9qmn+,v,:Z R;;nW CWq!ĕ=th7d%`9L l˼7|t .-U܈ cQ.Xϵ\m1mDZT›ǂsD7O?K.Ǎqri\ vwwnh=%/y)wcџQ8p_K`ӭ6+ŋKۉIn:XB/E-DQ,R6qnBCI)3?I=0eLiQ.xgҥ>C+pKtjIέڻ]TwIuc,IRkۢℹ`CҒR3[X(tM߁3q͘'+hke˓t  |RRMr!?;"۾$BO妁DNWr@t9feDa9"i]v?gy*(.zyYӖqJ6/: i,J+:[j;B{ًD4myyBt"Q\r_R"cSoQ FNQ`e ʩ >OZwpQ%+oR|tMAc | ׻?Sd<o]gCCC7444|馛׽_`w'ec>)6=w۵3wYϷi>9 W!:[ řCJK{ŭWQq+L]]*B\ڝZY%ޔ+;D6%y7KNr!l=טUV6XM%=̇ -yE ]\ߥ~ wMӭ]Y6X)hz n䑉qEMNJwPˎ*&hXLˀBr5݃)'ca[֝<]n$aQřzj8p7py@gk2uIV)ҦQ XAt(:אSR{P $ɪAn2Y(ض 9 \.u.9r8u$l,%|MiN?ST\r%M74448s ַ5y-pXa]SH-_$ACU!Śэ!WtNs.',|>WcTI|K~WP| CSvߧ>Z0+1v:TDWB`*/ȱ.nu",`51uS)^D">DW< <'< -FnA|3_|r x<:< :vXUC>9p=w+wN ݤy~Y'L"䱎ZC2RuR ܔGpIo?.TwlM]3 \6בQ+H̢KArc!m&[ ]&LH=.Kӥ1?TǯE> Y|HIh'"w+y_uHJX_>Cҥ9͖[ 5]Y5A!">*iEݳ /MmA!>眰xWA8jA raB>b}cgH6 ~=158`C_:V'˽j}#-{,SS2y7 Gq#pS3+8q|:nO{2!yHsaohhã􆆆/1%wMozǿp_LǠc𵘰k`:r2(w ɰבa?Ώ$n"ʟn|btd5…Sr48fW9oq( XPWeHCeϐ0 F%>a\dq 9.kLXFRչ>Ki,>ZWwHP[xF9zM'i vI77W1m!m1> yG+D?[QPѵ=([ _nM8pn{ ǧp>3Gqf9w]+qcYjhhh􆆆[y{7ۿo<ýZ]x\EPMGB@=@&D>3zשh 靅S2^z;uw@,cf1ef3&jGD)^ȯ\mT܇x}㐑I]͓ˈx-\\<c1q(RH[jW^S]_#!ωqIe1sidaܡtj/ޏp4wM)WO`AqpGv[hqGpvyn}+qe5R0Fn#XVxߏ7M|0\Ⱥ߇3|:zf8JqCg7)׭8%HHնq;8,Wuc}(w1eXF꘨ 6X~H>= dYHm}137gku7j:%_9μ[)U9AfsJEaWyuez oc8{@^n/vc7i{KhBƝt,pO&8z>'p .?ߏ+|#1 3. 74441#Mof\wb.ju9+|d8<0߯ixuW\A"_6Ła Vm\jx#)t47[kF .#wB,?Vi$So ^~nX7Xu}nG|޼usZ_ѺQπ>{RNV /ۦoQ(O_~*}nI3J1IG#@Lq/kVږR֨J?uEY(ӧ\8gPcr4-[6ɒKnOy]ݤRrd{MWr*knQP` <;{#Չ{:>G]LVD|WC}zmEu¸K2ڂr%)'sq@0҇ l]ɣһU#,ۺbN^}*S~%#rsЕe{ töl٢Ei…_:c\T),=ݤndӊpny@rExU@p%U٢_JWduS+ yQÞ}tGZw@ϴpB-XO-[*.>&Or#y=gK].'<%S#?q_p0W.1\)/u2r\*uڦZG H _9X>ܧ!3Z?+/V.G6->J@Np*LQ&tr .{~|f yS+$,Tؖge"AV"4k|* <Ļ_ɩ߿ ) ݺuc:t;>H .ԼgkV*q-tR:KpG". 0[?duIUK;=تl/^-wâxў\zì_BTg]{ùڞTnA G&p5ޛ\/)^GySJ+K2lG]QFʯON*TqVY{iذ usUjjjs:"dɒ>ڶmdI= {Zg)Y-֠.=`۟q-4Mte"Zݨ_D _~z V[ lu"7!4_E/I*p_]'V/N%˛ `SS |%_D^2]>44HM4 ")*)'--%)=JPt/zԺmX7Bq5{M?j&ڎ#'FZ$.l5*=bA҉ǸU&꺧uJWh&2veu|ر,8~+;KI$r>.Z?F5t`##޿Nj ^~O^Jm'yHG=.9l~ IDATdNʊ5,w'0Q. 
ª8̗^Y$>b-{4Բi `$풴VZ]K:| : L˗PB{$yRɗN&+J̊t V\wIӥ*[H.JN>.t{i%ʾyʦI1RI٣b}?%, r}6EY'/dYvd$)'޽{o߾Gu]vPZbR IJJmO;)9WJm_zݕBqN<;\;(^`ex=j}Y\:^*7Ɓ/BLQ:+xyGÛP&I2%0_$*)*(-z쥾}O\YjS|ڰaCJW-eNk#]i`Oi/%(=)UKB{U)PNPVOf㘧fc'iR#eU؎KVM&G\1=vWKl J-KɷAח|#cxt]կ_[}Q~~tV5Zv'ڸKcrȕzVBJʑtE Xt WpmJEXqŷ<eCIb^|;uvHU@ՙbhK<eyWԲ'>"ˡ$- ?V\1VJwVۙp:y{vF .w4Yt+EFTJVY+m[wODOP@7 #e,X۷U,ܮw$Zyo@؃}tAٶQ}1ԏP~ich'>D)w۶-{\ע1J;+7ION9Gho;DXRnPKt ۍdOyca/Q}m;ھ:ۏL9GScG.?ڶmNqTGZXU+[k+~nOcgkh6vm(m-6D^y:fkgٵB@?Ee*S(b@LR$X-셖R?Z`6S:.+Ej>7.Vc(mZQ{H@WWm;J`׉\] 3bh^^IfRsXgo'#5ͨ~m>[@l D-M?iٿT[yC@ tnmw~JT= \ɭ  YƸzQ8 tp:@@8 tp:@@8 tp:@@vP?_R=|_=wv}.Koqmw^ze>Z^[]w^߭_C@qmw~:{P?nOdM&A&q #!txj89"IQ?J%I%*eun"֗?J}#(}ަV>_?>eɯ#:-RHy|X1bČbo?Omv~//򶽾WYTV\FَVVm[r[}~EEk+jbkL~W:!mo1tcu|r+GVON>k^k%rc/} ="Ŕu<;GmmYnoV'丱lG76U'|OFz#֩#mPal64ZVWu#7LNsGV}~Dk Xhw?BxUMܾ]&r=fi,cLQ=so~S7nԙgY݀~=z$)??_)33p \AX$ Sؚ5k4bkNjڴ=\1`g?Szz5j/Xׯ^;_"UΝ(++K:PpB]xjݺRSSk޼y5sJgu޽;v5k sŋkηh"]ر4hN;MG֪Ubr\Q[gL.\.u%=[;;vĉ5`5jH.K/B :;CC Qfr4uԸXp͛+33SݺuO=*))ѥ^*)1? "Iz5w\ 8Pv~Hk@AAA߱M4Itw3@9}5m۶^^^ѣGHٶmLJJkk{uV駟my䑓Ы/qĜvi UΜ9Xe/_,zssΩ.:ڞ={:dZha.*YPP`,2gϮnDkII޽8q8pҥK"YTw\~pO?5e^x!}g,2SNykƤ#G:dee%uRUuΜ9Ʋ,rP}\AGM |ƍ5lذm۪sz뭷dx@@XU)55UNBX5ǣ*__:ou9nƌ+Vh׮]5MNJtAԩvQ6yeffꪫnDC=QӧO)U\W0pU4%%%)555<++KiiiZVS82׬Y`~S\\,IJII {-%%EG7|S#+2SNzו'ǣ6m;<Tu\~^vܩ{G7nԤIjW1/Եkװ=֭Kx_hժUܹsnܸQ_JOO]u'T jwuWu>oY~_zvڥQ/z-y#GQӦMueo.9nFcY,x<W3M6 ?LrB2b6lh4ibf͚e.]j.xuT}\ܿA^Ud3~>Xe^}DwκkMrrYjU4eO>zV3^לs9!0IU?Lq/ᅬ{*1,[__5˺*:|3e{e˖3g6mژ v-z=dɒ_|g!n߾ݬ\̛7ό?޸\.CUz̻XeOnc6mdFa<,ˬXNqMJJ2e!&M2e:Xjc\6md>S;+$%%^z)ѧxƘ :p]w˲̙3>~IIiѢڵkuOQjc\ym`Y} ycZ l2ӰaCs%{,]L2ŤiӦ%jMmk$[n5fjH$n'sTM6aO2E7p5k[Ҵi4eIӆ~_S֭r*RڴiSݻWC )6lqYF:t4653n9R]t~kT[բ@YӦMcꏓUg\%iԩh„ qמ={twƽqݾ}n͘1CG?$JII 緮k}T1nMz70@.K{Z&X65vg?/_^cVyfϞtÇ/޽c̐!CLjuR?/XXed7x?fDZT2t)4e]vT{Xe*qTSTTmUXM4{|+U+)))n+weYOdD]A7ƘaÆ-[&W@bH,YDn;櫴ܹrrrj*-^Xv[ u\/rIW{=nݻXy3>Pƍ]bK/T_~VX,zz饗Էo_hѢXӦMԩS5eʔ x޽[Fƍ'uSUǵGZtiϒ%Kԭ[7jҥkΖ+M6*,,)%IvZmt^ڼy-[~vW ~*++K{VNNۧ_]㎐驿/jF|>uMXB3fu-iպرcSOi„ ڷo:u꤅ j֬Y?~|Nr%\ݻ[njڴvܩ}ᇚ5k\S 7ܠ3g+C=f͚i֬Yڴi.\X[U}Qs=6l.袰} nG׀^xA>Oƍ;)vkVVfVV^oNJÎ$8T3o<>|XTT81"Hc0aFoQiiiZh? / >)TTq useڴisB@$} gΜ9Yf&))4n 4ȼauǎk\.ٶm[죏>23YYY&55t6^dTw\1tME&99G}d#Uw\g̘ac4ib<6Ç? &⳺gsכM4sESpq m\1&//Ϝ~'ێq-f}Z$*1}Q>~tܹy&##tȑ#'T::i$ӹsgӰaCdZnm~_M6ƩQYS_p;t@tp:@@[v~_OWZZ233իW/s=ڻwo;V5ӚWPP IKKS˖-u롇_] r\_^{mۖ 808_|q***RRRz.~_rԸqc\2j+tj̘1v=8FFHg}Vzʕ+uwhz뭷tW襗^M7Tv-JpOO-_\ .ԬYԽ{w=ԩ-ZTݓ$-_\ƍ ~zw} egϞZ|}j5w\i ]|f͚=q)2IDAT'kU6m4dY&bGyDǎw߭ hZzzׇ}Eu;?>c?^Cܹsgf۵`*mIT7kg={K5i$wyM6yCO>=5l0ohJIII@3w&Oӧ맟~ѣuhڵkHwyG͚5 );վ}{=czg;w$%''+yv3τǣ#F1c󕚚]*<֭[r /r4u{\.>s]qRvv~iÆ :t6lܰ+K.ҫ?OjݺtjƍQ6mGO駟/VӦM={_>tR?^͚5Svv.2ڵ+ŋ5p@egg+==]ڵ_GF]y啒A[ 6mcǎsǏWiLvء/\ 6Tƍ5fF|8E. 
~=޽ոqcONN5j(޽tYgwߕ$͞=[P~z~Ξ=[rfΜӧK6l /\uօS>KR˖-պuc 8 xbK[i??5tP;6mϟs=WthS#_yѣ|M7N?&LK.D_|z-s?yd}z3hӦM5j~Lv˖,Yg駟OVuUW_ kcܸqJII+3fhҥ3fL[jĈJMM՜9s`=CPqqqH[9rxIӸ/_˗kȑ馛x¾T(**ҫO>A>Ќ3믫yꪫBʲ/}Ǝ'skC[5khɚ<̙c,2+WZ'''t9{~~7B5ʴj*c#˲̞={1ƼƲ,v Z~;v1~ظnm۶ 3`3hР3g˲̂ Bo馈ѣ%\?4e)STxvڙ ;w>3cYiݺ9zh|ܹƲ,VfUF;vDQ~l3qP+,Y"]޽kdᴑ#G/V}˯6ޥKIJjv_JW_}~^ogڵk꫸֣G%''7^|Em޼}[w{߯'|R#GT۶m| jȐ!!W_}uXÇgyInJս{wl2{~~SSS#}&׿>#1:ot@Bm޲eKLS!)e˖***Jh4i{rrJNN/;iӦ!pGÇkjժ$iϞ=o]!?7|,Ҿ}駟 yug3_*G:4sLIͷmۦ[nm߿_999a{zCx"Q^CE}1o~_~Y&IW(\?rH/RiF&MҢE4a„*K*^~A͟??W_ رc7o^vI袋$IO>dQSmg__]8 W ӷo_=䓚0az묳RIIV^gyF]vըQԱcGO\Æ رcժU+iÆ Zz^{SOiɒ%袋Զm[;vLgϖeY^3(##C:Ӄn&L;Sa c=1chС͛J'Iב#G4z;<]wu>}٣#F(%%EWV 5>nV͞=[7p>l-_}cݻ<3&$$=Сy͜9SYYY>||A5n8Xϲ݁Ϙ1C}!"oyuկ~%F鬳ҝwީqƅC>p@XB&N~AM6UΝ&o=z_^޽[ҥ~ z/ $߯9s藿e/~ y睺딙ŋkĉ;dYYf颋.RF$No߿ ԳgO=*((PZZ:wɓ'G)Xٳgk!o>}=prY 7:'mݦu֩SN1~ѢE,+xe<x@SLѷ~lh1^W~~qڵ~$ :իew}=zt\**())I#G {URRŋ'u]\L~H8PϞ=vZIR׮]k7+ nڽ{(h6nܨCI5jO?=ڜ9sci֭:~ڵkkFwuV d_3S'@Nx:j/  aF&K*(I(E% JUKQ0202d`7Dpd@ 1406 807 b@IDATx]$E9$p G QpHIHRQ )DE HGP䜑#IW٩ݚ}~fg_SQq 1@q 1)﷩!}ۙk~3UGu32}wdMo_yFv@KF=:?zu3vDϴeL[ܦ {f96ڎs+z|̐Ⱥno çHsݧ;T׻k:}3ḤrAόsz6ԻF:/8^su|{s{e IKwK;_>kb~su`X:ܚ3?*Wrgʲ9m6n~gTK#ݢcq 1@M ٰŏU[`H[>G`Pnc'2271_ vG,soΫ{AypjYv i9zG*_W;RgX.0WdTVPZ5=p~OW*O3e;3ѡ񐩳@%YVB2 ,B&L,Ơ QTf1Š$Uw9=mNv-83uf3sߐhYR\S6G 2גW~P|㤒@Hj׉Jr,Qz33G]1@q 10E9P~]~p|~t?;ܫ2 PeLz4W?V|2b^w#>#DUʋ3&3V# AF]Bd+4[]:\XJ g󐫺}ᕍP+*! 9˹;ڎ7 ZSFA^^Ͷ$yr~ѲC ./iW 򵑆l^,hGO=u1u^0XNf>}s?@|?3(xv1%?qpkȖ k-{~wӻ9~̰18c 8c 8K9CI`9,zSc3CG|'# /K;Q2aƱ,:O69r2ޥyy P!|c'6[9WUZq.MQN}6`bH icp=,/)䅯M<[Q JrCo}7!İR F0!1gI_؂$E(B^z% ~` V0on_c^q 1࿏{8P@SXA/6 w YITH vb:eMʶًN˶*mqJ+e#=,enOB\8Nc1VFyq1@C .^uU::-^ jplaXd}[ϫ>OnYpt&|qxaƿ'LpGZCJ|Hsz.sdҸ%؜Vu9bGiGZ; ǜs"`|;i*[lcXr6@B[ P1Wyк$_ٱ}\ʖϕ-ɏsl)0 ޽sJ x"A;ƀdu1Ĕx輽raY - ?ͤBUa|MO}yjP'sm7jKK)D9aK-?-dڰQTL)\^=,Ffs39|΢h,3饧f?9\3YNK9 Oѻl!ǻ31i9xn^?Jg<ش5&E D D D D D t@ RȨmFk>yC!H}!/yl嶽(CU/eЗ<Ȭ/xf1͠/V`BHU}Qja@Ġ/-$7`J>З6xLLD@_ZuA$K2_@H|.wIXGA*? @_ZAB ER(&ǟoAVdxu~N?  B%@"@߫9Ӝ_yy ȬJ$g?>@[I>ߤ٨|ywh]տUk1.~=ls_QY.1Vlx;pqқM.`-%×z}23G-<&CiaO+S&ޛ e v i&湫S.j~SB~FvyS}w<,G$ Ysl'-,X_o)#a楍L͇43ٵ wLWl?7= ^x7{*]A﹏Y{2ͻ(FNÜf$dߘ3l[t%obB@@@@@g4Mhu5CGc=xgJ)v_p,,BtEfj@_aW~X-Ҡ/]gّڡз|D9 1c7@_8ms@kqy|e32fB }!/"b#꾆(U)Mf+[N◡nh@}ֻBܣ 6NQ06Ҡ/x4 /UӠo{B= +Gz\ޣS4HUl, B@_$j<ރDM]F>#߾VE)64w`Ӡ/|555555550h4Xae#V = +`xZF[>bLYq|< Ԗ^aOƅ\MeG=+ wP@md7l dޗI6e?{3FrebokGWmɕ!fkY\v,6I'Ux@ʣ͔,V*\o[^{-q 46#ja ڣȗ0rxE^`LCagxaIG^ٞCa09aJz 8yJ3 ,A- /^&o= ;1Co)gY9sL$Zx@Kc @>E66 :x..M2l\f2ck ϓGQQQQ4 7^%xuv9 \<یȪ_#"bxcK?R|ro|Pk= Fޑc@i!lXf`]:ck|Y/P}kX2AXڃQs!˯U&N}nzSWXSr~s& t}e2d =Xݠ l 6Pc8օ1 )i@ژb+ t𻲚^ .4_"&rhmyKLJ`ĠUS}=G1Mc8zo rcb*{TTxN\. >c8 oIϤf"-os4Z}cx"5x SCwv|SsbNGxu5W69@jyU˙ً û<8< }zbavyk }5`NdUCĀD,/R #B 6K>"*^Þ\r\X?fHX&]$=W |%df}\4A}dKnJ.$b2st Jʻ>!(;,Wg``c6Qk_:`syAztPϔ +^0vL&*c8,vw 26147JG[`~[BDrw&gl~f2 ^-=}nnC};gb͔F;Y%]F˪8cxR?R|[ UA!Kl?eI4 lXruhm[kUyajAKQU^ٯm0- /:@_,[}tC;0Ț>'o[<\Y^k/>l.eQJ3k|(yX-%[]2 MWg&{f\y1LKŽ^.HKdWH"R>y)S4Azp ) ͱ{9ByFeQbd~FuUsqfa?qoR|L0[N>:& o˨0F(c7\sIغB 2!:qdsۅ-czZ}2i޽{/s! $eF&.j}^2$2>ssoƍ+T*vrCe q(m>r(SV~>>Uh)Qxt\l_E/oɳ%{_oˎiQQQQQP9L'뉲Ă?;YsuUѾH*ek`pҜNBdVeG=bu$uaX>5˩JahkB"Fi \txu"nACzSgt`1*m)h-=Ķ#o& s=Q2b~Cj%y7 G!cj~c(Mf.t7YC4` u^,3,ιon>gjv:{ qm%E!|e@#^D!)KX%ooһA@(CX;"&2OCpYyJ27KBDȐX[ F".w͸y|?Ǔy ;xyc84/Ή9Eya,q6nK>[ _}AǏO?_("R@@ԀЀ^( v hnlbV!2rX+. */]DZXZΡ΋ب4ء,w*6(kNjQ'@q=7_zbfsa# hv) Sg2&i<2'D2a2r {qo(0?fpZl:gl%lLIzF"Mt'y/CGnwW?7].e;C~~aIl1 0̦no> &^M^oLƛiAdJ071I'm)V9\6$pN<1[kOVʲ>b{6)1Lh1,$B+@Fg̠ݍA{HV aΏjJ}FJ҂aArˡm=))+;43>}9l6@`)J{S,S8skĻqcA3߻Yz3İ1cٽ3sϵB{y8@cXeBnz&l6'aR2Z}BͅU,zo -v:nb43Ei!ax"Xق!4!&Pu[iJ&s*V{-N Nj+1a<.o>l"ȅ}̢ߍnB_Рu~Y2]vCk7m~{3zfBϮ*W+lo~gGy$;p ;Z_505߁~bj?6,5.ǀLU퀇UɌr6 . 
YXZ֣rlE=C:gKJ:ȇM V]=Y^mCZ N!+Gp_ T,ՔTa.#`g/_<:3xS h#z˚z\_w}394%Ŝ6` ychiҷilbCDz$&NɄ o%#b)~"+U`) "/6ƟW h֡YGm(k~;,e mB>΢ tIx7 >tFPxP[7;V"ċ26]ɹRo[{nn+W+[/hAU8>l/:,s-;W{כ+_AܕFB6N"ۆ P[jFEx恾Ƿmi~60whU\ z %ȹ̵I $͓െE^ [[jaeA_kNl XVuO@wx]Tb,tZ lפ!9m9?}CCø@__5 yFZ{!4 >u6~Wƴ  'g1q X% Iȓg|f/jgU\r߱jCn>:FF>ŹmqPIE@_倾 }[&Ii_N@XlT?7𦕴*f 6Yꫯ6oeE@@@@@@ ()vA?xd{Ben2*/򵯁b6MҶ=Әux·~dA.v[%q\\NxPa2 ^ެ8/oAM |fo.l1/#6YXCeI)8I.`d} % .;l`.'/6NrNJh>q]ؠ z#J9w62B=rc2[V47QșI0_>xW^I@I&#G&1) o[o5>|A|<}i%X"Od{~jnwf0ͣ>Xk<;3I5F  &$mn,bYfdɛ3E@@@l4vmATo_.J*r"ecpv [ЇEbN $HRNQmvyYϐpZ_%, 8&US:Btd+o$0-O{jR_1S_3v7+ЇZֽkͿBӓg l~ِ<6ggE\_Eu!E7C)^L|IU8[ 2;_\MfS?yp!\u9"QI1x6r|O豀Lc|7㗥u6t?y,+7/3'/dM%PD6IZ04Ash[6ۮzNxSp"y9MknƬb>Y;Tr(szT{ҏ3׉ΰ( x:K_'|kwf+*E qZ|S-p^<ex%$TSMevm7,|v'&6BdEX/,czߑp%̐nD GE8 A5}`!/jWsq}kP:M_+ze2ӥXM$Ȑ=PM9SB|iؒX02-Kza\/T<ڴ_9@_ck,5Ӭ饮Fnkpe_ YR&n#cx~樣2mQ(YQHLO'(ze!{%4|7/o52ǥ^:a9ocp9C8Ʋ {44y /$Ü.,> 23ZOMݶ}.R@Y ƏuVQKY֩WQ@!?sBVw)w\{?W~ $%3/]IG>Zy*䉘']?V!\D~ 5mk^R.3y]`;=@MIMH/*2g#CDCH cpYԴi*S-a4LgdLo[ QK8ϛqds5$;9Ò?-55555M @?2 <JnGg{9 <+KE $,բ{hW'qOxcz< K/gl[UHLM]-}k+h?rb.:  kkD7Ȇ\^kʞG:bft_Yyyߧt7.! hn˒|^SE_uZGo2:FEk%#@`>lj=gIx [?}=7ZB6< ңt=KXI Ư>=Λ>:4h4P ->(.D<-%$"n-@r-NQw鳋q x2!$7HD:ؿH|EY/6pa#d..={!/1Gq?lկ~%"G D D D D D  lw'{N}VE1Cwo Qy07mtWK6O8y svC\Pn`$%? 6y(BR^,NCL^\L*h57@W.t>1aӋ<. X~; Gٟ ZYE! ѴLԳr\gUwLpK)kUͳ?WM7u;3xա4"eDrC"*V h/̘*ZK61<t)ҫ4Lx/Iap=yF5(_3X,'(6`i|Gu47ZFH]+=(qӔ(`524fּ =fep;/b e )m'[L$Hjmeh"lv'N`D>r-ܖ-( @I'.稪yl|ۼ?f3>HmZvmN̻x@ KocM4 JHobpkuH[<ǻ ĝf on^<۬oVsylk}7d[{_W'J6r6}o뮻o51Zƺl+W{$_|hZoRw}5p@Xz53O?6 Vve҇/ޢ׽ F%w/;Rȋ1PxRhm }͕@X>' b2cdf[\pE a?%ߠMY]5kRN=dc/ O9ĊeXãAuH~6\ *0ߤ 8 tRu( I%crÛQǔ<"u+%PUjmjx [RxgkW:}\0[D][%X\7' gAu깛k|k:w ^Od^_*vW鶺B~`1L鲮kלxe(Nw]_x.Iu( u]G•tgkڤK/,d}?:16b"Ǿ>+)K+M32sa4p,e]uU#Ln6Zk=}f%hFf<̃#4GRYo V`?Їx%¦Cr Uͯz@dO jB:|?GylEssB.+{0)[* r 00lLgYHDuج))dEכpvң: }6 #D_U#JDV }z4^G+C_9'607‰HbCDUb1!^J r9L#@j:2eB eĸzO(`}+ )Vm0e6Z/< y1ΗaAC*!8˃Rg "(}xYs9lf|}tE@1cL3i!&!C /4rs9a-9@km^j׶VX! ꫯ6'pBEVYey<.ӑDsW+cU/}pJ\5մz ]AUWO*۟KTnJ՟mϺ*ӹ/c.@ئ;oNqPބLxIa2~L$?>}myy AS95&ef0, ,n7 r 8%=./@+gY{<]N>$[_$60&dYg'cP-('N1 w+q|kW>e6jx:$p胙#M,k&yZ(Y74&ձ9E衆۷HW-{<;z֣ڊ598m&玥=@rU`/=N!j4@> 2A]t=3[uLߧBIqz:QR,:t.4l,j|YzHeܗ׼iQ.Z{~l~W:cgd_uzk3u됬im]齽Բ+>j3R<}weW96C zj02BE@@@@@Aji\ qϻ:cl.L|Ri\df6B$V>Iyr4NꈍPK7lԄڌ9Ry}l l[[\i =.~>SK=_H0[4̟S7ޅ D2?ׇ/ȳT;Pcs&w7*]u'1Ʉsx <CZpyJǓnsiMT힄xO(Wusz&]$H`y%j1j]gNeWLIm.8z_~iFifm63|poSkZ6!N.b:+ERyd“>Kg!068qb7o)1#y'|bmT5rG ƎT>nܸ4/zȎTF  `O,Q̀9ڇ?h>D*Eh~VVu%o [wZPe"jipS]q y/X̛%7z1C$=W`b)<|;D Xbj$ |^%ަrXmfr.Jpdh;×u@IDAT,?<5]LEɐ6 "\B(6W_$Ͽh${!:NM.a;DGfT̝xY6Cqqe^9wjuY.mwhKf,\惷-7VzlG(x2ݛ=½F2vhS^kcf)=B ]:h}al*21i.v[}0"UR\!КCȸڣb.1=j j j j`@}'[/^xҀ\i?Z}ބ "/$ znuZץK⫨Cl0F8ƛ}W/xpۉ'@_A78M,ݙر̇\ 'p-u[vh^̗w&i1ky`g}}d(P*Am 7g8)۠A_I;lxwx7%J` Hi`K$ۥu)dxtA_p< }Vj )orAt@_C4aۈyjA 8Aߞec/䟌/yUU͈ɲ&=;r${zBv<NWƄ7\`-j j j jY#E D D  s~H T`gc#s"A[YXp v㿈l7.7Z^5Ơ Ҽ8Ocu>DKįic$M* 3 p'؋ekyl+kWґNqC%K4'+sV`K͉VXBU#|t?B 1?5.bx%i۲s 6J#7xB4mpM] ;Ql2wZ OvicӥB-֛auإq:4'yr\bKt cǧƓДYc:Fs4@/h*?)#i5~ҹ>.(s%qE"z&s `HQQ_5ЮBvvyvB;6|p4bhm3٘חGg0bLs,;hLW,e. gHy_rx2qhԂ8Eӛ)8#&yaU}^ FP=+o:D:ʃK1GpIC~dKmH^zJoS^6^kr=2lTDsɿL[˞\2m:v^Hfoalpq|yk!$s49I4Ș/ro)1 |>kftg5.d <ω X6:T[W2wɴgG*Đ旫^ f]ۮ03Ka՚Xo@@@@@@bBI -3C`'v_kpg%+i 4=d[8x>ؓf1>.Ӵ 29/EY߯o)ءcY~q8Y~Y'~»c.@ K (M@޹=5sٙ%i\Jp@!hMxD d' Mޟ^fN. y\G8{xn6`,wy|2m41t8D [x w3{ qo'p *|Gl" "<,x0^R=Т9d+m|ByI BkHz`]/kan';c(8V8͕M1k$W)&q*6Ft#P 縓ekvR1c:Y555PƎ7n\e2]3Q=5PDG߬"T /ϪHgx5EqxI`ɾxq[r:70_^=h ;{\$v& B)`<{K[ZLKW=! @S$`'a^mV$˜UZ=XnSNV79_ͯ=)"fޚLsؽ Hi,.3]5(\*Ͳq Kn9tɚ[/CczE]K9o}p j- +PG7mL2 ߏ;1<./h sz<۴s}:iLE⚊K'9曬_/qدi\}K2tV>Z .aUR8 ;ژIpf2ǯO;1/j j j j j`i fĦo IJ7&`gq@bӠ/<+8tɐ2#e;rJO'n^]C}ZmrhBlK:e?Plٲbf}GߢkvX=gGx!  
I %@E] "~P/ /@ DQ ($cFØ`#l"WI0o#n%jFj}9%@֚,`~z<^GC.=L}Y􅌿ii؏ĘDd,]SkW{?<! 7,:1ő}E\A_W(!;QxB66ԓ F4~27B@_. }N"cyHFD<.@ߧ,50z+}<'KIJ ҫ ϻ8}wXY@G.sɬq}Y`-[ß-/+$$q{x&1@H>*za\G`锼hڮG sX_L5Z Ig|;dC@MWޗgJdbk!\5>hӨBm#oD>C ɲɨ23mK;;1?H[#7S5 Ie Iw䚀yTg˺; 3jR :hL>b|b>~R-s':^\w 7(KFN2gL ]k;Wqxəke]h)5o}tF&<:B5V$[GC{6mHo3mdxy}>='//vAu@V+c%űR *~ (|dj΀jnZ>ުN^ᣬ jŴVD"$3n4VDV;ae9\@UZvuTk~.5H#y 6dx6|``U/];>^WyfꦕEy%| ,8yH3yS}2. ~|+}ڀy8^ )$F/:uLdKLtW_#^y]@#yDZg  y9ƞ;F5pk{#oaz p❓55AaX׍kNOMF^yQk yk`ݏ|E>X:' t"rm;y/ܒOǨ:@PPz)9E1пc [82u xT{,eY.!G7 ene)E1X%7=E5ho A[]i6 ]к%Lp2Vxvr]o4Ng;īx~-P{*r>E sd@жlewX6; aeV)k}Ǜ}a 燄U︬<Ɩ;Gy|~NQ߄r~0gJ6 rqļMq=rp%32dr=!9_Fvil]+Ե@G8ed 2qފP߱o{j҇@/m]N7?ٶw~W,3ev\%tL^+\|ߝ9y!Ge,XkFD!-Lmq o+/=<80"2w.Au9KK l¸s}|ٔdt"u2h3pp^Gx;T!ޮ%0ҳZM!g*m1}A6XF5 u)6j\ Z:/c Qf|׿?l6EWya3dC\P,aD+VnŞbmlt(I¸qtuy絤ńnj`mi7o0DŽ +oK?t(}q bW* yә8"o-%ȧx?OO `zڱ};8s,7y#m+$1}%Ґe dxv_Ɠnf>3I7c#PDŽE4cYk* &K >!pkvy┇sshzɛ\ ;3 ɼLZ}=V.veJd0o6Rt4O>42'-aWm o3扽z2>Ni34-SKܼNGkBx0 ̣ޑf>~u*;/j6fVPMk6]s>hi#yH/I$Dڙ{M=~HI/s$ã{Y(C|y68.XHt)1JBQXA5l"Z_UzeJtsSu^.TH*ym hFr\/<0 >?m/xC@_U B OhI^\HN]{ }!\WWFeȓ.[Nb̸i/H{Y;ؼ8 f~krd9@_dAyf4X -3JͿ/y6@_}#7SR&# x' ݌m-ʀ PRZFWtpoU/Ҧs~yfm/>}]ʀ({6g<*q`})Nmz$ Yy/ CB(, Q`8є}{~@1}Ap 9? H+ bT;bC9#pˡMe@_\$*u/m _VY|;X'%׭cʖ$P69s9;(|`yI~G L8$Sy.'R퀓v1x +)"@tCi7+uUjakJx!ɲ]p~G\u^xi r|UҁszZ' Vw0 h9dY?)NjPJ"?G^uT6h9s)^+9wYC𛽝WgIM_QL E 5WМB04RPyP;hI+ɥF#"f:̟IXI'0h *ijO v&vÿgkf>Ø#ZveŸaz=CT9kBhSs/N+zͣGgN¸Lo~M#.D8`ir:c :́[ix5.`ZfjrnhKhӳF&nm م ׹66ɱ1B5A)H:dxƙ3tغ>]f19_99fGƌUۛ) BnˑR?t_)0 9,T,ӀF #*6?:[F6Z:Tz?m^hk;ݠh:a1dІ]h/I]@? PI-t@=H_#,qj@hxҩ)4eB@,ifPω*OI8 MBwT$$.ޠLwm~F _xΚ)|CsLXD<ưx,rykϷ7 qZ i9?:ȻtkS]a.]ܒ3Uؖ4'4VB> t_Sa9зbw_~n#b#`K''cV~cKaEC?ʹn+NOnd%tXc@@@ F:p-:==)>SHŕ>-yT*y|y{g>' Lh ^f# l܆УC[ʨ2ŘSljwsպx5 tᇴP8FcGo!ޡg nzL1m`0?ma!>Z{ U@1pp1<2L n$Lx`ăoE{Z} jY|>2|; CRC(-ͩџFQN=l]~$B?Y/ x#H^|Һ H% ˝cx!sx&'Ac>sk^JX),_Gt^K^Q=ὕI0,= \4%ܧ{oyӶal!bLBa!@GngH~"EʅbEC;H3oT5  ^D D hFӰ2G]D AGړ(ߖϠ >C"N@#akDJ+u rw,v{A6WdE5e>cO6I?9saS=&БϜ-4H ϼn%}o^* bJRj$6L LisM - _D3q717I,3y\/58_@*^LmPVQߴF6CIiz#q> %e2B<9hfdO\)#cClz8'Oؖp2޹/Yv>lR'd xE]I#^ L/OoNM1 atL9gعbyʸ.,35/6W nOlDmn2pgbXZf+)82{yALкWm1xwͻ:C mx.o4c Š|I0H1沈zsIYyd\Gl;@u3^TmdLD?B\",4@likβlU,%b9F wTz7T?)W1u'J]t'ԲJGkz|h) TM/-zKG,ޢo[ cͪ~@nc@2A9V'ElD Q]Էq,6m!F]6^|G:6q\q*Ro{Oulhh- ٗsfB&^l'?ӛ\c# oe:}׀swVΓ@,0 x^T=׹zT[ν0o@_ P#a+FO(ܪ;+$钋ȿN '&-xa-Fg';6F7d40>%fNd9# "ҜW.hk"`%CyZL`eD6>qoi123|I+=њ2}}}^3OM&%Nmy9y;]S 2\h*65a3X+^hK\h6c b!3{ Di\EY޼4_rmǬ_&0wcл3_fԃ/ 7fˉw>{ENq\f#12Wg3"L 6=""@%+f "ū4;ҲGM_= [F2=;cMg欭G;<>g1Oa`3=CӞY(7uQ6+7 1<3^U|`efmZ}V sLpj^Y'k2?S:[{i F; iG'xsM M, BB#̰7(>v,:`i5$qo Fl^Q˯Kzh+5XwU`S7i❇ 6^y 5 gyXf#=X @\ C^]4̋\b޻}wSifWyx\y<`x@`topu!ƞ7HeІN6!`2 1S|:̐-MCн7w~W`5b4`35P' xFmZo9KvQl0uq0x\7vť%T \Q;3FV$€< N #(` vT4*a FE)n\ed:Ul=t C01kZ:kc9̃>h~ߙoPM=c*Ѯc=6}{nۚ=ì '`q]΍`uFgt&6ٳǧ )$cv\/d3幒\/,sxl#A B2N؇ѮL4z6*X`1oS_cFLrE i8\Z8+#A`[++\/{wPI C@I?ǥs790XCQxX#FSPn{IܓyH^$ Vmaӥ( eJjp =%P0Jt+W%@; \~$s(i(Y\lXGl,'K/):k~VVžIl'id 4(ĕt»&X"ij]<8 .ȻGIZA &ߞTl  c'H,p%%!M$IrK|eqN&n|, 8$%k_7|18t.y2vғǔ i H8>er~ث *HΈ[/ q5a,#a Y2f >rSx/0%i6?V=dkinjwA^kOba'G&oEeto0{#H~z )YT oQ̅M>,G0;=l_0H}'ҙgcq.+s ix!VοJ$ ~ s,mbvVdݶV\ԂXͧjDaf_N L0!B.fy1wy9昣1}#6s9I^,.@?\}I믿nf}v06 0Wx(qILb !'Eѧzq7GN]|N4iRR-bN=z 60F2~Fa3ƏoV[mӢ°N?ڇvOUme_?(]NyysC 8<0S" i'tu2xxwfGB]2z( Tc!lGبAj %:r0F h61#I7/L$Cꋑ&S&=xė0,6ɨikxzGX=z}فx %‘ u٢+7=rp[S4q+)loE8`L<,{%W[{ #̷!\hʪ#+S#6u^+˂gE[_/f5 .fi&MnQAr; 8fdoGmvi'ӟ4Ivm{`wf3f9r9C[la<yd6'G5?ߣB/r+,:M@l֓зE8Jk_n}J3wosύcW_yǢsvy-YmH_Zz neg_1~2uj549R,tt8.2,SN#FpI5Ifh^XpCAъK|W_|NlKY{ :2ȶ j,/qT`yӚ@J`gc:OR0qm*}ô P//6exTg/ߗL^ +, `*ڶnNt'I[xQ)tXگ2TʓƉ}2qynIЮV huf1%^,slRfB@cQ-WpDi҆wJ \?cUJbg|yRi KbJ|N޳Į|`ejonyNUZxr JyI6߅dV(Ʀ /EL2r9 
uvg:;ead%@IDATb`ܸqu3edDO:€,5UU 駕&0`@R6]lE 7p#gr~ly~ F+49_[jYWqQPcMer vTAԒ=ܹJ36$0!Ɇ|۸_VX90m4y$<|g$Vl`jDYmcvV޼tjڮ{[[=cBa&yFVND|m\cYGPgv ib9?-PdYNo0s8p.тB*zIs±aȽ9rjoNt xya7ʊ`s*vz:yg Ձ;+dwmhclkUu>M]LWj^U6/Vx6%"g ODlNظ˳bҍ)o)k d&Daq)Q3{Y|=AyJEo/!V:t udn|VWqݵ? 8)vݎ?Fm l07'a}[r|Ab-昹Ԫ]`6A[E60J M ܔ)AK>|\ @W^)'ņo1Y.!/DKh]a:}ݣ7S~@Ԁ~1{emEڧ^. v'e ڠͱF~~\6 +7X5 f @/0|`׫le3Sw{ k6ͲP>C]^˹چ 6T(9ϰ #M)Ů|h~2; pԈ`5ԒG׺aTZ9l)4 b4T&eNiG_8fI"  g |e c5sbѺaxBQ2,a9 H<ӶaH^;PKٴ /t}0f~9zV_mFi?G:WX acۍcV€Z|Nmy>Az'kE:8+;@91i@p5) G L,>MXE} dr_^W^hA117K&dy&~j/>=18".;aXSG[=nfƭE|&}er8hWoLFH5|]f8|fg=FmjV0CrEKÇʃ1f( {i'N9:`_VTVb4g'qCw<',]TH'.A/3r e]پNw]N6vDj 4W8grBmbqLf*tKtU]?<  qЙT@ ?b6bHS'>>jc[ڈ/0@w|lQW VW0AnHGHȶ,yS=+,ee9e=S V?{1Cb6퓮y.uI~l(v@w؎=TS7&(W>yCi+ X8ׅCN_,US5.CYGO[=a~nᆈ Zk-U6t3nbcq' `^\pxjO=TtGGӟ[.tM0|%h5G@g0wab"G6#Ң.aww+s1CQY_.bo]wuz?W`rp@XbcVX}ԨQvm dS8ں, @'W6+ܣoV)w5n_[[ULbyGy .cП'?wk@<. E}9ڀYygQ P4o3}4zPU[62_^ˣY fr̊ڠ@Ʃ)Q>- B鷻` .mn/ f"/1PX0 ŷf@Q[[H]6̄=yn29qXrKVe~4ܥgiLf" T >wGClJw}o>xa iO{ve*W{S<()chr@Y)CE Fg}%@NN疬 gs&(zO^'Ur0@? Z.4s&3ߔ}!*)WleLc' Nn(;io/RR7hUmfYZYu:N:Ufx&ruRG0 a۵񚦭ǴwzLd7)%r9hDF:v.ƭUVl5\.)* VVlbafrK;Æ +wdĉ..?dB(<`D졼`;t%=2&f K_uwYFWh`Zo_S~&pPh@:=5:ݟpR)d|k|(stl25 UA(-V2PDDR?=oUcZ;F˼߿̐02ߟ/,EF'jb'®P@<``'L_ 똃^@Dӷ $ /csh#y21ؒ6 |%La #z?3:XqeG'jض3'Ⱦ367R1qFfz]fEשNrЕQ4ikqL]UhrUg&. T><zC_DOB (40k`;' .0hVB/BfgbYhy$txw9E_7@_?`} OZB/!`YE"@ uc}9+Hh%= ]Yr\4~Ϫ&Q7WЗjӂS/(_k9}q/זikc/a}f !ٝ$Ka"ʦJ@U@ߙ4aЪs-+EУfo%ZK{S;7v~)߉*U 񼗗р 4ŔBHpoR(uVu@`,m߅QЗ ?>wmOos'%:kA*.l/[ٴU/S?qL#"ͫoir}aZ%Je V@tu_<@ +I?U=W=g/i0?R0x=6;,$7xV41.nўWq!܇/A_&m)8Skc2 %}iӮCutǽ U(4Ph@BzY˖E5FF_g~ ] P,܃36K8fп|>c[;08DOO'm%&Tߜ6bǦe%Rfvl +LU/Ő~ =iX1aeDRkLEP‚qZwa QM;(9Fz{xcfv:)7V ?M.e y^:krry5٩_J;to]Xִmje(XgI}VK@B,-D8VQݲy6iM=yCg;i. Em?훀 RcSsX9c4l#fVMߠ3-^$-lܧ$JO$sϪ~0ۘЅ (4Р|UыSI{( h-3U>|cY veeAzufdY}O jLÌCDL X )(j5;OLC<ӱt9-c<鰲,|Ďn>BDDZb&Vxta%ߑs=FwYa3\ZǘѶ2\|,|~tԡ=1IYKLNh6gheBm8yG0֐PK)D?fu-&BN՘m#,!M~䍣iXCB (4Рz(ʀEXvuc],XecXU\+M,X ±|= ?ϸP>2 AQB[=j5PoJTlSF% {˦uP/uJnT^;B \L 2bsBU,sJCbW6 K/+/k} KV&-ق'h@M `m2Q"O ]RmR K;-lY vUOH|I=R}oϣDFʦ&|s0o1);A&JH,q}msۻP~MgKnuivnz}7h9Zάa"g V%-ԶY6USJEc"+D#X[m5٣&'\_/v4,i}V7{džmݭyk$ &`vL/F/}"fW^RkMO o8TröJ=MVTR.4Ph@U~= c8;>]8b@͵lyF|e7+úVۥ>+eU/5TS# kor{ޠ%C%{(^S3ؒUf! Mw#<@ԏ~r86 ;dc+6S9Gh1J\gBguyVϺ]N#l*%uq7N)ٺ'~聏 C9.$><-V>B+Hzؖ˲\Й;ılr$9Tb,d[,G ri`kyBOw}GS/2 ]}Syx x;fs`(v$x品]m1Լp{S)=T?&u/P'tr~[b[&v3kk#0|g3IصzhK?7!/r֖l7P`-MQ>~V|~3'M[PLSMj yP4"=}deSy&Mׅ (4Ph750-Z\NmM-O85K}-2{ѰuU٩<: c;(þ`ytn7PyA_}-J/TЗ(@X*%j)l:>+ٺ9A_([ . ߍ2@nS`~Qa vP70A_ʺ zpU.*/L'/a}wY ⦠/&а mn:B9:A_<m 烾e=@_Ұw| |C+9QK=d*}IcyÎ ZƏhMl+KY/ݿT7TOL )4Ph@B0pKKRJ?\k|^F{IMDB7b5_ꭅ50Ĭɓ&9ÜSvY\OBe%$cs +Ig&U^WÖ3TA{wOYƭ KҵlI2dR ӝ2iUNKzH M;+|"`|rDRi #}8$}_/u]W>Y!}ADߟktت==3$f13G}%`v$Olj7?XVN &$ ;~gMށ'G 4Mv%PIi'c$7[Ar`nv7P+c2DMm[MQ)ɤ yfE=t9GϺV _ |NI&oK&66$qYN S FJ;9F&MF zY_Xn ؑWs$tJIx]ogB Iٔ6)/E_3gm &W^ٴWeOeDI$&vvH uѬF iHӤ*eR ;5_醸]|dv7eG<%U̥R=LPaxO};mKi#N-/ +Q4[B (4Ph E|58,CտnNXZ>҈PJc54h\Næ#y7{JK<6; P,7@1R`.r^ˏAx,jqgP؝iy$n 'c`41[!p&K%M6 xo2hBnyǃUˤ`ڂ.sHD=Ss hfI#¦B>ve5/":͙CUQ۟I稬E{{N*Ld@݅a֟t- a2;yԡ#=\6:M^҅eIoe#~/}1iyywƈ06Tc2D{w!s%`d˺=1ɽoBBa,{V& Tvf9"ڬd6wkŝ:0eFm3!CIi U#kj&036w'dcD E¨a5et9^Wc2RMTB@0w+Keu&Z4ذQis򞂥^O!ЄM? լ'mWڙbߧuoOSyߗ cC (4Ph`6ȾN>:No^_M!k ưʦϜyd8_./MτR[~Xڊ]>At((8$ `h͗i=W皖K)+1KnQ8i~~<0P3hW NnTh%bCӣe Y= Gpz=R[[CǼ@wkNK2>/2հ_oJv (4Ph_i dgqVHh`ɦreae[9f NB}+VMA8l VOPSLgX2]_` Z6'jw0 v{ & 6i2}MƔaT2s+b8WB7%|eP}\a5_3r;<.-l@mn> rjs*&R눰BpWk!0ɎLu7L`umٮ# if 0_@b a9mg3edIQ0k;`/KY>yQYk =k?McAG*Yru$+SRγ! 
@Ki- ][[&_S+IXk H\慝sMKafB?b!LIFg xYo*ta58li!a/CPž/yI<&v#&ۘM9(TOήBzEC}CCH> -ͮ0`'k>ܛK;+x}--yǴ"68wo%Gukǹ&~5F[ %+JR )4Ph@Bր.kk:"bn -_%73iQH+~zYl"7Fҩg OZy7&ɓn(bβZʆzl@8 L:T/<xMmgFϹJaXHe;rB *Ay$i+1|siquy seݥU  >"їzu}YY~?R6mc^\Mŷ]waYיY ,͟" o6ܝo%@-~Z@oP=so* YY@%!Qبt*_,n@BNCPIsmfR=+T_%*.@2XP0o(f5NoKQhd}A< ~oɉe,Ks! V*`fs~\6߷ZZ˅@e(VB?9 3248pGvCP'gB FcV%Wn_`=6L>"u XI[tc2cbip-iG_f zsa4a?!mֳ3.%l e*h\Že#L}O7*ӄk3p{$ y54؂:1u(5R41v"\Uw0>(m^U8Ԅc}!<yc_™<#XkɌKY ~&kS\Yݲ KK/a5IC*IMUUdY08%e}{O-: 2O.ܽr|!}ρǻ&O6K&z솞C';8V>Bo-oo?"B`ٙ@ȎQ7gMWS[Y>P-XH8!8YOR^P, | b67Vd9*̤z kl ‡<}; $+T x GRgizda0>j9Ky`p0O|?;8!KxB̠룕&& -=Ȳo)xf㹾-O72bޖ|2n~a#}`NM &r0C0cۼ/m6LZ=&t*L&;8״Š1e't*>m8"'`k#Ct_xK.$0aBT*_/頳/Ϭ30`@4 3D믿~E9 4`7aM%ЗV -Ȳ_H.#S͗/B/i/~{2\06Ls@_"]/Xp- 8w">蛶`kdc&\fFȜR%l̛{ X%$`>$j}Ha/.i*K-2.ٰɺ=&&%o\ @_ruy%yO0@45$#hSQM1CePP?ʻQc E _XlUA'i Vq7sС{WЗ@i}i=}Apa `W) PWbZ %U6kDUtg=y^'l-zmFycs49CzrGWR`VuqbəSFӘ܂N-uLs3;Ip[~_}a3т`BtgQ}q{QЗx:q'_|[ݣL6"Yo#4V@_dcq*oح5{UivJ}M-S~huk[EB ȣM^"yO0kZڹ̀'X~Gև~^ɲX6]:&I:M`!<-l;6)|' 0k'O\m: SЮ:`%/kF([qɯ8Җђ&i~:'%Hbč8Piab_Ah%0$}0 +m6w+w 2 DX5$kA|Dd@ 0/9f--S$lΞh3IktRrazDe\bW]b3&NSZxv?^r6oĤ®.j.CHqqg$n:<M+ܶxAOely62CY0uxzLq[u=O0^rψ$ YĄ 2рI%RT1~xEJes]q>|&'6H3^o~an_4,yb>sDyi"s+}{)^ҎLl:_8 3MW1/+سY{D3KY+K h&z4]LhdJa3,tsZAZӄ|EsC"0IbI{~|qiaPںtwDtM˰+7߬aGy5eN"W>,zw2x"._Is!ZoG枫դ |5 }Us/l߯Y3-p/!@{+?>9_.N1>tm l}e*>Yh? $e ,>~gq$%}%bpy0"r^q{`eV& 0^ gOٸ,q)zʆIJ\u"PFsC c^2Ƕ ۰!'Mh`<fcJp𖧶PVWHO֟s*c jǥ=ؕ i'IIbH& @\>OqB#ǤSGb+/.uX\@6Xr.@vZ>Nphh= 8CGj_wHf < ̄IDi t&xLWO^瑿 9ߠkmѢB+IהkŬƇԶh0sY.'fN0}7q{B!I_an+ jmE;S968b'W$Manbb-NEۨ lk)e1:ru4{ߢ=I;\$4R=4?H? !*Gv.LTU5jN:q 9"s@a 31egQ>_H/FS|6v}C2`T_^e_K/tofe^s6C{.Ŕn/god6-l>I&,'߁k)/W͹H_L>Rh@f5`Y4ع?Khiv KM0,] #*M~b%iu0 ?l8 ZjAere_UOF]쐮lbk؀<K_eX \&:W4fЉla1_J}:x/sRZb:Isʭn%l'AmhP[7:Q`Ջ/OP]tXn!,W2{P`˲U |?5 h10mX]0&u%zuvf5`~Sv s\/|?v:)l2G @Cwӗ,ՕY4[tsu&m^}UϽ.5%7.&9&LCM[nUN^S*-K}:VؠXĽQlV"6?e1w87r6ޝ%/(tN !ǻHH;/kX\:[x:T]{PQEⲡW ys<8]:%ewh:|{ e?s8/~;qkz7+c _veuOü  0_Kwaz~avovlA|-);tq)rTyW"5uc R Ls?~|~erNN=FD;.. \h@B AU>ЁSe7Mfnpzl,UVA5Noxrc)_31g3fSҔk| @;& l"C l'ns~"*N#6ۍVZF ùF|F@F ;L*weְ.ghiJ@sv5A#l*S/!@ `+5nMڨײ~=qC2piifʅ|N6an:(ma:0KQ V2~3bp!`v}Y>O_WcҺr58bYh'ONk7[# 軫_߽Y^i1qw;˞Wz*Nb01p;.'w8ݢK܂`d5R%<\s2 q|<=OR䭤&o:c%MC [yS:yWt?EdZ0-$6YQ'`3dL@T{/W&u5V'<Æ s>^+b>`%6[Fcǎ-|At 7D}Q 8iaFΎ0~34S;8`|;s:n+{}Zh!&>n Agљw0p]wr,r3=Sw7aW.ܦ~W_+*N LCnmeW_=ztt!D_~ ~WfE+ H@>~eJ6;l6'_$נ ۲dupW$x\#s7`3S ~v!Y󌲌Nn{@>`W2T0l ߴ ʂsr ao5;l[F4;[itxr%XMl&Ajt׍,3:}\ϝt. `SjDh(V7 bucmΒz?ySR M릂#6'x6++\,Og |Omq̦,a\us'^縬Tv+Bx&4 )rƒ5ގ^7L>o6᪦(-xV7.OʩB?&Չ>ODI+i `} jRc0U۲iIˤ8B;evYjjyT.ߌs5&-^'.&" ąQ/gu-h `N@G u+]!3u sDV+nlT[a)S ?g$Zk[/Z}գ5\ʰE^{hȑFm-texHjzkx*[nK[a@,Gϛo3OAL9~\e[lhСVXaڼ,G9묳FlAoKDv.ɕeT4/y督QFeYeUqivWR +ϡ{̲Birmx< Q2 g_#}sԎbtWF+ $ߪF?iB; «Ի~wOϞG@_e)-ll n.']$j)`/q A-q?aFfFeWVvg#@K6p ?w wT}hii@J3BH/,65l~Ubsh$q?I0"UʼnG_lzyy?~ł6D'`.I=xlRƆ`@Dxl4shaeO]>U m^9!y㲋4wmc[an+(~(fbF=\&}N6X1/o oEaY:vvl궫U:wI<% f.|47,A{RXyKr|lp `^]tG{6ڠ1?4 Dmy\OOoL/'zM1O/[ b[J{`R5چIsH]/F.N>"`/vA]^޻} yI{mu_KGt(K/kO)LJ ~Lޗ;2c=5s_X7=6 ?R=ɳS7F4`4&VYgE[]/ u >v-\9պmzȎvWV柺Y g@iK}3fG@%ސ%5Uf&m v_h ne Y=0wduԂ~7ߢpRW,xq9&.`/Pb =ig1M #19V}| ,a8uE +<~I`bLr2I:)1o@u*n:W. ;TJ8v KHFבoj's L'=.'!4cEha \:4:459 8.V0OD~zC{Lh]2~X7&zv@Og0-fKoHe)!lkg@ ~%K'utCMAD7!Fv`#e ePYiq}wj}<рJcN#a|ɻnh} C^S>uyFz4 ?P?&Ws_8! a`Y-2@G^n' RV!S=>+`:9lf `r|(_J|S7۠\{ ?TV'ܷIA׺6VtgDJKaw= EϕV{ʦY#:ڡh^XX.׹s7O96o   Cjim D ])iKM.+~]Fp9iMW whu7lK;;He湭F#.5Wʈ{-W.zpµ 0)Y+%_Hvj { 9iWW?fu:}*:^~sYrP4ha{\6N9V%H̀9 OePY鱫}l?vǞb8?l*$ozpջ3ķsM57y̕ f١ L'!]q6k&vo ( C`oCuD@fdd;{:Ĥ2@N5]ć x+2 iL\|'li_wK{ۻhIaM&u_xci&g)a+f3®<"RIpq 3AbVȏ؄Aa. s7:>e=$e {gmIӴ%Z]y˜HsdA4(Ήuq[0d{K<8RIBڲ/ :CG 8fʼnTi?D`V)x? 
f&N396'KVTRՏ>-Z+3m-_4.Iy0`r=t G?g& g!4O>HDŝyqO[W&^_]f_Ś $V^wOi -uN9y#}0of22iiW`7\O7Z4ƉMk&q҄wXN& frA7wSU3M;V7ѯˀwD5 yf"6z,nRTra9fqjȬ{\F.p d ŷʌ}<խ Hin>*6W9N&ϥ|'n25M{ޕ/vAW|ls&5,/iqՕL `3˄-osk}6э4ϗ+6Ns9.ty⧅OޥgRW½R z ㏻ߣ>}=رcY6pSRK7{w8 ns& 0wH_r6{5tdùf1Ǐ7.;]Hb„ .%@h^p|EαaqUMiehoH뭁^z):s+;vq=ӎRqh2h$W;l v )2.osm" t!W͕q*)EEt?+}m} i*kpwelP>8aQ<$)O;ޛ,Sݛ,I3w/: _CRԄ|cu*kG},QnZ1lÚ]Ϳ\:໪s_&Ҋ顧)%}IQ2{ :ιxɤw?l.퇉MYP7;!.-祶Sl ;dO0W=T~5]c7ssY>"auG&B7}(:;sF_0+vw%$E/; V˘ѱOmey>w&0B K vY7ne'aRZab͗R7|u])9kf1?^*'})=˳2)r:NB@zqS丕|Gy,! _mǦ>X;N޸G4Ǖ%zJ&Bk!eb3X荅iMi|),&9AWJJL,liR>&̆9[&PcUPH0k~ \dtGG3Y ¦uxW.?VG IIb0{L,?)hn?s@O~^GlA48#>ܙY]uU#K^:D<^dE#GV !3 !7vr-uuu9;*V2 СC[ p-j;Xkmf N!Y3+x}ԨQs=me]mOryO[HǻЏX*SK1yS f6dk.IN`{| 9~+N')j}Ь:u頯'*ί?BFsTvW˾`Rd6hRxujV/ J$e1XޗrhN˟t+ϕL9NiZ`N i>\"Tf!`yA%6Q[am!6ax$_v_u`!tSե}PWWdiRi2`H x@ P#` u{ҏ,e[YdzrJLkYhƎ o7\k&`Iawc?0i򷼔g~Бgjar2^>)/ݱiy@ n}CqvOuX)3PO/' wU#sY*6ruCft !qtgj:̤= *&&-o5w~e,m2f_]][ l9%6^7$Rar)My`C"زek h_*ڸ{6o0L1w{c/=-e:g 6tZ>•G7O>8=aYΐ dM ~x\a꜕* L}(Y?ʙc mj@1p ]8V_vqGߩk6r% xCVmڄ'OB`6AmZVϻyؿ$Ɩ0 3]'%7i pټXN:o\puS6XӅhs|p,nb' ~$Ekfw/|H d Q((HR@rH%$ '*$#tvwouٽݻg+wW?]vYƭ-i5R$if)8N:0%YP a| $"lX SR kb{i_39@Nr) C$g~ F6:C?cT@p} 0Ddt!Ѕɱ,GW֥䜁(Uۯ "7Y>b*~l.lT>̮²yn5VL?*0 (U͙|*CZDfusM,r$uս8֗?.ʣN[;ZY$#`ԝ?(5z*I7$Mu,bvU+ ׂMܚԿSyKr;UfZLKmyƵjuM5gH?\g0zп >yyƝKq=!%I%P?@7sI +`gJ6ozM8p]+HB0mnƸλ1  X( .mt@-Js ~a 59)fV3;OޭZ}Ta=<:ٚOUqpWוּ_2m\vμ[M-6U7;ſL TcjKvjO'Mv9J(b d'ҏddt^x9}U HKPs1G d6?Ҽ|/A_0kZhfJ!lk&Bz}xZ~%0%A_JlZ}KjV %g**5З'K@_b\З`Q}JP^K{YH恾&e~䀾cobMKUS7$ת.gb2fD0 .X0o/XP^C5g(%׫0Lh%v@\ $K^o`]<3:6CU)v;oIc]- K8fjA_( }FvJt~%P͌$hH.(Vv B]}Ͷ,K\n)cUr (G󫼑||@E= ]F?=Lm0]5ŗ@~+/(6`Q ^l3?;Uޓ˾]e-1݌^AG :I |^gp ( ܍b~Qkw8>P4o}f m@#=>b78 }Uۭ+^}:jnIb/jP8虝w.0}s)O㯌@n@_5ݟi,*T:Ǩl ${qXJE 0qq^O(;}Fuq;:z>}q\m뀾>5zK?n!F/~*`D~zlwƾaޥN#wKX. phytv苿+T甴S2VZZkvߊUÇf}pkvg }"i2$zR t:$v!ѓKE/"=r=f6 ymH H^$H}3.1 7-^faYta֫ Т\LSD68 2^B ?ns/;p4Hi@W+!e`k~vѱDSCW9|'ӎ,~  )+ ` ~#MA4 лO:z@kGFZ["Jgc## =u݈Kh#)CY@J1k! ua 9^e l84N< uMl.S.j0 uiC$`Յ'j ݯSH^*Z@:BmXcewSGԸ";|Z$N0OR~<2j匙kItHx^ln{So{k +'% yCc5!bGfC6珶l}H .ToRj"%M6dJ_dxǾ~)-vJ #h;lPc."(ޞ\ud1Kr6kۿ GUPQ5"Tگ\t7C =/ܴ F, n^fS 6ٻ=ӱˡ̭/V%M&"Wm016\@ײ[>6l ~b^GbD-T8a P-s>k|[Q/:_jOaM0oH?z{q_2@c=~7>r|s)oQpR h] !]o.C u\Ϻ/zYÏg>1UA`e!!^~"ayflJm)67s^imEoԦ0mב\҅k&Ś`trt} ə X8$Q8 xCD%v.J-wi-eldɤ.;^J8x<ȮN,*}PT6˅X#І[$h *WI0,مmf!=}*'q21OYd܄U֥fNhfGH2{R9'nD!VrJ @K#G);nQ5jhf%|Vzg,%Qy2ThQwL,{8n+TQ'Q,M#m':$weHPjTv۴𧻓2be%W&F3X۱FOgT?2mCvܠx]s Vpk!Sn _G97w3&ռk5yvͫjűY9Ph_0,W_ oE8B@1a()%UZ 17f0߇rF[tABgb] (+NWњ$ xƣ"&EK1V%g-\Y$wmGp8OڻH4qnM`˨߉h;k0Zmfl*`=ش xJB@3O+ ?[ԭE4"ТV %$G mkJmzOt>KYED]hkS8IƛY#BI!D Y\}ާ9=&z(wE;r6Bo0lr<[hL2{ AcO}7.~caCO9_CUɟ %)'nsk|~A}Y[2 3gWWZϨlZPFh/a[-WLU7!{Jw(i{2PlG q"З >}kL7 PҬJ н0@78e㦋4t!苕ێZY20s@%@2}qCEB] eǦW^,zςҫ 4k;j6SnKXfLKCҁ2/%pNh)=Mo*/R›x?GV廥䩶}\f/F-m!yR> pm5ThN=Lg.]ؽq1U"\*QN5^&Ip>-/J|!]"_?fR=Qfp.Raxⰸm^v*3gU1TXN*}ϡ F/.`V/tCЗ\Ю(U=[9{cɮem@W 0%Bb-&F|zF֫P{$?'}!@T岵ij 3 "03ui>/ik\ WnުU)Z26`Z斀)+SB[yi¹)z1A:?Qo )ϙ\}lR `KzoS;[iq wrFIi[rl8hpx=z:,j(̋#. FԤ|ݹi=Sϣکy6 4슞vZ?FmB3r (y$HEeM/;0\0 U 0r ܩ6LcKh|Y\=yGeit$iBcQ. 
wc`0@9^I 5$tU 1d#Sj*;< 8 N {9ϕ2?R<,|uvߑzO^W銻\@lZ ǟ딡d^8]t/m#تk?҂V'W2Q}9l;?_@'m*Ih R]zIw#3AkRۚq:lof*$T䕲\]yMtT5)^@ L&̀f0 n#  PoGI=Y?{C6]/; *N)Ɩl2.ӑ^.{K>з(il|$/DNy'y?ZcB3/A_EO-̟ؠ)3 i>:_ެ~w` ޲P8Җ(xٻkл~]P̴V4R`r^9p5B=7me:eڪ1zk@Kh ^L#n+hMJ T=<!Q1HzʢV6=5e5y,՛]kvgds`$='d-V fmCsj~Jc6}A*F?$/Z4ama^zo]8w?u|!kh6 yĦH#vy/aM2gZOd<7U˖䀾ƕn苎JI3}Mn;n6 }]g'[Zbm~NKD @_\'R#)<JkectOun3ɻP#S$j&n4V<Ͷbk%lil9uz:<"๰}mvTYnD,krxw45@ Q~*%m/R (!P/ 0‡۱!,%TGTz/a \ VŶؒuadcQ EY2rX+OިzAb!{ @r?sr}>f8$$av BHUֶ*}#){ZR/j>%@_hOdgWHc1ߝ6."⤀9]jݹMXHs!寤Bc:8Ltna1Vڤ}G`?T;YB# R7&/3Љ /)=hF/]!p7@_̴eՆb")Ơ#.4'~iރ~P VtKĹ<գ?q/Ҧo}K/:#9CGƞ)\VeO%~zdU B~!M-tyL+oʴV>Z%)4gze3=!zzueҾlm!(LmVϑt3ɝi6b%tq|x8~YiݢR误8lCb솃$C3T1,'"! 7`Wh&\?˶^*D~6iK !`"%߽/ <9cԎ]n; hc00-q@`Ë%EJ}wm c @k6#-[父<]{?7-v0BqRٝUxQ 7HMmI*{u+R1ʍ[Z$X;t3Ks*Omkxz0z4_ ˬ4/(N."jPsUe6/.WBZ3@Pros^KtG\`Nv{&m[#wjܰ1$M9Q8uS5M&I bƞU]6F)v3OX͏{v0?PMEh$C&If;@Ʊs #_&8D[[Ҩsۿ/fH#Zv ]FjSݶ!'pn7WDҭ;\R۱i*z_SɎ I?.Z]hV]!B01 Ю {<_\6[U<*ThN^CԮQxUG&i#qth>h+f /qeA߼ si$F۝z>}d;Yi=E[d:TdA_Ҩy-fs_9!,fA_"Z( NXQKKy͸ZdD z6[96HpSu~jlԗ/I.)gƚ,QG_7r9?٘2 U쓑D@@/m}rfG j=}yP1BYZE}q Wnu7*}9i+7\`ڡjc vvqq|H}䲸 ^h(No[$*)@_vvs/_H'TM`pqHy̛̭9[! r~зHBu淏v Uq lG9 GSo5Vwg#@ <@b>I&/~}@? @=PD-U/mI*.S+(O )}+g ?o:,y3>TlYЗdt7;à#.W,cM%rʪF&#>O(vtʭV JU}RE$A$jѢV FOc7<}_>h)ga}m?>4G y}mJFQ֟=B#=w D==sne0Gςm[y x-aHY]lNuH 45OHb,!V8rKxZFus~+MF0`esn-Ɏ聄PWhiS Zxd /﹈y1!L\wCq,)-f/ly2SYW,%&n.PKEУoC fPa${!Epk=_q&?$Lie1'%B b6o84'22lڷ=+P`2ՀÃN<0?;Y-&]z1ͤ&R@u81qSd.#ݑY/ z/SY@uq** J& {lR;@t&-- ~R؝Mz؞){KcG-bm:IؑĢK5ni4,HJK;Ʀ#!'Eg@xҿT)~$qO7$a^}Zc/jh~@remiv͕GiAw r9e3_)i_̷UXwC*^NQa(XE 34:襟>tlD7:EZfijHU&(_b`kPUmS_Nmiֶ~UuYL1h­K9.Rc/&AuuG%uiOөO[NU=*/4iU@͒0 %7ůVIlKԋ0v68&S/Pe3utnQfi`S.z yz߱;DH )T:Tovlt`ʫG[HUȶ."f &3y$EJ K:N,Irp:{=O$ȸ :?-`f۬*y`M)BEs^MHE}d,`]B(Py< ؆APaRI>0|&9=iWT|˼I&Zc دwKSds>2>~d`3t$O#@c*cBWcdWvlh&ͤ.)K~՟yGwTO-;-ko9 C !ј8s@!)#ϩ7ϔl>r7( NW]K lϒ7 {` =Mߛ$AQPoڏI jH*Х;s5c{jYwS8t+x$K]%S.\8^R! ^fy2G΍TY2Ol[t (-uC'l~HӘLP cE`sx5ov6)ӌTmK`kI7(4ac}s]yϒ롗bṿ(_PȰ~vFfC߽?A6oh͗9){P"&2T_4*+V2%M.+Z^\'ɑԾ&coy2Hr>~m@/@p1L%6?/ (HdXb)OjE/KIϳͻ)ϟ=^ ƑFTn>FۍFD/$OK2 ٰHHRlmM֋6mmHUi9&/qDuDuK@:mN," I-ǯ:*wx{.bX y %DGA7ۜ'NJk^ Z*uyOq}/tdn.{kToQG>-"T<DuiLUZ!=H/+3RTDyHϪUc'<Fͭ:wZ|sY}EyV `j0$ jt90`S.QH[[?1NW_X>NfҼ,>ŸLR}4>Bai%{K~p Y`֎p5WUjwo̿b(u65Nm3nA zE8*jWԮIl/GP*VR̳գy^(~9?} ybz';@I\LОNP ^0ų'h\7ԩk)Va ̭J:'RWt{1*s-%m)!n- ~ [6ɇK3Gfj3>4,]E.h36P$£~[k B8;!ل$[E҆r6~v]@ /'3^Kc2ϫ5~%Yy.k#yH9 V^0 #5 Df\<ü!:HKQtTCqHi/vn:6{.mO P~Kg/ߑo*oGOB+'v#l i~e ]a,f'19P0 }D#RWDlSf,h3Wb4Ȗh sP*}ipkV#Ojm^$T0iXA[~h{ @~>>&=u_^Q"uXN*)kS &GgRD NQa.>Q<#YĠ'y 2 QO)*'(%FMߐ~H[b{SmtN P0Nam `0{$ͫ)o᥊.OK&qΕ%FH[ْY:Iյ !Jp8h<S {$)zzVqjLE¦VaUk={x2+P^DOnAfQk`6Kgb,!Baj턶-T-;UxD1-C ootvvE[?M;=%.5l o̔s%E VB\dn[f%lSߍ`?r&$vjc3 P%o\|.M.D(mE(zɣ [t6RSn͕0Ωʿ,-/Ա6fYr'붺6#^.A1>9_@!a,N=&_+7Xz!̄nZ0"Mx#@26ʛnPyu]T_Vy: @ ziHozuoz $mzҮ`^O y>+wW]cIn^8n xP2#ox;~`Z[Dd7F7w)赤V`~.bW(Vy3?y"[ \iI 4~NE(Ge]HBEjk. ^d :ȤÎ͇ U&F`kj5`8]rӝiY~Q]4*<%7GnSP0.1)vq'mFz`X0-N{~]׶r8aIpܧvJS3˲_.$+6JS>G{Z6<ݭ]'0 mo(R* BoN 4nǞzN ye1Ya@_͘9ˉ!:YU1>s'ڭT<*&=>Q;)\}I?+Hl$|# :=45V&/O;`b(K^' ":3z3y>\̑eڻɋ7ѴUBچJ"]ַ䝄{Rf-V JU17p ILduɌT(7}sPMoc/2mR.\2r(|@#a M-}kcdzj1GQ>gi\Oʏ#,o@:JYK H+4HJ4֫w@;zĜ1%'! jV=kjҦ0UQP\@vT6}jخs QxrPv+L6jCMg'! @Oze`66/x1zT`W5Nسd6L^J@ _kYBtC6ե5M-hՏ8b?Xp;lyPw[& 5\ !,ƴR.xc]roeQ?˺ђ+LZ:ckFfF k% CoҥsÓ9j%UYMл1őq}[(c ~*bK==?E.Ly7 iق &.rK ~o;]ZNG?rKM&`oroԷwpzD=5=j%Jסnᘴ>i-e /!lJ! 
j?[~i#2({JN!V֣ѥ<\u3qhT{@t@T.RxDi|3 TkvOݵb>\#FH`!Xy[ K.$-r`SD_P/MBLRZףȫMO^X)R~+[#s'eGHXy`,٥%6zж-r2+@cQxx2(y0%HXBOuS&':a޳*>GWy: 6r{ϓJF=(\xt7HWyg{H'4TCu7$uGZg6G 4Xly쐾{[5'G-y&Ok HȦt]DIZ֨-zr*d.v$'QS#a*.`jTʳ }Ϊon;KSv /R/?Y /5ҍ!ȇ Q@%7)neuPW\s_/-6VH$9nxƗ.[2> d*#%= URQ3”+F/*`lc}@_-Fߑ[C '4*(T0]M[g@' `)3LMI!2eǍJ$2>6 WL4fLHpr}<׃J*qޖ YvXRymջ@NBq]iE]q9Yٛ*Wvr޻-].Cg=\!9jWS_F/,CIsj ,淙&Z0$\%;R ouk-(u `ڌi#2$CnM9ѫ9</8kgC7'K pͩ!lu7fzU߈Jj dV.fUU:I@imm#h=ϼ[PU>KnX֓7 HQM!@6!Pu7Ֆ@?lBű5vy&LіߞF #KςyT9%N77`? (Εw r0#S JUQ҅lه`` |GA 1Ȳ3JB/9ĉh#^д,ﯕ^}!5":?8A kEҪ*0HM1{@9pCuUDZ 5bR:Gt`qf5gIu7k /KWljMgPt$C:5Uym}{# fV FJ>,FEr9Ho#!|So%owi^)zc ' 韪lSu!!M:4~ 3EoX ݦ4)1q{H: qŢ ۆO!~[YS?EaBi@*|@6[7;{[O=_y0*W:Izi@%kY_`;_ψQ̶m:꡷qnxa99qL#E7|i־vCv437!(a=$$$pve5$G& `T/he=͸=6]Up|+1 ی$7kg. 큹m#)No!7/ )4U8M}D1R.OL#qޞ7!`=xشnr}Sne IY\j]AyR; Tr !e!yJĪh1ݐZt_AW&@Od遚l%ɳ =Xdq$,ɺu@BIFF[!Kb. arBG ޣЧ^wA*ŲoRԲ. ؚxwFs@hnS{SeP)NmmHeE}ΛUfaic}ȴ|@0󶇾 6B" @j=bSz hzMgL0#è _~z#ý-TE<$Us#MK?xTvA!n!7\!Uv OƊ 揚ʿՒK2 F:AaN;No&ݏecvco J 0c 9b#'`tP*b8ZWD뱿,EcbGh iJ8 ~O%ƣަpU^f.̄0mT&j!/PZBU bI<誖}] q!!{~{.9Z0[b:GUyMFKz1V!֖?QTkb)x. ~ ҇t t>GQRaQFZh@Z%(fMn$),rCoۚLKbh2޼Vтey[e]c6ɉ֒~v OH3ʊ)<{Etzmrff#T^jG龜`'sehcx6%Uh@T GXTN>QMW^Мu|WijPW]h$6lxɓ5@9pGH|alq\"}D96#6{`?md2H5M (oB뉓9 ^pe+\5Sn&YCҡv2u5y7j,a 5 DݩrGE>$.:\/+I␰ cVcsXPm2jfc38Sۓ=G D]Hf%i3Xeg$98c`,qo/?@s(/ٚ1p M2j=Yu]5\O+'WBsƪ3';gOUYZ!cxMΒ]hsE/mmRq vwwcd>R!B֍=ie\Zj%ȻNb8!I&*U7 WMQ ,r\ާ6`,P2c?U~Z퇡jj1z'8bE'XS]*'4~j/Y~<,8lB0FУA7غ(L<sca^3*7"ۋWWsۼ?ǨYƌh?G*ۺ'xwSig5%]Ԧ=4~f8Q}+i(k^hƊ;muo~,*#T䝂evV] ̱nLK5.FCe0|/'L'+-)<{Iv O彼h緟x1wP ȉ#^r;GwGh91w[O9>'[O%PQqܞr0Sw4{ Gwf}!,q7]Z%*,-8J:-Q!Kq Q_*)R 3f̦}F*"|VfY 3K>2/֖[zO{c3*BĘWijq4xϤNZWI@'ꛭ1ּwZ-a{ Jpfa%i;]5ELusf9<#݆JKws QhƕE܂qv4t-6?Ʉw: ?%`Vچ[r$o_/ $^ޛyId "0 7?`:qfO՘Ym]iQ&qYxlh/Oc̖+|&16|eƀ9闝WQճRֺ0nfGP@w,[^QÓSHuŠzjߤrIUsbO}YL_MfWTǪҸAܑ +3Fc}9L*"7{P+~+RTsz B$)Ҟf.y}!ervW|G ||X}}<ݡh/ݝghgfLMs4~E`]2Ԙh=WqG<qt#6%f⶝sa7rXw{2*V J`8W/o qߟ+^%2 !&1cbB7kz_&R6y_$$Q*Zen xoHä,ޞMG|w32B=I+/zv=])'GS%6W|)/GX{G"+O5w$o9)5zMVu6߽rs8Į X页KsH4Z}Q^g`$"& "؄@DB֓EYG>vx.?Iڲ^T1b<rLكjv푈""rbsNCaN*:!Q UI qK݁pg?`76Qh̕94RM9۸U85=Z{JϬRIoĨ:݊,h `m_ FXv{`6TCi-`(@@u32.BƆB/:IfK.\4 f#QfyK Ӭ. V[Ԃ?v DpVqz,41P Y{/-m}o:efȫu+BB :^'**{bUfƝbd1AqVqa v^Ex?7ZR︦( zg&jó6?LuLYeT….M#zgγ+>xgk8[nCq; ]­8K4>GntD˺7cAI}gޥ. 7t%Px醸.Vy.qs !`V=SapߚJߟ 5FVs,$^8NJU} 2-Hen@*?7kehFjDQ 7}6 UwԴe(X#b@]}IrnH[5Ms;hs^eG3n.kߗ$N5,ʋfRO=ߵ P_ıV&n(gio4#[K5h/LlBP6l gGK%9v?]|v1Elo{6\\SDO a~1@*W{H䡻5? ҊFMQ-Ʒ1cicm:',tdKFk;$Kan+0l#Ђx[ mZ>)M&!R]i̕Gi',YI%TN/zr} WxDրq`Q ݠI@p;0_ڿK s 8::/.2>t ASבx HB!7T9wQA$E)l۰egc꿂%o\J;\c)_63ۥ|ı@P^<\:<LIh?ٌ˫^z_uv$.xi׶s`?[kbS^Y@婉MB's~m5QnR~ ܑ>7ı'JOQ{wWS:$T5l$'*Oc|jKg$kc&p)zQA}ͭխra@Z%)f>L[XE|B08i(L&[J'-s!m;N0&{$[CG2}ѥ qxh2'E<ƴk 79LPǴCI Z51ؑ]ʤfQ]K|lr0j<ڟyo>~,KMwA3X.ŋԴ=wg̓}a~o}f0f˥hp ÉLY! 
#)^@!k/I-o6.M[V J ^ME"ESl0Rf4[e?.YfJ?#h;nn?urԼ |DJz_lCZ<6~L xrǙ/C-JFʩ};/Ƥ)9V^tViuIo]큓 ]xU⁹20B5?%;r[O%}Q^WfQ9 %ܓmZgю(OM2#cKa_0Fw:7270.a>vmp$b4;~{\SO,e~ EZDL@ !o0h },?0k2N}5F I;h?jB 㻳]5]52tI۷]W};aQRPVsU4][u߂:c:.$7}"a knC[uE*ب^0?gTj,KBo-T,73ы&h CGbk@tyLoͳX0=I){{7G>H̙7CH䭲 _5HQ8s%gQmn}D5R*Ӛro&?`-K9* q/dt\ ^ \`se CL' 3Jܤklؾ{,<+%yq)ۧ:r H6=slX4$:[Aq8l)PAUnkwZ Q=z^rOF2MQJs%\0T;;v;n_*d_u 2MlM`@c*ph%6#z&F1+(kN@\|-^|C'3`ދ' 3%蓼7"@ ,YWG!JPٺ$\HI^ 4Tic )NHqb3=>$vO:T>pit;*`kmG^J&-^RݏQ3+ LXUf+v]L}uL?V < 29jes-Ϯ h}$LW:|f40[rwzA}@꿶0py0ˏK q+쎋43QDFee ^ó)ki7IPVm|U&;h۶{U /є '$.g4G,}G%i_Kѳ  ?GwjqOMU˵c>A9⊦KJw%T'џki羑Q+<uK\onEvj%-mIrfLu8+@ hhj!"D CM(m ߏ֏<~"[I@18!ٻL&-h Cױɟ+O2&!{brOWP<ͫ9 9#u~3* '*4r\W;nR{ 8ib0{;~LZTb2 QNr4n$/t)_Qi3E*HTRi-FU`gfrF~_?¨8B6D>\A@v:<"Qy VjQ~r=w4tB{@s`yHf(A_"$T\2~ʛR8nrfՐYZK> a oVof.)C jj;|{5W =wNojd`}봱KFu?}B]zhQO'cXH>!+"_YIk~ ]ݖw+z{:{" *vʂTо5M.vlʛ5ϡ1eB*d,4:'O`%@P@$9/]jsipZm,6j:NaxR}a:U<.3NՌ1c{`:9y!GY_7RySWO='PunScnITk>]=Ppivᦶf#al 1:0k3B {<,\ߊl(=%6H 9@IX[;ٵa @h߬9\kK!-&i۴a7nM&EvFYە#`w:j #$%T nVkg#29d$<_9z"qH`;n !'vtMB0?y_bm$&-qǫAB;2'vABHdY!y.H Cu3FpQJ[łf)3 ~VwΑ8\@@hg61y]^^&n:͵&W<]րx}K 'I ΓA]4p1 xm~3Dώ:Q>s>3t!/jEWF_6s;~lO-e[%—U"X]yH ~TZݧ_ j:Vg<9^3IF,H2{B?5G7q+wQjmuE3G [{Pd5vDqc)] k*mq[Jw3۽}! P/?WYR=I)E1M9U'~N߆7S]E֭y 5Yî9]1fYQApŜ׌]UWEY1kTEb 039U}kg%D}3]]+׹Q5J߈6'ɠn@<*}n2 \/DQH~cTg?.b^bn+˹t5SC\S\dF\LsE14a=kˌ[ 9 {G(OzsmH$GBx~Vv 8o4cI(lqtӓ\=%iOC vkĞm7%pkh鮆>iutn{ Pi-I )$:F6y+a+ݗ[Ulu͗3>UtEv[gPa0PZh|cG[2ㄼJ-ZjM*4ѲOI{l֋,7٘Ƣ"}yH9%o!-O}[_7 )ѱi %,}`Gjؖƒ'"Gwb[/M EBY&k{ϗ'C}i嗽@ R,nP!5:#H7k[[j'LًtRonbEw%%A#&z KKčEi0/u"Xul@$@&SRR2p.IUGl G9HF&b/ߝ#A.s,!7`80ԷMk,y'%trxH+`! nK ^X&q1A8()lq Rqޔ0%y̜j؎D T,o"*[gC}j c DQj~w=91u<!17 =La1V}$~rr:Xh&n9RkS>z ~bĵ邂wt' ;6Dӷdvz{Hs]}ʖTa/.mz>`FH8R_ @_tRvzĥuEati]7Vʉ[tvI_GWnw&(HTǜ:.iD[wҀp9W!UBr4B/LP*%z%0ṙ^Wl*\@ Xk,l\tjT\HEE]\lE?m /^y&>]Qf~dS- }DS溇WhW7(j/pWnKFz kr<2nROJJs*,ҿ.ބM3h'2! @ҏ"@C Ž:K.ǿ7_Y /$Ն`!4{h?Gtɱ<|H@]/D]I?Brwi0R9+*&cbmFJLU TCAN)̖RYuS}K S.Txi2U F4)o83OL\8HM}7Ň\nCg2$M!^h v9W;cPB9%x Nz6?C. A|TOϾS{lPz.O4^Z?[3gshdtE(;0}+C_HҶYlT u7uޒ OOHQs=v|PYRBѷ)E]3,$k:0s@#녷x^}MX,+a<ٗEy\\0HX']9Rx0|&9Ry|u_/ _Uu­0SիiJSp';K vQԲӋg6w \(XN5M}qL[L)!\.]&e [nVUJR-Y{hɴpwK%.>˥l\5Z5sYP{5-r4Z"cB/}iaY3\ }FЗ$F\зD|/#@_ ߒ>ЗϠ{rBM%}З}[EE|n%۳ ] x=:"/YIJ:{J Jz~3Z7E(e -W @_Cfw~BQB}IHF/C} x6[ҟƄ[V l'd*n+6 -02ydA3t 3,F2~qr@>c֖tQ~uGIqr@= 1nm"RЦzts"OH][DB?'*r"'>t e$}EGL3 566З㓀:Yk@_۠p#sfƏa H1-N̴Ay~u!{uD8ǁPBʈႾ̐/)6{|Z(> Ķ}\Ǧ/oL赩z(Q 0dڢ5<~JY@蒛(w>ue+$Yd+A`𞆶9Um<@қt)A_T?_3n^ uPҘ]噅 /`cB{H4_W[xѳG2j~f|S_WXO@_>'Wn/2yE@_J%ScdpŋwaKQoD{>/GeQI)8t-Ʃ2ZQlMPI8'o1$Qgxr+]WJRbPFO˽P̀CrlMr~+yT/=\¶c>T=m+ȍU^*,ZgG;y ;}zҵx=z zʨNK<^PE39i lJ|D$+NU_Tg z:J]}[c~Hnh@~%D/ۻM|w6]i:aDtW]k[DpWHRmfR@*n BNZYl}?Z8C@&כ\/ nǥHeiXC-l诪MW1vS3IkD΀NF`k7$aQt'2Ŝ|;W4b2bHmC13ܤ:=ji%ʒ׿PޢD!k(RJ@P$c3,!2jQzy*6{uŶ6n_5z`rNa9XP!8]<sh"-/ tQApH|֡_!6^m୭` 8jDi]H_8"&]^Z FxS-ڏc)Է;|B&3ƨ]  oDPoLZg  ?N~**1%2+‿*} 9{3hd$U < |s{ƙChᴇ%(qU8(u!=wbce̹ro9A&`BYXqKs o=T9a%A)LJC_jFkȞB$?!hQQQ{hlotږd/j|m`.!{d;)gݣGMM ک6G6rHDJn3f@N|kߍ1 {K"\tkz]TJRhoQ!TT=DeQq-^%@c4P@byRS–. 
-\V@n_n` K ~ aw/dJ0_W~Iň֘HJKc27)bOQ;}!9 L7uϕ[0~"8 I'ѠKFlis/P2ɰԓrTc@m+l$,EۅGgsm fUa W(7BRIDnnN"n-uv?A)B^<"6gԶVݦ6n81tho0jCJP~0HH:pC# 9![t+75 /$s7aұd5\/_;8s$%Տ@M:Z^nؑ|+CJl)0"ϓ &aaJoISW5e!mYH=49zZO SQL'rMB{yGD:@swD^\Kh]P1z2_rIwUrf/9JD5R-X_^qi+HΝW]p:4<f4Nf8ky4p}I&0a\З~6Kf]UtoZkŔ0uV eFjl8{Y'(}}䂾#/uzkXIR7F~f^wUGv= J}ch$ly c!59[ԃf @_2 EN@_> %Um^@_zUJ"J1w@_kӴdZ+9h$dЗyR:r,[@_fnk΃&z}&o$􃭘h:UovWd2.ٚ}yz@9u&< }C @_ƣZ?A_Td@_f~auwGb:ie8u~8eptEZv!xb avhjB|7#Gxg}" e~ ?*(MP_N]5ǵT2\:8yǸ@E9z "&ZJnlǚ~SiJJ TJR@*%)l qc$JR`Krĵ)yS'+ UE<#ᘛ4A k@҇1>FMJ!Iq|~HNP|3&rVme`+[j) W?BW~g~\j]!TJPZο/\"Z_i)~wP9T1ڳ%/r>Dw8$ ZU/FU%MP`MHJZhgZEйޙmnl9bA@-FGnQ(+jPH#p?ozhn%d'Dݖ290iA?oL ID+$D TNk@{Rj2L1P}6@)BthAe1;?XAh]AڼM htzxqljE㡜&o!YV9t Ezi6|8/?Yhq 놙kU;cY1P1<CS8 wXRO I|KH{:[\b;xN*s&k^ c }R`u I:j=V^/~!Is7bu,IdjD0'Xƺ Կ<$z#)uVZf%%kaJjM+%mF֞+_FR:hsnH._ , +KsIadAӼFO1ۘw"P1 0Pz\b9 @7J ˿-"(zjJ} ?A8fq/ϝ貕!`p:>f6gS2&[ czK)K)n@`hSvQOWvN %nNOEFv |L;oSz?C4e9o`$ wi/\  P*܌Qt>$]HpfYer b8mVi&B帘Dn ̿5OI9|ulܔBɴU.S:j'|З}M*7$OB(}ё?Ϙ+.E?HUPOxvnf&HJo߳tG%fLM2Hd\hOwzQCպt[HH. ʵ_;?~wfsJY34wu}_Y̡H;qsΎfϯ'ZY,f7Ԙo Џ4`ZN/vH8W.x3OByS ljb07a m ̗d/pQdgLeJ včnXGI:Xg˒X_9 穡e='5ME?:JV*kON1' ^Hc;Ŏ^nܮ yţR@*%z%YZ(bzDg#R-"̐[D_۫mI'HX~ʤ2Wk HPS$XݼݷG Af<-)6FQsXLjf/"lcSX*\&X%ToԊ=.cJՔCϵ}_G>#{ƙ]& $PA&-xK-Jw?. ˧3̛q;Lqѐ7Bli]Z ''0%F;%t?^,+|$D7>"H|x[n\RvDV`A$ͦ8,^Rhcq{9QXVA}*H?J6sjlQcT'Ir~Nw-8J(ʬ$IiΟ0@ hmsMm QaPp[댑B{QR A]KwuР,>ɓƱ4xЦ8ݳw|ݖk0i1YXNk@OycnRsծ (hj^x S!s#%针e%P3>kg'qU +G[A@{1Ynٮ?fMmhQ}pp]Aye$|z}qL$YIk1(ޡ?m?m.i+'{] "#L6Z DQ,ԟۉu<9fHvNG!xX.be|7x;4vEPݱfɈsӻHD$' p$ x}PO<37G1P]z|16^Genl3!KsGݭɍ;}# I١gc]5_a>8CGnD(/QDbtuHvLU=ZF{)(!45WlRHA`LxWΧYS]h_!%nHHi+oY|F7@*%%Oc:b8˿78^(wM0D."7R-E9*]A>Vv&}ň .)ck.htI;JIP&!߆oj-7Q"}4:ٝuZ)Xj@_N*V3r5}J0[-`X6JɆb9lIgQq.&*G9Pj+ +bxΚD==h[)==&l\ ʷvI*`PGbM7 -IΩzU'Ǜ1_2j` ]0ϻj}7%ma9d?5$Hav|1SXU?)-NgTKAZtz[߲ !1V+yLCy|_Hp nJH ] Gd1nEq ]ތnӧ "lQ_.wz•*e"|xD઺=FiEi? ׍U^XU7>B>0Lc)%{fLDN0z;TX^F/C[ Pfy2ޗ`}==dXtS1&}dw&Ç(8uDRhXt 敡\ߥ( uR{ʍRe~kLhJ1P%DHE#Tr٪ ]dm&%^WR}0u#BВ-B[$f2UM`*-G`jPm0P\hwO"i:Ώ:v4@\5YIN;iQF9!8G*%P)EYÇ[ ƭEngnL(ndx~IJ+`O%E)_&.#@b%۷n.Ͳߟ ďm)᧑RDc9D㱵ģ̋??cy] Q-('֓m}Q%M-υ$+ER]=֤D/}ʜĻjGʵ$޽FFaP9u.wBkx% "0^h#m )#5ANArv>qh8mp4<`$~b-L]u_(GŔJ%@&w=AvJ݈ DRC. M,q2|=ː'%E8 MbhފȓT0"ؾ03(v dLUN:!9\ ehc LCim{|>xPfҗlV;q/+&clOU>ڤ67se}pRt[msCr= =S穪g|FKO} S:Shtj/ +RyzL2F08҄B">:e(@Yj!X&[ A_;c(ԉkkg[mK:vAmE7SĞj<$З`}sfB}7ʌe'MW3QۦVZډ^1XS$R;`#40sP'H40y@_2O}.x}jr~J`we .-PdqJpJHBjGUI#* 4xSA/ --I#0Fv+v$%AX(u+_ڛ 26OL]K;瞝)ٍFS C=qʨq7IR-@-|Mk׸_eWKxm[s׈J0KX|Nquԙȿ/}CcEVG-;mAQ6}87\pr'Zd#k; OdwvwUKK}Wl m ~ c(AC['>6U?޽D/vp2rqH 1&Tic>vK<^C}nQmy֐)g=+zZgJM iOڪGFF (E(XU;Mf 6 ЎgX25`s|W%D`/5jŶKb 7v@vj"l/a9*ϤsXcyϜ"Rlb:J$|m!qQuQ82l)3I@if1pXWsa32svInO(ecLՇ2SVu(~׊;R$HIQxۂ6@VoeylδbR R65%gNCcjTuex92wĠɉ>(vۺ =sZ8[$:Y//XU*c̀p$Z AuU\hvAs2mn$۪h I ?%5J,wd{&UGIQcц +Gv=O˥0.|MyV1.s:b{7K(z,gfxdզaZEuV'(v} ,﫦6Vi{+ *5ܵIq\ܟ )^~(Q~뚐whh"W*%P)D o-~QlДeQД4Rg Psq"+$p g| i堤{u!g˔Qt7.QߐDb*c<.%'5,BF-ӱf.xyq8-~$-Q.6uo7kX?4NV k lI ^[x;eY <>!56T]bԀrh]_(=d;)A@( (qg.;Wj* &l֩q[8YhL~l.l&g@"y&hnLk`K({QL!E8/ǻKzw$UE'TU&M? GRZ}/wQy+>"AQa;eD#I [za?Cʫ|DGUhrخXVi)cFt]A%ٞ5yla"i?ia#ߚRۇwms23>!C){MUW9Ͱ\}jp~T`"$k/NDhv_o"wsp\ZW{5biW@BR0Jy~SX 3?JwڴG g蛇 @~ƙ'Gc`QCsG:Q*cVK} a􋰮!dIZQ#BH7S2.#KPv-k*3&_Ҕ}BA?95҃^LC~甿E W`%>Ͱ49QHl6Lp46uNI# ܺ$Ҍn :^xu%Rƞ. zc%V(i8y? &ڀT!ؘ#a? OenL¿,?U}9 _ yZd%j7-du/؎(w1g4bށv6<>z* =-euea јێ=J&Q %I䔲k=?4XaV0?Z%%9bt_+/cS+ߜ5+ʗ=GgRuV bIOr𮹩y {(qRjo8}y#JɕמoA)QP[ ӂ6΋_-V}#D\ű|{2szm1wUzޭ%A; 2BNT!yc)\Y2B_J=*5 ۹ֽ#w-?WH2nP[%/? 
N\0 #$8_ɱ5P*cx"A1$sAcҴnN3 s'dgd~lF]s.l)dv44{y>x;2睐tt36;9c|"33uN֕J TJR-S#W6H͸wK\+Z3KE<[NW2am[8%Yl"pՂ\v֋cy ιe#8\`w$ܶCn۸W6R܀]( [|D`}lҗP{+JpPjSҔML}iU=G ))6ؤK' n(}vQ"X;~׈ wL=GPζz|л >Ҧv;R:~~y$q;XRA Jf87<: >r~g3yDu 7|N_2Y}7jAчXD^,QA%7uvRt4=7@%;m6 HuMkp[*> qhh.rޅE?b(@Y__R=WnAF$EwcBI<4P^9z"0*b",Qfq C[<mqSIkAxT0vj?ga"˦=SmD&w(1#c3⸪ ѵE/V[!A=zDF"zOJΙbzXB7ߑ8NkJ6obzB7J`-<LSVJRb]9VXH+6Z;-o(t\q;R~D`Kw|7G%.}U)З/9D [Qܢ "8B@ J˲+K_ZЗ᳃HTq zeQL"pd7ovޟ-h`23}RlЗA_nvIm5ä4nA̐Я|{N(cː}e*}R;IA܌چ+lNaL&v >  Ec Ei:F2(Izk 8}heBvyhW2`]֑u2з{?\cKmj7B$p5oiz)y)K/ Zj@tG]Q~G&h:"_fqSV]*6DRcC\~@}NJm|>Ke_ 2E痩cI7 bUORҘa`boQl͎gn wnP2=F[@Mz7D ?m/X4 YRPQal]5}lq0`ފt?HZC? J|}^ ?ebaJzl~nM gY1:$`qM6qBj2t̉mԎ. Q kIFr6NEovI4bM=ɸN zƯ8au9o)O$Ojťһ!O }[K d E( ?" 47*X`Э#^^Q?,cY?ܽ2gCul:&hLs'4\/AzKZԺ8‚df\9a]P7)c/H1s]U >H2E zk~T6?KIZ1%k mK0$xϥhǣzӳtqsL$0}\ *՚\ x$j?vwgP>Z"Sdu˿DžW9?a+=$7t6"y$OE1/*so>D;a#@ݛ ; >MX_2Y #4:mrNqߚ>)_]ǰ'}|9; į@#J@HRZ)J ,F%;&OgJXb`뤑EcHM<jʥWFt>Ueu_Бt6Ѧ50CU;4̲@S 51{>đ s?( ]@_h\XFK !ب6=q~|d;G򕥲JrpB.Xłԯcʿ `yTk YweS*"% $ 6dǬNQҟcU"2'ǯ>wNߡR0_Hy:_X䇕$KF$L0ty5yt4&-n¼*¢;8al[ާosY+{Y1ɞD ㋰O]U+}$dDc(xGo-!f#1x\EjeV_~#RDݓ' 6ߩ[ @$Z)=;ԐGH|]#CJaQw8F.JM  ' ٧ЊV"}W] ONuIBlb /ص?Pw5څ`%pWA[e)x2`s5m~ețe B/Ѱ {R'V=iCKhaj{b1 /w?BF`fe:~ V$ jfh/cH #^=5c <-硿Sq0^Y#˫mS(&zxR49K+ob~=r= s昄#FF@`UM%E*x0EV|]0bр=T) D`I%rjDIBHt 63,B{;2&O 8ad*s$(7ǧ` 'w42M{*QAFWţi1hR%{_nsKSgp[!, $f"ǽǔl1꡺_۪k0&S<\sVzվJ݀z 0*?!d ^ #x3X{мt.IU)gW$Y}=塞&xOn&0Dy1RHrILa6fLDo_^mՍ=VIܦ\9vL  +ETF {[#Eȑ#-"+\A)"-5iACS^+hDF;K/ϸ?98DžM\2nܚOk$2Y/Mǣ%?cy,M5lw e̥䷝PLrһtQDz`ݫ~WuHU O">BVJg N&H'X$\$Uo#ʎM%ONMC"|9%ݹCTuQ*K탼 3 2.HG*ȶm5f~wDPa!b',D)q̲j 'X#"%EvfIu6SCỺokF_n`e|UEuł^:&]cU7v`N-%:mW-oڏ,pSX\: %X8-ńZ@ R+x$yԆIz ǶG/#K!P<< "R*߳6Dg}aB}٤.%BhL*ZEp g/D/vAa }mǣ7Ov9.kFDy2ʝXAprb !OP@|׋)!R@IDAT܏ \ 16#[A;ugVH1uG UZbF@CzNØ3Bi0F׹)~dtę6/zbO8g?>Z]=œj"wz17Ur[$u5y)`}-w1K]QX_?a]*{yhOclKYp)aYrܶw])J TJK l Ȇes5V@_;%d'< ,mrA_+G9)f"[=(IW6n$gk+k?7@v`2y7P?. 9tGmw$6t[b͍.:+B3D8ƀ5G6`̹ :+;L(uBi$C?Iå!=~5h?|A_[?}}zu.D VVgС7V~̋b;NmzݱHEh:+l[Qb$(S eFϕ>~$z4GQ1XkGɊK2?M![t|ͱoBBC0> ؿȀՔZ2 U͓ nq\QcPo+j3C|FLQ(3Dj@_&e&Kmx׆{ilЗ0@_On*GuiG* fS;Odcp>Z5;g1ſS}0VvI^NjZ/5Mʏmτ[K?3hd2} Ƹ蓮 ǥYΆwqOWSP K6PKT:ϷםKb:b\G{MX Ҡ/V_0$̰,t\oCWRѢW•\J@});]ǘS -J ϢN\Npn]cbÍs?g1.4)=%dy^Ÿ!/n.Is7kیn(XZ`GI:g%.[6fHm]bcM0Ƽ\2ΒsJnq+DWn}"l)-LI9UNT ǺydԦOt ٖer~,&QjF'rHl I'ry;j}sDvnNɽ >~.31L:*7%IrXOV{jA{BѬF\[)(ȏRߔ%FD9 @%$ O>GW0npI2w B?{jJX}w ݋~=Vm>-9 J4Rgjaopaw?FN7V|8%J`4h3];.䎧nOox;qT$@:=Rmico8Vyh W>s:=1^?alM%:}VΜU!O=gdI?p?{2Ί45Eu8qLʜ|igQ-X+>6i/}\$Qi]Fʿfֵȫ/|gI{DJWa:b.6;Tj$덳qrl,`3?_w5οz 9I yhC!l9jrf E5^ʏ -E;1gL̑u#q2s fM8kꖗV\/]@o8O͆zVvCM7@ۥعغ)woA#p}X4'H0X? A S X cs4Ko]o.y Uh#+X)J J i,r|bRM}cB4$$3Ё4jo$G2T|l޽p5ZuI=]Lxrppǰ}/ رM>N;v$2rB&kD zqzj\Y3Rs%wUm>_D$cdHQeS1_: Ʀ tW-gv Y,Iڎ eI׍QI>1T}?}-w7>뽟i ж6Mz]];IoKHjs3ݬl^qF%[oU(裏Ԉ#ԃ>>0/?1cƨ{Gj\3O0A{衇g}&ދ:o_WvҊR%K (5K :ךO1yXslҌG6O^/{b`_|p-Z8MD G%>3S|h,#B ޴}:[^<.&&xDpuk14"!ZrCT{ I}lNsS!$йr#j=:Apj%ok\4v9$1mAai7+g8NԐr7B,j8'VdH+W}X%$ ];\8|f2)To -S.DŽg*R:fhB(C91eD1-W Vj&VICzM }k?1̌4̴C!mD0;ڊm·٠׏r t> 0d4HM*g΃bҲmT~aXrxԵhIm5I{`$X%L -iѲ:cj(muĺ–)tξaXxuŹ%LFגH'֠0UAQ&iw ޖO#nM혙KqWO#5Iv 0jǨ޾Bh٬'8VLE钌.m~$dp~@MC-& O`݁6^~qX%k00{u6T'Nܥ(E;5QR!/J6ӭ5$,P]%^x͎Γ1/f'!N ێ)uT!l$]$. 
׍UmV/: :.7Rri0mghFEtߡW4FX@U7@',W@vVԭ῞) 0jܭHf!٫@ q-hv( v}Wg۲/J0:$)Q`RUXg>), gmqy۲OKwYwΡ.t+yH]?5q]]TtZQ ԗ ?L|sAQʽ6}bhz9shptҤI*cǎՠ(juw_~Yg75tP.]^{M 4(WǏNРf͚nf5yRb:^,ԍ7} k;% =H"&,8i]lNdi(޳Vכ'\JCJW')&R-Rp|~3S*]09V>zk5n='^CE&oy":OToKtZa|~]g1gE iKx:QG^>{ z-z\UA@PPc~wwx]ǖAev7>͜IOTw|\jLm1 pX3g[a-/՜xŘ_pzf._FANzY~,Mѩ9\ߓ/I׊F9)5MQ}UrjZ3`iQr #5UѐdfE; ڟy?a&As'-x eE,h {[or)jvS'xS̏4rHv۩;LOC ""w=XէOտ-KDIΝ;?^CU_}F:3՞{̳Bh)R8M*lSrm~ru=6I|蒅/6 ^ך7`]ǬIS*lWEG_I@B?1@nd(y_G[~:Nh.fYJm+Ш)o G|$А "Q/e1O oC!*J`3O-wM_ xCb޼rC@7mDW$;Ha@Q_n @M&Qi0D:f )o`lu*29-@M%Y-(QG=b[+ ۇfxOR~ݖP pٸ5ݲBuL_j-ycR*)W8CbWsm0E1E%(4(@ `DKM̀򬓥CUx{ C%KXRnQ 3Ǖx@:ΔRv#vXn~J .ȌڬsBd~xA*kgvS[dNaD 'PW Q c 6d,C$A?~O%Ưc*Ƶm3a0TFFn'(ykn2?N/kLE&7 PUhd'&͡y'`sj^BuW8W3Ȅe/Ƶ;&a8nƘ$36T+J\7ԯr+^>sޞS'{1Škg:(w&@,ન**bV sF 9gxp(f10"$ l_u o+wu{Xb:9BylF*6ԽiSpxX[ϯ ~! >DQ Oי#,}sf؛Is𲛴+v ~{|sgu 3`;<(SA \iزmiV)BXjBkk7ħݷTOA1 HIKtM՞IwuŴu|_\Su (ɤTۯJKO%yyjjZ=9wnx9Գ+.za4,Rw1 w>__j 5ʂZx`/;Y^Ӹ` 8b5΀˰*wq%j$9h |E^zyl6O>]mj᪠%P   _VV&IзH|Y"PݻwWS:b(b m۶袋k.\6xcU\,:v쨚7oMnwHm̼괵vUj"h?ؓulF'{cڣgu!*AFa@47_uaWͫ$%qUGW'l,\ȓ ȦDpI ?*a,>dD.D ܞ+U܃0Zsqfʡ|&oWh^5*45M #+8(2ʑKm1{&ِxpiU0ݓ[} np Oy=Eph56Vε-\*p 1ܜ @M5zi3-ss9 f\ ]?%rAx|-v` teF~HP[ؓ2I`Jq$쇍G8*;6AMRn'L]JSbٱl S0-6KW}vU{8f8&8X1/q|+Cj*h +-#`l{`>(>ʾ fa`.2< C@rܧ zJ-䗖CWK]s^6qCx FK s "(c=7321/7e$gnWpᤝGFr=U|u޼nIG{,ky:q 1OxGʝ=_[7K[=/1-u5#rG%I@ @쮁}=zNK"GQOAFpoZ"mLߺ8uC} rJ4m=։?%ԇo/87!fja86x3Q&(oOǪg12|{A xE7IT$z -+`LtVbuP~+R}uu"b%Z(r[8ŒރIn𢡊DKԻȧONKpWi<nr/>yA9U 7QUaxnoi/'{U˂"RB %`؆fٓeQf{5GV7MNW[ugpo10jɪmԡ7PsfM'nߺyc)8?|b.ڭ?g)g/qfqSw5vl?+.<ʅ?Ћj^U:}+~eAx xSmf$ #AB0^ŶvB6'XrVi 61@7\I+עRl(9sx$gL|]\,M7T<73nSGs dE@ثæM⍛{<$'|u-R+}a!*2׈[gގbL{AG>DhF>mtoC!Իvgܩx9zP]h ĝ1rkp.ž7ei"<fb`>MqrLS_'?WjVډa^p?%M kUBZyP)[1w?C++yl_WZ x _^+JML8 h~86e-tos&9+Jr!e:.c=lަd۾JߝkV[)X_W,j{u9`-K4z " WWZ"s]]jШ!p@5Qix4g5]Q<67P5dVr1ٵJԨٿ !95ͼ!N1>lnk(~ )7m@%s vqGuaؒ,%{O39bq)K}N8}%+D3Sr *))Qg}ӧV.']v~-RuEtJKK\Ν;+*;蠃Tnݴ7h&{fՍT7gQR׆-4(Oʹh]}Ug6(;t1g!C.|] ިgJp'ɖS#SMЌJ1i!W,1˱ OƑÑX?cn`KYi9-ઍUuƵgAҢ1AGy7^~IO1XHV(4S96XY(x%< p!T\ YgBݞ5ڗY|͔i 70V)$D|Ԝ'r{qu]H :6h>6+DΝ_D>za.d,^^sdy.g\ /j~fǤdKK'?Aޏư+fa:W~ '֗w }@+B_\s%…\0&g0So` #桰LNIZͅ !!jv m? wӰ;;R(8 \]Źa toCBrRɈ1}<{5M<̉5)p+0J\ELHtLg gz"69# ڼԣƁ>اlĹIr)FUG䲽mP@6x kS1 OA"Sasoh0y/I%& THZU0cBrfE;Oٿ+U_uÛ P |PkYYүw뒍䪱^ӟu%gɒ.y}1J_Y˃Lר%9n.uZG!^]/CS8lxB׹=9G"8q 7 iHZ?Z ]3娙4SIY#D9Fd^% *k&jDy}c*XyQ /۸)Qa[`LW7]§ԁO! x =lp:9ø auS9G Eghb]uu1rє*Ӡ 9"N9*s,JXEo#C0cFW"~ +x]5[Øu+GX kJe# }+kr+.v*;1CP qhZ2__;FMMNPA*7&XޘFW/:\T!10;VߎD/ ;So谏c>@[A$NFC+<}oXޡ~lpz[BTFYVWu )`4_R;DL _ZDg r< BU j 4΀)L&{^͡Y/=hr6$(r=aZۖک![[~%h:6bLExTJGZ6 >OW-ZP}_4J@xĉ_~ꭷ`w}>VeQTT^u 3gj|59|gSF1ѣդIjocƼG` @j+ ulAC"<=laL}RW䌸'%4 kX8Ix}ˬ^ac^!i+ē$:ת(hJR%3=mw^ua4r #!d pcS(A"j!Q4eqKCYpQcD%q|ovf,l"kU*36h~a3EolKquSV}z.97e'/!#ޮ c?# A=VM8YCsm1 l Ö@nJ~SEm_қXD|KLz^ s9ėx$ WΎRkqF(M4j7D&.*z.MW0 hl?ro^Sk̈́kg^N)G ? 0)5W͛! +wSP WB"m$;y!ia9@>IR̍<'˖DP¸q4Uq%A5<4h:BReP&k(:i{@nQDY<\-cLyʻׅyQ~ % om\tFx-+X:!>3Um[\}ܞnVwBW+/ZӇOD# F+.g2˫yȇ8>N'G)@04uCzI5+ap\}mg^e=`+? vfr(O[)ZAE2} {f-"ʂjx@2 Kua}vT_ *NGZ_]y ڭ2\==obz2y߂Lb-g"9=T@|A\h֝uGNR?D(*t8R P"D}Ӏ'\,Z>q{סּ{G5JQi޼+y%h:8qW_Ν Zt]}uWhc~+#M{ ]M{RM1(sCf*Q2:IԌG9vsTM"NF!6Ip5~Xi2ȭ,6)$S's$@й̅4V`C"XSg]^ J^4mL+p*H[ʤ$FKBDHːq qLyiڦ#MZVM.(18|=JUjreϣ|)|s }>I k!=ũHl+;e^m1z <w\8zhMƋ3"l蜛M?$GQ#vq@?@oO\ӑX%G:9Җ#k@ #k(&BƏ' H< aLk&v85B8V dp.O\0l2$G)H±[_U`xx3ɩajo ea62^Y1`6oFcd7tA!rXVt9jN!xsLar.xPN?- C0\%4L#לMpkTD(qp#TVOޠ2wqSZ!DI5W&ߴ'!Txw:@Yr|>( o~認<4X: 76q !zkVpe>ҵ 0<"w ͤ 9\3Sp~ sR !(,?'s+(F@㰉oc UKJ1<40t1`qOCiz6dzh|19I|qP0/Λn@(ՅnΊ*kN|#/[ybl*7 31c!;L߶d%'̴h&U֭m/Kn%82ƬS&6e찍iw=+|sSQ%@eZ<Ε8s\{Շ3 3sG s,~z ? 
ɫc}mߢ@f;Þ @Tb9F_%g7iOa$򜔸႖WuA W$3uf(ݙcx/ )9>~a_f{}b~;` nGX51KU)—Z"vTkfaS8Y{܅zy,rc)7:Li6h+[Wߵ\(?f6s l699r@.hRkR{X64Ƈp+b,.8^Fuw`>|6sj8#ÇT;9[,A~g0H"q+\#F6V( yRnr@\ͱ˼%]4 r"˨|MeG(q|[h]H- n>贠k;x)|t'r@zpg#y,]݂8OS [| SB:?Eg?_0e)Z*UV`o8 z q2U9D%ֹ3)B*x>9rNN=I3i]l0%`W%+a}G͌MQ{ЋlFGƛj'1ݯ1q7B9͏˵d:m °){g!b7@_>bnY_2KDOo18L^@Y Ha1}:D;ab@pfIkW!RW {Y]hN0ln5w""`ʕ^Hoqp"+UkǦ,_y ~-%Ao0q%qJّa׈D۠\H5&n=z<26Ew mꦽW7,!ljc}q|Qp&h#d:ĵ`1a^1@CJm/?lQY^NKf=67 hÕcj`(Aqf{k>lq2S":7f)+e+*h Bys= -;y>8b pgKh"zȏJ?njmR&v͟c+`'L5@L17b? J=w K*d43qS>9ܐߚKhhӱ;@ݑ>B6Vg=  LLF[wȕ?mԲssXهKZr;pbڋ2?snef?0 -lvܨkv%Vd3VDf2K𽻤{.<ͼrzk%o#c`܊x z9\d- 0' 8& ޹N8I9\!u?8)6KBuygbWhsxxa~<AfL5ss~D:̞5 aGf:6^޺1<!EfP<711W݃5a;*Lgo<ї@d2xw0hSY?e }뺅?d=yPSHaj OEi`!PݾwAV~ǁjLϿaʒ q?<7#IqAK.Y.!ܾu%A_;5(Зa_`egTM%P0e{{wԪ k=*%wj0W4}kl/exNE˭X7&-cj碎wP NLx; rl\d ʁh+Bq)%`C[  !QĊ S7 kt'Pd9wÿPxo zv 2|+D?[kel #/AЗ.?)|Зnٔ/$<|}h_f}@cpPtЗΐU(/|ճ WD3=C9.DI?@|.a(rdževrFз 1ZGSILK9 -煞v})rM?hqy te L/*)k@; [Z-Ayg4W`K. I})82Q^&W O}ON062r [Y;I85}KʵKcBO GTHin#wh{3A7ц~O"|桼[b| {`\zWxj;+څKysOk(mX!>"Ick`q9\hN?ծVB#57nꐰ+ѩ]m/_0З@_\od$"ɽN%%nN 6Ej*[-jΙd]dŖ{4 is0T#)/`mYR:J| `(ʈ ND)pQmIJN/وv3m&wr?iw.Dʦo SF\G넼>aAB8nHDYB+ōgn+nmnGc;At ip7wY,{d6Hlg 3FpCQ7} +47[- dud\Cl\,hmx (_4|?}]> rjnA OD_,ņ?F ̙BjV nOfFd mk55:P+C,s'eCgqRmCP~?!xSBPq(-n`C08: Y5Xhܶ 'S qEn7( y YHWō~xۖۑFm}ܾp\G`Ϥ>! !F C!{BjP("^V)L=)Oms5 ^ƶOɧpi#lɜxlFFk'}F=\4oWZc*^۟δ*9SpsZ :k*C*;\dM`)nPȆ-}Ċ>l6{4i8zO2^ZMqpIGl׷uX7mD4\ +NeB? Xi٧%_87Y5rWPf)ר{S!L~-J+ 7ȗOk w!rz`OG^ B{YJhF,Ck#q7Y.-Ymao.rd%nh*w=tU+H-.DfW=L(C!gZnh*5wű87Z >ą4M8%04pM=ٮE},vwQ1Cd7HBMl%^ؒsn`\]W;IexIА2qI/}%rBǣ?hO#'JjI{\&6S 7샯7RAG$ŇzQ vsM, gSDI2F)_NSlWp יA=-zNr{D)g\οhSyMÇ7@ʘ!B2ˮ9'$";'h"eށb!xMb6ck/xvMG 2EL9榐`pJi$dW!vH}}M`" R ?pM?!D0t$X,LBiʤٷhg3LI^B(#qJF. xE[h LSw5 ?r=@QXq9Rۮ7C!ӺyNKJ0)PJpi%`p0cc!~N%XL" lGZkI6<4!8qqVHȇxÿK3Nо5Zȩ|гbSvD=R WR?޹AhW2 &Y^G &UCe]$JM q&픇XOE!_ێy؍+SNcxX75q5.ɹlZTZJ|NZ#oa@'DutZ? h[A~sxV EjeD_XG(56VyoѩAñga}R^/@| 5/%c]nK 8#h ^?zޔ4t2XC6Nr=qUSDf_[AJ'XB4 :a9mu*5Sc.Ő% C;t37dEO!rDPI5o^wS`-[w2K=^cr8@ 9JJ|gĘ!#-&>kx pUmEW[2\;~]>y*K|-S6,ޘH.a-؆K ӢQ%G: afMgsU\@35bzI61@-0Kz^8?gr^FX7oRd1TH[f(EnlW))(Zl =lH.Aޒ+G?<,4 |jih+v0)vXyIxrzRCCQ=/m^[Nh'?Dm?^e`=F}^pFʐI-`a%xy51͒91(gCn/53GE58k^Ei^?›qFjРk{`CO0Vƈ&ǾP7~<8uND<7Tɸ# O3 >gk7 |iDK܀NХ/#3׀y8Y]J+ :zS& < ,⻠8Ϩ,js>{O ,AZ*$y&@=|sŃn:b y'q,}߳Ѯh#/6FՇ'57Ep(ܮ`,OA=x#J4jK 7Iǣ3udǪ9Nn4F˜pPwWW&@(N{0 a!}k=M% ;Yu=_G{س=SJQ=a]ՠ/2@_ge|,Wi΅+;Լ~Edwu̱}_ x$rMp]aQFu}(@(1ܵ)_ЗDܾ$qAЗXQhlKi?ᛑo%KM.|W`K*KT{\"Sidw&З}}閮J@XB2jq;CЗ7G" Cjlx<~KДr$Eo$H9 ۠/=e([ HؽƯ2mZDM=őDN}rFl[!+ ڠ%q^u |T 4#nG`?G?̲p<_]s+OQ n?E.=(&K▟@_Up8o%9b嵱)N(cU~LNEoR.3o6Yg}їm&}m}iHW%!!NЖ? 9" 8z~lHlh3éx- NnsLgB*H*Xp2stBaxGݫG/aJ~9g3U 8< PՁ\ y`46=Ўȍ`!zJ%xQNz?BDɕX`N9zY]MwkyP0;Ưӻ:8K+5/9}xƪ:u#ƛ@_}醕ҕb]_ԧ k ?U#KZU=*/1n>/Q`L5xrOAdĊ 9:OD 5LЗF~wu5K %xwO+vuFXu}嬱ψ:ki7Puze(fzMAp>i"PaMr}ǡN2>D;A~{MK`ccpȟ7;VFZĕpnU~mOz9}Q^Ʈo(̕-vx^&lv)/LHs+2wfsR&KWdS-l:g(wߗd!6=>dHk]Elv^ɯ qc9`"et_$<dźUUnș=1e|{N$F+ϴo?s s4k93lx`EvM~Ce"^ uL咊kG-"MOC\|O$k!xw\\UO"+m>^35 )ek\BH '}3R)j')x|Mj`S4> |dEVG y3<M|GK)S> 1r@qr˭>;:َ}x~Tlcix r(-<"s3p x%'GlR[.΄x9| diy`^_sv  $GEXͥxLkűBk[|ߪÎ X5.ZNfϪO;x4j|cy|B]4mgG¿y&rpR+Av@lңm?j!Uo©SUy!S]?mgYs󤛦к7WJ5QdN&pN)E'Ua-wGdbnLQOvrs -kroyGš[Կ0E~B!Zyю,5[7}o` oLM `MÊt CF@qmsOjQ?>"hxwE(q㇤tWKUNCF9#\)% o8XH7KZ{_Ո/iJAK AX2VR$N{rnjeuNdB@$ו;rzq@֜D6¡*0ǰ$(s%eRer:Ea`o>nd:㟛M.; dz({g"IqXBgYi4$_ 3)>Cq!^2pC el^;|K2|e~.Zd=7%bÑǻ BNą(. 
9*RaF Bҟզt"PێVN(ez"a~*\~ZKXv-JNrH<=f8J̫ۙBZkq2n?~p4hr9s}8s;ōI aX,ws>B &r>.?YNuv( DLG,\a 1F&|>Tp7jNsj ?x)^3 QF $\,R*Ov,2]TcsT6,>/ u?ՠ+<j$c (|9P(eӽ,6Y]o[±~ɋs-%>7M[Eu$X?$ JC rwf@09=ǡy40+"_]T : m>K׽-<[-&qyTXg\@0_8NSs [}7~;3 &"%^ C.qT01'K%+٫aKW~~|vSbr^e=Oo_Ac(|:&WWD uЏc6reFu!5x ~nq(q%y~ߒh\|J /<Ѿj/E8s|n9[^z#^c&G>WGxqk8_c{&`a0yh`Ñ;+ݱfp&ufy5JJ`Q^iJ Ty!M?:6@gv-8/bDy@bX^* JP" H@ lp6m<#)Jͺ,a-kl48[%AP Bn(w<崵Ŀp}1>N!e6K5p%$eMW_=Ri8TBy~P`BnOxm,m)uVQDpSWmaU  &@C=r0рR}KȲu5؜یt4?h1 1|Hۋ)/w+Jƒ H< HDM:>0lY^➈r}b(=@z1~x4%Ajj>h =1klJ.дsYZ#'+'%|'!`$ .%($ŞKSF}UILɛ7*(lR3 ɹύ e6Q9[ʦ}9?mO5R^K ʝ맷ZB8k no;s'(1E S^¹[eoyz+MЗg",sb|1+ e]f! f}ޖh3 5^SN*|O҂5v3A_:9tOB|Ou%X1}`|~9vr@_. & ?//x #ҲP3zyt7f4{/|m$/@?!r 3KrI0(з*bM(W~2o1βfxЯvf:As t}*>ļzyK(MuL:]ZUih{5vcl,#:j;0U0i0ׅxežX)]k Qgz85} @_9/QxD1'4:/j[\]1$1ŸQJp ͫ_/pJՠ/_1}P3}j1GM*>D}WhqZmb[r+{Vp231݈<۳˹ǮxRwҵjE( \C98+i~VЗPtN2A_翵WE4_/mzgS 4@d @edAڍAf1o4yqup$=Ǜ˛>W]dIJE+2gnЅ(P wY[a𸛎]u Jr'nO!\,;{9=ɁDYu#2_ݤT˘= GXdm'7`vw@C(V\5N!N7 !;|:!B85[PMRS.(:ƅg[jNK~4 HLj%(@Iv*[{s5W)mJ}!>v &GU .?j\kÇFgZLܝf5i9A{q 1:\s;(^7PCk<y}Kr!(@&CpT ɵNxݦ1^׎ƸYȓv*Ɵbe<"$@'CHE.| X/y1G]]DHkyåkj'`n PuLm8^0-t e3 ]|M 8lUX;7FZ&C=h nO_" ׀R<b37C45=vw) o5"I'-'<.VA99!?1@]RQ-R#_j#o.vx¸ƒlT#-QSQbd$v P'IW wVrTՍ[Pfq߶zN!2[ cpᏥ:i4yU8qQ/?Gfpq.B֛榨"8SfX\ #2gۋC=<]@lk%B" Q&Q @xRI"}eLٟblDy1ܞeg/Dsc"4.WbFQWpL:i5"K,C!V0\Eb1= yu]΍Ew}T; e߅<:"}.r CZkknFJ8vD jSj+gpW)lMrp-ƑaR@ArcOr۲lyp˟2YNk3Aהː;xGhv'XoRPW{Y)b;1ߤ$i'\Z<8 %]RW@8H+ OavTHe3h9*b?Z0e^{Tr]) 7@=8n:qZ58 Ȱp,R\ 0U\Q?jڥv*N#4oim';u Ɵ8:JAag.''i7Qmcۇ9MUUM}:9:@G] P@Ŷ-cw^0תp{`_|ƊYɻtrT^yxsn->fz="E&v<((.p@f]/RKz&Tb3SJzps)͵"UI)358GN}F-A4 (Ee;} w INiNK("StP6cU*<^ keX};)pD)Zm0,FbE2*& WĶ`&Qlslp)\B  ~ ]K1(KRe:a3A,$/2qONOP7-A{y?yi) ZH'-ab-L+t. ~._XSO>bc$3J8V#t́\>U~8 1+Ԃ@K^Nf['ʵ(c=ìr.eCkp5N6 V\ 익8BK}bT[5B7AM!`98;#DzO4%^UC)zgt&TE`>(껴D@D1=<'\#b~r-{;{fk0ao.Hl /JGiF/(FHߞ[-4-""lQY3Xgd']vO;h]@OϨ_jGj Bj+XrM'#i:}I"/ Z-.<@ń_ Jwk+j1.׺`}AOQpZȻ9+Bg< n\U @ /͏㙻C)kr̜0=xw8ųqjr$N7}H" "6aq4>4Y!f=F@IDAT,rWnܮvd$E9aX&{kkcP<)^v5O|Cûp {|>.}/\Iŕ[edS\ cqGgW\sjZ{GT'&P˒ߠ:VIN>}^|2( i~s Fh_1"טpVF{NJ(KM}z+A >."˃7)S&BQ0vƌR&cFX>Pˢ/E~.u:2`Ģm@OqVZ~bm&}鞚}X7 D/؎w 5-q]T n3yuvPgFE`ORυ2`yn[ug` C"BsFOvպC=Z-\@_fb۝Θ;b6U c[2=Nqo ?R#?']vVֲ}r:f!e̱}}vY_8^Ilц$6MC8ːpuw=Han/ce.y[>/|8t3 5K#ŒDM%TM%TM% %@. 
A3.Bu޺_Yqp'D 7w% 7xaem^7 P7Q/_` )iQrv`Mgz"T%}ؾD-jE.ʘ"oOId@ϸ\ܤ+~)푻|rd !w* gbk{䖢\|8o) !F1~ЀFޞߌctQ!(R!-rTJ/?0SQ$z[1 '}73Er406Er\f:L}@dהIAZX?nK,OG@Z7.&G |.8%>DKQ$^Xj.Uȗ?ґi)%zB$(!TNYO}ÑoͰ1~0+~y+x,eo)KI[AXgQ?{g'Gx@IH,Hv~znw[ BBDIDWfSS3;9 v멷ޗq}89)gc f0 6Hm 郸sؿɖŊj%RɃC '>fc9kp6|@;H5s^bl& r'َaFv/ldf⊙6*L:rǼO^%%{e,sRf5ܺG_Mki3NFS72.C]ܕ=ĖQ ^hȊ+|\aa&ffHmik5(7{G}k2`.; oyC3fL@ƌ`o6T trăl#.sR$zBN)H3YfRMÅqy̸gMu=8#m.Os?%-%,b?7\k538!g&3)%ۓOqȔ#TnTZiܧrJVMf^-^P- 2 W@ lMh]ʹσHB]×7k^Z $8ROq|q$/Rtsw Dn7WnZH]l.g9qXo Rӓ) , yG\ V` EHC>OK^|04ۻb(*zk'#lxkI@D\Ei m ~@nij-'Wɡy?}v?/ { Z>6"-i+D3/H9}&% $N k<0'..Uu!JlrZ_|݌Ԛ x>xyd kb8p]JךpoG{p1WHD?ԜoCZ9%ixaƞGUQ4m΅g4,i+jKm:i>*zaCK Z!Mr0Wv3;kI% NwoA*%b(Ai`_ƹD9m@qcq׹w))q~t\.+\C#^3Of>nJ̦S4̋..{ fEt߁mv>_d.|^UڳyʕH˷ĝ)RPsU$R:Q9.=/O.(֕J/)G Szݎ_&,)6\-K9kˬ4;_DQ 8XPs.| )ZPI}/Hp`}`9aMd_r $Ƒ1F)w)Mhdž}kuB'"%>{qWD=2nLq8RrYxv=kୟ 伻L[ج9IԧXXΘvt<8TUOH38r!G׼o?k^͓_Mmz rG<[XlxېF8'Lˮa?Ԝl_mLfPś WXpL\ýьñssݚ zrxgmipn}3k4s&e}q4D@o^3l֛"  #>9UXPQCAs0S[u\qzK'?]x~{ ĘY-ů /5@CPUg՞Xz"!y8[:-]!;@7ؗyzBIq% s厦yn^o +f,J{_nxfK.ٞ.#uk^ڬݑݗ0bp%%Y .f$"VDpjܝ46Bkfƛ6}ya=pHpJRmU@HRQC 8ɭ5Og;4Pw?Ɯbd-)#;}f ~ZEEbh)KE\-o)PڸTnCIy۾ 13Z,Q\!QZt7-tMxC'nZBʩ\9R_mj7e3 k$"HX/ (H!& .\֦VpQ.@Ȅr%1% q?h8?[@nZ;lhWlH \ iK)OINs9Oc3tu3<NRռ> #]veڇW+}j|`~= ?r"צɿB!@F Hzgg)Wxj4zQbM5TС<)G(K:>2@[t]pڊ#Q%x9o)rl[ M󇀋̬CzWS-\u {"gq9N4&{XsHZ=*&qwx"="_<Ĝ'o1C3A&>D'pC8Mmb~Bfoq ]L:Ђ l}_΍F"/^Pޛyg @{XER*wqa\ވ`lټ#s.~5Œ"\t6e*2n/Z=it8?AAuي qE#%S;6;to|3OEC 0 ګiL`$ } __#}x pf_ 1;eׅk.&A4q q%p#Ò0Bd&};RWqĈ*|i孩PS\Jt~!}@>&1ǰ&֑ҧԳ9$=siĕE!ή1S!]4/0dWN6Q&*s1/\c(7kXĩf)d[9c7 @;K:E9F5ǡ`gVMfu;p8YzrnGnk1YloG/ӀiP! o|_cT[jh)h)ӕs0#Wy Ԣ%@߻̦}`YJrta/Gfbʥfaz>dYϔ"+OqeӖM)\ >k}،%@_`듀G׹eV i o U!?[~.7 $-ʵ,ZqBAg#@_;l~t> }ebzqW>Y}(bǭFϽ8gRA_)f9Pܺ}qYWq M@_.ițoeA IOW7i,4Ѕ)>i+\ ?8~+"-Ir*jO+.z!<P&IN. PG>!mБ>~l9aTj^\XLGte@Yv$K`#?u&HԎڥoOQ!Q̪8S74'JH~Xm1G0y̠P01Pr]'+:A~qB~ pC@ IDgd2.G8 'mOƳKyZ RfBo=Ͱu_!XS*4KMpZ|&I}$@CL6@W]!|_sSٌYZ=~;y%grpNfp<ɚkK$"&}-mn"6I`MwhRзCK%R7}Vm5nƳK+ڙړ2@sJ[vO"AC)wfdTj}M>Ioy's1Z~n~??f ;[Scnڣ@{"Fa}bpbNʁApcZ,-%R-%5?\s}Rڵ?C`Q5zneTCZ/2VHvce n(R⓮IrWoM@I m|[>4#pa\,_WmvHL%:1\@ŵ`+ώF$I(!X/p! P)ϭr[e։S/5IiktUTErx(L3yg*O.!X=Miv5 t# /݈ͬ8}^$+a 7;a-i k7ƫ['qWX͓85bf;δs@JSF4^cz-^a"dT­==eg>6?oʋtEWk AtBT%O9)̯w@"FGSñ!w902k?]d,1981YP {'"S \$!{AV߳HɊm/\;F>öG#_ k] `bECpcIS~b| 9mIsV͹X:SC $W\E-"۷O厷ZsP{?2V(,w C Rxd !(.UDmt!QٱfΠ9\ڤ9( 'sDD}^ɫĞ(|4;+80_smha9(Z43Sxy@ԟb{@a0JY/h_ *?lMq,i}ԋ}Ƽ6oY@)ҳй{KwkfIU*ӑߕqP?rXtw>\d1>hNe5I$戸]HfGw<Vq5& 6,hiQd Qz & ͙SLr+t#]72_1%EUˬO9g#qN'י)^Xo)>dM.d/%1MI.6Arxׇ'P2[+Ov*Sr#`tj @m6Zе]ڂl.Cr8QZA.7 .U%\hE?qq$YսB0Pj>,Q?SOUY[Ň.)imVu*kq9_Xя 3_Eܥ~'`«^W^Um3źQtjuz7O)Tr#wmK57sX? pXޘ Ų\@@ Ծu*zal:[rYٗs][9/k޺逧-5C1asH"ʾu~\oy,2m^ĕ|Y\pv#)Ks $f<Ԇw/U>Evr?ꆎ?׍>0 Xg#5ьcwflm>_싈xp>)i\XpƟRN2p}TضA/}4.d`ծv5Ӗn%޹kY#ʊ +r^U *@Oי>lu"N|fdCZ!mԦ#֎0~!l Xz4_⡭X)laXkqУcX ,T GeSI?L8 U^\e|+?Tf$3@;':iM/Dk)3YX4?7=$æ_Ni,-7DэḞ?j]'J^|wG`#y2sp\5mY1tGjkhu uEHiD i kHvSV1XC01!'BS{kxЙi/UMuhTs7[5'~9A浉`1G^9&674iMyw9 i(F~yk-R-%J`>Esi&?2k§-U83k8i*/F3EXH |<\I<%g)Z%{F,8HEC< X[& X{}4;.W&t[)lzowZFTRms_bGl8tVKBqw`-乮F |>#۬_U_DZC8I謹 !N̎}]Ǽ_ɮo>A@@D3hby̑}If=ƌB X w1{9,Ke zV%53cpeGTs%"#$I2kg:!)*-%o;RSKà>F5"Cfٸbt"PD :8vNj<1RXp;q^eeVWsٝa. 
йlitھ`#=p3"LC8g$?|T۸ލ͉qazܽMrvK;O\UxIu7n~JU+Eh㇙&hm}ELZ5NԠGlAYd/j(0QӇrkC/OێDsa/\Ʌ\sJ*ۘjܲuV-ʚ7's0-N4my^9Ԧ DOͥAFéRq Jrs6Ր]nL @'6N֯@4ֆڑ6"I PpOx$X:S8#di$qT47 w8%{ss<0+$,iOrWaUb>l|j8ʷET9p()Ԟˎۀ6 Hn8u2 *EvwlagY2a\[H C ^uc]3k>22Dczlg[V J څ'7xb|Θ0@6Pi53+O͠Q\V[zh0vʿ 2<@?+i;5W!_ 4=+WP|V:[<˕lI\|xӫu;shͺ]zͺ6,4f{})?gY9yZKEu2 UIC.WcN3,af*^2c{2ޒZ kjN4ͷPJ@sw78Ι9\^bɟK1JLOv'+mb3読{\νHN+[Zz \Yk0-܆d 8QJvlbޭMI`%Ľ\ciۢHG™nK'bS'G&0 ;FrEr2 Zׇꩱ6!~yK@Imctkfua;Jjw g.VTd1* a'CN} _Xrj@!4m yk3lj9@ASx+0:YIfTRN[9ѦL2kVKӷU#Dh>PW $>qy8X6.n[ S*7뀤2gQ ,S6y$86a4^*&iWfOc˩KZ8k)$ӒsuYDK25h.(RVrx|8 (9'#s&y 2Kƻn+ֈX$<}iϧNήٍi[#3pSZn s2v~c^ݵC P81eְ:4}hQjZ|G»8~%cab;"! Q` Np"Y0.űׅTɠuVix\G;XY_' eІM.Ss=;ܭIִລp >T.2>ist,ኖ0.Txu SPl.n҉SY7~@ץ:?x}䮝?Q;깄Oޑ#ǹZLhLn9t]+F 5WOR~z4es36h;ZmXw9eƚ]_1wplo5N^6aݰ g</i˯1fСɉSI}O.¿1[-HvНES+moQg/#!V19Es+Oh͙=X¬aM37!y9~y>0SHd^uS7\V\DI>jrTA(RtԶuTgܞJ[*FُɫJvm:lQV9Nuq5X,`S)Dj} a\[Z(ScvhgFd5׎G`ڞdAcQ>$Xi,}M1NFuv'_\CpKA+R$L;:}#N)l72-W^; o[HU|JIW ㌽Qȭzt<`%0 X/Z tyhmTs3ƇbvՕK]5d%|JCi8`9=7@t"-?a<@CZ=ubT)sctkeWʼ,YNB6KEmA@!FpB)a8kwQN/,kAƺH6@U`au4{8qom^[2Oӏ00v|D{Vbn:V ]9jI;[}s6Q.m~0EФC=(KJ)@/a 9<6sW{$;Ѭ" Ҙmjod <@IDAT^ߡxY١zpoljX`I&HZT=zƺ6]_Ef'Yfй2uTST4!{?;۰27P1F|ohe١{vp*V13d_{/F>;H3H9JI㌩ 8Z^2n.b-"!SLOzaad+A@1 ̖-d케G|jnw gl$Xof. h~<>vY*\L`dx,weMH\T E0&'L @un.G4Naʖ#'[2z_U``D@86'ۥU|~5 w$ S妨 qM< 2fŹ$KKE+qw)v68q&Ij/){G*omͪsOÉ*Pm #(~͞Um@hx5t W+Bť/+)Ln#O'ZNR醁Os7z‰۞r˟pv"`@=6J>sqmDL q##@l0t\t'Eau^n0Nv7I~zgZ%gӑ~kfR)mAY5=$xnaS{>߂ĜzbF@e«kS0EM ^A yh2e\TkUQ_`z?6Ի1Ӵ>oJV͛>XT?!EV N2z"^E|Dzc݃0vnq='̥J&〾q1_nհbr{ :hǞ˘[V;?`ys@%qنacλMlʭ\<%d<eNDq@pSo},`.2}oU *X4t] ,nV#xa7~)qg eU;#V,.2[Y`gS zv}3}t4I._)x;'j8zT.f>3~r`hvΦQ>Sb/~(-/X0xmoma(!Aj--%&J(,RR% pL34VҤ;<`:xH)Ʋ)ˁ([yG9[v^\S.,8*@`x  LSGͅUkw]h7&MφP:qpg&CMneHcR^>eiQؘ+%yD~y,EpŞeD?pJP#n2>Iʓ쓫oGit#SFo`6~rZ^۔H2Nsq8o$ 'Eb=pJ|ԞOiyOh~2gWC*/ˁWk49&w&}sHyR4HZO0 wAecuj(&%;qBhomgS׶T9QdJ{j 7V<-PՌ- ȩAwmJcE||(Á!ѼCt-d~pS`8m (o@ ^kܑvjv#oL+/ȟ43ֶ^]iz r"5P"]!H0?p>nعK" 2TCx> "q{@s.ΚY[f ٛNIɥ @E8,B1V[U}"+[QdN)T?-Ȟfh'bv\[RGʼȟ_3y(QݝќXix f @\73n3edX0UQz6GզP>'Z wl;9<pz4<ekF?`_ǩ/ɝL_ q#Q/6ϽMD_zLs1þXkv|$= k\WLdzGM#`&&HZ/ .mq%B}yY7.ƌs|}]3FvkuuWM H641c-ޏn(HK1ub8#n͵RYsi6\HOs`LJsQ1<9MFlnU$qjl$5Ke 72\h8P:QM܀9dؓ\s~+6is(^*fK|8|?Эj \W!>ᚑ٬8%jS1k+qF|rpW_.ď#r/[|uPXuZ.!|V_q'^SbӦe ѡ!1tktv^k6ΟʊLT67a˷YJ. M$ ͦl\)dĽz~Gj]IwT7 ^kUFs+y7=tY)d33/E@0 )8< ^sJ{8\u-Y;Y~ sZ2?۲IP h$%YC*7s@^?%kdFֱįf QIP 6bׇTrI/DQ*}k9*ǥn6w6R])4IRIn:@lףMR2֧'S4.QWTfWWeߛ~1 Ȋh</m(H\?dN miƅhO :hK3H^[$jW'I>wD?S@ʣ1 Hlh9؍YbDΰ✣X:<| L_umSH D}6:?, & tYUtI䍔ԭ`:u8Ж/x '~r);mUDƾ,OLцYk''ݏE ,O;yx IsnсdX2s C?tC\QĄsP@a,|}"ԢyB ˖n s nGm-8 ,@7uU_/X Jy}8Mt@],v>>,}i,!j!c(sڮwUq34 YsFK7 ~et\73?^f75*ffܾ,6"2nsyՁy(pS]m@yo꺄 *<ƬCoI~>dd-eAi, Ud8z#mDڜb4nL8/%f}tζv]&CS- }qw.83 tAh˝K5ϴ_5ḿ#2\buS(؝9$I|k\5m2gY8ݯMJ$EVA(q Q7#֘Y6W¥O'3]j^?bӸ>~ۡL[˹L+4&}ڃPiv-Яa{q# [hDsXڕ^<+tfﱤp3lEx퇂md9||yX@}7 \CyuH7wNXq9zH$wW"~c}2gXԣ]җ:, lf\X9rNS9j[h'.ǨͽİB#%#=)w?Q ?t&g2sn׵%d/(lst4* 'oMِyoI19Tzda6I\{qbK\ofsx-Ƿ͏iōC"q{3H%T^wZ/"&$a Rrd,O6(>Yfވޑ; t>%8;X9Cc=m ܜEIҵ{l'~ךdy&I['TlO%|7pݑJiIv碫e PQ|__ߵanQ"CO6]xVNrWI:M0s)t-N ԋO+GMSD dJy`{ܓ5fqEʋ8~E]1+CƑNѵK G;+e#,8`#Adk0‘9Rt8Qj\.Y~Pk&Th TN<])@e3 ;*H[ q/ 觏U%8SЧ WlR{=Kk|UxܨMu<2O]ORQ}wPvވz,GkZnJj^ |?h@zD-\̹ʏ/wmkdCvA *,qo9/ eı8v+DTu;2l S}XPj$̊fi N1rg״'1҈ "e"m}W?Fܵt#Qn1З/0%bۊC(p$EN?j>ljh+kCiDX 1Aҁءu]k6_H.ַל_O{_tХ;[rnfEu,k9)z =Ḿ4a1o$麿@~x.}v)u)ޙLuZ{T֧}bn5o8ϡ! 
aeMXNܷh[փo0wf^HJG(YV)-<,W"'BOY /@f>Ivb[M̡3Ȼ1G\FPh<Y9xsR>늠znFlEaX؈.)q:$em"Rli <#x;(0-dMr2](WwΧ3/~ph"s"SfZ=el ",CM3}w9|E] \%b@ZT&oTrwt3gqHB|U#%K|E/d̖.X, EN!)=Q_q7\c66E{hsgYͨO^NOlYL#Ÿt@1j#qhw|\ڿM]x|x=foƿ_e䱄2GOt -jiT|y.@6ɢEk#7٪q96&78X?kE 76')hAM{Y}bjt)2raގ&ygr vi8(!K8h\uV,.y-u%;|̺99p|6[e0Xv3'GpvMYh'i~w¥/浞OѸU9)qmKuc㥚q܊t_`6$N՞Ejw R1\\te~=ߎ;2- Ċ Qwp3%l7]g5hVeۊiGPګnK~ӌK?Pg6x0e=Q+EawܟgVq[_8VvVԉS;[V q  Bwr6Ɋ@x,V\X\] ȼ4GtEW A2w%^AX?myrV GZekE! i}JF_GVpLB9 6И@WI #ZhRKyMiv &%[V9 n)K^۝Kʄȿ:?'ܤܭi3!Pm.n1Jp5~Mtx;wUF7 L/mcfRz ,?rod9.8~W *`SmL=ŝ.>uj,Mh,%@d9iW[$aK[ʼnU@έ<;L.]}o>Ɉ}ozO)&0ɠ~56Wxe VO!]w)4Hs:m$R;KͰy*GA\8yשO</)p8> bT*[txXʝ4M0krqo|`J6|?P:Hh]DbQm&oYݻ+f5G¥{$@(nNF\x GHSبaC5w= FN6gfCY+vaW 5HOP~7x')V:X^~ p1MB''Ί{ߨn5XmkphicQ(EP`/:2=U.鋊t@v9; ȖL]q7;7(7_‹ nN%Aϗ3 z9CwFGr{[}q9Rw?̠nj0on^MVQM#}cop٩ Z˺ԧ'K 1fV.l|sfv lY/0604[vS∭V`%>md ޔ_ ~/+$)JS--nt~v'i4V꒼sy,mFmeaKIKP"K':Q Od&0>C?OkwG { *En}*L77U;PŸșsiY(Ø׵6c9@%Te{Vcf)Lmq !HUhg"8Ki8X^Qsԕq J{IxqrZî#⳴uyd\w.`VtB0&$~ja܎~ [ o\3~REi8&I躭DFZiI"U3#W5wPpOĘփJ]k/@>[~ҼOfN(0-Ӿ+͋FZkl:]|8HӤMD38쁽_fXWx3뵤pm' ۦS8A+sAZ']CvEMJ.C?uY&ێ}ѯ=!q{h*:>˚p?(7@Z| 0sӹ5pI 8Q99EzwBٹ9i"o)30w2kzKWl*ϡʺjH~_VM Wh{6aC"RT]צo09[= [ٗCboX ov@q?K9>u4NL;`.o!͗jޒCJoJ^bv6HEZ%KvnbT4ډ'݊Nqp;ƳYQ =Q]a2n (0hq2x,yϣh<[}|;ʟm6QbD}\Aa^7抮{4OoE?\oisJ9:wW&ǚ #cTVŘ6$ J?ݕq+sI˴;h:Q(V.7 fq7K:PAzDZP-bMd67C@d;W>K&|g2M)hQouy>^%nXWt`EHq^.Lj"A"VK; xڶ&IsSyA59I{y_)UCŊ#3GK[͊piM[[pKtE \b!;DJVueF?aWiϽKb0&1?+cS,j]8#IJK8PE֜¶_Ɍ;bqDM$/sDnGs4"溛9Zqe-Z`M|Ovf> <P9~?J;79POĹzr3,k%{Wn]%]8/l{%c'l8Wzp˝h u}}R8}u "Oò HD}`eG5?:lVwz@!ԍK$%q;j<4COLU)J(e.&I~{z=5Oe-'ux%"̾r|/P-yz)+"[d]u(ezf}90M!^LH0a jEؘ]lq>smZTڮ~Jm2MӉhDX)X H#* ߙCi(b]xcf!sPIfB Ο;ʌiM82_;Rw H@(=yDXRVp%@W3Bs$n~{ד-hy:WZ@Ҍ{ĉtPysShb aOz~b&/QLs8ϙ MkeAԷ0=/0[o+ZxQcm+Cj,JkL#ѩvzb$5gQMf-N<ߟdW,9=t[ICkxmwɔHٲYѕ"fxϺl@^ Ar`sXچH\Pg3 6X5ӕQɖ,G>ȗ #%h1t+.)$8*gמ[2%应wa&\tqVm@jIbMzh,H3QX߭1ȝ3kA/qU~[hlz~On$A"i$nN,`,Op!՛'DRNG=>@Rc!j)>{ R|GS ]Y{٬F\5?p-= m[ )`X|'eW1O ~e򰻠=]2(t14;6qRz0؀i滹Yp^wO' eBy8d"_"Ł (.C:nOȱ% )ʵ\xP 6M_aA,7ת4):x7UGnl]bb$ٕst8"3jv`Wڴ@p^uY:-nЎ(c&G>*+L/y[{Ϣ0ؒ/<{`qQC0x>b^Pq${S]'i_M܌6aJf0O2gw3 r 6m[Icb~ b88ѓyHV'_צkQcZ+R+0nƢ1GY{( .7RxCf9bP+Ժͳ ["7ġƨ4+<ex ֑4¾P)Ԯf"ӄD%O@谚 hWR7fd-K%Uvx :7!q$ d`Ae~$D :{| =2i]oD;JKu6cY|LCY9Ey^  ha(<Ԗ-, A`5\tL8S{uϺ'#O<9Μ18 lMjqU.į]ا4 LwJKh٤C-1^oMcE9]cWs@~ߌZJiQtj{e(j.8҈-6GMsFh*_ڷ4EqSjYGhSb @_Q-+N,I+@\w G5TU\s2 о8i B'Ё ׋t*/T_u KTi+f}‘\L/r,7)"Ez"-u(FWQkLߜF֬,+m 1GMI'pM8*̒.)fPu]@xK?fi4i14ba , ,X֮Fuc,-rp><*U0qmYnW]] .A_mZ56XWq @x?+|fczX1E෣9qLU3n}%VlA9epLBoE㺎pFGlh;IkOڒGH"[oLSu|VԄG~#ꔥȨ<PwMco+_(cyx0$uY" ؾd[hxbuoFCe>jgzkv<_to!巋2̮k#tA$Ј| ',P( p]VײGݘ56l{32wYJL;0]1 R-(NEз`Ȟ\E[;Z MX?iq`MM&&?cF^nPjr'i_uK9'Ȗ8;o9ӍU/[ u#819зN콊A2}vϤyLmpl[@V& QkG/$1 W۾(:UzoяÔ; 0JN {"m1N}^CwpR]P (bVsΧ"|3Ŭs<=stbQPHfUwT}J]]Uկ^ֶNUrں~ -[UWF=PU;0aP7 I91'u pzW)F <4Ӡo޿ joieA_s}.~{op/ RѮ!4Z kՐkLtgKhw:LTFEOo̩j56A_9j5Smj:D3#&,2m(InM _{&rЮ+'R+# 3H&iYv G"K{حb.<e쵊9תfh́skb~lwsv!wCe;@MPYAnbۚVc28+)8t|Ҡ ٟi%%tЧX`8T`%!DT[рJ2(Ghr*]7IW[Of/zI-xZc`װFj:e:pbJ'w`ī:5.6\}ɬÌd{RT(D:uM{`fB+(-OZ{^">T5o»B7|Zpl6OhP.d{\DSnoxpY?dv|abg1蓾EՏ/Я L@@>ZNUZ4Vh7E}ʃIF4];UNw5H P7l˄x2Dg'{S/DnŜj%? 
|4͗48] jR63N#4ɾDp z o8zUzhfyCg1APFIoA_C){m2Oj$"`"<-I}֔l['~i 2 #n8.-3n x-ZBŲX`ws&Џ4 W>[<z9+`)$^ҠzusmNMqfg#fd5?4mv˽i$Oim D\25u J]Μ_T{E'Q̅,JjݬYp5v|7:/>x\R{ؚ ؟bXs|li Bc^/{r-:`:Grvq>ovDţw/Dʹ`8}LOۯ6=jb|] `h'ƛ̚CVv7uas}$rq2DK6su085ݜ{ɔ7+Թ>ߏorwG<*%sH̫,E=(wz8w cKh9U:IҖ%yVktl, Ccn^)-$yIx"S-pInZ!I͹ɘY W꞊JefKc#_oLf0dY9WE9dO"&3mB.k&RxqP>=B69ZPM2;&ǒQ4oɅk!\\TiidVCZvEj?h t3J2m6[6K@۟T0}N*s{p=n^[GT=>NvˇH@%D6N`NߊODlf*n0k) ̚j;HZ o0wC&{tU)IJӉ/P]ud+rJפ:º4{@{:`rLfJ4/zPvО|u-ڮN朒%AuR-|sg6$'e.C'^.zmvO]T  ojX' m/wt~~ ޘ dofhRTP.-w ;E>;.ݭYUP@IDATѢvU|,Ott\\2L j zr ` ZLZؑi#x KuckmH0}9H6v7ux eh&$wX[˳,Nsu鵙ehl%NI>f2BKCcⲱ\VH@6藀Hv:WrZI#10~G%ZV߱;eNv z+ }т^̚?yf}h~њh)}P4[5ʔ/]Ѳu1h;:Sk#U϶SnhT_M&SuH7w #9=h˾Mv`N!ٛq.:4U6{7llaIh_`Ssab0 >_Y~ Y(ecXAh?}{^ۉ+-I",*wWAӁwGJ&(xC(P](5`dŵ;>9,HN~k?hny}IUva0l ڛyinƚI-,BiqD݀_[lq}ms?u z.-$hiBۺ}ZQz]i;”&tu~}Q{%2})ݯx'J܏ d/ڒIfK \ @1yE1 }RqͿ)wD.f ز?,Ԏw((\D+$ȵUߺZ\s“hLj o@y@񄴫qaNOwIiJKx+dI&=kg<Bn*iiOPfWZxqIIaܫM 9{<4/xޖ2^9F~o(k+K'Bo lsSJ@l% YAQ[ ?WWȞ( im .3wV+; nV4EtɊpMqj26 Irm3BxkC&'7lKLV5pޫ|M4vlO+WJ>vQYg뱐jܞ|VÕC!f&zNpC=km^FEq h؅,DHB^rd\[ʏ)$4%@PuJSkڃnb1qh) ` $O [}{Od!PqW/dTܕ;lEh5Pŷ*l?P}a--9MEgD[-e'q iſzv9ƦUi~"+k'*)Nh<|q9ج]i~d;Ƌ𙞏&f,\_f.FXL$ dz^i`d[2)ꅆšU?հyo @ >쟪Gf79üO~do"΀nlJ8Ы+2$cs6pJXۼu1~wv4[p͜d/^ǑeK8&`ӵ8`ڳ<(ʕ&L:`U`fN3`QfՆ/. sμ ۯ+K% m)镩Y<ry}7XK&IN.IYI{_2f#&kBD4mV7gI.ת{{>i!6ɤp߅zh6;Hj8Yx)^A.-aح*/%Ec셥*Dx63MA:'_jE _Ɵè?CqYaK<"ng/mڻth=yaq"9~rRۙBJcyY>)xPeocH5 5jcΕ\M"3Oo;ۄ>g9U;OPR$sCK_ۨJ`*WgpNQP`0.$Z%<{ *wY=^:οUv$0g J38$=-ξ'׭% I^_64e4[?GDn;,‚5c+(!s§ eU[gխ%rW5Vn5ixb&#&&XJ'@{$S[ҵO:.7N;| ` L? K)ǩ@J$ B D1JW%%E4]dخY_[k0Xl~/I#Q--;z3f&2$$[BRa|#t"gZ8FӨj\ wpM1sBJs:>Ǫd 0}w0tz j|!-R6O "xz_Fvvp\w׋hZ->n*7jc!'$xhi@$mEߤvr7-ZmmMcNMgxJS7Ljfym,XmPk'svJGx7ȿ߇}9v ̆ طLj2]}%XuƲMe)Q^$ᪧnX,FW ᮁ?wo_S=`s@:{۶FvBK$J4o۳hr=2 噐G<H$fmS̭p;:nso+h+,ŽL&5GװS ݂N Msx>4Ppӝʥ*a'kMKqZcZw6#pLLmv!`4^f| i2*V[Z&H\>Ň3Mqu޸{y2y%?ҷĵk}5iw5 hDn4JI/OҾDȚh).CmHlH#T; \,ל@i *~\|OZOdN#~n\鄰H-ǻ C}:]Zk-d^dkM6Q ]`KӤv2gX֓tb,(,MjRT&2DҦǯk6> @9,ŕY~[-X2_=@;בdT_l".G,-U'Ł"He3<ׁL*Cx5&ye #ho-b8}۾7y߻L/^+z  ʏ9^֯ \»/CC7J0[t=95Yek)CiW<AYz#3)҇X;u7]BmN{]m36{ DHN_fMҔߣ̮!Q-4wm|iORO;pE >nlϖ!o16sT7g..DɔUéSvبҘ;QZ4H_-+H;}]9˜=B⻫OyC[ޘ Z{;ZB\"M.S/˽B|c3w# %^Z 9@WsdjѲy\*W&l _7G`ޣDeͳ s=UVN,idG@NJXI%ěQ|m&2f=kAcM@]ovi29r&3" [ b N@&y]VKzK@I5ɛ4M h|B2tG׆s%EGڂ_9o9Ylս-+LqLz  iquZwBe&Kz3_L v*J}^'/_4+kNad3)iO`ms[^^ok%7T)}þP]X@"s- -i@ FGx1z4>VFKg6op~?#O˜F$'#Sن~E胬o 8>iq%w=˜m ;h9Eh i'lIڙ2}r~ݝ,tȍ15W'{1(Z'sK$ :^wiư.qs׍sX3Eჿ0gj \y}A22ѯ6(؜9+'%NCg4-*33B5W·w|7Qn_5zS^$h$5O嗽(IFYEοRXfw­[SVոf~lR8{y}dtA`[c3hxpD5۷4hM_ OX0ls#4|eɖiAZP|z!{9}</ĕj9}]0Bx㍭O?axР6(Wsfsac徫-[o$VIA`X46GN -7?BۑA{}(N ^c :ͱ4-5mgE;DCYif:3@nMZY$Ie gag5ս1mW{'jeGi= vRd]c IpvΡ6\.l8 ,]6%lhvd@>iNr bko6e5ZЂ#% T!ۋ$Tz2q ۲Ҽ)Gs'ՅP:8@OI{y&ѣ#:o 8#'7L,E@=2qMJ+V<޷x^`6d=g2֤&I|&mN/w^,JseH_pũ}TfVgG(@6l}M Z0 MC C;ʦ&g[jm;@ Fs}+ xըVIveR-s})fv{t%_p]RL-x7U &>-8$c^*f̲w-9|g6#G[~dY76 2ߤ2,$]ȣ w?EO#I‚hs7Z0e3G!Vzw4}p\h^ZÖC_ ;v=K(Oԧi@C߫hez:}ֿbgǧV@/e1S˚_2XL8fgmED~]|r ruh]51D<عi^W>H0!ץtQ̽EDk#9nb kRZCx$}`+%,F#֓IhHQ'S1O{!QEl%_O]֜@U^mN5%2Hޤ963Dt* ǚYi44}53_eWߝ&3B,/KDoX{ )zI;:@_=K&iS&񞋣`יw8ԦZPk<&@3q(!&EUO,x(#śVK}Cۙ+ @x͕( k+Wg.cZ }BӮ; l1Ng;7d}5+r) ʣ[1t@ݙC${_þc[:Û+NdBSELI]Z e-Iķ܍jWyGp2[ Wi ,3 hA;&N9I DGrݝf-Y*5&K(':ň5/cixSK(@4謆X6iܖ9@*1x/| phP"? 7UPI2]svA9\aH\lj)ed6qc$>hդ̐(s^H'7b7=c@1KSW_ Hbu2}xDFA [3^7$0PEc:hƚbx~{>v>gNZő!,#pd. 
1($GWY|),!ѿRw0G_rDjqmVm%V{ .WΚ:6j+֔Ե©fIퟕt$JlP*R ㎤)C+CDRlMc,i<+wJJ[V Pb*lɳLZv>i=EG@͹ccW3qKsW~i8bW)y[<-`G띉Q_&>DZ8M/f(dh.jf-`2+7jۣ@h//&2ZY,/ܥ/qv 3яܜ}f7 LwEuzoS`cz4;Xr 5^dn>#O%Hv3uR[BmɎ4 ѫ>+ ƣ݃ ^ L旆M}6eWDa0L`'@ৈ.-ܣqV%AlfAhp`ô͠ܫѻpvv7=# CSX`=Ǣ'엊=d`٩ȃ~$!%kGUfQ]-3$&Yiv=|LLdmڜe*gߞZv "\ڱ <;:&fn(QdV>> K{\  JPRF="{v' [-?1LtBVJV8;|)7eڴz}7i 紝o_~9xW<7Xn_؊J-[xVя$t2@%?rM(~ڴCnqC[@:&O+h$_囨ZO}x9ȍܠx}GzΕ@5i[qR;|oʤWķ~ ~rіxcP;~P;!x.ջ~NU̐tP>OBaM _[< a^8ɮ?ĩb|&.^:68v}K@I%"'i g7'6' WiI;{N٥+%=>=y ?@ 2a"zZ6mi҂qk{<5'c3x%@)I{iڍ%F9 sml?ZLb'HȬޭ{O`yL"}׵Ph}) s12MelGCWTXfۆФVF4uan{ԅ[`AKY{P4upѰm2#U8FƶfvY-JK߾<=d w-}RG7ԹsOS򖶲‚.Hc|9sq W$ ZB|a)wwbnW#2c)2`:ǂc„߇"nHq%/V<J֭XI4+oh .ŌO]cX3DN֢Cz_lZs6X:}⑉Sr ^S=-z㗚)X RMvZdJmqΙ57I caP.D{[ @[ l+M@!M" řIWq[ Ƶ@ۤ(Ր4֒&ޞ塮:(N.9{rʻ\WPj+ ;Z97fK`+?M4v7%)МHׅڟv. xhcWY>vJ exG1i)kRP-iA׌:L5$șK'ե9@@Ji<}/#n'i*`ԅ#i> WM3&5@q7y@Gg;۩9LtrqnxH_QB.xR5,~L"7q IR#i7w]xZhJY @H7 R."finOO1 ~#NoP[v$>p ʤ? [ɷ"ܘZtJob[ulf: wD7N21q."BE@~0:'6k`xf 1yg,ZqCiEؖ^نxlHI/FeAj0mL#'5;s*ֆj+:7;߾0?5#rNe N.WZ=zyw|4K J6&sYӊ xEJ& q'MHo6Æ-!lgNi=> }-&7׬dY3O!дiwð8^^eJM?HZ_6 jk"nm>6߲藊KjTՂmCS:kP/q3hRK. eOMpG^ޏ@>E7};$=>954u.L <}lL7dw}tdzE 0ȔU@{ -[ۦ͂ Y|\-rG |93}LiNt'DE$[ѿ?CЋF mYY]-雏,OX) iE8nGoznd`Mk,80;h:ՑE¤QU ݦ,JH[0=Qߍ #+3K$;w6蓲z/rča?&eIJ:m V[aôWWhߟC9pj &PW\.Y큣s?,m}:;h I}\8jL|׶r%-ďS@\I|DH.8f?֔/Һ=SGIpydoL](s9uxIܩ=oh/΢`ƎF5S,. 1.BGM=Cpf-jtNfciip[9|bG:5h[B4;S~5g ^g>ҸׂXكMC;*&_{S$s0( dI,PtVӎ&1b{xaQԺf+S^/}p䟟fv蟿d<_X:sMj)G”XG:@־cVYiδ'3#o*L~WIGזÖ 1oroΝINЖd2D0$7׎Duqqdt(ҫ!;1($qh'p $!+\JjZ*H+A_3B_Q9Wi@'|I~3*}K#'JXqiJCYdz6IN&H@hԪ]&n:M5gG}ؗ܁j/ #1O ("Hʁ# JT"l=_@SW~ݼDu Gv_(CmqX^} ւK3'44\ܐ / p] WIld$`/*/Rzv?9hB4ʜ)x)Q~"Tɡ}" *Ή&+]a} ,z!}R=̞L׻bi'?Gٴ-{ }_]`~LZ4@Ӌ@_|c 'L_Վٸ Եi_rd=&'Q?€NԷv]T{$x;y-mwȭfl|c a砵$oUxf0y[>sqǡ]>c,\= rY޷IV ZޜBR/F˖ `>g > 'ޱzQ7 ؔbK'bEȯ_-&9?K-̉MQa,hYHt!N8.htp1(ߝC8v~XՒb T_mtb! ~^,} &_- ^he+hxrE 7-_$:43 "+;mkT %2h;s9M$u!l-'FOM 1$ GjMϴ7h\:"|2V/P`hdpL{OQ6l~핳'H I>vڂ@gB4륅h6=EycBOVd7d3i2Kܑ|g!ws~#@-ᇵ{-TԎI l>'3ϹL̛v^ZҡJHk{-|I:jOgDr BВwD"ͺ)09tB[NhigI~b 9)A[ҵh_志roά Od;0H} 1@IDATZԶzi,jP>h-aM_L)P׵ᗇ2 ?}'Ao/X -EݝF#]1{MVRk;S'Mr7F0_ޗ\ht4d'R53hH v3+4˘k[ץ- SMts%HrT6M)Щ5J*QOho>ļTj85( o͜c=r Y ,-n|E$.vmUZgnQ[">=h45fu Ж2_:l:"4ʑ5c[mGlrز!YK JZ^W̓ʳfG3'Gzvd%oj tO\z\_9qD/®3NN:E4(?m5n>_[֤A} NlUY6Z;јggV*]0LF6׾:]!/]๥ (6Ecڥ4 nf,RS>s}n@_RЗ? 
ihh5ˊSAL z ᚤsuעhSX"MIH'%خ.`Lu8;?&~2'ٻ}Kߗ,څ@_@*Ss K HO}/ l&;#ɂaKW YJ,lW/y!?0I0z3.⠯ i۫+$o.A#$'$h@'ڿ'k-P=|ˤ0%a2(LW4qL[&ɝ?Gflhv5C^]2'?EQ@_9 2s\Q3[&srt0j/hWAD4_ m iB=S˯`-jWuz#W*Vnr >ZAc uĪ2_~J#5sf)D4H!ˀRN9S';џ/OZ{dR?-a;ܓ~>RPziOHxygd;ŔL7Ƞ=P-AGt&en?!*<ʿLyJImrsVjoPUBl ]\uyAYD Ȓcƕɭs$%!lL{DµQ[ @[ /MI̬1$nVYM:=<Մ0P1ğiVe bѪ rIdUfxL]&M JQ.{ >4pw3Wsm X:M+:dth|I@V|*MTN\=2luB$&1_>UkTA8)wpNscEziC5J扻2﷤W2gIt.ϧ4y,?"#Wnp1"h^IԀb[964X{^L k0LD)ڑBiqRE{qs ҪI :ϏЧT}^]"*BrLi82}aa5qwuܫrsuuް$O8BO1]2=S A{ZK[yA' Sb nwuLQH8} a>BcQ$0̓z?:,OzuIg`j!th܄ɪz8w?!&wq2\>K~fjo6i2:oQrcױ𕒗cr"a.c"i:Bli e6Vy$i=nwL>Rw _z<$$EstdFCjq?;x@|y^{BmMkkIJ@+ =o 1KZ5yCM֥G&YD|}3 ah) VԃlR֩2&.l;h{Pةi_䚓a5gSVFG#bژG>s!ڰIK׸R'fv1ogRoe1ۼ&5aH@Z])OFy- 7wmD&X|3$<#g ՊGR6p;wae+j+@m ԓ!+XLڦ[zcF[n* ~HғxHCw `7U:5Pyđك@Ih qf0VGې~= |jKwɪK0A y Nmj;#{v$let-V/j #_29P4o!'LtHXG9ؓ N=a3D߱/%m֒l߿?OZo.ib!` gi@MHmo7.f(n j:{1 `r2- O0aCiIL 1sYyo@K]WOߥ3=&=n~cfj]933ۀ5˥-~euj}~dͣra'{|Ѹ y7ǣO*83+_oMWi{o[3}$ ^ýj_S_ C`k<}aN  p#,.eRFJ >q3"Z S};n fڽ(ˎ=>_كe;l=6:mi)Z|ArpX˯,| w WZJ>Br _j{4NO}Qp-4n1ɜ2mWs7Ql)?&z̺8@e6ɯ7L/uR-e*Sír=g7`&/Dn~6һQ2i*i/c͆]M]l'Qs>\z j;qp][ϑЖ] #'T{'u.=GTy\ieid7,wmܺtnG<=+;͹da 2~BjM$GGNͱ7cR .-:8vSӐ{Oj 8+ ܆YLI?L C~YQ=3A8Ictw/"OQcas>@YY ~9+mKk@F9[aA^(MrG}}ّeB@F9jH&m萖j@_p4I@_@S߾w3$u# >M4$$t|%:C`J%XCLA=@_Ź T|{N+@/9>Y>{=\nŀl<{ik(Q9TݲK A_S}eW =*#hU]'8 Iin:<:WLij!,ؐ IS&h'z<׌q~a+ؕ]SG}EOW?7 *wL<۬chqIՓ04Ѵ%|xc;`|1ڴILP~ [G[}M|DSZW+s~ѵr4:D@xOU8m?ML挴#'Ǚe:?hBKqsGaלeVZdO\vk"SlwܝlWEw/a}YI'Ќy #݆k% mob&N~ %2Gi:!jii6F@ϰoO6@lK»žh.o~NwÎ״i?]B C؋O`N=d® h14A&Ciƭ)TcCޥܤEͨL{s.moT%+{;t2fE$K'K+(s qnj%6h&&qw6w||ÁySi抃 ?mI~1{xT? uotZDU} Zp1;P[;w@_Yع`c~Hͣ_ y>\EG མ:6oȦ_n+<)M@_O| h3[a0z'8@_%S `~@_5?ԻB EѮX\n>&05W{n Pxm)m|?=à$m&Nώ'k:r$QL^@dZ=xt-2v680".,ZePΘɯ$89hd w{5W7iB |Du^j%k\i# LDkBA*>X)Ӆ:#T e'zм-PK)&+&,fo7+Wéɕ/y4An%-8գb+>`b*.8i&R:8d &>@ 㬱-pkG|֋KB'HSueu$BDŽ1G.W޻DN .OS@0aul"0V 'Gld҂3l^tloڦS5W摣cE?7nE6禼t@ܚh Ф /L)kA:@FL \ہpV5Yi@,xH\&䛒j=Vo⫅)و9a=g/ <}`9|4ƙ-/ ?g23--Wӓ ܇@vP^%\0`ɿAPJ2bQ(!Z%HlhiKhϞ|g>OmcmmffX/yx4!d[9̹i&|{3ئs9lsZxɬ K3'3mhbcFy>6,W-bKB=eos)o Icz=ax΢e`.CK|OքHG9xKӿ@78?ȏ8#iMk-86 abѢmMYlYd#톲4/e2u"Ʉo2s㬹(;PHiJC.iۨOZ_9z&3q Dڅn2}(REOU?LЀN5F)ϋ>1[4#p.OKJ3=N)ZMS@tR=77N>WZ܀>@[/d2w8m< dz).*G2zOԙ>-^TB۾ENk|@ Q#FRDF5z3# .2T<"f ɿA{Ky_*!+K\QVY'3'hBRh*ܥYh"=aXD@ϑ3Ԉn7g+k:@evr vx#OQfy+ 9.z=+ឍMz}{b@$-mjoVdG\ *&C;N Z< Z5ܞȧՐ<_6{"ޥe+C->>%&p$'Xqqb4H@ L/9-7651ۀjQ`5uG͎ا,/kDl%y-ݰ:=0~Ļ7mUץwpE_94?/l4WxPWi f~jՠ% Gֶ=76͢y](1+P#$]lq1fvM.4GEN_:ܓ'8my|ʿǜ0`m^<& /bNeIq'ӟ6 rwU,{2iAsϏl,8oՉw\KsmӟV5Kt9#Ƀ&uBv&<7ĹTwU!F:-wѷ=e5Up٘XeƈM3me/2ԵG?א*J9 , {6iE=?XXwվ'njŤ^H+ 9ԣq 䃤]H=7 ckCS$ }|At}9Sצ3167)K;R_DR+E$rD=r[ZKZP]"/?/&ib2OH'^/[&rq˽A\[h؈dCNlus9L؞ctӘo{Pkչ꾍fO .Ӊ̞͙j;"9*52ZS>Vث@h\)-ӔYBLcXj͡mҔk)6g*B-7cݽq~$x4|N^Fe{WG"Fqt_NZjJKJa1,c?Sbۘ$e_S2@b,sx-02F&Q %jHeO|[+t׉^aAJeB${I[4Er#ĩJ6}Ur 7LW7!ilr'REY"'j% 7 oJ9`zXߵ ecuѺ6Fugdi xt rtύ7*r Zf-KCٵWmui 8/M^ف';QRؖ؅4'5MGujuDc"ӓM-@r?G!4.7 L7#g,*?>[7hn.l݁݅>]bx{ssw`>s&~3<2>s l"˚ӾWL\ʳAbv<-KEvlش#6!/,F(S Jπw8ZU璇Ogt$<fFD5O5O0%@h8jB_y?[_} pȩEi2{z?Pr#)@KZ"DY\9 (c#r5}9TTxmJY5gMynL80\I@0Ț 殉ʛgi3^m@nwkhIfXsM2˛-+~rIxڌ:rNE }ǧFme^.]3lebD_aqncl ǒ`su95URfj42:>? QE)= qHpwi$o1'ó7ߙK[?a6csjF:fݘF $jlݒqF[ilZ`M|kkeIc7)u:u}k6HyAa]9^Bnf&O*%NLtl>%rtp?tH]ʦMTҘɫyCe?BaipYO; u(2̋sWNln5:7$0 S- 4(a13Z#|ҷ-Cj yvm[9vf!fq@N@~>|" e[1 ⺐Np̻63sv*0ӻV;0h;1X|s`՝f9hǰN2n^}2u;/#Q6(6}u.1bS9}GI{3.˼! Pzy8Xo<\jOnZ[HJvh8s:߳Ytrt6s@<=J:CȮK-Ԭą\i arRKm[ @gy.*$8nJ>6-cLeƚ575CmළTyѦ(DBKHTƅΈ,æg 0{>AyQݠ'7GqBZ,o0R6b? 
fN'&3@`X8StG?4w~:6\{y8,-=].ݤy4ķGS s/I<4C-c,B׊Vf=ͶoʹKTW5w{Ey|d3n.K{Bs\P W\c>ɠ\n1gYfn˚[1_1uwLGݹjS"'^nW}LI|afFֵ|v4WilwΣ_$EٟY^0I+|aHsAXqE\P[Дj>J}HH*#.K }HjR[FkB*JM۵\ 旔GQp.SsPI+:rsήՄXs >(bBq"*wGk@?Bu!^;טb2iS/~1LgM*IXTxI |R1N hR ]X0w,ÛH9u}ni?r׶D;u9^쒇>&x/LE|7ScY]ԽAhX_(l>ֺIJp053UM&~HKjj|iٙ<4Yҡ!!Q\*B4ΐEon_aY ̧<O. hJ< 3`iQ-s~Kx+ͫ ڇmKYU(daV[@텎cx&䍁ru}~eRޛiG( gu|I4{@xyS>5K ?䴼kL^X$>VQ @m쿄&3? ,6h^lHO=}6g1HR7ײ,Xі{eu՗$%íݖjv_tisr6y!c ӟZΝa߿?Zu>p&@Q-.1ɂ^e-LE(vcR4oi<ɋp^dRSa7M}nN/g8B&`V@nv{s kίSZQ/"s '؉B}7.tlЄpypmπ4.ҀҀhAcfʂ6!&-w\=&lҦt>.0c%s`_qˁ+b}qSҾ7j[ m𫼟x(ov*O4%6Gmlbًɥ/Hp`G固}p3E`񼻂Ӳ-(ۏ  vJ!]kʥT^˰ޟ'9Kl ]L)tz[,0r:ӶMCy{SgZn+yoEգY<͌XCטl9n+j޼4_|R v;^$c%)u- |Z^x='9+9T7&N*p; kq CloEJ ĬsO7g |GҖ$ Q@m4٤Yi`F֕Z6ͻ|Oc&ij쬨C\ѪE8oG f \F6g̗.+FBRukQ~M?Sgz_PGYk){S)3f:єw R0ϟs t>RR0 :s[6~k[:~;q\KKY ߶w8f?^e$ّM_6_AՉvU!&I{|]/ '2/P_T.uKVq[>;u]}w""v(w0z8^ǖ!o6@*2K)8 M~Hvb\soC6#sCR/̇3'Y?8Oq}œwwiKCݢj&^6|IX%J W,WRWHBԷ9[3|z&-Mk #NygM y G&@e 5JSaG$RQ!#+?%]”1V݇@e, k9RY]÷TP̩{vVK%grN|7J_# .)`M*IW8ZQTT1>=^X׹ԏ{n.߫ *^lƽnpI]xOqҟUMl ;zޒ4a=PYrZ -l*s׍5;PL`& +(RہIqR Ij8]5f7iV {$_;m99I'J:qa7uQA*7>.i?Kut(. WJSs$R?F@kG͂Vd1) @|JPM 7Ӧ׾}1Iț}CRTxGʬlUvDtݭ}6ij%ݪ zz"m- 7-܆si yl.-;M) xE|Ǽͭ٩SYwm+[]GYg .'.ztlPQmYW`tC ߮9y\`֋wى$:2?Җy?//sQ1ҥmt;$8eZj6*l4ֲf ;fOZ~gx.nNO:1otLaҏSB̛>1J\,?`A{c~y ii'_iDth|ɦN Y5ך{!|x{sRh&Hf\.๺˧;00&.e%{w$<?;WWsKl/G6wR8nS"\ct%6j8w;{i{|NeTb<}y.{ O[ 뻻vRmYluK|6T88G!Ty||:Ҕ20#hOVŭ|s/^AYцOc6D e-*t }69JARco#'4&/$O&LH>2(Gh -X: S]et[[jN脐{pU*  UPPsߢokHrwth._Gt4{.-7t>k@)fN=ڨ"xi&^n սhn7E lf?Le^Ục!H8TmA%r6^dF&U]m',8O 9sɧ~7!6*"Qטٸ<@q__] o=t5<IG/ f?T}A~3vڏGEK;+~:w$_+j]7:C~$Q3}g]<GW߲.jcO +/aw]|b㉨ymK c&U93I<6w9u ,HO~D+;ً"~F/zkg˺+C&g Ivm񔣪[z@X0}g.l M5Ps !"ay3ƒz60ğqBۼH WUN(6+p:oRmL}&c~CU̇t[ނ,9Ӭj-0V.KtHT6["R(i S 1=V6[I_Hc0}ڨ}N/Fs߲ seT"<,ݻe)q,9{&wsfX{RRs^<rC}_6m5ei[pUBW辢]ѧ2]¼KJכuVuvev5;pThH韹jf fr/ܜyMcz AlgmDH(_`ѯ`lu`zGX 'HVZ9Eޥ_\vq.ӡvi J{@ +~Tho!-< ` /(SI\{ʵ-K. #tm\qoo϶ =NSndt~jv&d9+QHV7@h9}\Tf[I߳#o|m /d~2œ.OݛЫ Lo6]%M|zlܥu g 6╋$8ә'ǼKc=U\jc~V"q 8DO4 j{(G 9K -J$ 6V+bѴ5]sĬ?Iv18fe\G N7ǚRcU8vm5<|52S0+eCWo:*M3^Iڟ?*_I-DR1?$ s*7@7\ysôuo3żoxs(UҼBc.xOUVnRﵽl7WlڇN $ 7)4NQ[ Ut)iy$]ؙ[IM2&l _[A`xK#, TX{kꜜL]# 'hhG梇>ɻ2clɻsnO6/zt'fpˢGT6L;,@_ E?kJk|b<ǬEraYz1_ocmU u/My8O=srd˵5mTBZ=2 '`gN Ϊx2憹7!)N[,X#m @C_j.Z F`umݣiy3؀xj1#;Dzcv5 eajocc)6O),6Te6Ʃ7K@rou`m6e4-p.saie~A ľu,clw̽EåaEvr{/mt$dAW8)yH=)+d䩱qg(,9h ('.5?fm5+Jw8ΏPӮ;g|оtnū x6\1v.F]3̣& D[Ø>)mb.CM $ioIK,j7hgf }gZel]:s̿jNs޻2jJU]k|qBol͎e!/b1t'.8|qȾ̝ޘ쒈Le<{ӿ38h;֬޲##U=:H,J Dvc#4xGd?.mVnow2EaWYJbկЕ8DuC|-Vw0ݫ*GҦKS~2П@YKnFeTù-yV$juIIr +ys ^Z0lp4hFcɣ}x@k ۠cٵ&:*”C@#:H;l(cn?Km9(݅Z|I?bH&Gqqb`a8:84h]4 R]0bMMlR'_H斓%H _SsjsZ7zErZjk4H`&RoYm/u\6J9tN:L6SO#YJC^XHƏ5R!y%<" ^'ɱ\8dͽ]?07ӟEJ_F?X!~9..O62H"S+!qNw.^ˡX)d҆\~hUȪ'eIUlJ}@9ԣN9%IyM'$NLHAVMycJ[u/ɽ'Cu=앞D{Ij'm8mNX2 ^VT'Ύ-λye|7T֯goǺ 0y7}1TWTO0d杏[}Ajse"2^<+U|A˾LƂ(^߾|^y4mhמgCR:6|&\Xi66f/ezԉHn=W(.`uUr|bc}~I][ЛhC$3J ^S.XL7D7r &;!.G 6]akRR鴏Q,0< h72}/TVhvQ:GخO;`ځ6lJ:5Փ>nV} }dyOEW$Bojc`M}1yc`A-Xsd$;>/pJt-^?7]ŷw[} HPJ-nڡ}F"=(%`.M]6pt-Rd]ڟx6auMg~\c:^$-,://J};p^RƏU$|/ߛ!k4| rـͼ*Gʗ ȅbϣ)5AhK9۝ wj=V¿g1VYL3kbv"0{i\E|L,|0qAݽ6 K .a_l6cje]̲߇DM

~ =#oܤn);@h`w@Vws EZT7 d̑pG0o>>TK rw)N &> |:R>GCJ Z$0ѭ~n{ZtJ_=,:@/g[=~QxDA*j8BHQ4. $l m5;>VIV#gEyWHA(>Xs6hW 6lhNwÌP+Wn%(K#M g]ߊ]H dBnl λ)[[d&@`YZ^~FXjT)[s-PEVIfgbYgcuWRiJŷM3-V}L$[`"~\4Ŝ>ZV VلY [Zߤ2|=tsm>kfZX/O y}=ssnnm a ڋ|GцHx@cGzX2fA[q}xl?3 GFN] `ί~nGX.I}~ '%F%N$Vԃb@(Ul7hH|-Ze sMm_tl&\H'NACb`ۍPp.qqO\%B@ ּ nм&+}7[:s!C L&a4xȗܸ+773c٪DYǟ'zC;݃pƇǹ&ᙁ8cj|;xDkGV8&YPQR{:oʵg2:)'y!ߓK3s37$ݺ'iyT #oJ^^H᫆Io.jǘ<ۮ b㤑[0.. ~\8%C  ,U @v\j||h P7xY& _6>#3콎8yu,8r/pU+]#-%M%iy!Yؚ`KiAIjX^'f]em3煖jY /F4ܙzH-T5>(S=ד؛z X KJW[4>].ejQO [. #Tcx>nbL$}:|sXV n@IDb.T Х6}&ͬ",FyLX"$,NN,2!Oztj9DOK iy]D#Ũ肧@EYOBfGSc+[:藩(ídҺ^i63(h~,}y^^sw0k7km@cل7]gg՛] X%.ߙmsht.I]\^s1pj%%Gs22[׆];9!p,7t)(#0o]6 #(ԣ^Ŗ ,@s`;kN+5V왼86fxd<"2|b٪Pa/l= ذcYhYl#\B Ix|bfX̸wF7`GvF ;b=ŧE',+!?]<mYfےdEEkDZ=/_)B>%[:kufTor\9Q$Q)Tny5e0)y,~?pl\,!y˿P5va7⚰Ct-I4͇z/p9 G)/S.NPntUFjp QXk|?#)1R~Xڷ6拑TR\I}H7i 6fz~8{v\l->޹\ ? JϣFY1$yYq:|0N;ŧi}_w~#~9ۖqb*ӹH@)eHv Ht!'neӷpڤ~w\ m1kn yjޭ8G8pc;fmObpӈ-$cKW b#'LW܉5?HG/e52L0Ϋ/yfRHIyuXEGÙxtN`Y쩜ߤ&t%#yp<`Wl1Y0:SuLUԷ(_a%M`H>±~间ch%bnjMj!떪O_)Wipٗyn')O~7-D: )֒:/zn%PJo_ lB'Bk1S9jBX4Q.D\[Zȹ[mKd#O +@_IF) ʯ[nQ Wz>5ҬC {#Dn3Σ*KwDǮzǘ6  (ua#40諠h.)UX,4G , 0IzבjqKY̜<- $K"$LW %gKҟx.͵('}Eo8qϼ'p)Zm/Pf/`[ B1@(*|A_ ]ϕW5#%}AQP@)}z(!;Yq'0Ѓ_[R)."0~?ExxϫVU_+ A{=˂B/?z>ӧE,ء53ǜ'^\O~" B2K̍^gop v@gSsG뾟s43W%ʳ66$/HJjHkku+yYw f,GSQe1hs]Z$h̏-ѳɷHCfO HAYmz#M$p[ᲸfYzr5?G&'M(e=M bf8Z2"IO7 ]@߰ ?<_ %$#ߟ K-':UPׂxrFR&=yRrVRnfڭ'ƙ _&'['04GuffĚG`%yŶm4CvBc@;Gc~F $FQ S=wW#unOP} )>Z ,ѱp1h8 TQlC} EA_Q$urbVto:yMhF8&fW=jd\,U.JPgT^c?PX#tl\761dch'@ᡟ#Y csw-RΪ$%+_}ɾDƖ4;';]uI)tC@l$ U:3u+:DG/FVTd"_m@}mP TP0%%%Hʪ/oP0x1PbS W"Ymq , =U@uҺ[EFQir3wE"RͿK+'b&5'~LO_L4AA =h[L/_ p<+~>5j{#_Lnd걨!0 IJ'ۡ(l{=Dyn$3ԗYZfڽ.e I?"^}eYu 5i7ߪ0GHYch K9/U}돔kOa, ?Gyx~΁?ݺT9+fHeIJ| |< LK_==ͻwZf-c4u$uL5քښlnGAfxlO%S#_njSPy#56K DxtJ!3> ։,⬋ג4NAcU@IDAT 3Wr PD{2"#;6==HwU@PyFIDʣޤu u|7c2U\k{!ob Ims!p3ɳG8qZH-"רnG#gz([B2-}%WBT_JyXu c tOe21T!+jٛRTG!׶б&.,x6?MJD Bmё݄F f8p!'`Oη&Ic˄TwR۳'x3څEt (IjAXXN7`*)*paށF,2{@)Z@j,CL9_$aZJ4|.^I8#p\1loE$QJqGAZ؋xZ`ZG@Y}Y@-nF%aT($]̦Ґ^lmhy$IѢ6LԖΞ5+Z9 Ե8g8ƴ焖։*1Kr scz/VO!칂 зхL=3<1+T~KeF#w9_u:b#,ԡBD<~GC3ϙ?倂HI.ؚYlrW36o a+}ӖcLTa3?.:Gӏ5yӠzSZG D~~OEy|U"73x %YKVdu%O}禿uHЬ g.o,pXF| OYh%y6JAϽ\t@RFRħg%>ц h~AYC HKgK m.e{T-+MrN70m`/$mUy u傴ߡ >Ymnr/uVlZfԕ4\г3j\Z&om,mfM$?֠jhN>,8,t潿Sw?]M ڸrY@ӳ1C)ǽ1x zǥle?yW 20âyǩNSPxj`Y'o 0D/eC ^rK;3{q]{"mHg/ 餍&k\\\{Fc8.,Uh4 ^SwyM*ɐ7/Q[d\߈[UnM ;۱. FOv ݭbF8aI X6f^ EDiՐHI]6N?Z o6eڛ_fWZ1?3[~| `0Ҭ ?:T'8/~inާ30rdsd#P k"~̹'o8/>9gZÊ˽M"^Þy!=wcF￘paYտ+⸳hZւl 軠wJ`a}Q7QgP R db ,VQI׼x޴ Kɀldn]#v B ~RGH9Iv9 ?KzDҗ&tit%o'4; HK aڿkV8;ƈHcp -ԢP*Z]H&%Ҕ=LNeƙ,7i0j@_#Y}%-Um@ߨ˧0aB폤T@HEG'"a%j\=/ݿL;; c7S7[OT+2JaWX}#^vok,+~~/IRv+dƏXVl\.O)1_., ,W7y |Sā|;jIm~ͣ{6u6޵+| :ފz<4.dA_$Ϋ_!w-(LJIIʷeG-cx2O~enA9L\9>35nֶ[Cfg[y׆i؇k+=c!`NGzG%O3V"'m<,-?ؐKRȢT93<'uYvWY};=W}qIUqT+GغYuvvm"O#0Zސ $s3]DLo/"ƨER;:zڛkH ' JۭdIÞIMqj 7Uƅ iԷ}8x ZtEvbsoa%w ~%.r{7/%2zXQPvףIxFI7SIz(uSަ|:adKQ7ys""T-D)F'u ܡcM ssOw3.R0q0nTjZvLZ7XbbZ}ӗ,RGMBAVE!w 䊬.q7'G)~#MXT__܎/KKI+=;y^ &-s ^LV&(RSr'uz-R(* v4@^=Qo‚+z,5)wMBM $)^:j ]7 5msjy2'݅*Eyx(e;O"e2y_gTIb8u/rGp_'hY\U2Hɗ-B qNM =ɱkO&sVTj֯EY 0<یB±l*肰 K5_Ug~6su`fڝ?K>6O{'E7= "&T w愞9O1c,3`΢03SΜ0 (*bIfUw.Aϻt宮W^pvW:Ū9*~uծ/Nҫi|ӋL*8r"DžҬdzDkPl|?n-ӎFйZjug%';Tc@J nP8. 
8ƝѸ-CoFEM1F7Ś)Afh`3tuuՙbss^ 3T6]RNh@}vVv}d{%%-}~t:fvw'QL$I"ϓ$07wcM{_=5g y Nm3O>{Í2FQzB7B19΁]2ũNU]0&_/bm;]׾fD pNP]7I9>񬆝5 Y_'2xT{:B[n>:;Z˿N^lxTJqul6PFT (aV'kan nb1:ukH'y+My]NqWQVBތo(={]/یP+ə+a@[n 2ſ֮F~˵Mܨ <[ m \jE1̜K6Ec+:H"I "7IMA=4&Om7KBR̥sdԷpجmpa.2Rٵ썏: wbn)CiEHdjRa([c%*aALB(Ir~&?˝t[]d8vcERY2DqlU$Ĥ~!RHi0AV)G8Hڔz*xr zo)4L_?I҆rD۳W'$jy6+ RD[$i^]ãzb)m ,؟MlJ FfռଣeN/ኂ k: etǐۏ,-wK''G{p~'2^[5vy}l=N2 LG̪HJ1ڿ_s^7zv2Wtqx `l]`)5 sBYNCv8dv oa|]u'~=CC`982v=A 2W2"{GiTH9z)+'1L~[$:J,kdG؟f f,6o #{]BJ[WbH/ܞ[Sjs}*T?\۪Oy ?lO)G*_ӧV U^bCQ@pkz'W_!Nki[]My*x/Ql*[egN lPC/v|ͽerwيj::RZvxɖ64V|E*b+:1ΥH(q6&s QGƎWmgIO X}ۏ6%jOɛ .b՞]v ذ=n#҅|z 5a>{ѳ? $$.ѳ>tO7Ʒ ]xBzcEԸ;e/ǥ22`<MO oMNeئI9]>1k}m>Ƴ+R*mD򙥸m t ^NhVS %yPx#ieSëqFOetwEs l,=tr.-[SWܦA[/r(s i L/&Jt~ڵ8ж}HAԉKZ,欱sYMqbpZ+p6_C+㖭jwhCAaA8(̫ӑLKvKvK<"TMWhcFˀ.*&$HA-$ei LKil#RoIiTN4^E{k|ꁅ|2OIފ$5YAğ_}p{M 袍x~Q3A5!SĿzڅE=_B:,)Z&HI~ҟŹܵ:wd|]kM=qmE>W}v mC;ۡ'ƝY^Pqpq $gl/:Ex& Iq8k05^GMeZ`i=rCjY>m8^˰Ҟe's`LȍG?p`㫇G~Ti\gPzH+ᒎS2c. vKEaZxT6d"98 [!('V$V~, Lz5c?e' .mB$IoNg}K.7#dH`wChW'Jiڠi@TrьM'1vGw.65I Q ;cUOy-|KgS%͞^ ZFh,䪚Ł8E,t*1+ ux<anWNEF]vH/.wv&}[ZG rKМkᢎ$A\{6>/|v(v2b7ujn Lo_ QPԆh;)Vm띹9A]5l[~V_}w~nM`pT 9kǧd.wPr"[9:p}?D8Ռ2-mciҒ$ -akSNH|x*oCg%]ξIVF6NDn089t**Y*gNN%)ر, My1~⋎磍T}Cz5 W :gKG<IJV<(d86̓T3D?y֖σۜDwBR9r+7y-9*R%J=إJA$ph6eR9,DZ>{7i]-wGd#T(I /vΟM:],sHQbK!gm8BNG>p)W@~ mGQ^W?Ys_ZG86^om{eܫe|L9zɩ[pD3p0>1,KdNPg@ͽv|]zryYImVm} ӇcЖ ˴sERjg[N3B\YAou;==7I6o)nއ,2Hc,@8]r/qi9qkz{??Gc̑Q}js7klg [7U\fmFEwT 2x}'o5C(vJczԚĒ dp$^X;%@yybc ½owUHUS7qs@ pjpZu'8|Zvgvӕ \}Ud/Pꪼ^`Qܜ^YZ>JR\hOg'97qC"R[զCތ?IZ6*@~r&E?Z5%Z߹KX6b:?ڤ&褡)QЎqÍiqPrtD*{]\еgD9},^X}}ҧ? 0IXقE,ZM{}E`ni`Ĝ%K"!mk'7lNa^KHЧG׶67f3H30HwpIlm4!#.?prl$ʳ R^پ&i&Yo(fڃ|:"”À~+C~ S!D?̓ $oiaM>I7{z2d f҂T%zGgMwQxZ*ƺSG ND+W-BdH8ZOɿ tܺhєx_vޱQ25v F[+$U,Z,\$zk髥ǟ sjZE,vq:#cz:byJmv׆O9u5J #l\4OV2=Ma,@ܐh{sT*: "NTx,PKS`a(|^7{%;@@L_4}csAǹSh;ؤ`->;cW:4g։uq,c~Jb C3ElT9,5cX /:(Mi F蛽 &ÏA=toLcսS#ݥoQ <6g weLH-Sρ쑀1THD5]@1u:P)tq<fQ>#4;#s4fNz4먧 l in4|tIo޵kdo6ke5oGYT`1{6WE:ҟNkG5$~ /7Y|i\>u<ޝ ib#N\2ۦMo' O*s7.oj z&l}VЇ4ygƝ Mj'.FOn׷q1Kt&MsJܜ9}פOeҤ~\d+4닪r~UЂtgy5O;$rfy-uwDGf20Q-D ]8mHIe)IêReCHj4 &ɦK$}*n&ߋYl~E"Z\ilIyb-5-^;9&,gd!^W,:/ I q-8ItzNo5R oK=BZ LC5WCHc*W;J\+J;hZjIceD,wZ. 
Zit5 5&bl(=ٳgX7X؟B;S%plrE9` ?8ew֙Kt}4ok!x}t:-쁪5I~V o>.gԦ~W/o?3čmV`}T]K3h'T[37C9N"eHՁWPpWZeli<[tCT%iA!͓H1ܺ5 1xLf6L&LO>+b߹6M^T\ uqk XnD9 x* |K\t4;4t/9c?'ڝz_HZZ@_r*0O;JЅ(M?5\2n&<\* >r{z9TbunAoS 0BBN91 ˖Av>ʡz x2 [,O%˓>B c>m36>a+ya^Ȝ6XMh& gҜwKmvLbWMOwsGoJ`[}nDh{0}V.w}ou'ߜ5nM:>, ۤX~ ti7WEU̙v ucƋ%rQ tlR% 4{₳|2MOMn~x'|O_ SyP>nBz\%T˄mI"w 0l|>SGc ܚwv'y_I`Sk^fߖZԦN6W"S2>^67M^Ia 0u"E2'I<͏|Æpvj/9,v(1_;v_:5A_S㲤E̼$M>*WG1n.BIL4tZzI12A֩yз %}$-K=bΨ|y -P_\ژd9 M: ׼7!%@_ݮݫZ.`-4/2&@_|E 5[Eb>J9#|1rP8 k%R;^ .$iq$0^V^Rɽ%1>zHGp}R(BR@܃*f]`@Ś'.Xk'^Iޓ9EDʫ9UOjL*{E>wSE/ CjUnCٵH)-滗+@E7XtGI_g)Mhv7v`dlE^k>}EZQ}2`]w@SlMn`ED9H PYֳTӗ֏|8 }9)΀VW H9Jf~}͎Bq(p2.h'OxvKN_0ڦ26U9(E ; V3~K{GeGy8{ivF{(oEqp'̭a9Ӛ8+A_C|:N^T};/$A/PyxvLld\a](cAIvȪf|۴^ eG%I+SvKPֻ\w͑Q GNzm@<^EYšu+'JZ}IWgo[j(HE#tħ}t\=%pq%x쀾J3 j}7ŒNweQ@`_q^oO6˂m WsUF%.P=%O(XP}ԜwVCڜ@\3BI$Z`ktD<@_ImkZE_'KZQo"iwh/dZ[%Aۭ%y|VGZIo-Y(RV {[2ƱL[?_3g&=uvT+M%[&C!=qမd2/gШ[k5 OkIHt i(&1lch!lW6-S=(mA%ߣyaR^d1p- ^N60OSд%SzL%c=U=Xg}SwΩԾez %S1}oގ'xrG9E˘>j0nzw;BEkt>2V{rK|ԃS'T5EmW&$ڴ$q<z @xX|F# ' B:Nf..l{iУW<^i-HB5tw$BۚC7qOu?soِT/;Q;{N_zʟ%&@KU XkX'ѹ2Nv\KZ7-?r| NRV@؁mZز>e1*~˶fqŠShP|3|W2O{e6ܥRK.|7ҙ$~K'RwW> ʧ}?,d7/T#QM|n[Py_}9o9-@ m$-/:p|/m*>W^t)ـ= U8BaT qSSk^8 =GЦ^9˨r!4kUVu|Ԓ?v,2q<.&{ˏ# ʼn0g$T ʕ[8=AIbCO\lo#iV˓{"qxF4בfka$rmІUmbn1^]@7{6#2R' /])`?ǛoCV+1Yg/gšoJsϞiA;+HDsծayvN/qlԻԫ}$ K}pKKga zV#k7?9IʥԳ5iQO'8t\ǚ'knR~Ƶ> YxĜNs6;>vX:K7y 1Q+e78con\rm][ͷg\GP}l@Z>wI}2 ϱ-Ǚ`w/)6+~Rj4*S{ Y ,CL3?o_,,zE/QL5Wa={_4HTJe,*ZSش+V$M;0yJ5pZ'à 0KSO1I+iIDkV`ޟQhRm @_dUKb)+"q O&,{m$5};~6[郩jB"UXn HClpCA FswR{ oݰ:N հM'1QNJMH?[+zi,@ýޥe: JZb%eaNftE&ú]EZDC  +x]AmN.+% {FX~"QcorM&PWjz1x|rD._#\*N*.V,c/0PwX#,}C`| s@8ٹ|Q\Bxu{U?S\Juq&O?߾X:ǯzMp._8#*>$1}s=R5TU<퍙՞3Vj U:($[>pWޯ  f.AtMg-%$9WN_6$bKK`ޕKdtjm1?td(haQӰJ48Unrt߇-N?λŤo-v[L>\O('c}kjp*srRRy9k}2 L5]%2^RZ6| _E#:ΚҋJ/ +mbp-P"}/pMz -,{kz@]' 3Ӌ 6bTF_.9=a#!Y6mL)lM7pv8f)ܜ&gqTmOmC.}ie[j\rRHY }6h9BsTB:>ӎDyph:wT `Z)U?EHV"}`[MUNk]GgX(_.IM`@IDAT>K-XgVsd290-"U5ڃ햦H~ oꣷvEvA ҦF[:kWTe:Wf#}g!Ţ: wnPt#0A7M]o-WeܓnaG^B31X/QE5(W8bfG `H\o=G^]P.՗30O{Տ17%pS? xWUf7PaIofB9H8[rA,fa%d>A`ȊQb~ rGH, y>5|c * QFM&x?I!؍|\ /,o hm:[[ˆH wR_ }] H+Gd%ǭ.ImƣVEY4<:| ܩH@EL>?l6Kޑ|mmAڏƚ9RUc8m~E}TZ|f/6=&WAu8ѿTx M3氌sD6r01Q۶WPfN&oC 6?ж5=GcHN+Ei2m(B3|ѳ OZGƅ3 yl^Tm:bUM?s9RS_a/7DcS;R9 sԄ?O˵9 0ɖ9}R/>} `~$ s akt'*UC=.BJzmu9[d MR&-tuj]cw+n>J/ܜ~4i&Wj*eIEX/7 Fޱsk@0gt$ly{= ʀ ` /WytL9ݫ>\Fé@ǒ֎ qހ%rSB=2r4_EΡnEfa\Э<`~gR"ˡwb+6tLæ0o݁CY:# 7f H(gLOCܗ̶-Ǧw1}M>"@cб𛹎v /ǃV9R/vPL~ޖ)$׶K!蛢M>KZo{4q-i5'X0ơXWD/mE*|M?3w#v6@.~BA9[G 62RfkBtk '7ZOX{|)6Ȇq)ʬmRHSohNjS4Iz?,!SG2Zx0m*(c ?%]6$UJ=“}6(&rB=W\{F`do'RCq6@Sڍj'quO@4CD,d_$et/NHm$飼)ia9TR-NJ#joҙוg' p iW0ý;a1}-(Zewhp, r$Ց~fsJG幉@T:пq, JK=M[8QI\~Tܑ[0B`@dӏRzYCCIi)fGO:cl,]=OE6Q (&-m omN.s?)m^f|B]Η:-7xsQXڸSZɘQygp|R=Fb E׏8e~Y6:iQi8vdz,oY>*\,cHv*yR]\@cmO-]\Jp[& ))\@C2[zW7 B^9$zMScJxkN(7l?8)mi%ԟ><([Gu ʸb AQ{\hys疫smVг;ɔe{!Im$ $onݢ_mmFpۄ5j]JomrC6#d/rwT ;>9w_J*/ E\%hQw9?B{G:SwSK hd\z_ I_yUvTƁI '=z_) =OqڥJ WT7^F.~9JJ>)Wud+o/$[g;qGhZ))IjP]]wd\u\C."aS#+xI,Q` kﴹ qKOw|Ցzک @ď:^iھoͶ"o1kk)+5,jLh?Zv¬X>Ņn2t7zCs2ڬٛ`&w5|5Zz/@ND3hmƂE +p+,j/No} Cx`fYh;PkPcF;W /8kIY}3OmhJ"O m"AFR*̽} 䋊hѨr`-GN=߀2{oEIPF$SӈT|GS4ݶaD`:l9&8粏[` @娃kn\e>5=  R^[̏BįT4Dt:B_zboLD( MmLUkO/5F-XMȓ:Qރka8xHiRr۫N_Ps@$ ~ϛqĉ#GTmۚ[m-;}dY4+޷EL5M hU|VZ7}6OhޙW$fR1éqRqmy.N(@&n:sѼc nii&a5cq L1Gl}mdOi!"/c#nꎗJfpv$[|Ny/f?cm4zkSHΰ_z O.IivvP[%:%#F '$j9& /w瀥 ۀoy#7$5A/'}ۡȩRˉkd>#|9(l@{ /@^4},:t3&}F2v ?VmΒ׳Ֆ$ f턻ԓӝ@.2ENȵeKy2d9:_Z_?ЎJ%~$$id^ }d#0)^T%$I%Mnt{s%=)M(k. EP TwehN7\󴲟p _(kA۝$0λ`񰆤OJh&v劤$K)}|v_JZF<%?$j֨61X[M oҟV/IMd g:"-f$=//&"RYip4]?i->Ƈǁb.K-~KhX~,!Z:~׻/_ V6=llx|$)\#`q]{&Q;HlDɭRIr#%;_e#,c)6$TY5L!uO&|f. 
+S]g%{dj{j-f6c4beviLomLUqiR=vwv[`jՖmf޶"o!fS=0PMe# Y0"PCԯ@J2{U(*5;L0o{|?>Pv`h!O;}B犿?:q SuOZ;||$^vؑ>!+@ʼn / RG{T+rAK㹈 tr=n1h ‹ R}t8jCPe9Ǩ`(eS# ~W:gwR@e}맞\ "*|U%1g{M&ŧ'^`n"L#!qEicͨ &6s;6K/CԺJvJt"HDZ grۋ3'DwrX/6:^r!؂p'v6t/P-ƩRK(OuY39x|u]"n8_9Խp#gm\֕`G6. azc$=Fmvms!zmh)im]tvܓ~c/r;JQS"_MWw^)텋Ü:;>2*$`5Vu̠}97Vi|]xux}j/JMQ2ܔ+nM&TN%d4UӯB?Imq*5ܗ@%cSO Q/%$!@&hsB+Ngt;Uz86M%tX& DT{/Pf<(78D!IP>*~IN%plߙPQrfƫk+-/M_wIn?JQ9xN/#C* Y8RJK 'I@zehz0;"Et@W9ʔ:6HƦܦy,I8W^XT7ӿؔoЩ |_ʆCq$Q$ur@Ezr1U4|Hh!H RKd)MfPRQJ09>sHs3|_hxvbѴrvӶd쨦X l*VS/XJFPxUۺ1 .AZ6o$ݱ#J،xm@k 17ndxX8[>cE ِ8־ȏS /VD))#eب{`Q2^'DΈJ 5ߓla.H>gсvS#, Qӱn|_>jRi{x[7|ۦK!/ԝu?5ڞN,Т2& },F6m+g=jWiotb$ |Tݣ-a7tY犌{&>$D]$ tw:(7cMF.1;}?'ɡT_W/xr]:ٖ{f(#{3"Rx~3^hi֩ Pr9Nl׺~+%s)d=>O0yh"v^K%)\4Sq vJ'|]81K,K7pKا ۳q/PU6m}IF6_ڧY 1nTHJg. NzՏ~{ؖ@sԧvVf*lӠzfWrgYoL'~N< ?qf_ A6-`MbCvizu\&;)/_+hLw"pll}+h?I*: .ãS[;6NN"c~%" ;$֗RU_殷:?? /utk6dq d3FaƴyohbIՆ1Sh6)o{z.AhB[Տa\E[~v^W rEHmeiiC3XPͷkY>g 0$EBJ-lU̥&.K1]wÐ5"ػ/T*%MnF•  1G/dto~@,w5G-[uԓ,35ܙ:Me {jNjba~%-6+6aOs!ؕlxC'aS)<^Ni#?:5hV_QsN]Ia&NqP4/{6j9qwn 1WE4b;m)xs&ʼn^+I}JjN{ )BUψBs 沦&\g#Ԟ|au{6wQ~ PŷhTjE$?$P/ v>#5A Jԟb*CNf )%7#GLJƑv!u+ׅ%N5Ǎyq8x"RjHfwp5_ M2f24w:okzY:Ij!c81;ON_z8?-'$Nþh%|xQ29Ѫ2NKG%6U5}6KًCP4_;oi@:}K@t緐yJzI⿱DtUG}m ";ǯWBKCjm.zz6,Q ol ],cvUa7}`(Y`צT0OC3Ng|^51 `l ȅ$ ggk&o+vJO3.t_d{s!mRh~-gN6(O8]ר~«|Ü$ٯ|@.7>W0H)cGDžI^5 _"7Ռ;v),90A˓@F2eL( ~.?:a(!ImYI6O-c"=Dm|l8t1npE9ydWJ?I4725v*g)Bm bC᳙cHR&Fb=܌?@H3O yLh3KH@STfc{"-l%XOp?W4цaN`]5@3+:$Kn`-%- [Z!р"iZK1-e$I5"}$YR)O("bI$iYכڋXR)Ui~QjA[kɶ BWoB>%\tImx? ML;_w7& Д{c&oI ]]tr~A f:I q&)U$"E8U|pJBW ȏAN>5G$P EЭ%@ԁҧ=bĻ'4wHu\\9%`-OsT>\˵o@H~K#0]()4 n^SU[}`M( M$&*qDqT_OIlu)}?1ľɜilgzpR' Aش YWn,pC{Xm+@ s؁K9ϜClX~>;OKM!v8!$p²p~UUUG a٭G=V ;bCGR~l6&'AEm0tB} IW!cg/VBL=.#$qca",E rlyտAR |F?lB.tgA:<ߑoػ)I,գBSziYpN f~Su>ewv|6|0y}Z+/,Ch X^~;KG{Y;{<'sD޵Ԣ9 9p\A oK!1Mt:h9(gC۩١K8E#rHt;SB `gs*8໫䔎_uq7}Z>=,__Hz|kEKp4  ?aX @LKY$@`+dXaRIQr/RcL'cO/NTz6+HXkRGG>w=v$ e"^Sz2o"—t!D˙ Jץ0g(F]U_\.VpRm=l%TΣ σ'n+oURڎN[__L%sitކG\2Bbr#\ n \_胕SRk+?qKp 8̙5O4{?éZ>@K)9ys霤X5Ky*A֣EU*EJt_ B|[0V4uSCZhhB u_\! sm::6$Fq<thi!1NJ)jmV⢤FY⁺q;RȞd:2|5޵t ؂Yxy%r 7.o$yʋ+p$P"Fy~ltOs; p 6J8LSZ^Ղ/!_#Z]$7UOA&/"ص K;"cNO K?}>7νTDa9t:VH8$a[Yڲ,X6zQdIΕ$H@Th]G0Cj.znoTpf<[-)Ib;f։:z$:!`҂N]zsy;z/ ae|=GY;;[&gDn M0[?vDN&Gxh9͒п^l:ȝ#Dw.UyBe,/^FOʾhOs ~vpU|g̕oZ g=|AzS5N3)mCg k:.\J'qNb1If+(Uuz=Jw4j5yGǛ?xtBR AW%e4鲩euJb|v&ֿN!iN9nem`R8Ff/eVrnH KxB vrNL\mx|:9pN ?Gu+^Oo4rʛ6"j:fm̤>> ݨ!b`QWN٩Ym͔-\* P! F#DmEl/r)one߲'S!x[ @@i?PjkADB ;n٭3yJw3ܙ{/9gg׊KUW6JL^Sצ'Kyi[!gZ?RdZkLktb{vL_O;JԼ\BtTۆE ]Q1X{:ZNp}H+y>,{٫R̾;:SByW;"udH3 9{xnGT?T]սI{KT\X+uE}N)t8,ZXńWJː0ƚcd?0fZ^igm>H'd!]|rQZI@P}@9<#¢>$H<~O{x'9%/٬jG.ccv6AL/|w JUWƙl+\k$ 5ߡ)v&^t_BG7±1>Os7]eW6b=u=A݆=CiH:P+.~vygwog܂>}j2~?~l[~~Vm ;C\J|d{qJm6ك fÓ {YbU>'WW5cJ(xmӺQ5&ت%7%ioZXi^hhz@{qA1.U3#[AŸZ P R+&܅K=奓m{g$qe EpHMT",Q2PSؤd/''F?I$ '`$iU1!el-w7Q8~$Hd5+]IHBPK~ਟ$=K:*V: ]`PI3G>oRq ҩ?Hq3t0a&<#aŸTi.Bcc$ף\F_$X' 4TzÑpt tΛe0PՆtlP^ϠF^cyt~ + IBFo]=h`ǩ*-vZtt R'^~6,Y\7-2OLKȢWtm=捼$@U+M~f39*v|Ԫ466d>GNUob߶|omO&|FZo6۸8Cz9؁DTTXJ/N\&෇U^fUE"t6]wJu7 9D$>E[6 ̗Wk׏ioZIƆmZ.{KTa7;ܐz!$I)a@6qL(of kHy^@tA)WݿmKd@IRom"*q<9'w yDVl(2QJ76xy7i嶼 vWN_dK?ë#T-:D{comI6$iƧkdt8.[2'2P)Qas xtEe no\~g. 
|G[Vc=PuRW a $/ 1Ffģݑ>ͧ_Ku3leіhFK1lSM)k FlRl+ WRŃv#v5Ee [SulT;a-۔=zcӵ,~tGIonҏ߮5^$*N}WѿHw}S Rdk\`{(Wphy~:5oc+ -~3Ϋ Y٩@a砩 woorh\$nv@uGFR/'az x,RO:AMAk%49jj U9,\E 4@|j&t/ ;7X-)t R4쾤R\ft± Ŀ٠FHqAhBJkkq(7þcA ր2C>Z@__@;P}BBH@*ɕ{UU 0p!s*\W0 ;,6)[}߹@KQ3}A-"d8&sAJe2bMXM**#@@IDAT%fE<f4*S XAZ$YAabm9ƞ޷PT*k I̦]nm_Cq-=IRʀ&s3zy'#IڣlpQh;ϟ> 5?1)ڃ b| @U_׾#5N}U7o۱l~t?/?ahh lBsaξ0FaD@|іm,Rd䷀y6wxئ4냊~N"^n'j&J'A+ -&ן!\Z)܊GX,®l:Iùq sb@U6N ;آO^>^+?|Ɔ޻jtC(+#N=eۖ̓n=j8ƒG QB_Mdc{˼]p$ H´ĆCrҠl[}E(6SLH @}MRKr4͕Y,ca^#<{ Z9x0\҉TQM 76,ua09qB7}C3^ ~ NwhQG>큾*-ɫomm d`};;VXzEvFb_4IGNvŮxpsPv!U@; gbg~|:p*ůQ5ϾKG* /ND&OGs'J?x.گoDOp̹?9~~|"jՉ K)7v9:G{#oyVZSMdK4)蚲}韖43g'mFUN )*LtD#izd{#䇱xehs4}xu녠ԼgފBt'K%ombN oP֟gfNMȍXE8M8i=6ve< CMRȩ'xtoy">ԇN6H nV=K'xT05M8)쒑}f?~~.Kzر]ω+cr&#I./븘dzw_1Z֣։ rwZaW[Y!Q/ORX#gKvi|>t V:;&@P8 V>;vj6'_0L/?c#K)S`jMK(8&dFβ"}[aR,m ?%YѢbI]HRњCx0KulY3 A#Y8.D* SZr['Y07o|I2IֻnZ ׻6q4FHb)Fcqͦa)ю=fI|~D#JL|Q7бghCtt(hP{rs5Z㑌ىaH{oNlla:NyRu)MeHCe%i&?xVlkˆ$y3)PRsTuݑdQ(!ie\t͸x I,7'0M}W]ݶ53FW'9ĈM쁪I:0]FĺY+7z&Ouq֎lȡ(H[0Wpzbsgذk@׃a|ش܁<>kk `#IM`*8~+׶ ӥPmU74P)#\Cz=`)1zR6TUy(vHvuO4m<~O\t13)s^I@le۪l'9e.Ft29+Z*mok@UJmY4ޱۧ0Q0>3Lvn~Mo!FY۪1 trCFiMi\CmtO5 wn>$_uMAo6[BT!ӵ'4׵C};-T\[$_K$rjEO (rmw/5(Ήawam 1'Bk7:Нu5N݆c՞QRWX0!ѭ| ˧W['[a6ߛ1@owB&=ssҨ (7pe»-0C㹛d{4f]9)B}lNr": @v&9Vҝ[e=EzU,ʸ$"=̎.36u`nkR;Bkk/M XUM[`-iq ƼHf,vd=۱|81FOw L+$ *[-zst{~wBxrum-(>?ן.~9~8<'&NJ_ٟyjٴ)cwO IYoz׮&_)ұۑwWn3MݜΘ5m{d7Ly e#HgW~V8+nfKNuy,jOh)߱1Oߌ;FyEht^@po?8gK&9RJ;^F<4HOWFRYY:RGw 1kIsO#~e$xr~.fK~¤1_ws #џƍ*뙙95ecΛڬOز*&oΙX0|Zy3\5j ,bs[ %ڱ_g9{iۡts~Cxߚ=|ߚZͮᨪa6$|ϟʤNߐຘix0$4Y.ՓY_R,Yf)7/ьdYP^/jf'5 H eIl ےxy 6VŠ_įɀ@57$%Cob{H+c^jϢI )1a,",v^82ϭE3GM̞SxIN}ip@$bNC5rإmLJAL%&]2u<~-Þ[*@BX gҙoɭ~w%e ^;`hO|;0"'6UF5tugz\k+ZzW`F\$,ikG[*BLxfWw\f iOO+͔]m0%o:o<7Dz#G'0 z;o+Cj=DK p! y-!}_𤍆bmFҺ#=ec&Ҵϴop3,\N99BSsZ+Pw< :'{I落*q}$`ύ 5`L5&-~w zF 1`&K֫OEj7nݛrހ)fXytcjy0ں SbxNzORVi:E ]m{S3R()5f !Jyl#]½ޕ5. V}b69 ޴6KoEr;`bd2 O\G=7$s*jcdZZn ױ}?~Py;jDҍh5L$~f7=DgTqfQ^h-N0}^ܻf:yƿPc2q䓠o ']e!j ZP#0]+Uk?^gl28>RUş`w.b/d08ƒ}6Nd< H%>F6|{)"ģG9fD`UmnK@;>6Lp}gL#UhYSA:P'4O~Soe%%n=CKҿ֫ 91"]Qa!YsG^iʹ.D՚_DZϽ JS꓉ܷ❣Ɓ)wEi=6s61b3{.M` 6ժ;4AZ~5)EE7c9UՏxA<}Ns ?~#?&]=1dc`3I`AO%zykjxOE%.6A7d"N϶!ሌ#U-z0Vt]l4* rs Ǟ;㱘=>'d9U wO& an]  z?7 gَ+vZ$eQ~zD =;I23;/V>B$6<]R5͡Hy/ aU$)IIV@i# 23٘%]B,R8n aۥXtLcc5_?jHZ!}azݩ>HOث{6ےv1Y:N2ԕx6Lߖ ψʇ5>,OL⸴nDʮWSq~nw k%Wq?ow/`X?G7$p_#:1&v:h%.:al ZS@cnH)*ÙZr:qk (NHK^Lw@}l [q\:]^lCR@=\;.o+@QB]<~e{dvzD1H Lz4`Zq<ȷ 7֔h]g#h5K֦of{c-({J?rft7yDKlJ[Q _q@ؾEw[<*'f}CRmߧU)=F#Uy1pR^᱁%}.=]`";d<9w{ƫlptsJ( Cחr3% P {ME6%쾑ѽK6trĭdu<6喗ۓ%SoR14Iޗ7x-3}.@` ̍ijԡi#D} %XuK=ySpvd2Jc{!-@ad%(?RIPhO^#s,j r Hj0. 
^r}aB'la$5A t%J5fr7~$9uO_#dt3'ngMC̷]JIۑ$i;372"mƮ窱Oh܄NE'ʝq^(mׁyZRĘV,dS@1#WC=o u봖6& hnL F3Olι~Ty޵ +> 3\kd5kP?kk;jnݺդR?U̳ UHt\u v;9_gw=\ǧ@CU*ionaַ}wglNʨax)eۺ-[۷f4ƒK$W4jl[d #I4Ln )}ڬfVp &%m%N]$oG:Bd(v]Ԧ/&_g7M\:{A; ~"=),0oFtBAR@=qR̞nV{Z)_Sn*fU şHN[їz] ֯iIAiF>B$UoZ(DwbH<5r -,l6)cPsaqtl{7Vy{fԞ 6~8jGW7R{+MK2,B84|IjPdTеڗ$c5uX;*~{Ϩ0 `.g= cp*)Ω<Џӎzޔ<$E'#ƙ.iFb_>u/0Ej]Qu5|胇KeI\7Añky7dCDdR$F@h]\]1Rڍka01@)xH7UlS'gkl;>o9i&wނ.@:<#p뉾e_ ĩ6ж/_>UBj: m#6S3WFazeLtJ(E;Cmc :IW;qp`K&zy.;\q1Muf}[x䜼eMFxJ636I?uD_tUk{ALNTKڂv4 PDjA@{;u;kf6?~[2s.RB]Ń/e he'#UΝԡ-U!I8l=Mzvk eg.JGpʅԼ(~<'fѵs}bW[Ӯcd80?8Fs*0hlFmMIE7u7GG؄~֕ =ǛQ\G۟/+ݎMK>(Ëg2;;߻b]uQ[Nv>#[3#ͫR h?wvR3pߌnVdO5i]N[Xϱ}md:f76U}b=}U0:A&]ķJ{.şrX.zru%xwvG0*a8淣y9M^[K<6A!i4'=C585RΓpmYGQvHr&-qH-W^el6Jfg $_Et8Kc`P) hsCqDw稀i:DSJo0*'ZRR~6.+mJmHmÑDސY{o#-R'!л~jȧO(wNƥU(<ȗeVleL 53R`JKgw.lxs(j!ۂH@0(5*ݖaB bA9`^>9y ga/Ԥ40כX.;uBN`kXeKq܌vo59 ӱBͥ$ۑ"OZSz=%._] 0YF=74֣og|x±Ċ?YS /s-9}qUzh&=cqsDa +꣗~r⭨`Fa0*CZ>cR[꩑*!gN%~gswa [͵ā`J[ƨpj C0zgzRoyIs,{pw]l:=Euԝ-`z"b0h w?:7,17^\~ªvq f!I 4,:@%aa9}td#إg:TFol_1$AdvIG}RXtW[i?6 lhoz~ = (n|MܦKq%:Sd:'W=.>\ p@H@1%'>D9?_s|Ʈϡz>hdj&~vm哢=m!vX'ŸDYU R-~[';Ik^[B"SzQߏSNY׸/ߦ9X7:P3+au=ێzٮ @0x˸0l( _.gSŵY~ϡARYzZ6PesѪ樍GUɊzX9)ƷJQ.h ]'Zy=a2#t4/ޫ8-|B)MͨY䮂."ƻ`~Ƣ P2)Ҭxz؍^8OAz'5 &ʷ`3C-HLHNctZ8)eC|n՚1=)Tǩu3܌o2!AVFg{m6Uu<G,{ϒ&XsslJ' kvwoOQD/I~ФKFέ! Y TAffR_B2#vz=ׯQg'ye5a0)Xt(ꅲ9&Vyex]=IOw |`afMFY ]\ՖxG|87Ww$};SU|U"vKٌvO=wCp P\Fn-uSJ|A=Ϯ-pz711ؐ`S-K时 $zڅ OU3!T^Sc;Q3m4Үx^R/\dȻ0bY,I%o>٘oG/:ޜV3]G˹0"vm@ t7~YJ})MpSYǜp!Xwv&T- CDggq"2~P>/\ứORrz _BHd}͠^N\@,Q!p=&5uq397N*nژ1{{_2_n U=1Vm4GFDlrDZx#Zh' LrϐIr@VdRݹW0IGٮ⎕Mec7lTt#bg =0>NWMkVk!exD ֶ֗p<{@L Ḇh5굲)FO  c|# A𥄾\HAf5/`mQP2>[_:ƎE"mFЫ({@#i{3tqW2H`HXg[f'LS93{/~ڈ 4ҍ1#]X#`'֓^)zoԒP$a-Kmm/ixSoؑߐKQ J*&fofX`]jmv^|2K1 8XU!OEh;M=eXElHL?dG[3Q6c?}oGFu/ϟ#xNG[Oq#YlJm%WؑAIb[[ީ: _N߽;ܼ=Z| /~ngd$XĂy*@jtYc} OQeͨr\Eޗ>QRMMt:[`;[H[%~!n(ăG?b|,!{C7reM 0dQr0oӆ2lM=r,e<*}u::c&K&6X1  (ۓ79̩mlvd땼_u =etlRL$m ށ.h֏[ b\ͥ% {^5xN'pYF0$;_w̬龫w|&uZǪBr)ukNkPv'񉹷onsSOD~J>rvN7nN%j`юczgFS1>)e!^^*bAiQ{rx@j]j@S"~U'=É8exu!H;s3NpHsy4~  jz=G9cfXͤw2Kl ?MPtCkШ{ ߁1rgM"6;S_WEvvꌱ7D*kuz705rikw3ggld^ͬE\ ڜe5THT'HruCfO"+ÎC-$_sqKZAߡ^aqH*jOLJ! I͊ⱈS#!U+" Kr+ڗg&qR ~I[-T^Ҵc\͐d$]6>o]Y;į p|63Nԓ|wcE.Xn U`ŸƗBw&)ALo% Qߚ_]yZJ6[V%1^"H*a~ogO0;r͎OSڃZ@ked*[QX0a%<*$t.y\׆Ϛ) hqR[]Xv(߃YzA˦yvji')Aݷo9]BOp$^V[W-VAvqIz:FVʗ:08#GS=OawgDLjRͬ)'^H-wf'SQ?9~'s 7V=a=l. ^Ij Ԛ6ha/Rm7\DEr/s/I,;YJReB;R؏4f쎎Eou߅KV,6_s @H9Fڳg󍒜xvT& ӳV:Wk&,o~FMT~H.vbfN7X}siTKor[VH;ca$N/OؙX_nF;5gQ($\>^epGۛHa}3Noy=".,h*3u=ICzğKNZ 5>'tڔl|1"qT4_ @]NA D$f+,,)mzg[GO!EQI@VwS-p|$7kí&Q|dtZ[0\ SaU!No|%oSymG>ْǀɏZsg]{6 ӎ&Uk^G;;'kߡO0eOGܜ4-:"=+saFT-by[>9W}7O 8vl+7yh5ę[#vF I Vhp69GůijQUz3pRxrOrxT…I u D hnNoOyW&aҮx0uvHgj{݁5)!~. 
ecESemh˒֥oR @[dk~ɀR9qsIzWA򋖳5 .˶f+ۅ?xe GV#m[.۾eLf^HT؎H\GzK~1w.wOWn?ksՆ}UV|ϴŬ7~2zϳDkGGtn߲p=Hwq'6cmh=EIfS΢Rf=R_h  F:vf`]݂wtB6Sm}Ϲ;kx6Iucֹ_藗_i_5Xrcٻlr º)=ԟ?:M^ocя{V9,ySQ;leZ ʕg/پb=9o>2N0'޵1y'E6Zs A} %hC(N-H;zms%*64x5ΉtTV4MBcNdfB/qbQ>H䍂2p[{U6wOd%mĚEY!&QT9 Eϯ$ZoO": ܉km|ȟ k $YCWFؾX0Æ.nlp>|APY£̶W9,h؛߀$+U"G: !4uw|cqk䍗og]QaUE6 OSNR7QQ CB -s"I"KWH75K _͡}L5Oy-OɆw}{nax&҅Ar /Q9iaQ=GwH#3aiҏ5bO͂uXƌ|)ZZ$ )hXܬkf jC+ߡfY;|iѬz05y_p:/t<),I$ܚ"12LQy~a$=fhQ:N`X_cdS|JW] A1h)_ҲѬ݄" 冕dogSA8q+fYo@6R_:8>kŎ@rQz{4J.KIYdc租x@F+~w[0_gHbD]QWzU0tm%\Y<h\(,;G5zKȎ XCUZs) `~a'zeSvPE Ml H+" (yG Z{.g_#uz7eOox7zcˬ,`r^@޻dWI[WlÃF6?O'?")4D*ϴFs=_i3 q^fMEH;N_dO[V1̵S i[h#~|ET 9oZs4cc9N}CHٮ[ޯ rekhqNgn愧B9;>&2g6rCHcK[zm H9uQOOD"nItkQ7PYDwoE6_~wTH!'Dj)L)._}FTd~W!rqS$g{nGڐ PY#EYWEVrƑxO얨T$=*JŗJߖ>6AsR(z:P=tӋhˋV#'b%{}k?χR) U<*;\Xݺ2EC)-(F5' Z2N]tW*gH:;/?s'8 (.Ƭ_u4룓&dJWp*PnתQP'w#Xya0lVm'XqQ d)ȫy>I7}*WڦV+vt6m{?eU^͞h:ҪNW$w%K(_ J_X~$QWj]2uA_bA 9 wDgn"@͂!a4v|@7_^h7莘6lL 3gY!){_&xHD`0^ńf'CWЄ!Au,(DRȭNf&H!N'*oHW@>?4`]06s 4qwd~pR";4IafOj:z-U-0#:oD_$.7}j*Ԫodj ygP+e#D:9%!uuk['FʁQ5WChW"IpjP3 V7uNGv0ls&Si5{}HSM\JgcsiR[)is6L C ʤ1qO;/w<`c٨Px3lƔ%O:]oC5o̤&z ;o i5yv/ %ѾYNB ~G#B7jɧmhڰZjݨت)tͪ7ӳKC%ŭu7.iP;*񸥐,U;i51M%$_oXDDknkYm&;ǦrRT㨻iq091z砜fiN~KS X" VCNgve3]_qd^X RsGi2H{'f+c#7(pJXtdW`ݣ HRE!ؗ~Bf.A>a|?s_oT~hb ;ϟK_ Vy6Lu{:w[{).n,~=#E`z8F ?|1x,65X@ϳ[*dGϩͧ:Z_(Nmf ,*nApzEtleM5=^@~;tϥB%|MڡGQQr{]`^Dxm0~Y$ޕ{ttfNqFxCg婷0^uO+>Y@9k=\RILUֶXԆyn)m \ˏ;2 f-XrR8یnKn<01/y*SB{Ȋ)QW|Wd{a$sÃFf.'a*;XW\1(핹8BѪ3߀_~|ϳ2Qj"ӆ/.NeK7I 1kHq75Ɯu +;D#OP#R0jmQ[ v+W̆Cػ]13VT+wޙBoC}q1RMqAT0}-Bo0p1D$_YR:=-iOһeHCC@Y/7ڣL4?GԱk{q56n|؉_#-zc&fĈ,fgOi }K):̾_6H\R-bpiefvOds4쒹s|f;GL)kK$YKƢ h@74p31!-͌X0a o,dэi455z2 =Nāѳx5T7r96~u;Y2鼗 nHQ1zsr0fPF-> !+"gG"̱KrpVt iɓtb%2@jO_od QQ.doO-f90,׷.4} #V{HMK}BVd'YpR6}".ę@֌G0,d*k+&q?wѓlhbhπKlr:9}#]( bg(_!5rP_Q5.Mͻ{jQ+'bT3 o[L3۹(QbXЂ+3LךŰ貟n;Vϗ>\]@c {>$֌$u1O`XŮ58>= -ҮuglxkJ*:ya%ͪvXǎ+~X]s+~R/_R>oSs%Nѓ| 7^k{xfvl2whu*)In\ 1ne=4 TIg kc8^l2g lpAMI-x#7k@oOk4_h7vnj\ݦ=Ҏ%2mݢWYkʶe$~mGrmn~\=/a,k9RH5NL\9؝zPG];UG.}'oηݞ1{Z*j=uoB$6ZbMQ}gݪ11֗bNb`%ZiS$g{1Lek]gFfaЬq钯6ʄ\wT6nq**bL䜛B<ÿ|ُiI]v)6-uk??fTDkآ!mh, L$WJ =&h0AwdV s b ss{FKFH ^/Yqrb`K]w;eZZ0y^yZϞUnI74F曣g2n1cvH91N1eY}w ]X +SZX}˲L⹓|oI[;2IQ>0Բ Siig߽j쭍Ifw&ER(>kI,ua |בޖ9&/L`ϭNwc U؎ST476F@0BMt2K{҅_SPx5HVu0LRcp ڬcν\2$d= <""P$Jx~>CB $P5{,Ipij3hb<.|az)=GĮǐ[e}G@q}Fidz%;Q:9FzP7&29y`ll˸n;m6M=<=ɠK0  J)v}F¯= X>~Th$0@0h2F̱R$Mբaw̦THbY'X9^v`ٞc=bQW6WM-Ro'و`3f QQl$usN?†ERHwOJ-P`\#1ĺ_CziF]Cr ؖrvV{gxE FV_GY4vfݚ|J9ln P+JUꋺDNAU:zz;l@P*gim/b̢P@N3quZbwX>GـuXnb⛅H;6X(uUC*ztÏq1ڙvv | X`-b^%!+tZW{,~4$l5m1@7ŕH)Eڸġv;A V(\q[=ɫZRcOOvaEsrt5[/iNzĵI; *gEP ӺJ@a" s&)+Î711PQssd(FK!1M4IINoD!3iS4R?\/sֆӾVGjSVgS'jS)VrE*!~ o0yb&YR}9AZE P+CV^s8׎\ {:,㝋ֶEXnD q2a"ͥ5lXٻI?M89IweWi2ScNU%(}̽R43ؼ7޹323| 6J]y|@9Ўb`LPw5źQjS/=_ !ՇŜnqo)QDGieb\ )UcdmNcd0`+T:**6ɼb)h-g^O-SMF;Fڟh M;;qA6orWKPUb?VL~.냌}9 *޶jQc5 @_=K*4I\ם9QUy%IR+8 6dεCRZmNŒ=akqx]:pS*:fIO#,zӢM}gZo6@$#_$%Ȏ:S? k-@ʓӫ;W^Hk"Phc~v^^;v 6tWtX+Zr:<f %'M$Z炾z!2#8nnյ^kA \i23d˟8ģ@Z>n+tilhc#]숖lE_ܖBۻHcœϱA=_<`͠znSFcʟ^HZZS޾ lh/.ɱЗpWŏsRMC{qYEFlW[{GVaI"tRN3NvwuK? tr1~\Tu= r62k;&i})RPaeG لY,ySlT5Ũ`쟏c;}'iHopZ~I-~PTdB7+iog@03A̅C_ls(S}弁4Fuh_`zmH5ª NdHka 6a> [p]T!mǬjv- ͱ770צ.諓 RoQ0%Kvv3<- Y i?vV NB’` CxJ/7Mh+EyOTAchR1h{':'"@ LKLңO wFH B.ʲ9tzŻ ;b$ɳ4h}~RC`ඳ}*Tw9 ɞ9_q1=,:槾A^e5+Z[+Ta K[mL8/EV[PDFf/پ6N~:=±`l\Zc#]+]1ƍٍD=yݫ$rjݸJv_@YEffǶt~nXѫzҦeaجsZ${ٴ>|~!LXFHh|8O T 6W]P1:Tfn"&,VH 'UV4ȹ~VAR״dm<:}0~%oրSm;=bc悹}d tqr̽e&m$*""" fs=0GsFPFT1aQQDL""杙홝ElwWW+.tvg_`{S;(5~ڎ'Z&UvY;b[:~ 07˰o,M I=|.5cth.|<{%q;m]"Ff^2stXT߿vsWFvT޵\a|@1ڋ{ޑZ 3 \~6FP~ex:9(qC7g|~A${xM\~nH]xx*S3Q́Az2VĢKd4=8i.`\Qu#~k/}Kq)(DyG&!<u3glf|ҾaA 'g-[]ɼp kT洝& v$z)d~bƟ!,c>ū(7{yVvan9]K: 6,rާ /f\Q_+=uQ]u! 
5z&CgVx)?1-ceJ$Y1W=*J`)]K` >53[}vylu6>e|uNInbIu6=(m ndyAӿju6MC-9Lf=9Ho0QH |!*EYjm@K:)M |!,jUoqX(Hn8vIl_S"pw`egX(Iw8L4eO@\$g]ȥ胯Aȧߧ1kѿV9sG (v;3 DTM䕞@wnQЁ8K%L -`݌hJ8xCW_GPΕeF'ejC|R5ζRXgGBSbvha@ hS 1ف:jhW+9"4%L婅[+LͳrTrwvŒBQ-u Εl@GK0]l zϱ(~Vc%;[uэ Lxoث !-R7٘!t R&#Kw|4q+?^MY<.Rf5$t"3[GbF6vwK9gs{{؛N)]dڨ]JgDca(HK3~xO  M\IQ 需!l:jTTGGw'ޛւbCx880z"&sֳ ߶ou eF]4WeI&Y 1&xp4S4ńLOM)/Y\*9hK.ڠۙ;/E瀨^dS997չq׊߽!]عIHܽ[b &Jk᤺;veGὕGTq֜'/sA.֎[cn0)_h~Yzd^uXCCև|M> ,j;7ЊR> Rn$RU힖gkj(-I rAgmuG%g;Ef;zGsߒ| nu+fG r\ы$͟e| в K atN~?on('8-PF>MJ r"; ]AdwiuTk(=fM@sj&a;84 e m-먴Vd/LE tGK9(&XQ"'a^w/'Pz2B,,d0?1-o\ | $CԎ~yTHx+D-wGӂ8cN"?Nu56IQG-;.z7 ~WDr[eBIGiw=<2[F:' 8Jw[_;rzJm"ٙpbtlSWX!}:G<'v-F,q&}hCψ$)j}M^@l:@%Y$CQed:(,e=I*Gzn~U{{p/GYaŎ8}p@;Mۯv=B5 !dildmr8 P{+ALHsBW1lad"<뀶LĄ)6@s jNMK[} ږDo[z3]IJqISjL6% @·hK ;- ¿V>JolgpS%u)c?lAd|)9ޥf˪`6$A}TDu|[@ )_+^F޿!DX*nF޷,^Z~Fֳ #+ΤN> *_u$ILItD¥Ӵ*+b7n5evNr=|ݶpx@mk@oK?F"@e"K>mIG;R(gW.׉ߧVp[dDv"%Z4G(5{iɈkUۢ'8\iѶ2A{>ȹ2(یEn+I#Ti"9]Ӧ{FN}3Mػůh~ȸ-]3 詌n{R䀨|T_}KVEZ/RY)Zt)|1s^:G ێ:;-4DzOl[6Lp Ʌ6QM6 9u-2FmufA@*S SDecNLP!ww.sטE<6aR¦|oɘc,!/Ve #p%`ۮ5lc Ṃtt*,Uiz-eOL8scx]n^i$Eigmp92#n?mMnkUA;+3 [ A2c-.,OPQk'uvҮą]܃L;"nǂ2V%'uu(E͢R/IכŹio)<v`Œ0 2l'낻;ax:-SnGJ@ X$TqeW2^^u}eJ˹+Y~% [eI r <<Ֆ:=K#k# *- >W6eKr&Rn8Yy Ӡ \C$ZٝP>(X (_gDa\7疧 >YNAЋ#7> %4cdy]Nq:CxK}tșlJ IW8~'aRYߴV4Ff0}];-ӱ>dVcej?+ߘ=߉ EtT6|Ɛʙc8vC.[+0,4/DQvot _OccO F@<9ۦ#Xv(۲l=>xłSݡ/B$PH}ElRC?B{|Eg@]:@$ݹѳm614KsVuto+L—& G0({frA|=^A_L&{b[x)^+xʁYq֜JO>xZeX~.rD eR9Ok'Q&o689{-OKhEɔkP6*a`S.Y.AJH5=@%K7O_ "L-p{a4&gD}ٰU0E Ѷ؅oT DS<^<m{ʟJHEdՋ箿}ɯ^U=I5v\~`,oV, Pч黣5|K8>ѓlXhL5['*4D,֓wFWE"6>g״ >ҕ] ѧ.pv%hY ۔69VSR#pW\X-|Ҭ}ǚUoJ޷ 8EOC-l E҂X=Ͻ _ǻ1.B 5e-/f+zS7ޜZmvA5pS1y|Es`yO1[5]ؗ<7O=ڂo޳={P~mb@}t #@Oƛ ' )6z8{ʲ;b@ᲦM]@$Oﯢ7^̓9AwZ>ugswE{Pvi/%i>O/Elԏmi<|jЫ=ɯ꩜E;TD-E_?ԚY/yues `| Ц~O=ONʬ$[NJe*khm 'mzBfy x|aFXy Y܆85'.y}aR[TXjwSྏw>aƳYҊڙ<$;{-9.hOm.^kG$!оwO+~y]PG# -U"Pީe pP hv"P4w\/sߕ4?33V4LU%mhaʜv\瘘l7WaRTO(U۔K4(gQU0wȚ²j{Ke΄{odbU׊-S0X{A˺^޳\"hX.!FrO.vcCi6d#7YSFMV;>ʋ2z󂾑]]8"1/6 B46 ^_w't/?2^n@ B_D3*lZTwGQo,~L{-d?hȽ.xr !4+lf7jbUhV_v`+[MZqd;s*CLXхx>Jݓ2{mt㺔๖Z=h'.HHgѹҎ[:X^'#0T[1.-F橀` @IDAT{8VLhf0  A0.OF)2&h4:k uic[ir)lОdkY-#] u#p$NJ ߢ |8_-#wy_%|).他H;E)lD/{nPvF8j~M+?EPG8A#pkUBֽhFexH*43^t[0N {,}qoT*[t`>YZBNx@;{~ץ~6%p" ?p/ADxK2=V!ꔛ@XOEOsuN&(kYݞESsDRCm!Z#O@/7EϰpbW" \᥋hmuA9&A&ڸl >~Ks{*R`MGv^w Kw{ hE߾C xynX |G:hxO٤p( A]++rkVv{5i8|kC}Ƹ@~&/-3NFJ f@=ᜱF&._~*»-`k!FO7`k8Xnw[ [ R:_mG?(i[WVRD`u!Zr1,\~¾-^$(,eLS5w7#]i}Y,mI>tYcn}ٕmzeF:ʁ'JzH\PNz|w|+\ފ =T#2z6mr-`ؒ6qIkSOHyϣ۝'r*=ޣP}D;ykԾJjc ̞j1rA%3 "(G9.|Z(Zѓn̐&4u $6pwYhSLȗ3㊔Y5DLw_y-TjRcć{KnmNxw{m"6wm/i"|nc4SxSgzgHݠ9uav^߄U/{b{Lc |I- >wClBo;n8tåvv|dÜF-- `5H<1yY*u8m[TE絃qyKpTTG:ج;_пXuu#T<ʢ7.Ya5vp;,Ϟi<,0QDۘcn+*7% E:ȡSu9wjƨ{-u{&@[lcOҠL̺K1/= },H\KMx\je$̱(Vs~&t #\7[qhWd>b7ZHnSȉ% ~i-mTcjȼUnz7`~I3/*fí, iG$yWveg|ce*]e'D?4^ѡq6 1QIT &%[G$ꮢU%WkLTc԰'h@s.諃7FZ@_li61!h5U:J I`?ߤo% xMY4ä&lPy0wY M^Τprĭ]7HSߠ<͍|F}}ٽ @_q&DKv6ꃾ]my*`>W' a7̫a?,e}FrܼiyX Wnu͙*j]tHqDS: i.ށBHW |R@<{%m,>1CS$ЅISء ϧ@_AU;1]}nDy}`߻+@+W8pe; RlTnӇ@0qX4 +B}Ъj/bDD%vgrm5]ٚ $@_=8mhm&H}\tW[+Qe}A% &2 x$%n)-!25!J&غz60R?5iϱnuP nێF zd4UsI,6?&q؅p0O0n;.OwNcɃNZyB @_Y*9XT Ji}a`KӸ[>=mfפbnϘFr V"qڽC͚fq5|'/ )ՀeyծV5֕orGyl+-Yz-XO))676F7(bq)3Zm@d:w 8&2ʟ곜}+@m X>Qr|nhOkE"e[ ,ñGQptHÔ+E+8o^fcx 0U I]~O\bܦWv}>Mn+8 xTs7h;>/Ȕ!o4lGeC(G-Q5}q|YcSKG|F8*#M|%}!o5,0݌]0Ɯ_~A٬惾DQh/Ep[ڧB8j܉}'zApޔex~[uμ#: MnGPy^&z;[REpvJ\3K%U z6Wi>Egw"}ŝ}W'-TQxU{ʑ7w)n}pְYQnk>/<EN+V6Q\}%eՓt /t'&U:1_JQ,x|w6f| Hs[9RSUg^QT?Ob %fn1toxz-4ߪЇU:sYMohu&& Dڰh{mQEʴڑ$?RVRa h'A"fs5e.H V-;4|?/Tyۃ| yGGRy1Fm{P5⢰؝=X^Ns"sޟ,}5(q@Heg@_EG#Gf9њ\]e9.KKّ'mW4_ pL]<ʣM@~ }K@/\|byBvOR??˿4U%X:#D)AI+˓YZy]7 AQu"oKÇx5T4<"Rfw&7"TD1 (j 7F0YO`{@חe X`Hqa'k|&3ąwSKlm-C&[.5F qE:zʟm][m0w - 1-wvHvit m~ii LhiwC~4W՘D S>7̈́< 
_T; Qsd{]E6S"xmhhMhR6'&Tsg+#MlU͏, /wEf(it)WH }Nk]$5 |VaEwaS=yS ރ8^`;~h+e>{HXSy)]H}|5oMLh=6nv@Xݔ]H[B wmpztf/#{}L9v3 pA%?[j(x \ua.. pi=ާAP//e%_̊/ڹ?ş_=<`h!^ѾE'OKIC-$7"1SJ;z7Zo WZK4tmX pW8b.{m?i_{=^罇qhY4OY6Wzn;ฅ;2z(VeXYW'ny6qSЃ{J&֙E3y/8?H_ "KVJv/c&in=hLCɯw+u-"v*9VH/ }g V{{ؕ{Ʋ^h8σ/ շc_ %7h>j(.iۓ8dSͿbKfBHPz2_E#p Sޤ N_:8=ɦqc7fsE,옚͎_7 %2Jݼ @P0n-˫_Bo weQA[e_8ج5ox AiZ k MI $‡O)nhn- ҖA%z8hn|~ēD560wn-Z]/*H`)3)ۍs :4 -(ۅ.}܈?wqlMx|?4/l059=f'|zno [~J2@/sOR y9cvUU%"%biÊ$?a)+] $dT䐶~#&2縷x! *̆kK 2U@qVVZP@|v @hbk1O:FUe\jzhIv5)?0.e_Nm\ԐS-@R@DN{\A(6N)$8I#mG$?RB"[a^A4ŵA*A/yPW^ν,vڶ#y=I:#KE[8JK},/-Râۿ3Кp9Σ7iFc qe%uB[YՀ<󾑶~3Ϥx~_~,LH $qyvGxG.Zn%!,=ȁQK秞$4Ņj7<]n\RhGΰDi D)y )9;q}@[GO&.u_^OcdJR,JQzӿH@g3xwlo>ٖmox V6=5MUtZ˶A#닪\yuO (?_peyAŞȒQ_V]_=Zp 8cދiF6/D;;]{хځhh޵fW?nUEOl:>Ew:6Ւ诨[# 3wvz=:M;nt2z;.!={k/1u8n}k#eS'mj>90@ 8<^$/;~W|G;pImM|3Zl} ˮ{ۘg@_9h.fγcGͥT\,x۝3 45>,ۮL#-SJۏPҟy0ԵYp]"9TD[pBFH{ {7%'bdo,dPǧXa(M| 0\~}A8RU.2Q^_0Ik{Ox m] /yc DtYch/ ި\=y֊˚ښ޷ )· +գKb0;W;}Ց"1ҺMNn8ۙzyk nWQ4<KB藺|zgR6L8uOٜY|+'hCm(*~n  -uHvdamNU{m^v 0VNv5Vg/a@k.0iq=KT956dM'`ߔQ5Cwr&}l]Xus@cc^,ns}28n2[C|YXq_vxl>?En"SǫZlSwcuI9_6Ķ[7SlFX/\(fcz#یv]yNeH:+|1LҶfIF$}8+|ɽ;z Kr՞zU Zβ%|˥[YTM6ՁmQ4 q xCl\k Oj7"hy1Bko^s:6 ʹ07;ɠ}d< ;'qn4 'uiJ gcR}W8 W P N@ ~7ƾٍO-(JYu[c%xVP?#>E×rӨ(CjB]4/D۸BVhK?Pu{j'o[(8{sSe%Nb>к%_Jfv`ikGxLpq/|Ɠ𙀉q*[GlO瀺:"Ml@KG4:yyI 2v9\!iUkV1;Ϛ$Hyd|)z}=D`l Hp:צ'8m魽gڃ'iln/P~ʫm"[TQgU?5<ȊbghMuxj!V\ Fc;ټkj#mV]o+8aM@(LiBmߌkDjO>nJ=j:ɮK'_OԺnϷNoS̖}Ɲ(Up]\^pgL⤘ď&_[{'_ %1uF_7+'XBڣ:͉KCߵ6 aSm8{7zVN=1$ZlSɴu.n)2^O=[ګCYnORomp F`uS[/ԳˆsU=Y䘌;1}NCwRȄd#L=#S%քx-pJ×Rm[H:P`#d~F\?ڔMз{|ັFE4pO=?`ۀ}_ u(٥Q!;z7DH1ԘUӛ䧄2HvT<\Hk{;46C%~K k;lkSB^,YE78?7k[*,ژMևΉ6q#X|%Uπh=NcL#Dm0g21IvRsv5v%)hրEzRhE2}!`GOs/^%Or/[hAtt5fDM:”X;j.UiS}ᑺFT*Z yHCR"*ٵӀ?N{~iEX[e 7!gmBFU&`^ĨNca]1_VFu|d\:Wz{ɠD/BuRQC)N<^U| 4_E| I}1^8).gn˙?<AUCd؜OW ٽIH`Oς+^ c\m;[{.f-`;4zrxV܃U匾cr2 ,] ‡&ԥ =.NFf.]eч E9'y|0ҊNn*&aA` &Dz0ب3 -4dA`@CJL~5ȻqQfQZWB&NgiEiph2%hgs}k:;~#q!p+;p76;%!2P7sw\/|7ڢԑ+! ݝeXMb8!NK D=xmSaL?sj2=H{q`w+ 4rH{|g|0_!Ep=m%CquN3H"dV _-G ^ !.2g#[z!E[|5Ҟ-'SwWA]Ģ_ep'l 4 %xݣv2#-JY}h5x[4؎/`]e#Wg!6'TIv(!(I{rgWl;w$ANc@!=EO%u" U,o:lKnm?i^oF7rvMPMt:R7Vk5&˗{I ~+IhO?6i:hy/F.玲Nc-VI{A%)őh9$v0[YK@\}P%ﰗc'ڛ*v,im|ǜDHڕ|JveW ޔyUglCS~I_֬@ƹ& 9uIE^4ړXt*w5,~ L)'u@^FJ o)> n S7#f ♈iM\v@_b۟U~nFЮw"h!Vӊ>}DCcXj;˽ 9hW2g9_.u"'PL*k-{dWgBzS>˚-5}%ԊTa. Me-Ƈ*~$hɧ|9XB ډ}%.jN@_%3 ܒ Kt|}e5d @y-_i}rI>@O.dqr_d5meD4䕶4@GC+=  uZPl{L9$J"UI/P4`ww$ܛ-ROà=\fk~9 ^nlIûn} }0*Pw9;{EԊsa㝀aPhz3W/@_M:!9w!Zd'[PmVrْZ'xw+H Wl*M;{zKK2=f$6#^G±o].`Phuo\[Q{&nXp>}2@"x,7pv %-f%آb8e݅x9.x)[f0 _g_p)D^ AuJ afT>h{);L"hZ ،vFՁrBroMO %Ҋ#EIcRWWPﳍyA[] =n _{hɎӴSDZ`<5#VEiv෶V^^8~?&%KPzeTؓc e8 Q'i#kd寧'X਑3-W{XbFgZ/!."6 E-ҋB曟oۓjjilmcwNoÎs世֚l>ud k/vqo [3=]ʎ8̌wTTI&0J?[m&LPev ]şXPi]rv!OeMOz$?~C=gρRL]6۞wdѷT-u.1%r@|]wۇIU`Ηv봥yЂM47Feb@ KDTC?9S$sޠ/ 3#R{s1&=ixփRKm(FsI}EzjU(*x x3h{~Zdod{XM}gC8) eOS.2>?gtC_imoQ+:B.L ;vaO$•@UY% O fWkZ,=?+1[Jz͟6_se-Ԙr\a}?0!<_Ɔ4l HS{Y_ǢPb4zY(05UseY'),Xs&WNo !Ky|Toi,O8 DT*ؠW'kuP8l+Ji#]Ė\-AewA'Rw HZƃ$ 4Jġ|I}0ɞo. 
hy˅1ol' 0@ס"j ӿPɅM)L.໎Dpn{jA*)p^Fj|&Tޡ̭o;hޕ'" ;" ԅI8v(KjOۂJ:GD:!YMU}sp Mtے-:=NHo46e⢅6%*DL^O k~Ri탪m:[7O`Rͣ@\&0]o4E93FF0Ӵo4pbI{ [Z:za~f"QaAy:/nʗAAϨڜw^Lٟ s/f'L[#.NGM!EHSҫv -_#zzq*h K"mk<%5nE]W]V_-=CzxI@%@4Wv;Op^s"/;p{po83@єޮ=E%(-5om>ckQsd"7rz8-Ylg9@w_&(s-z E@\"qPegIpڔ4yۮ0[qk/n qG#jmF{ho= ݂Q[;xw$fFV1aQ8FbT a̹pƒ1-5oxNh[- ӆ} m(`Csىo^>:S)kގ*ߏ_x.!:0wDϧt(r[q;͉J^U:C:$tÿ ~DGuj+!гwfpmHn&cXyb9/NO:-hI53A?e[,ޠJFB#!yFY>BKT,UbwCGzڑs)iyc5ϓꥎI$|K:ʁPGvxdϭ=WZ G'VxYhbڕOb5V5]cGxrLt3u\]4Gwj29ۭù Xjv(ww#]h»`u[j vm;p2K[4O 0fwTRW ?Xm?OF}*| {ݜL=N{>`G#7įJ7G: d%[ڝc0%A?Rӏ0=lj[W՜^\@'<uzcK˞W2 $ހfjWo$ؑ_f8tf4&vZE#i꺳-S/tN@c Q|Jne~f5;=3:}@ȎH.qoq7S+;?Mf:_>_RR)iNknTFH^cxM.3 Dv ynQvΥ-ℶ]|0J]>w y>݃VEl[4>nhٟ@F`ql8؀B}UFu#OCltSYb7XГ6?fEB` w,h;^#ێ9$xGTfL1lN=%JU/t]E_ghgT&VBؚ ںSe8p62qb*lH3rSQ WZ4%uSo%r:2"P+iHz{վgl#kx)+/-bQhU$z&@_-*ۥNܟFkY>,xaEX:ɦ4wd{`9~@_9 2Vq%Ig맶 yrxcogVäپsk3K[[b۩#`oh87hi,n"Ͽ~Jcfv9GW? ېі9[[zầ_iëֶ57+U9dg{PQ2d}3jBAsO<ejx9`{*O IhneKg6%TQ*\ٝqt׺I](9J>_g=1a0*zB_k^4%3h5~#JS+!ֵ:4[$C#՟7Ra>\v2E h}@ε⤱lq>F _c4=\=m+P"ّ}+K)A\{Sapd^Ӡ#RAEue)sBbjDgHͶ}pLCT0~}6^ iԮotL^x` =uCh86xMl N \& β}}yNÇgduH{W3>90^٣cσ^|7`95g)^94l#7?KjvLفإLo4щp1k(5܁%pCqb#6_xn?; S$e:E}eYM oi䁾ȩةSΖ{1t?چТߣk6SbED)C\B_-JmuM +tm'ȕҥ6c{?2@f9 O?2.S&Q9Ey}X_tiQҗE}C7c5.lB ,If#{_W_1/s, ]):9TBG2 B-mv$ޤMY jk_ eu =7mbS/{4?&bi_*.wA(V4j{㘱cM#ik7c^ŝK{l1Ldz}mg-Z(m. N:V)@|N=bx0oJ!{}ͷ؎?M)@.SDkq7'z!;ѭIRbx?mtWLKZd|0v% B$CdWNvcD6,0Er54a҄HN91T|>u nvWB%V)x9Gڪl9WR"Tw{:ݬ$qOTn_&7!bS"]:bq<6zq}5\nm-[}iИv-7%)4=LUuYUz…q.DGgo 8#[{w8 }h)KU bp4&S~w`ohVj'?VC c0Uh=Hݭ<>:DV~o cuʰ ^y\>=,o@՞j;I2$fuտK.Ǫnd hiЋ9';GRy8US:\8Vq!Zv~߿&aH/TZ_Gݗ{A@ީu,GhWKHxLv 3v͌p9w8_rH[tt߅]:6zJX)8mi,mcOicv=E h:S{\;)lRC0Kw<`FB3lWVzy:l<lP?xDyye@< &Ҕik|hHِ/`z#} yyO9o}6י:x%mn酵sO=}jq5;ځoS`];竒!(EbAׯ{W 1ّlIƤ5hք4Nv7ݿL dz1k \7;"+=v"oZV _ۭ/"{b>@XnשĮMj3B? Y(ӷ53L\h< ?4rHןBpBbW:J! #wC ?,},t߽ ȺϠ/Ώ0k{͑|UrWI;-Ww;=sT0yF"R}x;9gd$/H 8ɠuڅ/R)W{5>j-m Fd9/W&tj~&DОI+,bN߷1?%3Wbi!.iTg/ׂn7 GkWж_W`Yx^ 22і]4Hu]ǣMahB2+{ѿ*dZeL8;f?p]'h]E4vJlRAgBhѢh&#PpY#60(4L1+NC!.[Ǝi?0Sيcs,k0~̻uIzJKZy5E,$:h/MeDOZHVCիk R% )|&+֔;h({-L}ُkF]ܒcڗêd6-<7V%w\q?~Y+VJ^Tt ,fvaN-IiB{()#+ۣcMћ?BJsM |S _Bw8cהV= hi`iZR횺xy#uNql=]mę;%^8gb>O%3`!Ҡgs& Rg}s;%{ف=F{C-f 0 %aqP;DGL>-p 3avL⥐1'VOI=#"a,e|qdM'Jodc#GX63[z[@a\ SVBʏdU{{ՖV>I،&% pj{<֪D[Z0Ns k^7:՟>̕`[*6#xv7fUSQhIF{ c_tڌi,iEń_ 6?vٸN rӁ嘉<ؗ& _?cިh' oNSO\76|7Zor6N;ѦwAh;4G$%^G(nuRU: w'߃~j¯E]^ c@#. KyM ЎmR؀#P^LB fi?z; qpqa0 \\$ v ;1 UQ_>9h-Rr9 UEYmTc ݱq,Eɋɯ3sq*t;>%\cXckMʳFZ|m _vGm6L2ɝ&:06ize֍>~G}&ZjpC0v 6ݨ>Rxn@fϣJi'Rܹz_G6Ne|M \_g-r@*b^s}vn;!vylM_rHK|?C2FmNlDi 9;WM5\5ӸT؇6Ŏ;_Fj9ڮaJeF:9.o{);!6\Ȇ/ʋ0NGb̹ZA|\GmІ)ѿPN@qM5oF涡Re.lwI8͟f?qHz\5n6yd{7a|oiN0~=v/lܿ q{YN\&;zP0gujB;@  S;:/gzl xuShPoVKx33Ұ2ޯ\W~!v0|H 趱lT@uZ6o !ޥܯkA˟l}4 m\A^*޴W oӖ#qkkTGK'u<%(}t1Vj:}^OԆ3tip]+v c~>G"m ʶJnm>`Y\Fc r0ZeOp 2t³ s_B 45]04}>76e6X#8>ָsRÙdf^Our#& lЯsyP_qi˴s=Q>v9rϧo}=29I=06~ݖXH]aZJT_.14:Շ 7C\ ѿwÞ*}"2qݶB߳aYF|%#2wC?ly3P\A#N9/ WtK$1Wk󷤏D2CտؐVb'f:wG}c'd[1^N5QΫ"3vGP>H5@b5҄Q^eȿ\mȁ_ÿM5ژXeJz|3V}2{l[Ɇ-c)_1Qss)?߿$җm}]toב,OˆQXĿM/}9Y7 pچy6<XݑϾ_>_gHk;$Cr_bsP3.˽,BBz(78&}/}0Jc`eK;J'5*|LzUΈW͹M o|VUY~wHI^~"鋳aԏ,uq>{:ɼlyےyK=Gy;EBԱ\%7f a9%`|ꠠSGiiGwn0]pz vCЂek[X:t6-'8w6kD{Ot|Ch+*@SYb:{oabh6 ; ߛ:I. +gq왆J?*#|7tgQKF遵 {[+J?wwjNоU5~F(>w>ewNg=f^ .Tܒ{u_tՎL(y z<.}ʬA{Sۈ'6GuD Kv=ci:BDU^HgaeLlmOP~fY%>*>8ыgɣ썮iev~\Y|%G[ЁxCmh,MűL#\SS_x.kV݃雇;0E9wϹ4Lu({5Q0$_ݳfǀ[4(c:_{[+>ğ1u"a^mՍ{}\[eA1Fc[qWwW.] خ7~73?4:,GѼ+ӑ MU9)<e~@b*>dj Ֆq!o~|!_~y$HVo^JUsIL.ii Vuw14OלBl4g#+G׿;RX?L/ׄ;5ols>䤣怚fⓣKlvUcwHP.[\T ]&&qzEv2dzqט'*g оkT4l*^쯅5Tt _ʱFQ8_w+D ц[|O3j(_'oJX^̐N'@>yϋQtk_{#g7?/54ɫFi" H|,L0:5=F\iʟ|$Yy[Rbn憢8! 
,(#f2dd~,Šl:;=AzYcƦ0{a\T4 CG:>A9m`hxO=c↲$̇_Vt`@?K_E3ݽMއx yO|ffo2$Hx,Ny-؁KP@́hByK0q>mX0ou="1Hu'ϳXN0-ؑ4;)kU]4ĉU}{sb-?#JNMg> .~=k].hM{`$Qeedd w4_g㬄VKӄ=ة䮠??E\Y4@ K_N=qg>xx|̌=`ux۸P{Ьx狂o h55&=}ViD/}2$ 7=/{ww+!a%wfX‡>x8F(0I xMAts~ov3],%/=L ->B'Rcp.~K65_jWřwL 8KUy`ÂYN MmT:, 'mi$D4g; z|dFB.kWr4#XWln"*@N5?k{8ʭMSsiU.^Dƚm2_k&"|@엓_aWR_NQJ_[~ ɗONiMfB yQmHo0«R#X@[/)ZQ=M]EmcX3yyBQXTw=i ,reG^ <V9isJL\;@l*ъ D+ kW@ ~i8hRb=SEܜtw `pvSF?,&+CTPD5^x;95%2s 劔0'IBTӚ`77Nǀ2)m dռ^`dea΄ս֓Ƨ0\d Rg E+g[b ϨC~]ts߫msq][yʽөBE44^89zi\Vʳ'8v2?)Qs͕A~y;ҮrNI;o*]NS2q|p-!T 1½$8ǏPؖSs-8eiJ=B9I}^Lf?16OqemO \]ܽiDz7/{ZLbo4;6J7͸ Z5񬅚ʣOGO~I,!pIfJ$5Od9$Ӭ,$Oq I6}ܕFSVr)7 MV=tJj Ը Ñ#\8mbI: 6V|XUvɧY/}b<5MxV,J#Nv0A f4FN-iޡ_ lƁk9@-;@[P7 .%)xxT{uS\~Q¹Όے A#:J86L2+.lKp\ړ%~MA(@ۄuKwt+ѹNKYz5rװLPw-BᵘAș&_vjm<)8`Az`} z35<$ a\YFܟQhc~^:8mOih,?Lv riQ@|*ZlMfq D?7=ʽ9ȁXhm!{U?>EُrqL5ύAG2Q*{W t^@H#J}&YWt֣<߇yǏ:yЅPaDC?%[saE+6cT=UʖGH&N2 > W=#P7q'WtXē4 e3#LLiDWS'olfTmQ(S0>uػ%*Lߙ:Z #z ;n2x'{:KܮE7ݩtF}@tl>I%<5!i2Utb@"zi /sWitg&'4ȝ$Ikw=H8oLq0?(I}>^vdnIw#lG`&NKpF{hNd$Ȃ\,Z<]|qRA٘W_G/C>k jS @b@C1 ?%vCİlGɗ$,mN9`!;7U GyĀDfKEOKh*LK7ב(Ƚ4uDT ,-"p6ߒ4'Q$I,$imH+3)8kN$l;׌{ "w\Gdqɮhw>yL8sVS;%0#nHZ*&#L;Z̫ o2Nl%ҸZHh,n_' iDikXG=Ki޹v̕Q,;a5gz޵ bQ@f̀rY35% ݇f${s6cZYf""IqvA}rա@V6@q"?bjH_x, =S|4(`rG)&y4]ߑb ;@ZjNSk4>1_xC8MHvM@?:m'.QN{?Qqtd7 wh`/q=pGڥqD6*TDzx{]ܽk@"{O6mtt9͘)0uܖ"L?L"3!ׅ)mNU@o9R1WBo[!cD $Xmɘ81+˧rn\Qr D{U;0mU^.W? ӱniTG,٨8E#!pG''x=}d!ub7"-wX6q}#JrK-ia\ZȔy锁q `m^w_Xi:(M$<ƦK䖵q`F}1T$P>]%b vh+O[l&zƼ9 %uml4w\9y$tu">22l+/ZSdxAIeWk)-[^cma|Y% k\]4 K*tU/el|[$6Hq4\@NOyd 4v/^ |e̽W?o;)fMs "ֹy7脗d^.9LίxJTYK1b.[@< ּC#_<>fX!7Evt&5h|J/Uq>-ZzakO(om l Oj&5Nbq_KA4b1*Hseԇ .jnrMڒ. ?7ehq*[]-@$X (4te&#G1(_w4Ff>wjR۸MC4QOmq6:ri_ustj475nj4KKsL[O*hipn>#ނU5֫3eF_ \'m4FPvCS.'v8gw[!Lm$`ԉLÞv~s&+h\;HKf;.v(]z/qYzEډ=a`)mKHk+Vңx,s=wlїg*p븬6+ 4r\wϋr9&0Cq෋>;8ި͆~|+ \V{uM%rWsG~@0prJx "K4<4[_h 4[z en$!ؒ"r Zo땷@lz]hEwnh%Y=kN_2x\vbP[`.-.zF.K Oy:܁bmDjч:SGq)on';!Ƹ: +Th. @V`6nNJ*   xMhh0ya]lpewaj*^!O@A+JiLr,}Wǥ+:2 @:.sYT*諏E+ VYMu9w~Kח`;O_O)={b_,Jhۂ ;VV‰^Wg&GG̦?=W6vi Mj/Ѥ{p rE?q}mX"4_Ўvc@^E]Tw]]lܚkvQ^:e"6/_wcZ]]x6A[4oPUHcN45@#mtGU3cN$ #_TKN&UN꟫C&^b'9 @IDAT4Os2ЙJЮxCii%^kN{䈫8|<{ z]~|+Vk1x4~gI ;7})Ego8?[=8tjY 97b'YzN[24nkiOucL|vR0dDKZۇӹ&/DRШ9)u5cC3n2:}ŜmN2Bs GI@ f٭ ~;:/(s\ZE^17V3 ۉpDq*f-tsiUYo$Co7Jn`-㦽>Ÿ\=4I^% ^,ʡ'A_=kh́L A_*xQ-ŢLz+` U|<0l^9?(k_ O@켂1?+(7kDei S͖hъ^_%y}Pi6;#7 Al {* :/>Ƣ|d2>V#\g~Ų:BMO\0#?yæPkR䖷US{Q}W ",Y,=;mnPO=M m܏ >q@r ])m?);}ʞ` P@&^P]Tu*-8xSWfBpApp#&6>}oaf&XV_ރ C{tMtڋ8 lbC͠ohKhޥn=|kOcxo~?k_ )#|-`kzߴ R(F\MYÅR_Ph&>e\0=,Xk [C^`[$HVA7QORrTRVܜZf6qf3m d'>u*gֵ<)i^jcW? \rY{..u:+ڥ/w1jXw6m0۱*x#-on5 B 2q4eedV C_&[/p1諍0j"vxcF"u[]{T'8;h>=对ۮ>N 0h5KuqK܆fW./{F$բS3cqt>\c<<@Z ==-^s+v}j65};Ӈڶ鳸l|! ,U3e mp)vko_1T=7~nيޚ@C >νwwB Mn&ljG{|$g?bi1`s;?z^u-}wɿL"ґѮO.B5մk񹸲MkU;PCbv'4?.}%`ey_ jm\/+QZ jjBOJli[jYF C>cA|;M7J 鋚I9R8ƟU?2kh/ݙX=9n$dO~ ѻ{ߌd^)⟣6Q$+v\T뭘e I7JXOXC]^oG1aaOit=ԺVnX=kF#WѯV OEp-hOeq!w\!!-\:!5V cH9 yN+oxy3}?9zdŇpٞAJ.$FWY]ՠH7X@hΘ P\{?Z+\tp~Wj= @~Vyany@:{r|$s+/"f߿iՏ^R:=iOFw̹,nY\`K-XNqaϰMYw,7^f[B”kMO5+r :);q|djYLx",4ǗmJ3yZ^̵Zg)]}g}k2_Nɔ'ZU.!=}-vvJM6| Ϡ%iḻem:_ZmkhH praR G{3 W˺ȶ\FVUU?~ 'Vouo88_ӝ6奃 = Wڻh>w;m @=f/GOc0lk֞m/~n97%Rl,6ɩH;sdrŻ`HS=%źeFhtZu)=V3.5,ܗ1wЖpߊ/Ou4[9ߐKGes[ vO%\w^w&+|ئW3#`??t몾(кu`cR?@>uMR؅K(] |~c$ޥ/@m͆vfǒNc K>A;Fbc^Pڡ۝`:@6{6Ԧaq5eJ:}Mʼn^G>9)ҦdYX?BB$4zxY7LpwV3>ʗHεtN=$d{4to)UӒ> Nɡ qN̏λhg8}.':<\h?j|zџe?g S8`e_əLRڔ6~TJ)'3ؽƊ:-fQ@'.`$9;f,Cn<7#rۗyKK{ z-LAFMSgr3D|sy^莾]NWgN&rDOP'Z Ukz6u}Ai!Z,]B+[5/M%qoц2l/~?3?!L/0P@ (v ,047bq Mڣ-t"7Uh:?;G6"oT 0:Kdˎܝ? 
c[8נIEZZ} J;=1qli8U,|wg4Hi 15v45s 6\-v0yjA m&.X_˂BZ0!;ڏ/Nx=`Oa1}Wy>:uH6Kl+#iEtVЋRLOt1z|jc=g7RJ͢I߅ X(?(% ~6i5NG6jCϹv.R]H{_Fs{_I>2o1"qGZPRkP@K5T6=QSSNg+rFsv!8i,N|( x~fjd{at/]H˯gzm _wtG{@UDhϰ׋hxꝸozp䔮I/ yf@P' Ѩjؒcm=\k~(wɽ>ݧۍʟ`rŪH.yOJj9Rm"@qmE?,oxiԟnz_G҄A~(|W @@`SͶ]m11^>?+Sf cXno:/u.mSg#۠+u]/.?/ޚ֫{~3-WmDcV$mF ҏ5<-)aZ{aURr1}]{+,>*Ђ^<{[\b׎>$dfX5Rx 󑷟3?$[K2l94R?NjZ=|0` S]mШmd|&cUl nN:yt48?^Nn3gy\%.9.R#v4`ZAZ˶gB^j5O #6TX뵆MyZom/ӏ2Z8}ݕiku|h;sS`\[ɆgSK_w|Ԏ6q}V}< -t.HLXh]Bu)]#hDɉIJ2-Lp3 jF@)ڑ0FLݬf{(cm܏ݚ9vwbk'>#G9czItS:sK2@̓B^򋐛[\q}['vc~>.f6:?q6,lϥjcmEl.ex@y1,BQ -b^Zw)A$ $;~ 2&c<|>E}r oi6Yɔk=/Zw2>vO"6v#O9Mލ6듹<#0LCT=< i˶ȼh;O7cE㽚g}h\D}bQu6Ӻ9NIM<]wp>'sIbY͆sUI#ɬ]6]'!iֿ~Oz}D;71sDX?'x`.NرKrK(W-2F'd(;:2hƗbs_iϼryΆLU'=/ , }(6ֶl FǑd JB w`]u1;=˱M&ܝ(9DiG5豂Ć ,e:w؏I|4!EOAG ]( Yb]:(]γHQ˅S]ZofJxdֽnѿiҍ&'a4`6Eb["Dek- k4 ³XLa?VR0`ѦACU `$SD;N@_i/&.b XK,]$DutLo0ZX ,@f{gjQTr- Ҍqo7TT܊6t->^n7 iBj,rZ\ҔK{E<.A+" X6ŪKN 4$C2Bf:Eh4\nPQmb&h9f{;# *d_fHdSsZWk\Rݕec~6Ui?E![oS#) {gx}mlr#Os#d pKmj;C/O:"h,icĂC2o=f K]PTn-}eLDSi7G?pC1IWyU4oa?6p@P94ǵR:K°^}B=Z!_e\سɞ ٌh5]_:鄻3q֪΅I["xri :W! ғ?Ki/XEԥ j§_cVMiNF0 qαl{ohtǨ%(LcʘTf!yS!2ϔD)1CER2 ;z}{n7?S<|^Yٚl_ti`&ۏFT;cpmUq7}"ll;kR8ޛ?k͊wG0;/1З}7=Q=/>Yw{_.sG#tz}8pi9|QHЏ[fkS0& lPv+5 7)=~̸%D]m4`vyP\ًݩ2ɐ"nk* yI"w%3Ώcq*td2>O\e).`pTQ7{77ߚJy3?0x)8w+^Oڍ{h~C_?Q&6ݱmLc#=~E Zڽ-i.c2ړ待m;je6˼wWّ!-\ڍg)eur!}IGUF8d|[>K)]r Rt8Nkxrr3c/Is++JWqU'+_]R/ZD( -.$A<ʄ8]iY][u&UE)m/}ɺPbeO |Gf&UQY@_.?}Y!a,b#)B4x)JO#[JR~҆鮿5\L-G|O7)wc:fSjtKL袷YeCsU Tu󯚜dP&~"-It6HeA{$Kf6] >TAzKFR9[64m$Jٞ,ҤO:J*'eZ@ e串)v~Ii%]W%W, Xڤ;7haM6%$qA8m<ԼEMh$Ny]qs~@([ :ץ)"}RTo9+yti 6D|Ӓ3$\|ˑӅ*:>f4|Բxrx^'M "}e,{ qtWaNsP__N3^pr3IΗo9pdgҲ,X~=N\`{S\ru@ӳ{oJ)|ꎎs8/ڢs{&HN㾼L;H=e уHw_'?3xji~sYQ|DG@M.!PFʢ-f?w;Cxkxxo:֥ńEsN<ґ~7!M^oN\C/󜄨/*,3UGݵ`6$}58'N F9&H+s[QP^NAW(NLkF?e^[wõ~g9J2o&sLTiK~\c(t/wn*ޓ/)F?d&.D50YPv2j=ɧȣ'z*]Z -|ulݻf(x?{N6M:n#_seRvUsʙ5͵9Pq9zv;U/x?'`F6Iգ)7tP]}j5BE8Cm~V1!v;*%Gδ%#I͔3#kV8Aͮ3yԴn|z16ãb2eLRlMĿpw7[E)h 7t=̫? ^r}"__U-:$$"X6|z>Y")| k1?6zZWw|'r܁I_Kq6M:VXU~Xq^ D*S48 s?`x'g4 ]v# Sj2c,2HbcUx~)P@f}YMDҩI]Iufg4}fѡKm SRs"r%#_e[P:V,#HL/ЭZ7N5- ]UZ y຤4l0VA:$&A*$ CWo;)3Extd '@Ygz#ӋNw>m^ŀoj,inYdK}X{L6/7PR~isU.jv!6 0pwD_Jtfbp'}̍V,$ P1owug.n Ol,"zFҔ#ldc"$P飒"!3==w~'xm=<{8G0C(>QȽDqV  M6Wwauuvit6'&t@NCtgU]먨U-XLn_]h#R ʹp26?VwfIA!=t%KnaJ>,rv<Դ$W>~D6jRqЮHBlx)ݡn-9v"-Ptt4{ eᾀm'Bow,R|޷=KZY=J LE)qg40;)񱾼{}(/F665_tuaj#a;ݢͨ3DBNtUzqYT4[5x;1>r[>` 9 w5 a NaJrq C~H;5_d 0+iL@` "E;/$G {출>=F:ܲvTP][+k+怛B_ۧ[Q Q/s%Υu]?fHˤ[ߛRžzP!$TpF4މؿEM\8(D|;`K;V>8'!ļҞ$fN=rݎ.O=Gw8|K}anJu"Kbϖ'2%fb ,"Oߐ7xp&郗I cNN}`wjm}=D ",7J^Mbki Yey\ycn *#QF)xd>Oɦ^Wܕ};Ʊ'=^ uaQY-{V#\nD!&no潇ZM.J9gE ]mVo+,gdQdcv\q[\ c$}l55ʠxp;Y>77Çw|=/R>eR+lE t}aEfP0&Yx&M#+ģ0NK\!ߥ]j's;A[?6OpfNćYG&]yv8P!l@5؎>>-L438"!ZH{S;}$?p?IZWeU#Iقw6eouCe];5O։*i2NNljKlNaή7-Ζ/@侔QҨmt-vBHwZ+x-}#03_ 9W8KMZrnaFseSg=2NNlv%6+`Fx t~y~]VZ-WH^S8*_}%[;%m>slUI"ho=OB ܗ\J^ļ(ptZg֏Q)Ұr9>/yV?œDOx)?T.YMxE2$Vz WگуƤ"PT'n ƣB*lSO; i \ =Ḧ́,r"BS=.jK@qy;C]@̦kz2A4粹G'[d *ʶ ej{ aG l^,x^elM$…Nr-[)'#pi:Iپ?D>;*.fCкOl\";mvE;& ߒӉozrZ at$qCkNZP1P&.vhxK?0p#$qϣQѯ|)Piݙ{nMnHoWP;'[}Tҕ"ϕ:Q~ Q^d<$u|Q.օ޻:0 —]VR? 
?i}Z~MN<~hTTVM3)~}Oߥzf`8scWh$+{^ ,51ER7]/33:40/68/r ,fmf 4?L&tRlM6qp"GU:SJ@}@nnZ6~]SnUeytR?܀n>eБN+g4U)^7h$5@3%O``bIK?rI"Fa }$uKT9eiz!oJ./t_qS%ɞo]/獔 iӍsf<+)D 0l"QH7dhoƦn)lC +nPUR@9X=>5-_ӿ` %E(7g !\޵vڂnrwICԘe9e)]^-iP*Q "cy$)­شꌋCGD=S1'9M[ rI}uK~mK|'<ɳoFDQ[R:/ OȦ1brWh }uJظgƴD[dN|xnjscc!ܳxDf9s%;BMk L=:.:(Q3qu0 ՝o|KӛT}iާu:Q7Cԅ~]{RRq-ncF3ʯth<⁴򈎇7)ȯHe0.%Fl䅥٧lJB9chN1S֍9źnSW/"|j7pgҳG\vƷ`s7[޽+p`;sS@" }'SOK.ĮZ .3ܬGb"Bb[b_YzJXÆ kp#gb.؅Fkq[tHT6j$DD?ʽ@_IaqC6+*'WC{d vzWP]BVu&IA:(]^A3}E8Ndd!l?ndzJobp 񺀾.E!>5jY?Z Ie'd_BY>O@^$@;erS+7d$6[ s73-kh¯Jױ՛4 ԞHɒIhO@ҒV%n'}>Q=;:Go̱bm>bZnɂQ5W?uqjl]8/ڔ.&f7MY\[jӲلAEvǪ[z.me/F.A2ȗz/UC0 c5f_|iQc@>K ߁fOE9H X}E:r X hp*i8 wG/@w?۔60/Ӂ5l^|Q uDEHu+g{N\bݚ{uNlKX}뇗bT$vJn9&#a&xЩ|>vBO}b].jk9sRۣH +4,Hb"?1kiDshNj+}Dž%S@TҷYWs@|9%^yS8[lX<^𴝓3ۛYHyME>/iBr\I-B,7v#:}<TqQlGx|\'J.gZΤ˳7sYli>Sm)vΩ֮\".0*+)_z4&@lQeuyَx!2CRI⨾([1^*2fRT%{n!ݴPKߣFPnӶ~uc߻ (6y\ 8s+snO"}" hc;w*W6 ϼ_ũ:j\h橓"]_#X<ϤF2z$sOBp|o)U4m56~DH]R#.s!H͆G3o)}DG72nS]]֫e#Iwwp 6~*ۃpZw\vOm;)k(Tյ>2  6e&-K墱?(c'ޗpqσ/y:^Ưj9ڼ3u3_ Y.}峂6V%l59'0'ҠL;ս5녟Sf̾X8^_H2BEGu^u9\j]~ێ;hspl6rH%;zh{giӦv)ؙgi;wv`-b_|zͲn/z JTWbj.640V3A}3f/k|8^Z7 j))te#HAY38B,s` 42ڴ I%ynM1H =@ˤ3Y:ܻ'nߵdɵܿ`6{-?J, IOg~h{,89ɼ!a PG$U_z 4kW&F"]&m3Y{%;"홍V?ђy#8U$P*O6Hc!(X~RM!n3ŀ>ia髂ۏ76ۻF? :0GI~1@Iԁ,`zIԫ,X>gH>kFS#?_Jk B>@gak]=zBt>$͡.G; i_O'-90Tgs+K& !@biqo[-yޚaw(xA  4ϪvD@tM;{-p֊o6Es_klhbЧ>y?!OMAh1]))ufNdrlmzo5DLgoJ187t g%%]>7Tb~TK즒ԱLfO`sjT?YH穯gmp!"*<~ˑm(', Wh6 90S`ɛ rR&Xw[#VL*nnnO!y6 l&ިB7f. QpظXM}sϱN -~@?9ݳS,% e#1vM ׭ qsm8iH[7'X=xi2 `3x0ij]gOĤK$vԓ/t"o(P{ۊ)bB*r^Hv"n&じk;0 ⥺| O)އ$G}&$&[1}饧'M]q@ؼyNQb $q;S-ʎAyv d+ԵXDލL&l#< lMl<2I!Ij$x>m| 3y ?_KSӆl+⾆} Rz*GQ'i+B'sAu[tikSalNV Էke (6"˷CRĦP~b=A98ԕBӵ0rdfnrBGk.nJIM4c 큼ɀ@a D΀45I&4`118N9F$/)d%#&;zۉ0uyCT6Z;c ={h+,_k~"kݛt'Π?;B"z5H I-I}4 H.޿2Ýe޽.S}J a%ӼZAҼ*Щ|*?twu .Rgn%ycIIPϏGs\Ert(Ҟ*xy3.3ʲ 6իiZ[x86"`~\^&jn4y _mӏ ZI޻:JUiyʼηYp_wOglkD@,^{ A9A>i~ke̙q._W}L2ŖyNu}8lv}>#5kmֆ f|vgۀ\V},JO@w]6~ԇ$T xXT11=os&@eJjǁ @9:F9/7LitQ{EUb}5M^g(*L$} ϵ諅2ZyLARᏐA:%stutxWӄ_*uU7*Mn A^ޡ}*oܐ̔__`&:ZR[ELoS}:Eail/>5ϡq^lඎK zIKf ùAHNȤSCGZhYbi9,U_NMc?FG/\I ]j/K}\f _l@Nr|#]ޖ.!,l|bh9RbOE>Doak.[bGHgA\]\ Q,H}py`PrnQ>Ljk,()z?;ߺ4:.O9b',>iH&wތfv&(آ#/]ި$SW ,?z(?E%= *doPwK'(|ʋTOTWpJ$*_~^;HHQqq23VXYfQe ׸w=%{M hʼnayOs,$N{W]N]A~s<8D/FyRMa~Ere9rJ1xFR\Z9gQx~$?<$=Mm|b\;$8;/m# ͑$nIf3q?[$o;IWRėb(=-iV\먯>H@F;K?$0vcr}px}tUs_b-v ~ēq`>S6 KvʳمYtﱛkb^dWS^%dx/>D_{xpQYm짒v/ 3k Rbq1ncg_@zw$Q[% "$Gxfh;R*^O6+=եu-aʓ٣EScU -ƈ޹|E[}gN|o~Ϧ {l6e~o^nɦucwÙ)/jt.]k_flOT_% AKANo9yvw$_L}j6E_i|0$-rDž~qHŸlBtB 9RwzԒNfQNw9-󝵤uT[caP9M/=>N\iO\;yGIә5J6sڈq+]B7d#"ҁ:i`_f;rl1ߎ^^Fz{/AG6sI$Ԇ6Z9?SYvn?zR6rߚ?q)Oh-(<2sTO[JX#r^ԇGiI.T{}2}mnjqowڈG|z y/q bsЍ6'1Us+捺W?yu.z m & p=ʋG'Ss?K`xkT@~O @YIGjyuʹgXV VE kR3+mA_ F"NkܚyOmk~ u(Nfjm׸W s}Fsg-))38ׯT=p;HR%pM79}͝;~m'[^=kӦϾּy3l^}s1NuѢEv 'T(L>m׮,QrƤzaĉm?pۭ[7رc^|E={-[Zh]:t-\Щwo&y.K/T&!6]ʼK.Ϋ$~㏷_͛g{X׮]mʕʝʺUV6p@{\U=8ؾd{GLeF]tIFޤ]qIZ쭷rH$\vi}f}Z_K*ҾV֭% IjlԑAq-|}} i?&Tc SYMxȉt1S4A#_ojƣ&Kp >aMW4保$.H>n 4I^^a]H%G]ٵi1&EL:~wRu󣶰35Wa5&-6'$^awӘNh: [ҟff&lZ`JZpKB}p^/. Mׅ Jg9 uOQ-ܷY ,[B9Yl|zW͵tƕ& }`6H t6M=a'm3С߂p"X .Қ82j/R%ri]]4y79wf=l 7oѓ}֔l0aOxKy0?r;^,>.ѱֶTʣ6TVoXVI<|j\⿞R$m|8 a%IH #]Y%I<3LRS^~Y\{5,O~g(s:kߏTWloIV>6|&}Q6zFi68~Ɇ5vqZj2C%G z~W b&7VTJQ77a9I1t|7O  lDT9AE%o6+LTR֭-e(bCֻ;/\/zGh;3\ʱ%S؆E d#v]W< #m./O]S%[9#݀wA>Bf-.{ xr7Է6 ;uG=z|~O IzF$ `\^Mex"omZj@6!]'X= mpSOb[ KQ Iں&F]pIbst/9[Y Ow#nxM{iwuc,]sj_&Ft8'R.οx-툸Tެ ́ij|)>D핱 [d/3/b.&Ti]VE[E]6sO|9!"5Q\KS#헴` eډt[ _Huc79ik\?*ѹ(ڪ':q#} {rSG.K\lq{@KZM9 +D p];(s+~H'#S 6"i~}66]ٴaWH”W+t}:z鯆<΂Yps/ՏnK}IPv'iSӵY(z3uPV/dg_[n1:oju'~0=ցccKgI6H=TzO^;u̠*^qN^p?~m뭷w}uYyy_}N3fe]f=Duac }nfȸL|6d@hv@|']I'>#HBWib-[ɞc}83馛:h`&LpqWڮ*(wѕSO=eo7\1Na G/wynVMe%We{m Ka 2$N_Dnc-t j/UvkMFO3;6>QKX}+ADG>ǒő. 
" ) return value class FieldTypeContainer: def __init__(self, ds): self.ds = weakref.proxy(ds) def __getattr__(self, attr): ds = self.__getattribute__("ds") fnc = FieldNameContainer(ds, attr) if len(dir(fnc)) == 0: return self.__getattribute__(attr) return fnc @cached_property def field_types(self): return {t for t, n in self.ds.field_info} def __dir__(self): return list(self.field_types) def __iter__(self): for ft in self.field_types: fnc = FieldNameContainer(self.ds, ft) if len(dir(fnc)) == 0: yield self.__getattribute__(ft) else: yield fnc def __contains__(self, obj): ob = None if isinstance(obj, FieldNameContainer): ob = obj.field_type elif isinstance(obj, str): ob = obj return ob in self.field_types if IPYWIDGETS_ENABLED: def _ipython_display_(self): import ipywidgets from IPython.display import display fnames = [] children = [] for ftype in sorted(self.field_types): fnc = getattr(self, ftype) children.append(ipywidgets.Output()) with children[-1]: display(fnc) fnames.append(ftype) tabs = ipywidgets.Tab(children=children) for i, n in enumerate(fnames): tabs.set_title(i, n) display(tabs) class FieldNameContainer: def __init__(self, ds, field_type): self.ds = ds self.field_type = field_type def __getattr__(self, attr): ft = self.__getattribute__("field_type") ds = self.__getattribute__("ds") if (ft, attr) not in ds.field_info: return self.__getattribute__(attr) return ds.field_info[ft, attr] def __dir__(self): return [n for t, n in self.ds.field_info if t == self.field_type] def __iter__(self): for t, n in self.ds.field_info: if t == self.field_type: yield self.ds.field_info[t, n] def __contains__(self, obj): if isinstance(obj, DerivedField): if self.field_type == obj.name[0] and obj.name in self.ds.field_info: # e.g. from a completely different dataset if self.ds.field_info[obj.name] is not obj: return False return True elif isinstance(obj, tuple): if self.field_type == obj[0] and obj in self.ds.field_info: return True elif isinstance(obj, str): if (self.field_type, obj) in self.ds.field_info: return True return False if IPYWIDGETS_ENABLED: # for discussion of this class-level conditional: https://github.com/yt-project/yt/pull/4941 def _ipython_display_(self): import ipywidgets from IPython.display import Markdown, display names = dir(self) names.sort() def change_field(_ftype, _box, _var_window): def _change_field(event): fobj = getattr(_ftype, event["new"]) _box.clear_output() with _box: display( Markdown( data="```python\n" + textwrap.dedent(fobj.get_source()) + "\n```" ) ) values = inspect.getclosurevars(fobj._function).nonlocals _var_window.value = _fill_values(values) return _change_field flist = ipywidgets.Select( options=names, layout=ipywidgets.Layout(height="95%") ) source = ipywidgets.Output( layout=ipywidgets.Layout(width="100%", height="9em") ) var_window = ipywidgets.HTML(value="Empty") var_box = ipywidgets.Box( layout=ipywidgets.Layout( width="100%", height="100%", overflow_y="scroll" ) ) var_box.children = [var_window] ftype_tabs = ipywidgets.Tab( children=[source, var_box], layout=ipywidgets.Layout(flex="2 1 auto", width="auto", height="95%"), ) ftype_tabs.set_title(0, "Source") ftype_tabs.set_title(1, "Variables") flist.observe(change_field(self, source, var_window), "value") display( ipywidgets.HBox( [flist, ftype_tabs], layout=ipywidgets.Layout(height="14em") ) ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/fluid_fields.py0000644000175100001770000002216514714401662016500 0ustar00runnerdockerimport 
numpy as np from yt.units.dimensions import current_mks from yt.units.unit_object import Unit # type: ignore from yt.utilities.chemical_formulas import compute_mu from yt.utilities.lib.misc_utilities import obtain_relative_velocity_vector from .derived_field import ValidateParameter, ValidateSpatial from .field_plugin_registry import register_field_plugin from .vector_operations import ( create_averaged_field, create_magnitude_field, create_vector_fields, ) @register_field_plugin def setup_fluid_fields(registry, ftype="gas", slice_info=None): pc = registry.ds.units.physical_constants # slice_info would be the left, the right, and the factor. # For example, with the old Enzo-ZEUS fields, this would be: # slice(None, -2, None) # slice(1, -1, None) # 1.0 # Otherwise, we default to a centered difference. if slice_info is None: sl_left = slice(None, -2, None) sl_right = slice(2, None, None) div_fac = 2.0 else: sl_left, sl_right, div_fac = slice_info unit_system = registry.ds.unit_system if unit_system.base_units[current_mks] is None: mag_units = "magnetic_field_cgs" else: mag_units = "magnetic_field_mks" create_vector_fields( registry, "velocity", unit_system["velocity"], ftype, slice_info ) create_vector_fields( registry, "magnetic_field", unit_system[mag_units], ftype, slice_info ) def _cell_mass(field, data): return data[ftype, "density"] * data[ftype, "cell_volume"] registry.add_field( (ftype, "cell_mass"), sampling_type="cell", function=_cell_mass, units=unit_system["mass"], ) registry.alias((ftype, "mass"), (ftype, "cell_mass")) # momentum def momentum_xyz(v): def _momm(field, data): return data["gas", "mass"] * data["gas", f"velocity_{v}"] def _momd(field, data): return data["gas", "density"] * data["gas", f"velocity_{v}"] return _momm, _momd for v in registry.ds.coordinates.axis_order: _momm, _momd = momentum_xyz(v) registry.add_field( ("gas", f"momentum_{v}"), sampling_type="local", function=_momm, units=unit_system["momentum"], ) registry.add_field( ("gas", f"momentum_density_{v}"), sampling_type="local", function=_momd, units=unit_system["density"] * unit_system["velocity"], ) def _sound_speed(field, data): tr = data.ds.gamma * data[ftype, "pressure"] / data[ftype, "density"] return np.sqrt(tr) registry.add_field( (ftype, "sound_speed"), sampling_type="local", function=_sound_speed, units=unit_system["velocity"], ) def _radial_mach_number(field, data): """Radial component of M{|v|/c_sound}""" tr = data[ftype, "radial_velocity"] / data[ftype, "sound_speed"] return np.abs(tr) registry.add_field( (ftype, "radial_mach_number"), sampling_type="local", function=_radial_mach_number, units="", ) def _kinetic_energy_density(field, data): v = obtain_relative_velocity_vector(data) return 0.5 * data[ftype, "density"] * (v**2).sum(axis=0) registry.add_field( (ftype, "kinetic_energy_density"), sampling_type="local", function=_kinetic_energy_density, units=unit_system["pressure"], validators=[ValidateParameter("bulk_velocity")], ) def _mach_number(field, data): """M{|v|/c_sound}""" return data[ftype, "velocity_magnitude"] / data[ftype, "sound_speed"] registry.add_field( (ftype, "mach_number"), sampling_type="local", function=_mach_number, units="" ) def _courant_time_step(field, data): t1 = data[ftype, "dx"] / ( data[ftype, "sound_speed"] + np.abs(data[ftype, "velocity_x"]) ) t2 = data[ftype, "dy"] / ( data[ftype, "sound_speed"] + np.abs(data[ftype, "velocity_y"]) ) t3 = data[ftype, "dz"] / ( data[ftype, "sound_speed"] + np.abs(data[ftype, "velocity_z"]) ) tr = np.minimum(np.minimum(t1, t2), t3) 
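        # CFL criterion: each cell's timestep is limited by the fastest signal
        # speed (sound speed plus |bulk velocity|) crossing the cell along each
        # axis; the elementwise minimum over x, y and z keeps the most
        # restrictive of the three one-dimensional constraints.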
return tr registry.add_field( (ftype, "courant_time_step"), sampling_type="cell", function=_courant_time_step, units=unit_system["time"], ) def _pressure(field, data): """M{(Gamma-1.0)*rho*E}""" tr = (data.ds.gamma - 1.0) * ( data[ftype, "density"] * data[ftype, "specific_thermal_energy"] ) return tr registry.add_field( (ftype, "pressure"), sampling_type="local", function=_pressure, units=unit_system["pressure"], ) def _kT(field, data): return (pc.kboltz * data[ftype, "temperature"]).in_units("keV") registry.add_field( (ftype, "kT"), sampling_type="local", function=_kT, units="keV", display_name="Temperature", ) def _metallicity(field, data): return data[ftype, "metal_density"] / data[ftype, "density"] registry.add_field( (ftype, "metallicity"), sampling_type="local", function=_metallicity, units="Zsun", ) def _metal_mass(field, data): Z = data[ftype, "metallicity"].to("dimensionless") return Z * data[ftype, "mass"] registry.add_field( (ftype, "metal_mass"), sampling_type="local", function=_metal_mass, units=unit_system["mass"], ) if len(registry.ds.field_info.species_names) > 0: def _number_density(field, data): field_data = np.zeros_like( data["gas", f"{data.ds.field_info.species_names[0]}_number_density"] ) for species in data.ds.field_info.species_names: field_data += data["gas", f"{species}_number_density"] return field_data else: def _number_density(field, data): mu = getattr(data.ds, "mu", compute_mu(data.ds.default_species_fields)) return data[ftype, "density"] / (pc.mh * mu) registry.add_field( (ftype, "number_density"), sampling_type="local", function=_number_density, units=unit_system["number_density"], ) def _mean_molecular_weight(field, data): return data[ftype, "density"] / (pc.mh * data[ftype, "number_density"]) registry.add_field( (ftype, "mean_molecular_weight"), sampling_type="local", function=_mean_molecular_weight, units="", ) setup_gradient_fields( registry, (ftype, "pressure"), unit_system["pressure"], slice_info ) setup_gradient_fields( registry, (ftype, "density"), unit_system["density"], slice_info ) create_averaged_field( registry, "density", unit_system["density"], ftype=ftype, slice_info=slice_info, weight="mass", ) def setup_gradient_fields(registry, grad_field, field_units, slice_info=None): assert isinstance(grad_field, tuple) ftype, fname = grad_field if slice_info is None: sl_left = slice(None, -2, None) sl_right = slice(2, None, None) div_fac = 2.0 else: sl_left, sl_right, div_fac = slice_info slice_3d = (slice(1, -1), slice(1, -1), slice(1, -1)) def grad_func(axi, ax): slice_3dl = slice_3d[:axi] + (sl_left,) + slice_3d[axi + 1 :] slice_3dr = slice_3d[:axi] + (sl_right,) + slice_3d[axi + 1 :] def func(field, data): block_order = getattr(data, "_block_order", "C") if block_order == "F": # Fortran-ordering: we need to swap axes here and # reswap below field_data = data[grad_field].swapaxes(0, 2) else: field_data = data[grad_field] dx = div_fac * data[ftype, f"d{ax}"] if ax == "theta": dx *= data[ftype, "r"] if ax == "phi": dx *= data[ftype, "r"] * np.sin(data[ftype, "theta"]) f = field_data[slice_3dr] / dx[slice_3d] f -= field_data[slice_3dl] / dx[slice_3d] new_field = np.zeros_like(data[grad_field], dtype=np.float64) new_field = data.ds.arr(new_field, field_data.units / dx.units) new_field[slice_3d] = f if block_order == "F": new_field = new_field.swapaxes(0, 2) return new_field return func field_units = Unit(field_units, registry=registry.ds.unit_registry) grad_units = field_units / registry.ds.unit_system["length"] for axi, ax in 
enumerate(registry.ds.coordinates.axis_order): f = grad_func(axi, ax) registry.add_field( (ftype, f"{fname}_gradient_{ax}"), sampling_type="local", function=f, validators=[ValidateSpatial(1, [grad_field])], units=grad_units, ) create_magnitude_field( registry, f"{fname}_gradient", grad_units, ftype=ftype, validators=[ValidateSpatial(1, [grad_field])], ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/fluid_vector_fields.py0000644000175100001770000005051114714401662020056 0ustar00runnerdockerimport numpy as np from yt._typing import FieldType from yt.fields.derived_field import ValidateParameter, ValidateSpatial from yt.fields.field_info_container import FieldInfoContainer from yt.funcs import just_one from yt.geometry.api import Geometry from yt.utilities.exceptions import YTDimensionalityError, YTFieldNotFound from .field_plugin_registry import register_field_plugin from .vector_operations import create_magnitude_field, create_squared_field @register_field_plugin def setup_fluid_vector_fields( registry: FieldInfoContainer, ftype: FieldType = "gas", slice_info=None ) -> None: # Current implementation for gradient is not valid for curvilinear geometries geometry: Geometry = registry.ds.geometry if geometry is not Geometry.CARTESIAN: return unit_system = registry.ds.unit_system # slice_info would be the left, the right, and the factor. # For example, with the old Enzo-ZEUS fields, this would be: # slice(None, -2, None) # slice(1, -1, None) # 1.0 # Otherwise, we default to a centered difference. if slice_info is None: sl_left = slice(None, -2, None) sl_right = slice(2, None, None) div_fac = 2.0 else: sl_left, sl_right, div_fac = slice_info sl_center = slice(1, -1, None) def _baroclinic_vorticity_x(field, data): rho2 = data[ftype, "density"].astype("float64", copy=False) ** 2 return ( data[ftype, "pressure_gradient_y"] * data[ftype, "density_gradient_z"] - data[ftype, "pressure_gradient_z"] * data[ftype, "density_gradient_z"] ) / rho2 def _baroclinic_vorticity_y(field, data): rho2 = data[ftype, "density"].astype("float64", copy=False) ** 2 return ( data[ftype, "pressure_gradient_z"] * data[ftype, "density_gradient_x"] - data[ftype, "pressure_gradient_x"] * data[ftype, "density_gradient_z"] ) / rho2 def _baroclinic_vorticity_z(field, data): rho2 = data[ftype, "density"].astype("float64", copy=False) ** 2 return ( data[ftype, "pressure_gradient_x"] * data[ftype, "density_gradient_y"] - data[ftype, "pressure_gradient_y"] * data[ftype, "density_gradient_x"] ) / rho2 bv_validators = [ValidateSpatial(1, [(ftype, "density"), (ftype, "pressure")])] for ax in "xyz": n = f"baroclinic_vorticity_{ax}" registry.add_field( (ftype, n), sampling_type="cell", function=eval(f"_{n}"), validators=bv_validators, units=unit_system["frequency"] ** 2, ) create_magnitude_field( registry, "baroclinic_vorticity", unit_system["frequency"] ** 2, ftype=ftype, slice_info=slice_info, validators=bv_validators, ) def _vorticity_x(field, data): vz = data[ftype, "relative_velocity_z"] vy = data[ftype, "relative_velocity_y"] f = (vz[sl_center, sl_right, sl_center] - vz[sl_center, sl_left, sl_center]) / ( div_fac * just_one(data["index", "dy"]) ) f -= ( vy[sl_center, sl_center, sl_right] - vy[sl_center, sl_center, sl_left] ) / (div_fac * just_one(data["index", "dz"])) new_field = data.ds.arr(np.zeros_like(vz, dtype=np.float64), f.units) new_field[sl_center, sl_center, sl_center] = f return new_field def _vorticity_y(field, data): vx = data[ftype, 
"relative_velocity_x"] vz = data[ftype, "relative_velocity_z"] f = (vx[sl_center, sl_center, sl_right] - vx[sl_center, sl_center, sl_left]) / ( div_fac * just_one(data["index", "dz"]) ) f -= ( vz[sl_right, sl_center, sl_center] - vz[sl_left, sl_center, sl_center] ) / (div_fac * just_one(data["index", "dx"])) new_field = data.ds.arr(np.zeros_like(vz, dtype=np.float64), f.units) new_field[sl_center, sl_center, sl_center] = f return new_field def _vorticity_z(field, data): vx = data[ftype, "relative_velocity_x"] vy = data[ftype, "relative_velocity_y"] f = (vy[sl_right, sl_center, sl_center] - vy[sl_left, sl_center, sl_center]) / ( div_fac * just_one(data["index", "dx"]) ) f -= ( vx[sl_center, sl_right, sl_center] - vx[sl_center, sl_left, sl_center] ) / (div_fac * just_one(data["index", "dy"])) new_field = data.ds.arr(np.zeros_like(vy, dtype=np.float64), f.units) new_field[sl_center, sl_center, sl_center] = f return new_field vort_validators = [ ValidateSpatial(1, [(ftype, f"velocity_{d}") for d in "xyz"]), ValidateParameter("bulk_velocity"), ] for ax in "xyz": n = f"vorticity_{ax}" registry.add_field( (ftype, n), sampling_type="cell", function=eval(f"_{n}"), units=unit_system["frequency"], validators=vort_validators, ) create_magnitude_field( registry, "vorticity", unit_system["frequency"], ftype=ftype, slice_info=slice_info, validators=vort_validators, ) create_squared_field( registry, "vorticity", unit_system["frequency"] ** 2, ftype=ftype, slice_info=slice_info, validators=vort_validators, ) def _vorticity_stretching_x(field, data): return data[ftype, "velocity_divergence"] * data[ftype, "vorticity_x"] def _vorticity_stretching_y(field, data): return data[ftype, "velocity_divergence"] * data[ftype, "vorticity_y"] def _vorticity_stretching_z(field, data): return data[ftype, "velocity_divergence"] * data[ftype, "vorticity_z"] for ax in "xyz": n = f"vorticity_stretching_{ax}" registry.add_field( (ftype, n), sampling_type="cell", function=eval(f"_{n}"), units=unit_system["frequency"] ** 2, validators=vort_validators, ) create_magnitude_field( registry, "vorticity_stretching", unit_system["frequency"] ** 2, ftype=ftype, slice_info=slice_info, validators=vort_validators, ) def _vorticity_growth_x(field, data): return ( -data[ftype, "vorticity_stretching_x"] - data[ftype, "baroclinic_vorticity_x"] ) def _vorticity_growth_y(field, data): return ( -data[ftype, "vorticity_stretching_y"] - data[ftype, "baroclinic_vorticity_y"] ) def _vorticity_growth_z(field, data): return ( -data[ftype, "vorticity_stretching_z"] - data[ftype, "baroclinic_vorticity_z"] ) for ax in "xyz": n = f"vorticity_growth_{ax}" registry.add_field( (ftype, n), sampling_type="cell", function=eval(f"_{n}"), units=unit_system["frequency"] ** 2, validators=vort_validators, ) def _vorticity_growth_magnitude(field, data): result = np.sqrt( data[ftype, "vorticity_growth_x"] ** 2 + data[ftype, "vorticity_growth_y"] ** 2 + data[ftype, "vorticity_growth_z"] ** 2 ) dot = data.ds.arr(np.zeros(result.shape), "") for ax in "xyz": dot += ( data[ftype, f"vorticity_{ax}"] * data[ftype, f"vorticity_growth_{ax}"] ).to_ndarray() result = np.sign(dot) * result return result registry.add_field( (ftype, "vorticity_growth_magnitude"), sampling_type="cell", function=_vorticity_growth_magnitude, units=unit_system["frequency"] ** 2, validators=vort_validators, take_log=False, ) def _vorticity_growth_magnitude_absolute(field, data): return np.sqrt( data[ftype, "vorticity_growth_x"] ** 2 + data[ftype, "vorticity_growth_y"] ** 2 + data[ftype, 
"vorticity_growth_z"] ** 2 ) registry.add_field( (ftype, "vorticity_growth_magnitude_absolute"), sampling_type="cell", function=_vorticity_growth_magnitude_absolute, units=unit_system["frequency"] ** 2, validators=vort_validators, ) def _vorticity_growth_timescale(field, data): domegax_dt = data[ftype, "vorticity_x"] / data[ftype, "vorticity_growth_x"] domegay_dt = data[ftype, "vorticity_y"] / data[ftype, "vorticity_growth_y"] domegaz_dt = data[ftype, "vorticity_z"] / data[ftype, "vorticity_growth_z"] return np.sqrt(domegax_dt**2 + domegay_dt**2 + domegaz_dt**2) registry.add_field( (ftype, "vorticity_growth_timescale"), sampling_type="cell", function=_vorticity_growth_timescale, units=unit_system["time"], validators=vort_validators, ) ######################################################################## # With radiation pressure ######################################################################## def _vorticity_radiation_pressure_x(field, data): rho = data[ftype, "density"].astype("float64", copy=False) return ( data[ftype, "radiation_acceleration_y"] * data[ftype, "density_gradient_z"] - data[ftype, "radiation_acceleration_z"] * data[ftype, "density_gradient_y"] ) / rho def _vorticity_radiation_pressure_y(field, data): rho = data[ftype, "density"].astype("float64", copy=False) return ( data[ftype, "radiation_acceleration_z"] * data[ftype, "density_gradient_x"] - data[ftype, "radiation_acceleration_x"] * data[ftype, "density_gradient_z"] ) / rho def _vorticity_radiation_pressure_z(field, data): rho = data[ftype, "density"].astype("float64", copy=False) return ( data[ftype, "radiation_acceleration_x"] * data[ftype, "density_gradient_y"] - data[ftype, "radiation_acceleration_y"] * data[ftype, "density_gradient_x"] ) / rho vrp_validators = [ ValidateSpatial( 1, [ (ftype, "density"), (ftype, "radiation_acceleration_x"), (ftype, "radiation_acceleration_y"), (ftype, "radiation_acceleration_z"), ], ) ] for ax in "xyz": n = f"vorticity_radiation_pressure_{ax}" registry.add_field( (ftype, n), sampling_type="cell", function=eval(f"_{n}"), units=unit_system["frequency"] ** 2, validators=vrp_validators, ) create_magnitude_field( registry, "vorticity_radiation_pressure", unit_system["frequency"] ** 2, ftype=ftype, slice_info=slice_info, validators=vrp_validators, ) def _vorticity_radiation_pressure_growth_x(field, data): return ( -data[ftype, "vorticity_stretching_x"] - data[ftype, "baroclinic_vorticity_x"] - data[ftype, "vorticity_radiation_pressure_x"] ) def _vorticity_radiation_pressure_growth_y(field, data): return ( -data[ftype, "vorticity_stretching_y"] - data[ftype, "baroclinic_vorticity_y"] - data[ftype, "vorticity_radiation_pressure_y"] ) def _vorticity_radiation_pressure_growth_z(field, data): return ( -data[ftype, "vorticity_stretching_z"] - data[ftype, "baroclinic_vorticity_z"] - data[ftype, "vorticity_radiation_pressure_z"] ) for ax in "xyz": n = f"vorticity_radiation_pressure_growth_{ax}" registry.add_field( (ftype, n), sampling_type="cell", function=eval(f"_{n}"), units=unit_system["frequency"] ** 2, validators=vrp_validators, ) def _vorticity_radiation_pressure_growth_magnitude(field, data): result = np.sqrt( data[ftype, "vorticity_radiation_pressure_growth_x"] ** 2 + data[ftype, "vorticity_radiation_pressure_growth_y"] ** 2 + data[ftype, "vorticity_radiation_pressure_growth_z"] ** 2 ) dot = data.ds.arr(np.zeros(result.shape), "") for ax in "xyz": dot += ( data[ftype, f"vorticity_{ax}"] * data[ftype, f"vorticity_growth_{ax}"] ).to_ndarray() result = np.sign(dot) * result return 
result registry.add_field( (ftype, "vorticity_radiation_pressure_growth_magnitude"), sampling_type="cell", function=_vorticity_radiation_pressure_growth_magnitude, units=unit_system["frequency"] ** 2, validators=vrp_validators, take_log=False, ) def _vorticity_radiation_pressure_growth_magnitude_absolute(field, data): return np.sqrt( data[ftype, "vorticity_radiation_pressure_growth_x"] ** 2 + data[ftype, "vorticity_radiation_pressure_growth_y"] ** 2 + data[ftype, "vorticity_radiation_pressure_growth_z"] ** 2 ) registry.add_field( (ftype, "vorticity_radiation_pressure_growth_magnitude_absolute"), sampling_type="cell", function=_vorticity_radiation_pressure_growth_magnitude_absolute, units="s**(-2)", validators=vrp_validators, ) def _vorticity_radiation_pressure_growth_timescale(field, data): domegax_dt = ( data[ftype, "vorticity_x"] / data[ftype, "vorticity_radiation_pressure_growth_x"] ) domegay_dt = ( data[ftype, "vorticity_y"] / data[ftype, "vorticity_radiation_pressure_growth_y"] ) domegaz_dt = ( data[ftype, "vorticity_z"] / data[ftype, "vorticity_radiation_pressure_growth_z"] ) return np.sqrt(domegax_dt**2 + domegay_dt**2 + domegaz_dt**2) registry.add_field( (ftype, "vorticity_radiation_pressure_growth_timescale"), sampling_type="cell", function=_vorticity_radiation_pressure_growth_timescale, units=unit_system["time"], validators=vrp_validators, ) def _shear(field, data): """ Shear is defined as [(dvx/dy + dvy/dx)^2 + (dvz/dy + dvy/dz)^2 + (dvx/dz + dvz/dx)^2 ]^(0.5) where dvx/dy = [vx(j-1) - vx(j+1)]/[2dy] and is in units of s^(-1) (it's just like vorticity except add the derivative pairs instead of subtracting them) """ if data.ds.geometry != "cartesian": raise NotImplementedError("shear is only supported in cartesian geometries") try: vx = data[ftype, "relative_velocity_x"] vy = data[ftype, "relative_velocity_y"] except YTFieldNotFound as e: raise YTDimensionalityError( "shear computation requires 2 velocity components" ) from e dvydx = ( vy[sl_right, sl_center, sl_center] - vy[sl_left, sl_center, sl_center] ) / (div_fac * just_one(data["index", "dx"])) dvxdy = ( vx[sl_center, sl_right, sl_center] - vx[sl_center, sl_left, sl_center] ) / (div_fac * just_one(data["index", "dy"])) f = (dvydx + dvxdy) ** 2.0 del dvydx, dvxdy try: vz = data[ftype, "relative_velocity_z"] dvzdy = ( vz[sl_center, sl_right, sl_center] - vz[sl_center, sl_left, sl_center] ) / (div_fac * just_one(data["index", "dy"])) dvydz = ( vy[sl_center, sl_center, sl_right] - vy[sl_center, sl_center, sl_left] ) / (div_fac * just_one(data["index", "dz"])) f += (dvzdy + dvydz) ** 2.0 del dvzdy, dvydz dvxdz = ( vx[sl_center, sl_center, sl_right] - vx[sl_center, sl_center, sl_left] ) / (div_fac * just_one(data["index", "dz"])) dvzdx = ( vz[sl_right, sl_center, sl_center] - vz[sl_left, sl_center, sl_center] ) / (div_fac * just_one(data["index", "dx"])) f += (dvxdz + dvzdx) ** 2.0 del dvxdz, dvzdx except YTFieldNotFound: # the absence of a z velocity component is not blocking pass np.sqrt(f, out=f) new_field = data.ds.arr(np.zeros_like(data[ftype, "velocity_x"]), f.units) new_field[sl_center, sl_center, sl_center] = f return new_field registry.add_field( (ftype, "shear"), sampling_type="cell", function=_shear, validators=[ ValidateSpatial( 1, [(ftype, "velocity_x"), (ftype, "velocity_y"), (ftype, "velocity_z")] ), ValidateParameter("bulk_velocity"), ], units=unit_system["frequency"], ) def _shear_criterion(field, data): """ Divide by c_s to leave shear in units of length**-1, which can be compared against the inverse of the 
local cell size (1/dx) to determine if refinement should occur. """ return data[ftype, "shear"] / data[ftype, "sound_speed"] registry.add_field( (ftype, "shear_criterion"), sampling_type="cell", function=_shear_criterion, units=unit_system["length"] ** -1, validators=[ ValidateSpatial( 1, [ (ftype, "sound_speed"), (ftype, "velocity_x"), (ftype, "velocity_y"), (ftype, "velocity_z"), ], ) ], ) def _shear_mach(field, data): """ Dimensionless shear (shear_mach) is defined nearly the same as shear, except that it is scaled by the local dx/dy/dz and the local sound speed. So it results in a unitless quantity that is effectively measuring shear in mach number. In order to avoid discontinuities created by multiplying by dx/dy/dz at grid refinement boundaries, we also multiply by 2**GridLevel. Shear (Mach) = [(dvx + dvy)^2 + (dvz + dvy)^2 + (dvx + dvz)^2 ]^(0.5) / c_sound """ if data.ds.geometry != "cartesian": raise NotImplementedError( "shear_mach is only supported in cartesian geometries" ) try: vx = data[ftype, "relative_velocity_x"] vy = data[ftype, "relative_velocity_y"] except YTFieldNotFound as e: raise YTDimensionalityError( "shear_mach computation requires 2 velocity components" ) from e dvydx = ( vy[sl_right, sl_center, sl_center] - vy[sl_left, sl_center, sl_center] ) / div_fac dvxdy = ( vx[sl_center, sl_right, sl_center] - vx[sl_center, sl_left, sl_center] ) / div_fac f = (dvydx + dvxdy) ** 2.0 del dvydx, dvxdy try: vz = data[ftype, "relative_velocity_z"] dvzdy = ( vz[sl_center, sl_right, sl_center] - vz[sl_center, sl_left, sl_center] ) / div_fac dvydz = ( vy[sl_center, sl_center, sl_right] - vy[sl_center, sl_center, sl_left] ) / div_fac f += (dvzdy + dvydz) ** 2.0 del dvzdy, dvydz dvxdz = ( vx[sl_center, sl_center, sl_right] - vx[sl_center, sl_center, sl_left] ) / div_fac dvzdx = ( vz[sl_right, sl_center, sl_center] - vz[sl_left, sl_center, sl_center] ) / div_fac f += (dvxdz + dvzdx) ** 2.0 del dvxdz, dvzdx except YTFieldNotFound: # the absence of a z velocity component is not blocking pass f *= ( 2.0 ** data["index", "grid_level"][sl_center, sl_center, sl_center] / data[ftype, "sound_speed"][sl_center, sl_center, sl_center] ) ** 2.0 np.sqrt(f, out=f) new_field = data.ds.arr(np.zeros_like(vx), f.units) new_field[sl_center, sl_center, sl_center] = f return new_field vs_fields = [ (ftype, "sound_speed"), (ftype, "velocity_x"), (ftype, "velocity_y"), (ftype, "velocity_z"), ] registry.add_field( (ftype, "shear_mach"), sampling_type="cell", function=_shear_mach, units="", validators=[ValidateSpatial(1, vs_fields), ValidateParameter("bulk_velocity")], ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/geometric_fields.py0000644000175100001770000002036214714401662017350 0ustar00runnerdockerimport numpy as np from yt.utilities.lib.geometry_utils import compute_morton from yt.utilities.math_utils import ( get_cyl_r, get_cyl_theta, get_cyl_z, get_sph_phi, get_sph_r, get_sph_theta, ) from .derived_field import ValidateParameter, ValidateSpatial from .field_functions import get_periodic_rvec, get_radius from .field_plugin_registry import register_field_plugin @register_field_plugin def setup_geometric_fields(registry, ftype="gas", slice_info=None): unit_system = registry.ds.unit_system def _radius(field, data): """The spherical radius component of the mesh cells. Relative to the coordinate system defined by the *center* field parameter. 
""" return get_radius(data, "", field.name[0]) registry.add_field( ("index", "radius"), sampling_type="cell", function=_radius, validators=[ValidateParameter("center")], units=unit_system["length"], ) def _grid_level(field, data): """The AMR refinement level""" arr = np.ones(data.ires.shape, dtype="float64") arr *= data.ires if data._spatial: return data._reshape_vals(arr) return arr registry.add_field( ("index", "grid_level"), sampling_type="cell", function=_grid_level, units="", validators=[ValidateSpatial(0)], ) def _grid_indices(field, data): """The index of the leaf grid the mesh cells exist on""" if hasattr(data, "domain_id"): this_id = data.domain_id else: this_id = data.id - data._id_offset arr = np.ones(data["index", "ones"].shape) arr *= this_id if data._spatial: return data._reshape_vals(arr) return arr registry.add_field( ("index", "grid_indices"), sampling_type="cell", function=_grid_indices, units="", validators=[ValidateSpatial(0)], take_log=False, ) def _ones_over_dx(field, data): """The inverse of the local cell spacing""" return ( np.ones(data["index", "ones"].shape, dtype="float64") / data["index", "dx"] ) registry.add_field( ("index", "ones_over_dx"), sampling_type="cell", function=_ones_over_dx, units=unit_system["length"] ** -1, display_field=False, ) def _zeros(field, data): """Returns zero for all cells""" arr = np.zeros(data["index", "ones"].shape, dtype="float64") return data.apply_units(arr, field.units) registry.add_field( ("index", "zeros"), sampling_type="cell", function=_zeros, units="", display_field=False, ) def _ones(field, data): """Returns one for all cells""" tmp = np.ones(data.ires.shape, dtype="float64") arr = data.apply_units(tmp, field.units) if data._spatial: return data._reshape_vals(arr) return arr registry.add_field( ("index", "ones"), sampling_type="cell", function=_ones, units="", display_field=False, ) def _morton_index(field, data): """This is the morton index, which is properly a uint64 field. Because we make some assumptions that the fields returned by derived fields are float64, this returns a "view" on the data that is float64. To get back the original uint64, you need to call .view("uint64") on it; however, it should be true that if you sort the uint64, you will get the same order as if you sort the float64 view. """ eps = np.finfo("f8").eps uq = data.ds.domain_left_edge.uq LE = data.ds.domain_left_edge - eps * uq RE = data.ds.domain_right_edge + eps * uq # .ravel() only copies if it needs to morton = compute_morton( data["index", "x"].ravel(), data["index", "y"].ravel(), data["index", "z"].ravel(), LE, RE, ) morton.shape = data["index", "x"].shape return morton.view("f8") registry.add_field( ("index", "morton_index"), sampling_type="cell", function=_morton_index, units="", ) def _spherical_radius(field, data): """The spherical radius component of the positions of the mesh cells. Relative to the coordinate system defined by the *center* field parameter. """ coords = get_periodic_rvec(data) return data.ds.arr(get_sph_r(coords), "code_length").in_base(unit_system.name) registry.add_field( ("index", "spherical_radius"), sampling_type="cell", function=_spherical_radius, validators=[ValidateParameter("center")], units=unit_system["length"], ) def _spherical_theta(field, data): """The spherical theta component of the positions of the mesh cells. theta is the poloidal position angle in the plane parallel to the *normal* vector Relative to the coordinate system defined by the *center* and *normal* field parameters. 
""" normal = data.get_field_parameter("normal") coords = get_periodic_rvec(data) return get_sph_theta(coords, normal) registry.add_field( ("index", "spherical_theta"), sampling_type="cell", function=_spherical_theta, validators=[ValidateParameter("center"), ValidateParameter("normal")], units="", ) def _spherical_phi(field, data): """The spherical phi component of the positions of the mesh cells. phi is the azimuthal position angle in the plane perpendicular to the *normal* vector Relative to the coordinate system defined by the *center* and *normal* field parameters. """ normal = data.get_field_parameter("normal") coords = get_periodic_rvec(data) return get_sph_phi(coords, normal) registry.add_field( ("index", "spherical_phi"), sampling_type="cell", function=_spherical_phi, validators=[ValidateParameter("center"), ValidateParameter("normal")], units="", ) def _cylindrical_radius(field, data): """The cylindrical radius component of the positions of the mesh cells. Relative to the coordinate system defined by the *normal* vector and *center* field parameters. """ normal = data.get_field_parameter("normal") coords = get_periodic_rvec(data) return data.ds.arr(get_cyl_r(coords, normal), "code_length").in_base( unit_system.name ) registry.add_field( ("index", "cylindrical_radius"), sampling_type="cell", function=_cylindrical_radius, validators=[ValidateParameter("center"), ValidateParameter("normal")], units=unit_system["length"], ) def _cylindrical_z(field, data): """The cylindrical z component of the positions of the mesh cells. Relative to the coordinate system defined by the *normal* vector and *center* field parameters. """ normal = data.get_field_parameter("normal") coords = get_periodic_rvec(data) return data.ds.arr(get_cyl_z(coords, normal), "code_length").in_base( unit_system.name ) registry.add_field( ("index", "cylindrical_z"), sampling_type="cell", function=_cylindrical_z, validators=[ValidateParameter("center"), ValidateParameter("normal")], units=unit_system["length"], ) def _cylindrical_theta(field, data): """The cylindrical z component of the positions of the mesh cells. theta is the azimuthal position angle in the plane perpendicular to the *normal* vector. Relative to the coordinate system defined by the *normal* vector and *center* field parameters. """ normal = data.get_field_parameter("normal") coords = get_periodic_rvec(data) return get_cyl_theta(coords, normal) registry.add_field( ("index", "cylindrical_theta"), sampling_type="cell", function=_cylindrical_theta, validators=[ValidateParameter("center"), ValidateParameter("normal")], units="", ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/interpolated_fields.py0000644000175100001770000000252714714401662020067 0ustar00runnerdockerfrom yt.fields.local_fields import add_field from yt.utilities.linear_interpolators import ( BilinearFieldInterpolator, TrilinearFieldInterpolator, UnilinearFieldInterpolator, ) _int_class = { 1: UnilinearFieldInterpolator, 2: BilinearFieldInterpolator, 3: TrilinearFieldInterpolator, } def add_interpolated_field( name, units, table_data, axes_data, axes_fields, ftype="gas", particle_type=False, validators=None, truncate=True, ): if len(table_data.shape) not in _int_class: raise RuntimeError( "Interpolated field can only be created from 1d, 2d, or 3d data." 
) if len(axes_fields) != len(axes_data) or len(axes_fields) != len(table_data.shape): raise RuntimeError( "Data dimension mismatch: data is %d, " "%d axes data provided, and %d axes fields provided." % (len(table_data.shape), len(axes_data), len(axes_fields)) ) int_class = _int_class[len(table_data.shape)] my_interpolator = int_class(table_data, axes_data, axes_fields, truncate=truncate) def _interpolated_field(field, data): return my_interpolator(data) add_field( (ftype, name), function=_interpolated_field, units=units, validators=validators, particle_type=particle_type, ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/local_fields.py0000644000175100001770000000514214714401662016463 0ustar00runnerdockerfrom collections.abc import Callable from functools import partial from typing import Any, TypeVar from yt.funcs import is_sequence from yt.utilities.logger import ytLogger as mylog from .field_info_container import FieldInfoContainer from .field_plugin_registry import register_field_plugin # workaround mypy not being comfortable around decorator preserving signatures # adapted from # https://github.com/python/mypy/issues/1551#issuecomment-253978622 TFun = TypeVar("TFun", bound=Callable[..., Any]) class LocalFieldInfoContainer(FieldInfoContainer): def add_field( self, name, function, sampling_type, *, force_override=False, **kwargs ): from yt.fields.field_functions import validate_field_function validate_field_function(function) if isinstance(name, str) or not is_sequence(name): # the base method only accepts proper tuple field keys # and is only used internally, while this method is exposed to users # and is documented as usable with single strings as name if sampling_type == "particle": ftype = "all" else: ftype = "gas" name = (ftype, name) # Handle the case where the field has already been added. if not force_override and name in self: mylog.warning( "Field %s already exists. To override use `force_override=True`.", name, ) return super().add_field( name, function, sampling_type, force_override=force_override, **kwargs ) # Empty FieldInfoContainer local_fields = LocalFieldInfoContainer(None, [], None) # we define two handles, essentially pointing to the same function but documented differently # yt.add_field() is meant to be used directly, while yt.derived_field is documented # as a decorator. add_field = local_fields.add_field class derived_field: # implement a decorator accepting keyword arguments to be passed down to add_field def __init__(self, **kwargs) -> None: self._kwargs = kwargs def __call__(self, f: Callable) -> Callable: partial(local_fields.add_field, function=f)(**self._kwargs) return f @register_field_plugin def setup_local_fields(registry, ftype="gas", slice_info=None): # This is easy. We just update with the contents of the local_fields field # info container, and since they are not mutable in any real way, we are # fine. # Note that we actually don't care about the ftype here. 
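    # Hedged usage sketch (user-facing; the field name "double_density" is
    # invented for illustration). Fields registered this way land in the
    # local_fields container and are installed by the loop below:
    #
    #     import yt
    #
    #     @yt.derived_field(
    #         name=("gas", "double_density"), units="g/cm**3", sampling_type="local"
    #     )
    #     def _double_density(field, data):
    #         return 2 * data["gas", "density"]
    #
    # which is equivalent to calling yt.add_field(...) directly with
    # function=_double_density.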
for f in local_fields: registry._show_field_errors.append(f) registry.update(local_fields) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/magnetic_field.py0000644000175100001770000003033114714401662016773 0ustar00runnerdockerimport sys import numpy as np from yt._typing import FieldType from yt.fields.derived_field import ValidateParameter from yt.fields.field_info_container import FieldInfoContainer from yt.geometry.api import Geometry from yt.units import dimensions from .field_plugin_registry import register_field_plugin if sys.version_info >= (3, 11): from typing import assert_never else: from typing_extensions import assert_never cgs_normalizations = {"gaussian": 4.0 * np.pi, "lorentz_heaviside": 1.0} def get_magnetic_normalization(key: str) -> float: if key not in cgs_normalizations: raise ValueError( "Unknown magnetic normalization convention. " f"Got {key!r}, expected one of {tuple(cgs_normalizations)}" ) return cgs_normalizations[key] @register_field_plugin def setup_magnetic_field_fields( registry: FieldInfoContainer, ftype: FieldType = "gas", slice_info=None ) -> None: ds = registry.ds unit_system = ds.unit_system pc = registry.ds.units.physical_constants axis_names = registry.ds.coordinates.axis_order if (ftype, f"magnetic_field_{axis_names[0]}") not in registry: return u = registry[ftype, f"magnetic_field_{axis_names[0]}"].units def mag_factors(dims): if dims == dimensions.magnetic_field_cgs: return getattr(ds, "_magnetic_factor", 4.0 * np.pi) elif dims == dimensions.magnetic_field_mks: return ds.units.physical_constants.mu_0 def _magnetic_field_strength(field, data): xm = f"relative_magnetic_field_{axis_names[0]}" ym = f"relative_magnetic_field_{axis_names[1]}" zm = f"relative_magnetic_field_{axis_names[2]}" B2 = (data[ftype, xm]) ** 2 + (data[ftype, ym]) ** 2 + (data[ftype, zm]) ** 2 return np.sqrt(B2) registry.add_field( (ftype, "magnetic_field_strength"), sampling_type="local", function=_magnetic_field_strength, validators=[ValidateParameter("bulk_magnetic_field")], units=u, ) def _magnetic_energy_density(field, data): B = data[ftype, "magnetic_field_strength"] return 0.5 * B * B / mag_factors(B.units.dimensions) registry.add_field( (ftype, "magnetic_energy_density"), sampling_type="local", function=_magnetic_energy_density, units=unit_system["pressure"], ) def _plasma_beta(field, data): return data[ftype, "pressure"] / data[ftype, "magnetic_energy_density"] registry.add_field( (ftype, "plasma_beta"), sampling_type="local", function=_plasma_beta, units="" ) def _magnetic_pressure(field, data): return data[ftype, "magnetic_energy_density"] registry.add_field( (ftype, "magnetic_pressure"), sampling_type="local", function=_magnetic_pressure, units=unit_system["pressure"], ) _magnetic_field_poloidal_magnitude = None _magnetic_field_toroidal_magnitude = None geometry: Geometry = registry.ds.geometry if geometry is Geometry.CARTESIAN: def _magnetic_field_poloidal_magnitude(field, data): B2 = ( data[ftype, "relative_magnetic_field_x"] * data[ftype, "relative_magnetic_field_x"] + data[ftype, "relative_magnetic_field_y"] * data[ftype, "relative_magnetic_field_y"] + data[ftype, "relative_magnetic_field_z"] * data[ftype, "relative_magnetic_field_z"] ) Bt2 = ( data[ftype, "magnetic_field_spherical_phi"] * data[ftype, "magnetic_field_spherical_phi"] ) return np.sqrt(B2 - Bt2) elif geometry is Geometry.CYLINDRICAL or geometry is Geometry.POLAR: def _magnetic_field_poloidal_magnitude(field, data): bm = 
data.get_field_parameter("bulk_magnetic_field") rax = axis_names.index("r") zax = axis_names.index("z") return np.sqrt( (data[ftype, "magnetic_field_r"] - bm[rax]) ** 2 + (data[ftype, "magnetic_field_z"] - bm[zax]) ** 2 ) def _magnetic_field_toroidal_magnitude(field, data): ax = axis_names.find("theta") bm = data.get_field_parameter("bulk_magnetic_field") return data[ftype, "magnetic_field_theta"] - bm[ax] elif geometry is Geometry.SPHERICAL: def _magnetic_field_poloidal_magnitude(field, data): bm = data.get_field_parameter("bulk_magnetic_field") rax = axis_names.index("r") tax = axis_names.index("theta") return np.sqrt( (data[ftype, "magnetic_field_r"] - bm[rax]) ** 2 + (data[ftype, "magnetic_field_theta"] - bm[tax]) ** 2 ) def _magnetic_field_toroidal_magnitude(field, data): ax = axis_names.find("phi") bm = data.get_field_parameter("bulk_magnetic_field") return data[ftype, "magnetic_field_phi"] - bm[ax] elif geometry is Geometry.GEOGRAPHIC or geometry is Geometry.INTERNAL_GEOGRAPHIC: # not implemented pass elif geometry is Geometry.SPECTRAL_CUBE: # nothing to be done pass else: assert_never(geometry) if _magnetic_field_poloidal_magnitude is not None: registry.add_field( (ftype, "magnetic_field_poloidal_magnitude"), sampling_type="local", function=_magnetic_field_poloidal_magnitude, units=u, validators=[ ValidateParameter("normal"), ValidateParameter("bulk_magnetic_field"), ], ) if _magnetic_field_toroidal_magnitude is not None: registry.add_field( (ftype, "magnetic_field_toroidal_magnitude"), sampling_type="local", function=_magnetic_field_toroidal_magnitude, units=u, validators=[ ValidateParameter("normal"), ValidateParameter("bulk_magnetic_field"), ], ) if geometry is Geometry.CARTESIAN: registry.alias( (ftype, "magnetic_field_toroidal_magnitude"), (ftype, "magnetic_field_spherical_phi"), units=u, ) registry.alias( (ftype, "magnetic_field_toroidal"), (ftype, "magnetic_field_spherical_phi"), units=u, deprecate=("4.1.0", None), ) registry.alias( (ftype, "magnetic_field_poloidal"), (ftype, "magnetic_field_spherical_theta"), units=u, deprecate=("4.1.0", None), ) elif ( geometry is Geometry.CYLINDRICAL or geometry is Geometry.POLAR or geometry is Geometry.SPHERICAL ): # These cases should be covered already, just check that they are assert (ftype, "magnetic_field_toroidal_magnitude") in registry assert (ftype, "magnetic_field_poloidal_magnitude") in registry elif geometry is Geometry.GEOGRAPHIC or geometry is Geometry.INTERNAL_GEOGRAPHIC: # not implemented pass elif geometry is Geometry.SPECTRAL_CUBE: # nothing to be done pass else: assert_never(Geometry) def _alfven_speed(field, data): B = data[ftype, "magnetic_field_strength"] return B / np.sqrt(mag_factors(B.units.dimensions) * data[ftype, "density"]) registry.add_field( (ftype, "alfven_speed"), sampling_type="local", function=_alfven_speed, units=unit_system["velocity"], ) def _mach_alfven(field, data): return data[ftype, "velocity_magnitude"] / data[ftype, "alfven_speed"] registry.add_field( (ftype, "mach_alfven"), sampling_type="local", function=_mach_alfven, units="dimensionless", ) b_units = registry.ds.quan(1.0, u).units if dimensions.current_mks in b_units.dimensions.free_symbols: rm_scale = pc.qp.to("C", "SI") ** 3 / (4.0 * np.pi * pc.eps_0) else: rm_scale = pc.qp**3 / pc.clight rm_scale *= registry.ds.quan(1.0, "rad") / (2.0 * np.pi * pc.me**2 * pc.clight**3) rm_units = registry.ds.quan(1.0, "rad/m**2").units / unit_system["length"] def _rotation_measure(field, data): return ( rm_scale * data[ftype, "magnetic_field_los"] * 
data[ftype, "El_number_density"] ) registry.add_field( (ftype, "rotation_measure"), sampling_type="local", function=_rotation_measure, units=rm_units, validators=[ValidateParameter("axis", {"axis": [0, 1, 2]})], ) def setup_magnetic_field_aliases(registry, ds_ftype, ds_fields, ftype="gas"): r""" This routine sets up special aliases between dataset-specific magnetic fields and the default magnetic fields in yt so that unit conversions between different unit systems can be handled properly. This is only called from the `setup_fluid_fields` method (for grid dataset) or the `setup_gas_particle_fields` method (for particle dataset) of a frontend's :class:`FieldInfoContainer` instance. Parameters ---------- registry : :class:`FieldInfoContainer` The field registry that these definitions will be installed into. ds_ftype : string The field type for the fields we're going to alias, e.g. "flash", "enzo", "athena", "PartType0", etc. ds_fields : list of strings or string The fields that will be aliased. For grid dataset, this should be a list of strings corresponding to the components of magnetic field. For particle dataset, this should be a single string corresponding to the vector magnetic field. ftype : string, optional The resulting field type of the fields. Default "gas". Examples -------- >>> from yt.fields.magnetic_field import setup_magnetic_field_aliases >>> class PlutoFieldInfo(ChomboFieldInfo): ... def setup_fluid_fields(self): ... setup_magnetic_field_aliases( ... self, "chombo", ["bx%s" % ax for ax in [1, 2, 3]] ... ) >>> class GizmoFieldInfo(GadgetFieldInfo): ... def setup_gas_particle_fields(self): ... setup_magnetic_field_aliases( ... self, "PartType0", "MagneticField", ftype="PartType0" ... ) """ unit_system = registry.ds.unit_system if isinstance(ds_fields, list): # If ds_fields is a list, we assume a grid dataset sampling_type = "local" ds_fields = [(ds_ftype, fd) for fd in ds_fields] ds_field = ds_fields[0] else: # Otherwise, we assume a particle dataset sampling_type = "particle" ds_field = (ds_ftype, ds_fields) if ds_field not in registry: return # Figure out the unit conversion to use if unit_system.base_units[dimensions.current_mks] is not None: to_units = unit_system["magnetic_field_mks"] else: to_units = unit_system["magnetic_field_cgs"] units = unit_system[to_units.dimensions] # Add fields if sampling_type in ["cell", "local"]: # Grid dataset case def mag_field_from_field(fd): def _mag_field(field, data): return data[fd].to(field.units) return _mag_field for ax, fd in zip(registry.ds.coordinates.axis_order, ds_fields, strict=False): registry.add_field( (ftype, f"magnetic_field_{ax}"), sampling_type=sampling_type, function=mag_field_from_field(fd), units=units, ) else: # Particle dataset case def mag_field_from_ax(ax): def _mag_field(field, data): return data[ds_field][:, "xyz".index(ax)] return _mag_field for ax in registry.ds.coordinates.axis_order: fname = f"particle_magnetic_field_{ax}" registry.add_field( (ds_ftype, fname), sampling_type=sampling_type, function=mag_field_from_ax(ax), units=units, ) sph_ptypes = getattr(registry.ds, "_sph_ptypes", ()) if ds_ftype in sph_ptypes: registry.alias((ftype, f"magnetic_field_{ax}"), (ds_ftype, fname)) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/my_plugin_fields.py0000644000175100001770000000073214714401662017374 0ustar00runnerdockerfrom .field_plugin_registry import register_field_plugin from .local_fields import LocalFieldInfoContainer # Empty 
FieldInfoContainer my_plugins_fields = LocalFieldInfoContainer(None, [], None) @register_field_plugin def setup_my_plugins_fields(registry, ftype="gas", slice_info=None): # fields end up inside this container when added via add_field in # my_plugins.py. See yt.funcs.enable_plugins to see how this is set up. registry.update(my_plugins_fields) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/particle_fields.py0000644000175100001770000006741714714401662017211 0ustar00runnerdockerimport numpy as np from yt.fields.derived_field import ValidateParameter, ValidateSpatial from yt.units._numpy_wrapper_functions import uconcatenate, ucross from yt.utilities.lib.misc_utilities import ( obtain_position_vector, obtain_relative_velocity_vector, ) from yt.utilities.math_utils import ( get_cyl_r, get_cyl_r_component, get_cyl_theta, get_cyl_theta_component, get_cyl_z, get_cyl_z_component, get_sph_phi, get_sph_phi_component, get_sph_r_component, get_sph_theta, get_sph_theta_component, modify_reference_frame, ) from .field_functions import get_radius from .vector_operations import create_los_field, create_magnitude_field sph_whitelist_fields = ( "density", "temperature", "metallicity", "thermal_energy", "smoothing_length", "star_formation_rate", "cold_fraction", "hot_temperature", "H_fraction", "He_fraction", "C_fraction", "Ca_fraction", "N_fraction", "O_fraction", "S_fraction", "Ne_fraction", "Mg_fraction", "Si_fraction", "Fe_fraction", "Na_fraction", "Al_fraction", "Ar_fraction", "Ni_fraction", "Ej_fraction", "H_density", "He_density", "C_density", "Ca_density", "N_density", "O_density", "S_density", "Ne_density", "Mg_density", "Si_density", "Fe_density", "Na_density", "Al_density", "Ar_density", "Ni_density", "Ej_density", ) def _field_concat(fname): def _AllFields(field, data): v = [] for ptype in data.ds.particle_types: if ptype == "all" or ptype in data.ds.known_filters: continue v.append(data[ptype, fname].copy()) rv = uconcatenate(v, axis=0) return rv return _AllFields def _field_concat_slice(fname, axi): def _AllFields(field, data): v = [] for ptype in data.ds.particle_types: if ptype == "all" or ptype in data.ds.known_filters: continue v.append(data[ptype, fname][:, axi]) rv = uconcatenate(v, axis=0) return rv return _AllFields def particle_deposition_functions(ptype, coord_name, mass_name, registry): unit_system = registry.ds.unit_system orig = set(registry.keys()) ptype_dn = ptype.replace("_", " ").title() def particle_count(field, data): pos = data[ptype, coord_name] d = data.deposit(pos, method="count") return data.apply_units(d, field.units) registry.add_field( ("deposit", f"{ptype}_count"), sampling_type="cell", function=particle_count, validators=[ValidateSpatial()], units="", display_name=rf"\mathrm{{{ptype_dn} Count}}", ) def particle_mass(field, data): pos = data[ptype, coord_name] pmass = data[ptype, mass_name] pmass.convert_to_units(field.units) d = data.deposit(pos, [pmass], method="sum") return data.apply_units(d, field.units) registry.add_field( ("deposit", f"{ptype}_mass"), sampling_type="cell", function=particle_mass, validators=[ValidateSpatial()], display_name=rf"\mathrm{{{ptype_dn} Mass}}", units=unit_system["mass"], ) def particle_density(field, data): pos = data[ptype, coord_name] pos.convert_to_units("code_length") mass = data[ptype, mass_name] mass.convert_to_units("code_mass") d = data.deposit(pos, [mass], method="sum") d = data.ds.arr(d, "code_mass") d /= data["index", "cell_volume"] return d 
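    # The deposit pattern above: particle masses are summed onto the mesh in
    # code units, then divided by ("index", "cell_volume"), so the registered
    # ("deposit", f"{ptype}_density") field carries proper mass/volume units.
    # Hedged usage sketch (the dataset path is hypothetical; "all" is the
    # catch-all particle type):
    #
    #     import yt
    #     ds = yt.load("galaxy0030/galaxy0030")
    #     ad = ds.all_data()
    #     rho = ad["deposit", "all_density"]  # deposited particle mass density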
registry.add_field( ("deposit", f"{ptype}_density"), sampling_type="cell", function=particle_density, validators=[ValidateSpatial()], display_name=rf"\mathrm{{{ptype_dn} Density}}", units=unit_system["density"], ) def particle_cic(field, data): pos = data[ptype, coord_name] d = data.deposit(pos, [data[ptype, mass_name]], method="cic") d = data.apply_units(d, data[ptype, mass_name].units) d /= data["index", "cell_volume"] return d registry.add_field( ("deposit", f"{ptype}_cic"), sampling_type="cell", function=particle_cic, validators=[ValidateSpatial()], display_name=rf"\mathrm{{{ptype_dn} CIC Density}}", units=unit_system["density"], ) def _get_density_weighted_deposit_field(fname, units, method): def _deposit_field(field, data): """ Create a grid field for particle quantities weighted by particle mass, using cloud-in-cell deposit. """ pos = data[ptype, "particle_position"] # Get back into density pden = data[ptype, "particle_mass"] top = data.deposit(pos, [pden * data[ptype, fname]], method=method) bottom = data.deposit(pos, [pden], method=method) top[bottom == 0] = 0.0 bnz = bottom.nonzero() top[bnz] /= bottom[bnz] d = data.ds.arr(top, units=units) return d return _deposit_field for ax in "xyz": for method, name in [("cic", "cic"), ("sum", "nn")]: function = _get_density_weighted_deposit_field( f"particle_velocity_{ax}", "code_velocity", method ) registry.add_field( ("deposit", ("%s_" + name + "_velocity_%s") % (ptype, ax)), sampling_type="cell", function=function, units=unit_system["velocity"], take_log=False, validators=[ValidateSpatial(0)], ) for method, name in [("cic", "cic"), ("sum", "nn")]: function = _get_density_weighted_deposit_field("age", "code_time", method) registry.add_field( ("deposit", ("%s_" + name + "_age") % (ptype)), sampling_type="cell", function=function, units=unit_system["time"], take_log=False, validators=[ValidateSpatial(0)], ) # Now some translation functions. def particle_ones(field, data): v = np.ones(data[ptype, coord_name].shape[0], dtype="float64") return data.apply_units(v, field.units) registry.add_field( (ptype, "particle_ones"), sampling_type="particle", function=particle_ones, units="", display_name=r"Particle Count", ) def particle_mesh_ids(field, data): pos = data[ptype, coord_name] ids = np.zeros(pos.shape[0], dtype="float64") - 1 # This is float64 in name only. It will be properly cast inside the # deposit operation. # _ids = ids.view("float64") data.deposit(pos, [ids], method="mesh_id") return data.apply_units(ids, "") registry.add_field( (ptype, "mesh_id"), sampling_type="particle", function=particle_mesh_ids, validators=[ValidateSpatial()], units="", ) return list(set(registry.keys()).difference(orig)) def particle_scalar_functions(ptype, coord_name, vel_name, registry): # Now we have to set up the various velocity and coordinate things. In the # future, we'll actually invert this and use the 3-component items # elsewhere, and stop using these. # Note that we pass in _ptype here so that it's defined inside the closure. 
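    # A minimal sketch (plain Python, not yt-specific) of the late-binding
    # pitfall the comment above refers to: closures capture variables, not
    # values, so the explicit argument freezes each axis index.
    #
    #     >>> fs = [lambda: i for i in range(3)]
    #     >>> [f() for f in fs]
    #     [2, 2, 2]
    #     >>> fs = [lambda i=i: i for i in range(3)]  # bind at definition time
    #     >>> [f() for f in fs]
    #     [0, 1, 2]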
def _get_coord_funcs(axi, _ptype): def _particle_velocity(field, data): return data[_ptype, vel_name][:, axi] def _particle_position(field, data): return data[_ptype, coord_name][:, axi] return _particle_velocity, _particle_position for axi, ax in enumerate("xyz"): v, p = _get_coord_funcs(axi, ptype) registry.add_field( (ptype, f"particle_velocity_{ax}"), sampling_type="particle", function=v, units="code_velocity", ) registry.add_field( (ptype, f"particle_position_{ax}"), sampling_type="particle", function=p, units="code_length", ) def particle_vector_functions(ptype, coord_names, vel_names, registry): unit_system = registry.ds.unit_system # This will column_stack a set of scalars to create vector fields. def _get_vec_func(_ptype, names): def particle_vectors(field, data): v = [data[_ptype, name].in_units(field.units) for name in names] return data.ds.arr(np.column_stack(v), v[0].units) return particle_vectors registry.add_field( (ptype, "particle_position"), sampling_type="particle", function=_get_vec_func(ptype, coord_names), units="code_length", ) registry.add_field( (ptype, "particle_velocity"), sampling_type="particle", function=_get_vec_func(ptype, vel_names), units=unit_system["velocity"], ) def get_angular_momentum_components(ptype, data, spos, svel): normal = data.ds.arr([0.0, 0.0, 1.0], "code_length") # default to simulation axis pos = data.ds.arr([data[ptype, spos % ax] for ax in "xyz"]).T vel = data.ds.arr([data[ptype, f"relative_{svel % ax}"] for ax in "xyz"]).T return pos, vel, normal def standard_particle_fields( registry, ptype, spos="particle_position_%s", svel="particle_velocity_%s" ): unit_system = registry.ds.unit_system def _particle_velocity_magnitude(field, data): """M{|v|}""" return np.sqrt( data[ptype, f"relative_{svel % 'x'}"] ** 2 + data[ptype, f"relative_{svel % 'y'}"] ** 2 + data[ptype, f"relative_{svel % 'z'}"] ** 2 ) registry.add_field( (ptype, "particle_velocity_magnitude"), sampling_type="particle", function=_particle_velocity_magnitude, take_log=False, units=unit_system["velocity"], ) create_los_field( registry, "particle_velocity", unit_system["velocity"], ftype=ptype, sampling_type="particle", ) def _particle_specific_angular_momentum(field, data): """Calculate the angular of a particle velocity. Returns a vector for each particle. """ center = data.get_field_parameter("center") pos, vel, normal = get_angular_momentum_components(ptype, data, spos, svel) L, r_vec, v_vec = modify_reference_frame(center, normal, P=pos, V=vel) # adding in the unit registry allows us to have a reference to the # dataset and thus we will always get the correct units after applying # the cross product. 
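        # In effect this evaluates the specific angular momentum
        #     j = (r - center) x (v - bulk_velocity)
        # in the rotated frame, so j carries units of length * velocity.
        # A usage sketch (the sphere and particle type are illustrative
        # assumptions):
        #
        #     >>> sp = ds.sphere("c", (10, "kpc"))
        #     >>> sp.set_field_parameter("center", sp.center)
        #     >>> sp["io", "particle_specific_angular_momentum"]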
return ucross(r_vec, v_vec, registry=data.ds.unit_registry) registry.add_field( (ptype, "particle_specific_angular_momentum"), sampling_type="particle", function=_particle_specific_angular_momentum, units=unit_system["specific_angular_momentum"], validators=[ValidateParameter("center")], ) def _get_spec_ang_mom_comp(axi, ax, _ptype): def _particle_specific_angular_momentum_component(field, data): return data[_ptype, "particle_specific_angular_momentum"][:, axi] def _particle_angular_momentum_component(field, data): return ( data[_ptype, "particle_mass"] * data[ptype, f"particle_specific_angular_momentum_{ax}"] ) return ( _particle_specific_angular_momentum_component, _particle_angular_momentum_component, ) for axi, ax in enumerate("xyz"): f, v = _get_spec_ang_mom_comp(axi, ax, ptype) registry.add_field( (ptype, f"particle_specific_angular_momentum_{ax}"), sampling_type="particle", function=f, units=unit_system["specific_angular_momentum"], validators=[ValidateParameter("center")], ) registry.add_field( (ptype, f"particle_angular_momentum_{ax}"), sampling_type="particle", function=v, units=unit_system["angular_momentum"], validators=[ValidateParameter("center")], ) def _particle_angular_momentum(field, data): am = ( data[ptype, "particle_mass"] * data[ptype, "particle_specific_angular_momentum"].T ) return am.T registry.add_field( (ptype, "particle_angular_momentum"), sampling_type="particle", function=_particle_angular_momentum, units=unit_system["angular_momentum"], validators=[ValidateParameter("center")], ) create_magnitude_field( registry, "particle_angular_momentum", unit_system["angular_momentum"], sampling_type="particle", ftype=ptype, ) def _particle_radius(field, data): """The spherical radius component of the particle positions Relative to the coordinate system defined by the *normal* vector, and *center* field parameters. """ return get_radius(data, "particle_position_", field.name[0]) registry.add_field( (ptype, "particle_radius"), sampling_type="particle", function=_particle_radius, units=unit_system["length"], validators=[ValidateParameter("center")], ) def _relative_particle_position(field, data): """The cartesian particle positions in a rotated reference frame Relative to the coordinate system defined by *center* field parameter. Note that the orientation of the x and y axes are arbitrary. """ field_names = [(ptype, f"particle_position_{ax}") for ax in "xyz"] return obtain_position_vector(data, field_names=field_names).T registry.add_field( (ptype, "relative_particle_position"), sampling_type="particle", function=_relative_particle_position, units=unit_system["length"], validators=[ValidateParameter("normal"), ValidateParameter("center")], ) def _relative_particle_velocity(field, data): """The vector particle velocities in an arbitrary coordinate system Relative to the coordinate system defined by the *bulk_velocity* vector field parameter. Note that the orientation of the x and y axes are arbitrary. 
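        Examples
        --------
        A sketch of the tested pattern (the sphere and parameter values are
        illustrative assumptions; compare test_relative_particle_fields):

        >>> sp = ds.sphere("c", (10, "kpc"))
        >>> bv = ds.arr([1.0, 2.0, 3.0], "code_velocity")
        >>> sp.set_field_parameter("bulk_velocity", bv)
        >>> sp["all", "relative_particle_velocity"]  # particle_velocity - bv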
""" field_names = [(ptype, f"particle_velocity_{ax}") for ax in "xyz"] return obtain_relative_velocity_vector(data, field_names=field_names).T registry.add_field( (ptype, "relative_particle_velocity"), sampling_type="particle", function=_relative_particle_velocity, units=unit_system["velocity"], validators=[ValidateParameter("normal"), ValidateParameter("center")], ) def _get_coord_funcs_relative(axi, _ptype): def _particle_pos_rel(field, data): return data[_ptype, "relative_particle_position"][:, axi] def _particle_vel_rel(field, data): return data[_ptype, "relative_particle_velocity"][:, axi] return _particle_vel_rel, _particle_pos_rel for axi, ax in enumerate("xyz"): v, p = _get_coord_funcs_relative(axi, ptype) registry.add_field( (ptype, f"particle_velocity_relative_{ax}"), sampling_type="particle", function=v, units="code_velocity", ) registry.add_field( (ptype, f"particle_position_relative_{ax}"), sampling_type="particle", function=p, units="code_length", ) registry.add_field( (ptype, f"relative_particle_velocity_{ax}"), sampling_type="particle", function=v, units="code_velocity", ) registry.add_field( (ptype, f"relative_particle_position_{ax}"), sampling_type="particle", function=p, units="code_length", ) # this is just particle radius but we add it with an alias for the sake of # consistent naming registry.add_field( (ptype, "particle_position_spherical_radius"), sampling_type="particle", function=_particle_radius, units=unit_system["length"], validators=[ValidateParameter("normal"), ValidateParameter("center")], ) def _particle_position_spherical_theta(field, data): """The spherical theta coordinate of the particle positions. Relative to the coordinate system defined by the *normal* vector and *center* field parameters. """ normal = data.get_field_parameter("normal") pos = data[ptype, "relative_particle_position"].T return data.ds.arr(get_sph_theta(pos, normal), "") registry.add_field( (ptype, "particle_position_spherical_theta"), sampling_type="particle", function=_particle_position_spherical_theta, units="", validators=[ValidateParameter("center"), ValidateParameter("normal")], ) def _particle_position_spherical_phi(field, data): """The spherical phi component of the particle positions Relative to the coordinate system defined by the *normal* vector and *center* field parameters. """ normal = data.get_field_parameter("normal") pos = data[ptype, "relative_particle_position"].T return data.ds.arr(get_sph_phi(pos, normal), "") registry.add_field( (ptype, "particle_position_spherical_phi"), sampling_type="particle", function=_particle_position_spherical_phi, units="", validators=[ValidateParameter("normal"), ValidateParameter("center")], ) def _particle_velocity_spherical_radius(field, data): """The spherical radius component of the particle velocities in an arbitrary coordinate system Relative to the coordinate system defined by the *normal* vector, *bulk_velocity* vector and *center* field parameters. 
""" normal = data.get_field_parameter("normal") pos = data[ptype, "relative_particle_position"].T vel = data[ptype, "relative_particle_velocity"].T theta = get_sph_theta(pos, normal) phi = get_sph_phi(pos, normal) sphr = get_sph_r_component(vel, theta, phi, normal) return sphr registry.add_field( (ptype, "particle_velocity_spherical_radius"), sampling_type="particle", function=_particle_velocity_spherical_radius, units=unit_system["velocity"], validators=[ValidateParameter("normal"), ValidateParameter("center")], ) registry.alias( (ptype, "particle_radial_velocity"), (ptype, "particle_velocity_spherical_radius"), ) def _particle_velocity_spherical_theta(field, data): """The spherical theta component of the particle velocities in an arbitrary coordinate system Relative to the coordinate system defined by the *normal* vector, *bulk_velocity* vector and *center* field parameters. """ normal = data.get_field_parameter("normal") pos = data[ptype, "relative_particle_position"].T vel = data[ptype, "relative_particle_velocity"].T theta = get_sph_theta(pos, normal) phi = get_sph_phi(pos, normal) spht = get_sph_theta_component(vel, theta, phi, normal) return spht registry.add_field( (ptype, "particle_velocity_spherical_theta"), sampling_type="particle", function=_particle_velocity_spherical_theta, units=unit_system["velocity"], validators=[ValidateParameter("normal"), ValidateParameter("center")], ) def _particle_velocity_spherical_phi(field, data): """The spherical phi component of the particle velocities Relative to the coordinate system defined by the *normal* vector, *bulk_velocity* vector and *center* field parameters. """ normal = data.get_field_parameter("normal") pos = data[ptype, "relative_particle_position"].T vel = data[ptype, "relative_particle_velocity"].T phi = get_sph_phi(pos, normal) sphp = get_sph_phi_component(vel, phi, normal) return sphp registry.add_field( (ptype, "particle_velocity_spherical_phi"), sampling_type="particle", function=_particle_velocity_spherical_phi, units=unit_system["velocity"], validators=[ValidateParameter("normal"), ValidateParameter("center")], ) def _particle_position_cylindrical_radius(field, data): """The cylindrical radius component of the particle positions Relative to the coordinate system defined by the *normal* vector and *center* field parameters. """ normal = data.get_field_parameter("normal") pos = data[ptype, "relative_particle_position"].T pos.convert_to_units("code_length") return data.ds.arr(get_cyl_r(pos, normal), "code_length") registry.add_field( (ptype, "particle_position_cylindrical_radius"), sampling_type="particle", function=_particle_position_cylindrical_radius, units=unit_system["length"], validators=[ValidateParameter("normal"), ValidateParameter("center")], ) def _particle_position_cylindrical_theta(field, data): """The cylindrical theta component of the particle positions Relative to the coordinate system defined by the *normal* vector and *center* field parameters. 
""" normal = data.get_field_parameter("normal") pos = data[ptype, "relative_particle_position"].T return data.ds.arr(get_cyl_theta(pos, normal), "") registry.add_field( (ptype, "particle_position_cylindrical_theta"), sampling_type="particle", function=_particle_position_cylindrical_theta, units="", validators=[ValidateParameter("center"), ValidateParameter("normal")], ) def _particle_position_cylindrical_z(field, data): """The cylindrical z component of the particle positions Relative to the coordinate system defined by the *normal* vector and *center* field parameters. """ normal = data.get_field_parameter("normal") pos = data[ptype, "relative_particle_position"].T pos.convert_to_units("code_length") return data.ds.arr(get_cyl_z(pos, normal), "code_length") registry.add_field( (ptype, "particle_position_cylindrical_z"), sampling_type="particle", function=_particle_position_cylindrical_z, units=unit_system["length"], validators=[ValidateParameter("normal"), ValidateParameter("center")], ) def _particle_velocity_cylindrical_radius(field, data): """The cylindrical radius component of the particle velocities Relative to the coordinate system defined by the *normal* vector, *bulk_velocity* vector and *center* field parameters. """ normal = data.get_field_parameter("normal") pos = data[ptype, "relative_particle_position"].T vel = data[ptype, "relative_particle_velocity"].T theta = get_cyl_theta(pos, normal) cylr = get_cyl_r_component(vel, theta, normal) return cylr registry.add_field( (ptype, "particle_velocity_cylindrical_radius"), sampling_type="particle", function=_particle_velocity_cylindrical_radius, units=unit_system["velocity"], validators=[ValidateParameter("normal"), ValidateParameter("center")], ) def _particle_velocity_cylindrical_theta(field, data): """The cylindrical theta component of the particle velocities Relative to the coordinate system defined by the *normal* vector, *bulk_velocity* vector and *center* field parameters. """ normal = data.get_field_parameter("normal") pos = data[ptype, "relative_particle_position"].T vel = data[ptype, "relative_particle_velocity"].T theta = get_cyl_theta(pos, normal) cylt = get_cyl_theta_component(vel, theta, normal) return cylt registry.add_field( (ptype, "particle_velocity_cylindrical_theta"), sampling_type="particle", function=_particle_velocity_cylindrical_theta, units=unit_system["velocity"], validators=[ValidateParameter("normal"), ValidateParameter("center")], ) def _particle_velocity_cylindrical_z(field, data): """The cylindrical z component of the particle velocities Relative to the coordinate system defined by the *normal* vector, *bulk_velocity* vector and *center* field parameters. 
""" normal = data.get_field_parameter("normal") vel = data[ptype, "relative_particle_velocity"].T cylz = get_cyl_z_component(vel, normal) return cylz registry.add_field( (ptype, "particle_velocity_cylindrical_z"), sampling_type="particle", function=_particle_velocity_cylindrical_z, units=unit_system["velocity"], validators=[ValidateParameter("normal"), ValidateParameter("center")], ) def add_particle_average(registry, ptype, field_name, weight=None, density=True): if weight is None: weight = (ptype, "particle_mass") field_units = registry[ptype, field_name].units def _pfunc_avg(field, data): pos = data[ptype, "particle_position"] f = data[ptype, field_name] wf = data[ptype, weight] f *= wf v = data.deposit(pos, [f], method="sum") w = data.deposit(pos, [wf], method="sum") v /= w if density: v /= data["index", "cell_volume"] v[np.isnan(v)] = 0.0 return v fn = ("deposit", f"{ptype}_avg_{field_name}") registry.add_field( fn, sampling_type="cell", function=_pfunc_avg, validators=[ValidateSpatial(0)], units=field_units, ) return fn def add_nearest_neighbor_field(ptype, coord_name, registry, nneighbors=64): field_name = (ptype, f"nearest_neighbor_distance_{nneighbors}") def _nth_neighbor(field, data): pos = data[ptype, coord_name] pos.convert_to_units("code_length") distances = 0.0 * pos[:, 0] data.particle_operation( pos, [distances], method="nth_neighbor", nneighbors=nneighbors ) # Now some quick unit conversions. return distances registry.add_field( field_name, sampling_type="particle", function=_nth_neighbor, validators=[ValidateSpatial(0)], units="code_length", ) return [field_name] def add_nearest_neighbor_value_field(ptype, coord_name, sampled_field, registry): """ This adds a nearest-neighbor field, where values on the mesh are assigned based on the nearest particle value found. This is useful, for instance, with voronoi-tesselations. """ field_name = ("deposit", f"{ptype}_nearest_{sampled_field}") field_units = registry[ptype, sampled_field].units unit_system = registry.ds.unit_system def _nearest_value(field, data): pos = data[ptype, coord_name] pos = pos.convert_to_units("code_length") value = data[ptype, sampled_field].in_base(unit_system.name) rv = data.smooth( pos, [value], method="nearest", create_octree=True, nneighbors=1 ) rv = data.apply_units(rv, field_units) return rv registry.add_field( field_name, sampling_type="cell", function=_nearest_value, validators=[ValidateSpatial(0)], units=field_units, ) return [field_name] def add_union_field(registry, ptype, field_name, units): """ Create a field that is the concatenation of multiple particle types. This allows us to create fields for particle unions using alias names. 
""" def _cat_field(field, data): return uconcatenate( [data[dep_type, field_name] for dep_type in data.ds.particle_types_raw] ) registry.add_field( (ptype, field_name), sampling_type="particle", function=_cat_field, units=units ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/species_fields.py0000644000175100001770000002675214714401662017036 0ustar00runnerdockerimport re import numpy as np from yt.frontends.sph.data_structures import ParticleDataset from yt.utilities.chemical_formulas import ChemicalFormula from yt.utilities.physical_ratios import _primordial_mass_fraction from .field_plugin_registry import register_field_plugin # See YTEP-0003 for details, but we want to ensure these fields are all # populated: # # * _mass # * _density # * _fraction # * _number_density # def _create_fraction_func(ftype, species): def _frac(field, data): return data[ftype, f"{species}_density"] / data[ftype, "density"] return _frac def _mass_from_cell_volume_and_density(ftype, species): def _mass(field, data): return data[ftype, f"{species}_density"] * data["index", "cell_volume"] return _mass def _mass_from_particle_mass_and_fraction(ftype, species): def _mass(field, data): return data[ftype, f"{species}_fraction"] * data[ftype, "particle_mass"] return _mass def _create_number_density_func(ftype, species): formula = ChemicalFormula(species) def _number_density(field, data): weight = formula.weight # This is in AMU weight *= data.ds.quan(1.0, "amu").in_cgs() return data[ftype, f"{species}_density"] / weight return _number_density def _create_density_func(ftype, species): def _density(field, data): return data[ftype, f"{species}_fraction"] * data[ftype, "density"] return _density def add_species_field_by_density(registry, ftype, species): """ This takes a field registry, a fluid type, and a species name and then adds the other fluids based on that. This assumes that the field "SPECIES_density" already exists and refers to mass density. """ unit_system = registry.ds.unit_system registry.add_field( (ftype, f"{species}_fraction"), sampling_type="local", function=_create_fraction_func(ftype, species), units="", ) if isinstance(registry.ds, ParticleDataset): _create_mass_func = _mass_from_particle_mass_and_fraction else: _create_mass_func = _mass_from_cell_volume_and_density registry.add_field( (ftype, f"{species}_mass"), sampling_type="local", function=_create_mass_func(ftype, species), units=unit_system["mass"], ) registry.add_field( (ftype, f"{species}_number_density"), sampling_type="local", function=_create_number_density_func(ftype, species), units=unit_system["number_density"], ) return [ (ftype, f"{species}_number_density"), (ftype, f"{species}_density"), (ftype, f"{species}_mass"), ] def add_species_field_by_fraction(registry, ftype, species): """ This takes a field registry, a fluid type, and a species name and then adds the other fluids based on that. This assumes that the field "SPECIES_fraction" already exists and refers to mass fraction. 
""" unit_system = registry.ds.unit_system registry.add_field( (ftype, f"{species}_density"), sampling_type="local", function=_create_density_func(ftype, species), units=unit_system["density"], ) if isinstance(registry.ds, ParticleDataset): _create_mass_func = _mass_from_particle_mass_and_fraction else: _create_mass_func = _mass_from_cell_volume_and_density registry.add_field( (ftype, f"{species}_mass"), sampling_type="local", function=_create_mass_func(ftype, species), units=unit_system["mass"], ) registry.add_field( (ftype, f"{species}_number_density"), sampling_type="local", function=_create_number_density_func(ftype, species), units=unit_system["number_density"], ) return [ (ftype, f"{species}_number_density"), (ftype, f"{species}_density"), (ftype, f"{species}_mass"), ] def add_species_aliases(registry, ftype, alias_species, species): r""" This takes a field registry, a fluid type, and two species names. The first species name is one you wish to alias to an existing species name. For instance you might alias all "H_p0" fields to "H\_" fields to indicate that "H\_" fields are really just neutral hydrogen fields. This function registers field aliases for the density, number_density, mass, and fraction fields between the two species given in the arguments. """ registry.alias((ftype, f"{alias_species}_density"), (ftype, f"{species}_density")) registry.alias((ftype, f"{alias_species}_fraction"), (ftype, f"{species}_fraction")) registry.alias( (ftype, f"{alias_species}_number_density"), (ftype, f"{species}_number_density"), ) registry.alias((ftype, f"{alias_species}_mass"), (ftype, f"{species}_mass")) def add_deprecated_species_aliases(registry, ftype, alias_species, species): """ Add the species aliases but with deprecation warnings. """ for suffix in ["density", "fraction", "number_density", "mass"]: add_deprecated_species_alias(registry, ftype, alias_species, species, suffix) def add_deprecated_species_alias(registry, ftype, alias_species, species, suffix): """ Add a deprecated species alias field. """ unit_system = registry.ds.unit_system if suffix == "fraction": my_units = "" else: my_units = unit_system[suffix] def _dep_field(field, data): return data[ftype, f"{species}_{suffix}"] registry.add_field( (ftype, f"{alias_species}_{suffix}"), sampling_type="local", function=_dep_field, units=my_units, ) def add_nuclei_density_fields(registry, ftype): unit_system = registry.ds.unit_system elements = _get_all_elements(registry.species_names) for element in elements: registry.add_field( (ftype, f"{element}_nuclei_density"), sampling_type="local", function=_nuclei_density, units=unit_system["number_density"], ) # Here, we add default nuclei and number density fields for H and # He if they are not defined above, and if it was requested by # setting "default_species_fields" if registry.ds.default_species_fields is None: return dsf = registry.ds.default_species_fields # Right now, this only handles default fields for H and He for element in ["H", "He"]: # If these elements are already present in the dataset, # DO NOT set them if element in elements: continue # First add the default nuclei density fields registry.add_field( (ftype, f"{element}_nuclei_density"), sampling_type="local", function=_default_nuclei_density, units=unit_system["number_density"], ) # Set up number density fields for hydrogen, either fully ionized or neutral. 
if element == "H": if dsf == "ionized": state = "p1" elif dsf == "neutral": state = "p0" else: raise NotImplementedError( f"'default_species_fields' option '{dsf}' is not implemented!" ) registry.alias( (ftype, f"H_{state}_number_density"), (ftype, "H_nuclei_density") ) # Set up number density fields for helium, either fully ionized or neutral. if element == "He": if dsf == "ionized": state = "p2" elif dsf == "neutral": state = "p0" registry.alias( (ftype, f"He_{state}_number_density"), (ftype, "He_nuclei_density") ) # If we're fully ionized, we need to setup the electron number density field if (ftype, "El_number_density") not in registry and dsf == "ionized": registry.add_field( (ftype, "El_number_density"), sampling_type="local", function=_default_nuclei_density, units=unit_system["number_density"], ) def _default_nuclei_density(field, data): ftype = field.name[0] element = field.name[1][: field.name[1].find("_")] amu_cgs = data.ds.quan(1.0, "amu").in_cgs() if element == "El": # This is for determining the electron number density. # If we got here, this assumes full ionization! muinv = 1.0 * _primordial_mass_fraction["H"] / ChemicalFormula("H").weight muinv += 2.0 * _primordial_mass_fraction["He"] / ChemicalFormula("He").weight else: # This is for anything else besides electrons muinv = _primordial_mass_fraction[element] / ChemicalFormula(element).weight return data[ftype, "density"] * muinv / amu_cgs def _nuclei_density(field, data): ftype = field.name[0] element = field.name[1][: field.name[1].find("_")] nuclei_mass_field = f"{element}_nuclei_mass_density" if (ftype, nuclei_mass_field) in data.ds.field_info: return ( data[ftype, nuclei_mass_field] / ChemicalFormula(element).weight / data.ds.quan(1.0, "amu").in_cgs() ) metal_field = f"{element}_metallicity" if (ftype, metal_field) in data.ds.field_info: return ( data[ftype, "density"] * data[ftype, metal_field] / ChemicalFormula(element).weight / data.ds.quan(1.0, "amu").in_cgs() ) field_data = np.zeros_like( data[ftype, f"{data.ds.field_info.species_names[0]}_number_density"] ) for species in data.ds.field_info.species_names: nucleus = species if "_" in species: nucleus = species[: species.find("_")] # num is the number of nuclei contributed by this species. num = _get_element_multiple(nucleus, element) # Since this is a loop over all species existing in this dataset, # we will encounter species that contribute nothing, so we skip them. if num == 0: continue field_data += num * data[ftype, f"{species}_number_density"] return field_data def _get_all_elements(species_list): elements = [] for species in species_list: for item in re.findall("[A-Z][a-z]?|[0-9]+", species): if not item.isdigit() and item not in elements and item != "El": elements.append(item) return elements def _get_element_multiple(compound, element): my_split = re.findall("[A-Z][a-z]?|[0-9]+", compound) if element not in my_split: return 0 loc = my_split.index(element) if loc == len(my_split) - 1 or not my_split[loc + 1].isdigit(): return 1 return int(my_split[loc + 1]) @register_field_plugin def setup_species_fields(registry, ftype="gas", slice_info=None): for species in registry.species_names: # These are all the species we should be looking for fractions or # densities of. if (ftype, f"{species}_density") in registry: func = add_species_field_by_density elif (ftype, f"{species}_fraction") in registry: func = add_species_field_by_fraction else: # Skip it continue func(registry, ftype, species) # Add aliases of X_p0_ to X_. 
        # These are deprecated and will be removed soon.
        if ChemicalFormula(species).charge == 0:
            alias_species = species.split("_")[0]
            if (ftype, f"{alias_species}_density") in registry:
                continue
            add_deprecated_species_aliases(registry, "gas", alias_species, species)

    add_nuclei_density_fields(registry, ftype)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/tensor_fields.py0000644000175100001770000000210314714401662016675 0ustar00runnerdockerfrom functools import partial


# This is the metric for Minkowski spacetime in SR
def metric(mu: int, nu: int):
    # This assumes the -+++ signature
    if (mu, nu) == (0, 0):
        return -1
    elif mu == nu:
        return 1
    else:
        return 0


def setup_stress_energy_ideal(registry, ftype="gas"):
    ax = ("t",) + registry.ds.coordinates.axis_order
    pc = registry.ds.units.physical_constants
    inv_c2 = 1.0 / (pc.clight * pc.clight)

    # Stress-energy tensor of an ideal fluid in special relativity:
    #     T^{mu nu} = (rho + (e + p)/c**2) U^mu U^nu + p eta^{mu nu}
    # where U is the four-velocity and eta is the Minkowski metric above.
    def _T(field, data, mu: int, nu: int):
        Umu = data[ftype, f"four_velocity_{ax[mu]}"]
        Unu = data[ftype, f"four_velocity_{ax[nu]}"]
        p = data[ftype, "pressure"]
        e = data[ftype, "thermal_energy_density"]
        rho = data[ftype, "density"]
        return (rho + (e + p) * inv_c2) * Umu * Unu + metric(mu, nu) * p

    # T is a 4x4 tensor: the indices run over t plus the three spatial axes.
    for mu in range(4):
        for nu in range(4):
            registry.add_field(
                (ftype, f"T{mu}{nu}"),
                sampling_type="local",
                function=partial(_T, mu=mu, nu=nu),
                units=registry.ds.unit_system["pressure"],
            )
././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2871518 yt-4.4.0/yt/fields/tests/0000755000175100001770000000000014714401715014630 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/tests/__init__.py0000644000175100001770000000000014714401662016730 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/tests/test_ambiguous_fields.py0000644000175100001770000000243614714401662021570 0ustar00runnerdockerimport pytest

from yt.testing import fake_amr_ds


def test_ambiguous_fails():
    ds = fake_amr_ds(particles=10)
    msg = "The requested field name '{}' is ambiguous"

    def _mock_field(field, data):
        return data["ones"]

    # create a pair of ambiguous fields
    ds.add_field(("io", "mock_field"), _mock_field, sampling_type="particle")
    ds.add_field(("gas", "mock_field"), _mock_field, sampling_type="cell")

    # Test errors are raised for ambiguous fields
    with pytest.raises(ValueError, match=msg.format("mock_field")):
        ds.r["mock_field"]

    # check that explicit name tuples don't raise a warning
    ds.r["io", "mock_field"]
    ds.r["gas", "mock_field"]


def test_nameonly_field_with_all_aliases_candidates():
    # see https://github.com/yt-project/yt/issues/3839
    ds = fake_amr_ds(fields=["density"], units=["g/cm**3"])

    # here we rely on implementation details of fake_amr_ds,
    # so we verify that it provides the appropriate conditions
    # for the actual test.
candidates = [f for f in ds.derived_field_list if f[1] == "density"] assert len(candidates) == 2 fi = ds.field_info assert fi[candidates[0]].is_alias_to(fi[candidates[1]]) # this is the actual test (check that no error or warning is raised) ds.all_data()["density"] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/tests/test_angular_momentum.py0000644000175100001770000000137214714401662021617 0ustar00runnerdockerimport numpy as np from yt.testing import assert_allclose_units, fake_amr_ds def test_AM_value(): ds = fake_amr_ds( fields=("Density", "velocity_x", "velocity_y", "velocity_z"), units=("g/cm**3", "cm/s", "cm/s", "cm/s"), length_unit=0.5, ) sp = ds.sphere([0.5] * 3, (0.1, "code_length")) x0 = sp.center v0 = ds.arr([1, 2, 3], "km/s") sp.set_field_parameter("bulk_velocity", v0) X = (ds.arr([sp["index", k] for k in "xyz"]) - x0[:, None]).T V = (ds.arr([sp["gas", f"velocity_{k}"] for k in "xyz"]) - v0[:, None]).T sAM_manual = ds.arr(np.cross(X, V), X.units * V.units) sAM = ds.arr([sp["gas", f"specific_angular_momentum_{k}"] for k in "xyz"]).T assert_allclose_units(sAM_manual, sAM) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/tests/test_field_access.py0000644000175100001770000000261714714401662020654 0ustar00runnerdockerfrom numpy.testing import assert_equal from yt.data_objects.profiles import create_profile from yt.testing import fake_random_ds from yt.visualization.plot_window import ProjectionPlot, SlicePlot from yt.visualization.profile_plotter import PhasePlot, ProfilePlot def test_field_access(): ds = fake_random_ds(16) ad = ds.all_data() sp = ds.sphere(ds.domain_center, 0.25) cg = ds.covering_grid(0, ds.domain_left_edge, ds.domain_dimensions) scg = ds.smoothed_covering_grid(0, ds.domain_left_edge, ds.domain_dimensions) sl = ds.slice(0, ds.domain_center[0]) proj = ds.proj(("gas", "density"), 0) prof = create_profile(ad, ("index", "radius"), ("gas", "density")) for data_object in [ad, sp, cg, scg, sl, proj, prof]: assert_equal(data_object["gas", "density"], data_object[ds.fields.gas.density]) for field in [("gas", "density"), ds.fields.gas.density]: ad = ds.all_data() prof = ProfilePlot(ad, ("index", "radius"), field) phase = PhasePlot(ad, ("index", "radius"), field, ("gas", "cell_mass")) s = SlicePlot(ds, 2, field) oas = SlicePlot(ds, [1, 1, 1], field) p = ProjectionPlot(ds, 2, field) oap = ProjectionPlot(ds, [1, 1, 1], field) for plot_object in [s, oas, p, oap, prof, phase]: plot_object.render() if hasattr(plot_object, "_frb"): plot_object._frb[field] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/tests/test_field_access_pytest.py0000644000175100001770000000243614714401662022263 0ustar00runnerdockerfrom collections import defaultdict from itertools import product import pytest from yt.testing import fake_random_ds from yt.utilities.exceptions import YTFieldNotFound def test_unexisting_field_access(): ds = fake_random_ds(16, particles=10) fname2ftype = defaultdict(list) for ft, fn in ds.derived_field_list: fname2ftype[fn].append(ft) ad = ds.all_data() ftypes = ("gas", "io") fnames = ( "density", "particle_position_x", "particle_position_y", "particle_position_z", ) # Try invalid ftypes, fnames combinations for ft, fn in product(ftypes, fnames): if (ft, fn) in ds.derived_field_list: continue with pytest.raises(YTFieldNotFound) as excinfo: ad[ft, fn] # Make sure the existing field has been 
suggested for possible_ft in fname2ftype[fn]: assert str((possible_ft, fn)) in str(excinfo.value) # Try typos for bad_field, good_field in ( (("gas", "densi_y"), ("gas", "density")), (("oi", "particle_mass"), ("io", "particle_mass")), (("gas", "DENSITY"), ("gas", "density")), ): with pytest.raises(YTFieldNotFound) as excinfo: ad[bad_field] assert str(good_field) in str(excinfo.value) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/tests/test_field_name_container.py0000644000175100001770000000326714714401662022377 0ustar00runnerdockerfrom yt import load from yt.testing import fake_amr_ds, fake_hexahedral_ds, requires_file, requires_module def do_field_type(ft): assert dir(ft) == sorted(dir(ft)) assert sorted(dir(ft)) == sorted(f.name[1] for f in ft) for field_name in dir(ft): f = getattr(ft, field_name) assert (ft.field_type, field_name) == f.name for field in ft: f = getattr(ft, field.name[1]) assert f == field assert f in ft assert f.name in ft assert f.name[1] in ft enzotiny = "enzo_tiny_cosmology/DD0046/DD0046" @requires_module("h5py") @requires_file(enzotiny) def test_field_name_container(): ds = load(enzotiny) assert dir(ds.fields) == sorted(dir(ds.fields)) assert sorted(ft.field_type for ft in ds.fields) == sorted(dir(ds.fields)) for field_type in dir(ds.fields): assert field_type in ds.fields ft = getattr(ds.fields, field_type) do_field_type(ft) for field_type in ds.fields: assert field_type in ds.fields do_field_type(field_type) def test_vertex_fields_only_in_unstructured_ds(): def get_vertex_fields(ds): return [(ft, fn) for ft, fn in ds.derived_field_list if "vertex" in fn] ds = fake_amr_ds() vertex_fields = get_vertex_fields(ds) assert not vertex_fields ds = fake_hexahedral_ds() actual = get_vertex_fields(ds) expected = [ ("all", "vertex_x"), ("all", "vertex_y"), ("all", "vertex_z"), ("connect1", "vertex_x"), ("connect1", "vertex_y"), ("connect1", "vertex_z"), ("index", "vertex_x"), ("index", "vertex_y"), ("index", "vertex_z"), ] assert actual == expected ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/tests/test_fields.py0000644000175100001770000004202514714401662017513 0ustar00runnerdockerimport numpy as np from numpy.testing import ( assert_almost_equal, assert_array_almost_equal_nulp, assert_array_equal, assert_equal, assert_raises, ) from yt import load from yt.frontends.stream.fields import StreamFieldInfo from yt.testing import ( assert_allclose_units, fake_amr_ds, fake_particle_ds, fake_random_ds, requires_file, requires_module, ) from yt.units.yt_array import YTArray, YTQuantity, array_like_field from yt.utilities.cosmology import Cosmology from yt.utilities.exceptions import ( YTDimensionalityError, YTFieldUnitError, YTFieldUnitParseError, ) def get_params(ds): return { "axis": 0, "center": YTArray((0.0, 0.0, 0.0), "cm", registry=ds.unit_registry), "bulk_velocity": YTArray((0.0, 0.0, 0.0), "cm/s", registry=ds.unit_registry), "bulk_magnetic_field": YTArray((0.0, 0.0, 0.0), "G", registry=ds.unit_registry), "normal": YTArray((0.0, 0.0, 1.0), "", registry=ds.unit_registry), "cp_x_vec": YTArray((1.0, 0.0, 0.0), "", registry=ds.unit_registry), "cp_y_vec": YTArray((0.0, 1.0, 0.0), "", registry=ds.unit_registry), "cp_z_vec": YTArray((0.0, 0.0, 1.0), "", registry=ds.unit_registry), "omega_baryon": 0.04, "observer_redshift": 0.0, "source_redshift": 3.0, "virial_radius": YTQuantity(1.0, "cm"), } _base_fields = ( ("gas", "density"), ("gas", 
"velocity_x"), ("gas", "velocity_y"), ("gas", "velocity_z"), ) def _strip_ftype(field): if not isinstance(field, tuple): return field elif field[0] in ("all", "io"): return field return field[1] class TestFieldAccess: description = None def __init__(self, field_name, ds, nprocs): # Note this should be a field name self.field_name = field_name self.description = f"Accessing_{field_name}_{nprocs}" self.nprocs = nprocs self.ds = ds def __call__(self): field = self.ds._get_field_info(self.field_name) skip_grids = False needs_spatial = False for v in field.validators: if getattr(v, "ghost_zones", 0) > 0: skip_grids = True if hasattr(v, "ghost_zones"): needs_spatial = True ds = self.ds # This gives unequal sized grids as well as subgrids dd1 = ds.all_data() dd2 = ds.all_data() sp = get_params(ds) dd1.field_parameters.update(sp) dd2.field_parameters.update(sp) with np.errstate(all="ignore"): v1 = dd1[self.field_name] # No more conversion checking assert_equal(v1, dd1[self.field_name]) if not needs_spatial: with field.unit_registry(dd2): res = field._function(field, dd2) res = dd2.apply_units(res, field.units) assert_array_almost_equal_nulp(v1, res, 4) if not skip_grids: for g in ds.index.grids: g.field_parameters.update(sp) v1 = g[self.field_name] g.clear_data() g.field_parameters.update(sp) r1 = field._function(field, g) if field.sampling_type == "particle": assert_equal(v1.shape[0], g.NumberOfParticles) else: assert_array_equal(r1.shape, v1.shape) for ax in "xyz": assert_array_equal(g["index", ax].shape, v1.shape) with field.unit_registry(g): res = field._function(field, g) assert_array_equal(v1.shape, res.shape) res = g.apply_units(res, field.units) assert_array_almost_equal_nulp(v1, res, 4) def get_base_ds(nprocs): fields, units = [], [] for fname, (code_units, *_) in StreamFieldInfo.known_other_fields: fields.append(("gas", fname)) units.append(code_units) pfields, punits = [], [] for fname, (code_units, _aliases, _dn) in StreamFieldInfo.known_particle_fields: if fname == "smoothing_lenth": # we test SPH fields elsewhere continue pfields.append(fname) punits.append(code_units) ds = fake_random_ds( 4, fields=fields, units=units, particles=20, nprocs=nprocs, particle_fields=pfields, particle_field_units=punits, ) ds.parameters["HydroMethod"] = "streaming" ds.parameters["EOSType"] = 1.0 ds.parameters["EOSSoundSpeed"] = 1.0 ds.conversion_factors["Time"] = 1.0 ds.conversion_factors.update({f: 1.0 for f in fields}) ds.gamma = 5.0 / 3.0 ds.current_redshift = 0.0001 ds.cosmological_simulation = 1 ds.hubble_constant = 0.7 ds.omega_matter = 0.27 ds.omega_lambda = 0.73 ds.cosmology = Cosmology( hubble_constant=ds.hubble_constant, omega_matter=ds.omega_matter, omega_lambda=ds.omega_lambda, unit_registry=ds.unit_registry, ) # ensures field errors are raised during testing # see FieldInfoContainer.check_derived_fields ds._field_test_dataset = True ds.index return ds def test_all_fields(): datasets = {} for nprocs in [1, 4, 8]: ds = get_base_ds(nprocs) datasets[nprocs] = ds for field in sorted(ds.field_info): if field[1].find("beta_p") > -1: continue if field[1].find("vertex") > -1: # don't test the vertex fields for now continue if field[1].find("smoothed") > -1: # smoothed fields aren't implemented for grid data continue if field in ds.field_list: # Don't know how to test this. We need some way of having fields # that are fallbacks be tested, but we don't have that now. 
continue for nprocs in [1, 4, 8]: test_all_fields.__name__ = f"{field}_{nprocs}" yield TestFieldAccess(field, datasets[nprocs], nprocs) def test_add_deposited_particle_field(): # NOT tested: "std", "mesh_id", "nearest" and "simple_smooth" base_ds = get_base_ds(1) ad = base_ds.all_data() # Test "count", "sum" and "cic" method for method in ["count", "sum", "cic"]: fn = base_ds.add_deposited_particle_field(("io", "particle_mass"), method) expected_fn = "io_%s" if method == "count" else "io_%s_mass" assert_equal(fn, ("deposit", expected_fn % method)) ret = ad[fn] if method == "count": assert_equal(ret.sum(), ad["io", "particle_ones"].sum()) else: assert_almost_equal(ret.sum(), ad["io", "particle_mass"].sum()) # Test "weighted_mean" method fn = base_ds.add_deposited_particle_field( ("io", "particle_ones"), "weighted_mean", weight_field="particle_ones" ) assert_equal(fn, ("deposit", "io_avg_ones")) ret = ad[fn] # The sum should equal the number of cells that have particles assert_equal(ret.sum(), np.count_nonzero(ad["deposit", "io_count"])) def test_add_gradient_fields(): ds = get_base_ds(1) gfields = ds.add_gradient_fields(("gas", "density")) gfields += ds.add_gradient_fields(("index", "ones")) field_list = [ ("gas", "density_gradient_x"), ("gas", "density_gradient_y"), ("gas", "density_gradient_z"), ("gas", "density_gradient_magnitude"), ("index", "ones_gradient_x"), ("index", "ones_gradient_y"), ("index", "ones_gradient_z"), ("index", "ones_gradient_magnitude"), ] assert_equal(gfields, field_list) ad = ds.all_data() for field in field_list: ret = ad[field] if field[0] == "gas": assert str(ret.units) == "g/cm**4" else: assert str(ret.units) == "1/cm" def test_add_gradient_fields_by_fname(): ds = fake_amr_ds(fields=("density", "temperature"), units=("g/cm**3", "K")) actual = ds.add_gradient_fields(("gas", "density")) expected = [ ("gas", "density_gradient_x"), ("gas", "density_gradient_y"), ("gas", "density_gradient_z"), ("gas", "density_gradient_magnitude"), ] assert_equal(actual, expected) def test_add_gradient_multiple_fields(): ds = fake_amr_ds(fields=("density", "temperature"), units=("g/cm**3", "K")) actual = ds.add_gradient_fields([("gas", "density"), ("gas", "temperature")]) expected = [ ("gas", "density_gradient_x"), ("gas", "density_gradient_y"), ("gas", "density_gradient_z"), ("gas", "density_gradient_magnitude"), ("gas", "temperature_gradient_x"), ("gas", "temperature_gradient_y"), ("gas", "temperature_gradient_z"), ("gas", "temperature_gradient_magnitude"), ] assert_equal(actual, expected) ds = fake_amr_ds(fields=("density", "temperature"), units=("g/cm**3", "K")) actual = ds.add_gradient_fields([("gas", "density"), ("gas", "temperature")]) assert_equal(actual, expected) def test_add_gradient_fields_curvilinear(): ds = fake_amr_ds(fields=["density"], units=["g/cm**3"], geometry="spherical") gfields = ds.add_gradient_fields(("gas", "density")) gfields += ds.add_gradient_fields(("index", "ones")) field_list = [ ("gas", "density_gradient_r"), ("gas", "density_gradient_theta"), ("gas", "density_gradient_phi"), ("gas", "density_gradient_magnitude"), ("index", "ones_gradient_r"), ("index", "ones_gradient_theta"), ("index", "ones_gradient_phi"), ("index", "ones_gradient_magnitude"), ] assert_equal(gfields, field_list) ad = ds.all_data() for field in field_list: ret = ad[field] if field[0] == "gas": assert str(ret.units) == "g/cm**4" else: assert str(ret.units) == "1/cm" def get_data(ds, field_name): # Need to create a new data object otherwise the errors we are # intentionally raising 
lead to spurious GenerationInProgress errors ad = ds.all_data() return ad[field_name] def test_add_field_unit_semantics(): ds = fake_random_ds(16) ad = ds.all_data() def density_alias(field, data): return data["gas", "density"].in_cgs() def unitless_data(field, data): return np.ones(data["gas", "density"].shape) ds.add_field( ("gas", "density_alias_auto"), sampling_type="cell", function=density_alias, units="auto", dimensions="density", ) ds.add_field( ("gas", "density_alias_wrong_units"), function=density_alias, sampling_type="cell", units="m/s", ) ds.add_field( ("gas", "density_alias_unparseable_units"), sampling_type="cell", function=density_alias, units="dragons", ) ds.add_field( ("gas", "density_alias_auto_wrong_dims"), function=density_alias, sampling_type="cell", units="auto", dimensions="temperature", ) assert_raises(YTFieldUnitError, get_data, ds, ("gas", "density_alias_wrong_units")) assert_raises( YTFieldUnitParseError, get_data, ds, ("gas", "density_alias_unparseable_units") ) assert_raises( YTDimensionalityError, get_data, ds, ("gas", "density_alias_auto_wrong_dims") ) dens = ad["gas", "density_alias_auto"] assert_equal(str(dens.units), "g/cm**3") ds.add_field(("gas", "dimensionless"), sampling_type="cell", function=unitless_data) ds.add_field( ("gas", "dimensionless_auto"), function=unitless_data, sampling_type="cell", units="auto", dimensions="dimensionless", ) ds.add_field( ("gas", "dimensionless_explicit"), function=unitless_data, sampling_type="cell", units="", ) ds.add_field( ("gas", "dimensionful"), sampling_type="cell", function=unitless_data, units="g/cm**3", ) assert_equal(str(ad["gas", "dimensionless"].units), "dimensionless") assert_equal(str(ad["gas", "dimensionless_auto"].units), "dimensionless") assert_equal(str(ad["gas", "dimensionless_explicit"].units), "dimensionless") assert_raises(YTFieldUnitError, get_data, ds, ("gas", "dimensionful")) def test_add_field_from_lambda(): ds = fake_amr_ds(fields=["density"], units=["g/cm**3"]) def _function_density(field, data): return data["gas", "density"] ds.add_field( ("gas", "function_density"), function=_function_density, sampling_type="cell", units="g/cm**3", ) ds.add_field( ("gas", "lambda_density"), function=lambda field, data: data["gas", "density"], sampling_type="cell", units="g/cm**3", ) ad = ds.all_data() # check that the fields are accessible ad["gas", "function_density"] ad["gas", "lambda_density"] def test_array_like_field(): ds = fake_random_ds(4, particles=64) ad = ds.all_data() u1 = ad["all", "particle_mass"].units u2 = array_like_field(ad, 1.0, ("all", "particle_mass")).units assert u1 == u2 ISOGAL = "IsolatedGalaxy/galaxy0030/galaxy0030" @requires_module("h5py") @requires_file(ISOGAL) def test_array_like_field_output_units(): ds = load(ISOGAL) ad = ds.all_data() u1 = ad["all", "particle_mass"].units u2 = array_like_field(ad, 1.0, ("all", "particle_mass")).units assert u1 == u2 assert str(u1) == ds.fields.all.particle_mass.output_units u1 = ad["gas", "x"].units u2 = array_like_field(ad, 1.0, ("gas", "x")).units assert u1 == u2 assert str(u1) == ds.fields.gas.x.units def test_add_field_string(): ds = fake_random_ds(16) ad = ds.all_data() def density_alias(field, data): return data["gas", "density"] ds.add_field( ("gas", "density_alias"), sampling_type="cell", function=density_alias, units="g/cm**3", ) ad["gas", "density_alias"] assert ("gas", "density_alias") in ds.derived_field_list def test_add_field_string_aliasing(): ds = fake_random_ds(16) def density_alias(field, data): return data["gas", "density"] 
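    # For reference, the add_field signature exercised throughout these
    # tests is (a sketch; "some_alias" is a hypothetical field name):
    #
    #     >>> ds.add_field(
    #     ...     ("gas", "some_alias"),   # (field type, field name)
    #     ...     function=density_alias,  # callable(field, data) -> array
    #     ...     sampling_type="cell",    # or "particle" / "local"
    #     ...     units="g/cm**3",         # or units="auto" with dimensions=...
    #     ... )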
ds.add_field( ("gas", "density_alias"), sampling_type="cell", function=density_alias, units="g/cm**3", ) ds.field_info["gas", "density_alias"] ds.field_info["gas", "density_alias"] ds = fake_particle_ds() def pmass_alias(field, data): return data["all", "particle_mass"] ds.add_field( ("all", "particle_mass_alias"), function=pmass_alias, units="g", sampling_type="particle", ) ds.field_info["all", "particle_mass_alias"] ds.field_info["all", "particle_mass_alias"] def test_morton_index(): ds = fake_amr_ds() mi = ds.r["index", "morton_index"] mi2 = mi.view("uint64") assert_equal(np.unique(mi2).size, mi2.size) a1 = np.argsort(mi) a2 = np.argsort(mi2) assert_array_equal(a1, a2) @requires_module("h5py") @requires_file(ISOGAL) def test_deposit_amr(): ds = load(ISOGAL) for g in ds.index.grids: gpm = g["all", "particle_mass"].sum() dpm = g["deposit", "all_mass"].sum() assert_allclose_units(gpm, dpm) def test_ion_field_labels(): fields = [ "O_p1_number_density", "O2_p1_number_density", "CO2_p1_number_density", "Co_p1_number_density", "O2_p2_number_density", "H2O_p1_number_density", ] units = ["cm**-3" for f in fields] ds = fake_random_ds(16, fields=fields, units=units) # by default labels should use roman numerals default_labels = { "O_p1_number_density": "$\\rm{O\\ II\\ Number\\ Density}$", "O2_p1_number_density": "$\\rm{O_{2}\\ II\\ Number\\ Density}$", "CO2_p1_number_density": "$\\rm{CO_{2}\\ II\\ Number\\ Density}$", "Co_p1_number_density": "$\\rm{Co\\ II\\ Number\\ Density}$", "O2_p2_number_density": "$\\rm{O_{2}\\ III\\ Number\\ Density}$", "H2O_p1_number_density": "$\\rm{H_{2}O\\ II\\ Number\\ Density}$", } pm_labels = { "O_p1_number_density": "$\\rm{{O}^{+}\\ Number\\ Density}$", "O2_p1_number_density": "$\\rm{{O_{2}}^{+}\\ Number\\ Density}$", "CO2_p1_number_density": "$\\rm{{CO_{2}}^{+}\\ Number\\ Density}$", "Co_p1_number_density": "$\\rm{{Co}^{+}\\ Number\\ Density}$", "O2_p2_number_density": "$\\rm{{O_{2}}^{++}\\ Number\\ Density}$", "H2O_p1_number_density": "$\\rm{{H_{2}O}^{+}\\ Number\\ Density}$", } fobj = ds.fields.stream for f in fields: label = getattr(fobj, f).get_latex_display_name() assert_equal(label, default_labels[f]) ds.set_field_label_format("ionization_label", "plus_minus") fobj = ds.fields.stream for f in fields: label = getattr(fobj, f).get_latex_display_name() assert_equal(label, pm_labels[f]) def test_default_fluid_type_None(): """ Check for bad behavior when default_fluid_type is None. See PR #3710. 
""" ds = fake_amr_ds() ds.default_fluid_type = None ds.field_list ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/tests/test_fields_plugins.py0000644000175100001770000000400214714401662021245 0ustar00runnerdockerimport os import shutil import tempfile import unittest from numpy.testing import assert_raises import yt from yt.config import ytcfg from yt.testing import fake_random_ds from yt.utilities.configure import YTConfig, config_dir _TEST_PLUGIN = "_test_plugin.py" _DUMMY_CFG_TOML = f"""[yt] log_level = 49 plugin_filename = "{_TEST_PLUGIN}" boolean_stuff = true chunk_size = 3 """ TEST_PLUGIN_FILE = """ import numpy as np def _myfunc(field, data): return np.random.random(data['gas', 'density'].shape) add_field(('gas', 'random'), dimensions='dimensionless', function=_myfunc, units='auto', sampling_type='local') constant = 3 def myfunc(): return constant*4 foobar = 17 """ class TestPluginFile(unittest.TestCase): @classmethod def setUpClass(cls): cls.xdg_config_home = os.environ.get("XDG_CONFIG_HOME") cls.tmpdir = tempfile.mkdtemp() os.mkdir(os.path.join(cls.tmpdir, "yt")) os.environ["XDG_CONFIG_HOME"] = cls.tmpdir with open(YTConfig.get_global_config_file(), mode="w") as fh: fh.write(_DUMMY_CFG_TOML) cls.plugin_path = os.path.join(config_dir(), ytcfg.get("yt", "plugin_filename")) with open(cls.plugin_path, mode="w") as fh: fh.write(TEST_PLUGIN_FILE) @classmethod def tearDownClass(cls): shutil.rmtree(cls.tmpdir) if cls.xdg_config_home: os.environ["XDG_CONFIG_HOME"] = cls.xdg_config_home else: os.environ.pop("XDG_CONFIG_HOME") def testCustomField(self): msg = f"INFO:yt:Loading plugins from {self.plugin_path}" with self.assertLogs("yt", level="INFO") as cm: yt.enable_plugins() self.assertEqual(cm.output, [msg]) ds = fake_random_ds(16) dd = ds.all_data() self.assertEqual(str(dd["gas", "random"].units), "dimensionless") self.assertEqual(dd["gas", "random"].shape, dd["gas", "density"].shape) assert yt.myfunc() == 12 assert_raises(AttributeError, getattr, yt, "foobar") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/tests/test_magnetic_fields.py0000644000175100001770000001041614714401662021361 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_almost_equal from yt.loaders import load_uniform_grid from yt.utilities.physical_constants import mu_0 def setup_module(): from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True def test_magnetic_fields(): ddims = (16, 16, 16) data1 = { "magnetic_field_x": (np.random.random(size=ddims), "T"), "magnetic_field_y": (np.random.random(size=ddims), "T"), "magnetic_field_z": (np.random.random(size=ddims), "T"), } data2 = {} for field in data1: data2[field] = (data1[field][0] * 1.0e4, "gauss") ds0 = load_uniform_grid(data1, ddims, unit_system="cgs") ds1 = load_uniform_grid(data1, ddims, magnetic_unit=(1.0, "T"), unit_system="cgs") ds2 = load_uniform_grid(data2, ddims, unit_system="mks") # For this test dataset, code units are cgs units ds3 = load_uniform_grid(data2, ddims, unit_system="code") # For this test dataset, code units are SI units ds4 = load_uniform_grid(data1, ddims, magnetic_unit=(1.0, "T"), unit_system="code") ds0.index ds1.index ds2.index ds3.index ds4.index dd0 = ds0.all_data() dd1 = ds1.all_data() dd2 = ds2.all_data() dd3 = ds3.all_data() dd4 = ds4.all_data() assert ds0.fields.gas.magnetic_field_strength.units == "G" assert ds1.fields.gas.magnetic_field_strength.units == "G" 
assert ds1.fields.gas.magnetic_field_poloidal.units == "G" assert ds1.fields.gas.magnetic_field_toroidal.units == "G" assert ds2.fields.gas.magnetic_field_strength.units == "T" assert ds2.fields.gas.magnetic_field_poloidal.units == "T" assert ds2.fields.gas.magnetic_field_toroidal.units == "T" assert ds3.fields.gas.magnetic_field_strength.units == "code_magnetic" assert ds3.fields.gas.magnetic_field_poloidal.units == "code_magnetic" assert ds3.fields.gas.magnetic_field_toroidal.units == "code_magnetic" assert ds4.fields.gas.magnetic_field_strength.units == "code_magnetic" assert ds4.fields.gas.magnetic_field_poloidal.units == "code_magnetic" assert ds4.fields.gas.magnetic_field_toroidal.units == "code_magnetic" emag0 = ( dd0["gas", "magnetic_field_x"] ** 2 + dd0["gas", "magnetic_field_y"] ** 2 + dd0["gas", "magnetic_field_z"] ** 2 ) / (8.0 * np.pi) emag0.convert_to_units("dyne/cm**2") emag1 = ( dd1["gas", "magnetic_field_x"] ** 2 + dd1["gas", "magnetic_field_y"] ** 2 + dd1["gas", "magnetic_field_z"] ** 2 ) / (8.0 * np.pi) emag1.convert_to_units("dyne/cm**2") emag2 = ( dd2["gas", "magnetic_field_x"] ** 2 + dd2["gas", "magnetic_field_y"] ** 2 + dd2["gas", "magnetic_field_z"] ** 2 ) / (2.0 * mu_0) emag2.convert_to_units("Pa") emag3 = ( dd3["gas", "magnetic_field_x"] ** 2 + dd3["gas", "magnetic_field_y"] ** 2 + dd3["gas", "magnetic_field_z"] ** 2 ) / (8.0 * np.pi) emag3.convert_to_units("code_pressure") emag4 = ( dd4["gas", "magnetic_field_x"] ** 2 + dd4["gas", "magnetic_field_y"] ** 2 + dd4["gas", "magnetic_field_z"] ** 2 ) / (2.0 * mu_0) emag4.convert_to_units("code_pressure") # note that "magnetic_energy_density" and "magnetic_pressure" are aliased assert_almost_equal(emag0, dd0["gas", "magnetic_energy_density"]) assert_almost_equal(emag1, dd1["gas", "magnetic_energy_density"]) assert_almost_equal(emag2, dd2["gas", "magnetic_energy_density"]) assert_almost_equal(emag3, dd3["gas", "magnetic_energy_density"]) assert_almost_equal(emag4, dd4["gas", "magnetic_energy_density"]) assert str(emag0.units) == str(dd0["gas", "magnetic_energy_density"].units) assert str(emag1.units) == str(dd1["gas", "magnetic_energy_density"].units) assert str(emag2.units) == str(dd2["gas", "magnetic_energy_density"].units) assert str(emag3.units) == str(dd3["gas", "magnetic_energy_density"].units) assert str(emag4.units) == str(dd4["gas", "magnetic_energy_density"].units) assert_almost_equal(emag1.in_cgs(), emag0.in_cgs()) assert_almost_equal(emag2.in_cgs(), emag0.in_cgs()) assert_almost_equal(emag1.in_cgs(), emag2.in_cgs()) assert_almost_equal(emag1.in_cgs(), emag3.in_cgs()) assert_almost_equal(emag1.in_cgs(), emag4.in_cgs()) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/tests/test_particle_fields.py0000644000175100001770000000316314714401662021376 0ustar00runnerdockerimport numpy as np from yt.testing import assert_allclose_units, requires_file, requires_module from yt.utilities.answer_testing.framework import data_dir_load g30 = "IsolatedGalaxy/galaxy0030/galaxy0030" @requires_module("h5py") @requires_file(g30) def test_relative_particle_fields(): ds = data_dir_load(g30) offset = ds.arr([0.1, -0.2, 0.3], "code_length") c = ds.domain_center + offset sp = ds.sphere(c, (10, "kpc")) bv = ds.arr([1.0, 2.0, 3.0], "code_velocity") sp.set_field_parameter("bulk_velocity", bv) assert_allclose_units( sp["all", "relative_particle_position"], sp["all", "particle_position"] - c ) assert_allclose_units( sp["all", "relative_particle_velocity"], sp["all", 
"particle_velocity"] - bv ) @requires_module("h5py") @requires_file(g30) def test_los_particle_fields(): ds = data_dir_load(g30) offset = ds.arr([0.1, -0.2, 0.3], "code_length") c = ds.domain_center + offset sp = ds.sphere(c, (10, "kpc")) bv = ds.arr([1.0, 2.0, 3.0], "code_velocity") sp.set_field_parameter("bulk_velocity", bv) ax = [0.1, 0.2, -0.3] sp.set_field_parameter("axis", ax) ax /= np.linalg.norm(ax) vlos = ( sp["all", "relative_particle_velocity_x"] * ax[0] + sp["all", "relative_particle_velocity_y"] * ax[1] + sp["all", "relative_particle_velocity_z"] * ax[2] ) assert_allclose_units(sp["all", "particle_velocity_los"], vlos) sp.clear_data() ax = 2 sp.set_field_parameter("axis", ax) vlos = sp["all", "relative_particle_velocity_z"] assert_allclose_units(sp["all", "particle_velocity_los"], vlos) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/tests/test_species_fields.py0000644000175100001770000000642114714401662021226 0ustar00runnerdockerfrom numpy.testing import assert_equal from yt.testing import assert_allclose_units, fake_random_ds from yt.utilities.chemical_formulas import ChemicalFormula from yt.utilities.physical_ratios import _primordial_mass_fraction def test_default_species_fields(): # Test default case (no species fields) ds = fake_random_ds(32) sp = ds.sphere("c", (0.2, "unitary")) mu = 0.5924489101195808 assert_allclose_units(mu * sp["index", "ones"], sp["gas", "mean_molecular_weight"]) assert ("gas", "H_nuclei_density") not in ds.derived_field_list assert ("gas", "He_nuclei_density") not in ds.derived_field_list assert ("gas", "El_number_density") not in ds.derived_field_list assert ("gas", "H_p1_number_density") not in ds.derived_field_list assert ("gas", "He_p2_number_density") not in ds.derived_field_list assert ("gas", "H_p0_number_density") not in ds.derived_field_list assert ("gas", "He_p0_number_density") not in ds.derived_field_list # Test fully ionized case dsi = fake_random_ds(32, default_species_fields="ionized") spi = dsi.sphere("c", (0.2, "unitary")) amu_cgs = dsi.quan(1.0, "amu").in_cgs() mueinv = 1.0 * _primordial_mass_fraction["H"] / ChemicalFormula("H").weight mueinv *= spi["index", "ones"] mueinv += 2.0 * _primordial_mass_fraction["He"] / ChemicalFormula("He").weight mupinv = _primordial_mass_fraction["H"] / ChemicalFormula("H").weight mupinv *= spi["index", "ones"] muainv = _primordial_mass_fraction["He"] / ChemicalFormula("He").weight muainv *= spi["index", "ones"] mueinv2 = spi["gas", "El_number_density"] * amu_cgs / spi["gas", "density"] mupinv2 = spi["gas", "H_p1_number_density"] * amu_cgs / spi["gas", "density"] muainv2 = spi["gas", "He_p2_number_density"] * amu_cgs / spi["gas", "density"] assert_allclose_units(mueinv, mueinv2) assert_allclose_units(mupinv, mupinv2) assert_allclose_units(muainv, muainv2) assert_equal(spi["gas", "H_p1_number_density"], spi["gas", "H_nuclei_density"]) assert_equal(spi["gas", "He_p2_number_density"], spi["gas", "He_nuclei_density"]) mu = 0.5924489101195808 assert_allclose_units( mu * spi["index", "ones"], spi["gas", "mean_molecular_weight"] ) # Test fully neutral case dsn = fake_random_ds(32, default_species_fields="neutral") spn = dsn.sphere("c", (0.2, "unitary")) amu_cgs = dsn.quan(1.0, "amu").in_cgs() assert ("gas", "El_number_density") not in ds.derived_field_list mupinv = _primordial_mass_fraction["H"] / ChemicalFormula("H").weight mupinv *= spn["index", "ones"] muainv = _primordial_mass_fraction["He"] / ChemicalFormula("He").weight muainv *= 
spn["index", "ones"] mupinv2 = spn["gas", "H_p0_number_density"] * amu_cgs / spn["gas", "density"] muainv2 = spn["gas", "He_p0_number_density"] * amu_cgs / spn["gas", "density"] assert_allclose_units(mueinv, mueinv2) assert_allclose_units(mupinv, mupinv2) assert_allclose_units(muainv, muainv2) assert_equal(spn["gas", "H_p0_number_density"], spn["gas", "H_nuclei_density"]) assert_equal(spn["gas", "He_p0_number_density"], spn["gas", "He_nuclei_density"]) mu = 1.2285402715185552 assert_allclose_units( mu * spn["index", "ones"], spn["gas", "mean_molecular_weight"] ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/tests/test_sph_fields.py0000644000175100001770000000405614714401662020367 0ustar00runnerdockerfrom collections import defaultdict from numpy.testing import assert_array_almost_equal, assert_equal import yt from yt.testing import requires_file, skip isothermal_h5 = "IsothermalCollapse/snap_505.hdf5" isothermal_bin = "IsothermalCollapse/snap_505" snap_33 = "snapshot_033/snap_033.0.hdf5" tipsy_gal = "TipsyGalaxy/galaxy.00300" FIRE_m12i = "FIRE_M12i_ref11/snapshot_600.hdf5" iso_kwargs = { "bounding_box": [[-3, 3], [-3, 3], [-3, 3]], "unit_base": { "UnitLength_in_cm": 5.0e16, "UnitMass_in_g": 1.98992e33, "UnitVelocity_in_cm_per_s": 46385.190, }, } load_kwargs = defaultdict(dict) load_kwargs.update( { isothermal_h5: iso_kwargs, isothermal_bin: iso_kwargs, } ) gas_fields_to_particle_fields = { "temperature": "Temperature", "density": "Density", "velocity_x": "particle_velocity_x", "velocity_magnitude": "particle_velocity_magnitude", } @skip(reason="See https://github.com/yt-project/yt/issues/3909") @requires_file(isothermal_bin) @requires_file(isothermal_h5) @requires_file(snap_33) @requires_file(tipsy_gal) @requires_file(FIRE_m12i) def test_sph_field_semantics(): for ds_fn in [tipsy_gal, isothermal_h5, isothermal_bin, snap_33, FIRE_m12i]: yield sph_fields_validate, ds_fn def sph_fields_validate(ds_fn): ds = yt.load(ds_fn, **(load_kwargs[ds_fn])) ad = ds.all_data() for gf, pf in gas_fields_to_particle_fields.items(): gas_field = ad["gas", gf] part_field = ad[ds._sph_ptypes[0], pf] assert_array_almost_equal(gas_field, part_field) npart = ds.particle_type_counts[ds._sph_ptypes[0]] err_msg = f"Field {gf} is not the correct shape" assert_equal(npart, gas_field.shape[0], err_msg=err_msg) dd = ds.r[0.4:0.6, 0.4:0.6, 0.4:0.6] for i, ax in enumerate("xyz"): dd.set_field_parameter(f"cp_{ax}_vec", yt.YTArray([1, 1, 1])) dd.set_field_parameter("axis", i) dd.set_field_parameter("omega_baryon", 0.3) for f in ds.fields.gas: gas_field = dd[f] assert f.is_sph_field ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/tests/test_vector_fields.py0000644000175100001770000001204014714401662021067 0ustar00runnerdockerimport numpy as np from yt.testing import ( assert_allclose_units, fake_random_ds, requires_file, requires_module, ) from yt.units import cm, s # type: ignore from yt.utilities.answer_testing.framework import data_dir_load from yt.visualization.volume_rendering.off_axis_projection import off_axis_projection def random_unit_vector(prng): v = prng.random_sample(3) while (v == 0).all(): v = prng.random_sample(3) return v / np.sqrt((v**2).sum()) def random_velocity_vector(prng): return 2e5 * prng.random_sample(3) - 1e5 def compare_vector_conversions(data_source): prng = np.random.RandomState(8675309) normals = [[1, 0, 0], [0, 1, 0], [0, 0, 1]] + [ random_unit_vector(prng) for i in 
range(2) ] bulk_velocities = [random_velocity_vector(prng) for i in range(2)] for bv in bulk_velocities: bulk_velocity = bv * cm / s data_source.set_field_parameter("bulk_velocity", bulk_velocity) data_source.clear_data() vmag = data_source["gas", "velocity_magnitude"] vrad = data_source["gas", "velocity_spherical_radius"] for normal in normals: data_source.set_field_parameter("normal", normal) data_source.clear_data() assert_allclose_units(vrad, data_source["gas", "velocity_spherical_radius"]) vmag_new = data_source["gas", "velocity_magnitude"] assert_allclose_units(vmag, vmag_new) vmag_cart = np.sqrt( (data_source["gas", "velocity_x"] - bulk_velocity[0]) ** 2 + (data_source["gas", "velocity_y"] - bulk_velocity[1]) ** 2 + (data_source["gas", "velocity_z"] - bulk_velocity[2]) ** 2 ) assert_allclose_units(vmag, vmag_cart) vmag_cyl = np.sqrt( data_source["gas", "velocity_cylindrical_radius"] ** 2 + data_source["gas", "velocity_cylindrical_theta"] ** 2 + data_source["gas", "velocity_cylindrical_z"] ** 2 ) assert_allclose_units(vmag, vmag_cyl) vmag_sph = np.sqrt( data_source["gas", "velocity_spherical_radius"] ** 2 + data_source["gas", "velocity_spherical_theta"] ** 2 + data_source["gas", "velocity_spherical_phi"] ** 2 ) assert_allclose_units(vmag, vmag_sph) for i, d in enumerate("xyz"): assert_allclose_units( data_source["gas", f"velocity_{d}"] - bulk_velocity[i], data_source["gas", f"relative_velocity_{d}"], ) for i, ax in enumerate("xyz"): data_source.set_field_parameter("axis", i) data_source.clear_data() assert_allclose_units( data_source["gas", "velocity_los"], data_source["gas", f"relative_velocity_{ax}"], ) for i, ax in enumerate("xyz"): prj = data_source.ds.proj( ("gas", "velocity_los"), i, weight_field=("gas", "density") ) assert_allclose_units( prj["gas", "velocity_los"], prj["gas", f"velocity_{ax}"] ) data_source.clear_data() ax = [0.1, 0.2, -0.3] data_source.set_field_parameter("axis", ax) ax /= np.sqrt(np.dot(ax, ax)) vlos = data_source["gas", "relative_velocity_x"] * ax[0] vlos += data_source["gas", "relative_velocity_y"] * ax[1] vlos += data_source["gas", "relative_velocity_z"] * ax[2] assert_allclose_units(data_source["gas", "velocity_los"], vlos) buf_los = off_axis_projection( data_source, data_source.ds.domain_center, ax, 0.5, 128, ("gas", "velocity_los"), weight=("gas", "density"), ) buf_x = off_axis_projection( data_source, data_source.ds.domain_center, ax, 0.5, 128, ("gas", "relative_velocity_x"), weight=("gas", "density"), ) buf_y = off_axis_projection( data_source, data_source.ds.domain_center, ax, 0.5, 128, ("gas", "relative_velocity_y"), weight=("gas", "density"), ) buf_z = off_axis_projection( data_source, data_source.ds.domain_center, ax, 0.5, 128, ("gas", "relative_velocity_z"), weight=("gas", "density"), ) vlos = buf_x * ax[0] + buf_y * ax[1] + buf_z * ax[2] assert_allclose_units(buf_los, vlos, rtol=1.0e-6) def test_vector_component_conversions_fake(): ds = fake_random_ds(16) ad = ds.all_data() compare_vector_conversions(ad) g30 = "IsolatedGalaxy/galaxy0030/galaxy0030" @requires_module("h5py") @requires_file(g30) def test_vector_component_conversions_real(): ds = data_dir_load(g30) sp = ds.sphere(ds.domain_center, (10, "kpc")) compare_vector_conversions(sp) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/tests/test_xray_fields.py0000644000175100001770000000422214714401662020553 0ustar00runnerdockerfrom yt.fields.xray_emission_fields import add_xray_emissivity_field from 
yt.utilities.answer_testing.framework import ( FieldValuesTest, ProjectionValuesTest, can_run_ds, data_dir_load, requires_ds, ) def setup_module(): from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True def check_xray_fields(ds_fn, fields): if not can_run_ds(ds_fn): return dso = [None, ("sphere", ("m", (0.1, "unitary")))] for field in fields: for dobj_name in dso: for axis in [0, 1, 2]: yield ProjectionValuesTest(ds_fn, axis, field, None, dobj_name) yield FieldValuesTest(ds_fn, field, dobj_name) sloshing = "GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0300" d9p = "D9p_500/10MpcBox_HartGal_csf_a0.500.d" @requires_ds(sloshing, big_data=True) def test_sloshing_apec(): ds = data_dir_load(sloshing, kwargs={"default_species_fields": "ionized"}) fields = add_xray_emissivity_field(ds, 0.5, 7.0, table_type="apec", metallicity=0.3) for test in check_xray_fields(ds, fields): test_sloshing_apec.__name__ = test.description yield test @requires_ds(d9p, big_data=True) def test_d9p_cloudy(): ds = data_dir_load(d9p, kwargs={"default_species_fields": "ionized"}) fields = add_xray_emissivity_field( ds, 0.5, 2.0, redshift=ds.current_redshift, table_type="cloudy", cosmology=ds.cosmology, metallicity=("gas", "metallicity"), ) for test in check_xray_fields(ds, fields): test.suffix = "current_redshift" test_d9p_cloudy.__name__ = test.description + test.suffix yield test @requires_ds(d9p, big_data=True) def test_d9p_cloudy_local(): ds = data_dir_load(d9p, kwargs={"default_species_fields": "ionized"}) fields = add_xray_emissivity_field( ds, 0.5, 2.0, dist=(1.0, "Mpc"), table_type="cloudy", metallicity=("gas", "metallicity"), ) for test in check_xray_fields(ds, fields): test.suffix = "dist_1Mpc" test_d9p_cloudy_local.__name__ = test.description + test.suffix yield test ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/vector_operations.py0000644000175100001770000006105314714401662017613 0ustar00runnerdockerimport sys from typing import TYPE_CHECKING import numpy as np from unyt import Unit from yt._typing import FieldName, FieldType from yt.funcs import is_sequence, just_one from yt.geometry.api import Geometry from yt.utilities.lib.misc_utilities import obtain_relative_velocity_vector from yt.utilities.math_utils import ( get_cyl_r_component, get_cyl_theta_component, get_cyl_z_component, get_sph_phi_component, get_sph_r_component, get_sph_theta_component, ) from .derived_field import NeedsParameter, ValidateParameter, ValidateSpatial if sys.version_info >= (3, 11): from typing import assert_never else: from typing_extensions import assert_never if TYPE_CHECKING: # avoid circular imports from yt.fields.field_info_container import FieldInfoContainer def get_bulk(data, basename, unit): if data.has_field_parameter(f"bulk_{basename}"): bulk = data.get_field_parameter(f"bulk_{basename}") else: bulk = [0, 0, 0] * unit return bulk def create_magnitude_field( registry, basename, field_units, ftype="gas", slice_info=None, validators=None, sampling_type=None, ): axis_order = registry.ds.coordinates.axis_order field_components = [(ftype, f"{basename}_{ax}") for ax in axis_order] if sampling_type is None: sampling_type = "local" def _magnitude(field, data): fn = field_components[0] if data.has_field_parameter(f"bulk_{basename}"): fn = (fn[0], f"relative_{fn[1]}") d = data[fn] mag = (d) ** 2 for idim in range(1, registry.ds.dimensionality): fn = field_components[idim] if data.has_field_parameter(f"bulk_{basename}"): fn = (fn[0], 
f"relative_{fn[1]}") mag += (data[fn]) ** 2 return np.sqrt(mag) registry.add_field( (ftype, f"{basename}_magnitude"), sampling_type=sampling_type, function=_magnitude, units=field_units, validators=validators, ) def create_relative_field( registry, basename, field_units, ftype="gas", slice_info=None, validators=None ): axis_order = registry.ds.coordinates.axis_order field_components = [(ftype, f"{basename}_{ax}") for ax in axis_order] def relative_vector(ax): def _relative_vector(field, data): iax = axis_order.index(ax) d = data[field_components[iax]] bulk = get_bulk(data, basename, d.unit_quantity) return d - bulk[iax] return _relative_vector for d in axis_order: registry.add_field( (ftype, f"relative_{basename}_{d}"), sampling_type="local", function=relative_vector(d), units=field_units, validators=validators, ) def create_los_field( registry, basename, field_units, ftype="gas", slice_info=None, *, sampling_type="local", ): axis_order = registry.ds.coordinates.axis_order # Here we need to check if we are a particle field, so that we can # correctly identify the "bulk" field parameter corresponding to # this vector field. if sampling_type == "particle": basenm = basename.removeprefix("particle_") else: basenm = basename validators = [ ValidateParameter(f"bulk_{basenm}"), ValidateParameter("axis", {"axis": [0, 1, 2]}), ] field_comps = [(ftype, f"{basename}_{ax}") for ax in axis_order] def _los_field(field, data): if data.has_field_parameter(f"bulk_{basenm}"): fns = [(fc[0], f"relative_{fc[1]}") for fc in field_comps] else: fns = field_comps ax = data.get_field_parameter("axis") if is_sequence(ax): # Make sure this is a unit vector ax /= np.sqrt(np.dot(ax, ax)) ret = data[fns[0]] * ax[0] + data[fns[1]] * ax[1] + data[fns[2]] * ax[2] elif ax in [0, 1, 2]: ret = data[fns[ax]] else: raise NeedsParameter(["axis"]) return ret registry.add_field( (ftype, f"{basename}_los"), sampling_type=sampling_type, function=_los_field, units=field_units, validators=validators, display_name=rf"\mathrm{{Line of Sight {basename.capitalize()}}}", ) def create_squared_field( registry, basename, field_units, ftype="gas", slice_info=None, validators=None ): axis_order = registry.ds.coordinates.axis_order field_components = [(ftype, f"{basename}_{ax}") for ax in axis_order] def _squared(field, data): fn = field_components[0] if data.has_field_parameter(f"bulk_{basename}"): fn = (fn[0], f"relative_{fn[1]}") squared = data[fn] * data[fn] for idim in range(1, registry.ds.dimensionality): fn = field_components[idim] squared += data[fn] * data[fn] return squared registry.add_field( (ftype, f"{basename}_squared"), sampling_type="local", function=_squared, units=field_units, validators=validators, ) def create_vector_fields( registry: "FieldInfoContainer", basename: FieldName, field_units, ftype: FieldType = "gas", slice_info=None, ) -> None: # slice_info would be the left, the right, and the factor. # For example, with the old Enzo-ZEUS fields, this would be: # slice(None, -2, None) # slice(1, -1, None) # 1.0 # Otherwise, we default to a centered difference. if slice_info is None: sl_left = slice(None, -2, None) sl_right = slice(2, None, None) div_fac = 2.0 else: sl_left, sl_right, div_fac = slice_info axis_order = registry.ds.coordinates.axis_order xn, yn, zn = ((ftype, f"{basename}_{ax}") for ax in axis_order) # Is this safe? 
if registry.ds.dimensionality < 3: zn = ("index", "zeros") if registry.ds.dimensionality < 2: yn = ("index", "zeros") create_relative_field( registry, basename, field_units, ftype=ftype, slice_info=slice_info, validators=[ValidateParameter(f"bulk_{basename}")], ) create_magnitude_field( registry, basename, field_units, ftype=ftype, slice_info=slice_info, validators=[ValidateParameter(f"bulk_{basename}")], ) axis_names = registry.ds.coordinates.axis_order geometry: Geometry = registry.ds.geometry if geometry is Geometry.CARTESIAN: # The following fields are invalid for curvilinear geometries def _spherical_radius_component(field, data): """The spherical radius component of the vector field Relative to the coordinate system defined by the *normal* vector, *center*, and *bulk_* field parameters. """ normal = data.get_field_parameter("normal") vectors = obtain_relative_velocity_vector( data, (xn, yn, zn), f"bulk_{basename}" ) theta = data["index", "spherical_theta"] phi = data["index", "spherical_phi"] rv = get_sph_r_component(vectors, theta, phi, normal) # Now, anywhere that radius is in fact zero, we want to zero out our # return values. rv[np.isnan(theta)] = 0.0 return rv registry.add_field( (ftype, f"{basename}_spherical_radius"), sampling_type="local", function=_spherical_radius_component, units=field_units, validators=[ ValidateParameter("normal"), ValidateParameter("center"), ValidateParameter(f"bulk_{basename}"), ], ) create_los_field( registry, basename, field_units, ftype=ftype, slice_info=slice_info ) def _radial(field, data): return data[ftype, f"{basename}_spherical_radius"] def _radial_absolute(field, data): return np.abs(data[ftype, f"{basename}_spherical_radius"]) def _tangential(field, data): return np.sqrt( data[ftype, f"{basename}_spherical_theta"] ** 2.0 + data[ftype, f"{basename}_spherical_phi"] ** 2.0 ) registry.add_field( (ftype, f"radial_{basename}"), sampling_type="local", function=_radial, units=field_units, validators=[ValidateParameter("normal"), ValidateParameter("center")], ) registry.add_field( (ftype, f"radial_{basename}_absolute"), sampling_type="local", function=_radial_absolute, units=field_units, ) registry.add_field( (ftype, f"tangential_{basename}"), sampling_type="local", function=_tangential, units=field_units, ) def _spherical_theta_component(field, data): """The spherical theta component of the vector field Relative to the coordinate system defined by the *normal* vector, *center*, and *bulk_* field parameters. """ normal = data.get_field_parameter("normal") vectors = obtain_relative_velocity_vector( data, (xn, yn, zn), f"bulk_{basename}" ) theta = data["index", "spherical_theta"] phi = data["index", "spherical_phi"] return get_sph_theta_component(vectors, theta, phi, normal) registry.add_field( (ftype, f"{basename}_spherical_theta"), sampling_type="local", function=_spherical_theta_component, units=field_units, validators=[ ValidateParameter("normal"), ValidateParameter("center"), ValidateParameter(f"bulk_{basename}"), ], ) def _spherical_phi_component(field, data): """The spherical phi component of the vector field Relative to the coordinate system defined by the *normal* vector, *center*, and *bulk_* field parameters. 
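        Concretely, the returned quantity is the relative vector field
        dotted with the azimuthal unit vector of that frame, i.e.
        ``phi_hat = (-sin(phi), cos(phi), 0)`` (this describes what
        ``get_sph_phi_component`` computes below).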
""" normal = data.get_field_parameter("normal") vectors = obtain_relative_velocity_vector( data, (xn, yn, zn), f"bulk_{basename}" ) phi = data["index", "spherical_phi"] return get_sph_phi_component(vectors, phi, normal) registry.add_field( (ftype, f"{basename}_spherical_phi"), sampling_type="local", function=_spherical_phi_component, units=field_units, validators=[ ValidateParameter("normal"), ValidateParameter("center"), ValidateParameter(f"bulk_{basename}"), ], ) def _cp_vectors(ax): def _cp_val(field, data): vec = data.get_field_parameter(f"cp_{ax}_vec") tr = data[xn[0], f"relative_{xn[1]}"] * vec.d[0] tr += data[yn[0], f"relative_{yn[1]}"] * vec.d[1] tr += data[zn[0], f"relative_{zn[1]}"] * vec.d[2] return tr return _cp_val for ax in "xyz": registry.add_field( (ftype, f"cutting_plane_{basename}_{ax}"), sampling_type="local", function=_cp_vectors(ax), units=field_units, ) def _divergence(field, data): ds = div_fac * just_one(data["index", "dx"]) f = data[xn[0], f"relative_{xn[1]}"][sl_right, 1:-1, 1:-1] / ds f -= data[xn[0], f"relative_{xn[1]}"][sl_left, 1:-1, 1:-1] / ds ds = div_fac * just_one(data["index", "dy"]) f += data[yn[0], f"relative_{yn[1]}"][1:-1, sl_right, 1:-1] / ds f -= data[yn[0], f"relative_{yn[1]}"][1:-1, sl_left, 1:-1] / ds ds = div_fac * just_one(data["index", "dz"]) f += data[zn[0], f"relative_{zn[1]}"][1:-1, 1:-1, sl_right] / ds f -= data[zn[0], f"relative_{zn[1]}"][1:-1, 1:-1, sl_left] / ds new_field = data.ds.arr(np.zeros(data[xn].shape, dtype="f8"), str(f.units)) new_field[1:-1, 1:-1, 1:-1] = f return new_field def _divergence_abs(field, data): return np.abs(data[ftype, f"{basename}_divergence"]) field_units = Unit(field_units, registry=registry.ds.unit_registry) div_units = field_units / registry.ds.unit_system["length"] registry.add_field( (ftype, f"{basename}_divergence"), sampling_type="local", function=_divergence, units=div_units, validators=[ValidateSpatial(1), ValidateParameter(f"bulk_{basename}")], ) registry.add_field( (ftype, f"{basename}_divergence_absolute"), sampling_type="local", function=_divergence_abs, units=div_units, ) def _tangential_over_magnitude(field, data): tr = ( data[ftype, f"tangential_{basename}"] / data[ftype, f"{basename}_magnitude"] ) return np.abs(tr) registry.add_field( (ftype, f"tangential_over_{basename}_magnitude"), sampling_type="local", function=_tangential_over_magnitude, take_log=False, ) def _cylindrical_radius_component(field, data): """The cylindrical radius component of the vector field Relative to the coordinate system defined by the *normal* vector, *center*, and *bulk_* field parameters. """ normal = data.get_field_parameter("normal") vectors = obtain_relative_velocity_vector( data, (xn, yn, zn), f"bulk_{basename}" ) theta = data["index", "cylindrical_theta"] return get_cyl_r_component(vectors, theta, normal) registry.add_field( (ftype, f"{basename}_cylindrical_radius"), sampling_type="local", function=_cylindrical_radius_component, units=field_units, validators=[ValidateParameter("normal")], ) def _cylindrical_theta_component(field, data): """The cylindrical theta component of the vector field Relative to the coordinate system defined by the *normal* vector, *center*, and *bulk_* field parameters. 
""" normal = data.get_field_parameter("normal") vectors = obtain_relative_velocity_vector( data, (xn, yn, zn), f"bulk_{basename}" ) theta = data["index", "cylindrical_theta"].copy() theta = np.tile(theta, (3,) + (1,) * len(theta.shape)) return get_cyl_theta_component(vectors, theta, normal) registry.add_field( (ftype, f"{basename}_cylindrical_theta"), sampling_type="local", function=_cylindrical_theta_component, units=field_units, validators=[ ValidateParameter("normal"), ValidateParameter("center"), ValidateParameter(f"bulk_{basename}"), ], ) def _cylindrical_z_component(field, data): """The cylindrical z component of the vector field Relative to the coordinate system defined by the *normal* vector, *center*, and *bulk_* field parameters. """ normal = data.get_field_parameter("normal") vectors = obtain_relative_velocity_vector( data, (xn, yn, zn), f"bulk_{basename}" ) return get_cyl_z_component(vectors, normal) registry.add_field( (ftype, f"{basename}_cylindrical_z"), sampling_type="local", function=_cylindrical_z_component, units=field_units, validators=[ ValidateParameter("normal"), ValidateParameter("center"), ValidateParameter(f"bulk_{basename}"), ], ) elif ( geometry is Geometry.POLAR or geometry is Geometry.CYLINDRICAL or geometry is Geometry.SPHERICAL ): # Create Cartesian fields for curvilinear coordinates if geometry is Geometry.POLAR: def _cartesian_x(field, data): return data[ftype, f"{basename}_r"] * np.cos(data[ftype, "theta"]) def _cartesian_y(field, data): return data[ftype, f"{basename}_r"] * np.sin(data[ftype, "theta"]) def _cartesian_z(field, data): return data[ftype, f"{basename}_z"] elif geometry is Geometry.CYLINDRICAL: def _cartesian_x(field, data): if data.ds.dimensionality == 2: return data[ftype, f"{basename}_r"] elif data.ds.dimensionality == 3: return data[ftype, f"{basename}_r"] * np.cos( data[ftype, "theta"] ) - data[ftype, f"{basename}_theta"] * np.sin(data[ftype, "theta"]) def _cartesian_y(field, data): if data.ds.dimensionality == 2: return data[ftype, f"{basename}_z"] elif data.ds.dimensionality == 3: return data[ftype, f"{basename}_r"] * np.sin( data[ftype, "theta"] ) + data[ftype, f"{basename}_theta"] * np.cos(data[ftype, "theta"]) def _cartesian_z(field, data): return data[ftype, f"{basename}_z"] elif geometry is Geometry.SPHERICAL: def _cartesian_x(field, data): if data.ds.dimensionality == 2: return data[ftype, f"{basename}_r"] * np.sin( data[ftype, "theta"] ) + data[ftype, f"{basename}_theta"] * np.cos(data[ftype, "theta"]) elif data.ds.dimensionality == 3: return ( data[ftype, f"{basename}_r"] * np.sin(data[ftype, "theta"]) * np.cos(data[ftype, "phi"]) + data[ftype, f"{basename}_theta"] * np.cos(data[ftype, "theta"]) * np.cos(data[ftype, "phi"]) - data[ftype, f"{basename}_phi"] * np.sin(data[ftype, "phi"]) ) def _cartesian_y(field, data): if data.ds.dimensionality == 2: return data[ftype, f"{basename}_r"] * np.cos( data[ftype, "theta"] ) - data[f"{basename}_theta"] * np.sin(data[ftype, "theta"]) elif data.ds.dimensionality == 3: return ( data[ftype, f"{basename}_r"] * np.sin(data[ftype, "theta"]) * np.sin(data[ftype, "phi"]) + data[ftype, f"{basename}_theta"] * np.cos(data[ftype, "theta"]) * np.sin(data[ftype, "phi"]) + data[ftype, f"{basename}_phi"] * np.cos(data[ftype, "phi"]) ) def _cartesian_z(field, data): return data[ftype, f"{basename}_r"] * np.cos( data[ftype, "theta"] ) - data[ftype, f"{basename}_theta"] * np.sin(data[ftype, "theta"]) else: assert_never(geometry) # it's redundant to define a cartesian x field for 1D data if 
registry.ds.dimensionality >= 2: registry.add_field( (ftype, f"{basename}_cartesian_x"), sampling_type="local", function=_cartesian_x, units=field_units, display_field=True, ) registry.add_field( (ftype, f"{basename}_cartesian_y"), sampling_type="local", function=_cartesian_y, units=field_units, display_field=True, ) registry.add_field( (ftype, f"{basename}_cartesian_z"), sampling_type="local", function=_cartesian_z, units=field_units, display_field=True, ) elif ( geometry is Geometry.GEOGRAPHIC or geometry is Geometry.INTERNAL_GEOGRAPHIC or geometry is Geometry.SPECTRAL_CUBE ): # nothing to do pass else: assert_never(geometry) if registry.ds.geometry is Geometry.SPHERICAL: def _cylindrical_radius_component(field, data): return ( np.sin(data[ftype, "theta"]) * data[ftype, f"{basename}_r"] + np.cos(data[ftype, "theta"]) * data[ftype, f"{basename}_theta"] ) registry.add_field( (ftype, f"{basename}_cylindrical_radius"), sampling_type="local", function=_cylindrical_radius_component, units=field_units, display_field=True, ) registry.alias( (ftype, f"{basename}_cylindrical_z"), (ftype, f"{basename}_cartesian_z"), ) # define vector components appropriate for 'theta'-normal plots. # The projection plane is called 'conic plane' in the code base as well as docs. # Contrary to 'poloidal' and 'toroidal', this isn't a widely spread # naming convention, but here it is exposed to users as part of dedicated # field names, so it needs to be stable. def _conic_x(field, data): rax = axis_names.index("r") pax = axis_names.index("phi") bc = data.get_field_parameter(f"bulk_{basename}") return np.cos(data[ftype, "phi"]) * ( data[ftype, f"{basename}_r"] - bc[rax] ) - np.sin(data[ftype, "phi"]) * (data[ftype, f"{basename}_phi"] - bc[pax]) def _conic_y(field, data): rax = axis_names.index("r") pax = axis_names.index("phi") bc = data.get_field_parameter(f"bulk_{basename}") return np.sin(data[ftype, "phi"]) * ( data[ftype, f"{basename}_r"] - bc[rax] ) + np.cos(data[ftype, "phi"]) * (data[ftype, f"{basename}_phi"] - bc[pax]) if registry.ds.dimensionality == 3: registry.add_field( (ftype, f"{basename}_conic_x"), sampling_type="local", function=_conic_x, units=field_units, display_field=True, ) registry.add_field( (ftype, f"{basename}_conic_y"), sampling_type="local", function=_conic_y, units=field_units, display_field=True, ) def create_averaged_field( registry, basename, field_units, ftype="gas", slice_info=None, validators=None, weight="mass", ): if validators is None: validators = [] validators += [ValidateSpatial(1, [(ftype, basename)])] def _averaged_field(field, data): def atleast_4d(array): if array.ndim == 3: return array[..., None] else: return array nx, ny, nz, ngrids = atleast_4d(data[ftype, basename]).shape new_field = data.ds.arr( np.zeros((nx - 2, ny - 2, nz - 2, ngrids), dtype=np.float64), (just_one(data[ftype, basename]) * just_one(data[ftype, weight])).units, ) weight_field = data.ds.arr( np.zeros((nx - 2, ny - 2, nz - 2, ngrids), dtype=np.float64), data[ftype, weight].units, ) i_i, j_i, k_i = np.mgrid[0:3, 0:3, 0:3] for i, j, k in zip(i_i.ravel(), j_i.ravel(), k_i.ravel(), strict=True): sl = ( slice(i, nx - (2 - i)), slice(j, ny - (2 - j)), slice(k, nz - (2 - k)), ) new_field += ( atleast_4d(data[ftype, basename])[sl] * atleast_4d(data[ftype, weight])[sl] ) weight_field += atleast_4d(data[ftype, weight])[sl] # Now some fancy footwork new_field2 = data.ds.arr( np.zeros((nx, ny, nz, ngrids)), data[ftype, basename].units ) new_field2[1:-1, 1:-1, 1:-1] = new_field / weight_field if data[ftype, 
basename].ndim == 3: return new_field2[..., 0] else: return new_field2 registry.add_field( (ftype, f"averaged_{basename}"), sampling_type="cell", function=_averaged_field, units=field_units, validators=validators, ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/xray_emission_fields.py0000644000175100001770000003332514714401662020266 0ustar00runnerdockerimport os import numpy as np from yt.config import ytcfg from yt.fields.derived_field import DerivedField from yt.funcs import mylog, only_on_root, parse_h5_attr from yt.units.yt_array import YTArray, YTQuantity from yt.utilities.cosmology import Cosmology from yt.utilities.exceptions import YTException, YTFieldNotFound from yt.utilities.linear_interpolators import ( BilinearFieldInterpolator, UnilinearFieldInterpolator, ) from yt.utilities.on_demand_imports import _h5py as h5py data_version = {"cloudy": 2, "apec": 3} data_url = "http://yt-project.org/data" def _get_data_file(table_type, data_dir=None): data_file = "%s_emissivity_v%d.h5" % (table_type, data_version[table_type]) if data_dir is None: supp_data_dir = ytcfg.get("yt", "supp_data_dir") data_dir = supp_data_dir if os.path.exists(supp_data_dir) else "." data_path = os.path.join(data_dir, data_file) if not os.path.exists(data_path): msg = f"Failed to find emissivity data file {data_file}! Please download from {data_url}" mylog.error(msg) raise OSError(msg) return data_path class EnergyBoundsException(YTException): def __init__(self, lower, upper): self.lower = lower self.upper = upper def __str__(self): return f"Energy bounds are {self.lower:e} to {self.upper:e} keV." class ObsoleteDataException(YTException): def __init__(self, table_type): data_file = "%s_emissivity_v%d.h5" % (table_type, data_version[table_type]) self.msg = "X-ray emissivity data is out of date.\n" self.msg += f"Download the latest data from {data_url}/{data_file}." def __str__(self): return self.msg class XrayEmissivityIntegrator: r"""Class for making X-ray emissivity fields. Uses hdf5 data tables generated from Cloudy and AtomDB/APEC. Initialize an XrayEmissivityIntegrator object. Parameters ---------- table_type : string The type of data to use when computing the emissivity values. If "cloudy", a file called "cloudy_emissivity.h5" is used, for photoionized plasmas. If, "apec", a file called "apec_emissivity.h5" is used for collisionally ionized plasmas. These files contain emissivity tables for primordial elements and for metals at solar metallicity for the energy range 0.1 to 100 keV. redshift : float, optional The cosmological redshift of the source of the field. Default: 0.0. data_dir : string, optional The location to look for the data table in. If not supplied, the file will be looked for in the location of the YT_DEST environment variable or in the current working directory. use_metals : boolean, optional If set to True, the emissivity will include contributions from metals. 
        Default: True
    """

    def __init__(self, table_type, redshift=0.0, data_dir=None, use_metals=True):
        filename = _get_data_file(table_type, data_dir=data_dir)
        only_on_root(mylog.info, "Loading emissivity data from %s", filename)
        in_file = h5py.File(filename, mode="r")
        if "info" in in_file.attrs:
            only_on_root(mylog.info, parse_h5_attr(in_file, "info"))
        if parse_h5_attr(in_file, "version") != data_version[table_type]:
            raise ObsoleteDataException(table_type)
        else:
            only_on_root(
                mylog.info,
                f"X-ray '{table_type}' emissivity data version: "
                f"{parse_h5_attr(in_file, 'version')}.",
            )
        self.log_T = in_file["log_T"][:]
        self.emissivity_primordial = in_file["emissivity_primordial"][:]
        if "log_nH" in in_file:
            self.log_nH = in_file["log_nH"][:]
        if use_metals:
            self.emissivity_metals = in_file["emissivity_metals"][:]
        self.ebin = YTArray(in_file["E"], "keV")
        in_file.close()
        self.dE = np.diff(self.ebin)
        self.emid = 0.5 * (self.ebin[1:] + self.ebin[:-1]).to("erg")
        self.redshift = redshift

    def get_interpolator(self, data_type, e_min, e_max, energy=True):
        data = getattr(self, f"emissivity_{data_type}")
        if not energy:
            data = data[..., :] / self.emid.v
        e_min = YTQuantity(e_min, "keV") * (1.0 + self.redshift)
        e_max = YTQuantity(e_max, "keV") * (1.0 + self.redshift)
        if (e_min - self.ebin[0]) / e_min < -1e-3 or (
            e_max - self.ebin[-1]
        ) / e_max > 1e-3:
            raise EnergyBoundsException(self.ebin[0], self.ebin[-1])
        e_is, e_ie = np.digitize([e_min, e_max], self.ebin)
        e_is = np.clip(e_is - 1, 0, self.ebin.size - 1)
        e_ie = np.clip(e_ie, 0, self.ebin.size - 1)
        my_dE = self.dE[e_is:e_ie].copy()
        # clip edge bins if the requested range is smaller
        my_dE[0] -= e_min - self.ebin[e_is]
        my_dE[-1] -= self.ebin[e_ie] - e_max
        interp_data = (data[..., e_is:e_ie] * my_dE).sum(axis=-1)
        if data.ndim == 2:
            emiss = UnilinearFieldInterpolator(
                np.log10(interp_data),
                [self.log_T[0], self.log_T[-1]],
                "log_T",
                truncate=True,
            )
        else:
            emiss = BilinearFieldInterpolator(
                np.log10(interp_data),
                [self.log_nH[0], self.log_nH[-1], self.log_T[0], self.log_T[-1]],
                ["log_nH", "log_T"],
                truncate=True,
            )
        return emiss


def add_xray_emissivity_field(
    ds,
    e_min,
    e_max,
    redshift=0.0,
    metallicity=("gas", "metallicity"),
    table_type="cloudy",
    data_dir=None,
    cosmology=None,
    dist=None,
    ftype="gas",
):
    r"""Create X-ray emissivity fields for a given energy range.

    Parameters
    ----------
    e_min : float
        The minimum energy in keV for the energy band.
    e_max : float
        The maximum energy in keV for the energy band.
    redshift : float, optional
        The cosmological redshift of the source of the field. Default: 0.0.
    metallicity : str or tuple of str or float, optional
        Either the name of a metallicity field or a single floating-point
        number specifying a spatially constant metallicity. Must be in
        solar units. If set to None, no metals will be assumed. Default:
        ("gas", "metallicity")
    table_type : string, optional
        The type of emissivity table to be used when creating the fields.
        Options are "cloudy" or "apec". Default: "cloudy"
    data_dir : string, optional
        The location to look for the data table in. If not supplied, the file
        will be looked for in the location of the YT_DEST environment variable
        or in the current working directory.
    cosmology : :class:`~yt.utilities.cosmology.Cosmology`, optional
        If set and redshift > 0.0, this cosmology will be used when computing
        the cosmological dependence of the emission fields. If not set, yt's
        default LCDM cosmology will be used.
dist : (value, unit) tuple or :class:`~yt.units.yt_array.YTQuantity`, optional The distance to the source, used for making intensity fields. You should only use this if your source is nearby (not cosmological). Default: None ftype : string, optional The field type to use when creating the fields, default "gas" This will create at least three fields: "xray_emissivity_{e_min}_{e_max}_keV" (erg s^-1 cm^-3) "xray_luminosity_{e_min}_{e_max}_keV" (erg s^-1) "xray_photon_emissivity_{e_min}_{e_max}_keV" (photons s^-1 cm^-3) and if a redshift or distance is specified it will create two others: "xray_intensity_{e_min}_{e_max}_keV" (erg s^-1 cm^-3 arcsec^-2) "xray_photon_intensity_{e_min}_{e_max}_keV" (photons s^-1 cm^-3 arcsec^-2) These latter two are really only useful when making projections. Examples -------- >>> import yt >>> ds = yt.load("sloshing_nomag2_hdf5_plt_cnt_0100") >>> yt.add_xray_emissivity_field(ds, 0.5, 2) >>> p = yt.ProjectionPlot( ... ds, "x", ("gas", "xray_emissivity_0.5_2_keV"), table_type="apec" ... ) >>> p.save() """ if not isinstance(metallicity, float) and metallicity is not None: try: metallicity = ds._get_field_info(metallicity) except YTFieldNotFound as e: raise RuntimeError( f"Your dataset does not have a {metallicity} field! " + "Perhaps you should specify a constant metallicity instead?" ) from e if table_type == "cloudy": # Cloudy wants to scale by nH**2 other_n = "H_nuclei_density" else: # APEC wants to scale by nH*ne other_n = "El_number_density" def _norm_field(field, data): return data[ftype, "H_nuclei_density"] * data[ftype, other_n] ds.add_field( (ftype, "norm_field"), _norm_field, units="cm**-6", sampling_type="local" ) my_si = XrayEmissivityIntegrator(table_type, data_dir=data_dir, redshift=redshift) em_0 = my_si.get_interpolator("primordial", e_min, e_max) emp_0 = my_si.get_interpolator("primordial", e_min, e_max, energy=False) if metallicity is not None: em_Z = my_si.get_interpolator("metals", e_min, e_max) emp_Z = my_si.get_interpolator("metals", e_min, e_max, energy=False) def _emissivity_field(field, data): with np.errstate(all="ignore"): dd = { "log_nH": np.log10(data[ftype, "H_nuclei_density"]), "log_T": np.log10(data[ftype, "temperature"]), } my_emissivity = np.power(10, em_0(dd)) if metallicity is not None: if isinstance(metallicity, DerivedField): my_Z = data[metallicity.name].to_value("Zsun") else: my_Z = metallicity my_emissivity += my_Z * np.power(10, em_Z(dd)) my_emissivity[np.isnan(my_emissivity)] = 0 return data[ftype, "norm_field"] * YTArray(my_emissivity, "erg*cm**3/s") emiss_name = (ftype, f"xray_emissivity_{e_min}_{e_max}_keV") ds.add_field( emiss_name, function=_emissivity_field, display_name=rf"\epsilon_{{X}} ({e_min}-{e_max} keV)", sampling_type="local", units="erg/cm**3/s", ) def _luminosity_field(field, data): return data[emiss_name] * data[ftype, "mass"] / data[ftype, "density"] lum_name = (ftype, f"xray_luminosity_{e_min}_{e_max}_keV") ds.add_field( lum_name, function=_luminosity_field, display_name=rf"\rm{{L}}_{{X}} ({e_min}-{e_max} keV)", sampling_type="local", units="erg/s", ) def _photon_emissivity_field(field, data): dd = { "log_nH": np.log10(data[ftype, "H_nuclei_density"]), "log_T": np.log10(data[ftype, "temperature"]), } my_emissivity = np.power(10, emp_0(dd)) if metallicity is not None: if isinstance(metallicity, DerivedField): my_Z = data[metallicity.name].to_value("Zsun") else: my_Z = metallicity my_emissivity += my_Z * np.power(10, emp_Z(dd)) return data[ftype, "norm_field"] * YTArray(my_emissivity, "photons*cm**3/s") 
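    # Note: emp_0 / emp_Z above were built with energy=False, i.e. the
    # tabulated emissivities were divided by the photon energy at each bin
    # midpoint before being integrated over the band, so the interpolant
    # used in _photon_emissivity_field yields a photon-count emissivity
    # (photons*cm**3/s) rather than an energy emissivity (erg*cm**3/s).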
phot_name = (ftype, f"xray_photon_emissivity_{e_min}_{e_max}_keV") ds.add_field( phot_name, function=_photon_emissivity_field, display_name=rf"\epsilon_{{X}} ({e_min}-{e_max} keV)", sampling_type="local", units="photons/cm**3/s", ) fields = [emiss_name, lum_name, phot_name] if redshift > 0.0 or dist is not None: if dist is None: if cosmology is None: if hasattr(ds, "cosmology"): cosmology = ds.cosmology else: cosmology = Cosmology() D_L = cosmology.luminosity_distance(0.0, redshift) angular_scale = 1.0 / cosmology.angular_scale(0.0, redshift) dist_fac = ds.quan( 1.0 / (4.0 * np.pi * D_L * D_L * angular_scale * angular_scale).v, "rad**-2", ) else: redshift = 0.0 # Only for local sources! try: # normal behaviour, if dist is a YTQuantity dist = ds.quan(dist.value, dist.units) except AttributeError as e: try: dist = ds.quan(*dist) except (RuntimeError, TypeError): raise TypeError( "dist should be a YTQuantity or a (value, unit) tuple!" ) from e angular_scale = dist / ds.quan(1.0, "radian") dist_fac = ds.quan( 1.0 / (4.0 * np.pi * dist * dist * angular_scale * angular_scale).v, "rad**-2", ) ei_name = (ftype, f"xray_intensity_{e_min}_{e_max}_keV") def _intensity_field(field, data): I = dist_fac * data[emiss_name] return I.in_units("erg/cm**3/s/arcsec**2") ds.add_field( ei_name, function=_intensity_field, display_name=rf"I_{{X}} ({e_min}-{e_max} keV)", sampling_type="local", units="erg/cm**3/s/arcsec**2", ) i_name = (ftype, f"xray_photon_intensity_{e_min}_{e_max}_keV") def _photon_intensity_field(field, data): I = (1.0 + redshift) * dist_fac * data[phot_name] return I.in_units("photons/cm**3/s/arcsec**2") ds.add_field( i_name, function=_photon_intensity_field, display_name=rf"I_{{X}} ({e_min}-{e_max} keV)", sampling_type="local", units="photons/cm**3/s/arcsec**2", ) fields += [ei_name, i_name] for field in fields: mylog.info("Adding ('%s','%s') field.", field[0], field[1]) return fields ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2871518 yt-4.4.0/yt/frontends/0000755000175100001770000000000014714401715014222 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/__init__.py0000644000175100001770000000234714714401662016342 0ustar00runnerdocker__all__ = [ "adaptahop", "ahf", "amrex", "amrvac", "art", "artio", "athena", "athena_pp", "boxlib", # the boxlib frontend is deprecated, use 'amrex' "cf_radial", "chimera", "chombo", "cholla", "enzo_e", "enzo", "exodus_ii", "fits", "flash", "gadget", # breaking alphabetical order intentionnally here: # arepo and eagle depend on gadget. Importing them first causes # unintended side effects. 
See # https://github.com/yt-project/yt/issues/4563 "arepo", "eagle", "gadget_fof", "gamer", "gdf", "gizmo", "halo_catalog", "http_stream", "moab", "nc4_cm1", "open_pmd", "owls", "owls_subfind", "parthenon", "ramses", "rockstar", "sdf", "stream", "swift", "tipsy", "ytdata", ] from functools import lru_cache @lru_cache(maxsize=None) def __getattr__(value): import importlib if value == "_all": # recursively import all frontends for _ in __all__: __getattr__(_) return if value not in __all__: raise AttributeError(f"yt.frontends has no attribute {value!r}") return importlib.import_module(f"yt.frontends.{value}.api") ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2871518 yt-4.4.0/yt/frontends/adaptahop/0000755000175100001770000000000014714401715016163 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/adaptahop/__init__.py0000644000175100001770000000005014714401662020270 0ustar00runnerdocker""" API for AdaptaHOP frontend. """ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/adaptahop/api.py0000644000175100001770000000027214714401662017310 0ustar00runnerdocker""" API for AdaptaHOP frontend """ from . import tests from .data_structures import AdaptaHOPDataset from .fields import AdaptaHOPFieldInfo from .io import IOHandlerAdaptaHOPBinary ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/adaptahop/data_structures.py0000644000175100001770000002770114714401662021761 0ustar00runnerdocker""" Data structures for AdaptaHOP frontend. """ import os import re from itertools import product import numpy as np from yt.data_objects.selection_objects.data_selection_objects import ( YTSelectionContainer, ) from yt.data_objects.static_output import Dataset from yt.frontends.halo_catalog.data_structures import HaloCatalogFile from yt.funcs import mylog, setdefaultattr from yt.geometry.particle_geometry_handler import ParticleIndex from yt.units import Mpc # type: ignore from yt.utilities.cython_fortran_utils import FortranFile from .definitions import ADAPTAHOP_TEMPLATES, ATTR_T, HEADER_ATTRIBUTES from .fields import AdaptaHOPFieldInfo class AdaptaHOPParticleIndex(ParticleIndex): def _setup_filenames(self): template = self.dataset.filename_template ndoms = self.dataset.file_count cls = self.dataset._file_class if ndoms > 1: self.data_files = [ cls(self.dataset, self.io, template % {"num": i}, i, None) for i in range(ndoms) ] else: self.data_files = [ cls( self.dataset, self.io, self.dataset.parameter_filename, 0, None, ) ] class AdaptaHOPDataset(Dataset): _index_class = AdaptaHOPParticleIndex _file_class = HaloCatalogFile _field_info_class = AdaptaHOPFieldInfo # AdaptaHOP internally assumes 1Mpc == 3.0824cm _code_length_to_Mpc = (1.0 * Mpc).to_value("cm") / 3.08e24 _header_attributes: ATTR_T | None = None _halo_attributes: ATTR_T | None = None def __init__( self, filename, dataset_type="adaptahop_binary", n_ref=16, num_zones=2, units_override=None, unit_system="cgs", parent_ds=None, ): self.n_ref = n_ref self.num_zones = num_zones if parent_ds is None: raise RuntimeError( "The AdaptaHOP frontend requires a parent dataset " "to be passed as `parent_ds`." 
) self.parent_ds = parent_ds self._guess_headers_from_file(filename) super().__init__( filename, dataset_type, units_override=units_override, unit_system=unit_system, ) def _set_code_unit_attributes(self): setdefaultattr(self, "length_unit", self.quan(self._code_length_to_Mpc, "Mpc")) setdefaultattr(self, "mass_unit", self.quan(1e11, "Msun")) setdefaultattr(self, "velocity_unit", self.quan(1.0, "km / s")) setdefaultattr(self, "time_unit", self.length_unit / self.velocity_unit) def _guess_headers_from_file(self, filename) -> None: with FortranFile(filename) as fpu: ok = False for dp, longint in product((True, False), (True, False)): fpu.seek(0) try: header_attributes = HEADER_ATTRIBUTES(double=dp, longint=longint) fpu.read_attrs(header_attributes) ok = True break except (ValueError, OSError): pass if not ok: raise OSError(f"Could not read headers from file {filename}") istart = fpu.tell() fpu.seek(0, 2) iend = fpu.tell() # Try different templates ok = False for name, cls in ADAPTAHOP_TEMPLATES.items(): fpu.seek(istart) attributes = cls(longint, dp).HALO_ATTRIBUTES mylog.debug("Trying %s(longint=%s, dp=%s)", name, longint, dp) try: # Try to read two halos to be sure fpu.read_attrs(attributes) if fpu.tell() < iend: fpu.read_attrs(attributes) ok = True break except (ValueError, OSError): continue if not ok: raise OSError(f"Could not guess fields from file {filename}") self._header_attributes = header_attributes self._halo_attributes = attributes def _parse_parameter_file(self): with FortranFile(self.parameter_filename) as fpu: params = fpu.read_attrs(self._header_attributes) self.dimensionality = 3 # Domain related things self.filename_template = self.parameter_filename self.file_count = 1 nz = self.num_zones self.domain_dimensions = np.ones(3, "int32") * nz # Set things up self.cosmological_simulation = 1 self.current_redshift = (1.0 / params["aexp"]) - 1.0 self.omega_matter = params["omega_t"] self.current_time = self.quan(params["age"], "Gyr") self.omega_lambda = 0.724 # hard coded if not inferred from parent ds self.hubble_constant = 0.7 # hard coded if not inferred from parent ds self._periodicity = (True, True, True) self.particle_types = "halos" self.particle_types_raw = "halos" # Inherit stuff from parent ds -- if they exist for k in ("omega_lambda", "hubble_constant", "omega_matter", "omega_radiation"): v = getattr(self.parent_ds, k, None) if v is not None: setattr(self, k, v) self.domain_left_edge = np.array([0.0, 0.0, 0.0]) self.domain_right_edge = ( self.parent_ds.domain_right_edge.to_value("Mpc") * self._code_length_to_Mpc ) self.parameters.update(params) @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: fname = os.path.split(filename)[1] if not fname.startswith("tree_bricks") or not re.match( r"^tree_bricks\d{3}$", fname ): return False return True def halo(self, halo_id, ptype="DM"): """ Create a data container to get member particles and individual values from halos. Halo mass, position, and velocity are set as attributes. Halo IDs are accessible through the field, "member_ids". Other fields that are one value per halo are accessible as normal. The field list for halo objects can be seen in `ds.halos_field_list`. Parameters ---------- halo_id : int The id of the halo or group ptype : str, default DM The type of particles the halo is made of. 
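
        Examples
        --------
        The paths here are illustrative; they mirror the example on
        ``AdaptaHOPHaloContainer`` below.

        >>> import yt
        >>> ds = yt.load(
        ...     "output_00080_halos/tree_bricks080",
        ...     parent_ds=yt.load("output_00080/info_00080.txt"),
        ... )
        >>> halo = ds.halo(1, ptype="io")
        >>> halo.mass, halo.member_ids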
""" return AdaptaHOPHaloContainer( ptype, halo_id, parent_ds=self.parent_ds, halo_ds=self ) class AdaptaHOPHaloContainer(YTSelectionContainer): """ Create a data container to get member particles and individual values from halos. Halo mass, position, and velocity are set as attributes. Halo IDs are accessible through the field, "member_ids". Other fields that are one value per halo are accessible as normal. The field list for halo objects can be seen in `ds.halos_field_list`. Parameters ---------- ptype : string The type of halo, either "Group" for the main halo or "Subhalo" for subhalos. particle_identifier : int or tuple of ints The halo or subhalo id. If requesting a subhalo, the id can also be given as a tuple of the main halo id and subgroup id, such as (1, 4) for subgroup 4 of halo 1. Attributes ---------- particle_identifier : int The id of the halo or subhalo. group_identifier : int For subhalos, the id of the enclosing halo. subgroup_identifier : int For subhalos, the relative id of the subhalo within the enclosing halo. particle_number : int Number of particles in the halo. mass : float Halo mass. position : array of floats Halo position. velocity : array of floats Halo velocity. Note ---- Relevant Fields: * particle_number - number of particles * subhalo_number - number of subhalos * group_identifier - id of parent group for subhalos Examples -------- >>> import yt >>> ds = yt.load( ... "output_00080_halos/tree_bricks080", ... parent_ds=yt.load("output_00080/info_00080.txt"), ... ) >>> ds.halo(1, ptype="io") >>> print(halo.mass) 119.22804260253906 100000000000.0*Msun >>> print(halo.position) [26.80901299 24.35978484 5.45388672] code_length >>> print(halo.velocity) [3306394.95849609 8584366.60766602 9982682.80029297] cm/s >>> print(halo["io", "particle_mass"]) [3.19273578e-06 3.19273578e-06 ... 3.19273578e-06 3.19273578e-06] code_mass >>> # particle ids for this halo >>> print(halo.member_ids) [ 48 64 176 ... 999947 1005471 1006779] """ _type_name = "halo" _con_args = ("ptype", "particle_identifier", "parent_ds", "halo_ds") _spatial = False # Do not register it to prevent .halo from being attached to all datasets _skip_add = True def __init__(self, ptype, particle_identifier, parent_ds, halo_ds): if ptype not in parent_ds.particle_types_raw: raise RuntimeError( f'Possible halo types are {parent_ds.particle_types_raw}, supplied "{ptype}".' 
) # Setup required fields self._dimensionality = 3 self.ds = parent_ds # Halo-specific self.halo_ds = halo_ds self.ptype = ptype self.particle_identifier = particle_identifier # Set halo particles data self._set_halo_properties() self._set_halo_member_data() # Call constructor super().__init__(parent_ds, {}) def __repr__(self): return "%s_%s_%09d" % (self.ds, self.ptype, self.particle_identifier) def __getitem__(self, key): return self.region[key] @property def ihalo(self): """The index in the catalog of the halo""" ihalo = getattr(self, "_ihalo", None) if ihalo: return ihalo else: halo_id = self.particle_identifier halo_ids = self.halo_ds.r["halos", "particle_identifier"].astype("int64") ihalo = np.searchsorted(halo_ids, halo_id) assert halo_ids[ihalo] == halo_id self._ihalo = ihalo return self._ihalo def _set_halo_member_data(self): ptype = self.ptype halo_ds = self.halo_ds parent_ds = self.ds ihalo = self.ihalo # Note: convert to physical units to prevent errors when jumping # from halo_ds to parent_ds halo_pos = halo_ds.r["halos", "particle_position"][ihalo, :].to_value("Mpc") halo_vel = halo_ds.r["halos", "particle_velocity"][ihalo, :].to_value("km/s") halo_radius = halo_ds.r["halos", "r"][ihalo].to_value("Mpc") members = self.member_ids ok = False f = 1 / 1.1 center = parent_ds.arr(halo_pos, "Mpc") radius = parent_ds.arr(halo_radius, "Mpc") # Find smallest sphere containing all particles while not ok: f *= 1.1 sph = parent_ds.sphere(center, f * radius) part_ids = sph[ptype, "particle_identity"].astype("int64") ok = len(np.lib.arraysetops.setdiff1d(members, part_ids)) == 0 # Set bulk velocity sph.set_field_parameter("bulk_velocity", (halo_vel, "km/s")) # Build subregion that only contains halo particles reg = sph.cut_region( ['np.isin(obj["io", "particle_identity"].astype("int64"), members)'], locals={"members": members, "np": np}, ) self.sphere = sph self.region = reg def _set_halo_properties(self): ihalo = self.ihalo ds = self.halo_ds # Add position, mass, velocity member functions for attr_name in ("mass", "position", "velocity"): setattr(self, attr_name, ds.r["halos", f"particle_{attr_name}"][ihalo]) # Add members self.member_ids = self.halo_ds.index.io.members(ihalo).astype(np.int64) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/adaptahop/definitions.py0000644000175100001770000001234714714401662021060 0ustar00runnerdocker""" Data structures for AdaptaHOP """ from yt.funcs import mylog ATTR_T = tuple[tuple[tuple[str, ...] | str, int, str], ...] 
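# A short key for reading the ATTR_T layouts in this module: each entry is a
# (name-or-names, count, dtype-char) triple consumed by
# FortranFile.read_attrs.  For instance, in HEADER_ATTRIBUTES below,
# ("npart", 1, int_type) reads one integer into "npart", while
# (("nhalos", "nsubs"), 2, "i") reads a single record holding two int32s.
# Following the templates, "i"/"l" denote 32/64-bit integers and "f"/"d"
# denote 32/64-bit floats; a count of -1 (e.g. "particle_identities" in the
# halo templates) reads a variable-length vector.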
def HEADER_ATTRIBUTES(*, double: bool, longint: bool) -> ATTR_T: int_type = "l" if longint else "i" float_type = "d" if double else "f" return ( ("npart", 1, int_type), ("massp", 1, float_type), ("aexp", 1, float_type), ("omega_t", 1, float_type), ("age", 1, float_type), (("nhalos", "nsubs"), 2, "i"), ) ADAPTAHOP_TEMPLATES = {} class AdaptaHOPDefTemplate: # this is a mixin class def __init_subclass__(cls, *args, **kwargs): super().__init_subclass__(*args, **kwargs) mylog.debug("Registering AdaptaHOP template class %s", cls.__name__) ADAPTAHOP_TEMPLATES[cls.__name__] = cls def __init__(self, longint, double_precision): self.longint = longint self.double_precision = double_precision class AdaptaHOPOld(AdaptaHOPDefTemplate): @property def HALO_ATTRIBUTES(self) -> ATTR_T: int_type = "l" if self.longint else "i" float_type = "d" if self.double_precision else "f" return ( ("npart", 1, int_type), ("particle_identities", -1, int_type), ("particle_identifier", 1, "i"), # this is the halo id, always an int32 ("timestep", 1, "i"), ( ( "level", "host_id", "first_subhalo_id", "n_subhalos", "next_subhalo_id", ), 5, "i", ), ("particle_mass", 1, float_type), (("raw_position_x", "raw_position_y", "raw_position_z"), 3, float_type), ( ("particle_velocity_x", "particle_velocity_y", "particle_velocity_z"), 3, float_type, ), ( ( "particle_angular_momentum_x", "particle_angular_momentum_y", "particle_angular_momentum_z", ), 3, float_type, ), (("r", "a", "b", "c"), 4, float_type), (("ek", "ep", "etot"), 3, float_type), ("spin", 1, float_type), ( ( "virial_radius", "virial_mass", "virial_temperature", "virial_velocity", ), 4, float_type, ), (("rho0", "R_c"), 2, float_type), ) class AdaptaHOPNewNoContam(AdaptaHOPDefTemplate): @property def HALO_ATTRIBUTES(self) -> ATTR_T: int_type = "l" if self.longint else "i" float_type = "d" if self.double_precision else "f" return ( ("npart", 1, int_type), ("particle_identities", -1, int_type), ("particle_identifier", 1, "i"), # this is the halo id, always an int32 ("timestep", 1, "i"), ( ( "level", "host_id", "first_subhalo_id", "n_subhalos", "next_subhalo_id", ), 5, "i", ), ("particle_mass", 1, float_type), ("npart_tot", 1, int_type), ("particle_mass_tot", 1, float_type), (("raw_position_x", "raw_position_y", "raw_position_z"), 3, float_type), ( ("particle_velocity_x", "particle_velocity_y", "particle_velocity_z"), 3, float_type, ), ( ( "particle_angular_momentum_x", "particle_angular_momentum_y", "particle_angular_momentum_z", ), 3, float_type, ), (("r", "a", "b", "c"), 4, float_type), (("ek", "ep", "etot"), 3, float_type), ("spin", 1, float_type), ("velocity_dispersion", 1, float_type), ( ( "virial_radius", "virial_mass", "virial_temperature", "virial_velocity", ), 4, float_type, ), (("rmax", "vmax"), 2, float_type), ("concentration", 1, float_type), (("radius_200", "mass_200"), 2, float_type), (("radius_50", "mass_50"), 2, float_type), ("radius_profile", -1, float_type), ("rho_profile", -1, float_type), (("rho0", "R_c"), 2, float_type), ) class AdaptaHOPNewContam(AdaptaHOPNewNoContam): @property def HALO_ATTRIBUTES(self) -> ATTR_T: attrs = list(super().HALO_ATTRIBUTES) int_type = "l" if self.longint else "i" float_type = "d" if self.double_precision else "f" return tuple( attrs + [ ("contaminated", 1, "i"), (("m_contam", "mtot_contam"), 2, float_type), (("n_contam", "ntot_contam"), 2, int_type), ] ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 
yt-4.4.0/yt/frontends/adaptahop/fields.py0000644000175100001770000000570214714401662020010 0ustar00runnerdocker""" AdaptaHOP-specific fields """ from yt._typing import KnownFieldsT from yt.fields.field_info_container import FieldInfoContainer m_units = "1e11 * Msun" r_units = "Mpc" v_units = "km / s" l_units = "1e11 * Msun * Mpc * km / s" e_units = "1e11 * Msun * km**2 / s**2" dens_units = "1e11 * Msun / Mpc**3" class AdaptaHOPFieldInfo(FieldInfoContainer): known_particle_fields: KnownFieldsT = ( ("particle_identifier", ("", [], "Halo Identity")), ("raw_position_x", (r_units, [], None)), ("raw_position_y", (r_units, [], None)), ("raw_position_z", (r_units, [], None)), ("r", (r_units, [], "Halo radius")), ("a", (r_units, [], "Halo semi-major axis")), ("b", (r_units, [], "Halo semi-medium axis")), ("c", (r_units, [], "Halo semi-minor axis")), ("particle_velocity_x", (v_units, [], "Halo velocity x")), ("particle_velocity_y", (v_units, [], "Halo velocity y")), ("particle_velocity_z", (v_units, [], "Halo velocity z")), ("particle_angular_momentum_x", (l_units, [], "Halo Angular Momentum x")), ("particle_angular_momentum_y", (l_units, [], "Halo Angular Momentum y")), ("particle_angular_momentum_z", (l_units, [], "Halo Angular Momentum z")), ("particle_mass", (m_units, [], "Halo mass")), ("ek", (e_units, [], "Halo Kinetic Energy")), ("ep", (e_units, [], "Halo Gravitational Energy")), ("etot", (e_units, [], "Halo Total Energy")), ("spin", ("", [], "Halo Spin Parameter")), # Virial parameters ("virial_radius", (r_units, [], "Halo Virial Radius")), ("virial_mass", (m_units, [], "Halo Virial Mass")), ("virial_temperature", ("K", [], "Halo Virial Temperature")), ("virial_velocity", (v_units, [], "Halo Virial Velocity")), # NFW parameters ("rho0", (dens_units, [], "Halo NFW Density")), ("R_c", (r_units, [], "Halo NFW Scale Radius")), ("velocity_dispersion", ("km/s", [], "Velocity Dispersion")), ("radius_200", (r_units, [], r"$R_\mathrm{200}$")), ("radius_50", (r_units, [], r"$R_\mathrm{50}$")), ("mass_200", (m_units, [], r"$M_\mathrm{200}$")), ("mass_50", (m_units, [], r"$M_\mathrm{50}$")), # Contamination ("contaminated", ("", [], "Contaminated")), ("m_contam", (m_units, [], "Contaminated Mass")), ) def setup_particle_fields(self, ptype): super().setup_particle_fields(ptype) # Add particle position def generate_pos_field(d): shift = self.ds.domain_width[0] / 2 def closure(field, data): return data["halos", f"raw_position_{d}"] + shift return closure for k in "xyz": fun = generate_pos_field(k) self.add_field( ("halos", f"particle_position_{k}"), sampling_type="particle", function=fun, units="Mpc", ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/adaptahop/io.py0000644000175100001770000002123114714401662017144 0ustar00runnerdocker""" AdaptaHOP data-file handling function """ from functools import partial from operator import attrgetter import numpy as np from yt.utilities.cython_fortran_utils import FortranFile from yt.utilities.io_handler import BaseParticleIOHandler from .definitions import ATTR_T class IOHandlerAdaptaHOPBinary(BaseParticleIOHandler): _dataset_type = "adaptahop_binary" _offsets = None # Location of halos in the file _particle_positions = None # Buffer of halo position def _read_fluid_selection(self, chunks, selector, fields, size): raise NotImplementedError def _read_chunk_data(self, chunk, fields): raise NotImplementedError def _yield_coordinates(self, data_file): yield "halos", self._get_particle_positions() def
_read_particle_coords(self, chunks, ptf): # This will read chunks and yield the results. # Only support halo reading for now. assert len(ptf) == 1 assert list(ptf.keys())[0] == "halos" ptype = "halos" for data_file in self._sorted_chunk_iterator(chunks): pcount = ( data_file.ds.parameters["nhalos"] + data_file.ds.parameters["nsubs"] ) if pcount == 0: continue pos = self._get_particle_positions() yield ptype, [pos[:, i] for i in range(3)], 0.0 def _read_particle_fields(self, chunks, ptf, selector): # Now we have all the sizes, and we can allocate # Only support halo reading for now. assert len(ptf) == 1 assert list(ptf.keys())[0] == "halos" def iterate_over_attributes(attr_list): for attr, *_ in attr_list: if isinstance(attr, tuple): yield from attr else: yield attr halo_attributes = self.ds._halo_attributes attr_pos = partial(_find_attr_position, halo_attributes=halo_attributes) for data_file in self._sorted_chunk_iterator(chunks): pcount = ( data_file.ds.parameters["nhalos"] + data_file.ds.parameters["nsubs"] ) if pcount == 0: continue ptype = "halos" field_list0 = sorted(ptf[ptype], key=attr_pos) field_list_pos = [f"raw_position_{k}" for k in "xyz"] field_list = sorted(set(field_list0 + field_list_pos), key=attr_pos) with FortranFile(self.ds.parameter_filename) as fpu: params = fpu.read_attrs(self.ds._header_attributes) todo = _todo_from_attributes(field_list, self.ds._halo_attributes) nhalos = params["nhalos"] + params["nsubs"] data = np.zeros((nhalos, len(field_list))) for ihalo in range(nhalos): jj = 0 for it in todo: if isinstance(it, int): fpu.skip(it) else: tmp = fpu.read_attrs(it) for key in iterate_over_attributes(it): v = tmp[key] if key not in field_list: continue data[ihalo, jj] = v jj += 1 ipos = [field_list.index(k) for k in field_list_pos] w = self.ds.domain_width.to("code_length")[0].value / 2 x, y, z = (data[:, i] + w for i in ipos) mask = selector.select_points(x, y, z, 0.0) del x, y, z if mask is None: continue for field in field_list0: i = field_list.index(field) yield (ptype, field), data[mask, i] def _count_particles(self, data_file): nhalos = data_file.ds.parameters["nhalos"] + data_file.ds.parameters["nsubs"] return {"halos": nhalos} def _identify_fields(self, data_file): fields = [] for attr, _1, _2 in self.ds._halo_attributes: if isinstance(attr, str): fields.append(("halos", attr)) else: for a in attr: fields.append(("halos", a)) return fields, {} # ----------------------------------------------------- # Specific to AdaptaHOP def _get_particle_positions(self): """Read the particles and return them in code_units""" data = getattr(self, "_particle_positions", None) if data is not None: return data with FortranFile(self.ds.parameter_filename) as fpu: params = fpu.read_attrs(self.ds._header_attributes) todo = _todo_from_attributes( ( "particle_identifier", "raw_position_x", "raw_position_y", "raw_position_z", ), self.ds._halo_attributes, ) nhalos = params["nhalos"] + params["nsubs"] data = np.zeros((nhalos, 3)) offset_map = np.zeros((nhalos, 2), dtype="int64") for ihalo in range(nhalos): ipos = fpu.tell() for it in todo: if isinstance(it, int): fpu.skip(it) elif it[0][0] != "particle_identifier": # Small optimisation here: we can read as vector # dt = fpu.read_attrs(it) # data[ihalo, 0] = dt['particle_position_x'] # data[ihalo, 1] = dt['particle_position_y'] # data[ihalo, 2] = dt['particle_position_z'] data[ihalo, :] = fpu.read_vector(it[0][-1]) else: halo_id = fpu.read_int() offset_map[ihalo, 0] = halo_id offset_map[ihalo, 1] = ipos data = self.ds.arr(data, 
"code_length") + self.ds.domain_width / 2 # Make sure halos are loaded in increasing halo_id order assert np.all(np.diff(offset_map[:, 0]) > 0) # Cache particle positions as one do not expect a large number of halos anyway self._particle_positions = data self._offsets = offset_map return data def members(self, ihalo): offset = self._offsets[ihalo, 1] todo = _todo_from_attributes(("particle_identities",), self.ds._halo_attributes) with FortranFile(self.ds.parameter_filename) as fpu: fpu.seek(offset) if isinstance(todo[0], int): fpu.skip(todo.pop(0)) members = fpu.read_attrs(todo.pop(0))["particle_identities"] return members def _sorted_chunk_iterator(self, chunks): data_files = self._get_data_files(chunks) yield from sorted(data_files, key=attrgetter("filename")) def _todo_from_attributes(attributes: ATTR_T, halo_attributes: ATTR_T): # Helper function to generate a list of read-skip instructions given a list of # attributes. This is used to skip fields most of the fields when reading # the tree_brick files. iskip = 0 todo: list[int | list[tuple[tuple[str, ...] | str, int, str]]] = [] attributes = tuple(set(attributes)) for i, (attrs, l, k) in enumerate(halo_attributes): attrs_list: tuple[str, ...] if isinstance(attrs, tuple): if not all(isinstance(a, str) for a in attrs): raise TypeError(f"Expected a single str or a tuple of str, got {attrs}") attrs_list = attrs else: attrs_list = (attrs,) ok = False for attr in attrs_list: if attr in attributes: ok = True break if i == 0: if ok: state = "read" todo.append([]) else: state = "skip" if ok: if state == "skip": # Switched from skip to read, store skip information and start # new read list todo.append(iskip) todo.append([]) iskip = 0 if not isinstance(todo[-1], list): raise TypeError todo[-1].append((attrs, l, k)) state = "read" else: iskip += 1 state = "skip" if state == "skip" and iskip > 0: todo.append(iskip) return todo def _find_attr_position(key, halo_attributes: ATTR_T) -> int: j = 0 for attrs, *_ in halo_attributes: if not isinstance(attrs, tuple): attrs = (attrs,) for a in attrs: if key == a: return j j += 1 raise KeyError ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2871518 yt-4.4.0/yt/frontends/adaptahop/tests/0000755000175100001770000000000014714401715017325 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/adaptahop/tests/__init__.py0000644000175100001770000000000014714401662021425 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/adaptahop/tests/test_outputs.py0000644000175100001770000000412114714401662022460 0ustar00runnerdocker""" AdaptaHOP frontend tests """ import numpy as np import pytest from yt.frontends.adaptahop.data_structures import AdaptaHOPDataset from yt.testing import requires_file from yt.utilities.answer_testing.framework import data_dir_load r0 = "output_00080/info_00080.txt" brick_old = "output_00080_halos/tree_bricks080" brick_new = "output_00080_new_halos/tree_bricks080" @requires_file(r0) @requires_file(brick_old) @requires_file(brick_new) @pytest.mark.parametrize("brick", [brick_old, brick_new]) def test_opening(brick): ds_parent = data_dir_load(r0) ds = data_dir_load(brick, kwargs={"parent_ds": ds_parent}) ds.index assert isinstance(ds, AdaptaHOPDataset) @requires_file(r0) @requires_file(brick_old) @requires_file(brick_new) @pytest.mark.parametrize("brick", [brick_old, brick_new]) def 
test_field_access(brick): ds_parent = data_dir_load(r0) ds = data_dir_load(brick, kwargs={"parent_ds": ds_parent}) skip_list = ("particle_identities", "mesh_id", "radius_profile", "rho_profile") fields = [ (ptype, field) for (ptype, field) in ds.derived_field_list if (ptype == "halos") and (field not in skip_list) ] ad = ds.all_data() for ptype, field in fields: ad[ptype, field] @requires_file(r0) @requires_file(brick_old) @requires_file(brick_new) @pytest.mark.parametrize("brick", [brick_old, brick_new]) def test_get_halo(brick): ds_parent = data_dir_load(r0) ds = data_dir_load(brick, kwargs={"parent_ds": ds_parent}) halo = ds.halo(1, ptype="io") # Check halo object has position, velocity, mass and members attributes for attr_name in ("mass", "position", "velocity", "member_ids"): getattr(halo, attr_name) members = np.sort(halo.member_ids) # Check sphere contains all the members id_sphere = halo.sphere["io", "particle_identity"].astype("int64") assert len(np.setdiff1d(members, id_sphere)) == 0 # Check region contains *only* halo particles id_reg = np.sort(halo["io", "particle_identity"].astype("int64")) np.testing.assert_equal(members, id_reg) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2911518 yt-4.4.0/yt/frontends/ahf/0000755000175100001770000000000014714401715014760 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ahf/__init__.py0000644000175100001770000000000014714401662017060 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ahf/api.py0000644000175100001770000000016514714401662016106 0ustar00runnerdockerfrom .data_structures import AHFHalosDataset from .fields import AHFHalosFieldInfo from .io import IOHandlerAHFHalos ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ahf/data_structures.py0000644000175100001770000001210314714401662020544 0ustar00runnerdockerimport glob import os import numpy as np from yt.data_objects.static_output import Dataset from yt.frontends.halo_catalog.data_structures import HaloCatalogFile from yt.funcs import setdefaultattr from yt.geometry.particle_geometry_handler import ParticleIndex from yt.utilities.cosmology import Cosmology from .fields import AHFHalosFieldInfo class AHFHalosFile(HaloCatalogFile): def __init__(self, ds, io, filename, file_id, range=None): root, _ = os.path.splitext(filename) candidates = glob.glob(root + "*.AHF_halos") if len(candidates) == 1: filename = candidates[0] else: raise ValueError( f"Expected exactly one AHF_halos file, found {len(candidates)}." ) self.col_names = self._read_column_names(filename) super().__init__(ds, io, filename, file_id, range) def read_data(self, usecols=None): return np.genfromtxt(self.filename, names=self.col_names, usecols=usecols) def _read_column_names(self, filename): with open(filename) as f: line = f.readline() # Remove leading '#' line = line[1:] names = line.split() # Remove trailing '()' names = [name.split("(")[0] for name in names] return names def _read_particle_positions(self, ptype, f=None): """ Read all particle positions in this file.
""" halos = self.read_data(usecols=["Xc", "Yc", "Zc"]) pos = np.empty((halos.size, 3), dtype="float64") for i, ax in enumerate("XYZ"): pos[:, i] = halos[f"{ax}c"].astype("float64") return pos class AHFHalosDataset(Dataset): _index_class = ParticleIndex _file_class = AHFHalosFile _field_info_class = AHFHalosFieldInfo def __init__( self, filename, dataset_type="ahf", n_ref=16, num_zones=2, units_override=None, unit_system="cgs", hubble_constant=1.0, ): root, _ = os.path.splitext(filename) self.log_filename = root + ".log" self.hubble_constant = hubble_constant self.n_ref = n_ref self.num_zones = num_zones super().__init__( filename, dataset_type=dataset_type, units_override=units_override, unit_system=unit_system, ) def _set_code_unit_attributes(self): setdefaultattr(self, "length_unit", self.quan(1.0, "kpccm/h")) setdefaultattr(self, "mass_unit", self.quan(1.0, "Msun/h")) setdefaultattr(self, "time_unit", self.quan(1.0, "s")) setdefaultattr(self, "velocity_unit", self.quan(1.0, "km/s")) def _parse_parameter_file(self): # Read all parameters. simu = self._read_log_simu() param = self._read_parameter() # Set up general information. self.filename_template = self.parameter_filename self.file_count = 1 self.parameters.update(param) self.particle_types = "halos" self.particle_types_raw = "halos" # Set up geometrical information. self.refine_by = 2 self.dimensionality = 3 nz = self.num_zones self.domain_dimensions = np.ones(self.dimensionality, "int32") * nz self.domain_left_edge = np.array([0.0, 0.0, 0.0]) # Note that boxsize is in Mpc but particle positions are in kpc. self.domain_right_edge = np.array([simu["boxsize"]] * 3) * 1000 self._periodicity = (True, True, True) # Set up cosmological information. self.cosmological_simulation = 1 self.current_redshift = param["z"] self.omega_lambda = simu["lambda0"] self.omega_matter = simu["omega0"] cosmo = Cosmology( hubble_constant=self.hubble_constant, omega_matter=self.omega_matter, omega_lambda=self.omega_lambda, ) self.current_time = cosmo.lookback_time(param["z"], 1e6).in_units("s") @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if not filename.endswith(".parameter"): return False with open(filename) as f: return f.readlines()[11].startswith("AHF") # Helper methods def _read_log_simu(self): simu = {} with open(self.log_filename) as f: for l in f: if l.startswith("simu."): name, val = l.split(":") key = name.strip().split(".")[1] try: val = float(val) except Exception: val = float.fromhex(val) simu[key] = val return simu def _read_parameter(self): param = {} with open(self.parameter_filename) as f: for l in f: words = l.split() if len(words) == 2: key, val = words try: val = float(val) param[key] = val except Exception: pass return param @property def _skip_cache(self): return True ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ahf/fields.py0000644000175100001770000000415214714401662016603 0ustar00runnerdockerfrom yt._typing import KnownFieldsT from yt.fields.field_info_container import FieldInfoContainer m_units = "Msun/h" p_units = "kpccm/h" r_units = "kpccm/h" v_units = "km/s" class AHFHalosFieldInfo(FieldInfoContainer): # See http://popia.ft.uam.es/AHF/files/AHF.pdf # and search for '*.AHF_halos'. 
known_particle_fields: KnownFieldsT = ( ("ID", ("", ["particle_identifier"], None)), ("hostHalo", ("", [], None)), ("numSubStruct", ("", [], None)), ("Mvir", (m_units, ["particle_mass"], "Virial Mass")), ("npart", ("", [], None)), ("Xc", (p_units, ["particle_position_x"], None)), ("Yc", (p_units, ["particle_position_y"], None)), ("Zc", (p_units, ["particle_position_z"], None)), ("VXc", (v_units, ["particle_velocity_x"], None)), ("VYc", (v_units, ["particle_velocity_y"], None)), ("VZc", (v_units, ["particle_velocity_z"], None)), ("Rvir", (r_units, ["virial_radius"], "Virial Radius")), ("Rmax", (r_units, [], None)), ("r2", (r_units, [], None)), ("mbp_offset", (r_units, [], None)), ("com_offset", (r_units, [], None)), ("Vmax", (v_units, [], None)), ("v_sec", (v_units, [], None)), ("sigV", (v_units, [], None)), ("lambda", ("", [], None)), ("lambdaE", ("", [], None)), ("Lx", ("", [], None)), ("Ly", ("", [], None)), ("Lz", ("", [], None)), ("b", ("", [], None)), ("c", ("", [], None)), ("Eax", ("", [], None)), ("Eay", ("", [], None)), ("Eaz", ("", [], None)), ("Ebx", ("", [], None)), ("Eby", ("", [], None)), ("Ebz", ("", [], None)), ("Ecx", ("", [], None)), ("Ecy", ("", [], None)), ("Ecz", ("", [], None)), ("ovdens", ("", [], None)), ("nbins", ("", [], None)), ("fMhires", ("", [], None)), ("Ekin", ("Msun/h*(km/s)**2", [], None)), ("Epot", ("Msun/h*(km/s)**2", [], None)), ("SurfP", ("Msun/h*(km/s)**2", [], None)), ("Phi0", ("(km/s)**2", [], None)), ("cNFW", ("", [], None)), ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ahf/io.py0000644000175100001770000000616114714401662015746 0ustar00runnerdockerfrom operator import attrgetter import numpy as np from yt.utilities.io_handler import BaseParticleIOHandler class IOHandlerAHFHalos(BaseParticleIOHandler): _particle_reader = False _dataset_type = "ahf" def _read_fluid_selection(self, chunks, selector, fields, size): raise NotImplementedError def _read_particle_coords(self, chunks, ptf): # This needs to *yield* a series of tuples of (ptype, (x, y, z), hsml). # chunks is a list of chunks, and ptf is a dict where the keys are # ptypes and the values are lists of fields. # Only support halo reading for now. assert len(ptf) == 1 assert list(ptf.keys())[0] == "halos" for data_file in self._sorted_chunk_iterator(chunks): pos = data_file._get_particle_positions("halos") x, y, z = (pos[:, i] for i in range(3)) yield "halos", (x, y, z), 0.0 def _yield_coordinates(self, data_file): halos = data_file.read_data(usecols=["Xc", "Yc", "Zc"]) x = halos["Xc"].astype("float64") y = halos["Yc"].astype("float64") z = halos["Zc"].astype("float64") yield "halos", np.asarray((x, y, z)).T def _read_particle_fields(self, chunks, ptf, selector): # This gets called after the arrays have been allocated. It needs to # yield ((ptype, field), data) where data is the masked results of # reading ptype, field and applying the selector to the data read in. # Selector objects have a .select_points(x,y,z) that returns a mask, so # you need to do your masking here. # Only support halo reading for now. 
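# For illustration: ptf maps a particle type to the list of requested
# on-disk columns, e.g. {"halos": ["Mvir", "Rvir", "Xc"]} -- hence the
# two assertions below.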
assert len(ptf) == 1 assert list(ptf.keys())[0] == "halos" for data_file in self._sorted_chunk_iterator(chunks): si, ei = data_file.start, data_file.end cols = [] for field_list in ptf.values(): cols.extend(field_list) cols = list(set(cols)) halos = data_file.read_data(usecols=cols) pos = data_file._get_particle_positions("halos") x, y, z = (pos[:, i] for i in range(3)) mask = selector.select_points(x, y, z, 0.0) del x, y, z if mask is None: continue for ptype, field_list in sorted(ptf.items()): for field in field_list: data = halos[field][si:ei][mask].astype("float64") yield (ptype, field), data def _count_particles(self, data_file): halos = data_file.read_data(usecols=["ID"]) nhalos = len(halos["ID"]) si, ei = data_file.start, data_file.end if None not in (si, ei): nhalos = np.clip(nhalos - si, 0, ei - si) return {"halos": nhalos} def _identify_fields(self, data_file): fields = [("halos", f) for f in data_file.col_names] return fields, {} def _sorted_chunk_iterator(self, chunks): # yield from sorted list of data_files data_files = self._get_data_files(chunks) yield from sorted(data_files, key=attrgetter("filename")) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2911518 yt-4.4.0/yt/frontends/ahf/tests/0000755000175100001770000000000014714401715016122 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ahf/tests/__init__.py0000644000175100001770000000000014714401662020222 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ahf/tests/test_outputs.py0000644000175100001770000000202114714401662021252 0ustar00runnerdockerimport os.path from numpy.testing import assert_equal from yt.frontends.ahf.api import AHFHalosDataset from yt.testing import ParticleSelectionComparison, requires_file from yt.utilities.answer_testing.framework import ( FieldValuesTest, data_dir_load, requires_ds, ) _fields = ( ("all", "particle_position_x"), ("all", "particle_position_y"), ("all", "particle_position_z"), ("all", "particle_mass"), ) ahf_halos = "ahf_halos/snap_N64L16_135.parameter" def load(filename): return data_dir_load(filename, kwargs={"hubble_constant": 0.7}) @requires_ds(ahf_halos) def test_fields_ahf_halos(): ds = load(ahf_halos) assert_equal(str(ds), os.path.basename(ahf_halos)) for field in _fields: yield FieldValuesTest(ds, field, particle_type=True) @requires_file(ahf_halos) def test_AHFHalosDataset(): ds = load(ahf_halos) assert isinstance(ds, AHFHalosDataset) ad = ds.all_data() ad["all", "particle_mass"] psc = ParticleSelectionComparison(ds) psc.run_defaults() ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2911518 yt-4.4.0/yt/frontends/amrex/0000755000175100001770000000000014714401715015336 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/amrex/__init__.py0000644000175100001770000000000014714401662017436 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/amrex/api.py0000644000175100001770000000070614714401662016465 0ustar00runnerdockerfrom .
import tests from .data_structures import ( AMReXDataset, AMReXHierarchy, BoxlibDataset, BoxlibGrid, BoxlibHierarchy, CastroDataset, MaestroDataset, NyxDataset, NyxHierarchy, OrionDataset, OrionHierarchy, WarpXDataset, WarpXHierarchy, ) from .fields import ( BoxlibFieldInfo, CastroFieldInfo, MaestroFieldInfo, NyxFieldInfo, WarpXFieldInfo, ) from .io import IOHandlerBoxlib ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/amrex/data_structures.py0000644000175100001770000017304514714401662021137 0ustar00runnerdockerimport glob import os import re from collections import namedtuple from functools import cached_property from stat import ST_CTIME import numpy as np from yt.data_objects.index_subobjects.grid_patch import AMRGridPatch from yt.data_objects.static_output import Dataset from yt.fields.field_info_container import FieldInfoContainer from yt.funcs import mylog, setdefaultattr from yt.geometry.api import Geometry from yt.geometry.grid_geometry_handler import GridIndex from yt.utilities.io_handler import io_registry from yt.utilities.lib.misc_utilities import get_box_grids_level from yt.utilities.parallel_tools.parallel_analysis_interface import parallel_root_only from .fields import ( BoxlibFieldInfo, CastroFieldInfo, MaestroFieldInfo, NyxFieldInfo, WarpXFieldInfo, ) # This is what we use to find scientific notation that might include d's # instead of e's. _scinot_finder = re.compile(r"[-+]?[0-9]*\.?[0-9]+([eEdD][-+]?[0-9]+)?") # This is the dimensions in the Cell_H file for each level # It is different for different dimensionalities, so we make a list _1dregx = r"-?\d+" _2dregx = r"-?\d+,-?\d+" _3dregx = r"-?\d+,-?\d+,-?\d+" _dim_finder = [ re.compile(rf"\(\(({ndregx})\) \(({ndregx})\) \({ndregx}\)\)$") for ndregx in (_1dregx, _2dregx, _3dregx) ] # This is the line that prefixes each set of data for a FAB in the FAB file # It is different for different dimensionalities, so we make a list _endian_regex = r"^FAB\ \(\(\d+,\ \([\d\ ]+\)\),\((\d+),\ \(([\d\ ]+)\)\)\)" _header_pattern = [ re.compile( rf"""{_endian_regex} # match `endianness` \( \(( {ndregx} )\) # match `start` \ \(( {ndregx} )\) # match `end` \ \(( {ndregx} )\) # match `centering` \) \ (-?\d+) # match `nc` $ # end of line """, re.VERBOSE, ) for ndregx in (_1dregx, _2dregx, _3dregx) ] class BoxlibGrid(AMRGridPatch): _id_offset = 0 _offset = -1 def __init__(self, grid_id, offset, filename=None, index=None): super().__init__(grid_id, filename, index) self._base_offset = offset self._parent_id = [] self._children_ids = [] self._pdata = {} def _prepare_grid(self): super()._prepare_grid() my_ind = self.id - self._id_offset self.start_index = self.index.grid_start_index[my_ind] def get_global_startindex(self): return self.start_index def _setup_dx(self): # has already been read in and stored in index self.dds = self.index.ds.arr(self.index.level_dds[self.Level, :], "code_length") self.field_data["dx"], self.field_data["dy"], self.field_data["dz"] = self.dds @property def Parent(self): if len(self._parent_id) == 0: return None return [self.index.grids[pid - self._id_offset] for pid in self._parent_id] @property def Children(self): return [self.index.grids[cid - self._id_offset] for cid in self._children_ids] def _get_offset(self, f): # This will either seek to the _offset or figure out the correct # _offset. 
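# For illustration: _base_offset points at this grid's FAB header line
# (e.g. a line starting with 'FAB ((8, ...'); the readline() below skips
# past that header so tell() records where the raw FAB data begins.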
if self._offset == -1: f.seek(self._base_offset, os.SEEK_SET) f.readline() self._offset = f.tell() return self._offset # We override here because we can have varying refinement levels def select_ires(self, dobj): mask = self._get_selector_mask(dobj.selector) if mask is None: return np.empty(0, dtype="int64") coords = np.empty(self._last_count, dtype="int64") coords[:] = self.Level + self.ds.level_offsets[self.Level] return coords # Override this as well, since refine_by can vary def _fill_child_mask(self, child, mask, tofill, dlevel=1): rf = self.ds.ref_factors[self.Level] if dlevel != 1: raise NotImplementedError gi, cgi = self.get_global_startindex(), child.get_global_startindex() startIndex = np.maximum(0, cgi // rf - gi) endIndex = np.minimum( (cgi + child.ActiveDimensions) // rf - gi, self.ActiveDimensions ) endIndex += startIndex == endIndex mask[ startIndex[0] : endIndex[0], startIndex[1] : endIndex[1], startIndex[2] : endIndex[2], ] = tofill class BoxLibParticleHeader: def __init__(self, ds, directory_name, is_checkpoint, extra_field_names=None): self.particle_type = directory_name header_filename = os.path.join(ds.output_dir, directory_name, "Header") with open(header_filename) as f: self.version_string = f.readline().strip() particle_real_type = self.version_string.split("_")[-1] known_real_types = {"double": np.float64, "single": np.float32} try: self.real_type = known_real_types[particle_real_type] except KeyError: mylog.warning( "yt did not recognize particle real type '%s'. Assuming 'double'.", particle_real_type, ) self.real_type = known_real_types["double"] self.int_type = np.int32 self.dim = int(f.readline().strip()) self.num_int_base = 2 + self.dim self.num_real_base = self.dim self.num_int_extra = 0 # this should be written by Boxlib, but isn't self.num_real_extra = int(f.readline().strip()) self.num_int = self.num_int_base + self.num_int_extra self.num_real = self.num_real_base + self.num_real_extra self.num_particles = int(f.readline().strip()) self.max_next_id = int(f.readline().strip()) self.finest_level = int(f.readline().strip()) self.num_levels = self.finest_level + 1 # Boxlib particles can be written in checkpoint or plotfile mode # The base integer fields are only there for checkpoints, but some # codes use the checkpoint format for plotting if not is_checkpoint: self.num_int_base = 0 self.num_int_extra = 0 self.num_int = 0 self.grids_per_level = np.zeros(self.num_levels, dtype="int64") self.data_map = {} for level_num in range(self.num_levels): self.grids_per_level[level_num] = int(f.readline().strip()) self.data_map[level_num] = {} pfd = namedtuple( "ParticleFileDescriptor", ["file_number", "num_particles", "offset"] ) for level_num in range(self.num_levels): for grid_num in range(self.grids_per_level[level_num]): entry = [int(val) for val in f.readline().strip().split()] self.data_map[level_num][grid_num] = pfd(*entry) self._generate_particle_fields(extra_field_names) def _generate_particle_fields(self, extra_field_names): # these are the 'base' integer fields self.known_int_fields = [ (self.particle_type, "particle_id"), (self.particle_type, "particle_cpu"), (self.particle_type, "particle_cell_x"), (self.particle_type, "particle_cell_y"), (self.particle_type, "particle_cell_z"), ] self.known_int_fields = self.known_int_fields[0 : self.num_int_base] # these are extra integer fields extra_int_fields = [ "particle_int_comp%d" % i for i in range(self.num_int_extra) ] self.known_int_fields.extend( [(self.particle_type, field) for field in extra_int_fields] ) # 
these are the base real fields self.known_real_fields = [ (self.particle_type, "particle_position_x"), (self.particle_type, "particle_position_y"), (self.particle_type, "particle_position_z"), ] self.known_real_fields = self.known_real_fields[0 : self.num_real_base] # these are the extras if extra_field_names is not None: assert len(extra_field_names) == self.num_real_extra else: extra_field_names = [ "particle_real_comp%d" % i for i in range(self.num_real_extra) ] self.known_real_fields.extend( [(self.particle_type, field) for field in extra_field_names] ) self.known_fields = self.known_int_fields + self.known_real_fields self.particle_int_dtype = np.dtype( [(t[1], self.int_type) for t in self.known_int_fields] ) self.particle_real_dtype = np.dtype( [(t[1], self.real_type) for t in self.known_real_fields] ) class AMReXParticleHeader: def __init__(self, ds, directory_name, is_checkpoint, extra_field_names=None): self.particle_type = directory_name header_filename = os.path.join(ds.output_dir, directory_name, "Header") self.real_component_names = [] self.int_component_names = [] with open(header_filename) as f: self.version_string = f.readline().strip() particle_real_type = self.version_string.split("_")[-1] if particle_real_type == "double": self.real_type = np.float64 elif particle_real_type == "single": self.real_type = np.float32 else: raise RuntimeError("yt did not recognize particle real type.") self.int_type = np.int32 self.dim = int(f.readline().strip()) self.num_int_base = 2 self.num_real_base = self.dim self.num_real_extra = int(f.readline().strip()) for _ in range(self.num_real_extra): self.real_component_names.append(f.readline().strip()) self.num_int_extra = int(f.readline().strip()) for _ in range(self.num_int_extra): self.int_component_names.append(f.readline().strip()) self.num_int = self.num_int_base + self.num_int_extra self.num_real = self.num_real_base + self.num_real_extra self.is_checkpoint = bool(int(f.readline().strip())) self.num_particles = int(f.readline().strip()) self.max_next_id = int(f.readline().strip()) self.finest_level = int(f.readline().strip()) self.num_levels = self.finest_level + 1 if not self.is_checkpoint: self.num_int_base = 0 self.num_int_extra = 0 self.num_int = 0 self.grids_per_level = np.zeros(self.num_levels, dtype="int64") self.data_map = {} for level_num in range(self.num_levels): self.grids_per_level[level_num] = int(f.readline().strip()) self.data_map[level_num] = {} pfd = namedtuple( "ParticleFileDescriptor", ["file_number", "num_particles", "offset"] ) for level_num in range(self.num_levels): for grid_num in range(self.grids_per_level[level_num]): entry = [int(val) for val in f.readline().strip().split()] self.data_map[level_num][grid_num] = pfd(*entry) self._generate_particle_fields() def _generate_particle_fields(self): # these are the 'base' integer fields self.known_int_fields = [ (self.particle_type, "particle_id"), (self.particle_type, "particle_cpu"), ] self.known_int_fields = self.known_int_fields[0 : self.num_int_base] self.known_int_fields.extend( [ (self.particle_type, "particle_" + field) for field in self.int_component_names ] ) # these are the base real fields self.known_real_fields = [ (self.particle_type, "particle_position_x"), (self.particle_type, "particle_position_y"), (self.particle_type, "particle_position_z"), ] self.known_real_fields = self.known_real_fields[0 : self.num_real_base] self.known_real_fields.extend( [ (self.particle_type, "particle_" + field) for field in self.real_component_names ] ) self.known_fields 
= self.known_int_fields + self.known_real_fields self.particle_int_dtype = np.dtype( [(t[1], self.int_type) for t in self.known_int_fields] ) self.particle_real_dtype = np.dtype( [(t[1], self.real_type) for t in self.known_real_fields] ) class BoxlibHierarchy(GridIndex): grid = BoxlibGrid def __init__(self, ds, dataset_type="boxlib_native"): self.dataset_type = dataset_type self.header_filename = os.path.join(ds.output_dir, "Header") self.directory = ds.output_dir self.particle_headers = {} GridIndex.__init__(self, ds, dataset_type) self._cache_endianness(self.grids[-1]) def _parse_index(self): """ read the global header file for an Boxlib plotfile output. """ self.max_level = self.dataset._max_level header_file = open(self.header_filename) self.dimensionality = self.dataset.dimensionality _our_dim_finder = _dim_finder[self.dimensionality - 1] DRE = self.dataset.domain_right_edge # shortcut DLE = self.dataset.domain_left_edge # shortcut # We can now skip to the point in the file we want to start parsing. header_file.seek(self.dataset._header_mesh_start) dx = [] for i in range(self.max_level + 1): dx.append([float(v) for v in next(header_file).split()]) # account for non-3d data sets if self.dimensionality < 2: dx[i].append(DRE[1] - DLE[1]) if self.dimensionality < 3: dx[i].append(DRE[2] - DLE[2]) self.level_dds = np.array(dx, dtype="float64") next(header_file) if self.ds.geometry == "cartesian": default_ybounds = (0.0, 1.0) default_zbounds = (0.0, 1.0) elif self.ds.geometry == "cylindrical": self.level_dds[:, 2] = 2 * np.pi default_ybounds = (0.0, 1.0) default_zbounds = (0.0, 2 * np.pi) elif self.ds.geometry == "spherical": # BoxLib only supports 1D spherical, so ensure # the other dimensions have the right extent. self.level_dds[:, 1] = np.pi self.level_dds[:, 2] = 2 * np.pi default_ybounds = (0.0, np.pi) default_zbounds = (0.0, 2 * np.pi) else: header_file.close() raise RuntimeError("Unknown BoxLib coordinate system.") if int(next(header_file)) != 0: header_file.close() raise RuntimeError("INTERNAL ERROR! This should be a zero.") # each level is one group with ngrids on it. # each grid has self.dimensionality number of lines of 2 reals self.grids = [] grid_counter = 0 for level in range(self.max_level + 1): vals = next(header_file).split() lev, ngrids = int(vals[0]), int(vals[1]) assert lev == level nsteps = int(next(header_file)) # NOQA for gi in range(ngrids): xlo, xhi = (float(v) for v in next(header_file).split()) if self.dimensionality > 1: ylo, yhi = (float(v) for v in next(header_file).split()) else: ylo, yhi = default_ybounds if self.dimensionality > 2: zlo, zhi = (float(v) for v in next(header_file).split()) else: zlo, zhi = default_zbounds self.grid_left_edge[grid_counter + gi, :] = [xlo, ylo, zlo] self.grid_right_edge[grid_counter + gi, :] = [xhi, yhi, zhi] # Now we get to the level header filename, which we open and parse. fn = os.path.join(self.dataset.output_dir, next(header_file).strip()) level_header_file = open(fn + "_H") level_dir = os.path.dirname(fn) # We skip the first two lines, which contain BoxLib header file # version and 'how' the data was written next(level_header_file) next(level_header_file) # Now we get the number of components ncomp_this_file = int(next(level_header_file)) # NOQA # Skip the next line, which contains the number of ghost zones next(level_header_file) # To decipher this next line, we expect something like: # (8 0 # where the first is the number of FABs in this level. 
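# For the example line "(8 0", split()[0] gives "(8"; stripping the
# leading parenthesis with [1:] leaves 8, the number of FABs (grids)
# to read on this level.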
ngrids = int(next(level_header_file).split()[0][1:]) # Now we can iterate over each and get the indices. for gi in range(ngrids): # components within it start, stop = _our_dim_finder.match(next(level_header_file)).groups() # fix for non-3d data # note we append '0' to both ends b/c of the '+1' in dims below start += ",0" * (3 - self.dimensionality) stop += ",0" * (3 - self.dimensionality) start = np.array(start.split(","), dtype="int64") stop = np.array(stop.split(","), dtype="int64") dims = stop - start + 1 self.grid_dimensions[grid_counter + gi, :] = dims self.grid_start_index[grid_counter + gi, :] = start # Now we read two more lines. The first of these is a close # parenthesis. next(level_header_file) # The next is again the number of grids next(level_header_file) # Now we iterate over grids to find their offsets in each file. for gi in range(ngrids): # Now we get the data file, at which point we're ready to # create the grid. dummy, filename, offset = next(level_header_file).split() filename = os.path.join(level_dir, filename) go = self.grid(grid_counter + gi, int(offset), filename, self) go.Level = self.grid_levels[grid_counter + gi, :] = level self.grids.append(go) level_header_file.close() grid_counter += ngrids # already read the filenames above... self.float_type = "float64" header_file.close() def _cache_endianness(self, test_grid): """ Cache the endianness and bytes per real of the grids by using a test grid and assuming that all grids have the same endianness. This is a pretty safe assumption since Boxlib uses one file per processor, and if you're running on a cluster with different endian processors, then you're on your own! """ # open the test file & grab the header with open(os.path.expanduser(test_grid.filename), "rb") as f: header = f.readline().decode("ascii", "ignore") bpr, endian, start, stop, centering, nc = ( _header_pattern[self.dimensionality - 1].search(header).groups() ) # Note that previously we were using a different value for BPR than we # use now. Here is an example set of information directly from BoxLib """ * DOUBLE data * FAB ((8, (64 11 52 0 1 12 0 1023)),(8, (1 2 3 4 5 6 7 8)))((0,0) (63,63) (0,0)) 27 # NOQA: E501 * FLOAT data * FAB ((8, (32 8 23 0 1 9 0 127)),(4, (1 2 3 4)))((0,0) (63,63) (0,0)) 27 """ if bpr == endian[0]: dtype = f"<f{bpr}" elif bpr == endian[-1]: dtype = f">f{bpr}" else: raise ValueError( "FAB header is neither big nor little endian. " "Perhaps the file is corrupt?"
) mylog.debug("FAB header suggests dtype of %s", dtype) self._dtype = np.dtype(dtype) def _populate_grid_objects(self): mylog.debug("Creating grid objects") self.grids = np.array(self.grids, dtype="object") self._reconstruct_parent_child() for i, grid in enumerate(self.grids): if (i % 1e4) == 0: mylog.debug("Prepared % 7i / % 7i grids", i, self.num_grids) grid._prepare_grid() grid._setup_dx() mylog.debug("Done creating grid objects") def _reconstruct_parent_child(self): if self.max_level == 0: return mask = np.empty(len(self.grids), dtype="int32") mylog.debug("First pass; identifying child grids") for i, grid in enumerate(self.grids): get_box_grids_level( self.grid_left_edge[i, :], self.grid_right_edge[i, :], self.grid_levels[i].item() + 1, self.grid_left_edge, self.grid_right_edge, self.grid_levels, mask, ) ids = np.where(mask.astype("bool")) # where is a tuple grid._children_ids = ids[0] + grid._id_offset mylog.debug("Second pass; identifying parents") for i, grid in enumerate(self.grids): # Second pass for child in grid.Children: child._parent_id.append(i + grid._id_offset) def _count_grids(self): # We can get everything from the Header file, but note that we're # duplicating some work done elsewhere. In a future where we don't # pre-allocate grid arrays, this becomes unnecessary. header_file = open(self.header_filename) header_file.seek(self.dataset._header_mesh_start) # Skip over the level dxs, geometry and the zero: [next(header_file) for i in range(self.dataset._max_level + 3)] # Now we need to be very careful, as we've seeked, and now we iterate. # Does this work? We are going to count the number of places that we # have a three-item line. The three items would be level, number of # grids, and then grid time. self.num_grids = 0 for line in header_file: if len(line.split()) != 3: continue self.num_grids += int(line.split()[1]) header_file.close() def _initialize_grid_arrays(self): super()._initialize_grid_arrays() self.grid_start_index = np.zeros((self.num_grids, 3), "int64") def _initialize_state_variables(self): """override to not re-initialize num_grids in AMRHierarchy.__init__""" self._parallel_locking = False self._data_file = None self._data_mode = None def _detect_output_fields(self): # This is all done in _parse_header_file self.field_list = [("boxlib", f) for f in self.dataset._field_list] self.field_indexes = {f[1]: i for i, f in enumerate(self.field_list)} # There are times when field_list may change. We copy it here to # avoid that possibility. 
self.field_order = list(self.field_list) def _setup_data_io(self): self.io = io_registry[self.dataset_type](self.dataset) def _determine_particle_output_type(self, directory_name): header_filename = os.path.join(self.ds.output_dir, directory_name, "Header") with open(header_filename) as f: version_string = f.readline().strip() if version_string.startswith("Version_Two"): return AMReXParticleHeader else: return BoxLibParticleHeader def _read_particles(self, directory_name, is_checkpoint, extra_field_names=None): pheader = self._determine_particle_output_type(directory_name) self.particle_headers[directory_name] = pheader( self.ds, directory_name, is_checkpoint, extra_field_names ) num_parts = self.particle_headers[directory_name].num_particles if self.ds._particle_type_counts is None: self.ds._particle_type_counts = {} self.ds._particle_type_counts[directory_name] = num_parts base = os.path.join(self.ds.output_dir, directory_name) if len(glob.glob(os.path.join(base, "Level_?", "DATA_????"))) > 0: base_particle_fn = os.path.join(base, "Level_%d", "DATA_%.4d") elif len(glob.glob(os.path.join(base, "Level_?", "DATA_?????"))) > 0: base_particle_fn = os.path.join(base, "Level_%d", "DATA_%.5d") else: return gid = 0 for lev, data in self.particle_headers[directory_name].data_map.items(): for pdf in data.values(): pdict = self.grids[gid]._pdata pdict[directory_name] = {} pdict[directory_name]["particle_filename"] = base_particle_fn % ( lev, pdf.file_number, ) pdict[directory_name]["offset"] = pdf.offset pdict[directory_name]["NumberOfParticles"] = pdf.num_particles self.grid_particle_count[gid] += pdf.num_particles self.grids[gid].NumberOfParticles += pdf.num_particles gid += 1 # add particle fields to field_list pfield_list = self.particle_headers[directory_name].known_fields self.field_list.extend(pfield_list) class BoxlibDataset(Dataset): """ This class is a stripped down class that simply reads and parses *filename*, without looking at the Boxlib index. """ _index_class = BoxlibHierarchy _field_info_class: type[FieldInfoContainer] = BoxlibFieldInfo _output_prefix = None _default_cparam_filename = "job_info" def __init__( self, output_dir, cparam_filename=None, fparam_filename=None, dataset_type="boxlib_native", storage_filename=None, units_override=None, unit_system="cgs", default_species_fields=None, ): """ The paramfile is usually called "inputs" and there may be a fortran inputs file usually called "probin" plotname here will be a directory name as per BoxLib, dataset_type will be Native (implemented here), IEEE (not yet implemented) or ASCII (not yet implemented.) """ self.fluid_types += ("boxlib",) self.output_dir = os.path.abspath(os.path.expanduser(output_dir)) cparam_filename = cparam_filename or self.__class__._default_cparam_filename self.cparam_filename = self._lookup_cparam_filepath( self.output_dir, cparam_filename=cparam_filename ) self.fparam_filename = self._localize_check(fparam_filename) self.storage_filename = storage_filename Dataset.__init__( self, output_dir, dataset_type, units_override=units_override, unit_system=unit_system, default_species_fields=default_species_fields, ) # These are still used in a few places. if "HydroMethod" not in self.parameters.keys(): self.parameters["HydroMethod"] = "boxlib" self.parameters["Time"] = 1.0 # default unit is 1... self.parameters["EOSType"] = -1 # default self.parameters["gamma"] = self.parameters.get("materials.gamma", 1.6667) def _localize_check(self, fn): if fn is None: return None # If the file exists, use it. 
If not, set it to None. root_dir = os.path.dirname(self.output_dir) full_fn = os.path.join(root_dir, fn) if os.path.exists(full_fn): return full_fn return None @classmethod def _is_valid(cls, filename, *args, cparam_filename=None, **kwargs): output_dir = filename header_filename = os.path.join(output_dir, "Header") # boxlib datasets are always directories, and # We *know* it's not boxlib if Header doesn't exist. if not os.path.exists(header_filename): return False if cls is BoxlibDataset: # Stop checks here for the boxlib base class. # Further checks are performed on subclasses. return True cparam_filename = cparam_filename or cls._default_cparam_filename cparam_filepath = cls._lookup_cparam_filepath(output_dir, cparam_filename) if cparam_filepath is None: return False with open(cparam_filepath) as f: lines = [line.lower() for line in f] return any(cls._subtype_keyword in line for line in lines) @classmethod def _lookup_cparam_filepath(cls, output_dir, cparam_filename): lookup_table = [ os.path.abspath(os.path.join(p, cparam_filename)) for p in (output_dir, os.path.dirname(output_dir)) ] found = [os.path.exists(file) for file in lookup_table] if not any(found): return None return lookup_table[found.index(True)] @cached_property def unique_identifier(self) -> str: hfn = os.path.join(self.output_dir, "Header") return str(int(os.stat(hfn)[ST_CTIME])) def _parse_parameter_file(self): """ Parses the parameter file and establishes the various dictionaries. """ self._periodicity = (False, False, False) self._parse_header_file() # Let's read the file # the 'inputs' file is now optional self._parse_cparams() self._parse_fparams() def _parse_cparams(self): if self.cparam_filename is None: return with open(self.cparam_filename) as param_file: for line in (line.split("#")[0].strip() for line in param_file): try: param, vals = (s.strip() for s in line.split("=")) except ValueError: continue # Castro and Maestro mark overridden defaults with a "[*]" # before the parameter name param = param.removeprefix("[*]").strip() if param == "amr.ref_ratio": vals = self.refine_by = int(vals[0]) elif param == "Prob.lo_bc": vals = tuple(p == "1" for p in vals.split()) assert len(vals) == self.dimensionality # default to non periodic periodicity = [False, False, False] # fill in ndim parsed values periodicity[: self.dimensionality] = vals self._periodicity = tuple(periodicity) elif param == "castro.use_comoving": vals = self.cosmological_simulation = int(vals) else: try: vals = _guess_pcast(vals) except (IndexError, ValueError): # hitting an empty string or a comment vals = None self.parameters[param] = vals if getattr(self, "cosmological_simulation", 0) == 1: self.omega_lambda = self.parameters["comoving_OmL"] self.omega_matter = self.parameters["comoving_OmM"] self.hubble_constant = self.parameters["comoving_h"] with open(os.path.join(self.output_dir, "comoving_a")) as a_file: line = a_file.readline().strip() self.current_redshift = 1 / float(line) - 1 else: self.current_redshift = 0.0 self.omega_lambda = 0.0 self.omega_matter = 0.0 self.hubble_constant = 0.0 self.cosmological_simulation = 0 def _parse_fparams(self): """ Parses the fortran parameter file for Orion. Most of this will be useless, but this is where it keeps mu = mass per particle/m_hydrogen. """ if self.fparam_filename is None: return param_file = open(self.fparam_filename) for line in (l for l in param_file if "=" in l): param, vals = (v.strip() for v in line.split("=")) # Now, there are a couple different types of parameters. 
# Some will be where you only have floating point values, others # will be where things are specified as string literals. # Unfortunately, we're also using Fortran values, which will have # things like 1.d-2 which is pathologically difficult to parse if # your C library doesn't include 'd' in its locale for strtod. # So we'll try to determine this. vals = vals.split() if any(_scinot_finder.match(v) for v in vals): vals = [float(v.replace("D", "e").replace("d", "e")) for v in vals] if len(vals) == 1: vals = vals[0] self.parameters[param] = vals param_file.close() def _parse_header_file(self): """ We parse the Boxlib header, which we use as our basis. Anything in the inputs file will override this, but the inputs file is not strictly necessary for orientation of the data in space. """ # Note: Python uses a read-ahead buffer, so using next(), which would # be my preferred solution, won't work here. We have to explicitly # call readline() if we want to end up with an offset at the very end. # Fortunately, elsewhere we don't care about the offset, so we're fine # everywhere else using iteration exclusively. header_file = open(os.path.join(self.output_dir, "Header")) self.orion_version = header_file.readline().rstrip() n_fields = int(header_file.readline()) self._field_list = [header_file.readline().strip() for i in range(n_fields)] self.dimensionality = int(header_file.readline()) self.current_time = float(header_file.readline()) # This is traditionally a index attribute, so we will set it, but # in a slightly hidden variable. self._max_level = int(header_file.readline()) for side, init in [("left", np.zeros), ("right", np.ones)]: domain_edge = init(3, dtype="float64") domain_edge[: self.dimensionality] = header_file.readline().split() setattr(self, f"domain_{side}_edge", domain_edge) ref_factors = np.array(header_file.readline().split(), dtype="int64") if ref_factors.size == 0: # We use a default of two, as Nyx doesn't always output this value ref_factors = [2] * (self._max_level + 1) # We can't vary refinement factors based on dimension, or whatever else # they are varied on. In one curious thing, I found that some Castro 3D # data has only two refinement factors, which I don't know how to # understand. self.ref_factors = ref_factors if np.unique(ref_factors).size > 1: # We want everything to be a multiple of this. self.refine_by = min(ref_factors) # Check that they're all multiples of the minimum. if not all( float(rf) / self.refine_by == int(float(rf) / self.refine_by) for rf in ref_factors ): header_file.close() raise RuntimeError base_log = np.log2(self.refine_by) self.level_offsets = [0] # level 0 has to have 0 offset lo = 0 for rf in self.ref_factors: lo += int(np.log2(rf) / base_log) - 1 self.level_offsets.append(lo) # assert(np.unique(ref_factors).size == 1) else: self.refine_by = ref_factors[0] self.level_offsets = [0 for l in range(self._max_level + 1)] # Now we read the global index space, to get index_space = header_file.readline() # This will be of the form: # ((0,0,0) (255,255,255) (0,0,0)) ((0,0,0) (511,511,511) (0,0,0)) # So note that if we split it all up based on spaces, we should be # fine, as long as we take the first two entries, which correspond to # the root level. I'm not 100% pleased with this solution. 
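# Worked example for the line quoted above: root_space comes out as
# ["0,0,0", "255,255,255"], so start = [0, 0, 0], stop = [255, 255, 255],
# and the root grid spans stop - start + 1 = 256 cells per dimension.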
root_space = index_space.replace("(", "").replace(")", "").split()[:2] start = np.array(root_space[0].split(","), dtype="int64") stop = np.array(root_space[1].split(","), dtype="int64") dd = np.ones(3, dtype="int64") dd[: self.dimensionality] = stop - start + 1 self.domain_offset[: self.dimensionality] = start self.domain_dimensions = dd # Skip timesteps per level header_file.readline() self._header_mesh_start = header_file.tell() # Skip the cell size information per level - we'll get this later for _ in range(self._max_level + 1): header_file.readline() # Get the geometry next_line = header_file.readline() if len(next_line.split()) == 1: coordinate_type = int(next_line) else: coordinate_type = 0 known_types = {0: "cartesian", 1: "cylindrical", 2: "spherical"} try: geom_str = known_types[coordinate_type] except KeyError as err: header_file.close() raise ValueError(f"Unknown BoxLib coord_type `{coordinate_type}`.") from err else: self.geometry = Geometry(geom_str) if self.geometry == "cylindrical": dre = self.domain_right_edge.copy() dre[2] = 2.0 * np.pi self.domain_right_edge = dre header_file.close() def _set_code_unit_attributes(self): setdefaultattr(self, "length_unit", self.quan(1.0, "cm")) setdefaultattr(self, "mass_unit", self.quan(1.0, "g")) setdefaultattr(self, "time_unit", self.quan(1.0, "s")) setdefaultattr(self, "velocity_unit", self.quan(1.0, "cm/s")) @parallel_root_only def print_key_parameters(self): for a in [ "current_time", "domain_dimensions", "domain_left_edge", "domain_right_edge", ]: if not hasattr(self, a): mylog.error("Missing %s in parameter file definition!", a) continue v = getattr(self, a) mylog.info("Parameters: %-25s = %s", a, v) def relative_refinement(self, l0, l1): offset = self.level_offsets[l1] - self.level_offsets[l0] return self.refine_by ** (l1 - l0 + offset) class AMReXHierarchy(BoxlibHierarchy): def __init__(self, ds, dataset_type="boxlib_native"): super().__init__(ds, dataset_type) if "particles" in self.ds.parameters: is_checkpoint = True for ptype in self.ds.particle_types: self._read_particles(ptype, is_checkpoint) class AMReXDataset(BoxlibDataset): _index_class: type[BoxlibHierarchy] = AMReXHierarchy _subtype_keyword = "amrex" _default_cparam_filename = "job_info" def _parse_parameter_file(self): super()._parse_parameter_file() particle_types = glob.glob(os.path.join(self.output_dir, "*", "Header")) particle_types = [cpt.split(os.sep)[-2] for cpt in particle_types] if len(particle_types) > 0: self.parameters["particles"] = 1 self.particle_types = tuple(particle_types) self.particle_types_raw = self.particle_types class OrionHierarchy(BoxlibHierarchy): def __init__(self, ds, dataset_type="orion_native"): BoxlibHierarchy.__init__(self, ds, dataset_type) self._read_particles() # self.io = IOHandlerOrion def _detect_output_fields(self): # This is all done in _parse_header_file self.field_list = [("boxlib", f) for f in self.dataset._field_list] self.field_indexes = {f[1]: i for i, f in enumerate(self.field_list)} # There are times when field_list may change. We copy it here to # avoid that possibility. 
self.field_order = list(self.field_list) # look for particle fields self.particle_filename = None for particle_filename in ["StarParticles", "SinkParticles"]: fn = os.path.join(self.ds.output_dir, particle_filename) if os.path.exists(fn): self.particle_filename = fn if self.particle_filename is None: return pfield_list = [("io", c) for c in self.io.particle_field_index.keys()] self.field_list.extend(pfield_list) def _read_particles(self): """ Reads in particles and assigns them to grids. Will search for Star particles, then sink particles if no star particle file is found, and finally will simply note that no particles are found if neither works. To add a new Orion particle type, simply add it to the if/elif/else block. """ self.grid_particle_count = np.zeros(len(self.grids)) if self.particle_filename is not None: self._read_particle_file(self.particle_filename) def _read_particle_file(self, fn): """actually reads the orion particle data file itself.""" if not os.path.exists(fn): return with open(fn) as f: lines = f.readlines() self.num_stars = int(lines[0].strip()[0]) for num, line in enumerate(lines[1:]): particle_position_x = float(line.split(" ")[1]) particle_position_y = float(line.split(" ")[2]) particle_position_z = float(line.split(" ")[3]) coord = [particle_position_x, particle_position_y, particle_position_z] # for each particle, determine which grids contain it # copied from object_finding_mixin.py mask = np.ones(self.num_grids) for i in range(len(coord)): np.choose( np.greater(self.grid_left_edge.d[:, i], coord[i]), (mask, 0), mask, ) np.choose( np.greater(self.grid_right_edge.d[:, i], coord[i]), (0, mask), mask, ) ind = np.where(mask == 1) selected_grids = self.grids[ind] # in orion, particles always live on the finest level. # so, we want to assign the particle to the finest of # the grids we just found if len(selected_grids) != 0: grid = sorted(selected_grids, key=lambda grid: grid.Level)[-1] ind = np.where(self.grids == grid)[0][0] self.grid_particle_count[ind] += 1 self.grids[ind].NumberOfParticles += 1 # store the position in the particle file for fast access. try: self.grids[ind]._particle_line_numbers.append(num + 1) except AttributeError: self.grids[ind]._particle_line_numbers = [num + 1] return True class OrionDataset(BoxlibDataset): _index_class = OrionHierarchy _subtype_keyword = "hyp." _default_cparam_filename = "inputs" def __init__( self, output_dir, cparam_filename=None, fparam_filename="probin", dataset_type="orion_native", storage_filename=None, units_override=None, unit_system="cgs", default_species_fields=None, ): BoxlibDataset.__init__( self, output_dir, cparam_filename, fparam_filename, dataset_type, units_override=units_override, unit_system=unit_system, default_species_fields=default_species_fields, ) class CastroHierarchy(BoxlibHierarchy): def __init__(self, ds, dataset_type="castro_native"): super().__init__(ds, dataset_type) if "particles" in self.ds.parameters: # extra beyond the base real fields that all Boxlib # particles have, i.e. 
the xyz positions castro_extra_real_fields = [ "particle_velocity_x", "particle_velocity_y", "particle_velocity_z", ] is_checkpoint = True self._read_particles( "Tracer", is_checkpoint, castro_extra_real_fields[0 : self.ds.dimensionality], ) class CastroDataset(AMReXDataset): _index_class = CastroHierarchy _field_info_class = CastroFieldInfo _subtype_keyword = "castro" _default_cparam_filename = "job_info" def __init__( self, output_dir, cparam_filename=None, fparam_filename=None, dataset_type="boxlib_native", storage_filename=None, units_override=None, unit_system="cgs", default_species_fields=None, ): super().__init__( output_dir, cparam_filename, fparam_filename, dataset_type, storage_filename, units_override, unit_system, default_species_fields=default_species_fields, ) def _parse_parameter_file(self): super()._parse_parameter_file() jobinfo_filename = os.path.join(self.output_dir, self.cparam_filename) line = "" with open(jobinfo_filename) as f: while not line.startswith(" Inputs File Parameters"): # boundary condition info starts with -x:, etc. bcs = ["-x:", "+x:", "-y:", "+y:", "-z:", "+z:"] if any(b in line for b in bcs): p, v = line.strip().split(":") self.parameters[p] = v.strip() if "git describe" in line or "git hash" in line: # Castro release 17.02 and later # line format: codename git describe: the-hash # Castro before release 17.02 # line format: codename git hash: the-hash fields = line.split(":") self.parameters[fields[0]] = fields[1].strip() line = next(f) # hydro method is set by the base class -- override it here self.parameters["HydroMethod"] = "Castro" # set the periodicity based on the runtime parameters # https://amrex-astro.github.io/Castro/docs/inputs.html?highlight=periodicity periodicity = [False, False, False] for i, axis in enumerate("xyz"): try: periodicity[i] = self.parameters[f"-{axis}"] == "interior" except KeyError: break self._periodicity = tuple(periodicity) if os.path.isdir(os.path.join(self.output_dir, "Tracer")): # we have particles self.parameters["particles"] = 1 self.particle_types = ("Tracer",) self.particle_types_raw = self.particle_types class MaestroDataset(AMReXDataset): _index_class = BoxlibHierarchy _field_info_class = MaestroFieldInfo _subtype_keyword = "maestro" _default_cparam_filename = "job_info" def __init__( self, output_dir, cparam_filename=None, fparam_filename=None, dataset_type="boxlib_native", storage_filename=None, units_override=None, unit_system="cgs", default_species_fields=None, ): super().__init__( output_dir, cparam_filename, fparam_filename, dataset_type, storage_filename, units_override, unit_system, default_species_fields=default_species_fields, ) def _parse_parameter_file(self): super()._parse_parameter_file() jobinfo_filename = os.path.join(self.output_dir, self.cparam_filename) with open(jobinfo_filename) as f: for line in f: # get the code git hashes if "git hash" in line: # line format: codename git hash: the-hash fields = line.split(":") self.parameters[fields[0]] = fields[1].strip() # hydro method is set by the base class -- override it here self.parameters["HydroMethod"] = "Maestro" # set the periodicity based on the integer BC runtime parameters periodicity = [False, False, False] for i, ax in enumerate("xyz"): try: periodicity[i] = self.parameters[f"bc{ax}_lo"] != -1 except KeyError: pass self._periodicity = tuple(periodicity) class NyxHierarchy(BoxlibHierarchy): def __init__(self, ds, dataset_type="nyx_native"): super().__init__(ds, dataset_type) if "particles" in self.ds.parameters: # extra beyond the base 
real fields that all Boxlib # particles have, i.e. the xyz positions nyx_extra_real_fields = [ "particle_mass", "particle_velocity_x", "particle_velocity_y", "particle_velocity_z", ] is_checkpoint = False self._read_particles( "DM", is_checkpoint, nyx_extra_real_fields[0 : self.ds.dimensionality + 1], ) class NyxDataset(BoxlibDataset): _index_class = NyxHierarchy _field_info_class = NyxFieldInfo _subtype_keyword = "nyx" _default_cparam_filename = "job_info" def __init__( self, output_dir, cparam_filename=None, fparam_filename=None, dataset_type="boxlib_native", storage_filename=None, units_override=None, unit_system="cgs", default_species_fields=None, ): super().__init__( output_dir, cparam_filename, fparam_filename, dataset_type, storage_filename, units_override, unit_system, default_species_fields=default_species_fields, ) def _parse_parameter_file(self): super()._parse_parameter_file() jobinfo_filename = os.path.join(self.output_dir, self.cparam_filename) with open(jobinfo_filename) as f: for line in f: # get the code git hashes if "git hash" in line: # line format: codename git hash: the-hash fields = line.split(":") self.parameters[fields[0]] = fields[1].strip() if line.startswith(" Cosmology Information"): self.cosmological_simulation = 1 break else: self.cosmological_simulation = 0 if self.cosmological_simulation: # note that modern Nyx is always cosmological, but there are some old # files without these parameters so we want to special-case them for line in f: if "Omega_m (comoving)" in line: self.omega_matter = float(line.split(":")[1]) elif "Omega_lambda (comoving)" in line: self.omega_lambda = float(line.split(":")[1]) elif "h (comoving)" in line: self.hubble_constant = float(line.split(":")[1]) # Read in the `comoving_a` file and parse the value. We should fix this # in the new Nyx output format... with open(os.path.join(self.output_dir, "comoving_a")) as a_file: a_string = a_file.readline().strip() # Set the scale factor and redshift self.cosmological_scale_factor = float(a_string) self.parameters["CosmologyCurrentRedshift"] = 1 / float(a_string) - 1 # alias self.current_redshift = self.parameters["CosmologyCurrentRedshift"] if os.path.isfile(os.path.join(self.output_dir, "DM/Header")): # we have particles self.parameters["particles"] = 1 self.particle_types = ("DM",) self.particle_types_raw = self.particle_types def _set_code_unit_attributes(self): setdefaultattr(self, "mass_unit", self.quan(1.0, "Msun")) setdefaultattr(self, "time_unit", self.quan(1.0 / 3.08568025e19, "s")) setdefaultattr( self, "length_unit", self.quan(1.0 / (1 + self.current_redshift), "Mpc") ) setdefaultattr(self, "velocity_unit", self.length_unit / self.time_unit) class QuokkaDataset(AMReXDataset): # match any plotfiles that have a metadata.yaml file in the root _subtype_keyword = "" _default_cparam_filename = "metadata.yaml" def _guess_pcast(vals): # Now we guess some things about the parameter and its type # Just in case there are multiple; we'll go # back afterward to using vals. 
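    # Illustrative examples of the casting logic below (not part of the
    # original source); the values mirror those checked in
    # test_maestro_parameters further down:
    #   "T"                     -> True   (float() fails, v in ("F", "T") -> bool)
    #   "10.00000000"           -> 10.0   (contains "." -> float)
    #   "0.1000000000E-09"      -> 1e-10  (contains "E-" -> float)
    #   "3"                     -> 3      (parses as a number, no float symbols -> int)
    #   "subCh_hot_baserun_plt" -> str    (float() fails, not "T"/"F")
    #   "1 2 3"                 -> [1, 2, 3]  (multiple values are returned as a list)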
v = vals.split()[0] try: float(v.upper().replace("D", "E")) except Exception: pcast = str if v in ("F", "T"): pcast = bool else: syms = (".", "D+", "D-", "E+", "E-", "E", "D") if any(sym in v.upper() for sym in syms for v in vals.split()): pcast = float else: pcast = int if pcast is bool: vals = [value == "T" for value in vals.split()] else: vals = [pcast(value) for value in vals.split()] if len(vals) == 1: vals = vals[0] return vals def _read_raw_field_names(raw_file): header_files = glob.glob(os.path.join(raw_file, "*_H")) return [hf.split(os.sep)[-1][:-2] for hf in header_files] def _string_to_numpy_array(s): return np.array([int(v) for v in s[1:-1].split(",")], dtype=np.int64) def _line_to_numpy_arrays(line): lo_corner = _string_to_numpy_array(line[0][1:]) hi_corner = _string_to_numpy_array(line[1][:]) node_type = _string_to_numpy_array(line[2][:-1]) return lo_corner, hi_corner, node_type def _get_active_dimensions(box): return box[1] - box[2] - box[0] + 1 def _read_header(raw_file, field): level_files = glob.glob(os.path.join(raw_file, "Level_*")) level_files.sort() all_boxes = [] all_file_names = [] all_offsets = [] for level_file in level_files: header_file = os.path.join(level_file, field + "_H") with open(header_file) as f: f.readline() # version f.readline() # how f.readline() # ncomp # nghost_line will be parsed below after the number of dimensions # is determined when the boxes are read in nghost_line = f.readline().strip().split() f.readline() # num boxes # read boxes boxes = [] for line in f: clean_line = line.strip().split() if clean_line == [")"]: break lo_corner, hi_corner, node_type = _line_to_numpy_arrays(clean_line) boxes.append((lo_corner, hi_corner, node_type)) try: # nghost_line[0] is a single number ng = int(nghost_line[0]) ndims = len(lo_corner) nghost = np.array(ndims * [ng]) except ValueError: # nghost_line[0] is (#,#,#) nghost_list = nghost_line[0].strip("()").split(",") nghost = np.array(nghost_list, dtype="int64") # read the file and offset position for the corresponding box file_names = [] offsets = [] for line in f: if line.startswith("FabOnDisk:"): clean_line = line.strip().split() file_names.append(clean_line[1]) offsets.append(int(clean_line[2])) all_boxes += boxes all_file_names += file_names all_offsets += offsets return nghost, all_boxes, all_file_names, all_offsets class WarpXHeader: def __init__(self, header_fn): self.data = {} with open(header_fn) as f: self.data["Checkpoint_version"] = int(f.readline().strip().split()[-1]) self.data["num_levels"] = int(f.readline().strip().split()[-1]) self.data["istep"] = [int(num) for num in f.readline().strip().split()] self.data["nsubsteps"] = [int(num) for num in f.readline().strip().split()] self.data["t_new"] = [float(num) for num in f.readline().strip().split()] self.data["t_old"] = [float(num) for num in f.readline().strip().split()] self.data["dt"] = [float(num) for num in f.readline().strip().split()] self.data["moving_window_x"] = float(f.readline().strip().split()[-1]) # not all datasets will have is_synchronized line = f.readline().strip().split() if len(line) == 1: self.data["is_synchronized"] = bool(line[-1]) self.data["prob_lo"] = [ float(num) for num in f.readline().strip().split() ] else: self.data["is_synchronized"] = True self.data["prob_lo"] = [float(num) for num in line] self.data["prob_hi"] = [float(num) for num in f.readline().strip().split()] for _ in range(self.data["num_levels"]): num_boxes = int(f.readline().strip().split()[0][1:]) for __ in range(num_boxes): f.readline() f.readline() i 
= 0 line = f.readline() while line: line = line.strip().split() if len(line) == 1: line = f.readline() continue self.data["species_%d" % i] = [float(val) for val in line] i = i + 1 line = f.readline() class WarpXHierarchy(BoxlibHierarchy): def __init__(self, ds, dataset_type="boxlib_native"): super().__init__(ds, dataset_type) is_checkpoint = True for ptype in self.ds.particle_types: self._read_particles(ptype, is_checkpoint) # Additional WarpX particle information (used to set up species) self.warpx_header = WarpXHeader(os.path.join(self.ds.output_dir, "WarpXHeader")) for key, val in self.warpx_header.data.items(): if key.startswith("species_"): i = int(key.split("_")[-1]) charge_name = "particle%.1d_charge" % i mass_name = "particle%.1d_mass" % i self.parameters[charge_name] = val[0] self.parameters[mass_name] = val[1] def _detect_output_fields(self): super()._detect_output_fields() # now detect the optional, non-cell-centered fields self.raw_file = os.path.join(self.ds.output_dir, "raw_fields") self.raw_fields = _read_raw_field_names(os.path.join(self.raw_file, "Level_0")) self.field_list += [("raw", f) for f in self.raw_fields] self.raw_field_map = {} self.ds.nodal_flags = {} self.raw_field_nghost = {} for field_name in self.raw_fields: nghost, boxes, file_names, offsets = _read_header(self.raw_file, field_name) self.raw_field_map[field_name] = (boxes, file_names, offsets) self.raw_field_nghost[field_name] = nghost self.ds.nodal_flags[field_name] = np.array(boxes[0][2]) def _skip_line(line): if len(line) == 0: return True if line[0] == "\n": return True if line[0] == "=": return True if line[0] == " ": return True class WarpXDataset(BoxlibDataset): _index_class = WarpXHierarchy _field_info_class = WarpXFieldInfo _subtype_keyword = "warpx" _default_cparam_filename = "warpx_job_info" def __init__( self, output_dir, cparam_filename=None, fparam_filename=None, dataset_type="boxlib_native", storage_filename=None, units_override=None, unit_system="mks", ): self.default_fluid_type = "mesh" self.default_field = ("mesh", "density") self.fluid_types = ("mesh", "index", "raw") super().__init__( output_dir, cparam_filename, fparam_filename, dataset_type, storage_filename, units_override, unit_system, ) def _parse_parameter_file(self): super()._parse_parameter_file() jobinfo_filename = os.path.join(self.output_dir, self.cparam_filename) with open(jobinfo_filename) as f: for line in f.readlines(): if _skip_line(line): continue l = line.strip().split(":") if len(l) == 2: self.parameters[l[0].strip()] = l[1].strip() l = line.strip().split("=") if len(l) == 2: self.parameters[l[0].strip()] = l[1].strip() # set the periodicity based on the integer BC runtime parameters # https://amrex-codes.github.io/amrex/docs_html/InputsProblemDefinition.html periodicity = [False, False, False] try: is_periodic = self.parameters["geometry.is_periodic"].split() periodicity[: len(is_periodic)] = [p == "1" for p in is_periodic] except KeyError: pass self._periodicity = tuple(periodicity) particle_types = glob.glob(os.path.join(self.output_dir, "*", "Header")) particle_types = [cpt.split(os.sep)[-2] for cpt in particle_types] if len(particle_types) > 0: self.parameters["particles"] = 1 self.particle_types = tuple(particle_types) self.particle_types_raw = self.particle_types else: self.particle_types = () self.particle_types_raw = () def _set_code_unit_attributes(self): setdefaultattr(self, "length_unit", self.quan(1.0, "m")) setdefaultattr(self, "mass_unit", self.quan(1.0, "kg")) setdefaultattr(self, "time_unit", 
self.quan(1.0, "s")) setdefaultattr(self, "velocity_unit", self.quan(1.0, "m/s")) setdefaultattr(self, "magnetic_unit", self.quan(1.0, "T")) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/amrex/definitions.py0000644000175100001770000000000014714401662020212 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/amrex/fields.py0000644000175100001770000005230014714401662017157 0ustar00runnerdockerimport re from typing import TypeAlias import numpy as np from yt._typing import KnownFieldsT from yt.fields.field_info_container import FieldInfoContainer from yt.units import YTQuantity from yt.utilities.physical_constants import amu_cgs, boltzmann_constant_cgs, c rho_units = "code_mass / code_length**3" mom_units = "code_mass / (code_time * code_length**2)" eden_units = "code_mass / (code_time**2 * code_length)" # erg / cm^3 def _thermal_energy_density(field, data): # What we've got here is UEINT: # u here is velocity # E is energy density from the file # rho e = rho E - rho * u * u / 2 ke = ( 0.5 * ( data["gas", "momentum_density_x"] ** 2 + data["gas", "momentum_density_y"] ** 2 + data["gas", "momentum_density_z"] ** 2 ) / data["gas", "density"] ) return data["boxlib", "eden"] - ke def _specific_thermal_energy(field, data): # This is little e, so we take thermal_energy_density and divide by density return data["gas", "thermal_energy_density"] / data["gas", "density"] def _temperature(field, data): mu = data.ds.parameters["mu"] gamma = data.ds.parameters["gamma"] tr = data["gas", "thermal_energy_density"] / data["gas", "density"] tr *= mu * amu_cgs / boltzmann_constant_cgs tr *= gamma - 1.0 return tr class WarpXFieldInfo(FieldInfoContainer): known_other_fields: KnownFieldsT = ( ("Bx", ("T", ["magnetic_field_x", "B_x"], None)), ("By", ("T", ["magnetic_field_y", "B_y"], None)), ("Bz", ("T", ["magnetic_field_z", "B_z"], None)), ("Ex", ("V/m", ["electric_field_x", "E_x"], None)), ("Ey", ("V/m", ["electric_field_y", "E_y"], None)), ("Ez", ("V/m", ["electric_field_z", "E_z"], None)), ("jx", ("A", ["current_x", "Jx", "J_x"], None)), ("jy", ("A", ["current_y", "Jy", "J_y"], None)), ("jz", ("A", ["current_z", "Jz", "J_z"], None)), ) known_particle_fields: KnownFieldsT = ( ("particle_weight", ("", ["particle_weighting"], None)), ("particle_position_x", ("m", [], None)), ("particle_position_y", ("m", [], None)), ("particle_position_z", ("m", [], None)), ("particle_velocity_x", ("m/s", [], None)), ("particle_velocity_y", ("m/s", [], None)), ("particle_velocity_z", ("m/s", [], None)), ("particle_momentum_x", ("kg*m/s", [], None)), ("particle_momentum_y", ("kg*m/s", [], None)), ("particle_momentum_z", ("kg*m/s", [], None)), ) extra_union_fields = ( ("kg", "particle_mass"), ("C", "particle_charge"), ("", "particle_ones"), ) def __init__(self, ds, field_list): super().__init__(ds, field_list) # setup nodal flag information for field in ds.index.raw_fields: finfo = self.__getitem__(("raw", field)) finfo.nodal_flag = ds.nodal_flags[field] def setup_fluid_fields(self): for field in self.known_other_fields: fname = field[0] self.alias(("mesh", fname), ("boxlib", fname)) def setup_fluid_aliases(self): super().setup_fluid_aliases("mesh") def setup_particle_fields(self, ptype): def get_mass(field, data): species_mass = data.ds.index.parameters[ptype + "_mass"] return data[ptype, "particle_weight"] * YTQuantity(species_mass, "kg") self.add_field( (ptype, "particle_mass"), 
sampling_type="particle", function=get_mass, units="kg", ) def get_charge(field, data): species_charge = data.ds.index.parameters[ptype + "_charge"] return data[ptype, "particle_weight"] * YTQuantity(species_charge, "C") self.add_field( (ptype, "particle_charge"), sampling_type="particle", function=get_charge, units="C", ) def get_energy(field, data): p2 = ( data[ptype, "particle_momentum_x"] ** 2 + data[ptype, "particle_momentum_y"] ** 2 + data[ptype, "particle_momentum_z"] ** 2 ) return np.sqrt(p2 * c**2 + data[ptype, "particle_mass"] ** 2 * c**4) self.add_field( (ptype, "particle_energy"), sampling_type="particle", function=get_energy, units="J", ) def get_velocity_x(field, data): return ( c**2 * data[ptype, "particle_momentum_x"] / data[ptype, "particle_energy"] ) def get_velocity_y(field, data): return ( c**2 * data[ptype, "particle_momentum_y"] / data[ptype, "particle_energy"] ) def get_velocity_z(field, data): return ( c**2 * data[ptype, "particle_momentum_z"] / data[ptype, "particle_energy"] ) self.add_field( (ptype, "particle_velocity_x"), sampling_type="particle", function=get_velocity_x, units="m/s", ) self.add_field( (ptype, "particle_velocity_y"), sampling_type="particle", function=get_velocity_y, units="m/s", ) self.add_field( (ptype, "particle_velocity_z"), sampling_type="particle", function=get_velocity_z, units="m/s", ) super().setup_particle_fields(ptype) class NyxFieldInfo(FieldInfoContainer): known_particle_fields: KnownFieldsT = ( ("particle_position_x", ("code_length", [], None)), ("particle_position_y", ("code_length", [], None)), ("particle_position_z", ("code_length", [], None)), ) class BoxlibFieldInfo(FieldInfoContainer): known_other_fields: KnownFieldsT = ( ("density", (rho_units, ["density"], None)), ("eden", (eden_units, ["total_energy_density"], None)), ("xmom", (mom_units, ["momentum_density_x"], None)), ("ymom", (mom_units, ["momentum_density_y"], None)), ("zmom", (mom_units, ["momentum_density_z"], None)), ("temperature", ("K", ["temperature"], None)), ("Temp", ("K", ["temperature"], None)), ("x_velocity", ("cm/s", ["velocity_x"], None)), ("y_velocity", ("cm/s", ["velocity_y"], None)), ("z_velocity", ("cm/s", ["velocity_z"], None)), ("xvel", ("cm/s", ["velocity_x"], None)), ("yvel", ("cm/s", ["velocity_y"], None)), ("zvel", ("cm/s", ["velocity_z"], None)), ) known_particle_fields: KnownFieldsT = ( ("particle_mass", ("code_mass", [], None)), ("particle_position_x", ("code_length", [], None)), ("particle_position_y", ("code_length", [], None)), ("particle_position_z", ("code_length", [], None)), ("particle_momentum_x", ("code_mass*code_length/code_time", [], None)), ("particle_momentum_y", ("code_mass*code_length/code_time", [], None)), ("particle_momentum_z", ("code_mass*code_length/code_time", [], None)), # Note that these are *internal* agmomen ("particle_angmomen_x", ("code_length**2/code_time", [], None)), ("particle_angmomen_y", ("code_length**2/code_time", [], None)), ("particle_angmomen_z", ("code_length**2/code_time", [], None)), ("particle_id", ("", ["particle_index"], None)), ("particle_mdot", ("code_mass/code_time", [], None)), # "mlast", # "r", # "mdeut", # "n", # "burnstate", # "luminosity", ) def setup_particle_fields(self, ptype): def _get_vel(axis): def velocity(field, data): return ( data[ptype, f"particle_momentum_{axis}"] / data[ptype, "particle_mass"] ) return velocity for ax in "xyz": self.add_field( (ptype, f"particle_velocity_{ax}"), sampling_type="particle", function=_get_vel(ax), units="code_length/code_time", ) 
super().setup_particle_fields(ptype) def setup_fluid_fields(self): unit_system = self.ds.unit_system # Now, let's figure out what fields are included. if any(f[1] == "xmom" for f in self.field_list): self.setup_momentum_to_velocity() elif any(f[1] == "xvel" for f in self.field_list): self.setup_velocity_to_momentum() self.add_field( ("gas", "specific_thermal_energy"), sampling_type="cell", function=_specific_thermal_energy, units=unit_system["specific_energy"], ) self.add_field( ("gas", "thermal_energy_density"), sampling_type="cell", function=_thermal_energy_density, units=unit_system["pressure"], ) if ("gas", "temperature") not in self.field_aliases: self.add_field( ("gas", "temperature"), sampling_type="cell", function=_temperature, units=unit_system["temperature"], ) def setup_momentum_to_velocity(self): def _get_vel(axis): def velocity(field, data): return data["boxlib", f"{axis}mom"] / data["boxlib", "density"] return velocity for ax in "xyz": self.add_field( ("gas", f"velocity_{ax}"), sampling_type="cell", function=_get_vel(ax), units=self.ds.unit_system["velocity"], ) def setup_velocity_to_momentum(self): def _get_mom(axis): def momentum(field, data): return data["boxlib", f"{axis}vel"] * data["boxlib", "density"] return momentum for ax in "xyz": self.add_field( ("gas", f"momentum_density_{ax}"), sampling_type="cell", function=_get_mom(ax), units=mom_units, ) class CastroFieldInfo(FieldInfoContainer): known_other_fields: KnownFieldsT = ( ("density", ("g/cm**3", ["density"], r"\rho")), ("xmom", ("g/(cm**2 * s)", ["momentum_density_x"], r"\rho u")), ("ymom", ("g/(cm**2 * s)", ["momentum_density_y"], r"\rho v")), ("zmom", ("g/(cm**2 * s)", ["momentum_density_z"], r"\rho w")), # velocity components are not always present ("x_velocity", ("cm/s", ["velocity_x"], r"u")), ("y_velocity", ("cm/s", ["velocity_y"], r"v")), ("z_velocity", ("cm/s", ["velocity_z"], r"w")), ("rho_E", ("erg/cm**3", ["total_energy_density"], r"\rho E")), # internal energy density (not just thermal) ("rho_e", ("erg/cm**3", [], r"\rho e")), ("Temp", ("K", ["temperature"], r"T")), ("grav_x", ("cm/s**2", [], r"\mathbf{g} \cdot \mathbf{e}_x")), ("grav_y", ("cm/s**2", [], r"\mathbf{g} \cdot \mathbf{e}_y")), ("grav_z", ("cm/s**2", [], r"\mathbf{g} \cdot \mathbf{e}_z")), ("pressure", ("dyne/cm**2", [], r"p")), ( "kineng", ("erg/cm**3", ["kinetic_energy_density"], r"\frac{1}{2}\rho|\mathbf{U}|^2"), ), ("soundspeed", ("cm/s", ["sound_speed"], "Sound Speed")), ("MachNumber", ("", ["mach_number"], "Mach Number")), ("abar", ("", [], r"$\bar{A}$")), ("Ye", ("", [], r"$Y_e$")), ("entropy", ("erg/(g*K)", ["entropy"], r"s")), ("magvort", ("1/s", ["vorticity_magnitude"], r"|\nabla \times \mathbf{U}|")), ("divu", ("1/s", ["velocity_divergence"], r"\nabla \cdot \mathbf{U}")), ("eint_E", ("erg/g", [], r"e(E,U)")), ("eint_e", ("erg/g", [], r"e")), ("magvel", ("cm/s", ["velocity_magnitude"], r"|\mathbf{U}|")), ("radvel", ("cm/s", ["radial_velocity"], r"\mathbf{U} \cdot \mathbf{e}_r")), ("magmom", ("g*cm/s", ["momentum_magnitude"], r"\rho |\mathbf{U}|")), ("maggrav", ("cm/s**2", [], r"|\mathbf{g}|")), ("phiGrav", ("erg/g", [], r"\Phi")), ("enuc", ("erg/(g*s)", [], r"\dot{e}_{\rm{nuc}}")), ("rho_enuc", ("erg/(cm**3*s)", [], r"\rho \dot{e}_{\rm{nuc}}")), ("angular_momentum_x", ("g/(cm*s)", [], r"\ell_x")), ("angular_momentum_y", ("g/(cm*s)", [], r"\ell_y")), ("angular_momentum_z", ("g/(cm*s)", [], r"\ell_z")), ("phiRot", ("erg/g", [], r"\Phi_{\rm{rot}}")), ("rot_x", ("cm/s**2", [], r"\mathbf{f}_{\rm{rot}} \cdot \mathbf{e}_x")), ("rot_y", 
("cm/s**2", [], r"\mathbf{f}_{\rm{rot}} \cdot \mathbf{e}_y")), ("rot_z", ("cm/s**2", [], r"\mathbf{f}_{\rm{rot}} \cdot \mathbf{e}_z")), ) known_particle_fields: KnownFieldsT = ( ("particle_position_x", ("code_length", [], None)), ("particle_position_y", ("code_length", [], None)), ("particle_position_z", ("code_length", [], None)), ) def setup_fluid_fields(self): # add X's for _, field in self.ds.field_list: if field.startswith("X("): # We have a fraction sub = Substance(field) # Overwrite field to use nicer tex_label display_name self.add_output_field( ("boxlib", field), sampling_type="cell", units="", display_name=rf"X\left({sub.to_tex()}\right)", ) self.alias(("gas", f"{sub}_fraction"), ("boxlib", field), units="") func = _create_density_func(("gas", f"{sub}_fraction")) self.add_field( name=("gas", f"{sub}_density"), sampling_type="cell", function=func, units=self.ds.unit_system["density"], display_name=rf"\rho {sub.to_tex()}", ) class MaestroFieldInfo(FieldInfoContainer): known_other_fields: KnownFieldsT = ( ("density", ("g/cm**3", ["density"], None)), ("x_vel", ("cm/s", ["velocity_x"], r"\tilde{u}")), ("y_vel", ("cm/s", ["velocity_y"], r"\tilde{v}")), ("z_vel", ("cm/s", ["velocity_z"], r"\tilde{w}")), ( "magvel", ( "cm/s", ["velocity_magnitude"], r"|\tilde{\mathbf{U}} + w_0 \mathbf{e}_r|", ), ), ( "radial_velocity", ("cm/s", ["radial_velocity"], r"\mathbf{U}\cdot \mathbf{e}_r"), ), ("circum_velocity", ("cm/s", ["tangential_velocity"], r"U - U\cdot e_r")), ("tfromp", ("K", [], "T(\\rho,p,X)")), ("tfromh", ("K", [], "T(\\rho,h,X)")), ("Machnumber", ("", ["mach_number"], "M")), ("S", ("1/s", [], None)), ("ad_excess", ("", [], r"\nabla - \nabla_\mathrm{ad}")), ("deltaT", ("", [], "[T(\\rho,h,X) - T(\\rho,p,X)]/T(\\rho,h,X)")), ("deltagamma", ("", [], r"\Gamma_1 - \overline{\Gamma_1}")), ("deltap", ("", [], "[p(\\rho,h,X) - p_0] / p_0")), ("divw0", ("1/s", [], r"\nabla \cdot \mathbf{w}_0")), # Specific entropy ("entropy", ("erg/(g*K)", ["entropy"], "s")), ("entropypert", ("", [], r"[s - \overline{s}] / \overline{s}")), ("enucdot", ("erg/(g*s)", [], r"\dot{\epsilon}_{nuc}")), ("Hext", ("erg/(g*s)", [], "H_{ext}")), # Perturbational pressure grad ("gpi_x", ("dyne/cm**3", [], r"\left(\nabla\pi\right)_x")), ("gpi_y", ("dyne/cm**3", [], r"\left(\nabla\pi\right)_y")), ("gpi_z", ("dyne/cm**3", [], r"\left(\nabla\pi\right)_z")), ("h", ("erg/g", [], "h")), ("h0", ("erg/g", [], "h_0")), # Momentum cannot be computed because we need to include base and # full state. ("momentum", ("g*cm/s", ["momentum_magnitude"], r"\rho |\mathbf{U}|")), ("p0", ("erg/cm**3", [], "p_0")), ("p0pluspi", ("erg/cm**3", [], r"p_0 + \pi")), ("pi", ("erg/cm**3", [], r"\pi")), ("pioverp0", ("", [], r"\pi/p_0")), # Base state density ("rho0", ("g/cm**3", [], "\\rho_0")), ("rhoh", ("erg/cm**3", ["enthalpy_density"], "(\\rho h)")), # Base state enthalpy density ("rhoh0", ("erg/cm**3", [], "(\\rho h)_0")), ("rhohpert", ("erg/cm**3", [], "(\\rho h)^\\prime")), ("rhopert", ("g/cm**3", [], "\\rho^\\prime")), ("soundspeed", ("cm/s", ["sound_speed"], None)), ("sponge", ("", [], None)), ("tpert", ("K", [], r"T - \overline{T}")), # Again, base state -- so we can't compute ourselves. 
("vort", ("1/s", ["vorticity_magnitude"], r"|\nabla\times\tilde{U}|")), # Base state ("w0_x", ("cm/s", [], "(w_0)_x")), ("w0_y", ("cm/s", [], "(w_0)_y")), ("w0_z", ("cm/s", [], "(w_0)_z")), ) def setup_fluid_fields(self): unit_system = self.ds.unit_system # pick the correct temperature field tfromp = False if "use_tfromp" in self.ds.parameters: # original MAESTRO (F90) code tfromp = self.ds.parameters["use_tfromp"] elif "maestro.use_tfromp" in self.ds.parameters: # new MAESTROeX (C++) code tfromp = self.ds.parameters["maestro.use_tfromp"] if tfromp: self.alias( ("gas", "temperature"), ("boxlib", "tfromp"), units=unit_system["temperature"], ) else: self.alias( ("gas", "temperature"), ("boxlib", "tfromh"), units=unit_system["temperature"], ) # Add X's and omegadots, units of 1/s for _, field in self.ds.field_list: if field.startswith("X("): # We have a mass fraction sub = Substance(field) # Overwrite field to use nicer tex_label display_name self.add_output_field( ("boxlib", field), sampling_type="cell", units="", display_name=rf"X\left({sub.to_tex()}\right)", ) self.alias(("gas", f"{sub}_fraction"), ("boxlib", field), units="") func = _create_density_func(("gas", f"{sub}_fraction")) self.add_field( name=("gas", f"{sub}_density"), sampling_type="cell", function=func, units=unit_system["density"], display_name=rf"\rho {sub.to_tex()}", ) elif field.startswith("omegadot("): sub = Substance(field) display_name = rf"\dot{{\omega}}\left[{sub.to_tex()}\right]" # Overwrite field to use nicer tex_label'ed display_name self.add_output_field( ("boxlib", field), sampling_type="cell", units=unit_system["frequency"], display_name=display_name, ) self.alias( ("gas", f"{sub}_creation_rate"), ("boxlib", field), units=unit_system["frequency"], ) substance_expr_re = re.compile(r"\(([a-zA-Z][a-zA-Z0-9]*)\)") substance_elements_re = re.compile(r"(?P[a-zA-Z]+)(?P\d*)") SubstanceSpec: TypeAlias = list[tuple[str, int]] class Substance: def __init__(self, data: str) -> None: if (m := substance_expr_re.search(data)) is None: raise ValueError(f"{data!r} doesn't match expected regular expression") sub_str = m.group() constituents = substance_elements_re.findall(sub_str) # 0 is used as a sentinel value to mark descriptive names default_value = 1 if len(constituents) > 1 else 0 self._spec: SubstanceSpec = [ (name, int(count or default_value)) for (name, count) in constituents ] def get_spec(self) -> SubstanceSpec: return self._spec.copy() def is_isotope(self) -> bool: return len(self._spec) == 1 and self._spec[0][1] > 0 def is_molecule(self) -> bool: return len(self._spec) != 1 def is_descriptive_name(self) -> bool: return len(self._spec) == 1 and self._spec[0][1] == 0 def __str__(self) -> str: return "".join( f"{element}{count if count > 1 else ''}" for element, count in self._spec ) def _to_tex_isotope(self) -> str: element, count = self._spec[0] return rf"^{{{count}}}{element}" def _to_tex_molecule(self) -> str: return "".join( rf"{element}_{{{count if count>1 else ''}}}" for element, count in self._spec ) def _to_tex_descriptive(self) -> str: return str(self) def to_tex(self) -> str: if self.is_isotope(): return self._to_tex_isotope() elif self.is_molecule(): return self._to_tex_molecule() elif self.is_descriptive_name(): return self._to_tex_descriptive() else: # should only be reachable in case of a regular expression defect raise RuntimeError def _create_density_func(field_name): def _func(field, data): return data[field_name] * data["gas", "density"] return _func 
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/amrex/io.py0000644000175100001770000002443214714401662016325 0ustar00runnerdockerimport os from collections import defaultdict import numpy as np from yt.frontends.chombo.io import parse_orion_sinks from yt.funcs import mylog from yt.geometry.selection_routines import GridSelector from yt.utilities.io_handler import BaseIOHandler def _remove_raw(all_fields, raw_fields): centered_fields = set(all_fields) for raw in raw_fields: centered_fields.discard(raw) return list(centered_fields) class IOHandlerBoxlib(BaseIOHandler): _dataset_type = "boxlib_native" def __init__(self, ds, *args, **kwargs): super().__init__(ds) def _read_fluid_selection(self, chunks, selector, fields, size): chunks = list(chunks) if any((not (ftype == "boxlib" or ftype == "raw") for ftype, fname in fields)): raise NotImplementedError rv = {} raw_fields = [] for field in fields: if field[0] == "raw": nodal_flag = self.ds.nodal_flags[field[1]] num_nodes = 2 ** sum(nodal_flag) rv[field] = np.empty((size, num_nodes), dtype="float64") raw_fields.append(field) else: rv[field] = np.empty(size, dtype="float64") centered_fields = _remove_raw(fields, raw_fields) ng = sum(len(c.objs) for c in chunks) mylog.debug( "Reading %s cells of %s fields in %s grids", size, [f2 for f1, f2 in fields], ng, ) ind = 0 for chunk in chunks: data = self._read_chunk_data(chunk, centered_fields) for g in chunk.objs: for field in fields: if field in centered_fields: ds = data[g.id].pop(field) else: ds = self._read_raw_field(g, field) nd = g.select(selector, ds, rv[field], ind) ind += nd data.pop(g.id) return rv def _read_raw_field(self, grid, field): field_name = field[1] base_dir = self.ds.index.raw_file nghost = self.ds.index.raw_field_nghost[field_name] box_list = self.ds.index.raw_field_map[field_name][0] fn_list = self.ds.index.raw_field_map[field_name][1] offset_list = self.ds.index.raw_field_map[field_name][2] lev = grid.Level filename = os.path.join(base_dir, f"Level_{lev}", fn_list[grid.id]) offset = offset_list[grid.id] box = box_list[grid.id] lo = box[0] - nghost hi = box[1] + nghost shape = hi - lo + 1 with open(filename, "rb") as f: f.seek(offset) f.readline() # always skip the first line arr = np.fromfile(f, "float64", np.prod(shape)) arr = arr.reshape(shape, order="F") return arr[ tuple( slice(None) if (nghost[dim] == 0) else slice(nghost[dim], -nghost[dim]) for dim in range(self.ds.dimensionality) ) ] def _read_chunk_data(self, chunk, fields): data = {} grids_by_file = defaultdict(list) if len(chunk.objs) == 0: return data for g in chunk.objs: if g.filename is None: continue grids_by_file[g.filename].append(g) dtype = self.ds.index._dtype bpr = dtype.itemsize for filename in grids_by_file: grids = grids_by_file[filename] grids.sort(key=lambda a: a._offset) f = open(filename, "rb") for grid in grids: data[grid.id] = {} local_offset = grid._get_offset(f) - f.tell() count = grid.ActiveDimensions.prod() size = count * bpr for field in self.ds.index.field_order: if field in fields: # We read it ... 
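                        # Fields for a grid are laid out back-to-back on disk
                        # in index.field_order; local_offset accumulates the
                        # size of any preceding unwanted fields, so a single
                        # relative seek lands at the start of this one.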
f.seek(local_offset, os.SEEK_CUR) v = np.fromfile(f, dtype=dtype, count=count) v = v.reshape(grid.ActiveDimensions, order="F") data[grid.id][field] = v local_offset = 0 else: local_offset += size f.close() return data def _read_particle_coords(self, chunks, ptf): yield from ( (ptype, xyz, 0.0) for ptype, xyz in self._read_particle_fields(chunks, ptf, None) ) def _read_particle_fields(self, chunks, ptf, selector): for chunk in chunks: # These should be organized by grid filename for g in chunk.objs: for ptype, field_list in sorted(ptf.items()): npart = g._pdata[ptype]["NumberOfParticles"] if npart == 0: continue fn = g._pdata[ptype]["particle_filename"] offset = g._pdata[ptype]["offset"] pheader = self.ds.index.particle_headers[ptype] with open(fn, "rb") as f: # read in the position fields for selection f.seek(offset + pheader.particle_int_dtype.itemsize * npart) rdata = np.fromfile( f, pheader.real_type, pheader.num_real * npart ) # Allow reading particles in 1, 2, and 3 dimensions, # setting the appropriate default for unused dimensions. pos = [] for idim in [1, 2, 3]: if g.ds.dimensionality >= idim: pos.append( np.asarray( rdata[idim - 1 :: pheader.num_real], dtype=np.float64, ) ) else: center = 0.5 * ( g.LeftEdge[idim - 1] + g.RightEdge[idim - 1] ) pos.append(np.full(npart, center, dtype=np.float64)) x, y, z = pos if selector is None: # This only ever happens if the call is made from # _read_particle_coords. yield ptype, (x, y, z) continue mask = selector.select_points(x, y, z, 0.0) if mask is None: continue for field in field_list: # handle the case that this is an integer field int_fnames = [ fname for _, fname in pheader.known_int_fields ] if field in int_fnames: ind = int_fnames.index(field) f.seek(offset) idata = np.fromfile( f, pheader.int_type, pheader.num_int * npart ) data = np.asarray( idata[ind :: pheader.num_int], dtype=np.float64 ) yield (ptype, field), data[mask].flatten() # handle case that this is a real field real_fnames = [ fname for _, fname in pheader.known_real_fields ] if field in real_fnames: ind = real_fnames.index(field) data = np.asarray( rdata[ind :: pheader.num_real], dtype=np.float64 ) yield (ptype, field), data[mask].flatten() class IOHandlerOrion(IOHandlerBoxlib): _dataset_type = "orion_native" _particle_filename = None @property def particle_filename(self): fn = os.path.join(self.ds.output_dir, "StarParticles") if not os.path.exists(fn): fn = os.path.join(self.ds.output_dir, "SinkParticles") self._particle_filename = fn return self._particle_filename _particle_field_index = None @property def particle_field_index(self): index = parse_orion_sinks(self.particle_filename) self._particle_field_index = index return self._particle_field_index def _read_particle_selection(self, chunks, selector, fields): rv = {} chunks = list(chunks) if isinstance(selector, GridSelector): if not (len(chunks) == len(chunks[0].objs) == 1): raise RuntimeError grid = chunks[0].objs[0] for ftype, fname in fields: rv[ftype, fname] = self._read_particles(grid, fname) return rv rv = {f: np.array([]) for f in fields} for chunk in chunks: for grid in chunk.objs: for ftype, fname in fields: data = self._read_particles(grid, fname) rv[ftype, fname] = np.concatenate((data, rv[ftype, fname])) return rv def _read_particles(self, grid, field): """ parses the Orion Star Particle text files """ particles = [] if grid.NumberOfParticles == 0: return np.array(particles) def read(line, field): entry = line.strip().split(" ")[self.particle_field_index[field]] return float(entry) try: lines = 
self._cached_lines for num in grid._particle_line_numbers: line = lines[num] particles.append(read(line, field)) return np.array(particles) except AttributeError: fn = self.particle_filename with open(fn) as f: lines = f.readlines() self._cached_lines = lines for num in grid._particle_line_numbers: line = lines[num] particles.append(read(line, field)) return np.array(particles) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/amrex/misc.py0000644000175100001770000000000014714401662016632 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2911518 yt-4.4.0/yt/frontends/amrex/tests/0000755000175100001770000000000014714401715016500 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/amrex/tests/__init__.py0000644000175100001770000000000014714401662020600 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/amrex/tests/test_field_parsing.py0000644000175100001770000000353014714401662022721 0ustar00runnerdockerimport pytest from yt.frontends.amrex.fields import Substance @pytest.mark.parametrize( "data, expected", [ pytest.param("X(He5)", [("He", 5)], id="isotope_1"), pytest.param("X(C12)", [("C", 12)], id="isotope_2"), pytest.param("X(A1B2C3)", [("A", 1), ("B", 2), ("C", 3)], id="molecule_1"), pytest.param("X(C12H24)", [("C", 12), ("H", 24)], id="molecule_2"), pytest.param("X(H2O)", [("H", 2), ("O", 1)], id="molecule_3"), pytest.param("X(ash)", [("ash", 0)], id="descriptive_name"), ], ) def test_Substance_spec(data, expected): assert Substance(data)._spec == expected @pytest.mark.parametrize( "data, expected_type", [ pytest.param("X(He5)", "isotope", id="isotope_1"), pytest.param("X(C12)", "isotope", id="isotope_2"), pytest.param("X(A1B2C3)", "molecule", id="molecule_1"), pytest.param("X(C12H24)", "molecule", id="molecule_2"), pytest.param("X(H2O)", "molecule", id="molecule_3"), pytest.param("X(ash)", "descriptive_name", id="descriptive_name"), ], ) def test_Substance_type(data, expected_type): sub = Substance(data) assert getattr(sub, f"is_{expected_type}")() @pytest.mark.parametrize( "data, expected_str, expected_tex", [ pytest.param("X(He5)", "He5", "^{5}He", id="isotope_1"), pytest.param("X(C12)", "C12", "^{12}C", id="isotope_2"), pytest.param("X(A1B2C3)", "AB2C3", "A_{}B_{2}C_{3}", id="molecule_1"), pytest.param("X(C12H24)", "C12H24", "C_{12}H_{24}", id="molecule_2"), pytest.param("X(H2O)", "H2O", "H_{2}O_{}", id="molecule_2"), pytest.param("X(ash)", "ash", "ash", id="descriptive_name"), ], ) def test_Substance_to_str(data, expected_str, expected_tex): sub = Substance(data) assert str(sub) == expected_str assert sub.to_tex() == expected_tex ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/amrex/tests/test_outputs.py0000644000175100001770000003066714714401662021651 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_allclose, assert_equal from yt.frontends.amrex.api import ( AMReXDataset, CastroDataset, MaestroDataset, NyxDataset, OrionDataset, WarpXDataset, ) from yt.loaders import load from yt.testing import ( disable_dataset_cache, requires_file, units_override_check, ) from yt.utilities.answer_testing.framework import ( GridValuesTest, data_dir_load, requires_ds, small_patch_amr, ) # We don't do anything needing 
ghost zone generation right now, because these # are non-periodic datasets. _orion_fields = ( ("gas", "temperature"), ("gas", "density"), ("gas", "velocity_magnitude"), ) _nyx_fields = ( ("boxlib", "Ne"), ("boxlib", "Temp"), ("boxlib", "particle_mass_density"), ) _warpx_fields = (("mesh", "Ex"), ("mesh", "By"), ("mesh", "jz")) _castro_fields = ( ("boxlib", "Temp"), ("gas", "density"), ("boxlib", "particle_count"), ) radadvect = "RadAdvect/plt00000" @requires_ds(radadvect) def test_radadvect(): ds = data_dir_load(radadvect) assert_equal(str(ds), "plt00000") for test in small_patch_amr(ds, _orion_fields): test_radadvect.__name__ = test.description yield test rt = "RadTube/plt00500" @requires_ds(rt) def test_radtube(): ds = data_dir_load(rt) assert_equal(str(ds), "plt00500") for test in small_patch_amr(ds, _orion_fields): test_radtube.__name__ = test.description yield test star = "StarParticles/plrd01000" @requires_ds(star) def test_star(): ds = data_dir_load(star) assert_equal(str(ds), "plrd01000") for test in small_patch_amr(ds, _orion_fields): test_star.__name__ = test.description yield test LyA = "Nyx_LyA/plt00000" @requires_ds(LyA) def test_LyA(): ds = data_dir_load(LyA) assert_equal(str(ds), "plt00000") for test in small_patch_amr( ds, _nyx_fields, input_center="c", input_weight=("boxlib", "Ne") ): test_LyA.__name__ = test.description yield test @requires_file(LyA) def test_nyx_particle_io(): ds = data_dir_load(LyA) grid = ds.index.grids[0] npart_grid_0 = 7908 # read directly from the header assert_equal(grid[("all", "particle_position_x")].size, npart_grid_0) assert_equal(grid["DM", "particle_position_y"].size, npart_grid_0) assert_equal(grid["all", "particle_position_z"].size, npart_grid_0) ad = ds.all_data() npart = 32768 # read directly from the header assert_equal(ad[("all", "particle_velocity_x")].size, npart) assert_equal(ad["DM", "particle_velocity_y"].size, npart) assert_equal(ad["all", "particle_velocity_z"].size, npart) assert np.all(ad[("all", "particle_mass")] == ad[("all", "particle_mass")][0]) left_edge = ds.arr([0.0, 0.0, 0.0], "code_length") right_edge = ds.arr([4.0, 4.0, 4.0], "code_length") center = 0.5 * (left_edge + right_edge) reg = ds.region(center, left_edge, right_edge) assert np.all( np.logical_and( reg[("all", "particle_position_x")] <= right_edge[0], reg[("all", "particle_position_x")] >= left_edge[0], ) ) assert np.all( np.logical_and( reg[("all", "particle_position_y")] <= right_edge[1], reg[("all", "particle_position_y")] >= left_edge[1], ) ) assert np.all( np.logical_and( reg[("all", "particle_position_z")] <= right_edge[2], reg[("all", "particle_position_z")] >= left_edge[2], ) ) RT_particles = "RT_particles/plt00050" @requires_ds(RT_particles) def test_RT_particles(): ds = data_dir_load(RT_particles) assert_equal(str(ds), "plt00050") for test in small_patch_amr(ds, _castro_fields): test_RT_particles.__name__ = test.description yield test @requires_file(RT_particles) def test_castro_particle_io(): ds = data_dir_load(RT_particles) grid = ds.index.grids[2] npart_grid_2 = 49 # read directly from the header assert_equal(grid[("all", "particle_position_x")].size, npart_grid_2) assert_equal(grid["Tracer", "particle_position_y"].size, npart_grid_2) assert_equal(grid["all", "particle_position_y"].size, npart_grid_2) ad = ds.all_data() npart = 49 # read directly from the header assert_equal(ad[("all", "particle_velocity_x")].size, npart) assert_equal(ad["Tracer", "particle_velocity_y"].size, npart) assert_equal(ad["all", "particle_velocity_y"].size, npart) 
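    # A region selection should only return particles whose positions fall
    # inside the region; the bounds checks below verify this along x and y.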
left_edge = ds.arr([0.0, 0.0, 0.0], "code_length") right_edge = ds.arr([0.25, 1.0, 1.0], "code_length") center = 0.5 * (left_edge + right_edge) reg = ds.region(center, left_edge, right_edge) assert np.all( np.logical_and( reg[("all", "particle_position_x")] <= right_edge[0], reg[("all", "particle_position_x")] >= left_edge[0], ) ) assert np.all( np.logical_and( reg[("all", "particle_position_y")] <= right_edge[1], reg[("all", "particle_position_y")] >= left_edge[1], ) ) langmuir = "LangmuirWave/plt00020_v2" @requires_ds(langmuir) def test_langmuir(): ds = data_dir_load(langmuir) assert_equal(str(ds), "plt00020_v2") for test in small_patch_amr( ds, _warpx_fields, input_center="c", input_weight=("mesh", "Ex") ): test_langmuir.__name__ = test.description yield test plasma = "PlasmaAcceleration/plt00030_v2" @requires_ds(plasma) def test_plasma(): ds = data_dir_load(plasma) assert_equal(str(ds), "plt00030_v2") for test in small_patch_amr( ds, _warpx_fields, input_center="c", input_weight=("mesh", "Ex") ): test_plasma.__name__ = test.description yield test beam = "GaussianBeam/plt03008" @requires_ds(beam) def test_beam(): ds = data_dir_load(beam) assert_equal(str(ds), "plt03008") for param in ("number of boxes", "maximum zones"): # PR 2807 # these parameters are only populated if the config file attached to this # dataset is read correctly assert param in ds.parameters for test in small_patch_amr( ds, _warpx_fields, input_center="c", input_weight=("mesh", "Ex") ): test_beam.__name__ = test.description yield test @requires_file(plasma) def test_warpx_particle_io(): ds = data_dir_load(plasma) grid = ds.index.grids[0] # read directly from the header npart0_grid_0 = 344 npart1_grid_0 = 69632 assert_equal(grid["particle0", "particle_position_x"].size, npart0_grid_0) assert_equal(grid["particle1", "particle_position_y"].size, npart1_grid_0) assert_equal(grid["all", "particle_position_z"].size, npart0_grid_0 + npart1_grid_0) # read directly from the header npart0 = 1360 npart1 = 802816 ad = ds.all_data() assert_equal(ad["particle0", "particle_velocity_x"].size, npart0) assert_equal(ad["particle1", "particle_velocity_y"].size, npart1) assert_equal(ad["all", "particle_velocity_z"].size, npart0 + npart1) np.all(ad["particle1", "particle_mass"] == ad["particle1", "particle_mass"][0]) np.all(ad["particle0", "particle_mass"] == ad["particle0", "particle_mass"][0]) left_edge = ds.arr([-7.5e-5, -7.5e-5, -7.5e-5], "code_length") right_edge = ds.arr([2.5e-5, 2.5e-5, 2.5e-5], "code_length") center = 0.5 * (left_edge + right_edge) reg = ds.region(center, left_edge, right_edge) assert np.all( np.logical_and( reg[("all", "particle_position_x")] <= right_edge[0], reg[("all", "particle_position_x")] >= left_edge[0], ) ) assert np.all( np.logical_and( reg[("all", "particle_position_y")] <= right_edge[1], reg[("all", "particle_position_y")] >= left_edge[1], ) ) assert np.all( np.logical_and( reg[("all", "particle_position_z")] <= right_edge[2], reg[("all", "particle_position_z")] >= left_edge[2], ) ) _raw_fields = [("raw", "Bx"), ("raw", "Ey"), ("raw", "jz")] laser = "Laser/plt00015" @requires_ds(laser) def test_raw_fields(): for field in _raw_fields: yield GridValuesTest(laser, field) @requires_file(rt) def test_OrionDataset(): assert isinstance(data_dir_load(rt), OrionDataset) @requires_file(LyA) def test_NyxDataset(): assert isinstance(data_dir_load(LyA), NyxDataset) @requires_file("nyx_small/nyx_small_00000") def test_NyxDataset_2(): assert isinstance(data_dir_load("nyx_small/nyx_small_00000"), NyxDataset) 
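# The isinstance-based tests above and below exercise yt's frontend
# auto-detection: each BoxLib-derived Dataset class declares a
# _subtype_keyword (e.g. "castro", "nyx", "hyp.") that is searched for in the
# plotfile's companion parameter file (its _default_cparam_filename, e.g.
# job_info or inputs), and loading picks the most specific class that
# validates.  A minimal sketch, assuming the nyx_small test dataset is
# available in the working directory:
#
#     import yt
#     ds = yt.load("nyx_small/nyx_small_00000")
#     assert type(ds).__name__ == "NyxDataset"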
@requires_file(RT_particles) def test_CastroDataset(): assert isinstance(data_dir_load(RT_particles), CastroDataset) @requires_file("castro_sod_x_plt00036") def test_CastroDataset_2(): assert isinstance(data_dir_load("castro_sod_x_plt00036"), CastroDataset) @requires_file("castro_sedov_1d_cyl_plt00150") def test_CastroDataset_3(): assert isinstance(data_dir_load("castro_sedov_1d_cyl_plt00150"), CastroDataset) @requires_file(plasma) def test_WarpXDataset(): assert isinstance(data_dir_load(plasma), WarpXDataset) @disable_dataset_cache @requires_file(plasma) def test_magnetic_units(): ds1 = load(plasma) assert_allclose(ds1.magnetic_unit.value, 1.0) assert str(ds1.magnetic_unit.units) == "T" mag_unit1 = ds1.magnetic_unit.to("code_magnetic") assert_allclose(mag_unit1.value, 1.0) assert str(mag_unit1.units) == "code_magnetic" ds2 = load(plasma, unit_system="cgs") assert_allclose(ds2.magnetic_unit.value, 1.0e4) assert str(ds2.magnetic_unit.units) == "G" mag_unit2 = ds2.magnetic_unit.to("code_magnetic") assert_allclose(mag_unit2.value, 1.0) assert str(mag_unit2.units) == "code_magnetic" @requires_ds(laser) def test_WarpXDataset_2(): assert isinstance(data_dir_load(laser), WarpXDataset) @requires_file("plt.Cavity00010") def test_AMReXDataset(): ds = data_dir_load("plt.Cavity00010", kwargs={"cparam_filename": "inputs"}) assert isinstance(ds, AMReXDataset) @requires_file(rt) def test_units_override(): units_override_check(rt) nyx_no_particles = "nyx_sedov_plt00086" @requires_file(nyx_no_particles) def test_nyx_no_part(): assert isinstance(data_dir_load(nyx_no_particles), NyxDataset) fields = sorted( [ ("boxlib", "H"), ("boxlib", "He"), ("boxlib", "MachNumber"), ("boxlib", "Ne"), ("boxlib", "Rank"), ("boxlib", "StateErr"), ("boxlib", "Temp"), ("boxlib", "X(H)"), ("boxlib", "X(He)"), ("boxlib", "density"), ("boxlib", "divu"), ("boxlib", "eint_E"), ("boxlib", "eint_e"), ("boxlib", "entropy"), ("boxlib", "forcex"), ("boxlib", "forcey"), ("boxlib", "forcez"), ("boxlib", "kineng"), ("boxlib", "logden"), ("boxlib", "magmom"), ("boxlib", "magvel"), ("boxlib", "magvort"), ("boxlib", "pressure"), ("boxlib", "rho_E"), ("boxlib", "rho_H"), ("boxlib", "rho_He"), ("boxlib", "rho_e"), ("boxlib", "soundspeed"), ("boxlib", "x_velocity"), ("boxlib", "xmom"), ("boxlib", "y_velocity"), ("boxlib", "ymom"), ("boxlib", "z_velocity"), ("boxlib", "zmom"), ] ) ds = data_dir_load(nyx_no_particles) assert_equal(sorted(ds.field_list), fields) msubch = "maestro_subCh_plt00248" @requires_file(msubch) def test_maestro_parameters(): assert isinstance(data_dir_load(msubch), MaestroDataset) ds = data_dir_load(msubch) # Check a string parameter assert ds.parameters["plot_base_name"] == "subCh_hot_baserun_plt" assert type(ds.parameters["plot_base_name"]) is str # noqa: E721 # Check boolean parameters: T or F assert not ds.parameters["use_thermal_diffusion"] assert type(ds.parameters["use_thermal_diffusion"]) is bool # noqa: E721 assert ds.parameters["do_burning"] assert type(ds.parameters["do_burning"]) is bool # noqa: E721 # Check a float parameter with a decimal point assert ds.parameters["sponge_kappa"] == float("10.00000000") assert type(ds.parameters["sponge_kappa"]) is float # noqa: E721 # Check a float parameter with E exponent notation assert ds.parameters["small_dt"] == float("0.1000000000E-09") # Check an int parameter assert ds.parameters["s0_interp_type"] == 3 assert type(ds.parameters["s0_interp_type"]) is int # noqa: E721 ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2911518 
yt-4.4.0/yt/frontends/amrvac/0000755000175100001770000000000014714401715015473 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/amrvac/__init__.py0000644000175100001770000000000014714401662017573 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/amrvac/api.py0000644000175100001770000000035714714401662016624 0ustar00runnerdocker""" frontend API: a submodule that exposes user-facing defs and classes """ from .data_structures import AMRVACDataset, AMRVACGrid, AMRVACHierarchy from .fields import AMRVACFieldInfo from .io import AMRVACIOHandler, read_amrvac_namelist ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/amrvac/data_structures.py0000644000175100001770000004550014714401662021266 0ustar00runnerdocker""" AMRVAC data structures """ import os import struct import warnings import weakref from pathlib import Path import numpy as np from more_itertools import always_iterable from yt.config import ytcfg from yt.data_objects.index_subobjects.grid_patch import AMRGridPatch from yt.data_objects.static_output import Dataset from yt.funcs import mylog, setdefaultattr from yt.geometry.api import Geometry from yt.geometry.grid_geometry_handler import GridIndex from yt.utilities.physical_constants import boltzmann_constant_cgs as kb_cgs from .datfile_utils import get_header, get_tree_info from .fields import AMRVACFieldInfo from .io import read_amrvac_namelist def _parse_geometry(geometry_tag: str) -> Geometry: """Translate AMRVAC's geometry tag to yt's format. Parameters ---------- geometry_tag : str A geometry tag as read from AMRVAC's datfile from v5. Returns ------- geometry_yt : Geometry An enum member of the yt.geometry.geometry_enum.Geometry class Examples -------- >>> _parse_geometry("Polar_2.5D") >>> _parse_geometry("Cartesian_2.5D") """ geometry_str, _, _dimension_str = geometry_tag.partition("_") return Geometry(geometry_str.lower()) class AMRVACGrid(AMRGridPatch): """A class to populate AMRVACHierarchy.grids, setting parent/children relations.""" _id_offset = 0 def __init__(self, id, index, level): # should use yt's convention (start from 0) super().__init__(id, filename=index.index_filename, index=index) self.Parent = None self.Children = [] self.Level = level def get_global_startindex(self): """Refresh and retrieve the starting index for each dimension at current level. 
Returns ------- self.start_index : int """ start_index = (self.LeftEdge - self.ds.domain_left_edge) / self.dds self.start_index = np.rint(start_index).astype("int64").ravel() return self.start_index def retrieve_ghost_zones(self, n_zones, fields, all_levels=False, smoothed=False): if smoothed: warnings.warn( "ghost-zones interpolation/smoothing is not " "currently supported for AMRVAC data.", category=RuntimeWarning, stacklevel=2, ) smoothed = False return super().retrieve_ghost_zones( n_zones, fields, all_levels=all_levels, smoothed=smoothed ) class AMRVACHierarchy(GridIndex): grid = AMRVACGrid def __init__(self, ds, dataset_type="amrvac"): self.dataset_type = dataset_type self.dataset = weakref.proxy(ds) # the index file *is* the datfile self.index_filename = self.dataset.parameter_filename self.directory = os.path.dirname(self.index_filename) self.float_type = np.float64 super().__init__(ds, dataset_type) def _detect_output_fields(self): """Parse field names from the header, as stored in self.dataset.parameters""" self.field_list = [ (self.dataset_type, f) for f in self.dataset.parameters["w_names"] ] def _count_grids(self): """Set self.num_grids from datfile header.""" self.num_grids = self.dataset.parameters["nleafs"] def _parse_index(self): """Populate self.grid_* attributes from tree info from datfile header.""" with open(self.index_filename, "rb") as istream: vaclevels, morton_indices, block_offsets = get_tree_info(istream) assert ( len(vaclevels) == len(morton_indices) == len(block_offsets) == self.num_grids ) self.block_offsets = block_offsets # YT uses 0-based grid indexing: # lowest level = 0, while AMRVAC uses 1 for lowest level ytlevels = np.array(vaclevels, dtype="int32") - 1 self.grid_levels.flat[:] = ytlevels self.min_level = np.min(ytlevels) self.max_level = np.max(ytlevels) assert self.max_level == self.dataset.parameters["levmax"] - 1 # some aliases for left/right edges computation in the coming loop domain_width = self.dataset.parameters["xmax"] - self.dataset.parameters["xmin"] block_nx = self.dataset.parameters["block_nx"] xmin = self.dataset.parameters["xmin"] dx0 = ( domain_width / self.dataset.parameters["domain_nx"] ) # dx at coarsest grid level (YT level 0) dim = self.dataset.dimensionality self.grids = np.empty(self.num_grids, dtype="object") for igrid, (ytlevel, morton_index) in enumerate( zip(ytlevels, morton_indices, strict=True) ): dx = dx0 / self.dataset.refine_by**ytlevel left_edge = xmin + (morton_index - 1) * block_nx * dx # edges and dimensions are filled in a dimensionality-agnostic way self.grid_left_edge[igrid, :dim] = left_edge self.grid_right_edge[igrid, :dim] = left_edge + block_nx * dx self.grid_dimensions[igrid, :dim] = block_nx self.grids[igrid] = self.grid(igrid, self, ytlevels[igrid]) def _populate_grid_objects(self): # required method for g in self.grids: g._prepare_grid() g._setup_dx() class AMRVACDataset(Dataset): _index_class = AMRVACHierarchy _field_info_class = AMRVACFieldInfo def __init__( self, filename, dataset_type="amrvac", units_override=None, unit_system="cgs", geometry_override=None, parfiles=None, default_species_fields=None, ): """Instantiate AMRVACDataset. Parameters ---------- filename : str Path to a datfile. dataset_type : str, optional This should always be 'amrvac'. units_override : dict, optional A dictionary of physical normalisation factors to interpret on disk data. 
unit_system : str, optional Either "cgs" (default), "mks" or "code" geometry_override : str, optional A geometry flag formatted according to either AMRVAC or yt standards. When this parameter is passed along with datfiles of version 5 or newer, it takes precedence over their internal "geometry" tag. parfiles : str or list, optional One or more parfiles to be passed to yt.frontends.amrvac.read_amrvac_namelist() """ # note: geometry_override and parfiles are specific to this frontend self._geometry_override = geometry_override self._parfiles = [] if parfiles is not None: parfiles = list(always_iterable(parfiles)) ppf = Path(parfiles[0]) if not ppf.is_absolute() and Path(filename).resolve().is_relative_to( ytcfg["yt", "test_data_dir"] ): mylog.debug( "Looks like %s is relative to your test_data_dir. Assuming this is also true for parfiles.", filename, ) parfiles = [Path(ytcfg["yt", "test_data_dir"]) / pf for pf in parfiles] self._parfiles = parfiles super().__init__( filename, dataset_type, units_override=units_override, unit_system=unit_system, default_species_fields=default_species_fields, ) namelist = None namelist_gamma = None c_adiab = None e_is_internal = None if parfiles is not None: namelist = read_amrvac_namelist(parfiles) if "hd_list" in namelist: c_adiab = namelist["hd_list"].get("hd_adiab", 1.0) namelist_gamma = namelist["hd_list"].get("hd_gamma") elif "mhd_list" in namelist: c_adiab = namelist["mhd_list"].get("mhd_adiab", 1.0) namelist_gamma = namelist["mhd_list"].get("mhd_gamma") if namelist_gamma is not None and self.gamma != namelist_gamma: mylog.error( "Inconsistent values in gamma: datfile %s, parfiles %s", self.gamma, namelist_gamma, ) if "method_list" in namelist: e_is_internal = namelist["method_list"].get("solve_internal_e", False) if c_adiab is not None: # this complicated unit is required for the adiabatic equation # of state to make physical sense c_adiab *= ( self.mass_unit ** (1 - self.gamma) * self.length_unit ** (2 + 3 * (self.gamma - 1)) / self.time_unit**2 ) self.namelist = namelist self._c_adiab = c_adiab self._e_is_internal = e_is_internal self.fluid_types += ("amrvac",) # refinement factor between a grid and its subgrid self.refine_by = 2 @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: """At load time, check whether data is recognized as AMRVAC formatted.""" validation = False if filename.endswith(".dat"): try: with open(filename, mode="rb") as istream: fmt = "=i" [datfile_version] = struct.unpack( fmt, istream.read(struct.calcsize(fmt)) ) if 3 <= datfile_version < 6: fmt = "=ii" offset_tree, offset_blocks = struct.unpack( fmt, istream.read(struct.calcsize(fmt)) ) istream.seek(0, 2) file_size = istream.tell() validation = ( offset_tree < file_size and offset_blocks < file_size ) except Exception: pass return validation def _parse_parameter_file(self): """Parse input datfile's header.
Apply geometry_override if specified.""" # required method # populate self.parameters with header data with open(self.parameter_filename, "rb") as istream: self.parameters.update(get_header(istream)) self.current_time = self.parameters["time"] self.dimensionality = self.parameters["ndim"] # force 3D for this definition dd = np.ones(3, dtype="int64") dd[: self.dimensionality] = self.parameters["domain_nx"] self.domain_dimensions = dd if self.parameters.get("staggered", False): mylog.warning( "'staggered' flag was found, but is currently ignored (unsupported)" ) # parse geometry # by order of decreasing priority, we use # - geometry_override # - "geometry" parameter from datfile # - if all fails, default to "cartesian" self.geometry = Geometry.CARTESIAN amrvac_geom = self.parameters.get("geometry", None) if amrvac_geom is not None: self.geometry = _parse_geometry(amrvac_geom) elif self.parameters["datfile_version"] > 4: mylog.error( "No 'geometry' flag found in datfile with version %d > 4.", self.parameters["datfile_version"], ) if self._geometry_override is not None: try: new_geometry = _parse_geometry(self._geometry_override) if new_geometry == self.geometry: mylog.info("geometry_override is identical to datfile parameter.") else: self.geometry = new_geometry mylog.warning( "Overriding geometry, this may lead to surprising results." ) except ValueError: mylog.error( "Unable to parse geometry_override '%s' (will be ignored).", self._geometry_override, ) # parse periodicity periodicity = self.parameters.get("periodic", ()) missing_dim = 3 - len(periodicity) self._periodicity = (*periodicity, *(missing_dim * (False,))) self.gamma = self.parameters.get("gamma", 5.0 / 3.0) # parse domain edges dle = np.zeros(3) dre = np.ones(3) dle[: self.dimensionality] = self.parameters["xmin"] dre[: self.dimensionality] = self.parameters["xmax"] self.domain_left_edge = dle self.domain_right_edge = dre # defaulting to non-cosmological self.cosmological_simulation = 0 self.current_redshift = 0.0 self.omega_matter = 0.0 self.omega_lambda = 0.0 self.hubble_constant = 0.0 # units stuff ====================================================================== def _set_code_unit_attributes(self): """Reproduce how AMRVAC internally sets up physical normalisation factors.""" # This gets called later than Dataset._override_code_units() # This is the reason why it uses setdefaultattr: it will only fill in the gaps # left by the "override", instead of overriding them again. # note: yt sets hydrogen mass equal to proton mass, amrvac doesn't. mp_cgs = self.quan(1.672621898e-24, "g") # This value is taken from AstroPy # get self.length_unit if overrides are supplied, otherwise use default length_unit = getattr(self, "length_unit", self.quan(1, "cm")) namelist = read_amrvac_namelist(self._parfiles) He_abundance = namelist.get("mhd_list", {}).get("he_abundance", 0.1) # 1. calculations for mass, density, numberdensity if "mass_unit" in self.units_override: # in this case unit_mass is supplied (and has been set as attribute) mass_unit = self.mass_unit density_unit = mass_unit / length_unit**3 nd_unit = density_unit / ((1.0 + 4.0 * He_abundance) * mp_cgs) else: # other case: numberdensity is supplied. # Fall back to one (default) if no overrides supplied try: nd_unit = self.quan(self.units_override["numberdensity_unit"]) except KeyError: nd_unit = self.quan( 1.0, self.__class__.default_units["numberdensity_unit"] ) density_unit = (1.0 + 4.0 * He_abundance) * mp_cgs * nd_unit mass_unit = density_unit * length_unit**3 # 2.
calculations for velocity if "time_unit" in self.units_override: # in this case time was supplied velocity_unit = length_unit / self.time_unit else: # other case: velocity was supplied. # Fall back to None if no overrides supplied velocity_unit = getattr(self, "velocity_unit", None) # 3. calculations for pressure and temperature if velocity_unit is None: # velocity and time not given, see if temperature is given. # Fall back to one (default) if not temperature_unit = getattr(self, "temperature_unit", self.quan(1, "K")) pressure_unit = ( (2.0 + 3.0 * He_abundance) * nd_unit * kb_cgs * temperature_unit ).in_cgs() velocity_unit = (np.sqrt(pressure_unit / density_unit)).in_cgs() else: # velocity is not zero if either time was given OR velocity was given pressure_unit = (density_unit * velocity_unit**2).in_cgs() temperature_unit = ( pressure_unit / ((2.0 + 3.0 * He_abundance) * nd_unit * kb_cgs) ).in_cgs() # 4. calculations for magnetic unit and time time_unit = getattr( self, "time_unit", length_unit / velocity_unit ) # if time given use it, else calculate magnetic_unit = (np.sqrt(4 * np.pi * pressure_unit)).to("gauss") setdefaultattr(self, "mass_unit", mass_unit) setdefaultattr(self, "density_unit", density_unit) setdefaultattr(self, "length_unit", length_unit) setdefaultattr(self, "velocity_unit", velocity_unit) setdefaultattr(self, "time_unit", time_unit) setdefaultattr(self, "temperature_unit", temperature_unit) setdefaultattr(self, "pressure_unit", pressure_unit) setdefaultattr(self, "magnetic_unit", magnetic_unit) allowed_unit_combinations = [ {"numberdensity_unit", "temperature_unit", "length_unit"}, {"mass_unit", "temperature_unit", "length_unit"}, {"mass_unit", "time_unit", "length_unit"}, {"numberdensity_unit", "velocity_unit", "length_unit"}, {"mass_unit", "velocity_unit", "length_unit"}, ] default_units = { "length_unit": "cm", "time_unit": "s", "mass_unit": "g", "velocity_unit": "cm/s", "magnetic_unit": "gauss", "temperature_unit": "K", # this is the one difference with Dataset.default_units: # we accept numberdensity_unit as a valid override "numberdensity_unit": "cm**-3", } @classmethod def _validate_units_override_keys(cls, units_override): """Check that keys in units_override are consistent with AMRVAC's internal normalisations factors. """ # YT supports overriding other normalisations, this method ensures consistency # between supplied 'units_override' items and those used by AMRVAC. # AMRVAC's normalisations/units have 3 degrees of freedom. # Moreover, if temperature unit is specified then velocity unit will be # calculated accordingly, and vice-versa. # We replicate this by allowing a finite set of combinations. # there are only three degrees of freedom, so explicitly check for this if len(units_override) > 3: raise ValueError( "More than 3 degrees of freedom were specified " f"in units_override ({len(units_override)} given)" ) # temperature and velocity cannot both be specified if "temperature_unit" in units_override and "velocity_unit" in units_override: raise ValueError( "Either temperature or velocity is allowed in units_override, not both." ) # check if provided overrides are allowed suo = set(units_override) for allowed_combo in cls.allowed_unit_combinations: if suo.issubset(allowed_combo): break else: raise ValueError( f"Combination {suo} passed to units_override " "is not consistent with AMRVAC.\n" f"Allowed combinations are {cls.allowed_unit_combinations}" ) # syntax for mixing super with classmethod is weird... 
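        # (added note) ``super(cls, cls)`` explicitly fetches the parent class's
        # version of this classmethod, so Dataset's generic key validation also
        # runs after the AMRVAC-specific checks above.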
super(cls, cls)._validate_units_override_keys(units_override) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/amrvac/datfile_utils.py0000644000175100001770000001227514714401662020705 0ustar00runnerdockerimport struct import numpy as np # Size of basic types (in bytes) SIZE_LOGICAL = 4 SIZE_INT = 4 SIZE_DOUBLE = 8 NAME_LEN = 16 # For un-aligned data, use '=' (for aligned data set to '') ALIGN = "=" def get_header(istream): """Read header from an MPI-AMRVAC 2.0 snapshot. ``istream`` should be a file opened in binary mode. """ istream.seek(0) h = {} fmt = ALIGN + "i" [h["datfile_version"]] = struct.unpack(fmt, istream.read(struct.calcsize(fmt))) if h["datfile_version"] < 3: raise OSError( f"Unsupported AMRVAC .dat file version: {h['datfile_version']}" ) # Read scalar data at beginning of file fmt = ALIGN + 9 * "i" + "d" hdr = struct.unpack(fmt, istream.read(struct.calcsize(fmt))) [ h["offset_tree"], h["offset_blocks"], h["nw"], h["ndir"], h["ndim"], h["levmax"], h["nleafs"], h["nparents"], h["it"], h["time"], ] = hdr # Read min/max coordinates fmt = ALIGN + h["ndim"] * "d" h["xmin"] = np.array(struct.unpack(fmt, istream.read(struct.calcsize(fmt)))) h["xmax"] = np.array(struct.unpack(fmt, istream.read(struct.calcsize(fmt)))) # Read domain and block size (in number of cells) fmt = ALIGN + h["ndim"] * "i" h["domain_nx"] = np.array(struct.unpack(fmt, istream.read(struct.calcsize(fmt)))) h["block_nx"] = np.array(struct.unpack(fmt, istream.read(struct.calcsize(fmt)))) if h["datfile_version"] >= 5: # Read periodicity fmt = ALIGN + h["ndim"] * "i" # Fortran logical is 4 byte int h["periodic"] = np.array( struct.unpack(fmt, istream.read(struct.calcsize(fmt))), dtype=bool ) # Read geometry name fmt = ALIGN + NAME_LEN * "c" hdr = struct.unpack(fmt, istream.read(struct.calcsize(fmt))) h["geometry"] = b"".join(hdr).strip().decode() # Read staggered flag fmt = ALIGN + "i" # Fortran logical is 4 byte int h["staggered"] = bool(struct.unpack(fmt, istream.read(struct.calcsize(fmt)))[0]) # Read w_names w_names = [] for _ in range(h["nw"]): fmt = ALIGN + NAME_LEN * "c" hdr = struct.unpack(fmt, istream.read(struct.calcsize(fmt))) w_names.append(b"".join(hdr).strip().decode()) h["w_names"] = w_names # Read physics type fmt = ALIGN + NAME_LEN * "c" hdr = struct.unpack(fmt, istream.read(struct.calcsize(fmt))) h["physics_type"] = b"".join(hdr).strip().decode() # Read number of physics-defined parameters fmt = ALIGN + "i" [n_pars] = struct.unpack(fmt, istream.read(struct.calcsize(fmt))) # First physics-parameter values are given, then their names fmt = ALIGN + n_pars * "d" vals = struct.unpack(fmt, istream.read(struct.calcsize(fmt))) fmt = ALIGN + n_pars * NAME_LEN * "c" names = struct.unpack(fmt, istream.read(struct.calcsize(fmt))) # Split and join the name strings (from one character array) names = [ b"".join(names[i : i + NAME_LEN]).strip().decode() for i in range(0, len(names), NAME_LEN) ] # Store the values corresponding to the names for val, name in zip(vals, names, strict=True): h[name] = val return h def get_tree_info(istream): """ Read levels, morton-curve indices, and byte offsets for each block as stored in the datfile. istream is an open datfile buffer with 'rb' mode. This can be used as the "first pass" data reading required by YT's interface. """ istream.seek(0) header = get_header(istream) nleafs = header["nleafs"] nparents = header["nparents"] # Read tree info.
Skip 'leaf' array istream.seek(header["offset_tree"] + (nleafs + nparents) * SIZE_LOGICAL) # Read block levels fmt = ALIGN + nleafs * "i" block_lvls = np.array(struct.unpack(fmt, istream.read(struct.calcsize(fmt)))) # Read block indices fmt = ALIGN + nleafs * header["ndim"] * "i" block_ixs = np.reshape( struct.unpack(fmt, istream.read(struct.calcsize(fmt))), [nleafs, header["ndim"]] ) # Read block offsets (skip ghost cells !) bcfmt = ALIGN + header["ndim"] * "i" bcsize = struct.calcsize(bcfmt) * 2 fmt = ALIGN + nleafs * "q" block_offsets = ( np.array(struct.unpack(fmt, istream.read(struct.calcsize(fmt)))) + bcsize ) return block_lvls, block_ixs, block_offsets def get_single_block_data(istream, byte_offset, block_shape): """retrieve a specific block (all fields) from a datfile""" istream.seek(byte_offset) # Read actual data fmt = ALIGN + np.prod(block_shape) * "d" d = struct.unpack(fmt, istream.read(struct.calcsize(fmt))) # Fortran ordering block_data = np.reshape(d, block_shape, order="F") return block_data def get_single_block_field_data(istream, byte_offset, block_shape, field_idx): """retrieve a specific block (ONE field) from a datfile""" # compute byte size of a single field field_shape = block_shape[:-1] fmt = ALIGN + np.prod(field_shape) * "d" byte_size_field = struct.calcsize(fmt) istream.seek(byte_offset + byte_size_field * field_idx) data = np.fromfile(istream, "=f8", count=np.prod(field_shape)) data.shape = field_shape[::-1] return data.T ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/amrvac/definitions.py0000644000175100001770000000011414714401662020355 0ustar00runnerdocker# This file is often empty. It can hold definitions related to a frontend. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/amrvac/fields.py0000644000175100001770000002552014714401662017320 0ustar00runnerdocker""" AMRVAC-specific fields """ import functools import numpy as np from yt.fields.field_info_container import FieldInfoContainer from yt.units import dimensions from yt.utilities.logger import ytLogger as mylog # We need to specify which fields we might have in our dataset. The field info # container subclass here will define which fields it knows about. There are # optionally methods on it that get called which can be subclassed. 
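# Illustrative sketch (added for exposition, not used by yt): derived-field
# functions registered with yt must have the signature (field, data), so the
# extra arguments of the _velocity helper below are bound ahead of time with
# functools.partial, e.g.
#
#     velocity_fn = functools.partial(_velocity, idir=1, prefix="dust1_")
#     # velocity_fn(field, data) now matches what add_field expects.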
direction_aliases = { "cartesian": ("x", "y", "z"), "polar": ("r", "theta", "z"), "cylindrical": ("r", "z", "theta"), "spherical": ("r", "theta", "phi"), } def _velocity(field, data, idir, prefix=None): """Velocity = linear momentum / density""" # This is meant to be used with functools.partial to produce # functions with only 2 arguments (field, data) # idir : int # the direction index (1, 2 or 3) # prefix : str # used to generalize to dust fields if prefix is None: prefix = "" moment = data["gas", "%smoment_%d" % (prefix, idir)] rho = data["gas", f"{prefix}density"] mask1 = rho == 0 if mask1.any(): mylog.info( "zeros found in %sdensity, " "patching them to compute corresponding velocity field.", prefix, ) mask2 = moment == 0 if not ((mask1 & mask2) == mask1).all(): raise RuntimeError rho[mask1] = 1 return moment / rho code_density = "code_mass / code_length**3" code_moment = "code_mass / code_length**2 / code_time" code_pressure = "code_mass / code_length / code_time**2" class AMRVACFieldInfo(FieldInfoContainer): # for now, define a finite family of dust fields (up to 100 species) MAXN_DUST_SPECIES = 100 known_dust_fields = [ ("rhod%d" % idust, (code_density, ["dust%d_density" % idust], None)) for idust in range(1, MAXN_DUST_SPECIES + 1) ] + [ ( "m%dd%d" % (idir, idust), (code_moment, ["dust%d_moment_%d" % (idust, idir)], None), ) for idust in range(1, MAXN_DUST_SPECIES + 1) for idir in (1, 2, 3) ] # format: (native(?) field, (units, [aliases], display_name)) # note: aliases will correspond to "gas" typed fields # whereas the native ones are "amrvac" typed known_other_fields = ( ("rho", (code_density, ["density"], None)), ("m1", (code_moment, ["moment_1"], None)), ("m2", (code_moment, ["moment_2"], None)), ("m3", (code_moment, ["moment_3"], None)), ("e", (code_pressure, ["energy_density"], None)), ("b1", ("code_magnetic", ["magnetic_1"], None)), ("b2", ("code_magnetic", ["magnetic_2"], None)), ("b3", ("code_magnetic", ["magnetic_3"], None)), ("Te", ("code_temperature", ["temperature"], None)), *known_dust_fields, ) known_particle_fields = () def _setup_velocity_fields(self, idust=None): if idust is None: dust_flag = dust_label = "" else: dust_flag = "d%d" % idust dust_label = "dust%d_" % idust us = self.ds.unit_system for idir, alias in enumerate(direction_aliases[self.ds.geometry], start=1): if ("amrvac", "m%d%s" % (idir, dust_flag)) not in self.field_list: break velocity_fn = functools.partial(_velocity, idir=idir, prefix=dust_label) self.add_field( ("gas", f"{dust_label}velocity_{alias}"), function=velocity_fn, units=us["velocity"], dimensions=dimensions.velocity, sampling_type="cell", ) self.alias( ("gas", "%svelocity_%d" % (dust_label, idir)), ("gas", f"{dust_label}velocity_{alias}"), units=us["velocity"], ) self.alias( ("gas", f"{dust_label}moment_{alias}"), ("gas", "%smoment_%d" % (dust_label, idir)), units=us["density"] * us["velocity"], ) def _setup_dust_fields(self): idust = 1 imax = self.__class__.MAXN_DUST_SPECIES while ("amrvac", "rhod%d" % idust) in self.field_list: if idust > imax: mylog.error( "Only the first %d dust species are currently read by yt. " "If you read this, please consider issuing a ticket. 
", imax, ) break self._setup_velocity_fields(idust) idust += 1 n_dust_found = idust - 1 us = self.ds.unit_system if n_dust_found > 0: def _total_dust_density(field, data): tot = np.zeros_like(data["gas", "density"]) for idust in range(1, n_dust_found + 1): tot += data["dust%d_density" % idust] return tot self.add_field( ("gas", "total_dust_density"), function=_total_dust_density, dimensions=dimensions.density, units=us["density"], sampling_type="cell", ) def dust_to_gas_ratio(field, data): return data["gas", "total_dust_density"] / data["gas", "density"] self.add_field( ("gas", "dust_to_gas_ratio"), function=dust_to_gas_ratio, dimensions=dimensions.dimensionless, sampling_type="cell", ) def setup_fluid_fields(self): from yt.fields.magnetic_field import setup_magnetic_field_aliases setup_magnetic_field_aliases(self, "amrvac", [f"mag{ax}" for ax in "xyz"]) self._setup_velocity_fields() # gas velocities self._setup_dust_fields() # dust derived fields (including velocities) # fields with nested dependencies are defined thereafter # by increasing level of complexity us = self.ds.unit_system def _kinetic_energy_density(field, data): # devnote : have a look at issue 1301 return 0.5 * data["gas", "density"] * data["gas", "velocity_magnitude"] ** 2 self.add_field( ("gas", "kinetic_energy_density"), function=_kinetic_energy_density, units=us["density"] * us["velocity"] ** 2, dimensions=dimensions.density * dimensions.velocity**2, sampling_type="cell", ) # magnetic energy density if ("amrvac", "b1") in self.field_list: def _magnetic_energy_density(field, data): emag = 0.5 * data["gas", "magnetic_1"] ** 2 for idim in "23": if ("amrvac", f"b{idim}") not in self.field_list: break emag += 0.5 * data["gas", f"magnetic_{idim}"] ** 2 # in AMRVAC the magnetic field is defined in units where mu0 = 1, # such that # Emag = 0.5*B**2 instead of Emag = 0.5*B**2 / mu0 # To correctly transform the dimensionality from gauss**2 -> rho*v**2, # we have to take mu0 into account. If we divide here, units when adding # the field should be us["density"]*us["velocity"]**2. # If not, they should be us["magnetic_field"]**2 and division should # happen elsewhere. emag /= 4 * np.pi # divided by mu0 = 4pi in cgs, # yt handles 'mks' and 'code' unit systems internally. return emag self.add_field( ("gas", "magnetic_energy_density"), function=_magnetic_energy_density, units=us["density"] * us["velocity"] ** 2, dimensions=dimensions.density * dimensions.velocity**2, sampling_type="cell", ) # Adding the thermal pressure field. 
# In AMRVAC we have multiple physics possibilities: # - if HD/MHD + energy equation P = (gamma-1)*(e - ekin (- emag)) for (M)HD # - if HD/MHD but solve_internal_e is true in parfile, P = (gamma-1)*e for both # - if (m)hd_energy is false in parfile (isothermal), P = c_adiab * rho**gamma def _full_thermal_pressure_HD(field, data): # energy density and pressure are actually expressed in the same unit pthermal = (data.ds.gamma - 1) * ( data["gas", "energy_density"] - data["gas", "kinetic_energy_density"] ) return pthermal def _full_thermal_pressure_MHD(field, data): pthermal = ( _full_thermal_pressure_HD(field, data) - (data.ds.gamma - 1) * data["gas", "magnetic_energy_density"] ) return pthermal def _polytropic_thermal_pressure(field, data): return (data.ds.gamma - 1) * data["gas", "energy_density"] def _adiabatic_thermal_pressure(field, data): return data.ds._c_adiab * data["gas", "density"] ** data.ds.gamma pressure_recipe = None if ("amrvac", "e") in self.field_list: if self.ds._e_is_internal: pressure_recipe = _polytropic_thermal_pressure mylog.info("Using polytropic EoS for thermal pressure.") elif ("amrvac", "b1") in self.field_list: pressure_recipe = _full_thermal_pressure_MHD mylog.info("Using full MHD energy for thermal pressure.") else: pressure_recipe = _full_thermal_pressure_HD mylog.info("Using full HD energy for thermal pressure.") elif self.ds._c_adiab is not None: pressure_recipe = _adiabatic_thermal_pressure mylog.info("Using adiabatic EoS for thermal pressure (isothermal).") mylog.warning( "If you used usr_set_pthermal you should " "redefine the thermal_pressure field." ) if pressure_recipe is not None: self.add_field( ("gas", "thermal_pressure"), function=pressure_recipe, units=us["density"] * us["velocity"] ** 2, dimensions=dimensions.density * dimensions.velocity**2, sampling_type="cell", ) # sound speed and temperature depend on thermal pressure def _sound_speed(field, data): return np.sqrt( data.ds.gamma * data["gas", "thermal_pressure"] / data["gas", "density"] ) self.add_field( ("gas", "sound_speed"), function=_sound_speed, units=us["velocity"], dimensions=dimensions.velocity, sampling_type="cell", ) else: mylog.warning( "e not found and no parfile passed, can not set thermal_pressure." ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/amrvac/io.py0000644000175100001770000001512414714401662016460 0ustar00runnerdocker""" AMRVAC-specific IO functions """ import os import numpy as np from more_itertools import always_iterable from yt.geometry.selection_routines import GridSelector from yt.utilities.io_handler import BaseIOHandler from yt.utilities.on_demand_imports import _f90nml as f90nml def read_amrvac_namelist(parfiles): """Read one or more parfiles, and return a unified f90nml.Namelist object. This function replicates the patching logic of MPI-AMRVAC where redundant parameters only retain last-in-line values, with the exception of `&filelist:base_filename`, which is accumulated. When passed a single file, this function acts as a mere wrapper of f90nml.read(). Parameters ---------- parfiles : str, os.Pathlike, byte, or an iterable returning those types A file path, or a list of file paths to MPI-AMRVAC configuration parfiles. Returns ------- unified_namelist : f90nml.Namelist A single namelist object. The class inherits from ordereddict. 
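Examples -------- Assuming the two sample parfiles shipped with yt's test suite are present in the working directory (illustrative): >>> nml = read_amrvac_namelist(["bw_3d.par", "tvdlf_scheme.par"]) >>> nml["filelist"]["base_filename"] 'bw_3d_tvdlf'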
""" parfiles = (os.path.expanduser(pf) for pf in always_iterable(parfiles)) # first merge the namelists namelists = [f90nml.read(parfile) for parfile in parfiles] unified_namelist = f90nml.Namelist() for nml in namelists: unified_namelist.patch(nml) if "filelist" not in unified_namelist: return unified_namelist # accumulate `&filelist:base_filename` base_filename = "".join( nml.get("filelist", {}).get("base_filename", "") for nml in namelists ) unified_namelist["filelist"]["base_filename"] = base_filename return unified_namelist class AMRVACIOHandler(BaseIOHandler): _particle_reader = False _dataset_type = "amrvac" def __init__(self, ds): BaseIOHandler.__init__(self, ds) self.ds = ds self.datfile = ds.parameter_filename header = self.ds.parameters self.block_shape = np.append(header["block_nx"], header["nw"]) def _read_particle_coords(self, chunks, ptf): """Not implemented yet.""" # This needs to *yield* a series of tuples of (ptype, (x, y, z)). # chunks is a list of chunks, and ptf is a dict where the keys are # ptypes and the values are lists of fields. raise NotImplementedError def _read_particle_fields(self, chunks, ptf, selector): """Not implemented yet.""" # This gets called after the arrays have been allocated. It needs to # yield ((ptype, field), data) where data is the masked results of # reading ptype, field and applying the selector to the data read in. # Selector objects have a .select_points(x,y,z) that returns a mask, so # you need to do your masking here. raise NotImplementedError def _read_data(self, fid, grid, field): """Retrieve field data from a grid. Parameters ---------- fid: file descriptor (open binary file with read access) grid : yt.frontends.amrvac.data_structures.AMRVACGrid The grid from which data is to be read. field : str A field name. Returns ------- data : np.ndarray A 3D array of float64 type representing grid data. """ ileaf = grid.id offset = grid._index.block_offsets[ileaf] field_idx = self.ds.parameters["w_names"].index(field) field_shape = self.block_shape[:-1] count = np.prod(field_shape) byte_size_field = count * 8 # size of a double fid.seek(offset + byte_size_field * field_idx) data = np.fromfile(fid, "=f8", count=count) data.shape = field_shape[::-1] data = data.T # Always convert data to 3D, as grid.ActiveDimensions is always 3D while len(data.shape) < 3: data = data[..., np.newaxis] return data def _read_fluid_selection(self, chunks, selector, fields, size): """Retrieve field(s) data in a selected region of space. Parameters ---------- chunks : generator A generator for multiple chunks, each of which contains a list of grids. selector : yt.geometry.selection_routines.SelectorObject A spatial region selector. fields : list A list of tuples (ftype, fname). size : np.int64 The cumulative number of objs contained in all chunks. Returns ------- data_dict : dict keys are the (ftype, fname) tuples, values are arrays that have been masked using whatever selector method is appropriate. Arrays have dtype float64. """ # @Notes from Niels: # The chunks list has YTDataChunk objects containing the different grids. # The list of grids can be obtained by doing eg. # grids_list = chunks[0].objs or chunks[1].objs etc. # Every element in "grids_list" is then an AMRVACGrid object, # and has hence all attributes of a grid : # (Level, ActiveDimensions, LeftEdge, etc.) 
chunks = list(chunks) data_dict = {} # <- return variable if isinstance(selector, GridSelector): if not len(chunks) == len(chunks[0].objs) == 1: raise RuntimeError grid = chunks[0].objs[0] with open(self.datfile, "rb") as fh: for ftype, fname in fields: data_dict[ftype, fname] = self._read_data(fh, grid, fname) else: if size is None: size = sum(g.count(selector) for chunk in chunks for g in chunk.objs) for field in fields: data_dict[field] = np.empty(size, dtype="float64") # nb_grids = sum(len(chunk.objs) for chunk in chunks) with open(self.datfile, "rb") as fh: ind = 0 for chunk in chunks: for grid in chunk.objs: nd = 0 for field in fields: ftype, fname = field data = self._read_data(fh, grid, fname) nd = grid.select(selector, data, data_dict[field], ind) ind += nd return data_dict def _read_chunk_data(self, chunk, fields): """Not implemented yet.""" # This reads the data from a single chunk without doing any selection, # and is only used for caching data that might be used by multiple # different selectors later. For instance, this can speed up ghost zone # computation. # it should be used by _read_fluid_selection instead of _read_data raise NotImplementedError ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.295152 yt-4.4.0/yt/frontends/amrvac/tests/0000755000175100001770000000000014714401715016635 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/amrvac/tests/__init__.py0000644000175100001770000000000014714401662020735 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.295152 yt-4.4.0/yt/frontends/amrvac/tests/sample_parfiles/0000755000175100001770000000000014714401715022003 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/amrvac/tests/sample_parfiles/bw_3d.par0000644000175100001770000000212014714401662023501 0ustar00runnerdocker!setup.pl -d=3 &filelist typefilelog='regression_test' base_filename='bw_3d' saveprim=.true. convert_type='vtuBCCmpi' autoconvert=.true. nwauxio=1 / &savelist dtsave_log = 1.d-3 / &stoplist time_max = 2.d-2 / &methodlist time_integrator= 'threestep' flux_scheme= 20*'hllc' limiter= 20*'koren' / &boundlist typeboundary_min1=5*'cont' typeboundary_max1=5*'cont' typeboundary_min2=5*'cont' typeboundary_max2=5*'cont' typeboundary_min3=5*'cont' typeboundary_max3=5*'cont' / &meshlist refine_criterion=3 refine_max_level=3 w_refine_weight(1)=0.5d0 w_refine_weight(5)=0.5d0 block_nx1=8 block_nx2=8 block_nx3=8 domain_nx1=16 domain_nx2=16 domain_nx3=16 iprob=1 xprobmin1=0.d0 xprobmax1=2.d0 xprobmin2=0.d0 xprobmax2=2.d0 xprobmin3=0.d0 xprobmax3=2.d0 / ¶mlist typecourant='maxsum' courantpar=0.5d0 / ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/amrvac/tests/sample_parfiles/tvdlf_scheme.par0000644000175100001770000000036714714401662025161 0ustar00runnerdocker! This is a fake example modifier parfile that can be used as an ! extension to bw_3d.par in order to change the integrator scheme. ! This file is here for testing. 
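! Illustrative usage (added note): reading this file after bw_3d.par, e.g.
! read_amrvac_namelist(["bw_3d.par", "tvdlf_scheme.par"]), overrides
! flux_scheme and appends '_tvdlf' to the accumulated base_filename.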
&methodlist flux_scheme= 20*'tvdlf' / &filelist base_filename='_tvdlf' / ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/amrvac/tests/test_outputs.py0000644000175100001770000001063314714401662021775 0ustar00runnerdockerimport numpy as np import yt # NOQA from yt.frontends.amrvac.api import AMRVACDataset, AMRVACGrid from yt.testing import requires_file from yt.units import YTArray from yt.utilities.answer_testing.framework import ( data_dir_load, requires_ds, small_patch_amr, ) blastwave_spherical_2D = "amrvac/bw_2d0000.dat" khi_cartesian_2D = "amrvac/kh_2d0000.dat" khi_cartesian_3D = "amrvac/kh_3D0000.dat" jet_cylindrical_25D = "amrvac/Jet0003.dat" riemann_cartesian_175D = "amrvac/R_1d0005.dat" blastwave_cartesian_3D = "amrvac/bw_3d0000.dat" blastwave_polar_2D = "amrvac/bw_polar_2D0000.dat" blastwave_cylindrical_3D = "amrvac/bw_cylindrical_3D0000.dat" rmi_cartesian_dust_2D = "amrvac/Richtmyer_Meshkov_dust_2D/RM2D_dust_Kwok0000.dat" def _get_fields_to_check(ds): fields = ["density", "velocity_magnitude"] raw_fields_labels = [fname for ftype, fname in ds.field_list] if "b1" in raw_fields_labels: fields.append("magnetic_energy_density") if "e" in raw_fields_labels: fields.append("energy_density") if "rhod1" in raw_fields_labels: fields.append("total_dust_density") # note : not hitting dust velocity fields return fields @requires_file(khi_cartesian_2D) def test_AMRVACDataset(): assert isinstance(data_dir_load(khi_cartesian_2D), AMRVACDataset) @requires_ds(blastwave_cartesian_3D) def test_domain_size(): # "Check for correct box size, see bw_3d.par" ds = data_dir_load(blastwave_cartesian_3D) for lb in ds.domain_left_edge: assert int(lb) == 0 for rb in ds.domain_right_edge: assert int(rb) == 2 for w in ds.domain_width: assert int(w) == 2 @requires_file(blastwave_cartesian_3D) def test_grid_attributes(): # "Check various grid attributes" ds = data_dir_load(blastwave_cartesian_3D) grids = ds.index.grids assert ds.index.max_level == 2 for g in grids: assert isinstance(g, AMRVACGrid) assert isinstance(g.LeftEdge, YTArray) assert isinstance(g.RightEdge, YTArray) assert isinstance(g.ActiveDimensions, np.ndarray) assert isinstance(g.Level, (np.int32, np.int64, int)) @requires_ds(blastwave_polar_2D) def test_bw_polar_2d(): ds = data_dir_load(blastwave_polar_2D) for test in small_patch_amr(ds, _get_fields_to_check(ds)): test_bw_polar_2d.__name__ = test.description yield test @requires_ds(blastwave_cartesian_3D) def test_blastwave_cartesian_3D(): ds = data_dir_load(blastwave_cartesian_3D) for test in small_patch_amr(ds, _get_fields_to_check(ds)): test_blastwave_cartesian_3D.__name__ = test.description yield test @requires_ds(blastwave_spherical_2D) def test_blastwave_spherical_2D(): ds = data_dir_load(blastwave_spherical_2D) for test in small_patch_amr(ds, _get_fields_to_check(ds)): test_blastwave_spherical_2D.__name__ = test.description yield test @requires_ds(blastwave_cylindrical_3D) def test_blastwave_cylindrical_3D(): ds = data_dir_load(blastwave_cylindrical_3D) for test in small_patch_amr(ds, _get_fields_to_check(ds)): test_blastwave_cylindrical_3D.__name__ = test.description yield test @requires_ds(khi_cartesian_2D) def test_khi_cartesian_2D(): ds = data_dir_load(khi_cartesian_2D) for test in small_patch_amr(ds, _get_fields_to_check(ds)): test_khi_cartesian_2D.__name__ = test.description yield test @requires_ds(khi_cartesian_3D) def test_khi_cartesian_3D(): ds = data_dir_load(khi_cartesian_3D) for test in small_patch_amr(ds, 
_get_fields_to_check(ds)): test_khi_cartesian_3D.__name__ = test.description yield test @requires_ds(jet_cylindrical_25D) def test_jet_cylindrical_25D(): ds = data_dir_load(jet_cylindrical_25D) for test in small_patch_amr(ds, _get_fields_to_check(ds)): test_jet_cylindrical_25D.__name__ = test.description yield test @requires_ds(riemann_cartesian_175D) def test_riemann_cartesian_175D(): ds = data_dir_load(riemann_cartesian_175D) for test in small_patch_amr(ds, _get_fields_to_check(ds)): test_riemann_cartesian_175D.__name__ = test.description yield test @requires_ds(rmi_cartesian_dust_2D) def test_rmi_cartesian_dust_2D(): # dataset with dust fields ds = data_dir_load(rmi_cartesian_dust_2D) for test in small_patch_amr(ds, _get_fields_to_check(ds)): test_rmi_cartesian_dust_2D.__name__ = test.description yield test ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/amrvac/tests/test_read_amrvac_namelist.py0000644000175100001770000000334414714401662024413 0ustar00runnerdockerimport os from copy import deepcopy from pathlib import Path from yt.frontends.amrvac.api import read_amrvac_namelist from yt.testing import requires_module from yt.utilities.on_demand_imports import _f90nml as f90nml test_dir = os.path.dirname(os.path.abspath(__file__)) blast_wave_parfile = os.path.join(test_dir, "sample_parfiles", "bw_3d.par") modifier_parfile = os.path.join(test_dir, "sample_parfiles", "tvdlf_scheme.par") @requires_module("f90nml") def test_read_pathlike(): read_amrvac_namelist(Path(blast_wave_parfile)) @requires_module("f90nml") def test_read_one_file(): """when provided a single file, the function should merely act as a wrapper for f90nml.read()""" namelist1 = read_amrvac_namelist(blast_wave_parfile) namelist2 = f90nml.read(blast_wave_parfile) assert namelist1 == namelist2 @requires_module("f90nml") def test_accumulate_basename(): """When two (or more) parfiles are passed, the filelist:base_filename should be special-cased""" namelist_base = f90nml.read(blast_wave_parfile) namelist_update = f90nml.read(modifier_parfile) namelist_tot1 = read_amrvac_namelist([blast_wave_parfile, modifier_parfile]) namelist_tot2 = deepcopy(namelist_base) namelist_tot2.patch(namelist_update) # remove and store the special-case value name1 = namelist_tot1["filelist"].pop("base_filename") name2 = namelist_tot2["filelist"].pop("base_filename") assert ( name1 == namelist_base["filelist"]["base_filename"] + namelist_update["filelist"]["base_filename"] ) assert name2 == namelist_update["filelist"]["base_filename"] assert name1 != name2 # test equality for the rest of the namelist assert namelist_tot1 == namelist_tot2 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/amrvac/tests/test_units_override.py0000644000175100001770000001162314714401662023313 0ustar00runnerdockerfrom numpy.testing import assert_raises from yt.testing import assert_allclose_units, requires_file from yt.units import YTQuantity from yt.utilities.answer_testing.framework import data_dir_load khi_cartesian_2D = "amrvac/kh_2d0000.dat" # Tests for units: check that overriding certain units yields the correct derived units. 
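# (added illustrative sketch) how an end user would supply one of the allowed
# three-degree-of-freedom combinations at load time; the path is the sample
# dataset used throughout this module:
#
#     import yt
#     ds = yt.load(
#         "amrvac/kh_2d0000.dat",
#         units_override={
#             "length_unit": (1e9, "cm"),
#             "temperature_unit": (1e6, "K"),
#             "numberdensity_unit": (1e9, "cm**-3"),
#         },
#     )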
# The following are the correct normalisations # based on length, numberdensity and temperature length_unit = (1e9, "cm") numberdensity_unit = (1e9, "cm**-3") temperature_unit = (1e6, "K") density_unit = (2.341670657200000e-15, "g*cm**-3") mass_unit = (2.341670657200000e12, "g") velocity_unit = (1.164508387441102e07, "cm*s**-1") pressure_unit = (3.175492240000000e-01, "dyn*cm**-2") time_unit = (8.587314705370271e01, "s") magnetic_unit = (1.997608879907716, "gauss") def _assert_normalisations_equal(ds): assert_allclose_units(ds.length_unit, YTQuantity(*length_unit)) assert_allclose_units(ds.temperature_unit, YTQuantity(*temperature_unit)) assert_allclose_units(ds.density_unit, YTQuantity(*density_unit)) assert_allclose_units(ds.mass_unit, YTQuantity(*mass_unit)) assert_allclose_units(ds.velocity_unit, YTQuantity(*velocity_unit)) assert_allclose_units(ds.pressure_unit, YTQuantity(*pressure_unit)) assert_allclose_units(ds.time_unit, YTQuantity(*time_unit)) assert_allclose_units(ds.magnetic_unit, YTQuantity(*magnetic_unit)) @requires_file(khi_cartesian_2D) def test_normalisations_length_temp_nb(): # overriding length, temperature, numberdensity overrides = { "length_unit": length_unit, "temperature_unit": temperature_unit, "numberdensity_unit": numberdensity_unit, } ds = data_dir_load(khi_cartesian_2D, kwargs={"units_override": overrides}) _assert_normalisations_equal(ds) @requires_file(khi_cartesian_2D) def test_normalisations_length_temp_mass(): # overriding length, temperature, mass overrides = { "length_unit": length_unit, "temperature_unit": temperature_unit, "mass_unit": mass_unit, } ds = data_dir_load(khi_cartesian_2D, kwargs={"units_override": overrides}) _assert_normalisations_equal(ds) @requires_file(khi_cartesian_2D) def test_normalisations_length_time_mass(): # overriding length, time, mass overrides = { "length_unit": length_unit, "time_unit": time_unit, "mass_unit": mass_unit, } ds = data_dir_load(khi_cartesian_2D, kwargs={"units_override": overrides}) _assert_normalisations_equal(ds) @requires_file(khi_cartesian_2D) def test_normalisations_length_vel_nb(): # overriding length, velocity, numberdensity overrides = { "length_unit": length_unit, "velocity_unit": velocity_unit, "numberdensity_unit": numberdensity_unit, } ds = data_dir_load(khi_cartesian_2D, kwargs={"units_override": overrides}) _assert_normalisations_equal(ds) @requires_file(khi_cartesian_2D) def test_normalisations_length_vel_mass(): # overriding length, velocity, mass overrides = { "length_unit": length_unit, "velocity_unit": velocity_unit, "mass_unit": mass_unit, } ds = data_dir_load(khi_cartesian_2D, kwargs={"units_override": overrides}) _assert_normalisations_equal(ds) @requires_file(khi_cartesian_2D) def test_normalisations_default(): # test default normalisations, without overrides ds = data_dir_load(khi_cartesian_2D) assert_allclose_units(ds.length_unit, YTQuantity(1, "cm")) assert_allclose_units(ds.temperature_unit, YTQuantity(1, "K")) assert_allclose_units( ds.density_unit, YTQuantity(2.341670657200000e-24, "g*cm**-3") ) assert_allclose_units(ds.mass_unit, YTQuantity(2.341670657200000e-24, "g")) assert_allclose_units( ds.velocity_unit, YTQuantity(1.164508387441102e04, "cm*s**-1") ) assert_allclose_units( ds.pressure_unit, YTQuantity(3.175492240000000e-16, "dyn*cm**-2") ) assert_allclose_units(ds.time_unit, YTQuantity(8.587314705370271e-05, "s")) assert_allclose_units(ds.magnetic_unit, YTQuantity(6.316993934686148e-08, "gauss")) @requires_file(khi_cartesian_2D) def test_normalisations_too_many_args(): # test 
forbidden case: too many arguments (max 3 are allowed) overrides = { "length_unit": length_unit, "numberdensity_unit": numberdensity_unit, "temperature_unit": temperature_unit, "time_unit": time_unit, } with assert_raises(ValueError): data_dir_load(khi_cartesian_2D, kwargs={"units_override": overrides}) @requires_file(khi_cartesian_2D) def test_normalisations_vel_and_length(): # test forbidden case: both velocity and temperature are specified as overrides overrides = { "length_unit": length_unit, "velocity_unit": velocity_unit, "temperature_unit": temperature_unit, } with assert_raises(ValueError): data_dir_load(khi_cartesian_2D, kwargs={"units_override": overrides}) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/api.py0000644000175100001770000000006714714401662015351 0ustar00runnerdockerfrom . import __all__ as _frontends # backward compat ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.295152 yt-4.4.0/yt/frontends/arepo/0000755000175100001770000000000014714401715015330 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/arepo/__init__.py0000644000175100001770000000000014714401662017430 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/arepo/api.py0000644000175100001770000000016514714401662016456 0ustar00runnerdockerfrom . import tests from .data_structures import ArepoFieldInfo, ArepoHDF5Dataset from .io import IOHandlerArepoHDF5 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/arepo/data_structures.py0000644000175100001770000001127014714401662021120 0ustar00runnerdockerimport numpy as np from yt.frontends.gadget.api import GadgetHDF5Dataset from yt.funcs import mylog from yt.utilities.on_demand_imports import _h5py as h5py from .fields import ArepoFieldInfo class ArepoHDF5Dataset(GadgetHDF5Dataset): _load_requirements = ["h5py"] _field_info_class = ArepoFieldInfo def __init__( self, filename, dataset_type="arepo_hdf5", unit_base=None, smoothing_factor=2.0, index_order=None, index_filename=None, kernel_name=None, bounding_box=None, units_override=None, unit_system="cgs", default_species_fields=None, ): super().__init__( filename, dataset_type=dataset_type, unit_base=unit_base, index_order=index_order, index_filename=index_filename, kernel_name=kernel_name, bounding_box=bounding_box, units_override=units_override, unit_system=unit_system, default_species_fields=default_species_fields, ) # The "smoothing_factor" is a user-configurable parameter which # is multiplied by the radius of the sphere with a volume equal # to that of the Voronoi cell to create smoothing lengths. 
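        # (added note) concretely, hsml = smoothing_factor * (3 V / (4 pi))**(1/3),
        # with V = Masses / Density the cell volume; see
        # IOHandlerArepoHDF5._get_smoothing_length for the actual computation.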
self.smoothing_factor = smoothing_factor self.gamma = 5.0 / 3.0 self.gamma_cr = self.parameters.get("GammaCR", 4.0 / 3.0) @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if cls._missing_load_requirements(): return False need_groups = ["Header", "Config"] veto_groups = ["FOF", "Group", "Subhalo"] valid = True try: fh = h5py.File(filename, mode="r") valid = ( all(ng in fh["/"] for ng in need_groups) and not any(vg in fh["/"] for vg in veto_groups) and ( "VORONOI" in fh["/Config"].attrs.keys() or "AMR" in fh["/Config"].attrs.keys() ) # Datasets with GFM_ fields present are AREPO or any(field.startswith("GFM_") for field in fh["/PartType0"]) ) fh.close() except Exception: valid = False return valid def _get_uvals(self): handle = h5py.File(self.parameter_filename, mode="r") uvals = {} missing = [True] * 3 for i, unit in enumerate( ["UnitLength_in_cm", "UnitMass_in_g", "UnitVelocity_in_cm_per_s"] ): for grp in ["Header", "Parameters", "Units"]: if grp in handle and unit in handle[grp].attrs: uvals[unit] = handle[grp].attrs[unit] missing[i] = False break if "UnitLength_in_cm" in uvals: # We assume this is comoving, because in the absence of comoving # integration the redshift will be zero. uvals["cmcm"] = 1.0 / uvals["UnitLength_in_cm"] handle.close() if all(missing): uvals = None return uvals def _set_code_unit_attributes(self): arepo_unit_base = self._get_uvals() # This rather convoluted logic is required to ensure that # units which are present in the Arepo dataset will be used # no matter what but that the user gets warned if arepo_unit_base is not None: if self._unit_base is None: self._unit_base = arepo_unit_base else: for unit in arepo_unit_base: if unit == "cmcm": continue short_unit = unit.split("_")[0][4:].lower() if short_unit in self._unit_base: which_unit = short_unit self._unit_base.pop(short_unit, None) elif unit in self._unit_base: which_unit = unit else: which_unit = None if which_unit is not None: msg = f"Overwriting '{which_unit}' in unit_base with what we found in the dataset." 
mylog.warning(msg) self._unit_base[unit] = arepo_unit_base[unit] if "cmcm" in arepo_unit_base: self._unit_base["cmcm"] = arepo_unit_base["cmcm"] super()._set_code_unit_attributes() munit = np.sqrt(self.mass_unit / (self.time_unit**2 * self.length_unit)).to( "gauss" ) if self.cosmological_simulation: self.magnetic_unit = self.quan(munit.value, f"{munit.units}/a**2") else: self.magnetic_unit = munit ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/arepo/fields.py0000644000175100001770000002105614714401662017155 0ustar00runnerdockerfrom yt.fields.field_info_container import FieldInfoContainer from yt.fields.species_fields import add_species_field_by_fraction, setup_species_fields from yt.frontends.gadget.api import GadgetFieldInfo from yt.utilities.chemical_formulas import ChemicalFormula from yt.utilities.physical_ratios import _primordial_mass_fraction metal_elements = ["He", "C", "N", "O", "Ne", "Mg", "Si", "Fe"] class ArepoFieldInfo(GadgetFieldInfo): def __init__(self, ds, field_list, slice_info=None): if ds.cosmological_simulation: GFM_SFT_units = "dimensionless" else: GFM_SFT_units = "code_length/code_velocity" self.known_particle_fields += ( ("GFM_StellarFormationTime", (GFM_SFT_units, ["stellar_age"], None)), ("MagneticField", ("code_magnetic", ["particle_magnetic_field"], None)), ( "MagneticFieldDivergence", ("code_magnetic/code_length", ["magnetic_field_divergence"], None), ), ("GFM_CoolingRate", ("erg*cm**3/s", ["cooling_rate"], None)), ("GFM_Metallicity", ("", ["metallicity"], None)), ("GFM_Metals_00", ("", ["H_fraction"], None)), ("GFM_Metals_01", ("", ["He_fraction"], None)), ("GFM_Metals_02", ("", ["C_fraction"], None)), ("GFM_Metals_03", ("", ["N_fraction"], None)), ("GFM_Metals_04", ("", ["O_fraction"], None)), ("GFM_Metals_05", ("", ["Ne_fraction"], None)), ("GFM_Metals_06", ("", ["Mg_fraction"], None)), ("GFM_Metals_07", ("", ["Si_fraction"], None)), ("GFM_Metals_08", ("", ["Fe_fraction"], None)), ("GFM_StellarPhotometrics_00", ("", ["U_magnitude"], None)), ("GFM_StellarPhotometrics_01", ("", ["B_magnitude"], None)), ("GFM_StellarPhotometrics_02", ("", ["V_magnitude"], None)), ("GFM_StellarPhotometrics_03", ("", ["K_magnitude"], None)), ("GFM_StellarPhotometrics_04", ("", ["g_magnitude"], None)), ("GFM_StellarPhotometrics_05", ("", ["r_magnitude"], None)), ("GFM_StellarPhotometrics_06", ("", ["i_magnitude"], None)), ("GFM_StellarPhotometrics_07", ("", ["z_magnitude"], None)), ( "CosmicRaySpecificEnergy", ("code_specific_energy", ["specific_cosmic_ray_energy"], None), ), ) super().__init__(ds, field_list, slice_info=slice_info) def setup_particle_fields(self, ptype, *args, **kwargs): FieldInfoContainer.setup_particle_fields(self, ptype) if ptype == "PartType0": self.setup_gas_particle_fields(ptype) setup_species_fields(self, ptype) def setup_gas_particle_fields(self, ptype): from yt.fields.magnetic_field import setup_magnetic_field_aliases super().setup_gas_particle_fields(ptype) # Since the AREPO gas "particles" are Voronoi cells, we can # define a volume here def _volume(field, data): return data["gas", "mass"] / data["gas", "density"] self.add_field( ("gas", "cell_volume"), function=_volume, sampling_type="local", units=self.ds.unit_system["volume"], ) if (ptype, "InternalEnergy") in self.field_list: def _pressure(field, data): return ( (data.ds.gamma - 1.0) * data[ptype, "density"] * data[ptype, "InternalEnergy"] ) self.add_field( ("gas", "pressure"), function=_pressure, sampling_type="local", 
units=self.ds.unit_system["pressure"], ) if (ptype, "GFM_Metals_00") in self.field_list: self.nuclei_names = metal_elements self.species_names = ["H"] + metal_elements if (ptype, "MagneticField") in self.field_list: setup_magnetic_field_aliases(self, ptype, "MagneticField") if (ptype, "NeutralHydrogenAbundance") in self.field_list: def _h_p0_fraction(field, data): return ( data[ptype, "GFM_Metals_00"] * data[ptype, "NeutralHydrogenAbundance"] ) self.add_field( (ptype, "H_p0_fraction"), sampling_type="particle", function=_h_p0_fraction, units="", ) def _h_p1_fraction(field, data): return data[ptype, "GFM_Metals_00"] * ( 1.0 - data[ptype, "NeutralHydrogenAbundance"] ) self.add_field( (ptype, "H_p1_fraction"), sampling_type="particle", function=_h_p1_fraction, units="", ) add_species_field_by_fraction(self, ptype, "H_p0") add_species_field_by_fraction(self, ptype, "H_p1") for species in ["H", "H_p0", "H_p1"]: for suf in ["_density", "_number_density"]: field = f"{species}{suf}" self.alias(("gas", field), (ptype, field)) if (ptype, "ElectronAbundance") in self.field_list: # If we have ElectronAbundance but not NeutralHydrogenAbundance, # try first to use the H_fraction, but otherwise we assume the # cosmic value for hydrogen to generate the H_number_density if (ptype, "NeutralHydrogenAbundance") not in self.field_list: m_u = self.ds.quan(1.0, "amu").in_cgs() A_H = ChemicalFormula("H").weight if (ptype, "GFM_Metals_00") in self.field_list: def _h_number_density(field, data): return ( data["gas", "density"] * data["gas", "H_fraction"] / (A_H * m_u) ) else: X_H = _primordial_mass_fraction["H"] def _h_number_density(field, data): return data["gas", "density"] * X_H / (A_H * m_u) self.add_field( (ptype, "H_number_density"), sampling_type="particle", function=_h_number_density, units=self.ds.unit_system["number_density"], ) self.alias(("gas", "H_number_density"), (ptype, "H_number_density")) self.alias(("gas", "H_nuclei_density"), ("gas", "H_number_density")) def _el_number_density(field, data): return ( data[ptype, "ElectronAbundance"] * data[ptype, "H_number_density"] ) self.add_field( (ptype, "El_number_density"), sampling_type="particle", function=_el_number_density, units=self.ds.unit_system["number_density"], ) self.alias(("gas", "El_number_density"), (ptype, "El_number_density")) if (ptype, "GFM_CoolingRate") in self.field_list: self.alias(("gas", "cooling_rate"), ("PartType0", "cooling_rate")) def _cooling_time(field, data): nH = data["gas", "H_nuclei_density"] dedt = -data["gas", "cooling_rate"] * nH * nH e = 1.5 * data["gas", "pressure"] return e / dedt self.add_field( ("gas", "cooling_time"), _cooling_time, sampling_type="local", units="s" ) if (ptype, "CosmicRaySpecificEnergy") in self.field_list: self.alias( (ptype, "specific_cosmic_ray_energy"), ("gas", "specific_cosmic_ray_energy"), ) def _cr_energy_density(field, data): return ( data["PartType0", "specific_cosmic_ray_energy"] * data["gas", "density"] ) self.add_field( ("gas", "cosmic_ray_energy_density"), _cr_energy_density, sampling_type="local", units=self.ds.unit_system["pressure"], ) def _cr_pressure(field, data): return (data.ds.gamma_cr - 1.0) * data[ "gas", "cosmic_ray_energy_density" ] self.add_field( ("gas", "cosmic_ray_pressure"), _cr_pressure, sampling_type="local", units=self.ds.unit_system["pressure"], ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/arepo/io.py0000644000175100001770000000257014714401662016316 0ustar00runnerdockerimport numpy as np from 
yt.frontends.gadget.api import IOHandlerGadgetHDF5 from yt.utilities.on_demand_imports import _h5py as h5py class IOHandlerArepoHDF5(IOHandlerGadgetHDF5): _dataset_type = "arepo_hdf5" def _generate_smoothing_length(self, index): # This is handled below in _get_smoothing_length return def _get_smoothing_length(self, data_file, position_dtype, position_shape): ptype = self.ds._sph_ptypes[0] ind = int(ptype[-1]) si, ei = data_file.start, data_file.end with h5py.File(data_file.filename, mode="r") as f: pcount = f["/Header"].attrs["NumPart_ThisFile"][ind].astype("int64") pcount = np.clip(pcount - si, 0, ei - si) # Arepo cells do not have "smoothing lengths" by definition, so # we compute one here by finding the radius of the sphere # corresponding to the volume of the Voronoi cell and multiplying # by a user-configurable smoothing factor. hsml = f[ptype]["Masses"][si:ei, ...] / f[ptype]["Density"][si:ei, ...] hsml *= 3.0 / (4.0 * np.pi) hsml **= 1.0 / 3.0 hsml *= self.ds.smoothing_factor dt = hsml.dtype.newbyteorder("N") # Native if position_dtype is not None and dt < position_dtype: dt = position_dtype return hsml.astype(dt) ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.295152 yt-4.4.0/yt/frontends/arepo/tests/0000755000175100001770000000000014714401715016472 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/arepo/tests/__init__.py0000644000175100001770000000000014714401662020572 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/arepo/tests/test_outputs.py0000644000175100001770000000627114714401662021635 0ustar00runnerdockerfrom collections import OrderedDict from yt.frontends.arepo.api import ArepoHDF5Dataset from yt.testing import ( ParticleSelectionComparison, assert_allclose_units, requires_file, requires_module, ) from yt.utilities.answer_testing.framework import data_dir_load, requires_ds, sph_answer bullet_h5 = "ArepoBullet/snapshot_150.hdf5" tng59_h5 = "TNGHalo/halo_59.hdf5" _tng59_bbox = [[40669.34, 56669.34], [45984.04, 61984.04], [54114.9, 70114.9]] cr_h5 = "ArepoCosmicRays/snapshot_039.hdf5" @requires_module("h5py") @requires_file(bullet_h5) def test_arepo_hdf5_selection(): ds = data_dir_load(bullet_h5) assert isinstance(ds, ArepoHDF5Dataset) psc = ParticleSelectionComparison(ds) psc.run_defaults() bullet_fields = OrderedDict( [ (("gas", "density"), None), (("gas", "temperature"), None), (("gas", "temperature"), ("gas", "density")), (("gas", "velocity_magnitude"), None), ] ) @requires_module("h5py") @requires_ds(bullet_h5) def test_arepo_bullet(): ds = data_dir_load(bullet_h5) for test in sph_answer(ds, "snapshot_150", 26529600, bullet_fields): test_arepo_bullet.__name__ = test.description yield test tng59_fields = OrderedDict( [ (("gas", "density"), None), (("gas", "temperature"), None), (("gas", "temperature"), ("gas", "density")), (("gas", "H_number_density"), None), (("gas", "H_p0_number_density"), None), (("gas", "H_p1_number_density"), None), (("gas", "El_number_density"), None), (("gas", "C_number_density"), None), (("gas", "velocity_magnitude"), None), (("gas", "magnetic_field_strength"), None), ] ) @requires_module("h5py") @requires_ds(tng59_h5) def test_arepo_tng59(): ds = data_dir_load(tng59_h5, kwargs={"bounding_box": _tng59_bbox}) for test in sph_answer(ds, "halo_59", 10107142, tng59_fields): test_arepo_tng59.__name__ = test.description yield test
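# (added note, illustrative) passing a bounding_box at load time also turns
# off periodicity, which the next test asserts:
#
#     ds = data_dir_load(tng59_h5, kwargs={"bounding_box": _tng59_bbox})
#     ds.periodicity  # -> (False, False, False)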
@requires_module("h5py") @requires_ds(tng59_h5) def test_arepo_tng59_periodicity(): ds1 = data_dir_load(tng59_h5) assert ds1.periodicity == (True, True, True) ds2 = data_dir_load(tng59_h5, kwargs={"bounding_box": _tng59_bbox}) assert ds2.periodicity == (False, False, False) @requires_module("h5py") @requires_file(tng59_h5) def test_nh_density(): ds = data_dir_load(tng59_h5, kwargs={"bounding_box": _tng59_bbox}) ad = ds.all_data() assert_allclose_units( ad["gas", "H_number_density"], (ad["gas", "H_nuclei_density"]) ) @requires_module("h5py") @requires_file(tng59_h5) def test_arepo_tng59_selection(): ds = data_dir_load(tng59_h5, kwargs={"bounding_box": _tng59_bbox}) psc = ParticleSelectionComparison(ds) psc.run_defaults() cr_fields = OrderedDict( [ (("gas", "density"), None), (("gas", "cosmic_ray_energy_density"), None), (("gas", "cosmic_ray_pressure"), None), ] ) @requires_module("h5py") @requires_ds(cr_h5) def test_arepo_cr(): ds = data_dir_load(cr_h5) assert ds.gamma_cr == ds.parameters["GammaCR"] for test in sph_answer(ds, "snapshot_039", 28313510, cr_fields): test_arepo_cr.__name__ = test.description yield test ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.295152 yt-4.4.0/yt/frontends/art/0000755000175100001770000000000014714401715015010 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/art/__init__.py0000644000175100001770000000000014714401662017110 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/art/api.py0000644000175100001770000000032314714401662016132 0ustar00runnerdockerfrom . import tests from .data_structures import ( ARTDataset, ARTDomainFile, ARTDomainSubset, ARTIndex, DarkMatterARTDataset, ) from .fields import ARTFieldInfo from .io import IOHandlerART ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/art/data_structures.py0000644000175100001770000011254214714401662020604 0ustar00runnerdockerimport glob import os import struct import weakref import numpy as np import yt.utilities.fortran_utils as fpu from yt.data_objects.index_subobjects.octree_subset import OctreeSubset from yt.data_objects.static_output import Dataset, ParticleFile from yt.data_objects.unions import ParticleUnion from yt.frontends.art.definitions import ( amr_header_struct, constants, dmparticle_header_struct, filename_pattern, fluid_fields, particle_fields, particle_header_struct, seek_extras, ) from yt.frontends.art.fields import ARTFieldInfo from yt.frontends.art.io import ( _read_art_level_info, _read_child_level, _read_root_level, a2b, b2t, ) from yt.funcs import mylog, setdefaultattr from yt.geometry.geometry_handler import Index, YTDataChunk from yt.geometry.oct_container import ARTOctreeContainer from yt.geometry.oct_geometry_handler import OctreeIndex from yt.geometry.particle_geometry_handler import ParticleIndex class ARTIndex(OctreeIndex): def __init__(self, ds, dataset_type="art"): self.fluid_field_list = fluid_fields self.dataset_type = dataset_type self.dataset = weakref.proxy(ds) self.index_filename = self.dataset.parameter_filename self.directory = os.path.dirname(self.index_filename) self.max_level = ds.max_level self.float_type = np.float64 super().__init__(ds, dataset_type) def get_smallest_dx(self): """ Returns (in code units) the smallest cell size in the simulation. 
""" # Overloaded ds = self.dataset return (ds.domain_width / ds.domain_dimensions / (2**self.max_level)).min() def _initialize_oct_handler(self): """ Just count the number of octs per domain and allocate the requisite memory in the oct tree """ nv = len(self.fluid_field_list) self.oct_handler = ARTOctreeContainer( self.dataset.domain_dimensions / 2, # dd is # of root cells self.dataset.domain_left_edge, self.dataset.domain_right_edge, 1, ) # The 1 here refers to domain_id == 1 always for ARTIO. self.domains = [ARTDomainFile(self.dataset, nv, self.oct_handler, 1)] self.octs_per_domain = [dom.level_count.sum() for dom in self.domains] self.total_octs = sum(self.octs_per_domain) mylog.debug("Allocating %s octs", self.total_octs) self.oct_handler.allocate_domains(self.octs_per_domain) domain = self.domains[0] domain._read_amr_root(self.oct_handler) domain._read_amr_level(self.oct_handler) self.oct_handler.finalize() def _detect_output_fields(self): self.particle_field_list = list(particle_fields) self.field_list = [("art", f) for f in fluid_fields] # now generate all of the possible particle fields for ptype in self.dataset.particle_types_raw: for pfield in self.particle_field_list: pfn = (ptype, pfield) self.field_list.append(pfn) def _identify_base_chunk(self, dobj): """ Take the passed in data source dobj, and use its embedded selector to calculate the domain mask, build the reduced domain subsets and oct counts. Attach this information to dobj. """ if getattr(dobj, "_chunk_info", None) is None: # Get all octs within this oct handler domains = [dom for dom in self.domains if dom.included(dobj.selector)] base_region = getattr(dobj, "base_region", dobj) if len(domains) > 1: mylog.debug("Identified %s intersecting domains", len(domains)) subsets = [ ARTDomainSubset(base_region, domain, self.dataset) for domain in domains ] dobj._chunk_info = subsets dobj._current_chunk = list(self._chunk_all(dobj))[0] def _chunk_all(self, dobj): oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info) # We pass the chunk both the current chunk and list of chunks, # as well as the referring data source yield YTDataChunk(dobj, "all", oobjs, None) def _chunk_spatial(self, dobj, ngz, sort=None, preload_fields=None): sobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info) for og in sobjs: if ngz > 0: g = og.retrieve_ghost_zones(ngz, [], smoothed=True) else: g = og yield YTDataChunk(dobj, "spatial", [g], None) def _chunk_io(self, dobj, cache=True, local_only=False): """ Since subsets are calculated per domain, i.e. per file, yield each domain at a time to organize by IO. We will eventually chunk out NMSU ART to be level-by-level. 
""" oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info) for subset in oobjs: yield YTDataChunk(dobj, "io", [subset], None, cache=cache) class ARTDataset(Dataset): _index_class: type[Index] = ARTIndex _field_info_class = ARTFieldInfo def __init__( self, filename, dataset_type="art", fields=None, storage_filename=None, skip_particles=False, skip_stars=False, limit_level=None, spread_age=True, force_max_level=None, file_particle_header=None, file_particle_data=None, file_particle_stars=None, units_override=None, unit_system="cgs", default_species_fields=None, ): self.fluid_types += ("art",) if fields is None: fields = fluid_fields filename = os.path.abspath(filename) self._fields_in_file = fields self._file_amr = filename self._file_particle_header = file_particle_header self._file_particle_data = file_particle_data self._file_particle_stars = file_particle_stars self._find_files(filename) self.skip_particles = skip_particles self.skip_stars = skip_stars self.limit_level = limit_level self.max_level = limit_level self.force_max_level = force_max_level self.spread_age = spread_age Dataset.__init__( self, filename, dataset_type, units_override=units_override, unit_system=unit_system, default_species_fields=default_species_fields, ) self.storage_filename = storage_filename def _find_files(self, file_amr): """ Given the AMR base filename, attempt to find the particle header, star files, etc. """ base_prefix, base_suffix = filename_pattern["amr"] numericstr = file_amr.rsplit("_", 1)[1].replace(base_suffix, "") possibles = glob.glob( os.path.join(os.path.dirname(os.path.abspath(file_amr)), "*") ) for filetype, (prefix, suffix) in filename_pattern.items(): # if this attribute is already set skip it if getattr(self, "_file_" + filetype, None) is not None: continue match = None for possible in possibles: if possible.endswith(numericstr + suffix): if os.path.basename(possible).startswith(prefix): match = possible if match is not None: mylog.info("discovered %s:%s", filetype, match) setattr(self, "_file_" + filetype, match) else: setattr(self, "_file_" + filetype, None) def __str__(self): return self._file_amr.split("/")[-1] def _set_code_unit_attributes(self): """ Generates the conversion to various physical units based on the parameters from the header """ # spatial units z = self.current_redshift h = self.hubble_constant boxcm_cal = self.parameters["boxh"] boxcm_uncal = boxcm_cal / h box_proper = boxcm_uncal / (1 + z) aexpn = self.parameters["aexpn"] # all other units Om0 = self.parameters["Om0"] ng = self.parameters["ng"] boxh = self.parameters["boxh"] aexpn = self.parameters["aexpn"] hubble = self.parameters["hubble"] r0 = boxh / ng v0 = 50.0 * r0 * np.sqrt(Om0) rho0 = 2.776e11 * hubble**2.0 * Om0 aM0 = rho0 * (boxh / hubble) ** 3.0 / ng**3.0 velocity = v0 / aexpn * 1.0e5 # proper cm/s mass = aM0 * 1.98892e33 self.cosmological_simulation = True setdefaultattr(self, "mass_unit", self.quan(mass, f"g*{ng ** 3}")) setdefaultattr(self, "length_unit", self.quan(box_proper, "Mpc")) setdefaultattr(self, "velocity_unit", self.quan(velocity, "cm/s")) setdefaultattr(self, "time_unit", self.length_unit / self.velocity_unit) def _parse_parameter_file(self): """ Get the various simulation parameters & constants. 
""" self.domain_left_edge = np.zeros(3, dtype="float") self.domain_right_edge = np.zeros(3, dtype="float") + 1.0 self.dimensionality = 3 self.refine_by = 2 self._periodicity = (True, True, True) self.cosmological_simulation = True self.parameters = {} self.parameters.update(constants) self.parameters["Time"] = 1.0 # read the amr header with open(self._file_amr, "rb") as f: amr_header_vals = fpu.read_attrs(f, amr_header_struct, ">") n_to_skip = len(("tl", "dtl", "tlold", "dtlold", "iSO")) fpu.skip(f, n_to_skip, endian=">") (self.ncell) = fpu.read_vector(f, "i", ">")[0] # Try to figure out the root grid dimensions est = int(np.rint(self.ncell ** (1.0 / 3.0))) # Note here: this is the number of *cells* on the root grid. # This is not the same as the number of Octs. # domain dimensions is the number of root *cells* self.domain_dimensions = np.ones(3, dtype="int64") * est self.root_grid_mask_offset = f.tell() self.root_nocts = self.domain_dimensions.prod() // 8 self.root_ncells = self.root_nocts * 8 mylog.debug( "Estimating %i cells on a root grid side, %i root octs", est, self.root_nocts, ) self.root_iOctCh = fpu.read_vector(f, "i", ">")[: self.root_ncells] self.root_iOctCh = self.root_iOctCh.reshape( self.domain_dimensions, order="F" ) self.root_grid_offset = f.tell() self.root_nhvar = fpu.skip(f, endian=">") self.root_nvar = fpu.skip(f, endian=">") # make sure that the number of root variables is a multiple of # rootcells assert self.root_nhvar % self.root_ncells == 0 assert self.root_nvar % self.root_ncells == 0 self.nhydro_variables = ( self.root_nhvar + self.root_nvar ) / self.root_ncells self.iOctFree, self.nOct = fpu.read_vector(f, "i", ">") self.child_grid_offset = f.tell() # lextra needs to be loaded as a string, but it's actually # array values. So pop it off here, and then re-insert. lextra = amr_header_vals.pop("lextra") amr_header_vals["lextra"] = np.frombuffer(lextra, ">f4") self.parameters.update(amr_header_vals) amr_header_vals = None # estimate the root level float_center, fl, iocts, nocts, root_level = _read_art_level_info( f, [0, self.child_grid_offset], 1, coarse_grid=self.domain_dimensions[0] ) del float_center, fl, iocts, nocts self.root_level = root_level mylog.info("Using root level of %02i", self.root_level) # read the particle header self.particle_types = [] self.particle_types_raw = () if not self.skip_particles and self._file_particle_header: with open(self._file_particle_header, "rb") as fh: particle_header_vals = fpu.read_attrs(fh, particle_header_struct, ">") fh.seek(seek_extras) n = particle_header_vals["Nspecies"] wspecies = np.fromfile(fh, dtype=">f", count=10) lspecies = np.fromfile(fh, dtype=">i", count=10) # extras needs to be loaded as a string, but it's actually # array values. So pop it off here, and then re-insert. 
extras = particle_header_vals.pop("extras") particle_header_vals["extras"] = np.frombuffer(extras, ">f4") self.parameters["wspecies"] = wspecies[:n] self.parameters["lspecies"] = lspecies[:n] for specie in range(n): self.particle_types.append("specie%i" % specie) self.particle_types_raw = tuple(self.particle_types) ls_nonzero = np.diff(lspecies)[: n - 1] ls_nonzero = np.append(lspecies[0], ls_nonzero) self.star_type = len(ls_nonzero) mylog.info("Discovered %i species of particles", len(ls_nonzero)) info_str = "Particle populations: " + "%9i " * len(ls_nonzero) mylog.info(info_str, *ls_nonzero) self._particle_type_counts = dict( zip(self.particle_types_raw, ls_nonzero, strict=True) ) for k, v in particle_header_vals.items(): if k in self.parameters.keys(): if not self.parameters[k] == v: mylog.info( "Inconsistent parameter %s %1.1e %1.1e", k, v, self.parameters[k], ) else: self.parameters[k] = v self.parameters_particles = particle_header_vals self.parameters.update(particle_header_vals) self.parameters["ng"] = self.parameters["Ngridc"] self.parameters["ncell0"] = self.parameters["ng"] ** 3 # setup standard simulation params yt expects to see self.current_redshift = self.parameters["aexpn"] ** -1.0 - 1.0 self.omega_lambda = self.parameters["Oml0"] self.omega_matter = self.parameters["Om0"] self.hubble_constant = self.parameters["hubble"] self.min_level = self.parameters["min_level"] self.max_level = self.parameters["max_level"] if self.limit_level is not None: self.max_level = min(self.limit_level, self.parameters["max_level"]) if self.force_max_level is not None: self.max_level = self.force_max_level self.hubble_time = 1.0 / (self.hubble_constant * 100 / 3.08568025e19) self.current_time = self.quan(b2t(self.parameters["t"]), "Gyr") self.gamma = self.parameters["gamma"] mylog.info("Max level is %02i", self.max_level) def create_field_info(self): super().create_field_info() if "wspecies" in self.parameters: # We create dark_matter and stars unions. ptr = self.particle_types_raw pu = ParticleUnion("darkmatter", list(ptr[:-1])) self.add_particle_union(pu) pu = ParticleUnion("stars", list(ptr[-1:])) self.add_particle_union(pu) @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: """ Defined for the NMSU file naming scheme. This could differ for other formats. 
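For example, an AMR file such as ``10MpcBox_HartGal_csf_a0.500.d`` is expected to end in the ``.d`` suffix from filename_pattern["amr"] and to begin with a header readable as amr_header_struct.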
""" f = str(filename) prefix, suffix = filename_pattern["amr"] if not os.path.isfile(f): return False if not f.endswith(suffix): return False with open(f, "rb") as fh: try: fpu.read_attrs(fh, amr_header_struct, ">") return True except Exception: return False class ARTParticleFile(ParticleFile): def __init__(self, ds, io, filename, file_id): super().__init__(ds, io, filename, file_id, range=None) self.total_particles = {} for ptype, count in zip( ds.particle_types_raw, ds.parameters["total_particles"], strict=True, ): self.total_particles[ptype] = count with open(filename, "rb") as f: f.seek(0, os.SEEK_END) self._file_size = f.tell() class ARTParticleIndex(ParticleIndex): def _setup_filenames(self): # no need for template, all data in one file template = self.dataset.filename_template ndoms = self.dataset.file_count cls = self.dataset._file_class self.data_files = [] fi = 0 for i in range(int(ndoms)): df = cls(self.dataset, self.io, template % {"num": i}, fi) fi += 1 self.data_files.append(df) class DarkMatterARTDataset(ARTDataset): _index_class = ARTParticleIndex _file_class = ARTParticleFile filter_bbox = False def __init__( self, filename, dataset_type="dm_art", fields=None, storage_filename=None, skip_particles=False, skip_stars=False, limit_level=None, spread_age=True, force_max_level=None, file_particle_header=None, file_particle_stars=None, units_override=None, unit_system="cgs", ): self.num_zones = 2 self.n_ref = 64 self.particle_types += ("all",) if fields is None: fields = particle_fields filename = os.path.abspath(filename) self._fields_in_file = fields self._file_particle = filename self._file_particle_header = file_particle_header self._find_files(filename) self.skip_stars = skip_stars self.spread_age = spread_age Dataset.__init__( self, filename, dataset_type, units_override=units_override, unit_system=unit_system, ) self.storage_filename = storage_filename def _find_files(self, file_particle): """ Given the particle base filename, attempt to find the particle header and star files. 
""" base_prefix, base_suffix = filename_pattern["particle_data"] aexpstr = file_particle.rsplit("s0", 1)[1].replace(base_suffix, "") possibles = glob.glob( os.path.join(os.path.dirname(os.path.abspath(file_particle)), "*") ) for filetype, (prefix, suffix) in filename_pattern.items(): # if this attribute is already set skip it if getattr(self, "_file_" + filetype, None) is not None: continue match = None for possible in possibles: if possible.endswith(aexpstr + suffix): if os.path.basename(possible).startswith(prefix): match = possible if match is not None: mylog.info("discovered %s:%s", filetype, match) setattr(self, "_file_" + filetype, match) else: setattr(self, "_file_" + filetype, None) def __str__(self): return self._file_particle.split("/")[-1] def _set_code_unit_attributes(self): """ Generates the conversion to various physical units based on the parameters from the header """ # spatial units z = self.current_redshift h = self.hubble_constant boxcm_cal = self.parameters["boxh"] boxcm_uncal = boxcm_cal / h box_proper = boxcm_uncal / (1 + z) aexpn = self.parameters["aexpn"] # all other units Om0 = self.parameters["Om0"] ng = self.parameters["ng"] boxh = self.parameters["boxh"] aexpn = self.parameters["aexpn"] hubble = self.parameters["hubble"] r0 = boxh / ng rho0 = 2.776e11 * hubble**2.0 * Om0 aM0 = rho0 * (boxh / hubble) ** 3.0 / ng**3.0 velocity = 100.0 * r0 / aexpn * 1.0e5 # proper cm/s mass = aM0 * 1.98892e33 self.cosmological_simulation = True self.mass_unit = self.quan(mass, f"g*{ng ** 3}") self.length_unit = self.quan(box_proper, "Mpc") self.velocity_unit = self.quan(velocity, "cm/s") self.time_unit = self.length_unit / self.velocity_unit def _parse_parameter_file(self): """ Get the various simulation parameters & constants. """ self.domain_left_edge = np.zeros(3, dtype="float") self.domain_right_edge = np.zeros(3, dtype="float") + 1.0 self.dimensionality = 3 self.refine_by = 2 self._periodicity = (True, True, True) self.cosmological_simulation = True self.parameters = {} self.parameters.update(constants) self.parameters["Time"] = 1.0 self.file_count = 1 self.filename_template = self.parameter_filename # read the particle header self.particle_types = [] self.particle_types_raw = () assert self._file_particle_header with open(self._file_particle_header, "rb") as fh: seek = 4 fh.seek(seek) headerstr = fh.read(45).decode("ascii") aexpn = np.fromfile(fh, count=1, dtype=">f4") aexp0 = np.fromfile(fh, count=1, dtype=">f4") amplt = np.fromfile(fh, count=1, dtype=">f4") astep = np.fromfile(fh, count=1, dtype=">f4") istep = np.fromfile(fh, count=1, dtype=">i4") partw = np.fromfile(fh, count=1, dtype=">f4") tintg = np.fromfile(fh, count=1, dtype=">f4") ekin = np.fromfile(fh, count=1, dtype=">f4") ekin1 = np.fromfile(fh, count=1, dtype=">f4") ekin2 = np.fromfile(fh, count=1, dtype=">f4") au0 = np.fromfile(fh, count=1, dtype=">f4") aeu0 = np.fromfile(fh, count=1, dtype=">f4") nrowc = np.fromfile(fh, count=1, dtype=">i4") ngridc = np.fromfile(fh, count=1, dtype=">i4") nspecs = np.fromfile(fh, count=1, dtype=">i4") nseed = np.fromfile(fh, count=1, dtype=">i4") Om0 = np.fromfile(fh, count=1, dtype=">f4") Oml0 = np.fromfile(fh, count=1, dtype=">f4") hubble = np.fromfile(fh, count=1, dtype=">f4") Wp5 = np.fromfile(fh, count=1, dtype=">f4") Ocurv = np.fromfile(fh, count=1, dtype=">f4") wspecies = np.fromfile(fh, count=10, dtype=">f4") lspecies = np.fromfile(fh, count=10, dtype=">i4") extras = np.fromfile(fh, count=79, dtype=">f4") boxsize = np.fromfile(fh, count=1, dtype=">f4") n = nspecs[0] 
particle_header_vals = {} tmp = [ headerstr, aexpn, aexp0, amplt, astep, istep, partw, tintg, ekin, ekin1, ekin2, au0, aeu0, nrowc, ngridc, nspecs, nseed, Om0, Oml0, hubble, Wp5, Ocurv, wspecies, lspecies, extras, boxsize, ] for i, arr in enumerate(tmp): a1 = dmparticle_header_struct[0][i] a2 = dmparticle_header_struct[1][i] if a2 == 1: particle_header_vals[a1] = arr[0] else: particle_header_vals[a1] = arr[:a2] for specie in range(n): self.particle_types.append("specie%i" % specie) self.particle_types_raw = tuple(self.particle_types) ls_nonzero = np.diff(lspecies)[: n - 1] ls_nonzero = np.append(lspecies[0], ls_nonzero) self.star_type = len(ls_nonzero) mylog.info("Discovered %i species of particles", len(ls_nonzero)) info_str = "Particle populations: " + "%9i " * len(ls_nonzero) mylog.info(info_str, *ls_nonzero) for k, v in particle_header_vals.items(): if k in self.parameters.keys(): if not self.parameters[k] == v: mylog.info( "Inconsistent parameter %s %1.1e %1.1e", k, v, self.parameters[k], ) else: self.parameters[k] = v self.parameters_particles = particle_header_vals self.parameters.update(particle_header_vals) self.parameters["wspecies"] = wspecies[:n] self.parameters["lspecies"] = lspecies[:n] self.parameters["ng"] = self.parameters["Ngridc"] self.parameters["ncell0"] = self.parameters["ng"] ** 3 self.parameters["boxh"] = self.parameters["boxsize"] self.parameters["total_particles"] = ls_nonzero self.domain_dimensions = np.ones(3, dtype="int64") * 2 # NOT ng # setup standard simulation params yt expects to see # Convert to float to please unyt self.current_redshift = float(self.parameters["aexpn"] ** -1.0 - 1.0) self.omega_lambda = float(particle_header_vals["Oml0"]) self.omega_matter = float(particle_header_vals["Om0"]) self.hubble_constant = float(particle_header_vals["hubble"]) self.min_level = 0 self.max_level = 0 # self.min_level = particle_header_vals['min_level'] # self.max_level = particle_header_vals['max_level'] # if self.limit_level is not None: # self.max_level = min( # self.limit_level, particle_header_vals['max_level']) # if self.force_max_level is not None: # self.max_level = self.force_max_level self.hubble_time = 1.0 / (self.hubble_constant * 100 / 3.08568025e19) self.parameters["t"] = a2b(self.parameters["aexpn"]) self.current_time = self.quan(b2t(self.parameters["t"]), "Gyr") self.gamma = self.parameters["gamma"] mylog.info("Max level is %02i", self.max_level) def create_field_info(self): super(ARTDataset, self).create_field_info() ptr = self.particle_types_raw pu = ParticleUnion("darkmatter", list(ptr)) self.add_particle_union(pu) pass @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: """ Defined for the NMSU file naming scheme. This could differ for other formats. """ f = str(filename) prefix, suffix = filename_pattern["particle_data"] if not os.path.isfile(f): return False if not f.endswith(suffix): return False if "s0" not in f: # ATOMIC.DAT, for instance, passes the other tests, but then dies # during _find_files because it can't be split. 
return False with open(f, "rb") as fh: try: amr_prefix, amr_suffix = filename_pattern["amr"] possibles = glob.glob( os.path.join(os.path.dirname(os.path.abspath(f)), "*") ) for possible in possibles: if possible.endswith(amr_suffix): if os.path.basename(possible).startswith(amr_prefix): return False except Exception: pass try: seek = 4 fh.seek(seek) headerstr = np.fromfile(fh, count=1, dtype=(str, 45)) # NOQA aexpn = np.fromfile(fh, count=1, dtype=">f4") # NOQA aexp0 = np.fromfile(fh, count=1, dtype=">f4") # NOQA amplt = np.fromfile(fh, count=1, dtype=">f4") # NOQA astep = np.fromfile(fh, count=1, dtype=">f4") # NOQA istep = np.fromfile(fh, count=1, dtype=">i4") # NOQA partw = np.fromfile(fh, count=1, dtype=">f4") # NOQA tintg = np.fromfile(fh, count=1, dtype=">f4") # NOQA ekin = np.fromfile(fh, count=1, dtype=">f4") # NOQA ekin1 = np.fromfile(fh, count=1, dtype=">f4") # NOQA ekin2 = np.fromfile(fh, count=1, dtype=">f4") # NOQA au0 = np.fromfile(fh, count=1, dtype=">f4") # NOQA aeu0 = np.fromfile(fh, count=1, dtype=">f4") # NOQA nrowc = np.fromfile(fh, count=1, dtype=">i4") # NOQA ngridc = np.fromfile(fh, count=1, dtype=">i4") # NOQA nspecs = np.fromfile(fh, count=1, dtype=">i4") # NOQA nseed = np.fromfile(fh, count=1, dtype=">i4") # NOQA Om0 = np.fromfile(fh, count=1, dtype=">f4") # NOQA Oml0 = np.fromfile(fh, count=1, dtype=">f4") # NOQA hubble = np.fromfile(fh, count=1, dtype=">f4") # NOQA Wp5 = np.fromfile(fh, count=1, dtype=">f4") # NOQA Ocurv = np.fromfile(fh, count=1, dtype=">f4") # NOQA wspecies = np.fromfile(fh, count=10, dtype=">f4") # NOQA lspecies = np.fromfile(fh, count=10, dtype=">i4") # NOQA extras = np.fromfile(fh, count=79, dtype=">f4") # NOQA boxsize = np.fromfile(fh, count=1, dtype=">f4") # NOQA return True except Exception: return False class ARTDomainSubset(OctreeSubset): @property def oct_handler(self): return self.domain.oct_handler def fill(self, content, ftfields, selector): """ This is called from IOHandler. It takes content, which is a binary stream, and reads the requested fields over this whole domain. It then uses oct_handler's fill to reorganize values from IO-read index order into the order they appear in the oct handler. """ oct_handler = self.oct_handler all_fields = self.domain.ds.index.fluid_field_list fields = [f for ft, f in ftfields] field_idxs = [all_fields.index(f) for f in fields] source, tr = {}, {} cell_count = selector.count_oct_cells(self.oct_handler, self.domain_id) levels, cell_inds, file_inds = self.oct_handler.file_index_octs( selector, self.domain_id, cell_count ) for field in fields: tr[field] = np.zeros(cell_count, "float64") data = _read_root_level( content, self.domain.level_child_offsets, self.domain.level_count ) ns = (self.domain.ds.domain_dimensions.prod() // 8, 8) for field, fi in zip(fields, field_idxs, strict=True): source[field] = np.empty(ns, dtype="float64", order="C") dt = data[fi, :].reshape(self.domain.ds.domain_dimensions, order="F") for i in range(2): for j in range(2): for k in range(2): ii = ((k * 2) + j) * 2 + i # Note: C order because our index converts C to F. source[field][:, ii] = dt[i::2, j::2, k::2].ravel(order="C") oct_handler.fill_level(0, levels, cell_inds, file_inds, tr, source) del source # Now we continue with the additional levels.
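# Editor's note on the root-level loop above (added for clarity):
# ii = ((k * 2) + j) * 2 + i is just ii = i + 2*j + 4*k, mapping the child
# offsets (i, j, k) in {0, 1}**3 onto oct-local cell indices 0..7, e.g.
# (1, 0, 0) -> 1, (0, 1, 0) -> 2, (0, 0, 1) -> 4.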
for level in range(1, self.ds.index.max_level + 1): no = self.domain.level_count[level] noct_range = [0, no] source = _read_child_level( content, self.domain.level_child_offsets, self.domain.level_offsets, self.domain.level_count, level, fields, self.domain.ds.domain_dimensions, self.domain.ds.parameters["ncell0"], noct_range=noct_range, ) oct_handler.fill_level(level, levels, cell_inds, file_inds, tr, source) return tr class ARTDomainFile: """ Read in the AMR, left/right edges, fill out the octhandler """ # We already read in the header in static output, # and since these headers are defined in only a single file it's # best to leave them in the static output _last_mask = None _last_selector_id = None def __init__(self, ds, nvar, oct_handler, domain_id): self.nvar = nvar self.ds = ds self.domain_id = domain_id self._level_count = None self._level_oct_offsets = None self._level_child_offsets = None self._oct_handler = oct_handler @property def oct_handler(self): return self._oct_handler @property def level_count(self): # this is number of *octs* if self._level_count is not None: return self._level_count self.level_offsets return self._level_count @property def level_child_offsets(self): if self._level_count is not None: return self._level_child_offsets self.level_offsets return self._level_child_offsets @property def level_offsets(self): # this is used by the IO operations to find the file offset, # and then start reading to fill values # note that this is called hydro_offset in ramses if self._level_oct_offsets is not None: return self._level_oct_offsets # We now have to open the file and calculate it f = open(self.ds._file_amr, "rb") ( nhydrovars, inoll, _level_oct_offsets, _level_child_offsets, ) = self._count_art_octs( f, self.ds.child_grid_offset, self.ds.min_level, self.ds.max_level ) # remember that the root grid is by itself; manually add it back in inoll[0] = self.ds.domain_dimensions.prod() // 8 _level_child_offsets[0] = self.ds.root_grid_offset self.nhydrovars = nhydrovars self.inoll = inoll # number of octs self._level_oct_offsets = _level_oct_offsets self._level_child_offsets = _level_child_offsets self._level_count = inoll return self._level_oct_offsets def _count_art_octs(self, f, offset, MinLev, MaxLevelNow): level_oct_offsets = [ 0, ] level_child_offsets = [ 0, ] f.seek(offset) nchild, ntot = 8, 0 Level = np.zeros(MaxLevelNow + 1 - MinLev, dtype="int64") iNOLL = np.zeros(MaxLevelNow + 1 - MinLev, dtype="int64") iHOLL = np.zeros(MaxLevelNow + 1 - MinLev, dtype="int64") for Lev in range(MinLev + 1, MaxLevelNow + 1): level_oct_offsets.append(f.tell()) # Get the info for this level, skip the rest # print("Reading oct tree data for level", Lev) # print('offset:',f.tell()) Level[Lev], iNOLL[Lev], iHOLL[Lev] = fpu.read_vector(f, "i", ">") # print('Level %i : '%Lev, iNOLL) # print('offset after level record:',f.tell()) nLevel = iNOLL[Lev] ntot = ntot + nLevel # Skip all the oct hierarchy data ns = fpu.peek_record_size(f, endian=">") size = struct.calcsize(">i") + ns + struct.calcsize(">i") f.seek(f.tell() + size * nLevel) level_child_offsets.append(f.tell()) # Skip the child vars data ns = fpu.peek_record_size(f, endian=">") size = struct.calcsize(">i") + ns + struct.calcsize(">i") f.seek(f.tell() + size * nLevel * nchild) # find nhydrovars nhydrovars = 8 + 2 f.seek(offset) return nhydrovars, iNOLL, level_oct_offsets, level_child_offsets def _read_amr_level(self, oct_handler): """Open the oct file, read in octs level-by-level. 
For each oct, only the position, index, level and domain are needed - its position in the octree is found automatically. The most important step is gathering all the information needed to feed oct_handler.add. """ self.level_offsets f = open(self.ds._file_amr, "rb") for level in range(1, self.ds.max_level + 1): unitary_center, fl, iocts, nocts, root_level = _read_art_level_info( f, self._level_oct_offsets, level, coarse_grid=self.ds.domain_dimensions[0], root_level=self.ds.root_level, ) nocts_check = oct_handler.add(self.domain_id, level, unitary_center) assert nocts_check == nocts mylog.debug( "Added %07i octs on level %02i, cumulative is %07i", nocts, level, oct_handler.nocts, ) def _read_amr_root(self, oct_handler): self.level_offsets # add the root *cell* not *oct* mesh root_octs_side = self.ds.domain_dimensions[0] / 2 NX = np.ones(3) * root_octs_side LE = np.array([0.0, 0.0, 0.0], dtype="float64") RE = np.array([1.0, 1.0, 1.0], dtype="float64") root_dx = (RE - LE) / NX LL = LE + root_dx / 2.0 RL = RE - root_dx / 2.0 # compute floating point centers of root octs root_fc = np.mgrid[ LL[0] : RL[0] : NX[0] * 1j, LL[1] : RL[1] : NX[1] * 1j, LL[2] : RL[2] : NX[2] * 1j, ] root_fc = np.vstack([p.ravel() for p in root_fc]).T oct_handler.add(self.domain_id, 0, root_fc) assert oct_handler.nocts == root_fc.shape[0] mylog.debug( "Added %07i octs on level %02i, cumulative is %07i", root_octs_side**3, 0, oct_handler.nocts, ) def included(self, selector): return True ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/art/definitions.py0000644000175100001770000000671014714401662017702 0ustar00runnerdocker# If not otherwise specified, we are big endian endian = ">" fluid_fields = [ "Density", "TotalEnergy", "XMomentumDensity", "YMomentumDensity", "ZMomentumDensity", "Pressure", "Gamma", "GasEnergy", "MetalDensitySNII", "MetalDensitySNIa", "PotentialNew", "PotentialOld", ] hydro_struct = [("pad1", ">i"), ("idc", ">i"), ("iOctCh", ">i")] for field in fluid_fields: hydro_struct += ((field, ">f"),) hydro_struct += (("pad2", ">i"),) particle_fields = [ "particle_mass", # stars have variable mass "particle_index", "particle_type", "particle_position_x", "particle_position_y", "particle_position_z", "particle_velocity_x", "particle_velocity_y", "particle_velocity_z", "particle_mass_initial", "particle_creation_time", "particle_metallicity1", "particle_metallicity2", "particle_metallicity", ] particle_star_fields = [ "particle_mass", "particle_mass_initial", "particle_creation_time", "particle_metallicity1", "particle_metallicity2", "particle_metallicity", ] filename_pattern = { "amr": ["10MpcBox_", ".d"], "particle_header": ["PMcrd", ".DAT"], "particle_data": ["PMcrs", ".DAT"], "particle_stars": ["stars", ".dat"], } amr_header_struct = [ ("jname", 1, "256s"), (("istep", "t", "dt", "aexpn", "ainit"), 1, "iddff"), (("boxh", "Om0", "Oml0", "Omb0", "hubble"), 5, "f"), ("nextras", 1, "i"), (("extra1", "extra2"), 2, "f"), ("lextra", 1, "512s"), (("min_level", "max_level"), 2, "i"), ] particle_header_struct = [ ( ( "header", "aexpn", "aexp0", "amplt", "astep", "istep", "partw", "tintg", "Ekin", "Ekin1", "Ekin2", "au0", "aeu0", "Nrow", "Ngridc", "Nspecies", "Nseed", "Om0", "Oml0", "hubble", "Wp5", "Ocurv", "Omb0", "extras", "unknown", ), 1, "45sffffi" + "fffffff" + "iiii" + "ffffff" + "396s" + "f", ) ] dmparticle_header_struct = [ ( "header", "aexpn", "aexp0", "amplt", "astep", "istep", "partw", "tintg", "Ekin", "Ekin1", "Ekin2", "au0", "aeu0", "Nrow", "Ngridc",
"Nspecies", "Nseed", "Om0", "Oml0", "hubble", "Wp5", "Ocurv", "wspecies", "lspecies", "extras", "boxsize", ), (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 10, 10, 79, 1), ] star_struct = [ (">d", ("t_stars", "a_stars")), (">i", "nstars"), (">d", ("ws_old", "ws_oldi")), (">f", "particle_mass"), (">f", "particle_mass_initial"), (">f", "particle_creation_time"), (">f", "particle_metallicity1"), (">f", "particle_metallicity2"), ] star_name_map = { "particle_mass": "mass", "particle_mass_initial": "imass", "particle_creation_time": "tbirth", "particle_metallicity1": "metallicity1", "particle_metallicity2": "metallicity2", "particle_metallicity": "metallicity", } constants = { "Y_p": 0.245, "gamma": 5.0 / 3.0, "T_CMB0": 2.726, "T_min": 300.0, "wmu": 4.0 / (8.0 - 5.0 * 0.245), } seek_extras = 137 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/art/fields.py0000644000175100001770000001715214714401662016637 0ustar00runnerdockerfrom yt._typing import KnownFieldsT from yt.fields.field_info_container import FieldInfoContainer b_units = "code_magnetic" ra_units = "code_length / code_time**2" rho_units = "code_mass / code_length**3" vel_units = "code_velocity" # NOTE: ARTIO uses momentum density. mom_units = "code_mass / (code_length**2 * code_time)" en_units = "code_mass*code_velocity**2/code_length**3" class ARTFieldInfo(FieldInfoContainer): known_other_fields: KnownFieldsT = ( ("Density", (rho_units, ["density"], None)), ("TotalEnergy", (en_units, ["total_energy_density"], None)), ("XMomentumDensity", (mom_units, ["momentum_density_x"], None)), ("YMomentumDensity", (mom_units, ["momentum_density_y"], None)), ("ZMomentumDensity", (mom_units, ["momentum_density_z"], None)), ("Pressure", ("", ["pressure"], None)), # Unused ("Gamma", ("", ["gamma"], None)), ("GasEnergy", (en_units, ["thermal_energy_density"], None)), ("MetalDensitySNII", (rho_units, ["metal_ii_density"], None)), ("MetalDensitySNIa", (rho_units, ["metal_ia_density"], None)), ("PotentialNew", ("", ["potential"], None)), ("PotentialOld", ("", ["gas_potential"], None)), ) known_particle_fields: KnownFieldsT = ( ("particle_position_x", ("code_length", [], None)), ("particle_position_y", ("code_length", [], None)), ("particle_position_z", ("code_length", [], None)), ("particle_velocity_x", (vel_units, [], None)), ("particle_velocity_y", (vel_units, [], None)), ("particle_velocity_z", (vel_units, [], None)), ("particle_mass", ("code_mass", [], None)), ("particle_index", ("", [], None)), ("particle_species", ("", ["particle_type"], None)), ("particle_creation_time", ("Gyr", [], None)), ("particle_mass_initial", ("code_mass", [], None)), ("particle_metallicity1", ("", [], None)), ("particle_metallicity2", ("", [], None)), ) def setup_fluid_fields(self): unit_system = self.ds.unit_system def _temperature(field, data): r0 = data.ds.parameters["boxh"] / data.ds.parameters["ng"] tr = data.ds.quan(3.03e5 * r0**2, "K/code_velocity**2") tr *= data.ds.parameters["wmu"] * data.ds.parameters["Om0"] tr *= data.ds.parameters["gamma"] - 1.0 tr /= data.ds.parameters["aexpn"] ** 2 return tr * data["art", "GasEnergy"] / data["art", "Density"] self.add_field( ("gas", "temperature"), sampling_type="cell", function=_temperature, units=unit_system["temperature"], ) def _get_vel(axis): def velocity(field, data): return data["gas", f"momentum_density_{axis}"] / data["gas", "density"] return velocity for ax in "xyz": self.add_field( ("gas", f"velocity_{ax}"), sampling_type="cell", 
function=_get_vel(ax), units=unit_system["velocity"], ) def _momentum_magnitude(field, data): tr = ( data["gas", "momentum_density_x"] ** 2 + data["gas", "momentum_density_y"] ** 2 + data["gas", "momentum_density_z"] ** 2 ) ** 0.5 tr *= data["index", "cell_volume"].in_units("cm**3") return tr self.add_field( ("gas", "momentum_magnitude"), sampling_type="cell", function=_momentum_magnitude, units=unit_system["momentum"], ) def _velocity_magnitude(field, data): tr = data["gas", "momentum_magnitude"] tr /= data["gas", "cell_mass"] return tr self.add_field( ("gas", "velocity_magnitude"), sampling_type="cell", function=_velocity_magnitude, units=unit_system["velocity"], ) def _metal_density(field, data): tr = data["gas", "metal_ia_density"] tr += data["gas", "metal_ii_density"] return tr self.add_field( ("gas", "metal_density"), sampling_type="cell", function=_metal_density, units=unit_system["density"], ) def _metal_mass_fraction(field, data): tr = data["gas", "metal_density"] tr /= data["gas", "density"] return tr self.add_field( ("gas", "metal_mass_fraction"), sampling_type="cell", function=_metal_mass_fraction, units="", ) def _H_mass_fraction(field, data): tr = 1.0 - data.ds.parameters["Y_p"] - data["gas", "metal_mass_fraction"] return tr self.add_field( ("gas", "H_mass_fraction"), sampling_type="cell", function=_H_mass_fraction, units="", ) def _metallicity(field, data): tr = data["gas", "metal_mass_fraction"] tr /= data["gas", "H_mass_fraction"] return tr self.add_field( ("gas", "metallicity"), sampling_type="cell", function=_metallicity, units="", ) atoms = [ "C", "N", "O", "F", "Ne", "Na", "Mg", "Al", "Si", "P", "S", "Cl", "Ar", "K", "Ca", "Sc", "Ti", "V", "Cr", "Mn", "Fe", "Co", "Ni", "Cu", "Zn", ] def _specific_metal_density_function(atom): def _specific_metal_density(field, data): nucleus_densityIa = ( data["gas", "metal_ia_density"] * SNIa_abundance[atom] ) nucleus_densityII = ( data["gas", "metal_ii_density"] * SNII_abundance[atom] ) return nucleus_densityIa + nucleus_densityII return _specific_metal_density for atom in atoms: self.add_field( ("gas", f"{atom}_nuclei_mass_density"), sampling_type="cell", function=_specific_metal_density_function(atom), units=unit_system["density"], ) # based on Iwamoto et al 1999 # mass fraction of each atom in SNIa metal SNIa_abundance = { "H": 0.00e00, "He": 0.00e00, "C": 3.52e-02, "N": 8.47e-07, "O": 1.04e-01, "F": 4.14e-10, "Ne": 3.30e-03, "Na": 4.61e-05, "Mg": 6.25e-03, "Al": 7.19e-04, "Si": 1.14e-01, "P": 2.60e-04, "S": 6.35e-02, "Cl": 1.27e-04, "Ar": 1.14e-02, "K": 5.72e-05, "Ca": 8.71e-03, "Sc": 1.61e-07, "Ti": 2.50e-04, "V": 5.46e-05, "Cr": 6.19e-03, "Mn": 6.47e-03, "Fe": 5.46e-01, "Co": 7.59e-04, "Ni": 9.17e-02, "Cu": 2.19e-06, "Zn": 2.06e-05, } # mass fraction of each atom in SNII metal SNII_abundance = { "H": 0.00e00, "He": 0.00e00, "C": 3.12e-02, "N": 6.15e-04, "O": 7.11e-01, "F": 4.57e-10, "Ne": 9.12e-02, "Na": 2.56e-03, "Mg": 4.84e-02, "Al": 5.83e-03, "Si": 4.81e-02, "P": 4.77e-04, "S": 1.62e-02, "Cl": 4.72e-05, "Ar": 3.15e-03, "K": 2.65e-05, "Ca": 2.31e-03, "Sc": 9.02e-08, "Ti": 5.18e-05, "V": 3.94e-06, "Cr": 5.18e-04, "Mn": 1.52e-04, "Fe": 3.58e-02, "Co": 2.86e-05, "Ni": 2.35e-03, "Cu": 4.90e-07, "Zn": 7.46e-06, } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/art/io.py0000644000175100001770000005416314714401662016003 0ustar00runnerdockerimport os import os.path from collections import defaultdict from functools import partial import numpy as np from 
yt.frontends.art.definitions import ( hydro_struct, particle_fields, particle_star_fields, star_struct, ) from yt.units.yt_array import YTArray, YTQuantity from yt.utilities.fortran_utils import read_vector, skip from yt.utilities.io_handler import BaseIOHandler from yt.utilities.logger import ytLogger as mylog class IOHandlerART(BaseIOHandler): _dataset_type = "art" tb, ages = None, None cache = None masks = None caching = True def __init__(self, *args, **kwargs): self.cache = {} self.masks = {} super().__init__(*args, **kwargs) self.ws = self.ds.parameters["wspecies"] self.ls = self.ds.parameters["lspecies"] self.file_particle = self.ds._file_particle_data self.file_stars = self.ds._file_particle_stars self.Nrow = self.ds.parameters["Nrow"] def _read_fluid_selection(self, chunks, selector, fields, size): # Chunks in this case will have affiliated domain subset objects # Each domain subset will contain a hydro_offset array, which gives # pointers to level-by-level hydro information tr = defaultdict(list) cp = 0 for chunk in chunks: for subset in chunk.objs: # Now we read the entire thing f = open(subset.domain.ds._file_amr, "rb") # This contains the boundary information, so we skim through # and pick off the right vectors rv = subset.fill(f, fields, selector) for ft, f in fields: d = rv.pop(f) mylog.debug( "Filling %s with %s (%0.3e %0.3e) (%s:%s)", f, d.size, d.min(), d.max(), cp, cp + d.size, ) tr[ft, f].append(d) cp += d.size d = {} for field in fields: d[field] = np.concatenate(tr.pop(field)) return d def _get_mask(self, selector, ftype): key = (selector, ftype) if key in self.masks.keys() and self.caching: return self.masks[key] pstr = "particle_position_%s" x, y, z = (self._get_field((ftype, pstr % ax)) for ax in "xyz") mask = selector.select_points(x, y, z, 0.0) if self.caching: self.masks[key] = mask return self.masks[key] else: return mask def _read_particle_coords(self, chunks, ptf): chunks = list(chunks) for _chunk in chunks: for ptype in sorted(ptf): x = self._get_field((ptype, "particle_position_x")) y = self._get_field((ptype, "particle_position_y")) z = self._get_field((ptype, "particle_position_z")) yield ptype, (x, y, z), 0.0 def _read_particle_fields(self, chunks, ptf, selector): chunks = list(chunks) for _chunk in chunks: for ptype, field_list in sorted(ptf.items()): x = self._get_field((ptype, "particle_position_x")) y = self._get_field((ptype, "particle_position_y")) z = self._get_field((ptype, "particle_position_z")) mask = selector.select_points(x, y, z, 0.0) if mask is None: continue for field in field_list: data = self._get_field((ptype, field)) yield (ptype, field), data[mask] def _get_field(self, field): if field in self.cache.keys() and self.caching: mylog.debug("Cached %s", str(field)) return self.cache[field] mylog.debug("Reading %s", str(field)) tr = {} ftype, fname = field ptmax = self.ws[-1] pbool, idxa, idxb = _determine_field_size(self.ds, ftype, self.ls, ptmax) npa = idxb - idxa sizes = np.diff(np.concatenate(([0], self.ls))) rp = partial( read_particles, self.file_particle, self.Nrow, idxa=idxa, idxb=idxb ) for ax in "xyz": if fname.startswith(f"particle_position_{ax}"): dd = self.ds.domain_dimensions[0] off = 1.0 / dd tr[field] = rp(fields=[ax])[0] / dd - off if fname.startswith(f"particle_velocity_{ax}"): (tr[field],) = rp(fields=["v" + ax]) if fname.startswith("particle_mass"): a = 0 data = np.zeros(npa, dtype="f8") for ptb, size, m in zip(pbool, sizes, self.ws, strict=True): if ptb: data[a : a + size] = m a += size tr[field] = data elif fname == 
"particle_index": tr[field] = np.arange(idxa, idxb) elif fname == "particle_type": a = 0 data = np.zeros(npa, dtype="int64") for i, (ptb, size) in enumerate(zip(pbool, sizes, strict=True)): if ptb: data[a : a + size] = i a += size tr[field] = data if pbool[-1] and fname in particle_star_fields: data = read_star_field(self.file_stars, field=fname) temp = tr.get(field, np.zeros(npa, "f8")) nstars = self.ls[-1] - self.ls[-2] if nstars > 0: temp[-nstars:] = data tr[field] = temp if fname == "particle_creation_time": self.tb, self.ages, data = interpolate_ages( tr[field][-nstars:], self.file_stars, self.tb, self.ages, self.ds.current_time, ) temp = tr.get(field, np.zeros(npa, "f8")) temp[-nstars:] = data tr[field] = temp del data # We check again, after it's been filled if fname.startswith("particle_mass"): # We now divide by NGrid in order to make this match up. Note that # this means that even when requested in *code units*, we are # giving them as modified by the ng value. This only works for # dark_matter -- stars are regular matter. tr[field] /= self.ds.domain_dimensions.prod() if tr == {}: tr = {f: np.array([]) for f in [field]} if self.caching: self.cache[field] = tr[field] return self.cache[field] else: return tr[field] class IOHandlerDarkMatterART(IOHandlerART): _dataset_type = "dm_art" def _count_particles(self, data_file): return { k: self.ds.parameters["lspecies"][i] for i, k in enumerate(self.ds.particle_types_raw) } def _identify_fields(self, domain): field_list = [] self.particle_field_list = list(particle_fields) for ptype in self.ds.particle_types_raw: for pfield in self.particle_field_list: pfn = (ptype, pfield) field_list.append(pfn) return field_list, {} def _get_field(self, field): if field in self.cache.keys() and self.caching: mylog.debug("Cached %s", str(field)) return self.cache[field] mylog.debug("Reading %s", str(field)) tr = {} ftype, fname = field ptmax = self.ws[-1] pbool, idxa, idxb = _determine_field_size(self.ds, ftype, self.ls, ptmax) npa = idxb - idxa sizes = np.diff(np.concatenate(([0], self.ls))) rp = partial( read_particles, self.file_particle, self.Nrow, idxa=idxa, idxb=idxb ) for ax in "xyz": if fname.startswith(f"particle_position_{ax}"): # This is not the same as domain_dimensions dd = self.ds.parameters["ng"] off = 1.0 / dd tr[field] = rp(fields=[ax])[0] / dd - off if fname.startswith(f"particle_velocity_{ax}"): (tr[field],) = rp(["v" + ax]) if fname.startswith("particle_mass"): a = 0 data = np.zeros(npa, dtype="f8") for ptb, size, m in zip(pbool, sizes, self.ws, strict=True): if ptb: data[a : a + size] = m a += size tr[field] = data elif fname == "particle_index": tr[field] = np.arange(idxa, idxb) elif fname == "particle_type": a = 0 data = np.zeros(npa, dtype="int64") for i, (ptb, size) in enumerate(zip(pbool, sizes, strict=True)): if ptb: data[a : a + size] = i a += size tr[field] = data # We check again, after it's been filled if fname.startswith("particle_mass"): # We now divide by NGrid in order to make this match up. Note that # this means that even when requested in *code units*, we are # giving them as modified by the ng value. This only works for # dark_matter -- stars are regular matter. 
tr[field] /= self.ds.domain_dimensions.prod() if tr == {}: tr[field] = np.array([]) if self.caching: self.cache[field] = tr[field] return self.cache[field] else: return tr[field] def _yield_coordinates(self, data_file): for ptype in self.ds.particle_types_raw: x = self._get_field((ptype, "particle_position_x")) y = self._get_field((ptype, "particle_position_y")) z = self._get_field((ptype, "particle_position_z")) yield ptype, np.stack((x, y, z), axis=-1) def _determine_field_size(pf, field, lspecies, ptmax): pbool = np.zeros(len(lspecies), dtype="bool") idxas = np.concatenate( ( [ 0, ], lspecies[:-1], ) ) idxbs = lspecies if "specie" in field: index = int(field.replace("specie", "")) pbool[index] = True else: raise RuntimeError idxa, idxb = idxas[pbool][0], idxbs[pbool][-1] return pbool, idxa, idxb def interpolate_ages( data, file_stars, interp_tb=None, interp_ages=None, current_time=None ): if interp_tb is None: t_stars, a_stars = read_star_field(file_stars, field="t_stars") # timestamp of file should match amr timestamp if current_time: tdiff = YTQuantity(b2t(t_stars), "Gyr") - current_time.in_units("Gyr") if np.abs(tdiff) > 1e-4: mylog.info("Timestamp mismatch in star particle header: %s", tdiff) mylog.info("Interpolating ages") interp_tb, interp_ages = b2t(data) interp_tb = YTArray(interp_tb, "Gyr") interp_ages = YTArray(interp_ages, "Gyr") temp = np.interp(data, interp_tb, interp_ages) return interp_tb, interp_ages, temp def _read_art_level_info( f, level_oct_offsets, level, coarse_grid=128, ncell0=None, root_level=None ): pos = f.tell() f.seek(level_oct_offsets[level]) # Get the info for this level, skip the rest junk, nLevel, iOct = read_vector(f, "i", ">") # fortran indices start at 1 # Skip all the oct index data le = np.zeros((nLevel, 3), dtype="int64") fl = np.ones((nLevel, 6), dtype="int64") iocts = np.zeros(nLevel + 1, dtype="int64") idxa, idxb = 0, 0 chunk = int(1e6) # this is ~111MB for 15 dimensional 64 bit arrays left = nLevel while left > 0: this_chunk = min(chunk, left) idxb = idxa + this_chunk data = np.fromfile(f, dtype=">i", count=this_chunk * 15) data = data.reshape(this_chunk, 15) left -= this_chunk le[idxa:idxb, :] = data[:, 1:4] fl[idxa:idxb, 1] = np.arange(idxa, idxb) # pad byte is last, LL2, then ioct right before it iocts[idxa:idxb] = data[:, -3] idxa = idxa + this_chunk del data # emulate fortran code # do ic1 = 1 , nLevel # read(19) (iOctPs(i,iOct),i=1,3),(iOctNb(i,iOct),i=1,6), # & iOctPr(iOct), iOctLv(iOct), iOctLL1(iOct), # & iOctLL2(iOct) # iOct = iOctLL1(iOct) # ioct always represents the index of the next variable # not the current, so shift forward one index # the last index isn't used iocts[1:] = iocts[:-1] # shift iocts = iocts[:nLevel] # chop off the last, unused, index iocts[0] = iOct # starting value # now correct iocts for fortran indices start @ 1 iocts = iocts - 1 assert np.unique(iocts).shape[0] == nLevel # left edges are expressed as if they were on # level 15, so no matter what level max(le)=2**15 # correct to the yt convention # le = le/2**(root_level-1-level)-1 # try to find the root_level first def cfc(root_level, level, le): d_x = 1.0 / (2.0 ** (root_level - level + 1)) fc = (d_x * le) - 2 ** (level - 1) return fc if root_level is None: root_level = np.floor(np.log2(le.max() * 1.0 / coarse_grid)) root_level = root_level.astype("int64") for _ in range(10): fc = cfc(root_level, level, le) go = np.diff(np.unique(fc)).min() < 1.1 if go: break root_level += 1 else: fc = cfc(root_level, level, le) unitary_center = fc / (coarse_grid * 2.0 ** 
(level - 1)) assert np.all(unitary_center < 1.0) # again emulate the fortran code # This is all for calculating child oct locations # iC_ = iC + nbshift # iO = ishft ( iC_ , - ndim ) # id = ishft ( 1, MaxLevel - iOctLv(iO) ) # j = iC_ + 1 - ishft( iO , ndim ) # Posx = d_x * (iOctPs(1,iO) + sign ( id , idelta(j,1) )) # Posy = d_x * (iOctPs(2,iO) + sign ( id , idelta(j,2) )) # Posz = d_x * (iOctPs(3,iO) + sign ( id , idelta(j,3) )) # idelta = [[-1, 1, -1, 1, -1, 1, -1, 1], # [-1, -1, 1, 1, -1, -1, 1, 1], # [-1, -1, -1, -1, 1, 1, 1, 1]] # idelta = np.array(idelta) # if ncell0 is None: # ncell0 = coarse_grid**3 # nchild = 8 # ndim = 3 # nshift = nchild -1 # nbshift = nshift - ncell0 # iC = iocts #+ nbshift # iO = iC >> ndim #possibly >> # id = 1 << (root_level - level) # j = iC + 1 - ( iO << 3) # delta = np.abs(id)*idelta[:,j-1] # try without the -1 # le = le/2**(root_level+1-level) # now read the hvars and vars arrays # we are looking for iOctCh # we record if iOctCh is >0, in which it is subdivided # iOctCh = np.zeros((nLevel+1,8),dtype='bool') f.seek(pos) return unitary_center, fl, iocts, nLevel, root_level def get_ranges( skip, count, field, words=6, real_size=4, np_per_page=4096**2, num_pages=1 ): # translate every particle index into a file position ranges ranges = [] arr_size = np_per_page * real_size idxa, idxb = 0, 0 posa, posb = 0, 0 for _page in range(num_pages): idxb += np_per_page for i, fname in enumerate(["x", "y", "z", "vx", "vy", "vz"]): posb += arr_size if i == field or fname == field: if skip < np_per_page and count > 0: left_in_page = np_per_page - skip this_count = min(left_in_page, count) count -= this_count start = posa + skip * real_size end = posa + this_count * real_size ranges.append((start, this_count)) skip = 0 assert end <= posb else: skip -= np_per_page posa += arr_size idxa += np_per_page assert count == 0 return ranges def read_particles(file, Nrow, idxa, idxb, fields): words = 6 # words (reals) per particle: x,y,z,vx,vy,vz real_size = 4 # for file_particle_data; not always true? 
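# Editor's note (assumed on-disk layout, inferred from get_ranges above):
# the particle file appears to be a sequence of pages, each holding six
# arrays (x, y, z, vx, vy, vz) of np_per_page big-endian float32 values,
# so array number a of page p starts at byte (p * 6 + a) * np_per_page * 4.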
np_per_page = Nrow**2 # defined in ART a_setup.h, # of particles/page num_pages = os.path.getsize(file) // (real_size * words * np_per_page) fh = open(file, "rb") skip, count = idxa, idxb - idxa kwargs = { "words": words, "real_size": real_size, "np_per_page": np_per_page, "num_pages": num_pages, } arrs = [] for field in fields: ranges = get_ranges(skip, count, field, **kwargs) data = None for seek, this_count in ranges: fh.seek(seek) temp = np.fromfile(fh, count=this_count, dtype=">f4") if data is None: data = temp else: data = np.concatenate((data, temp)) arrs.append(data.astype("f8")) fh.close() return arrs def read_star_field(file, field=None): data = {} with open(file, "rb") as fh: for dtype, variables in star_struct: found = ( isinstance(variables, tuple) and field in variables ) or field == variables if found: data[field] = read_vector(fh, dtype[1], dtype[0]) else: skip(fh, endian=">") return data.pop(field) def _read_child_mask_level(f, level_child_offsets, level, nLevel, nhydro_vars): f.seek(level_child_offsets[level]) ioctch = np.zeros(nLevel, dtype="uint8") idc = np.zeros(nLevel, dtype="int32") chunk = int(1e6) left = nLevel width = nhydro_vars + 6 a, b = 0, 0 while left > 0: chunk = min(chunk, left) b += chunk arr = np.fromfile(f, dtype=">i", count=chunk * width) arr = arr.reshape((width, chunk), order="F") assert np.all(arr[0, :] == arr[-1, :]) # pads must be equal idc[a:b] = arr[1, :] - 1 # fix fortran indexing ioctch[a:b] = arr[2, :] == 0 # if it is above zero, then refined available # zero in the mask means there is refinement available a = b left -= chunk assert left == 0 return idc, ioctch nchem = 8 + 2 dtyp = np.dtype(f">i4,>i8,>i8,>{nchem}f4,>2f4,>i4") def _read_child_level( f, level_child_offsets, level_oct_offsets, level_info, level, fields, domain_dimensions, ncell0, nhydro_vars=10, nchild=8, noct_range=None, ): # emulate the fortran code for reading cell data # read ( 19 ) idc, iOctCh(idc), (hvar(i,idc),i=1,nhvar), # & (var(i,idc), i=2,3) # contiguous 8-cell sections are for the same oct; # ie, we don't write out just the 0 cells, then the 1 cells # optionally, we only read noct_range to save memory left_index, fl, octs, nocts, root_level = _read_art_level_info( f, level_oct_offsets, level, coarse_grid=domain_dimensions[0] ) if noct_range is None: nocts = level_info[level] ncells = nocts * 8 f.seek(level_child_offsets[level]) arr = np.fromfile(f, dtype=hydro_struct, count=ncells) assert np.all(arr["pad1"] == arr["pad2"]) # pads must be equal # idc = np.argsort(arr['idc']) #correct fortran indices # translate idc into icell, and then to iOct icell = (arr["idc"] >> 3) << 3 iocts = (icell - ncell0) / nchild # without a F correction, there's a +1 # assert that the children are read in the same order as the octs assert np.all(octs == iocts[::nchild]) else: start, end = noct_range nocts = min(end - start, level_info[level]) end = start + nocts ncells = nocts * 8 skip = np.dtype(hydro_struct).itemsize * start * 8 f.seek(level_child_offsets[level] + skip) arr = np.fromfile(f, dtype=hydro_struct, count=ncells) assert np.all(arr["pad1"] == arr["pad2"]) # pads must be equal source = {} for field in fields: sh = (nocts, 8) source[field] = np.reshape(arr[field], sh, order="C").astype("float64") return source def _read_root_level(f, level_offsets, level_info, nhydro_vars=10): nocts = level_info[0] f.seek(level_offsets[0]) # Ditch the header hvar = read_vector(f, "f", ">") var = read_vector(f, "f", ">") hvar = hvar.reshape((nhydro_vars, nocts * 8), order="F") var = var.reshape((2, nocts
* 8), order="F") arr = np.concatenate((hvar, var)) return arr # All of these functions are to convert from hydro time var to # proper time sqrt = np.sqrt sign = np.sign def find_root(f, a, b, tol=1e-6): c = (a + b) / 2.0 last = -np.inf assert sign(f(a)) != sign(f(b)) while np.abs(f(c) - last) > tol: last = f(c) if sign(last) == sign(f(b)): b = c else: a = c c = (a + b) / 2.0 return c def quad(fintegrand, xmin, xmax, n=1e4): from yt._maintenance.numpy2_compat import trapezoid spacings = np.logspace(np.log10(xmin), np.log10(xmax), num=int(n)) integrand_arr = fintegrand(spacings) val = trapezoid(integrand_arr, dx=np.diff(spacings)) return val def a2b(at, Om0=0.27, Oml0=0.73, h=0.700): def f_a2b(x): val = 0.5 * sqrt(Om0) / x**3.0 val /= sqrt(Om0 / x**3.0 + Oml0 + (1.0 - Om0 - Oml0) / x**2.0) return val # val, err = si.quad(f_a2b,1,at) val = quad(f_a2b, 1, at) return val def b2a(bt, **kwargs): # converts code time into expansion factor # if Om0 ==1and OmL == 0 then b2a is (1 / (1-td))**2 # if bt < -190.0 or bt > -.10: raise 'bt outside of range' def f_b2a(at): return a2b(at, **kwargs) - bt return find_root(f_b2a, 1e-4, 1.1) # return so.brenth(f_b2a,1e-4,1.1) # return brent.brent(f_b2a) def a2t(at, Om0=0.27, Oml0=0.73, h=0.700): def integrand(x): return 1.0 / (x * sqrt(Oml0 + Om0 * x**-3.0)) current_time = quad(integrand, 1e-4, at) current_time *= 9.779 / h return current_time def b2t(tb, n=1e2, logger=None, **kwargs): tb = np.array(tb) if isinstance(tb, float): return a2t(b2a(tb)) if tb.shape == (): return a2t(b2a(tb)) if len(tb) < n: n = len(tb) tbs = -1.0 * np.logspace(np.log10(-tb.min()), np.log10(-tb.max()), int(n)) ages = [] for i, tbi in enumerate(tbs): ages += (a2t(b2a(tbi)),) if logger: logger(i) ages = np.array(ages) return tbs, ages ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.295152 yt-4.4.0/yt/frontends/art/tests/0000755000175100001770000000000014714401715016152 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/art/tests/__init__.py0000644000175100001770000000000014714401662020252 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/art/tests/test_outputs.py0000644000175100001770000001001314714401662021302 0ustar00runnerdockerfrom numpy.testing import assert_almost_equal, assert_equal from yt.frontends.art.api import ARTDataset from yt.testing import ( ParticleSelectionComparison, requires_file, units_override_check, ) from yt.units.yt_array import YTQuantity from yt.utilities.answer_testing.framework import ( FieldValuesTest, PixelizedProjectionValuesTest, data_dir_load, requires_ds, ) _fields = ( ("gas", "density"), ("gas", "temperature"), ("all", "particle_mass"), ("all", "particle_position_x"), ("all", "particle_velocity_y"), ) d9p = "D9p_500/10MpcBox_HartGal_csf_a0.500.d" dmonly = "DMonly/PMcrs0.0100.DAT" @requires_ds(d9p, big_data=True) def test_d9p(): ds = data_dir_load(d9p) ds.index assert_equal(str(ds), "10MpcBox_HartGal_csf_a0.500.d") dso = [None, ("sphere", ("max", (0.1, "unitary")))] for field in _fields: for axis in [0, 1]: for dobj_name in dso: for weight_field in [None, ("gas", "density")]: if field[0] not in ds.particle_types: yield PixelizedProjectionValuesTest( d9p, axis, field, weight_field, dobj_name ) yield FieldValuesTest( d9p, field, obj_type=dobj_name, particle_type=field[0] in ds.particle_types, ) @requires_ds(d9p, big_data=True) def 
test_d9p_global_values(): ds = data_dir_load(d9p) ad = ds.all_data() # 'Ana' variable values output from the ART Fortran 'ANA' analysis code AnaNStars = 6255 assert_equal(ad["stars", "particle_type"].size, AnaNStars) assert_equal(ad["specie4", "particle_type"].size, AnaNStars) # The *real* answer is 2833405, but yt misses one particle since it lives # on a domain boundary. See issue 814. When that is fixed, this test # will need to be updated AnaNDM = 2833404 assert_equal(ad["darkmatter", "particle_type"].size, AnaNDM) assert_equal( ( ad["specie0", "particle_type"].size + ad["specie1", "particle_type"].size + ad["specie2", "particle_type"].size + ad["specie3", "particle_type"].size ), AnaNDM, ) for spnum in range(5): npart_read = ad[f"specie{spnum}", "particle_type"].size npart_header = ds.particle_type_counts[f"specie{spnum}"] if spnum == 3: # see issue 814 npart_read += 1 assert_equal(npart_read, npart_header) AnaBoxSize = YTQuantity(7.1442196564, "Mpc") AnaVolume = YTQuantity(364.640074656, "Mpc**3") Volume = 1 for i in ds.domain_width.in_units("Mpc"): assert_almost_equal(i, AnaBoxSize) Volume *= i assert_almost_equal(Volume, AnaVolume) AnaNCells = 4087490 assert_equal(len(ad["index", "cell_volume"]), AnaNCells) AnaTotDMMass = YTQuantity(1.01191786808255e14, "Msun") assert_almost_equal( ad["darkmatter", "particle_mass"].sum().in_units("Msun"), AnaTotDMMass ) AnaTotStarMass = YTQuantity(1776701.3990607238, "Msun") assert_almost_equal( ad["stars", "particle_mass"].sum().in_units("Msun"), AnaTotStarMass ) AnaTotStarMassInitial = YTQuantity(2423468.2801332865, "Msun") assert_almost_equal( ad["stars", "particle_mass_initial"].sum().in_units("Msun"), AnaTotStarMassInitial, ) AnaTotGasMass = YTQuantity(1.7826982029216785e13, "Msun") assert_almost_equal(ad["gas", "cell_mass"].sum().in_units("Msun"), AnaTotGasMass) AnaTotTemp = YTQuantity(150219844793.3907, "K") # just leaves assert_almost_equal(ad["gas", "temperature"].sum().in_units("K"), AnaTotTemp) @requires_file(d9p) def test_ARTDataset(): assert isinstance(data_dir_load(d9p), ARTDataset) @requires_file(d9p) def test_units_override(): units_override_check(d9p) @requires_file(dmonly) def test_particle_selection(): ds = data_dir_load(dmonly) psc = ParticleSelectionComparison(ds) psc.run_defaults() ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.299152 yt-4.4.0/yt/frontends/artio/0000755000175100001770000000000014714401715015340 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/artio/__init__.py0000644000175100001770000000000014714401662017440 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/artio/_artio_caller.pyx0000644000175100001770000021171214714401662020706 0ustar00runnerdocker# distutils: sources = ARTIO_SOURCE # distutils: include_dirs = LIB_DIR_GEOM_ARTIO # distutils: libraries = STD_LIBS cimport cython import numpy as np cimport numpy as np import sys from libc.stdlib cimport free, malloc from libc.string cimport memcpy from yt.geometry.oct_container cimport OctObjectPool, SparseOctreeContainer from yt.geometry.oct_visitors cimport Oct from yt.geometry.particle_deposit cimport ParticleDepositOperation from yt.geometry.selection_routines cimport AlwaysSelector, SelectorObject from yt.utilities.lib.fp_utils cimport imax from yt.utilities.lib.misc_utilities import OnceIndirect cdef extern from "platform_dep.h": ctypedef int 
int32_t ctypedef long long int64_t void *alloca(int) cdef extern from "cosmology.h": ctypedef struct CosmologyParameters "CosmologyParameters" : pass CosmologyParameters *cosmology_allocate() void cosmology_free(CosmologyParameters *c) void cosmology_set_fixed(CosmologyParameters *c) void cosmology_set_OmegaM(CosmologyParameters *c, double value) void cosmology_set_OmegaB(CosmologyParameters *c, double value) void cosmology_set_OmegaL(CosmologyParameters *c, double value) void cosmology_set_h(CosmologyParameters *c, double value) void cosmology_set_DeltaDC(CosmologyParameters *c, double value) double abox_from_auni(CosmologyParameters *c, double a) noexcept nogil double tcode_from_auni(CosmologyParameters *c, double a) noexcept nogil double tphys_from_auni(CosmologyParameters *c, double a) noexcept nogil double auni_from_abox(CosmologyParameters *c, double v) noexcept nogil double auni_from_tcode(CosmologyParameters *c, double v) noexcept nogil double auni_from_tphys(CosmologyParameters *c, double v) noexcept nogil double abox_from_tcode(CosmologyParameters *c, double tcode) noexcept nogil double tcode_from_abox(CosmologyParameters *c, double abox) noexcept nogil double tphys_from_abox(CosmologyParameters *c, double abox) noexcept nogil double tphys_from_tcode(CosmologyParameters *c, double tcode) noexcept nogil cdef extern from "artio.h": ctypedef struct artio_fileset_handle "artio_fileset" : pass ctypedef struct artio_selection "artio_selection" : pass ctypedef struct artio_context : pass cdef extern artio_context *artio_context_global # open modes cdef int ARTIO_OPEN_HEADER "ARTIO_OPEN_HEADER" cdef int ARTIO_OPEN_GRID "ARTIO_OPEN_GRID" cdef int ARTIO_OPEN_PARTICLES "ARTIO_OPEN_PARTICLES" # parameter constants cdef int ARTIO_TYPE_STRING "ARTIO_TYPE_STRING" cdef int ARTIO_TYPE_CHAR "ARTIO_TYPE_CHAR" cdef int ARTIO_TYPE_INT "ARTIO_TYPE_INT" cdef int ARTIO_TYPE_FLOAT "ARTIO_TYPE_FLOAT" cdef int ARTIO_TYPE_DOUBLE "ARTIO_TYPE_DOUBLE" cdef int ARTIO_TYPE_LONG "ARTIO_TYPE_LONG" cdef int ARTIO_MAX_STRING_LENGTH "ARTIO_MAX_STRING_LENGTH" cdef int ARTIO_PARAMETER_EXHAUSTED "ARTIO_PARAMETER_EXHAUSTED" # grid read options cdef int ARTIO_READ_LEAFS "ARTIO_READ_LEAFS" cdef int ARTIO_READ_REFINED "ARTIO_READ_REFINED" cdef int ARTIO_READ_ALL "ARTIO_READ_ALL" cdef int ARTIO_READ_REFINED_NOT_ROOT "ARTIO_READ_REFINED_NOT_ROOT" cdef int ARTIO_RETURN_CELLS "ARTIO_RETURN_CELLS" cdef int ARTIO_RETURN_OCTS "ARTIO_RETURN_OCTS" # errors cdef int ARTIO_SUCCESS "ARTIO_SUCCESS" cdef int ARTIO_ERR_MEMORY_ALLOCATION "ARTIO_ERR_MEMORY_ALLOCATION" artio_fileset_handle *artio_fileset_open(char *file_prefix, int type, artio_context *context ) int artio_fileset_close( artio_fileset_handle *handle ) int artio_fileset_open_particle( artio_fileset_handle *handle ) int artio_fileset_open_grid(artio_fileset_handle *handle) int artio_fileset_close_grid(artio_fileset_handle *handle) int artio_fileset_has_grid( artio_fileset_handle *handle ) int artio_fileset_has_particles( artio_fileset_handle *handle ) # selection functions artio_selection *artio_selection_allocate( artio_fileset_handle *handle ) artio_selection *artio_select_all( artio_fileset_handle *handle ) artio_selection *artio_select_volume( artio_fileset_handle *handle, double lpos[3], double rpos[3] ) int artio_selection_add_root_cell( artio_selection *selection, int coords[3] ) int artio_selection_destroy( artio_selection *selection ) int artio_selection_iterator( artio_selection *selection, int64_t max_range_size, int64_t *start, int64_t *end ) int64_t 
artio_selection_size( artio_selection *selection ) void artio_selection_print( artio_selection *selection ) # parameter functions int artio_parameter_iterate( artio_fileset_handle *handle, char *key, int *type, int *length ) int artio_parameter_get_int_array(artio_fileset_handle *handle, char * key, int length, int32_t *values) int artio_parameter_get_float_array(artio_fileset_handle *handle, char * key, int length, float *values) int artio_parameter_get_long_array(artio_fileset_handle *handle, char * key, int length, int64_t *values) int artio_parameter_get_double_array(artio_fileset_handle *handle, char * key, int length, double *values) int artio_parameter_get_string_array(artio_fileset_handle *handle, char * key, int length, char **values ) # grid functions int artio_grid_cache_sfc_range(artio_fileset_handle *handle, int64_t start, int64_t end) int artio_grid_clear_sfc_cache( artio_fileset_handle *handle ) int artio_grid_read_root_cell_begin(artio_fileset_handle *handle, int64_t sfc, double *pos, float *variables, int *num_tree_levels, int *num_octs_per_level) int artio_grid_read_root_cell_end(artio_fileset_handle *handle) int artio_grid_read_level_begin(artio_fileset_handle *handle, int level ) int artio_grid_read_level_end(artio_fileset_handle *handle) int artio_grid_read_oct(artio_fileset_handle *handle, double *pos, float *variables, int *refined) int artio_grid_count_octs_in_sfc_range(artio_fileset_handle *handle, int64_t start, int64_t end, int64_t *num_octs) #particle functions int artio_fileset_open_particles(artio_fileset_handle *handle) int artio_particle_read_root_cell_begin(artio_fileset_handle *handle, int64_t sfc, int * num_particle_per_species) int artio_particle_read_root_cell_end(artio_fileset_handle *handle) int artio_particle_read_particle(artio_fileset_handle *handle, int64_t *pid, int *subspecies, double *primary_variables, float *secondary_variables) int artio_particle_cache_sfc_range(artio_fileset_handle *handle, int64_t sfc_start, int64_t sfc_end) int artio_particle_clear_sfc_cache(artio_fileset_handle *handle) int artio_particle_read_species_begin(artio_fileset_handle *handle, int species) int artio_particle_read_species_end(artio_fileset_handle *handle) cdef extern from "artio_internal.h": np.int64_t artio_sfc_index( artio_fileset_handle *handle, int coords[3] ) noexcept nogil void artio_sfc_coords( artio_fileset_handle *handle, int64_t index, int coords[3] ) noexcept nogil cdef void check_artio_status(int status, char *fname="[unknown]"): if status != ARTIO_SUCCESS: import traceback traceback.print_stack() callername = sys._getframe().f_code.co_name nline = sys._getframe().f_lineno raise RuntimeError('failure with status', status, 'in function',fname,'from caller', callername, nline) cdef class artio_fileset : cdef public object parameters cdef artio_fileset_handle *handle # cosmology parameter for time unit conversion cdef CosmologyParameters *cosmology cdef float tcode_to_years # common attributes cdef public int num_grid cdef int64_t num_root_cells cdef int64_t sfc_min, sfc_max # grid attributes cdef public int has_grid cdef public int min_level, max_level cdef public int num_grid_variables cdef int *num_octs_per_level cdef float *grid_variables # particle attributes cdef public int has_particles cdef public int num_species cdef int *particle_position_index cdef int *num_particles_per_species cdef double *primary_variables cdef float *secondary_variables def __init__(self, char *file_prefix) : cdef int artio_type = ARTIO_OPEN_HEADER cdef int64_t num_root 
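        # For orientation: the fileset is opened header-only at this point;
        # the grid and particle sub-files are attached further down via
        # artio_fileset_open_grid() / artio_fileset_open_particles(), and
        # only if the header actually reports them.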
self.handle = artio_fileset_open( file_prefix, artio_type, artio_context_global ) if not self.handle : raise RuntimeError self.read_parameters() self.num_root_cells = self.parameters['num_root_cells'][0] self.num_grid = 1 num_root = self.num_root_cells while num_root > 1 : self.num_grid <<= 1 num_root >>= 3 self.sfc_min = 0 self.sfc_max = self.num_root_cells-1 # initialize cosmology module if "abox" in self.parameters: self.cosmology = cosmology_allocate() cosmology_set_OmegaM(self.cosmology, self.parameters['OmegaM'][0]) cosmology_set_OmegaL(self.cosmology, self.parameters['OmegaL'][0]) cosmology_set_OmegaB(self.cosmology, self.parameters['OmegaB'][0]) cosmology_set_h(self.cosmology, self.parameters['hubble'][0]) cosmology_set_DeltaDC(self.cosmology, self.parameters['DeltaDC'][0]) cosmology_set_fixed(self.cosmology) else: self.cosmology = NULL self.tcode_to_years = self.parameters['time_unit'][0]/(365.25*86400) # grid detection self.min_level = 0 self.max_level = self.parameters['grid_max_level'][0] self.num_grid_variables = self.parameters['num_grid_variables'][0] self.num_octs_per_level = malloc(self.max_level*sizeof(int)) self.grid_variables = malloc(8*self.num_grid_variables*sizeof(float)) if (not self.num_octs_per_level) or (not self.grid_variables) : raise MemoryError if artio_fileset_has_grid(self.handle): status = artio_fileset_open_grid(self.handle) check_artio_status(status) self.has_grid = 1 else: self.has_grid = 0 # particle detection if ( artio_fileset_has_particles(self.handle) ): status = artio_fileset_open_particles(self.handle) check_artio_status(status) self.has_particles = 1 for v in ["num_particle_species","num_primary_variables","num_secondary_variables"]: if v not in self.parameters: raise RuntimeError("Unable to locate particle header information in artio header: key=", v) self.num_species = self.parameters['num_particle_species'][0] self.particle_position_index = malloc(3*sizeof(int)*self.num_species) if not self.particle_position_index : raise MemoryError for ispec in range(self.num_species) : species_labels = "species_%02d_primary_variable_labels"% (ispec,) if species_labels not in self.parameters: raise RuntimeError("Unable to locate variable labels for species",ispec) labels = self.parameters[species_labels] try : self.particle_position_index[3*ispec+0] = labels.index('POSITION_X') self.particle_position_index[3*ispec+1] = labels.index('POSITION_Y') self.particle_position_index[3*ispec+2] = labels.index('POSITION_Z') except ValueError : raise RuntimeError("Unable to locate position information for particle species", ispec) self.num_particles_per_species = malloc(sizeof(int)*self.num_species) self.primary_variables = malloc(sizeof(double)*max(self.parameters['num_primary_variables'])) self.secondary_variables = malloc(sizeof(float)*max(self.parameters['num_secondary_variables'])) if (not self.num_particles_per_species) or (not self.primary_variables) or (not self.secondary_variables) : raise MemoryError else: self.has_particles = 0 def __dealloc__(self) : if self.num_octs_per_level : free(self.num_octs_per_level) if self.grid_variables : free(self.grid_variables) if self.particle_position_index : free(self.particle_position_index) if self.num_particles_per_species : free(self.num_particles_per_species) if self.primary_variables : free(self.primary_variables) if self.secondary_variables : free(self.secondary_variables) if self.cosmology : cosmology_free(self.cosmology) if self.handle : artio_fileset_close(self.handle) def read_parameters(self) : cdef char key[64] 
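        # For orientation: the loop below drives artio_parameter_iterate()
        # until it stops returning ARTIO_SUCCESS; for each key it allocates a
        # temporary C buffer sized by the reported type and length, copies
        # the values into a Python list, frees the buffer, and stores the
        # list in self.parameters -- so every parameter comes back to Python
        # as a list, regardless of its on-disk type.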
cdef int type cdef int length cdef char ** char_values cdef int32_t *int_values cdef int64_t *long_values cdef float *float_values cdef double *double_values self.parameters = {} while artio_parameter_iterate( self.handle, key, &type, &length ) == ARTIO_SUCCESS : if type == ARTIO_TYPE_STRING : char_values = malloc(length*sizeof(char *)) for i in range(length) : char_values[i] = malloc( ARTIO_MAX_STRING_LENGTH*sizeof(char) ) artio_parameter_get_string_array( self.handle, key, length, char_values ) parameter = [ char_values[i] for i in range(length) ] for i in range(length) : free(char_values[i]) free(char_values) for i in range(len(parameter)): parameter[i] = parameter[i].decode('utf-8') elif type == ARTIO_TYPE_INT : int_values = malloc(length*sizeof(int32_t)) artio_parameter_get_int_array( self.handle, key, length, int_values ) parameter = [ int_values[i] for i in range(length) ] free(int_values) elif type == ARTIO_TYPE_LONG : long_values = malloc(length*sizeof(int64_t)) artio_parameter_get_long_array( self.handle, key, length, long_values ) parameter = [ long_values[i] for i in range(length) ] free(long_values) elif type == ARTIO_TYPE_FLOAT : float_values = malloc(length*sizeof(float)) artio_parameter_get_float_array( self.handle, key, length, float_values ) parameter = [ float_values[i] for i in range(length) ] free(float_values) elif type == ARTIO_TYPE_DOUBLE : double_values = malloc(length*sizeof(double)) artio_parameter_get_double_array( self.handle, key, length, double_values ) parameter = [ double_values[i] for i in range(length) ] free(double_values) else : raise RuntimeError("ARTIO file corruption detected: invalid type!") self.parameters[key.decode('utf-8')] = parameter def abox_from_auni(self, np.float64_t a): if self.cosmology: return abox_from_auni(self.cosmology, a) else: raise RuntimeError("abox_from_auni called for non-cosmological ARTIO fileset!") def tcode_from_auni(self, np.float64_t a): if self.cosmology: return tcode_from_auni(self.cosmology, a) else: raise RuntimeError("tcode_from_auni called for non-cosmological ARTIO fileset!") def tphys_from_auni(self, np.float64_t a): if self.cosmology: return tphys_from_auni(self.cosmology, a) else: raise RuntimeError("tphys_from_auni called for non-cosmological ARTIO fileset!") def auni_from_abox(self, np.float64_t v): if self.cosmology: return auni_from_abox(self.cosmology, v) else: raise RuntimeError("auni_from_abox called for non-cosmological ARTIO fileset!") def auni_from_tcode(self, np.float64_t v): if self.cosmology: return auni_from_tcode(self.cosmology, v) else: raise RuntimeError("auni_from_tcode called for non-cosmological ARTIO fileset!") @cython.wraparound(False) @cython.boundscheck(False) def auni_from_tcode_array(self, np.ndarray[np.float64_t] tcode): cdef int i, nauni cdef np.ndarray[np.float64_t, ndim=1] auni cdef CosmologyParameters *cosmology = self.cosmology if not cosmology: raise RuntimeError("auni_from_tcode_array called for non-cosmological ARTIO fileset!") auni = np.empty_like(tcode) nauni = auni.shape[0] with nogil: for i in range(nauni): auni[i] = auni_from_tcode(self.cosmology, tcode[i]) return auni def auni_from_tphys(self, np.float64_t v): if self.cosmology: return auni_from_tphys(self.cosmology, v) else: raise RuntimeError("auni_from_tphys called for non-cosmological ARTIO fileset!") def abox_from_tcode(self, np.float64_t abox): if self.cosmology: return abox_from_tcode(self.cosmology, abox) else: raise RuntimeError("abox_from_tcode called for non-cosmological ARTIO fileset!") def 
tcode_from_abox(self, np.float64_t abox): if self.cosmology: return tcode_from_abox(self.cosmology, abox) else: raise RuntimeError("tcode_from_abox called for non-cosmological ARTIO fileset!") def tphys_from_abox(self, np.float64_t abox): if self.cosmology: return tphys_from_abox(self.cosmology, abox) else: raise RuntimeError("tphys_from_abox called for non-cosmological ARTIO fileset!") def tphys_from_tcode(self, np.float64_t tcode): if self.cosmology: return tphys_from_tcode(self.cosmology, tcode) else: return self.tcode_to_years*tcode @cython.wraparound(False) @cython.boundscheck(False) def tphys_from_tcode_array(self, np.ndarray[np.float64_t] tcode): cdef int i, ntphys cdef np.ndarray[np.float64_t, ndim=1] tphys cdef CosmologyParameters *cosmology = self.cosmology tphys = np.empty_like(tcode) ntphys = tcode.shape[0] if cosmology: tphys = np.empty_like(tcode) ntphys = tcode.shape[0] with nogil: for i in range(ntphys): tphys[i] = tphys_from_tcode(cosmology, tcode[i]) return tphys else: return tcode*self.tcode_to_years # @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def read_particle_chunk(self, SelectorObject selector, int64_t sfc_start, int64_t sfc_end, fields) : cdef int i cdef int status cdef int subspecies cdef int64_t pid cdef np.float64_t pos[3] # since RuntimeErrors are not fatal, ensure no artio_particles* functions # called if fileset lacks particles if not self.has_particles: return data = {} accessed_species = np.zeros( self.num_species, dtype="int64") selected_mass = [ None for i in range(self.num_species)] selected_pid = [ None for i in range(self.num_species)] selected_species = [ None for i in range(self.num_species)] selected_primary = [ [] for i in range(self.num_species)] selected_secondary = [ [] for i in range(self.num_species)] for species,field in fields : if species < 0 or species > self.num_species : raise RuntimeError("Invalid species provided to read_particle_chunk") accessed_species[species] = 1 if self.parameters["num_primary_variables"][species] > 0 and \ field in self.parameters["species_%02u_primary_variable_labels"%(species,)] : selected_primary[species].append((self.parameters["species_%02u_primary_variable_labels"%(species,)].index(field),(species,field))) data[species,field] = np.empty(0,dtype="float64") elif self.parameters["num_secondary_variables"][species] > 0 and \ field in self.parameters["species_%02u_secondary_variable_labels"%(species,)] : selected_secondary[species].append((self.parameters["species_%02u_secondary_variable_labels"%(species,)].index(field),(species,field))) data[species,field] = np.empty(0,dtype="float64") elif field == "MASS" : selected_mass[species] = (species,field) data[species,field] = np.empty(0,dtype="float64") elif field == "PID" : selected_pid[species] = (species,field) data[species,field] = np.empty(0,dtype="int64") elif field == "SPECIES" : selected_species[species] = (species,field) data[species,field] = np.empty(0,dtype="int8") else : raise RuntimeError("invalid field name provided to read_particle_chunk") # cache the range status = artio_particle_cache_sfc_range( self.handle, self.sfc_min, self.sfc_max ) check_artio_status(status) for sfc in range( sfc_start, sfc_end+1 ) : status = artio_particle_read_root_cell_begin( self.handle, sfc, self.num_particles_per_species ) check_artio_status(status) for ispec in range(self.num_species) : if accessed_species[ispec] : status = artio_particle_read_species_begin(self.handle, ispec) check_artio_status(status) for particle in range( 
self.num_particles_per_species[ispec] ) : status = artio_particle_read_particle(self.handle, &pid, &subspecies, self.primary_variables, self.secondary_variables) check_artio_status(status) for i in range(3) : pos[i] = self.primary_variables[self.particle_position_index[3*ispec+i]] if selector.select_point(pos) : # loop over primary variables for i,field in selected_primary[ispec] : count = len(data[field]) data[field].resize(count+1) data[field][count] = self.primary_variables[i] # loop over secondary variables for i,field in selected_secondary[ispec] : count = len(data[field]) data[field].resize(count+1) data[field][count] = self.secondary_variables[i] # add particle id if selected_pid[ispec] : count = len(data[selected_pid[ispec]]) data[selected_pid[ispec]].resize(count+1) data[selected_pid[ispec]][count] = pid # add mass if requested if selected_mass[ispec] : count = len(data[selected_mass[ispec]]) data[selected_mass[ispec]].resize(count+1) data[selected_mass[ispec]][count] = self.parameters["particle_species_mass"][ispec] status = artio_particle_read_species_end( self.handle ) check_artio_status(status) status = artio_particle_read_root_cell_end( self.handle ) check_artio_status(status) return data #@cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def read_grid_chunk(self, SelectorObject selector, int64_t sfc_start, int64_t sfc_end, fields ): cdef int i cdef int level cdef int num_oct_levels cdef int refined[8] cdef int status cdef int64_t count cdef double dpos[3] cdef np.float64_t left[3] cdef np.float64_t right[3] cdef np.float64_t dds[3] cdef int *field_order cdef int num_fields = len(fields) field_order = malloc(sizeof(int)*num_fields) # translate fields from ARTIO names to indices var_labels = self.parameters['grid_variable_labels'] for i, f in enumerate(fields): if f not in var_labels: raise RuntimeError("Field",f,"is not known to ARTIO") field_order[i] = var_labels.index(f) status = artio_grid_cache_sfc_range( self.handle, self.sfc_min, self.sfc_max ) check_artio_status(status) # determine max number of cells we could hit (optimize later) #status = artio_grid_count_octs_in_sfc_range( self.handle, # sfc_start, sfc_end, &max_octs ) #check_artio_status(status) #max_cells = sfc_end-sfc_start+1 + max_octs*8 # allocate space for _fcoords, _icoords, _fwidth, _ires #fcoords = np.empty((max_cells, 3), dtype="float64") #ires = np.empty(max_cells, dtype="int64") fcoords = np.empty((0, 3), dtype="float64") ires = np.empty(0, dtype="int64") #data = [ np.empty(max_cells, dtype="float32") for i in range(num_fields) ] data = [ np.empty(0,dtype="float64") for i in range(num_fields)] count = 0 for sfc in range( sfc_start, sfc_end+1 ) : status = artio_grid_read_root_cell_begin( self.handle, sfc, dpos, self.grid_variables, &num_oct_levels, self.num_octs_per_level ) check_artio_status(status) if num_oct_levels == 0 : for i in range(num_fields) : data[i].resize(count+1) data[i][count] = self.grid_variables[field_order[i]] fcoords.resize((count+1,3)) for i in range(3) : fcoords[count][i] = dpos[i] ires.resize(count+1) ires[count] = 0 count += 1 for level in range(1,num_oct_levels+1) : status = artio_grid_read_level_begin( self.handle, level ) check_artio_status(status) for i in range(3) : dds[i] = 2.**-level for oct in range(self.num_octs_per_level[level-1]) : status = artio_grid_read_oct( self.handle, dpos, self.grid_variables, refined ) check_artio_status(status) for child in range(8) : if not refined[child] : for i in range(3) : left[i] = (dpos[i]-dds[i]) if (child & 
(i<<1)) else dpos[i] right[i] = left[i] + dds[i] if selector.select_bbox(left,right) : fcoords.resize((count+1, 3)) for i in range(3) : fcoords[count][i] = left[i]+0.5*dds[i] ires.resize(count+1) ires[count] = level for i in range(num_fields) : data[i].resize(count+1) data[i][count] = self.grid_variables[self.num_grid_variables*child+field_order[i]] count += 1 status = artio_grid_read_level_end( self.handle ) check_artio_status(status) status = artio_grid_read_root_cell_end( self.handle ) check_artio_status(status) free(field_order) #fcoords.resize((count,3)) #ires.resize(count) # #for i in range(num_fields) : # data[i].resize(count) return (fcoords, ires, data) def root_sfc_ranges_all(self, int max_range_size = 1024) : cdef int64_t sfc_start, sfc_end cdef artio_selection *selection selection = artio_select_all( self.handle ) if selection == NULL : raise RuntimeError sfc_ranges = [] while artio_selection_iterator(selection, max_range_size, &sfc_start, &sfc_end) == ARTIO_SUCCESS : sfc_ranges.append([sfc_start, sfc_end]) artio_selection_destroy(selection) return sfc_ranges def root_sfc_ranges(self, SelectorObject selector, int max_range_size = 1024): cdef int coords[3] cdef int64_t sfc_start, sfc_end cdef np.float64_t left[3] cdef np.float64_t right[3] cdef artio_selection *selection cdef int i, j, k sfc_ranges=[] selection = artio_selection_allocate(self.handle) for i in range(self.num_grid) : # stupid cython coords[0] = i left[0] = coords[0] right[0] = left[0] + 1.0 for j in range(self.num_grid) : coords[1] = j left[1] = coords[1] right[1] = left[1] + 1.0 for k in range(self.num_grid) : coords[2] = k left[2] = coords[2] right[2] = left[2] + 1.0 if selector.select_bbox(left,right) : status = artio_selection_add_root_cell(selection, coords) check_artio_status(status) while artio_selection_iterator(selection, max_range_size, &sfc_start, &sfc_end) == ARTIO_SUCCESS : sfc_ranges.append([sfc_start, sfc_end]) artio_selection_destroy(selection) return sfc_ranges ################################################### def artio_is_valid( char *file_prefix ) : cdef artio_fileset_handle *handle = artio_fileset_open( file_prefix, ARTIO_OPEN_HEADER, artio_context_global ) if handle == NULL : return False else : artio_fileset_close(handle) return True cdef class ARTIOSFCRangeHandler: cdef public np.int64_t sfc_start cdef public np.int64_t sfc_end cdef public artio_fileset artio_handle cdef public object root_mesh_handler cdef public object oct_count cdef public object octree_handler cdef artio_fileset_handle *handle cdef np.float64_t DLE[3] cdef np.float64_t DRE[3] cdef np.float64_t dds[3] cdef np.int64_t dims[3] cdef public np.int64_t total_octs cdef np.int64_t *doct_count cdef np.int64_t **pcount cdef float **root_mesh_data cdef np.int64_t nvars[2] cdef int cache_root_mesh def __init__(self, domain_dimensions, # cells domain_left_edge, domain_right_edge, artio_fileset artio_handle, sfc_start, sfc_end, int cache_root_mesh = 0): cdef int i cdef np.int64_t sfc self.sfc_start = sfc_start self.sfc_end = sfc_end self.artio_handle = artio_handle self.root_mesh_handler = None self.octree_handler = None self.handle = artio_handle.handle self.oct_count = None self.root_mesh_data = NULL self.pcount = NULL self.cache_root_mesh = cache_root_mesh if artio_handle.has_particles: self.pcount = malloc(sizeof(np.int64_t*) * artio_handle.num_species) self.nvars[0] = artio_handle.num_species for i in range(artio_handle.num_species): self.pcount[i] = malloc(sizeof(np.int64_t) * (self.sfc_end - self.sfc_start + 1)) for sfc in 
range(self.sfc_end - self.sfc_start + 1): self.pcount[i][sfc] = 0 else: self.nvars[0] = 0 if artio_handle.has_grid: self.nvars[1] = artio_handle.num_grid_variables else: self.nvars[1] = 0 for i in range(3): self.dims[i] = domain_dimensions[i] self.DLE[i] = domain_left_edge[i] self.DRE[i] = domain_right_edge[i] self.dds[i] = (self.DRE[i] - self.DLE[i])/self.dims[i] def __dealloc__(self): cdef int i if self.artio_handle.has_particles: for i in range(self.nvars[0]): free(self.pcount[i]) free(self.pcount) if self.artio_handle.has_grid: if self.root_mesh_data != NULL: for i in range(self.nvars[1]): free(self.root_mesh_data[i]) free(self.root_mesh_data) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def construct_mesh(self): cdef int status, level, ngv cdef np.int64_t sfc, oc, i cdef double dpos[3] cdef int num_oct_levels cdef int max_level = self.artio_handle.max_level cdef int *num_octs_per_level = malloc( (max_level + 1)*sizeof(int)) cdef int num_species = self.artio_handle.num_species cdef int *num_particles_per_species cdef ARTIOOctreeContainer octree ngv = self.nvars[1] cdef float *grid_variables = malloc( ngv * sizeof(float)) self.octree_handler = octree = ARTIOOctreeContainer(self) if self.cache_root_mesh == 1: self.root_mesh_data = malloc(sizeof(float *) * ngv) for i in range(ngv): self.root_mesh_data[i] = malloc(sizeof(float) * \ (self.sfc_end - self.sfc_start + 1)) # We want to pre-allocate an array of root pointers. In the future, # this will be pre-determined by the ARTIO library. However, because # realloc plays havoc with our tree searching, we can't utilize an # expanding array at the present time. octree.allocate_domains([], self.sfc_end - self.sfc_start + 1) cdef np.ndarray[np.int64_t, ndim=1] oct_count oct_count = np.zeros(self.sfc_end - self.sfc_start + 1, dtype="int64") status = artio_grid_cache_sfc_range(self.handle, self.sfc_start, self.sfc_end) check_artio_status(status) for sfc in range(self.sfc_start, self.sfc_end + 1): status = artio_grid_read_root_cell_begin( self.handle, sfc, dpos, grid_variables, &num_oct_levels, num_octs_per_level) check_artio_status(status) for i in range(ngv * self.cache_root_mesh): self.root_mesh_data[i][sfc - self.sfc_start] = \ grid_variables[i] if num_oct_levels > 0: oc = 0 for level in range(num_oct_levels): oc += num_octs_per_level[level] self.total_octs += oc oct_count[sfc - self.sfc_start] = oc octree.initialize_local_mesh(oc, num_oct_levels, num_octs_per_level, sfc) status = artio_grid_read_root_cell_end(self.handle) check_artio_status(status) status = artio_grid_clear_sfc_cache(self.handle) check_artio_status(status) if self.artio_handle.has_particles: num_particles_per_species = malloc( sizeof(int)*num_species) # Now particles status = artio_particle_cache_sfc_range(self.handle, self.sfc_start, self.sfc_end) check_artio_status(status) for sfc in range(self.sfc_start, self.sfc_end + 1): # Now particles status = artio_particle_read_root_cell_begin(self.handle, sfc, num_particles_per_species) check_artio_status(status) for i in range(num_species): self.pcount[i][sfc - self.sfc_start] = \ num_particles_per_species[i] status = artio_particle_read_root_cell_end(self.handle) check_artio_status(status) status = artio_particle_clear_sfc_cache(self.handle) check_artio_status(status) free(num_particles_per_species) free(grid_variables) free(num_octs_per_level) self.oct_count = oct_count self.doct_count = oct_count.data self.root_mesh_handler = ARTIORootMeshContainer(self) def free_mesh(self): self.octree_handler = 
None self.root_mesh_handler = None self.doct_count = NULL self.oct_count = None def get_coords(artio_fileset handle, np.int64_t s): cdef int coords[3] artio_sfc_coords(handle.handle, s, coords) return (coords[0], coords[1], coords[2]) cdef struct particle_var_pointers: # The number of particles we have filled np.int64_t count # Number of primary variables and pointers to their indices int n_p int p_ind[16] # Max of 16 vars # Number of secondary variables and pointers to their indices int n_s int s_ind[16] # Max of 16 vars # Pointers to the bools and data arrays for mass, pid and species int n_mass np.float64_t *mass int n_pid np.int64_t *pid int n_species np.int8_t *species # Pointers to the pointers to primary and secondary vars np.float64_t *pvars[16] np.float64_t *svars[16] cdef class ARTIOOctreeContainer(SparseOctreeContainer): # This is a transitory, created-on-demand OctreeContainer. It should not # be considered to be long-lasting, and during its creation it will read # the index file. This means that when created it will then be able to # provide coordinates, but then on subsequent IO accesses it will pass over # the file again, despite knowing the indexing system already. Because of # this, we will avoid creating it as long as possible. cdef public artio_fileset artio_handle cdef ARTIOSFCRangeHandler range_handler cdef np.int64_t level_indices[32] cdef np.int64_t sfc_start cdef np.int64_t sfc_end def __init__(self, ARTIOSFCRangeHandler range_handler): self.range_handler = range_handler self.sfc_start = range_handler.sfc_start self.sfc_end = range_handler.sfc_end self.artio_handle = range_handler.artio_handle # Note the final argument is partial_coverage, which indicates whether # or not an Oct can be partially refined. dims, DLE, DRE = [], [], [] for i in range(32): self.level_indices[i] = 0 for i in range(3): # range_handler has dims in cells, which is the same as the number # of possible octs. This is because we have a forest of octrees. dims.append(range_handler.dims[i]) DLE.append(range_handler.DLE[i]) DRE.append(range_handler.DRE[i]) super(ARTIOOctreeContainer, self).__init__(dims, DLE, DRE) self.artio_handle = range_handler.artio_handle self.level_offset = 1 self.domains = OctObjectPool() self.root_nodes = NULL @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void initialize_local_mesh(self, np.int64_t oct_count, int num_oct_levels, int *num_octs_per_level, np.int64_t sfc): # We actually will not be initializing the root mesh here, we will be # initializing the entire mesh between sfc_start and sfc_end. cdef np.int64_t oct_ind, ipos, nadded cdef int i, status, level cdef artio_fileset_handle *handle = self.artio_handle.handle cdef double dpos[3] cdef np.ndarray[np.float64_t, ndim=2] pos # NOTE: We do not cache any SFC ranges here, as we should only ever be # called from within a pre-cached operation in the SFC handler. # We only allow one root oct. self.append_domain(oct_count) self.domains.containers[self.num_domains - 1].con_id = sfc oct_ind = -1 ipos = 0 for level in range(num_oct_levels): oct_ind = imax(oct_ind, num_octs_per_level[level]) self.level_indices[level] = ipos ipos += num_octs_per_level[level] pos = np.empty((oct_ind, 3), dtype="float64") # Now we initialize # Note that we also assume we have already started reading the level. 
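        # Reading pattern for the loop below: for each refinement level we
        # call artio_grid_read_level_begin(), fetch every oct's position via
        # artio_grid_read_oct() with NULL variable/refined buffers (positions
        # only), then hand the position block to self.add() so the sparse
        # octree gains num_octs_per_level[level] octs at that level.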
ipos = 0 for level in range(num_oct_levels): status = artio_grid_read_level_begin(handle, level + 1) check_artio_status(status) for oct_ind in range(num_octs_per_level[level]): status = artio_grid_read_oct(handle, dpos, NULL, NULL) for i in range(3): pos[oct_ind, i] = dpos[i] check_artio_status(status) status = artio_grid_read_level_end(handle) check_artio_status(status) nadded = self.add(self.num_domains, level, pos[:num_octs_per_level[level],:]) if nadded != num_octs_per_level[level]: raise RuntimeError @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def fill_sfc(self, np.ndarray[np.uint8_t, ndim=1] levels, np.ndarray[np.uint8_t, ndim=1] cell_inds, np.ndarray[np.int64_t, ndim=1] file_inds, np.ndarray[np.int64_t, ndim=1] domain_counts, field_indices, dest_fields): cdef np.ndarray[np.float32_t, ndim=2] source cdef np.ndarray[np.float64_t, ndim=1] dest cdef int status, i, num_oct_levels, nf, ngv, max_level cdef int level, j, oct_ind, si cdef np.int64_t sfc, ipos cdef artio_fileset_handle *handle = self.artio_handle.handle cdef double dpos[3] # We duplicate some of the grid_variables stuff here so that we can # potentially release the GIL nf = len(field_indices) ngv = self.artio_handle.num_grid_variables max_level = self.artio_handle.max_level cdef int *num_octs_per_level = malloc( (max_level + 1)*sizeof(int)) cdef float *grid_variables = malloc( 8 * ngv * sizeof(float)) cdef int* field_ind = malloc( nf * sizeof(int)) cdef np.float32_t **field_vals = malloc( nf * sizeof(np.float32_t*)) source_arrays = [] ipos = -1 for i in range(self.num_domains): ipos = imax(ipos, self.domains.containers[i].n) for i in range(nf): field_ind[i] = field_indices[i] # Note that we subtract one, because we're not using the root mesh. source = np.zeros((ipos, 8), dtype="float32") source_arrays.append(source) field_vals[i] = source.data cdef np.int64_t level_position[32] cdef np.int64_t lp # First we need to walk the mesh in the file. Then we fill in the dest # location based on the file index. # A few ways this could be improved: # * Create a new visitor function that actually queried the data, # rather than our somewhat hokey double-loop over SFC arrays. # * Go to pointers for the dest arrays. # * Enable preloading during mesh initialization # * Calculate domain indices on the fly rather than with a # double-loop to calculate domain_counts # The cons should be in order cdef np.int64_t sfc_start, sfc_end sfc_start = self.domains.containers[0].con_id sfc_end = self.domains.containers[self.num_domains - 1].con_id status = artio_grid_cache_sfc_range(handle, sfc_start, sfc_end) check_artio_status(status) cdef np.int64_t offset = 0 for si in range(self.num_domains): sfc = self.domains.containers[si].con_id status = artio_grid_read_root_cell_begin( handle, sfc, dpos, NULL, &num_oct_levels, num_octs_per_level) check_artio_status(status) lp = 0 for level in range(num_oct_levels): status = artio_grid_read_level_begin(handle, level + 1) check_artio_status(status) level_position[level] = lp for oct_ind in range(num_octs_per_level[level]): status = artio_grid_read_oct(handle, dpos, grid_variables, NULL) check_artio_status(status) for j in range(8): for i in range(nf): field_vals[i][(oct_ind + lp)*8+j] = \ grid_variables[ngv*j+field_ind[i]] status = artio_grid_read_level_end(handle) check_artio_status(status) lp += num_octs_per_level[level] status = artio_grid_read_root_cell_end(handle) check_artio_status(status) # Now we have all our sources. 
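        # Scatter step: for each requested field we walk the
        # domain_counts[si] selected cells; levels[] picks the per-level
        # offset recorded in level_position, file_inds[] the oct within that
        # level, and cell_inds[] one of its 8 children -- effectively
        #   dest[n] = source[level_position[levels[n]] + file_inds[n],
        #                    cell_inds[n]]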
for j in range(nf): dest = dest_fields[j] source = source_arrays[j] for i in range(domain_counts[si]): level = levels[i + offset] oct_ind = file_inds[i + offset] + level_position[level] dest[i + offset] = source[oct_ind, cell_inds[i + offset]] # Now, we offset by the actual number filled here. offset += domain_counts[si] status = artio_grid_clear_sfc_cache(handle) check_artio_status(status) free(field_ind) free(field_vals) free(grid_variables) free(num_octs_per_level) def fill_sfc_particles(self, fields): # This handles not getting particles for refined sfc values. rv = read_sfc_particles(self.artio_handle, self.sfc_start, self.sfc_end, 0, fields, self.range_handler.doct_count, self.range_handler.pcount) return rv @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef read_sfc_particles(artio_fileset artio_handle, np.int64_t sfc_start, np.int64_t sfc_end, int read_unrefined, fields, np.int64_t *doct_count, np.int64_t **pcount): cdef int status, ispec, subspecies cdef np.int64_t sfc, particle, pid, ind, vind cdef int num_species = artio_handle.num_species cdef artio_fileset_handle *handle = artio_handle.handle cdef int *num_particles_per_species = malloc( sizeof(int)*num_species) cdef int *accessed_species = malloc( sizeof(int)*num_species) cdef int *total_particles = malloc( sizeof(int)*num_species) cdef particle_var_pointers *vpoints = malloc( sizeof(particle_var_pointers)*num_species) cdef double *primary_variables cdef float *secondary_variables cdef np.int64_t tp cdef int max_level = artio_handle.max_level cdef int *num_octs_per_level = malloc( (max_level + 1)*sizeof(int)) cdef np.ndarray[np.int8_t, ndim=1] npi8arr cdef np.ndarray[np.int64_t, ndim=1] npi64arr cdef np.ndarray[np.float64_t, ndim=1] npf64arr if not artio_handle.has_particles: raise RuntimeError("Attempted to read non-existent particles in ARTIO") # Now we set up our field pointers params = artio_handle.parameters npri_vars = params["num_primary_variables"] nsec_vars = params["num_secondary_variables"] primary_variables = malloc(sizeof(double) * max(npri_vars)) secondary_variables = malloc(sizeof(float) * max(nsec_vars)) cdef particle_var_pointers *vp for ispec in range(num_species): total_particles[ispec] = 0 accessed_species[ispec] = 0 # Initialize our vpoints array vpoints[ispec].count = 0 vpoints[ispec].n_mass = 0 vpoints[ispec].n_pid = 0 vpoints[ispec].n_species = 0 vpoints[ispec].n_p = 0 vpoints[ispec].n_s = 0 # Pass through once. We want every single particle. 
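    # Two-pass strategy: this first loop only *counts* particles per species
    # over the SFC range -- read_unrefined == 1 keeps root cells without
    # octs (c == 0), read_unrefined == 0 keeps cells with octs -- so the
    # output arrays can be allocated once at their final size, and the
    # second pass over the same range simply fills them in place through the
    # vpoints pointers.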
tp = 0 cdef np.int64_t c for sfc in range(sfc_start, sfc_end + 1): c = doct_count[sfc - sfc_start] if read_unrefined == 1 and c > 0: continue if read_unrefined == 0 and c == 0: continue for ispec in range(num_species): total_particles[ispec] += pcount[ispec][sfc - sfc_start] # Now we allocate our final fields, which will be filled #for ispec in range(num_species): # print "In SFC %s to %s reading %s of species %s" % ( # sfc_start, sfc_end + 1, total_particles[ispec], ispec) data = {} for species, field in sorted(fields): accessed_species[species] = 1 pri_vars = params.get( "species_%02u_primary_variable_labels" % (species,), []) sec_vars = params.get( "species_%02u_secondary_variable_labels" % (species,), []) tp = total_particles[species] vp = &vpoints[species] if field == "PID": vp.n_pid = 1 data[species, field] = np.zeros(tp, dtype="int64") npi64arr = data[species, field] vp.pid = npi64arr.data elif field == "SPECIES": vp.n_species = 1 data[species, field] = np.zeros(tp, dtype="int8") npi8arr = data[species, field] # We fill this *now* npi8arr += species vp.species = npi8arr.data elif npri_vars[species] > 0 and field in pri_vars : data[species, field] = np.zeros(tp, dtype="float64") npf64arr = data[species, field] vp.p_ind[vp.n_p] = pri_vars.index(field) vp.pvars[vp.n_p] = npf64arr.data vp.n_p += 1 elif nsec_vars[species] > 0 and field in sec_vars : data[species, field] = np.zeros(tp, dtype="float64") npf64arr = data[species, field] vp.s_ind[vp.n_s] = sec_vars.index(field) vp.svars[vp.n_s] = npf64arr.data vp.n_s += 1 elif field == "MASS": vp.n_mass = 1 data[species, field] = np.zeros(tp, dtype="float64") npf64arr = data[species, field] # We fill this *now* npf64arr += params["particle_species_mass"][species] vp.mass = npf64arr.data status = artio_particle_cache_sfc_range( handle, sfc_start, sfc_end ) check_artio_status(status) for sfc in range(sfc_start, sfc_end + 1): c = doct_count[sfc - sfc_start] check_artio_status(status) if read_unrefined == 1 and c > 0: continue if read_unrefined == 0 and c == 0: continue c = 0 for ispec in range(num_species) : if accessed_species[ispec] == 0: continue c += pcount[ispec][sfc - sfc_start] if c == 0: continue status = artio_particle_read_root_cell_begin( handle, sfc, num_particles_per_species ) check_artio_status(status) for ispec in range(num_species) : if accessed_species[ispec] == 0: continue status = artio_particle_read_species_begin(handle, ispec) check_artio_status(status) vp = &vpoints[ispec] for particle in range(num_particles_per_species[ispec]) : status = artio_particle_read_particle(handle, &pid, &subspecies, primary_variables, secondary_variables) check_artio_status(status) ind = vp.count for i in range(vp.n_p): vind = vp.p_ind[i] vp.pvars[i][ind] = primary_variables[vind] for i in range(vp.n_s): vind = vp.s_ind[i] vp.svars[i][ind] = secondary_variables[vind] if vp.n_pid: vp.pid[ind] = pid vp.count += 1 status = artio_particle_read_species_end( handle ) check_artio_status(status) status = artio_particle_read_root_cell_end( handle ) check_artio_status(status) status = artio_particle_clear_sfc_cache(handle) check_artio_status(status) free(num_octs_per_level) free(num_particles_per_species) free(total_particles) free(accessed_species) free(vpoints) free(primary_variables) free(secondary_variables) return data cdef class ARTIORootMeshContainer: cdef public artio_fileset artio_handle cdef np.float64_t DLE[3] cdef np.float64_t DRE[3] cdef np.float64_t dds[3] cdef np.int64_t dims[3] cdef artio_fileset_handle *handle cdef np.uint64_t sfc_start 
cdef np.uint64_t sfc_end cdef public object _last_mask cdef public np.int64_t _last_selector_id cdef np.int64_t _last_mask_sum cdef ARTIOSFCRangeHandler range_handler cdef np.uint8_t *sfc_mask cdef np.int64_t nsfc def __init__(self, ARTIOSFCRangeHandler range_handler): cdef int i cdef np.int64_t sfci for i in range(3): self.DLE[i] = range_handler.DLE[i] self.DRE[i] = range_handler.DRE[i] self.dims[i] = range_handler.dims[i] self.dds[i] = range_handler.dds[i] self.handle = range_handler.handle self.artio_handle = range_handler.artio_handle self._last_mask = None self._last_selector_id = -1 self.sfc_start = range_handler.sfc_start self.sfc_end = range_handler.sfc_end self.range_handler = range_handler # We assume that the number of octs has been created and filled # already. We no longer care about ANY of the SFCs that have octs # inside them -- this goes for every operation that this object # performs. self.sfc_mask = malloc(sizeof(np.uint8_t) * self.sfc_end - self.sfc_start + 1) self.nsfc = 0 for sfci in range(self.sfc_end - self.sfc_start + 1): if self.range_handler.oct_count[sfci] > 0: self.sfc_mask[sfci] = 0 else: self.sfc_mask[sfci] = 1 self.nsfc += 1 def __dealloc__(self): free(self.sfc_mask) @cython.cdivision(True) cdef np.int64_t pos_to_sfc(self, np.float64_t pos[3]) noexcept nogil: # Calculate the index cdef int coords[3] cdef int i cdef np.int64_t sfc for i in range(3): coords[i] = ((pos[i] - self.DLE[i])/self.dds[i]) sfc = artio_sfc_index(self.handle, coords) return sfc @cython.cdivision(True) cdef void sfc_to_pos(self, np.int64_t sfc, np.float64_t pos[3]) noexcept nogil: cdef int coords[3] cdef int i artio_sfc_coords(self.handle, sfc, coords) for i in range(3): pos[i] = self.DLE[i] + (coords[i] + 0.5) * self.dds[i] cdef np.int64_t count_cells(self, SelectorObject selector): # We visit each cell if it is not refined and determine whether it is # included or not. return self.mask(selector).sum() @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def icoords(self, SelectorObject selector, np.int64_t num_cells = -1, int domain_id = -1): # Note that num_octs does not have to equal sfc_end - sfc_start + 1. cdef np.int64_t sfc, sfci = -1 cdef int acoords[3] cdef int i cdef np.ndarray[np.uint8_t, ndim=1, cast=True] mask mask = self.mask(selector) num_cells = self._last_mask_sum cdef np.ndarray[np.int64_t, ndim=2] coords coords = np.empty((num_cells, 3), dtype="int64") cdef int filled = 0 for sfc in range(self.sfc_start, self.sfc_end + 1): if self.sfc_mask[sfc - self.sfc_start] == 0: continue sfci += 1 if mask[sfci] == 0: continue # Note that we do *no* checks on refinement here. In fact, this # entire setup should not need to touch the disk except if the # artio sfc calculators need to. artio_sfc_coords(self.handle, sfc, acoords) for i in range(3): coords[filled, i] = acoords[i] filled += 1 return coords @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def fcoords(self, SelectorObject selector, np.int64_t num_cells = -1, int domain_id = -1): # Note that num_cells does not have to equal sfc_end - sfc_start + 1. 
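        # Same walk as icoords() above: skip SFCs masked out by sfc_mask
        # (root cells that contain octs belong to the octree container, not
        # this root mesh) or deselected by the selector mask, and convert
        # each surviving SFC index to a cell-center position via
        # sfc_to_pos().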
cdef np.int64_t sfc, sfci = -1 cdef np.float64_t pos[3] cdef int i cdef np.ndarray[np.uint8_t, ndim=1, cast=True] mask mask = self.mask(selector) num_cells = self._last_mask_sum cdef np.ndarray[np.float64_t, ndim=2] coords coords = np.empty((num_cells, 3), dtype="float64") cdef int filled = 0 for sfc in range(self.sfc_start, self.sfc_end + 1): if self.sfc_mask[sfc - self.sfc_start] == 0: continue sfci += 1 if mask[sfci] == 0: continue # Note that we do *no* checks on refinement here. In fact, this # entire setup should not need to touch the disk except if the # artio sfc calculators need to. self.sfc_to_pos(sfc, pos) for i in range(3): coords[filled, i] = pos[i] filled += 1 return coords @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def fwidth(self, SelectorObject selector, np.int64_t num_cells = -1, int domain_id = -1): cdef int i num_cells = self._last_mask_sum cdef np.ndarray[np.float64_t, ndim=2] width width = np.zeros((num_cells, 3), dtype="float64") for i in range(3): width[:,i] = self.dds[i] return width @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def ires(self, SelectorObject selector, np.int64_t num_cells = -1, int domain_id = -1): # Note: self.mask has a side effect of setting self._last_mask_sum self.mask(selector) num_cells = self._last_mask_sum cdef np.ndarray[np.int64_t, ndim=1] res res = np.zeros(num_cells, dtype="int64") return res @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def selector_fill(self, SelectorObject selector, np.ndarray source, np.ndarray dest = None, np.int64_t offset = 0, int dims = 1, int domain_id = -1): # This is where we use the selector to transplant from one to the # other. Note that we *do* apply the selector here. cdef np.int64_t num_cells = -1 cdef np.int64_t ind cdef np.int64_t sfc, sfci = -1 cdef int filled = 0 cdef char *sdata = source.data cdef char *ddata cdef int ss = source.dtype.itemsize cdef np.ndarray[np.uint8_t, ndim=1, cast=True] mask mask = self.mask(selector) if dest is None: # Note that RAMSES can have partial refinement inside an Oct. This # means we actually do want the number of Octs, not the number of # cells. 
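            # selector_fill copies one source record per selected root cell
            # into dest with memcpy, advancing by itemsize * dims each time;
            # if it allocated dest itself (num_cells >= 0) it returns the
            # array, otherwise it returns the count of entries filled.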
num_cells = self._last_mask_sum if dims > 1: dest = np.zeros((num_cells, dims), dtype=source.dtype, order='C') else: dest = np.zeros(num_cells, dtype=source.dtype, order='C') ddata = (dest.data) + offset*ss*dims ind = 0 for sfc in range(self.sfc_start, self.sfc_end + 1): if self.sfc_mask[sfc - self.sfc_start] == 0: continue sfci += 1 if mask[sfci] == 0: continue memcpy(ddata, sdata + ind, dims * ss) ddata += dims * ss filled += 1 ind += ss * dims if num_cells >= 0: return dest return filled @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def mask(self, SelectorObject selector, np.int64_t num_cells = -1, int domain_id = -1): # We take a domain_id here to avoid subclassing cdef np.float64_t pos[3] cdef np.int64_t sfc, sfci = -1 if self._last_selector_id == hash(selector): return self._last_mask cdef np.ndarray[np.uint8_t, ndim=1] mask mask = np.zeros((self.nsfc), dtype="uint8") self._last_mask_sum = 0 for sfc in range(self.sfc_start, self.sfc_end + 1): if self.sfc_mask[sfc - self.sfc_start] == 0: continue sfci += 1 self.sfc_to_pos(sfc, pos) if selector.select_cell(pos, self.dds) == 0: continue mask[sfci] = 1 self._last_mask_sum += 1 self._last_mask = mask.astype("bool") self._last_selector_id = hash(selector) return self._last_mask def fill_sfc_particles(self, fields): rv = read_sfc_particles(self.artio_handle, self.sfc_start, self.sfc_end, 1, fields, self.range_handler.doct_count, self.range_handler.pcount) return rv @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def fill_sfc(self, SelectorObject selector, field_indices): cdef np.ndarray[np.float64_t, ndim=1] dest cdef int status, i, num_oct_levels, nf, ngv, max_level cdef np.int64_t sfc, num_cells, sfci = -1 cdef double dpos[3] max_level = self.artio_handle.max_level cdef int *num_octs_per_level = malloc( (max_level + 1)*sizeof(int)) # We duplicate some of the grid_variables stuff here so that we can # potentially release the GIL nf = len(field_indices) ngv = self.artio_handle.num_grid_variables cdef float *grid_variables = malloc( ngv * sizeof(float)) cdef np.ndarray[np.uint8_t, ndim=1, cast=True] mask mask = self.mask(selector, -1) num_cells = self._last_mask_sum tr = [] for i in range(nf): tr.append(np.zeros(num_cells, dtype="float64")) cdef int* field_ind = malloc( nf * sizeof(int)) cdef np.float64_t **field_vals = malloc( nf * sizeof(np.float64_t*)) for i in range(nf): field_ind[i] = field_indices[i] # This zeros should be an empty once we handle the root grid dest = tr[i] field_vals[i] = dest.data # First we need to walk the mesh in the file. Then we fill in the dest # location based on the file index. 
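        # Two paths below: if construct_mesh() cached the root variables
        # (cache_root_mesh == 1, so range_handler.root_mesh_data != NULL),
        # values are copied straight from memory; otherwise the file is
        # re-walked and just the root-cell variables are read for each
        # selected SFC.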
cdef int filled = 0 cdef float **mesh_data = self.range_handler.root_mesh_data if mesh_data == NULL: status = artio_grid_cache_sfc_range(self.handle, self.sfc_start, self.sfc_end) check_artio_status(status) for sfc in range(self.sfc_start, self.sfc_end + 1): if self.sfc_mask[sfc - self.sfc_start] == 0: continue sfci += 1 if mask[sfci] == 0: continue status = artio_grid_read_root_cell_begin( self.handle, sfc, dpos, grid_variables, &num_oct_levels, num_octs_per_level) check_artio_status(status) for i in range(nf): field_vals[i][filled] = grid_variables[field_ind[i]] filled += 1 status = artio_grid_read_root_cell_end(self.handle) check_artio_status(status) status = artio_grid_clear_sfc_cache(self.handle) check_artio_status(status) else: for sfc in range(self.sfc_start, self.sfc_end + 1): if self.sfc_mask[sfc - self.sfc_start] == 0: continue sfci += 1 if mask[sfci] == 0: continue for i in range(nf): field_vals[i][filled] = mesh_data[field_ind[i]][ sfc - self.sfc_start] filled += 1 # Now we have all our sources. free(field_ind) free(field_vals) free(grid_variables) free(num_octs_per_level) return tr @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def deposit(self, ParticleDepositOperation pdeposit, SelectorObject selector, np.ndarray[np.float64_t, ndim=2] positions, fields): # This implements the necessary calls to enable particle deposition to # occur as needed. cdef int nf, i, j cdef np.int64_t sfc, sfci if fields is None: fields = [] nf = len(fields) cdef np.float64_t[::cython.view.indirect, ::1] field_pointers if nf > 0: field_pointers = OnceIndirect(fields) cdef np.float64_t[:] field_vals = np.empty(nf, dtype="float64") cdef np.ndarray[np.uint8_t, ndim=1, cast=True] mask mask = self.mask(selector, -1) cdef np.ndarray[np.int64_t, ndim=1] domain_ind domain_ind = np.zeros(self.sfc_end - self.sfc_start + 1, dtype="int64") - 1 j = 0 sfci = -1 for sfc in range(self.sfc_start, self.sfc_end + 1): if self.sfc_mask[sfc - self.sfc_start] == 0: continue sfci += 1 if mask[sfci] == 0: continue domain_ind[sfc - self.sfc_start] = j j += 1 cdef np.float64_t pos[3] cdef np.float64_t left_edge[3] cdef int coords[3] cdef int dims[3] dims[0] = dims[1] = dims[2] = 1 cdef np.int64_t offset for i in range(positions.shape[0]): for j in range(nf): field_vals[j] = field_pointers[j][i] for j in range(3): pos[j] = positions[i, j] coords[j] = ((pos[j] - self.DLE[j])/self.dds[j]) sfc = artio_sfc_index(self.artio_handle.handle, coords) if sfc < self.sfc_start or sfc > self.sfc_end: continue offset = domain_ind[sfc - self.sfc_start] if offset < 0: continue # Check that we found the oct ... 
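            # Each particle is binned to its root cell: position -> integer
            # coords -> SFC index -> domain_ind slot (the running count of
            # selected cells); the cell's left edge is rebuilt below so
            # pdeposit.process() can treat the root cell as a 1x1x1 grid
            # (dims was set to [1, 1, 1] above).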
for j in range(3): left_edge[j] = coords[j] * self.dds[j] + self.DLE[j] pdeposit.process(dims, i, left_edge, self.dds, offset, pos, field_vals, sfc) if pdeposit.update_values == 1: for j in range(nf): field_pointers[j][i] = field_vals[j] cdef class SFCRangeSelector(SelectorObject): cdef SelectorObject base_selector cdef ARTIOSFCRangeHandler range_handler cdef ARTIORootMeshContainer mesh_container cdef np.int64_t sfc_start, sfc_end def __init__(self, dobj): self.base_selector = dobj.base_selector self.mesh_container = dobj.oct_handler self.range_handler = self.mesh_container.range_handler self.sfc_start = self.mesh_container.sfc_start self.sfc_end = self.mesh_container.sfc_end @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def select_grids(self, np.ndarray[np.float64_t, ndim=2] left_edges, np.ndarray[np.float64_t, ndim=2] right_edges, np.ndarray[np.int32_t, ndim=2] levels): raise RuntimeError @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil: return 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil: return self.select_point(pos) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_point(self, np.float64_t pos[3]) noexcept nogil: cdef np.int64_t sfc = self.mesh_container.pos_to_sfc(pos) if sfc > self.sfc_end: return 0 cdef np.int64_t oc = self.range_handler.doct_count[ sfc - self.sfc_start] if oc > 0: return 0 return 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_bbox(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: return self.base_selector.select_bbox(left_edge, right_edge) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_grid(self, np.float64_t left_edge[3], np.float64_t right_edge[3], np.int32_t level, Oct *o = NULL) noexcept nogil: # Because visitors now use select_grid, we should be explicitly # checking this. return self.base_selector.select_grid(left_edge, right_edge, level, o) def _hash_vals(self): return (hash(self.base_selector), self.sfc_start, self.sfc_end) sfc_subset_selector = AlwaysSelector #sfc_subset_selector = SFCRangeSelector ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/artio/api.py0000644000175100001770000000020014714401662016454 0ustar00runnerdockerfrom . import tests from .data_structures import ARTIODataset from .fields import ARTIOFieldInfo from .io import IOHandlerARTIO ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.299152 yt-4.4.0/yt/frontends/artio/artio_headers/0000755000175100001770000000000014714401715020151 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/artio/artio_headers/LICENSE0000644000175100001770000012434014714401662021163 0ustar00runnerdockerARTIO is licensed under the GNU Lesser General Public License (LGPL) version 3, which is an extension of the GNU General Public License (GPL). The text of both licenses are included here. =============================================================================== GNU LESSER GENERAL PUBLIC LICENSE Version 3, 29 June 2007 Copyright (C) 2007 Free Software Foundation, Inc. 
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. This version of the GNU Lesser General Public License incorporates the terms and conditions of version 3 of the GNU General Public License, supplemented by the additional permissions listed below. 0. Additional Definitions. As used herein, "this License" refers to version 3 of the GNU Lesser General Public License, and the "GNU GPL" refers to version 3 of the GNU General Public License. "The Library" refers to a covered work governed by this License, other than an Application or a Combined Work as defined below. An "Application" is any work that makes use of an interface provided by the Library, but which is not otherwise based on the Library. Defining a subclass of a class defined by the Library is deemed a mode of using an interface provided by the Library. A "Combined Work" is a work produced by combining or linking an Application with the Library. The particular version of the Library with which the Combined Work was made is also called the "Linked Version". The "Minimal Corresponding Source" for a Combined Work means the Corresponding Source for the Combined Work, excluding any source code for portions of the Combined Work that, considered in isolation, are based on the Application, and not on the Linked Version. The "Corresponding Application Code" for a Combined Work means the object code and/or source code for the Application, including any data and utility programs needed for reproducing the Combined Work from the Application, but excluding the System Libraries of the Combined Work. 1. Exception to Section 3 of the GNU GPL. You may convey a covered work under sections 3 and 4 of this License without being bound by section 3 of the GNU GPL. 2. Conveying Modified Versions. If you modify a copy of the Library, and, in your modifications, a facility refers to a function or data to be supplied by an Application that uses the facility (other than as an argument passed when the facility is invoked), then you may convey a copy of the modified version: a) under this License, provided that you make a good faith effort to ensure that, in the event an Application does not supply the function or data, the facility still operates, and performs whatever part of its purpose remains meaningful, or b) under the GNU GPL, with none of the additional permissions of this License applicable to that copy. 3. Object Code Incorporating Material from Library Header Files. The object code form of an Application may incorporate material from a header file that is part of the Library. You may convey such object code under terms of your choice, provided that, if the incorporated material is not limited to numerical parameters, data structure layouts and accessors, or small macros, inline functions and templates (ten or fewer lines in length), you do both of the following: a) Give prominent notice with each copy of the object code that the Library is used in it and that the Library and its use are covered by this License. b) Accompany the object code with a copy of the GNU GPL and this license document. 4. Combined Works. 
You may convey a Combined Work under terms of your choice that, taken together, effectively do not restrict modification of the portions of the Library contained in the Combined Work and reverse engineering for debugging such modifications, if you also do each of the following: a) Give prominent notice with each copy of the Combined Work that the Library is used in it and that the Library and its use are covered by this License. b) Accompany the Combined Work with a copy of the GNU GPL and this license document. c) For a Combined Work that displays copyright notices during execution, include the copyright notice for the Library among these notices, as well as a reference directing the user to the copies of the GNU GPL and this license document. d) Do one of the following: 0) Convey the Minimal Corresponding Source under the terms of this License, and the Corresponding Application Code in a form suitable for, and under terms that permit, the user to recombine or relink the Application with a modified version of the Linked Version to produce a modified Combined Work, in the manner specified by section 6 of the GNU GPL for conveying Corresponding Source. 1) Use a suitable shared library mechanism for linking with the Library. A suitable mechanism is one that (a) uses at run time a copy of the Library already present on the user's computer system, and (b) will operate properly with a modified version of the Library that is interface-compatible with the Linked Version. e) Provide Installation Information, but only if you would otherwise be required to provide such information under section 6 of the GNU GPL, and only to the extent that such information is necessary to install and execute a modified version of the Combined Work produced by recombining or relinking the Application with a modified version of the Linked Version. (If you use option 4d0, the Installation Information must accompany the Minimal Corresponding Source and Corresponding Application Code. If you use option 4d1, you must provide the Installation Information in the manner specified by section 6 of the GNU GPL for conveying Corresponding Source.) 5. Combined Libraries. You may place library facilities that are a work based on the Library side by side in a single library together with other library facilities that are not Applications and are not covered by this License, and convey such a combined library under terms of your choice, if you do both of the following: a) Accompany the combined library with a copy of the same work based on the Library, uncombined with any other library facilities, conveyed under the terms of this License. b) Give prominent notice with the combined library that part of it is a work based on the Library, and explaining where to find the accompanying uncombined form of the same work. 6. Revised Versions of the GNU Lesser General Public License. The Free Software Foundation may publish revised and/or new versions of the GNU Lesser General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Library as you received it specifies that a certain numbered version of the GNU Lesser General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that published version or of any later version published by the Free Software Foundation. 
If the Library as you received it does not specify a version number of the GNU Lesser General Public License, you may choose any version of the GNU Lesser General Public License ever published by the Free Software Foundation. If the Library as you received it specifies that a proxy can decide whether future versions of the GNU Lesser General Public License shall apply, that proxy's public statement of acceptance of any version is permanent authorization for you to choose that version for the Library. =============================================================================== GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 Copyright (C) 2007 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. 
If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. 
A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. 
You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. 
c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). 
The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. 
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. 
For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. 
You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. 
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

16. Limitation of Liability.

IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

17. Interpretation of Sections 15 and 16.

If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.

END OF TERMS AND CONDITIONS

How to Apply These Terms to Your New Programs

If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.

To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) <year>  <name of author>

    This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

    This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

    You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode:

    <program>  Copyright (C) <year>  <name of author>
    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
    This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box".

You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read <http://www.gnu.org/philosophy/why-not-lgpl.html>.

===============================================================================
yt-4.4.0/yt/frontends/artio/artio_headers/artio.c
===============================================================================

/**********************************************************************
 * Copyright (c) 2012-2013, Douglas H. Rudd
 * All rights reserved.
 *
 * This file is part of the artio library.
 *
 * artio is free software: you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as
 * published by the Free Software Foundation, either version 3 of the
 * License, or (at your option) any later version.
 *
 * artio is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Lesser General Public License for more details.
 *
 * Copies of the GNU Lesser General Public License and the GNU General
 * Public License are available in the file LICENSE, included with this
 * distribution. If you failed to receive a copy of this file, see
 * <http://www.gnu.org/licenses/>
 **********************************************************************/

#include "artio.h"
#include "artio_internal.h"

/* stdio/stdlib/string are required by the calls below; math.h is assumed */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#ifdef _WIN32
typedef __int64 int64_t;
typedef __int32 int32_t;
#else
#include <stdint.h>
#endif

artio_fileset *artio_fileset_allocate( char *file_prefix, int mode,
        const artio_context *context );
void artio_fileset_destroy( artio_fileset *handle );

int artio_fh_buffer_size = ARTIO_DEFAULT_BUFFER_SIZE;

int artio_fileset_set_buffer_size( int buffer_size ) {
    if ( buffer_size < 0 ) {
        return ARTIO_ERR_INVALID_BUFFER_SIZE;
    }
    artio_fh_buffer_size = buffer_size;
    return ARTIO_SUCCESS;
}
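/* Illustrative usage sketch (not part of the original artio source):
 * opening a fileset read-only with its grid data attached, assuming a
 * hypothetical file prefix "snapshot" and the default global context:
 *
 *     artio_fileset *handle =
 *         artio_fileset_open("snapshot", ARTIO_OPEN_GRID, artio_context_global);
 *     if ( handle == NULL ) {
 *         // open failed: missing files, bad header, or version mismatch
 *     }
 */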
artio_fileset *artio_fileset_open(char * file_prefix, int type, const artio_context *context) {
    artio_fh *head_fh;
    char filename[256];
    int ret;
    int64_t tmp;
    int artio_major, artio_minor;

    artio_fileset *handle =
        artio_fileset_allocate( file_prefix, ARTIO_FILESET_READ, context );
    if ( handle == NULL ) {
        return NULL;
    }

    /* open header file */
    sprintf(filename, "%s.art", handle->file_prefix);
    head_fh = artio_file_fopen(filename,
            ARTIO_MODE_READ | ARTIO_MODE_ACCESS, context);
    if ( head_fh == NULL ) {
        artio_fileset_destroy(handle);
        return NULL;
    }

    ret = artio_parameter_read(head_fh, handle->parameters );
    if ( ret != ARTIO_SUCCESS ) {
        artio_fileset_destroy(handle);
        return NULL;
    }
    artio_file_fclose(head_fh);

    /* check versions */
    if ( artio_parameter_get_int(handle, "ARTIO_MAJOR_VERSION", &artio_major )
            == ARTIO_ERR_PARAM_NOT_FOUND ) {
        /* version pre 1.0 */
        artio_major = 0;
        artio_minor = 9;
    } else {
        artio_parameter_get_int(handle, "ARTIO_MINOR_VERSION", &artio_minor );
    }

    if ( artio_major > ARTIO_MAJOR_VERSION ) {
        fprintf(stderr,"ERROR: artio file version newer than library (%u.%u vs %u.%u).\n",
            artio_major, artio_minor,
            ARTIO_MAJOR_VERSION, ARTIO_MINOR_VERSION );
        artio_fileset_destroy(handle);
        return NULL;
    }

    artio_parameter_get_long(handle, "num_root_cells", &handle->num_root_cells);

    if ( artio_parameter_get_int(handle, "sfc_type", &handle->sfc_type ) != ARTIO_SUCCESS ) {
        handle->sfc_type = ARTIO_SFC_HILBERT;
    }

    handle->nBitsPerDim = 0;
    tmp = handle->num_root_cells >> 3;
    while ( tmp ) {
        handle->nBitsPerDim++;
        tmp >>= 3;
    }
    /* num_grid is the root grid dimension, 2^nBitsPerDim */
    handle->num_grid = 1 << handle->nBitsPerDim;

    /* default to accessing all sfc indices */
    handle->proc_sfc_begin = 0;
    handle->proc_sfc_end = handle->num_root_cells-1;

    /* open data files */
    if (type & ARTIO_OPEN_PARTICLES) {
        ret = artio_fileset_open_particles(handle);
        if ( ret != ARTIO_SUCCESS ) {
            artio_fileset_destroy(handle);
            return NULL;
        }
    }

    if (type & ARTIO_OPEN_GRID) {
        ret = artio_fileset_open_grid(handle);
        if ( ret != ARTIO_SUCCESS ) {
            artio_fileset_destroy(handle);
            return NULL;
        }
    }

    return handle;
}

artio_fileset *artio_fileset_create(char * file_prefix,
        int64_t root_cells, int64_t proc_sfc_begin, int64_t proc_sfc_end,
        const artio_context *context) {

    artio_fileset *handle =
        artio_fileset_allocate( file_prefix, ARTIO_FILESET_WRITE, context );
    if ( handle == NULL ) {
        return NULL;
    }

    handle->proc_sfc_index =
        (int64_t*)malloc((handle->num_procs+1)*sizeof(int64_t));
    if ( handle->proc_sfc_index == NULL ) {
        artio_fileset_destroy(handle);
        return NULL;
    }

#ifdef ARTIO_MPI
    MPI_Allgather( &proc_sfc_begin, 1, MPI_LONG_LONG,
            handle->proc_sfc_index, 1, MPI_LONG_LONG,
            handle->context->comm );
#else
    handle->proc_sfc_index[0] = 0;
#endif /* ARTIO_MPI */
    handle->proc_sfc_index[handle->num_procs] = root_cells;

    handle->proc_sfc_begin = proc_sfc_begin;
    handle->proc_sfc_end = proc_sfc_end;
    handle->num_root_cells = root_cells;

    artio_parameter_set_long(handle, "num_root_cells", root_cells);
    artio_parameter_set_int(handle, "ARTIO_MAJOR_VERSION", ARTIO_MAJOR_VERSION );
    artio_parameter_set_int(handle, "ARTIO_MINOR_VERSION", ARTIO_MINOR_VERSION );

    return handle;
}

int artio_fileset_close(artio_fileset *handle) {
    char header_filename[256];
    artio_fh *head_fh;

    if ( handle == NULL ) {
        return ARTIO_ERR_INVALID_HANDLE;
    }

    if (handle->open_mode == ARTIO_FILESET_WRITE) {
        /* ensure we've flushed open particle and
         * grid files before writing header */
        if ( handle->grid != NULL ) {
            artio_fileset_close_grid(handle);
        }
        if ( handle->particle != NULL ) {
            artio_fileset_close_particles(handle);
        }

        sprintf(header_filename, "%s.art", handle->file_prefix);
        head_fh = artio_file_fopen(header_filename,
                ARTIO_MODE_WRITE | ((handle->rank == 0) ? ARTIO_MODE_ACCESS : 0),
                handle->context);
        if (head_fh == NULL) {
            return ARTIO_ERR_FILE_CREATE;
        }

        if (0 == handle->rank) {
            artio_parameter_write(head_fh, handle->parameters );
        }
        artio_file_fclose(head_fh);
    }

    artio_fileset_destroy(handle);
    return ARTIO_SUCCESS;
}
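/* Illustrative sketch (not part of the original artio source): the minimal
 * write-mode lifecycle pairs artio_fileset_create() with
 * artio_fileset_close(), setting header parameters in between.  The prefix
 * and the count/range variables here are hypothetical:
 *
 *     artio_fileset *out = artio_fileset_create("snapshot_out",
 *             num_root_cells, sfc_begin, sfc_end, artio_context_global);
 *     artio_parameter_set_int(out, "num_grid_files", 1);
 *     artio_fileset_close(out);  // flushes data files, then writes the header
 */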
artio_fileset *artio_fileset_allocate( char *file_prefix, int mode,
        const artio_context *context ) {
    int my_rank;
    int num_procs;

    artio_fileset *handle = (artio_fileset *)malloc(sizeof(artio_fileset));
    if ( handle != NULL ) {
        handle->parameters = artio_parameter_list_init();

#ifdef ARTIO_MPI
        handle->context = (artio_context *)malloc(sizeof(artio_context));
        if ( handle->context == NULL ) {
            return NULL;
        }
        memcpy( handle->context, context, sizeof(artio_context) );

        MPI_Comm_size(handle->context->comm, &num_procs);
        MPI_Comm_rank(handle->context->comm, &my_rank);
#else
        handle->context = NULL;
        num_procs = 1;
        my_rank = 0;
#endif /* MPI */

        strncpy(handle->file_prefix, file_prefix, 250);

        handle->open_mode = mode;
        handle->open_type = ARTIO_OPEN_HEADER;

        handle->rank = my_rank;
        handle->num_procs = num_procs;
        handle->endian_swap = 0;

        handle->proc_sfc_index = NULL;
        handle->proc_sfc_begin = -1;
        handle->proc_sfc_end = -1;
        handle->num_root_cells = -1;

        handle->grid = NULL;
        handle->particle = NULL;
    }
    return handle;
}

void artio_fileset_destroy( artio_fileset *handle ) {
    if ( handle == NULL ) return;

    if ( handle->proc_sfc_index != NULL ) free( handle->proc_sfc_index );

    if ( handle->grid != NULL ) {
        artio_fileset_close_grid(handle);
    }
    if ( handle->particle != NULL ) {
        artio_fileset_close_particles(handle);
    }

    if ( handle->context != NULL ) free( handle->context );

    artio_parameter_list_free(handle->parameters);

    free(handle);
}

int artio_fileset_has_grid( artio_fileset *handle ) {
    int num_grid_files = 0;
    return ( handle->grid != NULL ||
            ( artio_parameter_get_int( handle, "num_grid_files", &num_grid_files ) == ARTIO_SUCCESS &&
              num_grid_files > 0 ) );
}

int artio_fileset_has_particles( artio_fileset *handle ) {
    int num_particle_files = 0;
    return ( handle->particle != NULL ||
            ( artio_parameter_get_int( handle, "num_particle_files", &num_particle_files ) == ARTIO_SUCCESS &&
              num_particle_files > 0 ) );
}
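/* Illustrative sketch (not part of the original artio source): a caller can
 * probe a fileset opened with ARTIO_OPEN_HEADER before paying the cost of
 * opening the grid or particle files:
 *
 *     if ( artio_fileset_has_grid(handle) ) {
 *         artio_fileset_open_grid(handle);
 *     }
 *     if ( artio_fileset_has_particles(handle) ) {
 *         artio_fileset_open_particles(handle);
 *     }
 */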
===============================================================================
yt-4.4.0/yt/frontends/artio/artio_headers/artio.h
===============================================================================

/**********************************************************************
 * Copyright (c) 2012-2013, Douglas H. Rudd
 * All rights reserved.
 *
 * This file is part of the artio library.
 *
 * artio is free software: you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as
 * published by the Free Software Foundation, either version 3 of the
 * License, or (at your option) any later version.
 *
 * artio is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Lesser General Public License for more details.
 *
 * Copies of the GNU Lesser General Public License and the GNU General
 * Public License are available in the file LICENSE, included with this
 * distribution. If you failed to receive a copy of this file, see
 * <http://www.gnu.org/licenses/>
 **********************************************************************/

#ifndef __ARTIO_H__
#define __ARTIO_H__

#define ARTIO_MAJOR_VERSION 1
#define ARTIO_MINOR_VERSION 2

#ifdef ARTIO_MPI
#include <mpi.h>
#endif

#ifdef _WIN32
typedef __int64 int64_t;
typedef __int32 int32_t;
#else
#include <stdint.h>
#endif

#define ARTIO_OPEN_HEADER                   0
#define ARTIO_OPEN_PARTICLES                1
#define ARTIO_OPEN_GRID                     2

#define ARTIO_READ_LEAFS                    1
#define ARTIO_READ_REFINED                  2
#define ARTIO_READ_ALL                      3
#define ARTIO_RETURN_OCTS                   4
#define ARTIO_RETURN_CELLS                  0

/* allocation strategy */
#define ARTIO_ALLOC_EQUAL_SFC               0
#define ARTIO_ALLOC_EQUAL_PROC              1
#define ARTIO_ALLOC_MAX_FILE_SIZE           2

#define ARTIO_TYPE_STRING                   0
#define ARTIO_TYPE_CHAR                     1
#define ARTIO_TYPE_INT                      2
#define ARTIO_TYPE_FLOAT                    3
#define ARTIO_TYPE_DOUBLE                   4
#define ARTIO_TYPE_LONG                     5

/* error codes */
#define ARTIO_SUCCESS                       0

#define ARTIO_ERR_PARAM_NOT_FOUND           1
#define ARTIO_PARAMETER_EXHAUSTED           2
#define ARTIO_ERR_PARAM_INVALID_LENGTH      3
#define ARTIO_ERR_PARAM_TYPE_MISMATCH       4
#define ARTIO_ERR_PARAM_LENGTH_MISMATCH     5
#define ARTIO_ERR_PARAM_LENGTH_INVALID      6
#define ARTIO_ERR_PARAM_DUPLICATE           7
#define ARTIO_ERR_PARAM_CORRUPTED           8
#define ARTIO_ERR_PARAM_CORRUPTED_MAGIC     9
#define ARTIO_ERR_STRING_LENGTH             10

#define ARTIO_ERR_INVALID_FILESET_MODE      100
#define ARTIO_ERR_INVALID_FILE_NUMBER       101
#define ARTIO_ERR_INVALID_FILE_MODE         102
#define ARTIO_ERR_INVALID_SFC_RANGE         103
#define ARTIO_ERR_INVALID_SFC               104
#define ARTIO_ERR_INVALID_STATE             105
#define ARTIO_ERR_INVALID_SEEK              106
#define ARTIO_ERR_INVALID_OCT_LEVELS        107
#define ARTIO_ERR_INVALID_SPECIES           108
#define ARTIO_ERR_INVALID_ALLOC_STRATEGY    109
#define ARTIO_ERR_INVALID_LEVEL             110
#define ARTIO_ERR_INVALID_PARAMETER_LIST    111
#define ARTIO_ERR_INVALID_DATATYPE          112
#define ARTIO_ERR_INVALID_OCT_REFINED       113
#define ARTIO_ERR_INVALID_HANDLE            114
#define ARTIO_ERR_INVALID_CELL_TYPES        115
#define ARTIO_ERR_INVALID_BUFFER_SIZE       116
#define ARTIO_ERR_INVALID_INDEX             117

#define ARTIO_ERR_DATA_EXISTS               200
#define ARTIO_ERR_INSUFFICIENT_DATA         201
#define ARTIO_ERR_FILE_CREATE               202
#define ARTIO_ERR_GRID_DATA_NOT_FOUND       203
#define ARTIO_ERR_GRID_FILE_NOT_FOUND       204
#define ARTIO_ERR_PARTICLE_DATA_NOT_FOUND   205
#define ARTIO_ERR_PARTICLE_FILE_NOT_FOUND   206
#define ARTIO_ERR_IO_OVERFLOW               207
#define ARTIO_ERR_IO_WRITE                  208
#define ARTIO_ERR_IO_READ                   209
#define ARTIO_ERR_BUFFER_EXISTS             210

#define ARTIO_SELECTION_EXHAUSTED           300
#define ARTIO_ERR_INVALID_SELECTION         301
#define ARTIO_ERR_INVALID_COORDINATES       302

#define ARTIO_ERR_MEMORY_ALLOCATION         400

#define ARTIO_ERR_VERSION_MISMATCH          500

#ifdef ARTIO_MPI
typedef struct {
    MPI_Comm comm;
} artio_context;
#else
typedef struct {
    int comm;
} artio_context;
#endif

#define ARTIO_MAX_STRING_LENGTH 256

typedef struct artio_fileset_struct artio_fileset;
typedef struct artio_selection_struct artio_selection;

extern const artio_context *artio_context_global;
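/* Illustrative sketch (not part of the original header): most artio entry
 * points return ARTIO_SUCCESS or one of the error codes above, so callers
 * typically funnel every call through a small checker.  "check" is a
 * hypothetical helper name:
 *
 *     static void check( int status ) {
 *         if ( status != ARTIO_SUCCESS ) {
 *             fprintf(stderr, "artio error %d\n", status);
 *             exit(1);
 *         }
 *     }
 */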
/*
 * Description: Open the file
 *
 *  filename            The file prefix
 *  type                combination of ARTIO_OPEN_PARTICLES and ARTIO_OPEN_GRID flags
 */
artio_fileset *artio_fileset_open( char * file_name, int type, const artio_context *context);

/**
 * Description: Create fileset and begin populating header information
 *
 *  file_name               file name of refined cells
 *  root_cells              the number of root level cells
 *  proc_sfc_begin-end      the range of local space-filling-curve indices
 *  handle                  the artio file handle
 */
artio_fileset *artio_fileset_create(char * file_prefix,
        int64_t root_cells, int64_t proc_sfc_begin, int64_t proc_sfc_end,
        const artio_context *context);

/*
 * Description: Close the file
 */
int artio_fileset_close(artio_fileset *handle);

int artio_fileset_set_buffer_size( int buffer_size );

int artio_fileset_has_grid( artio_fileset *handle );
int artio_fileset_has_particles( artio_fileset *handle );

/* public parameter interface */
int artio_parameter_iterate( artio_fileset *handle, char *key, int *type, int *length );
int artio_parameter_get_array_length(artio_fileset *handle, const char * key, int *length);

int artio_parameter_set_int(artio_fileset *handle, const char * key, int32_t value);
int artio_parameter_get_int(artio_fileset *handle, const char * key, int32_t * value);

int artio_parameter_set_int_array(artio_fileset *handle, const char * key, int length, int32_t *values);
int artio_parameter_get_int_array(artio_fileset *handle, const char * key, int length, int32_t *values);
int artio_parameter_get_int_array_index(artio_fileset *handle, const char * key, int index, int32_t *values);

int artio_parameter_set_string(artio_fileset *handle, const char * key, char * value);
int artio_parameter_get_string(artio_fileset *handle, const char * key, char * value );

int artio_parameter_set_string_array(artio_fileset *handle, const char * key, int length, char ** values);
int artio_parameter_get_string_array(artio_fileset *handle, const char * key, int length, char ** values );
int artio_parameter_get_string_array_index(artio_fileset *handle, const char * key, int index, char * values );

int artio_parameter_set_float(artio_fileset *handle, const char * key, float value);
int artio_parameter_get_float(artio_fileset *handle, const char * key, float * value);

int artio_parameter_set_float_array(artio_fileset *handle, const char * key, int length, float *values);
int artio_parameter_get_float_array(artio_fileset *handle, const char * key, int length, float * values);
int artio_parameter_get_float_array_index(artio_fileset *handle, const char * key, int index, float * values);

int artio_parameter_set_double(artio_fileset *handle, const char * key, double value);
int artio_parameter_get_double(artio_fileset *handle, const char * key, double * value);

int artio_parameter_set_double_array(artio_fileset *handle, const char * key, int length, double * values);
int artio_parameter_get_double_array(artio_fileset *handle, const char * key, int length, double *values);
int artio_parameter_get_double_array_index(artio_fileset *handle, const char * key, int index, double *values);

int artio_parameter_set_long(artio_fileset *handle, const char * key, int64_t value);
int artio_parameter_get_long(artio_fileset *handle, const char * key, int64_t *value);

int artio_parameter_set_long_array(artio_fileset *handle, const char * key, int length, int64_t *values);
int artio_parameter_get_long_array(artio_fileset *handle, const char * key, int length, int64_t *values);
int artio_parameter_get_long_array_index(artio_fileset *handle, const char * key, int index, int64_t *values);
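/* Illustrative sketch (not part of the original header): once a fileset is
 * open, header values are pulled through the typed parameter getters, e.g.
 *
 *     int64_t num_root_cells;
 *     int32_t sfc_type;
 *     artio_parameter_get_long(handle, "num_root_cells", &num_root_cells);
 *     artio_parameter_get_int(handle, "sfc_type", &sfc_type);
 *
 * Each call returns ARTIO_SUCCESS on success or, for example,
 * ARTIO_ERR_PARAM_NOT_FOUND if the key is absent from the header.
 */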
each oct tree */
int artio_fileset_add_grid(artio_fileset *handle,
        int num_grid_files, int allocation_strategy,
        int num_grid_variables,
        char ** grid_variable_labels,
        int * num_levels_per_root_tree,
        int * num_octs_per_root_tree );

int artio_fileset_open_grid(artio_fileset *handle);
int artio_fileset_close_grid(artio_fileset *handle);

/*
 * Description: Write the variables of a root-level cell and begin its
 *     oct tree
 *
 * handle              The fileset handle
 * sfc                 The sfc index of the root cell
 * variables           The variables of the root-level cell
 * level               The depth of the oct tree attached to this root cell
 * num_octs_per_level  Array giving the number of octs at each level
 */
int artio_grid_write_root_cell_begin(artio_fileset *handle, int64_t sfc,
    float * variables, int level, int * num_octs_per_level);

/*
 * Description: Finish writing the current root-level cell
 */
int artio_grid_write_root_cell_end(artio_fileset *handle);

/*
 * Description: Begin writing octs at the given level
 */
int artio_grid_write_level_begin(artio_fileset *handle, int level );

/*
 * Description: Finish writing the current level
 */
int artio_grid_write_level_end(artio_fileset *handle);

/*
 * Description: Write a single oct to the file
 *
 * handle     The fileset handle
 * variables  Array holding the variables of the eight cells belonging
 *            to this oct
 * refined    Flags marking which of the eight cells are refined
 */
int artio_grid_write_oct(artio_fileset *handle, float *variables, int *refined);

/*
 * Description: Read the variables of a root-level cell and the shape of
 *     the oct tree attached to it
 *
 * handle              The fileset handle
 * sfc                 The sfc index of the root cell
 * pos                 The position of the root cell (computed from sfc)
 * variables           The variables of the root-level cell
 * num_tree_levels     The depth of the oct tree below this root cell
 * num_octs_per_level  The number of octs at each level of the tree
 */
int artio_grid_read_root_cell_begin(artio_fileset *handle, int64_t sfc,
    double *pos, float *variables,
    int *num_tree_levels, int *num_octs_per_level);

/*
 * Description: Finish reading the current root-level cell
 */
int artio_grid_read_root_cell_end(artio_fileset *handle);

/*
 * Description: Begin reading octs at the given level
 */
int artio_grid_read_level_begin(artio_fileset *handle, int level );

/*
 * Description: Finish reading the current level
 */
int artio_grid_read_level_end(artio_fileset *handle);

/*
 * Description: Read a single oct from the file
 */
int artio_grid_read_oct(artio_fileset *handle,
    double *pos, float *variables, int *refined);

int artio_grid_cache_sfc_range(artio_fileset *handle,
    int64_t sfc_start, int64_t sfc_end);
int artio_grid_clear_sfc_cache(artio_fileset *handle );

int artio_grid_count_octs_in_sfc_range(artio_fileset *handle,
    int64_t start, int64_t end, int64_t *num_octs_in_range );
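/*
 * EXAMPLE (editor's sketch, not part of the ARTIO API): how the
 * callback-based grid read below fits together.  This counts the leaf
 * cells in an SFC range; the option flags ARTIO_READ_LEAFS and
 * ARTIO_RETURN_CELLS are the ones tested in artio_grid.c, and the
 * callback signature matches artio_grid_callback above.  Guarded with
 * #if 0 so it cannot affect the build.
 */
#if 0
static void count_leaf_cb( int64_t sfc_index, int level, double *pos,
        float *variables, int *refined, void *params ) {
    int64_t *count = (int64_t *)params;
    (void)sfc_index; (void)level; (void)pos; (void)variables;
    /* with ARTIO_RETURN_CELLS the callback fires once per cell and
     * *refined is a single flag for that cell */
    if ( !(*refined) ) (*count)++;
}

static int64_t count_leaf_cells( artio_fileset *handle,
        int64_t sfc1, int64_t sfc2 ) {
    int64_t count = 0;
    if ( artio_grid_read_sfc_range( handle, sfc1, sfc2,
            ARTIO_READ_LEAFS | ARTIO_RETURN_CELLS,
            count_leaf_cb, &count ) != ARTIO_SUCCESS ) {
        return -1;
    }
    return count;
}
#endif /* example */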
/*
 * Description: Read a range of oct trees, invoking a callback
 *
 * handle             The fileset handle
 * sfc1               The first sfc index of the range
 * sfc2               The last sfc index of the range
 * min_level_to_read  Minimum level to read in each oct tree
 * max_level_to_read  Maximum level to read in each oct tree
 * options            Which cells to visit: 1 = refined cells, 2 = leaf
 *                    cells, 3 = all cells (ARTIO_READ_REFINED,
 *                    ARTIO_READ_LEAFS, ARTIO_READ_ALL), combined with
 *                    whether the callback receives whole octs
 *                    (ARTIO_RETURN_OCTS) or individual cells
 *                    (ARTIO_RETURN_CELLS)
 * callback           The callback function
 * params             A pointer to user-defined data passed through to
 *                    the callback
 */
int artio_grid_read_sfc_range_levels(artio_fileset *handle,
    int64_t sfc1, int64_t sfc2,
    int min_level_to_read, int max_level_to_read,
    int options,
    artio_grid_callback callback,
    void *params );

int artio_grid_read_sfc_range(artio_fileset *handle,
    int64_t sfc1, int64_t sfc2,
    int options,
    artio_grid_callback callback,
    void *params );

int artio_grid_read_selection(artio_fileset *handle,
    artio_selection *selection, int options,
    artio_grid_callback callback, void *params );

int artio_grid_read_selection_levels( artio_fileset *handle,
    artio_selection *selection,
    int min_level_to_read, int max_level_to_read,
    int options,
    artio_grid_callback callback, void *params );

/*
 * Description: Add a particle component to a fileset open for writing
 *
 * handle                   The fileset handle
 * num_particle_files       The number of particle files to create
 * allocation_strategy      How to apportion sfc indices to each file
 * num_species              The number of particle species
 * species_labels           String identifier for each species
 * num_primary_variables    Number of double-precision variables per species
 * num_secondary_variables  Number of single-precision variables per species
 * primary_variable_labels_per_species    Labels for the primary variables
 * secondary_variable_labels_per_species  Labels for the secondary variables
 * num_particles_per_species_per_root_tree  Per-root-cell particle counts,
 *                          used to size the sfc offset table
 */
int artio_fileset_add_particles(artio_fileset *handle,
    int num_particle_files, int allocation_strategy,
    int num_species,
    char **species_labels,
    int *num_primary_variables,
    int *num_secondary_variables,
    char ***primary_variable_labels_per_species,
    char ***secondary_variable_labels_per_species,
    int *num_particles_per_species_per_root_tree );

int artio_fileset_open_particles(artio_fileset *handle);
int artio_fileset_close_particles(artio_fileset *handle);

/*
 * Description: Begin writing a root-level cell, recording the number of
 *     particles of each species attached to it
 *
 * handle                     The fileset handle
 * sfc                        The sfc index of the root cell
 * num_particles_per_species  The number of particles of each species in
 *                            this root cell
 */
int artio_particle_write_root_cell_begin(artio_fileset *handle, int64_t sfc,
    int *num_particles_per_species);

/*
 * Description: Finish writing the current root-level cell
 */
int artio_particle_write_root_cell_end(artio_fileset *handle);

/*
 * Description: Begin writing particles of the given species
 */
int artio_particle_write_species_begin(artio_fileset *handle, int species );

/*
 * Description: Finish writing the current species
 */
int artio_particle_write_species_end(artio_fileset *handle);

/*
 * Description: Write a single particle to the file
 *
 * handle               The fileset handle
 * pid                  The particle id
 * subspecies           Subspecies tag for this particle
 * primary_variables    Double-precision variables for this particle
 * secondary_variables  Single-precision variables for this particle
 */
int artio_particle_write_particle(artio_fileset *handle, int64_t pid,
    int subspecies, double* primary_variables, float *secondary_variables);
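/*
 * EXAMPLE (editor's sketch, not part of the ARTIO API): the nesting
 * required by the particle write interface above.  Each root cell
 * brackets one begin/end pair per species, with one write call per
 * particle.  The data layout (my_counts, my_ids, my_primary,
 * my_secondary, one variable of each kind per particle) is
 * hypothetical.  Guarded with #if 0 so it cannot affect the build.
 */
#if 0
static int write_one_root_cell( artio_fileset *handle, int64_t sfc,
        int num_species, int *my_counts,
        int64_t **my_ids, double **my_primary, float **my_secondary ) {
    int species, p, ret;

    ret = artio_particle_write_root_cell_begin( handle, sfc, my_counts );
    if ( ret != ARTIO_SUCCESS ) return ret;

    for ( species = 0; species < num_species; species++ ) {
        ret = artio_particle_write_species_begin( handle, species );
        if ( ret != ARTIO_SUCCESS ) return ret;

        for ( p = 0; p < my_counts[species]; p++ ) {
            /* 0 stands for "no subspecies tag" in this sketch */
            ret = artio_particle_write_particle( handle,
                my_ids[species][p], 0,
                &my_primary[species][p],
                &my_secondary[species][p] );
            if ( ret != ARTIO_SUCCESS ) return ret;
        }

        ret = artio_particle_write_species_end( handle );
        if ( ret != ARTIO_SUCCESS ) return ret;
    }
    return artio_particle_write_root_cell_end( handle );
}
#endif /* example */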
/*
 * Description: Begin reading a root-level cell, returning the number of
 *     particles of each species attached to it
 *
 * handle                    The fileset handle
 * sfc                       The sfc index of the root cell
 * num_particle_per_species  The number of particles of each species in
 *                           this root cell
 */
int artio_particle_read_root_cell_begin(artio_fileset *handle, int64_t sfc,
    int * num_particle_per_species);

/*
 * Description: Finish reading the current root-level cell
 */
int artio_particle_read_root_cell_end(artio_fileset *handle);

/*
 * Description: Begin reading particles of the given species
 */
int artio_particle_read_species_begin(artio_fileset *handle, int species );

/*
 * Description: Finish reading the current species
 */
int artio_particle_read_species_end(artio_fileset *handle);

/*
 * Description: Read a single particle from the file
 */
int artio_particle_read_particle(artio_fileset *handle, int64_t *pid,
    int *subspecies, double *primary_variables,
    float *secondary_variables);

int artio_particle_cache_sfc_range(artio_fileset *handle,
    int64_t sfc_start, int64_t sfc_end);
int artio_particle_clear_sfc_cache(artio_fileset *handle );

typedef void (* artio_particle_callback)(int64_t sfc_index,
    int species, int subspecies, int64_t pid,
    double *primary_variables, float *secondary_variables,
    void *params );

/*
 * Description: Read a range of root cells' particles, invoking a callback
 *
 * handle         The fileset handle
 * sfc1           The first sfc index of the range
 * sfc2           The last sfc index of the range
 * start_species  The first particle species to read (species variants only)
 * end_species    The last particle species to read (species variants only)
 * callback       The callback function
 * params         User-defined data passed through to the callback
 */
int artio_particle_read_sfc_range(artio_fileset *handle,
    int64_t sfc1, int64_t sfc2,
    artio_particle_callback callback,
    void *params);

int artio_particle_read_sfc_range_species( artio_fileset *handle,
    int64_t sfc1, int64_t sfc2,
    int start_species, int end_species,
    artio_particle_callback callback,
    void *params);

int artio_particle_read_selection(artio_fileset *handle,
    artio_selection *selection,
    artio_particle_callback callback, void *params );

int artio_particle_read_selection_species( artio_fileset *handle,
    artio_selection *selection,
    int start_species, int end_species,
    artio_particle_callback callback, void *params );

artio_selection *artio_selection_allocate( artio_fileset *handle );
artio_selection *artio_select_all( artio_fileset *handle );
artio_selection *artio_select_volume( artio_fileset *handle,
    double lpos[3], double rpos[3] );
artio_selection *artio_select_cube( artio_fileset *handle,
    double center[3], double size );
int artio_selection_add_root_cell( artio_selection *selection, int coords[3] );
int artio_selection_destroy( artio_selection *selection );
void artio_selection_print( artio_selection *selection );
int artio_selection_iterator( artio_selection *selection,
    int64_t max_range_size, int64_t *start, int64_t *end );
int artio_selection_iterator_reset( artio_selection *selection );
int64_t artio_selection_size( artio_selection *selection );

#endif /* __ARTIO_H__ */
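/*
 * EXAMPLE (editor's sketch, not part of the ARTIO API): reading the
 * particles inside a rectangular volume with the selection interface
 * declared above.  print_particle and the bounds are hypothetical; the
 * coordinates are assumed to be in root-cell units.  Guarded with #if 0
 * so it cannot affect the build.
 */
#if 0
#include <stdio.h>

static void print_particle( int64_t sfc_index, int species, int subspecies,
        int64_t pid, double *primary_variables, float *secondary_variables,
        void *params ) {
    (void)subspecies; (void)primary_variables;
    (void)secondary_variables; (void)params;
    printf( "species %d particle %ld at sfc %ld\n",
        species, (long)pid, (long)sfc_index );
}

static int dump_volume( artio_fileset *handle ) {
    double lpos[3] = { 0.0, 0.0, 0.0 };
    double rpos[3] = { 8.0, 8.0, 8.0 };
    int ret;

    artio_selection *selection = artio_select_volume( handle, lpos, rpos );
    if ( selection == NULL ) return -1;  /* selection could not be built */

    ret = artio_particle_read_selection( handle, selection,
        print_particle, NULL );
    artio_selection_destroy( selection );
    return ret;
}
#endif /* example */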
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/artio/artio_headers/artio_endian.c0000644000175100001770000000451214714401662022754 0ustar00runnerdocker/**********************************************************************
 * Copyright (c) 2012-2013, Douglas H. Rudd
 * All rights reserved.
 *
 * This file is part of the artio library.
 *
 * artio is free software: you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as
 * published by the Free Software Foundation, either version 3 of the
 * License, or (at your option) any later version.
 *
 * artio is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU Lesser General Public License for more details.
 *
 * Copies of the GNU Lesser General Public License and the GNU General
 * Public License are available in the file LICENSE, included with this
 * distribution.  If you failed to receive a copy of this file, see
 * <http://www.gnu.org/licenses/>
 **********************************************************************/

#include "artio_endian.h"

#ifdef _WIN32
typedef __int64 int64_t;
typedef __int32 int32_t;
#else
#include <stdint.h>
#endif

void artio_int_swap(int32_t *src, int count) {
    int i;
    union {
        int32_t f;
        unsigned char c[4];
    } d1, d2;

    for ( i = 0; i < count; i++ ) {
        d1.f = src[i];
        d2.c[0] = d1.c[3];
        d2.c[1] = d1.c[2];
        d2.c[2] = d1.c[1];
        d2.c[3] = d1.c[0];
        src[i] = d2.f;
    }
}

void artio_float_swap(float *src, int count) {
    int i;
    union {
        float f;
        unsigned char c[4];
    } d1, d2;

    for ( i = 0; i < count; i++ ) {
        d1.f = src[i];
        d2.c[0] = d1.c[3];
        d2.c[1] = d1.c[2];
        d2.c[2] = d1.c[1];
        d2.c[3] = d1.c[0];
        src[i] = d2.f;
    }
}

void artio_double_swap(double *src, int count) {
    int i;
    union {
        double d;
        unsigned char c[8];
    } d1, d2;

    for ( i = 0; i < count; i++ ) {
        d1.d = src[i];
        d2.c[0] = d1.c[7];
        d2.c[1] = d1.c[6];
        d2.c[2] = d1.c[5];
        d2.c[3] = d1.c[4];
        d2.c[4] = d1.c[3];
        d2.c[5] = d1.c[2];
        d2.c[6] = d1.c[1];
        d2.c[7] = d1.c[0];
        src[i] = d2.d;
    }
}

void artio_long_swap(int64_t *src, int count) {
    int i;
    union {
        int64_t d;
        unsigned char c[8];
    } d1, d2;

    for ( i = 0; i < count; i++ ) {
        d1.d = src[i];
        d2.c[0] = d1.c[7];
        d2.c[1] = d1.c[6];
        d2.c[2] = d1.c[5];
        d2.c[3] = d1.c[4];
        d2.c[4] = d1.c[3];
        d2.c[5] = d1.c[2];
        d2.c[6] = d1.c[1];
        d2.c[7] = d1.c[0];
        src[i] = d2.d;
    }
}
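/*
 * EXAMPLE (editor's sketch, not part of the library): a quick check of
 * the byte-swap routines above.  0x01020304 swapped in place becomes
 * 0x04030201, which is how an ARTIO file written on a little-endian
 * machine is fixed up when read on a big-endian one (and vice versa).
 * Guarded with #if 0 so it cannot affect the build.
 */
#if 0
#include <assert.h>

static void check_int_swap(void) {
    int32_t v = 0x01020304;
    artio_int_swap( &v, 1 );       /* reverses the four bytes in place */
    assert( v == 0x04030201 );
}
#endif /* example */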
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/artio/artio_headers/artio_endian.h0000644000175100001770000000254714714401662022767 0ustar00runnerdocker/**********************************************************************
 * Copyright (c) 2012-2013, Douglas H. Rudd
 * All rights reserved.
 *
 * This file is part of the artio library.
 *
 * artio is free software: you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as
 * published by the Free Software Foundation, either version 3 of the
 * License, or (at your option) any later version.
 *
 * artio is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU Lesser General Public License for more details.
 *
 * Copies of the GNU Lesser General Public License and the GNU General
 * Public License are available in the file LICENSE, included with this
 * distribution.  If you failed to receive a copy of this file, see
 * <http://www.gnu.org/licenses/>
 **********************************************************************/

#ifndef __ARTIO_ENDIAN_H__
#define __ARTIO_ENDIAN_H__

#ifdef _WIN32
typedef __int64 int64_t;
typedef __int32 int32_t;
#else
#include <stdint.h>
#endif

void artio_int_swap(int32_t *src, int count);
void artio_float_swap(float *src, int count);
void artio_double_swap(double *src, int count);
void artio_long_swap(int64_t *src, int count);

#endif /* __ARTIO_ENDIAN_H__ */
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/artio/artio_headers/artio_file.c0000644000175100001770000001206014714401662022432 0ustar00runnerdocker/**********************************************************************
 * Copyright (c) 2012-2013, Douglas H. Rudd
 * All rights reserved.
 *
 * This file is part of the artio library.
 *
 * artio is free software: you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as
 * published by the Free Software Foundation, either version 3 of the
 * License, or (at your option) any later version.
 *
 * artio is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU Lesser General Public License for more details.
 *
 * Copies of the GNU Lesser General Public License and the GNU General
 * Public License are available in the file LICENSE, included with this
 * distribution.  If you failed to receive a copy of this file, see
 * <http://www.gnu.org/licenses/>
 **********************************************************************/

#include "artio.h"
#include "artio_internal.h"

#include <stdio.h>
#include <stdlib.h>

artio_fh *artio_file_fopen( char * filename, int mode, const artio_context *context) {
    artio_fh *fh;
#ifdef ARTIO_DEBUG
    printf( "artio_file_fopen( filename=%s, mode=%d, context=%p )\n",
        filename, mode, context );
    fflush(stdout);
#endif /* ARTIO_DEBUG */
    fh = artio_file_fopen_i(filename,mode,context);
#ifdef ARTIO_DEBUG
    printf(" artio_file_fopen = %p\n", fh );
    fflush(stdout);
#endif /* ARTIO_DEBUG */
    return fh;
}

int artio_file_attach_buffer( artio_fh *handle, void *buf, int buf_size ) {
    int status;
#ifdef ARTIO_DEBUG
    printf( "artio_file_attach_buffer( handle=%p, buf=%p, buf_size = %d )\n",
        handle, buf, buf_size );
    fflush(stdout);
#endif /* ARTIO_DEBUG */
    status = artio_file_attach_buffer_i(handle,buf,buf_size);
#ifdef ARTIO_DEBUG
    if ( status != ARTIO_SUCCESS ) {
        printf(" artio_file_attach_buffer(%p) = %d\n", handle, status );
        fflush(stdout);
    }
#endif /* ARTIO_DEBUG */
    return status;
}

int artio_file_detach_buffer( artio_fh *handle ) {
    int status;
#ifdef ARTIO_DEBUG
    printf( "artio_file_detach_buffer( handle=%p )\n", handle );
    fflush(stdout);
#endif /* ARTIO_DEBUG */
    status = artio_file_detach_buffer_i(handle);
#ifdef ARTIO_DEBUG
    if ( status != ARTIO_SUCCESS ) {
        printf( "artio_file_detach_buffer(%p) = %d\n", handle, status );
        fflush(stdout);
    }
#endif /* ARTIO_DEBUG */
    return status;
}

int artio_file_fwrite( artio_fh *handle, const void *buf, int64_t count, int type ) {
    int status;
#ifdef ARTIO_DEBUG
    printf( "artio_file_fwrite( handle=%p, buf=%p, count=%ld, type=%d )\n",
        handle, buf, count, type );
    fflush(stdout);
#endif /* ARTIO_DEBUG */
    status = artio_file_fwrite_i(handle,buf,count,type);
#ifdef ARTIO_DEBUG
    if ( status != ARTIO_SUCCESS ) {
        printf( "artio_file_fwrite(%p) = %d", handle, status );
        fflush(stdout);
    }
#endif /* ARTIO_DEBUG */
    return status;
}

int artio_file_fflush(artio_fh *handle) {
    int
status; #ifdef ARTIO_DEBUG printf( "artio_file_fflush( handle=%p )\n", handle ); fflush(stdout); #endif /* ARTIO_DEBUG */ status = artio_file_fflush_i(handle); #ifdef ARTIO_DEBUG if ( status != ARTIO_SUCCESS ) { printf( "artio_file_fflush(%p) = %d\n", handle, status ); fflush(stdout); } #endif /* ARTIO_DEBUG */ return status; } int artio_file_fread(artio_fh *handle, void *buf, int64_t count, int type ) { int status; #ifdef ARTIO_DEBUG printf( "artio_file_fread( handle=%p, buf=%p, count=%ld, type=%d )\n", handle, buf, count, type ); fflush(stdout); #endif /* ARTIO_DEBUG */ status = artio_file_fread_i(handle,buf,count,type); #ifdef ARTIO_DEBUG if ( status != ARTIO_SUCCESS ) { printf( "artio_file_fread(%p) = %d", handle, status ); } #endif /* ARTIO_DEBUG */ return status; } int artio_file_ftell(artio_fh *handle, int64_t *offset) { int status; #ifdef ARTIO_DEBUG printf( "artio_file_ftell( handle=%p, offset=%p )\n", handle, offset ); fflush(stdout); #endif /* ARTIO_DEBUG */ status = artio_file_ftell_i(handle,offset); #ifdef ARTIO_DEBUG if ( status != ARTIO_SUCCESS ) { printf("artio_file_ftell(%p) = %d\n", handle, status ); fflush(stdout); } #endif /* ARTIO_DEBUG */ return status; } int artio_file_fseek(artio_fh *handle, int64_t offset, int whence ) { int status; #ifdef ARTIO_DEBUG printf( "artio_file_fseek( handle=%p, offset=%ld, whence=%d )\n", handle, offset, whence ); fflush(stdout); #endif /* ARTIO_DEBUG */ status = artio_file_fseek_i(handle,offset,whence); #ifdef ARTIO_DEBUG if ( status != ARTIO_SUCCESS ) { printf( "artio_file_fseek(%p) = %d\n", handle, status ); fflush(stdout); } #endif /* ARTIO_DEBUG */ return status; } int artio_file_fclose(artio_fh *handle) { int status; #ifdef ARTIO_DEBUG printf( "artio_file_fclose( handle=%p )\n", handle ); fflush(stdout); #endif /* ARTIO_DEBUG */ status = artio_file_fclose_i(handle); #ifdef ARTIO_DEBUG if ( status != ARTIO_SUCCESS ) { printf( "artio_file_fclose(%p) = %d\n", handle, status ); fflush(stdout); } #endif /* ARTIO_DEBUG */ return status; } void artio_file_set_endian_swap_tag(artio_fh *handle) { artio_file_set_endian_swap_tag_i(handle); } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/artio/artio_headers/artio_grid.c0000644000175100001770000011076314714401662022451 0ustar00runnerdocker/********************************************************************** * Copyright (c) 2012-2013, Douglas H. Rudd * All rights reserved. * * This file is part of the artio library. * * artio is free software: you can redistribute it and/or modify * it under the terms of the GNU Lesser General Public License as * published by the Free Software Foundation, either version 3 of the * License, or (at your option) any later version. * * artio is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU Lesser General Public License for more details. * * Copies of the GNU Lesser General Public License and the GNU General * Public License are available in the file LICENSE, included with this * distribution. 
If you failed to receive a copy of this file, see
 * <http://www.gnu.org/licenses/>
 **********************************************************************/

#include "artio.h"
#include "artio_internal.h"

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#ifdef _WIN32
typedef __int64 int64_t;
typedef __int32 int32_t;
#else
#include <stdint.h>
#endif

int artio_grid_find_file(artio_grid_file *ghandle, int start, int end, int64_t sfc);
artio_grid_file *artio_grid_file_allocate(void);
void artio_grid_file_destroy(artio_grid_file *ghandle);

const double oct_pos_offsets[8][3] = {
    { -0.5, -0.5, -0.5 }, {  0.5, -0.5, -0.5 },
    { -0.5,  0.5, -0.5 }, {  0.5,  0.5, -0.5 },
    { -0.5, -0.5,  0.5 }, {  0.5, -0.5,  0.5 },
    { -0.5,  0.5,  0.5 }, {  0.5,  0.5,  0.5 }
};

/*
 * Open the grid component of the fileset
 */
int artio_fileset_open_grid(artio_fileset *handle) {
    int i;
    char filename[256];
    int first_file, last_file;
    int mode;
    artio_grid_file *ghandle;

    if ( handle == NULL ) {
        return ARTIO_ERR_INVALID_HANDLE;
    }

    /* check that the fileset doesn't already contain a grid component */
    if ( handle->open_type & ARTIO_OPEN_GRID ||
            handle->open_mode != ARTIO_FILESET_READ ||
            handle->grid != NULL ) {
        return ARTIO_ERR_INVALID_FILESET_MODE;
    }
    handle->open_type |= ARTIO_OPEN_GRID;

    ghandle = artio_grid_file_allocate();
    if ( ghandle == NULL ) {
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }

    /* load grid parameters from header file */
    if ( artio_parameter_get_int(handle, "num_grid_files",
                &ghandle->num_grid_files) != ARTIO_SUCCESS ||
            artio_parameter_get_int( handle, "num_grid_variables",
                &ghandle->num_grid_variables ) != ARTIO_SUCCESS ) {
        return ARTIO_ERR_GRID_DATA_NOT_FOUND;
    }

    ghandle->file_sfc_index =
        (int64_t *)malloc(sizeof(int64_t) * (ghandle->num_grid_files + 1));
    if ( ghandle->file_sfc_index == NULL ) {
        artio_grid_file_destroy(ghandle);
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }
    artio_parameter_get_long_array(handle, "grid_file_sfc_index",
        ghandle->num_grid_files + 1, ghandle->file_sfc_index);
    artio_parameter_get_int(handle, "grid_max_level",
        &ghandle->file_max_level);

    ghandle->octs_per_level =
        (int *)malloc(ghandle->file_max_level * sizeof(int));
    if ( ghandle->octs_per_level == NULL ) {
        artio_grid_file_destroy(ghandle);
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }

    ghandle->ffh =
        (artio_fh **)malloc(ghandle->num_grid_files * sizeof(artio_fh *));
    if ( ghandle->ffh == NULL ) {
        artio_grid_file_destroy(ghandle);
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }
    for ( i = 0; i < ghandle->num_grid_files; i++ ) {
        ghandle->ffh[i] = NULL;
    }

    first_file = artio_grid_find_file(ghandle, 0,
        ghandle->num_grid_files, handle->proc_sfc_begin);
    last_file = artio_grid_find_file(ghandle, first_file,
        ghandle->num_grid_files, handle->proc_sfc_end);

    /* open files on all processes */
    for (i = 0; i < ghandle->num_grid_files; i++) {
        sprintf(filename, "%s.g%03d", handle->file_prefix, i);

        mode = ARTIO_MODE_READ;
        if (i >= first_file && i <= last_file) {
            mode |= ARTIO_MODE_ACCESS;
        }
        if (handle->endian_swap) {
            mode |= ARTIO_MODE_ENDIAN_SWAP;
        }

        ghandle->ffh[i] = artio_file_fopen(filename, mode, handle->context);
        if ( ghandle->ffh[i] == NULL ) {
            artio_grid_file_destroy(ghandle);
            return ARTIO_ERR_GRID_FILE_NOT_FOUND;
        }
    }

    handle->grid = ghandle;
    return ARTIO_SUCCESS;
}

int artio_fileset_add_grid(artio_fileset *handle,
        int num_grid_files, int allocation_strategy,
        int num_grid_variables, char ** grid_variable_labels,
        int * num_levels_per_root_tree,
        int * num_octs_per_root_tree ) {
    int i;
    int file_max_level, local_max_level;
    int64_t cur, sfc, l;
    int64_t first_file_sfc, last_file_sfc;
    int first_file, last_file;
    char filename[256];
    int mode;
    int ret;
    artio_grid_file
*ghandle; if ( handle == NULL ) { return ARTIO_ERR_INVALID_HANDLE; } if ( handle->open_mode != ARTIO_FILESET_WRITE ) { return ARTIO_ERR_INVALID_FILESET_MODE; } if ( handle->open_type & ARTIO_OPEN_GRID) { return ARTIO_ERR_DATA_EXISTS; } handle->open_type |= ARTIO_OPEN_GRID; artio_parameter_set_int(handle, "num_grid_files", num_grid_files); artio_parameter_set_int(handle, "num_grid_variables", num_grid_variables); artio_parameter_set_string_array(handle, "grid_variable_labels", num_grid_variables, grid_variable_labels); ghandle = artio_grid_file_allocate(); if ( ghandle == NULL ) { return ARTIO_ERR_MEMORY_ALLOCATION; } ghandle->file_sfc_index = (int64_t *)malloc(sizeof(int64_t) * (num_grid_files + 1)); if ( ghandle->file_sfc_index == NULL ) { artio_grid_file_destroy(ghandle); return ARTIO_ERR_MEMORY_ALLOCATION; } /* compute global maximum level */ local_max_level = 0; for (sfc = 0; sfc < handle->proc_sfc_end - handle->proc_sfc_begin + 1; sfc++) { if (num_levels_per_root_tree[sfc] > local_max_level) { local_max_level = num_levels_per_root_tree[sfc]; } } #ifdef ARTIO_MPI MPI_Allreduce( &local_max_level, &file_max_level, 1, MPI_INT, MPI_MAX, handle->context->comm ); #else file_max_level = local_max_level; #endif /* ARTIO_MPI */ switch (allocation_strategy) { case ARTIO_ALLOC_EQUAL_PROC: if (num_grid_files > handle->num_procs) { return ARTIO_ERR_INVALID_FILE_NUMBER; } for (i = 0; i < num_grid_files; i++) { ghandle->file_sfc_index[i] = handle->proc_sfc_index[(handle->num_procs*i+num_grid_files-1) / num_grid_files]; } ghandle->file_sfc_index[num_grid_files] = handle->proc_sfc_index[handle->num_procs]; break; case ARTIO_ALLOC_EQUAL_SFC: if ( num_grid_files > handle->num_root_cells ) { return ARTIO_ERR_INVALID_FILE_NUMBER; } for (i = 0; i < num_grid_files; i++) { ghandle->file_sfc_index[i] = (handle->num_root_cells*i+num_grid_files-1) / num_grid_files; } ghandle->file_sfc_index[num_grid_files] = handle->num_root_cells; break; default: artio_grid_file_destroy(ghandle); return ARTIO_ERR_INVALID_ALLOC_STRATEGY; } ghandle->num_grid_files = num_grid_files; ghandle->num_grid_variables = num_grid_variables; ghandle->file_max_level = file_max_level; /* allocate space for sfc offset cache */ ghandle->cache_sfc_begin = handle->proc_sfc_begin; ghandle->cache_sfc_end = handle->proc_sfc_end; ghandle->sfc_offset_table = (int64_t *)malloc((size_t)(ghandle->cache_sfc_end - ghandle->cache_sfc_begin + 1) * sizeof(int64_t)); if ( ghandle->sfc_offset_table == NULL ) { artio_grid_file_destroy(ghandle); return ARTIO_ERR_MEMORY_ALLOCATION; } ghandle->octs_per_level = (int *)malloc(ghandle->file_max_level * sizeof(int)); if ( ghandle->octs_per_level == NULL ) { artio_grid_file_destroy(ghandle); return ARTIO_ERR_MEMORY_ALLOCATION; } /* allocate file handles */ ghandle->ffh = (artio_fh **)malloc(num_grid_files * sizeof(artio_fh *)); if ( ghandle->ffh == NULL ) { artio_grid_file_destroy(ghandle); return ARTIO_ERR_MEMORY_ALLOCATION; } for ( i = 0; i < num_grid_files; i++ ) { ghandle->ffh[i] = NULL; } /* open file handles */ first_file = artio_grid_find_file(ghandle, 0, num_grid_files, handle->proc_sfc_begin); last_file = artio_grid_find_file(ghandle, first_file, num_grid_files, handle->proc_sfc_end); if ( first_file < 0 || first_file >= num_grid_files || last_file < first_file || last_file >= num_grid_files ) { return ARTIO_ERR_INVALID_FILE_NUMBER; } for (i = 0; i < num_grid_files; i++) { sprintf(filename, "%s.g%03d", handle->file_prefix, i); mode = ARTIO_MODE_WRITE; if (i >= first_file && i <= last_file) { mode |= 
ARTIO_MODE_ACCESS; } ghandle->ffh[i] = artio_file_fopen(filename, mode, handle->context); if ( ghandle->ffh[i] == NULL ) { artio_grid_file_destroy(ghandle); return ARTIO_ERR_FILE_CREATE; } /* write sfc offset header if we contribute to this file */ if (i >= first_file && i <= last_file) { #ifdef ARTIO_MPI if (ghandle->file_sfc_index[i] >= handle->proc_sfc_index[ handle->rank ] && ghandle->file_sfc_index[i] < handle->proc_sfc_index[ handle->rank + 1] ) { cur = (ghandle->file_sfc_index[i + 1] - ghandle->file_sfc_index[i]) * sizeof(int64_t); } else { /* obtain offset from previous process */ MPI_Recv( &cur, 1, MPI_LONG_LONG_INT, handle->rank - 1, i, handle->context->comm, MPI_STATUS_IGNORE ); } #else cur = (ghandle->file_sfc_index[i + 1] - ghandle->file_sfc_index[i]) * sizeof(int64_t); #endif /* ARTIO_MPI */ first_file_sfc = MAX( handle->proc_sfc_begin, ghandle->file_sfc_index[i] ); last_file_sfc = MIN( handle->proc_sfc_end, ghandle->file_sfc_index[i+1]-1 ); for (l = first_file_sfc - ghandle->cache_sfc_begin; l < last_file_sfc - ghandle->cache_sfc_begin + 1; l++) { ghandle->sfc_offset_table[l] = cur; cur += sizeof(float) * ghandle->num_grid_variables + sizeof(int) * (1 + num_levels_per_root_tree[l]) + num_octs_per_root_tree[l] * 8 * (sizeof(float) * ghandle->num_grid_variables + sizeof(int)); } #ifdef ARTIO_MPI if ( ghandle->file_sfc_index[i+1] > handle->proc_sfc_end+1 ) { MPI_Send( &cur, 1, MPI_LONG_LONG_INT, handle->rank + 1, i, handle->context->comm ); } #endif /* ARTIO_MPI */ /* seek and write our portion of sfc table */ ret = artio_file_fseek(ghandle->ffh[i], (first_file_sfc - ghandle->file_sfc_index[i]) * sizeof(int64_t), ARTIO_SEEK_SET); if ( ret != ARTIO_SUCCESS ) { artio_grid_file_destroy(ghandle); return ret; } ret = artio_file_fwrite(ghandle->ffh[i], &ghandle->sfc_offset_table[first_file_sfc - ghandle->cache_sfc_begin], last_file_sfc - first_file_sfc + 1, ARTIO_TYPE_LONG); if ( ret != ARTIO_SUCCESS ) { artio_grid_file_destroy(ghandle); return ret; } } } handle->grid = ghandle; artio_parameter_set_long_array(handle, "grid_file_sfc_index", ghandle->num_grid_files + 1, ghandle->file_sfc_index); artio_parameter_set_int(handle, "grid_max_level", ghandle->file_max_level); return ARTIO_SUCCESS; } artio_grid_file *artio_grid_file_allocate(void) { artio_grid_file *ghandle = (artio_grid_file *)malloc(sizeof(struct artio_grid_file_struct)); if ( ghandle != NULL ) { ghandle->ffh = NULL; ghandle->num_grid_variables = -1; ghandle->num_grid_files = -1; ghandle->file_sfc_index = NULL; ghandle->cache_sfc_begin = -1; ghandle->cache_sfc_end = -1; ghandle->sfc_offset_table = NULL; ghandle->file_max_level = -1; ghandle->cur_file = -1; ghandle->cur_num_levels = -1; ghandle->cur_level = -1; ghandle->cur_octs = -1; ghandle->cur_sfc = -1; ghandle->octs_per_level = NULL; ghandle->pos_flag = 0; ghandle->pos_cur_level = -1; ghandle->next_level_size = -1; ghandle->cur_level_size = -1; ghandle->cell_size_level = 1e20; ghandle->next_level_pos = NULL; ghandle->cur_level_pos = NULL; ghandle->next_level_oct = -1; ghandle->buffer_size = artio_fh_buffer_size; ghandle->buffer = malloc(ghandle->buffer_size); if ( ghandle->buffer == NULL ) { free(ghandle); return NULL; } } return ghandle; } void artio_grid_file_destroy(artio_grid_file *ghandle) { int i; if ( ghandle == NULL ) return; if ( ghandle->ffh != NULL ) { for (i = 0; i < ghandle->num_grid_files; i++) { if ( ghandle->ffh[i] != NULL ) { artio_file_fclose(ghandle->ffh[i]); } } free(ghandle->ffh); } if ( ghandle->sfc_offset_table != NULL ) 
free(ghandle->sfc_offset_table); if ( ghandle->octs_per_level != NULL ) free(ghandle->octs_per_level); if ( ghandle->file_sfc_index != NULL ) free(ghandle->file_sfc_index); if ( ghandle->next_level_pos != NULL ) free(ghandle->next_level_pos); if ( ghandle->cur_level_pos != NULL ) free(ghandle->cur_level_pos); if ( ghandle->buffer != NULL ) free( ghandle->buffer ); free(ghandle); } int artio_fileset_close_grid(artio_fileset *handle) { if ( handle == NULL ) { return ARTIO_ERR_INVALID_HANDLE; } if ( !(handle->open_type & ARTIO_OPEN_GRID) || handle->grid == NULL ) { return ARTIO_ERR_INVALID_FILESET_MODE; } artio_grid_file_destroy(handle->grid); handle->grid = NULL; return ARTIO_SUCCESS; } int artio_grid_count_octs_in_sfc_range(artio_fileset *handle, int64_t start, int64_t end, int64_t *num_octs_in_range ) { int i; int ret; int file, first; int64_t sfc; int64_t offset, next_offset, size_offset; int num_oct_levels; int *num_octs_per_level; artio_grid_file *ghandle; if ( handle == NULL ) { return ARTIO_ERR_INVALID_HANDLE; } if (handle->open_mode != ARTIO_FILESET_READ || !(handle->open_type & ARTIO_OPEN_GRID) || handle->grid == NULL ) { return ARTIO_ERR_INVALID_FILESET_MODE; } if ( start > end || start < handle->proc_sfc_begin || end > handle->proc_sfc_end ) { return ARTIO_ERR_INVALID_SFC_RANGE; } ghandle = handle->grid; /* check that we're not in the middle of a read */ if ( ghandle->cur_sfc != -1 ) { return ARTIO_ERR_INVALID_STATE; } *num_octs_in_range = 0; if ( 8*ghandle->num_grid_variables <= ghandle->file_max_level ) { /* we can't compute the number of octs through the offset table */ ret = artio_grid_cache_sfc_range( handle, start, end ); if ( ret != ARTIO_SUCCESS ) return ret; num_octs_per_level = (int *)malloc(ghandle->file_max_level*sizeof(int) ); if ( num_octs_per_level == NULL ) { return ARTIO_ERR_MEMORY_ALLOCATION; } for ( sfc = start; sfc <= end; sfc++ ) { ret = artio_grid_read_root_cell_begin( handle, sfc, NULL, NULL, &num_oct_levels, num_octs_per_level ); if ( ret != ARTIO_SUCCESS ) return ret; for ( i = 0; i < num_oct_levels; i++ ) { *num_octs_in_range += num_octs_per_level[i]; } ret = artio_grid_read_root_cell_end( handle ); if ( ret != ARTIO_SUCCESS ) return ret; } free( num_octs_per_level ); } else { /* TODO: add optimization if sfc range already cached */ file = artio_grid_find_file(ghandle, 0, ghandle->num_grid_files, start); first = MAX( 0, start - ghandle->file_sfc_index[file] ); ret = artio_file_fseek(ghandle->ffh[file], sizeof(int64_t) * first, ARTIO_SEEK_SET); if ( ret != ARTIO_SUCCESS ) return ret; ret = artio_file_fread(ghandle->ffh[file], &offset, 1, ARTIO_TYPE_LONG ); if ( ret != ARTIO_SUCCESS ) return ret; sfc = start; while ( sfc <= end ) { /* read next offset or compute end of file*/ if ( sfc < ghandle->file_sfc_index[file+1] - 1 ) { ret = artio_file_fread(ghandle->ffh[file], &size_offset, 1, ARTIO_TYPE_LONG ); if ( ret != ARTIO_SUCCESS ) return ret; next_offset = size_offset; } else { /* need to seek and ftell */ artio_file_fseek( ghandle->ffh[file], 0, ARTIO_SEEK_END ); artio_file_ftell( ghandle->ffh[file], &size_offset ); file++; if ( sfc < end && file < ghandle->num_grid_files ) { artio_file_fseek( ghandle->ffh[file], 0, ARTIO_SEEK_SET ); ret = artio_file_fread(ghandle->ffh[file], &next_offset, 1, ARTIO_TYPE_LONG ); if ( ret != ARTIO_SUCCESS ) return ret; } } /* this assumes (num_levels_per_root_tree)*sizeof(int) < * size of an oct, or 8*num_variables > max_level so the * number of levels drops off in rounding to int */ *num_octs_in_range += (size_offset - 
offset - sizeof(float)*ghandle->num_grid_variables - sizeof(int) ) / (8*(sizeof(float)*ghandle->num_grid_variables + sizeof(int) )); offset = next_offset; sfc++; } } return ARTIO_SUCCESS; } int artio_grid_cache_sfc_range(artio_fileset *handle, int64_t start, int64_t end) { int i; int ret; int first_file, last_file; int64_t first, count, cur; artio_grid_file *ghandle; if ( handle == NULL ) { return ARTIO_ERR_INVALID_HANDLE; } if (handle->open_mode != ARTIO_FILESET_READ || !(handle->open_type & ARTIO_OPEN_GRID) || handle->grid == NULL ) { return ARTIO_ERR_INVALID_FILESET_MODE; } if ( start > end || start < handle->proc_sfc_begin || end > handle->proc_sfc_end ) { return ARTIO_ERR_INVALID_SFC_RANGE; } ghandle = handle->grid; /* check if we've already cached the range */ if ( start >= ghandle->cache_sfc_begin && end <= ghandle->cache_sfc_end ) { return ARTIO_SUCCESS; } artio_grid_clear_sfc_cache(handle); first_file = artio_grid_find_file(ghandle, 0, ghandle->num_grid_files, start); last_file = artio_grid_find_file(ghandle, first_file, ghandle->num_grid_files, end); ghandle->cache_sfc_begin = start; ghandle->cache_sfc_end = end; ghandle->sfc_offset_table = (int64_t *)malloc(sizeof(int64_t) * (size_t)(end - start + 1)); if ( ghandle->sfc_offset_table == NULL ) { return ARTIO_ERR_MEMORY_ALLOCATION; } if ( ghandle->cur_file != -1 ) { artio_file_detach_buffer( ghandle->ffh[ghandle->cur_file]); ghandle->cur_file = -1; } cur = 0; for (i = first_file; i <= last_file; i++) { first = MAX( 0, start - ghandle->file_sfc_index[i] ); count = MIN( ghandle->file_sfc_index[i+1], end+1 ) - MAX( start, ghandle->file_sfc_index[i]); artio_file_attach_buffer( ghandle->ffh[i], ghandle->buffer, ghandle->buffer_size ); ret = artio_file_fseek(ghandle->ffh[i], sizeof(int64_t) * first, ARTIO_SEEK_SET); if ( ret != ARTIO_SUCCESS ) return ret; ret = artio_file_fread(ghandle->ffh[i], &ghandle->sfc_offset_table[cur], count, ARTIO_TYPE_LONG); if ( ret != ARTIO_SUCCESS ) return ret; artio_file_detach_buffer( ghandle->ffh[i] ); cur += count; } return ARTIO_SUCCESS; } int artio_grid_clear_sfc_cache( artio_fileset *handle ) { artio_grid_file *ghandle; if ( handle == NULL ) { return ARTIO_ERR_INVALID_HANDLE; } if (handle->open_mode != ARTIO_FILESET_READ || !(handle->open_type & ARTIO_OPEN_GRID) || handle->grid == NULL ) { return ARTIO_ERR_INVALID_FILESET_MODE; } ghandle = handle->grid; if ( ghandle->sfc_offset_table != NULL ) { free(ghandle->sfc_offset_table); ghandle->sfc_offset_table = NULL; } ghandle->cache_sfc_begin = -1; ghandle->cache_sfc_end = -1; return ARTIO_SUCCESS; } int artio_grid_find_file(artio_grid_file *ghandle, int start, int end, int64_t sfc) { int j; if ( start < 0 || start > ghandle->num_grid_files || end < 0 || end > ghandle->num_grid_files || sfc < ghandle->file_sfc_index[start] || sfc >= ghandle->file_sfc_index[end] ) { return -1; } if (start == end || sfc == ghandle->file_sfc_index[start]) { return start; } if (1 == end - start) { if (sfc < ghandle->file_sfc_index[end]) { return start; } else { return end; } } j = start + (end - start) / 2; if (sfc > ghandle->file_sfc_index[j]) { return artio_grid_find_file(ghandle, j, end, sfc); } else if (sfc < ghandle->file_sfc_index[j]) { return artio_grid_find_file(ghandle, start, j, sfc); } else { return j; } } int artio_grid_seek_to_sfc(artio_fileset *handle, int64_t sfc) { int64_t offset; artio_grid_file *ghandle; int file; if ( handle == NULL ) { return ARTIO_ERR_INVALID_HANDLE; } if ( !(handle->open_type & ARTIO_OPEN_GRID) || handle->grid == NULL ) { return 
ARTIO_ERR_INVALID_FILESET_MODE; } ghandle = handle->grid; if (ghandle->cache_sfc_begin == -1 || sfc < ghandle->cache_sfc_begin || sfc > ghandle->cache_sfc_end) { return ARTIO_ERR_INVALID_SFC; } file = artio_grid_find_file(ghandle, 0, ghandle->num_grid_files, sfc ); if ( file != ghandle->cur_file ) { if ( ghandle->cur_file != -1 ) { artio_file_detach_buffer( ghandle->ffh[ghandle->cur_file] ); } if ( ghandle->buffer_size > 0 ) { artio_file_attach_buffer( ghandle->ffh[file], ghandle->buffer, ghandle->buffer_size ); } ghandle->cur_file = file; } offset = ghandle->sfc_offset_table[sfc - ghandle->cache_sfc_begin]; return artio_file_fseek(ghandle->ffh[ghandle->cur_file], offset, ARTIO_SEEK_SET); } int artio_grid_write_root_cell_begin(artio_fileset *handle, int64_t sfc, float *variables, int num_oct_levels, int *num_octs_per_level) { int i; int ret; artio_grid_file *ghandle; if ( handle == NULL ) { return ARTIO_ERR_INVALID_HANDLE; } if (handle->open_mode != ARTIO_FILESET_WRITE || !(handle->open_type & ARTIO_OPEN_GRID) || handle->grid == NULL ) { return ARTIO_ERR_INVALID_FILESET_MODE; } ghandle = handle->grid; if (num_oct_levels < 0 || num_oct_levels > ghandle->file_max_level) { return ARTIO_ERR_INVALID_OCT_LEVELS; } ret = artio_grid_seek_to_sfc(handle, sfc); if ( ret != ARTIO_SUCCESS ) return ret; ret = artio_file_fwrite(ghandle->ffh[ghandle->cur_file], variables, ghandle->num_grid_variables, ARTIO_TYPE_FLOAT); if ( ret != ARTIO_SUCCESS ) return ret; ret = artio_file_fwrite(ghandle->ffh[ghandle->cur_file], &num_oct_levels, 1, ARTIO_TYPE_INT); if ( ret != ARTIO_SUCCESS ) return ret; ret = artio_file_fwrite(ghandle->ffh[ghandle->cur_file], num_octs_per_level, num_oct_levels, ARTIO_TYPE_INT); if ( ret != ARTIO_SUCCESS ) return ret; for (i = 0; i < num_oct_levels; i++) { ghandle->octs_per_level[i] = num_octs_per_level[i]; } ghandle->cur_sfc = sfc; ghandle->cur_num_levels = num_oct_levels; ghandle->cur_level = -1; ghandle->cur_octs = 0; return ARTIO_SUCCESS; } int artio_grid_write_root_cell_end(artio_fileset *handle) { if ( handle == NULL ) { return ARTIO_ERR_INVALID_HANDLE; } if (handle->open_mode != ARTIO_FILESET_WRITE || !(handle->open_type & ARTIO_OPEN_GRID) || handle->grid == NULL ) { return ARTIO_ERR_INVALID_FILESET_MODE; } handle->grid->cur_sfc = -1; return ARTIO_SUCCESS; } int artio_grid_write_level_begin(artio_fileset *handle, int level) { artio_grid_file *ghandle; if ( handle == NULL ) { return ARTIO_ERR_INVALID_HANDLE; } if (handle->open_mode != ARTIO_FILESET_WRITE || !(handle->open_type & ARTIO_OPEN_GRID) || handle->grid == NULL ) { return ARTIO_ERR_INVALID_FILESET_MODE; } ghandle = handle->grid; if (ghandle->cur_sfc == -1 || level <= 0 || level > ghandle->cur_num_levels) { return ARTIO_ERR_INVALID_STATE; } ghandle->cur_level = level; return ARTIO_SUCCESS; } int artio_grid_write_level_end(artio_fileset *handle) { artio_grid_file *ghandle; if ( handle == NULL ) { return ARTIO_ERR_INVALID_HANDLE; } if (handle->open_mode != ARTIO_FILESET_WRITE || !(handle->open_type & ARTIO_OPEN_GRID) || handle->grid == NULL ) { return ARTIO_ERR_INVALID_FILESET_MODE; } ghandle = handle->grid; if (ghandle->cur_level == -1 || ghandle->cur_octs != ghandle->octs_per_level[ghandle->cur_level - 1] ) { return ARTIO_ERR_INVALID_STATE; } ghandle->cur_level = -1; ghandle->cur_octs = 0; return ARTIO_SUCCESS; } int artio_grid_write_oct(artio_fileset *handle, float *variables, int *cellrefined) { int i; int ret; artio_grid_file *ghandle; if ( handle == NULL ) { return ARTIO_ERR_INVALID_HANDLE; } if (handle->open_mode != 
ARTIO_FILESET_WRITE || !(handle->open_type & ARTIO_OPEN_GRID) || handle->grid == NULL ) { return ARTIO_ERR_INVALID_FILESET_MODE; } ghandle = handle->grid; if (ghandle->cur_level == -1 || ghandle->cur_octs >= ghandle->octs_per_level[ghandle->cur_level - 1]) { return ARTIO_ERR_INVALID_STATE; } /* check that no last-level octs have refined cells */ if ( ghandle->cur_level == ghandle->cur_num_levels ) { for ( i = 0; i < 8; i++ ) { if ( cellrefined[i] > 0) { return ARTIO_ERR_INVALID_OCT_REFINED; } } } ret = artio_file_fwrite(ghandle->ffh[ghandle->cur_file], variables, 8 * ghandle->num_grid_variables, ARTIO_TYPE_FLOAT); if ( ret != ARTIO_SUCCESS ) return ret; ret = artio_file_fwrite(ghandle->ffh[ghandle->cur_file], cellrefined, 8, ARTIO_TYPE_INT); if ( ret != ARTIO_SUCCESS ) return ret; ghandle->cur_octs++; return ARTIO_SUCCESS; } /* * */ int artio_grid_read_root_cell_begin(artio_fileset *handle, int64_t sfc, double *pos, float *variables, int *num_oct_levels, int *num_octs_per_level) { int i; int ret; int coords[3]; artio_grid_file *ghandle; if ( handle == NULL ) { return ARTIO_ERR_INVALID_HANDLE; } if (handle->open_mode != ARTIO_FILESET_READ || !(handle->open_type & ARTIO_OPEN_GRID) || handle->grid == NULL ) { return ARTIO_ERR_INVALID_FILESET_MODE; } ghandle = handle->grid; ret = artio_grid_seek_to_sfc(handle, sfc); if ( ret != ARTIO_SUCCESS ) return ret; if ( variables == NULL ) { ret = artio_file_fseek( ghandle->ffh[ghandle->cur_file], ghandle->num_grid_variables*sizeof(float), ARTIO_SEEK_CUR ); if ( ret != ARTIO_SUCCESS ) return ret; } else { ret = artio_file_fread(ghandle->ffh[ghandle->cur_file], variables, ghandle->num_grid_variables, ARTIO_TYPE_FLOAT); if ( ret != ARTIO_SUCCESS ) return ret; } ret = artio_file_fread(ghandle->ffh[ghandle->cur_file], num_oct_levels, 1, ARTIO_TYPE_INT); if ( ret != ARTIO_SUCCESS ) return ret; if ( *num_oct_levels > ghandle->file_max_level || *num_oct_levels < 0 ) { printf("*num_oct_levels = %d\n", *num_oct_levels ); return ARTIO_ERR_INVALID_OCT_LEVELS; } if ( pos != NULL ) { ghandle->pos_flag = 1; /* compute position from sfc */ artio_sfc_coords( handle, sfc, coords ); for ( i = 0; i < 3; i++ ) { pos[i] = (double)coords[i] + 0.5; } if ( *num_oct_levels > 0 ) { /* compute next level position */ if ( ghandle->next_level_pos == NULL ) { ghandle->next_level_pos = (double *)malloc( 3*sizeof(double) ); if ( ghandle->next_level_pos == NULL ) { return ARTIO_ERR_MEMORY_ALLOCATION; } ghandle->next_level_size = 1; } for ( i = 0; i < 3; i++ ) { ghandle->next_level_pos[i] = pos[i]; } ghandle->pos_cur_level = 0; } else { ghandle->pos_cur_level = -1; } } else { ghandle->pos_flag = 0; } if (*num_oct_levels > 0) { ret = artio_file_fread(ghandle->ffh[ghandle->cur_file], num_octs_per_level, *num_oct_levels, ARTIO_TYPE_INT); if ( ret != ARTIO_SUCCESS ) return ret; for (i = 0; i < *num_oct_levels; i++) { ghandle->octs_per_level[i] = num_octs_per_level[i]; } } ghandle->cur_sfc = sfc; ghandle->cur_num_levels = *num_oct_levels; ghandle->cur_level = -1; return ARTIO_SUCCESS; } int artio_grid_read_oct(artio_fileset *handle, double *pos, float *variables, int *refined) { int i, j; int ret; int local_refined[8]; artio_grid_file *ghandle; if ( handle == NULL ) { return ARTIO_ERR_INVALID_HANDLE; } if (handle->open_mode != ARTIO_FILESET_READ || !(handle->open_type & ARTIO_OPEN_GRID) || handle->grid == NULL ) { return ARTIO_ERR_INVALID_FILESET_MODE; } ghandle = handle->grid; if (ghandle->cur_level == -1 || ghandle->cur_octs > ghandle->octs_per_level[ghandle->cur_level - 1] || (pos != NULL 
&& !ghandle->pos_flag )) {
        return ARTIO_ERR_INVALID_STATE;
    }

    if ( variables == NULL ) {
        ret = artio_file_fseek(ghandle->ffh[ghandle->cur_file],
            8*ghandle->num_grid_variables*sizeof(float), ARTIO_SEEK_CUR );
        if ( ret != ARTIO_SUCCESS ) return ret;
    } else {
        ret = artio_file_fread(ghandle->ffh[ghandle->cur_file],
            variables, 8 * ghandle->num_grid_variables, ARTIO_TYPE_FLOAT);
        if ( ret != ARTIO_SUCCESS ) return ret;
    }

    if ( !ghandle->pos_flag && refined == NULL ) {
        ret = artio_file_fseek(ghandle->ffh[ghandle->cur_file],
            8*sizeof(int), ARTIO_SEEK_CUR );
        if ( ret != ARTIO_SUCCESS ) return ret;
    } else {
        ret = artio_file_fread(ghandle->ffh[ghandle->cur_file],
            local_refined, 8, ARTIO_TYPE_INT);
        if ( ret != ARTIO_SUCCESS ) return ret;
    }

    if ( refined != NULL ) {
        for ( i = 0; i < 8; i++ ) {
            refined[i] = (local_refined[i] > 0);
        }
    }

    if ( ghandle->pos_flag ) {
        if ( pos != NULL ) {
            for ( i = 0; i < 3; i++ ) {
                pos[i] = ghandle->cur_level_pos[3*ghandle->cur_octs + i];
            }
        }
        for ( i = 0; i < 8; i++ ) {
            if ( local_refined[i] > 0) {
                if ( ghandle->next_level_oct >= ghandle->next_level_size ) {
                    return ARTIO_ERR_INVALID_STATE;
                }
                for ( j = 0; j < 3; j++ ) {
                    ghandle->next_level_pos[3*ghandle->next_level_oct+j] =
                        ghandle->cur_level_pos[3*ghandle->cur_octs + j] +
                        ghandle->cell_size_level*oct_pos_offsets[i][j];
                }
                ghandle->next_level_oct++;
            }
        }
    }

    ghandle->cur_octs++;
    return ARTIO_SUCCESS;
}

/*
 * Description: Seek to the start of the given level within the current
 *     root tree and prepare to read its octs
 */
int artio_grid_read_level_begin(artio_fileset *handle, int level) {
    int i;
    int ret;
    int64_t offset = 0;
    artio_grid_file *ghandle;
    int tmp_size;
    double *tmp_pos;

    if ( handle == NULL ) {
        return ARTIO_ERR_INVALID_HANDLE;
    }
    if (handle->open_mode != ARTIO_FILESET_READ ||
            !(handle->open_type & ARTIO_OPEN_GRID) ||
            handle->grid == NULL ) {
        return ARTIO_ERR_INVALID_FILESET_MODE;
    }
    ghandle = handle->grid;

    if ( ghandle->cur_sfc == -1 || level <= 0 ||
            level > ghandle->cur_num_levels ||
            (ghandle->pos_flag && ghandle->pos_cur_level != level - 1) ) {
        return ARTIO_ERR_INVALID_STATE;
    }

    if ( ghandle->pos_flag ) {
        ghandle->cell_size_level = 1.0 / (double)(1<<level);

        tmp_pos = ghandle->cur_level_pos;
        tmp_size = ghandle->cur_level_size;
        ghandle->cur_level_pos = ghandle->next_level_pos;
        ghandle->cur_level_size = ghandle->next_level_size;
        ghandle->next_level_pos = tmp_pos;
        ghandle->next_level_size = tmp_size;
        ghandle->pos_cur_level = level;

        if ( level < ghandle->cur_num_levels ) {
            /* ensure the buffer for the next level positions is large enough */
            if ( ghandle->octs_per_level[level] > ghandle->next_level_size ) {
                if ( ghandle->next_level_pos != NULL ) {
                    free( ghandle->next_level_pos );
                }
                ghandle->next_level_pos = (double *)malloc(
                    3*ghandle->octs_per_level[level]*sizeof(double) );
                if ( ghandle->next_level_pos == NULL ) {
                    return ARTIO_ERR_MEMORY_ALLOCATION;
                }
                ghandle->next_level_size = ghandle->octs_per_level[level];
            }
            ghandle->next_level_oct = 0;
        }
    }

    offset = ghandle->sfc_offset_table[ghandle->cur_sfc - ghandle->cache_sfc_begin];
    offset += sizeof(float) * ghandle->num_grid_variables +
        sizeof(int) * (ghandle->cur_num_levels + 1);
    for (i = 0; i < level - 1; i++) {
        offset += 8 * (sizeof(float) * ghandle->num_grid_variables +
            sizeof(int)) * ghandle->octs_per_level[i];
    }

    ret = artio_file_fseek(ghandle->ffh[ghandle->cur_file],
        offset, ARTIO_SEEK_SET);
    if ( ret != ARTIO_SUCCESS ) return ret;

    ghandle->cur_level = level;
    ghandle->cur_octs = 0;
    return ARTIO_SUCCESS;
}

/*
 * Description: Finish reading the current level
 */
int artio_grid_read_level_end(artio_fileset *handle) {
    artio_grid_file *ghandle;

    if (
handle == NULL ) { return ARTIO_ERR_INVALID_HANDLE; } if (handle->open_mode != ARTIO_FILESET_READ || !(handle->open_type & ARTIO_OPEN_GRID) || handle->grid == NULL ) { return ARTIO_ERR_INVALID_FILESET_MODE; } ghandle = handle->grid; if (ghandle->cur_level == -1 && ( ghandle->cur_level < ghandle->cur_num_levels - 1 || ghandle->next_level_oct != ghandle->octs_per_level[ghandle->cur_level] ) ) { return ARTIO_ERR_INVALID_STATE; } ghandle->cur_level = -1; ghandle->cur_octs = -1; ghandle->next_level_oct = -1; return ARTIO_SUCCESS; } int artio_grid_read_root_cell_end(artio_fileset *handle) { if ( handle == NULL ) { return ARTIO_ERR_INVALID_HANDLE; } if (handle->open_mode != ARTIO_FILESET_READ || !(handle->open_type & ARTIO_OPEN_GRID) || handle->grid == NULL ) { return ARTIO_ERR_INVALID_FILESET_MODE; } handle->grid->cur_sfc = -1; handle->grid->cur_level = -1; handle->grid->pos_flag = 0; handle->grid->pos_cur_level = -1; return ARTIO_SUCCESS; } int artio_grid_read_sfc_range_levels(artio_fileset *handle, int64_t sfc1, int64_t sfc2, int min_level_to_read, int max_level_to_read, int options, artio_grid_callback callback, void *params ) { int i, j; int64_t sfc; int oct, level; int ret; int *octs_per_level = NULL; int refined; int oct_refined[8]; int root_tree_levels; float *variables = NULL; double pos[3], cell_pos[3]; artio_grid_file *ghandle; if ( handle == NULL ) { return ARTIO_ERR_INVALID_HANDLE; } if (handle->open_mode != ARTIO_FILESET_READ || !(handle->open_type & ARTIO_OPEN_GRID) ) { return ARTIO_ERR_INVALID_FILESET_MODE; } if ( ( (options & ARTIO_RETURN_CELLS) && !(options & ARTIO_READ_LEAFS) && !(options & ARTIO_READ_REFINED)) || ( (options & ARTIO_RETURN_OCTS) && ((options & ARTIO_READ_LEAFS) || (options & ARTIO_READ_REFINED) ) && !((options & ARTIO_READ_ALL) == ARTIO_READ_ALL ) ) ) { return ARTIO_ERR_INVALID_CELL_TYPES; } ghandle = handle->grid; if ((min_level_to_read < 0) || (min_level_to_read > max_level_to_read)) { return ARTIO_ERR_INVALID_LEVEL; } octs_per_level = (int *)malloc(ghandle->file_max_level * sizeof(int)); variables = (float *)malloc(8*ghandle->num_grid_variables * sizeof(float)); if ( octs_per_level == NULL || variables == NULL ) { if ( octs_per_level != NULL ) free(octs_per_level); if ( variables != NULL ) free(variables); return ARTIO_ERR_MEMORY_ALLOCATION; } ret = artio_grid_cache_sfc_range(handle, sfc1, sfc2); if ( ret != ARTIO_SUCCESS ) { free(octs_per_level); free(variables); return ret; } for (sfc = sfc1; sfc <= sfc2; sfc++) { ret = artio_grid_read_root_cell_begin(handle, sfc, pos, variables, &root_tree_levels, octs_per_level); if ( ret != ARTIO_SUCCESS ) { free(octs_per_level); free(variables); return ret; } if (min_level_to_read == 0 && ((options & ARTIO_READ_REFINED && root_tree_levels > 0) || (options & ARTIO_READ_LEAFS && root_tree_levels == 0)) ) { refined = (root_tree_levels > 0) ? 
1 : 0; callback( sfc, 0, pos, variables, &refined, params ); } for (level = MAX(min_level_to_read,1); level <= MIN(root_tree_levels,max_level_to_read); level++) { ret = artio_grid_read_level_begin(handle, level); if ( ret != ARTIO_SUCCESS ) { free(octs_per_level); free(variables); return ret; } for (oct = 0; oct < octs_per_level[level - 1]; oct++) { ret = artio_grid_read_oct(handle, pos, variables, oct_refined); if ( ret != ARTIO_SUCCESS ) { free(octs_per_level); free(variables); return ret; } if ( options & ARTIO_RETURN_OCTS ) { callback( sfc, level, pos, variables, oct_refined, params ); } else { for (i = 0; i < 8; i++) { if ( (options & ARTIO_READ_REFINED && oct_refined[i]>0) || (options & ARTIO_READ_LEAFS && oct_refined[i]<=0) ) { for ( j = 0; j < 3; j++ ) { cell_pos[j] = pos[j] + ghandle->cell_size_level*oct_pos_offsets[i][j]; } callback( sfc, level, cell_pos, &variables[i * ghandle->num_grid_variables], &oct_refined[i], params ); } } } } artio_grid_read_level_end(handle); } artio_grid_read_root_cell_end(handle); } free(variables); free(octs_per_level); artio_grid_clear_sfc_cache(handle); return ARTIO_SUCCESS; } int artio_grid_read_sfc_range(artio_fileset *handle, int64_t sfc1, int64_t sfc2, int options, artio_grid_callback callback, void *params) { if ( handle == NULL ) { return ARTIO_ERR_INVALID_HANDLE; } if (handle->open_mode != ARTIO_FILESET_READ || !(handle->open_type & ARTIO_OPEN_GRID) || handle->grid == NULL ) { return ARTIO_ERR_INVALID_FILESET_MODE; } return artio_grid_read_sfc_range_levels( handle, sfc1, sfc2, 0, handle->grid->file_max_level, options, callback, params ); } int artio_grid_read_selection(artio_fileset *handle, artio_selection *selection, int options, artio_grid_callback callback, void *params ) { if ( handle == NULL ) { return ARTIO_ERR_INVALID_HANDLE; } if (handle->open_mode != ARTIO_FILESET_READ || !(handle->open_type & ARTIO_OPEN_GRID) || handle->grid == NULL ) { return ARTIO_ERR_INVALID_FILESET_MODE; } return artio_grid_read_selection_levels( handle, selection, 0, handle->grid->file_max_level, options, callback, params ); } int artio_grid_read_selection_levels( artio_fileset *handle, artio_selection *selection, int min_level_to_read, int max_level_to_read, int options, artio_grid_callback callback, void *params ) { int ret; int64_t start, end; /* loop over selected ranges */ artio_selection_iterator_reset( selection ); while ( artio_selection_iterator( selection, handle->num_root_cells, &start, &end ) == ARTIO_SUCCESS ) { ret = artio_grid_read_sfc_range_levels( handle, start, end, min_level_to_read, max_level_to_read, options, callback, params); if ( ret != ARTIO_SUCCESS ) return ret; } return ARTIO_SUCCESS; } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/artio/artio_headers/artio_internal.h0000644000175100001770000001473314714401662023345 0ustar00runnerdocker/********************************************************************** * Copyright (c) 2012-2013, Douglas H. Rudd * All rights reserved. * * This file is part of the artio library. * * artio is free software: you can redistribute it and/or modify * it under the terms of the GNU Lesser General Public License as * published by the Free Software Foundation, either version 3 of the * License, or (at your option) any later version. * * artio is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the
 * GNU Lesser General Public License for more details.
 *
 * Copies of the GNU Lesser General Public License and the GNU General
 * Public License are available in the file LICENSE, included with this
 * distribution.  If you failed to receive a copy of this file, see
 * <http://www.gnu.org/licenses/>
 **********************************************************************/

#ifndef __ARTIO_INTERNAL_H__
#define __ARTIO_INTERNAL_H__

#ifdef ARTIO_MPI
#include <mpi.h>
#endif

#include <limits.h>
#include <stdlib.h>

#ifdef _WIN32
typedef __int64 int64_t;
typedef __int32 int32_t;
#else
#include <stdint.h>
#endif

#include "artio_endian.h"

#ifndef ARTIO_DEFAULT_BUFFER_SIZE
#define ARTIO_DEFAULT_BUFFER_SIZE 65536
#endif
extern int artio_fh_buffer_size;

#define nDim 3

#ifndef MIN
#define MIN(x,y) (((x) < (y)) ? (x): (y))
#endif
#ifndef MAX
#define MAX(x,y) (((x) > (y)) ? (x): (y))
#endif

/* limit individual writes to 32-bit safe quantity */
#define ARTIO_IO_MAX (1<<30)

#ifdef INT64_MAX
#define ARTIO_INT64_MAX INT64_MAX
#else
#define ARTIO_INT64_MAX 0x7fffffffffffffffLL
#endif

typedef struct ARTIO_FH artio_fh;

typedef struct artio_particle_file_struct {
    artio_fh **ffh;
    void *buffer;
    int buffer_size;
    int num_particle_files;
    int64_t *file_sfc_index;
    int64_t cache_sfc_begin;
    int64_t cache_sfc_end;
    int64_t *sfc_offset_table;

    /* maintained for consistency and user-error detection */
    int num_species;
    int cur_file;
    int cur_species;
    int cur_particle;
    int64_t cur_sfc;
    int *num_primary_variables;
    int *num_secondary_variables;
    int *num_particles_per_species;
} artio_particle_file;

typedef struct artio_grid_file_struct {
    artio_fh **ffh;
    void *buffer;
    int buffer_size;
    int num_grid_variables;
    int num_grid_files;
    int64_t *file_sfc_index;
    int64_t cache_sfc_begin;
    int64_t cache_sfc_end;
    int64_t *sfc_offset_table;
    int file_max_level;

    /* maintained for consistency and user-error detection */
    int cur_file;
    int cur_num_levels;
    int cur_level;
    int cur_octs;
    int64_t cur_sfc;
    int *octs_per_level;

    int pos_flag;
    int pos_cur_level;
    int next_level_size;
    int cur_level_size;
    double cell_size_level;
    double *next_level_pos;
    double *cur_level_pos;
    int next_level_oct;
} artio_grid_file;

typedef struct parameter_struct {
    int key_length;
    char key[64];
    int val_length;
    int type;
    char *value;
    struct parameter_struct * next;
} parameter;

typedef struct parameter_list_struct {
    parameter * head;
    parameter * tail;
    parameter * cursor;
    int iterate_flag;
} parameter_list;

struct artio_fileset_struct {
    char file_prefix[256];
    int endian_swap;
    int open_type;
    int open_mode;
    int rank;
    int num_procs;
    artio_context *context;

    int64_t *proc_sfc_index;
    int64_t proc_sfc_begin;
    int64_t proc_sfc_end;
    int64_t num_root_cells;
    int sfc_type;
    int nBitsPerDim;
    int num_grid;

    parameter_list *parameters;
    artio_grid_file *grid;
    artio_particle_file *particle;
};

struct artio_selection_struct {
    int64_t *list;
    int size;
    int num_ranges;
    int cursor;
    int64_t subcycle;
    artio_fileset *fileset;
};

#define ARTIO_FILESET_READ 0
#define ARTIO_FILESET_WRITE 1

#define ARTIO_MODE_READ 1
#define ARTIO_MODE_WRITE 2
#define ARTIO_MODE_ACCESS 4
#define ARTIO_MODE_ENDIAN_SWAP 8

#define ARTIO_SEEK_SET 0
#define ARTIO_SEEK_CUR 1
#define ARTIO_SEEK_END 2

/* wrapper functions for profiling and debugging */
artio_fh *artio_file_fopen( char * filename, int amode, const artio_context *context );
int artio_file_attach_buffer( artio_fh *handle, void *buf, int buf_size );
int artio_file_detach_buffer( artio_fh *handle );
int artio_file_fwrite(artio_fh *handle, const void *buf, int64_t count, int type );
int artio_file_ftell( artio_fh *handle, int64_t *offset );
int
#define ARTIO_FILESET_READ  0
#define ARTIO_FILESET_WRITE 1

#define ARTIO_MODE_READ         1
#define ARTIO_MODE_WRITE        2
#define ARTIO_MODE_ACCESS       4
#define ARTIO_MODE_ENDIAN_SWAP  8

#define ARTIO_SEEK_SET  0
#define ARTIO_SEEK_CUR  1
#define ARTIO_SEEK_END  2

/* wrapper functions for profiling and debugging */
artio_fh *artio_file_fopen( char * filename, int amode, const artio_context *context );
int artio_file_attach_buffer( artio_fh *handle, void *buf, int buf_size );
int artio_file_detach_buffer( artio_fh *handle );
int artio_file_fwrite(artio_fh *handle, const void *buf, int64_t count, int type );
int artio_file_ftell( artio_fh *handle, int64_t *offset );
int artio_file_fflush(artio_fh *handle);
int artio_file_fseek(artio_fh *ffh, int64_t offset, int whence);
int artio_file_fread(artio_fh *handle, void *buf, int64_t count, int type );
int artio_file_fclose(artio_fh *handle);
void artio_file_set_endian_swap_tag(artio_fh *handle);

/* internal versions */
artio_fh *artio_file_fopen_i( char * filename, int amode, const artio_context *context );
int artio_file_attach_buffer_i( artio_fh *handle, void *buf, int buf_size );
int artio_file_detach_buffer_i( artio_fh *handle );
int artio_file_fwrite_i(artio_fh *handle, const void *buf, int64_t count, int type );
int artio_file_ftell_i( artio_fh *handle, int64_t *offset );
int artio_file_fflush_i(artio_fh *handle);
int artio_file_fseek_i(artio_fh *ffh, int64_t offset, int whence);
int artio_file_fread_i(artio_fh *handle, void *buf, int64_t count, int type );
int artio_file_fclose_i(artio_fh *handle);
void artio_file_set_endian_swap_tag_i(artio_fh *handle);

#define ARTIO_ENDIAN_MAGIC  0x1234

parameter_list *artio_parameter_list_init(void);
parameter *artio_parameter_list_search(parameter_list *parameters, const char *key);
int artio_parameter_array_length( parameter *item );
int artio_parameter_list_insert(parameter_list *parameters, const char *key,
        int length, void * value, int type);
int artio_parameter_read(artio_fh *handle, parameter_list *parameters);
int artio_parameter_write(artio_fh *handle, parameter_list *parameters);
int artio_parameter_list_print(parameter_list *parameters);
int artio_parameter_list_free(parameter_list *parameters);
size_t artio_type_size( int type );
int artio_parameter_list_unpack(parameter_list *parameters, const char *key,
        int length, void *value, int type );

#define ARTIO_SFC_SLAB_X    0
#define ARTIO_SFC_MORTION   1   /* (sic) Morton ordering */
#define ARTIO_SFC_HILBERT   2
#define ARTIO_SFC_SLAB_Y    3
#define ARTIO_SFC_SLAB_Z    4

int64_t artio_sfc_index_position( artio_fileset *handle, double position[nDim] );
int64_t artio_sfc_index( artio_fileset *handle, int coords[nDim] );
void artio_sfc_coords( artio_fileset *handle, int64_t index, int coords[nDim] );

#endif /* __ARTIO_INTERNAL_H__ */
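/*
 * Usage sketch (illustrative, not part of the header): the three SFC
 * helpers above form a round trip between integer root-cell coordinates
 * and the space-filling-curve index used for file layout.  A minimal
 * consistency check, assuming an already-open fileset handle:
 */
#if 0 /* example only; excluded from compilation */
static int sfc_round_trip_ok( artio_fileset *handle, int coords[nDim] ) {
    int back[nDim], d;
    int64_t index = artio_sfc_index( handle, coords );
    artio_sfc_coords( handle, index, back );
    for ( d = 0; d < nDim; d++ ) {
        if ( back[d] != coords[d] ) return 0;
    }
    return 1;
}
#endif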
yt-4.4.0/yt/frontends/artio/artio_headers/artio_mpi.c

/**********************************************************************
 * Copyright (c) 2012-2013, Douglas H. Rudd
 * All rights reserved.
 *
 * This file is part of the artio library.
 *
 * artio is free software: you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as
 * published by the Free Software Foundation, either version 3 of the
 * License, or (at your option) any later version.
 *
 * artio is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Lesser General Public License for more details.
 *
 * Copies of the GNU Lesser General Public License and the GNU General
 * Public License are available in the file LICENSE, included with this
 * distribution. If you failed to receive a copy of this file, see
 * <http://www.gnu.org/licenses/>
 **********************************************************************/

#include "artio.h"
#include "artio_internal.h"

#ifdef ARTIO_MPI

/* include targets lost in extraction; reconstructed from usage
 * (MPI calls, malloc/free, memcpy) -- assumption */
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

#ifdef _WIN32
typedef __int64 int64_t;
typedef __int32 int32_t;
#else
#include <stdint.h>
#endif

artio_context artio_context_global_struct = { MPI_COMM_WORLD };
const artio_context *artio_context_global = &artio_context_global_struct;

struct ARTIO_FH {
    MPI_File fh;
    MPI_Comm comm;
    int mode;
    char *data;
    int bfsize;
    int bfptr;
    int bfend;
};

artio_fh *artio_file_fopen_i( char * filename, int mode, const artio_context *context) {
    int status;
    int flag;
    int rank;
    int amode;

    if ( mode & ARTIO_MODE_WRITE && mode & ARTIO_MODE_READ ) {
        return NULL;
    } else if ( mode & ARTIO_MODE_WRITE ) {
        amode = MPI_MODE_CREATE | MPI_MODE_WRONLY;
    } else if ( mode & ARTIO_MODE_READ ) {
        amode = MPI_MODE_RDONLY;
    } else {
        return NULL;
    }

    artio_fh *ffh = (artio_fh *)malloc(sizeof(artio_fh));
    if ( ffh == NULL ) {
        return NULL;
    }

    ffh->mode = mode;
    ffh->data = NULL;
    ffh->bfsize = -1;
    ffh->bfend = -1;
    ffh->bfptr = -1;

    flag = mode & ARTIO_MODE_ACCESS;
    MPI_Comm_rank( context->comm, &rank );
    MPI_Comm_split( context->comm, flag, rank, &ffh->comm );

    if ( flag ) {
        status = MPI_File_open( ffh->comm, filename, amode, MPI_INFO_NULL, &ffh->fh);
        if (status != MPI_SUCCESS) {
            MPI_Comm_free(&ffh->comm);
            free( ffh );
            return NULL;
        }

        /* truncate the file on write */
        if ( mode & ARTIO_MODE_WRITE ) {
            MPI_File_set_size(ffh->fh, 0);
        }
    }

    return ffh;
}

int artio_file_attach_buffer_i( artio_fh *handle, void *buf, int buf_size ) {
    if ( !(handle->mode & ARTIO_MODE_ACCESS ) ) {
        return ARTIO_ERR_INVALID_FILE_MODE;
    }
    if ( handle->data != NULL ) {
        return ARTIO_ERR_BUFFER_EXISTS;
    }

    handle->bfsize = buf_size;
    handle->bfend = -1;
    handle->bfptr = 0;
    handle->data = (char *)buf;
    return ARTIO_SUCCESS;
}

int artio_file_detach_buffer_i( artio_fh *handle ) {
    int ret;
    ret = artio_file_fflush(handle);
    if ( ret != ARTIO_SUCCESS ) return ret;

    handle->data = NULL;
    handle->bfsize = -1;
    handle->bfend = -1;
    handle->bfptr = -1;
    return ARTIO_SUCCESS;
}

int artio_file_fwrite_i( artio_fh *handle, const void *buf, int64_t count, int type ) {
    size_t size;
    int64_t remain;
    int size32;
    char *p;

    if ( !(handle->mode & ARTIO_MODE_WRITE) ||
            !(handle->mode & ARTIO_MODE_ACCESS) ) {
        return ARTIO_ERR_INVALID_FILE_MODE;
    }

    size = artio_type_size( type );
    if ( size == (size_t)-1 ) {
        return ARTIO_ERR_INVALID_DATATYPE;
    }
    if ( count > ARTIO_INT64_MAX / size ) {
        return ARTIO_ERR_IO_OVERFLOW;
    }
    remain = size*count;
    p = (char *)buf;

    if ( handle->data == NULL ) {
        while ( remain > 0 ) {
            size32 = MIN( ARTIO_IO_MAX, remain );
            if ( MPI_File_write( handle->fh, p, size32, MPI_BYTE,
                    MPI_STATUS_IGNORE ) != MPI_SUCCESS ) {
                return ARTIO_ERR_IO_WRITE;
            }
            remain -= size32;
            p += size32;
        }
    } else if ( remain < handle->bfsize - handle->bfptr ) {
        memcpy( handle->data + handle->bfptr, p, (size_t)remain );
        handle->bfptr += remain;
    } else {
        /* complete buffer */
        size32 = handle->bfsize - handle->bfptr;
        memcpy( handle->data + handle->bfptr, p, size32 );
        if ( MPI_File_write(handle->fh, handle->data, handle->bfsize,
                MPI_BYTE, MPI_STATUS_IGNORE ) != MPI_SUCCESS ) {
            return ARTIO_ERR_IO_WRITE;
        }
        p += size32;
        remain -= size32;

        while ( remain > handle->bfsize ) {
            if ( MPI_File_write(handle->fh, p, handle->bfsize,
                    MPI_BYTE, MPI_STATUS_IGNORE ) != MPI_SUCCESS ) {
                return ARTIO_ERR_IO_WRITE;
            }
            remain -= handle->bfsize;
            p += handle->bfsize;
        }

        memcpy( handle->data, p, (size_t)remain);
        handle->bfptr = remain;
    }

    return ARTIO_SUCCESS;
}
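/*
 * Note on the three write paths above (descriptive comment added for
 * clarity): (1) unbuffered handles issue direct MPI_File_write calls in
 * chunks of at most ARTIO_IO_MAX bytes; (2) writes that fit in the
 * remaining buffer space are only memcpy'd; (3) larger writes first
 * complete and flush the current buffer, stream any whole buffer-sized
 * blocks directly, and leave the tail in the buffer for a later flush.
 */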
int artio_file_fflush_i(artio_fh *handle) {
    if ( !(handle->mode & ARTIO_MODE_ACCESS) ) {
        return ARTIO_ERR_INVALID_FILE_MODE;
    }

    if ( handle->mode & ARTIO_MODE_WRITE ) {
        if ( handle->bfptr > 0 ) {
            if ( MPI_File_write(handle->fh, handle->data, handle->bfptr,
                    MPI_BYTE, MPI_STATUS_IGNORE ) != MPI_SUCCESS ) {
                return ARTIO_ERR_IO_WRITE;
            }
            handle->bfptr = 0;
        }
    } else if ( handle->mode & ARTIO_MODE_READ ) {
        handle->bfend = -1;
        handle->bfptr = 0;
    } else {
        return ARTIO_ERR_INVALID_FILE_MODE;
    }

    return ARTIO_SUCCESS;
}

int artio_file_fread_i(artio_fh *handle, void *buf, int64_t count, int type ) {
    MPI_Status status;
    size_t size, avail, remain;
    int size_read, size32;
    char *p;

    if ( !(handle->mode & ARTIO_MODE_READ) ) {
        return ARTIO_ERR_INVALID_FILE_MODE;
    }

    size = artio_type_size( type );
    if ( size == (size_t)-1 ) {
        return ARTIO_ERR_INVALID_DATATYPE;
    }
    if ( count > ARTIO_INT64_MAX / size ) {
        return ARTIO_ERR_IO_OVERFLOW;
    }
    remain = size*count;
    p = (char *)buf;

    if ( handle->data == NULL ) {
        while ( remain > 0 ) {
            size32 = MIN( ARTIO_IO_MAX, remain );
            if ( MPI_File_read(handle->fh, p, size32, MPI_BYTE, &status ) != MPI_SUCCESS ) {
                return ARTIO_ERR_IO_READ;
            }
            MPI_Get_count( &status, MPI_BYTE, &size_read );
            if ( size_read != size32 ) {
                return ARTIO_ERR_INSUFFICIENT_DATA;
            }
            remain -= size32;
            p += size32;
        }
    } else {
        if ( handle->bfend == -1 ) {
            /* load initial data into buffer */
            if ( MPI_File_read(handle->fh, handle->data, handle->bfsize,
                    MPI_BYTE, &status) != MPI_SUCCESS ) {
                return ARTIO_ERR_IO_READ;
            }
            MPI_Get_count(&status, MPI_BYTE, &handle->bfend);
            handle->bfptr = 0;
        }

        /* read from buffer */
        while ( remain > 0 && handle->bfend > 0 &&
                handle->bfptr + remain >= handle->bfend ) {
            avail = handle->bfend - handle->bfptr;
            memcpy( p, handle->data + handle->bfptr, avail );
            p += avail;
            remain -= avail;

            /* refill buffer */
            if ( MPI_File_read(handle->fh, handle->data, handle->bfsize,
                    MPI_BYTE, &status ) != MPI_SUCCESS ) {
                return ARTIO_ERR_IO_READ;
            }
            MPI_Get_count(&status, MPI_BYTE, &handle->bfend );
            handle->bfptr = 0;
        }

        if ( remain > 0 ) {
            if ( handle->bfend == 0 ) {
                /* ran out of data, eof */
                return ARTIO_ERR_INSUFFICIENT_DATA;
            }
            memcpy( p, handle->data + handle->bfptr, (size_t)remain );
            handle->bfptr += (int)remain;
        }
    }

    if (handle->mode & ARTIO_MODE_ENDIAN_SWAP) {
        switch (type) {
            case ARTIO_TYPE_INT :
                artio_int_swap( (int32_t *)buf, count );
                break;
            case ARTIO_TYPE_FLOAT :
                artio_float_swap( (float *)buf, count );
                break;
            case ARTIO_TYPE_DOUBLE :
                artio_double_swap( (double *)buf, count );
                break;
            case ARTIO_TYPE_LONG :
                artio_long_swap( (int64_t *)buf, count );
                break;
            default :
                return ARTIO_ERR_INVALID_DATATYPE;
        }
    }

    return ARTIO_SUCCESS;
}

int artio_file_ftell_i(artio_fh *handle, int64_t *offset) {
    MPI_Offset current;
    MPI_File_get_position( handle->fh, &current );

    /* correct the physical file position for data still sitting in the
     * attached buffer */
    if ( handle->bfend > 0 ) {
        current -= handle->bfend;
    }
    if ( handle->bfptr > 0 ) {
        current += handle->bfptr;
    }
    *offset = (int64_t)current;
    return ARTIO_SUCCESS;
}
int artio_file_fseek_i(artio_fh *handle, int64_t offset, int whence ) {
    MPI_Offset current;

    if ( handle->mode & ARTIO_MODE_ACCESS ) {
        if ( whence == ARTIO_SEEK_CUR ) {
            if ( offset == 0 ) {
                return ARTIO_SUCCESS;
            } else if ( handle->mode & ARTIO_MODE_READ &&
                    handle->bfend > 0 &&
                    handle->bfptr + offset >= 0 &&
                    handle->bfptr + offset < handle->bfend ) {
                handle->bfptr += offset;
                return ARTIO_SUCCESS;
            } else {
                if ( handle->bfptr > 0 ) {
                    current = (MPI_Offset)offset - handle->bfend + handle->bfptr;
                } else {
                    current = (MPI_Offset)offset;
                }
                artio_file_fflush( handle );
                MPI_File_seek( handle->fh, current, MPI_SEEK_CUR );
            }
        } else if ( whence == ARTIO_SEEK_SET ) {
            MPI_File_get_position( handle->fh, &current );
            if (handle->mode & ARTIO_MODE_WRITE &&
                    current <= offset &&
                    offset < current + handle->bfsize &&
                    handle->bfptr == offset - current ) {
                return ARTIO_SUCCESS;
            } else if ( handle->mode & ARTIO_MODE_READ &&
                    handle->bfptr > 0 &&
                    handle->bfend > 0 &&
                    handle->bfptr < handle->bfend &&
                    offset >= current - handle->bfend &&
                    offset < current ) {
                handle->bfptr = offset - current + handle->bfend;
            } else {
                artio_file_fflush( handle );
                MPI_File_seek( handle->fh, (MPI_Offset)offset, MPI_SEEK_SET );
            }
        } else if ( whence == ARTIO_SEEK_END ) {
            artio_file_fflush(handle);
            MPI_File_seek( handle->fh, (MPI_Offset)offset, MPI_SEEK_END );
        } else {
            /* unknown whence */
            return ARTIO_ERR_INVALID_SEEK;
        }
    } else {
        /* seek on non-active file handle */
        return ARTIO_ERR_INVALID_FILE_MODE;
    }

    return ARTIO_SUCCESS;
}

int artio_file_fclose_i(artio_fh *handle) {
    if ( handle->mode & ARTIO_MODE_ACCESS ) {
        artio_file_fflush(handle);
        MPI_File_close(&handle->fh);
    }
    MPI_Comm_free(&handle->comm);
    free(handle);
    return ARTIO_SUCCESS;
}

void artio_file_set_endian_swap_tag_i(artio_fh *handle) {
    handle->mode |= ARTIO_MODE_ENDIAN_SWAP;
}

#endif /* MPI */

yt-4.4.0/yt/frontends/artio/artio_headers/artio_mpi.h

/*
 * artio_mpi.h
 *
 * Created on: Mar 6, 2012
 * Author: Nick Gnedin
 */
#ifndef __ARTIO_MPI_H__
#define __ARTIO_MPI_H__

/* include target lost in extraction; MPI_Comm below requires <mpi.h> */
#include <mpi.h>

struct artio_context_struct {
    MPI_Comm comm;
};

#endif /* __ARTIO_MPI_H__ */
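/*
 * Usage sketch (illustrative): artio_context is a thin wrapper around an
 * MPI communicator, so a fileset can be restricted to a sub-communicator
 * rather than MPI_COMM_WORLD.  artio_fileset_open() stands for the public
 * open call declared in artio.h; its exact signature is an assumption of
 * this sketch, as is the use of ARTIO_OPEN_GRID as the open type.
 */
#if 0 /* example only; excluded from compilation */
static artio_fileset *open_on_subcomm(char *prefix, MPI_Comm sub_comm) {
    artio_context ctx;
    ctx.comm = sub_comm;  /* every rank of sub_comm must participate */
    return artio_fileset_open(prefix, ARTIO_OPEN_GRID, &ctx);
}
#endif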
yt-4.4.0/yt/frontends/artio/artio_headers/artio_parameter.c

/**********************************************************************
 * Copyright (c) 2012-2013, Douglas H. Rudd
 * All rights reserved.
 *
 * This file is part of the artio library.
 *
 * artio is free software: you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as
 * published by the Free Software Foundation, either version 3 of the
 * License, or (at your option) any later version.
 *
 * artio is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Lesser General Public License for more details.
 *
 * Copies of the GNU Lesser General Public License and the GNU General
 * Public License are available in the file LICENSE, included with this
 * distribution. If you failed to receive a copy of this file, see
 * <http://www.gnu.org/licenses/>
 **********************************************************************/

#include "artio.h"
#include "artio_internal.h"

/* include targets lost in extraction; reconstructed from usage
 * (malloc/free, printf, strcmp/strcpy/memcpy) -- assumption */
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

#ifdef _WIN32
typedef __int64 int64_t;
typedef __int32 int32_t;
#else
#include <stdint.h>
#endif

size_t artio_type_size(int type) {
    size_t t_len=0;

    switch (type) {
        case ARTIO_TYPE_STRING:
        case ARTIO_TYPE_CHAR:
            t_len = sizeof(char);
            break;
        case ARTIO_TYPE_INT:
            t_len = sizeof(int32_t);
            break;
        case ARTIO_TYPE_FLOAT:
            t_len = sizeof(float);
            break;
        case ARTIO_TYPE_DOUBLE:
            t_len = sizeof(double);
            break;
        case ARTIO_TYPE_LONG:
            t_len = sizeof(int64_t);
            break;
        default:
            t_len = (size_t)-1;
            break;
    }
    return t_len;
}

parameter_list *artio_parameter_list_init() {
    parameter_list *parameters = (parameter_list *)malloc(sizeof(parameter_list));
    if ( parameters != NULL ) {
        parameters->head = NULL;
        parameters->tail = NULL;
        parameters->cursor = NULL;
        parameters->iterate_flag = 0;
    }
    return parameters;
}

int artio_parameter_read(artio_fh *handle, parameter_list *parameters) {
    parameter * item;
    int i;
    int length, re;
    int t_len;
    int32_t endian_tag;

    /* endian check */
    re = artio_file_fread(handle, &endian_tag, 1, ARTIO_TYPE_INT);
    if ( re != ARTIO_SUCCESS ) {
        return ARTIO_ERR_PARAM_CORRUPTED;
    }
    if ( endian_tag != ARTIO_ENDIAN_MAGIC ) {
        artio_int_swap( &endian_tag, 1 );
        if ( endian_tag == ARTIO_ENDIAN_MAGIC ) {
            artio_file_set_endian_swap_tag(handle);
        } else {
            return ARTIO_ERR_PARAM_CORRUPTED_MAGIC;
        }
    }

    re = artio_file_fread(handle, &length, 1, ARTIO_TYPE_INT);
    if ( re != ARTIO_SUCCESS ) {
        return ARTIO_ERR_PARAM_CORRUPTED;
    }

    for ( i = 0; i < length; i++ ) {
        item = (parameter *)malloc(sizeof(parameter));
        if ( item == NULL ) {
            return ARTIO_ERR_MEMORY_ALLOCATION;
        }

        artio_file_fread(handle, &item->key_length, 1, ARTIO_TYPE_INT);
        artio_file_fread(handle, item->key, item->key_length, ARTIO_TYPE_CHAR);
        item->key[item->key_length] = 0;
        artio_file_fread(handle, &item->val_length, 1, ARTIO_TYPE_INT);
        artio_file_fread(handle, &item->type, 1, ARTIO_TYPE_INT);

        t_len = artio_type_size(item->type);
        item->value = (char *)malloc(item->val_length * t_len);
        re = artio_file_fread(handle, item->value, item->val_length, item->type);
        if ( re != ARTIO_SUCCESS ) {
            return ARTIO_ERR_PARAM_CORRUPTED;
        }

        item->next = NULL;
        if (NULL == parameters->tail) {
            parameters->tail = item;
            parameters->head = item;
        } else {
            parameters->tail->next = item;
            parameters->tail = item;
        }
    }

    return ARTIO_SUCCESS;
}

int artio_parameter_write(artio_fh *handle, parameter_list *parameters) {
    parameter * item;

    /* retain a number for endian check */
    int32_t endian_tag = ARTIO_ENDIAN_MAGIC;
    int32_t length = 0;

    item = parameters->head;
    while (NULL != item) {
        length++;
        item = item->next;
    }

    artio_file_fwrite(handle, &endian_tag, 1, ARTIO_TYPE_INT);
    artio_file_fwrite(handle, &length, 1, ARTIO_TYPE_INT);

    item = parameters->head;
    while (NULL != item) {
        artio_file_fwrite(handle, &item->key_length, 1, ARTIO_TYPE_INT);
        artio_file_fwrite(handle, item->key, item->key_length, ARTIO_TYPE_CHAR);
        artio_file_fwrite(handle, &item->val_length, 1, ARTIO_TYPE_INT);
        artio_file_fwrite(handle, &item->type, 1, ARTIO_TYPE_INT);
        artio_file_fwrite(handle, item->value, item->val_length, item->type);
        item = item->next;
    }

    return ARTIO_SUCCESS;
}
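/*
 * Descriptive note (added): the parameter block serialized by
 * artio_parameter_read/artio_parameter_write above has the layout
 *
 *   int32  endian tag (ARTIO_ENDIAN_MAGIC, used to detect byte order)
 *   int32  number of parameters
 *   per parameter:
 *     int32  key length, followed by that many chars (no terminator)
 *     int32  value array length
 *     int32  type (ARTIO_TYPE_*)
 *     value  array, length * artio_type_size(type) bytes
 */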
int artio_parameter_iterate( artio_fileset *handle, char *key, int *type, int *length ) {
    parameter *item;
    parameter_list *parameters = handle->parameters;

    if ( parameters->iterate_flag == 0 ) {
        parameters->cursor = parameters->head;
        parameters->iterate_flag = 1;
    }
    if ( parameters->cursor == NULL ) {
        parameters->iterate_flag = 0;
        return ARTIO_PARAMETER_EXHAUSTED;
    }

    item = parameters->cursor;
    strncpy( key, item->key, 64 );
    *type = item->type;
    *length = artio_parameter_array_length(item);
    parameters->cursor = item->next;

    return ARTIO_SUCCESS;
}

parameter *artio_parameter_list_search(parameter_list * parameters, const char *key ) {
    parameter * item = parameters->head;
    while ( NULL != item && strcmp(item->key, key) ) {
        item = item->next;
    }
    return item;
}

int artio_parameter_list_insert(parameter_list * parameters, const char * key,
        int length, void *value, int type) {
    int key_len;
    size_t val_len = 0;
    parameter * item;

    if ( length <= 0 ) {
        return ARTIO_ERR_PARAM_LENGTH_INVALID;
    }

    item = artio_parameter_list_search(parameters, key);
    if (NULL != item) {
        return ARTIO_ERR_PARAM_DUPLICATE;
    }

    /* create the list node; the key is assumed to fit the fixed
     * 64-byte key field of struct parameter */
    item = (parameter *)malloc(sizeof(parameter));
    if ( item == NULL ) {
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }

    key_len = strlen(key);
    item->key_length = key_len;
    strcpy(item->key, key);

    item->val_length = length;
    item->type = type;

    val_len = artio_type_size(type);
    item->value = (char *)malloc(length * val_len);
    if ( item->value == NULL ) {
        free(item);
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }
    memcpy(item->value, value, length * val_len);
    item->next = NULL;

    /* add to the list */
    if (NULL == parameters->tail) {
        parameters->tail = item;
        parameters->head = item;
    } else {
        parameters->tail->next = item;
        parameters->tail = item;
    }

    return ARTIO_SUCCESS;
}

int artio_parameter_list_unpack(parameter_list *parameters, const char *key,
        int length, void *value, int type) {
    size_t t_len;
    parameter *item = artio_parameter_list_search(parameters, key);
    if (item != NULL) {
        if (length != item->val_length ) {
            return ARTIO_ERR_PARAM_LENGTH_MISMATCH;
        } else if ( type != item->type ) {
            return ARTIO_ERR_PARAM_TYPE_MISMATCH;
        } else {
            t_len = artio_type_size(type);
            memcpy(value, item->value, item->val_length * t_len);
        }
    } else {
        return ARTIO_ERR_PARAM_NOT_FOUND;
    }

    return ARTIO_SUCCESS;
}

int artio_parameter_list_unpack_index(parameter_list *parameters, const char *key,
        int index, void *value, int type) {
    size_t t_len;
    parameter *item;

    if ( index < 0 ) {
        return ARTIO_ERR_INVALID_INDEX;
    }

    item = artio_parameter_list_search(parameters, key);
    if (item != NULL) {
        if (index >= item->val_length ) {
            return ARTIO_ERR_PARAM_LENGTH_MISMATCH;
        } else if ( type != item->type ) {
            return ARTIO_ERR_PARAM_TYPE_MISMATCH;
        } else {
            t_len = artio_type_size(type);
            memcpy(value, item->value+index*t_len, t_len);
        }
    } else {
        return ARTIO_ERR_PARAM_NOT_FOUND;
    }

    return ARTIO_SUCCESS;
}

int artio_parameter_array_length( parameter *item ) {
    int i, length;

    if ( item->type == ARTIO_TYPE_STRING ) {
        length = 0;
        for ( i = 0; i < item->val_length; i++ ) {
            if ( item->value[i] == '\0' ) {
                length++;
            }
        }
    } else {
        length = item->val_length;
    }

    return length;
}

int artio_parameter_get_array_length(artio_fileset *handle, const char * key, int *length) {
    parameter *item = artio_parameter_list_search(handle->parameters, key);
    if (item != NULL) {
        *length = artio_parameter_array_length(item);
    } else {
        return ARTIO_ERR_PARAM_NOT_FOUND;
    }
    return ARTIO_SUCCESS;
}

int artio_parameter_list_free(parameter_list * parameters) {
    parameter * tmp;
    parameter * item;

    if ( parameters != NULL ) {
        item = parameters->head;
        while (NULL != item) {
            tmp = item;
            item = item->next;
            free(tmp->value);
            free(tmp);
        }
        parameters->head = NULL;
        parameters->tail = NULL;
        free( parameters );
    }

    return ARTIO_SUCCESS;
}
int artio_parameter_list_print(parameter_list * parameters) {
    int32_t a;
    float b;
    double c;
    int64_t d;
    parameter * item = parameters->head;

    while (NULL != item) {
        switch ( item->type ) {
            case ARTIO_TYPE_STRING:
                printf("string: key %s %s\n", item->key, item->value);
                break;
            case ARTIO_TYPE_CHAR:
                printf("char: key %s %c\n", item->key, *item->value);
                break;
            case ARTIO_TYPE_INT:
                memcpy(&a, item->value, sizeof(int32_t));
                printf("int: key %s %d\n", item->key, a);
                break;
            case ARTIO_TYPE_FLOAT:
                memcpy(&b, item->value, sizeof(float));
                printf("float: key %s %f\n", item->key, b);
                break;
            case ARTIO_TYPE_DOUBLE:
                memcpy(&c, item->value, sizeof(double));
                printf("double: key %s %f\n", item->key, c);
                break;
            case ARTIO_TYPE_LONG:
                memcpy(&d, item->value, sizeof(int64_t));
                printf("long: %ld\n", d);
                break;
            default:
                printf("unrecognized type %d\n", item->type);
        }
        item = item->next;
    }

    return ARTIO_SUCCESS;
}

int artio_parameter_set_int(artio_fileset *handle, const char * key, int32_t value) {
    int32_t tmp = value;
    return artio_parameter_set_int_array(handle, key, 1, &tmp);
}

int artio_parameter_get_int(artio_fileset *handle, const char * key, int32_t * value) {
    return artio_parameter_get_int_array(handle, key, 1, value);
}

int artio_parameter_set_int_array(artio_fileset *handle, const char * key,
        int length, int32_t * value) {
    return artio_parameter_list_insert(handle->parameters, key, length,
            value, ARTIO_TYPE_INT);
}

int artio_parameter_get_int_array(artio_fileset *handle, const char * key,
        int length, int32_t * value) {
    return artio_parameter_list_unpack(handle->parameters, key, length,
            value, ARTIO_TYPE_INT);
}

int artio_parameter_get_int_array_index( artio_fileset *handle, const char *key,
        int index, int32_t * value ) {
    return artio_parameter_list_unpack_index(handle->parameters, key, index,
            value, ARTIO_TYPE_INT );
}

int artio_parameter_set_float(artio_fileset *handle, const char * key, float value) {
    float tmp = value;
    return artio_parameter_set_float_array(handle, key, 1, &tmp);
}

int artio_parameter_get_float(artio_fileset *handle, const char * key, float * value) {
    return artio_parameter_get_float_array(handle, key, 1, value);
}

int artio_parameter_set_float_array(artio_fileset *handle, const char * key,
        int length, float * value) {
    return artio_parameter_list_insert(handle->parameters, key, length,
            value, ARTIO_TYPE_FLOAT);
}

int artio_parameter_get_float_array(artio_fileset *handle, const char * key,
        int length, float * value) {
    return artio_parameter_list_unpack(handle->parameters, key, length,
            value, ARTIO_TYPE_FLOAT);
}

int artio_parameter_get_float_array_index(artio_fileset *handle, const char * key,
        int index, float * value) {
    return artio_parameter_list_unpack_index(handle->parameters, key, index,
            value, ARTIO_TYPE_FLOAT);
}

int artio_parameter_set_double(artio_fileset *handle, const char * key, double value) {
    double tmp = value;
    return artio_parameter_set_double_array(handle, key, 1, &tmp);
}

int artio_parameter_get_double(artio_fileset *handle, const char * key, double * value) {
    return artio_parameter_get_double_array(handle, key, 1, value);
}

int artio_parameter_set_double_array(artio_fileset *handle, const char * key,
        int length, double * value) {
    return artio_parameter_list_insert(handle->parameters, key, length,
            value, ARTIO_TYPE_DOUBLE);
}

int artio_parameter_get_double_array(artio_fileset *handle, const char * key,
        int length, double * value) {
    return artio_parameter_list_unpack(handle->parameters, key, length,
            value, ARTIO_TYPE_DOUBLE);
}
int artio_parameter_get_double_array_index(artio_fileset *handle, const char * key,
        int index, double * value) {
    return artio_parameter_list_unpack_index(handle->parameters, key, index,
            value, ARTIO_TYPE_DOUBLE);
}

int artio_parameter_set_long(artio_fileset *handle, const char * key, int64_t value) {
    int64_t tmp = value;
    return artio_parameter_set_long_array(handle, key, 1, &tmp);
}

int artio_parameter_get_long(artio_fileset *handle, const char * key, int64_t * value) {
    return artio_parameter_get_long_array(handle, key, 1, value);
}

int artio_parameter_set_long_array(artio_fileset *handle, const char * key,
        int length, int64_t * value) {
    return artio_parameter_list_insert(handle->parameters, key, length,
            value, ARTIO_TYPE_LONG);
}

int artio_parameter_get_long_array(artio_fileset *handle, const char * key,
        int length, int64_t * value) {
    return artio_parameter_list_unpack(handle->parameters, key, length,
            value, ARTIO_TYPE_LONG);
}

int artio_parameter_get_long_array_index(artio_fileset *handle, const char * key,
        int index, int64_t * value) {
    return artio_parameter_list_unpack_index(handle->parameters, key, index,
            value, ARTIO_TYPE_LONG);
}

int artio_parameter_set_string(artio_fileset *handle, const char *key, char *value) {
    return artio_parameter_set_string_array(handle, key, 1, &value);
}

int artio_parameter_set_string_array(artio_fileset *handle,
        const char *key, int length, char **value) {
    int i;
    int len, loc_length;
    char *loc_value;
    char *p;
    int ret;

    for (i = 0, loc_length = 0; i < length; i++) {
        len = strlen(value[i]) + 1;
        if ( len > ARTIO_MAX_STRING_LENGTH ) {
            return ARTIO_ERR_STRING_LENGTH;
        }
        loc_length += len;
    }

    loc_value = (char *)malloc(loc_length * sizeof(char));
    if ( loc_value == NULL ) {
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }

    for (i = 0, p = loc_value; i < length; i++) {
        strcpy(p, value[i]);
        p += strlen(value[i]) + 1;
    }

    ret = artio_parameter_list_insert(handle->parameters, key,
            loc_length, loc_value, ARTIO_TYPE_STRING);

    free(loc_value);
    return ret;
}

int artio_parameter_get_string(artio_fileset *handle, const char *key, char *value ) {
    return artio_parameter_get_string_array(handle, key, 1, &value );
}

int artio_parameter_get_string_array_index( artio_fileset *handle, const char *key,
        int index, char *value ) {
    int count;
    char *p;
    parameter *item = artio_parameter_list_search(handle->parameters, key);
    if ( item != NULL ) {
        count = 0;
        p = item->value;
        while ( count < index && p < item->value + item->val_length) {
            p += strlen(p) + 1;
            count++;
        }
        if ( count != index ) {
            return ARTIO_ERR_INVALID_INDEX;
        }
        strncpy(value, p, ARTIO_MAX_STRING_LENGTH-1);
        value[ARTIO_MAX_STRING_LENGTH-1] = 0;
    } else {
        return ARTIO_ERR_PARAM_NOT_FOUND;
    }

    return ARTIO_SUCCESS;
}

int artio_parameter_get_string_array(artio_fileset *handle, const char *key,
        int length, char **value ) {
    int i;
    char *p;
    int count;

    parameter *item = artio_parameter_list_search(handle->parameters, key);
    if (item != NULL) {
        /* count string items in item->value */
        count = 0;
        p = item->value;
        while (p < item->value + item->val_length) {
            p += strlen(p) + 1;
            count++;
        }

        if (count != length) {
            return ARTIO_ERR_PARAM_LENGTH_MISMATCH;
        }

        for (i = 0, p = item->value; i < length; i++) {
            strncpy(value[i], p, ARTIO_MAX_STRING_LENGTH-1);
            value[i][ARTIO_MAX_STRING_LENGTH-1] = 0;
            p += strlen(p) + 1;
        }
    } else {
        return ARTIO_ERR_PARAM_NOT_FOUND;
    }

    return ARTIO_SUCCESS;
}
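/*
 * Usage sketch (illustrative): typed parameter access from a reading
 * application.  Both keys appear elsewhere in this fileset code; a
 * missing key is reported as ARTIO_ERR_PARAM_NOT_FOUND, and type or
 * length mismatches return their own error codes rather than aborting.
 */
#if 0 /* example only; excluded from compilation */
static void example_read_parameters(artio_fileset *handle) {
    int32_t num_files, num_species;
    if ( artio_parameter_get_int(handle, "num_particle_files",
            &num_files) != ARTIO_SUCCESS ) {
        /* parameter absent */
    }
    artio_parameter_get_int(handle, "num_particle_species", &num_species);
}
#endif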
yt-4.4.0/yt/frontends/artio/artio_headers/artio_particle.c

/**********************************************************************
 * Copyright (c) 2012-2013, Douglas H. Rudd
 * All rights reserved.
 *
 * This file is part of the artio library.
 *
 * artio is free software: you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as
 * published by the Free Software Foundation, either version 3 of the
 * License, or (at your option) any later version.
 *
 * artio is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Lesser General Public License for more details.
 *
 * Copies of the GNU Lesser General Public License and the GNU General
 * Public License are available in the file LICENSE, included with this
 * distribution. If you failed to receive a copy of this file, see
 * <http://www.gnu.org/licenses/>
 **********************************************************************/

#include "artio.h"
#include "artio_internal.h"

/* include targets lost in extraction; reconstructed from usage
 * (malloc/free, sprintf) -- assumption */
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

#ifdef _WIN32
typedef __int64 int64_t;
typedef __int32 int32_t;
#else
#include <stdint.h>
#endif

int artio_particle_find_file(artio_particle_file *phandle, int start, int end, int64_t sfc);
artio_particle_file *artio_particle_file_allocate(void);
void artio_particle_file_destroy( artio_particle_file *phandle );

/*
 * Open existing particle files and add to fileset
 */
int artio_fileset_open_particles(artio_fileset *handle) {
    int i;
    char filename[256];
    int first_file, last_file;
    int mode;
    artio_particle_file *phandle;

    if ( handle == NULL ) {
        return ARTIO_ERR_INVALID_HANDLE;
    }
    if ( handle->open_type & ARTIO_OPEN_PARTICLES ||
            handle->open_mode != ARTIO_FILESET_READ ||
            handle->particle != NULL ) {
        return ARTIO_ERR_INVALID_FILESET_MODE;
    }
    handle->open_type |= ARTIO_OPEN_PARTICLES;

    phandle = artio_particle_file_allocate();
    if ( phandle == NULL ) {
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }

    if ( artio_parameter_get_int( handle, "num_particle_files",
                &phandle->num_particle_files ) != ARTIO_SUCCESS ||
            artio_parameter_get_int(handle, "num_particle_species",
                &phandle->num_species) != ARTIO_SUCCESS ) {
        return ARTIO_ERR_PARTICLE_DATA_NOT_FOUND;
    }

    phandle->num_primary_variables = (int *)malloc(sizeof(int) * phandle->num_species);
    if ( phandle->num_primary_variables == NULL ) {
        artio_particle_file_destroy(phandle);
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }

    phandle->num_secondary_variables = (int *)malloc(sizeof(int) * phandle->num_species);
    if ( phandle->num_secondary_variables == NULL ) {
        artio_particle_file_destroy(phandle);
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }

    phandle->num_particles_per_species = (int *)malloc(phandle->num_species * sizeof(int));
    if ( phandle->num_particles_per_species == NULL ) {
        artio_particle_file_destroy(phandle);
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }

    artio_parameter_get_int_array(handle, "num_primary_variables",
            phandle->num_species, phandle->num_primary_variables);
    artio_parameter_get_int_array(handle, "num_secondary_variables",
            phandle->num_species, phandle->num_secondary_variables);

    phandle->file_sfc_index = (int64_t *)malloc(sizeof(int64_t) * (phandle->num_particle_files + 1));
    if ( phandle->file_sfc_index == NULL ) {
        artio_particle_file_destroy(phandle);
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }
    artio_parameter_get_long_array(handle, "particle_file_sfc_index",
            phandle->num_particle_files + 1, phandle->file_sfc_index);

    first_file = artio_particle_find_file(phandle, 0,
            phandle->num_particle_files, handle->proc_sfc_begin);
    last_file = artio_particle_find_file(phandle, first_file,
            phandle->num_particle_files, handle->proc_sfc_end);
    /* allocate file handles */
    phandle->ffh = (artio_fh **)malloc(phandle->num_particle_files * sizeof(artio_fh *));
    if ( phandle->ffh == NULL ) {
        artio_particle_file_destroy(phandle);
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }
    for ( i = 0; i < phandle->num_particle_files; i++ ) {
        phandle->ffh[i] = NULL;
    }

    /* open files on all processes */
    for (i = 0; i < phandle->num_particle_files; i++) {
        sprintf(filename, "%s.p%03d", handle->file_prefix, i);

        mode = ARTIO_MODE_READ;
        if (i >= first_file && i <= last_file) {
            mode |= ARTIO_MODE_ACCESS;
        }
        if (handle->endian_swap) {
            mode |= ARTIO_MODE_ENDIAN_SWAP;
        }

        phandle->ffh[i] = artio_file_fopen(filename, mode, handle->context);
        if ( phandle->ffh[i] == NULL ) {
            artio_particle_file_destroy(phandle);
            return ARTIO_ERR_PARTICLE_FILE_NOT_FOUND;
        }
    }

    handle->particle = phandle;
    return ARTIO_SUCCESS;
}

int artio_fileset_add_particles( artio_fileset *handle,
        int num_particle_files, int allocation_strategy,
        int num_species, char ** species_labels,
        int * num_primary_variables,
        int * num_secondary_variables,
        char *** primary_variable_labels_per_species,
        char *** secondary_variable_labels_per_species,
        int * num_particles_per_species_per_root_tree ) {
    int i, k;
    int ret;
    int64_t l, cur;
    int64_t first_file_sfc, last_file_sfc;
    int first_file, last_file;
    char filename[256];
    char species_label[64];
    int mode;
    int64_t *local_particles_per_species, *total_particles_per_species;
    artio_particle_file *phandle;

    if ( handle == NULL ) {
        return ARTIO_ERR_INVALID_HANDLE;
    }
    if ( handle->open_mode != ARTIO_FILESET_WRITE ) {
        return ARTIO_ERR_INVALID_FILESET_MODE;
    }
    if ( handle->open_type & ARTIO_OPEN_PARTICLES ||
            handle->particle != NULL ) {
        return ARTIO_ERR_DATA_EXISTS;
    }
    handle->open_type |= ARTIO_OPEN_PARTICLES;

    /* compute total number of particles per species */
    local_particles_per_species = (int64_t *)malloc( num_species * sizeof(int64_t));
    if ( local_particles_per_species == NULL ) {
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }
    total_particles_per_species = (int64_t *)malloc( num_species * sizeof(int64_t));
    if ( total_particles_per_species == NULL ) {
        free( local_particles_per_species );
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }

    for ( i = 0; i < num_species; i++ ) {
        local_particles_per_species[i] = 0;
    }
    for ( l = 0; l < handle->proc_sfc_end-handle->proc_sfc_begin+1; l++ ) {
        for ( i = 0; i < num_species; i++ ) {
            local_particles_per_species[i] +=
                num_particles_per_species_per_root_tree[num_species*l+i];
        }
    }

#ifdef ARTIO_MPI
    MPI_Allreduce( local_particles_per_species, total_particles_per_species,
            num_species, MPI_LONG_LONG_INT, MPI_SUM, handle->context->comm );
#else
    for ( i = 0; i < num_species; i++ ) {
        total_particles_per_species[i] = local_particles_per_species[i];
    }
#endif

    artio_parameter_set_long_array(handle, "num_particles_per_species",
            num_species, total_particles_per_species );

    free( local_particles_per_species );
    free( total_particles_per_species );

    artio_parameter_set_int(handle, "num_particle_files", num_particle_files);
    artio_parameter_set_int(handle, "num_particle_species", num_species);
    artio_parameter_set_string_array(handle, "particle_species_labels",
            num_species, species_labels);
    artio_parameter_set_int_array(handle, "num_primary_variables",
            num_species, num_primary_variables);
    artio_parameter_set_int_array(handle, "num_secondary_variables",
            num_species, num_secondary_variables);

    /* NOTE: the interior of this loop, and the allocation call after it,
     * were lost when this archive was flattened; they are reconstructed
     * here from the surrounding declarations (the species_label scratch
     * buffer and the per-species label arguments) -- in particular the
     * exact parameter key format is an assumption */
    for (i = 0; i < num_species; i++) {
        sprintf(species_label, "species_%02d_primary_variable_labels", i);
        artio_parameter_set_string_array(handle, species_label,
                num_primary_variables[i],
                primary_variable_labels_per_species[i]);
        sprintf(species_label, "species_%02d_secondary_variable_labels", i);
        artio_parameter_set_string_array(handle, species_label,
                num_secondary_variables[i],
                secondary_variable_labels_per_species[i]);
    }

    phandle = artio_particle_file_allocate();
    if ( phandle == NULL ) {
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }

    phandle->file_sfc_index = (int64_t *)malloc(sizeof(int64_t) * (num_particle_files + 1));
    if ( phandle->file_sfc_index == NULL ) {
        artio_particle_file_destroy(phandle);
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }
    switch (allocation_strategy) {
        case ARTIO_ALLOC_EQUAL_PROC:
            if (num_particle_files > handle->num_procs) {
                return ARTIO_ERR_INVALID_FILE_NUMBER;
            }
            for (i = 0; i < num_particle_files; i++) {
                phandle->file_sfc_index[i] =
                    handle->proc_sfc_index[ (handle->num_procs*i+num_particle_files-1) / num_particle_files ];
            }
            phandle->file_sfc_index[num_particle_files] =
                handle->proc_sfc_index[handle->num_procs];
            break;
        case ARTIO_ALLOC_EQUAL_SFC:
            for (i = 0; i < num_particle_files; i++) {
                phandle->file_sfc_index[i] =
                    (handle->num_root_cells*i+num_particle_files-1) / num_particle_files;
            }
            phandle->file_sfc_index[num_particle_files] = handle->num_root_cells;
            break;
        default:
            artio_particle_file_destroy(phandle);
            return ARTIO_ERR_INVALID_ALLOC_STRATEGY;
    }

    phandle->num_particle_files = num_particle_files;
    phandle->num_species = num_species;

    phandle->num_primary_variables = (int *)malloc(sizeof(int) * num_species);
    if ( phandle->num_primary_variables == NULL ) {
        artio_particle_file_destroy(phandle);
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }
    phandle->num_secondary_variables = (int *)malloc(sizeof(int) * num_species);
    if ( phandle->num_secondary_variables == NULL ) {
        artio_particle_file_destroy(phandle);
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }
    phandle->num_particles_per_species = (int *)malloc(phandle->num_species * sizeof(int));
    if ( phandle->num_particles_per_species == NULL ) {
        artio_particle_file_destroy(phandle);
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }

    for (i = 0; i < num_species; i++) {
        phandle->num_primary_variables[i] = num_primary_variables[i];
        phandle->num_secondary_variables[i] = num_secondary_variables[i];
    }

    /* allocate space for sfc offset cache */
    phandle->cache_sfc_begin = handle->proc_sfc_begin;
    phandle->cache_sfc_end = handle->proc_sfc_end;
    phandle->sfc_offset_table =
        (int64_t *)malloc( (size_t)(handle->proc_sfc_end - handle->proc_sfc_begin + 1) * sizeof(int64_t));
    if ( phandle->sfc_offset_table == NULL ) {
        artio_particle_file_destroy(phandle);
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }

    /* allocate file handles */
    phandle->ffh = (artio_fh **)malloc(num_particle_files * sizeof(artio_fh *));
    if ( phandle->ffh == NULL ) {
        artio_particle_file_destroy(phandle);
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }
    for ( i = 0; i < num_particle_files; i++ ) {
        phandle->ffh[i] = NULL;
    }

    /* open file handles */
    first_file = artio_particle_find_file(phandle, 0, num_particle_files, handle->proc_sfc_begin);
    last_file = artio_particle_find_file(phandle, first_file, num_particle_files, handle->proc_sfc_end);

    for (i = 0; i < num_particle_files; i++) {
        sprintf(filename, "%s.p%03d", handle->file_prefix, i);

        mode = ARTIO_MODE_WRITE;
        if (i >= first_file && i <= last_file) {
            mode |= ARTIO_MODE_ACCESS;
        }

        phandle->ffh[i] = artio_file_fopen(filename, mode, handle->context);
        if ( phandle->ffh[i] == NULL ) {
            artio_particle_file_destroy(phandle);
            return ARTIO_ERR_FILE_CREATE;
        }

        /* write sfc offset header if we contribute to this file */
        if (i >= first_file && i <= last_file) {
#ifdef ARTIO_MPI
            if ( phandle->file_sfc_index[i] >= handle->proc_sfc_index[ handle->rank ] &&
                    phandle->file_sfc_index[i] < handle->proc_sfc_index[ handle->rank + 1 ] ) {
                cur = (phandle->file_sfc_index[i+1] - phandle->file_sfc_index[i]) * sizeof(int64_t);
            } else {
                /* obtain offset from previous process */
                MPI_Recv( &cur, 1, MPI_LONG_LONG_INT, handle->rank - 1, i,
                        handle->context->comm, MPI_STATUS_IGNORE );
            }
#else
            cur = (phandle->file_sfc_index[i+1] - phandle->file_sfc_index[i]) * sizeof(int64_t);
#endif /* ARTIO_MPI */
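            /*
             * Descriptive note (added): when several ranks share one
             * particle file, the starting byte offset of each rank's
             * slice of the sfc_offset_table is pipelined rank-to-rank --
             * the first contributing rank computes it from the table
             * size, every later rank receives it from its predecessor
             * (MPI_Recv above) and, after accumulating its own particle
             * record sizes, forwards the running offset to the next rank
             * (MPI_Send below).
             */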
            first_file_sfc = MAX( phandle->cache_sfc_begin, phandle->file_sfc_index[i] );
            last_file_sfc = MIN( phandle->cache_sfc_end, phandle->file_sfc_index[i+1]-1 );

            for (l = first_file_sfc - phandle->cache_sfc_begin;
                    l < last_file_sfc - phandle->cache_sfc_begin + 1; l++) {
                phandle->sfc_offset_table[l] = cur;
                cur += sizeof(int) * num_species;
                for (k = 0; k < num_species; k++) {
                    cur += num_particles_per_species_per_root_tree[l*num_species+k] *
                        (sizeof(int64_t) + sizeof(int) +
                         num_primary_variables[k] * sizeof(double) +
                         num_secondary_variables[k] * sizeof(float));
                }
            }

#ifdef ARTIO_MPI
            if ( phandle->file_sfc_index[i+1] > handle->proc_sfc_end+1 ) {
                MPI_Send( &cur, 1, MPI_LONG_LONG_INT, handle->rank+1, i,
                        handle->context->comm );
            }
#endif /* ARTIO_MPI */

            /* seek and write our portion of sfc table */
            ret = artio_file_fseek(phandle->ffh[i],
                    (first_file_sfc - phandle->file_sfc_index[i]) * sizeof(int64_t),
                    ARTIO_SEEK_SET);
            if ( ret != ARTIO_SUCCESS ) {
                artio_particle_file_destroy(phandle);
                return ret;
            }

            ret = artio_file_fwrite(phandle->ffh[i],
                    &phandle->sfc_offset_table[first_file_sfc - phandle->cache_sfc_begin],
                    last_file_sfc - first_file_sfc + 1, ARTIO_TYPE_LONG);
            if ( ret != ARTIO_SUCCESS ) {
                artio_particle_file_destroy(phandle);
                return ret;
            }
        }
    }

    artio_parameter_set_long_array(handle, "particle_file_sfc_index",
            phandle->num_particle_files + 1, phandle->file_sfc_index);

    handle->particle = phandle;
    return ARTIO_SUCCESS;
}

artio_particle_file *artio_particle_file_allocate(void) {
    artio_particle_file *phandle =
        (artio_particle_file *)malloc(sizeof(artio_particle_file));
    if ( phandle != NULL ) {
        phandle->ffh = NULL;
        phandle->num_particle_files = -1;
        phandle->file_sfc_index = NULL;
        phandle->cache_sfc_begin = -1;
        phandle->cache_sfc_end = -1;
        phandle->sfc_offset_table = NULL;
        phandle->num_species = -1;
        phandle->cur_particle = -1;
        phandle->cur_sfc = -1;
        phandle->num_primary_variables = NULL;
        phandle->num_secondary_variables = NULL;
        phandle->num_particles_per_species = NULL;
        phandle->cur_file = -1;

        phandle->buffer_size = artio_fh_buffer_size;
        phandle->buffer = malloc(phandle->buffer_size);
        if ( phandle->buffer == NULL ) {
            free(phandle);
            return NULL;
        }
    }
    return phandle;
}

void artio_particle_file_destroy( artio_particle_file *phandle ) {
    int i;
    if ( phandle == NULL ) return;

    if ( phandle->ffh != NULL ) {
        for (i = 0; i < phandle->num_particle_files; i++) {
            if ( phandle->ffh[i] != NULL ) {
                artio_file_fclose(phandle->ffh[i]);
            }
        }
        free(phandle->ffh);
    }

    if (phandle->sfc_offset_table != NULL) free(phandle->sfc_offset_table);
    if (phandle->num_particles_per_species != NULL) free(phandle->num_particles_per_species);
    if (phandle->num_primary_variables != NULL) free(phandle->num_primary_variables);
    if (phandle->num_secondary_variables != NULL) free(phandle->num_secondary_variables);
    if (phandle->file_sfc_index != NULL) free(phandle->file_sfc_index);
    if (phandle->buffer != NULL) free(phandle->buffer);

    free(phandle);
}

int artio_fileset_close_particles(artio_fileset *handle) {
    if ( handle == NULL ) {
        return ARTIO_ERR_INVALID_HANDLE;
    }
    if ( !(handle->open_type & ARTIO_OPEN_PARTICLES) ||
            handle->particle == NULL ) {
        return ARTIO_ERR_INVALID_FILESET_MODE;
    }
    artio_particle_file_destroy(handle->particle);
    handle->particle = NULL;
    return ARTIO_SUCCESS;
}
int artio_particle_cache_sfc_range(artio_fileset *handle, int64_t start, int64_t end) {
    int i;
    int ret;
    int first_file, last_file;
    int64_t min, count, cur;
    artio_particle_file *phandle;

    if ( handle == NULL ) {
        return ARTIO_ERR_INVALID_HANDLE;
    }
    if (handle->open_mode != ARTIO_FILESET_READ ||
            !(handle->open_type & ARTIO_OPEN_PARTICLES) ||
            handle->particle == NULL ) {
        return ARTIO_ERR_INVALID_FILESET_MODE;
    }

    phandle = handle->particle;
    if ( start > end || start < handle->proc_sfc_begin ||
            end > handle->proc_sfc_end) {
        return ARTIO_ERR_INVALID_SFC_RANGE;
    }

    /* check if we've already cached the range */
    if ( start >= phandle->cache_sfc_begin &&
            end <= phandle->cache_sfc_end ) {
        return ARTIO_SUCCESS;
    }

    artio_particle_clear_sfc_cache(handle);

    first_file = artio_particle_find_file(phandle, 0, phandle->num_particle_files, start);
    last_file = artio_particle_find_file(phandle, first_file, phandle->num_particle_files, end);

    phandle->cache_sfc_begin = start;
    phandle->cache_sfc_end = end;
    phandle->sfc_offset_table =
        (int64_t *)malloc(sizeof(int64_t) * (size_t)(end - start + 1));
    if ( phandle->sfc_offset_table == NULL ) {
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }

    if ( phandle->cur_file != -1 ) {
        artio_file_detach_buffer( phandle->ffh[phandle->cur_file]);
        phandle->cur_file = -1;
    }

    cur = 0;
    for (i = first_file; i <= last_file; i++) {
        min = MAX( 0, start - phandle->file_sfc_index[i] );
        count = MIN( phandle->file_sfc_index[i+1], end+1 ) -
                MAX( start, phandle->file_sfc_index[i] );

        artio_file_attach_buffer( phandle->ffh[i],
                phandle->buffer, phandle->buffer_size );

        ret = artio_file_fseek(phandle->ffh[i],
                sizeof(int64_t) * min, ARTIO_SEEK_SET);
        if ( ret != ARTIO_SUCCESS ) return ret;

        ret = artio_file_fread(phandle->ffh[i],
                &phandle->sfc_offset_table[cur],
                count, ARTIO_TYPE_LONG);
        if ( ret != ARTIO_SUCCESS ) return ret;

        artio_file_detach_buffer( phandle->ffh[i] );
        cur += count;
    }

    return ARTIO_SUCCESS;
}

int artio_particle_clear_sfc_cache( artio_fileset *handle ) {
    artio_particle_file *phandle;

    if ( handle == NULL ) {
        return ARTIO_ERR_INVALID_HANDLE;
    }
    if (handle->open_mode != ARTIO_FILESET_READ ||
            !(handle->open_type & ARTIO_OPEN_PARTICLES) ||
            handle->particle == NULL ) {
        return ARTIO_ERR_INVALID_FILESET_MODE;
    }

    phandle = handle->particle;
    if ( phandle->sfc_offset_table != NULL ) {
        free(phandle->sfc_offset_table);
        phandle->sfc_offset_table = NULL;
    }

    phandle->cache_sfc_begin = -1;
    phandle->cache_sfc_end = -1;
    return ARTIO_SUCCESS;
}

int artio_particle_find_file(artio_particle_file *phandle, int start, int end, int64_t sfc) {
    int j;

    if ( start < 0 || start > phandle->num_particle_files ||
            end < 0 || end > phandle->num_particle_files ||
            sfc < phandle->file_sfc_index[start] ||
            sfc >= phandle->file_sfc_index[end] ) {
        return -1;
    }

    if (start == end || sfc == phandle->file_sfc_index[start] ) {
        return start;
    }

    if (1 == end - start) {
        if (sfc < phandle->file_sfc_index[end]) {
            return start;
        } else {
            return end;
        }
    }

    /* bisect the [start,end) interval of the file_sfc_index table */
    j = start + (end - start) / 2;
    if (sfc > phandle->file_sfc_index[j]) {
        return artio_particle_find_file(phandle, j, end, sfc);
    } else if (sfc < phandle->file_sfc_index[j]) {
        return artio_particle_find_file(phandle, start, j, sfc);
    } else {
        return j;
    }
}
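/*
 * Descriptive note (added): locating a root cell's data is a two-step
 * lookup -- artio_particle_find_file above bisects the file_sfc_index
 * table to find which physical file holds a given sfc index, and
 * artio_particle_seek_to_sfc below turns the cached sfc_offset_table
 * entry into a byte offset within that file.
 */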
int artio_particle_seek_to_sfc(artio_fileset *handle, int64_t sfc) {
    int64_t offset;
    artio_particle_file *phandle;
    int file;

    if ( handle == NULL ) {
        return ARTIO_ERR_INVALID_HANDLE;
    }
    if ( !(handle->open_type & ARTIO_OPEN_PARTICLES) ||
            handle->particle == NULL ) {
        return ARTIO_ERR_INVALID_FILESET_MODE;
    }

    phandle = handle->particle;
    if (phandle->cache_sfc_begin == -1 ||
            sfc < phandle->cache_sfc_begin ||
            sfc > phandle->cache_sfc_end) {
        return ARTIO_ERR_INVALID_SFC;
    }

    file = artio_particle_find_file(phandle, 0, phandle->num_particle_files, sfc);
    if ( file != phandle->cur_file ) {
        if ( phandle->cur_file != -1 ) {
            artio_file_detach_buffer( phandle->ffh[phandle->cur_file] );
        }
        if ( phandle->buffer_size > 0 ) {
            artio_file_attach_buffer( phandle->ffh[file],
                    phandle->buffer, phandle->buffer_size );
        }
        phandle->cur_file = file;
    }

    offset = phandle->sfc_offset_table[sfc - phandle->cache_sfc_begin];
    return artio_file_fseek(phandle->ffh[phandle->cur_file],
            offset, ARTIO_SEEK_SET);
}

int artio_particle_write_root_cell_begin(artio_fileset *handle, int64_t sfc,
        int * num_particles_per_species) {
    int i;
    int ret;
    artio_particle_file *phandle;

    if ( handle == NULL ) {
        return ARTIO_ERR_INVALID_HANDLE;
    }
    if (handle->open_mode != ARTIO_FILESET_WRITE ||
            !(handle->open_type & ARTIO_OPEN_PARTICLES) ||
            handle->particle == NULL ) {
        return ARTIO_ERR_INVALID_FILESET_MODE;
    }

    phandle = handle->particle;
    if ( phandle->cur_sfc != -1 ) {
        return ARTIO_ERR_INVALID_STATE;
    }

    ret = artio_particle_seek_to_sfc(handle, sfc);
    if ( ret != ARTIO_SUCCESS ) return ret;

    ret = artio_file_fwrite(phandle->ffh[phandle->cur_file],
            num_particles_per_species, phandle->num_species, ARTIO_TYPE_INT);
    if ( ret != ARTIO_SUCCESS ) return ret;

    for (i = 0; i < phandle->num_species; i++) {
        phandle->num_particles_per_species[i] = num_particles_per_species[i];
    }

    phandle->cur_sfc = sfc;
    phandle->cur_species = -1;
    phandle->cur_particle = -1;

    return ARTIO_SUCCESS;
}

int artio_particle_write_root_cell_end(artio_fileset *handle) {
    if ( handle == NULL ) {
        return ARTIO_ERR_INVALID_HANDLE;
    }
    if (handle->open_mode != ARTIO_FILESET_WRITE ||
            !(handle->open_type & ARTIO_OPEN_PARTICLES) ||
            handle->particle == NULL ) {
        return ARTIO_ERR_INVALID_FILESET_MODE;
    }
    if ( handle->particle->cur_sfc == -1 ||
            handle->particle->cur_species != -1 ) {
        return ARTIO_ERR_INVALID_STATE;
    }
    handle->particle->cur_sfc = -1;
    return ARTIO_SUCCESS;
}

int artio_particle_write_species_begin(artio_fileset *handle, int species) {
    artio_particle_file *phandle;

    if ( handle == NULL ) {
        return ARTIO_ERR_INVALID_HANDLE;
    }
    if (handle->open_mode != ARTIO_FILESET_WRITE ||
            !(handle->open_type & ARTIO_OPEN_PARTICLES) ||
            handle->particle == NULL ) {
        return ARTIO_ERR_INVALID_FILESET_MODE;
    }

    phandle = handle->particle;
    if (phandle->cur_sfc == -1 || phandle->cur_species != -1 ) {
        return ARTIO_ERR_INVALID_STATE;
    }
    if ( species < 0 || species >= phandle->num_species) {
        return ARTIO_ERR_INVALID_SPECIES;
    }

    phandle->cur_species = species;
    phandle->cur_particle = 0;

    return ARTIO_SUCCESS;
}

int artio_particle_write_species_end(artio_fileset *handle) {
    artio_particle_file *phandle;

    if ( handle == NULL ) {
        return ARTIO_ERR_INVALID_HANDLE;
    }
    if (handle->open_mode != ARTIO_FILESET_WRITE ||
            !(handle->open_type & ARTIO_OPEN_PARTICLES) ||
            handle->particle == NULL ) {
        return ARTIO_ERR_INVALID_FILESET_MODE;
    }

    phandle = handle->particle;
    if (phandle->cur_species == -1 ||
            phandle->cur_particle !=
                phandle->num_particles_per_species[phandle->cur_species]) {
        return ARTIO_ERR_INVALID_STATE;
    }

    phandle->cur_species = -1;
    phandle->cur_particle = -1;

    return ARTIO_SUCCESS;
}
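/*
 * Descriptive note (added): each particle record written and read below is
 *
 *   int64   particle id
 *   int32   subspecies
 *   double  primary_variables[num_primary_variables[species]]
 *   float   secondary_variables[num_secondary_variables[species]]
 *
 * and each root cell begins with int32 counts for every species; the
 * offset arithmetic in artio_particle_read_species_begin relies on
 * exactly this layout.
 */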
int artio_particle_write_particle(artio_fileset *handle, int64_t pid, int subspecies,
        double * primary_variables, float *secondary_variables) {
    int ret;
    artio_particle_file *phandle;

    if ( handle == NULL ) {
        return ARTIO_ERR_INVALID_HANDLE;
    }
    if (handle->open_mode != ARTIO_FILESET_WRITE ||
            !(handle->open_type & ARTIO_OPEN_PARTICLES) ||
            handle->particle == NULL ) {
        return ARTIO_ERR_INVALID_FILESET_MODE;
    }

    phandle = handle->particle;
    if (phandle->cur_species == -1 ||
            phandle->cur_particle >=
                phandle->num_particles_per_species[phandle->cur_species]) {
        return ARTIO_ERR_INVALID_STATE;
    }

    ret = artio_file_fwrite(phandle->ffh[phandle->cur_file],
            &pid, 1, ARTIO_TYPE_LONG);
    if ( ret != ARTIO_SUCCESS ) return ret;

    ret = artio_file_fwrite(phandle->ffh[phandle->cur_file],
            &subspecies, 1, ARTIO_TYPE_INT);
    if ( ret != ARTIO_SUCCESS ) return ret;

    ret = artio_file_fwrite(phandle->ffh[phandle->cur_file],
            primary_variables,
            phandle->num_primary_variables[phandle->cur_species],
            ARTIO_TYPE_DOUBLE);
    if ( ret != ARTIO_SUCCESS ) return ret;

    ret = artio_file_fwrite(phandle->ffh[phandle->cur_file],
            secondary_variables,
            phandle->num_secondary_variables[phandle->cur_species],
            ARTIO_TYPE_FLOAT);
    if ( ret != ARTIO_SUCCESS ) return ret;

    phandle->cur_particle++;

    return ARTIO_SUCCESS;
}

/*
 * Begin reading the particle data of one root cell
 */
int artio_particle_read_root_cell_begin(artio_fileset *handle, int64_t sfc,
        int * num_particles_per_species) {
    int i;
    int ret;
    artio_particle_file *phandle;

    if ( handle == NULL ) {
        return ARTIO_ERR_INVALID_HANDLE;
    }
    if (handle->open_mode != ARTIO_FILESET_READ ||
            !(handle->open_type & ARTIO_OPEN_PARTICLES) ||
            handle->particle == NULL ) {
        return ARTIO_ERR_INVALID_FILESET_MODE;
    }

    phandle = handle->particle;
    ret = artio_particle_seek_to_sfc(handle, sfc);
    if ( ret != ARTIO_SUCCESS ) return ret;

    ret = artio_file_fread(phandle->ffh[phandle->cur_file],
            num_particles_per_species, phandle->num_species, ARTIO_TYPE_INT);
    if ( ret != ARTIO_SUCCESS ) return ret;

    for (i = 0; i < phandle->num_species; i++) {
        phandle->num_particles_per_species[i] = num_particles_per_species[i];
    }

    phandle->cur_sfc = sfc;
    phandle->cur_species = -1;
    phandle->cur_particle = 0;

    return ARTIO_SUCCESS;
}

/*
 * Read one particle record of the current species
 */
int artio_particle_read_particle(artio_fileset *handle, int64_t * pid, int *subspecies,
        double * primary_variables, float * secondary_variables) {
    int ret;
    artio_particle_file *phandle;

    if ( handle == NULL ) {
        return ARTIO_ERR_INVALID_HANDLE;
    }
    if (handle->open_mode != ARTIO_FILESET_READ ||
            !(handle->open_type & ARTIO_OPEN_PARTICLES) ||
            handle->particle == NULL ) {
        return ARTIO_ERR_INVALID_FILESET_MODE;
    }

    phandle = handle->particle;
    if (phandle->cur_species == -1 ||
            phandle->cur_particle >=
                phandle->num_particles_per_species[phandle->cur_species]) {
        return ARTIO_ERR_INVALID_STATE;
    }

    ret = artio_file_fread(phandle->ffh[phandle->cur_file],
            pid, 1, ARTIO_TYPE_LONG);
    if ( ret != ARTIO_SUCCESS ) return ret;

    ret = artio_file_fread(phandle->ffh[phandle->cur_file],
            subspecies, 1, ARTIO_TYPE_INT);
    if ( ret != ARTIO_SUCCESS ) return ret;

    ret = artio_file_fread(phandle->ffh[phandle->cur_file],
            primary_variables,
            phandle->num_primary_variables[phandle->cur_species],
            ARTIO_TYPE_DOUBLE);
    if ( ret != ARTIO_SUCCESS ) return ret;

    ret = artio_file_fread(phandle->ffh[phandle->cur_file],
            secondary_variables,
            phandle->num_secondary_variables[phandle->cur_species],
            ARTIO_TYPE_FLOAT);
    if ( ret != ARTIO_SUCCESS ) return ret;

    phandle->cur_particle++;

    return ARTIO_SUCCESS;
}
/*
 * Start reading a particle species within the current root cell
 */
int artio_particle_read_species_begin(artio_fileset *handle, int species) {
    int i;
    int ret;
    int64_t offset = 0;
    artio_particle_file *phandle;

    if ( handle == NULL ) {
        return ARTIO_ERR_INVALID_HANDLE;
    }
    if (handle->open_mode != ARTIO_FILESET_READ ||
            !(handle->open_type & ARTIO_OPEN_PARTICLES) ||
            handle->particle == NULL ) {
        return ARTIO_ERR_INVALID_FILESET_MODE;
    }

    phandle = handle->particle;
    if (phandle->cur_sfc == -1) {
        return ARTIO_ERR_INVALID_STATE;
    }
    if (species < 0 || species >= phandle->num_species) {
        return ARTIO_ERR_INVALID_SPECIES;
    }

    /* skip the per-species particle counts and all earlier species */
    offset = phandle->sfc_offset_table[phandle->cur_sfc - phandle->cache_sfc_begin];
    offset += sizeof(int32_t) * (phandle->num_species);
    for (i = 0; i < species; i++) {
        offset += ( sizeof(int64_t) + sizeof(int) +
                phandle->num_primary_variables[i] * sizeof(double) +
                phandle->num_secondary_variables[i] * sizeof(float) ) *
            phandle->num_particles_per_species[i];
    }

    ret = artio_file_fseek(phandle->ffh[phandle->cur_file], offset, ARTIO_SEEK_SET);
    if ( ret != ARTIO_SUCCESS ) return ret;

    phandle->cur_species = species;
    phandle->cur_particle = 0;

    return ARTIO_SUCCESS;
}

/*
 * Finish reading the current particle species
 */
int artio_particle_read_species_end(artio_fileset *handle) {
    artio_particle_file *phandle;

    if ( handle == NULL ) {
        return ARTIO_ERR_INVALID_HANDLE;
    }
    if (handle->open_mode != ARTIO_FILESET_READ ||
            !(handle->open_type & ARTIO_OPEN_PARTICLES) ||
            handle->particle == NULL ) {
        return ARTIO_ERR_INVALID_FILESET_MODE;
    }

    phandle = handle->particle;
    if (phandle->cur_species == -1) {
        return ARTIO_ERR_INVALID_STATE;
    }

    phandle->cur_species = -1;
    phandle->cur_particle = 0;

    return ARTIO_SUCCESS;
}

int artio_particle_read_root_cell_end(artio_fileset *handle) {
    if ( handle == NULL ) {
        return ARTIO_ERR_INVALID_HANDLE;
    }
    if (handle->open_mode != ARTIO_FILESET_READ ||
            !(handle->open_type & ARTIO_OPEN_PARTICLES) ||
            handle->particle == NULL ) {
        return ARTIO_ERR_INVALID_FILESET_MODE;
    }
    if ( handle->particle->cur_sfc == -1 ) {
        return ARTIO_ERR_INVALID_STATE;
    }
    handle->particle->cur_sfc = -1;
    return ARTIO_SUCCESS;
}

int artio_particle_read_selection(artio_fileset *handle,
        artio_selection *selection,
        artio_particle_callback callback, void *params ) {
    if ( handle == NULL ) {
        return ARTIO_ERR_INVALID_HANDLE;
    }
    if (handle->open_mode != ARTIO_FILESET_READ ||
            !(handle->open_type & ARTIO_OPEN_PARTICLES) ||
            handle->particle == NULL ) {
        return ARTIO_ERR_INVALID_FILESET_MODE;
    }
    return artio_particle_read_selection_species( handle, selection,
            0, handle->particle->num_species-1, callback, params );
}

int artio_particle_read_selection_species( artio_fileset *handle,
        artio_selection *selection, int start_species, int end_species,
        artio_particle_callback callback, void *params ) {
    int ret;
    int64_t start, end;

    if ( handle == NULL ) {
        return ARTIO_ERR_INVALID_HANDLE;
    }
    if (handle->open_mode != ARTIO_FILESET_READ ||
            !(handle->open_type & ARTIO_OPEN_PARTICLES) ||
            handle->particle == NULL ) {
        return ARTIO_ERR_INVALID_FILESET_MODE;
    }

    artio_selection_iterator_reset( selection );
    while ( artio_selection_iterator( selection,
                handle->num_root_cells,
                &start, &end ) == ARTIO_SUCCESS ) {
        ret = artio_particle_read_sfc_range_species( handle, start, end,
                start_species, end_species, callback, params );
        if ( ret != ARTIO_SUCCESS ) return ret;
    }

    return ARTIO_SUCCESS;
}

int artio_particle_read_sfc_range(artio_fileset *handle,
        int64_t sfc1, int64_t sfc2,
        artio_particle_callback callback, void *params ) {
    if ( handle == NULL ) {
        return ARTIO_ERR_INVALID_HANDLE;
    }
    if (handle->open_mode != ARTIO_FILESET_READ ||
            !(handle->open_type & ARTIO_OPEN_PARTICLES) ||
            handle->particle == NULL ) {
        return ARTIO_ERR_INVALID_FILESET_MODE;
    }
    return artio_particle_read_sfc_range_species( handle, sfc1, sfc2,
            0, handle->particle->num_species-1, callback, params );
}
int artio_particle_read_sfc_range_species(artio_fileset *handle,
        int64_t sfc1, int64_t sfc2,
        int start_species, int end_species,
        artio_particle_callback callback, void *params ) {
    int64_t sfc;
    int particle, species;
    int *num_particles_per_species;
    artio_particle_file *phandle;
    int64_t pid = 0l;
    int subspecies;
    double * primary_variables = NULL;
    float * secondary_variables = NULL;
    int num_primary, num_secondary;
    int ret;

    if ( handle == NULL ) {
        return ARTIO_ERR_INVALID_HANDLE;
    }
    if (handle->open_mode != ARTIO_FILESET_READ ||
            !(handle->open_type & ARTIO_OPEN_PARTICLES) ) {
        return ARTIO_ERR_INVALID_FILESET_MODE;
    }

    phandle = handle->particle;
    if ( start_species < 0 ||
            start_species > end_species ||
            end_species > phandle->num_species-1 ) {
        return ARTIO_ERR_INVALID_SPECIES;
    }

    num_particles_per_species = (int *)malloc(phandle->num_species * sizeof(int));
    if ( num_particles_per_species == NULL ) {
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }

    ret = artio_particle_cache_sfc_range(handle, sfc1, sfc2);
    if ( ret != ARTIO_SUCCESS ) {
        free( num_particles_per_species );
        return ret;
    }

    num_primary = num_secondary = 0;
    for ( species = start_species; species <= end_species; species++ ) {
        num_primary = MAX( phandle->num_primary_variables[species], num_primary );
        num_secondary = MAX( phandle->num_secondary_variables[species], num_secondary );
    }

    primary_variables = (double *)malloc(num_primary * sizeof(double));
    if ( primary_variables == NULL ) {
        free( num_particles_per_species );
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }
    secondary_variables = (float *)malloc(num_secondary * sizeof(float));
    if ( secondary_variables == NULL ) {
        free( num_particles_per_species );
        free( primary_variables );
        return ARTIO_ERR_MEMORY_ALLOCATION;
    }

    for ( sfc = sfc1; sfc <= sfc2; sfc++ ) {
        ret = artio_particle_read_root_cell_begin(handle, sfc,
                num_particles_per_species);
        if ( ret != ARTIO_SUCCESS ) {
            free( num_particles_per_species );
            free( primary_variables );
            free( secondary_variables );
            return ret;
        }

        for ( species = start_species; species <= end_species; species++) {
            ret = artio_particle_read_species_begin(handle, species);
            if ( ret != ARTIO_SUCCESS ) {
                free( num_particles_per_species );
                free( primary_variables );
                free( secondary_variables );
                return ret;
            }

            for (particle = 0; particle < num_particles_per_species[species]; particle++) {
                ret = artio_particle_read_particle(handle, &pid, &subspecies,
                        primary_variables, secondary_variables);
                if ( ret != ARTIO_SUCCESS ) {
                    free( num_particles_per_species );
                    free( primary_variables );
                    free( secondary_variables );
                    return ret;
                }

                callback(sfc, species, subspecies, pid,
                        primary_variables, secondary_variables, params );
            }

            artio_particle_read_species_end(handle);
        }

        artio_particle_read_root_cell_end(handle);
    }

    free(primary_variables);
    free(secondary_variables);
    free(num_particles_per_species);
    return ARTIO_SUCCESS;
}
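/*
 * Usage sketch (illustrative): reading every particle of every species in
 * an sfc range via the callback interface above.  The callback parameter
 * list mirrors the invocation in artio_particle_read_sfc_range_species;
 * the fileset handle is assumed to be open for particle reading.
 */
#if 0 /* example only; excluded from compilation */
static void print_particle_cb(int64_t sfc, int species, int subspecies,
        int64_t pid, double *primary_variables, float *secondary_variables,
        void *params) {
    (void)sfc; (void)species; (void)subspecies;
    (void)primary_variables; (void)secondary_variables; (void)params;
    printf("particle %ld\n", (long)pid);
}

static int example_read_particles(artio_fileset *handle,
        int64_t sfc1, int64_t sfc2) {
    /* all species; use artio_particle_read_sfc_range_species to narrow */
    return artio_particle_read_sfc_range(handle, sfc1, sfc2,
            print_particle_cb, NULL);
}
#endif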
If you failed to receive a copy of this file, see * **********************************************************************/ #include "artio.h" #include "artio_internal.h" #ifndef ARTIO_MPI #include #include #include #include #ifdef _WIN32 typedef __int64 int64_t; typedef __int32 int32_t; #else #include #endif struct ARTIO_FH { FILE *fh; int mode; char *data; int bfptr; int bfsize; int bfend; }; #ifdef _WIN32 #define FOPEN_FLAGS "b" #define fseek _fseeki64 #else #define FOPEN_FLAGS "" #endif artio_context artio_context_global_struct = { 0 }; const artio_context *artio_context_global = &artio_context_global_struct; artio_fh *artio_file_fopen_i( char * filename, int mode, const artio_context *not_used ) { artio_fh *ffh; /* check for invalid combination of mode parameter */ if ( ( mode & ARTIO_MODE_READ && mode & ARTIO_MODE_WRITE ) || !( mode & ARTIO_MODE_READ || mode & ARTIO_MODE_WRITE ) ) { return NULL; } ffh = (artio_fh *)malloc(sizeof(artio_fh)); if ( ffh == NULL ) { return NULL; } ffh->mode = mode; ffh->bfsize = -1; ffh->bfend = -1; ffh->bfptr = -1; ffh->data = NULL; if ( mode & ARTIO_MODE_ACCESS ) { ffh->fh = fopen( filename, ( mode & ARTIO_MODE_WRITE ) ? "w"FOPEN_FLAGS : "r"FOPEN_FLAGS ); if ( ffh->fh == NULL ) { free( ffh ); return NULL; } } return ffh; } int artio_file_attach_buffer_i( artio_fh *handle, void *buf, int buf_size ) { if ( !(handle->mode & ARTIO_MODE_ACCESS ) ) { return ARTIO_ERR_INVALID_FILE_MODE; } if ( handle->data != NULL ) { return ARTIO_ERR_BUFFER_EXISTS; } handle->bfsize = buf_size; handle->bfend = -1; handle->bfptr = 0; handle->data = (char *)buf; return ARTIO_SUCCESS; } int artio_file_detach_buffer_i( artio_fh *handle ) { int ret; ret = artio_file_fflush(handle); if ( ret != ARTIO_SUCCESS ) return ret; handle->data = NULL; handle->bfsize = -1; handle->bfend = -1; handle->bfptr = -1; return ARTIO_SUCCESS; } int artio_file_fwrite_i(artio_fh *handle, const void *buf, int64_t count, int type ) { size_t size; int64_t remain; char *p; int size32; if ( !(handle->mode & ARTIO_MODE_WRITE) || !(handle->mode & ARTIO_MODE_ACCESS) ) { return ARTIO_ERR_INVALID_FILE_MODE; } size = artio_type_size( type ); if ( size == (size_t)-1 ) { return ARTIO_ERR_INVALID_DATATYPE; } if ( count > ARTIO_INT64_MAX / size ) { return ARTIO_ERR_IO_OVERFLOW; } remain = count*size; p = (char *)buf; if ( handle->data == NULL ) { /* force writes to 32-bit sizes */ while ( remain > 0 ) { size32 = MIN( ARTIO_IO_MAX, remain ); if ( fwrite( p, 1, size32, handle->fh ) != size32 ) { return ARTIO_ERR_IO_WRITE; } remain -= size32; p += size32; } } else if ( remain < handle->bfsize - handle->bfptr ) { memcpy( handle->data + handle->bfptr, p, (size_t)remain ); handle->bfptr += remain; } else { size32 = handle->bfsize - handle->bfptr; memcpy( handle->data + handle->bfptr, p, size32 ); if ( fwrite( handle->data, 1, handle->bfsize, handle->fh ) != handle->bfsize ) { return ARTIO_ERR_IO_WRITE; } p += size32; remain -= size32; while ( remain > handle->bfsize ) { /* write directly to file-handle in unbuffered case */ if ( fwrite( p, 1, handle->bfsize, handle->fh ) != handle->bfsize ) { return ARTIO_ERR_IO_WRITE; } remain -= handle->bfsize; p += handle->bfsize; } memcpy( handle->data, p, (size_t)remain); handle->bfptr = remain; } return ARTIO_SUCCESS; } int artio_file_fflush_i(artio_fh *handle) { if ( !(handle->mode & ARTIO_MODE_ACCESS) ) { return ARTIO_ERR_INVALID_FILE_MODE; } if ( handle->mode & ARTIO_MODE_WRITE ) { if ( handle->bfptr > 0 ) { if ( fwrite( handle->data, 1, handle->bfptr, handle->fh ) != handle->bfptr 
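/*
 * artio_file_fwrite_i above takes one of three paths: unbuffered writes
 * are chunked to at most ARTIO_IO_MAX bytes per fwrite so each count
 * fits in 32 bits; writes that fit in an attached buffer are just
 * memcpy'd; anything larger flushes the buffer and streams whole
 * buffer-sized blocks straight to the file.  A sketch of buffered use
 * (the filename is hypothetical, the public non-_i names presumably
 * wrap these internals, and error handling is elided):
 */
#if 0
#include <stdlib.h>

static void buffered_write(const double *data, int64_t n)
{
	artio_fh *fh = artio_file_fopen_i((char *)"example.dat",
			ARTIO_MODE_WRITE | ARTIO_MODE_ACCESS, artio_context_global);
	char *buf = (char *)malloc(65536);
	artio_file_attach_buffer_i(fh, buf, 65536);
	artio_file_fwrite_i(fh, data, n, ARTIO_TYPE_DOUBLE);
	artio_file_detach_buffer_i(fh);   /* flushes the buffered tail */
	artio_file_fclose_i(fh);
	free(buf);
}
#endif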
) { return ARTIO_ERR_IO_WRITE; } handle->bfptr = 0; } } else if ( handle->mode & ARTIO_MODE_READ ) { handle->bfend = -1; handle->bfptr = 0; } else { return ARTIO_ERR_INVALID_FILE_MODE; } return ARTIO_SUCCESS; } int artio_file_fread_i(artio_fh *handle, void *buf, int64_t count, int type ) { size_t size, avail, remain; int size32; char *p; if ( !(handle->mode & ARTIO_MODE_READ) ) { return ARTIO_ERR_INVALID_FILE_MODE; } size = artio_type_size( type ); if ( size == (size_t)-1 ) { return ARTIO_ERR_INVALID_DATATYPE; } if ( count > ARTIO_INT64_MAX / size ) { return ARTIO_ERR_IO_OVERFLOW; } remain = size*count; p = (char *)buf; if ( handle->data == NULL ) { while ( remain > 0 ) { size32 = MIN( ARTIO_IO_MAX, remain ); if ( fread( p, 1, size32, handle->fh) != size32 ) { return ARTIO_ERR_INSUFFICIENT_DATA; } remain -= size32; p += size32; } } else { if ( handle->bfend == -1 ) { /* load initial data into buffer */ handle->bfend = fread( handle->data, 1, handle->bfsize, handle->fh ); handle->bfptr = 0; } /* read from buffer */ while ( remain > 0 && handle->bfend > 0 && handle->bfptr + remain >= handle->bfend ) { avail = handle->bfend - handle->bfptr; memcpy( p, handle->data + handle->bfptr, avail ); p += avail; remain -= avail; /* refill buffer */ handle->bfend = fread( handle->data, 1, handle->bfsize, handle->fh ); handle->bfptr = 0; } if ( remain > 0 ) { if ( handle->bfend == 0 ) { /* ran out of data, eof */ return ARTIO_ERR_INSUFFICIENT_DATA; } memcpy( p, handle->data + handle->bfptr, (size_t)remain ); handle->bfptr += (int)remain; } } if(handle->mode & ARTIO_MODE_ENDIAN_SWAP) { switch (type) { case ARTIO_TYPE_INT : artio_int_swap( (int32_t *)buf, count ); break; case ARTIO_TYPE_FLOAT : artio_float_swap( (float *)buf, count ); break; case ARTIO_TYPE_DOUBLE : artio_double_swap( (double *)buf, count ); break; case ARTIO_TYPE_LONG : artio_long_swap( (int64_t *)buf, count ); break; default : return ARTIO_ERR_INVALID_DATATYPE; } } return ARTIO_SUCCESS; } int artio_file_ftell_i( artio_fh *handle, int64_t *offset ) { size_t current = ftell( handle->fh ); if ( handle->bfend > 0 ) { current -= handle->bfend; } if ( handle->bfptr > 0 ) { current += handle->bfptr; } *offset = (int64_t)current; return ARTIO_SUCCESS; } int artio_file_fseek_i(artio_fh *handle, int64_t offset, int whence ) { size_t current; if ( handle->mode & ARTIO_MODE_ACCESS ) { if ( whence == ARTIO_SEEK_CUR ) { if ( offset == 0 ) { return ARTIO_SUCCESS; } else if ( handle->mode & ARTIO_MODE_READ && handle->bfend > 0 && handle->bfptr + offset >= 0 && handle->bfptr + offset < handle->bfend ) { handle->bfptr += offset; return ARTIO_SUCCESS; } else { /* modify offset due to offset in buffer */ if ( handle->bfptr > 0 ) { current = offset - handle->bfend + handle->bfptr; } else { current = offset; } artio_file_fflush( handle ); fseek( handle->fh, (size_t)current, SEEK_CUR ); } } else if ( whence == ARTIO_SEEK_SET ) { current = ftell( handle->fh ); if ( handle->mode & ARTIO_MODE_WRITE && current <= offset && offset < current + handle->bfsize && handle->bfptr == offset - current ) { return ARTIO_SUCCESS; } else if ( handle->mode & ARTIO_MODE_READ && handle->bfptr > 0 && handle->bfend > 0 && handle->bfptr < handle->bfend && offset >= current - handle->bfend && offset < current ) { handle->bfptr = offset - current + handle->bfend; } else { artio_file_fflush( handle ); fseek( handle->fh, (size_t)offset, SEEK_SET ); } } else if ( whence == ARTIO_SEEK_END ) { artio_file_fflush( handle ); fseek( handle->fh, (size_t)offset, SEEK_END ); } else { /* unknown 
whence */ return ARTIO_ERR_INVALID_SEEK; } } else { return ARTIO_ERR_INVALID_FILE_MODE; } return ARTIO_SUCCESS; } int artio_file_fclose_i(artio_fh *handle) { if ( handle->mode & ARTIO_MODE_ACCESS ) { artio_file_fflush(handle); fclose(handle->fh); } free(handle); return ARTIO_SUCCESS; } void artio_file_set_endian_swap_tag_i(artio_fh *handle) { handle->mode |= ARTIO_MODE_ENDIAN_SWAP; } #endif /* ifndef ARTIO_MPI */ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/artio/artio_headers/artio_selector.c0000644000175100001770000002142114714401662023334 0ustar00runnerdocker/********************************************************************** * Copyright (c) 2012-2013, Douglas H. Rudd * All rights reserved. * * This file is part of the artio library. * * artio is free software: you can redistribute it and/or modify * it under the terms of the GNU Lesser General Public License as * published by the Free Software Foundation, either version 3 of the * License, or (at your option) any later version. * * artio is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU Lesser General Public License for more details. * * Copies of the GNU Lesser General Public License and the GNU General * Public License are available in the file LICENSE, included with this * distribution. If you failed to receive a copy of this file, see * **********************************************************************/ #include "artio.h" #include "artio_internal.h" #include #include #include #ifdef _WIN32 typedef __int64 int64_t; typedef __int32 int32_t; #else #include #endif #define ARTIO_SELECTION_LIST_SIZE 1024 #define ARTIO_SELECTION_VOLUME_LIMIT (1L<<60) int artio_add_volume_to_selection( artio_fileset *handle, int lcoords[3], int rcoords[3], int64_t sfcs[8], artio_selection *selection ); int artio_selection_iterator( artio_selection *selection, int64_t max_range_size, int64_t *start, int64_t *end ) { if ( selection->cursor < 0 ) { selection->cursor = 0; } if ( selection->cursor == selection->num_ranges ) { selection->cursor = -1; return ARTIO_SELECTION_EXHAUSTED; } if ( selection->subcycle > 0 ) { *start = selection->subcycle+1; } else { *start = selection->list[2*selection->cursor]; } *end = selection->list[2*selection->cursor+1]; if ( *end - *start > max_range_size ) { *end = *start + max_range_size-1; selection->subcycle = *end; } else { selection->subcycle = -1; selection->cursor++; } return ARTIO_SUCCESS; } int artio_selection_iterator_reset( artio_selection *selection ) { selection->cursor = -1; selection->subcycle = -1; return ARTIO_SUCCESS; } int64_t artio_selection_size( artio_selection *selection ) { int i; int64_t count = 0; for ( i = 0; i < selection->num_ranges; i++ ) { count += selection->list[2*i+1] - selection->list[2*i] + 1; } return count; } artio_selection *artio_selection_allocate( artio_fileset *handle ) { artio_selection *selection = (artio_selection *)malloc(sizeof(artio_selection)); if ( selection != NULL ) { selection->list = (int64_t *)malloc(2*ARTIO_SELECTION_LIST_SIZE*sizeof(int64_t)); if ( selection->list == NULL ) { free(selection); return NULL; } } selection->subcycle = -1; selection->cursor = -1; selection->size = ARTIO_SELECTION_LIST_SIZE; selection->num_ranges = 0; selection->fileset = handle; return selection; } int artio_selection_destroy( artio_selection *selection ) { if ( selection == NULL ) { 
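/*
 * artio_selection_iterator (above) walks the stored ranges in order and
 * subdivides any range longer than max_range_size, so a caller can cap
 * its per-iteration work.  Typical traversal, mirroring the loop in
 * artio_particle_read_selection_species in artio_particle.c:
 */
#if 0
static void visit_selection(artio_selection *selection)
{
	int64_t start, end;
	artio_selection_iterator_reset(selection);
	/* 1024 is an illustrative cap on how many SFC indices come back at once */
	while (artio_selection_iterator(selection, 1024, &start, &end)
			== ARTIO_SUCCESS) {
		/* process SFC indices start..end (inclusive) */
	}
}
#endif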
return ARTIO_ERR_INVALID_SELECTION; } if ( selection->list != NULL ) { free( selection->list ); } free(selection); return ARTIO_SUCCESS; } int artio_selection_add_range( artio_selection *selection, int64_t start, int64_t end ) { int i, j; int64_t *new_list; if ( selection == NULL ) { return ARTIO_ERR_INVALID_SELECTION; } if ( start < 0 || end >= selection->fileset->num_root_cells || start > end ) { return ARTIO_ERR_INVALID_SFC_RANGE; } for ( i = 0; i < selection->num_ranges; i++ ) { if ( (start >= selection->list[2*i] && start <= selection->list[2*i+1] ) || (end >= selection->list[2*i] && end <= selection->list[2*i+1] ) ) { return ARTIO_ERR_INVALID_STATE; } } /* locate page */ if ( selection->num_ranges == 0 ) { selection->list[0] = start; selection->list[1] = end; selection->num_ranges = 1; return ARTIO_SUCCESS; } else { /* eventually replace with binary search */ for ( i = 0; i < selection->num_ranges; i++ ) { if ( end < selection->list[2*i] ) { break; } } if ( ( i == 0 && end < selection->list[2*i]-1 ) || ( i == selection->num_ranges && start > selection->list[2*i-1]+1 ) || ( end < selection->list[2*i]-1 && start > selection->list[2*i-1]+1 ) ) { if ( selection->num_ranges == selection->size ) { new_list = (int64_t *)malloc(4*selection->size*sizeof(int64_t)); if ( new_list == NULL ) { return ARTIO_ERR_MEMORY_ALLOCATION; } for ( j = 0; j < i; j++ ) { new_list[2*j] = selection->list[2*j]; new_list[2*j+1] = selection->list[2*j+1]; } for ( ; j < selection->num_ranges; j++ ) { new_list[2*j+2] = selection->list[2*j]; new_list[2*j+3] = selection->list[2*j+1]; } selection->size *= 2; free( selection->list ); selection->list = new_list; } else { for ( j = selection->num_ranges-1; j >= i; j-- ) { selection->list[2*j+2] = selection->list[2*j]; selection->list[2*j+3] = selection->list[2*j+1]; } } selection->list[2*i] = start; selection->list[2*i+1] = end; selection->num_ranges++; } else { if ( end == selection->list[2*i]-1 ) { selection->list[2*i] = start; } else if ( start == selection->list[2*i-1]+1 ) { selection->list[2*i-1] = end; } /* merge 2 ranges if necessary */ if ( selection->list[2*i] == selection->list[2*i-1]+1 ) { selection->list[2*i-1] = selection->list[2*i+1]; selection->num_ranges--; for ( ; i < selection->num_ranges; i++ ) { selection->list[2*i] = selection->list[2*i+2]; selection->list[2*i+1] = selection->list[2*i+3]; } } } } return ARTIO_SUCCESS; } int artio_selection_add_root_cell( artio_selection *selection, int coords[3] ) { int i; int64_t sfc; if ( selection == NULL ) { return ARTIO_ERR_INVALID_SELECTION; } for ( i = 0; i < 3; i++ ) { if ( coords[i] < 0 || coords[i] >= selection->fileset->num_grid ) { return ARTIO_ERR_INVALID_COORDINATES; } } sfc = artio_sfc_index( selection->fileset, coords ); return artio_selection_add_range( selection, sfc, sfc ); } void artio_selection_print( artio_selection *selection ) { int i; for ( i = 0; i < selection->num_ranges; i++ ) { printf("%u: %ld %ld\n", i, selection->list[2*i], selection->list[2*i+1] ); } } artio_selection *artio_select_all( artio_fileset *handle ) { artio_selection *selection; if ( handle == NULL ) { return NULL; } selection = artio_selection_allocate(handle); if ( selection == NULL ) { return NULL; } if ( artio_selection_add_range( selection, 0, handle->num_root_cells-1 ) != ARTIO_SUCCESS ) { artio_selection_destroy(selection); return NULL; } return selection; } artio_selection *artio_select_volume( artio_fileset *handle, double lpos[3], double rpos[3] ) { int i; int64_t sfc; int coords[3]; int lcoords[3]; int rcoords[3]; 
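/*
 * artio_selection_add_range (above) keeps the range list sorted and
 * coalesces abutting ranges: adding (0,3), then (7,9), then (4,6)
 * leaves the single range (0,9), while a range that overlaps an
 * existing entry is rejected with ARTIO_ERR_INVALID_STATE rather than
 * merged.
 */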
artio_selection *selection; if ( handle == NULL ) { return NULL; } for ( i = 0; i < 3; i++ ) { if ( lpos[i] < 0.0 || lpos[i] >= rpos[i] ) { return NULL; } } for ( i = 0; i < 3; i++ ) { lcoords[i] = (int)lpos[i]; rcoords[i] = (int)rpos[i]; } selection = artio_selection_allocate( handle ); if ( selection == NULL ) { return NULL; } for ( coords[0] = lcoords[0]; coords[0] <= rcoords[0]; coords[0]++ ) { for ( coords[1] = lcoords[1]; coords[1] <= rcoords[1]; coords[1]++ ) { for ( coords[2] = lcoords[2]; coords[2] <= rcoords[2]; coords[2]++ ) { sfc = artio_sfc_index( handle, coords ); if ( artio_selection_add_range( selection, sfc, sfc ) != ARTIO_SUCCESS ) { artio_selection_destroy(selection); return NULL; } } } } return selection; } artio_selection *artio_select_cube( artio_fileset *handle, double center[3], double size ) { int i, j, k, dx; int64_t sfc; int coords[3], coords2[3]; artio_selection *selection; if ( handle == NULL ) { return NULL; } if ( size <= 0.0 || size > handle->num_grid/2 ) { return NULL; } dx = (int)(center[0] + 0.5*size) - (int)(center[0] - 0.5*size) + 1; for ( i = 0; i < 3; i++ ) { if ( center[i] < 0.0 || center[i] >= handle->num_grid ) { return NULL; } coords[i] = (int)(center[i] - 0.5*size + handle->num_grid) % handle->num_grid; } selection = artio_selection_allocate( handle ); if ( selection == NULL ) { return NULL; } for ( i = coords[0]-dx; i <= coords[0]+dx; i++ ) { coords2[0] = (i + handle->num_grid) % handle->num_grid; for ( j = coords[1]-dx; j <= coords[1]+dx; j++ ) { coords2[1] = (j + handle->num_grid) % handle->num_grid; for ( k = coords[2]-dx; k <= coords[2]+dx; k++ ) { coords2[2] = (k + handle->num_grid) % handle->num_grid; sfc = artio_sfc_index( handle, coords2 ); if ( artio_selection_add_range( selection, sfc, sfc ) != ARTIO_SUCCESS ) { artio_selection_destroy(selection); return NULL; } } } } return selection; } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/artio/artio_headers/artio_sfc.c0000644000175100001770000002023214714401662022266 0ustar00runnerdocker/********************************************************************** * Copyright (c) 2012-2013, Douglas H. Rudd * All rights reserved. * * This file is part of the artio library. * * artio is free software: you can redistribute it and/or modify * it under the terms of the GNU Lesser General Public License as * published by the Free Software Foundation, either version 3 of the * License, or (at your option) any later version. * * artio is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU Lesser General Public License for more details. * * Copies of the GNU Lesser General Public License and the GNU General * Public License are available in the file LICENSE, included with this * distribution. 
If you failed to receive a copy of this file, see * **********************************************************************/ #include "artio.h" #include "artio_internal.h" #include #include #include #define rollLeft(x,y,mask) ((x<>(nDim-y))) & mask #define rollRight(x,y,mask) ((x>>y) | (x<<(nDim-y))) & mask /******************************************************* * morton_index ******************************************************/ int64_t artio_morton_index( artio_fileset *handle, int coords[nDim] ) /* purpose: interleaves the bits of the nDim integer * coordinates, normally called Morton or z-ordering * * Used by the hilbert curve algorithm */ { int i, d; int64_t mortonnumber = 0; int64_t bitMask = 1L << (handle->nBitsPerDim - 1); /* interleave bits of coordinates */ for ( i = handle->nBitsPerDim; i > 0; i-- ) { for ( d = 0; d < nDim; d++ ) { mortonnumber |= ( coords[d] & bitMask ) << (((nDim - 1) * i ) - d ); } bitMask >>= 1; } return mortonnumber; } /******************************************************* * hilbert_index ******************************************************/ int64_t artio_hilbert_index( artio_fileset *handle, int coords[nDim] ) /* purpose: calculates the 1-d space-filling-curve index * corresponding to the nDim set of coordinates * * Uses the Hilbert curve algorithm given in * Alternative Algorithm for Hilbert's Space- * Filling Curve, A.R. Butz, IEEE Trans on Comp., * p. 424, 1971 */ { int i, j; int64_t hilbertnumber; int64_t singleMask; int64_t dimMask; int64_t numberShifts; int principal; int64_t o; int64_t rho; int64_t w; int64_t interleaved; /* begin by transposing bits */ interleaved = artio_morton_index( handle, coords ); /* mask out nDim and 1 bit blocks starting * at highest order bits */ singleMask = 1L << ((handle->nBitsPerDim - 1) * nDim); dimMask = singleMask; for ( i = 1; i < nDim; i++ ) { dimMask |= singleMask << i; } w = 0; numberShifts = 0; hilbertnumber = 0; while (singleMask) { o = (interleaved ^ w) & dimMask; o = rollLeft( o, numberShifts, dimMask ); rho = o; for ( j = 1; j < nDim; j++ ) { rho ^= (o>>j) & dimMask; } hilbertnumber |= rho; /* break out early (we already have complete number * no need to calculate other numbers) */ if ( singleMask == 1 ) { break; } /* calculate principal position */ principal = 0; for ( i = 1; i < nDim; i++ ) { if ( (hilbertnumber & singleMask) != ((hilbertnumber>>i) & singleMask)) { principal = i; break; } } /* complement lowest bit position */ o ^= singleMask; /* force even parity by complementing at principal position if necessary * Note: lowest order bit of hilbertnumber gives you parity of o at this * point due to xor operations of previous steps */ if ( !(hilbertnumber & singleMask) ) { o ^= singleMask << principal; } /* rotate o right by numberShifts */ o = rollRight( o, numberShifts, dimMask ); /* find next numberShifts */ numberShifts += (nDim - 1) - principal; numberShifts %= nDim; w ^= o; w >>= nDim; singleMask >>= nDim; dimMask >>= nDim; } return hilbertnumber; } /******************************************************* * hilbert_coords ******************************************************/ void artio_hilbert_coords( artio_fileset *handle, int64_t index, int coords[nDim] ) /* purpose: performs the inverse of sfc_index, * taking a 1-d space-filling-curve index * and transforming it into nDim coordinates * * returns: the coordinates in coords */ { int i, j; int64_t dimMask; int64_t singleMask; int64_t sigma; int64_t sigma_; int64_t tau; int64_t tau_; int num_shifts; int principal; int64_t w; int64_t x = 0; w 
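/*
 * rollLeft and rollRight, defined at the top of this file, are circular
 * rotations of an nDim-bit field, used so each Hilbert step works in
 * the frame of the previous subcube orientation.  rollLeft's shift
 * operators were lost in this copy; by symmetry with rollRight its
 * intended form is
 *
 *   #define rollLeft(x,y,mask) ((x<<y) | (x>>(nDim-y))) & mask
 *
 * artio_morton_index interleaves coordinate bits most-significant
 * first, as x_{n-1} y_{n-1} z_{n-1} ... x_0 y_0 z_0; e.g. with
 * nBitsPerDim == 2, coords (3,1,2) map to 0b101110 == 46.
 */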
= 0; sigma_ = 0; num_shifts = 0; singleMask = 1L << ((handle->nBitsPerDim - 1) * nDim); dimMask = singleMask; for ( i = 1; i < nDim; i++ ) { dimMask |= singleMask << i; } for ( i = 0; i < handle->nBitsPerDim; i++ ) { sigma = ((index & dimMask) ^ ( (index & dimMask) >> 1 )) & dimMask; sigma_ |= rollRight( sigma, num_shifts, dimMask ); principal = nDim - 1; for ( j = 1; j < nDim; j++ ) { if ( (index & singleMask) != ((index >> j) & singleMask) ) { principal = nDim - j - 1; break; } } /* complement nth bit */ tau = sigma ^ singleMask; /* if even parity, complement in principal bit position */ if ( !(index & singleMask) ) { tau ^= singleMask << ( nDim - principal - 1 ); } tau_ = rollRight( tau, num_shifts, dimMask ); num_shifts += principal; num_shifts %= nDim; w |= ((w & dimMask) ^ tau_) >> nDim; dimMask >>= nDim; singleMask >>= nDim; } x = w ^ sigma_; /* undo bit interleaving to get coordinates */ for ( i = 0; i < nDim; i++ ) { coords[i] = 0; singleMask = 1L << (nDim*handle->nBitsPerDim - 1 - i); for ( j = 0; j < handle->nBitsPerDim; j++ ) { if ( x & singleMask ) { coords[i] |= 1 << (handle->nBitsPerDim-j-1); } singleMask >>= nDim; } } } int64_t artio_slab_index( artio_fileset *handle, int coords[nDim], int slab_dim ) { int64_t num_grid = 1L << handle->nBitsPerDim; int64_t index; switch ( slab_dim ) { case 0: index = num_grid*num_grid*coords[0] + num_grid*coords[1] + coords[2]; break; case 1: index = num_grid*num_grid*coords[1] + num_grid*coords[0] + coords[2]; break; case 2: index = num_grid*num_grid*coords[2] + num_grid*coords[0] + coords[1]; break; default: index = -1; } return index; } void artio_slab_coords( artio_fileset *handle, int64_t index, int coords[nDim], int slab_dim ) { int64_t num_grid = 1L << handle->nBitsPerDim; switch ( slab_dim ) { case 0: coords[2] = index % num_grid; coords[1] = ((index - coords[2] )/num_grid) % num_grid; coords[0] = (index - coords[2] - num_grid*coords[1])/(num_grid*num_grid); break; case 1: coords[2] = index % num_grid; coords[0] = ((index - coords[2] )/num_grid) % num_grid; coords[1] = (index - coords[2] - num_grid*coords[0])/(num_grid*num_grid); break; case 2: coords[1] = index % num_grid; coords[0] = ((index - coords[1] )/num_grid) % num_grid; coords[2] = (index - coords[1] - num_grid*coords[0])/(num_grid*num_grid); break; } } int64_t artio_sfc_index_position( artio_fileset *handle, double position[nDim] ) { int i; int coords[nDim]; for ( i = 0; i < nDim; i++ ) { coords[i] = (int)position[i]; } return artio_sfc_index(handle, coords); } int64_t artio_sfc_index( artio_fileset *handle, int coords[nDim] ) { switch ( handle->sfc_type ) { case ARTIO_SFC_SLAB_X: return artio_slab_index(handle, coords, 0); case ARTIO_SFC_SLAB_Y: return artio_slab_index(handle, coords, 1); case ARTIO_SFC_SLAB_Z: return artio_slab_index(handle, coords, 2); case ARTIO_SFC_HILBERT: return artio_hilbert_index( handle, coords ); default: return -1; } } void artio_sfc_coords( artio_fileset *handle, int64_t index, int coords[nDim] ) { int i; switch ( handle->sfc_type ) { case ARTIO_SFC_SLAB_X: artio_slab_coords( handle, index, coords, 0 ); break; case ARTIO_SFC_SLAB_Y: artio_slab_coords( handle, index, coords, 1 ); break; case ARTIO_SFC_SLAB_Z: artio_slab_coords( handle, index, coords, 2 ); break; case ARTIO_SFC_HILBERT: artio_hilbert_coords( handle, index, coords ); break; default : for ( i = 0; i < nDim; i++ ) { coords[i] = -1; } break; } } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 
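/*
 * The slab orderings above are plain row-major sweeps: for
 * slab_dim == 0, artio_slab_index computes
 *   index = num_grid*num_grid*x + num_grid*y + z
 * with num_grid = 2^nBitsPerDim, and artio_slab_coords inverts it by
 * successive modulo/divide; e.g. num_grid = 4 maps coords (1,2,3) to
 * 16*1 + 4*2 + 3 = 27.
 */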
yt-4.4.0/yt/frontends/artio/artio_headers/cosmology.c0000644000175100001770000003110114714401662022325 0ustar00runnerdocker#include #include #include #ifndef ERROR #include #define ERROR(msg) { fprintf(stderr,"%s\n",msg); exit(1); } #endif #ifndef ASSERT #include #define ASSERT(exp) { if(!(exp)) { fprintf(stderr,"Failed assertion %s, line: %d\n",#exp,__LINE__); } } #endif #ifndef HEAPALLOC #include #define HEAPALLOC(type,size) (type *)malloc((size)*sizeof(type)) #endif #ifndef NEWARR #include #define NEWARR(size) HEAPALLOC(double,size) #endif #ifndef DELETE #include #define DELETE(ptr) free(ptr) #endif #include "cosmology.h" struct CosmologyParametersStruct { int set; int ndex; int size; double *la; double *aUni; double *aBox; double *tCode; double *tPhys; double *dPlus; double *qPlus; double aLow; double tCodeOffset; double OmegaM; double OmegaD; double OmegaB; double OmegaL; double OmegaK; double OmegaR; double h; double DeltaDC; int flat; double Omh2; double Obh2; }; void cosmology_clear_table(CosmologyParameters *c); void cosmology_fill_table(CosmologyParameters *c, double amin, double amax); void cosmology_fill_table_abox(CosmologyParameters *c, int istart, int n); CosmologyParameters *cosmology_allocate() { CosmologyParameters *c = HEAPALLOC(CosmologyParameters,1); if ( c != NULL ) { memset(c, 0, sizeof(CosmologyParameters)); c->ndex = 200; c->aLow = 1.0e-2; } return c; } void cosmology_free(CosmologyParameters *c) { cosmology_clear_table(c); DELETE(c); } int cosmology_is_set(CosmologyParameters *c) { return (c->OmegaM>0.0 && c->OmegaB>0.0 && c->h>0.0); } void cosmology_fail_on_reset(const char *name, double old_value, double new_value) { char str[150]; sprintf(str,"Trying to change %s from %lg to %lg...\nCosmology has been fixed and cannot be changed.\n",name,old_value,new_value); ERROR(str); } void cosmology_set_OmegaM(CosmologyParameters *c, double v) { if(v < 1.0e-3) v = 1.0e-3; if(fabs(c->OmegaM-v) > 1.0e-5) { if(c->set) cosmology_fail_on_reset("OmegaM",c->OmegaM,v); c->OmegaM = v; c->flat = (fabs(c->OmegaM+c->OmegaL-1.0) > 1.0e-5) ? 0 : 1; cosmology_clear_table(c); } } void cosmology_set_OmegaL(CosmologyParameters *c, double v) { if(fabs(c->OmegaL-v) > 1.0e-5) { if(c->set) cosmology_fail_on_reset("OmegaL",c->OmegaL,v); c->OmegaL = v; c->flat = (fabs(c->OmegaM+c->OmegaL-1.0) > 1.0e-5) ? 
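/*
 * The setters in this block only invalidate the cached tables (via
 * cosmology_clear_table); the tables are rebuilt by cosmology_init or
 * lazily on first lookup.  A minimal flat-LCDM setup might look like
 * the sketch below (parameter values are illustrative; error handling
 * elided):
 */
#if 0
#include "cosmology.h"

static double age_of_universe_yr(void)
{
	CosmologyParameters *c = cosmology_allocate();
	cosmology_set_OmegaM(c, 0.3);
	cosmology_set_OmegaL(c, 0.7);   /* OmegaM + OmegaL == 1 flags a flat model */
	cosmology_set_OmegaB(c, 0.045);
	cosmology_set_h(c, 0.7);
	cosmology_init(c);              /* otherwise done lazily on first use */
	double t0 = tPhys(c, 1.0);      /* physical time at a = 1, in Julian years */
	cosmology_free(c);
	return t0;
}
#endif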
0 : 1; cosmology_clear_table(c); } } void cosmology_set_OmegaB(CosmologyParameters *c, double v) { if(v < 0.0) v = 0.0; if(fabs(c->OmegaB-v) > 1.0e-5) { if(c->set) cosmology_fail_on_reset("OmegaB",c->OmegaB,v); c->OmegaB = v; cosmology_clear_table(c); } } void cosmology_set_h(CosmologyParameters *c, double v) { if(fabs(c->h-v) > 1.0e-5) { if(c->set) cosmology_fail_on_reset("h",c->h,v); c->h = v; cosmology_clear_table(c); } } void cosmology_set_DeltaDC(CosmologyParameters *c, double v) { if(fabs(c->DeltaDC-v) > 1.0e-3) { if(c->set) cosmology_fail_on_reset("DeltaDC",c->DeltaDC,v); c->DeltaDC = v; cosmology_clear_table(c); } } void cosmology_init(CosmologyParameters *c) { if(c->size == 0) /* reset only if the state is dirty */ { if(!cosmology_is_set(c)) ERROR("Not all of the required cosmological parameters have been set; the minimum required set is (OmegaM,OmegaB,h)."); if(c->OmegaB > c->OmegaM) c->OmegaB = c->OmegaM; c->OmegaD = c->OmegaM - c->OmegaB; if(c->flat) { c->OmegaK = 0.0; c->OmegaL = 1.0 - c->OmegaM; } else { c->OmegaK = 1.0 - (c->OmegaM+c->OmegaL); } c->OmegaR = 4.166e-5/(c->h*c->h); c->Omh2 = c->OmegaM*c->h*c->h; c->Obh2 = c->OmegaB*c->h*c->h; cosmology_fill_table(c,c->aLow,1.0); c->tCodeOffset = 0.0; /* Do need to set it to zero first */ #ifndef NATIVE_TCODE_NORMALIZATION c->tCodeOffset = 0.0 - tCode(c,inv_aBox(c,1.0)); #endif } } void cosmology_set_fixed(CosmologyParameters *c) { cosmology_init(c); c->set = 1; } double cosmology_mu(CosmologyParameters *c, double a) { return sqrt(((a*a*c->OmegaL+c->OmegaK)*a+c->OmegaM)*a+c->OmegaR); } double cosmology_dc_factor(CosmologyParameters *c, double dPlus) { double dc = 1.0 + dPlus*c->DeltaDC; return 1.0/pow((dc>0.001)?dc:0.001,1.0/3.0); } void cosmology_fill_table_integrate(CosmologyParameters *c, double a, double y[], double f[]) { double mu = cosmology_mu(c, a); double abox = a*cosmology_dc_factor(c, y[2]); f[0] = a/(abox*abox*mu); f[1] = a/mu; f[2] = y[3]/(a*mu); f[3] = 1.5*c->OmegaM*y[2]/mu; } #ifdef _WIN32 double asinh(double x){ return log(x + sqrt((x * x) + 1.0)); } #endif void cosmology_fill_table_piece(CosmologyParameters *c, int istart, int n) { int i, j; double tPhysUnit = (3.0856775813e17/(365.25*86400))/c->h; /* 1/H0 in Julian years */ double x, aeq = c->OmegaR/c->OmegaM; double tCodeFac = 1.0/sqrt(aeq); double tPhysFac = tPhysUnit*aeq*sqrt(aeq)/sqrt(c->OmegaM); double da, a0, y0[4], y1[4]; double f1[4], f2[4], f3[4], f4[4]; for(i=istart; iaUni[i] = pow(10.0,c->la[i]); } /* // Small a regime, use analytical formulae for matter + radiation model */ for(i=istart; c->aUni[i]<(c->aLow+1.0e-9) && iaUni[i]/aeq; c->tPhys[i] = tPhysFac*2*x*x*(2+sqrt(x+1))/(3*pow(1+sqrt(x+1),2.0)); c->dPlus[i] = aeq*(x + 2.0/3.0 + (6*sqrt(1+x)+(2+3*x)*log(x)-2*(2+3*x)*log(1+sqrt(1+x)))/(log(64.0)-9)); /* long last term is the decaying mode generated after equality; it is very small for x > 10, I keep ot just for completeness; */ c->qPlus[i] = c->aUni[i]*cosmology_mu(c,c->aUni[i])*(1 + ((2+6*x)/(x*sqrt(1+x))+3*log(x)-6*log(1+sqrt(1+x)))/(log(64)-9)); /* this is a^2*dDPlus/dt/H0 */ c->aBox[i] = c->aUni[i]*cosmology_dc_factor(c,c->dPlus[i]); c->tCode[i] = 1.0 - tCodeFac*asinh(sqrt(aeq/c->aBox[i])); } /* // Large a regime, solve ODEs */ ASSERT(i > 0); tCodeFac = 0.5*sqrt(c->OmegaM); tPhysFac = tPhysUnit; y1[0] = c->tCode[i-1]/tCodeFac; y1[1] = c->tPhys[i-1]/tPhysFac; y1[2] = c->dPlus[i-1]; y1[3] = c->qPlus[i-1]; for(; iaUni[i-1]; da = c->aUni[i] - a0; /* RK4 integration */ for(j=0; j<4; j++) y0[j] = y1[j]; cosmology_fill_table_integrate(c, a0,y1,f1); 
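/*
 * Classical fourth-order Runge-Kutta step in the scale factor: f1..f4
 * sample the right-hand side (cosmology_fill_table_integrate above) at
 * a0, a0 + da/2 (twice) and a0 + da, combined as
 *   y(a0+da) = y(a0) + da*(f1 + 2*f2 + 2*f3 + f4)/6
 * for the state y = (tCode/tCodeFac, tPhys/tPhysFac, dPlus, qPlus).
 */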
for(j=0; j<4; j++) y1[j] = y0[j] + 0.5*da*f1[j]; cosmology_fill_table_integrate(c, a0+0.5*da,y1,f2); for(j=0; j<4; j++) y1[j] = y0[j] + 0.5*da*f2[j]; cosmology_fill_table_integrate(c, a0+0.5*da,y1,f3); for(j=0; j<4; j++) y1[j] = y0[j] + da*f3[j]; cosmology_fill_table_integrate(c, a0+da,y1,f4); for(j=0; j<4; j++) y1[j] = y0[j] + da*(f1[j]+2*f2[j]+2*f3[j]+f4[j])/6.0; c->tCode[i] = tCodeFac*y1[0]; c->tPhys[i] = tPhysFac*y1[1]; c->dPlus[i] = y1[2]; c->qPlus[i] = y1[3]; c->aBox[i] = c->aUni[i]*cosmology_dc_factor(c,c->dPlus[i]); } } void cosmology_fill_table(CosmologyParameters *c, double amin, double amax) { int i, imin, imax, iold; double dla = 1.0/c->ndex; double lamin, lamax; double *old_la = c->la; double *old_aUni = c->aUni; double *old_aBox = c->aBox; double *old_tCode = c->tCode; double *old_tPhys = c->tPhys; double *old_dPlus = c->dPlus; double *old_qPlus = c->qPlus; int old_size = c->size; if(amin > c->aLow) amin = c->aLow; lamin = dla*floor(c->ndex*log10(amin)); lamax = dla*ceil(c->ndex*log10(amax)); c->size = 1 + (int)(0.5+c->ndex*(lamax-lamin)); ASSERT(fabs(lamax-lamin-dla*(c->size-1)) < 1.0e-14); c->la = NEWARR(c->size); ASSERT(c->la != NULL); c->aUni = NEWARR(c->size); ASSERT(c->aUni != NULL); c->aBox = NEWARR(c->size); ASSERT(c->aBox != NULL); c->tCode = NEWARR(c->size); ASSERT(c->tCode != NULL); c->tPhys = NEWARR(c->size); ASSERT(c->tPhys != NULL); c->dPlus = NEWARR(c->size); ASSERT(c->dPlus != NULL); c->qPlus = NEWARR(c->size); ASSERT(c->qPlus != NULL); /* // New log10(aUni) table */ for(i=0; isize; i++) { c->la[i] = lamin + dla*i; } if(old_size == 0) { /* // Filling the table for the first time */ cosmology_fill_table_piece(c,0,c->size); } else { /* // Find if we need to expand the lower end */ if(lamin < old_la[0]) { imin = (int)(0.5+c->ndex*(old_la[0]-lamin)); ASSERT(fabs(old_la[0]-lamin-dla*imin) < 1.0e-14); } else imin = 0; /* // Find if we need to expand the upper end */ if(lamax > old_la[old_size-1]) { imax = (int)(0.5+c->ndex*(old_la[old_size-1]-lamin)); ASSERT(fabs(old_la[old_size-1]-lamin-dla*imax) < 1.0e-14); } else imax = c->size - 1; /* // Re-use the rest */ if(lamin > old_la[0]) { iold = (int)(0.5+c->ndex*(lamin-old_la[0])); ASSERT(fabs(lamin-old_la[0]-dla*iold) < 1.0e-14); } else iold = 0; memcpy(c->aUni+imin,old_aUni+iold,sizeof(double)*(imax-imin+1)); memcpy(c->aBox+imin,old_aBox+iold,sizeof(double)*(imax-imin+1)); memcpy(c->tCode+imin,old_tCode+iold,sizeof(double)*(imax-imin+1)); memcpy(c->tPhys+imin,old_tPhys+iold,sizeof(double)*(imax-imin+1)); memcpy(c->dPlus+imin,old_dPlus+iold,sizeof(double)*(imax-imin+1)); memcpy(c->qPlus+imin,old_qPlus+iold,sizeof(double)*(imax-imin+1)); DELETE(old_la); DELETE(old_aUni); DELETE(old_aBox); DELETE(old_tCode); DELETE(old_tPhys); DELETE(old_dPlus); DELETE(old_qPlus); /* // Fill in additional pieces */ if(imin > 0) cosmology_fill_table_piece(c,0,imin); if(imax < c->size-1) cosmology_fill_table_piece(c,imax,c->size); } } void cosmology_clear_table(CosmologyParameters *c) { if(c->size > 0) { DELETE(c->la); DELETE(c->aUni); DELETE(c->aBox); DELETE(c->tCode); DELETE(c->tPhys); DELETE(c->dPlus); DELETE(c->qPlus); c->size = 0; c->la = NULL; c->aUni = NULL; c->aBox = NULL; c->tCode = NULL; c->tPhys = NULL; c->dPlus = NULL; c->qPlus = NULL; } } void cosmology_check_range(CosmologyParameters *c, double a) { ASSERT((a > 1.0e-9) && (a < 1.0e9)); if(c->size == 0) cosmology_init(c); if(a < c->aUni[0]) { cosmology_fill_table(c,a,c->aUni[c->size-1]); } if(a > c->aUni[c->size-1]) { cosmology_fill_table(c,c->aUni[0],a); } } void 
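/*
 * cosmology_set_thread_safe_range (next) pre-extends the tables to
 * cover [amin, amax]: cosmology_check_range reallocates the tables on
 * a cache miss, so forcing coverage up front means later direct calls
 * with arguments inside that interval only read the tables and are
 * safe to issue from several threads at once.
 */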
cosmology_set_thread_safe_range(CosmologyParameters *c, double amin, double amax) { cosmology_check_range(c, amin); cosmology_check_range(c, amax); } double cosmology_get_value_from_table(CosmologyParameters *c, double a, double table[]) { // This is special case code for boundary conditions int idx; double la = log10(a); if (fabs(la - c->la[c->size-1]) < 1.0e-14) { return table[c->size-1]; } else if (fabs(la - c->la[0]) < 1.0e-14) { return table[0]; } idx = (int)(c->ndex*(la-c->la[0])); // Note that because we do idx+1 below, we need -1 here. ASSERT(idx>=0 && (idxsize-1)); /* // Do it as a function of aUni rather than la to ensure exact inversion */ return table[idx] + (table[idx+1]-table[idx])/(c->aUni[idx+1]-c->aUni[idx])*(a-c->aUni[idx]); } int cosmology_find_index(CosmologyParameters *c, double v, double table[]) { int ic, il = 0; int ih = c->size - 1; if(v < table[0]) { return -1; } if(v > table[c->size-1]) { return c->size + 1; } while((ih-il) > 1) { ic = (il+ih)/2; if(v > table[ic]) /* special, not fully optimal form to avoid checking that il < c->size-1 */ il = ic; else ih = ic; } ASSERT(il+1 < c->size); return il; } /* // Direct and inverse functions */ #define DEFINE_FUN(name,offset) \ double name(CosmologyParameters *c, double a) \ { \ cosmology_check_range(c,a); \ return cosmology_get_value_from_table(c,a,c->name) + offset; \ } \ double inv_##name(CosmologyParameters *c, double v) \ { \ int idx; \ double *table; \ if(c->size == 0) cosmology_init(c); \ v -= offset; \ table = c->name; \ idx = cosmology_find_index(c,v,table); \ while(idx < 0) \ { \ cosmology_check_range(c,0.5*c->aUni[0]); \ table = c->name; \ idx = cosmology_find_index(c,v,table); \ } \ while(idx > c->size) \ { \ cosmology_check_range(c,2.0*c->aUni[c->size-1]); \ table = c->name; \ idx = cosmology_find_index(c,v,table); \ } \ return c->aUni[idx] + (c->aUni[idx+1]-c->aUni[idx])/(table[idx+1]-table[idx])*(v-table[idx]); \ } DEFINE_FUN(aBox,0.0); DEFINE_FUN(tCode,c->tCodeOffset); DEFINE_FUN(tPhys,0.0); DEFINE_FUN(dPlus,0.0); DEFINE_FUN(qPlus,0.0); #undef DEFINE_FUN ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/artio/artio_headers/cosmology.h0000644000175100001770000000613214714401662022340 0ustar00runnerdocker#ifndef __COSMOLOGY_H__ #define __COSMOLOGY_H__ typedef struct CosmologyParametersStruct CosmologyParameters; #define COSMOLOGY_DECLARE_PRIMARY_PARAMETER(name) \ void cosmology_set_##name(CosmologyParameters *c, double value) #define cosmology_set(c,name,value) \ cosmology_set_##name(c,value) COSMOLOGY_DECLARE_PRIMARY_PARAMETER(OmegaM); COSMOLOGY_DECLARE_PRIMARY_PARAMETER(OmegaB); COSMOLOGY_DECLARE_PRIMARY_PARAMETER(OmegaL); COSMOLOGY_DECLARE_PRIMARY_PARAMETER(h); COSMOLOGY_DECLARE_PRIMARY_PARAMETER(DeltaDC); #undef COSMOLOGY_DECLARE_PRIMARY_PARAMETER CosmologyParameters *cosmology_allocate(); void cosmology_free(CosmologyParameters *c); /* // Check that all required cosmological parameters have been set. // The minimum set is OmegaM, OmegaB, and h. By default, zero OmegaL, // OmegaK, and the DC mode are assumed. */ int cosmology_is_set(CosmologyParameters *c); /* // Freeze the cosmology and forbid any further changes to it. // In codes that include user-customizable segments (like plugins), // this function van be used for insuring that a user does not // change the cosmology in mid-run. */ void cosmology_set_fixed(CosmologyParameters *c); /* // Manual initialization. 
This does not need to be called, // the initialization is done automatically on the first call // to a relevant function. */ void cosmology_init(CosmologyParameters *c); /* // Set the range of global scale factors for thread-safe // calls to direct functions until the argument leaves the range. */ void cosmology_set_thread_safe_range(CosmologyParameters *c, double amin, double amax); /* // Direct functions take the global cosmological scale factor as the argument. // These functionsare are thread-safe if called with the argument in the // range set by a prior call to cosmology_set_thread_safe_range(...). // Calling them with the argument outside that range is ok, but breaks // thread-safety assurance. */ #define DEFINE_FUN(name) \ double name(CosmologyParameters *c, double a); \ double inv_##name(CosmologyParameters *c, double v); DEFINE_FUN(aBox); DEFINE_FUN(tCode); DEFINE_FUN(tPhys); DEFINE_FUN(dPlus); DEFINE_FUN(qPlus); /* Q+ = a^2 dD+/(H0 dt) */ #undef DEFINE_FUN /* // Conversion macros */ #define abox_from_auni(c,a) aBox(c,a) #define tcode_from_auni(c,a) tCode(c,a) #define tphys_from_auni(c,a) tPhys(c,a) #define dplus_from_auni(c,a) dPlus(c,a) #define auni_from_abox(c,v) inv_aBox(c,v) #define auni_from_tcode(c,v) inv_tCode(c,v) #define auni_from_tphys(c,v) inv_tPhys(c,v) #define auni_from_dplus(c,v) inv_dPlus(c,v) #define abox_from_tcode(c,tcode) aBox(c,inv_tCode(c,tcode)) #define tcode_from_abox(c,abox) tCode(c,inv_aBox(c,abox)) #define tphys_from_abox(c,abox) tPhys(c,inv_aBox(c,abox)) #define tphys_from_tcode(c,tcode) tPhys(c,inv_tCode(c,tcode)) #define dplus_from_tcode(c,tcode) dPlus(c,inv_tCode(c,tcode)) /* // Hubble parameter in km/s/Mpc; defined as macro so that it can be // undefined if needed to avoid the name clash. */ double cosmology_mu(CosmologyParameters *c, double a); #define Hubble(c,a) (100*c->h*cosmology_mu(c,a)/(a*a)) #endif /* __COSMOLOGY_H__ */ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/artio/data_structures.py0000644000175100001770000005056214714401662021137 0ustar00runnerdockerimport os import weakref from collections import defaultdict import numpy as np from yt.data_objects.field_data import YTFieldData from yt.data_objects.index_subobjects.octree_subset import OctreeSubset from yt.data_objects.static_output import Dataset from yt.data_objects.unions import ParticleUnion from yt.frontends.artio import _artio_caller from yt.frontends.artio._artio_caller import ( ARTIOSFCRangeHandler, artio_fileset, artio_is_valid, ) from yt.frontends.artio.fields import ARTIOFieldInfo from yt.funcs import mylog, setdefaultattr from yt.geometry import particle_deposit as particle_deposit from yt.geometry.geometry_handler import Index, YTDataChunk from yt.utilities.exceptions import YTParticleDepositionNotImplemented class ARTIOOctreeSubset(OctreeSubset): _domain_offset = 0 domain_id = -1 _con_args = ("base_region", "sfc_start", "sfc_end", "oct_handler", "ds") _type_name = "octree_subset" _num_zones = 2 def __init__(self, base_region, sfc_start, sfc_end, oct_handler, ds): self.field_data = YTFieldData() self.field_parameters = {} self.sfc_start = sfc_start self.sfc_end = sfc_end self._oct_handler = oct_handler self.ds = ds self._last_mask = None self._last_selector_id = None self._current_particle_type = "all" self._current_fluid_type = self.ds.default_fluid_type self.base_region = base_region self.base_selector = base_region.selector @property def oct_handler(self): return self._oct_handler @property def 
min_ind(self): return self.sfc_start @property def max_ind(self): return self.sfc_end def fill(self, fields, selector): if len(fields) == 0: return [] handle = self.oct_handler.artio_handle field_indices = [ handle.parameters["grid_variable_labels"].index(f) for (ft, f) in fields ] cell_count = selector.count_oct_cells(self.oct_handler, self.domain_id) self.data_size = cell_count levels, cell_inds, file_inds = self.oct_handler.file_index_octs( selector, self.domain_id, cell_count ) domain_counts = self.oct_handler.domain_count(selector) tr = [np.zeros(cell_count, dtype="float64") for field in fields] self.oct_handler.fill_sfc( levels, cell_inds, file_inds, domain_counts, field_indices, tr ) tr = dict(zip(fields, tr, strict=True)) return tr def fill_particles(self, fields): if len(fields) == 0: return {} ptype_indices = self.ds.particle_types art_fields = [(ptype_indices.index(ptype), fname) for ptype, fname in fields] species_data = self.oct_handler.fill_sfc_particles(art_fields) tr = defaultdict(dict) # Now we need to sum things up and then fill for s, f in fields: count = 0 dt = "float64" # default i = ptype_indices.index(s) # No vector fields in ARTIO count += species_data[i, f].size dt = species_data[i, f].dtype tr[s][f] = np.zeros(count, dtype=dt) cp = 0 v = species_data.pop((i, f)) tr[s][f][cp : cp + v.size] = v cp += v.size return tr # We create something of a fake octree here. This is primarily to enable us to # reuse code for things like __getitem__ and the like. We will also create a # new oct_handler type that is functionally equivalent, except that it will # only manage the root mesh. class ARTIORootMeshSubset(ARTIOOctreeSubset): _num_zones = 1 _type_name = "sfc_subset" _selector_module = _artio_caller domain_id = -1 def fill(self, fields, selector): # We know how big these will be. if len(fields) == 0: return [] handle = self.ds._handle field_indices = [ handle.parameters["grid_variable_labels"].index(f) for (ft, f) in fields ] tr = self.oct_handler.fill_sfc(selector, field_indices) self.data_size = tr[0].size tr = dict(zip(fields, tr, strict=True)) return tr def deposit(self, positions, fields=None, method=None, kernel_name="cubic"): # Here we perform our particle deposition. if fields is None: fields = [] cls = getattr(particle_deposit, f"deposit_{method}", None) if cls is None: raise YTParticleDepositionNotImplemented(method) nz = self.nz nvals = (nz, nz, nz, self.ires.size) # We allocate number of zones, not number of octs op = cls(nvals, kernel_name) op.initialize() mylog.debug( "Depositing %s (%s^3) particles into %s Root Mesh", positions.shape[0], positions.shape[0] ** 0.3333333, nvals[-1], ) pos = np.array(positions, dtype="float64") f64 = [np.array(f, dtype="float64") for f in fields] self.oct_handler.deposit(op, self.base_selector, pos, f64) vals = op.finalize() if vals is None: return return np.asfortranarray(vals) class ARTIOIndex(Index): def __init__(self, ds, dataset_type="artio"): self.dataset_type = dataset_type self.dataset = weakref.proxy(ds) # for now, the index file is the dataset! self.index_filename = self.dataset.parameter_filename self.directory = os.path.dirname(self.index_filename) self.max_level = ds.max_level self.range_handlers = {} self.float_type = np.float64 super().__init__(ds, dataset_type) @property def max_range(self): return self.dataset.max_range def _setup_geometry(self): mylog.debug("Initializing Geometry Handler empty for now.") def get_smallest_dx(self): """ Returns (in code units) the smallest cell size in the simulation. 
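The value is domain_width / (domain_dimensions * 2**max_level),
minimized over the three axes: the root-cell size divided by the
refinement factor of the deepest level.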
""" return ( self.dataset.domain_width / (self.dataset.domain_dimensions * 2 ** (self.max_level)) ).min() def _get_particle_type_counts(self): # this could be done in the artio C interface without creating temporary # arrays but I don't want to touch that code # if a future brave soul wants to try, take a look at # `read_sfc_particles` in _artio_caller.pyx result = {} ad = self.ds.all_data() for ptype in self.ds.particle_types_raw: result[ptype] = ad[ptype, "PID"].size return result def convert(self, unit): return self.dataset.conversion_factors[unit] def find_max(self, field, finest_levels=3): """ Returns (value, center) of location of maximum for a given field. """ if (field, finest_levels) in self._max_locations: return self._max_locations[field, finest_levels] mv, pos = self.find_max_cell_location(field, finest_levels) self._max_locations[field, finest_levels] = (mv, pos) return mv, pos def find_max_cell_location(self, field, finest_levels=3): source = self.all_data() if finest_levels is not False: source.min_level = self.max_level - finest_levels mylog.debug("Searching for maximum value of %s", field) max_val, mx, my, mz = source.quantities["MaxLocation"](field) mylog.info("Max Value is %0.5e at %0.16f %0.16f %0.16f", max_val, mx, my, mz) self.ds.parameters[f"Max{field}Value"] = max_val self.ds.parameters[f"Max{field}Pos"] = f"{mx, my, mz}" return max_val, np.array((mx, my, mz), dtype="float64") def _detect_output_fields(self): self.fluid_field_list = self._detect_fluid_fields() self.particle_field_list = self._detect_particle_fields() self.field_list = self.fluid_field_list + self.particle_field_list mylog.debug("Detected fields: %s", (self.field_list,)) def _detect_fluid_fields(self): return [("artio", f) for f in self.ds.artio_parameters["grid_variable_labels"]] def _detect_particle_fields(self): fields = set() for i, ptype in enumerate(self.ds.particle_types): if ptype == "all": break # This will always be after all intrinsic for fname in self.ds.particle_variables[i]: fields.add((ptype, fname)) return list(fields) def _identify_base_chunk(self, dobj): if getattr(dobj, "_chunk_info", None) is None: try: all_data = all(dobj.left_edge == self.ds.domain_left_edge) and all( dobj.right_edge == self.ds.domain_right_edge ) except Exception: all_data = False base_region = getattr(dobj, "base_region", dobj) sfc_start = getattr(dobj, "sfc_start", None) sfc_end = getattr(dobj, "sfc_end", None) nz = getattr(dobj, "_num_zones", 0) if all_data: mylog.debug("Selecting entire artio domain") list_sfc_ranges = self.ds._handle.root_sfc_ranges_all( max_range_size=self.max_range ) elif sfc_start is not None and sfc_end is not None: mylog.debug("Restricting to %s .. 
%s", sfc_start, sfc_end) list_sfc_ranges = [(sfc_start, sfc_end)] else: mylog.debug("Running selector on artio base grid") list_sfc_ranges = self.ds._handle.root_sfc_ranges( dobj.selector, max_range_size=self.max_range ) ci = [] # v = np.array(list_sfc_ranges) # list_sfc_ranges = [ (v.min(), v.max()) ] for start, end in list_sfc_ranges: if (start, end) in self.range_handlers.keys(): range_handler = self.range_handlers[start, end] else: range_handler = ARTIOSFCRangeHandler( self.ds.domain_dimensions, self.ds.domain_left_edge, self.ds.domain_right_edge, self.ds._handle, start, end, ) range_handler.construct_mesh() self.range_handlers[start, end] = range_handler if nz != 2: ci.append( ARTIORootMeshSubset( base_region, start, end, range_handler.root_mesh_handler, self.ds, ) ) if nz != 1 and range_handler.total_octs > 0: ci.append( ARTIOOctreeSubset( base_region, start, end, range_handler.octree_handler, self.ds, ) ) dobj._chunk_info = ci if len(list_sfc_ranges) > 1: mylog.info("Created %d chunks for ARTIO", len(list_sfc_ranges)) dobj._current_chunk = list(self._chunk_all(dobj))[0] def _data_size(self, dobj, dobjs): size = 0 for d in dobjs: size += d.data_size return size def _chunk_all(self, dobj): oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info) yield YTDataChunk(dobj, "all", oobjs, None, cache=True) def _chunk_spatial(self, dobj, ngz, preload_fields=None): if ngz > 0: raise NotImplementedError sobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info) for og in sobjs: if ngz > 0: g = og.retrieve_ghost_zones(ngz, [], smoothed=True) else: g = og yield YTDataChunk(dobj, "spatial", [g], None, cache=True) def _chunk_io(self, dobj, cache=True, local_only=False): # _current_chunk is made from identify_base_chunk oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info) for chunk in oobjs: yield YTDataChunk(dobj, "io", [chunk], None, cache=cache) def _read_fluid_fields(self, fields, dobj, chunk=None): if len(fields) == 0: return {}, [] if chunk is None: self._identify_base_chunk(dobj) fields_to_return = {} fields_to_read, fields_to_generate = self._split_fields(fields) if len(fields_to_read) == 0: return {}, fields_to_generate fields_to_return = self.io._read_fluid_selection( self._chunk_io(dobj), dobj.selector, fields_to_read ) return fields_to_return, fields_to_generate def _icoords_to_fcoords( self, icoords: np.ndarray, ires: np.ndarray, axes: tuple[int, ...] | None = None, ) -> tuple[np.ndarray, np.ndarray]: """ Accepts icoords and ires and returns appropriate fcoords and fwidth. Mostly useful for cases where we have irregularly spaced or structured grids. """ dds = self.ds.domain_width[axes,] / ( self.ds.domain_dimensions[axes,] * self.ds.refine_by ** ires[:, None] ) pos = (0.5 + icoords) * dds + self.ds.domain_left_edge[axes,] return pos, dds class ARTIODataset(Dataset): _handle = None _index_class = ARTIOIndex _field_info_class = ARTIOFieldInfo def __init__( self, filename, dataset_type="artio", storage_filename=None, max_range=1024, units_override=None, unit_system="cgs", default_species_fields=None, ): if self._handle is not None: return self.max_range = max_range self.fluid_types += ("artio",) self._filename = filename self._fileset_prefix = filename[:-4] self._handle = artio_fileset(bytes(self._fileset_prefix, "utf-8")) self.artio_parameters = self._handle.parameters # Here we want to initiate a traceback, if the reader is not built. 
Dataset.__init__( self, filename, dataset_type, units_override=units_override, unit_system=unit_system, default_species_fields=default_species_fields, ) self.storage_filename = storage_filename def _set_code_unit_attributes(self): setdefaultattr(self, "mass_unit", self.quan(self.parameters["unit_m"], "g")) setdefaultattr(self, "length_unit", self.quan(self.parameters["unit_l"], "cm")) setdefaultattr(self, "time_unit", self.quan(self.parameters["unit_t"], "s")) setdefaultattr(self, "velocity_unit", self.length_unit / self.time_unit) def _parse_parameter_file(self): # hard-coded -- not provided by headers self.dimensionality = 3 self.refine_by = 2 self.parameters["HydroMethod"] = "artio" self.parameters["Time"] = 1.0 # default unit is 1... # read header self.num_grid = self._handle.num_grid self.domain_dimensions = np.ones(3, dtype="int32") * self.num_grid self.domain_left_edge = np.zeros(3, dtype="float64") self.domain_right_edge = np.ones(3, dtype="float64") * self.num_grid # TODO: detect if grid exists self.min_level = 0 # ART has min_level=0 self.max_level = self.artio_parameters["grid_max_level"][0] # TODO: detect if particles exist if self._handle.has_particles: self.num_species = self.artio_parameters["num_particle_species"][0] self.particle_variables = [ ["PID", "SPECIES"] for i in range(self.num_species) ] # If multiple N-BODY species exist, they all have the same name, # which can lead to conflict if not renamed # A particle union will be created later to hold all N-BODY # particles and will take the name "N-BODY" labels = self.artio_parameters["particle_species_labels"] if labels.count("N-BODY") > 1: for species, label in enumerate(labels): if label == "N-BODY": labels[species] = f"N-BODY_{species}" self.particle_types_raw = self.artio_parameters["particle_species_labels"] self.particle_types = tuple(self.particle_types_raw) for species in range(self.num_species): # Mass would be best as a derived field, # but wouldn't detect under 'all' label = self.artio_parameters["particle_species_labels"][species] if "N-BODY" in label: self.particle_variables[species].append("MASS") if self.artio_parameters["num_primary_variables"][species] > 0: self.particle_variables[species].extend( self.artio_parameters[ "species_%02d_primary_variable_labels" % (species,) ] ) if self.artio_parameters["num_secondary_variables"][species] > 0: self.particle_variables[species].extend( self.artio_parameters[ "species_%02d_secondary_variable_labels" % (species,) ] ) else: self.num_species = 0 self.particle_variables = [] self.particle_types = () self.particle_types_raw = self.particle_types self.current_time = self.quan( self._handle.tphys_from_tcode(self.artio_parameters["tl"][0]), "yr" ) # detect cosmology if "abox" in self.artio_parameters: self.cosmological_simulation = True abox = self.artio_parameters["abox"][0] self.omega_lambda = self.artio_parameters["OmegaL"][0] self.omega_matter = self.artio_parameters["OmegaM"][0] self.hubble_constant = self.artio_parameters["hubble"][0] self.current_redshift = 1.0 / self.artio_parameters["auni"][0] - 1.0 self.current_redshift_box = 1.0 / abox - 1.0 self.parameters["initial_redshift"] = ( 1.0 / self.artio_parameters["auni_init"][0] - 1.0 ) self.parameters["CosmologyInitialRedshift"] = self.parameters[ "initial_redshift" ] self.parameters["unit_m"] = self.artio_parameters["mass_unit"][0] self.parameters["unit_t"] = self.artio_parameters["time_unit"][0] * abox**2 self.parameters["unit_l"] = self.artio_parameters["length_unit"][0] * abox if 
self.artio_parameters["DeltaDC"][0] != 0: mylog.warning( "DeltaDC != 0, which implies auni != abox. " "Be sure you understand which expansion parameter " "is appropriate for your use! (Gnedin, Kravtsov, & Rudd 2011)" ) else: self.cosmological_simulation = False self.parameters["unit_l"] = self.artio_parameters["length_unit"][0] self.parameters["unit_t"] = self.artio_parameters["time_unit"][0] self.parameters["unit_m"] = self.artio_parameters["mass_unit"][0] # hard coded assumption of 3D periodicity self._periodicity = (True, True, True) def create_field_info(self): super().create_field_info() # only make the particle union if there are multiple DM species. # If there are multiple, "N-BODY_0" will be the first species. If there # are not multiple, they will be all under "N-BODY" if "N-BODY_0" in self.particle_types_raw: dm_labels = [ label for label in self.particle_types_raw if "N-BODY" in label ] # Use the N-BODY label for the union to be consistent with the # previous single mass N-BODY case, where this label was used for # all N-BODY particles by default pu = ParticleUnion("N-BODY", dm_labels) self.add_particle_union(pu) @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: # a valid artio header file starts with a prefix and ends with .art name, _, ext = filename.rpartition(".") if ext != "art": return False return artio_is_valid(bytes(name, "utf-8")) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/artio/definitions.py0000644000175100001770000000373114714401662020232 0ustar00runnerdockeryt_to_art = { "Density": "HVAR_GAS_DENSITY", "TotalEnergy": "HVAR_GAS_ENERGY", "GasEnergy": "HVAR_INTERNAL_ENERGY", "Pressure": "HVAR_PRESSURE", "XMomentumDensity": "HVAR_MOMENTUM_X", "YMomentumDensity": "HVAR_MOMENTUM_Y", "ZMomentumDensity": "HVAR_MOMENTUM_Z", "Gamma": "HVAR_GAMMA", "MetalDensitySNIa": "HVAR_METAL_DENSITY_Ia", "MetalDensitySNII": "HVAR_METAL_DENSITY_II", "Potential": "VAR_POTENTIAL", "PotentialHydro": "VAR_POTENTIAL_HYDRO", "particle_position_x": "POSITION_X", "particle_position_y": "POSITION_Y", "particle_position_z": "POSITION_Z", "particle_velocity_x": "VELOCITY_X", "particle_velocity_y": "VELOCITY_Y", "particle_velocity_z": "VELOCITY_Z", "particle_mass": "MASS", "particle_index": "PID", "particle_species": "SPECIES", "creation_time": "BIRTH_TIME", "particle_mass_initial": "INITIAL_MASS", "particle_metallicity1": "METALLICITY_SNIa", "particle_metallicity2": "METALLICITY_SNII", "stars": "STAR", "nbody": "N-BODY", } art_to_yt = dict(zip(yt_to_art.values(), yt_to_art.keys(), strict=True)) class ARTIOconstants: def __init__(self): self.yr = 365.25 * 86400 self.Myr = 1.0e6 * self.yr self.Gyr = 1.0e9 * self.yr self.pc = 3.0856775813e18 self.kpc = 1.0e3 * self.pc self.Mpc = 1.0e6 * self.pc self.kms = 1.0e5 self.mp = 1.672621637e-24 self.k = 1.3806504e-16 self.G = 6.67428e-8 self.c = 2.99792458e10 self.eV = 1.602176487e-12 self.amu = 1.660538782e-24 self.mH = 1.007825 * self.amu self.mHe = 4.002602 * self.amu self.Msun = 1.32712440018e26 / self.G self.Zsun = 0.0199 self.Yp = 0.24 self.wmu = 4.0 / (8.0 - 5.0 * self.Yp) self.wmu_e = 1.0 / (1.0 - 0.5 * self.Yp) self.XH = 1.0 - self.Yp self.XHe = 0.25 * self.Yp self.gamma = 5.0 / 3.0 self.sigmaT = 6.6524e-25 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/artio/fields.py0000644000175100001770000001510514714401662017163 0ustar00runnerdockerimport numpy as np from yt._typing import KnownFieldsT 
from yt.fields.field_info_container import FieldInfoContainer from yt.units.yt_array import YTArray from yt.utilities.physical_constants import amu_cgs, boltzmann_constant_cgs b_units = "code_magnetic" ra_units = "code_length / code_time**2" rho_units = "code_mass / code_length**3" vel_units = "code_velocity" # NOTE: ARTIO uses momentum density. mom_units = "code_mass / (code_length**2 * code_time)" en_units = "code_mass*code_velocity**2/code_length**3" p_units = "code_mass / (code_length * code_time**2)" class ARTIOFieldInfo(FieldInfoContainer): known_other_fields: KnownFieldsT = ( ("HVAR_GAS_DENSITY", (rho_units, ["density"], None)), ("HVAR_GAS_ENERGY", (en_units, ["total_energy_density"], None)), ("HVAR_INTERNAL_ENERGY", (en_units, ["thermal_energy_density"], None)), ("HVAR_PRESSURE", (p_units, ["pressure"], None)), ("HVAR_MOMENTUM_X", (mom_units, ["momentum_density_x"], None)), ("HVAR_MOMENTUM_Y", (mom_units, ["momentum_density_y"], None)), ("HVAR_MOMENTUM_Z", (mom_units, ["momentum_density_z"], None)), ("HVAR_GAMMA", ("", ["gamma"], None)), ("HVAR_METAL_DENSITY_Ia", (rho_units, ["metal_ia_density"], None)), ("HVAR_METAL_DENSITY_II", (rho_units, ["metal_ii_density"], None)), ("VAR_POTENTIAL", ("", ["potential"], None)), ("VAR_POTENTIAL_HYDRO", ("", ["gas_potential"], None)), ("RT_HVAR_HI", (rho_units, ["H_density"], None)), ("RT_HVAR_HII", (rho_units, ["H_p1_density"], None)), ("RT_HVAR_H2", (rho_units, ["H2_density"], None)), ("RT_HVAR_HeI", (rho_units, ["He_density"], None)), ("RT_HVAR_HeII", (rho_units, ["He_p1_density"], None)), ("RT_HVAR_HeIII", (rho_units, ["He_p2_density"], None)), ) known_particle_fields: KnownFieldsT = ( ("POSITION_X", ("code_length", ["particle_position_x"], None)), ("POSITION_Y", ("code_length", ["particle_position_y"], None)), ("POSITION_Z", ("code_length", ["particle_position_z"], None)), ("VELOCITY_X", (vel_units, ["particle_velocity_x"], None)), ("VELOCITY_Y", (vel_units, ["particle_velocity_y"], None)), ("VELOCITY_Z", (vel_units, ["particle_velocity_z"], None)), ("MASS", ("code_mass", ["particle_mass"], None)), ("PID", ("", ["particle_index"], None)), ("SPECIES", ("", ["particle_type"], None)), ("BIRTH_TIME", ("", [], None)), # code-units defined as dimensionless to # avoid incorrect conversion ("INITIAL_MASS", ("code_mass", ["initial_mass"], None)), ("METALLICITY_SNIa", ("", ["metallicity_snia"], None)), ("METALLICITY_SNII", ("", ["metallicity_snii"], None)), ) def setup_fluid_fields(self): unit_system = self.ds.unit_system def _get_vel(axis): def velocity(field, data): return data["gas", f"momentum_density_{axis}"] / data["gas", "density"] return velocity for ax in "xyz": self.add_field( ("gas", f"velocity_{ax}"), sampling_type="cell", function=_get_vel(ax), units=unit_system["velocity"], ) def _temperature(field, data): tr = data["gas", "thermal_energy_density"] / data["gas", "density"] # We want this to match *exactly* what ARTIO would compute # internally. We therefore use the exact values that are internal # to ARTIO, rather than yt's own internal constants. 
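            # Written out, the conversion below is
            #     T = (gamma - 1) * (e_int / rho) * wmu * mb / k_B
            # where mb = XH * mH + XHe * mHe is the mean mass per nucleus for
            # a helium mass fraction Yp = 0.24 (XH = 1 - Yp and XHe = Yp / 4
            # are number fractions), and wmu = 4 / (8 - 5 * Yp) is the mean
            # molecular weight of a fully ionized H/He plasma, matching the
            # values carried by ARTIOconstants in definitions.py.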
mH = 1.007825 * amu_cgs mHe = 4.002602 * amu_cgs Yp = 0.24 XH = 1.0 - Yp XHe = 0.25 * Yp mb = XH * mH + XHe * mHe wmu = 4.0 / (8.0 - 5.0 * Yp) # Note that we have gamma = 5.0/3.0 here tr *= data["gas", "gamma"] - 1.0 tr *= wmu tr *= mb / boltzmann_constant_cgs return tr # TODO: The conversion factor here needs to be addressed, as previously # it was set as: # unit_T = unit_v**2.0*mb / constants.k self.add_field( ("gas", "temperature"), sampling_type="cell", function=_temperature, units=unit_system["temperature"], ) # Create a metal_density field as sum of existing metal fields. flag1 = ("artio", "HVAR_METAL_DENSITY_Ia") in self.field_list flag2 = ("artio", "HVAR_METAL_DENSITY_II") in self.field_list if flag1 or flag2: if flag1 and flag2: def _metal_density(field, data): tr = data["gas", "metal_ia_density"].copy() np.add(tr, data["gas", "metal_ii_density"], out=tr) return tr elif flag1 and not flag2: def _metal_density(field, data): tr = data["metal_ia_density"] return tr else: def _metal_density(field, data): tr = data["metal_ii_density"] return tr self.add_field( ("gas", "metal_density"), sampling_type="cell", function=_metal_density, units=unit_system["density"], take_log=True, ) def setup_particle_fields(self, ptype): if ptype == "STAR": def _creation_time(field, data): return YTArray( data.ds._handle.tphys_from_tcode_array(data["STAR", "BIRTH_TIME"]), "yr", ) def _age(field, data): return data.ds.current_time - data["STAR", "creation_time"] self.add_field( (ptype, "creation_time"), sampling_type="particle", function=_creation_time, units="yr", ) self.add_field( (ptype, "age"), sampling_type="particle", function=_age, units="yr" ) if self.ds.cosmological_simulation: def _creation_redshift(field, data): return ( 1.0 / data.ds._handle.auni_from_tcode_array( data["STAR", "BIRTH_TIME"] ) - 1.0 ) self.add_field( (ptype, "creation_redshift"), sampling_type="particle", function=_creation_redshift, ) super().setup_particle_fields(ptype) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/artio/io.py0000644000175100001770000000452714714401662016332 0ustar00runnerdockerimport numpy as np from yt.utilities.io_handler import BaseIOHandler class IOHandlerARTIO(BaseIOHandler): _dataset_type = "artio" def _read_fluid_selection(self, chunks, selector, fields): tr = {ftuple: np.empty(0, dtype="float64") for ftuple in fields} cp = 0 for onechunk in chunks: for artchunk in onechunk.objs: rv = artchunk.fill(fields, selector) for f in fields: tr[f].resize(cp + artchunk.data_size) tr[f][cp : cp + artchunk.data_size] = rv.pop(f) cp += artchunk.data_size return tr def _read_particle_coords(self, chunks, ptf): pn = "POSITION_%s" chunks = list(chunks) fields = [(ptype, pn % ax) for ptype, field_list in ptf.items() for ax in "XYZ"] for chunk in chunks: # These should be organized by grid filename for subset in chunk.objs: rv = dict(**subset.fill_particles(fields)) for ptype in sorted(ptf): x, y, z = ( np.asarray(rv[ptype][pn % ax], dtype="=f8") for ax in "XYZ" ) yield ptype, (x, y, z), 0.0 rv.pop(ptype) def _read_particle_fields(self, chunks, ptf, selector): pn = "POSITION_%s" chunks = list(chunks) fields = [ (ptype, fname) for ptype, field_list in ptf.items() for fname in field_list ] for ptype, field_list in sorted(ptf.items()): for ax in "XYZ": if pn % ax not in field_list: fields.append((ptype, pn % ax)) for chunk in chunks: # These should be organized by grid filename for subset in chunk.objs: rv = dict(**subset.fill_particles(fields)) for 
ptype, field_list in sorted(ptf.items()): x, y, z = ( np.asarray(rv[ptype][pn % ax], dtype="=f8") for ax in "XYZ" ) mask = selector.select_points(x, y, z, 0.0) if mask is None: continue for field in field_list: data = np.asarray(rv[ptype][field], "=f8") yield (ptype, field), data[mask] rv.pop(ptype) ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.299152 yt-4.4.0/yt/frontends/artio/tests/0000755000175100001770000000000014714401715016502 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/artio/tests/__init__.py0000644000175100001770000000000014714401662020602 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/artio/tests/test_outputs.py0000644000175100001770000000423514714401662021643 0ustar00runnerdockerfrom numpy.testing import assert_equal from yt.frontends.artio.api import ARTIODataset from yt.loaders import load from yt.testing import ( assert_allclose_units, requires_file, units_override_check, ) from yt.utilities.answer_testing.framework import ( FieldValuesTest, PixelizedProjectionValuesTest, create_obj, data_dir_load, requires_ds, ) _fields = ( ("gas", "temperature"), ("gas", "density"), ("gas", "velocity_magnitude"), ("deposit", "all_density"), ("deposit", "all_count"), ) sizmbhloz = "sizmbhloz-clref04SNth-rs9_a0.9011/sizmbhloz-clref04SNth-rs9_a0.9011.art" @requires_ds(sizmbhloz) def test_sizmbhloz(): ds = data_dir_load(sizmbhloz) ds.max_range = 1024 * 1024 assert_equal(str(ds), "sizmbhloz-clref04SNth-rs9_a0.9011.art") dso = [None, ("sphere", ("max", (0.1, "unitary")))] for dobj_name in dso: for field in _fields: for axis in [0, 1, 2]: for weight_field in [None, ("gas", "density")]: yield PixelizedProjectionValuesTest( ds, axis, field, weight_field, dobj_name ) yield FieldValuesTest(ds, field, dobj_name) dobj = create_obj(ds, dobj_name) s1 = dobj["index", "ones"].sum() s2 = sum(mask.sum() for block, mask in dobj.blocks) assert_equal(s1, s2) assert_equal(ds.particle_type_counts, {"N-BODY": 100000, "STAR": 110650}) @requires_file(sizmbhloz) def test_ARTIODataset(): assert isinstance(data_dir_load(sizmbhloz), ARTIODataset) @requires_file(sizmbhloz) def test_units_override(): units_override_check(sizmbhloz) @requires_file(sizmbhloz) def test_particle_derived_field(): def star_age_alias(field, data): # test to make sure we get back data in the correct units # during field detection return data["STAR", "age"].in_units("Myr") ds = load(sizmbhloz) ds.add_field( ("STAR", "new_field"), function=star_age_alias, units="Myr", sampling_type="particle", ) ad = ds.all_data() assert_allclose_units(ad["STAR", "age"].in_units("Myr"), ad["STAR", "new_field"]) ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.303152 yt-4.4.0/yt/frontends/athena/0000755000175100001770000000000014714401715015462 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/athena/__init__.py0000644000175100001770000000000014714401662017562 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/athena/api.py0000644000175100001770000000024014714401662016602 0ustar00runnerdockerfrom . 
import tests from .data_structures import AthenaDataset, AthenaGrid, AthenaHierarchy from .fields import AthenaFieldInfo from .io import IOHandlerAthena ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/athena/data_structures.py0000644000175100001770000006340714714401662021263 0ustar00runnerdockerimport os import re import weakref import numpy as np from yt.data_objects.index_subobjects.grid_patch import AMRGridPatch from yt.data_objects.static_output import Dataset from yt.fields.magnetic_field import get_magnetic_normalization from yt.funcs import mylog, sglob from yt.geometry.api import Geometry from yt.geometry.geometry_handler import YTDataChunk from yt.geometry.grid_geometry_handler import GridIndex from yt.utilities.chemical_formulas import compute_mu from yt.utilities.decompose import decompose_array, get_psize from yt.utilities.lib.misc_utilities import get_box_grids_level from .fields import AthenaFieldInfo def chk23(strin): return strin.encode("utf-8") def str23(strin): if isinstance(strin, list): return [s.decode("utf-8") for s in strin] else: return strin.decode("utf-8") def check_readline(fl): line = fl.readline() chk = chk23("SCALARS") if chk in line and not line.startswith(chk): line = line[line.find(chk) :] chk = chk23("VECTORS") if chk in line and not line.startswith(chk): line = line[line.find(chk) :] return line def check_break(line): splitup = line.strip().split() do_break = chk23("SCALAR") in splitup do_break = (chk23("VECTOR") in splitup) & do_break do_break = (chk23("TABLE") in splitup) & do_break do_break = (len(line) == 0) & do_break return do_break def _get_convert(fname): def _conv(data): return data.convert(fname) return _conv class AthenaGrid(AMRGridPatch): _id_offset = 0 def __init__(self, id, index, level, start, dimensions, file_offset, read_dims): gname = index.grid_filenames[id] AMRGridPatch.__init__(self, id, filename=gname, index=index) self.filename = gname self.Parent = [] self.Children = [] self.Level = level self.start_index = start.copy() self.stop_index = self.start_index + dimensions self.ActiveDimensions = dimensions.copy() self.file_offset = file_offset self.read_dims = read_dims def _setup_dx(self): # So first we figure out what the index is. We don't assume # that dx=dy=dz , at least here. We probably do elsewhere. 
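        # dds is the cell width along each axis: a grid with a parent
        # inherits the parent's spacing divided by refine_by, while a root
        # grid derives it directly as (RightEdge - LeftEdge) / ActiveDimensions.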
id = self.id - self._id_offset if len(self.Parent) > 0: self.dds = self.Parent[0].dds / self.ds.refine_by else: LE, RE = self.index.grid_left_edge[id, :], self.index.grid_right_edge[id, :] self.dds = self.ds.arr((RE - LE) / self.ActiveDimensions, "code_length") if self.ds.dimensionality < 2: self.dds[1] = 1.0 if self.ds.dimensionality < 3: self.dds[2] = 1.0 self.field_data["dx"], self.field_data["dy"], self.field_data["dz"] = self.dds def parse_line(line, grid): # grid is a dictionary splitup = line.strip().split() if chk23("vtk") in splitup: grid["vtk_version"] = str23(splitup[-1]) elif chk23("time=") in splitup: time_index = splitup.index(chk23("time=")) grid["time"] = float(str23(splitup[time_index + 1]).rstrip(",")) grid["level"] = int(str23(splitup[time_index + 3]).rstrip(",")) grid["domain"] = int(str23(splitup[time_index + 5]).rstrip(",")) elif chk23("DIMENSIONS") in splitup: grid["dimensions"] = np.array(str23(splitup[-3:]), dtype="int") elif chk23("ORIGIN") in splitup: grid["left_edge"] = np.array(str23(splitup[-3:]), dtype="float64") elif chk23("SPACING") in splitup: grid["dds"] = np.array(str23(splitup[-3:]), dtype="float64") elif chk23("CELL_DATA") in splitup or chk23("POINT_DATA") in splitup: grid["ncells"] = int(str23(splitup[-1])) elif chk23("SCALARS") in splitup: field = str23(splitup[1]) grid["read_field"] = field grid["read_type"] = "scalar" elif chk23("VECTORS") in splitup: field = str23(splitup[1]) grid["read_field"] = field grid["read_type"] = "vector" elif chk23("time") in splitup: time_index = splitup.index(chk23("time")) grid["time"] = float(str23(splitup[time_index + 1])) class AthenaHierarchy(GridIndex): grid = AthenaGrid _dataset_type = "athena" _data_file = None def __init__(self, ds, dataset_type="athena"): self.dataset = weakref.proxy(ds) self.directory = os.path.dirname(self.dataset.filename) self.dataset_type = dataset_type # for now, the index file is the dataset! 
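        # Athena writes one legacy-VTK file per process and per refinement
        # level; the file the user loaded doubles as the index, and its
        # id*/ and lev*/ siblings are discovered by globbing relative to it
        # in _parse_index below.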
self.index_filename = os.path.join(os.getcwd(), self.dataset.filename) self._fhandle = open(self.index_filename, "rb") GridIndex.__init__(self, ds, dataset_type) self._fhandle.close() def _detect_output_fields(self): field_map = {} f = open(self.index_filename, "rb") line = check_readline(f) chkwhile = chk23("") while line != chkwhile: splitup = line.strip().split() chkd = chk23("DIMENSIONS") chkc = chk23("CELL_DATA") chkp = chk23("POINT_DATA") if chkd in splitup: field = str23(splitup[-3:]) grid_dims = np.array(field, dtype="int64") line = check_readline(f) elif chkc in splitup or chkp in splitup: grid_ncells = int(str23(splitup[-1])) line = check_readline(f) if np.prod(grid_dims) != grid_ncells: grid_dims -= 1 grid_dims[grid_dims == 0] = 1 if np.prod(grid_dims) != grid_ncells: mylog.error( "product of dimensions %i not equal to number of cells %i", np.prod(grid_dims), grid_ncells, ) raise TypeError break else: line = check_readline(f) read_table = False read_table_offset = f.tell() while line != chkwhile: splitup = line.strip().split() chks = chk23("SCALARS") chkv = chk23("VECTORS") if chks in line and chks not in splitup: splitup = str23(line[line.find(chks) :].strip().split()) if chkv in line and chkv not in splitup: splitup = str23(line[line.find(chkv) :].strip().split()) if chks in splitup: field = ("athena", str23(splitup[1])) dtype = str23(splitup[-1]).lower() if not read_table: line = check_readline(f) # Read the lookup table line read_table = True field_map[field] = ("scalar", f.tell() - read_table_offset, dtype) read_table = False elif chkv in splitup: field = str23(splitup[1]) dtype = str23(splitup[-1]).lower() for ax in "xyz": field_map["athena", f"{field}_{ax}"] = ( "vector", f.tell() - read_table_offset, dtype, ) line = check_readline(f) f.close() self.field_list = list(field_map.keys()) self._field_map = field_map def _count_grids(self): self.num_grids = self.dataset.nvtk * self.dataset.nprocs def _parse_index(self): f = open(self.index_filename, "rb") grid = {} grid["read_field"] = None grid["read_type"] = None line = f.readline() while grid["read_field"] is None: parse_line(line, grid) if check_break(line): break line = f.readline() f.close() # It seems some datasets have a mismatch between ncells and # the actual grid dimensions. 
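        # This is the usual VTK point/cell ambiguity: a block with N cells
        # along an axis stores N + 1 point coordinates, so DIMENSIONS counted
        # in points must be reduced by one (and clamped to 1 on collapsed
        # axes) before its product matches the advertised cell count.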
if np.prod(grid["dimensions"]) != grid["ncells"]: grid["dimensions"] -= 1 grid["dimensions"][grid["dimensions"] == 0] = 1 if np.prod(grid["dimensions"]) != grid["ncells"]: mylog.error( "product of dimensions %i not equal to number of cells %i", np.prod(grid["dimensions"]), grid["ncells"], ) raise TypeError # Need to determine how many grids: self.num_grids dataset_dir = os.path.dirname(self.index_filename) dname = os.path.split(self.index_filename)[-1] if dataset_dir.endswith("id0"): dname = "id0/" + dname dataset_dir = dataset_dir[:-3] gridlistread = sglob( os.path.join(dataset_dir, f"id*/{dname[4:-9]}-id*{dname[-9:]}") ) gridlistread.insert(0, self.index_filename) if "id0" in dname: gridlistread += sglob( os.path.join(dataset_dir, f"id*/lev*/{dname[4:-9]}*-lev*{dname[-9:]}") ) else: gridlistread += sglob( os.path.join(dataset_dir, f"lev*/{dname[:-9]}*-lev*{dname[-9:]}") ) ndots = dname.count(".") gridlistread = [ fn for fn in gridlistread if os.path.basename(fn).count(".") == ndots ] self.num_grids = len(gridlistread) dxs = [] levels = np.zeros(self.num_grids, dtype="int32") glis = np.empty((self.num_grids, 3), dtype="float64") gdds = np.empty((self.num_grids, 3), dtype="float64") gdims = np.ones_like(glis) j = 0 self.grid_filenames = gridlistread while j < (self.num_grids): f = open(gridlistread[j], "rb") gridread = {} gridread["read_field"] = None gridread["read_type"] = None line = f.readline() while gridread["read_field"] is None: parse_line(line, gridread) splitup = line.strip().split() if chk23("X_COORDINATES") in splitup: gridread["left_edge"] = np.zeros(3) gridread["dds"] = np.zeros(3) v = np.fromfile(f, dtype=">f8", count=2) gridread["left_edge"][0] = v[0] - 0.5 * (v[1] - v[0]) gridread["dds"][0] = v[1] - v[0] if chk23("Y_COORDINATES") in splitup: v = np.fromfile(f, dtype=">f8", count=2) gridread["left_edge"][1] = v[0] - 0.5 * (v[1] - v[0]) gridread["dds"][1] = v[1] - v[0] if chk23("Z_COORDINATES") in splitup: v = np.fromfile(f, dtype=">f8", count=2) gridread["left_edge"][2] = v[0] - 0.5 * (v[1] - v[0]) gridread["dds"][2] = v[1] - v[0] if check_break(line): break line = f.readline() f.close() levels[j] = gridread.get("level", 0) glis[j, 0] = gridread["left_edge"][0] glis[j, 1] = gridread["left_edge"][1] glis[j, 2] = gridread["left_edge"][2] # It seems some datasets have a mismatch between ncells and # the actual grid dimensions. if np.prod(gridread["dimensions"]) != gridread["ncells"]: gridread["dimensions"] -= 1 gridread["dimensions"][gridread["dimensions"] == 0] = 1 if np.prod(gridread["dimensions"]) != gridread["ncells"]: mylog.error( "product of dimensions %i not equal to number of cells %i", np.prod(gridread["dimensions"]), gridread["ncells"], ) raise TypeError gdims[j, 0] = gridread["dimensions"][0] gdims[j, 1] = gridread["dimensions"][1] gdims[j, 2] = gridread["dimensions"][2] # Setting dds=1 for non-active dimensions in 1D/2D datasets gridread["dds"][gridread["dimensions"] == 1] = 1.0 gdds[j, :] = gridread["dds"] j = j + 1 gres = glis + gdims * gdds # Now we convert the glis, which were left edges (floats), to indices # from the domain left edge. Then we do a bunch of fixing now that we # know the extent of all the grids. 
glis = np.round( (glis - self.dataset.domain_left_edge.ndarray_view()) / gdds ).astype("int64") new_dre = np.max(gres, axis=0) dre_units = self.dataset.domain_right_edge.uq self.dataset.domain_right_edge = np.round(new_dre, decimals=12) * dre_units self.dataset.domain_width = ( self.dataset.domain_right_edge - self.dataset.domain_left_edge ) self.dataset.domain_center = 0.5 * ( self.dataset.domain_left_edge + self.dataset.domain_right_edge ) domain_dimensions = np.round(self.dataset.domain_width / gdds[0]).astype( "int64" ) if self.dataset.dimensionality <= 2: domain_dimensions[2] = 1 if self.dataset.dimensionality == 1: domain_dimensions[1] = 1 self.dataset.domain_dimensions = domain_dimensions dle = self.dataset.domain_left_edge dre = self.dataset.domain_right_edge dx_root = ( self.dataset.domain_right_edge - self.dataset.domain_left_edge ) / self.dataset.domain_dimensions if self.dataset.nprocs > 1: gle_all = [] gre_all = [] shapes_all = [] levels_all = [] new_gridfilenames = [] file_offsets = [] read_dims = [] for i in range(levels.shape[0]): dx = dx_root / self.dataset.refine_by ** (levels[i]) gle_orig = self.ds.arr( np.round(dle + dx * glis[i], decimals=12), "code_length" ) gre_orig = self.ds.arr( np.round(gle_orig + dx * gdims[i], decimals=12), "code_length" ) bbox = np.array( [[le, re] for le, re in zip(gle_orig, gre_orig, strict=True)] ) psize = get_psize(self.ds.domain_dimensions, self.ds.nprocs) gle, gre, shapes, slices, _ = decompose_array(gdims[i], psize, bbox) gle_all += gle gre_all += gre shapes_all += shapes levels_all += [levels[i]] * self.dataset.nprocs new_gridfilenames += [self.grid_filenames[i]] * self.dataset.nprocs file_offsets += [ [slc[0].start, slc[1].start, slc[2].start] for slc in slices ] read_dims += [ np.array([gdims[i][0], gdims[i][1], shape[2]], dtype="int64") for shape in shapes ] self.num_grids *= self.dataset.nprocs self.grids = np.empty(self.num_grids, dtype="object") self.grid_filenames = new_gridfilenames self.grid_left_edge = self.ds.arr(gle_all, "code_length") self.grid_right_edge = self.ds.arr(gre_all, "code_length") self.grid_dimensions = np.array(list(shapes_all), dtype="int32") gdds = (self.grid_right_edge - self.grid_left_edge) / self.grid_dimensions glis = np.round( (self.grid_left_edge - self.ds.domain_left_edge) / gdds ).astype("int64") for i in range(self.num_grids): self.grids[i] = self.grid( i, self, levels_all[i], glis[i], shapes_all[i], file_offsets[i], read_dims[i], ) else: self.grids = np.empty(self.num_grids, dtype="object") for i in range(levels.shape[0]): self.grids[i] = self.grid( i, self, levels[i], glis[i], gdims[i], [0] * 3, gdims[i] ) dx = dx_root / self.dataset.refine_by ** (levels[i]) dxs.append(dx) dx = self.ds.arr(dxs, "code_length") self.grid_left_edge = self.ds.arr( np.round(dle + dx * glis, decimals=12), "code_length" ) self.grid_dimensions = gdims.astype("int32") self.grid_right_edge = self.ds.arr( np.round(self.grid_left_edge + dx * self.grid_dimensions, decimals=12), "code_length", ) if self.dataset.dimensionality <= 2: self.grid_right_edge[:, 2] = dre[2] if self.dataset.dimensionality == 1: self.grid_right_edge[:, 1:] = dre[1:] self.grid_particle_count = np.zeros([self.num_grids, 1], dtype="int64") def _populate_grid_objects(self): for g in self.grids: g._prepare_grid() g._setup_dx() self._reconstruct_parent_child() self.max_level = self.grid_levels.max() def _reconstruct_parent_child(self): mask = np.empty(len(self.grids), dtype="int32") mylog.debug("First pass; identifying child grids") for i, grid in 
enumerate(self.grids): get_box_grids_level( self.grid_left_edge[i, :], self.grid_right_edge[i, :], self.grid_levels[i].item() + 1, self.grid_left_edge, self.grid_right_edge, self.grid_levels, mask, ) grid.Children = [ g for g in self.grids[mask.astype("bool")] if g.Level == grid.Level + 1 ] mylog.debug("Second pass; identifying parents") for grid in self.grids: # Second pass for child in grid.Children: child.Parent.append(grid) def _get_grid_children(self, grid): mask = np.zeros(self.num_grids, dtype="bool") grids, grid_ind = self.get_box_grids(grid.LeftEdge, grid.RightEdge) mask[grid_ind] = True return [g for g in self.grids[mask] if g.Level == grid.Level + 1] def _chunk_io(self, dobj, cache=True, local_only=False): gobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info) for subset in gobjs: yield YTDataChunk( dobj, "io", [subset], self._count_selection(dobj, [subset]), cache=cache ) class AthenaDataset(Dataset): _index_class = AthenaHierarchy _field_info_class = AthenaFieldInfo _dataset_type = "athena" def __init__( self, filename, dataset_type="athena", storage_filename=None, parameters=None, units_override=None, nprocs=1, unit_system="cgs", default_species_fields=None, magnetic_normalization="gaussian", ): self.fluid_types += ("athena",) self.nprocs = nprocs if parameters is None: parameters = {} self.specified_parameters = parameters.copy() if units_override is None: units_override = {} self._magnetic_factor = get_magnetic_normalization(magnetic_normalization) Dataset.__init__( self, filename, dataset_type, units_override=units_override, unit_system=unit_system, default_species_fields=default_species_fields, ) if storage_filename is None: storage_filename = self.basename + ".yt" self.storage_filename = storage_filename # Unfortunately we now have to mandate that the index gets # instantiated so that we can make sure we have the correct left # and right domain edges. self.index def _set_code_unit_attributes(self): """ Generates the conversion to various physical _units based on the parameter file """ if "length_unit" not in self.units_override: self.no_cgs_equiv_length = True for unit, cgs in [("length", "cm"), ("time", "s"), ("mass", "g")]: # We set these to cgs for now, but they may have been overridden if getattr(self, unit + "_unit", None) is not None: continue mylog.warning("Assuming 1.0 = 1.0 %s", cgs) setattr(self, f"{unit}_unit", self.quan(1.0, cgs)) self.magnetic_unit = np.sqrt( self._magnetic_factor * self.mass_unit / (self.time_unit**2 * self.length_unit) ) self.magnetic_unit.convert_to_units("gauss") self.velocity_unit = self.length_unit / self.time_unit def _parse_parameter_file(self): self._handle = open(self.parameter_filename, "rb") # Read the start of a grid to get simulation parameters. 
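        # Only the first block's header is parsed here; quantities a single
        # block cannot know (the true domain_right_edge, the total block
        # count) are deliberately left approximate and corrected once the
        # index walks every file.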
grid = {} grid["read_field"] = None line = self._handle.readline() while grid["read_field"] is None: parse_line(line, grid) splitup = line.strip().split() if chk23("X_COORDINATES") in splitup: grid["left_edge"] = np.zeros(3) grid["dds"] = np.zeros(3) v = np.fromfile(self._handle, dtype=">f8", count=2) grid["left_edge"][0] = v[0] - 0.5 * (v[1] - v[0]) grid["dds"][0] = v[1] - v[0] if chk23("Y_COORDINATES") in splitup: v = np.fromfile(self._handle, dtype=">f8", count=2) grid["left_edge"][1] = v[0] - 0.5 * (v[1] - v[0]) grid["dds"][1] = v[1] - v[0] if chk23("Z_COORDINATES") in splitup: v = np.fromfile(self._handle, dtype=">f8", count=2) grid["left_edge"][2] = v[0] - 0.5 * (v[1] - v[0]) grid["dds"][2] = v[1] - v[0] if check_break(line): break line = self._handle.readline() self.domain_left_edge = grid["left_edge"] mylog.info( "Temporarily setting domain_right_edge = -domain_left_edge. " "This will be corrected automatically if it is not the case." ) self.domain_right_edge = -self.domain_left_edge self.domain_width = self.domain_right_edge - self.domain_left_edge domain_dimensions = np.round(self.domain_width / grid["dds"]).astype("int32") refine_by = None if refine_by is None: refine_by = 2 self.refine_by = refine_by dimensionality = 3 if grid["dimensions"][2] == 1: dimensionality = 2 if grid["dimensions"][1] == 1: dimensionality = 1 if dimensionality <= 2: domain_dimensions[2] = np.int32(1) if dimensionality == 1: domain_dimensions[1] = np.int32(1) if dimensionality != 3 and self.nprocs > 1: raise RuntimeError("Virtual grids are only supported for 3D outputs!") self.domain_dimensions = domain_dimensions self.dimensionality = dimensionality self.current_time = grid["time"] self.cosmological_simulation = False self.num_ghost_zones = 0 self.field_ordering = "fortran" self.boundary_conditions = [1] * 6 self._periodicity = tuple( self.specified_parameters.get("periodicity", (True, True, True)) ) if "gamma" in self.specified_parameters: self.gamma = float(self.specified_parameters["gamma"]) else: self.gamma = 5.0 / 3.0 dataset_dir = os.path.dirname(self.parameter_filename) dname = os.path.split(self.parameter_filename)[-1] if dataset_dir.endswith("id0"): dname = "id0/" + dname dataset_dir = dataset_dir[:-3] gridlistread = sglob( os.path.join(dataset_dir, f"id*/{dname[4:-9]}-id*{dname[-9:]}") ) if "id0" in dname: gridlistread += sglob( os.path.join(dataset_dir, f"id*/lev*/{dname[4:-9]}*-lev*{dname[-9:]}") ) else: gridlistread += sglob( os.path.join(dataset_dir, f"lev*/{dname[:-9]}*-lev*{dname[-9:]}") ) ndots = dname.count(".") gridlistread = [ fn for fn in gridlistread if os.path.basename(fn).count(".") == ndots ] self.nvtk = len(gridlistread) + 1 self.current_redshift = 0.0 self.omega_lambda = 0.0 self.omega_matter = 0.0 self.hubble_constant = 0.0 self.cosmological_simulation = 0 # Hardcode time conversion for now. self.parameters["Time"] = self.current_time # Hardcode for now until field staggering is supported. 
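        # "HydroMethod" and "Gamma" below are legacy, Enzo-style parameter
        # names kept for scripts that inspect ds.parameters; the
        # authoritative value is self.gamma, which defaults to 5/3 unless
        # the caller passed parameters={"gamma": ...} at load time.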
self.parameters["HydroMethod"] = 0 if "gamma" in self.specified_parameters: self.parameters["Gamma"] = self.specified_parameters["gamma"] else: self.parameters["Gamma"] = 5.0 / 3.0 self.geometry = Geometry(self.specified_parameters.get("geometry", "cartesian")) self._handle.close() self.mu = self.specified_parameters.get( "mu", compute_mu(self.default_species_fields) ) @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if not filename.endswith(".vtk"): return False with open(filename, "rb") as fh: if not re.match(b"# vtk DataFile Version \\d\\.\\d\n", fh.readline(256)): return False if ( re.search( b"at time= .*, level= \\d, domain= \\d\n", fh.readline(256), ) is None ): # vtk Dumps headers start with either "CONSERVED vars" or "PRIMITIVE vars", # while vtk output headers start with "Really cool Athena data", but # we will consider this an implementation detail and not attempt to # match it exactly here. # See Athena's user guide for reference # https://princetonuniversity.github.io/Athena-Cversion/AthenaDocsUGbtk return False return True @property def _skip_cache(self): return True def __str__(self): return self.basename.rsplit(".", 1)[0] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/athena/definitions.py0000644000175100001770000000010614714401662020345 0ustar00runnerdocker""" Various definitions for various other modules and routines """ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/athena/fields.py0000644000175100001770000001356614714401662017316 0ustar00runnerdockerfrom yt._typing import KnownFieldsT from yt.fields.field_info_container import FieldInfoContainer from yt.utilities.physical_constants import kboltz, mh b_units = "code_magnetic" pres_units = "code_pressure" erg_units = "code_mass * (code_length/code_time)**2" rho_units = "code_mass / code_length**3" def velocity_field(comp): def _velocity(field, data): return data["athena", f"momentum_{comp}"] / data["athena", "density"] return _velocity class AthenaFieldInfo(FieldInfoContainer): known_other_fields: KnownFieldsT = ( ("density", ("code_mass/code_length**3", ["density"], None)), ("cell_centered_B_x", (b_units, [], None)), ("cell_centered_B_y", (b_units, [], None)), ("cell_centered_B_z", (b_units, [], None)), ("total_energy", ("code_pressure", ["total_energy_density"], None)), ( "gravitational_potential", ("code_velocity**2", ["gravitational_potential"], None), ), ) # In Athena, conservative or primitive variables may be written out. # By default, yt concerns itself with primitive variables. The following # field definitions allow for conversions to primitive variables in the # case that the file contains the conservative ones. 
def setup_fluid_fields(self): from yt.fields.magnetic_field import setup_magnetic_field_aliases unit_system = self.ds.unit_system # Add velocity fields for comp in "xyz": vel_field = ("athena", f"velocity_{comp}") mom_field = ("athena", f"momentum_{comp}") if vel_field in self.field_list: self.add_output_field( vel_field, sampling_type="cell", units="code_length/code_time" ) self.alias( ("gas", f"velocity_{comp}"), vel_field, units=unit_system["velocity"], ) elif mom_field in self.field_list: self.add_output_field( mom_field, sampling_type="cell", units="code_mass/code_time/code_length**2", ) self.alias( ("gas", f"momentum_density_{comp}"), mom_field, units=unit_system["density"] * unit_system["velocity"], ) self.add_field( ("gas", f"velocity_{comp}"), sampling_type="cell", function=velocity_field(comp), units=unit_system["velocity"], ) # Add pressure, energy, and temperature fields def eint_from_etot(data): eint = ( data["athena", "total_energy"] - data["gas", "kinetic_energy_density"] ) if ("athena", "cell_centered_B_x") in self.field_list: eint -= data["gas", "magnetic_energy_density"] return eint def etot_from_pres(data): etot = data["athena", "pressure"] / (data.ds.gamma - 1.0) etot += data["gas", "kinetic_energy_density"] if ("athena", "cell_centered_B_x") in self.field_list: etot += data["gas", "magnetic_energy_density"] return etot if ("athena", "pressure") in self.field_list: self.add_output_field( ("athena", "pressure"), sampling_type="cell", units=pres_units ) self.alias( ("gas", "pressure"), ("athena", "pressure"), units=unit_system["pressure"], ) def _specific_thermal_energy(field, data): return ( data["athena", "pressure"] / (data.ds.gamma - 1.0) / data["athena", "density"] ) self.add_field( ("gas", "specific_thermal_energy"), sampling_type="cell", function=_specific_thermal_energy, units=unit_system["specific_energy"], ) def _specific_total_energy(field, data): return etot_from_pres(data) / data["athena", "density"] self.add_field( ("gas", "specific_total_energy"), sampling_type="cell", function=_specific_total_energy, units=unit_system["specific_energy"], ) elif ("athena", "total_energy") in self.field_list: self.add_output_field( ("athena", "total_energy"), sampling_type="cell", units=pres_units ) def _specific_thermal_energy(field, data): return eint_from_etot(data) / data["athena", "density"] self.add_field( ("gas", "specific_thermal_energy"), sampling_type="cell", function=_specific_thermal_energy, units=unit_system["specific_energy"], ) def _specific_total_energy(field, data): return data["athena", "total_energy"] / data["athena", "density"] self.add_field( ("gas", "specific_total_energy"), sampling_type="cell", function=_specific_total_energy, units=unit_system["specific_energy"], ) # Add temperature field def _temperature(field, data): return ( data.ds.mu * data["gas", "pressure"] / data["gas", "density"] * mh / kboltz ) self.add_field( ("gas", "temperature"), sampling_type="cell", function=_temperature, units=unit_system["temperature"], ) setup_magnetic_field_aliases( self, "athena", [f"cell_centered_B_{ax}" for ax in "xyz"] ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/athena/io.py0000644000175100001770000001056314714401662016451 0ustar00runnerdockerimport numpy as np from yt.funcs import mylog from yt.utilities.io_handler import BaseIOHandler from .data_structures import chk23 float_size = {"float": np.dtype(">f4").itemsize, "double": np.dtype(">f8").itemsize} axis_list = ["_x", "_y", "_z"] 
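# A self-contained sketch of the read pattern IOHandlerAthena uses below:
# legacy-VTK binary payloads are big-endian, so values are read with explicit
# >f4 / >f8 dtypes and reshaped in Fortran order to match Athena's on-disk
# layout. (The buffer here is synthetic; a real read first seeks past the
# header and lookup table via get_read_table_offset.)
_nx, _ny, _nz = 4, 3, 2
_buf = np.arange(_nx * _ny * _nz, dtype=">f4").tobytes()  # stand-in for file bytes
_v = np.frombuffer(_buf, dtype=">f4").reshape((_nx, _ny, _nz), order="F")
_v = _v.astype("float64")  # native-endian copy, as _read_chunk_data returns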
class IOHandlerAthena(BaseIOHandler): _dataset_type = "athena" _offset_string = "data:offsets=0" _data_string = "data:datatype=0" _read_table_offset = None def _read_field_names(self, grid): pass def _read_chunk_data(self, chunk, fields): data = {} if len(chunk.objs) == 0: return data for grid in chunk.objs: if grid.filename is None: continue f = open(grid.filename, "rb") data[grid.id] = {} grid_dims = grid.ActiveDimensions read_dims = grid.read_dims.astype("int64") grid_ncells = np.prod(read_dims) grid0_ncells = np.prod(grid.index.grids[0].read_dims) read_table_offset = get_read_table_offset(f) for field in fields: ftype, offsetr, dtype = grid.index._field_map[field] if grid_ncells != grid0_ncells: offset = offsetr + ( (grid_ncells - grid0_ncells) * (offsetr // grid0_ncells) ) if grid_ncells == grid0_ncells: offset = offsetr offset = int(offset) # Casting to be certain. file_offset = ( grid.file_offset[2] * read_dims[0] * read_dims[1] * float_size[dtype] ) xread = slice(grid.file_offset[0], grid.file_offset[0] + grid_dims[0]) yread = slice(grid.file_offset[1], grid.file_offset[1] + grid_dims[1]) f.seek(read_table_offset + offset + file_offset) if dtype == "float": dt = ">f4" elif dtype == "double": dt = ">f8" if ftype == "scalar": f.seek(read_table_offset + offset + file_offset) v = np.fromfile(f, dtype=dt, count=grid_ncells).reshape( read_dims, order="F" ) if ftype == "vector": vec_offset = axis_list.index(field[-1][-2:]) f.seek(read_table_offset + offset + 3 * file_offset) v = np.fromfile(f, dtype=dt, count=3 * grid_ncells) v = v[vec_offset::3].reshape(read_dims, order="F") if grid.ds.field_ordering == 1: data[grid.id][field] = v[xread, yread, :].T.astype("float64") else: data[grid.id][field] = v[xread, yread, :].astype("float64") f.close() return data def _read_data_slice(self, grid, field, axis, coord): sl = [slice(None), slice(None), slice(None)] sl[axis] = slice(coord, coord + 1) if grid.ds.field_ordering == 1: sl.reverse() return self._read_data_set(grid, field)[tuple(sl)] def _read_fluid_selection(self, chunks, selector, fields, size): chunks = list(chunks) if any((ftype != "athena" for ftype, fname in fields)): raise NotImplementedError rv = {} for field in fields: rv[field] = np.empty(size, dtype="float64") ng = sum(len(c.objs) for c in chunks) mylog.debug( "Reading %s cells of %s fields in %s grids", size, [f2 for f1, f2 in fields], ng, ) ind = 0 for chunk in chunks: data = self._read_chunk_data(chunk, fields) for g in chunk.objs: for field in fields: ftype, fname = field ds = data[g.id].pop(field) nd = g.select(selector, ds, rv[field], ind) # caches ind += nd data.pop(g.id) return rv def get_read_table_offset(f): line = f.readline() while True: splitup = line.strip().split() chkc = chk23("CELL_DATA") chkp = chk23("POINT_DATA") if chkc in splitup or chkp in splitup: f.readline() read_table_offset = f.tell() break line = f.readline() return read_table_offset ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/athena/misc.py0000644000175100001770000000000014714401662016756 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.303152 yt-4.4.0/yt/frontends/athena/tests/0000755000175100001770000000000014714401715016624 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/athena/tests/__init__.py0000644000175100001770000000000014714401662020724 
0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/athena/tests/test_outputs.py0000644000175100001770000001165414714401662021770 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_equal from yt.frontends.athena.api import AthenaDataset from yt.loaders import load from yt.testing import ( assert_allclose_units, disable_dataset_cache, requires_file, ) from yt.utilities.answer_testing.framework import ( data_dir_load, requires_ds, small_patch_amr, ) _fields_cloud = ( ("athena", "scalar[0]"), ("gas", "density"), ("gas", "total_energy_density"), ) cloud = "ShockCloud/id0/Cloud.0050.vtk" @requires_ds(cloud) def test_cloud(): ds = data_dir_load(cloud) assert_equal(str(ds), "Cloud.0050") for test in small_patch_amr(ds, _fields_cloud): test_cloud.__name__ = test.description yield test _fields_blast = ( ("gas", "temperature"), ("gas", "density"), ("gas", "velocity_magnitude"), ) blast = "MHDBlast/id0/Blast.0100.vtk" @requires_ds(blast) def test_blast(): ds = data_dir_load(blast) assert_equal(str(ds), "Blast.0100") for test in small_patch_amr(ds, _fields_blast): test_blast.__name__ = test.description yield test uo_blast = { "length_unit": (1.0, "pc"), "mass_unit": (2.38858753789e-24, "g/cm**3*pc**3"), "time_unit": (1.0, "s*pc/km"), } @requires_file(blast) def test_blast_override(): # verify that overriding units causes derived unit values to be updated. # see issue #1259 ds = load(blast, units_override=uo_blast) assert_equal(float(ds.magnetic_unit.in_units("gauss")), 5.47867467969813e-07) uo_stripping = { "time_unit": 3.086e14, "length_unit": 8.0236e22, "mass_unit": 9.999e-30 * 8.0236e22**3, } _fields_stripping = ( ("gas", "temperature"), ("gas", "density"), ("athena", "specific_scalar[0]"), ) stripping = "RamPressureStripping/id0/rps.0062.vtk" @requires_ds(stripping, big_data=True) def test_stripping(): ds = data_dir_load(stripping, kwargs={"units_override": uo_stripping}) assert_equal(str(ds), "rps.0062") for test in small_patch_amr(ds, _fields_stripping): test_stripping.__name__ = test.description yield test sloshing = "MHDSloshing/virgo_low_res.0054.vtk" uo_sloshing = { "length_unit": (1.0, "Mpc"), "time_unit": (1.0, "Myr"), "mass_unit": (1.0e14, "Msun"), } @requires_file(sloshing) @disable_dataset_cache def test_nprocs(): ds1 = load(sloshing, units_override=uo_sloshing) sp1 = ds1.sphere("c", (100.0, "kpc")) prj1 = ds1.proj(("gas", "density"), 0) ds2 = load(sloshing, units_override=uo_sloshing, nprocs=8) sp2 = ds2.sphere("c", (100.0, "kpc")) prj2 = ds1.proj(("gas", "density"), 0) assert_equal( sp1.quantities.extrema(("gas", "pressure")), sp2.quantities.extrema(("gas", "pressure")), ) assert_allclose_units( sp1.quantities.total_quantity(("gas", "pressure")), sp2.quantities.total_quantity(("gas", "pressure")), ) for ax in "xyz": assert_equal( sp1.quantities.extrema(("gas", f"velocity_{ax}")), sp2.quantities.extrema(("gas", f"velocity_{ax}")), ) assert_allclose_units( sp1.quantities.bulk_velocity(), sp2.quantities.bulk_velocity() ) assert_equal(prj1["gas", "density"], prj2["gas", "density"]) @requires_file(cloud) def test_AthenaDataset(): assert isinstance(data_dir_load(cloud), AthenaDataset) @requires_file(sloshing) @disable_dataset_cache def test_mag_factor(): ds1 = load(sloshing, units_override=uo_sloshing, magnetic_normalization="gaussian") assert ds1.magnetic_unit == np.sqrt( 4.0 * np.pi * ds1.mass_unit / (ds1.time_unit**2 * ds1.length_unit) ) sp1 = ds1.sphere("c", (100.0, "kpc")) pB1a = ( 
sp1["athena", "cell_centered_B_x"] ** 2 + sp1["athena", "cell_centered_B_y"] ** 2 + sp1["athena", "cell_centered_B_z"] ** 2 ) / (8.0 * np.pi) pB1b = ( sp1["gas", "magnetic_field_x"] ** 2 + sp1["gas", "magnetic_field_y"] ** 2 + sp1["gas", "magnetic_field_z"] ** 2 ) / (8.0 * np.pi) pB1a.convert_to_units("dyn/cm**2") pB1b.convert_to_units("dyn/cm**2") assert_allclose_units(pB1a, pB1b) assert_allclose_units(pB1a, sp1["magnetic_pressure"]) ds2 = load( sloshing, units_override=uo_sloshing, magnetic_normalization="lorentz_heaviside" ) assert ds2.magnetic_unit == np.sqrt( ds2.mass_unit / (ds2.time_unit**2 * ds2.length_unit) ) sp2 = ds2.sphere("c", (100.0, "kpc")) pB2a = ( sp2["athena", "cell_centered_B_x"] ** 2 + sp2["athena", "cell_centered_B_y"] ** 2 + sp2["athena", "cell_centered_B_z"] ** 2 ) / 2.0 pB2b = ( sp2["gas", "magnetic_field_x"] ** 2 + sp2["gas", "magnetic_field_y"] ** 2 + sp2["gas", "magnetic_field_z"] ** 2 ) / 2.0 pB2a.convert_to_units("dyn/cm**2") pB2b.convert_to_units("dyn/cm**2") assert_allclose_units(pB2a, pB2b) assert_allclose_units(pB2a, sp2["magnetic_pressure"]) assert_allclose_units(pB1a, pB2a) ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.303152 yt-4.4.0/yt/frontends/athena_pp/0000755000175100001770000000000014714401715016161 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/athena_pp/__init__.py0000644000175100001770000000000014714401662020261 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/athena_pp/api.py0000644000175100001770000000025214714401662017304 0ustar00runnerdockerfrom . import tests from .data_structures import AthenaPPDataset, AthenaPPGrid, AthenaPPHierarchy from .fields import AthenaPPFieldInfo from .io import IOHandlerAthenaPP ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/athena_pp/data_structures.py0000644000175100001770000002347714714401662021765 0ustar00runnerdockerimport os import weakref import numpy as np from yt.data_objects.index_subobjects.grid_patch import AMRGridPatch from yt.data_objects.index_subobjects.stretched_grid import StretchedGrid from yt.data_objects.static_output import Dataset from yt.fields.magnetic_field import get_magnetic_normalization from yt.funcs import mylog from yt.geometry.api import Geometry from yt.geometry.grid_geometry_handler import GridIndex from yt.utilities.chemical_formulas import compute_mu from yt.utilities.file_handler import HDF5FileHandler from .fields import AthenaPPFieldInfo geom_map = { "cartesian": "cartesian", "cylindrical": "cylindrical", "spherical_polar": "spherical", "minkowski": "cartesian", "tilted": "cartesian", "sinusoidal": "cartesian", "schwarzschild": "spherical", "kerr-schild": "spherical", } class AthenaPPGrid(AMRGridPatch): _id_offset = 0 def __init__(self, id, index, level): AMRGridPatch.__init__(self, id, filename=index.index_filename, index=index) self.Parent = None self.Children = [] self.Level = level def _setup_dx(self): # So first we figure out what the index is. We don't assume # that dx=dy=dz , at least here. We probably do elsewhere. 
id = self.id - self._id_offset LE, RE = self.index.grid_left_edge[id, :], self.index.grid_right_edge[id, :] self.dds = self.ds.arr((RE - LE) / self.ActiveDimensions, "code_length") if self.ds.dimensionality < 2: self.dds[1] = 1.0 if self.ds.dimensionality < 3: self.dds[2] = 1.0 self.field_data["dx"], self.field_data["dy"], self.field_data["dz"] = self.dds class AthenaPPStretchedGrid(StretchedGrid): _id_offset = 0 def __init__(self, id, cell_widths, index, level): super().__init__(id, cell_widths, filename=index.index_filename, index=index) self.Parent = None self.Children = [] self.Level = level class AthenaPPHierarchy(GridIndex): _dataset_type = "athena_pp" _data_file = None def __init__(self, ds, dataset_type="athena_pp"): self.dataset = weakref.proxy(ds) self.grid = AthenaPPStretchedGrid if self.dataset._nonuniform else AthenaPPGrid self.directory = os.path.dirname(self.dataset.filename) self.dataset_type = dataset_type # for now, the index file is the dataset! self.index_filename = self.dataset.filename self._handle = ds._handle GridIndex.__init__(self, ds, dataset_type) def _detect_output_fields(self): self.field_list = [("athena_pp", k) for k in self.dataset._field_map] def _count_grids(self): self.num_grids = self._handle.attrs["NumMeshBlocks"] def _parse_index(self): num_grids = self._handle.attrs["NumMeshBlocks"] self.grid_left_edge = np.zeros((num_grids, 3), dtype="float64") self.grid_right_edge = np.zeros((num_grids, 3), dtype="float64") self.grid_dimensions = np.zeros((num_grids, 3), dtype="int32") # TODO: In an unlikely case this would use too much memory, implement # chunked read along 1 dim x = self._handle["x1f"][:, :].astype("float64") y = self._handle["x2f"][:, :].astype("float64") z = self._handle["x3f"][:, :].astype("float64") dx = np.diff(x, axis=1) dy = np.diff(y, axis=1) dz = np.diff(z, axis=1) mesh_block_size = self._handle.attrs["MeshBlockSize"] for i in range(num_grids): self.grid_left_edge[i] = np.array([x[i, 0], y[i, 0], z[i, 0]]) self.grid_right_edge[i] = np.array([x[i, -1], y[i, -1], z[i, -1]]) self.grid_dimensions[i] = mesh_block_size levels = self._handle["Levels"][:] self.grid_left_edge = self.ds.arr(self.grid_left_edge, "code_length") self.grid_right_edge = self.ds.arr(self.grid_right_edge, "code_length") self.grids = np.empty(self.num_grids, dtype="object") for i in range(num_grids): if self.dataset._nonuniform: self.grids[i] = self.grid(i, [dx[i], dy[i], dz[i]], self, levels[i]) else: self.grids[i] = self.grid(i, self, levels[i]) if self.dataset.dimensionality <= 2: self.grid_right_edge[:, 2] = self.dataset.domain_right_edge[2] if self.dataset.dimensionality == 1: self.grid_right_edge[:, 1:] = self.dataset.domain_right_edge[1:] self.grid_particle_count = np.zeros([self.num_grids, 1], dtype="int64") def _populate_grid_objects(self): for g in self.grids: g._prepare_grid() g._setup_dx() self.max_level = self._handle.attrs["MaxLevel"] class AthenaPPDataset(Dataset): _field_info_class = AthenaPPFieldInfo _dataset_type = "athena_pp" _index_class = AthenaPPHierarchy def __init__( self, filename, dataset_type="athena_pp", storage_filename=None, parameters=None, units_override=None, unit_system="code", default_species_fields=None, magnetic_normalization="gaussian", ): self.fluid_types += ("athena_pp",) if parameters is None: parameters = {} self.specified_parameters = parameters if units_override is None: units_override = {} self._handle = HDF5FileHandler(filename) xrat = self._handle.attrs["RootGridX1"][2] yrat = self._handle.attrs["RootGridX2"][2] zrat = 
self._handle.attrs["RootGridX3"][2] self._nonuniform = xrat != 1.0 or yrat != 1.0 or zrat != 1.0 self._magnetic_factor = get_magnetic_normalization(magnetic_normalization) geom = self._handle.attrs["Coordinates"].decode("utf-8") self.geometry = Geometry(geom_map[geom]) if self.geometry == "cylindrical": axis_order = ("r", "theta", "z") else: axis_order = None Dataset.__init__( self, filename, dataset_type, units_override=units_override, unit_system=unit_system, default_species_fields=default_species_fields, axis_order=axis_order, ) if storage_filename is None: storage_filename = self.basename + ".yt" self.storage_filename = storage_filename def _set_code_unit_attributes(self): """ Generates the conversion to various physical _units based on the parameter file """ if "length_unit" not in self.units_override: self.no_cgs_equiv_length = True for unit, cgs in [ ("length", "cm"), ("time", "s"), ("mass", "g"), ("temperature", "K"), ]: # We set these to cgs for now, but they may have been overridden if getattr(self, unit + "_unit", None) is not None: continue mylog.warning("Assuming 1.0 = 1.0 %s", cgs) setattr(self, f"{unit}_unit", self.quan(1.0, cgs)) self.magnetic_unit = np.sqrt( self._magnetic_factor * self.mass_unit / (self.time_unit**2 * self.length_unit) ) self.magnetic_unit.convert_to_units("gauss") self.velocity_unit = self.length_unit / self.time_unit def _parse_parameter_file(self): xmin, xmax = self._handle.attrs["RootGridX1"][:2] ymin, ymax = self._handle.attrs["RootGridX2"][:2] zmin, zmax = self._handle.attrs["RootGridX3"][:2] self.domain_left_edge = np.array([xmin, ymin, zmin], dtype="float64") self.domain_right_edge = np.array([xmax, ymax, zmax], dtype="float64") self.domain_width = self.domain_right_edge - self.domain_left_edge self.domain_dimensions = self._handle.attrs["RootGridSize"] self._field_map = {} k = 0 for dname, num_var in zip( self._handle.attrs["DatasetNames"], self._handle.attrs["NumVariables"], strict=True, ): for j in range(num_var): fname = self._handle.attrs["VariableNames"][k].decode("ascii", "ignore") self._field_map[fname] = (dname.decode("ascii", "ignore"), j) k += 1 self.refine_by = 2 dimensionality = 3 if self.domain_dimensions[2] == 1: dimensionality = 2 if self.domain_dimensions[1] == 1: dimensionality = 1 self.dimensionality = dimensionality self.current_time = self._handle.attrs["Time"] self.cosmological_simulation = False self.num_ghost_zones = 0 self.field_ordering = "fortran" self.boundary_conditions = [1] * 6 self._periodicity = tuple( self.specified_parameters.get("periodicity", (True, True, True)) ) if "gamma" in self.specified_parameters: self.gamma = float(self.specified_parameters["gamma"]) else: self.gamma = 5.0 / 3.0 self.current_redshift = 0.0 self.omega_lambda = 0.0 self.omega_matter = 0.0 self.hubble_constant = 0.0 self.cosmological_simulation = 0 # Hardcode time conversion for now. self.parameters["Time"] = self.current_time # Hardcode for now until field staggering is supported. 
self.parameters["HydroMethod"] = 0 if "gamma" in self.specified_parameters: self.parameters["Gamma"] = self.specified_parameters["gamma"] else: self.parameters["Gamma"] = 5.0 / 3.0 self.mu = self.specified_parameters.get( "mu", compute_mu(self.default_species_fields) ) @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: return filename.endswith(".athdf") @property def _skip_cache(self): return True def __str__(self): return self.basename.rsplit(".", 1)[0] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/athena_pp/definitions.py0000644000175100001770000000010614714401662021044 0ustar00runnerdocker""" Various definitions for various other modules and routines """ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/athena_pp/fields.py0000644000175100001770000001012314714401662017777 0ustar00runnerdockerfrom yt._typing import KnownFieldsT from yt.fields.field_info_container import FieldInfoContainer from yt.utilities.physical_constants import kboltz, mh b_units = "code_magnetic" pres_units = "code_mass/(code_length*code_time**2)" rho_units = "code_mass / code_length**3" vel_units = "code_length / code_time" def velocity_field(j): def _velocity(field, data): return data["athena_pp", f"mom{j}"] / data["athena_pp", "dens"] return _velocity class AthenaPPFieldInfo(FieldInfoContainer): known_other_fields: KnownFieldsT = ( ("rho", (rho_units, ["density"], None)), ("dens", (rho_units, ["density"], None)), ("Bcc1", (b_units, [], None)), ("Bcc2", (b_units, [], None)), ("Bcc3", (b_units, [], None)), ) def setup_fluid_fields(self): from yt.fields.magnetic_field import setup_magnetic_field_aliases unit_system = self.ds.unit_system # Add velocity fields vel_prefix = "velocity" for i, comp in enumerate(self.ds.coordinates.axis_order): vel_field = ("athena_pp", "vel%d" % (i + 1)) mom_field = ("athena_pp", "mom%d" % (i + 1)) if vel_field in self.field_list: self.add_output_field( vel_field, sampling_type="cell", units="code_length/code_time" ) self.alias( ("gas", f"{vel_prefix}_{comp}"), vel_field, units=unit_system["velocity"], ) elif mom_field in self.field_list: self.add_output_field( mom_field, sampling_type="cell", units="code_mass/code_time/code_length**2", ) self.add_field( ("gas", f"{vel_prefix}_{comp}"), sampling_type="cell", function=velocity_field(i + 1), units=unit_system["velocity"], ) # Figure out thermal energy field if ("athena_pp", "press") in self.field_list: self.add_output_field( ("athena_pp", "press"), sampling_type="cell", units=pres_units ) self.alias( ("gas", "pressure"), ("athena_pp", "press"), units=unit_system["pressure"], ) def _specific_thermal_energy(field, data): return ( data["athena_pp", "press"] / (data.ds.gamma - 1.0) / data["athena_pp", "rho"] ) self.add_field( ("gas", "specific_thermal_energy"), sampling_type="cell", function=_specific_thermal_energy, units=unit_system["specific_energy"], ) elif ("athena_pp", "Etot") in self.field_list: self.add_output_field( ("athena_pp", "Etot"), sampling_type="cell", units=pres_units ) def _specific_thermal_energy(field, data): eint = data["athena_pp", "Etot"] - data["gas", "kinetic_energy_density"] if ("athena_pp", "B1") in self.field_list: eint -= data["gas", "magnetic_energy_density"] return eint / data["athena_pp", "dens"] self.add_field( ("gas", "specific_thermal_energy"), sampling_type="cell", function=_specific_thermal_energy, units=unit_system["specific_energy"], ) # Add 
temperature field def _temperature(field, data): return ( (data["gas", "pressure"] / data["gas", "density"]) * data.ds.mu * mh / kboltz ) self.add_field( ("gas", "temperature"), sampling_type="cell", function=_temperature, units=unit_system["temperature"], ) setup_magnetic_field_aliases( self, "athena_pp", ["Bcc%d" % ax for ax in (1, 2, 3)] ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/athena_pp/io.py0000644000175100001770000000523114714401662017144 0ustar00runnerdockerfrom itertools import groupby import numpy as np from yt.utilities.io_handler import BaseIOHandler from yt.utilities.logger import ytLogger as mylog # http://stackoverflow.com/questions/2361945/detecting-consecutive-integers-in-a-list def grid_sequences(grids): g_iter = sorted(grids, key=lambda g: g.id) for _, g in groupby(enumerate(g_iter), lambda i_x1: i_x1[0] - i_x1[1].id): seq = [v[1] for v in g] yield seq class IOHandlerAthenaPP(BaseIOHandler): _particle_reader = False _dataset_type = "athena_pp" def __init__(self, ds): super().__init__(ds) self._handle = ds._handle def _read_particles( self, fields_to_read, type, args, grid_list, count_list, conv_factors ): pass def _read_fluid_selection(self, chunks, selector, fields, size): chunks = list(chunks) if any((ftype != "athena_pp" for ftype, fname in fields)): raise NotImplementedError f = self._handle rv = {} for field in fields: # Always use *native* 64-bit float. rv[field] = np.empty(size, dtype="=f8") ng = sum(len(c.objs) for c in chunks) mylog.debug( "Reading %s cells of %s fields in %s blocks", size, [f2 for f1, f2 in fields], ng, ) last_dname = None for field in fields: ftype, fname = field dname, fdi = self.ds._field_map[fname] if dname != last_dname: ds = f[f"/{dname}"] ind = 0 for chunk in chunks: for gs in grid_sequences(chunk.objs): start = gs[0].id - gs[0]._id_offset end = gs[-1].id - gs[-1]._id_offset + 1 data = ds[fdi, start:end, :, :, :].transpose() for i, g in enumerate(gs): ind += g.select(selector, data[..., i], rv[field], ind) last_dname = dname return rv def _read_chunk_data(self, chunk, fields): f = self._handle rv = {} for g in chunk.objs: rv[g.id] = {} if len(fields) == 0: return rv for field in fields: ftype, fname = field dname, fdi = self.ds._field_map[fname] ds = f[f"/{dname}"] for gs in grid_sequences(chunk.objs): start = gs[0].id - gs[0]._id_offset end = gs[-1].id - gs[-1]._id_offset + 1 data = ds[fdi, start:end, :, :, :].transpose() for i, g in enumerate(gs): rv[g.id][field] = np.asarray(data[..., i], "=f8") return rv ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/athena_pp/misc.py0000644000175100001770000000000014714401662017455 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.303152 yt-4.4.0/yt/frontends/athena_pp/tests/0000755000175100001770000000000014714401715017323 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/athena_pp/tests/__init__.py0000644000175100001770000000000014714401662021423 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/athena_pp/tests/test_outputs.py0000644000175100001770000000741314714401662022465 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_allclose, assert_equal from yt.frontends.athena_pp.api import AthenaPPDataset 
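# The contiguous-ID batching that grid_sequences performs in athena_pp/io.py
# above is worth seeing in isolation: grouping on (enumerate index - id)
# clusters runs of consecutive block ids, so each run becomes one HDF5
# hyperslab read instead of one read per block. Self-contained sketch with
# plain integers standing in for grid objects:
from itertools import groupby

ids = [0, 1, 2, 5, 6, 9]
runs = [
    [v for _, v in grp]
    for _, grp in groupby(enumerate(ids), lambda pair: pair[0] - pair[1])
]
assert runs == [[0, 1, 2], [5, 6], [9]]  # three reads instead of six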
from yt.loaders import load from yt.testing import ( assert_allclose_units, disable_dataset_cache, requires_file, units_override_check, ) from yt.units import dimensions from yt.utilities.answer_testing.framework import ( GenericArrayTest, data_dir_load, requires_ds, small_patch_amr, ) _fields_disk = ("density", "velocity_r") disk = "KeplerianDisk/disk.out1.00000.athdf" @requires_ds(disk) def test_disk(): ds = data_dir_load(disk) assert_equal(str(ds), "disk.out1.00000") dd = ds.all_data() vol = (ds.domain_right_edge[0] ** 3 - ds.domain_left_edge[0] ** 3) / 3.0 vol *= np.cos(ds.domain_left_edge[1]) - np.cos(ds.domain_right_edge[1]) vol *= ds.domain_right_edge[2].v - ds.domain_left_edge[2].v assert_allclose(dd.quantities.total_quantity(("gas", "cell_volume")), vol) def field_func(field): return dd[field] for field in _fields_disk: yield GenericArrayTest(ds, field_func, args=[field]) _fields_AM06 = ("temperature", "density", "velocity_magnitude", "magnetic_field_x") AM06 = "AM06/AM06.out1.00400.athdf" @requires_ds(AM06) def test_AM06(): ds = data_dir_load(AM06) assert_equal(str(ds), "AM06.out1.00400") for test in small_patch_amr(ds, _fields_AM06): test_AM06.__name__ = test.description yield test uo_AM06 = { "length_unit": (1.0, "kpc"), "mass_unit": (1.0, "Msun"), "time_unit": (1.0, "Myr"), } @requires_file(AM06) def test_AM06_override(): # verify that overriding units causes derived unit values to be updated. # see issue #1259 ds = load(AM06, units_override=uo_AM06) assert_equal(float(ds.magnetic_unit.in_units("gauss")), 9.01735778342523e-08) @requires_file(AM06) def test_units_override(): units_override_check(AM06) @requires_file(AM06) def test_AthenaPPDataset(): assert isinstance(data_dir_load(AM06), AthenaPPDataset) @requires_file(AM06) def test_magnetic_units(): ds = load(AM06, unit_system="code") assert ds.magnetic_unit.units.dimensions == dimensions.magnetic_field_cgs assert (ds.magnetic_unit**2).units.dimensions == dimensions.pressure @requires_file(AM06) @disable_dataset_cache def test_mag_factor(): ds1 = load(AM06, units_override=uo_AM06, magnetic_normalization="gaussian") assert ds1.magnetic_unit == np.sqrt( 4.0 * np.pi * ds1.mass_unit / (ds1.time_unit**2 * ds1.length_unit) ) sp1 = ds1.sphere("c", (100.0, "kpc")) pB1a = ( sp1["athena_pp", "Bcc1"] ** 2 + sp1["athena_pp", "Bcc2"] ** 2 + sp1["athena_pp", "Bcc3"] ** 2 ) / (8.0 * np.pi) pB1b = ( sp1["gas", "magnetic_field_x"] ** 2 + sp1["gas", "magnetic_field_y"] ** 2 + sp1["gas", "magnetic_field_z"] ** 2 ) / (8.0 * np.pi) pB1a.convert_to_units("dyn/cm**2") pB1b.convert_to_units("dyn/cm**2") assert_allclose_units(pB1a, pB1b) assert_allclose_units(pB1a, sp1["magnetic_pressure"]) ds2 = load(AM06, units_override=uo_AM06, magnetic_normalization="lorentz_heaviside") assert ds2.magnetic_unit == np.sqrt( ds2.mass_unit / (ds2.time_unit**2 * ds2.length_unit) ) sp2 = ds2.sphere("c", (100.0, "kpc")) pB2a = ( sp2["athena_pp", "Bcc1"] ** 2 + sp2["athena_pp", "Bcc2"] ** 2 + sp2["athena_pp", "Bcc3"] ** 2 ) / 2.0 pB2b = ( sp2["gas", "magnetic_field_x"] ** 2 + sp2["gas", "magnetic_field_y"] ** 2 + sp2["gas", "magnetic_field_z"] ** 2 ) / 2.0 pB2a.convert_to_units("dyn/cm**2") pB2b.convert_to_units("dyn/cm**2") assert_allclose_units(pB2a, pB2b) assert_allclose_units(pB2a, sp2["magnetic_pressure"]) assert_allclose_units(pB1a, pB2a) ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.303152 yt-4.4.0/yt/frontends/boxlib/0000755000175100001770000000000014714401715015501 
5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/boxlib/__init__.py0000644000175100001770000000000014714401662017601 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/boxlib/_deprecation.py0000644000175100001770000000073214714401662020512 0ustar00runnerdockerfrom yt._maintenance.deprecation import issue_deprecation_warning, warnings def boxlib_deprecation(): with warnings.catch_warnings(): warnings.simplefilter("always") issue_deprecation_warning( "The historic 'boxlib' frontend is \n" "deprecated as it has been renamed 'amrex'. " "Existing and future work should instead reference the 'amrex' frontend.", stacklevel=4, since="4.4.0", ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/boxlib/api.py0000644000175100001770000000100514714401662016621 0ustar00runnerdockerfrom ..amrex import tests from ..amrex import data_structures from ..amrex.data_structures import ( AMReXDataset, AMReXHierarchy, BoxlibDataset, BoxlibGrid, BoxlibHierarchy, CastroDataset, MaestroDataset, NyxDataset, NyxHierarchy, OrionDataset, OrionHierarchy, WarpXDataset, WarpXHierarchy, ) from ..amrex.fields import ( BoxlibFieldInfo, CastroFieldInfo, MaestroFieldInfo, NyxFieldInfo, WarpXFieldInfo, ) from ..amrex.io import IOHandlerBoxlib ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.303152 yt-4.4.0/yt/frontends/boxlib/data_structures/0000755000175100001770000000000014714401715020715 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/boxlib/data_structures/__init__.py0000644000175100001770000000054014714401662023026 0ustar00runnerdockerfrom ...amrex.data_structures import ( AMReXDataset, AMReXHierarchy, BoxlibDataset, BoxlibGrid, BoxlibHierarchy, CastroDataset, MaestroDataset, NyxDataset, NyxHierarchy, OrionDataset, OrionHierarchy, WarpXDataset, WarpXHierarchy, ) from .._deprecation import boxlib_deprecation boxlib_deprecation() ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.303152 yt-4.4.0/yt/frontends/boxlib/fields/0000755000175100001770000000000014714401715016747 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/boxlib/fields/__init__.py0000644000175100001770000000031214714401662021055 0ustar00runnerdockerfrom ...amrex.fields import ( BoxlibFieldInfo, CastroFieldInfo, MaestroFieldInfo, NyxFieldInfo, WarpXFieldInfo, ) from .._deprecation import boxlib_deprecation boxlib_deprecation() ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.303152 yt-4.4.0/yt/frontends/boxlib/io/0000755000175100001770000000000014714401715016110 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/boxlib/io/__init__.py0000644000175100001770000000015414714401662020222 0ustar00runnerdockerfrom ...amrex.io import IOHandlerBoxlib from .._deprecation import boxlib_deprecation boxlib_deprecation() ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.303152 yt-4.4.0/yt/frontends/boxlib/tests/0000755000175100001770000000000014714401715016643 
5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/boxlib/tests/__init__.py0000644000175100001770000000003314714401662020751 0ustar00runnerdockerfrom ...amrex import tests ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/boxlib/tests/test_boxlib_deprecation.py0000644000175100001770000000125614714401662024115 0ustar00runnerdockerfrom importlib import import_module, reload from yt._maintenance.deprecation import warnings def test_imports(): with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always") for index, mname in enumerate(["data_structures", "fields", "io"]): mod_name = import_module("yt.frontends.boxlib." + mname) if len(w) != index + 1: reload(mod_name) assert len(w) == 3 and all( [ issubclass(w[0].category, DeprecationWarning), issubclass(w[1].category, DeprecationWarning), issubclass(w[2].category, DeprecationWarning), ] ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/boxlib/tests/test_outputs.py0000644000175100001770000003236114714401662022005 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_allclose, assert_equal from yt.frontends.boxlib.api import ( AMReXDataset, CastroDataset, MaestroDataset, NyxDataset, OrionDataset, WarpXDataset, ) from yt.loaders import load from yt.testing import ( disable_dataset_cache, requires_file, units_override_check, ) from yt.utilities.answer_testing.framework import ( GridValuesTest, data_dir_load, requires_ds, small_patch_amr, ) # We don't do anything needing ghost zone generation right now, because these # are non-periodic datasets. _orion_fields = ( ("gas", "temperature"), ("gas", "density"), ("gas", "velocity_magnitude"), ) _nyx_fields = ( ("boxlib", "Ne"), ("boxlib", "Temp"), ("boxlib", "particle_mass_density"), ) _warpx_fields = (("mesh", "Ex"), ("mesh", "By"), ("mesh", "jz")) _castro_fields = ( ("boxlib", "Temp"), ("gas", "density"), ("boxlib", "particle_count"), ) radadvect = "RadAdvect/plt00000" @requires_ds(radadvect) def test_radadvect(): ds = data_dir_load(radadvect) assert_equal(str(ds), "plt00000") for test in small_patch_amr(ds, _orion_fields): test_radadvect.__name__ = test.description yield test rt = "RadTube/plt00500" @requires_ds(rt) def test_radtube(): ds = data_dir_load(rt) assert_equal(str(ds), "plt00500") for test in small_patch_amr(ds, _orion_fields): test_radtube.__name__ = test.description yield test star = "StarParticles/plrd01000" @requires_ds(star) def test_star(): ds = data_dir_load(star) assert_equal(str(ds), "plrd01000") for test in small_patch_amr(ds, _orion_fields): test_star.__name__ = test.description yield test LyA = "Nyx_LyA/plt00000" @requires_ds(LyA) def test_LyA(): ds = data_dir_load(LyA) assert_equal(str(ds), "plt00000") for test in small_patch_amr( ds, _nyx_fields, input_center="c", input_weight=("boxlib", "Ne") ): test_LyA.__name__ = test.description yield test @requires_file(LyA) def test_nyx_particle_io(): ds = data_dir_load(LyA) grid = ds.index.grids[0] npart_grid_0 = 7908 # read directly from the header assert_equal(grid["all", "particle_position_x"].size, npart_grid_0) assert_equal(grid["DM", "particle_position_y"].size, npart_grid_0) assert_equal(grid["all", "particle_position_z"].size, npart_grid_0) ad = ds.all_data() npart = 32768 # read directly from the header assert_equal(ad["all", "particle_velocity_x"].size, npart) 
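# ("all" is yt's particle-union type; since the union and per-type counts
# asserted here both equal the header value, "DM" is the only species in
# this dataset and the two views must agree.)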
assert_equal(ad["DM", "particle_velocity_y"].size, npart) assert_equal(ad["all", "particle_velocity_z"].size, npart) assert np.all(ad["all", "particle_mass"] == ad["all", "particle_mass"][0]) left_edge = ds.arr([0.0, 0.0, 0.0], "code_length") right_edge = ds.arr([4.0, 4.0, 4.0], "code_length") center = 0.5 * (left_edge + right_edge) reg = ds.region(center, left_edge, right_edge) assert np.all( np.logical_and( reg["all", "particle_position_x"] <= right_edge[0], reg["all", "particle_position_x"] >= left_edge[0], ) ) assert np.all( np.logical_and( reg["all", "particle_position_y"] <= right_edge[1], reg["all", "particle_position_y"] >= left_edge[1], ) ) assert np.all( np.logical_and( reg["all", "particle_position_z"] <= right_edge[2], reg["all", "particle_position_z"] >= left_edge[2], ) ) RT_particles = "RT_particles/plt00050" @requires_ds(RT_particles) def test_RT_particles(): ds = data_dir_load(RT_particles) assert_equal(str(ds), "plt00050") for test in small_patch_amr(ds, _castro_fields): test_RT_particles.__name__ = test.description yield test @requires_file(RT_particles) def test_castro_particle_io(): ds = data_dir_load(RT_particles) grid = ds.index.grids[2] npart_grid_2 = 49 # read directly from the header assert_equal(grid["all", "particle_position_x"].size, npart_grid_2) assert_equal(grid["Tracer", "particle_position_y"].size, npart_grid_2) assert_equal(grid["all", "particle_position_y"].size, npart_grid_2) ad = ds.all_data() npart = 49 # read directly from the header assert_equal(ad["all", "particle_velocity_x"].size, npart) assert_equal(ad["Tracer", "particle_velocity_y"].size, npart) assert_equal(ad["all", "particle_velocity_y"].size, npart) left_edge = ds.arr([0.0, 0.0, 0.0], "code_length") right_edge = ds.arr([0.25, 1.0, 1.0], "code_length") center = 0.5 * (left_edge + right_edge) reg = ds.region(center, left_edge, right_edge) assert np.all( np.logical_and( reg["all", "particle_position_x"] <= right_edge[0], reg["all", "particle_position_x"] >= left_edge[0], ) ) assert np.all( np.logical_and( reg["all", "particle_position_y"] <= right_edge[1], reg["all", "particle_position_y"] >= left_edge[1], ) ) langmuir = "LangmuirWave/plt00020_v2" @requires_ds(langmuir) def test_langmuir(): ds = data_dir_load(langmuir) assert_equal(str(ds), "plt00020_v2") for test in small_patch_amr( ds, _warpx_fields, input_center="c", input_weight=("mesh", "Ex") ): test_langmuir.__name__ = test.description yield test plasma = "PlasmaAcceleration/plt00030_v2" @requires_ds(plasma) def test_plasma(): ds = data_dir_load(plasma) assert_equal(str(ds), "plt00030_v2") for test in small_patch_amr( ds, _warpx_fields, input_center="c", input_weight=("mesh", "Ex") ): test_plasma.__name__ = test.description yield test beam = "GaussianBeam/plt03008" @requires_ds(beam) def test_beam(): ds = data_dir_load(beam) assert_equal(str(ds), "plt03008") for param in ("number of boxes", "maximum zones"): # PR 2807 # these parameters are only populated if the config file attached to this # dataset is read correctly assert param in ds.parameters for test in small_patch_amr( ds, _warpx_fields, input_center="c", input_weight=("mesh", "Ex") ): test_beam.__name__ = test.description yield test @requires_file(plasma) def test_warpx_particle_io(): ds = data_dir_load(plasma) grid = ds.index.grids[0] # read directly from the header npart0_grid_0 = 344 npart1_grid_0 = 69632 assert_equal(grid["particle0", "particle_position_x"].size, npart0_grid_0) assert_equal(grid["particle1", "particle_position_y"].size, npart1_grid_0) assert_equal(grid["all", 
"particle_position_z"].size, npart0_grid_0 + npart1_grid_0) # read directly from the header npart0 = 1360 npart1 = 802816 ad = ds.all_data() assert_equal(ad["particle0", "particle_velocity_x"].size, npart0) assert_equal(ad["particle1", "particle_velocity_y"].size, npart1) assert_equal(ad["all", "particle_velocity_z"].size, npart0 + npart1) np.all(ad["particle1", "particle_mass"] == ad["particle1", "particle_mass"][0]) np.all(ad["particle0", "particle_mass"] == ad["particle0", "particle_mass"][0]) left_edge = ds.arr([-7.5e-5, -7.5e-5, -7.5e-5], "code_length") right_edge = ds.arr([2.5e-5, 2.5e-5, 2.5e-5], "code_length") center = 0.5 * (left_edge + right_edge) reg = ds.region(center, left_edge, right_edge) assert np.all( np.logical_and( reg["all", "particle_position_x"] <= right_edge[0], reg["all", "particle_position_x"] >= left_edge[0], ) ) assert np.all( np.logical_and( reg["all", "particle_position_y"] <= right_edge[1], reg["all", "particle_position_y"] >= left_edge[1], ) ) assert np.all( np.logical_and( reg["all", "particle_position_z"] <= right_edge[2], reg["all", "particle_position_z"] >= left_edge[2], ) ) _raw_fields = [("raw", "Bx"), ("raw", "Ey"), ("raw", "jz")] laser = "Laser/plt00015" @requires_ds(laser) def test_raw_fields(): for field in _raw_fields: yield GridValuesTest(laser, field) @requires_file(rt) def test_OrionDataset(): assert isinstance(data_dir_load(rt), OrionDataset) @requires_file(LyA) def test_NyxDataset(): assert isinstance(data_dir_load(LyA), NyxDataset) @requires_file("nyx_small/nyx_small_00000") def test_NyxDataset_2(): assert isinstance(data_dir_load("nyx_small/nyx_small_00000"), NyxDataset) @requires_file(RT_particles) def test_CastroDataset(): assert isinstance(data_dir_load(RT_particles), CastroDataset) @requires_file("castro_sod_x_plt00036") def test_CastroDataset_2(): assert isinstance(data_dir_load("castro_sod_x_plt00036"), CastroDataset) @requires_file("castro_sedov_1d_cyl_plt00150") def test_CastroDataset_3(): assert isinstance(data_dir_load("castro_sedov_1d_cyl_plt00150"), CastroDataset) @requires_file(plasma) def test_WarpXDataset(): assert isinstance(data_dir_load(plasma), WarpXDataset) @disable_dataset_cache @requires_file(plasma) def test_magnetic_units(): ds1 = load(plasma) assert_allclose(ds1.magnetic_unit.value, 1.0) assert str(ds1.magnetic_unit.units) == "T" mag_unit1 = ds1.magnetic_unit.to("code_magnetic") assert_allclose(mag_unit1.value, 1.0) assert str(mag_unit1.units) == "code_magnetic" ds2 = load(plasma, unit_system="cgs") assert_allclose(ds2.magnetic_unit.value, 1.0e4) assert str(ds2.magnetic_unit.units) == "G" mag_unit2 = ds2.magnetic_unit.to("code_magnetic") assert_allclose(mag_unit2.value, 1.0) assert str(mag_unit2.units) == "code_magnetic" @requires_ds(laser) def test_WarpXDataset_2(): assert isinstance(data_dir_load(laser), WarpXDataset) @requires_file("plt.Cavity00010") def test_AMReXDataset(): ds = data_dir_load("plt.Cavity00010", kwargs={"cparam_filename": "inputs"}) assert isinstance(ds, AMReXDataset) @requires_file(rt) def test_units_override(): units_override_check(rt) nyx_no_particles = "nyx_sedov_plt00086" @requires_file(nyx_no_particles) def test_nyx_no_part(): assert isinstance(data_dir_load(nyx_no_particles), NyxDataset) fields = sorted( [ ("boxlib", "H"), ("boxlib", "He"), ("boxlib", "MachNumber"), ("boxlib", "Ne"), ("boxlib", "Rank"), ("boxlib", "StateErr"), ("boxlib", "Temp"), ("boxlib", "X(H)"), ("boxlib", "X(He)"), ("boxlib", "density"), ("boxlib", "divu"), ("boxlib", "eint_E"), ("boxlib", "eint_e"), ("boxlib", 
"entropy"), ("boxlib", "forcex"), ("boxlib", "forcey"), ("boxlib", "forcez"), ("boxlib", "kineng"), ("boxlib", "logden"), ("boxlib", "magmom"), ("boxlib", "magvel"), ("boxlib", "magvort"), ("boxlib", "pressure"), ("boxlib", "rho_E"), ("boxlib", "rho_H"), ("boxlib", "rho_He"), ("boxlib", "rho_e"), ("boxlib", "soundspeed"), ("boxlib", "x_velocity"), ("boxlib", "xmom"), ("boxlib", "y_velocity"), ("boxlib", "ymom"), ("boxlib", "z_velocity"), ("boxlib", "zmom"), ] ) ds = data_dir_load(nyx_no_particles) assert_equal(sorted(ds.field_list), fields) msubch = "maestro_subCh_plt00248" @requires_file(msubch) def test_maestro_parameters(): assert isinstance(data_dir_load(msubch), MaestroDataset) ds = data_dir_load(msubch) # Check a string parameter assert ds.parameters["plot_base_name"] == "subCh_hot_baserun_plt" assert type(ds.parameters["plot_base_name"]) is str # noqa: E721 # Check boolean parameters: T or F assert not ds.parameters["use_thermal_diffusion"] assert type(ds.parameters["use_thermal_diffusion"]) is bool # noqa: E721 assert ds.parameters["do_burning"] assert type(ds.parameters["do_burning"]) is bool # noqa: E721 # Check a float parameter with a decimal point assert ds.parameters["sponge_kappa"] == float("10.00000000") assert type(ds.parameters["sponge_kappa"]) is float # noqa: E721 # Check a float parameter with E exponent notation assert ds.parameters["small_dt"] == float("0.1000000000E-09") # Check an int parameter assert ds.parameters["s0_interp_type"] == 3 assert type(ds.parameters["s0_interp_type"]) is int # noqa: E721 castro_1d_cyl = "castro_sedov_1d_cyl_plt00150" @requires_file(castro_1d_cyl) def test_castro_parameters(): ds = data_dir_load(castro_1d_cyl) assert isinstance(ds, CastroDataset) # Modified from default (leading [*]) assert ds.parameters["castro.do_hydro"] == 1 assert ds.parameters["castro.cfl"] == 0.5 assert ds.parameters["problem.p_ambient"] == float("1e-06") # Leading [*] should be removed from the parameter name assert "[*] castro.do_hydro" not in ds.parameters # Not modified from default assert ds.parameters["castro.pslope_cutoff_density"] == float("-1e+20") assert ds.parameters["castro.do_sponge"] == 0 assert ds.parameters["problem.dens_ambient"] == 1 assert ds.parameters["eos.eos_assume_neutral"] == 1 # Empty string value assert ds.parameters["castro.stopping_criterion_field"] is None ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.307152 yt-4.4.0/yt/frontends/cf_radial/0000755000175100001770000000000014714401715016126 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/cf_radial/__init__.py0000644000175100001770000000005214714401662020235 0ustar00runnerdocker""" API for yt.frontends.cf_radial """ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/cf_radial/api.py0000644000175100001770000000030114714401662017244 0ustar00runnerdocker""" API for yt.frontends.cf_radial """ from .data_structures import CFRadialDataset, CFRadialGrid, CFRadialHierarchy from .fields import CFRadialFieldInfo from .io import CFRadialIOHandler ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/cf_radial/data_structures.py0000644000175100001770000003233314714401662021721 0ustar00runnerdocker""" CF Radial data structures """ import contextlib import os import weakref import numpy as np from unyt import unyt_array from 
yt.data_objects.index_subobjects.grid_patch import AMRGridPatch from yt.data_objects.static_output import Dataset from yt.funcs import mylog from yt.geometry.grid_geometry_handler import GridIndex from yt.utilities.file_handler import NetCDF4FileHandler, valid_netcdf_signature from yt.utilities.on_demand_imports import _xarray as xr from .fields import CFRadialFieldInfo class CFRadialGrid(AMRGridPatch): _id_offset = 0 def __init__(self, id, index, level, dimensions): super().__init__(id, filename=index.index_filename, index=index) self.Parent = None self.Children = [] self.Level = level self.ActiveDimensions = dimensions class CFRadialHierarchy(GridIndex): grid = CFRadialGrid def __init__(self, ds, dataset_type="cf_radial"): self.dataset_type = dataset_type self.dataset = weakref.proxy(ds) # our index file is the dataset itself: self.index_filename = self.dataset.parameter_filename self.directory = os.path.dirname(self.index_filename) # float type for the simulation edges and must be float64 now self.float_type = np.float64 super().__init__(ds, dataset_type) def _detect_output_fields(self): # This sets self.field_list, containing all the available on-disk fields and # records the units for each field. self.field_list = [] units = {} with self.ds._handle() as xr_ds_handle: for key in xr_ds_handle.variables.keys(): if all(x in xr_ds_handle[key].dims for x in ["time", "z", "y", "x"]): fld = ("cf_radial", key) self.field_list.append(fld) units[fld] = xr_ds_handle[key].units self.ds.field_units.update(units) def _count_grids(self): self.num_grids = 1 def _parse_index(self): self.grid_left_edge[0][:] = self.ds.domain_left_edge[:] self.grid_right_edge[0][:] = self.ds.domain_right_edge[:] self.grid_dimensions[0][:] = self.ds.domain_dimensions[:] self.grid_particle_count[0][0] = 0 self.grid_levels[0][0] = 0 self.max_level = 0 def _populate_grid_objects(self): # only a single grid, no need to loop g = self.grid(0, self, self.grid_levels.flat[0], self.grid_dimensions[0]) g._prepare_grid() g._setup_dx() self.grids = np.array([g], dtype="object") class CFRadialDataset(Dataset): _load_requirements = ["xarray", "pyart"] _index_class = CFRadialHierarchy _field_info_class = CFRadialFieldInfo def __init__( self, filename, dataset_type="cf_radial", storage_filename=None, storage_overwrite: bool = False, grid_shape: tuple[int, int, int] | None = None, grid_limit_x: tuple[float, float] | None = None, grid_limit_y: tuple[float, float] | None = None, grid_limit_z: tuple[float, float] | None = None, units_override=None, ): """ Parameters ---------- filename dataset_type storage_filename: Optional[str] the filename to store gridded file to if necessary. If not provided, the string "_yt_grid" will be appended to the dataset filename. storage_overwrite: bool if True and if any gridding parameters are set, then the storage_filename will be over-written if it exists. Default is False. grid_shape : Optional[Tuple[int, int, int]] when gridding to cartesian, grid_shape is the number of cells in the z, y, x coordinates. 
If not provided, yt attempts to calculate a reasonable shape based on the resolution of the original cfradial grid grid_limit_x : Optional[Tuple[float, float]] The x range of the cartesian-gridded data in the form (xmin, xmax) with x in the native radar range units grid_limit_y : Optional[Tuple[float, float]] The y range of the cartesian-gridded data in the form (ymin, ymax) with y in the native radar range units grid_limit_z : Optional[Tuple[float, float]] The z range of the cartesian-gridded data in the form (zmin, zmax) with z in the native radar range units units_override """ self.fluid_types += ("cf_radial",) with self._handle(filename=filename) as xr_ds_handle: if "x" not in xr_ds_handle.coords: if storage_filename is None: f_base, f_ext = os.path.splitext(filename) storage_filename = f_base + "_yt_grid" + f_ext regrid = True if os.path.exists(storage_filename): # pyart grid.write will error if the filename exists, so this logic # forces some explicit behavior to minimize confusion and avoid # overwriting or deleting without explicit user consent. if storage_overwrite: os.remove(storage_filename) elif any([grid_shape, grid_limit_x, grid_limit_y, grid_limit_z]): mylog.warning( "Ignoring provided grid parameters because %s exists.", storage_filename, ) mylog.warning( "To re-grid, either provide a unique storage_filename or set " "storage_overwrite to True to overwrite %s.", storage_filename, ) regrid = False else: mylog.info( "loading existing re-gridded file: %s", storage_filename ) regrid = False if regrid: mylog.info("Building cfradial grid") from yt.utilities.on_demand_imports import _pyart as pyart radar = pyart.io.read_cfradial(filename) grid_limit_z = self._validate_grid_dim(radar, "z", grid_limit_z) grid_limit_x = self._validate_grid_dim(radar, "x", grid_limit_x) grid_limit_y = self._validate_grid_dim(radar, "y", grid_limit_y) grid_limits = (grid_limit_z, grid_limit_y, grid_limit_x) grid_shape = self._validate_grid_shape(grid_shape) # note: grid_shape must be in (z, y, x) order. self.grid_shape = grid_shape self.grid_limits = grid_limits mylog.info("Calling pyart.map.grid_from_radars ... 
") # this is fairly slow grid = pyart.map.grid_from_radars( (radar,), grid_shape=self.grid_shape, grid_limits=self.grid_limits, ) mylog.info( "Successfully built cfradial grid, writing to %s", storage_filename, ) mylog.info( "Subsequent loads of %s will load the gridded file by default", filename, ) grid.write(storage_filename) filename = storage_filename super().__init__(filename, dataset_type, units_override=units_override) self.storage_filename = storage_filename self.refine_by = 2 # refinement factor between a grid and its subgrid @contextlib.contextmanager def _handle(self, filename: str | None = None): if filename is None: if hasattr(self, "filename"): filename = self.filename else: raise RuntimeError("Dataset has no filename yet.") with xr.open_dataset(filename) as xrds: yield xrds def _validate_grid_dim( self, radar, dim: str, grid_limit: tuple[float, float] | None = None ) -> tuple[float, float]: if grid_limit is None: if dim.lower() == "z": gate_alt = radar.gate_altitude["data"] gate_alt_units = radar.gate_altitude["units"] grid_limit = (gate_alt.min(), gate_alt.max()) grid_limit = self._round_grid_guess(grid_limit, gate_alt_units) mylog.info( "grid_limit_z not provided, using max height range in data: (%f, %f)", *grid_limit, ) else: max_range = radar.range["data"].max() grid_limit = self._round_grid_guess( (-max_range, max_range), radar.range["units"] ) mylog.info( "grid_limit_%s not provided, using max horizontal range in data: (%f, %f)", dim, *grid_limit, ) if len(grid_limit) != 2: raise ValueError( f"grid_limit_{dim} must have 2 dimensions, but it has {len(grid_limit)}" ) return grid_limit def _validate_grid_shape( self, grid_shape: tuple[int, int, int] | None = None ) -> tuple[int, int, int]: if grid_shape is None: grid_shape = (100, 100, 100) mylog.info( "grid_shape not provided, using (nz, ny, nx) = (%i, %i, %i)", *grid_shape, ) if len(grid_shape) != 3: raise ValueError( f"grid_shape must have 3 dimensions, but it has {len(grid_shape)}" ) return grid_shape def _round_grid_guess(self, bounds: tuple[float, float], unit_str: str): # rounds the bounds to the closest 10 km increment that still contains # the grid_limit for findstr, repstr in self._field_info_class.unit_subs: unit_str = unit_str.replace(findstr, repstr) limits = unyt_array(bounds, unit_str).to("km") limits[0] = np.floor(limits[0] / 10.0) * 10.0 limits[1] = np.ceil(limits[1] / 10.0) * 10.0 return tuple(limits.to(unit_str).tolist()) def _set_code_unit_attributes(self): with self._handle() as xr_ds_handle: length_unit = xr_ds_handle.variables["x"].attrs["units"] self.length_unit = self.quan(1.0, length_unit) self.mass_unit = self.quan(1.0, "kg") self.time_unit = self.quan(1.0, "s") def _parse_parameter_file(self): self.parameters = {} with self._handle() as xr_ds_handle: x, y, z = (xr_ds_handle.coords[d] for d in "xyz") self.domain_left_edge = np.array([x.min(), y.min(), z.min()]) self.domain_right_edge = np.array([x.max(), y.max(), z.max()]) self.dimensionality = 3 dims = [xr_ds_handle.sizes[d] for d in "xyz"] self.domain_dimensions = np.array(dims, dtype="int64") self._periodicity = (False, False, False) # note: origin_latitude and origin_longitude arrays will have time # as a dimension and the initial implementation here only handles # the first index. Also, the time array may have a datetime dtype, # so cast to float. 
self.origin_latitude = xr_ds_handle.origin_latitude[0] self.origin_longitude = xr_ds_handle.origin_longitude[0] self.current_time = float(xr_ds_handle.time.values[0]) # Cosmological information set to zero (not in space). self.cosmological_simulation = 0 self.current_redshift = 0.0 self.omega_lambda = 0.0 self.omega_matter = 0.0 self.hubble_constant = 0.0 @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: # This accepts a filename or a set of arguments and returns True or # False depending on if the file is of the type requested. if not valid_netcdf_signature(filename): return False if cls._missing_load_requirements(): return False is_cfrad = False try: # note that we use the NetCDF4FileHandler here to avoid some # issues with xarray opening datasets it cannot handle. Once # a dataset is as identified as a CFRadialDataset, xarray is used # for opening. See https://github.com/yt-project/yt/issues/3987 nc4_file = NetCDF4FileHandler(filename) with nc4_file.open_ds(keepweakref=True) as ds: con = "Conventions" # the attribute to check for file conventions # note that the attributes here are potentially space- or # comma-delimited strings, so we concatenate a single string # to search for a substring. cons = "" # the value of the Conventions attribute for c in [con, con.lower(), "Sub_" + con.lower()]: if hasattr(ds, c): cons += getattr(ds, c) is_cfrad = "CF/Radial" in cons or "CF-Radial" in cons except (OSError, AttributeError, ImportError): return False return is_cfrad ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/cf_radial/fields.py0000644000175100001770000000221514714401662017747 0ustar00runnerdocker""" CF-radial-specific fields """ from yt.fields.field_info_container import FieldInfoContainer class CFRadialFieldInfo(FieldInfoContainer): known_other_fields = () # fields are set dynamically known_particle_fields = () units_to_ignore = ("dBz", "dBZ", "ratio") # set as nondimensional if found field_units_ignored: list[str] = [] # fields for which units have been ignored # (find, replace) pairs for sanitizing: unit_subs = (("degrees", "degree"), ("meters", "m"), ("_per_", "/")) def setup_fluid_fields(self): # Here we dynamically add fields available in our netcdf file for to the # FieldInfoContainer with sanitized units. for field in self.field_list: # field here is ('fluid_type', 'field') tuple units = self.ds.field_units.get(field, "") # sanitization of the units if units in self.units_to_ignore: self.field_units_ignored.append(field) units = "" for findstr, repstr in self.unit_subs: units = units.replace(findstr, repstr) self.add_output_field(field, "cell", units=units) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/cf_radial/io.py0000644000175100001770000000211114714401662017103 0ustar00runnerdocker""" CF-Radial-specific IO functions """ import numpy as np from yt.utilities.io_handler import BaseIOHandler class CFRadialIOHandler(BaseIOHandler): _particle_reader = False _dataset_type = "cf_radial" def _read_fluid_selection(self, chunks, selector, fields, size): # This needs to allocate a set of arrays inside a dictionary, where the # keys are the (ftype, fname) tuples and the values are arrays that # have been masked using whatever selector method is appropriate. The # dict gets returned at the end and it should be flat, with selected # data. 
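# For example, after the loop below the returned dict is flat, e.g.
#   {("cf_radial", "reflectivity"): <float64 array of exactly `size` values>}
# with one entry per requested (ftype, fname) tuple.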
rv = {field: np.empty(size, dtype="float64") for field in fields} offset = 0 with self.ds._handle() as xr_ds_handle: for field in fields: for chunk in chunks: for grid in chunk.objs: variable = xr_ds_handle.variables[field[1]] data = variable.values[0, ...].T offset += grid.select(selector, data, rv[field], offset) return rv ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.307152 yt-4.4.0/yt/frontends/cf_radial/tests/0000755000175100001770000000000014714401715017270 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/cf_radial/tests/__init__.py0000644000175100001770000000000014714401662021370 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/cf_radial/tests/test_cf_radial_pytest.py0000644000175100001770000000322714714401662024222 0ustar00runnerdockerfrom pathlib import Path import numpy as np import yt from yt.frontends.cf_radial.api import CFRadialDataset from yt.testing import requires_module_pytest as requires_module def create_cf_radial_mock_gridded_ds(savedir: Path) -> Path: # a minimal dataset that yt should pick up as a CfRadial dataset import xarray as xr file_to_save = savedir / "mock_gridded_cfradial.nc" shp = (16, 16, 16) xyz = {dim: np.linspace(0.0, 1.0, shp[idim]) for idim, dim in enumerate("xyz")} da = xr.DataArray(data=np.ones(shp), coords=xyz, name="reflectivity") ds_xr = da.to_dataset() ds_xr.attrs["conventions"] = "CF/Radial" ds_xr.x.attrs["units"] = "m" ds_xr.y.attrs["units"] = "m" ds_xr.z.attrs["units"] = "m" ds_xr.reflectivity.attrs["units"] = "" times = np.array(["2017-05-19T01", "2017-05-19T01:01"], dtype="datetime64[ns]") ds_xr = ds_xr.assign_coords({"time": times}) ds_xr["origin_latitude"] = xr.DataArray(np.zeros_like(times), dims=("time",)) ds_xr["origin_longitude"] = xr.DataArray(np.zeros_like(times), dims=("time",)) ds_xr.to_netcdf(file_to_save, engine="netcdf4") return file_to_save @requires_module("xarray", "netCDF4", "pyart") def test_load_mock_gridded_cf_radial(tmp_path): import xarray as xr test_file = create_cf_radial_mock_gridded_ds(tmp_path) assert test_file.exists() # make sure that the mock dataset is valid and can be re-loaded with xarray with xr.open_dataset(test_file) as ds_xr: assert "CF/Radial" in ds_xr.conventions test_file = create_cf_radial_mock_gridded_ds(tmp_path) ds_yt = yt.load(test_file) assert isinstance(ds_yt, CFRadialDataset) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/cf_radial/tests/test_outputs.py0000644000175100001770000001304314714401662022426 0ustar00runnerdocker""" CF-Radial frontend tests """ import os import shutil import tempfile import numpy as np from numpy.testing import assert_almost_equal, assert_equal from yt.frontends.cf_radial.data_structures import CFRadialDataset from yt.testing import ( requires_file, requires_module, units_override_check, ) from yt.utilities.answer_testing.framework import ( data_dir_load, requires_ds, small_patch_amr, ) cf = "CfRadialGrid/grid1.nc" # an already gridded cfradial file cf_nongridded = ( "CfRadialGrid/swx_20120520_0641.nc" # cfradial file without cartesian grid ) _fields_cfradial = ["reflectivity", "velocity", "gate_id", "differential_phase", "ROI"] _fields_units = { "reflectivity": "dimensionless", "velocity": "m/s", "differential_phase": "degree", "gate_id": "dimensionless", "ROI": "m", } 
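# Illustrative usage sketch (assumes the already-gridded sample file named
# above is available relative to the yt test data directory):
#
#     import yt
#     ds = yt.load("CfRadialGrid/grid1.nc")
#     ad = ds.all_data()
#     str(ad["cf_radial", "velocity"].units)   # -> "m/s", per the table above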
@requires_module("xarray") @requires_file(cf) def test_units_override(): units_override_check(cf) @requires_module("xarray") @requires_file(cf) def test_cf_radial_gridded(): ds = data_dir_load(cf) assert isinstance(ds, CFRadialDataset) check_domain(ds) check_origin_latitude_longitude(ds) ad = ds.all_data() for field in _fields_cfradial: check_fields(ds, field) check_field_units(ad, field, _fields_units[field]) def check_fields(ds, field): assert ("cf_radial", field) in ds.field_list with ds._handle() as xr_ds_handle: assert field in xr_ds_handle.variables.keys() def check_field_units(ad, field, value): assert str(ad["cf_radial", field].units) == value def check_origin_latitude_longitude(ds): assert_almost_equal(ds.origin_latitude.values, 36.49120001) assert_almost_equal(ds.origin_longitude.values, -97.5939) def check_domain(ds): domain_dim_array = [251, 251, 46] assert_equal(ds.domain_dimensions, domain_dim_array) domain_center_array = [0.0, 0.0, 7500.0] assert_equal(ds.domain_center, domain_center_array) domain_left_array = [-50000.0, -50000.0, 0.0] assert_equal(ds.domain_left_edge, domain_left_array) domain_right_array = [50000.0, 50000.0, 15000.0] assert_equal(ds.domain_right_edge, domain_right_array) @requires_module("xarray") @requires_file(cf_nongridded) def test_auto_gridding(): # loads up a radial dataset, which triggers the gridding. # create temporary directory and grid file tempdir = tempfile.mkdtemp() grid_file = os.path.join(tempdir, "temp_grid.nc") # this load will trigger the re-gridding and write out the gridded file # from which data will be loaded. With default gridding params, this takes # on the order of 10s, but since we are not testing actual output here, we # can decrease the resolution to speed it up. grid_shape = (10, 10, 10) ds = data_dir_load( cf_nongridded, kwargs={"storage_filename": grid_file, "grid_shape": grid_shape} ) assert os.path.exists(grid_file) # check that the cartesian fields exist now with ds._handle() as xr_ds_handle: on_disk_fields = xr_ds_handle.variables.keys() for field in ["x", "y", "z"]: assert field in on_disk_fields assert all(ds.domain_dimensions == grid_shape) # check that we can load the gridded file too ds = data_dir_load(grid_file) assert isinstance(ds, CFRadialDataset) shutil.rmtree(tempdir) @requires_module("xarray") @requires_file(cf_nongridded) def test_grid_parameters(): # checks that the gridding parameters are used and that conflicts in parameters # are resolved as expected. tempdir = tempfile.mkdtemp() grid_file = os.path.join(tempdir, "temp_grid_params.nc") # check that the grid parameters work cfkwargs = { "storage_filename": grid_file, "grid_shape": (10, 10, 10), "grid_limit_x": (-10000, 10000), "grid_limit_y": (-10000, 10000), "grid_limit_z": (500, 20000), } ds = data_dir_load(cf_nongridded, kwargs=cfkwargs) expected_width = [] for dim in "xyz": minval, maxval = cfkwargs[f"grid_limit_{dim}"] expected_width.append(maxval - minval) expected_width = np.array(expected_width) actual_width = ds.domain_width.to_value("m") assert all(expected_width == actual_width) assert all(ds.domain_dimensions == cfkwargs["grid_shape"]) # check the grid parameter conflicts # on re-load with default grid params it will reload storage_filename if # it exists. Just checking that this runs... 
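# (storage_filename takes precedence on re-load: any grid_shape/grid_limit_*
# kwargs are ignored with a warning unless storage_overwrite=True, as
# exercised below.)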
_ = data_dir_load(cf_nongridded, kwargs={"storage_filename": grid_file}) # if storage_filename exists, grid parameters are ignored (with a warning) # and the domain_dimensions will match the original new_kwargs = {"storage_filename": grid_file, "grid_shape": (15, 15, 15)} ds = data_dir_load(cf_nongridded, kwargs=new_kwargs) assert all(ds.domain_dimensions == cfkwargs["grid_shape"]) # if we overwrite, the regridding should run and the dimensions should match # the desired dimensions new_kwargs["storage_overwrite"] = True ds = data_dir_load(cf_nongridded, kwargs=new_kwargs) assert all(ds.domain_dimensions == new_kwargs["grid_shape"]) shutil.rmtree(tempdir) @requires_module("xarray") @requires_ds(cf) def test_cfradial_grid_field_values(): ds = data_dir_load(cf) fields_to_check = [("cf_radial", field) for field in _fields_cfradial] wtfield = ("cf_radial", "reflectivity") for test in small_patch_amr( ds, fields_to_check, input_center=ds.domain_center, input_weight=wtfield ): test_cfradial_grid_field_values.__name__ = test.description yield test ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.307152 yt-4.4.0/yt/frontends/chimera/0000755000175100001770000000000014714401715015632 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/chimera/__init__.py0000644000175100001770000000005014714401662017737 0ustar00runnerdocker""" API for yt.frontends.chimera """ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/chimera/api.py0000644000175100001770000000023314714401662016754 0ustar00runnerdocker""" API for yt.frontends.chimera """ from .data_structures import ChimeraDataset from .fields import ChimeraFieldInfo from .io import ChimeraIOHandler ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/chimera/data_structures.py0000644000175100001770000002600014714401662021417 0ustar00runnerdocker""" Chimera data structures """ import os import re import numpy as np from yt.data_objects.index_subobjects.unstructured_mesh import SemiStructuredMesh from yt.data_objects.static_output import Dataset from yt.geometry.api import Geometry from yt.geometry.geometry_handler import YTDataChunk from yt.geometry.unstructured_mesh_handler import UnstructuredIndex from yt.utilities.file_handler import HDF5FileHandler from yt.utilities.io_handler import io_registry from yt.utilities.logger import ytLogger as mylog from yt.utilities.on_demand_imports import _h5py as h5py from .fields import ChimeraFieldInfo class ChimeraMesh(SemiStructuredMesh): _index_offset = 0 def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) def _find_files(filename_c): # Returns a list of all files that share a frame number with the input dirname, file = os.path.split(filename_c) match = re.match(r"chimera_\d+_grid", file) if match is None: raise RuntimeError( rf"Expected filename to be of form 'chimera_\d+_grid_*', got {file!r}" ) prefix = match.group() frames = [f for f in os.listdir(dirname) if f.startswith(prefix)] index_filenames = [os.path.join(dirname, f) for f in sorted(frames)] return index_filenames class ChimeraUNSIndex(UnstructuredIndex): def __init__(self, ds, dataset_type="chimera"): self._handle = ds._handle super().__init__(ds, dataset_type) self.directory = os.path.dirname(self.dataset.filename) self.dataset_type = dataset_type def _initialize_mesh(self): self.meshes = 
[] index_filenames = _find_files( self.dataset.filename ) # Retrieves list of all datafiles with the same frame number # Detects Yin-Yang data format yy = any("grid_2" in file for file in index_filenames) for n, file in enumerate(index_filenames): with h5py.File(file, "r") as f: nmx, nmy, nmz = tuple(f["mesh"]["array_dimensions"][:]) l = ( int(file[-5:-3]) - 1 ) # Pulls the subgrid number from the data file name if nmz > 2: k = f["fluid"]["entropy"].shape[0] r = f["mesh"]["x_ef"][:-2] theta = f["mesh"]["y_ef"][:] phi = f["mesh"]["z_ef"][ k * l : k * (l + 1) + 1 ] # Pulls only the individual subgrid's band of phi values elif f["mesh"]["z_ef"][-1] == f["mesh"]["z_ef"][0]: r = f["mesh"]["x_ef"][ : f["mesh"]["radial_index_bound"][1] - f["mesh"]["x_ef"].shape[0] ] theta = f["mesh"]["y_ef"][:] phi = np.array([f["mesh"]["z_ef"][0], 2 * np.pi]) else: r = f["mesh"]["x_ef"][ : f["mesh"]["radial_index_bound"][1] - f["mesh"]["x_ef"].shape[0] ] theta = f["mesh"]["y_ef"][:] phi = f["mesh"]["z_ef"][:] # Creates variables to hold the size of dimensions nxd = r.size nyd = theta.size nzd = phi.size nyzd = nyd * nzd nyzd_ = (nyd - 1) * (nzd - 1) # Generates and fills coordinate array coords = np.zeros((nxd, nyd, nzd, 3), dtype="float64", order="C") coords[:, :, :, 0] = r[:, None, None] coords[:, :, :, 1] = theta[None, :, None] coords[:, :, :, 2] = phi[None, None, :] if yy: mylog.warning( "Yin-Yang File Detected; This data is not currently supported." ) coords.shape = (nxd * nyd * nzd, 3) # Connectivity is an array of rows, each of which corresponds to a grid cell. # The 8 elements of each row are integers representing the cell vertices. # These integers reference the numerical index of the element of the # "coords" array which corresponds to the spatial coordinate. connectivity = np.zeros( ((nyd - 1) * (nxd - 1) * (nzd - 1), 8), dtype="int64", order="C" ) # Creates scaffold array connectivity[0] = [ 0, 1, nzd, (nzd + 1), (nyzd), (nyzd + 1), (nyzd + nzd), (nyzd + nzd + 1), ] # Manually defines first coordinate set for p in range( nzd - 1 ): # Increments first row around phi to define an arc of cells if p > 0: connectivity[p] = connectivity[p - 1] + 1 for t in range( nyd - 1 ): # Increments this arc around theta to define a shell if t > 0: connectivity[t * (nzd - 1) : (t + 1) * (nzd - 1)] = ( connectivity[(t - 1) * (nzd - 1) : t * (nzd - 1)] + nzd ) for r in range( nxd - 1 ): # Increments this shell along r to define a sphere if r > 0: connectivity[r * (nyzd_) : (r + 1) * (nyzd_)] = ( connectivity[(r - 1) * (nyzd_) : r * (nyzd_)] + nyzd ) mesh = ChimeraMesh( n, self.index_filename, connectivity, coords, self ) # Creates a mesh object if "grid_" in file: mylog.info("Mesh %s generated", (n + 1) / len(index_filenames)) self.meshes.append( mesh ) # Adds new mesh to the list of generated meshes def _detect_output_fields(self): # Reads in the available data fields with h5py.File(self.index_filename, "r") as f: fluids = [ ("chimera", i) for i in f["fluid"] if np.shape(f["fluid"][i]) == np.shape(f["fluid"]["rho_c"]) ] abundance = [ ("chimera", i) for i in f["abundance"] if np.shape(f["abundance"][i]) == np.shape(f["fluid"]["rho_c"]) ] e_rms = [("chimera", f"e_rms_{i+1}") for i in range(4)] lumin = [("chimera", f"lumin_{i+1}") for i in range(4)] num_lumin = [("chimera", f"num_lumin_{i+1}") for i in range(4)] a_name = [ ("chimera", i.decode("utf-8").strip()) for i in f["abundance"]["a_name"] ] self.field_list = ( fluids + abundance + e_rms + lumin + num_lumin + [("chimera", "abar")] + a_name ) if 
np.shape(f["abundance"]["nse_c"]) != np.shape(f["fluid"]["rho_c"]): self.field_list += [("chimera", "nse_c")] def _chunk_io(self, dobj, cache=True, local_only=False): # Creates Data chunk gobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info) for subset in gobjs: yield YTDataChunk( dobj, "io", [subset], self._count_selection(dobj, [subset]), cache=cache ) def _setup_data_io(self): self.io = io_registry[self.dataset_type](self.dataset) class ChimeraDataset(Dataset): _load_requirements = ["h5py"] _index_class = ChimeraUNSIndex # ChimeraHierarchy _field_info_class = ChimeraFieldInfo def __init__( self, filename, dataset_type="chimera", storage_filename=None, units_override=None, ): # refinement factor between a grid and its subgrid self.refine_by = 1 # Somewhat superfluous for Chimera, but left to avoid errors self.fluid_types += ("chimera",) super().__init__(filename, dataset_type, units_override=units_override) self.storage_filename = storage_filename self._handle = HDF5FileHandler(filename) def _set_code_unit_attributes(self): # This is where quantities are created that represent the various # on-disk units. These are the currently available quantities which # should be set, along with examples of how to set them to standard # values. # self.length_unit = self.quan(1.0, "cm") self.mass_unit = self.quan(1.0, "g") self.time_unit = self.quan(1.0, "s") self.time_unit = self.quan(1.0, "s") self.velocity_unit = self.quan(1.0, "cm/s") self.magnetic_unit = self.quan(1.0, "gauss") def _parse_parameter_file(self): with h5py.File(self.parameter_filename, "r") as f: # Reads in simulation time, number of dimensions and shape self.current_time = f["mesh"]["time"][()] self.dimensionality = 3 self.domain_dimensions = f["mesh"]["array_dimensions"][()] self.geometry = Geometry.SPHERICAL # Uses default spherical geometry self._periodicity = (False, False, True) dle = [ f["mesh"]["x_ef"][0], f["mesh"]["y_ef"][0], f["mesh"]["z_ef"][0], ] if ( self.domain_dimensions[2] <= 2 and f["mesh"]["z_ef"][-1] == f["mesh"]["z_ef"][0] ): dre = [ f["mesh"]["x_ef"][ f["mesh"]["radial_index_bound"][1] - f["mesh"]["x_ef"].shape[0] ], f["mesh"]["y_ef"][-1], 2 * np.pi, ] else: dre = [ f["mesh"]["x_ef"][ f["mesh"]["radial_index_bound"][1] - f["mesh"]["x_ef"].shape[0] ], f["mesh"]["y_ef"][-1], f["mesh"]["z_ef"][-1], ] # Sets left and right bounds based on earlier definitions self.domain_right_edge = np.array(dre) self.domain_left_edge = np.array(dle) self.cosmological_simulation = 0 # Chimera is not a cosmological simulation @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: # This accepts a filename or a set of arguments and returns True or # False depending on if the file is of the type requested. if cls._missing_load_requirements(): return False try: fileh = HDF5FileHandler(filename) if ( "fluid" in fileh and "agr_c" in fileh["fluid"].keys() and "grav_x_c" in fileh["fluid"].keys() ): return True # Numpy bless except OSError: pass return False ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/chimera/definitions.py0000644000175100001770000000011414714401662020514 0ustar00runnerdocker# This file is often empty. It can hold definitions related to a frontend. 
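# Illustrative usage sketch (filename taken from the frontend tests below;
# Chimera data files must match the "chimera_<frame>_grid_*" pattern checked
# by _find_files() in data_structures.py):
#
#     import yt
#     ds = yt.load("F37_80/chimera_00001_grid_1_01.h5")
#     print(ds.geometry)            # -> spherical
#     ad = ds.all_data()
#     ad["chimera", "entropy"]      # fields live under the "chimera" type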
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/chimera/fields.py0000644000175100001770000000452714714401662017463 0ustar00runnerdocker""" Chimera-specific fields """ from yt._typing import KnownFieldsT from yt.fields.field_info_container import FieldInfoContainer # We need to specify which fields we might have in our dataset. The field info # container subclass here will define which fields it knows about. There are # optionally methods on it that get called which can be subclassed. class ChimeraFieldInfo(FieldInfoContainer): known_other_fields: KnownFieldsT = ( ("e_int", ("erg", ["Internal Energy"], "Internal Energy")), ("entropy", ("", ["Entropy"], None)), ("rho_c", ("g/cm**3", ["density", "Density"], "Density")), ("dudt_nu", ("erg/s", [], None)), ("dudt_nuc", ("erg/s", [], None)), ("grav_x_c", ("cm/s**2", [], None)), ("grav_y_c", ("cm/s**2", [], None)), ("grav_z_c", ("cm/s**2", [], None)), ("press", ("erg/cm**3", ["pressure"], "Pressure")), ("t_c", ("K", ["temperature"], "Temperature")), ("u_c", ("cm/s", ["v_radial"], "Radial Velocity")), ("v_c", ("cm/s", ["v_theta"], "Theta Velocity")), ("v_csound", ("", [], None)), ("wBVMD", ("1/s", [], "BruntViasala_freq")), ("w_c", ("cm/s", ["v_phi"], "Phi Velocity")), ("ye_c", ("", [], None)), ("ylep", ("", [], None)), ("a_nuc_rep_c", ("", [], None)), ("be_nuc_rep_c", ("", [], None)), ("e_book", ("", [], None)), ("nse_c", ("", [], None)), ("z_nuc_rep_c", ("", [], None)), ) # Each entry here is of the form # ( "name", ("units", ["fields", "to", "alias"], # "display_name")), known_particle_fields = ( # Identical form to above ) def __init__(self, ds, field_list): super().__init__(ds, field_list) # If you want, you can check self.field_list def setup_fluid_fields(self): # Here we do anything that might need info about the dataset. # You can use self.alias, self.add_output_field (for on-disk fields) # and self.add_field (for derived fields). def _test(field, data): return data["chimera", "rho_c"] self.add_field( ("chimera", "test"), sampling_type="cell", function=_test, units="g/cm**3" ) def setup_particle_fields(self, ptype): super().setup_particle_fields(ptype) # This will get called for every particle type. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/chimera/io.py0000644000175100001770000002321414714401662016616 0ustar00runnerdocker""" Chimera-specific IO functions """ import numpy as np import unyt as un from yt.frontends.chimera.data_structures import _find_files from yt.utilities.io_handler import BaseIOHandler from yt.utilities.logger import ytLogger as mylog from yt.utilities.on_demand_imports import _h5py as h5py class ChimeraIOHandler(BaseIOHandler): _particle_reader = False _dataset_type = "chimera" def __init__(self, ds): super().__init__(ds) self._handle = ds._handle self.filename = ds.filename def _read_particle_coords(self, chunks, ptf): # This needs to *yield* a series of tuples of (ptype, (x, y, z)). # chunks is a list of chunks, and ptf is a dict where the keys are # ptypes and the values are lists of fields. pass def _read_particle_fields(self, chunks, ptf, selector): # This gets called after the arrays have been allocated. It needs to # yield ((ptype, field), data) where data is the masked results of # reading ptype, field and applying the selector to the data read in. # Selector objects have a .select_points(x,y,z) that returns a mask, so # you need to do your masking here. 
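# A minimal sketch of that pattern (hypothetical placeholders; Chimera
# outputs carry no particles, so this handler is intentionally a no-op):
#
#     for ptype, field_list in sorted(ptf.items()):
#         x, y, z = <read particle positions for this chunk>
#         mask = selector.select_points(x, y, z, 0.0)
#         for field in field_list:
#             data = <read raw values for (ptype, field)>
#             yield (ptype, field), data[mask]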
        pass

    def _read_fluid_selection(self, chunks, selector, fields, size):
        rv = {}
        nodal_fields = []
        for field in fields:
            finfo = self.ds.field_info[field]
            nodal_flag = finfo.nodal_flag
            if np.any(nodal_flag):
                num_nodes = 2 ** sum(nodal_flag)
                rv[field] = np.empty((size, num_nodes), dtype="=f8")
                nodal_fields.append(field)
            else:
                rv[field] = np.empty(size, dtype="=f8")
        ind = 0
        for field, mesh, data in self.io_iter(chunks, fields):
            if data is None:
                continue
            ind += mesh.select(selector, data.flatten(), rv[field], ind)  # caches
        return rv

    def io_iter(self, chunks, fields):
        for n, chunk in enumerate(chunks):
            file = _find_files(self.filename)
            with h5py.File(file[n], "r") as f:
                # Generates mask according to the "ongrid_mask" variable
                m = int(file[n][-5:-3]) - 1
                k = f["fluid"]["entropy"].shape[0]
                mask_0 = f["mesh"]["ongrid_mask"][k * m : k * (m + 1), :]
                if f["mesh"]["array_dimensions"][2] > 1:
                    nrd = f["mesh"]["array_dimensions"][0] - 2
                else:
                    nrd = f["mesh"]["array_dimensions"][0]
                mask = np.repeat(mask_0[:, :, np.newaxis], nrd, axis=2).transpose()
                # Fields that must be derived or specially sliced rather than
                # read directly from a single on-disk dataset
                specials = (
                    "abar",
                    "e_rms_1",
                    "e_rms_2",
                    "e_rms_3",
                    "e_rms_4",
                    "lumin_1",
                    "lumin_2",
                    "lumin_3",
                    "lumin_4",
                    "num_lumin_1",
                    "num_lumin_2",
                    "num_lumin_3",
                    "num_lumin_4",
                    "shock",
                    "nse_c",
                )
                for field in fields:
                    # Reads data by locating subheading
                    ftype, fname = field
                    a_name_2 = [i.decode("utf-8") for i in f["abundance"]["a_name"]]
                    a_name_dict = {name.strip(): name for name in a_name_2}
                    if fname not in specials:
                        if fname in f["fluid"]:
                            ds = f["fluid"][f"{fname}"]
                        elif fname in f["abundance"]:
                            ds = f["abundance"][f"{fname}"]
                        elif fname in a_name_dict:
                            ind_xn = a_name_2.index(a_name_dict[fname])
                            ds = f["abundance"]["xn_c"][:, :, :, ind_xn]
                        else:
                            mylog.warning("Invalid field name %s", fname)
                            # Skip unknown fields rather than reusing a stale
                            # dataset from a previous iteration.
                            continue
                        dat_1 = ds[:, :, :].transpose()
                    elif fname == "nse_c":
                        if np.shape(f["abundance"]["nse_c"]) != np.shape(
                            f["fluid"]["rho_c"]
                        ):
                            ds = f["abundance"]["nse_c"][:, :, 1:]
                        else:
                            ds = f["abundance"]["nse_c"]
                        dat_1 = ds[:, :, :].transpose()
                    elif fname == "abar":
                        # Mean nuclear mass number: total mass fraction over
                        # total abundance (mole) fraction.
                        xn_c = np.array(f["abundance"]["xn_c"])
                        a_nuc_rep_c = np.array(f["abundance"]["a_nuc_rep_c"])
                        a_nuc = np.array(f["abundance"]["a_nuc"])
                        a_nuc_tile = np.tile(
                            a_nuc, (xn_c.shape[0], xn_c.shape[1], xn_c.shape[2], 1)
                        )
                        yn_c = np.empty(xn_c.shape)
                        yn_c[:, :, :, :-1] = xn_c[:, :, :, :-1] / a_nuc_tile[:, :, :, :]
                        yn_c[:, :, :, -1] = xn_c[:, :, :, -1] / a_nuc_rep_c[:, :, :]
                        ytot = np.sum(yn_c, axis=3)
                        atot = np.sum(xn_c, axis=3)
                        abar = np.divide(atot, ytot)
                        dat_1 = abar[:, :, :].transpose()
                    elif fname in ("e_rms_1", "e_rms_2", "e_rms_3", "e_rms_4"):
                        # RMS neutrino energy per species,
                        # sqrt(sum(psi0 E^5 dE) / sum(psi0 E^3 dE)).
                        dims = f["mesh"]["array_dimensions"]
                        n_groups = f["radiation"]["raddim"][0]
                        n_species = f["radiation"]["raddim"][1]
                        n_hyperslabs = f["mesh"]["nz_hyperslabs"][()]
                        energy_edge = f["radiation"]["unubi"][()]
                        energy_center = f["radiation"]["unui"][()]
                        d_energy = []
                        for i in range(0, n_groups):
                            d_energy.append(energy_edge[i + 1] - energy_edge[i])
                        d_energy = np.array(d_energy)
                        e3de = energy_center**3 * d_energy
                        e5de = energy_center**5 * d_energy
                        psi0_c = f["radiation"]["psi0_c"][:]
                        row = np.empty(
                            (n_species, int(dims[2] / n_hyperslabs), dims[1], dims[0])
                        )
                        # n_s avoids shadowing the chunk index n from the outer loop
                        for n_s in range(0, n_species):
                            numerator = np.sum(psi0_c[:, :, :, n_s] * e5de, axis=3)
                            denominator = np.sum(psi0_c[:, :, :, n_s] * e3de, axis=3)
                            row[n_s, :, :, :] = np.sqrt(
                                numerator / (denominator + 1e-100)
                            )
                        species = int(fname[-1]) - 1
                        dat_1 = row[species, :, :, :].transpose()
                    elif fname in (
                        "lumin_1",
                        "lumin_2",
                        "lumin_3",
                        "lumin_4",
                        "num_lumin_1",
                        "num_lumin_2",
                        "num_lumin_3",
                        "num_lumin_4",
                    ):
                        # Energy (lumin_*) or number (num_lumin_*) luminosity from
                        # the first moment psi1_e through a GR-corrected cell area,
                        # reported in units of 10^51 (hence the 1e-51 factor).
                        dims = f["mesh"]["array_dimensions"]
                        n_groups = f["radiation"]["raddim"][0]
                        n_hyperslabs = f["mesh"]["nz_hyperslabs"][()]
                        ergmev = float((1 * un.MeV) / (1 * un.erg))
                        cvel = float(un.c.to("cm/s"))
                        h = float(un.h.to("MeV * s"))
                        ecoef = 4.0 * np.pi * ergmev / (h * cvel) ** 3
                        radius = f["mesh"]["x_ef"][()]
                        agr_e = f["fluid"]["agr_e"][()]
                        cell_area_GRcorrected = 4 * np.pi * radius**2 / agr_e**4
                        psi1_e = f["radiation"]["psi1_e"]
                        energy_edge = f["radiation"]["unubi"][()]
                        energy_center = f["radiation"]["unui"][()]
                        d_energy = []
                        for i in range(0, n_groups):
                            d_energy.append(energy_edge[i + 1] - energy_edge[i])
                        d_energy = np.array(d_energy)
                        species = int(fname[-1]) - 1
                        if fname in ("lumin_1", "lumin_2", "lumin_3", "lumin_4"):
                            eNde = energy_center**3 * d_energy
                        else:
                            eNde = energy_center**2 * d_energy
                        lumin = (
                            np.sum(psi1_e[:, :, :, species] * eNde, axis=3)
                            * np.tile(
                                cell_area_GRcorrected[1 : dims[0] + 1],
                                (int(dims[2] / n_hyperslabs), dims[1], 1),
                            )
                            * (cvel * ecoef * 1e-51)
                        )
                        dat_1 = lumin[:, :, :].transpose()
                    if f["mesh"]["array_dimensions"][2] > 1:
                        data = dat_1[:-2, :, :]  # Clips off ghost zones for 3D
                    else:
                        data = dat_1[:, :, :]
                    data = np.ma.masked_where(mask == 0.0, data)  # Masks
                    data = np.ma.filled(data, fill_value=0.0)  # Replaces masked values with 0
                    yield field, chunk.objs[0], data

    def _read_chunk_data(self, chunk, fields):
        # This reads the data from a single chunk without doing any selection,
        # and is only used for caching data that might be used by multiple
        # different selectors later. For instance, this can speed up ghost zone
        # computation.
        pass
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/chimera/misc.py0000644000175100001770000000000014714401662017126 0ustar00runnerdocker
././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.307152 yt-4.4.0/yt/frontends/chimera/tests/0000755000175100001770000000000014714401715016774 5ustar00runnerdocker
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/chimera/tests/__init__.py0000644000175100001770000000000014714401662021074 0ustar00runnerdocker
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/chimera/tests/test_outputs.py0000644000175100001770000001414014714401662022131 0ustar00runnerdocker
"""
Chimera frontend tests
"""

import numpy as np
from numpy.testing import assert_almost_equal, assert_array_equal, assert_equal

from yt.testing import requires_file, requires_module
from yt.utilities.answer_testing.framework import (
    GenericArrayTest,
    data_dir_load,
    requires_ds,
)

Two_D = "F37_80/chimera_00001_grid_1_01.h5"


@requires_module("h5py")
@requires_ds(Two_D)
def test_2D():
    ds = data_dir_load(Two_D)
    _fields = [
        ("chimera", "a_nuc_rep_c"),
        ("chimera", "abar"),
        ("chimera", "ar36"),
        ("chimera", "be_nuc_rep_c"),
        ("chimera", "c12"),
        ("chimera", "ca40"),
        ("chimera", "cr48"),
        ("chimera", "dudt_nu"),
        ("chimera", "dudt_nuc"),
        ("chimera", "e_book"),
        ("chimera", "e_int"),
        ("chimera", "e_rms_1"),
        ("chimera", "e_rms_2"),
        ("chimera", "e_rms_3"),
        ("chimera", "e_rms_4"),
        ("chimera", "entropy"),
        ("chimera", "fe52"),
        ("chimera", "fe56"),
        ("chimera", "grav_x_c"),
        ("chimera", "grav_y_c"),
        ("chimera", "grav_z_c"),
        ("chimera", "he4"),
        ("chimera", "lumin_1"),
        ("chimera", "lumin_2"),
        ("chimera", "lumin_3"),
        ("chimera", "lumin_4"),
        ("chimera", "mg24"),
        ("chimera", "n"),
        ("chimera", "ne20"),
        ("chimera", "ni56"),
        ("chimera", "nse_c"),
        ("chimera", "num_lumin_1"),
        ("chimera",
"num_lumin_2"), ("chimera", "num_lumin_3"), ("chimera", "num_lumin_4"), ("chimera", "o16"), ("chimera", "p"), ("chimera", "press"), ("chimera", "rho_c"), ("chimera", "s32"), ("chimera", "si28"), ("chimera", "t_c"), ("chimera", "ti44"), ("chimera", "u_c"), ("chimera", "v_c"), ("chimera", "v_csound"), ("chimera", "wBVMD"), ("chimera", "w_c"), ("chimera", "ye_c"), ("chimera", "ylep"), ("chimera", "z_nuc_rep_c"), ("chimera", "zn60"), ] assert_equal(str(ds), "chimera_00001_grid_1_01.h5") assert_equal(str(ds.geometry), "spherical") # Geometry assert_almost_equal( ds.domain_right_edge, ds.arr([1.0116509e10 + 100, 3.14159265e00, 6.28318531e00], "code_length"), ) # domain edge assert_array_equal( ds.domain_left_edge, ds.arr([0.0, 0.0, 0.0], "code_length") ) # domain edge assert_array_equal(ds.domain_dimensions, np.array([722, 240, 1])) # Dimensions assert_array_equal(ds.field_list, _fields) def field_func(field): min = dd[field].min() max = dd[field].max() avg = np.mean(dd[field]) size = dd[field].size return [min, max, avg, size] dd = ds.all_data() for field in _fields: if field != ("chimera", "shock"): yield GenericArrayTest(ds, field_func, args=[field]) Three_D = "C15-3D-3deg/chimera_002715000_grid_1_01.h5" @requires_module("h5py") @requires_ds(Three_D) def test_3D(): ds = data_dir_load(Three_D) _fields = [ ("chimera", "a_nuc_rep_c"), ("chimera", "abar"), ("chimera", "ar36"), ("chimera", "be_nuc_rep_c"), ("chimera", "c12"), ("chimera", "ca40"), ("chimera", "cr48"), ("chimera", "dudt_nu"), ("chimera", "dudt_nuc"), ("chimera", "e_book"), ("chimera", "e_int"), ("chimera", "e_rms_1"), ("chimera", "e_rms_2"), ("chimera", "e_rms_3"), ("chimera", "e_rms_4"), ("chimera", "entropy"), ("chimera", "fe52"), ("chimera", "grav_x_c"), ("chimera", "grav_y_c"), ("chimera", "grav_z_c"), ("chimera", "he4"), ("chimera", "lumin_1"), ("chimera", "lumin_2"), ("chimera", "lumin_3"), ("chimera", "lumin_4"), ("chimera", "mg24"), ("chimera", "n"), ("chimera", "ne20"), ("chimera", "ni56"), ("chimera", "nse_c"), ("chimera", "num_lumin_1"), ("chimera", "num_lumin_2"), ("chimera", "num_lumin_3"), ("chimera", "num_lumin_4"), ("chimera", "o16"), ("chimera", "p"), ("chimera", "press"), ("chimera", "rho_c"), ("chimera", "s32"), ("chimera", "si28"), ("chimera", "t_c"), ("chimera", "ti44"), ("chimera", "u_c"), ("chimera", "v_c"), ("chimera", "v_csound"), ("chimera", "wBVMD"), ("chimera", "w_c"), ("chimera", "ye_c"), ("chimera", "ylep"), ("chimera", "z_nuc_rep_c"), ("chimera", "zn60"), ] assert_equal(str(ds), "chimera_002715000_grid_1_01.h5") assert_equal(str(ds.geometry), "spherical") # Geometry assert_almost_equal( ds.domain_right_edge, ds.arr( [1.06500257e09 - 1.03818333, 3.14159265e00, 6.2831853e00], "code_length" ), ) # Domain edge assert_array_equal(ds.domain_left_edge, [0.0, 0.0, 0.0]) # Domain edge assert_array_equal(ds.domain_dimensions, [542, 60, 135]) # Dimensions assert_array_equal(ds.field_list, _fields) def field_func(field): min = dd[field].min() max = dd[field].max() avg = np.mean(dd[field]) size = dd[field].size return [min, max, avg, size] dd = ds.all_data() for field in _fields: if field != ("chimera", "shock"): yield GenericArrayTest(ds, field_func, args=[field]) @requires_file(Three_D) def test_multimesh(): # Tests that the multimesh system for 3D data has been created correctly ds = data_dir_load(Three_D) assert_equal(len(ds.index.meshes), 45) for i in range(44): assert_almost_equal( ds.index.meshes[i + 1].connectivity_coords - ds.index.meshes[i].connectivity_coords, np.tile([0.0, 0.0, 0.13962634015954636], 
(132004, 1)), ) # Tests that each mesh is an identically shaped wedge, incremented in Phi. assert_array_equal( ds.index.meshes[i + 1].connectivity_indices, ds.index.meshes[i].connectivity_indices, ) # Checks that the connectivity array is identical for all meshes. ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.307152 yt-4.4.0/yt/frontends/cholla/0000755000175100001770000000000014714401715015464 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/cholla/__init__.py0000644000175100001770000000000014714401662017564 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/cholla/api.py0000644000175100001770000000021414714401662016605 0ustar00runnerdockerfrom .data_structures import ChollaDataset, ChollaGrid, ChollaHierarchy from .fields import ChollaFieldInfo from .io import ChollaIOHandler ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/cholla/data_structures.py0000644000175100001770000001433214714401662021256 0ustar00runnerdockerimport os import weakref import numpy as np from yt.data_objects.index_subobjects.grid_patch import AMRGridPatch from yt.data_objects.static_output import Dataset from yt.funcs import setdefaultattr from yt.geometry.api import Geometry from yt.geometry.grid_geometry_handler import GridIndex from yt.utilities.on_demand_imports import _h5py as h5py from .fields import ChollaFieldInfo class ChollaGrid(AMRGridPatch): _id_offset = 0 def __init__(self, id, index, level, dims): super().__init__(id, filename=index.index_filename, index=index) self.Parent = None self.Children = [] self.Level = level self.ActiveDimensions = dims class ChollaHierarchy(GridIndex): grid = ChollaGrid def __init__(self, ds, dataset_type="cholla"): self.dataset_type = dataset_type self.dataset = weakref.proxy(ds) # for now, the index file is the dataset! self.index_filename = self.dataset.parameter_filename self.directory = os.path.dirname(self.index_filename) # float type for the simulation edges; must be float64 for now self.float_type = np.float64 super().__init__(ds, dataset_type) def _detect_output_fields(self): with h5py.File(self.index_filename, mode="r") as h5f: self.field_list = [("cholla", k) for k in h5f.keys()] def _count_grids(self): self.num_grids = 1 def _parse_index(self): self.grid_left_edge[0][:] = self.ds.domain_left_edge[:] self.grid_right_edge[0][:] = self.ds.domain_right_edge[:] self.grid_dimensions[0][:] = self.ds.domain_dimensions[:] self.grid_particle_count[0][0] = 0 self.grid_levels[0][0] = 0 self.max_level = 0 def _populate_grid_objects(self): self.grids = np.empty(self.num_grids, dtype="object") for i in range(self.num_grids): g = self.grid(i, self, self.grid_levels.flat[i], self.grid_dimensions[i]) g._prepare_grid() g._setup_dx() self.grids[i] = g class ChollaDataset(Dataset): _load_requirements = ["h5py"] _index_class = ChollaHierarchy _field_info_class = ChollaFieldInfo def __init__( self, filename, dataset_type="cholla", storage_filename=None, units_override=None, unit_system="cgs", ): self.fluid_types += ("cholla",) super().__init__(filename, dataset_type, units_override=units_override) self.storage_filename = storage_filename def _set_code_unit_attributes(self): # This is where quantities are created that represent the various # on-disk units.
These are the defaults, but if they are listed # in the HDF5 attributes for a file, which is loaded first, then those are # used instead. # if not self.length_unit: self.length_unit = self.quan(1.0, "pc") if not self.mass_unit: self.mass_unit = self.quan(1.0, "Msun") if not self.time_unit: self.time_unit = self.quan(1000, "yr") if not self.velocity_unit: self.velocity_unit = self.quan(1.0, "cm/s") if not self.magnetic_unit: self.magnetic_unit = self.quan(1.0, "gauss") for key, unit in self.__class__.default_units.items(): setdefaultattr(self, key, self.quan(1, unit)) def _parse_parameter_file(self): with h5py.File(self.parameter_filename, mode="r") as h5f: attrs = h5f.attrs self.parameters = dict(attrs.items()) self.domain_left_edge = attrs["bounds"][:].astype("=f8") self.domain_right_edge = self.domain_left_edge + attrs["domain"][:].astype( "=f8" ) self.dimensionality = len(attrs["dims"][:]) self.domain_dimensions = attrs["dims"][:].astype("=f8") self.current_time = attrs["t"][:] self._periodicity = tuple(attrs.get("periodicity", (False, False, False))) self.gamma = attrs.get("gamma", 5.0 / 3.0) self.mu = attrs.get("mu", 1.0) self.refine_by = 1 # If header specifies code units, default to those (in CGS) length_unit = attrs.get("length_unit", None) mass_unit = attrs.get("mass_unit", None) time_unit = attrs.get("time_unit", None) velocity_unit = attrs.get("velocity_unit", None) magnetic_unit = attrs.get("magnetic_unit", None) if length_unit: self.length_unit = self.quan(length_unit[0], "cm") if mass_unit: self.mass_unit = self.quan(mass_unit[0], "g") if time_unit: self.time_unit = self.quan(time_unit[0], "s") if velocity_unit: self.velocity_unit = self.quan(velocity_unit[0], "cm/s") if magnetic_unit: self.magnetic_unit = self.quan(magnetic_unit[0], "gauss") # this minimalistic implementation fills the requirements for # this frontend to run, change it to make it run _correctly_ ! for key, unit in self.__class__.default_units.items(): setdefaultattr(self, key, self.quan(1, unit)) # CHOLLA cannot yet be run as a cosmological simulation self.cosmological_simulation = 0 self.current_redshift = 0.0 self.omega_lambda = 0.0 self.omega_matter = 0.0 self.hubble_constant = 0.0 # CHOLLA datasets are always unigrid cartesian self.geometry = Geometry.CARTESIAN @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: # This accepts a filename or a set of arguments and returns True or # False depending on if the file is of the type requested. if cls._missing_load_requirements(): return False try: fileh = h5py.File(filename, mode="r") except OSError: return False try: attrs = fileh.attrs except AttributeError: return False else: return ( "bounds" in attrs and "domain" in attrs and attrs.get("data_type") != "yt_light_ray" ) finally: fileh.close() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/cholla/definitions.py0000644000175100001770000000011414714401662020346 0ustar00runnerdocker# This file is often empty. It can hold definitions related to a frontend. 
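# A hedged illustration only: the dictionary below is hypothetical and not
# used anywhere in yt, but it shows the kind of frontend-wide constant such
# a definitions module typically collects.
_example_cholla_aliases = {
    "density": "density",
    "momentum_x": "momentum_x",
}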
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/cholla/fields.py0000644000175100001770000001126514714401662017312 0ustar00runnerdockerimport numpy as np from unyt import Zsun from yt._typing import KnownFieldsT from yt.fields.field_info_container import FieldInfoContainer from yt.utilities.physical_constants import kboltz, mh # Copied from Athena frontend pres_units = "code_pressure" erg_units = "code_mass * (code_length/code_time)**2" rho_units = "code_mass / code_length**3" mom_units = "code_mass / code_length**2 / code_time" def velocity_field(comp): def _velocity(field, data): return data["cholla", f"momentum_{comp}"] / data["cholla", "density"] return _velocity class ChollaFieldInfo(FieldInfoContainer): known_other_fields: KnownFieldsT = ( # Each entry here is of the form # ( "name", ("units", ["fields", "to", "alias"], # "display_name")), ("density", (rho_units, ["density"], None)), ("momentum_x", (mom_units, ["momentum_x"], None)), ("momentum_y", (mom_units, ["momentum_y"], None)), ("momentum_z", (mom_units, ["momentum_z"], None)), ("Energy", ("code_pressure", ["total_energy_density"], None)), ("scalar0", (rho_units, [], None)), ("metal_density", (rho_units, ["metal_density"], None)), ) known_particle_fields = () # In Cholla, conservative variables are written out. def setup_fluid_fields(self): unit_system = self.ds.unit_system # Add velocity fields for comp in "xyz": self.add_field( ("gas", f"velocity_{comp}"), sampling_type="cell", function=velocity_field(comp), units=unit_system["velocity"], ) # Add pressure field if ("cholla", "GasEnergy") in self.field_list: self.add_output_field( ("cholla", "GasEnergy"), sampling_type="cell", units=pres_units ) self.alias( ("gas", "thermal_energy"), ("cholla", "GasEnergy"), units=unit_system["pressure"], ) def _pressure(field, data): return (data.ds.gamma - 1.0) * data["cholla", "GasEnergy"] else: def _pressure(field, data): return (data.ds.gamma - 1.0) * ( data["cholla", "Energy"] - data["gas", "kinetic_energy_density"] ) self.add_field( ("gas", "pressure"), sampling_type="cell", function=_pressure, units=unit_system["pressure"], ) def _specific_total_energy(field, data): return data["cholla", "Energy"] / data["cholla", "density"] self.add_field( ("gas", "specific_total_energy"), sampling_type="cell", function=_specific_total_energy, units=unit_system["specific_energy"], ) # Add temperature field def _temperature(field, data): return ( data.ds.mu * data["gas", "pressure"] / data["gas", "density"] * mh / kboltz ) self.add_field( ("gas", "temperature"), sampling_type="cell", function=_temperature, units=unit_system["temperature"], ) # Add color field if present (scalar0 / density) if ("cholla", "scalar0") in self.field_list: self.add_output_field( ("cholla", "scalar0"), sampling_type="cell", units=rho_units, ) def _color(field, data): return data["cholla", "scalar0"] / data["cholla", "density"] self.add_field( ("cholla", "color"), sampling_type="cell", function=_color, units="", ) self.alias( ("gas", "color"), ("cholla", "color"), units="", ) # Using color field to define metallicity field, where a color of 1 # indicates solar metallicity def _metallicity(field, data): # Ensuring that there are no negative metallicities return np.clip(data["cholla", "color"], 0, np.inf) * Zsun self.add_field( ("cholla", "metallicity"), sampling_type="cell", function=_metallicity, units="Zsun", ) self.alias( ("gas", "metallicity"), ("cholla", "metallicity"), units="Zsun", ) def 
setup_particle_fields(self, ptype): super().setup_particle_fields(ptype) # This will get called for every particle type. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/cholla/io.py0000644000175100001770000000213714714401662016451 0ustar00runnerdockerimport numpy as np from yt.utilities.io_handler import BaseIOHandler from yt.utilities.on_demand_imports import _h5py as h5py class ChollaIOHandler(BaseIOHandler): _particle_reader = False _dataset_type = "cholla" def _read_particle_coords(self, chunks, ptf): raise NotImplementedError def _read_particle_fields(self, chunks, ptf, selector): raise NotImplementedError def _read_fluid_selection(self, chunks, selector, fields, size): data = {} for field in fields: data[field] = np.empty(size, dtype="float64") with h5py.File(self.ds.parameter_filename, "r") as fh: ind = 0 for chunk in chunks: for grid in chunk.objs: nd = 0 for field in fields: ftype, fname = field values = fh[fname][:].astype("=f8") nd = grid.select(selector, values, data[field], ind) ind += nd return data def _read_chunk_data(self, chunk, fields): raise NotImplementedError ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/cholla/misc.py0000644000175100001770000000000014714401662016760 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3111522 yt-4.4.0/yt/frontends/cholla/tests/0000755000175100001770000000000014714401715016626 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/cholla/tests/__init__.py0000644000175100001770000000000014714401662020726 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/cholla/tests/test_outputs.py0000644000175100001770000000403214714401662021762 0ustar00runnerdockerfrom numpy.testing import assert_equal import yt from yt.frontends.cholla.api import ChollaDataset from yt.testing import requires_file, requires_module from yt.utilities.answer_testing.framework import ( data_dir_load, requires_ds, small_patch_amr, ) _fields = ( ("gas", "temperature"), ("gas", "density"), ) ChollaSimple = "ChollaSimple/0.h5" @requires_module("h5py") @requires_file(ChollaSimple) def test_ChollaDataset(): assert isinstance(data_dir_load(ChollaSimple), ChollaDataset) @requires_module("h5py") @requires_file(ChollaSimple) def test_ChollaSimple_fields(): expected_fields = [ "Energy", "GasEnergy", "density", "momentum_x", "momentum_y", "momentum_z", "scalar0", ] ds = yt.load(ChollaSimple) assert_equal(str(ds), "0.h5") ad = ds.all_data() # Check all the expected fields exist and can be accessed for field in expected_fields: assert ("cholla", field) in ds.field_list # test that field access works ad["cholla", field] @requires_module("h5py") @requires_file(ChollaSimple) def test_ChollaSimple_derived_fields(): expected_derived_fields = [ "density", "momentum_x", "momentum_y", "momentum_z", "metallicity", ] ds = yt.load(ChollaSimple) ad = ds.all_data() # Check all the expected fields exist and can be accessed for field in expected_derived_fields: assert ("gas", field) in ds.derived_field_list # test that field access works ad["gas", field] _fields_chollasimple = ( ("cholla", "GasEnergy"), ("gas", "temperature"), ("gas", "density"), ("gas", "metallicity"), ) @requires_module("h5py") @requires_ds(ChollaSimple) def 
test_cholla_data(): ds = data_dir_load(ChollaSimple) assert_equal(str(ds), "0.h5") for test in small_patch_amr( ds, _fields_chollasimple, input_center="c", input_weight="ones" ): test_cholla_data.__name__ = test.description yield test ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3111522 yt-4.4.0/yt/frontends/chombo/0000755000175100001770000000000014714401715015471 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/chombo/__init__.py0000644000175100001770000000000014714401662017571 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/chombo/api.py0000644000175100001770000000066114714401662016620 0ustar00runnerdockerfrom . import tests from .data_structures import ( ChomboDataset, ChomboGrid, ChomboHierarchy, ChomboPICDataset, ChomboPICHierarchy, Orion2Dataset, Orion2Hierarchy, PlutoDataset, PlutoHierarchy, ) from .fields import ( ChomboFieldInfo, ChomboPICFieldInfo1D, ChomboPICFieldInfo2D, ChomboPICFieldInfo3D, Orion2FieldInfo, PlutoFieldInfo, ) from .io import IOHandlerChomboHDF5 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/chombo/data_structures.py0000644000175100001770000007352214714401662021271 0ustar00runnerdockerimport os import re import weakref import numpy as np from yt.data_objects.index_subobjects.grid_patch import AMRGridPatch from yt.data_objects.static_output import Dataset from yt.fields.field_info_container import FieldInfoContainer from yt.funcs import mylog, setdefaultattr from yt.geometry.api import Geometry from yt.geometry.grid_geometry_handler import GridIndex from yt.utilities.file_handler import HDF5FileHandler, valid_hdf5_signature from yt.utilities.lib.misc_utilities import get_box_grids_level from yt.utilities.on_demand_imports import _h5py as h5py from yt.utilities.parallel_tools.parallel_analysis_interface import parallel_root_only from .fields import ( ChomboFieldInfo, ChomboPICFieldInfo1D, ChomboPICFieldInfo2D, ChomboPICFieldInfo3D, Orion2FieldInfo, PlutoFieldInfo, ) def is_chombo_hdf5(fn): try: with h5py.File(fn, mode="r") as fileh: valid = "Chombo_global" in fileh["/"] except (KeyError, OSError, ImportError): return False return valid class ChomboGrid(AMRGridPatch): _id_offset = 0 __slots__ = ["_level_id", "stop_index"] def __init__(self, id, index, level, start, stop): AMRGridPatch.__init__(self, id, filename=index.index_filename, index=index) self._parent_id = [] self._children_ids = [] self.Level = level self.ActiveDimensions = stop - start + 1 def get_global_startindex(self): """ Return the integer starting index for each dimension at the current level. 
""" if self.start_index is not None: return self.start_index if self.Parent is None: iLE = self.LeftEdge - self.ds.domain_left_edge start_index = iLE / self.dds return np.rint(start_index).astype("int64").ravel() pdx = self.Parent[0].dds start_index = (self.Parent[0].get_global_startindex()) + np.rint( (self.LeftEdge - self.Parent[0].LeftEdge) / pdx ) self.start_index = (start_index * self.ds.refine_by).astype("int64").ravel() return self.start_index def _setup_dx(self): # has already been read in and stored in index self.dds = self.ds.arr(self.index.dds_list[self.Level], "code_length") @property def Parent(self): if len(self._parent_id) == 0: return None return [self.index.grids[pid - self._id_offset] for pid in self._parent_id] @property def Children(self): return [self.index.grids[cid - self._id_offset] for cid in self._children_ids] class ChomboHierarchy(GridIndex): grid = ChomboGrid _data_file = None def __init__(self, ds, dataset_type="chombo_hdf5"): self.domain_left_edge = ds.domain_left_edge self.domain_right_edge = ds.domain_right_edge self.dataset_type = dataset_type self.field_indexes = {} self.dataset = weakref.proxy(ds) # for now, the index file is the dataset! self.index_filename = os.path.abspath(self.dataset.parameter_filename) self.directory = ds.directory self._handle = ds._handle self._levels = [key for key in self._handle.keys() if key.startswith("level")] GridIndex.__init__(self, ds, dataset_type) self._read_particles() def _read_particles(self): # only do anything if the dataset contains particles if not any(f[1].startswith("particle_") for f in self.field_list): return self.num_particles = 0 particles_per_grid = [] for key, val in self._handle.items(): if key.startswith("level"): level_particles = val["particles:offsets"][:] self.num_particles += level_particles.sum() particles_per_grid = np.concatenate( (particles_per_grid, level_particles) ) for i, _grid in enumerate(self.grids): self.grids[i].NumberOfParticles = particles_per_grid[i] self.grid_particle_count[i] = particles_per_grid[i] assert self.num_particles == self.grid_particle_count.sum() # Chombo datasets, by themselves, have no "known" fields. However, # we will look for "fluid" fields by finding the string "component" in # the output file, and "particle" fields by finding the string "particle". 
def _detect_output_fields(self): # look for fluid fields output_fields = [] for key, val in self._handle.attrs.items(): if key.startswith("component"): output_fields.append(val.decode("ascii")) self.field_list = [("chombo", c) for c in output_fields] # look for particle fields particle_fields = [] for key, val in self._handle.attrs.items(): if key.startswith("particle"): particle_fields.append(val.decode("ascii")) self.field_list.extend([("io", c) for c in particle_fields]) def _count_grids(self): self.num_grids = 0 for lev in self._levels: d = self._handle[lev] if "Processors" in d: self.num_grids += d["Processors"].len() elif "boxes" in d: self.num_grids += d["boxes"].len() else: raise RuntimeError("Unknown file specification") def _parse_index(self): f = self._handle # shortcut self.max_level = f.attrs["num_levels"] - 1 grids = [] self.dds_list = [] i = 0 D = self.dataset.dimensionality for lev_index, lev in enumerate(self._levels): level_number = int(re.match(r"level_(\d+)", lev).groups()[0]) try: boxes = f[lev]["boxes"][()] except KeyError: boxes = f[lev]["particles:boxes"][()] dx = f[lev].attrs["dx"] self.dds_list.append(dx * np.ones(3)) if D == 1: self.dds_list[lev_index][1] = 1.0 self.dds_list[lev_index][2] = 1.0 if D == 2: self.dds_list[lev_index][2] = 1.0 for level_id, box in enumerate(boxes): si = np.array([box[f"lo_{ax}"] for ax in "ijk"[:D]]) ei = np.array([box[f"hi_{ax}"] for ax in "ijk"[:D]]) if D == 1: si = np.concatenate((si, [0.0, 0.0])) ei = np.concatenate((ei, [0.0, 0.0])) if D == 2: si = np.concatenate((si, [0.0])) ei = np.concatenate((ei, [0.0])) pg = self.grid(len(grids), self, level=level_number, start=si, stop=ei) grids.append(pg) grids[-1]._level_id = level_id self.grid_levels[i] = level_number self.grid_left_edge[i] = self.dds_list[lev_index] * si.astype( self.float_type ) self.grid_right_edge[i] = self.dds_list[lev_index] * ( ei.astype(self.float_type) + 1 ) self.grid_particle_count[i] = 0 self.grid_dimensions[i] = ei - si + 1 i += 1 self.grids = np.empty(len(grids), dtype="object") for gi, g in enumerate(grids): self.grids[gi] = g def _populate_grid_objects(self): self._reconstruct_parent_child() for g in self.grids: g._prepare_grid() g._setup_dx() def _setup_derived_fields(self): self.derived_field_list = [] def _reconstruct_parent_child(self): mask = np.empty(len(self.grids), dtype="int32") mylog.debug("First pass; identifying child grids") for i, grid in enumerate(self.grids): get_box_grids_level( self.grid_left_edge[i, :], self.grid_right_edge[i, :], self.grid_levels[i].item() + 1, self.grid_left_edge, self.grid_right_edge, self.grid_levels, mask, ) ids = np.where(mask.astype("bool")) # where is a tuple grid._children_ids = ids[0] + grid._id_offset mylog.debug("Second pass; identifying parents") for i, grid in enumerate(self.grids): # Second pass for child in grid.Children: child._parent_id.append(i + grid._id_offset) class ChomboDataset(Dataset): _load_requirements = ["h5py"] _index_class = ChomboHierarchy _field_info_class: type[FieldInfoContainer] = ChomboFieldInfo def __init__( self, filename, dataset_type="chombo_hdf5", storage_filename=None, ini_filename=None, units_override=None, unit_system="cgs", default_species_fields=None, ): self.fluid_types += ("chombo",) self._handle = HDF5FileHandler(filename) self.dataset_type = dataset_type self.geometry = Geometry.CARTESIAN self.ini_filename = ini_filename self.fullplotdir = os.path.abspath(filename) Dataset.__init__( self, filename, self.dataset_type, units_override=units_override, 
unit_system=unit_system, default_species_fields=default_species_fields, ) self.storage_filename = storage_filename self.cosmological_simulation = False # These are parameters that I very much wish to get rid of. self.parameters["HydroMethod"] = "chombo" self.parameters["DualEnergyFormalism"] = 0 self.parameters["EOSType"] = -1 # default def _set_code_unit_attributes(self): if not hasattr(self, "length_unit"): mylog.warning("Setting code length unit to be 1.0 cm") if not hasattr(self, "mass_unit"): mylog.warning("Setting code mass unit to be 1.0 g") if not hasattr(self, "time_unit"): mylog.warning("Setting code time unit to be 1.0 s") setdefaultattr(self, "length_unit", self.quan(1.0, "cm")) setdefaultattr(self, "mass_unit", self.quan(1.0, "g")) setdefaultattr(self, "time_unit", self.quan(1.0, "s")) setdefaultattr(self, "magnetic_unit", self.quan(np.sqrt(4.0 * np.pi), "gauss")) setdefaultattr(self, "velocity_unit", self.length_unit / self.time_unit) def _localize(self, f, default): if f is None: return os.path.join(self.directory, default) return f def _parse_parameter_file(self): self.dimensionality = self._handle["Chombo_global/"].attrs["SpaceDim"] self.domain_left_edge = self._calc_left_edge() self.domain_right_edge = self._calc_right_edge() self.domain_dimensions = self._calc_domain_dimensions() # if a lower-dimensional dataset, set up pseudo-3D stuff here. if self.dimensionality == 1: self.domain_left_edge = np.concatenate((self.domain_left_edge, [0.0, 0.0])) self.domain_right_edge = np.concatenate( (self.domain_right_edge, [1.0, 1.0]) ) self.domain_dimensions = np.concatenate((self.domain_dimensions, [1, 1])) if self.dimensionality == 2: self.domain_left_edge = np.concatenate((self.domain_left_edge, [0.0])) self.domain_right_edge = np.concatenate((self.domain_right_edge, [1.0])) self.domain_dimensions = np.concatenate((self.domain_dimensions, [1])) self.refine_by = self._handle["/level_0"].attrs["ref_ratio"] self._determine_periodic() self._determine_current_time() def _determine_current_time(self): # some datasets will not be time-dependent, and to make # matters worse, the simulation time is not always # stored in the same place in the hdf file! Make # sure we handle that here. 
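        # Lookup order, as implemented below: a root-level "time" attribute,
        # then a "time" attribute on "/level_0", and finally a default of 0.0
        # for time-independent datasets.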
try: self.current_time = self._handle.attrs["time"] except KeyError: try: self.current_time = self._handle["/level_0"].attrs["time"] except KeyError: self.current_time = 0.0 def _determine_periodic(self): # we default to true unless the HDF5 file says otherwise is_periodic = np.array([True, True, True]) for dir in range(self.dimensionality): try: is_periodic[dir] = self._handle["/level_0"].attrs[ "is_periodic_%d" % dir ] except KeyError: is_periodic[dir] = True self._periodicity = tuple(is_periodic) def _calc_left_edge(self): fileh = self._handle dx0 = fileh["/level_0"].attrs["dx"] D = self.dimensionality LE = dx0 * ((np.array(list(fileh["/level_0"].attrs["prob_domain"])))[0:D]) return LE def _calc_right_edge(self): fileh = self._handle dx0 = fileh["/level_0"].attrs["dx"] D = self.dimensionality RE = dx0 * ((np.array(list(fileh["/level_0"].attrs["prob_domain"])))[D:] + 1) return RE def _calc_domain_dimensions(self): fileh = self._handle D = self.dimensionality L_index = (np.array(list(fileh["/level_0"].attrs["prob_domain"])))[0:D] R_index = (np.array(list(fileh["/level_0"].attrs["prob_domain"])))[D:] + 1 return R_index - L_index @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if cls._missing_load_requirements(): return False if not is_chombo_hdf5(filename): return False pluto_ini_file_exists = False orion2_ini_file_exists = False if isinstance(filename, str): dir_name = os.path.dirname(os.path.abspath(filename)) pluto_ini_filename = os.path.join(dir_name, "pluto.ini") orion2_ini_filename = os.path.join(dir_name, "orion2.ini") pluto_ini_file_exists = os.path.isfile(pluto_ini_filename) orion2_ini_file_exists = os.path.isfile(orion2_ini_filename) if not (pluto_ini_file_exists or orion2_ini_file_exists): try: fileh = h5py.File(filename, mode="r") valid = "Chombo_global" in fileh["/"] # ORION2 simulations should always have this: valid = valid and "CeilVA_mass" not in fileh.attrs.keys() valid = valid and "Charm_global" not in fileh.keys() fileh.close() return valid except Exception: pass return False @parallel_root_only def print_key_parameters(self): for a in [ "current_time", "domain_dimensions", "domain_left_edge", "domain_right_edge", ]: if not hasattr(self, a): mylog.error("Missing %s in parameter file definition!", a) continue v = getattr(self, a) mylog.info("Parameters: %-25s = %s", a, v) class PlutoHierarchy(ChomboHierarchy): def __init__(self, ds, dataset_type="chombo_hdf5"): ChomboHierarchy.__init__(self, ds, dataset_type) def _parse_index(self): f = self._handle # shortcut self.max_level = f.attrs["num_levels"] - 1 grids = [] self.dds_list = [] i = 0 D = self.dataset.dimensionality for lev_index, lev in enumerate(self._levels): level_number = int(re.match(r"level_(\d+)", lev).groups()[0]) try: boxes = f[lev]["boxes"][()] except KeyError: boxes = f[lev]["particles:boxes"][()] dx = f[lev].attrs["dx"] self.dds_list.append(dx * np.ones(3)) if D == 1: self.dds_list[lev_index][1] = 1.0 self.dds_list[lev_index][2] = 1.0 if D == 2: self.dds_list[lev_index][2] = 1.0 for level_id, box in enumerate(boxes): si = np.array([box[f"lo_{ax}"] for ax in "ijk"[:D]]) ei = np.array([box[f"hi_{ax}"] for ax in "ijk"[:D]]) if D == 1: si = np.concatenate((si, [0.0, 0.0])) ei = np.concatenate((ei, [0.0, 0.0])) if D == 2: si = np.concatenate((si, [0.0])) ei = np.concatenate((ei, [0.0])) pg = self.grid(len(grids), self, level=level_number, start=si, stop=ei) grids.append(pg) grids[-1]._level_id = level_id self.grid_levels[i] = level_number self.grid_left_edge[i] = ( 
self.dds_list[lev_index] * si.astype(self.float_type) + self.domain_left_edge.value ) self.grid_right_edge[i] = ( self.dds_list[lev_index] * (ei.astype(self.float_type) + 1) + self.domain_left_edge.value ) self.grid_particle_count[i] = 0 self.grid_dimensions[i] = ei - si + 1 i += 1 self.grids = np.empty(len(grids), dtype="object") for gi, g in enumerate(grids): self.grids[gi] = g class PlutoDataset(ChomboDataset): _index_class = PlutoHierarchy _field_info_class = PlutoFieldInfo def __init__( self, filename, dataset_type="chombo_hdf5", storage_filename=None, ini_filename=None, units_override=None, unit_system="cgs", default_species_fields=None, ): ChomboDataset.__init__( self, filename, dataset_type, storage_filename, ini_filename, units_override=units_override, unit_system=unit_system, default_species_fields=default_species_fields, ) def _parse_parameter_file(self): """ Check to see whether a 'pluto.ini' file exists in the plot file directory. If one does, attempt to parse it. Otherwise grab the dimensions from the hdf5 file. """ pluto_ini_file_exists = False dir_name = os.path.dirname(os.path.abspath(self.fullplotdir)) pluto_ini_filename = os.path.join(dir_name, "pluto.ini") pluto_ini_file_exists = os.path.isfile(pluto_ini_filename) self.dimensionality = self._handle["Chombo_global/"].attrs["SpaceDim"] self.domain_dimensions = self._calc_domain_dimensions() self.refine_by = self._handle["/level_0"].attrs["ref_ratio"] if pluto_ini_file_exists: lines = [line.strip() for line in open(pluto_ini_filename)] domain_left_edge = np.zeros(self.dimensionality) domain_right_edge = np.zeros(self.dimensionality) for il, ll in enumerate( lines[ lines.index("[Grid]") + 2 : lines.index("[Grid]") + 2 + self.dimensionality ] ): domain_left_edge[il] = float(ll.split()[2]) domain_right_edge[il] = float(ll.split()[-1]) self._periodicity = [0] * 3 for il, ll in enumerate( lines[ lines.index("[Boundary]") + 2 : lines.index("[Boundary]") + 2 + 6 : 2 ] ): self.periodicity[il] = ll.split()[1] == "periodic" self._periodicity = tuple(self.periodicity) for ll in lines[lines.index("[Parameters]") + 2 :]: if ll.split()[0] == "GAMMA": self.gamma = float(ll.split()[1]) self.domain_left_edge = domain_left_edge self.domain_right_edge = domain_right_edge else: self.domain_left_edge = self._calc_left_edge() self.domain_right_edge = self._calc_right_edge() self._periodicity = (True, True, True) # if a lower-dimensional dataset, set up pseudo-3D stuff here. 
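        # e.g. a 2D dataset gains a third axis spanning [0.0, 1.0] with a
        # single cell, so downstream code can always assume three dimensions.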
if self.dimensionality == 1: self.domain_left_edge = np.concatenate((self.domain_left_edge, [0.0, 0.0])) self.domain_right_edge = np.concatenate( (self.domain_right_edge, [1.0, 1.0]) ) self.domain_dimensions = np.concatenate((self.domain_dimensions, [1, 1])) if self.dimensionality == 2: self.domain_left_edge = np.concatenate((self.domain_left_edge, [0.0])) self.domain_right_edge = np.concatenate((self.domain_right_edge, [1.0])) self.domain_dimensions = np.concatenate((self.domain_dimensions, [1])) self._determine_current_time() @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if cls._missing_load_requirements(): return False if not is_chombo_hdf5(filename): return False pluto_ini_file_exists = False if isinstance(filename, str): dir_name = os.path.dirname(os.path.abspath(filename)) pluto_ini_filename = os.path.join(dir_name, "pluto.ini") pluto_ini_file_exists = os.path.isfile(pluto_ini_filename) if pluto_ini_file_exists: return True return False class Orion2Hierarchy(ChomboHierarchy): def __init__(self, ds, dataset_type="orion_chombo_native"): ChomboHierarchy.__init__(self, ds, dataset_type) def _detect_output_fields(self): # look for fluid fields output_fields = [] for key, val in self._handle.attrs.items(): if key.startswith("component"): output_fields.append(val.decode("ascii")) self.field_list = [("chombo", c) for c in output_fields] # look for particle fields self.particle_filename = self.index_filename[:-4] + "sink" if not os.path.exists(self.particle_filename): return pfield_list = [("io", str(c)) for c in self.io.particle_field_index.keys()] self.field_list.extend(pfield_list) def _read_particles(self): if not os.path.exists(self.particle_filename): return with open(self.particle_filename) as f: lines = f.readlines() self.num_stars = int(lines[0].strip().split(" ")[0]) for num, line in enumerate(lines[1:]): particle_position_x = float(line.split(" ")[1]) particle_position_y = float(line.split(" ")[2]) particle_position_z = float(line.split(" ")[3]) coord = [particle_position_x, particle_position_y, particle_position_z] # for each particle, determine which grids contain it # copied from object_finding_mixin.py mask = np.ones(self.num_grids) for i in range(len(coord)): np.choose( np.greater(self.grid_left_edge.d[:, i], coord[i]), (mask, 0), mask, ) np.choose( np.greater(self.grid_right_edge.d[:, i], coord[i]), (0, mask), mask, ) ind = np.where(mask == 1) selected_grids = self.grids[ind] # in orion, particles always live on the finest level. # so, we want to assign the particle to the finest of # the grids we just found if len(selected_grids) != 0: grid = sorted(selected_grids, key=lambda grid: grid.Level)[-1] ind = np.where(self.grids == grid)[0][0] self.grid_particle_count[ind] += 1 self.grids[ind].NumberOfParticles += 1 # store the position in the *.sink file for fast access. 
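                # The grid may not have a _particle_line_numbers list yet, so
                # fall back to creating it on first use (the AttributeError
                # branch below).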
try: self.grids[ind]._particle_line_numbers.append(num + 1) except AttributeError: self.grids[ind]._particle_line_numbers = [num + 1] class Orion2Dataset(ChomboDataset): _load_requirements = ["h5py"] _index_class = Orion2Hierarchy _field_info_class = Orion2FieldInfo def __init__( self, filename, dataset_type="orion_chombo_native", storage_filename=None, ini_filename=None, units_override=None, default_species_fields=None, ): ChomboDataset.__init__( self, filename, dataset_type, storage_filename, ini_filename, units_override=units_override, default_species_fields=default_species_fields, ) def _parse_parameter_file(self): """ Check to see whether an 'orion2.ini' file exists in the plot file directory. If one does, attempt to parse it. Otherwise grab the dimensions from the hdf5 file. """ orion2_ini_file_exists = False dir_name = os.path.dirname(os.path.abspath(self.fullplotdir)) orion2_ini_filename = os.path.join(dir_name, "orion2.ini") orion2_ini_file_exists = os.path.isfile(orion2_ini_filename) if orion2_ini_file_exists: self._parse_inputs_file("orion2.ini") self.dimensionality = 3 self.domain_left_edge = self._calc_left_edge() self.domain_right_edge = self._calc_right_edge() self.domain_dimensions = self._calc_domain_dimensions() self.refine_by = self._handle["/level_0"].attrs["ref_ratio"] self._determine_periodic() self._determine_current_time() def _parse_inputs_file(self, ini_filename): self.fullplotdir = os.path.abspath(self.parameter_filename) self.ini_filename = self._localize(self.ini_filename, ini_filename) lines = open(self.ini_filename).readlines() # read the file line by line, storing important parameters for line in lines: try: param, sep, vals = line.partition("=") if not sep: # No = sign present, so split by space instead param, sep, vals = line.partition(" ") param = param.strip() vals = vals.strip() if not param: # skip blank lines continue if param[0] == "#": # skip comment lines continue if param[0] == "[": # skip stanza headers continue vals = vals.partition("#")[0] # strip trailing comments try: self.parameters[param] = np.int64(vals) except ValueError: try: self.parameters[param] = np.float64(vals) except ValueError: self.parameters[param] = vals except ValueError: mylog.error("ValueError: '%s'", line) if param == "GAMMA": self.gamma = np.float64(vals) @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if cls._missing_load_requirements(): return False if not is_chombo_hdf5(filename): return False pluto_ini_file_exists = False orion2_ini_file_exists = False if isinstance(filename, str): dir_name = os.path.dirname(os.path.abspath(filename)) pluto_ini_filename = os.path.join(dir_name, "pluto.ini") orion2_ini_filename = os.path.join(dir_name, "orion2.ini") pluto_ini_file_exists = os.path.isfile(pluto_ini_filename) orion2_ini_file_exists = os.path.isfile(orion2_ini_filename) if orion2_ini_file_exists: return True if not pluto_ini_file_exists: try: fileh = h5py.File(filename, mode="r") valid = "CeilVA_mass" in fileh.attrs.keys() valid = ( "Chombo_global" in fileh["/"] and "Charm_global" not in fileh["/"] ) valid = valid and "CeilVA_mass" in fileh.attrs.keys() fileh.close() return valid except Exception: pass return False class ChomboPICHierarchy(ChomboHierarchy): def __init__(self, ds, dataset_type="chombo_hdf5"): ChomboHierarchy.__init__(self, ds, dataset_type) class ChomboPICDataset(ChomboDataset): _load_requirements = ["h5py"] _index_class = ChomboPICHierarchy _field_info_class = ChomboPICFieldInfo3D def __init__( self, filename, 
dataset_type="chombo_hdf5", storage_filename=None, ini_filename=None, units_override=None, ): ChomboDataset.__init__( self, filename, dataset_type, storage_filename, ini_filename, units_override=units_override, ) if self.dimensionality == 1: self._field_info_class = ChomboPICFieldInfo1D if self.dimensionality == 2: self._field_info_class = ChomboPICFieldInfo2D @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if not valid_hdf5_signature(filename): return False if cls._missing_load_requirements(): return False if not is_chombo_hdf5(filename): return False pluto_ini_file_exists = False orion2_ini_file_exists = False if isinstance(filename, str): dir_name = os.path.dirname(os.path.abspath(filename)) pluto_ini_filename = os.path.join(dir_name, "pluto.ini") orion2_ini_filename = os.path.join(dir_name, "orion2.ini") pluto_ini_file_exists = os.path.isfile(pluto_ini_filename) orion2_ini_file_exists = os.path.isfile(orion2_ini_filename) if orion2_ini_file_exists: return False if pluto_ini_file_exists: return False try: fileh = h5py.File(filename, mode="r") valid = "Charm_global" in fileh["/"] fileh.close() return valid except Exception: pass return False ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/chombo/definitions.py0000644000175100001770000000000014714401662020345 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/chombo/fields.py0000644000175100001770000003614414714401662017322 0ustar00runnerdockerimport numpy as np from yt._typing import KnownFieldsT from yt.fields.field_info_container import ( FieldInfoContainer, particle_deposition_functions, particle_vector_functions, standard_particle_fields, ) from yt.units.unit_object import Unit # type: ignore from yt.utilities.exceptions import YTFieldNotFound rho_units = "code_mass / code_length**3" mom_units = "code_mass / (code_time * code_length**2)" eden_units = "code_mass / (code_time**2 * code_length)" # erg / cm^3 vel_units = "code_length / code_time" b_units = "code_magnetic" class ChomboFieldInfo(FieldInfoContainer): # no custom behaviour is needed yet pass # Orion 2 Fields # We duplicate everything here from Boxlib, because we want to be able to # subclass it and that can be somewhat tricky. 
class Orion2FieldInfo(ChomboFieldInfo): known_other_fields: KnownFieldsT = ( ("density", (rho_units, ["density"], None)), ("energy-density", (eden_units, ["total_energy_density"], None)), ("radiation-energy-density", (eden_units, ["radiation_energy_density"], None)), ("X-momentum", (mom_units, ["momentum_density_x"], None)), ("Y-momentum", (mom_units, ["momentum_density_y"], None)), ("Z-momentum", (mom_units, ["momentum_density_z"], None)), ("temperature", ("K", ["temperature"], None)), ("X-magnfield", (b_units, [], None)), ("Y-magnfield", (b_units, [], None)), ("Z-magnfield", (b_units, [], None)), ("directrad-dedt-density", (eden_units, ["directrad-dedt-density"], None)), ("directrad-dpxdt-density", (mom_units, ["directrad-dpxdt-density"], None)), ("directrad-dpydt-density", (mom_units, ["directrad-dpydt-density"], None)), ("directrad-dpzdt-density", (mom_units, ["directrad-dpzdt-density"], None)), ) known_particle_fields: KnownFieldsT = ( ("particle_mass", ("code_mass", [], None)), ("particle_position_x", ("code_length", [], None)), ("particle_position_y", ("code_length", [], None)), ("particle_position_z", ("code_length", [], None)), ("particle_momentum_x", ("code_mass*code_length/code_time", [], None)), ("particle_momentum_y", ("code_mass*code_length/code_time", [], None)), ("particle_momentum_z", ("code_mass*code_length/code_time", [], None)), # Note that these are *internal* agmomen ("particle_angmomen_x", ("code_length**2/code_time", [], None)), ("particle_angmomen_y", ("code_length**2/code_time", [], None)), ("particle_angmomen_z", ("code_length**2/code_time", [], None)), ("particle_mlast", ("code_mass", [], None)), ("particle_r", ("code_length", [], None)), ("particle_mdeut", ("code_mass", [], None)), ("particle_n", ("", [], None)), ("particle_mdot", ("code_mass/code_time", [], None)), ("particle_burnstate", ("", [], None)), ("particle_luminosity", ("", [], None)), ("particle_id", ("", ["particle_index"], None)), ) def setup_particle_fields(self, ptype): def _get_vel(axis): def velocity(field, data): return ( data[ptype, f"particle_momentum_{axis}"] / data[ptype, "particle_mass"] ) return velocity for ax in "xyz": self.add_field( (ptype, f"particle_velocity_{ax}"), sampling_type="particle", function=_get_vel(ax), units="code_length/code_time", ) super().setup_particle_fields(ptype) def setup_fluid_fields(self): from yt.fields.magnetic_field import setup_magnetic_field_aliases unit_system = self.ds.unit_system def _thermal_energy_density(field, data): try: return ( data["chombo", "energy-density"] - data["gas", "kinetic_energy_density"] - data["gas", "magnetic_energy_density"] ) except YTFieldNotFound: return ( data["chombo", "energy-density"] - data["gas", "kinetic_energy_density"] ) def _specific_thermal_energy(field, data): return data["gas", "thermal_energy_density"] / data["gas", "density"] def _magnetic_energy_density(field, data): ret = data["chombo", "X-magnfield"] ** 2 if data.ds.dimensionality > 1: ret = ret + data["chombo", "Y-magnfield"] ** 2 if data.ds.dimensionality > 2: ret = ret + data["chombo", "Z-magnfield"] ** 2 return ret / 8.0 / np.pi def _specific_magnetic_energy(field, data): return data["gas", "specific_magnetic_energy"] / data["gas", "density"] def _kinetic_energy_density(field, data): p2 = data["chombo", "X-momentum"] ** 2 if data.ds.dimensionality > 1: p2 = p2 + data["chombo", "Y-momentum"] ** 2 if data.ds.dimensionality > 2: p2 = p2 + data["chombo", "Z-momentum"] ** 2 return 0.5 * p2 / data["gas", "density"] def _specific_kinetic_energy(field, data): 
return data["gas", "kinetic_energy_density"] / data["gas", "density"] def _temperature(field, data): c_v = data.ds.quan(data.ds.parameters["radiation.const_cv"], "erg/g/K") return data["gas", "specific_thermal_energy"] / c_v def _get_vel(axis): def velocity(field, data): return data["gas", f"momentum_density_{axis}"] / data["gas", "density"] return velocity for ax in "xyz": self.add_field( ("gas", f"velocity_{ax}"), sampling_type="cell", function=_get_vel(ax), units=unit_system["velocity"], ) self.add_field( ("gas", "specific_thermal_energy"), sampling_type="cell", function=_specific_thermal_energy, units=unit_system["specific_energy"], ) self.add_field( ("gas", "thermal_energy_density"), sampling_type="cell", function=_thermal_energy_density, units=unit_system["pressure"], ) self.add_field( ("gas", "kinetic_energy_density"), sampling_type="cell", function=_kinetic_energy_density, units=unit_system["pressure"], ) self.add_field( ("gas", "specific_kinetic_energy"), sampling_type="cell", function=_specific_kinetic_energy, units=unit_system["specific_energy"], ) self.add_field( ("gas", "magnetic_energy_density"), sampling_type="cell", function=_magnetic_energy_density, units=unit_system["pressure"], ) self.add_field( ("gas", "specific_magnetic_energy"), sampling_type="cell", function=_specific_magnetic_energy, units=unit_system["specific_energy"], ) self.add_field( ("gas", "temperature"), sampling_type="cell", function=_temperature, units=unit_system["temperature"], ) setup_magnetic_field_aliases( self, "chombo", [f"{ax}-magnfield" for ax in "XYZ"] ) class ChomboPICFieldInfo3D(FieldInfoContainer): known_other_fields: KnownFieldsT = ( ("density", (rho_units, ["density", "Density"], None)), ( "potential", ("code_length**2 / code_time**2", ["potential", "Potential"], None), ), ("gravitational_field_x", ("code_length / code_time**2", [], None)), ("gravitational_field_y", ("code_length / code_time**2", [], None)), ("gravitational_field_z", ("code_length / code_time**2", [], None)), ) known_particle_fields: KnownFieldsT = ( ("particle_mass", ("code_mass", [], None)), ("particle_position_x", ("code_length", [], None)), ("particle_position_y", ("code_length", [], None)), ("particle_position_z", ("code_length", [], None)), ("particle_velocity_x", ("code_length / code_time", [], None)), ("particle_velocity_y", ("code_length / code_time", [], None)), ("particle_velocity_z", ("code_length / code_time", [], None)), ) # I am re-implementing this here to override a few default behaviors: # I don't want to skip output units for code_length and I do want # particle_fields to default to take_log = False. 
def setup_particle_fields(self, ptype, ftype="gas", num_neighbors=64): skip_output_units = () for f, (units, aliases, dn) in sorted(self.known_particle_fields): units = self.ds.field_units.get((ptype, f), units) if ( f in aliases or ptype not in self.ds.particle_types_raw ) and units not in skip_output_units: u = Unit(units, registry=self.ds.unit_registry) output_units = str(u.get_cgs_equivalent()) else: output_units = units if (ptype, f) not in self.field_list: continue self.add_output_field( (ptype, f), sampling_type="particle", units=units, display_name=dn, output_units=output_units, take_log=False, ) for alias in aliases: self.alias((ptype, alias), (ptype, f), units=output_units) ppos_fields = [f"particle_position_{ax}" for ax in "xyz"] pvel_fields = [f"particle_velocity_{ax}" for ax in "xyz"] particle_vector_functions(ptype, ppos_fields, pvel_fields, self) particle_deposition_functions(ptype, "particle_position", "particle_mass", self) standard_particle_fields(self, ptype) # Now we check for any leftover particle fields for field in sorted(self.field_list): if field in self: continue if not isinstance(field, tuple): raise RuntimeError if field[0] not in self.ds.particle_types: continue self.add_output_field( field, sampling_type="particle", units=self.ds.field_units.get(field, ""), ) self.setup_smoothed_fields(ptype, num_neighbors=num_neighbors, ftype=ftype) def _dummy_position(field, data): return 0.5 * np.ones_like(data["all", "particle_position_x"]) def _dummy_velocity(field, data): return np.zeros_like(data["all", "particle_velocity_x"]) def _dummy_field(field, data): return 0.0 * data["chombo", "gravitational_field_x"] fluid_field_types = ["chombo", "gas"] particle_field_types = ["io", "all"] class ChomboPICFieldInfo2D(ChomboPICFieldInfo3D): known_other_fields: KnownFieldsT = ( ("density", (rho_units, ["density", "Density"], None)), ( "potential", ("code_length**2 / code_time**2", ["potential", "Potential"], None), ), ("gravitational_field_x", ("code_length / code_time**2", [], None)), ("gravitational_field_y", ("code_length / code_time**2", [], None)), ) known_particle_fields: KnownFieldsT = ( ("particle_mass", ("code_mass", [], None)), ("particle_position_x", ("code_length", [], None)), ("particle_position_y", ("code_length", [], None)), ("particle_velocity_x", ("code_length / code_time", [], None)), ("particle_velocity_y", ("code_length / code_time", [], None)), ) def __init__(self, ds, field_list): super().__init__(ds, field_list) for ftype in fluid_field_types: self.add_field( (ftype, "gravitational_field_z"), sampling_type="cell", function=_dummy_field, units="code_length / code_time**2", ) for ptype in particle_field_types: self.add_field( (ptype, "particle_position_z"), sampling_type="particle", function=_dummy_position, units="code_length", ) self.add_field( (ptype, "particle_velocity_z"), sampling_type="particle", function=_dummy_velocity, units="code_length / code_time", ) class ChomboPICFieldInfo1D(ChomboPICFieldInfo3D): known_other_fields: KnownFieldsT = ( ("density", (rho_units, ["density", "Density"], None)), ( "potential", ("code_length**2 / code_time**2", ["potential", "Potential"], None), ), ("gravitational_field_x", ("code_length / code_time**2", [], None)), ) known_particle_fields: KnownFieldsT = ( ("particle_mass", ("code_mass", [], None)), ("particle_position_x", ("code_length", [], None)), ("particle_velocity_x", ("code_length / code_time", [], None)), ) def __init__(self, ds, field_list): super().__init__(ds, field_list) for ftype in fluid_field_types: 
self.add_field( (ftype, "gravitational_field_y"), sampling_type="cell", function=_dummy_field, units="code_length / code_time**2", ) self.add_field( (ftype, "gravitational_field_z"), sampling_type="cell", function=_dummy_field, units="code_length / code_time**2", ) for ptype in particle_field_types: self.add_field( (ptype, "particle_position_y"), sampling_type="particle", function=_dummy_position, units="code_length", ) self.add_field( (ptype, "particle_position_z"), sampling_type="particle", function=_dummy_position, units="code_length", ) self.add_field( (ptype, "particle_velocity_y"), sampling_type="particle", function=_dummy_velocity, units="code_length / code_time", ) self.add_field( (ptype, "particle_velocity_z"), sampling_type="particle", function=_dummy_velocity, units="code_length / code_time", ) class PlutoFieldInfo(ChomboFieldInfo): known_other_fields: KnownFieldsT = ( ("rho", (rho_units, ["density"], None)), ("prs", ("code_mass / (code_length * code_time**2)", ["pressure"], None)), ("vx1", (vel_units, ["velocity_x"], None)), ("vx2", (vel_units, ["velocity_y"], None)), ("vx3", (vel_units, ["velocity_z"], None)), ("bx1", (b_units, [], None)), ("bx2", (b_units, [], None)), ("bx3", (b_units, [], None)), ) known_particle_fields = () def setup_fluid_fields(self): from yt.fields.magnetic_field import setup_magnetic_field_aliases setup_magnetic_field_aliases(self, "chombo", [f"bx{ax}" for ax in [1, 2, 3]]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/chombo/io.py0000644000175100001770000002415414714401662016461 0ustar00runnerdockerimport re import numpy as np from yt.geometry.selection_routines import GridSelector from yt.utilities.io_handler import BaseIOHandler from yt.utilities.logger import ytLogger as mylog class IOHandlerChomboHDF5(BaseIOHandler): _dataset_type = "chombo_hdf5" _offset_string = "data:offsets=0" _data_string = "data:datatype=0" _offsets = None def __init__(self, ds, *args, **kwargs): BaseIOHandler.__init__(self, ds, *args, **kwargs) self.ds = ds self._handle = ds._handle self.dim = self._handle["Chombo_global/"].attrs["SpaceDim"] self._read_ghost_info() if self._offset_string not in self._handle["level_0"]: self._calculate_offsets() def _calculate_offsets(self): def box_size(corners): size = 1 for idim in range(self.dim): size *= corners[idim + self.dim] - corners[idim] + 1 return size self._offsets = {} num_comp = self._handle.attrs["num_components"] level = 0 while True: lname = "level_%i" % level if lname not in self._handle: break boxes = self._handle["level_0"]["boxes"][()] box_sizes = np.array([box_size(box) for box in boxes]) offsets = np.cumsum(box_sizes * num_comp, dtype="int64") offsets -= offsets[0] self._offsets[level] = offsets level += 1 def _read_ghost_info(self): try: self.ghost = tuple( self._handle["level_0/data_attributes"].attrs["outputGhost"] ) # pad with zeros if the dataset is low-dimensional self.ghost += (3 - self.dim) * (0,) self.ghost = np.array(self.ghost) except KeyError: # assume zero ghosts if outputGhosts not present self.ghost = np.zeros(self.dim, "int64") _field_dict = None @property def field_dict(self): if self._field_dict is not None: return self._field_dict field_dict = {} for key, val in self._handle.attrs.items(): if key.startswith("component_"): comp_number = int(re.match(r"component_(\d+)", key).groups()[0]) field_dict[val.decode("utf-8")] = comp_number self._field_dict = field_dict return self._field_dict _particle_field_index = None @property def 
particle_field_index(self): if self._particle_field_index is not None: return self._particle_field_index field_dict = {} for key, val in self._handle.attrs.items(): if key.startswith("particle_"): comp_number = int( re.match(r"particle_component_(\d+)", key).groups()[0] ) field_dict[val.decode("ascii")] = comp_number self._particle_field_index = field_dict return self._particle_field_index def _read_data(self, grid, field): lstring = "level_%i" % grid.Level lev = self._handle[lstring] dims = grid.ActiveDimensions shape = dims + 2 * self.ghost boxsize = shape.prod() if self._offsets is not None: grid_offset = self._offsets[grid.Level][grid._level_id] else: grid_offset = lev[self._offset_string][grid._level_id] start = grid_offset + self.field_dict[field] * boxsize stop = start + boxsize data = lev[self._data_string][start:stop] data_no_ghost = data.reshape(shape, order="F") ghost_slice = tuple( slice(g, g + d) for g, d in zip(self.ghost, dims, strict=True) ) ghost_slice = ghost_slice[0 : self.dim] return data_no_ghost[ghost_slice] def _read_fluid_selection(self, chunks, selector, fields, size): rv = {} chunks = list(chunks) fields.sort(key=lambda a: self.field_dict[a[1]]) if isinstance(selector, GridSelector): if not (len(chunks) == len(chunks[0].objs) == 1): raise RuntimeError grid = chunks[0].objs[0] for ftype, fname in fields: rv[ftype, fname] = self._read_data(grid, fname) return rv if size is None: size = sum(g.count(selector) for chunk in chunks for g in chunk.objs) for field in fields: ftype, fname = field fsize = size rv[field] = np.empty(fsize, dtype="float64") ng = sum(len(c.objs) for c in chunks) mylog.debug( "Reading %s cells of %s fields in %s grids", size, [f2 for f1, f2 in fields], ng, ) ind = 0 for chunk in chunks: for g in chunk.objs: nd = 0 for field in fields: ftype, fname = field data = self._read_data(g, fname) nd = g.select(selector, data, rv[field], ind) # caches ind += nd return rv def _read_particle_selection(self, chunks, selector, fields): rv = {} chunks = list(chunks) if isinstance(selector, GridSelector): if not (len(chunks) == len(chunks[0].objs) == 1): raise RuntimeError grid = chunks[0].objs[0] for ftype, fname in fields: rv[ftype, fname] = self._read_particles(grid, fname) return rv rv = {f: np.array([]) for f in fields} for chunk in chunks: for grid in chunk.objs: for ftype, fname in fields: data = self._read_particles(grid, fname) rv[ftype, fname] = np.concatenate((data, rv[ftype, fname])) return rv def _read_particles(self, grid, name): field_index = self.particle_field_index[name] lev = f"level_{grid.Level}" particles_per_grid = self._handle[lev]["particles:offsets"][()] items_per_particle = len(self._particle_field_index) # compute global offset position offsets = items_per_particle * np.cumsum(particles_per_grid) offsets = np.append(np.array([0]), offsets) offsets = np.array(offsets, dtype=np.int64) # convert between the global grid id and the id on this level grid_levels = np.array([g.Level for g in self.ds.index.grids]) grid_ids = np.array([g.id for g in self.ds.index.grids]) grid_level_offset = grid_ids[np.where(grid_levels == grid.Level)[0][0]] lo = grid.id - grid_level_offset hi = lo + 1 # handle the case where this grid has no particles if offsets[lo] == offsets[hi]: return np.array([], dtype=np.float64) data = self._handle[lev]["particles:data"][offsets[lo] : offsets[hi]] return np.asarray( data[field_index::items_per_particle], dtype=np.float64, order="F" ) def parse_orion_sinks(fn): r""" Orion sink particles are stored in text files. 
This function is for figuring out what particle fields are present based on the number of entries per line in the \*.sink file. """ # Figure out the format of the particle file with open(fn) as f: lines = f.readlines() try: line = lines[1] except IndexError: # a particle file exists, but there is only one line, # so no sinks have been created yet. index = {} return index # The basic fields that all sink particles have index = { "particle_mass": 0, "particle_position_x": 1, "particle_position_y": 2, "particle_position_z": 3, "particle_momentum_x": 4, "particle_momentum_y": 5, "particle_momentum_z": 6, "particle_angmomen_x": 7, "particle_angmomen_y": 8, "particle_angmomen_z": 9, "particle_id": -1, } if len(line.strip().split()) == 11: # these are vanilla sinks, do nothing pass elif len(line.strip().split()) == 17: # these are old-style stars, add stellar model parameters index["particle_mlast"] = 10 index["particle_r"] = 11 index["particle_mdeut"] = 12 index["particle_n"] = 13 index["particle_mdot"] = 14 index["particle_burnstate"] = 15 elif len(line.strip().split()) == 18 or len(line.strip().split()) == 19: # these are the newer style, add luminosity as well index["particle_mlast"] = 10 index["particle_r"] = 11 index["particle_mdeut"] = 12 index["particle_n"] = 13 index["particle_mdot"] = 14 index["particle_burnstate"] = 15 index["particle_luminosity"] = 16 else: # give a warning if none of the above apply: mylog.warning("Could not figure out the particle output file format") mylog.warning("These results could be nonsense!") return index class IOHandlerOrion2HDF5(IOHandlerChomboHDF5): _dataset_type = "orion_chombo_native" _particle_field_index = None @property def particle_field_index(self): fn = self.ds.fullplotdir[:-4] + "sink" index = parse_orion_sinks(fn) self._particle_field_index = index return self._particle_field_index def _read_particles(self, grid, field): """ parses the Orion Star Particle text files """ particles = [] if grid.NumberOfParticles == 0: return np.array(particles) def read(line, field): entry = line.strip().split(" ")[self.particle_field_index[field]] return float(entry) try: lines = self._cached_lines for num in grid._particle_line_numbers: line = lines[num] particles.append(read(line, field)) return np.array(particles) except AttributeError: fn = grid.ds.fullplotdir[:-4] + "sink" with open(fn) as f: lines = f.readlines() self._cached_lines = lines for num in grid._particle_line_numbers: line = lines[num] particles.append(read(line, field)) return np.array(particles) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/chombo/misc.py0000644000175100001770000000000014714401662016765 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3111522 yt-4.4.0/yt/frontends/chombo/tests/0000755000175100001770000000000014714401715016633 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/chombo/tests/__init__.py0000644000175100001770000000000014714401662020733 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/chombo/tests/test_outputs.py0000644000175100001770000000505114714401662021771 0ustar00runnerdockerfrom numpy.testing import assert_equal from yt.frontends.chombo.api import ChomboDataset, Orion2Dataset, PlutoDataset from yt.testing import requires_file, requires_module, units_override_check
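# Hedged, self-contained illustration of how the column index built by
# parse_orion_sinks above is meant to be used: each value is a column number
# into one whitespace-separated line of the *.sink file. The sample line and
# the index subset below are made up for demonstration.
_sink_index = {"particle_mass": 0, "particle_position_x": 1, "particle_id": -1}
_sink_line = "1.0e33 0.5 0.5 0.5 0.0 0.0 0.0 0.0 0.0 0.0 7"
_cols = _sink_line.strip().split()
assert float(_cols[_sink_index["particle_mass"]]) == 1.0e33
assert float(_cols[_sink_index["particle_position_x"]]) == 0.5
assert float(_cols[_sink_index["particle_id"]]) == 7.0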
from yt.utilities.answer_testing.framework import ( data_dir_load, requires_ds, small_patch_amr, ) _fields = ( ("gas", "density"), ("gas", "velocity_magnitude"), ("gas", "magnetic_field_x"), ) gc = "GaussianCloud/data.0077.3d.hdf5" @requires_ds(gc) def test_gc(): ds = data_dir_load(gc) assert_equal(str(ds), "data.0077.3d.hdf5") for test in small_patch_amr(ds, _fields): test_gc.__name__ = test.description yield test tb = "TurbBoxLowRes/data.0005.3d.hdf5" @requires_module("h5py") @requires_ds(tb) def test_tb(): ds = data_dir_load(tb) assert_equal(str(ds), "data.0005.3d.hdf5") for test in small_patch_amr(ds, _fields): test_tb.__name__ = test.description yield test iso = "IsothermalSphere/data.0000.3d.hdf5" @requires_module("h5py") @requires_ds(iso) def test_iso(): ds = data_dir_load(iso) assert_equal(str(ds), "data.0000.3d.hdf5") for test in small_patch_amr(ds, _fields): test_iso.__name__ = test.description yield test _zp_fields = (("chombo", "rhs"), ("chombo", "phi")) zp = "ZeldovichPancake/plt32.2d.hdf5" @requires_module("h5py") @requires_ds(zp) def test_zp(): ds = data_dir_load(zp) assert_equal(str(ds), "plt32.2d.hdf5") for test in small_patch_amr(ds, _zp_fields, input_center="c", input_weight="rhs"): test_zp.__name__ = test.description yield test kho = "KelvinHelmholtz/data.0004.hdf5" @requires_module("h5py") @requires_ds(kho) def test_kho(): ds = data_dir_load(kho) assert_equal(str(ds), "data.0004.hdf5") for test in small_patch_amr(ds, _fields): test_kho.__name__ = test.description yield test @requires_module("h5py") @requires_file(zp) def test_ChomboDataset(): assert isinstance(data_dir_load(zp), ChomboDataset) @requires_module("h5py") @requires_file(gc) def test_Orion2Dataset(): assert isinstance(data_dir_load(gc), Orion2Dataset) @requires_module("h5py") @requires_file(kho) def test_PlutoDataset(): assert isinstance(data_dir_load(kho), PlutoDataset) @requires_module("h5py") @requires_file(zp) def test_units_override_zp(): units_override_check(zp) @requires_module("h5py") @requires_file(gc) def test_units_override_gc(): units_override_check(gc) @requires_module("h5py") @requires_file(kho) def test_units_override_kho(): units_override_check(kho) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3111522 yt-4.4.0/yt/frontends/eagle/0000755000175100001770000000000014714401715015277 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/eagle/__init__.py0000644000175100001770000000000014714401662017377 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/eagle/api.py0000644000175100001770000000024314714401662016422 0ustar00runnerdockerfrom . 
import tests from .data_structures import EagleDataset, EagleNetworkDataset from .fields import EagleNetworkFieldInfo from .io import IOHandlerEagleNetwork ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/eagle/data_structures.py0000644000175100001770000000600214714401662021064 0ustar00runnerdockerimport numpy as np import yt.units from yt.fields.field_info_container import FieldInfoContainer from yt.frontends.gadget.data_structures import GadgetHDF5Dataset from yt.frontends.owls.fields import OWLSFieldInfo from yt.utilities.on_demand_imports import _h5py as h5py from .fields import EagleNetworkFieldInfo class EagleDataset(GadgetHDF5Dataset): _load_requirements = ["h5py"] _particle_mass_name = "Mass" _field_info_class: type[FieldInfoContainer] = OWLSFieldInfo _time_readin_ = "Time" def _parse_parameter_file(self): # read values from header hvals = self._get_hvals() self.parameters = hvals # set features common to OWLS and Eagle self._set_owls_eagle() # Set time from analytic solution for flat LCDM universe a = hvals["ExpansionFactor"] H0 = hvals["H(z)"] / hvals["E(z)"] a_eq = (self.omega_matter / self.omega_lambda) ** (1.0 / 3) t1 = 2.0 / (3.0 * np.sqrt(self.omega_lambda)) t2 = (a / a_eq) ** (3.0 / 2) t3 = np.sqrt(1.0 + (a / a_eq) ** 3) t = t1 * np.log(t2 + t3) / H0 self.current_time = t * yt.units.s def _set_code_unit_attributes(self): self._set_owls_eagle_units() @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if cls._missing_load_requirements(): return False need_groups = [ "Config", "Constants", "HashTable", "Header", "Parameters", "RuntimePars", "Units", ] veto_groups = [ "SUBFIND", "PartType0/ChemistryAbundances", "PartType0/ChemicalAbundances", ] valid = True try: fileh = h5py.File(filename, mode="r") for ng in need_groups: if ng not in fileh["/"]: valid = False for vg in veto_groups: if vg in fileh["/"]: valid = False fileh.close() except Exception: valid = False pass return valid class EagleNetworkDataset(EagleDataset): _load_requirements = ["h5py"] _particle_mass_name = "Mass" _field_info_class = EagleNetworkFieldInfo _time_readin = "Time" @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if cls._missing_load_requirements(): return False try: fileh = h5py.File(filename, mode="r") if ( "Constants" in fileh["/"].keys() and "Header" in fileh["/"].keys() and "SUBFIND" not in fileh["/"].keys() and ( "ChemistryAbundances" in fileh["PartType0"].keys() or "ChemicalAbundances" in fileh["PartType0"].keys() ) ): fileh.close() return True fileh.close() except Exception: pass return False ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/eagle/definitions.py0000644000175100001770000000270414714401662020170 0ustar00runnerdockereaglenetwork_ions = ( "electron", "H1", "H2", "H_m", "He1", "He2", "He3", "C1", "C2", "C3", "C4", "C5", "C6", "C7", "C_m", "N1", "N2", "N3", "N4", "N5", "N6", "N7", "N8", "O1", "O2", "O3", "O4", "O5", "O6", "O7", "O8", "O9", "O_m", "Ne1", "Ne2", "Ne3", "Ne4", "Ne5", "Ne6", "Ne7", "Ne8", "Ne9", "Ne10", "Ne11", "Mg1", "Mg2", "Mg3", "Mg4", "Mg5", "Mg6", "Mg7", "Mg8", "Mg9", "Mg10", "Mg11", "Mg12", "Mg13", "Si1", "Si2", "Si3", "Si4", "Si5", "Si6", "Si7", "Si8", "Si9", "Si10", "Si11", "Si12", "Si13", "Si14", "Si15", "Si16", "Si17", "Ca1", "Ca2", "Ca3", "Ca4", "Ca5", "Ca6", "Ca7", "Ca8", "Ca9", "Ca10", "Ca11", "Ca12", "Ca13", "Ca14", "Ca15", "Ca16", "Ca17", "Ca18", "Ca19", "Ca20", "Ca21", "Fe1", 
"Fe2", "Fe3", "Fe4", "Fe5", "Fe6", "Fe7", "Fe8", "Fe9", "Fe10", "Fe11", "Fe12", "Fe13", "Fe14", "Fe15", "Fe16", "Fe17", "Fe18", "Fe19", "Fe20", "Fe21", "Fe22", "Fe23", "Fe24", "Fe25", "Fe25", "Fe27", ) eaglenetwork_ion_lookup = {ion: index for index, ion in enumerate(eaglenetwork_ions)} ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/eagle/fields.py0000644000175100001770000000567514714401662017135 0ustar00runnerdockerfrom yt.frontends.eagle.definitions import eaglenetwork_ion_lookup from yt.frontends.owls.fields import OWLSFieldInfo from yt.units.yt_array import YTQuantity from yt.utilities.periodic_table import periodic_table class EagleNetworkFieldInfo(OWLSFieldInfo): _ions = ( "H1", "H2", "He1", "He2", "He3", "C1", "C2", "C3", "C4", "C5", "C6", "C7", "N1", "N2", "N3", "N4", "N5", "N6", "N7", "N8", "O1", "O2", "O3", "O4", "O5", "O6", "O7", "O8", "O9", "Ne1", "Ne2", "Ne3", "Ne4", "Ne5", "Ne6", "Ne7", "Ne8", "Ne9", "Ne10", "Ne11", "Mg1", "Mg2", "Mg3", "Mg4", "Mg5", "Mg6", "Mg7", "Mg8", "Mg9", "Mg10", "Mg11", "Mg12", "Mg13", "Si1", "Si2", "Si3", "Si4", "Si5", "Si6", "Si7", "Si8", "Si9", "Si10", "Si11", "Si12", "Si13", "Si14", "Si15", "Si16", "Si17", "Ca1", "Ca2", "Ca3", "Ca4", "Ca5", "Ca6", "Ca7", "Ca8", "Ca9", "Ca10", "Ca11", "Ca12", "Ca13", "Ca14", "Ca15", "Ca16", "Ca17", "Ca18", "Ca19", "Ca20", "Ca21", "Fe1", "Fe2", "Fe3", "Fe4", "Fe5", "Fe6", "Fe7", "Fe8", "Fe9", "Fe10", "Fe11", "Fe12", "Fe13", "Fe14", "Fe15", "Fe16", "Fe17", "Fe18", "Fe19", "Fe20", "Fe21", "Fe22", "Fe23", "Fe24", "Fe25", "Fe25", "Fe27", ) def __init__(self, ds, field_list, slice_info=None): super().__init__(ds, field_list, slice_info=slice_info) def _create_ion_density_func(self, ftype, ion): """returns a function that calculates the ion density of a particle.""" def _ion_density(field, data): # Lookup the index of the ion index = eaglenetwork_ion_lookup[ion] # Ion to hydrogen number density ratio ion_chem = data[ftype, "Chemistry_%03i" % index] # Mass of a single ion if ion[0:2].isalpha(): symbol = ion[0:2].capitalize() else: symbol = ion[0:1].capitalize() m_ion = YTQuantity(periodic_table.elements_by_symbol[symbol].weight, "amu") # hydrogen number density n_H = data["PartType0", "H_number_density"] return m_ion * ion_chem * n_H return _ion_density ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/eagle/io.py0000644000175100001770000000020114714401662016252 0ustar00runnerdockerfrom yt.frontends.owls.io import IOHandlerOWLS class IOHandlerEagleNetwork(IOHandlerOWLS): _dataset_type = "eagle_network" ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3111522 yt-4.4.0/yt/frontends/eagle/tests/0000755000175100001770000000000014714401715016441 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/eagle/tests/__init__.py0000644000175100001770000000000014714401662020541 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/eagle/tests/test_outputs.py0000644000175100001770000000131514714401662021576 0ustar00runnerdockerfrom yt.frontends.eagle.api import EagleDataset from yt.testing import ParticleSelectionComparison, requires_file, requires_module from yt.utilities.answer_testing.framework import data_dir_load s28 = "snapshot_028_z000p000/snap_028_z000p000.0.hdf5" @requires_module("h5py") 
@requires_file(s28) def test_EagleDataset(): ds = data_dir_load(s28) assert isinstance(ds, EagleDataset) psc = ParticleSelectionComparison(ds) psc.run_defaults() s399 = "snipshot_399_z000p000/snip_399_z000p000.0.hdf5" @requires_module("h5py") @requires_file(s399) def test_Snipshot(): ds = data_dir_load(s399) assert isinstance(ds, EagleDataset) psc = ParticleSelectionComparison(ds) psc.run_defaults() ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3151522 yt-4.4.0/yt/frontends/enzo/0000755000175100001770000000000014714401715015175 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/enzo/__init__.py0000644000175100001770000000000014714401662017275 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/enzo/answer_testing_support.py0000644000175100001770000000725214714401662022406 0ustar00runnerdockerimport os from functools import wraps import numpy as np from yt.config import ytcfg from yt.loaders import load from yt.testing import assert_allclose from yt.utilities.answer_testing.framework import ( AnswerTestingTest, FieldValuesTest, GridValuesTest, ProjectionValuesTest, can_run_ds, temp_cwd, ) class AssertWrapper: """ Used to wrap a numpy testing assertion, in order to provide a useful name for a given assertion test. """ def __init__(self, description, *args): # The key here is to add a description attribute, which nose will pick # up. self.args = args self.description = description def __call__(self): self.args[0](*self.args[1:]) def requires_outputlog(path=".", prefix=""): from nose import SkipTest def ffalse(func): @wraps(func) def fskip(*args, **kwargs): raise SkipTest return fskip def ftrue(func): @wraps(func) def fyielder(*args, **kwargs): with temp_cwd(path): for t in func(*args, **kwargs): if isinstance(t, AnswerTestingTest): t.prefix = prefix yield t return fyielder if os.path.exists("OutputLog"): return ftrue with temp_cwd(path): if os.path.exists("OutputLog"): return ftrue return ffalse def standard_small_simulation(ds_fn, fields): if not can_run_ds(ds_fn): return dso = [None] tolerance = ytcfg.get("yt", "answer_testing_tolerance") bitwise = ytcfg.get("yt", "answer_testing_bitwise") for field in fields: if bitwise: yield GridValuesTest(ds_fn, field) if "particle" in field: continue for dobj_name in dso: for axis in [0, 1, 2]: for weight_field in [None, ("gas", "density")]: yield ProjectionValuesTest( ds_fn, axis, field, weight_field, dobj_name, decimals=tolerance ) yield FieldValuesTest(ds_fn, field, dobj_name, decimals=tolerance) class ShockTubeTest: def __init__( self, data_file, solution_file, fields, left_edges, right_edges, rtol, atol ): self.solution_file = solution_file self.data_file = data_file self.fields = fields self.left_edges = left_edges self.right_edges = right_edges self.rtol = rtol self.atol = atol def __call__(self): # Read in the ds ds = load(self.data_file) exact = self.get_analytical_solution() ad = ds.all_data() position = ad["index", "x"] for k in self.fields: field = ad[k].d for xmin, xmax in zip(self.left_edges, self.right_edges, strict=True): mask = (position >= xmin) * (position <= xmax) exact_field = np.interp(position[mask].ndview, exact["pos"], exact[k]) myname = f"ShockTubeTest_{k}" # yield test vs analytical solution yield AssertWrapper( myname, assert_allclose, field[mask], exact_field, self.rtol, self.atol, ) def 
get_analytical_solution(self): # Reads in from file pos, dens, vel, pres, inte = np.loadtxt(self.solution_file, unpack=True) exact = {} exact["pos"] = pos exact["gas", "density"] = dens exact["gas", "velocity_x"] = vel exact["gas", "pressure"] = pres exact["gas", "specific_thermal_energy"] = inte return exact ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/enzo/api.py0000644000175100001770000000071114714401662016320 0ustar00runnerdockerfrom . import tests from .data_structures import ( EnzoDataset, EnzoDatasetInMemory, EnzoGrid, EnzoGridInMemory, EnzoHierarchy, EnzoHierarchy1D, EnzoHierarchy2D, EnzoHierarchyInMemory, ) from .fields import EnzoFieldInfo from .io import ( IOHandlerInMemory, IOHandlerPacked1D, IOHandlerPacked2D, IOHandlerPackedHDF5, ) from .simulation_handling import EnzoSimulation add_enzo_field = EnzoFieldInfo.add_field ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/enzo/data_structures.py0000644000175100001770000013065614714401662020777 0ustar00runnerdockerimport os import re import string import time import weakref from collections import defaultdict from functools import cached_property import numpy as np from more_itertools import always_iterable from yt.data_objects.index_subobjects.grid_patch import AMRGridPatch from yt.data_objects.static_output import Dataset from yt.fields.field_info_container import NullFunc from yt.frontends.enzo.misc import cosmology_get_units from yt.funcs import get_pbar, iter_fields, setdefaultattr from yt.geometry.geometry_handler import YTDataChunk from yt.geometry.grid_geometry_handler import GridIndex from yt.utilities.logger import ytLogger as mylog from yt.utilities.on_demand_imports import _h5py as h5py, _libconf as libconf from .fields import EnzoFieldInfo class EnzoGrid(AMRGridPatch): """ Class representing a single Enzo Grid instance. """ def __init__(self, id, index): """ Returns an instance of EnzoGrid with *id*, associated with *filename* and *index*. """ # All of the field parameters will be passed to us as needed. AMRGridPatch.__init__(self, id, filename=None, index=index) self._children_ids = [] self._parent_id = -1 self.Level = -1 def set_filename(self, filename): """ Intelligently set the filename. 
""" if filename is None: self.filename = filename return if self.index._strip_path: self.filename = os.path.join( self.index.directory, os.path.basename(filename) ) elif filename[0] == os.path.sep: self.filename = filename else: self.filename = os.path.join(self.index.directory, filename) return @property def Parent(self): if self._parent_id == -1: return None return self.index.grids[self._parent_id - self._id_offset] @property def Children(self): return [self.index.grids[cid - self._id_offset] for cid in self._children_ids] @property def NumberOfActiveParticles(self): if not hasattr(self.index, "grid_active_particle_count"): return {} id = self.id - self._id_offset nap = { ptype: self.index.grid_active_particle_count[ptype][id] for ptype in self.index.grid_active_particle_count } return nap class EnzoGridInMemory(EnzoGrid): __slots__ = ["proc_num"] def set_filename(self, filename): pass class EnzoGridGZ(EnzoGrid): __slots__ = () def retrieve_ghost_zones(self, n_zones, fields, all_levels=False, smoothed=False): NGZ = self.ds.parameters.get("NumberOfGhostZones", 3) if n_zones > NGZ: return EnzoGrid.retrieve_ghost_zones( self, n_zones, fields, all_levels, smoothed ) # ----- Below is mostly the original code, except we remove the field # ----- access section # We will attempt this by creating a datacube that is exactly bigger # than the grid by nZones*dx in each direction nl = self.get_global_startindex() - n_zones new_left_edge = nl * self.dds + self.ds.domain_left_edge # Something different needs to be done for the root grid, though level = self.Level kwargs = { "dims": self.ActiveDimensions + 2 * n_zones, "num_ghost_zones": n_zones, "use_pbar": False, } # This should update the arguments to set the field parameters to be # those of this grid. kwargs.update(self.field_parameters) if smoothed: # cube = self.index.smoothed_covering_grid( # level, new_left_edge, new_right_edge, **kwargs) cube = self.index.smoothed_covering_grid(level, new_left_edge, **kwargs) else: cube = self.index.covering_grid(level, new_left_edge, **kwargs) # ----- This is EnzoGrid.get_data, duplicated here mostly for # ---- efficiency's sake. 
start_zone = NGZ - n_zones if start_zone == 0: end_zone = None else: end_zone = -(NGZ - n_zones) sl = tuple(slice(start_zone, end_zone) for i in range(3)) if fields is None: return cube for field in iter_fields(fields): if field in self.field_list: conv_factor = 1.0 if field in self.ds.field_info: conv_factor = self.ds.field_info[field]._convert_function(self) if self.ds.field_info[field].sampling_type == "particle": continue temp = self.index.io._read_raw_data_set(self, field) temp = temp.swapaxes(0, 2) cube.field_data[field] = np.multiply(temp, conv_factor, temp)[sl] return cube class EnzoHierarchy(GridIndex): _strip_path = False grid = EnzoGrid _preload_implemented = True def __init__(self, ds, dataset_type): self.dataset_type = dataset_type self.index_filename = os.path.abspath(f"{ds.parameter_filename}.hierarchy") if os.path.getsize(self.index_filename) == 0: raise OSError(-1, "File empty", self.index_filename) self.directory = os.path.dirname(self.index_filename) # For some reason, r8 seems to want Float64 if "CompilerPrecision" in ds and ds["CompilerPrecision"] == "r4": self.float_type = "float32" else: self.float_type = "float64" GridIndex.__init__(self, ds, dataset_type) # sync it back self.dataset.dataset_type = self.dataset_type def _count_grids(self): self.num_grids = None test_grid = test_grid_id = None self.num_stars = 0 for line in rlines(open(self.index_filename, "rb")): if ( line.startswith("BaryonFileName") or line.startswith("ParticleFileName") or line.startswith("FileName ") ): test_grid = line.split("=")[-1].strip().rstrip() if line.startswith("NumberOfStarParticles"): self.num_stars = int(line.split("=")[-1]) if line.startswith("Grid "): if self.num_grids is None: self.num_grids = int(line.split("=")[-1]) test_grid_id = int(line.split("=")[-1]) if test_grid is not None: break self._guess_dataset_type(self.ds.dimensionality, test_grid, test_grid_id) def _guess_dataset_type(self, rank, test_grid, test_grid_id): if test_grid[0] != os.path.sep: test_grid = os.path.join(self.directory, test_grid) if not os.path.exists(test_grid): test_grid = os.path.join(self.directory, os.path.basename(test_grid)) mylog.debug("Your data uses the annoying hardcoded path.") self._strip_path = True if self.dataset_type is not None: return if rank == 3: mylog.debug("Detected packed HDF5") if self.parameters.get("WriteGhostZones", 0) == 1: self.dataset_type = "enzo_packed_3d_gz" self.grid = EnzoGridGZ else: self.dataset_type = "enzo_packed_3d" elif rank == 2: mylog.debug("Detect packed 2D") self.dataset_type = "enzo_packed_2d" elif rank == 1: mylog.debug("Detect packed 1D") self.dataset_type = "enzo_packed_1d" else: raise NotImplementedError # Sets are sorted, so that won't work! 
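# Toy illustration (made-up hierarchy snippet) of the scan performed by
# _count_grids above: the .hierarchy file is read backwards via rlines(),
# so the first "Grid = N" line encountered carries the highest grid number,
# which equals the total grid count.
_toy_hierarchy = ["Grid = 1", "GridRank = 3", "Grid = 2", "GridRank = 3"]
_num_grids = None
for _line in reversed(_toy_hierarchy):
    if _line.startswith("Grid "):
        _num_grids = int(_line.split("=")[-1])
        break
assert _num_grids == 2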
def _parse_index(self): def _next_token_line(token, f): for line in f: if line.startswith(token): return line.split()[2:] pattern = r"Pointer: Grid\[(\d*)\]->NextGrid(Next|This)Level = (\d*)\s+$" patt = re.compile(pattern) f = open(self.index_filename) self.grids = [self.grid(1, self)] self.grids[0].Level = 0 si, ei, LE, RE, fn, npart = [], [], [], [], [], [] pbar = get_pbar("Parsing Hierarchy ", self.num_grids) version = self.dataset.parameters.get("VersionNumber", None) params = self.dataset.parameters if version is None and "Internal" in params: version = float(params["Internal"]["Provenance"]["VersionNumber"]) if version >= 3.0: active_particles = True nap = { ap_type: [] for ap_type in params["Physics"]["ActiveParticles"][ "ActiveParticlesEnabled" ] } else: if "AppendActiveParticleType" in self.parameters: nap = {} active_particles = True for type in self.parameters.get("AppendActiveParticleType", []): nap[type] = [] else: nap = None active_particles = False for grid_id in range(self.num_grids): pbar.update(grid_id + 1) # We will unroll this list si.append(_next_token_line("GridStartIndex", f)) ei.append(_next_token_line("GridEndIndex", f)) LE.append(_next_token_line("GridLeftEdge", f)) RE.append(_next_token_line("GridRightEdge", f)) nb = int(_next_token_line("NumberOfBaryonFields", f)[0]) fn.append([None]) if nb > 0: fn[-1] = _next_token_line("BaryonFileName", f) npart.append(int(_next_token_line("NumberOfParticles", f)[0])) # Below we find out what active particles exist in this grid, # and add their counts individually. if active_particles: ptypes = _next_token_line("PresentParticleTypes", f) counts = [int(c) for c in _next_token_line("ParticleTypeCounts", f)] for ptype in self.parameters.get("AppendActiveParticleType", []): if ptype in ptypes: nap[ptype].append(counts[ptypes.index(ptype)]) else: nap[ptype].append(0) if nb == 0 and npart[-1] > 0: fn[-1] = _next_token_line("ParticleFileName", f) for line in f: if len(line) < 2: break if line.startswith("Pointer:"): vv = patt.findall(line)[0] self.__pointer_handler(vv) pbar.finish() self._fill_arrays(ei, si, LE, RE, npart, nap) temp_grids = np.empty(self.num_grids, dtype="object") temp_grids[:] = self.grids self.grids = temp_grids self.filenames = fn def _initialize_grid_arrays(self): super()._initialize_grid_arrays() if "AppendActiveParticleType" in self.parameters.keys() and len( self.parameters["AppendActiveParticleType"] ): gac = { ptype: np.zeros(self.num_grids, dtype="i4") for ptype in self.parameters["AppendActiveParticleType"] } self.grid_active_particle_count = gac def _fill_arrays(self, ei, si, LE, RE, npart, nap): self.grid_dimensions.flat[:] = ei self.grid_dimensions -= np.array(si, dtype="i4") self.grid_dimensions += 1 self.grid_left_edge.flat[:] = LE self.grid_right_edge.flat[:] = RE self.grid_particle_count.flat[:] = npart if nap is not None: for ptype in nap: self.grid_active_particle_count[ptype].flat[:] = nap[ptype] def __pointer_handler(self, m): sgi = int(m[2]) - 1 if sgi == -1: return # if it's 0, then we're done with that lineage # Okay, so, we have a pointer. 
We make a new grid, with an id of the length+1 # (recall, Enzo grids are 1-indexed) self.grids.append(self.grid(len(self.grids) + 1, self)) # We'll just go ahead and make a weakref to cache second_grid = self.grids[sgi] # zero-indexed already first_grid = self.grids[int(m[0]) - 1] if m[1] == "Next": first_grid._children_ids.append(second_grid.id) second_grid._parent_id = first_grid.id second_grid.Level = first_grid.Level + 1 elif m[1] == "This": if first_grid.Parent is not None: first_grid.Parent._children_ids.append(second_grid.id) second_grid._parent_id = first_grid._parent_id second_grid.Level = first_grid.Level self.grid_levels[sgi] = second_grid.Level def _rebuild_top_grids(self, level=0): mylog.info("Rebuilding grids on level %s", level) cmask = self.grid_levels.flat == (level + 1) cmsum = cmask.sum() mask = np.zeros(self.num_grids, dtype="bool") for grid in self.select_grids(level): mask[:] = 0 LE = self.grid_left_edge[grid.id - grid._id_offset] RE = self.grid_right_edge[grid.id - grid._id_offset] grids, grid_i = self.get_box_grids(LE, RE) mask[grid_i] = 1 grid._children_ids = [] cgrids = self.grids[(mask * cmask).astype("bool")] mylog.info("%s: %s / %s", grid, len(cgrids), cmsum) for cgrid in cgrids: grid._children_ids.append(cgrid.id) cgrid._parent_id = grid.id mylog.info("Finished rebuilding") def _populate_grid_objects(self): for g, f in zip(self.grids, self.filenames, strict=True): g._prepare_grid() g._setup_dx() g.set_filename(f[0]) del self.filenames # No longer needed. self.max_level = self.grid_levels.max() def _detect_active_particle_fields(self): ap_list = self.dataset["AppendActiveParticleType"] _fields = {ap: [] for ap in ap_list} fields = [] for ptype in self.dataset["AppendActiveParticleType"]: select_grids = self.grid_active_particle_count[ptype].flat if not np.any(select_grids): current_ptypes = self.dataset.particle_types new_ptypes = [p for p in current_ptypes if p != ptype] self.dataset.particle_types = new_ptypes self.dataset.particle_types_raw = new_ptypes continue if ptype != "DarkMatter": gs = self.grids[select_grids > 0] g = gs[0] handle = h5py.File(g.filename, "r") grid_group = handle[f"/Grid{g.id:08d}"] for pname in ["Active Particles", "Particles"]: if pname in grid_group: break else: raise RuntimeError("Could not find active particle group in data.") node = grid_group[pname] for ptype in (str(p) for p in node): if ptype not in _fields: continue for field in (str(f) for f in node[ptype]): _fields[ptype].append(field) if node[ptype][field].ndim > 1: self.io._array_fields[field] = ( node[ptype][field].shape[1:], ) fields += [(ptype, field) for field in _fields.pop(ptype)] handle.close() return set(fields) def _setup_derived_fields(self): super()._setup_derived_fields() aps = self.dataset.parameters.get("AppendActiveParticleType", []) for fname, field in self.ds.field_info.items(): if not field.sampling_type == "particle": continue if isinstance(fname, tuple): continue if field._function is NullFunc: continue for apt in aps: dd = field._copy_def() dd.pop("name") self.ds.field_info.add_field((apt, fname), sampling_type="cell", **dd) def _detect_output_fields(self): self.field_list = [] # Do this only on the root processor to save disk work. 
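# The "Pointer" lines consumed by _parse_index/__pointer_handler above look
# like the sample below; this standalone check exercises the same regular
# expression on a made-up line.
import re

_patt = re.compile(r"Pointer: Grid\[(\d*)\]->NextGrid(Next|This)Level = (\d*)\s+$")
_gid, _direction, _target = _patt.findall("Pointer: Grid[1]->NextGridNextLevel = 2 \n")[0]
assert (_gid, _direction, _target) == ("1", "Next", "2")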
if self.comm.rank in (0, None): mylog.info("Gathering a field list (this may take a moment.)") field_list = set() random_sample = self._generate_random_grids() for grid in random_sample: if not hasattr(grid, "filename"): continue try: gf = self.io._read_field_names(grid) except self.io._read_exception as e: raise OSError(f"Grid {grid.id} is a bit funky?") from e mylog.debug("Grid %s has: %s", grid.id, gf) field_list = field_list.union(gf) if "AppendActiveParticleType" in self.dataset.parameters: ap_fields = self._detect_active_particle_fields() field_list = list(set(field_list).union(ap_fields)) if not any(f[0] == "io" for f in field_list): if "io" in self.dataset.particle_types_raw: ptypes_raw = list(self.dataset.particle_types_raw) ptypes_raw.remove("io") self.dataset.particle_types_raw = tuple(ptypes_raw) if "io" in self.dataset.particle_types: ptypes = list(self.dataset.particle_types) ptypes.remove("io") self.dataset.particle_types = tuple(ptypes) ptypes = self.dataset.particle_types ptypes_raw = self.dataset.particle_types_raw else: field_list = None ptypes = None ptypes_raw = None self.field_list = list(self.comm.mpi_bcast(field_list)) self.dataset.particle_types = list(self.comm.mpi_bcast(ptypes)) self.dataset.particle_types_raw = list(self.comm.mpi_bcast(ptypes_raw)) def _generate_random_grids(self): if self.num_grids > 40: rng = np.random.default_rng() starter = rng.integers(0, 20) random_sample = np.mgrid[starter : len(self.grids) - 1 : 20j].astype( "int32" ) # We also add in a bit to make sure that some of the grids have # particles gwp = self.grid_particle_count > 0 if np.any(gwp) and not np.any(gwp[random_sample,]): # We just add one grid. This is not terribly efficient. first_grid = np.where(gwp)[0][0] random_sample.resize((21,)) random_sample[-1] = first_grid mylog.debug("Added additional grid %s", first_grid) mylog.debug("Checking grids: %s", random_sample.tolist()) else: random_sample = np.mgrid[0 : max(len(self.grids), 1)].astype("int32") return self.grids[random_sample,] def _get_particle_type_counts(self): try: ret = {} for ptype in self.grid_active_particle_count: ret[ptype] = self.grid_active_particle_count[ptype].sum() return ret except AttributeError: return super()._get_particle_type_counts() def find_particles_by_type(self, ptype, max_num=None, additional_fields=None): """ Returns a structure of arrays with all of the particles' positions, velocities, masses, types, IDs, and attributes for a particle type **ptype** for a maximum of **max_num** particles. If non-default particle fields are used, provide them in **additional_fields**. """ # Not sure whether this routine should be in the general HierarchyType.
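# Sketch of the sampling trick in _generate_random_grids above: np.mgrid with
# a complex "step" (20j) yields 20 evenly spaced values between the endpoints,
# inclusive, which are then truncated to integer grid indices. The grid count
# and seed here are arbitrary.
import numpy as np

_rng = np.random.default_rng(0)
_starter = _rng.integers(0, 20)
_n_grids = 100
_sample = np.mgrid[_starter : _n_grids - 1 : 20j].astype("int32")
assert _sample.size == 20
assert _sample[0] == _starter and _sample[-1] == _n_grids - 1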
if self.grid_particle_count.sum() == 0: mylog.info("Data contains no particles.") return None if additional_fields is None: additional_fields = [ "metallicity_fraction", "creation_time", "dynamical_time", ] pfields = [f for f in self.field_list if f.startswith("particle_")] nattr = self.dataset["NumberOfParticleAttributes"] if nattr > 0: pfields += additional_fields[:nattr] # Find where the particles reside and count them if max_num is None: max_num = 1e100 total = 0 pstore = [] for level in range(self.max_level, -1, -1): for grid in self.select_grids(level): index = np.where(grid["particle_type"] == ptype)[0] total += len(index) pstore.append(index) if total >= max_num: break if total >= max_num: break result = None if total > 0: result = {} for p in pfields: result[p] = np.zeros(total, "float64") # Now we retrieve data for each field ig = count = 0 for level in range(self.max_level, -1, -1): for grid in self.select_grids(level): nidx = len(pstore[ig]) if nidx > 0: for p in pfields: result[p][count : count + nidx] = grid[p][pstore[ig]] count += nidx ig += 1 if count >= total: break if count >= total: break # Crop data if retrieved more than max_num if count > max_num: for p in pfields: result[p] = result[p][0:max_num] return result class EnzoHierarchyInMemory(EnzoHierarchy): grid = EnzoGridInMemory @cached_property def enzo(self): import enzo return enzo def __init__(self, ds, dataset_type=None): self.dataset_type = dataset_type self.float_type = "float64" self.dataset = weakref.proxy(ds) # for _obtain_enzo self.float_type = self.enzo.hierarchy_information["GridLeftEdge"].dtype self.directory = os.getcwd() GridIndex.__init__(self, ds, dataset_type) def _initialize_data_storage(self): pass def _count_grids(self): self.num_grids = self.enzo.hierarchy_information["GridDimensions"].shape[0] def _parse_index(self): self._copy_index_structure() mylog.debug("Copying reverse tree") reverse_tree = self.enzo.hierarchy_information["GridParentIDs"].ravel().tolist() # Initial setup: mylog.debug("Reconstructing parent-child relationships") grids = [] # We enumerate, so it's 0-indexed id and 1-indexed pid self.filenames = ["-1"] * self.num_grids for id, pid in enumerate(reverse_tree): grids.append(self.grid(id + 1, self)) grids[-1].Level = self.grid_levels[id, 0] if pid > 0: grids[-1]._parent_id = pid grids[pid - 1]._children_ids.append(grids[-1].id) self.max_level = self.grid_levels.max() mylog.debug("Preparing grids") self.grids = np.empty(len(grids), dtype="object") for i, grid in enumerate(grids): if (i % 1e4) == 0: mylog.debug("Prepared % 7i / % 7i grids", i, self.num_grids) grid.filename = "Inline_processor_%07i" % (self.grid_procs[i, 0]) grid._prepare_grid() grid._setup_dx() grid.proc_num = self.grid_procs[i, 0] self.grids[i] = grid mylog.debug("Prepared") def _initialize_grid_arrays(self): EnzoHierarchy._initialize_grid_arrays(self) self.grid_procs = np.zeros((self.num_grids, 1), "int32") def _copy_index_structure(self): # Dimensions are important! 
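# Minimal sketch of the parent/child bookkeeping in the in-memory
# _parse_index above: reverse_tree[i] holds the 1-indexed parent id of grid
# i + 1, with 0 or -1 marking a root grid. The toy tree below is made up.
_reverse_tree = [-1, 1, 1, 2]
_children = {gid: [] for gid in range(1, len(_reverse_tree) + 1)}
_parent = {}
for _i, _pid in enumerate(_reverse_tree):
    if _pid > 0:
        _parent[_i + 1] = _pid
        _children[_pid].append(_i + 1)
assert _children[1] == [2, 3] and _parent[4] == 2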
self.grid_dimensions[:] = self.enzo.hierarchy_information["GridEndIndices"][:] self.grid_dimensions -= self.enzo.hierarchy_information["GridStartIndices"][:] self.grid_dimensions += 1 self.grid_left_edge[:] = self.enzo.hierarchy_information["GridLeftEdge"][:] self.grid_right_edge[:] = self.enzo.hierarchy_information["GridRightEdge"][:] self.grid_levels[:] = self.enzo.hierarchy_information["GridLevels"][:] self.grid_procs = self.enzo.hierarchy_information["GridProcs"].copy() self.grid_particle_count[:] = self.enzo.hierarchy_information[ "GridNumberOfParticles" ][:] def save_data(self, *args, **kwargs): pass _cached_field_list = None _cached_derived_field_list = None def _generate_random_grids(self): my_rank = self.comm.rank my_grids = self.grids[self.grid_procs.ravel() == my_rank] if len(my_grids) > 40: rng = np.random.default_rng() starter = rng.integers(0, 20) random_sample = np.mgrid[starter : len(my_grids) - 1 : 20j].astype("int32") mylog.debug("Checking grids: %s", random_sample.tolist()) else: random_sample = np.mgrid[0 : max(len(my_grids) - 1, 1)].astype("int32") return my_grids[random_sample,] def _chunk_io(self, dobj, cache=True, local_only=False): gfiles = defaultdict(list) gobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info) for g in gobjs: gfiles[g.filename].append(g) for fn in sorted(gfiles): if local_only: gobjs = [g for g in gfiles[fn] if g.proc_num == self.comm.rank] gfiles[fn] = gobjs gs = gfiles[fn] count = self._count_selection(dobj, gs) yield YTDataChunk(dobj, "io", gs, count, cache=cache) class EnzoHierarchy1D(EnzoHierarchy): def _fill_arrays(self, ei, si, LE, RE, npart, nap): self.grid_dimensions[:, :1] = ei self.grid_dimensions[:, :1] -= np.array(si, dtype="i4") self.grid_dimensions += 1 self.grid_left_edge[:, :1] = LE self.grid_right_edge[:, :1] = RE self.grid_particle_count.flat[:] = npart self.grid_left_edge[:, 1:] = 0.0 self.grid_right_edge[:, 1:] = 1.0 self.grid_dimensions[:, 1:] = 1 if nap is not None: raise NotImplementedError class EnzoHierarchy2D(EnzoHierarchy): def _fill_arrays(self, ei, si, LE, RE, npart, nap): self.grid_dimensions[:, :2] = ei self.grid_dimensions[:, :2] -= np.array(si, dtype="i4") self.grid_dimensions += 1 self.grid_left_edge[:, :2] = LE self.grid_right_edge[:, :2] = RE self.grid_particle_count.flat[:] = npart self.grid_left_edge[:, 2] = 0.0 self.grid_right_edge[:, 2] = 1.0 self.grid_dimensions[:, 2] = 1 if nap is not None: raise NotImplementedError class EnzoDataset(Dataset): """ Enzo-specific output, set at a fixed time. """ _load_requirements = ["h5py"] _index_class = EnzoHierarchy _field_info_class = EnzoFieldInfo def __init__( self, filename, dataset_type=None, parameter_override=None, conversion_override=None, storage_filename=None, units_override=None, unit_system="cgs", default_species_fields=None, ): """ This class is a stripped down class that simply reads and parses *filename* without looking at the index. *dataset_type* gets passed to the index to pre-determine the style of data-output. However, it is not strictly necessary. Optionally you may specify a *parameter_override* dictionary that will override anything in the parameter file and a *conversion_override* dictionary that consists of {fieldname : conversion_to_cgs} that will override the #DataCGS. 
""" self.fluid_types += ("enzo",) if filename.endswith(".hierarchy"): filename = filename[:-10] if parameter_override is None: parameter_override = {} self._parameter_override = parameter_override if conversion_override is None: conversion_override = {} self._conversion_override = conversion_override self.storage_filename = storage_filename Dataset.__init__( self, filename, dataset_type, units_override=units_override, unit_system=unit_system, default_species_fields=default_species_fields, ) def _setup_1d(self): self._index_class = EnzoHierarchy1D self.domain_left_edge = np.concatenate([[self.domain_left_edge], [0.0, 0.0]]) self.domain_right_edge = np.concatenate([[self.domain_right_edge], [1.0, 1.0]]) def _setup_2d(self): self._index_class = EnzoHierarchy2D self.domain_left_edge = np.concatenate([self.domain_left_edge, [0.0]]) self.domain_right_edge = np.concatenate([self.domain_right_edge, [1.0]]) def get_parameter(self, parameter, type=None): """ Gets a parameter not in the parameterDict. """ if parameter in self.parameters: return self.parameters[parameter] for line in open(self.parameter_filename): if line.find("#") >= 1: # Keep the commented lines line = line[: line.find("#")] line = line.strip().rstrip() if len(line) < 2: continue try: param, vals = map(string.strip, map(string.rstrip, line.split("="))) except ValueError: mylog.error("ValueError: '%s'", line) if parameter == param: if type is None: t = vals.split() else: t = map(type, vals.split()) if len(t) == 1: self.parameters[param] = t[0] else: self.parameters[param] = t if param.endswith("Units") and not param.startswith("Temperature"): dataType = param[:-5] self.conversion_factors[dataType] = self.parameters[param] return self.parameters[parameter] return "" @cached_property def unique_identifier(self) -> str: if "CurrentTimeIdentifier" in self.parameters: # enzo2 return str(self.parameters["CurrentTimeIdentifier"]) elif "MetaDataDatasetUUID" in self.parameters: # enzo2 return str(self.parameters["MetaDataDatasetUUID"]) elif "Internal" in self.parameters: # enzo3 return str( self.parameters["Internal"]["Provenance"]["CurrentTimeIdentidier"] ) else: return super().unique_identifier def _parse_parameter_file(self): """ Parses the parameter file and establishes the various dictionaries. 
""" # Let's read the file with open(self.parameter_filename) as f: line = f.readline().strip() f.seek(0) if line == "Internal:": self._parse_enzo3_parameter_file(f) else: self._parse_enzo2_parameter_file(f) def _parse_enzo3_parameter_file(self, f): self.parameters = p = libconf.load(f) sim = p["SimulationControl"] internal = p["Internal"] phys = p["Physics"] self.refine_by = sim["AMR"]["RefineBy"] self._periodicity = tuple( a == 3 for a in sim["Domain"]["LeftFaceBoundaryCondition"] ) self.dimensionality = sim["Domain"]["TopGridRank"] self.domain_dimensions = np.array( sim["Domain"]["TopGridDimensions"], dtype="int64" ) self.domain_left_edge = np.array( sim["Domain"]["DomainLeftEdge"], dtype="float64" ) self.domain_right_edge = np.array( sim["Domain"]["DomainRightEdge"], dtype="float64" ) self.gamma = phys["Hydro"]["Gamma"] self.current_time = internal["InitialTime"] self.cosmological_simulation = phys["Cosmology"]["ComovingCoordinates"] if self.cosmological_simulation == 1: cosmo = phys["Cosmology"] self.current_redshift = internal["CosmologyCurrentRedshift"] self.omega_lambda = cosmo["OmegaLambdaNow"] self.omega_matter = cosmo["OmegaMatterNow"] self.hubble_constant = cosmo["HubbleConstantNow"] else: self.current_redshift = 0.0 self.omega_lambda = 0.0 self.omega_matter = 0.0 self.hubble_constant = 0.0 self.cosmological_simulation = 0 self.particle_types = ["DarkMatter"] + phys["ActiveParticles"][ "ActiveParticlesEnabled" ] self.particle_types = tuple(self.particle_types) self.particle_types_raw = self.particle_types if self.dimensionality == 1: self._setup_1d() elif self.dimensionality == 2: self._setup_2d() def _parse_enzo2_parameter_file(self, f): for line in (l.strip() for l in f): if (len(line) < 2) or ("=" not in line): continue param, vals = (i.strip() for i in line.split("=", 1)) # First we try to decipher what type of value it is. vals = vals.split() # Special case approaching. if "(do" in vals: vals = vals[:1] if len(vals) == 0: pcast = str # Assume NULL output else: v = vals[0] # Figure out if it's castable to floating point: try: float(v) except ValueError: pcast = str else: if any("." in v or "e+" in v or "e-" in v for v in vals): pcast = float elif v == "inf": pcast = str else: pcast = int # Now we figure out what to do with it. 
if len(vals) == 0: vals = "" elif len(vals) == 1: vals = pcast(vals[0]) else: vals = np.array([pcast(i) for i in vals if i != "-99999"]) if param.startswith("Append"): if param not in self.parameters: self.parameters[param] = [] self.parameters[param].append(vals) else: self.parameters[param] = vals self.refine_by = self.parameters["RefineBy"] _periodicity = tuple( always_iterable(self.parameters["LeftFaceBoundaryCondition"] == 3) ) self.dimensionality = self.parameters["TopGridRank"] if self.dimensionality > 1: self.domain_dimensions = self.parameters["TopGridDimensions"] if len(self.domain_dimensions) < 3: tmp = self.domain_dimensions.tolist() tmp.append(1) self.domain_dimensions = np.array(tmp) _periodicity += (False,) self.domain_left_edge = np.array( self.parameters["DomainLeftEdge"], "float64" ).copy() self.domain_right_edge = np.array( self.parameters["DomainRightEdge"], "float64" ).copy() else: self.domain_left_edge = np.array( self.parameters["DomainLeftEdge"], "float64" ) self.domain_right_edge = np.array( self.parameters["DomainRightEdge"], "float64" ) self.domain_dimensions = np.array( [self.parameters["TopGridDimensions"], 1, 1] ) _periodicity += (False, False) assert len(_periodicity) == 3 self._periodicity = _periodicity self.gamma = self.parameters["Gamma"] if self.parameters["ComovingCoordinates"]: self.cosmological_simulation = 1 self.current_redshift = self.parameters["CosmologyCurrentRedshift"] self.omega_lambda = self.parameters["CosmologyOmegaLambdaNow"] self.omega_matter = self.parameters["CosmologyOmegaMatterNow"] self.omega_radiation = self.parameters.get( "CosmologyOmegaRadiationNow", 0.0 ) self.hubble_constant = self.parameters["CosmologyHubbleConstantNow"] else: self.current_redshift = 0.0 self.omega_lambda = 0.0 self.omega_matter = 0.0 self.hubble_constant = 0.0 self.cosmological_simulation = 0 self.particle_types = [] self.current_time = self.parameters["InitialTime"] if ( self.parameters["NumberOfParticles"] > 0 and "AppendActiveParticleType" in self.parameters.keys() ): # If this is the case, then we know we should have a DarkMatter # particle type, and we don't need the "io" type. self.parameters["AppendActiveParticleType"].append("DarkMatter") else: # We do not have an "io" type for Enzo particles if the # ActiveParticle machinery is on, as we simply will ignore any of # the non-DarkMatter particles in that case. However, for older # datasets, we call this particle type "io". 
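# Standalone sketch of the periodicity handling above: the boundary
# condition parameter may arrive as a scalar or as an array, and
# always_iterable normalizes both cases before the missing dimensions are
# padded with False. The 2D example values are made up.
import numpy as np
from more_itertools import always_iterable

_bc = np.array([3, 3])  # a 2D dataset, periodic along both axes
_periodicity = tuple(always_iterable(_bc == 3))
_periodicity += (False,) * (3 - len(_periodicity))
assert _periodicity == (True, True, False)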
self.particle_types = ["io"] for ptype in self.parameters.get("AppendActiveParticleType", []): self.particle_types.append(ptype) self.particle_types = tuple(self.particle_types) self.particle_types_raw = self.particle_types if self.dimensionality == 1: self._setup_1d() elif self.dimensionality == 2: self._setup_2d() def _set_code_unit_attributes(self): if self.cosmological_simulation: k = cosmology_get_units( self.hubble_constant, self.omega_matter, self.parameters["CosmologyComovingBoxSize"], self.parameters["CosmologyInitialRedshift"], self.current_redshift, ) # Now some CGS values box_size = self.parameters["CosmologyComovingBoxSize"] setdefaultattr(self, "length_unit", self.quan(box_size, "Mpccm/h")) setdefaultattr( self, "mass_unit", self.quan(k["urho"], "g/cm**3") * (self.length_unit.in_cgs()) ** 3, ) setdefaultattr(self, "time_unit", self.quan(k["utim"], "s")) setdefaultattr(self, "velocity_unit", self.quan(k["uvel"], "cm/s")) else: if "LengthUnits" in self.parameters: length_unit = self.parameters["LengthUnits"] mass_unit = self.parameters["DensityUnits"] * length_unit**3 time_unit = self.parameters["TimeUnits"] elif "SimulationControl" in self.parameters: units = self.parameters["SimulationControl"]["Units"] length_unit = units["Length"] mass_unit = units["Density"] * length_unit**3 time_unit = units["Time"] else: mylog.warning("Setting 1.0 in code units to be 1.0 cm") mylog.warning("Setting 1.0 in code units to be 1.0 s") length_unit = mass_unit = time_unit = 1.0 setdefaultattr(self, "length_unit", self.quan(length_unit, "cm")) setdefaultattr(self, "mass_unit", self.quan(mass_unit, "g")) setdefaultattr(self, "time_unit", self.quan(time_unit, "s")) setdefaultattr(self, "velocity_unit", self.length_unit / self.time_unit) density_unit = self.mass_unit / self.length_unit**3 magnetic_unit = np.sqrt(4 * np.pi * density_unit) * self.velocity_unit magnetic_unit = np.float64(magnetic_unit.in_cgs()) setdefaultattr(self, "magnetic_unit", self.quan(magnetic_unit, "gauss")) @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: return filename.endswith(".hierarchy") or os.path.exists( f"{filename}.hierarchy" ) @classmethod def _guess_candidates(cls, base, directories, files): candidates = [ _ for _ in files if _.endswith(".hierarchy") and os.path.exists(os.path.join(base, _.rsplit(".", 1)[0])) ] # Typically, Enzo won't have nested outputs. 
return candidates, (len(candidates) == 0) class EnzoDatasetInMemory(EnzoDataset): _index_class = EnzoHierarchyInMemory _dataset_type = "enzo_inline" def __init__(self, parameter_override=None, conversion_override=None): self.fluid_types += ("enzo",) if parameter_override is None: parameter_override = {} self._parameter_override = parameter_override if conversion_override is None: conversion_override = {} self._conversion_override = conversion_override Dataset.__init__(self, "InMemoryParameterFile", self._dataset_type) def _parse_parameter_file(self): enzo = self._obtain_enzo() ncalls = enzo.yt_parameter_file["NumberOfPythonCalls"] self._input_filename = f"cycle{ncalls:08d}" self.parameters["CurrentTimeIdentifier"] = time.time() self.parameters.update(enzo.yt_parameter_file) self.conversion_factors.update(enzo.conversion_factors) for i in self.parameters: if isinstance(self.parameters[i], tuple): self.parameters[i] = np.array(self.parameters[i]) if i.endswith("Units") and not i.startswith("Temperature"): dataType = i[:-5] self.conversion_factors[dataType] = self.parameters[i] self.domain_left_edge = self.parameters["DomainLeftEdge"].copy() self.domain_right_edge = self.parameters["DomainRightEdge"].copy() for i in self.conversion_factors: if isinstance(self.conversion_factors[i], tuple): self.conversion_factors[i] = np.array(self.conversion_factors[i]) for p, v in self._parameter_override.items(): self.parameters[p] = v for p, v in self._conversion_override.items(): self.conversion_factors[p] = v self.refine_by = self.parameters["RefineBy"] self._periodicity = tuple( always_iterable(self.parameters["LeftFaceBoundaryCondition"] == 3) ) self.dimensionality = self.parameters["TopGridRank"] self.domain_dimensions = self.parameters["TopGridDimensions"] self.current_time = self.parameters["InitialTime"] if self.parameters["ComovingCoordinates"]: self.cosmological_simulation = 1 self.current_redshift = self.parameters["CosmologyCurrentRedshift"] self.omega_lambda = self.parameters["CosmologyOmegaLambdaNow"] self.omega_matter = self.parameters["CosmologyOmegaMatterNow"] self.hubble_constant = self.parameters["CosmologyHubbleConstantNow"] else: self.current_redshift = 0.0 self.omega_lambda = 0.0 self.omega_matter = 0.0 self.hubble_constant = 0.0 self.cosmological_simulation = 0 def _obtain_enzo(self): import enzo return enzo @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: return False # These next two functions are taken from # http://www.reddit.com/r/Python/comments/6hj75/reverse_file_iterator/c03vms4 # Credit goes to "Brian" on Reddit def rblocks(f, blocksize=4096): """Read file as series of blocks from end of file to start. The data itself is in normal order, only the order of the blocks is reversed. ie. "hello world" -> ["ld","wor", "lo ", "hel"] Note that the file must be opened in binary mode. """ if "b" not in f.mode.lower(): raise Exception("File must be opened using binary mode.") size = os.stat(f.name).st_size fullblocks, lastblock = divmod(size, blocksize) # The first(end of file) block will be short, since this leaves # the rest aligned on a blocksize boundary. This may be more # efficient than having the last (first in file) block be short f.seek(-lastblock, 2) yield f.read(lastblock).decode("ascii") for i in range(fullblocks - 1, -1, -1): f.seek(i * blocksize) yield f.read(blocksize).decode("ascii") def rlines(f, keepends=False): """Iterate through the lines of a file in reverse order. If keepends is true, line endings are kept as part of the line. 
""" buf = "" for block in rblocks(f): buf = block + buf lines = buf.splitlines(keepends) # Return all lines except the first (since may be partial) if lines: lines.reverse() buf = lines.pop() # Last line becomes end of new first line. yield from lines yield buf # First line. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/enzo/definitions.py0000644000175100001770000000000014714401662020051 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/enzo/fields.py0000644000175100001770000003146114714401662017023 0ustar00runnerdockerimport numpy as np from yt._typing import KnownFieldsT from yt.fields.field_info_container import FieldInfoContainer from yt.utilities.physical_constants import me, mp b_units = "code_magnetic" e_units = "code_magnetic/c" ra_units = "code_length / code_time**2" rho_units = "code_mass / code_length**3" vel_units = "code_velocity" known_species_names = { "HI": "H_p0", "HII": "H_p1", "HeI": "He_p0", "HeII": "He_p1", "HeIII": "He_p2", "H2I": "H2_p0", "H2II": "H2_p1", "HM": "H_m1", "HeH": "HeH_p0", "DI": "D_p0", "DII": "D_p1", "HDI": "HD_p0", "Electron": "El", "OI": "O_p0", "OII": "O_p1", "OIII": "O_p2", "OIV": "O_p3", "OV": "O_p4", "OVI": "O_p5", "OVII": "O_p6", "OVIII": "O_p7", "OIX": "O_p8", } NODAL_FLAGS = { "BxF": [1, 0, 0], "ByF": [0, 1, 0], "BzF": [0, 0, 1], "Ex": [0, 1, 1], "Ey": [1, 0, 1], "Ez": [1, 1, 0], "AvgElec0": [0, 1, 1], "AvgElec1": [1, 0, 1], "AvgElec2": [1, 1, 0], } class EnzoFieldInfo(FieldInfoContainer): known_other_fields: KnownFieldsT = ( ("Cooling_Time", ("s", ["cooling_time"], None)), ("Dengo_Cooling_Rate", ("erg/g/s", [], None)), ("Grackle_Cooling_Rate", ("erg/s/cm**3", [], None)), ("HI_kph", ("1/code_time", ["H_p0_ionization_rate"], None)), ("HeI_kph", ("1/code_time", ["He_p0_ionization_rate"], None)), ("HeII_kph", ("1/code_time", ["He_p1_ionization_rate"], None)), ("H2I_kdiss", ("1/code_time", ["H2_p0_dissociation_rate"], None)), ("HM_kph", ("1/code_time", ["H_m1_ionization_rate"], None)), ("H2II_kdiss", ("1/code_time", ["H2_p1_dissociation_rate"], None)), ("Bx", (b_units, [], None)), ("By", (b_units, [], None)), ("Bz", (b_units, [], None)), ("BxF", (b_units, [], None)), ("ByF", (b_units, [], None)), ("BzF", (b_units, [], None)), ("Ex", (e_units, [], None)), ("Ey", (e_units, [], None)), ("Ez", (e_units, [], None)), ("AvgElec0", (e_units, [], None)), ("AvgElec1", (e_units, [], None)), ("AvgElec2", (e_units, [], None)), ("RadAccel1", (ra_units, ["radiation_acceleration_x"], None)), ("RadAccel2", (ra_units, ["radiation_acceleration_y"], None)), ("RadAccel3", (ra_units, ["radiation_acceleration_z"], None)), ("Dark_Matter_Density", (rho_units, ["dark_matter_density"], None)), ("Temperature", ("K", ["temperature"], None)), ("Dust_Temperature", ("K", ["dust_temperature"], None)), ("x-velocity", (vel_units, ["velocity_x"], None)), ("y-velocity", (vel_units, ["velocity_y"], None)), ("z-velocity", (vel_units, ["velocity_z"], None)), ("RaySegments", ("", ["ray_segments"], None)), ("PhotoGamma", ("eV/code_time", ["photo_gamma"], None)), ("PotentialField", ("code_velocity**2", ["gravitational_potential"], None)), ("Density", (rho_units, ["density"], None)), ("Metal_Density", (rho_units, ["metal_density"], None)), ("SN_Colour", (rho_units, [], None)), # Note: we do not alias Electron_Density to anything ("Electron_Density", (rho_units, [], None)), ) known_particle_fields: KnownFieldsT = ( ("particle_position_x", 
("code_length", [], None)), ("particle_position_y", ("code_length", [], None)), ("particle_position_z", ("code_length", [], None)), ("particle_velocity_x", (vel_units, [], None)), ("particle_velocity_y", (vel_units, [], None)), ("particle_velocity_z", (vel_units, [], None)), ("creation_time", ("code_time", [], None)), ("dynamical_time", ("code_time", [], None)), ("metallicity_fraction", ("code_metallicity", [], None)), ("metallicity", ("", [], None)), ("particle_type", ("", [], None)), ("particle_index", ("", [], None)), ("particle_mass", ("code_mass", [], None)), ("GridID", ("", [], None)), ("identifier", ("", ["particle_index"], None)), ("level", ("", [], None)), ("AccretionRate", ("code_mass/code_time", [], None)), ("AccretionRateTime", ("code_time", [], None)), ("AccretionRadius", ("code_length", [], None)), ("RadiationLifetime", ("code_time", [], None)), ) def __init__(self, ds, field_list): hydro_method = ds.parameters.get("HydroMethod", None) if hydro_method is None: hydro_method = ds.parameters["Physics"]["Hydro"]["HydroMethod"] if hydro_method == 2: sl_left = slice(None, -2, None) sl_right = slice(1, -1, None) div_fac = 1.0 else: sl_left = slice(None, -2, None) sl_right = slice(2, None, None) div_fac = 2.0 slice_info = (sl_left, sl_right, div_fac) super().__init__(ds, field_list, slice_info) # setup nodal flag information for field in NODAL_FLAGS: if ("enzo", field) in self: finfo = self["enzo", field] finfo.nodal_flag = np.array(NODAL_FLAGS[field]) def add_species_field(self, species): # This is currently specific to Enzo. Hopefully in the future we will # have deeper integration with other systems, such as Dengo, to provide # better understanding of ionization and molecular states. # # We have several fields to add based on a given species field. First # off, we add the species field itself. Then we'll add a few more # items... # self.add_output_field( ("enzo", f"{species}_Density"), sampling_type="cell", take_log=True, units="code_mass/code_length**3", ) yt_name = known_species_names[species] # don't alias electron density since mass is wrong if species != "Electron": self.alias(("gas", f"{yt_name}_density"), ("enzo", f"{species}_Density")) def setup_species_fields(self): species_names = [ fn.rsplit("_Density")[0] for ft, fn in self.field_list if fn.endswith("_Density") ] species_names = [sp for sp in species_names if sp in known_species_names] def _electron_density(field, data): return data["enzo", "Electron_Density"] * (me / mp) self.add_field( ("gas", "El_density"), sampling_type="cell", function=_electron_density, units=self.ds.unit_system["density"], ) for sp in species_names: self.add_species_field(sp) self.species_names.append(known_species_names[sp]) self.species_names.sort() # bb #1059 def setup_fluid_fields(self): from yt.fields.magnetic_field import setup_magnetic_field_aliases # Now we conditionally load a few other things. params = self.ds.parameters multi_species = params.get("MultiSpecies", None) dengo = params.get("DengoChemistryModel", 0) if multi_species is None: multi_species = params["Physics"]["AtomicPhysics"]["MultiSpecies"] if multi_species > 0 or dengo == 1: self.setup_species_fields() self.setup_energy_field() setup_magnetic_field_aliases(self, "enzo", [f"B{ax}" for ax in "xyz"]) def setup_energy_field(self): unit_system = self.ds.unit_system # We check which type of field we need, and then we add it. 
ge_name = None te_name = None params = self.ds.parameters multi_species = params.get("MultiSpecies", None) if multi_species is None: multi_species = params["Physics"]["AtomicPhysics"]["MultiSpecies"] hydro_method = params.get("HydroMethod", None) if hydro_method is None: hydro_method = params["Physics"]["Hydro"]["HydroMethod"] dual_energy = params.get("DualEnergyFormalism", None) if dual_energy is None: dual_energy = params["Physics"]["Hydro"]["DualEnergyFormalism"] if ("enzo", "Gas_Energy") in self.field_list: ge_name = "Gas_Energy" elif ("enzo", "GasEnergy") in self.field_list: ge_name = "GasEnergy" if ("enzo", "Total_Energy") in self.field_list: te_name = "Total_Energy" elif ("enzo", "TotalEnergy") in self.field_list: te_name = "TotalEnergy" if hydro_method == 2 and te_name is not None: self.add_output_field( ("enzo", te_name), sampling_type="cell", units="code_velocity**2" ) self.alias(("gas", "specific_thermal_energy"), ("enzo", te_name)) def _ge_plus_kin(field, data): ret = data["enzo", te_name] + 0.5 * data["gas", "velocity_x"] ** 2.0 if data.ds.dimensionality > 1: ret += 0.5 * data["gas", "velocity_y"] ** 2.0 if data.ds.dimensionality > 2: ret += 0.5 * data["gas", "velocity_z"] ** 2.0 return ret self.add_field( ("gas", "specific_total_energy"), sampling_type="cell", function=_ge_plus_kin, units=unit_system["specific_energy"], ) elif dual_energy == 1: if te_name is not None: self.add_output_field( ("enzo", te_name), sampling_type="cell", units="code_velocity**2" ) self.alias( ("gas", "specific_total_energy"), ("enzo", te_name), units=unit_system["specific_energy"], ) if ge_name is not None: self.add_output_field( ("enzo", ge_name), sampling_type="cell", units="code_velocity**2" ) self.alias( ("gas", "specific_thermal_energy"), ("enzo", ge_name), units=unit_system["specific_energy"], ) elif hydro_method in (4, 6) and te_name is not None: self.add_output_field( ("enzo", te_name), sampling_type="cell", units="code_velocity**2" ) # Subtract off B-field energy def _sub_b(field, data): ret = data["enzo", te_name] - 0.5 * data["gas", "velocity_x"] ** 2.0 if data.ds.dimensionality > 1: ret -= 0.5 * data["gas", "velocity_y"] ** 2.0 if data.ds.dimensionality > 2: ret -= 0.5 * data["gas", "velocity_z"] ** 2.0 ret -= data["gas", "magnetic_energy_density"] / data["gas", "density"] return ret self.add_field( ("gas", "specific_thermal_energy"), sampling_type="cell", function=_sub_b, units=unit_system["specific_energy"], ) elif te_name is not None: # Otherwise, we assume TotalEnergy is kinetic+thermal self.add_output_field( ("enzo", te_name), sampling_type="cell", units="code_velocity**2" ) self.alias( ("gas", "specific_total_energy"), ("enzo", te_name), units=unit_system["specific_energy"], ) def _tot_minus_kin(field, data): ret = data["enzo", te_name] - 0.5 * data["gas", "velocity_x"] ** 2.0 if data.ds.dimensionality > 1: ret -= 0.5 * data["gas", "velocity_y"] ** 2.0 if data.ds.dimensionality > 2: ret -= 0.5 * data["gas", "velocity_z"] ** 2.0 return ret self.add_field( ("gas", "specific_thermal_energy"), sampling_type="cell", function=_tot_minus_kin, units=unit_system["specific_energy"], ) if multi_species == 0 and "Mu" in params: def _mean_molecular_weight(field, data): return params["Mu"] * data["index", "ones"] self.add_field( ("gas", "mean_molecular_weight"), sampling_type="cell", function=_mean_molecular_weight, units="", ) def _number_density(field, data): return data["gas", "density"] / (mp * params["Mu"]) self.add_field( ("gas", "number_density"), sampling_type="cell", 
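# Worked example (hypothetical numbers): with Mu = 0.6 and a cell of
# density rho = 1.0e-24 g/cm**3, the derived number density is
#     n = rho / (Mu * m_p) ~= 1.0e-24 / (0.6 * 1.6726e-24) ~= 1.0 cm**-3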
function=_number_density, units=unit_system["number_density"], ) def setup_particle_fields(self, ptype): def _age(field, data): return data.ds.current_time - data["all", "creation_time"] self.add_field( (ptype, "age"), sampling_type="particle", function=_age, units="yr" ) super().setup_particle_fields(ptype) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/enzo/io.py0000644000175100001770000003425114714401662016164 0ustar00runnerdockerimport numpy as np from yt.geometry.selection_routines import GridSelector from yt.utilities.io_handler import BaseIOHandler from yt.utilities.logger import ytLogger as mylog from yt.utilities.on_demand_imports import _h5py as h5py _convert_mass = ("particle_mass", "mass") _particle_position_names: dict[str, str] = {} class IOHandlerPackedHDF5(BaseIOHandler): _dataset_type = "enzo_packed_3d" _base = slice(None) _field_dtype = "float64" def _read_field_names(self, grid): if grid.filename is None: return [] f = h5py.File(grid.filename, mode="r") try: group = f["/Grid%08i" % grid.id] except KeyError: group = f fields = [] dtypes = set() add_io = "io" in grid.ds.particle_types add_dm = "DarkMatter" in grid.ds.particle_types for name, v in group.items(): # NOTE: This won't work with 1D datasets or references. # For all versions of Enzo I know about, we can assume all floats # are of the same size. So, let's grab one. if not hasattr(v, "shape") or v.dtype == "O": continue elif len(v.dims) == 1: if grid.ds.dimensionality == 1: fields.append(("enzo", str(name))) elif add_io: fields.append(("io", str(name))) elif add_dm: fields.append(("DarkMatter", str(name))) else: fields.append(("enzo", str(name))) dtypes.add(v.dtype) if len(dtypes) == 1: # Now, if everything we saw was the same dtype, we can go ahead and # set it here. We do this because it is a HUGE savings for 32 bit # floats, since our numpy copying/casting is way faster than # h5py's, for some reason I don't understand. This does *not* need # to be correct -- it will get fixed later -- it just needs to be # okay for now. self._field_dtype = list(dtypes)[0] f.close() return fields @property def _read_exception(self): return (KeyError,) def _read_particle_coords(self, chunks, ptf): yield from ( (ptype, xyz, 0.0) for ptype, xyz in self._read_particle_fields(chunks, ptf, None) ) def _read_particle_fields(self, chunks, ptf, selector): chunks = list(chunks) for chunk in chunks: # These should be organized by grid filename f = None for g in chunk.objs: if g.filename is None: continue if f is None: # print("Opening (read) %s" % g.filename) f = h5py.File(g.filename, mode="r") nap = sum(g.NumberOfActiveParticles.values()) if g.NumberOfParticles == 0 and nap == 0: continue ds = f.get("/Grid%08i" % g.id) for ptype, field_list in sorted(ptf.items()): if ptype == "io": if g.NumberOfParticles == 0: continue pds = ds elif ptype == "DarkMatter": if g.NumberOfActiveParticles[ptype] == 0: continue pds = ds elif not g.NumberOfActiveParticles[ptype]: continue else: for pname in ["Active Particles", "Particles"]: pds = ds.get(f"{pname}/{ptype}") if pds is not None: break else: raise RuntimeError( "Could not find active particle group in data." ) pn = _particle_position_names.get(ptype, r"particle_position_%s") x, y, z = ( np.asarray(pds.get(pn % ax)[()], dtype="=f8") for ax in "xyz" ) if selector is None: # This only ever happens if the call is made from # _read_particle_coords. 
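# Sketch of the generator contract implied by the two call paths:
#   * selector is None -> yield only (ptype, (x, y, z)); the caller,
#     _read_particle_coords, re-wraps each as (ptype, xyz, 0.0).
#   * selector given   -> yield ((ptype, field), data[mask]) for every
#     requested field of every particle type in the chunk.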
yield ptype, (x, y, z) continue mask = selector.select_points(x, y, z, 0.0) if mask is None: continue for field in field_list: data = np.asarray(pds.get(field)[()], "=f8") if field in _convert_mass: data *= g.dds.prod(dtype="f8") yield (ptype, field), data[mask] if f: f.close() def io_iter(self, chunks, fields): h5_dtype = self._field_dtype for chunk in chunks: fid = None filename = -1 for obj in chunk.objs: if obj.filename is None: continue if obj.filename != filename: # Note one really important thing here: even if we do # implement LRU caching in the _read_obj_field function, # we'll still be doing file opening and whatnot. This is a # problem, but one we can return to. if fid is not None: fid.close() fid = h5py.h5f.open( obj.filename.encode("latin-1"), h5py.h5f.ACC_RDONLY ) filename = obj.filename for field in fields: nodal_flag = self.ds.field_info[field].nodal_flag dims = obj.ActiveDimensions[::-1] + nodal_flag[::-1] data = np.empty(dims, dtype=h5_dtype) yield field, obj, self._read_obj_field(obj, field, (fid, data)) if fid is not None: fid.close() def _read_obj_field(self, obj, field, fid_data): if fid_data is None: fid_data = (None, None) fid, data = fid_data if fid is None: close = True fid = h5py.h5f.open(obj.filename.encode("latin-1"), h5py.h5f.ACC_RDONLY) else: close = False if data is None: data = np.empty(obj.ActiveDimensions[::-1], dtype=self._field_dtype) ftype, fname = field try: node = "/Grid%08i/%s" % (obj.id, fname) dg = h5py.h5d.open(fid, node.encode("latin-1")) except KeyError: if fname == "Dark_Matter_Density": data[:] = 0 return data.T raise dg.read(h5py.h5s.ALL, h5py.h5s.ALL, data) # I don't know why, but on some installations of h5py this works, but # on others, nope. Doesn't seem to be a version thing. # dg.close() if close: fid.close() return data.T class IOHandlerPackedHDF5GhostZones(IOHandlerPackedHDF5): _dataset_type = "enzo_packed_3d_gz" def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) NGZ = self.ds.parameters.get("NumberOfGhostZones", 3) self._base = (slice(NGZ, -NGZ), slice(NGZ, -NGZ), slice(NGZ, -NGZ)) def _read_obj_field(self, *args, **kwargs): return super()._read_obj_field(*args, **kwargs)[self._base] class IOHandlerInMemory(BaseIOHandler): _dataset_type = "enzo_inline" def __init__(self, ds, ghost_zones=3): self.ds = ds import enzo self.enzo = enzo self.grids_in_memory = enzo.grid_data self.old_grids_in_memory = enzo.old_grid_data self.my_slice = ( slice(ghost_zones, -ghost_zones), slice(ghost_zones, -ghost_zones), slice(ghost_zones, -ghost_zones), ) BaseIOHandler.__init__(self, ds) def _read_field_names(self, grid): fields = [] add_io = "io" in grid.ds.particle_types for name, v in self.grids_in_memory[grid.id].items(): # NOTE: This won't work with 1D datasets or references. 
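# As in the HDF5 readers above, Enzo's arrays carry their axes in the
# reverse of yt's (x, y, z) ordering, which is why disk reads allocate
# buffers shaped ActiveDimensions[::-1] and return data.T (for a 3-D
# array, equivalent to the swapaxes(0, 2) used in this class).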
if not hasattr(v, "shape") or v.dtype == "O": continue elif v.ndim == 1: if grid.ds.dimensionality == 1: fields.append(("enzo", str(name))) elif add_io: fields.append(("io", str(name))) else: fields.append(("enzo", str(name))) return fields def _read_fluid_selection(self, chunks, selector, fields, size): rv = {} # Now we have to do something unpleasant chunks = list(chunks) if isinstance(selector, GridSelector): if not (len(chunks) == len(chunks[0].objs) == 1): raise RuntimeError g = chunks[0].objs[0] for ftype, fname in fields: rv[ftype, fname] = self.grids_in_memory[g.id][fname].swapaxes(0, 2) return rv if size is None: size = sum(g.count(selector) for chunk in chunks for g in chunk.objs) for field in fields: ftype, fname = field fsize = size rv[field] = np.empty(fsize, dtype="float64") ng = sum(len(c.objs) for c in chunks) mylog.debug( "Reading %s cells of %s fields in %s grids", size, [f2 for f1, f2 in fields], ng, ) ind = 0 for chunk in chunks: for g in chunk.objs: # We want a *hard error* here. # if g.id not in self.grids_in_memory: continue for field in fields: ftype, fname = field data_view = self.grids_in_memory[g.id][fname][ self.my_slice ].swapaxes(0, 2) nd = g.select(selector, data_view, rv[field], ind) ind += nd assert ind == fsize return rv def _read_particle_coords(self, chunks, ptf): chunks = list(chunks) for chunk in chunks: # These should be organized by grid filename for g in chunk.objs: if g.id not in self.grids_in_memory: continue nap = sum(g.NumberOfActiveParticles.values()) if g.NumberOfParticles == 0 and nap == 0: continue for ptype in sorted(ptf): x, y, z = ( self.grids_in_memory[g.id]["particle_position_x"], self.grids_in_memory[g.id]["particle_position_y"], self.grids_in_memory[g.id]["particle_position_z"], ) yield ptype, (x, y, z), 0.0 def _read_particle_fields(self, chunks, ptf, selector): chunks = list(chunks) for chunk in chunks: # These should be organized by grid filename for g in chunk.objs: if g.id not in self.grids_in_memory: continue nap = sum(g.NumberOfActiveParticles.values()) if g.NumberOfParticles == 0 and nap == 0: continue for ptype, field_list in sorted(ptf.items()): x, y, z = ( self.grids_in_memory[g.id]["particle_position_x"], self.grids_in_memory[g.id]["particle_position_y"], self.grids_in_memory[g.id]["particle_position_z"], ) mask = selector.select_points(x, y, z, 0.0) if mask is None: continue for field in field_list: data = self.grids_in_memory[g.id][field] if field in _convert_mass: data = data * g.dds.prod(dtype="f8") yield (ptype, field), data[mask] class IOHandlerPacked2D(IOHandlerPackedHDF5): _dataset_type = "enzo_packed_2d" _particle_reader = False def _read_data_set(self, grid, field): f = h5py.File(grid.filename, mode="r") ds = f["/Grid%08i/%s" % (grid.id, field)][:] f.close() return ds.transpose()[:, :, None] def _read_fluid_selection(self, chunks, selector, fields, size): rv = {} # Now we have to do something unpleasant chunks = list(chunks) if isinstance(selector, GridSelector): if not (len(chunks) == len(chunks[0].objs) == 1): raise RuntimeError g = chunks[0].objs[0] f = h5py.File(g.filename, mode="r") gds = f.get("/Grid%08i" % g.id) for ftype, fname in fields: rv[ftype, fname] = np.atleast_3d(gds.get(fname)[()].transpose()) f.close() return rv if size is None: size = sum(g.count(selector) for chunk in chunks for g in chunk.objs) for field in fields: ftype, fname = field fsize = size rv[field] = np.empty(fsize, dtype="float64") ng = sum(len(c.objs) for c in chunks) mylog.debug( "Reading %s cells of %s fields in %s grids", size, 
[f2 for f1, f2 in fields], ng, ) ind = 0 for chunk in chunks: f = None for g in chunk.objs: if f is None: # print("Opening (count) %s" % g.filename) f = h5py.File(g.filename, mode="r") gds = f.get("/Grid%08i" % g.id) if gds is None: gds = f for field in fields: ftype, fname = field ds = np.atleast_3d(gds.get(fname)[()].transpose()) nd = g.select(selector, ds, rv[field], ind) # caches ind += nd f.close() return rv class IOHandlerPacked1D(IOHandlerPackedHDF5): _dataset_type = "enzo_packed_1d" _particle_reader = False def _read_data_set(self, grid, field): f = h5py.File(grid.filename, mode="r") ds = f["/Grid%08i/%s" % (grid.id, field)][:] f.close() return ds.transpose()[:, None, None] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/enzo/misc.py0000644000175100001770000000226014714401662016503 0ustar00runnerdockerimport numpy as np from yt.utilities.physical_ratios import ( boltzmann_constant_erg_per_K, cm_per_mpc, mass_hydrogen_grams, newton_cgs, rho_crit_g_cm3_h2, ) def cosmology_get_units( hubble_constant, omega_matter, box_size, initial_redshift, current_redshift ): """ Return a dict of Enzo cosmological unit conversions. """ zp1 = 1.0 + current_redshift zip1 = 1.0 + initial_redshift k = {} # For better agreement with values calculated by Enzo, # adopt the exact constants that are used there. time_scaling = np.sqrt(1 / (4 * np.pi * newton_cgs * rho_crit_g_cm3_h2)) vel_scaling = cm_per_mpc / time_scaling temp_scaling = mass_hydrogen_grams / boltzmann_constant_erg_per_K * vel_scaling**2 k["utim"] = time_scaling / np.sqrt(omega_matter) / hubble_constant / zip1**1.5 k["urho"] = rho_crit_g_cm3_h2 * omega_matter * hubble_constant**2 * zp1**3 k["uxyz"] = cm_per_mpc * box_size / hubble_constant / zp1 k["uaye"] = 1.0 / zip1 k["uvel"] = vel_scaling * box_size * np.sqrt(omega_matter) * np.sqrt(zip1) k["utem"] = temp_scaling * (box_size**2) * omega_matter * zip1 k["aye"] = zip1 / zp1 return k ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/enzo/simulation_handling.py0000644000175100001770000007047514714401662021615 0ustar00runnerdockerimport glob import os import numpy as np from unyt import dimensions, unyt_array from unyt.unit_registry import UnitRegistry from yt.data_objects.time_series import DatasetSeries, SimulationTimeSeries from yt.funcs import only_on_root from yt.loaders import load from yt.utilities.cosmology import Cosmology from yt.utilities.exceptions import ( InvalidSimulationTimeSeries, MissingParameter, NoStoppingCondition, YTUnidentifiedDataType, ) from yt.utilities.logger import ytLogger as mylog from yt.utilities.parallel_tools.parallel_analysis_interface import parallel_objects class EnzoSimulation(SimulationTimeSeries): r""" Initialize an Enzo Simulation object. Upon creation, the parameter file is parsed and the time and redshift are calculated and stored in all_outputs. A time units dictionary is instantiated to allow for time outputs to be requested with physical time units. The get_time_series can be used to generate a DatasetSeries object. parameter_filename : str The simulation parameter file. find_outputs : bool If True, subdirectories within the GlobalDir directory are searched one by one for datasets. Time and redshift information are gathered by temporarily instantiating each dataset. This can be used when simulation data was created in a non-standard way, making it difficult to guess the corresponding time and redshift information. 
Default: False. Examples -------- >>> import yt >>> es = yt.load_simulation("enzo_tiny_cosmology/32Mpc_32.enzo", "Enzo") >>> es.get_time_series() >>> for ds in es: ... print(ds.current_time) """ def __init__(self, parameter_filename, find_outputs=False): self.simulation_type = "grid" self.key_parameters = ["stop_cycle"] SimulationTimeSeries.__init__( self, parameter_filename, find_outputs=find_outputs ) def _set_units(self): self.unit_registry = UnitRegistry() self.unit_registry.add("code_time", 1.0, dimensions.time) self.unit_registry.add("code_length", 1.0, dimensions.length) if self.cosmological_simulation: # Instantiate EnzoCosmology object for units and time conversions. self.cosmology = EnzoCosmology( self.parameters["CosmologyHubbleConstantNow"], self.parameters["CosmologyOmegaMatterNow"], self.parameters["CosmologyOmegaLambdaNow"], self.parameters.get("CosmologyOmegaRadiationNow", 0.0), 0.0, self.parameters["CosmologyInitialRedshift"], unit_registry=self.unit_registry, ) self.time_unit = self.cosmology.time_unit.in_units("s") if "h" in self.unit_registry: self.unit_registry.modify("h", self.hubble_constant) else: self.unit_registry.add( "h", self.hubble_constant, dimensions.dimensionless ) # Comoving lengths for my_unit in ["m", "pc", "AU"]: new_unit = f"{my_unit}cm" # technically not true, but should be ok self.unit_registry.add( new_unit, self.unit_registry.lut[my_unit][0], dimensions.length, f"\\rm{{{my_unit}}}/(1+z)", prefixable=True, ) self.length_unit = self.quan( self.box_size, "Mpccm / h", registry=self.unit_registry ) else: self.time_unit = self.quan(self.parameters["TimeUnits"], "s") self.length_unit = self.quan(self.parameters["LengthUnits"], "cm") self.box_size = self.length_unit self.domain_left_edge = self.domain_left_edge * self.length_unit self.domain_right_edge = self.domain_right_edge * self.length_unit self.unit_registry.modify("code_time", self.time_unit) self.unit_registry.modify("code_length", self.length_unit) self.unit_registry.add( "unitary", float(self.box_size.in_base()), self.length_unit.units.dimensions ) def get_time_series( self, time_data=True, redshift_data=True, initial_time=None, final_time=None, initial_redshift=None, final_redshift=None, initial_cycle=None, final_cycle=None, times=None, redshifts=None, tolerance=None, parallel=True, setup_function=None, ): """ Instantiate a DatasetSeries object for a set of outputs. If no additional keywords given, a DatasetSeries object will be created with all potential datasets created by the simulation. Outputs can be gather by specifying a time or redshift range (or combination of time and redshift), with a specific list of times or redshifts, a range of cycle numbers (for cycle based output), or by simply searching all subdirectories within the simulation directory. time_data : bool Whether or not to include time outputs when gathering datasets for time series. Default: True. redshift_data : bool Whether or not to include redshift outputs when gathering datasets for time series. Default: True. initial_time : tuple of type (float, str) The earliest time for outputs to be included. This should be given as the value and the string representation of the units. For example, (5.0, "Gyr"). If None, the initial time of the simulation is used. This can be used in combination with either final_time or final_redshift. Default: None. final_time : tuple of type (float, str) The latest time for outputs to be included. This should be given as the value and the string representation of the units. 
For example, (13.7, "Gyr"). If None, the final time of the simulation is used. This can be used in combination with either initial_time or initial_redshift. Default: None. times : tuple of type (float array, str) A list of times for which outputs will be found and the units of those values. For example, ([0, 1, 2, 3], "s"). Default: None. initial_redshift : float The earliest redshift for outputs to be included. If None, the initial redshift of the simulation is used. This can be used in combination with either final_time or final_redshift. Default: None. final_redshift : float The latest redshift for outputs to be included. If None, the final redshift of the simulation is used. This can be used in combination with either initial_time or initial_redshift. Default: None. redshifts : array_like A list of redshifts for which outputs will be found. Default: None. initial_cycle : float The earliest cycle for outputs to be included. If None, the initial cycle of the simulation is used. This can only be used with final_cycle. Default: None. final_cycle : float The latest cycle for outputs to be included. If None, the final cycle of the simulation is used. This can only be used in combination with initial_cycle. Default: None. tolerance : float Used in combination with "times" or "redshifts" keywords, this is the tolerance within which outputs are accepted given the requested times or redshifts. If None, the nearest output is always taken. Default: None. parallel : bool/int If True, the generated DatasetSeries will divide the work such that a single processor works on each dataset. If an integer is supplied, the work will be divided into that number of jobs. Default: True. setup_function : callable, accepts a ds This function will be called whenever a dataset is loaded. Examples -------- >>> import yt >>> es = yt.load_simulation("enzo_tiny_cosmology/32Mpc_32.enzo", "Enzo") >>> es.get_time_series( ... initial_redshift=10, final_time=(13.7, "Gyr"), redshift_data=False ... ) >>> for ds in es: ... print(ds.current_time) >>> es.get_time_series(redshifts=[3, 2, 1, 0]) >>> for ds in es: ... print(ds.current_time) """ if ( initial_redshift is not None or final_redshift is not None ) and not self.cosmological_simulation: raise InvalidSimulationTimeSeries( "An initial or final redshift has been given for a " + "noncosmological simulation." ) if time_data and redshift_data: my_all_outputs = self.all_outputs elif time_data: my_all_outputs = self.all_time_outputs elif redshift_data: my_all_outputs = self.all_redshift_outputs else: raise InvalidSimulationTimeSeries( "Both time_data and redshift_data are False." ) if not my_all_outputs: DatasetSeries.__init__(self, outputs=[], parallel=parallel) mylog.info("0 outputs loaded into time series.") return # Apply selection criteria to the set. 
        if times is not None:
            my_outputs = self._get_outputs_by_key(
                "time", times, tolerance=tolerance, outputs=my_all_outputs
            )
        elif redshifts is not None:
            my_outputs = self._get_outputs_by_key(
                "redshift", redshifts, tolerance=tolerance, outputs=my_all_outputs
            )
        elif initial_cycle is not None or final_cycle is not None:
            if initial_cycle is None:
                initial_cycle = 0
            else:
                initial_cycle = max(initial_cycle, 0)
            if final_cycle is None:
                final_cycle = self.parameters["StopCycle"]
            else:
                final_cycle = min(final_cycle, self.parameters["StopCycle"])
            # Both slice bounds must be integers; a bare float index here
            # raises TypeError, so cast the upper bound explicitly.
            my_outputs = my_all_outputs[
                int(
                    np.ceil(float(initial_cycle) / self.parameters["CycleSkipDataDump"])
                ) : int(final_cycle / self.parameters["CycleSkipDataDump"])
                + 1
            ]
        else:
            if initial_time is not None:
                if isinstance(initial_time, float):
                    my_initial_time = self.quan(initial_time, "code_time")
                elif isinstance(initial_time, tuple) and len(initial_time) == 2:
                    my_initial_time = self.quan(*initial_time)
                elif not isinstance(initial_time, unyt_array):
                    raise RuntimeError(
                        "Error: initial_time must be given as a float or "
                        + "tuple of (value, units)."
                    )
                else:
                    # initial_time is already a unyt_array; use it as given.
                    my_initial_time = initial_time
            elif initial_redshift is not None:
                my_initial_time = self.cosmology.t_from_z(initial_redshift)
            else:
                my_initial_time = self.initial_time

            if final_time is not None:
                if isinstance(final_time, float):
                    my_final_time = self.quan(final_time, "code_time")
                elif isinstance(final_time, tuple) and len(final_time) == 2:
                    my_final_time = self.quan(*final_time)
                elif not isinstance(final_time, unyt_array):
                    raise RuntimeError(
                        "Error: final_time must be given as a float or "
                        + "tuple of (value, units)."
                    )
                else:
                    # final_time is already a unyt_array; use it as given.
                    my_final_time = final_time
            elif final_redshift is not None:
                my_final_time = self.cosmology.t_from_z(final_redshift)
            else:
                my_final_time = self.final_time

            my_initial_time.convert_to_units("s")
            my_final_time.convert_to_units("s")
            my_times = np.array([a["time"] for a in my_all_outputs])
            my_indices = np.digitize([my_initial_time, my_final_time], my_times)
            if my_initial_time == my_times[my_indices[0] - 1]:
                my_indices[0] -= 1
            my_outputs = my_all_outputs[my_indices[0] : my_indices[1]]

        init_outputs = []
        for output in my_outputs:
            if os.path.exists(output["filename"]):
                init_outputs.append(output["filename"])

        DatasetSeries.__init__(
            self, outputs=init_outputs, parallel=parallel, setup_function=setup_function
        )
        mylog.info("%d outputs loaded into time series.", len(init_outputs))

    def _parse_parameter_file(self):
        """
        Parses the parameter file and establishes the various dictionaries.
        """

        self.conversion_factors = {}
        redshift_outputs = []

        # Let's read the file
        with open(self.parameter_filename) as fh:
            lines = fh.readlines()
        for line in (l.strip() for l in lines):
            if "#" in line:
                line = line[0 : line.find("#")]
            if "//" in line:
                line = line[0 : line.find("//")]
            if len(line) < 2:
                continue
            param, vals = (i.strip() for i in line.split("=", 1))
            # First we try to decipher what type of value it is.
            vals = vals.split()
            # Special case approaching.
            if "(do" in vals:
                vals = vals[:1]
            if len(vals) == 0:
                pcast = str  # Assume NULL output
            else:
                v = vals[0]
                # Figure out if it's castable to floating point:
                try:
                    float(v)
                except ValueError:
                    pcast = str
                else:
                    if any("." in v or "e" in v for v in vals):
                        pcast = float
                    elif v == "inf":
                        pcast = str
                    else:
                        pcast = int
            # Now we figure out what to do with it.
            if param.endswith("Units") and not param.startswith("Temperature"):
                dataType = param[:-5]  # This one better be a float.
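# Examples of the type inference above (hypothetical parameter lines):
#     StopCycle               = 100    -> int(100)
#     CosmologyOmegaMatterNow = 0.268  -> float(0.268)
#     DomainLeftEdge          = 0 0 0  -> np.array([0, 0, 0])
#     StopTime                = inf    -> left as the string "inf"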
self.conversion_factors[dataType] = float(vals[0]) if param.startswith("CosmologyOutputRedshift["): index = param[param.find("[") + 1 : param.find("]")] redshift_outputs.append( {"index": int(index), "redshift": float(vals[0])} ) elif len(vals) == 0: vals = "" elif len(vals) == 1: vals = pcast(vals[0]) else: vals = np.array([pcast(i) for i in vals if i != "-99999"]) self.parameters[param] = vals self.refine_by = self.parameters["RefineBy"] self.dimensionality = self.parameters["TopGridRank"] if self.dimensionality > 1: self.domain_dimensions = self.parameters["TopGridDimensions"] if len(self.domain_dimensions) < 3: tmp = self.domain_dimensions.tolist() tmp.append(1) self.domain_dimensions = np.array(tmp) self.domain_left_edge = np.array( self.parameters["DomainLeftEdge"], "float64" ).copy() self.domain_right_edge = np.array( self.parameters["DomainRightEdge"], "float64" ).copy() else: self.domain_left_edge = np.array( self.parameters["DomainLeftEdge"], "float64" ) self.domain_right_edge = np.array( self.parameters["DomainRightEdge"], "float64" ) self.domain_dimensions = np.array( [self.parameters["TopGridDimensions"], 1, 1] ) if self.parameters["ComovingCoordinates"]: cosmo_attr = { "box_size": "CosmologyComovingBoxSize", "omega_lambda": "CosmologyOmegaLambdaNow", "omega_matter": "CosmologyOmegaMatterNow", "omega_radiation": "CosmologyOmegaRadiationNow", "hubble_constant": "CosmologyHubbleConstantNow", "initial_redshift": "CosmologyInitialRedshift", "final_redshift": "CosmologyFinalRedshift", } self.cosmological_simulation = 1 for a, v in cosmo_attr.items(): if v not in self.parameters: raise MissingParameter(self.parameter_filename, v) setattr(self, a, self.parameters[v]) else: self.cosmological_simulation = 0 self.omega_lambda = self.omega_matter = self.hubble_constant = 0.0 # make list of redshift outputs self.all_redshift_outputs = [] if not self.cosmological_simulation: return for output in redshift_outputs: output["filename"] = os.path.join( self.parameters["GlobalDir"], "%s%04d" % (self.parameters["RedshiftDumpDir"], output["index"]), "%s%04d" % (self.parameters["RedshiftDumpName"], output["index"]), ) del output["index"] self.all_redshift_outputs = redshift_outputs def _calculate_time_outputs(self): """ Calculate time outputs and their redshifts if cosmological. """ self.all_time_outputs = [] if ( self.final_time is None or "dtDataDump" not in self.parameters or self.parameters["dtDataDump"] <= 0.0 ): return [] index = 0 current_time = self.initial_time.copy() dt_datadump = self.quan(self.parameters["dtDataDump"], "code_time") while current_time <= self.final_time + dt_datadump: filename = os.path.join( self.parameters["GlobalDir"], "%s%04d" % (self.parameters["DataDumpDir"], index), "%s%04d" % (self.parameters["DataDumpName"], index), ) output = {"index": index, "filename": filename, "time": current_time.copy()} output["time"] = min(output["time"], self.final_time) if self.cosmological_simulation: output["redshift"] = self.cosmology.z_from_t(current_time) self.all_time_outputs.append(output) if np.abs(self.final_time - current_time) / self.final_time < 1e-4: break current_time += dt_datadump index += 1 def _calculate_cycle_outputs(self): """ Calculate cycle outputs. """ mylog.warning("Calculating cycle outputs. 
Dataset times will be unavailable.") if ( self.stop_cycle is None or "CycleSkipDataDump" not in self.parameters or self.parameters["CycleSkipDataDump"] <= 0.0 ): return [] self.all_time_outputs = [] index = 0 for cycle in range( 0, self.stop_cycle + 1, self.parameters["CycleSkipDataDump"] ): filename = os.path.join( self.parameters["GlobalDir"], "%s%04d" % (self.parameters["DataDumpDir"], index), "%s%04d" % (self.parameters["DataDumpName"], index), ) output = {"index": index, "filename": filename, "cycle": cycle} self.all_time_outputs.append(output) index += 1 def _get_all_outputs(self, *, find_outputs=False): """ Get all potential datasets and combine into a time-sorted list. """ # Create the set of outputs from which further selection will be done. if find_outputs: self._find_outputs() elif ( self.parameters["dtDataDump"] > 0 and self.parameters["CycleSkipDataDump"] > 0 ): mylog.info( "Simulation %s has both dtDataDump and CycleSkipDataDump set.", self.parameter_filename, ) mylog.info( " Unable to calculate datasets. " "Attempting to search in the current directory" ) self._find_outputs() else: # Get all time or cycle outputs. if self.parameters["CycleSkipDataDump"] > 0: self._calculate_cycle_outputs() else: self._calculate_time_outputs() # Calculate times for redshift outputs. if self.cosmological_simulation: for output in self.all_redshift_outputs: output["time"] = self.cosmology.t_from_z(output["redshift"]) self.all_redshift_outputs.sort(key=lambda obj: obj["time"]) self.all_outputs = self.all_time_outputs + self.all_redshift_outputs if self.parameters["CycleSkipDataDump"] <= 0: self.all_outputs.sort(key=lambda obj: obj["time"].to_ndarray()) def _calculate_simulation_bounds(self): """ Figure out the starting and stopping time and redshift for the simulation. """ if "StopCycle" in self.parameters: self.stop_cycle = self.parameters["StopCycle"] # Convert initial/final redshifts to times. if self.cosmological_simulation: self.initial_time = self.cosmology.t_from_z(self.initial_redshift) self.initial_time.units.registry = self.unit_registry self.final_time = self.cosmology.t_from_z(self.final_redshift) self.final_time.units.registry = self.unit_registry # If not a cosmology simulation, figure out the stopping criteria. else: if "InitialTime" in self.parameters: self.initial_time = self.quan( self.parameters["InitialTime"], "code_time" ) else: self.initial_time = self.quan(0.0, "code_time") if "StopTime" in self.parameters: self.final_time = self.quan(self.parameters["StopTime"], "code_time") else: self.final_time = None if not ("StopTime" in self.parameters or "StopCycle" in self.parameters): raise NoStoppingCondition(self.parameter_filename) if self.final_time is None: mylog.warning( "Simulation %s has no stop time set, stopping condition " "will be based only on cycles.", self.parameter_filename, ) def _set_parameter_defaults(self): """ Set some default parameters to avoid problems if they are not in the parameter file. 
""" self.parameters["GlobalDir"] = self.directory self.parameters["DataDumpName"] = "data" self.parameters["DataDumpDir"] = "DD" self.parameters["RedshiftDumpName"] = "RedshiftOutput" self.parameters["RedshiftDumpDir"] = "RD" self.parameters["ComovingCoordinates"] = 0 self.parameters["TopGridRank"] = 3 self.parameters["DomainLeftEdge"] = np.zeros(self.parameters["TopGridRank"]) self.parameters["DomainRightEdge"] = np.ones(self.parameters["TopGridRank"]) self.parameters["RefineBy"] = 2 # technically not the enzo default self.parameters["StopCycle"] = 100000 self.parameters["dtDataDump"] = 0.0 self.parameters["CycleSkipDataDump"] = 0.0 self.parameters["LengthUnits"] = 1.0 self.parameters["TimeUnits"] = 1.0 self.parameters["CosmologyOmegaRadiationNow"] = 0.0 def _find_outputs(self): """ Search for directories matching the data dump keywords. If found, get dataset times py opening the ds. """ # look for time outputs. potential_time_outputs = glob.glob( os.path.join( self.parameters["GlobalDir"], f"{self.parameters['DataDumpDir']}*" ) ) self.all_time_outputs = self._check_for_outputs(potential_time_outputs) self.all_time_outputs.sort(key=lambda obj: obj["time"]) # look for redshift outputs. potential_redshift_outputs = glob.glob( os.path.join( self.parameters["GlobalDir"], f"{self.parameters['RedshiftDumpDir']}*" ) ) self.all_redshift_outputs = self._check_for_outputs(potential_redshift_outputs) self.all_redshift_outputs.sort(key=lambda obj: obj["time"]) self.all_outputs = self.all_time_outputs + self.all_redshift_outputs self.all_outputs.sort(key=lambda obj: obj["time"]) only_on_root(mylog.info, "Located %d total outputs.", len(self.all_outputs)) # manually set final time and redshift with last output if self.all_outputs: self.final_time = self.all_outputs[-1]["time"] if self.cosmological_simulation: self.final_redshift = self.all_outputs[-1]["redshift"] def _check_for_outputs(self, potential_outputs): """ Check a list of files to see if they are valid datasets. """ only_on_root( mylog.info, "Checking %d potential outputs.", len(potential_outputs) ) my_outputs = {} llevel = mylog.level # suppress logging as we load every dataset, unless set to debug if llevel > 10 and llevel < 40: mylog.setLevel(40) for my_storage, output in parallel_objects( potential_outputs, storage=my_outputs ): if self.parameters["DataDumpDir"] in output: dir_key = self.parameters["DataDumpDir"] output_key = self.parameters["DataDumpName"] else: dir_key = self.parameters["RedshiftDumpDir"] output_key = self.parameters["RedshiftDumpName"] index = output[output.find(dir_key) + len(dir_key) :] filename = os.path.join( self.parameters["GlobalDir"], f"{dir_key}{index}", f"{output_key}{index}", ) try: ds = load(filename) except (FileNotFoundError, YTUnidentifiedDataType): mylog.error("Failed to load %s", filename) continue my_storage.result = { "filename": filename, "time": ds.current_time.in_units("s"), } if ds.cosmological_simulation: my_storage.result["redshift"] = ds.current_redshift mylog.setLevel(llevel) my_outputs = [ my_output for my_output in my_outputs.values() if my_output is not None ] return my_outputs def _write_cosmology_outputs(self, filename, outputs, start_index, decimals=3): """ Write cosmology output parameters for a cosmology splice. 
""" mylog.info("Writing redshift output list to %s.", filename) f = open(filename, "w") for q, output in enumerate(outputs): f.write( (f"CosmologyOutputRedshift[%d] = %.{decimals}f\n") % ((q + start_index), output["redshift"]) ) f.close() class EnzoCosmology(Cosmology): def __init__( self, hubble_constant, omega_matter, omega_lambda, omega_radiation, omega_curvature, initial_redshift, unit_registry=None, ): Cosmology.__init__( self, hubble_constant=hubble_constant, omega_matter=omega_matter, omega_lambda=omega_lambda, omega_radiation=omega_radiation, omega_curvature=omega_curvature, unit_registry=unit_registry, ) self.initial_redshift = initial_redshift self.initial_time = self.t_from_z(self.initial_redshift) # time units = 1 / sqrt(4 * pi * G rho_0 * (1 + z_i)**3), # rho_0 = (3 * Omega_m * h**2) / (8 * pi * G) self.time_unit = ( ( 1.5 * self.omega_matter * self.hubble_constant**2 * (1 + self.initial_redshift) ** 3 ) ** -0.5 ).in_units("s") self.time_unit.units.registry = self.unit_registry ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3151522 yt-4.4.0/yt/frontends/enzo/tests/0000755000175100001770000000000014714401715016337 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/enzo/tests/__init__.py0000644000175100001770000000000014714401662020437 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/enzo/tests/test_outputs.py0000644000175100001770000002147614714401662021506 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_almost_equal, assert_array_equal, assert_equal from yt.frontends.enzo.api import EnzoDataset from yt.frontends.enzo.fields import NODAL_FLAGS from yt.testing import ( assert_allclose_units, requires_file, requires_module, units_override_check, ) from yt.utilities.answer_testing.framework import ( big_patch_amr, data_dir_load, requires_ds, small_patch_amr, ) from yt.visualization.plot_window import SlicePlot _fields = ( ("gas", "temperature"), ("gas", "density"), ("gas", "velocity_magnitude"), ("gas", "velocity_divergence"), ) two_sphere_test = "ActiveParticleTwoSphere/DD0011/DD0011" active_particle_cosmology = "ActiveParticleCosmology/DD0046/DD0046" ecp = "enzo_cosmology_plus/DD0046/DD0046" g30 = "IsolatedGalaxy/galaxy0030/galaxy0030" enzotiny = "enzo_tiny_cosmology/DD0046/DD0046" toro1d = "ToroShockTube/DD0001/data0001" kh2d = "EnzoKelvinHelmholtz/DD0011/DD0011" mhdctot = "MHDCTOrszagTang/DD0004/data0004" dnz = "DeeplyNestedZoom/DD0025/data0025" p3mini = "PopIII_mini/DD0034/DD0034" def color_conservation(ds): species_names = ds.field_info.species_names dd = ds.all_data() dens_yt = dd["gas", "density"].copy() # Enumerate our species here for s in sorted(species_names): if s == "El": continue dens_yt -= dd[f"{s}_density"] dens_yt -= dd["enzo", "Metal_Density"] delta_yt = np.abs(dens_yt / dd["gas", "density"]) # Now we compare color conservation to Enzo's color conservation dd = ds.all_data() dens_enzo = dd["enzo", "Density"].copy() for f in sorted(ds.field_list): ff = f[1] if not ff.endswith("_Density"): continue start_strings = [ "Electron_", "SFR_", "Forming_Stellar_", "Dark_Matter", "Star_Particle_", ] if any(ff.startswith(ss) for ss in start_strings): continue dens_enzo -= dd[f] delta_enzo = np.abs(dens_enzo / dd["enzo", "Density"]) np.testing.assert_almost_equal(delta_yt, delta_enzo) def check_color_conservation(ds): species_names = 
ds.field_info.species_names dd = ds.all_data() dens_yt = dd["gas", "density"].copy() # Enumerate our species here for s in sorted(species_names): if s == "El": continue dens_yt -= dd[f"{s}_density"] dens_yt -= dd["enzo", "Metal_Density"] delta_yt = np.abs(dens_yt / dd["gas", "density"]) # Now we compare color conservation to Enzo's color conservation dd = ds.all_data() dens_enzo = dd["enzo", "Density"].copy() for f in sorted(ds.field_list): ff = f[1] if not ff.endswith("_Density"): continue start_strings = [ "Electron_", "SFR_", "Forming_Stellar_", "Dark_Matter", "Star_Particle_", ] if any(ff.startswith(ss) for ss in start_strings): continue dens_enzo -= dd[f] delta_enzo = np.abs(dens_enzo / dd["enzo", "Density"]) return assert_almost_equal, delta_yt, delta_enzo m7 = "DD0010/moving7_0010" @requires_module("h5py") @requires_ds(m7) def test_moving7(): ds = data_dir_load(m7) assert_equal(str(ds), "moving7_0010") for test in small_patch_amr(m7, _fields): test_moving7.__name__ = test.description yield test @requires_module("h5py") @requires_ds(g30, big_data=True) def test_galaxy0030(): ds = data_dir_load(g30) yield check_color_conservation(ds) assert_equal(str(ds), "galaxy0030") for test in big_patch_amr(ds, _fields): test_galaxy0030.__name__ = test.description yield test assert_equal(ds.particle_type_counts, {"io": 1124453}) @requires_module("h5py") @requires_ds(toro1d) def test_toro1d(): ds = data_dir_load(toro1d) assert_equal(str(ds), "data0001") for test in small_patch_amr(ds, ds.field_list): test_toro1d.__name__ = test.description yield test @requires_module("h5py") @requires_ds(kh2d) def test_kh2d(): ds = data_dir_load(kh2d) assert_equal(str(ds), "DD0011") for test in small_patch_amr(ds, ds.field_list): test_kh2d.__name__ = test.description yield test @requires_module("h5py") @requires_ds(ecp, big_data=True) def test_ecp(): ds = data_dir_load(ecp) # Now we test our species fields yield check_color_conservation(ds) @requires_module("h5py") @requires_file(enzotiny) def test_units_override(): units_override_check(enzotiny) @requires_module("h5py") @requires_ds(ecp, big_data=True) def test_nuclei_density_fields(): ds = data_dir_load(ecp) ad = ds.all_data() assert_array_equal( ad["gas", "H_nuclei_density"], (ad["gas", "H_p0_number_density"] + ad["gas", "H_p1_number_density"]), ) assert_array_equal( ad["gas", "He_nuclei_density"], ( ad["gas", "He_p0_number_density"] + ad["gas", "He_p1_number_density"] + ad["gas", "He_p2_number_density"] ), ) @requires_module("h5py") @requires_file(enzotiny) def test_EnzoDataset(): assert isinstance(data_dir_load(enzotiny), EnzoDataset) @requires_module("h5py") @requires_file(two_sphere_test) @requires_file(active_particle_cosmology) def test_active_particle_datasets(): two_sph = data_dir_load(two_sphere_test) assert "AccretingParticle" in two_sph.particle_types_raw assert "io" not in two_sph.particle_types_raw assert "all" in two_sph.particle_types assert "nbody" in two_sph.particle_types assert_equal(len(two_sph.particle_unions), 2) pfields = [ "GridID", "creation_time", "dynamical_time", "identifier", "level", "metallicity", "particle_mass", ] pfields += [f"particle_position_{d}" for d in "xyz"] pfields += [f"particle_velocity_{d}" for d in "xyz"] acc_part_fields = [("AccretingParticle", pf) for pf in ["AccretionRate"] + pfields] real_acc_part_fields = sorted( f for f in two_sph.field_list if f[0] == "AccretingParticle" ) assert_equal(acc_part_fields, real_acc_part_fields) apcos = data_dir_load(active_particle_cosmology) assert_equal(["CenOstriker", 
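# "Color" conservation, as checked above: the sum of every species
# density plus Metal_Density (skipping electron fields, whose on-disk
# values use a different mass scaling) should reproduce the total, i.e.
#     Density ~= sum_s s_Density + Metal_Density
# so the yt-side and Enzo-side residuals agree to rounding error.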
"DarkMatter"], apcos.particle_types_raw) assert "all" in apcos.particle_unions assert "nbody" in apcos.particle_unions apcos_fields = [("CenOstriker", pf) for pf in pfields] real_apcos_fields = sorted(f for f in apcos.field_list if f[0] == "CenOstriker") assert_equal(apcos_fields, real_apcos_fields) assert_equal( apcos.particle_type_counts, {"CenOstriker": 899755, "DarkMatter": 32768} ) @requires_module("h5py") @requires_file(mhdctot) def test_face_centered_mhdct_fields(): ds = data_dir_load(mhdctot) ad = ds.all_data() grid = ds.index.grids[0] for field, flag in NODAL_FLAGS.items(): dims = ds.domain_dimensions assert_equal(ad[field].shape, (dims.prod(), 2 * sum(flag))) assert_equal(grid[field].shape, tuple(dims) + (2 * sum(flag),)) # Average of face-centered fields should be the same as cell-centered field assert (ad["enzo", "BxF"].sum(axis=-1) / 2 == ad["enzo", "Bx"]).all() assert (ad["enzo", "ByF"].sum(axis=-1) / 2 == ad["enzo", "By"]).all() assert (ad["enzo", "BzF"].sum(axis=-1) / 2 == ad["enzo", "Bz"]).all() @requires_module("h5py") @requires_file(dnz) def test_deeply_nested_zoom(): ds = data_dir_load(dnz) # carefully chosen to just barely miss a grid in the middle of the image center = [0.4915073260199302, 0.5052605316800006, 0.4905805557500548] plot = SlicePlot(ds, "z", "density", width=(0.001, "pc"), center=center) image = plot.frb["gas", "density"] assert (image > 0).all() v, c = ds.find_max(("gas", "density")) assert_allclose_units(v, ds.quan(0.005878286377124154, "g/cm**3")) c_actual = [0.49150732540021, 0.505260532936791, 0.49058055816398] c_actual = ds.arr(c_actual, "code_length") assert_allclose_units(c, c_actual) assert_equal(max(g["gas", "density"].max() for g in ds.index.grids), v) @requires_module("h5py") @requires_file(kh2d) def test_2d_grid_shape(): # see issue #1601 # we want to make sure that accessing data on a grid object # returns a 3D array with a dummy dimension. ds = data_dir_load(kh2d) g = ds.index.grids[1] assert g["gas", "density"].shape == (128, 100, 1) @requires_module("h5py") @requires_file(p3mini) def test_nonzero_omega_radiation(): """ Test support for non-zero omega_radiation cosmologies. """ ds = data_dir_load(p3mini) assert_equal(ds.omega_radiation, ds.cosmology.omega_radiation) tratio = ds.current_time / ds.cosmology.t_from_z(ds.current_redshift) assert_almost_equal( tratio, 1, 4, err_msg="Simulation time not consistent with cosmology calculator.", ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3151522 yt-4.4.0/yt/frontends/enzo_e/0000755000175100001770000000000014714401715015501 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/enzo_e/__init__.py0000644000175100001770000000000014714401662017601 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/enzo_e/api.py0000644000175100001770000000030714714401662016625 0ustar00runnerdockerfrom . 
import tests from .data_structures import EnzoEDataset, EnzoEGrid, EnzoEHierarchy from .fields import EnzoEFieldInfo from .io import EnzoEIOHandler add_enzoe_field = EnzoEFieldInfo.add_field ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/enzo_e/data_structures.py0000644000175100001770000004603214714401662021275 0ustar00runnerdockerimport os from functools import cached_property import numpy as np from yt.data_objects.index_subobjects.grid_patch import AMRGridPatch from yt.data_objects.static_output import Dataset from yt.fields.field_info_container import NullFunc from yt.frontends.enzo.misc import cosmology_get_units from yt.frontends.enzo_e.fields import EnzoEFieldInfo from yt.frontends.enzo_e.misc import ( get_block_info, get_child_index, get_listed_subparam, get_root_block_id, get_root_blocks, is_parent, nested_dict_get, ) from yt.funcs import get_pbar, setdefaultattr from yt.geometry.grid_geometry_handler import GridIndex from yt.utilities.cosmology import Cosmology from yt.utilities.logger import ytLogger as mylog from yt.utilities.on_demand_imports import _h5py as h5py, _libconf as libconf class EnzoEGrid(AMRGridPatch): """ Class representing a single EnzoE Grid instance. """ _id_offset = 0 _refine_by = 2 def __init__(self, id, index, block_name, filename=None): """ Returns an instance of EnzoEGrid with *id*, associated with *filename* and *index*. """ # All of the field parameters will be passed to us as needed. AMRGridPatch.__init__(self, id, filename=filename, index=index) self.block_name = block_name self._children_ids = None self._parent_id = -1 self.Level = -1 def __repr__(self): return "EnzoEGrid_%04d" % self.id def _prepare_grid(self): """Copies all the appropriate attributes from the index.""" h = self.index # cache it my_ind = self.id - self._id_offset self.ActiveDimensions = h.grid_dimensions[my_ind] self.LeftEdge = h.grid_left_edge[my_ind] self.RightEdge = h.grid_right_edge[my_ind] def get_parent_id(self, desc_block_name): if self.block_name == desc_block_name: raise RuntimeError("Child and parent are the same!") dim = self.ds.dimensionality d_block = desc_block_name[1:].replace(":", "") parent = self while True: a_block = parent.block_name[1:].replace(":", "") gengap = (len(d_block) - len(a_block)) / dim if gengap <= 1: return parent.id cid = get_child_index(a_block, d_block) parent = self.index.grids[parent._children_ids[cid]] def add_child(self, child): if self._children_ids is None: self._children_ids = -1 * np.ones( self._refine_by**self.ds.dimensionality, dtype=np.int64 ) a_block = self.block_name[1:].replace(":", "") d_block = child.block_name[1:].replace(":", "") cid = get_child_index(a_block, d_block) self._children_ids[cid] = child.id @cached_property def particle_count(self): with h5py.File(self.filename, mode="r") as f: fnstr = "{}/{}".format( self.block_name, self.ds.index.io._sep.join(["particle", "%s", "%s"]), ) return { ptype: f.get(fnstr % (ptype, pfield)).size for ptype, pfield in self.ds.index.io.sample_pfields.items() } @cached_property def total_particles(self) -> int: return sum(self.particle_count.values()) @property def Parent(self): if self._parent_id == -1: return None return self.index.grids[self._parent_id] @property def Children(self): if self._children_ids is None: return [] return [self.index.grids[cid] for cid in self._children_ids] class EnzoEHierarchy(GridIndex): _strip_path = False grid = EnzoEGrid _preload_implemented = True def __init__(self, ds, dataset_type): 
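# _count_grids below sizes the index by streaming the .block_list file
# in 32 KiB chunks and counting newlines, one line per block. The same
# pattern in isolation (a hypothetical helper, not part of this class):
#     def count_lines(path, blk=32768):
#         n = 0
#         with open(path) as fh:
#             while chunk := fh.read(blk):
#                 n += chunk.count("\n")
#         return n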
self.dataset_type = dataset_type self.directory = os.path.dirname(ds.parameter_filename) self.index_filename = ds.parameter_filename if os.path.getsize(self.index_filename) == 0: raise OSError(-1, "File empty", self.index_filename) GridIndex.__init__(self, ds, dataset_type) self.dataset.dataset_type = self.dataset_type def _count_grids(self): fblock_size = 32768 f = open(self.ds.parameter_filename) f.seek(0, 2) file_size = f.tell() nblocks = np.ceil(float(file_size) / fblock_size).astype(np.int64) f.seek(0) offset = f.tell() ngrids = 0 for _ in range(nblocks): my_block = min(fblock_size, file_size - offset) buff = f.read(my_block) ngrids += buff.count("\n") offset += my_block f.close() self.num_grids = ngrids self.dataset_type = "enzo_e" def _parse_index(self): self.grids = np.empty(self.num_grids, dtype="object") c = 1 pbar = get_pbar("Parsing Hierarchy", self.num_grids) f = open(self.ds.parameter_filename) fblock_size = 32768 f.seek(0, 2) file_size = f.tell() nblocks = np.ceil(float(file_size) / fblock_size).astype(np.int64) f.seek(0) offset = f.tell() lstr = "" # place child blocks after the root blocks rbdim = self.ds.root_block_dimensions nroot_blocks = rbdim.prod() child_id = nroot_blocks last_pid = None for _ib in range(nblocks): fblock = min(fblock_size, file_size - offset) buff = lstr + f.read(fblock) bnl = 0 for _inl in range(buff.count("\n")): nnl = buff.find("\n", bnl) line = buff[bnl:nnl] block_name, block_file = line.split() # Handling of the B, B_, and B__ blocks is consistent with # other unrefined blocks level, left, right = get_block_info(block_name) rbindex = get_root_block_id(block_name) rbid = ( rbindex[0] * rbdim[1:].prod() + rbindex[1] * rbdim[2:].prod() + rbindex[2] ) # There are also blocks at lower level than the # real root blocks. These can be ignored. 
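# Example of the flattening arithmetic above: for root-block dimensions
# rbdim = (4, 4, 4), the root block at index (1, 2, 3) maps (row-major)
# to rbid = 1 * (4 * 4) + 2 * 4 + 3 = 27.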
if level == 0: check_root = get_root_blocks(block_name).prod() if check_root < nroot_blocks: level = -1 if level == -1: grid_id = child_id parent_id = -1 child_id += 1 elif level == 0: grid_id = rbid parent_id = -1 else: grid_id = child_id # Try the last parent_id first if last_pid is not None and is_parent( self.grids[last_pid].block_name, block_name ): parent_id = last_pid else: parent_id = self.grids[rbid].get_parent_id(block_name) last_pid = parent_id child_id += 1 my_grid = self.grid( grid_id, self, block_name, filename=os.path.join(self.directory, block_file), ) my_grid.Level = level my_grid._parent_id = parent_id self.grids[grid_id] = my_grid self.grid_levels[grid_id] = level self.grid_left_edge[grid_id] = left self.grid_right_edge[grid_id] = right self.grid_dimensions[grid_id] = self.ds.active_grid_dimensions if level > 0: self.grids[parent_id].add_child(my_grid) bnl = nnl + 1 pbar.update(c) c += 1 lstr = buff[bnl:] offset += fblock f.close() pbar.finish() slope = self.ds.domain_width / self.ds.arr(np.ones(3), "code_length") self.grid_left_edge = self.grid_left_edge * slope + self.ds.domain_left_edge self.grid_right_edge = self.grid_right_edge * slope + self.ds.domain_left_edge def _populate_grid_objects(self): for g in self.grids: g._prepare_grid() g._setup_dx() self.max_level = self.grid_levels.max() def _setup_derived_fields(self): super()._setup_derived_fields() for fname, field in self.ds.field_info.items(): if not field.particle_type: continue if isinstance(fname, tuple): continue if field._function is NullFunc: continue def _get_particle_type_counts(self): return { ptype: sum(g.particle_count[ptype] for g in self.grids) for ptype in self.ds.particle_types_raw } def _detect_output_fields(self): self.field_list = [] # Do this only on the root processor to save disk work. if self.comm.rank in (0, None): # Just check the first grid. grid = self.grids[0] field_list, ptypes = self.io._read_field_names(grid) mylog.debug("Grid %s has: %s", grid.id, field_list) sample_pfields = self.io.sample_pfields else: field_list = None ptypes = None sample_pfields = None self.field_list = list(self.comm.mpi_bcast(field_list)) self.dataset.particle_types = list(self.comm.mpi_bcast(ptypes)) self.dataset.particle_types_raw = self.dataset.particle_types[:] self.io.sample_pfields = self.comm.mpi_bcast(sample_pfields) class EnzoEDataset(Dataset): """ Enzo-E-specific output, set at a fixed time. """ _load_requirements = ["h5py", "libconf"] refine_by = 2 _index_class = EnzoEHierarchy _field_info_class = EnzoEFieldInfo _suffix = ".block_list" particle_types: tuple[str, ...] = () particle_types_raw = None def __init__( self, filename, dataset_type=None, parameter_override=None, conversion_override=None, storage_filename=None, units_override=None, unit_system="cgs", default_species_fields=None, ): """ This class is a stripped down class that simply reads and parses *filename* without looking at the index. *dataset_type* gets passed to the index to pre-determine the style of data-output. However, it is not strictly necessary. Optionally you may specify a *parameter_override* dictionary that will override anything in the parameter file and a *conversion_override* dictionary that consists of {fieldname : conversion_to_cgs} that will override the #DataCGS. 
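        A hypothetical loading sketch (Enzo-E outputs are identified by
        the ``.block_list`` suffix):

        >>> import yt
        >>> ds = yt.load("ENZOE_RUN/RUN.block_list")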
""" self.fluid_types += ("enzoe",) if parameter_override is None: parameter_override = {} self._parameter_override = parameter_override if conversion_override is None: conversion_override = {} self._conversion_override = conversion_override self.storage_filename = storage_filename Dataset.__init__( self, filename, dataset_type, units_override=units_override, unit_system=unit_system, default_species_fields=default_species_fields, ) def _parse_parameter_file(self): """ Parses the parameter file and establishes the various dictionaries. """ f = open(self.parameter_filename) # get dimension from first block name b0, fn0 = f.readline().strip().split() level0, left0, right0 = get_block_info(b0, min_dim=0) root_blocks = get_root_blocks(b0) f.close() self.dimensionality = left0.size self._periodicity = tuple(np.ones(self.dimensionality, dtype=bool)) lcfn = self.parameter_filename[: -len(self._suffix)] + ".libconfig" if os.path.exists(lcfn): with open(lcfn) as lf: self.parameters = libconf.load(lf) # Enzo-E ignores all cosmology parameters if "cosmology" is not in # the Physics:list parameter physics_list = nested_dict_get( self.parameters, ("Physics", "list"), default=[] ) if "cosmology" in physics_list: self.cosmological_simulation = 1 co_pars = [ "hubble_constant_now", "omega_matter_now", "omega_lambda_now", "comoving_box_size", "initial_redshift", ] co_dict = { attr: nested_dict_get( self.parameters, ("Physics", "cosmology", attr) ) for attr in co_pars } for attr in ["hubble_constant", "omega_matter", "omega_lambda"]: setattr(self, attr, co_dict[f"{attr}_now"]) # Current redshift is not stored, so it's not possible # to set all cosmological units yet. # Get the time units and use that to figure out redshift. k = cosmology_get_units( self.hubble_constant, self.omega_matter, co_dict["comoving_box_size"], co_dict["initial_redshift"], 0, ) setdefaultattr(self, "time_unit", self.quan(k["utim"], "s")) co = Cosmology( hubble_constant=self.hubble_constant, omega_matter=self.omega_matter, omega_lambda=self.omega_lambda, ) else: self.cosmological_simulation = 0 else: self.cosmological_simulation = 0 fh = h5py.File(os.path.join(self.directory, fn0), "r") self.domain_left_edge = fh.attrs["lower"] self.domain_right_edge = fh.attrs["upper"] if "version" in fh.attrs: version = fh.attrs.get("version").tobytes().decode("ascii") else: version = None # earliest recorded version is '0.9.0' self.parameters["version"] = version # all blocks are the same size ablock = fh[list(fh.keys())[0]] self.current_time = ablock.attrs["time"][0] self.parameters["current_cycle"] = ablock.attrs["cycle"][0] gsi = ablock.attrs["enzo_GridStartIndex"] gei = ablock.attrs["enzo_GridEndIndex"] assert len(gsi) == len(gei) == 3 # sanity check # Enzo-E technically allows each axis to have different ghost zone # depths (this feature is not really used in practice) self.ghost_zones = gsi assert (self.ghost_zones[self.dimensionality :] == 0).all() # sanity check self.root_block_dimensions = root_blocks self.active_grid_dimensions = gei - gsi + 1 self.grid_dimensions = ablock.attrs["enzo_GridDimension"] self.domain_dimensions = root_blocks * self.active_grid_dimensions fh.close() if self.cosmological_simulation: self.current_redshift = co.z_from_t(self.current_time * self.time_unit) self._periodicity += (False,) * (3 - self.dimensionality) self._parse_fluid_prop_params() def _parse_fluid_prop_params(self): """ Parse the fluid properties. 
""" fp_params = nested_dict_get( self.parameters, ("Physics", "fluid_props"), default=None ) if fp_params is not None: # in newer versions of enzo-e, this data is specified in a # centralized parameter group called Physics:fluid_props # - for internal reasons related to backwards compatibility, # treatment of this physics-group is somewhat special (compared # to the cosmology group). The parameters in this group are # honored even if Physics:list does not include "fluid_props" self.gamma = nested_dict_get(fp_params, ("eos", "gamma")) de_type = nested_dict_get( fp_params, ("dual_energy", "type"), default="disabled" ) uses_de = de_type != "disabled" else: # in older versions, these parameters were more scattered self.gamma = nested_dict_get(self.parameters, ("Field", "gamma")) uses_de = False for method in ("ppm", "mhd_vlct"): subparams = get_listed_subparam( self.parameters, "Method", method, default=None ) if subparams is not None: uses_de = subparams.get("dual_energy", False) self.parameters["uses_dual_energy"] = uses_de def _set_code_unit_attributes(self): if self.cosmological_simulation: box_size = self.parameters["Physics"]["cosmology"]["comoving_box_size"] k = cosmology_get_units( self.hubble_constant, self.omega_matter, box_size, self.parameters["Physics"]["cosmology"]["initial_redshift"], self.current_redshift, ) # Now some CGS values setdefaultattr(self, "length_unit", self.quan(box_size, "Mpccm/h")) setdefaultattr( self, "mass_unit", self.quan(k["urho"], "g/cm**3") * (self.length_unit.in_cgs()) ** 3, ) setdefaultattr(self, "velocity_unit", self.quan(k["uvel"], "cm/s")) else: p = self.parameters for d, u in [("length", "cm"), ("time", "s")]: val = nested_dict_get(p, ("Units", d), default=1) setdefaultattr(self, f"{d}_unit", self.quan(val, u)) mass = nested_dict_get(p, ("Units", "mass")) if mass is None: density = nested_dict_get(p, ("Units", "density")) if density is not None: mass = density * self.length_unit**3 else: mass = 1 setdefaultattr(self, "mass_unit", self.quan(mass, "g")) setdefaultattr(self, "velocity_unit", self.length_unit / self.time_unit) magnetic_unit = np.sqrt( 4 * np.pi * self.mass_unit / (self.time_unit**2 * self.length_unit) ) magnetic_unit = np.float64(magnetic_unit.in_cgs()) setdefaultattr(self, "magnetic_unit", self.quan(magnetic_unit, "gauss")) def __str__(self): return self.basename[: -len(self._suffix)] @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: ddir = os.path.dirname(filename) if not filename.endswith(cls._suffix): return False if cls._missing_load_requirements(): return False try: with open(filename) as f: block, block_file = f.readline().strip().split() get_block_info(block) if not os.path.exists(os.path.join(ddir, block_file)): return False except Exception: return False return True ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/enzo_e/definitions.py0000644000175100001770000000000014714401662020355 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/enzo_e/fields.py0000644000175100001770000001377014714401662017332 0ustar00runnerdockerimport numpy as np from yt._typing import KnownFieldsT from yt.fields.field_info_container import FieldInfoContainer from yt.fields.particle_fields import add_union_field from yt.frontends.enzo_e.misc import ( get_listed_subparam, get_particle_mass_correction, nested_dict_get, ) rho_units = "code_mass / code_length**3" vel_units = 
"code_velocity" acc_units = "code_velocity / code_time" energy_units = "code_velocity**2" b_units = "code_magnetic" NODAL_FLAGS = { "bfieldi_x": [1, 0, 0], "bfieldi_y": [0, 1, 0], "bfieldi_z": [0, 0, 1], } class EnzoEFieldInfo(FieldInfoContainer): known_other_fields: KnownFieldsT = ( ("velocity_x", (vel_units, ["velocity_x"], None)), ("velocity_y", (vel_units, ["velocity_y"], None)), ("velocity_z", (vel_units, ["velocity_z"], None)), ("acceleration_x", (acc_units, ["acceleration_x"], None)), ("acceleration_y", (acc_units, ["acceleration_y"], None)), ("acceleration_z", (acc_units, ["acceleration_z"], None)), ("density", (rho_units, ["density"], None)), ("density_total", (rho_units, ["total_density"], None)), ("total_energy", (energy_units, ["specific_total_energy"], None)), ("internal_energy", (energy_units, ["specific_thermal_energy"], None)), ("bfield_x", (b_units, [], None)), ("bfield_y", (b_units, [], None)), ("bfield_z", (b_units, [], None)), ("bfieldi_x", (b_units, [], None)), ("bfieldi_y", (b_units, [], None)), ("bfieldi_z", (b_units, [], None)), ) known_particle_fields: KnownFieldsT = ( ("x", ("code_length", ["particle_position_x"], None)), ("y", ("code_length", ["particle_position_y"], None)), ("z", ("code_length", ["particle_position_z"], None)), ("vx", (vel_units, ["particle_velocity_x"], None)), ("vy", (vel_units, ["particle_velocity_y"], None)), ("vz", (vel_units, ["particle_velocity_z"], None)), ("ax", (acc_units, ["particle_acceleration_x"], None)), ("ay", (acc_units, ["particle_acceleration_y"], None)), ("az", (acc_units, ["particle_acceleration_z"], None)), ("mass", ("code_mass", ["particle_mass"], None)), ) def __init__(self, ds, field_list, slice_info=None): super().__init__(ds, field_list, slice_info=slice_info) # setup nodal flag information for field, arr in NODAL_FLAGS.items(): if ("enzoe", field) in self: finfo = self["enzoe", field] finfo.nodal_flag = np.array(arr) def setup_fluid_fields(self): from yt.fields.magnetic_field import setup_magnetic_field_aliases self.setup_energy_field() setup_magnetic_field_aliases(self, "enzoe", [f"bfield_{ax}" for ax in "xyz"]) def setup_energy_field(self): unit_system = self.ds.unit_system # check if we have a field for internal energy has_ie_field = ("enzoe", "internal_energy") in self.field_list # check if we need to account for magnetic energy vlct_params = get_listed_subparam(self.ds.parameters, "Method", "mhd_vlct", {}) has_magnetic = "no_bfield" != vlct_params.get("mhd_choice", "no_bfield") # define the ("gas", "specific_thermal_energy") field # - this is already done for us if the simulation used the dual-energy # formalism AND ("enzoe", "internal_energy") was saved to disk if not (self.ds.parameters["uses_dual_energy"] and has_ie_field): def _tot_minus_kin(field, data): return ( data["enzoe", "total_energy"] - 0.5 * data["gas", "velocity_magnitude"] ** 2.0 ) if not has_magnetic: # thermal energy = total energy - kinetic energy self.add_field( ("gas", "specific_thermal_energy"), sampling_type="cell", function=_tot_minus_kin, units=unit_system["specific_energy"], ) else: # thermal energy = total energy - kinetic energy - magnetic energy def _sub_b(field, data): return ( _tot_minus_kin(field, data) - data["gas", "magnetic_energy_density"] / data["gas", "density"] ) self.add_field( ("gas", "specific_thermal_energy"), sampling_type="cell", function=_sub_b, units=unit_system["specific_energy"], ) def setup_particle_fields(self, ptype, ftype="gas", num_neighbors=64): super().setup_particle_fields(ptype, ftype=ftype, 
num_neighbors=num_neighbors) self.setup_particle_mass_field(ptype) def setup_particle_mass_field(self, ptype): fname = "particle_mass" if ptype in self.ds.particle_unions: add_union_field(self, ptype, fname, "code_mass") return pdict = self.ds.parameters.get("Particle", None) if pdict is None: return constants = nested_dict_get(pdict, (ptype, "constants"), default=()) if not constants: return # constants should be a tuple consisting of multiple tuples of (name, type, value). # When there is only one entry, the enclosing tuple gets stripped, so we put it back. if not isinstance(constants[0], tuple): constants = (constants,) names = [c[0] for c in constants] if "mass" not in names: return val = constants[names.index("mass")][2] * self.ds.mass_unit if not self.ds.index.io._particle_mass_is_mass: val = val * get_particle_mass_correction(self.ds) def _pmass(field, data): return val * data[ptype, "particle_ones"] self.add_field( (ptype, fname), function=_pmass, units="code_mass", sampling_type="particle", ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/enzo_e/io.py0000644000175100001770000002106614714401662016470 0ustar00runnerdockerimport numpy as np from yt.frontends.enzo_e.misc import get_particle_mass_correction, nested_dict_get from yt.utilities.exceptions import YTException from yt.utilities.io_handler import BaseIOHandler from yt.utilities.on_demand_imports import _h5py as h5py class EnzoEIOHandler(BaseIOHandler): _dataset_type = "enzo_e" _base = slice(None) _field_dtype = "float64" _sep = "_" def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) # precompute the indexing specifying each field's active zone # -> this assumes that each field in Enzo-E shares the same number of # ghost-zones. Technically, Enzo-E allows each field to have a # different number of ghost zones (but this feature isn't currently # used and Enzo-E doesn't record this information) # -> our usage of a negative stop value ensures compatibility with # both cell-centered and face-centered fields self._activezone_idx = tuple( slice(num_zones, -num_zones) if num_zones > 0 else slice(None) for num_zones in self.ds.ghost_zones[: self.ds.dimensionality] ) # Determine if particle masses are actually masses or densities. if self.ds.parameters["version"] is not None: # they're masses for enzo-e versions that record a version string mass_flag = True else: # in earlier versions: query the existence of the "mass_is_mass" # particle parameter mass_flag = nested_dict_get( self.ds.parameters, ("Particle", "mass_is_mass"), default=None ) # the historic approach for initializing the value of "mass_is_mass" # was unsound (and could yield a random value).
# Thus we should only check for the parameter's existence and not its value self._particle_mass_is_mass = mass_flag is not None def _read_field_names(self, grid): if grid.filename is None: return [] f = h5py.File(grid.filename, mode="r") try: group = f[grid.block_name] except KeyError as e: raise YTException( message=f"Grid {grid.block_name} is missing from data file {grid.filename}.", ds=self.ds, ) from e fields = [] ptypes = set() dtypes = set() # keep one field for each particle type so we can count later sample_pfields = {} for name, v in group.items(): if not hasattr(v, "shape") or v.dtype == "O": continue # mesh fields are "field_<name>" if name.startswith("field"): _, fname = name.split(self._sep, 1) fields.append(("enzoe", fname)) dtypes.add(v.dtype) # particle fields are "particle_<type>_<name>" else: _, ftype, fname = name.split(self._sep, 2) fields.append((ftype, fname)) ptypes.add(ftype) dtypes.add(v.dtype) if ftype not in sample_pfields: sample_pfields[ftype] = fname self.sample_pfields = sample_pfields if len(dtypes) == 1: # Now, if everything we saw was the same dtype, we can go ahead and # set it here. We do this because it is a HUGE savings for 32 bit # floats, since our numpy copying/casting is way faster than # h5py's, for some reason I don't understand. This does *not* need # to be correct -- it will get fixed later -- it just needs to be # okay for now. self._field_dtype = list(dtypes)[0] f.close() return fields, ptypes def _read_particle_coords(self, chunks, ptf): yield from ( (ptype, xyz, 0.0) for ptype, xyz in self._read_particle_fields(chunks, ptf, None) ) def _read_particle_fields(self, chunks, ptf, selector): chunks = list(chunks) dc = self.ds.domain_center.in_units("code_length").d for chunk in chunks: # These should be organized by grid filename f = None for g in chunk.objs: if g.filename is None: continue if f is None: f = h5py.File(g.filename, mode="r") if g.particle_count is None: fnstr = "{}/{}".format( g.block_name, self._sep.join(["particle", "%s", "%s"]), ) g.particle_count = { ptype: f.get(fnstr % (ptype, self.sample_pfields[ptype])).size for ptype in self.sample_pfields } g.total_particles = sum(g.particle_count.values()) if g.total_particles == 0: continue group = f.get(g.block_name) for ptype, field_list in sorted(ptf.items()): pn = self._sep.join(["particle", ptype, "%s"]) if g.particle_count[ptype] == 0: continue coords = tuple( np.asarray(group.get(pn % ax)[()], dtype="=f8") for ax in "xyz"[: self.ds.dimensionality] ) for i in range(self.ds.dimensionality, 3): coords += ( dc[i] * np.ones(g.particle_count[ptype], dtype="f8"), ) if selector is None: # This only ever happens if the call is made from # _read_particle_coords. yield ptype, coords continue coords += (0.0,) mask = selector.select_points(*coords) if mask is None: continue for field in field_list: data = np.asarray(group.get(pn % field)[()], "=f8") if field == "mass" and not self._particle_mass_is_mass: data[mask] *= get_particle_mass_correction(self.ds) yield (ptype, field), data[mask] if f: f.close() def io_iter(self, chunks, fields): for chunk in chunks: fid = None filename = -1 for obj in chunk.objs: if obj.filename is None: continue if obj.filename != filename: # Note one really important thing here: even if we do # implement LRU caching in the _read_obj_field function, # we'll still be doing file opening and whatnot. This is a # problem, but one we can return to.
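# If that caching is ever revisited, a minimal sketch (helper name
# hypothetical) would be an LRU cache keyed on the filename:
#
#     >>> from functools import lru_cache
#     >>> from yt.utilities.on_demand_imports import _h5py as h5py
#     >>> @lru_cache(maxsize=8)
#     ... def _cached_fid(fn):
#     ...     return h5py.h5f.open(fn.encode("latin-1"), h5py.h5f.ACC_RDONLY)
#
# with the caveat that cached file ids still need to be closed explicitly
# when evicted or at teardown.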
if fid is not None: fid.close() fid = h5py.h5f.open( obj.filename.encode("latin-1"), h5py.h5f.ACC_RDONLY ) filename = obj.filename for field in fields: grid_dim = self.ds.grid_dimensions nodal_flag = self.ds.field_info[field].nodal_flag dims = ( grid_dim[: self.ds.dimensionality][::-1] + nodal_flag[: self.ds.dimensionality][::-1] ) data = np.empty(dims, dtype=self._field_dtype) yield field, obj, self._read_obj_field(obj, field, (fid, data)) if fid is not None: fid.close() def _read_obj_field(self, obj, field, fid_data): if fid_data is None: fid_data = (None, None) fid, rdata = fid_data if fid is None: close = True fid = h5py.h5f.open(obj.filename.encode("latin-1"), h5py.h5f.ACC_RDONLY) else: close = False ftype, fname = field node = f"/{obj.block_name}/field{self._sep}{fname}" dg = h5py.h5d.open(fid, node.encode("latin-1")) if rdata is None: rdata = np.empty( self.ds.grid_dimensions[: self.ds.dimensionality][::-1], dtype=self._field_dtype, ) dg.read(h5py.h5s.ALL, h5py.h5s.ALL, rdata) if close: fid.close() data = rdata[self._activezone_idx].T if self.ds.dimensionality < 3: nshape = data.shape + (1,) * (3 - self.ds.dimensionality) data = np.reshape(data, nshape) return data ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/enzo_e/misc.py0000644000175100001770000001027114714401662017010 0ustar00runnerdockerimport numpy as np from more_itertools import always_iterable def bdecode(block): """ Decode a block descriptor to get its left and right sides and level. A block descriptor is a string of binary digits (0 and 1), optionally containing one colon. The number of digits after the colon is the refinement level. The combined digits denote the binary representation of the left edge. """ if ":" in block: level = len(block) - block.find(":") - 1 else: level = 0 bst = block.replace(":", "") d = float(2 ** len(bst)) left = int(bst, 2) right = left + 1 left /= d right /= d return level, left, right def get_block_string_and_dim(block, min_dim=3): mybs = block[1:].split("_") dim = max(len(mybs), min_dim) return mybs, dim def get_block_level(block): if ":" in block: l = block.find(":") else: l = len(block) return l def get_block_info(block, min_dim=3): """Decode a block name to get its left and right sides and level. Given a block name, this function returns the locations of the block's left and right edges (measured as binary fractions of the domain along each axis) and level. Unrefined blocks in the root array (which can each hold an octree) have a refinement level of 0 while their ancestors (used internally by Enzo-E's solvers - they don't actually hold meaningful data) have negative levels. Because identification of negative refinement levels requires knowledge of the root array shape (the 'root_blocks' value specified in the parameter file), all unrefined blocks are assumed to have a level of 0.
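    For example, decoding a one-dimensional block name with two root-array
    bits and one refinement bit:

    >>> get_block_info("B01:0", min_dim=1)
    (1, array([0.25]), array([0.375]))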
""" mybs, dim = get_block_string_and_dim(block, min_dim=min_dim) left = np.zeros(dim) right = np.ones(dim) level = 0 for i, myb in enumerate(mybs): if myb == "": continue level, left[i], right[i] = bdecode(myb) return level, left, right def get_root_blocks(block, min_dim=3): mybs, dim = get_block_string_and_dim(block, min_dim=min_dim) nb = np.ones(dim, dtype="int64") for i, myb in enumerate(mybs): if myb == "": continue s = get_block_level(myb) nb[i] = 2**s return nb def get_root_block_id(block, min_dim=3): mybs, dim = get_block_string_and_dim(block, min_dim=min_dim) rbid = np.zeros(dim, dtype="int64") for i, myb in enumerate(mybs): if myb == "": continue s = get_block_level(myb) if s == 0: continue rbid[i] = int(myb[:s], 2) return rbid def get_child_index(anc_id, desc_id): cid = "" for aind, dind in zip(anc_id.split("_"), desc_id.split("_"), strict=True): cid += dind[len(aind)] cid = int(cid, 2) return cid def is_parent(anc_block, desc_block): dim = anc_block.count("_") + 1 if (len(desc_block.replace(":", "")) - len(anc_block.replace(":", ""))) / dim != 1: return False for aind, dind in zip(anc_block.split("_"), desc_block.split("_"), strict=True): if not dind.startswith(aind): return False return True def nested_dict_get(pdict, keys, default=None): """ Retrieve a value from a nested dict using a tuple of keys. If a is a dict, and a['b'] = {'c': 'd'}, then nested_dict_get(a, ('b', 'c')) returns 'd'. """ val = pdict for key in always_iterable(keys): try: val = val[key] except KeyError: return default return val def get_listed_subparam(pdict, parent_param, subparam, default=None): """ Returns nested_dict_get(pdict, (parent_param,subparam), default) if subparam is an entry in nested_dict_get(pdict, (parent_param, 'list'), []) This is a common idiom in Enzo-E's parameter parsing """ if subparam in nested_dict_get(pdict, (parent_param, "list"), []): return nested_dict_get(pdict, (parent_param, subparam), default) return default def get_particle_mass_correction(ds): """ Normalize particle masses by the root grid cell volume. This correction is used for Enzo-E datasets where particle masses are stored as densities. 
""" return (ds.domain_width / ds.domain_dimensions).prod() / ds.length_unit**3 ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3151522 yt-4.4.0/yt/frontends/enzo_e/tests/0000755000175100001770000000000014714401715016643 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/enzo_e/tests/__init__.py0000644000175100001770000000000014714401662020743 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/enzo_e/tests/test_misc.py0000644000175100001770000001157414714401662021220 0ustar00runnerdockerimport numpy as np from yt.frontends.enzo_e.misc import ( get_block_info, get_root_block_id, get_root_blocks, is_parent, nested_dict_get, ) def get_random_block_string(max_n=64, random_state=None, level=None): if max_n == 1: assert level is None or level == 0 return 0, 0, "B" elif max_n < 1: raise ValueError("max_n must be a positive integer") if random_state is None: random_state = np.random.RandomState() max_l = int(np.log2(max_n)) form = f"%0{max_l}d" num10 = random_state.randint(0, high=max_n) num2 = form % int(bin(num10)[2:]) # the slice clips the '0b' prefix if level is None: level = random_state.randint(0, high=max_l) if level > 0: my_block = f"{num2[:-level]}:{num2[-level:]}" else: my_block = num2 my_block = "B" + my_block return num10, level, my_block def flip_random_block_bit(block, rs): """ Flips a bit string in one of the block descriptors in a given block name """ # split block into descriptors for each dimension descriptors = block[1:].split("_") # choose which descriptor to modify flippable = [i for i, descr in enumerate(descriptors) if len(descr) > 0] if len(flippable) == 0: # when block in ['B', 'B_', 'B__'] raise ValueError(f"{block} has no bits that can be flipped") descr_index = flippable[rs.randint(0, len(flippable))] # split block descriptor into left and right parts parts = descriptors[descr_index].split(":") # select the part to be modified if len(parts) == 1: # block is unrefined part_index = 0 elif len(parts[0]) == 0: # The root block index can't be modified part_index = 1 else: part_index = rs.randint(0, high=2) modify_part = parts[part_index] # flip a bit in modify_part, and return the new block name with this change flip_index = rs.randint(0, high=len(modify_part)) parts[part_index] = "%s%d%s" % ( modify_part[:flip_index], (int(modify_part[flip_index]) + 1) % 2, modify_part[flip_index + 1 :], ) descriptors[descr_index] = ":".join(parts) return "B" + "_".join(descriptors) def test_get_block_info(): rs = np.random.RandomState(45047) max_n = 64 for _ in range(10): n, l, b = get_random_block_string(max_n=max_n, random_state=rs) level, left, right = get_block_info(b, min_dim=1) assert level == l assert left == float(n) / max_n assert right == float(n + 1) / max_n for block in ["B", "B_", "B__"]: level, left, right = get_block_info(block) assert level == 0 assert (left == 0.0).all() assert (right == 1.0).all() def test_root_blocks(): rs = np.random.RandomState(45652) for i in range(6): max_n = 2**i n1, l1, b1 = get_random_block_string(max_n=max_n, random_state=rs, level=0) n2, l2, b2 = get_random_block_string(max_n=32, random_state=rs, level=0) block = f"{b1}:{b2[1:]}" nrb = get_root_blocks(block, min_dim=1) assert nrb == max_n rbid = get_root_block_id(block, min_dim=1) assert rbid == n1 def test_is_parent(): rs = np.random.RandomState(45652) for dim in [1, 2, 3]: for i in 
range(6): max_n = 2**i descriptors = [] for _ in range(dim): n1, l1, b1 = get_random_block_string( max_n=max_n, random_state=rs, level=0 ) n2, l2, b2 = get_random_block_string(max_n=32, random_state=rs, level=0) descriptors.append(f"{b1[1:]}:{b2[1:]}") block = "B" + "_".join(descriptors) # since b2 is computed with max_n=32 in the for-loop, block always # has a refined great-great-grandparent parent = "B" + "_".join(elem[:-1] for elem in descriptors) grandparent = "B" + "_".join(elem[:-2] for elem in descriptors) assert is_parent(parent, block) assert is_parent(grandparent, parent) assert not is_parent(grandparent, block) assert not is_parent(block, parent) assert not is_parent(flip_random_block_bit(parent, rs), block) def test_nested_dict_get(): rs = np.random.RandomState(47988) keys = [] my_dict = None for _ in range(5): k = str(rs.randint(0, high=1000000)) if my_dict is None: v = str(rs.randint(0, high=1000000)) keys.append(v) my_dict = {k: v} else: my_dict = {k: my_dict} keys.append(k) keys.reverse() assert nested_dict_get(my_dict, keys[:-1]) == keys[-1] my_def = "devron" assert nested_dict_get(my_dict, "system", default=my_def) == my_def def test_nested_dict_get_real_none(): my_dict = {"a": None} response = nested_dict_get(my_dict, "a", default="fail") assert response is None ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/enzo_e/tests/test_outputs.py0000644000175100001770000001076614714401662022012 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_array_equal, assert_equal from yt.frontends.enzo_e.api import EnzoEDataset from yt.frontends.enzo_e.fields import NODAL_FLAGS from yt.testing import requires_file, requires_module from yt.utilities.answer_testing.framework import ( FieldValuesTest, PixelizedProjectionValuesTest, create_obj, data_dir_load, requires_ds, ) from yt.utilities.on_demand_imports import _h5py as h5py _fields = ( ("gas", "density"), ("gas", "specific_total_energy"), ("gas", "velocity_x"), ("gas", "velocity_y"), ) _pfields = ( ("all", "particle_position_x"), ("all", "particle_position_y"), ("all", "particle_position_z"), ("all", "particle_velocity_x"), ("all", "particle_velocity_y"), ("all", "particle_velocity_z"), ) hello_world = "hello-0210/hello-0210.block_list" ep_cosmo = "ENZOP_DD0140/ENZOP_DD0140.block_list" orszag_tang = "ENZOE_orszag-tang_0.5/ENZOE_orszag-tang_0.5.block_list" @requires_module("h5py") @requires_file(hello_world) def test_EnzoEDataset(): assert isinstance(data_dir_load(hello_world), EnzoEDataset) @requires_module("h5py") @requires_ds(hello_world) def test_hello_world(): ds = data_dir_load(hello_world) dso = [None, ("sphere", ("max", (0.25, "unitary")))] for dobj_name in dso: for field in _fields: for axis in [0, 1, 2]: for weight_field in [None, ("gas", "density")]: yield PixelizedProjectionValuesTest( hello_world, axis, field, weight_field, dobj_name ) yield FieldValuesTest(hello_world, field, dobj_name) dobj = create_obj(ds, dobj_name) s1 = dobj["index", "ones"].sum() s2 = sum(mask.sum() for block, mask in dobj.blocks) assert_equal(s1, s2) @requires_module("h5py") @requires_ds(ep_cosmo) def test_particle_fields(): ds = data_dir_load(ep_cosmo) dso = [None, ("sphere", ("max", (0.1, "unitary")))] for dobj_name in dso: for field in _pfields: yield FieldValuesTest(ep_cosmo, field, dobj_name, particle_type=True) dobj = create_obj(ds, dobj_name) s1 = dobj["index", "ones"].sum() s2 = sum(mask.sum() for block, mask in dobj.blocks) assert_equal(s1, s2) 
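# The s1/s2 consistency check in the two tests above is the same pattern; a
# minimal helper capturing it (name hypothetical) would be:
def _check_block_mask_consistency(dobj):
    # the selected-cell count reported by the "ones" field must match the
    # summed per-block selection masks
    s1 = dobj["index", "ones"].sum()
    s2 = sum(mask.sum() for block, mask in dobj.blocks)
    assert_equal(s1, s2)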
@requires_module("h5py") @requires_file(hello_world) def test_hierarchy(): ds = data_dir_load(hello_world) fh = h5py.File(ds.index.grids[0].filename, mode="r") for grid in ds.index.grids: assert_array_equal( grid.LeftEdge.d, fh[grid.block_name].attrs["enzo_GridLeftEdge"] ) assert_array_equal(ds.index.grid_left_edge[grid.id], grid.LeftEdge) assert_array_equal(ds.index.grid_right_edge[grid.id], grid.RightEdge) for child in grid.Children: assert (child.LeftEdge >= grid.LeftEdge).all() assert (child.RightEdge <= grid.RightEdge).all() assert_equal(child.Parent.id, grid.id) fh.close() @requires_module("h5py") @requires_file(ep_cosmo) def test_critical_density(): ds = data_dir_load(ep_cosmo) c1 = ( (ds.r["dark", "particle_mass"].sum() + ds.r["gas", "cell_mass"].sum()) / ds.domain_width.prod() / ds.critical_density ) c2 = ( ds.omega_matter * (1 + ds.current_redshift) ** 3 / (ds.omega_matter * (1 + ds.current_redshift) ** 3 + ds.omega_lambda) ) assert np.abs(c1 - c2) / max(c1, c2) < 1e-3 @requires_module("h5py") @requires_file(orszag_tang) def test_face_centered_bfields(): # this is based on the enzo frontend test, test_face_centered_mhdct_fields ds = data_dir_load(orszag_tang) ad = ds.all_data() assert len(ds.index.grids) == 4 for field, flag in NODAL_FLAGS.items(): dims = ds.domain_dimensions assert_equal(ad[field].shape, (dims.prod(), 2 * sum(flag))) # the x and y domains are each split across 2 blocks. The z domain # is all located on a single block block_dims = (dims[0] // 2, dims[1] // 2, dims[2]) for grid in ds.index.grids: assert_equal(grid[field].shape, block_dims + (2 * sum(flag),)) # Average of face-centered fields should be the same as cell-centered field assert_array_equal( ad["enzoe", "bfieldi_x"].sum(axis=-1) / 2, ad["enzoe", "bfield_x"] ) assert_array_equal( ad["enzoe", "bfieldi_y"].sum(axis=-1) / 2, ad["enzoe", "bfield_y"] ) assert_array_equal( ad["enzoe", "bfieldi_z"].sum(axis=-1) / 2, ad["enzoe", "bfield_z"] ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3151522 yt-4.4.0/yt/frontends/exodus_ii/0000755000175100001770000000000014714401715016212 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/exodus_ii/__init__.py0000644000175100001770000000000014714401662020312 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/exodus_ii/api.py0000644000175100001770000000040314714401662017333 0ustar00runnerdockerfrom . 
import tests from .data_structures import ( ExodusIIDataset, ExodusIIUnstructuredIndex, ExodusIIUnstructuredMesh, ) from .fields import ExodusIIFieldInfo from .io import IOHandlerExodusII from .simulation_handling import ExodusIISimulation ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/exodus_ii/data_structures.py0000644000175100001770000003515414714401662022011 0ustar00runnerdockerimport numpy as np from yt.data_objects.index_subobjects.unstructured_mesh import UnstructuredMesh from yt.data_objects.static_output import Dataset from yt.data_objects.unions import MeshUnion from yt.funcs import setdefaultattr from yt.geometry.unstructured_mesh_handler import UnstructuredIndex from yt.utilities.file_handler import NetCDF4FileHandler, valid_netcdf_signature from yt.utilities.logger import ytLogger as mylog from .fields import ExodusIIFieldInfo from .util import get_num_pseudo_dims, load_info_records, sanitize_string class ExodusIIUnstructuredMesh(UnstructuredMesh): _index_offset = 1 def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) class ExodusIIUnstructuredIndex(UnstructuredIndex): def __init__(self, ds, dataset_type="exodus_ii"): super().__init__(ds, dataset_type) def _initialize_mesh(self): coords = self.ds._read_coordinates() connectivity = self.ds._read_connectivity() self.meshes = [] for mesh_id, conn_ind in enumerate(connectivity): displaced_coords = self.ds._apply_displacement(coords, mesh_id) mesh = ExodusIIUnstructuredMesh( mesh_id, self.index_filename, conn_ind, displaced_coords, self ) self.meshes.append(mesh) self.mesh_union = MeshUnion("mesh_union", self.meshes) def _detect_output_fields(self): elem_names = self.dataset.parameters["elem_names"] node_names = self.dataset.parameters["nod_names"] fnames = elem_names + node_names self.field_list = [] for i in range(1, len(self.meshes) + 1): self.field_list += [("connect%d" % i, fname) for fname in fnames] self.field_list += [("all", fname) for fname in fnames] class ExodusIIDataset(Dataset): _load_requirements = ["netCDF4"] _index_class = ExodusIIUnstructuredIndex _field_info_class = ExodusIIFieldInfo def __init__( self, filename, step=0, displacements=None, dataset_type="exodus_ii", storage_filename=None, units_override=None, ): """ A class used to represent an on-disk ExodusII dataset. The initializer takes two extra optional parameters, "step" and "displacements." Parameters ---------- step : integer The step tells which time index to slice at. It throws an error if the index is larger than the number of time outputs in the ExodusII file. Passing step=-1 picks out the last time step. Default is 0. displacements : dictionary of tuples This is a dictionary that controls whether or not displacement fields will be used with the meshes in this dataset. The keys of the displacements dictionary should be the names of meshes in the file (e.g., "connect1", "connect2", etc... ), while the values should be tuples of the form (scale, offset), where "scale" is a floating point value and "offset" is an array-like with one component for each spatial dimension in the dataset. When the displacements for a given mesh are turned on, the coordinates of the vertices in that mesh get transformed as: vertex_x = vertex_x + disp_x*scale + offset_x vertex_y = vertex_y + disp_y*scale + offset_y vertex_z = vertex_z + disp_z*scale + offset_z If no displacement fields (assumed to be named 'disp_x', 'disp_y', etc...
) are detected in the output file, then this dictionary is ignored. Examples -------- This will load the Dataset at time index '0' with displacements turned off. >>> import yt >>> ds = yt.load("MOOSE_sample_data/mps_out.e") This will load the Dataset at the final index with displacements turned off. >>> import yt >>> ds = yt.load("MOOSE_sample_data/mps_out.e", step=-1) This will load the Dataset at index 10, turning on displacement fields for the 2nd mesh without applying any scale or offset: >>> import yt >>> ds = yt.load( ... "MOOSE_sample_data/mps_out.e", ... step=10, ... displacements={"connect2": (1.0, [0.0, 0.0, 0.0])}, ... ) This will load the Dataset at index 10, scaling the displacements in the 2nd mesh by a factor of 5 while not applying an offset: >>> import yt >>> ds = yt.load( ... "MOOSE_sample_data/mps_out.e", ... step=10, ... displacements={"connect2": (5.0, [0.0, 0.0, 0.0])}, ... ) This will load the Dataset at index 10, scaling the displacements for the 2nd mesh by a factor of 5.0 and shifting all the vertices in the first mesh by 1.0 unit in the z direction. >>> import yt >>> ds = yt.load( ... "MOOSE_sample_data/mps_out.e", ... step=10, ... displacements={ ... "connect1": (0.0, [0.0, 0.0, 1.0]), ... "connect2": (5.0, [0.0, 0.0, 0.0]), ... }, ... ) """ self.step = step if displacements is None: self.displacements = {} else: self.displacements = displacements self.storage_filename = storage_filename super().__init__(filename, dataset_type, units_override=units_override) self.fluid_types += self._get_fluid_types() self.default_field = [f for f in self.field_list if f[0] == "connect1"][-1] @property def index_filename(self): # historic alias return self.filename def _set_code_unit_attributes(self): # This is where quantities are created that represent the various # on-disk units. These are the currently available quantities which # should be set, along with examples of how to set them to standard # values. # setdefaultattr(self, "length_unit", self.quan(1.0, "cm")) setdefaultattr(self, "mass_unit", self.quan(1.0, "g")) setdefaultattr(self, "time_unit", self.quan(1.0, "s")) # # These can also be set: # self.velocity_unit = self.quan(1.0, "cm/s") # self.magnetic_unit = self.quan(1.0, "gauss") def _parse_parameter_file(self): self._handle = NetCDF4FileHandler(self.parameter_filename) with self._handle.open_ds() as ds: self._read_glo_var() self.dimensionality = ds.variables["coor_names"].shape[0] self.parameters["info_records"] = self._load_info_records() self.num_steps = len(ds.variables["time_whole"]) self.current_time = self._get_current_time() self.parameters["num_meshes"] = ds.variables["eb_status"].shape[0] self.parameters["elem_names"] = self._get_elem_names() self.parameters["nod_names"] = self._get_nod_names() self.domain_left_edge, self.domain_right_edge = self._load_domain_edge() self._periodicity = (False, False, False) # These attributes don't really make sense for unstructured # mesh data, but yt warns if they are not present, so we set # them to dummy values here. 
self.domain_dimensions = np.ones(3, "int32") self.cosmological_simulation = 0 self.current_redshift = 0 self.omega_lambda = 0 self.omega_matter = 0 self.hubble_constant = 0 self.refine_by = 0 def _get_fluid_types(self): with NetCDF4FileHandler(self.parameter_filename).open_ds() as ds: fluid_types = () i = 1 while True: ftype = "connect%d" % i if ftype in ds.variables: fluid_types += (ftype,) i += 1 else: break fluid_types += ("all",) return fluid_types def _read_glo_var(self): """ Adds each global variable to the dict of parameters """ names = self._get_glo_names() if not names: return with self._handle.open_ds() as ds: values = ds.variables["vals_glo_var"][:].transpose() for name, value in zip(names, values, strict=True): self.parameters[name] = value def _load_info_records(self): """ Returns parsed version of the info_records. """ with self._handle.open_ds() as ds: try: return load_info_records(ds.variables["info_records"]) except (KeyError, TypeError): mylog.warning("No info_records found") return [] def _get_current_time(self): with self._handle.open_ds() as ds: try: return ds.variables["time_whole"][self.step] except IndexError as e: raise RuntimeError( "Invalid step number, max is %d" % (self.num_steps - 1) ) from e except (KeyError, TypeError): return 0.0 def _get_glo_names(self): """ Returns the names of the global vars, if available. """ with self._handle.open_ds() as ds: if "name_glo_var" not in ds.variables: mylog.warning("name_glo_var not found") return [] else: return [ sanitize_string(v.tobytes()) for v in ds.variables["name_glo_var"] ] def _get_elem_names(self): """ Returns the names of the element vars, if available. """ with self._handle.open_ds() as ds: if "name_elem_var" not in ds.variables: mylog.warning("name_elem_var not found") return [] else: return [ sanitize_string(v.tobytes()) for v in ds.variables["name_elem_var"] ] def _get_nod_names(self): """ Returns the names of the node vars, if available """ with self._handle.open_ds() as ds: if "name_nod_var" not in ds.variables: mylog.warning("name_nod_var not found") return [] else: return [ sanitize_string(v.tobytes()) for v in ds.variables["name_nod_var"] ] def _read_coordinates(self): """ Loads the coordinates for the mesh """ coord_axes = "xyz"[: self.dimensionality] mylog.info("Loading coordinates") with self._handle.open_ds() as ds: if "coord" not in ds.variables: coords = ( np.array([ds.variables[f"coord{ax}"][:] for ax in coord_axes]) .transpose() .astype("f8") ) else: coords = ( np.array(list(ds.variables["coord"][:])).transpose().astype("f8") ) return coords def _apply_displacement(self, coords, mesh_id): mesh_name = "connect%d" % (mesh_id + 1) new_coords = coords.copy() if mesh_name not in self.displacements: return new_coords fac = self.displacements[mesh_name][0] offset = self.displacements[mesh_name][1] coord_axes = "xyz"[: self.dimensionality] with self._handle.open_ds() as ds: for i, ax in enumerate(coord_axes): if f"disp_{ax}" in self.parameters["nod_names"]: ind = self.parameters["nod_names"].index(f"disp_{ax}") disp = ds.variables["vals_nod_var%d" % (ind + 1)][self.step] new_coords[:, i] = coords[:, i] + fac * disp + offset[i] return new_coords def _read_connectivity(self): """ Loads the connectivity data for the mesh """ mylog.info("Loading connectivity") connectivity = [] with self._handle.open_ds() as ds: for i in range(self.parameters["num_meshes"]): var = ds.variables["connect%d" % (i + 1)][:].astype("i8") try: elem_type = var.elem_type.lower() if elem_type == "nfaced": raise NotImplementedError( 
"3D arbitrary polyhedra are not implemented yet" ) arbitrary_polyhedron = elem_type == "nsided" except AttributeError: arbitrary_polyhedron = False conn = var[:] if arbitrary_polyhedron: nodes_per_element = ds.variables[f"ebepecnt{i + 1}"] npe = nodes_per_element[0] if np.any(nodes_per_element != npe): raise NotImplementedError("only equal-size polyhedra supported") q, r = np.divmod(len(conn), npe) assert r == 0 conn.shape = (q, npe) connectivity.append(conn) return connectivity def _load_domain_edge(self): """ Loads the boundaries for the domain edge """ coords = self._read_coordinates() connectivity = self._read_connectivity() mi = 1e300 ma = -1e300 for mesh_id, _ in enumerate(connectivity): displaced_coords = self._apply_displacement(coords, mesh_id) mi = np.minimum(displaced_coords.min(axis=0), mi) ma = np.maximum(displaced_coords.max(axis=0), ma) # pad domain boundaries width = ma - mi mi -= 0.1 * width ma += 0.1 * width # set up pseudo-3D for lodim datasets here for _ in range(self.dimensionality, 3): mi = np.append(mi, 0.0) ma = np.append(ma, 1.0) num_pseudo_dims = get_num_pseudo_dims(coords) self.dimensionality -= num_pseudo_dims for i in range(self.dimensionality, 3): mi[i] = 0.0 ma[i] = 1.0 return mi, ma @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if not valid_netcdf_signature(filename): return False if cls._missing_load_requirements(): return False try: from netCDF4 import Dataset # We use keepweakref here to avoid holding onto the file handle # which can interfere with other is_valid calls. with Dataset(filename, keepweakref=True) as f: f.variables["connect1"] return True except Exception: return False ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/exodus_ii/definitions.py0000644000175100001770000000011414714401662021074 0ustar00runnerdocker# This file is often empty. It can hold definitions related to a frontend. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/exodus_ii/fields.py0000644000175100001770000000217114714401662020034 0ustar00runnerdockerfrom yt.fields.field_info_container import FieldInfoContainer # We need to specify which fields we might have in our dataset. The field info # container subclass here will define which fields it knows about. There are # optionally methods on it that get called which can be subclassed. class ExodusIIFieldInfo(FieldInfoContainer): known_other_fields = ( # Each entry here is of the form # ( "name", ("units", ["fields", "to", "alias"], # "display_name")), ) known_particle_fields = ( # Identical form to above # ( "name", ("units", ["fields", "to", "alias"], # "display_name")), ) def __init__(self, ds, field_list): super().__init__(ds, field_list) for name in self: self[name].take_log = False # If you want, you can check self.field_list def setup_fluid_fields(self): # Here we do anything that might need info about the dataset. # You can use self.alias, self.add_output_field and self.add_field . pass def setup_particle_fields(self, ptype): # This will get called for every particle type. 
pass ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/exodus_ii/io.py0000644000175100001770000000710214714401662017174 0ustar00runnerdockerimport numpy as np from yt.utilities.file_handler import NetCDF4FileHandler from yt.utilities.io_handler import BaseIOHandler class IOHandlerExodusII(BaseIOHandler): _particle_reader = False _dataset_type = "exodus_ii" _INDEX_OFFSET = 1 def __init__(self, ds): self.filename = ds.index_filename exodus_ii_handler = NetCDF4FileHandler(self.filename) self.handler = exodus_ii_handler super().__init__(ds) self.node_fields = ds._get_nod_names() self.elem_fields = ds._get_elem_names() def _read_particle_coords(self, chunks, ptf): pass def _read_particle_fields(self, chunks, ptf, selector): pass def _read_fluid_selection(self, chunks, selector, fields, size): # This needs to allocate a set of arrays inside a dictionary, where the # keys are the (ftype, fname) tuples and the values are arrays that # have been masked using whatever selector method is appropriate. The # dict gets returned at the end and it should be flat, with selected # data. Note that if you're reading grid data, you might need to # special-case a grid selector object. with self.handler.open_ds() as ds: chunks = list(chunks) rv = {} for field in fields: ftype, fname = field if ftype == "all": ci = np.concatenate( [ mesh.connectivity_indices - self._INDEX_OFFSET for mesh in self.ds.index.mesh_union ] ) else: ci = ds.variables[ftype][:] - self._INDEX_OFFSET num_elem = ci.shape[0] if fname in self.node_fields: nodes_per_element = ci.shape[1] rv[field] = np.zeros((num_elem, nodes_per_element), dtype="float64") elif fname in self.elem_fields: rv[field] = np.zeros(num_elem, dtype="float64") for field in fields: ind = 0 ftype, fname = field if ftype == "all": mesh_ids = [mesh.mesh_id + 1 for mesh in self.ds.index.mesh_union] objs = list(self.ds.index.mesh_union) else: mesh_ids = [int(ftype.replace("connect", ""))] chunk = chunks[mesh_ids[0] - 1] objs = chunk.objs if fname in self.node_fields: field_ind = self.node_fields.index(fname) fdata = ds.variables["vals_nod_var%d" % (field_ind + 1)] for g in objs: ci = g.connectivity_indices - self._INDEX_OFFSET data = fdata[self.ds.step][ci] ind += g.select(selector, data, rv[field], ind) # caches if fname in self.elem_fields: field_ind = self.elem_fields.index(fname) for g, mesh_id in zip(objs, mesh_ids, strict=True): fdata = ds.variables[ "vals_elem_var%deb%s" % (field_ind + 1, mesh_id) ][:] data = fdata[self.ds.step, :] ind += g.select(selector, data, rv[field], ind) # caches rv[field] = rv[field][:ind] return rv def _read_chunk_data(self, chunk, fields): # This reads the data from a single chunk, and is only used for # caching. 
pass ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/exodus_ii/misc.py0000644000175100001770000000000014714401662017506 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/exodus_ii/simulation_handling.py0000644000175100001770000000644314714401662022624 0ustar00runnerdockerimport glob from yt.data_objects.time_series import DatasetSeries from yt.funcs import only_on_root from yt.loaders import load from yt.utilities.exceptions import YTUnidentifiedDataType from yt.utilities.logger import ytLogger as mylog from yt.utilities.parallel_tools.parallel_analysis_interface import parallel_objects class ExodusIISimulation(DatasetSeries): r""" Initialize an ExodusII Simulation object. Upon creation, the input directory is searched for valid ExodusIIDatasets. The get_time_series method can be used to generate a DatasetSeries object. Parameters ---------- simulation_directory : str The directory that contains the simulation data. Examples -------- >>> import yt >>> sim = yt.load_simulation("demo_second", "ExodusII") >>> sim.get_time_series() >>> for ds in sim: ... print(ds.current_time) """ def __init__(self, simulation_directory, find_outputs=False): self.simulation_directory = simulation_directory fn_pattern = f"{self.simulation_directory}/*" potential_outputs = glob.glob(fn_pattern) self.all_outputs = self._check_for_outputs(potential_outputs) self.all_outputs.sort(key=lambda obj: obj["filename"]) def __iter__(self): for o in self._pre_outputs: fn, step = o ds = load(fn, step=step) self._setup_function(ds) yield ds def __getitem__(self, key): if isinstance(key, slice): if isinstance(key.start, float): return self.get_range(key.start, key.stop) # This will return a sliced up object! return DatasetSeries(self._pre_outputs[key], self.parallel) o = self._pre_outputs[key] fn, step = o o = load(fn, step=step) self._setup_function(o) return o def get_time_series(self, parallel=False, setup_function=None): r""" Instantiate a DatasetSeries object for a set of outputs. If no additional keywords are given, a DatasetSeries object will be created with all potential datasets created by the simulation. Fine-level filtering is currently not implemented. """ all_outputs = self.all_outputs ds_list = [] for output in all_outputs: num_steps = output["num_steps"] fn = output["filename"] for step in range(num_steps): ds_list.append((fn, step)) super().__init__(ds_list, parallel=parallel, setup_function=setup_function) def _check_for_outputs(self, potential_outputs): r""" Check a list of files to see if they are valid datasets.
""" only_on_root( mylog.info, "Checking %d potential outputs.", len(potential_outputs) ) my_outputs = {} for my_storage, output in parallel_objects( potential_outputs, storage=my_outputs ): try: ds = load(output) except (FileNotFoundError, YTUnidentifiedDataType): mylog.error("Failed to load %s", output) continue my_storage.result = {"filename": output, "num_steps": ds.num_steps} my_outputs = [ my_output for my_output in my_outputs.values() if my_output is not None ] return my_outputs ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3151522 yt-4.4.0/yt/frontends/exodus_ii/tests/0000755000175100001770000000000014714401715017354 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/exodus_ii/tests/__init__.py0000644000175100001770000000000014714401662021454 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/exodus_ii/tests/test_outputs.py0000644000175100001770000000475414714401662022523 0ustar00runnerdockerfrom numpy.testing import assert_array_equal, assert_equal from yt.testing import requires_file from yt.utilities.answer_testing.framework import ( GenericArrayTest, data_dir_load, requires_ds, ) out = "ExodusII/out.e" @requires_file(out) def test_out(): ds = data_dir_load(out) field_list = [ ("all", "conv_indicator"), ("all", "conv_marker"), ("all", "convected"), ("all", "diffused"), ("connect1", "conv_indicator"), ("connect1", "conv_marker"), ("connect1", "convected"), ("connect1", "diffused"), ("connect2", "conv_indicator"), ("connect2", "conv_marker"), ("connect2", "convected"), ("connect2", "diffused"), ] assert_equal(str(ds), "out.e") assert_equal(ds.dimensionality, 3) assert_equal(ds.current_time, 0.0) assert_array_equal(ds.parameters["nod_names"], ["convected", "diffused"]) assert_equal(ds.parameters["num_meshes"], 2) assert_array_equal(ds.field_list, field_list) out_s002 = "ExodusII/out.e-s002" @requires_file(out_s002) def test_out002(): ds = data_dir_load(out_s002) field_list = [ ("all", "conv_indicator"), ("all", "conv_marker"), ("all", "convected"), ("all", "diffused"), ("connect1", "conv_indicator"), ("connect1", "conv_marker"), ("connect1", "convected"), ("connect1", "diffused"), ("connect2", "conv_indicator"), ("connect2", "conv_marker"), ("connect2", "convected"), ("connect2", "diffused"), ] assert_equal(str(ds), "out.e-s002") assert_equal(ds.dimensionality, 3) assert_equal(ds.current_time, 2.0) assert_array_equal(ds.field_list, field_list) gold = "ExodusII/gold.e" @requires_file(gold) def test_gold(): ds = data_dir_load(gold) field_list = [("all", "forced"), ("connect1", "forced")] assert_equal(str(ds), "gold.e") assert_array_equal(ds.field_list, field_list) big_data = "MOOSE_sample_data/mps_out.e" @requires_ds(big_data) def test_displacement_fields(): displacement_dicts = [ {"connect2": (5.0, [0.0, 0.0, 0.0])}, {"connect1": (1.0, [1.0, 2.0, 3.0]), "connect2": (0.0, [0.0, 0.0, 0.0])}, ] for disp in displacement_dicts: ds = data_dir_load(big_data, displacements=disp) for mesh in ds.index.meshes: def array_func(): return mesh.connectivity_coords # noqa: B023 yield GenericArrayTest(ds, array_func, 12) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/exodus_ii/util.py0000644000175100001770000000351514714401662017546 0ustar00runnerdockerimport re import string from collections import OrderedDict from 
itertools import takewhile import numpy as np def get_num_pseudo_dims(coords): D = coords.shape[1] return sum(np.all(coords[:, dim] == 0.0) for dim in range(D)) def sanitize_string(s): _printable = {ord(_) for _ in string.printable} return "".join(chr(_) for _ in takewhile(lambda a: a in _printable, s)) def load_info_records(info_records): info_records_parsed = [sanitize_string(line_chars) for line_chars in info_records] return group_by_sections(info_records_parsed) def group_by_sections(info_records): # 1. Split by top groupings top_levels = get_top_levels(info_records) # 2. Determine if in section by index number grouped = OrderedDict() for tidx, top_level in enumerate(top_levels): grouped[top_level[1]] = [] try: next_idx = top_levels[tidx + 1][0] except IndexError: next_idx = len(info_records) - 1 for idx in range(top_level[0], next_idx): if idx == top_level[0]: continue grouped[top_level[1]].append(info_records[idx]) if "Version Info" in grouped.keys(): version_info = OrderedDict() for line in grouped["Version Info"]: split_line = line.split(":") key = split_line[0] val = ":".join(split_line[1:]).lstrip().rstrip() if key != "": version_info[key] = val grouped["Version Info"] = version_info return grouped def get_top_levels(info_records): top_levels = [] for idx, line in enumerate(info_records): pattern = re.compile(r"###[a-zA-Z\s]+") if pattern.match(line): clean_line = re.sub(r"[^\w\s]", "", line).lstrip().rstrip() top_levels.append([idx, clean_line]) return top_levels ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3191524 yt-4.4.0/yt/frontends/fits/0000755000175100001770000000000014714401715015167 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/fits/__init__.py0000644000175100001770000000000014714401662017267 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/fits/api.py0000644000175100001770000000047014714401662016314 0ustar00runnerdockerfrom . 
import tests from .data_structures import ( EventsFITSDataset, FITSDataset, FITSGrid, FITSHierarchy, SkyDataFITSDataset, SpectralCubeFITSDataset, SpectralCubeFITSHierarchy, ) from .fields import FITSFieldInfo from .io import IOHandlerFITS from .misc import setup_counts_fields ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/fits/data_structures.py0000644000175100001770000010563514714401662020770 0ustar00runnerdockerimport os import time import uuid import warnings import weakref from collections import defaultdict from functools import cached_property import numpy as np from more_itertools import always_iterable from yt.config import ytcfg from yt.data_objects.index_subobjects.grid_patch import AMRGridPatch from yt.data_objects.static_output import Dataset from yt.fields.field_info_container import FieldInfoContainer from yt.funcs import mylog, setdefaultattr from yt.geometry.api import Geometry from yt.geometry.geometry_handler import YTDataChunk from yt.geometry.grid_geometry_handler import GridIndex from yt.units import dimensions from yt.units.unit_lookup_table import ( # type: ignore default_unit_symbol_lut, unit_prefixes, ) from yt.units.unit_object import UnitParseError # type: ignore from yt.units.yt_array import YTQuantity from yt.utilities.decompose import decompose_array, get_psize from yt.utilities.file_handler import FITSFileHandler from yt.utilities.io_handler import io_registry from yt.utilities.on_demand_imports import NotAModule, _astropy from .fields import FITSFieldInfo, WCSFITSFieldInfo, YTFITSFieldInfo lon_prefixes = ["X", "RA", "GLON", "LINEAR"] lat_prefixes = ["Y", "DEC", "GLAT", "LINEAR"] spec_names = {"V": "Velocity", "F": "Frequency", "E": "Energy", "W": "Wavelength"} space_prefixes = list(set(lon_prefixes + lat_prefixes)) unique_sky_prefixes = set(space_prefixes) unique_sky_prefixes.difference_update({"X", "Y", "LINEAR"}) sky_prefixes = list(unique_sky_prefixes) spec_prefixes = list(spec_names.keys()) class FITSGrid(AMRGridPatch): _id_offset = 0 def __init__(self, id, index, level): AMRGridPatch.__init__(self, id, filename=index.index_filename, index=index) self.Parent = None self.Children = [] self.Level = 0 class FITSHierarchy(GridIndex): grid = FITSGrid def __init__(self, ds, dataset_type="fits"): self.dataset_type = dataset_type self.field_indexes = {} self.dataset = weakref.proxy(ds) # for now, the index file is the dataset self.index_filename = self.dataset.parameter_filename self.directory = os.path.dirname(self.index_filename) self._handle = ds._handle self.float_type = np.float64 GridIndex.__init__(self, ds, dataset_type) def _initialize_data_storage(self): pass def _guess_name_from_units(self, units): field_from_unit = {"Jy": "intensity", "K": "temperature"} for k, v in field_from_unit.items(): if k in units: mylog.warning( "Guessing this is a %s field based on its units of %s.", v, k ) return v return None def _determine_image_units(self, bunit): try: try: # First let AstroPy attempt to figure the unit out u = 1.0 * _astropy.units.Unit(bunit, format="fits") u = YTQuantity.from_astropy(u).units except ValueError: try: # Let yt try it by itself u = self.ds.quan(1.0, bunit).units except UnitParseError: return "dimensionless" return str(u) except KeyError: return "dimensionless" def _ensure_same_dims(self, hdu): ds = self.dataset conditions = [hdu.header["naxis"] != ds.primary_header["naxis"]] for i in range(ds.naxis): nax = "naxis%d" % (i + 1) conditions.append(hdu.header[nax] != 
ds.primary_header[nax]) if np.any(conditions): return False else: return True def _detect_output_fields(self): self.field_list = [] self._axis_map = {} self._file_map = {} self._ext_map = {} self._scale_map = {} dup_field_index = {} # Since FITS header keywords are case-insensitive, we only pick a subset of # prefixes, ones that we expect to end up in headers. known_units = {unit.lower(): unit for unit in self.ds.unit_registry.lut} for unit in list(known_units.values()): if unit in self.ds.unit_registry.prefixable_units: for p in ["n", "u", "m", "c", "k"]: known_units[(p + unit).lower()] = p + unit # We create a field from each slice on the 4th axis if self.dataset.naxis == 4: naxis4 = self.dataset.primary_header["naxis4"] else: naxis4 = 1 for i, fits_file in enumerate(self.dataset._handle._fits_files): for j, hdu in enumerate(fits_file): if ( isinstance(hdu, _astropy.pyfits.BinTableHDU) or hdu.header["naxis"] == 0 ): continue if self._ensure_same_dims(hdu): units = self._determine_image_units(hdu.header["bunit"]) try: # Grab field name from btype fname = hdu.header["btype"] except KeyError: # Try to guess the name from the units fname = self._guess_name_from_units(units) # When all else fails if fname is None: fname = "image_%d" % (j) if self.ds.num_files > 1 and fname.startswith("image"): fname += "_file_%d" % (i) if ("fits", fname) in self.field_list: if fname in dup_field_index: dup_field_index[fname] += 1 else: dup_field_index[fname] = 1 mylog.warning( "This field has the same name as a previously loaded " "field. Changing the name from %s to %s_%d. To avoid " "this, change one of the BTYPE header keywords.", fname, fname, dup_field_index[fname], ) fname += "_%d" % (dup_field_index[fname]) for k in range(naxis4): if naxis4 > 1: fname += "_%s_%d" % (hdu.header["CTYPE4"], k + 1) self._axis_map[fname] = k self._file_map[fname] = fits_file self._ext_map[fname] = j self._scale_map[fname] = [0.0, 1.0] if "bzero" in hdu.header: self._scale_map[fname][0] = hdu.header["bzero"] if "bscale" in hdu.header: self._scale_map[fname][1] = hdu.header["bscale"] self.field_list.append(("fits", fname)) self.dataset.field_units[fname] = units mylog.info("Adding field %s to the list of fields.", fname) if units == "dimensionless": mylog.warning( "Could not determine dimensions for field %s, " "setting to dimensionless.", fname, ) else: mylog.warning( "Image block %s does not have the same dimensions " "as the primary and will not be available as a field.", hdu.name.lower(), ) def _count_grids(self): self.num_grids = self.ds.parameters["nprocs"] def _parse_index(self): ds = self.dataset # If nprocs > 1, decompose the domain into virtual grids if self.num_grids > 1: self._domain_decomp() else: self.grid_left_edge[0, :] = ds.domain_left_edge self.grid_right_edge[0, :] = ds.domain_right_edge self.grid_dimensions[0] = ds.domain_dimensions self.grid_levels.flat[:] = 0 self.grids = np.empty(self.num_grids, dtype="object") for i in range(self.num_grids): self.grids[i] = self.grid(i, self, self.grid_levels[i, 0]) def _domain_decomp(self): bbox = np.array( [self.ds.domain_left_edge, self.ds.domain_right_edge] ).transpose() dims = self.ds.domain_dimensions psize = get_psize(dims, self.num_grids) gle, gre, shapes, slices, _ = decompose_array(dims, psize, bbox) self.grid_left_edge = self.ds.arr(gle, "code_length") self.grid_right_edge = self.ds.arr(gre, "code_length") self.grid_dimensions = np.array(shapes, dtype="int32") def _populate_grid_objects(self): for i in range(self.num_grids): self.grids[i]._prepare_grid() 
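# (Added note, not in the original source: _prepare_grid() copies the
# index-level metadata -- left/right edges, dimensions, level -- onto each
# grid object, and _setup_dx() below then derives the per-grid cell width,
# effectively
#     dds = (RightEdge - LeftEdge) / ActiveDimensions
# which for this single-level FITS index is uniform across all grids.)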
self.grids[i]._setup_dx() self.max_level = 0 def _setup_derived_fields(self): super()._setup_derived_fields() [self.dataset.conversion_factors[field] for field in self.field_list] for field in self.field_list: if field not in self.derived_field_list: self.derived_field_list.append(field) for field in self.derived_field_list: f = self.dataset.field_info[field] if f.is_alias: # Translating an already-converted field self.dataset.conversion_factors[field] = 1.0 def _setup_data_io(self): self.io = io_registry[self.dataset_type](self.dataset) def _chunk_io(self, dobj, cache=True, local_only=False): # local_only is only useful for inline datasets and requires # implementation by subclasses. gfiles = defaultdict(list) gobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info) for g in gobjs: gfiles[g.id].append(g) for fn in sorted(gfiles): gs = gfiles[fn] yield YTDataChunk( dobj, "io", gs, self._count_selection(dobj, gs), cache=cache ) def find_primary_header(fileh): # Sometimes the primary hdu doesn't have an image if len(fileh) > 1 and fileh[0].header["naxis"] == 0: first_image = 1 else: first_image = 0 header = fileh[first_image].header return header, first_image def check_fits_valid(filename): ext = filename.rsplit(".", 1)[-1] if ext.upper() in ("GZ", "FZ"): # We don't know for sure that there will be > 1 ext = filename.rsplit(".", 1)[0].rsplit(".", 1)[-1] if ext.upper() not in ("FITS", "FTS"): return None elif isinstance(_astropy.pyfits, NotAModule): raise RuntimeError( "This appears to be a FITS file, but AstroPy is not installed." ) try: with warnings.catch_warnings(): warnings.filterwarnings("ignore", category=UserWarning, append=True) fileh = _astropy.pyfits.open(filename) header, _ = find_primary_header(fileh) if header["naxis"] >= 2: return fileh else: fileh.close() except Exception: pass return None def check_sky_coords(filename, ndim): fileh = check_fits_valid(filename) if fileh is not None: try: if len(fileh) > 1 and fileh[1].name == "EVENTS" and ndim == 2: fileh.close() return True else: header, _ = find_primary_header(fileh) if header["naxis"] < ndim: return False axis_names = [ header.get("ctype%d" % (i + 1), "") for i in range(header["naxis"]) ] if len(axis_names) == 3 and axis_names.count("LINEAR") == 2: return any(a[0] in spec_prefixes for a in axis_names) x = find_axes(axis_names, sky_prefixes + spec_prefixes) fileh.close() return x >= ndim except Exception: pass return False class FITSDataset(Dataset): _load_requirements = ["astropy"] _index_class = FITSHierarchy _field_info_class: type[FieldInfoContainer] = FITSFieldInfo _dataset_type = "fits" _handle = None def __init__( self, filename, dataset_type="fits", auxiliary_files=None, nprocs=None, storage_filename=None, nan_mask=None, suppress_astropy_warnings=True, parameters=None, units_override=None, unit_system="cgs", ): if parameters is None: parameters = {} parameters["nprocs"] = nprocs self.specified_parameters = parameters if suppress_astropy_warnings: warnings.filterwarnings("ignore", module="astropy", append=True) self.filenames = [filename] + list(always_iterable(auxiliary_files)) self.num_files = len(self.filenames) self.fluid_types += ("fits",) if nan_mask is None: self.nan_mask = {} elif isinstance(nan_mask, float): self.nan_mask = {"all": nan_mask} elif isinstance(nan_mask, dict): self.nan_mask = nan_mask self._handle = FITSFileHandler(self.filenames[0]) if isinstance( self.filenames[0], _astropy.pyfits.hdu.image._ImageBaseHDU ) or isinstance(self.filenames[0], _astropy.pyfits.HDUList): fn = 
f"InMemoryFITSFile_{uuid.uuid4().hex}" else: fn = self.filenames[0] self._handle._fits_files.append(self._handle) if self.num_files > 1: for fits_file in auxiliary_files: if isinstance(fits_file, _astropy.pyfits.hdu.image._ImageBaseHDU): f = _astropy.pyfits.HDUList([fits_file]) elif isinstance(fits_file, _astropy.pyfits.HDUList): f = fits_file else: if os.path.exists(fits_file): fn = fits_file else: fn = os.path.join(ytcfg.get("yt", "test_data_dir"), fits_file) f = _astropy.pyfits.open( fn, memmap=True, do_not_scale_image_data=True, ignore_blank=True ) self._handle._fits_files.append(f) self.refine_by = 2 Dataset.__init__( self, fn, dataset_type, units_override=units_override, unit_system=unit_system, ) self.storage_filename = storage_filename def _set_code_unit_attributes(self): """ Generates the conversion to various physical _units based on the parameter file """ if getattr(self, "length_unit", None) is None: default_length_units = [ u for u, v in default_unit_symbol_lut.items() if str(v[1]) == "(length)" ] more_length_units = [] for unit in default_length_units: if unit in self.unit_registry.prefixable_units: more_length_units += [prefix + unit for prefix in unit_prefixes] default_length_units += more_length_units file_units = [] cunits = [self.wcs.wcs.cunit[i] for i in range(self.dimensionality)] for unit in (_.to_string() for _ in cunits): if unit in default_length_units: file_units.append(unit) if len(set(file_units)) == 1: length_factor = self.wcs.wcs.cdelt[0] length_unit = str(file_units[0]) mylog.info("Found length units of %s.", length_unit) else: self.no_cgs_equiv_length = True mylog.warning("No length conversion provided. Assuming 1 = 1 cm.") length_factor = 1.0 length_unit = "cm" setdefaultattr(self, "length_unit", self.quan(length_factor, length_unit)) for unit, cgs in [("time", "s"), ("mass", "g")]: # We set these to cgs for now, but they may have been overridden if getattr(self, unit + "_unit", None) is not None: continue mylog.warning("Assuming 1.0 = 1.0 %s", cgs) setdefaultattr(self, f"{unit}_unit", self.quan(1.0, cgs)) self.magnetic_unit = np.sqrt( 4 * np.pi * self.mass_unit / (self.time_unit**2 * self.length_unit) ) self.magnetic_unit.convert_to_units("gauss") self.velocity_unit = self.length_unit / self.time_unit @property def filename(self) -> str: if self._input_filename.startswith("InMemory"): return self._input_filename else: return super().filename @cached_property def unique_identifier(self) -> str: if self.filename.startswith("InMemory"): return str(time.time()) else: return super().unique_identifier def _parse_parameter_file(self): self._determine_structure() self._determine_axes() # Determine dimensionality self.dimensionality = self.naxis self.geometry = Geometry.CARTESIAN # Sometimes a FITS file has a 4D datacube, in which case # we take the 4th axis and assume it consists of different fields. 
if self.dimensionality == 4: self.dimensionality = 3 self._determine_wcs() self.current_time = 0.0 self.domain_dimensions = np.array(self.dims)[: self.dimensionality] if self.dimensionality == 2: self.domain_dimensions = np.append(self.domain_dimensions, [1]) self._determine_bbox() # Get the simulation time try: self.current_time = self.parameters["time"] except Exception: mylog.warning("Cannot find time") self.current_time = 0.0 pass # For now we'll ignore these self._periodicity = (False,) * 3 self.current_redshift = 0.0 self.omega_lambda = 0.0 self.omega_matter = 0.0 self.hubble_constant = 0.0 self.cosmological_simulation = 0 self._determine_nprocs() # Now we can set up some of our parameters for convenience. for k, v in self.primary_header.items(): self.parameters[k] = v # Remove potential default keys self.parameters.pop("", None) def _determine_nprocs(self): # If nprocs is None, do some automatic decomposition of the domain if self.specified_parameters["nprocs"] is None: nprocs = np.around( np.prod(self.domain_dimensions) / 32**self.dimensionality ).astype("int64") self.parameters["nprocs"] = max(min(nprocs, 512), 1) else: self.parameters["nprocs"] = self.specified_parameters["nprocs"] def _determine_structure(self): self.primary_header, self.first_image = find_primary_header(self._handle) self.naxis = self.primary_header["naxis"] self.axis_names = [ self.primary_header.get("ctype%d" % (i + 1), "LINEAR") for i in range(self.naxis) ] self.dims = [ self.primary_header["naxis%d" % (i + 1)] for i in range(self.naxis) ] def _determine_wcs(self): wcs = _astropy.pywcs.WCS(header=self.primary_header) if self.naxis == 4: self.wcs = _astropy.pywcs.WCS(naxis=3) self.wcs.wcs.crpix = wcs.wcs.crpix[:3] self.wcs.wcs.cdelt = wcs.wcs.cdelt[:3] self.wcs.wcs.crval = wcs.wcs.crval[:3] self.wcs.wcs.cunit = [str(unit) for unit in wcs.wcs.cunit][:3] self.wcs.wcs.ctype = list(wcs.wcs.ctype)[:3] else: self.wcs = wcs def _determine_bbox(self): domain_left_edge = np.array([0.5] * 3) domain_right_edge = np.array( [float(dim) + 0.5 for dim in self.domain_dimensions] ) if self.dimensionality == 2: domain_left_edge[-1] = 0.5 domain_right_edge[-1] = 1.5 self.domain_left_edge = domain_left_edge self.domain_right_edge = domain_right_edge def _determine_axes(self): self.lat_axis = 1 self.lon_axis = 0 self.lat_name = "Y" self.lon_name = "X" @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if cls._missing_load_requirements(): return False try: fileh = check_fits_valid(filename) except Exception: return False if fileh is None: return False else: fileh.close() return True @classmethod def _guess_candidates(cls, base, directories, files): candidates = [] for fn, fnl in ((_, _.lower()) for _ in files): if ( fnl.endswith(".fits") or fnl.endswith(".fits.gz") or fnl.endswith(".fits.fz") ): candidates.append(fn) # FITS files don't preclude subdirectories return candidates, True def close(self): self._handle.close() def find_axes(axis_names, prefixes): x = 0 for p in prefixes: y = np.char.startswith(axis_names, p) x += np.any(y) return x class YTFITSDataset(FITSDataset): _load_requirements = ["astropy"] _field_info_class = YTFITSFieldInfo def _parse_parameter_file(self): super()._parse_parameter_file() # Get the current time if "time" in self.primary_header: self.current_time = self.primary_header["time"] def _set_code_unit_attributes(self): """ Generates the conversion to various physical _units based on the parameter file """ for unit, cgs in [ ("length", "cm"), ("time", "s"), ("mass", "g"), 
("velocity", "cm/s"), ("magnetic", "gauss"), ]: if unit == "magnetic": short_unit = "bfunit" else: short_unit = f"{unit[0]}unit" if short_unit in self.primary_header: # units should now be in header u = self.quan( self.primary_header[short_unit], self.primary_header.comments[short_unit].strip("[]"), ) mylog.info("Found %s units of %s.", unit, u) else: if unit == "length": # Falling back to old way of getting units for length # in old files u = self.quan(1.0, str(self.wcs.wcs.cunit[0])) mylog.info("Found %s units of %s.", unit, u) else: # Give up otherwise u = self.quan(1.0, cgs) mylog.warning( "No unit for %s found. Assuming 1.0 code_%s = 1.0 %s", unit, unit, cgs, ) setdefaultattr(self, f"{unit}_unit", u) def _determine_bbox(self): dx = np.zeros(3) dx[: self.dimensionality] = self.wcs.wcs.cdelt domain_left_edge = np.zeros(3) domain_left_edge[: self.dimensionality] = self.wcs.wcs.crval - dx[ : self.dimensionality ] * (self.wcs.wcs.crpix - 0.5) domain_right_edge = domain_left_edge + dx * self.domain_dimensions if self.dimensionality == 2: domain_left_edge[-1] = 0.0 domain_right_edge[-1] = dx[0] self.domain_left_edge = domain_left_edge self.domain_right_edge = domain_right_edge @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if cls._missing_load_requirements(): return False try: fileh = check_fits_valid(filename) except Exception: return False if fileh is None: return False else: if "WCSNAME" in fileh[0].header: isyt = fileh[0].header["WCSNAME"].strip() == "yt" else: isyt = False fileh.close() return isyt class SkyDataFITSDataset(FITSDataset): _load_requirements = ["astropy"] _field_info_class = WCSFITSFieldInfo def _determine_wcs(self): super()._determine_wcs() end = min(self.dimensionality + 1, 4) self.ctypes = np.array( [self.primary_header["CTYPE%d" % (i)] for i in range(1, end)] ) self.wcs_2d = self.wcs def _parse_parameter_file(self): super()._parse_parameter_file() end = min(self.dimensionality + 1, 4) self.geometry = Geometry.SPECTRAL_CUBE log_str = "Detected these axes: " + "%s " * len(self.ctypes) mylog.info(log_str, *self.ctypes) self.lat_axis = np.zeros((end - 1), dtype="bool") for p in lat_prefixes: self.lat_axis += np.char.startswith(self.ctypes, p) self.lat_axis = np.where(self.lat_axis)[0][0] self.lat_name = self.ctypes[self.lat_axis].split("-")[0].lower() self.lon_axis = np.zeros((end - 1), dtype="bool") for p in lon_prefixes: self.lon_axis += np.char.startswith(self.ctypes, p) self.lon_axis = np.where(self.lon_axis)[0][0] self.lon_name = self.ctypes[self.lon_axis].split("-")[0].lower() if self.lat_axis == self.lon_axis and self.lat_name == self.lon_name: self.lat_axis = 1 self.lon_axis = 0 self.lat_name = "Y" self.lon_name = "X" self.spec_axis = 2 self.spec_name = "z" self.spec_unit = "" def _set_code_unit_attributes(self): super()._set_code_unit_attributes() units = self.wcs_2d.wcs.cunit[0] if units == "deg": units = "degree" if units == "rad": units = "radian" pixel_area = np.prod(np.abs(self.wcs_2d.wcs.cdelt)) pixel_area = self.quan(pixel_area, f"{units}**2").in_cgs() pixel_dims = pixel_area.units.dimensions self.unit_registry.add("pixel", float(pixel_area.value), dimensions=pixel_dims) if "beam_size" in self.specified_parameters: beam_size = self.specified_parameters["beam_size"] beam_size = self.quan(beam_size[0], beam_size[1]).in_cgs().value self.unit_registry.add("beam", beam_size, dimensions=dimensions.solid_angle) @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if cls._missing_load_requirements(): return False try: 
return check_sky_coords(filename, ndim=2) except Exception: return False class SpectralCubeFITSHierarchy(FITSHierarchy): def _domain_decomp(self): dz = self.ds.quan(1.0, "code_length") * self.ds.spectral_factor self.grid_dimensions[:, 2] = np.around( float(self.ds.domain_dimensions[2]) / self.num_grids ).astype("int64") self.grid_dimensions[-1, 2] += self.ds.domain_dimensions[2] % self.num_grids self.grid_left_edge[0, 2] = self.ds.domain_left_edge[2] self.grid_left_edge[1:, 2] = ( self.ds.domain_left_edge[2] + np.cumsum(self.grid_dimensions[:-1, 2]) * dz ) self.grid_right_edge[:, 2] = ( self.grid_left_edge[:, 2] + self.grid_dimensions[:, 2] * dz ) self.grid_left_edge[:, :2] = self.ds.domain_left_edge[:2] self.grid_right_edge[:, :2] = self.ds.domain_right_edge[:2] self.grid_dimensions[:, :2] = self.ds.domain_dimensions[:2] class SpectralCubeFITSDataset(SkyDataFITSDataset): _load_requirements = ["astropy"] _index_class = SpectralCubeFITSHierarchy def __init__( self, filename, auxiliary_files=None, nprocs=None, storage_filename=None, nan_mask=None, spectral_factor=1.0, suppress_astropy_warnings=True, parameters=None, units_override=None, unit_system="cgs", ): if auxiliary_files is None: auxiliary_files = [] self.spectral_factor = spectral_factor super().__init__( filename, nprocs=nprocs, auxiliary_files=auxiliary_files, storage_filename=storage_filename, suppress_astropy_warnings=suppress_astropy_warnings, nan_mask=nan_mask, parameters=parameters, units_override=units_override, unit_system=unit_system, ) def _parse_parameter_file(self): super()._parse_parameter_file() self.geometry = Geometry.SPECTRAL_CUBE end = min(self.dimensionality + 1, 4) self.spec_axis = np.zeros(end - 1, dtype="bool") for p in spec_names.keys(): self.spec_axis += np.char.startswith(self.ctypes, p) self.spec_axis = np.where(self.spec_axis)[0][0] self.spec_name = spec_names[self.ctypes[self.spec_axis].split("-")[0][0]] # Extract a subimage from a WCS object self.wcs_2d = self.wcs.sub(["longitude", "latitude"]) self._p0 = self.wcs.wcs.crpix[self.spec_axis] self._dz = self.wcs.wcs.cdelt[self.spec_axis] self._z0 = self.wcs.wcs.crval[self.spec_axis] self.spec_unit = str(self.wcs.wcs.cunit[self.spec_axis]) if self.spectral_factor == "auto": self.spectral_factor = float( max(self.domain_dimensions[[self.lon_axis, self.lat_axis]]) ) self.spectral_factor /= self.domain_dimensions[self.spec_axis] mylog.info("Setting the spectral factor to %f", self.spectral_factor) Dz = ( self.domain_right_edge[self.spec_axis] - self.domain_left_edge[self.spec_axis] ) dre = self.domain_right_edge.copy() dre[self.spec_axis] = ( self.domain_left_edge[self.spec_axis] + self.spectral_factor * Dz ) self.domain_right_edge = dre self._dz /= self.spectral_factor self._p0 = (self._p0 - 0.5) * self.spectral_factor + 0.5 def _determine_nprocs(self): # If nprocs is None, do some automatic decomposition of the domain if self.specified_parameters["nprocs"] is None: nprocs = np.around(self.domain_dimensions[2] / 8).astype("int64") self.parameters["nprocs"] = max(min(nprocs, 512), 1) else: self.parameters["nprocs"] = self.specified_parameters["nprocs"] def spec2pixel(self, spec_value): sv = self.arr(spec_value).in_units(self.spec_unit) return self.arr((sv.v - self._z0) / self._dz + self._p0, "code_length") def pixel2spec(self, pixel_value): pv = self.arr(pixel_value, "code_length") return self.arr((pv.v - self._p0) * self._dz + self._z0, self.spec_unit) @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if cls._missing_load_requirements(): 
return False try: return check_sky_coords(filename, ndim=3) except Exception: return False class EventsFITSHierarchy(FITSHierarchy): def _detect_output_fields(self): ds = self.dataset self.field_list = [] for k, v in ds.events_info.items(): fname = "event_" + k mylog.info("Adding field %s to the list of fields.", fname) self.field_list.append(("io", fname)) if k in ["x", "y"]: field_unit = "code_length" else: field_unit = v self.dataset.field_units["io", fname] = field_unit return def _parse_index(self): super()._parse_index() try: self.grid_particle_count[:] = self.dataset.primary_header["naxis2"] except KeyError: self.grid_particle_count[:] = 0.0 self._particle_indices = np.zeros(self.num_grids + 1, dtype="int64") self._particle_indices[1] = self.grid_particle_count.squeeze() class EventsFITSDataset(SkyDataFITSDataset): _load_requirements = ["astropy"] _index_class = EventsFITSHierarchy def __init__( self, filename, storage_filename=None, suppress_astropy_warnings=True, reblock=1, parameters=None, units_override=None, unit_system="cgs", ): self.reblock = reblock super().__init__( filename, nprocs=1, storage_filename=storage_filename, parameters=parameters, suppress_astropy_warnings=suppress_astropy_warnings, units_override=units_override, unit_system=unit_system, ) def _determine_structure(self): self.first_image = 1 self.primary_header = self._handle[self.first_image].header self.naxis = 2 def _determine_wcs(self): self.wcs = _astropy.pywcs.WCS(naxis=2) self.events_info = {} for k, v in self.primary_header.items(): if k.startswith("TTYP"): if v.lower() in ["x", "y"]: num = k.replace("TTYPE", "") self.events_info[v.lower()] = ( self.primary_header["TLMIN" + num], self.primary_header["TLMAX" + num], self.primary_header["TCTYP" + num], self.primary_header["TCRVL" + num], self.primary_header["TCDLT" + num], self.primary_header["TCRPX" + num], ) elif v.lower() in ["energy", "time"]: num = k.replace("TTYPE", "") unit = self.primary_header["TUNIT" + num].lower() if unit.endswith("ev"): unit = unit.replace("ev", "eV") self.events_info[v.lower()] = unit self.axis_names = [self.events_info[ax][2] for ax in ["x", "y"]] self.wcs.wcs.cdelt = [ self.events_info["x"][4] * self.reblock, self.events_info["y"][4] * self.reblock, ] self.wcs.wcs.crpix = [ (self.events_info["x"][5] - 0.5) / self.reblock + 0.5, (self.events_info["y"][5] - 0.5) / self.reblock + 0.5, ] self.wcs.wcs.ctype = [self.events_info["x"][2], self.events_info["y"][2]] self.wcs.wcs.cunit = ["deg", "deg"] self.wcs.wcs.crval = [self.events_info["x"][3], self.events_info["y"][3]] self.dims = [ (self.events_info["x"][1] - self.events_info["x"][0]) / self.reblock, (self.events_info["y"][1] - self.events_info["y"][0]) / self.reblock, ] self.ctypes = self.axis_names self.wcs_2d = self.wcs @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if cls._missing_load_requirements(): return False try: fileh = check_fits_valid(filename) except Exception: return False if fileh is not None: try: valid = fileh[1].name == "EVENTS" fileh.close() return valid except Exception: pass return False ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/fits/definitions.py0000644000175100001770000000000014714401662020043 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/fits/fields.py0000644000175100001770000001027514714401662017015 0ustar00runnerdockerfrom yt._typing import KnownFieldsT from 
yt.fields.field_info_container import FieldInfoContainer class FITSFieldInfo(FieldInfoContainer): known_other_fields = () def __init__(self, ds, field_list, slice_info=None): super().__init__(ds, field_list, slice_info=slice_info) for field in ds.field_list: if field[0] == "fits": self[field].take_log = False class YTFITSFieldInfo(FieldInfoContainer): known_other_fields: KnownFieldsT = ( ("density", ("code_mass/code_length**3", ["density"], None)), ( "dark_matter_density", ("code_mass/code_length**3", ["dark_matter_density"], None), ), ("number_density", ("1/code_length**3", ["number_density"], None)), ("pressure", ("dyne/code_length**2", ["pressure"], None)), ("thermal_energy", ("erg / g", ["specific_thermal_energy"], None)), ("temperature", ("K", ["temperature"], None)), ("velocity_x", ("code_length/code_time", ["velocity_x"], None)), ("velocity_y", ("code_length/code_time", ["velocity_y"], None)), ("velocity_z", ("code_length/code_time", ["velocity_z"], None)), ("magnetic_field_x", ("gauss", [], None)), ("magnetic_field_y", ("gauss", [], None)), ("magnetic_field_z", ("gauss", [], None)), ("metallicity", ("Zsun", ["metallicity"], None)), # We need to have a bunch of species fields here, too ("metal_density", ("code_mass/code_length**3", ["metal_density"], None)), ("hi_density", ("code_mass/code_length**3", ["hi_density"], None)), ("hii_density", ("code_mass/code_length**3", ["hii_density"], None)), ("h2i_density", ("code_mass/code_length**3", ["h2i_density"], None)), ("h2ii_density", ("code_mass/code_length**3", ["h2ii_density"], None)), ("h2m_density", ("code_mass/code_length**3", ["h2m_density"], None)), ("hei_density", ("code_mass/code_length**3", ["hei_density"], None)), ("heii_density", ("code_mass/code_length**3", ["heii_density"], None)), ("heiii_density", ("code_mass/code_length**3", ["heiii_density"], None)), ("hdi_density", ("code_mass/code_length**3", ["hdi_density"], None)), ("di_density", ("code_mass/code_length**3", ["di_density"], None)), ("dii_density", ("code_mass/code_length**3", ["dii_density"], None)), ) def __init__(self, ds, field_list, slice_info=None): super().__init__(ds, field_list, slice_info=slice_info) class WCSFITSFieldInfo(FITSFieldInfo): def setup_fluid_fields(self): wcs_2d = getattr(self.ds, "wcs_2d", self.ds.wcs) def _pixel(field, data): return data.ds.arr(data["index", "ones"], "pixel") self.add_field( ("fits", "pixel"), sampling_type="cell", function=_pixel, units="pixel" ) def _get_2d_wcs(data, axis): w_coords = wcs_2d.wcs_pix2world(data["index", "x"], data["index", "y"], 1) return w_coords[axis] def world_f(axis, unit): def _world_f(field, data): return data.ds.arr(_get_2d_wcs(data, axis), unit) return _world_f for i, axis, name in [ (0, self.ds.lon_axis, self.ds.lon_name), (1, self.ds.lat_axis, self.ds.lat_name), ]: unit = str(wcs_2d.wcs.cunit[i]) if unit.lower() == "deg": unit = "degree" if unit.lower() == "rad": unit = "radian" self.add_field( ("fits", name), sampling_type="cell", function=world_f(axis, unit), units=unit, ) if self.ds.dimensionality == 3: def _spec(field, data): axis = "xyz"[data.ds.spec_axis] sp = ( data["fits", axis].ndarray_view() - self.ds._p0 ) * self.ds._dz + self.ds._z0 return data.ds.arr(sp, data.ds.spec_unit) self.add_field( ("fits", "spectral"), sampling_type="cell", function=_spec, units=self.ds.spec_unit, display_name=self.ds.spec_name, ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/fits/io.py0000644000175100001770000000732314714401662016156 
0ustar00runnerdockerimport numpy as np from yt.utilities.io_handler import BaseIOHandler from yt.utilities.logger import ytLogger as mylog class IOHandlerFITS(BaseIOHandler): _particle_reader = False _dataset_type = "fits" def __init__(self, ds): super().__init__(ds) self.ds = ds self._handle = ds._handle def _read_particles( self, fields_to_read, type, args, grid_list, count_list, conv_factors ): pass def _read_particle_coords(self, chunks, ptf): pdata = self.ds._handle[self.ds.first_image].data assert len(ptf) == 1 ptype = list(ptf.keys())[0] x = np.asarray(pdata.field("X"), dtype="=f8") y = np.asarray(pdata.field("Y"), dtype="=f8") z = np.ones(x.shape) x = (x - 0.5) / self.ds.reblock + 0.5 y = (y - 0.5) / self.ds.reblock + 0.5 yield ptype, (x, y, z), 0.0 def _read_particle_fields(self, chunks, ptf, selector): pdata = self.ds._handle[self.ds.first_image].data assert len(ptf) == 1 ptype = list(ptf.keys())[0] field_list = ptf[ptype] x = np.asarray(pdata.field("X"), dtype="=f8") y = np.asarray(pdata.field("Y"), dtype="=f8") z = np.ones(x.shape) x = (x - 0.5) / self.ds.reblock + 0.5 y = (y - 0.5) / self.ds.reblock + 0.5 mask = selector.select_points(x, y, z, 0.0) if mask is None: return for field in field_list: fd = field.split("_")[-1] data = pdata.field(fd.upper()) if fd in ["x", "y"]: data = (data.copy() - 0.5) / self.ds.reblock + 0.5 yield (ptype, field), data[mask] def _read_fluid_selection(self, chunks, selector, fields, size): chunks = list(chunks) if any((ftype != "fits" for ftype, fname in fields)): raise NotImplementedError rv = {} dt = "float64" for field in fields: rv[field] = np.empty(size, dtype=dt) ng = sum(len(c.objs) for c in chunks) mylog.debug( "Reading %s cells of %s fields in %s grids", size, [f2 for f1, f2 in fields], ng, ) dx = self.ds.domain_width / self.ds.domain_dimensions for field in fields: ftype, fname = field f = self.ds.index._file_map[fname] ds = f[self.ds.index._ext_map[fname]] bzero, bscale = self.ds.index._scale_map[fname] ind = 0 for chunk in chunks: for g in chunk.objs: start = ((g.LeftEdge - self.ds.domain_left_edge) / dx).d.astype( "int" ) end = start + g.ActiveDimensions slices = [slice(start[i], end[i]) for i in range(3)] if self.ds.dimensionality == 2: nx, ny = g.ActiveDimensions[:2] nz = 1 data = np.zeros((nx, ny, nz)) data[:, :, 0] = ds.data[slices[1], slices[0]].T elif self.ds.naxis == 4: idx = self.ds.index._axis_map[fname] data = ds.data[idx, slices[2], slices[1], slices[0]].T else: data = ds.data[slices[2], slices[1], slices[0]].T if fname in self.ds.nan_mask: data[np.isnan(data)] = self.ds.nan_mask[fname] elif "all" in self.ds.nan_mask: data[np.isnan(data)] = self.ds.nan_mask["all"] data = bzero + bscale * data ind += g.select(selector, data.astype("float64"), rv[field], ind) return rv ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/fits/misc.py0000644000175100001770000002404314714401662016500 0ustar00runnerdockerimport base64 import os from io import BytesIO import numpy as np from yt.fields.derived_field import ValidateSpatial from yt.units.yt_array import YTArray, YTQuantity from yt.utilities.logger import ytLogger as mylog from yt.utilities.on_demand_imports import _astropy def _make_counts(emin, emax): def _counts(field, data): e = data["all", "event_energy"].in_units("keV") mask = np.logical_and(e >= emin, e < emax) x = data["all", "event_x"][mask] y = data["all", "event_y"][mask] z = np.ones(x.shape) pos = np.array([x, y, z]).transpose() img = data.deposit(pos, 
method="count") if data.has_field_parameter("sigma"): sigma = data.get_field_parameter("sigma") else: sigma = None if sigma is not None and sigma > 0.0: kern = _astropy.conv.Gaussian2DKernel(x_stddev=sigma) img[:, :, 0] = _astropy.conv.convolve(img[:, :, 0], kern) return data.ds.arr(img, "counts/pixel") return _counts def setup_counts_fields(ds, ebounds, ftype="gas"): r""" Create deposited image fields from X-ray count data in energy bands. Parameters ---------- ds : ~yt.data_objects.static_output.Dataset The FITS events file dataset to add the counts fields to. ebounds : list of tuples A list of tuples, one for each field, with (emin, emax) as the energy bounds for the image. ftype : string, optional The field type of the resulting field. Defaults to "gas". Examples -------- >>> ds = yt.load("evt.fits") >>> ebounds = [(0.1, 2.0), (2.0, 3.0)] >>> setup_counts_fields(ds, ebounds) """ for emin, emax in ebounds: cfunc = _make_counts(emin, emax) fname = f"counts_{emin}-{emax}" mylog.info("Creating counts field %s.", fname) ds.add_field( (ftype, fname), sampling_type="cell", function=cfunc, units="counts/pixel", validators=[ValidateSpatial()], display_name=f"Counts ({emin}-{emax} keV)", ) def create_spectral_slabs(filename, slab_centers, slab_width, **kwargs): r""" Given a dictionary of spectral slab centers and a width in spectral units, extract data from a spectral cube at these slab centers and return a `FITSDataset` instance containing the different slabs as separate yt fields. Useful for extracting individual lines from a spectral cube and separating them out as different fields. Requires the SpectralCube (https://spectral-cube.readthedocs.io/en/latest/) library. All keyword arguments will be passed on to the `FITSDataset` constructor. Parameters ---------- filename : string The spectral cube FITS file to extract the data from. slab_centers : dict of (float, string) tuples or YTQuantities The centers of the slabs, where the keys are the names of the new fields and the values are (float, string) tuples or YTQuantities, specifying a value for each center and its unit. slab_width : YTQuantity or (float, string) tuple The width of the slab along the spectral axis. Examples -------- >>> slab_centers = { ... "13CN": (218.03117, "GHz"), ... "CH3CH2CHO": (218.284256, "GHz"), ... "CH3NH2": (218.40956, "GHz"), ... } >>> slab_width = (0.05, "GHz") >>> ds = create_spectral_slabs( ... "intensity_cube.fits", slab_centers, slab_width, nan_mask=0.0 ... 
) """ from spectral_cube import SpectralCube from yt.frontends.fits.api import FITSDataset from yt.visualization.fits_image import FITSImageData cube = SpectralCube.read(filename) if not isinstance(slab_width, YTQuantity): slab_width = YTQuantity(slab_width[0], slab_width[1]) slab_data = {} field_units = cube.header.get("bunit", "dimensionless") for k, v in slab_centers.items(): if not isinstance(v, YTQuantity): slab_center = YTQuantity(v[0], v[1]) else: slab_center = v mylog.info("Adding slab field %s at %g %s", k, slab_center.v, slab_center.units) slab_lo = (slab_center - 0.5 * slab_width).to_astropy() slab_hi = (slab_center + 0.5 * slab_width).to_astropy() subcube = cube.spectral_slab(slab_lo, slab_hi) slab_data[k] = YTArray(subcube.filled_data[:, :, :], field_units) width = subcube.header["naxis3"] * cube.header["cdelt3"] w = subcube.wcs.copy() w.wcs.crpix[-1] = 0.5 w.wcs.crval[-1] = -0.5 * width fid = FITSImageData(slab_data, wcs=w) for hdu in fid: hdu.header.pop("RESTFREQ", None) hdu.header.pop("RESTFRQ", None) ds = FITSDataset(fid, **kwargs) return ds def ds9_region(ds, reg, obj=None, field_parameters=None): r""" Create a data container from a ds9 region file. Requires the regions package (https://astropy-regions.readthedocs.io/) to be installed. Parameters ---------- ds : FITSDataset The Dataset to create the region from. reg : string The filename of the ds9 region, or a region string to be parsed. obj : data container, optional The data container that will be used to create the new region. Defaults to ds.all_data. field_parameters : dictionary, optional A set of field parameters to apply to the region. Examples -------- >>> ds = yt.load("m33_hi.fits") >>> circle_region = ds9_region(ds, "circle.reg") >>> print(circle_region.quantities.extrema("flux")) """ from yt.utilities.on_demand_imports import _astropy, _regions Regions = _regions.Regions WCS = _astropy.WCS from yt.frontends.fits.api import EventsFITSDataset if os.path.exists(reg): method = Regions.read else: method = Regions.parse r = method(reg, format="ds9").regions[0] reg_name = reg header = ds.wcs_2d.to_header() # The FITS header only contains WCS-related keywords header["NAXIS1"] = ds.domain_dimensions[ds.lon_axis] header["NAXIS2"] = ds.domain_dimensions[ds.lat_axis] pixreg = r.to_pixel(WCS(header)) mask = pixreg.to_mask().to_image((header["NAXIS1"], header["NAXIS2"])).astype(bool) if isinstance(ds, EventsFITSDataset): prefix = "event_" else: prefix = "" def _reg_field(field, data): i = data[prefix + "xyz"[ds.lon_axis]].d.astype("int64") - 1 j = data[prefix + "xyz"[ds.lat_axis]].d.astype("int64") - 1 new_mask = mask[i, j] ret = np.zeros(data[prefix + "x"].shape) ret[new_mask] = 1.0 return ret ds.add_field(("gas", reg_name), sampling_type="cell", function=_reg_field) if obj is None: obj = ds.all_data() if field_parameters is not None: for k, v in field_parameters.items(): obj.set_field_parameter(k, v) return obj.cut_region([f"obj['{reg_name}'] > 0"]) class PlotWindowWCS: r""" Use AstroPy's WCSAxes class to plot celestial coordinates on the axes of a on-axis PlotWindow plot. See http://docs.astropy.org/en/stable/visualization/wcsaxes/ for more details on how it works under the hood. This functionality requires a version of AstroPy >= 1.3. Parameters ---------- pw : on-axis PlotWindow instance The PlotWindow instance to add celestial axes to. 
""" def __init__(self, pw): WCSAxes = _astropy.wcsaxes.WCSAxes if pw.oblique: raise NotImplementedError("WCS axes are not implemented for oblique plots.") if not hasattr(pw.ds, "wcs_2d"): raise NotImplementedError("WCS axes are not implemented for this dataset.") if pw.data_source.axis != pw.ds.spec_axis: raise NotImplementedError("WCS axes are not implemented for this axis.") self.plots = {} self.pw = pw for f in pw.plots: rect = pw.plots[f]._get_best_layout()[1] fig = pw.plots[f].figure ax = fig.axes[0] wcs_ax = WCSAxes(fig, rect, wcs=pw.ds.wcs_2d, frameon=False) fig.add_axes(wcs_ax) wcs = pw.ds.wcs_2d.wcs xax = pw.ds.coordinates.x_axis[pw.data_source.axis] yax = pw.ds.coordinates.y_axis[pw.data_source.axis] xlabel = f"{wcs.ctype[xax].split('-')[0]} ({wcs.cunit[xax]})" ylabel = f"{wcs.ctype[yax].split('-')[0]} ({wcs.cunit[yax]})" fp = pw._font_properties wcs_ax.coords[0].set_axislabel(xlabel, fontproperties=fp, minpad=0.5) wcs_ax.coords[1].set_axislabel(ylabel, fontproperties=fp, minpad=0.4) wcs_ax.coords[0].ticklabels.set_fontproperties(fp) wcs_ax.coords[1].ticklabels.set_fontproperties(fp) ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) wcs_ax.set_xlim(pw.xlim[0].value, pw.xlim[1].value) wcs_ax.set_ylim(pw.ylim[0].value, pw.ylim[1].value) wcs_ax.coords.frame._update_cache = [] ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) self.plots[f] = fig def keys(self): return self.plots.keys() def values(self): return self.plots.values() def items(self): return self.plots.items() def __getitem__(self, key): for k in self.keys(): if k[1] == key: return self.plots[k] def show(self): return self def save(self, name=None, mpl_kwargs=None): if mpl_kwargs is None: mpl_kwargs = {} mpl_kwargs["bbox_inches"] = "tight" self.pw.save(name=name, mpl_kwargs=mpl_kwargs) def _repr_html_(self): from matplotlib.backends.backend_agg import FigureCanvasAgg ret = "" for v in self.plots.values(): canvas = FigureCanvasAgg(v) f = BytesIO() canvas.print_figure(f) f.seek(0) img = base64.b64encode(f.read()).decode() ret += ( r'
' ) return ret ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3191524 yt-4.4.0/yt/frontends/fits/tests/0000755000175100001770000000000014714401715016331 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/fits/tests/__init__.py0000644000175100001770000000000014714401662020431 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/fits/tests/test_outputs.py0000644000175100001770000000534614714401662021476 0ustar00runnerdockerfrom numpy.testing import assert_equal from yt.testing import requires_file, requires_module, units_override_check from yt.utilities.answer_testing.framework import ( data_dir_load, requires_ds, small_patch_amr, ) from ..data_structures import ( EventsFITSDataset, FITSDataset, SkyDataFITSDataset, SpectralCubeFITSDataset, ) _fields_grs = (("fits", "temperature"),) grs = "radio_fits/grs-50-cube.fits" @requires_ds(grs) def test_grs(): ds = data_dir_load(grs, cls=SpectralCubeFITSDataset, kwargs={"nan_mask": 0.0}) assert_equal(str(ds), "grs-50-cube.fits") for test in small_patch_amr(ds, _fields_grs, input_center="c", input_weight="ones"): test_grs.__name__ = test.description yield test _fields_vels = (("fits", "velocity_x"), ("fits", "velocity_y"), ("fits", "velocity_z")) vf = "UnigridData/velocity_field_20.fits" @requires_module("astropy") @requires_ds(vf) def test_velocity_field(): ds = data_dir_load(vf, cls=FITSDataset) assert_equal(str(ds), "velocity_field_20.fits") for test in small_patch_amr( ds, _fields_vels, input_center="c", input_weight="ones" ): test_velocity_field.__name__ = test.description yield test acis = "xray_fits/acisf05356N003_evt2.fits.gz" _fields_acis = (("gas", "counts_0.1-2.0"), ("gas", "counts_2.0-5.0")) @requires_ds(acis) def test_acis(): from yt.frontends.fits.misc import setup_counts_fields ds = data_dir_load(acis, cls=EventsFITSDataset) ebounds = [(0.1, 2.0), (2.0, 5.0)] setup_counts_fields(ds, ebounds) assert_equal(str(ds), "acisf05356N003_evt2.fits.gz") for test in small_patch_amr( ds, _fields_acis, input_center="c", input_weight="ones" ): test_acis.__name__ = test.description yield test A2052 = "xray_fits/A2052_merged_0.3-2_match-core_tmap_bgecorr.fits" _fields_A2052 = (("fits", "flux"),) @requires_ds(A2052) def test_A2052(): ds = data_dir_load(A2052, cls=SkyDataFITSDataset) assert_equal(str(ds), "A2052_merged_0.3-2_match-core_tmap_bgecorr.fits") for test in small_patch_amr( ds, _fields_A2052, input_center="c", input_weight="ones" ): test_A2052.__name__ = test.description yield test @requires_file(vf) def test_units_override(): units_override_check(vf) @requires_file(vf) def test_FITSDataset(): assert isinstance(data_dir_load(vf), FITSDataset) @requires_file(grs) def test_SpectralCubeFITSDataset(): assert isinstance(data_dir_load(grs), SpectralCubeFITSDataset) @requires_file(acis) def test_EventsFITSDataset(): assert isinstance(data_dir_load(acis), EventsFITSDataset) @requires_file(A2052) def test_SkyDataFITSDataset(): assert isinstance(data_dir_load(A2052), SkyDataFITSDataset) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3191524 yt-4.4.0/yt/frontends/flash/0000755000175100001770000000000014714401715015317 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 
yt-4.4.0/yt/frontends/flash/__init__.py0000644000175100001770000000000014714401662017417 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/flash/api.py0000644000175100001770000000033614714401662016445 0ustar00runnerdockerfrom . import tests from .data_structures import ( FLASHDataset, FLASHGrid, FLASHHierarchy, FLASHParticleDataset, ) from .fields import FLASHFieldInfo from .io import IOHandlerFLASH, IOHandlerFLASHParticle ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/flash/data_structures.py0000644000175100001770000005263714714401662021123 0ustar00runnerdockerimport os import weakref from pathlib import Path import numpy as np from yt.data_objects.index_subobjects.grid_patch import AMRGridPatch from yt.data_objects.static_output import Dataset, ParticleFile from yt.funcs import mylog, setdefaultattr from yt.geometry.api import Geometry from yt.geometry.geometry_handler import Index from yt.geometry.grid_geometry_handler import GridIndex from yt.geometry.particle_geometry_handler import ParticleIndex from yt.utilities.file_handler import HDF5FileHandler, valid_hdf5_signature from yt.utilities.physical_ratios import cm_per_mpc from .fields import FLASHFieldInfo class FLASHGrid(AMRGridPatch): _id_offset = 1 # __slots__ = ["_level_id", "stop_index"] def __init__(self, id, index, level): AMRGridPatch.__init__(self, id, filename=index.index_filename, index=index) self.Parent = None self.Children = [] self.Level = level class FLASHHierarchy(GridIndex): grid = FLASHGrid _preload_implemented = True def __init__(self, ds, dataset_type="flash_hdf5"): self.dataset_type = dataset_type self.field_indexes = {} self.dataset = weakref.proxy(ds) # for now, the index file is the dataset! 
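# (Added elaboration: FLASH HDF5 output is self-describing -- grid metadata
# such as "/bounding box", "/refine level", and "/gid" live in the same file
# as the field data -- so the plotfile itself can serve as the index read
# below. A minimal, hedged loading sketch with a hypothetical file name:
#
#     import yt
#     ds = yt.load("sloshing_hdf5_plt_cnt_0100")   # -> FLASHDataset
#     ds.index                                     # builds this FLASHHierarchy
# )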
self.index_filename = self.dataset.parameter_filename self.directory = os.path.dirname(self.index_filename) self._handle = ds._handle self._particle_handle = ds._particle_handle self.float_type = np.float64 GridIndex.__init__(self, ds, dataset_type) def _initialize_data_storage(self): pass def _detect_output_fields(self): self.field_list = [ ("flash", s.decode("ascii", "ignore")) for s in self._handle["/unknown names"][:].flat ] if "/particle names" in self._particle_handle: self.field_list += [ ("io", "particle_" + s[0].decode("ascii", "ignore").strip()) for s in self._particle_handle["/particle names"][:] ] def _count_grids(self): try: self.num_grids = self.dataset._find_parameter( "integer", "globalnumblocks", True ) except KeyError: try: self.num_grids = self._handle["simulation parameters"]["total blocks"][ 0 ] except KeyError: self.num_grids = self._handle["/simulation parameters"][0][0] def _parse_index(self): f = self._handle # shortcut ds = self.dataset # shortcut f_part = self._particle_handle # shortcut # Initialize to the domain left / domain right ND = self.dataset.dimensionality DLE = self.dataset.domain_left_edge DRE = self.dataset.domain_right_edge for i in range(3): self.grid_left_edge[:, i] = DLE[i] self.grid_right_edge[:, i] = DRE[i] # We only go up to ND for 2D datasets self.grid_left_edge[:, :ND] = f["/bounding box"][:, :ND, 0] self.grid_right_edge[:, :ND] = f["/bounding box"][:, :ND, 1] # Move this to the parameter file try: nxb = ds.parameters["nxb"] nyb = ds.parameters["nyb"] nzb = ds.parameters["nzb"] except KeyError: nxb, nyb, nzb = ( int(f["/simulation parameters"][f"n{ax}b"]) for ax in "xyz" ) self.grid_dimensions[:] *= (nxb, nyb, nzb) try: self.grid_particle_count[:] = f_part["/localnp"][:][:, None] self._blockless_particle_count = ( f_part["/tracer particles"].shape[0] - self.grid_particle_count.sum() ) except KeyError: self.grid_particle_count[:] = 0.0 self._particle_indices = np.zeros(self.num_grids + 1, dtype="int64") if self.num_grids > 1: np.add.accumulate( self.grid_particle_count.squeeze(), out=self._particle_indices[1:] ) else: self._particle_indices[1] = self.grid_particle_count.squeeze() # This will become redundant, as _prepare_grid will reset it to its # current value. Note that FLASH uses 1-based indexing for refinement # levels, but we do not, so we reduce the level by 1. self.grid_levels.flat[:] = f["/refine level"][:][:] - 1 self.grids = np.empty(self.num_grids, dtype="object") for i in range(self.num_grids): self.grids[i] = self.grid(i + 1, self, self.grid_levels[i, 0]) # This is a possibly slow and verbose fix, and should be re-examined! rdx = self.dataset.domain_width / self.dataset.domain_dimensions nlevels = self.grid_levels.max() dxs = np.ones((nlevels + 1, 3), dtype="float64") for i in range(nlevels + 1): dxs[i, :ND] = rdx[:ND] / self.dataset.refine_by**i if ND < 3: dxs[:, ND:] = rdx[ND:] # Because we don't care about units, we're going to operate on views. 
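# (Added note: ndarray_view() hands back the bare NumPy buffer underlying
# the unyt array, so the in-place edits to gle/gre below modify
# grid_left_edge/grid_right_edge directly, skipping any unit bookkeeping.
# Standalone sketch of the same idea:
#
#     import numpy as np
#     from unyt import unyt_array
#     a = unyt_array(np.zeros(3), "cm")
#     v = a.ndarray_view()    # plain ndarray sharing a's memory
#     v[0] = 1.0              # a is now [1, 0, 0] cm
# )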
gle = self.grid_left_edge.ndarray_view() gre = self.grid_right_edge.ndarray_view() geom = self.dataset.geometry if geom != "cartesian" and ND < 3: if geom == "spherical" and ND < 2: gle[:, 1] = 0.0 gre[:, 1] = np.pi gle[:, 2] = 0.0 gre[:, 2] = 2.0 * np.pi return def _populate_grid_objects(self): ii = np.argsort(self.grid_levels.flat) gid = self._handle["/gid"][:] first_ind = -(self.dataset.refine_by**self.dataset.dimensionality) for g in self.grids[ii].flat: gi = g.id - g._id_offset # FLASH uses 1-indexed group info g.Children = [self.grids[i - 1] for i in gid[gi, first_ind:] if i > -1] for g1 in g.Children: g1.Parent = g g._prepare_grid() g._setup_dx() if self.dataset.dimensionality < 3: DD = self.dataset.domain_right_edge[2] - self.dataset.domain_left_edge[2] for g in self.grids: g.dds[2] = DD if self.dataset.dimensionality < 2: DD = self.dataset.domain_right_edge[1] - self.dataset.domain_left_edge[1] for g in self.grids: g.dds[1] = DD self.max_level = self.grid_levels.max() class FLASHDataset(Dataset): _load_requirements = ["h5py"] _index_class: type[Index] = FLASHHierarchy _field_info_class = FLASHFieldInfo _handle = None def __init__( self, filename, dataset_type="flash_hdf5", storage_filename=None, particle_filename=None, units_override=None, unit_system="cgs", default_species_fields=None, ): self.fluid_types += ("flash",) if self._handle is not None: return self._handle = HDF5FileHandler(filename) self.particle_filename = particle_filename filepath = Path(filename) if self.particle_filename is None: # try to guess the particle filename if "hdf5_plt_cnt" in filepath.name: # We have a plotfile, look for the particle file try: pfn = str( filepath.parent.resolve() / filepath.name.replace("plt_cnt", "part") ) self._particle_handle = HDF5FileHandler(pfn) self.particle_filename = pfn mylog.info( "Particle file found: %s", os.path.basename(self.particle_filename), ) except OSError: self._particle_handle = self._handle elif "hdf5_chk" in filepath.name: # This is a checkpoint file, should have the particles in it self._particle_handle = self._handle else: # particle_filename is specified by user self._particle_handle = HDF5FileHandler(self.particle_filename) # Check if the particle file has the same time if self._particle_handle != self._handle: plot_time = self._handle.handle.get("real scalars") if (part_time := self._particle_handle.handle.get("real scalars")) is None: raise RuntimeError("FLASH 2.x particle files are not supported!") if not np.isclose(part_time[0][1], plot_time[0][1]): self._particle_handle = self._handle mylog.warning( "%s and %s are not at the same time. " "This particle file will not be used.", self.particle_filename, filename, ) # These should be explicitly obtained from the file, but for now that # will wait until a reorganization of the source tree and better # generalization. self.refine_by = 2 Dataset.__init__( self, filename, dataset_type, units_override=units_override, unit_system=unit_system, default_species_fields=default_species_fields, ) self.storage_filename = storage_filename self.parameters["HydroMethod"] = "flash" # always PPM DE self.parameters["Time"] = 1.0 # default unit is 1... 
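# (Usage sketch, not in the original source: pairing a plotfile with an
# explicit particle file via the particle_filename keyword accepted above.
# File names are hypothetical; FLASH typically writes matching
# "..._hdf5_plt_cnt_NNNN" / "..._hdf5_part_NNNN" pairs, which the logic
# above also tries to discover on its own.
#
#     import yt
#     ds = yt.load(
#         "radio_halo_1kpc_hdf5_plt_cnt_0100",
#         particle_filename="radio_halo_1kpc_hdf5_part_0100",
#     )
# )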
def _set_code_unit_attributes(self): if "unitsystem" in self.parameters: # Some versions of FLASH inject quotes in the runtime parameters # See issue #1721 us = self["unitsystem"].replace("'", "").replace('"', "").lower() if us == "cgs": b_factor = 1.0 elif us == "si": b_factor = np.sqrt(4 * np.pi / 1e7) elif us == "none": b_factor = np.sqrt(4 * np.pi) else: raise RuntimeError( "Runtime parameter unitsystem with " f"value {self['unitsystem']} is unrecognized" ) else: b_factor = 1.0 if self.cosmological_simulation == 1: length_factor = 1.0 / (1.0 + self.current_redshift) temperature_factor = 1.0 / (1.0 + self.current_redshift) ** 2 else: length_factor = 1.0 temperature_factor = 1.0 setdefaultattr(self, "magnetic_unit", self.quan(b_factor, "gauss")) setdefaultattr(self, "length_unit", self.quan(length_factor, "cm")) setdefaultattr(self, "mass_unit", self.quan(1.0, "g")) setdefaultattr(self, "time_unit", self.quan(1.0, "s")) setdefaultattr(self, "velocity_unit", self.quan(1.0, "cm/s")) setdefaultattr(self, "temperature_unit", self.quan(temperature_factor, "K")) def set_code_units(self): super().set_code_units() def _find_parameter(self, ptype, pname, scalar=False): nn = "/{} {}".format( ptype, {False: "runtime parameters", True: "scalars"}[scalar] ) if nn not in self._handle: raise KeyError(nn) for tpname, pval in zip( self._handle[nn][:, "name"], self._handle[nn][:, "value"], strict=True, ): if tpname.decode("ascii", "ignore").strip() == pname: if hasattr(pval, "decode"): pval = pval.decode("ascii", "ignore") if ptype == "string": return pval.strip() else: return pval raise KeyError(pname) def _parse_parameter_file(self): if "file format version" in self._handle: self._flash_version = self._handle["file format version"][:].item() elif "sim info" in self._handle: self._flash_version = self._handle["sim info"][:][ "file format version" ].item() else: raise RuntimeError("Can't figure out FLASH file version.") # First we load all of the parameters hns = ["simulation parameters"] # note the ordering here is important: runtime parameters should # overwrite scalars with the same name. 
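# (Added elaboration: the nested loops below extend hns to
#     ["simulation parameters",
#      "integer scalars", "real scalars", "logical scalars", "string scalars",
#      "integer runtime parameters", "real runtime parameters",
#      "logical runtime parameters", "string runtime parameters"]
# and because later groups overwrite earlier entries in self.parameters,
# a runtime parameter wins over a scalar of the same name, as intended.)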
for ptype in ["scalars", "runtime parameters"]: for vtype in ["integer", "real", "logical", "string"]: hns.append(f"{vtype} {ptype}") if self._flash_version > 7: for hn in hns: if hn not in self._handle: continue for varname, val in zip( self._handle[hn][:, "name"], self._handle[hn][:, "value"], strict=True, ): vn = varname.strip() if hn.startswith("string"): pval = val.strip() else: pval = val if vn in self.parameters and self.parameters[vn] != pval: mylog.info( "%s %s overwrites a simulation scalar of the same name", hn[:-1], vn, ) if hasattr(pval, "decode"): pval = pval.decode("ascii", "ignore") self.parameters[vn.decode("ascii", "ignore")] = pval if self._flash_version == 7: for hn in hns: if hn not in self._handle: continue if hn == "simulation parameters": zipover = ( (name, self._handle[hn][name][0]) for name in self._handle[hn].dtype.names ) else: zipover = zip( self._handle[hn][:, "name"], self._handle[hn][:, "value"], strict=True, ) for varname, val in zipover: vn = varname.strip() if hasattr(vn, "decode"): vn = vn.decode("ascii", "ignore") if hn.startswith("string"): pval = val.strip() else: pval = val if vn in self.parameters and self.parameters[vn] != pval: mylog.info( "%s %s overwrites a simulation scalar of the same name", hn[:-1], vn, ) if hasattr(pval, "decode"): pval = pval.decode("ascii", "ignore") self.parameters[vn] = pval # Determine block size try: nxb = self.parameters["nxb"] nyb = self.parameters["nyb"] nzb = self.parameters["nzb"] except KeyError: nxb, nyb, nzb = ( int(self._handle["/simulation parameters"][f"n{ax}b"]) for ax in "xyz" ) # FLASH2 only! # Determine dimensionality try: dimensionality = self.parameters["dimensionality"] except KeyError: dimensionality = 3 if nzb == 1: dimensionality = 2 if nyb == 1: dimensionality = 1 if dimensionality < 3: mylog.warning("Guessing dimensionality as %s", dimensionality) self.dimensionality = dimensionality self.geometry = Geometry(self.parameters["geometry"]) # Determine base grid parameters if "lrefine_min" in self.parameters.keys(): # PARAMESH nblockx = self.parameters["nblockx"] nblocky = self.parameters["nblocky"] nblockz = self.parameters["nblockz"] elif self.parameters["globalnumblocks"] == 1: # non-fixed block size UG nblockx = 1 nblocky = 1 nblockz = 1 else: # Uniform Grid nblockx = self.parameters["iprocs"] nblocky = self.parameters["jprocs"] nblockz = self.parameters["kprocs"] # In case the user wasn't careful if self.dimensionality <= 2: nblockz = 1 if self.dimensionality == 1: nblocky = 1 # Determine domain boundaries dle = np.array([self.parameters[f"{ax}min"] for ax in "xyz"]).astype("float64") dre = np.array([self.parameters[f"{ax}max"] for ax in "xyz"]).astype("float64") if self.dimensionality < 3: for d in range(self.dimensionality, 3): if dle[d] == dre[d]: mylog.warning( "Identical domain left edge and right edges " "along dummy dimension (%i), attempting to read anyway", d, ) dre[d] = dle[d] + 1.0 if self.dimensionality < 3 and self.geometry == "cylindrical": mylog.warning("Extending theta dimension to 2PI + left edge.") dre[2] = dle[2] + 2 * np.pi elif self.dimensionality < 3 and self.geometry == "polar": mylog.warning("Extending theta dimension to 2PI + left edge.") dre[1] = dle[1] + 2 * np.pi elif self.dimensionality < 3 and self.geometry == "spherical": mylog.warning("Extending phi dimension to 2PI + left edge.") dre[2] = dle[2] + 2 * np.pi if self.dimensionality == 1 and self.geometry == "spherical": mylog.warning("Extending theta dimension to PI + left edge.") dre[1] = dle[1] + np.pi 
self.domain_left_edge = dle self.domain_right_edge = dre self.domain_dimensions = np.array([nblockx * nxb, nblocky * nyb, nblockz * nzb]) # Try to determine Gamma try: self.gamma = self.parameters["gamma"] except Exception: mylog.info("Cannot find Gamma") pass # Get the simulation time self.current_time = self.parameters["time"] # Determine if this is a periodic box p = [ self.parameters.get(f"{ax}l_boundary_type", None) == "periodic" for ax in "xyz" ] self._periodicity = tuple(p) # Determine cosmological parameters. try: self.parameters["usecosmology"] self.cosmological_simulation = 1 self.current_redshift = 1.0 / self.parameters["scalefactor"] - 1.0 self.omega_lambda = self.parameters["cosmologicalconstant"] self.omega_matter = self.parameters["omegamatter"] self.hubble_constant = self.parameters["hubbleconstant"] self.hubble_constant *= cm_per_mpc * 1.0e-5 * 1.0e-2 # convert to 'h' except Exception: self.current_redshift = 0.0 self.omega_lambda = 0.0 self.omega_matter = 0.0 self.hubble_constant = 0.0 self.cosmological_simulation = 0 @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if cls._missing_load_requirements(): return False try: fileh = HDF5FileHandler(filename) if "bounding box" in fileh["/"].keys(): return True except OSError: pass return False @classmethod def _guess_candidates(cls, base, directories, files): candidates = [ _ for _ in files if ("_hdf5_plt_cnt_" in _) or ("_hdf5_chk_" in _) ] # Typically, Flash won't have nested outputs. return candidates, (len(candidates) == 0) def close(self): self._handle.close() class FLASHParticleFile(ParticleFile): pass class FLASHParticleDataset(FLASHDataset): _load_requirements = ["h5py"] _index_class = ParticleIndex filter_bbox = False _file_class = FLASHParticleFile def __init__( self, filename, dataset_type="flash_particle_hdf5", storage_filename=None, units_override=None, index_order=None, index_filename=None, unit_system="cgs", ): self.index_order = index_order self.index_filename = index_filename if self._handle is not None: return self._handle = HDF5FileHandler(filename) self.refine_by = 2 Dataset.__init__( self, filename, dataset_type, units_override=units_override, unit_system=unit_system, ) self.storage_filename = storage_filename def _parse_parameter_file(self): # Let the superclass do all the work but then # fix the domain dimensions super()._parse_parameter_file() domain_dimensions = np.zeros(3, "int32") domain_dimensions[: self.dimensionality] = 1 self.domain_dimensions = domain_dimensions self.filename_template = self.parameter_filename self.file_count = 1 @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if not valid_hdf5_signature(filename): return False if cls._missing_load_requirements(): return False try: fileh = HDF5FileHandler(filename) if ( "bounding box" not in fileh["/"].keys() and "localnp" in fileh["/"].keys() ): return True except OSError: pass return False @classmethod def _guess_candidates(cls, base, directories, files): candidates = [_ for _ in files if "_hdf5_part_" in _] # Typically, Flash won't have nested outputs. 
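# (Added note: _guess_candidates returns both the matching file names and
# whether yt should recurse into subdirectories -- here, only when nothing
# matched. Hedged sketch with made-up names:
#
#     files = ["run_hdf5_part_0000", "run_hdf5_plt_cnt_0000", "README"]
#     candidates = [f for f in files if "_hdf5_part_" in f]
#     # -> ["run_hdf5_part_0000"]; recurse = (len(candidates) == 0) -> False
# )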
return candidates, (len(candidates) == 0) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/flash/definitions.py0000644000175100001770000000000014714401662020173 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/flash/fields.py0000644000175100001770000002111614714401662017141 0ustar00runnerdockerfrom yt._typing import KnownFieldsT from yt.fields.field_info_container import FieldInfoContainer # Common fields in FLASH: (Thanks to John ZuHone for this list) # # dens gas mass density (g/cc) -- # eint internal energy (ergs/g) -- # ener total energy (ergs/g), with 0.5*v^2 -- # gamc gamma defined as ratio of specific heats, no units # game gamma defined as in , no units # gpol gravitational potential from the last timestep (ergs/g) # gpot gravitational potential from the current timestep (ergs/g) # grac gravitational acceleration from the current timestep (cm s^-2) # pden particle mass density (usually dark matter) (g/cc) # pres pressure (erg/cc) # temp temperature (K) -- # velx velocity x (cm/s) -- # vely velocity y (cm/s) -- # velz velocity z (cm/s) -- b_units = "code_magnetic" pres_units = "code_mass/(code_length*code_time**2)" en_units = "code_mass * (code_length/code_time)**2" rho_units = "code_mass / code_length**3" class FLASHFieldInfo(FieldInfoContainer): known_other_fields: KnownFieldsT = ( ("velx", ("code_length/code_time", ["velocity_x"], None)), ("vely", ("code_length/code_time", ["velocity_y"], None)), ("velz", ("code_length/code_time", ["velocity_z"], None)), ("dens", ("code_mass/code_length**3", ["density"], None)), ("temp", ("code_temperature", ["temperature"], None)), ("pres", (pres_units, ["pressure"], None)), ("gpot", ("code_length**2/code_time**2", ["gravitational_potential"], None)), ("gpol", ("code_length**2/code_time**2", [], None)), ("tion", ("code_temperature", [], None)), ("tele", ("code_temperature", [], None)), ("trad", ("code_temperature", [], None)), ("pion", (pres_units, [], None)), ("pele", (pres_units, [], "Electron Pressure, P_e")), ("prad", (pres_units, [], "Radiation Pressure")), ("eion", (en_units, [], "Ion Internal Specific Energy")), ("eele", (en_units, [], "Electron Internal Specific Energy")), ("erad", (en_units, [], "Radiation Internal Specific Energy")), ("pden", (rho_units, [], "Particle Mass Density")), ("depo", ("code_length**2/code_time**2", [], None)), ("ye", ("", [], "Y_e")), ("magp", (pres_units, [], None)), ("divb", ("code_magnetic/code_length", [], None)), ("game", ("", [], r"\gamma_e\ \rm{(ratio\ of\ specific\ heats)}")), ("gamc", ("", [], r"\gamma_c\ \rm{(ratio\ of\ specific\ heats)}")), ("flam", ("", [], None)), ("absr", ("", [], "Absorption Coefficient")), ("emis", ("", [], "Emissivity")), ("cond", ("", [], "Conductivity")), ("dfcf", ("", [], "Diffusion Equation Scalar")), ("fllm", ("", [], "Flux Limit")), ("pipe", ("", [], "P_i/P_e")), ("tite", ("", [], "T_i/T_e")), ("dbgs", ("", [], "Debug for Shocks")), ("cham", ("", [], "Chamber Material Fraction")), ("targ", ("", [], "Target Material Fraction")), ("sumy", ("", [], None)), ("mgdc", ("", [], "Emission Minus Absorption Diffusion Terms")), ("magx", (b_units, [], "B_x")), ("magy", (b_units, [], "B_y")), ("magz", (b_units, [], "B_z")), ) known_particle_fields = ( ("particle_posx", ("code_length", ["particle_position_x"], None)), ("particle_posy", ("code_length", ["particle_position_y"], None)), ("particle_posz", ("code_length", 
["particle_position_z"], None)), ("particle_velx", ("code_length/code_time", ["particle_velocity_x"], None)), ("particle_vely", ("code_length/code_time", ["particle_velocity_y"], None)), ("particle_velz", ("code_length/code_time", ["particle_velocity_z"], None)), ("particle_tag", ("", ["particle_index"], None)), ("particle_mass", ("code_mass", ["particle_mass"], None)), ( "particle_gpot", ("code_length**2/code_time**2", ["particle_gravitational_potential"], None), ), ) def setup_fluid_fields(self): from yt.fields.magnetic_field import setup_magnetic_field_aliases unit_system = self.ds.unit_system # Adopt FLASH 4.6 value for Na Na = self.ds.quan(6.022140857e23, "g**-1") for i in range(1, 1000): self.add_output_field( ("flash", f"r{i:03}"), sampling_type="cell", units="", display_name=f"Energy Group {i}", ) # Add energy fields def ekin(data): ek = data["flash", "velx"] ** 2 if data.ds.dimensionality >= 2: ek += data["flash", "vely"] ** 2 if data.ds.dimensionality == 3: ek += data["flash", "velz"] ** 2 return 0.5 * ek if ("flash", "ener") in self.field_list: self.add_output_field( ("flash", "ener"), sampling_type="cell", units="code_length**2/code_time**2", ) self.alias( ("gas", "specific_total_energy"), ("flash", "ener"), units=unit_system["specific_energy"], ) else: def _ener(field, data): ener = data["flash", "eint"] + ekin(data) try: ener += data["flash", "magp"] / data["flash", "dens"] except Exception: pass return ener self.add_field( ("gas", "specific_total_energy"), sampling_type="cell", function=_ener, units=unit_system["specific_energy"], ) if ("flash", "eint") in self.field_list: self.add_output_field( ("flash", "eint"), sampling_type="cell", units="code_length**2/code_time**2", ) self.alias( ("gas", "specific_thermal_energy"), ("flash", "eint"), units=unit_system["specific_energy"], ) else: def _eint(field, data): eint = data["flash", "ener"] - ekin(data) try: eint -= data["flash", "magp"] / data["flash", "dens"] except Exception: pass return eint self.add_field( ("gas", "specific_thermal_energy"), sampling_type="cell", function=_eint, units=unit_system["specific_energy"], ) ## Derived FLASH Fields if ("flash", "abar") in self.field_list: self.alias(("gas", "mean_molecular_weight"), ("flash", "abar")) elif ("flash", "sumy") in self.field_list: def _abar(field, data): return 1.0 / data["flash", "sumy"] self.add_field( ("gas", "mean_molecular_weight"), sampling_type="cell", function=_abar, units="", ) elif "eos_singlespeciesa" in self.ds.parameters: def _abar(field, data): return data.ds.parameters["eos_singlespeciesa"] * data["index", "ones"] self.add_field( ("gas", "mean_molecular_weight"), sampling_type="cell", function=_abar, units="", ) if ("flash", "sumy") in self.field_list: def _nele(field, data): return data["flash", "dens"] * data["flash", "ye"] * Na self.add_field( ("gas", "El_number_density"), sampling_type="cell", function=_nele, units=unit_system["number_density"], ) def _nion(field, data): return data["flash", "dens"] * data["flash", "sumy"] * Na self.add_field( ("gas", "ion_number_density"), sampling_type="cell", function=_nion, units=unit_system["number_density"], ) def _number_density(field, data): return ( data["gas", "El_number_density"] + data["gas", "ion_number_density"] ) else: def _number_density(field, data): return data["flash", "dens"] * Na / data["gas", "mean_molecular_weight"] self.add_field( ("gas", "number_density"), sampling_type="cell", function=_number_density, units=unit_system["number_density"], ) setup_magnetic_field_aliases(self, "flash", 
[f"mag{ax}" for ax in "xyz"]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/flash/io.py0000644000175100001770000002714614714401662016313 0ustar00runnerdockerfrom functools import cached_property from itertools import groupby import numpy as np from yt.geometry.selection_routines import AlwaysSelector from yt.utilities.io_handler import BaseIOHandler # http://stackoverflow.com/questions/2361945/detecting-consecutive-integers-in-a-list def particle_sequences(grids): g_iter = sorted(grids, key=lambda g: g.id) for _k, g in groupby(enumerate(g_iter), lambda i_x: i_x[0] - i_x[1].id): seq = [v[1] for v in g] yield seq[0], seq[-1] def grid_sequences(grids): g_iter = sorted(grids, key=lambda g: g.id) for _k, g in groupby(enumerate(g_iter), lambda i_x1: i_x1[0] - i_x1[1].id): seq = [v[1] for v in g] yield seq def determine_particle_fields(handle): try: particle_fields = [ s[0].decode("ascii", "ignore").strip() for s in handle["/particle names"][:] ] _particle_fields = {"particle_" + s: i for i, s in enumerate(particle_fields)} except KeyError: _particle_fields = {} return _particle_fields def _conditionally_split_arrays(inp_arrays, condition): output_true = [] output_false = [] for arr in inp_arrays: output_true.append(arr[condition]) output_false.append(arr[~condition]) return output_true, output_false class IOHandlerFLASH(BaseIOHandler): _particle_reader = False _dataset_type = "flash_hdf5" def __init__(self, ds): super().__init__(ds) # Now we cache the particle fields self._handle = ds._handle self._particle_handle = ds._particle_handle self._particle_fields = determine_particle_fields(self._particle_handle) def _read_particles( self, fields_to_read, type, args, grid_list, count_list, conv_factors ): pass def io_iter(self, chunks, fields): f = self._handle for chunk in chunks: for field in fields: # Note that we *prefer* to iterate over the fields on the # outside; here, though, we're iterating over them on the # inside because we may exhaust our chunks. ftype, fname = field ds = f[f"/{fname}"] for gs in grid_sequences(chunk.objs): start = gs[0].id - gs[0]._id_offset end = gs[-1].id - gs[-1]._id_offset + 1 data = ds[start:end, :, :, :] for i, g in enumerate(gs): yield field, g, self._read_obj_field(g, field, (data, i)) def _read_particle_coords(self, chunks, ptf): chunks = list(chunks) f_part = self._particle_handle p_ind = self.ds.index._particle_indices px, py, pz = (self._particle_fields[f"particle_pos{ax}"] for ax in "xyz") pblk = self._particle_fields["particle_blk"] blockless_buffer = self.ds.index._blockless_particle_count p_fields = f_part["/tracer particles"] assert len(ptf) == 1 ptype = list(ptf.keys())[0] bxyz = [] # We need to track all the particles that don't have blocks and make # sure they get yielded too. But, we also want our particles to largely # line up with the grids they are resident in. So we keep a buffer of # particles. That buffer is checked for any "blockless" particles, # which get yielded at the end. for chunk in chunks: start = end = None for g1, g2 in particle_sequences(chunk.objs): start = p_ind[g1.id - g1._id_offset] end_nobuff = p_ind[g2.id - g2._id_offset + 1] end = end_nobuff + blockless_buffer maxlen = end_nobuff - start blk = np.asarray(p_fields[start:end, pblk], dtype="=i8") >= 0 # Can we use something like np.choose here? 
xyz = [ np.asarray(p_fields[start:end, _], dtype="=f8") for _ in (px, py, pz) ] (x, y, z), _xyz = _conditionally_split_arrays(xyz, blk) if _xyz[0].size > 0: bxyz.append(_xyz) yield ptype, (x[:maxlen], y[:maxlen], z[:maxlen]), 0.0 if len(bxyz) == 0: return bxyz = np.concatenate(bxyz, axis=-1) yield ptype, (bxyz[0, :], bxyz[1, :], bxyz[2, :]), 0.0 def _read_particle_fields(self, chunks, ptf, selector): chunks = list(chunks) f_part = self._particle_handle p_ind = self.ds.index._particle_indices px, py, pz = (self._particle_fields[f"particle_pos{ax}"] for ax in "xyz") pblk = self._particle_fields["particle_blk"] blockless_buffer = self.ds.index._blockless_particle_count p_fields = f_part["/tracer particles"] assert len(ptf) == 1 ptype = list(ptf.keys())[0] field_list = ptf[ptype] bxyz = [] bfields = {(ptype, field): [] for field in field_list} for chunk in chunks: for g1, g2 in particle_sequences(chunk.objs): start = p_ind[g1.id - g1._id_offset] end_nobuff = p_ind[g2.id - g2._id_offset + 1] end = end_nobuff + blockless_buffer maxlen = end_nobuff - start blk = np.asarray(p_fields[start:end, pblk], dtype="=i8") >= 0 xyz = [ np.asarray(p_fields[start:end, _], dtype="=f8") for _ in (px, py, pz) ] (x, y, z), _xyz = _conditionally_split_arrays(xyz, blk) x, y, z = (_[:maxlen] for _ in (x, y, z)) if _xyz[0].size > 0: bxyz.append(_xyz) mask = selector.select_points(x, y, z, 0.0) blockless = (_xyz[0] > 0).any() # This checks if none of the particles within these blocks are # included in the mask We need to also allow for blockless # particles to be selected. if mask is None and not blockless: continue for field in field_list: fi = self._particle_fields[field] (data,), (bdata,) = _conditionally_split_arrays( [p_fields[start:end, fi]], blk ) # Note that we have to apply mask to the split array, since # we select on the split array. if mask is not None: # Be sure to truncate off the buffer length!! 
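                        # ``data`` still includes the ``blockless_buffer``
                        # rows read past the sequence end, while ``mask`` was
                        # built from the truncated coordinate arrays, so the
                        # slice must be applied before the mask.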
yield (ptype, field), data[:maxlen][mask] if blockless: bfields[ptype, field].append(bdata) if len(bxyz) == 0: return bxyz = np.concatenate(bxyz, axis=-1) mask = selector.select_points(bxyz[0, :], bxyz[1, :], bxyz[2, :], 0.0) if mask is None: return for field in field_list: data_field = np.concatenate(bfields[ptype, field], axis=-1) yield (ptype, field), data_field[mask] def _read_obj_field(self, obj, field, ds_offset=None): if ds_offset is None: ds_offset = (None, -1) ds, offset = ds_offset # our context here includes datasets and whatnot that are opened in the # hdf5 file if ds is None: ds = self._handle[f"/{field[1]}"] if offset == -1: data = ds[obj.id - obj._id_offset, :, :, :].transpose() else: data = ds[offset, :, :, :].transpose() return data.astype("=f8") def _read_chunk_data(self, chunk, fields): f = self._handle rv = {} for g in chunk.objs: rv[g.id] = {} # Split into particles and non-particles fluid_fields, particle_fields = [], [] for ftype, fname in fields: if ftype in self.ds.particle_types: particle_fields.append((ftype, fname)) else: fluid_fields.append((ftype, fname)) if len(particle_fields) > 0: selector = AlwaysSelector(self.ds) rv.update(self._read_particle_selection([chunk], selector, particle_fields)) if len(fluid_fields) == 0: return rv for field in fluid_fields: ftype, fname = field ds = f[f"/{fname}"] for gs in grid_sequences(chunk.objs): start = gs[0].id - gs[0]._id_offset end = gs[-1].id - gs[-1]._id_offset + 1 data = ds[start:end, :, :, :].transpose() for i, g in enumerate(gs): rv[g.id][field] = np.asarray(data[..., i], "=f8") return rv class IOHandlerFLASHParticle(BaseIOHandler): _particle_reader = True _dataset_type = "flash_particle_hdf5" def __init__(self, ds): super().__init__(ds) # Now we cache the particle fields self._handle = ds._handle self._particle_fields = determine_particle_fields(self._handle) self._position_fields = [ self._particle_fields[f"particle_pos{ax}"] for ax in "xyz" ] @property def chunksize(self): return 32**3 def _read_fluid_selection(self, chunks, selector, fields, size): raise NotImplementedError def _read_particle_coords(self, chunks, ptf): chunks = list(chunks) data_files = set() assert len(ptf) == 1 for chunk in chunks: for obj in chunk.objs: data_files.update(obj.data_files) px, py, pz = self._position_fields p_fields = self._handle["/tracer particles"] for data_file in sorted(data_files, key=lambda x: (x.filename, x.start)): pxyz = np.asarray( p_fields[data_file.start : data_file.end, [px, py, pz]], dtype="=f8" ) yield "io", pxyz.T, 0.0 def _yield_coordinates(self, data_file, needed_ptype=None): px, py, pz = self._position_fields p_fields = self._handle["/tracer particles"] pxyz = np.asarray( p_fields[data_file.start : data_file.end, [px, py, pz]], dtype="=f8" ) yield ("io", pxyz) def _read_particle_data_file(self, data_file, ptf, selector=None): px, py, pz = self._position_fields p_fields = self._handle["/tracer particles"] si, ei = data_file.start, data_file.end data_return = {} # This should just be a single item for ptype, field_list in sorted(ptf.items()): x = np.asarray(p_fields[si:ei, px], dtype="=f8") y = np.asarray(p_fields[si:ei, py], dtype="=f8") z = np.asarray(p_fields[si:ei, pz], dtype="=f8") if selector: mask = selector.select_points(x, y, z, 0.0) del x, y, z if mask is None: continue for field in field_list: fi = self._particle_fields[field] data = p_fields[si:ei, fi] if selector: data = data[mask] data_return[ptype, field] = data return data_return def _read_particle_fields(self, chunks, ptf, selector): assert 
len(ptf) == 1 yield from super()._read_particle_fields(chunks, ptf, selector) @cached_property def _pcount(self): return self._handle["/localnp"][:].sum() def _count_particles(self, data_file): si, ei = data_file.start, data_file.end pcount = self._pcount if None not in (si, ei): pcount = np.clip(pcount - si, 0, ei - si) return {"io": pcount} def _identify_fields(self, data_file): fields = [("io", field) for field in self._particle_fields] return fields, {} ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/flash/misc.py0000644000175100001770000000000014714401662016613 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3191524 yt-4.4.0/yt/frontends/flash/tests/0000755000175100001770000000000014714401715016461 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/flash/tests/__init__.py0000644000175100001770000000000014714401662020561 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/flash/tests/test_outputs.py0000644000175100001770000001036614714401662021624 0ustar00runnerdockerfrom collections import OrderedDict import numpy as np from numpy.testing import assert_allclose, assert_equal from yt.frontends.flash.api import FLASHDataset, FLASHParticleDataset from yt.loaders import load from yt.testing import ( ParticleSelectionComparison, disable_dataset_cache, requires_file, requires_module, units_override_check, ) from yt.utilities.answer_testing.framework import ( data_dir_load, nbody_answer, requires_ds, small_patch_amr, ) _fields = (("gas", "temperature"), ("gas", "density"), ("gas", "velocity_magnitude")) sloshing = "GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0300" @requires_ds(sloshing, big_data=True) def test_sloshing(): ds = data_dir_load(sloshing) assert_equal(str(ds), "sloshing_low_res_hdf5_plt_cnt_0300") for test in small_patch_amr(ds, _fields): test_sloshing.__name__ = test.description yield test _fields_2d = (("gas", "temperature"), ("gas", "density")) wt = "WindTunnel/windtunnel_4lev_hdf5_plt_cnt_0030" @requires_module("h5py") @requires_ds(wt) def test_wind_tunnel(): ds = data_dir_load(wt) assert_equal(str(ds), "windtunnel_4lev_hdf5_plt_cnt_0030") for test in small_patch_amr(ds, _fields_2d): test_wind_tunnel.__name__ = test.description yield test @requires_module("h5py") @requires_file(wt) def test_FLASHDataset(): assert isinstance(data_dir_load(wt), FLASHDataset) @requires_module("h5py") @requires_file(sloshing) def test_units_override(): units_override_check(sloshing) @disable_dataset_cache @requires_module("h5py") @requires_file(sloshing) def test_magnetic_units(): ds1 = load(sloshing) assert_allclose(ds1.magnetic_unit.value, np.sqrt(4.0 * np.pi)) assert str(ds1.magnetic_unit.units) == "G" mag_unit1 = ds1.magnetic_unit.to("code_magnetic") assert_allclose(mag_unit1.value, 1.0) assert str(mag_unit1.units) == "code_magnetic" ds2 = load(sloshing, unit_system="mks") assert_allclose(ds2.magnetic_unit.value, np.sqrt(4.0 * np.pi) * 1.0e-4) assert str(ds2.magnetic_unit.units) == "T" mag_unit2 = ds2.magnetic_unit.to("code_magnetic") assert_allclose(mag_unit2.value, 1.0) assert str(mag_unit2.units) == "code_magnetic" @requires_module("h5py") @requires_file(sloshing) def test_mu(): ds = data_dir_load(sloshing) sp = ds.sphere("c", (0.1, "unitary")) assert np.all( sp["gas", 
"mean_molecular_weight"] == ds.parameters["eos_singlespeciesa"] ) fid_1to3_b1 = "fiducial_1to3_b1/fiducial_1to3_b1_hdf5_part_0080" fid_1to3_b1_fields = OrderedDict( [ (("all", "particle_mass"), None), (("all", "particle_ones"), None), (("all", "particle_velocity_x"), ("all", "particle_mass")), (("all", "particle_velocity_y"), ("all", "particle_mass")), (("all", "particle_velocity_z"), ("all", "particle_mass")), ] ) @requires_module("h5py") @requires_file(fid_1to3_b1) def test_FLASHParticleDataset(): assert isinstance(data_dir_load(fid_1to3_b1), FLASHParticleDataset) @requires_module("h5py") @requires_file(fid_1to3_b1) def test_FLASHParticleDataset_selection(): ds = data_dir_load(fid_1to3_b1) psc = ParticleSelectionComparison(ds) psc.run_defaults() dens_turb_mag = "DensTurbMag/DensTurbMag_hdf5_plt_cnt_0015" @requires_module("h5py") @requires_file(dens_turb_mag) def test_FLASH25_dataset(): ds = data_dir_load(dens_turb_mag) assert_equal(ds.parameters["time"], 751000000000.0) assert_equal(ds.domain_dimensions, np.array([8, 8, 8])) assert_equal(ds.domain_left_edge, ds.arr([-2e18, -2e18, -2e18], "code_length")) assert_equal(ds.index.num_grids, 73) dd = ds.all_data() dd["gas", "density"] @requires_module("h5py") @requires_ds(fid_1to3_b1, big_data=True) def test_fid_1to3_b1(): ds = data_dir_load(fid_1to3_b1) for test in nbody_answer( ds, "fiducial_1to3_b1_hdf5_part_0080", 6684119, fid_1to3_b1_fields ): test_fid_1to3_b1.__name__ = test.description yield test loc_bub_dust = "LocBub_dust/LocBub_dust_hdf5_plt_cnt_0220" @requires_module("h5py") @requires_file(loc_bub_dust) def test_blockless_particles(): ds = data_dir_load(loc_bub_dust) dd = ds.all_data() pos = dd["all", "particle_position"] assert_equal(pos.shape, (2239, 3)) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3231523 yt-4.4.0/yt/frontends/gadget/0000755000175100001770000000000014714401715015455 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gadget/__init__.py0000644000175100001770000000000014714401662017555 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gadget/api.py0000644000175100001770000000032014714401662016574 0ustar00runnerdockerfrom . 
import tests from .data_structures import GadgetDataset, GadgetFieldInfo, GadgetHDF5Dataset from .io import IOHandlerGadgetBinary, IOHandlerGadgetHDF5 from .simulation_handling import GadgetSimulation ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gadget/data_structures.py0000644000175100001770000006513414714401662021255 0ustar00runnerdockerimport os import struct import numpy as np from yt.data_objects.static_output import ParticleFile from yt.fields.field_info_container import FieldInfoContainer from yt.frontends.sph.data_structures import SPHDataset, SPHParticleIndex from yt.funcs import only_on_root from yt.geometry.geometry_handler import Index from yt.utilities.chemical_formulas import compute_mu from yt.utilities.cosmology import Cosmology from yt.utilities.fortran_utils import read_record from yt.utilities.logger import ytLogger as mylog from yt.utilities.on_demand_imports import _h5py as h5py from .definitions import ( SNAP_FORMAT_2_OFFSET, gadget_field_specs, gadget_header_specs, gadget_ptype_specs, ) from .fields import GadgetFieldInfo def _fix_unit_ordering(unit): if isinstance(unit[0], str): unit = unit[1], unit[0] return unit def _byte_swap_32(x): return struct.unpack(">I", struct.pack("" # Format 2? elif rhead == 8: return 2, "<" elif rhead == _byte_swap_32(8): return 2, ">" else: raise RuntimeError( "Gadget snapshot file is likely corrupted! " f"The first 4 bytes represent {rhead} (as little endian int32). " f"But we are looking for {first_header_size} (for format 1) " "or 8 (for format 2)." ) @property def position_offset(self): """Offset to the position block.""" n_header = len(self.size) offset = 8 * n_header + sum(self.size) if self.gadget_format[0] == 2: offset += SNAP_FORMAT_2_OFFSET * (n_header + 1) return offset @property def size(self): """Header size in bytes.""" def single_header_size(single_header_spec): return sum( field[1] * np.dtype(field[2]).itemsize for field in single_header_spec ) return [single_header_size(spec) for spec in self.spec] @property def value(self): """Header values as a dictionary.""" # The entries in this header are capitalized and named to match Table 4 # in the GADGET-2 user guide. gformat, endianswap = self.gadget_format # Read header with self.open() as f: hvals = {} for spec in self.spec: if gformat == 2: f.seek(f.tell() + SNAP_FORMAT_2_OFFSET) hvals_new = read_record(f, spec, endian=endianswap) hvals.update(hvals_new) # Remove placeholder keys for key in self._placeholder_keys: if key in hvals: del hvals[key] # Convert length 1 list to scalar value for i in hvals: if len(hvals[i]) == 1: hvals[i] = hvals[i][0] return hvals def open(self): """Open snapshot file.""" for filename in [self.filename, self.filename + ".0"]: if os.path.exists(filename): return open(filename, "rb") raise FileNotFoundError(f"Snapshot file {self.filename} does not exist.") def validate(self): """Validate data integrity.""" try: self.open().close() self.gadget_format self.float_type except Exception: return False return True class GadgetBinaryFile(ParticleFile): def __init__(self, ds, io, filename, file_id, range=None): header = GadgetBinaryHeader(filename, ds._header.spec) self.header = header.value self._position_offset = header.position_offset with header.open() as f: self._file_size = f.seek(0, os.SEEK_END) super().__init__(ds, io, filename, file_id, range) def _calculate_offsets(self, field_list, pcounts): # Note that we ignore pcounts here because it's the global count. 
We # just want the local count, which we store here. self.field_offsets = self.io._calculate_field_offsets( field_list, self.header["Npart"].copy(), self._position_offset, self.start, self._file_size, ) class GadgetBinaryIndex(SPHParticleIndex): def __init__(self, ds, dataset_type): super().__init__(ds, dataset_type) self._initialize_index() def _initialize_index(self): # Normally this function is called during field detection. We call it # here because we need to know which fields exist on-disk so that we can # read in the smoothing lengths for SPH data before we construct the # Morton bitmaps. self._detect_output_fields() super()._initialize_index() def _initialize_frontend_specific(self): super()._initialize_frontend_specific() self.io._float_type = self.ds._header.float_type class GadgetDataset(SPHDataset): _index_class: type[Index] = GadgetBinaryIndex _file_class: type[ParticleFile] = GadgetBinaryFile _field_info_class: type[FieldInfoContainer] = GadgetFieldInfo _particle_mass_name = "Mass" _particle_coordinates_name = "Coordinates" _particle_velocity_name = "Velocities" _sph_ptypes = ("Gas",) _suffix = "" def __init__( self, filename, dataset_type="gadget_binary", additional_fields=(), unit_base=None, index_order=None, index_filename=None, kdtree_filename=None, kernel_name=None, bounding_box=None, header_spec="default", field_spec="default", ptype_spec="default", long_ids=False, units_override=None, mean_molecular_weight=None, header_offset=0, unit_system="cgs", use_dark_factor=False, w_0=-1.0, w_a=0.0, default_species_fields=None, ): if self._instantiated: return # Check if filename is a directory if os.path.isdir(filename): # Get the .0 snapshot file. We know there's only 1 and it's valid since we # came through _is_valid in load() for f in os.listdir(filename): fname = os.path.join(filename, f) fext = os.path.splitext(fname)[-1] if ( (".0" in f) and (fext not in {".ewah", ".kdtree"}) and os.path.isfile(fname) ): filename = os.path.join(filename, f) break self._header = GadgetBinaryHeader(filename, header_spec) header_size = self._header.size if header_size != [256]: only_on_root( mylog.warning, "Non-standard header size is detected! " "Gadget-2 standard header is 256 bytes, but yours is %s. " "Make sure a non-standard header is actually expected. " "Otherwise something is wrong, " "and you might want to check how the dataset is loaded. " "Further information about header specification can be found in " "https://yt-project.org/docs/dev/examining/loading_data.html#header-specification.", header_size, ) self._field_spec = self._setup_binary_spec(field_spec, gadget_field_specs) self._ptype_spec = self._setup_binary_spec(ptype_spec, gadget_ptype_specs) self.storage_filename = None if long_ids: self._id_dtype = "u8" else: self._id_dtype = "u4" self.long_ids = long_ids self.header_offset = header_offset if unit_base is not None and "UnitLength_in_cm" in unit_base: # We assume this is comoving, because in the absence of comoving # integration the redshift will be zero. 
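            # "cmcm" stores the inverse of UnitLength_in_cm, i.e. the number
            # of code lengths per comoving centimeter; the unit-setup code
            # consults it when constructing comoving ("cmcm"-style) units.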
unit_base["cmcm"] = 1.0 / unit_base["UnitLength_in_cm"] self._unit_base = unit_base if bounding_box is not None: # This ensures that we know a bounding box has been applied self._domain_override = True bbox = np.array(bounding_box, dtype="float64") if bbox.shape == (2, 3): bbox = bbox.transpose() self.domain_left_edge = bbox[:, 0] self.domain_right_edge = bbox[:, 1] else: self.domain_left_edge = self.domain_right_edge = None if units_override is not None: raise RuntimeError( "units_override is not supported for GadgetDataset. " + "Use unit_base instead." ) # Set dark energy parameters before cosmology object is created self.use_dark_factor = use_dark_factor self.w_0 = w_0 self.w_a = w_a super().__init__( filename, dataset_type=dataset_type, unit_system=unit_system, index_order=index_order, index_filename=index_filename, kdtree_filename=kdtree_filename, kernel_name=kernel_name, default_species_fields=default_species_fields, ) if self.cosmological_simulation: self.time_unit.convert_to_units("s/h") self.length_unit.convert_to_units("kpccm/h") self.mass_unit.convert_to_units("g/h") else: self.time_unit.convert_to_units("s") self.length_unit.convert_to_units("kpc") self.mass_unit.convert_to_units("Msun") if mean_molecular_weight is None: self.mu = compute_mu(self.default_species_fields) else: self.mu = mean_molecular_weight @classmethod def _setup_binary_spec(cls, spec, spec_dict): if isinstance(spec, str): _hs = () for hs in spec.split("+"): _hs += spec_dict[hs] spec = _hs return spec def __str__(self): return os.path.basename(self.parameter_filename).split(".")[0] def _get_hvals(self): self.gen_hsmls = False return self._header.value def _parse_parameter_file(self): hvals = self._get_hvals() self.dimensionality = 3 self.refine_by = 2 self.parameters["HydroMethod"] = "sph" # Set standard values # We may have an overridden bounding box. if self.domain_left_edge is None and hvals["BoxSize"] != 0: self.domain_left_edge = np.zeros(3, "float64") self.domain_right_edge = np.ones(3, "float64") * hvals["BoxSize"] self.domain_dimensions = np.ones(3, "int32") self._periodicity = (True, True, True) self.current_redshift = hvals.get("Redshift", 0.0) if "Redshift" not in hvals: mylog.info("Redshift is not set in Header. Assuming z=0.") if "OmegaLambda" in hvals: self.omega_lambda = hvals["OmegaLambda"] self.omega_matter = hvals["Omega0"] self.hubble_constant = hvals["HubbleParam"] self.cosmological_simulation = self.omega_lambda != 0.0 else: # If these are not set it is definitely not a cosmological dataset. self.omega_lambda = 0.0 self.omega_matter = 0.0 # Just in case somebody asks for it. # omega_matter has been changed to 0.0 for consistency with # non-cosmological frontends self.cosmological_simulation = 0 # Hubble is set below for Omega Lambda = 0. # According to the Gadget manual, OmegaLambda will be zero for # non-cosmological datasets. However, it may be the case that # individuals are running cosmological simulations *without* Lambda, in # which case we may be doing something incorrect here. # It may be possible to deduce whether ComovingIntegration is on # somehow, but opinions on this vary. if not self.cosmological_simulation: mylog.info("Omega Lambda is 0.0, so we are turning off Cosmology.") self.hubble_constant = 1.0 # So that scaling comes out correct self.current_redshift = 0.0 # This may not be correct. 
self.current_time = hvals["Time"] else: # Now we calculate our time based on the cosmology, because in # ComovingIntegration hvals["Time"] will in fact be the expansion # factor, not the actual integration time, so we re-calculate # global time from our Cosmology. cosmo = Cosmology( hubble_constant=self.hubble_constant, omega_matter=self.omega_matter, omega_lambda=self.omega_lambda, ) self.current_time = cosmo.lookback_time(self.current_redshift, 1e6) mylog.info( "Calculating time from %0.3e to be %0.3e seconds", hvals["Time"], self.current_time, ) self.parameters = hvals basename, _, _ = self.basename.partition(".") prefix = os.path.join(self.directory, basename) if hvals["NumFiles"] > 1: for t in ( f"{prefix}.%(num)s{self._suffix}", f"{prefix}.gad.%(num)s{self._suffix}", ): if os.path.isfile(t % {"num": 0}): self.filename_template = t break else: raise RuntimeError("Could not determine correct data file template.") else: self.filename_template = self.parameter_filename self.file_count = hvals["NumFiles"] def _set_code_unit_attributes(self): # If no units passed in by user, set a sane default (Gadget-2 users # guide). if self._unit_base is None: if self.cosmological_simulation == 1: only_on_root( mylog.info, "Assuming length units are in kpc/h (comoving)" ) self._unit_base = {"length": (1.0, "kpccm/h")} else: only_on_root(mylog.info, "Assuming length units are in kpc (physical)") self._unit_base = {"length": (1.0, "kpc")} # If units passed in by user, decide what to do about # co-moving and factors of h unit_base = self._unit_base or {} if "length" in unit_base: length_unit = unit_base["length"] elif "UnitLength_in_cm" in unit_base: if self.cosmological_simulation == 0: length_unit = (unit_base["UnitLength_in_cm"], "cm") else: length_unit = (unit_base["UnitLength_in_cm"], "cmcm/h") else: raise RuntimeError length_unit = _fix_unit_ordering(length_unit) self.length_unit = self.quan(length_unit[0], length_unit[1]) unit_base = self._unit_base or {} if self.cosmological_simulation: # see http://www.mpa-garching.mpg.de/gadget/gadget-list/0113.html # for why we need to include a factor of square root of the # scale factor vel_units = "cm/s * sqrt(a)" else: vel_units = "cm/s" if "velocity" in unit_base: velocity_unit = unit_base["velocity"] elif "UnitVelocity_in_cm_per_s" in unit_base: velocity_unit = (unit_base["UnitVelocity_in_cm_per_s"], vel_units) else: velocity_unit = (1e5, vel_units) velocity_unit = _fix_unit_ordering(velocity_unit) self.velocity_unit = self.quan(velocity_unit[0], velocity_unit[1]) # We set hubble_constant = 1.0 for non-cosmology, so this is safe. # Default to 1e10 Msun/h if mass is not specified. 
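        # Illustrative (hypothetical) user input: supplying explicit
        # Gadget-2 cgs units such as
        #
        #     unit_base = {
        #         "UnitLength_in_cm": 3.085678e21,    # 1 kpc
        #         "UnitMass_in_g": 1.989e43,          # 1e10 Msun
        #         "UnitVelocity_in_cm_per_s": 1.0e5,  # 1 km/s
        #     }
        #
        # makes the branch below pick "UnitMass_in_g" (tagged "g" or "g/h"
        # depending on cosmology) instead of the 1e10 Msun/h fallback.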
if "mass" in unit_base: mass_unit = unit_base["mass"] elif "UnitMass_in_g" in unit_base: if self.cosmological_simulation == 0: mass_unit = (unit_base["UnitMass_in_g"], "g") else: mass_unit = (unit_base["UnitMass_in_g"], "g/h") else: # Sane default mass_unit = (1e10, "Msun/h") mass_unit = _fix_unit_ordering(mass_unit) self.mass_unit = self.quan(mass_unit[0], mass_unit[1]) if self.cosmological_simulation: # self.velocity_unit is the unit to rescale on-disk velocities, The # actual internal velocity unit is really in comoving units # since the time unit is derived from the internal velocity unit, we # infer the internal velocity unit here and name it vel_unit # # see http://www.mpa-garching.mpg.de/gadget/gadget-list/0113.html if "velocity" in unit_base: vel_unit = unit_base["velocity"] elif "UnitVelocity_in_cm_per_s" in unit_base: vel_unit = (unit_base["UnitVelocity_in_cm_per_s"], "cmcm/s") else: vel_unit = (1, "kmcm/s") vel_unit = self.quan(*vel_unit) else: vel_unit = self.velocity_unit self.time_unit = self.length_unit / vel_unit if "specific_energy" in unit_base: specific_energy_unit = unit_base["specific_energy"] elif "UnitEnergy_in_cgs" in unit_base and "UnitMass_in_g" in unit_base: specific_energy_unit = ( unit_base["UnitEnergy_in_cgs"] / unit_base["UnitMass_in_g"] ) specific_energy_unit = (specific_energy_unit, "(cm/s)**2") else: # Sane default specific_energy_unit = (1, "(km/s) ** 2") specific_energy_unit = _fix_unit_ordering(specific_energy_unit) self.specific_energy_unit = self.quan(*specific_energy_unit) if "magnetic" in unit_base: magnetic_unit = unit_base["magnetic"] elif "UnitMagneticField_in_gauss" in unit_base: magnetic_unit = (unit_base["UnitMagneticField_in_gauss"], "gauss") else: # Sane default magnetic_unit = (1.0, "gauss") magnetic_unit = _fix_unit_ordering(magnetic_unit) self.magnetic_unit = self.quan(*magnetic_unit) @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: header_spec = kwargs.get("header_spec", "default") # Check to see if passed filename is a directory. If so, use it to get # the .0 snapshot file. Make sure there's only one such file, otherwise # there's an ambiguity about which file the user wants. Ignore ewah files if os.path.isdir(filename): valid_files = [] for f in os.listdir(filename): fname = os.path.join(filename, f) if (".0" in f) and (".ewah" not in f) and os.path.isfile(fname): valid_files.append(f) if len(valid_files) == 0: return False elif len(valid_files) > 1: return False else: validated_file = os.path.join(filename, valid_files[0]) else: validated_file = filename header = GadgetBinaryHeader(validated_file, header_spec) return header.validate() class GadgetHDF5File(ParticleFile): pass class GadgetHDF5Dataset(GadgetDataset): _load_requirements = ["h5py"] _file_class = GadgetHDF5File _index_class = SPHParticleIndex _field_info_class: type[FieldInfoContainer] = GadgetFieldInfo _particle_mass_name = "Masses" _sph_ptypes = ("PartType0",) _suffix = ".hdf5" def __init__( self, filename, dataset_type="gadget_hdf5", unit_base=None, index_order=None, index_filename=None, kernel_name=None, bounding_box=None, units_override=None, unit_system="cgs", default_species_fields=None, ): self.storage_filename = None filename = os.path.abspath(filename) if units_override is not None: raise RuntimeError( "units_override is not supported for GadgetHDF5Dataset. " "Use unit_base instead." 
) super().__init__( filename, dataset_type, unit_base=unit_base, index_order=index_order, index_filename=index_filename, kernel_name=kernel_name, bounding_box=bounding_box, unit_system=unit_system, default_species_fields=default_species_fields, ) def _get_hvals(self): handle = h5py.File(self.parameter_filename, mode="r") hvals = {} hvals.update((str(k), v) for k, v in handle["/Header"].attrs.items()) # Compat reasons. hvals["NumFiles"] = hvals["NumFilesPerSnapshot"] hvals["Massarr"] = hvals["MassTable"] sph_ptypes = [ptype for ptype in self._sph_ptypes if ptype in handle] if sph_ptypes: self.gen_hsmls = "SmoothingLength" not in handle[sph_ptypes[0]] else: self.gen_hsmls = False # Later versions of Gadget and its derivatives have a "Parameters" # group in the HDF5 file. if "Parameters" in handle: hvals.update((str(k), v) for k, v in handle["/Parameters"].attrs.items()) handle.close() # ensure that 1-element arrays are reduced to scalars updated_hvals = {} for hvalname, value in hvals.items(): if isinstance(value, np.ndarray) and value.size == 1: mylog.info("Reducing single-element array %s to scalar.", hvalname) updated_hvals[hvalname] = value.item() hvals.update(updated_hvals) return hvals def _get_uvals(self): handle = h5py.File(self.parameter_filename, mode="r") uvals = {} uvals.update((str(k), v) for k, v in handle["/Units"].attrs.items()) handle.close() return uvals def _set_owls_eagle(self): self.dimensionality = 3 self.refine_by = 2 self.parameters["HydroMethod"] = "sph" self._unit_base = self._get_uvals() self._unit_base["cmcm"] = 1.0 / self._unit_base["UnitLength_in_cm"] self.current_redshift = self.parameters["Redshift"] self.omega_lambda = self.parameters["OmegaLambda"] self.omega_matter = self.parameters["Omega0"] self.hubble_constant = self.parameters["HubbleParam"] if self.domain_left_edge is None and self.parameters["BoxSize"] != 0: self.domain_left_edge = np.zeros(3, "float64") self.domain_right_edge = np.ones(3, "float64") * self.parameters["BoxSize"] self.domain_dimensions = np.ones(3, "int32") self.cosmological_simulation = 1 self._periodicity = (True, True, True) prefix = os.path.abspath( os.path.join( os.path.dirname(self.parameter_filename), os.path.basename(self.parameter_filename).split(".", 1)[0], ) ) suffix = self.parameter_filename.rsplit(".", 1)[-1] if self.parameters["NumFiles"] > 1: self.filename_template = f"{prefix}.%(num)i.{suffix}" else: self.filename_template = self.parameter_filename self.file_count = self.parameters["NumFilesPerSnapshot"] def _set_owls_eagle_units(self): # note the contents of the HDF5 Units group are in _unit_base # note the velocity stored on disk is sqrt(a) dx/dt # physical velocity [cm/s] = a dx/dt # = sqrt(a) * velocity_on_disk * UnitVelocity_in_cm_per_s self.length_unit = self.quan(self._unit_base["UnitLength_in_cm"], "cmcm/h") self.mass_unit = self.quan(self._unit_base["UnitMass_in_g"], "g/h") self.velocity_unit = self.quan( self._unit_base["UnitVelocity_in_cm_per_s"], "cm/s * sqrt(a)" ) self.time_unit = self.quan(self._unit_base["UnitTime_in_s"], "s/h") specific_energy_unit_cgs = ( self._unit_base["UnitEnergy_in_cgs"] / self._unit_base["UnitMass_in_g"] ) self.specific_energy_unit = self.quan(specific_energy_unit_cgs, "(cm/s)**2") @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if cls._missing_load_requirements(): return False need_groups = ["Header"] veto_groups = ["FOF", "Group", "Subhalo"] valid = True try: fh = h5py.File(filename, mode="r") valid = all(ng in fh["/"] for ng in need_groups) and not any( 
vg in fh["/"] for vg in veto_groups ) fh.close() except Exception: valid = False pass try: fh = h5py.File(filename, mode="r") valid = fh["Header"].attrs["Code"].decode("utf-8") != "SWIFT" fh.close() except (OSError, KeyError): pass return valid ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gadget/definitions.py0000644000175100001770000000637714714401662020360 0ustar00runnerdockergadget_header_specs = { "default": ( ("Npart", 6, "i"), ("Massarr", 6, "d"), ("Time", 1, "d"), ("Redshift", 1, "d"), ("FlagSfr", 1, "i"), ("FlagFeedback", 1, "i"), ("Nall", 6, "i"), ("FlagCooling", 1, "i"), ("NumFiles", 1, "i"), ("BoxSize", 1, "d"), ("Omega0", 1, "d"), ("OmegaLambda", 1, "d"), ("HubbleParam", 1, "d"), ("FlagAge", 1, "i"), ("FlagMetals", 1, "i"), ("NallHW", 6, "i"), ("unused", 16, "i"), ), "pad32": (("empty", 32, "c"),), "pad64": (("empty", 64, "c"),), "pad128": (("empty", 128, "c"),), "pad256": (("empty", 256, "c"),), } gadget_ptype_specs = {"default": ("Gas", "Halo", "Disk", "Bulge", "Stars", "Bndry")} gadget_field_specs = { "default": ( "Coordinates", "Velocities", "ParticleIDs", "Mass", ("InternalEnergy", "Gas"), ("Density", "Gas"), ("SmoothingLength", "Gas"), ), "agora_unlv": ( "Coordinates", "Velocities", "ParticleIDs", "Mass", ("InternalEnergy", "Gas"), ("Density", "Gas"), ("Electron_Number_Density", "Gas"), ("HI_NumberDensity", "Gas"), ("SmoothingLength", "Gas"), ), "group0000": ( "Coordinates", "Velocities", "ParticleIDs", "Mass", "Potential", ("Temperature", "Gas"), ("Density", "Gas"), ("ElectronNumberDensity", "Gas"), ("HI_NumberDensity", "Gas"), ("SmoothingLength", "Gas"), ("StarFormationRate", "Gas"), ("DelayTime", "Gas"), ("FourMetalFractions", ("Gas", "Stars")), ("MaxTemperature", ("Gas", "Stars")), ("NStarsSpawned", ("Gas", "Stars")), ("StellarAge", "Stars"), ), "magneticum_box2_hr": ( "Coordinates", "Velocities", "ParticleIDs", "Mass", ("InternalEnergy", "Gas"), ("Density", "Gas"), ("SmoothingLength", "Gas"), ("ColdFraction", "Gas"), ("Temperature", "Gas"), ("StellarAge", ("Stars", "Bndry")), "Potential", ("InitialMass", "Stars"), ("ElevenMetalMasses", ("Gas", "Stars")), ("StarFormationRate", "Gas"), ("TrueMass", "Bndry"), ("AccretionRate", "Bndry"), ), } gadget_hdf5_ptypes = ( "PartType0", "PartType1", "PartType2", "PartType3", "PartType4", "PartType5", ) SNAP_FORMAT_2_OFFSET = 16 """ Here we have a dictionary of possible element species defined in Gadget datasets, keyed by the number of elements. In some cases, these are mass fractions, in others, they are metals--the context for the dataset will determine this. The "Ej" key is for the total mass of all elements that are not explicitly listed. 
""" elem_names_opts = { 4: ["C", "O", "Si", "Fe"], 7: ["C", "N", "O", "Mg", "Si", "Fe", "Ej"], 8: ["He", "C", "O", "Mg", "S", "Si", "Fe", "Ej"], 11: ["He", "C", "Ca", "O", "N", "Ne", "Mg", "S", "Si", "Fe", "Ej"], 15: [ "He", "C", "Ca", "O", "N", "Ne", "Mg", "S", "Si", "Fe", "Na", "Al", "Ar", "Ni", "Ej", ], } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gadget/fields.py0000644000175100001770000002376714714401662017315 0ustar00runnerdockerfrom functools import partial from yt.fields.particle_fields import sph_whitelist_fields from yt.frontends.gadget.definitions import elem_names_opts from yt.frontends.sph.fields import SPHFieldInfo from yt.utilities.periodic_table import periodic_table from yt.utilities.physical_constants import kb, mp from yt.utilities.physical_ratios import _primordial_mass_fraction class GadgetFieldInfo(SPHFieldInfo): def __init__(self, ds, field_list, slice_info=None): if ds.gen_hsmls: hsml = (("smoothing_length", ("code_length", [], None)),) self.known_particle_fields += hsml for field in field_list: if field[1].startswith("MetalMasses"): mm = ((field[1], ("code_mass", [], None)),) self.known_particle_fields += mm super().__init__(ds, field_list, slice_info=slice_info) def setup_particle_fields(self, ptype, *args, **kwargs): # setup some special fields that only make sense for SPH particles if (ptype, "FourMetalFractions") in self.ds.field_list: self.species_names = self._setup_four_metal_fractions(ptype) elif (ptype, "ElevenMetalMasses") in self.ds.field_list: self.species_names = self._setup_metal_masses(ptype, "ElevenMetalMasses") elif (ptype, "MetalMasses_00") in self.ds.field_list: self.species_names = self._setup_metal_masses(ptype, "MetalMasses") if len(self.species_names) == 0: self.species_names = self._check_whitelist_species_fields(ptype) super().setup_particle_fields(ptype, *args, **kwargs) if ptype in ("PartType0", "Gas"): self.setup_gas_particle_fields(ptype) def _setup_four_metal_fractions(self, ptype): """ This function breaks the FourMetalFractions field (if present) into its four component metal fraction fields and adds corresponding metal density fields which will later get smoothed This gets used with the Gadget group0000 format as defined in the gadget_field_specs in frontends/gadget/definitions.py """ metal_names = elem_names_opts[4] def _fraction(field, data, i: int): return data[ptype, "FourMetalFractions"][:, i] def _metal_density(field, data, i: int): return data[ptype, "FourMetalFractions"][:, i] * data[ptype, "density"] for i, metal_name in enumerate(metal_names): # add the metal fraction fields self.add_field( (ptype, metal_name + "_fraction"), sampling_type="particle", function=partial(_fraction, i=i), units="", ) # add the metal density fields self.add_field( (ptype, metal_name + "_density"), sampling_type="particle", function=partial(_metal_density, i=i), units=self.ds.unit_system["density"], ) return metal_names def _make_fraction_functions(self, ptype, fname): if fname == "ElevenMetalMasses": def _fraction(field, data, i: int): return ( data[ptype, "ElevenMetalMasses"][:, i] / data[ptype, "particle_mass"] ) def _metallicity(field, data): ret = ( data[ptype, "ElevenMetalMasses"][:, 1].sum(axis=1) / data[ptype, "particle_mass"] ) return ret def _h_fraction(field, data): ret = ( data[ptype, "ElevenMetalMasses"].sum(axis=1) / data[ptype, "particle_mass"] ) return 1.0 - ret elem_names = elem_names_opts[11] elif fname == "MetalMasses": n_elem = len( [ fd for fd in 
self.ds.field_list if fd[0] == ptype and fd[1].startswith("MetalMasses") ] ) elem_names = elem_names_opts[n_elem] no_He = "He" not in elem_names def _fraction(field, data, i: int): return ( data[ptype, f"MetalMasses_{i:02d}"] / data[ptype, "particle_mass"] ) def _metallicity(field, data): mass = 0.0 start_idx = int(not no_He) for i in range(start_idx, n_elem): mass += data[ptype, f"MetalMasses_{i:02d}"] ret = mass / data[ptype, "particle_mass"] return ret if no_He: _h_fraction = None else: def _h_fraction(field, data): mass = 0.0 for i in range(n_elem): mass += data[ptype, f"MetalMasses_{i:02d}"] ret = mass / data[ptype, "particle_mass"] return 1.0 - ret else: raise KeyError( f"Making element fraction fields from '{ptype}','{fname}' not possible!" ) return _fraction, _h_fraction, _metallicity, elem_names def _setup_metal_masses(self, ptype, fname): """ This function breaks the ElevenMetalMasses field (if present) into its eleven component metal fraction fields and adds corresponding metal density fields which will later get smoothed This gets used with the magneticum_box2_hr format as defined in the gadget_field_specs in frontends/gadget/definitions.py """ sampling_type = "local" if ptype in self.ds._sph_ptypes else "particle" ( _fraction, _h_fraction, _metallicity, elem_names, ) = self._make_fraction_functions(ptype, fname) def _metal_density(field, data, elem_name: str): return data[ptype, f"{elem_name}_fraction"] * data[ptype, "density"] for i, elem_name in enumerate(elem_names): # add the element fraction fields self.add_field( (ptype, f"{elem_name}_fraction"), sampling_type=sampling_type, function=partial(_fraction, i=i), units="", ) # add the element density fields self.add_field( (ptype, f"{elem_name}_density"), sampling_type=sampling_type, function=partial(_metal_density, elem_name=elem_name), units=self.ds.unit_system["density"], ) # metallicity self.add_field( (ptype, "metallicity"), sampling_type=sampling_type, function=_metallicity, units="", ) if _h_fraction is None: # no helium, so can't compute hydrogen species_names = elem_names[-1] else: # hydrogen fraction and density self.add_field( (ptype, "H_fraction"), sampling_type=sampling_type, function=_h_fraction, units="", ) def _h_density(field, data): return data[ptype, "H_fraction"] * data[ptype, "density"] self.add_field( (ptype, "H_density"), sampling_type=sampling_type, function=_h_density, units=self.ds.unit_system["density"], ) species_names = ["H"] + elem_names[:-1] if "Ej" in elem_names: def _ej_mass(field, data): return data[ptype, "Ej_fraction"] * data[ptype, "particle_mass"] self.add_field( (ptype, "Ej_mass"), sampling_type=sampling_type, function=_ej_mass, units=self.ds.unit_system["mass"], ) if sampling_type == "local": self.alias(("gas", "Ej_mass"), (ptype, "Ej_mass")) return species_names def _check_whitelist_species_fields(self, ptype): species_names = [] for field in self.ds.field_list: if ( field[0] == ptype and field[1].endswith(("_fraction", "_density")) and field[1] in sph_whitelist_fields ): symbol, _, _ = field[1].partition("_") species_names.append(symbol) return sorted(species_names, key=lambda symbol: periodic_table[symbol].num) def setup_gas_particle_fields(self, ptype): if (ptype, "Temperature") not in self.ds.field_list: if (ptype, "ElectronAbundance") in self.ds.field_list: def _temperature(field, data): # Assume cosmic abundances x_H = _primordial_mass_fraction["H"] gamma = 5.0 / 3.0 a_e = data[ptype, "ElectronAbundance"] mu = 4.0 / (3.0 * x_H + 1.0 + 4.0 * x_H * a_e) ret = data[ptype, 
"InternalEnergy"] * (gamma - 1) * mu * mp / kb return ret.in_units(self.ds.unit_system["temperature"]) else: def _temperature(field, data): gamma = 5.0 / 3.0 ret = ( data[ptype, "InternalEnergy"] * (gamma - 1) * data.ds.mu * mp / kb ) return ret.in_units(self.ds.unit_system["temperature"]) self.add_field( (ptype, "Temperature"), sampling_type="particle", function=_temperature, units=self.ds.unit_system["temperature"], ) self.alias((ptype, "temperature"), (ptype, "Temperature")) # need to do this manually since that automatic aliasing that happens # in the FieldInfoContainer base class has already happened at this # point self.alias(("gas", "temperature"), (ptype, "Temperature")) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gadget/io.py0000644000175100001770000005532714714401662016453 0ustar00runnerdockerimport os from collections import defaultdict from functools import cached_property import numpy as np from yt.frontends.sph.io import IOHandlerSPH from yt.units._numpy_wrapper_functions import uconcatenate from yt.utilities.lib.particle_kdtree_tools import generate_smoothing_length from yt.utilities.logger import ytLogger as mylog from yt.utilities.on_demand_imports import _h5py as h5py from .definitions import SNAP_FORMAT_2_OFFSET, gadget_hdf5_ptypes class IOHandlerGadgetHDF5(IOHandlerSPH): _dataset_type = "gadget_hdf5" _vector_fields = { "Coordinates": 3, "Velocity": 3, "Velocities": 3, "MagneticField": 3, } _known_ptypes = gadget_hdf5_ptypes _element_names = ( "Hydrogen", "Helium", "Carbon", "Nitrogen", "Oxygen", "Neon", "Magnesium", "Silicon", "Iron", ) _coord_name = "Coordinates" @cached_property def var_mass(self) -> tuple[str, ...]: vm = [] for i, v in enumerate(self.ds["Massarr"]): if v == 0: vm.append(self._known_ptypes[i]) return tuple(vm) def _read_fluid_selection(self, chunks, selector, fields, size): raise NotImplementedError def _read_particle_coords(self, chunks, ptf): for data_file in self._sorted_chunk_iterator(chunks): si, ei = data_file.start, data_file.end f = h5py.File(data_file.filename, mode="r") # This double-reads for ptype in sorted(ptf): if data_file.total_particles[ptype] == 0: continue c = f[f"/{ptype}/{self._coord_name}"][si:ei, :].astype("float64") x, y, z = (np.squeeze(_) for _ in np.split(c, 3, axis=1)) if ptype == self.ds._sph_ptypes[0]: pdtype = c.dtype pshape = c.shape hsml = self._get_smoothing_length(data_file, pdtype, pshape) else: hsml = 0.0 yield ptype, (x, y, z), hsml f.close() def _yield_coordinates(self, data_file, needed_ptype=None): si, ei = data_file.start, data_file.end f = h5py.File(data_file.filename, mode="r") pcount = f["/Header"].attrs["NumPart_ThisFile"][:].astype("int64") np.clip(pcount - si, 0, ei - si, out=pcount) pcount = pcount.sum() for key in f.keys(): if not key.startswith("PartType"): continue if "Coordinates" not in f[key]: continue if needed_ptype and key != needed_ptype: continue ds = f[key]["Coordinates"][si:ei, ...] 
dt = ds.dtype.newbyteorder("N") # Native pos = np.empty(ds.shape, dtype=dt) pos[:] = ds yield key, pos f.close() def _generate_smoothing_length(self, index): data_files = index.data_files if not self.ds.gen_hsmls: return hsml_fn = data_files[0].filename.replace(".hdf5", ".hsml.hdf5") if os.path.exists(hsml_fn): with h5py.File(hsml_fn, mode="r") as f: file_hash = f.attrs["q"] if file_hash != self.ds._file_hash: mylog.warning("Replacing hsml files.") for data_file in data_files: hfn = data_file.filename.replace(".hdf5", ".hsml.hdf5") os.remove(hfn) else: return positions = [] counts = defaultdict(int) for data_file in data_files: for _, ppos in self._yield_coordinates( data_file, needed_ptype=self.ds._sph_ptypes[0] ): counts[data_file.filename] += ppos.shape[0] positions.append(ppos) if not positions: return offsets = {} offset = 0 for fn, count in counts.items(): offsets[fn] = offset offset += count kdtree = index.kdtree positions = uconcatenate(positions)[kdtree.idx] hsml = generate_smoothing_length( positions.astype("float64"), kdtree, self.ds._num_neighbors ) dtype = positions.dtype hsml = hsml[np.argsort(kdtree.idx)].astype(dtype) mylog.warning("Writing smoothing lengths to hsml files.") for i, data_file in enumerate(data_files): si, ei = data_file.start, data_file.end fn = data_file.filename hsml_fn = data_file.filename.replace(".hdf5", ".hsml.hdf5") with h5py.File(hsml_fn, mode="a") as f: if i == 0: f.attrs["q"] = self.ds._file_hash g = f.require_group(self.ds._sph_ptypes[0]) d = g.require_dataset( "SmoothingLength", dtype=dtype, shape=(counts[fn],) ) begin = si + offsets[fn] end = min(ei, d.size) + offsets[fn] d[si:ei] = hsml[begin:end] def _get_smoothing_length(self, data_file, position_dtype, position_shape): ptype = self.ds._sph_ptypes[0] si, ei = data_file.start, data_file.end if self.ds.gen_hsmls: fn = data_file.filename.replace(".hdf5", ".hsml.hdf5") else: fn = data_file.filename with h5py.File(fn, mode="r") as f: ds = f[ptype]["SmoothingLength"][si:ei, ...] dt = ds.dtype.newbyteorder("N") # Native if position_dtype is not None and dt < position_dtype: # Sometimes positions are stored in double precision # but smoothing lengths are stored in single precision. # In these cases upcast smoothing length to double precision # to avoid ValueErrors when we pass these arrays to Cython. dt = position_dtype hsml = np.empty(ds.shape, dtype=dt) hsml[:] = ds return hsml def _read_particle_data_file(self, data_file, ptf, selector=None): si, ei = data_file.start, data_file.end data_return = {} f = h5py.File(data_file.filename, mode="r") for ptype, field_list in sorted(ptf.items()): if data_file.total_particles[ptype] == 0: continue g = f[f"/{ptype}"] if selector is None or getattr(selector, "is_all_data", False): mask = slice(None, None, None) mask_sum = data_file.total_particles[ptype] hsmls = None else: coords = g["Coordinates"][si:ei].astype("float64") if ptype == "PartType0": hsmls = self._get_smoothing_length( data_file, g["Coordinates"].dtype, g["Coordinates"].shape ).astype("float64") else: hsmls = 0.0 mask = selector.select_points( coords[:, 0], coords[:, 1], coords[:, 2], hsmls ) if mask is not None: mask_sum = mask.sum() del coords if mask is None: continue for field in field_list: if field in ("Mass", "Masses") and ptype not in self.var_mass: data = np.empty(mask_sum, dtype="float64") ind = self._known_ptypes.index(ptype) data[:] = self.ds["Massarr"][ind] elif field in self._element_names: rfield = "ElementAbundance/" + field data = g[rfield][si:ei][mask, ...] 
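                # Each branch below maps a synthetic per-column field name
                # (e.g. "Metallicity_03" or "MetalMasses_00") back onto one
                # column of the 2D on-disk dataset from which it was derived
                # in _identify_fields.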
elif field.startswith("Metallicity_"): col = int(field.rsplit("_", 1)[-1]) data = g["Metallicity"][si:ei, col][mask] elif field.startswith("GFM_Metals_"): col = int(field.rsplit("_", 1)[-1]) data = g["GFM_Metals"][si:ei, col][mask] elif field.startswith("Chemistry_"): col = int(field.rsplit("_", 1)[-1]) data = g["ChemistryAbundances"][si:ei, col][mask] elif field.startswith("PassiveScalars_"): col = int(field.rsplit("_", 1)[-1]) data = g["PassiveScalars"][si:ei, col][mask] elif field.startswith("GFM_StellarPhotometrics_"): col = int(field.rsplit("_", 1)[-1]) data = g["GFM_StellarPhotometrics"][si:ei, col][mask] elif field.startswith("MetalMasses_"): col = int(field.rsplit("_", 1)[-1]) data = g["Mass of Metals"][si:ei, col][mask] elif field == "smoothing_length": # This is for frontends which do not store # the smoothing length on-disk, so we do not # attempt to read them, but instead assume # that they are calculated in _get_smoothing_length. if hsmls is None: hsmls = self._get_smoothing_length( data_file, g["Coordinates"].dtype, g["Coordinates"].shape, ).astype("float64") data = hsmls[mask] else: data = g[field][si:ei][mask, ...] data_return[ptype, field] = data f.close() return data_return def _count_particles(self, data_file): si, ei = data_file.start, data_file.end f = h5py.File(data_file.filename, mode="r") pcount = f["/Header"].attrs["NumPart_ThisFile"][:].astype("int64") f.close() if None not in (si, ei): np.clip(pcount - si, 0, ei - si, out=pcount) npart = {f"PartType{i}": v for i, v in enumerate(pcount)} return npart def _identify_fields(self, data_file): f = h5py.File(data_file.filename, mode="r") fields = [] cname = self.ds._particle_coordinates_name # Coordinates mname = self.ds._particle_mass_name # Mass # loop over all keys in OWLS hdf5 file # -------------------------------------------------- for key in f.keys(): # only want particle data # -------------------------------------- if not key.startswith("PartType"): continue # particle data group # -------------------------------------- g = f[key] if cname not in g: continue # note str => not unicode! 
ptype = str(key) if ptype not in self.var_mass: fields.append((ptype, mname)) # loop over all keys in PartTypeX group # ---------------------------------------- for k in g.keys(): if k == "ElementAbundance": gp = g[k] for j in gp.keys(): kk = j fields.append((ptype, str(kk))) elif ( k in ( "Metallicity", "GFM_Metals", "PassiveScalars", "GFM_StellarPhotometrics", "Mass of Metals", ) and len(g[k].shape) > 1 ): # Vector of metallicity or passive scalar for i in range(g[k].shape[1]): key = "MetalMasses" if k == "Mass of Metals" else k fields.append((ptype, "%s_%02i" % (key, i))) elif k == "ChemistryAbundances" and len(g[k].shape) > 1: for i in range(g[k].shape[1]): fields.append((ptype, "Chemistry_%03i" % i)) else: kk = k if not hasattr(g[kk], "shape"): continue if len(g[kk].shape) > 1: self._vector_fields[kk] = g[kk].shape[1] fields.append((ptype, str(kk))) f.close() if self.ds.gen_hsmls: fields.append(("PartType0", "smoothing_length")) return fields, {} ZeroMass = object() class IOHandlerGadgetBinary(IOHandlerSPH): _dataset_type = "gadget_binary" _vector_fields = { "Coordinates": 3, "Velocity": 3, "Velocities": 3, "MagneticField": 3, "FourMetalFractions": 4, "ElevenMetalMasses": 11, } # Particle types (Table 3 in GADGET-2 user guide) # # Blocks in the file: # HEAD # POS # VEL # ID # MASS (variable mass only) # U (gas only) # RHO (gas only) # HSML (gas only) # POT (only if enabled in makefile) # ACCE (only if enabled in makefile) # ENDT (only if enabled in makefile) # TSTP (only if enabled in makefile) _format = None def __init__(self, ds, *args, **kwargs): self._fields = ds._field_spec self._ptypes = ds._ptype_spec self.data_files = set() gformat, endianswap = ds._header.gadget_format # gadget format 1 original, 2 with block name self._format = gformat self._endian = endianswap super().__init__(ds, *args, **kwargs) @cached_property def var_mass(self) -> tuple[str, ...]: vm = [] for i, v in enumerate(self.ds["Massarr"]): if v == 0: vm.append(self._ptypes[i]) return tuple(vm) def _read_fluid_selection(self, chunks, selector, fields, size): raise NotImplementedError def _read_particle_coords(self, chunks, ptf): data_files = set() for chunk in chunks: for obj in chunk.objs: data_files.update(obj.data_files) for data_file in sorted(data_files, key=lambda x: (x.filename, x.start)): poff = data_file.field_offsets tp = data_file.total_particles f = open(data_file.filename, "rb") for ptype in ptf: if tp[ptype] == 0: # skip if there are no particles continue f.seek(poff[ptype, "Coordinates"], os.SEEK_SET) pos = self._read_field_from_file(f, tp[ptype], "Coordinates") if ptype == self.ds._sph_ptypes[0]: f.seek(poff[ptype, "SmoothingLength"], os.SEEK_SET) hsml = self._read_field_from_file(f, tp[ptype], "SmoothingLength") else: hsml = 0.0 yield ptype, (pos[:, 0], pos[:, 1], pos[:, 2]), hsml f.close() def _read_particle_data_file(self, data_file, ptf, selector=None): return_data = {} poff = data_file.field_offsets tp = data_file.total_particles f = open(data_file.filename, "rb") for ptype, field_list in sorted(ptf.items()): if tp[ptype] == 0: continue if selector is None or getattr(selector, "is_all_data", False): mask = slice(None, None, None) else: f.seek(poff[ptype, "Coordinates"], os.SEEK_SET) pos = self._read_field_from_file(f, tp[ptype], "Coordinates") if ptype == self.ds._sph_ptypes[0]: f.seek(poff[ptype, "SmoothingLength"], os.SEEK_SET) hsml = self._read_field_from_file(f, tp[ptype], "SmoothingLength") else: hsml = 0.0 mask = selector.select_points(pos[:, 0], pos[:, 1], pos[:, 2], hsml) del pos del 
hsml if mask is None: continue for field in field_list: if field == "Mass" and ptype not in self.var_mass: if getattr(selector, "is_all_data", False): size = data_file.total_particles[ptype] else: size = mask.sum() data = np.empty(size, dtype="float64") m = self.ds.parameters["Massarr"][self._ptypes.index(ptype)] data[:] = m else: f.seek(poff[ptype, field], os.SEEK_SET) data = self._read_field_from_file(f, tp[ptype], field) data = data[mask, ...] return_data[ptype, field] = data f.close() return return_data def _read_field_from_file(self, f, count, name): if count == 0: return if name == "ParticleIDs": dt = self._endian + self.ds._id_dtype else: dt = self._endian + self._float_type dt = np.dtype(dt) if name in self._vector_fields: count *= self._vector_fields[name] arr = np.fromfile(f, dtype=dt, count=count) # ensure data are in native endianness to avoid errors # when field data are passed to cython dt = dt.newbyteorder("N") arr = arr.astype(dt) if name in self._vector_fields: factor = self._vector_fields[name] arr = arr.reshape((count // factor, factor), order="C") return arr def _yield_coordinates(self, data_file, needed_ptype=None): self._float_type = data_file.ds._header.float_type self._field_size = np.dtype(self._float_type).itemsize dt = np.dtype(self._endian + self._float_type) dt_native = dt.newbyteorder("N") with open(data_file.filename, "rb") as f: # We add on an additionally 4 for the first record. f.seek(data_file._position_offset + 4) for ptype, count in data_file.total_particles.items(): if count == 0: continue if needed_ptype is not None and ptype != needed_ptype: continue # The first total_particles * 3 values are positions pp = np.fromfile(f, dtype=dt, count=count * 3).astype( dt_native, copy=False ) pp.shape = (count, 3) yield ptype, pp def _get_smoothing_length(self, data_file, position_dtype, position_shape): ret = self._get_field(data_file, "SmoothingLength", "Gas") if position_dtype is not None and ret.dtype != position_dtype: # Sometimes positions are stored in double precision # but smoothing lengths are stored in single precision. # In these cases upcast smoothing length to double precision # to avoid ValueErrors when we pass these arrays to Cython. ret = ret.astype(position_dtype) return ret def _get_field(self, data_file, field, ptype): poff = data_file.field_offsets tp = data_file.total_particles with open(data_file.filename, "rb") as f: f.seek(poff[ptype, field], os.SEEK_SET) pp = self._read_field_from_file(f, tp[ptype], field) return pp def _count_particles(self, data_file): si, ei = data_file.start, data_file.end pcount = np.array(data_file.header["Npart"]) if None not in (si, ei): np.clip(pcount - si, 0, ei - si, out=pcount) npart = {self._ptypes[i]: v for i, v in enumerate(pcount)} return npart # header is 256, but we have 4 at beginning and end for ints _field_size = 4 def _calculate_field_offsets( self, field_list, pcount, offset, df_start, file_size=None ): # field_list is (ftype, fname) but the blocks are ordered # (fname, ftype) in the file. 
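        # Gadget binary files are laid out as Fortran-style records, each
        # framed by 4-byte size markers. For format 2, the 20 bytes skipped
        # per field cover the named block record (4 + 8 + 4 bytes: a 4-char
        # id plus an i4 offset, see write_block in testing.py) and the opening
        # marker of the data record; format 1 has no named blocks, so only the
        # opening 4-byte marker is skipped.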
if self._format == 2: # Need to subtract offset due to extra header block pos = offset - SNAP_FORMAT_2_OFFSET else: pos = offset fs = self._field_size offsets = {} pcount = dict(zip(self._ptypes, pcount, strict=True)) for field in self._fields: if field == "ParticleIDs" and self.ds.long_ids: fs = 8 else: fs = 4 if not isinstance(field, str): field = field[0] if not any((ptype, field) in field_list for ptype in self._ptypes): continue if self._format == 2: pos += 20 # skip block header elif self._format == 1: pos += 4 else: raise RuntimeError(f"incorrect Gadget format {str(self._format)}!") any_ptypes = False for ptype in self._ptypes: if field == "Mass" and ptype not in self.var_mass: continue if (ptype, field) not in field_list: continue start_offset = df_start * fs if field in self._vector_fields: start_offset *= self._vector_fields[field] pos += start_offset offsets[ptype, field] = pos any_ptypes = True remain_offset = (pcount[ptype] - df_start) * fs if field in self._vector_fields: remain_offset *= self._vector_fields[field] pos += remain_offset pos += 4 if not any_ptypes: pos -= 8 if file_size is not None: if (file_size != pos) & (self._format == 1): # ignore the rest of format 2 diff = file_size - pos possible = [] for ptype, psize in sorted(pcount.items()): if psize == 0: continue if float(diff) / psize == int(float(diff) / psize): possible.append(ptype) mylog.warning( "Your Gadget-2 file may have extra " "columns or different precision! " "(%s diff => %s?)", diff, possible, ) return offsets def _identify_fields(self, domain): # We can just look at the particle counts. field_list = [] tp = domain.total_particles for i, ptype in enumerate(self._ptypes): count = tp[ptype] if count == 0: continue m = domain.header["Massarr"][i] for field in self._fields: if isinstance(field, tuple): field, req = field if req is ZeroMass: if m > 0.0: continue elif isinstance(req, tuple) and ptype in req: pass elif req != ptype: continue field_list.append((ptype, field)) return field_list, {} ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gadget/simulation_handling.py0000644000175100001770000005212714714401662022067 0ustar00runnerdockerimport glob import os import numpy as np from unyt import dimensions, unyt_array from unyt.unit_registry import UnitRegistry from yt.data_objects.time_series import DatasetSeries, SimulationTimeSeries from yt.funcs import only_on_root from yt.loaders import load from yt.utilities.cosmology import Cosmology from yt.utilities.exceptions import ( InvalidSimulationTimeSeries, MissingParameter, NoStoppingCondition, YTUnidentifiedDataType, ) from yt.utilities.logger import ytLogger as mylog from yt.utilities.parallel_tools.parallel_analysis_interface import parallel_objects class GadgetSimulation(SimulationTimeSeries): r""" Initialize an Gadget Simulation object. Upon creation, the parameter file is parsed and the time and redshift are calculated and stored in all_outputs. A time units dictionary is instantiated to allow for time outputs to be requested with physical time units. The get_time_series can be used to generate a DatasetSeries object. parameter_filename : str The simulation parameter file. find_outputs : bool If True, the OutputDir directory is searched for datasets. Time and redshift information are gathered by temporarily instantiating each dataset. This can be used when simulation data was created in a non-standard way, making it difficult to guess the corresponding time and redshift information. 
Default: False. Examples -------- >>> import yt >>> gs = yt.load_simulation("my_simulation.par", "Gadget") >>> gs.get_time_series() >>> for ds in gs: ... print(ds.current_time) """ def __init__(self, parameter_filename, find_outputs=False): self.simulation_type = "particle" self.dimensionality = 3 SimulationTimeSeries.__init__( self, parameter_filename, find_outputs=find_outputs ) def _set_units(self): self.unit_registry = UnitRegistry() self.time_unit = self.quan(1.0, "s") if self.cosmological_simulation: # Instantiate Cosmology object for units and time conversions. self.cosmology = Cosmology( hubble_constant=self.hubble_constant, omega_matter=self.omega_matter, omega_lambda=self.omega_lambda, unit_registry=self.unit_registry, ) if "h" in self.unit_registry: self.unit_registry.modify("h", self.hubble_constant) else: self.unit_registry.add( "h", self.hubble_constant, dimensions.dimensionless ) # Comoving lengths for my_unit in ["m", "pc", "AU"]: new_unit = f"{my_unit}cm" # technically not true, but should be ok self.unit_registry.add( new_unit, self.unit_registry.lut[my_unit][0], dimensions.length, f"\\rm{{{my_unit}}}/(1+z)", prefixable=True, ) self.length_unit = self.quan( self.unit_base["UnitLength_in_cm"], "cmcm / h", registry=self.unit_registry, ) self.mass_unit = self.quan( self.unit_base["UnitMass_in_g"], "g / h", registry=self.unit_registry ) self.box_size = self.box_size * self.length_unit self.domain_left_edge = self.domain_left_edge * self.length_unit self.domain_right_edge = self.domain_right_edge * self.length_unit self.unit_registry.add( "unitary", float(self.box_size.in_base()), self.length_unit.units.dimensions, ) else: # Read time from file for non-cosmological sim self.time_unit = self.quan( self.unit_base["UnitLength_in_cm"] / self.unit_base["UnitVelocity_in_cm_per_s"], "s", ) self.unit_registry.add("code_time", 1.0, dimensions.time) self.unit_registry.modify("code_time", self.time_unit) # Length self.length_unit = self.quan(self.unit_base["UnitLength_in_cm"], "cm") self.unit_registry.add("code_length", 1.0, dimensions.length) self.unit_registry.modify("code_length", self.length_unit) def get_time_series( self, initial_time=None, final_time=None, initial_redshift=None, final_redshift=None, times=None, redshifts=None, tolerance=None, parallel=True, setup_function=None, ): """ Instantiate a DatasetSeries object for a set of outputs. If no additional keywords given, a DatasetSeries object will be created with all potential datasets created by the simulation. Outputs can be gather by specifying a time or redshift range (or combination of time and redshift), with a specific list of times or redshifts), or by simply searching all subdirectories within the simulation directory. initial_time : tuple of type (float, str) The earliest time for outputs to be included. This should be given as the value and the string representation of the units. For example, (5.0, "Gyr"). If None, the initial time of the simulation is used. This can be used in combination with either final_time or final_redshift. Default: None. final_time : tuple of type (float, str) The latest time for outputs to be included. This should be given as the value and the string representation of the units. For example, (13.7, "Gyr"). If None, the final time of the simulation is used. This can be used in combination with either initial_time or initial_redshift. Default: None. times : tuple of type (float array, str) A list of times for which outputs will be found and the units of those values. 
For example, ([0, 1, 2, 3], "s"). Default: None. initial_redshift : float The earliest redshift for outputs to be included. If None, the initial redshift of the simulation is used. This can be used in combination with either final_time or final_redshift. Default: None. final_redshift : float The latest redshift for outputs to be included. If None, the final redshift of the simulation is used. This can be used in combination with either initial_time or initial_redshift. Default: None. redshifts : array_like A list of redshifts for which outputs will be found. Default: None. tolerance : float Used in combination with "times" or "redshifts" keywords, this is the tolerance within which outputs are accepted given the requested times or redshifts. If None, the nearest output is always taken. Default: None. parallel : bool/int If True, the generated DatasetSeries will divide the work such that a single processor works on each dataset. If an integer is supplied, the work will be divided into that number of jobs. Default: True. setup_function : callable, accepts a ds This function will be called whenever a dataset is loaded. Examples -------- >>> import yt >>> gs = yt.load_simulation("my_simulation.par", "Gadget") >>> gs.get_time_series(initial_redshift=10, final_time=(13.7, "Gyr")) >>> gs.get_time_series(redshifts=[3, 2, 1, 0]) >>> # after calling get_time_series >>> for ds in gs.piter(): ... p = ProjectionPlot(ds, "x", ("gas", "density")) ... p.save() >>> # An example using the setup_function keyword >>> def print_time(ds): ... print(ds.current_time) >>> gs.get_time_series(setup_function=print_time) >>> for ds in gs: ... SlicePlot(ds, "x", "Density").save() """ if ( initial_redshift is not None or final_redshift is not None ) and not self.cosmological_simulation: raise InvalidSimulationTimeSeries( "An initial or final redshift has been given for a " + "noncosmological simulation." ) my_all_outputs = self.all_outputs if not my_all_outputs: DatasetSeries.__init__( self, outputs=[], parallel=parallel, unit_base=self.unit_base ) mylog.info("0 outputs loaded into time series.") return # Apply selection criteria to the set. if times is not None: my_outputs = self._get_outputs_by_key( "time", times, tolerance=tolerance, outputs=my_all_outputs ) elif redshifts is not None: my_outputs = self._get_outputs_by_key( "redshift", redshifts, tolerance=tolerance, outputs=my_all_outputs ) else: if initial_time is not None: if isinstance(initial_time, float): initial_time = self.quan(initial_time, "code_time") elif isinstance(initial_time, tuple) and len(initial_time) == 2: initial_time = self.quan(*initial_time) elif not isinstance(initial_time, unyt_array): raise RuntimeError( "Error: initial_time must be given as a float or " + "tuple of (value, units)." ) elif initial_redshift is not None: my_initial_time = self.cosmology.t_from_z(initial_redshift) else: my_initial_time = self.initial_time if final_time is not None: if isinstance(final_time, float): final_time = self.quan(final_time, "code_time") elif isinstance(final_time, tuple) and len(final_time) == 2: final_time = self.quan(*final_time) elif not isinstance(final_time, unyt_array): raise RuntimeError( "Error: final_time must be given as a float or " + "tuple of (value, units)." 
) my_final_time = final_time.in_units("s") elif final_redshift is not None: my_final_time = self.cosmology.t_from_z(final_redshift) else: my_final_time = self.final_time my_initial_time.convert_to_units("s") my_final_time.convert_to_units("s") my_times = np.array([a["time"] for a in my_all_outputs]) my_indices = np.digitize([my_initial_time, my_final_time], my_times) if my_initial_time == my_times[my_indices[0] - 1]: my_indices[0] -= 1 my_outputs = my_all_outputs[my_indices[0] : my_indices[1]] init_outputs = [] for output in my_outputs: if os.path.exists(output["filename"]): init_outputs.append(output["filename"]) if len(init_outputs) == 0 and len(my_outputs) > 0: mylog.warning( "Could not find any datasets. " "Check the value of OutputDir in your parameter file." ) DatasetSeries.__init__( self, outputs=init_outputs, parallel=parallel, setup_function=setup_function, unit_base=self.unit_base, ) mylog.info("%d outputs loaded into time series.", len(init_outputs)) def _parse_parameter_file(self): """ Parses the parameter file and establishes the various dictionaries. """ self.unit_base = {} # Let's read the file lines = open(self.parameter_filename).readlines() comments = ["%", ";"] for line in (l.strip() for l in lines): for comment in comments: if comment in line: line = line[0 : line.find(comment)] if len(line) < 2: continue param, vals = (i.strip() for i in line.split(None, 1)) # First we try to decipher what type of value it is. vals = vals.split() # Special case approaching. if "(do" in vals: vals = vals[:1] if len(vals) == 0: pcast = str # Assume NULL output else: v = vals[0] # Figure out if it's castable to floating point: try: float(v) except ValueError: pcast = str else: if any("." in v or "e" in v for v in vals): pcast = float elif v == "inf": pcast = str else: pcast = int # Now we figure out what to do with it. if param.startswith("Unit"): self.unit_base[param] = float(vals[0]) if len(vals) == 0: vals = "" elif len(vals) == 1: vals = pcast(vals[0]) else: vals = np.array([pcast(i) for i in vals]) self.parameters[param] = vals # Domain dimensions for Gadget datasets are always 2x2x2 for octree self.domain_dimensions = np.array([2, 2, 2]) if self.parameters["ComovingIntegrationOn"]: cosmo_attr = { "box_size": "BoxSize", "omega_lambda": "OmegaLambda", "omega_matter": "Omega0", "hubble_constant": "HubbleParam", } self.initial_redshift = 1.0 / self.parameters["TimeBegin"] - 1.0 self.final_redshift = 1.0 / self.parameters["TimeMax"] - 1.0 self.cosmological_simulation = 1 for a, v in cosmo_attr.items(): if v not in self.parameters: raise MissingParameter(self.parameter_filename, v) setattr(self, a, self.parameters[v]) self.domain_left_edge = np.array([0.0, 0.0, 0.0]) self.domain_right_edge = ( np.array([1.0, 1.0, 1.0]) * self.parameters["BoxSize"] ) else: self.cosmological_simulation = 0 self.omega_lambda = self.omega_matter = self.hubble_constant = 0.0 def _find_data_dir(self): """ Find proper location for datasets. First look where parameter file points, but if this doesn't exist then default to the current directory. """ if self.parameters["OutputDir"].startswith("/"): data_dir = self.parameters["OutputDir"] else: data_dir = os.path.join(self.directory, self.parameters["OutputDir"]) if not os.path.exists(data_dir): mylog.info( "OutputDir not found at %s, instead using %s.", data_dir, self.directory ) data_dir = self.directory self.data_dir = data_dir def _snapshot_format(self, index=None): """ The snapshot filename for a given index. Modify this for different naming conventions. 
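        For example, with SnapshotFileBase = "snapshot", NumFilesPerSnapshot
        greater than 1, and SnapFormat = 3, index 5 maps to
        "snapshot_005.0.hdf5"; if index is None, a glob pattern
        ("snapshot_*.0.hdf5") is returned for use by _find_outputs.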
""" if self.parameters["NumFilesPerSnapshot"] > 1: suffix = ".0" else: suffix = "" if self.parameters["SnapFormat"] == 3: suffix += ".hdf5" if index is None: count = "*" else: count = "%03d" % index filename = f"{self.parameters['SnapshotFileBase']}_{count}{suffix}" return os.path.join(self.data_dir, filename) def _get_all_outputs(self, *, find_outputs=False): """ Get all potential datasets and combine into a time-sorted list. """ # Find the data directory where the outputs are self._find_data_dir() # Create the set of outputs from which further selection will be done. if find_outputs: self._find_outputs() else: if self.parameters["OutputListOn"]: a_values = [ float(a) for a in open( os.path.join( self.data_dir, self.parameters["OutputListFilename"] ), ).readlines() ] else: a_values = [float(self.parameters["TimeOfFirstSnapshot"])] time_max = float(self.parameters["TimeMax"]) while a_values[-1] < time_max: if self.cosmological_simulation: a_values.append( a_values[-1] * self.parameters["TimeBetSnapshot"] ) else: a_values.append( a_values[-1] + self.parameters["TimeBetSnapshot"] ) if a_values[-1] > time_max: a_values[-1] = time_max if self.cosmological_simulation: self.all_outputs = [ {"filename": self._snapshot_format(i), "redshift": (1.0 / a - 1)} for i, a in enumerate(a_values) ] # Calculate times for redshift outputs. for output in self.all_outputs: output["time"] = self.cosmology.t_from_z(output["redshift"]) else: self.all_outputs = [ { "filename": self._snapshot_format(i), "time": self.quan(a, "code_time"), } for i, a in enumerate(a_values) ] self.all_outputs.sort(key=lambda obj: obj["time"].to_ndarray()) def _calculate_simulation_bounds(self): """ Figure out the starting and stopping time and redshift for the simulation. """ # Convert initial/final redshifts to times. if self.cosmological_simulation: self.initial_time = self.cosmology.t_from_z(self.initial_redshift) self.initial_time.units.registry = self.unit_registry self.final_time = self.cosmology.t_from_z(self.final_redshift) self.final_time.units.registry = self.unit_registry # If not a cosmology simulation, figure out the stopping criteria. else: if "TimeBegin" in self.parameters: self.initial_time = self.quan(self.parameters["TimeBegin"], "code_time") else: self.initial_time = self.quan(0.0, "code_time") if "TimeMax" in self.parameters: self.final_time = self.quan(self.parameters["TimeMax"], "code_time") else: self.final_time = None if "TimeMax" not in self.parameters: raise NoStoppingCondition(self.parameter_filename) def _find_outputs(self): """ Search for directories matching the data dump keywords. If found, get dataset times py opening the ds. """ potential_outputs = glob.glob(self._snapshot_format()) self.all_outputs = self._check_for_outputs(potential_outputs) self.all_outputs.sort(key=lambda obj: obj["time"]) only_on_root(mylog.info, "Located %d total outputs.", len(self.all_outputs)) # manually set final time and redshift with last output if self.all_outputs: self.final_time = self.all_outputs[-1]["time"] if self.cosmological_simulation: self.final_redshift = self.all_outputs[-1]["redshift"] def _check_for_outputs(self, potential_outputs): r""" Check a list of files to see if they are valid datasets. 
""" only_on_root( mylog.info, "Checking %d potential outputs.", len(potential_outputs) ) my_outputs = {} for my_storage, output in parallel_objects( potential_outputs, storage=my_outputs ): try: ds = load(output) except (FileNotFoundError, YTUnidentifiedDataType): mylog.error("Failed to load %s", output) continue my_storage.result = { "filename": output, "time": ds.current_time.in_units("s"), } if ds.cosmological_simulation: my_storage.result["redshift"] = ds.current_redshift my_outputs = [ my_output for my_output in my_outputs.values() if my_output is not None ] return my_outputs def _write_cosmology_outputs(self, filename, outputs, start_index, decimals=3): r""" Write cosmology output parameters for a cosmology splice. """ mylog.info("Writing redshift output list to %s.", filename) f = open(filename, "w") for output in outputs: f.write(f"{1.0 / (1.0 + output['redshift']):f}\n") f.close() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gadget/testing.py0000644000175100001770000000707314714401662017514 0ustar00runnerdockerimport numpy as np from .data_structures import GadgetBinaryHeader, GadgetDataset from .definitions import gadget_field_specs, gadget_ptype_specs from .io import IOHandlerGadgetBinary vector_fields = dict(IOHandlerGadgetBinary._vector_fields) block_ids = { "Coordinates": "POS", "Velocities": "VEL", "ParticleIDs": "ID", "Mass": "MASS", "InternalEnergy": "U", "Density": "RHO", "SmoothingLength": "HSML", } def write_record(fp, data, endian): dtype = endian + "i4" size = np.array(data.nbytes, dtype=dtype) fp.write(size.tobytes()) fp.write(data.tobytes()) fp.write(size.tobytes()) def write_block(fp, data, endian, fmt, block_id): assert fmt in [1, 2] block_id = "%-4s" % block_id if fmt == 2: block_id_dtype = np.dtype([("id", "S", 4), ("offset", endian + "i4")]) block_id_data = np.zeros(1, dtype=block_id_dtype) block_id_data["id"] = block_id block_id_data["offset"] = data.nbytes + 8 write_record(fp, block_id_data, endian) write_record(fp, data, endian) def fake_gadget_binary( filename="fake_gadget_binary", npart=(100, 100, 100, 0, 100, 0), header_spec="default", field_spec="default", ptype_spec="default", endian="", fmt=2, ): """Generate a fake Gadget binary snapshot.""" header = GadgetBinaryHeader(filename, header_spec) field_spec = GadgetDataset._setup_binary_spec(field_spec, gadget_field_specs) ptype_spec = GadgetDataset._setup_binary_spec(ptype_spec, gadget_ptype_specs) with open(filename, "wb") as fp: # Generate and write header blocks for i_header, header_spec in enumerate(header.spec): specs = [] for name, dim, dtype in header_spec: # workaround a FutureWarning in numpy where np.dtype(name, type, 1) # will change meaning in a future version so name_dtype = [name, endian + dtype, dim] if dim == 1: name_dtype.pop() specs.append(tuple(name_dtype)) header_dtype = np.dtype(specs) header = np.zeros(1, dtype=header_dtype) if i_header == 0: header["Npart"] = npart header["Nall"] = npart header["NumFiles"] = 1 header["BoxSize"] = 1 header["HubbleParam"] = 1 write_block(fp, header, endian, fmt, "HEAD") npart = dict(zip(ptype_spec, npart, strict=True)) for fs in field_spec: # Parse field name and particle type if isinstance(fs, str): field = fs ptype = ptype_spec else: field, ptype = fs if isinstance(ptype, str): ptype = (ptype,) # Determine field dimension if field in vector_fields: dim = vector_fields[field] else: dim = 1 # Determine dtype (in numpy convention) if field == "ParticleIDs": dtype = "u4" else: dtype = 
"f4" dtype = endian + dtype # Generate and write field block data = [] rng = np.random.default_rng() for pt in ptype: data += [rng.random((npart[pt], dim))] data = np.concatenate(data).astype(dtype) if field in block_ids: block_id = block_ids[field] else: block_id = "" write_block(fp, data, endian, fmt, block_id) return filename ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3231523 yt-4.4.0/yt/frontends/gadget/tests/0000755000175100001770000000000014714401715016617 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gadget/tests/__init__.py0000644000175100001770000000000014714401662020717 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gadget/tests/test_gadget_pytest.py0000644000175100001770000000237314714401662023101 0ustar00runnerdockerimport numpy as np import yt from yt.testing import requires_file, requires_module from yt.utilities.on_demand_imports import _h5py as h5py @requires_file("snapshot_033/snap_033.0.hdf5") @requires_module("h5py") def test_gadget_header_array_reduction(tmp_path): # first get a real header ds = yt.load("snapshot_033/snap_033.0.hdf5") hvals = ds._get_hvals() hvals_orig = hvals.copy() # wrap some of the scalar values in nested arrays hvals["Redshift"] = np.array([hvals["Redshift"]]) hvals["Omega0"] = np.array([[hvals["Omega0"]]]) # drop those header values into a fake header-only file tmp_snpshot_dir = tmp_path / "snapshot_033" tmp_snpshot_dir.mkdir() tmp_header_only_file = str(tmp_snpshot_dir / "fake_gadget_header.hdf5") with h5py.File(tmp_header_only_file, mode="w") as f: headergrp = f.create_group("Header") for field, val in hvals.items(): headergrp.attrs[field] = val # trick the dataset into using the header file and make sure the # arrays are reduced ds._input_filename = tmp_header_only_file hvals = ds._get_hvals() for attr in ("Redshift", "Omega0"): assert hvals[attr] == hvals_orig[attr] assert isinstance(hvals[attr], np.ndarray) is False ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gadget/tests/test_outputs.py0000644000175100001770000001321114714401662021752 0ustar00runnerdockerimport os import shutil import tempfile from collections import OrderedDict from itertools import product import yt from yt.frontends.gadget.api import GadgetDataset, GadgetHDF5Dataset from yt.frontends.gadget.testing import fake_gadget_binary from yt.testing import ( ParticleSelectionComparison, assert_allclose_units, requires_file, requires_module, ) from yt.utilities.answer_testing.framework import data_dir_load, requires_ds, sph_answer isothermal_h5 = "IsothermalCollapse/snap_505.hdf5" isothermal_bin = "IsothermalCollapse/snap_505" BE_Gadget = "BigEndianGadgetBinary/BigEndianGadgetBinary" LE_SnapFormat2 = "Gadget3-snap-format2/Gadget3-snap-format2" keplerian_ring = "KeplerianRing/keplerian_ring_0020.hdf5" snap_33 = "snapshot_033/snap_033.0.hdf5" snap_33_dir = "snapshot_033/" magneticum = "MagneticumCluster/snap_132" magneticum_camels = "magneticum_camels/snap_small_086.hdf5" # This maps from field names to weight field names to use for projections iso_fields = OrderedDict( [ (("gas", "density"), None), (("gas", "temperature"), None), (("gas", "temperature"), ("gas", "density")), (("gas", "velocity_magnitude"), None), ] ) iso_kwargs = {"bounding_box": [[-3, 3], [-3, 3], [-3, 3]]} 
@requires_module("h5py") def test_gadget_binary(): header_specs = ["default", "default+pad32", ["default", "pad32"]] curdir = os.getcwd() tmpdir = tempfile.mkdtemp() for header_spec, endian, fmt in product(header_specs, "<>", [1, 2]): try: fake_snap = fake_gadget_binary( header_spec=header_spec, endian=endian, fmt=fmt ) except FileNotFoundError: # sometimes this happens for mysterious reasons pass ds = yt.load(fake_snap, header_spec=header_spec) assert isinstance(ds, GadgetDataset) ds.field_list try: os.remove(fake_snap) except FileNotFoundError: # sometimes this happens for mysterious reasons pass os.chdir(curdir) shutil.rmtree(tmpdir) @requires_module("h5py") @requires_file(isothermal_h5) def test_gadget_hdf5(): assert isinstance( data_dir_load(isothermal_h5, kwargs=iso_kwargs), GadgetHDF5Dataset ) @requires_file(keplerian_ring) def test_non_cosmo_dataset(): """ Non-cosmological datasets may not have the cosmological parameters in the Header. The code should fall back gracefully when they are not present, with the Redshift set to 0. """ data = data_dir_load(keplerian_ring) assert data.current_redshift == 0.0 assert data.cosmological_simulation == 0 @requires_ds(isothermal_h5) def test_iso_collapse(): ds = data_dir_load(isothermal_h5, kwargs=iso_kwargs) for test in sph_answer(ds, "snap_505", 2**17, iso_fields): test_iso_collapse.__name__ = test.description yield test @requires_ds(LE_SnapFormat2) def test_pid_uniqueness(): """ ParticleIDs should be unique. """ ds = data_dir_load(LE_SnapFormat2) ad = ds.all_data() pid = ad["all", "ParticleIDs"] assert len(pid) == len(set(pid.v)) @requires_file(snap_33) @requires_file(snap_33_dir) def test_multifile_read(): """ Tests to make sure multi-file gadget snapshot can be loaded by passing '.0' file or by passing the directory containing the multi-file snapshot. """ assert isinstance(data_dir_load(snap_33), GadgetDataset) assert isinstance(data_dir_load(snap_33_dir), GadgetDataset) @requires_file(snap_33) def test_particle_subselection(): # This checks that we correctly subselect from a dataset, first by making # sure we get all the particles, then by comparing manual selections against # them. 
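    # ParticleSelectionComparison (from yt.testing) compares yt's spatial
    # selections against brute-force masks computed directly from the particle
    # positions; run_defaults() exercises a standard set of selectors over
    # the dataset.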
ds = data_dir_load(snap_33) psc = ParticleSelectionComparison(ds) psc.run_defaults() @requires_ds(BE_Gadget) def test_bigendian_field_access(): ds = data_dir_load(BE_Gadget) data = ds.all_data() data["Halo", "Velocities"] mag_fields = OrderedDict( [ (("gas", "density"), None), (("gas", "temperature"), None), (("gas", "temperature"), ("gas", "density")), (("gas", "velocity_magnitude"), None), (("gas", "H_fraction"), None), (("gas", "C_fraction"), None), ] ) mag_kwargs = { "long_ids": True, "field_spec": "magneticum_box2_hr", } @requires_ds(magneticum) def test_magneticum(): ds = data_dir_load(magneticum, kwargs=mag_kwargs) for test in sph_answer(ds, "snap_132", 3718111, mag_fields, center="max"): test_magneticum.__name__ = test.description yield test camels_kwargs = { "bounding_box": [[8126.0, 22126.0], [5140.0, 19140.0], [5500.0, 19500.0]] } @requires_module("h5py") @requires_file(magneticum_camels) def test_magneticum_camels(): # In this test, we're only checking the metal fields since this # is a dataset with special metal handling ds = data_dir_load(magneticum_camels, kwargs=camels_kwargs) dd = ds.all_data() elems = [ "He", "C", "Ca", "O", "N", "Ne", "Mg", "S", "Si", "Fe", "Na", "Al", "Ar", "Ni", "Ej", ] metl = 0.0 heavy_mass = 0.0 for i, elem in enumerate(elems): assert_allclose_units( dd["gas", f"{elem}_mass"], dd["PartType0", f"MetalMasses_{i:02d}"] ) heavy_mass += dd["PartType0", f"MetalMasses_{i:02d}"] if i > 0: metl += dd["PartType0", f"MetalMasses_{i:02d}"] / dd["PartType0", "Masses"] assert_allclose_units(dd["gas", "metallicity"], metl) assert_allclose_units(dd["gas", "H_mass"], dd["PartType0", "Masses"] - heavy_mass) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3231523 yt-4.4.0/yt/frontends/gadget_fof/0000755000175100001770000000000014714401715016307 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gadget_fof/__init__.py0000644000175100001770000000005214714401662020416 0ustar00runnerdocker""" API for HaloCatalog frontend. """ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gadget_fof/api.py0000644000175100001770000000052614714401662017436 0ustar00runnerdockerfrom . 
import tests from .data_structures import ( GadgetFOFDataset, GadgetFOFHaloContainer, GadgetFOFHaloDataset, GadgetFOFHaloParticleIndex, GadgetFOFHDF5File, GadgetFOFParticleIndex, ) from .fields import GadgetFOFFieldInfo, GadgetFOFHaloFieldInfo from .io import IOHandlerGadgetFOFHaloHDF5, IOHandlerGadgetFOFHDF5 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gadget_fof/data_structures.py0000644000175100001770000005773514714401662022117 0ustar00runnerdockerimport os import weakref from collections import defaultdict from functools import cached_property, partial import numpy as np from yt.data_objects.selection_objects.data_selection_objects import ( YTSelectionContainer, ) from yt.data_objects.static_output import ParticleDataset from yt.frontends.gadget.data_structures import _fix_unit_ordering from yt.frontends.gadget_fof.fields import GadgetFOFFieldInfo, GadgetFOFHaloFieldInfo from yt.frontends.halo_catalog.data_structures import HaloCatalogFile, HaloDataset from yt.funcs import only_on_root, setdefaultattr from yt.geometry.particle_geometry_handler import ParticleIndex from yt.utilities.cosmology import Cosmology from yt.utilities.logger import ytLogger as mylog from yt.utilities.on_demand_imports import _h5py as h5py class GadgetFOFParticleIndex(ParticleIndex): def _calculate_particle_count(self): """ Calculate the total number of each type of particle. """ self.particle_count = { ptype: sum(d.total_particles[ptype] for d in self.data_files) for ptype in self.ds.particle_types_raw } def _calculate_particle_index_starts(self): # Halo indices are not saved in the file, so we must count by hand. # File 0 has halos 0 to N_0 - 1, file 1 has halos N_0 to N_0 + N_1 - 1, etc. particle_count = defaultdict(int) offset_count = 0 for data_file in self.data_files: data_file.index_start = { ptype: particle_count[ptype] for ptype in data_file.total_particles } data_file.offset_start = offset_count for ptype in data_file.total_particles: particle_count[ptype] += data_file.total_particles[ptype] offset_count += data_file.total_offset self._halo_index_start = { ptype: np.array( [data_file.index_start[ptype] for data_file in self.data_files] ) for ptype in self.ds.particle_types_raw } def _calculate_file_offset_map(self): # After the FOF is performed, a load-balancing step redistributes halos # and then writes more fields. Here, for each file, we create a list of # files which contain the rest of the redistributed particles. 
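        # Cumulative sums plus np.digitize locate, for each file's groups,
        # the span of files holding their (redistributed) member data. A
        # sketch of the digitize idiom with made-up cumulative counts:
        #   >>> import numpy as np
        #   >>> ends = np.array([10, 25, 40])
        #   >>> int(np.digitize([12], ends)[0])   # ID 12 falls in file 1
        #   1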
ifof = np.array( [data_file.total_particles["Group"] for data_file in self.data_files] ) isub = np.array([data_file.total_offset for data_file in self.data_files]) subend = isub.cumsum() fofend = ifof.cumsum() istart = np.digitize(fofend - ifof, subend - isub) - 1 iend = np.clip(np.digitize(fofend, subend), 0, ifof.size - 2) for i, data_file in enumerate(self.data_files): data_file.offset_files = self.data_files[istart[i] : iend[i] + 1] def _detect_output_fields(self): field_list = [] units = {} found_fields = { ptype: False for ptype, pnum in self.particle_count.items() if pnum > 0 } for data_file in self.data_files: fl, _units = self.io._identify_fields(data_file) units.update(_units) field_list.extend([f for f in fl if f not in field_list]) for ptype in found_fields: found_fields[ptype] |= data_file.total_particles[ptype] if all(found_fields.values()): break self.field_list = field_list ds = self.dataset ds.particle_types = tuple({pt for pt, ds in field_list}) ds.field_units.update(units) ds.particle_types_raw = ds.particle_types def _setup_filenames(self): template = self.ds.filename_template ndoms = self.ds.file_count cls = self.ds._file_class self.data_files = [ cls(self.ds, self.io, template % {"num": i}, i, frange=None) for i in range(ndoms) ] def _setup_data_io(self): super()._setup_data_io() self._calculate_particle_count() self._calculate_particle_index_starts() self._calculate_file_offset_map() class GadgetFOFHDF5File(HaloCatalogFile): def __init__(self, ds, io, filename, file_id, frange): with h5py.File(filename, mode="r") as f: self.header = {str(field): val for field, val in f["Header"].attrs.items()} self.group_length_sum = ( f["Group/GroupLen"][()].sum() if "Group/GroupLen" in f else 0 ) self.group_subs_sum = ( f["Group/GroupNsubs"][()].sum() if "Group/GroupNsubs" in f else 0 ) self.total_ids = self.header["Nids_ThisFile"] self.total_offset = 0 super().__init__(ds, io, filename, file_id, frange) def _read_particle_positions(self, ptype, f=None): """ Read all particle positions in this file. """ if f is None: close = True f = h5py.File(self.filename, mode="r") else: close = False pos = f[ptype][f"{ptype}Pos"][()].astype("float64") if close: f.close() return pos class GadgetFOFDataset(ParticleDataset): _load_requirements = ["h5py"] _index_class = GadgetFOFParticleIndex _file_class = GadgetFOFHDF5File _field_info_class = GadgetFOFFieldInfo def __init__( self, filename, dataset_type="gadget_fof_hdf5", index_order=None, index_filename=None, unit_base=None, units_override=None, unit_system="cgs", ): if unit_base is not None and "UnitLength_in_cm" in unit_base: # We assume this is comoving, because in the absence of comoving # integration the redshift will be zero. unit_base["cmcm"] = 1.0 / unit_base["UnitLength_in_cm"] self._unit_base = unit_base if units_override is not None: raise RuntimeError( "units_override is not supported for GadgetFOFDataset. " + "Use unit_base instead." 
) super().__init__( filename, dataset_type, units_override=units_override, index_order=index_order, index_filename=index_filename, unit_system=unit_system, ) def add_field(self, *args, **kwargs): super().add_field(*args, **kwargs) self._halos_ds.add_field(*args, **kwargs) @property def halos_field_list(self): return self._halos_ds.field_list @property def halos_derived_field_list(self): return self._halos_ds.derived_field_list @cached_property def _halos_ds(self): return GadgetFOFHaloDataset(self) def _setup_classes(self): super()._setup_classes() self.halo = partial(GadgetFOFHaloContainer, ds=self._halos_ds) def _parse_parameter_file(self): with h5py.File(self.parameter_filename, mode="r") as f: self.parameters = { str(field): val for field, val in f["Header"].attrs.items() } self.dimensionality = 3 self.refine_by = 2 # Set standard values self.domain_left_edge = np.zeros(3, "float64") self.domain_right_edge = np.ones(3, "float64") * self.parameters["BoxSize"] self.domain_dimensions = np.ones(3, "int32") self.cosmological_simulation = 1 self._periodicity = (True, True, True) self.current_redshift = self.parameters["Redshift"] self.omega_lambda = self.parameters["OmegaLambda"] self.omega_matter = self.parameters["Omega0"] self.hubble_constant = self.parameters["HubbleParam"] cosmology = Cosmology( hubble_constant=self.hubble_constant, omega_matter=self.omega_matter, omega_lambda=self.omega_lambda, ) self.current_time = cosmology.t_from_z(self.current_redshift) prefix = os.path.abspath( os.path.join( os.path.dirname(self.parameter_filename), os.path.basename(self.parameter_filename).split(".", 1)[0], ) ) suffix = self.parameter_filename.rsplit(".", 1)[-1] self.filename_template = f"{prefix}.%(num)i.{suffix}" self.file_count = self.parameters["NumFiles"] self.particle_types = ("Group", "Subhalo") self.particle_types_raw = ("Group", "Subhalo") def _set_code_unit_attributes(self): # Set a sane default for cosmological simulations. if self._unit_base is None and self.cosmological_simulation == 1: only_on_root(mylog.info, "Assuming length units are in Mpc/h (comoving)") self._unit_base = {"length": (1.0, "Mpccm/h")} # The other same defaults we will use from the standard Gadget # defaults. unit_base = self._unit_base or {} if "length" in unit_base: length_unit = unit_base["length"] elif "UnitLength_in_cm" in unit_base: if self.cosmological_simulation == 0: length_unit = (unit_base["UnitLength_in_cm"], "cm") else: length_unit = (unit_base["UnitLength_in_cm"], "cmcm/h") else: raise RuntimeError length_unit = _fix_unit_ordering(length_unit) setdefaultattr(self, "length_unit", self.quan(length_unit[0], length_unit[1])) if "velocity" in unit_base: velocity_unit = unit_base["velocity"] elif "UnitVelocity_in_cm_per_s" in unit_base: velocity_unit = (unit_base["UnitVelocity_in_cm_per_s"], "cm/s") else: if self.cosmological_simulation == 0: velocity_unit = (1e5, "cm/s") else: velocity_unit = (1e5, "cm/s * sqrt(a)") velocity_unit = _fix_unit_ordering(velocity_unit) setdefaultattr( self, "velocity_unit", self.quan(velocity_unit[0], velocity_unit[1]) ) # We set hubble_constant = 1.0 for non-cosmology, so this is safe. # Default to 1e10 Msun/h if mass is not specified. 
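        # With no unit_base supplied, the defaults chosen in this method for
        # a cosmological catalog work out to:
        #   length   -> 1.0 Mpccm/h
        #   velocity -> 1e5 cm/s * sqrt(a)
        #   mass     -> 1e10 Msun/h
        #   time     -> length_unit / velocity_unit, expressed in yr/h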
if "mass" in unit_base: mass_unit = unit_base["mass"] elif "UnitMass_in_g" in unit_base: if self.cosmological_simulation == 0: mass_unit = (unit_base["UnitMass_in_g"], "g") else: mass_unit = (unit_base["UnitMass_in_g"], "g/h") else: # Sane default mass_unit = (1.0, "1e10*Msun/h") mass_unit = _fix_unit_ordering(mass_unit) setdefaultattr(self, "mass_unit", self.quan(mass_unit[0], mass_unit[1])) if "time" in unit_base: time_unit = unit_base["time"] elif "UnitTime_in_s" in unit_base: time_unit = (unit_base["UnitTime_in_s"], "s") else: tu = (self.length_unit / self.velocity_unit).to("yr/h") time_unit = (tu.d, tu.units) setdefaultattr(self, "time_unit", self.quan(time_unit[0], time_unit[1])) def __str__(self): return self.basename.split(".", 1)[0] @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if cls._missing_load_requirements(): return False need_groups = ["Group", "Header", "Subhalo"] veto_groups = ["FOF"] valid = True try: fh = h5py.File(filename, mode="r") valid = all(ng in fh["/"] for ng in need_groups) and not any( vg in fh["/"] for vg in veto_groups ) fh.close() except Exception: valid = False pass return valid class GadgetFOFHaloParticleIndex(GadgetFOFParticleIndex): def __init__(self, ds, dataset_type): self.real_ds = weakref.proxy(ds.real_ds) super().__init__(ds, dataset_type) def _create_halo_id_table(self): """ Create a list of halo start ids so we know which file contains particles for a given halo. Note, the halo ids are distributed over all files and so the ids for a given halo are likely stored in a different file than the halo itself. """ self._halo_id_number = np.array( [data_file.total_ids for data_file in self.data_files] ) self._halo_id_end = self._halo_id_number.cumsum() self._halo_id_start = self._halo_id_end - self._halo_id_number self._group_length_sum = np.array( [data_file.group_length_sum for data_file in self.data_files] ) def _detect_output_fields(self): field_list = [] scalar_field_list = [] units = {} found_fields = { ptype: False for ptype, pnum in self.particle_count.items() if pnum > 0 } has_ids = False for data_file in self.data_files: fl, sl, idl, _units = self.io._identify_fields(data_file) units.update(_units) field_list.extend([f for f in fl if f not in field_list]) scalar_field_list.extend([f for f in sl if f not in scalar_field_list]) for ptype in found_fields: found_fields[ptype] |= data_file.total_particles[ptype] has_ids |= len(idl) > 0 if all(found_fields.values()) and has_ids: break self.field_list = field_list self.scalar_field_list = scalar_field_list ds = self.dataset ds.scalar_field_list = scalar_field_list ds.particle_types = tuple({pt for pt, ds in field_list}) ds.field_units.update(units) ds.particle_types_raw = ds.particle_types def _identify_base_chunk(self, dobj): pass def _read_particle_fields(self, fields, dobj, chunk=None): if len(fields) == 0: return {}, [] fields_to_read, fields_to_generate = self._split_fields(fields) if len(fields_to_read) == 0: return {}, fields_to_generate fields_to_return = self.io._read_particle_selection(dobj, fields_to_read) return fields_to_return, fields_to_generate def _get_halo_file_indices(self, ptype, identifiers): return np.digitize(identifiers, self._halo_index_start[ptype], right=False) - 1 def _get_halo_scalar_index(self, ptype, identifier): i_scalar = self._get_halo_file_indices(ptype, [identifier])[0] scalar_index = identifier - self._halo_index_start[ptype][i_scalar] return scalar_index def _get_halo_values(self, ptype, identifiers, fields, f=None): """ Get field values 
for halos. IDs are likely to be sequential (or at least monotonic), but not necessarily all within the same file. This does not do much to minimize file i/o, but with halos randomly distributed across files, there's not much more we can do. """ # if a file is already open, don't open it again filename = None if f is None else f.filename data = defaultdict(lambda: np.empty(identifiers.size)) i_scalars = self._get_halo_file_indices(ptype, identifiers) for i_scalar in np.unique(i_scalars): target = i_scalars == i_scalar scalar_indices = identifiers - self._halo_index_start[ptype][i_scalar] # only open file if it's not already open my_f = ( f if self.data_files[i_scalar].filename == filename else h5py.File(self.data_files[i_scalar].filename, mode="r") ) for field in fields: data[field][target] = my_f[os.path.join(ptype, field)][()][ scalar_indices[target] ] if self.data_files[i_scalar].filename != filename: my_f.close() return data def _setup_data_io(self): super()._setup_data_io() self._create_halo_id_table() class GadgetFOFHaloDataset(HaloDataset): _index_class = GadgetFOFHaloParticleIndex _file_class = GadgetFOFHDF5File _field_info_class = GadgetFOFHaloFieldInfo def __init__(self, ds, dataset_type="gadget_fof_halo_hdf5"): super().__init__(ds, dataset_type) @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: # This class is not meant to be instantiated by yt.load() return False class GadgetFOFHaloContainer(YTSelectionContainer): """ Create a data container to get member particles and individual values from halos and subhalos. Halo mass, position, and velocity are set as attributes. Halo IDs are accessible through the field, "member_ids". Other fields that are one value per halo are accessible as normal. The field list for halo objects can be seen in `ds.halos_field_list`. Parameters ---------- ptype : string The type of halo, either "Group" for the main halo or "Subhalo" for subhalos. particle_identifier : int or tuple of ints The halo or subhalo id. If requesting a subhalo, the id can also be given as a tuple of the main halo id and subgroup id, such as (1, 4) for subgroup 4 of halo 1. Attributes ---------- particle_identifier : int The id of the halo or subhalo. group_identifier : int For subhalos, the id of the enclosing halo. subgroup_identifier : int For subhalos, the relative id of the subhalo within the enclosing halo. particle_number : int Number of particles in the halo. mass : float Halo mass. position : array of floats Halo position. velocity : array of floats Halo velocity. Note ---- Relevant Fields: * particle_number - number of particles * subhalo_number - number of subhalos * group_identifier - id of parent group for subhalos Examples -------- >>> import yt >>> ds = yt.load("gadget_halos/data/groups_298/fof_subhalo_tab_298.0.hdf5") >>> halo = ds.halo("Group", 0) >>> print(halo.mass) 13256.5517578 code_mass >>> print(halo.position) [ 16.18603706 6.95965052 12.52694607] code_length >>> print(halo.velocity) [ 6943694.22793569 -762788.90647454 -794749.63819757] cm/s >>> print(halo["Group_R_Crit200"]) [ 0.79668683] code_length >>> # particle ids for this halo >>> print(halo["member_ids"]) [ 723631. 690744. 854212. ..., 608589. 905551. 1147449.] dimensionless >>> # get the first subhalo of this halo >>> subhalo = ds.halo("Subhalo", (0, 0)) >>> print(subhalo["member_ids"]) [ 723631. 690744. 854212. ..., 808362. 956359. 1248821.] 
dimensionless """ _type_name = "halo" _con_args = ("ptype", "particle_identifier") _spatial = False # Do not register it to prevent .halo from being attached to all datasets _skip_add = True def __init__(self, ptype, particle_identifier, ds=None): if ptype not in ds.particle_types_raw: raise RuntimeError( f'Possible halo types are {ds.particle_types_raw}, supplied "{ptype}".' ) self.ptype = ptype self._current_particle_type = ptype super().__init__(ds, {}) if ptype == "Subhalo" and isinstance(particle_identifier, tuple): self.group_identifier, self.subgroup_identifier = particle_identifier my_data = self.index._get_halo_values( "Group", np.array([self.group_identifier]), ["GroupFirstSub"] ) self.particle_identifier = np.int64( my_data["GroupFirstSub"][0] + self.subgroup_identifier ) else: self.particle_identifier = particle_identifier if self.particle_identifier >= self.index.particle_count[ptype]: raise RuntimeError( "%s %d requested, but only %d %s objects exist." % (ptype, particle_identifier, self.index.particle_count[ptype], ptype) ) # Find the file that has the scalar values for this halo. i_scalar = self.index._get_halo_file_indices(ptype, [self.particle_identifier])[ 0 ] self.scalar_data_file = self.index.data_files[i_scalar] # index within halo arrays that corresponds to this halo self.scalar_index = self.index._get_halo_scalar_index( ptype, self.particle_identifier ) halo_fields = [f"{ptype}Len"] if ptype == "Subhalo": halo_fields.append("SubhaloGrNr") my_data = self.index._get_halo_values( ptype, np.array([self.particle_identifier]), halo_fields ) self.particle_number = np.int64(my_data[f"{ptype}Len"][0]) if ptype == "Group": self.group_identifier = self.particle_identifier id_offset = 0 # index of file that has scalar values for the group g_scalar = i_scalar group_index = self.scalar_index # If a subhalo, find the index of the parent. elif ptype == "Subhalo": self.group_identifier = np.int64(my_data["SubhaloGrNr"][0]) # Find the file that has the scalar values for the parent group. g_scalar = self.index._get_halo_file_indices( "Group", [self.group_identifier] )[0] # index within halo arrays that corresponds to the paent group group_index = self.index._get_halo_scalar_index( "Group", self.group_identifier ) my_data = self.index._get_halo_values( "Group", np.array([self.group_identifier]), ["GroupNsubs", "GroupFirstSub"], ) self.subgroup_identifier = self.particle_identifier - np.int64( my_data["GroupFirstSub"][0] ) parent_subhalos = my_data["GroupNsubs"][0] mylog.debug( "Subhalo %d is subgroup %s of %d in group %d.", self.particle_identifier, self.subgroup_identifier, parent_subhalos, self.group_identifier, ) # ids of the sibling subhalos that come before this one if self.subgroup_identifier > 0: sub_ids = np.arange( self.particle_identifier - self.subgroup_identifier, self.particle_identifier, ) my_data = self.index._get_halo_values( "Subhalo", sub_ids, ["SubhaloLen"] ) id_offset = my_data["SubhaloLen"].sum(dtype=np.int64) else: id_offset = 0 # Calculate the starting index for the member particles. # First, add up all the particles in the earlier files. all_id_start = self.index._group_length_sum[:g_scalar].sum(dtype=np.int64) # Now add the halos in this file that come before. with h5py.File(self.index.data_files[g_scalar].filename, mode="r") as f: all_id_start += f["Group"]["GroupLen"][:group_index].sum(dtype=np.int64) # Add the subhalo offset. 
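        # At this point all_id_start is the global offset of this halo's
        # first member ID: member particles in earlier files, plus earlier
        # groups in this file, plus (for subhalos) the members of earlier
        # sibling subhalos, added just below.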
all_id_start += id_offset # indices of first and last files containing member particles i_start = ( np.digitize([all_id_start], self.index._halo_id_start, right=False)[0] - 1 ) i_end = np.digitize( [all_id_start + self.particle_number], self.index._halo_id_end, right=True )[0] self.field_data_files = self.index.data_files[i_start : i_end + 1] # starting and ending indices for each file containing particles self.field_data_start = ( all_id_start - self.index._halo_id_start[i_start : i_end + 1] ).clip(min=0) self.field_data_start = self.field_data_start.astype(np.int64) self.field_data_end = ( all_id_start + self.particle_number - self.index._halo_id_start[i_start : i_end + 1] ).clip(max=self.index._halo_id_number[i_start : i_end + 1]) self.field_data_end = self.field_data_end.astype(np.int64) for attr in ["mass", "position", "velocity"]: setattr(self, attr, self[self.ptype, f"particle_{attr}"][0]) def __repr__(self): return "%s_%s_%09d" % (self.ds, self.ptype, self.particle_identifier) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gadget_fof/fields.py0000644000175100001770000000761614714401662020142 0ustar00runnerdockerfrom yt._typing import KnownFieldsT from yt.fields.field_info_container import FieldInfoContainer m_units = "code_mass" p_units = "code_length" v_units = "code_velocity" _pnums = 6 _type_fields: KnownFieldsT = tuple( ("%s%sType_%d" % (ptype, field, pnum), (units, [], None)) for pnum in range(_pnums) for field, units in (("Mass", m_units), ("Len", p_units)) for ptype in ("Group", "Subhalo") ) _sub_type_fields: KnownFieldsT = tuple( ("Subhalo%sType_%d" % (field, pnum), (units, [], None)) for pnum in range(_pnums) for field, units in ( ("HalfmassRad", p_units), ("MassInHalfRad", m_units), ("MassInMaxRad", m_units), ("MassInRad", m_units), ) ) _particle_fields: KnownFieldsT = ( ("GroupPos_0", (p_units, ["Group", "particle_position_x"], None)), ("GroupPos_1", (p_units, ["Group", "particle_position_y"], None)), ("GroupPos_2", (p_units, ["Group", "particle_position_z"], None)), ("GroupVel_0", (v_units, ["Group", "particle_velocity_x"], None)), ("GroupVel_1", (v_units, ["Group", "particle_velocity_y"], None)), ("GroupVel_2", (v_units, ["Group", "particle_velocity_z"], None)), ("GroupMass", (m_units, ["Group", "particle_mass"], None)), ("GroupLen", ("", ["Group", "particle_number"], None)), ("GroupNsubs", ("", ["Group", "subhalo_number"], None)), ("GroupFirstSub", ("", [], None)), ("Group_M_Crit200", (m_units, [], None)), ("Group_M_Crit500", (m_units, [], None)), ("Group_M_Mean200", (m_units, [], None)), ("Group_M_TopHat200", (m_units, [], None)), ("Group_R_Crit200", (p_units, [], None)), ("Group_R_Crit500", (p_units, [], None)), ("Group_R_Mean200", (p_units, [], None)), ("Group_R_TopHat200", (p_units, [], None)), ("SubhaloPos_0", (p_units, ["Subhalo", "particle_position_x"], None)), ("SubhaloPos_1", (p_units, ["Subhalo", "particle_position_y"], None)), ("SubhaloPos_2", (p_units, ["Subhalo", "particle_position_z"], None)), ("SubhaloVel_0", (v_units, ["Subhalo", "particle_velocity_x"], None)), ("SubhaloVel_1", (v_units, ["Subhalo", "particle_velocity_y"], None)), ("SubhaloVel_2", (v_units, ["Subhalo", "particle_velocity_z"], None)), ("SubhaloMass", (m_units, ["Subhalo", "particle_mass"], None)), ("SubhaloLen", ("", ["Subhalo", "particle_number"], None)), ("SubhaloCM_0", (p_units, ["Subhalo", "center_of_mass_x"], None)), ("SubhaloCM_1", (p_units, ["Subhalo", "center_of_mass_y"], None)), ("SubhaloCM_2", (p_units, 
["Subhalo", "center_of_mass_z"], None)), ("SubhaloSpin_0", ("", ["Subhalo", "spin_x"], None)), ("SubhaloSpin_1", ("", ["Subhalo", "spin_y"], None)), ("SubhaloSpin_2", ("", ["Subhalo", "spin_z"], None)), ("SubhaloGrNr", ("", ["Subhalo", "group_identifier"], None)), ("SubhaloHalfmassRad", (p_units, [], None)), ("SubhaloIDMostbound", ("", [], None)), ("SubhaloMassInHalfRad", (m_units, [], None)), ("SubhaloMassInMaxRad", (m_units, [], None)), ("SubhaloMassInRad", (m_units, [], None)), ("SubhaloParent", ("", [], None)), ("SubhaloVelDisp", (v_units, ["Subhalo", "velocity_dispersion"], None)), ("SubhaloVmax", (v_units, [], None)), ("SubhaloVmaxRad", (p_units, [], None)), *_type_fields, *_sub_type_fields, ) class GadgetFOFFieldInfo(FieldInfoContainer): known_particle_fields = _particle_fields # these are extra fields to be created for the "all" particle type extra_union_fields = ( (p_units, "particle_position_x"), (p_units, "particle_position_y"), (p_units, "particle_position_z"), (v_units, "particle_velocity_x"), (v_units, "particle_velocity_y"), (v_units, "particle_velocity_z"), (m_units, "particle_mass"), ("", "particle_number"), ("", "particle_ones"), ) class GadgetFOFHaloFieldInfo(FieldInfoContainer): known_particle_fields = _particle_fields + (("ID", ("", ["member_ids"], None)),) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gadget_fof/io.py0000644000175100001770000003424314714401662017277 0ustar00runnerdockerfrom collections import defaultdict import numpy as np from yt.funcs import mylog from yt.utilities.io_handler import BaseParticleIOHandler from yt.utilities.on_demand_imports import _h5py as h5py class IOHandlerGadgetFOFHDF5(BaseParticleIOHandler): _dataset_type = "gadget_fof_hdf5" def __init__(self, ds): super().__init__(ds) self.offset_fields = set() def _read_fluid_selection(self, chunks, selector, fields, size): raise NotImplementedError( "IOHandlerGadgetFOFHDF5 _read_fluid_selection not implemented yet" ) def _read_particle_coords(self, chunks, ptf): # This will read chunks and yield the results. 
for data_file in self._sorted_chunk_iterator(chunks): with h5py.File(data_file.filename, mode="r") as f: for ptype in sorted(ptf): coords = data_file._get_particle_positions(ptype, f=f) if coords is None: continue x = coords[:, 0] y = coords[:, 1] z = coords[:, 2] yield ptype, (x, y, z), 0.0 def _yield_coordinates(self, data_file): ptypes = self.ds.particle_types_raw with h5py.File(data_file.filename, mode="r") as f: for ptype in sorted(ptypes): pcount = data_file.total_particles[ptype] if pcount == 0: continue coords = f[ptype][f"{ptype}Pos"][()].astype("float64") coords = np.resize(coords, (pcount, 3)) yield ptype, coords def _read_offset_particle_field(self, field, data_file, fh): field_data = np.empty(data_file.total_particles["Group"], dtype="float64") fofindex = ( np.arange(data_file.total_particles["Group"]) + data_file.index_start["Group"] ) for offset_file in data_file.offset_files: if fh.filename == offset_file.filename: ofh = fh else: ofh = h5py.File(offset_file.filename, mode="r") subindex = np.arange(offset_file.total_offset) + offset_file.offset_start substart = max(fofindex[0] - subindex[0], 0) subend = min(fofindex[-1] - subindex[0], subindex.size - 1) fofstart = substart + subindex[0] - fofindex[0] fofend = subend + subindex[0] - fofindex[0] field_data[fofstart : fofend + 1] = ofh["Subhalo"][field][ substart : subend + 1 ] return field_data def _read_particle_fields(self, chunks, ptf, selector): # Now we have all the sizes, and we can allocate for data_file in self._sorted_chunk_iterator(chunks): si, ei = data_file.start, data_file.end with h5py.File(data_file.filename, mode="r") as f: for ptype, field_list in sorted(ptf.items()): pcount = data_file.total_particles[ptype] if pcount == 0: continue coords = data_file._get_particle_positions(ptype, f=f) x = coords[:, 0] y = coords[:, 1] z = coords[:, 2] mask = selector.select_points(x, y, z, 0.0) del x, y, z if mask is None: continue for field in field_list: if field in self.offset_fields: field_data = self._read_offset_particle_field( field, data_file, f ) else: if field == "particle_identifier": field_data = ( np.arange(data_file.total_particles[ptype]) + data_file.index_start[ptype] ) elif field in f[ptype]: field_data = f[ptype][field][()].astype("float64") else: fname = field[: field.rfind("_")] field_data = f[ptype][fname][()].astype("float64") my_div = field_data.size / pcount if my_div > 1: findex = int(field[field.rfind("_") + 1 :]) field_data = field_data[:, findex] data = field_data[si:ei][mask] yield (ptype, field), data def _count_particles(self, data_file): si, ei = data_file.start, data_file.end pcount = { "Group": data_file.header["Ngroups_ThisFile"], "Subhalo": data_file.header["Nsubgroups_ThisFile"], } if None not in (si, ei): for ptype in pcount: pcount[ptype] = np.clip(pcount[ptype] - si, 0, ei - si) return pcount def _identify_fields(self, data_file): fields = [] pcount = data_file.total_particles if sum(pcount.values()) == 0: return fields, {} with h5py.File(data_file.filename, mode="r") as f: for ptype in self.ds.particle_types_raw: if data_file.total_particles[ptype] == 0: continue fields.append((ptype, "particle_identifier")) my_fields, my_offset_fields = subfind_field_list( f[ptype], ptype, data_file.total_particles ) fields.extend(my_fields) self.offset_fields = self.offset_fields.union(set(my_offset_fields)) return fields, {} class IOHandlerGadgetFOFHaloHDF5(IOHandlerGadgetFOFHDF5): _dataset_type = "gadget_fof_halo_hdf5" def _read_particle_coords(self, chunks, ptf): pass def 
_read_particle_selection(self, dobj, fields): rv = {} ind = {} # We first need a set of masks for each particle type ptf = defaultdict(list) # ON-DISK TO READ fsize = defaultdict(lambda: 0) # COUNT RV field_maps = defaultdict(list) # ptypes -> fields unions = self.ds.particle_unions # What we need is a mapping from particle types to return types for field in fields: ftype, fname = field fsize[field] = 0 # We should add a check for p.fparticle_unions or something here if ftype in unions: for pt in unions[ftype]: ptf[pt].append(fname) field_maps[pt, fname].append(field) else: ptf[ftype].append(fname) field_maps[field].append(field) # Now we allocate psize = {dobj.ptype: dobj.particle_number} for field in fields: if field[0] in unions: for pt in unions[field[0]]: fsize[field] += psize.get(pt, 0) else: fsize[field] += psize.get(field[0], 0) for field in fields: if field[1] in self._vector_fields: shape = (fsize[field], self._vector_fields[field[1]]) elif field[1] in self._array_fields: shape = (fsize[field],) + self._array_fields[field[1]] elif field in self.ds.scalar_field_list: shape = (1,) else: shape = (fsize[field],) rv[field] = np.empty(shape, dtype="float64") ind[field] = 0 # Now we read. for field_r, vals in self._read_particle_fields(dobj, ptf): # Note that we now need to check the mappings for field_f in field_maps[field_r]: my_ind = ind[field_f] rv[field_f][my_ind : my_ind + vals.shape[0], ...] = vals ind[field_f] += vals.shape[0] # Now we need to truncate all our fields, since we allow for # over-estimating. for field_f in ind: rv[field_f] = rv[field_f][: ind[field_f]] return rv def _read_scalar_fields(self, dobj, scalar_fields): all_data = {} if not scalar_fields: return all_data pcount = 1 with h5py.File(dobj.scalar_data_file.filename, mode="r") as f: for ptype, field_list in sorted(scalar_fields.items()): for field in field_list: if field == "particle_identifier": field_data = ( np.arange(dobj.scalar_data_file.total_particles[ptype]) + dobj.scalar_data_file.index_start[ptype] ) elif field in f[ptype]: field_data = f[ptype][field][()].astype("float64") else: fname = field[: field.rfind("_")] field_data = f[ptype][fname][()].astype("float64") my_div = field_data.size / pcount if my_div > 1: findex = int(field[field.rfind("_") + 1 :]) field_data = field_data[:, findex] data = np.array([field_data[dobj.scalar_index]]) all_data[ptype, field] = data return all_data def _read_member_fields(self, dobj, member_fields): all_data = defaultdict(lambda: np.empty(dobj.particle_number, dtype=np.float64)) if not member_fields: return all_data field_start = 0 for i, data_file in enumerate(dobj.field_data_files): start_index = dobj.field_data_start[i] end_index = dobj.field_data_end[i] pcount = end_index - start_index if pcount == 0: continue field_end = field_start + end_index - start_index with h5py.File(data_file.filename, mode="r") as f: for ptype, field_list in sorted(member_fields.items()): for field in field_list: field_data = all_data[ptype, field] if field in f["IDs"]: my_data = f["IDs"][field][start_index:end_index].astype( "float64" ) else: fname = field[: field.rfind("_")] my_data = f["IDs"][fname][start_index:end_index].astype( "float64" ) my_div = my_data.size / pcount if my_div > 1: findex = int(field[field.rfind("_") + 1 :]) my_data = my_data[:, findex] field_data[field_start:field_end] = my_data field_start = field_end return all_data def _read_particle_fields(self, dobj, ptf): # separate member particle fields from scalar fields scalar_fields = defaultdict(list) member_fields = 
defaultdict(list) for ptype, field_list in sorted(ptf.items()): for field in field_list: if (ptype, field) in self.ds.scalar_field_list: scalar_fields[ptype].append(field) else: member_fields[ptype].append(field) all_data = self._read_scalar_fields(dobj, scalar_fields) all_data.update(self._read_member_fields(dobj, member_fields)) for field, field_data in all_data.items(): yield field, field_data def _identify_fields(self, data_file): fields = [] scalar_fields = [] id_fields = {} with h5py.File(data_file.filename, mode="r") as f: for ptype in self.ds.particle_types_raw: fields.append((ptype, "particle_identifier")) scalar_fields.append((ptype, "particle_identifier")) my_fields, my_offset_fields = subfind_field_list( f[ptype], ptype, data_file.total_particles ) fields.extend(my_fields) scalar_fields.extend(my_fields) if "IDs" not in f: continue id_fields = [(ptype, field) for field in f["IDs"]] fields.extend(id_fields) return fields, scalar_fields, id_fields, {} def subfind_field_list(fh, ptype, pcount): fields = [] offset_fields = [] for field in fh.keys(): if isinstance(fh[field], h5py.Group): my_fields, my_offset_fields = subfind_field_list(fh[field], ptype, pcount) fields.extend(my_fields) my_offset_fields.extend(offset_fields) else: if not fh[field].size % pcount[ptype]: my_div = fh[field].size / pcount[ptype] fname = fh[field].name[fh[field].name.find(ptype) + len(ptype) + 1 :] if my_div > 1: for i in range(int(my_div)): fields.append((ptype, "%s_%d" % (fname, i))) else: fields.append((ptype, fname)) elif ( ptype == "Subhalo" and not fh[field].size % fh["/Subhalo"].attrs["Number_of_groups"] ): # These are actually Group fields, but they were written after # a load balancing step moved halos around and thus they do not # correspond to the halos stored in the Group group. 
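# Such fields are flagged as "offset" fields below and are read back
# through _read_offset_particle_field, which remaps them onto the
# ordering of the Group catalog.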
my_div = fh[field].size / fh["/Subhalo"].attrs["Number_of_groups"] fname = fh[field].name[fh[field].name.find(ptype) + len(ptype) + 1 :] if my_div > 1: for i in range(int(my_div)): fields.append(("Group", "%s_%d" % (fname, i))) else: fields.append(("Group", fname)) offset_fields.append(fname) else: mylog.warning( "Cannot add field (%s, %s) with size %d.", ptype, fh[field].name, fh[field].size, ) continue return fields, offset_fields ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3231523 yt-4.4.0/yt/frontends/gadget_fof/tests/0000755000175100001770000000000014714401715017451 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gadget_fof/tests/__init__.py0000644000175100001770000000000014714401662021551 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gadget_fof/tests/test_outputs.py0000644000175100001770000000732514714401662022615 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_array_equal, assert_equal from yt.frontends.gadget_fof.api import GadgetFOFDataset from yt.testing import ParticleSelectionComparison, requires_file, requires_module from yt.utilities.answer_testing.framework import ( FieldValuesTest, data_dir_load, requires_ds, ) _fields = ( ("all", "particle_position_x"), ("all", "particle_position_y"), ("all", "particle_position_z"), ("all", "particle_velocity_x"), ("all", "particle_velocity_y"), ("all", "particle_velocity_z"), ("all", "particle_mass"), ("all", "particle_identifier"), ) # a dataset with empty files g5 = "gadget_fof_halos/groups_005/fof_subhalo_tab_005.0.hdf5" g42 = "gadget_fof_halos/groups_042/fof_subhalo_tab_042.0.hdf5" @requires_module("h5py") @requires_ds(g5) def test_fields_g5(): for field in _fields: yield FieldValuesTest(g5, field, particle_type=True) @requires_module("h5py") @requires_ds(g42) def test_fields_g42(): for field in _fields: yield FieldValuesTest(g42, field, particle_type=True) @requires_module("h5py") @requires_file(g42) def test_GadgetFOFDataset(): assert isinstance(data_dir_load(g42), GadgetFOFDataset) # fof/subhalo catalog with member particles g298 = "gadget_halos/data/groups_298/fof_subhalo_tab_298.0.hdf5" @requires_module("h5py") @requires_file(g298) def test_particle_selection(): ds = data_dir_load(g298) psc = ParticleSelectionComparison(ds) psc.run_defaults() @requires_module("h5py") @requires_file(g298) def test_subhalos(): ds = data_dir_load(g298) total_sub = 0 total_int = 0 for hid in range(0, ds.index.particle_count["Group"]): my_h = ds.halo("Group", hid) h_ids = my_h["Group", "ID"] for sid in range(int(my_h["Group", "subhalo_number"][0])): my_s = ds.halo("Subhalo", (my_h.particle_identifier, sid)) total_sub += my_s["Subhalo", "ID"].size total_int += np.intersect1d(h_ids, my_s["Subhalo", "ID"]).size # Test that all subhalo particles are contained within # their parent group. assert_equal(total_sub, total_int) @requires_module("h5py") @requires_file(g298) def test_halo_masses(): ds = data_dir_load(g298) ad = ds.all_data() for ptype in ["Group", "Subhalo"]: nhalos = ds.index.particle_count[ptype] mass = ds.arr(np.zeros(nhalos), "code_mass") for i in range(nhalos): halo = ds.halo(ptype, i) mass[i] = halo.mass # Check that masses from halo containers are the same # as the array of all masses. This will test getting # scalar fields for halos correctly. 
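# (Editor's note: ds.halo(ptype, i) builds a halo container whose .mass,
# .position, and .velocity attributes are scalars read from the catalog,
# while ad[ptype, "particle_mass"] reads the same values in bulk.)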
assert_array_equal(ad[ptype, "particle_mass"], mass) # fof/subhalo catalog with no member ids in first file g56 = "gadget_halos/data/groups_056/fof_subhalo_tab_056.0.hdf5" # This dataset has halos in one file and ids in another, # which can confuse the field detection. @requires_module("h5py") @requires_file(g56) def test_unbalanced_dataset(): ds = data_dir_load(g56) halo = ds.halo("Group", 0) assert_equal(len(halo["Group", "member_ids"]), 33) assert_equal(halo["Group", "member_ids"].min().d, 723254.0) assert_equal(halo["Group", "member_ids"].max().d, 772662.0) # fof/subhalo catalog with no member ids in first file g76 = "gadget_halos/data/groups_076/fof_subhalo_tab_076.0.hdf5" # This dataset has one halo with particles distributed over 3 files # with the 2nd file being empty. @requires_module("h5py") @requires_file(g76) def test_3file_halo(): ds = data_dir_load(g76) # this halo's particles are distributed over 3 files with the # middle file being empty halo = ds.halo("Group", 6) halo["Group", "member_ids"] assert True ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3231523 yt-4.4.0/yt/frontends/gamer/0000755000175100001770000000000014714401715015315 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gamer/__init__.py0000644000175100001770000000000014714401662017415 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gamer/api.py0000644000175100001770000000026414714401662016443 0ustar00runnerdockerfrom .data_structures import GAMERDataset, GAMERGrid, GAMERHierarchy from .fields import GAMERFieldInfo from .io import IOHandlerGAMER ### NOT SUPPORTED YET # from . import tests ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gamer/cfields.pyx0000644000175100001770000001343614714401662017500 0ustar00runnerdocker# distutils: include_dirs = LIB_DIR # distutils: libraries = STD_LIBS cimport cython cimport libc.math as math cimport numpy as np import numpy as np cdef np.float64_t gamma_eos(np.float64_t kT, np.float64_t g) noexcept nogil: return g cdef np.float64_t gamma_eos_tb(np.float64_t kT, np.float64_t g) noexcept nogil: cdef np.float64_t x, c_p, c_v x = 2.25 * kT / ( math.sqrt(2.25 * kT * kT + 1.0) + 1.0 ) c_p = 2.5 + x c_v = 1.5 + x return c_p / c_v cdef np.float64_t cs_eos_tb(np.float64_t kT, np.float64_t h, np.float64_t g) noexcept nogil: cdef np.float64_t hp, cs2 hp = h + 1.0 cs2 = kT / (3.0 * hp) cs2 *= (5.0 * hp - 8.0 * kT) / (hp - kT) return math.sqrt(cs2) cdef np.float64_t cs_eos(np.float64_t kT, np.float64_t h, np.float64_t g) noexcept nogil: cdef np.float64_t hp, cs2 hp = h + 1.0 cs2 = g / hp * kT return math.sqrt(cs2) ctypedef np.float64_t (*f2_type)(np.float64_t, np.float64_t) noexcept nogil ctypedef np.float64_t (*f3_type)(np.float64_t, np.float64_t, np.float64_t) noexcept nogil cdef class SRHDFields: cdef f2_type gamma cdef f3_type cs cdef np.float64_t _gamma def __init__(self, int eos, np.float64_t gamma): self._gamma = gamma # Select aux functions based on eos no. 
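# eos == 1 selects the constant-gamma (ideal gas) functions; any other
# value selects the variable-gamma (Taub-Mathews-type) functions, where
# the local adiabatic index is Gamma(kT) = c_p/c_v with
# x = 2.25*kT / (sqrt(2.25*kT*kT + 1) + 1), c_p = 2.5 + x, c_v = 1.5 + x
# (see gamma_eos_tb above).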
if eos == 1: self.gamma = gamma_eos self.cs = cs_eos else: self.gamma = gamma_eos_tb self.cs = cs_eos_tb @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def gamma_field(self, temp): cdef np.float64_t[:] kT = temp.ravel() out = np.empty_like(kT) cdef np.float64_t[:] outp = out.ravel() cdef int i for i in range(outp.shape[0]): outp[i] = self.gamma(kT[i], self._gamma) return out cdef np.float64_t _lorentz_factor( self, np.float64_t rho, np.float64_t mx, np.float64_t my, np.float64_t mz, np.float64_t h, ) noexcept nogil: cdef np.float64_t u2 cdef np.float64_t fac fac = (1.0 / (rho * (h + 1.0))) ** 2 u2 = (mx * mx + my * my + mz * mz) * fac return math.sqrt(1.0 + u2) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def lorentz_factor(self, dens, momx, momy, momz, enth): cdef np.float64_t[:] rho = dens.ravel() cdef np.float64_t[:] mx = momx.ravel() cdef np.float64_t[:] my = momy.ravel() cdef np.float64_t[:] mz = momz.ravel() cdef np.float64_t[:] h = enth.ravel() out = np.empty_like(dens) cdef np.float64_t[:] outp = out.ravel() cdef int i for i in range(outp.shape[0]): outp[i] = self._lorentz_factor(rho[i], mx[i], my[i], mz[i], h[i]) return out @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def sound_speed(self, temp, enth): cdef np.float64_t[:] kT = temp.ravel() cdef np.float64_t[:] h = enth.ravel() out = np.empty_like(kT) cdef np.float64_t[:] outp = out.ravel() cdef int i for i in range(outp.shape[0]): outp[i] = self.cs(kT[i], h[i], self._gamma) return out cdef np.float64_t _four_vel( self, np.float64_t rho, np.float64_t mi, np.float64_t h, ) noexcept nogil: return mi / (rho * (h + 1.0)) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def four_velocity_xyz(self, dens, mom, enth): cdef np.float64_t[:] rho = dens.ravel() cdef np.float64_t[:] mi = mom.ravel() cdef np.float64_t[:] h = enth.ravel() out = np.empty_like(dens) cdef np.float64_t[:] outp = out.ravel() cdef int i for i in range(outp.shape[0]): outp[i] = self._four_vel(rho[i], mi[i], h[i]) return out @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def kinetic_energy_density(self, dens, momx, momy, momz, temp, enth): cdef np.float64_t[:] rho = dens.ravel() cdef np.float64_t[:] mx = momx.ravel() cdef np.float64_t[:] my = momy.ravel() cdef np.float64_t[:] mz = momz.ravel() cdef np.float64_t[:] kT = temp.ravel() cdef np.float64_t[:] h = enth.ravel() out = np.empty_like(dens) cdef np.float64_t[:] outp = out.ravel() cdef np.float64_t lf, u2, ux, uy, uz cdef int i for i in range(outp.shape[0]): ux = self._four_vel(rho[i], mx[i], h[i]) uy = self._four_vel(rho[i], my[i], h[i]) uz = self._four_vel(rho[i], mz[i], h[i]) u2 = ux**2 + uy**2 + uz**2 lf = self._lorentz_factor(rho[i], mx[i], my[i], mz[i], h[i]) gm1 = u2 / (lf + 1.0) p = rho[i] / lf * kT[i] outp[i] = gm1 * (rho[i] * (h[i] + 1.0) + p) return out @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def mach_number(self, dens, momx, momy, momz, temp, enth): cdef np.float64_t[:] rho = dens.ravel() cdef np.float64_t[:] mx = momx.ravel() cdef np.float64_t[:] my = momy.ravel() cdef np.float64_t[:] mz = momz.ravel() cdef np.float64_t[:] kT = temp.ravel() cdef np.float64_t[:] h = enth.ravel() out = np.empty_like(dens) cdef np.float64_t[:] outp = out.ravel() cdef np.float64_t cs, us, u cdef int i for i in range(outp.shape[0]): cs = self.cs(kT[i], h[i], self._gamma) us = cs / math.sqrt(1.0 - cs**2) u = math.sqrt(mx[i]**2 + my[i]**2 + 
mz[i]**2) / (rho[i] * (h[i] + 1.0)) outp[i] = u / us return out ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gamer/data_structures.py0000644000175100001770000003435514714401662021116 0ustar00runnerdockerimport os import weakref import numpy as np from yt.data_objects.index_subobjects.grid_patch import AMRGridPatch from yt.data_objects.static_output import Dataset from yt.funcs import mylog, setdefaultattr from yt.geometry.api import Geometry from yt.geometry.grid_geometry_handler import GridIndex from yt.utilities.cosmology import Cosmology from yt.utilities.file_handler import HDF5FileHandler from .definitions import geometry_parameters from .fields import GAMERFieldInfo class GAMERGrid(AMRGridPatch): _id_offset = 0 def __init__(self, id, index, level): AMRGridPatch.__init__(self, id, filename=index.index_filename, index=index) self.Parent = None # do NOT initialize Parent as [] self.Children = [] self.Level = level class GAMERHierarchy(GridIndex): grid = GAMERGrid _preload_implemented = True # since gamer defines "_read_chunk_data" in io.py def __init__(self, ds, dataset_type="gamer"): self.dataset_type = dataset_type self.dataset = weakref.proxy(ds) self.index_filename = self.dataset.parameter_filename self.directory = os.path.dirname(self.index_filename) self._handle = ds._handle self._group_grid = ds._group_grid self._group_particle = ds._group_particle self.float_type = "float64" # fixed even when FLOAT8 is off self._particle_handle = ds._particle_handle self.refine_by = ds.refine_by self.pgroup = self.refine_by**3 # number of patches in a patch group GridIndex.__init__(self, ds, dataset_type) def _detect_output_fields(self): # find all field names in the current dataset # grid fields self.field_list = [("gamer", v) for v in self._group_grid.keys()] # particle fields if self._group_particle is not None: self.field_list += [("io", v) for v in self._group_particle.keys()] def _count_grids(self): # count the total number of patches at all levels self.num_grids = self.dataset.parameters["NPatch"].sum() // self.pgroup def _parse_index(self): parameters = self.dataset.parameters gid0 = 0 grid_corner = self._handle["Tree/Corner"][()][:: self.pgroup] convert2physical = self._handle["Tree/Corner"].attrs["Cvt2Phy"] self.grid_dimensions[:] = parameters["PatchSize"] * self.refine_by for lv in range(0, parameters["NLevel"]): num_grids_level = parameters["NPatch"][lv] // self.pgroup if num_grids_level == 0: break patch_scale = ( parameters["PatchSize"] * parameters["CellScale"][lv] * self.refine_by ) # set the level and edge of each grid # (left/right_edge are YT arrays in code units) self.grid_levels.flat[gid0 : gid0 + num_grids_level] = lv self.grid_left_edge[gid0 : gid0 + num_grids_level] = ( grid_corner[gid0 : gid0 + num_grids_level] * convert2physical ) self.grid_right_edge[gid0 : gid0 + num_grids_level] = ( grid_corner[gid0 : gid0 + num_grids_level] + patch_scale ) * convert2physical gid0 += num_grids_level self.grid_left_edge += self.dataset.domain_left_edge self.grid_right_edge += self.dataset.domain_left_edge # allocate all grid objects self.grids = np.empty(self.num_grids, dtype="object") for i in range(self.num_grids): self.grids[i] = self.grid(i, self, self.grid_levels.flat[i]) # maximum level with patches (which can be lower than MAX_LEVEL) self.max_level = self.grid_levels.max() # number of particles in each grid try: self.grid_particle_count[:] = np.sum( self._handle["Tree/NPar"][()].reshape(-1, self.pgroup), 
axis=1 )[:, None] except KeyError: self.grid_particle_count[:] = 0.0 # calculate the starting particle indices for each grid (starting from 0) # --> note that the last element must store the total number of particles # (see _read_particle_coords and _read_particle_fields in io.py) self._particle_indices = np.zeros(self.num_grids + 1, dtype="int64") np.add.accumulate( self.grid_particle_count.squeeze(), out=self._particle_indices[1:] ) def _populate_grid_objects(self): son_list = self._handle["Tree/Son"][()] for gid in range(self.num_grids): grid = self.grids[gid] son_gid0 = ( son_list[gid * self.pgroup : (gid + 1) * self.pgroup] // self.pgroup ) # set up the parent-children relationship grid.Children = [self.grids[t] for t in son_gid0[son_gid0 >= 0]] for son_grid in grid.Children: son_grid.Parent = grid # set up other grid attributes grid._prepare_grid() grid._setup_dx() # validate the parent-children relationship in the debug mode if self.dataset._debug: self._validate_parent_children_relationship() # for _debug mode only def _validate_parent_children_relationship(self): mylog.info("Validating the parent-children relationship ...") father_list = self._handle["Tree/Father"][()] for grid in self.grids: # parent->children == itself if grid.Parent is not None: assert grid in grid.Parent.Children, ( f"Grid {grid.id}, Parent {grid.Parent.id}, " f"Parent->Children[0] {grid.Parent.Children[0].id}" ) # children->parent == itself for c in grid.Children: assert c.Parent is grid, ( f"Grid {grid.id}, Children {c.id}, " f"Children->Parent {c.Parent.id}" ) # all refinement grids should have parent if grid.Level > 0: assert grid.Parent is not None and grid.Parent.id >= 0, ( f"Grid {grid.id}, Level {grid.Level}, " f"Parent {grid.Parent.id if grid.Parent is not None else -999}" ) # parent index is consistent with the loaded dataset if grid.Level > 0: father_gid = father_list[grid.id * self.pgroup] // self.pgroup assert father_gid == grid.Parent.id, ( f"Grid {grid.id}, Level {grid.Level}, " f"Parent_Found {grid.Parent.id}, Parent_Expect {father_gid}" ) # edges between children and parent for c in grid.Children: for d in range(0, 3): msgL = ( "Grid %d, Child %d, Grid->EdgeL %14.7e, Children->EdgeL %14.7e" % (grid.id, c.id, grid.LeftEdge[d], c.LeftEdge[d]) ) msgR = ( "Grid %d, Child %d, Grid->EdgeR %14.7e, Children->EdgeR %14.7e" % (grid.id, c.id, grid.RightEdge[d], c.RightEdge[d]) ) if not grid.LeftEdge[d] <= c.LeftEdge[d]: raise ValueError(msgL) if not grid.RightEdge[d] >= c.RightEdge[d]: raise ValueError(msgR) mylog.info("Check passed") class GAMERDataset(Dataset): _load_requirements = ["h5py"] _index_class = GAMERHierarchy _field_info_class = GAMERFieldInfo _handle = None _group_grid = None _group_particle = None _debug = False # debug mode for the GAMER frontend def __init__( self, filename, dataset_type="gamer", storage_filename=None, particle_filename=None, units_override=None, unit_system="cgs", default_species_fields=None, ): if self._handle is not None: return self.fluid_types += ("gamer",) self._handle = HDF5FileHandler(filename) self.particle_filename = particle_filename # to catch both the new and old data formats for the grid data try: self._group_grid = self._handle["GridData"] except KeyError: self._group_grid = self._handle["Data"] if "Particle" in self._handle: self._group_particle = self._handle["Particle"] if self.particle_filename is None: self._particle_handle = self._handle else: self._particle_handle = HDF5FileHandler(self.particle_filename) # currently GAMER only supports refinement 
by a factor of 2 self.refine_by = 2 Dataset.__init__( self, filename, dataset_type, units_override=units_override, unit_system=unit_system, default_species_fields=default_species_fields, ) self.storage_filename = storage_filename def _set_code_unit_attributes(self): if self.opt_unit: # GAMER units are always in CGS setdefaultattr( self, "length_unit", self.quan(self.parameters["Unit_L"], "cm") ) setdefaultattr(self, "mass_unit", self.quan(self.parameters["Unit_M"], "g")) setdefaultattr(self, "time_unit", self.quan(self.parameters["Unit_T"], "s")) if self.mhd: setdefaultattr( self, "magnetic_unit", self.quan(self.parameters["Unit_B"], "gauss") ) else: if len(self.units_override) == 0: mylog.warning( "Cannot determine code units ==> " "Use units_override to specify the units" ) for unit, value, cgs in [ ("length", 1.0, "cm"), ("time", 1.0, "s"), ("mass", 1.0, "g"), ("magnetic", np.sqrt(4.0 * np.pi), "gauss"), ]: setdefaultattr(self, f"{unit}_unit", self.quan(value, cgs)) if len(self.units_override) == 0: mylog.warning("Assuming %8s unit = %f %s", unit, value, cgs) def _parse_parameter_file(self): # code-specific parameters for t in self._handle["Info"]: info_category = self._handle["Info"][t] for v in info_category.dtype.names: self.parameters[v] = info_category[v] # shortcut for self.parameters parameters = self.parameters # reset 'Model' to be more readable # (no longer regard MHD as a separate model) if parameters["Model"] == 1: parameters["Model"] = "Hydro" elif parameters["Model"] == 3: parameters["Model"] = "ELBDM" else: parameters["Model"] = "Unknown" # simulation time and domain self.dimensionality = 3 # always 3D self.domain_left_edge = parameters.get( "BoxEdgeL", np.array([0.0, 0.0, 0.0]) ).astype("f8") self.domain_right_edge = parameters.get( "BoxEdgeR", parameters["BoxSize"] ).astype("f8") self.domain_dimensions = parameters["NX0"].astype("int64") # periodicity if parameters["FormatVersion"] >= 2106: periodic_bc = 1 else: periodic_bc = 0 self._periodicity = ( bool(parameters["Opt__BC_Flu"][0] == periodic_bc), bool(parameters["Opt__BC_Flu"][2] == periodic_bc), bool(parameters["Opt__BC_Flu"][4] == periodic_bc), ) # cosmological parameters if parameters["Comoving"]: self.cosmological_simulation = 1 # here parameters["Time"][0] is the scale factor a at certain redshift self.current_redshift = 1.0 / parameters["Time"][0] - 1.0 self.omega_matter = parameters["OmegaM0"] self.omega_lambda = 1.0 - self.omega_matter # default to 0.7 for old data format self.hubble_constant = parameters.get("Hubble0", 0.7) # use the cosmological age computed by the given cosmological parameters as the current time when COMOVING is on; cosmological age is computed by subtracting the lookback time at self.current_redshift from that at z = 1e6 (i.e., very early universe) cosmo = Cosmology( hubble_constant=self.hubble_constant, omega_matter=self.omega_matter, omega_lambda=self.omega_lambda, ) self.current_time = cosmo.lookback_time(self.current_redshift, 1e6) else: self.cosmological_simulation = 0 self.current_redshift = 0.0 self.omega_matter = 0.0 self.omega_lambda = 0.0 self.hubble_constant = 0.0 # use parameters["Time"][0] as current time when COMOVING is off self.current_time = parameters["Time"][0] # make aliases to some frequently used variables if parameters["Model"] == "Hydro": self.gamma = parameters["Gamma"] self.gamma_cr = self.parameters.get("CR_Gamma", None) self.eos = parameters.get("EoS", 1) # Assume gamma-law by default # default to 0.6 for old data format self.mu = parameters.get( 
"MolecularWeight", 0.6 ) # Assume ionized primordial by default self.mhd = parameters.get("Magnetohydrodynamics", 0) self.srhd = parameters.get("SRHydrodynamics", 0) else: self.mhd = 0 self.srhd = 0 # set dummy value of mu here to avoid more complicated workarounds later self.mu = 1.0 # old data format (version < 2210) did not contain any information of code units self.opt_unit = self.parameters.get("Opt__Unit", 0) self.geometry = Geometry(geometry_parameters[parameters.get("Coordinate", 1)]) @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if cls._missing_load_requirements(): return False try: # define a unique way to identify GAMER datasets f = HDF5FileHandler(filename) if "Info" in f["/"].keys() and "KeyInfo" in f["/Info"].keys(): return True except Exception: pass return False ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gamer/definitions.py0000644000175100001770000000021014714401662020174 0ustar00runnerdockergeometry_parameters = { 1: "cartesian", 2: ("cylindrical", ("r", "theta", "z")), 3: ("spherical", ("r", "theta", "phi")), } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gamer/fields.py0000644000175100001770000004021214714401662017135 0ustar00runnerdockerfrom yt._typing import KnownFieldsT from yt.fields.field_info_container import FieldInfoContainer from yt.fields.tensor_fields import setup_stress_energy_ideal from yt.funcs import mylog from .cfields import SRHDFields b_units = "code_magnetic" pre_units = "code_mass / (code_length*code_time**2)" erg_units = "code_mass / (code_length*code_time**2)" rho_units = "code_mass / code_length**3" mom_units = "code_mass / (code_length**2*code_time)" vel_units = "code_velocity" pot_units = "code_length**2/code_time**2" psi_units = "code_mass**(1/2) / code_length**(3/2)" class GAMERFieldInfo(FieldInfoContainer): known_other_fields: KnownFieldsT = ( # hydro fields on disk (GAMER outputs conservative variables) ("Dens", (rho_units, [], None)), ("MomX", (mom_units, ["momentum_density_x"], None)), ("MomY", (mom_units, ["momentum_density_y"], None)), ("MomZ", (mom_units, ["momentum_density_z"], None)), ("Engy", (erg_units, [], None)), ("CRay", (erg_units, ["cosmic_ray_energy_density"], None)), ("Pote", (pot_units, ["gravitational_potential"], None)), ("Pres", (pre_units, ["pressure"], None)), ("Temp", ("code_temperature", ["temperature"], None)), ("Enth", (pot_units, ["specific_reduced_enthalpy"], None)), ("Mach", ("dimensionless", ["mach_number"], None)), ("Cs", (vel_units, ["sound_speed"], None)), ("DivVel", ("1/code_time", ["velocity_divergence"], None)), # MHD fields on disk (CC=cell-centered) ("CCMagX", (b_units, [], "B_x")), ("CCMagY", (b_units, [], "B_y")), ("CCMagZ", (b_units, [], "B_z")), # psiDM fields on disk ("Real", (psi_units, ["psidm_real_part"], None)), ("Imag", (psi_units, ["psidm_imaginary_part"], None)), # particle fields on disk (deposited onto grids) ("ParDens", (rho_units, ["particle_density_on_grid"], None)), ("TotalDens", (rho_units, ["total_density_on_grid"], None)), ) known_particle_fields: KnownFieldsT = ( ("ParMass", ("code_mass", ["particle_mass"], None)), ("ParPosX", ("code_length", ["particle_position_x"], None)), ("ParPosY", ("code_length", ["particle_position_y"], None)), ("ParPosZ", ("code_length", ["particle_position_z"], None)), ("ParVelX", ("code_velocity", ["particle_velocity_x"], None)), ("ParVelY", ("code_velocity", ["particle_velocity_y"], 
None)), ("ParVelZ", ("code_velocity", ["particle_velocity_z"], None)), ("ParCreTime", ("code_time", ["particle_creation_time"], None)), ) def __init__(self, ds, field_list): super().__init__(ds, field_list) # add primitive and other derived variables def setup_fluid_fields(self): pc = self.ds.units.physical_constants from yt.fields.magnetic_field import setup_magnetic_field_aliases unit_system = self.ds.unit_system unit_system.registry = self.ds.unit_registry # TODO: Why do I need this?! if self.ds.opt_unit: temp_conv = pc.kb / (self.ds.mu * pc.mh) else: temp_conv = ( self.ds.arr(1.0, "code_velocity**2/code_temperature") / self.ds.mu ) if self.ds.srhd: if self.ds.opt_unit: c2 = pc.clight * pc.clight else: c2 = self.ds.arr(1.0, "code_velocity**2") invc2 = 1.0 / c2 if ("gamer", "Temp") not in self.field_list: mylog.warning( 'The temperature field "Temp" is not present in the dataset. Most ' 'SRHD fields will not be available!! Please set "OPT__OUTPUT_TEMP ' '= 1" in Input__Parameter and re-run the simulation.' ) if ("gamer", "Enth") not in self.field_list: mylog.warning( 'The reduced enthalpy field "Enth" is not present in the dataset. ' "Most SRHD fields will not be available!! Please set " '"OPT__OUTPUT_ENTHALPY = 1" in Input__Parameter and re-run the ' "simulation." ) # EOS functions gamma = self.ds.gamma if self.ds.eos == 1 else 0.0 fgen = SRHDFields(self.ds.eos, gamma) # temperature fraction (kT/mc^2) def _temp_fraction(field, data): return data["gamer", "Temp"] * temp_conv * invc2 self.add_field( ("gas", "temp_fraction"), function=_temp_fraction, sampling_type="cell", units="", ) # specific enthalpy def _specific_enthalpy(field, data): return data["gas", "specific_reduced_enthalpy"] + c2 self.add_field( ("gas", "specific_enthalpy"), function=_specific_enthalpy, sampling_type="cell", units=unit_system["specific_energy"], ) # sound speed if ("gamer", "Cs") not in self.field_list: def _sound_speed(field, data): out = fgen.sound_speed( data["gas", "temp_fraction"].d, data["gamer", "Enth"].d, ) return data.ds.arr(out, "code_velocity").to(unit_system["velocity"]) self.add_field( ("gas", "sound_speed"), sampling_type="cell", function=_sound_speed, units=unit_system["velocity"], ) # ratio of specific heats (gamma) def _gamma(field, data): out = fgen.gamma_field(data["gas", "temp_fraction"].d) return data.ds.arr(out, "dimensionless") self.add_field( ("gas", "gamma"), sampling_type="cell", function=_gamma, units="" ) # reduced total energy density self.alias( ("gas", "reduced_total_energy_density"), ("gamer", "Engy"), units=unit_system["pressure"], ) # total energy density def _total_energy_density(field, data): return data["gamer", "Engy"] + data["gamer", "Dens"] * c2 self.add_field( ("gas", "total_energy_density"), sampling_type="cell", function=_total_energy_density, units=unit_system["pressure"], ) # coordinate frame density self.alias( ("gas", "frame_density"), ("gamer", "Dens"), units=unit_system["density"], ) # 4-velocity spatial components def four_velocity_xyz(u): def _four_velocity(field, data): out = fgen.four_velocity_xyz( data["gamer", "Dens"].d, data["gamer", f"Mom{u.upper()}"].d, data["gamer", "Enth"].d, ) return data.ds.arr(out, "code_velocity").to(unit_system["velocity"]) return _four_velocity for u in "xyz": self.add_field( ("gas", f"four_velocity_{u}"), sampling_type="cell", function=four_velocity_xyz(u), units=unit_system["velocity"], ) # lorentz factor if ("gamer", "Lrtz") in self.field_list: def _lorentz_factor(field, data): return data["gamer", "Lrtz"] else: def 
_lorentz_factor(field, data): out = fgen.lorentz_factor( data["gamer", "Dens"].d, data["gamer", "MomX"].d, data["gamer", "MomY"].d, data["gamer", "MomZ"].d, data["gamer", "Enth"].d, ) return data.ds.arr(out, "dimensionless") self.add_field( ("gas", "lorentz_factor"), sampling_type="cell", function=_lorentz_factor, units="", ) # density def _density(field, data): return data["gamer", "Dens"] / data["gas", "lorentz_factor"] self.add_field( ("gas", "density"), sampling_type="cell", function=_density, units=unit_system["density"], ) # pressure def _pressure(field, data): p = data["gas", "density"] * data["gas", "temp_fraction"] return p * c2 # thermal energy per mass (i.e., specific) def _specific_thermal_energy(field, data): return ( data["gas", "specific_reduced_enthalpy"] - c2 * data["gas", "temp_fraction"] ) # total energy per mass def _specific_total_energy(field, data): return data["gas", "total_energy_density"] / data["gas", "density"] # kinetic energy density def _kinetic_energy_density(field, data): out = fgen.kinetic_energy_density( data["gamer", "Dens"].d, data["gamer", "MomX"].d, data["gamer", "MomY"].d, data["gamer", "MomZ"].d, data["gas", "temp_fraction"].d, data["gamer", "Enth"].d, ) return data.ds.arr(out, erg_units).to(unit_system["pressure"]) self.add_field( ("gas", "kinetic_energy_density"), sampling_type="cell", function=_kinetic_energy_density, units=unit_system["pressure"], ) # Mach number if ("gamer", "Mach") not in self.field_list: def _mach_number(field, data): out = fgen.mach_number( data["gamer", "Dens"].d, data["gamer", "MomX"].d, data["gamer", "MomY"].d, data["gamer", "MomZ"].d, data["gas", "temp_fraction"].d, data["gamer", "Enth"].d, ) return data.ds.arr(out, "dimensionless") self.add_field( ("gas", "mach_number"), sampling_type="cell", function=_mach_number, units="", ) setup_stress_energy_ideal(self) else: # not RHD # density self.alias( ("gas", "density"), ("gamer", "Dens"), units=unit_system["density"] ) self.alias( ("gas", "total_energy_density"), ("gamer", "Engy"), units=unit_system["pressure"], ) # ==================================================== # note that yt internal fields assume # [specific_thermal_energy] = [energy per mass] # [kinetic_energy_density] = [energy per volume] # [magnetic_energy_density] = [energy per volume] # and we further adopt # [specific_total_energy] = [energy per mass] # [total_energy_density] = [energy per volume] # ==================================================== # thermal energy per volume def et(data): ek = ( 0.5 * ( data["gamer", "MomX"] ** 2 + data["gamer", "MomY"] ** 2 + data["gamer", "MomZ"] ** 2 ) / data["gamer", "Dens"] ) Et = data["gamer", "Engy"] - ek if self.ds.mhd: # magnetic_energy is a yt internal field Et -= data["gas", "magnetic_energy_density"] if getattr(self.ds, "gamma_cr", None): # cosmic rays are included in this dataset Et -= data["gas", "cosmic_ray_energy_density"] return Et # thermal energy per mass (i.e., specific) def _specific_thermal_energy(field, data): return et(data) / data["gamer", "Dens"] # total energy per mass def _specific_total_energy(field, data): return data["gamer", "Engy"] / data["gamer", "Dens"] # pressure def _pressure(field, data): return et(data) * (data.ds.gamma - 1.0) # velocity def velocity_xyz(v): if ("gamer", f"Vel{v.upper()}") in self.field_list: def _velocity(field, data): return data.ds.arr( data["gamer", f"Vel{v.upper()}"].d, "code_velocity" ).to(unit_system["velocity"]) elif self.ds.srhd: def _velocity(field, data): return ( data["gas", f"four_velocity_{v}"] / 
data["gas", "lorentz_factor"] ) else: def _velocity(field, data): return data["gas", f"momentum_density_{v}"] / data["gas", "density"] return _velocity for v in "xyz": self.add_field( ("gas", f"velocity_{v}"), sampling_type="cell", function=velocity_xyz(v), units=unit_system["velocity"], ) if ("gamer", "Pres") not in self.field_list: self.add_field( ("gas", "pressure"), sampling_type="cell", function=_pressure, units=unit_system["pressure"], ) self.add_field( ("gas", "specific_thermal_energy"), sampling_type="cell", function=_specific_thermal_energy, units=unit_system["specific_energy"], ) def _thermal_energy_density(field, data): return data["gas", "density"] * data["gas", "specific_thermal_energy"] self.add_field( ("gas", "thermal_energy_density"), sampling_type="cell", function=_thermal_energy_density, units=unit_system["pressure"], ) self.add_field( ("gas", "specific_total_energy"), sampling_type="cell", function=_specific_total_energy, units=unit_system["specific_energy"], ) if getattr(self.ds, "gamma_cr", None): def _cr_pressure(field, data): return (data.ds.gamma_cr - 1.0) * data[ "gas", "cosmic_ray_energy_density" ] self.add_field( ("gas", "cosmic_ray_pressure"), _cr_pressure, sampling_type="cell", units=self.ds.unit_system["pressure"], ) # mean molecular weight if hasattr(self.ds, "mu"): def _mu(field, data): return data.ds.mu * data["index", "ones"] self.add_field( ("gas", "mean_molecular_weight"), sampling_type="cell", function=_mu, units="", ) # temperature if ("gamer", "Temp") not in self.field_list: def _temperature(field, data): return data["gas", "pressure"] / (data["gas", "density"] * temp_conv) self.add_field( ("gas", "temperature"), sampling_type="cell", function=_temperature, units=unit_system["temperature"], ) # magnetic field aliases --> magnetic_field_x/y/z if self.ds.mhd: setup_magnetic_field_aliases(self, "gamer", [f"CCMag{v}" for v in "XYZ"]) def setup_particle_fields(self, ptype): super().setup_particle_fields(ptype) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gamer/io.py0000644000175100001770000001671714714401662016313 0ustar00runnerdockerfrom itertools import groupby import numpy as np from yt.geometry.selection_routines import AlwaysSelector from yt.utilities.io_handler import BaseIOHandler from yt.utilities.logger import ytLogger as mylog # ----------------------------------------------------------------------------- # GAMER shares a similar HDF5 format, and thus io.py as well, with FLASH # ----------------------------------------------------------------------------- # group grids with consecutive indices together to improve the I/O performance # --> grids are assumed to be sorted into ascending numerical order already def grid_sequences(grids): for _k, g in groupby(enumerate(grids), lambda i_x: i_x[0] - i_x[1].id): seq = [v[1] for v in g] yield seq def particle_sequences(grids): for _k, g in groupby(enumerate(grids), lambda i_x: i_x[0] - i_x[1].id): seq = [v[1] for v in g] yield seq[0], seq[-1] class IOHandlerGAMER(BaseIOHandler): _particle_reader = False _dataset_type = "gamer" def __init__(self, ds): super().__init__(ds) self._handle = ds._handle self._group_grid = ds._group_grid self._group_particle = ds._group_particle self._field_dtype = "float64" # fixed even when FLOAT8 is off self._particle_handle = ds._particle_handle self.patch_size = ds.parameters["PatchSize"] * ds.refine_by self.pgroup = ds.refine_by**3 # number of patches in a patch group def _read_particle_coords(self, 
chunks, ptf): chunks = list(chunks) # generator --> list p_idx = self.ds.index._particle_indices # shortcuts par_posx = self._group_particle["ParPosX"] par_posy = self._group_particle["ParPosY"] par_posz = self._group_particle["ParPosZ"] # currently GAMER does not support multiple particle types assert len(ptf) == 1 ptype = list(ptf.keys())[0] for chunk in chunks: for g1, g2 in particle_sequences(chunk.objs): start = p_idx[g1.id] end = p_idx[g2.id + 1] x = np.asarray(par_posx[start:end], dtype=self._field_dtype) y = np.asarray(par_posy[start:end], dtype=self._field_dtype) z = np.asarray(par_posz[start:end], dtype=self._field_dtype) yield ptype, (x, y, z), 0.0 def _read_particle_fields(self, chunks, ptf, selector): chunks = list(chunks) # generator --> list p_idx = self.ds.index._particle_indices # shortcuts par_posx = self._group_particle["ParPosX"] par_posy = self._group_particle["ParPosY"] par_posz = self._group_particle["ParPosZ"] # currently GAMER does not support multiple particle types assert len(ptf) == 1 ptype = list(ptf.keys())[0] pfields = ptf[ptype] for chunk in chunks: for g1, g2 in particle_sequences(chunk.objs): start = p_idx[g1.id] end = p_idx[g2.id + 1] x = np.asarray(par_posx[start:end], dtype=self._field_dtype) y = np.asarray(par_posy[start:end], dtype=self._field_dtype) z = np.asarray(par_posz[start:end], dtype=self._field_dtype) mask = selector.select_points(x, y, z, 0.0) if mask is None: continue for field in pfields: data = self._group_particle[field][start:end] yield (ptype, field), data[mask] def _read_fluid_selection(self, chunks, selector, fields, size): chunks = list(chunks) # generator --> list if any((ftype != "gamer" for ftype, fname in fields)): raise NotImplementedError rv = {} for field in fields: rv[field] = np.empty(size, dtype=self._field_dtype) ng = sum(len(c.objs) for c in chunks) # c.objs is a list of grids mylog.debug( "Reading %s cells of %s fields in %s grids", size, [f2 for f1, f2 in fields], ng, ) # shortcuts ps2 = self.patch_size ps1 = ps2 // 2 for field in fields: ds = self._group_grid[field[1]] offset = 0 for chunk in chunks: for gs in grid_sequences(chunk.objs): start = (gs[0].id) * self.pgroup end = (gs[-1].id + 1) * self.pgroup buf = ds[start:end, :, :, :] ngrid = len(gs) data = np.empty((ngrid, ps2, ps2, ps2), dtype=self._field_dtype) for g in range(ngrid): pid0 = g * self.pgroup data[g, 0:ps1, 0:ps1, 0:ps1] = buf[pid0 + 0, :, :, :] data[g, 0:ps1, 0:ps1, ps1:ps2] = buf[pid0 + 1, :, :, :] data[g, 0:ps1, ps1:ps2, 0:ps1] = buf[pid0 + 2, :, :, :] data[g, ps1:ps2, 0:ps1, 0:ps1] = buf[pid0 + 3, :, :, :] data[g, 0:ps1, ps1:ps2, ps1:ps2] = buf[pid0 + 4, :, :, :] data[g, ps1:ps2, ps1:ps2, 0:ps1] = buf[pid0 + 5, :, :, :] data[g, ps1:ps2, 0:ps1, ps1:ps2] = buf[pid0 + 6, :, :, :] data[g, ps1:ps2, ps1:ps2, ps1:ps2] = buf[pid0 + 7, :, :, :] data = data.transpose() for i, g in enumerate(gs): offset += g.select(selector, data[..., i], rv[field], offset) return rv def _read_chunk_data(self, chunk, fields): rv = {} if len(chunk.objs) == 0: return rv for g in chunk.objs: rv[g.id] = {} # Split into particles and non-particles fluid_fields, particle_fields = [], [] for ftype, fname in fields: if ftype in self.ds.particle_types: particle_fields.append((ftype, fname)) else: fluid_fields.append((ftype, fname)) # particles if len(particle_fields) > 0: selector = AlwaysSelector(self.ds) rv.update(self._read_particle_selection([chunk], selector, particle_fields)) # fluid if len(fluid_fields) == 0: return rv ps2 = self.patch_size ps1 = ps2 // 2 for field in 
fluid_fields: ds = self._group_grid[field[1]] for gs in grid_sequences(chunk.objs): start = (gs[0].id) * self.pgroup end = (gs[-1].id + 1) * self.pgroup buf = ds[start:end, :, :, :] ngrid = len(gs) data = np.empty((ngrid, ps2, ps2, ps2), dtype=self._field_dtype) for g in range(ngrid): pid0 = g * self.pgroup data[g, 0:ps1, 0:ps1, 0:ps1] = buf[pid0 + 0, :, :, :] data[g, 0:ps1, 0:ps1, ps1:ps2] = buf[pid0 + 1, :, :, :] data[g, 0:ps1, ps1:ps2, 0:ps1] = buf[pid0 + 2, :, :, :] data[g, ps1:ps2, 0:ps1, 0:ps1] = buf[pid0 + 3, :, :, :] data[g, 0:ps1, ps1:ps2, ps1:ps2] = buf[pid0 + 4, :, :, :] data[g, ps1:ps2, ps1:ps2, 0:ps1] = buf[pid0 + 5, :, :, :] data[g, ps1:ps2, 0:ps1, ps1:ps2] = buf[pid0 + 6, :, :, :] data[g, ps1:ps2, ps1:ps2, ps1:ps2] = buf[pid0 + 7, :, :, :] data = data.transpose() for i, g in enumerate(gs): rv[g.id][field] = data[..., i] return rv ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gamer/misc.py0000644000175100001770000000000014714401662016611 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3231523 yt-4.4.0/yt/frontends/gamer/tests/0000755000175100001770000000000014714401715016457 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gamer/tests/__init__.py0000644000175100001770000000000014714401662020557 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gamer/tests/test_outputs.py0000644000175100001770000000757214714401662021627 0ustar00runnerdockerfrom numpy.testing import assert_array_almost_equal, assert_equal from yt.frontends.gamer.api import GAMERDataset from yt.testing import requires_file, units_override_check from yt.utilities.answer_testing.framework import ( data_dir_load, requires_ds, small_patch_amr, ) jet = "InteractingJets/jet_000002" _fields_jet = ( ("gas", "temperature"), ("gas", "density"), ("gas", "velocity_magnitude"), ) jet_units = { "length_unit": (1.0, "kpc"), "time_unit": (3.08567758096e13, "s"), "mass_unit": (1.4690033e36, "g"), } @requires_ds(jet, big_data=True) def test_jet(): ds = data_dir_load(jet, kwargs={"units_override": jet_units}) assert_equal(str(ds), "jet_000002") for test in small_patch_amr(ds, _fields_jet): test_jet.__name__ = test.description yield test psiDM = "WaveDarkMatter/psiDM_000020" _fields_psiDM = ("Dens", "Real", "Imag") @requires_ds(psiDM, big_data=True) def test_psiDM(): ds = data_dir_load(psiDM) assert_equal(str(ds), "psiDM_000020") for test in small_patch_amr(ds, _fields_psiDM): test_psiDM.__name__ = test.description yield test plummer = "Plummer/plummer_000000" _fields_plummer = (("gamer", "ParDens"), ("deposit", "io_cic")) @requires_ds(plummer, big_data=True) def test_plummer(): ds = data_dir_load(plummer) assert_equal(str(ds), "plummer_000000") for test in small_patch_amr(ds, _fields_plummer): test_plummer.__name__ = test.description yield test mhdvortex = "MHDOrszagTangVortex/Data_000018" _fields_mhdvortex = ( ("gamer", "CCMagX"), ("gamer", "CCMagY"), ("gas", "magnetic_energy_density"), ) @requires_ds(mhdvortex, big_data=True) def test_mhdvortex(): ds = data_dir_load(mhdvortex) assert_equal(str(ds), "Data_000018") for test in small_patch_amr(ds, _fields_mhdvortex): test_mhdvortex.__name__ = test.description yield test @requires_file(psiDM) def test_GAMERDataset(): assert isinstance(data_dir_load(psiDM), GAMERDataset) 
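# (Editor's sketch, not part of the test suite: the jet_units dict defined
# above is what units_override applies at load time, e.g.
#
#     import yt
#     ds = yt.load("InteractingJets/jet_000002", units_override=jet_units)
#     assert ds.length_unit == ds.quan(1.0, "kpc")
#
# units_override_check below exercises this round-trip generically.)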
@requires_file(jet) def test_units_override(): units_override_check(jet) jiw = "JetICMWall/Data_000060" _fields_jiw = ( ("gas", "four_velocity_magnitude"), ("gas", "density"), ("gas", "gamma"), ("gas", "temperature"), ) @requires_ds(jiw, big_data=True) def test_jiw(): ds = data_dir_load(jiw) assert_equal(str(ds), "Data_000060") for test in small_patch_amr(ds, _fields_jiw): test_jiw.__name__ = test.description yield test @requires_ds(jiw, big_data=True) def test_stress_energy(): axes = "txyz" ds = data_dir_load(jiw) center = ds.arr([3.0, 10.0, 10.0], "kpc") sp = ds.sphere(center, (1.0, "kpc")) c2 = ds.units.clight**2 inv_c2 = 1.0 / c2 rho = sp["gas", "density"] p = sp["gas", "pressure"] e = sp["gas", "thermal_energy_density"] h = rho + (e + p) * inv_c2 for mu in range(4): for nu in range(4): # matrix is symmetric so only do the upper-right part if nu >= mu: Umu = sp[f"four_velocity_{axes[mu]}"] Unu = sp[f"four_velocity_{axes[nu]}"] Tmunu = h * Umu * Unu if mu != nu: assert_array_almost_equal(sp[f"T{mu}{nu}"], sp[f"T{nu}{mu}"]) else: Tmunu += p assert_array_almost_equal(sp[f"T{mu}{nu}"], Tmunu) cr_shock = "CRShockTube/Data_000005" @requires_ds(cr_shock) def test_cosmic_rays(): ds = data_dir_load(cr_shock) assert_array_almost_equal(ds.gamma_cr, 4.0 / 3.0) ad = ds.all_data() p_cr = ad["gas", "cosmic_ray_pressure"] e_cr = ad["gas", "cosmic_ray_energy_density"] assert_array_almost_equal(p_cr, e_cr / 3.0) e_kin = ad["gas", "kinetic_energy_density"] e_int = ad["gas", "thermal_energy_density"] e_tot = ad["gas", "total_energy_density"] assert_array_almost_equal(e_tot, e_kin + e_int + e_cr) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3271525 yt-4.4.0/yt/frontends/gdf/0000755000175100001770000000000014714401715014762 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gdf/__init__.py0000644000175100001770000000000014714401662017062 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gdf/api.py0000644000175100001770000000027514714401662016112 0ustar00runnerdockerfrom . 
import tests from .data_structures import GDFDataset, GDFGrid, GDFHierarchy from .fields import GDFFieldInfo from .io import IOHandlerGDFHDF5 add_gdf_field = GDFFieldInfo.add_field ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gdf/data_structures.py0000644000175100001770000002737514714401662020567 0ustar00runnerdockerimport os import weakref from functools import cached_property import numpy as np from yt.data_objects.index_subobjects.grid_patch import AMRGridPatch from yt.data_objects.static_output import Dataset from yt.funcs import just_one, setdefaultattr from yt.geometry.api import Geometry from yt.geometry.grid_geometry_handler import GridIndex from yt.units.dimensions import dimensionless as sympy_one # type: ignore from yt.units.unit_object import Unit # type: ignore from yt.units.unit_systems import unit_system_registry # type: ignore from yt.utilities.exceptions import YTGDFUnknownGeometry from yt.utilities.lib.misc_utilities import get_box_grids_level from yt.utilities.logger import ytLogger as mylog from yt.utilities.on_demand_imports import _h5py as h5py from .fields import GDFFieldInfo GEOMETRY_TRANS = { 0: "cartesian", 1: "polar", 2: "cylindrical", 3: "spherical", } class GDFGrid(AMRGridPatch): _id_offset = 0 def __init__(self, id, index, level, start, dimensions): AMRGridPatch.__init__(self, id, filename=index.index_filename, index=index) self.Parent = [] self.Children = [] self.Level = level self.start_index = start.copy() self.stop_index = self.start_index + dimensions self.ActiveDimensions = dimensions.copy() def _setup_dx(self): # So first we figure out what the index is. We don't assume # that dx=dy=dz , at least here. We probably do elsewhere. id = self.id - self._id_offset if len(self.Parent) > 0: self.dds = self.Parent[0].dds / self.ds.refine_by else: LE, RE = self.index.grid_left_edge[id, :], self.index.grid_right_edge[id, :] self.dds = np.array((RE - LE) / self.ActiveDimensions) self.field_data["dx"], self.field_data["dy"], self.field_data["dz"] = self.dds self.dds = self.ds.arr(self.dds, "code_length") class GDFHierarchy(GridIndex): grid = GDFGrid def __init__(self, ds, dataset_type="grid_data_format"): self.dataset = weakref.proxy(ds) self.index_filename = self.dataset.parameter_filename h5f = h5py.File(self.index_filename, mode="r") self.dataset_type = dataset_type GridIndex.__init__(self, ds, dataset_type) self.directory = os.path.dirname(self.index_filename) h5f.close() def _detect_output_fields(self): h5f = h5py.File(self.index_filename, mode="r") self.field_list = [("gdf", str(f)) for f in h5f["field_types"].keys()] h5f.close() def _count_grids(self): h5f = h5py.File(self.index_filename, mode="r") self.num_grids = h5f["/grid_parent_id"].shape[0] h5f.close() def _parse_index(self): h5f = h5py.File(self.index_filename, mode="r") dxs = [] self.grids = np.empty(self.num_grids, dtype="object") levels = (h5f["grid_level"][:]).copy() glis = (h5f["grid_left_index"][:]).copy() gdims = (h5f["grid_dimensions"][:]).copy() active_dims = ~( (np.max(gdims, axis=0) == 1) & (self.dataset.domain_dimensions == 1) ) for i in range(levels.shape[0]): self.grids[i] = self.grid(i, self, levels[i], glis[i], gdims[i]) self.grids[i]._level_id = levels[i] dx = ( self.dataset.domain_right_edge - self.dataset.domain_left_edge ) / self.dataset.domain_dimensions dx[active_dims] /= self.dataset.refine_by ** levels[i] dxs.append(dx.in_units("code_length")) dx = self.dataset.arr(dxs, units="code_length") 
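# Grid edges follow from the integer left indices: each grid's left edge
# is domain_left_edge + grid_left_index * dx(level), and its right edge
# adds grid_dimensions cells of the same dx.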
self.grid_left_edge = self.dataset.domain_left_edge + dx * glis self.grid_dimensions = gdims.astype("int32") self.grid_right_edge = self.grid_left_edge + dx * self.grid_dimensions self.grid_particle_count = h5f["grid_particle_count"][:] del levels, glis, gdims h5f.close() def _populate_grid_objects(self): mask = np.empty(self.grids.size, dtype="int32") for g in self.grids: g._prepare_grid() g._setup_dx() for gi, g in enumerate(self.grids): g.Children = self._get_grid_children(g) for g1 in g.Children: g1.Parent.append(g) get_box_grids_level( self.grid_left_edge[gi, :], self.grid_right_edge[gi, :], self.grid_levels[gi].item(), self.grid_left_edge, self.grid_right_edge, self.grid_levels, mask, ) m = mask.astype("bool") m[gi] = False siblings = self.grids[gi:][m[gi:]] if len(siblings) > 0: g.OverlappingSiblings = siblings.tolist() self.max_level = self.grid_levels.max() def _get_box_grids(self, left_edge, right_edge): """ Return all grids that overlap the region between left_edge and right_edge. """ eps = np.finfo(np.float64).eps grid_i = np.where( np.all((self.grid_right_edge - left_edge) > eps, axis=1) & np.all((right_edge - self.grid_left_edge) > eps, axis=1) ) return self.grids[grid_i], grid_i def _get_grid_children(self, grid): mask = np.zeros(self.num_grids, dtype="bool") grids, grid_ind = self._get_box_grids(grid.LeftEdge, grid.RightEdge) mask[grid_ind] = True return [g for g in self.grids[mask] if g.Level == grid.Level + 1] class GDFDataset(Dataset): _load_requirements = ["h5py"] _index_class = GDFHierarchy _field_info_class = GDFFieldInfo def __init__( self, filename, dataset_type="grid_data_format", storage_filename=None, geometry=None, units_override=None, unit_system="cgs", default_species_fields=None, ): self.geometry = geometry self.fluid_types += ("gdf",) Dataset.__init__( self, filename, dataset_type, units_override=units_override, unit_system=unit_system, default_species_fields=default_species_fields, ) self.storage_filename = storage_filename def _set_code_unit_attributes(self): """ Generate the conversions to various physical units based on the parameter file. """ # This should be improved. h5f = h5py.File(self.parameter_filename, mode="r") for field_name in h5f["/field_types"]: current_field = h5f[f"/field_types/{field_name}"] if "field_to_cgs" in current_field.attrs: field_conv = current_field.attrs["field_to_cgs"] self.field_units[field_name] = just_one(field_conv) elif "field_units" in current_field.attrs: field_units = current_field.attrs["field_units"] if isinstance(field_units, str): # already decoded by h5py current_field_units = field_units else: current_field_units = just_one(field_units).decode("utf8") self.field_units[field_name] = current_field_units else: self.field_units[field_name] = "" if "dataset_units" in h5f: for unit_name in h5f["/dataset_units"]: current_unit = h5f[f"/dataset_units/{unit_name}"] value = current_unit[()] unit = current_unit.attrs["unit"] # need to convert to a Unit object and check dimensions # because unit can be things like # 'dimensionless/dimensionless**3' so naive string # comparisons are insufficient unit = Unit(unit, registry=self.unit_registry) if unit_name.endswith("_unit") and unit.dimensions is sympy_one: # Catch code units and if they are dimensionless, # assign CGS units. setdefaultattr will catch code units # which have already been set via units_override.
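# Illustrative walk-through (hypothetical file entry, not from a real
# dataset): a /dataset_units/length_unit dataset holding 3.0857e21
# with a "dimensionless" unit attribute passes the check above, so it
# resolves to unit_system_registry["cgs"]["length"] (i.e. "cm") and
# the attribute becomes self.quan(3.0857e21, "cm"). The
# "magnetic" -> "magnetic_field_cgs" replacement below exists because
# that is the one unit whose name differs in the cgs unit system
# registry.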
un = unit_name[:-5] un = un.replace("magnetic", "magnetic_field_cgs", 1) unit = unit_system_registry["cgs"][un] setdefaultattr(self, unit_name, self.quan(value, unit)) setdefaultattr(self, unit_name, self.quan(value, unit)) if unit_name in h5f["/field_types"]: if unit_name in self.field_units: mylog.warning( "'field_units' was overridden by 'dataset_units/%s'", unit_name, ) self.field_units[unit_name] = str(unit) else: setdefaultattr(self, "length_unit", self.quan(1.0, "cm")) setdefaultattr(self, "mass_unit", self.quan(1.0, "g")) setdefaultattr(self, "time_unit", self.quan(1.0, "s")) h5f.close() @cached_property def unique_identifier(self) -> str: with h5py.File(self.parameter_filename, mode="r") as handle: return str(handle["/simulation_parameters"].attrs["unique_identifier"]) def _parse_parameter_file(self): self._handle = h5py.File(self.parameter_filename, mode="r") if "data_software" in self._handle["gridded_data_format"].attrs: self.data_software = self._handle["gridded_data_format"].attrs[ "data_software" ] else: self.data_software = "unknown" sp = self._handle["/simulation_parameters"].attrs if self.geometry is None: geometry = just_one(sp.get("geometry", 0)) try: self.geometry = Geometry(GEOMETRY_TRANS[geometry]) except KeyError as e: raise YTGDFUnknownGeometry(geometry) from e self.parameters.update(sp) self.domain_left_edge = sp["domain_left_edge"][:] self.domain_right_edge = sp["domain_right_edge"][:] self.domain_dimensions = sp["domain_dimensions"][:] refine_by = sp["refine_by"] if refine_by is None: refine_by = 2 self.refine_by = refine_by self.dimensionality = sp["dimensionality"] self.current_time = sp["current_time"] self.cosmological_simulation = sp["cosmological_simulation"] if sp["num_ghost_zones"] != 0: raise RuntimeError self.num_ghost_zones = sp["num_ghost_zones"] self.field_ordering = sp["field_ordering"] self.boundary_conditions = sp["boundary_conditions"][:] self._periodicity = tuple(bnd == 0 for bnd in self.boundary_conditions[::2]) if self.cosmological_simulation: self.current_redshift = sp["current_redshift"] self.omega_lambda = sp["omega_lambda"] self.omega_matter = sp["omega_matter"] self.hubble_constant = sp["hubble_constant"] else: self.current_redshift = 0.0 self.omega_lambda = 0.0 self.omega_matter = 0.0 self.hubble_constant = 0.0 self.cosmological_simulation = 0 # Hardcode time conversion for now. self.parameters["Time"] = 1.0 # Hardcode for now until field staggering is supported. self.parameters["HydroMethod"] = 0 self._handle.close() del self._handle @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if cls._missing_load_requirements(): return False try: fileh = h5py.File(filename, mode="r") if "gridded_data_format" in fileh: fileh.close() return True fileh.close() except Exception: pass return False def __str__(self): return self.basename.rsplit(".", 1)[0] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gdf/definitions.py0000644000175100001770000000010614714401662017645 0ustar00runnerdocker""" Various definitions for various other modules and routines """ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gdf/fields.py0000644000175100001770000000212714714401662016605 0ustar00runnerdockerfrom yt.fields.field_info_container import FieldInfoContainer # The nice thing about GDF is that for the most part, everything is in CGS, # with potentially a scalar modification. 
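# For example (illustrative number): a file that stores density in
# code units with a field_to_cgs attribute of 1.0e-24 is read by
# GDFDataset._set_code_unit_attributes above, which records the scalar
# in self.field_units so that ("gdf", "density") values are scaled by
# 1.0e-24 on conversion to g/cm**3.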
class GDFFieldInfo(FieldInfoContainer): known_other_fields = ( ("density", ("g/cm**3", ["density"], None)), ("specific_energy", ("erg/g", ["specific_thermal_energy"], None)), ("pressure", ("erg/cm**3", ["pressure"], None)), ("temperature", ("K", ["temperature"], None)), ("velocity_x", ("cm/s", ["velocity_x"], None)), ("velocity_y", ("cm/s", ["velocity_y"], None)), ("velocity_z", ("cm/s", ["velocity_z"], None)), ("mag_field_x", ("gauss", ["magnetic_field_x"], None)), ("mag_field_y", ("gauss", ["magnetic_field_y"], None)), ("mag_field_z", ("gauss", ["magnetic_field_z"], None)), ) known_particle_fields = () def setup_fluid_fields(self): from yt.fields.magnetic_field import setup_magnetic_field_aliases setup_magnetic_field_aliases( self, "gdf", [f"magnetic_field_{ax}" for ax in "xyz"] ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gdf/io.py0000644000175100001770000000611014714401662015742 0ustar00runnerdockerimport numpy as np from yt.funcs import mylog from yt.geometry.selection_routines import GridSelector from yt.utilities.io_handler import BaseParticleIOHandler from yt.utilities.on_demand_imports import _h5py as h5py def _grid_dname(grid_id): return "/data/grid_%010i" % grid_id def _field_dname(grid_id, field_name): return f"{_grid_dname(grid_id)}/{field_name}" # TODO all particle bits were removed class IOHandlerGDFHDF5(BaseParticleIOHandler): _dataset_type = "grid_data_format" _offset_string = "data:offsets=0" _data_string = "data:datatype=0" def _read_fluid_selection(self, chunks, selector, fields, size): rv = {} chunks = list(chunks) if isinstance(selector, GridSelector): if not (len(chunks) == len(chunks[0].objs) == 1): raise RuntimeError grid = chunks[0].objs[0] h5f = h5py.File(grid.filename, mode="r") gds = h5f.get(_grid_dname(grid.id)) for ftype, fname in fields: if self.ds.field_ordering == 1: rv[ftype, fname] = gds.get(fname)[()].swapaxes(0, 2) else: rv[ftype, fname] = gds.get(fname)[()] h5f.close() return rv if size is None: size = sum(grid.count(selector) for chunk in chunks for grid in chunk.objs) if any((ftype != "gdf" for ftype, fname in fields)): raise NotImplementedError for field in fields: ftype, fname = field fsize = size # check the dtype instead rv[field] = np.empty(fsize, dtype="float64") ngrids = sum(len(chunk.objs) for chunk in chunks) mylog.debug( "Reading %s cells of %s fields in %s blocks", size, [fn for ft, fn in fields], ngrids, ) ind = 0 for chunk in chunks: fid = None for grid in chunk.objs: if grid.filename is None: continue if fid is None: fid = h5py.h5f.open( bytes(grid.filename, "utf-8"), h5py.h5f.ACC_RDONLY ) if self.ds.field_ordering == 1: # check the dtype instead data = np.empty(grid.ActiveDimensions[::-1], dtype="float64") data_view = data.swapaxes(0, 2) else: # check the dtype instead data_view = data = np.empty(grid.ActiveDimensions, dtype="float64") for field in fields: ftype, fname = field dg = h5py.h5d.open( fid, bytes(_field_dname(grid.id, fname), "utf-8") ) dg.read(h5py.h5s.ALL, h5py.h5s.ALL, data) # reads into the preallocated buffer nd = grid.select(selector, data_view, rv[field], ind) ind += nd # nd is identical for every field of this grid, so advancing once per grid by the last value keeps the rv slices aligned if fid is not None: fid.close() return rv ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gdf/misc.py0000644000175100001770000000000014714401662016256 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3271525
yt-4.4.0/yt/frontends/gdf/tests/0000755000175100001770000000000014714401715016124 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gdf/tests/__init__.py0000644000175100001770000000000014714401662020224 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gdf/tests/conftest.py0000644000175100001770000000162214714401662020325 0ustar00runnerdocker""" Title: conftest.py Purpose: Contains fixtures for loading data. """ # Test data sedov = "sedov/sedov_tst_0004.h5" # Test parameters. Format: # {test1: {param1 : [(val1, val2,...), (id1, id2,...)], param2 : ...}, test2: ...} test_params = { "test_sedov_tunnel": { "axis": [(0, 1, 2), ("0", "1", "2")], "dobj": [(None, ("sphere", ("max", (0.1, "unitary")))), ("None", "sphere")], "weight": [(None, ("gas", "density")), ("None", "density")], "field": [(("gas", "density"), ("gas", "velocity_x")), ("density", "vx")], } } def pytest_generate_tests(metafunc): # Loop over each test in test_params for test_name, params in test_params.items(): if metafunc.function.__name__ == test_name: # Parametrize for param_name, param_vals in params.items(): metafunc.parametrize(param_name, param_vals[0], ids=param_vals[1]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gdf/tests/test_outputs.py0000644000175100001770000000154614714401662021267 0ustar00runnerdockerimport pytest from yt.frontends.gdf.api import GDFDataset from yt.testing import requires_file, requires_module, units_override_check from yt.utilities.answer_testing.answer_tests import small_patch_amr # Test data sedov = "sedov/sedov_tst_0004.h5" @pytest.mark.answer_test class TestGDF: answer_file = None saved_hashes = None answer_version = "000" @pytest.mark.usefixtures("hashing") @pytest.mark.parametrize("ds", [sedov], indirect=True) def test_sedov_tunnel(self, axis, dobj, weight, field, ds): self.hashes.update(small_patch_amr(ds, field, weight, axis, dobj)) @pytest.mark.parametrize("ds", [sedov], indirect=True) def test_GDFDataset(self, ds): assert isinstance(ds, GDFDataset) @requires_module("h5py") @requires_file(sedov) def test_units_override(self): units_override_check(sedov) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gdf/tests/test_outputs_nose.py0000644000175100001770000000150514714401662022306 0ustar00runnerdockerfrom numpy.testing import assert_equal from yt.frontends.gdf.api import GDFDataset from yt.testing import requires_file, requires_module, units_override_check from yt.utilities.answer_testing.framework import ( data_dir_load, requires_ds, small_patch_amr, ) _fields = [("gas", "density"), ("gas", "velocity_x")] sedov = "sedov/sedov_tst_0004.h5" @requires_ds(sedov) def test_sedov_tunnel(): ds = data_dir_load(sedov) assert_equal(str(ds), "sedov_tst_0004") for test in small_patch_amr(ds, _fields): test_sedov_tunnel.__name__ = test.description yield test @requires_module("h5py") @requires_file(sedov) def test_GDFDataset(): assert isinstance(data_dir_load(sedov), GDFDataset) @requires_module("h5py") @requires_file(sedov) def test_units_override(): units_override_check(sedov) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3271525 yt-4.4.0/yt/frontends/gizmo/0000755000175100001770000000000014714401715015347 
5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gizmo/__init__.py0000644000175100001770000000000014714401662017447 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gizmo/api.py0000644000175100001770000000011514714401662016470 0ustar00runnerdockerfrom .data_structures import GizmoDataset from .fields import GizmoFieldInfo ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gizmo/data_structures.py0000644000175100001770000001422714714401662021144 0ustar00runnerdockerimport os import numpy as np from yt.frontends.gadget.data_structures import GadgetHDF5Dataset from yt.utilities.cosmology import Cosmology from yt.utilities.logger import ytLogger as mylog from yt.utilities.on_demand_imports import _h5py as h5py from .fields import GizmoFieldInfo class GizmoDataset(GadgetHDF5Dataset): _load_requirements = ["h5py"] _field_info_class = GizmoFieldInfo @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if cls._missing_load_requirements(): return False need_groups = ["Header"] veto_groups = ["Config", "Constants", "FOF", "Group", "Subhalo"] valid = True valid_fname = filename # If passed arg is a directory, look for the .0 file in that dir if os.path.isdir(filename): valid_files = [] for f in os.listdir(filename): fname = os.path.join(filename, f) fext = os.path.splitext(fname)[-1] if ( (".0" in f) and (fext not in {".ewah", ".kdtree"}) and os.path.isfile(fname) ): valid_files.append(fname) if len(valid_files) == 0: valid = False elif len(valid_files) > 1: valid = False else: valid_fname = valid_files[0] try: fh = h5py.File(valid_fname, mode="r") valid = all(ng in fh["/"] for ng in need_groups) and not any( vg in fh["/"] for vg in veto_groups ) # From Apr 2021, 7f1f06f, public gizmo includes a header variable # GIZMO_version, which is set to the year of the most recent commit # We should prefer this to checking the metallicity, which might # not exist if "GIZMO_version" not in fh["/Header"].attrs: dmetal = "/PartType0/Metallicity" if dmetal not in fh or ( fh[dmetal].ndim > 1 and fh[dmetal].shape[1] < 11 ): valid = False fh.close() except Exception: valid = False return valid def _set_code_unit_attributes(self): super()._set_code_unit_attributes() def _parse_parameter_file(self): hvals = self._get_hvals() self.dimensionality = 3 self.refine_by = 2 self.parameters["HydroMethod"] = "sph" # Set standard values # We may have an overridden bounding box. if self.domain_left_edge is None and hvals["BoxSize"] != 0: self.domain_left_edge = np.zeros(3, "float64") self.domain_right_edge = np.ones(3, "float64") * hvals["BoxSize"] self.domain_dimensions = np.ones(3, "int32") self._periodicity = (True, True, True) self.cosmological_simulation = 1 self.current_redshift = hvals.get("Redshift", 0.0) if "Redshift" not in hvals: mylog.info("Redshift is not set in Header. Assuming z=0.") if "ComovingIntegrationOn" in hvals: # In 1d8479, Nov 2020, public GIZMO updated the names of the Omegas # to include an _, added baryons and radiation and added the # ComovingIntegrationOn field. 
ComovingIntegrationOn is always set, # but the Omega's are only included if ComovingIntegrationOn is true mylog.debug("Reading cosmological parameters using post-1d8479 format") self.cosmological_simulation = hvals["ComovingIntegrationOn"] if self.cosmological_simulation: self.omega_lambda = hvals["Omega_Lambda"] self.omega_matter = hvals["Omega_Matter"] self.omega_baryon = hvals["Omega_Baryon"] self.omega_radiation = hvals["Omega_Radiation"] self.hubble_constant = hvals["HubbleParam"] elif "OmegaLambda" in hvals: # Should still support GIZMO versions prior to 1d8479 too mylog.info( "ComovingIntegrationOn does not exist, falling back to OmegaLambda", ) self.omega_lambda = hvals["OmegaLambda"] self.omega_matter = hvals["Omega0"] self.hubble_constant = hvals["HubbleParam"] self.cosmological_simulation = self.omega_lambda != 0.0 else: # If these are not set it is definitely not a cosmological dataset. mylog.debug("No cosmological information found, assuming defaults") self.omega_lambda = 0.0 self.omega_matter = 0.0 # Just in case somebody asks for it. self.cosmological_simulation = 0 # Hubble is set below for Omega Lambda = 0. if not self.cosmological_simulation: mylog.info( "ComovingIntegrationOn != 1 or (not found " "and OmegaLambda is 0.0), so we are turning off Cosmology.", ) self.hubble_constant = 1.0 # So that scaling comes out correct self.current_redshift = 0.0 # This may not be correct. self.current_time = hvals["Time"] else: # Now we calculate our time based on the cosmology, because in # ComovingIntegration hvals["Time"] will in fact be the expansion # factor, not the actual integration time, so we re-calculate # global time from our Cosmology. cosmo = Cosmology( hubble_constant=self.hubble_constant, omega_matter=self.omega_matter, omega_lambda=self.omega_lambda, ) self.current_time = cosmo.lookback_time(self.current_redshift, 1e6) mylog.info( "Calculating time from %0.3e to be %0.3e seconds", hvals["Time"], self.current_time, ) self.parameters = hvals prefix = os.path.join(self.directory, self.basename.split(".", 1)[0]) if hvals["NumFiles"] > 1: self.filename_template = f"{prefix}.%(num)s{self._suffix}" else: self.filename_template = self.parameter_filename self.file_count = hvals["NumFiles"] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gizmo/fields.py0000644000175100001770000001631414714401662017175 0ustar00runnerdockerfrom yt._typing import KnownFieldsT from yt.fields.field_info_container import FieldInfoContainer from yt.fields.species_fields import add_species_field_by_density, setup_species_fields from yt.frontends.gadget.fields import GadgetFieldInfo from yt.frontends.sph.fields import SPHFieldInfo metal_elements = ["He", "C", "N", "O", "Ne", "Mg", "Si", "S", "Ca", "Fe"] class GizmoFieldInfo(GadgetFieldInfo): # The known fields list is according to the GIZMO User Guide. 
See # http://www.tapir.caltech.edu/~phopkins/Site/GIZMO_files/gizmo_documentation.html#snaps-reading known_particle_fields: KnownFieldsT = ( ("Coordinates", ("code_length", ["particle_position"], None)), ("Velocities", ("code_velocity", ["particle_velocity"], None)), ("ParticleIDs", ("", ["particle_index"], None)), ("Masses", ("code_mass", ["particle_mass"], None)), ("InternalEnergy", ("code_specific_energy", ["specific_thermal_energy"], None)), ("Density", ("code_mass / code_length**3", ["density"], None)), ("SmoothingLength", ("code_length", ["smoothing_length"], None)), ("ElectronAbundance", ("", [], None)), ("NeutralHydrogenAbundance", ("", [], None)), ("StarFormationRate", ("Msun / yr", ["star_formation_rate"], None)), ("Metallicity", ("code_metallicity", ["metallicity"], None)), ("Metallicity_00", ("", ["metallicity"], None)), ("Metallicity_01", ("", ["He_metallicity"], None)), ("Metallicity_02", ("", ["C_metallicity"], None)), ("Metallicity_03", ("", ["N_metallicity"], None)), ("Metallicity_04", ("", ["O_metallicity"], None)), ("Metallicity_05", ("", ["Ne_metallicity"], None)), ("Metallicity_06", ("", ["Mg_metallicity"], None)), ("Metallicity_07", ("", ["Si_metallicity"], None)), ("Metallicity_08", ("", ["S_metallicity"], None)), ("Metallicity_09", ("", ["Ca_metallicity"], None)), ("Metallicity_10", ("", ["Fe_metallicity"], None)), ("ArtificialViscosity", ("", [], None)), ("MagneticField", ("code_magnetic", ["particle_magnetic_field"], None)), ("DivergenceOfMagneticField", ("code_magnetic / code_length", [], None)), ("StellarFormationTime", ("", [], None)), # "StellarFormationTime" has different meanings in (non-)cosmological # runs, so units are left blank here. ("BH_Mass", ("code_mass", [], None)), ("BH_Mdot", ("code_mass / code_time", [], None)), ("BH_Mass_AlphaDisk", ("code_mass", [], None)), ) def __init__(self, *args, **kwargs): super(SPHFieldInfo, self).__init__(*args, **kwargs) if ("PartType0", "Metallicity_00") in self.field_list: self.nuclei_names = metal_elements self.species_names = ["H_p0", "H_p1"] + metal_elements else: self.nuclei_names = [] def setup_particle_fields(self, ptype): FieldInfoContainer.setup_particle_fields(self, ptype) if ptype in ("PartType0",): self.setup_gas_particle_fields(ptype) setup_species_fields(self, ptype) if ptype in ("PartType4",): self.setup_star_particle_fields(ptype) def setup_gas_particle_fields(self, ptype): from yt.fields.magnetic_field import setup_magnetic_field_aliases super().setup_gas_particle_fields(ptype) def _h_p0_density(field, data): x_H = 1.0 - data[ptype, "He_metallicity"] - data[ptype, "metallicity"] return ( x_H * data[ptype, "density"] * data[ptype, "NeutralHydrogenAbundance"] ) self.add_field( (ptype, "H_p0_density"), sampling_type="particle", function=_h_p0_density, units=self.ds.unit_system["density"], ) add_species_field_by_density(self, ptype, "H") def _h_p1_density(field, data): x_H = 1.0 - data[ptype, "He_metallicity"] - data[ptype, "metallicity"] return ( x_H * data[ptype, "density"] * (1.0 - data[ptype, "NeutralHydrogenAbundance"]) ) self.add_field( (ptype, "H_p1_density"), sampling_type="particle", function=_h_p1_density, units=self.ds.unit_system["density"], ) add_species_field_by_density(self, ptype, "H_p1") def _nuclei_mass_density_field(field, data): species = field.name[1][: field.name[1].find("_")] return data[ptype, "density"] * data[ptype, f"{species}_metallicity"] for species in ["H", "H_p0", "H_p1"]: for suf in ["_density", "_number_density"]: field = f"{species}{suf}" self.alias(("gas", field), 
(ptype, field)) if (ptype, "ElectronAbundance") in self.field_list: def _el_number_density(field, data): return ( data[ptype, "ElectronAbundance"] * data[ptype, "H_nuclei_density"] ) self.add_field( (ptype, "El_number_density"), sampling_type="particle", function=_el_number_density, units=self.ds.unit_system["number_density"], ) self.alias(("gas", "El_number_density"), (ptype, "El_number_density")) for species in self.nuclei_names: self.add_field( (ptype, f"{species}_nuclei_mass_density"), sampling_type="particle", function=_nuclei_mass_density_field, units=self.ds.unit_system["density"], ) for suf in ["_nuclei_mass_density", "_metallicity"]: field = f"{species}{suf}" self.alias(("gas", field), (ptype, field)) def _metal_density_field(field, data): return data[ptype, "metallicity"] * data[ptype, "density"] self.add_field( (ptype, "metal_density"), sampling_type="local", function=_metal_density_field, units=self.ds.unit_system["density"], ) self.alias(("gas", "metal_density"), (ptype, "metal_density")) magnetic_field = "MagneticField" if (ptype, magnetic_field) in self.field_list: setup_magnetic_field_aliases(self, ptype, magnetic_field) def setup_star_particle_fields(self, ptype): def _creation_time(field, data): if data.ds.cosmological_simulation: a_form = data[ptype, "StellarFormationTime"] z_form = 1 / a_form - 1 creation_time = data.ds.cosmology.t_from_z(z_form) else: t_form = data[ptype, "StellarFormationTime"] creation_time = data.ds.arr(t_form, "code_time") return creation_time self.add_field( (ptype, "creation_time"), sampling_type="particle", function=_creation_time, units=self.ds.unit_system["time"], ) def _age(field, data): return data.ds.current_time - data[ptype, "creation_time"] self.add_field( (ptype, "age"), sampling_type="particle", function=_age, units=self.ds.unit_system["time"], ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3271525 yt-4.4.0/yt/frontends/gizmo/tests/0000755000175100001770000000000014714401715016511 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gizmo/tests/__init__.py0000644000175100001770000000000014714401662020611 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/gizmo/tests/test_outputs.py0000644000175100001770000001052114714401662021645 0ustar00runnerdockerfrom collections import OrderedDict import yt from yt.frontends.gizmo.api import GizmoDataset from yt.frontends.gizmo.fields import metal_elements from yt.testing import assert_allclose_units, requires_file, requires_module from yt.units import Myr from yt.utilities.answer_testing.framework import requires_ds, sph_answer # This maps from field names to weight field names to use for projections fields = OrderedDict( [ (("gas", "density"), None), (("gas", "temperature"), ("gas", "density")), (("gas", "metallicity"), ("gas", "density")), (("gas", "O_metallicity"), ("gas", "density")), (("gas", "velocity_magnitude"), None), ] ) g64 = "gizmo_64/output/snap_N64L16_135.hdf5" gmhd = "gizmo_mhd_mwdisk/gizmo_mhd_mwdisk.hdf5" gmhd_bbox = [[-400, 400]] * 3 zeld_wg = "gizmo_zeldovich/snapshot_076_wi_gizver.hdf5" zeld_ng = "gizmo_zeldovich/snapshot_076_no_gizver.hdf5" @requires_module("h5py") @requires_ds(g64, big_data=True) def test_gizmo_64(): ds = yt.load(g64) assert isinstance(ds, GizmoDataset) for test in sph_answer(ds, "snap_N64L16_135", 524288, fields): test_gizmo_64.__name__ = 
test.description yield test @requires_module("h5py") @requires_file(zeld_wg) @requires_file(zeld_ng) def test_gizmo_zeldovich(): """ Test loading a recent gizmo snapshot that doesn't have cooling/metallicity The gizmo_zeldovich file has no metallicity field on the gas particles but is a cosmological dataset run using GIZMO_version=2022. There are two versions of the file, with GIZMO_version (_wg) and without GIZMO_version (_ng). Check that both load as gizmo datasets and correctly pull the cosmological variables. This test should get simpler when the file switches to pytest. """ for fn in [zeld_wg, zeld_ng]: ds = yt.load(fn) assert isinstance(ds, GizmoDataset) assert ds.cosmological_simulation assert ds.omega_matter == 1.0 assert ds.omega_lambda == 0.0 # current_time is calculated from the cosmology so this checks if that # was calculated correctly assert_allclose_units(ds.current_time, 1672.0678 * Myr) @requires_module("h5py") @requires_file(gmhd) def test_gizmo_mhd(): """ Magnetic fields should be loaded correctly when they are present. """ ds = yt.load(gmhd, bounding_box=gmhd_bbox, unit_system="code") ad = ds.all_data() ptype = "PartType0" # Test vector magnetic field fmag = "particle_magnetic_field" f = ad[ptype, fmag] assert str(f.units) == "code_magnetic" assert f.shape == (409013, 3) # Test component magnetic fields for axis in "xyz": f = ad[ptype, fmag + "_" + axis] assert str(f.units) == "code_magnetic" assert f.shape == (409013,) @requires_module("h5py") @requires_file(gmhd) def test_gas_particle_fields(): """ Test fields set up in GizmoFieldInfo.setup_gas_particle_fields. """ ds = yt.load(gmhd, bounding_box=gmhd_bbox) ptype = "PartType0" derived_fields = [] # Add species fields for species in ["H_p0", "H_p1"]: for suffix in ["density", "fraction", "mass", "number_density"]: derived_fields += [f"{species}_{suffix}"] for species in metal_elements: derived_fields += [f"{species}_nuclei_mass_density"] # Add magnetic fields derived_fields += [f"particle_magnetic_field_{axis}" for axis in "xyz"] # Check for field in derived_fields: assert (ptype, field) in ds.derived_field_list ptype = "gas" derived_fields = [] for species in ["H_p0", "H_p1"]: for suffix in ["density", "number_density"]: derived_fields += [f"{species}_{suffix}"] for species in metal_elements: for suffix in ["nuclei_mass_density", "metallicity"]: derived_fields += [f"{species}_{suffix}"] derived_fields += [f"magnetic_field_{axis}" for axis in "xyz"] for field in derived_fields: assert (ptype, field) in ds.derived_field_list @requires_module("h5py") @requires_file(gmhd) def test_star_particle_fields(): """ Test fields set up in GizmoFieldInfo.setup_star_particle_fields. """ ds = yt.load(gmhd, bounding_box=gmhd_bbox) ptype = "PartType4" derived_fields = ["creation_time", "age"] for field in derived_fields: assert (ptype, field) in ds.derived_field_list ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3271525 yt-4.4.0/yt/frontends/halo_catalog/0000755000175100001770000000000014714401715016637 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/halo_catalog/__init__.py0000644000175100001770000000005214714401662020746 0ustar00runnerdocker""" API for HaloCatalog frontend. 
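A minimal usage sketch (the sample file below is the one exercised by
this frontend's tests):

>>> import yt
>>> ds = yt.load("tiny_fof_halos/DD0046/DD0046.0.h5")
>>> ad = ds.all_data()
>>> ad["halos", "particle_mass"]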
""" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/halo_catalog/api.py0000644000175100001770000000020414714401662017757 0ustar00runnerdockerfrom .data_structures import YTHaloCatalogDataset from .fields import YTHaloCatalogFieldInfo from .io import IOHandlerYTHaloCatalog ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/halo_catalog/data_structures.py0000644000175100001770000004040314714401662022427 0ustar00runnerdockerimport glob import weakref from collections import defaultdict from functools import cached_property, partial import numpy as np from yt.data_objects.selection_objects.data_selection_objects import ( YTSelectionContainer, ) from yt.data_objects.static_output import ( ParticleDataset, ParticleFile, ) from yt.frontends.ytdata.data_structures import SavedDataset from yt.funcs import parse_h5_attr from yt.geometry.particle_geometry_handler import ParticleIndex from yt.utilities.on_demand_imports import _h5py as h5py from .fields import YTHaloCatalogFieldInfo, YTHaloCatalogHaloFieldInfo class HaloCatalogFile(ParticleFile): """ Base class for data files of halo catalog datasets. This is mainly here to correct for periodicity when reading particle positions. """ def __init__(self, ds, io, filename, file_id, frange): super().__init__(ds, io, filename, file_id, frange) def _read_particle_positions(self, ptype, f=None): raise NotImplementedError def _get_particle_positions(self, ptype, f=None): pcount = self.total_particles[ptype] if pcount == 0: return None # Correct for periodicity. dle = self.ds.domain_left_edge.to("code_length").v dw = self.ds.domain_width.to("code_length").v pos = self._read_particle_positions(ptype, f=f) si, ei = self.start, self.end if None not in (si, ei): pos = pos[si:ei] np.subtract(pos, dle, out=pos) np.mod(pos, dw, out=pos) np.add(pos, dle, out=pos) return pos class YTHaloCatalogFile(HaloCatalogFile): """ Data file class for the YTHaloCatalogDataset. """ def __init__(self, ds, io, filename, file_id, frange): with h5py.File(filename, mode="r") as f: self.header = {field: parse_h5_attr(f, field) for field in f.attrs.keys()} pids = f.get("particles/ids") self.total_ids = 0 if pids is None else pids.size self.group_length_sum = self.total_ids super().__init__(ds, io, filename, file_id, frange) def _read_particle_positions(self, ptype, f=None): """ Read all particle positions in this file. """ if f is None: close = True f = h5py.File(self.filename, mode="r") else: close = False pcount = self.header["num_halos"] pos = np.empty((pcount, 3), dtype="float64") for i, ax in enumerate("xyz"): pos[:, i] = f[f"particle_position_{ax}"][()] if close: f.close() return pos class YTHaloCatalogDataset(SavedDataset): """ Dataset class for halo catalogs made with yt. This covers yt FoF/HoP halo finders and the halo analysis in yt_astro_analysis. 
""" _load_requirements = ["h5py"] _index_class = ParticleIndex _file_class = YTHaloCatalogFile _field_info_class = YTHaloCatalogFieldInfo _suffix = ".h5" _con_attrs = ( "cosmological_simulation", "current_time", "current_redshift", "hubble_constant", "omega_matter", "omega_lambda", "domain_left_edge", "domain_right_edge", ) def __init__( self, filename, dataset_type="ythalocatalog", index_order=None, units_override=None, unit_system="cgs", ): self.index_order = index_order super().__init__( filename, dataset_type, units_override=units_override, unit_system=unit_system, ) def add_field(self, *args, **kwargs): super().add_field(*args, **kwargs) self._halos_ds.add_field(*args, **kwargs) @property def halos_field_list(self): return self._halos_ds.field_list @property def halos_derived_field_list(self): return self._halos_ds.derived_field_list @cached_property def _halos_ds(self): return YTHaloDataset(self) def _setup_classes(self): super()._setup_classes() self.halo = partial(YTHaloCatalogHaloContainer, ds=self._halos_ds) self.halo.__doc__ = YTHaloCatalogHaloContainer.__doc__ def _parse_parameter_file(self): self.refine_by = 2 self.dimensionality = 3 self.domain_dimensions = np.ones(self.dimensionality, "int32") self._periodicity = (True, True, True) prefix = ".".join(self.parameter_filename.rsplit(".", 2)[:-2]) self.filename_template = f"{prefix}.%(num)s{self._suffix}" self.file_count = len(glob.glob(prefix + "*" + self._suffix)) self.particle_types = ("halos",) self.particle_types_raw = ("halos",) super()._parse_parameter_file() @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if not filename.endswith(".h5"): return False if cls._missing_load_requirements(): return False with h5py.File(filename, mode="r") as f: if ( "data_type" in f.attrs and parse_h5_attr(f, "data_type") == "halo_catalog" ): return True return False class YTHaloParticleIndex(ParticleIndex): """ Particle index for getting halo particles from YTHaloCatalogDatasets. """ def __init__(self, ds, dataset_type): self.real_ds = weakref.proxy(ds.real_ds) super().__init__(ds, dataset_type) def _calculate_particle_index_starts(self): """ Create a dict of halo id offsets for each file. 
""" particle_count = defaultdict(int) offset_count = 0 for data_file in self.data_files: data_file.index_start = { ptype: particle_count[ptype] for ptype in data_file.total_particles } data_file.offset_start = offset_count for ptype in data_file.total_particles: particle_count[ptype] += data_file.total_particles[ptype] offset_count += getattr(data_file, "total_offset", 0) self._halo_index_start = {} for ptype in self.ds.particle_types_raw: d = [data_file.index_start[ptype] for data_file in self.data_files] self._halo_index_start.update({ptype: np.array(d)}) def _detect_output_fields(self): field_list = [] scalar_field_list = [] units = {} pc = {} for ptype in self.ds.particle_types_raw: d = [df.total_particles[ptype] for df in self.data_files] pc.update({ptype: sum(d)}) found_fields = {ptype: False for ptype, pnum in pc.items() if pnum > 0} has_ids = False for data_file in self.data_files: fl, sl, idl, _units = self.io._identify_fields(data_file) units.update(_units) field_list.extend([f for f in fl if f not in field_list]) scalar_field_list.extend([f for f in sl if f not in scalar_field_list]) for ptype in found_fields: found_fields[ptype] |= data_file.total_particles[ptype] has_ids |= len(idl) > 0 if all(found_fields.values()) and has_ids: break self.field_list = field_list self.scalar_field_list = scalar_field_list ds = self.dataset ds.scalar_field_list = scalar_field_list ds.particle_types = tuple({pt for pt, ds in field_list}) ds.field_units.update(units) ds.particle_types_raw = ds.particle_types def _get_halo_file_indices(self, ptype, identifiers): """ Get the index of the data file list where this halo lives. Digitize returns i such that bins[i-1] <= x < bins[i], so we subtract one because we will open data file i. """ return np.digitize(identifiers, self._halo_index_start[ptype], right=False) - 1 def _get_halo_scalar_index(self, ptype, identifier): i_scalar = self._get_halo_file_indices(ptype, [identifier])[0] scalar_index = identifier - self._halo_index_start[ptype][i_scalar] return scalar_index def _get_halo_values(self, ptype, identifiers, fields, f=None): """ Get field values for halo data containers. 
""" # if a file is already open, don't open it again filename = None if f is None else f.filename data = defaultdict(lambda: np.empty(identifiers.size)) i_scalars = self._get_halo_file_indices(ptype, identifiers) for i_scalar in np.unique(i_scalars): # mask array to get field data for this halo target = i_scalars == i_scalar scalar_indices = identifiers - self._halo_index_start[ptype][i_scalar] # only open file if it's not already open my_f = ( f if self.data_files[i_scalar].filename == filename else h5py.File(self.data_files[i_scalar].filename, mode="r") ) for field in fields: data[field][target] = self._read_halo_particle_field( my_f, ptype, field, scalar_indices[target] ) if self.data_files[i_scalar].filename != filename: my_f.close() return data def _identify_base_chunk(self, dobj): pass def _read_halo_particle_field(self, fh, ptype, field, indices): return fh[field][indices] def _read_particle_fields(self, fields, dobj, chunk=None): if not fields: return {}, [] fields_to_read, fields_to_generate = self._split_fields(fields) if not fields_to_read: return {}, fields_to_generate fields_to_return = self.io._read_particle_selection(dobj, fields_to_read) return fields_to_return, fields_to_generate def _setup_data_io(self): super()._setup_data_io() if self.real_ds._instantiated_index is None: self.real_ds.index self.real_ds.index # inherit some things from parent index self._data_files = self.real_ds.index.data_files self._total_particles = self.real_ds.index.total_particles self._calculate_particle_index_starts() class HaloDataset(ParticleDataset): """ Base class for dataset accessing particles from halo catalogs. """ def __init__(self, ds, dataset_type): self.real_ds = ds for attr in [ "filename_template", "file_count", "particle_types_raw", "particle_types", "_periodicity", ]: setattr(self, attr, getattr(self.real_ds, attr)) super().__init__(self.real_ds.parameter_filename, dataset_type) def print_key_parameters(self): pass def _set_derived_attrs(self): pass def _parse_parameter_file(self): for attr in [ "cosmological_simulation", "cosmology", "current_redshift", "current_time", "dimensionality", "domain_dimensions", "domain_left_edge", "domain_right_edge", "domain_width", "hubble_constant", "omega_lambda", "omega_matter", "unique_identifier", ]: setattr(self, attr, getattr(self.real_ds, attr)) def set_code_units(self): self._set_code_unit_attributes() self.unit_registry = self.real_ds.unit_registry def _set_code_unit_attributes(self): for unit in ["length", "time", "mass", "velocity", "magnetic", "temperature"]: my_unit = f"{unit}_unit" setattr(self, my_unit, getattr(self.real_ds, my_unit, None)) def __str__(self): return f"{self.real_ds}" def _setup_classes(self): self.objects = [] class YTHaloDataset(HaloDataset): """ Dataset used for accessing member particles from YTHaloCatalogDatasets. """ _index_class = YTHaloParticleIndex _file_class = YTHaloCatalogFile _field_info_class = YTHaloCatalogHaloFieldInfo def __init__(self, ds, dataset_type="ythalo"): super().__init__(ds, dataset_type) def _set_code_unit_attributes(self): pass @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: # We don't ever want this to be loaded by yt.load. return False class HaloContainer(YTSelectionContainer): """ Base class for data containers providing halo particles. 
""" _type_name = "halo" _con_args = ("ptype", "particle_identifier") _skip_add = True _spatial = False def __init__(self, ptype, particle_identifier, ds=None): if ptype not in ds.particle_types_raw: raise RuntimeError( f'Possible halo types are {ds.particle_types_raw}, supplied "{ptype}".' ) self.ptype = ptype self._current_particle_type = ptype super().__init__(ds, {}) self._set_identifiers(particle_identifier) # Find the file that has the scalar values for this halo. i_scalar = self.index._get_halo_file_indices(ptype, [self.particle_identifier])[ 0 ] self.i_scalar = i_scalar self.scalar_data_file = self.index.data_files[i_scalar] # Data files containing particles belonging to this halo. self.field_data_files = [self.index.data_files[i_scalar]] # index within halo arrays that corresponds to this halo self.scalar_index = self.index._get_halo_scalar_index( ptype, self.particle_identifier ) self._set_io_data() self.particle_number = self._get_particle_number() # starting and ending indices for each file containing particles self._set_field_indices() @cached_property def mass(self): return self[self.ptype, "particle_mass"][0] @cached_property def radius(self): return self[self.ptype, "virial_radius"][0] @cached_property def position(self): return self[self.ptype, "particle_position"][0] @cached_property def velocity(self): return self[self.ptype, "particle_velocity"][0] def _set_io_data(self): halo_fields = self._get_member_fieldnames() my_data = self.index._get_halo_values( self.ptype, np.array([self.particle_identifier]), halo_fields ) self._io_data = {field: np.int64(val[0]) for field, val in my_data.items()} def __repr__(self): return f"{self.ds}_{self.ptype}_{self.particle_identifier:09d}" class YTHaloCatalogHaloContainer(HaloContainer): """ Data container for accessing particles from a halo. Create a data container to get member particles and individual values from halos and subhalos. Halo mass, radius, position, and velocity are set as attributes. Halo IDs are accessible through the field, "member_ids". Other fields that are one value per halo are accessible as normal. The field list for halo objects can be seen in `ds.halos_field_list`. Parameters ---------- ptype : string The type of halo. Possible options can be found by inspecting the value of ds.particle_types_raw. particle_identifier : int The halo id. Examples -------- >>> import yt >>> ds = yt.load("tiny_fof_halos/DD0046/DD0046.0.h5") >>> halo = ds.halo("halos", 0) >>> print(halo.particle_identifier) 0 >>> print(halo.mass) 8724990744704.453 Msun >>> print(halo.radius) 658.8140635766607 kpc >>> print(halo.position) [0.05496909 0.19451951 0.04056824] code_length >>> print(halo.velocity) [7034181.07118151 5323471.09102874 3234522.50495914] cm/s >>> # particle ids for this halo >>> print(halo["member_ids"]) [ 1248. 129. 128. 31999. 31969. 31933. 31934. 159. 31903. 31841. ... 2241. 2240. 2239. 2177. 2209. 2207. 2208.] 
dimensionless """ def _get_member_fieldnames(self): return ["particle_number", "particle_index_start"] def _get_particle_number(self): return self._io_data["particle_number"] def _set_field_indices(self): self.field_data_start = [self._io_data["particle_index_start"]] self.field_data_end = [self.field_data_start[0] + self.particle_number] def _set_identifiers(self, particle_identifier): self.particle_identifier = particle_identifier self.group_identifier = self.particle_identifier ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/halo_catalog/fields.py0000644000175100001770000000166614714401662020471 0ustar00runnerdockerfrom yt._typing import KnownFieldsT from yt.fields.field_info_container import FieldInfoContainer m_units = "g" p_units = "cm" v_units = "cm / s" r_units = "cm" _particle_fields: KnownFieldsT = ( ("particle_identifier", ("", [], None)), ("particle_position_x", (p_units, [], None)), ("particle_position_y", (p_units, [], None)), ("particle_position_z", (p_units, [], None)), ("particle_velocity_x", (v_units, [], None)), ("particle_velocity_y", (v_units, [], None)), ("particle_velocity_z", (v_units, [], None)), ("particle_mass", (m_units, [], "Virial Mass")), ("virial_radius", (r_units, [], "Virial Radius")), ) class YTHaloCatalogFieldInfo(FieldInfoContainer): known_other_fields = () known_particle_fields = _particle_fields class YTHaloCatalogHaloFieldInfo(FieldInfoContainer): known_other_fields = () known_particle_fields = _particle_fields + (("ids", ("", ["member_ids"], None)),) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/halo_catalog/io.py0000644000175100001770000001512614714401662017626 0ustar00runnerdockerfrom collections import defaultdict import numpy as np from yt.frontends.gadget_fof.io import IOHandlerGadgetFOFHaloHDF5 from yt.funcs import parse_h5_attr from yt.units._numpy_wrapper_functions import uvstack from yt.utilities.io_handler import BaseParticleIOHandler from yt.utilities.on_demand_imports import _h5py as h5py class IOHandlerYTHaloCatalog(BaseParticleIOHandler): _dataset_type = "ythalocatalog" def _read_fluid_selection(self, chunks, selector, fields, size): raise NotImplementedError def _read_particle_coords(self, chunks, ptf): # This will read chunks and yield the results. # Only support halo reading for now. assert len(ptf) == 1 assert list(ptf.keys())[0] == "halos" ptype = "halos" pn = "particle_position_%s" for data_file in self._sorted_chunk_iterator(chunks): with h5py.File(data_file.filename, mode="r") as f: units = parse_h5_attr(f[pn % "x"], "units") pos = data_file._get_particle_positions(ptype, f=f) x, y, z = (self.ds.arr(pos[:, i], units) for i in range(3)) yield "halos", (x, y, z), 0.0 def _yield_coordinates(self, data_file): pn = "particle_position_%s" with h5py.File(data_file.filename, mode="r") as f: units = parse_h5_attr(f[pn % "x"], "units") x, y, z = ( self.ds.arr(f[pn % ax][()].astype("float64"), units) for ax in "xyz" ) pos = uvstack([x, y, z]).T pos.convert_to_units("code_length") yield "halos", pos def _read_particle_fields(self, chunks, ptf, selector): # Only support halo reading for now. 
assert len(ptf) == 1 assert list(ptf.keys())[0] == "halos" pn = "particle_position_%s" for data_file in self._sorted_chunk_iterator(chunks): si, ei = data_file.start, data_file.end with h5py.File(data_file.filename, mode="r") as f: for ptype, field_list in sorted(ptf.items()): units = parse_h5_attr(f[pn % "x"], "units") pos = data_file._get_particle_positions(ptype, f=f) x, y, z = (self.ds.arr(pos[:, i], units) for i in range(3)) mask = selector.select_points(x, y, z, 0.0) del x, y, z if mask is None: continue for field in field_list: data = f[field][si:ei][mask].astype("float64") yield (ptype, field), data def _count_particles(self, data_file): si, ei = data_file.start, data_file.end nhalos = data_file.header["num_halos"] if None not in (si, ei): nhalos = np.clip(nhalos - si, 0, ei - si) return {"halos": nhalos} def _identify_fields(self, data_file): with h5py.File(data_file.filename, mode="r") as f: fields = [ ("halos", field) for field in f if not isinstance(f[field], h5py.Group) ] units = {("halos", field): parse_h5_attr(f[field], "units") for field in f} return fields, units class HaloDatasetIOHandler: """ Base class for io handlers to load halo member particles. """ def _read_particle_coords(self, chunks, ptf): pass def _read_particle_fields(self, dobj, ptf): # separate member particle fields from scalar fields scalar_fields = defaultdict(list) member_fields = defaultdict(list) for ptype, field_list in sorted(ptf.items()): for field in field_list: if (ptype, field) in self.ds.scalar_field_list: scalar_fields[ptype].append(field) else: member_fields[ptype].append(field) all_data = self._read_scalar_fields(dobj, scalar_fields) all_data.update(self._read_member_fields(dobj, member_fields)) for field, field_data in all_data.items(): yield field, field_data # This will be refactored. 
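# Assigning the function object from the gadget_fof halo handler binds
# it as an ordinary method here, sharing its selection logic without
# inheriting from that handler.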
_read_particle_selection = IOHandlerGadgetFOFHaloHDF5._read_particle_selection # ignoring type in this mixin to circumvent this error from mypy # Definition of "_read_particle_fields" in base class "HaloDatasetIOHandler" # is incompatible with definition in base class "IOHandlerYTHaloCatalog" # # it may not be possible to refactor out of this situation without breaking downstream class IOHandlerYTHalo(HaloDatasetIOHandler, IOHandlerYTHaloCatalog): # type: ignore _dataset_type = "ythalo" def _identify_fields(self, data_file): with h5py.File(data_file.filename, mode="r") as f: scalar_fields = [ ("halos", field) for field in f if not isinstance(f[field], h5py.Group) ] units = {("halos", field): parse_h5_attr(f[field], "units") for field in f} if "particles" in f: id_fields = [("halos", field) for field in f["particles"]] else: id_fields = [] return scalar_fields + id_fields, scalar_fields, id_fields, units def _read_member_fields(self, dobj, member_fields): all_data = defaultdict(lambda: np.empty(dobj.particle_number, dtype=np.float64)) if not member_fields: return all_data field_start = 0 for i, data_file in enumerate(dobj.field_data_files): start_index = dobj.field_data_start[i] end_index = dobj.field_data_end[i] pcount = end_index - start_index if pcount == 0: continue field_end = field_start + end_index - start_index with h5py.File(data_file.filename, mode="r") as f: for ptype, field_list in sorted(member_fields.items()): for field in field_list: field_data = all_data[ptype, field] my_data = f["particles"][field][start_index:end_index].astype( "float64" ) field_data[field_start:field_end] = my_data field_start = field_end return all_data def _read_scalar_fields(self, dobj, scalar_fields): all_data = {} if not scalar_fields: return all_data with h5py.File(dobj.scalar_data_file.filename, mode="r") as f: for ptype, field_list in sorted(scalar_fields.items()): for field in field_list: data = np.array([f[field][dobj.scalar_index]]).astype("float64") all_data[ptype, field] = data return all_data ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3271525 yt-4.4.0/yt/frontends/halo_catalog/tests/0000755000175100001770000000000014714401715020001 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/halo_catalog/tests/__init__.py0000644000175100001770000000000014714401662022101 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/halo_catalog/tests/test_outputs.py0000644000175100001770000000721114714401662023137 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_array_equal, assert_equal from yt.frontends.halo_catalog.data_structures import YTHaloCatalogDataset from yt.frontends.ytdata.utilities import save_as_dataset from yt.loaders import load as yt_load from yt.testing import TempDirTest, requires_file, requires_module from yt.units.yt_array import YTArray, YTQuantity from yt.utilities.answer_testing.framework import data_dir_load def fake_halo_catalog(data): filename = "catalog.0.h5" ftypes = {field: "."
for field in data} extra_attrs = {"data_type": "halo_catalog", "num_halos": data["particle_mass"].size} ds = { "cosmological_simulation": 1, "omega_lambda": 0.7, "omega_matter": 0.3, "hubble_constant": 0.7, "current_redshift": 0, "current_time": YTQuantity(1, "yr"), "domain_left_edge": YTArray(np.zeros(3), "cm"), "domain_right_edge": YTArray(np.ones(3), "cm"), } save_as_dataset(ds, filename, data, field_types=ftypes, extra_attrs=extra_attrs) return filename class HaloCatalogTest(TempDirTest): @requires_module("h5py") def test_halo_catalog(self): rs = np.random.RandomState(3670474) n_halos = 100 fields = ["particle_mass"] + [f"particle_position_{ax}" for ax in "xyz"] units = ["g"] + ["cm"] * 3 data = { field: YTArray(rs.random_sample(n_halos), unit) for field, unit in zip(fields, units, strict=True) } fn = fake_halo_catalog(data) ds = yt_load(fn) assert type(ds) is YTHaloCatalogDataset for field in fields: f1 = data[field].in_base() f1.sort() f2 = ds.r["all", field].in_base() f2.sort() assert_array_equal(f1, f2) @requires_module("h5py") def test_halo_catalog_boundary_particles(self): rs = np.random.RandomState(3670474) n_halos = 100 fields = ["particle_mass"] + [f"particle_position_{ax}" for ax in "xyz"] units = ["g"] + ["cm"] * 3 data = { field: YTArray(rs.random_sample(n_halos), unit) for field, unit in zip(fields, units, strict=True) } data["particle_position_x"][0] = 1.0 data["particle_position_x"][1] = 0.0 data["particle_position_y"][2] = 1.0 data["particle_position_y"][3] = 0.0 data["particle_position_z"][4] = 1.0 data["particle_position_z"][5] = 0.0 fn = fake_halo_catalog(data) ds = yt_load(fn) assert type(ds) is YTHaloCatalogDataset for field in fields: f1 = data[field].in_base() f1.sort() f2 = ds.r["all", field].in_base() f2.sort() assert_array_equal(f1, f2) t46 = "tiny_fof_halos/DD0046/DD0046.0.h5" @requires_file(t46) @requires_module("h5py") def test_halo_quantities(): ds = data_dir_load(t46) ad = ds.all_data() for i in range(ds.index.total_particles): hid = int(ad["halos", "particle_identifier"][i]) halo = ds.halo("halos", hid) for field in ["mass", "position", "velocity"]: v1 = ad["halos", f"particle_{field}"][i] v2 = getattr(halo, field) assert_equal(v1, v2, err_msg=f"Halo {hid} {field} field mismatch.") @requires_file(t46) @requires_module("h5py") def test_halo_particles(): ds = data_dir_load(t46) i = ds.r["halos", "particle_mass"].argmax() hid = int(ds.r["halos", "particle_identifier"][i]) halo = ds.halo("halos", hid) ids = halo["halos", "member_ids"] assert_equal(ids.size, 420) assert_equal(ids.min(), 19478.0) assert_equal(ids.max(), 31669.0) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3271525 yt-4.4.0/yt/frontends/http_stream/0000755000175100001770000000000014714401715016554 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/http_stream/__init__.py0000644000175100001770000000000014714401662020654 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/http_stream/api.py0000644000175100001770000000012314714401662017674 0ustar00runnerdockerfrom .data_structures import HTTPStreamDataset from .io import IOHandlerHTTPStream ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/http_stream/data_structures.py0000644000175100001770000000664214714401662022353 0ustar00runnerdockerimport json import time from 
functools import cached_property

import numpy as np

from yt.data_objects.static_output import ParticleDataset, ParticleFile
from yt.frontends.sph.fields import SPHFieldInfo
from yt.geometry.particle_geometry_handler import ParticleIndex
from yt.utilities.on_demand_imports import _requests as requests


class HTTPParticleFile(ParticleFile):
    pass


class HTTPStreamDataset(ParticleDataset):
    _load_requirements = ["requests"]
    _index_class = ParticleIndex
    _file_class = HTTPParticleFile
    _field_info_class = SPHFieldInfo
    _particle_mass_name = "Mass"
    _particle_coordinates_name = "Coordinates"
    _particle_velocity_name = "Velocities"
    filename_template = ""

    def __init__(
        self,
        base_url,
        dataset_type="http_particle_stream",
        unit_system="cgs",
        index_order=None,
        index_filename=None,
    ):
        self.base_url = base_url
        super().__init__(
            "",
            dataset_type=dataset_type,
            unit_system=unit_system,
            index_order=index_order,
            index_filename=index_filename,
        )

    def __str__(self):
        return self.base_url

    @cached_property
    def unique_identifier(self) -> str:
        return str(self.parameters.get("unique_identifier", time.time()))

    def _parse_parameter_file(self):
        self.dimensionality = 3
        self.refine_by = 2
        self.parameters["HydroMethod"] = "sph"

        # Here's where we're going to grab the JSON index file
        hreq = requests.get(self.base_url + "/yt_index.json")
        if hreq.status_code != 200:
            raise RuntimeError(
                f"Could not fetch {self.base_url}/yt_index.json "
                f"(HTTP status {hreq.status_code})"
            )
        header = json.loads(hreq.content)
        header["particle_count"] = {
            int(k): header["particle_count"][k] for k in header["particle_count"]
        }
        # Merge the header into the existing parameters dict rather than
        # replacing it outright, which would silently drop the "HydroMethod"
        # key set above.
        self.parameters.update(header)

        # Now we get what we need
        self.domain_left_edge = np.array(header["domain_left_edge"], "float64")
        self.domain_right_edge = np.array(header["domain_right_edge"], "float64")
        self.domain_dimensions = np.ones(3, "int32")
        self._periodicity = (True, True, True)

        self.current_time = header["current_time"]
        self.cosmological_simulation = int(header["cosmological_simulation"])
        for attr in (
            "current_redshift",
            "omega_lambda",
            "omega_matter",
            "hubble_constant",
        ):
            setattr(self, attr, float(header[attr]))

        self.file_count = header["num_files"]

    def _set_units(self):
        length_unit = float(self.parameters["units"]["length"])
        time_unit = float(self.parameters["units"]["time"])
        mass_unit = float(self.parameters["units"]["mass"])
        density_unit = mass_unit / length_unit**3
        velocity_unit = length_unit / time_unit
        self._unit_base = {}
        self._unit_base["cm"] = 1.0 / length_unit
        self._unit_base["s"] = 1.0 / time_unit
        super()._set_units()
        self.conversion_factors["velocity"] = velocity_unit
        self.conversion_factors["mass"] = mass_unit
        self.conversion_factors["density"] = density_unit

    @classmethod
    def _is_valid(cls, filename: str, *args, **kwargs) -> bool:
        if not filename.startswith("http://"):
            return False
        if cls._missing_load_requirements():
            return False
        return requests.get(filename + "/yt_index.json").status_code == 200
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/http_stream/io.py0000644000175100001770000000474714714401662017552 0ustar00runnerdocker
import numpy as np

from yt.funcs import mylog
from yt.utilities.io_handler import BaseParticleIOHandler
from yt.utilities.on_demand_imports import _requests as requests


class IOHandlerHTTPStream(BaseParticleIOHandler):
    _dataset_type = "http_particle_stream"
    _vector_fields = {"Coordinates": 3, "Velocity": 3, "Velocities": 3}

    def __init__(self, ds):
        self._url = ds.base_url
        # This should eventually manage the IO and cache it
        self.total_bytes = 0
        super().__init__(ds)

    def _open_stream(self, data_file, field):
        # This does not actually stream yet!
        ftype, fname = field
        s = f"{self._url}/{data_file.file_id}/{ftype}/{fname}"
        mylog.info("Loading URL %s", s)
        resp = requests.get(s)
        if resp.status_code != 200:
            raise RuntimeError(f"Could not fetch {s} (HTTP status {resp.status_code})")
        self.total_bytes += len(resp.content)
        return resp.content

    def _identify_fields(self, data_file):
        f = []
        for ftype, fname in self.ds.parameters["field_list"]:
            f.append((str(ftype), str(fname)))
        return f, {}

    def _read_particle_coords(self, chunks, ptf):
        for data_file in self._sorted_chunk_iterator(chunks):
            for ptype in ptf:
                s = self._open_stream(data_file, (ptype, "Coordinates"))
                c = np.frombuffer(s, dtype="float64")
                # NumPy requires an integer shape; the float division used
                # previously ("/ 3.0") raises a TypeError.
                c.shape = (c.shape[0] // 3, 3)
                yield ptype, (c[:, 0], c[:, 1], c[:, 2]), 0.0

    def _read_particle_fields(self, chunks, ptf, selector):
        # Now we have all the sizes, and we can allocate
        for data_file in self._sorted_chunk_iterator(chunks):
            for ptype, field_list in sorted(ptf.items()):
                s = self._open_stream(data_file, (ptype, "Coordinates"))
                c = np.frombuffer(s, dtype="float64")
                c.shape = (c.shape[0] // 3, 3)
                mask = selector.select_points(c[:, 0], c[:, 1], c[:, 2], 0.0)
                del c
                if mask is None:
                    continue
                for field in field_list:
                    s = self._open_stream(data_file, (ptype, field))
                    c = np.frombuffer(s, dtype="float64")
                    if field in self._vector_fields:
                        c.shape = (c.shape[0] // 3, 3)
                    data = c[mask, ...]
                    yield (ptype, field), data

    def _count_particles(self, data_file):
        return self.ds.parameters["particle_count"][data_file.file_id]
././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3311524 yt-4.4.0/yt/frontends/moab/0000755000175100001770000000000014714401715015140 5ustar00runnerdocker
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/moab/__init__.py0000644000175100001770000000004314714401662017247 0ustar00runnerdocker
"""
Empty __init__.py file.
"""
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/moab/api.py0000644000175100001770000000034214714401662016263 0ustar00runnerdocker
from .
import tests from .data_structures import ( MoabHex8Dataset, MoabHex8Hierarchy, MoabHex8Mesh, PyneMoabHex8Dataset, ) from .fields import MoabFieldInfo, PyneFieldInfo from .io import IOHandlerMoabH5MHex8 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/moab/data_structures.py0000644000175100001770000001574414714401662020742 0ustar00runnerdockerimport os import weakref from functools import cached_property import numpy as np from yt.data_objects.index_subobjects.unstructured_mesh import SemiStructuredMesh from yt.data_objects.static_output import Dataset from yt.funcs import setdefaultattr from yt.geometry.unstructured_mesh_handler import UnstructuredIndex from yt.utilities.file_handler import HDF5FileHandler from yt.utilities.on_demand_imports import _h5py as h5py from .fields import MoabFieldInfo, PyneFieldInfo class MoabHex8Mesh(SemiStructuredMesh): _connectivity_length = 8 _index_offset = 1 class MoabHex8Hierarchy(UnstructuredIndex): def __init__(self, ds, dataset_type="h5m"): self.dataset = weakref.proxy(ds) self.dataset_type = dataset_type self.index_filename = self.dataset.parameter_filename self.directory = os.path.dirname(self.index_filename) self._fhandle = h5py.File(self.index_filename, mode="r") UnstructuredIndex.__init__(self, ds, dataset_type) self._fhandle.close() def _initialize_mesh(self): con = self._fhandle["/tstt/elements/Hex8/connectivity"][:] con = np.asarray(con, dtype="int64") coords = self._fhandle["/tstt/nodes/coordinates"][:] coords = np.asarray(coords, dtype="float64") self.meshes = [MoabHex8Mesh(0, self.index_filename, con, coords, self)] def _detect_output_fields(self): self.field_list = [ ("moab", f) for f in self._fhandle["/tstt/elements/Hex8/tags"].keys() ] def _count_grids(self): self.num_grids = 1 class MoabHex8Dataset(Dataset): _load_requirements = ["h5py"] _index_class = MoabHex8Hierarchy _field_info_class = MoabFieldInfo periodicity = (False, False, False) def __init__( self, filename, dataset_type="moab_hex8", storage_filename=None, units_override=None, unit_system="cgs", ): self.fluid_types += ("moab",) Dataset.__init__( self, filename, dataset_type, units_override=units_override, unit_system=unit_system, ) self.storage_filename = storage_filename self._handle = HDF5FileHandler(filename) def _set_code_unit_attributes(self): # Almost everything is regarded as dimensionless in MOAB, so these will # not be used very much or at all. 
setdefaultattr(self, "length_unit", self.quan(1.0, "cm")) setdefaultattr(self, "time_unit", self.quan(1.0, "s")) setdefaultattr(self, "mass_unit", self.quan(1.0, "g")) def _parse_parameter_file(self): self._handle = h5py.File(self.parameter_filename, mode="r") coords = self._handle["/tstt/nodes/coordinates"] self.domain_left_edge = coords[0] self.domain_right_edge = coords[-1] self.domain_dimensions = self.domain_right_edge - self.domain_left_edge self.refine_by = 2 self.dimensionality = len(self.domain_dimensions) self.current_time = 0.0 self.cosmological_simulation = False self.num_ghost_zones = 0 self.current_redshift = 0.0 self.omega_lambda = 0.0 self.omega_matter = 0.0 self.hubble_constant = 0.0 self.cosmological_simulation = 0 @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: return filename.endswith(".h5m") and not cls._missing_load_requirements() def __str__(self): return self.basename.rsplit(".", 1)[0] class PyneHex8Mesh(SemiStructuredMesh): _connectivity_length = 8 _index_offset = 0 class PyneMeshHex8Hierarchy(UnstructuredIndex): def __init__(self, ds, dataset_type="moab_hex8_pyne"): self.dataset = weakref.proxy(ds) self.dataset_type = dataset_type self.index_filename = self.dataset.parameter_filename self.directory = os.getcwd() self.pyne_mesh = ds.pyne_mesh super().__init__(ds, dataset_type) def _initialize_mesh(self): from pymoab import types ents = list(self.pyne_mesh.structured_iterate_vertex()) coords = self.pyne_mesh.mesh.get_coords(ents).astype("float64") coords = coords.reshape(len(coords) // 3, 3) hexes = self.pyne_mesh.mesh.get_entities_by_type(0, types.MBHEX) vind = [] for h in hexes: vind.append( self.pyne_mesh.mesh.get_adjacencies( h, 0, create_if_missing=True, op_type=types.UNION ) ) vind = np.asarray(vind, dtype=np.int64) if vind.ndim == 1: vind = vind.reshape(len(vind) // 8, 8) assert vind.ndim == 2 and vind.shape[1] == 8 self.meshes = [PyneHex8Mesh(0, self.index_filename, vind, coords, self)] def _detect_output_fields(self): self.field_list = [("pyne", f) for f in self.pyne_mesh.tags.keys()] def _count_grids(self): self.num_grids = 1 class PyneMoabHex8Dataset(Dataset): _index_class = PyneMeshHex8Hierarchy _fieldinfo_fallback = MoabFieldInfo _field_info_class = PyneFieldInfo periodicity = (False, False, False) def __init__( self, pyne_mesh, dataset_type="moab_hex8_pyne", storage_filename=None, units_override=None, unit_system="cgs", ): self.fluid_types += ("pyne",) filename = f"pyne_mesh_{id(pyne_mesh)}" self.pyne_mesh = pyne_mesh Dataset.__init__( self, str(filename), dataset_type, units_override=units_override, unit_system=unit_system, ) self.storage_filename = storage_filename @property def filename(self) -> str: return self._input_filename @cached_property def unique_identifier(self) -> str: return self.filename def _set_code_unit_attributes(self): # Almost everything is regarded as dimensionless in MOAB, so these will # not be used very much or at all. 
setdefaultattr(self, "length_unit", self.quan(1.0, "cm")) setdefaultattr(self, "time_unit", self.quan(1.0, "s")) setdefaultattr(self, "mass_unit", self.quan(1.0, "g")) def _parse_parameter_file(self): ents = list(self.pyne_mesh.structured_iterate_vertex()) coords = self.pyne_mesh.mesh.get_coords(ents) self.domain_left_edge = coords[0:3] self.domain_right_edge = coords[-3:] self.domain_dimensions = self.domain_right_edge - self.domain_left_edge self.refine_by = 2 self.dimensionality = len(self.domain_dimensions) self.current_time = 0.0 self.cosmological_simulation = False self.num_ghost_zones = 0 self.current_redshift = 0.0 self.omega_lambda = 0.0 self.omega_matter = 0.0 self.hubble_constant = 0.0 self.cosmological_simulation = 0 @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: return False def __str__(self): return self.basename.rsplit(".", 1)[0] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/moab/definitions.py0000644000175100001770000000000014714401662020014 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/moab/fields.py0000644000175100001770000000024614714401662016763 0ustar00runnerdockerfrom yt.fields.field_info_container import FieldInfoContainer class MoabFieldInfo(FieldInfoContainer): pass class PyneFieldInfo(FieldInfoContainer): pass ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/moab/io.py0000644000175100001770000000464014714401662016126 0ustar00runnerdockerimport numpy as np from yt.funcs import mylog from yt.utilities.io_handler import BaseIOHandler def field_dname(field_name): return f"/tstt/elements/Hex8/tags/{field_name}" # TODO all particle bits were removed class IOHandlerMoabH5MHex8(BaseIOHandler): _dataset_type = "moab_hex8" def __init__(self, ds): super().__init__(ds) self._handle = ds._handle def _read_fluid_selection(self, chunks, selector, fields, size): chunks = list(chunks) assert len(chunks) == 1 fhandle = self._handle rv = {} for field in fields: ftype, fname = field rv[field] = np.empty(size, dtype=fhandle[field_dname(fname)].dtype) ngrids = sum(len(chunk.objs) for chunk in chunks) mylog.debug( "Reading %s cells of %s fields in %s blocks", size, [fname for ft, fn in fields], ngrids, ) for field in fields: ftype, fname = field ds = np.array(fhandle[field_dname(fname)][:], dtype="float64") ind = 0 for chunk in chunks: for g in chunk.objs: ind += g.select(selector, ds, rv[field], ind) # caches return rv class IOHandlerMoabPyneHex8(BaseIOHandler): _dataset_type = "moab_hex8_pyne" def _read_fluid_selection(self, chunks, selector, fields, size): chunks = list(chunks) assert len(chunks) == 1 rv = {} pyne_mesh = self.ds.pyne_mesh for field in fields: rv[field] = np.empty(size, dtype="float64") ngrids = sum(len(chunk.objs) for chunk in chunks) mylog.debug( "Reading %s cells of %s fields in %s blocks", size, [fname for ftype, fname in fields], ngrids, ) for field in fields: ftype, fname = field if pyne_mesh.structured: tag = pyne_mesh.mesh.tag_get_handle("idx") hex_list = list(pyne_mesh.structured_iterate_hex()) indices = pyne_mesh.mesh.tag_get_data(tag, hex_list).flatten() else: indices = slice(None) ds = np.asarray(getattr(pyne_mesh, fname)[indices], "float64") ind = 0 for chunk in chunks: for g in chunk.objs: ind += g.select(selector, ds, rv[field], ind) # caches return rv 
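Editor's note: the short usage sketch below is not part of the yt source tree; it illustrates how the MOAB frontend defined above is driven through yt's public API. The sample path "c5/c5.h5m" is borrowed from the test suite further down and is assumed to be present locally; any MOAB Hex8 .h5m file should behave the same way.

import yt

# MoabHex8Dataset._is_valid() matches on the ".h5m" suffix, so yt.load()
# dispatches to this frontend automatically.
ds = yt.load("c5/c5.h5m")  # hypothetical local path to the sample dataset

# Fields discovered by MoabHex8Hierarchy._detect_output_fields() appear
# under the "moab" fluid type; the c5 sample exposes ("moab", "flux").
print(ds.field_list)

ad = ds.all_data()
print(ad["moab", "flux"].min(), ad["moab", "flux"].max())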
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/moab/misc.py0000644000175100001770000000000014714401662016434 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3311524 yt-4.4.0/yt/frontends/moab/tests/0000755000175100001770000000000014714401715016302 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/moab/tests/__init__.py0000644000175100001770000000000014714401662020402 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/moab/tests/test_c5.py0000644000175100001770000000333614714401662020230 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_almost_equal, assert_equal from yt.frontends.moab.api import MoabHex8Dataset from yt.testing import requires_file, requires_module, units_override_check from yt.utilities.answer_testing.framework import ( FieldValuesTest, data_dir_load, requires_ds, ) _fields = (("moab", "flux"),) c5 = "c5/c5.h5m" @requires_module("h5py") @requires_ds(c5) def test_cantor_5(): np.random.seed(0x4D3D3D3) ds = data_dir_load(c5) assert_equal(str(ds), "c5") dso = [ None, ("sphere", ("c", (0.1, "unitary"))), ("sphere", ("c", (0.2, "unitary"))), ] dd = ds.all_data() assert_almost_equal(ds.index.get_smallest_dx(), 0.00411522633744843, 10) assert_equal(dd["gas", "x"].shape[0], 63 * 63 * 63) assert_almost_equal( dd["index", "cell_volume"].in_units("code_length**3").sum(dtype="float64").d, 1.0, 10, ) for offset_1 in [1e-9, 1e-4, 0.1]: for offset_2 in [1e-9, 1e-4, 0.1]: DLE = ds.domain_left_edge DRE = ds.domain_right_edge ray = ds.ray(DLE + offset_1 * DLE.uq, DRE - offset_2 * DRE.uq) assert_almost_equal(ray["dts"].sum(dtype="float64"), 1.0, 8) for p1 in np.random.random((5, 3)): for p2 in np.random.random((5, 3)): ray = ds.ray(p1, p2) assert_almost_equal(ray["dts"].sum(dtype="float64"), 1.0, 8) for field in _fields: for dobj_name in dso: yield FieldValuesTest(c5, field, dobj_name) @requires_module("h5py") @requires_file(c5) def test_MoabHex8Dataset(): assert isinstance(data_dir_load(c5), MoabHex8Dataset) @requires_file(c5) def test_units_override(): units_override_check(c5) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3311524 yt-4.4.0/yt/frontends/nc4_cm1/0000755000175100001770000000000014714401715015446 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/nc4_cm1/__init__.py0000644000175100001770000000000014714401662017546 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/nc4_cm1/api.py0000644000175100001770000000024614714401662016574 0ustar00runnerdocker""" API for yt.frontends.nc4_cm1 """ from .data_structures import CM1Dataset, CM1Grid, CM1Hierarchy from .fields import CM1FieldInfo from .io import CM1IOHandler ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/nc4_cm1/data_structures.py0000644000175100001770000002211214714401662021233 0ustar00runnerdockerimport os import weakref from collections import OrderedDict import numpy as np from yt._typing import AxisOrder from yt.data_objects.index_subobjects.grid_patch import AMRGridPatch from yt.data_objects.static_output import Dataset from 
yt.geometry.grid_geometry_handler import GridIndex from yt.utilities.file_handler import NetCDF4FileHandler, valid_netcdf_signature from yt.utilities.logger import ytLogger as mylog from .fields import CM1FieldInfo class CM1Grid(AMRGridPatch): _id_offset = 0 def __init__(self, id, index, level, dimensions): super().__init__(id, filename=index.index_filename, index=index) self.Parent = None self.Children = [] self.Level = level self.ActiveDimensions = dimensions class CM1Hierarchy(GridIndex): grid = CM1Grid def __init__(self, ds, dataset_type="cm1"): self.dataset_type = dataset_type self.dataset = weakref.proxy(ds) # for now, the index file is the dataset! self.index_filename = self.dataset.parameter_filename self.directory = os.path.dirname(self.index_filename) # float type for the simulation edges and must be float64 now self.float_type = np.float64 super().__init__(ds, dataset_type) def _detect_output_fields(self): # build list of on-disk fields for dataset_type 'cm1' vnames = self.dataset.parameters["variable_names"] self.field_list = [("cm1", vname) for vname in vnames] def _count_grids(self): # This needs to set self.num_grids self.num_grids = 1 def _parse_index(self): self.grid_left_edge[0][:] = self.ds.domain_left_edge[:] self.grid_right_edge[0][:] = self.ds.domain_right_edge[:] self.grid_dimensions[0][:] = self.ds.domain_dimensions[:] self.grid_particle_count[0][0] = 0 self.grid_levels[0][0] = 0 self.max_level = 0 def _populate_grid_objects(self): self.grids = np.empty(self.num_grids, dtype="object") for i in range(self.num_grids): g = self.grid(i, self, self.grid_levels.flat[i], self.grid_dimensions[i]) g._prepare_grid() g._setup_dx() self.grids[i] = g class CM1Dataset(Dataset): _load_requirements = ["netCDF4"] _index_class = CM1Hierarchy _field_info_class = CM1FieldInfo def __init__( self, filename, dataset_type="cm1", storage_filename=None, units_override=None, unit_system="mks", ): self.fluid_types += ("cm1",) self._handle = NetCDF4FileHandler(filename) # refinement factor between a grid and its subgrid. self.refine_by = 1 super().__init__( filename, dataset_type, units_override=units_override, unit_system=unit_system, ) self.storage_filename = storage_filename def _setup_coordinate_handler(self, axis_order: AxisOrder | None) -> None: # ensure correct ordering of axes so plots aren't rotated (z should always be # on the vertical axis). super()._setup_coordinate_handler(axis_order) # type checking is deactivated in the following two lines because changing them is not # within the scope of the PR that _enabled_ typechecking here (#4244), but it'd be worth # having a careful look at *why* these warnings appear, as they may point to rotten code self.coordinates._x_pairs = (("x", "y"), ("y", "x"), ("z", "x")) # type: ignore [union-attr] self.coordinates._y_pairs = (("x", "z"), ("y", "z"), ("z", "y")) # type: ignore [union-attr] def _set_code_unit_attributes(self): # This is where quantities are created that represent the various # on-disk units. These are the currently available quantities which # should be set, along with examples of how to set them to standard # values. with self._handle.open_ds() as _handle: length_unit = _handle.variables["xh"].units self.length_unit = self.quan(1.0, length_unit) self.mass_unit = self.quan(1.0, "kg") self.time_unit = self.quan(1.0, "s") self.velocity_unit = self.quan(1.0, "m/s") self.time_unit = self.quan(1.0, "s") def _parse_parameter_file(self): # This needs to set up the following items. 
Note that these are all # assumed to be in code units; domain_left_edge and domain_right_edge # will be converted to YTArray automatically at a later time. # This includes the cosmological parameters. self.parameters = {} # code-specific items with self._handle.open_ds() as _handle: # _handle here is a netcdf Dataset object, we need to parse some metadata # for constructing our yt ds. # TO DO: generalize this to be coordinate variable name agnostic in order to # make useful for WRF or climate data. For now, we're hard coding for CM1 # specifically and have named the classes appropriately. Additionally, we # are only handling the cell-centered grid ("xh","yh","zh") at present. # The cell-centered grid contains scalar fields and interpolated velocities. dims = [_handle.dimensions[i].size for i in ["xh", "yh", "zh"]] xh, yh, zh = (_handle.variables[i][:] for i in ["xh", "yh", "zh"]) self.domain_left_edge = np.array( [xh.min(), yh.min(), zh.min()], dtype="float64" ) self.domain_right_edge = np.array( [xh.max(), yh.max(), zh.max()], dtype="float64" ) # loop over the variable names in the netCDF file, record only those on the # "zh","yh","xh" grid. varnames = [] for key, var in _handle.variables.items(): if all(x in var.dimensions for x in ["time", "zh", "yh", "xh"]): varnames.append(key) self.parameters["variable_names"] = varnames self.parameters["lofs_version"] = _handle.cm1_lofs_version self.parameters["is_uniform"] = _handle.uniform_mesh self.current_time = _handle.variables["time"][:][0] # record the dimension metadata: __handle.dimensions contains netcdf # objects so we need to manually copy over attributes. dim_info = OrderedDict() for dim, meta in _handle.dimensions.items(): dim_info[dim] = meta.size self.parameters["dimensions"] = dim_info self.dimensionality = 3 self.domain_dimensions = np.array(dims, dtype="int64") self._periodicity = (False, False, False) # Set cosmological information to zero for non-cosmological. self.cosmological_simulation = 0 self.current_redshift = 0.0 self.omega_lambda = 0.0 self.omega_matter = 0.0 self.hubble_constant = 0.0 @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: # This accepts a filename or a set of arguments and returns True or # False depending on if the file is of the type requested. if not valid_netcdf_signature(filename): return False if cls._missing_load_requirements(): return False try: nc4_file = NetCDF4FileHandler(filename) with nc4_file.open_ds(keepweakref=True) as _handle: is_cm1_lofs = hasattr(_handle, "cm1_lofs_version") is_cm1 = hasattr(_handle, "cm1 version") # not a typo, it is a space... # ensure coordinates of each variable array exists in the dataset coords = _handle.dimensions # get the dataset wide coordinates failed_vars = [] # list of failed variables for var in _handle.variables: # iterate over the variables vcoords = _handle[var].dimensions # get the dims for the variable ncoords = len(vcoords) # number of coordinates in variable # number of coordinates that pass for a variable coordspassed = sum(vc in coords for vc in vcoords) if coordspassed != ncoords: failed_vars.append(var) if failed_vars: mylog.warning( "Trying to load a cm1_lofs netcdf file but the " "coordinates of the following fields do not match the " "coordinates of the dataset: %s", failed_vars, ) return False if not is_cm1_lofs: if is_cm1: mylog.warning( "It looks like you are trying to load a cm1 netcdf file, " "but at present yt only supports cm1_lofs output. 
Until"
                            " support is added, you can likely use"
                            " yt.load_uniform_grid() to load your cm1 file manually."
                        )
                    return False
        except (OSError, AttributeError, ImportError):
            return False

        return True
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/nc4_cm1/fields.py0000644000175100001770000000630214714401662017270 0ustar00runnerdocker
from yt.fields.field_info_container import FieldInfoContainer

# We need to specify which fields we might have in our dataset. The field info
# container subclass here will define which fields it knows about. There are
# optionally methods on it that get called which can be subclassed.


class CM1FieldInfo(FieldInfoContainer):
    known_other_fields = (
        # Each entry here is of the form
        # ( "name", ("units", ["fields", "to", "alias"],
        #            "display_name")),
        ("uinterp", ("m/s", ["velocity_x"], None)),
        ("vinterp", ("m/s", ["velocity_y"], None)),
        ("winterp", ("m/s", ["velocity_z"], None)),
        ("u", ("m/s", ["velocity_x"], None)),
        ("v", ("m/s", ["velocity_y"], None)),
        ("w", ("m/s", ["velocity_z"], None)),
        ("hwin_sr", ("m/s", ["storm_relative_horizontal_wind_speed"], None)),
        ("windmag_sr", ("m/s", ["storm_relative_3D_wind_speed"], None)),
        ("hwin_gr", ("m/s", ["ground_relative_horizontal_wind_speed"], None)),
        ("thpert", ("K", ["potential_temperature_perturbation"], None)),
        ("thrhopert", ("K", ["density_potential_temperature_perturbation"], None)),
        ("prespert", ("hPa", ["pressure_perturbation"], None)),
        ("rhopert", ("kg/m**3", ["density_perturbation"], None)),
        ("dbz", ("dB", ["simulated_reflectivity"], None)),
        ("qvpert", ("g/kg", ["water_vapor_mixing_ratio_perturbation"], None)),
        ("qc", ("g/kg", ["cloud_liquid_water_mixing_ratio"], None)),
        ("qr", ("g/kg", ["rain_mixing_ratio"], None)),
        ("qi", ("g/kg", ["cloud_ice_mixing_ratio"], None)),
        ("qs", ("g/kg", ["snow_mixing_ratio"], None)),
        ("qg", ("g/kg", ["graupel_or_hail_mixing_ratio"], None)),
        ("qcloud", ("g/kg", ["sum_of_cloud_water_and_cloud_ice_mixing_ratios"], None)),
        ("qprecip", ("g/kg", ["sum_of_rain_graupel_snow_mixing_ratios"], None)),
        ("nci", ("1/cm**3", ["number_concentration_of_cloud_ice"], None)),
        ("ncr", ("1/cm**3", ["number_concentration_of_rain"], None)),
        ("ncs", ("1/cm**3", ["number_concentration_of_snow"], None)),
        ("ncg", ("1/cm**3", ["number_concentration_of_graupel_or_hail"], None)),
        ("xvort", ("1/s", ["vorticity_x"], None)),
        ("yvort", ("1/s", ["vorticity_y"], None)),
        ("zvort", ("1/s", ["vorticity_z"], None)),
        ("hvort", ("1/s", ["horizontal_vorticity_magnitude"], None)),
        ("vortmag", ("1/s", ["vorticity_magnitude"], None)),
        ("streamvort", ("1/s", ["streamwise_vorticity"], None)),
        ("khh", ("m**2/s", ["khh"], None)),
        ("khv", ("m**2/s", ["khv"], None)),
        ("kmh", ("m**2/s", ["kmh"], None)),
        ("kmv", ("m**2/s", ["kmv"], None)),
    )

    known_particle_fields = (
        # Identical form to above
        # ( "name", ("units", ["fields", "to", "alias"],
        #            "display_name")),
    )

    def setup_fluid_fields(self):
        # Here we do anything that might need info about the dataset.
        # You can use self.alias, self.add_output_field (for on-disk fields)
        # and self.add_field (for derived fields).
        pass

    def setup_particle_fields(self, ptype):
        super().setup_particle_fields(ptype)
        # This will get called for every particle type.
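Editor's note: a minimal sketch (not part of the yt source) showing how the known_other_fields table above is consumed: each entry attaches units to an on-disk CM1 variable and registers the listed aliases. It assumes the "cm1_tornado_lofs" sample dataset used by the tests further down is available locally.

import yt

ds = yt.load("cm1_tornado_lofs/nc4_cm1_lofs_tornado_test.nc")
ad = ds.all_data()

# ("cm1", "uinterp") is declared above with units "m/s" and the alias
# "velocity_x", so the same data should be reachable under both names.
u_native = ad["cm1", "uinterp"]
u_alias = ad["gas", "velocity_x"]
assert u_native.units == u_alias.units
print(u_native.max())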
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/nc4_cm1/io.py0000644000175100001770000000555414714401662016441 0ustar00runnerdockerimport numpy as np from yt.utilities.file_handler import NetCDF4FileHandler from yt.utilities.io_handler import BaseIOHandler class CM1IOHandler(BaseIOHandler): _particle_reader = False _dataset_type = "cm1" def __init__(self, ds): self.filename = ds.filename self._handle = NetCDF4FileHandler(self.filename) super().__init__(ds) def _read_particle_coords(self, chunks, ptf): # This needs to *yield* a series of tuples of (ptype, (x, y, z)). # chunks is a list of chunks, and ptf is a dict where the keys are # ptypes and the values are lists of fields. raise NotImplementedError def _read_particle_fields(self, chunks, ptf, selector): # This gets called after the arrays have been allocated. It needs to # yield ((ptype, field), data) where data is the masked results of # reading ptype, field and applying the selector to the data read in. # Selector objects have a .select_points(x,y,z) that returns a mask, so # you need to do your masking here. raise NotImplementedError def _read_fluid_selection(self, chunks, selector, fields, size): # This needs to allocate a set of arrays inside a dictionary, where the # keys are the (ftype, fname) tuples and the values are arrays that # have been masked using whatever selector method is appropriate. The # dict gets returned at the end and it should be flat, with selected # data. Note that if you're reading grid data, you might need to # special-case a grid selector object. # Also note that "chunks" is a generator for multiple chunks, each of # which contains a list of grids. The returned numpy arrays should be # in 64-bit float and contiguous along the z direction. Therefore, for # a C-like input array with the dimension [x][y][z] or a # Fortran-like input array with the dimension (z,y,x), a matrix # transpose is required (e.g., using np_array.transpose() or # np_array.swapaxes(0,2)). data = {} chunks = list(chunks) with self._handle.open_ds() as ds: for field in fields: data[field] = np.empty(size, dtype="float64") offset = 0 for chunk in chunks: for grid in chunk.objs: variable = ds.variables[field[1]][:][0] values = np.squeeze(variable.T) offset += grid.select(selector, values, data[field], offset) return data def _read_chunk_data(self, chunk, fields): # This reads the data from a single chunk without doing any selection, # and is only used for caching data that might be used by multiple # different selectors later. For instance, this can speed up ghost zone # computation. 
pass ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3311524 yt-4.4.0/yt/frontends/nc4_cm1/tests/0000755000175100001770000000000014714401715016610 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/nc4_cm1/tests/__init__.py0000644000175100001770000000000014714401662020710 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/nc4_cm1/tests/test_outputs.py0000644000175100001770000000407314714401662021751 0ustar00runnerdockerfrom numpy.testing import assert_equal from yt.frontends.nc4_cm1.api import CM1Dataset from yt.testing import requires_file, units_override_check from yt.utilities.answer_testing.framework import ( FieldValuesTest, GridValuesTest, can_run_ds, data_dir_load, requires_ds, small_patch_amr, ) _fields = (("cm1", "thrhopert"), ("cm1", "zvort")) cm1sim = "cm1_tornado_lofs/nc4_cm1_lofs_tornado_test.nc" @requires_ds(cm1sim) def test_cm1_mesh_fields(): ds = data_dir_load(cm1sim) assert_equal(str(ds), "nc4_cm1_lofs_tornado_test.nc") # run the small_patch_amr tests on safe fields ic = ds.domain_center for test in small_patch_amr(ds, _fields, input_center=ic, input_weight=None): test_cm1_mesh_fields.__name__ = test.description yield test # manually run the Grid and Field Values tests on dbz (do not want to run the # ProjectionValuesTest for this field) if can_run_ds(ds): dso = [None, ("sphere", (ic, (0.1, "unitary")))] for field in [("cm1", "dbz")]: yield GridValuesTest(ds, field) for dobj_name in dso: yield FieldValuesTest(ds, field, dobj_name) @requires_file(cm1sim) def test_CM1Dataset(): assert isinstance(data_dir_load(cm1sim), CM1Dataset) @requires_file(cm1sim) def test_units_override(): units_override_check(cm1sim) @requires_file(cm1sim) def test_dims_and_meta(): ds = data_dir_load(cm1sim) known_dims = ["time", "zf", "zh", "yf", "yh", "xf", "xh"] dims = ds.parameters["dimensions"] ## check the file for 2 grids and a time dimension - ## (time, xf, xh, yf, yh, zf, zh). The dimensions ending in ## f are the staggered velocity grid components and the ## dimensions ending in h are the scalar grid components assert_equal(len(dims), len(known_dims)) for kdim in known_dims: assert kdim in dims ## check the simulation time assert_equal(ds.current_time, 5500.0) @requires_file(cm1sim) def test_if_cm1(): ds = data_dir_load(cm1sim) assert float(ds.parameters["lofs_version"]) >= 1.0 ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3311524 yt-4.4.0/yt/frontends/open_pmd/0000755000175100001770000000000014714401715016023 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/open_pmd/__init__.py0000644000175100001770000000000014714401662020123 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/open_pmd/api.py0000644000175100001770000000025114714401662017145 0ustar00runnerdockerfrom . 
import tests from .data_structures import OpenPMDDataset, OpenPMDGrid, OpenPMDHierarchy from .fields import OpenPMDFieldInfo from .io import IOHandlerOpenPMDHDF5 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/open_pmd/data_structures.py0000644000175100001770000006576114714401662021631 0ustar00runnerdockerfrom functools import reduce from operator import mul from os import listdir, path from re import match import numpy as np from packaging.version import Version from yt.data_objects.index_subobjects.grid_patch import AMRGridPatch from yt.data_objects.static_output import Dataset from yt.data_objects.time_series import DatasetSeries from yt.frontends.open_pmd.fields import OpenPMDFieldInfo from yt.frontends.open_pmd.misc import get_component, is_const_component from yt.funcs import setdefaultattr from yt.geometry.grid_geometry_handler import GridIndex from yt.utilities.file_handler import HDF5FileHandler, valid_hdf5_signature from yt.utilities.logger import ytLogger as mylog from yt.utilities.on_demand_imports import _h5py as h5py ompd_known_versions = [Version(_) for _ in ("1.0.0", "1.0.1", "1.1.0")] opmd_required_attributes = ["openPMD", "basePath"] class OpenPMDGrid(AMRGridPatch): """Represents chunk of data on-disk. This defines the index and offset for every mesh and particle type. It also defines parents and children grids. Since openPMD does not have multiple levels of refinement there are no parents or children for any grid. """ _id_offset = 0 __slots__ = ["_level_id"] # Every particle species and mesh might have different hdf5-indices and offsets ftypes: list[str] | None = [] ptypes: list[str] | None = [] findex = 0 foffset = 0 pindex = 0 poffset = 0 def __init__(self, gid, index, level=-1, fi=0, fo=0, pi=0, po=0, ft=None, pt=None): AMRGridPatch.__init__(self, gid, filename=index.index_filename, index=index) if ft is None: ft = [] if pt is None: pt = [] self.findex = fi self.foffset = fo self.pindex = pi self.poffset = po self.ftypes = ft self.ptypes = pt self.Parent = None self.Children = [] self.Level = level def __str__(self): return "OpenPMDGrid_%04i (%s)" % (self.id, self.ActiveDimensions) class OpenPMDHierarchy(GridIndex): """Defines which fields and particles are created and read from disk. Furthermore it defines the characteristics of the grids. """ grid = OpenPMDGrid def __init__(self, ds, dataset_type="openPMD"): self.dataset_type = dataset_type self.dataset = ds self.index_filename = ds.parameter_filename self.directory = path.dirname(self.index_filename) GridIndex.__init__(self, ds, dataset_type) def _get_particle_type_counts(self): """Reads the active number of particles for every species. Returns ------- dict keys are ptypes values are integer counts of the ptype """ result = {} f = self.dataset._handle bp = self.dataset.base_path pp = self.dataset.particles_path try: for ptype in self.ds.particle_types_raw: if str(ptype) == "io": spec = list(f[bp + pp].keys())[0] else: spec = ptype axis = list(f[bp + pp + "/" + spec + "/position"].keys())[0] pos = f[bp + pp + "/" + spec + "/position/" + axis] if is_const_component(pos): result[ptype] = pos.attrs["shape"] else: result[ptype] = pos.len() except KeyError: result["io"] = 0 return result def _detect_output_fields(self): """Populates ``self.field_list`` with native fields (mesh and particle) on disk. Each entry is a tuple of two strings. The first element is the on-disk fluid type or particle type. The second element is the name of the field in yt. 
This string is later used for accessing the data. Convention suggests that the on-disk fluid type should be "openPMD", the on-disk particle type (for a single species of particles) is "io" or (for multiple species of particles) the particle name on-disk. """ f = self.dataset._handle bp = self.dataset.base_path mp = self.dataset.meshes_path pp = self.dataset.particles_path mesh_fields = [] try: meshes = f[bp + mp] for mname in meshes.keys(): try: mesh = meshes[mname] for axis in mesh.keys(): mesh_fields.append(mname.replace("_", "-") + "_" + axis) except AttributeError: # This is a h5py.Dataset (i.e. no axes) mesh_fields.append(mname.replace("_", "-")) except (KeyError, TypeError, AttributeError): pass self.field_list = [("openPMD", str(field)) for field in mesh_fields] particle_fields = [] try: particles = f[bp + pp] for pname in particles.keys(): species = particles[pname] for recname in species.keys(): record = species[recname] if is_const_component(record): # Record itself (e.g. particle_mass) is constant particle_fields.append( pname.replace("_", "-") + "_" + recname.replace("_", "-") ) elif "particlePatches" not in recname: try: # Create a field for every axis (x,y,z) of every # property (position) of every species (electrons) axes = list(record.keys()) if str(recname) == "position": recname = "positionCoarse" for axis in axes: particle_fields.append( pname.replace("_", "-") + "_" + recname.replace("_", "-") + "_" + axis ) except AttributeError: # Record is a dataset, does not have axes (e.g. weighting) particle_fields.append( pname.replace("_", "-") + "_" + recname.replace("_", "-") ) pass else: pass if len(list(particles.keys())) > 1: # There is more than one particle species, # use the specific names as field types self.field_list.extend( [ ( str(field).split("_")[0], ("particle_" + "_".join(str(field).split("_")[1:])), ) for field in particle_fields ] ) else: # Only one particle species, fall back to "io" self.field_list.extend( [ ("io", ("particle_" + "_".join(str(field).split("_")[1:]))) for field in particle_fields ] ) except (KeyError, TypeError, AttributeError): pass def _count_grids(self): """Sets ``self.num_grids`` to be the total number of grids in the simulation. The number of grids is determined by their respective memory footprint. 
""" f = self.dataset._handle bp = self.dataset.base_path mp = self.dataset.meshes_path pp = self.dataset.particles_path self.meshshapes = {} self.numparts = {} self.num_grids = 0 try: meshes = f[bp + mp] for mname in meshes.keys(): mesh = meshes[mname] if isinstance(mesh, h5py.Group): shape = mesh[list(mesh.keys())[0]].shape else: shape = mesh.shape spacing = tuple(mesh.attrs["gridSpacing"]) offset = tuple(mesh.attrs["gridGlobalOffset"]) unit_si = mesh.attrs["gridUnitSI"] self.meshshapes[mname] = (shape, spacing, offset, unit_si) except (KeyError, TypeError, AttributeError): pass try: particles = f[bp + pp] for pname in particles.keys(): species = particles[pname] if "particlePatches" in species.keys(): for patch, size in enumerate( species["/particlePatches/numParticles"] ): self.numparts[f"{pname}#{patch}"] = size else: axis = list(species["/position"].keys())[0] if is_const_component(species["/position/" + axis]): self.numparts[pname] = species["/position/" + axis].attrs[ "shape" ] else: self.numparts[pname] = species["/position/" + axis].len() except (KeyError, TypeError, AttributeError): pass # Limit values per grid by resulting memory footprint self.vpg = int(self.dataset.gridsize / 4) # 4Byte per value (f32) # Meshes of the same size do not need separate chunks for shape, *_ in set(self.meshshapes.values()): self.num_grids += min( shape[0], int(np.ceil(reduce(mul, shape) * self.vpg**-1)) ) # Same goes for particle chunks if they are not inside particlePatches patches = {} no_patches = {} for k, v in self.numparts.items(): if "#" in k: patches[k] = v else: no_patches[k] = v for size in set(no_patches.values()): self.num_grids += int(np.ceil(size * self.vpg**-1)) for size in patches.values(): self.num_grids += int(np.ceil(size * self.vpg**-1)) def _parse_index(self): """Fills each grid with appropriate properties (extent, dimensions, ...) This calculates the properties of every OpenPMDGrid based on the total number of grids in the simulation. The domain is divided into ``self.num_grids`` (roughly) equally sized chunks along the x-axis. ``grid_levels`` is always equal to 0 since we only have one level of refinement in openPMD. Notes ----- ``self.grid_dimensions`` is rounded to the nearest integer. Grid edges are calculated from this dimension. Grids with dimensions [0, 0, 0] are particle only. The others do not have any particles affiliated with them. 
""" f = self.dataset._handle bp = self.dataset.base_path pp = self.dataset.particles_path self.grid_levels.flat[:] = 0 self.grids = np.empty(self.num_grids, dtype="object") grid_index_total = 0 # Mesh grids for mesh in set(self.meshshapes.values()): (shape, spacing, offset, unit_si) = mesh shape = np.asarray(shape) spacing = np.asarray(spacing) offset = np.asarray(offset) # Total dimension of this grid domain_dimension = np.asarray(shape, dtype=np.int32) domain_dimension = np.append( domain_dimension, np.ones(3 - len(domain_dimension)) ) # Number of grids of this shape num_grids = min(shape[0], int(np.ceil(reduce(mul, shape) * self.vpg**-1))) gle = offset * unit_si # self.dataset.domain_left_edge gre = ( domain_dimension[: spacing.size] * unit_si * spacing + gle ) # self.dataset.domain_right_edge gle = np.append(gle, np.zeros(3 - len(gle))) gre = np.append(gre, np.ones(3 - len(gre))) grid_dim_offset = np.linspace( 0, domain_dimension[0], num_grids + 1, dtype=np.int32 ) grid_edge_offset = ( grid_dim_offset * float(domain_dimension[0]) ** -1 * (gre[0] - gle[0]) + gle[0] ) mesh_names = [] for mname, mdata in self.meshshapes.items(): if mesh == mdata: mesh_names.append(str(mname)) prev = 0 for grid in np.arange(num_grids): self.grid_dimensions[grid_index_total] = domain_dimension self.grid_dimensions[grid_index_total][0] = ( grid_dim_offset[grid + 1] - grid_dim_offset[grid] ) self.grid_left_edge[grid_index_total] = gle self.grid_left_edge[grid_index_total][0] = grid_edge_offset[grid] self.grid_right_edge[grid_index_total] = gre self.grid_right_edge[grid_index_total][0] = grid_edge_offset[grid + 1] self.grid_particle_count[grid_index_total] = 0 self.grids[grid_index_total] = self.grid( grid_index_total, self, 0, fi=prev, fo=self.grid_dimensions[grid_index_total][0], ft=mesh_names, ) prev += self.grid_dimensions[grid_index_total][0] grid_index_total += 1 handled_ptypes = [] # Particle grids for species, count in self.numparts.items(): if "#" in species: # This is a particlePatch spec = species.split("#") patch = f[bp + pp + "/" + spec[0] + "/particlePatches"] domain_dimension = np.ones(3, dtype=np.int32) for ind, axis in enumerate(list(patch["extent"].keys())): domain_dimension[ind] = patch["extent/" + axis][()][int(spec[1])] num_grids = int(np.ceil(count * self.vpg**-1)) gle = [] for axis in patch["offset"].keys(): gle.append( get_component(patch, "offset/" + axis, int(spec[1]), 1)[0] ) gle = np.asarray(gle) gle = np.append(gle, np.zeros(3 - len(gle))) gre = [] for axis in patch["extent"].keys(): gre.append( get_component(patch, "extent/" + axis, int(spec[1]), 1)[0] ) gre = np.asarray(gre) gre = np.append(gre, np.ones(3 - len(gre))) np.add(gle, gre, gre) npo = patch["numParticlesOffset"][()].item(int(spec[1])) particle_count = np.linspace( npo, npo + count, num_grids + 1, dtype=np.int32 ) particle_names = [str(spec[0])] elif str(species) not in handled_ptypes: domain_dimension = self.dataset.domain_dimensions num_grids = int(np.ceil(count * self.vpg**-1)) gle = self.dataset.domain_left_edge gre = self.dataset.domain_right_edge particle_count = np.linspace(0, count, num_grids + 1, dtype=np.int32) particle_names = [] for pname, size in self.numparts.items(): if size == count: # Since this is not part of a particlePatch, # we can include multiple same-sized ptypes particle_names.append(str(pname)) handled_ptypes.append(str(pname)) else: # A grid with this exact particle count has already been created continue for grid in np.arange(num_grids): self.grid_dimensions[grid_index_total] = 
domain_dimension self.grid_left_edge[grid_index_total] = gle self.grid_right_edge[grid_index_total] = gre self.grid_particle_count[grid_index_total] = ( particle_count[grid + 1] - particle_count[grid] ) * len(particle_names) self.grids[grid_index_total] = self.grid( grid_index_total, self, 0, pi=particle_count[grid], po=particle_count[grid + 1] - particle_count[grid], pt=particle_names, ) grid_index_total += 1 def _populate_grid_objects(self): """This initializes all grids. Additionally, it should set up Children and Parent lists on each grid object. openPMD is not adaptive and thus there are no Children and Parents for any grid. """ for i in np.arange(self.num_grids): self.grids[i]._prepare_grid() self.grids[i]._setup_dx() self.max_level = 0 class OpenPMDDataset(Dataset): """Contains all the required information of a single iteration of the simulation. Notes ----- It is assumed that - all meshes cover the same region. Their resolution can be different. - all particles reside in this same region exclusively. - particle and mesh positions are *absolute* with respect to the simulation origin. """ _load_requirements = ["h5py"] _index_class = OpenPMDHierarchy _field_info_class = OpenPMDFieldInfo def __init__( self, filename, dataset_type="openPMD", storage_filename=None, units_override=None, unit_system="mks", **kwargs, ): self._handle = HDF5FileHandler(filename) self.gridsize = kwargs.pop("open_pmd_virtual_gridsize", 10**9) self.standard_version = Version(self._handle.attrs["openPMD"].decode()) self.iteration = kwargs.pop("iteration", None) self._set_paths(self._handle, path.dirname(filename), self.iteration) Dataset.__init__( self, filename, dataset_type, units_override=units_override, unit_system=unit_system, ) self.storage_filename = storage_filename self.fluid_types += ("openPMD",) try: particles = tuple( str(c) for c in self._handle[self.base_path + self.particles_path].keys() ) if len(particles) > 1: # Only use on-disk particle names if there is more than one species self.particle_types = particles mylog.debug("self.particle_types: %s", self.particle_types) self.particle_types_raw = self.particle_types self.particle_types = tuple(self.particle_types) except (KeyError, TypeError, AttributeError): pass def _set_paths(self, handle, path, iteration): """Parses relevant hdf5-paths out of ``handle``. Parameters ---------- handle : h5py.File path : str (absolute) filepath for current hdf5 container """ iterations = [] if iteration is None: iteration = list(handle["/data"].keys())[0] encoding = handle.attrs["iterationEncoding"].decode() if "groupBased" in encoding: iterations = list(handle["/data"].keys()) mylog.info("Found %s iterations in file", len(iterations)) elif "fileBased" in encoding: itformat = handle.attrs["iterationFormat"].decode().split("/")[-1] regex = "^" + itformat.replace("%T", "[0-9]+") + "$" if path == "": mylog.warning( "For file based iterations, please use absolute file paths!" ) pass for filename in listdir(path): if match(regex, filename): iterations.append(filename) mylog.info("Found %s iterations in directory", len(iterations)) if len(iterations) == 0: mylog.warning("No iterations found!") if "groupBased" in encoding and len(iterations) > 1: mylog.warning("Only chose to load one iteration (%s)", iteration) self.base_path = f"/data/{iteration}/" try: self.meshes_path = self._handle["/"].attrs["meshesPath"].decode() handle[self.base_path + self.meshes_path] except KeyError: if self.standard_version <= Version("1.1.0"): mylog.info( "meshesPath not present in file. 
" "Assuming file contains no meshes and has a domain extent of 1m^3!" ) self.meshes_path = None else: raise try: self.particles_path = self._handle["/"].attrs["particlesPath"].decode() handle[self.base_path + self.particles_path] except KeyError: if self.standard_version <= Version("1.1.0"): mylog.info( "particlesPath not present in file." " Assuming file contains no particles!" ) self.particles_path = None else: raise def _set_code_unit_attributes(self): """Handle conversion between different physical units and the code units. Every dataset in openPMD can have different code <-> physical scaling. The individual factor is obtained by multiplying with "unitSI" reading getting data from disk. """ setdefaultattr(self, "length_unit", self.quan(1.0, "m")) setdefaultattr(self, "mass_unit", self.quan(1.0, "kg")) setdefaultattr(self, "time_unit", self.quan(1.0, "s")) setdefaultattr(self, "velocity_unit", self.quan(1.0, "m/s")) setdefaultattr(self, "magnetic_unit", self.quan(1.0, "T")) def _parse_parameter_file(self): """Read in metadata describing the overall data on-disk.""" f = self._handle bp = self.base_path mp = self.meshes_path self.parameters = 0 self._periodicity = np.zeros(3, dtype="bool") self.refine_by = 1 self.cosmological_simulation = 0 try: shapes = {} left_edges = {} right_edges = {} meshes = f[bp + mp] for mname in meshes.keys(): mesh = meshes[mname] if isinstance(mesh, h5py.Group): shape = np.asarray(mesh[list(mesh.keys())[0]].shape) else: shape = np.asarray(mesh.shape) spacing = np.asarray(mesh.attrs["gridSpacing"]) offset = np.asarray(mesh.attrs["gridGlobalOffset"]) unit_si = np.asarray(mesh.attrs["gridUnitSI"]) le = offset * unit_si re = le + shape * unit_si * spacing shapes[mname] = shape left_edges[mname] = le right_edges[mname] = re lowest_dim = np.min([len(i) for i in shapes.values()]) shapes = np.asarray([i[:lowest_dim] for i in shapes.values()]) left_edges = np.asarray([i[:lowest_dim] for i in left_edges.values()]) right_edges = np.asarray([i[:lowest_dim] for i in right_edges.values()]) fs = [] dle = [] dre = [] for i in np.arange(lowest_dim): fs.append(np.max(shapes.transpose()[i])) dle.append(np.min(left_edges.transpose()[i])) dre.append(np.min(right_edges.transpose()[i])) self.dimensionality = len(fs) self.domain_dimensions = np.append(fs, np.ones(3 - self.dimensionality)) self.domain_left_edge = np.append(dle, np.zeros(3 - len(dle))) self.domain_right_edge = np.append(dre, np.ones(3 - len(dre))) except (KeyError, TypeError, AttributeError): if self.standard_version <= Version("1.1.0"): self.dimensionality = 3 self.domain_dimensions = np.ones(3, dtype=np.float64) self.domain_left_edge = np.zeros(3, dtype=np.float64) self.domain_right_edge = np.ones(3, dtype=np.float64) else: raise self.current_time = f[bp].attrs["time"] * f[bp].attrs["timeUnitSI"] @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: """Checks whether the supplied file can be read by this frontend.""" if not valid_hdf5_signature(filename): return False if cls._missing_load_requirements(): return False try: with h5py.File(filename, mode="r") as f: attrs = list(f["/"].attrs.keys()) for i in opmd_required_attributes: if i not in attrs: return False if Version(f.attrs["openPMD"].decode()) not in ompd_known_versions: return False if f.attrs["iterationEncoding"].decode() == "fileBased": return True return False except OSError: return False class OpenPMDDatasetSeries(DatasetSeries): _pre_outputs = () _dataset_cls = OpenPMDDataset parallel = True setup_function = None mixed_dataset_types = 
False def __init__(self, filename): super().__init__([]) self.handle = h5py.File(filename, mode="r") self.filename = filename self._pre_outputs = sorted( np.asarray(list(self.handle["/data"].keys()), dtype="int64") ) def __iter__(self): for it in self._pre_outputs: ds = self._load(it, **self.kwargs) self._setup_function(ds) yield ds def __getitem__(self, key): if isinstance(key, int): o = self._load(key) self._setup_function(o) return o else: raise KeyError(f"Unknown iteration {key}") def _load(self, it, **kwargs): return OpenPMDDataset(self.filename, iteration=it) class OpenPMDGroupBasedDataset(Dataset): _load_requirements = ["h5py"] _index_class = OpenPMDHierarchy _field_info_class = OpenPMDFieldInfo def __new__(cls, filename, *args, **kwargs): ret = object.__new__(OpenPMDDatasetSeries) ret.__init__(filename) return ret @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if not valid_hdf5_signature(filename): return False if cls._missing_load_requirements(): return False try: with h5py.File(filename, mode="r") as f: attrs = list(f["/"].attrs.keys()) for i in opmd_required_attributes: if i not in attrs: return False if Version(f.attrs["openPMD"].decode()) not in ompd_known_versions: return False if f.attrs["iterationEncoding"].decode() == "groupBased": return True return False except OSError: return False ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/open_pmd/definitions.py0000644000175100001770000000000014714401662020677 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/open_pmd/fields.py0000644000175100001770000002370714714401662017655 0ustar00runnerdockerimport numpy as np from yt.fields.field_info_container import FieldInfoContainer from yt.fields.magnetic_field import setup_magnetic_field_aliases from yt.frontends.open_pmd.misc import is_const_component, parse_unit_dimension from yt.units.yt_array import YTQuantity from yt.utilities.logger import ytLogger as mylog from yt.utilities.on_demand_imports import _h5py as h5py from yt.utilities.physical_constants import mu_0, speed_of_light def setup_poynting_vector(self): def _get_poyn(axis): def poynting(field, data): u = mu_0**-1 if axis in "x": return u * ( data["openPMD", "E_y"] * data["gas", "magnetic_field_z"] - data["openPMD", "E_z"] * data["gas", "magnetic_field_y"] ) elif axis in "y": return u * ( data["openPMD", "E_z"] * data["gas", "magnetic_field_x"] - data["openPMD", "E_x"] * data["gas", "magnetic_field_z"] ) elif axis in "z": return u * ( data["openPMD", "E_x"] * data["gas", "magnetic_field_y"] - data["openPMD", "E_y"] * data["gas", "magnetic_field_x"] ) return poynting for ax in "xyz": self.add_field( ("openPMD", f"poynting_vector_{ax}"), sampling_type="cell", function=_get_poyn(ax), units="W/m**2", ) def setup_kinetic_energy(self, ptype): def _kin_en(field, data): p2 = ( data[ptype, "particle_momentum_x"] ** 2 + data[ptype, "particle_momentum_y"] ** 2 + data[ptype, "particle_momentum_z"] ** 2 ) mass = data[ptype, "particle_mass"] * data[ptype, "particle_weighting"] return ( speed_of_light * np.sqrt(p2 + mass**2 * speed_of_light**2) - mass * speed_of_light**2 ) self.add_field( (ptype, "particle_kinetic_energy"), sampling_type="particle", function=_kin_en, units="kg*m**2/s**2", ) def setup_velocity(self, ptype): def _get_vel(axis): def velocity(field, data): c = speed_of_light momentum = data[ptype, f"particle_momentum_{axis}"] mass = data[ptype, 
"particle_mass"] weighting = data[ptype, "particle_weighting"] return momentum / np.sqrt((mass * weighting) ** 2 + (momentum**2) / (c**2)) return velocity for ax in "xyz": self.add_field( (ptype, f"particle_velocity_{ax}"), sampling_type="particle", function=_get_vel(ax), units="m/s", ) def setup_absolute_positions(self, ptype): def _abs_pos(axis): def ap(field, data): return np.add( data[ptype, f"particle_positionCoarse_{axis}"], data[ptype, f"particle_positionOffset_{axis}"], ) return ap for ax in "xyz": self.add_field( (ptype, f"particle_position_{ax}"), sampling_type="particle", function=_abs_pos(ax), units="m", ) class OpenPMDFieldInfo(FieldInfoContainer): r"""Specifies which fields from the dataset yt should know about. ``self.known_other_fields`` and ``self.known_particle_fields`` must be populated. Entries for both of these lists must be tuples of the form ("name", ("units", ["fields", "to", "alias"], "display_name")) These fields will be represented and handled in yt in the way you define them here. The fields defined in both ``self.known_other_fields`` and ``self.known_particle_fields`` will only be added to a dataset (with units, aliases, etc), if they match any entry in the ``OpenPMDHierarchy``'s ``self.field_list``. Notes ----- Contrary to many other frontends, we dynamically obtain the known fields from the simulation output. The openPMD markup is extremely flexible - names, dimensions and the number of individual datasets can (and very likely will) vary. openPMD states that record names and their components are only allowed to contain * characters a-Z, * the numbers 0-9 * and the underscore _ * (equivalently, the regex \w). Since yt widely uses the underscore in field names, openPMD's underscores (_) are replaced by hyphen (-). Derived fields will automatically be set up, if names and units of your known on-disk (or manually derived) fields match the ones in [1]. References ---------- * http://yt-project.org/docs/dev/analyzing/fields.html * http://yt-project.org/docs/dev/developing/creating_frontend.html#data-meaning-structures * https://github.com/openPMD/openPMD-standard/blob/latest/STANDARD.md * [1] http://yt-project.org/docs/dev/reference/field_list.html#universal-fields """ _mag_fields: list[str] = [] def __init__(self, ds, field_list): f = ds._handle bp = ds.base_path mp = ds.meshes_path pp = ds.particles_path try: fields = f[bp + mp] for fname in fields.keys(): field = fields[fname] if isinstance(field, h5py.Dataset) or is_const_component(field): # Don't consider axes. 
# This appears to be a vector field of single dimensionality ytname = str("_".join([fname.replace("_", "-")])) parsed = parse_unit_dimension( np.asarray(field.attrs["unitDimension"], dtype="int64") ) unit = str(YTQuantity(1, parsed).units) aliases = [] # Save a list of magnetic fields for aliasing later on # We can not reasonably infer field type/unit by name in openPMD if unit == "T" or unit == "kg/(A*s**2)": self._mag_fields.append(ytname) self.known_other_fields += ((ytname, (unit, aliases, None)),) else: for axis in field.keys(): ytname = str("_".join([fname.replace("_", "-"), axis])) parsed = parse_unit_dimension( np.asarray(field.attrs["unitDimension"], dtype="int64") ) unit = str(YTQuantity(1, parsed).units) aliases = [] # Save a list of magnetic fields for aliasing later on # We can not reasonably infer field type by name in openPMD if unit == "T" or unit == "kg/(A*s**2)": self._mag_fields.append(ytname) self.known_other_fields += ((ytname, (unit, aliases, None)),) for i in self.known_other_fields: mylog.debug("open_pmd - known_other_fields - %s", i) except (KeyError, TypeError, AttributeError): pass try: particles = f[bp + pp] for pname in particles.keys(): species = particles[pname] for recname in species.keys(): try: record = species[recname] parsed = parse_unit_dimension(record.attrs["unitDimension"]) unit = str(YTQuantity(1, parsed).units) ytattrib = str(recname).replace("_", "-") if ytattrib == "position": # Symbolically rename position to preserve yt's # interpretation of the pfield particle_position is later # derived in setup_absolute_positions in the way yt expects ytattrib = "positionCoarse" if isinstance(record, h5py.Dataset) or is_const_component( record ): name = ["particle", ytattrib] self.known_particle_fields += ( (str("_".join(name)), (unit, [], None)), ) else: for axis in record.keys(): aliases = [] name = ["particle", ytattrib, axis] ytname = str("_".join(name)) self.known_particle_fields += ( (ytname, (unit, aliases, None)), ) except KeyError: if recname != "particlePatches": mylog.info( "open_pmd - %s_%s does not seem to have " "unitDimension", pname, recname, ) for i in self.known_particle_fields: mylog.debug("open_pmd - known_particle_fields - %s", i) except (KeyError, TypeError, AttributeError): pass super().__init__(ds, field_list) def setup_fluid_fields(self): """Defines which derived mesh fields to create. If a field can not be calculated, it will simply be skipped. """ # Set up aliases first so the setup for poynting can use them if len(self._mag_fields) > 0: setup_magnetic_field_aliases(self, "openPMD", self._mag_fields) setup_poynting_vector(self) def setup_particle_fields(self, ptype): """Defines which derived particle fields to create. This will be called for every entry in `OpenPMDDataset``'s ``self.particle_types``. If a field can not be calculated, it will simply be skipped. 
""" setup_absolute_positions(self, ptype) setup_kinetic_energy(self, ptype) setup_velocity(self, ptype) super().setup_particle_fields(ptype) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/open_pmd/io.py0000644000175100001770000001732514714401662017015 0ustar00runnerdockerfrom collections import defaultdict import numpy as np from yt.frontends.open_pmd.misc import get_component, is_const_component from yt.geometry.selection_routines import GridSelector from yt.utilities.io_handler import BaseIOHandler class IOHandlerOpenPMDHDF5(BaseIOHandler): _field_dtype = "float32" _dataset_type = "openPMD" def __init__(self, ds, *args, **kwargs): self.ds = ds self._handle = ds._handle self.base_path = ds.base_path self.meshes_path = ds.meshes_path self.particles_path = ds.particles_path self._array_fields = {} self._cached_ptype = "" def _fill_cache(self, ptype, index=0, offset=None): """Fills the particle position cache for the ``ptype``. Parameters ---------- ptype : str The on-disk name of the particle species index : int, optional offset : int, optional """ if str((ptype, index, offset)) not in self._cached_ptype: self._cached_ptype = str((ptype, index, offset)) pds = self._handle[self.base_path + self.particles_path + "/" + ptype] axes = list(pds["position"].keys()) if offset is None: if is_const_component(pds["position/" + axes[0]]): offset = pds["position/" + axes[0]].attrs["shape"] else: offset = pds["position/" + axes[0]].len() self.cache = np.empty((3, offset), dtype=np.float64) for i in np.arange(3): ax = "xyz"[i] if ax in axes: np.add( get_component(pds, "position/" + ax, index, offset), get_component(pds, "positionOffset/" + ax, index, offset), self.cache[i], ) else: # Pad accordingly with zeros to make 1D/2D datasets compatible # These have to be the same shape as the existing axes since that # equals the number of particles self.cache[i] = np.zeros(offset) def _read_particle_selection(self, chunks, selector, fields): """Read particle fields for particle species masked by a selection. Parameters ---------- chunks A list of chunks A chunk is a list of grids selector A region (inside your domain) specifying which parts of the field you want to read. 
See [1] and [2] fields : array_like Tuples (ptype, pfield) representing a field Returns ------- dict keys are tuples (ptype, pfield) representing a field values are (N,) ndarrays with data from that field """ f = self._handle bp = self.base_path pp = self.particles_path ds = f[bp + pp] unions = self.ds.particle_unions chunks = list(chunks) # chunks is a generator rv = {} ind = {} particle_count = {} ptf = defaultdict(list) # ParticleTypes&Fields rfm = defaultdict(list) # RequestFieldMapping for ptype, pname in fields: pfield = (ptype, pname) # Overestimate the size of all pfields so they include all particles # and shrink it later particle_count[pfield] = 0 if ptype in unions: for pt in unions[ptype]: particle_count[pfield] += self.ds.particle_type_counts[pt] ptf[pt].append(pname) rfm[pt, pname].append(pfield) else: particle_count[pfield] = self.ds.particle_type_counts[ptype] ptf[ptype].append(pname) rfm[pfield].append(pfield) rv[pfield] = np.empty((particle_count[pfield],), dtype=np.float64) ind[pfield] = 0 for ptype in ptf: for chunk in chunks: for grid in chunk.objs: if str(ptype) == "io": species = list(ds.keys())[0] else: species = ptype if species not in grid.ptypes: continue # read particle coords into cache self._fill_cache(species, grid.pindex, grid.poffset) mask = selector.select_points( self.cache[0], self.cache[1], self.cache[2], 0.0 ) if mask is None: continue pds = ds[species] for field in ptf[ptype]: component = "/".join(field.split("_")[1:]) component = component.replace("positionCoarse", "position") component = component.replace("-", "_") data = get_component(pds, component, grid.pindex, grid.poffset)[ mask ] for request_field in rfm[ptype, field]: rv[request_field][ ind[request_field] : ind[request_field] + data.shape[0] ] = data ind[request_field] += data.shape[0] for field in fields: rv[field] = rv[field][: ind[field]] return rv def _read_fluid_selection(self, chunks, selector, fields, size): """Reads given fields masked by a given selection. Parameters ---------- chunks A list of chunks A chunk is a list of grids selector A region (inside your domain) specifying which parts of the field you want to read. See [1] and [2] fields : array_like Tuples (fname, ftype) representing a field size : int Size of the data to read Returns ------- dict keys are tuples (ftype, fname) representing a field values are flat (``size``,) ndarrays with data from that field """ f = self._handle bp = self.base_path mp = self.meshes_path ds = f[bp + mp] chunks = list(chunks) rv = {} ind = {} if isinstance(selector, GridSelector): if not (len(chunks) == len(chunks[0].objs) == 1): raise RuntimeError if size is None: size = sum(g.count(selector) for chunk in chunks for g in chunk.objs) for field in fields: rv[field] = np.empty(size, dtype=np.float64) ind[field] = 0 for ftype, fname in fields: field = (ftype, fname) for chunk in chunks: for grid in chunk.objs: mask = grid._get_selector_mask(selector) if mask is None: continue component = fname.replace("_", "/").replace("-", "_") if component.split("/")[0] not in grid.ftypes: data = np.full(grid.ActiveDimensions, 0, dtype=np.float64) else: data = get_component(ds, component, grid.findex, grid.foffset) # The following is a modified AMRGridPatch.select(...) 
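                        # (In AMRGridPatch.select the data buffer is already
                        # 3D; here a 2D mesh dataset may come back shaped
                        # (x, y), so forcing ``mask.shape`` onto it pads the
                        # trailing axis to (x, y, 1) before masking.)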
                        data.shape = (
                            mask.shape
                        )  # Workaround - casts a 2D (x,y) array to 3D (x,y,1)
                        count = grid.count(selector)
                        rv[field][ind[field] : ind[field] + count] = data[mask]
                        ind[field] += count
        for field in fields:
            rv[field] = rv[field][: ind[field]]
            # flatten() returns a copy rather than operating in place, so the
            # result must be assigned back
            rv[field] = rv[field].flatten()
        return rv
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0
yt-4.4.0/yt/frontends/open_pmd/misc.py0000644000175100001770000000614314714401662017335 0ustar00runnerdockerimport numpy as np
from yt.utilities.logger import ytLogger as mylog


def parse_unit_dimension(unit_dimension):
    r"""Transforms an openPMD unitDimension into a string.

    Parameters
    ----------
    unit_dimension : array_like
        integer array of length 7 with one entry for the dimensional component
        of every SI unit

        [0] length L,
        [1] mass M,
        [2] time T,
        [3] electric current I,
        [4] thermodynamic temperature theta,
        [5] amount of substance N,
        [6] luminous intensity J

    References
    ----------
    https://github.com/openPMD/openPMD-standard/blob/latest/STANDARD.md#unit-systems-and-dimensionality

    Returns
    -------
    str

    Examples
    --------
    >>> velocity = [1.0, 0.0, -1.0, 0.0, 0.0, 0.0, 0.0]
    >>> parse_unit_dimension(velocity)
    'm**1*s**-1'

    >>> magnetic_field = [0.0, 1.0, -2.0, -1.0, 0.0, 0.0, 0.0]
    >>> parse_unit_dimension(magnetic_field)
    'kg**1*s**-2*A**-1'
    """
    if len(unit_dimension) != 7:
        mylog.error("SI must have 7 base dimensions!")
    unit_dimension = np.asarray(unit_dimension, dtype="int64")
    dim = []
    # SI base units in openPMD's unitDimension order; index 4 is
    # thermodynamic temperature, whose base unit is the kelvin
    si = ["m", "kg", "s", "A", "K", "mol", "cd"]
    for i in np.arange(7):
        if unit_dimension[i] != 0:
            dim.append(f"{si[i]}**{unit_dimension[i]}")
    return "*".join(dim)


def is_const_component(record_component):
    """Determines whether a group or dataset in the HDF5 file is constant.

    Parameters
    ----------
    record_component : h5py.Group or h5py.Dataset

    Returns
    -------
    bool
        True if constant, False otherwise

    References
    ----------
    .. https://github.com/openPMD/openPMD-standard/blob/latest/STANDARD.md,
       section 'Constant Record Components'
    """
    return "value" in record_component.attrs.keys()


def get_component(group, component_name, index=0, offset=None):
    """Grabs a dataset component from a group as a whole or sliced.

    Parameters
    ----------
    group : h5py.Group
    component_name : str
        relative path of the component in the group
    index : int, optional
        first entry along the first axis to read
    offset : int, optional
        number of entries to read
        if not supplied, every entry after index is returned

    Notes
    -----
    This scales every entry of the component with the respective "unitSI".
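
    A hedged usage sketch (the group layout here is hypothetical; ``f`` is an
    open ``h5py.File``)::

        mesh = f["/data/100/fields"]
        ex_all = get_component(mesh, "E/x")         # full dataset, unitSI-scaled
        ex_part = get_component(mesh, "E/x", 8, 4)  # entries 8 through 11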
Returns ------- ndarray (N,) 1D in case of particle data (O,P,Q) 1D/2D/3D in case of mesh data """ record_component = group[component_name] unit_si = record_component.attrs["unitSI"] if is_const_component(record_component): shape = np.asarray(record_component.attrs["shape"]) if offset is None: shape[0] -= index else: shape[0] = offset # component is constant, craft an array by hand return np.full(shape, record_component.attrs["value"] * unit_si) else: if offset is not None: offset += index # component is a dataset, return it (possibly masked) return np.multiply(record_component[index:offset], unit_si) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3311524 yt-4.4.0/yt/frontends/open_pmd/tests/0000755000175100001770000000000014714401715017165 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/open_pmd/tests/__init__.py0000644000175100001770000000000014714401662021265 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/open_pmd/tests/test_outputs.py0000644000175100001770000001404414714401662022325 0ustar00runnerdockerfrom itertools import product import numpy as np from numpy.testing import assert_almost_equal, assert_array_equal, assert_equal from yt.frontends.open_pmd.data_structures import OpenPMDDataset from yt.loaders import load from yt.testing import requires_file, requires_module from yt.utilities.answer_testing.framework import data_dir_load twoD = "example-2d/hdf5/data00000100.h5" threeD = "example-3d/hdf5/data00000100.h5" noFields = "no_fields/data00000400.h5" noParticles = "no_particles/data00000400.h5" groupBased = "singleParticle/simData.h5" particle_fields = [ "particle_charge", "particle_mass", "particle_momentum_x", "particle_momentum_y", "particle_momentum_z", "particle_positionCoarse_x", "particle_positionCoarse_y", "particle_positionCoarse_z", "particle_positionOffset_x", "particle_positionOffset_y", "particle_positionOffset_z", "particle_weighting", ] @requires_module("h5py") @requires_file(threeD) def test_3d_out(): ds = data_dir_load(threeD) particle_types = ["all", "io", "nbody"] field_list = list(product(particle_types, particle_fields)) field_list += list(product(("openPMD",), ("E_x", "E_y", "E_z", "rho"))) domain_dimensions = [26, 26, 201] * np.ones_like(ds.domain_dimensions) domain_width = [2.08e-05, 2.08e-05, 2.01e-05] * np.ones_like(ds.domain_left_edge) assert isinstance(ds, OpenPMDDataset) assert_equal(str(ds), "data00000100.h5") assert_equal(ds.dimensionality, 3) assert_equal(ds.particle_types_raw, ("io",)) assert "all" in ds.particle_unions assert_array_equal(ds.field_list, field_list) assert_array_equal(ds.domain_dimensions, domain_dimensions) assert_almost_equal( ds.current_time, 3.28471214521e-14 * np.ones_like(ds.current_time) ) assert_almost_equal(ds.domain_right_edge - ds.domain_left_edge, domain_width) @requires_module("h5py") @requires_file(twoD) def test_2d_out(): ds = data_dir_load(twoD) particle_types = ("Hydrogen1+", "all", "electrons", "nbody") field_list = list(product(particle_types, particle_fields)) field_list += [ ("openPMD", "B_x"), ("openPMD", "B_y"), ("openPMD", "B_z"), ("openPMD", "E_x"), ("openPMD", "E_y"), ("openPMD", "E_z"), ("openPMD", "J_x"), ("openPMD", "J_y"), ("openPMD", "J_z"), ("openPMD", "rho"), ] domain_dimensions = [51, 201, 1] * np.ones_like(ds.domain_dimensions) domain_width = [3.06e-05, 2.01e-05, 1e0] * 
np.ones_like(ds.domain_left_edge) assert isinstance(ds, OpenPMDDataset) assert_equal(str(ds), "data00000100.h5") assert_equal(ds.dimensionality, 2) assert_equal(ds.particle_types_raw, ("Hydrogen1+", "electrons")) assert "all" in ds.particle_unions assert_array_equal(ds.field_list, field_list) assert_array_equal(ds.domain_dimensions, domain_dimensions) assert_almost_equal( ds.current_time, 3.29025596712e-14 * np.ones_like(ds.current_time) ) assert_almost_equal(ds.domain_right_edge - ds.domain_left_edge, domain_width) @requires_module("h5py") @requires_file(noFields) def test_no_fields_out(): ds = data_dir_load(noFields) particle_types = ("all", "io", "nbody") no_fields_pfields = sorted(particle_fields + ["particle_id"]) field_list = list(product(particle_types, no_fields_pfields)) domain_dimensions = [1, 1, 1] * np.ones_like(ds.domain_dimensions) domain_width = [1, 1, 1] * np.ones_like(ds.domain_left_edge) assert isinstance(ds, OpenPMDDataset) assert_equal(str(ds), "data00000400.h5") assert_equal(ds.dimensionality, 3) assert_equal(ds.particle_types_raw, ("io",)) assert "all" in ds.particle_unions assert_array_equal(ds.field_list, field_list) assert_array_equal(ds.domain_dimensions, domain_dimensions) assert_almost_equal( ds.current_time, 1.3161023868481013e-13 * np.ones_like(ds.current_time) ) assert_almost_equal(ds.domain_right_edge - ds.domain_left_edge, domain_width) @requires_module("h5py") @requires_file(noParticles) def test_no_particles_out(): ds = data_dir_load(noParticles) field_list = [ ("openPMD", "E_x"), ("openPMD", "E_y"), ("openPMD", "E_z"), ("openPMD", "rho"), ] domain_dimensions = [51, 201, 1] * np.ones_like(ds.domain_dimensions) domain_width = [3.06e-05, 2.01e-05, 1e0] * np.ones_like(ds.domain_left_edge) assert isinstance(ds, OpenPMDDataset) assert_equal(str(ds), "data00000400.h5") assert_equal(ds.dimensionality, 2) assert_equal(ds.particle_types_raw, ("io",)) assert "all" not in ds.particle_unions assert_array_equal(ds.field_list, field_list) assert_array_equal(ds.domain_dimensions, domain_dimensions) assert_almost_equal( ds.current_time, 1.3161023868481013e-13 * np.ones_like(ds.current_time) ) assert_almost_equal(ds.domain_right_edge - ds.domain_left_edge, domain_width) @requires_module("h5py") @requires_file(groupBased) def test_groupBased_out(): dss = load(groupBased) particle_types = ("all", "io", "nbody") field_list = list(product(particle_types, particle_fields)) field_list += [ ("openPMD", "J_x"), ("openPMD", "J_y"), ("openPMD", "J_z"), ("openPMD", "e-chargeDensity"), ] domain_dimensions = [32, 64, 64] * np.ones_like(dss[0].domain_dimensions) domain_width = [0.0002752, 0.0005504, 0.0005504] * np.ones_like( dss[0].domain_left_edge ) assert_equal(len(dss), 101) for i in range(0, len(dss), 20): # Test only every 20th ds out of the series ds = dss[i] assert_equal(str(ds), "simData.h5") assert_equal(ds.dimensionality, 3) assert_equal(ds.particle_types_raw, ("io",)) assert_array_equal(ds.field_list, field_list) assert_array_equal(ds.domain_dimensions, domain_dimensions) assert ds.current_time >= np.zeros_like(ds.current_time) assert ds.current_time <= 1.6499999999999998e-12 * np.ones_like(ds.current_time) assert_almost_equal(ds.domain_right_edge - ds.domain_left_edge, domain_width) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3351526 yt-4.4.0/yt/frontends/owls/0000755000175100001770000000000014714401715015206 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 
yt-4.4.0/yt/frontends/owls/__init__.py0000644000175100001770000000000014714401662017306 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/owls/api.py0000644000175100001770000000025514714401662016334 0ustar00runnerdockerfrom . import tests from .data_structures import OWLSDataset from .fields import OWLSFieldInfo from .io import IOHandlerOWLS from .simulation_handling import OWLSSimulation ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/owls/data_structures.py0000644000175100001770000000461114714401662020777 0ustar00runnerdockerimport os import yt.units from yt.frontends.gadget.data_structures import GadgetHDF5Dataset from yt.utilities.definitions import sec_conversion from yt.utilities.on_demand_imports import _h5py as h5py from .fields import OWLSFieldInfo class OWLSDataset(GadgetHDF5Dataset): _load_requirements = ["h5py"] _particle_mass_name = "Mass" _field_info_class = OWLSFieldInfo _time_readin = "Time_GYR" def _parse_parameter_file(self): # read values from header hvals = self._get_hvals() self.parameters = hvals # set features common to OWLS and Eagle self._set_owls_eagle() # Set time from value in header self.current_time = ( hvals[self._time_readin] * sec_conversion["Gyr"] * yt.units.s ) def _set_code_unit_attributes(self): self._set_owls_eagle_units() @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if cls._missing_load_requirements(): return False need_groups = ["Constants", "Header", "Parameters", "Units"] veto_groups = [ "SUBFIND", "FOF", "PartType0/ChemistryAbundances", "PartType0/ChemicalAbundances", "RuntimePars", "HashTable", ] valid = True valid_fname = filename # If passed arg is a directory, look for the .0 file in that dir if os.path.isdir(filename): valid_files = [] for f in os.listdir(filename): fname = os.path.join(filename, f) fext = os.path.splitext(fname)[-1] if ( (".0" in f) and (fext not in {".ewah", ".kdtree"}) and os.path.isfile(fname) ): valid_files.append(fname) if len(valid_files) == 0: valid = False elif len(valid_files) > 1: valid = False else: valid_fname = valid_files[0] try: fileh = h5py.File(valid_fname, mode="r") for ng in need_groups: if ng not in fileh["/"]: valid = False for vg in veto_groups: if vg in fileh["/"]: valid = False fileh.close() except Exception: valid = False return valid ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/owls/fields.py0000644000175100001770000003024414714401662017032 0ustar00runnerdockerimport os import numpy as np from yt.config import ytcfg from yt.fields.species_fields import ( add_species_field_by_density, add_species_field_by_fraction, ) from yt.frontends.sph.fields import SPHFieldInfo from yt.funcs import download_file, mylog from . import owls_ion_tables as oit def _get_ion_mass_frac(ion, ftype, itab, data): # get element symbol from ion string. ion string will # be a member of the tuple _ions (i.e. 
si13) # -------------------------------------------------------- if ion[0:2].isalpha(): symbol = ion[0:2].capitalize() else: symbol = ion[0:1].capitalize() # mass fraction for the element # -------------------------------------------------------- m_frac = data[ftype, symbol + "_fraction"] # get nH and T for lookup # -------------------------------------------------------- log_nH = np.log10(data["PartType0", "H_number_density"]) log_T = np.log10(data["PartType0", "Temperature"]) # get name of owls_ion_file for given ion # -------------------------------------------------------- itab.set_iz(data.ds.current_redshift) # find ion balance using log nH and log T # -------------------------------------------------------- i_frac = itab.interp(log_nH, log_T) return i_frac, m_frac class OWLSFieldInfo(SPHFieldInfo): _ions: tuple[str, ...] = ( "c1", "c2", "c3", "c4", "c5", "c6", "fe2", "fe17", "h1", "he1", "he2", "mg1", "mg2", "n2", "n3", "n4", "n5", "n6", "n7", "ne8", "ne9", "ne10", "o1", "o6", "o7", "o8", "si2", "si3", "si4", "si13", ) _elements = ("H", "He", "C", "N", "O", "Ne", "Mg", "Si", "Fe") _num_neighbors = 48 _add_elements = ("PartType0", "PartType4") _add_ions = "PartType0" def __init__(self, ds, field_list, slice_info=None): new_particle_fields = ( ("Hydrogen", ("", ["H_fraction"], None)), ("Helium", ("", ["He_fraction"], None)), ("Carbon", ("", ["C_fraction"], None)), ("Nitrogen", ("", ["N_fraction"], None)), ("Oxygen", ("", ["O_fraction"], None)), ("Neon", ("", ["Ne_fraction"], None)), ("Magnesium", ("", ["Mg_fraction"], None)), ("Silicon", ("", ["Si_fraction"], None)), ("Iron", ("", ["Fe_fraction"], None)), ) if ds.gen_hsmls: new_particle_fields += (("smoothing_length", ("code_length", [], None)),) self.known_particle_fields += new_particle_fields super().__init__(ds, field_list, slice_info=slice_info) # This enables the machinery in yt.fields.species_fields self.species_names += list(self._elements) def setup_particle_fields(self, ptype): """additional particle fields derived from those in snapshot. we also need to add the smoothed fields here b/c setup_fluid_fields is called before setup_particle_fields.""" smoothed_suffixes = ("_number_density", "_density", "_mass") # we add particle element fields for stars and gas # ----------------------------------------------------- if ptype in self._add_elements: # this adds the particle element fields # X_density, X_mass, and X_number_density # where X is an item of self._elements. # X_fraction are defined in snapshot # ----------------------------------------------- for s in self._elements: field_names = add_species_field_by_fraction(self, ptype, s) if ptype == self.ds._sph_ptypes[0]: for fn in field_names: self.alias(("gas", fn[1]), fn) # this needs to be called after the call to # add_species_field_by_fraction for some reason ... # not sure why yet. # ------------------------------------------------------- if ptype == "PartType0": ftype = "gas" else: ftype = ptype super().setup_particle_fields( ptype, num_neighbors=self._num_neighbors, ftype=ftype ) # and now we add the smoothed versions for PartType0 # ----------------------------------------------------- if ptype == "PartType0": # we only add ion fields for gas. 
this takes some # time as the ion abundances have to be interpolated # from cloudy tables (optically thin) # ----------------------------------------------------- # this defines the ion density on particles # X_density for all items in self._ions # ----------------------------------------------- self.setup_gas_ion_particle_fields(ptype) # this adds the rest of the ion particle fields # X_fraction, X_mass, X_number_density # ----------------------------------------------- for ion in self._ions: # construct yt name for ion # --------------------------------------------------- if ion[0:2].isalpha(): symbol = ion[0:2].capitalize() roman = int(ion[2:]) else: symbol = ion[0:1].capitalize() roman = int(ion[1:]) if (ptype, symbol + "_fraction") not in self.field_aliases: continue pstr = f"_p{roman - 1}" yt_ion = symbol + pstr # add particle field # --------------------------------------------------- add_species_field_by_density(self, ptype, yt_ion) def _h_p1_density(field, data): return data[ptype, "H_density"] - data[ptype, "H_p0_density"] self.add_field( (ptype, "H_p1_density"), sampling_type="particle", function=_h_p1_density, units=self.ds.unit_system["density"], ) add_species_field_by_density(self, ptype, "H_p1") for sfx in ["mass", "density", "number_density"]: fname = "H_p1_" + sfx self.alias(("gas", fname), (ptype, fname)) def _el_number_density(field, data): n_e = data[ptype, "H_p1_number_density"] n_e += data[ptype, "He_p1_number_density"] n_e += 2.0 * data[ptype, "He_p2_number_density"] return n_e self.add_field( (ptype, "El_number_density"), sampling_type="particle", function=_el_number_density, units=self.ds.unit_system["number_density"], ) self.alias(("gas", "El_number_density"), (ptype, "El_number_density")) # alias ion fields # ----------------------------------------------- for ion in self._ions: # construct yt name for ion # --------------------------------------------------- if ion[0:2].isalpha(): symbol = ion[0:2].capitalize() roman = int(ion[2:]) else: symbol = ion[0:1].capitalize() roman = int(ion[1:]) if (ptype, symbol + "_fraction") not in self.field_aliases: continue pstr = f"_p{roman - 1}" yt_ion = symbol + pstr for sfx in smoothed_suffixes: fname = yt_ion + sfx self.alias(("gas", fname), (ptype, fname)) def setup_gas_ion_particle_fields(self, ptype): """Sets up particle fields for gas ion densities.""" # loop over all ions and make fields # ---------------------------------------------- for ion in self._ions: # construct yt name for ion # --------------------------------------------------- if ion[0:2].isalpha(): symbol = ion[0:2].capitalize() roman = int(ion[2:]) else: symbol = ion[0:1].capitalize() roman = int(ion[1:]) if (ptype, symbol + "_fraction") not in self.field_aliases: continue pstr = f"_p{roman - 1}" yt_ion = symbol + pstr ftype = ptype # add ion density and mass field for this species # ------------------------------------------------ fname = yt_ion + "_density" dens_func = self._create_ion_density_func(ftype, ion) self.add_field( (ftype, fname), sampling_type="particle", function=dens_func, units=self.ds.unit_system["density"], ) self._show_field_errors.append((ftype, fname)) fname = yt_ion + "_mass" mass_func = self._create_ion_mass_func(ftype, ion) self.add_field( (ftype, fname), sampling_type="particle", function=mass_func, units=self.ds.unit_system["mass"], ) self._show_field_errors.append((ftype, fname)) def _create_ion_density_func(self, ftype, ion): """returns a function that calculates the ion density of a particle.""" def 
get_owls_ion_density_field(ion, ftype, itab): def _func(field, data): m_frac, i_frac = _get_ion_mass_frac(ion, ftype, itab, data) return data[ftype, "Density"] * m_frac * i_frac return _func ion_path = self._get_owls_ion_data_dir() fname = os.path.join(ion_path, ion + ".hdf5") itab = oit.IonTableOWLS(fname) return get_owls_ion_density_field(ion, ftype, itab) def _create_ion_mass_func(self, ftype, ion): """returns a function that calculates the ion mass of a particle""" def get_owls_ion_mass_field(ion, ftype, itab): def _func(field, data): m_frac, i_frac = _get_ion_mass_frac(ion, ftype, itab, data) return data[ftype, "particle_mass"] * m_frac * i_frac return _func ion_path = self._get_owls_ion_data_dir() fname = os.path.join(ion_path, ion + ".hdf5") itab = oit.IonTableOWLS(fname) return get_owls_ion_mass_field(ion, ftype, itab) # this function sets up the X_mass, X_density, X_fraction, and # X_number_density fields where X is the name of an OWLS element. # ------------------------------------------------------------- def setup_fluid_fields(self): return # this function returns the owls_ion_data directory. if it doesn't # exist it will download the data from http://yt-project.org/data # ------------------------------------------------------------- def _get_owls_ion_data_dir(self): txt = "Attempting to download ~ 30 Mb of owls ion data from %s to %s." data_file = "owls_ion_data.tar.gz" data_url = "http://yt-project.org/data" # get test_data_dir from yt config (ytcgf) # ---------------------------------------------- tdir = ytcfg.get("yt", "test_data_dir") # set download destination to tdir or ./ if tdir isn't defined # ---------------------------------------------- if tdir == "/does/not/exist": data_dir = "./" else: data_dir = tdir # check for owls_ion_data directory in data_dir # if not there download the tarball and untar it # ---------------------------------------------- owls_ion_path = os.path.join(data_dir, "owls_ion_data") if not os.path.exists(owls_ion_path): mylog.info(txt, data_url, data_dir) fname = os.path.join(data_dir, data_file) download_file(os.path.join(data_url, data_file), fname) cmnd = f"cd {data_dir}; tar xf {data_file}" os.system(cmnd) if not os.path.exists(owls_ion_path): raise RuntimeError("Failed to download owls ion data.") return owls_ion_path ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/owls/io.py0000644000175100001770000000017614714401662016174 0ustar00runnerdockerfrom yt.frontends.gadget.io import IOHandlerGadgetHDF5 class IOHandlerOWLS(IOHandlerGadgetHDF5): _dataset_type = "OWLS" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/owls/owls_ion_tables.py0000644000175100001770000001347014714401662020751 0ustar00runnerdockerimport numpy as np from yt.utilities.on_demand_imports import _h5py as h5py def h5rd(fname, path, dtype=None): """Read Data. Return a dataset located at in file as a numpy array. e.g. 
rd( fname, '/PartType0/Coordinates' ).""" data = None fid = h5py.h5f.open(fname.encode("latin-1"), h5py.h5f.ACC_RDONLY) dg = h5py.h5d.open(fid, path.encode("ascii")) if dtype is None: dtype = dg.dtype data = np.zeros(dg.shape, dtype=dtype) dg.read(h5py.h5s.ALL, h5py.h5s.ALL, data) fid.close() return data class IonTableSpectrum: """A class to handle the HM01 spectra in the OWLS ionization tables.""" def __init__(self, ion_file): where = "/header/spectrum/gammahi" self.GH1 = h5rd(ion_file, where) # GH1[1/s] where = "/header/spectrum/logenergy_ryd" self.logryd = h5rd(ion_file, where) # E[ryd] where = "/header/spectrum/logflux" self.logflux = h5rd(ion_file, where) # J[ergs/s/Hz/Sr/cm^2] where = "/header/spectrum/redshift" self.z = h5rd(ion_file, where) # z def return_table_GH1_at_z(self, z): # find redshift indices # ----------------------------------------------------------------- i_zlo = np.argmin(np.abs(self.z - z)) if self.z[i_zlo] < z: i_zhi = i_zlo + 1 else: i_zhi = i_zlo i_zlo = i_zlo - 1 z_frac = (z - self.z[i_zlo]) / (self.z[i_zhi] - self.z[i_zlo]) # find GH1 from table # ----------------------------------------------------------------- logGH1_all = np.log10(self.GH1) dlog_GH1 = logGH1_all[i_zhi] - logGH1_all[i_zlo] logGH1_table = logGH1_all[i_zlo] + z_frac * dlog_GH1 GH1_table = 10.0**logGH1_table return GH1_table class IonTableOWLS: """A class to handle OWLS ionization tables.""" DELTA_nH = 0.25 DELTA_T = 0.1 def __init__(self, ion_file): self.ion_file = ion_file # ionbal is indexed like [nH, T, z] # nH and T are log quantities # --------------------------------------------------------------- self.nH = h5rd(ion_file, "/logd") # log nH [cm^-3] self.T = h5rd(ion_file, "/logt") # log T [K] self.z = h5rd(ion_file, "/redshift") # z # read the ionization fractions # linear values stored in file so take log here # ionbal is the ionization balance (i.e. fraction) # --------------------------------------------------------------- self.ionbal = h5rd(ion_file, "/ionbal").astype(np.float64) self.ionbal_orig = self.ionbal.copy() ipositive = self.ionbal > 0.0 izero = np.logical_not(ipositive) self.ionbal[izero] = self.ionbal[ipositive].min() self.ionbal = np.log10(self.ionbal) # load in background spectrum # --------------------------------------------------------------- self.spectrum = IonTableSpectrum(ion_file) # calculate the spacing along each dimension # --------------------------------------------------------------- self.dnH = self.nH[1:] - self.nH[0:-1] self.dT = self.T[1:] - self.T[0:-1] self.dz = self.z[1:] - self.z[0:-1] self.order_str = "[log nH, log T, z]" # sets iz and fz # ----------------------------------------------------- def set_iz(self, z): if z <= self.z[0]: self.iz = 0 self.fz = 0.0 elif z >= self.z[-1]: self.iz = len(self.z) - 2 self.fz = 1.0 else: for iz in range(len(self.z) - 1): if z < self.z[iz + 1]: self.iz = iz self.fz = (z - self.z[iz]) / self.dz[iz] break # interpolate the table at a fixed redshift for the input # values of nH and T ( input should be log ). A simple # tri-linear interpolation is used. # ----------------------------------------------------- def interp(self, nH, T): nH = np.array(nH) T = np.array(T) if nH.size != T.size: raise ValueError(" owls_ion_tables: array size mismatch !!! 
") # field discovery will have nH.size == 1 and T.size == 1 # in that case we simply return 1.0 if nH.size == 1 and T.size == 1: ionfrac = 1.0 return ionfrac # find inH and fnH # ----------------------------------------------------- x_nH = (nH - self.nH[0]) / self.DELTA_nH x_nH_clip = np.clip(x_nH, 0.0, self.nH.size - 1.001) fnH, inH = np.modf(x_nH_clip) inH = inH.astype(np.int32) # find iT and fT # ----------------------------------------------------- x_T = (T - self.T[0]) / self.DELTA_T x_T_clip = np.clip(x_T, 0.0, self.T.size - 1.001) fT, iT = np.modf(x_T_clip) iT = iT.astype(np.int32) # short names for previously calculated iz and fz # ----------------------------------------------------- iz = self.iz fz = self.fz # calculate interpolated value # use tri-linear interpolation on the log values # ----------------------------------------------------- ionfrac = ( self.ionbal[inH, iT, iz] * (1 - fnH) * (1 - fT) * (1 - fz) + self.ionbal[inH + 1, iT, iz] * (fnH) * (1 - fT) * (1 - fz) + self.ionbal[inH, iT + 1, iz] * (1 - fnH) * (fT) * (1 - fz) + self.ionbal[inH, iT, iz + 1] * (1 - fnH) * (1 - fT) * (fz) + self.ionbal[inH + 1, iT, iz + 1] * (fnH) * (1 - fT) * (fz) + self.ionbal[inH, iT + 1, iz + 1] * (1 - fnH) * (fT) * (fz) + self.ionbal[inH + 1, iT + 1, iz] * (fnH) * (fT) * (1 - fz) + self.ionbal[inH + 1, iT + 1, iz + 1] * (fnH) * (fT) * (fz) ) return 10**ionfrac ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/owls/simulation_handling.py0000644000175100001770000000413714714401662021616 0ustar00runnerdockerimport os from yt.frontends.gadget.simulation_handling import GadgetSimulation class OWLSSimulation(GadgetSimulation): r""" Initialize an OWLS Simulation object. Upon creation, the parameter file is parsed and the time and redshift are calculated and stored in all_outputs. A time units dictionary is instantiated to allow for time outputs to be requested with physical time units. The get_time_series can be used to generate a DatasetSeries object. parameter_filename : str The simulation parameter file. find_outputs : bool If True, the OutputDir directory is searched for datasets. Time and redshift information are gathered by temporarily instantiating each dataset. This can be used when simulation data was created in a non-standard way, making it difficult to guess the corresponding time and redshift information. Default: False. Examples -------- >>> import yt >>> es = yt.load_simulation("my_simulation.par", "OWLS") >>> es.get_time_series() >>> for ds in es: ... print(ds.current_time) """ def __init__(self, parameter_filename, find_outputs=False): GadgetSimulation.__init__(self, parameter_filename, find_outputs=find_outputs) def _snapshot_format(self, index=None): """ The snapshot filename for a given index. Modify this for different naming conventions. 
""" if self.parameters["OutputDir"].startswith("/"): data_dir = self.parameters["OutputDir"] else: data_dir = os.path.join(self.directory, self.parameters["OutputDir"]) if self.parameters["NumFilesPerSnapshot"] > 1: suffix = ".0" else: suffix = "" if self.parameters["SnapFormat"] == 3: suffix += ".hdf5" if index is None: count = "*" else: count = "%03d" % index keyword = f"{self.parameters['SnapshotFileBase']}_{count}" filename = os.path.join(keyword, f"{keyword}{suffix}") return os.path.join(data_dir, filename) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3351526 yt-4.4.0/yt/frontends/owls/tests/0000755000175100001770000000000014714401715016350 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/owls/tests/__init__.py0000644000175100001770000000000014714401662020450 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/owls/tests/test_outputs.py0000644000175100001770000000340614714401662021510 0ustar00runnerdockerfrom collections import OrderedDict from yt.data_objects.particle_filters import add_particle_filter from yt.frontends.owls.api import OWLSDataset from yt.testing import ParticleSelectionComparison, requires_file, requires_module from yt.utilities.answer_testing.framework import data_dir_load, requires_ds, sph_answer os33 = "snapshot_033/snap_033.0.hdf5" # This maps from field names to weight field names to use for projections _fields = OrderedDict( [ (("gas", "density"), None), (("gas", "temperature"), None), (("gas", "temperature"), ("gas", "density")), (("gas", "He_p0_number_density"), None), (("gas", "velocity_magnitude"), None), ] ) @requires_module("h5py") @requires_ds(os33, big_data=True) def test_snapshot_033(): ds = data_dir_load(os33) psc = ParticleSelectionComparison(ds) psc.run_defaults() for test in sph_answer(ds, "snap_033", 2 * 128**3, _fields): test_snapshot_033.__name__ = test.description yield test @requires_module("h5py") @requires_file(os33) def test_OWLSDataset(): assert isinstance(data_dir_load(os33), OWLSDataset) @requires_module("h5py") @requires_ds(os33) def test_OWLS_particlefilter(): ds = data_dir_load(os33) ad = ds.all_data() def cold_gas(pfilter, data): temperature = data[pfilter.filtered_type, "Temperature"] filter = temperature.in_units("K") <= 1e5 return filter add_particle_filter( "gas_cold", function=cold_gas, filtered_type="PartType0", requires=["Temperature"], ) ds.add_particle_filter("gas_cold") mask = ad["PartType0", "Temperature"] <= 1e5 assert ( ad["PartType0", "Temperature"][mask].shape == ad["gas_cold", "Temperature"].shape ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3351526 yt-4.4.0/yt/frontends/owls_subfind/0000755000175100001770000000000014714401715016720 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/owls_subfind/__init__.py0000644000175100001770000000005214714401662021027 0ustar00runnerdocker""" API for HaloCatalog frontend. """ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/owls_subfind/api.py0000644000175100001770000000022614714401662020044 0ustar00runnerdockerfrom . 
import tests from .data_structures import OWLSSubfindDataset from .fields import OWLSSubfindFieldInfo from .io import IOHandlerOWLSSubfindHDF5 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/owls_subfind/data_structures.py0000644000175100001770000002065014714401662022512 0ustar00runnerdockerimport glob import os from collections import defaultdict import numpy as np from yt.data_objects.static_output import ParticleDataset, ParticleFile from yt.frontends.gadget.data_structures import _fix_unit_ordering from yt.funcs import only_on_root, setdefaultattr from yt.geometry.particle_geometry_handler import ParticleIndex from yt.utilities.exceptions import YTException from yt.utilities.logger import ytLogger as mylog from yt.utilities.on_demand_imports import _h5py as h5py from .fields import OWLSSubfindFieldInfo class OWLSSubfindParticleIndex(ParticleIndex): chunksize = -1 def __init__(self, ds, dataset_type): super().__init__(ds, dataset_type) def _calculate_particle_index_starts(self): # Halo indices are not saved in the file, so we must count by hand. # File 0 has halos 0 to N_0 - 1, file 1 has halos N_0 to N_0 + N_1 - 1, etc. particle_count = defaultdict(int) offset_count = 0 for data_file in self.data_files: data_file.index_start = { ptype: particle_count[ptype] for ptype in data_file.total_particles } data_file.offset_start = offset_count for ptype in data_file.total_particles: particle_count[ptype] += data_file.total_particles[ptype] offset_count += data_file.total_offset def _calculate_file_offset_map(self): # After the FOF is performed, a load-balancing step redistributes halos # and then writes more fields. Here, for each file, we create a list of # files which contain the rest of the redistributed particles. ifof = np.array( [data_file.total_particles["FOF"] for data_file in self.data_files] ) isub = np.array([data_file.total_offset for data_file in self.data_files]) subend = isub.cumsum() fofend = ifof.cumsum() istart = np.digitize(fofend - ifof, subend - isub) - 1 iend = np.clip(np.digitize(fofend, subend), 0, ifof.size - 2) for i, data_file in enumerate(self.data_files): data_file.offset_files = self.data_files[istart[i] : iend[i] + 1] def _detect_output_fields(self): # TODO: Add additional fields self._calculate_particle_index_starts() self._calculate_file_offset_map() dsl = [] units = {} for dom in self.data_files: fl, _units = self.io._identify_fields(dom) units.update(_units) for f in fl: if f not in dsl: dsl.append(f) self.field_list = dsl ds = self.dataset ds.particle_types = tuple({pt for pt, ds in dsl}) # This is an attribute that means these particle types *actually* # exist. As in, they are real, in the dataset. 
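        # (For a typical subfind output this ends up as ("FOF", "SUBFIND");
        # derived unions such as "all" are attached later by yt's generic
        # machinery and deliberately never appear in particle_types_raw.)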
ds.field_units.update(units) ds.particle_types_raw = ds.particle_types class OWLSSubfindHDF5File(ParticleFile): def __init__(self, ds, io, filename, file_id, bounds): super().__init__(ds, io, filename, file_id, bounds) with h5py.File(filename, mode="r") as f: self.header = {field: f.attrs[field] for field in f.attrs.keys()} class OWLSSubfindDataset(ParticleDataset): _load_requirements = ["h5py"] _index_class = OWLSSubfindParticleIndex _file_class = OWLSSubfindHDF5File _field_info_class = OWLSSubfindFieldInfo _suffix = ".hdf5" def __init__( self, filename, dataset_type="subfind_hdf5", index_order=None, index_filename=None, units_override=None, unit_system="cgs", ): super().__init__( filename, dataset_type, index_order=index_order, index_filename=index_filename, units_override=units_override, unit_system=unit_system, ) def _parse_parameter_file(self): handle = h5py.File(self.parameter_filename, mode="r") hvals = {} hvals.update((str(k), v) for k, v in handle["/Header"].attrs.items()) hvals["NumFiles"] = hvals["NumFilesPerSnapshot"] hvals["Massarr"] = hvals["MassTable"] self.dimensionality = 3 self.refine_by = 2 # Set standard values if "Time_GYR" in hvals: self.current_time = self.quan(hvals["Time_GYR"], "Gyr") self.domain_left_edge = np.zeros(3, "float64") self.domain_right_edge = np.ones(3, "float64") * hvals["BoxSize"] self.domain_dimensions = np.ones(3, "int32") self.cosmological_simulation = 1 self._periodicity = (True, True, True) self.current_redshift = hvals["Redshift"] self.omega_lambda = hvals["OmegaLambda"] self.omega_matter = hvals["Omega0"] self.hubble_constant = hvals["HubbleParam"] self.parameters = hvals prefix = os.path.abspath( os.path.join( os.path.dirname(self.parameter_filename), os.path.basename(self.parameter_filename).split(".", 1)[0], ) ) suffix = self.parameter_filename.rsplit(".", 1)[-1] self.filename_template = f"{prefix}.%(num)i.{suffix}" self.file_count = len(glob.glob(prefix + "*" + self._suffix)) if self.file_count == 0: raise YTException(message="No data files found.", ds=self) self.particle_types = ("FOF", "SUBFIND") self.particle_types_raw = ("FOF", "SUBFIND") # To avoid having to open files twice self._unit_base = {} self._unit_base.update((str(k), v) for k, v in handle["/Units"].attrs.items()) handle.close() def _set_code_unit_attributes(self): # Set a sane default for cosmological simulations. if self._unit_base is None and self.cosmological_simulation == 1: only_on_root(mylog.info, "Assuming length units are in Mpc/h (comoving)") self._unit_base = {"length": (1.0, "Mpccm/h")} # The other same defaults we will use from the standard Gadget # defaults. unit_base = self._unit_base or {} if "length" in unit_base: length_unit = unit_base["length"] elif "UnitLength_in_cm" in unit_base: if self.cosmological_simulation == 0: length_unit = (unit_base["UnitLength_in_cm"], "cm") else: length_unit = (unit_base["UnitLength_in_cm"], "cmcm/h") else: raise RuntimeError length_unit = _fix_unit_ordering(length_unit) setdefaultattr(self, "length_unit", self.quan(length_unit[0], length_unit[1])) if "velocity" in unit_base: velocity_unit = unit_base["velocity"] elif "UnitVelocity_in_cm_per_s" in unit_base: velocity_unit = (unit_base["UnitVelocity_in_cm_per_s"], "cm/s") else: velocity_unit = (1e5, "cm/s * sqrt(a)") velocity_unit = _fix_unit_ordering(velocity_unit) setdefaultattr( self, "velocity_unit", self.quan(velocity_unit[0], velocity_unit[1]) ) # We set hubble_constant = 1.0 for non-cosmology, so this is safe. # Default to 1e10 Msun/h if mass is not specified. 
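        # (1e10 Msun/h is Gadget's conventional code mass unit; with
        # h = 0.7, for example, one code mass works out to ~1.43e10 Msun.)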
if "mass" in unit_base: mass_unit = unit_base["mass"] elif "UnitMass_in_g" in unit_base: if self.cosmological_simulation == 0: mass_unit = (unit_base["UnitMass_in_g"], "g") else: mass_unit = (unit_base["UnitMass_in_g"], "g/h") else: # Sane default mass_unit = (1.0, "1e10*Msun/h") mass_unit = _fix_unit_ordering(mass_unit) setdefaultattr(self, "mass_unit", self.quan(mass_unit[0], mass_unit[1])) if "time" in unit_base: time_unit = unit_base["time"] elif "UnitTime_in_s" in unit_base: time_unit = (unit_base["UnitTime_in_s"], "s") else: tu = (self.length_unit / self.velocity_unit).to("yr/h") time_unit = (tu.d, tu.units) setdefaultattr(self, "time_unit", self.quan(time_unit[0], time_unit[1])) @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if cls._missing_load_requirements(): return False need_groups = ["Constants", "Header", "Parameters", "Units", "FOF"] valid = True try: fh = h5py.File(filename, mode="r") valid = all(ng in fh["/"] for ng in need_groups) fh.close() except Exception: valid = False return valid ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/owls_subfind/fields.py0000644000175100001770000000411014714401662020535 0ustar00runnerdockerfrom yt._typing import KnownFieldsT from yt.fields.field_info_container import FieldInfoContainer m_units = "code_mass" mdot_units = "code_mass / code_time" p_units = "Mpccm/h" v_units = "1e5 * cmcm / s" class OWLSSubfindFieldInfo(FieldInfoContainer): known_particle_fields: KnownFieldsT = ( ("CenterOfMass_0", (p_units, ["particle_position_x"], None)), ("CenterOfMass_1", (p_units, ["particle_position_y"], None)), ("CenterOfMass_2", (p_units, ["particle_position_z"], None)), ("CentreOfMass_0", (p_units, ["particle_position_x"], None)), ("CentreOfMass_1", (p_units, ["particle_position_y"], None)), ("CentreOfMass_2", (p_units, ["particle_position_z"], None)), ("CenterOfMassVelocity_0", (v_units, ["particle_velocity_x"], None)), ("CenterOfMassVelocity_1", (v_units, ["particle_velocity_y"], None)), ("CenterOfMassVelocity_2", (v_units, ["particle_velocity_z"], None)), ("Velocity_0", (v_units, ["particle_velocity_x"], None)), ("Velocity_1", (v_units, ["particle_velocity_y"], None)), ("Velocity_2", (v_units, ["particle_velocity_z"], None)), ("Mass", (m_units, ["particle_mass"], None)), ("Halo_M_Crit200", (m_units, ["Virial Mass"], None)), ("Halo_M_Crit2500", (m_units, [], None)), ("Halo_M_Crit500", (m_units, [], None)), ("Halo_M_Mean200", (m_units, [], None)), ("Halo_M_Mean2500", (m_units, [], None)), ("Halo_M_Mean500", (m_units, [], None)), ("Halo_M_TopHat200", (m_units, [], None)), ("Halo_R_Crit200", (p_units, ["Virial Radius"], None)), ("Halo_R_Crit2500", (p_units, [], None)), ("Halo_R_Crit500", (p_units, [], None)), ("Halo_R_Mean200", (p_units, [], None)), ("Halo_R_Mean2500", (p_units, [], None)), ("Halo_R_Mean500", (p_units, [], None)), ("Halo_R_TopHat200", (p_units, [], None)), ("BH_Mass", (m_units, [], None)), ("Stars/Mass", (m_units, [], None)), ("BH_Mdot", (mdot_units, [], None)), ("StarFormationRate", (mdot_units, [], None)), ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/owls_subfind/io.py0000644000175100001770000002133514714401662017706 0ustar00runnerdockerimport numpy as np from yt.funcs import mylog from yt.utilities.io_handler import BaseParticleIOHandler from yt.utilities.on_demand_imports import _h5py as h5py _pos_names = ["CenterOfMass", "CentreOfMass"] class 
IOHandlerOWLSSubfindHDF5(BaseParticleIOHandler): _dataset_type = "subfind_hdf5" _position_name = None def __init__(self, ds): super().__init__(ds) self.offset_fields = set() def _read_fluid_selection(self, chunks, selector, fields, size): raise NotImplementedError def _read_particle_coords(self, chunks, ptf): # This will read chunks and yield the results. for data_file in self._sorted_chunk_iterator(chunks): with h5py.File(data_file.filename, mode="r") as f: for ptype in sorted(ptf): pcount = data_file.total_particles[ptype] coords = f[ptype][self._position_name][()].astype("float64") coords = np.resize(coords, (pcount, 3)) x = coords[:, 0] y = coords[:, 1] z = coords[:, 2] yield ptype, (x, y, z), 0.0 def _yield_coordinates(self, data_file): ptypes = self.ds.particle_types_raw with h5py.File(data_file.filename, mode="r") as f: for ptype in sorted(ptypes): pcount = data_file.total_particles[ptype] coords = f[ptype][self._position_name][()].astype("float64") coords = np.resize(coords, (pcount, 3)) yield ptype, coords def _read_offset_particle_field(self, field, data_file, fh): field_data = np.empty(data_file.total_particles["FOF"], dtype="float64") fofindex = ( np.arange(data_file.total_particles["FOF"]) + data_file.index_start["FOF"] ) for offset_file in data_file.offset_files: if fh.filename == offset_file.filename: ofh = fh else: ofh = h5py.File(offset_file.filename, mode="r") subindex = np.arange(offset_file.total_offset) + offset_file.offset_start substart = max(fofindex[0] - subindex[0], 0) subend = min(fofindex[-1] - subindex[0], subindex.size - 1) fofstart = substart + subindex[0] - fofindex[0] fofend = subend + subindex[0] - fofindex[0] field_data[fofstart : fofend + 1] = ofh["SUBFIND"][field][ substart : subend + 1 ] return field_data def _read_particle_fields(self, chunks, ptf, selector): # Now we have all the sizes, and we can allocate for data_file in self._sorted_chunk_iterator(chunks): with h5py.File(data_file.filename, mode="r") as f: for ptype, field_list in sorted(ptf.items()): pcount = data_file.total_particles[ptype] if pcount == 0: continue coords = f[ptype][self._position_name][()].astype("float64") coords = np.resize(coords, (pcount, 3)) x = coords[:, 0] y = coords[:, 1] z = coords[:, 2] mask = selector.select_points(x, y, z, 0.0) del x, y, z if mask is None: continue for field in field_list: if field in self.offset_fields: field_data = self._read_offset_particle_field( field, data_file, f ) else: if field == "particle_identifier": field_data = ( np.arange(data_file.total_particles[ptype]) + data_file.index_start[ptype] ) elif field in f[ptype]: field_data = f[ptype][field][()].astype("float64") else: fname = field[: field.rfind("_")] field_data = f[ptype][fname][()].astype("float64") my_div = field_data.size / pcount if my_div > 1: field_data = np.resize( field_data, (int(pcount), int(my_div)) ) findex = int(field[field.rfind("_") + 1 :]) field_data = field_data[:, findex] data = field_data[mask] yield (ptype, field), data def _count_particles(self, data_file): with h5py.File(data_file.filename, mode="r") as f: pcount = {"FOF": get_one_attr(f["FOF"], ["Number_of_groups", "Ngroups"])} if "SUBFIND" in f: # We need this to figure out where the offset fields are stored. 
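                # (get_one_attr is needed because the attribute name is not
                # uniform across outputs - some files write
                # "Number_of_groups", others "Ngroups".)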
data_file.total_offset = get_one_attr( f["SUBFIND"], ["Number_of_groups", "Ngroups"] ) pcount["SUBFIND"] = get_one_attr( f["FOF"], ["Number_of_subgroups", "Nsubgroups"] ) else: data_file.total_offset = 0 pcount["SUBFIND"] = 0 return pcount def _identify_fields(self, data_file): fields = [] pcount = data_file.total_particles if sum(pcount.values()) == 0: return fields, {} with h5py.File(data_file.filename, mode="r") as f: for ptype in self.ds.particle_types_raw: if data_file.total_particles[ptype] == 0: continue self._identify_position_name(f[ptype]) fields.append((ptype, "particle_identifier")) my_fields, my_offset_fields = subfind_field_list( f[ptype], ptype, data_file.total_particles ) fields.extend(my_fields) self.offset_fields = self.offset_fields.union(set(my_offset_fields)) return fields, {} def _identify_position_name(self, fh): if self._position_name is not None: return for pname in _pos_names: if pname in fh: self._position_name = pname return def get_one_attr(fh, attrs, default=None, error=True): """ Try getting from a list of attrs. Return the first one that exists. """ for attr in attrs: if attr in fh.attrs: return fh.attrs[attr] if error: raise RuntimeError( f"Could not find any of these attributes: {attrs}. " f"Available attributes: {fh.attrs.keys()}" ) return default def subfind_field_list(fh, ptype, pcount): fields = [] offset_fields = [] for field in fh.keys(): if "PartType" in field: # These are halo member particles continue elif isinstance(fh[field], h5py.Group): my_fields, my_offset_fields = subfind_field_list(fh[field], ptype, pcount) fields.extend(my_fields) my_offset_fields.extend(offset_fields) else: if not fh[field].size % pcount[ptype]: my_div = fh[field].size / pcount[ptype] fname = fh[field].name[fh[field].name.find(ptype) + len(ptype) + 1 :] if my_div > 1: for i in range(int(my_div)): fields.append((ptype, "%s_%d" % (fname, i))) else: fields.append((ptype, fname)) elif ( ptype == "SUBFIND" and not fh[field].size % fh["/SUBFIND"].attrs["Number_of_groups"] ): # These are actually FOF fields, but they were written after # a load balancing step moved halos around and thus they do not # correspond to the halos stored in the FOF group. 
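                # (Sketch of the effect: a dataset of size
                # Number_of_groups * my_div is split into FOF fields
                # "<name>_0" ... "<name>_<my_div-1>", and the base name is
                # recorded in offset_fields so _read_offset_particle_field
                # can reassemble it across files later.)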
my_div = fh[field].size / fh["/SUBFIND"].attrs["Number_of_groups"] fname = fh[field].name[fh[field].name.find(ptype) + len(ptype) + 1 :] if my_div > 1: for i in range(int(my_div)): fields.append(("FOF", "%s_%d" % (fname, i))) else: fields.append(("FOF", fname)) offset_fields.append(fname) else: mylog.warning( "Cannot add field (%s, %s) with size %d.", ptype, fh[field].name, fh[field].size, ) continue return fields, offset_fields ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3351526 yt-4.4.0/yt/frontends/owls_subfind/tests/0000755000175100001770000000000014714401715020062 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/owls_subfind/tests/__init__.py0000644000175100001770000000000014714401662022162 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/owls_subfind/tests/test_outputs.py0000644000175100001770000000176114714401662023224 0ustar00runnerdockerimport os.path from numpy.testing import assert_equal from yt.testing import requires_module from yt.utilities.answer_testing.framework import ( FieldValuesTest, data_dir_load, requires_ds, ) # from yt.frontends.owls_subfind.api import OWLSSubfindDataset _fields = ( ("all", "particle_position_x"), ("all", "particle_position_y"), ("all", "particle_position_z"), ("all", "particle_mass"), ) # a dataset with empty files g1 = "owls_fof_halos/groups_001/group_001.0.hdf5" g8 = "owls_fof_halos/groups_008/group_008.0.hdf5" @requires_module("h5py") @requires_ds(g8) def test_fields_g8(): ds = data_dir_load(g8) assert_equal(str(ds), os.path.basename(g8)) for field in _fields: yield FieldValuesTest(g8, field, particle_type=True) @requires_module("h5py") @requires_ds(g1) def test_fields_g1(): ds = data_dir_load(g1) assert_equal(str(ds), os.path.basename(g1)) for field in _fields: yield FieldValuesTest(g1, field, particle_type=True) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3351526 yt-4.4.0/yt/frontends/parthenon/0000755000175100001770000000000014714401715016220 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/parthenon/__init__.py0000644000175100001770000000000014714401662020320 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/parthenon/api.py0000644000175100001770000000025714714401662017350 0ustar00runnerdockerfrom . 
import tests from .data_structures import ParthenonDataset, ParthenonGrid, ParthenonHierarchy from .fields import ParthenonFieldInfo from .io import IOHandlerParthenon ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/parthenon/data_structures.py0000644000175100001770000002723214714401662022015 0ustar00runnerdockerimport os import warnings import weakref import numpy as np from yt.data_objects.index_subobjects.grid_patch import AMRGridPatch from yt.data_objects.static_output import Dataset from yt.fields.magnetic_field import get_magnetic_normalization from yt.funcs import mylog, setdefaultattr from yt.geometry.api import Geometry from yt.geometry.grid_geometry_handler import GridIndex from yt.utilities.chemical_formulas import compute_mu from yt.utilities.file_handler import HDF5FileHandler from .fields import ParthenonFieldInfo _geom_map = { "UniformCartesian": Geometry.CARTESIAN, "UniformCylindrical": Geometry.CYLINDRICAL, "UniformSpherical": Geometry.SPHERICAL, } # fmt: off _cis = np.array( [ [0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1], [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1], ], dtype="int64", ) # fmt: on class ParthenonGrid(AMRGridPatch): _id_offset = 0 def __init__(self, id, index, level): AMRGridPatch.__init__(self, id, filename=index.index_filename, index=index) self.Parent = None self.Children = [] self.Level = level def _setup_dx(self): # So first we figure out what the index is. We don't assume # that dx=dy=dz , at least here. We probably do elsewhere. id = self.id - self._id_offset LE, RE = self.index.grid_left_edge[id, :], self.index.grid_right_edge[id, :] self.dds = self.ds.arr((RE - LE) / self.ActiveDimensions, "code_length") if self.ds.dimensionality < 2: self.dds[1] = 1.0 if self.ds.dimensionality < 3: self.dds[2] = 1.0 self.field_data["dx"], self.field_data["dy"], self.field_data["dz"] = self.dds def retrieve_ghost_zones(self, n_zones, fields, all_levels=False, smoothed=False): if smoothed: warnings.warn( "ghost-zones interpolation/smoothing is not " "currently supported for Parthenon data.", category=RuntimeWarning, stacklevel=2, ) smoothed = False return super().retrieve_ghost_zones( n_zones, fields, all_levels=all_levels, smoothed=smoothed ) class ParthenonHierarchy(GridIndex): grid = ParthenonGrid _dataset_type = "parthenon" _data_file = None def __init__(self, ds, dataset_type="parthenon"): self.dataset = weakref.proxy(ds) self.directory = os.path.dirname(self.dataset.filename) self.dataset_type = dataset_type # for now, the index file is the dataset! 
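        # (Parthenon HDF5 dumps describe the mesh themselves, via the "Info"
        # attributes and "Locations" datasets read below, so there is no
        # separate index file to open.)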
self.index_filename = self.dataset.filename self._handle = ds._handle GridIndex.__init__(self, ds, dataset_type) def _detect_output_fields(self): self.field_list = [("parthenon", k) for k in self.dataset._field_map] def _count_grids(self): self.num_grids = self._handle["Info"].attrs["NumMeshBlocks"] def _parse_index(self): num_grids = self._handle["Info"].attrs["NumMeshBlocks"] # TODO: In an unlikely case this would use too much memory, implement # chunked read along 1 dim x = self._handle["Locations"]["x"][:, :] y = self._handle["Locations"]["y"][:, :] z = self._handle["Locations"]["z"][:, :] mesh_block_size = self._handle["Info"].attrs["MeshBlockSize"] self.grids = np.empty(self.num_grids, dtype="object") levels = self._handle["Levels"][:] for i in range(num_grids): self.grid_left_edge[i] = np.array( [x[i, 0], y[i, 0], z[i, 0]], dtype="float64" ) self.grid_right_edge[i] = np.array( [x[i, -1], y[i, -1], z[i, -1]], dtype="float64" ) self.grid_dimensions[i] = mesh_block_size self.grids[i] = self.grid(i, self, levels[i]) if self.dataset.dimensionality <= 2: self.grid_right_edge[:, 2] = self.dataset.domain_right_edge[2] if self.dataset.dimensionality == 1: self.grid_right_edge[:, 1:] = self.dataset.domain_right_edge[1:] self.grid_particle_count = np.zeros([self.num_grids, 1], dtype="int64") def _populate_grid_objects(self): for g in self.grids: g._prepare_grid() g._setup_dx() self.max_level = self._handle["Info"].attrs["MaxLevel"] class ParthenonDataset(Dataset): _field_info_class = ParthenonFieldInfo _dataset_type = "parthenon" _index_class = ParthenonHierarchy def __init__( self, filename, dataset_type="parthenon", storage_filename=None, parameters=None, units_override=None, unit_system="cgs", default_species_fields=None, magnetic_normalization="gaussian", ): self.fluid_types += ("parthenon",) if parameters is None: parameters = {} self.specified_parameters = parameters if units_override is None: units_override = {} self._handle = HDF5FileHandler(filename) xrat = self._handle["Info"].attrs["RootGridDomain"][2] yrat = self._handle["Info"].attrs["RootGridDomain"][5] zrat = self._handle["Info"].attrs["RootGridDomain"][8] if xrat != 1.0 or yrat != 1.0 or zrat != 1.0: raise NotImplementedError( "Logarithmic grids not yet supported/tested in Parthenon frontend." 
) self._magnetic_factor = get_magnetic_normalization(magnetic_normalization) self.geometry = _geom_map[self._handle["Info"].attrs["Coordinates"]] if self.geometry == "cylindrical": axis_order = ("r", "theta", "z") else: axis_order = None Dataset.__init__( self, filename, dataset_type, units_override=units_override, unit_system=unit_system, default_species_fields=default_species_fields, axis_order=axis_order, ) if storage_filename is None: storage_filename = self.basename + ".yt" self.storage_filename = storage_filename def _set_code_unit_attributes(self): """ Generates the conversion to various physical _units based on the parameter file """ for unit, cgs in [ ("length", "cm"), ("time", "s"), ("mass", "g"), ]: unit_param = f"Hydro/code_{unit}_cgs" # Use units, if provided in output if unit_param in self.parameters: setdefaultattr( self, f"{unit}_unit", self.quan(self.parameters[unit_param], cgs) ) # otherwise use code = cgs else: mylog.warning(f"Assuming 1.0 code_{unit} = 1.0 {cgs}") setdefaultattr(self, f"{unit}_unit", self.quan(1.0, cgs)) self.magnetic_unit = np.sqrt( self._magnetic_factor * self.mass_unit / (self.time_unit**2 * self.length_unit) ) self.magnetic_unit.convert_to_units("gauss") self.velocity_unit = self.length_unit / self.time_unit def _parse_parameter_file(self): self.parameters.update(self.specified_parameters) for key, val in self._handle["Params"].attrs.items(): if key in self.parameters.keys(): mylog.warning( f"Overriding existing {key!r} key in ds.parameters from data 'Params'" ) self.parameters[key] = val xmin, xmax = self._handle["Info"].attrs["RootGridDomain"][0:2] ymin, ymax = self._handle["Info"].attrs["RootGridDomain"][3:5] zmin, zmax = self._handle["Info"].attrs["RootGridDomain"][6:8] self.domain_left_edge = np.array([xmin, ymin, zmin], dtype="float64") self.domain_right_edge = np.array([xmax, ymax, zmax], dtype="float64") self.domain_width = self.domain_right_edge - self.domain_left_edge self.domain_dimensions = self._handle["Info"].attrs["RootGridSize"] self._field_map = {} k = 0 dnames = self._handle["Info"].attrs["OutputDatasetNames"] num_components = self._handle["Info"].attrs["NumComponents"] if "OutputFormatVersion" in self._handle["Info"].attrs.keys(): self.output_format_version = self._handle["Info"].attrs[ "OutputFormatVersion" ] else: raise NotImplementedError("Could not determine OutputFormatVersion.") # For a single variable, we need to convert it to a list for the following # zip to work. 
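# Hedged illustration of the normalization below: a dump with a single
# output variable stores scalars instead of arrays, e.g. (hypothetical values)
#   dnames = "prim"          num_components = np.uint64(5)
# and wrapping them as ("prim",) and (5,) lets the zip() loop handle the
# one-variable and many-variable cases uniformly.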
if isinstance(num_components, np.uint64): dnames = (dnames,) num_components = (num_components,) component_name_offset = 0 for dname, num_component in zip(dnames, num_components, strict=False): for j in range(num_component): fname = self._handle["Info"].attrs["ComponentNames"][ j + component_name_offset ] self._field_map[fname] = (dname, j) k += 1 component_name_offset = int(component_name_offset + num_component) self.refine_by = 2 dimensionality = 3 if self.domain_dimensions[2] == 1: dimensionality = 2 if self.domain_dimensions[1] == 1: dimensionality = 1 self.dimensionality = dimensionality self.current_time = self._handle["Info"].attrs["Time"] self.num_ghost_zones = 0 self.field_ordering = "fortran" self.boundary_conditions = [1] * 6 self.cosmological_simulation = False if "periodicity" in self.parameters: self._periodicity = tuple(self.parameters["periodicity"]) else: boundary_conditions = self._handle["Info"].attrs["BoundaryConditions"] inner_bcs = boundary_conditions[::2] # outer_bcs = boundary_conditions[1::2] ## Check self-consistency # for inner_bc,outer_bc in zip(inner_bcs,outer_bcs): # if( (inner_bc == "periodic" or outer_bc == "periodic") and inner_bc != outer_bc ): # raise Exception("Inconsistent periodicity in boundary conditions") self._periodicity = tuple(bc == "periodic" for bc in inner_bcs) if "gamma" in self.parameters: self.gamma = float(self.parameters["gamma"]) elif "Hydro/AdiabaticIndex" in self.parameters: self.gamma = self.parameters["Hydro/AdiabaticIndex"] else: mylog.warning( "Adiabatic index gamma could not be determined. Falling back to 5/3." ) self.gamma = 5.0 / 3.0 if "mu" in self.parameters: self.mu = self.parameters["mu"] elif "Hydro/mu" in self.parameters: self.mu = self.parameters["Hydro/mu"] # Legacy He_mass_fraction parameter implemented in AthenaPK elif "Hydro/He_mass_fraction" in self.parameters: He_mass_fraction = self.parameters["Hydro/He_mass_fraction"] self.mu = 1 / (He_mass_fraction * 3.0 / 4.0 + (1 - He_mass_fraction) * 2) # Fall back to primordial gas composition (and show warning) else: mylog.warning( "Plasma composition could not be determined in data file. Falling back to fully ionized primordial composition."
) self.mu = self.parameters.get("mu", compute_mu(self.default_species_fields)) @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: return filename.endswith((".phdf", ".rhdf")) @property def _skip_cache(self): return True def __str__(self): return self.basename.rsplit(".", 1)[0] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/parthenon/definitions.py0000644000175100001770000000010614714401662021103 0ustar00runnerdocker""" Various definitions for various other modules and routines """ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/parthenon/fields.py0000644000175100001770000001615414714401662020050 0ustar00runnerdockerimport numpy as np from yt._typing import KnownFieldsT from yt.fields.field_info_container import FieldInfoContainer from yt.funcs import mylog from yt.utilities.physical_constants import kboltz, mh mag_units = "code_magnetic" pres_units = "code_mass/(code_length*code_time**2)" rho_units = "code_mass / code_length**3" vel_units = "code_length / code_time" mom_units = "code_mass / code_length**2 / code_time" eng_units = "code_mass / code_length / code_time**2" def velocity_field(mom_field): def _velocity(field, data): return data[mom_field] / data["gas", "density"] return _velocity def _cooling_time_field(field, data): cooling_time = ( data["gas", "density"] * data["gas", "specific_thermal_energy"] / data["gas", "cooling_rate"] ) # Set cooling time where Cooling_Rate==0 to infinity inf_ct_mask = data["cooling_rate"] == 0 cooling_time[inf_ct_mask] = data.ds.quan(np.inf, "s") return cooling_time class ParthenonFieldInfo(FieldInfoContainer): known_other_fields: KnownFieldsT = ( # Need to provide info for both primitive and conserved variables as they # can be written independently (or even in the same output file). # New field naming (i.e., "variable_component") of primitive variables ("prim_density", (rho_units, ["density"], None)), ("prim_velocity_1", (vel_units, ["velocity_x"], None)), ("prim_velocity_2", (vel_units, ["velocity_y"], None)), ("prim_velocity_3", (vel_units, ["velocity_z"], None)), ("prim_pressure", (pres_units, ["pressure"], None)), # Magnetic fields carry units of 1/sqrt(pi) so we cannot directly forward # and need to set up aliases below. ("prim_magnetic_field_1", (mag_units, [], None)), ("prim_magnetic_field_2", (mag_units, [], None)), ("prim_magnetic_field_3", (mag_units, [], None)), # New field naming (i.e., "variable_component") of conserved variables ("cons_density", (rho_units, ["density"], None)), ("cons_momentum_density_1", (mom_units, ["momentum_density_x"], None)), ("cons_momentum_density_2", (mom_units, ["momentum_density_y"], None)), ("cons_momentum_density_3", (mom_units, ["momentum_density_z"], None)), ("cons_total_energy_density", (eng_units, ["total_energy_density"], None)), # Magnetic fields carry units of 1/sqrt(pi) so we cannot directly forward # and need to set up aliases below. ("cons_magnetic_field_1", (mag_units, [], None)), ("cons_magnetic_field_2", (mag_units, [], None)), ("cons_magnetic_field_3", (mag_units, [], None)), # Legacy naming. Given that there's no conflict with the names above, # we can just define those here so that the frontend works with older data.
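# For readers of this table, entries follow yt's KnownFieldsT layout:
#   (on-disk field name, (units, [alias names], display name))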
("Density", (rho_units, ["density"], None)), ("Velocity1", (mom_units, ["velocity_x"], None)), ("Velocity2", (mom_units, ["velocity_y"], None)), ("Velocity3", (mom_units, ["velocity_z"], None)), ("Pressure", (pres_units, ["pressure"], None)), ("MagneticField1", (mag_units, [], None)), ("MagneticField2", (mag_units, [], None)), ("MagneticField3", (mag_units, [], None)), ("MomentumDensity1", (mom_units, ["momentum_density_x"], None)), ("MomentumDensity2", (mom_units, ["momentum_density_y"], None)), ("MomentumDensity3", (mom_units, ["momentum_density_z"], None)), ("TotalEnergyDensity", (eng_units, ["total_energy_density"], None)), ) def setup_fluid_fields(self): from yt.fields.magnetic_field import setup_magnetic_field_aliases unit_system = self.ds.unit_system # Add velocity fields (if only momemtum densities are given) for i, comp in enumerate(self.ds.coordinates.axis_order): # Support both current and legacy scheme for mom_field_name in ["MomentumDensity", "cons_momentum_density_"]: mom_field = ("parthenon", f"{mom_field_name}{i+1}") if mom_field in self.field_list: self.add_field( ("gas", f"velocity_{comp}"), sampling_type="cell", function=velocity_field(mom_field), units=unit_system["velocity"], ) # Figure out thermal energy field if ("parthenon", "Pressure") in self.field_list or ( "parthenon", "prim_pressure", ) in self.field_list: # only show warning for non-AthenaPK codes if "Hydro/AdiabaticIndex" not in self.ds.parameters: mylog.warning( f"Adding a specific thermal energy field assuming an ideal gas with an " f"adiabatic index of {self.ds.gamma}" ) def _specific_thermal_energy(field, data): return ( data["gas", "pressure"] / (data.ds.gamma - 1.0) / data["gas", "density"] ) self.add_field( ("gas", "specific_thermal_energy"), sampling_type="cell", function=_specific_thermal_energy, units=unit_system["specific_energy"], ) elif ("parthenon", "TotalEnergyDensity") in self.field_list or ( "parthenon", "cons_total_energy_density", ) in self.field_list: def _specific_thermal_energy(field, data): eint = ( data["gas", "total_energy_density"] - data["gas", "kinetic_energy_density"] ) if ( ("parthenon", "MagneticField1") in self.field_list or ("parthenon", "prim_magnetic_field_1") in self.field_list or ("parthenon", "cons_magnetic_field_1") in self.field_list ): eint -= data["gas", "magnetic_energy_density"] return eint / data["gas", "density"] self.add_field( ("gas", "specific_thermal_energy"), sampling_type="cell", function=_specific_thermal_energy, units=unit_system["specific_energy"], ) # Add temperature field def _temperature(field, data): return ( (data["gas", "pressure"] / data["gas", "density"]) * data.ds.mu * mh / kboltz ) self.add_field( ("gas", "temperature"), sampling_type="cell", function=_temperature, units=unit_system["temperature"], ) # We can simply all all variants as only fields present will be added setup_magnetic_field_aliases( self, "parthenon", ["MagneticField%d" % ax for ax in (1, 2, 3)] ) setup_magnetic_field_aliases( self, "parthenon", ["prim_magnetic_field_%d" % ax for ax in (1, 2, 3)] ) setup_magnetic_field_aliases( self, "parthenon", ["cons_magnetic_field_%d" % ax for ax in (1, 2, 3)] ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/parthenon/io.py0000644000175100001770000000502114714401662017200 0ustar00runnerdockerfrom itertools import groupby import numpy as np from yt.utilities.io_handler import BaseIOHandler from yt.utilities.logger import ytLogger as mylog # 
http://stackoverflow.com/questions/2361945/detecting-consecutive-integers-in-a-list def grid_sequences(grids): g_iter = sorted(grids, key=lambda g: g.id) for _, g in groupby(enumerate(g_iter), lambda i_x1: i_x1[0] - i_x1[1].id): seq = [v[1] for v in g] yield seq ii = [0, 1, 0, 1, 0, 1, 0, 1] jj = [0, 0, 1, 1, 0, 0, 1, 1] kk = [0, 0, 0, 0, 1, 1, 1, 1] class IOHandlerParthenon(BaseIOHandler): _particle_reader = False _dataset_type = "parthenon" def __init__(self, ds): super().__init__(ds) self._handle = ds._handle def _read_fluid_selection(self, chunks, selector, fields, size): chunks = list(chunks) f = self._handle rv = {} for field in fields: # Always use *native* 64-bit float. rv[field] = np.empty(size, dtype="=f8") ng = sum(len(c.objs) for c in chunks) mylog.debug( "Reading %s cells of %s fields in %s blocks", size, [f2 for f1, f2 in fields], ng, ) last_dname = None for field in fields: ftype, fname = field dname, fdi = self.ds._field_map[fname] if dname != last_dname: ds = f[f"/{dname}"] ind = 0 for chunk in chunks: for gs in grid_sequences(chunk.objs): start = gs[0].id - gs[0]._id_offset end = gs[-1].id - gs[-1]._id_offset + 1 data = ds[start:end, fdi, :, :, :].transpose() for i, g in enumerate(gs): ind += g.select(selector, data[..., i], rv[field], ind) last_dname = dname return rv def _read_chunk_data(self, chunk, fields): f = self._handle rv = {} for g in chunk.objs: rv[g.id] = {} if len(fields) == 0: return rv for field in fields: ftype, fname = field dname, fdi = self.ds._field_map[fname] ds = f[f"/{dname}"] for gs in grid_sequences(chunk.objs): start = gs[0].id - gs[0]._id_offset end = gs[-1].id - gs[-1]._id_offset + 1 data = ds[start:end, fdi, :, :, :].transpose() for i, g in enumerate(gs): rv[g.id][field] = np.asarray(data[..., i], "=f8") return rv ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/parthenon/misc.py0000644000175100001770000000000014714401662017514 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3391526 yt-4.4.0/yt/frontends/parthenon/tests/0000755000175100001770000000000014714401715017362 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/parthenon/tests/__init__.py0000644000175100001770000000000014714401662021462 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/parthenon/tests/test_outputs.py0000644000175100001770000001377114714401662022530 0ustar00runnerdockerimport numpy as np from yt.frontends.parthenon.api import ParthenonDataset from yt.loaders import load from yt.testing import ( assert_allclose, assert_equal, assert_true, requires_file, ) from yt.utilities.answer_testing.framework import ( GenericArrayTest, data_dir_load, requires_ds, small_patch_amr, ) _fields_parthenon_advection = ( ("parthenon", "advected_0_0"), ("parthenon", "one_minus_advected"), ("parthenon", "one_minus_advected_sq"), ("parthenon", "one_minus_sqrt_one_minus_advected_sq_12"), ("parthenon", "one_minus_sqrt_one_minus_advected_sq_37"), ) # Simple 2D test (advected spherical blob) with AMR from the main Parthenon test suite # adjusted so that x1 != x2. 
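# A hedged sketch of inspecting this dataset by hand, outside the
# answer-testing framework (assumes the sample data is available locally):
#   import yt
#   ds = yt.load("parthenon_advection/advection_2d.out0.final.phdf")
#   ad = ds.all_data()
#   print(ad.quantities.extrema(("parthenon", "advected_0_0")))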
# Ran with `./example/advection/advection-example -i ../tst/regression/test_suites/output_hdf5/parthinput.advection parthenon/mesh/nx1=128 parthenon/mesh/x1min=-1.0 parthenon/mesh/x1max=1.0 Advection/vx=2` # on changeset e5059ad parthenon_advection = "parthenon_advection/advection_2d.out0.final.phdf" @requires_ds(parthenon_advection) def test_loading_data(): ds = data_dir_load(parthenon_advection) assert_equal(str(ds), "advection_2d.out0.final") dd = ds.all_data() # test mesh dims vol = np.prod(ds.domain_right_edge - ds.domain_left_edge) assert_equal(vol, ds.quan(2.0, "code_length**3")) assert_allclose(dd.quantities.total_quantity("cell_volume"), vol) # test data for field in _fields_parthenon_advection: def field_func(name): return dd[name] yield GenericArrayTest(ds, field_func, args=[field]) # read data of two fields and compare them against each other (data is squared in output) ad = ds.all_data() assert_allclose( ad[("parthenon", "one_minus_advected")] ** 2.0, ad[("parthenon", "one_minus_advected_sq")], ) # check if the peak is in the domain center (and at the highest refinement level) dist_of_max_from_center = np.linalg.norm( ad.quantities.max_location(("parthenon", "advected_0_0"))[1:] - ds.domain_center ) dx_min, dx_max = ad.quantities.extrema(("index", "dx")) dy_min, dy_max = ad.quantities.extrema(("index", "dy")) assert_true(dist_of_max_from_center < np.min((dx_min, dy_min))) # 3D magnetized cluster center from downstream Parthenon code AthenaPK (restart output, conserved variables) athenapk_cluster = "athenapk_cluster/athenapk_cluster.restart.00000.rhdf" # Keplerian disk in 2D cylindrical coordinates from downstream Parthenon code AthenaPK (data output, primitive variables) athenapk_disk = "athenapk_disk/athenapk_disk.prim.00000.phdf" @requires_file(athenapk_cluster) def test_AthenaPK_rhdf(): # Test that a downstream AthenaPK data set can be loaded with this Parthenon # frontend ds = data_dir_load(athenapk_cluster) assert isinstance(ds, ParthenonDataset) assert_equal(ds.domain_left_edge.in_units("code_length").v, (-0.15, -0.18, -0.2)) assert_equal(ds.domain_right_edge.in_units("code_length").v, (0.15, 0.18, 0.2)) @requires_file(athenapk_disk) def test_AthenaPK_phdf(): # Test that a downstream AthenaPK data set can be loaded with this Parthenon # frontend assert isinstance(data_dir_load(athenapk_disk), ParthenonDataset) _fields_derived = ( ("gas", "temperature"), ("gas", "specific_thermal_energy"), ) _fields_derived_cluster = (("gas", "magnetic_field_strength"),) @requires_ds(athenapk_cluster) def test_cluster(): ds = data_dir_load(athenapk_cluster) assert_equal(str(ds), "athenapk_cluster.restart.00000") for test in small_patch_amr(ds, _fields_derived + _fields_derived_cluster): test_cluster.__name__ = test.description yield test @requires_ds(athenapk_disk) @requires_ds(athenapk_cluster) def test_derived_fields(): # Check that derived fields like temperature are present in downstream codes # which define them # Check that temperature and specific thermal energy become defined from primitives ds = data_dir_load(athenapk_disk) dd = ds.all_data() for field in _fields_derived: def field_func(name): return dd[name] yield GenericArrayTest(ds, field_func, args=[field]) # Check hydro, magnetic, and cooling fields defined from conserved variables ds = data_dir_load(athenapk_cluster) dd = ds.all_data() for field in _fields_derived + _fields_derived_cluster: def field_func(name): return dd[name] yield GenericArrayTest(ds, field_func, args=[field]) @requires_file(athenapk_cluster) @requires_file(athenapk_disk) def test_adiabatic_index(): # Read adiabatic
index from dataset parameters ds = data_dir_load(athenapk_cluster) assert_allclose(ds.gamma, 5.0 / 3.0, rtol=1e-12) ds = data_dir_load(athenapk_disk) assert_allclose(ds.gamma, 4.0 / 3.0, rtol=1e-12) # Change adiabatic index from dataset parameters ds = load(athenapk_disk, parameters={"gamma": 9.0 / 8.0}) assert_allclose(ds.gamma, 9.0 / 8.0, rtol=1e-12) @requires_file(athenapk_cluster) def test_molecular_mass(): # Read mu from dataset parameters ds = data_dir_load(athenapk_cluster) assert_allclose(float(ds.mu), 0.5925925925925926, rtol=1e-12) # Override mu via dataset parameters ds = load(athenapk_disk, parameters={"mu": 137}) assert_equal(ds.mu, 137) @requires_file(athenapk_cluster) def test_units(): # Check units in dataset are loaded correctly ds = data_dir_load(athenapk_cluster) assert_allclose(float(ds.quan(1, "code_time").in_units("Gyr")), 1, rtol=1e-12) assert_allclose(float(ds.quan(1, "code_length").in_units("Mpc")), 1, rtol=1e-12) assert_allclose(float(ds.quan(1, "code_mass").in_units("msun")), 1e14, rtol=1e-12) @requires_file(athenapk_disk) def test_load_cylindrical(): # Load a cylindrical dataset of a full disk ds = data_dir_load(athenapk_disk) # Check that the domain edges match r in [0.5,2.0], theta in [0, 2pi] assert_equal(ds.domain_left_edge.in_units("code_length").v[:2], (0.5, 0)) assert_equal(ds.domain_right_edge.in_units("code_length").v[:2], (2.0, 2 * np.pi)) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3391526 yt-4.4.0/yt/frontends/ramses/0000755000175100001770000000000014714401715015514 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ramses/__init__.py0000644000175100001770000000000014714401662017614 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ramses/api.py0000644000175100001770000000025214714401662016637 0ustar00runnerdockerfrom . 
import tests from .data_structures import RAMSESDataset from .definitions import field_aliases from .fields import RAMSESFieldInfo from .io import IOHandlerRAMSES ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ramses/data_structures.py0000644000175100001770000012076014714401662021311 0ustar00runnerdockerimport os import weakref from collections import defaultdict from functools import cached_property from itertools import product from pathlib import Path import numpy as np from yt.arraytypes import blankRecordArray from yt.data_objects.index_subobjects.octree_subset import OctreeSubset from yt.data_objects.particle_filters import add_particle_filter from yt.data_objects.static_output import Dataset from yt.funcs import mylog, setdefaultattr from yt.geometry.geometry_handler import YTDataChunk from yt.geometry.oct_container import RAMSESOctreeContainer from yt.geometry.oct_geometry_handler import OctreeIndex from yt.utilities.cython_fortran_utils import FortranFile as fpu from yt.utilities.lib.cosmology_time import t_frw, tau_frw from yt.utilities.on_demand_imports import _f90nml as f90nml from yt.utilities.physical_constants import kb, mp from .definitions import ( OUTPUT_DIR_EXP, OUTPUT_DIR_RE, STANDARD_FILE_RE, field_aliases, particle_families, ramses_header, ) from .field_handlers import get_field_handlers from .fields import _X, RAMSESFieldInfo from .hilbert import get_intersecting_cpus from .io_utils import fill_hydro, read_amr from .particle_handlers import get_particle_handlers class RAMSESFileSanitizer: """A class to handle the different files that can be passed and associated safely to a RAMSES output.""" root_folder = None # Path | None: path to the root folder info_fname = None # Path | None: path to the info file group_name = None # str | None: name of the first group folder (if any) def __init__(self, filename): # Make the resolve optional, so that it works with symlinks paths_to_try = (Path(filename), Path(filename).resolve()) self.original_filename = filename self.output_dir = None self.info_fname = None check_functions = (self.test_with_standard_file, self.test_with_folder_name) # Loop on both the functions and the tested paths for path, check_fun in product(paths_to_try, check_functions): ok, output_dir, group_dir, info_fname = check_fun(path) if ok: break # Early exit if the ok flag is False if not ok: return self.root_folder = output_dir self.group_name = group_dir.name if group_dir else None self.info_fname = info_fname def validate(self) -> None: # raise a TypeError if self.original_filename is not a valid path # we also want to expand '$USER' and '~' because os.path.exists('~') is always False filename: str = os.path.expanduser(self.original_filename) if not os.path.exists(filename): raise FileNotFoundError(rf"No such file or directory '{filename!s}'") if self.root_folder is None: raise ValueError( f"Could not determine output directory from '{filename!s}'\n" f"Expected a directory name of form {OUTPUT_DIR_EXP!r} " "containing an info_*.txt file and amr_* files." ) # This last case is (erroneously ?) 
marked as unreachable by mypy # If/when this bug is fixed upstream, mypy will warn that the unused # 'type: ignore' comment can be removed if self.info_fname is None: # type: ignore [unreachable] raise ValueError(f"Failed to detect info file from '{filename!s}'") @property def is_valid(self) -> bool: try: self.validate() except (TypeError, FileNotFoundError, ValueError): return False else: return True @staticmethod def check_standard_files(folder, iout): """Return True if the folder contains an amr file and the info file.""" # Check that the "amr_" and "info_" files exist ok = (folder / f"amr_{iout}.out00001").is_file() ok &= (folder / f"info_{iout}.txt").is_file() return ok @staticmethod def _match_output_and_group( path: Path, ) -> tuple[Path, Path | None, str | None]: # Make sure we work with a directory of the form `output_XXXXX` for p in (path, path.parent): match = OUTPUT_DIR_RE.match(p.name) if match: path = p break if match is None: return path, None, None iout = match.group(1) # See whether a folder named `group_YYYYY` exists group_dir = path / "group_00001" if group_dir.is_dir(): return path, group_dir, iout else: return path, None, iout @classmethod def test_with_folder_name( cls, output_dir: Path ) -> tuple[bool, Path | None, Path | None, Path | None]: output_dir, group_dir, iout = cls._match_output_and_group(output_dir) ok = output_dir.is_dir() and iout is not None info_fname: Path | None if ok: parent_dir = group_dir or output_dir ok &= cls.check_standard_files(parent_dir, iout) info_fname = parent_dir / f"info_{iout}.txt" else: info_fname = None return ok, output_dir, group_dir, info_fname @classmethod def test_with_standard_file( cls, filename: Path ) -> tuple[bool, Path | None, Path | None, Path | None]: output_dir, group_dir, iout = cls._match_output_and_group(filename.parent) ok = ( filename.is_file() and STANDARD_FILE_RE.match(filename.name) is not None and iout is not None ) info_fname: Path | None if ok: parent_dir = group_dir or output_dir ok &= cls.check_standard_files(parent_dir, iout) info_fname = parent_dir / f"info_{iout}.txt" else: info_fname = None return ok, output_dir, group_dir, info_fname class RAMSESDomainFile: _last_mask = None _last_selector_id = None _hydro_offset = None _level_count = None _oct_handler_initialized = False _amr_header_initialized = False def __init__(self, ds, domain_id): self.ds = ds self.domain_id = domain_id num = os.path.basename(ds.parameter_filename).split(".")[0].split("_")[1] rootdir = ds.root_folder basedir = os.path.abspath(os.path.dirname(ds.parameter_filename)) basename = "%s/%%s_%s.out%05i" % (basedir, num, domain_id) part_file_descriptor = f"{basedir}/part_file_descriptor.txt" if ds.num_groups > 0: igroup = ((domain_id - 1) // ds.group_size) + 1 basename = "%s/group_%05i/%%s_%s.out%05i" % ( rootdir, igroup, num, domain_id, ) else: basename = "%s/%%s_%s.out%05i" % (basedir, num, domain_id) for t in ["grav", "amr"]: setattr(self, f"{t}_fn", basename % t) self._part_file_descriptor = part_file_descriptor self.max_level = self.ds.parameters["levelmax"] - self.ds.parameters["levelmin"] # Autodetect field files field_handlers = [FH(self) for FH in get_field_handlers() if FH.any_exist(ds)] self.field_handlers = field_handlers for fh in field_handlers: mylog.debug("Detected fluid type %s in domain_id=%s", fh.ftype, domain_id) fh.detect_fields(ds) # self._add_ftype(fh.ftype) # Autodetect particle files particle_handlers = [ PH(self) for PH in get_particle_handlers() if PH.any_exist(ds) ] self.particle_handlers = particle_handlers 
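# Note: the handlers built above are deliberately lightweight; header
# parsing, offset computation, and oct-tree construction are all deferred
# to the lazy properties below (amr_header, oct_handler, ...), so creating
# one RAMSESDomainFile per CPU domain stays cheap.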
def __repr__(self): return "RAMSESDomainFile: %i" % self.domain_id @property def level_count(self): lvl_count = None for fh in self.field_handlers: fh.offset if lvl_count is None: lvl_count = fh.level_count.copy() else: lvl_count += fh._level_count return lvl_count @property def amr_file(self): if hasattr(self, "_amr_file") and not self._amr_file.close: self._amr_file.seek(0) return self._amr_file f = fpu(self.amr_fn) self._amr_file = f f.seek(0) return f def _read_amr_header(self): if self._amr_header_initialized: return hvals = {} with self.amr_file as f: f.seek(0) for header in ramses_header(hvals): hvals.update(f.read_attrs(header)) # For speedup, skip reading of 'headl' and 'taill' f.skip(2) hvals["numbl"] = f.read_vector("i") # That's the header, now we skip a few. hvals["numbl"] = np.array(hvals["numbl"]).reshape( (hvals["nlevelmax"], hvals["ncpu"]) ) f.skip() if hvals["nboundary"] > 0: f.skip(2) self._ngridbound = f.read_vector("i").astype("int64") else: self._ngridbound = np.zeros(hvals["nlevelmax"], dtype="int64") _free_mem = f.read_attrs((("free_mem", 5, "i"),)) _ordering = f.read_vector("c") f.skip(4) # Now we're at the tree itself # Now we iterate over each level and each CPU. position = f.tell() self._amr_header = hvals self._amr_offset = position # The maximum effective level is the deepest level # that has a non-zero number of octs nocts_to_this_level = hvals["numbl"].sum(axis=1).cumsum() self._max_level = ( np.argwhere(nocts_to_this_level == nocts_to_this_level[-1])[0][0] - self.ds.parameters["levelmin"] + 1 ) # update levelmax force_max_level, convention = self.ds._force_max_level if convention == "yt": force_max_level += self.ds.min_level + 1 self._amr_header["nlevelmax"] = min( force_max_level, self._amr_header["nlevelmax"] ) self._local_oct_count = hvals["numbl"][ self.ds.min_level :, self.domain_id - 1 ].sum() imin, imax = self.ds.min_level, self._amr_header["nlevelmax"] self._total_oct_count = hvals["numbl"][imin:imax, :].sum(axis=0) self._amr_header_initialized = True @property def ngridbound(self): self._read_amr_header() return self._ngridbound @property def amr_offset(self): self._read_amr_header() return self._amr_offset @property def max_level(self): self._read_amr_header() return self._max_level @max_level.setter def max_level(self, value): self._max_level = value @property def total_oct_count(self): self._read_amr_header() return self._total_oct_count @property def local_oct_count(self): self._read_amr_header() return self._local_oct_count @property def amr_header(self): self._read_amr_header() return self._amr_header @cached_property def oct_handler(self): """Open the oct file, read in octs level-by-level. For each oct, only the position, index, level and domain are needed - its position in the octree is found automatically. 
The most important part is finding all the information to feed oct_handler.add """ self._read_amr_header() oct_handler = RAMSESOctreeContainer( self.ds.domain_dimensions / 2, self.ds.domain_left_edge, self.ds.domain_right_edge, ) root_nodes = self.amr_header["numbl"][self.ds.min_level, :].sum() oct_handler.allocate_domains(self.total_oct_count, root_nodes) mylog.debug( "Reading domain AMR % 4i (%0.3e, %0.3e)", self.domain_id, self.total_oct_count.sum(), self.ngridbound.sum(), ) with self.amr_file as f: f.seek(self.amr_offset) min_level = self.ds.min_level max_level = read_amr( f, self.amr_header, self.ngridbound, min_level, oct_handler ) oct_handler.finalize() new_max_level = max_level if new_max_level > self.max_level: raise RuntimeError( f"The maximum level detected in the AMR file ({new_max_level}) " f"does not match the expected number {self.max_level}." ) self.max_level = new_max_level self._oct_handler_initialized = True return oct_handler def included(self, selector): if getattr(selector, "domain_id", None) is not None: return selector.domain_id == self.domain_id domain_ids = self.oct_handler.domain_identify(selector) return self.domain_id in domain_ids class RAMSESDomainSubset(OctreeSubset): _domain_offset = 1 _block_order = "F" _base_domain = None def __init__( self, base_region, domain, ds, num_zones=2, num_ghost_zones=0, base_grid=None, ): super().__init__(base_region, domain, ds, num_zones, num_ghost_zones) self._base_grid = base_grid if num_ghost_zones > 0: if not all(ds.periodicity): mylog.warning( "Ghost zones will wrongly assume the domain to be periodic." ) # Create a base domain *with no ghost zones* base_domain = RAMSESDomainSubset(ds.all_data(), domain, ds, num_zones) self._base_domain = base_domain elif num_ghost_zones < 0: raise RuntimeError( "Cannot initialize a domain subset with a negative number " f"of ghost zones, was called with {num_ghost_zones=}" ) @property def oct_handler(self): return self.domain.oct_handler def _fill_no_ghostzones(self, fd, fields, selector, file_handler): ndim = self.ds.dimensionality # Here we get a copy of the file, which we skip through and read the # bits we want. oct_handler = self.oct_handler all_fields = [f for ft, f in file_handler.field_list] fields = [f for ft, f in fields] data = {} cell_count = selector.count_oct_cells(self.oct_handler, self.domain_id) # Initializing data container for field in fields: data[field] = np.zeros(cell_count, "float64") # Do an early exit if the cell count is null if cell_count == 0: return data level_inds, cell_inds, file_inds = self.oct_handler.file_index_octs( selector, self.domain_id, cell_count ) cpu_list = [self.domain_id - 1] fill_hydro( fd, file_handler.offset, file_handler.level_count, cpu_list, level_inds, cell_inds, file_inds, ndim, all_fields, fields, data, oct_handler, ) return data def _fill_with_ghostzones( self, fd, fields, selector, file_handler, num_ghost_zones ): ndim = self.ds.dimensionality ncpu = self.ds.parameters["ncpu"] # Here we get a copy of the file, which we skip through and read the # bits we want.
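# Unlike _fill_no_ghostzones, buffers here are sized per *oct* rather than
# per selected cell: count_octs(...) * self.nz**ndim entries, where nz
# already includes the ghost-zone padding. Data may also come from any of
# the ncpu domains, hence cpu_list = range(ncpu) below.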
oct_handler = self.oct_handler all_fields = [f for ft, f in file_handler.field_list] fields = [f for ft, f in fields] tr = {} cell_count = ( selector.count_octs(self.oct_handler, self.domain_id) * self.nz**ndim ) # Initializing data container for field in fields: tr[field] = np.zeros(cell_count, "float64") # Do an early exit if the cell count is null if cell_count == 0: return tr gz_cache = getattr(self, "_ghost_zone_cache", None) if gz_cache: level_inds, cell_inds, file_inds, domain_inds = gz_cache else: gz_cache = ( level_inds, cell_inds, file_inds, domain_inds, ) = self.oct_handler.file_index_octs_with_ghost_zones( selector, self.domain_id, cell_count, self._num_ghost_zones ) self._ghost_zone_cache = gz_cache cpu_list = list(range(ncpu)) fill_hydro( fd, file_handler.offset, file_handler.level_count, cpu_list, level_inds, cell_inds, file_inds, ndim, all_fields, fields, tr, oct_handler, domain_inds=domain_inds, ) return tr @property def fwidth(self): fwidth = super().fwidth if self._num_ghost_zones > 0: fwidth = fwidth.reshape(-1, 8, 3) n_oct = fwidth.shape[0] # new_fwidth contains the fwidth of the oct+ghost zones # this is a constant array in each oct, so we simply copy # the oct value using numpy fancy-indexing new_fwidth = np.zeros((n_oct, self.nz**3, 3), dtype=fwidth.dtype) new_fwidth[:, :, :] = fwidth[:, 0:1, :] fwidth = new_fwidth.reshape(-1, 3) return fwidth @property def fcoords(self): num_ghost_zones = self._num_ghost_zones if num_ghost_zones == 0: return super().fcoords oh = self.oct_handler indices = oh.fill_index(self.selector).reshape(-1, 8) oct_inds, cell_inds = oh.fill_octcellindex_neighbours( self.selector, self._num_ghost_zones ) N_per_oct = self.nz**3 oct_inds = oct_inds.reshape(-1, N_per_oct) cell_inds = cell_inds.reshape(-1, N_per_oct) inds = indices[oct_inds, cell_inds] fcoords = self.ds.arr(oh.fcoords(self.selector)[inds].reshape(-1, 3), "unitary") return fcoords def fill(self, fd, fields, selector, file_handler): if self._num_ghost_zones == 0: return self._fill_no_ghostzones(fd, fields, selector, file_handler) else: return self._fill_with_ghostzones( fd, fields, selector, file_handler, self._num_ghost_zones ) def retrieve_ghost_zones(self, ngz, fields, smoothed=False): if smoothed: mylog.warning( "%s.retrieve_ghost_zones was called with the " "`smoothed` argument set to True. 
This is not supported, " "ignoring it.", self, ) smoothed = False _subset_with_gz = getattr(self, "_subset_with_gz", {}) try: new_subset = _subset_with_gz[ngz] mylog.debug( "Reusing previous subset with %s ghost zones for domain %s", ngz, self.domain_id, ) except KeyError: new_subset = RAMSESDomainSubset( self.base_region, self.domain, self.ds, num_ghost_zones=ngz, base_grid=self, ) _subset_with_gz[ngz] = new_subset # Cache the fields new_subset.get_data(fields) self._subset_with_gz = _subset_with_gz return new_subset class RAMSESIndex(OctreeIndex): def __init__(self, ds, dataset_type="ramses"): self.fluid_field_list = ds._fields_in_file self.dataset_type = dataset_type self.dataset = weakref.proxy(ds) self.index_filename = self.dataset.parameter_filename self.directory = os.path.dirname(self.index_filename) self.float_type = np.float64 super().__init__(ds, dataset_type) def _initialize_oct_handler(self): if self.ds._bbox is not None: cpu_list = get_intersecting_cpus(self.dataset, self.dataset._bbox) else: cpu_list = range(self.dataset["ncpu"]) self.domains = [RAMSESDomainFile(self.dataset, i + 1) for i in cpu_list] @cached_property def max_level(self): force_max_level, convention = self.ds._force_max_level if convention == "yt": force_max_level += self.ds.min_level + 1 return min(force_max_level, max(dom.max_level for dom in self.domains)) @cached_property def num_grids(self): return sum( dom.local_oct_count for dom in self.domains # + dom.ngridbound.sum() ) def _detect_output_fields(self): dsl = set() # Get the detected particle fields for ph in self.domains[0].particle_handlers: dsl.update(set(ph.field_offsets.keys())) self.particle_field_list = list(dsl) # Get the detected fields dsl = set() for fh in self.domains[0].field_handlers: dsl.update(set(fh.field_list)) self.fluid_field_list = list(dsl) self.field_list = self.particle_field_list + self.fluid_field_list def _identify_base_chunk(self, dobj): use_fast_hilbert = ( hasattr(dobj, "get_bbox") and self.ds.parameters["ordering type"] == "hilbert" ) if getattr(dobj, "_chunk_info", None) is None: if use_fast_hilbert: idoms = { idom + 1 for idom in get_intersecting_cpus(self.ds, dobj, factor=3) } # If the oct handler has been initialized, use it domains = [] for dom in self.domains: # Hilbert indexing is conservative, so reject all those that # aren't in the bbox if dom.domain_id not in idoms: continue # If the domain has its oct handler, refine the selection if dom._oct_handler_initialized and not dom.included(dobj.selector): continue mylog.debug("Identified domain %s", dom.domain_id) domains.append(dom) if len(domains) >= 1: mylog.info( "Identified % 5d/% 5d intersecting domains (% 5d through hilbert key indexing)", len(domains), len(self.domains), len(idoms), ) else: domains = [dom for dom in self.domains if dom.included(dobj.selector)] if len(domains) >= 1: mylog.info("Identified %s intersecting domains", len(domains)) base_region = getattr(dobj, "base_region", dobj) subsets = [ RAMSESDomainSubset( base_region, domain, self.dataset, num_ghost_zones=dobj._num_ghost_zones, ) for domain in domains ] dobj._chunk_info = subsets dobj._current_chunk = list(self._chunk_all(dobj))[0] def _chunk_all(self, dobj): oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info) yield YTDataChunk(dobj, "all", oobjs, None) def _chunk_spatial(self, dobj, ngz, sort=None, preload_fields=None): sobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info) for og in sobjs: if ngz > 0: g = og.retrieve_ghost_zones(ngz, [], smoothed=True) else: g = og 
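# Each spatial chunk wraps a single domain subset; when ghost zones are
# requested, the bare subset is replaced by a padded copy from
# retrieve_ghost_zones (whose smoothed=True request is downgraded with a
# warning by RAMSESDomainSubset.retrieve_ghost_zones).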
yield YTDataChunk(dobj, "spatial", [g], None) def _chunk_io(self, dobj, cache=True, local_only=False): oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info) for subset in oobjs: yield YTDataChunk(dobj, "io", [subset], None, cache=cache) def _initialize_level_stats(self): levels = sum(dom.level_count for dom in self.domains) desc = {"names": ["numcells", "level"], "formats": ["int64"] * 2} max_level = self.dataset.min_level + self.dataset.max_level + 2 self.level_stats = blankRecordArray(desc, max_level) self.level_stats["level"] = list(range(max_level)) self.level_stats["numcells"] = [0 for i in range(max_level)] for level in range(self.dataset.min_level + 1): self.level_stats[level + 1]["numcells"] = 2 ** ( level * self.dataset.dimensionality ) for level in range(levels.shape[1]): ncell = levels[:, level].sum() self.level_stats[level + self.dataset.min_level + 1]["numcells"] = ncell def _get_particle_type_counts(self): npart = 0 npart = {k: 0 for k in self.ds.particle_types if k != "all"} for dom in self.domains: for fh in dom.particle_handlers: count = fh.local_particle_count npart[fh.ptype] += count return npart def print_stats(self): """ Prints out (stdout) relevant information about the simulation This function prints information based on the fluid on the grids, and therefore does not work for DM only runs. """ if not self.fluid_field_list: print("This function is not implemented for DM only runs") return self._initialize_level_stats() header = "{:>3}\t{:>14}\t{:>14}".format("level", "# cells", "# cells^3") print(header) print(f"{len(header.expandtabs()) * '-'}") for level in range(self.dataset.min_level + self.dataset.max_level + 2): print( "% 3i\t% 14i\t% 14i" % ( level, self.level_stats["numcells"][level], np.ceil(self.level_stats["numcells"][level] ** (1.0 / 3)), ) ) print("-" * 46) print(" \t% 14i" % (self.level_stats["numcells"].sum())) print("\n") dx = self.get_smallest_dx() try: print(f"z = {self.dataset.current_redshift:0.8f}") except Exception: pass print( "t = {:0.8e} = {:0.8e} = {:0.8e}".format( self.ds.current_time.in_units("code_time"), self.ds.current_time.in_units("s"), self.ds.current_time.in_units("yr"), ) ) print("\nSmallest Cell:") for item in ("Mpc", "pc", "AU", "cm"): print(f"\tWidth: {dx.in_units(item):0.3e}") class RAMSESDataset(Dataset): _index_class = RAMSESIndex _field_info_class = RAMSESFieldInfo gamma = 1.4 # This will get replaced on hydro_fn open # RAMSES-specific parameters force_cosmological: bool | None _force_max_level: tuple[int, str] _bbox: list[list[float]] | None _self_shielding: bool | None = None def __init__( self, filename, dataset_type="ramses", fields=None, storage_filename=None, units_override=None, unit_system="cgs", extra_particle_fields=None, cosmological=None, bbox=None, max_level=None, max_level_convention=None, default_species_fields=None, self_shielding=None, use_conformal_time=None, ): # Here we want to initiate a traceback, if the reader is not built. if isinstance(fields, str): fields = field_aliases[fields] """ fields: An array of hydro variable fields in order of position in the hydro_XXXXX.outYYYYY file. If set to None, will try a default set of fields. extra_particle_fields: An array of extra particle variables in order of position in the particle_XXXXX.outYYYYY file. cosmological: If set to None, automatically detect cosmological simulation. If a boolean, force its value. self_shielding: If set to True, assume gas is self-shielded above 0.01 mp/cm^3. 
This affects the fields related to cooling and the mean molecular weight. """ self._fields_in_file = fields # By default, extra fields have not triggered a warning self._warned_extra_fields = defaultdict(lambda: False) self._extra_particle_fields = extra_particle_fields self.force_cosmological = cosmological self._bbox = bbox self._force_max_level = self._sanitize_max_level( max_level, max_level_convention ) file_handler = RAMSESFileSanitizer(filename) # ensure validation happens even if the class is instantiated # directly rather than from yt.load file_handler.validate() # Sanitize the filename info_fname = file_handler.info_fname if file_handler.group_name is not None: self.num_groups = len( [_ for _ in file_handler.root_folder.glob("group_?????") if _.is_dir()] ) else: self.num_groups = 0 self.root_folder = file_handler.root_folder Dataset.__init__( self, info_fname, dataset_type, units_override=units_override, unit_system=unit_system, default_species_fields=default_species_fields, ) # Add the particle types ptypes = [] for PH in get_particle_handlers(): if PH.any_exist(self): ptypes.append(PH.ptype) ptypes = tuple(ptypes) self.particle_types = self.particle_types_raw = ptypes # Add the fluid types for FH in get_field_handlers(): FH.purge_detected_fields(self) if FH.any_exist(self): self.fluid_types += (FH.ftype,) if use_conformal_time is not None: self.use_conformal_time = use_conformal_time elif self.cosmological_simulation: if "rt" in self.fluid_types: self.use_conformal_time = False else: self.use_conformal_time = True else: self.use_conformal_time = False self.storage_filename = storage_filename self.self_shielding = self_shielding @property def self_shielding(self) -> bool: if self._self_shielding is not None: return self._self_shielding # Read namelist.txt file (if any) has_namelist = self.read_namelist() if not has_namelist: self._self_shielding = False return self._self_shielding nml = self.parameters["namelist"] # "self_shielding" is stored in physics_params in older versions of the code physics_params = nml.get("physics_params", default={}) # and in "cooling_params" in more recent ones cooling_params = nml.get("cooling_params", default={}) self_shielding = physics_params.get("self_shielding", False) self_shielding |= cooling_params.get("self_shielding", False) self._self_shielding = self_shielding return self_shielding @self_shielding.setter def self_shielding(self, value): self._self_shielding = value @staticmethod def _sanitize_max_level(max_level, max_level_convention): # NOTE: the initialisation of the dataset class sets # self.min_level _and_ requires force_max_level # to be set, so we cannot convert from to yt/ramses # conventions if max_level is None and max_level_convention is None: return (2**999, "yt") # Check max_level is a valid, positive integer if not isinstance(max_level, (int, np.integer)): raise TypeError( f"Expected `max_level` to be a positive integer, got {max_level} " f"with type {type(max_level)} instead." ) if max_level < 0: raise ValueError( f"Expected `max_level` to be a positive integer, got {max_level} " "instead." ) # Check max_level_convention is set and acceptable if max_level_convention is None: raise ValueError( f"Received `max_level`={max_level}, but no `max_level_convention`. " "Valid conventions are 'yt' and 'ramses'." ) if max_level_convention not in ("ramses", "yt"): raise ValueError( f"Invalid convention {max_level_convention}. " "Valid choices are 'yt' and 'ramses'." 
) return (max_level, max_level_convention) def create_field_info(self, *args, **kwa): """Extend create_field_info to add the particle types.""" super().create_field_info(*args, **kwa) # Register particle filters if ("io", "particle_family") in self.field_list: for fname, value in particle_families.items(): def loc(val): def closure(pfilter, data): filter = data[pfilter.filtered_type, "particle_family"] == val return filter return closure add_particle_filter( fname, loc(value), filtered_type="io", requires=["particle_family"] ) for k in particle_families.keys(): mylog.info("Adding particle_type: %s", k) self.add_particle_filter(f"{k}") def __str__(self): return self.basename.rsplit(".", 1)[0] def _set_code_unit_attributes(self): """ Generates the conversion to various physical _units based on the parameter file """ # loading the units from the info file boxlen = self.parameters["boxlen"] length_unit = self.parameters["unit_l"] density_unit = self.parameters["unit_d"] time_unit = self.parameters["unit_t"] # calculating derived units (except velocity and temperature, done below) mass_unit = density_unit * length_unit**3 magnetic_unit = np.sqrt(4 * np.pi * mass_unit / (time_unit**2 * length_unit)) pressure_unit = density_unit * (length_unit / time_unit) ** 2 # TODO: # Generalize the temperature field to account for ionization # For now assume an atomic ideal gas with cosmic abundances (x_H = 0.76) mean_molecular_weight_factor = _X**-1 setdefaultattr(self, "density_unit", self.quan(density_unit, "g/cm**3")) setdefaultattr(self, "magnetic_unit", self.quan(magnetic_unit, "gauss")) setdefaultattr(self, "pressure_unit", self.quan(pressure_unit, "dyne/cm**2")) setdefaultattr(self, "time_unit", self.quan(time_unit, "s")) setdefaultattr(self, "mass_unit", self.quan(mass_unit, "g")) setdefaultattr( self, "velocity_unit", self.quan(length_unit, "cm") / self.time_unit ) temperature_unit = ( self.velocity_unit**2 * mp * mean_molecular_weight_factor / kb ) setdefaultattr(self, "temperature_unit", temperature_unit.in_units("K")) # Only the length unit gets scaled by a factor of boxlen setdefaultattr(self, "length_unit", self.quan(length_unit * boxlen, "cm")) def _parse_parameter_file(self): # hardcoded for now # These should be explicitly obtained from the file, but for now that # will wait until a reorganization of the source tree and better # generalization. self.dimensionality = 3 self.refine_by = 2 self.parameters["HydroMethod"] = "ramses" self.parameters["Time"] = 1.0 # default unit is 1... # We now execute the same logic Oliver's code does rheader = {} def read_rhs(f, cast): line = f.readline().strip() if line and "=" in line: key, val = (_.strip() for _ in line.split("=")) rheader[key] = cast(val) return key else: return None def cast_a_else_b(cast_a, cast_b): def caster(val): try: return cast_a(val) except ValueError: return cast_b(val) return caster with open(self.parameter_filename) as f: # Standard: first six are ncpu, ndim, levelmin, levelmax, ngridmax, nstep_coarse for _ in range(6): read_rhs(f, int) f.readline() # Standard: next 11 are boxlen, time, aexp, h0, omega_m, omega_l, omega_k, omega_b, unit_l, unit_d, unit_t for _ in range(11): key = read_rhs(f, float) # Read non-standard extra fields until hitting the ordering type while key != "ordering type": key = read_rhs(f, cast_a_else_b(float, str)) # This next line deserves some comment. We specify a min_level that # corresponds to the minimum level in the RAMSES simulation.
RAMSES is # one-indexed, but it also does refer to the *oct* dimensions -- so # this means that a levelmin of 1 would have *1* oct in it. So a # levelmin of 2 would have 8 octs at the root mesh level. self.min_level = rheader["levelmin"] - 1 # Now we read the hilbert indices self.hilbert_indices = {} if rheader["ordering type"] == "hilbert": f.readline() # header for _ in range(rheader["ncpu"]): dom, mi, ma = f.readline().split() self.hilbert_indices[int(dom)] = (float(mi), float(ma)) if rheader["ordering type"] != "hilbert" and self._bbox is not None: raise NotImplementedError( f"The ordering {rheader['ordering type']} " "is not compatible with the `bbox` argument." ) self.parameters.update(rheader) self.domain_left_edge = np.zeros(3, dtype="float64") self.domain_dimensions = np.ones(3, dtype="int32") * 2 ** (self.min_level + 1) self.domain_right_edge = np.ones(3, dtype="float64") # This is likely not true, but it's not clear # how to determine the boundary conditions self._periodicity = (True, True, True) if self.force_cosmological is not None: is_cosmological = self.force_cosmological else: # These conditions seem to always be true for non-cosmological datasets is_cosmological = not ( rheader["time"] >= 0 and rheader["H0"] == 1 and rheader["aexp"] == 1 ) if not is_cosmological: self.cosmological_simulation = False self.current_redshift = 0 self.hubble_constant = 0 self.omega_matter = 0 self.omega_lambda = 0 else: self.cosmological_simulation = True self.current_redshift = (1.0 / rheader["aexp"]) - 1.0 self.omega_lambda = rheader["omega_l"] self.omega_matter = rheader["omega_m"] self.hubble_constant = rheader["H0"] / 100.0 # This is H100 force_max_level, convention = self._force_max_level if convention == "yt": force_max_level += self.min_level + 1 self.max_level = min(force_max_level, rheader["levelmax"]) - self.min_level - 1 if not self.cosmological_simulation: self.current_time = self.parameters["time"] else: aexp_grid = np.geomspace(1e-3, 1, 2_000, endpoint=False) z_grid = 1 / aexp_grid - 1 self.tau_frw = tau_frw(self, z_grid) self.t_frw = t_frw(self, z_grid) self.current_time = t_frw(self, self.current_redshift).to("Gyr") if self.num_groups > 0: self.group_size = rheader["ncpu"] // self.num_groups self.read_namelist() def read_namelist(self) -> bool: """Read the namelist.txt file in the output folder, if present""" namelist_file = os.path.join(self.root_folder, "namelist.txt") if not os.path.exists(namelist_file): return False try: with open(namelist_file) as f: nml = f90nml.read(f) except ImportError as err: mylog.warning( "`namelist.txt` file found but missing package f90nml to read it:", exc_info=err, ) return False except (ValueError, StopIteration, AssertionError) as err: # Note: f90nml may raise a StopIteration, a ValueError or an AssertionError if # the namelist is not valid. 
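# Hedged example of a fragment this parses successfully (standard Fortran
# namelist syntax; the parameter names shown are illustrative):
#   &RUN_PARAMS
#     cosmo=.true.
#     nrestart=0
#   /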
mylog.warning( "Could not parse `namelist.txt` file as it was malformed:", exc_info=err, ) return False self.parameters["namelist"] = nml return True @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if cls._missing_load_requirements(): return False return RAMSESFileSanitizer(filename).is_valid ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ramses/definitions.py0000644000175100001770000000434114714401662020404 0ustar00runnerdocker# These functions are RAMSES-specific import re from yt.funcs import mylog from yt.utilities.configure import YTConfig, configuration_callbacks def ramses_header(hvals): header = ( ("ncpu", 1, "i"), ("ndim", 1, "i"), ("nx", 3, "i"), ("nlevelmax", 1, "i"), ("ngridmax", 1, "i"), ("nboundary", 1, "i"), ("ngrid_current", 1, "i"), ("boxlen", 1, "d"), ("nout", 3, "i"), ) yield header # TODO: REMOVE noutput, iout, ifout = hvals["nout"] next_set = ( ("tout", noutput, "d"), ("aout", noutput, "d"), ("t", 1, "d"), ("dtold", hvals["nlevelmax"], "d"), ("dtnew", hvals["nlevelmax"], "d"), ("nstep", 2, "i"), ("stat", 3, "d"), ("cosm", 7, "d"), ("timing", 5, "d"), ("mass_sph", 1, "d", True), ) yield next_set field_aliases = { "standard_five": ("Density", "x-velocity", "y-velocity", "z-velocity", "Pressure"), "standard_six": ( "Density", "x-velocity", "y-velocity", "z-velocity", "Pressure", "Metallicity", ), } ## Regular expressions used to parse file descriptors VERSION_RE = re.compile(r"# version: *(\d+)") # This will match comma-separated strings, discarding whitespaces # on the left hand side VAR_DESC_RE = re.compile(r"\s*([^\s]+),\s*([^\s]+),\s*([^\s]+)") OUTPUT_DIR_EXP = r"output_(\d{5})" OUTPUT_DIR_RE = re.compile(OUTPUT_DIR_EXP) STANDARD_FILE_RE = re.compile(r"((amr|hydro|part|grav)_\d{5}\.out\d{5}|info_\d{5}.txt)") ## Configure family mapping particle_families = { "DM": 1, "star": 2, "cloud": 3, "dust": 4, "star_tracer": -2, "cloud_tracer": -3, "dust_tracer": -4, "gas_tracer": 0, } def _setup_ramses_particle_families(ytcfg: YTConfig) -> None: if not ytcfg.has_section("ramses-families"): return for key in particle_families.keys(): val = ytcfg.get("ramses-families", key, callback=None) if val is not None: mylog.info( "Changing family %s from %s to %s", key, particle_families[key], val ) particle_families[key] = val configuration_callbacks.append(_setup_ramses_particle_families) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ramses/field_handlers.py0000644000175100001770000004766414714401662021053 0ustar00runnerdockerimport abc import glob import os from functools import cached_property from yt.config import ytcfg from yt.funcs import mylog from yt.utilities.cython_fortran_utils import FortranFile from .io import _read_fluid_file_descriptor from .io_utils import read_offset FIELD_HANDLERS: set[type["FieldFileHandler"]] = set() def get_field_handlers(): return FIELD_HANDLERS def register_field_handler(ph): FIELD_HANDLERS.add(ph) DETECTED_FIELDS = {} # type: ignore class HandlerMixin: """This contains all the shared methods to handle RAMSES files. This is not supposed to be user-facing. """ def setup_handler(self, domain): """ Initialize an instance of the class. This automatically sets the full path to the file. This is not intended to be overridden in most cases. If you need more flexibility, rewrite this function to your need in the inherited class. 
""" self.ds = ds = domain.ds self.domain = domain self.domain_id = domain.domain_id basename = os.path.abspath(ds.root_folder) iout = int(os.path.basename(ds.parameter_filename).split(".")[0].split("_")[1]) if ds.num_groups > 0: igroup = ((domain.domain_id - 1) // ds.group_size) + 1 full_path = os.path.join( basename, f"group_{igroup:05d}", self.fname.format(iout=iout, icpu=domain.domain_id), ) else: full_path = os.path.join( basename, self.fname.format(iout=iout, icpu=domain.domain_id) ) if os.path.exists(full_path): self.fname = full_path else: raise FileNotFoundError( f"Could not find {self._file_type} file (type: {self.ftype}). " f"Tried {full_path}" ) if self.file_descriptor is not None: if ds.num_groups > 0: # The particle file descriptor is *only* in the first group self.file_descriptor = os.path.join( basename, "group_00001", self.file_descriptor ) else: self.file_descriptor = os.path.join(basename, self.file_descriptor) @property def exists(self): """ This function should return True if the *file* the instance exists. It is called for each file of the type found on the disk. By default, it just returns whether the file exists. Override it for more complex cases. """ return os.path.exists(self.fname) @property def has_descriptor(self): """ This function should return True if a *file descriptor* exists. By default, it just returns whether the file exists. Override it for more complex cases. """ return os.path.exists(self.file_descriptor) @classmethod def any_exist(cls, ds): """ This function should return True if the kind of particle represented by the class exists in the dataset. It takes as argument the class itself -not an instance- and a dataset. Arguments --------- ds : a Ramses Dataset Note ---- This function is usually called once at the initialization of the RAMSES Dataset structure to determine if the particle type (e.g. regular particles) exists. """ if ds.unique_identifier in cls._unique_registry: return cls._unique_registry[ds.unique_identifier] iout = int(os.path.basename(ds.parameter_filename).split(".")[0].split("_")[1]) fname = os.path.join( os.path.split(ds.parameter_filename)[0], cls.fname.format(iout=iout, icpu=1) ) exists = os.path.exists(fname) cls._unique_registry[ds.unique_identifier] = exists return exists class FieldFileHandler(abc.ABC, HandlerMixin): """ Abstract class to handle particles in RAMSES. Each instance represents a single file (one domain). To add support to a new particle file, inherit from this class and implement all functions containing a `NotImplementedError`. See `SinkParticleFileHandler` for an example implementation.""" _file_type = "field" # These properties are static properties ftype: str | None = None # The name to give to the field type fname: str | None = None # The name of the file(s) # The attributes of the header attrs: tuple[tuple[str, int, str], ...] | None = None known_fields = None # A list of tuple containing the field name and its type config_field: str | None = None # Name of the config section (if any) file_descriptor: str | None = None # The name of the file descriptor (if any) # These properties are computed dynamically field_offsets = None # Mapping from field to offset in file field_types = ( None # Mapping from field to the type of the data (float, integer, ...) ) def __init_subclass__(cls, *args, **kwargs): """ Registers subclasses at creation. 
""" super().__init_subclass__(*args, **kwargs) if cls.ftype is not None: register_field_handler(cls) cls._unique_registry = {} return cls def __init__(self, domain): self.setup_handler(domain) @classmethod @abc.abstractmethod def detect_fields(cls, ds): """ Called once to setup the fields of this type It should set the following static variables: * parameters: dictionary Dictionary containing the variables. The keys should match those of `cls.attrs` * field_list: list of (ftype, fname) The list of the field present in the file """ pass @classmethod def get_detected_fields(cls, ds): """ Get the detected fields from the registry. """ if ds.unique_identifier in DETECTED_FIELDS: d = DETECTED_FIELDS[ds.unique_identifier] if cls.ftype in d: return d[cls.ftype] return None @classmethod def set_detected_fields(cls, ds, fields): """ Store the detected fields into the registry. """ if ds.unique_identifier not in DETECTED_FIELDS: DETECTED_FIELDS[ds.unique_identifier] = {} DETECTED_FIELDS[ds.unique_identifier].update({cls.ftype: fields}) @classmethod def purge_detected_fields(cls, ds): """ Purge the registry. This should be called on dataset creation to force the field detection to be called. """ if ds.unique_identifier in DETECTED_FIELDS: DETECTED_FIELDS.pop(ds.unique_identifier) @property def level_count(self): """ Return the number of cells per level. """ if getattr(self, "_level_count", None) is not None: return self._level_count self.offset return self._level_count @cached_property def offset(self): """ Compute the offsets of the fields. By default, it skips the header (as defined by `cls.attrs`) and computes the offset at each level. It should be generic enough for most of the cases, but if the *structure* of your fluid file is non-canonical, change this. """ nvars = len(self.field_list) with FortranFile(self.fname) as fd: # Skip headers nskip = len(self.attrs) fd.skip(nskip) min_level = self.domain.ds.min_level # The file is as follows: # > headers # loop over levels # loop over cpu domains # > : current level # > : number of octs in level, domain # loop over variables (positions, velocities, density, ...) 
# loop over <2*2*2> cells in each oct # > with shape (nocts, ) # # So there are 8 * nvars records each with length (nocts, ) # at each (level, cpus) offset, level_count = read_offset( fd, min_level, self.domain.domain_id, self.parameters["nvar"], self.domain.amr_header, Nskip=nvars * 8, ) self._level_count = level_count return offset @classmethod def load_fields_from_yt_config(cls) -> list[str]: if cls.config_field and ytcfg.has_section(cls.config_field): cfg = ytcfg.get(cls.config_field, "fields") fields = [_.strip() for _ in cfg if _.strip() != ""] return fields return [] class HydroFieldFileHandler(FieldFileHandler): ftype = "ramses" fname = "hydro_{iout:05d}.out{icpu:05d}" file_descriptor = "hydro_file_descriptor.txt" config_field = "ramses-hydro" attrs = ( ("ncpu", 1, "i"), ("nvar", 1, "i"), ("ndim", 1, "i"), ("nlevelmax", 1, "i"), ("nboundary", 1, "i"), ("gamma", 1, "d"), ) @classmethod def detect_fields(cls, ds): # Try to get the detected fields detected_fields = cls.get_detected_fields(ds) if detected_fields: return detected_fields num = os.path.basename(ds.parameter_filename).split(".")[0].split("_")[1] testdomain = 1 # Just pick the first domain file to read basepath = os.path.abspath(os.path.dirname(ds.parameter_filename)) basename = "%s/%%s_%s.out%05i" % (basepath, num, testdomain) fname = basename % "hydro" fname_desc = os.path.join(basepath, cls.file_descriptor) attrs = cls.attrs with FortranFile(fname) as fd: hvals = fd.read_attrs(attrs) cls.parameters = hvals # Store some metadata ds.gamma = hvals["gamma"] nvar = hvals["nvar"] ok = False if ds._fields_in_file is not None: # Case 1: fields are provided by users on construction of dataset fields = list(ds._fields_in_file) ok = True else: # Case 2: fields are provided by users in the config fields = cls.load_fields_from_yt_config() ok = len(fields) > 0 if not ok and os.path.exists(fname_desc): # Case 3: there is a file descriptor # Or there is an hydro file descriptor mylog.debug("Reading hydro file descriptor.") # For now, we can only read double precision fields fields = [ e[0] for e in _read_fluid_file_descriptor(fname_desc, prefix="hydro") ] # We get no fields for old-style hydro file descriptor ok = len(fields) > 0 if not ok: # Case 4: attempt autodetection with usual fields foldername = os.path.abspath(os.path.dirname(ds.parameter_filename)) rt_flag = any(glob.glob(os.sep.join([foldername, "info_rt_*.txt"]))) if rt_flag: # rt run if nvar < 10: mylog.info("Detected RAMSES-RT file WITHOUT IR trapping.") fields = [ "Density", "x-velocity", "y-velocity", "z-velocity", "Pressure", "Metallicity", "HII", "HeII", "HeIII", ] else: mylog.info("Detected RAMSES-RT file WITH IR trapping.") fields = [ "Density", "x-velocity", "y-velocity", "z-velocity", "Pres_IR", "Pressure", "Metallicity", "HII", "HeII", "HeIII", ] else: if nvar < 5: mylog.debug( "nvar=%s is too small! 
YT doesn't currently " "support 1D/2D runs in RAMSES %s" ) raise ValueError # Basic hydro runs if nvar == 5: fields = [ "Density", "x-velocity", "y-velocity", "z-velocity", "Pressure", ] if nvar > 5 and nvar < 11: fields = [ "Density", "x-velocity", "y-velocity", "z-velocity", "Pressure", "Metallicity", ] # MHD runs - NOTE: # THE MHD MODULE WILL SILENTLY ADD 3 TO THE NVAR IN THE MAKEFILE if nvar == 11: fields = [ "Density", "x-velocity", "y-velocity", "z-velocity", "B_x_left", "B_y_left", "B_z_left", "B_x_right", "B_y_right", "B_z_right", "Pressure", ] if nvar > 11: fields = [ "Density", "x-velocity", "y-velocity", "z-velocity", "B_x_left", "B_y_left", "B_z_left", "B_x_right", "B_y_right", "B_z_right", "Pressure", "Metallicity", ] mylog.debug( "No fields specified by user; automatically setting fields array to %s", fields, ) # Allow some wiggle room for users to add too many variables count_extra = 0 while len(fields) < nvar: fields.append(f"var_{len(fields)}") count_extra += 1 if count_extra > 0: mylog.debug("Detected %s extra fluid fields.", count_extra) cls.field_list = [(cls.ftype, e) for e in fields] cls.set_detected_fields(ds, fields) return fields class GravFieldFileHandler(FieldFileHandler): ftype = "gravity" fname = "grav_{iout:05d}.out{icpu:05d}" config_field = "ramses-grav" attrs = ( ("ncpu", 1, "i"), ("nvar", 1, "i"), ("nlevelmax", 1, "i"), ("nboundary", 1, "i"), ) @classmethod def detect_fields(cls, ds): # Try to get the detected fields detected_fields = cls.get_detected_fields(ds) if detected_fields: return detected_fields ndim = ds.dimensionality iout = int(str(ds).split("_")[1]) basedir = os.path.split(ds.parameter_filename)[0] fname = os.path.join(basedir, cls.fname.format(iout=iout, icpu=1)) with FortranFile(fname) as fd: cls.parameters = fd.read_attrs(cls.attrs) nvar = cls.parameters["nvar"] ndim = ds.dimensionality fields = cls.load_fields_from_yt_config() if not fields: if nvar == ndim + 1: fields = ["Potential"] + [f"{k}-acceleration" for k in "xyz"[:ndim]] else: fields = [f"{k}-acceleration" for k in "xyz"[:ndim]] ndetected = len(fields) if ndetected != nvar and not ds._warned_extra_fields["gravity"]: mylog.info("Detected %s extra gravity fields.", nvar - ndetected) ds._warned_extra_fields["gravity"] = True for i in range(nvar - ndetected): fields.append(f"var{i}") cls.field_list = [(cls.ftype, e) for e in fields] cls.set_detected_fields(ds, fields) return fields class RTFieldFileHandler(FieldFileHandler): ftype = "ramses-rt" fname = "rt_{iout:05d}.out{icpu:05d}" file_descriptor = "rt_file_descriptor.txt" config_field = "ramses-rt" attrs = ( ("ncpu", 1, "i"), ("nvar", 1, "i"), ("ndim", 1, "i"), ("nlevelmax", 1, "i"), ("nboundary", 1, "i"), ("gamma", 1, "d"), ) @classmethod def detect_fields(cls, ds): # Try to get the detected fields detected_fields = cls.get_detected_fields(ds) if detected_fields: return detected_fields fname = ds.parameter_filename.replace("info_", "info_rt_") rheader = {} def read_rhs(cast): line = f.readline() p, v = line.split("=") rheader[p.strip()] = cast(v) with open(fname) as f: # Read nRTvar, nions, ngroups, iions for _ in range(4): read_rhs(int) # Try to read rtprecision. # Either it is present or the line is simply blank, so # we try to parse the line as an int, and if it fails, # we simply ignore it. 
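            # For orientation, a hypothetical info_rt_XXXXX.txt header could
            # begin:
            #   nRTvar      =            4
            #   nIons       =            3
            #   nGroups     =            3
            #   iIons       =           10
            # (names and spacing are illustrative); each "key = value" line
            # is consumed by `read_rhs` above.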
try: read_rhs(int) f.readline() except ValueError: pass # Read X and Y fractions for _ in range(2): read_rhs(float) f.readline() # Reat unit_np, unit_pfd for _ in range(2): read_rhs(float) # Read rt_c_frac # Note: when using variable speed of light, this line will contain multiple # values corresponding the the velocity at each level read_rhs(lambda line: [float(e) for e in line.split()]) f.readline() # Read n star, t2star, g_star for _ in range(3): read_rhs(float) # Touchy part, we have to read the photon group properties mylog.debug("Not reading photon group properties") cls.rt_parameters = rheader ngroups = rheader["nGroups"] iout = int(str(ds).split("_")[1]) basedir = os.path.split(ds.parameter_filename)[0] fname = os.path.join(basedir, cls.fname.format(iout=iout, icpu=1)) fname_desc = os.path.join(basedir, cls.file_descriptor) with FortranFile(fname) as fd: cls.parameters = fd.read_attrs(cls.attrs) ok = False if ds._fields_in_file is not None: # Case 1: fields are provided by users on construction of dataset fields = list(ds._fields_in_file) ok = True else: # Case 2: fields are provided by users in the config fields = cls.load_fields_from_yt_config() ok = len(fields) > 0 if not ok and os.path.exists(fname_desc): # Case 3: there is a file descriptor # Or there is an hydro file descriptor mylog.debug("Reading rt file descriptor.") # For now, we can only read double precision fields fields = [ e[0] for e in _read_fluid_file_descriptor(fname_desc, prefix="rt") ] ok = len(fields) > 0 if not ok: fields = [] tmp = [ "Photon_density_%s", "Photon_flux_x_%s", "Photon_flux_y_%s", "Photon_flux_z_%s", ] for ng in range(ngroups): fields.extend([t % (ng + 1) for t in tmp]) cls.field_list = [(cls.ftype, e) for e in fields] cls.set_detected_fields(ds, fields) return fields @classmethod def get_rt_parameters(cls, ds): if cls.rt_parameters: return cls.rt_parameters # Call detect fields to get the rt_parameters cls.detect_fields(ds) return cls.rt_parameters ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ramses/fields.py0000644000175100001770000005205014714401662017337 0ustar00runnerdockerimport os import warnings from functools import partial import numpy as np from yt import units from yt._typing import KnownFieldsT from yt.fields.field_detector import FieldDetector from yt.fields.field_info_container import FieldInfoContainer from yt.frontends.ramses.io import convert_ramses_conformal_time_to_physical_time from yt.utilities.cython_fortran_utils import FortranFile from yt.utilities.lib.cosmology_time import t_frw from yt.utilities.linear_interpolators import BilinearFieldInterpolator from yt.utilities.logger import ytLogger as mylog from yt.utilities.physical_constants import ( boltzmann_constant_cgs, mass_hydrogen_cgs, mh, mp, ) from .field_handlers import RTFieldFileHandler b_units = "code_magnetic" ra_units = "code_length / code_time**2" rho_units = "code_density" vel_units = "code_velocity" pressure_units = "code_pressure" ener_units = "code_mass * code_velocity**2" specific_ener_units = "code_velocity**2" ang_mom_units = "code_mass * code_velocity * code_length" cooling_function_units = " erg * cm**3 /s" cooling_function_prime_units = " erg * cm**3 /s/K" flux_unit = "1 / code_length**2 / code_time" number_density_unit = "1 / code_length**3" known_species_masses = { sp: mh * v for sp, v in [ ("HI", 1.0), ("HII", 1.0), ("Electron", 1.0), ("HeI", 4.0), ("HeII", 4.0), ("HeIII", 4.0), ("H2I", 2.0), ("H2II", 2.0), ("HM", 1.0), ("DI", 
2.0), ("DII", 2.0), ("HDI", 3.0), ] } known_species_names = { "HI": "H_p0", "HII": "H_p1", "Electron": "El", "HeI": "He_p0", "HeII": "He_p1", "HeIII": "He_p2", "H2I": "H2_p0", "H2II": "H2_p1", "HM": "H_m1", "DI": "D_p0", "DII": "D_p1", "HDI": "HD_p0", } _cool_axes = ("lognH", "logT") # , "logTeq") _cool_arrs = ( ("cooling_primordial", cooling_function_units), ("heating_primordial", cooling_function_units), ("cooling_compton", cooling_function_units), ("heating_compton", cooling_function_units), ("cooling_metal", cooling_function_units), ("cooling_primordial_prime", cooling_function_prime_units), ("heating_primordial_prime", cooling_function_prime_units), ("cooling_compton_prime", cooling_function_prime_units), ("heating_compton_prime", cooling_function_prime_units), ("cooling_metal_prime", cooling_function_prime_units), ("mu", None), ("abundances", None), ) _cool_species = ( "Electron_number_density", "HI_number_density", "HII_number_density", "HeI_number_density", "HeII_number_density", "HeIII_number_density", ) _X = 0.76 # H fraction, hardcoded _Y = 0.24 # He fraction, hardcoded class RAMSESFieldInfo(FieldInfoContainer): known_other_fields: KnownFieldsT = ( ("Density", (rho_units, ["density"], None)), ("x-velocity", (vel_units, ["velocity_x"], None)), ("y-velocity", (vel_units, ["velocity_y"], None)), ("z-velocity", (vel_units, ["velocity_z"], None)), ("Pres_IR", (pressure_units, ["pres_IR", "pressure_IR"], None)), ("Pressure", (pressure_units, ["pressure"], None)), ("Metallicity", ("", ["metallicity"], None)), ("HII", ("", ["H_p1_fraction"], None)), ("HeII", ("", ["He_p1_fraction"], None)), ("HeIII", ("", ["He_p2_fraction"], None)), ("x-acceleration", (ra_units, ["acceleration_x"], None)), ("y-acceleration", (ra_units, ["acceleration_y"], None)), ("z-acceleration", (ra_units, ["acceleration_z"], None)), ("Potential", (specific_ener_units, ["potential"], None)), ("B_x_left", (b_units, ["magnetic_field_x_left"], None)), ("B_x_right", (b_units, ["magnetic_field_x_right"], None)), ("B_y_left", (b_units, ["magnetic_field_y_left"], None)), ("B_y_right", (b_units, ["magnetic_field_y_right"], None)), ("B_z_left", (b_units, ["magnetic_field_z_left"], None)), ("B_z_right", (b_units, ["magnetic_field_z_right"], None)), ) known_particle_fields: KnownFieldsT = ( ("particle_position_x", ("code_length", [], None)), ("particle_position_y", ("code_length", [], None)), ("particle_position_z", ("code_length", [], None)), ("particle_velocity_x", (vel_units, [], None)), ("particle_velocity_y", (vel_units, [], None)), ("particle_velocity_z", (vel_units, [], None)), ("particle_mass", ("code_mass", [], None)), ("particle_identity", ("", ["particle_index"], None)), ("particle_refinement_level", ("", [], None)), ("particle_birth_time", ("code_time", ["age"], None)), ("conformal_birth_time", ("", [], None)), ("particle_metallicity", ("", [], None)), ("particle_family", ("", [], None)), ("particle_tag", ("", [], None)), # sink field parameters ("particle_mass", ("code_mass", [], None)), ("particle_angular_momentum_x", (ang_mom_units, [], None)), ("particle_angular_momentum_y", (ang_mom_units, [], None)), ("particle_angular_momentum_z", (ang_mom_units, [], None)), ("particle_formation_time", ("code_time", [], None)), ("particle_accretion_rate", ("code_mass/code_time", [], None)), ("particle_delta_mass", ("code_mass", [], None)), ("particle_rho_gas", (rho_units, [], None)), ("particle_cs**2", (vel_units, [], None)), ("particle_etherm", (ener_units, [], None)), ("particle_velocity_x_gas", (vel_units, [], None)), 
("particle_velocity_y_gas", (vel_units, [], None)), ("particle_velocity_z_gas", (vel_units, [], None)), ("particle_mass_bh", ("code_mass", [], None)), ("particle_level", ("", [], None)), ("particle_radius_star", ("code_length", [], None)), ) known_sink_fields: KnownFieldsT = ( ("particle_position_x", ("code_length", [], None)), ("particle_position_y", ("code_length", [], None)), ("particle_position_z", ("code_length", [], None)), ("particle_velocity_x", (vel_units, [], None)), ("particle_velocity_y", (vel_units, [], None)), ("particle_velocity_z", (vel_units, [], None)), ("particle_mass", ("code_mass", [], None)), ("particle_identifier", ("", ["particle_index"], None)), ("particle_birth_time", ("code_time", ["age"], None)), ("BH_real_accretion", ("code_mass/code_time", [], None)), ("BH_bondi_accretion", ("code_mass/code_time", [], None)), ("BH_eddington_accretion", ("code_mass/code_time", [], None)), ("BH_esave", (ener_units, [], None)), ("gas_spin_x", (ang_mom_units, [], None)), ("gas_spin_y", (ang_mom_units, [], None)), ("gas_spin_z", (ang_mom_units, [], None)), ("BH_spin_x", ("", [], None)), ("BH_spin_y", ("", [], None)), ("BH_spin_z", ("", [], None)), ("BH_spin", (ang_mom_units, [], None)), ("BH_efficiency", ("", [], None)), ) def setup_particle_fields(self, ptype): super().setup_particle_fields(ptype) def star_age_from_conformal_cosmo(field, data): conformal_age = data[ptype, "conformal_birth_time"] birth_time = convert_ramses_conformal_time_to_physical_time( data.ds, conformal_age ) return data.ds.current_time - birth_time def star_age_from_physical_cosmo(field, data): H0 = float( data.ds.quan(data.ds.hubble_constant * 100, "km/s/Mpc").to("1/Gyr") ) times = data[ptype, "conformal_birth_time"].value time_tot = float(t_frw(data.ds, 0) * H0) birth_time = (time_tot + times) / H0 t_out = float(data.ds.current_time.to("Gyr")) return data.apply_units(t_out - birth_time, "Gyr") def star_age(field, data): formation_time = data[ptype, "particle_birth_time"] return data.ds.current_time - formation_time if self.ds.cosmological_simulation and self.ds.use_conformal_time: fun = star_age_from_conformal_cosmo elif self.ds.cosmological_simulation: fun = star_age_from_physical_cosmo else: fun = star_age self.add_field( (ptype, "star_age"), sampling_type="particle", function=fun, units=self.ds.unit_system["time"], ) def setup_fluid_fields(self): def _temperature_over_mu(field, data): rv = data["gas", "pressure"] / data["gas", "density"] rv *= mass_hydrogen_cgs / boltzmann_constant_cgs return rv self.add_field( ("gas", "temperature_over_mu"), sampling_type="cell", function=_temperature_over_mu, units=self.ds.unit_system["temperature"], ) found_cooling_fields = self.create_cooling_fields() if found_cooling_fields: def _temperature(field, data): return data["gas", "temperature_over_mu"] * data["gas", "mu"] else: def _temperature(field, data): if not isinstance(data, FieldDetector): warnings.warn( "Trying to calculate temperature but the cooling tables " "couldn't be found or read. yt will return T/µ instead of " "T — this is equivalent to assuming µ=1.0. 
To suppress this, " "derive the temperature from temperature_over_mu with " "some values for mu.", category=RuntimeWarning, stacklevel=1, ) return data["gas", "temperature_over_mu"] self.add_field( ("gas", "temperature"), sampling_type="cell", function=_temperature, units=self.ds.unit_system["temperature"], ) self.species_names = [ known_species_names[fn] for ft, fn in self.field_list if fn in known_species_names ] # See if we need to load the rt fields rt_flag = RTFieldFileHandler.any_exist(self.ds) if rt_flag: # rt run self.create_rt_fields() # Load magnetic fields if ("gas", "magnetic_field_x_left") in self: self.create_magnetic_fields() # Potential field if ("gravity", "Potential") in self: self.create_gravity_fields() def create_gravity_fields(self): def potential_energy(field, data): return data["gas", "potential"] * data["gas", "cell_mass"] self.add_field( ("gas", "potential_energy"), sampling_type="cell", function=potential_energy, units=self.ds.unit_system["energy"], ) def create_magnetic_fields(self): # Calculate cell-centred magnetic fields from face-centred def mag_field(ax): def _mag_field(field, data): return ( data["gas", f"magnetic_field_{ax}_left"] + data["gas", f"magnetic_field_{ax}_right"] ) / 2 return _mag_field for ax in self.ds.coordinates.axis_order: self.add_field( ("gas", f"magnetic_field_{ax}"), sampling_type="cell", function=mag_field(ax), units=self.ds.unit_system["magnetic_field_cgs"], ) def _divB(field, data): """Calculate magnetic field divergence""" out = np.zeros_like(data["gas", "magnetic_field_x_right"]) for ax in data.ds.coordinates.axis_order: out += ( data["gas", f"magnetic_field_{ax}_right"] - data["gas", f"magnetic_field_{ax}_left"] ) return out / data["gas", "dx"] self.add_field( ("gas", "magnetic_field_divergence"), sampling_type="cell", function=_divB, units=self.ds.unit_system["magnetic_field_cgs"] / self.ds.unit_system["length"], ) def create_rt_fields(self): self.ds.fluid_types += ("rt",) p = RTFieldFileHandler.get_rt_parameters(self.ds).copy() p.update(self.ds.parameters) ngroups = p["nGroups"] # Make sure rt_c_frac is at least as long as the number of levels in # the simulation. 
Pad with either 1 (default) when using level-dependent # reduced speed of light, otherwise pad with a constant value if len(p["rt_c_frac"]) == 1: pad_value = p["rt_c_frac"][0] else: pad_value = 1 rt_c_frac = np.pad( p["rt_c_frac"], (0, max(0, self.ds.max_level - len(["rt_c_frac"]) + 1)), constant_values=pad_value, ) rt_c = rt_c_frac * units.c / (p["unit_l"] / p["unit_t"]) dens_conv = (p["unit_np"] / rt_c).value / units.cm**3 ######################################## # Adding the fields in the hydro_* files def _temp_IR(field, data): rv = data["gas", "pres_IR"] / data["gas", "density"] rv *= mass_hydrogen_cgs / boltzmann_constant_cgs return rv self.add_field( ("gas", "temp_IR"), sampling_type="cell", function=_temp_IR, units=self.ds.unit_system["temperature"], ) def _species_density(field, data, species: str): return data["gas", f"{species}_fraction"] * data["gas", "density"] def _species_mass(field, data, species: str): return data["gas", f"{species}_density"] * data["index", "cell_volume"] for species in ["H_p1", "He_p1", "He_p2"]: self.add_field( ("gas", species + "_density"), sampling_type="cell", function=partial(_species_density, species=species), units=self.ds.unit_system["density"], ) self.add_field( ("gas", species + "_mass"), sampling_type="cell", function=partial(_species_mass, species=species), units=self.ds.unit_system["mass"], ) ######################################## # Adding the fields in the rt_ files def gen_pdens(igroup): def _photon_density(field, data): # The photon density depends on the possibly level-dependent conversion factor. ilvl = data["index", "grid_level"].astype("int64") dc = dens_conv[ilvl] rv = data["ramses-rt", f"Photon_density_{igroup + 1}"] * dc return rv return _photon_density for igroup in range(ngroups): self.add_field( ("rt", f"photon_density_{igroup + 1}"), sampling_type="cell", function=gen_pdens(igroup), units=self.ds.unit_system["number_density"], ) flux_conv = p["unit_pf"] / units.cm**2 / units.s flux_unit = ( 1 / self.ds.unit_system["time"] / self.ds.unit_system["length"] ** 2 ).units def gen_flux(key, igroup): def _photon_flux(field, data): rv = data["ramses-rt", f"Photon_flux_{key}_{igroup + 1}"] * flux_conv return rv return _photon_flux for key in "xyz": for igroup in range(ngroups): self.add_field( ("rt", f"photon_flux_{key}_{igroup + 1}"), sampling_type="cell", function=gen_flux(key, igroup), units=flux_unit, ) def create_cooling_fields(self) -> bool: "Create cooling fields from the cooling files. Return True if successful." num = os.path.basename(self.ds.parameter_filename).split(".")[0].split("_")[1] filename = "%s/cooling_%05i.out" % ( os.path.dirname(self.ds.parameter_filename), int(num), ) if not os.path.exists(filename): mylog.warning("This output has no cooling fields") return False # Function to create the cooling fields def _create_field(name, interp_object, unit): def _func(field, data): shape = data["gas", "temperature_over_mu"].shape # Ramses assumes a fraction X of Hydrogen within the non-metal gas. # It has to be corrected by metallicity. 
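                # Sketch of the assumed mass budget: a fraction (1 - Z) of
                # the gas is non-metal, a fraction X = 1 - _Y of which is
                # hydrogen, hence
                #   nH = (1 - _Y) * (1 - Z) * rho / m_H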
Z = data["gas", "metallicity"] nH = ((1 - _Y) * (1 - Z) * data["gas", "density"] / mh).to("cm**-3") if data.ds.self_shielding: boost = np.maximum(np.exp(-nH / 0.01), 1e-20) else: boost = 1 d = { "lognH": np.log10(nH / boost).ravel(), "logT": np.log10(data["gas", "temperature_over_mu"]).ravel(), } rv = interp_object(d).reshape(shape) if name[-1] != "mu": rv = 10 ** interp_object(d).reshape(shape) cool = data.ds.arr(rv, unit) if "metal" in name[-1].split("_"): cool = ( cool * data["gas", "metallicity"] / 0.02 ) # Ramses uses Zsolar=0.02 elif "compton" in name[-1].split("_"): cool = data.ds.arr(rv, unit + "/cm**3") cool = ( cool / data["gas", "number_density"] ) # Compton cooling/heating is written to file in erg/s return cool self.add_field(name=name, sampling_type="cell", function=_func, units=unit) # Load cooling files avals = {} tvals = {} with FortranFile(filename) as fd: n1, n2 = fd.read_vector("i") for ax in _cool_axes: avals[ax] = fd.read_vector("d") for i, (tname, unit) in enumerate(_cool_arrs): var = fd.read_vector("d") if var.size == n1 and i == 0: # If this case occurs, the cooling files were produced pre-2010 in # a format that is no longer supported mylog.warning( "This cooling file format is no longer supported. " "Cooling field loading skipped." ) return False if var.size == n1 * n2: tvals[tname] = { "data": var.reshape((n1, n2), order="F"), "unit": unit, } else: var = var.reshape((n1, n2, var.size // (n1 * n2)), order="F") for i in range(var.shape[-1]): tvals[_cool_species[i]] = { "data": var[:, :, i], "unit": "1/cm**3", } # Add the mu field first, as it is needed for the number density interp = BilinearFieldInterpolator( tvals["mu"]["data"], (avals["lognH"], avals["logT"]), ["lognH", "logT"], truncate=True, ) _create_field(("gas", "mu"), interp, "dimensionless") # Add the number density field, based on mu def _number_density(field, data): return data["gas", "density"] / mp / data["gas", "mu"] self.add_field( name=("gas", "number_density"), sampling_type="cell", function=_number_density, units=number_density_unit, ) # Add the cooling and heating fields, which need the number density field for key in tvals: if key != "mu": interp = BilinearFieldInterpolator( tvals[key]["data"], (avals["lognH"], avals["logT"]), ["lognH", "logT"], truncate=True, ) _create_field(("gas", key), interp, tvals[key]["unit"]) # Add total cooling and heating fields def _all_cool(field, data): return ( data["gas", "cooling_primordial"] + data["gas", "cooling_metal"] + data["gas", "cooling_compton"] ) def _all_heat(field, data): return data["gas", "heating_primordial"] + data["gas", "heating_compton"] self.add_field( name=("gas", "cooling_total"), sampling_type="cell", function=_all_cool, units=cooling_function_units, ) self.add_field( name=("gas", "heating_total"), sampling_type="cell", function=_all_heat, units=cooling_function_units, ) # Add net cooling fields def _net_cool(field, data): return data["gas", "cooling_total"] - data["gas", "heating_total"] self.add_field( name=("gas", "cooling_net"), sampling_type="cell", function=_net_cool, units=cooling_function_units, ) return True ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ramses/hilbert.py0000644000175100001770000001400414714401662017517 0ustar00runnerdockerfrom typing import Any, Optional import numpy as np from yt.data_objects.selection_objects.region import YTRegion from yt.geometry.selection_routines import ( bbox_intersects, fully_contains, ) from 
yt.utilities.lib.geometry_utils import get_hilbert_indices # State diagram to compute the hilbert curve _STATE_DIAGRAM = np.array( [ [ [1, 2, 0, 6, 11, 4, 5, 6, 10, 4, 7, 10], [0, 0, 0, 2, 4, 6, 4, 6, 2, 2, 4, 6], ], [ [2, 6, 9, 0, 11, 4, 7, 1, 3, 4, 2, 3], [1, 7, 3, 3, 3, 5, 7, 7, 5, 1, 5, 1], ], [ [3, 0, 10, 6, 0, 8, 5, 6, 1, 8, 11, 2], [3, 1, 7, 1, 5, 1, 3, 5, 3, 5, 7, 7], ], [ [2, 7, 9, 11, 7, 8, 3, 10, 1, 8, 2, 6], [2, 6, 4, 0, 2, 2, 0, 4, 4, 6, 6, 0], ], [ [4, 8, 1, 9, 5, 0, 1, 9, 10, 2, 7, 10], [7, 3, 1, 5, 7, 7, 5, 1, 1, 3, 3, 5], ], [ [5, 8, 1, 0, 9, 6, 1, 4, 3, 7, 5, 3], [6, 4, 2, 4, 0, 4, 6, 0, 6, 0, 2, 2], ], [ [3, 0, 11, 9, 0, 10, 11, 9, 5, 2, 8, 4], [4, 2, 6, 6, 6, 0, 2, 2, 0, 4, 0, 4], ], [ [5, 7, 11, 8, 7, 6, 11, 10, 9, 3, 5, 4], [5, 5, 5, 7, 1, 3, 1, 3, 7, 7, 1, 3], ], ] ) def hilbert3d( ijk: "np.ndarray[Any, np.dtype[np.int64]]", bit_length: int ) -> "np.ndarray[Any, np.dtype[np.float64]]": """Compute the order using Hilbert indexing. Arguments --------- ijk : (N, ndim) integer array The positions bit_length : integer The bit_length for the indexing. """ ijk = np.atleast_2d(ijk) # A note here: there is a freedom in the way hilbert indices are # being computed (should it be xyz or yzx or zxy etc.) # and the yt convention is not the same as the RAMSES one. return get_hilbert_indices(bit_length, ijk[:, [1, 2, 0]].astype(np.int64)) def get_intersecting_cpus( ds, region: YTRegion, LE: Optional["np.ndarray[Any, np.dtype[np.float64]]"] = None, dx: float = 1.0, dx_cond: float | None = None, factor: float = 4.0, bound_keys: Optional["np.ndarray[Any, np.dtype[np.float64]]"] = None, ) -> set[int]: """ Find the subset of CPUs that intersect the bbox in a recursive fashion. """ if LE is None: LE = np.array([0, 0, 0], dtype="d") if dx_cond is None: bbox = region.get_bbox() dx_cond = float((bbox[1] - bbox[0]).min().to("code_length")) if bound_keys is None: ncpu = ds.parameters["ncpu"] bound_keys = np.empty(ncpu + 1, dtype="float64") bound_keys[:ncpu] = [ds.hilbert_indices[icpu + 1][0] for icpu in range(ncpu)] bound_keys[ncpu] = ds.hilbert_indices[ncpu][1] # If the current dx is smaller than the smallest size of the bbox if dx < dx_cond / factor: # Finish recursion return get_cpu_list_cuboid(ds, np.asarray([LE, LE + dx]), bound_keys) # If the current cell is fully within the selected region, stop recursion if fully_contains(region.selector, LE, dx): return get_cpu_list_cuboid(ds, np.asarray([LE, LE + dx]), bound_keys) dx /= 2 ret = set() # Compute intersection of the eight subcubes with the bbox and recurse. for i in range(2): for j in range(2): for k in range(2): LE_new = LE + np.array([i, j, k], dtype="d") * dx if bbox_intersects(region.selector, LE_new, dx): ret.update( get_intersecting_cpus( ds, region, LE_new, dx, dx_cond, factor, bound_keys ) ) return ret def get_cpu_list_cuboid( ds, X: "np.ndarray[Any, np.dtype[np.float64]]", bound_keys: "np.ndarray[Any, np.dtype[np.float64]]", ) -> set[int]: """ Return the list of the CPU intersecting with the cuboid containing the positions. Note that it will be 0-indexed. Parameters ---------- ds : Dataset The dataset containing the information X : (N, ndim) float array An array containing positions. They should be between 0 and 1. 
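    Returns
    -------
    cpu_read : set of int
        The 0-indexed CPU domains whose Hilbert key range intersects the
        cuboid enclosing the positions.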
""" X = np.atleast_2d(X) if X.shape[1] != 3: raise NotImplementedError("This function is only implemented in 3D.") levelmax = ds.parameters["levelmax"] ndim = ds.parameters["ndim"] xmin, ymin, zmin = X.min(axis=0) xmax, ymax, zmax = X.max(axis=0) dmax = max(xmax - xmin, ymax - ymin, zmax - zmin) ilevel = int(np.ceil(-np.log2(dmax))) lmin = ilevel bit_length = lmin - 1 maxdom = 2**bit_length imin, imax, jmin, jmax, kmin, kmax = 0, 0, 0, 0, 0, 0 if bit_length > 0: imin = int(xmin * maxdom) imax = imin + 1 jmin = int(ymin * maxdom) jmax = jmin + 1 kmin = int(zmin * maxdom) kmax = kmin + 1 dkey = (2 ** (levelmax + 1) / maxdom) ** ndim ndom = 1 if bit_length > 0: ndom = 8 ijkdom = idom, jdom, kdom = np.empty((3, 8), dtype=np.int64) idom[0], idom[1] = imin, imax idom[2], idom[3] = imin, imax idom[4], idom[5] = imin, imax idom[6], idom[7] = imin, imax jdom[0], jdom[1] = jmin, jmin jdom[2], jdom[3] = jmax, jmax jdom[4], jdom[5] = jmin, jmin jdom[6], jdom[7] = jmax, jmax kdom[0], kdom[1] = kmin, kmin kdom[2], kdom[3] = kmin, kmin kdom[4], kdom[5] = kmax, kmax kdom[6], kdom[7] = kmax, kmax bounding_min, bounding_max = np.zeros(ndom), np.zeros(ndom) if bit_length > 0: order_min = hilbert3d(ijkdom.T, bit_length) for i in range(ndom): if bit_length > 0: omin = order_min[i] else: omin = 0 bounding_min[i] = omin * dkey bounding_max[i] = (omin + 1) * dkey cpu_min = np.searchsorted(bound_keys, bounding_min, side="right") - 1 cpu_max = np.searchsorted(bound_keys, bounding_max, side="right") cpu_read: set[int] = set() for i in range(ndom): cpu_read.update(range(cpu_min[i], cpu_max[i])) return cpu_read ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ramses/io.py0000644000175100001770000003726114714401662016507 0ustar00runnerdockerfrom collections import defaultdict from functools import lru_cache from typing import TYPE_CHECKING, Union import numpy as np from unyt import unyt_array from yt._maintenance.deprecation import issue_deprecation_warning from yt.frontends.ramses.definitions import VAR_DESC_RE, VERSION_RE from yt.utilities.cython_fortran_utils import FortranFile from yt.utilities.exceptions import ( YTFieldTypeNotFound, YTFileNotParseable, YTParticleOutputFormatNotImplemented, ) from yt.utilities.io_handler import BaseIOHandler from yt.utilities.logger import ytLogger as mylog if TYPE_CHECKING: import os def convert_ramses_ages(ds, conformal_ages): issue_deprecation_warning( msg=( "The `convert_ramses_ages' function is deprecated. It should be replaced " "by the `convert_ramses_conformal_time_to_physical_age' function." ), stacklevel=3, since="4.0.3", ) return convert_ramses_conformal_time_to_physical_time(ds, conformal_ages) def convert_ramses_conformal_time_to_physical_time( ds, conformal_time: np.ndarray ) -> unyt_array: """ Convert conformal times (as defined in RAMSES) to physical times. 
Arguments --------- ds : RAMSESDataset The RAMSES dataset to use for the conversion conformal_time : np.ndarray The conformal time as read from disk Returns ------- physical_age : np.ndarray The physical age in code units """ h0 = ds.hubble_constant tau_bins = ds.tau_frw * h0 t_bins = ds.t_frw min_time = 0 max_time = ds.current_time.to(t_bins.units) return ds.arr( np.clip( np.interp( conformal_time, tau_bins, t_bins.value, right=max_time, left=min_time, ), min_time, max_time.value, ), t_bins.units, ) def _ramses_particle_binary_file_handler(particle_handler, subset, fields, count): """General file handler for binary file, called by _read_particle_subset Parameters ---------- particle : ``ParticleFileHandler`` the particle class we want to read subset: ``RAMSESDomainSubset`` A RAMSES domain subset object fields: list of tuple The fields to read count: integer The number of elements to count """ tr = {} ds = subset.domain.ds foffsets = particle_handler.field_offsets fname = particle_handler.fname data_types = particle_handler.field_types with FortranFile(fname) as fd: # We do *all* conversion into boxlen here. # This means that no other conversions need to be applied to convert # positions into the same domain as the octs themselves. for field in sorted(fields, key=lambda a: foffsets[a]): if count == 0: tr[field] = np.empty(0, dtype=data_types[field]) continue # Sentinel value: -1 means we don't have this field if foffsets[field] == -1: tr[field] = np.empty(count, dtype=data_types[field]) else: fd.seek(foffsets[field]) dt = data_types[field] tr[field] = fd.read_vector(dt) if field[1].startswith("particle_position"): np.divide(tr[field], ds["boxlen"], tr[field]) # Hand over to field handler for special cases, like particle_birth_times particle_handler.handle_field(field, tr) return tr def _ramses_particle_csv_file_handler(particle_handler, subset, fields, count): """General file handler for csv file, called by _read_particle_subset Parameters ---------- particle: ``ParticleFileHandler`` the particle class we want to read subset: ``RAMSESDomainSubset`` A RAMSES domain subset object fields: list of tuple The fields to read count: integer The number of elements to count """ from yt.utilities.on_demand_imports import _pandas as pd tr = {} ds = subset.domain.ds foffsets = particle_handler.field_offsets fname = particle_handler.fname list_field_ind = [ (field, foffsets[field]) for field in sorted(fields, key=lambda a: foffsets[a]) ] # read only selected fields dat = pd.read_csv( fname, delimiter=",", usecols=[ind for _field, ind in list_field_ind], skiprows=2, header=None, ) for field, ind in list_field_ind: tr[field] = dat[ind].to_numpy() if field[1].startswith("particle_position"): np.divide(tr[field], ds["boxlen"], tr[field]) particle_handler.handle_field(field, tr) return tr class IOHandlerRAMSES(BaseIOHandler): _dataset_type = "ramses" def _read_fluid_selection(self, chunks, selector, fields, size): tr = defaultdict(list) # Set of field types ftypes = {f[0] for f in fields} for chunk in chunks: # Gather fields by type to minimize i/o operations for ft in ftypes: # Get all the fields of the same type field_subs = list(filter(lambda f, ft=ft: f[0] == ft, fields)) # Loop over subsets for subset in chunk.objs: fname = None for fh in subset.domain.field_handlers: if fh.ftype == ft: file_handler = fh fname = fh.fname break if fname is None: raise YTFieldTypeNotFound(ft) # Now we read the entire thing with FortranFile(fname) as fd: # This contains the boundary information, so we skim through # and 
pick off the right vectors rv = subset.fill(fd, field_subs, selector, file_handler) for ft, f in field_subs: d = rv.pop(f) if d.size == 0: continue mylog.debug( "Filling %s with %s (%0.3e %0.3e) (%s zones)", f, d.size, d.min(), d.max(), d.size, ) tr[ft, f].append(d) d = {} for field in fields: tmp = tr.pop(field, None) d[field] = np.concatenate(tmp) if tmp else np.empty(0, dtype="d") return d def _read_particle_coords(self, chunks, ptf): pn = "particle_position_%s" fields = [ (ptype, f"particle_position_{ax}") for ptype, field_list in ptf.items() for ax in "xyz" ] for chunk in chunks: for subset in chunk.objs: rv = self._read_particle_subset(subset, fields) for ptype in sorted(ptf): yield ( ptype, ( rv[ptype, pn % "x"], rv[ptype, pn % "y"], rv[ptype, pn % "z"], ), 0.0, ) def _read_particle_fields(self, chunks, ptf, selector): pn = "particle_position_%s" chunks = list(chunks) fields = [ (ptype, fname) for ptype, field_list in ptf.items() for fname in field_list ] for ptype, field_list in sorted(ptf.items()): for ax in "xyz": if pn % ax not in field_list: fields.append((ptype, pn % ax)) if ptype == "sink_csv": subset = chunks[0].objs[0] rv = self._read_particle_subset(subset, fields) for ptype, field_list in sorted(ptf.items()): x, y, z = (np.asarray(rv[ptype, pn % ax], "=f8") for ax in "xyz") mask = selector.select_points(x, y, z, 0.0) if mask is None: mask = [] for field in field_list: data = np.asarray(rv.pop((ptype, field))[mask], "=f8") yield (ptype, field), data else: for chunk in chunks: for subset in chunk.objs: rv = self._read_particle_subset(subset, fields) for ptype, field_list in sorted(ptf.items()): x, y, z = ( np.asarray(rv[ptype, pn % ax], "=f8") for ax in "xyz" ) mask = selector.select_points(x, y, z, 0.0) if mask is None: mask = [] for field in field_list: data = np.asarray(rv.pop((ptype, field))[mask], "=f8") yield (ptype, field), data def _read_particle_subset(self, subset, fields): """Read the particle files.""" tr = {} # Sequential read depending on particle type for ptype in {f[0] for f in fields}: # Select relevant files subs_fields = filter(lambda f, ptype=ptype: f[0] == ptype, fields) ok = False for ph in subset.domain.particle_handlers: if ph.ptype == ptype: ok = True count = ph.local_particle_count break if not ok: raise YTFieldTypeNotFound(ptype) tr.update(ph.reader(subset, subs_fields, count)) return tr @lru_cache def _read_part_binary_file_descriptor(fname: Union[str, "os.PathLike[str]"]): """ Read a file descriptor and returns the array of the fields found. 
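    A version-1 descriptor is expected to look like (values illustrative;
    only the "ivar, variable_name, variable_type" triplets are parsed):

        # version:  1
        # ivar, variable_name, variable_type
          1, position_x, d
          2, position_y, d
          3, position_z, d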
""" # Mapping mapping_list = [ ("position_x", "particle_position_x"), ("position_y", "particle_position_y"), ("position_z", "particle_position_z"), ("velocity_x", "particle_velocity_x"), ("velocity_y", "particle_velocity_y"), ("velocity_z", "particle_velocity_z"), ("mass", "particle_mass"), ("identity", "particle_identity"), ("levelp", "particle_level"), ("family", "particle_family"), ("tag", "particle_tag"), ] # Convert to dictionary mapping = dict(mapping_list) with open(fname) as f: line = f.readline() tmp = VERSION_RE.match(line) mylog.debug("Reading part file descriptor %s.", fname) if not tmp: raise YTParticleOutputFormatNotImplemented() version = int(tmp.group(1)) if version == 1: # Skip one line (containing the headers) line = f.readline() fields = [] for i, line in enumerate(f.readlines()): tmp = VAR_DESC_RE.match(line) if not tmp: raise YTFileNotParseable(fname, i + 1) # ivar = tmp.group(1) varname = tmp.group(2) dtype = tmp.group(3) if varname in mapping: varname = mapping[varname] else: varname = f"particle_{varname}" fields.append((varname, dtype)) else: raise YTParticleOutputFormatNotImplemented() return fields @lru_cache def _read_part_csv_file_descriptor(fname: Union[str, "os.PathLike[str]"]): """ Read the file from the csv sink particles output. """ from yt.utilities.on_demand_imports import _pandas as pd # Fields name from the default csv RAMSES sink algorithm in the yt default convention mapping = { " # id": "particle_identifier", "msink": "particle_mass", "x": "particle_position_x", "y": "particle_position_y", "z": "particle_position_z", "vx": "particle_velocity_x", "vy": "particle_velocity_y", "vz": "particle_velocity_z", "lx": "particle_angular_momentum_x", "ly": "particle_angular_momentum_y", "lz": "particle_angular_momentum_z", "tform": "particle_formation_time", "acc_rate": "particle_accretion_rate", "del_mass": "particle_delta_mass", "rho_gas": "particle_rho_gas", "cs**2": "particle_sound_speed", "etherm": "particle_etherm", "vx_gas": "particle_velocity_x_gas", "vy_gas": "particle_velocity_y_gas", "vz_gas": "particle_velocity_z_gas", "mbh": "particle_mass_bh", "level": "particle_level", "rsink_star": "particle_radius_star", } # read the all file to get the number of particle dat = pd.read_csv(fname, delimiter=",") fields = [] local_particle_count = len(dat) for varname in dat.columns: if varname in mapping: varname = mapping[varname] else: varname = f"particle_{varname}" fields.append(varname) return fields, local_particle_count @lru_cache def _read_fluid_file_descriptor(fname: Union[str, "os.PathLike[str]"], *, prefix: str): """ Read a file descriptor and returns the array of the fields found. """ # Mapping mapping_list = [ ("density", "Density"), ("velocity_x", "x-velocity"), ("velocity_y", "y-velocity"), ("velocity_z", "z-velocity"), ("pressure", "Pressure"), ("metallicity", "Metallicity"), # Add mapping for ionized species # Note: we expect internally that these names use the HII, HeII, # HeIII, ... convention for historical reasons. So we need to map # the names read from `hydro_file_descriptor.txt` to this # convention. # This will create fields like ("ramses", "HII") which are mapped # to ("gas", "H_p1_fraction") in fields.py ("H_p1_fraction", "HII"), ("He_p1_fraction", "HeII"), ("He_p2_fraction", "HeIII"), # Photon fluxes / densities are stored as `photon_density_XX`, so # only 100 photon bands can be stored with this format. Let's be # conservative and support up to 100 bands. 
*[(f"photon_density_{i:02d}", f"Photon_density_{i:d}") for i in range(100)], *[ (f"photon_flux_{i:02d}_{dim}", f"Photon_flux_{dim}_{i:d}") for i in range(100) for dim in "xyz" ], ] # Add mapping for magnetic fields mapping_list += [ (key, key) for key in ( f"B_{dim}_{side}" for side in ["left", "right"] for dim in ["x", "y", "z"] ) ] # Convert to dictionary mapping = dict(mapping_list) with open(fname) as f: line = f.readline() tmp = VERSION_RE.match(line) mylog.debug("Reading fluid file descriptor %s.", fname) if not tmp: return [] version = int(tmp.group(1)) if version == 1: # Skip one line (containing the headers) line = f.readline() fields = [] for i, line in enumerate(f.readlines()): tmp = VAR_DESC_RE.match(line) if not tmp: raise YTFileNotParseable(fname, i + 1) # ivar = tmp.group(1) varname = tmp.group(2) dtype = tmp.group(3) if varname in mapping: varname = mapping[varname] else: varname = f"{prefix}_{varname}" fields.append((varname, dtype)) else: mylog.error("Version %s", version) raise YTParticleOutputFormatNotImplemented() return fields ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ramses/io_utils.pyx0000644000175100001770000002210514714401662020106 0ustar00runnerdocker# distutils: libraries = STD_LIBS # distutils: include_dirs = LIB_DIR cimport cython from libc.stdio cimport SEEK_CUR, SEEK_SET cimport numpy as np import numpy as np from yt.geometry.oct_container cimport RAMSESOctreeContainer from yt.utilities.cython_fortran_utils cimport FortranFile from yt.utilities.exceptions import YTIllDefinedAMRData ctypedef np.int32_t INT32_t ctypedef np.int64_t INT64_t ctypedef np.float64_t DOUBLE_t cdef int INT32_SIZE = sizeof(np.int32_t) cdef int INT64_SIZE = sizeof(np.int64_t) cdef int DOUBLE_SIZE = sizeof(np.float64_t) cdef inline int skip_len(int Nskip, int record_len) noexcept nogil: return Nskip * (record_len * DOUBLE_SIZE + INT64_SIZE) @cython.cpow(True) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.nonecheck(False) def read_amr(FortranFile f, dict headers, np.ndarray[np.int64_t, ndim=1] ngridbound, INT64_t min_level, RAMSESOctreeContainer oct_handler): cdef INT64_t ncpu, nboundary, max_level, nlevelmax, ncpu_and_bound cdef DOUBLE_t nx, ny, nz cdef INT64_t ilevel, icpu, n, ndim, jump_len cdef INT32_t ng cdef np.ndarray[np.int32_t, ndim=2] numbl cdef np.ndarray[np.float64_t, ndim=2] pos cdef int i # The ordering is very important here, as we'll write directly into the memory # address the content of the files. cdef np.float64_t[::1, :] pos_view ndim = headers['ndim'] numbl = headers['numbl'] nboundary = headers['nboundary'] nx, ny, nz = (((i-1.0)/2.0) for i in headers['nx']) nlevelmax = headers['nlevelmax'] ncpu = headers['ncpu'] ncpu_and_bound = nboundary + ncpu # Allocate more memory if required pos = np.empty((max(numbl.max(), ngridbound.max()), 3), dtype="d", order="F") pos_view = pos # Compute number of fields to skip. 
This should be 31 in 3 dimensions jump_len = (1 # father index + 2*ndim # neighbor index + 2**ndim # son index + 2**ndim # cpu map + 2**ndim # refinement map ) # Initialize values max_level = 0 cdef int record_len for ilevel in range(nlevelmax): for icpu in range(ncpu_and_bound): if icpu < ncpu: ng = numbl[ilevel, icpu] else: ng = ngridbound[icpu - ncpu + nboundary*ilevel] if ng == 0: continue # Skip grid index, 'next' and 'prev' arrays record_len = (ng * INT32_SIZE + INT64_SIZE) f.seek(record_len * 3, SEEK_CUR) f.read_vector_inplace("d", &pos_view[0, 0]) f.read_vector_inplace("d", &pos_view[0, 1]) f.read_vector_inplace("d", &pos_view[0, 2]) for i in range(ng): pos_view[i, 0] -= nx for i in range(ng): pos_view[i, 1] -= ny for i in range(ng): pos_view[i, 2] -= nz # Skip father, neighbor, son, cpu map and refinement map f.seek(record_len * jump_len, SEEK_CUR) # Note that we're adding *grids*, not individual cells. if ilevel >= min_level: n = oct_handler.add(icpu + 1, ilevel - min_level, pos[:ng, :], count_boundary = 1) if n > 0: max_level = max(ilevel - min_level, max_level) return max_level @cython.cpow(True) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.nonecheck(False) cpdef read_offset(FortranFile f, INT64_t min_level, INT64_t domain_id, INT64_t nvar, dict headers, int Nskip): cdef np.ndarray[np.int64_t, ndim=2] offset, level_count cdef INT64_t ndim, twotondim, nlevelmax, n_levels, nboundary, ncpu, ncpu_and_bound cdef INT64_t ilevel, icpu cdef INT32_t file_ilevel, file_ncache ndim = headers['ndim'] nboundary = headers['nboundary'] nlevelmax = headers['nlevelmax'] n_levels = nlevelmax - min_level ncpu = headers['ncpu'] ncpu_and_bound = nboundary + ncpu twotondim = 2**ndim if Nskip == -1: Nskip = twotondim * nvar # It goes: level, CPU, 8-variable (1 oct) offset = np.full((ncpu_and_bound, n_levels), -1, dtype=np.int64) level_count = np.zeros((ncpu_and_bound, n_levels), dtype=np.int64) cdef np.int64_t[:, ::1] level_count_view = level_count cdef np.int64_t[:, ::1] offset_view = offset for ilevel in range(nlevelmax): for icpu in range(ncpu_and_bound): file_ilevel = f.read_int() file_ncache = f.read_int() if file_ncache == 0: continue if file_ilevel != ilevel+1: raise YTIllDefinedAMRData( 'Cannot read offsets in file %s. 
The level read ' 'from data (%s) is not coherent with the expected (%s)', f.name, file_ilevel, ilevel) if ilevel >= min_level: offset_view[icpu, ilevel - min_level] = f.tell() level_count_view[icpu, ilevel - min_level] = file_ncache f.seek(skip_len(Nskip, file_ncache), SEEK_CUR) return offset, level_count @cython.cpow(True) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.nonecheck(False) def fill_hydro(FortranFile f, np.ndarray[np.int64_t, ndim=2] offsets, np.ndarray[np.int64_t, ndim=2] level_count, list cpu_enumerator, np.ndarray[np.uint8_t, ndim=1] level_inds, np.ndarray[np.uint8_t, ndim=1] cell_inds, np.ndarray[np.int64_t, ndim=1] file_inds, INT64_t ndim, list all_fields, list fields, dict tr, RAMSESOctreeContainer oct_handler, np.ndarray[np.int32_t, ndim=1] domain_inds=np.array([], dtype='int32')): cdef INT64_t offset cdef dict tmp cdef str field cdef INT64_t twotondim cdef int ilevel, icpu, nlevels, nc, ncpu_selected, nfields_selected cdef int i, j, ii twotondim = 2**ndim nfields_selected = len(fields) nlevels = offsets.shape[1] ncpu_selected = len(cpu_enumerator) cdef np.int64_t[::1] cpu_list = np.asarray(cpu_enumerator, dtype=np.int64) cdef np.int64_t[::1] jumps = np.zeros(nfields_selected + 1, dtype=np.int64) cdef int jump_len, Ncells cdef np.uint8_t[::1] mask_level = np.zeros(nlevels, dtype=np.uint8) # First, make sure fields are in the same order fields = sorted(fields, key=lambda f: all_fields.index(f)) # The ordering is very important here, as we'll write directly into the memory # address the content of the files. cdef np.float64_t[::1, :, :] buffer jump_len = 0 j = 0 for i, field in enumerate(all_fields): if field in fields: jumps[j] = jump_len j += 1 jump_len = 0 else: jump_len += 1 jumps[j] = jump_len cdef int first_field_index = jumps[0] buffer = np.empty((level_count.max(), twotondim, nfields_selected), dtype="float64", order='F') # Precompute which levels we need to read Ncells = len(level_inds) for i in range(Ncells): mask_level[level_inds[i]] |= 1 # Loop over levels for ilevel in range(nlevels): if mask_level[ilevel] == 0: continue # Loop over cpu domains. In most cases, we'll only read the CPU corresponding to the domain. 
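        # Seek arithmetic used below: skip_len(n, nc) advances past n
        # Fortran records of nc doubles each, i.e. n * (nc * 8 + 8) bytes
        # (payload plus 8 bytes of record markers), so unselected fields
        # are skipped wholesale.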
for ii in range(ncpu_selected): icpu = cpu_list[ii] nc = level_count[icpu, ilevel] if nc == 0: continue offset = offsets[icpu, ilevel] if offset == -1: continue f.seek(offset + skip_len(first_field_index, nc), SEEK_SET) # We have already skipped the first fields (if any) # so we "rewind" (this will cancel the first seek) jump_len = -first_field_index for i in range(twotondim): # Read the selected fields for j in range(nfields_selected): jump_len += jumps[j] if jump_len > 0: f.seek(skip_len(jump_len, nc), SEEK_CUR) jump_len = 0 f.read_vector_inplace('d', &buffer[0, i, j]) jump_len += jumps[nfields_selected] # In principle, we may be left with some fields to skip # but since we're doing an absolute seek at the beginning of # the loop on CPUs, we can spare one seek here ## if jump_len > 0: ## f.seek(skip_len(jump_len, nc), SEEK_CUR) # Alias buffer into dictionary tmp = {} for i, field in enumerate(fields): tmp[field] = buffer[:, :, i] if ncpu_selected > 1: oct_handler.fill_level_with_domain( ilevel, level_inds, cell_inds, file_inds, domain_inds, tr, tmp, domain=icpu+1) else: oct_handler.fill_level( ilevel, level_inds, cell_inds, file_inds, tr, tmp) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ramses/particle_handlers.py0000644000175100001770000004040414714401662021554 0ustar00runnerdockerimport abc import os import struct from collections.abc import Callable from functools import cached_property from itertools import chain, count from typing import TYPE_CHECKING, Any import numpy as np from yt._typing import FieldKey from yt.config import ytcfg from yt.funcs import mylog from yt.utilities.cython_fortran_utils import FortranFile from .field_handlers import HandlerMixin from .io import ( _ramses_particle_binary_file_handler, _ramses_particle_csv_file_handler, _read_part_binary_file_descriptor, _read_part_csv_file_descriptor, convert_ramses_conformal_time_to_physical_time, ) if TYPE_CHECKING: from yt.frontends.ramses.data_structures import RAMSESDomainSubset PARTICLE_HANDLERS: set[type["ParticleFileHandler"]] = set() def get_particle_handlers(): return PARTICLE_HANDLERS def register_particle_handler(ph): PARTICLE_HANDLERS.add(ph) class ParticleFileHandler(abc.ABC, HandlerMixin): """ Abstract class to handle particles in RAMSES. Each instance represents a single file (one domain). To add support to a new particle file, inherit from this class and implement all functions containing a `NotImplementedError`. See `SinkParticleFileHandler` for an example implementation.""" _file_type = "particle" ## These properties are static # The name to give to the particle type ptype: str # The name of the file(s). fname: str # The name of the file descriptor (if any) file_descriptor: str | None = None # The attributes of the header attrs: tuple[tuple[str, int, str], ...] # A list of tuple containing the field name and its type known_fields: list[FieldKey] # The function to employ to read the file reader: Callable[ ["ParticleFileHandler", "RAMSESDomainSubset", list[tuple[str, str]], int], dict[tuple[str, str], np.ndarray], ] # Name of the config section (if any) config_field: str | None = None ## These properties are computed dynamically # Mapping from field to offset in file _field_offsets: dict[tuple[str, str], int] # Mapping from field to the type of the data (float, integer, ...) 
_field_types: dict[tuple[str, str], str] # Number of particle in the domain _local_particle_count: int # The header of the file _header: dict[str, Any] def __init_subclass__(cls, *args, **kwargs): """ Registers subclasses at creation. """ super().__init_subclass__(*args, **kwargs) if cls.ptype is not None: register_particle_handler(cls) cls._unique_registry = {} return cls def __init__(self, domain): self.setup_handler(domain) # Attempt to read the list of fields from the config file if self.config_field and ytcfg.has_section(self.config_field): cfg = ytcfg.get(self.config_field, "fields") known_fields = [] for c in (_.strip() for _ in cfg.split("\n") if _.strip() != ""): field, field_type = (_.strip() for _ in c.split(",")) known_fields.append((field, field_type)) self.known_fields = known_fields @abc.abstractmethod def read_header(self): """ This function is called once per file. It should: * read the header of the file and store any relevant information * detect the fields in the file * compute the offsets (location in the file) of each field It is in charge of setting `self.field_offsets` and `self.field_types`. * `_field_offsets`: dictionary: tuple -> integer A dictionary that maps `(type, field_name)` to their location in the file (integer) * `_field_types`: dictionary: tuple -> character A dictionary that maps `(type, field_name)` to their type (character), following Python's struct convention. """ pass @property def field_offsets(self) -> dict[tuple[str, str], int]: if hasattr(self, "_field_offsets"): return self._field_offsets self.read_header() return self._field_offsets @property def field_types(self) -> dict[tuple[str, str], str]: if hasattr(self, "_field_types"): return self._field_types self.read_header() return self._field_types @property def local_particle_count(self) -> int: if hasattr(self, "_local_particle_count"): return self._local_particle_count self.read_header() return self._local_particle_count @property def header(self) -> dict[str, Any]: if hasattr(self, "_header"): return self._header self.read_header() return self._header def handle_field( self, field: tuple[str, str], data_dict: dict[tuple[str, str], np.ndarray] ): """ This function allows custom code to be called to handle special cases, such as the particle birth time. It updates the `data_dict` dictionary with the new data. Parameters ---------- field : tuple[str, str] The field name. data_dict : dict[tuple[str, str], np.ndarray] A dictionary containing the data. By default, this function does nothing. 
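        A subclass can override this to post-process a column in place. As a
        sketch (``convert`` is a hypothetical conversion function)::

            def handle_field(self, field, data_dict):
                if field == (self.ptype, "particle_birth_time"):
                    data_dict[field] = convert(data_dict[field])

        See ``DefaultParticleFileHandler.handle_field`` for a concrete
        implementation of this pattern.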
""" pass _default_dtypes: dict[int, str] = { 1: "c", # char 2: "h", # short, 4: "f", # float 8: "d", # double } class DefaultParticleFileHandler(ParticleFileHandler): ptype = "io" fname = "part_{iout:05d}.out{icpu:05d}" file_descriptor = "part_file_descriptor.txt" config_field = "ramses-particles" reader = _ramses_particle_binary_file_handler attrs = ( ("ncpu", 1, "i"), ("ndim", 1, "i"), ("npart", 1, "i"), ("localseed", -1, "i"), ("nstar_tot", 1, "i"), ("mstar_tot", 1, "d"), ("mstar_lost", 1, "d"), ("nsink", 1, "i"), ) known_fields = [ ("particle_position_x", "d"), ("particle_position_y", "d"), ("particle_position_z", "d"), ("particle_velocity_x", "d"), ("particle_velocity_y", "d"), ("particle_velocity_z", "d"), ("particle_mass", "d"), ("particle_identity", "i"), ("particle_refinement_level", "i"), ] def read_header(self): if not self.exists: self._field_offsets = {} self._field_types = {} self._local_particle_count = 0 self._header = {} return flen = os.path.getsize(self.fname) with FortranFile(self.fname) as fd: hvals = dict(fd.read_attrs(self.attrs)) particle_field_pos = fd.tell() self._header = hvals self._local_particle_count = hvals["npart"] extra_particle_fields = self.ds._extra_particle_fields if self.has_descriptor: particle_fields = _read_part_binary_file_descriptor(self.file_descriptor) else: particle_fields = list(self.known_fields) if extra_particle_fields is not None: particle_fields += extra_particle_fields if ( hvals["nstar_tot"] > 0 and extra_particle_fields is not None and ("particle_birth_time", "d") not in particle_fields ): particle_fields += [ ("particle_birth_time", "d"), ("particle_metallicity", "d"), ] def build_iterator(): return chain( particle_fields, ((f"particle_extra_field_{i}", "d") for i in count(1)), ) field_offsets = {} _pfields = {} ptype = self.ptype blockLen = struct.calcsize("i") * 2 particle_fields_iterator = build_iterator() ipos = particle_field_pos while ipos < flen: field, vtype = next(particle_fields_iterator) field_offsets[ptype, field] = ipos _pfields[ptype, field] = vtype ipos += blockLen + struct.calcsize(vtype) * hvals["npart"] if ipos != flen: particle_fields_iterator = build_iterator() with FortranFile(self.fname) as fd: fd.seek(particle_field_pos) ipos = particle_field_pos while ipos < flen: field, vtype = next(particle_fields_iterator) old_pos = fd.tell() field_offsets[ptype, field] = old_pos fd.skip(1) ipos = fd.tell() record_len = ipos - old_pos - blockLen exp_len = struct.calcsize(vtype) * hvals["npart"] if record_len != exp_len: # Guess record vtype from length nbytes = record_len // hvals["npart"] # NOTE: in some simulations (e.g. New Horizon), the record length is not # a multiple of 1, 2, 4 or 8. In this case, fallback onto assuming # double precision. vtype = _default_dtypes.get(nbytes, "d") mylog.warning( "Supposed that `%s` has type %s given record size", field, np.dtype(vtype), ) _pfields[ptype, field] = vtype if field.startswith("particle_extra_field_"): iextra = int(field.split("_")[-1]) else: iextra = 0 if iextra > 0 and not self.ds._warned_extra_fields["io"]: mylog.warning( "Detected %s extra particle fields assuming kind " "`double`. 
Consider using the `extra_particle_fields` " "keyword argument if you have unexpected behavior.", iextra, ) self.ds._warned_extra_fields["io"] = True if ( self.ds.use_conformal_time and (ptype, "particle_birth_time") in field_offsets ): field_offsets[ptype, "conformal_birth_time"] = field_offsets[ ptype, "particle_birth_time" ] _pfields[ptype, "conformal_birth_time"] = _pfields[ ptype, "particle_birth_time" ] self._field_offsets = field_offsets self._field_types = _pfields @property def birth_file_fname(self): basename = os.path.abspath(self.ds.root_folder) iout = int( os.path.basename(self.ds.parameter_filename).split(".")[0].split("_")[1] ) icpu = self.domain_id fname = os.path.join(basename, f"birth_{iout:05d}.out{icpu:05d}") return fname @cached_property def has_birth_file(self): return os.path.exists(self.birth_file_fname) def handle_field( self, field: tuple[str, str], data_dict: dict[tuple[str, str], np.ndarray] ): _ptype, fname = field if not (fname == "particle_birth_time" and self.ds.cosmological_simulation): return # If the birth files exist, read from them if self.has_birth_file: with FortranFile(self.birth_file_fname) as fd: # Note: values are written in Gyr, so we need to convert back to code_time data_dict[field] = ( self.ds.arr(fd.read_vector("d"), "Gyr").to("code_time").v ) return # Otherwise, convert conformal time to physical age ds = self.ds conformal_time = data_dict[field] physical_time = ( convert_ramses_conformal_time_to_physical_time(ds, conformal_time) .to("code_time") .v ) # arbitrarily set particles with zero conformal_age to zero # particle_age. This corresponds to DM particles. data_dict[field] = np.where(conformal_time != 0, physical_time, 0) class SinkParticleFileHandler(ParticleFileHandler): """Handle sink files""" ptype = "sink" fname = "sink_{iout:05d}.out{icpu:05d}" file_descriptor = "sink_file_descriptor.txt" config_field = "ramses-sink-particles" reader = _ramses_particle_binary_file_handler attrs = (("nsink", 1, "i"), ("nindsink", 1, "i")) known_fields = [ ("particle_identifier", "i"), ("particle_mass", "d"), ("particle_position_x", "d"), ("particle_position_y", "d"), ("particle_position_z", "d"), ("particle_velocity_x", "d"), ("particle_velocity_y", "d"), ("particle_velocity_z", "d"), ("particle_birth_time", "d"), ("BH_real_accretion", "d"), ("BH_bondi_accretion", "d"), ("BH_eddington_accretion", "d"), ("BH_esave", "d"), ("gas_spin_x", "d"), ("gas_spin_y", "d"), ("gas_spin_z", "d"), ("BH_spin_x", "d"), ("BH_spin_y", "d"), ("BH_spin_z", "d"), ("BH_spin", "d"), ("BH_efficiency", "d"), ] def read_header(self): if not self.exists: self._field_offsets = {} self._field_types = {} self._local_particle_count = 0 self._header = {} return fd = FortranFile(self.fname) flen = os.path.getsize(self.fname) hvals = {} # Read the header of the file attrs = self.attrs hvals.update(fd.read_attrs(attrs)) self._header = hvals # This is somehow a trick here: we only want one domain to # be read, as ramses writes all the sinks in all the # domains. Here, we set the local_particle_count to 0 except # for the first domain to be red. if getattr(self.ds, "_sink_file_flag", False): self._local_particle_count = 0 else: self.ds._sink_file_flag = True self._local_particle_count = hvals["nsink"] # Read the fields + add the sink properties if self.has_descriptor: fields = _read_part_binary_file_descriptor(self.file_descriptor) else: fields = list(self.known_fields) # Note: this follows RAMSES convention. 
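# e.g. in 3D, seven extra properties are appended per refinement level,
# particle_prop_<ilvl>_0 through particle_prop_<ilvl>_6, all stored as doubles.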
for i in range(self.ds.dimensionality * 2 + 1): for ilvl in range(self.ds.max_level + 1): fields.append((f"particle_prop_{ilvl}_{i}", "d")) field_offsets = {} _pfields = {} # Fill the fields, offsets and types self.fields = [] for field, vtype in fields: self.fields.append(field) if fd.tell() >= flen: break field_offsets[self.ptype, field] = fd.tell() _pfields[self.ptype, field] = vtype fd.skip(1) self._field_offsets = field_offsets self._field_types = _pfields fd.close() class SinkParticleFileHandlerCsv(ParticleFileHandler): """Handle sink files from a csv file, the format from the sink particle in ramses""" ptype = "sink_csv" fname = "sink_{iout:05d}.csv" file_descriptor = None config_field = "ramses-sink-particles" reader = _ramses_particle_csv_file_handler attrs = (("nsink", 1, "i"), ("nindsink", 1, "i")) def read_header(self): if not self.exists: self._field_offsets = {} self._field_types = {} self._local_particle_count = 0 self._header = {} return field_offsets = {} _pfields = {} fields, self._local_particle_count = _read_part_csv_file_descriptor(self.fname) for ind, field in enumerate(fields): field_offsets[self.ptype, field] = ind _pfields[self.ptype, field] = "d" self._field_offsets = field_offsets self._field_types = _pfields def handle_field( self, field: tuple[str, str], data_dict: dict[tuple[str, str], np.ndarray] ): _ptype, fname = field if not (fname == "particle_birth_time" and self.ds.cosmological_simulation): return # convert conformal time to physical age ds = self.ds conformal_time = data_dict[field] physical_time = convert_ramses_conformal_time_to_physical_time( ds, conformal_time ) # arbitrarily set particles with zero conformal_age to zero # particle_age. This corresponds to DM particles. data_dict[field] = np.where(conformal_time > 0, physical_time, 0) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3391526 yt-4.4.0/yt/frontends/ramses/tests/0000755000175100001770000000000014714401715016656 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ramses/tests/__init__.py0000644000175100001770000000000014714401662020756 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ramses/tests/test_file_sanitizer.py0000644000175100001770000000671614714401662023311 0ustar00runnerdockerimport re import tempfile from collections import namedtuple from itertools import chain from pathlib import Path import pytest from yt.frontends.ramses.data_structures import RAMSESFileSanitizer PathTuple = namedtuple( "PathTuple", ("output_dir", "group_dir_name", "info_file", "paths_to_try") ) def generate_paths(create): with tempfile.TemporaryDirectory() as tmpdir: output_dir = Path(tmpdir) / "output_00123" output_dir.mkdir() amr_file = output_dir / "amr_00123.out00001" if create: amr_file.touch() info_file = output_dir / "info_00123.txt" if create: info_file.touch() # Test with regular structure output_dir2 = Path(tmpdir) / "output_00124" output_dir2.mkdir() group_dir2 = output_dir2 / "group_00001" group_dir2.mkdir() info_file2 = group_dir2 / "info_00124.txt" if create: info_file2.touch() amr_file2 = group_dir2 / "amr_00124.out00001" if create: amr_file2.touch() yield ( PathTuple( output_dir=output_dir, group_dir_name=None, info_file=info_file, paths_to_try=(output_dir, info_file, amr_file), ), PathTuple( output_dir=output_dir2, group_dir_name=group_dir2.name, info_file=info_file2, 
paths_to_try=(output_dir2, info_file2, group_dir2, amr_file2), ), ) @pytest.fixture def valid_path_tuples(): yield from generate_paths(create=True) @pytest.fixture def invalid_path_tuples(): yield from generate_paths(create=False) def test_valid_sanitizing(valid_path_tuples): for answer in valid_path_tuples: for path in answer.paths_to_try: sanitizer = RAMSESFileSanitizer(path) sanitizer.validate() assert sanitizer.root_folder == answer.output_dir assert sanitizer.group_name == answer.group_dir_name assert sanitizer.info_fname == answer.info_file def test_invalid_sanitizing(valid_path_tuples, invalid_path_tuples): for path in chain(*(pt.paths_to_try for pt in invalid_path_tuples)): sanitizer = RAMSESFileSanitizer(path) if path.exists(): expected_error = ValueError expected_error_message = ( "Could not determine output directory from '.*'\n" "Expected a directory name of form .* " "containing an info_\\*.txt file and amr_\\* files." ) else: expected_error = FileNotFoundError expected_error_message = re.escape( f"No such file or directory '{str(path)}'" ) with pytest.raises(expected_error, match=expected_error_message): sanitizer.validate() for path in chain(*(pt.paths_to_try for pt in valid_path_tuples)): expected_error_message = re.escape( f"No such file or directory '{str(path/'does_not_exist.txt')}'" ) sanitizer = RAMSESFileSanitizer(path / "does_not_exist.txt") with pytest.raises(FileNotFoundError, match=expected_error_message): sanitizer.validate() expected_error_message = "No such file or directory '.*'" sanitizer = RAMSESFileSanitizer(Path("this") / "does" / "not" / "exist") with pytest.raises(FileNotFoundError, match=expected_error_message): sanitizer.validate() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ramses/tests/test_hilbert.py0000644000175100001770000000336714714401662021732 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_equal import yt from yt.frontends.ramses.hilbert import get_cpu_list_cuboid, hilbert3d from yt.testing import requires_file def test_hilbert3d(): # 8 different cases, checked against RAMSES' own implementation inputs = [ [0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0, 0, 1], [1, 0, 1], [0, 1, 1], [1, 1, 1], ] outputs = [0, 1, 7, 6, 3, 2, 4, 5] for i, o in zip(inputs, outputs, strict=True): assert_equal(hilbert3d(i, 3).item(), o) output_00080 = "output_00080/info_00080.txt" @requires_file(output_00080) def test_get_cpu_list(): ds = yt.load(output_00080) np.random.seed(16091992) # These are randomly generated outputs, checked against RAMSES' own implementation inputs = ( [[0.27747276, 0.30018937, 0.17916189], [0.42656026, 0.40509483, 0.29927838]], [[0.90660856, 0.44201328, 0.22770587], [1.09175462, 0.58017918, 0.2836648]], [[0.98542323, 0.58543376, 0.45858327], [1.04441105, 0.62079207, 0.58919283]], [[0.42274841, 0.44887745, 0.87793679], [0.52066634, 0.58936331, 1.00666222]], [[0.69964803, 0.65893669, 0.03660775], [0.80565696, 0.67409752, 0.11434604]], ) outputs = ([0, 15], [0, 15], [0, 1, 15], [0, 13, 14, 15], [0]) ncpu = ds.parameters["ncpu"] bound_keys = np.array( [ds.hilbert_indices[icpu][0] for icpu in range(1, ncpu + 1)] + [ds.hilbert_indices[ds.parameters["ncpu"]][1]], dtype="float64", ) for i, o in zip(inputs, outputs, strict=True): bbox = i ls = list(get_cpu_list_cuboid(ds, bbox, bound_keys=bound_keys)) assert len(ls) > 0 assert all(np.array(o) == np.array(ls)) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 
yt-4.4.0/yt/frontends/ramses/tests/test_outputs.py0000644000175100001770000006050514714401662022021 0ustar00runnerdockerimport os from contextlib import contextmanager from pathlib import Path from shutil import copytree from tempfile import TemporaryDirectory import numpy as np from numpy.testing import assert_equal, assert_raises import yt from yt.config import ytcfg from yt.fields.field_detector import FieldDetector from yt.frontends.ramses.api import RAMSESDataset from yt.frontends.ramses.field_handlers import DETECTED_FIELDS, HydroFieldFileHandler from yt.testing import requires_file, requires_module, units_override_check from yt.utilities.answer_testing.framework import ( FieldValuesTest, PixelizedProjectionValuesTest, create_obj, data_dir_load, requires_ds, ) from yt.utilities.on_demand_imports import _f90nml as f90nml _fields = ( ("gas", "temperature"), ("gas", "density"), ("gas", "velocity_magnitude"), ("gas", "pressure_gradient_magnitude"), ) output_00080 = "output_00080/info_00080.txt" @requires_ds(output_00080) def test_output_00080(): ds = data_dir_load(output_00080) assert_equal(str(ds), "info_00080") assert_equal(ds.particle_type_counts, {"io": 1090895, "nbody": 0}) dso = [None, ("sphere", ("max", (0.1, "unitary")))] for dobj_name in dso: for field in _fields: for axis in [0, 1, 2]: for weight_field in [None, ("gas", "density")]: yield PixelizedProjectionValuesTest( output_00080, axis, field, weight_field, dobj_name ) yield FieldValuesTest(output_00080, field, dobj_name) dobj = create_obj(ds, dobj_name) s1 = dobj["index", "ones"].sum() s2 = sum(mask.sum() for block, mask in dobj.blocks) assert_equal(s1, s2) @requires_file(output_00080) def test_RAMSESDataset(): assert isinstance(data_dir_load(output_00080), RAMSESDataset) @requires_file(output_00080) def test_RAMSES_alternative_load(): # Test that we can load a RAMSES dataset by giving it the folder name, # the info file name or an amr file name base_dir, info_file_fname = os.path.split(output_00080) for fname in (base_dir, os.path.join(base_dir, "amr_00080.out00001")): assert isinstance(data_dir_load(fname), RAMSESDataset) @requires_file(output_00080) def test_units_override(): units_override_check(output_00080) ramsesNonCosmo = "DICEGalaxyDisk_nonCosmological/output_00002/info_00002.txt" @requires_file(ramsesNonCosmo) def test_non_cosmo_detection(): ds = yt.load(ramsesNonCosmo, cosmological=False) assert_equal(ds.cosmological_simulation, 0) ds = yt.load(ramsesNonCosmo, cosmological=None) assert_equal(ds.cosmological_simulation, 0) ds = yt.load(ramsesNonCosmo) assert_equal(ds.cosmological_simulation, 0) @requires_file(ramsesNonCosmo) def test_unit_non_cosmo(): for force_cosmo in [False, None]: ds = yt.load(ramsesNonCosmo, cosmological=force_cosmo) expected_raw_time = 0.0299468077820411 # in ramses unit assert_equal(ds.current_time.value, expected_raw_time) expected_time = 14087886140997.336 # in seconds assert_equal(ds.current_time.in_units("s").value, expected_time) ramsesCosmo = "output_00080/info_00080.txt" @requires_file(ramsesCosmo) def test_cosmo_detection(): ds = yt.load(ramsesCosmo, cosmological=True) assert_equal(ds.cosmological_simulation, 1) ds = yt.load(ramsesCosmo, cosmological=None) assert_equal(ds.cosmological_simulation, 1) ds = yt.load(ramsesCosmo) assert_equal(ds.cosmological_simulation, 1) @requires_file(ramsesCosmo) def test_unit_cosmo(): for force_cosmo in [True, None]: ds = yt.load(ramsesCosmo, cosmological=force_cosmo) # NOTE: these are the old test values, which used 3.08e24 as # the Mpc to cm 
conversion factor # expected_raw_time = 1.119216564055017 # in ramses unit # expected_time = 3.756241729312462e17 # in seconds expected_raw_time = 1.1213297131656712 # in ramses unit np.testing.assert_equal( ds.current_time.to("code_time").value, expected_raw_time ) expected_time = 3.7633337427123904e17 # in seconds np.testing.assert_equal(ds.current_time.to("s").value, expected_time) ramsesExtraFieldsSmall = "ramses_extra_fields_small/output_00001" @requires_file(ramsesExtraFieldsSmall) def test_extra_fields(): extra_fields = [("particle_family", "I"), ("particle_pointer", "I")] ds = yt.load( os.path.join(ramsesExtraFieldsSmall, "info_00001.txt"), extra_particle_fields=extra_fields, ) # the dataset should contain the fields for field, _ in extra_fields: assert ("all", field) in ds.field_list # Check the family (they should equal 100, for tracer particles) dd = ds.all_data() families = dd["all", "particle_family"] assert all(families == 100) @requires_file(ramsesExtraFieldsSmall) def test_extra_fields_2(): extra_fields = [f"particle_extra_field_{i + 1}" for i in range(2)] ds = yt.load(os.path.join(ramsesExtraFieldsSmall, "info_00001.txt")) # When migrating to pytest, uncomment this # with pytest.warns( # UserWarning, # match=r"Field (.*) has a length \d+, but expected a length of \d+.()", # ): # ds.index # the dataset should contain the fields for field in extra_fields: assert ("io", field) in ds.field_list # In the dataset, the fields are integers, so we cannot test # that they are accessed correctly. ramses_rt = "ramses_rt_00088/output_00088/info_00088.txt" @requires_file(ramses_rt) def test_ramses_rt(): ds = yt.load(ramses_rt) ad = ds.all_data() expected_fields = [ "Density", "x-velocity", "y-velocity", "z-velocity", "Pres_IR", "Pressure", "Metallicity", "HII", "HeII", "HeIII", ] for field in expected_fields: assert ("ramses", field) in ds.field_list # test that field access works ad["ramses", field] # test that special derived fields for RT datasets work special_fields = [("gas", "temp_IR")] species = ["H_p1", "He_p1", "He_p2"] for specie in species: special_fields.extend( [ ("gas", f"{specie}_fraction"), ("gas", f"{specie}_density"), ("gas", f"{specie}_mass"), ] ) for field in special_fields: assert field in ds.derived_field_list ad[field] # Test the creation of rt fields rt_fields = [("rt", "photon_density_1")] + [ ("rt", f"photon_flux_{d}_1") for d in "xyz" ] for field in rt_fields: assert field in ds.derived_field_list ad[field] ramses_sink = "ramses_sink_00016/output_00016/info_00016.txt" @requires_file(ramses_sink) def test_ramses_sink(): expected_fields = [ "BH_bondi_accretion", "BH_eddington_accretion", "BH_efficiency", "BH_esave", "BH_real_accretion", "BH_spin", "BH_spin_x", "BH_spin_y", "BH_spin_z", "gas_spin_x", "gas_spin_y", "gas_spin_z", "particle_birth_time", "particle_identifier", "particle_mass", "particle_position_x", "particle_position_y", "particle_position_z", "particle_prop_0_0", "particle_prop_0_1", "particle_prop_0_2", "particle_prop_0_3", "particle_prop_0_4", "particle_prop_0_5", "particle_prop_0_6", "particle_velocity_x", "particle_velocity_y", "particle_velocity_z", ] # Check that sinks are autodetected ds = yt.load(ramses_sink) ad = ds.all_data() for field in expected_fields: assert ("sink", field) in ds.field_list # test that field access works ad["sink", field] # Checking that sinks are autodetected ds = yt.load(ramsesCosmo) ad = ds.all_data() for field in expected_fields: assert ("sink", field) not in ds.field_list ramses_new_format = 
"ramses_new_format/output_00002/info_00002.txt" @requires_file(ramses_new_format) def test_new_format(): expected_particle_fields = [ ("star", "particle_identity"), ("star", "particle_level"), ("star", "particle_mass"), ("star", "particle_metallicity"), ("star", "particle_position_x"), ("star", "particle_position_y"), ("star", "particle_position_z"), ("star", "particle_tag"), ("star", "particle_velocity_x"), ("star", "particle_velocity_y"), ("star", "particle_velocity_z"), ] ds = yt.load(ramses_new_format) ad = ds.all_data() # Check all the expected fields exist and can be accessed for f in expected_particle_fields: assert f in ds.derived_field_list ad[f] # Check there is only stars with tag 0 (it should be right) assert all(ad["star", "particle_family"] == 2) assert all(ad["star", "particle_tag"] == 0) assert len(ad["star", "particle_tag"]) == 600 @requires_file(ramses_sink) def test_ramses_part_count(): ds = yt.load(ramses_sink) pcount = ds.particle_type_counts assert_equal(pcount["io"], 17132, err_msg="Got wrong number of io particle") assert_equal(pcount["sink"], 8, err_msg="Got wrong number of sink particle") @requires_file(ramsesCosmo) def test_custom_particle_def(): ytcfg.add_section("ramses-particles") ytcfg["ramses-particles", "fields"] = """particle_position_x, d particle_position_y, d particle_position_z, d particle_velocity_x, d particle_velocity_y, d particle_velocity_z, d particle_mass, d particle_identifier, i particle_refinement_level, I particle_birth_time, d particle_foobar, d """ ds = yt.load(ramsesCosmo) def check_unit(array, unit): assert str(array.in_cgs().units) == unit try: assert ("io", "particle_birth_time") in ds.derived_field_list assert ("io", "particle_foobar") in ds.derived_field_list check_unit(ds.r["io", "particle_birth_time"], "s") check_unit(ds.r["io", "particle_foobar"], "dimensionless") finally: ytcfg.remove_section("ramses-particles") @requires_file(output_00080) @requires_file(ramses_sink) def test_grav_detection(): for path, has_potential in ((output_00080, False), (ramses_sink, True)): ds = yt.load(path) # Test detection for k in "xyz": assert ("gravity", f"{k}-acceleration") in ds.field_list assert ("gas", f"acceleration_{k}") in ds.derived_field_list if has_potential: assert ("gravity", "Potential") in ds.field_list assert ("gas", "potential") in ds.derived_field_list assert ("gas", "potential_energy") in ds.derived_field_list # Test access for k in "xyz": ds.r["gas", f"acceleration_{k}"].to("m/s**2") if has_potential: ds.r["gas", "potential"].to("m**2/s**2") ds.r["gas", "potential_energy"].to("kg*m**2/s**2") @requires_file(ramses_sink) @requires_file(output_00080) def test_ramses_field_detection(): ds1 = yt.load(ramses_rt) assert "ramses" not in DETECTED_FIELDS # Now they are detected ! 
ds1.index P1 = HydroFieldFileHandler.parameters assert DETECTED_FIELDS[ds1.unique_identifier]["ramses"] is not None fields_1 = set(DETECTED_FIELDS[ds1.unique_identifier]["ramses"]) # Check the right number of variables has been loaded assert P1["nvar"] == 10 assert len(fields_1) == P1["nvar"] # Now load another dataset ds2 = yt.load(output_00080) ds2.index P2 = HydroFieldFileHandler.parameters fields_2 = set(DETECTED_FIELDS[ds2.unique_identifier]["ramses"]) # Check the right number of variables has been loaded assert P2["nvar"] == 6 assert len(fields_2) == P2["nvar"] @requires_file(ramses_new_format) @requires_file(ramsesCosmo) @requires_file(ramsesNonCosmo) def test_formation_time(): extra_particle_fields = [ ("particle_birth_time", "d"), ("particle_metallicity", "d"), ] # test semantics for cosmological dataset ds = yt.load(ramsesCosmo, extra_particle_fields=extra_particle_fields) assert ("io", "particle_birth_time") in ds.field_list assert ("io", "conformal_birth_time") in ds.field_list assert ("io", "particle_metallicity") in ds.field_list ad = ds.all_data() whstars = ad["io", "conformal_birth_time"] != 0 assert np.all(ad["io", "star_age"][whstars] > 0) # test semantics for non-cosmological new-style output format ds = yt.load(ramses_new_format) ad = ds.all_data() assert ("io", "particle_birth_time") in ds.field_list assert np.all(ad["io", "particle_birth_time"] > 0) # test semantics for non-cosmological old-style output format ds = yt.load(ramsesNonCosmo, extra_particle_fields=extra_particle_fields) ad = ds.all_data() assert ("io", "particle_birth_time") in ds.field_list # the dataset only includes particles with arbitrarily old ages # and particles that formed in the very first timestep assert np.all(ad["io", "particle_birth_time"] <= 0) whdynstars = ad["io", "particle_birth_time"] == 0 assert np.all(ad["io", "star_age"][whdynstars] == ds.current_time) @requires_file(ramses_new_format) def test_cooling_fields(): # Test the field is being loaded correctly ds = yt.load(ramses_new_format) # Derived cooling fields assert ("gas", "cooling_net") in ds.derived_field_list assert ("gas", "cooling_total") in ds.derived_field_list assert ("gas", "heating_total") in ds.derived_field_list assert ("gas", "number_density") in ds.derived_field_list # Original cooling fields assert ("gas", "cooling_primordial") in ds.derived_field_list assert ("gas", "cooling_compton") in ds.derived_field_list assert ("gas", "cooling_metal") in ds.derived_field_list assert ("gas", "heating_primordial") in ds.derived_field_list assert ("gas", "heating_compton") in ds.derived_field_list assert ("gas", "cooling_primordial_prime") in ds.derived_field_list assert ("gas", "cooling_compton_prime") in ds.derived_field_list assert ("gas", "cooling_metal_prime") in ds.derived_field_list assert ("gas", "heating_primordial_prime") in ds.derived_field_list assert ("gas", "heating_compton_prime") in ds.derived_field_list assert ("gas", "mu") in ds.derived_field_list # Abundances assert ("gas", "Electron_number_density") in ds.derived_field_list assert ("gas", "HI_number_density") in ds.derived_field_list assert ("gas", "HII_number_density") in ds.derived_field_list assert ("gas", "HeI_number_density") in ds.derived_field_list assert ("gas", "HeII_number_density") in ds.derived_field_list assert ("gas", "HeIII_number_density") in ds.derived_field_list def check_unit(array, unit): assert str(array.in_cgs().units) == unit check_unit(ds.r["gas", "cooling_total"], "cm**5*g/s**3") check_unit(ds.r["gas", "cooling_primordial_prime"], 
"cm**5*g/(K*s**3)") check_unit(ds.r["gas", "number_density"], "cm**(-3)") check_unit(ds.r["gas", "mu"], "dimensionless") check_unit(ds.r["gas", "Electron_number_density"], "cm**(-3)") @requires_file(ramses_rt) def test_ramses_mixed_files(): # Test that one can use derived fields that depend on different # files (here hydro and rt files) ds = yt.load(ramses_rt) def _mixed_field(field, data): return data["rt", "photon_density_1"] / data["gas", "H_nuclei_density"] ds.add_field(("gas", "mixed_files"), function=_mixed_field, sampling_type="cell") # Access the field ds.r["gas", "mixed_files"] ramses_empty_record = "ramses_empty_record/output_00003/info_00003.txt" @requires_file(ramses_empty_record) def test_ramses_empty_record(): # Test that yt can load datasets with empty records ds = yt.load(ramses_empty_record) # This should not fail ds.index # Access some field ds.r["gas", "density"] @requires_file(ramses_new_format) @requires_module("f90nml") def test_namelist_reading(): ds = data_dir_load(ramses_new_format) namelist_fname = os.path.join(ds.directory, "namelist.txt") with open(namelist_fname) as f: ref = f90nml.read(f) nml = ds.parameters["namelist"] assert nml == ref @requires_file(ramses_empty_record) @requires_file(output_00080) @requires_module("f90nml") def test_namelist_reading_should_not_fail(): for ds_name in (ramses_empty_record, output_00080): # Test that the reading does not fail for malformed namelist.txt files ds = data_dir_load(ds_name) ds.index # should work ramses_mhd_128 = "ramses_mhd_128/output_00027/info_00027.txt" @requires_file(ramses_mhd_128) def test_magnetic_field_aliasing(): # Test if RAMSES magnetic fields are correctly aliased to yt magnetic fields and if # derived magnetic quantities are calculated ds = data_dir_load(ramses_mhd_128) ad = ds.all_data() for field in [ "magnetic_field_x", "magnetic_field_magnitude", "alfven_speed", "magnetic_field_divergence", ]: assert ("gas", field) in ds.derived_field_list ad["gas", field] @requires_file(output_00080) def test_field_accession(): ds = yt.load(output_00080) fields = [ ("gas", "density"), # basic ones ("gas", "pressure"), ("gas", "pressure_gradient_magnitude"), # requires ghost zones ] # Check accessing gradient works for a variety of spatial domains for reg in ( ds.all_data(), ds.sphere([0.1] * 3, 0.01), ds.sphere([0.5] * 3, 0.05), ds.box([0.1] * 3, [0.2] * 3), ): for field in fields: reg[field] @requires_file(output_00080) def test_ghost_zones(): ds = yt.load(output_00080) def gen_dummy(ngz): def dummy(field, data): if not isinstance(data, FieldDetector): shape = data["gas", "mach_number"].shape[:3] np.testing.assert_equal(shape, (2 + 2 * ngz, 2 + 2 * ngz, 2 + 2 * ngz)) return data["gas", "mach_number"] return dummy fields = [] for ngz in (1, 2, 3): fname = ("gas", f"density_ghost_zone_{ngz}") ds.add_field( fname, gen_dummy(ngz), sampling_type="cell", units="", validators=[yt.ValidateSpatial(ghost_zones=ngz)], ) fields.append(fname) box = ds.box([0, 0, 0], [0.1, 0.1, 0.1]) for f in fields: print("Getting ", f) box[f] @requires_file(output_00080) def test_max_level(): ds = yt.load(output_00080) assert any(ds.r["index", "grid_level"] > 2) # Should work for ds in ( yt.load(output_00080, max_level=2, max_level_convention="yt"), yt.load(output_00080, max_level=8, max_level_convention="ramses"), ): assert all(ds.r["index", "grid_level"] <= 2) assert any(ds.r["index", "grid_level"] == 2) @requires_file(output_00080) def test_invalid_max_level(): invalid_value_args = ( (1, None), (1, "foo"), (1, "bar"), (-1, "yt"), ) for 
lvl, convention in invalid_value_args: with assert_raises(ValueError): yt.load(output_00080, max_level=lvl, max_level_convention=convention) invalid_type_args = ( (1.0, "yt"), # not an int ("invalid", "yt"), ) # Should fail with type errors for lvl, convention in invalid_type_args: with assert_raises(TypeError): yt.load(output_00080, max_level=lvl, max_level_convention=convention) @requires_file(ramses_new_format) def test_print_stats(): ds = yt.load(ramses_new_format) # Should work ds.print_stats() # FIXME #3197: use `capsys` with pytest to make sure the print_stats function works as intended @requires_file(output_00080) def test_reading_order(): # This checks the bug uncovered in #4880 # This checks that the result of field access doesn't # depend on the order of accesses def _dummy_field(field, data): # Note: this is a dummy field # that doesn't really have any physical meaning # but may trigger some bug in the field # handling. T = data["gas", "temperature"] Z = data["gas", "metallicity"] return T * 1**Z fields = [ "Density", "x-velocity", "y-velocity", "z-velocity", "Pressure", "Metallicity", ] ds = yt.load(output_00080, fields=fields) ds.add_field( ("gas", "test"), function=_dummy_field, units=None, sampling_type="cell" ) ad = ds.all_data() v0 = ad["gas", "test"] ad = ds.all_data() ad["gas", "temperature"] v1 = ad["gas", "test"] np.testing.assert_allclose(v0, v1) # Simple context manager that overrides the value of # the self_shielding flag in the namelist.txt file @contextmanager def override_self_shielding(root: Path, section: str, val: bool): # Copy the content of root into a temporary folder with TemporaryDirectory() as tmp: tmpdir = Path(tmp) / root.name tmpdir.mkdir() # Copy the content of `root` into `tmpdir` recursively copytree(root, tmpdir, dirs_exist_ok=True) fname = Path(tmpdir) / "namelist.txt" with open(fname) as f: namelist_data = f90nml.read(f).todict() sec = namelist_data.get(section, {}) sec["self_shielding"] = val namelist_data[section] = sec with open(fname, "w") as f: new_nml = f90nml.Namelist(namelist_data) new_nml.write(f) yield tmpdir @requires_file(ramses_new_format) @requires_module("f90nml") def test_self_shielding_logic(): ################################################## # Check that the self_shielding flag is correctly read ds = yt.load(ramses_new_format) assert ds.parameters["namelist"] is not None assert ds.self_shielding is False ################################################## # Modify the namelist in-situ, reload and check root_folder = Path(ds.root_folder) for section in ("physics_params", "cooling_params"): for val in (True, False): with override_self_shielding(root_folder, section, val) as tmp_output: # Autodetection should work ds = yt.load(tmp_output) assert ds.parameters["namelist"] is not None assert ds.self_shielding is val # Setting the flag manually should override the namelist ds = yt.load(tmp_output, self_shielding=True) assert ds.self_shielding is True ds = yt.load(tmp_output, self_shielding=False) assert ds.self_shielding is False ################################################## # Setting the flag manually should work, even if the namelist does not contain it ds = yt.load(ramses_new_format, self_shielding=True) assert ds.self_shielding is True ds = yt.load(ramses_new_format, self_shielding=False) assert ds.self_shielding is False ramses_star_formation = "ramses_star_formation/output_00013" @requires_file(ramses_star_formation) @requires_module("f90nml") def test_self_shielding_loading(): ds_off = yt.load(ramses_star_formation, self_shielding=False) ds_on = 
yt.load(ramses_star_formation, self_shielding=True) # We do not expect any significant change at densities << 1e-2 reg_on = ds_on.all_data().cut_region( ["obj['gas', 'density'].to('mp/cm**3') < 1e-6"] ) reg_off = ds_off.all_data().cut_region( ["obj['gas', 'density'].to('mp/cm**3') < 1e-6"] ) np.testing.assert_allclose( reg_on["gas", "cooling_total"], reg_off["gas", "cooling_total"], rtol=1e-5, ) # We expect large differences from 1e-2 mp/cc reg_on = ds_on.all_data().cut_region( ["obj['gas', 'density'].to('mp/cm**3') > 1e-2"] ) reg_off = ds_off.all_data().cut_region( ["obj['gas', 'density'].to('mp/cm**3') > 1e-2"] ) # Primordial cooling decreases with density (except at really high temperature) # so we expect the boosted version to cool *less* efficiently diff = ( reg_on["gas", "cooling_primordial"] - reg_off["gas", "cooling_primordial"] ) / reg_off["gas", "cooling_primordial"] assert (np.isclose(diff, 0, atol=1e-5) | (diff < 0)).all() # Also make sure the difference is large for some cells assert (np.abs(diff) > 0.1).any() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ramses/tests/test_outputs_pytest.py0000644000175100001770000000406314714401662023426 0ustar00runnerdockerfrom copy import deepcopy import pytest import yt from yt.config import ytcfg from yt.testing import requires_file output_00080 = "output_00080/info_00080.txt" ramses_new_format = "ramses_new_format/output_00002/info_00002.txt" custom_hydro = [ "my-Density", "my-x-velocity", "my-y-velocity", "my-z-velocity", "my-Pressure", "my-Metallicity", ] custom_grav = [ "my-x-acceleration", "my-y-acceleration", "my-z-acceleration", ] @pytest.fixture() def custom_ramses_fields_conf(): old_config = deepcopy(ytcfg.config_root.as_dict()) ytcfg.add_section("ramses-hydro") ytcfg.add_section("ramses-grav") ytcfg.set("ramses-hydro", "fields", custom_hydro) ytcfg.set("ramses-grav", "fields", custom_grav) yield ytcfg.remove_section("ramses-hydro") ytcfg.remove_section("ramses-grav") ytcfg.update(old_config) @requires_file(output_00080) def test_field_config_1(custom_ramses_fields_conf): ds = yt.load(output_00080) for f in custom_hydro: assert ("ramses", f) in ds.field_list for f in custom_grav: assert ("gravity", f) in ds.field_list @requires_file(ramses_new_format) def test_field_config_2(custom_ramses_fields_conf): ds = yt.load(ramses_new_format) for f in custom_hydro: assert ("ramses", f) in ds.field_list for f in custom_grav: assert ("gravity", f) in ds.field_list @requires_file(output_00080) @requires_file(ramses_new_format) def test_warning_T2(): ds1 = yt.load(output_00080) ds2 = yt.load(ramses_new_format) # Should not raise warnings ds1.r["gas", "temperature_over_mu"] ds2.r["gas", "temperature_over_mu"] # We cannot read the cooling tables of output_00080 # so this should raise a warning with pytest.warns( RuntimeWarning, match=( "Trying to calculate temperature but the cooling tables couldn't be " r"found or read\. yt will return T/µ instead of T.*" ), ): ds1.r["gas", "temperature"] # But this one should not ds2.r["gas", "temperature"] ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3391526 yt-4.4.0/yt/frontends/rockstar/0000755000175100001770000000000014714401715016052 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/rockstar/__init__.py0000644000175100001770000000004714714401662020165 0ustar00runnerdocker""" API for Rockstar frontend. 
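A minimal usage sketch (the path is illustrative; any Rockstar halo
binary output such as ``halos_0.0.bin`` should work)::

    import yt

    ds = yt.load("rockstar_halos/halos_0.0.bin")
    ad = ds.all_data()
    print(ad["halos", "particle_mass"])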
""" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/rockstar/api.py0000644000175100001770000000021714714401662017176 0ustar00runnerdockerfrom . import tests from .data_structures import RockstarDataset from .fields import RockstarFieldInfo from .io import IOHandlerRockstarBinary ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/rockstar/data_structures.py0000644000175100001770000001677014714401662021654 0ustar00runnerdockerimport glob import os from functools import cached_property from typing import Any, Optional import numpy as np from yt.data_objects.static_output import ParticleDataset from yt.frontends.halo_catalog.data_structures import HaloCatalogFile from yt.funcs import setdefaultattr from yt.geometry.particle_geometry_handler import ParticleIndex from yt.utilities import fortran_utils as fpu from yt.utilities.cosmology import Cosmology from yt.utilities.exceptions import YTFieldNotFound from .definitions import header_dt from .fields import RockstarFieldInfo class RockstarBinaryFile(HaloCatalogFile): header: dict _position_offset: int _member_offset: int _Npart: "np.ndarray[Any, np.dtype[np.int64]]" _ids_halos: list[int] _file_size: int def __init__(self, ds, io, filename, file_id, range): with open(filename, "rb") as f: self.header = fpu.read_cattrs(f, header_dt, "=") self._position_offset = f.tell() pcount = self.header["num_halos"] halos = np.fromfile(f, dtype=io._halo_dt, count=pcount) self._member_offset = f.tell() self._ids_halos = list(halos["particle_identifier"]) self._Npart = halos["num_p"] f.seek(0, os.SEEK_END) self._file_size = f.tell() expected_end = self._member_offset + 8 * self._Npart.sum() if expected_end != self._file_size: raise RuntimeError( f"File size {self._file_size} does not match expected size {expected_end}." ) super().__init__(ds, io, filename, file_id, range) def _read_member( self, ihalo: int ) -> Optional["np.ndarray[Any, np.dtype[np.int64]]"]: if ihalo not in self._ids_halos: return None ind_halo = self._ids_halos.index(ihalo) ipos = self._member_offset + 8 * self._Npart[:ind_halo].sum() with open(self.filename, "rb") as f: f.seek(ipos, os.SEEK_SET) ids = np.fromfile(f, dtype=np.int64, count=self._Npart[ind_halo]) return ids def _read_particle_positions(self, ptype: str, f=None): """ Read all particle positions in this file. 
""" if f is None: close = True f = open(self.filename, "rb") else: close = False pcount = self.header["num_halos"] pos = np.empty((pcount, 3), dtype="float64") f.seek(self._position_offset, os.SEEK_SET) halos = np.fromfile(f, dtype=self.io._halo_dt, count=pcount) for i, ax in enumerate("xyz"): pos[:, i] = halos[f"particle_position_{ax}"].astype("float64") if close: f.close() return pos class RockstarIndex(ParticleIndex): def get_member(self, ihalo: int): for df in self.data_files: members = df._read_member(ihalo) if members is not None: return members raise RuntimeError(f"Could not find halo {ihalo} in any data file.") class RockstarDataset(ParticleDataset): _index_class = RockstarIndex _file_class = RockstarBinaryFile _field_info_class = RockstarFieldInfo _suffix = ".bin" def __init__( self, filename, dataset_type="rockstar_binary", units_override=None, unit_system="cgs", index_order=None, index_filename=None, ): super().__init__( filename, dataset_type, units_override=units_override, unit_system=unit_system, ) def _parse_parameter_file(self): with open(self.parameter_filename, "rb") as f: hvals = fpu.read_cattrs(f, header_dt) hvals.pop("unused") self.dimensionality = 3 self.refine_by = 2 prefix = ".".join(self.parameter_filename.rsplit(".", 2)[:-2]) self.filename_template = f"{prefix}.%(num)s{self._suffix}" self.file_count = len(glob.glob(prefix + ".*" + self._suffix)) # Now we can set up things we already know. self.cosmological_simulation = 1 self.current_redshift = (1.0 / hvals["scale"]) - 1.0 self.hubble_constant = hvals["h0"] self.omega_lambda = hvals["Ol"] self.omega_matter = hvals["Om"] cosmo = Cosmology( hubble_constant=self.hubble_constant, omega_matter=self.omega_matter, omega_lambda=self.omega_lambda, ) self.current_time = cosmo.lookback_time(self.current_redshift, 1e6).in_units( "s" ) self._periodicity = (True, True, True) self.particle_types = "halos" self.particle_types_raw = "halos" self.domain_left_edge = np.array([0.0, 0.0, 0.0]) self.domain_right_edge = np.array([hvals["box_size"]] * 3) self.domain_dimensions = np.ones(3, "int32") self.parameters.update(hvals) def _set_code_unit_attributes(self): z = self.current_redshift setdefaultattr(self, "length_unit", self.quan(1.0 / (1.0 + z), "Mpc / h")) setdefaultattr(self, "mass_unit", self.quan(1.0, "Msun / h")) setdefaultattr(self, "velocity_unit", self.quan(1.0, "km / s")) setdefaultattr(self, "time_unit", self.length_unit / self.velocity_unit) @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if not filename.endswith(".bin"): return False try: with open(filename, mode="rb") as f: header = fpu.read_cattrs(f, header_dt) except OSError: return False else: return header["magic"] == 18077126535843729616 def halo(self, ptype, particle_identifier): return RockstarHaloContainer( ptype, particle_identifier, parent_ds=None, halo_ds=self, ) class RockstarHaloContainer: def __init__(self, ptype, particle_identifier, *, parent_ds, halo_ds): if ptype not in halo_ds.particle_types_raw: raise RuntimeError( f'Possible halo types are {halo_ds.particle_types_raw}, supplied "{ptype}".' 
) self.ds = parent_ds self.halo_ds = halo_ds self.ptype = ptype self.particle_identifier = particle_identifier def __repr__(self): return f"{self.halo_ds}_{self.ptype}_{self.particle_identifier:09d}" def __getitem__(self, key): if isinstance(key, tuple): ptype, field = key else: ptype = self.ptype field = key data = { "mass": self.mass, "position": self.position, "velocity": self.velocity, "member_ids": self.member_ids, } if ptype == "halos" and field in data: return data[field] raise YTFieldNotFound((ptype, field), dataset=self.ds) @cached_property def ihalo(self): halo_id = self.particle_identifier halo_ids = list(self.halo_ds.r["halos", "particle_identifier"].astype("i8")) ihalo = halo_ids.index(halo_id) assert halo_ids[ihalo] == halo_id return ihalo @property def mass(self): return self.halo_ds.r["halos", "particle_mass"][self.ihalo] @property def position(self): return self.halo_ds.r["halos", "particle_position"][self.ihalo] @property def velocity(self): return self.halo_ds.r["halos", "particle_velocity"][self.ihalo] @property def member_ids(self): return self.halo_ds.index.get_member(self.particle_identifier) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/rockstar/definitions.py0000644000175100001770000000766414714401662020755 0ustar00runnerdockerfrom typing import Any import numpy as np BINARY_HEADER_SIZE = 256 header_dt = ( ("magic", 1, "Q"), ("snap", 1, "q"), ("chunk", 1, "q"), ("scale", 1, "f"), ("Om", 1, "f"), ("Ol", 1, "f"), ("h0", 1, "f"), ("bounds", 6, "f"), ("num_halos", 1, "q"), ("num_particles", 1, "q"), ("box_size", 1, "f"), ("particle_mass", 1, "f"), ("particle_type", 1, "q"), ("format_revision", 1, "i"), ("version", 12, "c"), ("unused", BINARY_HEADER_SIZE - 4 * 12 - 4 - 8 * 6 - 12, "c"), ) # Note the final field here, which is a field for min/max format revision in # which the field appears. 
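# For example, ("particle_corevel_x", np.float32, (1, 100)) is only present in
# format revisions 1 through 100, while two-element entries without a range
# appear in every revision.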
KNOWN_REVISIONS: list[int] = [0, 1, 2] # using typing.Any here in lieu of numpy.typing.DTypeLike (should be backported for numpy < 1.20) HaloDataType = tuple[str, Any] | tuple[str, Any, tuple[int, int]] halo_dt: list[HaloDataType] = [ ("particle_identifier", np.int64), ("particle_position_x", np.float32), ("particle_position_y", np.float32), ("particle_position_z", np.float32), ("particle_mposition_x", np.float32, (0, 0)), ("particle_mposition_y", np.float32, (0, 0)), ("particle_mposition_z", np.float32, (0, 0)), ("particle_velocity_x", np.float32), ("particle_velocity_y", np.float32), ("particle_velocity_z", np.float32), ("particle_corevel_x", np.float32, (1, 100)), ("particle_corevel_y", np.float32, (1, 100)), ("particle_corevel_z", np.float32, (1, 100)), ("particle_bulkvel_x", np.float32), ("particle_bulkvel_y", np.float32), ("particle_bulkvel_z", np.float32), ("particle_mass", np.float32), ("virial_radius", np.float32), ("child_r", np.float32), ("vmax_r", np.float32), ("mgrav", np.float32), ("vmax", np.float32), ("rvmax", np.float32), ("rs", np.float32), ("klypin_rs", np.float32), ("vrms", np.float32), ("Jx", np.float32), ("Jy", np.float32), ("Jz", np.float32), ("energy", np.float32), ("spin", np.float32), ("alt_m1", np.float32), ("alt_m2", np.float32), ("alt_m3", np.float32), ("alt_m4", np.float32), ("Xoff", np.float32), ("Voff", np.float32), ("b_to_a", np.float32), ("c_to_a", np.float32), ("Ax", np.float32), ("Ay", np.float32), ("Az", np.float32), ("b_to_a2", np.float32, (1, 100)), ("c_to_a2", np.float32, (1, 100)), ("A2x", np.float32, (1, 100)), ("A2y", np.float32, (1, 100)), ("A2z", np.float32, (1, 100)), ("bullock_spin", np.float32), ("kin_to_pot", np.float32), ("m_pe_b", np.float32, (1, 100)), ("m_pe_d", np.float32, (1, 100)), ("num_p", np.int64), ("num_child_particles", np.int64), ("p_start", np.int64), ("desc", np.int64), ("flags", np.int64), ("n_core", np.int64), ("min_pos_err", np.float32), ("min_vel_err", np.float32), ("min_bulkvel_err", np.float32), ("type", np.int32, (2, 100)), ("sm", np.float32, (2, 100)), ("gas", np.float32, (2, 100)), ("bh", np.float32, (2, 100)), ("peak_density", np.float32, (2, 100)), ("av_density", np.float32, (2, 100)), ] # using typing.Any here in lieu of numpy.typing.DTypeLike (should be backported for numpy < 1.20) halo_dts_tmp: dict[int, list[HaloDataType]] = {} halo_dts: dict[int, np.dtype] = {} for rev in KNOWN_REVISIONS: halo_dts_tmp[rev] = [] for item in halo_dt: if len(item) == 2: halo_dts_tmp[rev].append(item) elif len(item) == 3: mi, ma = item[2] if (mi <= rev) and (rev <= ma): halo_dts_tmp[rev].append(item[:2]) halo_dts[rev] = np.dtype(halo_dts_tmp[rev], align=True) del halo_dts_tmp particle_dt = np.dtype( [ ("particle_identifier", np.int64), ("particle_position_x", np.float32), ("particle_position_y", np.float32), ("particle_position_z", np.float32), ("particle_velocity_x", np.float32), ("particle_velocity_y", np.float32), ("particle_velocity_z", np.float32), ] ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/rockstar/fields.py0000644000175100001770000000555714714401662017707 0ustar00runnerdockerfrom yt._typing import KnownFieldsT from yt.fields.field_info_container import FieldInfoContainer m_units = "Msun / h" # Msun / h p_units = "Mpccm / h" # Mpc / h comoving v_units = "km / s" # km /s phys, peculiar r_units = "kpccm / h" # kpc / h comoving class RockstarFieldInfo(FieldInfoContainer): known_particle_fields: KnownFieldsT = ( ("particle_identifier", ("", [], None)), 
("particle_position_x", (p_units, [], None)), ("particle_position_y", (p_units, [], None)), ("particle_position_z", (p_units, [], None)), ("particle_velocity_x", (v_units, [], None)), ("particle_velocity_y", (v_units, [], None)), ("particle_velocity_z", (v_units, [], None)), ("particle_corevel_x", (v_units, [], None)), ("particle_corevel_y", (v_units, [], None)), ("particle_corevel_z", (v_units, [], None)), ("particle_bulkvel_x", (v_units, [], None)), ("particle_bulkvel_y", (v_units, [], None)), ("particle_bulkvel_z", (v_units, [], None)), ("particle_mass", (m_units, [], "Mass")), ("virial_radius", (r_units, [], "Radius")), ("child_r", (r_units, [], None)), ("vmax_r", (v_units, [], None)), # These fields I don't have good definitions for yet. ("mgrav", ("", [], None)), ("vmax", (v_units, [], "V_{max}")), ("rvmax", (v_units, [], None)), ("rs", (r_units, [], "R_s")), ("klypin_rs", (r_units, [], "Klypin R_s")), ("vrms", (v_units, [], "V_{rms}")), ("Jx", ("", [], "J_x")), ("Jy", ("", [], "J_y")), ("Jz", ("", [], "J_z")), ("energy", ("", [], None)), ("spin", ("", [], "Spin Parameter")), ("alt_m1", (m_units, [], None)), ("alt_m2", (m_units, [], None)), ("alt_m3", (m_units, [], None)), ("alt_m4", (m_units, [], None)), ("Xoff", ("", [], None)), ("Voff", ("", [], None)), ("b_to_a", ("", [], "Ellipsoidal b to a")), ("c_to_a", ("", [], "Ellipsoidal c to a")), ("Ax", ("", [], "A_x")), ("Ay", ("", [], "A_y")), ("Az", ("", [], "A_z")), ("b_to_a2", ("", [], None)), ("c_to_a2", ("", [], None)), ("A2x", ("", [], "A2_x")), ("A2y", ("", [], "A2_y")), ("A2z", ("", [], "A2_z")), ("bullock_spin", ("", [], "Bullock Spin Parameter")), ("kin_to_pot", ("", [], "Kinetic to Potential")), ("m_pe_b", ("", [], None)), ("m_pe_d", ("", [], None)), ("num_p", ("", [], "Number of Particles")), ("num_child_particles", ("", [], "Number of Child Particles")), ("p_start", ("", [], None)), ("desc", ("", [], None)), ("flags", ("", [], None)), ("n_core", ("", [], None)), ("min_pos_err", ("", [], None)), ("min_vel_err", ("", [], None)), ("min_bulkvel_err", ("", [], None)), ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/rockstar/io.py0000644000175100001770000000615014714401662017036 0ustar00runnerdockerimport os import numpy as np from yt.utilities.io_handler import BaseParticleIOHandler from .definitions import halo_dts class IOHandlerRockstarBinary(BaseParticleIOHandler): _dataset_type = "rockstar_binary" def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self._halo_dt = halo_dts[self.ds.parameters["format_revision"]] def _read_fluid_selection(self, chunks, selector, fields, size): raise NotImplementedError def _read_particle_coords(self, chunks, ptf): # This will read chunks and yield the results. # Only support halo reading for now. assert len(ptf) == 1 assert list(ptf.keys())[0] == "halos" ptype = "halos" for data_file in self._sorted_chunk_iterator(chunks): pcount = data_file.header["num_halos"] if pcount == 0: continue with open(data_file.filename, "rb") as f: pos = data_file._get_particle_positions(ptype, f=f) yield "halos", (pos[:, i] for i in range(3)), 0.0 def _read_particle_fields(self, chunks, ptf, selector): # Only support halo reading for now. 
assert len(ptf) == 1 assert list(ptf.keys())[0] == "halos" for data_file in self._sorted_chunk_iterator(chunks): si, ei = data_file.start, data_file.end pcount = data_file.header["num_halos"] if pcount == 0: continue with open(data_file.filename, "rb") as f: for ptype, field_list in sorted(ptf.items()): pos = data_file._get_particle_positions(ptype, f=f) x, y, z = (pos[:, i] for i in range(3)) mask = selector.select_points(x, y, z, 0.0) del x, y, z f.seek(data_file._position_offset, os.SEEK_SET) halos = np.fromfile(f, dtype=self._halo_dt, count=pcount) if mask is None: continue for field in field_list: data = halos[field][si:ei][mask].astype("float64") yield (ptype, field), data def _yield_coordinates(self, data_file): # Just does halos pcount = data_file.header["num_halos"] with open(data_file.filename, "rb") as f: f.seek(data_file._position_offset, os.SEEK_SET) halos = np.fromfile(f, dtype=self._halo_dt, count=pcount) pos = np.empty((halos.size, 3), dtype="float64") pos[:, 0] = halos["particle_position_x"] pos[:, 1] = halos["particle_position_y"] pos[:, 2] = halos["particle_position_z"] yield "halos", pos def _count_particles(self, data_file): nhalos = data_file.header["num_halos"] si, ei = data_file.start, data_file.end if None not in (si, ei): nhalos = np.clip(nhalos - si, 0, ei - si) return {"halos": nhalos} def _identify_fields(self, data_file): fields = [("halos", f) for f in self._halo_dt.fields if "padding" not in f] return fields, {} ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3431528 yt-4.4.0/yt/frontends/rockstar/tests/0000755000175100001770000000000014714401715017214 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/rockstar/tests/__init__.py0000644000175100001770000000000014714401662021314 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/rockstar/tests/test_outputs.py0000644000175100001770000000261414714401662022354 0ustar00runnerdockerimport os.path from numpy.testing import assert_equal from yt.frontends.rockstar.api import RockstarDataset from yt.testing import ParticleSelectionComparison, requires_file from yt.utilities.answer_testing.framework import ( FieldValuesTest, data_dir_load, requires_ds, ) _fields = ( ("all", "particle_position_x"), ("all", "particle_position_y"), ("all", "particle_position_z"), ("all", "particle_mass"), ) r1 = "rockstar_halos/halos_0.0.bin" @requires_ds(r1) def test_fields_r1(): ds = data_dir_load(r1) assert_equal(str(ds), os.path.basename(r1)) for field in _fields: yield FieldValuesTest(r1, field, particle_type=True) @requires_file(r1) def test_RockstarDataset(): assert isinstance(data_dir_load(r1), RockstarDataset) @requires_file(r1) def test_particle_selection(): ds = data_dir_load(r1) psc = ParticleSelectionComparison(ds) psc.run_defaults() @requires_file(r1) def test_halo_loading(): ds = data_dir_load(r1) for halo_id, Npart in zip( ds.r["halos", "particle_identifier"], ds.r["halos", "num_p"], strict=False, ): halo = ds.halo("halos", halo_id) assert halo is not None # Try accessing properties halo.position halo.velocity halo.mass # Make sure we can access the member particles assert_equal(len(halo.member_ids), Npart) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3431528 yt-4.4.0/yt/frontends/sdf/0000755000175100001770000000000014714401715014776 
5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/sdf/__init__.py0000644000175100001770000000005214714401662017105 0ustar00runnerdocker""" __init__ for yt.frontends.sdf """ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/sdf/api.py0000644000175100001770000000014614714401662016123 0ustar00runnerdockerfrom .data_structures import SDFDataset from .fields import SDFFieldInfo from .io import IOHandlerSDF ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/sdf/data_structures.py0000644000175100001770000001545714714401662020601 0ustar00runnerdockerimport os from functools import cached_property import numpy as np from yt.data_objects.static_output import ParticleDataset, ParticleFile from yt.funcs import setdefaultattr from yt.geometry.particle_geometry_handler import ParticleIndex from yt.utilities.logger import ytLogger as mylog from yt.utilities.on_demand_imports import _requests as requests from yt.utilities.sdf import HTTPSDFRead, SDFIndex, SDFRead from .fields import SDFFieldInfo # currently specified by units_2HOT == 2 in header # in future will read directly from file units_2HOT_v2_length = 3.08567802e21 units_2HOT_v2_mass = 1.98892e43 units_2HOT_v2_time = 3.1558149984e16 class SDFFile(ParticleFile): pass class SDFDataset(ParticleDataset): _load_requirements = ["requests"] _index_class = ParticleIndex _file_class = SDFFile _field_info_class = SDFFieldInfo _particle_mass_name = None _particle_coordinates_name = None _particle_velocity_name = None _skip_cache = True _subspace = False def __init__( self, filename, dataset_type="sdf_particles", index_order=None, index_filename=None, bounding_box=None, sdf_header=None, midx_filename=None, midx_header=None, midx_level=None, field_map=None, units_override=None, unit_system="cgs", ): if bounding_box is not None: # This ensures that we know a bounding box has been applied self._domain_override = True self._subspace = True bbox = np.array(bounding_box, dtype="float64") if bbox.shape == (2, 3): bbox = bbox.transpose() self.domain_left_edge = bbox[:, 0] self.domain_right_edge = bbox[:, 1] else: self.domain_left_edge = self.domain_right_edge = None self.sdf_header = sdf_header self.midx_filename = midx_filename self.midx_header = midx_header self.midx_level = midx_level if field_map is None: field_map = {} self._field_map = field_map prefix = "" if self.midx_filename is not None: prefix += "midx_" if filename.startswith("http"): prefix += "http_" dataset_type = prefix + "sdf_particles" super().__init__( filename, dataset_type=dataset_type, units_override=units_override, unit_system=unit_system, index_order=index_order, index_filename=index_filename, ) def _parse_parameter_file(self): if self.parameter_filename.startswith("http"): sdf_class = HTTPSDFRead else: sdf_class = SDFRead self.sdf_container = sdf_class(self.parameter_filename, header=self.sdf_header) # Reference self.parameters = self.sdf_container.parameters self.dimensionality = 3 self.refine_by = 2 if self.domain_left_edge is None or self.domain_right_edge is None: R0 = self.parameters["R0"] if "offset_center" in self.parameters and self.parameters["offset_center"]: self.domain_left_edge = np.array([0, 0, 0], dtype=np.float64) self.domain_right_edge = np.array( [2.0 * self.parameters.get(f"R{ax}", R0) for ax in "xyz"], dtype=np.float64, ) else: self.domain_left_edge = np.array( 
[-self.parameters.get(f"R{ax}", R0) for ax in "xyz"], dtype=np.float64, ) self.domain_right_edge = np.array( [+self.parameters.get(f"R{ax}", R0) for ax in "xyz"], dtype=np.float64, ) self.domain_left_edge *= self.parameters.get("a", 1.0) self.domain_right_edge *= self.parameters.get("a", 1.0) self.domain_dimensions = np.ones(3, "int32") if self.parameters.get("do_periodic", False): self._periodicity = (True, True, True) else: self._periodicity = (False, False, False) self.cosmological_simulation = 1 self.current_redshift = self.parameters.get("redshift", 0.0) self.omega_lambda = self.parameters["Omega0_lambda"] self.omega_matter = self.parameters["Omega0_m"] if "Omega0_fld" in self.parameters: self.omega_lambda += self.parameters["Omega0_fld"] if "Omega0_r" in self.parameters: # not correct, but most codes can't handle Omega0_r self.omega_matter += self.parameters["Omega0_r"] self.hubble_constant = self.parameters["h_100"] self.current_time = units_2HOT_v2_time * self.parameters.get("tpos", 0.0) mylog.info("Calculating time to be %0.3e seconds", self.current_time) self.filename_template = self.parameter_filename self.file_count = 1 @cached_property def midx(self): if self.midx_filename is None: raise RuntimeError("SDF index0 file not supplied in load.") if "http" in self.midx_filename: sdf_class = HTTPSDFRead else: sdf_class = SDFRead indexdata = sdf_class(self.midx_filename, header=self.midx_header) return SDFIndex(self.sdf_container, indexdata, level=self.midx_level) def _set_code_unit_attributes(self): setdefaultattr( self, "length_unit", self.quan(1.0, self.parameters.get("length_unit", "kpc")), ) setdefaultattr( self, "velocity_unit", self.quan(1.0, self.parameters.get("velocity_unit", "kpc/Gyr")), ) setdefaultattr( self, "time_unit", self.quan(1.0, self.parameters.get("time_unit", "Gyr")) ) mass_unit = self.parameters.get("mass_unit", "1e10 Msun") if " " in mass_unit: factor, unit = mass_unit.split(" ") else: factor = 1.0 unit = mass_unit setdefaultattr(self, "mass_unit", self.quan(float(factor), unit)) @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if cls._missing_load_requirements(): return False sdf_header = kwargs.get("sdf_header", filename) if sdf_header.startswith("http"): hreq = requests.get(sdf_header, stream=True) if hreq.status_code != 200: return False # Grab a whole 4k page. 
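# Note on the check below (a reading of this method, not the SDF spec
# itself): SDF headers are plain text whose first line begins with the
# literal marker "# SDF", e.g. something like "# SDF 1.0"; sniffing the
# first few bytes of the local file, or of the first HTTP chunk, is
# therefore enough to accept or reject a candidate dataset.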
# iter_content yields bytes, so decode before the string comparison below line = next(hreq.iter_content(4096)).decode("ISO-8859-1") elif os.path.isfile(sdf_header): try: with open(sdf_header, encoding="ISO-8859-1") as f: line = f.read(10).strip() except PermissionError: return False else: return False return line.startswith("# SDF") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/sdf/definitions.py0000644000175100001770000000000014714401662017652 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/sdf/fields.py0000644000175100001770000000315414714401662016622 0ustar00runnerdockerfrom yt.fields.field_info_container import FieldInfoContainer class SDFFieldInfo(FieldInfoContainer): known_other_fields = () known_particle_fields = () _mass_field = None def __init__(self, ds, field_list): if "mass" in field_list: # known_particle_fields is a tuple, so concatenate rather than append, # and keep the (name, (units, aliases, display_name)) nesting self.known_particle_fields += ( ("mass", ("code_mass", ["particle_mass"], None)), ) possible_masses = ["mass", "m200b", "mvir"] mnf = "mass" for mn in possible_masses: if mn in ds.sdf_container.keys(): mnf = self._mass_field = mn break idf = ds._field_map.get("particle_index", "ident") xf = ds._field_map.get("particle_position_x", "x") yf = ds._field_map.get("particle_position_y", "y") zf = ds._field_map.get("particle_position_z", "z") vxf = ds._field_map.get("particle_velocity_x", "vx") vyf = ds._field_map.get("particle_velocity_y", "vy") vzf = ds._field_map.get("particle_velocity_z", "vz") self.known_particle_fields = ( (idf, ("dimensionless", ["particle_index"], None)), (xf, ("code_length", ["particle_position_x"], None)), (yf, ("code_length", ["particle_position_y"], None)), (zf, ("code_length", ["particle_position_z"], None)), (vxf, ("code_velocity", ["particle_velocity_x"], None)), (vyf, ("code_velocity", ["particle_velocity_y"], None)), (vzf, ("code_velocity", ["particle_velocity_z"], None)), (mnf, ("code_mass", ["particle_mass"], None)), ) super().__init__(ds, field_list) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/sdf/io.py0000644000175100001770000001606714714401662015766 0ustar00runnerdockerimport numpy as np from yt.funcs import mylog from yt.utilities.io_handler import BaseParticleIOHandler class IOHandlerSDF(BaseParticleIOHandler): _dataset_type = "sdf_particles" @property def _handle(self): return self.ds.sdf_container def _read_fluid_selection(self, chunks, selector, fields, size): raise NotImplementedError def _read_particle_coords(self, chunks, ptf): assert len(ptf) == 1 assert list(ptf.keys())[0] == "dark_matter" data_files = self._get_data_files(chunks) assert len(data_files) == 1 for _data_file in sorted(data_files, key=lambda x: (x.filename, x.start)): yield ( "dark_matter", ( self._handle["x"], self._handle["y"], self._handle["z"], ), 0.0, ) def _read_particle_fields(self, chunks, ptf, selector): assert len(ptf) == 1 assert list(ptf.keys())[0] == "dark_matter" data_files = self._get_data_files(chunks) assert len(data_files) == 1 for _data_file in sorted(data_files, key=lambda x: (x.filename, x.start)): for ptype, field_list in sorted(ptf.items()): x = self._handle["x"] y = self._handle["y"] z = self._handle["z"] mask = selector.select_points(x, y, z, 0.0) del x, y, z if mask is None: continue for field in field_list: if field == "mass": data = np.ones(mask.sum(), dtype="float64") data *= self.ds.parameters["particle_mass"] else: data = self._handle[field][mask] yield (ptype, field), data def _identify_fields(self, data_file): fields = 
[("dark_matter", v) for v in self._handle.keys()] fields.append(("dark_matter", "mass")) return fields, {} def _count_particles(self, data_file): pcount = self._handle["x"].size if pcount > 1e9: mylog.warning( "About to load %i particles into memory. " "You may want to consider a midx-enabled load", pcount, ) return {"dark_matter": pcount} class IOHandlerHTTPSDF(IOHandlerSDF): _dataset_type = "http_sdf_particles" def _read_particle_coords(self, chunks, ptf): chunks = list(chunks) data_files = set() assert len(ptf) == 1 assert list(ptf.keys())[0] == "dark_matter" for chunk in chunks: for obj in chunk.objs: data_files.update(obj.data_files) assert len(data_files) == 1 for _data_file in data_files: pcount = self._handle["x"].size yield ( "dark_matter", ( self._handle["x"][:pcount], self._handle["y"][:pcount], self._handle["z"][:pcount], ), 0.0, ) def _read_particle_fields(self, chunks, ptf, selector): chunks = list(chunks) data_files = set() assert len(ptf) == 1 assert list(ptf.keys())[0] == "dark_matter" for chunk in chunks: for obj in chunk.objs: data_files.update(obj.data_files) assert len(data_files) == 1 for _data_file in data_files: pcount = self._handle["x"].size for ptype, field_list in sorted(ptf.items()): x = self._handle["x"][:pcount] y = self._handle["y"][:pcount] z = self._handle["z"][:pcount] mask = selector.select_points(x, y, z, 0.0) del x, y, z if mask is None: continue for field in field_list: if field == "mass": if self.ds.field_info._mass_field is None: pm = 1.0 if "particle_mass" in self.ds.parameters: pm = self.ds.parameters["particle_mass"] else: raise RuntimeError data = pm * np.ones(mask.sum(), dtype="float64") else: data = self._handle[self.ds.field_info._mass_field][:][mask] else: data = self._handle[field][:][mask] yield (ptype, field), data def _count_particles(self, data_file): # http_array.shape is a tuple; the particle count is its first entry return {"dark_matter": self._handle["x"].http_array.shape[0]} class IOHandlerSIndexSDF(IOHandlerSDF): _dataset_type = "midx_sdf_particles" def _read_particle_coords(self, chunks, ptf): dle = self.ds.domain_left_edge.in_units("code_length").d dre = self.ds.domain_right_edge.in_units("code_length").d for dd in self.ds.midx.iter_bbox_data(dle, dre, ["x", "y", "z"]): yield "dark_matter", (dd["x"], dd["y"], dd["z"]), 0.0 def _read_particle_fields(self, chunks, ptf, selector): dle = self.ds.domain_left_edge.in_units("code_length").d dre = self.ds.domain_right_edge.in_units("code_length").d required_fields = [] for field_list in sorted(ptf.values()): for field in field_list: if field == "mass": continue required_fields.append(field) for dd in self.ds.midx.iter_bbox_data(dle, dre, required_fields): for ptype, field_list in sorted(ptf.items()): x = dd["x"] y = dd["y"] z = dd["z"] mask = selector.select_points(x, y, z, 0.0) del x, y, z if mask is None: continue for field in field_list: if field == "mass": data = np.ones(mask.sum(), dtype="float64") data *= self.ds.parameters["particle_mass"] else: data = dd[field][mask] yield (ptype, field), data def _count_particles(self, data_file): dle = self.ds.domain_left_edge.in_units("code_length").d dre = self.ds.domain_right_edge.in_units("code_length").d pcount_estimate = self.ds.midx.get_nparticles_bbox(dle, dre) if pcount_estimate > 1e9: mylog.warning( "Filtering %i particles to find total. 
" "You may want to reconsider your bounding box.", pcount_estimate, ) pcount = 0 for dd in self.ds.midx.iter_bbox_data(dle, dre, ["x"]): pcount += dd["x"].size return {"dark_matter": pcount} def _identify_fields(self, data_file): fields = [("dark_matter", v) for v in self._handle.keys()] fields.append(("dark_matter", "mass")) return fields, {} class IOHandlerSIndexHTTPSDF(IOHandlerSIndexSDF): _dataset_type = "midx_http_sdf_particles" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/sdf/misc.py0000644000175100001770000000000014714401662016272 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3431528 yt-4.4.0/yt/frontends/sph/0000755000175100001770000000000014714401715015014 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/sph/__init__.py0000644000175100001770000000000014714401662017114 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/sph/api.py0000644000175100001770000000006314714401662016137 0ustar00runnerdocker""" API for general SPH frontend machinery """ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/sph/data_structures.py0000644000175100001770000001065314714401662020610 0ustar00runnerdockerimport os import numpy as np from yt.data_objects.static_output import ParticleDataset from yt.funcs import mylog from yt.geometry.particle_geometry_handler import ParticleIndex class SPHDataset(ParticleDataset): default_kernel_name = "cubic" _sph_smoothing_styles = ["scatter", "gather"] _sph_smoothing_style = "scatter" _num_neighbors = 32 _use_sph_normalization = True def __init__( self, filename, dataset_type=None, units_override=None, unit_system="cgs", index_order=None, index_filename=None, kdtree_filename=None, kernel_name=None, default_species_fields=None, ): if kernel_name is None: self.kernel_name = self.default_kernel_name else: self.kernel_name = kernel_name self.kdtree_filename = kdtree_filename super().__init__( filename, dataset_type=dataset_type, units_override=units_override, unit_system=unit_system, index_order=index_order, index_filename=index_filename, default_species_fields=default_species_fields, ) @property def num_neighbors(self): return self._num_neighbors @num_neighbors.setter def num_neighbors(self, value): if value < 0: raise ValueError(f"Negative value not allowed: {value}") self._num_neighbors = value @property def sph_smoothing_style(self): return self._sph_smoothing_style @sph_smoothing_style.setter def sph_smoothing_style(self, value): if value not in self._sph_smoothing_styles: raise ValueError( f"Smoothing style not implemented: {value}, " "please select one of the following: ", self._sph_smoothing_styles, ) self._sph_smoothing_style = value @property def use_sph_normalization(self): return self._use_sph_normalization @use_sph_normalization.setter def use_sph_normalization(self, value): if value is not True and value is not False: raise ValueError("SPH normalization needs to be True or False!") self._use_sph_normalization = value class SPHParticleIndex(ParticleIndex): def _initialize_index(self): ds = self.dataset ds._file_hash = self._generate_hash() if hasattr(self.io, "_generate_smoothing_length"): self.io._generate_smoothing_length(self) super()._initialize_index() def _generate_kdtree(self, fname): 
from yt.utilities.lib.cykdtree import PyKDTree if fname is not None: if os.path.exists(fname): mylog.info("Loading KDTree from %s", os.path.basename(fname)) kdtree = PyKDTree.from_file(fname) if kdtree.data_version != self.ds._file_hash: mylog.info("Detected hash mismatch, regenerating KDTree") else: self._kdtree = kdtree return positions = [] for data_file in self.data_files: for _, ppos in self.io._yield_coordinates( data_file, needed_ptype=self.ds._sph_ptypes[0] ): positions.append(ppos) if positions == []: self._kdtree = None return positions = np.concatenate(positions) mylog.info("Allocating KDTree for %s particles", positions.shape[0]) num_neighbors = getattr(self.ds, "num_neighbors", 32) self._kdtree = PyKDTree( positions.astype("float64"), left_edge=self.ds.domain_left_edge, right_edge=self.ds.domain_right_edge, periodic=np.array(self.ds.periodicity), leafsize=2 * int(num_neighbors), data_version=self.ds._file_hash, ) if fname is not None: self._kdtree.save(fname) @property def kdtree(self): if hasattr(self, "_kdtree"): return self._kdtree ds = self.ds if getattr(ds, "kdtree_filename", None) is None: if os.path.exists(ds.parameter_filename): fname = ds.parameter_filename + ".kdtree" else: # we don't want to write to disk for in-memory data fname = None else: fname = ds.kdtree_filename self._generate_kdtree(fname) return self._kdtree ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/sph/fields.py0000644000175100001770000000412214714401662016634 0ustar00runnerdockerfrom yt._typing import KnownFieldsT from yt.fields.field_info_container import FieldInfoContainer from yt.fields.species_fields import setup_species_fields class SPHFieldInfo(FieldInfoContainer): known_particle_fields: KnownFieldsT = ( ("Mass", ("code_mass", ["particle_mass"], None)), ("Masses", ("code_mass", ["particle_mass"], None)), ("Coordinates", ("code_length", ["particle_position"], None)), ("Velocity", ("code_velocity", ["particle_velocity"], None)), ("Velocities", ("code_velocity", ["particle_velocity"], None)), ("ParticleIDs", ("", ["particle_index"], None)), ("InternalEnergy", ("code_specific_energy", ["specific_thermal_energy"], None)), ("SmoothingLength", ("code_length", ["smoothing_length"], None)), ("Density", ("code_mass / code_length**3", ["density"], None)), ("MaximumTemperature", ("K", [], None)), ("Temperature", ("K", ["temperature"], None)), ("Epsilon", ("code_length", [], None)), ("Metals", ("code_metallicity", ["metallicity"], None)), ("Metallicity", ("code_metallicity", ["metallicity"], None)), ("Phi", ("code_length", [], None)), ("Potential", ("code_velocity**2", ["gravitational_potential"], None)), ("StarFormationRate", ("Msun / yr", ["star_formation_rate"], None)), ("FormationTime", ("code_time", ["creation_time"], None)), ("Metallicity_00", ("", ["metallicity"], None)), ("InitialMass", ("code_mass", [], None)), ("TrueMass", ("code_mass", [], None)), ("ElevenMetalMasses", ("code_mass", [], None)), ("ColdFraction", ("", ["cold_fraction"], None)), ("HotTemperature", ("code_temperature", ["hot_temperature"], None)), ("CloudFraction", ("", ["cold_fraction"], None)), ("HotPhaseTemperature", ("code_temperature", ["hot_temperature"], None)), ) def setup_particle_fields(self, ptype, *args, **kwargs): super().setup_particle_fields(ptype, *args, **kwargs) setup_species_fields(self, ptype) def setup_fluid_index_fields(self): pass ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 
yt-4.4.0/yt/frontends/sph/io.py0000644000175100001770000000071614714401662016002 0ustar00runnerdocker""" Generic file-handling functions for SPH data """ from yt.utilities.io_handler import BaseParticleIOHandler class IOHandlerSPH(BaseParticleIOHandler): """IOHandler implementation specifically for SPH data This exists to handle particles with smoothing lengths, which require us to read in smoothing lengths along with the particle coordinates to determine particle extents. At present this is non-functional. """ pass ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3431528 yt-4.4.0/yt/frontends/stream/0000755000175100001770000000000014714401715015515 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/stream/__init__.py0000644000175100001770000000000014714401662017615 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/stream/api.py0000644000175100001770000000035614714401662016645 0ustar00runnerdockerfrom . import sample_data, tests from .data_structures import ( StreamDataset, StreamGrid, StreamHandler, StreamHierarchy, hexahedral_connectivity, ) from .fields import StreamFieldInfo from .io import IOHandlerStream ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/stream/data_structures.py0000644000175100001770000010507414714401662021313 0ustar00runnerdockerimport os import time import uuid import weakref from collections import UserDict from functools import cached_property from itertools import chain, product, repeat from numbers import Number as numeric_type import numpy as np from more_itertools import always_iterable from yt._typing import AxisOrder, FieldKey from yt.data_objects.field_data import YTFieldData from yt.data_objects.index_subobjects.grid_patch import AMRGridPatch from yt.data_objects.index_subobjects.octree_subset import OctreeSubset from yt.data_objects.index_subobjects.stretched_grid import StretchedGrid from yt.data_objects.index_subobjects.unstructured_mesh import ( SemiStructuredMesh, UnstructuredMesh, ) from yt.data_objects.static_output import Dataset, ParticleFile from yt.data_objects.unions import MeshUnion, ParticleUnion from yt.frontends.sph.data_structures import SPHParticleIndex from yt.funcs import setdefaultattr from yt.geometry.api import Geometry from yt.geometry.geometry_handler import Index, YTDataChunk from yt.geometry.grid_geometry_handler import GridIndex from yt.geometry.oct_container import OctreeContainer from yt.geometry.oct_geometry_handler import OctreeIndex from yt.geometry.unstructured_mesh_handler import UnstructuredIndex from yt.units import YTQuantity from yt.utilities.io_handler import io_registry from yt.utilities.lib.cykdtree import PyKDTree from yt.utilities.lib.misc_utilities import ( _obtain_coords_and_widths, get_box_grids_level, ) from yt.utilities.lib.particle_kdtree_tools import ( estimate_density, generate_smoothing_length, ) from yt.utilities.logger import ytLogger as mylog from .definitions import process_data, set_particle_types from .fields import StreamFieldInfo class StreamGrid(AMRGridPatch): """ Class representing a single In-memory Grid instance. """ __slots__ = ["proc_num"] _id_offset = 0 def __init__(self, id, index): """ Returns an instance of StreamGrid with *id*, associated with *filename* and *index*. 
""" # All of the field parameters will be passed to us as needed. AMRGridPatch.__init__(self, id, filename=None, index=index) self._children_ids = [] self._parent_id = -1 self.Level = -1 def set_filename(self, filename): pass @property def Parent(self): if self._parent_id == -1: return None return self.index.grids[self._parent_id - self._id_offset] @property def Children(self): return [self.index.grids[cid - self._id_offset] for cid in self._children_ids] class StreamStretchedGrid(StretchedGrid): _id_offset = 0 def __init__(self, id, index): cell_widths = index.grid_cell_widths[id - self._id_offset] super().__init__(id, cell_widths, index=index) self._children_ids = [] self._parent_id = -1 self.Level = -1 @property def Parent(self): if self._parent_id == -1: return None return self.index.grids[self._parent_id - self._id_offset] @property def Children(self): return [self.index.grids[cid - self._id_offset] for cid in self._children_ids] class StreamHandler: def __init__( self, left_edges, right_edges, dimensions, levels, parent_ids, particle_count, processor_ids, fields, field_units, code_units, io=None, particle_types=None, periodicity=(True, True, True), *, cell_widths=None, parameters=None, ): if particle_types is None: particle_types = {} self.left_edges = np.array(left_edges) self.right_edges = np.array(right_edges) self.dimensions = dimensions self.levels = levels self.parent_ids = parent_ids self.particle_count = particle_count self.processor_ids = processor_ids self.num_grids = self.levels.size self.fields = fields self.field_units = field_units self.code_units = code_units self.io = io self.particle_types = particle_types self.periodicity = periodicity self.cell_widths = cell_widths if parameters is None: self.parameters = {} else: self.parameters = parameters.copy() def get_fields(self): return self.fields.all_fields def get_particle_type(self, field): if field in self.particle_types: return self.particle_types[field] else: return False class StreamHierarchy(GridIndex): grid = StreamGrid def __init__(self, ds, dataset_type=None): self.dataset_type = dataset_type self.float_type = "float64" self.dataset = weakref.proxy(ds) # for _obtain_enzo self.stream_handler = ds.stream_handler self.float_type = "float64" self.directory = os.getcwd() GridIndex.__init__(self, ds, dataset_type) def _count_grids(self): self.num_grids = self.stream_handler.num_grids def _icoords_to_fcoords(self, icoords, ires, axes=None): """ We check here that we have cell_widths, and if we do, we will provide them. 
""" if self.grid_cell_widths is None: return super()._icoords_to_fcoords(icoords, ires, axes) if axes is None: axes = [0, 1, 2] # Transpose these by reversing the shape coords = np.empty(icoords.shape, dtype="f8") cell_widths = np.empty(icoords.shape, dtype="f8") for i, ax in enumerate(axes): coords[:, i], cell_widths[:, i] = _obtain_coords_and_widths( icoords[:, i], ires, self.grid_cell_widths[0][ax], self.ds.domain_left_edge[ax].d, ) return coords, cell_widths def _parse_index(self): self.grid_dimensions = self.stream_handler.dimensions self.grid_left_edge[:] = self.stream_handler.left_edges self.grid_right_edge[:] = self.stream_handler.right_edges self.grid_levels[:] = self.stream_handler.levels self.min_level = self.grid_levels.min() self.grid_procs = self.stream_handler.processor_ids self.grid_particle_count[:] = self.stream_handler.particle_count if self.stream_handler.cell_widths is not None: self.grid_cell_widths = self.stream_handler.cell_widths[:] self.grid = StreamStretchedGrid else: self.grid_cell_widths = None mylog.debug("Copying reverse tree") self.grids = [] # We enumerate, so it's 0-indexed id and 1-indexed pid for id in range(self.num_grids): self.grids.append(self.grid(id, self)) self.grids[id].Level = self.grid_levels[id, 0] parent_ids = self.stream_handler.parent_ids if parent_ids is not None: reverse_tree = self.stream_handler.parent_ids.tolist() # Initial setup: for gid, pid in enumerate(reverse_tree): if pid >= 0: self.grids[gid]._parent_id = pid self.grids[pid]._children_ids.append(self.grids[gid].id) else: mylog.debug("Reconstructing parent-child relationships") self._reconstruct_parent_child() self.max_level = self.grid_levels.max() mylog.debug("Preparing grids") temp_grids = np.empty(self.num_grids, dtype="object") for i, grid in enumerate(self.grids): if (i % 1e4) == 0: mylog.debug("Prepared % 7i / % 7i grids", i, self.num_grids) grid.filename = None grid._prepare_grid() grid._setup_dx() grid.proc_num = self.grid_procs[i] temp_grids[i] = grid self.grids = temp_grids mylog.debug("Prepared") def _reconstruct_parent_child(self): mask = np.empty(len(self.grids), dtype="int32") mylog.debug("First pass; identifying child grids") for i, grid in enumerate(self.grids): get_box_grids_level( self.grid_left_edge[i, :], self.grid_right_edge[i, :], self.grid_levels[i].item() + 1, self.grid_left_edge, self.grid_right_edge, self.grid_levels, mask, ) ids = np.where(mask.astype("bool")) grid._children_ids = ids[0] # where is a tuple mylog.debug("Second pass; identifying parents") self.stream_handler.parent_ids = ( np.zeros(self.stream_handler.num_grids, "int64") - 1 ) for i, grid in enumerate(self.grids): # Second pass for child in grid.Children: child._parent_id = i # _id_offset = 0 self.stream_handler.parent_ids[child.id] = i def _initialize_grid_arrays(self): GridIndex._initialize_grid_arrays(self) self.grid_procs = np.zeros((self.num_grids, 1), "int32") def _detect_output_fields(self): # NOTE: Because particle unions add to the actual field list, without # having the keys in the field list itself, we need to double check # here. 
fl = set(self.stream_handler.get_fields()) fl.update(set(getattr(self, "field_list", []))) self.field_list = list(fl) def _populate_grid_objects(self): for g in self.grids: g._setup_dx() self.max_level = self.grid_levels.max() def _setup_data_io(self): if self.stream_handler.io is not None: self.io = self.stream_handler.io else: self.io = io_registry[self.dataset_type](self.ds) def _reset_particle_count(self): self.grid_particle_count[:] = self.stream_handler.particle_count for i, grid in enumerate(self.grids): grid.NumberOfParticles = self.grid_particle_count[i, 0] def update_data(self, data): """ Update the stream data with a new data dict. If fields already exist, they will be replaced, but if they do not, they will be added. Fields already in the stream but not part of the data dict will be left alone. """ particle_types = set_particle_types(data[0]) self.stream_handler.particle_types.update(particle_types) self.ds._find_particle_types() for i, grid in enumerate(self.grids): field_units, gdata, number_of_particles = process_data(data[i]) self.stream_handler.particle_count[i] = number_of_particles self.stream_handler.field_units.update(field_units) for field in gdata: if field in grid.field_data: grid.field_data.pop(field, None) self.stream_handler.fields[grid.id][field] = gdata[field] self._reset_particle_count() # We only want to create a superset of fields here. for field in self.ds.field_list: if field[0] == "all": self.ds.field_list.remove(field) self._detect_output_fields() self.ds.create_field_info() mylog.debug("Creating Particle Union 'all'") pu = ParticleUnion("all", list(self.ds.particle_types_raw)) self.ds.add_particle_union(pu) self.ds.particle_types = tuple(set(self.ds.particle_types)) class StreamDataset(Dataset): _index_class: type[Index] = StreamHierarchy _field_info_class = StreamFieldInfo _dataset_type = "stream" def __init__( self, stream_handler, storage_filename=None, geometry="cartesian", unit_system="cgs", default_species_fields=None, *, axis_order: AxisOrder | None = None, ): self.fluid_types += ("stream",) self.geometry = Geometry(geometry) self.stream_handler = stream_handler self._find_particle_types() name = f"InMemoryParameterFile_{uuid.uuid4().hex}" from yt.data_objects.static_output import _cached_datasets if geometry == "spectral_cube": # mimic FITSDataset specific interface to allow testing with # fake, in memory data setdefaultattr(self, "lon_axis", 0) setdefaultattr(self, "lat_axis", 1) setdefaultattr(self, "spec_axis", 2) setdefaultattr(self, "lon_name", "X") setdefaultattr(self, "lat_name", "Y") setdefaultattr(self, "spec_name", "z") setdefaultattr(self, "spec_unit", "") setdefaultattr( self, "pixel2spec", lambda pixel_value: self.arr(pixel_value, self.spec_unit), # type: ignore [attr-defined] ) setdefaultattr( self, "spec2pixel", lambda spec_value: self.arr(spec_value, "code_length"), ) _cached_datasets[name] = self Dataset.__init__( self, name, self._dataset_type, unit_system=unit_system, default_species_fields=default_species_fields, axis_order=axis_order, ) @property def filename(self): return self.stream_handler.name @cached_property def unique_identifier(self) -> str: return str(self.parameters["CurrentTimeIdentifier"]) def _parse_parameter_file(self): self.parameters["CurrentTimeIdentifier"] = time.time() self.domain_left_edge = self.stream_handler.domain_left_edge.copy() self.domain_right_edge = self.stream_handler.domain_right_edge.copy() self.refine_by = self.stream_handler.refine_by self.dimensionality = self.stream_handler.dimensionality 
self._periodicity = self.stream_handler.periodicity self.domain_dimensions = self.stream_handler.domain_dimensions self.current_time = self.stream_handler.simulation_time self.gamma = 5.0 / 3.0 self.parameters["EOSType"] = -1 self.parameters["CosmologyHubbleConstantNow"] = 1.0 self.parameters["CosmologyCurrentRedshift"] = 1.0 self.parameters["HydroMethod"] = -1 self.parameters.update(self.stream_handler.parameters) if self.stream_handler.cosmology_simulation: self.cosmological_simulation = 1 self.current_redshift = self.stream_handler.current_redshift self.omega_lambda = self.stream_handler.omega_lambda self.omega_matter = self.stream_handler.omega_matter self.hubble_constant = self.stream_handler.hubble_constant else: self.current_redshift = 0.0 self.omega_lambda = 0.0 self.omega_matter = 0.0 self.hubble_constant = 0.0 self.cosmological_simulation = 0 def _set_units(self): self.field_units = self.stream_handler.field_units def _set_code_unit_attributes(self): base_units = self.stream_handler.code_units attrs = ( "length_unit", "mass_unit", "time_unit", "velocity_unit", "magnetic_unit", ) cgs_units = ("cm", "g", "s", "cm/s", "gauss") for unit, attr, cgs_unit in zip(base_units, attrs, cgs_units, strict=True): if isinstance(unit, str): if unit == "code_magnetic": # If no magnetic unit was explicitly specified # we skip it now and take care of it at the bottom continue else: uq = self.quan(1.0, unit) elif isinstance(unit, numeric_type): uq = self.quan(unit, cgs_unit) elif isinstance(unit, YTQuantity): uq = unit elif isinstance(unit, tuple): uq = self.quan(unit[0], unit[1]) else: raise RuntimeError(f"{attr} ({unit}) is invalid.") setattr(self, attr, uq) if not hasattr(self, "magnetic_unit"): self.magnetic_unit = np.sqrt( 4 * np.pi * self.mass_unit / (self.time_unit**2 * self.length_unit) ) @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: return False @property def _skip_cache(self): return True def _find_particle_types(self): particle_types = set() for k, v in self.stream_handler.particle_types.items(): if v: particle_types.add(k[0]) self.particle_types = tuple(particle_types) self.particle_types_raw = self.particle_types class StreamDictFieldHandler(UserDict): _additional_fields: tuple[FieldKey, ...] = () @property def all_fields(self): self_fields = chain.from_iterable(s.keys() for s in self.values()) self_fields = list(set(self_fields)) fields = list(self._additional_fields) + self_fields fields = list(set(fields)) return fields class StreamParticleIndex(SPHParticleIndex): def __init__(self, ds, dataset_type=None): self.stream_handler = ds.stream_handler super().__init__(ds, dataset_type) def _setup_data_io(self): if self.stream_handler.io is not None: self.io = self.stream_handler.io else: self.io = io_registry[self.dataset_type](self.ds) def update_data(self, data): """ Update the stream data with a new data dict. If fields already exist, they will be replaced, but if they do not, they will be added. Fields already in the stream but not part of the data dict will be left alone. 
""" # Alias ds = self.ds handler = ds.stream_handler # Preprocess field_units, data, _ = process_data(data) pdata = {} for key in data.keys(): if not isinstance(key, tuple): field = ("io", key) mylog.debug("Reassigning '%s' to '%s'", key, field) else: field = key pdata[field] = data[key] data = pdata # Drop reference count particle_types = set_particle_types(data) # Update particle types handler.particle_types.update(particle_types) ds._find_particle_types() # Update fields handler.field_units.update(field_units) fields = handler.fields for field in data.keys(): if field not in fields._additional_fields: fields._additional_fields += (field,) fields["stream_file"].update(data) # Update field list for field in self.ds.field_list: if field[0] in ["all", "nbody"]: self.ds.field_list.remove(field) self._detect_output_fields() self.ds.create_field_info() class StreamParticleFile(ParticleFile): pass class StreamParticlesDataset(StreamDataset): _index_class = StreamParticleIndex _file_class = StreamParticleFile _field_info_class = StreamFieldInfo _dataset_type = "stream_particles" file_count = 1 filename_template = "stream_file" _proj_type = "particle_proj" def __init__( self, stream_handler, storage_filename=None, geometry="cartesian", unit_system="cgs", default_species_fields=None, *, axis_order: AxisOrder | None = None, ): super().__init__( stream_handler, storage_filename=storage_filename, geometry=geometry, unit_system=unit_system, default_species_fields=default_species_fields, axis_order=axis_order, ) fields = list(stream_handler.fields["stream_file"].keys()) sph_ptypes = [] for ptype in self.particle_types: if (ptype, "density") in fields and (ptype, "smoothing_length") in fields: sph_ptypes.append(ptype) if len(sph_ptypes) == 1: self._sph_ptypes = tuple(sph_ptypes) elif len(sph_ptypes) > 1: raise ValueError("Multiple SPH particle types are currently not supported!") def add_sph_fields(self, n_neighbors=32, kernel="cubic", sph_ptype="io"): """Add SPH fields for the specified particle type. For a particle type with "particle_position" and "particle_mass" already defined, this method adds the "smoothing_length" and "density" fields. "smoothing_length" is computed as the distance to the nth nearest neighbor. "density" is computed as the SPH (gather) smoothed mass. The SPH fields are added only if they don't already exist. Parameters ---------- n_neighbors : int The number of neighbors to use in smoothing length computation. kernel : str The kernel function to use in density estimation. sph_ptype : str The SPH particle type. Each dataset has one sph_ptype only. This method will overwrite existing sph_ptype of the dataset. """ mylog.info("Generating SPH fields") # Unify units l_unit = "code_length" m_unit = "code_mass" d_unit = "code_mass / code_length**3" # Read basic fields ad = self.all_data() pos = ad[sph_ptype, "particle_position"].to(l_unit).d mass = ad[sph_ptype, "particle_mass"].to(m_unit).d # Construct k-d tree kdtree = PyKDTree( pos.astype("float64"), left_edge=self.domain_left_edge.to_value(l_unit), right_edge=self.domain_right_edge.to_value(l_unit), periodic=self.periodicity, leafsize=2 * int(n_neighbors), ) order = np.argsort(kdtree.idx) def exists(fname): if (sph_ptype, fname) in self.derived_field_list: mylog.info( "Field ('%s','%s') already exists. 
Skipping", sph_ptype, fname ) return True else: mylog.info("Generating field ('%s','%s')", sph_ptype, fname) return False data = {} # Add smoothing length field fname = "smoothing_length" if not exists(fname): hsml = generate_smoothing_length(pos[kdtree.idx], kdtree, n_neighbors) hsml = hsml[order] data[sph_ptype, "smoothing_length"] = (hsml, l_unit) else: hsml = ad[sph_ptype, fname].to(l_unit).d # Add density field fname = "density" if not exists(fname): dens = estimate_density( pos[kdtree.idx], mass[kdtree.idx], hsml[kdtree.idx], kdtree, kernel_name=kernel, ) dens = dens[order] data[sph_ptype, "density"] = (dens, d_unit) # Add fields self._sph_ptypes = (sph_ptype,) self.index.update_data(data) self.num_neighbors = n_neighbors _cis = np.fromiter( chain.from_iterable(product([0, 1], [0, 1], [0, 1])), dtype=np.int64, count=8 * 3 ) _cis.shape = (8, 3) def hexahedral_connectivity(xgrid, ygrid, zgrid): r"""Define the cell coordinates and cell neighbors of a hexahedral mesh for a semistructured grid. Used to specify the connectivity and coordinates parameters used in :func:`~yt.frontends.stream.data_structures.load_hexahedral_mesh`. Parameters ---------- xgrid : array_like x-coordinates of boundaries of the hexahedral cells. Should be a one-dimensional array. ygrid : array_like y-coordinates of boundaries of the hexahedral cells. Should be a one-dimensional array. zgrid : array_like z-coordinates of boundaries of the hexahedral cells. Should be a one-dimensional array. Returns ------- coords : array_like The list of (x,y,z) coordinates of the vertices of the mesh. Is of size (M,3) where M is the number of vertices. connectivity : array_like For each hexahedron h in the mesh, gives the index of each of h's eight vertices. Is of size (N,8), where N is the number of hexahedra. Examples -------- >>> xgrid = np.array([-1, -0.25, 0, 0.25, 1]) >>> coords, conn = hexahedral_connectivity(xgrid, xgrid, xgrid) >>> coords array([[-1. , -1. , -1. ], [-1. , -1. , -0.25], [-1. , -1. , 0. ], ..., [ 1. , 1. , 0. ], [ 1. , 1. , 0.25], [ 1. , 1. , 1. 
]]) >>> conn array([[ 0, 1, 5, 6, 25, 26, 30, 31], [ 1, 2, 6, 7, 26, 27, 31, 32], [ 2, 3, 7, 8, 27, 28, 32, 33], ..., [ 91, 92, 96, 97, 116, 117, 121, 122], [ 92, 93, 97, 98, 117, 118, 122, 123], [ 93, 94, 98, 99, 118, 119, 123, 124]]) """ nx = len(xgrid) ny = len(ygrid) nz = len(zgrid) coords = np.zeros((nx, ny, nz, 3), dtype="float64", order="C") coords[:, :, :, 0] = xgrid[:, None, None] coords[:, :, :, 1] = ygrid[None, :, None] coords[:, :, :, 2] = zgrid[None, None, :] coords.shape = (nx * ny * nz, 3) cycle = np.rollaxis(np.indices((nx - 1, ny - 1, nz - 1)), 0, 4) cycle.shape = ((nx - 1) * (ny - 1) * (nz - 1), 3) off = _cis + cycle[:, np.newaxis] connectivity = np.array( ((off[:, :, 0] * ny) + off[:, :, 1]) * nz + off[:, :, 2], order="C" ) return coords, connectivity class StreamHexahedralMesh(SemiStructuredMesh): _connectivity_length = 8 _index_offset = 0 class StreamHexahedralHierarchy(UnstructuredIndex): def __init__(self, ds, dataset_type=None): self.stream_handler = ds.stream_handler super().__init__(ds, dataset_type) def _initialize_mesh(self): coords = self.stream_handler.fields.pop("coordinates") connect = self.stream_handler.fields.pop("connectivity") self.meshes = [ StreamHexahedralMesh(0, self.index_filename, connect, coords, self) ] def _setup_data_io(self): if self.stream_handler.io is not None: self.io = self.stream_handler.io else: self.io = io_registry[self.dataset_type](self.ds) def _detect_output_fields(self): self.field_list = list(set(self.stream_handler.get_fields())) class StreamHexahedralDataset(StreamDataset): _index_class = StreamHexahedralHierarchy _field_info_class = StreamFieldInfo _dataset_type = "stream_hexahedral" class StreamOctreeSubset(OctreeSubset): domain_id = 1 _domain_offset = 1 def __init__(self, base_region, ds, oct_handler, num_zones=2, num_ghost_zones=0): self._num_zones = num_zones self.field_data = YTFieldData() self.field_parameters = {} self.ds = ds self._oct_handler = oct_handler self._last_mask = None self._last_selector_id = None self._current_particle_type = "io" self._current_fluid_type = self.ds.default_fluid_type self.base_region = base_region self.base_selector = base_region.selector self._num_ghost_zones = num_ghost_zones if num_ghost_zones > 0: if not all(ds.periodicity): mylog.warning( "Ghost zones will wrongly assume the domain to be periodic." ) base_grid = StreamOctreeSubset(base_region, ds, oct_handler, num_zones) self._base_grid = base_grid @property def oct_handler(self): return self._oct_handler def retrieve_ghost_zones(self, ngz, fields, smoothed=False): try: new_subset = self._subset_with_gz mylog.debug("Reusing previous subset with ghost zone.") except AttributeError: new_subset = StreamOctreeSubset( self.base_region, self.ds, self.oct_handler, self._num_zones, num_ghost_zones=ngz, ) self._subset_with_gz = new_subset return new_subset def _fill_no_ghostzones(self, content, dest, selector, offset): # Here we get a copy of the file, which we skip through and read the # bits we want. oct_handler = self.oct_handler cell_count = selector.count_oct_cells(self.oct_handler, self.domain_id) levels, cell_inds, file_inds = self.oct_handler.file_index_octs( selector, self.domain_id, cell_count ) levels[:] = 0 dest.update((field, np.empty(cell_count, dtype="float64")) for field in content) # Make references ... 
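# (Descriptive note added for clarity; semantics inferred from the usage
# here, not from upstream documentation: ``fill_level`` copies the
# file-ordered source values selected by ``file_inds`` into the per-field
# ``dest`` buffers at ``cell_inds``, and because a stream octree has a
# single domain, ``levels`` is zeroed above so every cell is treated as
# level 0.)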
count = oct_handler.fill_level( 0, levels, cell_inds, file_inds, dest, content, offset ) return count def _fill_with_ghostzones(self, content, dest, selector, offset): oct_handler = self.oct_handler ndim = self.ds.dimensionality cell_count = ( selector.count_octs(self.oct_handler, self.domain_id) * self.nz**ndim ) gz_cache = getattr(self, "_ghost_zone_cache", None) if gz_cache: levels, cell_inds, file_inds, domains = gz_cache else: gz_cache = ( levels, cell_inds, file_inds, domains, ) = oct_handler.file_index_octs_with_ghost_zones( selector, self.domain_id, cell_count ) self._ghost_zone_cache = gz_cache levels[:] = 0 dest.update((field, np.empty(cell_count, dtype="float64")) for field in content) # Make references ... oct_handler.fill_level(0, levels, cell_inds, file_inds, dest, content, offset) def fill(self, content, dest, selector, offset): if self._num_ghost_zones == 0: return self._fill_no_ghostzones(content, dest, selector, offset) else: return self._fill_with_ghostzones(content, dest, selector, offset) class StreamOctreeHandler(OctreeIndex): def __init__(self, ds, dataset_type=None): self.stream_handler = ds.stream_handler self.dataset_type = dataset_type super().__init__(ds, dataset_type) def _setup_data_io(self): if self.stream_handler.io is not None: self.io = self.stream_handler.io else: self.io = io_registry[self.dataset_type](self.ds) def _initialize_oct_handler(self): header = { "dims": self.ds.domain_dimensions // self.ds.num_zones, "left_edge": self.ds.domain_left_edge, "right_edge": self.ds.domain_right_edge, "octree": self.ds.octree_mask, "num_zones": self.ds.num_zones, "partial_coverage": self.ds.partial_coverage, } self.oct_handler = OctreeContainer.load_octree(header) # We do now need to get the maximum level set, as well. self.ds.max_level = self.oct_handler.max_level def _identify_base_chunk(self, dobj): if getattr(dobj, "_chunk_info", None) is None: base_region = getattr(dobj, "base_region", dobj) subset = [ StreamOctreeSubset( base_region, self.dataset, self.oct_handler, self.ds.num_zones, ) ] dobj._chunk_info = subset dobj._current_chunk = list(self._chunk_all(dobj))[0] def _chunk_all(self, dobj): oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info) yield YTDataChunk(dobj, "all", oobjs, None) def _chunk_spatial(self, dobj, ngz, sort=None, preload_fields=None): sobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info) # This is where we will perform cutting of the Octree and # load-balancing. That may require a specialized selector object to # cut based on some space-filling curve index. for og in sobjs: if ngz > 0: g = og.retrieve_ghost_zones(ngz, [], smoothed=True) else: g = og yield YTDataChunk(dobj, "spatial", [g]) def _chunk_io(self, dobj, cache=True, local_only=False): oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info) for subset in oobjs: yield YTDataChunk(dobj, "io", [subset], None, cache=cache) def _setup_classes(self): dd = self._get_data_reader_dict() super()._setup_classes(dd) def _detect_output_fields(self): # NOTE: Because particle unions add to the actual field list, without # having the keys in the field list itself, we need to double check # here. 
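# Hedged usage sketch (illustrative only): this handler backs datasets
# created with yt.load_octree; octree_mask and data must follow the
# layout documented in that function's docstring.
#
#   import yt
#   ds = yt.load_octree(octree_mask=octree_mask, data=data)
#   ds.index  # instantiates StreamOctreeHandler and loads the octree here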
fl = set(self.stream_handler.get_fields()) fl.update(set(getattr(self, "field_list", []))) self.field_list = list(fl) class StreamOctreeDataset(StreamDataset): _index_class = StreamOctreeHandler _field_info_class = StreamFieldInfo _dataset_type = "stream_octree" levelmax = None def __init__( self, stream_handler, storage_filename=None, geometry="cartesian", unit_system="cgs", default_species_fields=None, ): super().__init__( stream_handler, storage_filename, geometry, unit_system, default_species_fields=default_species_fields, ) # Set up levelmax self.max_level = stream_handler.levels.max() self.min_level = stream_handler.levels.min() class StreamUnstructuredMesh(UnstructuredMesh): _index_offset = 0 def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self._connectivity_length = self.connectivity_indices.shape[1] class StreamUnstructuredIndex(UnstructuredIndex): def __init__(self, ds, dataset_type=None): self.stream_handler = ds.stream_handler super().__init__(ds, dataset_type) def _initialize_mesh(self): coords = self.stream_handler.fields.pop("coordinates") connect = always_iterable(self.stream_handler.fields.pop("connectivity")) self.meshes = [ StreamUnstructuredMesh(i, self.index_filename, c1, c2, self) for i, (c1, c2) in enumerate(zip(connect, repeat(coords))) ] self.mesh_union = MeshUnion("mesh_union", self.meshes) def _setup_data_io(self): if self.stream_handler.io is not None: self.io = self.stream_handler.io else: self.io = io_registry[self.dataset_type](self.ds) def _detect_output_fields(self): self.field_list = list(set(self.stream_handler.get_fields())) fnames = list({fn for ft, fn in self.field_list}) self.field_list += [("all", fname) for fname in fnames] class StreamUnstructuredMeshDataset(StreamDataset): _index_class = StreamUnstructuredIndex _field_info_class = StreamFieldInfo _dataset_type = "stream_unstructured" def _find_particle_types(self): pass ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/stream/definitions.py0000644000175100001770000002446314714401662020410 0ustar00runnerdockerfrom collections import defaultdict import numpy as np from yt.funcs import is_sequence from yt.geometry.grid_container import GridTree, MatchPointsToGrids from yt.utilities.exceptions import ( YTInconsistentGridFieldShape, YTInconsistentGridFieldShapeGridDims, YTInconsistentParticleFieldShape, ) from yt.utilities.logger import ytLogger as mylog from .fields import StreamFieldInfo def assign_particle_data(ds, pdata, bbox): """ Assign particle data to the grids using MatchPointsToGrids. This will overwrite any existing particle data, so be careful! """ particle_index_fields = [ f"particle_position_{ax}" for ax in ds.coordinates.axis_order ] for ptype in ds.particle_types_raw: check_fields = [(ptype, pi_field) for pi_field in particle_index_fields] check_fields.append((ptype, "particle_position")) if all(f not in pdata for f in check_fields): pdata_ftype = {} for f in sorted(pdata): if not hasattr(pdata[f], "shape"): continue if f == "number_of_particles": continue mylog.debug("Reassigning '%s' to ('%s','%s')", f, ptype, f) pdata_ftype[ptype, f] = pdata.pop(f) pdata_ftype.update(pdata) pdata = pdata_ftype # Note: what we need to do here is a bit tricky. Because occasionally this # gets called before we properly handle the field detection, we cannot use # any information about the index. Fortunately for us, we can generate # most of the GridTree utilizing information we already have from the # stream handler. 
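# Illustrative sketch of the ``pdata`` layout this function expects (an
# assumption read off the field checks above, not upstream documentation):
#
#   pdata = {
#       "number_of_particles": 100,
#       ("io", "particle_position_x"): xp,  # 1D arrays of length 100
#       ("io", "particle_position_y"): yp,
#       ("io", "particle_position_z"): zp,
#   }
#
# A single ("io", "particle_position") array of shape (100, 3) works too.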
if len(ds.stream_handler.fields) > 1: pdata.pop("number_of_particles", None) num_grids = len(ds.stream_handler.fields) parent_ids = ds.stream_handler.parent_ids num_children = np.zeros(num_grids, dtype="int64") # We're going to do this the slow way mask = np.empty(num_grids, dtype="bool") for i in range(num_grids): np.equal(parent_ids, i, mask) num_children[i] = mask.sum() levels = ds.stream_handler.levels.astype("int64").ravel() grid_tree = GridTree( num_grids, ds.stream_handler.left_edges, ds.stream_handler.right_edges, ds.stream_handler.dimensions, ds.stream_handler.parent_ids, levels, num_children, ) grid_pdata = [] for _ in range(num_grids): grid = {"number_of_particles": 0} grid_pdata.append(grid) particle_index_fields = [ f"particle_position_{ax}" for ax in ds.coordinates.axis_order ] for ptype in ds.particle_types_raw: if (ptype, "particle_position_x") in pdata: # we call them x, y, z even though they may be different field names x, y, z = (pdata[ptype, pi_field] for pi_field in particle_index_fields) elif (ptype, "particle_position") in pdata: x, y, z = pdata[ptype, "particle_position"].T else: raise KeyError( "Cannot decompose particle data without position fields!" ) pts = MatchPointsToGrids(grid_tree, len(x), x, y, z) particle_grid_inds = pts.find_points_in_tree() (assigned_particles,) = (particle_grid_inds >= 0).nonzero() num_particles = particle_grid_inds.size num_unassigned = num_particles - assigned_particles.size if num_unassigned > 0: eps = np.finfo(x.dtype).eps s = np.array( [ [x.min() - eps, x.max() + eps], [y.min() - eps, y.max() + eps], [z.min() - eps, z.max() + eps], ] ) sug_bbox = [ [min(bbox[0, 0], s[0, 0]), max(bbox[0, 1], s[0, 1])], [min(bbox[1, 0], s[1, 0]), max(bbox[1, 1], s[1, 1])], [min(bbox[2, 0], s[2, 0]), max(bbox[2, 1], s[2, 1])], ] mylog.warning( "Discarding %s particles (out of %s) that are outside " "bounding box. 
Set bbox=%s to avoid this in the future.", num_unassigned, num_particles, sug_bbox, ) particle_grid_inds = particle_grid_inds[assigned_particles] x = x[assigned_particles] y = y[assigned_particles] z = z[assigned_particles] idxs = np.argsort(particle_grid_inds) particle_grid_count = np.bincount( particle_grid_inds.astype("intp"), minlength=num_grids ) particle_indices = np.zeros(num_grids + 1, dtype="int64") if num_grids > 1: np.add.accumulate( particle_grid_count.squeeze(), out=particle_indices[1:] ) else: particle_indices[1] = particle_grid_count.squeeze() for i, pcount in enumerate(particle_grid_count): grid_pdata[i]["number_of_particles"] += pcount start = particle_indices[i] end = particle_indices[i + 1] for key in pdata.keys(): if key[0] == ptype: grid_pdata[i][key] = pdata[key][idxs][start:end] else: grid_pdata = [pdata] for pd, gi in zip(grid_pdata, sorted(ds.stream_handler.fields), strict=True): ds.stream_handler.fields[gi].update(pd) ds.stream_handler.particle_types.update(set_particle_types(pd)) npart = ds.stream_handler.fields[gi].pop("number_of_particles", 0) ds.stream_handler.particle_count[gi] = npart def process_data(data, grid_dims=None, allow_callables=True): new_data, field_units = {}, {} for field, val in data.items(): # val is a data array if isinstance(val, np.ndarray): # val is a YTArray if hasattr(val, "units"): field_units[field] = val.units new_data[field] = val.copy().d # val is a numpy array else: field_units[field] = "" new_data[field] = val.copy() # val is a tuple of (data, units) elif isinstance(val, tuple) and len(val) == 2: valid_data = isinstance(val[0], np.ndarray) if allow_callables: valid_data = valid_data or callable(val[0]) if not isinstance(field, (str, tuple)): raise TypeError("Field name is not a string!") if not valid_data: raise TypeError( "Field data is not an ndarray or callable (with nproc == 1)!" ) if not isinstance(val[1], str): raise TypeError("Unit specification is not a string!") field_units[field] = val[1] new_data[field] = val[0] # val is a list of data to be turned into an array elif is_sequence(val): field_units[field] = "" new_data[field] = np.asarray(val) elif callable(val): if not allow_callables: raise RuntimeError( "Callable functions can not be specified " "in conjunction with nprocs > 1." ) field_units[field] = "" new_data[field] = val else: raise RuntimeError( "The data dict appears to be invalid. " "The data dictionary must map from field " "names to (numpy array, unit spec) tuples. " ) data = new_data # At this point, we have arrays for all our fields new_data = {} for field in data: n_shape = 3 if not callable(data[field]): n_shape = len(data[field].shape) if isinstance(field, tuple): new_field = field elif n_shape in (1, 2): new_field = ("io", field) elif n_shape == 3: new_field = ("stream", field) else: raise RuntimeError new_data[new_field] = data[field] field_units[new_field] = field_units.pop(field) known_fields = ( StreamFieldInfo.known_particle_fields + StreamFieldInfo.known_other_fields ) # We do not want to override any of the known ones, if it's not # overridden here. if ( any(f[0] == new_field[1] for f in known_fields) and field_units[new_field] == "" ): field_units.pop(new_field) data = new_data # Sanity checking that all fields have the same dimensions. 
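# (Descriptive note added for clarity: grid fields are 3D and must all
# share one shape, while particle fields are 1D/2D and must agree within
# each particle type; e.g. mixing a (16, 16, 16) "density" with a
# (32, 32, 32) "temperature" raises YTInconsistentGridFieldShape below.)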
g_shapes = [] p_shapes = defaultdict(list) for field in data: if callable(data[field]): continue f_shape = data[field].shape n_shape = len(f_shape) if n_shape in (1, 2): p_shapes[field[0]].append((field[1], f_shape[0])) elif n_shape == 3: g_shapes.append((field, f_shape)) if len(g_shapes) > 0: g_s = np.array([s[1] for s in g_shapes]) if not np.all(g_s == g_s[0]): raise YTInconsistentGridFieldShape(g_shapes) if grid_dims is not None: if not np.all(g_s == grid_dims): raise YTInconsistentGridFieldShapeGridDims(g_shapes, grid_dims) if len(p_shapes) > 0: for ptype, p_shape in p_shapes.items(): p_s = np.array([s[1] for s in p_shape]) if not np.all(p_s == p_s[0]): raise YTInconsistentParticleFieldShape(ptype, p_shape) # Now that we know the particle fields are consistent, determine the number # of particles. if len(p_shapes) > 0: number_of_particles = np.sum([s[0][1] for s in p_shapes.values()]) else: number_of_particles = 0 return field_units, data, number_of_particles def set_particle_types(data): particle_types = {} for key in data.keys(): if key == "number_of_particles": continue elif callable(data[key]): particle_types[key] = False elif len(data[key].shape) == 1: particle_types[key] = True else: particle_types[key] = False return particle_types ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/stream/fields.py0000644000175100001770000001532414714401662017343 0ustar00runnerdockerimport re from yt._typing import KnownFieldsT from yt.fields.field_info_container import FieldInfoContainer class StreamFieldInfo(FieldInfoContainer): known_other_fields: KnownFieldsT = ( ("density", ("code_mass/code_length**3", ["density"], None)), ( "dark_matter_density", ("code_mass/code_length**3", ["dark_matter_density"], None), ), ("number_density", ("1/code_length**3", ["number_density"], None)), ("pressure", ("dyne/code_length**2", ["pressure"], None)), ("specific_thermal_energy", ("erg / g", ["specific_thermal_energy"], None)), ("temperature", ("K", ["temperature"], None)), ("velocity_x", ("code_length/code_time", ["velocity_x"], None)), ("velocity_y", ("code_length/code_time", ["velocity_y"], None)), ("velocity_z", ("code_length/code_time", ["velocity_z"], None)), ("magnetic_field_x", ("gauss", [], None)), ("magnetic_field_y", ("gauss", [], None)), ("magnetic_field_z", ("gauss", [], None)), ("velocity_r", ("code_length/code_time", ["velocity_r"], None)), ("velocity_theta", ("code_length/code_time", ["velocity_theta"], None)), ("velocity_phi", ("code_length/code_time", ["velocity_phi"], None)), ("magnetic_field_r", ("gauss", [], None)), ("magnetic_field_theta", ("gauss", [], None)), ("magnetic_field_phi", ("gauss", [], None)), ( "radiation_acceleration_x", ("code_length/code_time**2", ["radiation_acceleration_x"], None), ), ( "radiation_acceleration_y", ("code_length/code_time**2", ["radiation_acceleration_y"], None), ), ( "radiation_acceleration_z", ("code_length/code_time**2", ["radiation_acceleration_z"], None), ), ("metallicity", ("Zsun", ["metallicity"], None)), # We need to have a bunch of species fields here, too ("metal_density", ("code_mass/code_length**3", ["metal_density"], None)), ("hi_density", ("code_mass/code_length**3", ["hi_density"], None)), ("hii_density", ("code_mass/code_length**3", ["hii_density"], None)), ("h2i_density", ("code_mass/code_length**3", ["h2i_density"], None)), ("h2ii_density", ("code_mass/code_length**3", ["h2ii_density"], None)), ("h2m_density", ("code_mass/code_length**3", ["h2m_density"], None)), 
("hei_density", ("code_mass/code_length**3", ["hei_density"], None)), ("heii_density", ("code_mass/code_length**3", ["heii_density"], None)), ("heiii_density", ("code_mass/code_length**3", ["heiii_density"], None)), ("hdi_density", ("code_mass/code_length**3", ["hdi_density"], None)), ("di_density", ("code_mass/code_length**3", ["di_density"], None)), ("dii_density", ("code_mass/code_length**3", ["dii_density"], None)), ) known_particle_fields: KnownFieldsT = ( ("particle_position", ("code_length", ["particle_position"], None)), ("particle_position_x", ("code_length", ["particle_position_x"], None)), ("particle_position_y", ("code_length", ["particle_position_y"], None)), ("particle_position_z", ("code_length", ["particle_position_z"], None)), ("particle_velocity", ("code_length/code_time", ["particle_velocity"], None)), ( "particle_velocity_x", ("code_length/code_time", ["particle_velocity_x"], None), ), ( "particle_velocity_y", ("code_length/code_time", ["particle_velocity_y"], None), ), ( "particle_velocity_z", ("code_length/code_time", ["particle_velocity_z"], None), ), ("particle_index", ("", ["particle_index"], None)), ( "particle_gas_density", ("code_mass/code_length**3", ["particle_gas_density"], None), ), ("particle_gas_temperature", ("K", ["particle_gas_temperature"], None)), ("particle_mass", ("code_mass", ["particle_mass"], None)), ("smoothing_length", ("code_length", ["smoothing_length"], None)), ("density", ("code_mass/code_length**3", ["density"], None)), ("temperature", ("code_temperature", ["temperature"], None)), ("creation_time", ("code_time", ["creation_time"], None)), ("age", ("code_time", [], None)), ) def setup_fluid_fields(self): from yt.fields.magnetic_field import setup_magnetic_field_aliases from yt.fields.species_fields import setup_species_fields from yt.utilities.periodic_table import periodic_table # First grab all the element symbols from the periodic table # (this includes the electron and deuterium) symbols = list(periodic_table.elements_by_symbol) # Now add some common molecules symbols += ["H2", "CO"] species_names = [] for field in self.ds.stream_handler.field_units: if field[0] in self.ds.particle_types: continue units = self.ds.stream_handler.field_units[field] if units != "": self.add_output_field(field, sampling_type="cell", units=units) # Check to see if this could be a species fraction field if field[1].endswith("_fraction"): sp = field[1].rsplit("_fraction")[0] parts = sp.split("_") # parts is now either an element or molecule symbol # by itself: valid = parts[0] in symbols # or it may also have an ionization state after it if valid and len(parts) > 1 and parts[0] != "El": # Note that this doesn't catch invalid ionization states, # which would indicate more electron states empty than actually # exist, but we'll leave that to the user to do correctly. 
valid &= re.match("^[pm](0|[1-9][0-9]*)$", parts[1]) is not None if valid: # Add the species name to the list species_names.append(sp) # Alias the field self.alias(("gas", field[1]), ("stream", field[1])) self.species_names = sorted(species_names) setup_magnetic_field_aliases( self, "stream", [f"magnetic_field_{ax}" for ax in self.ds.coordinates.axis_order], ) setup_species_fields(self) def add_output_field(self, name, sampling_type, **kwargs): if name in self.ds.stream_handler.field_units: kwargs["units"] = self.ds.stream_handler.field_units[name] super().add_output_field(name, sampling_type, **kwargs) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/stream/io.py0000644000175100001770000002570114714401662016504 0ustar00runnerdockerimport numpy as np from yt.utilities.exceptions import YTDomainOverflow from yt.utilities.io_handler import BaseIOHandler, BaseParticleIOHandler from yt.utilities.logger import ytLogger as mylog class IOHandlerStream(BaseIOHandler): _dataset_type = "stream" _vector_fields = {"particle_velocity": 3, "particle_position": 3} def __init__(self, ds): self.fields = ds.stream_handler.fields self.field_units = ds.stream_handler.field_units super().__init__(ds) def _read_data_set(self, grid, field): # This is where we implement processor-locking tr = self.fields[grid.id][field] if callable(tr): tr = tr(grid, field) # If it's particles, we copy. if len(tr.shape) == 1: return tr.copy() # New in-place unit conversion breaks if we don't copy first return tr def _read_fluid_selection(self, chunks, selector, fields, size): chunks = list(chunks) if any((ftype not in self.ds.fluid_types for ftype, fname in fields)): raise NotImplementedError rv = {} for field in fields: rv[field] = self.ds.arr(np.empty(size, dtype="float64")) ng = sum(len(c.objs) for c in chunks) mylog.debug( "Reading %s cells of %s fields in %s blocks", size, [f2 for f1, f2 in fields], ng, ) for field in fields: ftype, fname = field ind = 0 for chunk in chunks: for g in chunk.objs: ds = self.fields[g.id][ftype, fname] if callable(ds): ds = ds(g, field) ind += g.select(selector, ds, rv[field], ind) # caches return rv def _read_particle_coords(self, chunks, ptf): chunks = list(chunks) for chunk in chunks: for g in chunk.objs: if g.NumberOfParticles == 0: continue gf = self.fields[g.id] for ptype in sorted(ptf): if (ptype, "particle_position") in gf: x, y, z = gf[ptype, "particle_position"].T else: x, y, z = ( gf[ptype, f"particle_position_{ax}"] for ax in self.ds.coordinates.axis_order ) yield ptype, (x, y, z), 0.0 def _read_particle_fields(self, chunks, ptf, selector): chunks = list(chunks) for chunk in chunks: for g in chunk.objs: if g.NumberOfParticles == 0: continue gf = self.fields[g.id] for ptype, field_list in sorted(ptf.items()): if (ptype, "particle_position") in gf: x, y, z = gf[ptype, "particle_position"].T else: x, y, z = ( gf[ptype, f"particle_position_{ax}"] for ax in self.ds.coordinates.axis_order ) mask = selector.select_points(x, y, z, 0.0) if mask is None: continue for field in field_list: data = np.asarray(gf[ptype, field]) yield (ptype, field), data[mask] @property def _read_exception(self): return KeyError class StreamParticleIOHandler(BaseParticleIOHandler): _dataset_type = "stream_particles" _vector_fields = {"particle_velocity": 3, "particle_position": 3} def __init__(self, ds): self.fields = ds.stream_handler.fields super().__init__(ds) def _read_particle_coords(self, chunks, ptf): for data_file in 

class StreamParticleIOHandler(BaseParticleIOHandler):
    _dataset_type = "stream_particles"
    _vector_fields = {"particle_velocity": 3, "particle_position": 3}

    def __init__(self, ds):
        self.fields = ds.stream_handler.fields
        super().__init__(ds)

    def _read_particle_coords(self, chunks, ptf):
        for data_file in self._sorted_chunk_iterator(chunks):
            f = self.fields[data_file.filename]
            # This double-reads the positions; they are read again when the
            # particle fields themselves are selected.
            for ptype in sorted(ptf):
                yield (
                    ptype,
                    (
                        f[ptype, "particle_position_x"],
                        f[ptype, "particle_position_y"],
                        f[ptype, "particle_position_z"],
                    ),
                    0.0,
                )

    def _read_smoothing_length(self, chunks, ptf, ptype):
        for data_file in self._sorted_chunk_iterator(chunks):
            f = self.fields[data_file.filename]
            return f[ptype, "smoothing_length"]

    def _read_particle_data_file(self, data_file, ptf, selector=None):
        return_data = {}
        f = self.fields[data_file.filename]
        for ptype, field_list in sorted(ptf.items()):
            if (ptype, "particle_position") in f:
                ppos = f[ptype, "particle_position"]
                x = ppos[:, 0]
                y = ppos[:, 1]
                z = ppos[:, 2]
            else:
                x, y, z = (f[ptype, f"particle_position_{ax}"] for ax in "xyz")
            if (ptype, "smoothing_length") in self.ds.field_list:
                hsml = f[ptype, "smoothing_length"]
            else:
                hsml = 0.0
            if selector:
                mask = selector.select_points(x, y, z, hsml)
                if mask is None:
                    continue
            for field in field_list:
                data = f[ptype, field]
                if selector:
                    data = data[mask]
                return_data[ptype, field] = data
        return return_data

    def _yield_coordinates(self, data_file, needed_ptype=None):
        # self.fields[g.id][fname] is the pattern here
        for ptype in self.ds.particle_types_raw:
            if needed_ptype is not None and needed_ptype != ptype:
                continue
            try:
                pos = np.column_stack(
                    [
                        self.fields[data_file.filename][
                            (ptype, f"particle_position_{ax}")
                        ]
                        for ax in "xyz"
                    ]
                )
            except KeyError:
                pos = self.fields[data_file.filename][ptype, "particle_position"]
            if np.any(pos.min(axis=0) < data_file.ds.domain_left_edge) or np.any(
                pos.max(axis=0) > data_file.ds.domain_right_edge
            ):
                raise YTDomainOverflow(
                    pos.min(axis=0),
                    pos.max(axis=0),
                    data_file.ds.domain_left_edge,
                    data_file.ds.domain_right_edge,
                )
            yield ptype, pos

    def _get_smoothing_length(self, data_file, dtype, shape):
        ptype = self.ds._sph_ptypes[0]
        return self.fields[data_file.filename][ptype, "smoothing_length"]

    def _count_particles(self, data_file):
        pcount = {}
        for ptype in self.ds.particle_types_raw:
            pcount[ptype] = 0
        # stream datasets only have one "file"
        if data_file.file_id > 0:
            return pcount
        for ptype in self.ds.particle_types_raw:
            d = self.fields[data_file.filename]
            try:
                pcount[ptype] = d[ptype, "particle_position_x"].size
            except KeyError:
                pcount[ptype] = d[ptype, "particle_position"].shape[0]
        return pcount

    def _identify_fields(self, data_file):
        return self.fields[data_file.filename].keys(), {}


class IOHandlerStreamHexahedral(BaseIOHandler):
    _dataset_type = "stream_hexahedral"
    _vector_fields = {"particle_velocity": 3, "particle_position": 3}

    def __init__(self, ds):
        self.fields = ds.stream_handler.fields
        super().__init__(ds)

    def _read_fluid_selection(self, chunks, selector, fields, size):
        chunks = list(chunks)
        assert len(chunks) == 1
        chunk = chunks[0]
        rv = {}
        for field in fields:
            ftype, fname = field
            rv[field] = np.empty(size, dtype="float64")
        ngrids = sum(len(chunk.objs) for chunk in chunks)
        mylog.debug(
            "Reading %s cells of %s fields in %s blocks",
            size,
            [fn for ft, fn in fields],
            ngrids,
        )
        for field in fields:
            ind = 0
            ftype, fname = field
            for chunk in chunks:
                for g in chunk.objs:
                    ds = self.fields[g.mesh_id].get(field, None)
                    if ds is None:
                        ds = self.fields[g.mesh_id][fname]
                    ind += g.select(selector, ds, rv[field], ind)  # caches
        return rv
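
# A hypothetical minimal driver for the hexahedral handler above, assuming
# yt.load_hexahedral_mesh takes (data, connectivity, coordinates) in that
# order; all names and values here are illustrative only:
#
#     import numpy as np
#     import yt
#
#     coords = np.random.random((8, 3))        # eight vertices
#     conn = np.arange(8).reshape((1, 8))      # a single hexahedral element
#     data = {"density": np.random.random(1)}  # one value per element
#     ds = yt.load_hexahedral_mesh(data, conn, coords)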

class IOHandlerStreamOctree(BaseIOHandler):
    _dataset_type = "stream_octree"
    _vector_fields = {"particle_velocity": 3, "particle_position": 3}

    def __init__(self, ds):
        self.fields = ds.stream_handler.fields
        super().__init__(ds)

    def _read_fluid_selection(self, chunks, selector, fields, size):
        rv = {}
        ind = 0
        chunks = list(chunks)
        assert len(chunks) == 1
        for chunk in chunks:
            assert len(chunk.objs) == 1
            for subset in chunk.objs:
                field_vals = {}
                for field in fields:
                    field_vals[field] = self.fields[
                        subset.domain_id - subset._domain_offset
                    ][field]
                subset.fill(field_vals, rv, selector, ind)
        return rv


class IOHandlerStreamUnstructured(BaseIOHandler):
    _dataset_type = "stream_unstructured"

    def __init__(self, ds):
        self.fields = ds.stream_handler.fields
        super().__init__(ds)

    def _read_fluid_selection(self, chunks, selector, fields, size):
        chunks = list(chunks)
        rv = {}
        for field in fields:
            ftype, fname = field
            if ftype == "all":
                ci = np.concatenate(
                    [mesh.connectivity_indices for mesh in self.ds.index.mesh_union]
                )
            else:
                mesh_id = int(ftype[-1]) - 1
                m = self.ds.index.meshes[mesh_id]
                ci = m.connectivity_indices
            num_elem = ci.shape[0]
            if fname in self.ds._node_fields:
                nodes_per_element = ci.shape[1]
                rv[field] = np.empty((num_elem, nodes_per_element), dtype="float64")
            else:
                rv[field] = np.empty(num_elem, dtype="float64")
        for field in fields:
            ind = 0
            ftype, fname = field
            if ftype == "all":
                objs = list(self.ds.index.mesh_union)
            else:
                mesh_ids = [int(ftype[-1])]
                chunk = chunks[mesh_ids[0] - 1]
                objs = chunk.objs
            for g in objs:
                ds = self.fields[g.mesh_id].get(field, None)
                if ds is None:
                    f = ("connect%d" % (g.mesh_id + 1), fname)
                    ds = self.fields[g.mesh_id][f]
                ind += g.select(selector, ds, rv[field], ind)  # caches
            rv[field] = rv[field][:ind]
        return rv

yt-4.4.0/yt/frontends/stream/misc.py

import numpy as np

from yt._typing import DomainDimensions


def _validate_cell_widths(
    cell_widths: list[np.ndarray],
    domain_dimensions: DomainDimensions,
) -> list[np.ndarray]:
    # check dimensionality
    if (nwids := len(cell_widths)) != (ndims := len(domain_dimensions)):
        raise ValueError(
            f"The number of elements in cell_widths ({nwids}) "
            f"must match the number of dimensions ({ndims})."
) # check the dtypes for each dimension, upcast to float64 if needed for idim in range(len(cell_widths)): cell_widths[idim] = cell_widths[idim].astype(np.float64, copy=False) return cell_widths ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3471527 yt-4.4.0/yt/frontends/stream/sample_data/0000755000175100001770000000000014714401715017767 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/stream/sample_data/__init__.py0000644000175100001770000000000014714401662022067 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/stream/sample_data/hexahedral_mesh.py0000644000175100001770000133106614714401662023475 0ustar00runnerdockerimport numpy as np # This mesh can be found at # https://github.com/idaholab/moose/tree/master/test/tests/bcs/nodal_normals/cylinder-hexes.e _coordinates = np.array([ [ 1.00000000e+00, 0.00000000e+00, 3.57142857e-01], [ 9.86361303e-01, 1.64594590e-01, 3.57142857e-01], [ 9.04164528e-01, 1.50878374e-01, 3.57142857e-01], [ 9.16666667e-01, 2.24432411e-17, 3.57142857e-01], [ 1.00000000e+00, 0.00000000e+00, 5.00000000e-01], [ 9.86361303e-01, 1.64594590e-01, 5.00000000e-01], [ 9.04164528e-01, 1.50878374e-01, 5.00000000e-01], [ 9.16666667e-01, -4.50419429e-18, 5.00000000e-01], [ 9.45817242e-01, 3.24699469e-01, 3.57142857e-01], [ 8.66999138e-01, 2.97641180e-01, 3.57142857e-01], [ 9.45817242e-01, 3.24699469e-01, 5.00000000e-01], [ 8.66999138e-01, 2.97641180e-01, 5.00000000e-01], [ 8.79473751e-01, 4.75947393e-01, 3.57142857e-01], [ 8.06184272e-01, 4.36285110e-01, 3.57142857e-01], [ 8.79473751e-01, 4.75947393e-01, 5.00000000e-01], [ 8.06184272e-01, 4.36285110e-01, 5.00000000e-01], [ 7.89140509e-01, 6.14212713e-01, 3.57142857e-01], [ 7.23378800e-01, 5.63028320e-01, 3.57142857e-01], [ 7.89140509e-01, 6.14212713e-01, 5.00000000e-01], [ 7.23378800e-01, 5.63028320e-01, 5.00000000e-01], [ 6.77281572e-01, 7.35723911e-01, 3.57142857e-01], [ 6.20841441e-01, 6.74413585e-01, 3.57142857e-01], [ 6.77281572e-01, 7.35723911e-01, 5.00000000e-01], [ 6.20841441e-01, 6.74413585e-01, 5.00000000e-01], [ 5.46948158e-01, 8.37166478e-01, 3.57142857e-01], [ 5.01369145e-01, 7.67402605e-01, 3.57142857e-01], [ 5.46948158e-01, 8.37166478e-01, 5.00000000e-01], [ 5.01369145e-01, 7.67402605e-01, 5.00000000e-01], [ 4.01695425e-01, 9.15773327e-01, 3.57142857e-01], [ 3.68220806e-01, 8.39458883e-01, 3.57142857e-01], [ 4.01695425e-01, 9.15773327e-01, 5.00000000e-01], [ 3.68220806e-01, 8.39458883e-01, 5.00000000e-01], [ 8.21967753e-01, 1.37162159e-01, 3.57142857e-01], [ 8.33333333e-01, 3.56043101e-17, 3.57142857e-01], [ 8.21967753e-01, 1.37162159e-01, 5.00000000e-01], [ 8.33333333e-01, -9.00838858e-18, 5.00000000e-01], [ 7.88181035e-01, 2.70582891e-01, 3.57142857e-01], [ 7.88181035e-01, 2.70582891e-01, 5.00000000e-01], [ 7.32894793e-01, 3.96622828e-01, 3.57142857e-01], [ 7.32894793e-01, 3.96622828e-01, 5.00000000e-01], [ 6.57617091e-01, 5.11843927e-01, 3.57142857e-01], [ 6.57617091e-01, 5.11843927e-01, 5.00000000e-01], [ 5.64401310e-01, 6.13103259e-01, 3.57142857e-01], [ 5.64401310e-01, 6.13103259e-01, 5.00000000e-01], [ 4.55790132e-01, 6.97638732e-01, 3.57142857e-01], [ 4.55790132e-01, 6.97638732e-01, 5.00000000e-01], [ 3.34746187e-01, 7.63144439e-01, 3.57142857e-01], [ 3.34746187e-01, 7.63144439e-01, 5.00000000e-01], [ 7.39770978e-01, 1.23445943e-01, 3.57142857e-01], [ 7.50000000e-01, 3.32197375e-17, 
3.57142857e-01], [ 7.39770978e-01, 1.23445943e-01, 5.00000000e-01], [ 7.50000000e-01, -1.35125829e-17, 5.00000000e-01], [ 7.09362931e-01, 2.43524602e-01, 3.57142857e-01], [ 7.09362931e-01, 2.43524602e-01, 5.00000000e-01], [ 6.59605313e-01, 3.56960545e-01, 3.57142857e-01], [ 6.59605313e-01, 3.56960545e-01, 5.00000000e-01], [ 5.91855382e-01, 4.60659535e-01, 3.57142857e-01], [ 5.91855382e-01, 4.60659535e-01, 5.00000000e-01], [ 5.07961179e-01, 5.51792933e-01, 3.57142857e-01], [ 5.07961179e-01, 5.51792933e-01, 5.00000000e-01], [ 4.10211119e-01, 6.27874859e-01, 3.57142857e-01], [ 4.10211119e-01, 6.27874859e-01, 5.00000000e-01], [ 3.01271568e-01, 6.86829995e-01, 3.57142857e-01], [ 3.01271568e-01, 6.86829995e-01, 5.00000000e-01], [ 6.57574202e-01, 1.09729727e-01, 3.57142857e-01], [ 6.66666667e-01, 2.67569398e-17, 3.57142857e-01], [ 6.57574202e-01, 1.09729727e-01, 5.00000000e-01], [ 6.66666667e-01, -1.80167772e-17, 5.00000000e-01], [ 6.30544828e-01, 2.16466313e-01, 3.57142857e-01], [ 6.30544828e-01, 2.16466313e-01, 5.00000000e-01], [ 5.86315834e-01, 3.17298262e-01, 3.57142857e-01], [ 5.86315834e-01, 3.17298262e-01, 5.00000000e-01], [ 5.26093673e-01, 4.09475142e-01, 3.57142857e-01], [ 5.26093673e-01, 4.09475142e-01, 5.00000000e-01], [ 4.51521048e-01, 4.90482607e-01, 3.57142857e-01], [ 4.51521048e-01, 4.90482607e-01, 5.00000000e-01], [ 3.64632105e-01, 5.58110986e-01, 3.57142857e-01], [ 3.64632105e-01, 5.58110986e-01, 5.00000000e-01], [ 2.67796950e-01, 6.10515551e-01, 3.57142857e-01], [ 2.67796950e-01, 6.10515551e-01, 5.00000000e-01], [ 5.75377427e-01, 9.60135110e-02, 3.57142857e-01], [ 5.83333333e-01, 1.89153508e-17, 3.57142857e-01], [ 5.75377427e-01, 9.60135110e-02, 5.00000000e-01], [ 5.83333333e-01, -2.25209714e-17, 5.00000000e-01], [ 5.51726724e-01, 1.89408024e-01, 3.57142857e-01], [ 5.51726724e-01, 1.89408024e-01, 5.00000000e-01], [ 5.13026355e-01, 2.77635979e-01, 3.57142857e-01], [ 5.13026355e-01, 2.77635979e-01, 5.00000000e-01], [ 4.60331964e-01, 3.58290749e-01, 3.57142857e-01], [ 4.60331964e-01, 3.58290749e-01, 5.00000000e-01], [ 3.95080917e-01, 4.29172281e-01, 3.57142857e-01], [ 3.95080917e-01, 4.29172281e-01, 5.00000000e-01], [ 3.19053092e-01, 4.88347112e-01, 3.57142857e-01], [ 3.19053092e-01, 4.88347112e-01, 5.00000000e-01], [ 2.34322331e-01, 5.34201107e-01, 3.57142857e-01], [ 2.34322331e-01, 5.34201107e-01, 5.00000000e-01], [ 4.93180652e-01, 8.22972951e-02, 3.57142857e-01], [ 5.00000000e-01, 1.04459875e-17, 3.57142857e-01], [ 4.93180652e-01, 8.22972951e-02, 5.00000000e-01], [ 5.00000000e-01, -2.70251657e-17, 5.00000000e-01], [ 4.72908621e-01, 1.62349735e-01, 3.57142857e-01], [ 4.72908621e-01, 1.62349735e-01, 5.00000000e-01], [ 4.39736876e-01, 2.37973697e-01, 3.57142857e-01], [ 4.39736876e-01, 2.37973697e-01, 5.00000000e-01], [ 3.94570255e-01, 3.07106356e-01, 3.57142857e-01], [ 3.94570255e-01, 3.07106356e-01, 5.00000000e-01], [ 3.38640786e-01, 3.67861955e-01, 3.57142857e-01], [ 3.38640786e-01, 3.67861955e-01, 5.00000000e-01], [ 2.73474079e-01, 4.18583239e-01, 3.57142857e-01], [ 2.73474079e-01, 4.18583239e-01, 5.00000000e-01], [ 2.00847712e-01, 4.57886663e-01, 3.57142857e-01], [ 2.00847712e-01, 4.57886663e-01, 5.00000000e-01], [ 2.92606991e-01, 3.16445261e-01, 3.57142857e-01], [ 3.44188184e-01, 2.67935253e-01, 3.57142857e-01], [ 2.92606991e-01, 3.16445261e-01, 5.00000000e-01], [ 3.44188184e-01, 2.67935253e-01, 5.00000000e-01], [ 2.34867640e-01, 3.58265725e-01, 3.57142857e-01], [ 2.34867640e-01, 3.58265725e-01, 5.00000000e-01], [ 1.72155182e-01, 3.92474283e-01, 3.57142857e-01], [ 
1.72155182e-01, 3.92474283e-01, 5.00000000e-01], [ 2.46573197e-01, 2.65028567e-01, 3.57142857e-01], [ 2.93806114e-01, 2.28764151e-01, 3.57142857e-01], [ 2.46573197e-01, 2.65028567e-01, 5.00000000e-01], [ 2.93806114e-01, 2.28764151e-01, 5.00000000e-01], [ 1.96261201e-01, 2.97948211e-01, 3.57142857e-01], [ 1.96261201e-01, 2.97948211e-01, 5.00000000e-01], [ 1.43462652e-01, 3.27061902e-01, 3.57142857e-01], [ 1.43462652e-01, 3.27061902e-01, 5.00000000e-01], [ 2.00539403e-01, 2.13611872e-01, 3.57142857e-01], [ 2.43424043e-01, 1.89593048e-01, 3.57142857e-01], [ 2.00539403e-01, 2.13611872e-01, 5.00000000e-01], [ 2.43424043e-01, 1.89593048e-01, 5.00000000e-01], [ 1.57654762e-01, 2.37630697e-01, 3.57142857e-01], [ 1.57654762e-01, 2.37630697e-01, 5.00000000e-01], [ 1.14770121e-01, 2.61649522e-01, 3.57142857e-01], [ 1.14770121e-01, 2.61649522e-01, 5.00000000e-01], [ 8.60775910e-02, 1.96237141e-01, 3.57142857e-01], [ 1.39074405e-01, 1.78223023e-01, 3.57142857e-01], [ 8.60775910e-02, 1.96237141e-01, 5.00000000e-01], [ 1.39074405e-01, 1.78223023e-01, 5.00000000e-01], [ 5.73850607e-02, 1.30824761e-01, 3.57142857e-01], [ 1.20494048e-01, 1.18815349e-01, 3.57142857e-01], [ 5.73850607e-02, 1.30824761e-01, 5.00000000e-01], [ 1.20494048e-01, 1.18815349e-01, 5.00000000e-01], [ 2.86925303e-02, 6.54123805e-02, 3.57142857e-01], [ 1.01913690e-01, 5.94076743e-02, 3.57142857e-01], [ 2.86925303e-02, 6.54123805e-02, 5.00000000e-01], [ 1.01913690e-01, 5.94076743e-02, 5.00000000e-01], [ -1.23126238e-17, -4.32625932e-17, 3.57142857e-01], [ 8.33333333e-02, -3.43565351e-17, 3.57142857e-01], [ -5.84327908e-18, -5.40503315e-17, 5.00000000e-01], [ 8.33333333e-02, -4.95461372e-17, 5.00000000e-01], [ 1.92071219e-01, 1.60208904e-01, 3.57142857e-01], [ 1.92071219e-01, 1.60208904e-01, 5.00000000e-01], [ 1.83603035e-01, 1.06805936e-01, 3.57142857e-01], [ 1.83603035e-01, 1.06805936e-01, 5.00000000e-01], [ 1.75134851e-01, 5.34029681e-02, 3.57142857e-01], [ 1.75134851e-01, 5.34029681e-02, 5.00000000e-01], [ 1.66666667e-01, -2.53770275e-17, 3.57142857e-01], [ 1.66666667e-01, -4.50419429e-17, 5.00000000e-01], [ 2.45068032e-01, 1.42194786e-01, 3.57142857e-01], [ 2.45068032e-01, 1.42194786e-01, 5.00000000e-01], [ 2.46712022e-01, 9.47965238e-02, 3.57142857e-01], [ 2.46712022e-01, 9.47965238e-02, 5.00000000e-01], [ 2.48356011e-01, 4.73982619e-02, 3.57142857e-01], [ 2.48356011e-01, 4.73982619e-02, 5.00000000e-01], [ 2.50000000e-01, -1.63501274e-17, 3.57142857e-01], [ 2.50000000e-01, -4.05377486e-17, 5.00000000e-01], [ 3.33333333e-01, -7.31934467e-18, 3.57142857e-01], [ 3.29964224e-01, 5.90312730e-02, 3.57142857e-01], [ 3.33333333e-01, -3.60335543e-17, 5.00000000e-01], [ 3.29964224e-01, 5.90312730e-02, 5.00000000e-01], [ 4.16666667e-01, 1.64710272e-18, 3.57142857e-01], [ 4.11572438e-01, 7.06642841e-02, 3.57142857e-01], [ 4.16666667e-01, -3.15293600e-17, 5.00000000e-01], [ 4.11572438e-01, 7.06642841e-02, 5.00000000e-01], [ 3.22110888e-01, 1.17314261e-01, 3.57142857e-01], [ 3.22110888e-01, 1.17314261e-01, 5.00000000e-01], [ 3.97509754e-01, 1.39831998e-01, 3.57142857e-01], [ 3.97509754e-01, 1.39831998e-01, 5.00000000e-01], [ 3.09957647e-01, 1.74121089e-01, 3.57142857e-01], [ 3.09957647e-01, 1.74121089e-01, 5.00000000e-01], [ 3.74847261e-01, 2.06047393e-01, 3.57142857e-01], [ 3.74847261e-01, 2.06047393e-01, 5.00000000e-01], [ 2.45485487e-01, 9.69400266e-01, 3.57142857e-01], [ 2.25028363e-01, 8.88616910e-01, 3.57142857e-01], [ 2.45485487e-01, 9.69400266e-01, 5.00000000e-01], [ 2.25028363e-01, 8.88616910e-01, 5.00000000e-01], [ 8.25793455e-02, 
9.96584493e-01, 3.57142857e-01], [ 7.56977333e-02, 9.13535785e-01, 3.57142857e-01], [ 8.25793455e-02, 9.96584493e-01, 5.00000000e-01], [ 7.56977333e-02, 9.13535785e-01, 5.00000000e-01], [ -8.25793455e-02, 9.96584493e-01, 3.57142857e-01], [ -7.56977333e-02, 9.13535785e-01, 3.57142857e-01], [ -8.25793455e-02, 9.96584493e-01, 5.00000000e-01], [ -7.56977333e-02, 9.13535785e-01, 5.00000000e-01], [ -2.45485487e-01, 9.69400266e-01, 3.57142857e-01], [ -2.25028363e-01, 8.88616910e-01, 3.57142857e-01], [ -2.45485487e-01, 9.69400266e-01, 5.00000000e-01], [ -2.25028363e-01, 8.88616910e-01, 5.00000000e-01], [ -4.01695425e-01, 9.15773327e-01, 3.57142857e-01], [ -3.68220806e-01, 8.39458883e-01, 3.57142857e-01], [ -4.01695425e-01, 9.15773327e-01, 5.00000000e-01], [ -3.68220806e-01, 8.39458883e-01, 5.00000000e-01], [ -5.46948158e-01, 8.37166478e-01, 3.57142857e-01], [ -5.01369145e-01, 7.67402605e-01, 3.57142857e-01], [ -5.46948158e-01, 8.37166478e-01, 5.00000000e-01], [ -5.01369145e-01, 7.67402605e-01, 5.00000000e-01], [ -6.77281572e-01, 7.35723911e-01, 3.57142857e-01], [ -6.20841441e-01, 6.74413585e-01, 3.57142857e-01], [ -6.77281572e-01, 7.35723911e-01, 5.00000000e-01], [ -6.20841441e-01, 6.74413585e-01, 5.00000000e-01], [ 2.04571239e-01, 8.07833555e-01, 3.57142857e-01], [ 2.04571239e-01, 8.07833555e-01, 5.00000000e-01], [ 6.88161212e-02, 8.30487078e-01, 3.57142857e-01], [ 6.88161212e-02, 8.30487078e-01, 5.00000000e-01], [ -6.88161212e-02, 8.30487078e-01, 3.57142857e-01], [ -6.88161212e-02, 8.30487078e-01, 5.00000000e-01], [ -2.04571239e-01, 8.07833555e-01, 3.57142857e-01], [ -2.04571239e-01, 8.07833555e-01, 5.00000000e-01], [ -3.34746187e-01, 7.63144439e-01, 3.57142857e-01], [ -3.34746187e-01, 7.63144439e-01, 5.00000000e-01], [ -4.55790132e-01, 6.97638732e-01, 3.57142857e-01], [ -4.55790132e-01, 6.97638732e-01, 5.00000000e-01], [ -5.64401310e-01, 6.13103259e-01, 3.57142857e-01], [ -5.64401310e-01, 6.13103259e-01, 5.00000000e-01], [ 1.84114115e-01, 7.27050199e-01, 3.57142857e-01], [ 1.84114115e-01, 7.27050199e-01, 5.00000000e-01], [ 6.19345091e-02, 7.47438370e-01, 3.57142857e-01], [ 6.19345091e-02, 7.47438370e-01, 5.00000000e-01], [ -6.19345091e-02, 7.47438370e-01, 3.57142857e-01], [ -6.19345091e-02, 7.47438370e-01, 5.00000000e-01], [ -1.84114115e-01, 7.27050199e-01, 3.57142857e-01], [ -1.84114115e-01, 7.27050199e-01, 5.00000000e-01], [ -3.01271568e-01, 6.86829995e-01, 3.57142857e-01], [ -3.01271568e-01, 6.86829995e-01, 5.00000000e-01], [ -4.10211119e-01, 6.27874859e-01, 3.57142857e-01], [ -4.10211119e-01, 6.27874859e-01, 5.00000000e-01], [ -5.07961179e-01, 5.51792933e-01, 3.57142857e-01], [ -5.07961179e-01, 5.51792933e-01, 5.00000000e-01], [ 1.63656991e-01, 6.46266844e-01, 3.57142857e-01], [ 1.63656991e-01, 6.46266844e-01, 5.00000000e-01], [ 5.50528970e-02, 6.64389662e-01, 3.57142857e-01], [ 5.50528970e-02, 6.64389662e-01, 5.00000000e-01], [ -5.50528970e-02, 6.64389662e-01, 3.57142857e-01], [ -5.50528970e-02, 6.64389662e-01, 5.00000000e-01], [ -1.63656991e-01, 6.46266844e-01, 3.57142857e-01], [ -1.63656991e-01, 6.46266844e-01, 5.00000000e-01], [ -2.67796950e-01, 6.10515551e-01, 3.57142857e-01], [ -2.67796950e-01, 6.10515551e-01, 5.00000000e-01], [ -3.64632105e-01, 5.58110986e-01, 3.57142857e-01], [ -3.64632105e-01, 5.58110986e-01, 5.00000000e-01], [ -4.51521048e-01, 4.90482607e-01, 3.57142857e-01], [ -4.51521048e-01, 4.90482607e-01, 5.00000000e-01], [ 1.43199867e-01, 5.65483488e-01, 3.57142857e-01], [ 1.43199867e-01, 5.65483488e-01, 5.00000000e-01], [ 4.81712849e-02, 5.81340954e-01, 3.57142857e-01], [ 
4.81712849e-02, 5.81340954e-01, 5.00000000e-01], [ -4.81712849e-02, 5.81340954e-01, 3.57142857e-01], [ -4.81712849e-02, 5.81340954e-01, 5.00000000e-01], [ -1.43199867e-01, 5.65483488e-01, 3.57142857e-01], [ -1.43199867e-01, 5.65483488e-01, 5.00000000e-01], [ -2.34322331e-01, 5.34201107e-01, 3.57142857e-01], [ -2.34322331e-01, 5.34201107e-01, 5.00000000e-01], [ -3.19053092e-01, 4.88347112e-01, 3.57142857e-01], [ -3.19053092e-01, 4.88347112e-01, 5.00000000e-01], [ -3.95080917e-01, 4.29172281e-01, 3.57142857e-01], [ -3.95080917e-01, 4.29172281e-01, 5.00000000e-01], [ 1.22742744e-01, 4.84700133e-01, 3.57142857e-01], [ 1.22742744e-01, 4.84700133e-01, 5.00000000e-01], [ 4.12896727e-02, 4.98292247e-01, 3.57142857e-01], [ 4.12896727e-02, 4.98292247e-01, 5.00000000e-01], [ -4.12896727e-02, 4.98292247e-01, 3.57142857e-01], [ -4.12896727e-02, 4.98292247e-01, 5.00000000e-01], [ -1.22742744e-01, 4.84700133e-01, 3.57142857e-01], [ -1.22742744e-01, 4.84700133e-01, 5.00000000e-01], [ -2.00847712e-01, 4.57886663e-01, 3.57142857e-01], [ -2.00847712e-01, 4.57886663e-01, 5.00000000e-01], [ -2.73474079e-01, 4.18583239e-01, 3.57142857e-01], [ -2.73474079e-01, 4.18583239e-01, 5.00000000e-01], [ -3.38640786e-01, 3.67861955e-01, 3.57142857e-01], [ -3.38640786e-01, 3.67861955e-01, 5.00000000e-01], [ -1.02283149e-01, 4.15336195e-01, 3.57142857e-01], [ -3.59859419e-02, 4.34695086e-01, 3.57142857e-01], [ -1.02283149e-01, 4.15336195e-01, 5.00000000e-01], [ -3.59859419e-02, 4.34695086e-01, 5.00000000e-01], [ -1.66348287e-01, 3.87163066e-01, 3.57142857e-01], [ -1.66348287e-01, 3.87163066e-01, 5.00000000e-01], [ -2.26761024e-01, 3.50663301e-01, 3.57142857e-01], [ -2.26761024e-01, 3.50663301e-01, 5.00000000e-01], [ -2.82200655e-01, 3.06551629e-01, 3.57142857e-01], [ -2.82200655e-01, 3.06551629e-01, 5.00000000e-01], [ -8.18235534e-02, 3.45972257e-01, 3.57142857e-01], [ -3.06822110e-02, 3.71097926e-01, 3.57142857e-01], [ -8.18235534e-02, 3.45972257e-01, 5.00000000e-01], [ -3.06822110e-02, 3.71097926e-01, 5.00000000e-01], [ -1.31848862e-01, 3.16439469e-01, 3.57142857e-01], [ -1.31848862e-01, 3.16439469e-01, 5.00000000e-01], [ -1.80047970e-01, 2.82743363e-01, 3.57142857e-01], [ -1.80047970e-01, 2.82743363e-01, 5.00000000e-01], [ -2.25760524e-01, 2.45241304e-01, 3.57142857e-01], [ -2.25760524e-01, 2.45241304e-01, 5.00000000e-01], [ -6.13639584e-02, 2.76608319e-01, 3.57142857e-01], [ -2.53784802e-02, 3.07500766e-01, 3.57142857e-01], [ -6.13639584e-02, 2.76608319e-01, 5.00000000e-01], [ -2.53784802e-02, 3.07500766e-01, 5.00000000e-01], [ -9.73494365e-02, 2.45715872e-01, 3.57142857e-01], [ -9.73494365e-02, 2.45715872e-01, 5.00000000e-01], [ -1.33334915e-01, 2.14823425e-01, 3.57142857e-01], [ -1.33334915e-01, 2.14823425e-01, 5.00000000e-01], [ -1.69320393e-01, 1.83930978e-01, 3.57142857e-01], [ -1.69320393e-01, 1.83930978e-01, 5.00000000e-01], [ -1.12880262e-01, 1.22620652e-01, 3.57142857e-01], [ -7.93257664e-02, 1.65019743e-01, 3.57142857e-01], [ -1.12880262e-01, 1.22620652e-01, 5.00000000e-01], [ -7.93257664e-02, 1.65019743e-01, 5.00000000e-01], [ -5.64401310e-02, 6.13103259e-02, 3.57142857e-01], [ -2.53166180e-02, 1.15216062e-01, 3.57142857e-01], [ -5.64401310e-02, 6.13103259e-02, 5.00000000e-01], [ -2.53166180e-02, 1.15216062e-01, 5.00000000e-01], [ -4.57712708e-02, 2.07418835e-01, 3.57142857e-01], [ -4.57712708e-02, 2.07418835e-01, 5.00000000e-01], [ 5.80689493e-03, 1.69121798e-01, 3.57142857e-01], [ 5.80689493e-03, 1.69121798e-01, 5.00000000e-01], [ -1.22167752e-02, 2.49817927e-01, 3.57142857e-01], [ -1.22167752e-02, 
2.49817927e-01, 5.00000000e-01], [ 3.69304079e-02, 2.23027534e-01, 3.57142857e-01], [ 3.69304079e-02, 2.23027534e-01, 5.00000000e-01], [ 2.13377203e-02, 2.92217018e-01, 3.57142857e-01], [ 2.13377203e-02, 2.92217018e-01, 5.00000000e-01], [ 6.80539208e-02, 2.76933270e-01, 3.57142857e-01], [ 6.80539208e-02, 2.76933270e-01, 5.00000000e-01], [ 8.62835284e-02, 3.46188891e-01, 3.57142857e-01], [ 8.62835284e-02, 3.46188891e-01, 5.00000000e-01], [ 1.04513136e-01, 4.15444512e-01, 3.57142857e-01], [ 1.04513136e-01, 4.15444512e-01, 5.00000000e-01], [ 2.79883711e-02, 3.60908761e-01, 3.57142857e-01], [ 2.79883711e-02, 3.60908761e-01, 5.00000000e-01], [ 3.46390219e-02, 4.29600504e-01, 3.57142857e-01], [ 3.46390219e-02, 4.29600504e-01, 5.00000000e-01], [ -7.89140509e-01, 6.14212713e-01, 3.57142857e-01], [ -7.23378800e-01, 5.63028320e-01, 3.57142857e-01], [ -7.89140509e-01, 6.14212713e-01, 5.00000000e-01], [ -7.23378800e-01, 5.63028320e-01, 5.00000000e-01], [ -8.79473751e-01, 4.75947393e-01, 3.57142857e-01], [ -8.06184272e-01, 4.36285110e-01, 3.57142857e-01], [ -8.79473751e-01, 4.75947393e-01, 5.00000000e-01], [ -8.06184272e-01, 4.36285110e-01, 5.00000000e-01], [ -9.45817242e-01, 3.24699469e-01, 3.57142857e-01], [ -8.66999138e-01, 2.97641180e-01, 3.57142857e-01], [ -9.45817242e-01, 3.24699469e-01, 5.00000000e-01], [ -8.66999138e-01, 2.97641180e-01, 5.00000000e-01], [ -9.86361303e-01, 1.64594590e-01, 3.57142857e-01], [ -9.04164528e-01, 1.50878374e-01, 3.57142857e-01], [ -9.86361303e-01, 1.64594590e-01, 5.00000000e-01], [ -9.04164528e-01, 1.50878374e-01, 5.00000000e-01], [ -1.00000000e+00, 8.74747714e-17, 3.57142857e-01], [ -9.16666667e-01, 7.64102399e-17, 3.57142857e-01], [ -1.00000000e+00, 1.22464680e-16, 5.00000000e-01], [ -9.16666667e-01, 1.07755096e-16, 5.00000000e-01], [ -9.86361303e-01, -1.64594590e-01, 3.57142857e-01], [ -9.04164528e-01, -1.50878374e-01, 3.57142857e-01], [ -9.86361303e-01, -1.64594590e-01, 5.00000000e-01], [ -9.04164528e-01, -1.50878374e-01, 5.00000000e-01], [ -6.57617091e-01, 5.11843927e-01, 3.57142857e-01], [ -6.57617091e-01, 5.11843927e-01, 5.00000000e-01], [ -7.32894793e-01, 3.96622828e-01, 3.57142857e-01], [ -7.32894793e-01, 3.96622828e-01, 5.00000000e-01], [ -7.88181035e-01, 2.70582891e-01, 3.57142857e-01], [ -7.88181035e-01, 2.70582891e-01, 5.00000000e-01], [ -8.21967753e-01, 1.37162159e-01, 3.57142857e-01], [ -8.21967753e-01, 1.37162159e-01, 5.00000000e-01], [ -8.33333333e-01, 6.54603924e-17, 3.57142857e-01], [ -8.33333333e-01, 9.30455114e-17, 5.00000000e-01], [ -8.21967753e-01, -1.37162159e-01, 3.57142857e-01], [ -8.21967753e-01, -1.37162159e-01, 5.00000000e-01], [ -5.91855382e-01, 4.60659535e-01, 3.57142857e-01], [ -5.91855382e-01, 4.60659535e-01, 5.00000000e-01], [ -6.59605313e-01, 3.56960545e-01, 3.57142857e-01], [ -6.59605313e-01, 3.56960545e-01, 5.00000000e-01], [ -7.09362931e-01, 2.43524602e-01, 3.57142857e-01], [ -7.09362931e-01, 2.43524602e-01, 5.00000000e-01], [ -7.39770978e-01, 1.23445943e-01, 3.57142857e-01], [ -7.39770978e-01, 1.23445943e-01, 5.00000000e-01], [ -7.50000000e-01, 5.42913931e-17, 3.57142857e-01], [ -7.50000000e-01, 7.83359271e-17, 5.00000000e-01], [ -7.39770978e-01, -1.23445943e-01, 3.57142857e-01], [ -7.39770978e-01, -1.23445943e-01, 5.00000000e-01], [ -5.26093673e-01, 4.09475142e-01, 3.57142857e-01], [ -5.26093673e-01, 4.09475142e-01, 5.00000000e-01], [ -5.86315834e-01, 3.17298262e-01, 3.57142857e-01], [ -5.86315834e-01, 3.17298262e-01, 5.00000000e-01], [ -6.30544828e-01, 2.16466313e-01, 3.57142857e-01], [ -6.30544828e-01, 2.16466313e-01, 
5.00000000e-01], [ -6.57574202e-01, 1.09729727e-01, 3.57142857e-01], [ -6.57574202e-01, 1.09729727e-01, 5.00000000e-01], [ -6.66666667e-01, 4.30651628e-17, 3.57142857e-01], [ -6.66666667e-01, 6.36263428e-17, 5.00000000e-01], [ -6.57574202e-01, -1.09729727e-01, 3.57142857e-01], [ -6.57574202e-01, -1.09729727e-01, 5.00000000e-01], [ -4.60331964e-01, 3.58290749e-01, 3.57142857e-01], [ -4.60331964e-01, 3.58290749e-01, 5.00000000e-01], [ -5.13026355e-01, 2.77635979e-01, 3.57142857e-01], [ -5.13026355e-01, 2.77635979e-01, 5.00000000e-01], [ -5.51726724e-01, 1.89408024e-01, 3.57142857e-01], [ -5.51726724e-01, 1.89408024e-01, 5.00000000e-01], [ -5.75377427e-01, 9.60135110e-02, 3.57142857e-01], [ -5.75377427e-01, 9.60135110e-02, 5.00000000e-01], [ -5.83333333e-01, 3.18758200e-17, 3.57142857e-01], [ -5.83333333e-01, 4.89167585e-17, 5.00000000e-01], [ -5.75377427e-01, -9.60135110e-02, 3.57142857e-01], [ -5.75377427e-01, -9.60135110e-02, 5.00000000e-01], [ -3.94570255e-01, 3.07106356e-01, 3.57142857e-01], [ -3.94570255e-01, 3.07106356e-01, 5.00000000e-01], [ -4.39736876e-01, 2.37973697e-01, 3.57142857e-01], [ -4.39736876e-01, 2.37973697e-01, 5.00000000e-01], [ -4.72908621e-01, 1.62349735e-01, 3.57142857e-01], [ -4.72908621e-01, 1.62349735e-01, 5.00000000e-01], [ -4.93180652e-01, 8.22972951e-02, 3.57142857e-01], [ -4.93180652e-01, 8.22972951e-02, 5.00000000e-01], [ -5.00000000e-01, 2.07780968e-17, 3.57142857e-01], [ -5.00000000e-01, 3.42071742e-17, 5.00000000e-01], [ -4.93180652e-01, -8.22972951e-02, 3.57142857e-01], [ -4.93180652e-01, -8.22972951e-02, 5.00000000e-01], [ -4.20156583e-01, 7.22539112e-02, 3.57142857e-01], [ -4.11228248e-01, 1.41174836e-01, 3.57142857e-01], [ -4.20156583e-01, 7.22539112e-02, 5.00000000e-01], [ -4.11228248e-01, 1.41174836e-01, 5.00000000e-01], [ -4.20116462e-01, 1.83641595e-03, 3.57142857e-01], [ -4.20116462e-01, 1.83641595e-03, 5.00000000e-01], [ -4.10983876e-01, -6.85810793e-02, 3.57142857e-01], [ -4.10983876e-01, -6.85810793e-02, 5.00000000e-01], [ -3.47132513e-01, 6.22105272e-02, 3.57142857e-01], [ -3.49547876e-01, 1.19999937e-01, 3.57142857e-01], [ -3.47132513e-01, 6.22105272e-02, 5.00000000e-01], [ -3.49547876e-01, 1.19999937e-01, 5.00000000e-01], [ -3.40232923e-01, 3.67283191e-03, 3.57142857e-01], [ -3.40232923e-01, 3.67283191e-03, 5.00000000e-01], [ -3.28787101e-01, -5.48648634e-02, 3.57142857e-01], [ -3.28787101e-01, -5.48648634e-02, 5.00000000e-01], [ -2.74108444e-01, 5.21671433e-02, 3.57142857e-01], [ -2.87867503e-01, 9.88250387e-02, 3.57142857e-01], [ -2.74108444e-01, 5.21671433e-02, 5.00000000e-01], [ -2.87867503e-01, 9.88250387e-02, 5.00000000e-01], [ -2.60349385e-01, 5.50924786e-03, 3.57142857e-01], [ -2.60349385e-01, 5.50924786e-03, 5.00000000e-01], [ -2.46590326e-01, -4.11486476e-02, 3.57142857e-01], [ -2.46590326e-01, -4.11486476e-02, 5.00000000e-01], [ -1.64393551e-01, -2.74324317e-02, 3.57142857e-01], [ -1.92379634e-01, 2.41096072e-02, 3.57142857e-01], [ -1.64393551e-01, -2.74324317e-02, 5.00000000e-01], [ -1.92379634e-01, 2.41096072e-02, 5.00000000e-01], [ -8.21967753e-02, -1.37162159e-02, 3.57142857e-01], [ -1.24409882e-01, 4.27099665e-02, 3.57142857e-01], [ -8.21967753e-02, -1.37162159e-02, 5.00000000e-01], [ -1.24409882e-01, 4.27099665e-02, 5.00000000e-01], [ -2.20365717e-01, 7.56516461e-02, 3.57142857e-01], [ -2.20365717e-01, 7.56516461e-02, 5.00000000e-01], [ -1.66622989e-01, 9.91361489e-02, 3.57142857e-01], [ -1.66622989e-01, 9.91361489e-02, 5.00000000e-01], [ -2.48351800e-01, 1.27193685e-01, 3.57142857e-01], [ -2.48351800e-01, 1.27193685e-01, 
5.00000000e-01], [ -2.08836096e-01, 1.55562331e-01, 3.57142857e-01], [ -2.08836096e-01, 1.55562331e-01, 5.00000000e-01], [ -2.70747482e-01, 2.06077006e-01, 3.57142857e-01], [ -2.70747482e-01, 2.06077006e-01, 5.00000000e-01], [ -3.32658869e-01, 2.56591681e-01, 3.57142857e-01], [ -3.32658869e-01, 2.56591681e-01, 5.00000000e-01], [ -3.12146825e-01, 1.64120356e-01, 3.57142857e-01], [ -3.12146825e-01, 1.64120356e-01, 5.00000000e-01], [ -3.75941850e-01, 2.01047026e-01, 3.57142857e-01], [ -3.75941850e-01, 2.01047026e-01, 5.00000000e-01], [ -9.45817242e-01, -3.24699469e-01, 3.57142857e-01], [ -8.66999138e-01, -2.97641180e-01, 3.57142857e-01], [ -9.45817242e-01, -3.24699469e-01, 5.00000000e-01], [ -8.66999138e-01, -2.97641180e-01, 5.00000000e-01], [ -8.79473751e-01, -4.75947393e-01, 3.57142857e-01], [ -8.06184272e-01, -4.36285110e-01, 3.57142857e-01], [ -8.79473751e-01, -4.75947393e-01, 5.00000000e-01], [ -8.06184272e-01, -4.36285110e-01, 5.00000000e-01], [ -7.89140509e-01, -6.14212713e-01, 3.57142857e-01], [ -7.23378800e-01, -5.63028320e-01, 3.57142857e-01], [ -7.89140509e-01, -6.14212713e-01, 5.00000000e-01], [ -7.23378800e-01, -5.63028320e-01, 5.00000000e-01], [ -6.77281572e-01, -7.35723911e-01, 3.57142857e-01], [ -6.20841441e-01, -6.74413585e-01, 3.57142857e-01], [ -6.77281572e-01, -7.35723911e-01, 5.00000000e-01], [ -6.20841441e-01, -6.74413585e-01, 5.00000000e-01], [ -5.46948158e-01, -8.37166478e-01, 3.57142857e-01], [ -5.01369145e-01, -7.67402605e-01, 3.57142857e-01], [ -5.46948158e-01, -8.37166478e-01, 5.00000000e-01], [ -5.01369145e-01, -7.67402605e-01, 5.00000000e-01], [ -4.01695425e-01, -9.15773327e-01, 3.57142857e-01], [ -3.68220806e-01, -8.39458883e-01, 3.57142857e-01], [ -4.01695425e-01, -9.15773327e-01, 5.00000000e-01], [ -3.68220806e-01, -8.39458883e-01, 5.00000000e-01], [ -7.88181035e-01, -2.70582891e-01, 3.57142857e-01], [ -7.88181035e-01, -2.70582891e-01, 5.00000000e-01], [ -7.32894793e-01, -3.96622828e-01, 3.57142857e-01], [ -7.32894793e-01, -3.96622828e-01, 5.00000000e-01], [ -6.57617091e-01, -5.11843927e-01, 3.57142857e-01], [ -6.57617091e-01, -5.11843927e-01, 5.00000000e-01], [ -5.64401310e-01, -6.13103259e-01, 3.57142857e-01], [ -5.64401310e-01, -6.13103259e-01, 5.00000000e-01], [ -4.55790132e-01, -6.97638732e-01, 3.57142857e-01], [ -4.55790132e-01, -6.97638732e-01, 5.00000000e-01], [ -3.34746187e-01, -7.63144439e-01, 3.57142857e-01], [ -3.34746187e-01, -7.63144439e-01, 5.00000000e-01], [ -7.09362931e-01, -2.43524602e-01, 3.57142857e-01], [ -7.09362931e-01, -2.43524602e-01, 5.00000000e-01], [ -6.59605313e-01, -3.56960545e-01, 3.57142857e-01], [ -6.59605313e-01, -3.56960545e-01, 5.00000000e-01], [ -5.91855382e-01, -4.60659535e-01, 3.57142857e-01], [ -5.91855382e-01, -4.60659535e-01, 5.00000000e-01], [ -5.07961179e-01, -5.51792933e-01, 3.57142857e-01], [ -5.07961179e-01, -5.51792933e-01, 5.00000000e-01], [ -4.10211119e-01, -6.27874859e-01, 3.57142857e-01], [ -4.10211119e-01, -6.27874859e-01, 5.00000000e-01], [ -3.01271568e-01, -6.86829995e-01, 3.57142857e-01], [ -3.01271568e-01, -6.86829995e-01, 5.00000000e-01], [ -6.30544828e-01, -2.16466313e-01, 3.57142857e-01], [ -6.30544828e-01, -2.16466313e-01, 5.00000000e-01], [ -5.86315834e-01, -3.17298262e-01, 3.57142857e-01], [ -5.86315834e-01, -3.17298262e-01, 5.00000000e-01], [ -5.26093673e-01, -4.09475142e-01, 3.57142857e-01], [ -5.26093673e-01, -4.09475142e-01, 5.00000000e-01], [ -4.51521048e-01, -4.90482607e-01, 3.57142857e-01], [ -4.51521048e-01, -4.90482607e-01, 5.00000000e-01], [ -3.64632105e-01, -5.58110986e-01, 
3.57142857e-01], [ -3.64632105e-01, -5.58110986e-01, 5.00000000e-01], [ -2.67796950e-01, -6.10515551e-01, 3.57142857e-01], [ -2.67796950e-01, -6.10515551e-01, 5.00000000e-01], [ -5.51726724e-01, -1.89408024e-01, 3.57142857e-01], [ -5.51726724e-01, -1.89408024e-01, 5.00000000e-01], [ -5.13026355e-01, -2.77635979e-01, 3.57142857e-01], [ -5.13026355e-01, -2.77635979e-01, 5.00000000e-01], [ -4.60331964e-01, -3.58290749e-01, 3.57142857e-01], [ -4.60331964e-01, -3.58290749e-01, 5.00000000e-01], [ -3.95080917e-01, -4.29172281e-01, 3.57142857e-01], [ -3.95080917e-01, -4.29172281e-01, 5.00000000e-01], [ -3.19053092e-01, -4.88347112e-01, 3.57142857e-01], [ -3.19053092e-01, -4.88347112e-01, 5.00000000e-01], [ -2.34322331e-01, -5.34201107e-01, 3.57142857e-01], [ -2.34322331e-01, -5.34201107e-01, 5.00000000e-01], [ -4.72908621e-01, -1.62349735e-01, 3.57142857e-01], [ -4.72908621e-01, -1.62349735e-01, 5.00000000e-01], [ -4.39736876e-01, -2.37973697e-01, 3.57142857e-01], [ -4.39736876e-01, -2.37973697e-01, 5.00000000e-01], [ -3.94570255e-01, -3.07106356e-01, 3.57142857e-01], [ -3.94570255e-01, -3.07106356e-01, 5.00000000e-01], [ -3.38640786e-01, -3.67861955e-01, 3.57142857e-01], [ -3.38640786e-01, -3.67861955e-01, 5.00000000e-01], [ -2.73474079e-01, -4.18583239e-01, 3.57142857e-01], [ -2.73474079e-01, -4.18583239e-01, 5.00000000e-01], [ -2.00847712e-01, -4.57886663e-01, 3.57142857e-01], [ -2.00847712e-01, -4.57886663e-01, 5.00000000e-01], [ -2.90292421e-01, -3.12221863e-01, 3.57142857e-01], [ -3.43107373e-01, -2.67051188e-01, 3.57142857e-01], [ -2.90292421e-01, -3.12221863e-01, 5.00000000e-01], [ -3.43107373e-01, -2.67051188e-01, 5.00000000e-01], [ -2.31319311e-01, -3.50702994e-01, 3.57142857e-01], [ -2.31319311e-01, -3.50702994e-01, 5.00000000e-01], [ -1.67373094e-01, -3.81572219e-01, 3.57142857e-01], [ -1.67373094e-01, -3.81572219e-01, 5.00000000e-01], [ -2.41944057e-01, -2.56581770e-01, 3.57142857e-01], [ -2.91644492e-01, -2.26996019e-01, 3.57142857e-01], [ -2.41944057e-01, -2.56581770e-01, 5.00000000e-01], [ -2.91644492e-01, -2.26996019e-01, 5.00000000e-01], [ -1.89164543e-01, -2.82822750e-01, 3.57142857e-01], [ -1.89164543e-01, -2.82822750e-01, 5.00000000e-01], [ -1.33898475e-01, -3.05257776e-01, 3.57142857e-01], [ -1.33898475e-01, -3.05257776e-01, 5.00000000e-01], [ -1.93595692e-01, -2.00941678e-01, 3.57142857e-01], [ -2.40181610e-01, -1.86940851e-01, 3.57142857e-01], [ -1.93595692e-01, -2.00941678e-01, 5.00000000e-01], [ -2.40181610e-01, -1.86940851e-01, 5.00000000e-01], [ -1.47009774e-01, -2.14942505e-01, 3.57142857e-01], [ -1.47009774e-01, -2.14942505e-01, 5.00000000e-01], [ -1.00423856e-01, -2.28943332e-01, 3.57142857e-01], [ -1.00423856e-01, -2.28943332e-01, 5.00000000e-01], [ -6.69492374e-02, -1.52628888e-01, 3.57142857e-01], [ -1.25405441e-01, -1.47867075e-01, 3.57142857e-01], [ -6.69492374e-02, -1.52628888e-01, 5.00000000e-01], [ -1.25405441e-01, -1.47867075e-01, 5.00000000e-01], [ -3.34746187e-02, -7.63144439e-02, 3.57142857e-01], [ -1.03801108e-01, -8.07916455e-02, 3.57142857e-01], [ -3.34746187e-02, -7.63144439e-02, 5.00000000e-01], [ -1.03801108e-01, -8.07916455e-02, 5.00000000e-01], [ -1.83861645e-01, -1.43105263e-01, 3.57142857e-01], [ -1.83861645e-01, -1.43105263e-01, 5.00000000e-01], [ -1.74127598e-01, -8.52688471e-02, 3.57142857e-01], [ -1.74127598e-01, -8.52688471e-02, 5.00000000e-01], [ -2.42317849e-01, -1.38343450e-01, 3.57142857e-01], [ -2.42317849e-01, -1.38343450e-01, 5.00000000e-01], [ -2.44454087e-01, -8.97460487e-02, 3.57142857e-01], [ -2.44454087e-01, -8.97460487e-02, 
5.00000000e-01], [ -3.20605599e-01, -1.13947277e-01, 3.57142857e-01], [ -3.20605599e-01, -1.13947277e-01, 5.00000000e-01], [ -3.96757110e-01, -1.38148506e-01, 3.57142857e-01], [ -3.96757110e-01, -1.38148506e-01, 5.00000000e-01], [ -3.08124191e-01, -1.71553532e-01, 3.57142857e-01], [ -3.08124191e-01, -1.71553532e-01, 5.00000000e-01], [ -3.73930533e-01, -2.04763614e-01, 3.57142857e-01], [ -3.73930533e-01, -2.04763614e-01, 5.00000000e-01], [ -2.45485487e-01, -9.69400266e-01, 3.57142857e-01], [ -2.25028363e-01, -8.88616910e-01, 3.57142857e-01], [ -2.45485487e-01, -9.69400266e-01, 5.00000000e-01], [ -2.25028363e-01, -8.88616910e-01, 5.00000000e-01], [ -8.25793455e-02, -9.96584493e-01, 3.57142857e-01], [ -7.56977333e-02, -9.13535785e-01, 3.57142857e-01], [ -8.25793455e-02, -9.96584493e-01, 5.00000000e-01], [ -7.56977333e-02, -9.13535785e-01, 5.00000000e-01], [ 8.25793455e-02, -9.96584493e-01, 3.57142857e-01], [ 7.56977333e-02, -9.13535785e-01, 3.57142857e-01], [ 8.25793455e-02, -9.96584493e-01, 5.00000000e-01], [ 7.56977333e-02, -9.13535785e-01, 5.00000000e-01], [ 2.45485487e-01, -9.69400266e-01, 3.57142857e-01], [ 2.25028363e-01, -8.88616910e-01, 3.57142857e-01], [ 2.45485487e-01, -9.69400266e-01, 5.00000000e-01], [ 2.25028363e-01, -8.88616910e-01, 5.00000000e-01], [ 4.01695425e-01, -9.15773327e-01, 3.57142857e-01], [ 3.68220806e-01, -8.39458883e-01, 3.57142857e-01], [ 4.01695425e-01, -9.15773327e-01, 5.00000000e-01], [ 3.68220806e-01, -8.39458883e-01, 5.00000000e-01], [ 5.46948158e-01, -8.37166478e-01, 3.57142857e-01], [ 5.01369145e-01, -7.67402605e-01, 3.57142857e-01], [ 5.46948158e-01, -8.37166478e-01, 5.00000000e-01], [ 5.01369145e-01, -7.67402605e-01, 5.00000000e-01], [ -2.04571239e-01, -8.07833555e-01, 3.57142857e-01], [ -2.04571239e-01, -8.07833555e-01, 5.00000000e-01], [ -6.88161212e-02, -8.30487078e-01, 3.57142857e-01], [ -6.88161212e-02, -8.30487078e-01, 5.00000000e-01], [ 6.88161212e-02, -8.30487078e-01, 3.57142857e-01], [ 6.88161212e-02, -8.30487078e-01, 5.00000000e-01], [ 2.04571239e-01, -8.07833555e-01, 3.57142857e-01], [ 2.04571239e-01, -8.07833555e-01, 5.00000000e-01], [ 3.34746187e-01, -7.63144439e-01, 3.57142857e-01], [ 3.34746187e-01, -7.63144439e-01, 5.00000000e-01], [ 4.55790132e-01, -6.97638732e-01, 3.57142857e-01], [ 4.55790132e-01, -6.97638732e-01, 5.00000000e-01], [ -1.84114115e-01, -7.27050199e-01, 3.57142857e-01], [ -1.84114115e-01, -7.27050199e-01, 5.00000000e-01], [ -6.19345091e-02, -7.47438370e-01, 3.57142857e-01], [ -6.19345091e-02, -7.47438370e-01, 5.00000000e-01], [ 6.19345091e-02, -7.47438370e-01, 3.57142857e-01], [ 6.19345091e-02, -7.47438370e-01, 5.00000000e-01], [ 1.84114115e-01, -7.27050199e-01, 3.57142857e-01], [ 1.84114115e-01, -7.27050199e-01, 5.00000000e-01], [ 3.01271568e-01, -6.86829995e-01, 3.57142857e-01], [ 3.01271568e-01, -6.86829995e-01, 5.00000000e-01], [ 4.10211119e-01, -6.27874859e-01, 3.57142857e-01], [ 4.10211119e-01, -6.27874859e-01, 5.00000000e-01], [ -1.63656991e-01, -6.46266844e-01, 3.57142857e-01], [ -1.63656991e-01, -6.46266844e-01, 5.00000000e-01], [ -5.50528970e-02, -6.64389662e-01, 3.57142857e-01], [ -5.50528970e-02, -6.64389662e-01, 5.00000000e-01], [ 5.50528970e-02, -6.64389662e-01, 3.57142857e-01], [ 5.50528970e-02, -6.64389662e-01, 5.00000000e-01], [ 1.63656991e-01, -6.46266844e-01, 3.57142857e-01], [ 1.63656991e-01, -6.46266844e-01, 5.00000000e-01], [ 2.67796950e-01, -6.10515551e-01, 3.57142857e-01], [ 2.67796950e-01, -6.10515551e-01, 5.00000000e-01], [ 3.64632105e-01, -5.58110986e-01, 3.57142857e-01], [ 3.64632105e-01, 
-5.58110986e-01, 5.00000000e-01], [ -1.43199867e-01, -5.65483488e-01, 3.57142857e-01], [ -1.43199867e-01, -5.65483488e-01, 5.00000000e-01], [ -4.81712849e-02, -5.81340954e-01, 3.57142857e-01], [ -4.81712849e-02, -5.81340954e-01, 5.00000000e-01], [ 4.81712849e-02, -5.81340954e-01, 3.57142857e-01], [ 4.81712849e-02, -5.81340954e-01, 5.00000000e-01], [ 1.43199867e-01, -5.65483488e-01, 3.57142857e-01], [ 1.43199867e-01, -5.65483488e-01, 5.00000000e-01], [ 2.34322331e-01, -5.34201107e-01, 3.57142857e-01], [ 2.34322331e-01, -5.34201107e-01, 5.00000000e-01], [ 3.19053092e-01, -4.88347112e-01, 3.57142857e-01], [ 3.19053092e-01, -4.88347112e-01, 5.00000000e-01], [ -1.22742744e-01, -4.84700133e-01, 3.57142857e-01], [ -1.22742744e-01, -4.84700133e-01, 5.00000000e-01], [ -4.12896727e-02, -4.98292247e-01, 3.57142857e-01], [ -4.12896727e-02, -4.98292247e-01, 5.00000000e-01], [ 4.12896727e-02, -4.98292247e-01, 3.57142857e-01], [ 4.12896727e-02, -4.98292247e-01, 5.00000000e-01], [ 1.22742744e-01, -4.84700133e-01, 3.57142857e-01], [ 1.22742744e-01, -4.84700133e-01, 5.00000000e-01], [ 2.00847712e-01, -4.57886663e-01, 3.57142857e-01], [ 2.00847712e-01, -4.57886663e-01, 5.00000000e-01], [ 2.73474079e-01, -4.18583239e-01, 3.57142857e-01], [ 2.73474079e-01, -4.18583239e-01, 5.00000000e-01], [ 1.02606772e-01, -4.13792257e-01, 3.57142857e-01], [ 3.59043567e-02, -4.33301147e-01, 3.57142857e-01], [ 1.02606772e-01, -4.13792257e-01, 5.00000000e-01], [ 3.59043567e-02, -4.33301147e-01, 5.00000000e-01], [ 1.67077120e-01, -3.85469130e-01, 3.57142857e-01], [ 1.67077120e-01, -3.85469130e-01, 5.00000000e-01], [ 2.27895066e-01, -3.48819366e-01, 3.57142857e-01], [ 2.27895066e-01, -3.48819366e-01, 5.00000000e-01], [ 8.24708009e-02, -3.42884381e-01, 3.57142857e-01], [ 3.05190406e-02, -3.68310047e-01, 3.57142857e-01], [ 8.24708009e-02, -3.42884381e-01, 5.00000000e-01], [ 3.05190406e-02, -3.68310047e-01, 5.00000000e-01], [ 1.33306527e-01, -3.13051596e-01, 3.57142857e-01], [ 1.33306527e-01, -3.13051596e-01, 5.00000000e-01], [ 1.82316053e-01, -2.79055493e-01, 3.57142857e-01], [ 1.82316053e-01, -2.79055493e-01, 5.00000000e-01], [ 6.23348295e-02, -2.71976505e-01, 3.57142857e-01], [ 2.51337245e-02, -3.03318947e-01, 3.57142857e-01], [ 6.23348295e-02, -2.71976505e-01, 5.00000000e-01], [ 2.51337245e-02, -3.03318947e-01, 5.00000000e-01], [ 9.95359345e-02, -2.40634062e-01, 3.57142857e-01], [ 9.95359345e-02, -2.40634062e-01, 5.00000000e-01], [ 1.36737040e-01, -2.09291620e-01, 3.57142857e-01], [ 1.36737040e-01, -2.09291620e-01, 5.00000000e-01], [ 9.11580264e-02, -1.39527746e-01, 3.57142857e-01], [ 5.51990834e-02, -1.85860856e-01, 3.57142857e-01], [ 9.11580264e-02, -1.39527746e-01, 5.00000000e-01], [ 5.51990834e-02, -1.85860856e-01, 5.00000000e-01], [ 4.55790132e-02, -6.97638732e-02, 3.57142857e-01], [ 1.08622324e-02, -1.31087650e-01, 3.57142857e-01], [ 4.55790132e-02, -6.97638732e-02, 5.00000000e-01], [ 1.08622324e-02, -1.31087650e-01, 5.00000000e-01], [ 1.92401405e-02, -2.32193966e-01, 3.57142857e-01], [ 1.92401405e-02, -2.32193966e-01, 5.00000000e-01], [ -2.38545485e-02, -1.92411427e-01, 3.57142857e-01], [ -2.38545485e-02, -1.92411427e-01, 5.00000000e-01], [ -1.67188024e-02, -2.78527075e-01, 3.57142857e-01], [ -1.67188024e-02, -2.78527075e-01, 5.00000000e-01], [ -5.85713293e-02, -2.53735203e-01, 3.57142857e-01], [ -5.85713293e-02, -2.53735203e-01, 5.00000000e-01], [ -7.99618007e-02, -3.30723513e-01, 3.57142857e-01], [ -7.99618007e-02, -3.30723513e-01, 5.00000000e-01], [ -1.01352272e-01, -4.07711823e-01, 3.57142857e-01], [ -1.01352272e-01, 
[ … large block of hard-coded coordinate data: several hundred ``[x, y, z]`` vertex triples, with the ``(x, y)`` points sampling the unit disk and the ``z`` values lying on uniformly spaced planes at odd multiples of 1/14 (5.00000000e-01, 3.57142857e-01, ±2.14285714e-01, ±7.14285714e-02, …); the array continues beyond this excerpt … ]
3.19053092e-01, 4.88347112e-01, -2.14285714e-01], [ 2.34322331e-01, 5.34201107e-01, -2.14285714e-01], [ 4.93180652e-01, 8.22972951e-02, -2.14285714e-01], [ 5.00000000e-01, 1.93929455e-16, -2.14285714e-01], [ 4.72908621e-01, 1.62349735e-01, -2.14285714e-01], [ 4.39736876e-01, 2.37973697e-01, -2.14285714e-01], [ 3.94570255e-01, 3.07106356e-01, -2.14285714e-01], [ 3.38640786e-01, 3.67861955e-01, -2.14285714e-01], [ 2.73474079e-01, 4.18583239e-01, -2.14285714e-01], [ 2.00847712e-01, 4.57886663e-01, -2.14285714e-01], [ 2.92606991e-01, 3.16445261e-01, -2.14285714e-01], [ 3.44188184e-01, 2.67935253e-01, -2.14285714e-01], [ 2.34867640e-01, 3.58265725e-01, -2.14285714e-01], [ 1.72155182e-01, 3.92474283e-01, -2.14285714e-01], [ 2.46573197e-01, 2.65028567e-01, -2.14285714e-01], [ 2.93806114e-01, 2.28764151e-01, -2.14285714e-01], [ 1.96261201e-01, 2.97948211e-01, -2.14285714e-01], [ 1.43462652e-01, 3.27061902e-01, -2.14285714e-01], [ 2.00539403e-01, 2.13611872e-01, -2.14285714e-01], [ 2.43424043e-01, 1.89593048e-01, -2.14285714e-01], [ 1.57654762e-01, 2.37630697e-01, -2.14285714e-01], [ 1.14770121e-01, 2.61649522e-01, -2.14285714e-01], [ 8.60775910e-02, 1.96237141e-01, -2.14285714e-01], [ 1.39074405e-01, 1.78223023e-01, -2.14285714e-01], [ 5.73850607e-02, 1.30824761e-01, -2.14285714e-01], [ 1.20494048e-01, 1.18815349e-01, -2.14285714e-01], [ 2.86925303e-02, 6.54123805e-02, -2.14285714e-01], [ 1.01913690e-01, 5.94076743e-02, -2.14285714e-01], [ 1.14778696e-17, 2.69135258e-17, -2.14285714e-01], [ 8.33333333e-02, 5.45226540e-17, -2.14285714e-01], [ 1.92071219e-01, 1.60208904e-01, -2.14285714e-01], [ 1.83603035e-01, 1.06805936e-01, -2.14285714e-01], [ 1.75134851e-01, 5.34029681e-02, -2.14285714e-01], [ 1.66666667e-01, 8.24990293e-17, -2.14285714e-01], [ 2.45068032e-01, 1.42194786e-01, -2.14285714e-01], [ 2.46712022e-01, 9.47965238e-02, -2.14285714e-01], [ 2.48356011e-01, 4.73982619e-02, -2.14285714e-01], [ 2.50000000e-01, 1.10712368e-16, -2.14285714e-01], [ 3.33333333e-01, 1.38945119e-16, -2.14285714e-01], [ 3.29964224e-01, 5.90312730e-02, -2.14285714e-01], [ 4.16666667e-01, 1.66856194e-16, -2.14285714e-01], [ 4.11572438e-01, 7.06642841e-02, -2.14285714e-01], [ 3.22110888e-01, 1.17314261e-01, -2.14285714e-01], [ 3.97509754e-01, 1.39831998e-01, -2.14285714e-01], [ 3.09957647e-01, 1.74121089e-01, -2.14285714e-01], [ 3.74847261e-01, 2.06047393e-01, -2.14285714e-01], [ 2.45485487e-01, 9.69400266e-01, -2.14285714e-01], [ 2.25028363e-01, 8.88616910e-01, -2.14285714e-01], [ 8.25793455e-02, 9.96584493e-01, -2.14285714e-01], [ 7.56977333e-02, 9.13535785e-01, -2.14285714e-01], [ -8.25793455e-02, 9.96584493e-01, -2.14285714e-01], [ -7.56977333e-02, 9.13535785e-01, -2.14285714e-01], [ -2.45485487e-01, 9.69400266e-01, -2.14285714e-01], [ -2.25028363e-01, 8.88616910e-01, -2.14285714e-01], [ -4.01695425e-01, 9.15773327e-01, -2.14285714e-01], [ -3.68220806e-01, 8.39458883e-01, -2.14285714e-01], [ -5.46948158e-01, 8.37166478e-01, -2.14285714e-01], [ -5.01369145e-01, 7.67402605e-01, -2.14285714e-01], [ -6.77281572e-01, 7.35723911e-01, -2.14285714e-01], [ -6.20841441e-01, 6.74413585e-01, -2.14285714e-01], [ 2.04571239e-01, 8.07833555e-01, -2.14285714e-01], [ 6.88161212e-02, 8.30487078e-01, -2.14285714e-01], [ -6.88161212e-02, 8.30487078e-01, -2.14285714e-01], [ -2.04571239e-01, 8.07833555e-01, -2.14285714e-01], [ -3.34746187e-01, 7.63144439e-01, -2.14285714e-01], [ -4.55790132e-01, 6.97638732e-01, -2.14285714e-01], [ -5.64401310e-01, 6.13103259e-01, -2.14285714e-01], [ 1.84114115e-01, 7.27050199e-01, -2.14285714e-01], [ 
6.19345091e-02, 7.47438370e-01, -2.14285714e-01], [ -6.19345091e-02, 7.47438370e-01, -2.14285714e-01], [ -1.84114115e-01, 7.27050199e-01, -2.14285714e-01], [ -3.01271568e-01, 6.86829995e-01, -2.14285714e-01], [ -4.10211119e-01, 6.27874859e-01, -2.14285714e-01], [ -5.07961179e-01, 5.51792933e-01, -2.14285714e-01], [ 1.63656991e-01, 6.46266844e-01, -2.14285714e-01], [ 5.50528970e-02, 6.64389662e-01, -2.14285714e-01], [ -5.50528970e-02, 6.64389662e-01, -2.14285714e-01], [ -1.63656991e-01, 6.46266844e-01, -2.14285714e-01], [ -2.67796950e-01, 6.10515551e-01, -2.14285714e-01], [ -3.64632105e-01, 5.58110986e-01, -2.14285714e-01], [ -4.51521048e-01, 4.90482607e-01, -2.14285714e-01], [ 1.43199867e-01, 5.65483488e-01, -2.14285714e-01], [ 4.81712849e-02, 5.81340954e-01, -2.14285714e-01], [ -4.81712849e-02, 5.81340954e-01, -2.14285714e-01], [ -1.43199867e-01, 5.65483488e-01, -2.14285714e-01], [ -2.34322331e-01, 5.34201107e-01, -2.14285714e-01], [ -3.19053092e-01, 4.88347112e-01, -2.14285714e-01], [ -3.95080917e-01, 4.29172281e-01, -2.14285714e-01], [ 1.22742744e-01, 4.84700133e-01, -2.14285714e-01], [ 4.12896727e-02, 4.98292247e-01, -2.14285714e-01], [ -4.12896727e-02, 4.98292247e-01, -2.14285714e-01], [ -1.22742744e-01, 4.84700133e-01, -2.14285714e-01], [ -2.00847712e-01, 4.57886663e-01, -2.14285714e-01], [ -2.73474079e-01, 4.18583239e-01, -2.14285714e-01], [ -3.38640786e-01, 3.67861955e-01, -2.14285714e-01], [ -1.02283149e-01, 4.15336195e-01, -2.14285714e-01], [ -3.59859419e-02, 4.34695086e-01, -2.14285714e-01], [ -1.66348287e-01, 3.87163066e-01, -2.14285714e-01], [ -2.26761024e-01, 3.50663301e-01, -2.14285714e-01], [ -2.82200655e-01, 3.06551629e-01, -2.14285714e-01], [ -8.18235534e-02, 3.45972257e-01, -2.14285714e-01], [ -3.06822110e-02, 3.71097926e-01, -2.14285714e-01], [ -1.31848862e-01, 3.16439469e-01, -2.14285714e-01], [ -1.80047970e-01, 2.82743363e-01, -2.14285714e-01], [ -2.25760524e-01, 2.45241304e-01, -2.14285714e-01], [ -6.13639584e-02, 2.76608319e-01, -2.14285714e-01], [ -2.53784802e-02, 3.07500766e-01, -2.14285714e-01], [ -9.73494365e-02, 2.45715872e-01, -2.14285714e-01], [ -1.33334915e-01, 2.14823425e-01, -2.14285714e-01], [ -1.69320393e-01, 1.83930978e-01, -2.14285714e-01], [ -1.12880262e-01, 1.22620652e-01, -2.14285714e-01], [ -7.93257664e-02, 1.65019743e-01, -2.14285714e-01], [ -5.64401310e-02, 6.13103259e-02, -2.14285714e-01], [ -2.53166180e-02, 1.15216062e-01, -2.14285714e-01], [ -4.57712708e-02, 2.07418835e-01, -2.14285714e-01], [ 5.80689493e-03, 1.69121798e-01, -2.14285714e-01], [ -1.22167752e-02, 2.49817927e-01, -2.14285714e-01], [ 3.69304079e-02, 2.23027534e-01, -2.14285714e-01], [ 2.13377203e-02, 2.92217018e-01, -2.14285714e-01], [ 6.80539208e-02, 2.76933270e-01, -2.14285714e-01], [ 8.62835284e-02, 3.46188891e-01, -2.14285714e-01], [ 1.04513136e-01, 4.15444512e-01, -2.14285714e-01], [ 2.79883711e-02, 3.60908761e-01, -2.14285714e-01], [ 3.46390219e-02, 4.29600504e-01, -2.14285714e-01], [ -7.89140509e-01, 6.14212713e-01, -2.14285714e-01], [ -7.23378800e-01, 5.63028320e-01, -2.14285714e-01], [ -8.79473751e-01, 4.75947393e-01, -2.14285714e-01], [ -8.06184272e-01, 4.36285110e-01, -2.14285714e-01], [ -9.45817242e-01, 3.24699469e-01, -2.14285714e-01], [ -8.66999138e-01, 2.97641180e-01, -2.14285714e-01], [ -9.86361303e-01, 1.64594590e-01, -2.14285714e-01], [ -9.04164528e-01, 1.50878374e-01, -2.14285714e-01], [ -1.00000000e+00, -5.24848628e-17, -2.14285714e-01], [ -9.16666667e-01, -3.39957805e-17, -2.14285714e-01], [ -9.86361303e-01, -1.64594590e-01, -2.14285714e-01], [ 
-9.04164528e-01, -1.50878374e-01, -2.14285714e-01], [ -6.57617091e-01, 5.11843927e-01, -2.14285714e-01], [ -7.32894793e-01, 3.96622828e-01, -2.14285714e-01], [ -7.88181035e-01, 2.70582891e-01, -2.14285714e-01], [ -8.21967753e-01, 1.37162159e-01, -2.14285714e-01], [ -8.33333333e-01, -2.88110659e-17, -2.14285714e-01], [ -8.21967753e-01, -1.37162159e-01, -2.14285714e-01], [ -5.91855382e-01, 4.60659535e-01, -2.14285714e-01], [ -6.59605313e-01, 3.56960545e-01, -2.14285714e-01], [ -7.09362931e-01, 2.43524602e-01, -2.14285714e-01], [ -7.39770978e-01, 1.23445943e-01, -2.14285714e-01], [ -7.50000000e-01, -2.47221107e-17, -2.14285714e-01], [ -7.39770978e-01, -1.23445943e-01, -2.14285714e-01], [ -5.26093673e-01, 4.09475142e-01, -2.14285714e-01], [ -5.86315834e-01, 3.17298262e-01, -2.14285714e-01], [ -6.30544828e-01, 2.16466313e-01, -2.14285714e-01], [ -6.57574202e-01, 1.09729727e-01, -2.14285714e-01], [ -6.66666667e-01, -2.09193101e-17, -2.14285714e-01], [ -6.57574202e-01, -1.09729727e-01, -2.14285714e-01], [ -4.60331964e-01, 3.58290749e-01, -2.14285714e-01], [ -5.13026355e-01, 2.77635979e-01, -2.14285714e-01], [ -5.51726724e-01, 1.89408024e-01, -2.14285714e-01], [ -5.75377427e-01, 9.60135110e-02, -2.14285714e-01], [ -5.83333333e-01, -1.69320721e-17, -2.14285714e-01], [ -5.75377427e-01, -9.60135110e-02, -2.14285714e-01], [ -3.94570255e-01, 3.07106356e-01, -2.14285714e-01], [ -4.39736876e-01, 2.37973697e-01, -2.14285714e-01], [ -4.72908621e-01, 1.62349735e-01, -2.14285714e-01], [ -4.93180652e-01, 8.22972951e-02, -2.14285714e-01], [ -5.00000000e-01, -1.24867361e-17, -2.14285714e-01], [ -4.93180652e-01, -8.22972951e-02, -2.14285714e-01], [ -4.20156583e-01, 7.22539112e-02, -2.14285714e-01], [ -4.11228248e-01, 1.41174836e-01, -2.14285714e-01], [ -4.20116462e-01, 1.83641595e-03, -2.14285714e-01], [ -4.10983876e-01, -6.85810793e-02, -2.14285714e-01], [ -3.47132513e-01, 6.22105272e-02, -2.14285714e-01], [ -3.49547876e-01, 1.19999937e-01, -2.14285714e-01], [ -3.40232923e-01, 3.67283191e-03, -2.14285714e-01], [ -3.28787101e-01, -5.48648634e-02, -2.14285714e-01], [ -2.74108444e-01, 5.21671433e-02, -2.14285714e-01], [ -2.87867503e-01, 9.88250387e-02, -2.14285714e-01], [ -2.60349385e-01, 5.50924786e-03, -2.14285714e-01], [ -2.46590326e-01, -4.11486476e-02, -2.14285714e-01], [ -1.64393551e-01, -2.74324317e-02, -2.14285714e-01], [ -1.92379634e-01, 2.41096072e-02, -2.14285714e-01], [ -8.21967753e-02, -1.37162159e-02, -2.14285714e-01], [ -1.24409882e-01, 4.27099665e-02, -2.14285714e-01], [ -2.20365717e-01, 7.56516461e-02, -2.14285714e-01], [ -1.66622989e-01, 9.91361489e-02, -2.14285714e-01], [ -2.48351800e-01, 1.27193685e-01, -2.14285714e-01], [ -2.08836096e-01, 1.55562331e-01, -2.14285714e-01], [ -2.70747482e-01, 2.06077006e-01, -2.14285714e-01], [ -3.32658869e-01, 2.56591681e-01, -2.14285714e-01], [ -3.12146825e-01, 1.64120356e-01, -2.14285714e-01], [ -3.75941850e-01, 2.01047026e-01, -2.14285714e-01], [ -9.45817242e-01, -3.24699469e-01, -2.14285714e-01], [ -8.66999138e-01, -2.97641180e-01, -2.14285714e-01], [ -8.79473751e-01, -4.75947393e-01, -2.14285714e-01], [ -8.06184272e-01, -4.36285110e-01, -2.14285714e-01], [ -7.89140509e-01, -6.14212713e-01, -2.14285714e-01], [ -7.23378800e-01, -5.63028320e-01, -2.14285714e-01], [ -6.77281572e-01, -7.35723911e-01, -2.14285714e-01], [ -6.20841441e-01, -6.74413585e-01, -2.14285714e-01], [ -5.46948158e-01, -8.37166478e-01, -2.14285714e-01], [ -5.01369145e-01, -7.67402605e-01, -2.14285714e-01], [ -4.01695425e-01, -9.15773327e-01, -2.14285714e-01], [ -3.68220806e-01, 
-8.39458883e-01, -2.14285714e-01], [ -7.88181035e-01, -2.70582891e-01, -2.14285714e-01], [ -7.32894793e-01, -3.96622828e-01, -2.14285714e-01], [ -6.57617091e-01, -5.11843927e-01, -2.14285714e-01], [ -5.64401310e-01, -6.13103259e-01, -2.14285714e-01], [ -4.55790132e-01, -6.97638732e-01, -2.14285714e-01], [ -3.34746187e-01, -7.63144439e-01, -2.14285714e-01], [ -7.09362931e-01, -2.43524602e-01, -2.14285714e-01], [ -6.59605313e-01, -3.56960545e-01, -2.14285714e-01], [ -5.91855382e-01, -4.60659535e-01, -2.14285714e-01], [ -5.07961179e-01, -5.51792933e-01, -2.14285714e-01], [ -4.10211119e-01, -6.27874859e-01, -2.14285714e-01], [ -3.01271568e-01, -6.86829995e-01, -2.14285714e-01], [ -6.30544828e-01, -2.16466313e-01, -2.14285714e-01], [ -5.86315834e-01, -3.17298262e-01, -2.14285714e-01], [ -5.26093673e-01, -4.09475142e-01, -2.14285714e-01], [ -4.51521048e-01, -4.90482607e-01, -2.14285714e-01], [ -3.64632105e-01, -5.58110986e-01, -2.14285714e-01], [ -2.67796950e-01, -6.10515551e-01, -2.14285714e-01], [ -5.51726724e-01, -1.89408024e-01, -2.14285714e-01], [ -5.13026355e-01, -2.77635979e-01, -2.14285714e-01], [ -4.60331964e-01, -3.58290749e-01, -2.14285714e-01], [ -3.95080917e-01, -4.29172281e-01, -2.14285714e-01], [ -3.19053092e-01, -4.88347112e-01, -2.14285714e-01], [ -2.34322331e-01, -5.34201107e-01, -2.14285714e-01], [ -4.72908621e-01, -1.62349735e-01, -2.14285714e-01], [ -4.39736876e-01, -2.37973697e-01, -2.14285714e-01], [ -3.94570255e-01, -3.07106356e-01, -2.14285714e-01], [ -3.38640786e-01, -3.67861955e-01, -2.14285714e-01], [ -2.73474079e-01, -4.18583239e-01, -2.14285714e-01], [ -2.00847712e-01, -4.57886663e-01, -2.14285714e-01], [ -2.90292421e-01, -3.12221863e-01, -2.14285714e-01], [ -3.43107373e-01, -2.67051188e-01, -2.14285714e-01], [ -2.31319311e-01, -3.50702994e-01, -2.14285714e-01], [ -1.67373094e-01, -3.81572219e-01, -2.14285714e-01], [ -2.41944057e-01, -2.56581770e-01, -2.14285714e-01], [ -2.91644492e-01, -2.26996019e-01, -2.14285714e-01], [ -1.89164543e-01, -2.82822750e-01, -2.14285714e-01], [ -1.33898475e-01, -3.05257776e-01, -2.14285714e-01], [ -1.93595692e-01, -2.00941678e-01, -2.14285714e-01], [ -2.40181610e-01, -1.86940851e-01, -2.14285714e-01], [ -1.47009774e-01, -2.14942505e-01, -2.14285714e-01], [ -1.00423856e-01, -2.28943332e-01, -2.14285714e-01], [ -6.69492374e-02, -1.52628888e-01, -2.14285714e-01], [ -1.25405441e-01, -1.47867075e-01, -2.14285714e-01], [ -3.34746187e-02, -7.63144439e-02, -2.14285714e-01], [ -1.03801108e-01, -8.07916455e-02, -2.14285714e-01], [ -1.83861645e-01, -1.43105263e-01, -2.14285714e-01], [ -1.74127598e-01, -8.52688471e-02, -2.14285714e-01], [ -2.42317849e-01, -1.38343450e-01, -2.14285714e-01], [ -2.44454087e-01, -8.97460487e-02, -2.14285714e-01], [ -3.20605599e-01, -1.13947277e-01, -2.14285714e-01], [ -3.96757110e-01, -1.38148506e-01, -2.14285714e-01], [ -3.08124191e-01, -1.71553532e-01, -2.14285714e-01], [ -3.73930533e-01, -2.04763614e-01, -2.14285714e-01], [ -2.45485487e-01, -9.69400266e-01, -2.14285714e-01], [ -2.25028363e-01, -8.88616910e-01, -2.14285714e-01], [ -8.25793455e-02, -9.96584493e-01, -2.14285714e-01], [ -7.56977333e-02, -9.13535785e-01, -2.14285714e-01], [ 8.25793455e-02, -9.96584493e-01, -2.14285714e-01], [ 7.56977333e-02, -9.13535785e-01, -2.14285714e-01], [ 2.45485487e-01, -9.69400266e-01, -2.14285714e-01], [ 2.25028363e-01, -8.88616910e-01, -2.14285714e-01], [ 4.01695425e-01, -9.15773327e-01, -2.14285714e-01], [ 3.68220806e-01, -8.39458883e-01, -2.14285714e-01], [ 5.46948158e-01, -8.37166478e-01, -2.14285714e-01], [ 
5.01369145e-01, -7.67402605e-01, -2.14285714e-01], [ -2.04571239e-01, -8.07833555e-01, -2.14285714e-01], [ -6.88161212e-02, -8.30487078e-01, -2.14285714e-01], [ 6.88161212e-02, -8.30487078e-01, -2.14285714e-01], [ 2.04571239e-01, -8.07833555e-01, -2.14285714e-01], [ 3.34746187e-01, -7.63144439e-01, -2.14285714e-01], [ 4.55790132e-01, -6.97638732e-01, -2.14285714e-01], [ -1.84114115e-01, -7.27050199e-01, -2.14285714e-01], [ -6.19345091e-02, -7.47438370e-01, -2.14285714e-01], [ 6.19345091e-02, -7.47438370e-01, -2.14285714e-01], [ 1.84114115e-01, -7.27050199e-01, -2.14285714e-01], [ 3.01271568e-01, -6.86829995e-01, -2.14285714e-01], [ 4.10211119e-01, -6.27874859e-01, -2.14285714e-01], [ -1.63656991e-01, -6.46266844e-01, -2.14285714e-01], [ -5.50528970e-02, -6.64389662e-01, -2.14285714e-01], [ 5.50528970e-02, -6.64389662e-01, -2.14285714e-01], [ 1.63656991e-01, -6.46266844e-01, -2.14285714e-01], [ 2.67796950e-01, -6.10515551e-01, -2.14285714e-01], [ 3.64632105e-01, -5.58110986e-01, -2.14285714e-01], [ -1.43199867e-01, -5.65483488e-01, -2.14285714e-01], [ -4.81712849e-02, -5.81340954e-01, -2.14285714e-01], [ 4.81712849e-02, -5.81340954e-01, -2.14285714e-01], [ 1.43199867e-01, -5.65483488e-01, -2.14285714e-01], [ 2.34322331e-01, -5.34201107e-01, -2.14285714e-01], [ 3.19053092e-01, -4.88347112e-01, -2.14285714e-01], [ -1.22742744e-01, -4.84700133e-01, -2.14285714e-01], [ -4.12896727e-02, -4.98292247e-01, -2.14285714e-01], [ 4.12896727e-02, -4.98292247e-01, -2.14285714e-01], [ 1.22742744e-01, -4.84700133e-01, -2.14285714e-01], [ 2.00847712e-01, -4.57886663e-01, -2.14285714e-01], [ 2.73474079e-01, -4.18583239e-01, -2.14285714e-01], [ 1.02606772e-01, -4.13792257e-01, -2.14285714e-01], [ 3.59043567e-02, -4.33301147e-01, -2.14285714e-01], [ 1.67077120e-01, -3.85469130e-01, -2.14285714e-01], [ 2.27895066e-01, -3.48819366e-01, -2.14285714e-01], [ 8.24708009e-02, -3.42884381e-01, -2.14285714e-01], [ 3.05190406e-02, -3.68310047e-01, -2.14285714e-01], [ 1.33306527e-01, -3.13051596e-01, -2.14285714e-01], [ 1.82316053e-01, -2.79055493e-01, -2.14285714e-01], [ 6.23348295e-02, -2.71976505e-01, -2.14285714e-01], [ 2.51337245e-02, -3.03318947e-01, -2.14285714e-01], [ 9.95359345e-02, -2.40634062e-01, -2.14285714e-01], [ 1.36737040e-01, -2.09291620e-01, -2.14285714e-01], [ 9.11580264e-02, -1.39527746e-01, -2.14285714e-01], [ 5.51990834e-02, -1.85860856e-01, -2.14285714e-01], [ 4.55790132e-02, -6.97638732e-02, -2.14285714e-01], [ 1.08622324e-02, -1.31087650e-01, -2.14285714e-01], [ 1.92401405e-02, -2.32193966e-01, -2.14285714e-01], [ -2.38545485e-02, -1.92411427e-01, -2.14285714e-01], [ -1.67188024e-02, -2.78527075e-01, -2.14285714e-01], [ -5.85713293e-02, -2.53735203e-01, -2.14285714e-01], [ -7.99618007e-02, -3.30723513e-01, -2.14285714e-01], [ -1.01352272e-01, -4.07711823e-01, -2.14285714e-01], [ -2.49090925e-02, -3.51782132e-01, -2.14285714e-01], [ -3.30993826e-02, -4.25037189e-01, -2.14285714e-01], [ 6.77281572e-01, -7.35723911e-01, -2.14285714e-01], [ 6.20841441e-01, -6.74413585e-01, -2.14285714e-01], [ 7.89140509e-01, -6.14212713e-01, -2.14285714e-01], [ 7.23378800e-01, -5.63028320e-01, -2.14285714e-01], [ 8.79473751e-01, -4.75947393e-01, -2.14285714e-01], [ 8.06184272e-01, -4.36285110e-01, -2.14285714e-01], [ 9.45817242e-01, -3.24699469e-01, -2.14285714e-01], [ 8.66999138e-01, -2.97641180e-01, -2.14285714e-01], [ 9.86361303e-01, -1.64594590e-01, -2.14285714e-01], [ 9.04164528e-01, -1.50878374e-01, -2.14285714e-01], [ 5.64401310e-01, -6.13103259e-01, -2.14285714e-01], [ 6.57617091e-01, -5.11843927e-01, 
-2.14285714e-01], [ 7.32894793e-01, -3.96622828e-01, -2.14285714e-01], [ 7.88181035e-01, -2.70582891e-01, -2.14285714e-01], [ 8.21967753e-01, -1.37162159e-01, -2.14285714e-01], [ 5.07961179e-01, -5.51792933e-01, -2.14285714e-01], [ 5.91855382e-01, -4.60659535e-01, -2.14285714e-01], [ 6.59605313e-01, -3.56960545e-01, -2.14285714e-01], [ 7.09362931e-01, -2.43524602e-01, -2.14285714e-01], [ 7.39770978e-01, -1.23445943e-01, -2.14285714e-01], [ 4.51521048e-01, -4.90482607e-01, -2.14285714e-01], [ 5.26093673e-01, -4.09475142e-01, -2.14285714e-01], [ 5.86315834e-01, -3.17298262e-01, -2.14285714e-01], [ 6.30544828e-01, -2.16466313e-01, -2.14285714e-01], [ 6.57574202e-01, -1.09729727e-01, -2.14285714e-01], [ 3.95080917e-01, -4.29172281e-01, -2.14285714e-01], [ 4.60331964e-01, -3.58290749e-01, -2.14285714e-01], [ 5.13026355e-01, -2.77635979e-01, -2.14285714e-01], [ 5.51726724e-01, -1.89408024e-01, -2.14285714e-01], [ 5.75377427e-01, -9.60135110e-02, -2.14285714e-01], [ 3.38640786e-01, -3.67861955e-01, -2.14285714e-01], [ 3.94570255e-01, -3.07106356e-01, -2.14285714e-01], [ 4.39736876e-01, -2.37973697e-01, -2.14285714e-01], [ 4.72908621e-01, -1.62349735e-01, -2.14285714e-01], [ 4.93180652e-01, -8.22972951e-02, -2.14285714e-01], [ 4.02533591e-01, -1.40423963e-01, -2.14285714e-01], [ 3.82383017e-01, -2.06935340e-01, -2.14285714e-01], [ 4.14084357e-01, -7.09602665e-02, -2.14285714e-01], [ 3.32158562e-01, -1.18498191e-01, -2.14285714e-01], [ 3.25029158e-01, -1.75896984e-01, -2.14285714e-01], [ 3.34988061e-01, -5.96232379e-02, -2.14285714e-01], [ 2.61783533e-01, -9.65724185e-02, -2.14285714e-01], [ 2.67675299e-01, -1.44858628e-01, -2.14285714e-01], [ 2.55891766e-01, -4.82862093e-02, -2.14285714e-01], [ 1.85787515e-01, -5.54454306e-02, -2.14285714e-01], [ 1.15683264e-01, -6.26046519e-02, -2.14285714e-01], [ 2.04908364e-01, -1.10890861e-01, -2.14285714e-01], [ 1.48033195e-01, -1.25209304e-01, -2.14285714e-01], [ 2.24029213e-01, -1.66336292e-01, -2.14285714e-01], [ 1.80383126e-01, -1.87813956e-01, -2.14285714e-01], [ 2.33135679e-01, -2.47829956e-01, -2.14285714e-01], [ 2.85888233e-01, -3.07845955e-01, -2.14285714e-01], [ 2.80876227e-01, -2.13259647e-01, -2.14285714e-01], [ 3.37723241e-01, -2.60183001e-01, -2.14285714e-01], [ 1.00000000e+00, 0.00000000e+00, -3.57142857e-01], [ 9.86361303e-01, 1.64594590e-01, -3.57142857e-01], [ 9.04164528e-01, 1.50878374e-01, -3.57142857e-01], [ 9.16666667e-01, 1.88101103e-16, -3.57142857e-01], [ 9.45817242e-01, 3.24699469e-01, -3.57142857e-01], [ 8.66999138e-01, 2.97641180e-01, -3.57142857e-01], [ 8.79473751e-01, 4.75947393e-01, -3.57142857e-01], [ 8.06184272e-01, 4.36285110e-01, -3.57142857e-01], [ 7.89140509e-01, 6.14212713e-01, -3.57142857e-01], [ 7.23378800e-01, 5.63028320e-01, -3.57142857e-01], [ 6.77281572e-01, 7.35723911e-01, -3.57142857e-01], [ 6.20841441e-01, 6.74413585e-01, -3.57142857e-01], [ 5.46948158e-01, 8.37166478e-01, -3.57142857e-01], [ 5.01369145e-01, 7.67402605e-01, -3.57142857e-01], [ 4.01695425e-01, 9.15773327e-01, -3.57142857e-01], [ 3.68220806e-01, 8.39458883e-01, -3.57142857e-01], [ 8.21967753e-01, 1.37162159e-01, -3.57142857e-01], [ 8.33333333e-01, 2.89101549e-16, -3.57142857e-01], [ 7.88181035e-01, 2.70582891e-01, -3.57142857e-01], [ 7.32894793e-01, 3.96622828e-01, -3.57142857e-01], [ 6.57617091e-01, 5.11843927e-01, -3.57142857e-01], [ 5.64401310e-01, 6.13103259e-01, -3.57142857e-01], [ 4.55790132e-01, 6.97638732e-01, -3.57142857e-01], [ 3.34746187e-01, 7.63144439e-01, -3.57142857e-01], [ 7.39770978e-01, 1.23445943e-01, -3.57142857e-01], [ 
7.50000000e-01, 2.96828145e-16, -3.57142857e-01], [ 7.09362931e-01, 2.43524602e-01, -3.57142857e-01], [ 6.59605313e-01, 3.56960545e-01, -3.57142857e-01], [ 5.91855382e-01, 4.60659535e-01, -3.57142857e-01], [ 5.07961179e-01, 5.51792933e-01, -3.57142857e-01], [ 4.10211119e-01, 6.27874859e-01, -3.57142857e-01], [ 3.01271568e-01, 6.86829995e-01, -3.57142857e-01], [ 6.57574202e-01, 1.09729727e-01, -3.57142857e-01], [ 6.66666667e-01, 2.80085390e-16, -3.57142857e-01], [ 6.30544828e-01, 2.16466313e-01, -3.57142857e-01], [ 5.86315834e-01, 3.17298262e-01, -3.57142857e-01], [ 5.26093673e-01, 4.09475142e-01, -3.57142857e-01], [ 4.51521048e-01, 4.90482607e-01, -3.57142857e-01], [ 3.64632105e-01, 5.58110986e-01, -3.57142857e-01], [ 2.67796950e-01, 6.10515551e-01, -3.57142857e-01], [ 5.75377427e-01, 9.60135110e-02, -3.57142857e-01], [ 5.83333333e-01, 2.55069887e-16, -3.57142857e-01], [ 5.51726724e-01, 1.89408024e-01, -3.57142857e-01], [ 5.13026355e-01, 2.77635979e-01, -3.57142857e-01], [ 4.60331964e-01, 3.58290749e-01, -3.57142857e-01], [ 3.95080917e-01, 4.29172281e-01, -3.57142857e-01], [ 3.19053092e-01, 4.88347112e-01, -3.57142857e-01], [ 2.34322331e-01, 5.34201107e-01, -3.57142857e-01], [ 4.93180652e-01, 8.22972951e-02, -3.57142857e-01], [ 5.00000000e-01, 2.26287739e-16, -3.57142857e-01], [ 4.72908621e-01, 1.62349735e-01, -3.57142857e-01], [ 4.39736876e-01, 2.37973697e-01, -3.57142857e-01], [ 3.94570255e-01, 3.07106356e-01, -3.57142857e-01], [ 3.38640786e-01, 3.67861955e-01, -3.57142857e-01], [ 2.73474079e-01, 4.18583239e-01, -3.57142857e-01], [ 2.00847712e-01, 4.57886663e-01, -3.57142857e-01], [ 2.92606991e-01, 3.16445261e-01, -3.57142857e-01], [ 3.44188184e-01, 2.67935253e-01, -3.57142857e-01], [ 2.34867640e-01, 3.58265725e-01, -3.57142857e-01], [ 1.72155182e-01, 3.92474283e-01, -3.57142857e-01], [ 2.46573197e-01, 2.65028567e-01, -3.57142857e-01], [ 2.93806114e-01, 2.28764151e-01, -3.57142857e-01], [ 1.96261201e-01, 2.97948211e-01, -3.57142857e-01], [ 1.43462652e-01, 3.27061902e-01, -3.57142857e-01], [ 2.00539403e-01, 2.13611872e-01, -3.57142857e-01], [ 2.43424043e-01, 1.89593048e-01, -3.57142857e-01], [ 1.57654762e-01, 2.37630697e-01, -3.57142857e-01], [ 1.14770121e-01, 2.61649522e-01, -3.57142857e-01], [ 8.60775910e-02, 1.96237141e-01, -3.57142857e-01], [ 1.39074405e-01, 1.78223023e-01, -3.57142857e-01], [ 5.73850607e-02, 1.30824761e-01, -3.57142857e-01], [ 1.20494048e-01, 1.18815349e-01, -3.57142857e-01], [ 2.86925303e-02, 6.54123805e-02, -3.57142857e-01], [ 1.01913690e-01, 5.94076743e-02, -3.57142857e-01], [ -6.67803323e-18, 3.62404443e-17, -3.57142857e-01], [ 8.33333333e-02, 6.76427614e-17, -3.57142857e-01], [ 1.92071219e-01, 1.60208904e-01, -3.57142857e-01], [ 1.83603035e-01, 1.06805936e-01, -3.57142857e-01], [ 1.75134851e-01, 5.34029681e-02, -3.57142857e-01], [ 1.66666667e-01, 9.94857751e-17, -3.57142857e-01], [ 2.45068032e-01, 1.42194786e-01, -3.57142857e-01], [ 2.46712022e-01, 9.47965238e-02, -3.57142857e-01], [ 2.48356011e-01, 4.73982619e-02, -3.57142857e-01], [ 2.50000000e-01, 1.31613145e-16, -3.57142857e-01], [ 3.33333333e-01, 1.63763809e-16, -3.57142857e-01], [ 3.29964224e-01, 5.90312730e-02, -3.57142857e-01], [ 4.16666667e-01, 1.95528462e-16, -3.57142857e-01], [ 4.11572438e-01, 7.06642841e-02, -3.57142857e-01], [ 3.22110888e-01, 1.17314261e-01, -3.57142857e-01], [ 3.97509754e-01, 1.39831998e-01, -3.57142857e-01], [ 3.09957647e-01, 1.74121089e-01, -3.57142857e-01], [ 3.74847261e-01, 2.06047393e-01, -3.57142857e-01], [ 2.45485487e-01, 9.69400266e-01, -3.57142857e-01], [ 2.25028363e-01, 
8.88616910e-01, -3.57142857e-01], [ 8.25793455e-02, 9.96584493e-01, -3.57142857e-01], [ 7.56977333e-02, 9.13535785e-01, -3.57142857e-01], [ -8.25793455e-02, 9.96584493e-01, -3.57142857e-01], [ -7.56977333e-02, 9.13535785e-01, -3.57142857e-01], [ -2.45485487e-01, 9.69400266e-01, -3.57142857e-01], [ -2.25028363e-01, 8.88616910e-01, -3.57142857e-01], [ -4.01695425e-01, 9.15773327e-01, -3.57142857e-01], [ -3.68220806e-01, 8.39458883e-01, -3.57142857e-01], [ -5.46948158e-01, 8.37166478e-01, -3.57142857e-01], [ -5.01369145e-01, 7.67402605e-01, -3.57142857e-01], [ -6.77281572e-01, 7.35723911e-01, -3.57142857e-01], [ -6.20841441e-01, 6.74413585e-01, -3.57142857e-01], [ 2.04571239e-01, 8.07833555e-01, -3.57142857e-01], [ 6.88161212e-02, 8.30487078e-01, -3.57142857e-01], [ -6.88161212e-02, 8.30487078e-01, -3.57142857e-01], [ -2.04571239e-01, 8.07833555e-01, -3.57142857e-01], [ -3.34746187e-01, 7.63144439e-01, -3.57142857e-01], [ -4.55790132e-01, 6.97638732e-01, -3.57142857e-01], [ -5.64401310e-01, 6.13103259e-01, -3.57142857e-01], [ 1.84114115e-01, 7.27050199e-01, -3.57142857e-01], [ 6.19345091e-02, 7.47438370e-01, -3.57142857e-01], [ -6.19345091e-02, 7.47438370e-01, -3.57142857e-01], [ -1.84114115e-01, 7.27050199e-01, -3.57142857e-01], [ -3.01271568e-01, 6.86829995e-01, -3.57142857e-01], [ -4.10211119e-01, 6.27874859e-01, -3.57142857e-01], [ -5.07961179e-01, 5.51792933e-01, -3.57142857e-01], [ 1.63656991e-01, 6.46266844e-01, -3.57142857e-01], [ 5.50528970e-02, 6.64389662e-01, -3.57142857e-01], [ -5.50528970e-02, 6.64389662e-01, -3.57142857e-01], [ -1.63656991e-01, 6.46266844e-01, -3.57142857e-01], [ -2.67796950e-01, 6.10515551e-01, -3.57142857e-01], [ -3.64632105e-01, 5.58110986e-01, -3.57142857e-01], [ -4.51521048e-01, 4.90482607e-01, -3.57142857e-01], [ 1.43199867e-01, 5.65483488e-01, -3.57142857e-01], [ 4.81712849e-02, 5.81340954e-01, -3.57142857e-01], [ -4.81712849e-02, 5.81340954e-01, -3.57142857e-01], [ -1.43199867e-01, 5.65483488e-01, -3.57142857e-01], [ -2.34322331e-01, 5.34201107e-01, -3.57142857e-01], [ -3.19053092e-01, 4.88347112e-01, -3.57142857e-01], [ -3.95080917e-01, 4.29172281e-01, -3.57142857e-01], [ 1.22742744e-01, 4.84700133e-01, -3.57142857e-01], [ 4.12896727e-02, 4.98292247e-01, -3.57142857e-01], [ -4.12896727e-02, 4.98292247e-01, -3.57142857e-01], [ -1.22742744e-01, 4.84700133e-01, -3.57142857e-01], [ -2.00847712e-01, 4.57886663e-01, -3.57142857e-01], [ -2.73474079e-01, 4.18583239e-01, -3.57142857e-01], [ -3.38640786e-01, 3.67861955e-01, -3.57142857e-01], [ -1.02283149e-01, 4.15336195e-01, -3.57142857e-01], [ -3.59859419e-02, 4.34695086e-01, -3.57142857e-01], [ -1.66348287e-01, 3.87163066e-01, -3.57142857e-01], [ -2.26761024e-01, 3.50663301e-01, -3.57142857e-01], [ -2.82200655e-01, 3.06551629e-01, -3.57142857e-01], [ -8.18235534e-02, 3.45972257e-01, -3.57142857e-01], [ -3.06822110e-02, 3.71097926e-01, -3.57142857e-01], [ -1.31848862e-01, 3.16439469e-01, -3.57142857e-01], [ -1.80047970e-01, 2.82743363e-01, -3.57142857e-01], [ -2.25760524e-01, 2.45241304e-01, -3.57142857e-01], [ -6.13639584e-02, 2.76608319e-01, -3.57142857e-01], [ -2.53784802e-02, 3.07500766e-01, -3.57142857e-01], [ -9.73494365e-02, 2.45715872e-01, -3.57142857e-01], [ -1.33334915e-01, 2.14823425e-01, -3.57142857e-01], [ -1.69320393e-01, 1.83930978e-01, -3.57142857e-01], [ -1.12880262e-01, 1.22620652e-01, -3.57142857e-01], [ -7.93257664e-02, 1.65019743e-01, -3.57142857e-01], [ -5.64401310e-02, 6.13103259e-02, -3.57142857e-01], [ -2.53166180e-02, 1.15216062e-01, -3.57142857e-01], [ -4.57712708e-02, 2.07418835e-01, 
-3.57142857e-01], [ 5.80689493e-03, 1.69121798e-01, -3.57142857e-01], [ -1.22167752e-02, 2.49817927e-01, -3.57142857e-01], [ 3.69304079e-02, 2.23027534e-01, -3.57142857e-01], [ 2.13377203e-02, 2.92217018e-01, -3.57142857e-01], [ 6.80539208e-02, 2.76933270e-01, -3.57142857e-01], [ 8.62835284e-02, 3.46188891e-01, -3.57142857e-01], [ 1.04513136e-01, 4.15444512e-01, -3.57142857e-01], [ 2.79883711e-02, 3.60908761e-01, -3.57142857e-01], [ 3.46390219e-02, 4.29600504e-01, -3.57142857e-01], [ -7.89140509e-01, 6.14212713e-01, -3.57142857e-01], [ -7.23378800e-01, 5.63028320e-01, -3.57142857e-01], [ -8.79473751e-01, 4.75947393e-01, -3.57142857e-01], [ -8.06184272e-01, 4.36285110e-01, -3.57142857e-01], [ -9.45817242e-01, 3.24699469e-01, -3.57142857e-01], [ -8.66999138e-01, 2.97641180e-01, -3.57142857e-01], [ -9.86361303e-01, 1.64594590e-01, -3.57142857e-01], [ -9.04164528e-01, 1.50878374e-01, -3.57142857e-01], [ -1.00000000e+00, -8.74747714e-17, -3.57142857e-01], [ -9.16666667e-01, -6.01060321e-17, -3.57142857e-01], [ -9.86361303e-01, -1.64594590e-01, -3.57142857e-01], [ -9.04164528e-01, -1.50878374e-01, -3.57142857e-01], [ -6.57617091e-01, 5.11843927e-01, -3.57142857e-01], [ -7.32894793e-01, 3.96622828e-01, -3.57142857e-01], [ -7.88181035e-01, 2.70582891e-01, -3.57142857e-01], [ -8.21967753e-01, 1.37162159e-01, -3.57142857e-01], [ -8.33333333e-01, -5.17702556e-17, -3.57142857e-01], [ -8.21967753e-01, -1.37162159e-01, -3.57142857e-01], [ -5.91855382e-01, 4.60659535e-01, -3.57142857e-01], [ -6.59605313e-01, 3.56960545e-01, -3.57142857e-01], [ -7.09362931e-01, 2.43524602e-01, -3.57142857e-01], [ -7.39770978e-01, 1.23445943e-01, -3.57142857e-01], [ -7.50000000e-01, -4.47493904e-17, -3.57142857e-01], [ -7.39770978e-01, -1.23445943e-01, -3.57142857e-01], [ -5.26093673e-01, 4.09475142e-01, -3.57142857e-01], [ -5.86315834e-01, 3.17298262e-01, -3.57142857e-01], [ -6.30544828e-01, 2.16466313e-01, -3.57142857e-01], [ -6.57574202e-01, 1.09729727e-01, -3.57142857e-01], [ -6.66666667e-01, -3.80719106e-17, -3.57142857e-01], [ -6.57574202e-01, -1.09729727e-01, -3.57142857e-01], [ -4.60331964e-01, 3.58290749e-01, -3.57142857e-01], [ -5.13026355e-01, 2.77635979e-01, -3.57142857e-01], [ -5.51726724e-01, 1.89408024e-01, -3.57142857e-01], [ -5.75377427e-01, 9.60135110e-02, -3.57142857e-01], [ -5.83333333e-01, -3.11731060e-17, -3.57142857e-01], [ -5.75377427e-01, -9.60135110e-02, -3.57142857e-01], [ -3.94570255e-01, 3.07106356e-01, -3.57142857e-01], [ -4.39736876e-01, 2.37973697e-01, -3.57142857e-01], [ -4.72908621e-01, 1.62349735e-01, -3.57142857e-01], [ -4.93180652e-01, 8.22972951e-02, -3.57142857e-01], [ -5.00000000e-01, -2.37245839e-17, -3.57142857e-01], [ -4.93180652e-01, -8.22972951e-02, -3.57142857e-01], [ -4.20156583e-01, 7.22539112e-02, -3.57142857e-01], [ -4.11228248e-01, 1.41174836e-01, -3.57142857e-01], [ -4.20116462e-01, 1.83641595e-03, -3.57142857e-01], [ -4.10983876e-01, -6.85810793e-02, -3.57142857e-01], [ -3.47132513e-01, 6.22105272e-02, -3.57142857e-01], [ -3.49547876e-01, 1.19999937e-01, -3.57142857e-01], [ -3.40232923e-01, 3.67283191e-03, -3.57142857e-01], [ -3.28787101e-01, -5.48648634e-02, -3.57142857e-01], [ -2.74108444e-01, 5.21671433e-02, -3.57142857e-01], [ -2.87867503e-01, 9.88250387e-02, -3.57142857e-01], [ -2.60349385e-01, 5.50924786e-03, -3.57142857e-01], [ -2.46590326e-01, -4.11486476e-02, -3.57142857e-01], [ -1.64393551e-01, -2.74324317e-02, -3.57142857e-01], [ -1.92379634e-01, 2.41096072e-02, -3.57142857e-01], [ -8.21967753e-02, -1.37162159e-02, -3.57142857e-01], [ -1.24409882e-01, 
4.27099665e-02, -3.57142857e-01], [ -2.20365717e-01, 7.56516461e-02, -3.57142857e-01], [ -1.66622989e-01, 9.91361489e-02, -3.57142857e-01], [ -2.48351800e-01, 1.27193685e-01, -3.57142857e-01], [ -2.08836096e-01, 1.55562331e-01, -3.57142857e-01], [ -2.70747482e-01, 2.06077006e-01, -3.57142857e-01], [ -3.32658869e-01, 2.56591681e-01, -3.57142857e-01], [ -3.12146825e-01, 1.64120356e-01, -3.57142857e-01], [ -3.75941850e-01, 2.01047026e-01, -3.57142857e-01], [ -9.45817242e-01, -3.24699469e-01, -3.57142857e-01], [ -8.66999138e-01, -2.97641180e-01, -3.57142857e-01], [ -8.79473751e-01, -4.75947393e-01, -3.57142857e-01], [ -8.06184272e-01, -4.36285110e-01, -3.57142857e-01], [ -7.89140509e-01, -6.14212713e-01, -3.57142857e-01], [ -7.23378800e-01, -5.63028320e-01, -3.57142857e-01], [ -6.77281572e-01, -7.35723911e-01, -3.57142857e-01], [ -6.20841441e-01, -6.74413585e-01, -3.57142857e-01], [ -5.46948158e-01, -8.37166478e-01, -3.57142857e-01], [ -5.01369145e-01, -7.67402605e-01, -3.57142857e-01], [ -4.01695425e-01, -9.15773327e-01, -3.57142857e-01], [ -3.68220806e-01, -8.39458883e-01, -3.57142857e-01], [ -7.88181035e-01, -2.70582891e-01, -3.57142857e-01], [ -7.32894793e-01, -3.96622828e-01, -3.57142857e-01], [ -6.57617091e-01, -5.11843927e-01, -3.57142857e-01], [ -5.64401310e-01, -6.13103259e-01, -3.57142857e-01], [ -4.55790132e-01, -6.97638732e-01, -3.57142857e-01], [ -3.34746187e-01, -7.63144439e-01, -3.57142857e-01], [ -7.09362931e-01, -2.43524602e-01, -3.57142857e-01], [ -6.59605313e-01, -3.56960545e-01, -3.57142857e-01], [ -5.91855382e-01, -4.60659535e-01, -3.57142857e-01], [ -5.07961179e-01, -5.51792933e-01, -3.57142857e-01], [ -4.10211119e-01, -6.27874859e-01, -3.57142857e-01], [ -3.01271568e-01, -6.86829995e-01, -3.57142857e-01], [ -6.30544828e-01, -2.16466313e-01, -3.57142857e-01], [ -5.86315834e-01, -3.17298262e-01, -3.57142857e-01], [ -5.26093673e-01, -4.09475142e-01, -3.57142857e-01], [ -4.51521048e-01, -4.90482607e-01, -3.57142857e-01], [ -3.64632105e-01, -5.58110986e-01, -3.57142857e-01], [ -2.67796950e-01, -6.10515551e-01, -3.57142857e-01], [ -5.51726724e-01, -1.89408024e-01, -3.57142857e-01], [ -5.13026355e-01, -2.77635979e-01, -3.57142857e-01], [ -4.60331964e-01, -3.58290749e-01, -3.57142857e-01], [ -3.95080917e-01, -4.29172281e-01, -3.57142857e-01], [ -3.19053092e-01, -4.88347112e-01, -3.57142857e-01], [ -2.34322331e-01, -5.34201107e-01, -3.57142857e-01], [ -4.72908621e-01, -1.62349735e-01, -3.57142857e-01], [ -4.39736876e-01, -2.37973697e-01, -3.57142857e-01], [ -3.94570255e-01, -3.07106356e-01, -3.57142857e-01], [ -3.38640786e-01, -3.67861955e-01, -3.57142857e-01], [ -2.73474079e-01, -4.18583239e-01, -3.57142857e-01], [ -2.00847712e-01, -4.57886663e-01, -3.57142857e-01], [ -2.90292421e-01, -3.12221863e-01, -3.57142857e-01], [ -3.43107373e-01, -2.67051188e-01, -3.57142857e-01], [ -2.31319311e-01, -3.50702994e-01, -3.57142857e-01], [ -1.67373094e-01, -3.81572219e-01, -3.57142857e-01], [ -2.41944057e-01, -2.56581770e-01, -3.57142857e-01], [ -2.91644492e-01, -2.26996019e-01, -3.57142857e-01], [ -1.89164543e-01, -2.82822750e-01, -3.57142857e-01], [ -1.33898475e-01, -3.05257776e-01, -3.57142857e-01], [ -1.93595692e-01, -2.00941678e-01, -3.57142857e-01], [ -2.40181610e-01, -1.86940851e-01, -3.57142857e-01], [ -1.47009774e-01, -2.14942505e-01, -3.57142857e-01], [ -1.00423856e-01, -2.28943332e-01, -3.57142857e-01], [ -6.69492374e-02, -1.52628888e-01, -3.57142857e-01], [ -1.25405441e-01, -1.47867075e-01, -3.57142857e-01], [ -3.34746187e-02, -7.63144439e-02, -3.57142857e-01], [ -1.03801108e-01, 
-8.07916455e-02, -3.57142857e-01], [ -1.83861645e-01, -1.43105263e-01, -3.57142857e-01], [ -1.74127598e-01, -8.52688471e-02, -3.57142857e-01], [ -2.42317849e-01, -1.38343450e-01, -3.57142857e-01], [ -2.44454087e-01, -8.97460487e-02, -3.57142857e-01], [ -3.20605599e-01, -1.13947277e-01, -3.57142857e-01], [ -3.96757110e-01, -1.38148506e-01, -3.57142857e-01], [ -3.08124191e-01, -1.71553532e-01, -3.57142857e-01], [ -3.73930533e-01, -2.04763614e-01, -3.57142857e-01], [ -2.45485487e-01, -9.69400266e-01, -3.57142857e-01], [ -2.25028363e-01, -8.88616910e-01, -3.57142857e-01], [ -8.25793455e-02, -9.96584493e-01, -3.57142857e-01], [ -7.56977333e-02, -9.13535785e-01, -3.57142857e-01], [ 8.25793455e-02, -9.96584493e-01, -3.57142857e-01], [ 7.56977333e-02, -9.13535785e-01, -3.57142857e-01], [ 2.45485487e-01, -9.69400266e-01, -3.57142857e-01], [ 2.25028363e-01, -8.88616910e-01, -3.57142857e-01], [ 4.01695425e-01, -9.15773327e-01, -3.57142857e-01], [ 3.68220806e-01, -8.39458883e-01, -3.57142857e-01], [ 5.46948158e-01, -8.37166478e-01, -3.57142857e-01], [ 5.01369145e-01, -7.67402605e-01, -3.57142857e-01], [ -2.04571239e-01, -8.07833555e-01, -3.57142857e-01], [ -6.88161212e-02, -8.30487078e-01, -3.57142857e-01], [ 6.88161212e-02, -8.30487078e-01, -3.57142857e-01], [ 2.04571239e-01, -8.07833555e-01, -3.57142857e-01], [ 3.34746187e-01, -7.63144439e-01, -3.57142857e-01], [ 4.55790132e-01, -6.97638732e-01, -3.57142857e-01], [ -1.84114115e-01, -7.27050199e-01, -3.57142857e-01], [ -6.19345091e-02, -7.47438370e-01, -3.57142857e-01], [ 6.19345091e-02, -7.47438370e-01, -3.57142857e-01], [ 1.84114115e-01, -7.27050199e-01, -3.57142857e-01], [ 3.01271568e-01, -6.86829995e-01, -3.57142857e-01], [ 4.10211119e-01, -6.27874859e-01, -3.57142857e-01], [ -1.63656991e-01, -6.46266844e-01, -3.57142857e-01], [ -5.50528970e-02, -6.64389662e-01, -3.57142857e-01], [ 5.50528970e-02, -6.64389662e-01, -3.57142857e-01], [ 1.63656991e-01, -6.46266844e-01, -3.57142857e-01], [ 2.67796950e-01, -6.10515551e-01, -3.57142857e-01], [ 3.64632105e-01, -5.58110986e-01, -3.57142857e-01], [ -1.43199867e-01, -5.65483488e-01, -3.57142857e-01], [ -4.81712849e-02, -5.81340954e-01, -3.57142857e-01], [ 4.81712849e-02, -5.81340954e-01, -3.57142857e-01], [ 1.43199867e-01, -5.65483488e-01, -3.57142857e-01], [ 2.34322331e-01, -5.34201107e-01, -3.57142857e-01], [ 3.19053092e-01, -4.88347112e-01, -3.57142857e-01], [ -1.22742744e-01, -4.84700133e-01, -3.57142857e-01], [ -4.12896727e-02, -4.98292247e-01, -3.57142857e-01], [ 4.12896727e-02, -4.98292247e-01, -3.57142857e-01], [ 1.22742744e-01, -4.84700133e-01, -3.57142857e-01], [ 2.00847712e-01, -4.57886663e-01, -3.57142857e-01], [ 2.73474079e-01, -4.18583239e-01, -3.57142857e-01], [ 1.02606772e-01, -4.13792257e-01, -3.57142857e-01], [ 3.59043567e-02, -4.33301147e-01, -3.57142857e-01], [ 1.67077120e-01, -3.85469130e-01, -3.57142857e-01], [ 2.27895066e-01, -3.48819366e-01, -3.57142857e-01], [ 8.24708009e-02, -3.42884381e-01, -3.57142857e-01], [ 3.05190406e-02, -3.68310047e-01, -3.57142857e-01], [ 1.33306527e-01, -3.13051596e-01, -3.57142857e-01], [ 1.82316053e-01, -2.79055493e-01, -3.57142857e-01], [ 6.23348295e-02, -2.71976505e-01, -3.57142857e-01], [ 2.51337245e-02, -3.03318947e-01, -3.57142857e-01], [ 9.95359345e-02, -2.40634062e-01, -3.57142857e-01], [ 1.36737040e-01, -2.09291620e-01, -3.57142857e-01], [ 9.11580264e-02, -1.39527746e-01, -3.57142857e-01], [ 5.51990834e-02, -1.85860856e-01, -3.57142857e-01], [ 4.55790132e-02, -6.97638732e-02, -3.57142857e-01], [ 1.08622324e-02, -1.31087650e-01, -3.57142857e-01], 
[ 1.92401405e-02, -2.32193966e-01, -3.57142857e-01], [ -2.38545485e-02, -1.92411427e-01, -3.57142857e-01], [ -1.67188024e-02, -2.78527075e-01, -3.57142857e-01], [ -5.85713293e-02, -2.53735203e-01, -3.57142857e-01], [ -7.99618007e-02, -3.30723513e-01, -3.57142857e-01], [ -1.01352272e-01, -4.07711823e-01, -3.57142857e-01], [ -2.49090925e-02, -3.51782132e-01, -3.57142857e-01], [ -3.30993826e-02, -4.25037189e-01, -3.57142857e-01], [ 6.77281572e-01, -7.35723911e-01, -3.57142857e-01], [ 6.20841441e-01, -6.74413585e-01, -3.57142857e-01], [ 7.89140509e-01, -6.14212713e-01, -3.57142857e-01], [ 7.23378800e-01, -5.63028320e-01, -3.57142857e-01], [ 8.79473751e-01, -4.75947393e-01, -3.57142857e-01], [ 8.06184272e-01, -4.36285110e-01, -3.57142857e-01], [ 9.45817242e-01, -3.24699469e-01, -3.57142857e-01], [ 8.66999138e-01, -2.97641180e-01, -3.57142857e-01], [ 9.86361303e-01, -1.64594590e-01, -3.57142857e-01], [ 9.04164528e-01, -1.50878374e-01, -3.57142857e-01], [ 5.64401310e-01, -6.13103259e-01, -3.57142857e-01], [ 6.57617091e-01, -5.11843927e-01, -3.57142857e-01], [ 7.32894793e-01, -3.96622828e-01, -3.57142857e-01], [ 7.88181035e-01, -2.70582891e-01, -3.57142857e-01], [ 8.21967753e-01, -1.37162159e-01, -3.57142857e-01], [ 5.07961179e-01, -5.51792933e-01, -3.57142857e-01], [ 5.91855382e-01, -4.60659535e-01, -3.57142857e-01], [ 6.59605313e-01, -3.56960545e-01, -3.57142857e-01], [ 7.09362931e-01, -2.43524602e-01, -3.57142857e-01], [ 7.39770978e-01, -1.23445943e-01, -3.57142857e-01], [ 4.51521048e-01, -4.90482607e-01, -3.57142857e-01], [ 5.26093673e-01, -4.09475142e-01, -3.57142857e-01], [ 5.86315834e-01, -3.17298262e-01, -3.57142857e-01], [ 6.30544828e-01, -2.16466313e-01, -3.57142857e-01], [ 6.57574202e-01, -1.09729727e-01, -3.57142857e-01], [ 3.95080917e-01, -4.29172281e-01, -3.57142857e-01], [ 4.60331964e-01, -3.58290749e-01, -3.57142857e-01], [ 5.13026355e-01, -2.77635979e-01, -3.57142857e-01], [ 5.51726724e-01, -1.89408024e-01, -3.57142857e-01], [ 5.75377427e-01, -9.60135110e-02, -3.57142857e-01], [ 3.38640786e-01, -3.67861955e-01, -3.57142857e-01], [ 3.94570255e-01, -3.07106356e-01, -3.57142857e-01], [ 4.39736876e-01, -2.37973697e-01, -3.57142857e-01], [ 4.72908621e-01, -1.62349735e-01, -3.57142857e-01], [ 4.93180652e-01, -8.22972951e-02, -3.57142857e-01], [ 4.02533591e-01, -1.40423963e-01, -3.57142857e-01], [ 3.82383017e-01, -2.06935340e-01, -3.57142857e-01], [ 4.14084357e-01, -7.09602665e-02, -3.57142857e-01], [ 3.32158562e-01, -1.18498191e-01, -3.57142857e-01], [ 3.25029158e-01, -1.75896984e-01, -3.57142857e-01], [ 3.34988061e-01, -5.96232379e-02, -3.57142857e-01], [ 2.61783533e-01, -9.65724185e-02, -3.57142857e-01], [ 2.67675299e-01, -1.44858628e-01, -3.57142857e-01], [ 2.55891766e-01, -4.82862093e-02, -3.57142857e-01], [ 1.85787515e-01, -5.54454306e-02, -3.57142857e-01], [ 1.15683264e-01, -6.26046519e-02, -3.57142857e-01], [ 2.04908364e-01, -1.10890861e-01, -3.57142857e-01], [ 1.48033195e-01, -1.25209304e-01, -3.57142857e-01], [ 2.24029213e-01, -1.66336292e-01, -3.57142857e-01], [ 1.80383126e-01, -1.87813956e-01, -3.57142857e-01], [ 2.33135679e-01, -2.47829956e-01, -3.57142857e-01], [ 2.85888233e-01, -3.07845955e-01, -3.57142857e-01], [ 2.80876227e-01, -2.13259647e-01, -3.57142857e-01], [ 3.37723241e-01, -2.60183001e-01, -3.57142857e-01], [ 1.00000000e+00, 0.00000000e+00, -5.00000000e-01], [ 9.86361303e-01, 1.64594590e-01, -5.00000000e-01], [ 8.85385799e-01, 1.47859195e-01, -5.00000000e-01], [ 8.98367845e-01, 5.09955106e-05, -5.00000000e-01], [ 9.45817242e-01, 3.24699469e-01, -5.00000000e-01], [ 
8.50003984e-01, 2.91939989e-01, -5.00000000e-01], [ 8.79473751e-01, 4.75947393e-01, -5.00000000e-01], [ 7.90244661e-01, 4.27798478e-01, -5.00000000e-01], [ 7.89140509e-01, 6.14212713e-01, -5.00000000e-01], [ 7.09163527e-01, 5.52078099e-01, -5.00000000e-01], [ 6.77281572e-01, 7.35723911e-01, -5.00000000e-01], [ 6.10767838e-01, 6.63497160e-01, -5.00000000e-01], [ 5.46948158e-01, 8.37166478e-01, -5.00000000e-01], [ 4.93229607e-01, 7.54957609e-01, -5.00000000e-01], [ 4.01695425e-01, 9.15773327e-01, -5.00000000e-01], [ 3.62235106e-01, 8.25828858e-01, -5.00000000e-01], [ 7.94125993e-01, 1.32740732e-01, -5.00000000e-01], [ 8.00800813e-01, 1.88651800e-04, -5.00000000e-01], [ 7.57532995e-01, 2.60531900e-01, -5.00000000e-01], [ 7.05311522e-01, 3.82151873e-01, -5.00000000e-01], [ 6.33176967e-01, 4.93188206e-01, -5.00000000e-01], [ 5.44359740e-01, 5.91554707e-01, -5.00000000e-01], [ 4.39850704e-01, 6.73367897e-01, -5.00000000e-01], [ 3.23078188e-01, 7.36661594e-01, -5.00000000e-01], [ 7.01128052e-01, 1.17677998e-01, -5.00000000e-01], [ 7.11649572e-01, 3.33951524e-04, -5.00000000e-01], [ 6.72966890e-01, 2.31894729e-01, -5.00000000e-01], [ 6.25669634e-01, 3.39513212e-01, -5.00000000e-01], [ 5.64027416e-01, 4.39616287e-01, -5.00000000e-01], [ 4.84508482e-01, 5.26668700e-01, -5.00000000e-01], [ 3.91560912e-01, 5.99504467e-01, -5.00000000e-01], [ 2.87602059e-01, 6.55840087e-01, -5.00000000e-01], [ 6.17706136e-01, 1.04228490e-01, -5.00000000e-01], [ 6.24422079e-01, 6.32285718e-04, -5.00000000e-01], [ 5.91740630e-01, 2.04740807e-01, -5.00000000e-01], [ 5.53662436e-01, 3.01062778e-01, -5.00000000e-01], [ 4.96990333e-01, 3.87922085e-01, -5.00000000e-01], [ 4.26905739e-01, 4.64324206e-01, -5.00000000e-01], [ 3.44867105e-01, 5.28103963e-01, -5.00000000e-01], [ 2.53361833e-01, 5.77868140e-01, -5.00000000e-01], [ 5.38389125e-01, 9.17206205e-02, -5.00000000e-01], [ 5.45454368e-01, 8.70266955e-04, -5.00000000e-01], [ 5.17997560e-01, 1.80324497e-01, -5.00000000e-01], [ 4.84931719e-01, 2.64732797e-01, -5.00000000e-01], [ 4.36383812e-01, 3.41220989e-01, -5.00000000e-01], [ 3.74978458e-01, 4.07950157e-01, -5.00000000e-01], [ 3.02772277e-01, 4.63502415e-01, -5.00000000e-01], [ 2.22468936e-01, 5.07437291e-01, -5.00000000e-01], [ 4.64471094e-01, 8.04373661e-02, -5.00000000e-01], [ 4.73417894e-01, 1.03377535e-03, -5.00000000e-01], [ 4.49550108e-01, 1.58190845e-01, -5.00000000e-01], [ 4.20123744e-01, 2.31037319e-01, -5.00000000e-01], [ 3.77857876e-01, 2.96451484e-01, -5.00000000e-01], [ 3.26056063e-01, 3.54556465e-01, -5.00000000e-01], [ 2.63231766e-01, 4.02405818e-01, -5.00000000e-01], [ 1.93502599e-01, 4.41318122e-01, -5.00000000e-01], [ 2.80697996e-01, 3.04138803e-01, -5.00000000e-01], [ 3.26828978e-01, 2.57477836e-01, -5.00000000e-01], [ 2.27278087e-01, 3.45852853e-01, -5.00000000e-01], [ 1.66385745e-01, 3.79356622e-01, -5.00000000e-01], [ 2.41953376e-01, 2.58408137e-01, -5.00000000e-01], [ 2.80636530e-01, 2.22682050e-01, -5.00000000e-01], [ 1.94021253e-01, 2.91186055e-01, -5.00000000e-01], [ 1.39733872e-01, 3.17911899e-01, -5.00000000e-01], [ 2.12482604e-01, 2.15267429e-01, -5.00000000e-01], [ 2.47964916e-01, 2.00029985e-01, -5.00000000e-01], [ 1.65163116e-01, 2.36189539e-01, -5.00000000e-01], [ 1.14601980e-01, 2.59689793e-01, -5.00000000e-01], [ 8.97246141e-02, 2.03521438e-01, -5.00000000e-01], [ 1.45294294e-01, 1.83523348e-01, -5.00000000e-01], [ 6.53411100e-02, 1.48104882e-01, -5.00000000e-01], [ 1.29690654e-01, 1.28069210e-01, -5.00000000e-01], [ 3.66825649e-02, 8.40224902e-02, -5.00000000e-01], [ 1.15178059e-01, 
6.50962363e-02, -5.00000000e-01], [ -6.90931371e-03, -1.43821650e-02, -5.00000000e-01], [ 1.13891956e-01, -7.46169753e-03, -5.00000000e-01], [ 2.00104806e-01, 1.68875989e-01, -5.00000000e-01], [ 1.92634871e-01, 1.16656531e-01, -5.00000000e-01], [ 1.88903990e-01, 5.89872913e-02, -5.00000000e-01], [ 1.90411041e-01, -2.07447445e-03, -5.00000000e-01], [ 2.50775614e-01, 1.62140475e-01, -5.00000000e-01], [ 2.53396140e-01, 1.13391119e-01, -5.00000000e-01], [ 2.55727085e-01, 5.80397186e-02, -5.00000000e-01], [ 2.60266073e-01, 6.10581640e-04, -5.00000000e-01], [ 3.30524944e-01, 1.08041301e-03, -5.00000000e-01], [ 3.24967172e-01, 6.21259202e-02, -5.00000000e-01], [ 4.00230719e-01, 1.27521573e-03, -5.00000000e-01], [ 3.95498789e-01, 7.04582546e-02, -5.00000000e-01], [ 3.15952931e-01, 1.20992370e-01, -5.00000000e-01], [ 3.81156850e-01, 1.37671789e-01, -5.00000000e-01], [ 3.01812446e-01, 1.76311672e-01, -5.00000000e-01], [ 3.60992327e-01, 2.01144550e-01, -5.00000000e-01], [ 2.45485487e-01, 9.69400266e-01, -5.00000000e-01], [ 2.21362099e-01, 8.74184899e-01, -5.00000000e-01], [ 8.25793455e-02, 9.96584493e-01, -5.00000000e-01], [ 7.44564944e-02, 8.98711097e-01, -5.00000000e-01], [ -8.25793455e-02, 9.96584493e-01, -5.00000000e-01], [ -7.44723590e-02, 8.98720257e-01, -5.00000000e-01], [ -2.45485487e-01, 9.69400266e-01, -5.00000000e-01], [ -2.20691802e-01, 8.71719923e-01, -5.00000000e-01], [ -4.01695425e-01, 9.15773327e-01, -5.00000000e-01], [ -3.60903765e-01, 8.22973696e-01, -5.00000000e-01], [ -5.46948158e-01, 8.37166478e-01, -5.00000000e-01], [ -4.91337090e-01, 7.52168583e-01, -5.00000000e-01], [ -6.77281572e-01, 7.35723911e-01, -5.00000000e-01], [ -6.08422426e-01, 6.60965795e-01, -5.00000000e-01], [ 1.97440834e-01, 7.79839076e-01, -5.00000000e-01], [ 6.64059355e-02, 8.01246618e-01, -5.00000000e-01], [ -6.62645469e-02, 8.01029588e-01, -5.00000000e-01], [ -1.96854349e-01, 7.78286921e-01, -5.00000000e-01], [ -3.21946321e-01, 7.34659580e-01, -5.00000000e-01], [ -4.36798391e-01, 6.69120390e-01, -5.00000000e-01], [ -5.40612618e-01, 5.87506757e-01, -5.00000000e-01], [ 1.75775694e-01, 6.94385920e-01, -5.00000000e-01], [ 5.91673686e-02, 7.13777718e-01, -5.00000000e-01], [ -5.88119880e-02, 7.12616793e-01, -5.00000000e-01], [ -1.74101170e-01, 6.89878982e-01, -5.00000000e-01], [ -2.84169826e-01, 6.49644503e-01, -5.00000000e-01], [ -3.86858078e-01, 5.93199666e-01, -5.00000000e-01], [ -4.79144350e-01, 5.20924297e-01, -5.00000000e-01], [ 1.54837478e-01, 6.11910244e-01, -5.00000000e-01], [ 5.21907761e-02, 6.29193467e-01, -5.00000000e-01], [ -5.12184367e-02, 6.25994011e-01, -5.00000000e-01], [ -1.53060733e-01, 6.08704209e-01, -5.00000000e-01], [ -2.50417746e-01, 5.73884869e-01, -5.00000000e-01], [ -3.39314709e-01, 5.21301649e-01, -5.00000000e-01], [ -4.20125339e-01, 4.57152827e-01, -5.00000000e-01], [ 1.35887547e-01, 5.37522969e-01, -5.00000000e-01], [ 4.57175793e-02, 5.49353160e-01, -5.00000000e-01], [ -4.41750548e-02, 5.47282691e-01, -5.00000000e-01], [ -1.32346943e-01, 5.31088099e-01, -5.00000000e-01], [ -2.17069383e-01, 5.00276789e-01, -5.00000000e-01], [ -2.95441098e-01, 4.55297464e-01, -5.00000000e-01], [ -3.66529630e-01, 3.99203743e-01, -5.00000000e-01], [ 1.17538501e-01, 4.65928006e-01, -5.00000000e-01], [ 3.96664619e-02, 4.79172226e-01, -5.00000000e-01], [ -3.75472975e-02, 4.75808900e-01, -5.00000000e-01], [ -1.13238534e-01, 4.61485351e-01, -5.00000000e-01], [ -1.87421164e-01, 4.35519238e-01, -5.00000000e-01], [ -2.54075366e-01, 3.93685438e-01, -5.00000000e-01], [ -3.15875504e-01, 3.44415828e-01, 
-5.00000000e-01], [ -9.42722138e-02, 3.97659750e-01, -5.00000000e-01], [ -3.16082042e-02, 4.12808353e-01, -5.00000000e-01], [ -1.57313402e-01, 3.72682840e-01, -5.00000000e-01], [ -2.13902082e-01, 3.35146284e-01, -5.00000000e-01], [ -2.67798778e-01, 2.92234617e-01, -5.00000000e-01], [ -7.38121164e-02, 3.40587375e-01, -5.00000000e-01], [ -2.54537279e-02, 3.56949863e-01, -5.00000000e-01], [ -1.23885635e-01, 3.12926202e-01, -5.00000000e-01], [ -1.73736614e-01, 2.79179204e-01, -5.00000000e-01], [ -2.22166180e-01, 2.41993646e-01, -5.00000000e-01], [ -4.95025577e-02, 2.94314076e-01, -5.00000000e-01], [ -1.98454920e-02, 3.18279461e-01, -5.00000000e-01], [ -8.72841201e-02, 2.62569352e-01, -5.00000000e-01], [ -1.29852384e-01, 2.26256159e-01, -5.00000000e-01], [ -1.74652198e-01, 1.90687531e-01, -5.00000000e-01], [ -1.29778666e-01, 1.38814436e-01, -5.00000000e-01], [ -8.36228415e-02, 1.77780710e-01, -5.00000000e-01], [ -8.16435423e-02, 7.97110013e-02, -5.00000000e-01], [ -3.02608410e-02, 1.29131270e-01, -5.00000000e-01], [ -4.40670309e-02, 2.18498201e-01, -5.00000000e-01], [ 7.24710626e-03, 1.80262002e-01, -5.00000000e-01], [ -1.03471103e-02, 2.62032046e-01, -5.00000000e-01], [ 3.73861516e-02, 2.31491239e-01, -5.00000000e-01], [ 1.54198095e-02, 3.03574443e-01, -5.00000000e-01], [ 6.32129499e-02, 2.82903258e-01, -5.00000000e-01], [ 8.26525900e-02, 3.39405613e-01, -5.00000000e-01], [ 1.00909910e-01, 4.03015805e-01, -5.00000000e-01], [ 2.64893934e-02, 3.53483382e-01, -5.00000000e-01], [ 3.36435314e-02, 4.12308571e-01, -5.00000000e-01], [ -7.89140509e-01, 6.14212713e-01, -5.00000000e-01], [ -7.08964474e-01, 5.51797869e-01, -5.00000000e-01], [ -8.79473751e-01, 4.75947393e-01, -5.00000000e-01], [ -7.90197698e-01, 4.27606295e-01, -5.00000000e-01], [ -9.45817242e-01, 3.24699469e-01, -5.00000000e-01], [ -8.49856882e-01, 2.91743625e-01, -5.00000000e-01], [ -9.86361303e-01, 1.64594590e-01, -5.00000000e-01], [ -8.86275569e-01, 1.47901385e-01, -5.00000000e-01], [ -1.00000000e+00, -1.22464680e-16, -5.00000000e-01], [ -8.98473805e-01, 1.15381832e-05, -5.00000000e-01], [ -9.86361303e-01, -1.64594590e-01, -5.00000000e-01], [ -8.86174824e-01, -1.47883246e-01, -5.00000000e-01], [ -6.30001456e-01, 4.90383785e-01, -5.00000000e-01], [ -7.02273443e-01, 3.80015358e-01, -5.00000000e-01], [ -7.55313724e-01, 2.59296912e-01, -5.00000000e-01], [ -7.87658470e-01, 1.31485016e-01, -5.00000000e-01], [ -7.98470908e-01, 4.55107425e-05, -5.00000000e-01], [ -7.87494160e-01, -1.31426735e-01, -5.00000000e-01], [ -5.58627682e-01, 4.34835928e-01, -5.00000000e-01], [ -6.22870183e-01, 3.37003542e-01, -5.00000000e-01], [ -6.69946331e-01, 2.29998633e-01, -5.00000000e-01], [ -6.98601469e-01, 1.16688217e-01, -5.00000000e-01], [ -7.08116470e-01, 1.05809214e-04, -5.00000000e-01], [ -6.98261773e-01, -1.16539499e-01, -5.00000000e-01], [ -4.90039236e-01, 3.81447281e-01, -5.00000000e-01], [ -5.46674699e-01, 2.95688707e-01, -5.00000000e-01], [ -5.88083370e-01, 2.01927138e-01, -5.00000000e-01], [ -6.13127891e-01, 1.02579208e-01, -5.00000000e-01], [ -6.21326911e-01, 2.49679885e-04, -5.00000000e-01], [ -6.12659159e-01, -1.02238675e-01, -5.00000000e-01], [ -4.28201630e-01, 3.33102542e-01, -5.00000000e-01], [ -4.78292897e-01, 2.58422199e-01, -5.00000000e-01], [ -5.14588632e-01, 1.76707278e-01, -5.00000000e-01], [ -5.36209382e-01, 9.00148497e-02, -5.00000000e-01], [ -5.43295990e-01, 5.06463641e-04, -5.00000000e-01], [ -5.35354600e-01, -8.93132854e-02, -5.00000000e-01], [ -3.70108778e-01, 2.87293073e-01, -5.00000000e-01], [ -4.14823967e-01, 2.23456652e-01, 
-5.00000000e-01], [ -4.46176995e-01, 1.53226227e-01, -5.00000000e-01], [ -4.64366203e-01, 7.86053306e-02, -5.00000000e-01], [ -4.70979406e-01, 1.02297782e-03, -5.00000000e-01], [ -4.62590465e-01, -7.71173269e-02, -5.00000000e-01], [ -3.98997832e-01, 6.89882561e-02, -5.00000000e-01], [ -3.84007718e-01, 1.31869003e-01, -5.00000000e-01], [ -4.02767806e-01, 2.14475434e-03, -5.00000000e-01], [ -3.93927304e-01, -6.55498719e-02, -5.00000000e-01], [ -3.37106322e-01, 6.21262340e-02, -5.00000000e-01], [ -3.30209654e-01, 1.13360219e-01, -5.00000000e-01], [ -3.35031657e-01, 5.42266022e-03, -5.00000000e-01], [ -3.27721161e-01, -5.42580733e-02, -5.00000000e-01], [ -2.83447084e-01, 6.21745720e-02, -5.00000000e-01], [ -2.92001062e-01, 1.00268454e-01, -5.00000000e-01], [ -2.73720464e-01, 1.23812993e-02, -5.00000000e-01], [ -2.63751777e-01, -4.33840760e-02, -5.00000000e-01], [ -1.98646355e-01, -3.27574407e-02, -5.00000000e-01], [ -2.13311244e-01, 2.71005688e-02, -5.00000000e-01], [ -1.25363626e-01, -2.31240997e-02, -5.00000000e-01], [ -1.50402001e-01, 4.80422198e-02, -5.00000000e-01], [ -2.35376990e-01, 8.01922238e-02, -5.00000000e-01], [ -1.81647343e-01, 1.06212482e-01, -5.00000000e-01], [ -2.61648693e-01, 1.25022130e-01, -5.00000000e-01], [ -2.20441300e-01, 1.56334892e-01, -5.00000000e-01], [ -2.65822474e-01, 2.01206908e-01, -5.00000000e-01], [ -3.15933525e-01, 2.43651325e-01, -5.00000000e-01], [ -3.03727391e-01, 1.58021732e-01, -5.00000000e-01], [ -3.58146942e-01, 1.91331513e-01, -5.00000000e-01], [ -9.45817242e-01, -3.24699469e-01, -5.00000000e-01], [ -8.49758028e-01, -2.91753303e-01, -5.00000000e-01], [ -8.79473751e-01, -4.75947393e-01, -5.00000000e-01], [ -7.90200242e-01, -4.27674318e-01, -5.00000000e-01], [ -7.89140509e-01, -6.14212713e-01, -5.00000000e-01], [ -7.09077565e-01, -5.51918815e-01, -5.00000000e-01], [ -6.77281572e-01, -7.35723911e-01, -5.00000000e-01], [ -6.08569412e-01, -6.61074211e-01, -5.00000000e-01], [ -5.46948158e-01, -8.37166478e-01, -5.00000000e-01], [ -4.91428805e-01, -7.52168929e-01, -5.00000000e-01], [ -4.01695425e-01, -9.15773327e-01, -5.00000000e-01], [ -3.60888838e-01, -8.22761670e-01, -5.00000000e-01], [ -7.55188846e-01, -2.59345642e-01, -5.00000000e-01], [ -7.02337606e-01, -3.80192065e-01, -5.00000000e-01], [ -6.32857312e-01, -4.92618433e-01, -5.00000000e-01], [ -5.43151014e-01, -5.89986003e-01, -5.00000000e-01], [ -4.37118826e-01, -6.68993686e-01, -5.00000000e-01], [ -3.20766354e-01, -7.31326287e-01, -5.00000000e-01], [ -6.69734997e-01, -2.30080512e-01, -5.00000000e-01], [ -6.22993936e-01, -3.37335567e-01, -5.00000000e-01], [ -5.59194551e-01, -4.35298937e-01, -5.00000000e-01], [ -4.79828309e-01, -5.21111684e-01, -5.00000000e-01], [ -3.87501919e-01, -5.92949943e-01, -5.00000000e-01], [ -2.84426825e-01, -6.48495027e-01, -5.00000000e-01], [ -5.87684351e-01, -2.02039854e-01, -5.00000000e-01], [ -5.46979032e-01, -2.96347299e-01, -5.00000000e-01], [ -4.93900401e-01, -3.84490387e-01, -5.00000000e-01], [ -4.23918729e-01, -4.60219985e-01, -5.00000000e-01], [ -3.40561104e-01, -5.20838061e-01, -5.00000000e-01], [ -2.49643040e-01, -5.69140160e-01, -5.00000000e-01], [ -5.13658677e-01, -1.76864123e-01, -5.00000000e-01], [ -4.78660176e-01, -2.59652919e-01, -5.00000000e-01], [ -4.29973501e-01, -3.34721367e-01, -5.00000000e-01], [ -3.69196403e-01, -4.00374180e-01, -5.00000000e-01], [ -2.97634891e-01, -4.54622635e-01, -5.00000000e-01], [ -2.18156550e-01, -4.97266712e-01, -5.00000000e-01], [ -4.43955972e-01, -1.53433052e-01, -5.00000000e-01], [ -4.14976040e-01, -2.25780487e-01, 
-5.00000000e-01], [ -3.73212995e-01, -2.90523705e-01, -5.00000000e-01], [ -3.21264722e-01, -3.47542486e-01, -5.00000000e-01], [ -2.57973317e-01, -3.92813300e-01, -5.00000000e-01], [ -1.88659023e-01, -4.29830508e-01, -5.00000000e-01], [ -2.76959851e-01, -2.97681175e-01, -5.00000000e-01], [ -3.22068080e-01, -2.50675254e-01, -5.00000000e-01], [ -2.21359240e-01, -3.34292006e-01, -5.00000000e-01], [ -1.61403511e-01, -3.67396927e-01, -5.00000000e-01], [ -2.36755182e-01, -2.48291400e-01, -5.00000000e-01], [ -2.76170203e-01, -2.14736566e-01, -5.00000000e-01], [ -1.88145259e-01, -2.77458362e-01, -5.00000000e-01], [ -1.33977883e-01, -3.03968290e-01, -5.00000000e-01], [ -2.07536544e-01, -2.03087349e-01, -5.00000000e-01], [ -2.44322863e-01, -1.89623778e-01, -5.00000000e-01], [ -1.60891409e-01, -2.22173809e-01, -5.00000000e-01], [ -1.08132613e-01, -2.43259688e-01, -5.00000000e-01], [ -8.28302097e-02, -1.83773619e-01, -5.00000000e-01], [ -1.41840587e-01, -1.63689740e-01, -5.00000000e-01], [ -5.28309761e-02, -1.16741673e-01, -5.00000000e-01], [ -1.27483869e-01, -9.76683493e-02, -5.00000000e-01], [ -1.97979852e-01, -1.52716684e-01, -5.00000000e-01], [ -1.94512087e-01, -9.49401261e-02, -5.00000000e-01], [ -2.49063661e-01, -1.50709568e-01, -5.00000000e-01], [ -2.55937772e-01, -1.00112636e-01, -5.00000000e-01], [ -3.15961679e-01, -1.13123573e-01, -5.00000000e-01], [ -3.78395823e-01, -1.32030107e-01, -5.00000000e-01], [ -2.99920321e-01, -1.68386817e-01, -5.00000000e-01], [ -3.56808685e-01, -1.95651482e-01, -5.00000000e-01], [ -2.45485487e-01, -9.69400266e-01, -5.00000000e-01], [ -2.20527452e-01, -8.70963424e-01, -5.00000000e-01], [ -8.25793455e-02, -9.96584493e-01, -5.00000000e-01], [ -7.41639184e-02, -8.95445060e-01, -5.00000000e-01], [ 8.25793455e-02, -9.96584493e-01, -5.00000000e-01], [ 7.42192906e-02, -8.95487034e-01, -5.00000000e-01], [ 2.45485487e-01, -9.69400266e-01, -5.00000000e-01], [ 2.20573235e-01, -8.71047175e-01, -5.00000000e-01], [ 4.01695425e-01, -9.15773327e-01, -5.00000000e-01], [ 3.60903494e-01, -8.22802941e-01, -5.00000000e-01], [ 5.46948158e-01, -8.37166478e-01, -5.00000000e-01], [ 4.91381119e-01, -7.52102570e-01, -5.00000000e-01], [ -1.95947086e-01, -7.74127277e-01, -5.00000000e-01], [ -6.58591700e-02, -7.95934883e-01, -5.00000000e-01], [ 6.59950160e-02, -7.96009686e-01, -5.00000000e-01], [ 1.96033858e-01, -7.74239363e-01, -5.00000000e-01], [ 3.20688739e-01, -7.31187985e-01, -5.00000000e-01], [ 4.36565689e-01, -6.68176192e-01, -5.00000000e-01], [ -1.73710483e-01, -6.86594431e-01, -5.00000000e-01], [ -5.83403733e-02, -7.06068825e-01, -5.00000000e-01], [ 5.85663516e-02, -7.06222728e-01, -5.00000000e-01], [ 1.73845433e-01, -6.86833114e-01, -5.00000000e-01], [ 2.84310702e-01, -6.48392566e-01, -5.00000000e-01], [ 3.87077327e-01, -5.92417398e-01, -5.00000000e-01], [ -1.52324550e-01, -6.02613585e-01, -5.00000000e-01], [ -5.10827736e-02, -6.20032612e-01, -5.00000000e-01], [ 5.14407375e-02, -6.20224621e-01, -5.00000000e-01], [ 1.52492169e-01, -6.03046508e-01, -5.00000000e-01], [ 2.49356849e-01, -5.69065471e-01, -5.00000000e-01], [ 3.39522909e-01, -5.19665127e-01, -5.00000000e-01], [ -1.32913591e-01, -5.26851074e-01, -5.00000000e-01], [ -4.44373752e-02, -5.42756409e-01, -5.00000000e-01], [ 4.50390459e-02, -5.43028575e-01, -5.00000000e-01], [ 1.33228954e-01, -5.27995406e-01, -5.00000000e-01], [ 2.17713616e-01, -4.97612407e-01, -5.00000000e-01], [ 2.96632422e-01, -4.54081308e-01, -5.00000000e-01], [ -1.14420539e-01, -4.55704117e-01, -5.00000000e-01], [ -3.79657707e-02, -4.70906926e-01, 
-5.00000000e-01], [ 3.90798645e-02, -4.71331248e-01, -5.00000000e-01], [ 1.15221008e-01, -4.58994868e-01, -5.00000000e-01], [ 1.87693599e-01, -4.30660195e-01, -5.00000000e-01], [ 2.56191789e-01, -3.92327216e-01, -5.00000000e-01], [ 9.77205966e-02, -3.94673911e-01, -5.00000000e-01], [ 3.36893742e-02, -4.06727121e-01, -5.00000000e-01], [ 1.58803158e-01, -3.68132920e-01, -5.00000000e-01], [ 2.17966632e-01, -3.34109450e-01, -5.00000000e-01], [ 7.83407097e-02, -3.33990665e-01, -5.00000000e-01], [ 2.86928174e-02, -3.48638215e-01, -5.00000000e-01], [ 1.29463668e-01, -3.09306875e-01, -5.00000000e-01], [ 1.81227986e-01, -2.78393827e-01, -5.00000000e-01], [ 5.64523840e-02, -2.84774155e-01, -5.00000000e-01], [ 2.50675824e-02, -3.08228657e-01, -5.00000000e-01], [ 9.81604613e-02, -2.56263578e-01, -5.00000000e-01], [ 1.45747568e-01, -2.24725813e-01, -5.00000000e-01], [ 1.08776353e-01, -1.69704406e-01, -5.00000000e-01], [ 5.96672230e-02, -2.08411981e-01, -5.00000000e-01], [ 6.52344214e-02, -1.07234898e-01, -5.00000000e-01], [ 1.12299685e-02, -1.59491418e-01, -5.00000000e-01], [ 1.94489428e-02, -2.49212860e-01, -5.00000000e-01], [ -2.75711446e-02, -2.14453527e-01, -5.00000000e-01], [ -1.01419274e-02, -2.90912747e-01, -5.00000000e-01], [ -5.63990856e-02, -2.68644868e-01, -5.00000000e-01], [ -7.81326349e-02, -3.26377817e-01, -5.00000000e-01], [ -9.65225590e-02, -3.89249328e-01, -5.00000000e-01], [ -2.31032455e-02, -3.43177165e-01, -5.00000000e-01], [ -3.13773433e-02, -4.05719997e-01, -5.00000000e-01], [ 6.77281572e-01, -7.35723911e-01, -5.00000000e-01], [ 6.08411375e-01, -6.60869436e-01, -5.00000000e-01], [ 7.89140509e-01, -6.14212713e-01, -5.00000000e-01], [ 7.08543237e-01, -5.51440045e-01, -5.00000000e-01], [ 8.79473751e-01, -4.75947393e-01, -5.00000000e-01], [ 7.88509514e-01, -4.26698136e-01, -5.00000000e-01], [ 9.45817242e-01, -3.24699469e-01, -5.00000000e-01], [ 8.49108858e-01, -2.91497351e-01, -5.00000000e-01], [ 9.86361303e-01, -1.64594590e-01, -5.00000000e-01], [ 8.85646606e-01, -1.47773471e-01, -5.00000000e-01], [ 5.40399291e-01, -5.86902930e-01, -5.00000000e-01], [ 6.31661212e-01, -4.91529504e-01, -5.00000000e-01], [ 7.06154712e-01, -3.82110644e-01, -5.00000000e-01], [ 7.57207596e-01, -2.59933027e-01, -5.00000000e-01], [ 7.90560453e-01, -1.31868883e-01, -5.00000000e-01], [ 4.79134872e-01, -5.20246274e-01, -5.00000000e-01], [ 5.57358575e-01, -4.33573270e-01, -5.00000000e-01], [ 6.22649467e-01, -3.36858332e-01, -5.00000000e-01], [ 6.70334308e-01, -2.30102238e-01, -5.00000000e-01], [ 7.00296083e-01, -1.16745852e-01, -5.00000000e-01], [ 4.20218711e-01, -4.56063930e-01, -5.00000000e-01], [ 4.90701218e-01, -3.81516225e-01, -5.00000000e-01], [ 5.48809022e-01, -2.96865907e-01, -5.00000000e-01], [ 5.88345346e-01, -2.02001938e-01, -5.00000000e-01], [ 6.14676740e-01, -1.02418815e-01, -5.00000000e-01], [ 3.67922567e-01, -3.98927381e-01, -5.00000000e-01], [ 4.28983572e-01, -3.33156683e-01, -5.00000000e-01], [ 4.78469364e-01, -2.58774492e-01, -5.00000000e-01], [ 5.14798664e-01, -1.76976670e-01, -5.00000000e-01], [ 5.37552876e-01, -8.96851608e-02, -5.00000000e-01], [ 3.18991616e-01, -3.45088841e-01, -5.00000000e-01], [ 3.72496945e-01, -2.88521239e-01, -5.00000000e-01], [ 4.17437712e-01, -2.25772747e-01, -5.00000000e-01], [ 4.46270753e-01, -1.54015799e-01, -5.00000000e-01], [ 4.65793651e-01, -7.81566215e-02, -5.00000000e-01], [ 3.83645678e-01, -1.33842084e-01, -5.00000000e-01], [ 3.57498091e-01, -1.93408546e-01, -5.00000000e-01], [ 3.94991079e-01, -6.75531524e-02, -5.00000000e-01], [ 3.23108985e-01, 
-1.16767010e-01, -5.00000000e-01], [ 3.07405476e-01, -1.66484070e-01, -5.00000000e-01], [ 3.28629393e-01, -5.96308040e-02, -5.00000000e-01], [ 2.69488721e-01, -1.08491179e-01, -5.00000000e-01], [ 2.71915330e-01, -1.47551966e-01, -5.00000000e-01], [ 2.65001259e-01, -5.64391238e-02, -5.00000000e-01], [ 2.02706372e-01, -6.38527005e-02, -5.00000000e-01], [ 1.35742212e-01, -7.83273135e-02, -5.00000000e-01], [ 2.18918251e-01, -1.20081827e-01, -5.00000000e-01], [ 1.63747295e-01, -1.39842812e-01, -5.00000000e-01], [ 2.38577514e-01, -1.67677403e-01, -5.00000000e-01], [ 1.94295104e-01, -1.94156801e-01, -5.00000000e-01], [ 2.30940050e-01, -2.44114153e-01, -5.00000000e-01], [ 2.74443311e-01, -2.95203178e-01, -5.00000000e-01], [ 2.74528072e-01, -2.06666991e-01, -5.00000000e-01], [ 3.20977283e-01, -2.46876061e-01, -5.00000000e-01] ])

_connectivity = np.array([ [ 1, 2, 3, 4, 5, 6, 7, 8], [ 2, 9, 10, 3, 6, 11, 12, 7], [ 9, 13, 14, 10, 11, 15, 16, 12], [ 13, 17, 18, 14, 15, 19, 20, 16], [ 17, 21, 22, 18, 19, 23, 24, 20], [ 21, 25, 26, 22, 23, 27, 28, 24], [ 25, 29, 30, 26, 27, 31, 32, 28], [ 4, 3, 33, 34, 8, 7, 35, 36], [ 3, 10, 37, 33, 7, 12, 38, 35], [ 10, 14, 39, 37, 12, 16, 40, 38], [ 14, 18, 41, 39, 16, 20, 42, 40], [ 18, 22, 43, 41, 20, 24, 44, 42], [ 22, 26, 45, 43, 24, 28, 46, 44], [ 26, 30, 47, 45, 28, 32, 48, 46], [ 34, 33, 49, 50, 36, 35, 51, 52], [ 33, 37, 53, 49, 35, 38, 54, 51], [ 37, 39, 55, 53, 38, 40, 56, 54], [ 39, 41, 57, 55, 40, 42, 58, 56], [ 41, 43, 59, 57, 42, 44, 60, 58], [ 43, 45, 61, 59, 44, 46, 62, 60], [ 45, 47, 63, 61, 46, 48, 64, 62], [ 50, 49, 65, 66, 52, 51, 67, 68], [ 49, 53, 69, 65, 51, 54, 70, 67], [ 53, 55, 71, 69, 54, 56, 72, 70], [ 55, 57, 73, 71, 56, 58, 74, 72], [ 57, 59, 75, 73, 58, 60, 76, 74], [ 59, 61, 77, 75, 60, 62, 78, 76], [ 61, 63, 79, 77, 62, 64, 80, 78], [ 66, 65, 81, 82, 68, 67, 83, 84], [ 65, 69, 85, 81, 67, 70, 86, 83], [ 69, 71, 87, 85, 70, 72, 88, 86], [ 71, 73, 89, 87, 72, 74, 90, 88], [ 73, 75, 91, 89, 74, 76, 92, 90], [ 75, 77, 93, 91, 76, 78, 94, 92], [ 77, 79, 95, 93, 78, 80, 96, 94], [ 82, 81, 97, 98, 84, 83, 99, 100], [ 81, 85, 101, 97, 83, 86, 102, 99], [ 85, 87, 103, 101, 86, 88, 104, 102], [ 87, 89, 105, 103, 88, 90, 106, 104], [ 89, 91, 107, 105, 90, 92, 108, 106], [ 91, 93, 109, 107, 92, 94, 110, 108], [ 93, 95, 111, 109, 94, 96, 112, 110], [ 105, 107, 113, 114, 106, 108, 115, 116], [ 107, 109, 117, 113, 108, 110, 118, 115], [ 109, 111, 119, 117, 110, 112, 120, 118], [ 114, 113, 121, 122, 116, 115, 123, 124], [ 113, 117, 125, 121, 115, 118, 126, 123], [ 117, 119, 127, 125, 118, 120, 128, 126], [ 122, 121, 129, 130, 124, 123, 131, 132], [ 121, 125, 133, 129, 123, 126, 134, 131], [ 125, 127, 135, 133, 126, 128, 136, 134], [ 135, 137, 138, 133, 136, 139, 140, 134], [ 137, 141, 142, 138, 139, 143, 144, 140], [ 141, 145, 146, 142, 143, 147, 148, 144], [ 145, 149, 150, 146, 147, 151, 152, 148], [ 133, 138, 153, 129, 134, 140, 154, 131], [ 138, 142, 155, 153, 140, 144, 156, 154], [ 142, 146, 157, 155, 144, 148, 158, 156], [ 146, 150, 159, 157, 148, 152, 160, 158], [ 129, 153, 161, 130, 131, 154, 162, 132], [ 153, 155, 163, 161, 154, 156, 164, 162], [ 155, 157, 165, 163, 156, 158, 166, 164], [ 157, 159, 167, 165, 158, 160, 168, 166], [ 167, 169, 170, 165, 168, 171, 172, 166], [ 169, 173, 174, 170, 171, 175, 176, 172], [ 173, 98, 97, 174, 175, 100, 99, 176], [ 165, 170, 177, 163, 166, 172, 178, 164], [ 170, 174, 179, 177, 172, 176, 180, 178], [ 174, 97, 101, 179, 176, 99, 102, 180], [ 163, 177, 181, 161, 164, 178, 182, 162], [ 177, 179, 183, 181,
178, 180, 184, 182], [ 179, 101, 103, 183, 180, 102, 104, 184], [ 161, 181, 122, 130, 162, 182, 124, 132], [ 181, 183, 114, 122, 182, 184, 116, 124], [ 183, 103, 105, 114, 184, 104, 106, 116], [ 29, 185, 186, 30, 31, 187, 188, 32], [ 185, 189, 190, 186, 187, 191, 192, 188], [ 189, 193, 194, 190, 191, 195, 196, 192], [ 193, 197, 198, 194, 195, 199, 200, 196], [ 197, 201, 202, 198, 199, 203, 204, 200], [ 201, 205, 206, 202, 203, 207, 208, 204], [ 205, 209, 210, 206, 207, 211, 212, 208], [ 30, 186, 213, 47, 32, 188, 214, 48], [ 186, 190, 215, 213, 188, 192, 216, 214], [ 190, 194, 217, 215, 192, 196, 218, 216], [ 194, 198, 219, 217, 196, 200, 220, 218], [ 198, 202, 221, 219, 200, 204, 222, 220], [ 202, 206, 223, 221, 204, 208, 224, 222], [ 206, 210, 225, 223, 208, 212, 226, 224], [ 47, 213, 227, 63, 48, 214, 228, 64], [ 213, 215, 229, 227, 214, 216, 230, 228], [ 215, 217, 231, 229, 216, 218, 232, 230], [ 217, 219, 233, 231, 218, 220, 234, 232], [ 219, 221, 235, 233, 220, 222, 236, 234], [ 221, 223, 237, 235, 222, 224, 238, 236], [ 223, 225, 239, 237, 224, 226, 240, 238], [ 63, 227, 241, 79, 64, 228, 242, 80], [ 227, 229, 243, 241, 228, 230, 244, 242], [ 229, 231, 245, 243, 230, 232, 246, 244], [ 231, 233, 247, 245, 232, 234, 248, 246], [ 233, 235, 249, 247, 234, 236, 250, 248], [ 235, 237, 251, 249, 236, 238, 252, 250], [ 237, 239, 253, 251, 238, 240, 254, 252], [ 79, 241, 255, 95, 80, 242, 256, 96], [ 241, 243, 257, 255, 242, 244, 258, 256], [ 243, 245, 259, 257, 244, 246, 260, 258], [ 245, 247, 261, 259, 246, 248, 262, 260], [ 247, 249, 263, 261, 248, 250, 264, 262], [ 249, 251, 265, 263, 250, 252, 266, 264], [ 251, 253, 267, 265, 252, 254, 268, 266], [ 95, 255, 269, 111, 96, 256, 270, 112], [ 255, 257, 271, 269, 256, 258, 272, 270], [ 257, 259, 273, 271, 258, 260, 274, 272], [ 259, 261, 275, 273, 260, 262, 276, 274], [ 261, 263, 277, 275, 262, 264, 278, 276], [ 263, 265, 279, 277, 264, 266, 280, 278], [ 265, 267, 281, 279, 266, 268, 282, 280], [ 273, 275, 283, 284, 274, 276, 285, 286], [ 275, 277, 287, 283, 276, 278, 288, 285], [ 277, 279, 289, 287, 278, 280, 290, 288], [ 279, 281, 291, 289, 280, 282, 292, 290], [ 284, 283, 293, 294, 286, 285, 295, 296], [ 283, 287, 297, 293, 285, 288, 298, 295], [ 287, 289, 299, 297, 288, 290, 300, 298], [ 289, 291, 301, 299, 290, 292, 302, 300], [ 294, 293, 303, 304, 296, 295, 305, 306], [ 293, 297, 307, 303, 295, 298, 308, 305], [ 297, 299, 309, 307, 298, 300, 310, 308], [ 299, 301, 311, 309, 300, 302, 312, 310], [ 311, 313, 314, 309, 312, 315, 316, 310], [ 313, 317, 318, 314, 315, 319, 320, 316], [ 317, 149, 145, 318, 319, 151, 147, 320], [ 309, 314, 321, 307, 310, 316, 322, 308], [ 314, 318, 323, 321, 316, 320, 324, 322], [ 318, 145, 141, 323, 320, 147, 143, 324], [ 307, 321, 325, 303, 308, 322, 326, 305], [ 321, 323, 327, 325, 322, 324, 328, 326], [ 323, 141, 137, 327, 324, 143, 139, 328], [ 303, 325, 329, 304, 305, 326, 330, 306], [ 325, 327, 331, 329, 326, 328, 332, 330], [ 327, 137, 135, 331, 328, 139, 136, 332], [ 135, 127, 333, 331, 136, 128, 334, 332], [ 127, 119, 335, 333, 128, 120, 336, 334], [ 119, 111, 269, 335, 120, 112, 270, 336], [ 331, 333, 337, 329, 332, 334, 338, 330], [ 333, 335, 339, 337, 334, 336, 340, 338], [ 335, 269, 271, 339, 336, 270, 272, 340], [ 329, 337, 294, 304, 330, 338, 296, 306], [ 337, 339, 284, 294, 338, 340, 286, 296], [ 339, 271, 273, 284, 340, 272, 274, 286], [ 209, 341, 342, 210, 211, 343, 344, 212], [ 341, 345, 346, 342, 343, 347, 348, 344], [ 345, 349, 350, 346, 347, 351, 352, 348], [ 349, 353, 354, 350, 351, 
355, 356, 352], [ 353, 357, 358, 354, 355, 359, 360, 356], [ 357, 361, 362, 358, 359, 363, 364, 360], [ 210, 342, 365, 225, 212, 344, 366, 226], [ 342, 346, 367, 365, 344, 348, 368, 366], [ 346, 350, 369, 367, 348, 352, 370, 368], [ 350, 354, 371, 369, 352, 356, 372, 370], [ 354, 358, 373, 371, 356, 360, 374, 372], [ 358, 362, 375, 373, 360, 364, 376, 374], [ 225, 365, 377, 239, 226, 366, 378, 240], [ 365, 367, 379, 377, 366, 368, 380, 378], [ 367, 369, 381, 379, 368, 370, 382, 380], [ 369, 371, 383, 381, 370, 372, 384, 382], [ 371, 373, 385, 383, 372, 374, 386, 384], [ 373, 375, 387, 385, 374, 376, 388, 386], [ 239, 377, 389, 253, 240, 378, 390, 254], [ 377, 379, 391, 389, 378, 380, 392, 390], [ 379, 381, 393, 391, 380, 382, 394, 392], [ 381, 383, 395, 393, 382, 384, 396, 394], [ 383, 385, 397, 395, 384, 386, 398, 396], [ 385, 387, 399, 397, 386, 388, 400, 398], [ 253, 389, 401, 267, 254, 390, 402, 268], [ 389, 391, 403, 401, 390, 392, 404, 402], [ 391, 393, 405, 403, 392, 394, 406, 404], [ 393, 395, 407, 405, 394, 396, 408, 406], [ 395, 397, 409, 407, 396, 398, 410, 408], [ 397, 399, 411, 409, 398, 400, 412, 410], [ 267, 401, 413, 281, 268, 402, 414, 282], [ 401, 403, 415, 413, 402, 404, 416, 414], [ 403, 405, 417, 415, 404, 406, 418, 416], [ 405, 407, 419, 417, 406, 408, 420, 418], [ 407, 409, 421, 419, 408, 410, 422, 420], [ 409, 411, 423, 421, 410, 412, 424, 422], [ 417, 419, 425, 426, 418, 420, 427, 428], [ 419, 421, 429, 425, 420, 422, 430, 427], [ 421, 423, 431, 429, 422, 424, 432, 430], [ 426, 425, 433, 434, 428, 427, 435, 436], [ 425, 429, 437, 433, 427, 430, 438, 435], [ 429, 431, 439, 437, 430, 432, 440, 438], [ 434, 433, 441, 442, 436, 435, 443, 444], [ 433, 437, 445, 441, 435, 438, 446, 443], [ 437, 439, 447, 445, 438, 440, 448, 446], [ 447, 449, 450, 445, 448, 451, 452, 446], [ 449, 453, 454, 450, 451, 455, 456, 452], [ 453, 149, 317, 454, 455, 151, 319, 456], [ 445, 450, 457, 441, 446, 452, 458, 443], [ 450, 454, 459, 457, 452, 456, 460, 458], [ 454, 317, 313, 459, 456, 319, 315, 460], [ 441, 457, 461, 442, 443, 458, 462, 444], [ 457, 459, 463, 461, 458, 460, 464, 462], [ 459, 313, 311, 463, 460, 315, 312, 464], [ 311, 301, 465, 463, 312, 302, 466, 464], [ 301, 291, 467, 465, 302, 292, 468, 466], [ 291, 281, 413, 467, 292, 282, 414, 468], [ 463, 465, 469, 461, 464, 466, 470, 462], [ 465, 467, 471, 469, 466, 468, 472, 470], [ 467, 413, 415, 471, 468, 414, 416, 472], [ 461, 469, 434, 442, 462, 470, 436, 444], [ 469, 471, 426, 434, 470, 472, 428, 436], [ 471, 415, 417, 426, 472, 416, 418, 428], [ 361, 473, 474, 362, 363, 475, 476, 364], [ 473, 477, 478, 474, 475, 479, 480, 476], [ 477, 481, 482, 478, 479, 483, 484, 480], [ 481, 485, 486, 482, 483, 487, 488, 484], [ 485, 489, 490, 486, 487, 491, 492, 488], [ 489, 493, 494, 490, 491, 495, 496, 492], [ 362, 474, 497, 375, 364, 476, 498, 376], [ 474, 478, 499, 497, 476, 480, 500, 498], [ 478, 482, 501, 499, 480, 484, 502, 500], [ 482, 486, 503, 501, 484, 488, 504, 502], [ 486, 490, 505, 503, 488, 492, 506, 504], [ 490, 494, 507, 505, 492, 496, 508, 506], [ 375, 497, 509, 387, 376, 498, 510, 388], [ 497, 499, 511, 509, 498, 500, 512, 510], [ 499, 501, 513, 511, 500, 502, 514, 512], [ 501, 503, 515, 513, 502, 504, 516, 514], [ 503, 505, 517, 515, 504, 506, 518, 516], [ 505, 507, 519, 517, 506, 508, 520, 518], [ 387, 509, 521, 399, 388, 510, 522, 400], [ 509, 511, 523, 521, 510, 512, 524, 522], [ 511, 513, 525, 523, 512, 514, 526, 524], [ 513, 515, 527, 525, 514, 516, 528, 526], [ 515, 517, 529, 527, 516, 518, 530, 528], [ 517, 519, 
531, 529, 518, 520, 532, 530], [ 399, 521, 533, 411, 400, 522, 534, 412], [ 521, 523, 535, 533, 522, 524, 536, 534], [ 523, 525, 537, 535, 524, 526, 538, 536], [ 525, 527, 539, 537, 526, 528, 540, 538], [ 527, 529, 541, 539, 528, 530, 542, 540], [ 529, 531, 543, 541, 530, 532, 544, 542], [ 411, 533, 545, 423, 412, 534, 546, 424], [ 533, 535, 547, 545, 534, 536, 548, 546], [ 535, 537, 549, 547, 536, 538, 550, 548], [ 537, 539, 551, 549, 538, 540, 552, 550], [ 539, 541, 553, 551, 540, 542, 554, 552], [ 541, 543, 555, 553, 542, 544, 556, 554], [ 549, 551, 557, 558, 550, 552, 559, 560], [ 551, 553, 561, 557, 552, 554, 562, 559], [ 553, 555, 563, 561, 554, 556, 564, 562], [ 558, 557, 565, 566, 560, 559, 567, 568], [ 557, 561, 569, 565, 559, 562, 570, 567], [ 561, 563, 571, 569, 562, 564, 572, 570], [ 566, 565, 573, 574, 568, 567, 575, 576], [ 565, 569, 577, 573, 567, 570, 578, 575], [ 569, 571, 579, 577, 570, 572, 580, 578], [ 579, 581, 582, 577, 580, 583, 584, 578], [ 581, 585, 586, 582, 583, 587, 588, 584], [ 585, 149, 453, 586, 587, 151, 455, 588], [ 577, 582, 589, 573, 578, 584, 590, 575], [ 582, 586, 591, 589, 584, 588, 592, 590], [ 586, 453, 449, 591, 588, 455, 451, 592], [ 573, 589, 593, 574, 575, 590, 594, 576], [ 589, 591, 595, 593, 590, 592, 596, 594], [ 591, 449, 447, 595, 592, 451, 448, 596], [ 447, 439, 597, 595, 448, 440, 598, 596], [ 439, 431, 599, 597, 440, 432, 600, 598], [ 431, 423, 545, 599, 432, 424, 546, 600], [ 595, 597, 601, 593, 596, 598, 602, 594], [ 597, 599, 603, 601, 598, 600, 604, 602], [ 599, 545, 547, 603, 600, 546, 548, 604], [ 593, 601, 566, 574, 594, 602, 568, 576], [ 601, 603, 558, 566, 602, 604, 560, 568], [ 603, 547, 549, 558, 604, 548, 550, 560], [ 493, 605, 606, 494, 495, 607, 608, 496], [ 605, 609, 610, 606, 607, 611, 612, 608], [ 609, 613, 614, 610, 611, 615, 616, 612], [ 613, 617, 618, 614, 615, 619, 620, 616], [ 617, 621, 622, 618, 619, 623, 624, 620], [ 621, 625, 626, 622, 623, 627, 628, 624], [ 494, 606, 629, 507, 496, 608, 630, 508], [ 606, 610, 631, 629, 608, 612, 632, 630], [ 610, 614, 633, 631, 612, 616, 634, 632], [ 614, 618, 635, 633, 616, 620, 636, 634], [ 618, 622, 637, 635, 620, 624, 638, 636], [ 622, 626, 639, 637, 624, 628, 640, 638], [ 507, 629, 641, 519, 508, 630, 642, 520], [ 629, 631, 643, 641, 630, 632, 644, 642], [ 631, 633, 645, 643, 632, 634, 646, 644], [ 633, 635, 647, 645, 634, 636, 648, 646], [ 635, 637, 649, 647, 636, 638, 650, 648], [ 637, 639, 651, 649, 638, 640, 652, 650], [ 519, 641, 653, 531, 520, 642, 654, 532], [ 641, 643, 655, 653, 642, 644, 656, 654], [ 643, 645, 657, 655, 644, 646, 658, 656], [ 645, 647, 659, 657, 646, 648, 660, 658], [ 647, 649, 661, 659, 648, 650, 662, 660], [ 649, 651, 663, 661, 650, 652, 664, 662], [ 531, 653, 665, 543, 532, 654, 666, 544], [ 653, 655, 667, 665, 654, 656, 668, 666], [ 655, 657, 669, 667, 656, 658, 670, 668], [ 657, 659, 671, 669, 658, 660, 672, 670], [ 659, 661, 673, 671, 660, 662, 674, 672], [ 661, 663, 675, 673, 662, 664, 676, 674], [ 543, 665, 677, 555, 544, 666, 678, 556], [ 665, 667, 679, 677, 666, 668, 680, 678], [ 667, 669, 681, 679, 668, 670, 682, 680], [ 669, 671, 683, 681, 670, 672, 684, 682], [ 671, 673, 685, 683, 672, 674, 686, 684], [ 673, 675, 687, 685, 674, 676, 688, 686], [ 681, 683, 689, 690, 682, 684, 691, 692], [ 683, 685, 693, 689, 684, 686, 694, 691], [ 685, 687, 695, 693, 686, 688, 696, 694], [ 690, 689, 697, 698, 692, 691, 699, 700], [ 689, 693, 701, 697, 691, 694, 702, 699], [ 693, 695, 703, 701, 694, 696, 704, 702], [ 698, 697, 705, 706, 700, 699, 707, 
708], [ 697, 701, 709, 705, 699, 702, 710, 707], [ 701, 703, 711, 709, 702, 704, 712, 710], [ 711, 713, 714, 709, 712, 715, 716, 710], [ 713, 717, 718, 714, 715, 719, 720, 716], [ 717, 149, 585, 718, 719, 151, 587, 720], [ 709, 714, 721, 705, 710, 716, 722, 707], [ 714, 718, 723, 721, 716, 720, 724, 722], [ 718, 585, 581, 723, 720, 587, 583, 724], [ 705, 721, 725, 706, 707, 722, 726, 708], [ 721, 723, 727, 725, 722, 724, 728, 726], [ 723, 581, 579, 727, 724, 583, 580, 728], [ 579, 571, 729, 727, 580, 572, 730, 728], [ 571, 563, 731, 729, 572, 564, 732, 730], [ 563, 555, 677, 731, 564, 556, 678, 732], [ 727, 729, 733, 725, 728, 730, 734, 726], [ 729, 731, 735, 733, 730, 732, 736, 734], [ 731, 677, 679, 735, 732, 678, 680, 736], [ 725, 733, 698, 706, 726, 734, 700, 708], [ 733, 735, 690, 698, 734, 736, 692, 700], [ 735, 679, 681, 690, 736, 680, 682, 692], [ 625, 737, 738, 626, 627, 739, 740, 628], [ 737, 741, 742, 738, 739, 743, 744, 740], [ 741, 745, 746, 742, 743, 747, 748, 744], [ 745, 749, 750, 746, 747, 751, 752, 748], [ 749, 753, 754, 750, 751, 755, 756, 752], [ 753, 1, 4, 754, 755, 5, 8, 756], [ 626, 738, 757, 639, 628, 740, 758, 640], [ 738, 742, 759, 757, 740, 744, 760, 758], [ 742, 746, 761, 759, 744, 748, 762, 760], [ 746, 750, 763, 761, 748, 752, 764, 762], [ 750, 754, 765, 763, 752, 756, 766, 764], [ 754, 4, 34, 765, 756, 8, 36, 766], [ 639, 757, 767, 651, 640, 758, 768, 652], [ 757, 759, 769, 767, 758, 760, 770, 768], [ 759, 761, 771, 769, 760, 762, 772, 770], [ 761, 763, 773, 771, 762, 764, 774, 772], [ 763, 765, 775, 773, 764, 766, 776, 774], [ 765, 34, 50, 775, 766, 36, 52, 776], [ 651, 767, 777, 663, 652, 768, 778, 664], [ 767, 769, 779, 777, 768, 770, 780, 778], [ 769, 771, 781, 779, 770, 772, 782, 780], [ 771, 773, 783, 781, 772, 774, 784, 782], [ 773, 775, 785, 783, 774, 776, 786, 784], [ 775, 50, 66, 785, 776, 52, 68, 786], [ 663, 777, 787, 675, 664, 778, 788, 676], [ 777, 779, 789, 787, 778, 780, 790, 788], [ 779, 781, 791, 789, 780, 782, 792, 790], [ 781, 783, 793, 791, 782, 784, 794, 792], [ 783, 785, 795, 793, 784, 786, 796, 794], [ 785, 66, 82, 795, 786, 68, 84, 796], [ 675, 787, 797, 687, 676, 788, 798, 688], [ 787, 789, 799, 797, 788, 790, 800, 798], [ 789, 791, 801, 799, 790, 792, 802, 800], [ 791, 793, 803, 801, 792, 794, 804, 802], [ 793, 795, 805, 803, 794, 796, 806, 804], [ 795, 82, 98, 805, 796, 84, 100, 806], [ 801, 803, 807, 808, 802, 804, 809, 810], [ 803, 805, 811, 807, 804, 806, 812, 809], [ 805, 98, 173, 811, 806, 100, 175, 812], [ 808, 807, 813, 814, 810, 809, 815, 816], [ 807, 811, 817, 813, 809, 812, 818, 815], [ 811, 173, 169, 817, 812, 175, 171, 818], [ 814, 813, 819, 820, 816, 815, 821, 822], [ 813, 817, 823, 819, 815, 818, 824, 821], [ 817, 169, 167, 823, 818, 171, 168, 824], [ 167, 159, 825, 823, 168, 160, 826, 824], [ 159, 150, 827, 825, 160, 152, 828, 826], [ 150, 149, 717, 827, 152, 151, 719, 828], [ 823, 825, 829, 819, 824, 826, 830, 821], [ 825, 827, 831, 829, 826, 828, 832, 830], [ 827, 717, 713, 831, 828, 719, 715, 832], [ 819, 829, 833, 820, 821, 830, 834, 822], [ 829, 831, 835, 833, 830, 832, 836, 834], [ 831, 713, 711, 835, 832, 715, 712, 836], [ 711, 703, 837, 835, 712, 704, 838, 836], [ 703, 695, 839, 837, 704, 696, 840, 838], [ 695, 687, 797, 839, 696, 688, 798, 840], [ 835, 837, 841, 833, 836, 838, 842, 834], [ 837, 839, 843, 841, 838, 840, 844, 842], [ 839, 797, 799, 843, 840, 798, 800, 844], [ 833, 841, 814, 820, 834, 842, 816, 822], [ 841, 843, 808, 814, 842, 844, 810, 816], [ 843, 799, 801, 808, 844, 800, 802, 810], [ 845, 
846, 847, 848, 1, 2, 3, 4], [ 846, 849, 850, 847, 2, 9, 10, 3], [ 849, 851, 852, 850, 9, 13, 14, 10], [ 851, 853, 854, 852, 13, 17, 18, 14], [ 853, 855, 856, 854, 17, 21, 22, 18], [ 855, 857, 858, 856, 21, 25, 26, 22], [ 857, 859, 860, 858, 25, 29, 30, 26], [ 848, 847, 861, 862, 4, 3, 33, 34], [ 847, 850, 863, 861, 3, 10, 37, 33], [ 850, 852, 864, 863, 10, 14, 39, 37], [ 852, 854, 865, 864, 14, 18, 41, 39], [ 854, 856, 866, 865, 18, 22, 43, 41], [ 856, 858, 867, 866, 22, 26, 45, 43], [ 858, 860, 868, 867, 26, 30, 47, 45], [ 862, 861, 869, 870, 34, 33, 49, 50], [ 861, 863, 871, 869, 33, 37, 53, 49], [ 863, 864, 872, 871, 37, 39, 55, 53], [ 864, 865, 873, 872, 39, 41, 57, 55], [ 865, 866, 874, 873, 41, 43, 59, 57], [ 866, 867, 875, 874, 43, 45, 61, 59], [ 867, 868, 876, 875, 45, 47, 63, 61], [ 870, 869, 877, 878, 50, 49, 65, 66], [ 869, 871, 879, 877, 49, 53, 69, 65], [ 871, 872, 880, 879, 53, 55, 71, 69], [ 872, 873, 881, 880, 55, 57, 73, 71], [ 873, 874, 882, 881, 57, 59, 75, 73], [ 874, 875, 883, 882, 59, 61, 77, 75], [ 875, 876, 884, 883, 61, 63, 79, 77], [ 878, 877, 885, 886, 66, 65, 81, 82], [ 877, 879, 887, 885, 65, 69, 85, 81], [ 879, 880, 888, 887, 69, 71, 87, 85], [ 880, 881, 889, 888, 71, 73, 89, 87], [ 881, 882, 890, 889, 73, 75, 91, 89], [ 882, 883, 891, 890, 75, 77, 93, 91], [ 883, 884, 892, 891, 77, 79, 95, 93], [ 886, 885, 893, 894, 82, 81, 97, 98], [ 885, 887, 895, 893, 81, 85, 101, 97], [ 887, 888, 896, 895, 85, 87, 103, 101], [ 888, 889, 897, 896, 87, 89, 105, 103], [ 889, 890, 898, 897, 89, 91, 107, 105], [ 890, 891, 899, 898, 91, 93, 109, 107], [ 891, 892, 900, 899, 93, 95, 111, 109], [ 897, 898, 901, 902, 105, 107, 113, 114], [ 898, 899, 903, 901, 107, 109, 117, 113], [ 899, 900, 904, 903, 109, 111, 119, 117], [ 902, 901, 905, 906, 114, 113, 121, 122], [ 901, 903, 907, 905, 113, 117, 125, 121], [ 903, 904, 908, 907, 117, 119, 127, 125], [ 906, 905, 909, 910, 122, 121, 129, 130], [ 905, 907, 911, 909, 121, 125, 133, 129], [ 907, 908, 912, 911, 125, 127, 135, 133], [ 912, 913, 914, 911, 135, 137, 138, 133], [ 913, 915, 916, 914, 137, 141, 142, 138], [ 915, 917, 918, 916, 141, 145, 146, 142], [ 917, 919, 920, 918, 145, 149, 150, 146], [ 911, 914, 921, 909, 133, 138, 153, 129], [ 914, 916, 922, 921, 138, 142, 155, 153], [ 916, 918, 923, 922, 142, 146, 157, 155], [ 918, 920, 924, 923, 146, 150, 159, 157], [ 909, 921, 925, 910, 129, 153, 161, 130], [ 921, 922, 926, 925, 153, 155, 163, 161], [ 922, 923, 927, 926, 155, 157, 165, 163], [ 923, 924, 928, 927, 157, 159, 167, 165], [ 928, 929, 930, 927, 167, 169, 170, 165], [ 929, 931, 932, 930, 169, 173, 174, 170], [ 931, 894, 893, 932, 173, 98, 97, 174], [ 927, 930, 933, 926, 165, 170, 177, 163], [ 930, 932, 934, 933, 170, 174, 179, 177], [ 932, 893, 895, 934, 174, 97, 101, 179], [ 926, 933, 935, 925, 163, 177, 181, 161], [ 933, 934, 936, 935, 177, 179, 183, 181], [ 934, 895, 896, 936, 179, 101, 103, 183], [ 925, 935, 906, 910, 161, 181, 122, 130], [ 935, 936, 902, 906, 181, 183, 114, 122], [ 936, 896, 897, 902, 183, 103, 105, 114], [ 859, 937, 938, 860, 29, 185, 186, 30], [ 937, 939, 940, 938, 185, 189, 190, 186], [ 939, 941, 942, 940, 189, 193, 194, 190], [ 941, 943, 944, 942, 193, 197, 198, 194], [ 943, 945, 946, 944, 197, 201, 202, 198], [ 945, 947, 948, 946, 201, 205, 206, 202], [ 947, 949, 950, 948, 205, 209, 210, 206], [ 860, 938, 951, 868, 30, 186, 213, 47], [ 938, 940, 952, 951, 186, 190, 215, 213], [ 940, 942, 953, 952, 190, 194, 217, 215], [ 942, 944, 954, 953, 194, 198, 219, 217], [ 944, 946, 955, 954, 198, 202, 221, 
219], [ 946, 948, 956, 955, 202, 206, 223, 221], [ 948, 950, 957, 956, 206, 210, 225, 223], [ 868, 951, 958, 876, 47, 213, 227, 63], [ 951, 952, 959, 958, 213, 215, 229, 227], [ 952, 953, 960, 959, 215, 217, 231, 229], [ 953, 954, 961, 960, 217, 219, 233, 231], [ 954, 955, 962, 961, 219, 221, 235, 233], [ 955, 956, 963, 962, 221, 223, 237, 235], [ 956, 957, 964, 963, 223, 225, 239, 237], [ 876, 958, 965, 884, 63, 227, 241, 79], [ 958, 959, 966, 965, 227, 229, 243, 241], [ 959, 960, 967, 966, 229, 231, 245, 243], [ 960, 961, 968, 967, 231, 233, 247, 245], [ 961, 962, 969, 968, 233, 235, 249, 247], [ 962, 963, 970, 969, 235, 237, 251, 249], [ 963, 964, 971, 970, 237, 239, 253, 251], [ 884, 965, 972, 892, 79, 241, 255, 95], [ 965, 966, 973, 972, 241, 243, 257, 255], [ 966, 967, 974, 973, 243, 245, 259, 257], [ 967, 968, 975, 974, 245, 247, 261, 259], [ 968, 969, 976, 975, 247, 249, 263, 261], [ 969, 970, 977, 976, 249, 251, 265, 263], [ 970, 971, 978, 977, 251, 253, 267, 265], [ 892, 972, 979, 900, 95, 255, 269, 111], [ 972, 973, 980, 979, 255, 257, 271, 269], [ 973, 974, 981, 980, 257, 259, 273, 271], [ 974, 975, 982, 981, 259, 261, 275, 273], [ 975, 976, 983, 982, 261, 263, 277, 275], [ 976, 977, 984, 983, 263, 265, 279, 277], [ 977, 978, 985, 984, 265, 267, 281, 279], [ 981, 982, 986, 987, 273, 275, 283, 284], [ 982, 983, 988, 986, 275, 277, 287, 283], [ 983, 984, 989, 988, 277, 279, 289, 287], [ 984, 985, 990, 989, 279, 281, 291, 289], [ 987, 986, 991, 992, 284, 283, 293, 294], [ 986, 988, 993, 991, 283, 287, 297, 293], [ 988, 989, 994, 993, 287, 289, 299, 297], [ 989, 990, 995, 994, 289, 291, 301, 299], [ 992, 991, 996, 997, 294, 293, 303, 304], [ 991, 993, 998, 996, 293, 297, 307, 303], [ 993, 994, 999, 998, 297, 299, 309, 307], [ 994, 995, 1000, 999, 299, 301, 311, 309], [1000, 1001, 1002, 999, 311, 313, 314, 309], [1001, 1003, 1004, 1002, 313, 317, 318, 314], [1003, 919, 917, 1004, 317, 149, 145, 318], [ 999, 1002, 1005, 998, 309, 314, 321, 307], [1002, 1004, 1006, 1005, 314, 318, 323, 321], [1004, 917, 915, 1006, 318, 145, 141, 323], [ 998, 1005, 1007, 996, 307, 321, 325, 303], [1005, 1006, 1008, 1007, 321, 323, 327, 325], [1006, 915, 913, 1008, 323, 141, 137, 327], [ 996, 1007, 1009, 997, 303, 325, 329, 304], [1007, 1008, 1010, 1009, 325, 327, 331, 329], [1008, 913, 912, 1010, 327, 137, 135, 331], [ 912, 908, 1011, 1010, 135, 127, 333, 331], [ 908, 904, 1012, 1011, 127, 119, 335, 333], [ 904, 900, 979, 1012, 119, 111, 269, 335], [1010, 1011, 1013, 1009, 331, 333, 337, 329], [1011, 1012, 1014, 1013, 333, 335, 339, 337], [1012, 979, 980, 1014, 335, 269, 271, 339], [1009, 1013, 992, 997, 329, 337, 294, 304], [1013, 1014, 987, 992, 337, 339, 284, 294], [1014, 980, 981, 987, 339, 271, 273, 284], [ 949, 1015, 1016, 950, 209, 341, 342, 210], [1015, 1017, 1018, 1016, 341, 345, 346, 342], [1017, 1019, 1020, 1018, 345, 349, 350, 346], [1019, 1021, 1022, 1020, 349, 353, 354, 350], [1021, 1023, 1024, 1022, 353, 357, 358, 354], [1023, 1025, 1026, 1024, 357, 361, 362, 358], [ 950, 1016, 1027, 957, 210, 342, 365, 225], [1016, 1018, 1028, 1027, 342, 346, 367, 365], [1018, 1020, 1029, 1028, 346, 350, 369, 367], [1020, 1022, 1030, 1029, 350, 354, 371, 369], [1022, 1024, 1031, 1030, 354, 358, 373, 371], [1024, 1026, 1032, 1031, 358, 362, 375, 373], [ 957, 1027, 1033, 964, 225, 365, 377, 239], [1027, 1028, 1034, 1033, 365, 367, 379, 377], [1028, 1029, 1035, 1034, 367, 369, 381, 379], [1029, 1030, 1036, 1035, 369, 371, 383, 381], [1030, 1031, 1037, 1036, 371, 373, 385, 383], [1031, 1032, 1038, 1037, 
373, 375, 387, 385], [ 964, 1033, 1039, 971, 239, 377, 389, 253], [1033, 1034, 1040, 1039, 377, 379, 391, 389], [1034, 1035, 1041, 1040, 379, 381, 393, 391], [1035, 1036, 1042, 1041, 381, 383, 395, 393], [1036, 1037, 1043, 1042, 383, 385, 397, 395], [1037, 1038, 1044, 1043, 385, 387, 399, 397], [ 971, 1039, 1045, 978, 253, 389, 401, 267], [1039, 1040, 1046, 1045, 389, 391, 403, 401], [1040, 1041, 1047, 1046, 391, 393, 405, 403], [1041, 1042, 1048, 1047, 393, 395, 407, 405], [1042, 1043, 1049, 1048, 395, 397, 409, 407], [1043, 1044, 1050, 1049, 397, 399, 411, 409], [ 978, 1045, 1051, 985, 267, 401, 413, 281], [1045, 1046, 1052, 1051, 401, 403, 415, 413], [1046, 1047, 1053, 1052, 403, 405, 417, 415], [1047, 1048, 1054, 1053, 405, 407, 419, 417], [1048, 1049, 1055, 1054, 407, 409, 421, 419], [1049, 1050, 1056, 1055, 409, 411, 423, 421], [1053, 1054, 1057, 1058, 417, 419, 425, 426], [1054, 1055, 1059, 1057, 419, 421, 429, 425], [1055, 1056, 1060, 1059, 421, 423, 431, 429], [1058, 1057, 1061, 1062, 426, 425, 433, 434], [1057, 1059, 1063, 1061, 425, 429, 437, 433], [1059, 1060, 1064, 1063, 429, 431, 439, 437], [1062, 1061, 1065, 1066, 434, 433, 441, 442], [1061, 1063, 1067, 1065, 433, 437, 445, 441], [1063, 1064, 1068, 1067, 437, 439, 447, 445], [1068, 1069, 1070, 1067, 447, 449, 450, 445], [1069, 1071, 1072, 1070, 449, 453, 454, 450], [1071, 919, 1003, 1072, 453, 149, 317, 454], [1067, 1070, 1073, 1065, 445, 450, 457, 441], [1070, 1072, 1074, 1073, 450, 454, 459, 457], [1072, 1003, 1001, 1074, 454, 317, 313, 459], [1065, 1073, 1075, 1066, 441, 457, 461, 442], [1073, 1074, 1076, 1075, 457, 459, 463, 461], [1074, 1001, 1000, 1076, 459, 313, 311, 463], [1000, 995, 1077, 1076, 311, 301, 465, 463], [ 995, 990, 1078, 1077, 301, 291, 467, 465], [ 990, 985, 1051, 1078, 291, 281, 413, 467], [1076, 1077, 1079, 1075, 463, 465, 469, 461], [1077, 1078, 1080, 1079, 465, 467, 471, 469], [1078, 1051, 1052, 1080, 467, 413, 415, 471], [1075, 1079, 1062, 1066, 461, 469, 434, 442], [1079, 1080, 1058, 1062, 469, 471, 426, 434], [1080, 1052, 1053, 1058, 471, 415, 417, 426], [1025, 1081, 1082, 1026, 361, 473, 474, 362], [1081, 1083, 1084, 1082, 473, 477, 478, 474], [1083, 1085, 1086, 1084, 477, 481, 482, 478], [1085, 1087, 1088, 1086, 481, 485, 486, 482], [1087, 1089, 1090, 1088, 485, 489, 490, 486], [1089, 1091, 1092, 1090, 489, 493, 494, 490], [1026, 1082, 1093, 1032, 362, 474, 497, 375], [1082, 1084, 1094, 1093, 474, 478, 499, 497], [1084, 1086, 1095, 1094, 478, 482, 501, 499], [1086, 1088, 1096, 1095, 482, 486, 503, 501], [1088, 1090, 1097, 1096, 486, 490, 505, 503], [1090, 1092, 1098, 1097, 490, 494, 507, 505], [1032, 1093, 1099, 1038, 375, 497, 509, 387], [1093, 1094, 1100, 1099, 497, 499, 511, 509], [1094, 1095, 1101, 1100, 499, 501, 513, 511], [1095, 1096, 1102, 1101, 501, 503, 515, 513], [1096, 1097, 1103, 1102, 503, 505, 517, 515], [1097, 1098, 1104, 1103, 505, 507, 519, 517], [1038, 1099, 1105, 1044, 387, 509, 521, 399], [1099, 1100, 1106, 1105, 509, 511, 523, 521], [1100, 1101, 1107, 1106, 511, 513, 525, 523], [1101, 1102, 1108, 1107, 513, 515, 527, 525], [1102, 1103, 1109, 1108, 515, 517, 529, 527], [1103, 1104, 1110, 1109, 517, 519, 531, 529], [1044, 1105, 1111, 1050, 399, 521, 533, 411], [1105, 1106, 1112, 1111, 521, 523, 535, 533], [1106, 1107, 1113, 1112, 523, 525, 537, 535], [1107, 1108, 1114, 1113, 525, 527, 539, 537], [1108, 1109, 1115, 1114, 527, 529, 541, 539], [1109, 1110, 1116, 1115, 529, 531, 543, 541], [1050, 1111, 1117, 1056, 411, 533, 545, 423], [1111, 1112, 1118, 1117, 533, 535, 547, 
545], [1112, 1113, 1119, 1118, 535, 537, 549, 547], [1113, 1114, 1120, 1119, 537, 539, 551, 549], [1114, 1115, 1121, 1120, 539, 541, 553, 551], [1115, 1116, 1122, 1121, 541, 543, 555, 553], [1119, 1120, 1123, 1124, 549, 551, 557, 558], [1120, 1121, 1125, 1123, 551, 553, 561, 557], [1121, 1122, 1126, 1125, 553, 555, 563, 561], [1124, 1123, 1127, 1128, 558, 557, 565, 566], [1123, 1125, 1129, 1127, 557, 561, 569, 565], [1125, 1126, 1130, 1129, 561, 563, 571, 569], [1128, 1127, 1131, 1132, 566, 565, 573, 574], [1127, 1129, 1133, 1131, 565, 569, 577, 573], [1129, 1130, 1134, 1133, 569, 571, 579, 577], [1134, 1135, 1136, 1133, 579, 581, 582, 577], [1135, 1137, 1138, 1136, 581, 585, 586, 582], [1137, 919, 1071, 1138, 585, 149, 453, 586], [1133, 1136, 1139, 1131, 577, 582, 589, 573], [1136, 1138, 1140, 1139, 582, 586, 591, 589], [1138, 1071, 1069, 1140, 586, 453, 449, 591], [1131, 1139, 1141, 1132, 573, 589, 593, 574], [1139, 1140, 1142, 1141, 589, 591, 595, 593], [1140, 1069, 1068, 1142, 591, 449, 447, 595], [1068, 1064, 1143, 1142, 447, 439, 597, 595], [1064, 1060, 1144, 1143, 439, 431, 599, 597], [1060, 1056, 1117, 1144, 431, 423, 545, 599], [1142, 1143, 1145, 1141, 595, 597, 601, 593], [1143, 1144, 1146, 1145, 597, 599, 603, 601], [1144, 1117, 1118, 1146, 599, 545, 547, 603], [1141, 1145, 1128, 1132, 593, 601, 566, 574], [1145, 1146, 1124, 1128, 601, 603, 558, 566], [1146, 1118, 1119, 1124, 603, 547, 549, 558], [1091, 1147, 1148, 1092, 493, 605, 606, 494], [1147, 1149, 1150, 1148, 605, 609, 610, 606], [1149, 1151, 1152, 1150, 609, 613, 614, 610], [1151, 1153, 1154, 1152, 613, 617, 618, 614], [1153, 1155, 1156, 1154, 617, 621, 622, 618], [1155, 1157, 1158, 1156, 621, 625, 626, 622], [1092, 1148, 1159, 1098, 494, 606, 629, 507], [1148, 1150, 1160, 1159, 606, 610, 631, 629], [1150, 1152, 1161, 1160, 610, 614, 633, 631], [1152, 1154, 1162, 1161, 614, 618, 635, 633], [1154, 1156, 1163, 1162, 618, 622, 637, 635], [1156, 1158, 1164, 1163, 622, 626, 639, 637], [1098, 1159, 1165, 1104, 507, 629, 641, 519], [1159, 1160, 1166, 1165, 629, 631, 643, 641], [1160, 1161, 1167, 1166, 631, 633, 645, 643], [1161, 1162, 1168, 1167, 633, 635, 647, 645], [1162, 1163, 1169, 1168, 635, 637, 649, 647], [1163, 1164, 1170, 1169, 637, 639, 651, 649], [1104, 1165, 1171, 1110, 519, 641, 653, 531], [1165, 1166, 1172, 1171, 641, 643, 655, 653], [1166, 1167, 1173, 1172, 643, 645, 657, 655], [1167, 1168, 1174, 1173, 645, 647, 659, 657], [1168, 1169, 1175, 1174, 647, 649, 661, 659], [1169, 1170, 1176, 1175, 649, 651, 663, 661], [1110, 1171, 1177, 1116, 531, 653, 665, 543], [1171, 1172, 1178, 1177, 653, 655, 667, 665], [1172, 1173, 1179, 1178, 655, 657, 669, 667], [1173, 1174, 1180, 1179, 657, 659, 671, 669], [1174, 1175, 1181, 1180, 659, 661, 673, 671], [1175, 1176, 1182, 1181, 661, 663, 675, 673], [1116, 1177, 1183, 1122, 543, 665, 677, 555], [1177, 1178, 1184, 1183, 665, 667, 679, 677], [1178, 1179, 1185, 1184, 667, 669, 681, 679], [1179, 1180, 1186, 1185, 669, 671, 683, 681], [1180, 1181, 1187, 1186, 671, 673, 685, 683], [1181, 1182, 1188, 1187, 673, 675, 687, 685], [1185, 1186, 1189, 1190, 681, 683, 689, 690], [1186, 1187, 1191, 1189, 683, 685, 693, 689], [1187, 1188, 1192, 1191, 685, 687, 695, 693], [1190, 1189, 1193, 1194, 690, 689, 697, 698], [1189, 1191, 1195, 1193, 689, 693, 701, 697], [1191, 1192, 1196, 1195, 693, 695, 703, 701], [1194, 1193, 1197, 1198, 698, 697, 705, 706], [1193, 1195, 1199, 1197, 697, 701, 709, 705], [1195, 1196, 1200, 1199, 701, 703, 711, 709], [1200, 1201, 1202, 1199, 711, 713, 714, 709], [1201, 
1203, 1204, 1202, 713, 717, 718, 714], [1203, 919, 1137, 1204, 717, 149, 585, 718], [1199, 1202, 1205, 1197, 709, 714, 721, 705], [1202, 1204, 1206, 1205, 714, 718, 723, 721], [1204, 1137, 1135, 1206, 718, 585, 581, 723], [1197, 1205, 1207, 1198, 705, 721, 725, 706], [1205, 1206, 1208, 1207, 721, 723, 727, 725], [1206, 1135, 1134, 1208, 723, 581, 579, 727], [1134, 1130, 1209, 1208, 579, 571, 729, 727], [1130, 1126, 1210, 1209, 571, 563, 731, 729], [1126, 1122, 1183, 1210, 563, 555, 677, 731], [1208, 1209, 1211, 1207, 727, 729, 733, 725], [1209, 1210, 1212, 1211, 729, 731, 735, 733], [1210, 1183, 1184, 1212, 731, 677, 679, 735], [1207, 1211, 1194, 1198, 725, 733, 698, 706], [1211, 1212, 1190, 1194, 733, 735, 690, 698], [1212, 1184, 1185, 1190, 735, 679, 681, 690], [1157, 1213, 1214, 1158, 625, 737, 738, 626], [1213, 1215, 1216, 1214, 737, 741, 742, 738], [1215, 1217, 1218, 1216, 741, 745, 746, 742], [1217, 1219, 1220, 1218, 745, 749, 750, 746], [1219, 1221, 1222, 1220, 749, 753, 754, 750], [1221, 845, 848, 1222, 753, 1, 4, 754], [1158, 1214, 1223, 1164, 626, 738, 757, 639], [1214, 1216, 1224, 1223, 738, 742, 759, 757], [1216, 1218, 1225, 1224, 742, 746, 761, 759], [1218, 1220, 1226, 1225, 746, 750, 763, 761], [1220, 1222, 1227, 1226, 750, 754, 765, 763], [1222, 848, 862, 1227, 754, 4, 34, 765], [1164, 1223, 1228, 1170, 639, 757, 767, 651], [1223, 1224, 1229, 1228, 757, 759, 769, 767], [1224, 1225, 1230, 1229, 759, 761, 771, 769], [1225, 1226, 1231, 1230, 761, 763, 773, 771], [1226, 1227, 1232, 1231, 763, 765, 775, 773], [1227, 862, 870, 1232, 765, 34, 50, 775], [1170, 1228, 1233, 1176, 651, 767, 777, 663], [1228, 1229, 1234, 1233, 767, 769, 779, 777], [1229, 1230, 1235, 1234, 769, 771, 781, 779], [1230, 1231, 1236, 1235, 771, 773, 783, 781], [1231, 1232, 1237, 1236, 773, 775, 785, 783], [1232, 870, 878, 1237, 775, 50, 66, 785], [1176, 1233, 1238, 1182, 663, 777, 787, 675], [1233, 1234, 1239, 1238, 777, 779, 789, 787], [1234, 1235, 1240, 1239, 779, 781, 791, 789], [1235, 1236, 1241, 1240, 781, 783, 793, 791], [1236, 1237, 1242, 1241, 783, 785, 795, 793], [1237, 878, 886, 1242, 785, 66, 82, 795], [1182, 1238, 1243, 1188, 675, 787, 797, 687], [1238, 1239, 1244, 1243, 787, 789, 799, 797], [1239, 1240, 1245, 1244, 789, 791, 801, 799], [1240, 1241, 1246, 1245, 791, 793, 803, 801], [1241, 1242, 1247, 1246, 793, 795, 805, 803], [1242, 886, 894, 1247, 795, 82, 98, 805], [1245, 1246, 1248, 1249, 801, 803, 807, 808], [1246, 1247, 1250, 1248, 803, 805, 811, 807], [1247, 894, 931, 1250, 805, 98, 173, 811], [1249, 1248, 1251, 1252, 808, 807, 813, 814], [1248, 1250, 1253, 1251, 807, 811, 817, 813], [1250, 931, 929, 1253, 811, 173, 169, 817], [1252, 1251, 1254, 1255, 814, 813, 819, 820], [1251, 1253, 1256, 1254, 813, 817, 823, 819], [1253, 929, 928, 1256, 817, 169, 167, 823], [ 928, 924, 1257, 1256, 167, 159, 825, 823], [ 924, 920, 1258, 1257, 159, 150, 827, 825], [ 920, 919, 1203, 1258, 150, 149, 717, 827], [1256, 1257, 1259, 1254, 823, 825, 829, 819], [1257, 1258, 1260, 1259, 825, 827, 831, 829], [1258, 1203, 1201, 1260, 827, 717, 713, 831], [1254, 1259, 1261, 1255, 819, 829, 833, 820], [1259, 1260, 1262, 1261, 829, 831, 835, 833], [1260, 1201, 1200, 1262, 831, 713, 711, 835], [1200, 1196, 1263, 1262, 711, 703, 837, 835], [1196, 1192, 1264, 1263, 703, 695, 839, 837], [1192, 1188, 1243, 1264, 695, 687, 797, 839], [1262, 1263, 1265, 1261, 835, 837, 841, 833], [1263, 1264, 1266, 1265, 837, 839, 843, 841], [1264, 1243, 1244, 1266, 839, 797, 799, 843], [1261, 1265, 1252, 1255, 833, 841, 814, 820], [1265, 
1266, 1249, 1252, 841, 843, 808, 814], [1266, 1244, 1245, 1249, 843, 799, 801, 808], [1267, 1268, 1269, 1270, 845, 846, 847, 848], [1268, 1271, 1272, 1269, 846, 849, 850, 847], [1271, 1273, 1274, 1272, 849, 851, 852, 850], [1273, 1275, 1276, 1274, 851, 853, 854, 852], [1275, 1277, 1278, 1276, 853, 855, 856, 854], [1277, 1279, 1280, 1278, 855, 857, 858, 856], [1279, 1281, 1282, 1280, 857, 859, 860, 858], [1270, 1269, 1283, 1284, 848, 847, 861, 862], [1269, 1272, 1285, 1283, 847, 850, 863, 861], [1272, 1274, 1286, 1285, 850, 852, 864, 863], [1274, 1276, 1287, 1286, 852, 854, 865, 864], [1276, 1278, 1288, 1287, 854, 856, 866, 865], [1278, 1280, 1289, 1288, 856, 858, 867, 866], [1280, 1282, 1290, 1289, 858, 860, 868, 867], [1284, 1283, 1291, 1292, 862, 861, 869, 870], [1283, 1285, 1293, 1291, 861, 863, 871, 869], [1285, 1286, 1294, 1293, 863, 864, 872, 871], [1286, 1287, 1295, 1294, 864, 865, 873, 872], [1287, 1288, 1296, 1295, 865, 866, 874, 873], [1288, 1289, 1297, 1296, 866, 867, 875, 874], [1289, 1290, 1298, 1297, 867, 868, 876, 875], [1292, 1291, 1299, 1300, 870, 869, 877, 878], [1291, 1293, 1301, 1299, 869, 871, 879, 877], [1293, 1294, 1302, 1301, 871, 872, 880, 879], [1294, 1295, 1303, 1302, 872, 873, 881, 880], [1295, 1296, 1304, 1303, 873, 874, 882, 881], [1296, 1297, 1305, 1304, 874, 875, 883, 882], [1297, 1298, 1306, 1305, 875, 876, 884, 883], [1300, 1299, 1307, 1308, 878, 877, 885, 886], [1299, 1301, 1309, 1307, 877, 879, 887, 885], [1301, 1302, 1310, 1309, 879, 880, 888, 887], [1302, 1303, 1311, 1310, 880, 881, 889, 888], [1303, 1304, 1312, 1311, 881, 882, 890, 889], [1304, 1305, 1313, 1312, 882, 883, 891, 890], [1305, 1306, 1314, 1313, 883, 884, 892, 891], [1308, 1307, 1315, 1316, 886, 885, 893, 894], [1307, 1309, 1317, 1315, 885, 887, 895, 893], [1309, 1310, 1318, 1317, 887, 888, 896, 895], [1310, 1311, 1319, 1318, 888, 889, 897, 896], [1311, 1312, 1320, 1319, 889, 890, 898, 897], [1312, 1313, 1321, 1320, 890, 891, 899, 898], [1313, 1314, 1322, 1321, 891, 892, 900, 899], [1319, 1320, 1323, 1324, 897, 898, 901, 902], [1320, 1321, 1325, 1323, 898, 899, 903, 901], [1321, 1322, 1326, 1325, 899, 900, 904, 903], [1324, 1323, 1327, 1328, 902, 901, 905, 906], [1323, 1325, 1329, 1327, 901, 903, 907, 905], [1325, 1326, 1330, 1329, 903, 904, 908, 907], [1328, 1327, 1331, 1332, 906, 905, 909, 910], [1327, 1329, 1333, 1331, 905, 907, 911, 909], [1329, 1330, 1334, 1333, 907, 908, 912, 911], [1334, 1335, 1336, 1333, 912, 913, 914, 911], [1335, 1337, 1338, 1336, 913, 915, 916, 914], [1337, 1339, 1340, 1338, 915, 917, 918, 916], [1339, 1341, 1342, 1340, 917, 919, 920, 918], [1333, 1336, 1343, 1331, 911, 914, 921, 909], [1336, 1338, 1344, 1343, 914, 916, 922, 921], [1338, 1340, 1345, 1344, 916, 918, 923, 922], [1340, 1342, 1346, 1345, 918, 920, 924, 923], [1331, 1343, 1347, 1332, 909, 921, 925, 910], [1343, 1344, 1348, 1347, 921, 922, 926, 925], [1344, 1345, 1349, 1348, 922, 923, 927, 926], [1345, 1346, 1350, 1349, 923, 924, 928, 927], [1350, 1351, 1352, 1349, 928, 929, 930, 927], [1351, 1353, 1354, 1352, 929, 931, 932, 930], [1353, 1316, 1315, 1354, 931, 894, 893, 932], [1349, 1352, 1355, 1348, 927, 930, 933, 926], [1352, 1354, 1356, 1355, 930, 932, 934, 933], [1354, 1315, 1317, 1356, 932, 893, 895, 934], [1348, 1355, 1357, 1347, 926, 933, 935, 925], [1355, 1356, 1358, 1357, 933, 934, 936, 935], [1356, 1317, 1318, 1358, 934, 895, 896, 936], [1347, 1357, 1328, 1332, 925, 935, 906, 910], [1357, 1358, 1324, 1328, 935, 936, 902, 906], [1358, 1318, 1319, 1324, 936, 896, 897, 902], [1281, 1359, 1360, 
1282, 859, 937, 938, 860], [1359, 1361, 1362, 1360, 937, 939, 940, 938], [1361, 1363, 1364, 1362, 939, 941, 942, 940], [1363, 1365, 1366, 1364, 941, 943, 944, 942], [1365, 1367, 1368, 1366, 943, 945, 946, 944], [1367, 1369, 1370, 1368, 945, 947, 948, 946], [1369, 1371, 1372, 1370, 947, 949, 950, 948], [1282, 1360, 1373, 1290, 860, 938, 951, 868], [1360, 1362, 1374, 1373, 938, 940, 952, 951], [1362, 1364, 1375, 1374, 940, 942, 953, 952], [1364, 1366, 1376, 1375, 942, 944, 954, 953], [1366, 1368, 1377, 1376, 944, 946, 955, 954], [1368, 1370, 1378, 1377, 946, 948, 956, 955], [1370, 1372, 1379, 1378, 948, 950, 957, 956], [1290, 1373, 1380, 1298, 868, 951, 958, 876], [1373, 1374, 1381, 1380, 951, 952, 959, 958], [1374, 1375, 1382, 1381, 952, 953, 960, 959], [1375, 1376, 1383, 1382, 953, 954, 961, 960], [1376, 1377, 1384, 1383, 954, 955, 962, 961], [1377, 1378, 1385, 1384, 955, 956, 963, 962], [1378, 1379, 1386, 1385, 956, 957, 964, 963], [1298, 1380, 1387, 1306, 876, 958, 965, 884], [1380, 1381, 1388, 1387, 958, 959, 966, 965], [1381, 1382, 1389, 1388, 959, 960, 967, 966], [1382, 1383, 1390, 1389, 960, 961, 968, 967], [1383, 1384, 1391, 1390, 961, 962, 969, 968], [1384, 1385, 1392, 1391, 962, 963, 970, 969], [1385, 1386, 1393, 1392, 963, 964, 971, 970], [1306, 1387, 1394, 1314, 884, 965, 972, 892], [1387, 1388, 1395, 1394, 965, 966, 973, 972], [1388, 1389, 1396, 1395, 966, 967, 974, 973], [1389, 1390, 1397, 1396, 967, 968, 975, 974], [1390, 1391, 1398, 1397, 968, 969, 976, 975], [1391, 1392, 1399, 1398, 969, 970, 977, 976], [1392, 1393, 1400, 1399, 970, 971, 978, 977], [1314, 1394, 1401, 1322, 892, 972, 979, 900], [1394, 1395, 1402, 1401, 972, 973, 980, 979], [1395, 1396, 1403, 1402, 973, 974, 981, 980], [1396, 1397, 1404, 1403, 974, 975, 982, 981], [1397, 1398, 1405, 1404, 975, 976, 983, 982], [1398, 1399, 1406, 1405, 976, 977, 984, 983], [1399, 1400, 1407, 1406, 977, 978, 985, 984], [1403, 1404, 1408, 1409, 981, 982, 986, 987], [1404, 1405, 1410, 1408, 982, 983, 988, 986], [1405, 1406, 1411, 1410, 983, 984, 989, 988], [1406, 1407, 1412, 1411, 984, 985, 990, 989], [1409, 1408, 1413, 1414, 987, 986, 991, 992], [1408, 1410, 1415, 1413, 986, 988, 993, 991], [1410, 1411, 1416, 1415, 988, 989, 994, 993], [1411, 1412, 1417, 1416, 989, 990, 995, 994], [1414, 1413, 1418, 1419, 992, 991, 996, 997], [1413, 1415, 1420, 1418, 991, 993, 998, 996], [1415, 1416, 1421, 1420, 993, 994, 999, 998], [1416, 1417, 1422, 1421, 994, 995, 1000, 999], [1422, 1423, 1424, 1421, 1000, 1001, 1002, 999], [1423, 1425, 1426, 1424, 1001, 1003, 1004, 1002], [1425, 1341, 1339, 1426, 1003, 919, 917, 1004], [1421, 1424, 1427, 1420, 999, 1002, 1005, 998], [1424, 1426, 1428, 1427, 1002, 1004, 1006, 1005], [1426, 1339, 1337, 1428, 1004, 917, 915, 1006], [1420, 1427, 1429, 1418, 998, 1005, 1007, 996], [1427, 1428, 1430, 1429, 1005, 1006, 1008, 1007], [1428, 1337, 1335, 1430, 1006, 915, 913, 1008], [1418, 1429, 1431, 1419, 996, 1007, 1009, 997], [1429, 1430, 1432, 1431, 1007, 1008, 1010, 1009], [1430, 1335, 1334, 1432, 1008, 913, 912, 1010], [1334, 1330, 1433, 1432, 912, 908, 1011, 1010], [1330, 1326, 1434, 1433, 908, 904, 1012, 1011], [1326, 1322, 1401, 1434, 904, 900, 979, 1012], [1432, 1433, 1435, 1431, 1010, 1011, 1013, 1009], [1433, 1434, 1436, 1435, 1011, 1012, 1014, 1013], [1434, 1401, 1402, 1436, 1012, 979, 980, 1014], [1431, 1435, 1414, 1419, 1009, 1013, 992, 997], [1435, 1436, 1409, 1414, 1013, 1014, 987, 992], [1436, 1402, 1403, 1409, 1014, 980, 981, 987], [1371, 1437, 1438, 1372, 949, 1015, 1016, 950], [1437, 1439, 1440, 
1438, 1015, 1017, 1018, 1016], [1439, 1441, 1442, 1440, 1017, 1019, 1020, 1018], [1441, 1443, 1444, 1442, 1019, 1021, 1022, 1020], [1443, 1445, 1446, 1444, 1021, 1023, 1024, 1022], [1445, 1447, 1448, 1446, 1023, 1025, 1026, 1024], [1372, 1438, 1449, 1379, 950, 1016, 1027, 957], [1438, 1440, 1450, 1449, 1016, 1018, 1028, 1027], [1440, 1442, 1451, 1450, 1018, 1020, 1029, 1028], [1442, 1444, 1452, 1451, 1020, 1022, 1030, 1029], [1444, 1446, 1453, 1452, 1022, 1024, 1031, 1030], [1446, 1448, 1454, 1453, 1024, 1026, 1032, 1031], [1379, 1449, 1455, 1386, 957, 1027, 1033, 964], [1449, 1450, 1456, 1455, 1027, 1028, 1034, 1033], [1450, 1451, 1457, 1456, 1028, 1029, 1035, 1034], [1451, 1452, 1458, 1457, 1029, 1030, 1036, 1035], [1452, 1453, 1459, 1458, 1030, 1031, 1037, 1036], [1453, 1454, 1460, 1459, 1031, 1032, 1038, 1037], [1386, 1455, 1461, 1393, 964, 1033, 1039, 971], [1455, 1456, 1462, 1461, 1033, 1034, 1040, 1039], [1456, 1457, 1463, 1462, 1034, 1035, 1041, 1040], [1457, 1458, 1464, 1463, 1035, 1036, 1042, 1041], [1458, 1459, 1465, 1464, 1036, 1037, 1043, 1042], [1459, 1460, 1466, 1465, 1037, 1038, 1044, 1043], [1393, 1461, 1467, 1400, 971, 1039, 1045, 978], [1461, 1462, 1468, 1467, 1039, 1040, 1046, 1045], [1462, 1463, 1469, 1468, 1040, 1041, 1047, 1046], [1463, 1464, 1470, 1469, 1041, 1042, 1048, 1047], [1464, 1465, 1471, 1470, 1042, 1043, 1049, 1048], [1465, 1466, 1472, 1471, 1043, 1044, 1050, 1049], [1400, 1467, 1473, 1407, 978, 1045, 1051, 985], [1467, 1468, 1474, 1473, 1045, 1046, 1052, 1051], [1468, 1469, 1475, 1474, 1046, 1047, 1053, 1052], [1469, 1470, 1476, 1475, 1047, 1048, 1054, 1053], [1470, 1471, 1477, 1476, 1048, 1049, 1055, 1054], [1471, 1472, 1478, 1477, 1049, 1050, 1056, 1055], [1475, 1476, 1479, 1480, 1053, 1054, 1057, 1058], [1476, 1477, 1481, 1479, 1054, 1055, 1059, 1057], [1477, 1478, 1482, 1481, 1055, 1056, 1060, 1059], [1480, 1479, 1483, 1484, 1058, 1057, 1061, 1062], [1479, 1481, 1485, 1483, 1057, 1059, 1063, 1061], [1481, 1482, 1486, 1485, 1059, 1060, 1064, 1063], [1484, 1483, 1487, 1488, 1062, 1061, 1065, 1066], [1483, 1485, 1489, 1487, 1061, 1063, 1067, 1065], [1485, 1486, 1490, 1489, 1063, 1064, 1068, 1067], [1490, 1491, 1492, 1489, 1068, 1069, 1070, 1067], [1491, 1493, 1494, 1492, 1069, 1071, 1072, 1070], [1493, 1341, 1425, 1494, 1071, 919, 1003, 1072], [1489, 1492, 1495, 1487, 1067, 1070, 1073, 1065], [1492, 1494, 1496, 1495, 1070, 1072, 1074, 1073], [1494, 1425, 1423, 1496, 1072, 1003, 1001, 1074], [1487, 1495, 1497, 1488, 1065, 1073, 1075, 1066], [1495, 1496, 1498, 1497, 1073, 1074, 1076, 1075], [1496, 1423, 1422, 1498, 1074, 1001, 1000, 1076], [1422, 1417, 1499, 1498, 1000, 995, 1077, 1076], [1417, 1412, 1500, 1499, 995, 990, 1078, 1077], [1412, 1407, 1473, 1500, 990, 985, 1051, 1078], [1498, 1499, 1501, 1497, 1076, 1077, 1079, 1075], [1499, 1500, 1502, 1501, 1077, 1078, 1080, 1079], [1500, 1473, 1474, 1502, 1078, 1051, 1052, 1080], [1497, 1501, 1484, 1488, 1075, 1079, 1062, 1066], [1501, 1502, 1480, 1484, 1079, 1080, 1058, 1062], [1502, 1474, 1475, 1480, 1080, 1052, 1053, 1058], [1447, 1503, 1504, 1448, 1025, 1081, 1082, 1026], [1503, 1505, 1506, 1504, 1081, 1083, 1084, 1082], [1505, 1507, 1508, 1506, 1083, 1085, 1086, 1084], [1507, 1509, 1510, 1508, 1085, 1087, 1088, 1086], [1509, 1511, 1512, 1510, 1087, 1089, 1090, 1088], [1511, 1513, 1514, 1512, 1089, 1091, 1092, 1090], [1448, 1504, 1515, 1454, 1026, 1082, 1093, 1032], [1504, 1506, 1516, 1515, 1082, 1084, 1094, 1093], [1506, 1508, 1517, 1516, 1084, 1086, 1095, 1094], [1508, 1510, 1518, 1517, 1086, 1088, 
1096, 1095], [1510, 1512, 1519, 1518, 1088, 1090, 1097, 1096], [1512, 1514, 1520, 1519, 1090, 1092, 1098, 1097], [1454, 1515, 1521, 1460, 1032, 1093, 1099, 1038], [1515, 1516, 1522, 1521, 1093, 1094, 1100, 1099], [1516, 1517, 1523, 1522, 1094, 1095, 1101, 1100], [1517, 1518, 1524, 1523, 1095, 1096, 1102, 1101], [1518, 1519, 1525, 1524, 1096, 1097, 1103, 1102], [1519, 1520, 1526, 1525, 1097, 1098, 1104, 1103], [1460, 1521, 1527, 1466, 1038, 1099, 1105, 1044], [1521, 1522, 1528, 1527, 1099, 1100, 1106, 1105], [1522, 1523, 1529, 1528, 1100, 1101, 1107, 1106], [1523, 1524, 1530, 1529, 1101, 1102, 1108, 1107], [1524, 1525, 1531, 1530, 1102, 1103, 1109, 1108], [1525, 1526, 1532, 1531, 1103, 1104, 1110, 1109], [1466, 1527, 1533, 1472, 1044, 1105, 1111, 1050], [1527, 1528, 1534, 1533, 1105, 1106, 1112, 1111], [1528, 1529, 1535, 1534, 1106, 1107, 1113, 1112], [1529, 1530, 1536, 1535, 1107, 1108, 1114, 1113], [1530, 1531, 1537, 1536, 1108, 1109, 1115, 1114], [1531, 1532, 1538, 1537, 1109, 1110, 1116, 1115], [1472, 1533, 1539, 1478, 1050, 1111, 1117, 1056], [1533, 1534, 1540, 1539, 1111, 1112, 1118, 1117], [1534, 1535, 1541, 1540, 1112, 1113, 1119, 1118], [1535, 1536, 1542, 1541, 1113, 1114, 1120, 1119], [1536, 1537, 1543, 1542, 1114, 1115, 1121, 1120], [1537, 1538, 1544, 1543, 1115, 1116, 1122, 1121], [1541, 1542, 1545, 1546, 1119, 1120, 1123, 1124], [1542, 1543, 1547, 1545, 1120, 1121, 1125, 1123], [1543, 1544, 1548, 1547, 1121, 1122, 1126, 1125], [1546, 1545, 1549, 1550, 1124, 1123, 1127, 1128], [1545, 1547, 1551, 1549, 1123, 1125, 1129, 1127], [1547, 1548, 1552, 1551, 1125, 1126, 1130, 1129], [1550, 1549, 1553, 1554, 1128, 1127, 1131, 1132], [1549, 1551, 1555, 1553, 1127, 1129, 1133, 1131], [1551, 1552, 1556, 1555, 1129, 1130, 1134, 1133], [1556, 1557, 1558, 1555, 1134, 1135, 1136, 1133], [1557, 1559, 1560, 1558, 1135, 1137, 1138, 1136], [1559, 1341, 1493, 1560, 1137, 919, 1071, 1138], [1555, 1558, 1561, 1553, 1133, 1136, 1139, 1131], [1558, 1560, 1562, 1561, 1136, 1138, 1140, 1139], [1560, 1493, 1491, 1562, 1138, 1071, 1069, 1140], [1553, 1561, 1563, 1554, 1131, 1139, 1141, 1132], [1561, 1562, 1564, 1563, 1139, 1140, 1142, 1141], [1562, 1491, 1490, 1564, 1140, 1069, 1068, 1142], [1490, 1486, 1565, 1564, 1068, 1064, 1143, 1142], [1486, 1482, 1566, 1565, 1064, 1060, 1144, 1143], [1482, 1478, 1539, 1566, 1060, 1056, 1117, 1144], [1564, 1565, 1567, 1563, 1142, 1143, 1145, 1141], [1565, 1566, 1568, 1567, 1143, 1144, 1146, 1145], [1566, 1539, 1540, 1568, 1144, 1117, 1118, 1146], [1563, 1567, 1550, 1554, 1141, 1145, 1128, 1132], [1567, 1568, 1546, 1550, 1145, 1146, 1124, 1128], [1568, 1540, 1541, 1546, 1146, 1118, 1119, 1124], [1513, 1569, 1570, 1514, 1091, 1147, 1148, 1092], [1569, 1571, 1572, 1570, 1147, 1149, 1150, 1148], [1571, 1573, 1574, 1572, 1149, 1151, 1152, 1150], [1573, 1575, 1576, 1574, 1151, 1153, 1154, 1152], [1575, 1577, 1578, 1576, 1153, 1155, 1156, 1154], [1577, 1579, 1580, 1578, 1155, 1157, 1158, 1156], [1514, 1570, 1581, 1520, 1092, 1148, 1159, 1098], [1570, 1572, 1582, 1581, 1148, 1150, 1160, 1159], [1572, 1574, 1583, 1582, 1150, 1152, 1161, 1160], [1574, 1576, 1584, 1583, 1152, 1154, 1162, 1161], [1576, 1578, 1585, 1584, 1154, 1156, 1163, 1162], [1578, 1580, 1586, 1585, 1156, 1158, 1164, 1163], [1520, 1581, 1587, 1526, 1098, 1159, 1165, 1104], [1581, 1582, 1588, 1587, 1159, 1160, 1166, 1165], [1582, 1583, 1589, 1588, 1160, 1161, 1167, 1166], [1583, 1584, 1590, 1589, 1161, 1162, 1168, 1167], [1584, 1585, 1591, 1590, 1162, 1163, 1169, 1168], [1585, 1586, 1592, 1591, 1163, 1164, 1170, 
1169], [1526, 1587, 1593, 1532, 1104, 1165, 1171, 1110], [1587, 1588, 1594, 1593, 1165, 1166, 1172, 1171], [1588, 1589, 1595, 1594, 1166, 1167, 1173, 1172], [1589, 1590, 1596, 1595, 1167, 1168, 1174, 1173], [1590, 1591, 1597, 1596, 1168, 1169, 1175, 1174], [1591, 1592, 1598, 1597, 1169, 1170, 1176, 1175], [1532, 1593, 1599, 1538, 1110, 1171, 1177, 1116], [1593, 1594, 1600, 1599, 1171, 1172, 1178, 1177], [1594, 1595, 1601, 1600, 1172, 1173, 1179, 1178], [1595, 1596, 1602, 1601, 1173, 1174, 1180, 1179], [1596, 1597, 1603, 1602, 1174, 1175, 1181, 1180], [1597, 1598, 1604, 1603, 1175, 1176, 1182, 1181], [1538, 1599, 1605, 1544, 1116, 1177, 1183, 1122], [1599, 1600, 1606, 1605, 1177, 1178, 1184, 1183], [1600, 1601, 1607, 1606, 1178, 1179, 1185, 1184], [1601, 1602, 1608, 1607, 1179, 1180, 1186, 1185], [1602, 1603, 1609, 1608, 1180, 1181, 1187, 1186], [1603, 1604, 1610, 1609, 1181, 1182, 1188, 1187], [1607, 1608, 1611, 1612, 1185, 1186, 1189, 1190], [1608, 1609, 1613, 1611, 1186, 1187, 1191, 1189], [1609, 1610, 1614, 1613, 1187, 1188, 1192, 1191], [1612, 1611, 1615, 1616, 1190, 1189, 1193, 1194], [1611, 1613, 1617, 1615, 1189, 1191, 1195, 1193], [1613, 1614, 1618, 1617, 1191, 1192, 1196, 1195], [1616, 1615, 1619, 1620, 1194, 1193, 1197, 1198], [1615, 1617, 1621, 1619, 1193, 1195, 1199, 1197], [1617, 1618, 1622, 1621, 1195, 1196, 1200, 1199], [1622, 1623, 1624, 1621, 1200, 1201, 1202, 1199], [1623, 1625, 1626, 1624, 1201, 1203, 1204, 1202], [1625, 1341, 1559, 1626, 1203, 919, 1137, 1204], [1621, 1624, 1627, 1619, 1199, 1202, 1205, 1197], [1624, 1626, 1628, 1627, 1202, 1204, 1206, 1205], [1626, 1559, 1557, 1628, 1204, 1137, 1135, 1206], [1619, 1627, 1629, 1620, 1197, 1205, 1207, 1198], [1627, 1628, 1630, 1629, 1205, 1206, 1208, 1207], [1628, 1557, 1556, 1630, 1206, 1135, 1134, 1208], [1556, 1552, 1631, 1630, 1134, 1130, 1209, 1208], [1552, 1548, 1632, 1631, 1130, 1126, 1210, 1209], [1548, 1544, 1605, 1632, 1126, 1122, 1183, 1210], [1630, 1631, 1633, 1629, 1208, 1209, 1211, 1207], [1631, 1632, 1634, 1633, 1209, 1210, 1212, 1211], [1632, 1605, 1606, 1634, 1210, 1183, 1184, 1212], [1629, 1633, 1616, 1620, 1207, 1211, 1194, 1198], [1633, 1634, 1612, 1616, 1211, 1212, 1190, 1194], [1634, 1606, 1607, 1612, 1212, 1184, 1185, 1190], [1579, 1635, 1636, 1580, 1157, 1213, 1214, 1158], [1635, 1637, 1638, 1636, 1213, 1215, 1216, 1214], [1637, 1639, 1640, 1638, 1215, 1217, 1218, 1216], [1639, 1641, 1642, 1640, 1217, 1219, 1220, 1218], [1641, 1643, 1644, 1642, 1219, 1221, 1222, 1220], [1643, 1267, 1270, 1644, 1221, 845, 848, 1222], [1580, 1636, 1645, 1586, 1158, 1214, 1223, 1164], [1636, 1638, 1646, 1645, 1214, 1216, 1224, 1223], [1638, 1640, 1647, 1646, 1216, 1218, 1225, 1224], [1640, 1642, 1648, 1647, 1218, 1220, 1226, 1225], [1642, 1644, 1649, 1648, 1220, 1222, 1227, 1226], [1644, 1270, 1284, 1649, 1222, 848, 862, 1227], [1586, 1645, 1650, 1592, 1164, 1223, 1228, 1170], [1645, 1646, 1651, 1650, 1223, 1224, 1229, 1228], [1646, 1647, 1652, 1651, 1224, 1225, 1230, 1229], [1647, 1648, 1653, 1652, 1225, 1226, 1231, 1230], [1648, 1649, 1654, 1653, 1226, 1227, 1232, 1231], [1649, 1284, 1292, 1654, 1227, 862, 870, 1232], [1592, 1650, 1655, 1598, 1170, 1228, 1233, 1176], [1650, 1651, 1656, 1655, 1228, 1229, 1234, 1233], [1651, 1652, 1657, 1656, 1229, 1230, 1235, 1234], [1652, 1653, 1658, 1657, 1230, 1231, 1236, 1235], [1653, 1654, 1659, 1658, 1231, 1232, 1237, 1236], [1654, 1292, 1300, 1659, 1232, 870, 878, 1237], [1598, 1655, 1660, 1604, 1176, 1233, 1238, 1182], [1655, 1656, 1661, 1660, 1233, 1234, 1239, 1238], [1656, 
1657, 1662, 1661, 1234, 1235, 1240, 1239], [1657, 1658, 1663, 1662, 1235, 1236, 1241, 1240], [1658, 1659, 1664, 1663, 1236, 1237, 1242, 1241], [1659, 1300, 1308, 1664, 1237, 878, 886, 1242], [1604, 1660, 1665, 1610, 1182, 1238, 1243, 1188], [1660, 1661, 1666, 1665, 1238, 1239, 1244, 1243], [1661, 1662, 1667, 1666, 1239, 1240, 1245, 1244], [1662, 1663, 1668, 1667, 1240, 1241, 1246, 1245], [1663, 1664, 1669, 1668, 1241, 1242, 1247, 1246], [1664, 1308, 1316, 1669, 1242, 886, 894, 1247], [1667, 1668, 1670, 1671, 1245, 1246, 1248, 1249], [1668, 1669, 1672, 1670, 1246, 1247, 1250, 1248], [1669, 1316, 1353, 1672, 1247, 894, 931, 1250], [1671, 1670, 1673, 1674, 1249, 1248, 1251, 1252], [1670, 1672, 1675, 1673, 1248, 1250, 1253, 1251], [1672, 1353, 1351, 1675, 1250, 931, 929, 1253], [1674, 1673, 1676, 1677, 1252, 1251, 1254, 1255], [1673, 1675, 1678, 1676, 1251, 1253, 1256, 1254], [1675, 1351, 1350, 1678, 1253, 929, 928, 1256], [1350, 1346, 1679, 1678, 928, 924, 1257, 1256], [1346, 1342, 1680, 1679, 924, 920, 1258, 1257], [1342, 1341, 1625, 1680, 920, 919, 1203, 1258], [1678, 1679, 1681, 1676, 1256, 1257, 1259, 1254], [1679, 1680, 1682, 1681, 1257, 1258, 1260, 1259], [1680, 1625, 1623, 1682, 1258, 1203, 1201, 1260], [1676, 1681, 1683, 1677, 1254, 1259, 1261, 1255], [1681, 1682, 1684, 1683, 1259, 1260, 1262, 1261], [1682, 1623, 1622, 1684, 1260, 1201, 1200, 1262], [1622, 1618, 1685, 1684, 1200, 1196, 1263, 1262], [1618, 1614, 1686, 1685, 1196, 1192, 1264, 1263], [1614, 1610, 1665, 1686, 1192, 1188, 1243, 1264], [1684, 1685, 1687, 1683, 1262, 1263, 1265, 1261], [1685, 1686, 1688, 1687, 1263, 1264, 1266, 1265], [1686, 1665, 1666, 1688, 1264, 1243, 1244, 1266], [1683, 1687, 1674, 1677, 1261, 1265, 1252, 1255], [1687, 1688, 1671, 1674, 1265, 1266, 1249, 1252], [1688, 1666, 1667, 1671, 1266, 1244, 1245, 1249], [1689, 1690, 1691, 1692, 1267, 1268, 1269, 1270], [1690, 1693, 1694, 1691, 1268, 1271, 1272, 1269], [1693, 1695, 1696, 1694, 1271, 1273, 1274, 1272], [1695, 1697, 1698, 1696, 1273, 1275, 1276, 1274], [1697, 1699, 1700, 1698, 1275, 1277, 1278, 1276], [1699, 1701, 1702, 1700, 1277, 1279, 1280, 1278], [1701, 1703, 1704, 1702, 1279, 1281, 1282, 1280], [1692, 1691, 1705, 1706, 1270, 1269, 1283, 1284], [1691, 1694, 1707, 1705, 1269, 1272, 1285, 1283], [1694, 1696, 1708, 1707, 1272, 1274, 1286, 1285], [1696, 1698, 1709, 1708, 1274, 1276, 1287, 1286], [1698, 1700, 1710, 1709, 1276, 1278, 1288, 1287], [1700, 1702, 1711, 1710, 1278, 1280, 1289, 1288], [1702, 1704, 1712, 1711, 1280, 1282, 1290, 1289], [1706, 1705, 1713, 1714, 1284, 1283, 1291, 1292], [1705, 1707, 1715, 1713, 1283, 1285, 1293, 1291], [1707, 1708, 1716, 1715, 1285, 1286, 1294, 1293], [1708, 1709, 1717, 1716, 1286, 1287, 1295, 1294], [1709, 1710, 1718, 1717, 1287, 1288, 1296, 1295], [1710, 1711, 1719, 1718, 1288, 1289, 1297, 1296], [1711, 1712, 1720, 1719, 1289, 1290, 1298, 1297], [1714, 1713, 1721, 1722, 1292, 1291, 1299, 1300], [1713, 1715, 1723, 1721, 1291, 1293, 1301, 1299], [1715, 1716, 1724, 1723, 1293, 1294, 1302, 1301], [1716, 1717, 1725, 1724, 1294, 1295, 1303, 1302], [1717, 1718, 1726, 1725, 1295, 1296, 1304, 1303], [1718, 1719, 1727, 1726, 1296, 1297, 1305, 1304], [1719, 1720, 1728, 1727, 1297, 1298, 1306, 1305], [1722, 1721, 1729, 1730, 1300, 1299, 1307, 1308], [1721, 1723, 1731, 1729, 1299, 1301, 1309, 1307], [1723, 1724, 1732, 1731, 1301, 1302, 1310, 1309], [1724, 1725, 1733, 1732, 1302, 1303, 1311, 1310], [1725, 1726, 1734, 1733, 1303, 1304, 1312, 1311], [1726, 1727, 1735, 1734, 1304, 1305, 1313, 1312], [1727, 1728, 1736, 1735, 
1305, 1306, 1314, 1313], [1730, 1729, 1737, 1738, 1308, 1307, 1315, 1316], [1729, 1731, 1739, 1737, 1307, 1309, 1317, 1315], [1731, 1732, 1740, 1739, 1309, 1310, 1318, 1317], [1732, 1733, 1741, 1740, 1310, 1311, 1319, 1318], [1733, 1734, 1742, 1741, 1311, 1312, 1320, 1319], [1734, 1735, 1743, 1742, 1312, 1313, 1321, 1320], [1735, 1736, 1744, 1743, 1313, 1314, 1322, 1321], [1741, 1742, 1745, 1746, 1319, 1320, 1323, 1324], [1742, 1743, 1747, 1745, 1320, 1321, 1325, 1323], [1743, 1744, 1748, 1747, 1321, 1322, 1326, 1325], [1746, 1745, 1749, 1750, 1324, 1323, 1327, 1328], [1745, 1747, 1751, 1749, 1323, 1325, 1329, 1327], [1747, 1748, 1752, 1751, 1325, 1326, 1330, 1329], [1750, 1749, 1753, 1754, 1328, 1327, 1331, 1332], [1749, 1751, 1755, 1753, 1327, 1329, 1333, 1331], [1751, 1752, 1756, 1755, 1329, 1330, 1334, 1333], [1756, 1757, 1758, 1755, 1334, 1335, 1336, 1333], [1757, 1759, 1760, 1758, 1335, 1337, 1338, 1336], [1759, 1761, 1762, 1760, 1337, 1339, 1340, 1338], [1761, 1763, 1764, 1762, 1339, 1341, 1342, 1340], [1755, 1758, 1765, 1753, 1333, 1336, 1343, 1331], [1758, 1760, 1766, 1765, 1336, 1338, 1344, 1343], [1760, 1762, 1767, 1766, 1338, 1340, 1345, 1344], [1762, 1764, 1768, 1767, 1340, 1342, 1346, 1345], [1753, 1765, 1769, 1754, 1331, 1343, 1347, 1332], [1765, 1766, 1770, 1769, 1343, 1344, 1348, 1347], [1766, 1767, 1771, 1770, 1344, 1345, 1349, 1348], [1767, 1768, 1772, 1771, 1345, 1346, 1350, 1349], [1772, 1773, 1774, 1771, 1350, 1351, 1352, 1349], [1773, 1775, 1776, 1774, 1351, 1353, 1354, 1352], [1775, 1738, 1737, 1776, 1353, 1316, 1315, 1354], [1771, 1774, 1777, 1770, 1349, 1352, 1355, 1348], [1774, 1776, 1778, 1777, 1352, 1354, 1356, 1355], [1776, 1737, 1739, 1778, 1354, 1315, 1317, 1356], [1770, 1777, 1779, 1769, 1348, 1355, 1357, 1347], [1777, 1778, 1780, 1779, 1355, 1356, 1358, 1357], [1778, 1739, 1740, 1780, 1356, 1317, 1318, 1358], [1769, 1779, 1750, 1754, 1347, 1357, 1328, 1332], [1779, 1780, 1746, 1750, 1357, 1358, 1324, 1328], [1780, 1740, 1741, 1746, 1358, 1318, 1319, 1324], [1703, 1781, 1782, 1704, 1281, 1359, 1360, 1282], [1781, 1783, 1784, 1782, 1359, 1361, 1362, 1360], [1783, 1785, 1786, 1784, 1361, 1363, 1364, 1362], [1785, 1787, 1788, 1786, 1363, 1365, 1366, 1364], [1787, 1789, 1790, 1788, 1365, 1367, 1368, 1366], [1789, 1791, 1792, 1790, 1367, 1369, 1370, 1368], [1791, 1793, 1794, 1792, 1369, 1371, 1372, 1370], [1704, 1782, 1795, 1712, 1282, 1360, 1373, 1290], [1782, 1784, 1796, 1795, 1360, 1362, 1374, 1373], [1784, 1786, 1797, 1796, 1362, 1364, 1375, 1374], [1786, 1788, 1798, 1797, 1364, 1366, 1376, 1375], [1788, 1790, 1799, 1798, 1366, 1368, 1377, 1376], [1790, 1792, 1800, 1799, 1368, 1370, 1378, 1377], [1792, 1794, 1801, 1800, 1370, 1372, 1379, 1378], [1712, 1795, 1802, 1720, 1290, 1373, 1380, 1298], [1795, 1796, 1803, 1802, 1373, 1374, 1381, 1380], [1796, 1797, 1804, 1803, 1374, 1375, 1382, 1381], [1797, 1798, 1805, 1804, 1375, 1376, 1383, 1382], [1798, 1799, 1806, 1805, 1376, 1377, 1384, 1383], [1799, 1800, 1807, 1806, 1377, 1378, 1385, 1384], [1800, 1801, 1808, 1807, 1378, 1379, 1386, 1385], [1720, 1802, 1809, 1728, 1298, 1380, 1387, 1306], [1802, 1803, 1810, 1809, 1380, 1381, 1388, 1387], [1803, 1804, 1811, 1810, 1381, 1382, 1389, 1388], [1804, 1805, 1812, 1811, 1382, 1383, 1390, 1389], [1805, 1806, 1813, 1812, 1383, 1384, 1391, 1390], [1806, 1807, 1814, 1813, 1384, 1385, 1392, 1391], [1807, 1808, 1815, 1814, 1385, 1386, 1393, 1392], [1728, 1809, 1816, 1736, 1306, 1387, 1394, 1314], [1809, 1810, 1817, 1816, 1387, 1388, 1395, 1394], [1810, 1811, 1818, 1817, 
1388, 1389, 1396, 1395], [1811, 1812, 1819, 1818, 1389, 1390, 1397, 1396], [1812, 1813, 1820, 1819, 1390, 1391, 1398, 1397], [1813, 1814, 1821, 1820, 1391, 1392, 1399, 1398], [1814, 1815, 1822, 1821, 1392, 1393, 1400, 1399], [1736, 1816, 1823, 1744, 1314, 1394, 1401, 1322], [1816, 1817, 1824, 1823, 1394, 1395, 1402, 1401], [1817, 1818, 1825, 1824, 1395, 1396, 1403, 1402], [1818, 1819, 1826, 1825, 1396, 1397, 1404, 1403], [1819, 1820, 1827, 1826, 1397, 1398, 1405, 1404], [1820, 1821, 1828, 1827, 1398, 1399, 1406, 1405], [1821, 1822, 1829, 1828, 1399, 1400, 1407, 1406], [1825, 1826, 1830, 1831, 1403, 1404, 1408, 1409], [1826, 1827, 1832, 1830, 1404, 1405, 1410, 1408], [1827, 1828, 1833, 1832, 1405, 1406, 1411, 1410], [1828, 1829, 1834, 1833, 1406, 1407, 1412, 1411], [1831, 1830, 1835, 1836, 1409, 1408, 1413, 1414], [1830, 1832, 1837, 1835, 1408, 1410, 1415, 1413], [1832, 1833, 1838, 1837, 1410, 1411, 1416, 1415], [1833, 1834, 1839, 1838, 1411, 1412, 1417, 1416], [1836, 1835, 1840, 1841, 1414, 1413, 1418, 1419], [1835, 1837, 1842, 1840, 1413, 1415, 1420, 1418], [1837, 1838, 1843, 1842, 1415, 1416, 1421, 1420], [1838, 1839, 1844, 1843, 1416, 1417, 1422, 1421], [1844, 1845, 1846, 1843, 1422, 1423, 1424, 1421], [1845, 1847, 1848, 1846, 1423, 1425, 1426, 1424], [1847, 1763, 1761, 1848, 1425, 1341, 1339, 1426], [1843, 1846, 1849, 1842, 1421, 1424, 1427, 1420], [1846, 1848, 1850, 1849, 1424, 1426, 1428, 1427], [1848, 1761, 1759, 1850, 1426, 1339, 1337, 1428], [1842, 1849, 1851, 1840, 1420, 1427, 1429, 1418], [1849, 1850, 1852, 1851, 1427, 1428, 1430, 1429], [1850, 1759, 1757, 1852, 1428, 1337, 1335, 1430], [1840, 1851, 1853, 1841, 1418, 1429, 1431, 1419], [1851, 1852, 1854, 1853, 1429, 1430, 1432, 1431], [1852, 1757, 1756, 1854, 1430, 1335, 1334, 1432], [1756, 1752, 1855, 1854, 1334, 1330, 1433, 1432], [1752, 1748, 1856, 1855, 1330, 1326, 1434, 1433], [1748, 1744, 1823, 1856, 1326, 1322, 1401, 1434], [1854, 1855, 1857, 1853, 1432, 1433, 1435, 1431], [1855, 1856, 1858, 1857, 1433, 1434, 1436, 1435], [1856, 1823, 1824, 1858, 1434, 1401, 1402, 1436], [1853, 1857, 1836, 1841, 1431, 1435, 1414, 1419], [1857, 1858, 1831, 1836, 1435, 1436, 1409, 1414], [1858, 1824, 1825, 1831, 1436, 1402, 1403, 1409], [1793, 1859, 1860, 1794, 1371, 1437, 1438, 1372], [1859, 1861, 1862, 1860, 1437, 1439, 1440, 1438], [1861, 1863, 1864, 1862, 1439, 1441, 1442, 1440], [1863, 1865, 1866, 1864, 1441, 1443, 1444, 1442], [1865, 1867, 1868, 1866, 1443, 1445, 1446, 1444], [1867, 1869, 1870, 1868, 1445, 1447, 1448, 1446], [1794, 1860, 1871, 1801, 1372, 1438, 1449, 1379], [1860, 1862, 1872, 1871, 1438, 1440, 1450, 1449], [1862, 1864, 1873, 1872, 1440, 1442, 1451, 1450], [1864, 1866, 1874, 1873, 1442, 1444, 1452, 1451], [1866, 1868, 1875, 1874, 1444, 1446, 1453, 1452], [1868, 1870, 1876, 1875, 1446, 1448, 1454, 1453], [1801, 1871, 1877, 1808, 1379, 1449, 1455, 1386], [1871, 1872, 1878, 1877, 1449, 1450, 1456, 1455], [1872, 1873, 1879, 1878, 1450, 1451, 1457, 1456], [1873, 1874, 1880, 1879, 1451, 1452, 1458, 1457], [1874, 1875, 1881, 1880, 1452, 1453, 1459, 1458], [1875, 1876, 1882, 1881, 1453, 1454, 1460, 1459], [1808, 1877, 1883, 1815, 1386, 1455, 1461, 1393], [1877, 1878, 1884, 1883, 1455, 1456, 1462, 1461], [1878, 1879, 1885, 1884, 1456, 1457, 1463, 1462], [1879, 1880, 1886, 1885, 1457, 1458, 1464, 1463], [1880, 1881, 1887, 1886, 1458, 1459, 1465, 1464], [1881, 1882, 1888, 1887, 1459, 1460, 1466, 1465], [1815, 1883, 1889, 1822, 1393, 1461, 1467, 1400], [1883, 1884, 1890, 1889, 1461, 1462, 1468, 1467], [1884, 1885, 1891, 1890, 
1462, 1463, 1469, 1468], [1885, 1886, 1892, 1891, 1463, 1464, 1470, 1469], [1886, 1887, 1893, 1892, 1464, 1465, 1471, 1470], [1887, 1888, 1894, 1893, 1465, 1466, 1472, 1471], [1822, 1889, 1895, 1829, 1400, 1467, 1473, 1407], [1889, 1890, 1896, 1895, 1467, 1468, 1474, 1473], [1890, 1891, 1897, 1896, 1468, 1469, 1475, 1474], [1891, 1892, 1898, 1897, 1469, 1470, 1476, 1475], [1892, 1893, 1899, 1898, 1470, 1471, 1477, 1476], [1893, 1894, 1900, 1899, 1471, 1472, 1478, 1477], [1897, 1898, 1901, 1902, 1475, 1476, 1479, 1480], [1898, 1899, 1903, 1901, 1476, 1477, 1481, 1479], [1899, 1900, 1904, 1903, 1477, 1478, 1482, 1481], [1902, 1901, 1905, 1906, 1480, 1479, 1483, 1484], [1901, 1903, 1907, 1905, 1479, 1481, 1485, 1483], [1903, 1904, 1908, 1907, 1481, 1482, 1486, 1485], [1906, 1905, 1909, 1910, 1484, 1483, 1487, 1488], [1905, 1907, 1911, 1909, 1483, 1485, 1489, 1487], [1907, 1908, 1912, 1911, 1485, 1486, 1490, 1489], [1912, 1913, 1914, 1911, 1490, 1491, 1492, 1489], [1913, 1915, 1916, 1914, 1491, 1493, 1494, 1492], [1915, 1763, 1847, 1916, 1493, 1341, 1425, 1494], [1911, 1914, 1917, 1909, 1489, 1492, 1495, 1487], [1914, 1916, 1918, 1917, 1492, 1494, 1496, 1495], [1916, 1847, 1845, 1918, 1494, 1425, 1423, 1496], [1909, 1917, 1919, 1910, 1487, 1495, 1497, 1488], [1917, 1918, 1920, 1919, 1495, 1496, 1498, 1497], [1918, 1845, 1844, 1920, 1496, 1423, 1422, 1498], [1844, 1839, 1921, 1920, 1422, 1417, 1499, 1498], [1839, 1834, 1922, 1921, 1417, 1412, 1500, 1499], [1834, 1829, 1895, 1922, 1412, 1407, 1473, 1500], [1920, 1921, 1923, 1919, 1498, 1499, 1501, 1497], [1921, 1922, 1924, 1923, 1499, 1500, 1502, 1501], [1922, 1895, 1896, 1924, 1500, 1473, 1474, 1502], [1919, 1923, 1906, 1910, 1497, 1501, 1484, 1488], [1923, 1924, 1902, 1906, 1501, 1502, 1480, 1484], [1924, 1896, 1897, 1902, 1502, 1474, 1475, 1480], [1869, 1925, 1926, 1870, 1447, 1503, 1504, 1448], [1925, 1927, 1928, 1926, 1503, 1505, 1506, 1504], [1927, 1929, 1930, 1928, 1505, 1507, 1508, 1506], [1929, 1931, 1932, 1930, 1507, 1509, 1510, 1508], [1931, 1933, 1934, 1932, 1509, 1511, 1512, 1510], [1933, 1935, 1936, 1934, 1511, 1513, 1514, 1512], [1870, 1926, 1937, 1876, 1448, 1504, 1515, 1454], [1926, 1928, 1938, 1937, 1504, 1506, 1516, 1515], [1928, 1930, 1939, 1938, 1506, 1508, 1517, 1516], [1930, 1932, 1940, 1939, 1508, 1510, 1518, 1517], [1932, 1934, 1941, 1940, 1510, 1512, 1519, 1518], [1934, 1936, 1942, 1941, 1512, 1514, 1520, 1519], [1876, 1937, 1943, 1882, 1454, 1515, 1521, 1460], [1937, 1938, 1944, 1943, 1515, 1516, 1522, 1521], [1938, 1939, 1945, 1944, 1516, 1517, 1523, 1522], [1939, 1940, 1946, 1945, 1517, 1518, 1524, 1523], [1940, 1941, 1947, 1946, 1518, 1519, 1525, 1524], [1941, 1942, 1948, 1947, 1519, 1520, 1526, 1525], [1882, 1943, 1949, 1888, 1460, 1521, 1527, 1466], [1943, 1944, 1950, 1949, 1521, 1522, 1528, 1527], [1944, 1945, 1951, 1950, 1522, 1523, 1529, 1528], [1945, 1946, 1952, 1951, 1523, 1524, 1530, 1529], [1946, 1947, 1953, 1952, 1524, 1525, 1531, 1530], [1947, 1948, 1954, 1953, 1525, 1526, 1532, 1531], [1888, 1949, 1955, 1894, 1466, 1527, 1533, 1472], [1949, 1950, 1956, 1955, 1527, 1528, 1534, 1533], [1950, 1951, 1957, 1956, 1528, 1529, 1535, 1534], [1951, 1952, 1958, 1957, 1529, 1530, 1536, 1535], [1952, 1953, 1959, 1958, 1530, 1531, 1537, 1536], [1953, 1954, 1960, 1959, 1531, 1532, 1538, 1537], [1894, 1955, 1961, 1900, 1472, 1533, 1539, 1478], [1955, 1956, 1962, 1961, 1533, 1534, 1540, 1539], [1956, 1957, 1963, 1962, 1534, 1535, 1541, 1540], [1957, 1958, 1964, 1963, 1535, 1536, 1542, 1541], [1958, 1959, 1965, 1964, 
1536, 1537, 1543, 1542], [1959, 1960, 1966, 1965, 1537, 1538, 1544, 1543], [1963, 1964, 1967, 1968, 1541, 1542, 1545, 1546], [1964, 1965, 1969, 1967, 1542, 1543, 1547, 1545], [1965, 1966, 1970, 1969, 1543, 1544, 1548, 1547], [1968, 1967, 1971, 1972, 1546, 1545, 1549, 1550], [1967, 1969, 1973, 1971, 1545, 1547, 1551, 1549], [1969, 1970, 1974, 1973, 1547, 1548, 1552, 1551], [1972, 1971, 1975, 1976, 1550, 1549, 1553, 1554], [1971, 1973, 1977, 1975, 1549, 1551, 1555, 1553], [1973, 1974, 1978, 1977, 1551, 1552, 1556, 1555], [1978, 1979, 1980, 1977, 1556, 1557, 1558, 1555], [1979, 1981, 1982, 1980, 1557, 1559, 1560, 1558], [1981, 1763, 1915, 1982, 1559, 1341, 1493, 1560], [1977, 1980, 1983, 1975, 1555, 1558, 1561, 1553], [1980, 1982, 1984, 1983, 1558, 1560, 1562, 1561], [1982, 1915, 1913, 1984, 1560, 1493, 1491, 1562], [1975, 1983, 1985, 1976, 1553, 1561, 1563, 1554], [1983, 1984, 1986, 1985, 1561, 1562, 1564, 1563], [1984, 1913, 1912, 1986, 1562, 1491, 1490, 1564], [1912, 1908, 1987, 1986, 1490, 1486, 1565, 1564], [1908, 1904, 1988, 1987, 1486, 1482, 1566, 1565], [1904, 1900, 1961, 1988, 1482, 1478, 1539, 1566], [1986, 1987, 1989, 1985, 1564, 1565, 1567, 1563], [1987, 1988, 1990, 1989, 1565, 1566, 1568, 1567], [1988, 1961, 1962, 1990, 1566, 1539, 1540, 1568], [1985, 1989, 1972, 1976, 1563, 1567, 1550, 1554], [1989, 1990, 1968, 1972, 1567, 1568, 1546, 1550], [1990, 1962, 1963, 1968, 1568, 1540, 1541, 1546], [1935, 1991, 1992, 1936, 1513, 1569, 1570, 1514], [1991, 1993, 1994, 1992, 1569, 1571, 1572, 1570], [1993, 1995, 1996, 1994, 1571, 1573, 1574, 1572], [1995, 1997, 1998, 1996, 1573, 1575, 1576, 1574], [1997, 1999, 2000, 1998, 1575, 1577, 1578, 1576], [1999, 2001, 2002, 2000, 1577, 1579, 1580, 1578], [1936, 1992, 2003, 1942, 1514, 1570, 1581, 1520], [1992, 1994, 2004, 2003, 1570, 1572, 1582, 1581], [1994, 1996, 2005, 2004, 1572, 1574, 1583, 1582], [1996, 1998, 2006, 2005, 1574, 1576, 1584, 1583], [1998, 2000, 2007, 2006, 1576, 1578, 1585, 1584], [2000, 2002, 2008, 2007, 1578, 1580, 1586, 1585], [1942, 2003, 2009, 1948, 1520, 1581, 1587, 1526], [2003, 2004, 2010, 2009, 1581, 1582, 1588, 1587], [2004, 2005, 2011, 2010, 1582, 1583, 1589, 1588], [2005, 2006, 2012, 2011, 1583, 1584, 1590, 1589], [2006, 2007, 2013, 2012, 1584, 1585, 1591, 1590], [2007, 2008, 2014, 2013, 1585, 1586, 1592, 1591], [1948, 2009, 2015, 1954, 1526, 1587, 1593, 1532], [2009, 2010, 2016, 2015, 1587, 1588, 1594, 1593], [2010, 2011, 2017, 2016, 1588, 1589, 1595, 1594], [2011, 2012, 2018, 2017, 1589, 1590, 1596, 1595], [2012, 2013, 2019, 2018, 1590, 1591, 1597, 1596], [2013, 2014, 2020, 2019, 1591, 1592, 1598, 1597], [1954, 2015, 2021, 1960, 1532, 1593, 1599, 1538], [2015, 2016, 2022, 2021, 1593, 1594, 1600, 1599], [2016, 2017, 2023, 2022, 1594, 1595, 1601, 1600], [2017, 2018, 2024, 2023, 1595, 1596, 1602, 1601], [2018, 2019, 2025, 2024, 1596, 1597, 1603, 1602], [2019, 2020, 2026, 2025, 1597, 1598, 1604, 1603], [1960, 2021, 2027, 1966, 1538, 1599, 1605, 1544], [2021, 2022, 2028, 2027, 1599, 1600, 1606, 1605], [2022, 2023, 2029, 2028, 1600, 1601, 1607, 1606], [2023, 2024, 2030, 2029, 1601, 1602, 1608, 1607], [2024, 2025, 2031, 2030, 1602, 1603, 1609, 1608], [2025, 2026, 2032, 2031, 1603, 1604, 1610, 1609], [2029, 2030, 2033, 2034, 1607, 1608, 1611, 1612], [2030, 2031, 2035, 2033, 1608, 1609, 1613, 1611], [2031, 2032, 2036, 2035, 1609, 1610, 1614, 1613], [2034, 2033, 2037, 2038, 1612, 1611, 1615, 1616], [2033, 2035, 2039, 2037, 1611, 1613, 1617, 1615], [2035, 2036, 2040, 2039, 1613, 1614, 1618, 1617], [2038, 2037, 2041, 2042, 
1616, 1615, 1619, 1620], [2037, 2039, 2043, 2041, 1615, 1617, 1621, 1619], [2039, 2040, 2044, 2043, 1617, 1618, 1622, 1621], [2044, 2045, 2046, 2043, 1622, 1623, 1624, 1621], [2045, 2047, 2048, 2046, 1623, 1625, 1626, 1624], [2047, 1763, 1981, 2048, 1625, 1341, 1559, 1626], [2043, 2046, 2049, 2041, 1621, 1624, 1627, 1619], [2046, 2048, 2050, 2049, 1624, 1626, 1628, 1627], [2048, 1981, 1979, 2050, 1626, 1559, 1557, 1628], [2041, 2049, 2051, 2042, 1619, 1627, 1629, 1620], [2049, 2050, 2052, 2051, 1627, 1628, 1630, 1629], [2050, 1979, 1978, 2052, 1628, 1557, 1556, 1630], [1978, 1974, 2053, 2052, 1556, 1552, 1631, 1630], [1974, 1970, 2054, 2053, 1552, 1548, 1632, 1631], [1970, 1966, 2027, 2054, 1548, 1544, 1605, 1632], [2052, 2053, 2055, 2051, 1630, 1631, 1633, 1629], [2053, 2054, 2056, 2055, 1631, 1632, 1634, 1633], [2054, 2027, 2028, 2056, 1632, 1605, 1606, 1634], [2051, 2055, 2038, 2042, 1629, 1633, 1616, 1620], [2055, 2056, 2034, 2038, 1633, 1634, 1612, 1616], [2056, 2028, 2029, 2034, 1634, 1606, 1607, 1612], [2001, 2057, 2058, 2002, 1579, 1635, 1636, 1580], [2057, 2059, 2060, 2058, 1635, 1637, 1638, 1636], [2059, 2061, 2062, 2060, 1637, 1639, 1640, 1638], [2061, 2063, 2064, 2062, 1639, 1641, 1642, 1640], [2063, 2065, 2066, 2064, 1641, 1643, 1644, 1642], [2065, 1689, 1692, 2066, 1643, 1267, 1270, 1644], [2002, 2058, 2067, 2008, 1580, 1636, 1645, 1586], [2058, 2060, 2068, 2067, 1636, 1638, 1646, 1645], [2060, 2062, 2069, 2068, 1638, 1640, 1647, 1646], [2062, 2064, 2070, 2069, 1640, 1642, 1648, 1647], [2064, 2066, 2071, 2070, 1642, 1644, 1649, 1648], [2066, 1692, 1706, 2071, 1644, 1270, 1284, 1649], [2008, 2067, 2072, 2014, 1586, 1645, 1650, 1592], [2067, 2068, 2073, 2072, 1645, 1646, 1651, 1650], [2068, 2069, 2074, 2073, 1646, 1647, 1652, 1651], [2069, 2070, 2075, 2074, 1647, 1648, 1653, 1652], [2070, 2071, 2076, 2075, 1648, 1649, 1654, 1653], [2071, 1706, 1714, 2076, 1649, 1284, 1292, 1654], [2014, 2072, 2077, 2020, 1592, 1650, 1655, 1598], [2072, 2073, 2078, 2077, 1650, 1651, 1656, 1655], [2073, 2074, 2079, 2078, 1651, 1652, 1657, 1656], [2074, 2075, 2080, 2079, 1652, 1653, 1658, 1657], [2075, 2076, 2081, 2080, 1653, 1654, 1659, 1658], [2076, 1714, 1722, 2081, 1654, 1292, 1300, 1659], [2020, 2077, 2082, 2026, 1598, 1655, 1660, 1604], [2077, 2078, 2083, 2082, 1655, 1656, 1661, 1660], [2078, 2079, 2084, 2083, 1656, 1657, 1662, 1661], [2079, 2080, 2085, 2084, 1657, 1658, 1663, 1662], [2080, 2081, 2086, 2085, 1658, 1659, 1664, 1663], [2081, 1722, 1730, 2086, 1659, 1300, 1308, 1664], [2026, 2082, 2087, 2032, 1604, 1660, 1665, 1610], [2082, 2083, 2088, 2087, 1660, 1661, 1666, 1665], [2083, 2084, 2089, 2088, 1661, 1662, 1667, 1666], [2084, 2085, 2090, 2089, 1662, 1663, 1668, 1667], [2085, 2086, 2091, 2090, 1663, 1664, 1669, 1668], [2086, 1730, 1738, 2091, 1664, 1308, 1316, 1669], [2089, 2090, 2092, 2093, 1667, 1668, 1670, 1671], [2090, 2091, 2094, 2092, 1668, 1669, 1672, 1670], [2091, 1738, 1775, 2094, 1669, 1316, 1353, 1672], [2093, 2092, 2095, 2096, 1671, 1670, 1673, 1674], [2092, 2094, 2097, 2095, 1670, 1672, 1675, 1673], [2094, 1775, 1773, 2097, 1672, 1353, 1351, 1675], [2096, 2095, 2098, 2099, 1674, 1673, 1676, 1677], [2095, 2097, 2100, 2098, 1673, 1675, 1678, 1676], [2097, 1773, 1772, 2100, 1675, 1351, 1350, 1678], [1772, 1768, 2101, 2100, 1350, 1346, 1679, 1678], [1768, 1764, 2102, 2101, 1346, 1342, 1680, 1679], [1764, 1763, 2047, 2102, 1342, 1341, 1625, 1680], [2100, 2101, 2103, 2098, 1678, 1679, 1681, 1676], [2101, 2102, 2104, 2103, 1679, 1680, 1682, 1681], [2102, 2047, 2045, 2104, 
1680, 1625, 1623, 1682], [2098, 2103, 2105, 2099, 1676, 1681, 1683, 1677], [2103, 2104, 2106, 2105, 1681, 1682, 1684, 1683], [2104, 2045, 2044, 2106, 1682, 1623, 1622, 1684], [2044, 2040, 2107, 2106, 1622, 1618, 1685, 1684], [2040, 2036, 2108, 2107, 1618, 1614, 1686, 1685], [2036, 2032, 2087, 2108, 1614, 1610, 1665, 1686], [2106, 2107, 2109, 2105, 1684, 1685, 1687, 1683], [2107, 2108, 2110, 2109, 1685, 1686, 1688, 1687], [2108, 2087, 2088, 2110, 1686, 1665, 1666, 1688], [2105, 2109, 2096, 2099, 1683, 1687, 1674, 1677], [2109, 2110, 2093, 2096, 1687, 1688, 1671, 1674], [2110, 2088, 2089, 2093, 1688, 1666, 1667, 1671], [2111, 2112, 2113, 2114, 1689, 1690, 1691, 1692], [2112, 2115, 2116, 2113, 1690, 1693, 1694, 1691], [2115, 2117, 2118, 2116, 1693, 1695, 1696, 1694], [2117, 2119, 2120, 2118, 1695, 1697, 1698, 1696], [2119, 2121, 2122, 2120, 1697, 1699, 1700, 1698], [2121, 2123, 2124, 2122, 1699, 1701, 1702, 1700], [2123, 2125, 2126, 2124, 1701, 1703, 1704, 1702], [2114, 2113, 2127, 2128, 1692, 1691, 1705, 1706], [2113, 2116, 2129, 2127, 1691, 1694, 1707, 1705], [2116, 2118, 2130, 2129, 1694, 1696, 1708, 1707], [2118, 2120, 2131, 2130, 1696, 1698, 1709, 1708], [2120, 2122, 2132, 2131, 1698, 1700, 1710, 1709], [2122, 2124, 2133, 2132, 1700, 1702, 1711, 1710], [2124, 2126, 2134, 2133, 1702, 1704, 1712, 1711], [2128, 2127, 2135, 2136, 1706, 1705, 1713, 1714], [2127, 2129, 2137, 2135, 1705, 1707, 1715, 1713], [2129, 2130, 2138, 2137, 1707, 1708, 1716, 1715], [2130, 2131, 2139, 2138, 1708, 1709, 1717, 1716], [2131, 2132, 2140, 2139, 1709, 1710, 1718, 1717], [2132, 2133, 2141, 2140, 1710, 1711, 1719, 1718], [2133, 2134, 2142, 2141, 1711, 1712, 1720, 1719], [2136, 2135, 2143, 2144, 1714, 1713, 1721, 1722], [2135, 2137, 2145, 2143, 1713, 1715, 1723, 1721], [2137, 2138, 2146, 2145, 1715, 1716, 1724, 1723], [2138, 2139, 2147, 2146, 1716, 1717, 1725, 1724], [2139, 2140, 2148, 2147, 1717, 1718, 1726, 1725], [2140, 2141, 2149, 2148, 1718, 1719, 1727, 1726], [2141, 2142, 2150, 2149, 1719, 1720, 1728, 1727], [2144, 2143, 2151, 2152, 1722, 1721, 1729, 1730], [2143, 2145, 2153, 2151, 1721, 1723, 1731, 1729], [2145, 2146, 2154, 2153, 1723, 1724, 1732, 1731], [2146, 2147, 2155, 2154, 1724, 1725, 1733, 1732], [2147, 2148, 2156, 2155, 1725, 1726, 1734, 1733], [2148, 2149, 2157, 2156, 1726, 1727, 1735, 1734], [2149, 2150, 2158, 2157, 1727, 1728, 1736, 1735], [2152, 2151, 2159, 2160, 1730, 1729, 1737, 1738], [2151, 2153, 2161, 2159, 1729, 1731, 1739, 1737], [2153, 2154, 2162, 2161, 1731, 1732, 1740, 1739], [2154, 2155, 2163, 2162, 1732, 1733, 1741, 1740], [2155, 2156, 2164, 2163, 1733, 1734, 1742, 1741], [2156, 2157, 2165, 2164, 1734, 1735, 1743, 1742], [2157, 2158, 2166, 2165, 1735, 1736, 1744, 1743], [2163, 2164, 2167, 2168, 1741, 1742, 1745, 1746], [2164, 2165, 2169, 2167, 1742, 1743, 1747, 1745], [2165, 2166, 2170, 2169, 1743, 1744, 1748, 1747], [2168, 2167, 2171, 2172, 1746, 1745, 1749, 1750], [2167, 2169, 2173, 2171, 1745, 1747, 1751, 1749], [2169, 2170, 2174, 2173, 1747, 1748, 1752, 1751], [2172, 2171, 2175, 2176, 1750, 1749, 1753, 1754], [2171, 2173, 2177, 2175, 1749, 1751, 1755, 1753], [2173, 2174, 2178, 2177, 1751, 1752, 1756, 1755], [2178, 2179, 2180, 2177, 1756, 1757, 1758, 1755], [2179, 2181, 2182, 2180, 1757, 1759, 1760, 1758], [2181, 2183, 2184, 2182, 1759, 1761, 1762, 1760], [2183, 2185, 2186, 2184, 1761, 1763, 1764, 1762], [2177, 2180, 2187, 2175, 1755, 1758, 1765, 1753], [2180, 2182, 2188, 2187, 1758, 1760, 1766, 1765], [2182, 2184, 2189, 2188, 1760, 1762, 1767, 1766], [2184, 2186, 2190, 2189, 
1762, 1764, 1768, 1767], [2175, 2187, 2191, 2176, 1753, 1765, 1769, 1754], [2187, 2188, 2192, 2191, 1765, 1766, 1770, 1769], [2188, 2189, 2193, 2192, 1766, 1767, 1771, 1770], [2189, 2190, 2194, 2193, 1767, 1768, 1772, 1771], [2194, 2195, 2196, 2193, 1772, 1773, 1774, 1771], [2195, 2197, 2198, 2196, 1773, 1775, 1776, 1774], [2197, 2160, 2159, 2198, 1775, 1738, 1737, 1776], [2193, 2196, 2199, 2192, 1771, 1774, 1777, 1770], [2196, 2198, 2200, 2199, 1774, 1776, 1778, 1777], [2198, 2159, 2161, 2200, 1776, 1737, 1739, 1778], [2192, 2199, 2201, 2191, 1770, 1777, 1779, 1769], [2199, 2200, 2202, 2201, 1777, 1778, 1780, 1779], [2200, 2161, 2162, 2202, 1778, 1739, 1740, 1780], [2191, 2201, 2172, 2176, 1769, 1779, 1750, 1754], [2201, 2202, 2168, 2172, 1779, 1780, 1746, 1750], [2202, 2162, 2163, 2168, 1780, 1740, 1741, 1746], [2125, 2203, 2204, 2126, 1703, 1781, 1782, 1704], [2203, 2205, 2206, 2204, 1781, 1783, 1784, 1782], [2205, 2207, 2208, 2206, 1783, 1785, 1786, 1784], [2207, 2209, 2210, 2208, 1785, 1787, 1788, 1786], [2209, 2211, 2212, 2210, 1787, 1789, 1790, 1788], [2211, 2213, 2214, 2212, 1789, 1791, 1792, 1790], [2213, 2215, 2216, 2214, 1791, 1793, 1794, 1792], [2126, 2204, 2217, 2134, 1704, 1782, 1795, 1712], [2204, 2206, 2218, 2217, 1782, 1784, 1796, 1795], [2206, 2208, 2219, 2218, 1784, 1786, 1797, 1796], [2208, 2210, 2220, 2219, 1786, 1788, 1798, 1797], [2210, 2212, 2221, 2220, 1788, 1790, 1799, 1798], [2212, 2214, 2222, 2221, 1790, 1792, 1800, 1799], [2214, 2216, 2223, 2222, 1792, 1794, 1801, 1800], [2134, 2217, 2224, 2142, 1712, 1795, 1802, 1720], [2217, 2218, 2225, 2224, 1795, 1796, 1803, 1802], [2218, 2219, 2226, 2225, 1796, 1797, 1804, 1803], [2219, 2220, 2227, 2226, 1797, 1798, 1805, 1804], [2220, 2221, 2228, 2227, 1798, 1799, 1806, 1805], [2221, 2222, 2229, 2228, 1799, 1800, 1807, 1806], [2222, 2223, 2230, 2229, 1800, 1801, 1808, 1807], [2142, 2224, 2231, 2150, 1720, 1802, 1809, 1728], [2224, 2225, 2232, 2231, 1802, 1803, 1810, 1809], [2225, 2226, 2233, 2232, 1803, 1804, 1811, 1810], [2226, 2227, 2234, 2233, 1804, 1805, 1812, 1811], [2227, 2228, 2235, 2234, 1805, 1806, 1813, 1812], [2228, 2229, 2236, 2235, 1806, 1807, 1814, 1813], [2229, 2230, 2237, 2236, 1807, 1808, 1815, 1814], [2150, 2231, 2238, 2158, 1728, 1809, 1816, 1736], [2231, 2232, 2239, 2238, 1809, 1810, 1817, 1816], [2232, 2233, 2240, 2239, 1810, 1811, 1818, 1817], [2233, 2234, 2241, 2240, 1811, 1812, 1819, 1818], [2234, 2235, 2242, 2241, 1812, 1813, 1820, 1819], [2235, 2236, 2243, 2242, 1813, 1814, 1821, 1820], [2236, 2237, 2244, 2243, 1814, 1815, 1822, 1821], [2158, 2238, 2245, 2166, 1736, 1816, 1823, 1744], [2238, 2239, 2246, 2245, 1816, 1817, 1824, 1823], [2239, 2240, 2247, 2246, 1817, 1818, 1825, 1824], [2240, 2241, 2248, 2247, 1818, 1819, 1826, 1825], [2241, 2242, 2249, 2248, 1819, 1820, 1827, 1826], [2242, 2243, 2250, 2249, 1820, 1821, 1828, 1827], [2243, 2244, 2251, 2250, 1821, 1822, 1829, 1828], [2247, 2248, 2252, 2253, 1825, 1826, 1830, 1831], [2248, 2249, 2254, 2252, 1826, 1827, 1832, 1830], [2249, 2250, 2255, 2254, 1827, 1828, 1833, 1832], [2250, 2251, 2256, 2255, 1828, 1829, 1834, 1833], [2253, 2252, 2257, 2258, 1831, 1830, 1835, 1836], [2252, 2254, 2259, 2257, 1830, 1832, 1837, 1835], [2254, 2255, 2260, 2259, 1832, 1833, 1838, 1837], [2255, 2256, 2261, 2260, 1833, 1834, 1839, 1838], [2258, 2257, 2262, 2263, 1836, 1835, 1840, 1841], [2257, 2259, 2264, 2262, 1835, 1837, 1842, 1840], [2259, 2260, 2265, 2264, 1837, 1838, 1843, 1842], [2260, 2261, 2266, 2265, 1838, 1839, 1844, 1843], [2266, 2267, 2268, 2265, 
1844, 1845, 1846, 1843], [2267, 2269, 2270, 2268, 1845, 1847, 1848, 1846], [2269, 2185, 2183, 2270, 1847, 1763, 1761, 1848], [2265, 2268, 2271, 2264, 1843, 1846, 1849, 1842], [2268, 2270, 2272, 2271, 1846, 1848, 1850, 1849], [2270, 2183, 2181, 2272, 1848, 1761, 1759, 1850], [2264, 2271, 2273, 2262, 1842, 1849, 1851, 1840], [2271, 2272, 2274, 2273, 1849, 1850, 1852, 1851], [2272, 2181, 2179, 2274, 1850, 1759, 1757, 1852], [2262, 2273, 2275, 2263, 1840, 1851, 1853, 1841], [2273, 2274, 2276, 2275, 1851, 1852, 1854, 1853], [2274, 2179, 2178, 2276, 1852, 1757, 1756, 1854], [2178, 2174, 2277, 2276, 1756, 1752, 1855, 1854], [2174, 2170, 2278, 2277, 1752, 1748, 1856, 1855], [2170, 2166, 2245, 2278, 1748, 1744, 1823, 1856], [2276, 2277, 2279, 2275, 1854, 1855, 1857, 1853], [2277, 2278, 2280, 2279, 1855, 1856, 1858, 1857], [2278, 2245, 2246, 2280, 1856, 1823, 1824, 1858], [2275, 2279, 2258, 2263, 1853, 1857, 1836, 1841], [2279, 2280, 2253, 2258, 1857, 1858, 1831, 1836], [2280, 2246, 2247, 2253, 1858, 1824, 1825, 1831], [2215, 2281, 2282, 2216, 1793, 1859, 1860, 1794], [2281, 2283, 2284, 2282, 1859, 1861, 1862, 1860], [2283, 2285, 2286, 2284, 1861, 1863, 1864, 1862], [2285, 2287, 2288, 2286, 1863, 1865, 1866, 1864], [2287, 2289, 2290, 2288, 1865, 1867, 1868, 1866], [2289, 2291, 2292, 2290, 1867, 1869, 1870, 1868], [2216, 2282, 2293, 2223, 1794, 1860, 1871, 1801], [2282, 2284, 2294, 2293, 1860, 1862, 1872, 1871], [2284, 2286, 2295, 2294, 1862, 1864, 1873, 1872], [2286, 2288, 2296, 2295, 1864, 1866, 1874, 1873], [2288, 2290, 2297, 2296, 1866, 1868, 1875, 1874], [2290, 2292, 2298, 2297, 1868, 1870, 1876, 1875], [2223, 2293, 2299, 2230, 1801, 1871, 1877, 1808], [2293, 2294, 2300, 2299, 1871, 1872, 1878, 1877], [2294, 2295, 2301, 2300, 1872, 1873, 1879, 1878], [2295, 2296, 2302, 2301, 1873, 1874, 1880, 1879], [2296, 2297, 2303, 2302, 1874, 1875, 1881, 1880], [2297, 2298, 2304, 2303, 1875, 1876, 1882, 1881], [2230, 2299, 2305, 2237, 1808, 1877, 1883, 1815], [2299, 2300, 2306, 2305, 1877, 1878, 1884, 1883], [2300, 2301, 2307, 2306, 1878, 1879, 1885, 1884], [2301, 2302, 2308, 2307, 1879, 1880, 1886, 1885], [2302, 2303, 2309, 2308, 1880, 1881, 1887, 1886], [2303, 2304, 2310, 2309, 1881, 1882, 1888, 1887], [2237, 2305, 2311, 2244, 1815, 1883, 1889, 1822], [2305, 2306, 2312, 2311, 1883, 1884, 1890, 1889], [2306, 2307, 2313, 2312, 1884, 1885, 1891, 1890], [2307, 2308, 2314, 2313, 1885, 1886, 1892, 1891], [2308, 2309, 2315, 2314, 1886, 1887, 1893, 1892], [2309, 2310, 2316, 2315, 1887, 1888, 1894, 1893], [2244, 2311, 2317, 2251, 1822, 1889, 1895, 1829], [2311, 2312, 2318, 2317, 1889, 1890, 1896, 1895], [2312, 2313, 2319, 2318, 1890, 1891, 1897, 1896], [2313, 2314, 2320, 2319, 1891, 1892, 1898, 1897], [2314, 2315, 2321, 2320, 1892, 1893, 1899, 1898], [2315, 2316, 2322, 2321, 1893, 1894, 1900, 1899], [2319, 2320, 2323, 2324, 1897, 1898, 1901, 1902], [2320, 2321, 2325, 2323, 1898, 1899, 1903, 1901], [2321, 2322, 2326, 2325, 1899, 1900, 1904, 1903], [2324, 2323, 2327, 2328, 1902, 1901, 1905, 1906], [2323, 2325, 2329, 2327, 1901, 1903, 1907, 1905], [2325, 2326, 2330, 2329, 1903, 1904, 1908, 1907], [2328, 2327, 2331, 2332, 1906, 1905, 1909, 1910], [2327, 2329, 2333, 2331, 1905, 1907, 1911, 1909], [2329, 2330, 2334, 2333, 1907, 1908, 1912, 1911], [2334, 2335, 2336, 2333, 1912, 1913, 1914, 1911], [2335, 2337, 2338, 2336, 1913, 1915, 1916, 1914], [2337, 2185, 2269, 2338, 1915, 1763, 1847, 1916], [2333, 2336, 2339, 2331, 1911, 1914, 1917, 1909], [2336, 2338, 2340, 2339, 1914, 1916, 1918, 1917], [2338, 2269, 2267, 2340, 
1916, 1847, 1845, 1918], [2331, 2339, 2341, 2332, 1909, 1917, 1919, 1910], [2339, 2340, 2342, 2341, 1917, 1918, 1920, 1919], [2340, 2267, 2266, 2342, 1918, 1845, 1844, 1920], [2266, 2261, 2343, 2342, 1844, 1839, 1921, 1920], [2261, 2256, 2344, 2343, 1839, 1834, 1922, 1921], [2256, 2251, 2317, 2344, 1834, 1829, 1895, 1922], [2342, 2343, 2345, 2341, 1920, 1921, 1923, 1919], [2343, 2344, 2346, 2345, 1921, 1922, 1924, 1923], [2344, 2317, 2318, 2346, 1922, 1895, 1896, 1924], [2341, 2345, 2328, 2332, 1919, 1923, 1906, 1910], [2345, 2346, 2324, 2328, 1923, 1924, 1902, 1906], [2346, 2318, 2319, 2324, 1924, 1896, 1897, 1902], [2291, 2347, 2348, 2292, 1869, 1925, 1926, 1870], [2347, 2349, 2350, 2348, 1925, 1927, 1928, 1926], [2349, 2351, 2352, 2350, 1927, 1929, 1930, 1928], [2351, 2353, 2354, 2352, 1929, 1931, 1932, 1930], [2353, 2355, 2356, 2354, 1931, 1933, 1934, 1932], [2355, 2357, 2358, 2356, 1933, 1935, 1936, 1934], [2292, 2348, 2359, 2298, 1870, 1926, 1937, 1876], [2348, 2350, 2360, 2359, 1926, 1928, 1938, 1937], [2350, 2352, 2361, 2360, 1928, 1930, 1939, 1938], [2352, 2354, 2362, 2361, 1930, 1932, 1940, 1939], [2354, 2356, 2363, 2362, 1932, 1934, 1941, 1940], [2356, 2358, 2364, 2363, 1934, 1936, 1942, 1941], [2298, 2359, 2365, 2304, 1876, 1937, 1943, 1882], [2359, 2360, 2366, 2365, 1937, 1938, 1944, 1943], [2360, 2361, 2367, 2366, 1938, 1939, 1945, 1944], [2361, 2362, 2368, 2367, 1939, 1940, 1946, 1945], [2362, 2363, 2369, 2368, 1940, 1941, 1947, 1946], [2363, 2364, 2370, 2369, 1941, 1942, 1948, 1947], [2304, 2365, 2371, 2310, 1882, 1943, 1949, 1888], [2365, 2366, 2372, 2371, 1943, 1944, 1950, 1949], [2366, 2367, 2373, 2372, 1944, 1945, 1951, 1950], [2367, 2368, 2374, 2373, 1945, 1946, 1952, 1951], [2368, 2369, 2375, 2374, 1946, 1947, 1953, 1952], [2369, 2370, 2376, 2375, 1947, 1948, 1954, 1953], [2310, 2371, 2377, 2316, 1888, 1949, 1955, 1894], [2371, 2372, 2378, 2377, 1949, 1950, 1956, 1955], [2372, 2373, 2379, 2378, 1950, 1951, 1957, 1956], [2373, 2374, 2380, 2379, 1951, 1952, 1958, 1957], [2374, 2375, 2381, 2380, 1952, 1953, 1959, 1958], [2375, 2376, 2382, 2381, 1953, 1954, 1960, 1959], [2316, 2377, 2383, 2322, 1894, 1955, 1961, 1900], [2377, 2378, 2384, 2383, 1955, 1956, 1962, 1961], [2378, 2379, 2385, 2384, 1956, 1957, 1963, 1962], [2379, 2380, 2386, 2385, 1957, 1958, 1964, 1963], [2380, 2381, 2387, 2386, 1958, 1959, 1965, 1964], [2381, 2382, 2388, 2387, 1959, 1960, 1966, 1965], [2385, 2386, 2389, 2390, 1963, 1964, 1967, 1968], [2386, 2387, 2391, 2389, 1964, 1965, 1969, 1967], [2387, 2388, 2392, 2391, 1965, 1966, 1970, 1969], [2390, 2389, 2393, 2394, 1968, 1967, 1971, 1972], [2389, 2391, 2395, 2393, 1967, 1969, 1973, 1971], [2391, 2392, 2396, 2395, 1969, 1970, 1974, 1973], [2394, 2393, 2397, 2398, 1972, 1971, 1975, 1976], [2393, 2395, 2399, 2397, 1971, 1973, 1977, 1975], [2395, 2396, 2400, 2399, 1973, 1974, 1978, 1977], [2400, 2401, 2402, 2399, 1978, 1979, 1980, 1977], [2401, 2403, 2404, 2402, 1979, 1981, 1982, 1980], [2403, 2185, 2337, 2404, 1981, 1763, 1915, 1982], [2399, 2402, 2405, 2397, 1977, 1980, 1983, 1975], [2402, 2404, 2406, 2405, 1980, 1982, 1984, 1983], [2404, 2337, 2335, 2406, 1982, 1915, 1913, 1984], [2397, 2405, 2407, 2398, 1975, 1983, 1985, 1976], [2405, 2406, 2408, 2407, 1983, 1984, 1986, 1985], [2406, 2335, 2334, 2408, 1984, 1913, 1912, 1986], [2334, 2330, 2409, 2408, 1912, 1908, 1987, 1986], [2330, 2326, 2410, 2409, 1908, 1904, 1988, 1987], [2326, 2322, 2383, 2410, 1904, 1900, 1961, 1988], [2408, 2409, 2411, 2407, 1986, 1987, 1989, 1985], [2409, 2410, 2412, 2411, 
1987, 1988, 1990, 1989], [2410, 2383, 2384, 2412, 1988, 1961, 1962, 1990], [2407, 2411, 2394, 2398, 1985, 1989, 1972, 1976], [2411, 2412, 2390, 2394, 1989, 1990, 1968, 1972], [2412, 2384, 2385, 2390, 1990, 1962, 1963, 1968], [2357, 2413, 2414, 2358, 1935, 1991, 1992, 1936], [2413, 2415, 2416, 2414, 1991, 1993, 1994, 1992], [2415, 2417, 2418, 2416, 1993, 1995, 1996, 1994], [2417, 2419, 2420, 2418, 1995, 1997, 1998, 1996], [2419, 2421, 2422, 2420, 1997, 1999, 2000, 1998], [2421, 2423, 2424, 2422, 1999, 2001, 2002, 2000], [2358, 2414, 2425, 2364, 1936, 1992, 2003, 1942], [2414, 2416, 2426, 2425, 1992, 1994, 2004, 2003], [2416, 2418, 2427, 2426, 1994, 1996, 2005, 2004], [2418, 2420, 2428, 2427, 1996, 1998, 2006, 2005], [2420, 2422, 2429, 2428, 1998, 2000, 2007, 2006], [2422, 2424, 2430, 2429, 2000, 2002, 2008, 2007], [2364, 2425, 2431, 2370, 1942, 2003, 2009, 1948], [2425, 2426, 2432, 2431, 2003, 2004, 2010, 2009], [2426, 2427, 2433, 2432, 2004, 2005, 2011, 2010], [2427, 2428, 2434, 2433, 2005, 2006, 2012, 2011], [2428, 2429, 2435, 2434, 2006, 2007, 2013, 2012], [2429, 2430, 2436, 2435, 2007, 2008, 2014, 2013], [2370, 2431, 2437, 2376, 1948, 2009, 2015, 1954], [2431, 2432, 2438, 2437, 2009, 2010, 2016, 2015], [2432, 2433, 2439, 2438, 2010, 2011, 2017, 2016], [2433, 2434, 2440, 2439, 2011, 2012, 2018, 2017], [2434, 2435, 2441, 2440, 2012, 2013, 2019, 2018], [2435, 2436, 2442, 2441, 2013, 2014, 2020, 2019], [2376, 2437, 2443, 2382, 1954, 2015, 2021, 1960], [2437, 2438, 2444, 2443, 2015, 2016, 2022, 2021], [2438, 2439, 2445, 2444, 2016, 2017, 2023, 2022], [2439, 2440, 2446, 2445, 2017, 2018, 2024, 2023], [2440, 2441, 2447, 2446, 2018, 2019, 2025, 2024], [2441, 2442, 2448, 2447, 2019, 2020, 2026, 2025], [2382, 2443, 2449, 2388, 1960, 2021, 2027, 1966], [2443, 2444, 2450, 2449, 2021, 2022, 2028, 2027], [2444, 2445, 2451, 2450, 2022, 2023, 2029, 2028], [2445, 2446, 2452, 2451, 2023, 2024, 2030, 2029], [2446, 2447, 2453, 2452, 2024, 2025, 2031, 2030], [2447, 2448, 2454, 2453, 2025, 2026, 2032, 2031], [2451, 2452, 2455, 2456, 2029, 2030, 2033, 2034], [2452, 2453, 2457, 2455, 2030, 2031, 2035, 2033], [2453, 2454, 2458, 2457, 2031, 2032, 2036, 2035], [2456, 2455, 2459, 2460, 2034, 2033, 2037, 2038], [2455, 2457, 2461, 2459, 2033, 2035, 2039, 2037], [2457, 2458, 2462, 2461, 2035, 2036, 2040, 2039], [2460, 2459, 2463, 2464, 2038, 2037, 2041, 2042], [2459, 2461, 2465, 2463, 2037, 2039, 2043, 2041], [2461, 2462, 2466, 2465, 2039, 2040, 2044, 2043], [2466, 2467, 2468, 2465, 2044, 2045, 2046, 2043], [2467, 2469, 2470, 2468, 2045, 2047, 2048, 2046], [2469, 2185, 2403, 2470, 2047, 1763, 1981, 2048], [2465, 2468, 2471, 2463, 2043, 2046, 2049, 2041], [2468, 2470, 2472, 2471, 2046, 2048, 2050, 2049], [2470, 2403, 2401, 2472, 2048, 1981, 1979, 2050], [2463, 2471, 2473, 2464, 2041, 2049, 2051, 2042], [2471, 2472, 2474, 2473, 2049, 2050, 2052, 2051], [2472, 2401, 2400, 2474, 2050, 1979, 1978, 2052], [2400, 2396, 2475, 2474, 1978, 1974, 2053, 2052], [2396, 2392, 2476, 2475, 1974, 1970, 2054, 2053], [2392, 2388, 2449, 2476, 1970, 1966, 2027, 2054], [2474, 2475, 2477, 2473, 2052, 2053, 2055, 2051], [2475, 2476, 2478, 2477, 2053, 2054, 2056, 2055], [2476, 2449, 2450, 2478, 2054, 2027, 2028, 2056], [2473, 2477, 2460, 2464, 2051, 2055, 2038, 2042], [2477, 2478, 2456, 2460, 2055, 2056, 2034, 2038], [2478, 2450, 2451, 2456, 2056, 2028, 2029, 2034], [2423, 2479, 2480, 2424, 2001, 2057, 2058, 2002], [2479, 2481, 2482, 2480, 2057, 2059, 2060, 2058], [2481, 2483, 2484, 2482, 2059, 2061, 2062, 2060], [2483, 2485, 2486, 2484, 
2061, 2063, 2064, 2062], [2485, 2487, 2488, 2486, 2063, 2065, 2066, 2064], [2487, 2111, 2114, 2488, 2065, 1689, 1692, 2066], [2424, 2480, 2489, 2430, 2002, 2058, 2067, 2008], [2480, 2482, 2490, 2489, 2058, 2060, 2068, 2067], [2482, 2484, 2491, 2490, 2060, 2062, 2069, 2068], [2484, 2486, 2492, 2491, 2062, 2064, 2070, 2069], [2486, 2488, 2493, 2492, 2064, 2066, 2071, 2070], [2488, 2114, 2128, 2493, 2066, 1692, 1706, 2071], [2430, 2489, 2494, 2436, 2008, 2067, 2072, 2014], [2489, 2490, 2495, 2494, 2067, 2068, 2073, 2072], [2490, 2491, 2496, 2495, 2068, 2069, 2074, 2073], [2491, 2492, 2497, 2496, 2069, 2070, 2075, 2074], [2492, 2493, 2498, 2497, 2070, 2071, 2076, 2075], [2493, 2128, 2136, 2498, 2071, 1706, 1714, 2076], [2436, 2494, 2499, 2442, 2014, 2072, 2077, 2020], [2494, 2495, 2500, 2499, 2072, 2073, 2078, 2077], [2495, 2496, 2501, 2500, 2073, 2074, 2079, 2078], [2496, 2497, 2502, 2501, 2074, 2075, 2080, 2079], [2497, 2498, 2503, 2502, 2075, 2076, 2081, 2080], [2498, 2136, 2144, 2503, 2076, 1714, 1722, 2081], [2442, 2499, 2504, 2448, 2020, 2077, 2082, 2026], [2499, 2500, 2505, 2504, 2077, 2078, 2083, 2082], [2500, 2501, 2506, 2505, 2078, 2079, 2084, 2083], [2501, 2502, 2507, 2506, 2079, 2080, 2085, 2084], [2502, 2503, 2508, 2507, 2080, 2081, 2086, 2085], [2503, 2144, 2152, 2508, 2081, 1722, 1730, 2086], [2448, 2504, 2509, 2454, 2026, 2082, 2087, 2032], [2504, 2505, 2510, 2509, 2082, 2083, 2088, 2087], [2505, 2506, 2511, 2510, 2083, 2084, 2089, 2088], [2506, 2507, 2512, 2511, 2084, 2085, 2090, 2089], [2507, 2508, 2513, 2512, 2085, 2086, 2091, 2090], [2508, 2152, 2160, 2513, 2086, 1730, 1738, 2091], [2511, 2512, 2514, 2515, 2089, 2090, 2092, 2093], [2512, 2513, 2516, 2514, 2090, 2091, 2094, 2092], [2513, 2160, 2197, 2516, 2091, 1738, 1775, 2094], [2515, 2514, 2517, 2518, 2093, 2092, 2095, 2096], [2514, 2516, 2519, 2517, 2092, 2094, 2097, 2095], [2516, 2197, 2195, 2519, 2094, 1775, 1773, 2097], [2518, 2517, 2520, 2521, 2096, 2095, 2098, 2099], [2517, 2519, 2522, 2520, 2095, 2097, 2100, 2098], [2519, 2195, 2194, 2522, 2097, 1773, 1772, 2100], [2194, 2190, 2523, 2522, 1772, 1768, 2101, 2100], [2190, 2186, 2524, 2523, 1768, 1764, 2102, 2101], [2186, 2185, 2469, 2524, 1764, 1763, 2047, 2102], [2522, 2523, 2525, 2520, 2100, 2101, 2103, 2098], [2523, 2524, 2526, 2525, 2101, 2102, 2104, 2103], [2524, 2469, 2467, 2526, 2102, 2047, 2045, 2104], [2520, 2525, 2527, 2521, 2098, 2103, 2105, 2099], [2525, 2526, 2528, 2527, 2103, 2104, 2106, 2105], [2526, 2467, 2466, 2528, 2104, 2045, 2044, 2106], [2466, 2462, 2529, 2528, 2044, 2040, 2107, 2106], [2462, 2458, 2530, 2529, 2040, 2036, 2108, 2107], [2458, 2454, 2509, 2530, 2036, 2032, 2087, 2108], [2528, 2529, 2531, 2527, 2106, 2107, 2109, 2105], [2529, 2530, 2532, 2531, 2107, 2108, 2110, 2109], [2530, 2509, 2510, 2532, 2108, 2087, 2088, 2110], [2527, 2531, 2518, 2521, 2105, 2109, 2096, 2099], [2531, 2532, 2515, 2518, 2109, 2110, 2093, 2096], [2532, 2510, 2511, 2515, 2110, 2088, 2089, 2093], [2533, 2534, 2535, 2536, 2111, 2112, 2113, 2114], [2534, 2537, 2538, 2535, 2112, 2115, 2116, 2113], [2537, 2539, 2540, 2538, 2115, 2117, 2118, 2116], [2539, 2541, 2542, 2540, 2117, 2119, 2120, 2118], [2541, 2543, 2544, 2542, 2119, 2121, 2122, 2120], [2543, 2545, 2546, 2544, 2121, 2123, 2124, 2122], [2545, 2547, 2548, 2546, 2123, 2125, 2126, 2124], [2536, 2535, 2549, 2550, 2114, 2113, 2127, 2128], [2535, 2538, 2551, 2549, 2113, 2116, 2129, 2127], [2538, 2540, 2552, 2551, 2116, 2118, 2130, 2129], [2540, 2542, 2553, 2552, 2118, 2120, 2131, 2130], [2542, 2544, 2554, 2553, 
2120, 2122, 2132, 2131], [2544, 2546, 2555, 2554, 2122, 2124, 2133, 2132], [2546, 2548, 2556, 2555, 2124, 2126, 2134, 2133], [2550, 2549, 2557, 2558, 2128, 2127, 2135, 2136], [2549, 2551, 2559, 2557, 2127, 2129, 2137, 2135], [2551, 2552, 2560, 2559, 2129, 2130, 2138, 2137], [2552, 2553, 2561, 2560, 2130, 2131, 2139, 2138], [2553, 2554, 2562, 2561, 2131, 2132, 2140, 2139], [2554, 2555, 2563, 2562, 2132, 2133, 2141, 2140], [2555, 2556, 2564, 2563, 2133, 2134, 2142, 2141], [2558, 2557, 2565, 2566, 2136, 2135, 2143, 2144], [2557, 2559, 2567, 2565, 2135, 2137, 2145, 2143], [2559, 2560, 2568, 2567, 2137, 2138, 2146, 2145], [2560, 2561, 2569, 2568, 2138, 2139, 2147, 2146], [2561, 2562, 2570, 2569, 2139, 2140, 2148, 2147], [2562, 2563, 2571, 2570, 2140, 2141, 2149, 2148], [2563, 2564, 2572, 2571, 2141, 2142, 2150, 2149], [2566, 2565, 2573, 2574, 2144, 2143, 2151, 2152], [2565, 2567, 2575, 2573, 2143, 2145, 2153, 2151], [2567, 2568, 2576, 2575, 2145, 2146, 2154, 2153], [2568, 2569, 2577, 2576, 2146, 2147, 2155, 2154], [2569, 2570, 2578, 2577, 2147, 2148, 2156, 2155], [2570, 2571, 2579, 2578, 2148, 2149, 2157, 2156], [2571, 2572, 2580, 2579, 2149, 2150, 2158, 2157], [2574, 2573, 2581, 2582, 2152, 2151, 2159, 2160], [2573, 2575, 2583, 2581, 2151, 2153, 2161, 2159], [2575, 2576, 2584, 2583, 2153, 2154, 2162, 2161], [2576, 2577, 2585, 2584, 2154, 2155, 2163, 2162], [2577, 2578, 2586, 2585, 2155, 2156, 2164, 2163], [2578, 2579, 2587, 2586, 2156, 2157, 2165, 2164], [2579, 2580, 2588, 2587, 2157, 2158, 2166, 2165], [2585, 2586, 2589, 2590, 2163, 2164, 2167, 2168], [2586, 2587, 2591, 2589, 2164, 2165, 2169, 2167], [2587, 2588, 2592, 2591, 2165, 2166, 2170, 2169], [2590, 2589, 2593, 2594, 2168, 2167, 2171, 2172], [2589, 2591, 2595, 2593, 2167, 2169, 2173, 2171], [2591, 2592, 2596, 2595, 2169, 2170, 2174, 2173], [2594, 2593, 2597, 2598, 2172, 2171, 2175, 2176], [2593, 2595, 2599, 2597, 2171, 2173, 2177, 2175], [2595, 2596, 2600, 2599, 2173, 2174, 2178, 2177], [2600, 2601, 2602, 2599, 2178, 2179, 2180, 2177], [2601, 2603, 2604, 2602, 2179, 2181, 2182, 2180], [2603, 2605, 2606, 2604, 2181, 2183, 2184, 2182], [2605, 2607, 2608, 2606, 2183, 2185, 2186, 2184], [2599, 2602, 2609, 2597, 2177, 2180, 2187, 2175], [2602, 2604, 2610, 2609, 2180, 2182, 2188, 2187], [2604, 2606, 2611, 2610, 2182, 2184, 2189, 2188], [2606, 2608, 2612, 2611, 2184, 2186, 2190, 2189], [2597, 2609, 2613, 2598, 2175, 2187, 2191, 2176], [2609, 2610, 2614, 2613, 2187, 2188, 2192, 2191], [2610, 2611, 2615, 2614, 2188, 2189, 2193, 2192], [2611, 2612, 2616, 2615, 2189, 2190, 2194, 2193], [2616, 2617, 2618, 2615, 2194, 2195, 2196, 2193], [2617, 2619, 2620, 2618, 2195, 2197, 2198, 2196], [2619, 2582, 2581, 2620, 2197, 2160, 2159, 2198], [2615, 2618, 2621, 2614, 2193, 2196, 2199, 2192], [2618, 2620, 2622, 2621, 2196, 2198, 2200, 2199], [2620, 2581, 2583, 2622, 2198, 2159, 2161, 2200], [2614, 2621, 2623, 2613, 2192, 2199, 2201, 2191], [2621, 2622, 2624, 2623, 2199, 2200, 2202, 2201], [2622, 2583, 2584, 2624, 2200, 2161, 2162, 2202], [2613, 2623, 2594, 2598, 2191, 2201, 2172, 2176], [2623, 2624, 2590, 2594, 2201, 2202, 2168, 2172], [2624, 2584, 2585, 2590, 2202, 2162, 2163, 2168], [2547, 2625, 2626, 2548, 2125, 2203, 2204, 2126], [2625, 2627, 2628, 2626, 2203, 2205, 2206, 2204], [2627, 2629, 2630, 2628, 2205, 2207, 2208, 2206], [2629, 2631, 2632, 2630, 2207, 2209, 2210, 2208], [2631, 2633, 2634, 2632, 2209, 2211, 2212, 2210], [2633, 2635, 2636, 2634, 2211, 2213, 2214, 2212], [2635, 2637, 2638, 2636, 2213, 2215, 2216, 2214], [2548, 2626, 2639, 2556, 
2126, 2204, 2217, 2134], [2626, 2628, 2640, 2639, 2204, 2206, 2218, 2217], [2628, 2630, 2641, 2640, 2206, 2208, 2219, 2218], [2630, 2632, 2642, 2641, 2208, 2210, 2220, 2219], [2632, 2634, 2643, 2642, 2210, 2212, 2221, 2220], [2634, 2636, 2644, 2643, 2212, 2214, 2222, 2221], [2636, 2638, 2645, 2644, 2214, 2216, 2223, 2222], [2556, 2639, 2646, 2564, 2134, 2217, 2224, 2142], [2639, 2640, 2647, 2646, 2217, 2218, 2225, 2224], [2640, 2641, 2648, 2647, 2218, 2219, 2226, 2225], [2641, 2642, 2649, 2648, 2219, 2220, 2227, 2226], [2642, 2643, 2650, 2649, 2220, 2221, 2228, 2227], [2643, 2644, 2651, 2650, 2221, 2222, 2229, 2228], [2644, 2645, 2652, 2651, 2222, 2223, 2230, 2229], [2564, 2646, 2653, 2572, 2142, 2224, 2231, 2150], [2646, 2647, 2654, 2653, 2224, 2225, 2232, 2231], [2647, 2648, 2655, 2654, 2225, 2226, 2233, 2232], [2648, 2649, 2656, 2655, 2226, 2227, 2234, 2233], [2649, 2650, 2657, 2656, 2227, 2228, 2235, 2234], [2650, 2651, 2658, 2657, 2228, 2229, 2236, 2235], [2651, 2652, 2659, 2658, 2229, 2230, 2237, 2236], [2572, 2653, 2660, 2580, 2150, 2231, 2238, 2158], [2653, 2654, 2661, 2660, 2231, 2232, 2239, 2238], [2654, 2655, 2662, 2661, 2232, 2233, 2240, 2239], [2655, 2656, 2663, 2662, 2233, 2234, 2241, 2240], [2656, 2657, 2664, 2663, 2234, 2235, 2242, 2241], [2657, 2658, 2665, 2664, 2235, 2236, 2243, 2242], [2658, 2659, 2666, 2665, 2236, 2237, 2244, 2243], [2580, 2660, 2667, 2588, 2158, 2238, 2245, 2166], [2660, 2661, 2668, 2667, 2238, 2239, 2246, 2245], [2661, 2662, 2669, 2668, 2239, 2240, 2247, 2246], [2662, 2663, 2670, 2669, 2240, 2241, 2248, 2247], [2663, 2664, 2671, 2670, 2241, 2242, 2249, 2248], [2664, 2665, 2672, 2671, 2242, 2243, 2250, 2249], [2665, 2666, 2673, 2672, 2243, 2244, 2251, 2250], [2669, 2670, 2674, 2675, 2247, 2248, 2252, 2253], [2670, 2671, 2676, 2674, 2248, 2249, 2254, 2252], [2671, 2672, 2677, 2676, 2249, 2250, 2255, 2254], [2672, 2673, 2678, 2677, 2250, 2251, 2256, 2255], [2675, 2674, 2679, 2680, 2253, 2252, 2257, 2258], [2674, 2676, 2681, 2679, 2252, 2254, 2259, 2257], [2676, 2677, 2682, 2681, 2254, 2255, 2260, 2259], [2677, 2678, 2683, 2682, 2255, 2256, 2261, 2260], [2680, 2679, 2684, 2685, 2258, 2257, 2262, 2263], [2679, 2681, 2686, 2684, 2257, 2259, 2264, 2262], [2681, 2682, 2687, 2686, 2259, 2260, 2265, 2264], [2682, 2683, 2688, 2687, 2260, 2261, 2266, 2265], [2688, 2689, 2690, 2687, 2266, 2267, 2268, 2265], [2689, 2691, 2692, 2690, 2267, 2269, 2270, 2268], [2691, 2607, 2605, 2692, 2269, 2185, 2183, 2270], [2687, 2690, 2693, 2686, 2265, 2268, 2271, 2264], [2690, 2692, 2694, 2693, 2268, 2270, 2272, 2271], [2692, 2605, 2603, 2694, 2270, 2183, 2181, 2272], [2686, 2693, 2695, 2684, 2264, 2271, 2273, 2262], [2693, 2694, 2696, 2695, 2271, 2272, 2274, 2273], [2694, 2603, 2601, 2696, 2272, 2181, 2179, 2274], [2684, 2695, 2697, 2685, 2262, 2273, 2275, 2263], [2695, 2696, 2698, 2697, 2273, 2274, 2276, 2275], [2696, 2601, 2600, 2698, 2274, 2179, 2178, 2276], [2600, 2596, 2699, 2698, 2178, 2174, 2277, 2276], [2596, 2592, 2700, 2699, 2174, 2170, 2278, 2277], [2592, 2588, 2667, 2700, 2170, 2166, 2245, 2278], [2698, 2699, 2701, 2697, 2276, 2277, 2279, 2275], [2699, 2700, 2702, 2701, 2277, 2278, 2280, 2279], [2700, 2667, 2668, 2702, 2278, 2245, 2246, 2280], [2697, 2701, 2680, 2685, 2275, 2279, 2258, 2263], [2701, 2702, 2675, 2680, 2279, 2280, 2253, 2258], [2702, 2668, 2669, 2675, 2280, 2246, 2247, 2253], [2637, 2703, 2704, 2638, 2215, 2281, 2282, 2216], [2703, 2705, 2706, 2704, 2281, 2283, 2284, 2282], [2705, 2707, 2708, 2706, 2283, 2285, 2286, 2284], [2707, 2709, 2710, 2708, 
2285, 2287, 2288, 2286], [2709, 2711, 2712, 2710, 2287, 2289, 2290, 2288], [2711, 2713, 2714, 2712, 2289, 2291, 2292, 2290], [2638, 2704, 2715, 2645, 2216, 2282, 2293, 2223], [2704, 2706, 2716, 2715, 2282, 2284, 2294, 2293], [2706, 2708, 2717, 2716, 2284, 2286, 2295, 2294], [2708, 2710, 2718, 2717, 2286, 2288, 2296, 2295], [2710, 2712, 2719, 2718, 2288, 2290, 2297, 2296], [2712, 2714, 2720, 2719, 2290, 2292, 2298, 2297], [2645, 2715, 2721, 2652, 2223, 2293, 2299, 2230], [2715, 2716, 2722, 2721, 2293, 2294, 2300, 2299], [2716, 2717, 2723, 2722, 2294, 2295, 2301, 2300], [2717, 2718, 2724, 2723, 2295, 2296, 2302, 2301], [2718, 2719, 2725, 2724, 2296, 2297, 2303, 2302], [2719, 2720, 2726, 2725, 2297, 2298, 2304, 2303], [2652, 2721, 2727, 2659, 2230, 2299, 2305, 2237], [2721, 2722, 2728, 2727, 2299, 2300, 2306, 2305], [2722, 2723, 2729, 2728, 2300, 2301, 2307, 2306], [2723, 2724, 2730, 2729, 2301, 2302, 2308, 2307], [2724, 2725, 2731, 2730, 2302, 2303, 2309, 2308], [2725, 2726, 2732, 2731, 2303, 2304, 2310, 2309], [2659, 2727, 2733, 2666, 2237, 2305, 2311, 2244], [2727, 2728, 2734, 2733, 2305, 2306, 2312, 2311], [2728, 2729, 2735, 2734, 2306, 2307, 2313, 2312], [2729, 2730, 2736, 2735, 2307, 2308, 2314, 2313], [2730, 2731, 2737, 2736, 2308, 2309, 2315, 2314], [2731, 2732, 2738, 2737, 2309, 2310, 2316, 2315], [2666, 2733, 2739, 2673, 2244, 2311, 2317, 2251], [2733, 2734, 2740, 2739, 2311, 2312, 2318, 2317], [2734, 2735, 2741, 2740, 2312, 2313, 2319, 2318], [2735, 2736, 2742, 2741, 2313, 2314, 2320, 2319], [2736, 2737, 2743, 2742, 2314, 2315, 2321, 2320], [2737, 2738, 2744, 2743, 2315, 2316, 2322, 2321], [2741, 2742, 2745, 2746, 2319, 2320, 2323, 2324], [2742, 2743, 2747, 2745, 2320, 2321, 2325, 2323], [2743, 2744, 2748, 2747, 2321, 2322, 2326, 2325], [2746, 2745, 2749, 2750, 2324, 2323, 2327, 2328], [2745, 2747, 2751, 2749, 2323, 2325, 2329, 2327], [2747, 2748, 2752, 2751, 2325, 2326, 2330, 2329], [2750, 2749, 2753, 2754, 2328, 2327, 2331, 2332], [2749, 2751, 2755, 2753, 2327, 2329, 2333, 2331], [2751, 2752, 2756, 2755, 2329, 2330, 2334, 2333], [2756, 2757, 2758, 2755, 2334, 2335, 2336, 2333], [2757, 2759, 2760, 2758, 2335, 2337, 2338, 2336], [2759, 2607, 2691, 2760, 2337, 2185, 2269, 2338], [2755, 2758, 2761, 2753, 2333, 2336, 2339, 2331], [2758, 2760, 2762, 2761, 2336, 2338, 2340, 2339], [2760, 2691, 2689, 2762, 2338, 2269, 2267, 2340], [2753, 2761, 2763, 2754, 2331, 2339, 2341, 2332], [2761, 2762, 2764, 2763, 2339, 2340, 2342, 2341], [2762, 2689, 2688, 2764, 2340, 2267, 2266, 2342], [2688, 2683, 2765, 2764, 2266, 2261, 2343, 2342], [2683, 2678, 2766, 2765, 2261, 2256, 2344, 2343], [2678, 2673, 2739, 2766, 2256, 2251, 2317, 2344], [2764, 2765, 2767, 2763, 2342, 2343, 2345, 2341], [2765, 2766, 2768, 2767, 2343, 2344, 2346, 2345], [2766, 2739, 2740, 2768, 2344, 2317, 2318, 2346], [2763, 2767, 2750, 2754, 2341, 2345, 2328, 2332], [2767, 2768, 2746, 2750, 2345, 2346, 2324, 2328], [2768, 2740, 2741, 2746, 2346, 2318, 2319, 2324], [2713, 2769, 2770, 2714, 2291, 2347, 2348, 2292], [2769, 2771, 2772, 2770, 2347, 2349, 2350, 2348], [2771, 2773, 2774, 2772, 2349, 2351, 2352, 2350], [2773, 2775, 2776, 2774, 2351, 2353, 2354, 2352], [2775, 2777, 2778, 2776, 2353, 2355, 2356, 2354], [2777, 2779, 2780, 2778, 2355, 2357, 2358, 2356], [2714, 2770, 2781, 2720, 2292, 2348, 2359, 2298], [2770, 2772, 2782, 2781, 2348, 2350, 2360, 2359], [2772, 2774, 2783, 2782, 2350, 2352, 2361, 2360], [2774, 2776, 2784, 2783, 2352, 2354, 2362, 2361], [2776, 2778, 2785, 2784, 2354, 2356, 2363, 2362], [2778, 2780, 2786, 2785, 
2356, 2358, 2364, 2363], [2720, 2781, 2787, 2726, 2298, 2359, 2365, 2304], [2781, 2782, 2788, 2787, 2359, 2360, 2366, 2365], [2782, 2783, 2789, 2788, 2360, 2361, 2367, 2366], [2783, 2784, 2790, 2789, 2361, 2362, 2368, 2367], [2784, 2785, 2791, 2790, 2362, 2363, 2369, 2368], [2785, 2786, 2792, 2791, 2363, 2364, 2370, 2369], [2726, 2787, 2793, 2732, 2304, 2365, 2371, 2310], [2787, 2788, 2794, 2793, 2365, 2366, 2372, 2371], [2788, 2789, 2795, 2794, 2366, 2367, 2373, 2372], [2789, 2790, 2796, 2795, 2367, 2368, 2374, 2373], [2790, 2791, 2797, 2796, 2368, 2369, 2375, 2374], [2791, 2792, 2798, 2797, 2369, 2370, 2376, 2375], [2732, 2793, 2799, 2738, 2310, 2371, 2377, 2316], [2793, 2794, 2800, 2799, 2371, 2372, 2378, 2377], [2794, 2795, 2801, 2800, 2372, 2373, 2379, 2378], [2795, 2796, 2802, 2801, 2373, 2374, 2380, 2379], [2796, 2797, 2803, 2802, 2374, 2375, 2381, 2380], [2797, 2798, 2804, 2803, 2375, 2376, 2382, 2381], [2738, 2799, 2805, 2744, 2316, 2377, 2383, 2322], [2799, 2800, 2806, 2805, 2377, 2378, 2384, 2383], [2800, 2801, 2807, 2806, 2378, 2379, 2385, 2384], [2801, 2802, 2808, 2807, 2379, 2380, 2386, 2385], [2802, 2803, 2809, 2808, 2380, 2381, 2387, 2386], [2803, 2804, 2810, 2809, 2381, 2382, 2388, 2387], [2807, 2808, 2811, 2812, 2385, 2386, 2389, 2390], [2808, 2809, 2813, 2811, 2386, 2387, 2391, 2389], [2809, 2810, 2814, 2813, 2387, 2388, 2392, 2391], [2812, 2811, 2815, 2816, 2390, 2389, 2393, 2394], [2811, 2813, 2817, 2815, 2389, 2391, 2395, 2393], [2813, 2814, 2818, 2817, 2391, 2392, 2396, 2395], [2816, 2815, 2819, 2820, 2394, 2393, 2397, 2398], [2815, 2817, 2821, 2819, 2393, 2395, 2399, 2397], [2817, 2818, 2822, 2821, 2395, 2396, 2400, 2399], [2822, 2823, 2824, 2821, 2400, 2401, 2402, 2399], [2823, 2825, 2826, 2824, 2401, 2403, 2404, 2402], [2825, 2607, 2759, 2826, 2403, 2185, 2337, 2404], [2821, 2824, 2827, 2819, 2399, 2402, 2405, 2397], [2824, 2826, 2828, 2827, 2402, 2404, 2406, 2405], [2826, 2759, 2757, 2828, 2404, 2337, 2335, 2406], [2819, 2827, 2829, 2820, 2397, 2405, 2407, 2398], [2827, 2828, 2830, 2829, 2405, 2406, 2408, 2407], [2828, 2757, 2756, 2830, 2406, 2335, 2334, 2408], [2756, 2752, 2831, 2830, 2334, 2330, 2409, 2408], [2752, 2748, 2832, 2831, 2330, 2326, 2410, 2409], [2748, 2744, 2805, 2832, 2326, 2322, 2383, 2410], [2830, 2831, 2833, 2829, 2408, 2409, 2411, 2407], [2831, 2832, 2834, 2833, 2409, 2410, 2412, 2411], [2832, 2805, 2806, 2834, 2410, 2383, 2384, 2412], [2829, 2833, 2816, 2820, 2407, 2411, 2394, 2398], [2833, 2834, 2812, 2816, 2411, 2412, 2390, 2394], [2834, 2806, 2807, 2812, 2412, 2384, 2385, 2390], [2779, 2835, 2836, 2780, 2357, 2413, 2414, 2358], [2835, 2837, 2838, 2836, 2413, 2415, 2416, 2414], [2837, 2839, 2840, 2838, 2415, 2417, 2418, 2416], [2839, 2841, 2842, 2840, 2417, 2419, 2420, 2418], [2841, 2843, 2844, 2842, 2419, 2421, 2422, 2420], [2843, 2845, 2846, 2844, 2421, 2423, 2424, 2422], [2780, 2836, 2847, 2786, 2358, 2414, 2425, 2364], [2836, 2838, 2848, 2847, 2414, 2416, 2426, 2425], [2838, 2840, 2849, 2848, 2416, 2418, 2427, 2426], [2840, 2842, 2850, 2849, 2418, 2420, 2428, 2427], [2842, 2844, 2851, 2850, 2420, 2422, 2429, 2428], [2844, 2846, 2852, 2851, 2422, 2424, 2430, 2429], [2786, 2847, 2853, 2792, 2364, 2425, 2431, 2370], [2847, 2848, 2854, 2853, 2425, 2426, 2432, 2431], [2848, 2849, 2855, 2854, 2426, 2427, 2433, 2432], [2849, 2850, 2856, 2855, 2427, 2428, 2434, 2433], [2850, 2851, 2857, 2856, 2428, 2429, 2435, 2434], [2851, 2852, 2858, 2857, 2429, 2430, 2436, 2435], [2792, 2853, 2859, 2798, 2370, 2431, 2437, 2376], [2853, 2854, 2860, 2859, 
2431, 2432, 2438, 2437], [2854, 2855, 2861, 2860, 2432, 2433, 2439, 2438], [2855, 2856, 2862, 2861, 2433, 2434, 2440, 2439], [2856, 2857, 2863, 2862, 2434, 2435, 2441, 2440], [2857, 2858, 2864, 2863, 2435, 2436, 2442, 2441], [2798, 2859, 2865, 2804, 2376, 2437, 2443, 2382], [2859, 2860, 2866, 2865, 2437, 2438, 2444, 2443], [2860, 2861, 2867, 2866, 2438, 2439, 2445, 2444], [2861, 2862, 2868, 2867, 2439, 2440, 2446, 2445], [2862, 2863, 2869, 2868, 2440, 2441, 2447, 2446], [2863, 2864, 2870, 2869, 2441, 2442, 2448, 2447], [2804, 2865, 2871, 2810, 2382, 2443, 2449, 2388], [2865, 2866, 2872, 2871, 2443, 2444, 2450, 2449], [2866, 2867, 2873, 2872, 2444, 2445, 2451, 2450], [2867, 2868, 2874, 2873, 2445, 2446, 2452, 2451], [2868, 2869, 2875, 2874, 2446, 2447, 2453, 2452], [2869, 2870, 2876, 2875, 2447, 2448, 2454, 2453], [2873, 2874, 2877, 2878, 2451, 2452, 2455, 2456], [2874, 2875, 2879, 2877, 2452, 2453, 2457, 2455], [2875, 2876, 2880, 2879, 2453, 2454, 2458, 2457], [2878, 2877, 2881, 2882, 2456, 2455, 2459, 2460], [2877, 2879, 2883, 2881, 2455, 2457, 2461, 2459], [2879, 2880, 2884, 2883, 2457, 2458, 2462, 2461], [2882, 2881, 2885, 2886, 2460, 2459, 2463, 2464], [2881, 2883, 2887, 2885, 2459, 2461, 2465, 2463], [2883, 2884, 2888, 2887, 2461, 2462, 2466, 2465], [2888, 2889, 2890, 2887, 2466, 2467, 2468, 2465], [2889, 2891, 2892, 2890, 2467, 2469, 2470, 2468], [2891, 2607, 2825, 2892, 2469, 2185, 2403, 2470], [2887, 2890, 2893, 2885, 2465, 2468, 2471, 2463], [2890, 2892, 2894, 2893, 2468, 2470, 2472, 2471], [2892, 2825, 2823, 2894, 2470, 2403, 2401, 2472], [2885, 2893, 2895, 2886, 2463, 2471, 2473, 2464], [2893, 2894, 2896, 2895, 2471, 2472, 2474, 2473], [2894, 2823, 2822, 2896, 2472, 2401, 2400, 2474], [2822, 2818, 2897, 2896, 2400, 2396, 2475, 2474], [2818, 2814, 2898, 2897, 2396, 2392, 2476, 2475], [2814, 2810, 2871, 2898, 2392, 2388, 2449, 2476], [2896, 2897, 2899, 2895, 2474, 2475, 2477, 2473], [2897, 2898, 2900, 2899, 2475, 2476, 2478, 2477], [2898, 2871, 2872, 2900, 2476, 2449, 2450, 2478], [2895, 2899, 2882, 2886, 2473, 2477, 2460, 2464], [2899, 2900, 2878, 2882, 2477, 2478, 2456, 2460], [2900, 2872, 2873, 2878, 2478, 2450, 2451, 2456], [2845, 2901, 2902, 2846, 2423, 2479, 2480, 2424], [2901, 2903, 2904, 2902, 2479, 2481, 2482, 2480], [2903, 2905, 2906, 2904, 2481, 2483, 2484, 2482], [2905, 2907, 2908, 2906, 2483, 2485, 2486, 2484], [2907, 2909, 2910, 2908, 2485, 2487, 2488, 2486], [2909, 2533, 2536, 2910, 2487, 2111, 2114, 2488], [2846, 2902, 2911, 2852, 2424, 2480, 2489, 2430], [2902, 2904, 2912, 2911, 2480, 2482, 2490, 2489], [2904, 2906, 2913, 2912, 2482, 2484, 2491, 2490], [2906, 2908, 2914, 2913, 2484, 2486, 2492, 2491], [2908, 2910, 2915, 2914, 2486, 2488, 2493, 2492], [2910, 2536, 2550, 2915, 2488, 2114, 2128, 2493], [2852, 2911, 2916, 2858, 2430, 2489, 2494, 2436], [2911, 2912, 2917, 2916, 2489, 2490, 2495, 2494], [2912, 2913, 2918, 2917, 2490, 2491, 2496, 2495], [2913, 2914, 2919, 2918, 2491, 2492, 2497, 2496], [2914, 2915, 2920, 2919, 2492, 2493, 2498, 2497], [2915, 2550, 2558, 2920, 2493, 2128, 2136, 2498], [2858, 2916, 2921, 2864, 2436, 2494, 2499, 2442], [2916, 2917, 2922, 2921, 2494, 2495, 2500, 2499], [2917, 2918, 2923, 2922, 2495, 2496, 2501, 2500], [2918, 2919, 2924, 2923, 2496, 2497, 2502, 2501], [2919, 2920, 2925, 2924, 2497, 2498, 2503, 2502], [2920, 2558, 2566, 2925, 2498, 2136, 2144, 2503], [2864, 2921, 2926, 2870, 2442, 2499, 2504, 2448], [2921, 2922, 2927, 2926, 2499, 2500, 2505, 2504], [2922, 2923, 2928, 2927, 2500, 2501, 2506, 2505], [2923, 2924, 2929, 2928, 
2501, 2502, 2507, 2506], [2924, 2925, 2930, 2929, 2502, 2503, 2508, 2507], [2925, 2566, 2574, 2930, 2503, 2144, 2152, 2508], [2870, 2926, 2931, 2876, 2448, 2504, 2509, 2454], [2926, 2927, 2932, 2931, 2504, 2505, 2510, 2509], [2927, 2928, 2933, 2932, 2505, 2506, 2511, 2510], [2928, 2929, 2934, 2933, 2506, 2507, 2512, 2511], [2929, 2930, 2935, 2934, 2507, 2508, 2513, 2512], [2930, 2574, 2582, 2935, 2508, 2152, 2160, 2513], [2933, 2934, 2936, 2937, 2511, 2512, 2514, 2515], [2934, 2935, 2938, 2936, 2512, 2513, 2516, 2514], [2935, 2582, 2619, 2938, 2513, 2160, 2197, 2516], [2937, 2936, 2939, 2940, 2515, 2514, 2517, 2518], [2936, 2938, 2941, 2939, 2514, 2516, 2519, 2517], [2938, 2619, 2617, 2941, 2516, 2197, 2195, 2519], [2940, 2939, 2942, 2943, 2518, 2517, 2520, 2521], [2939, 2941, 2944, 2942, 2517, 2519, 2522, 2520], [2941, 2617, 2616, 2944, 2519, 2195, 2194, 2522], [2616, 2612, 2945, 2944, 2194, 2190, 2523, 2522], [2612, 2608, 2946, 2945, 2190, 2186, 2524, 2523], [2608, 2607, 2891, 2946, 2186, 2185, 2469, 2524], [2944, 2945, 2947, 2942, 2522, 2523, 2525, 2520], [2945, 2946, 2948, 2947, 2523, 2524, 2526, 2525], [2946, 2891, 2889, 2948, 2524, 2469, 2467, 2526], [2942, 2947, 2949, 2943, 2520, 2525, 2527, 2521], [2947, 2948, 2950, 2949, 2525, 2526, 2528, 2527], [2948, 2889, 2888, 2950, 2526, 2467, 2466, 2528], [2888, 2884, 2951, 2950, 2466, 2462, 2529, 2528], [2884, 2880, 2952, 2951, 2462, 2458, 2530, 2529], [2880, 2876, 2931, 2952, 2458, 2454, 2509, 2530], [2950, 2951, 2953, 2949, 2528, 2529, 2531, 2527], [2951, 2952, 2954, 2953, 2529, 2530, 2532, 2531], [2952, 2931, 2932, 2954, 2530, 2509, 2510, 2532], [2949, 2953, 2940, 2943, 2527, 2531, 2518, 2521], [2953, 2954, 2937, 2940, 2531, 2532, 2515, 2518], [2954, 2932, 2933, 2937, 2532, 2510, 2511, 2515], [2955, 2956, 2957, 2958, 2533, 2534, 2535, 2536], [2956, 2959, 2960, 2957, 2534, 2537, 2538, 2535], [2959, 2961, 2962, 2960, 2537, 2539, 2540, 2538], [2961, 2963, 2964, 2962, 2539, 2541, 2542, 2540], [2963, 2965, 2966, 2964, 2541, 2543, 2544, 2542], [2965, 2967, 2968, 2966, 2543, 2545, 2546, 2544], [2967, 2969, 2970, 2968, 2545, 2547, 2548, 2546], [2958, 2957, 2971, 2972, 2536, 2535, 2549, 2550], [2957, 2960, 2973, 2971, 2535, 2538, 2551, 2549], [2960, 2962, 2974, 2973, 2538, 2540, 2552, 2551], [2962, 2964, 2975, 2974, 2540, 2542, 2553, 2552], [2964, 2966, 2976, 2975, 2542, 2544, 2554, 2553], [2966, 2968, 2977, 2976, 2544, 2546, 2555, 2554], [2968, 2970, 2978, 2977, 2546, 2548, 2556, 2555], [2972, 2971, 2979, 2980, 2550, 2549, 2557, 2558], [2971, 2973, 2981, 2979, 2549, 2551, 2559, 2557], [2973, 2974, 2982, 2981, 2551, 2552, 2560, 2559], [2974, 2975, 2983, 2982, 2552, 2553, 2561, 2560], [2975, 2976, 2984, 2983, 2553, 2554, 2562, 2561], [2976, 2977, 2985, 2984, 2554, 2555, 2563, 2562], [2977, 2978, 2986, 2985, 2555, 2556, 2564, 2563], [2980, 2979, 2987, 2988, 2558, 2557, 2565, 2566], [2979, 2981, 2989, 2987, 2557, 2559, 2567, 2565], [2981, 2982, 2990, 2989, 2559, 2560, 2568, 2567], [2982, 2983, 2991, 2990, 2560, 2561, 2569, 2568], [2983, 2984, 2992, 2991, 2561, 2562, 2570, 2569], [2984, 2985, 2993, 2992, 2562, 2563, 2571, 2570], [2985, 2986, 2994, 2993, 2563, 2564, 2572, 2571], [2988, 2987, 2995, 2996, 2566, 2565, 2573, 2574], [2987, 2989, 2997, 2995, 2565, 2567, 2575, 2573], [2989, 2990, 2998, 2997, 2567, 2568, 2576, 2575], [2990, 2991, 2999, 2998, 2568, 2569, 2577, 2576], [2991, 2992, 3000, 2999, 2569, 2570, 2578, 2577], [2992, 2993, 3001, 3000, 2570, 2571, 2579, 2578], [2993, 2994, 3002, 3001, 2571, 2572, 2580, 2579], [2996, 2995, 3003, 3004, 
2574, 2573, 2581, 2582], [2995, 2997, 3005, 3003, 2573, 2575, 2583, 2581], [2997, 2998, 3006, 3005, 2575, 2576, 2584, 2583], [2998, 2999, 3007, 3006, 2576, 2577, 2585, 2584], [2999, 3000, 3008, 3007, 2577, 2578, 2586, 2585], [3000, 3001, 3009, 3008, 2578, 2579, 2587, 2586], [3001, 3002, 3010, 3009, 2579, 2580, 2588, 2587], [3007, 3008, 3011, 3012, 2585, 2586, 2589, 2590], [3008, 3009, 3013, 3011, 2586, 2587, 2591, 2589], [3009, 3010, 3014, 3013, 2587, 2588, 2592, 2591], [3012, 3011, 3015, 3016, 2590, 2589, 2593, 2594], [3011, 3013, 3017, 3015, 2589, 2591, 2595, 2593], [3013, 3014, 3018, 3017, 2591, 2592, 2596, 2595], [3016, 3015, 3019, 3020, 2594, 2593, 2597, 2598], [3015, 3017, 3021, 3019, 2593, 2595, 2599, 2597], [3017, 3018, 3022, 3021, 2595, 2596, 2600, 2599], [3022, 3023, 3024, 3021, 2600, 2601, 2602, 2599], [3023, 3025, 3026, 3024, 2601, 2603, 2604, 2602], [3025, 3027, 3028, 3026, 2603, 2605, 2606, 2604], [3027, 3029, 3030, 3028, 2605, 2607, 2608, 2606], [3021, 3024, 3031, 3019, 2599, 2602, 2609, 2597], [3024, 3026, 3032, 3031, 2602, 2604, 2610, 2609], [3026, 3028, 3033, 3032, 2604, 2606, 2611, 2610], [3028, 3030, 3034, 3033, 2606, 2608, 2612, 2611], [3019, 3031, 3035, 3020, 2597, 2609, 2613, 2598], [3031, 3032, 3036, 3035, 2609, 2610, 2614, 2613], [3032, 3033, 3037, 3036, 2610, 2611, 2615, 2614], [3033, 3034, 3038, 3037, 2611, 2612, 2616, 2615], [3038, 3039, 3040, 3037, 2616, 2617, 2618, 2615], [3039, 3041, 3042, 3040, 2617, 2619, 2620, 2618], [3041, 3004, 3003, 3042, 2619, 2582, 2581, 2620], [3037, 3040, 3043, 3036, 2615, 2618, 2621, 2614], [3040, 3042, 3044, 3043, 2618, 2620, 2622, 2621], [3042, 3003, 3005, 3044, 2620, 2581, 2583, 2622], [3036, 3043, 3045, 3035, 2614, 2621, 2623, 2613], [3043, 3044, 3046, 3045, 2621, 2622, 2624, 2623], [3044, 3005, 3006, 3046, 2622, 2583, 2584, 2624], [3035, 3045, 3016, 3020, 2613, 2623, 2594, 2598], [3045, 3046, 3012, 3016, 2623, 2624, 2590, 2594], [3046, 3006, 3007, 3012, 2624, 2584, 2585, 2590], [2969, 3047, 3048, 2970, 2547, 2625, 2626, 2548], [3047, 3049, 3050, 3048, 2625, 2627, 2628, 2626], [3049, 3051, 3052, 3050, 2627, 2629, 2630, 2628], [3051, 3053, 3054, 3052, 2629, 2631, 2632, 2630], [3053, 3055, 3056, 3054, 2631, 2633, 2634, 2632], [3055, 3057, 3058, 3056, 2633, 2635, 2636, 2634], [3057, 3059, 3060, 3058, 2635, 2637, 2638, 2636], [2970, 3048, 3061, 2978, 2548, 2626, 2639, 2556], [3048, 3050, 3062, 3061, 2626, 2628, 2640, 2639], [3050, 3052, 3063, 3062, 2628, 2630, 2641, 2640], [3052, 3054, 3064, 3063, 2630, 2632, 2642, 2641], [3054, 3056, 3065, 3064, 2632, 2634, 2643, 2642], [3056, 3058, 3066, 3065, 2634, 2636, 2644, 2643], [3058, 3060, 3067, 3066, 2636, 2638, 2645, 2644], [2978, 3061, 3068, 2986, 2556, 2639, 2646, 2564], [3061, 3062, 3069, 3068, 2639, 2640, 2647, 2646], [3062, 3063, 3070, 3069, 2640, 2641, 2648, 2647], [3063, 3064, 3071, 3070, 2641, 2642, 2649, 2648], [3064, 3065, 3072, 3071, 2642, 2643, 2650, 2649], [3065, 3066, 3073, 3072, 2643, 2644, 2651, 2650], [3066, 3067, 3074, 3073, 2644, 2645, 2652, 2651], [2986, 3068, 3075, 2994, 2564, 2646, 2653, 2572], [3068, 3069, 3076, 3075, 2646, 2647, 2654, 2653], [3069, 3070, 3077, 3076, 2647, 2648, 2655, 2654], [3070, 3071, 3078, 3077, 2648, 2649, 2656, 2655], [3071, 3072, 3079, 3078, 2649, 2650, 2657, 2656], [3072, 3073, 3080, 3079, 2650, 2651, 2658, 2657], [3073, 3074, 3081, 3080, 2651, 2652, 2659, 2658], [2994, 3075, 3082, 3002, 2572, 2653, 2660, 2580], [3075, 3076, 3083, 3082, 2653, 2654, 2661, 2660], [3076, 3077, 3084, 3083, 2654, 2655, 2662, 2661], [3077, 3078, 3085, 3084, 
2655, 2656, 2663, 2662], [3078, 3079, 3086, 3085, 2656, 2657, 2664, 2663], [3079, 3080, 3087, 3086, 2657, 2658, 2665, 2664], [3080, 3081, 3088, 3087, 2658, 2659, 2666, 2665], [3002, 3082, 3089, 3010, 2580, 2660, 2667, 2588], [3082, 3083, 3090, 3089, 2660, 2661, 2668, 2667], [3083, 3084, 3091, 3090, 2661, 2662, 2669, 2668], [3084, 3085, 3092, 3091, 2662, 2663, 2670, 2669], [3085, 3086, 3093, 3092, 2663, 2664, 2671, 2670], [3086, 3087, 3094, 3093, 2664, 2665, 2672, 2671], [3087, 3088, 3095, 3094, 2665, 2666, 2673, 2672], [3091, 3092, 3096, 3097, 2669, 2670, 2674, 2675], [3092, 3093, 3098, 3096, 2670, 2671, 2676, 2674], [3093, 3094, 3099, 3098, 2671, 2672, 2677, 2676], [3094, 3095, 3100, 3099, 2672, 2673, 2678, 2677], [3097, 3096, 3101, 3102, 2675, 2674, 2679, 2680], [3096, 3098, 3103, 3101, 2674, 2676, 2681, 2679], [3098, 3099, 3104, 3103, 2676, 2677, 2682, 2681], [3099, 3100, 3105, 3104, 2677, 2678, 2683, 2682], [3102, 3101, 3106, 3107, 2680, 2679, 2684, 2685], [3101, 3103, 3108, 3106, 2679, 2681, 2686, 2684], [3103, 3104, 3109, 3108, 2681, 2682, 2687, 2686], [3104, 3105, 3110, 3109, 2682, 2683, 2688, 2687], [3110, 3111, 3112, 3109, 2688, 2689, 2690, 2687], [3111, 3113, 3114, 3112, 2689, 2691, 2692, 2690], [3113, 3029, 3027, 3114, 2691, 2607, 2605, 2692], [3109, 3112, 3115, 3108, 2687, 2690, 2693, 2686], [3112, 3114, 3116, 3115, 2690, 2692, 2694, 2693], [3114, 3027, 3025, 3116, 2692, 2605, 2603, 2694], [3108, 3115, 3117, 3106, 2686, 2693, 2695, 2684], [3115, 3116, 3118, 3117, 2693, 2694, 2696, 2695], [3116, 3025, 3023, 3118, 2694, 2603, 2601, 2696], [3106, 3117, 3119, 3107, 2684, 2695, 2697, 2685], [3117, 3118, 3120, 3119, 2695, 2696, 2698, 2697], [3118, 3023, 3022, 3120, 2696, 2601, 2600, 2698], [3022, 3018, 3121, 3120, 2600, 2596, 2699, 2698], [3018, 3014, 3122, 3121, 2596, 2592, 2700, 2699], [3014, 3010, 3089, 3122, 2592, 2588, 2667, 2700], [3120, 3121, 3123, 3119, 2698, 2699, 2701, 2697], [3121, 3122, 3124, 3123, 2699, 2700, 2702, 2701], [3122, 3089, 3090, 3124, 2700, 2667, 2668, 2702], [3119, 3123, 3102, 3107, 2697, 2701, 2680, 2685], [3123, 3124, 3097, 3102, 2701, 2702, 2675, 2680], [3124, 3090, 3091, 3097, 2702, 2668, 2669, 2675], [3059, 3125, 3126, 3060, 2637, 2703, 2704, 2638], [3125, 3127, 3128, 3126, 2703, 2705, 2706, 2704], [3127, 3129, 3130, 3128, 2705, 2707, 2708, 2706], [3129, 3131, 3132, 3130, 2707, 2709, 2710, 2708], [3131, 3133, 3134, 3132, 2709, 2711, 2712, 2710], [3133, 3135, 3136, 3134, 2711, 2713, 2714, 2712], [3060, 3126, 3137, 3067, 2638, 2704, 2715, 2645], [3126, 3128, 3138, 3137, 2704, 2706, 2716, 2715], [3128, 3130, 3139, 3138, 2706, 2708, 2717, 2716], [3130, 3132, 3140, 3139, 2708, 2710, 2718, 2717], [3132, 3134, 3141, 3140, 2710, 2712, 2719, 2718], [3134, 3136, 3142, 3141, 2712, 2714, 2720, 2719], [3067, 3137, 3143, 3074, 2645, 2715, 2721, 2652], [3137, 3138, 3144, 3143, 2715, 2716, 2722, 2721], [3138, 3139, 3145, 3144, 2716, 2717, 2723, 2722], [3139, 3140, 3146, 3145, 2717, 2718, 2724, 2723], [3140, 3141, 3147, 3146, 2718, 2719, 2725, 2724], [3141, 3142, 3148, 3147, 2719, 2720, 2726, 2725], [3074, 3143, 3149, 3081, 2652, 2721, 2727, 2659], [3143, 3144, 3150, 3149, 2721, 2722, 2728, 2727], [3144, 3145, 3151, 3150, 2722, 2723, 2729, 2728], [3145, 3146, 3152, 3151, 2723, 2724, 2730, 2729], [3146, 3147, 3153, 3152, 2724, 2725, 2731, 2730], [3147, 3148, 3154, 3153, 2725, 2726, 2732, 2731], [3081, 3149, 3155, 3088, 2659, 2727, 2733, 2666], [3149, 3150, 3156, 3155, 2727, 2728, 2734, 2733], [3150, 3151, 3157, 3156, 2728, 2729, 2735, 2734], [3151, 3152, 3158, 3157, 
2729, 2730, 2736, 2735], [3152, 3153, 3159, 3158, 2730, 2731, 2737, 2736], [3153, 3154, 3160, 3159, 2731, 2732, 2738, 2737], [3088, 3155, 3161, 3095, 2666, 2733, 2739, 2673], [3155, 3156, 3162, 3161, 2733, 2734, 2740, 2739], [3156, 3157, 3163, 3162, 2734, 2735, 2741, 2740], [3157, 3158, 3164, 3163, 2735, 2736, 2742, 2741], [3158, 3159, 3165, 3164, 2736, 2737, 2743, 2742], [3159, 3160, 3166, 3165, 2737, 2738, 2744, 2743], [3163, 3164, 3167, 3168, 2741, 2742, 2745, 2746], [3164, 3165, 3169, 3167, 2742, 2743, 2747, 2745], [3165, 3166, 3170, 3169, 2743, 2744, 2748, 2747], [3168, 3167, 3171, 3172, 2746, 2745, 2749, 2750], [3167, 3169, 3173, 3171, 2745, 2747, 2751, 2749], [3169, 3170, 3174, 3173, 2747, 2748, 2752, 2751], [3172, 3171, 3175, 3176, 2750, 2749, 2753, 2754], [3171, 3173, 3177, 3175, 2749, 2751, 2755, 2753], [3173, 3174, 3178, 3177, 2751, 2752, 2756, 2755], [3178, 3179, 3180, 3177, 2756, 2757, 2758, 2755], [3179, 3181, 3182, 3180, 2757, 2759, 2760, 2758], [3181, 3029, 3113, 3182, 2759, 2607, 2691, 2760], [3177, 3180, 3183, 3175, 2755, 2758, 2761, 2753], [3180, 3182, 3184, 3183, 2758, 2760, 2762, 2761], [3182, 3113, 3111, 3184, 2760, 2691, 2689, 2762], [3175, 3183, 3185, 3176, 2753, 2761, 2763, 2754], [3183, 3184, 3186, 3185, 2761, 2762, 2764, 2763], [3184, 3111, 3110, 3186, 2762, 2689, 2688, 2764], [3110, 3105, 3187, 3186, 2688, 2683, 2765, 2764], [3105, 3100, 3188, 3187, 2683, 2678, 2766, 2765], [3100, 3095, 3161, 3188, 2678, 2673, 2739, 2766], [3186, 3187, 3189, 3185, 2764, 2765, 2767, 2763], [3187, 3188, 3190, 3189, 2765, 2766, 2768, 2767], [3188, 3161, 3162, 3190, 2766, 2739, 2740, 2768], [3185, 3189, 3172, 3176, 2763, 2767, 2750, 2754], [3189, 3190, 3168, 3172, 2767, 2768, 2746, 2750], [3190, 3162, 3163, 3168, 2768, 2740, 2741, 2746], [3135, 3191, 3192, 3136, 2713, 2769, 2770, 2714], [3191, 3193, 3194, 3192, 2769, 2771, 2772, 2770], [3193, 3195, 3196, 3194, 2771, 2773, 2774, 2772], [3195, 3197, 3198, 3196, 2773, 2775, 2776, 2774], [3197, 3199, 3200, 3198, 2775, 2777, 2778, 2776], [3199, 3201, 3202, 3200, 2777, 2779, 2780, 2778], [3136, 3192, 3203, 3142, 2714, 2770, 2781, 2720], [3192, 3194, 3204, 3203, 2770, 2772, 2782, 2781], [3194, 3196, 3205, 3204, 2772, 2774, 2783, 2782], [3196, 3198, 3206, 3205, 2774, 2776, 2784, 2783], [3198, 3200, 3207, 3206, 2776, 2778, 2785, 2784], [3200, 3202, 3208, 3207, 2778, 2780, 2786, 2785], [3142, 3203, 3209, 3148, 2720, 2781, 2787, 2726], [3203, 3204, 3210, 3209, 2781, 2782, 2788, 2787], [3204, 3205, 3211, 3210, 2782, 2783, 2789, 2788], [3205, 3206, 3212, 3211, 2783, 2784, 2790, 2789], [3206, 3207, 3213, 3212, 2784, 2785, 2791, 2790], [3207, 3208, 3214, 3213, 2785, 2786, 2792, 2791], [3148, 3209, 3215, 3154, 2726, 2787, 2793, 2732], [3209, 3210, 3216, 3215, 2787, 2788, 2794, 2793], [3210, 3211, 3217, 3216, 2788, 2789, 2795, 2794], [3211, 3212, 3218, 3217, 2789, 2790, 2796, 2795], [3212, 3213, 3219, 3218, 2790, 2791, 2797, 2796], [3213, 3214, 3220, 3219, 2791, 2792, 2798, 2797], [3154, 3215, 3221, 3160, 2732, 2793, 2799, 2738], [3215, 3216, 3222, 3221, 2793, 2794, 2800, 2799], [3216, 3217, 3223, 3222, 2794, 2795, 2801, 2800], [3217, 3218, 3224, 3223, 2795, 2796, 2802, 2801], [3218, 3219, 3225, 3224, 2796, 2797, 2803, 2802], [3219, 3220, 3226, 3225, 2797, 2798, 2804, 2803], [3160, 3221, 3227, 3166, 2738, 2799, 2805, 2744], [3221, 3222, 3228, 3227, 2799, 2800, 2806, 2805], [3222, 3223, 3229, 3228, 2800, 2801, 2807, 2806], [3223, 3224, 3230, 3229, 2801, 2802, 2808, 2807], [3224, 3225, 3231, 3230, 2802, 2803, 2809, 2808], [3225, 3226, 3232, 3231, 
2803, 2804, 2810, 2809], [3229, 3230, 3233, 3234, 2807, 2808, 2811, 2812], [3230, 3231, 3235, 3233, 2808, 2809, 2813, 2811], [3231, 3232, 3236, 3235, 2809, 2810, 2814, 2813], [3234, 3233, 3237, 3238, 2812, 2811, 2815, 2816], [3233, 3235, 3239, 3237, 2811, 2813, 2817, 2815], [3235, 3236, 3240, 3239, 2813, 2814, 2818, 2817], [3238, 3237, 3241, 3242, 2816, 2815, 2819, 2820], [3237, 3239, 3243, 3241, 2815, 2817, 2821, 2819], [3239, 3240, 3244, 3243, 2817, 2818, 2822, 2821], [3244, 3245, 3246, 3243, 2822, 2823, 2824, 2821], [3245, 3247, 3248, 3246, 2823, 2825, 2826, 2824], [3247, 3029, 3181, 3248, 2825, 2607, 2759, 2826], [3243, 3246, 3249, 3241, 2821, 2824, 2827, 2819], [3246, 3248, 3250, 3249, 2824, 2826, 2828, 2827], [3248, 3181, 3179, 3250, 2826, 2759, 2757, 2828], [3241, 3249, 3251, 3242, 2819, 2827, 2829, 2820], [3249, 3250, 3252, 3251, 2827, 2828, 2830, 2829], [3250, 3179, 3178, 3252, 2828, 2757, 2756, 2830], [3178, 3174, 3253, 3252, 2756, 2752, 2831, 2830], [3174, 3170, 3254, 3253, 2752, 2748, 2832, 2831], [3170, 3166, 3227, 3254, 2748, 2744, 2805, 2832], [3252, 3253, 3255, 3251, 2830, 2831, 2833, 2829], [3253, 3254, 3256, 3255, 2831, 2832, 2834, 2833], [3254, 3227, 3228, 3256, 2832, 2805, 2806, 2834], [3251, 3255, 3238, 3242, 2829, 2833, 2816, 2820], [3255, 3256, 3234, 3238, 2833, 2834, 2812, 2816], [3256, 3228, 3229, 3234, 2834, 2806, 2807, 2812], [3201, 3257, 3258, 3202, 2779, 2835, 2836, 2780], [3257, 3259, 3260, 3258, 2835, 2837, 2838, 2836], [3259, 3261, 3262, 3260, 2837, 2839, 2840, 2838], [3261, 3263, 3264, 3262, 2839, 2841, 2842, 2840], [3263, 3265, 3266, 3264, 2841, 2843, 2844, 2842], [3265, 3267, 3268, 3266, 2843, 2845, 2846, 2844], [3202, 3258, 3269, 3208, 2780, 2836, 2847, 2786], [3258, 3260, 3270, 3269, 2836, 2838, 2848, 2847], [3260, 3262, 3271, 3270, 2838, 2840, 2849, 2848], [3262, 3264, 3272, 3271, 2840, 2842, 2850, 2849], [3264, 3266, 3273, 3272, 2842, 2844, 2851, 2850], [3266, 3268, 3274, 3273, 2844, 2846, 2852, 2851], [3208, 3269, 3275, 3214, 2786, 2847, 2853, 2792], [3269, 3270, 3276, 3275, 2847, 2848, 2854, 2853], [3270, 3271, 3277, 3276, 2848, 2849, 2855, 2854], [3271, 3272, 3278, 3277, 2849, 2850, 2856, 2855], [3272, 3273, 3279, 3278, 2850, 2851, 2857, 2856], [3273, 3274, 3280, 3279, 2851, 2852, 2858, 2857], [3214, 3275, 3281, 3220, 2792, 2853, 2859, 2798], [3275, 3276, 3282, 3281, 2853, 2854, 2860, 2859], [3276, 3277, 3283, 3282, 2854, 2855, 2861, 2860], [3277, 3278, 3284, 3283, 2855, 2856, 2862, 2861], [3278, 3279, 3285, 3284, 2856, 2857, 2863, 2862], [3279, 3280, 3286, 3285, 2857, 2858, 2864, 2863], [3220, 3281, 3287, 3226, 2798, 2859, 2865, 2804], [3281, 3282, 3288, 3287, 2859, 2860, 2866, 2865], [3282, 3283, 3289, 3288, 2860, 2861, 2867, 2866], [3283, 3284, 3290, 3289, 2861, 2862, 2868, 2867], [3284, 3285, 3291, 3290, 2862, 2863, 2869, 2868], [3285, 3286, 3292, 3291, 2863, 2864, 2870, 2869], [3226, 3287, 3293, 3232, 2804, 2865, 2871, 2810], [3287, 3288, 3294, 3293, 2865, 2866, 2872, 2871], [3288, 3289, 3295, 3294, 2866, 2867, 2873, 2872], [3289, 3290, 3296, 3295, 2867, 2868, 2874, 2873], [3290, 3291, 3297, 3296, 2868, 2869, 2875, 2874], [3291, 3292, 3298, 3297, 2869, 2870, 2876, 2875], [3295, 3296, 3299, 3300, 2873, 2874, 2877, 2878], [3296, 3297, 3301, 3299, 2874, 2875, 2879, 2877], [3297, 3298, 3302, 3301, 2875, 2876, 2880, 2879], [3300, 3299, 3303, 3304, 2878, 2877, 2881, 2882], [3299, 3301, 3305, 3303, 2877, 2879, 2883, 2881], [3301, 3302, 3306, 3305, 2879, 2880, 2884, 2883], [3304, 3303, 3307, 3308, 2882, 2881, 2885, 2886], [3303, 3305, 3309, 3307, 
2881, 2883, 2887, 2885], [3305, 3306, 3310, 3309, 2883, 2884, 2888, 2887], [3310, 3311, 3312, 3309, 2888, 2889, 2890, 2887], [3311, 3313, 3314, 3312, 2889, 2891, 2892, 2890], [3313, 3029, 3247, 3314, 2891, 2607, 2825, 2892], [3309, 3312, 3315, 3307, 2887, 2890, 2893, 2885], [3312, 3314, 3316, 3315, 2890, 2892, 2894, 2893], [3314, 3247, 3245, 3316, 2892, 2825, 2823, 2894], [3307, 3315, 3317, 3308, 2885, 2893, 2895, 2886], [3315, 3316, 3318, 3317, 2893, 2894, 2896, 2895], [3316, 3245, 3244, 3318, 2894, 2823, 2822, 2896], [3244, 3240, 3319, 3318, 2822, 2818, 2897, 2896], [3240, 3236, 3320, 3319, 2818, 2814, 2898, 2897], [3236, 3232, 3293, 3320, 2814, 2810, 2871, 2898], [3318, 3319, 3321, 3317, 2896, 2897, 2899, 2895], [3319, 3320, 3322, 3321, 2897, 2898, 2900, 2899], [3320, 3293, 3294, 3322, 2898, 2871, 2872, 2900], [3317, 3321, 3304, 3308, 2895, 2899, 2882, 2886], [3321, 3322, 3300, 3304, 2899, 2900, 2878, 2882], [3322, 3294, 3295, 3300, 2900, 2872, 2873, 2878], [3267, 3323, 3324, 3268, 2845, 2901, 2902, 2846], [3323, 3325, 3326, 3324, 2901, 2903, 2904, 2902], [3325, 3327, 3328, 3326, 2903, 2905, 2906, 2904], [3327, 3329, 3330, 3328, 2905, 2907, 2908, 2906], [3329, 3331, 3332, 3330, 2907, 2909, 2910, 2908], [3331, 2955, 2958, 3332, 2909, 2533, 2536, 2910], [3268, 3324, 3333, 3274, 2846, 2902, 2911, 2852], [3324, 3326, 3334, 3333, 2902, 2904, 2912, 2911], [3326, 3328, 3335, 3334, 2904, 2906, 2913, 2912], [3328, 3330, 3336, 3335, 2906, 2908, 2914, 2913], [3330, 3332, 3337, 3336, 2908, 2910, 2915, 2914], [3332, 2958, 2972, 3337, 2910, 2536, 2550, 2915], [3274, 3333, 3338, 3280, 2852, 2911, 2916, 2858], [3333, 3334, 3339, 3338, 2911, 2912, 2917, 2916], [3334, 3335, 3340, 3339, 2912, 2913, 2918, 2917], [3335, 3336, 3341, 3340, 2913, 2914, 2919, 2918], [3336, 3337, 3342, 3341, 2914, 2915, 2920, 2919], [3337, 2972, 2980, 3342, 2915, 2550, 2558, 2920], [3280, 3338, 3343, 3286, 2858, 2916, 2921, 2864], [3338, 3339, 3344, 3343, 2916, 2917, 2922, 2921], [3339, 3340, 3345, 3344, 2917, 2918, 2923, 2922], [3340, 3341, 3346, 3345, 2918, 2919, 2924, 2923], [3341, 3342, 3347, 3346, 2919, 2920, 2925, 2924], [3342, 2980, 2988, 3347, 2920, 2558, 2566, 2925], [3286, 3343, 3348, 3292, 2864, 2921, 2926, 2870], [3343, 3344, 3349, 3348, 2921, 2922, 2927, 2926], [3344, 3345, 3350, 3349, 2922, 2923, 2928, 2927], [3345, 3346, 3351, 3350, 2923, 2924, 2929, 2928], [3346, 3347, 3352, 3351, 2924, 2925, 2930, 2929], [3347, 2988, 2996, 3352, 2925, 2566, 2574, 2930], [3292, 3348, 3353, 3298, 2870, 2926, 2931, 2876], [3348, 3349, 3354, 3353, 2926, 2927, 2932, 2931], [3349, 3350, 3355, 3354, 2927, 2928, 2933, 2932], [3350, 3351, 3356, 3355, 2928, 2929, 2934, 2933], [3351, 3352, 3357, 3356, 2929, 2930, 2935, 2934], [3352, 2996, 3004, 3357, 2930, 2574, 2582, 2935], [3355, 3356, 3358, 3359, 2933, 2934, 2936, 2937], [3356, 3357, 3360, 3358, 2934, 2935, 2938, 2936], [3357, 3004, 3041, 3360, 2935, 2582, 2619, 2938], [3359, 3358, 3361, 3362, 2937, 2936, 2939, 2940], [3358, 3360, 3363, 3361, 2936, 2938, 2941, 2939], [3360, 3041, 3039, 3363, 2938, 2619, 2617, 2941], [3362, 3361, 3364, 3365, 2940, 2939, 2942, 2943], [3361, 3363, 3366, 3364, 2939, 2941, 2944, 2942], [3363, 3039, 3038, 3366, 2941, 2617, 2616, 2944], [3038, 3034, 3367, 3366, 2616, 2612, 2945, 2944], [3034, 3030, 3368, 3367, 2612, 2608, 2946, 2945], [3030, 3029, 3313, 3368, 2608, 2607, 2891, 2946], [3366, 3367, 3369, 3364, 2944, 2945, 2947, 2942], [3367, 3368, 3370, 3369, 2945, 2946, 2948, 2947], [3368, 3313, 3311, 3370, 2946, 2891, 2889, 2948], [3364, 3369, 3371, 3365, 
2942, 2947, 2949, 2943], [3369, 3370, 3372, 3371, 2947, 2948, 2950, 2949], [3370, 3311, 3310, 3372, 2948, 2889, 2888, 2950], [3310, 3306, 3373, 3372, 2888, 2884, 2951, 2950], [3306, 3302, 3374, 3373, 2884, 2880, 2952, 2951], [3302, 3298, 3353, 3374, 2880, 2876, 2931, 2952], [3372, 3373, 3375, 3371, 2950, 2951, 2953, 2949], [3373, 3374, 3376, 3375, 2951, 2952, 2954, 2953], [3374, 3353, 3354, 3376, 2952, 2931, 2932, 2954], [3371, 3375, 3362, 3365, 2949, 2953, 2940, 2943], [3375, 3376, 3359, 3362, 2953, 2954, 2937, 2940], [3376, 3354, 3355, 3359, 2954, 2932, 2933, 2937] ], dtype='int64')

yt-4.4.0/yt/frontends/stream/sample_data/tetrahedral_mesh.py

import numpy as np

# This is just an export from `pydistmesh` using the '3D Unit Ball' example:
# https://github.com/bfroehle/pydistmesh
_coordinates = np.array([ [ -9.62967621e-01, -2.68325669e-01, -2.63570990e-02], [ -9.23197206e-01, 6.00592270e-02, -3.79604808e-01], [ -9.99398840e-01, -3.01322522e-02, 1.71466163e-02], [ -9.23424144e-01, -7.55355896e-02, 3.76274134e-01], [ -9.76172900e-01, 2.13522703e-01, 3.86590740e-02], [ -7.73044456e-01, -5.39692475e-01, -3.33368116e-01], [ -7.87313888e-01, -5.98131409e-01, -1.49584956e-01], [ -7.74823044e-01, -6.03944560e-01, 1.86816007e-01], [ -8.02947422e-01, -3.76286648e-01, -4.62259448e-01], [ -9.01768086e-01, -3.68179163e-01, -2.26403233e-01], [ -8.75147083e-01, -4.83776676e-01, -8.81535636e-03], [ -9.02155201e-01, -3.89290298e-01, 1.85927562e-01], [ -7.89270641e-01, -4.87597303e-01, 3.73230123e-01], [ -7.52138168e-01, -2.22375528e-01, -6.20352562e-01], [ -8.95457206e-01, -1.92141852e-01, -4.01544395e-01], [ -9.71942256e-01, -1.23239288e-01, -2.00350515e-01], [ -8.05378938e-01, -2.23546612e-01, -1.51473404e-04], [ -9.66811578e-01, -1.69359673e-01, 1.91292119e-01], [ -8.70654663e-01, -2.82390171e-01, 4.02760786e-01], [ -7.11079737e-01, -4.28959611e-01, 5.57098968e-01], [ -7.88471049e-01, 1.78042094e-01, -5.88739685e-01], [ -8.39017123e-01, -2.86996919e-02, -5.43347582e-01], [ -7.35821715e-01, -2.11538194e-01, -2.33136439e-01], [ -7.85079983e-01, -4.35631443e-03, -7.29027161e-02], [ -7.92742715e-01, 5.30843614e-04, 1.49140404e-01], [ -8.17004797e-01, -5.47499959e-02, 5.74025783e-01], [ -7.04701668e-01, -2.24209163e-01, 6.73146203e-01], [ -7.70999764e-01, 3.71216019e-01, -5.17453410e-01], [ -8.90385164e-01, 2.68446994e-01, -3.67628170e-01], [ -9.79246646e-01, 1.15973457e-01, -1.66211200e-01], [ -7.72969304e-01, 2.21158909e-01, 4.35889641e-02], [ -9.70179581e-01, 8.97731254e-02, 2.25149656e-01], [ -8.86033779e-01, 1.44311425e-01, 4.40588645e-01], [ -7.57393407e-01, 1.57682547e-01, 6.33633524e-01], [ -8.20053173e-01, 4.86337456e-01, -3.01643287e-01], [ -9.21552627e-01, 3.65067689e-01, -1.32160273e-01], [ -8.88239706e-01, 4.53748187e-01, 7.17133752e-02], [ -9.12745738e-01, 3.24240717e-01, 2.48521981e-01], [ -8.08848385e-01, 3.57206222e-01, 4.67084580e-01], [ -8.02123077e-01, 5.90021489e-01, -9.20500458e-02], [ -7.45948377e-01, 6.57205497e-01, 1.07897884e-01], [ -7.98204994e-01, 5.25058631e-01, 2.95266357e-01], [ -6.30887505e-01, -7.39668679e-01, -2.34246028e-01], [ -7.15329115e-01, -6.98453892e-01, 2.15967334e-02], [ -5.77957725e-01, -7.92693457e-01, 1.93912223e-01], [ -6.42665949e-01, -5.94884286e-01, -4.82797228e-01], [ -5.46038557e-01, -5.44985033e-01, -2.71787655e-01], [ -5.54213927e-01, -5.84648463e-01, -5.10845856e-02], [ -5.45487961e-01,
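# A minimal sketch of how an export like this can be reproduced with
# pydistmesh (assuming its distmeshnd/huniform API; the 0.2 edge length and
# the bounding box are illustrative values, not necessarily those used here):
#
#     import distmesh as dm
#     fd = lambda p: np.sqrt((p ** 2).sum(1)) - 1.0  # signed distance to the unit sphere
#     p, t = dm.distmeshnd(fd, dm.huniform, 0.2, (-1, -1, -1, 1, 1, 1))
#
# p would hold the vertex coordinates (_coordinates) and t the tetrahedra
# (_connectivity), up to node ordering.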
-5.77578168e-01, 2.00918083e-01], [ -6.46377284e-01, -6.53904664e-01, 3.93198547e-01], [ -6.38955694e-01, -4.18351960e-01, -6.45536411e-01], [ -5.99144091e-01, -3.63886456e-01, -4.05441933e-01], [ -6.82849084e-01, -3.86014000e-01, -1.63498157e-01], [ -6.68766881e-01, -4.19225500e-01, 5.42136051e-02], [ -7.30857178e-01, -2.03227666e-01, 2.06228100e-01], [ -5.43618891e-01, -4.30741083e-01, 4.45877772e-01], [ -5.58595071e-01, -4.21540710e-01, 7.14335339e-01], [ -5.82838566e-01, -2.03229852e-01, -7.86763518e-01], [ -5.09947711e-01, -2.26802295e-01, -5.80406785e-01], [ -6.57701812e-01, -1.38424779e-01, -4.27940410e-01], [ -6.07756412e-01, -1.72786293e-01, -3.54305445e-02], [ -5.06397150e-01, -2.73398545e-01, 1.14366953e-01], [ -6.62607234e-01, -3.85526543e-01, 2.68375874e-01], [ -7.05010419e-01, -2.21273740e-01, 4.52510033e-01], [ -4.93572234e-01, -2.48290925e-01, 5.93171798e-01], [ -5.30304677e-01, -1.91881131e-01, 8.25807836e-01], [ -6.95576819e-01, -1.18317597e-02, -7.18354299e-01], [ -5.87300531e-01, 6.42382159e-03, -5.97639073e-01], [ -6.73330053e-01, 1.22449487e-01, -4.02731110e-01], [ -5.59319757e-01, -1.73769322e-02, -2.09348254e-01], [ -5.95036090e-01, 6.63685740e-02, 3.03841620e-02], [ -5.57266536e-01, -5.60387817e-02, 1.93042065e-01], [ -7.00883535e-01, -6.04503771e-03, 3.64291599e-01], [ -5.78057568e-01, -3.71186623e-02, 5.64052628e-01], [ -6.47137348e-01, -2.91689448e-03, 7.62367854e-01], [ -6.17566835e-01, 2.14112793e-01, -7.56813660e-01], [ -5.39967650e-01, 2.26181937e-01, -5.56616845e-01], [ -7.72970264e-01, -6.67507940e-03, -2.79806814e-01], [ -6.13953196e-01, 1.78370120e-01, -1.61353511e-01], [ -5.29572241e-01, 2.85092966e-01, 5.07316309e-02], [ -5.74738903e-01, 1.74657307e-01, 2.42139593e-01], [ -7.56747405e-01, 2.10707702e-01, 2.69976416e-01], [ -6.31176071e-01, 1.83653152e-01, 4.84414633e-01], [ -5.87915999e-01, 2.24217289e-01, 7.77226727e-01], [ -6.28395851e-01, 4.19919168e-01, -6.54817950e-01], [ -6.42450027e-01, 3.31162532e-01, -3.28240646e-01], [ -8.01427779e-01, 2.08645036e-01, -1.92090617e-01], [ -6.90906413e-01, 3.97333130e-01, -9.61360295e-02], [ -6.72545573e-01, 4.10691484e-01, 1.45013988e-01], [ -6.04539604e-01, 3.87066026e-01, 3.57575500e-01], [ -6.68054353e-01, 3.84236522e-01, 6.37232827e-01], [ -6.75857115e-01, 5.72589964e-01, -4.64066691e-01], [ -5.59143406e-01, 5.23445316e-01, -2.54861670e-01], [ -5.63538680e-01, 5.59721915e-01, -3.14218279e-02], [ -6.39318728e-01, 6.99649299e-01, 3.19002231e-01], [ -6.64343258e-01, 5.64739973e-01, 4.89608822e-01], [ -6.86462131e-01, 6.83192131e-01, -2.49034648e-01], [ -6.20846808e-01, 7.80473026e-01, -7.35601686e-02], [ -5.64621392e-01, 8.16515778e-01, 1.20435325e-01], [ -4.95021894e-01, -7.59837405e-01, -4.21426672e-01], [ -4.11090255e-01, -8.80338747e-01, -2.36661137e-01], [ -5.47447534e-01, -8.36158957e-01, -3.37549521e-02], [ -3.66130310e-01, -9.09648278e-01, 1.96185134e-01], [ -4.62833735e-01, -7.92039672e-01, 3.98067949e-01], [ -4.89583007e-01, -6.02925047e-01, -6.29912587e-01], [ -3.80304973e-01, -6.03973036e-01, -4.02431008e-01], [ -3.91842972e-01, -7.11773257e-01, -1.85685509e-01], [ -3.84974499e-01, -7.08582426e-01, 3.83824107e-02], [ -3.40651461e-01, -6.72213220e-01, 2.28912670e-01], [ -3.84521145e-01, -5.86140658e-01, 4.14820856e-01], [ -5.31073959e-01, -6.17424650e-01, 5.80299277e-01], [ -4.59909273e-01, -3.95671027e-01, -7.94938928e-01], [ -3.98102385e-01, -4.19059333e-01, -5.61501051e-01], [ -3.81193299e-01, -3.93630978e-01, -3.56732489e-01], [ -4.77511655e-01, -3.75863014e-01, -1.26017536e-01], [ 
-3.87190527e-01, -4.74981994e-01, 5.57554008e-02], [ -4.21088239e-01, -3.96851005e-01, 2.59967897e-01], [ -3.08478859e-01, -3.41393062e-01, 4.32292613e-01], [ -3.52402545e-01, -4.36693481e-01, 5.99333058e-01], [ -3.88440519e-01, -3.70654859e-01, 8.43640290e-01], [ -3.89098848e-01, -1.77734623e-01, -9.03887433e-01], [ -3.06623610e-01, -2.35268920e-01, -7.00243202e-01], [ -3.41856207e-01, -1.73691252e-01, -4.53096469e-01], [ -4.95495891e-01, -2.09375496e-01, -2.66566326e-01], [ -3.58979116e-01, -2.38478707e-01, -6.00078514e-02], [ -3.61083064e-01, -1.51054689e-01, 2.47282244e-01], [ -5.22145140e-01, -2.02932177e-01, 3.68602647e-01], [ -2.88279144e-01, -2.58872203e-01, 7.15313501e-01], [ -3.30873242e-01, -1.47363870e-01, 9.32098057e-01], [ -5.21050518e-01, 2.37374153e-02, -8.53195695e-01], [ -3.96061295e-01, -3.80024812e-02, -6.94021762e-01], [ -4.64462964e-01, -1.24913007e-02, -4.30642267e-01], [ -3.26163230e-01, -3.91595358e-02, -2.32146758e-01], [ -4.06814958e-01, -5.71682613e-02, 2.94472862e-03], [ -3.74040953e-01, 1.08402262e-01, 1.74064052e-01], [ -4.51503692e-01, 2.09979658e-02, 4.07300559e-01], [ -3.92321513e-01, -4.58352294e-02, 7.04774801e-01], [ -4.56241586e-01, 5.07189584e-02, 8.88409366e-01], [ -4.25440817e-01, 2.32028056e-01, -8.74736014e-01], [ -2.99098712e-01, 2.14664654e-01, -7.35610261e-01], [ -3.56902556e-01, 1.27886858e-01, -5.76114828e-01], [ -4.38206907e-01, 1.84178546e-01, -3.37738082e-01], [ -4.01961192e-01, 1.38677100e-01, -9.82255735e-02], [ -4.11546738e-01, 3.58182028e-01, 2.47789091e-01], [ -4.07517555e-01, 2.08108781e-01, 4.18341689e-01], [ -4.51178528e-01, 1.49551301e-01, 6.51618844e-01], [ -3.72680102e-01, 2.63356951e-01, 8.89804843e-01], [ -4.64223868e-01, 4.23342694e-01, -7.77995607e-01], [ -3.64395973e-01, 4.03004411e-01, -6.01158644e-01], [ -5.12348000e-01, 4.07384566e-01, -4.58573380e-01], [ -4.44985738e-01, 3.51724152e-01, -1.50316028e-01], [ -3.86770739e-01, 4.64454002e-01, 5.02311117e-02], [ -5.32930370e-01, 5.75775886e-01, 2.03141645e-01], [ -2.66781216e-01, 4.13898117e-01, 4.30887385e-01], [ -4.85656110e-01, 3.71762124e-01, 5.38458720e-01], [ -4.68107747e-01, 4.18318830e-01, 7.78385826e-01], [ -4.97986898e-01, 6.13339175e-01, -6.13044946e-01], [ -3.86761727e-01, 5.93271383e-01, -4.03701051e-01], [ -4.45885002e-01, 7.19487703e-01, -1.83540323e-01], [ -3.94993635e-01, 7.04316770e-01, 5.57745148e-02], [ -4.34407638e-01, 8.12572050e-01, 2.97930026e-01], [ -4.34808796e-01, 5.61352642e-01, 4.02806351e-01], [ -5.03093415e-01, 5.81725526e-01, 6.39134124e-01], [ -5.36185759e-01, 7.42670990e-01, -4.01179053e-01], [ -3.78389380e-01, 8.72517067e-01, -3.09088084e-01], [ -4.08836317e-01, 9.09543790e-01, -7.47192069e-02], [ -3.45264752e-01, 9.26749050e-01, 1.48082571e-01], [ -4.65253738e-01, 7.44368236e-01, 4.79014498e-01], [ -3.49013019e-01, -9.36842287e-01, -2.27253547e-02], [ -3.14733849e-01, -7.41414880e-01, -5.92660594e-01], [ -2.74412512e-01, -8.66322161e-01, -4.17353194e-01], [ -1.92620923e-01, -9.60021017e-01, -2.03117769e-01], [ -1.82051552e-01, -7.61556292e-01, -8.36865303e-02], [ -1.51505552e-01, -9.87579969e-01, 4.16157811e-02], [ -2.54046269e-01, -8.75951727e-01, 4.10084217e-01], [ -3.15836068e-01, -7.42333328e-01, 5.90921999e-01], [ -3.16779695e-01, -5.65032826e-01, -7.61832353e-01], [ -1.99380736e-01, -5.83498966e-01, -5.17971409e-01], [ -1.93074469e-01, -7.27602781e-01, -3.13324024e-01], [ -3.07440769e-01, -5.34538758e-01, -1.59248646e-01], [ -1.83589817e-01, -5.63598366e-01, 3.99710449e-02], [ -1.72637366e-01, -7.87780365e-01, 1.41563414e-01], [ 
-1.66748215e-01, -7.11019457e-01, 3.88397712e-01], [ -1.79936519e-01, -5.56267314e-01, 5.46800560e-01], [ -3.58182261e-01, -5.78742564e-01, 7.32640780e-01], [ -2.39113604e-01, -3.63405744e-01, -9.00422651e-01], [ -1.88667420e-01, -4.25310126e-01, -6.68457429e-01], [ -1.64032518e-01, -4.92768758e-01, -3.23434511e-01], [ -1.42286771e-01, -3.78405111e-01, -8.11029046e-02], [ -2.44971665e-01, -3.13503503e-01, 1.06085864e-01], [ -2.28610813e-01, -4.85183904e-01, 2.66130473e-01], [ -1.28977866e-01, -3.13541733e-01, 5.70211381e-01], [ -1.78031279e-01, -4.65204651e-01, 7.92395828e-01], [ -1.72505832e-01, -3.19168435e-01, 9.31865467e-01], [ -1.71515483e-01, -1.59643018e-01, -9.72160762e-01], [ -1.77643190e-01, -3.20496856e-01, -4.92602235e-01], [ -1.14229852e-01, -1.49074465e-01, -3.42747649e-01], [ -2.52795680e-01, -2.86475470e-01, -2.49110854e-01], [ -1.74693523e-01, -1.40098697e-01, -8.60454071e-02], [ -1.66194986e-01, -2.46030595e-01, 3.10073251e-01], [ -3.13732125e-01, -1.40675034e-01, 5.06100644e-01], [ -1.74134649e-01, -1.01886625e-01, 7.32556325e-01], [ -9.95450127e-02, -1.14585341e-01, 9.88413370e-01], [ -3.18739089e-01, 4.00091666e-02, -9.46997709e-01], [ -1.93708219e-01, 1.00722710e-02, -7.78634557e-01], [ -1.82613620e-01, -7.75287419e-02, -5.72815481e-01], [ -2.28815875e-01, 6.25534005e-02, -4.05248137e-01], [ -9.94383439e-02, 2.68669966e-02, -2.20864387e-01], [ -2.07094208e-01, 9.82448007e-02, -3.77870585e-02], [ -2.15852843e-01, -8.17490291e-02, 1.15598164e-01], [ -2.30191010e-01, -1.75544371e-04, 3.44118680e-01], [ -2.35116145e-01, 7.71682150e-02, 5.52860241e-01], [ -2.08185096e-01, 1.20081075e-01, 7.63698137e-01], [ -2.40651767e-01, 5.78343201e-02, 9.68886948e-01], [ -1.93731688e-01, 2.45702312e-01, -9.49788612e-01], [ -1.12016933e-01, 1.63531041e-01, -5.86901497e-01], [ -2.34602187e-01, 2.94459202e-01, -4.73486651e-01], [ -2.26208974e-01, 2.23166687e-01, -2.41358207e-01], [ -2.85908624e-01, 2.71530866e-01, 5.11117105e-02], [ -1.34482864e-01, 1.24402789e-01, 1.86580767e-01], [ -2.19932943e-01, 2.26626105e-01, 3.55951824e-01], [ -3.01290816e-01, 3.03227979e-01, 6.38886854e-01], [ -1.64692179e-01, 2.77015285e-01, 9.46646195e-01], [ -2.62930799e-01, 4.35815525e-01, -8.60774199e-01], [ -1.45572649e-01, 3.98542626e-01, -6.84093735e-01], [ -2.07918055e-01, 5.55368124e-01, -5.18712780e-01], [ -3.42891725e-01, 4.03334468e-01, -3.28874903e-01], [ -1.88513526e-01, 3.73927074e-01, -1.06486483e-01], [ -1.73582211e-01, 3.83676674e-01, 1.80808854e-01], [ -1.14880644e-01, 4.96441733e-01, 5.98104183e-01], [ -1.47311184e-01, 3.54067258e-01, 7.64393318e-01], [ -2.55934339e-01, 4.92690346e-01, 8.31717402e-01], [ -3.03477171e-01, 6.16396048e-01, -7.26606853e-01], [ -3.37997757e-01, 7.73558160e-01, -5.36064633e-01], [ -2.26044087e-01, 7.08169734e-01, -3.04701855e-01], [ -3.23533824e-01, 5.57868526e-01, -1.56651270e-01], [ -1.95521147e-01, 5.66863051e-01, 3.43378062e-02], [ -2.94013176e-01, 5.82311319e-01, 2.45762618e-01], [ -1.97076732e-01, 6.60820318e-01, 4.16455346e-01], [ -3.15119586e-01, 5.41216314e-01, 6.21821689e-01], [ -1.35031842e-01, 6.70227569e-01, 7.29768050e-01], [ -1.30922603e-01, 7.63763696e-01, -6.32079337e-01], [ -1.69367064e-01, 8.85500714e-01, -4.32669948e-01], [ -1.88284600e-01, 9.58082704e-01, -2.15931571e-01], [ -2.15357169e-01, 7.70995955e-01, -7.15664769e-02], [ -1.19113529e-01, 9.69485318e-01, 2.14266624e-01], [ -2.50862111e-01, 8.84373013e-01, 3.93640159e-01], [ -3.03000063e-01, 7.42360357e-01, 5.97571806e-01], [ -1.87291251e-01, 9.82304419e-01, -1.28846379e-04], [ 
-3.84532991e-02, -9.31861848e-01, -3.60769787e-01], [ 1.44520011e-02, -9.93720630e-01, -1.10952461e-01], [ -1.26363945e-01, -9.56072126e-01, 2.64496206e-01], [ -1.19844694e-01, -6.78852423e-01, -7.24428491e-01], [ -8.69666571e-02, -8.24194281e-01, -5.59589661e-01], [ 1.92508773e-02, -7.86178086e-01, -1.99792242e-01], [ 2.59736870e-02, -8.16027980e-01, 3.21822222e-02], [ 5.78263480e-02, -7.77061434e-01, 2.69277400e-01], [ 8.48713263e-03, -9.03365175e-01, 4.28788212e-01], [ -8.12034498e-02, -8.01090245e-01, 5.93009629e-01], [ -9.44555528e-02, -5.11948311e-01, -8.53807400e-01], [ 1.22227825e-02, -5.84779056e-01, -6.17069967e-01], [ 1.14282616e-02, -6.77557927e-01, -4.03044568e-01], [ -4.94206399e-02, -5.79359611e-01, -1.75221047e-01], [ 3.54180438e-02, -6.16893928e-01, 5.76597508e-02], [ -7.35585843e-02, -6.27256660e-01, 2.41870856e-01], [ 4.69704885e-02, -6.49539057e-01, 4.72840371e-01], [ 9.18934804e-02, -6.86719418e-01, 7.21090861e-01], [ -1.25360666e-01, -6.52218717e-01, 7.47593104e-01], [ 1.10570869e-01, -5.47391887e-01, -8.29539755e-01], [ 3.39626961e-03, -3.66635006e-01, -6.93343934e-01], [ 1.25738074e-02, -4.53904798e-01, -4.69638034e-01], [ 7.03185078e-02, -4.26921601e-01, -6.79651552e-02], [ -2.81962041e-02, -4.08640743e-01, 1.57174041e-01], [ 1.29299577e-01, -5.28129998e-01, 2.68603445e-01], [ -6.51159350e-02, -4.55882802e-01, 4.04884494e-01], [ 2.36123834e-02, -4.72139675e-01, 6.39529107e-01], [ 2.46523286e-02, -4.80995012e-01, 8.76376666e-01], [ 4.97649308e-02, -1.37259058e-01, -9.89284288e-01], [ 2.42859800e-02, -3.39666980e-01, -9.40232171e-01], [ -9.69668126e-02, -1.99263737e-01, -7.68274355e-01], [ 2.08525613e-02, -2.14097485e-01, -5.31519023e-01], [ -1.07013915e-03, -3.39815934e-01, -2.93721956e-01], [ 1.57139516e-02, -1.92449444e-01, -1.42390379e-01], [ -2.04197400e-02, -2.22306197e-01, 9.04305303e-02], [ 3.87266848e-02, -2.76080684e-01, 3.60076717e-01], [ 9.31376446e-02, -2.13251477e-01, 5.74822958e-01], [ -2.06942363e-02, -2.61315125e-01, 7.64083562e-01], [ 5.35699517e-02, -2.57293791e-01, 9.64847224e-01], [ -8.66763821e-02, 5.63452713e-02, -9.94641853e-01], [ 4.50561348e-02, -7.41066799e-03, -8.47406496e-01], [ 3.33458782e-02, -1.14143341e-02, -6.60235249e-01], [ 6.84339708e-03, 8.90210227e-03, -4.41613112e-01], [ 9.49578088e-02, 4.46302002e-02, -2.10962018e-01], [ -6.47323444e-03, 4.23441467e-03, 1.11262491e-02], [ -2.20508593e-02, -7.44769302e-02, 2.50509251e-01], [ -8.08824268e-02, -1.12803625e-01, 4.93814502e-01], [ 4.31118364e-03, 2.66036252e-02, 6.39348740e-01], [ -1.51603188e-03, -1.98988230e-02, 8.37604363e-01], [ -9.23935066e-03, 1.09286102e-01, 9.93967395e-01], [ 5.63301619e-02, 2.24876724e-01, -9.72757613e-01], [ -4.61848355e-02, 2.04133531e-01, -7.79695854e-01], [ -3.96552100e-03, 3.80190538e-01, -4.97582997e-01], [ -1.18507316e-02, 2.17673333e-01, -3.61334139e-01], [ -6.02985784e-03, 2.21576339e-01, -1.22196991e-01], [ -2.32241591e-02, 2.50647992e-01, 6.38152715e-02], [ -6.63051411e-03, 2.68706951e-01, 3.21898185e-01], [ -8.58651811e-03, 8.43630474e-02, 4.22202755e-01], [ -8.42376448e-02, 2.59602722e-01, 5.58129856e-01], [ 2.57323327e-02, 2.12104235e-01, 7.70285538e-01], [ 3.52746047e-02, 3.45035878e-01, 9.37926407e-01], [ -2.97723438e-02, 4.24183729e-01, -9.05086610e-01], [ 9.29712760e-02, 3.76707117e-01, -7.06489067e-01], [ -1.40981487e-01, 4.61266202e-01, -3.29987153e-01], [ 4.92055611e-02, 3.91387717e-01, -2.35631121e-01], [ -1.97462635e-04, 4.51544969e-01, 6.47824769e-03], [ 1.23653883e-01, 3.67736045e-01, 2.05364992e-01], [ -7.63551949e-02, 
4.59397098e-01, 3.72865094e-01], [ 6.82176296e-02, 4.37151216e-01, 6.84436118e-01], [ 2.66618242e-04, 5.35613256e-01, 8.44463362e-01], [ -8.06612085e-02, 6.06532435e-01, -7.90956493e-01], [ 7.59431774e-03, 5.76466083e-01, -5.91739541e-01], [ -4.20662344e-02, 6.92842103e-01, -4.04895589e-01], [ -7.22163505e-02, 5.86341525e-01, -1.76197982e-01], [ -2.82929006e-03, 6.85092763e-01, 3.33712489e-02], [ -3.54501018e-02, 5.73604344e-01, 2.16391791e-01], [ 8.87977875e-03, 7.54111200e-01, 3.05826038e-01], [ 2.83834106e-02, 6.24014787e-01, 5.06040338e-01], [ 9.15391011e-02, 7.01376458e-01, 7.06888717e-01], [ 1.19416186e-01, 7.35331387e-01, -6.67103835e-01], [ 5.68688791e-02, 8.54595574e-01, -5.16170841e-01], [ -1.16699743e-02, 7.89916321e-01, -1.85148337e-01], [ -6.01400665e-03, 8.69990193e-01, 6.23534747e-02], [ -1.95727679e-01, 7.64511507e-01, 1.81280060e-01], [ 1.20789453e-01, 8.40490066e-01, 5.28191591e-01], [ -9.39626086e-02, 8.20506401e-01, 5.63861931e-01], [ 2.81148104e-02, 9.43729028e-01, -3.29522502e-01], [ 3.41125444e-02, 9.95068116e-01, -9.31438613e-02], [ -2.14483362e-03, 9.25392070e-01, 3.79005168e-01], [ 2.18112689e-01, -9.74821369e-01, -4.63697576e-02], [ 1.23853444e-01, -7.28502092e-01, -6.73754426e-01], [ 1.44704613e-01, -8.58537380e-01, -4.91908673e-01], [ 1.87781455e-01, -9.41233073e-01, -2.80746195e-01], [ 2.31410729e-01, -8.07540207e-01, -8.25767126e-02], [ 6.97170007e-02, -9.84450226e-01, 1.61236758e-01], [ 2.30393023e-01, -9.04286027e-01, 3.59424311e-01], [ 1.89807547e-01, -8.03635809e-01, 5.64041295e-01], [ 3.22755516e-01, -5.86935920e-01, -7.42519430e-01], [ 3.31597344e-01, -7.46740775e-01, -5.76559985e-01], [ 2.27776250e-01, -7.23370447e-01, -3.10605531e-01], [ 1.63795538e-01, -5.17366379e-01, -2.81379193e-01], [ 1.78823602e-01, -6.28896291e-01, -8.64287065e-02], [ 2.31860993e-01, -7.39739895e-01, 1.49715345e-01], [ 2.69030761e-01, -6.65754968e-01, 3.56967997e-01], [ 2.50134311e-01, -5.68320717e-01, 5.65130746e-01], [ 2.21765258e-01, -5.48596353e-01, 8.06140318e-01], [ 2.66915440e-01, -3.83657939e-01, -8.84060368e-01], [ 2.01519428e-01, -4.30434869e-01, -6.53813689e-01], [ 2.27114000e-01, -5.81622952e-01, -4.95126812e-01], [ 2.26971538e-01, -3.11986319e-01, -2.06952935e-01], [ 2.66317379e-01, -5.03016563e-01, 1.08925528e-01], [ 4.45366001e-01, -6.19390316e-01, 2.11676959e-01], [ 1.75225358e-01, -4.29561039e-01, 4.56919428e-01], [ 2.04330462e-01, -3.60931919e-01, 6.92428501e-01], [ 2.28824041e-01, -3.32760745e-01, 9.14827768e-01], [ 2.48765341e-01, -1.65057236e-01, -9.54396099e-01], [ 1.39387281e-01, -2.14073764e-01, -7.55554463e-01], [ 1.97279259e-01, -3.34597766e-01, -4.57795165e-01], [ 1.55494033e-01, -1.36566595e-01, -3.29815444e-01], [ 1.69940802e-01, -3.12986488e-01, 2.72903460e-02], [ 1.88913617e-01, -2.97118516e-01, 2.31933019e-01], [ 2.83041605e-01, -2.43544619e-01, 4.58124190e-01], [ 1.86156324e-01, -1.28793574e-01, 7.73894896e-01], [ 1.72667984e-01, -7.32071974e-02, 9.82255808e-01], [ 4.12455593e-01, 1.47398934e-02, -9.10858452e-01], [ 2.69311185e-01, -2.87406987e-02, -7.57067959e-01], [ 2.19557754e-01, -8.59316259e-02, -5.45649375e-01], [ 2.16518363e-01, 8.98099440e-02, -3.96642521e-01], [ 1.94983213e-01, -1.03663895e-01, -8.13730923e-02], [ 1.83294039e-01, -8.80651206e-02, 1.23569725e-01], [ 1.07922312e-01, 1.15031462e-01, 1.92321541e-01], [ 1.65118717e-01, -7.17364718e-02, 3.60784649e-01], [ 2.09732400e-01, -1.20997873e-02, 5.59122842e-01], [ 2.18098013e-01, 9.38478667e-02, 7.75124598e-01], [ 3.73543318e-01, 5.17586579e-03, 9.27598297e-01], [ 2.05717715e-01, 
7.53314927e-02, -9.75707634e-01], [ 1.17345035e-01, 1.84580292e-01, -5.76077495e-01], [ 2.25102402e-01, 3.27630607e-01, -4.35006763e-01], [ 2.22753607e-01, 2.46290020e-01, -2.26657613e-01], [ 1.97348236e-01, 1.22204372e-01, -5.27994103e-03], [ 3.15010615e-01, 2.58654930e-01, 1.42312814e-01], [ 2.09783256e-01, 1.67156009e-01, 3.59093347e-01], [ 1.63202711e-01, 2.05291525e-01, 5.64763439e-01], [ 1.97878214e-01, 1.73604975e-01, 9.64730805e-01], [ 2.05583723e-01, 3.68622696e-01, -9.06560886e-01], [ 1.99505685e-01, 1.68317631e-01, -7.68578105e-01], [ 1.02120411e-01, 5.44777800e-01, -3.74107242e-01], [ 2.91304671e-01, 4.63829103e-01, -2.65149160e-01], [ 1.92393602e-01, 3.56306387e-01, -3.61651193e-02], [ 1.66221667e-01, 5.40222401e-01, 3.16116237e-01], [ 1.03192068e-01, 3.84500538e-01, 4.76252225e-01], [ 2.40927417e-01, 3.27308102e-01, 7.05589375e-01], [ 2.22868825e-01, 4.06677398e-01, 8.85970079e-01], [ 1.51856985e-01, 5.73133196e-01, -8.05268773e-01], [ 2.24221571e-01, 5.13573162e-01, -5.71311460e-01], [ 1.76465420e-01, 7.38746010e-01, -3.11570971e-01], [ 1.57586884e-01, 5.83668501e-01, -1.32718527e-01], [ 1.93245363e-01, 5.67332754e-01, 9.12964174e-02], [ 1.82472617e-01, 7.64831299e-01, 1.71349495e-01], [ 2.37303191e-01, 6.80016753e-01, 4.01108365e-01], [ 2.50808507e-01, 5.30577516e-01, 5.61711743e-01], [ 2.60139071e-01, 5.85769469e-01, 7.67594812e-01], [ 3.37133701e-01, 6.84372138e-01, -6.46510358e-01], [ 2.67125222e-01, 8.34839493e-01, -4.81338901e-01], [ 2.24773284e-01, 9.44894725e-01, -2.38014561e-01], [ 1.93469895e-01, 7.85372670e-01, -6.87807929e-02], [ 1.28005405e-01, 9.72510108e-01, 1.94521731e-01], [ 2.59586513e-01, 8.95602508e-01, 3.61263047e-01], [ 3.03673017e-01, 7.51297047e-01, 5.85948332e-01], [ 2.38520764e-01, 9.71027131e-01, 1.46341398e-02], [ 3.84074854e-01, -8.31551082e-01, -4.01259647e-01], [ 4.13923711e-01, -8.89137254e-01, -1.95197607e-01], [ 4.53213495e-01, -8.91167829e-01, 2.04310780e-02], [ 2.81856189e-01, -9.48065186e-01, 1.47409267e-01], [ 4.10774439e-01, -7.72051309e-01, 4.84975398e-01], [ 5.07299892e-01, -6.19496981e-01, -5.99057852e-01], [ 4.31433334e-01, -5.95575682e-01, -3.92692715e-01], [ 4.08083088e-01, -6.51386299e-01, -1.87980140e-01], [ 4.02323355e-01, -7.02571172e-01, 1.91533619e-02], [ 4.44254565e-01, -8.56088417e-01, 2.64103208e-01], [ 4.65500718e-01, -5.51798409e-01, 4.34918703e-01], [ 3.82917700e-01, -6.42536357e-01, 6.63717609e-01], [ 4.83513927e-01, -4.23010976e-01, -7.66339348e-01], [ 4.17627918e-01, -4.27370036e-01, -5.50514685e-01], [ 3.77960799e-01, -4.21545760e-01, -3.12935201e-01], [ 5.01986263e-01, -3.72801757e-01, -1.34702273e-01], [ 3.35515185e-01, -4.74157052e-01, -7.82543895e-02], [ 4.86118021e-01, -3.67092357e-01, 1.16359356e-01], [ 3.61077213e-01, -4.21637977e-01, 3.08204120e-01], [ 4.16969173e-01, -3.85761421e-01, 5.58811541e-01], [ 4.16529017e-01, -4.37785083e-01, 7.96773368e-01], [ 4.48997766e-01, -2.11045782e-01, -8.68251510e-01], [ 3.35477205e-01, -2.61430553e-01, -6.74970497e-01], [ 3.66362329e-01, -2.22560833e-01, -4.34081998e-01], [ 3.74108151e-01, -1.71647694e-01, -2.03504673e-01], [ 3.57819189e-01, -2.49803706e-01, 2.19688702e-02], [ 3.61600941e-01, -1.66483260e-01, 2.28496360e-01], [ 5.05691199e-01, -2.54501047e-01, 3.39829791e-01], [ 3.62776630e-01, -2.09215076e-01, 6.75053746e-01], [ 3.95673292e-01, -2.08362685e-01, 8.94442640e-01], [ 6.03722991e-01, -2.78474122e-02, -7.96707646e-01], [ 4.40265013e-01, -7.86809664e-02, -6.55722319e-01], [ 4.39918700e-01, -2.76851361e-02, -4.13347503e-01], [ 3.29102633e-01, 4.00043008e-02, 
-2.11496211e-01], [ 4.05774419e-01, -2.14033991e-02, 4.17895530e-03], [ 3.43817328e-01, 5.31430401e-02, 2.18404731e-01], [ 3.96753464e-01, -6.61189080e-02, 4.47365482e-01], [ 4.06048426e-01, -1.47617094e-02, 6.86995863e-01], [ 5.44822069e-01, 1.06719344e-01, 8.31733066e-01], [ 3.75600036e-01, 2.54440923e-01, -8.91170258e-01], [ 4.63684484e-01, 1.47006457e-01, -7.51528889e-01], [ 3.45634061e-01, 1.25786727e-01, -5.64749471e-01], [ 4.33771745e-01, 2.04274120e-01, -3.49729146e-01], [ 3.89430106e-01, 2.08809630e-01, -6.30142458e-02], [ 4.89357046e-01, 2.47087213e-01, 2.84235835e-01], [ 3.90697387e-01, 1.41030133e-01, 4.63836968e-01], [ 4.12857161e-01, 1.92822910e-01, 6.59096000e-01], [ 3.96276758e-01, 2.53444744e-01, 8.82457077e-01], [ 3.61150908e-01, 4.97243556e-01, -7.88871895e-01], [ 3.26489787e-01, 3.30981779e-01, -6.53409650e-01], [ 4.51588202e-01, 4.29381991e-01, -4.71925699e-01], [ 4.56266502e-01, 3.74805015e-01, -2.06872348e-01], [ 3.68187497e-01, 4.73294103e-01, -2.70325319e-02], [ 3.71244032e-01, 4.52825679e-01, 2.00185303e-01], [ 3.04632697e-01, 3.52977527e-01, 3.88234862e-01], [ 4.14725316e-01, 3.85903492e-01, 5.70635473e-01], [ 4.45573712e-01, 4.55986081e-01, 7.70415968e-01], [ 5.29294460e-01, 5.68004268e-01, -6.30252748e-01], [ 3.36555342e-01, 6.17472970e-01, -4.19001181e-01], [ 3.79508522e-01, 6.72901111e-01, -1.89083677e-01], [ 3.79210104e-01, 7.03099639e-01, 2.89663510e-02], [ 3.71716774e-01, 6.80534148e-01, 2.44314680e-01], [ 4.55066493e-01, 5.31958561e-01, 4.03363561e-01], [ 4.74304364e-01, 6.32321249e-01, 6.12539966e-01], [ 5.09305522e-01, 7.26531737e-01, -4.61258627e-01], [ 4.14769083e-01, 8.52822936e-01, -3.17269046e-01], [ 4.18775018e-01, 9.02581503e-01, -9.98704997e-02], [ 3.94632869e-01, 9.03066896e-01, 1.69514249e-01], [ 4.61852907e-01, 7.86451245e-01, 4.10105270e-01], [ 5.78765894e-01, -7.48034418e-01, -3.24768457e-01], [ 6.22385024e-01, -7.74420626e-01, -1.13620313e-01], [ 6.24238372e-01, -7.71683267e-01, 1.21784198e-01], [ 6.66471014e-01, -5.96428244e-01, -4.47313912e-01], [ 6.01827871e-01, -5.26836192e-01, -2.07474643e-01], [ 5.59753905e-01, -5.67596319e-01, 1.41632354e-02], [ 6.15626467e-01, -4.74253980e-01, 2.38852068e-01], [ 5.96741406e-01, -7.18050370e-01, 3.58194584e-01], [ 6.64541504e-01, -4.34981561e-01, -6.07598249e-01], [ 5.94037691e-01, -3.84380270e-01, -3.94164624e-01], [ 7.37730601e-01, -3.09096499e-01, -2.03560626e-01], [ 7.00473230e-01, -3.99470459e-01, 7.71399518e-03], [ 7.25721545e-01, -2.69961817e-01, 2.12956514e-01], [ 6.25103528e-01, -3.50378404e-01, 4.31830925e-01], [ 5.75160121e-01, -5.28839768e-01, 6.24114841e-01], [ 6.37504952e-01, -2.38247168e-01, -7.32683917e-01], [ 5.56308954e-01, -2.19302319e-01, -5.54146677e-01], [ 6.91335007e-01, -1.31863121e-01, -4.04903216e-01], [ 5.50450679e-01, -2.16461343e-01, -2.95913339e-01], [ 5.76819434e-01, -1.89855295e-01, -6.06792724e-02], [ 5.68002996e-01, -1.30984565e-01, 1.46840964e-01], [ 7.05672069e-01, -1.11125915e-01, 3.88609961e-01], [ 5.53006773e-01, -1.71911867e-01, 5.61461544e-01], [ 5.86529417e-01, -3.17786124e-01, 7.44980015e-01], [ 6.75938804e-01, 1.80420223e-01, -7.14531509e-01], [ 5.94310637e-01, 3.60783005e-02, -5.54992825e-01], [ 6.74403271e-01, 9.99821774e-02, -3.80833274e-01], [ 5.47740687e-01, 2.16242697e-03, -2.17769258e-01], [ 6.52781177e-01, 1.98788373e-02, -2.04197681e-02], [ 5.43580756e-01, 6.80793881e-03, 3.12214905e-01], [ 5.94453493e-01, 5.82306725e-02, 5.52769782e-01], [ 7.05676016e-01, 8.06852878e-02, 7.03925597e-01], [ 5.80128414e-01, -1.09106402e-01, 8.07184499e-01], [ 
5.42599917e-01, 3.68797555e-01, -7.54701063e-01], [ 5.43213272e-01, 2.48965238e-01, -5.43315884e-01], [ 7.72945716e-01, 1.57594270e-01, -2.09995542e-01], [ 5.70040110e-01, 1.93374118e-01, -1.49210043e-01], [ 5.37514570e-01, 3.43582877e-01, 5.02783860e-02], [ 5.49305756e-01, 1.26433159e-01, 1.10900907e-01], [ 7.04438426e-01, 1.39590407e-01, 3.72156642e-01], [ 5.76275599e-01, 2.97362798e-01, 4.81580372e-01], [ 6.07520197e-01, 3.02951102e-01, 7.34261425e-01], [ 6.99631576e-01, 4.05751597e-01, -5.88116740e-01], [ 6.49674674e-01, 3.24746847e-01, -3.61182477e-01], [ 6.90837315e-01, 3.92507432e-01, -1.56065623e-01], [ 7.07297492e-01, 4.39430221e-01, 7.45724157e-02], [ 7.34624861e-01, 2.33302182e-01, 1.97568405e-01], [ 6.20925127e-01, 4.17105125e-01, 2.91445572e-01], [ 6.39732628e-01, 4.85274250e-01, 5.96029419e-01], [ 6.84957031e-01, 5.81126866e-01, -4.39460386e-01], [ 5.42010952e-01, 5.44683144e-01, -3.00708180e-01], [ 5.65520586e-01, 5.71231502e-01, -7.77148654e-02], [ 5.42658377e-01, 5.80522401e-01, 1.50091398e-01], [ 6.45989977e-01, 6.39765095e-01, 4.16410344e-01], [ 6.11253189e-01, 7.58595339e-01, -2.25616158e-01], [ 5.69866140e-01, 8.21707530e-01, 7.02263904e-03], [ 5.96124408e-01, 7.72064856e-01, 2.20344163e-01], [ 7.67799046e-01, -5.95832390e-01, -2.35517280e-01], [ 7.81073661e-01, -6.23989069e-01, -2.36976405e-02], [ 7.54107633e-01, -6.22427589e-01, 2.09536569e-01], [ 8.15394602e-01, -4.29633749e-01, -3.88003204e-01], [ 8.95007919e-01, -4.24199922e-01, -1.37895800e-01], [ 8.74435840e-01, -4.68344016e-01, 1.26553719e-01], [ 8.44167782e-01, -4.13217508e-01, 3.41514343e-01], [ 7.04143218e-01, -5.56758821e-01, 4.40683494e-01], [ 7.99734351e-01, -2.56440289e-01, -5.42829021e-01], [ 9.12597668e-01, -2.34115328e-01, -3.35194734e-01], [ 9.73305104e-01, -1.91998325e-01, -1.25753000e-01], [ 7.84069900e-01, -1.86036000e-01, 1.64746740e-02], [ 9.45975754e-01, -2.08962721e-01, 2.47920258e-01], [ 8.65952407e-01, -2.10822307e-01, 4.53519992e-01], [ 7.36999230e-01, -3.54489940e-01, 5.75472865e-01], [ 7.69123269e-01, -4.39753477e-02, -6.37585733e-01], [ 8.89753514e-01, -2.12586220e-02, -4.55946002e-01], [ 7.61925948e-01, -8.17295281e-02, -2.12584521e-01], [ 8.66579005e-01, 1.98271619e-02, -2.65740411e-02], [ 7.71523404e-01, -7.95716890e-03, 1.88624841e-01], [ 8.49571672e-01, 3.23679601e-02, 5.26479144e-01], [ 7.49529893e-01, -1.37902806e-01, 6.47447107e-01], [ 8.13235532e-01, 2.00148039e-01, -5.46432733e-01], [ 9.17920585e-01, 1.93823705e-01, -3.46199610e-01], [ 9.83931066e-01, 1.52608447e-01, -9.26839775e-02], [ 7.63532804e-01, 2.26521101e-01, 5.58254364e-04], [ 9.73502133e-01, 1.58275744e-01, 1.65052678e-01], [ 8.97439314e-01, 2.27709113e-01, 3.77824349e-01], [ 7.73925499e-01, 2.60018372e-01, 5.77433777e-01], [ 8.30247868e-01, 4.04004588e-01, -3.84016628e-01], [ 9.13121193e-01, 3.66274908e-01, -1.79031780e-01], [ 8.53887365e-01, 5.20387043e-01, -8.58448400e-03], [ 8.84701271e-01, 4.04948543e-01, 2.30911967e-01], [ 7.88854593e-01, 4.39735361e-01, 4.29349790e-01], [ 7.78387183e-01, 5.87137721e-01, -2.22222165e-01], [ 7.24110054e-01, 6.89672471e-01, 4.06359182e-03], [ 7.67279635e-01, 6.00429434e-01, 2.25314127e-01], [ 9.50421204e-01, -3.05112721e-01, 6.00480158e-02], [ 9.68789916e-01, -7.78690666e-03, -2.47760899e-01], [ 9.94841187e-01, -5.83791196e-02, 8.29632029e-02], [ 9.43181204e-01, -1.36099283e-03, 3.32276037e-01], [ 9.43413869e-01, 3.28538485e-01, 4.50858739e-02] ]) _connectivity = np.array([ [ 95, 162, 167, 161], [239, 162, 167, 161], [342, 420, 427, 421], [273, 264, 272, 263], [180, 264, 272, 
263], [243, 164, 234, 242], [243, 330, 234, 242], [190, 180, 264, 272], [238, 160, 167, 161], [522, 521, 530, 469], [522, 521, 468, 469], [ 94, 95, 167, 161], [ 94, 160, 167, 161], [ 94, 160, 98, 167], [466, 529, 465, 457], [466, 529, 465, 475], [474, 529, 465, 475], [474, 464, 465, 391], [398, 408, 409, 481], [398, 408, 409, 326], [398, 325, 408, 326], [153, 238, 239, 161], [ 1, 77, 15, 14], [ 1, 21, 77, 14], [335, 243, 330, 242], [335, 330, 413, 404], [335, 330, 413, 336], [335, 243, 330, 336], [148, 156, 147, 232], [476, 521, 530, 469], [476, 521, 468, 469], [476, 477, 483, 469], [476, 477, 483, 411], [585, 527, 564, 572], [574, 579, 532, 575], [412, 484, 413, 404], [412, 335, 413, 404], [412, 477, 484, 404], [412, 477, 483, 411], [412, 477, 484, 483], [ 92, 93, 235, 158], [ 92, 93, 235, 150], [ 92, 87, 93, 150], [ 92, 87, 93, 39], [244, 243, 330, 234], [244, 235, 234, 158], [244, 243, 164, 234], [244, 165, 243, 164], [244, 164, 234, 158], [244, 165, 164, 158], [ 89, 94, 41, 95], [237, 238, 160, 161], [237, 153, 238, 161], [237, 153, 228, 316], [237, 153, 238, 316], [ 35, 87, 39, 36], [ 30, 35, 4, 36], [ 30, 35, 87, 36], [ 30, 78, 23, 70], [ 96, 92, 163, 91], [ 96, 92, 163, 158], [ 96, 92, 93, 158], [ 96, 92, 93, 39], [233, 164, 234, 242], [254, 262, 263, 255], [248, 244, 165, 243], [248, 330, 331, 336], [248, 244, 330, 331], [248, 243, 330, 336], [248, 244, 243, 330], [536, 474, 529, 475], [334, 238, 325, 326], [460, 522, 468, 469], [534, 527, 578, 572], [534, 527, 564, 572], [534, 527, 564, 526], [534, 564, 526, 516], [573, 527, 578, 572], [381, 464, 390, 391], [381, 464, 465, 391], [381, 382, 465, 391], [306, 381, 390, 391], [528, 536, 474, 481], [528, 573, 527, 578], [528, 573, 536, 578], [528, 474, 464, 465], [528, 536, 474, 529], [528, 573, 536, 529], [528, 474, 529, 465], [399, 398, 326, 316], [399, 398, 409, 326], [399, 317, 307, 391], [399, 317, 409, 326], [221, 231, 230, 239], [359, 273, 272, 263], [359, 369, 273, 272], [ 16, 23, 15, 2], [123, 197, 122, 132], [ 76, 148, 216, 149], [ 76, 148, 216, 140], [ 84, 76, 27, 149], [ 84, 27, 91, 149], [ 84, 156, 91, 149], [ 84, 148, 156, 149], [ 84, 148, 156, 147], [ 84, 76, 148, 149], [ 84, 76, 148, 147], [ 84, 76, 147, 75], [ 84, 76, 20, 75], [ 84, 76, 20, 27], [223, 148, 147, 232], [329, 412, 328, 404], [329, 412, 335, 404], [467, 476, 521, 468], [467, 521, 458, 468], [467, 393, 458, 468], [299, 300, 214, 288], [386, 477, 396, 469], [386, 313, 387, 302], [386, 313, 387, 396], [386, 460, 468, 469], [ 58, 122, 121, 112], [215, 206, 216, 140], [215, 206, 216, 302], [403, 467, 476, 411], [403, 476, 468, 469], [403, 467, 476, 468], [403, 386, 468, 469], [403, 386, 477, 469], [403, 412, 328, 411], [403, 412, 477, 411], [403, 476, 477, 469], [403, 476, 477, 411], [403, 386, 477, 396], [142, 134, 133, 70], [571, 585, 564, 584], [571, 585, 564, 572], [571, 534, 564, 572], [557, 585, 564, 584], [557, 556, 564, 584], [582, 557, 556, 584], [489, 420, 427, 421], [146, 221, 231, 230], [338, 342, 420, 341], [338, 342, 420, 421], [340, 249, 254, 341], [424, 425, 491, 488], [424, 425, 419, 488], [204, 194, 203, 288], [204, 214, 203, 288], [204, 300, 214, 288], [204, 194, 120, 203], [204, 194, 121, 120], [494, 495, 547, 552], [350, 254, 262, 263], [350, 254, 263, 255], [350, 254, 342, 255], [348, 342, 420, 341], [348, 419, 420, 341], [348, 254, 342, 341], [348, 350, 254, 342], [348, 340, 419, 341], [348, 340, 254, 341], [ 8, 5, 52, 9], [ 10, 16, 52, 9], [115, 107, 179, 47], [115, 107, 180, 179], [415, 418, 331, 336], [415, 337, 325, 416], [414, 330, 
413, 336], [414, 418, 413, 336], [414, 330, 331, 336], [414, 418, 331, 336], [414, 330, 413, 404], [414, 418, 485, 413], [531, 567, 574, 568], [531, 574, 579, 532], [531, 522, 530, 469], [531, 567, 522, 530], [531, 567, 574, 530], [538, 477, 483, 469], [538, 477, 396, 469], [538, 531, 579, 532], [303, 313, 387, 302], [303, 388, 304, 293], [322, 244, 330, 234], [322, 244, 235, 234], [322, 312, 235, 234], [ 34, 92, 87, 39], [ 34, 35, 87, 39], [ 34, 96, 92, 91], [ 34, 96, 92, 39], [166, 248, 244, 165], [166, 160, 98, 167], [159, 93, 235, 158], [159, 244, 235, 158], [159, 244, 165, 158], [159, 166, 165, 98], [159, 166, 244, 165], [159, 166, 160, 98], [143, 237, 153, 161], [143, 237, 153, 228], [332, 248, 244, 331], [332, 166, 248, 244], [332, 159, 166, 244], [332, 159, 237, 160], [332, 159, 166, 160], [332, 237, 238, 160], [ 29, 4, 23, 2], [ 29, 30, 4, 23], [ 29, 23, 15, 2], [ 29, 23, 77, 15], [ 29, 1, 77, 15], [ 79, 87, 93, 150], [ 79, 30, 78, 70], [ 79, 30, 78, 87], [ 79, 78, 87, 150], [ 79, 143, 218, 134], [ 79, 142, 78, 70], [ 79, 142, 78, 150], [ 79, 142, 218, 150], [ 79, 142, 134, 70], [ 79, 142, 218, 134], [ 97, 165, 164, 158], [ 97, 163, 164, 158], [ 97, 96, 163, 158], [ 97, 159, 165, 158], [ 97, 159, 165, 98], [ 97, 96, 93, 158], [ 97, 159, 93, 158], [ 97, 159, 93, 98], [ 97, 40, 93, 98], [ 97, 96, 93, 39], [ 97, 40, 93, 39], [226, 148, 216, 149], [226, 92, 235, 150], [225, 148, 156, 232], [225, 233, 156, 232], [225, 233, 241, 232], [225, 226, 148, 216], [225, 226, 312, 216], [279, 204, 194, 121], [279, 277, 194, 278], [172, 254, 262, 255], [172, 262, 263, 255], [172, 262, 180, 263], [172, 262, 180, 179], [172, 107, 180, 179], [482, 474, 409, 481], [482, 474, 409, 475], [482, 536, 474, 481], [482, 536, 474, 475], [533, 581, 540, 580], [541, 482, 487, 481], [541, 482, 536, 481], [480, 487, 408, 481], [480, 487, 408, 416], [480, 486, 487, 416], [480, 398, 408, 481], [480, 541, 487, 481], [462, 389, 525, 471], [462, 389, 525, 526], [462, 524, 525, 526], [462, 388, 387, 452], [462, 454, 389, 526], [523, 531, 524, 532], [523, 574, 532, 575], [523, 531, 574, 532], [523, 574, 568, 575], [523, 531, 574, 568], [523, 583, 568, 561], [470, 524, 525, 532], [470, 462, 525, 471], [470, 462, 524, 525], [470, 531, 524, 532], [470, 538, 531, 532], [470, 538, 396, 469], [470, 538, 531, 469], [246, 334, 337, 325], [246, 334, 238, 325], [246, 332, 238, 325], [246, 332, 238, 160], [246, 238, 160, 167], [246, 332, 166, 160], [246, 166, 160, 167], [417, 408, 409, 326], [417, 408, 409, 481], [417, 482, 409, 481], [417, 487, 408, 481], [417, 482, 487, 481], [417, 487, 408, 416], [308, 317, 307, 391], [308, 317, 230, 307], [229, 317, 230, 307], [229, 153, 307, 316], [229, 221, 230, 239], [229, 221, 153, 239], [229, 221, 230, 307], [229, 221, 153, 307], [229, 399, 307, 316], [229, 399, 317, 307], [229, 231, 230, 239], [229, 240, 231, 239], [229, 317, 230, 318], [229, 240, 317, 318], [229, 153, 238, 316], [229, 153, 238, 239], [229, 238, 326, 316], [229, 399, 326, 316], [229, 399, 317, 326], [229, 231, 230, 318], [229, 240, 231, 318], [229, 334, 238, 326], [229, 240, 334, 326], [229, 240, 334, 238], [400, 399, 317, 391], [400, 399, 317, 409], [400, 474, 465, 391], [400, 308, 317, 391], [400, 401, 308, 317], [400, 474, 409, 475], [400, 382, 465, 391], [400, 474, 465, 475], [400, 308, 382, 391], [400, 382, 466, 465], [400, 466, 465, 475], [400, 401, 466, 475], [400, 308, 382, 392], [400, 401, 308, 392], [400, 382, 466, 392], [400, 401, 466, 392], [ 71, 134, 133, 70], [ 71, 134, 125, 133], [ 71, 60, 133, 70], [208, 
142, 134, 133], [208, 142, 218, 134], [208, 142, 133, 132], [208, 303, 304, 293], [503, 504, 496, 553], [503, 504, 496, 431], [560, 503, 504, 553], [560, 503, 504, 449], [569, 523, 568, 575], [569, 523, 583, 568], [459, 521, 458, 468], [459, 522, 521, 468], [459, 460, 522, 468], [383, 466, 465, 457], [383, 382, 466, 465], [383, 382, 466, 392], [518, 528, 529, 465], [518, 528, 573, 529], [518, 528, 573, 527], [518, 528, 464, 465], [518, 528, 464, 527], [565, 585, 527, 572], [565, 573, 527, 572], [565, 518, 573, 527], [219, 218, 304, 228], [219, 208, 218, 304], [219, 208, 218, 134], [219, 208, 304, 293], [380, 306, 381, 390], [315, 399, 398, 316], [535, 534, 527, 578], [535, 528, 527, 578], [535, 533, 581, 540], [535, 533, 525, 540], [535, 533, 534, 525], [535, 541, 540, 481], [535, 528, 536, 481], [535, 528, 536, 578], [535, 541, 581, 578], [535, 541, 581, 540], [535, 541, 536, 481], [535, 541, 536, 578], [473, 474, 409, 481], [473, 399, 390, 391], [473, 528, 474, 481], [473, 528, 474, 464], [473, 464, 390, 391], [473, 474, 464, 391], [473, 398, 409, 481], [473, 399, 398, 409], [473, 315, 399, 390], [473, 315, 399, 398], [473, 400, 474, 409], [473, 400, 399, 409], [473, 400, 474, 391], [473, 400, 399, 391], [473, 315, 389, 390], [368, 359, 369, 272], [124, 123, 60, 133], [124, 123, 133, 132], [124, 123, 197, 132], [113, 105, 46, 179], [113, 123, 197, 122], [113, 58, 122, 112], [ 68, 1, 21, 77], [ 68, 20, 1, 21], [ 68, 20, 1, 28], [ 68, 20, 27, 28], [ 68, 76, 20, 27], [ 67, 66, 20, 75], [ 67, 76, 20, 75], [ 67, 66, 129, 75], [ 67, 68, 76, 20], [ 67, 76, 140, 75], [ 67, 66, 20, 21], [ 67, 68, 20, 21], [ 67, 66, 129, 57], [ 67, 66, 13, 21], [ 67, 66, 13, 57], [ 67, 58, 13, 57], [139, 76, 148, 147], [139, 223, 148, 147], [139, 223, 138, 147], [139, 138, 147, 75], [139, 76, 147, 75], [139, 76, 148, 140], [139, 76, 140, 75], [139, 129, 140, 75], [139, 138, 129, 75], [139, 223, 138, 214], [139, 148, 216, 140], [139, 215, 216, 140], [139, 204, 215, 140], [139, 204, 215, 300], [139, 204, 300, 214], [139, 138, 129, 203], [139, 138, 214, 203], [139, 204, 214, 203], [310, 299, 300, 214], [301, 215, 216, 302], [301, 386, 313, 302], [301, 312, 313, 302], [301, 312, 216, 302], [301, 225, 312, 216], [365, 355, 364, 278], [365, 277, 364, 278], [365, 279, 277, 278], [185, 279, 194, 278], [185, 279, 259, 278], [185, 194, 121, 120], [185, 279, 194, 121], [551, 494, 547, 552], [551, 494, 547, 550], [551, 494, 501, 552], [509, 557, 585, 564], [509, 585, 527, 564], [509, 565, 585, 527], [509, 446, 510, 501], [509, 565, 518, 527], [509, 518, 510, 566], [509, 565, 518, 566], [438, 446, 510, 501], [438, 511, 510, 501], [438, 511, 502, 439], [555, 582, 556, 549], [555, 582, 556, 584], [222, 146, 231, 230], [222, 231, 230, 318], [212, 146, 221, 145], [212, 137, 146, 145], [212, 146, 221, 230], [212, 222, 146, 230], [212, 221, 230, 307], [212, 308, 230, 307], [212, 222, 308, 230], [212, 222, 308, 298], [155, 146, 221, 231], [155, 146, 221, 145], [155, 231, 239, 162], [155, 221, 231, 239], [430, 438, 502, 439], [422, 338, 343, 255], [422, 338, 342, 421], [422, 342, 427, 421], [422, 428, 427, 421], [256, 344, 343, 257], [256, 422, 344, 343], [256, 264, 263, 255], [256, 273, 264, 263], [292, 303, 387, 302], [292, 388, 377, 452], [292, 282, 377, 293], [292, 388, 387, 452], [292, 303, 388, 387], [292, 388, 377, 293], [292, 303, 388, 293], [205, 279, 204, 121], [205, 215, 206, 140], [205, 204, 215, 140], [437, 369, 273, 361], [437, 359, 369, 273], [437, 359, 369, 436], [437, 494, 446, 436], [437, 494, 446, 501], [437, 438, 
446, 501], [426, 493, 489, 427], [426, 489, 420, 488], [426, 489, 420, 427], [426, 419, 420, 488], [426, 425, 419, 488], [426, 348, 419, 420], [426, 348, 425, 419], [426, 342, 420, 427], [426, 348, 342, 420], [426, 350, 342, 427], [426, 348, 350, 342], [499, 494, 547, 550], [499, 493, 494, 547], [499, 546, 547, 550], [499, 546, 493, 547], [499, 493, 494, 436], [499, 546, 549, 550], [499, 582, 549, 550], [499, 582, 556, 549], [490, 489, 427, 421], [490, 493, 489, 427], [490, 546, 493, 547], [490, 546, 493, 489], [490, 428, 427, 421], [261, 348, 340, 254], [261, 340, 253, 339], [261, 340, 249, 254], [261, 340, 249, 253], [ 6, 42, 43, 47], [ 6, 42, 46, 47], [ 6, 42, 5, 46], [ 6, 5, 52, 9], [ 6, 10, 52, 9], [ 6, 52, 46, 47], [ 6, 5, 52, 46], [106, 107, 179, 47], [106, 46, 179, 47], [106, 42, 46, 47], [106, 172, 107, 179], [106, 105, 46, 179], [106, 168, 172, 107], [106, 105, 99, 46], [106, 42, 99, 46], [ 50, 58, 13, 57], [ 22, 8, 9, 14], [ 22, 8, 52, 9], [ 22, 123, 60, 52], [ 22, 77, 15, 14], [ 22, 9, 15, 14], [ 22, 16, 52, 9], [ 22, 16, 60, 52], [ 22, 16, 60, 23], [ 22, 23, 77, 15], [ 22, 16, 23, 15], [ 0, 10, 16, 9], [ 0, 22, 9, 15], [ 0, 22, 16, 15], [ 0, 22, 16, 9], [ 0, 16, 15, 2], [ 0, 17, 16, 2], [ 0, 10, 16, 11], [ 0, 17, 16, 11], [ 53, 10, 16, 52], [ 53, 16, 60, 52], [ 53, 6, 52, 47], [ 53, 6, 10, 52], [ 53, 6, 43, 47], [ 53, 6, 10, 43], [ 53, 10, 43, 7], [ 53, 10, 11, 7], [ 53, 10, 16, 11], [ 48, 116, 109, 55], [ 48, 115, 107, 47], [ 48, 53, 115, 47], [ 48, 53, 43, 47], [ 48, 53, 43, 7], [245, 332, 166, 248], [245, 248, 331, 336], [245, 415, 331, 336], [245, 332, 248, 331], [245, 415, 325, 331], [245, 415, 337, 325], [245, 246, 332, 166], [245, 332, 325, 331], [245, 246, 337, 325], [245, 246, 332, 325], [479, 543, 486, 485], [479, 486, 418, 485], [479, 414, 418, 485], [407, 415, 486, 418], [407, 479, 486, 418], [407, 479, 414, 418], [407, 415, 418, 331], [407, 414, 418, 331], [407, 479, 480, 486], [407, 415, 325, 331], [407, 415, 486, 416], [407, 480, 486, 416], [407, 415, 325, 416], [407, 325, 408, 416], [407, 480, 408, 416], [407, 398, 325, 408], [407, 480, 398, 408], [539, 542, 579, 580], [539, 538, 542, 579], [539, 538, 579, 532], [539, 533, 540, 580], [539, 470, 538, 532], [539, 479, 540, 471], [539, 543, 542, 580], [539, 470, 525, 471], [539, 470, 525, 532], [539, 543, 540, 580], [539, 479, 543, 540], [539, 525, 540, 471], [539, 533, 525, 540], [539, 533, 525, 532], [537, 531, 574, 579], [537, 538, 531, 579], [537, 531, 574, 530], [537, 531, 530, 469], [537, 538, 531, 469], [537, 538, 542, 483], [537, 538, 542, 579], [537, 476, 530, 469], [537, 476, 483, 469], [537, 538, 483, 469], [544, 480, 486, 487], [544, 541, 581, 540], [544, 480, 541, 487], [544, 479, 543, 540], [544, 479, 480, 540], [544, 479, 543, 486], [544, 479, 480, 486], [544, 581, 540, 580], [544, 543, 540, 580], [544, 541, 540, 481], [544, 480, 540, 481], [544, 480, 541, 481], [314, 315, 304, 228], [323, 414, 330, 331], [323, 244, 330, 331], [323, 322, 244, 330], [323, 407, 414, 331], [323, 332, 244, 331], [323, 407, 325, 331], [323, 332, 325, 331], [ 86, 34, 35, 28], [ 86, 34, 35, 87], [ 86, 29, 35, 28], [ 86, 30, 78, 87], [ 86, 30, 35, 87], [ 86, 68, 78, 77], [ 86, 68, 1, 77], [ 86, 68, 1, 28], [ 86, 29, 1, 77], [ 86, 29, 1, 28], [ 86, 78, 23, 77], [ 86, 29, 23, 77], [ 86, 30, 78, 23], [ 86, 29, 30, 23], [ 86, 30, 35, 4], [ 86, 29, 35, 4], [ 86, 29, 30, 4], [151, 159, 93, 235], [151, 93, 235, 150], [151, 79, 93, 150], [151, 79, 218, 150], [151, 79, 143, 218], [151, 143, 237, 228], [151, 143, 218, 228], [152, 
159, 93, 98], [152, 40, 93, 98], [152, 40, 94, 98], [152, 94, 160, 98], [152, 159, 160, 98], [152, 151, 159, 93], [152, 159, 237, 160], [152, 151, 159, 237], [152, 89, 94, 41], [152, 40, 94, 41], [152, 94, 160, 161], [152, 237, 160, 161], [152, 143, 89, 161], [152, 143, 237, 161], [152, 151, 143, 237], [152, 94, 95, 161], [152, 89, 95, 161], [152, 89, 94, 95], [ 80, 79, 30, 70], [ 80, 79, 143, 134], [ 80, 144, 143, 89], [ 80, 144, 143, 134], [ 80, 79, 134, 70], [ 80, 71, 134, 70], [227, 226, 235, 150], [227, 151, 235, 150], [227, 151, 218, 150], [227, 226, 312, 235], [227, 142, 218, 150], [227, 208, 218, 304], [227, 208, 303, 304], [227, 322, 312, 235], [227, 218, 304, 228], [227, 314, 304, 228], [227, 314, 303, 304], [227, 322, 312, 313], [227, 314, 322, 313], [227, 314, 303, 313], [ 85, 86, 68, 78], [ 85, 76, 27, 149], [ 85, 68, 76, 27], [ 85, 226, 92, 149], [ 85, 226, 92, 150], [ 85, 78, 87, 150], [ 85, 92, 87, 150], [ 85, 86, 78, 87], [ 85, 86, 34, 87], [ 85, 27, 91, 149], [ 85, 34, 27, 91], [ 85, 92, 91, 149], [ 85, 34, 92, 91], [ 85, 34, 92, 87], [ 85, 34, 27, 28], [ 85, 86, 34, 28], [ 85, 68, 27, 28], [ 85, 86, 68, 28], [157, 233, 156, 163], [157, 225, 233, 156], [157, 225, 233, 234], [157, 156, 91, 149], [157, 156, 163, 91], [157, 148, 156, 149], [157, 225, 148, 156], [157, 233, 164, 234], [157, 233, 163, 164], [157, 92, 91, 149], [157, 92, 163, 91], [157, 226, 92, 149], [157, 226, 148, 149], [157, 225, 226, 148], [157, 164, 234, 158], [157, 163, 164, 158], [157, 225, 312, 234], [157, 225, 226, 312], [157, 92, 163, 158], [157, 226, 92, 235], [157, 312, 235, 234], [157, 226, 312, 235], [157, 235, 234, 158], [157, 92, 235, 158], [321, 233, 241, 242], [321, 225, 233, 241], [321, 233, 234, 242], [321, 225, 233, 234], [321, 329, 241, 242], [321, 330, 234, 242], [321, 335, 330, 242], [321, 329, 335, 242], [321, 225, 312, 234], [321, 335, 330, 404], [321, 329, 335, 404], [321, 322, 330, 404], [321, 322, 330, 234], [321, 322, 312, 234], [576, 533, 581, 580], [576, 539, 579, 580], [576, 539, 533, 580], [576, 539, 579, 532], [576, 539, 533, 532], [576, 579, 532, 575], [577, 533, 534, 586], [577, 576, 533, 586], [577, 576, 533, 581], [577, 534, 578, 572], [577, 535, 581, 578], [577, 535, 533, 581], [577, 535, 534, 578], [577, 535, 533, 534], [577, 571, 534, 572], [577, 571, 534, 586], [472, 473, 315, 398], [472, 473, 315, 389], [472, 479, 540, 471], [472, 479, 480, 540], [472, 525, 540, 471], [472, 389, 525, 471], [472, 480, 398, 481], [472, 473, 398, 481], [472, 535, 525, 540], [472, 480, 540, 481], [472, 535, 540, 481], [570, 523, 524, 532], [570, 524, 525, 532], [570, 523, 524, 516], [570, 533, 525, 532], [570, 533, 534, 525], [570, 524, 525, 526], [570, 534, 525, 526], [570, 533, 534, 586], [570, 523, 532, 575], [570, 524, 526, 516], [570, 534, 526, 516], [570, 576, 533, 532], [570, 576, 533, 586], [570, 576, 532, 575], [570, 576, 586, 575], [570, 569, 586, 575], [570, 569, 523, 575], [570, 571, 534, 586], [570, 569, 571, 586], [570, 534, 564, 516], [570, 571, 534, 564], [376, 292, 387, 452], [376, 386, 387, 302], [376, 292, 387, 302], [247, 240, 334, 238], [247, 229, 238, 239], [247, 229, 240, 239], [247, 229, 240, 238], [247, 239, 167, 161], [247, 238, 167, 161], [247, 238, 239, 161], [247, 246, 238, 167], [247, 246, 334, 238], [247, 239, 162, 167], [247, 231, 239, 162], [247, 240, 231, 239], [410, 401, 317, 318], [410, 400, 317, 409], [410, 400, 401, 317], [410, 482, 409, 475], [410, 417, 482, 409], [410, 400, 409, 475], [410, 400, 401, 475], [327, 240, 334, 326], [327, 417, 409, 326], 
[327, 410, 417, 409], [327, 240, 317, 318], [327, 410, 317, 318], [327, 317, 409, 326], [327, 410, 317, 409], [327, 229, 317, 326], [327, 229, 240, 326], [327, 229, 240, 317], [333, 417, 408, 326], [333, 325, 408, 326], [333, 417, 408, 416], [333, 334, 325, 326], [333, 334, 337, 325], [333, 325, 408, 416], [333, 337, 325, 416], [333, 327, 334, 326], [333, 327, 417, 326], [309, 317, 230, 318], [309, 308, 317, 230], [309, 222, 230, 318], [309, 222, 308, 230], [309, 401, 317, 318], [309, 401, 308, 317], [309, 401, 308, 392], [309, 308, 392, 298], [309, 222, 308, 298], [297, 212, 308, 298], [297, 308, 392, 298], [297, 308, 382, 392], [154, 144, 221, 145], [154, 155, 221, 145], [154, 144, 221, 153], [154, 155, 239, 162], [154, 155, 221, 239], [154, 95, 162, 161], [154, 89, 95, 161], [154, 239, 162, 161], [154, 143, 153, 161], [154, 143, 89, 161], [154, 144, 143, 153], [154, 144, 143, 89], [154, 153, 239, 161], [154, 221, 153, 239], [ 61, 71, 125, 133], [ 61, 71, 60, 133], [ 61, 126, 116, 125], [ 61, 126, 71, 125], [ 61, 124, 125, 133], [ 61, 124, 60, 133], [ 61, 48, 116, 115], [ 61, 48, 53, 115], [ 24, 60, 23, 70], [ 24, 71, 60, 70], [ 24, 80, 71, 70], [ 24, 80, 71, 72], [ 24, 30, 23, 70], [ 24, 16, 60, 23], [ 24, 80, 30, 70], [ 24, 31, 17, 2], [ 24, 31, 3, 72], [ 24, 31, 17, 3], [ 24, 16, 23, 2], [ 24, 17, 16, 2], [ 24, 4, 23, 2], [ 24, 30, 4, 23], [ 24, 31, 4, 2], [ 24, 31, 30, 4], [512, 522, 521, 530], [512, 567, 522, 530], [512, 459, 521, 458], [512, 459, 522, 521], [512, 459, 373, 458], [512, 459, 373, 449], [497, 496, 553, 548], [497, 504, 496, 553], [497, 496, 491, 548], [456, 518, 465, 457], [456, 383, 465, 457], [456, 381, 464, 465], [456, 518, 464, 465], [456, 383, 448, 371], [456, 381, 382, 465], [456, 383, 382, 465], [456, 381, 382, 371], [456, 383, 382, 371], [136, 212, 137, 145], [558, 509, 557, 585], [558, 509, 565, 585], [558, 509, 565, 566], [519, 565, 518, 573], [519, 518, 573, 529], [519, 565, 518, 566], [519, 529, 465, 457], [519, 518, 465, 457], [519, 518, 529, 465], [519, 456, 518, 457], [220, 144, 143, 134], [220, 153, 307, 316], [220, 221, 153, 307], [220, 144, 221, 153], [220, 153, 228, 316], [220, 143, 153, 228], [220, 144, 143, 153], [220, 143, 218, 228], [220, 143, 218, 134], [220, 219, 218, 228], [220, 219, 218, 134], [445, 437, 369, 436], [445, 437, 446, 436], [305, 315, 399, 390], [305, 306, 390, 391], [305, 399, 390, 391], [305, 315, 399, 316], [305, 306, 307, 391], [305, 399, 307, 391], [305, 220, 219, 306], [305, 399, 307, 316], [305, 219, 304, 228], [305, 315, 304, 228], [305, 220, 219, 228], [305, 220, 307, 316], [305, 220, 306, 307], [305, 315, 228, 316], [305, 220, 228, 316], [463, 535, 528, 527], [463, 528, 464, 527], [463, 534, 527, 526], [463, 535, 534, 527], [463, 534, 525, 526], [463, 535, 534, 525], [463, 389, 525, 526], [463, 454, 389, 526], [463, 473, 528, 464], [463, 472, 389, 525], [463, 472, 535, 525], [463, 472, 473, 389], [463, 535, 528, 481], [463, 473, 528, 481], [463, 454, 464, 390], [463, 454, 389, 390], [463, 473, 464, 390], [463, 473, 389, 390], [463, 472, 535, 481], [463, 472, 473, 481], [271, 359, 272, 263], [271, 368, 359, 272], [271, 350, 359, 263], [271, 180, 272, 263], [271, 262, 180, 263], [271, 350, 262, 263], [198, 124, 197, 132], [198, 208, 133, 132], [198, 124, 133, 132], [189, 116, 115, 190], [189, 190, 180, 272], [189, 115, 190, 180], [189, 61, 116, 125], [189, 61, 124, 125], [189, 61, 116, 115], [189, 61, 124, 115], [114, 113, 46, 179], [114, 46, 179, 47], [114, 124, 197, 179], [114, 113, 197, 179], [114, 115, 179, 47], 
[114, 124, 115, 179], [114, 52, 46, 47], [114, 124, 123, 197], [114, 113, 123, 197], [114, 123, 60, 52], [114, 124, 123, 60], [114, 53, 52, 47], [114, 53, 115, 47], [114, 61, 124, 115], [114, 61, 124, 60], [114, 61, 53, 115], [114, 53, 60, 52], [114, 61, 53, 60], [ 69, 68, 78, 77], [ 69, 78, 23, 77], [ 69, 22, 123, 60], [ 69, 123, 60, 133], [ 69, 123, 133, 132], [ 69, 22, 23, 77], [ 69, 22, 60, 23], [ 69, 78, 23, 70], [ 69, 60, 23, 70], [ 69, 60, 133, 70], [ 69, 142, 133, 132], [ 69, 142, 133, 70], [ 69, 142, 78, 70], [130, 67, 129, 57], [130, 67, 58, 57], [130, 58, 121, 57], [130, 120, 129, 57], [130, 121, 120, 57], [130, 204, 121, 120], [130, 129, 140, 75], [130, 67, 140, 75], [130, 67, 129, 75], [130, 58, 122, 121], [130, 120, 129, 203], [130, 204, 120, 203], [130, 139, 129, 140], [130, 139, 204, 140], [130, 205, 204, 140], [130, 205, 204, 121], [130, 205, 122, 140], [130, 205, 122, 121], [130, 139, 129, 203], [130, 139, 204, 203], [ 59, 67, 13, 21], [ 59, 68, 21, 77], [ 59, 67, 68, 21], [ 59, 21, 77, 14], [ 59, 22, 77, 14], [ 59, 13, 21, 14], [ 59, 69, 68, 77], [ 59, 67, 58, 13], [ 59, 8, 13, 14], [ 59, 22, 8, 14], [ 59, 69, 22, 77], [ 59, 69, 22, 123], [ 59, 58, 123, 122], [402, 403, 328, 411], [402, 403, 467, 411], [224, 223, 148, 232], [224, 225, 148, 232], [224, 225, 148, 216], [224, 301, 225, 216], [224, 310, 300, 214], [224, 310, 223, 214], [224, 139, 223, 148], [224, 139, 148, 216], [224, 139, 300, 214], [224, 139, 223, 214], [224, 139, 215, 300], [224, 139, 215, 216], [224, 301, 215, 216], [385, 386, 460, 468], [385, 301, 386, 302], [385, 301, 215, 302], [385, 376, 386, 460], [385, 376, 386, 302], [395, 301, 386, 313], [395, 301, 403, 386], [395, 386, 313, 396], [395, 403, 386, 396], [395, 301, 312, 313], [395, 477, 396, 404], [395, 403, 477, 396], [395, 321, 322, 404], [395, 322, 312, 313], [395, 321, 322, 312], [395, 412, 477, 404], [395, 403, 412, 477], [395, 412, 328, 404], [395, 403, 412, 328], [395, 329, 328, 404], [395, 321, 329, 404], [395, 321, 329, 328], [347, 424, 425, 419], [347, 348, 340, 419], [347, 348, 425, 419], [177, 252, 169, 176], [177, 113, 105, 112], [177, 252, 169, 253], [500, 551, 494, 501], [500, 494, 446, 501], [500, 509, 446, 501], [500, 558, 551, 501], [500, 558, 509, 501], [500, 558, 551, 557], [500, 558, 509, 557], [500, 494, 446, 436], [500, 499, 494, 436], [500, 551, 494, 550], [500, 499, 494, 550], [500, 551, 557, 550], [500, 557, 556, 564], [500, 509, 557, 564], [500, 507, 499, 556], [500, 499, 582, 550], [500, 499, 582, 556], [500, 582, 557, 550], [500, 582, 557, 556], [517, 518, 464, 527], [517, 509, 518, 527], [517, 509, 446, 510], [517, 509, 518, 510], [517, 463, 464, 527], [517, 463, 454, 464], [517, 463, 527, 526], [517, 463, 454, 526], [517, 527, 564, 526], [517, 509, 527, 564], [447, 438, 511, 510], [447, 511, 448, 439], [447, 438, 511, 439], [447, 438, 362, 439], [447, 456, 448, 371], [447, 363, 448, 439], [447, 363, 362, 439], [447, 456, 381, 371], [447, 363, 448, 371], [447, 363, 362, 371], [447, 285, 381, 371], [447, 285, 362, 371], [545, 499, 546, 549], [108, 190, 180, 264], [108, 115, 190, 180], [108, 115, 107, 180], [108, 48, 115, 107], [108, 116, 109, 190], [108, 48, 116, 109], [108, 116, 115, 190], [108, 48, 116, 115], [295, 294, 380, 306], [295, 380, 306, 381], [295, 285, 380, 381], [ 37, 30, 4, 36], [ 37, 31, 30, 4], [ 54, 18, 17, 3], [ 54, 24, 3, 72], [ 54, 24, 17, 3], [ 54, 24, 71, 72], [ 54, 24, 17, 16], [ 54, 126, 71, 72], [ 54, 17, 16, 11], [ 54, 18, 17, 11], [ 54, 61, 126, 71], [ 54, 24, 16, 60], [ 54, 24, 71, 60], [ 
54, 61, 71, 60], [ 54, 53, 16, 11], [ 54, 53, 16, 60], [ 54, 61, 53, 60], [345, 256, 344, 257], [354, 363, 362, 439], [353, 438, 362, 439], [353, 430, 438, 439], [353, 354, 362, 439], [353, 354, 430, 439], [353, 438, 362, 361], [353, 354, 430, 266], [353, 345, 430, 266], [353, 345, 430, 423], [267, 183, 258, 175], [372, 383, 448, 371], [372, 363, 448, 371], [372, 202, 297, 298], [372, 383, 382, 392], [372, 383, 382, 371], [372, 297, 382, 392], [372, 297, 382, 371], [372, 297, 392, 298], [182, 258, 174, 175], [182, 183, 258, 175], [182, 183, 175, 109], [182, 258, 174, 257], [182, 183, 109, 190], [182, 108, 190, 264], [182, 108, 109, 190], [351, 422, 428, 344], [351, 256, 422, 344], [351, 422, 428, 427], [351, 422, 342, 427], [351, 359, 273, 263], [351, 256, 273, 263], [351, 350, 359, 263], [351, 350, 359, 427], [351, 350, 342, 427], [351, 256, 263, 255], [351, 338, 342, 255], [351, 422, 338, 255], [351, 422, 338, 342], [351, 422, 343, 255], [351, 256, 343, 255], [351, 256, 422, 343], [351, 350, 263, 255], [351, 350, 342, 255], [207, 292, 282, 293], [207, 198, 282, 293], [207, 292, 303, 302], [207, 198, 208, 132], [207, 198, 208, 293], [207, 208, 303, 293], [207, 292, 303, 293], [360, 493, 494, 547], [360, 490, 493, 547], [360, 494, 495, 547], [360, 490, 495, 547], [360, 490, 493, 427], [360, 493, 359, 427], [360, 490, 428, 427], [360, 490, 428, 495], [360, 493, 359, 436], [360, 493, 494, 436], [360, 351, 359, 427], [360, 437, 359, 436], [360, 437, 494, 436], [360, 351, 428, 427], [360, 428, 495, 423], [178, 172, 254, 262], [178, 261, 254, 262], [178, 261, 249, 254], [178, 172, 262, 179], [178, 106, 172, 179], [178, 106, 105, 179], [178, 261, 249, 253], [178, 177, 261, 253], [178, 177, 169, 105], [100, 106, 42, 99], [100, 106, 168, 172], [100, 178, 106, 172], [250, 338, 343, 255], [250, 249, 254, 341], [250, 172, 254, 255], [250, 254, 342, 341], [250, 338, 342, 341], [250, 254, 342, 255], [250, 338, 342, 255], [251, 256, 343, 257], [251, 256, 343, 255], [251, 182, 174, 257], [251, 182, 256, 257], [ 44, 48, 43, 7], [ 44, 108, 48, 107], [ 44, 48, 43, 47], [ 44, 48, 107, 47], [111, 50, 58, 112], [111, 50, 58, 57], [111, 121, 120, 57], [111, 185, 121, 120], [111, 58, 121, 112], [111, 58, 121, 57], [104, 169, 105, 99], [104, 111, 176, 112], [104, 111, 50, 112], [104, 177, 176, 112], [104, 177, 169, 176], [104, 177, 105, 112], [104, 177, 169, 105], [ 45, 104, 50, 112], [ 45, 42, 99, 46], [ 45, 42, 5, 46], [ 45, 105, 99, 46], [ 45, 104, 105, 99], [ 45, 104, 105, 112], [478, 539, 538, 542], [478, 538, 477, 396], [478, 542, 484, 483], [478, 538, 542, 483], [478, 542, 484, 485], [478, 539, 543, 542], [478, 539, 479, 543], [478, 539, 479, 471], [478, 477, 396, 404], [478, 477, 484, 404], [478, 477, 484, 483], [478, 538, 477, 483], [478, 479, 414, 485], [478, 543, 542, 485], [478, 479, 543, 485], [478, 470, 396, 471], [478, 470, 538, 396], [478, 539, 470, 471], [478, 539, 470, 538], [478, 484, 413, 404], [478, 414, 413, 404], [478, 484, 485, 413], [478, 414, 485, 413], [397, 315, 388, 304], [397, 314, 315, 304], [397, 303, 388, 304], [397, 314, 303, 304], [397, 315, 388, 389], [397, 303, 388, 387], [397, 303, 313, 387], [397, 314, 303, 313], [397, 462, 389, 471], [397, 462, 388, 389], [397, 472, 389, 471], [397, 472, 315, 389], [397, 462, 388, 387], [397, 470, 462, 471], [397, 470, 462, 387], [397, 313, 387, 396], [397, 470, 396, 471], [397, 470, 387, 396], [405, 323, 414, 330], [405, 323, 322, 330], [405, 414, 330, 404], [405, 322, 330, 404], [405, 323, 314, 322], [405, 478, 396, 404], [405, 478, 414, 
404], [405, 395, 396, 404], [405, 395, 322, 404], [405, 314, 322, 313], [405, 478, 396, 471], [405, 395, 313, 396], [405, 395, 322, 313], [405, 397, 396, 471], [405, 397, 313, 396], [405, 397, 314, 313], [405, 478, 479, 471], [405, 478, 479, 414], [236, 323, 314, 322], [236, 227, 322, 235], [236, 227, 314, 322], [236, 322, 244, 235], [236, 323, 322, 244], [236, 227, 151, 235], [236, 227, 314, 228], [236, 159, 244, 235], [236, 151, 159, 235], [236, 151, 237, 228], [236, 151, 218, 228], [236, 227, 218, 228], [236, 227, 151, 218], [236, 151, 159, 237], [236, 323, 332, 244], [236, 332, 159, 244], [236, 332, 159, 237], [ 88, 152, 143, 89], [ 88, 80, 143, 89], [ 88, 80, 79, 143], [ 88, 152, 89, 41], [ 88, 151, 79, 143], [ 88, 152, 151, 143], [ 88, 80, 79, 30], [ 88, 151, 79, 93], [ 88, 152, 151, 93], [ 88, 40, 41, 36], [ 88, 152, 40, 41], [ 88, 79, 87, 93], [ 88, 37, 41, 36], [ 88, 37, 30, 36], [ 88, 30, 87, 36], [ 88, 79, 30, 87], [ 88, 152, 40, 93], [ 88, 87, 39, 36], [ 88, 40, 39, 36], [ 88, 87, 93, 39], [ 88, 40, 93, 39], [217, 227, 142, 150], [217, 227, 226, 150], [217, 207, 206, 132], [217, 207, 206, 302], [217, 206, 216, 302], [217, 226, 312, 216], [217, 227, 226, 312], [217, 208, 142, 132], [217, 207, 208, 132], [217, 207, 303, 302], [217, 208, 142, 218], [217, 227, 142, 218], [217, 227, 208, 218], [217, 227, 208, 303], [217, 207, 208, 303], [217, 312, 216, 302], [217, 303, 313, 302], [217, 227, 303, 313], [217, 312, 313, 302], [217, 227, 312, 313], [406, 472, 315, 398], [406, 397, 472, 471], [406, 397, 472, 315], [406, 472, 479, 471], [406, 407, 480, 398], [406, 472, 480, 398], [406, 405, 397, 471], [406, 405, 479, 471], [406, 407, 479, 480], [406, 472, 479, 480], [406, 397, 314, 315], [406, 405, 397, 314], [406, 407, 479, 414], [406, 405, 479, 414], [406, 405, 323, 314], [406, 323, 407, 414], [406, 405, 323, 414], [563, 570, 523, 516], [563, 570, 569, 523], [563, 569, 523, 583], [563, 570, 564, 516], [563, 569, 571, 584], [563, 570, 569, 571], [563, 569, 583, 584], [563, 556, 564, 584], [563, 556, 564, 516], [563, 571, 564, 584], [563, 570, 571, 564], [563, 555, 556, 584], [563, 555, 583, 584], [117, 126, 116, 125], [117, 126, 200, 125], [117, 126, 116, 55], [117, 116, 109, 190], [117, 116, 109, 55], [117, 126, 64, 55], [117, 126, 200, 64], [117, 183, 109, 190], [394, 459, 373, 458], [394, 384, 373, 458], [394, 459, 458, 468], [394, 459, 460, 468], [394, 385, 460, 468], [394, 384, 393, 458], [394, 393, 458, 468], [394, 384, 299, 393], [450, 503, 504, 449], [450, 440, 503, 449], [450, 459, 373, 449], [450, 440, 373, 449], [562, 507, 556, 516], [562, 563, 523, 516], [562, 563, 523, 583], [562, 523, 583, 561], [562, 563, 556, 516], [562, 583, 554, 561], [562, 563, 555, 556], [562, 563, 555, 583], [562, 555, 583, 554], [515, 562, 507, 516], [515, 562, 507, 506], [515, 462, 524, 452], [515, 523, 524, 516], [515, 562, 523, 516], [434, 499, 493, 436], [434, 507, 499, 436], [434, 433, 497, 506], [432, 424, 496, 491], [432, 497, 496, 491], [432, 424, 425, 491], [432, 497, 425, 491], [432, 424, 496, 431], [432, 433, 497, 425], [432, 504, 496, 431], [432, 497, 504, 496], [432, 346, 424, 431], [367, 451, 376, 452], [367, 292, 377, 452], [367, 376, 292, 452], [367, 292, 282, 377], [520, 511, 510, 566], [520, 447, 511, 510], [520, 447, 456, 510], [520, 518, 510, 566], [520, 519, 518, 566], [520, 519, 456, 457], [520, 456, 518, 510], [520, 519, 456, 518], [520, 447, 511, 448], [520, 447, 456, 448], [520, 456, 383, 457], [520, 456, 383, 448], [455, 456, 518, 464], [455, 456, 518, 510], [455, 517, 518, 
464], [455, 517, 518, 510], [455, 456, 381, 464], [455, 447, 456, 510], [455, 447, 456, 381], [455, 517, 454, 464], [455, 517, 445, 454], [455, 517, 446, 510], [455, 517, 445, 446], [455, 381, 464, 390], [455, 380, 381, 390], [455, 445, 380, 454], [455, 454, 464, 390], [455, 380, 454, 390], [135, 126, 200, 125], [135, 126, 71, 72], [135, 126, 71, 125], [135, 80, 71, 72], [135, 71, 134, 125], [135, 80, 144, 134], [135, 80, 71, 134], [211, 136, 200, 201], [211, 136, 212, 201], [211, 295, 200, 201], [211, 136, 212, 145], [211, 135, 136, 145], [211, 135, 136, 200], [211, 135, 144, 145], [211, 144, 221, 145], [211, 212, 221, 145], [211, 212, 221, 307], [211, 220, 221, 307], [211, 220, 306, 307], [211, 220, 144, 221], [559, 551, 501, 552], [559, 558, 551, 501], [559, 502, 501, 552], [559, 438, 502, 501], [559, 438, 511, 501], [559, 438, 511, 502], [559, 511, 510, 566], [559, 511, 510, 501], [559, 558, 509, 566], [559, 558, 509, 501], [559, 509, 510, 566], [559, 509, 510, 501], [379, 380, 454, 390], [379, 380, 306, 390], [379, 294, 380, 306], [379, 388, 304, 293], [379, 454, 389, 390], [379, 454, 388, 389], [379, 305, 306, 390], [379, 219, 304, 293], [379, 294, 219, 293], [379, 315, 389, 390], [379, 315, 388, 389], [379, 315, 388, 304], [379, 294, 219, 306], [379, 305, 219, 306], [379, 305, 315, 390], [379, 305, 315, 304], [379, 305, 219, 304], [283, 282, 377, 293], [283, 368, 282, 377], [283, 368, 369, 272], [283, 198, 282, 293], [283, 271, 368, 272], [283, 271, 368, 282], [296, 285, 381, 371], [296, 381, 382, 371], [296, 297, 382, 371], [296, 295, 285, 381], [296, 297, 308, 382], [296, 295, 306, 381], [296, 295, 285, 201], [296, 381, 382, 391], [296, 308, 382, 391], [296, 211, 295, 306], [296, 211, 295, 201], [296, 306, 381, 391], [296, 297, 212, 201], [296, 211, 212, 201], [296, 306, 307, 391], [296, 308, 307, 391], [296, 211, 306, 307], [296, 297, 212, 308], [296, 212, 308, 307], [296, 211, 212, 307], [284, 295, 285, 380], [284, 294, 380, 369], [284, 295, 294, 380], [284, 369, 273, 361], [284, 369, 273, 272], [284, 283, 369, 272], [284, 283, 294, 369], [378, 283, 368, 369], [378, 283, 368, 377], [378, 283, 294, 369], [378, 294, 380, 369], [378, 445, 380, 369], [378, 283, 377, 293], [378, 283, 294, 293], [378, 379, 294, 380], [378, 379, 294, 293], [378, 445, 380, 454], [378, 379, 380, 454], [378, 388, 377, 293], [378, 379, 388, 293], [378, 379, 454, 388], [444, 434, 507, 436], [444, 445, 369, 436], [444, 359, 369, 436], [444, 368, 359, 369], [444, 378, 368, 369], [444, 378, 445, 369], [444, 378, 368, 377], [209, 294, 219, 293], [209, 219, 208, 293], [209, 198, 208, 293], [209, 283, 294, 293], [209, 283, 198, 293], [209, 198, 208, 133], [209, 208, 134, 133], [209, 219, 208, 134], [209, 283, 189, 198], [209, 134, 125, 133], [209, 124, 125, 133], [209, 189, 124, 125], [209, 198, 124, 133], [209, 189, 198, 124], [141, 69, 142, 132], [141, 217, 206, 132], [141, 217, 142, 132], [141, 69, 142, 78], [141, 69, 68, 78], [141, 206, 216, 140], [141, 217, 206, 216], [141, 217, 226, 216], [141, 76, 216, 140], [141, 85, 68, 78], [141, 85, 68, 76], [141, 76, 216, 149], [141, 226, 216, 149], [141, 142, 78, 150], [141, 217, 142, 150], [141, 217, 226, 150], [141, 85, 76, 149], [141, 85, 226, 149], [141, 85, 78, 150], [141, 85, 226, 150], [131, 59, 67, 68], [131, 59, 69, 68], [131, 141, 69, 68], [131, 67, 68, 76], [131, 141, 68, 76], [131, 59, 58, 122], [131, 59, 67, 58], [131, 59, 123, 122], [131, 59, 69, 123], [131, 67, 76, 140], [131, 141, 76, 140], [131, 141, 69, 132], [131, 130, 58, 122], [131, 130, 67, 58], 
[131, 123, 122, 132], [131, 69, 123, 132], [131, 130, 122, 140], [131, 130, 67, 140], [131, 141, 206, 140], [131, 206, 122, 132], [131, 141, 206, 132], [131, 205, 122, 140], [131, 205, 206, 140], [131, 205, 206, 122], [ 51, 59, 22, 123], [ 51, 59, 22, 8], [ 51, 22, 123, 52], [ 51, 22, 8, 52], [ 51, 59, 58, 123], [ 51, 114, 52, 46], [ 51, 114, 123, 52], [ 51, 5, 52, 46], [ 51, 8, 5, 52], [ 51, 114, 113, 46], [ 51, 114, 113, 123], [ 51, 58, 123, 122], [ 51, 113, 123, 122], [ 51, 113, 58, 122], [ 51, 59, 58, 13], [ 51, 59, 8, 13], [ 51, 45, 5, 46], [ 51, 45, 8, 5], [ 51, 113, 105, 46], [ 51, 45, 105, 46], [ 51, 45, 50, 8], [ 51, 50, 58, 13], [ 51, 50, 8, 13], [ 51, 113, 58, 112], [ 51, 50, 58, 112], [ 51, 113, 105, 112], [ 51, 45, 105, 112], [ 51, 45, 50, 112], [319, 224, 223, 232], [319, 224, 310, 223], [319, 225, 241, 232], [319, 224, 225, 232], [311, 385, 215, 300], [311, 385, 301, 215], [311, 224, 215, 300], [311, 224, 301, 215], [311, 394, 385, 300], [311, 224, 310, 300], [311, 402, 310, 393], [311, 394, 299, 393], [311, 394, 299, 300], [311, 394, 393, 468], [311, 394, 385, 468], [311, 319, 402, 310], [311, 319, 224, 310], [311, 310, 299, 393], [311, 310, 299, 300], [311, 301, 403, 386], [311, 385, 301, 386], [311, 403, 386, 468], [311, 385, 386, 468], [311, 467, 393, 468], [311, 402, 467, 393], [311, 403, 467, 468], [311, 402, 403, 467], [269, 365, 355, 278], [269, 268, 355, 278], [269, 268, 259, 278], [269, 279, 259, 278], [269, 365, 279, 278], [269, 185, 279, 259], [260, 252, 253, 339], [260, 261, 253, 339], [260, 177, 270, 261], [260, 177, 252, 253], [260, 177, 261, 253], [260, 268, 252, 339], [260, 268, 252, 259], [260, 269, 268, 259], [187, 113, 197, 179], [187, 113, 105, 179], [187, 177, 113, 105], [187, 270, 261, 262], [187, 177, 270, 261], [187, 178, 262, 179], [187, 178, 105, 179], [187, 178, 177, 105], [187, 178, 261, 262], [187, 178, 177, 261], [508, 500, 509, 446], [508, 517, 509, 446], [508, 517, 445, 446], [508, 500, 509, 564], [508, 517, 509, 564], [508, 445, 446, 436], [508, 500, 446, 436], [508, 500, 556, 564], [508, 500, 507, 556], [508, 444, 507, 436], [508, 444, 445, 436], [508, 507, 499, 436], [508, 500, 499, 436], [508, 500, 507, 499], [508, 517, 454, 526], [508, 517, 445, 454], [508, 556, 564, 516], [508, 507, 556, 516], [508, 564, 526, 516], [508, 517, 564, 526], [118, 183, 175, 109], [118, 117, 183, 109], [118, 110, 109, 55], [118, 117, 109, 55], [118, 117, 64, 55], [275, 267, 192, 183], [275, 353, 362, 361], [275, 285, 362, 361], [275, 353, 354, 266], [275, 353, 354, 362], [275, 284, 285, 361], [286, 363, 362, 371], [286, 285, 362, 371], [286, 275, 285, 362], [286, 202, 297, 201], [286, 295, 285, 201], [286, 296, 297, 371], [286, 296, 285, 371], [286, 296, 297, 201], [286, 296, 285, 201], [ 82, 80, 144, 89], [ 82, 154, 144, 89], [ 82, 135, 80, 72], [ 82, 135, 80, 144], [ 82, 154, 144, 145], [ 82, 135, 144, 145], [ 62, 61, 126, 116], [ 62, 54, 61, 126], [ 62, 126, 116, 55], [ 62, 54, 61, 53], [ 62, 61, 48, 116], [ 62, 61, 48, 53], [ 62, 48, 116, 55], [ 62, 54, 53, 11], [ 62, 12, 19, 55], [ 62, 18, 12, 11], [ 62, 54, 18, 11], [ 62, 48, 12, 7], [ 62, 48, 53, 7], [ 62, 12, 11, 7], [ 62, 53, 11, 7], [ 63, 25, 3, 72], [ 63, 25, 18, 3], [ 63, 54, 3, 72], [ 63, 54, 18, 3], [ 63, 54, 126, 72], [ 63, 25, 26, 18], [ 63, 62, 54, 18], [ 63, 62, 54, 126], [ 63, 26, 18, 19], [ 63, 26, 19, 64], [ 63, 18, 12, 19], [ 63, 62, 12, 19], [ 63, 62, 18, 12], [ 63, 62, 19, 55], [ 63, 62, 126, 55], [ 63, 19, 64, 55], [ 63, 126, 64, 55], [ 74, 136, 137, 145], [ 90, 154, 89, 95], [ 90, 
82, 154, 89], [ 90, 154, 95, 162], [ 90, 154, 155, 162], [ 90, 82, 154, 145], [ 32, 31, 3, 72], [ 32, 25, 3, 72], [352, 345, 344, 423], [352, 353, 345, 423], [352, 345, 256, 344], [352, 351, 256, 273], [352, 351, 256, 344], [352, 428, 344, 423], [352, 351, 428, 344], [352, 351, 359, 273], [352, 360, 351, 359], [352, 360, 428, 423], [352, 360, 351, 428], [352, 437, 273, 361], [352, 353, 437, 361], [352, 437, 359, 273], [352, 360, 437, 359], [184, 110, 175, 109], [184, 118, 175, 109], [184, 118, 110, 109], [184, 267, 183, 175], [184, 118, 183, 175], [184, 267, 192, 183], [184, 118, 192, 183], [213, 212, 222, 298], [213, 297, 212, 298], [213, 202, 297, 298], [213, 297, 212, 201], [213, 202, 297, 201], [213, 212, 137, 146], [213, 212, 222, 146], [213, 136, 212, 201], [213, 136, 212, 137], [276, 275, 267, 192], [276, 354, 363, 362], [276, 286, 363, 362], [276, 286, 275, 192], [276, 275, 354, 266], [276, 275, 267, 266], [276, 275, 354, 362], [276, 286, 275, 362], [ 65, 74, 136, 137], [196, 198, 197, 132], [196, 207, 198, 132], [196, 207, 206, 132], [196, 198, 282, 197], [196, 207, 198, 282], [196, 206, 122, 132], [196, 197, 122, 132], [196, 205, 206, 122], [196, 207, 292, 282], [196, 367, 292, 282], [349, 426, 348, 425], [349, 433, 426, 425], [349, 426, 348, 350], [349, 270, 261, 262], [349, 350, 254, 262], [349, 348, 350, 254], [349, 271, 350, 262], [349, 261, 254, 262], [349, 261, 348, 254], [170, 178, 249, 253], [170, 177, 169, 253], [170, 178, 177, 253], [170, 178, 177, 169], [170, 169, 105, 99], [170, 178, 169, 105], [170, 100, 106, 99], [170, 100, 178, 106], [170, 106, 105, 99], [170, 178, 106, 105], [171, 100, 168, 172], [171, 250, 249, 254], [171, 250, 172, 254], [171, 178, 249, 254], [171, 178, 172, 254], [171, 170, 178, 249], [171, 100, 178, 172], [171, 170, 100, 178], [173, 171, 168, 172], [173, 171, 250, 172], [173, 250, 343, 255], [173, 251, 343, 255], [173, 250, 172, 255], [181, 182, 108, 174], [181, 251, 182, 174], [181, 182, 108, 264], [181, 182, 256, 264], [181, 251, 182, 256], [181, 168, 172, 107], [181, 173, 168, 172], [181, 108, 180, 264], [181, 256, 264, 255], [181, 251, 256, 255], [181, 172, 107, 180], [181, 108, 107, 180], [181, 173, 172, 255], [181, 173, 251, 255], [181, 264, 263, 255], [181, 180, 264, 263], [181, 172, 263, 255], [181, 172, 180, 263], [101, 44, 43, 47], [101, 44, 107, 47], [101, 42, 43, 47], [101, 100, 106, 42], [101, 106, 168, 107], [101, 100, 106, 168], [101, 106, 107, 47], [101, 106, 42, 47], [102, 181, 251, 174], [102, 101, 168, 107], [102, 101, 44, 107], [102, 181, 108, 174], [102, 44, 108, 107], [102, 181, 173, 168], [102, 181, 173, 251], [102, 181, 168, 107], [102, 181, 108, 107], [103, 110, 175, 109], [103, 102, 108, 174], [103, 102, 44, 108], [103, 182, 175, 109], [103, 182, 174, 175], [103, 182, 108, 109], [103, 182, 108, 174], [103, 108, 48, 109], [103, 44, 108, 48], [324, 237, 228, 316], [324, 236, 237, 228], [324, 236, 314, 228], [324, 236, 323, 314], [324, 315, 228, 316], [324, 314, 315, 228], [324, 236, 332, 237], [324, 323, 332, 325], [324, 236, 323, 332], [324, 315, 398, 316], [324, 406, 323, 314], [324, 398, 326, 316], [324, 398, 325, 326], [324, 406, 315, 398], [324, 406, 314, 315], [324, 237, 238, 316], [324, 332, 238, 325], [324, 332, 237, 238], [324, 238, 326, 316], [324, 238, 325, 326], [324, 407, 398, 325], [324, 406, 407, 398], [324, 323, 407, 325], [324, 406, 323, 407], [199, 189, 116, 125], [199, 117, 116, 125], [199, 189, 116, 190], [199, 117, 116, 190], [199, 209, 189, 125], [199, 284, 283, 294], [199, 284, 295, 294], [199, 
189, 190, 272], [199, 209, 283, 294], [199, 209, 283, 189], [199, 283, 189, 272], [199, 284, 283, 272], [199, 117, 200, 125], [289, 365, 277, 364], [289, 384, 277, 364], [289, 384, 277, 288], [289, 394, 299, 300], [289, 394, 384, 299], [289, 204, 300, 288], [289, 365, 279, 277], [289, 299, 300, 288], [289, 384, 299, 288], [289, 277, 194, 288], [289, 279, 277, 194], [289, 204, 194, 288], [289, 279, 204, 194], [441, 450, 503, 504], [441, 450, 440, 503], [441, 503, 504, 431], [441, 440, 503, 431], [441, 432, 504, 431], [441, 440, 355, 431], [441, 365, 355, 364], [441, 440, 355, 364], [374, 394, 459, 373], [374, 450, 459, 373], [374, 394, 459, 460], [374, 450, 459, 460], [374, 384, 373, 364], [374, 394, 384, 373], [374, 289, 365, 364], [374, 440, 373, 364], [374, 450, 440, 373], [374, 441, 365, 364], [374, 394, 385, 460], [374, 289, 384, 364], [374, 289, 394, 384], [374, 441, 440, 364], [374, 441, 450, 440], [498, 507, 499, 556], [498, 562, 507, 556], [498, 499, 556, 549], [498, 555, 556, 549], [498, 562, 555, 556], [498, 562, 507, 506], [498, 555, 549, 554], [498, 562, 555, 554], [498, 434, 507, 506], [498, 434, 507, 499], [498, 434, 497, 506], [498, 549, 554, 548], [498, 545, 549, 548], [514, 562, 523, 561], [514, 515, 562, 523], [514, 567, 568, 561], [514, 523, 568, 561], [514, 531, 567, 568], [514, 523, 531, 568], [514, 560, 567, 561], [514, 531, 567, 522], [514, 523, 531, 524], [514, 515, 523, 524], [357, 347, 346, 424], [357, 432, 346, 424], [357, 347, 424, 425], [357, 432, 424, 425], [357, 347, 346, 339], [357, 347, 348, 425], [357, 349, 348, 425], [357, 432, 433, 425], [357, 349, 433, 425], [357, 260, 261, 339], [357, 349, 261, 348], [357, 261, 340, 339], [357, 347, 340, 339], [357, 260, 270, 261], [357, 349, 270, 261], [357, 261, 348, 340], [357, 347, 348, 340], [442, 451, 506, 504], [442, 497, 506, 504], [442, 433, 497, 506], [442, 432, 497, 504], [442, 432, 433, 497], [442, 441, 432, 504], [442, 450, 451, 504], [442, 441, 450, 504], [370, 447, 285, 381], [370, 455, 447, 381], [370, 285, 380, 381], [370, 455, 380, 381], [370, 447, 438, 362], [370, 447, 285, 362], [370, 447, 438, 510], [370, 455, 447, 510], [370, 284, 285, 361], [370, 284, 285, 380], [370, 438, 362, 361], [370, 285, 362, 361], [370, 437, 438, 361], [370, 438, 446, 510], [370, 455, 446, 510], [370, 284, 369, 361], [370, 284, 380, 369], [370, 437, 369, 361], [370, 445, 380, 369], [370, 455, 445, 380], [370, 437, 438, 446], [370, 445, 437, 369], [370, 445, 437, 446], [370, 455, 445, 446], [210, 211, 295, 200], [210, 199, 200, 125], [210, 199, 295, 200], [210, 135, 200, 125], [210, 211, 135, 200], [210, 199, 295, 294], [210, 211, 295, 306], [210, 199, 209, 125], [210, 199, 209, 294], [210, 211, 220, 144], [210, 211, 135, 144], [210, 209, 294, 219], [210, 294, 219, 306], [210, 295, 294, 306], [210, 220, 219, 306], [210, 211, 220, 306], [210, 135, 134, 125], [210, 220, 219, 134], [210, 220, 144, 134], [210, 135, 144, 134], [210, 209, 134, 125], [210, 209, 219, 134], [188, 283, 189, 198], [188, 283, 271, 282], [188, 283, 198, 282], [188, 283, 271, 272], [188, 283, 189, 272], [188, 198, 282, 197], [188, 198, 124, 197], [188, 189, 198, 124], [188, 271, 180, 272], [188, 189, 180, 272], [188, 271, 262, 180], [188, 124, 197, 179], [188, 262, 180, 179], [188, 115, 180, 179], [188, 189, 115, 180], [188, 124, 115, 179], [188, 189, 124, 115], [188, 187, 262, 179], [188, 187, 197, 179], [274, 284, 273, 361], [274, 275, 284, 361], [274, 273, 264, 272], [274, 284, 273, 272], [274, 182, 190, 264], [274, 182, 183, 190], [274, 190, 264, 
272], [274, 117, 183, 190], [274, 199, 117, 190], [274, 275, 284, 285], [274, 199, 190, 272], [274, 199, 284, 272], [435, 444, 368, 359], [435, 434, 433, 426], [435, 444, 359, 436], [435, 444, 434, 436], [435, 271, 350, 359], [435, 271, 368, 359], [435, 349, 271, 350], [435, 349, 426, 350], [435, 349, 433, 426], [435, 493, 359, 436], [435, 434, 493, 436], [435, 350, 359, 427], [435, 426, 350, 427], [435, 493, 359, 427], [435, 426, 493, 427], [443, 434, 507, 506], [443, 444, 434, 507], [443, 515, 507, 506], [443, 434, 433, 506], [443, 515, 451, 452], [443, 515, 451, 506], [443, 442, 433, 506], [443, 442, 451, 506], [443, 367, 377, 452], [443, 367, 451, 452], [443, 442, 367, 451], [320, 319, 241, 328], [320, 319, 225, 241], [320, 321, 225, 241], [320, 329, 241, 328], [320, 321, 329, 328], [320, 321, 329, 241], [320, 224, 301, 225], [320, 319, 224, 225], [320, 301, 225, 312], [320, 321, 225, 312], [320, 311, 301, 403], [320, 311, 224, 301], [320, 311, 319, 224], [320, 395, 403, 328], [320, 395, 321, 328], [320, 395, 301, 403], [320, 395, 301, 312], [320, 395, 321, 312], [320, 402, 403, 328], [320, 311, 402, 403], [320, 319, 402, 328], [320, 311, 319, 402], [290, 289, 365, 279], [290, 374, 289, 365], [290, 205, 279, 204], [290, 289, 279, 204], [290, 374, 394, 385], [290, 374, 289, 394], [290, 205, 204, 215], [290, 385, 215, 300], [290, 394, 385, 300], [290, 289, 394, 300], [290, 204, 215, 300], [290, 289, 204, 300], [195, 122, 121, 112], [195, 187, 177, 270], [195, 205, 122, 121], [195, 205, 279, 121], [195, 177, 113, 112], [195, 187, 177, 113], [195, 113, 122, 112], [195, 196, 205, 122], [195, 113, 197, 122], [195, 187, 113, 197], [195, 196, 197, 122], [186, 260, 269, 259], [186, 260, 252, 259], [186, 260, 177, 252], [186, 260, 177, 270], [186, 260, 269, 270], [186, 252, 259, 176], [186, 177, 252, 176], [186, 185, 259, 176], [186, 269, 185, 259], [186, 195, 177, 270], [186, 195, 269, 270], [186, 177, 176, 112], [186, 195, 177, 112], [186, 111, 185, 176], [186, 111, 185, 121], [186, 185, 279, 121], [186, 269, 185, 279], [186, 195, 121, 112], [186, 195, 279, 121], [186, 195, 269, 279], [186, 111, 176, 112], [186, 111, 121, 112], [453, 508, 444, 507], [453, 508, 526, 516], [453, 508, 507, 516], [453, 508, 454, 526], [453, 443, 444, 507], [453, 508, 445, 454], [453, 508, 444, 445], [453, 515, 507, 516], [453, 462, 454, 526], [453, 443, 515, 452], [453, 443, 515, 507], [453, 443, 377, 452], [453, 443, 444, 377], [453, 378, 445, 454], [453, 444, 378, 445], [453, 524, 526, 516], [453, 462, 524, 526], [453, 515, 462, 452], [453, 462, 388, 452], [453, 388, 377, 452], [453, 444, 378, 377], [453, 515, 524, 516], [453, 515, 462, 524], [453, 454, 388, 389], [453, 462, 388, 389], [453, 462, 454, 389], [453, 378, 388, 377], [453, 378, 454, 388], [191, 118, 192, 183], [191, 275, 192, 183], [191, 118, 117, 183], [191, 286, 275, 192], [191, 274, 117, 183], [191, 274, 275, 183], [191, 274, 199, 117], [191, 274, 275, 285], [191, 286, 295, 285], [191, 286, 275, 285], [191, 199, 295, 200], [191, 199, 117, 200], [191, 199, 284, 295], [191, 274, 199, 284], [191, 284, 295, 285], [191, 274, 284, 285], [191, 295, 200, 201], [191, 286, 295, 201], [ 73, 74, 25, 26], [ 73, 63, 26, 64], [ 73, 63, 25, 26], [ 73, 65, 26, 64], [ 73, 65, 74, 26], [ 73, 65, 136, 64], [ 73, 65, 74, 136], [ 73, 63, 126, 64], [ 73, 63, 25, 72], [ 73, 136, 200, 64], [ 73, 135, 136, 200], [ 73, 135, 136, 145], [ 73, 74, 136, 145], [ 73, 82, 25, 72], [ 73, 82, 135, 72], [ 73, 126, 200, 64], [ 73, 135, 126, 200], [ 73, 135, 126, 72], [ 73, 63, 126, 
72], [ 73, 82, 135, 145], [ 38, 89, 41, 95], [ 38, 90, 89, 95], [ 38, 90, 82, 89], [ 33, 73, 82, 25], [ 33, 73, 74, 25], [ 33, 82, 25, 72], [ 33, 32, 25, 72], [ 33, 32, 82, 72], [ 33, 38, 90, 82], [ 33, 38, 32, 82], [ 33, 73, 82, 145], [ 33, 73, 74, 145], [429, 352, 360, 423], [429, 352, 360, 437], [429, 360, 495, 423], [429, 495, 502, 423], [429, 360, 494, 495], [429, 360, 437, 494], [429, 352, 353, 423], [429, 352, 353, 437], [429, 495, 502, 552], [429, 494, 495, 552], [429, 430, 502, 423], [429, 430, 438, 502], [429, 353, 430, 423], [429, 353, 430, 438], [429, 437, 438, 361], [429, 353, 438, 361], [429, 353, 437, 361], [429, 494, 501, 552], [429, 437, 494, 501], [429, 502, 501, 552], [429, 438, 502, 501], [429, 437, 438, 501], [ 56, 184, 118, 110], [ 56, 110, 19, 55], [ 56, 118, 110, 55], [ 56, 26, 19, 64], [ 56, 65, 26, 64], [ 56, 19, 64, 55], [ 56, 118, 64, 55], [287, 276, 286, 363], [287, 372, 363, 371], [287, 286, 363, 371], [287, 372, 202, 297], [287, 286, 202, 297], [287, 372, 297, 371], [287, 286, 297, 371], [193, 184, 267, 192], [193, 276, 267, 192], [193, 287, 286, 202], [193, 276, 286, 192], [193, 287, 276, 286], [193, 286, 202, 201], [291, 367, 376, 292], [291, 196, 367, 292], [291, 376, 292, 302], [291, 207, 292, 302], [291, 196, 207, 292], [291, 207, 206, 302], [291, 196, 207, 206], [291, 385, 376, 302], [291, 385, 215, 302], [291, 290, 385, 215], [291, 215, 206, 302], [291, 196, 205, 206], [291, 205, 215, 206], [291, 290, 205, 215], [281, 196, 367, 282], [281, 187, 270, 262], [281, 349, 270, 262], [281, 196, 282, 197], [281, 195, 187, 270], [281, 188, 271, 282], [281, 349, 271, 262], [281, 188, 282, 197], [281, 188, 187, 197], [281, 195, 187, 197], [281, 195, 196, 197], [281, 188, 271, 262], [281, 188, 187, 262], [ 49, 12, 19, 55], [ 49, 110, 19, 55], [ 49, 103, 44, 48], [ 49, 103, 110, 109], [ 49, 103, 48, 109], [ 49, 48, 12, 7], [ 49, 44, 48, 7], [ 49, 62, 12, 55], [ 49, 62, 48, 55], [ 49, 62, 48, 12], [ 49, 110, 109, 55], [ 49, 48, 109, 55], [492, 498, 434, 499], [492, 434, 499, 493], [492, 545, 499, 549], [492, 498, 499, 549], [492, 498, 545, 549], [492, 498, 434, 497], [492, 499, 546, 493], [492, 545, 499, 546], [492, 498, 497, 548], [492, 498, 545, 548], [492, 546, 493, 489], [492, 545, 546, 489], [492, 426, 493, 489], [492, 435, 426, 493], [492, 435, 434, 493], [492, 435, 434, 426], [492, 497, 425, 491], [492, 433, 497, 425], [492, 434, 433, 497], [492, 497, 491, 548], [492, 545, 491, 548], [492, 426, 489, 488], [492, 545, 489, 488], [492, 426, 425, 488], [492, 433, 426, 425], [492, 434, 433, 426], [492, 425, 491, 488], [492, 545, 491, 488], [505, 514, 562, 561], [505, 515, 562, 506], [505, 514, 515, 562], [505, 554, 553, 561], [505, 562, 554, 561], [505, 515, 451, 506], [505, 514, 515, 451], [505, 498, 562, 554], [505, 498, 562, 506], [505, 560, 553, 561], [505, 560, 504, 553], [505, 514, 560, 561], [505, 497, 504, 553], [505, 497, 506, 504], [505, 451, 506, 504], [505, 554, 553, 548], [505, 498, 554, 548], [505, 498, 497, 506], [505, 497, 553, 548], [505, 498, 497, 548], [461, 514, 531, 522], [461, 514, 531, 524], [461, 470, 531, 524], [461, 531, 522, 469], [461, 470, 531, 469], [461, 514, 515, 524], [461, 514, 515, 451], [461, 515, 524, 452], [461, 515, 451, 452], [461, 462, 524, 452], [461, 470, 462, 524], [461, 460, 522, 469], [461, 386, 460, 469], [461, 386, 396, 469], [461, 470, 396, 469], [461, 376, 387, 452], [461, 451, 376, 452], [461, 451, 376, 460], [461, 462, 387, 452], [461, 470, 462, 387], [461, 386, 387, 396], [461, 470, 387, 396], [461, 376, 386, 
460], [461, 376, 386, 387], [513, 512, 567, 522], [513, 514, 567, 522], [513, 512, 560, 567], [513, 514, 560, 567], [513, 512, 459, 522], [513, 512, 560, 449], [513, 459, 460, 522], [513, 450, 459, 460], [513, 512, 459, 449], [513, 450, 459, 449], [513, 560, 504, 449], [513, 450, 504, 449], [513, 450, 451, 460], [513, 461, 514, 522], [513, 461, 514, 451], [513, 505, 514, 560], [513, 505, 514, 451], [513, 505, 560, 504], [513, 461, 460, 522], [513, 461, 451, 460], [513, 450, 451, 504], [513, 505, 451, 504], [375, 367, 451, 376], [375, 442, 367, 451], [375, 451, 376, 460], [375, 450, 451, 460], [375, 442, 450, 451], [375, 374, 450, 460], [375, 291, 367, 376], [375, 385, 376, 460], [375, 374, 385, 460], [375, 290, 374, 385], [375, 374, 441, 450], [375, 442, 441, 450], [375, 291, 385, 376], [375, 291, 290, 385], [375, 374, 441, 365], [375, 290, 374, 365], [366, 442, 441, 432], [366, 375, 442, 441], [366, 357, 432, 433], [366, 442, 432, 433], [366, 375, 442, 367], [366, 357, 349, 270], [366, 357, 349, 433], [366, 375, 441, 365], [366, 281, 349, 270], [265, 353, 345, 266], [265, 275, 353, 266], [265, 275, 353, 361], [265, 274, 275, 361], [265, 345, 258, 266], [265, 352, 345, 256], [265, 352, 353, 345], [265, 274, 275, 183], [265, 274, 273, 361], [265, 267, 258, 266], [265, 275, 267, 266], [265, 352, 256, 273], [265, 182, 256, 264], [265, 274, 182, 264], [265, 256, 273, 264], [265, 274, 273, 264], [265, 182, 183, 258], [265, 274, 182, 183], [265, 267, 183, 258], [265, 275, 267, 183], [265, 352, 273, 361], [265, 352, 353, 361], [265, 345, 258, 257], [265, 345, 256, 257], [265, 182, 258, 257], [265, 182, 256, 257], [358, 435, 444, 434], [358, 443, 444, 434], [358, 435, 434, 433], [358, 443, 434, 433], [358, 435, 444, 368], [358, 435, 349, 433], [358, 444, 368, 377], [358, 443, 444, 377], [358, 435, 271, 368], [358, 435, 349, 271], [358, 368, 282, 377], [358, 271, 368, 282], [358, 366, 349, 433], [358, 367, 282, 377], [358, 443, 367, 377], [358, 366, 281, 367], [358, 366, 281, 349], [358, 443, 442, 433], [358, 366, 442, 433], [358, 443, 442, 367], [358, 366, 442, 367], [358, 281, 349, 271], [358, 281, 271, 282], [358, 281, 367, 282], [280, 269, 365, 279], [280, 290, 365, 279], [280, 290, 205, 279], [280, 375, 290, 365], [280, 195, 205, 279], [280, 195, 269, 279], [280, 366, 375, 365], [280, 375, 291, 290], [280, 291, 290, 205], [280, 195, 196, 205], [280, 291, 196, 205], [280, 375, 291, 367], [280, 366, 375, 367], [280, 195, 269, 270], [280, 366, 269, 270], [280, 291, 196, 367], [280, 281, 195, 196], [280, 281, 195, 270], [280, 366, 281, 270], [280, 281, 196, 367], [280, 366, 281, 367], [127, 191, 118, 192], [127, 191, 286, 201], [127, 191, 286, 192], [127, 191, 200, 201], [127, 193, 286, 201], [127, 193, 286, 192], [127, 191, 117, 200], [127, 191, 118, 117], [127, 136, 200, 64], [127, 136, 200, 201], [127, 117, 200, 64], [127, 118, 117, 64], [127, 65, 136, 64], [ 81, 38, 32, 37], [ 81, 38, 32, 82], [ 81, 24, 80, 30], [ 81, 24, 80, 72], [ 81, 32, 37, 31], [ 81, 88, 80, 89], [ 81, 82, 80, 89], [ 81, 38, 82, 89], [ 81, 38, 89, 41], [ 81, 38, 37, 41], [ 81, 82, 80, 72], [ 81, 32, 82, 72], [ 81, 88, 80, 30], [ 81, 88, 37, 30], [ 81, 24, 31, 30], [ 81, 37, 31, 30], [ 81, 24, 31, 72], [ 81, 32, 31, 72], [ 81, 88, 89, 41], [ 81, 88, 37, 41], [ 83, 33, 74, 145], [ 83, 90, 82, 145], [ 83, 33, 82, 145], [ 83, 33, 90, 82], [ 83, 74, 137, 145], [ 83, 154, 155, 145], [ 83, 90, 154, 145], [ 83, 90, 154, 155], [ 83, 137, 146, 145], [ 83, 155, 146, 145], [128, 213, 202, 201], [128, 193, 202, 201], [128, 127, 193, 
201], [128, 213, 136, 137], [128, 65, 136, 137], [128, 127, 65, 136], [128, 213, 136, 201], [128, 127, 136, 201], [119, 128, 127, 65], [119, 128, 127, 193], [119, 56, 65, 64], [119, 127, 65, 64], [119, 56, 184, 118], [119, 193, 184, 192], [119, 127, 193, 192], [119, 56, 118, 64], [119, 127, 118, 64], [119, 184, 118, 192], [119, 127, 118, 192], [356, 366, 441, 432], [356, 366, 357, 432], [356, 357, 432, 346], [356, 432, 346, 431], [356, 441, 432, 431], [356, 346, 355, 431], [356, 441, 355, 431], [356, 366, 441, 365], [356, 366, 269, 270], [356, 366, 357, 270], [356, 268, 346, 355], [356, 269, 268, 355], [356, 269, 365, 355], [356, 441, 365, 355], [356, 280, 269, 365], [356, 280, 366, 365], [356, 280, 366, 269], [356, 260, 269, 268], [356, 260, 269, 270], [356, 357, 260, 270], [356, 268, 346, 339], [356, 260, 268, 339], [356, 357, 346, 339], [356, 357, 260, 339] ], dtype='int64')

# ---- yt-4.4.0/yt/frontends/stream/tests/__init__.py (empty file) ----

# ---- yt-4.4.0/yt/frontends/stream/tests/test_callable_grids.py ----

import numpy as np
import pytest
import unyt
from numpy.testing import assert_almost_equal, assert_equal

from yt import load_amr_grids, load_hdf5_file, load_uniform_grid
from yt.testing import _amr_grid_index, requires_file, requires_module

turb_vels = "UnigridData/turb_vels.h5"

_existing_fields = (
    "Bx",
    "By",
    "Bz",
    "Density",
    "MagneticEnergy",
    "Temperature",
    "turb_x-velocity",
    "turb_y-velocity",
    "turb_z-velocity",
    "x-velocity",
    "y-velocity",
    "z-velocity",
)


@requires_file(turb_vels)
@requires_module("h5py")
def test_load_hdf5_file():
    ds1 = load_hdf5_file(turb_vels)
    assert_equal(ds1.domain_dimensions, [256, 256, 256])
    for field_name in _existing_fields:
        assert ("stream", field_name) in ds1.field_list
    assert_equal(ds1.r[:]["ones"].size, 256 * 256 * 256)
    assert_equal(ds1.r[:]["Density"].size, 256 * 256 * 256)
    # Now we test that we get the same results regardless of our decomposition
    ds2 = load_hdf5_file(turb_vels, nchunks=19)
    assert_equal(ds2.domain_dimensions, [256, 256, 256])
    assert_equal(ds2.r[:]["ones"].size, 256 * 256 * 256)
    assert_equal(ds2.r[:]["Density"].size, 256 * 256 * 256)
    assert_almost_equal(ds2.r[:]["Density"].min(), ds1.r[:]["Density"].min())
    assert_almost_equal(ds2.r[:]["Density"].max(), ds1.r[:]["Density"].max())
    assert_almost_equal(ds2.r[:]["Density"].std(), ds1.r[:]["Density"].std())
    # Test that we can load this dataset with a different bounding box and
    # length units
    ds3 = load_hdf5_file(
        turb_vels,
        bbox=np.array([[-1.0, 1.0], [-1.0, 1.0], [-1.0, 1.0]]),
        dataset_arguments={"length_unit": (1.0, "kpc")},
    )
    assert_almost_equal(ds3.domain_width, ds3.arr([2, 2, 2], "kpc"))


_x_coefficients = (100, 50, 30, 10, 20)
_y_coefficients = (20, 90, 80, 30, 30)
_z_coefficients = (50, 10, 90, 40, 40)


def _grid_data_function(grid, field_name):
    # We want N points running from cell-center to the cell-center on the
    # other side, i.e. one sample per zone of this grid.
    x, y, z = (
        np.linspace(
            grid.LeftEdge[i] + grid.dds[i] / 2,
            grid.RightEdge[i] - grid.dds[i] / 2,
            grid.ActiveDimensions[i],
        )
        for i in (0, 1, 2)
    )
    # Radial attenuation envelope centered on (0.5, 0.5, 0.5).
    r = np.sqrt(
        ((x.d - 0.5) ** 2)[:, None, None]
        + ((y.d - 0.5) ** 2)[None, :, None]
        + ((z.d - 0.5) ** 2)[None, None, :]
    )
    atten = np.exp(-20 * (1.1 * r**2))
    # Separable sums of sines at octave frequencies along each axis.
    xv = sum(
        c * np.sin(2 ** (1 + i) * (x.d * np.pi * 2))
        for i, c in enumerate(_x_coefficients)
    )
    yv = sum(
        c * np.sin(2 ** (1 + i) * (y.d * np.pi * 2))
        for i, c in enumerate(_y_coefficients)
    )
    zv = sum(
        c * np.sin(2 ** (1 + i) * (z.d * np.pi * 2))
        for i, c in enumerate(_z_coefficients)
    )
    return atten * (xv[:, None, None] * yv[None, :, None] * zv[None, None, :])
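# --- editor's sketch (not part of the original test suite) --------------------
# _grid_data_function above doubles as documentation for the callable-field
# interface exercised below: a field supplied as a callable is invoked as
# f(grid, field_name) and, judging from this file, must return an array shaped
# like grid.ActiveDimensions. A minimal sketch under that reading; the name
# _ones_function and its constant field are illustrative, not part of yt:


def _ones_function(grid, field_name):
    # One value per zone of the grid being filled.
    return np.ones(grid.ActiveDimensions)


# Usage would mirror the tests below, e.g.:
#     ds = load_uniform_grid({"density": _ones_function}, [32, 32, 32])
# -------------------------------------------------------------------------------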
def test_load_callable():
    grid_data = []
    for level, le, re, dims in _amr_grid_index:
        grid_data.append(
            {
                "level": level,
                "left_edge": le,
                "right_edge": re,
                "dimensions": dims,
                "density": _grid_data_function,
            }
        )
    ds = load_amr_grids(
        grid_data, [32, 32, 32], bbox=np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])
    )
    assert_equal(ds.r[:].sum("cell_volume"), ds.domain_width.prod())
    assert_almost_equal(ds.r[:].max("density").d, 2660218.62833899)
    assert_almost_equal(ds.r[:].min("density").d, -2660218.62833899)


def test_load_uniform_grid_callable():
    data = {"density": _grid_data_function, "my_temp": (_grid_data_function, "K")}
    ds = load_uniform_grid(
        data, [32, 32, 32], bbox=np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])
    )
    assert_equal(ds.r[:].sum("cell_volume"), ds.domain_width.prod())
    # Note: the following min/max values differ from test_load_callable because
    # the grid here is coarser and the min/max values of the function are not
    # well-sampled.
    assert_almost_equal(ds.r[:].max("density").d, 1559160.37194738)
    assert_almost_equal(ds.r[:].min("density").d, -1559160.37194738)
    assert ds.r[:].min("my_temp").units == unyt.K
    with pytest.raises(RuntimeError, match="Callable functions can not be specified"):
        _ = load_uniform_grid(
            data,
            [32, 32, 32],
            bbox=np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]),
            nprocs=16,
        )

# ---- yt-4.4.0/yt/frontends/stream/tests/test_outputs.py ----

import os
import shutil
import tempfile
import unittest

import numpy as np
from numpy.testing import assert_equal, assert_raises

from yt.loaders import load_particles, load_uniform_grid
from yt.utilities.exceptions import (
    YTInconsistentGridFieldShape,
    YTInconsistentGridFieldShapeGridDims,
    YTInconsistentParticleFieldShape,
)


class TestEmptyLoad(unittest.TestCase):
    def setUp(self):
        self.tmpdir = tempfile.mkdtemp()
        self.curdir = os.getcwd()
        os.chdir(self.tmpdir)

        # Create a 0 byte file; close the handle right away so it does not
        # linger for the rest of the test.
        open("empty_file", "a").close()

        # Create an empty directory
        os.makedirs("empty_directory")

    def tearDown(self):
        os.chdir(self.curdir)
        shutil.rmtree(self.tmpdir)


def test_dimensionless_field_units():
    Z = np.random.uniform(size=(32, 32, 32))
    d = np.random.uniform(size=(32, 32, 32))
    data = {"density": d, "metallicity": Z}
    ds = load_uniform_grid(data, (32, 32, 32))
    dd = ds.all_data()
    assert_equal(Z.max(), float(dd["stream", "metallicity"].max()))


def test_inconsistent_field_shape():
    def load_field_field_mismatch():
        # Two fields whose shapes disagree with each other.
        d = np.random.uniform(size=(32, 32, 32))
        t = np.random.uniform(size=(32, 64, 32))
        data = {"density": d, "temperature": t}
        load_uniform_grid(data, (32, 32, 32))

    assert_raises(YTInconsistentGridFieldShape, load_field_field_mismatch)

    def load_field_grid_mismatch():
        # Fields agree with each other but not with the declared grid dims.
        d = np.random.uniform(size=(32, 32, 32))
        t = np.random.uniform(size=(32, 32, 32))
        data = {"density": d, "temperature": t}
        load_uniform_grid(data, (32, 64, 32))

    assert_raises(YTInconsistentGridFieldShapeGridDims, load_field_grid_mismatch)

    def load_particle_fields_mismatch():
        # Particle fields must share a common particle count; z does not.
        x = np.random.uniform(size=100)
        y = np.random.uniform(size=100)
        z = np.random.uniform(size=200)
        data = {
            "particle_position_x": x,
            "particle_position_y": y,
            "particle_position_z": z,
        }
        load_particles(data)

    assert_raises(YTInconsistentParticleFieldShape, load_particle_fields_mismatch)


def test_parameters():
    # A simple test to check that we can pass parameters through to the
    # resulting dataset.
    Z = np.random.uniform(size=(32, 32, 32))
    d = np.random.uniform(size=(32, 32, 32))
    data = {"density": d, "metallicity": Z}
    ds = load_uniform_grid(data, (32, 32, 32), parameters={"metadata_is_nice": True})
    assert ds.parameters["metadata_is_nice"]
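# --- editor's sketch (not part of the original test suite) --------------------
# The mismatch cases in test_inconsistent_field_shape above reduce to a single
# rule: every array handed to a stream loader must agree in shape both with
# its sibling fields and with the declared grid dimensions (for particles,
# with a common particle count). For contrast, a minimal passing case under
# that rule (illustrative only):

import numpy as np

from yt.loaders import load_uniform_grid


def _consistent_load_sketch():
    data = {
        "density": np.random.uniform(size=(32, 32, 32)),
        "temperature": np.random.uniform(size=(32, 32, 32)),
    }
    # Field shapes match each other and the declared grid, so this loads.
    return load_uniform_grid(data, (32, 32, 32))
# -------------------------------------------------------------------------------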
np.random.uniform(size=100) y = np.random.uniform(size=100) z = np.random.uniform(size=200) data = { "particle_position_x": x, "particle_position_y": y, "particle_position_z": z, } load_particles(data) assert_raises(YTInconsistentParticleFieldShape, load_particle_fields_mismatch) def test_parameters(): # simple test to check that we can pass in parameters Z = np.random.uniform(size=(32, 32, 32)) d = np.random.uniform(size=(32, 32, 32)) data = {"density": d, "metallicity": Z} ds = load_uniform_grid(data, (32, 32, 32), parameters={"metadata_is_nice": True}) assert ds.parameters["metadata_is_nice"] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/stream/tests/test_stream_amrgrids.py0000644000175100001770000000404614714401662023460 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_raises from yt import ProjectionPlot, load_amr_grids from yt.utilities.exceptions import YTIllDefinedAMR, YTIntDomainOverflow def test_qt_overflow(): grid_data = [] grid_dict = {} grid_dict["left_edge"] = [-1.0, -1.0, -1.0] grid_dict["right_edge"] = [1.0, 1.0, 1.0] grid_dict["dimensions"] = [8, 8, 8] grid_dict["level"] = 0 grid_dict["density"] = np.ones((8, 8, 8)) grid_data.append(grid_dict) domain_dimensions = np.array([8, 8, 8]) spf = load_amr_grids(grid_data, domain_dimensions) def make_proj(): p = ProjectionPlot(spf, "x", [("gas", "density")], center="c", origin="native") return p assert_raises(YTIntDomainOverflow, make_proj) def test_refine_by(): grid_data = [] ref_by = 4 lo = 0.0 hi = 1.0 fine_grid_width = (hi - lo) / ref_by for level in range(2): grid_dict = {} grid_dict["left_edge"] = [0.0 + 0.5 * fine_grid_width * level] * 3 grid_dict["right_edge"] = [1.0 - 0.5 * fine_grid_width * level] * 3 grid_dict["dimensions"] = [8, 8, 8] grid_dict["level"] = level grid_dict["density"] = np.ones((8, 8, 8)) grid_data.append(grid_dict) domain_dimensions = np.array([8, 8, 8]) load_amr_grids(grid_data, domain_dimensions, refine_by=ref_by) def test_validation(): dims = np.array([4, 2, 4]) grid_data = [ { "left_edge": [0.0, 0.0, 0.0], "right_edge": [1.0, 1.0, 1.0], "level": 0, "dimensions": dims, }, { "left_edge": [0.25, 0.25, 0.25], "right_edge": [0.75, 0.75, 0.75], "level": 1, "dimensions": dims, }, ] bbox = np.array([[0, 1], [0, 1], [0, 1]]) def load_grids(): load_amr_grids( grid_data, dims, bbox=bbox, periodicity=(0, 0, 0), length_unit=1.0, refine_by=2, ) assert_raises(YTIllDefinedAMR, load_grids) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/stream/tests/test_stream_hexahedral.py0000644000175100001770000000423114714401662023751 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_almost_equal, assert_equal from yt import SlicePlot from yt.frontends.stream.data_structures import hexahedral_connectivity from yt.loaders import load_hexahedral_mesh # Field information def test_stream_hexahedral(): np.random.seed(0x4D3D3D3) Nx, Ny, Nz = 32, 18, 24 # Note what we're doing here -- we are creating a randomly spaced mesh, but # because of how the accumulate operation works, we also reset the leftmost # cell boundary to 0.0. 
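# For illustration, a self-contained sketch of the node-spacing trick the
# comment above describes (the helper is hypothetical and not used by the
# test): normalized random increments, accumulated, give monotone node
# coordinates on [0, 1], and resetting the first entry anchors the left
# boundary at 0.0.
def _sketch_random_node_spacing(n=8):
    import numpy as np

    edges = np.random.random(n + 1)
    edges /= edges.sum()  # increments now sum to ~1.0
    edges = np.add.accumulate(edges)  # monotone partial sums, edges[-1] ~= 1.0
    edges[0] = 0.0  # reset the leftmost cell boundary to 0.0
    return edges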
cell_x = np.random.random(Nx + 1) cell_x /= cell_x.sum() cell_x = np.add.accumulate(cell_x) cell_x[0] = 0.0 cell_y = np.random.random(Ny + 1) cell_y /= cell_y.sum() cell_y = np.add.accumulate(cell_y) cell_y[0] = 0.0 cell_z = np.random.random(Nz + 1) cell_z /= cell_z.sum() cell_z = np.add.accumulate(cell_z) cell_z[0] = 0.0 coords, conn = hexahedral_connectivity(cell_x, cell_y, cell_z) data = {"random_field": np.random.random((Nx, Ny, Nz))} bbox = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]) ds = load_hexahedral_mesh(data, conn, coords, bbox=bbox) dd = ds.all_data() # raise RuntimeError assert_almost_equal(float(dd["gas", "cell_volume"].sum(dtype="float64")), 1.0) assert_equal(dd["index", "ones"].size, Nx * Ny * Nz) # Now we try it with a standard mesh cell_x = np.linspace(0.0, 1.0, Nx + 1) cell_y = np.linspace(0.0, 1.0, Ny + 1) cell_z = np.linspace(0.0, 1.0, Nz + 1) coords, conn = hexahedral_connectivity(cell_x, cell_y, cell_z) data = {"random_field": np.random.random((Nx, Ny, Nz))} bbox = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]) ds = load_hexahedral_mesh(data, conn, coords, bbox=bbox) dd = ds.all_data() assert_almost_equal(float(dd["gas", "cell_volume"].sum(dtype="float64")), 1.0) assert_equal(dd["index", "ones"].size, Nx * Ny * Nz) assert_almost_equal(dd["index", "dx"].to_ndarray(), 1.0 / Nx) assert_almost_equal(dd["index", "dy"].to_ndarray(), 1.0 / Ny) assert_almost_equal(dd["index", "dz"].to_ndarray(), 1.0 / Nz) s = SlicePlot(ds, "x", "random_field") s.render() s.frb["stream", "random_field"] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/stream/tests/test_stream_octree.py0000644000175100001770000000165014714401662023127 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_equal import yt OCT_MASK_LIST = [ 8, 0, 0, 0, 0, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8, 0, 0, 0, 0, 0, 0, 0, 0, ] def test_octree(): # See Issue #1272 octree_mask = np.array(OCT_MASK_LIST, dtype=np.uint8) quantities = {} quantities["gas", "density"] = np.random.random((22, 1)) bbox = np.array([[-10.0, 10.0], [-10.0, 10.0], [-10.0, 10.0]]) ds = yt.load_octree( octree_mask=octree_mask, data=quantities, bbox=bbox, num_zones=1, partial_coverage=0, ) proj = ds.proj(("gas", "density"), "x") proj["gas", "density"] assert_equal(ds.r[:]["ones"].size, 22) rho1 = quantities["gas", "density"].ravel() rho2 = ds.r[:]["density"].copy() rho1.sort() rho2.sort() assert_equal(rho1, rho2) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/stream/tests/test_stream_particles.py0000644000175100001770000003515214714401662023640 0ustar00runnerdockerimport numpy as np import pytest from numpy.testing import assert_equal import yt.utilities.initial_conditions as ic from yt.loaders import load_amr_grids, load_particles, load_uniform_grid from yt.testing import fake_particle_ds, fake_sph_orientation_ds # Field information def test_stream_particles(): num_particles = 100000 domain_dims = (64, 64, 64) dens = np.random.random(domain_dims) x = np.random.uniform(size=num_particles) y = np.random.uniform(size=num_particles) z = np.random.uniform(size=num_particles) m = np.ones(num_particles) # Field operators and cell flagging methods fo = [] fo.append(ic.TopHatSphere(0.1, [0.2, 0.3, 0.4], {"density": 2.0})) fo.append(ic.TopHatSphere(0.05, [0.7, 0.4, 0.75], {"density": 20.0})) # Add particles fields1 = { "density": dens, "particle_position_x": x, "particle_position_y": y, "particle_position_z": z, 
"particle_mass": m, } fields2 = fields1.copy() ug1 = load_uniform_grid(fields1, domain_dims, 1.0) ug2 = load_uniform_grid(fields2, domain_dims, 1.0, nprocs=8) # Check to make sure the number of particles is the same number_of_particles1 = np.sum([grid.NumberOfParticles for grid in ug1.index.grids]) number_of_particles2 = np.sum([grid.NumberOfParticles for grid in ug2.index.grids]) assert_equal(number_of_particles1, num_particles) assert_equal(number_of_particles1, number_of_particles2) for grid in ug2.index.grids: tot_parts = grid["io", "particle_position_x"].size tot_all_parts = grid["all", "particle_position_x"].size assert tot_parts == grid.NumberOfParticles assert tot_all_parts == grid.NumberOfParticles # Check to make sure the fields have been defined correctly for ptype in ("all", "io"): assert ( ug1._get_field_info((ptype, "particle_position_x")).sampling_type == "particle" ) assert ( ug1._get_field_info((ptype, "particle_position_y")).sampling_type == "particle" ) assert ( ug1._get_field_info((ptype, "particle_position_z")).sampling_type == "particle" ) assert ug1._get_field_info((ptype, "particle_mass")).sampling_type == "particle" assert not ug1._get_field_info(("gas", "density")).sampling_type == "particle" for ptype in ("all", "io"): assert ( ug2._get_field_info((ptype, "particle_position_x")).sampling_type == "particle" ) assert ( ug2._get_field_info((ptype, "particle_position_y")).sampling_type == "particle" ) assert ( ug2._get_field_info((ptype, "particle_position_z")).sampling_type == "particle" ) assert ug2._get_field_info((ptype, "particle_mass")).sampling_type == "particle" assert not ug2._get_field_info(("gas", "density")).sampling_type == "particle" # Now perform similar checks, but with multiple particle types num_dm_particles = 30000 xd = np.random.uniform(size=num_dm_particles) yd = np.random.uniform(size=num_dm_particles) zd = np.random.uniform(size=num_dm_particles) md = np.ones(num_dm_particles) num_star_particles = 20000 xs = np.random.uniform(size=num_star_particles) ys = np.random.uniform(size=num_star_particles) zs = np.random.uniform(size=num_star_particles) ms = 2.0 * np.ones(num_star_particles) dens = np.random.random(domain_dims) fields3 = { "density": dens, ("dm", "particle_position_x"): xd, ("dm", "particle_position_y"): yd, ("dm", "particle_position_z"): zd, ("dm", "particle_mass"): md, ("star", "particle_position_x"): xs, ("star", "particle_position_y"): ys, ("star", "particle_position_z"): zs, ("star", "particle_mass"): ms, } fields4 = fields3.copy() ug3 = load_uniform_grid(fields3, domain_dims, 1.0) ug4 = load_uniform_grid(fields4, domain_dims, 1.0, nprocs=8) # Check to make sure the number of particles is the same number_of_particles3 = np.sum([grid.NumberOfParticles for grid in ug3.index.grids]) number_of_particles4 = np.sum([grid.NumberOfParticles for grid in ug4.index.grids]) assert_equal(number_of_particles3, num_dm_particles + num_star_particles) assert_equal(number_of_particles3, number_of_particles4) for grid in ug4.index.grids: tot_parts = grid["dm", "particle_position_x"].size tot_parts += grid["star", "particle_position_x"].size tot_all_parts = grid["all", "particle_position_x"].size assert tot_parts == grid.NumberOfParticles assert tot_all_parts == grid.NumberOfParticles # Check to make sure the fields have been defined correctly for ptype in ("dm", "star"): assert ( ug3._get_field_info((ptype, "particle_position_x")).sampling_type == "particle" ) assert ( ug3._get_field_info((ptype, "particle_position_y")).sampling_type == "particle" ) 
assert ( ug3._get_field_info((ptype, "particle_position_z")).sampling_type == "particle" ) assert ug3._get_field_info((ptype, "particle_mass")).sampling_type == "particle" assert ( ug4._get_field_info((ptype, "particle_position_x")).sampling_type == "particle" ) assert ( ug4._get_field_info((ptype, "particle_position_y")).sampling_type == "particle" ) assert ( ug4._get_field_info((ptype, "particle_position_z")).sampling_type == "particle" ) assert ug4._get_field_info((ptype, "particle_mass")).sampling_type == "particle" def test_load_particles_types(): num_particles = 10000 data1 = { "particle_position_x": np.random.random(size=num_particles), "particle_position_y": np.random.random(size=num_particles), "particle_position_z": np.random.random(size=num_particles), "particle_mass": np.ones(num_particles), } ds1 = load_particles(data1) ds1.index assert set(ds1.particle_types) == {"all", "io", "nbody"} dd = ds1.all_data() for ax in "xyz": assert dd["io", f"particle_position_{ax}"].size == num_particles assert dd["all", f"particle_position_{ax}"].size == num_particles assert dd["nbody", f"particle_position_{ax}"].size == num_particles num_dm_particles = 10000 num_star_particles = 50000 num_tot_particles = num_dm_particles + num_star_particles data2 = { ("dm", "particle_position_x"): np.random.random(size=num_dm_particles), ("dm", "particle_position_y"): np.random.random(size=num_dm_particles), ("dm", "particle_position_z"): np.random.random(size=num_dm_particles), ("dm", "particle_mass"): np.ones(num_dm_particles), ("star", "particle_position_x"): np.random.random(size=num_star_particles), ("star", "particle_position_y"): np.random.random(size=num_star_particles), ("star", "particle_position_z"): np.random.random(size=num_star_particles), ("star", "particle_mass"): 2.0 * np.ones(num_star_particles), } ds2 = load_particles(data2) ds2.index # We use set here because we don't care about the order and we just need # the elements to be correct assert set(ds2.particle_types) == {"all", "star", "dm", "nbody"} dd = ds2.all_data() for ax in "xyz": npart = 0 for ptype in ds2.particle_types_raw: npart += dd[ptype, f"particle_position_{ax}"].size assert npart == num_tot_particles assert dd["all", f"particle_position_{ax}"].size == num_tot_particles def test_load_particles_sph_types(): num_particles = 10000 data = { ("gas", "particle_position_x"): np.random.random(size=num_particles), ("gas", "particle_position_y"): np.random.random(size=num_particles), ("gas", "particle_position_z"): np.random.random(size=num_particles), ("gas", "particle_velocity_x"): np.random.random(size=num_particles), ("gas", "particle_velocity_y"): np.random.random(size=num_particles), ("gas", "particle_velocity_z"): np.random.random(size=num_particles), ("gas", "particle_mass"): np.ones(num_particles), ("gas", "density"): np.ones(num_particles), ("gas", "smoothing_length"): np.ones(num_particles), ("dm", "particle_position_x"): np.random.random(size=num_particles), ("dm", "particle_position_y"): np.random.random(size=num_particles), ("dm", "particle_position_z"): np.random.random(size=num_particles), ("dm", "particle_velocity_x"): np.random.random(size=num_particles), ("dm", "particle_velocity_y"): np.random.random(size=num_particles), ("dm", "particle_velocity_z"): np.random.random(size=num_particles), ("dm", "particle_mass"): np.ones(num_particles), } ds = load_particles(data) assert set(ds.particle_types) == {"gas", "dm"} assert ds._sph_ptypes == ("gas",) data.update( { ("cr_gas", "particle_position_x"): 
np.random.random(size=num_particles), ("cr_gas", "particle_position_y"): np.random.random(size=num_particles), ("cr_gas", "particle_position_z"): np.random.random(size=num_particles), ("cr_gas", "particle_velocity_x"): np.random.random(size=num_particles), ("cr_gas", "particle_velocity_y"): np.random.random(size=num_particles), ("cr_gas", "particle_velocity_z"): np.random.random(size=num_particles), ("cr_gas", "particle_mass"): np.ones(num_particles), ("cr_gas", "density"): np.ones(num_particles), ("cr_gas", "smoothing_length"): np.ones(num_particles), } ) with pytest.raises( ValueError, match="Multiple SPH particle types are currently not supported!" ): load_particles(data) def test_load_particles_with_data_source(): ds1 = fake_particle_ds() # Load from dataset ad = ds1.all_data() fields = [("all", "particle_mass")] fields += [("all", f"particle_position_{ax}") for ax in "xyz"] data = {field: ad[field] for field in fields} ds2 = load_particles(data, data_source=ad) def in_cgs(quan): return quan.in_cgs().v # Test bbox is parsed correctly for attr in ["domain_left_edge", "domain_right_edge"]: assert np.allclose(in_cgs(getattr(ds1, attr)), in_cgs(getattr(ds2, attr))) # Test sim_time is parsed correctly assert in_cgs(ds1.current_time) == in_cgs(ds2.current_time) # Test code units are parsed correctly def get_cu(ds, dim): return ds.quan(1, "code_" + dim) for dim in ["length", "mass", "time", "velocity", "magnetic"]: assert in_cgs(get_cu(ds1, dim)) == in_cgs(get_cu(ds2, dim)) def test_add_sph_fields(): ds = fake_particle_ds() ds.index assert set(ds.particle_types) == {"io", "all", "nbody"} ds.add_sph_fields() assert set(ds.particle_types) == {"io", "all"} assert ("io", "smoothing_length") in ds.field_list assert ("io", "density") in ds.field_list def test_particles_outside_domain(): np.random.seed(0x4D3D3D3) posx_arr = np.random.uniform(low=-1.6, high=1.5, size=1000) posy_arr = np.random.uniform(low=-1.5, high=1.5, size=1000) posz_arr = np.random.uniform(low=-1.5, high=1.5, size=1000) dens_arr = np.random.random((16, 16, 16)) data = { "density": dens_arr, "particle_position_x": posx_arr, "particle_position_y": posy_arr, "particle_position_z": posz_arr, } bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]]) ds = load_uniform_grid(data, (16, 16, 16), bbox=bbox, nprocs=4) wh = (posx_arr < bbox[0, 0]).nonzero()[0] assert wh.size == 1000 - ds.particle_type_counts["io"] ad = ds.all_data() assert ds.particle_type_counts["io"] == ad["all", "particle_position_x"].size def test_stream_sph_projection(): ds = fake_sph_orientation_ds() proj = ds.proj(("gas", "density"), 2) frb = proj.to_frb(ds.domain_width[0], (256, 256)) image = frb["gas", "density"] assert image.max() > 0 assert image.shape == (256, 256) @pytest.mark.parametrize("loader", (load_uniform_grid, load_amr_grids)) def test_stream_non_cartesian_particles(loader): eps = 1e-6 r, theta, phi = np.mgrid[ 0.0 : 1.0 - eps : 64j, 0.0 : np.pi - eps : 64j, 0.0 : 2.0 * np.pi - eps : 64j ] np.random.seed(0x4D3D3D3) ind = np.random.randint(0, 64 * 64 * 64, size=1000) particle_position_r = r.ravel()[ind] particle_position_theta = theta.ravel()[ind] particle_position_phi = phi.ravel()[ind] ds = load_uniform_grid( { "density": r, "temperature": phi, "entropy": phi, "particle_position_r": particle_position_r, "particle_position_theta": particle_position_theta, "particle_position_phi": particle_position_phi, }, (64, 64, 64), bbox=np.array([[0.0, 1.0], [0.0, np.pi], [0.0, 2.0 * np.pi]]), geometry="spherical", ) dd = ds.all_data() assert_equal(dd["all", 
"particle_position_r"].v, particle_position_r) assert_equal(dd["all", "particle_position_phi"].v, particle_position_phi) assert_equal(dd["all", "particle_position_theta"].v, particle_position_theta) def test_stream_non_cartesian_particles_amr(): eps = 1e-6 r, theta, phi = np.mgrid[ 0.0 : 1.0 - eps : 64j, 0.0 : np.pi - eps : 64j, 0.0 : 2.0 * np.pi - eps : 64j ] np.random.seed(0x4D3D3D3) ind = np.random.randint(0, 64 * 64 * 64, size=1000) particle_position_r = r.ravel()[ind] particle_position_theta = theta.ravel()[ind] particle_position_phi = phi.ravel()[ind] ds = load_amr_grids( [ { "density": r, "temperature": phi, "entropy": phi, "particle_position_r": particle_position_r, "particle_position_theta": particle_position_theta, "particle_position_phi": particle_position_phi, "dimensions": [64, 64, 64], "level": 0, "left_edge": [0.0, 0.0, 0.0], "right_edge": [1.0, np.pi, 2.0 * np.pi], } ], (64, 64, 64), bbox=np.array([[0.0, 1.0], [0.0, np.pi], [0.0, 2.0 * np.pi]]), geometry="spherical", ) dd = ds.all_data() assert_equal(dd["all", "particle_position_r"].v, particle_position_r) assert_equal(dd["all", "particle_position_phi"].v, particle_position_phi) assert_equal(dd["all", "particle_position_theta"].v, particle_position_theta) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/stream/tests/test_stream_species.py0000644000175100001770000000215614714401662023303 0ustar00runnerdockerimport numpy as np from yt.loaders import load_uniform_grid from yt.testing import assert_allclose_units def test_stream_species(): prng = np.random.default_rng(seed=42) arr = prng.uniform(size=(32, 32, 32)) data = { "density": (arr, "g/cm**3"), "H_p0_fraction": (0.37 * np.ones_like(arr), "dimensionless"), "H_p1_fraction": (0.37 * np.ones_like(arr), "dimensionless"), "He_fraction": (0.24 * np.ones_like(arr), "dimensionless"), "CO_fraction": (0.02 * np.ones_like(arr), "dimensionless"), } bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]]) ds = load_uniform_grid(data, arr.shape, length_unit="Mpc", bbox=bbox, nprocs=64) assert ("gas", "CO_density") in ds.derived_field_list assert ("gas", "H_nuclei_density") in ds.derived_field_list assert ("gas", "H_p0_number_density") in ds.derived_field_list dd = ds.all_data() assert_allclose_units(dd["gas", "CO_density"], 0.02 * dd["gas", "density"]) all_H = dd["gas", "H_p0_number_density"] + dd["gas", "H_p1_number_density"] assert_allclose_units(all_H, dd["gas", "H_nuclei_density"]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/stream/tests/test_stream_stretched.py0000644000175100001770000001024614714401662023634 0ustar00runnerdockerimport numpy as np import pytest from numpy.testing import assert_almost_equal, assert_equal from yt import load_uniform_grid def test_variable_dx(): np.random.seed(0x4D3D3D3) data = {"density": np.random.random((128, 128, 128))} cell_widths = [] for _ in range(3): cw = np.random.random(128) cw /= cw.sum() cell_widths.append(cw) ds = load_uniform_grid( data, [128, 128, 128], bbox=np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]), cell_widths=cell_widths, ) # We now check that we get all of our original cell widths back out, and # only those cell widths assert_equal(np.unique(ds.index.grids[0]["index", "dx"]).size, 128) assert_equal(ds.index.grids[0]["index", "dx"][:, 0, 0], cell_widths[0]) assert_equal(np.unique(ds.index.grids[0]["index", "dx"]).size, 128) assert_equal(ds.index.grids[0]["index", "dy"][0, :, 0], 
cell_widths[1]) assert_equal(np.unique(ds.index.grids[0]["index", "dx"]).size, 128) assert_equal(ds.index.grids[0]["index", "dz"][0, 0, :], cell_widths[2]) assert_equal(np.unique(ds.index.grids[0]["index", "x"]).size, 128) center_x = np.add.accumulate(cell_widths[0]) - 0.5 * cell_widths[0] assert_equal(center_x, ds.index.grids[0]["index", "x"][:, 0, 0]) assert_equal(np.unique(ds.index.grids[0]["index", "y"]).size, 128) center_y = np.add.accumulate(cell_widths[1]) - 0.5 * cell_widths[1] assert_equal(center_y, ds.index.grids[0]["index", "y"][0, :, 0]) assert_equal(np.unique(ds.index.grids[0]["index", "z"]).size, 128) center_z = np.add.accumulate(cell_widths[2]) - 0.5 * cell_widths[2] assert_equal(center_z, ds.index.grids[0]["index", "z"][0, 0, :]) assert_almost_equal(ds.r[:].sum(("index", "cell_volume")), ds.domain_width.prod()) for ax in "xyz": dd = ds.all_data() p = dd.integrate("ones", axis=ax) assert_almost_equal(p["index", "ones"].max().d, 1.0) assert_almost_equal(p["index", "ones"].min().d, 1.0) @pytest.fixture def data_cell_widths_N16(): np.random.seed(0x4D3D3D3) N = 16 data = {"density": np.random.random((N, N, N))} cell_widths = [] for _ in range(3): cw = np.random.random(N) cw /= cw.sum() cell_widths.append(cw) return (data, cell_widths) def test_cell_width_type(data_cell_widths_N16): # checks that cell widths are properly upcast to float64 (this errors # if that is not the case). data, cell_widths = data_cell_widths_N16 cell_widths = [cw.astype(np.float32) for cw in cell_widths] ds = load_uniform_grid( data, data["density"].shape, bbox=np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]), cell_widths=cell_widths, ) _ = ds.slice(0, ds.domain_center[0])[("stream", "density")] def test_cell_width_dimensionality(data_cell_widths_N16): data, cell_widths = data_cell_widths_N16 # single np array in list should error with pytest.raises(ValueError, match="The number of elements in cell_widths"): _ = load_uniform_grid( data, data["density"].shape, bbox=np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]), cell_widths=[cell_widths[0]], ) # mismatched shapes should error with pytest.raises(ValueError, match="The number of elements in cell_widths"): _ = load_uniform_grid( data, data["density"].shape, bbox=np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]), cell_widths=[cell_widths[1:]], ) def test_cell_width_with_nproc(data_cell_widths_N16): data, cell_widths = data_cell_widths_N16 ds = load_uniform_grid( data, data["density"].shape, bbox=np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]), cell_widths=cell_widths, nprocs=4, ) assert ds.index.num_grids == 4 # check that it successfully decomposed grid = ds.index.grids[0] n_cells = np.prod(grid.shape) assert n_cells == data["density"].size / 4 # and try a selection c = (grid.RightEdge + grid.LeftEdge) / 2.0 reg = ds.region(c, grid.LeftEdge, grid.RightEdge) assert reg["gas", "density"].size == n_cells ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/stream/tests/test_stream_unstructured.py0000644000175100001770000000350414714401662024415 0ustar00runnerdockerimport numpy as np from yt import SlicePlot, load_unstructured_mesh def test_multi_mesh(): coordsMulti = np.array( [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]], dtype=np.float64 ) connect1 = np.array( [ [0, 1, 3], ], dtype=np.int64, ) connect2 = np.array( [ [1, 2, 3], ], dtype=np.int64, ) data1 = {} data2 = {} data1["connect1", "test"] = np.array( [ [0.0, 1.0, 3.0], ], dtype=np.float64, ) data2["connect2", "test"] = np.array( [ [1.0, 2.0, 
3.0], ], dtype=np.float64, ) connectList = [connect1, connect2] dataList = [data1, data2] ds = load_unstructured_mesh(connectList, coordsMulti, dataList) sl = SlicePlot(ds, "z", ("connect1", "test")) assert sl.data_source.field_data["connect1", "test"].shape == (1, 3) sl = SlicePlot(ds, "z", ("connect2", "test")) assert sl.data_source.field_data["connect2", "test"].shape == (1, 3) sl = SlicePlot(ds, "z", ("all", "test")) assert sl.data_source.field_data["all", "test"].shape == (2, 3) sl.annotate_mesh_lines() def test_multi_field(): coords = np.array( [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]], dtype=np.float64 ) connect = np.array([[0, 1, 3], [1, 2, 3]], dtype=np.int64) data = {} data["connect1", "test"] = np.array( [[0.0, 1.0, 3.0], [1.0, 2.0, 3.0]], dtype=np.float64 ) data["connect1", "testAgain"] = np.array( [[0.0, 1.0, 3.0], [1.0, 2.0, 3.0]], dtype=np.float64 ) ds = load_unstructured_mesh(connect, coords, data) sl = SlicePlot(ds, "z", ("connect1", "test")) sl.annotate_mesh_lines() sl = SlicePlot(ds, "z", ("connect1", "testAgain")) sl.annotate_mesh_lines() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/stream/tests/test_update_data.py0000644000175100001770000000155614714401662022553 0ustar00runnerdockerimport numpy as np from yt.data_objects.profiles import create_profile from yt.testing import fake_particle_ds, fake_random_ds def test_update_data_grid(): ds = fake_random_ds(64, nprocs=8) ds.index dims = (32, 32, 32) grid_data = [ {"temperature": np.random.uniform(size=dims)} for i in range(ds.index.num_grids) ] ds.index.update_data(grid_data) prj = ds.proj(("gas", "temperature"), 2) prj["gas", "temperature"] dd = ds.all_data() profile = create_profile(dd, ("gas", "density"), ("gas", "temperature"), 10) profile["gas", "temperature"] def test_update_data_particle(): npart = 100 ds = fake_particle_ds(npart=npart) part_data = {"temperature": np.random.rand(npart)} ds.index.update_data(part_data) assert ("io", "temperature") in ds.field_list dd = ds.all_data() dd["io", "temperature"] ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3471527 yt-4.4.0/yt/frontends/swift/0000755000175100001770000000000014714401715015356 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/swift/__init__.py0000644000175100001770000000000014714401662017456 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/swift/api.py0000644000175100001770000000021714714401662016502 0ustar00runnerdockerfrom yt.frontends.sph.fields import SPHFieldInfo from . 
import tests from .data_structures import SwiftDataset from .io import IOHandlerSwift ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/swift/data_structures.py0000644000175100001770000001702114714401662021146 0ustar00runnerdockerimport numpy as np from yt.data_objects.static_output import ParticleFile from yt.frontends.sph.data_structures import SPHDataset, SPHParticleIndex from yt.funcs import only_on_root from yt.utilities.logger import ytLogger as mylog from yt.utilities.on_demand_imports import _h5py as h5py from .fields import SwiftFieldInfo class SwiftParticleFile(ParticleFile): pass class SwiftDataset(SPHDataset): _load_requirements = ["h5py"] _index_class = SPHParticleIndex _field_info_class = SwiftFieldInfo _file_class = SwiftParticleFile _particle_mass_name = "Masses" _particle_coordinates_name = "Coordinates" _particle_velocity_name = "Velocities" _sph_ptypes = ("PartType0",) _suffix = ".hdf5" def __init__( self, filename, dataset_type="swift", storage_filename=None, units_override=None, unit_system="cgs", default_species_fields=None, ): super().__init__( filename, dataset_type, units_override=units_override, unit_system=unit_system, default_species_fields=default_species_fields, ) self.storage_filename = storage_filename def _set_code_unit_attributes(self): """ Sets the units from the SWIFT internal unit system. Currently sets length, mass, time, and temperature. SWIFT uses comoving coordinates without the usual h-factors. """ units = self._get_info_attributes("Units") if self.cosmological_simulation == 1: msg = "Assuming length units are in comoving centimetres" only_on_root(mylog.info, msg) self.length_unit = self.quan( float(units["Unit length in cgs (U_L)"]), "cmcm" ) else: msg = "Assuming length units are in physical centimetres" only_on_root(mylog.info, msg) self.length_unit = self.quan(float(units["Unit length in cgs (U_L)"]), "cm") self.mass_unit = self.quan(float(units["Unit mass in cgs (U_M)"]), "g") self.time_unit = self.quan(float(units["Unit time in cgs (U_t)"]), "s") self.temperature_unit = self.quan( float(units["Unit temperature in cgs (U_T)"]), "K" ) return def _get_info_attributes(self, dataset): """ Gets the information from a header-style dataset and returns it as a python dictionary. Example: self._get_info_attributes(header) returns a dictionary of all of the information in the Header.attrs. """ with h5py.File(self.filename, mode="r") as handle: header = dict(handle[dataset].attrs) return header def _parse_parameter_file(self): """ Parse the SWIFT "parameter file" -- really this actually reads info from the main HDF5 file as everything is replicated there and usually parameterfiles are not transported. The header information from the HDF5 file is stored in an un-parsed format in self.parameters should users wish to use it. """ # Read from the HDF5 file, this gives us all the info we need. The rest # of this function is just parsing. 
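# For orientation, what _get_info_attributes boils down to for any of the
# header-style groups read below; a minimal sketch using the same on-demand
# h5py import as this module (the function name and the default filename are
# illustrative only, not yt API).
def _sketch_read_group_attrs(filename="snapshot.hdf5", group="Header"):
    from yt.utilities.on_demand_imports import _h5py as h5py

    with h5py.File(filename, mode="r") as handle:
        return dict(handle[group].attrs)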
header = self._get_info_attributes("Header") # RuntimePars were removed from snapshots at SWIFT commit 6271388 # between SWIFT versions 0.8.5 and 0.9.0 with h5py.File(self.filename, mode="r") as handle: has_runtime_pars = "RuntimePars" in handle.keys() if has_runtime_pars: runtime_parameters = self._get_info_attributes("RuntimePars") else: runtime_parameters = {} policy = self._get_info_attributes("Policy") # These are the parameterfile parameters from *.yml at runtime parameters = self._get_info_attributes("Parameters") # Not used in this function, but passed to parameters hydro = self._get_info_attributes("HydroScheme") subgrid = self._get_info_attributes("SubgridScheme") self.domain_right_edge = header["BoxSize"] self.domain_left_edge = np.zeros_like(self.domain_right_edge) self.dimensionality = int(header["Dimension"]) # SWIFT is either all periodic, or not periodic at all if has_runtime_pars: periodic = int(runtime_parameters["PeriodicBoundariesOn"]) else: periodic = int(parameters["InitialConditions:periodic"]) if periodic: self._periodicity = [True] * self.dimensionality else: self._periodicity = [False] * self.dimensionality # Units get attached to this self.current_time = float(header["Time"]) # Now cosmology enters the fray, as a runtime parameter. self.cosmological_simulation = int(policy["cosmological integration"]) if self.cosmological_simulation: try: self.current_redshift = float(header["Redshift"]) # These won't be present if self.cosmological_simulation is false self.omega_lambda = float(parameters["Cosmology:Omega_lambda"]) # Cosmology:Omega_m parameter deprecated at SWIFT commit d2783c2 # Between SWIFT versions 0.9.0 and 1.0.0 if "Cosmology:Omega_cdm" in parameters: self.omega_matter = float(parameters["Cosmology:Omega_b"]) + float( parameters["Cosmology:Omega_cdm"] ) else: self.omega_matter = float(parameters["Cosmology:Omega_m"]) # This is "little h" self.hubble_constant = float(parameters["Cosmology:h"]) except KeyError: mylog.warning( "Could not find cosmology information in Parameters, " "despite having ran with -c signifying a cosmological " "run." ) mylog.info("Setting up as a non-cosmological run. Check this!") self.cosmological_simulation = 0 self.current_redshift = 0.0 self.omega_lambda = 0.0 self.omega_matter = 0.0 self.hubble_constant = 0.0 else: self.current_redshift = 0.0 self.omega_lambda = 0.0 self.omega_matter = 0.0 self.hubble_constant = 0.0 # Store the un-parsed information should people want it. self.parameters = { "header": header, "policy": policy, "parameters": parameters, # NOTE: runtime_parameters may be empty "runtime_parameters": runtime_parameters, "hydro": hydro, "subgrid": subgrid, } # SWIFT never has multi file snapshots self.file_count = 1 self.filename_template = self.parameter_filename return @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: """ Checks to see if the file is a valid output from SWIFT. This requires the file to have the Code attribute set in the Header dataset to "SWIFT". 
""" if cls._missing_load_requirements(): return False valid = True # Attempt to open the file, if it's not a hdf5 then this will fail: try: handle = h5py.File(filename, mode="r") valid = handle["Header"].attrs["Code"].decode("utf-8") == "SWIFT" handle.close() except (OSError, KeyError): valid = False return valid ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/swift/fields.py0000644000175100001770000000204014714401662017173 0ustar00runnerdockerfrom yt.frontends.sph.fields import SPHFieldInfo class SwiftFieldInfo(SPHFieldInfo): def __init__(self, ds, field_list, slice_info=None): self.known_particle_fields += ( ( "InternalEnergies", ("code_specific_energy", ["specific_thermal_energy"], None), ), ("Densities", ("code_mass / code_length**3", ["density"], None)), ("SmoothingLengths", ("code_length", ["smoothing_length"], None)), ) super().__init__(ds, field_list, slice_info) def setup_particle_fields(self, ptype, *args, **kwargs): super().setup_particle_fields(ptype, *args, **kwargs) if ptype in ("PartType0", "Gas"): self.setup_gas_particle_fields(ptype) def setup_gas_particle_fields(self, ptype): self.alias((ptype, "temperature"), (ptype, "Temperatures")) self.alias(("gas", "temperature"), (ptype, "Temperatures")) for ax in ("x", "y", "z"): self.alias((ptype, ax), (ptype, "particle_position_" + ax)) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/swift/io.py0000644000175100001770000001416014714401662016342 0ustar00runnerdockerimport numpy as np from yt.frontends.sph.io import IOHandlerSPH from yt.utilities.on_demand_imports import _h5py as h5py class IOHandlerSwift(IOHandlerSPH): _dataset_type = "swift" def __init__(self, ds, *args, **kwargs): super().__init__(ds, *args, **kwargs) def _read_fluid_selection(self, chunks, selector, fields, size): raise NotImplementedError # NOTE: we refer to sub_files in the next sections, these sub_files may # actually be full data_files. # In the event data_files are too big, yt breaks them up into sub_files and # we sort of treat them as files in the chunking system def _read_particle_coords(self, chunks, ptf): # This will read chunks and yield the results. # yt has the concept of sub_files, i.e, we break up big files into # virtual sub_files to deal with the chunking system for sub_file in self._sorted_chunk_iterator(chunks): si, ei = sub_file.start, sub_file.end f = h5py.File(sub_file.filename, mode="r") # This double-reads for ptype in sorted(ptf): if sub_file.total_particles[ptype] == 0: continue pos = f[f"/{ptype}/Coordinates"][si:ei, :] pos = pos.astype("float64", copy=False) if ptype == self.ds._sph_ptypes[0]: hsml = self._get_smoothing_length(sub_file) else: hsml = 0.0 yield ptype, (pos[:, 0], pos[:, 1], pos[:, 2]), hsml f.close() def _yield_coordinates(self, sub_file, needed_ptype=None): si, ei = sub_file.start, sub_file.end f = h5py.File(sub_file.filename, mode="r") pcount = f["/Header"].attrs["NumPart_ThisFile"][:].astype("int64") np.clip(pcount - si, 0, ei - si, out=pcount) pcount = pcount.sum() for key in f.keys(): if ( not key.startswith("PartType") or "Coordinates" not in f[key] or needed_ptype and key != needed_ptype ): continue pos = f[key]["Coordinates"][si:ei, ...] 
pos = pos.astype("float64", copy=False) yield key, pos f.close() def _get_smoothing_length(self, sub_file, pdtype=None, pshape=None): # We do not need the pdtype and the pshape, but some frontends do so we # accept them and then just ignore them ptype = self.ds._sph_ptypes[0] ind = int(ptype[-1]) si, ei = sub_file.start, sub_file.end with h5py.File(sub_file.filename, mode="r") as f: pcount = f["/Header"].attrs["NumPart_ThisFile"][ind].astype("int64") pcount = np.clip(pcount - si, 0, ei - si) keys = f[ptype].keys() # SWIFT commit a94cc81 changed from "SmoothingLength" to "SmoothingLengths" # between SWIFT versions 0.8.2 and 0.8.3 if "SmoothingLengths" in keys: hsml = f[ptype]["SmoothingLengths"][si:ei, ...] else: hsml = f[ptype]["SmoothingLength"][si:ei, ...] # we upscale to float64 hsml = hsml.astype("float64", copy=False) return hsml def _read_particle_data_file(self, sub_file, ptf, selector=None): # note: this frontend uses the variable name and terminology sub_file. # other frontends use data_file with the understanding that it may # actually be a sub_file, hence the super()._read_datafile is called # ._read_datafile instead of ._read_subfile return_data = {} si, ei = sub_file.start, sub_file.end f = h5py.File(sub_file.filename, mode="r") for ptype, field_list in sorted(ptf.items()): if sub_file.total_particles[ptype] == 0: continue g = f[f"/{ptype}"] # this should load as float64 coords = g["Coordinates"][si:ei] if ptype == "PartType0": hsmls = self._get_smoothing_length(sub_file) else: hsmls = 0.0 if selector: mask = selector.select_points( coords[:, 0], coords[:, 1], coords[:, 2], hsmls ) del coords if selector and mask is None: continue for field in field_list: if field in ("Mass", "Masses"): data = g[self.ds._particle_mass_name][si:ei] else: data = g[field][si:ei] if selector: data = data[mask, ...] 
data.astype("float64", copy=False) return_data[ptype, field] = data f.close() return return_data def _count_particles(self, data_file): si, ei = data_file.start, data_file.end f = h5py.File(data_file.filename, mode="r") pcount = f["/Header"].attrs["NumPart_ThisFile"][:].astype("int64") f.close() # if this data_file was a sub_file, then we just extract the region # defined by the subfile if None not in (si, ei): np.clip(pcount - si, 0, ei - si, out=pcount) npart = {f"PartType{i}": v for i, v in enumerate(pcount)} return npart def _identify_fields(self, data_file): f = h5py.File(data_file.filename, mode="r") fields = [] cname = self.ds._particle_coordinates_name # Coordinates mname = self.ds._particle_mass_name # Coordinates for key in f.keys(): if not key.startswith("PartType"): continue g = f[key] if cname not in g: continue ptype = str(key) for k in g.keys(): kk = k if str(kk) == mname: fields.append((ptype, "Mass")) continue if not hasattr(g[kk], "shape"): continue if len(g[kk].shape) > 1: self._vector_fields[kk] = g[kk].shape[1] fields.append((ptype, str(kk))) f.close() return fields, {} ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.351153 yt-4.4.0/yt/frontends/swift/tests/0000755000175100001770000000000014714401715016520 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/swift/tests/__init__.py0000644000175100001770000000000014714401662020620 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/swift/tests/test_outputs.py0000644000175100001770000000735514714401662021667 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_almost_equal from yt import load from yt.frontends.swift.api import SwiftDataset from yt.testing import ParticleSelectionComparison, requires_file, requires_module from yt.utilities.on_demand_imports import _h5py as h5py keplerian_ring = "KeplerianRing/keplerian_ring_0020.hdf5" EAGLE_6 = "EAGLE_6/eagle_0005.hdf5" # Combined the tests for loading a file and ensuring the units have been # implemented correctly to save time on re-loading a dataset @requires_module("h5py") @requires_file(keplerian_ring) def test_non_cosmo_dataset(): ds = load(keplerian_ring) assert type(ds) is SwiftDataset field = ("gas", "density") ad = ds.all_data() yt_density = ad[field] yt_coords = ad[field[0], "position"] # load some data the old fashioned way fh = h5py.File(ds.parameter_filename, mode="r") part_data = fh["PartType0"] # set up a conversion factor by loading the unit mas and unit length in cm, # and then converting to proper coordinates units = fh["Units"] units = dict(units.attrs) density_factor = float(units["Unit mass in cgs (U_M)"]) density_factor /= float(units["Unit length in cgs (U_L)"]) ** 3 # now load the raw density and coordinates raw_density = part_data["Density"][:].astype("float64") * density_factor raw_coords = part_data["Coordinates"][:].astype("float64") fh.close() # sort by the positions - yt often loads in a different order ind_raw = np.lexsort((raw_coords[:, 2], raw_coords[:, 1], raw_coords[:, 0])) ind_yt = np.lexsort((yt_coords[:, 2], yt_coords[:, 1], yt_coords[:, 0])) raw_density = raw_density[ind_raw] yt_density = yt_density[ind_yt] # make sure we are comparing fair units assert str(yt_density.units) == "g/cm**3" # make sure the actual values are the same assert_almost_equal(yt_density.d, raw_density) @requires_module("h5py") 
@requires_file(keplerian_ring) def test_non_cosmo_dataset_selection(): ds = load(keplerian_ring) psc = ParticleSelectionComparison(ds) psc.run_defaults() @requires_module("h5py") @requires_file(EAGLE_6) def test_cosmo_dataset(): ds = load(EAGLE_6) assert type(ds) is SwiftDataset field = ("gas", "density") ad = ds.all_data() yt_density = ad[field] yt_coords = ad[field[0], "position"] # load some data the old-fashioned way fh = h5py.File(ds.parameter_filename, mode="r") part_data = fh["PartType0"] # set up a conversion factor by loading the unit mass and unit length in cm, # and then converting to proper coordinates units = fh["Units"] units = dict(units.attrs) density_factor = float(units["Unit mass in cgs (U_M)"]) density_factor /= float(units["Unit length in cgs (U_L)"]) ** 3 # add the redshift factor header = fh["Header"] header = dict(header.attrs) density_factor *= (1.0 + float(header["Redshift"])) ** 3 # now load the raw density and coordinates raw_density = part_data["Density"][:].astype("float64") * density_factor raw_coords = part_data["Coordinates"][:].astype("float64") fh.close() # sort by the positions - yt often loads in a different order ind_raw = np.lexsort((raw_coords[:, 2], raw_coords[:, 1], raw_coords[:, 0])) ind_yt = np.lexsort((yt_coords[:, 2], yt_coords[:, 1], yt_coords[:, 0])) raw_density = raw_density[ind_raw] yt_density = yt_density[ind_yt] # make sure we are comparing in the same units assert str(yt_density.units) == "g/cm**3" # make sure the actual values are the same assert_almost_equal(yt_density.d, raw_density) @requires_module("h5py") @requires_file(EAGLE_6) def test_cosmo_dataset_selection(): ds = load(EAGLE_6) psc = ParticleSelectionComparison(ds) psc.run_defaults() ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.351153 yt-4.4.0/yt/frontends/tipsy/0000755000175100001770000000000014714401715015372 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/tipsy/__init__.py0000644000175100001770000000000014714401662017472 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/tipsy/api.py0000644000175100001770000000020614714401662016514 0ustar00runnerdockerfrom . 
import tests from .data_structures import TipsyDataset from .fields import TipsyFieldInfo from .io import IOHandlerTipsyBinary ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/tipsy/data_structures.py0000644000175100001770000003205014714401662021161 0ustar00runnerdockerimport glob import os import struct import numpy as np from yt.data_objects.static_output import ParticleFile from yt.frontends.sph.data_structures import SPHDataset, SPHParticleIndex from yt.utilities.cosmology import Cosmology from yt.utilities.physical_constants import G from yt.utilities.physical_ratios import cm_per_kpc from .fields import TipsyFieldInfo class TipsyFile(ParticleFile): def __init__(self, ds, io, filename, file_id, range=None): super().__init__(ds, io, filename, file_id, range) if not hasattr(io, "_field_list"): io._create_dtypes(self) # Check automatically what the domain size is io._update_domain(self) self._calculate_offsets(io._field_list) def _calculate_offsets(self, field_list, pcounts=None): self.field_offsets = self.io._calculate_particle_offsets(self, None) class TipsyDataset(SPHDataset): _index_class = SPHParticleIndex _file_class = TipsyFile _field_info_class = TipsyFieldInfo _particle_mass_name = "Mass" _particle_coordinates_name = "Coordinates" _sph_ptypes = ("Gas",) _header_spec = ( ("time", "d"), ("nbodies", "i"), ("ndim", "i"), ("nsph", "i"), ("ndark", "i"), ("nstar", "i"), ("dummy", "i"), ) def __init__( self, filename, dataset_type="tipsy", field_dtypes=None, unit_base=None, parameter_file=None, cosmology_parameters=None, index_order=None, index_filename=None, kdtree_filename=None, kernel_name=None, bounding_box=None, units_override=None, unit_system="cgs", default_species_fields=None, ): # Because Tipsy outputs don't have a fixed domain boundary, one can # specify a bounding box which effectively gives a domain_left_edge # and domain_right_edge self.bounding_box = bounding_box self.filter_bbox = bounding_box is not None if field_dtypes is None: field_dtypes = {} success, self.endian = self._validate_header(filename) if not success: print("SOMETHING HAS GONE WRONG. NBODIES != SUM PARTICLES.") print( "{} != (sum == {} + {} + {})".format( self.parameters["nbodies"], self.parameters["nsph"], self.parameters["ndark"], self.parameters["nstar"], ) ) print("Often this can be fixed by changing the 'endian' parameter.") print("This defaults to '>' but may in fact be '<'.") raise RuntimeError self.storage_filename = None # My understanding is that dtypes are set on a field by field basis, # not on a (particle type, field) basis self._field_dtypes = field_dtypes self._unit_base = unit_base or {} self._cosmology_parameters = cosmology_parameters if parameter_file is not None: parameter_file = os.path.abspath(parameter_file) self._param_file = parameter_file filename = os.path.abspath(filename) if units_override is not None: raise RuntimeError( "units_override is not supported for TipsyDataset. " + "Use unit_base instead." ) super().__init__( filename, dataset_type=dataset_type, unit_system=unit_system, index_order=index_order, index_filename=index_filename, kdtree_filename=kdtree_filename, kernel_name=kernel_name, default_species_fields=default_species_fields, ) def __str__(self): return os.path.basename(self.parameter_filename) def _parse_parameter_file(self): # Parsing the header of the tipsy file, from this we obtain # the snapshot time and particle counts. 
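# For orientation, the header read performed below, reduced to a minimal
# sketch: the smallest valid tipsy header is 28 bytes, one double plus five
# 32-bit ints (the full _header_spec also carries a trailing "dummy" int).
# The helper name is illustrative only, not yt API.
def _sketch_peek_tipsy_header(filename, endian="<"):
    import struct

    with open(filename, "rb") as f:
        # returns (time, nbodies, ndim, nsph, ndark, nstar)
        return struct.unpack(endian + "diiiii", f.read(28))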
f = open(self.parameter_filename, "rb") hh = self.endian + "".join(str(b) for a, b in self._header_spec) hvals = { a: c for (a, b), c in zip( self._header_spec, struct.unpack(hh, f.read(struct.calcsize(hh))), strict=True, ) } self.parameters.update(hvals) self._header_offset = f.tell() # These are always true, for now. self.dimensionality = 3 self.refine_by = 2 self.parameters["HydroMethod"] = "sph" # Read in parameter file, if available. if self._param_file is None: pfn = glob.glob(os.path.join(self.directory, "*.param")) assert len(pfn) < 2, "More than one param file is in the data directory" if pfn == []: pfn = None else: pfn = pfn[0] else: pfn = self._param_file if pfn is not None: for line in (l.strip() for l in open(pfn)): # skip comment lines and blank lines l = line.strip() if l.startswith("#") or l == "": continue # parse parameters according to tipsy parameter type param, val = (i.strip() for i in line.split("=", 1)) val = val.split("#")[0] if param.startswith("n") or param.startswith("i"): val = int(val) elif param.startswith("d"): val = float(val) elif param.startswith("b"): val = bool(float(val)) self.parameters[param] = val self.current_time = hvals["time"] self.domain_dimensions = np.ones(3, "int32") periodic = self.parameters.get("bPeriodic", True) period = self.parameters.get("dPeriod", None) self._periodicity = (periodic, periodic, periodic) self.cosmological_simulation = float( self.parameters.get("bComove", self._cosmology_parameters is not None) ) if self.cosmological_simulation and period is None: period = 1.0 if self.bounding_box is None: if periodic and period is not None: # If we are periodic, that sets our domain width to # either 1 or dPeriod. self.domain_left_edge = np.zeros(3, "float64") - 0.5 * period self.domain_right_edge = np.zeros(3, "float64") + 0.5 * period else: self.domain_left_edge = None self.domain_right_edge = None else: # This ensures that we know a bounding box has been applied self._domain_override = True bbox = np.array(self.bounding_box, dtype="float64") if bbox.shape == (2, 3): bbox = bbox.transpose() self.domain_left_edge = bbox[:, 0] self.domain_right_edge = bbox[:, 1] # If the cosmology parameters dictionary got set when data is # loaded, we can assume it's a cosmological data set if self.cosmological_simulation == 1.0: cosm = self._cosmology_parameters or {} # In comoving simulations, time stores the scale factor a self.scale_factor = hvals["time"] dcosm = { "current_redshift": (1.0 / self.scale_factor) - 1.0, "omega_lambda": self.parameters.get( "dLambda", cosm.get("omega_lambda", 0.0) ), "omega_matter": self.parameters.get( "dOmega0", cosm.get("omega_matter", 0.0) ), "hubble_constant": self.parameters.get( "dHubble0", cosm.get("hubble_constant", 1.0) ), } for param in dcosm.keys(): pval = dcosm[param] setattr(self, param, pval) else: kpc_unit = self.parameters.get("dKpcUnit", 1.0) self._unit_base["cm"] = 1.0 / (kpc_unit * cm_per_kpc) self.filename_template = self.parameter_filename self.file_count = 1 f.close() def _set_derived_attrs(self): if self.bounding_box is None and ( self.domain_left_edge is None or self.domain_right_edge is None ): self.domain_left_edge = np.array([np.nan, np.nan, np.nan]) self.domain_right_edge = np.array([np.nan, np.nan, np.nan]) self.index super()._set_derived_attrs() def _set_code_unit_attributes(self): # First try to set units based on parameter file if self.cosmological_simulation: mu = self.parameters.get("dMsolUnit", 1.0) self.mass_unit = self.quan(mu, "Msun") lu = self.parameters.get("dKpcUnit", 
1000.0) # In cosmological runs, lengths are stored as length*scale_factor self.length_unit = self.quan(lu, "kpc") * self.scale_factor density_unit = self.mass_unit / (self.length_unit / self.scale_factor) ** 3 if "dHubble0" in self.parameters: # Gasoline's internal hubble constant, dHubble0, is stored in # units of proper code time self.hubble_constant *= np.sqrt(G * density_unit) # Finally, we scale the hubble constant by 100 km/s/Mpc self.hubble_constant /= self.quan(100, "km/s/Mpc") # If we leave it as a YTQuantity, the cosmology object # used below will add units back on. self.hubble_constant = self.hubble_constant.to_value("") else: mu = self.parameters.get("dMsolUnit", 1.0) self.mass_unit = self.quan(mu, "Msun") lu = self.parameters.get("dKpcUnit", 1.0) self.length_unit = self.quan(lu, "kpc") # If unit base is defined by the user, override all relevant units if self._unit_base is not None: for my_unit in ["length", "mass", "time"]: if my_unit in self._unit_base: my_val = self._unit_base[my_unit] my_val = ( self.quan(*my_val) if isinstance(my_val, tuple) else self.quan(my_val) ) setattr(self, f"{my_unit}_unit", my_val) # Finally, set the dependent units if self.cosmological_simulation: cosmo = Cosmology( hubble_constant=self.hubble_constant, omega_matter=self.omega_matter, omega_lambda=self.omega_lambda, ) self.current_time = cosmo.lookback_time(self.current_redshift, 1e6) # mass units are rho_crit(z=0) * domain volume mu = ( cosmo.critical_density(0.0) * (1 + self.current_redshift) ** 3 * self.length_unit**3 ) self.mass_unit = self.quan(mu.in_units("Msun"), "Msun") density_unit = self.mass_unit / (self.length_unit / self.scale_factor) ** 3 # need to do this again because we've modified the hubble constant self.unit_registry.modify("h", self.hubble_constant) else: density_unit = self.mass_unit / self.length_unit**3 if not hasattr(self, "time_unit"): self.time_unit = 1.0 / np.sqrt(density_unit * G) @staticmethod def _validate_header(filename): """ This method automatically detects whether the tipsy file is big/little endian and is not corrupt/invalid. It returns a tuple of (Valid, endianswap) where Valid is a boolean that is true if the file is a tipsy file, and endianswap is the endianness character '>' or '<'. """ try: f = open(filename, "rb") except Exception: return False, 1 try: f.seek(0, os.SEEK_END) fs = f.tell() f.seek(0, os.SEEK_SET) # Read in the header t, n, ndim, ng, nd, ns = struct.unpack("<diiiii", f.read(28)) except (OSError, struct.error): return False, 0 endianswap = "<" # Check Endianness: ndim must be 1, 2 or 3 for a valid tipsy file if ndim < 1 or ndim > 3: endianswap = ">" f.seek(0) t, n, ndim, ng, nd, ns = struct.unpack(">diiiii", f.read(28)) # File is borked if this is true. The header is 28 bytes, and may # Be followed by a 4 byte pad. Next comes gas particles, which use # 48 bytes, followed by 36 bytes per dark matter particle, and 44 bytes # per star particle. If positions are stored as doubles, each of these # sizes is increased by 12 bytes. 
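# A worked example of the size check that follows, as a hypothetical helper
# (not yt API): with float positions and no pad, 10 gas, 20 dark and 5 star
# particles give exactly 28 + 48*10 + 36*20 + 44*5 = 1448 bytes.
def _sketch_expected_tipsy_size(ng, nd, ns, doubles=False, pad=False):
    per_gas, per_dark, per_star = (60, 48, 56) if doubles else (48, 36, 44)
    return (32 if pad else 28) + per_gas * ng + per_dark * nd + per_star * ns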
if ( fs != 28 + 48 * ng + 36 * nd + 44 * ns and fs != 28 + 60 * ng + 48 * nd + 56 * ns and fs != 32 + 48 * ng + 36 * nd + 44 * ns and fs != 32 + 60 * ng + 48 * nd + 56 * ns ): f.close() return False, 0 f.close() return True, endianswap @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: return TipsyDataset._validate_header(filename)[0] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/tipsy/definitions.py0000644000175100001770000000011114714401662020251 0ustar00runnerdockernpart_mapping = {"Gas": "nsph", "DarkMatter": "ndark", "Stars": "nstar"} ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/tipsy/fields.py0000644000175100001770000000373414714401662017222 0ustar00runnerdockerfrom yt.frontends.sph.fields import SPHFieldInfo class TipsyFieldInfo(SPHFieldInfo): known_particle_fields = SPHFieldInfo.known_particle_fields + ( ("smoothing_length", ("code_length", [], None)), ) aux_particle_fields = { "uDotFB": ("uDotFB", ("code_mass * code_velocity**2", [""], None)), "uDotAV": ("uDotAV", ("code_mass * code_velocity**2", [""], None)), "uDotPdV": ("uDotPdV", ("code_mass * code_velocity**2", [""], None)), "uDotHydro": ("uDotHydro", ("code_mass * code_velocity**2", [""], None)), "uDotDiff": ("uDotDiff", ("code_mass * code_velocity**2", [""], None)), "uDot": ("uDot", ("code_mass * code_velocity**2", [""], None)), "coolontime": ("coolontime", ("code_time", [""], None)), "timeform": ("timeform", ("code_time", [""], None)), "massform": ("massform", ("code_mass", [""], None)), "HI": ("HI", ("dimensionless", ["H_fraction"], None)), "HII": ("HII", ("dimensionless", ["H_p1_fraction"], None)), "HeI": ("HeI", ("dimensionless", ["He_fraction"], None)), "HeII": ("HeII", ("dimensionless", ["He_p2_fraction"], None)), "OxMassFrac": ("OxMassFrac", ("dimensionless", ["O_fraction"], None)), "FeMassFrac": ("FeMassFrac", ("dimensionless", ["Fe_fraction"], None)), "c": ("c", ("code_velocity", [""], None)), "acc": ("acc", ("code_velocity / code_time", [""], None)), "accg": ("accg", ("code_velocity / code_time", [""], None)), "smoothlength": ("smoothlength", ("code_length", ["smoothing_length"], None)), } def __init__(self, ds, field_list, slice_info=None): for field in field_list: if ( field[1] in self.aux_particle_fields.keys() and self.aux_particle_fields[field[1]] not in self.known_particle_fields ): self.known_particle_fields += (self.aux_particle_fields[field[1]],) super().__init__(ds, field_list, slice_info) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/tipsy/io.py0000644000175100001770000004704714714401662016370 0ustar00runnerdockerimport glob import os import struct import numpy as np from yt.frontends.sph.io import IOHandlerSPH from yt.frontends.tipsy.definitions import npart_mapping from yt.utilities.lib.particle_kdtree_tools import generate_smoothing_length from yt.utilities.logger import ytLogger as mylog class IOHandlerTipsyBinary(IOHandlerSPH): _dataset_type = "tipsy" _vector_fields = {"Coordinates": 3, "Velocity": 3, "Velocities": 3} _pdtypes = None # dtypes, to be filled in later _aux_pdtypes = None # auxiliary files' dtypes _ptypes = ("Gas", "DarkMatter", "Stars") _aux_fields = None _fields = ( ("Gas", "Mass"), ("Gas", "Coordinates"), ("Gas", "Velocities"), ("Gas", "Density"), ("Gas", "Temperature"), ("Gas", "Epsilon"), ("Gas", "Metals"), ("Gas", "Phi"), ("DarkMatter", "Mass"), 
("DarkMatter", "Coordinates"), ("DarkMatter", "Velocities"), ("DarkMatter", "Epsilon"), ("DarkMatter", "Phi"), ("Stars", "Mass"), ("Stars", "Coordinates"), ("Stars", "Velocities"), ("Stars", "Metals"), ("Stars", "FormationTime"), ("Stars", "Epsilon"), ("Stars", "Phi"), ) def __init__(self, *args, **kwargs): self._aux_fields = [] super().__init__(*args, **kwargs) def _read_fluid_selection(self, chunks, selector, fields, size): raise NotImplementedError def _fill_fields(self, fields, vals, hsml, mask, data_file): if mask is None: size = 0 elif isinstance(mask, slice): if fields[0] == "smoothing_length": size = hsml.size else: size = vals[fields[0]].size else: size = mask.sum() rv = {} for field in fields: mylog.debug("Allocating %s values for %s", size, field) if field in self._vector_fields: rv[field] = np.empty((size, 3), dtype="float64") if size == 0: continue rv[field][:, 0] = vals[field]["x"][mask] rv[field][:, 1] = vals[field]["y"][mask] rv[field][:, 2] = vals[field]["z"][mask] elif field == "smoothing_length": rv[field] = hsml[mask] else: rv[field] = np.empty(size, dtype="float64") if size == 0: continue rv[field][:] = vals[field][mask] if field == "Coordinates": eps = np.finfo(rv[field].dtype).eps for i in range(3): rv[field][:, i] = np.clip( rv[field][:, i], self.ds.domain_left_edge[i].v + eps, self.ds.domain_right_edge[i].v - eps, ) return rv def _read_particle_coords(self, chunks, ptf): chunksize = self.ds.index.chunksize for data_file in self._sorted_chunk_iterator(chunks): poff = data_file.field_offsets tp = data_file.total_particles f = open(data_file.filename, "rb") for ptype in sorted(ptf, key=lambda a, poff=poff: poff.get(a, -1)): if data_file.total_particles[ptype] == 0: continue f.seek(poff[ptype]) total = 0 while total < tp[ptype]: count = min(chunksize, tp[ptype] - total) p = np.fromfile(f, self._pdtypes[ptype], count=count) total += p.size d = [p["Coordinates"][ax].astype("float64") for ax in "xyz"] del p if ptype == self.ds._sph_ptypes[0]: hsml = self._read_smoothing_length(data_file, count) else: hsml = 0.0 yield ptype, d, hsml @property def hsml_filename(self): return f"{self.ds.parameter_filename}-{'hsml'}" def _generate_smoothing_length(self, index): if os.path.exists(self.hsml_filename): with open(self.hsml_filename, "rb") as f: file_hash = struct.unpack("q", f.read(struct.calcsize("q")))[0] if file_hash != self.ds._file_hash: os.remove(self.hsml_filename) else: return positions = [] for data_file in index.data_files: for _, ppos in self._yield_coordinates( data_file, needed_ptype=self.ds._sph_ptypes[0] ): positions.append(ppos) if positions == []: return kdtree = index.kdtree positions = np.concatenate(positions)[kdtree.idx] hsml = generate_smoothing_length(positions, kdtree, self.ds._num_neighbors) hsml = hsml[np.argsort(kdtree.idx)] dtype = self._pdtypes["Gas"]["Coordinates"][0] with open(self.hsml_filename, "wb") as f: f.write(struct.pack("q", self.ds._file_hash)) f.write(hsml.astype(dtype).tobytes()) def _read_smoothing_length(self, data_file, count): dtype = self._pdtypes["Gas"]["Coordinates"][0] with open(self.hsml_filename, "rb") as f: f.seek(struct.calcsize("q") + data_file.start * dtype.itemsize) hsmls = np.fromfile(f, dtype, count=count) return hsmls.astype("float64") def _get_smoothing_length(self, data_file, dtype, shape): return self._read_smoothing_length(data_file, shape[0]) def _read_particle_data_file(self, data_file, ptf, selector=None): from numpy.lib.recfunctions import append_fields return_data = {} poff = data_file.field_offsets 
aux_fields_offsets = self._calculate_particle_offsets_aux(data_file) tp = data_file.total_particles f = open(data_file.filename, "rb") # we need to open all aux files for chunking to work _aux_fh = {} def aux_fh(afield): if afield not in _aux_fh: _aux_fh[afield] = open(data_file.filename + "." + afield, "rb") return _aux_fh[afield] for ptype, field_list in sorted(ptf.items(), key=lambda a: poff.get(a[0], -1)): if data_file.total_particles[ptype] == 0: continue f.seek(poff[ptype]) afields = list(set(field_list).intersection(self._aux_fields)) count = min(self.ds.index.chunksize, tp[ptype]) p = np.fromfile(f, self._pdtypes[ptype], count=count) auxdata = [] for afield in afields: aux_fh(afield).seek(aux_fields_offsets[afield][ptype]) if isinstance(self._aux_pdtypes[afield], np.dtype): auxdata.append( np.fromfile( aux_fh(afield), self._aux_pdtypes[afield], count=count ) ) else: aux_fh(afield).seek(0) sh = aux_fields_offsets[afield][ptype] if tp[ptype] > 0: aux = np.genfromtxt( aux_fh(afield), skip_header=sh, max_rows=count ) if aux.ndim < 1: aux = np.array([aux]) auxdata.append(aux) if afields: p = append_fields(p, afields, auxdata) if ptype == "Gas": hsml = self._read_smoothing_length(data_file, count) else: hsml = 0.0 if selector is None or getattr(selector, "is_all_data", False): mask = slice(None, None, None) else: x = p["Coordinates"]["x"].astype("float64") y = p["Coordinates"]["y"].astype("float64") z = p["Coordinates"]["z"].astype("float64") mask = selector.select_points(x, y, z, hsml) del x, y, z if mask is None: continue tf = self._fill_fields(field_list, p, hsml, mask, data_file) for field in field_list: return_data[ptype, field] = tf.pop(field) # close all file handles f.close() for fh in _aux_fh.values(): fh.close() return return_data def _update_domain(self, data_file): """ This method is used to determine the size needed for a box that will bound the particles. It simply finds the largest position of the whole set of particles, and sets the domain to +/- that value. """ ds = data_file.ds ind = 0 # NOTE: # We hardcode this value here because otherwise we get into a # situation where we require the existence of index before we # can successfully instantiate it, or where we are calling it # from within its instantiation. # # Because this value is not propagated later on, and does not # impact the construction of the bitmap indices, it should be # acceptable to just use a reasonable number here. chunksize = 64**3 # Check to make sure that the domain hasn't already been set # by the parameter file if np.all(np.isfinite(ds.domain_left_edge)) and np.all( np.isfinite(ds.domain_right_edge) ): return with open(data_file.filename, "rb") as f: ds.domain_left_edge = 0 ds.domain_right_edge = 0 f.seek(ds._header_offset) mi = np.array([1e30, 1e30, 1e30], dtype="float64") ma = -np.array([1e30, 1e30, 1e30], dtype="float64") for ptype in self._ptypes: # We'll just add the individual types separately count = data_file.total_particles[ptype] if count == 0: continue stop = ind + count while ind < stop: c = min(chunksize, stop - ind) pp = np.fromfile(f, dtype=self._pdtypes[ptype], count=c) np.minimum( mi, [ pp["Coordinates"]["x"].min(), pp["Coordinates"]["y"].min(), pp["Coordinates"]["z"].min(), ], mi, ) np.maximum( ma, [ pp["Coordinates"]["x"].max(), pp["Coordinates"]["y"].max(), pp["Coordinates"]["z"].max(), ], ma, ) ind += c # We extend by 1%. 
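# Standalone sketch (illustrative data, not part of yt) of the running min/max
# accumulation used in _update_domain above: np.minimum/np.maximum with an
# explicit output array fold each chunk into the running bounds without
# allocating temporaries.
import numpy as np

_example_mi = np.full(3, 1e30)
_example_ma = np.full(3, -1e30)
for _example_chunk in np.array_split(
    np.random.default_rng(0).normal(size=(1000, 3)), 4
):
    np.minimum(_example_mi, _example_chunk.min(axis=0), _example_mi)
    np.maximum(_example_ma, _example_chunk.max(axis=0), _example_ma)
# _example_mi/_example_ma now bound every particle, mirroring mi/ma above.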
            DW = ma - mi
            mi -= 0.01 * DW
            ma += 0.01 * DW
            ds.domain_left_edge = ds.arr(mi, "code_length")
            ds.domain_right_edge = ds.arr(ma, "code_length")
            ds.domain_width = DW = ds.domain_right_edge - ds.domain_left_edge
            ds.unit_registry.add(
                "unitary", float(DW.max() * DW.units.base_value), DW.units.dimensions
            )

    def _yield_coordinates(self, data_file, needed_ptype=None):
        with open(data_file.filename, "rb") as f:
            poff = data_file.field_offsets
            for ptype in self._ptypes:
                if ptype not in poff:
                    continue
                f.seek(poff[ptype])
                if needed_ptype is not None and ptype != needed_ptype:
                    continue
                # We'll just add the individual types separately
                count = data_file.total_particles[ptype]
                if count == 0:
                    continue
                pp = np.fromfile(f, dtype=self._pdtypes[ptype], count=count)
                mis = np.empty(3, dtype="float64")
                mas = np.empty(3, dtype="float64")
                for axi, ax in enumerate("xyz"):
                    mi = pp["Coordinates"][ax].min()
                    ma = pp["Coordinates"][ax].max()
                    mylog.debug("Spanning: %0.3e .. %0.3e in %s", mi, ma, ax)
                    mis[axi] = mi
                    mas[axi] = ma
                pos = np.empty((pp.size, 3), dtype="float64")
                for i, ax in enumerate("xyz"):
                    pos[:, i] = pp["Coordinates"][ax]
                yield ptype, pos

    def _count_particles(self, data_file):
        pcount = np.array(
            [
                data_file.ds.parameters["nsph"],
                data_file.ds.parameters["nstar"],
                data_file.ds.parameters["ndark"],
            ]
        )
        si, ei = data_file.start, data_file.end
        if None not in (si, ei):
            np.clip(pcount - si, 0, ei - si, out=pcount)
        ptypes = ["Gas", "Stars", "DarkMatter"]
        npart = dict(zip(ptypes, pcount, strict=True))
        return npart

    @classmethod
    def _compute_dtypes(cls, field_dtypes, endian="<"):
        pds = {}
        for ptype, field in cls._fields:
            dtbase = field_dtypes.get(field, "f")
            ff = f"{endian}{dtbase}"
            if field in cls._vector_fields:
                dt = (field, [("x", ff), ("y", ff), ("z", ff)])
            else:
                dt = (field, ff)
            pds.setdefault(ptype, []).append(dt)
        pdtypes = {}
        for ptype in pds:
            pdtypes[ptype] = np.dtype(pds[ptype])
        return pdtypes

    def _create_dtypes(self, data_file):
        # We can just look at the particle counts.
        self._header_offset = data_file.ds._header_offset
        self._pdtypes = self._compute_dtypes(
            data_file.ds._field_dtypes, data_file.ds.endian
        )
        self._field_list = []
        for ptype, field in self._fields:
            if data_file.total_particles[ptype] == 0:
                # We do not want our _pdtypes to have empty particles.
                self._pdtypes.pop(ptype, None)
                continue
            self._field_list.append((ptype, field))
        if "Gas" in self._pdtypes.keys():
            self._field_list.append(("Gas", "smoothing_length"))

        # Find out which auxiliaries we have and what is their format
        tot_parts = np.sum(
            [
                data_file.ds.parameters["nsph"],
                data_file.ds.parameters["nstar"],
                data_file.ds.parameters["ndark"],
            ]
        )
        endian = data_file.ds.endian
        self._aux_pdtypes = {}
        self._aux_fields = []
        for f in glob.glob(data_file.filename + ".*"):
            afield = f.rsplit(".")[-1]
            filename = data_file.filename + "." + afield
            if not os.path.exists(filename):
                continue
            if afield in ["log", "parameter", "kdtree"]:
                # Amiga halo finder makes files like this; we need to ignore them
                continue
            self._aux_fields.append(afield)
        skip_afields = []
        for afield in self._aux_fields:
            filename = data_file.filename + "." + afield
            # We need to do some fairly ugly detection to see what format the
            # auxiliary files are in. They can be either ascii or binary, and
            # the binary files can be either floats, ints, or doubles. We're
            # going to use a try-catch cascade to determine the format.
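# Hedged sketch (illustrative helper, not yt's API) of the size-based detection
# performed below. Layout assumption: a binary aux file holds a 4-byte particle
# count followed by one 4- or 8-byte value per particle; anything else is
# treated as ascii with a particle count on its first line.
import os

import numpy as np

def _example_guess_aux_format(filename, tot_parts, endian="<"):
    filesize = os.stat(filename).st_size
    if (filesize - 4) / 8 == tot_parts:
        return np.dtype([("aux", endian + "d")])  # binary doubles
    if (filesize - 4) / 4 == tot_parts:
        return np.dtype([("aux", endian + "f")])  # binary floats (ints for i*)
    return "ascii"  # fall back to text; the header count is validated later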
filesize = os.stat(filename).st_size dtype = np.dtype(endian + "i4") tot_parts_from_file = np.fromfile(filename, dtype, count=1) if tot_parts_from_file != tot_parts: with open(filename, "rb") as f: header_nparts = f.readline() try: header_nparts = int(header_nparts) except ValueError: skip_afields.append(afield) continue if int(header_nparts) != tot_parts: raise RuntimeError self._aux_pdtypes[afield] = "ascii" elif (filesize - 4) / 8 == tot_parts: self._aux_pdtypes[afield] = np.dtype([("aux", endian + "d")]) elif (filesize - 4) / 4 == tot_parts: if afield.startswith("i"): self._aux_pdtypes[afield] = np.dtype([("aux", endian + "i")]) else: self._aux_pdtypes[afield] = np.dtype([("aux", endian + "f")]) else: skip_afields.append(afield) for afield in skip_afields: self._aux_fields.remove(afield) # Add the auxiliary fields to each ptype we have for ptype in self._ptypes: if any(ptype == field[0] for field in self._field_list): self._field_list += [(ptype, afield) for afield in self._aux_fields] return self._field_list def _identify_fields(self, data_file): return self._field_list, {} def _calculate_particle_offsets(self, data_file, pcounts): # This computes the offsets for each particle type into a "data_file." # Note that the term "data_file" here is a bit overloaded, and also refers to a # "chunk" of particles inside a data file. # data_file.start represents the *particle count* that we should start at. # # At this point, data_file will have the total number of particles # that this chunk represents located in the property total_particles. # Because in tipsy files the particles are stored sequentially, we can # figure out where each one starts. # We first figure out the global offsets, then offset them by the count # and size of each individual particle type. field_offsets = {} # Initialize pos to the point the first particle type would start pos = data_file.ds._header_offset global_offsets = {} field_offsets = {} for ptype in self._ptypes: if ptype not in self._pdtypes: # This means we don't have any, I think, and so we shouldn't # stick it in the offsets. continue # Note that much of this will be computed redundantly; future # refactorings could fix this. 
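# Minimal sketch (hypothetical helper) of the sequential-offset arithmetic this
# method implements: particle blocks sit back to back on disk, so each type's
# offset is the header size plus the bytes of every preceding type.
def _example_sequential_offsets(header_offset, blocks):
    # blocks: iterable of (ptype, npart, itemsize) in on-disk order
    offsets, pos = {}, header_offset
    for ptype, npart, itemsize in blocks:
        offsets[ptype] = pos
        pos += npart * itemsize
    return offsets

# _example_sequential_offsets(32, [("Gas", 100, 48), ("DarkMatter", 200, 36)])
# returns {"Gas": 32, "DarkMatter": 4832}.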
global_offsets[ptype] = pos size = self._pdtypes[ptype].itemsize npart = self.ds.parameters[npart_mapping[ptype]] # Get the offset into just this particle type, and start at data_file.start if npart > data_file.start: field_offsets[ptype] = pos + size * data_file.start pos += npart * size return field_offsets def _calculate_particle_offsets_aux(self, data_file): aux_fields_offsets = {} params = self.ds.parameters for afield in self._aux_fields: aux_fields_offsets[afield] = {} if isinstance(self._aux_pdtypes[afield], np.dtype): pos = 4 # i4 size = np.dtype(self._aux_pdtypes[afield]).itemsize else: pos = 1 size = 1 for i, ptype in enumerate(self._ptypes): if data_file.total_particles[ptype] == 0: continue elif params[npart_mapping[ptype]] > self.ds.index.chunksize: for j in range(i): npart = params[npart_mapping[self._ptypes[j]]] if npart > self.ds.index.chunksize: pos += npart * size pos += data_file.start * size aux_fields_offsets[afield][ptype] = pos pos += data_file.total_particles[ptype] * size return aux_fields_offsets ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.351153 yt-4.4.0/yt/frontends/tipsy/tests/0000755000175100001770000000000014714401715016534 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/tipsy/tests/__init__.py0000644000175100001770000000000014714401662020634 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/tipsy/tests/test_outputs.py0000644000175100001770000000673114714401662021700 0ustar00runnerdockerfrom collections import OrderedDict from yt.frontends.tipsy.api import TipsyDataset from yt.testing import ParticleSelectionComparison, requires_file from yt.utilities.answer_testing.framework import ( data_dir_load, nbody_answer, requires_ds, sph_answer, ) _fields = OrderedDict( [ (("all", "particle_mass"), None), (("all", "particle_ones"), None), (("all", "particle_velocity_x"), ("all", "particle_mass")), (("all", "particle_velocity_y"), ("all", "particle_mass")), (("all", "particle_velocity_z"), ("all", "particle_mass")), ] ) pkdgrav = "halo1e11_run1.00400/halo1e11_run1.00400" pkdgrav_cosmology_parameters = { "current_redshift": 0.0, "omega_lambda": 0.728, "omega_matter": 0.272, "hubble_constant": 0.702, } pkdgrav_kwargs = { "field_dtypes": {"Coordinates": "d"}, "cosmology_parameters": pkdgrav_cosmology_parameters, "unit_base": {"length": (60.0, "Mpccm/h")}, } @requires_ds(pkdgrav, big_data=True, file_check=True) def test_pkdgrav(): ds = data_dir_load(pkdgrav, TipsyDataset, (), kwargs=pkdgrav_kwargs) yield from nbody_answer(ds, "halo1e11_run1.00400", 26847360, _fields) psc = ParticleSelectionComparison(ds) psc.run_defaults() gasoline_dmonly = "agora_1e11.00400/agora_1e11.00400" @requires_ds(gasoline_dmonly, big_data=True, file_check=True) def test_gasoline_dmonly(): cosmology_parameters = { "current_redshift": 0.0, "omega_lambda": 0.728, "omega_matter": 0.272, "hubble_constant": 0.702, } kwargs = { "cosmology_parameters": cosmology_parameters, "unit_base": {"length": (60.0, "Mpccm/h")}, } ds = data_dir_load(gasoline_dmonly, TipsyDataset, (), kwargs) yield from nbody_answer(ds, "agora_1e11.00400", 10550576, _fields) psc = ParticleSelectionComparison(ds) psc.run_defaults() tg_sph_fields = OrderedDict( [ (("gas", "density"), None), (("gas", "temperature"), None), (("gas", "temperature"), ("gas", "density")), (("gas", "velocity_magnitude"), None), (("gas", 
"Fe_fraction"), None), ] ) tg_nbody_fields = OrderedDict( [ (("Stars", "Metals"), None), ] ) tipsy_gal = "TipsyGalaxy/galaxy.00300" @requires_ds(tipsy_gal) def test_tipsy_galaxy(): ds = data_dir_load( tipsy_gal, kwargs={"bounding_box": [[-2000, 2000], [-2000, 2000], [-2000, 2000]]}, ) # These tests should be re-enabled. But the holdup is that the region # selector does not offset by domain_left_edge, and we have inelegant # selection using bboxes. # psc = ParticleSelectionComparison(ds) # psc.run_defaults() for test in sph_answer(ds, "galaxy.00300", 315372, tg_sph_fields): test_tipsy_galaxy.__name__ = test.description yield test for test in nbody_answer(ds, "galaxy.00300", 315372, tg_nbody_fields): test_tipsy_galaxy.__name__ = test.description yield test @requires_file(gasoline_dmonly) @requires_file(pkdgrav) def test_TipsyDataset(): assert isinstance(data_dir_load(pkdgrav, kwargs=pkdgrav_kwargs), TipsyDataset) assert isinstance(data_dir_load(gasoline_dmonly), TipsyDataset) @requires_file(tipsy_gal) def test_tipsy_index(): ds = data_dir_load(tipsy_gal) sl = ds.slice("z", 0.0) assert sl["gas", "density"].shape[0] != 0 @requires_file(tipsy_gal) def test_tipsy_smoothing_length(): ds = data_dir_load(tipsy_gal) _ = ds.all_data()["Gas", "smoothing_length"] ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.351153 yt-4.4.0/yt/frontends/ytdata/0000755000175100001770000000000014714401715015510 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ytdata/__init__.py0000644000175100001770000000004514714401662017621 0ustar00runnerdocker""" API for ytData frontend. """ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ytdata/api.py0000644000175100001770000000104014714401662016627 0ustar00runnerdockerfrom . 
import tests from .data_structures import ( YTClumpContainer, YTClumpTreeDataset, YTDataContainerDataset, YTGrid, YTGridDataset, YTGridHierarchy, YTNonspatialDataset, YTNonspatialGrid, YTNonspatialHierarchy, YTProfileDataset, YTSpatialPlotDataset, ) from .fields import YTDataContainerFieldInfo, YTGridFieldInfo from .io import ( IOHandlerYTDataContainerHDF5, IOHandlerYTGridHDF5, IOHandlerYTNonspatialhdf5, IOHandlerYTSpatialPlotHDF5, ) from .utilities import save_as_dataset ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ytdata/data_structures.py0000644000175100001770000010727414714401662021312 0ustar00runnerdockerimport os import weakref from collections import defaultdict from functools import cached_property from numbers import Number as numeric_type import numpy as np from yt.data_objects.index_subobjects.grid_patch import AMRGridPatch from yt.data_objects.profiles import ( Profile1DFromDataset, Profile2DFromDataset, Profile3DFromDataset, ) from yt.data_objects.static_output import Dataset, ParticleFile from yt.data_objects.unions import ParticleUnion from yt.fields.field_exceptions import NeedsGridType from yt.fields.field_info_container import FieldInfoContainer from yt.funcs import is_root, parse_h5_attr from yt.geometry.api import Geometry from yt.geometry.geometry_handler import Index from yt.geometry.grid_geometry_handler import GridIndex from yt.geometry.particle_geometry_handler import ParticleIndex from yt.units import dimensions from yt.units._numpy_wrapper_functions import uconcatenate from yt.units.unit_registry import UnitRegistry # type: ignore from yt.units.yt_array import YTQuantity from yt.utilities.exceptions import GenerationInProgress, YTFieldTypeNotFound from yt.utilities.logger import ytLogger as mylog from yt.utilities.on_demand_imports import _h5py as h5py from yt.utilities.parallel_tools.parallel_analysis_interface import parallel_root_only from yt.utilities.tree_container import TreeContainer from .fields import YTDataContainerFieldInfo, YTGridFieldInfo _grid_data_containers = ["arbitrary_grid", "covering_grid", "smoothed_covering_grid"] _set_attrs = {"periodicity": "_periodicity"} class SavedDataset(Dataset): """ Base dataset class for products of calling save_as_dataset. """ geometry = Geometry.CARTESIAN _con_attrs: tuple[str, ...] = () def _parse_parameter_file(self): self.refine_by = 2 with h5py.File(self.parameter_filename, mode="r") as f: for key in f.attrs.keys(): v = parse_h5_attr(f, key) if key == "con_args": try: v = eval(v) except ValueError: # support older ytdata outputs v = v.astype("str") except NameError: # This is the most common error we expect, and it # results from having the eval do a concatenated decoded # set of the values. 
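# Standalone illustration (made-up values) of the bytes-vs-str wrinkle handled
# here: older ytdata files store string attributes as raw bytes, so restoring
# con_args requires an explicit utf8 decode, as in the line that follows.
_example_raw_attrs = [b"sphere", b"center"]
_example_decoded = [_.decode("utf8") for _ in _example_raw_attrs]
assert _example_decoded == ["sphere", "center"]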
v = [_.decode("utf8") for _ in v] self.parameters[key] = v self._with_parameter_file_open(f) # if saved, restore unit registry from the json string if "unit_registry_json" in self.parameters: self.unit_registry = UnitRegistry.from_json( self.parameters["unit_registry_json"] ) # reset self.arr and self.quan to use new unit_registry self._arr = None self._quan = None for dim in [ "length", "mass", "pressure", "temperature", "time", "velocity", ]: cu = "code_" + dim if cu not in self.unit_registry: self.unit_registry.add(cu, 1.0, getattr(dimensions, dim)) if "code_magnetic" not in self.unit_registry: self.unit_registry.add( "code_magnetic", 0.1**0.5, dimensions.magnetic_field_cgs ) # if saved, set unit system if "unit_system_name" in self.parameters: unit_system = self.parameters["unit_system_name"] del self.parameters["unit_system_name"] else: unit_system = "cgs" # reset unit system since we may have a new unit registry self._assign_unit_system(unit_system) # assign units to parameters that have associated unit string del_pars = [] for par in self.parameters: ustr = f"{par}_units" if ustr in self.parameters: if isinstance(self.parameters[par], np.ndarray): to_u = self.arr else: to_u = self.quan self.parameters[par] = to_u(self.parameters[par], self.parameters[ustr]) del_pars.append(ustr) for par in del_pars: del self.parameters[par] for attr in self._con_attrs: sattr = _set_attrs.get(attr, attr) if sattr == "geometry": if "geometry" in self.parameters: self.geometry = Geometry(self.parameters["geometry"]) continue try: setattr(self, sattr, self.parameters.get(attr)) except TypeError: # some Dataset attributes are properties with setters # which may not accept None as an input pass def _with_parameter_file_open(self, f): # This allows subclasses to access the parameter file # while it's still open to get additional information. 
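# Sketch of the template-method hook pattern used here (illustrative classes,
# not part of yt): the base class opens the file once and calls a no-op hook
# that concrete datasets override to pull extra data while the handle is open.
class _ExampleBase:
    def _load(self, f):
        self._with_parameter_file_open(f)

    def _with_parameter_file_open(self, f):
        pass  # default: nothing extra to read

class _ExampleChild(_ExampleBase):
    def _with_parameter_file_open(self, f):
        self.num_elements = f["num_elements"]  # read before the file closes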
pass def set_units(self): if "unit_registry_json" in self.parameters: self._set_code_unit_attributes() del self.parameters["unit_registry_json"] else: super().set_units() def _set_code_unit_attributes(self): attrs = ( "length_unit", "mass_unit", "time_unit", "velocity_unit", "magnetic_unit", ) cgs_units = ("cm", "g", "s", "cm/s", "gauss") base_units = np.ones(len(attrs)) for unit, attr, cgs_unit in zip(base_units, attrs, cgs_units, strict=True): if attr in self.parameters and isinstance( self.parameters[attr], YTQuantity ): uq = self.parameters[attr] elif attr in self.parameters and f"{attr}_units" in self.parameters: uq = self.quan(self.parameters[attr], self.parameters[f"{attr}_units"]) del self.parameters[attr] del self.parameters[f"{attr}_units"] elif isinstance(unit, str): uq = self.quan(1.0, unit) elif isinstance(unit, numeric_type): uq = self.quan(unit, cgs_unit) elif isinstance(unit, YTQuantity): uq = unit elif isinstance(unit, tuple): uq = self.quan(unit[0], unit[1]) else: raise RuntimeError(f"{attr} ({unit}) is invalid.") setattr(self, attr, uq) class YTDataset(SavedDataset): """Base dataset class for all ytdata datasets.""" _con_attrs = ( "cosmological_simulation", "current_time", "current_redshift", "hubble_constant", "omega_matter", "omega_lambda", "dimensionality", "domain_dimensions", "geometry", "periodicity", "domain_left_edge", "domain_right_edge", "container_type", "data_type", ) def _with_parameter_file_open(self, f): self.num_particles = { group: parse_h5_attr(f[group], "num_elements") for group in f if group != self.default_fluid_type } def create_field_info(self): self.field_dependencies = {} self.derived_field_list = [] self.filtered_particle_types = [] self.field_info = self._field_info_class(self, self.field_list) self.coordinates.setup_fields(self.field_info) self.field_info.setup_fluid_fields() for ptype in self.particle_types: self.field_info.setup_particle_fields(ptype) self._setup_gas_alias() self.field_info.setup_fluid_index_fields() if "all" not in self.particle_types: mylog.debug("Creating Particle Union 'all'") pu = ParticleUnion("all", list(self.particle_types_raw)) self.add_particle_union(pu) self.field_info.setup_extra_union_fields() mylog.debug("Loading field plugins.") self.field_info.load_all_plugins() deps, unloaded = self.field_info.check_derived_fields() self.field_dependencies.update(deps) def _setup_gas_alias(self): pass def _setup_override_fields(self): pass class YTDataHDF5File(ParticleFile): def __init__(self, ds, io, filename, file_id, range): with h5py.File(filename, mode="r") as f: self.header = {field: parse_h5_attr(f, field) for field in f.attrs.keys()} super().__init__(ds, io, filename, file_id, range) class YTDataContainerDataset(YTDataset): """Dataset for saved geometric data containers.""" _load_requirements = ["h5py"] _index_class = ParticleIndex _file_class = YTDataHDF5File _field_info_class: type[FieldInfoContainer] = YTDataContainerFieldInfo _suffix = ".h5" fluid_types = ("grid", "gas", "deposit", "index") def __init__( self, filename, dataset_type="ytdatacontainer_hdf5", index_order=None, index_filename=None, units_override=None, unit_system="cgs", ): self.index_order = index_order self.index_filename = index_filename super().__init__( filename, dataset_type, units_override=units_override, unit_system=unit_system, ) def _parse_parameter_file(self): super()._parse_parameter_file() self.particle_types_raw = tuple(self.num_particles.keys()) self.particle_types = self.particle_types_raw self.filename_template = 
self.parameter_filename
        self.file_count = 1
        self.domain_dimensions = np.ones(3, "int32")

    def _setup_gas_alias(self):
        "Alias the grid type to gas by making a particle union."
        if "grid" in self.particle_types and "gas" not in self.particle_types:
            pu = ParticleUnion("gas", ["grid"])
            self.add_particle_union(pu)
        # We have to alias this because particle unions only
        # cover the field_list.
        self.field_info.alias(("gas", "cell_volume"), ("grid", "cell_volume"))

    @cached_property
    def data(self):
        """
        Return a data container configured like the original used to
        create this dataset.
        """
        # Some data containers can't be reconstructed in the same way
        # since this is now particle-like data.
        data_type = self.parameters.get("data_type")
        container_type = self.parameters.get("container_type")
        ex_container_type = ["cutting", "quad_proj", "ray", "slice", "cut_region"]
        if data_type == "yt_light_ray" or container_type in ex_container_type:
            mylog.info("Returning an all_data data container.")
            return self.all_data()

        my_obj = getattr(self, self.parameters["container_type"])
        my_args = [self.parameters[con_arg] for con_arg in self.parameters["con_args"]]
        return my_obj(*my_args)

    @classmethod
    def _is_valid(cls, filename: str, *args, **kwargs) -> bool:
        if not filename.endswith(".h5"):
            return False
        if cls._missing_load_requirements():
            return False
        with h5py.File(filename, mode="r") as f:
            data_type = parse_h5_attr(f, "data_type")
            cont_type = parse_h5_attr(f, "container_type")
            if data_type is None:
                return False
            if (
                data_type == "yt_data_container"
                and cont_type not in _grid_data_containers
            ):
                return True
        return False


class YTDataLightRayDataset(YTDataContainerDataset):
    """Dataset for saved LightRay objects."""

    _load_requirements = ["h5py"]

    def _parse_parameter_file(self):
        super()._parse_parameter_file()
        self._restore_light_ray_solution()

    def _restore_light_ray_solution(self):
        """
        Restore all information associated with the light ray solution
        to its original form.
""" key = "light_ray_solution" self.light_ray_solution = [] lrs_fields = [ par for par in self.parameters if key in par and not par.endswith("_units") ] if len(lrs_fields) == 0: return self.light_ray_solution = [{} for val in self.parameters[lrs_fields[0]]] for sp3 in ["unique_identifier", "filename"]: ksp3 = f"{key}_{sp3}" if ksp3 not in lrs_fields: continue self.parameters[ksp3] = self.parameters[ksp3].astype(str) for field in lrs_fields: field_name = field[len(key) + 1 :] for i in range(self.parameters[field].shape[0]): self.light_ray_solution[i][field_name] = self.parameters[field][i] @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if not filename.endswith(".h5"): return False if cls._missing_load_requirements(): return False with h5py.File(filename, mode="r") as f: data_type = parse_h5_attr(f, "data_type") if data_type in ["yt_light_ray"]: return True return False class YTSpatialPlotDataset(YTDataContainerDataset): """Dataset for saved slices and projections.""" _load_requirements = ["h5py"] _field_info_class = YTGridFieldInfo def __init__(self, *args, **kwargs): super().__init__(*args, dataset_type="ytspatialplot_hdf5", **kwargs) def _parse_parameter_file(self): super()._parse_parameter_file() if self.parameters["container_type"] == "proj": if ( isinstance(self.parameters["weight_field"], str) and self.parameters["weight_field"] == "None" ): self.parameters["weight_field"] = None elif isinstance(self.parameters["weight_field"], np.ndarray): self.parameters["weight_field"] = tuple(self.parameters["weight_field"]) @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if not filename.endswith(".h5"): return False if cls._missing_load_requirements(): return False with h5py.File(filename, mode="r") as f: data_type = parse_h5_attr(f, "data_type") cont_type = parse_h5_attr(f, "container_type") if data_type == "yt_data_container" and cont_type in [ "cutting", "proj", "slice", "quad_proj", ]: return True return False class YTGrid(AMRGridPatch): _id_offset = 0 def __init__(self, gid, index, filename=None): AMRGridPatch.__init__(self, gid, filename=filename, index=index) self._children_ids = [] self._parent_id = -1 self.Level = 0 self.LeftEdge = self.index.ds.domain_left_edge self.RightEdge = self.index.ds.domain_right_edge def __getitem__(self, key): tr = super(AMRGridPatch, self).__getitem__(key) try: fields = self._determine_fields(key) except YTFieldTypeNotFound: return tr finfo = self.ds._get_field_info(fields[0]) if not finfo.sampling_type == "particle": return tr.reshape(self.ActiveDimensions[: self.ds.dimensionality]) return tr @property def Parent(self): return None @property def Children(self): return [] class YTDataHierarchy(GridIndex): def __init__(self, ds, dataset_type=None): self.dataset_type = dataset_type self.float_type = "float64" self.dataset = weakref.proxy(ds) self.directory = os.getcwd() super().__init__(ds, dataset_type) def _count_grids(self): self.num_grids = 1 def _parse_index(self): self.grid_dimensions[:] = self.ds.domain_dimensions self.grid_left_edge[:] = self.ds.domain_left_edge self.grid_right_edge[:] = self.ds.domain_right_edge self.grid_levels[:] = np.zeros(self.num_grids) self.grid_procs = np.zeros(self.num_grids) self.grid_particle_count[:] = sum(self.ds.num_particles.values()) self.grids = [] for gid in range(self.num_grids): self.grids.append(self.grid(gid, self)) self.grids[gid].Level = self.grid_levels[gid, 0] self.max_level = self.grid_levels.max() temp_grids = np.empty(self.num_grids, dtype="object") for i, 
grid in enumerate(self.grids): grid.filename = self.ds.parameter_filename grid._prepare_grid() grid.proc_num = self.grid_procs[i] temp_grids[i] = grid self.grids = temp_grids def _detect_output_fields(self): self.field_list = [] self.ds.field_units = self.ds.field_units or {} with h5py.File(self.ds.parameter_filename, mode="r") as f: for group in f: for field in f[group]: field_name = (str(group), str(field)) self.field_list.append(field_name) self.ds.field_units[field_name] = parse_h5_attr( f[group][field], "units" ) class YTGridHierarchy(YTDataHierarchy): grid = YTGrid def _populate_grid_objects(self): for g in self.grids: g._setup_dx() self.max_level = self.grid_levels.max() class YTGridDataset(YTDataset): """Dataset for saved covering grids, arbitrary grids, and FRBs.""" _load_requirements = ["h5py"] _index_class: type[Index] = YTGridHierarchy _field_info_class = YTGridFieldInfo _dataset_type = "ytgridhdf5" geometry = Geometry.CARTESIAN default_fluid_type = "grid" fluid_types: tuple[str, ...] = ("grid", "gas", "deposit", "index") def __init__(self, filename, unit_system="cgs"): super().__init__(filename, self._dataset_type, unit_system=unit_system) self.data = self.index.grids[0] def _parse_parameter_file(self): super()._parse_parameter_file() self.num_particles.pop(self.default_fluid_type, None) self.particle_types_raw = tuple(self.num_particles.keys()) self.particle_types = self.particle_types_raw # correct domain dimensions for the covering grid dimension self.base_domain_left_edge = self.domain_left_edge self.base_domain_right_edge = self.domain_right_edge self.base_domain_dimensions = self.domain_dimensions if self.container_type in _grid_data_containers: self.domain_left_edge = self.parameters["left_edge"] if "level" in self.parameters["con_args"]: dx = (self.base_domain_right_edge - self.base_domain_left_edge) / ( self.domain_dimensions * self.refine_by ** self.parameters["level"] ) self.domain_right_edge = ( self.domain_left_edge + self.parameters["ActiveDimensions"] * dx ) self.domain_dimensions = ( (self.domain_right_edge - self.domain_left_edge) / dx ).astype("int64") else: self.domain_right_edge = self.parameters["right_edge"] self.domain_dimensions = self.parameters["ActiveDimensions"] dx = ( self.domain_right_edge - self.domain_left_edge ) / self.domain_dimensions periodicity = ( np.abs(self.domain_left_edge - self.base_domain_left_edge) < 0.5 * dx ) periodicity &= ( np.abs(self.domain_right_edge - self.base_domain_right_edge) < 0.5 * dx ) self._periodicity = periodicity elif self.data_type == "yt_frb": dle = self.domain_left_edge self.domain_left_edge = uconcatenate( [self.parameters["left_edge"].to(dle.units), [0] * dle.uq] ) dre = self.domain_right_edge self.domain_right_edge = uconcatenate( [self.parameters["right_edge"].to(dre.units), [1] * dre.uq] ) self.domain_dimensions = np.concatenate( [self.parameters["ActiveDimensions"], [1]] ) def _setup_gas_alias(self): "Alias the grid type to gas with a field alias." 
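# Tiny illustration (made-up field list) of the aliasing below: every
# ("grid", name) field gains a ("gas", name) alias so that generic gas-field
# machinery resolves against the saved grid data.
_example_field_list = [("grid", "density"), ("grid", "temperature")]
_example_aliases = {("gas", name): ("grid", name) for _, name in _example_field_list}
assert _example_aliases[("gas", "density")] == ("grid", "density")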
for ftype, field in self.field_list: if ftype == "grid": self.field_info.alias(("gas", field), ("grid", field)) @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if not filename.endswith(".h5"): return False if cls._missing_load_requirements(): return False with h5py.File(filename, mode="r") as f: data_type = parse_h5_attr(f, "data_type") cont_type = parse_h5_attr(f, "container_type") if data_type == "yt_frb": return True if data_type == "yt_data_container" and cont_type in _grid_data_containers: return True return False class YTNonspatialGrid(AMRGridPatch): _id_offset = 0 def __init__(self, gid, index, filename=None): super().__init__(gid, filename=filename, index=index) self._children_ids = [] self._parent_id = -1 self.Level = 0 self.LeftEdge = self.index.ds.domain_left_edge self.RightEdge = self.index.ds.domain_right_edge def __getitem__(self, key): tr = super(AMRGridPatch, self).__getitem__(key) try: fields = self._determine_fields(key) except YTFieldTypeNotFound: return tr self.ds._get_field_info(fields[0]) return tr def get_data(self, fields=None): if fields is None: return nfields = [] apply_fields = defaultdict(list) for field in self._determine_fields(fields): if field[0] in self.ds.filtered_particle_types: f = self.ds.known_filters[field[0]] apply_fields[field[0]].append((f.filtered_type, field[1])) else: nfields.append(field) for filter_type in apply_fields: f = self.ds.known_filters[filter_type] with f.apply(self): self.get_data(apply_fields[filter_type]) fields = nfields if len(fields) == 0: return # Now we collect all our fields # Here is where we need to perform a validation step, so that if we # have a field requested that we actually *can't* yet get, we put it # off until the end. This prevents double-reading fields that will # need to be used in spatial fields later on. fields_to_get = [] # This will be pre-populated with spatial fields fields_to_generate = [] for field in self._determine_fields(fields): if field in self.field_data: continue finfo = self.ds._get_field_info(field) try: finfo.check_available(self) except NeedsGridType: fields_to_generate.append(field) continue fields_to_get.append(field) if len(fields_to_get) == 0 and len(fields_to_generate) == 0: return elif self._locked: raise GenerationInProgress(fields) # Track which ones we want in the end ofields = set(list(self.field_data.keys()) + fields_to_get + fields_to_generate) # At this point, we want to figure out *all* our dependencies. fields_to_get = self._identify_dependencies(fields_to_get, self._spatial) # We now split up into readers for the types of fields fluids, particles = [], [] finfos = {} for field_key in fields_to_get: finfo = self.ds._get_field_info(field_key) finfos[field_key] = finfo if finfo.sampling_type == "particle": particles.append(field_key) elif field_key not in fluids: fluids.append(field_key) # The _read method will figure out which fields it needs to get from # disk, and return a dict of those fields along with the fields that # need to be generated. 
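# Illustrative helper (not yt's API) for the read-or-defer contract described
# in the comment above: return what could be read from disk plus the names of
# the fields that must instead be generated afterwards.
def _example_read_or_defer(on_disk, requested):
    read = {name: on_disk[name] for name in requested if name in on_disk}
    to_generate = [name for name in requested if name not in on_disk]
    return read, to_generate

# _example_read_or_defer({"density": 1.0}, ["density", "velocity_magnitude"])
# returns ({"density": 1.0}, ["velocity_magnitude"]).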
read_fluids, gen_fluids = self.index._read_fluid_fields( fluids, self, self._current_chunk ) for f, v in read_fluids.items(): convert = True if v.dtype != np.float64: if finfos[f].units == "": self.field_data[f] = v convert = False else: v = v.astype(np.float64) if convert: self.field_data[f] = self.ds.arr(v, units=finfos[f].units) self.field_data[f].convert_to_units(finfos[f].output_units) read_particles, gen_particles = self.index._read_fluid_fields( particles, self, self._current_chunk ) for f, v in read_particles.items(): convert = True if v.dtype != np.float64: if finfos[f].units == "": self.field_data[f] = v convert = False else: v = v.astype(np.float64) if convert: self.field_data[f] = self.ds.arr(v, units=finfos[f].units) self.field_data[f].convert_to_units(finfos[f].output_units) fields_to_generate += gen_fluids + gen_particles self._generate_fields(fields_to_generate) for field in list(self.field_data.keys()): if field not in ofields: self.field_data.pop(field) @property def Parent(self): return None @property def Children(self): return [] class YTNonspatialHierarchy(YTDataHierarchy): grid = YTNonspatialGrid def _populate_grid_objects(self): for g in self.grids: g._setup_dx() # this is non-spatial, so remove the code_length units g.dds = self.ds.arr(g.dds.d, "") g.ActiveDimensions = self.ds.domain_dimensions self.max_level = self.grid_levels.max() def _read_fluid_fields(self, fields, dobj, chunk=None): if len(fields) == 0: return {}, [] fields_to_read, fields_to_generate = self._split_fields(fields) if len(fields_to_read) == 0: return {}, fields_to_generate selector = dobj.selector fields_to_return = self.io._read_fluid_selection(dobj, selector, fields_to_read) return fields_to_return, fields_to_generate class YTNonspatialDataset(YTGridDataset): """Dataset for general array data.""" _load_requirements = ["h5py"] _index_class = YTNonspatialHierarchy _field_info_class = YTGridFieldInfo _dataset_type = "ytnonspatialhdf5" geometry = Geometry.CARTESIAN default_fluid_type = "data" fluid_types: tuple[str, ...] 
= ("data", "gas") def _parse_parameter_file(self): super(YTGridDataset, self)._parse_parameter_file() self.num_particles.pop(self.default_fluid_type, None) self.particle_types_raw = tuple(self.num_particles.keys()) self.particle_types = self.particle_types_raw def _set_derived_attrs(self): # set some defaults just to make things go default_attrs = { "dimensionality": 3, "domain_dimensions": np.ones(3, dtype="int64"), "domain_left_edge": np.zeros(3), "domain_right_edge": np.ones(3), "_periodicity": np.ones(3, dtype="bool"), } for att, val in default_attrs.items(): if getattr(self, att, None) is None: setattr(self, att, val) def _setup_classes(self): # We don't allow geometric selection for non-spatial datasets self.objects = [] @parallel_root_only def print_key_parameters(self): for a in [ "current_time", "domain_dimensions", "domain_left_edge", "domain_right_edge", "cosmological_simulation", ]: v = getattr(self, a) if v is not None: mylog.info("Parameters: %-25s = %s", a, v) if hasattr(self, "cosmological_simulation") and self.cosmological_simulation: for a in [ "current_redshift", "omega_lambda", "omega_matter", "hubble_constant", ]: v = getattr(self, a) if v is not None: mylog.info("Parameters: %-25s = %s", a, v) mylog.warning("Geometric data selection not available for this dataset type.") @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if not filename.endswith(".h5"): return False if cls._missing_load_requirements(): return False with h5py.File(filename, mode="r") as f: data_type = parse_h5_attr(f, "data_type") if data_type == "yt_array_data": return True return False class YTProfileDataset(YTNonspatialDataset): """Dataset for saved profile objects.""" _load_requirements = ["h5py"] fluid_types = ("data", "gas", "standard_deviation") def __init__(self, filename, unit_system="cgs"): super().__init__(filename, unit_system=unit_system) _profile = None @property def profile(self): if self._profile is not None: return self._profile if self.dimensionality == 1: self._profile = Profile1DFromDataset(self) elif self.dimensionality == 2: self._profile = Profile2DFromDataset(self) elif self.dimensionality == 3: self._profile = Profile3DFromDataset(self) else: self._profile = None return self._profile def _parse_parameter_file(self): super(YTGridDataset, self)._parse_parameter_file() if ( isinstance(self.parameters["weight_field"], str) and self.parameters["weight_field"] == "None" ): self.parameters["weight_field"] = None elif isinstance(self.parameters["weight_field"], np.ndarray): self.parameters["weight_field"] = tuple( self.parameters["weight_field"].astype(str) ) for a in ["profile_dimensions"] + [ f"{ax}_{attr}" for ax in "xyz"[: self.dimensionality] for attr in ["log"] ]: setattr(self, a, self.parameters[a]) self.base_domain_left_edge = self.domain_left_edge self.base_domain_right_edge = self.domain_right_edge self.base_domain_dimensions = self.domain_dimensions domain_dimensions = np.ones(3, dtype="int64") domain_dimensions[: self.dimensionality] = self.profile_dimensions self.domain_dimensions = domain_dimensions domain_left_edge = np.zeros(3) domain_right_edge = np.ones(3) for i, ax in enumerate("xyz"[: self.dimensionality]): range_name = f"{ax}_range" my_range = self.parameters[range_name] if getattr(self, f"{ax}_log", False): my_range = np.log10(my_range) domain_left_edge[i] = my_range[0] domain_right_edge[i] = my_range[1] setattr(self, range_name, self.parameters[range_name]) bin_field = f"{ax}_field" if ( isinstance(self.parameters[bin_field], str) and 
self.parameters[bin_field] == "None" ): self.parameters[bin_field] = None elif isinstance(self.parameters[bin_field], np.ndarray): self.parameters[bin_field] = ( "data", self.parameters[bin_field].astype(str)[1], ) setattr(self, bin_field, self.parameters[bin_field]) self.domain_left_edge = domain_left_edge self.domain_right_edge = domain_right_edge def _setup_gas_alias(self): "Alias the grid type to gas with a field alias." for ftype, field in self.field_list: if ftype == "data": self.field_info.alias(("gas", field), (ftype, field)) def create_field_info(self): super().create_field_info() if self.parameters["weight_field"] is not None: self.field_info.alias( self.parameters["weight_field"], (self.default_fluid_type, "weight") ) def _set_derived_attrs(self): self.domain_center = 0.5 * (self.domain_right_edge + self.domain_left_edge) self.domain_width = self.domain_right_edge - self.domain_left_edge def print_key_parameters(self): if is_root(): mylog.info("YTProfileDataset") for a in ["dimensionality", "profile_dimensions"] + [ f"{ax}_{attr}" for ax in "xyz"[: self.dimensionality] for attr in ["field", "range", "log"] ]: v = getattr(self, a) mylog.info("Parameters: %-25s = %s", a, v) super().print_key_parameters() @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if not filename.endswith(".h5"): return False if cls._missing_load_requirements(): return False with h5py.File(filename, mode="r") as f: data_type = parse_h5_attr(f, "data_type") if data_type == "yt_profile": return True return False class YTClumpContainer(TreeContainer): def __init__( self, clump_id, global_id, parent_id, contour_key, contour_id, ds=None ): self.clump_id = clump_id self.global_id = global_id self.parent_id = parent_id self.contour_key = contour_key self.contour_id = contour_id self.parent = None self.ds = ds TreeContainer.__init__(self) def add_child(self, child): if self.children is None: self.children = [] self.children.append(child) child.parent = self def __repr__(self): return "Clump[%d]" % self.clump_id def __getitem__(self, field): g = self.ds.data f = g._determine_fields(field)[0] if f[0] == "clump": return g[f][self.global_id] if self.contour_id == -1: return g[f] cfield = (f[0], f"contours_{self.contour_key.decode('utf-8')}") if f[0] == "grid": return g[f][g[cfield] == self.contour_id] return self.parent[f][g[cfield] == self.contour_id] class YTClumpTreeDataset(YTNonspatialDataset): """Dataset for saved clump-finder data.""" _load_requirements = ["h5py"] def __init__(self, filename, unit_system="cgs"): super().__init__(filename, unit_system=unit_system) self._load_tree() def _load_tree(self): my_tree = {} for i, clump_id in enumerate(self.data["clump", "clump_id"]): my_tree[clump_id] = YTClumpContainer( clump_id, i, self.data["clump", "parent_id"][i], self.data["clump", "contour_key"][i], self.data["clump", "contour_id"][i], self, ) for clump in my_tree.values(): if clump.parent_id == -1: self.tree = clump else: parent = my_tree[clump.parent_id] parent.add_child(clump) @cached_property def leaves(self): return [clump for clump in self.tree if clump.children is None] @classmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: if not filename.endswith(".h5"): return False if cls._missing_load_requirements(): return False with h5py.File(filename, mode="r") as f: data_type = parse_h5_attr(f, "data_type") if data_type is None: return False if data_type == "yt_clump_tree": return True return False ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 
mtime=1731330994.0 yt-4.4.0/yt/frontends/ytdata/fields.py0000644000175100001770000000231414714401662017331 0ustar00runnerdockerfrom yt.fields.field_info_container import FieldInfoContainer m_units = "g" p_units = "cm" v_units = "cm / s" r_units = "cm" class YTDataContainerFieldInfo(FieldInfoContainer): known_other_fields = () known_particle_fields = () def __init__(self, ds, field_list): super().__init__(ds, field_list) self.add_fake_grid_fields() def add_fake_grid_fields(self): """ Add cell volume and mass fields that use the dx, dy, and dz fields that come with the dataset instead of the index fields which correspond to the oct tree. We need to do this for now since we're treating the grid data like particles until we implement exporting AMR hierarchies. """ if ("grid", "cell_volume") not in self.field_list: def _cell_volume(field, data): return data["grid", "dx"] * data["grid", "dy"] * data["grid", "dz"] self.add_field( ("grid", "cell_volume"), sampling_type="particle", function=_cell_volume, units="cm**3", ) class YTGridFieldInfo(FieldInfoContainer): known_other_fields = () known_particle_fields = () ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ytdata/io.py0000644000175100001770000003021714714401662016475 0ustar00runnerdockerimport numpy as np from yt.funcs import mylog, parse_h5_attr from yt.geometry.selection_routines import GridSelector from yt.units._numpy_wrapper_functions import uvstack from yt.utilities.io_handler import BaseIOHandler from yt.utilities.on_demand_imports import _h5py as h5py class IOHandlerYTNonspatialhdf5(BaseIOHandler): _dataset_type = "ytnonspatialhdf5" _base = slice(None) _field_dtype = "float64" def _read_fluid_selection(self, g, selector, fields): rv = {} if isinstance(selector, GridSelector): if g.id in self._cached_fields: gf = self._cached_fields[g.id] rv.update(gf) if len(rv) == len(fields): return rv f = h5py.File(g.filename, mode="r") for field in fields: if field in rv: self._hits += 1 continue self._misses += 1 ftype, fname = field rv[ftype, fname] = f[ftype][fname][()] if self._cache_on: for gid in rv: self._cached_fields.setdefault(gid, {}) self._cached_fields[gid].update(rv[gid]) f.close() return rv else: raise RuntimeError( "Geometric selection not supported for non-spatial datasets." 
) class IOHandlerYTGridHDF5(BaseIOHandler): _dataset_type = "ytgridhdf5" _base = slice(None) _field_dtype = "float64" def _read_fluid_selection(self, chunks, selector, fields, size): rv = {} # Now we have to do something unpleasant chunks = list(chunks) if isinstance(selector, GridSelector): if not (len(chunks) == len(chunks[0].objs) == 1): raise RuntimeError g = chunks[0].objs[0] if g.id in self._cached_fields: gf = self._cached_fields[g.id] rv.update(gf) if len(rv) == len(fields): return rv f = h5py.File(g.filename, mode="r") gds = f[self.ds.default_fluid_type] for field in fields: if field in rv: self._hits += 1 continue self._misses += 1 ftype, fname = field rv[ftype, fname] = gds[fname][()] if self._cache_on: for gid in rv: self._cached_fields.setdefault(gid, {}) self._cached_fields[gid].update(rv[gid]) f.close() return rv if size is None: size = sum(g.count(selector) for chunk in chunks for g in chunk.objs) for field in fields: ftype, fname = field fsize = size rv[field] = np.empty(fsize, dtype="float64") ng = sum(len(c.objs) for c in chunks) mylog.debug( "Reading %s cells of %s fields in %s grids", size, [f2 for f1, f2 in fields], ng, ) ind = 0 for chunk in chunks: f = None for g in chunk.objs: if g.filename is None: continue if f is None: f = h5py.File(g.filename, mode="r") gf = self._cached_fields.get(g.id, {}) nd = 0 for field in fields: if field in gf: nd = g.select(selector, gf[field], rv[field], ind) self._hits += 1 continue self._misses += 1 ftype, fname = field # add extra dimensions to make data 3D data = f[ftype][fname][()].astype(self._field_dtype) for dim in range(len(data.shape), 3): data = np.expand_dims(data, dim) if self._cache_on: self._cached_fields.setdefault(g.id, {}) self._cached_fields[g.id][field] = data nd = g.select(selector, data, rv[field], ind) # caches ind += nd if f: f.close() return rv def _read_particle_coords(self, chunks, ptf): pn = "particle_position_%s" chunks = list(chunks) for chunk in chunks: f = None for g in chunk.objs: if g.filename is None: continue if f is None: f = h5py.File(g.filename, mode="r") if g.NumberOfParticles == 0: continue for ptype, field_list in sorted(ptf.items()): units = parse_h5_attr(f[ptype][pn % "x"], "units") x, y, z = ( self.ds.arr(f[ptype][pn % ax][()].astype("float64"), units) for ax in "xyz" ) for field in field_list: if np.asarray(f[ptype][field]).ndim > 1: self._array_fields[field] = f[ptype][field].shape[1:] yield ptype, (x, y, z), 0.0 if f: f.close() def _read_particle_fields(self, chunks, ptf, selector): pn = "particle_position_%s" chunks = list(chunks) for chunk in chunks: # These should be organized by grid filename f = None for g in chunk.objs: if g.filename is None: continue if f is None: f = h5py.File(g.filename, mode="r") if g.NumberOfParticles == 0: continue for ptype, field_list in sorted(ptf.items()): units = parse_h5_attr(f[ptype][pn % "x"], "units") x, y, z = ( self.ds.arr(f[ptype][pn % ax][()].astype("float64"), units) for ax in "xyz" ) mask = selector.select_points(x, y, z, 0.0) if mask is None: continue for field in field_list: data = np.asarray(f[ptype][field][()], "=f8") yield (ptype, field), data[mask] if f: f.close() class IOHandlerYTDataContainerHDF5(BaseIOHandler): _dataset_type = "ytdatacontainer_hdf5" def _read_fluid_selection(self, chunks, selector, fields, size): raise NotImplementedError def _yield_coordinates(self, data_file): with h5py.File(data_file.filename, mode="r") as f: for ptype in f.keys(): if "x" not in f[ptype].keys(): continue units = _get_position_array_units(ptype, f, 
"x") x, y, z = ( self.ds.arr(_get_position_array(ptype, f, ax), units) for ax in "xyz" ) pos = uvstack([x, y, z]).T pos.convert_to_units("code_length") yield ptype, pos def _read_particle_coords(self, chunks, ptf): # This will read chunks and yield the results. for data_file in self._sorted_chunk_iterator(chunks): index_mask = slice(data_file.start, data_file.end) with h5py.File(data_file.filename, mode="r") as f: for ptype in sorted(ptf): pcount = data_file.total_particles[ptype] if pcount == 0: continue units = _get_position_array_units(ptype, f, "x") x, y, z = ( self.ds.arr( _get_position_array(ptype, f, ax, index_mask=index_mask), units, ) for ax in "xyz" ) yield ptype, (x, y, z), 0.0 def _read_particle_data_file(self, data_file, ptf, selector): data_return = {} with h5py.File(data_file.filename, mode="r") as f: index_mask = slice(data_file.start, data_file.end) for ptype, field_list in sorted(ptf.items()): if selector is None or getattr(selector, "is_all_data", False): mask = index_mask else: units = _get_position_array_units(ptype, f, "x") x, y, z = ( self.ds.arr( _get_position_array(ptype, f, ax, index_mask=index_mask), units, ) for ax in "xyz" ) mask = selector.select_points(x, y, z, 0.0) del x, y, z if mask is None: continue for field in field_list: data = f[ptype][field][mask].astype("float64", copy=False) data_return[ptype, field] = data return data_return def _count_particles(self, data_file): si, ei = data_file.start, data_file.end if None not in (si, ei): pcount = {} for ptype, npart in self.ds.num_particles.items(): pcount[ptype] = np.clip(npart - si, 0, ei - si) else: pcount = self.ds.num_particles return pcount def _identify_fields(self, data_file): fields = [] units = {} with h5py.File(data_file.filename, mode="r") as f: for ptype in f: fields.extend([(ptype, str(field)) for field in f[ptype]]) units.update( { (ptype, str(field)): parse_h5_attr(f[ptype][field], "units") for field in f[ptype] } ) return fields, units class IOHandlerYTSpatialPlotHDF5(IOHandlerYTDataContainerHDF5): _dataset_type = "ytspatialplot_hdf5" def _read_particle_coords(self, chunks, ptf): # This will read chunks and yield the results. 
for data_file in self._sorted_chunk_iterator(chunks): with h5py.File(data_file.filename, mode="r") as f: for ptype in sorted(ptf): pcount = data_file.total_particles[ptype] if pcount == 0: continue x = _get_position_array(ptype, f, "px") y = _get_position_array(ptype, f, "py") z = ( np.zeros(x.size, dtype="float64") + self.ds.domain_left_edge[2].to("code_length").d ) yield ptype, (x, y, z), 0.0 def _read_particle_fields(self, chunks, ptf, selector): # Now we have all the sizes, and we can allocate for data_file in self._sorted_chunk_iterator(chunks): all_count = self._count_particles(data_file) with h5py.File(data_file.filename, mode="r") as f: for ptype, field_list in sorted(ptf.items()): x = _get_position_array(ptype, f, "px") y = _get_position_array(ptype, f, "py") z = ( np.zeros(all_count[ptype], dtype="float64") + self.ds.domain_left_edge[2].to("code_length").d ) mask = selector.select_points(x, y, z, 0.0) del x, y, z if mask is None: continue for field in field_list: data = f[ptype][field][mask].astype("float64") yield (ptype, field), data def _get_position_array(ptype, f, ax, index_mask=None): if index_mask is None: index_mask = slice(None, None, None) if ptype == "grid": pos_name = "" else: pos_name = "particle_position_" return f[ptype][pos_name + ax][index_mask].astype("float64") def _get_position_array_units(ptype, f, ax): if ptype == "grid": pos_name = "" else: pos_name = "particle_position_" return parse_h5_attr(f[ptype][pos_name + ax], "units") ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.351153 yt-4.4.0/yt/frontends/ytdata/tests/0000755000175100001770000000000014714401715016652 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ytdata/tests/__init__.py0000644000175100001770000000000014714401662020752 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ytdata/tests/test_data_reload.py0000644000175100001770000000176214714401662022531 0ustar00runnerdockerimport numpy as np from yt.loaders import load_uniform_grid from yt.testing import requires_module_pytest from yt.utilities.on_demand_imports import _h5py as h5py @requires_module_pytest("h5py") def test_save_as_data_unit_system(tmp_path): # This test checks that the file saved with calling save_as_dataset # for a ds with a "code" unit system contains the proper "unit_system_name". # It checks the hdf5 file directly rather than using yt.load(), because # https://github.com/yt-project/yt/issues/4315 only manifested restarting # the python kernel (because the unit registry is state dependent). 
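# Minimal sketch (temporary path; h5py assumed importable, as in the test
# below) of the direct-HDF5 assertion style this test uses: write an
# attribute, then verify it by reopening the file with h5py rather than
# round-tripping through yt.load.
import tempfile

import h5py

with tempfile.NamedTemporaryFile(suffix=".h5") as tf:
    with h5py.File(tf.name, mode="w") as f:
        f.attrs["unit_system_name"] = "code"
    with h5py.File(tf.name, mode="r") as f:
        assert f.attrs["unit_system_name"] == "code"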
fi = tmp_path / "output_data.h5" shp = (4, 4, 4) data = {"density": np.random.random(shp)} ds = load_uniform_grid(data, shp, unit_system="code") assert "code" in ds._unit_system_name sp = ds.sphere(ds.domain_center, ds.domain_width[0] / 2.0) sp.save_as_dataset(fi) with h5py.File(fi, mode="r") as f: assert f.attrs["unit_system_name"] == "code" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ytdata/tests/test_old_outputs.py0000644000175100001770000001573514714401662022660 0ustar00runnerdocker""" ytdata frontend tests using enzo_tiny_cosmology """ import os import shutil import tempfile import numpy as np from yt.data_objects.api import create_profile from yt.frontends.ytdata.api import ( YTDataContainerDataset, YTGridDataset, YTNonspatialDataset, YTProfileDataset, YTSpatialPlotDataset, ) from yt.frontends.ytdata.tests.test_outputs import ( YTDataFieldTest, compare_unit_attributes, ) from yt.testing import ( assert_allclose_units, assert_array_equal, requires_file, requires_module, skip, ) from yt.units.yt_array import YTArray from yt.utilities.answer_testing.framework import data_dir_load, requires_ds from yt.visualization.profile_plotter import PhasePlot, ProfilePlot enzotiny = "enzo_tiny_cosmology/DD0046/DD0046" ytdata_dir = "ytdata_test" @skip(reason="See https://github.com/yt-project/yt/issues/3909") @requires_module("h5py") @requires_ds(enzotiny) @requires_file(os.path.join(ytdata_dir, "DD0046_sphere.h5")) @requires_file(os.path.join(ytdata_dir, "DD0046_cut_region.h5")) def test_old_datacontainer_data(): ds = data_dir_load(enzotiny) sphere = ds.sphere(ds.domain_center, (10, "Mpc")) fn = "DD0046_sphere.h5" full_fn = os.path.join(ytdata_dir, fn) sphere_ds = data_dir_load(full_fn) compare_unit_attributes(ds, sphere_ds) assert isinstance(sphere_ds, YTDataContainerDataset) yield YTDataFieldTest(full_fn, ("grid", "density")) yield YTDataFieldTest(full_fn, ("all", "particle_mass")) cr = ds.cut_region(sphere, ['obj["gas", "temperature"] > 1e4']) fn = "DD0046_cut_region.h5" full_fn = os.path.join(ytdata_dir, fn) cr_ds = data_dir_load(full_fn) assert isinstance(cr_ds, YTDataContainerDataset) assert (cr["gas", "temperature"] == cr_ds.data["gas", "temperature"]).all() @skip(reason="See https://github.com/yt-project/yt/issues/3909") @requires_module("h5py") @requires_ds(enzotiny) @requires_file(os.path.join(ytdata_dir, "DD0046_covering_grid.h5")) @requires_file(os.path.join(ytdata_dir, "DD0046_arbitrary_grid.h5")) @requires_file(os.path.join(ytdata_dir, "DD0046_proj_frb.h5")) def test_old_grid_datacontainer_data(): ds = data_dir_load(enzotiny) fn = "DD0046_covering_grid.h5" full_fn = os.path.join(ytdata_dir, fn) cg_ds = data_dir_load(full_fn) compare_unit_attributes(ds, cg_ds) assert isinstance(cg_ds, YTGridDataset) yield YTDataFieldTest(full_fn, ("grid", "density")) yield YTDataFieldTest(full_fn, ("all", "particle_mass")) fn = "DD0046_arbitrary_grid.h5" full_fn = os.path.join(ytdata_dir, fn) ag_ds = data_dir_load(full_fn) compare_unit_attributes(ds, ag_ds) assert isinstance(ag_ds, YTGridDataset) yield YTDataFieldTest(full_fn, ("grid", "density")) yield YTDataFieldTest(full_fn, ("all", "particle_mass")) my_proj = ds.proj("density", "x", weight_field="density") frb = my_proj.to_frb(1.0, (800, 800)) fn = "DD0046_proj_frb.h5" full_fn = os.path.join(ytdata_dir, fn) frb_ds = data_dir_load(full_fn) assert_allclose_units(frb["gas", "density"], frb_ds.data["gas", "density"], 1e-7) compare_unit_attributes(ds, frb_ds) assert 
isinstance(frb_ds, YTGridDataset) yield YTDataFieldTest(full_fn, ("gas", "density"), geometric=False) @skip(reason="See https://github.com/yt-project/yt/issues/3909") @requires_module("h5py") @requires_ds(enzotiny) @requires_file(os.path.join(ytdata_dir, "DD0046_proj.h5")) def test_old_spatial_data(): ds = data_dir_load(enzotiny) fn = "DD0046_proj.h5" full_fn = os.path.join(ytdata_dir, fn) proj_ds = data_dir_load(full_fn) compare_unit_attributes(ds, proj_ds) assert isinstance(proj_ds, YTSpatialPlotDataset) yield YTDataFieldTest(full_fn, ("gas", "density"), geometric=False) @skip(reason="See https://github.com/yt-project/yt/issues/3909") @requires_module("h5py") @requires_ds(enzotiny) @requires_file(os.path.join(ytdata_dir, "DD0046_Profile1D.h5")) @requires_file(os.path.join(ytdata_dir, "DD0046_Profile2D.h5")) def test_old_profile_data(): tmpdir = tempfile.mkdtemp() curdir = os.getcwd() os.chdir(tmpdir) ds = data_dir_load(enzotiny) ad = ds.all_data() profile_1d = create_profile( ad, ("gas", "density"), ("gas", "temperature"), weight_field=("gas", "cell_mass"), ) fn = "DD0046_Profile1D.h5" full_fn = os.path.join(ytdata_dir, fn) prof_1d_ds = data_dir_load(full_fn) compare_unit_attributes(ds, prof_1d_ds) assert isinstance(prof_1d_ds, YTProfileDataset) for field in profile_1d.standard_deviation: assert_array_equal( profile_1d.standard_deviation[field], prof_1d_ds.profile.standard_deviation["data", field[1]], ) p1 = ProfilePlot( prof_1d_ds.data, ("gas", "density"), ("gas", "temperature"), weight_field=("gas", "cell_mass"), ) p1.save() yield YTDataFieldTest(full_fn, ("gas", "temperature"), geometric=False) yield YTDataFieldTest(full_fn, ("index", "x"), geometric=False) yield YTDataFieldTest(full_fn, ("gas", "density"), geometric=False) fn = "DD0046_Profile2D.h5" full_fn = os.path.join(ytdata_dir, fn) prof_2d_ds = data_dir_load(full_fn) compare_unit_attributes(ds, prof_2d_ds) assert isinstance(prof_2d_ds, YTProfileDataset) p2 = PhasePlot( prof_2d_ds.data, ("gas", "density"), ("gas", "temperature"), ("gas", "cell_mass"), weight_field=None, ) p2.save() yield YTDataFieldTest(full_fn, ("gas", "density"), geometric=False) yield YTDataFieldTest(full_fn, ("index", "x"), geometric=False) yield YTDataFieldTest(full_fn, ("gas", "temperature"), geometric=False) yield YTDataFieldTest(full_fn, ("index", "y"), geometric=False) yield YTDataFieldTest(full_fn, ("gas", "cell_mass"), geometric=False) os.chdir(curdir) shutil.rmtree(tmpdir) @skip(reason="See https://github.com/yt-project/yt/issues/3909") @requires_module("h5py") @requires_ds(enzotiny) @requires_file(os.path.join(ytdata_dir, "test_data.h5")) @requires_file(os.path.join(ytdata_dir, "random_data.h5")) def test_old_nonspatial_data(): ds = data_dir_load(enzotiny) region = ds.box([0.25] * 3, [0.75] * 3) sphere = ds.sphere(ds.domain_center, (10, "Mpc")) my_data = {} my_data["region_density"] = region["gas", "density"] my_data["sphere_density"] = sphere["gas", "density"] fn = "test_data.h5" full_fn = os.path.join(ytdata_dir, fn) array_ds = data_dir_load(full_fn) compare_unit_attributes(ds, array_ds) assert isinstance(array_ds, YTNonspatialDataset) yield YTDataFieldTest(full_fn, "region_density", geometric=False) yield YTDataFieldTest(full_fn, "sphere_density", geometric=False) my_data = {"density": YTArray(np.linspace(1.0, 20.0, 10), "g/cm**3")} fn = "random_data.h5" full_fn = os.path.join(ytdata_dir, fn) new_ds = data_dir_load(full_fn) assert isinstance(new_ds, YTNonspatialDataset) yield YTDataFieldTest(full_fn, ("gas", "density"), geometric=False) 
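
# Illustrative aside (not part of the yt test suite): the same save/reload round
# trip these answer tests exercise, run on a synthetic dataset so it works
# without the enzo_tiny_cosmology sample data.
import yt
from yt.testing import fake_random_ds

ds = fake_random_ds(16)
sphere = ds.sphere(ds.domain_center, ds.domain_width[0] / 4)
fn = sphere.save_as_dataset(fields=[("gas", "density")])
sphere_ds = yt.load(fn)
# reloaded containers expose their fields through the .data attribute; fluid
# fields written from a data container come back with the "grid" field type
assert sphere_ds.data["grid", "density"].size == sphere["gas", "density"].size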
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ytdata/tests/test_outputs.py0000644000175100001770000002152114714401662022010 0ustar00runnerdockerimport os import shutil import tempfile import numpy as np from numpy.testing import assert_array_equal, assert_equal from yt.data_objects.api import create_profile from yt.frontends.ytdata.api import ( YTDataContainerDataset, YTGridDataset, YTNonspatialDataset, YTProfileDataset, YTSpatialPlotDataset, save_as_dataset, ) from yt.loaders import load from yt.testing import assert_allclose_units from yt.units.yt_array import YTArray, YTQuantity from yt.utilities.answer_testing.framework import ( AnswerTestingTest, data_dir_load, requires_ds, ) from yt.visualization.profile_plotter import PhasePlot, ProfilePlot def make_tempdir(): if int(os.environ.get("GENERATE_YTDATA", 0)): return "." else: return tempfile.mkdtemp() def compare_unit_attributes(ds1, ds2): r""" Checks to make sure that the length, mass, time, velocity, and magnetic units are the same for two different dataset objects. """ attrs = ("length_unit", "mass_unit", "time_unit", "velocity_unit", "magnetic_unit") for attr in attrs: u1 = getattr(ds1, attr, None) u2 = getattr(ds2, attr, None) assert u1 == u2 class YTDataFieldTest(AnswerTestingTest): _type_name = "YTDataTest" _attrs = ("field_name",) def __init__(self, ds_fn, field, decimals=10, geometric=True): super().__init__(ds_fn) self.field = field if isinstance(field, tuple) and len(field) == 2: self.field_name = field[1] else: self.field_name = field self.decimals = decimals self.geometric = geometric def run(self): if self.geometric: obj = self.ds.all_data() else: obj = self.ds.data num_e = obj[self.field].size avg = obj[self.field].mean() return np.array([num_e, avg]) def compare(self, new_result, old_result): err_msg = f"YTData field values for {self.field} not equal." 
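        # self.decimals counts matching decimal places: decimals=10 is turned
        # into a relative tolerance of 1e-10 for assert_allclose_units below.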
if self.decimals is None: assert_equal(new_result, old_result, err_msg=err_msg, verbose=True) else: assert_allclose_units( new_result, old_result, 10.0 ** (-self.decimals), err_msg=err_msg, verbose=True, ) enzotiny = "enzo_tiny_cosmology/DD0046/DD0046" @requires_ds(enzotiny) def test_datacontainer_data(): tmpdir = make_tempdir() curdir = os.getcwd() os.chdir(tmpdir) ds = data_dir_load(enzotiny) sphere = ds.sphere(ds.domain_center, (10, "Mpc")) fn = sphere.save_as_dataset(fields=[("gas", "density"), ("all", "particle_mass")]) full_fn = os.path.join(tmpdir, fn) sphere_ds = load(full_fn) compare_unit_attributes(ds, sphere_ds) assert isinstance(sphere_ds, YTDataContainerDataset) yield YTDataFieldTest(full_fn, ("grid", "density")) yield YTDataFieldTest(full_fn, ("all", "particle_mass")) cr = ds.cut_region(sphere, ['obj["gas", "temperature"] > 1e4']) fn = cr.save_as_dataset(fields=[("gas", "temperature")]) full_fn = os.path.join(tmpdir, fn) cr_ds = load(full_fn) assert isinstance(cr_ds, YTDataContainerDataset) assert (cr["gas", "temperature"] == cr_ds.data["gas", "temperature"]).all() os.chdir(curdir) if tmpdir != ".": shutil.rmtree(tmpdir) @requires_ds(enzotiny) def test_grid_datacontainer_data(): tmpdir = make_tempdir() curdir = os.getcwd() os.chdir(tmpdir) ds = data_dir_load(enzotiny) cg = ds.covering_grid(level=0, left_edge=[0.25] * 3, dims=[16] * 3) fn = cg.save_as_dataset( fields=[ ("gas", "density"), ("all", "particle_mass"), ("all", "particle_position"), ] ) full_fn = os.path.join(tmpdir, fn) cg_ds = load(full_fn) compare_unit_attributes(ds, cg_ds) assert isinstance(cg_ds, YTGridDataset) assert ( cg["all", "particle_position"].shape == cg_ds.r["all", "particle_position"].shape ) yield YTDataFieldTest(full_fn, ("grid", "density")) yield YTDataFieldTest(full_fn, ("all", "particle_mass")) ag = ds.arbitrary_grid(left_edge=[0.25] * 3, right_edge=[0.75] * 3, dims=[16] * 3) fn = ag.save_as_dataset(fields=[("gas", "density"), ("all", "particle_mass")]) full_fn = os.path.join(tmpdir, fn) ag_ds = load(full_fn) compare_unit_attributes(ds, ag_ds) assert isinstance(ag_ds, YTGridDataset) yield YTDataFieldTest(full_fn, ("grid", "density")) yield YTDataFieldTest(full_fn, ("all", "particle_mass")) my_proj = ds.proj(("gas", "density"), "x", weight_field=("gas", "density")) frb = my_proj.to_frb(1.0, (800, 800)) fn = frb.save_as_dataset(fields=[("gas", "density")]) frb_ds = load(fn) assert_array_equal(frb["gas", "density"], frb_ds.data["gas", "density"]) compare_unit_attributes(ds, frb_ds) assert isinstance(frb_ds, YTGridDataset) yield YTDataFieldTest(full_fn, ("grid", "density"), geometric=False) os.chdir(curdir) if tmpdir != ".": shutil.rmtree(tmpdir) @requires_ds(enzotiny) def test_spatial_data(): tmpdir = make_tempdir() curdir = os.getcwd() os.chdir(tmpdir) ds = data_dir_load(enzotiny) proj = ds.proj(("gas", "density"), "x", weight_field=("gas", "density")) fn = proj.save_as_dataset() full_fn = os.path.join(tmpdir, fn) proj_ds = load(full_fn) compare_unit_attributes(ds, proj_ds) assert isinstance(proj_ds, YTSpatialPlotDataset) yield YTDataFieldTest(full_fn, ("grid", "density"), geometric=False) os.chdir(curdir) if tmpdir != ".": shutil.rmtree(tmpdir) @requires_ds(enzotiny) def test_profile_data(): tmpdir = make_tempdir() curdir = os.getcwd() os.chdir(tmpdir) ds = data_dir_load(enzotiny) ad = ds.all_data() profile_1d = create_profile( ad, ("gas", "density"), ("gas", "temperature"), weight_field=("gas", "cell_mass"), ) fn = profile_1d.save_as_dataset() full_fn = os.path.join(tmpdir, fn) prof_1d_ds = 
load(full_fn) compare_unit_attributes(ds, prof_1d_ds) assert isinstance(prof_1d_ds, YTProfileDataset) for field in profile_1d.standard_deviation: assert_array_equal( profile_1d.standard_deviation[field], prof_1d_ds.profile.standard_deviation["data", field[1]], ) p1 = ProfilePlot( prof_1d_ds.data, ("gas", "density"), ("gas", "temperature"), weight_field=("gas", "cell_mass"), ) p1.save() yield YTDataFieldTest(full_fn, ("data", "temperature"), geometric=False) yield YTDataFieldTest(full_fn, ("data", "x"), geometric=False) yield YTDataFieldTest(full_fn, ("data", "density"), geometric=False) profile_2d = create_profile( ad, [("gas", "density"), ("gas", "temperature")], ("gas", "cell_mass"), weight_field=None, n_bins=(128, 128), ) fn = profile_2d.save_as_dataset() full_fn = os.path.join(tmpdir, fn) prof_2d_ds = load(full_fn) compare_unit_attributes(ds, prof_2d_ds) assert isinstance(prof_2d_ds, YTProfileDataset) p2 = PhasePlot( prof_2d_ds.data, ("gas", "density"), ("gas", "temperature"), ("gas", "cell_mass"), weight_field=None, ) p2.save() yield YTDataFieldTest(full_fn, ("data", "density"), geometric=False) yield YTDataFieldTest(full_fn, ("data", "x"), geometric=False) yield YTDataFieldTest(full_fn, ("data", "temperature"), geometric=False) yield YTDataFieldTest(full_fn, ("data", "y"), geometric=False) yield YTDataFieldTest(full_fn, ("data", "cell_mass"), geometric=False) os.chdir(curdir) if tmpdir != ".": shutil.rmtree(tmpdir) @requires_ds(enzotiny) def test_nonspatial_data(): tmpdir = make_tempdir() curdir = os.getcwd() os.chdir(tmpdir) ds = data_dir_load(enzotiny) region = ds.box([0.25] * 3, [0.75] * 3) sphere = ds.sphere(ds.domain_center, (10, "Mpc")) my_data = {} my_data["region_density"] = region["gas", "density"] my_data["sphere_density"] = sphere["gas", "density"] fn = "test_data.h5" save_as_dataset(ds, fn, my_data) full_fn = os.path.join(tmpdir, fn) array_ds = load(full_fn) compare_unit_attributes(ds, array_ds) assert isinstance(array_ds, YTNonspatialDataset) yield YTDataFieldTest(full_fn, "region_density", geometric=False) yield YTDataFieldTest(full_fn, "sphere_density", geometric=False) my_data = {"density": YTArray(np.linspace(1.0, 20.0, 10), "g/cm**3")} fake_ds = {"current_time": YTQuantity(10, "Myr")} fn = "random_data.h5" save_as_dataset(fake_ds, fn, my_data) full_fn = os.path.join(tmpdir, fn) new_ds = load(full_fn) assert isinstance(new_ds, YTNonspatialDataset) yield YTDataFieldTest(full_fn, ("data", "density"), geometric=False) os.chdir(curdir) if tmpdir != ".": shutil.rmtree(tmpdir) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ytdata/tests/test_unit.py0000644000175100001770000000750114714401662021246 0ustar00runnerdockerimport os import shutil import tempfile import numpy as np from numpy.testing import assert_array_equal from yt.loaders import load, load_uniform_grid from yt.testing import ( assert_fname, fake_random_ds, requires_file, requires_module, ) from yt.utilities.answer_testing.framework import data_dir_load from yt.visualization.plot_window import ProjectionPlot, SlicePlot ytdata_dir = "ytdata_test" @requires_module("h5py") @requires_file(os.path.join(ytdata_dir, "slice.h5")) @requires_file(os.path.join(ytdata_dir, "proj.h5")) @requires_file(os.path.join(ytdata_dir, "oas.h5")) def test_old_plot_data(): tmpdir = tempfile.mkdtemp() curdir = os.getcwd() os.chdir(tmpdir) fn = "slice.h5" full_fn = os.path.join(ytdata_dir, fn) ds_slice = data_dir_load(full_fn) p = SlicePlot(ds_slice, "z", ("gas", 
"density")) fn = p.save() assert_fname(fn[0]) fn = "proj.h5" full_fn = os.path.join(ytdata_dir, fn) ds_proj = data_dir_load(full_fn) p = ProjectionPlot(ds_proj, "z", ("gas", "density")) fn = p.save() assert_fname(fn[0]) fn = "oas.h5" full_fn = os.path.join(ytdata_dir, fn) ds_oas = data_dir_load(full_fn) p = SlicePlot(ds_oas, [1, 1, 1], ("gas", "density")) fn = p.save() assert_fname(fn[0]) os.chdir(curdir) shutil.rmtree(tmpdir) @requires_module("h5py") def test_plot_data(): tmpdir = tempfile.mkdtemp() curdir = os.getcwd() os.chdir(tmpdir) ds = fake_random_ds(16) plot = SlicePlot(ds, "z", ("gas", "density")) fn = plot.data_source.save_as_dataset("slice.h5") ds_slice = load(fn) p = SlicePlot(ds_slice, "z", ("gas", "density")) fn = p.save() assert_fname(fn[0]) plot = ProjectionPlot(ds, "z", ("gas", "density")) fn = plot.data_source.save_as_dataset("proj.h5") ds_proj = load(fn) p = ProjectionPlot(ds_proj, "z", ("gas", "density")) fn = p.save() assert_fname(fn[0]) plot = SlicePlot(ds, [1, 1, 1], ("gas", "density")) fn = plot.data_source.save_as_dataset("oas.h5") ds_oas = load(fn) p = SlicePlot(ds_oas, [1, 1, 1], ("gas", "density")) fn = p.save() assert_fname(fn[0]) os.chdir(curdir) if tmpdir != ".": shutil.rmtree(tmpdir) @requires_module("h5py") def test_non_square_frb(): tmpdir = tempfile.mkdtemp() curdir = os.getcwd() os.chdir(tmpdir) # construct an arbitrary dataset arr = np.arange(8.0 * 9.0 * 10.0).reshape((8, 9, 10)) data = {"density": (arr, "g/cm**3")} bbox = np.array([[-4, 4.0], [-4.5, 4.5], [-5.0, 5]]) ds = load_uniform_grid( data, arr.shape, length_unit="Mpc", bbox=bbox, periodicity=(False, False, False) ) # make a slice slc = ds.slice(axis="z", coord=ds.quan(0.0, "code_length")) # make a frb and save it to disk center = (ds.quan(0.0, "code_length"), ds.quan(0.0, "code_length")) xax, yax = ds.coordinates.x_axis[slc.axis], ds.coordinates.y_axis[slc.axis] res = [ds.domain_dimensions[xax], ds.domain_dimensions[yax]] # = [8,9] width = ds.domain_right_edge[xax] - ds.domain_left_edge[xax] # = 8 code_length height = ds.domain_right_edge[yax] - ds.domain_left_edge[yax] # = 9 code_length frb = slc.to_frb(width=width, height=height, resolution=res, center=center) fname = "test_frb_roundtrip.h5" frb.save_as_dataset(fname, fields=[("gas", "density")]) expected_vals = arr[:, :, 5].T print( "\nConfirmation that initial frb results are expected:", (expected_vals == frb["gas", "density"].v).all(), "\n", ) # yt-reload: reloaded_ds = load(fname) assert_array_equal( frb["gas", "density"].shape, reloaded_ds.data["gas", "density"].shape ) assert_array_equal(frb["gas", "density"], reloaded_ds.data["gas", "density"]) os.chdir(curdir) if tmpdir != ".": shutil.rmtree(tmpdir) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/frontends/ytdata/utilities.py0000644000175100001770000001665114714401662020107 0ustar00runnerdockerfrom yt.units.yt_array import YTArray from yt.utilities.logger import ytLogger as mylog from yt.utilities.on_demand_imports import _h5py as h5py def save_as_dataset(ds, filename, data, field_types=None, extra_attrs=None): r"""Export a set of field arrays to a reloadable yt dataset. This function can be used to create a yt loadable dataset from a set of arrays. The field arrays can either be associated with a loaded dataset or, if not, a dictionary of dataset attributes can be provided that will be used as metadata for the new dataset. The resulting dataset can be reloaded as a yt dataset. 
Parameters ---------- ds : dataset or dict The dataset associated with the fields or a dictionary of parameters. filename : str The name of the file to be written. data : dict A dictionary of field arrays to be saved. field_types: dict, optional A dictionary denoting the group name to which each field is to be saved. When the resulting dataset is reloaded, this will be the field type for this field. If not given, "data" will be used. extra_attrs: dict, optional A dictionary of additional attributes to be saved. Returns ------- filename : str The name of the file that has been created. Examples -------- >>> import numpy as np >>> import yt >>> ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046") >>> sphere = ds.sphere([0.5] * 3, (10, "Mpc")) >>> sphere_density = sphere["gas", "density"] >>> region = ds.box([0.0] * 3, [0.25] * 3) >>> region_density = region["gas", "density"] >>> data = {} >>> data["sphere_density"] = sphere_density >>> data["region_density"] = region_density >>> yt.save_as_dataset(ds, "density_data.h5", data) >>> new_ds = yt.load("density_data.h5") >>> print(new_ds.data["region_density"]) [ 7.47650434e-32 7.70370740e-32 9.74692941e-32 ..., 1.22384547e-27 5.13889063e-28 2.91811974e-28] g/cm**3 >>> print(new_ds.data["sphere_density"]) [ 4.46237613e-32 4.86830178e-32 4.46335118e-32 ..., 6.43956165e-30 3.57339907e-30 2.83150720e-30] g/cm**3 >>> data = { ... "density": yt.YTArray(1e-24 * np.ones(10), "g/cm**3"), ... "temperature": yt.YTArray(1000.0 * np.ones(10), "K"), ... } >>> ds_data = {"current_time": yt.YTQuantity(10, "Myr")} >>> yt.save_as_dataset(ds_data, "random_data.h5", data) >>> new_ds = yt.load("random_data.h5") >>> print(new_ds.data["gas", "temperature"]) [ 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000.] K """ mylog.info("Saving field data to yt dataset: %s.", filename) if extra_attrs is None: extra_attrs = {} base_attrs = [ "dimensionality", "domain_left_edge", "domain_right_edge", "current_redshift", "current_time", "domain_dimensions", "geometry", "periodicity", "cosmological_simulation", "omega_lambda", "omega_matter", "hubble_constant", "length_unit", "mass_unit", "time_unit", "velocity_unit", "magnetic_unit", ] fh = h5py.File(filename, mode="w") if ds is None: ds = {} if hasattr(ds, "parameters") and isinstance(ds.parameters, dict): for attr, val in ds.parameters.items(): _yt_array_hdf5_attr(fh, attr, val) if hasattr(ds, "unit_registry"): _yt_array_hdf5_attr(fh, "unit_registry_json", ds.unit_registry.to_json()) if hasattr(ds, "unit_system"): # Note: ds._unit_system_name is written here rather than # ds.unit_system.name because for a 'code' unit system, ds.unit_system.name # is a hash, not a unit system. And on re-load, we want to designate # a unit system not a hash value. # See https://github.com/yt-project/yt/issues/4315 for more background. 
_yt_array_hdf5_attr(fh, "unit_system_name", ds._unit_system_name) for attr in base_attrs: if isinstance(ds, dict): my_val = ds.get(attr, None) else: my_val = getattr(ds, attr, None) if my_val is None: continue _yt_array_hdf5_attr(fh, attr, my_val) for attr in extra_attrs: my_val = extra_attrs[attr] _yt_array_hdf5_attr(fh, attr, my_val) if "data_type" not in extra_attrs: fh.attrs["data_type"] = "yt_array_data" for field in data: if field_types is None: field_type = "data" else: field_type = field_types[field] if field_type not in fh: fh.create_group(field_type) if isinstance(field, tuple): field_name = field[1] else: field_name = field # for python3 if data[field].dtype.kind == "U": data[field] = data[field].astype("|S") _yt_array_hdf5(fh[field_type], field_name, data[field]) if "num_elements" not in fh[field_type].attrs: fh[field_type].attrs["num_elements"] = data[field].size fh.close() return filename def _hdf5_yt_array(fh, field, ds=None): r"""Load an hdf5 dataset as a YTArray. Reads in a dataset from an open hdf5 file or group and uses the "units" attribute, if it exists, to apply units. Parameters ---------- fh : an open hdf5 file or hdf5 group The hdf5 file or group in which the dataset exists. field : str The name of the field to be loaded. ds : yt Dataset If not None, the unit_registry of the dataset is used to apply units. Returns ------- A YTArray of the requested field. """ if ds is None: new_arr = YTArray else: new_arr = ds.arr units = "" if "units" in fh[field].attrs: units = fh[field].attrs["units"] if units == "dimensionless": units = "" return new_arr(fh[field][()], units) def _yt_array_hdf5(fh, field, data): r"""Save a YTArray to an open hdf5 file or group. Save a YTArray to an open hdf5 file or group, and save the units to a "units" attribute. Parameters ---------- fh : an open hdf5 file or hdf5 group The hdf5 file or group to which the data will be written. field : str The name of the field to be saved. data : YTArray The data array to be saved. Returns ------- dataset : hdf5 dataset The created hdf5 dataset. """ dataset = fh.create_dataset(str(field), data=data) units = "" if isinstance(data, YTArray): units = str(data.units) dataset.attrs["units"] = units return dataset def _yt_array_hdf5_attr(fh, attr, val): r"""Save a YTArray or YTQuantity as an hdf5 attribute. Save an hdf5 attribute. If it has units, save an additional attribute with the units. Parameters ---------- fh : an open hdf5 file, group, or dataset The hdf5 file, group, or dataset to which the attribute will be written. attr : str The name of the attribute to be saved. val : anything The value to be saved. """ if val is None: val = "None" if hasattr(val, "units"): fh.attrs[f"{attr}_units"] = str(val.units) try: fh.attrs[str(attr)] = val # This is raised if no HDF5 equivalent exists. # In that case, save its string representation. 
except TypeError: fh.attrs[str(attr)] = str(val) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/funcs.py0000644000175100001770000012557614714401662013731 0ustar00runnerdockerimport base64 import contextlib import copy import errno import glob import inspect import itertools import os import re import struct import subprocess import sys import time import traceback from collections import UserDict from collections.abc import Callable from copy import deepcopy from functools import lru_cache, wraps from numbers import Number as numeric_type from typing import Any import numpy as np from more_itertools import always_iterable, collapse, first from yt._maintenance.deprecation import issue_deprecation_warning from yt._maintenance.ipython_compat import IS_IPYTHON from yt.config import ytcfg from yt.units import YTArray, YTQuantity from yt.utilities.exceptions import YTFieldNotFound, YTInvalidWidthError from yt.utilities.logger import ytLogger as mylog from yt.utilities.on_demand_imports import _requests as requests # Some functions for handling sequences and other types def is_sequence(obj): """ Grabbed from Python Cookbook / matplotlib.cbook. Returns true/false for Parameters ---------- obj : iterable """ try: len(obj) return True except TypeError: return False def iter_fields(field_or_fields): """ Create an iterator for field names, specified as single strings or tuples(fname, ftype) alike. This can safely be used in places where we accept a single field or a list as input. Parameters ---------- field_or_fields: str, tuple(str, str), or any iterable of the previous types. Examples -------- >>> fields = ("gas", "density") >>> for field in iter_fields(fields): ... print(field) density >>> fields = ("gas", "density") >>> for field in iter_fields(fields): ... print(field) ('gas', 'density') >>> fields = [("gas", "density"), ("gas", "temperature"), ("index", "dx")] >>> for field in iter_fields(fields): ... print(field) density temperature ('index', 'dx') """ return always_iterable(field_or_fields, base_type=(tuple, str, bytes)) def ensure_numpy_array(obj): """ This function ensures that *obj* is a numpy array. Typically used to convert scalar, list or tuple argument passed to functions using Cython. """ if isinstance(obj, np.ndarray): if obj.shape == (): return np.array([obj]) # We cast to ndarray to catch ndarray subclasses return np.array(obj) elif isinstance(obj, (list, tuple)): return np.asarray(obj) else: return np.asarray([obj]) def read_struct(f, fmt): """ This reads a struct, and only that struct, from an open file. 
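    For example, read_struct(f, ">ii") consumes exactly 8 bytes and returns a
    tuple of two big-endian 32-bit integers.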
""" s = f.read(struct.calcsize(fmt)) return struct.unpack(fmt, s) def just_one(obj): # If we have an iterable, sometimes we only want one item return first(collapse(obj)) def compare_dicts(dict1, dict2): if not set(dict1) <= set(dict2): return False for key in dict1.keys(): if dict1[key] is not None and dict2[key] is not None: if isinstance(dict1[key], dict): if compare_dicts(dict1[key], dict2[key]): continue else: return False try: comparison = np.array_equal(dict1[key], dict2[key]) except TypeError: comparison = dict1[key] == dict2[key] if not comparison: return False return True # Taken from # http://www.goldb.org/goldblog/2008/02/06/PythonConvertSecsIntoHumanReadableTimeStringHHMMSS.aspx def humanize_time(secs): """ Takes *secs* and returns a nicely formatted string """ mins, secs = divmod(secs, 60) hours, mins = divmod(mins, 60) return "%02d:%02d:%02d" % (hours, mins, secs) # # Some function wrappers that come in handy once in a while # def get_memory_usage(subtract_share=False): """ Returning resident size in megabytes """ pid = os.getpid() # we use the resource module to get the memory page size try: import resource except ImportError: return -1024 else: pagesize = resource.getpagesize() status_file = f"/proc/{pid}/statm" if not os.path.isfile(status_file): return -1024 with open(status_file) as fh: line = fh.read() size, resident, share, text, library, data, dt = (int(i) for i in line.split()) if subtract_share: resident -= share return resident * pagesize / (1024 * 1024) # return in megs def time_execution(func): r""" Decorator for seeing how long a given function takes, depending on whether or not the global 'yt.time_functions' config parameter is set. """ @wraps(func) def wrapper(*arg, **kw): t1 = time.time() res = func(*arg, **kw) t2 = time.time() mylog.debug("%s took %0.3f s", func.__name__, (t2 - t1)) return res if ytcfg.get("yt", "time_functions"): return wrapper else: return func def print_tb(func): """ This function is used as a decorate on a function to have the calling stack printed whenever that function is entered. This can be used like so: >>> @print_tb ... def some_deeply_nested_function(*args, **kwargs): ... ... """ @wraps(func) def run_func(*args, **kwargs): traceback.print_stack() return func(*args, **kwargs) return run_func def rootonly(func): """ This is a decorator that, when used, will only call the function on the root processor. This can be used like so: .. code-block:: python @rootonly def some_root_only_function(*args, **kwargs): ... """ @wraps(func) def check_parallel_rank(*args, **kwargs): if ytcfg.get("yt", "internals", "topcomm_parallel_rank") > 0: return return func(*args, **kwargs) return check_parallel_rank def pdb_run(func): """ This decorator inserts a pdb session on top of the call-stack into a function. This can be used like so: >>> @pdb_run ... def some_function_to_debug(*args, **kwargs): ... ... """ import pdb @wraps(func) def wrapper(*args, **kw): pdb.runcall(func, *args, **kw) return wrapper __header = """ == Welcome to the embedded IPython Shell == You are currently inside the function: %(fname)s Defined in: %(filename)s:%(lineno)s """ def insert_ipython(num_up=1): """ Placed inside a function, this will insert an IPython interpreter at that current location. This will enabled detailed inspection of the current execution environment, as well as (optional) modification of that environment. *num_up* refers to how many frames of the stack get stripped off, and defaults to 1 so that this function itself is stripped off. 
""" import IPython from IPython.terminal.embed import InteractiveShellEmbed try: from traitlets.config.loader import Config except ImportError: from IPython.config.loader import Config frame = inspect.stack()[num_up] loc = frame[0].f_locals.copy() glo = frame[0].f_globals dd = {"fname": frame[3], "filename": frame[1], "lineno": frame[2]} cfg = Config() cfg.InteractiveShellEmbed.local_ns = loc cfg.InteractiveShellEmbed.global_ns = glo IPython.embed(config=cfg, banner2=__header % dd) ipshell = InteractiveShellEmbed(config=cfg) del ipshell # # Our progress bar types and how to get one # class TqdmProgressBar: # This is a drop in replacement for pbar # called tqdm def __init__(self, title, maxval): from tqdm import tqdm self._pbar = tqdm(leave=True, total=maxval, desc=title) self.i = 0 def update(self, i=None): if i is None: i = self.i + 1 n = i - self.i self.i = i self._pbar.update(n) def finish(self): self._pbar.close() class DummyProgressBar: # This progressbar gets handed if we don't # want ANY output def __init__(self, *args, **kwargs): return def update(self, *args, **kwargs): return def finish(self, *args, **kwargs): return def get_pbar(title, maxval): """ This returns a progressbar of the most appropriate type, given a *title* and a *maxval*. """ maxval = max(maxval, 1) if ( ytcfg.get("yt", "suppress_stream_logging") or ytcfg.get("yt", "internals", "within_testing") or maxval == 1 or not is_root() ): return DummyProgressBar() return TqdmProgressBar(title, maxval) def only_on_root(func, *args, **kwargs): """ This function accepts a *func*, a set of *args* and *kwargs* and then only on the root processor calls the function. All other processors get "None" handed back. """ if kwargs.pop("global_rootonly", False): cfg_option = "global_parallel_rank" else: cfg_option = "topcomm_parallel_rank" if not ytcfg.get("yt", "internals", "parallel"): return func(*args, **kwargs) if ytcfg.get("yt", "internals", cfg_option) > 0: return return func(*args, **kwargs) def is_root(): """ This function returns True if it is on the root processor of the topcomm and False otherwise. """ if not ytcfg.get("yt", "internals", "parallel"): return True return ytcfg.get("yt", "internals", "topcomm_parallel_rank") == 0 # # Our signal and traceback handling functions # def signal_print_traceback(signo, frame): print(traceback.print_stack(frame)) def signal_problem(signo, frame): raise RuntimeError() def signal_ipython(signo, frame): insert_ipython(2) def paste_traceback(exc_type, exc, tb): """ This is a traceback handler that knows how to paste to the pastebin. Should only be used in sys.excepthook. """ sys.__excepthook__(exc_type, exc, tb) import xmlrpc.client from io import StringIO p = xmlrpc.client.ServerProxy( "http://paste.yt-project.org/xmlrpc/", allow_none=True ) s = StringIO() traceback.print_exception(exc_type, exc, tb, file=s) s = s.getvalue() ret = p.pastes.newPaste("pytb", s, None, "", "", True) print() print(f"Traceback pasted to http://paste.yt-project.org/show/{ret}") print() def paste_traceback_detailed(exc_type, exc, tb): """ This is a traceback handler that knows how to paste to the pastebin. Should only be used in sys.excepthook. 
""" import cgitb import xmlrpc.client from io import StringIO s = StringIO() handler = cgitb.Hook(format="text", file=s) handler(exc_type, exc, tb) s = s.getvalue() print(s) p = xmlrpc.client.ServerProxy( "http://paste.yt-project.org/xmlrpc/", allow_none=True ) ret = p.pastes.newPaste("text", s, None, "", "", True) print() print(f"Traceback pasted to http://paste.yt-project.org/show/{ret}") print() _ss = "fURbBUUBE0cLXgETJnZgJRMXVhVGUQpQAUBuehQMUhJWRFFRAV1ERAtBXw1dAxMLXT4zXBFfABNN\nC0ZEXw1YUURHCxMXVlFERwxWCQw=\n" def _rdbeta(key): enc_s = base64.decodestring(_ss) dec_s = "".join(chr(ord(a) ^ ord(b)) for a, b in zip(enc_s, itertools.cycle(key))) print(dec_s) # # Some exceptions # class NoCUDAException(Exception): pass class YTEmptyClass: pass def update_git(path): try: import git except ImportError: print("Updating and precise version information requires ") print("gitpython to be installed.") print("Try: python -m pip install gitpython") return -1 with open(os.path.join(path, "yt_updater.log"), "a") as f: repo = git.Repo(path) if repo.is_dirty(untracked_files=True): print("Changes have been made to the yt source code so I won't ") print("update the code. You will have to do this yourself.") print("Here's a set of sample commands:") print("") print(f" $ cd {path}") print(" $ git stash") print(" $ git checkout main") print(" $ git pull") print(" $ git stash pop") print(f" $ {sys.executable} setup.py develop") print("") return 1 if repo.active_branch.name != "main": print("yt repository is not tracking the main branch so I won't ") print("update the code. You will have to do this yourself.") print("Here's a set of sample commands:") print("") print(f" $ cd {path}") print(" $ git checkout main") print(" $ git pull") print(f" $ {sys.executable} setup.py develop") print("") return 1 print("Updating the repository") f.write("Updating the repository\n\n") old_version = repo.git.rev_parse("HEAD", short=12) try: remote = repo.remotes.yt_upstream except AttributeError: remote = repo.create_remote( "yt_upstream", url="https://github.com/yt-project/yt" ) remote.fetch() main = repo.heads.main main.set_tracking_branch(remote.refs.main) main.checkout() remote.pull() new_version = repo.git.rev_parse("HEAD", short=12) f.write(f"Updated from {old_version} to {new_version}\n\n") rebuild_modules(path, f) print("Updated successfully") def rebuild_modules(path, f): f.write("Rebuilding modules\n\n") p = subprocess.Popen( [sys.executable, "setup.py", "build_clib", "build_ext", "-i"], cwd=path, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, ) stdout, stderr = p.communicate() f.write(stdout.decode("utf-8")) f.write("\n\n") if p.returncode: print(f"BROKEN: See {os.path.join(path, 'yt_updater.log')}") sys.exit(1) f.write("Successful!\n") def get_git_version(path): try: import git except ImportError: print("Updating and precise version information requires ") print("gitpython to be installed.") print("Try: python -m pip install gitpython") return None try: repo = git.Repo(path) return repo.git.rev_parse("HEAD", short=12) except git.InvalidGitRepositoryError: # path is not a git repository return None def get_yt_version(): import importlib.resources as importlib_resources version = get_git_version(os.path.dirname(importlib_resources.files("yt"))) if version is None: return version else: v_str = version[:12].strip() if hasattr(v_str, "decode"): v_str = v_str.decode("utf-8") return v_str def get_version_stack(): import matplotlib from yt._version import __version__ as yt_version version_info = {} version_info["yt"] = 
yt_version version_info["numpy"] = np.version.version version_info["matplotlib"] = matplotlib.__version__ return version_info def get_script_contents(): top_frame = inspect.stack()[-1] finfo = inspect.getframeinfo(top_frame[0]) if finfo[2] != "": return None if not os.path.exists(finfo[0]): return None try: contents = open(finfo[0]).read() except Exception: contents = None return contents def download_file(url, filename): try: return fancy_download_file(url, filename, requests) except ImportError: # fancy_download_file requires requests return simple_download_file(url, filename) def fancy_download_file(url, filename, requests=None): response = requests.get(url, stream=True) total_length = response.headers.get("content-length") with open(filename, "wb") as fh: if total_length is None: fh.write(response.content) else: blocksize = 4 * 1024**2 iterations = int(float(total_length) / float(blocksize)) pbar = get_pbar( "Downloading {} to {} ".format(*os.path.split(filename)[::-1]), iterations, ) iteration = 0 for chunk in response.iter_content(chunk_size=blocksize): fh.write(chunk) iteration += 1 pbar.update(iteration) pbar.finish() return filename def simple_download_file(url, filename): import urllib.error import urllib.request try: fn, h = urllib.request.urlretrieve(url, filename) except urllib.error.HTTPError as err: raise RuntimeError( f"Attempt to download file from {url} failed with error {err.code}: {err.msg}." ) from None return fn # This code snippet is modified from Georg Brandl def bb_apicall(endpoint, data, use_pass=True): import getpass import urllib.parse import urllib.request uri = f"https://api.bitbucket.org/1.0/{endpoint}/" # since bitbucket doesn't return the required WWW-Authenticate header when # making a request without Authorization, we cannot use the standard urllib2 # auth handlers; we have to add the requisite header from the start if data is not None: data = urllib.parse.urlencode(data) req = urllib.request.Request(uri, data) if use_pass: username = input("Bitbucket Username? ") password = getpass.getpass() upw = f"{username}:{password}" req.add_header("Authorization", f"Basic {base64.b64encode(upw).strip()}") return urllib.request.urlopen(req).read() def fix_length(length, ds): registry = ds.unit_registry if isinstance(length, YTArray): if registry is not None: length.units.registry = registry return length.in_units("code_length") if isinstance(length, numeric_type): return YTArray(length, "code_length", registry=registry) length_valid_tuple = isinstance(length, (list, tuple)) and len(length) == 2 unit_is_string = isinstance(length[1], str) length_is_number = isinstance(length[0], numeric_type) and not isinstance( length[0], YTArray ) if length_valid_tuple and unit_is_string and length_is_number: return YTArray(*length, registry=registry) else: raise RuntimeError(f"Length {str(length)} is invalid") @contextlib.contextmanager def parallel_profile(prefix): r"""A context manager for profiling parallel code execution using cProfile This is a simple context manager that automatically profiles the execution of a snippet of code. Parameters ---------- prefix : string A string name to prefix outputs with. Examples -------- >>> from yt import PhasePlot >>> from yt.testing import fake_random_ds >>> fields = ("density", "temperature", "cell_mass") >>> units = ("g/cm**3", "K", "g") >>> ds = fake_random_ds(16, fields=fields, units=units) >>> with parallel_profile("my_profile"): ... 
plot = PhasePlot(ds.all_data(), *fields) """ import cProfile fn = "%s_%04i_%04i.cprof" % ( prefix, ytcfg.get("yt", "internals", "topcomm_parallel_size"), ytcfg.get("yt", "internals", "topcomm_parallel_rank"), ) p = cProfile.Profile() p.enable() yield fn p.disable() p.dump_stats(fn) def get_num_threads(): from .config import ytcfg nt = ytcfg.get("yt", "num_threads") if nt < 0: return os.environ.get("OMP_NUM_THREADS", 0) return nt def fix_axis(axis, ds): return ds.coordinates.axis_id.get(axis, axis) def get_output_filename(name, keyword, suffix): r"""Return an appropriate filename for output. With a name provided by the user, this will decide how to appropriately name the output file by the following rules: 1. if name is None, the filename will be the keyword plus the suffix. 2. if name ends with "/" (resp "\" on Windows), assume name is a directory and the file will be named name/(keyword+suffix). If the directory does not exist, first try to create it and raise an exception if an error occurs. 3. if name does not end in the suffix, add the suffix. Parameters ---------- name : str A filename given by the user. keyword : str A default filename prefix if name is None. suffix : str Suffix that must appear at end of the filename. This will be added if not present. Examples -------- >>> get_output_filename(None, "Projection_x", ".png") 'Projection_x.png' >>> get_output_filename("my_file", "Projection_x", ".png") 'my_file.png' >>> get_output_filename("my_dir/", "Projection_x", ".png") 'my_dir/Projection_x.png' """ if name is None: name = keyword name = os.path.expanduser(name) if name.endswith(os.sep) and not os.path.isdir(name): ensure_dir(name) if os.path.isdir(name): name = os.path.join(name, keyword) if not name.endswith(suffix): name += suffix return name def ensure_dir_exists(path): r"""Create all directories in path recursively in a parallel safe manner""" my_dir = os.path.dirname(path) # If path is a file in the current directory, like "test.txt", then my_dir # would be an empty string, resulting in FileNotFoundError when passed to # ensure_dir. Let's avoid that. if my_dir: ensure_dir(my_dir) def ensure_dir(path): r"""Parallel safe directory maker.""" if os.path.exists(path): return path try: os.makedirs(path) except OSError as e: if e.errno == errno.EEXIST: pass else: raise return path def validate_width_tuple(width): if not is_sequence(width) or len(width) != 2: raise YTInvalidWidthError(f"width ({width}) is not a two element tuple") is_numeric = isinstance(width[0], numeric_type) length_has_units = isinstance(width[0], YTArray) unit_is_string = isinstance(width[1], str) if not is_numeric or length_has_units and unit_is_string: msg = f"width ({str(width)}) is invalid. " msg += "Valid widths look like this: (12, 'au')" raise YTInvalidWidthError(msg) _first_cap_re = re.compile("(.)([A-Z][a-z]+)") _all_cap_re = re.compile("([a-z0-9])([A-Z])") @lru_cache(maxsize=128, typed=False) def camelcase_to_underscore(name): s1 = _first_cap_re.sub(r"\1_\2", name) return _all_cap_re.sub(r"\1_\2", s1).lower() def set_intersection(some_list): if len(some_list) == 0: return set() # This accepts a list of iterables, which we get the intersection of. s = set(some_list[0]) for l in some_list[1:]: s.intersection_update(l) return s @contextlib.contextmanager def memory_checker(interval=15, dest=None): r"""This is a context manager that monitors memory usage. Parameters ---------- interval : int The number of seconds between printing the current memory usage in gigabytes of the current Python interpreter. 
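    dest : file-like object, optional
        The stream to which the memory usage report is printed.
        Defaults to sys.stdout when None.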
Examples -------- >>> with memory_checker(10): ... arr = np.zeros(1024 * 1024 * 1024, dtype="float64") ... time.sleep(15) ... del arr MEMORY: -1.000e+00 gb """ import threading if dest is None: dest = sys.stdout class MemoryChecker(threading.Thread): def __init__(self, event, interval): self.event = event self.interval = interval threading.Thread.__init__(self) def run(self): while not self.event.wait(self.interval): print(f"MEMORY: {get_memory_usage() / 1024.0:0.3e} gb", file=dest) e = threading.Event() mem_check = MemoryChecker(e, interval) mem_check.start() try: yield finally: e.set() def enable_plugins(plugin_filename=None): """Forces a plugin file to be parsed. A plugin file is a means of creating custom fields, quantities, data objects, colormaps, and other code classes and objects to be used in yt scripts without modifying the yt source directly. If ``plugin_filename`` is omitted, this function will look for a plugin file at ``$HOME/.config/yt/my_plugins.py``, which is the preferred behaviour for a system-level configuration. Warning: a script using this function will only be reproducible if your plugin file is shared with it. """ import yt from yt.config import config_dir, ytcfg from yt.fields.my_plugin_fields import my_plugins_fields if plugin_filename is not None: _fn = plugin_filename if not os.path.isfile(_fn): raise FileNotFoundError(_fn) else: # Determine global plugin location. By decreasing priority order: # - absolute path # - CONFIG_DIR # - obsolete config dir. my_plugin_name = ytcfg.get("yt", "plugin_filename") for base_prefix in ("", config_dir()): if os.path.isfile(os.path.join(base_prefix, my_plugin_name)): _fn = os.path.join(base_prefix, my_plugin_name) break else: raise FileNotFoundError("Could not find a global system plugin file.") mylog.info("Loading plugins from %s", _fn) ytdict = yt.__dict__ execdict = ytdict.copy() execdict["add_field"] = my_plugins_fields.add_field with open(_fn) as f: code = compile(f.read(), _fn, "exec") exec(code, execdict, execdict) ytnamespace = list(ytdict.keys()) for k in execdict.keys(): if k not in ytnamespace: if callable(execdict[k]): setattr(yt, k, execdict[k]) def subchunk_count(n_total, chunk_size): handled = 0 while handled < n_total: tr = min(n_total - handled, chunk_size) yield tr handled += tr def fix_unitary(u): if u == "1": return "unitary" else: return u def get_hash(infile, algorithm="md5", BLOCKSIZE=65536): """Generate file hash without reading in the entire file at once. Original code licensed under MIT. Source: https://www.pythoncentral.io/hashing-files-with-python/ Parameters ---------- infile : str File of interest (including the path). algorithm : str (optional) Hash algorithm of choice. Defaults to 'md5'. BLOCKSIZE : int (optional) How much data in bytes to read in at once. Returns ------- hash : str The hash of the file. Examples -------- >>> from tempfile import NamedTemporaryFile >>> with NamedTemporaryFile() as file: ... get_hash(file.name) 'd41d8cd98f00b204e9800998ecf8427e' """ import hashlib try: hasher = getattr(hashlib, algorithm)() except AttributeError as e: raise NotImplementedError( f"'{algorithm}' not available! 
Available algorithms: {hashlib.algorithms}" ) from e filesize = os.path.getsize(infile) iterations = int(float(filesize) / float(BLOCKSIZE)) pbar = get_pbar(f"Generating {algorithm} hash", iterations) iter = 0 with open(infile, "rb") as f: buf = f.read(BLOCKSIZE) while len(buf) > 0: hasher.update(buf) buf = f.read(BLOCKSIZE) iter += 1 pbar.update(iter) pbar.finish() return hasher.hexdigest() def get_brewer_cmap(cmap): """Returns a colorbrewer colormap from palettable""" try: import palettable except ImportError as exc: raise RuntimeError( "Please install palettable to use colorbrewer colormaps" ) from exc bmap = palettable.colorbrewer.get_map(*cmap) return bmap.get_mpl_colormap(N=cmap[2]) def matplotlib_style_context(style="yt.default", after_reset=False): """Returns a context manager for controlling matplotlib style. Arguments are passed to matplotlib.style.context() if specified. Defaults to setting yt's "yt.default" style, after resetting to the default config parameters. """ # FUTURE: this function should be deprecated in favour of matplotlib.style.context # after support for matplotlib 3.6 and older versions is dropped. import importlib.resources as importlib_resources import matplotlib as mpl import matplotlib.style if style == "yt.default" and mpl.__version_info__ < (3, 7): style = importlib_resources.files("yt") / "default.mplstyle" return matplotlib.style.context(style, after_reset=after_reset) interactivity = False """Sets the condition that interactive backends can be used.""" def toggle_interactivity(): global interactivity interactivity = not interactivity if interactivity: if IS_IPYTHON: import IPython shell = IPython.get_ipython() shell.magic("matplotlib") else: import matplotlib matplotlib.interactive(True) def get_interactivity(): return interactivity def setdefaultattr(obj, name, value): """Set attribute with *name* on *obj* with *value* if it doesn't exist yet Analogous to dict.setdefault """ if not hasattr(obj, name): setattr(obj, name, value) return getattr(obj, name) def parse_h5_attr(f, attr): """A Python3-safe function for getting hdf5 attributes. If an attribute is supposed to be a string, this will return it as such. """ val = f.attrs.get(attr, None) if isinstance(val, bytes): return val.decode("utf8") else: return val def obj_length(v): if is_sequence(v): return len(v) else: # If something isn't iterable, we return 0 # to signify zero length (aka a scalar). return 0 def array_like_field(data, x, field): field = data._determine_fields(field)[0] finfo = data.ds._get_field_info(field) if finfo.sampling_type == "particle": units = finfo.output_units else: units = finfo.units if isinstance(x, YTArray): arr = copy.deepcopy(x) arr.convert_to_units(units) return arr if isinstance(x, np.ndarray): return data.ds.arr(x, units) else: return data.ds.quan(x, units) def _full_type_name(obj: object = None, /, *, cls: type | None = None) -> str: if cls is not None and obj is not None: raise TypeError("_full_type_name takes an object or a class, but not both") if cls is None: cls = obj.__class__ prefix = f"{cls.__module__}." if cls.__module__ != "builtins" else "" return f"{prefix}{cls.__name__}" def validate_3d_array(obj): if not is_sequence(obj) or len(obj) != 3: raise TypeError( f"Expected an array of size (3,), " f"received {_full_type_name(obj)!r} of length {len(obj)}" ) def validate_float(obj): """Validates if the passed argument is a float value. Raises an exception if `obj` is not a single float value or a YTQuantity of size 1. 
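    A (value, unit) tuple such as (1, "cm") is also accepted; see the
    examples below.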
    Parameters
    ----------
    obj : Any
        Any argument which needs to be checked for a single float value.

    Raises
    ------
    TypeError
        Raised if `obj` is not a single float value or YTQuantity

    Examples
    --------
    >>> validate_float(1)
    >>> validate_float(1.50)
    >>> validate_float(YTQuantity(1, "cm"))
    >>> validate_float((1, "cm"))
    >>> validate_float([1, 1, 1])
    Traceback (most recent call last):
    ...
    TypeError: Expected a numeric value (or size-1 array), received 'list' of length 3
    >>> validate_float([YTQuantity(1, "cm"), YTQuantity(2, "cm")])
    Traceback (most recent call last):
    ...
    TypeError: Expected a numeric value (or size-1 array), received 'list' of length 2
    """
    if isinstance(obj, tuple):
        if (
            len(obj) != 2
            or not isinstance(obj[0], numeric_type)
            or not isinstance(obj[1], str)
        ):
            raise TypeError(
                "Expected a numeric value (or tuple of format "
                f"(float, String)), received an inconsistent tuple {str(obj)!r}."
            )
        else:
            return
    if is_sequence(obj) and (len(obj) != 1 or not isinstance(obj[0], numeric_type)):
        raise TypeError(
            "Expected a numeric value (or size-1 array), "
            f"received {_full_type_name(obj)!r} of length {len(obj)}"
        )


def validate_sequence(obj):
    if obj is not None and not is_sequence(obj):
        raise TypeError(
            "Expected an iterable object, " f"received {_full_type_name(obj)!r}"
        )


def validate_field_key(key):
    if (
        isinstance(key, tuple)
        and len(key) == 2
        and all(isinstance(_, str) for _ in key)
    ):
        return
    raise TypeError(
        "Expected a 2-tuple of strings formatted as\n"
        "(field or particle type, field name)\n"
        f"Received invalid field key: {key}, with type {type(key)}"
    )


def is_valid_field_key(key):
    try:
        validate_field_key(key)
    except TypeError:
        return False
    else:
        return True


def validate_object(obj, data_type):
    if obj is not None and not isinstance(obj, data_type):
        raise TypeError(
            f"Expected an object of {_full_type_name(cls=data_type)!r} type, "
            f"received {_full_type_name(obj)!r}"
        )


def validate_axis(ds, axis):
    if ds is not None:
        valid_axis = sorted(
            ds.coordinates.axis_name.keys(), key=lambda k: str(k).swapcase()
        )
    else:
        valid_axis = [0, 1, 2, "x", "y", "z", "X", "Y", "Z"]
    if axis not in valid_axis:
        raise TypeError(f"Expected axis to be any of {valid_axis}, received {axis!r}")


def validate_center(center):
    if isinstance(center, str):
        c = center.lower()
        if (
            c not in ["c", "center", "m", "max", "min"]
            and not c.startswith("max_")
            and not c.startswith("min_")
        ):
            raise TypeError(
                "Expected 'center' to be in ['c', 'center', "
                "'m', 'max', 'min'] or the prefix to be "
                f"'max_'/'min_', received {center!r}."
            )
    elif not isinstance(center, (numeric_type, YTQuantity)) and not is_sequence(
        center
    ):
        raise TypeError(
            "Expected 'center' to be a numeric object of type "
            "list/tuple/np.ndarray/YTArray/YTQuantity, "
            f"received {_full_type_name(center)}."
        )
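
# Illustrative aside (not part of yt): a few center specifications that pass the
# validation above, and what parse_center_array (below) turns them into.
from yt.funcs import parse_center_array
from yt.testing import fake_random_ds

ds = fake_random_ds(16)
print(parse_center_array("c", ds))  # domain center
print(parse_center_array("max", ds))  # location of the ("gas", "density") peak
print(parse_center_array([0.25, 0.5, 0.75], ds))  # bare floats -> code_length
print(parse_center_array(("min", ("gas", "density")), ds))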
def parse_center_array(center, ds, axis: int | None = None):
    known_shortnames = {"m": "max", "c": "center", "l": "left", "r": "right"}
    valid_single_str_values = ("center", "left", "right")
    valid_field_loc_str_values = ("min", "max")
    valid_str_values = valid_single_str_values + valid_field_loc_str_values
    default_error_message = (
        "Expected any of the following\n"
        "- 'c', 'center', 'l', 'left', 'r', 'right', 'm', 'max', or 'min'\n"
        "- a 2 element tuple with 'min' or 'max' as the first element, followed by a field identifier\n"
        "- a 3 element array-like: for a unyt_array, expects length dimensions, otherwise code_length is assumed"
    )
    # store an unmodified copy of user input to be inserted in error messages
    center_input = deepcopy(center)

    if isinstance(center, str):
        centerl = center.lower()
        if centerl in known_shortnames:
            centerl = known_shortnames[centerl]
        match = re.match(r"^(?P<extremum>(min|max))(_(?P<field>[\w_]+))?", centerl)
        if match is not None:
            if match["field"] is not None:
                for ftype, fname in ds.derived_field_list:  # noqa: B007
                    if fname == match["field"]:
                        break
                else:
                    raise YTFieldNotFound(match["field"], ds)
            else:
                ftype, fname = ("gas", "density")
            center = (match["extremum"], (ftype, fname))
        elif centerl in ("center", "left", "right"):
            # domain_left_edge and domain_right_edge might not be
            # initialized until we create the index, so create it
            ds.index
            center = ds.domain_center.copy()
            if centerl in ("left", "right") and axis is None:
                raise ValueError(f"center={center!r} is not valid with axis=None")
            if centerl == "left":
                center = ds.domain_center.copy()
                center[axis] = ds.domain_left_edge[axis]
            elif centerl == "right":
                # note that the right edge of a grid is excluded by slice selector
                # which is why we offset the region center by the smallest distance possible
                center = ds.domain_center.copy()
                center[axis] = (
                    ds.domain_right_edge[axis]
                    - center.uq * np.finfo(center.dtype).eps
                )
        elif centerl not in valid_str_values:
            raise ValueError(
                f"Received unknown center single string value {center!r}. "
                + default_error_message
            )

    if is_sequence(center):
        if (
            len(center) == 2
            and isinstance(center[0], str)
            and (is_valid_field_key(center[1]) or isinstance(center[1], str))
        ):
            center0l = center[0].lower()
            if center0l not in valid_str_values:
                raise ValueError(
                    f"Received unknown string value {center[0]!r}. 
" f"Expected one of {valid_field_loc_str_values} (case insensitive)" ) field_key = center[1] if center0l == "min": v, center = ds.find_min(field_key) else: assert center0l == "max" v, center = ds.find_max(field_key) center = ds.arr(center, "code_length") elif len(center) == 2 and is_sequence(center[0]) and isinstance(center[1], str): center = ds.arr(center[0], center[1]) elif len(center) == 3 and all(isinstance(_, YTQuantity) for _ in center): center = ds.arr([c.copy() for c in center], dtype="float64") elif len(center) == 3: center = ds.arr(center, "code_length") if isinstance(center, np.ndarray) and center.ndim > 1: mylog.debug("Removing singleton dimensions from 'center'.") center = np.squeeze(center) if not isinstance(center, YTArray): raise TypeError( f"Received {center_input!r}, but failed to transform to a unyt_array (obtained {center!r}).\n" + default_error_message + "\n" "If you supplied an expected type, consider filing a bug report" ) if center.shape != (3,): raise TypeError( f"Received {center_input!r} and obtained {center!r} after sanitizing.\n" + default_error_message + "\n" "If you supplied an expected type, consider filing a bug report" ) # make sure the return value shares all # unit symbols with ds.unit_registry # we rely on unyt to invalidate unit dimensionality here center = ds.arr(center).in_units("code_length") if not ds._is_within_domain(center): mylog.warning( "Requested center at %s is outside of data domain with " "left edge = %s, " "right edge = %s, " "periodicity = %s", center, ds.domain_left_edge, ds.domain_right_edge, ds.periodicity, ) return center.astype("float64") def sglob(pattern): """ Return the results of a glob through the sorted() function. """ return sorted(glob.glob(pattern)) def dictWithFactory(factory: Callable[[Any], Any]) -> type: """ Create a dictionary class with a default factory function. Contrary to `collections.defaultdict`, the factory takes the missing key as input parameter. Parameters ---------- factory : callable(key) -> value The factory to call when hitting a missing key Returns ------- DictWithFactory class A class to create new dictionaries handling missing keys. """ issue_deprecation_warning( "yt.funcs.dictWithFactory will be removed in a future version of yt, please do not rely on it. " "If you need it, copy paste this function from yt's source code", stacklevel=3, since="4.1", ) class DictWithFactory(UserDict): def __init__(self, *args, **kwargs): self.factory = factory super().__init__(*args, **kwargs) def __missing__(self, key): val = self.factory(key) self[key] = val return val return DictWithFactory def levenshtein_distance(seq1, seq2, max_dist=None): """ Compute the levenshtein distance between seq1 and seq2. From https://stackabuse.com/levenshtein-distance-and-text-similarity-in-python/ Parameters ---------- seq1 : str seq2 : str The strings to compute the distance between max_dist : integer If not None, maximum distance returned (see notes). Returns ------- The Levenshtein distance as an integer. Notes ----- This computes the Levenshtein distance, i.e. the number of edits to change seq1 into seq2. If a maximum distance is passed, the algorithm will stop as soon as the number of edits goes above the value. This allows for an earlier break and speeds calculations up. 
""" size_x = len(seq1) + 1 size_y = len(seq2) + 1 if max_dist is None: max_dist = max(size_x, size_y) if abs(size_x - size_y) > max_dist: return max_dist + 1 matrix = np.zeros((size_x, size_y), dtype=int) for x in range(size_x): matrix[x, 0] = x for y in range(size_y): matrix[0, y] = y for x in range(1, size_x): for y in range(1, size_y): if seq1[x - 1] == seq2[y - 1]: matrix[x, y] = min( matrix[x - 1, y] + 1, matrix[x - 1, y - 1], matrix[x, y - 1] + 1 ) else: matrix[x, y] = min( matrix[x - 1, y] + 1, matrix[x - 1, y - 1] + 1, matrix[x, y - 1] + 1 ) # Early break: the minimum distance is already larger than # maximum allow value, can return safely. if matrix[x].min() > max_dist: return max_dist + 1 return matrix[size_x - 1, size_y - 1] def validate_moment(moment, weight_field): if moment == 2 and weight_field is None: raise ValueError( "Cannot compute the second moment of a projection if weight_field=None!" ) if moment not in [1, 2]: raise ValueError( "Weighted projections can only be made of averages " "(moment = 1) or standard deviations (moment = 2)!" ) def setdefault_mpl_metadata(save_kwargs: dict[str, Any], name: str) -> None: """ Set a default Software metadata entry for use with Matplotlib outputs. """ _, ext = os.path.splitext(name.lower()) if ext in (".eps", ".ps", ".svg", ".pdf"): key = "Creator" elif ext == ".png": key = "Software" else: return default_software = ( "Matplotlib version{matplotlib}, https://matplotlib.org|NumPy-{numpy}|yt-{yt}" ).format(**get_version_stack()) if "metadata" in save_kwargs: save_kwargs["metadata"].setdefault(key, default_software) else: save_kwargs["metadata"] = {key: default_software} ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.359153 yt-4.4.0/yt/geometry/0000755000175100001770000000000014714401715014053 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/__init__.py0000644000175100001770000000000014714401662016153 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.359153 yt-4.4.0/yt/geometry/_selection_routines/0000755000175100001770000000000014714401715020127 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/_selection_routines/always_selector.pxi0000644000175100001770000000303614714401662024054 0ustar00runnerdockercdef class AlwaysSelector(SelectorObject): def __init__(self, dobj): self.overlap_cells = 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def select_grids(self, np.ndarray[np.float64_t, ndim=2] left_edges, np.ndarray[np.float64_t, ndim=2] right_edges, np.ndarray[np.int32_t, ndim=2] levels): cdef int ng = left_edges.shape[0] cdef np.ndarray[np.uint8_t, ndim=1] gridi = np.ones(ng, dtype='uint8') return gridi.astype("bool") @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil: return 1 cdef int select_grid(self, np.float64_t left_edge[3], np.float64_t right_edge[3], np.int32_t level, Oct *o = NULL) noexcept nogil: return 1 cdef int select_point(self, np.float64_t pos[3]) noexcept nogil: return 1 cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil: return 1 cdef int select_bbox(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: return 1 cdef int select_bbox_edge(self, 
np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: return 1 def _hash_vals(self): return ("always", 1,) always_selector = AlwaysSelector ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/_selection_routines/boolean_selectors.pxi0000644000175100001770000004102514714401662024356 0ustar00runnerdocker cdef class BooleanSelector(SelectorObject): def __init__(self, dobj): # Note that this has a different API than the other selector objects, # so will not work as a traditional data selector. if not hasattr(dobj.dobj1, "selector"): self.sel1 = dobj.dobj1 else: self.sel1 = dobj.dobj1.selector if not hasattr(dobj.dobj2, "selector"): self.sel2 = dobj.dobj2 else: self.sel2 = dobj.dobj2.selector cdef class BooleanANDSelector(BooleanSelector): cdef int select_bbox(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: cdef int rv1 = self.sel1.select_bbox(left_edge, right_edge) if rv1 == 0: return 0 cdef int rv2 = self.sel2.select_bbox(left_edge, right_edge) if rv2 == 0: return 0 return 1 cdef int select_bbox_edge(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: cdef int rv1 = self.sel1.select_bbox_edge(left_edge, right_edge) if rv1 == 0: return 0 cdef int rv2 = self.sel2.select_bbox_edge(left_edge, right_edge) if rv2 == 0: return 0 return max(rv1, rv2) cdef int select_grid(self, np.float64_t left_edge[3], np.float64_t right_edge[3], np.int32_t level, Oct *o = NULL) noexcept nogil: cdef int rv1 = self.sel1.select_grid(left_edge, right_edge, level, o) if rv1 == 0: return 0 cdef int rv2 = self.sel2.select_grid(left_edge, right_edge, level, o) if rv2 == 0: return 0 return 1 cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil: cdef int rv1 = self.sel1.select_cell(pos, dds) if rv1 == 0: return 0 cdef int rv2 = self.sel2.select_cell(pos, dds) if rv2 == 0: return 0 return 1 cdef int select_point(self, np.float64_t pos[3]) noexcept nogil: cdef int rv1 = self.sel1.select_point(pos) if rv1 == 0: return 0 cdef int rv2 = self.sel2.select_point(pos) if rv2 == 0: return 0 return 1 cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil: cdef int rv1 = self.sel1.select_sphere(pos, radius) if rv1 == 0: return 0 cdef int rv2 = self.sel2.select_sphere(pos, radius) if rv2 == 0: return 0 return 1 def _hash_vals(self): return (self.sel1._hash_vals() + ("and",) + self.sel2._hash_vals()) cdef class BooleanORSelector(BooleanSelector): cdef int select_bbox(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: cdef int rv1 = self.sel1.select_bbox(left_edge, right_edge) if rv1 == 1: return 1 cdef int rv2 = self.sel2.select_bbox(left_edge, right_edge) if rv2 == 1: return 1 return 0 cdef int select_bbox_edge(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: cdef int rv1 = self.sel1.select_bbox_edge(left_edge, right_edge) if rv1 == 1: return 1 cdef int rv2 = self.sel2.select_bbox_edge(left_edge, right_edge) if rv2 == 1: return 1 return max(rv1, rv2) cdef int select_grid(self, np.float64_t left_edge[3], np.float64_t right_edge[3], np.int32_t level, Oct *o = NULL) noexcept nogil: cdef int rv1 = self.sel1.select_grid(left_edge, right_edge, level, o) if rv1 == 1: return 1 cdef int rv2 = self.sel2.select_grid(left_edge, right_edge, level, o) if rv2 == 1: return 1 if (rv1 == 2) or (rv2 == 2): return 2 return 0 cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil: 
cdef int rv1 = self.sel1.select_cell(pos, dds) if rv1 == 1: return 1 cdef int rv2 = self.sel2.select_cell(pos, dds) if rv2 == 1: return 1 return 0 cdef int select_point(self, np.float64_t pos[3]) noexcept nogil: cdef int rv1 = self.sel1.select_point(pos) if rv1 == 1: return 1 cdef int rv2 = self.sel2.select_point(pos) if rv2 == 1: return 1 return 0 cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil: cdef int rv1 = self.sel1.select_sphere(pos, radius) if rv1 == 1: return 1 cdef int rv2 = self.sel2.select_sphere(pos, radius) if rv2 == 1: return 1 return 0 def _hash_vals(self): return (self.sel1._hash_vals() + ("or",) + self.sel2._hash_vals()) cdef class BooleanNOTSelector(BooleanSelector): cdef int select_bbox(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: # We always return True here, because we don't have a "fully included" # check anywhere else. return 1 cdef int select_bbox_edge(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: cdef int rv1 = self.sel1.select_bbox_edge(left_edge, right_edge) if rv1 == 0: return 1 elif rv1 == 1: return 0 return 2 cdef int select_grid(self, np.float64_t left_edge[3], np.float64_t right_edge[3], np.int32_t level, Oct *o = NULL) noexcept nogil: return 1 cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil: cdef int rv1 = self.sel1.select_cell(pos, dds) if rv1 == 0: return 1 return 0 cdef int select_point(self, np.float64_t pos[3]) noexcept nogil: cdef int rv1 = self.sel1.select_point(pos) if rv1 == 0: return 1 return 0 cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil: cdef int rv1 = self.sel1.select_sphere(pos, radius) if rv1 == 0: return 1 return 0 def _hash_vals(self): return (self.sel1._hash_vals() + ("not",)) cdef class BooleanXORSelector(BooleanSelector): cdef int select_bbox(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: # We always return True here, because we don't have a "fully included" # check anywhere else. return 1 cdef int select_bbox_edge(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: # Return 2 in cases where one or both selectors partially overlap since # part of the bounding box could satisfy the condition unless the # selectors are identical. cdef int rv1 = self.sel1.select_bbox_edge(left_edge, right_edge) cdef int rv2 = self.sel2.select_bbox_edge(left_edge, right_edge) if rv1 == rv2: if rv1 == 2: # If not identical, part of the bbox will be touched by one # selector and not the other. 
# if self.sel1 == self.sel2: return 0 # requires gil return 2 return 0 if rv1 == 0: return rv2 if rv2 == 0: return rv1 return 2 # part of bbox only touched by selector fully covering bbox cdef int select_grid(self, np.float64_t left_edge[3], np.float64_t right_edge[3], np.int32_t level, Oct *o = NULL) noexcept nogil: return 1 cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil: cdef int rv1 = self.sel1.select_cell(pos, dds) cdef int rv2 = self.sel2.select_cell(pos, dds) if rv1 == rv2: return 0 return 1 cdef int select_point(self, np.float64_t pos[3]) noexcept nogil: cdef int rv1 = self.sel1.select_point(pos) cdef int rv2 = self.sel2.select_point(pos) if rv1 == rv2: return 0 return 1 cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil: cdef int rv1 = self.sel1.select_sphere(pos, radius) cdef int rv2 = self.sel2.select_sphere(pos, radius) if rv1 == rv2: return 0 return 1 def _hash_vals(self): return (self.sel1._hash_vals() + ("xor",) + self.sel2._hash_vals()) cdef class BooleanNEGSelector(BooleanSelector): cdef int select_bbox(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: # We always return True here, because we don't have a "fully included" # check anywhere else. return self.sel1.select_bbox(left_edge, right_edge) cdef int select_bbox_edge(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: cdef int rv1 = self.sel1.select_bbox_edge(left_edge, right_edge) if rv1 == 0: return 0 cdef int rv2 = self.sel2.select_bbox_edge(left_edge, right_edge) if rv2 == 1: return 0 elif rv2 == 0: return rv1 # If sel2 is partial, then sel1 - sel2 will be partial as long # as sel1 != sel2 # if self.sel1 == self.sel2: return 0 # requires gil return 2 cdef int select_grid(self, np.float64_t left_edge[3], np.float64_t right_edge[3], np.int32_t level, Oct *o = NULL) noexcept nogil: return self.sel1.select_grid(left_edge, right_edge, level, o) cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil: cdef int rv1 = self.sel1.select_cell(pos, dds) if rv1 == 0: return 0 cdef int rv2 = self.sel2.select_cell(pos, dds) if rv2 == 1: return 0 return 1 cdef int select_point(self, np.float64_t pos[3]) noexcept nogil: cdef int rv1 = self.sel1.select_point(pos) if rv1 == 0: return 0 cdef int rv2 = self.sel2.select_point(pos) if rv2 == 1: return 0 return 1 cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil: cdef int rv1 = self.sel1.select_sphere(pos, radius) if rv1 == 0: return 0 cdef int rv2 = self.sel2.select_sphere(pos, radius) if rv2 == 1: return 0 return 1 def _hash_vals(self): return (self.sel1._hash_vals() + ("neg",) + self.sel2._hash_vals()) cdef class ChainedBooleanSelector(SelectorObject): cdef int n_obj cdef np.ndarray selectors def __init__(self, dobj): # These are data objects, not selectors self.n_obj = len(dobj.data_objects) self.selectors = np.empty(self.n_obj, dtype="object") for i in range(self.n_obj): self.selectors[i] = dobj.data_objects[i].selector cdef class ChainedBooleanANDSelector(ChainedBooleanSelector): @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef int select_bbox(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: with gil: for i in range(self.n_obj): if (self.selectors[i]).select_bbox( left_edge, right_edge) == 0: return 0 return 1 @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef int select_bbox_edge(self, np.float64_t 
left_edge[3], np.float64_t right_edge[3]) noexcept nogil: cdef int selected = 1 cdef int ret with gil: for i in range(self.n_obj): ret = (self.selectors[i]).select_bbox_edge( left_edge, right_edge) if ret == 0: return 0 elif ret == 2: selected = 2 return selected @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef int select_grid(self, np.float64_t left_edge[3], np.float64_t right_edge[3], np.int32_t level, Oct *o = NULL) noexcept nogil: with gil: for i in range(self.n_obj): if (self.selectors[i]).select_grid( left_edge, right_edge, level, o) == 0: return 0 return 1 @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil: with gil: for i in range(self.n_obj): if (self.selectors[i]).select_cell( pos, dds) == 0: return 0 return 1 @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef int select_point(self, np.float64_t pos[3]) noexcept nogil: with gil: for i in range(self.n_obj): if (self.selectors[i]).select_point(pos) == 0: return 0 return 1 @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil: with gil: for i in range(self.n_obj): if (self.selectors[i]).select_sphere( pos, radius) == 0: return 0 return 1 def _hash_vals(self): v = ("chained_and",) for s in self.selectors: v += s._hash_vals() return v intersection_selector = ChainedBooleanANDSelector cdef class ChainedBooleanORSelector(ChainedBooleanSelector): @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef int select_bbox(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: with gil: for i in range(self.n_obj): if (self.selectors[i]).select_bbox( left_edge, right_edge) == 1: return 1 return 0 @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef int select_bbox_edge(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: cdef int selected = 0 cdef int ret with gil: for i in range(self.n_obj): ret = (self.selectors[i]).select_bbox_edge( left_edge, right_edge) if ret == 1: return 1 elif ret == 2: selected = 2 return selected @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef int select_grid(self, np.float64_t left_edge[3], np.float64_t right_edge[3], np.int32_t level, Oct *o = NULL) noexcept nogil: with gil: for i in range(self.n_obj): if (self.selectors[i]).select_grid( left_edge, right_edge, level, o) == 1: return 1 return 0 @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil: with gil: for i in range(self.n_obj): if (self.selectors[i]).select_cell( pos, dds) == 1: return 1 return 0 @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef int select_point(self, np.float64_t pos[3]) noexcept nogil: with gil: for i in range(self.n_obj): if (self.selectors[i]).select_point(pos) == 1: return 1 return 0 @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil: with gil: for i in range(self.n_obj): if (self.selectors[i]).select_sphere( pos, radius) == 1: return 1 return 0 def _hash_vals(self): v = ("chained_or",) for s in self.selectors: v += s._hash_vals() return v union_selector = ChainedBooleanORSelector 
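# ---------------------------------------------------------------------------
# Illustrative sketch (added commentary, not part of the upstream module):
# the chained selectors above back yt's multi-object union/intersection data
# objects, while the two-operand BooleanSelector subclasses back pairwise
# boolean combinations. Assuming a loaded dataset ``ds`` (the sample dataset
# path below is a placeholder), usage looks roughly like:
#
#     import yt
#     ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
#     sp1 = ds.sphere("c", (15, "kpc"))
#     sp2 = ds.sphere([0.6, 0.5, 0.5], (15, "kpc"))
#     either = ds.union([sp1, sp2])        # ChainedBooleanORSelector
#     both = ds.intersection([sp1, sp2])   # ChainedBooleanANDSelector
#
# Each compound object defers its point/cell/bbox tests to the component
# selectors' select_* methods defined in this file.
# ---------------------------------------------------------------------------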
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0
yt-4.4.0/yt/geometry/_selection_routines/compose_selector.pxi0000644000175100001770000000531414714401662024222 0ustar00runnerdocker
cdef class ComposeSelector(SelectorObject):
    cdef SelectorObject selector1
    cdef SelectorObject selector2

    def __init__(self, dobj, selector1, selector2):
        self.selector1 = selector1
        self.selector2 = selector2
        self.min_level = max(selector1.min_level, selector2.min_level)
        self.max_level = min(selector1.max_level, selector2.max_level)

    def select_grids(self,
                     np.ndarray[np.float64_t, ndim=2] left_edges,
                     np.ndarray[np.float64_t, ndim=2] right_edges,
                     np.ndarray[np.int32_t, ndim=2] levels):
        return np.logical_or(
            self.selector1.select_grids(left_edges, right_edges, levels),
            self.selector2.select_grids(left_edges, right_edges, levels))

    cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil:
        if self.selector1.select_cell(pos, dds) and \
                self.selector2.select_cell(pos, dds):
            return 1
        else:
            return 0

    cdef int select_grid(self, np.float64_t left_edge[3],
                         np.float64_t right_edge[3], np.int32_t level,
                         Oct *o = NULL) noexcept nogil:
        if self.selector1.select_grid(left_edge, right_edge, level, o) or \
                self.selector2.select_grid(left_edge, right_edge, level, o):
            return 1
        else:
            return 0

    cdef int select_point(self, np.float64_t pos[3]) noexcept nogil:
        if self.selector1.select_point(pos) and \
                self.selector2.select_point(pos):
            return 1
        else:
            return 0

    cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil:
        if self.selector1.select_sphere(pos, radius) and \
                self.selector2.select_sphere(pos, radius):
            return 1
        else:
            return 0

    cdef int select_bbox(self, np.float64_t left_edge[3],
                         np.float64_t right_edge[3]) noexcept nogil:
        if self.selector1.select_bbox(left_edge, right_edge) and \
                self.selector2.select_bbox(left_edge, right_edge):
            return 1
        else:
            return 0

    cdef int select_bbox_edge(self, np.float64_t left_edge[3],
                              np.float64_t right_edge[3]) noexcept nogil:
        cdef int rv1 = self.selector1.select_bbox_edge(left_edge, right_edge)
        if rv1 == 0:
            return 0
        cdef int rv2 = self.selector2.select_bbox_edge(left_edge, right_edge)
        if rv2 == 0:
            return 0
        return max(rv1, rv2)

    def _hash_vals(self):
        return (hash(self.selector1), hash(self.selector2))

compose_selector = ComposeSelector

././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0
yt-4.4.0/yt/geometry/_selection_routines/cut_region_selector.pxi0000644000175100001770000000261414714401662024713 0ustar00runnerdocker
cdef class CutRegionSelector(SelectorObject):
    cdef set _positions
    cdef tuple _conditionals

    def __init__(self, dobj):
        axis_name = dobj.ds.coordinates.axis_name
        positions = np.array([dobj['index', axis_name[0]],
                              dobj['index', axis_name[1]],
                              dobj['index', axis_name[2]]]).T
        self._conditionals = tuple(dobj.conditionals)
        self._positions = set(tuple(position) for position in positions)

    cdef int select_bbox(self, np.float64_t left_edge[3],
                         np.float64_t right_edge[3]) noexcept nogil:
        return 1

    cdef int select_bbox_edge(self, np.float64_t left_edge[3],
                              np.float64_t right_edge[3]) noexcept nogil:
        return 1

    cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil:
        with gil:
            if (pos[0], pos[1], pos[2]) in self._positions:
                return 1
            else:
                return 0

    cdef int select_point(self, np.float64_t pos[3]) noexcept nogil:
        return 1

    cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil:
        return 1

    def _hash_vals(self):
        t = ()
        for i, 
c in enumerate(self._conditionals): t += ("conditional[%s]" % i, c) return ("conditionals", t) cut_region_selector = CutRegionSelector ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/_selection_routines/cutting_plane_selector.pxi0000644000175100001770000001022514714401662025406 0ustar00runnerdockercdef class CuttingPlaneSelector(SelectorObject): cdef public np.float64_t norm_vec[3] cdef public np.float64_t d def __init__(self, dobj): cdef int i for i in range(3): self.norm_vec[i] = dobj._norm_vec[i] self.d = _ensure_code(dobj._d) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil: cdef np.float64_t left_edge[3] cdef np.float64_t right_edge[3] cdef int i for i in range(3): left_edge[i] = pos[i] - 0.5*dds[i] right_edge[i] = pos[i] + 0.5*dds[i] return self.select_bbox(left_edge, right_edge) cdef int select_point(self, np.float64_t pos[3]) noexcept nogil: # two 0-volume constructs don't intersect return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil: cdef int i cdef np.float64_t height = self.d for i in range(3) : height += pos[i] * self.norm_vec[i] if height*height <= radius*radius : return 1 return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_bbox(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: cdef int i, j, k, n cdef np.float64_t *arr[2] cdef np.float64_t pos[3] cdef np.float64_t gd arr[0] = left_edge arr[1] = right_edge all_under = 1 all_over = 1 # Check each corner for i in range(2): pos[0] = arr[i][0] for j in range(2): pos[1] = arr[j][1] for k in range(2): pos[2] = arr[k][2] gd = self.d for n in range(3): gd += pos[n] * self.norm_vec[n] # this allows corners and faces on the low-end to # collide, while not selecting cells on the high-side if i == 0 and j == 0 and k == 0 : if gd <= 0: all_over = 0 if gd >= 0: all_under = 0 else : if gd < 0: all_over = 0 if gd > 0: all_under = 0 if all_over == 1 or all_under == 1: return 0 return 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_bbox_edge(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: cdef int i, j, k, n cdef np.float64_t *arr[2] cdef np.float64_t pos[3] cdef np.float64_t gd arr[0] = left_edge arr[1] = right_edge all_under = 1 all_over = 1 # Check each corner for i in range(2): pos[0] = arr[i][0] for j in range(2): pos[1] = arr[j][1] for k in range(2): pos[2] = arr[k][2] gd = self.d for n in range(3): gd += pos[n] * self.norm_vec[n] # this allows corners and faces on the low-end to # collide, while not selecting cells on the high-side if i == 0 and j == 0 and k == 0 : if gd <= 0: all_over = 0 if gd >= 0: all_under = 0 else : if gd < 0: all_over = 0 if gd > 0: all_under = 0 if all_over == 1 or all_under == 1: return 0 return 2 # a box of non-zeros volume can't be inside a plane def _hash_vals(self): return (("norm_vec[0]", self.norm_vec[0]), ("norm_vec[1]", self.norm_vec[1]), ("norm_vec[2]", self.norm_vec[2]), ("d", self.d)) def _get_state_attnames(self): return ("d", "norm_vec") cutting_selector = CuttingPlaneSelector ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 
yt-4.4.0/yt/geometry/_selection_routines/data_collection_selector.pxi0000644000175100001770000000240114714401662025673 0ustar00runnerdockercdef class DataCollectionSelector(SelectorObject): cdef object obj_ids cdef np.int64_t nids def __init__(self, dobj): self.obj_ids = dobj._obj_ids self.nids = self.obj_ids.shape[0] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def select_grids(self, np.ndarray[np.float64_t, ndim=2] left_edges, np.ndarray[np.float64_t, ndim=2] right_edges, np.ndarray[np.int32_t, ndim=2] levels): cdef int n cdef int ng = left_edges.shape[0] cdef np.ndarray[np.uint8_t, ndim=1] gridi = np.zeros(ng, dtype='uint8') cdef np.ndarray[np.int64_t, ndim=1] oids = self.obj_ids with nogil: for n in range(self.nids): gridi[oids[n]] = 1 return gridi.astype("bool") @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def fill_mask_regular_grid(self, gobj): cdef np.ndarray[np.uint8_t, ndim=3] mask mask = np.ones(gobj.ActiveDimensions, dtype='uint8') return mask.astype("bool"), mask.size def _hash_vals(self): return (hash(self.obj_ids.tobytes()), self.nids) data_collection_selector = DataCollectionSelector ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/_selection_routines/disk_selector.pxi0000644000175100001770000001507514714401662023514 0ustar00runnerdockercdef class DiskSelector(SelectorObject): cdef public np.float64_t norm_vec[3] cdef public np.float64_t center[3] cdef public np.float64_t radius, radius2 cdef public np.float64_t height def __init__(self, dobj): cdef int i for i in range(3): self.norm_vec[i] = dobj._norm_vec[i] self.center[i] = _ensure_code(dobj.center[i]) self.radius = _ensure_code(dobj.radius) self.radius2 = self.radius * self.radius self.height = _ensure_code(dobj.height) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil: return self.select_point(pos) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_point(self, np.float64_t pos[3]) noexcept nogil: cdef np.float64_t h, d, r2, temp cdef int i h = d = 0 for i in range(3): temp = self.periodic_difference(pos[i], self.center[i], i) h += temp * self.norm_vec[i] d += temp*temp r2 = (d - h*h) if fabs(h) <= self.height and r2 <= self.radius2: return 1 return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil: cdef np.float64_t h, d, r2, temp cdef int i h = d = 0 for i in range(3): temp = self.periodic_difference(pos[i], self.center[i], i) h += temp * self.norm_vec[i] d += temp*temp r2 = (d - h*h) d = self.radius+radius if fabs(h) <= self.height+radius and r2 <= d*d: return 1 return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_bbox(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: # Until we can get our OBB/OBB intersection correct, disable this. 
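# Returning 1 unconditionally keeps the selection conservative but correct:
# every bounding box is treated as potentially intersecting the disk, and the
# exact per-point test in select_point then prunes cells that fall outside.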
return 1 # cdef np.float64_t *arr[2] # cdef np.float64_t pos[3] # cdef np.float64_t H, D, R2, temp # cdef int i, j, k, n # cdef int all_under = 1 # cdef int all_over = 1 # cdef int any_radius = 0 # # A moment of explanation (revised): # # The disk and bounding box collide if any of the following are true: # # 1) the center of the disk is inside the bounding box # # 2) any corner of the box lies inside the disk # # 3) the box spans the plane (!all_under and !all_over) and at least # # one corner is within the cylindrical radius # # check if disk center lies inside bbox # if left_edge[0] <= self.center[0] <= right_edge[0] and \ # left_edge[1] <= self.center[1] <= right_edge[1] and \ # left_edge[2] <= self.center[2] <= right_edge[2] : # return 1 # # check all corners # arr[0] = left_edge # arr[1] = right_edge # for i in range(2): # pos[0] = arr[i][0] # for j in range(2): # pos[1] = arr[j][1] # for k in range(2): # pos[2] = arr[k][2] # H = D = 0 # for n in range(3): # temp = self.difference(pos[n], self.center[n], n) # H += (temp * self.norm_vec[n]) # D += temp*temp # R2 = (D - H*H) # if R2 < self.radius2 : # any_radius = 1 # if fabs(H) < self.height: return 1 # if H < 0: all_over = 0 # if H > 0: all_under = 0 # if all_over == 0 and all_under == 0 and any_radius == 1: return 1 # return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_bbox_edge(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: # Until we can get our OBB/OBB intersection correct, disable this. return 2 # cdef np.float64_t *arr[2] # cdef np.float64_t pos[3], H, D, R2, temp # cdef int i, j, k, n # cdef int all_under = 1 # cdef int all_over = 1 # cdef int any_radius = 0 # # A moment of explanation (revised): # # The disk and bounding box collide if any of the following are true: # # 1) the center of the disk is inside the bounding box # # 2) any corner of the box lies inside the disk # # 3) the box spans the plane (!all_under and !all_over) and at least # # one corner is within the cylindrical radius # # check if disk center lies inside bbox # if left_edge[0] <= self.center[0] <= right_edge[0] and \ # left_edge[1] <= self.center[1] <= right_edge[1] and \ # left_edge[2] <= self.center[2] <= right_edge[2] : # return 1 # # check all corners # arr[0] = left_edge # arr[1] = right_edge # for i in range(2): # pos[0] = arr[i][0] # for j in range(2): # pos[1] = arr[j][1] # for k in range(2): # pos[2] = arr[k][2] # H = D = 0 # for n in range(3): # temp = self.periodic_difference( # pos[n], self.center[n], n) # H += (temp * self.norm_vec[n]) # D += temp*temp # R2 = (D - H*H) # if R2 < self.radius2 : # any_radius = 1 # if fabs(H) < self.height: return 1 # if H < 0: all_over = 0 # if H > 0: all_under = 0 # if all_over == 0 and all_under == 0 and any_radius == 1: return 1 # return 0 def _hash_vals(self): return (("norm_vec[0]", self.norm_vec[0]), ("norm_vec[1]", self.norm_vec[1]), ("norm_vec[2]", self.norm_vec[2]), ("center[0]", self.center[0]), ("center[1]", self.center[1]), ("center[2]", self.center[2]), ("radius", self.radius), ("radius2", self.radius2), ("height", self.height)) def _get_state_attnames(self): return ("radius", "radius2", "height", "norm_vec", "center") disk_selector = DiskSelector ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/_selection_routines/ellipsoid_selector.pxi0000644000175100001770000001337714714401662024551 0ustar00runnerdockercdef class EllipsoidSelector(SelectorObject): cdef 
public np.float64_t vec[3][3] cdef public np.float64_t mag[3] cdef public np.float64_t center[3] def __init__(self, dobj): cdef int i _ensure_code(dobj.center) _ensure_code(dobj._e0) _ensure_code(dobj._e1) _ensure_code(dobj._e2) _ensure_code(dobj._A) _ensure_code(dobj._B) _ensure_code(dobj._C) for i in range(3): self.center[i] = dobj.center[i] self.vec[0][i] = dobj._e0[i] self.vec[1][i] = dobj._e1[i] self.vec[2][i] = dobj._e2[i] self.mag[0] = dobj._A self.mag[1] = dobj._B self.mag[2] = dobj._C @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil: return self.select_point(pos) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_point(self, np.float64_t pos[3]) noexcept nogil: cdef np.float64_t dot_evec[3] cdef np.float64_t dist cdef int i, j dot_evec[0] = dot_evec[1] = dot_evec[2] = 0 # Calculate the rotated dot product for i in range(3): # axis dist = self.periodic_difference(pos[i], self.center[i], i) for j in range(3): dot_evec[j] += dist * self.vec[j][i] dist = 0.0 for i in range(3): dist += (dot_evec[i] * dot_evec[i])/(self.mag[i] * self.mag[i]) if dist <= 1.0: return 1 return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil: # this is the sphere selection cdef int i cdef np.float64_t dist, dist2_max, dist2 = 0 for i in range(3): dist = self.periodic_difference(pos[i], self.center[i], i) dist2 += dist * dist dist2_max = (self.mag[0] + radius) * (self.mag[0] + radius) if dist2 <= dist2_max: return 1 return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_bbox(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: # This is the sphere selection cdef int i cdef np.float64_t box_center, relcenter, closest, dist, edge, dist_max if left_edge[0] <= self.center[0] <= right_edge[0] and \ left_edge[1] <= self.center[1] <= right_edge[1] and \ left_edge[2] <= self.center[2] <= right_edge[2]: return 1 # http://www.gamedev.net/topic/335465-is-this-the-simplest-sphere-aabb-collision-test/ dist = 0 for i in range(3): box_center = (right_edge[i] + left_edge[i])/2.0 relcenter = self.periodic_difference(box_center, self.center[i], i) edge = right_edge[i] - left_edge[i] closest = relcenter - fclip(relcenter, -edge/2.0, edge/2.0) dist += closest * closest dist_max = self.mag[0] * self.mag[0] if dist <= dist_max: return 1 return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_bbox_edge(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: # This is the sphere selection cdef int i cdef np.float64_t box_center, relcenter, closest, farthest, cdist, fdist, edge if left_edge[0] <= self.center[0] <= right_edge[0] and \ left_edge[1] <= self.center[1] <= right_edge[1] and \ left_edge[2] <= self.center[2] <= right_edge[2]: fdist = 0 for i in range(3): edge = right_edge[i] - left_edge[i] box_center = (right_edge[i] + left_edge[i])/2.0 relcenter = self.periodic_difference( box_center, self.center[i], i) farthest = relcenter + fclip(relcenter, -edge/2.0, edge/2.0) fdist += farthest*farthest if fdist >= self.mag[0]**2: return 2 return 1 # http://www.gamedev.net/topic/335465-is-this-the-simplest-sphere-aabb-collision-test/ cdist = 0 fdist = 0 for i in range(3): box_center = (right_edge[i] + left_edge[i])/2.0 relcenter = 
self.periodic_difference(box_center, self.center[i], i) edge = right_edge[i] - left_edge[i] closest = relcenter - fclip(relcenter, -edge/2.0, edge/2.0) farthest = relcenter + fclip(relcenter, -edge/2.0, edge/2.0) cdist += closest * closest fdist += farthest * farthest if cdist > self.mag[0]**2: return 0 if fdist < self.mag[0]**2: return 1 else: return 2 def _hash_vals(self): return (("vec[0][0]", self.vec[0][0]), ("vec[0][1]", self.vec[0][1]), ("vec[0][2]", self.vec[0][2]), ("vec[1][0]", self.vec[1][0]), ("vec[1][1]", self.vec[1][1]), ("vec[1][2]", self.vec[1][2]), ("vec[2][0]", self.vec[2][0]), ("vec[2][1]", self.vec[2][1]), ("vec[2][2]", self.vec[2][2]), ("mag[0]", self.mag[0]), ("mag[1]", self.mag[1]), ("mag[2]", self.mag[2]), ("center[0]", self.center[0]), ("center[1]", self.center[1]), ("center[2]", self.center[2])) def _get_state_attnames(self): return ("mag", "center", "vec") ellipsoid_selector = EllipsoidSelector ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/_selection_routines/grid_selector.pxi0000644000175100001770000000242714714401662023504 0ustar00runnerdockercdef class GridSelector(SelectorObject): cdef object ind def __init__(self, dobj): self.ind = dobj.id - dobj._id_offset @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def select_grids(self, np.ndarray[np.float64_t, ndim=2] left_edges, np.ndarray[np.float64_t, ndim=2] right_edges, np.ndarray[np.int32_t, ndim=2] levels): cdef int ng = left_edges.shape[0] cdef np.ndarray[np.uint8_t, ndim=1] gridi = np.zeros(ng, dtype='uint8') gridi[self.ind] = 1 return gridi.astype("bool") @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def fill_mask_regular_grid(self, gobj): mask = np.ones(gobj.ActiveDimensions, dtype='bool') return mask, mask.size @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil: return 1 cdef int select_point(self, np.float64_t pos[3]) noexcept nogil: # we apparently don't check if the point actually lies in the grid.. 
return 1 def _hash_vals(self): return (self.ind,) grid_selector = GridSelector ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/_selection_routines/halo_particles_selector.pxi0000644000175100001770000000073214714401662025545 0ustar00runnerdockercdef class HaloParticlesSelector(SelectorObject): cdef public object base_source cdef SelectorObject base_selector cdef object pind cdef public np.int64_t halo_id def __init__(self, dobj): self.base_source = dobj.base_source self.base_selector = self.base_source.selector self.pind = dobj.particle_indices def _hash_vals(self): return ("halo_particles", self.halo_id) halo_particles_selector = HaloParticlesSelector ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/_selection_routines/indexed_octree_subset_selector.pxi0000644000175100001770000000523614714401662027126 0ustar00runnerdockercdef class IndexedOctreeSubsetSelector(SelectorObject): # This is a numpy array, which will be a bool of ndim 1 cdef np.uint64_t min_ind cdef np.uint64_t max_ind cdef public SelectorObject base_selector cdef int filter_bbox cdef np.float64_t DLE[3] cdef np.float64_t DRE[3] def __init__(self, dobj): self.min_ind = dobj.min_ind self.max_ind = dobj.max_ind self.base_selector = dobj.base_selector self.min_level = self.base_selector.min_level self.max_level = self.base_selector.max_level self.filter_bbox = 0 if getattr(dobj.ds, "filter_bbox", False): self.filter_bbox = 1 for i in range(3): self.DLE[i] = dobj.ds.domain_left_edge[i] self.DRE[i] = dobj.ds.domain_right_edge[i] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def select_grids(self, np.ndarray[np.float64_t, ndim=2] left_edges, np.ndarray[np.float64_t, ndim=2] right_edges, np.ndarray[np.int32_t, ndim=2] levels): raise RuntimeError @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil: return 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil: return 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_point(self, np.float64_t pos[3]) noexcept nogil: cdef int i if self.filter_bbox == 0: return 1 for i in range(3): if pos[i] < self.DLE[i] or pos[i] > self.DRE[i]: return 0 return 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_bbox(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: return self.base_selector.select_bbox(left_edge, right_edge) cdef int select_grid(self, np.float64_t left_edge[3], np.float64_t right_edge[3], np.int32_t level, Oct *o = NULL) noexcept nogil: # Because visitors now use select_grid, we should be explicitly # checking this. 
return self.base_selector.select_grid(left_edge, right_edge, level, o) def _hash_vals(self): return (hash(self.base_selector), self.min_ind, self.max_ind) indexed_octree_subset_selector = IndexedOctreeSubsetSelector ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/_selection_routines/octree_subset_selector.pxi0000644000175100001770000000450514714401662025424 0ustar00runnerdockercdef class OctreeSubsetSelector(SelectorObject): def __init__(self, dobj): self.base_selector = dobj.base_selector self.min_level = self.base_selector.min_level self.max_level = self.base_selector.max_level self.domain_id = dobj.domain_id self.overlap_cells = getattr(dobj.oct_handler, 'overlap_cells', 1) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def select_grids(self, np.ndarray[np.float64_t, ndim=2] left_edges, np.ndarray[np.float64_t, ndim=2] right_edges, np.ndarray[np.int32_t, ndim=2] levels): raise RuntimeError @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil: return 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil: return 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_point(self, np.float64_t pos[3]) noexcept nogil: return self.base_selector.select_point(pos) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_bbox(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: # return 1 return self.base_selector.select_bbox(left_edge, right_edge) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_grid(self, np.float64_t left_edge[3], np.float64_t right_edge[3], np.int32_t level, Oct *o = NULL) noexcept nogil: # Because visitors now use select_grid, we should be explicitly # checking this. 
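# domain_id == -1 disables domain filtering entirely. A return value of -1
# flags an oct that is selected geometrically but owned by another domain,
# letting callers distinguish it from a plain geometric rejection (0).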
cdef int res res = self.base_selector.select_grid(left_edge, right_edge, level, o) if self.domain_id == -1: return res elif res == 1 and o != NULL and o.domain != self.domain_id: return -1 return res def _hash_vals(self): return (hash(self.base_selector), self.domain_id) octree_subset_selector = OctreeSubsetSelector ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/_selection_routines/ortho_ray_selector.pxi0000644000175100001770000001052314714401662024561 0ustar00runnerdockercdef class OrthoRaySelector(SelectorObject): cdef public np.uint8_t px_ax cdef public np.uint8_t py_ax cdef public np.float64_t px cdef public np.float64_t py cdef public int axis def __init__(self, dobj): self.axis = dobj.axis self.px_ax = dobj.px_ax self.py_ax = dobj.py_ax self.px = dobj.px self.py = dobj.py @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def fill_mask_regular_grid(self, gobj): cdef np.ndarray[np.uint8_t, ndim=3] mask cdef np.ndarray[np.uint8_t, ndim=3, cast=True] child_mask cdef int i, j, k cdef int total = 0 cdef int this_level = 0 cdef int ind[3][2] cdef np.int32_t level = gobj.Level _ensure_code(gobj.LeftEdge) _ensure_code(gobj.RightEdge) _ensure_code(gobj.dds) if level < self.min_level or level > self.max_level: return None else: child_mask = gobj.child_mask mask = np.zeros(gobj.ActiveDimensions, dtype=np.uint8) if level == self.max_level: this_level = 1 ind[self.axis][0] = 0 ind[self.axis][1] = gobj.ActiveDimensions[self.axis] ind[self.px_ax][0] = \ ((self.px - (gobj.LeftEdge).to_ndarray()[self.px_ax]) / gobj.dds[self.px_ax]) ind[self.px_ax][1] = ind[self.px_ax][0] + 1 ind[self.py_ax][0] = \ ((self.py - (gobj.LeftEdge).to_ndarray()[self.py_ax]) / gobj.dds[self.py_ax]) ind[self.py_ax][1] = ind[self.py_ax][0] + 1 with nogil: for i in range(ind[0][0], ind[0][1]): for j in range(ind[1][0], ind[1][1]): for k in range(ind[2][0], ind[2][1]): if this_level == 1 or child_mask[i, j, k]: mask[i, j, k] = 1 total += 1 if total == 0: return None, 0 return mask.astype("bool"), total @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil: if self.px >= pos[self.px_ax] - 0.5*dds[self.px_ax] and \ self.px < pos[self.px_ax] + 0.5*dds[self.px_ax] and \ self.py >= pos[self.py_ax] - 0.5*dds[self.py_ax] and \ self.py < pos[self.py_ax] + 0.5*dds[self.py_ax]: return 1 return 0 cdef int select_point(self, np.float64_t pos[3]) noexcept nogil: # two 0-volume constructs don't intersect return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil: cdef np.float64_t dx = self.periodic_difference( pos[self.px_ax], self.px, self.px_ax) cdef np.float64_t dy = self.periodic_difference( pos[self.py_ax], self.py, self.py_ax) if dx*dx + dy*dy < radius*radius: return 1 return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_bbox(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: if left_edge[self.px_ax] <= self.px < right_edge[self.px_ax] and \ left_edge[self.py_ax] <= self.py < right_edge[self.py_ax] : return 1 return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_bbox_edge(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: if left_edge[self.px_ax] <= self.px < right_edge[self.px_ax] 
and \ left_edge[self.py_ax] <= self.py < right_edge[self.py_ax] : return 2 # a box of non-zero volume can't be inside a ray return 0 def _hash_vals(self): return (("px_ax", self.px_ax), ("py_ax", self.py_ax), ("px", self.px), ("py", self.py), ("axis", self.axis)) def _get_state_attnames(self): return ("px_ax", "py_ax", "px", "py", "axis") ortho_ray_selector = OrthoRaySelector ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/_selection_routines/point_selector.pxi0000644000175100001770000000546214714401662023712 0ustar00runnerdockercdef class PointSelector(SelectorObject): cdef public np.float64_t p[3] def __init__(self, dobj): cdef const np.float64_t[:] DLE = dobj.ds.domain_left_edge cdef const np.float64_t[:] DRE = dobj.ds.domain_right_edge for i in range(3): self.p[i] = _ensure_code(dobj.p[i]) # ensure the point lies in the domain if self.periodicity[i]: self.p[i] = np.fmod(self.p[i], self.domain_width[i]) if self.p[i] < DLE[i]: self.p[i] += self.domain_width[i] elif self.p[i] >= DRE[i]: self.p[i] -= self.domain_width[i] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil: if (pos[0] - 0.5*dds[0] <= self.p[0] < pos[0]+0.5*dds[0] and pos[1] - 0.5*dds[1] <= self.p[1] < pos[1]+0.5*dds[1] and pos[2] - 0.5*dds[2] <= self.p[2] < pos[2]+0.5*dds[2]): return 1 else: return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil: cdef int i cdef np.float64_t dist, dist2 = 0 for i in range(3): dist = self.periodic_difference(pos[i], self.p[i], i) dist2 += dist*dist if dist2 <= radius*radius: return 1 return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_bbox(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: # point definitely can only be in one cell if (left_edge[0] <= self.p[0] < right_edge[0] and left_edge[1] <= self.p[1] < right_edge[1] and left_edge[2] <= self.p[2] < right_edge[2]): return 1 else: return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_bbox_edge(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: # point definitely can only be in one cell # Return 2 in all cases to indicate that the point only overlaps # portion of box if (left_edge[0] <= self.p[0] <= right_edge[0] and left_edge[1] <= self.p[1] <= right_edge[1] and left_edge[2] <= self.p[2] <= right_edge[2]): return 2 else: return 0 def _hash_vals(self): return (("p[0]", self.p[0]), ("p[1]", self.p[1]), ("p[2]", self.p[2])) def _get_state_attnames(self): return ('p', ) point_selector = PointSelector ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/_selection_routines/ray_selector.pxi0000644000175100001770000002442114714401662023350 0ustar00runnerdockercdef struct IntegrationAccumulator: np.float64_t *t np.float64_t *dt np.uint8_t *child_mask int hits cdef void dt_sampler( VolumeContainer *vc, np.float64_t v_pos[3], np.float64_t v_dir[3], np.float64_t enter_t, np.float64_t exit_t, int index[3], void *data) noexcept nogil: cdef IntegrationAccumulator *am = data cdef int di = (index[0]*vc.dims[1]+index[1])*vc.dims[2]+index[2] if am.child_mask[di] == 0 or enter_t == exit_t: return am.hits += 1 am.t[di] = enter_t am.dt[di] = (exit_t - enter_t) 
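# ---------------------------------------------------------------------------
# Illustrative sketch (added commentary, not part of the upstream module):
# dt_sampler records, for every cell the ray crosses, the entry parameter t
# and the path fraction dt of the parametrized ray p(t) = p1 + t * vec with
# 0 <= t <= 1 (vec = end_point - start_point). A minimal NumPy version of the
# slab-style accept/reject test that walk_volume effectively performs for a
# single box:
#
#     import numpy as np
#
#     def ray_hits_box(p1, vec, left_edge, right_edge):
#         p1 = np.asarray(p1, dtype="float64")
#         vec = np.asarray(vec, dtype="float64")
#         with np.errstate(divide="ignore", invalid="ignore"):
#             t0 = (np.asarray(left_edge) - p1) / vec
#             t1 = (np.asarray(right_edge) - p1) / vec
#         t_enter = np.nanmax(np.minimum(t0, t1))  # last slab entry
#         t_exit = np.nanmin(np.maximum(t0, t1))   # first slab exit
#         # the segment 0 <= t <= 1 must overlap [t_enter, t_exit]
#         return bool(t_enter <= t_exit and t_exit >= 0.0 and t_enter <= 1.0)
#
# walk_volume itself performs a full cell-by-cell traversal (a 3D DDA walk),
# invoking the sampler with per-cell enter/exit parameters.
# ---------------------------------------------------------------------------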
cdef class RaySelector(SelectorObject): cdef public np.float64_t p1[3] cdef public np.float64_t p2[3] cdef public np.float64_t vec[3] def __init__(self, dobj): cdef int i _ensure_code(dobj.start_point) _ensure_code(dobj.end_point) for i in range(3): self.vec[i] = dobj.vec[i] self.p1[i] = dobj.start_point[i] self.p2[i] = dobj.end_point[i] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def fill_mask_regular_grid(self, gobj): cdef np.ndarray[np.float64_t, ndim=3] t, dt cdef np.ndarray[np.uint8_t, ndim=3, cast=True] child_mask cdef int i cdef int total = 0 cdef IntegrationAccumulator *ia ia = malloc(sizeof(IntegrationAccumulator)) cdef VolumeContainer vc mask = np.zeros(gobj.ActiveDimensions, dtype='uint8') t = np.zeros(gobj.ActiveDimensions, dtype="float64") dt = np.zeros(gobj.ActiveDimensions, dtype="float64") - 1 child_mask = gobj.child_mask ia.t = t.data ia.dt = dt.data ia.child_mask = child_mask.data ia.hits = 0 _ensure_code(gobj.LeftEdge) _ensure_code(gobj.RightEdge) _ensure_code(gobj.dds) for i in range(3): vc.left_edge[i] = gobj.LeftEdge[i] vc.right_edge[i] = gobj.RightEdge[i] vc.dds[i] = gobj.dds[i] vc.idds[i] = 1.0/gobj.dds[i] vc.dims[i] = dt.shape[i] walk_volume(&vc, self.p1, self.vec, dt_sampler, ia) for i in range(dt.shape[0]): for j in range(dt.shape[1]): for k in range(dt.shape[2]): if dt[i, j, k] >= 0: mask[i, j, k] = 1 total += 1 free(ia) if total == 0: return None, 0 return mask.astype("bool"), total @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def get_dt(self, gobj): cdef np.ndarray[np.float64_t, ndim=3] t, dt cdef np.ndarray[np.float64_t, ndim=1] tr, dtr cdef np.ndarray[np.uint8_t, ndim=3, cast=True] child_mask cdef int i, j, k, ni cdef IntegrationAccumulator *ia ia = malloc(sizeof(IntegrationAccumulator)) cdef VolumeContainer vc t = np.zeros(gobj.ActiveDimensions, dtype="float64") dt = np.zeros(gobj.ActiveDimensions, dtype="float64") - 1 child_mask = gobj.child_mask ia.t = t.data ia.dt = dt.data ia.child_mask = child_mask.data ia.hits = 0 _ensure_code(gobj.LeftEdge) _ensure_code(gobj.RightEdge) _ensure_code(gobj.dds) for i in range(3): vc.left_edge[i] = gobj.LeftEdge[i] vc.right_edge[i] = gobj.RightEdge[i] vc.dds[i] = gobj.dds[i] vc.idds[i] = 1.0/gobj.dds[i] vc.dims[i] = dt.shape[i] walk_volume(&vc, self.p1, self.vec, dt_sampler, ia) tr = np.zeros(ia.hits, dtype="float64") dtr = np.zeros(ia.hits, dtype="float64") ni = 0 for i in range(dt.shape[0]): for j in range(dt.shape[1]): for k in range(dt.shape[2]): if dt[i, j, k] >= 0: tr[ni] = t[i, j, k] dtr[ni] = dt[i, j, k] ni += 1 if not (ni == ia.hits): print(ni, ia.hits) free(ia) return dtr, tr @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def get_dt_mesh(self, mesh, nz, int offset): cdef np.ndarray[np.float64_t, ndim=3] t, dt cdef np.ndarray[np.float64_t, ndim=1] tr, dtr cdef np.ndarray[np.uint8_t, ndim=3, cast=True] child_mask cdef int i, j, k, ni cdef np.float64_t LE[3] cdef np.float64_t RE[3] cdef np.float64_t pos cdef IntegrationAccumulator *ia ia = malloc(sizeof(IntegrationAccumulator)) cdef np.ndarray[np.float64_t, ndim=2] coords cdef np.ndarray[np.int64_t, ndim=2] indices indices = mesh.connectivity_indices coords = _ensure_code(mesh.connectivity_coords) cdef int nc = indices.shape[0] cdef int nv = indices.shape[1] if nv != 8: raise NotImplementedError cdef VolumeContainer vc child_mask = np.ones((1,1,1), dtype="uint8") t = np.zeros((1,1,1), dtype="float64") dt = np.zeros((1,1,1), dtype="float64") - 1 tr = np.zeros(nz, 
dtype="float64") dtr = np.zeros(nz, dtype="float64") ia.t = t.data ia.dt = dt.data ia.child_mask = child_mask.data ia.hits = 0 ni = 0 for i in range(nc): for j in range(3): LE[j] = 1e60 RE[j] = -1e60 for j in range(nv): for k in range(3): pos = coords[indices[i, j] - offset, k] LE[k] = fmin(pos, LE[k]) RE[k] = fmax(pos, RE[k]) for j in range(3): vc.left_edge[j] = LE[j] vc.right_edge[j] = RE[j] vc.dds[j] = RE[j] - LE[j] vc.idds[j] = 1.0/vc.dds[j] vc.dims[j] = 1 t[0,0,0] = dt[0,0,0] = -1 walk_volume(&vc, self.p1, self.vec, dt_sampler, ia) if dt[0,0,0] >= 0: tr[ni] = t[0,0,0] dtr[ni] = dt[0,0,0] ni += 1 free(ia) return dtr, tr cdef int select_point(self, np.float64_t pos[3]) noexcept nogil: # two 0-volume constructs don't intersect return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil: cdef int i cdef np.float64_t length = norm(self.vec) cdef np.float64_t r[3] for i in range(3): r[i] = pos[i] - self.p1[i] # the projected position of the sphere along the ray cdef np.float64_t l = dot(r, self.vec) / length # the square of the impact parameter cdef np.float64_t b_sqr = dot(r, r) - l*l # only accept spheres with radii larger than the impact parameter and # with a projected position along the ray no more than a radius away # from the ray if -radius < l and l < (length+radius) and b_sqr < radius*radius: return 1 return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_bbox(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: cdef int i, rv cdef VolumeContainer vc cdef IntegrationAccumulator *ia ia = malloc(sizeof(IntegrationAccumulator)) cdef np.float64_t dt[1] cdef np.float64_t t[1] cdef np.uint8_t cm[1] for i in range(3): vc.left_edge[i] = left_edge[i] vc.right_edge[i] = right_edge[i] vc.dds[i] = right_edge[i] - left_edge[i] vc.idds[i] = 1.0/vc.dds[i] vc.dims[i] = 1 t[0] = dt[0] = 0.0 cm[0] = 1 ia.t = t ia.dt = dt ia.child_mask = cm ia.hits = 0 walk_volume(&vc, self.p1, self.vec, dt_sampler, ia) rv = 0 if ia.hits > 0: rv = 1 free(ia) return rv @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_bbox_edge(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: cdef int i cdef np.uint8_t cm = 1 cdef VolumeContainer vc cdef IntegrationAccumulator ia cdef np.float64_t dt, t for i in range(3): vc.left_edge[i] = left_edge[i] vc.right_edge[i] = right_edge[i] vc.dds[i] = right_edge[i] - left_edge[i] vc.idds[i] = 1.0/vc.dds[i] vc.dims[i] = 1 t = dt = 0.0 ia.t = &t ia.dt = &dt ia.child_mask = &cm ia.hits = 0 walk_volume(&vc, self.p1, self.vec, dt_sampler, &ia) if ia.hits > 0: return 2 # a box of non-zero volume cannot be inside a ray return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil: # This is terribly inefficient for Octrees. For grids, it will never # get called. 
cdef int i cdef np.float64_t left_edge[3] cdef np.float64_t right_edge[3] for i in range(3): left_edge[i] = pos[i] - dds[i]/2.0 right_edge[i] = pos[i] + dds[i]/2.0 return self.select_bbox(left_edge, right_edge) def _hash_vals(self): return (("p1[0]", self.p1[0]), ("p1[1]", self.p1[1]), ("p1[2]", self.p1[2]), ("p2[0]", self.p2[0]), ("p2[1]", self.p2[1]), ("p2[2]", self.p2[2]), ("vec[0]", self.vec[0]), ("vec[1]", self.vec[1]), ("vec[2]", self.vec[2])) def _get_state_attnames(self): return ("p1", "p2", "vec") ray_selector = RaySelector
# File: yt-4.4.0/yt/geometry/_selection_routines/region_selector.pxi
cdef class RegionSelector(SelectorObject): cdef public np.float64_t left_edge[3] cdef public np.float64_t right_edge[3] cdef public np.float64_t right_edge_shift[3] cdef public bint is_all_data cdef public bint loose_selection cdef public bint check_period[3] @cython.boundscheck(False) @cython.wraparound(False) def __init__(self, dobj): cdef int i # We are modifying dobj.left_edge and dobj.right_edge, so here we will # do an in-place conversion of those arrays. cdef np.float64_t[:] RE = dobj.right_edge.copy() cdef np.float64_t[:] LE = dobj.left_edge.copy() cdef const np.float64_t[:] DW = dobj.ds.domain_width cdef const np.float64_t[:] DLE = dobj.ds.domain_left_edge cdef const np.float64_t[:] DRE = dobj.ds.domain_right_edge le_all = (np.array(LE) == dobj.ds.domain_left_edge).all() re_all = (np.array(RE) == dobj.ds.domain_right_edge).all() # If we have a bounding box, then we should *not* revert to all data domain_override = getattr(dobj.ds, '_domain_override', False) if le_all and re_all and not domain_override: self.is_all_data = True else: self.is_all_data = False cdef np.float64_t region_width[3] cdef bint p[3] # This is for when we want to include zones that overlap and whose # centers are not strictly included. self.loose_selection = getattr(dobj, "loose_selection", False) for i in range(3): self.check_period[i] = False region_width[i] = RE[i] - LE[i] p[i] = dobj.ds.periodicity[i] if region_width[i] <= 0: raise RuntimeError( "Region right edge[%s] < left edge: width = %s" % ( i, region_width[i])) for i in range(3): if p[i]: # First, we check whether any criterion requires a period check, # without any adjustments. This is for short-circuiting the # short-circuit of the loop down below in mask filling. if LE[i] < DLE[i] or LE[i] > DRE[i] or RE[i] > DRE[i]: self.check_period[i] = True # shift so left_edge is guaranteed to be in the domain if LE[i] < DLE[i]: LE[i] += DW[i] RE[i] += DW[i] elif LE[i] > DRE[i]: LE[i] -= DW[i] RE[i] -= DW[i] else: if LE[i] < DLE[i] or RE[i] > DRE[i]: raise RuntimeError( "yt attempted to read outside the boundaries of " "a non-periodic domain along dimension %s.\n" "Region left edge = %s, Region right edge = %s\n" "Dataset left edge = %s, Dataset right edge = %s\n\n" "This commonly happens when trying to compute ghost cells " "up to the domain boundary.
Two possible solutions are to " "select a smaller region that does not border domain edge " "(see https://yt-project.org/docs/analyzing/objects.html?highlight=region)\n" "or override the periodicity with\n" "ds.force_periodicity()" % \ (i, dobj.left_edge[i], dobj.right_edge[i], dobj.ds.domain_left_edge[i], dobj.ds.domain_right_edge[i]) ) # Already ensured in code self.left_edge[i] = LE[i] self.right_edge[i] = RE[i] self.right_edge_shift[i] = RE[i] - DW[i] if not self.periodicity[i]: self.right_edge_shift[i] = -np.inf @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_bbox(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: cdef int i for i in range(3): if (right_edge[i] < self.left_edge[i] and \ left_edge[i] >= self.right_edge_shift[i]) or \ left_edge[i] >= self.right_edge[i]: return 0 return 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_bbox_edge(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: cdef int i for i in range(3): if (right_edge[i] < self.left_edge[i] and \ left_edge[i] >= self.right_edge_shift[i]) or \ left_edge[i] >= self.right_edge[i]: return 0 for i in range(3): if left_edge[i] < self.right_edge_shift[i]: if right_edge[i] >= self.right_edge_shift[i]: return 2 elif left_edge[i] < self.left_edge[i] or \ right_edge[i] >= self.right_edge[i]: return 2 return 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil: cdef np.float64_t left_edge[3] cdef np.float64_t right_edge[3] cdef int i if self.loose_selection: for i in range(3): left_edge[i] = pos[i] - dds[i]*0.5 right_edge[i] = pos[i] + dds[i]*0.5 return self.select_bbox(left_edge, right_edge) return self.select_point(pos) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_point(self, np.float64_t pos[3]) noexcept nogil: cdef int i for i in range(3): if (self.right_edge_shift[i] <= pos[i] < self.left_edge[i]) or \ pos[i] >= self.right_edge[i]: return 0 return 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil: # adapted from http://stackoverflow.com/a/4579192/1382869 cdef int i cdef np.float64_t p cdef np.float64_t r2 = radius**2 cdef np.float64_t dmin = 0 cdef np.float64_t d = 0 for i in range(3): if (pos[i]+radius < self.left_edge[i] and \ pos[i]-radius >= self.right_edge_shift[i]): d = self.periodic_difference(pos[i], self.left_edge[i], i) elif pos[i]-radius > self.right_edge[i]: d = self.periodic_difference(pos[i], self.right_edge[i], i) dmin += d*d return int(dmin <= r2) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int fill_mask_selector_regular_grid(self, np.float64_t left_edge[3], np.float64_t right_edge[3], np.float64_t dds[3], int dim[3], np.ndarray[np.uint8_t, ndim=3, cast=True] child_mask, np.ndarray[np.uint8_t, ndim=3] mask, int level): cdef int i, j, k cdef int total = 0, this_level = 0 cdef np.float64_t pos[3] if level < self.min_level or level > self.max_level: return 0 if level == self.max_level: this_level = 1 cdef np.int64_t si[3] cdef np.int64_t ei[3] for i in range(3): if not self.check_period[i]: si[i] = ((self.left_edge[i] - left_edge[i])/dds[i]) ei[i] = ((self.right_edge[i] - left_edge[i])/dds[i]) si[i] = iclip(si[i] - 1, 0, dim[i]) ei[i] = iclip(ei[i] + 1, 0, dim[i]) 
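# Index-window sketch (illustrative numbers): with left_edge = 0.0 and
# dds = 0.25, a selector spanning [0.3, 0.6) gives raw indices
# si = int((0.3 - 0.0)/0.25) = 1 and ei = int((0.6 - 0.0)/0.25) = 2;
# widening by one cell and clipping with iclip to [0, dim] guards
# against round-off at the selector edges, so cells 0, 1 and 2 get
# scanned by the loop below.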
else: si[i] = 0 ei[i] = dim[i] with nogil: for i in range(si[0], ei[0]): pos[0] = left_edge[0] + (i + 0.5) * dds[0] for j in range(si[1], ei[1]): pos[1] = left_edge[1] + (j + 0.5) * dds[1] for k in range(si[2], ei[2]): pos[2] = left_edge[2] + (k + 0.5) * dds[2] if child_mask[i, j, k] == 1 or this_level == 1: mask[i, j, k] = self.select_cell(pos, dds) total += mask[i, j, k] return total def _hash_vals(self): return (("left_edge[0]", self.left_edge[0]), ("left_edge[1]", self.left_edge[1]), ("left_edge[2]", self.left_edge[2]), ("right_edge[0]", self.right_edge[0]), ("right_edge[1]", self.right_edge[1]), ("right_edge[2]", self.right_edge[2])) def _get_state_attnames(self): return ('left_edge', 'right_edge', 'right_edge_shift', 'check_period', 'is_all_data', 'loose_selection') region_selector = RegionSelector ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/_selection_routines/selector_object.pxi0000644000175100001770000007226514714401662024034 0ustar00runnerdockerfrom .oct_visitors cimport CountTotalCells, CountTotalOcts cdef class SelectorObject: def __cinit__(self, dobj, *args): self._hash_initialized = 0 cdef const np.float64_t [:] DLE cdef const np.float64_t [:] DRE min_level = getattr(dobj, "min_level", None) max_level = getattr(dobj, "max_level", None) if min_level is None: min_level = 0 if max_level is None: max_level = 99 self.min_level = min_level self.max_level = max_level self.overlap_cells = 0 ds = getattr(dobj, 'ds', None) if ds is None: for i in range(3): # NOTE that this is not universal. self.domain_width[i] = 1.0 self.periodicity[i] = False else: DLE = ds.domain_left_edge DRE = ds.domain_right_edge for i in range(3): self.domain_width[i] = DRE[i] - DLE[i] self.domain_center[i] = DLE[i] + 0.5 * self.domain_width[i] self.periodicity[i] = ds.periodicity[i] def get_periodicity(self): cdef int i cdef np.ndarray[np.uint8_t, ndim=1] periodicity periodicity = np.zeros(3, dtype='uint8') for i in range(3): periodicity[i] = self.periodicity[i] return periodicity @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def select_grids(self, np.ndarray[np.float64_t, ndim=2] left_edges, np.ndarray[np.float64_t, ndim=2] right_edges, np.ndarray[np.int32_t, ndim=2] levels): cdef int i, n cdef int ng = left_edges.shape[0] cdef np.ndarray[np.uint8_t, ndim=1] gridi = np.zeros(ng, dtype='uint8') cdef np.float64_t LE[3] cdef np.float64_t RE[3] _ensure_code(left_edges) _ensure_code(right_edges) with nogil: for n in range(ng): # Call our selector function # Check if the sphere is inside the grid for i in range(3): LE[i] = left_edges[n, i] RE[i] = right_edges[n, i] gridi[n] = self.select_grid(LE, RE, levels[n, 0]) return gridi.astype("bool") def count_octs(self, OctreeContainer octree, int domain_id = -1): cdef CountTotalOcts visitor visitor = CountTotalOcts(octree, domain_id) octree.visit_all_octs(self, visitor) return visitor.index def count_oct_cells(self, OctreeContainer octree, int domain_id = -1): cdef CountTotalCells visitor visitor = CountTotalCells(octree, domain_id) octree.visit_all_octs(self, visitor) return visitor.index @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void recursively_visit_octs(self, Oct *root, np.float64_t pos[3], np.float64_t dds[3], int level, OctVisitor visitor, int visit_covered = 0): # visit_covered tells us whether this octree supports partial # refinement. 
If it does, we need to handle this specially -- first # we visit *this* oct, then we make a second pass to check any child # octs. cdef np.float64_t LE[3] cdef np.float64_t RE[3] cdef np.float64_t sdds[3] cdef np.float64_t spos[3] cdef int i, j, k, res cdef Oct *ch # Remember that pos is the *center* of the oct, and dds is the oct # width. So to get to the edges, we add/subtract half of dds. for i in range(3): # sdds is the cell width sdds[i] = dds[i]/2.0 LE[i] = pos[i] - dds[i]/2.0 RE[i] = pos[i] + dds[i]/2.0 #print(LE[0], RE[0], LE[1], RE[1], LE[2], RE[2]) res = self.select_grid(LE, RE, level, root) if res == 1 and visitor.domain > 0 and root.domain != visitor.domain: res = -1 cdef int increment = 1 cdef int next_level, this_level # next_level: an int that says whether or not we can progress to children # this_level: an int that says whether or not we can select from this # level next_level = this_level = 1 if res == -1: # This happens when we do domain selection but the oct has # children. This would allow an oct to pass to its children but # not get accessed itself. next_level = 1 this_level = 0 elif level == self.max_level: next_level = 0 elif level < self.min_level or level > self.max_level: this_level = 0 if res == 0 and this_level == 1: return # Now we visit all our children. We subtract off sdds for the first # pass because we center it on the first cell. cdef int iter = 1 - visit_covered # 2 if 1, 1 if 0. # So the order here goes like so. If visit_covered is 1, which usually # comes from "partial_coverage", we visit the components of a zone even # if it has children. But in general, the first iteration through, we # visit each cell. This means that only if visit_covered is true do we # visit potentially covered cells. The next time through, we visit # child cells. while iter < 2: spos[0] = pos[0] - sdds[0]/2.0 for i in range(2): spos[1] = pos[1] - sdds[1]/2.0 for j in range(2): spos[2] = pos[2] - sdds[2]/2.0 for k in range(2): ch = NULL # We only supply a child if we are actually going to # look at the next level. if root.children != NULL and next_level == 1: ch = root.children[cind(i, j, k)] if iter == 1 and next_level == 1 and ch != NULL: # Note that visitor.pos is always going to be the # position of the Oct -- it is *not* always going # to be the same as the position of the cell under # investigation. visitor.pos[0] = (visitor.pos[0] << 1) + i visitor.pos[1] = (visitor.pos[1] << 1) + j visitor.pos[2] = (visitor.pos[2] << 1) + k visitor.level += 1 self.recursively_visit_octs( ch, spos, sdds, level + 1, visitor, visit_covered) visitor.pos[0] = (visitor.pos[0] >> 1) visitor.pos[1] = (visitor.pos[1] >> 1) visitor.pos[2] = (visitor.pos[2] >> 1) visitor.level -= 1 elif this_level == 1 and visitor.nz > 1: visitor.global_index += increment increment = 0 self.visit_oct_cells(root, ch, spos, sdds, visitor, i, j, k) elif this_level == 1 and increment == 1: visitor.global_index += increment increment = 0 visitor.ind[0] = visitor.ind[1] = visitor.ind[2] = 0 visitor.visit(root, 1) spos[2] += sdds[2] spos[1] += sdds[1] spos[0] += sdds[0] this_level = 0 # We turn this off for the second pass. iter += 1 cdef void visit_oct_cells(self, Oct *root, Oct *ch, np.float64_t spos[3], np.float64_t sdds[3], OctVisitor visitor, int i, int j, int k): # We can short-circuit the whole process if data.nz == 2. # This saves us some funny-business. 
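# ------------------------------------------------------------------
# Sketch of the visitor.pos bookkeeping in recursively_visit_octs
# above (illustrative): descending one level doubles the per-axis
# integer index and adds the child offset, so after L levels the
# position is an index on a 2**L grid along each axis.
#
#   pos = 0
#   for child_offset in (1, 0, 1):    # a root-to-leaf path on one axis
#       pos = (pos << 1) + child_offset
#   assert pos == 0b101               # cell 5 of 8 on a 2**3 grid
# ------------------------------------------------------------------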
cdef int selected if visitor.nz == 2: selected = self.select_cell(spos, sdds) if ch != NULL: selected *= self.overlap_cells # visitor.ind refers to the cell, not to the oct. visitor.ind[0] = i visitor.ind[1] = j visitor.ind[2] = k visitor.visit(root, selected) return # Okay, now that we've got that out of the way, we have to do some # other checks here. In this case, spos[] is the position of the # center of a *possible* oct child, which means it is the center of a # cluster of cells. That cluster might have 1, 8, 64, ... cells in it. # But, we can figure it out by calculating the cell dds. cdef np.float64_t dds[3] cdef np.float64_t pos[3] cdef int ci, cj, ck cdef int nr = (visitor.nz >> 1) for ci in range(3): dds[ci] = sdds[ci] / nr # Boot strap at the first index. pos[0] = (spos[0] - sdds[0]/2.0) + dds[0] * 0.5 for ci in range(nr): pos[1] = (spos[1] - sdds[1]/2.0) + dds[1] * 0.5 for cj in range(nr): pos[2] = (spos[2] - sdds[2]/2.0) + dds[2] * 0.5 for ck in range(nr): selected = self.select_cell(pos, dds) if ch != NULL: selected *= self.overlap_cells visitor.ind[0] = ci + i * nr visitor.ind[1] = cj + j * nr visitor.ind[2] = ck + k * nr visitor.visit(root, selected) pos[2] += dds[2] pos[1] += dds[1] pos[0] += dds[0] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_grid(self, np.float64_t left_edge[3], np.float64_t right_edge[3], np.int32_t level, Oct *o = NULL) noexcept nogil: if level < self.min_level or level > self.max_level: return 0 return self.select_bbox(left_edge, right_edge) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_grid_edge(self, np.float64_t left_edge[3], np.float64_t right_edge[3], np.int32_t level, Oct *o = NULL) noexcept nogil: if level < self.min_level or level > self.max_level: return 0 return self.select_bbox_edge(left_edge, right_edge) cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil: return 0 cdef int select_point(self, np.float64_t pos[3]) noexcept nogil: return 0 cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil: return 0 cdef int select_bbox(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: """ Returns: 0: If the selector does not touch the bounding box. 1: If the selector overlaps the bounding box anywhere. """ return 0 cdef int select_bbox_edge(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: """ Returns: 0: If the selector does not touch the bounding box. 1: If the selector contains the entire bounding box. 2: If the selector contains part of the bounding box. """ return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef np.float64_t periodic_difference(self, np.float64_t x1, np.float64_t x2, int d) noexcept nogil: # domain_width is already in code units, and we assume what is fed in # is too. 
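# Equivalent NumPy sketch (hypothetical standalone helper) of the
# minimum-image branch below; for a periodic axis of width W it maps
# x1 - x2 into [-W/2, W/2] (up to tie-breaking exactly at W/2):
#
#   import numpy as np
#
#   def periodic_difference(x1, x2, W):  # hypothetical helper
#       rel = x1 - x2
#       return rel - W * np.round(rel / W)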
cdef np.float64_t rel = x1 - x2 if self.periodicity[d]: if rel > self.domain_width[d] * 0.5: rel -= self.domain_width[d] elif rel < -self.domain_width[d] * 0.5: rel += self.domain_width[d] return rel @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def fill_mesh_mask(self, mesh): cdef np.float64_t pos[3] cdef np.ndarray[np.int64_t, ndim=2] indices cdef np.ndarray[np.float64_t, ndim=2] coords cdef np.ndarray[np.uint8_t, ndim=1] mask cdef int i, j, k, selected cdef int npoints, nv = mesh._connectivity_length cdef int total = 0 cdef int offset = mesh._index_offset coords = _ensure_code(mesh.connectivity_coords) indices = mesh.connectivity_indices npoints = indices.shape[0] mask = np.zeros(npoints, dtype='uint8') for i in range(npoints): selected = 0 for j in range(nv): for k in range(3): pos[k] = coords[indices[i, j] - offset, k] selected = self.select_point(pos) if selected == 1: break total += selected mask[i] = selected if total == 0: return None return mask.astype("bool") @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def fill_mesh_cell_mask(self, mesh): cdef np.float64_t pos cdef np.float64_t le[3] cdef np.float64_t re[3] cdef np.ndarray[np.int64_t, ndim=2] indices cdef np.ndarray[np.float64_t, ndim=2] coords cdef np.ndarray[np.uint8_t, ndim=1] mask cdef int i, j, k, selected cdef int npoints, nv = mesh._connectivity_length cdef int ndim = mesh.connectivity_coords.shape[1] cdef int total = 0 cdef int offset = mesh._index_offset coords = _ensure_code(mesh.connectivity_coords) indices = mesh.connectivity_indices npoints = indices.shape[0] mask = np.zeros(npoints, dtype='uint8') for i in range(npoints): selected = 0 for k in range(3): le[k] = 1e60 re[k] = -1e60 for j in range(nv): for k in range(ndim): pos = coords[indices[i, j] - offset, k] le[k] = fmin(pos, le[k]) re[k] = fmax(pos, re[k]) for k in range(2, ndim - 1, -1): le[k] = self.domain_center[k] re[k] = self.domain_center[k] selected = self.select_bbox(le, re) total += selected mask[i] = selected if total == 0: return None return mask.astype("bool") @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def fill_mask_regular_grid(self, gobj): cdef np.ndarray[np.uint8_t, ndim=3, cast=True] child_mask child_mask = gobj.child_mask cdef np.ndarray[np.uint8_t, ndim=3] mask cdef int dim[3] _ensure_code(gobj.dds) _ensure_code(gobj.LeftEdge) _ensure_code(gobj.RightEdge) cdef np.ndarray[np.float64_t, ndim=1] odds = gobj.dds.d cdef np.ndarray[np.float64_t, ndim=1] oleft_edge = gobj.LeftEdge.d cdef np.ndarray[np.float64_t, ndim=1] oright_edge = gobj.RightEdge.d cdef int i cdef np.float64_t dds[3] cdef np.float64_t left_edge[3] cdef np.float64_t right_edge[3] for i in range(3): dds[i] = odds[i] dim[i] = gobj.ActiveDimensions[i] left_edge[i] = oleft_edge[i] right_edge[i] = oright_edge[i] mask = np.zeros(gobj.ActiveDimensions, dtype='uint8') # Check for the level bounds cdef np.int32_t level = gobj.Level # We set this to 1 if we ignore child_mask cdef int total total = self.fill_mask_selector_regular_grid(left_edge, right_edge, dds, dim, child_mask, mask, level) if total == 0: return None, 0 return mask.astype("bool"), total @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int fill_mask_selector_regular_grid(self, np.float64_t left_edge[3], np.float64_t right_edge[3], np.float64_t dds[3], int dim[3], np.ndarray[np.uint8_t, ndim=3, cast=True] child_mask, np.ndarray[np.uint8_t, ndim=3] mask, int level): cdef int i, j, k cdef int total = 
0, this_level = 0 cdef np.float64_t pos[3] if level < self.min_level or level > self.max_level: return 0 if level == self.max_level: this_level = 1 with nogil: for i in range(dim[0]): pos[0] = left_edge[0] + (i + 0.5) * dds[0] for j in range(dim[1]): pos[1] = left_edge[1] + (j + 0.5) * dds[1] for k in range(dim[2]): pos[2] = left_edge[2] + (k + 0.5) * dds[2] if child_mask[i, j, k] == 1 or this_level == 1: mask[i, j, k] = self.select_cell(pos, dds) total += mask[i, j, k] return total @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def fill_mask(self, gobj): # This is for an irregular grid. We make no assumptions about the # shape of the dds values, which are supplied as differing-length # arrays. cdef np.ndarray[np.uint8_t, ndim=3, cast=True] child_mask child_mask = gobj.child_mask cdef np.ndarray[np.uint8_t, ndim=3] mask cdef int dim[3] _ensure_code(gobj.cell_widths[0]) _ensure_code(gobj.cell_widths[1]) _ensure_code(gobj.cell_widths[2]) _ensure_code(gobj.LeftEdge) _ensure_code(gobj.RightEdge) cdef np.ndarray[np.float64_t, ndim=1] oleft_edge = gobj.LeftEdge.d cdef np.ndarray[np.float64_t, ndim=1] oright_edge = gobj.RightEdge.d cdef np.ndarray[np.float64_t, ndim=1] ocell_width cdef int i, n = 0 cdef np.float64_t left_edge[3] cdef np.float64_t right_edge[3] cdef np.float64_t **dds dds = malloc(sizeof(np.float64_t*)*3) for i in range(3): dim[i] = gobj.ActiveDimensions[i] left_edge[i] = oleft_edge[i] right_edge[i] = oright_edge[i] dds[i] = malloc(sizeof(np.float64_t) * dim[i]) ocell_width = gobj.cell_widths[i] for j in range(dim[i]): dds[i][j] = ocell_width[j] mask = np.zeros(gobj.ActiveDimensions, dtype='uint8') # Check for the level bounds cdef np.int32_t level = gobj.Level # We set this to 1 if we ignore child_mask cdef int total total = self.fill_mask_selector(left_edge, right_edge, dds, dim, child_mask, mask, level) for i in range(3): free(dds[i]) free(dds) if total == 0: return None return mask.astype("bool") @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int fill_mask_selector(self, np.float64_t left_edge[3], np.float64_t right_edge[3], np.float64_t **dds, int dim[3], np.ndarray[np.uint8_t, ndim=3, cast=True] child_mask, np.ndarray[np.uint8_t, ndim=3] mask, int level): cdef int i, j, k cdef int total = 0, this_level = 0 cdef np.float64_t pos[3] if level < self.min_level or level > self.max_level: return 0 if level == self.max_level: this_level = 1 cdef np.float64_t *offsets[3] cdef np.float64_t c = 0.0 cdef np.float64_t tdds[3] for i in range(3): offsets[i] = malloc(dim[i] * sizeof(np.float64_t)) c = left_edge[i] for j in range(dim[i]): offsets[i][j] = c c += dds[i][j] with nogil: # We need to keep in mind that it is entirely possible to # accumulate round-off error by doing lots of additions, etc. # That's one of the reasons we construct (ahead of time) the edge # array. I mean, we don't necessarily *have* to do that, but it # seems OK. 
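# NumPy sketch of the offsets[] table built above for one axis
# (illustrative values): the left edge of cell j is the grid's left
# edge plus an exclusive cumulative sum of the preceding cell widths:
#
#   import numpy as np
#
#   widths = np.array([0.1, 0.2, 0.4])   # per-cell dx on one axis
#   left_edge = 0.0
#   offsets = left_edge + np.concatenate(([0.0], np.cumsum(widths[:-1])))
#   centers = offsets + 0.5 * widths     # matches pos[] in the loop below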
for i in range(dim[0]): tdds[0] = dds[0][i] pos[0] = offsets[0][i] + 0.5 * tdds[0] for j in range(dim[1]): tdds[1] = dds[1][j] pos[1] = offsets[1][j] + 0.5 * tdds[1] for k in range(dim[2]): tdds[2] = dds[2][k] pos[2] = offsets[2][k] + 0.5 * tdds[2] if child_mask[i, j, k] == 1 or this_level == 1: mask[i, j, k] = self.select_cell(pos, tdds) total += mask[i, j, k] free(offsets[0]) free(offsets[1]) free(offsets[2]) return total @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void visit_grid_cells(self, GridVisitorData *data, grid_visitor_function *func, np.uint8_t *cached_mask = NULL): # This function accepts a grid visitor function, the data that # corresponds to the current grid being examined (the most important # aspect of which is the .grid attribute, along with index values and # void* pointers to arrays) and a possibly-pre-generated cached mask. # Each cell is visited with the grid visitor function. cdef np.float64_t left_edge[3] cdef np.float64_t right_edge[3] cdef np.float64_t dds[3] cdef int dim[3] cdef int this_level = 0, level, i cdef np.float64_t pos[3] level = data.grid.level if level < self.min_level or level > self.max_level: return if level == self.max_level: this_level = 1 cdef np.uint8_t child_masked, selected for i in range(3): left_edge[i] = data.grid.left_edge[i] right_edge[i] = data.grid.right_edge[i] dds[i] = (right_edge[i] - left_edge[i])/data.grid.dims[i] dim[i] = data.grid.dims[i] with nogil: pos[0] = left_edge[0] + dds[0] * 0.5 data.pos[0] = 0 for i in range(dim[0]): pos[1] = left_edge[1] + dds[1] * 0.5 data.pos[1] = 0 for j in range(dim[1]): pos[2] = left_edge[2] + dds[2] * 0.5 data.pos[2] = 0 for k in range(dim[2]): # We short-circuit if we have a cache; if we don't, we # only set selected to true if it's *not* masked by a # child and it *is* selected. 
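# ba_get_value below reads a single bit out of a packed bitmask; a
# plain-Python sketch of such a lookup (the exact bit ordering here is
# illustrative, not necessarily the one yt uses):
#
#   def ba_get_value(buf, i):
#       return (buf[i >> 3] >> (i & 7)) & 1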
if cached_mask != NULL: selected = ba_get_value(cached_mask, data.global_index) else: if this_level == 1: child_masked = 0 else: child_masked = check_child_masked(data) if child_masked == 0: selected = self.select_cell(pos, dds) else: selected = 0 func(data, selected) data.global_index += 1 pos[2] += dds[2] data.pos[2] += 1 pos[1] += dds[1] data.pos[1] += 1 pos[0] += dds[0] data.pos[0] += 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def count_points(self, np.ndarray[cython.floating, ndim=1] x, np.ndarray[cython.floating, ndim=1] y, np.ndarray[cython.floating, ndim=1] z, radii): cdef int count = 0 cdef int i cdef np.float64_t pos[3] cdef np.float64_t radius cdef np.float64_t[:] _radii if radii is not None: _radii = np.atleast_1d(np.array(radii, dtype='float64')) else: _radii = np.array([0.0], dtype='float64') _ensure_code(x) _ensure_code(y) _ensure_code(z) with nogil: for i in range(x.shape[0]): pos[0] = x[i] pos[1] = y[i] pos[2] = z[i] if _radii.shape[0] == 1: radius = _radii[0] else: radius = _radii[i] if radius == 0: count += self.select_point(pos) else: count += self.select_sphere(pos, radius) return count @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def select_points(self, np.ndarray[cython.floating, ndim=1] x, np.ndarray[cython.floating, ndim=1] y, np.ndarray[cython.floating, ndim=1] z, radii): cdef int count = 0 cdef int i cdef np.float64_t pos[3] cdef np.float64_t radius cdef np.ndarray[np.uint8_t, ndim=1] mask cdef np.float64_t[:] _radii if radii is not None: _radii = np.atleast_1d(np.array(radii, dtype='float64')) else: _radii = np.array([0.0], dtype='float64') mask = np.empty(x.shape[0], dtype='uint8') _ensure_code(x) _ensure_code(y) _ensure_code(z) # this is to allow selectors to optimize the point vs # 0-radius sphere case. 
These two may have different # effects for 0-volume selectors, however (collision # between a ray and a point is null, while collision between a ray and a # sphere is allowed) with nogil: for i in range(x.shape[0]): pos[0] = x[i] pos[1] = y[i] pos[2] = z[i] if _radii.shape[0] == 1: radius = 0 else: radius = _radii[i] if radius == 0: mask[i] = self.select_point(pos) else: mask[i] = self.select_sphere(pos, radius) count += mask[i] if count == 0: return None return mask.view("bool") def __hash__(self): # convert data to be hashed to a byte array, which the FNV algorithm expects if self._hash_initialized == 1: return self._hash hash_data = bytearray() for v in self._hash_vals() + self._base_hash(): if isinstance(v, tuple): hash_data.extend(v[0].encode('ascii')) hash_data.extend(repr(v[1]).encode('ascii')) else: hash_data.extend(repr(v).encode('ascii')) cdef np.int64_t hash_value = fnv_hash(hash_data) self._hash = hash_value self._hash_initialized = 1 return hash_value def _hash_vals(self): raise NotImplementedError def _base_hash(self): return (("min_level", self.min_level), ("max_level", self.max_level), ("overlap_cells", self.overlap_cells), ("periodicity[0]", self.periodicity[0]), ("periodicity[1]", self.periodicity[1]), ("periodicity[2]", self.periodicity[2]), ("domain_width[0]", self.domain_width[0]), ("domain_width[1]", self.domain_width[1]), ("domain_width[2]", self.domain_width[2])) def _get_state_attnames(self): # return a tuple of attr names for __setstate__: implement for each subclass raise NotImplementedError def __getstate__(self): # returns a tuple containing (attribute name, attribute value) tuples needed to # rebuild the state: base_atts = ("min_level", "max_level", "overlap_cells", "periodicity", "domain_width", "domain_center") child_atts = self._get_state_attnames() # assemble the state_tuple (('a1', a1val), ('a2', a2val),...) state_tuple = () for fld in base_atts + child_atts: state_tuple += ((fld, getattr(self, fld)), ) return state_tuple def __getnewargs__(self): # __setstate__ will always call __cinit__; this pickle hook returns arguments # to __cinit__. We will give it None so we don't error, then set attributes in # __setstate__. Note that we could avoid this by making dobj an optional argument # to __cinit__ return (None, ) def __setstate__(self, state_tuple): # parse and set attributes from the state_tuple: (('a1',a1val),('a2',a2val),...) for attr in state_tuple: setattr(self, attr[0], attr[1])
# File: yt-4.4.0/yt/geometry/_selection_routines/slice_selector.pxi
cdef class SliceSelector(SelectorObject): cdef public int axis cdef public np.float64_t coord cdef public int ax, ay cdef public int reduced_dimensionality def __init__(self, dobj): self.axis = dobj.axis self.coord = _ensure_code(dobj.coord) # If we have a reduced dimensionality dataset, we want to avoid any # checks against it in the axes that are beyond its dimensionality. # This means that if we have a 2D dataset, *all* slices along z will # select all the zones.
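# Usage sketch (illustrative; assumes a dataset `ds` is already loaded
# and carries a ("gas", "density") field):
#
#   sl = ds.slice(2, ds.domain_center[2])   # axis=2 -> ax = 0, ay = 1
#   frb = sl.to_frb(ds.domain_width[0], (512, 512))
#   image = frb["gas", "density"]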
if self.axis >= dobj.ds.dimensionality: self.reduced_dimensionality = 1 else: self.reduced_dimensionality = 0 self.ax = (self.axis+1) % 3 self.ay = (self.axis+2) % 3 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def fill_mask_regular_grid(self, gobj): cdef np.ndarray[np.uint8_t, ndim=3] mask cdef np.ndarray[np.uint8_t, ndim=3, cast=True] child_mask cdef int i, j, k cdef int total = 0 cdef int this_level = 0 cdef int ind[3][2] cdef np.uint64_t icoord cdef np.int32_t level = gobj.Level _ensure_code(gobj.LeftEdge) _ensure_code(gobj.dds) if level < self.min_level or level > self.max_level: return None else: child_mask = gobj.child_mask mask = np.zeros(gobj.ActiveDimensions, dtype=np.uint8) if level == self.max_level: this_level = 1 for i in range(3): if i == self.axis: icoord = ( (self.coord - gobj.LeftEdge.d[i])/gobj.dds[i]) # clip coordinate to avoid seg fault below if we're # exactly at a grid boundary ind[i][0] = iclip( icoord, 0, gobj.ActiveDimensions[i]-1) ind[i][1] = ind[i][0] + 1 else: ind[i][0] = 0 ind[i][1] = gobj.ActiveDimensions[i] with nogil: for i in range(ind[0][0], ind[0][1]): for j in range(ind[1][0], ind[1][1]): for k in range(ind[2][0], ind[2][1]): if this_level == 1 or child_mask[i, j, k]: mask[i, j, k] = 1 total += 1 if total == 0: return None, 0 return mask.astype("bool"), total @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil: if self.reduced_dimensionality == 1: return 1 if pos[self.axis] + 0.5*dds[self.axis] > self.coord \ and pos[self.axis] - 0.5*dds[self.axis] - grid_eps <= self.coord: return 1 return 0 cdef int select_point(self, np.float64_t pos[3]) noexcept nogil: # two 0-volume constructs don't intersect return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil: if self.reduced_dimensionality == 1: return 1 cdef np.float64_t dist = self.periodic_difference( pos[self.axis], self.coord, self.axis) if dist*dist < radius*radius: return 1 return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_bbox(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: if self.reduced_dimensionality == 1: return 1 if left_edge[self.axis] - grid_eps <= self.coord < right_edge[self.axis]: return 1 return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_bbox_edge(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: if self.reduced_dimensionality == 1: return 2 if left_edge[self.axis] - grid_eps <= self.coord < right_edge[self.axis]: return 2 # a box with non-zero volume can't be inside a plane return 0 def _hash_vals(self): return (("axis", self.axis), ("coord", self.coord)) def _get_state_attnames(self): return ("axis", "coord", "ax", "ay", "reduced_dimensionality") slice_selector = SliceSelector ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/_selection_routines/sphere_selector.pxi0000644000175100001770000001600414714401662024041 0ustar00runnerdockercdef class SphereSelector(SelectorObject): cdef public np.float64_t radius cdef public np.float64_t radius2 cdef public np.float64_t center[3] cdef np.float64_t bbox[3][2] cdef public bint check_box[3] def __init__(self, dobj): for i in range(3): self.center[i] = 
_ensure_code(dobj.center[i]) self.radius = _ensure_code(dobj.radius) self.radius2 = self.radius * self.radius self.set_bbox(_ensure_code(dobj.center)) for i in range(3): if self.bbox[i][0] < dobj.ds.domain_left_edge[i]: self.check_box[i] = False elif self.bbox[i][1] > dobj.ds.domain_right_edge[i]: self.check_box[i] = False else: self.check_box[i] = True def set_bbox(self, center): for i in range(3): self.center[i] = center[i] self.bbox[i][0] = self.center[i] - self.radius self.bbox[i][1] = self.center[i] + self.radius @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil: # sphere center either inside cell or center of cell lies inside sphere if (pos[0] - 0.5*dds[0] <= self.center[0] <= pos[0]+0.5*dds[0] and pos[1] - 0.5*dds[1] <= self.center[1] <= pos[1]+0.5*dds[1] and pos[2] - 0.5*dds[2] <= self.center[2] <= pos[2]+0.5*dds[2]): return 1 return self.select_point(pos) # # langmm: added to allow sphere to intersect edge/corner of cell # cdef np.float64_t LE[3] # cdef np.float64_t RE[3] # cdef int i # for i in range(3): # LE[i] = pos[i] - 0.5*dds[i] # RE[i] = pos[i] + 0.5*dds[i] # return self.select_bbox(LE, RE) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_point(self, np.float64_t pos[3]) noexcept nogil: cdef int i cdef np.float64_t dist, dist2 = 0 for i in range(3): if self.check_box[i] and \ (pos[i] < self.bbox[i][0] or pos[i] > self.bbox[i][1]): return 0 dist = _periodic_dist(pos[i], self.center[i], self.domain_width[i], self.periodicity[i]) dist2 += dist*dist if dist2 > self.radius2: return 0 return 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil: cdef int i cdef np.float64_t dist, dist2 = 0 for i in range(3): dist = self.periodic_difference(pos[i], self.center[i], i) dist2 += dist*dist dist = self.radius+radius if dist2 <= dist*dist: return 1 return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_bbox(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: cdef np.float64_t box_center, relcenter, closest, dist, edge cdef int i if (left_edge[0] <= self.center[0] < right_edge[0] and left_edge[1] <= self.center[1] < right_edge[1] and left_edge[2] <= self.center[2] < right_edge[2]): return 1 for i in range(3): if not self.check_box[i]: continue if right_edge[i] < self.bbox[i][0] or \ left_edge[i] > self.bbox[i][1]: return 0 # http://www.gamedev.net/topic/335465-is-this-the-simplest-sphere-aabb-collision-test/ dist = 0 for i in range(3): # Early terminate box_center = (right_edge[i] + left_edge[i])/2.0 relcenter = self.periodic_difference(box_center, self.center[i], i) edge = right_edge[i] - left_edge[i] closest = relcenter - fclip(relcenter, -edge/2.0, edge/2.0) dist += closest*closest if dist > self.radius2: return 0 return 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int select_bbox_edge(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil: cdef np.float64_t box_center, relcenter, closest, farthest, cdist, fdist, edge cdef int i if (left_edge[0] <= self.center[0] <= right_edge[0] and left_edge[1] <= self.center[1] <= right_edge[1] and left_edge[2] <= self.center[2] <= right_edge[2]): fdist = 0 for i in range(3): edge = right_edge[i] - left_edge[i] box_center = (right_edge[i] + left_edge[i])/2.0 
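# NumPy sketch of the sphere/AABB closest-point test used in
# select_bbox above (after the gamedev.net reference; the helper name
# is hypothetical and periodicity is ignored for brevity):
#
#   import numpy as np
#
#   def sphere_overlaps_box(center, radius, le, re):  # hypothetical helper
#       closest = np.clip(np.asarray(center), le, re)  # nearest box point
#       d2 = np.sum((np.asarray(center) - closest) ** 2)
#       return d2 <= radius * radius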
relcenter = self.periodic_difference( box_center, self.center[i], i) if relcenter >= 0: farthest = relcenter + edge/2.0 else: farthest = relcenter - edge/2.0 # farthest = relcenter + fclip(relcenter, -edge/2.0, edge/2.0) fdist += farthest*farthest if fdist >= self.radius2: return 2 # Box extends outside sphere return 1 # Box entirely inside sphere for i in range(3): if not self.check_box[i]: continue if right_edge[i] < self.bbox[i][0] or \ left_edge[i] > self.bbox[i][1]: return 0 # Box outside sphere bounding box # http://www.gamedev.net/topic/335465-is-this-the-simplest-sphere-aabb-collision-test/ cdist = 0 fdist = 0 for i in range(3): # Early terminate box_center = (right_edge[i] + left_edge[i])/2.0 relcenter = self.periodic_difference(box_center, self.center[i], i) edge = right_edge[i] - left_edge[i] closest = relcenter - fclip(relcenter, -edge/2.0, edge/2.0) if relcenter >= 0: farthest = relcenter + edge/2.0 else: farthest = relcenter - edge/2.0 #farthest = relcenter + fclip(relcenter, -edge/2.0, edge/2.0) cdist += closest*closest fdist += farthest*farthest if cdist > self.radius2: return 0 # Box does not overlap sphere if fdist < self.radius2: return 1 # Sphere extends to entirely contain box else: return 2 # Sphere only partially overlaps box def _hash_vals(self): return (("radius", self.radius), ("radius2", self.radius2), ("center[0]", self.center[0]), ("center[1]", self.center[1]), ("center[2]", self.center[2])) def _get_state_attnames(self): return ("radius", "radius2", "center", "check_box") def __setstate__(self, hashes): super(SphereSelector, self).__setstate__(hashes) self.set_bbox(self.center) sphere_selector = SphereSelector
# File: yt-4.4.0/yt/geometry/api.py
from .geometry_enum import Geometry from .geometry_handler import Index from .grid_geometry_handler import GridIndex
# File: yt-4.4.0/yt/geometry/coordinates/__init__.py (empty)
# File: yt-4.4.0/yt/geometry/coordinates/api.py
from .cartesian_coordinates import CartesianCoordinateHandler from .coordinate_handler import CoordinateHandler from .cylindrical_coordinates import CylindricalCoordinateHandler from .geographic_coordinates import ( GeographicCoordinateHandler, InternalGeographicCoordinateHandler, ) from .polar_coordinates import PolarCoordinateHandler from .spec_cube_coordinates import SpectralCubeCoordinateHandler from .spherical_coordinates import SphericalCoordinateHandler
# File: yt-4.4.0/yt/geometry/coordinates/cartesian_coordinates.py
import numpy as np from yt.data_objects.index_subobjects.unstructured_mesh import SemiStructuredMesh from yt.funcs import mylog from yt.units._numpy_wrapper_functions import uconcatenate, uvstack from yt.units.yt_array import YTArray from yt.utilities.lib.pixelization_routines
import ( interpolate_sph_grid_gather, normalization_2d_utility, pixelize_cartesian, pixelize_cartesian_nodal, pixelize_element_mesh, pixelize_element_mesh_line, pixelize_off_axis_cartesian, pixelize_sph_kernel_cutting, pixelize_sph_kernel_projection, pixelize_sph_kernel_slice, ) from yt.utilities.math_utils import compute_stddev_image from yt.utilities.nodal_data_utils import get_nodal_data from .coordinate_handler import ( CoordinateHandler, _get_coord_fields, _get_vert_fields, cartesian_to_cylindrical, cylindrical_to_cartesian, ) def _sample_ray(ray, npoints, field): """ Private function that uses a ray object for calculating the field values that will be the y-axis values in a LinePlot object. Parameters ---------- ray : YTOrthoRay, YTRay, or LightRay Ray object from which to sample field values npoints : int The number of points to sample field : str or field tuple The name of the field to sample """ start_point = ray.start_point end_point = ray.end_point sample_dr = (end_point - start_point) / (npoints - 1) sample_points = [np.arange(npoints) * sample_dr[i] for i in range(3)] sample_points = uvstack(sample_points).T + start_point ray_coordinates = uvstack([ray["index", d] for d in "xyz"]).T ray_dds = uvstack([ray["index", f"d{d}"] for d in "xyz"]).T ray_field = ray[field] field_values = ray.ds.arr(np.zeros(npoints), ray_field.units) for i, sample_point in enumerate(sample_points): ray_contains = (sample_point >= (ray_coordinates - ray_dds / 2)) & ( sample_point <= (ray_coordinates + ray_dds / 2) ) ray_contains = ray_contains.all(axis=-1) # use argmax to find the first nonzero index, sometimes there # are two indices if the sampling point happens to fall exactly at # a cell boundary field_values[i] = ray_field[np.argmax(ray_contains)] dr = np.sqrt((sample_dr**2).sum()) x = np.arange(npoints) / (npoints - 1) * (dr * npoints) return x, field_values def all_data(data, ptype, fields, kdtree=False): field_data = {} fields = set(fields) for field in fields: field_data[field] = [] for chunk in data.all_data().chunks([], "io"): for field in fields: field_data[field].append(chunk[ptype, field].in_base("code")) for field in fields: field_data[field] = uconcatenate(field_data[field]) if kdtree is True: kdtree = data.index.kdtree for field in fields: if len(field_data[field].shape) == 1: field_data[field] = field_data[field][kdtree.idx] else: field_data[field] = field_data[field][kdtree.idx, :] return field_data class CartesianCoordinateHandler(CoordinateHandler): name = "cartesian" _default_axis_order = ("x", "y", "z") def setup_fields(self, registry): for axi, ax in enumerate(self.axis_order): f1, f2 = _get_coord_fields(axi) registry.add_field( ("index", f"d{ax}"), sampling_type="cell", function=f1, display_field=False, units="code_length", ) registry.add_field( ("index", f"path_element_{ax}"), sampling_type="cell", function=f1, display_field=False, units="code_length", ) registry.add_field( ("index", f"{ax}"), sampling_type="cell", function=f2, display_field=False, units="code_length", ) f3 = _get_vert_fields(axi) registry.add_field( ("index", f"vertex_{ax}"), sampling_type="cell", function=f3, display_field=False, units="code_length", ) self._register_volume(registry) self._check_fields(registry) def _register_volume(self, registry): def _cell_volume(field, data): rv = data["index", "dx"].copy(order="K") rv *= data["index", "dy"] rv *= data["index", "dz"] return rv registry.add_field( ("index", "cell_volume"), sampling_type="cell", function=_cell_volume, display_field=False, 
units="code_length**3", ) registry.alias(("index", "volume"), ("index", "cell_volume")) def _check_fields(self, registry): registry.check_derived_fields( [ ("index", "dx"), ("index", "dy"), ("index", "dz"), ("index", "x"), ("index", "y"), ("index", "z"), ("index", "cell_volume"), ] ) def pixelize( self, dimension, data_source, field, bounds, size, antialias=True, periodic=True, *, return_mask=False, ): """ Method for pixelizing datasets in preparation for two-dimensional image plots. Relies on several sampling routines written in cython """ index = data_source.ds.index if hasattr(index, "meshes") and not isinstance( index.meshes[0], SemiStructuredMesh ): ftype, fname = field if ftype == "all": mesh_id = 0 indices = np.concatenate( [mesh.connectivity_indices for mesh in index.mesh_union] ) else: mesh_id = int(ftype[-1]) - 1 indices = index.meshes[mesh_id].connectivity_indices coords = index.meshes[mesh_id].connectivity_coords offset = index.meshes[mesh_id]._index_offset ad = data_source.ds.all_data() field_data = ad[field] buff_size = size[0:dimension] + (1,) + size[dimension:] ax = data_source.axis xax = self.x_axis[ax] yax = self.y_axis[ax] c = np.float64(data_source.center[dimension].d) extents = np.zeros((3, 2)) extents[ax] = np.array([c, c]) extents[xax] = bounds[0:2] extents[yax] = bounds[2:4] # if this is an element field, promote to 2D here if len(field_data.shape) == 1: field_data = np.expand_dims(field_data, 1) # if this is a higher-order element, we demote to 1st order # here, for now. elif field_data.shape[1] == 27: # hexahedral mylog.warning( "High order elements not yet supported, dropping to 1st order." ) field_data = field_data[:, 0:8] indices = indices[:, 0:8] buff, mask = pixelize_element_mesh( coords, indices, buff_size, field_data, extents, index_offset=offset, return_mask=True, ) buff = np.squeeze(np.transpose(buff, (yax, xax, ax))) mask = np.squeeze(np.transpose(mask, (yax, xax, ax))) elif self.axis_id.get(dimension, dimension) is not None: buff, mask = self._ortho_pixelize( data_source, field, bounds, size, antialias, dimension, periodic ) else: buff, mask = self._oblique_pixelize( data_source, field, bounds, size, antialias ) if return_mask: assert mask is None or mask.dtype == bool return buff, mask else: return buff def pixelize_line(self, field, start_point, end_point, npoints): """ Method for sampling datasets along a line in preparation for one-dimensional line plots. For UnstructuredMesh, relies on a sampling routine written in cython """ if npoints < 2: raise ValueError( "Must have at least two sample points in order to draw a line plot." ) index = self.ds.index if hasattr(index, "meshes") and not isinstance( index.meshes[0], SemiStructuredMesh ): ftype, fname = field if ftype == "all": mesh_id = 0 indices = np.concatenate( [mesh.connectivity_indices for mesh in index.mesh_union] ) else: mesh_id = int(ftype[-1]) - 1 indices = index.meshes[mesh_id].connectivity_indices coords = index.meshes[mesh_id].connectivity_coords if coords.shape[1] != end_point.size != start_point.size: raise ValueError( "The coordinate dimension doesn't match the " "start and end point dimensions." ) offset = index.meshes[mesh_id]._index_offset ad = self.ds.all_data() field_data = ad[field] # if this is an element field, promote to 2D here if len(field_data.shape) == 1: field_data = np.expand_dims(field_data, 1) # if this is a higher-order element, we demote to 1st order # here, for now. 
elif field_data.shape[1] == 27: # hexahedral mylog.warning( "High order elements not yet supported, dropping to 1st order." ) field_data = field_data[:, 0:8] indices = indices[:, 0:8] arc_length, plot_values = pixelize_element_mesh_line( coords, indices, start_point, end_point, npoints, field_data, index_offset=offset, ) arc_length = YTArray(arc_length, start_point.units) plot_values = YTArray(plot_values, field_data.units) else: ray = self.ds.ray(start_point, end_point) arc_length, plot_values = _sample_ray(ray, npoints, field) return arc_length, plot_values def _ortho_pixelize( self, data_source, field, bounds, size, antialias, dim, periodic ): from yt.data_objects.construction_data_containers import YTParticleProj from yt.data_objects.selection_objects.slices import YTSlice from yt.frontends.sph.data_structures import ParticleDataset from yt.frontends.stream.data_structures import StreamParticlesDataset # We should be using fcoords field = data_source._determine_fields(field)[0] finfo = data_source.ds.field_info[field] # some coordinate handlers use only projection-plane periods, # others need all box periods. period2 = self.period[:2].copy() # dummy here period2[0] = self.period[self.x_axis[dim]] period2[1] = self.period[self.y_axis[dim]] period3 = self.period[:].copy() # dummy here period3[0] = self.period[self.x_axis[dim]] period3[1] = self.period[self.y_axis[dim]] zax = list({0, 1, 2} - {self.x_axis[dim], self.y_axis[dim]})[0] period3[2] = self.period[zax] if hasattr(period2, "in_units"): period2 = period2.in_units("code_length").d if hasattr(period3, "in_units"): period3 = period3.in_units("code_length").d buff = np.full((size[1], size[0]), np.nan, dtype="float64") particle_datasets = (ParticleDataset, StreamParticlesDataset) is_sph_field = finfo.is_sph_field finfo = self.ds._get_field_info(field) if np.any(finfo.nodal_flag): nodal_data = get_nodal_data(data_source, field) coord = data_source.coord.d mask = pixelize_cartesian_nodal( buff, data_source["px"], data_source["py"], data_source["pz"], data_source["pdx"], data_source["pdy"], data_source["pdz"], nodal_data, coord, bounds, int(antialias), period2, int(periodic), return_mask=True, ) elif isinstance(data_source.ds, particle_datasets) and is_sph_field: # SPH handling ptype = field[0] if ptype == "gas": ptype = data_source.ds._sph_ptypes[0] px_name = self.axis_name[self.x_axis[dim]] py_name = self.axis_name[self.y_axis[dim]] # need z coordinates for depth, # but name isn't saved in the handler -> use the 'other one' pz_name = list(set(self.axis_order) - {px_name, py_name})[0] # ignore default True periodic argument # (not actually supplied by a call from # FixedResolutionBuffer), and use the dataset periodicity # instead xa = self.x_axis[dim] ya = self.y_axis[dim] # axorder = data_source.ds.coordinates.axis_order za = list({0, 1, 2} - {xa, ya})[0] ds_periodic = data_source.ds.periodicity _periodic = np.array(ds_periodic) _periodic[0] = ds_periodic[xa] _periodic[1] = ds_periodic[ya] _periodic[2] = ds_periodic[za] ounits = data_source.ds.field_info[field].output_units bnds = data_source.ds.arr(bounds, "code_length").tolist() kernel_name = None if hasattr(data_source.ds, "kernel_name"): kernel_name = data_source.ds.kernel_name if kernel_name is None: kernel_name = "cubic" if isinstance(data_source, YTParticleProj): # projection weight = data_source.weight_field moment = data_source.moment le, re = data_source.data_source.get_bbox() # If we're not periodic, we need to clip to the boundary edges # or we get errors about extending off 
the edge of the region. # (depth/z range is handled by region setting) if not self.ds.periodicity[xa]: le[xa] = max(bounds[0], self.ds.domain_left_edge[xa]) re[xa] = min(bounds[1], self.ds.domain_right_edge[xa]) else: le[xa] = bounds[0] re[xa] = bounds[1] if not self.ds.periodicity[ya]: le[ya] = max(bounds[2], self.ds.domain_left_edge[ya]) re[ya] = min(bounds[3], self.ds.domain_right_edge[ya]) else: le[ya] = bounds[2] re[ya] = bounds[3] # We actually need to clip these proj_reg = data_source.ds.region( left_edge=le, right_edge=re, center=data_source.center, data_source=data_source.data_source, ) proj_reg.set_field_parameter("axis", data_source.axis) # need some z bounds for SPH projection # -> use source bounds bnds3 = bnds + [le[za], re[za]] buff = np.zeros(size, dtype="float64") mask_uint8 = np.zeros_like(buff, dtype="uint8") if weight is None: for chunk in proj_reg.chunks([], "io"): data_source._initialize_projected_units([field], chunk) pixelize_sph_kernel_projection( buff, mask_uint8, chunk[ptype, px_name].to("code_length"), chunk[ptype, py_name].to("code_length"), chunk[ptype, pz_name].to("code_length"), chunk[ptype, "smoothing_length"].to("code_length"), chunk[ptype, "mass"].to("code_mass"), chunk[ptype, "density"].to("code_density"), chunk[field].in_units(ounits), bnds3, _check_period=_periodic.astype("int"), period=period3, kernel_name=kernel_name, ) # We use code length here, but to get the path length right # we need to multiply by the conversion factor between # code length and the unit system's length unit default_path_length_unit = data_source.ds.unit_system["length"] dl_conv = data_source.ds.quan(1.0, "code_length").to( default_path_length_unit ) buff *= dl_conv.v # if there is a weight field, take two projections: # one of field*weight, the other of just weight, and divide them else: weight_buff = np.zeros(size, dtype="float64") buff = np.zeros(size, dtype="float64") mask_uint8 = np.zeros_like(buff, dtype="uint8") wounits = data_source.ds.field_info[weight].output_units for chunk in proj_reg.chunks([], "io"): data_source._initialize_projected_units([field], chunk) data_source._initialize_projected_units([weight], chunk) pixelize_sph_kernel_projection( buff, mask_uint8, chunk[ptype, px_name].to("code_length"), chunk[ptype, py_name].to("code_length"), chunk[ptype, pz_name].to("code_length"), chunk[ptype, "smoothing_length"].to("code_length"), chunk[ptype, "mass"].to("code_mass"), chunk[ptype, "density"].to("code_density"), chunk[field].in_units(ounits), bnds3, _check_period=_periodic.astype("int"), period=period3, weight_field=chunk[weight].in_units(wounits), kernel_name=kernel_name, ) mylog.info( "Making a fixed resolution buffer of (%s) %d by %d", weight, size[0], size[1], ) for chunk in proj_reg.chunks([], "io"): data_source._initialize_projected_units([weight], chunk) pixelize_sph_kernel_projection( weight_buff, mask_uint8, chunk[ptype, px_name].to("code_length"), chunk[ptype, py_name].to("code_length"), chunk[ptype, pz_name].to("code_length"), chunk[ptype, "smoothing_length"].to("code_length"), chunk[ptype, "mass"].to("code_mass"), chunk[ptype, "density"].to("code_density"), chunk[weight].in_units(wounits), bnds3, _check_period=_periodic.astype("int"), period=period3, kernel_name=kernel_name, ) normalization_2d_utility(buff, weight_buff) if moment == 2: buff2 = np.zeros(size, dtype="float64") for chunk in proj_reg.chunks([], "io"): data_source._initialize_projected_units([field], chunk) data_source._initialize_projected_units([weight], chunk) 
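# Moment-2 sketch: buff2 below accumulates the weighted mean of f**2,
# and compute_stddev_image then effectively forms the standard
# deviation image
#     sigma = sqrt(<f**2>_w - <f>_w**2)
# from buff2 (= <f**2>_w) and buff (= <f>_w).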
pixelize_sph_kernel_projection( buff2, mask_uint8, chunk[ptype, px_name].to("code_length"), chunk[ptype, py_name].to("code_length"), chunk[ptype, pz_name].to("code_length"), chunk[ptype, "smoothing_length"].to("code_length"), chunk[ptype, "mass"].to("code_mass"), chunk[ptype, "density"].to("code_density"), chunk[field].in_units(ounits) ** 2, bnds3, _check_period=_periodic.astype("int"), period=period3, weight_field=chunk[weight].in_units(wounits), kernel_name=kernel_name, ) normalization_2d_utility(buff2, weight_buff) buff = compute_stddev_image(buff2, buff) mask = mask_uint8.astype("bool") elif isinstance(data_source, YTSlice): smoothing_style = getattr(self.ds, "sph_smoothing_style", "scatter") normalize = getattr(self.ds, "use_sph_normalization", True) if smoothing_style == "scatter": buff = np.zeros(size, dtype="float64") mask_uint8 = np.zeros_like(buff, dtype="uint8") if normalize: buff_den = np.zeros(size, dtype="float64") for chunk in data_source.chunks([], "io"): hsmlname = "smoothing_length" pixelize_sph_kernel_slice( buff, mask_uint8, chunk[ptype, px_name].to("code_length").v, chunk[ptype, py_name].to("code_length").v, chunk[ptype, pz_name].to("code_length").v, chunk[ptype, hsmlname].to("code_length").v, chunk[ptype, "mass"].to("code_mass").v, chunk[ptype, "density"].to("code_density").v, chunk[field].in_units(ounits).v, bnds, data_source.coord.to("code_length").v, _check_period=_periodic.astype("int"), period=period3, kernel_name=kernel_name, ) if normalize: pixelize_sph_kernel_slice( buff_den, mask_uint8, chunk[ptype, px_name].to("code_length").v, chunk[ptype, py_name].to("code_length").v, chunk[ptype, pz_name].to("code_length").v, chunk[ptype, hsmlname].to("code_length").v, chunk[ptype, "mass"].to("code_mass").v, chunk[ptype, "density"].to("code_density").v, np.ones(chunk[ptype, "density"].shape[0]), bnds, data_source.coord.to("code_length").v, _check_period=_periodic.astype("int"), period=period3, kernel_name=kernel_name, ) if normalize: normalization_2d_utility(buff, buff_den) mask = mask_uint8.astype("bool", copy=False) if smoothing_style == "gather": # Here we find out which axis are going to be the "x" and # "y" axis for the actual visualisation and then we set the # buffer size and bounds to match. The z axis of the plot # is the axis we slice over and the buffer will be of size 1 # in that dimension x, y, z = self.x_axis[dim], self.y_axis[dim], dim buff_size = np.zeros(3, dtype="int64") buff_size[x] = size[0] buff_size[y] = size[1] buff_size[z] = 1 buff_bounds = np.zeros(6, dtype="float64") buff_bounds[2 * x : 2 * x + 2] = bounds[0:2] buff_bounds[2 * y : 2 * y + 2] = bounds[2:4] buff_bounds[2 * z] = data_source.coord buff_bounds[2 * z + 1] = data_source.coord # then we do the interpolation buff_temp = np.zeros(buff_size, dtype="float64") fields_to_get = [ "particle_position", "density", "mass", "smoothing_length", field[1], ] all_fields = all_data(self.ds, ptype, fields_to_get, kdtree=True) num_neighbors = getattr(self.ds, "num_neighbors", 32) mask_temp = interpolate_sph_grid_gather( buff_temp, all_fields["particle_position"].to("code_length"), buff_bounds, all_fields["smoothing_length"].to("code_length"), all_fields["mass"].to("code_mass"), all_fields["density"].to("code_density"), all_fields[field[1]].in_units(ounits), self.ds.index.kdtree, num_neigh=num_neighbors, use_normalization=normalize, return_mask=True, ) # We swap the axes back so the axis which was sliced over # is the last axis, as this is the "z" axis of the plots. 
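# Shape bookkeeping sketch (illustrative): if the slice axis is z == 0,
# the gather buffer has shape (1, Nx, Ny); swapaxes(2, z) moves the
# singleton axis last so the 2D image plane can be taken with [:, :, 0]:
#
#   import numpy as np
#
#   buff_temp = np.zeros((1, 4, 5))        # z == 0, Nx == 4, Ny == 5
#   buff_temp = buff_temp.swapaxes(2, 0)   # shape -> (5, 4, 1)
#   plane = buff_temp[:, :, 0]             # the image plane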
if z != 2: buff_temp = buff_temp.swapaxes(2, z) mask_temp = mask_temp.swapaxes(2, z) if x == 2: x = z else: y = z buff = buff_temp[:, :, 0] mask = mask_temp[:, :, 0] # Then we just transpose if the buffer x and y are # different than the plot x and y if y < x: buff = buff.transpose() mask = mask.transpose() else: raise NotImplementedError( "A pixelization routine has not been implemented for " f"{type(data_source)} data objects" ) buff = buff.transpose() mask = mask.transpose() else: mask = pixelize_cartesian( buff, data_source["px"], data_source["py"], data_source["pdx"], data_source["pdy"], data_source[field], bounds, int(antialias), period2, int(periodic), return_mask=True, ) assert mask is None or mask.dtype == bool return buff, mask def _oblique_pixelize(self, data_source, field, bounds, size, antialias): from yt.data_objects.selection_objects.slices import YTCuttingPlane from yt.frontends.sph.data_structures import ParticleDataset from yt.frontends.stream.data_structures import StreamParticlesDataset from yt.frontends.ytdata.data_structures import YTSpatialPlotDataset # Determine what sort of data we're dealing with # -> what backend to use # copied from the _ortho_pixelize method field = data_source._determine_fields(field)[0] _finfo = data_source.ds.field_info[field] is_sph_field = _finfo.is_sph_field particle_datasets = (ParticleDataset, StreamParticlesDataset) # finfo = self.ds._get_field_info(field) # SPH data # only for slices: a function in off_axis_projection.py # handles projections if ( isinstance(data_source.ds, particle_datasets) and is_sph_field and isinstance(data_source, YTCuttingPlane) ): normalize = getattr(self.ds, "use_sph_normalization", True) le = data_source.ds.domain_left_edge.to("code_length") re = data_source.ds.domain_right_edge.to("code_length") boxbounds = np.array([le[0], re[0], le[1], re[1], le[2], re[2]]) periodic = data_source.ds.periodicity ptype = field[0] if ptype == "gas": ptype = data_source.ds._sph_ptypes[0] axorder = data_source.ds.coordinates.axis_order ounits = data_source.ds.field_info[field].output_units # input bounds are in code length units already widthxy = np.array((bounds[1] - bounds[0], bounds[3] - bounds[2])) kernel_name = None if hasattr(data_source.ds, "kernel_name"): kernel_name = data_source.ds.kernel_name if kernel_name is None: kernel_name = "cubic" # data_source should be a YTCuttingPlane object # dimensionless unyt normal/north # -> numpy array cython can deal with normal_vector = data_source.normal.v north_vector = data_source._y_vec.v center = data_source.center.to("code_length") buff = np.zeros(size, dtype="float64") mask_uint8 = np.zeros_like(buff, dtype="uint8") if normalize: buff_den = np.zeros(size, dtype="float64") for chunk in data_source.chunks([], "io"): pixelize_sph_kernel_cutting( buff, mask_uint8, chunk[ptype, axorder[0]].to("code_length").v, chunk[ptype, axorder[1]].to("code_length").v, chunk[ptype, axorder[2]].to("code_length").v, chunk[ptype, "smoothing_length"].to("code_length").v, chunk[ptype, "mass"].to("code_mass"), chunk[ptype, "density"].to("code_density"), chunk[field].in_units(ounits), center, widthxy, normal_vector, north_vector, boxbounds, periodic, kernel_name=kernel_name, check_period=1, ) if normalize: pixelize_sph_kernel_cutting( buff_den, mask_uint8, chunk[ptype, axorder[0]].to("code_length"), chunk[ptype, axorder[1]].to("code_length"), chunk[ptype, axorder[2]].to("code_length"), chunk[ptype, "smoothing_length"].to("code_length"), chunk[ptype, "mass"].to("code_mass"), chunk[ptype, 
"density"].to("code_density"), np.ones(chunk[ptype, "density"].shape[0]), center, widthxy, normal_vector, north_vector, boxbounds, periodic, kernel_name=kernel_name, check_period=1, ) if normalize: normalization_2d_utility(buff, buff_den) mask = mask_uint8.astype("bool", copy=False) # swap axes for image plotting mask = mask.swapaxes(0, 1) buff = buff.swapaxes(0, 1) # whatever other data this code could handle before the # SPH option was added else: indices = np.argsort(data_source["pdx"])[::-1].astype("int64", copy=False) buff = np.full((size[1], size[0]), np.nan, dtype="float64") ftype = "index" if isinstance(data_source.ds, YTSpatialPlotDataset): ftype = "gas" mask = pixelize_off_axis_cartesian( buff, data_source[ftype, "x"], data_source[ftype, "y"], data_source[ftype, "z"], data_source["px"], data_source["py"], data_source["pdx"], data_source["pdy"], data_source["pdz"], data_source.center, data_source._inv_mat, indices, data_source[field], bounds, ) return buff, mask def convert_from_cartesian(self, coord): return coord def convert_to_cartesian(self, coord): return coord def convert_to_cylindrical(self, coord): center = self.ds.domain_center return cartesian_to_cylindrical(coord, center) def convert_from_cylindrical(self, coord): center = self.ds.domain_center return cylindrical_to_cartesian(coord, center) def convert_to_spherical(self, coord): raise NotImplementedError def convert_from_spherical(self, coord): raise NotImplementedError _x_pairs = (("x", "y"), ("y", "z"), ("z", "x")) _y_pairs = (("x", "z"), ("y", "x"), ("z", "y")) @property def period(self): return self.ds.domain_width ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/coordinates/coordinate_handler.py0000644000175100001770000002612014714401662022565 0ustar00runnerdockerimport abc import weakref from functools import cached_property from numbers import Number from typing import Any, Literal, overload import numpy as np from yt._typing import AxisOrder from yt.funcs import fix_unitary, is_sequence, parse_center_array, validate_width_tuple from yt.units.yt_array import YTArray, YTQuantity from yt.utilities.exceptions import YTCoordinateNotImplemented, YTInvalidWidthError def _unknown_coord(field, data): raise YTCoordinateNotImplemented def _get_coord_fields(axi, units="code_length"): def _dds(field, data): rv = data.ds.arr(data.fwidth[..., axi].copy(), units) return data._reshape_vals(rv) def _coords(field, data): rv = data.ds.arr(data.fcoords[..., axi].copy(), units) return data._reshape_vals(rv) return _dds, _coords def _get_vert_fields(axi, units="code_length"): def _vert(field, data): rv = data.ds.arr(data.fcoords_vertex[..., axi].copy(), units) return rv return _vert def _setup_dummy_cartesian_coords_and_widths(registry, axes: tuple[str]): for ax in axes: registry.add_field( ("index", f"d{ax}"), sampling_type="cell", function=_unknown_coord ) registry.add_field(("index", ax), sampling_type="cell", function=_unknown_coord) def _setup_polar_coordinates(registry, axis_id): f1, f2 = _get_coord_fields(axis_id["r"]) registry.add_field( ("index", "dr"), sampling_type="cell", function=f1, display_field=False, units="code_length", ) registry.add_field( ("index", "r"), sampling_type="cell", function=f2, display_field=False, units="code_length", ) f1, f2 = _get_coord_fields(axis_id["theta"], "dimensionless") registry.add_field( ("index", "dtheta"), sampling_type="cell", function=f1, display_field=False, units="dimensionless", ) registry.add_field( ("index", 
"theta"), sampling_type="cell", function=f2, display_field=False, units="dimensionless", ) def _path_r(field, data): return data["index", "dr"] registry.add_field( ("index", "path_element_r"), sampling_type="cell", function=_path_r, units="code_length", ) def _path_theta(field, data): # Note: this already assumes cell-centered return data["index", "r"] * data["index", "dtheta"] registry.add_field( ("index", "path_element_theta"), sampling_type="cell", function=_path_theta, units="code_length", ) def validate_sequence_width(width, ds, unit=None): if isinstance(width[0], tuple) and isinstance(width[1], tuple): validate_width_tuple(width[0]) validate_width_tuple(width[1]) return ( ds.quan(width[0][0], fix_unitary(width[0][1])), ds.quan(width[1][0], fix_unitary(width[1][1])), ) elif isinstance(width[0], Number) and isinstance(width[1], Number): return (ds.quan(width[0], "code_length"), ds.quan(width[1], "code_length")) elif isinstance(width[0], YTQuantity) and isinstance(width[1], YTQuantity): return (ds.quan(width[0]), ds.quan(width[1])) else: validate_width_tuple(width) # If width and unit are both valid width tuples, we # assume width controls x and unit controls y try: validate_width_tuple(unit) return ( ds.quan(width[0], fix_unitary(width[1])), ds.quan(unit[0], fix_unitary(unit[1])), ) except YTInvalidWidthError: return ( ds.quan(width[0], fix_unitary(width[1])), ds.quan(width[0], fix_unitary(width[1])), ) class CoordinateHandler(abc.ABC): name: str _default_axis_order: AxisOrder def __init__(self, ds, ordering: AxisOrder | None = None): self.ds = weakref.proxy(ds) if ordering is not None: self.axis_order = ordering else: self.axis_order = self._default_axis_order @abc.abstractmethod def setup_fields(self): # This should return field definitions for x, y, z, r, theta, phi pass @overload def pixelize( self, dimension, data_source, field, bounds, size, antialias=True, periodic=True, *, return_mask: Literal[False], ) -> "np.ndarray[Any, np.dtype[np.float64]]": ... @overload def pixelize( self, dimension, data_source, field, bounds, size, antialias=True, periodic=True, *, return_mask: Literal[True], ) -> tuple[ "np.ndarray[Any, np.dtype[np.float64]]", "np.ndarray[Any, np.dtype[np.bool_]]" ]: ... 
@abc.abstractmethod def pixelize( self, dimension, data_source, field, bounds, size, antialias=True, periodic=True, *, return_mask=False, ): # This should *actually* be a pixelize call, not just returning the # pixelizer pass @abc.abstractmethod def pixelize_line(self, field, start_point, end_point, npoints): pass def distance(self, start, end): p1 = self.convert_to_cartesian(start) p2 = self.convert_to_cartesian(end) return np.sqrt(((p1 - p2) ** 2.0).sum()) @abc.abstractmethod def convert_from_cartesian(self, coord): pass @abc.abstractmethod def convert_to_cartesian(self, coord): pass @abc.abstractmethod def convert_to_cylindrical(self, coord): pass @abc.abstractmethod def convert_from_cylindrical(self, coord): pass @abc.abstractmethod def convert_to_spherical(self, coord): pass @abc.abstractmethod def convert_from_spherical(self, coord): pass @cached_property def data_projection(self): return {ax: None for ax in self.axis_order} @cached_property def data_transform(self): return {ax: None for ax in self.axis_order} @cached_property def axis_name(self): an = {} for axi, ax in enumerate(self.axis_order): an[axi] = ax an[ax] = ax an[ax.capitalize()] = ax return an @cached_property def axis_id(self): ai = {} for axi, ax in enumerate(self.axis_order): ai[ax] = ai[axi] = axi return ai @property def image_axis_name(self): rv = {} for i in range(3): rv[i] = (self.axis_name[self.x_axis[i]], self.axis_name[self.y_axis[i]]) rv[self.axis_name[i]] = rv[i] rv[self.axis_name[i].capitalize()] = rv[i] return rv @cached_property def x_axis(self): ai = self.axis_id xa = {} for a1, a2 in self._x_pairs: xa[a1] = xa[ai[a1]] = ai[a2] return xa @cached_property def y_axis(self): ai = self.axis_id ya = {} for a1, a2 in self._y_pairs: ya[a1] = ya[ai[a1]] = ai[a2] return ya @property @abc.abstractmethod def period(self): pass def sanitize_depth(self, depth): if is_sequence(depth): validate_width_tuple(depth) depth = (self.ds.quan(depth[0], fix_unitary(depth[1])),) elif isinstance(depth, Number): depth = ( self.ds.quan(depth, "code_length", registry=self.ds.unit_registry), ) elif isinstance(depth, YTQuantity): depth = (depth,) else: raise YTInvalidWidthError(depth) return depth def sanitize_width(self, axis, width, depth): if width is None: # initialize the index if it is not already initialized self.ds.index # Default to code units if not is_sequence(axis): xax = self.x_axis[axis] yax = self.y_axis[axis] w = self.ds.domain_width[np.array([xax, yax])] else: # axis is actually the normal vector # for an off-axis data object. 
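# NOTE (editor's commentary, not yt source): with no width given for an
# off-axis object, the window falls back to a square whose side is the
# smallest domain width -- e.g. a (1, 2, 3) code_length domain yields
# width = (1, 1) in code_length.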
mi = np.argmin(self.ds.domain_width) w = self.ds.domain_width[np.array((mi, mi))] width = (w[0], w[1]) elif is_sequence(width): width = validate_sequence_width(width, self.ds) elif isinstance(width, YTQuantity): width = (width, width) elif isinstance(width, Number): width = ( self.ds.quan(width, "code_length"), self.ds.quan(width, "code_length"), ) else: raise YTInvalidWidthError(width) if depth is not None: depth = self.sanitize_depth(depth) return width + depth return width def sanitize_center(self, center, axis): center = parse_center_array(center, ds=self.ds, axis=axis) # This has to return both a center and a display_center display_center = self.convert_to_cartesian(center) return center, display_center def cartesian_to_cylindrical(coord, center=(0, 0, 0)): c2 = np.zeros_like(coord) if not isinstance(center, YTArray): center = center * coord.uq c2[..., 0] = ( (coord[..., 0] - center[0]) ** 2.0 + (coord[..., 1] - center[1]) ** 2.0 ) ** 0.5 c2[..., 1] = coord[..., 2] # rzt c2[..., 2] = np.arctan2(coord[..., 1] - center[1], coord[..., 0] - center[0]) return c2 def cylindrical_to_cartesian(coord, center=(0, 0, 0)): c2 = np.zeros_like(coord) if not isinstance(center, YTArray): center = center * coord.uq c2[..., 0] = np.cos(coord[..., 0]) * coord[..., 1] + center[0] c2[..., 1] = np.sin(coord[..., 0]) * coord[..., 1] + center[1] c2[..., 2] = coord[..., 2] return c2 def _get_polar_bounds(self: CoordinateHandler, axes: tuple[str, str]): # a small helper function that is needed by two unrelated classes ri = self.axis_id[axes[0]] pi = self.axis_id[axes[1]] rmin = self.ds.domain_left_edge[ri] rmax = self.ds.domain_right_edge[ri] phimin = self.ds.domain_left_edge[pi] phimax = self.ds.domain_right_edge[pi] corners = [ (rmin, phimin), (rmin, phimax), (rmax, phimin), (rmax, phimax), ] def to_polar_plane(r, phi): x = r * np.cos(phi) y = r * np.sin(phi) return x, y conic_corner_coords = [to_polar_plane(*corner) for corner in corners] phimin = phimin.d phimax = phimax.d if phimin <= np.pi <= phimax: xxmin = -rmax else: xxmin = min(xx for xx, yy in conic_corner_coords) if phimin <= 0 <= phimax: xxmax = rmax else: xxmax = max(xx for xx, yy in conic_corner_coords) if phimin <= 3 * np.pi / 2 <= phimax: yymin = -rmax else: yymin = min(yy for xx, yy in conic_corner_coords) if phimin <= np.pi / 2 <= phimax: yymax = rmax else: yymax = max(yy for xx, yy in conic_corner_coords) return xxmin, xxmax, yymin, yymax ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/coordinates/cylindrical_coordinates.py0000644000175100001770000002047114714401662023633 0ustar00runnerdockerfrom functools import cached_property import numpy as np from yt.utilities.lib.pixelization_routines import pixelize_cartesian, pixelize_cylinder from .coordinate_handler import ( CoordinateHandler, _get_coord_fields, _get_polar_bounds, _setup_dummy_cartesian_coords_and_widths, _setup_polar_coordinates, cartesian_to_cylindrical, cylindrical_to_cartesian, ) # # Cylindrical fields # class CylindricalCoordinateHandler(CoordinateHandler): name = "cylindrical" _default_axis_order = ("r", "z", "theta") def __init__(self, ds, ordering=None): super().__init__(ds, ordering) self.image_units = {} self.image_units[self.axis_id["r"]] = ("rad", None) self.image_units[self.axis_id["theta"]] = (None, None) self.image_units[self.axis_id["z"]] = (None, None) def setup_fields(self, registry): # Missing implementation for x and y coordinates. 
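# NOTE (editor's commentary, not yt source): the cartesian_to_cylindrical
# helper imported above returns (R, z, theta) -- the "rzt" ordering that
# matches this handler's default ("r", "z", "theta") axis order -- whereas
# cylindrical_to_cartesian expects (theta, R, z)-ordered input, so the two
# are not layout-compatible inverses on the same array.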
_setup_dummy_cartesian_coords_and_widths(registry, axes=("x", "y")) _setup_polar_coordinates(registry, self.axis_id) f1, f2 = _get_coord_fields(self.axis_id["z"]) registry.add_field( ("index", "dz"), sampling_type="cell", function=f1, display_field=False, units="code_length", ) registry.add_field( ("index", "z"), sampling_type="cell", function=f2, display_field=False, units="code_length", ) def _CylindricalVolume(field, data): r = data["index", "r"] dr = data["index", "dr"] vol = 0.5 * ((r + 0.5 * dr) ** 2 - (r - 0.5 * dr) ** 2) vol *= data["index", "dtheta"] vol *= data["index", "dz"] return vol registry.add_field( ("index", "cell_volume"), sampling_type="cell", function=_CylindricalVolume, units="code_length**3", ) registry.alias(("index", "volume"), ("index", "cell_volume")) def _path_z(field, data): return data["index", "dz"] registry.add_field( ("index", "path_element_z"), sampling_type="cell", function=_path_z, units="code_length", ) def pixelize( self, dimension, data_source, field, bounds, size, antialias=True, periodic=False, *, return_mask=False, ): # Note that above, we set periodic by default to be *false*. This is # because our pixelizers, at present, do not handle periodicity # correctly, and if you change the "width" of a cylindrical plot, it # double-counts in the edge buffers. See, for instance, issue 1669. ax_name = self.axis_name[dimension] if ax_name in ("r", "theta"): buff, mask = self._ortho_pixelize( data_source, field, bounds, size, antialias, dimension, periodic ) elif ax_name == "z": # This is admittedly a very hacky way to resolve a bug # it's very likely that the *right* fix would have to # be applied upstream of this function, *but* this case # has never worked properly so maybe it's still preferable to # not having a solution ? # see https://github.com/yt-project/yt/pull/3533 bounds = (*bounds[2:4], *bounds[:2]) buff, mask = self._cyl_pixelize(data_source, field, bounds, size, antialias) else: # Pixelizing along a cylindrical surface is a bit tricky raise NotImplementedError if return_mask: assert mask is None or mask.dtype == bool return buff, mask else: return buff def pixelize_line(self, field, start_point, end_point, npoints): raise NotImplementedError def _ortho_pixelize( self, data_source, field, bounds, size, antialias, dim, periodic ): period = self.period[:2].copy() # dummy here period[0] = self.period[self.x_axis[dim]] period[1] = self.period[self.y_axis[dim]] if hasattr(period, "in_units"): period = period.in_units("code_length").d buff = np.full(size, np.nan, dtype="float64") mask = pixelize_cartesian( buff, data_source["px"], data_source["py"], data_source["pdx"], data_source["pdy"], data_source[field], bounds, int(antialias), period, int(periodic), ) return buff, mask def _cyl_pixelize(self, data_source, field, bounds, size, antialias): buff = np.full((size[1], size[0]), np.nan, dtype="f8") mask = pixelize_cylinder( buff, data_source["px"], data_source["pdx"], data_source["py"], data_source["pdy"], data_source[field], bounds, return_mask=True, ) return buff, mask _x_pairs = (("r", "theta"), ("z", "r"), ("theta", "r")) _y_pairs = (("r", "z"), ("z", "theta"), ("theta", "z")) _image_axis_name = None @property def image_axis_name(self): if self._image_axis_name is not None: return self._image_axis_name # This is the x and y axes labels that get displayed. For # non-Cartesian coordinates, we usually want to override these for # Cartesian coordinates, since we transform them. 
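# NOTE (editor's example, not yt source): with the default
# ("r", "z", "theta") ordering this maps a slice along "z" to Cartesian
# ("x", "y") labels, while "r" and "theta" slices keep curvilinear
# ("\theta", "z") and ("r", "z") labels respectively.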
rv = { self.axis_id["r"]: ("\\theta", "z"), self.axis_id["z"]: ("x", "y"), self.axis_id["theta"]: ("r", "z"), } for i in list(rv.keys()): rv[self.axis_name[i]] = rv[i] rv[self.axis_name[i].upper()] = rv[i] self._image_axis_name = rv return rv def convert_from_cartesian(self, coord): return cartesian_to_cylindrical(coord) def convert_to_cartesian(self, coord): return cylindrical_to_cartesian(coord) def convert_to_cylindrical(self, coord): return coord def convert_from_cylindrical(self, coord): return coord def convert_to_spherical(self, coord): raise NotImplementedError def convert_from_spherical(self, coord): raise NotImplementedError @property def period(self): return np.array([0.0, 0.0, 2.0 * np.pi]) @cached_property def _polar_bounds(self): return _get_polar_bounds(self, axes=("r", "theta")) def sanitize_center(self, center, axis): center, display_center = super().sanitize_center(center, axis) display_center = [ 0.0 * display_center[0], 0.0 * display_center[1], 0.0 * display_center[2], ] ax_name = self.axis_name[axis] r_ax = self.axis_id["r"] theta_ax = self.axis_id["theta"] z_ax = self.axis_id["z"] if ax_name == "r": display_center[theta_ax] = self.ds.domain_center[theta_ax] display_center[z_ax] = self.ds.domain_center[z_ax] elif ax_name == "theta": # use existing center value for idx in (r_ax, z_ax): display_center[idx] = center[idx] elif ax_name == "z": xxmin, xxmax, yymin, yymax = self._polar_bounds xc = (xxmin + xxmax) / 2 yc = (yymin + yymax) / 2 display_center = (xc, yc, 0 * xc) return center, display_center def sanitize_width(self, axis, width, depth): name = self.axis_name[axis] r_ax, theta_ax, z_ax = ( self.ds.coordinates.axis_id[ax] for ax in ("r", "theta", "z") ) if width is not None: width = super().sanitize_width(axis, width, depth) # Note: regardless of axes, these are set up to give consistent plots # when plotted, which is not strictly a "right hand rule" for axes. elif name == "r": # soup can label width = [self.ds.domain_width[theta_ax], self.ds.domain_width[z_ax]] elif name == "theta": width = [self.ds.domain_width[r_ax], self.ds.domain_width[z_ax]] elif name == "z": xxmin, xxmax, yymin, yymax = self._polar_bounds xw = xxmax - xxmin yw = yymax - yymin width = [xw, yw] return width ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/coordinates/geographic_coordinates.py0000644000175100001770000005016614714401662023452 0ustar00runnerdockerimport numpy as np import unyt from yt.utilities.lib.pixelization_routines import pixelize_cartesian, pixelize_cylinder from .coordinate_handler import ( CoordinateHandler, _get_coord_fields, _setup_dummy_cartesian_coords_and_widths, ) class GeographicCoordinateHandler(CoordinateHandler): radial_axis = "altitude" name = "geographic" def __init__(self, ds, ordering=None): if ordering is None: ordering = ("latitude", "longitude", self.radial_axis) super().__init__(ds, ordering) self.image_units = {} self.image_units[self.axis_id["latitude"]] = (None, None) self.image_units[self.axis_id["longitude"]] = (None, None) self.image_units[self.axis_id[self.radial_axis]] = ("deg", "deg") def setup_fields(self, registry): # Missing implementation for x, y and z coordinates. 
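# NOTE (editor's commentary, not yt source): the angular latitude/longitude
# fields registered below are dimensionless (degrees stored as bare
# numbers), while the radial axis -- "altitude" here, "depth" in the
# internal variant -- carries code_length; the theta/phi radian aliases
# are derived from them further down.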
_setup_dummy_cartesian_coords_and_widths(registry, axes=("x", "y", "z")) f1, f2 = _get_coord_fields(self.axis_id["latitude"], "") registry.add_field( ("index", "dlatitude"), sampling_type="cell", function=f1, display_field=False, units="", ) registry.add_field( ("index", "latitude"), sampling_type="cell", function=f2, display_field=False, units="", ) f1, f2 = _get_coord_fields(self.axis_id["longitude"], "") registry.add_field( ("index", "dlongitude"), sampling_type="cell", function=f1, display_field=False, units="", ) registry.add_field( ("index", "longitude"), sampling_type="cell", function=f2, display_field=False, units="", ) f1, f2 = _get_coord_fields(self.axis_id[self.radial_axis]) registry.add_field( ("index", f"d{self.radial_axis}"), sampling_type="cell", function=f1, display_field=False, units="code_length", ) registry.add_field( ("index", self.radial_axis), sampling_type="cell", function=f2, display_field=False, units="code_length", ) def _SphericalVolume(field, data): # We can use the transformed coordinates here. # Here we compute the spherical volume element exactly r = data["index", "r"] dr = data["index", "dr"] theta = data["index", "theta"] dtheta = data["index", "dtheta"] vol = ((r + 0.5 * dr) ** 3 - (r - 0.5 * dr) ** 3) / 3.0 vol *= np.cos(theta - 0.5 * dtheta) - np.cos(theta + 0.5 * dtheta) vol *= data["index", "dphi"] return vol registry.add_field( ("index", "cell_volume"), sampling_type="cell", function=_SphericalVolume, units="code_length**3", ) registry.alias(("index", "volume"), ("index", "cell_volume")) def _path_radial_axis(field, data): return data["index", f"d{self.radial_axis}"] registry.add_field( ("index", f"path_element_{self.radial_axis}"), sampling_type="cell", function=_path_radial_axis, units="code_length", ) def _path_latitude(field, data): # We use r here explicitly return data["index", "r"] * data["index", "dlatitude"] * np.pi / 180.0 registry.add_field( ("index", "path_element_latitude"), sampling_type="cell", function=_path_latitude, units="code_length", ) def _path_longitude(field, data): # We use r here explicitly return ( data["index", "r"] * data["index", "dlongitude"] * np.pi / 180.0 * np.sin((90 - data["index", "latitude"]) * np.pi / 180.0) ) registry.add_field( ("index", "path_element_longitude"), sampling_type="cell", function=_path_longitude, units="code_length", ) def _latitude_to_theta(field, data): # latitude runs from -90 to 90 # theta = 0 at +90 deg, np.pi at -90 return (90.0 - data["index", "latitude"]) * np.pi / 180.0 registry.add_field( ("index", "theta"), sampling_type="cell", function=_latitude_to_theta, units="", ) def _dlatitude_to_dtheta(field, data): return data["index", "dlatitude"] * np.pi / 180.0 registry.add_field( ("index", "dtheta"), sampling_type="cell", function=_dlatitude_to_dtheta, units="", ) def _longitude_to_phi(field, data): # longitude runs from -180 to 180 lonvals = data[("index", "longitude")] neglons = lonvals < 0.0 if np.any(neglons): lonvals[neglons] = lonvals[neglons] + 360.0 return lonvals * np.pi / 180.0 registry.add_field( ("index", "phi"), sampling_type="cell", function=_longitude_to_phi, units="" ) def _dlongitude_to_dphi(field, data): return data["index", "dlongitude"] * np.pi / 180.0 registry.add_field( ("index", "dphi"), sampling_type="cell", function=_dlongitude_to_dphi, units="", ) self._setup_radial_fields(registry) def _setup_radial_fields(self, registry): # This stays here because we don't want to risk the field detector not # properly getting the data_source, etc. 
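# NOTE (editor's example, not yt source): with ds.surface_height set to
# 5000 code_length (as in the test suite below), a cell at altitude 10
# gets r = 5010 code_length; when no surface height is available the
# offset defaults to zero and r equals the altitude.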
def _altitude_to_radius(field, data): surface_height = data.get_field_parameter("surface_height") if surface_height is None: if hasattr(data.ds, "surface_height"): surface_height = data.ds.surface_height else: surface_height = data.ds.quan(0.0, "code_length") return data["index", "altitude"] + surface_height registry.add_field( ("index", "r"), sampling_type="cell", function=_altitude_to_radius, units="code_length", ) registry.alias(("index", "dr"), ("index", "daltitude")) def _retrieve_radial_offset(self, data_source=None): # This returns the factor by which the radial field is multiplied and # the scalar its offset by. Typically the "factor" will *only* be # either 1.0 or -1.0. The order will be factor * r + offset. # Altitude is the radius from the central zone minus the radius of the # surface. Depth to radius is negative value of depth plus the # outermost radius. surface_height = None if data_source is not None: surface_height = data_source.get_field_parameter("surface_height") if surface_height is None: if hasattr(self.ds, "surface_height"): surface_height = self.ds.surface_height else: surface_height = self.ds.quan(0.0, "code_length") return surface_height, 1.0 def pixelize( self, dimension, data_source, field, bounds, size, antialias=True, periodic=True, *, return_mask=False, ): if self.axis_name[dimension] in ("latitude", "longitude"): buff, mask = self._cyl_pixelize( data_source, field, bounds, size, antialias, dimension ) elif self.axis_name[dimension] == self.radial_axis: buff, mask = self._ortho_pixelize( data_source, field, bounds, size, antialias, dimension, periodic ) else: raise NotImplementedError if return_mask: assert mask is None or mask.dtype == bool return buff, mask else: return buff def pixelize_line(self, field, start_point, end_point, npoints): raise NotImplementedError def _ortho_pixelize( self, data_source, field, bounds, size, antialias, dimension, periodic ): period = self.period[:2].copy() period[0] = self.period[self.x_axis[dimension]] period[1] = self.period[self.y_axis[dimension]] if hasattr(period, "in_units"): period = period.in_units("code_length").d # For a radial axis, px will correspond to longitude and py will # correspond to latitude. px = data_source["px"] pdx = data_source["pdx"] py = data_source["py"] pdy = data_source["pdy"] buff = np.full((size[1], size[0]), np.nan, dtype="float64") mask = pixelize_cartesian( buff, px, py, pdx, pdy, data_source[field], bounds, int(antialias), period, int(periodic), ) return buff, mask def _cyl_pixelize(self, data_source, field, bounds, size, antialias, dimension): offset, factor = self._retrieve_radial_offset(data_source) r = factor * data_source["py"] + offset # Because of the axis-ordering, dimensions 0 and 1 both have r as py # and the angular coordinate as px. But we need to figure out how to # convert our coordinate back to an actual angle, based on which # dimension we're in. pdx = data_source["pdx"].d * np.pi / 180 if self.axis_name[self.x_axis[dimension]] == "latitude": px = (data_source["px"].d + 90) * np.pi / 180 do_transpose = True elif self.axis_name[self.x_axis[dimension]] == "longitude": px = (data_source["px"].d + 180) * np.pi / 180 do_transpose = False else: # We should never get here! 
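# NOTE (editor's commentary, not yt source): the remapping above shifts
# pixel angles into the range pixelize_cylinder expects -- latitude in
# [-90, 90] deg becomes [0, pi] rad and longitude in [-180, 180] deg
# becomes [0, 2*pi] rad -- with a transpose applied afterwards only in
# the latitude case.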
raise NotImplementedError buff = np.full((size[1], size[0]), np.nan, dtype="f8") mask = pixelize_cylinder( buff, r, data_source["pdy"], px, pdx, data_source[field], bounds, return_mask=True, ) if do_transpose: buff = buff.transpose() mask = mask.transpose() return buff, mask def convert_from_cartesian(self, coord): raise NotImplementedError def convert_to_cartesian(self, coord): offset, factor = self._retrieve_radial_offset() if isinstance(coord, np.ndarray) and len(coord.shape) > 1: rad = self.axis_id[self.radial_axis] lon = self.axis_id["longitude"] lat = self.axis_id["latitude"] r = factor * coord[:, rad] + offset colatitude = _latitude_to_colatitude(coord[:, lat]) phi = coord[:, lon] * np.pi / 180 nc = np.zeros_like(coord) # r, theta, phi nc[:, lat] = np.cos(phi) * np.sin(colatitude) * r nc[:, lon] = np.sin(phi) * np.sin(colatitude) * r nc[:, rad] = np.cos(colatitude) * r else: a, b, c = coord colatitude = _latitude_to_colatitude(b) phi = a * np.pi / 180 r = factor * c + offset nc = ( np.cos(phi) * np.sin(colatitude) * r, np.sin(phi) * np.sin(colatitude) * r, np.cos(colatitude) * r, ) return nc def convert_to_cylindrical(self, coord): raise NotImplementedError def convert_from_cylindrical(self, coord): raise NotImplementedError def convert_to_spherical(self, coord): raise NotImplementedError def convert_from_spherical(self, coord): raise NotImplementedError _image_axis_name = None @property def image_axis_name(self): if self._image_axis_name is not None: return self._image_axis_name # This is the x and y axes labels that get displayed. For # non-Cartesian coordinates, we usually want to override these for # Cartesian coordinates, since we transform them. rv = { self.axis_id["latitude"]: ( "x / \\sin(\\mathrm{latitude})", "y / \\sin(\\mathrm{latitude})", ), self.axis_id["longitude"]: ("R", "z"), self.axis_id[self.radial_axis]: ("longitude", "latitude"), } for i in list(rv.keys()): rv[self.axis_name[i]] = rv[i] rv[self.axis_name[i].capitalize()] = rv[i] self._image_axis_name = rv return rv _x_pairs = ( ("latitude", "longitude"), ("longitude", "latitude"), ("altitude", "longitude"), ) _y_pairs = ( ("latitude", "altitude"), ("longitude", "altitude"), ("altitude", "latitude"), ) _data_projection = None @property def data_projection(self): # this will control the default projection to use when displaying data if self._data_projection is not None: return self._data_projection dpj = {} for ax in self.axis_order: if ax == self.radial_axis: dpj[ax] = "Mollweide" else: dpj[ax] = None self._data_projection = dpj return dpj _data_transform = None @property def data_transform(self): # this is the coordinate system on which the data is defined (the crs). 
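# NOTE (editor's commentary, not yt source): as with data_projection
# above, only the radial axis gets a named transform ("PlateCarree");
# these strings are presumably resolved to cartopy CRS objects downstream
# so that latitude/longitude images can be drawn on map projections such
# as Mollweide.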
if self._data_transform is not None: return self._data_transform dtx = {} for ax in self.axis_order: if ax == self.radial_axis: dtx[ax] = "PlateCarree" else: dtx[ax] = None self._data_transform = dtx return dtx @property def period(self): return self.ds.domain_width def sanitize_center(self, center, axis): center, display_center = super().sanitize_center(center, axis) name = self.axis_name[axis] if name == self.radial_axis: display_center = center elif name == "latitude": display_center = ( 0.0 * display_center[0], 0.0 * display_center[1], 0.0 * display_center[2], ) elif name == "longitude": ri = self.axis_id[self.radial_axis] c = (self.ds.domain_right_edge[ri] + self.ds.domain_left_edge[ri]) / 2.0 display_center = [ 0.0 * display_center[0], 0.0 * display_center[1], 0.0 * display_center[2], ] display_center[self.axis_id["latitude"]] = c return center, display_center def sanitize_width(self, axis, width, depth): name = self.axis_name[axis] if width is not None: width = super().sanitize_width(axis, width, depth) elif name == self.radial_axis: rax = self.radial_axis width = [ self.ds.domain_width[self.x_axis[rax]], self.ds.domain_width[self.y_axis[rax]], ] elif name == "latitude": ri = self.axis_id[self.radial_axis] # Remember, in spherical coordinates when we cut in theta, # we create a conic section width = [2.0 * self.ds.domain_width[ri], 2.0 * self.ds.domain_width[ri]] elif name == "longitude": ri = self.axis_id[self.radial_axis] width = [self.ds.domain_width[ri], 2.0 * self.ds.domain_width[ri]] return width class InternalGeographicCoordinateHandler(GeographicCoordinateHandler): radial_axis = "depth" name = "internal_geographic" def _setup_radial_fields(self, registry): # Altitude is the radius from the central zone minus the radius of the # surface. def _depth_to_radius(field, data): outer_radius = data.get_field_parameter("outer_radius") if outer_radius is None: if hasattr(data.ds, "outer_radius"): outer_radius = data.ds.outer_radius else: # Otherwise, we assume that the depth goes to full depth, # so we can look at the domain right edge in depth. rax = self.axis_id[self.radial_axis] outer_radius = data.ds.domain_right_edge[rax] return -1.0 * data["index", "depth"] + outer_radius registry.add_field( ("index", "r"), sampling_type="cell", function=_depth_to_radius, units="code_length", ) registry.alias(("index", "dr"), ("index", "ddepth")) def _retrieve_radial_offset(self, data_source=None): # Depth means switching sign and adding to full radius outer_radius = None if data_source is not None: outer_radius = data_source.get_field_parameter("outer_radius") if outer_radius is None: if hasattr(self.ds, "outer_radius"): outer_radius = self.ds.outer_radius else: # Otherwise, we assume that the depth goes to full depth, # so we can look at the domain right edge in depth. 
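# NOTE (editor's example, not yt source): with ds.outer_radius set to
# 5000 code_length, a cell at depth 10 maps to r = -1.0 * 10 + 5000
# = 4990 code_length (offset = outer_radius, factor = -1.0), mirroring
# the altitude case above.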
rax = self.axis_id[self.radial_axis] outer_radius = self.ds.domain_right_edge[rax] return outer_radius, -1.0 _x_pairs = ( ("latitude", "longitude"), ("longitude", "latitude"), ("depth", "longitude"), ) _y_pairs = (("latitude", "depth"), ("longitude", "depth"), ("depth", "latitude")) def sanitize_center(self, center, axis): center, display_center = super( GeographicCoordinateHandler, self ).sanitize_center(center, axis) name = self.axis_name[axis] if name == self.radial_axis: display_center = center elif name == "latitude": display_center = ( 0.0 * display_center[0], 0.0 * display_center[1], 0.0 * display_center[2], ) elif name == "longitude": ri = self.axis_id[self.radial_axis] offset, factor = self._retrieve_radial_offset() outermost = factor * self.ds.domain_left_edge[ri] + offset display_center = [ 0.0 * display_center[0], 0.0 * display_center[1], 0.0 * display_center[2], ] display_center[self.axis_id["latitude"]] = outermost / 2.0 return center, display_center def sanitize_width(self, axis, width, depth): name = self.axis_name[axis] if width is not None: width = super(GeographicCoordinateHandler, self).sanitize_width( axis, width, depth ) elif name == self.radial_axis: rax = self.radial_axis width = [ self.ds.domain_width[self.x_axis[rax]], self.ds.domain_width[self.y_axis[rax]], ] elif name == "latitude": ri = self.axis_id[self.radial_axis] # Remember, in spherical coordinates when we cut in theta, # we create a conic section offset, factor = self._retrieve_radial_offset() outermost = factor * self.ds.domain_left_edge[ri] + offset width = [2.0 * outermost, 2.0 * outermost] elif name == "longitude": ri = self.axis_id[self.radial_axis] offset, factor = self._retrieve_radial_offset() outermost = factor * self.ds.domain_left_edge[ri] + offset width = [outermost, 2.0 * outermost] return width def _latitude_to_colatitude(lat_vals): # convert latitude to theta, accounting for units, # including the case where the units are code_length # due to how yt stores the domain_center units for # geographic coordinates. 
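# NOTE (editor's examples, not yt source): _latitude_to_colatitude(90.0)
# returns 0.0 and _latitude_to_colatitude(-90.0) returns pi; input with
# degree units comes back converted to radians, while length-dimension
# input is unwrapped and treated as a bare number of degrees.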
if isinstance(lat_vals, unyt.unyt_array): if lat_vals.units.dimensions == unyt.dimensions.length: return (90.0 - lat_vals.d) * np.pi / 180.0 ninety = unyt.unyt_quantity(90.0, "degree") return (ninety - lat_vals).to("radian") return (90 - lat_vals) * np.pi / 180.0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/coordinates/polar_coordinates.py0000644000175100001770000000030114714401662022441 0ustar00runnerdockerfrom .cylindrical_coordinates import CylindricalCoordinateHandler class PolarCoordinateHandler(CylindricalCoordinateHandler): name = "polar" _default_axis_order = ("r", "theta", "z") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/coordinates/spec_cube_coordinates.py0000644000175100001770000000605114714401662023264 0ustar00runnerdockerfrom .cartesian_coordinates import CartesianCoordinateHandler from .coordinate_handler import _get_coord_fields class SpectralCubeCoordinateHandler(CartesianCoordinateHandler): name = "spectral_cube" def __init__(self, ds, ordering=None): if ordering is None: ordering = tuple( "xyz"[axis] for axis in (ds.lon_axis, ds.lat_axis, ds.spec_axis) ) super().__init__(ds, ordering) self.default_unit_label = {} names = {} if ds.lon_name != "X" or ds.lat_name != "Y": names["x"] = r"Image\ x" names["y"] = r"Image\ y" # We can just use ds.lon_axis here self.default_unit_label[ds.lon_axis] = "pixel" self.default_unit_label[ds.lat_axis] = "pixel" names["z"] = ds.spec_name # Again, can use spec_axis here self.default_unit_label[ds.spec_axis] = ds.spec_unit self._image_axis_name = ian = {} for ax in "xyz": axi = self.axis_id[ax] xax = self.axis_name[self.x_axis[ax]] yax = self.axis_name[self.y_axis[ax]] ian[axi] = ian[ax] = ian[ax.upper()] = ( names.get(xax, xax), names.get(yax, yax), ) def _spec_axis(ax, x, y): p = (x, y)[ax] return [self.ds.pixel2spec(pp).v for pp in p] self.axis_field = {} self.axis_field[self.ds.spec_axis] = _spec_axis def setup_fields(self, registry): if not self.ds.no_cgs_equiv_length: return super().setup_fields(registry) for axi, ax in enumerate("xyz"): f1, f2 = _get_coord_fields(axi) def _get_length_func(): def _length_func(field, data): # Just use axis 0 rv = data.ds.arr(data.fcoords[..., 0].copy(), field.units) rv[:] = 1.0 return rv return _length_func registry.add_field( ("index", f"d{ax}"), sampling_type="cell", function=f1, display_field=False, units="code_length", ) registry.add_field( ("index", f"path_element_{ax}"), sampling_type="cell", function=_get_length_func(), display_field=False, units="", ) registry.add_field( ("index", f"{ax}"), sampling_type="cell", function=f2, display_field=False, units="code_length", ) self._register_volume(registry) self._check_fields(registry) _x_pairs = (("x", "y"), ("y", "x"), ("z", "x")) _y_pairs = (("x", "z"), ("y", "z"), ("z", "y")) def convert_to_cylindrical(self, coord): raise NotImplementedError def convert_from_cylindrical(self, coord): raise NotImplementedError @property def image_axis_name(self): return self._image_axis_name ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/coordinates/spherical_coordinates.py0000644000175100001770000003314714714401662023314 0ustar00runnerdockerfrom functools import cached_property import numpy as np from yt.utilities.lib.pixelization_routines import pixelize_aitoff, pixelize_cylinder from .coordinate_handler import ( CoordinateHandler, _get_coord_fields, 
_get_polar_bounds, _setup_dummy_cartesian_coords_and_widths, _setup_polar_coordinates, ) class SphericalCoordinateHandler(CoordinateHandler): name = "spherical" _default_axis_order = ("r", "theta", "phi") def __init__(self, ds, ordering=None): super().__init__(ds, ordering) # Generate self.image_units = {} self.image_units[self.axis_id["r"]] = (1, 1) self.image_units[self.axis_id["theta"]] = (None, None) self.image_units[self.axis_id["phi"]] = (None, None) def setup_fields(self, registry): # Missing implementation for x, y and z coordinates. _setup_dummy_cartesian_coords_and_widths(registry, axes=("x", "y", "z")) _setup_polar_coordinates(registry, self.axis_id) f1, f2 = _get_coord_fields(self.axis_id["phi"], "dimensionless") registry.add_field( ("index", "dphi"), sampling_type="cell", function=f1, display_field=False, units="dimensionless", ) registry.add_field( ("index", "phi"), sampling_type="cell", function=f2, display_field=False, units="dimensionless", ) def _SphericalVolume(field, data): # Here we compute the spherical volume element exactly r = data["index", "r"] dr = data["index", "dr"] theta = data["index", "theta"] dtheta = data["index", "dtheta"] vol = ((r + 0.5 * dr) ** 3 - (r - 0.5 * dr) ** 3) / 3.0 vol *= np.cos(theta - 0.5 * dtheta) - np.cos(theta + 0.5 * dtheta) vol *= data["index", "dphi"] return vol registry.add_field( ("index", "cell_volume"), sampling_type="cell", function=_SphericalVolume, units="code_length**3", ) registry.alias(("index", "volume"), ("index", "cell_volume")) def _path_phi(field, data): # Note: this already assumes cell-centered return ( data["index", "r"] * data["index", "dphi"] * np.sin(data["index", "theta"]) ) registry.add_field( ("index", "path_element_phi"), sampling_type="cell", function=_path_phi, units="code_length", ) def pixelize( self, dimension, data_source, field, bounds, size, antialias=True, periodic=True, *, return_mask=False, ): self.period name = self.axis_name[dimension] if name == "r": buff, mask = self._ortho_pixelize( data_source, field, bounds, size, antialias, dimension, periodic ) elif name in ("theta", "phi"): if name == "theta": # This is admittedly a very hacky way to resolve a bug # it's very likely that the *right* fix would have to # be applied upstream of this function, *but* this case # has never worked properly so maybe it's still preferable to # not having a solution ? 
# see https://github.com/yt-project/yt/pull/3533 bounds = (*bounds[2:4], *bounds[:2]) buff, mask = self._cyl_pixelize( data_source, field, bounds, size, antialias, dimension ) else: raise NotImplementedError if return_mask: assert mask is None or mask.dtype == bool return buff, mask else: return buff def pixelize_line(self, field, start_point, end_point, npoints): raise NotImplementedError def _ortho_pixelize( self, data_source, field, bounds, size, antialias, dim, periodic ): # use Aitoff projection # http://paulbourke.net/geometry/transformationprojection/ bounds = tuple(_.ndview for _ in self._aitoff_bounds) buff, mask = pixelize_aitoff( azimuth=data_source["py"], dazimuth=data_source["pdy"], colatitude=data_source["px"], dcolatitude=data_source["pdx"], buff_size=size, field=data_source[field], bounds=bounds, input_img=None, azimuth_offset=0, colatitude_offset=0, return_mask=True, ) return buff.T, mask.T def _cyl_pixelize(self, data_source, field, bounds, size, antialias, dimension): name = self.axis_name[dimension] buff = np.full((size[1], size[0]), np.nan, dtype="f8") if name == "theta": mask = pixelize_cylinder( buff, data_source["px"], data_source["pdx"], data_source["py"], data_source["pdy"], data_source[field], bounds, return_mask=True, ) elif name == "phi": # Note that we feed in buff.T here mask = pixelize_cylinder( buff.T, data_source["px"], data_source["pdx"], data_source["py"], data_source["pdy"], data_source[field], bounds, return_mask=True, ).T else: raise RuntimeError return buff, mask def convert_from_cartesian(self, coord): raise NotImplementedError def convert_to_cartesian(self, coord): if isinstance(coord, np.ndarray) and len(coord.shape) > 1: ri = self.axis_id["r"] thetai = self.axis_id["theta"] phii = self.axis_id["phi"] r = coord[:, ri] theta = coord[:, thetai] phi = coord[:, phii] nc = np.zeros_like(coord) # r, theta, phi nc[:, ri] = np.cos(phi) * np.sin(theta) * r nc[:, thetai] = np.sin(phi) * np.sin(theta) * r nc[:, phii] = np.cos(theta) * r else: r, theta, phi = coord nc = ( np.cos(phi) * np.sin(theta) * r, np.sin(phi) * np.sin(theta) * r, np.cos(theta) * r, ) return nc def convert_to_cylindrical(self, coord): raise NotImplementedError def convert_from_cylindrical(self, coord): raise NotImplementedError def convert_to_spherical(self, coord): raise NotImplementedError def convert_from_spherical(self, coord): raise NotImplementedError _image_axis_name = None @property def image_axis_name(self): if self._image_axis_name is not None: return self._image_axis_name # This is the x and y axes labels that get displayed. For # non-Cartesian coordinates, we usually want to override these for # Cartesian coordinates, since we transform them. 
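# NOTE (editor's commentary, not yt source): the "r" entry below spells
# out the Hammer-Aitoff image coordinates in LaTeX; "theta" slices get
# conic x/sin(theta), y/sin(theta) labels and "phi" slices get poloidal
# (R, z) labels.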
rv = { self.axis_id["r"]: ( # these are the Hammer-Aitoff normalized coordinates # conventions: # - theta is the colatitude, from 0 to PI # - bartheta is the latitude, from -PI/2 to +PI/2 (bartheta = PI/2 - theta) # - phi is the azimuth, from 0 to 2PI # - lambda is the longitude, from -PI to PI (lambda = phi - PI) r"\frac{2\cos(\mathrm{\bar{\theta}})\sin(\lambda/2)}{\sqrt{1 + \cos(\bar{\theta}) \cos(\lambda/2)}}", r"\frac{sin(\bar{\theta})}{\sqrt{1 + \cos(\bar{\theta}) \cos(\lambda/2)}}", "\\theta", ), self.axis_id["theta"]: ("x / \\sin(\\theta)", "y / \\sin(\\theta)"), self.axis_id["phi"]: ("R", "z"), } for i in list(rv.keys()): rv[self.axis_name[i]] = rv[i] rv[self.axis_name[i].capitalize()] = rv[i] self._image_axis_name = rv return rv _x_pairs = (("r", "theta"), ("theta", "r"), ("phi", "r")) _y_pairs = (("r", "phi"), ("theta", "phi"), ("phi", "theta")) @property def period(self): return self.ds.domain_width @cached_property def _poloidal_bounds(self): ri = self.axis_id["r"] ti = self.axis_id["theta"] rmin = self.ds.domain_left_edge[ri] rmax = self.ds.domain_right_edge[ri] thetamin = self.ds.domain_left_edge[ti] thetamax = self.ds.domain_right_edge[ti] corners = [ (rmin, thetamin), (rmin, thetamax), (rmax, thetamin), (rmax, thetamax), ] def to_poloidal_plane(r, theta): # take a r, theta position and return # cylindrical R, z coordinates R = r * np.sin(theta) z = r * np.cos(theta) return R, z cylindrical_corner_coords = [to_poloidal_plane(*corner) for corner in corners] thetamin = thetamin.d thetamax = thetamax.d Rmin = min(R for R, z in cylindrical_corner_coords) if thetamin <= np.pi / 2 <= thetamax: Rmax = rmax else: Rmax = max(R for R, z in cylindrical_corner_coords) zmin = min(z for R, z in cylindrical_corner_coords) zmax = max(z for R, z in cylindrical_corner_coords) return Rmin, Rmax, zmin, zmax @cached_property def _conic_bounds(self): return _get_polar_bounds(self, axes=("r", "phi")) @cached_property def _aitoff_bounds(self): # at the time of writing this function, yt's support for curvilinear # coordinates is a bit hacky, as many components of the system still # expect to receive coordinates with a length dimension. Ultimately # this is not needed but calls for a large refactor. ONE = self.ds.quan(1, "code_length") # colatitude ti = self.axis_id["theta"] thetamin = self.ds.domain_left_edge[ti] thetamax = self.ds.domain_right_edge[ti] # latitude latmin = ONE * np.pi / 2 - thetamax latmax = ONE * np.pi / 2 - thetamin # azimuth pi = self.axis_id["phi"] phimin = self.ds.domain_left_edge[pi] phimax = self.ds.domain_right_edge[pi] # longitude lonmin = phimin - ONE * np.pi lonmax = phimax - ONE * np.pi corners = [ (latmin, lonmin), (latmin, lonmax), (latmax, lonmin), (latmax, lonmax), ] def aitoff_z(latitude, longitude): return np.sqrt(1 + np.cos(latitude) * np.cos(longitude / 2)) def aitoff_x(latitude, longitude): return ( 2 * np.cos(latitude) * np.sin(longitude / 2) / aitoff_z(latitude, longitude) ) def aitoff_y(latitude, longitude): return np.sin(latitude) / aitoff_z(latitude, longitude) def to_aitoff_plane(latitude, longitude): return aitoff_x(latitude, longitude), aitoff_y(latitude, longitude) aitoff_corner_coords = [to_aitoff_plane(*corner) for corner in corners] xmin = ONE * min(x for x, y in aitoff_corner_coords) xmax = ONE * max(x for x, y in aitoff_corner_coords) # theta is the colatitude # What this branch is meant to do is check whether the equator (latitude = 0) # is included in the domain. 
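# NOTE (editor's worked example, not yt source): for a full sphere
# (theta in [0, pi], phi in [0, 2*pi]) every corner has cos(latitude) = 0,
# so the corners alone give x == 0; the equator branch below widens the
# box to the expected extent, x in [-2, 2] and y in [-1, 1] in the
# normalization used here.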
if latmin < 0 < latmax: xmin = min(xmin, ONE * aitoff_x(0, lonmin)) xmax = max(xmax, ONE * aitoff_x(0, lonmax)) # the y direction is more straightforward because aitoff-projected parallels (y) # draw a convex shape, while aitoff-projected meridians (x) draw a concave shape ymin = ONE * min(y for x, y in aitoff_corner_coords) ymax = ONE * max(y for x, y in aitoff_corner_coords) return xmin, xmax, ymin, ymax def sanitize_center(self, center, axis): center, display_center = super().sanitize_center(center, axis) name = self.axis_name[axis] if name == "r": xxmin, xxmax, yymin, yymax = self._aitoff_bounds xc = (xxmin + xxmax) / 2 yc = (yymin + yymax) / 2 display_center = (0 * xc, xc, yc) elif name == "theta": xxmin, xxmax, yymin, yymax = self._conic_bounds xc = (xxmin + xxmax) / 2 yc = (yymin + yymax) / 2 display_center = (xc, 0 * xc, yc) elif name == "phi": Rmin, Rmax, zmin, zmax = self._poloidal_bounds xc = (Rmin + Rmax) / 2 yc = (zmin + zmax) / 2 display_center = (xc, yc) return center, display_center def sanitize_width(self, axis, width, depth): name = self.axis_name[axis] if width is not None: width = super().sanitize_width(axis, width, depth) elif name == "r": xxmin, xxmax, yymin, yymax = self._aitoff_bounds xw = xxmax - xxmin yw = yymax - yymin width = [xw, yw] elif name == "theta": # Remember, in spherical coordinates when we cut in theta, # we create a conic section xxmin, xxmax, yymin, yymax = self._conic_bounds xw = xxmax - xxmin yw = yymax - yymin width = [xw, yw] elif name == "phi": Rmin, Rmax, zmin, zmax = self._poloidal_bounds xw = Rmax - Rmin yw = zmax - zmin width = [xw, yw] return width ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.363153 yt-4.4.0/yt/geometry/coordinates/tests/0000755000175100001770000000000014714401715017527 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/coordinates/tests/__init__.py0000644000175100001770000000000014714401662021627 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/coordinates/tests/test_axial_pixelization.py0000644000175100001770000000075414714401662025044 0ustar00runnerdockerfrom yt.testing import _geom_transforms, fake_amr_ds from yt.utilities.answer_testing.framework import AxialPixelizationTest def test_axial_pixelization(): for geom in sorted(_geom_transforms): if geom == "spectral_cube": # skip this case as it was added much later and we don't want to keep # adding yield-based tests during the nose->pytest migration continue ds = fake_amr_ds(geometry=geom) yield AxialPixelizationTest(ds) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/coordinates/tests/test_cartesian_coordinates.py0000644000175100001770000000221014714401662025477 0ustar00runnerdocker# Some tests for the Cartesian coordinates handler import numpy as np from numpy.testing import assert_equal from yt.testing import fake_amr_ds # Our canonical tests are that we can access all of our fields and we can # compute our volume correctly. def test_cartesian_coordinates(): # We're going to load up a simple AMR grid and check its volume # calculations and path length calculations. 
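# NOTE (editor's commentary, not yt source): in Cartesian geometry each
# path element equals the corresponding cell width, and the cell volumes
# must tile the box exactly -- hence the comparison against
# ds.domain_width.prod() at the end of this test.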
ds = fake_amr_ds() axes = sorted(set(ds.coordinates.axis_name.values())) for i, axis in enumerate(axes): dd = ds.all_data() fi = ("index", axis) fd = ("index", f"d{axis}") fp = ("index", f"path_element_{axis}") ma = np.argmax(dd[fi]) assert_equal(dd[fi][ma] + dd[fd][ma] / 2.0, ds.domain_right_edge[i]) mi = np.argmin(dd[fi]) assert_equal(dd[fi][mi] - dd[fd][mi] / 2.0, ds.domain_left_edge[i]) assert_equal(dd[fd].min(), ds.index.get_smallest_dx()) assert_equal(dd[fd].max(), (ds.domain_width / ds.domain_dimensions)[i]) assert_equal(dd[fd], dd[fp]) assert_equal( dd["index", "cell_volume"].sum(dtype="float64"), ds.domain_width.prod() ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/coordinates/tests/test_cylindrical_coordinates.py0000644000175100001770000000245714714401662026040 0ustar00runnerdocker# Some tests for the Cylindrical coordinates handler import numpy as np from numpy.testing import assert_almost_equal, assert_equal from yt.testing import fake_amr_ds # Our canonical tests are that we can access all of our fields and we can # compute our volume correctly. def test_cylindrical_coordinates(): # We're going to load up a simple AMR grid and check its volume # calculations and path length calculations. ds = fake_amr_ds(geometry="cylindrical") axes = ["r", "z", "theta"] for i, axis in enumerate(axes): dd = ds.all_data() fi = ("index", axis) fd = ("index", f"d{axis}") ma = np.argmax(dd[fi]) assert_equal(dd[fi][ma] + dd[fd][ma] / 2.0, ds.domain_right_edge[i].d) mi = np.argmin(dd[fi]) assert_equal(dd[fi][mi] - dd[fd][mi] / 2.0, ds.domain_left_edge[i].d) assert_equal(dd[fd].max(), (ds.domain_width / ds.domain_dimensions)[i].d) assert_almost_equal( dd["index", "cell_volume"].sum(dtype="float64"), np.pi * ds.domain_width[0] ** 2 * ds.domain_width[1], ) assert_equal(dd["index", "path_element_r"], dd["index", "dr"]) assert_equal(dd["index", "path_element_z"], dd["index", "dz"]) assert_equal( dd["index", "path_element_theta"], dd["index", "r"] * dd["index", "dtheta"] ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/coordinates/tests/test_geographic_coordinates.py0000644000175100001770000001314314714401662025645 0ustar00runnerdocker# Some tests for the geographic coordinates handler import numpy as np import pytest import unyt from numpy.testing import assert_equal from yt.testing import assert_rel_equal, fake_amr_ds # Our canonical tests are that we can access all of our fields and we can # compute our volume correctly. @pytest.mark.parametrize("geometry", ("geographic", "internal_geographic")) def test_geographic_coordinates(geometry): # We're going to load up a simple AMR grid and check its volume # calculations and path length calculations. # Note that we are setting it up to have an altitude of 1000 maximum, which # means our volume will be that of a shell 1000 wide, starting at r of # whatever our surface_height is set to. 
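# NOTE (editor's commentary, not yt source): the volume check below
# compares summed cell volumes against the analytic shell volume
# (4/3) * pi * (outer_r**3 - inner_r**3), where inner_r = surface_height
# and outer_r = surface_height + domain_width[2] for the "geographic"
# branch (and the outer_radius analogues for "internal_geographic").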
ds = fake_amr_ds(geometry=geometry) if geometry == "geographic": ds.surface_height = ds.quan(5000.0, "code_length") inner_r = ds.surface_height outer_r = ds.surface_height + ds.domain_width[2] else: ds.outer_radius = ds.quan(5000.0, "code_length") inner_r = ds.outer_radius - ds.domain_right_edge[2] outer_r = ds.outer_radius radial_axis = ds.coordinates.radial_axis axes = ds.coordinates.axis_order for i, axis in enumerate(axes): dd = ds.all_data() fi = ("index", axis) fd = ("index", f"d{axis}") ma = np.argmax(dd[fi]) assert_equal(dd[fi][ma] + dd[fd][ma] / 2.0, ds.domain_right_edge[i].d) mi = np.argmin(dd[fi]) assert_equal(dd[fi][mi] - dd[fd][mi] / 2.0, ds.domain_left_edge[i].d) assert_equal(dd[fd].max(), (ds.domain_width / ds.domain_dimensions)[i].d) assert_equal(dd["index", "dtheta"], dd["index", "dlatitude"] * np.pi / 180.0) assert_equal(dd["index", "dphi"], dd["index", "dlongitude"] * np.pi / 180.0) # Note our terrible agreement here. assert_rel_equal( dd["index", "cell_volume"].sum(dtype="float64"), (4.0 / 3.0) * np.pi * (outer_r**3 - inner_r**3), 14, ) assert_equal( dd["index", f"path_element_{radial_axis}"], dd["index", f"d{radial_axis}"] ) assert_equal(dd["index", f"path_element_{radial_axis}"], dd["index", "dr"]) # Note that latitude corresponds to theta, longitude to phi assert_equal( dd["index", "path_element_latitude"], dd["index", "r"] * dd["index", "dlatitude"] * np.pi / 180.0, ) assert_equal( dd["index", "path_element_longitude"], ( dd["index", "r"] * dd["index", "dlongitude"] * np.pi / 180.0 * np.sin((90 - dd["index", "latitude"]) * np.pi / 180.0) ), ) # We also want to check that our radius is correct offset, factor = ds.coordinates._retrieve_radial_offset() radius = factor * dd["index", radial_axis] + offset assert_equal(dd["index", "r"], radius) @pytest.mark.parametrize("geometry", ("geographic", "internal_geographic")) def test_geographic_conversions(geometry): ds = fake_amr_ds(geometry=geometry) ad = ds.all_data() lats = ad["index", "latitude"] dlats = ad["index", "dlatitude"] theta = ad["index", "theta"] dtheta = ad["index", "dtheta"] # check that theta = 0, pi at latitudes of 90, -90 south_pole_id = np.where(lats == np.min(lats))[0][0] north_pole_id = np.where(lats == np.max(lats))[0][0] # check that we do in fact have -90, 90 exactly assert lats[south_pole_id] - dlats[south_pole_id] / 2.0 == -90.0 assert lats[north_pole_id] + dlats[north_pole_id] / 2.0 == 90.0 # check that theta=0 at the north pole, np.pi at the south assert theta[north_pole_id] - dtheta[north_pole_id] / 2 == 0.0 assert theta[south_pole_id] + dtheta[south_pole_id] / 2 == np.pi # check that longitude-phi conversions phi = ad["index", "phi"] dphi = ad["index", "dphi"] lons = ad["index", "longitude"] dlon = ad["index", "dlongitude"] lon_180 = np.where(lons == np.max(lons))[0][0] lon_neg180 = np.where(lons == np.min(lons))[0][0] # check we have -180, 180 exactly assert lons[lon_neg180] - dlon[lon_neg180] / 2.0 == -180.0 assert lons[lon_180] + dlon[lon_180] / 2.0 == 180.0 # check that those both convert to phi = np.pi assert phi[lon_neg180] - dphi[lon_neg180] / 2.0 == np.pi assert phi[lon_180] + dphi[lon_180] / 2.0 == np.pi # check that z = +/- radius at +/-90 # default expected axis order: lat, lon, radial axis r_val = ds.coordinates._retrieve_radial_offset()[0] coords = np.zeros((2, 3)) coords[0, 0] = 90.0 coords[1, 0] = -90.0 xyz = ds.coordinates.convert_to_cartesian(coords) z = xyz[:, 2] assert z[0] == r_val assert z[1] == -r_val @pytest.mark.parametrize("geometry", ("geographic", 
"internal_geographic")) def test_geographic_conversions_with_units(geometry): ds = fake_amr_ds(geometry=geometry) # _sanitize_center will give all values in 'code_length' coords = ds.arr(np.zeros((2, 3)), "code_length") xyz_u = ds.coordinates.convert_to_cartesian(coords) xyz = ds.coordinates.convert_to_cartesian(coords.d) assert_equal(xyz, xyz_u) coords = ds.arr(np.zeros((3,)), "code_length") xyz_u = ds.coordinates.convert_to_cartesian(coords) xyz = ds.coordinates.convert_to_cartesian(coords.d) assert_equal(xyz, xyz_u) # also check that if correct units are supplied, the # result has dimensions of length. coords = [ ds.arr(np.zeros((10,)), "degree"), ds.arr(np.zeros((10,)), "degree"), ds.arr(np.linspace(0, 100, 10), "code_length"), ] xyz = ds.coordinates.convert_to_cartesian(coords) for dim in xyz: assert dim.units.dimensions == unyt.dimensions.length ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/coordinates/tests/test_polar_coordinates.py0000644000175100001770000000253514714401662024655 0ustar00runnerdocker# Some tests for the polar coordinates handler # (Pretty similar to cylindrical, but different ordering) import numpy as np from numpy.testing import assert_almost_equal, assert_equal from yt.testing import fake_amr_ds # Our canonical tests are that we can access all of our fields and we can # compute our volume correctly. def test_cylindrical_coordinates(): # We're going to load up a simple AMR grid and check its volume # calculations and path length calculations. ds = fake_amr_ds(geometry="polar") axes = ["r", "theta", "z"] for i, axis in enumerate(axes): dd = ds.all_data() fi = ("index", axis) fd = ("index", f"d{axis}") ma = np.argmax(dd[fi]) assert_equal(dd[fi][ma] + dd[fd][ma] / 2.0, ds.domain_right_edge[i].d) mi = np.argmin(dd[fi]) assert_equal(dd[fi][mi] - dd[fd][mi] / 2.0, ds.domain_left_edge[i].d) assert_equal(dd[fd].max(), (ds.domain_width / ds.domain_dimensions)[i].d) assert_almost_equal( dd["index", "cell_volume"].sum(dtype="float64"), np.pi * ds.domain_width[0] ** 2 * ds.domain_width[2], ) assert_equal(dd["index", "path_element_r"], dd["index", "dr"]) assert_equal(dd["index", "path_element_z"], dd["index", "dz"]) assert_equal( dd["index", "path_element_theta"], dd["index", "r"] * dd["index", "dtheta"] ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/coordinates/tests/test_sanitize_center.py0000644000175100001770000000762014714401662024334 0ustar00runnerdockerimport re import pytest from unyt import unyt_array from unyt.exceptions import UnitConversionError from yt.testing import fake_amr_ds @pytest.fixture(scope="module") def reusable_fake_dataset(): ds = fake_amr_ds( fields=[("gas", "density")], units=["g/cm**3"], ) return ds valid_single_str_values = ("center",) valid_field_loc_str_values = ("min", "max") DEFAUT_ERROR_MESSAGE = ( "Expected any of the following\n" "- 'c', 'center', 'l', 'left', 'r', 'right', 'm', 'max', or 'min'\n" "- a 2 element tuple with 'min' or 'max' as the first element, followed by a field identifier\n" "- a 3 element array-like: for a unyt_array, expects length dimensions, otherwise code_lenght is assumed" ) @pytest.mark.parametrize( "user_input", ( # second element can be a single str or a field tuple (2 str), but not three (("max", ("not", "a", "field"))), # a 1-tuple is also not a valid field key (("max", ("notafield",))), # both elements need to be str (("max", (0, "invalid_field_type"))), (("max", 
("invalid_field_type", 1))), ), ) def test_invalid_center_type_default_error(reusable_fake_dataset, user_input): ds = reusable_fake_dataset with pytest.raises( TypeError, match=re.escape(f"Received {user_input!r}, ") + r"but failed to transform to a unyt_array \(obtained .+\)\.", ): # at the time of writing `axis` is an unused parameter of the base # sanitize center method, which is used directly for cartesian coordinate handlers # this probably hints that a refactor would make sense to separaet center sanitizing # and display_center calculation ds.coordinates.sanitize_center(user_input, axis=None) @pytest.mark.parametrize( "user_input, error_type, error_message", ( ( "bad_str", ValueError, re.escape( "Received unknown center single string value 'bad_str'. " + DEFAUT_ERROR_MESSAGE ), ), ( ("bad_str", ("gas", "density")), ValueError, re.escape( "Received unknown string value 'bad_str'. " f"Expected one of {valid_field_loc_str_values} (case insensitive)" ), ), ( ("bad_str", "density"), ValueError, re.escape( "Received unknown string value 'bad_str'. " "Expected one of ('min', 'max') (case insensitive)" ), ), # even with exactly three elements, the dimension should be length ( unyt_array([0.5] * 3, "kg"), UnitConversionError, "...", # don't match the exact error message since it's unyt's responsibility ), # only validate 3 elements unyt_arrays ( unyt_array([0.5] * 2, "cm"), TypeError, re.escape("Received unyt_array([0.5, 0.5], 'cm')"), ), ( unyt_array([0.5] * 4, "cm"), TypeError, # don't attempt to match error message as details of how # a unyt array with more than a couple elements is displayed are out of our control "...", ), ( # check that the whole shape is used in validation, not just the length (number of rows) unyt_array([0.5] * 6, "cm").reshape(3, 2), TypeError, # don't attempt to match error message as details of how # a unyt array with more than a couple elements is displayed are out of our control "...", ), ), ) def test_invalid_center_special_cases( reusable_fake_dataset, user_input, error_type, error_message ): ds = reusable_fake_dataset with pytest.raises(error_type, match=error_message): ds.coordinates.sanitize_center(user_input, axis=None) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/coordinates/tests/test_sph_pixelization.py0000644000175100001770000001440014714401662024531 0ustar00runnerdockerimport numpy as np import yt from yt.testing import ( assert_rel_equal, cubicspline_python, fake_sph_flexible_grid_ds, integrate_kernel, requires_file, ) from yt.utilities.math_utils import compute_stddev_image ## off-axis projection tests for SPH data are in ## yt/visualization/tests/test_offaxisprojection.py magneticum = "MagneticumCluster/snap_132" mag_kwargs = { "long_ids": True, "field_spec": "magneticum_box2_hr", } @requires_file(magneticum) def test_sph_moment(): ds = yt.load(magneticum, **mag_kwargs) def _vysq(field, data): return data["gas", "velocity_y"] ** 2 ds.add_field(("gas", "vysq"), _vysq, sampling_type="local", units="cm**2/s**2") prj1 = yt.ProjectionPlot( ds, "y", [("gas", "velocity_y"), ("gas", "vysq")], weight_field=("gas", "density"), moment=1, buff_size=(400, 400), ) prj2 = yt.ProjectionPlot( ds, "y", ("gas", "velocity_y"), moment=2, weight_field=("gas", "density"), buff_size=(400, 400), ) sigy = compute_stddev_image(prj1.frb["gas", "vysq"], prj1.frb["gas", "velocity_y"]) assert_rel_equal(sigy, prj2.frb["gas", "velocity_y"].d, 10) def test_sph_projection_basic1(): """ small, uniform grid: expected 
values for given dl? pixel centers at 0.5, 1., 1.5, 2., 2.5 particles at 0.5, 1.5, 2.5 """ bbox = np.array([[0.0, 3.0]] * 3) ds = fake_sph_flexible_grid_ds(hsml_factor=1.0, nperside=3, bbox=bbox) # works, but no depth control (at least without specific filters) proj = ds.proj(("gas", "density"), 2) frb = proj.to_frb( width=(2.5, "cm"), resolution=(5, 5), height=(2.5, "cm"), center=np.array([1.5, 1.5, 1.5]), periodic=False, ) out = frb.get_image(("gas", "density")) expected_out = np.zeros((5, 5), dtype=np.float64) dl_1part = integrate_kernel(cubicspline_python, 0.0, 0.5) linedens_1part = dl_1part * 1.0 # unit mass, density linedens = 3.0 * linedens_1part expected_out[::2, ::2] = linedens assert_rel_equal(expected_out, out.v, 5) # return out def test_sph_projection_basic2(): """ small, uniform grid: expected values for given dl? pixel centers at 0.5, 1., 1.5, 2., 2.5 particles at 0.5, 1.5, 2.5 but hsml radii are 0.25 -> try non-zero impact parameters, other pixels are still zero. """ bbox = np.array([[0.0, 3.0]] * 3) ds = fake_sph_flexible_grid_ds(hsml_factor=0.5, nperside=3, bbox=bbox) proj = ds.proj(("gas", "density"), 2) frb = proj.to_frb( width=(2.5, "cm"), resolution=(5, 5), height=(2.5, "cm"), center=np.array([1.375, 1.375, 1.5]), periodic=False, ) out = frb.get_image(("gas", "density")) expected_out = np.zeros((5, 5), dtype=np.float64) dl_1part = integrate_kernel(cubicspline_python, np.sqrt(2) * 0.125, 0.25) linedens_1part = dl_1part * 1.0 # unit mass, density linedens = 3.0 * linedens_1part expected_out[::2, ::2] = linedens # print(expected_out) # print(out.v) assert_rel_equal(expected_out, out.v, 4) # return out def get_dataset_sphrefine(reflevel: int = 1): """ constant density particle grid, with increasing particle sampling """ lenfact = (1.0 / 3.0) ** (reflevel - 1) massfact = lenfact**3 nperside = 3**reflevel e1hat = np.array([lenfact, 0, 0]) e2hat = np.array([0, lenfact, 0]) e3hat = np.array([0, 0, lenfact]) hsml_factor = lenfact bbox = np.array([[0.0, 3.0]] * 3) offsets = np.ones(3, dtype=np.float64) * 0.5 # in units of ehat def refmass(i: int, j: int, k: int) -> float: return massfact unitrho = 1.0 / massfact # want density 1 for decreasing mass ds = fake_sph_flexible_grid_ds( hsml_factor=hsml_factor, nperside=nperside, periodic=True, e1hat=e1hat, e2hat=e2hat, e3hat=e3hat, offsets=offsets, massgenerator=refmass, unitrho=unitrho, bbox=bbox, ) return ds def getdata_test_gridproj2(): # initial pixel centers at 0.5, 1., 1.5, 2., 2.5 # particles at 0.5, 1.5, 2.5 # refine particle grid, check if pixel values remain the # same in the pixels passing through initial particle centers outlist = [] dss = [] for rl in range(1, 4): ds = get_dataset_sphrefine(reflevel=rl) proj = ds.proj(("gas", "density"), 2) frb = proj.to_frb( width=(2.5, "cm"), resolution=(5, 5), height=(2.5, "cm"), center=np.array([1.5, 1.5, 1.5]), periodic=False, ) out = frb.get_image(("gas", "density")) outlist.append(out) dss.append(ds) return outlist, dss def test_sph_gridproj_reseffect1(): """ Comparing same pixel centers with higher particle resolution. The pixel centers are at x/y coordinates [0.5, 1., 1.5, 2., 2.5] at the first level, the spacing halves at each level. Checking the pixels at [0.5, 1.5, 2.5], which should have the same values at each resolution. 
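    (Added note: every image returned by getdata_test_gridproj2 is rendered
    on the same 5x5 pixel grid, so img.shape[0] // 2 == 2 and the
    [::2, ::2] stride below selects exactly the pixels centered at
    0.5, 1.5, and 2.5.)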
""" imgs, _ = getdata_test_gridproj2() ref = imgs[-1] for img in imgs: assert_rel_equal( img[:: img.shape[0] // 2, :: img.shape[1] // 2], ref[:: ref.shape[0] // 2, :: ref.shape[1] // 2], 4, ) def test_sph_gridproj_reseffect2(): """ refine the pixel grid instead of the particle grid """ ds = get_dataset_sphrefine(reflevel=2) proj = ds.proj(("gas", "density"), 2) imgs = {} maxrl = 5 for rl in range(1, maxrl + 1): npix = 1 + 2 ** (rl + 1) margin = 0.5 - 0.5 ** (rl + 1) frb = proj.to_frb( width=(3.0 - 2.0 * margin, "cm"), resolution=(npix, npix), height=(3.0 - 2.0 * margin, "cm"), center=np.array([1.5, 1.5, 1.5]), periodic=False, ) out = frb.get_image(("gas", "density")) imgs[rl] = out ref = imgs[maxrl] pixspace_ref = 2 ** (maxrl) for rl in imgs: img = imgs[rl] pixspace = 2 ** (rl) # print(f'Grid refinement level {rl}:') assert_rel_equal( img[::pixspace, ::pixspace], ref[::pixspace_ref, ::pixspace_ref], 4 ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/coordinates/tests/test_sph_pixelization_pytestonly.py0000644000175100001770000004231414714401662027050 0ustar00runnerdockerimport numpy as np import pytest import unyt import yt from yt.data_objects.selection_objects.region import YTRegion from yt.testing import ( assert_rel_equal, cubicspline_python, distancematrix, fake_random_sph_ds, fake_sph_flexible_grid_ds, integrate_kernel, ) @pytest.mark.parametrize("weighted", [True, False]) @pytest.mark.parametrize("periodic", [True, False]) @pytest.mark.parametrize("depth", [None, (1.0, "cm")]) @pytest.mark.parametrize("shiftcenter", [False, True]) @pytest.mark.parametrize("axis", [0, 1, 2]) def test_sph_proj_general_alongaxes( axis: int, shiftcenter: bool, depth: float | None, periodic: bool, weighted: bool, ) -> None: """ The previous projection tests were for a specific issue. Here, we test more functionality of the projections. We just send lines of sight through pixel centers for convenience. Particles at [0.5, 1.5, 2.5] (in each coordinate) smoothing lengths 0.25 all particles have mass 1., density 1.5, except the single center particle, with mass 2., density 3. Parameters: ----------- axis: {0, 1, 2} projection axis (aligned with sim. axis) shiftcenter: bool shift the coordinates to center the projection on. (The grid is offset to this same center) depth: float or None depth of the projection slice periodic: bool assume periodic boundary conditions, or not weighted: bool make a weighted projection (density-weighted density), or not Returns: -------- None """ if shiftcenter: center = unyt.unyt_array(np.array((0.625, 0.625, 0.625)), "cm") else: center = unyt.unyt_array(np.array((1.5, 1.5, 1.5)), "cm") bbox = unyt.unyt_array(np.array([[0.0, 3.0], [0.0, 3.0], [0.0, 3.0]]), "cm") hsml_factor = 0.5 unitrho = 1.5 # test correct centering, particle selection def makemasses(i, j, k): if i == j == k == 1: return 2.0 else: return 1.0 # m / rho, factor 1. 
/ hsml**2 is included in the kernel integral # (density is adjusted, so same for center particle) prefactor = 1.0 / unitrho # / (0.5 * 0.5)**2 dl_cen = integrate_kernel(cubicspline_python, 0.0, 0.25) # result shouldn't depend explicitly on the center if we re-center # the data, unless we get cut-offs in the non-periodic case ds = fake_sph_flexible_grid_ds( hsml_factor=hsml_factor, nperside=3, periodic=periodic, offsets=np.full(3, 0.5), massgenerator=makemasses, unitrho=unitrho, bbox=bbox.v, recenter=center.v, ) if depth is None: source = ds.all_data() else: depth = unyt.unyt_quantity(*depth) le = np.array(ds.domain_left_edge) re = np.array(ds.domain_right_edge) le[axis] = center[axis] - 0.5 * depth re[axis] = center[axis] + 0.5 * depth cen = 0.5 * (le + re) reg = YTRegion(center=cen, left_edge=le, right_edge=re, ds=ds) source = reg # we don't actually want a plot, it's just a straightforward, # common way to get an frb / image array if weighted: toweight_field = ("gas", "density") else: toweight_field = None prj = yt.ProjectionPlot( ds, axis, ("gas", "density"), width=(2.5, "cm"), weight_field=toweight_field, buff_size=(5, 5), center=center, data_source=source, ) img = prj.frb.data[("gas", "density")] if weighted: expected_out = np.zeros( ( 5, 5, ), dtype=img.v.dtype, ) expected_out[::2, ::2] = unitrho if depth is None: ## during shift, particle coords do wrap around edges # if (not periodic) and shiftcenter: # # weight 1. for unitrho, 2. for 2. * untrho # expected_out[2, 2] *= 5. / 3. # else: # weight (2 * 1.) for unitrho, (1 * 2.) for 2. * unitrho expected_out[2, 2] *= 1.5 else: # only 2 * unitrho element included expected_out[2, 2] *= 2.0 else: expected_out = np.zeros( ( 5, 5, ), dtype=img.v.dtype, ) expected_out[::2, ::2] = dl_cen * prefactor * unitrho if depth is None: # 3 particles per l.o.s., including the denser one expected_out *= 3.0 expected_out[2, 2] *= 4.0 / 3.0 else: # 1 particle per l.o.s., including the denser one expected_out[2, 2] *= 2.0 # grid is shifted to the left -> 'missing' stuff at the left if (not periodic) and shiftcenter: expected_out[:1, :] = 0.0 expected_out[:, :1] = 0.0 # print(axis, shiftcenter, depth, periodic, weighted) # print(expected_out) # print(img.v) assert_rel_equal(expected_out, img.v, 5) @pytest.mark.parametrize("periodic", [True, False]) @pytest.mark.parametrize("shiftcenter", [False, True]) @pytest.mark.parametrize("zoff", [0.0, 0.1, 0.5, 1.0]) @pytest.mark.parametrize("axis", [0, 1, 2]) def test_sph_slice_general_alongaxes( axis: int, shiftcenter: bool, periodic: bool, zoff: float, ) -> None: """ Particles at [0.5, 1.5, 2.5] (in each coordinate) smoothing lengths 0.25 all particles have mass 1., density 1.5, except the single center particle, with mass 2., density 3. Parameters: ----------- axis: {0, 1, 2} projection axis (aligned with sim. axis) northvector: tuple y-axis direction in the final plot (direction vector) shiftcenter: bool shift the coordinates to center the projection on. 
(The grid is offset to this same center) zoff: float offset of the slice plane from the SPH particle center plane periodic: bool assume periodic boundary conditions, or not Returns: -------- None """ if shiftcenter: center = unyt.unyt_array(np.array((0.625, 0.625, 0.625)), "cm") else: center = unyt.unyt_array(np.array((1.5, 1.5, 1.5)), "cm") bbox = unyt.unyt_array(np.array([[0.0, 3.0], [0.0, 3.0], [0.0, 3.0]]), "cm") hsml_factor = 0.5 unitrho = 1.5 # test correct centering, particle selection def makemasses(i, j, k): if i == j == k == 1: return 2.0 elif i == j == k == 2: return 3.0 else: return 1.0 # result shouldn't depend explicitly on the center if we re-center # the data, unless we get cut-offs in the non-periodic case ds = fake_sph_flexible_grid_ds( hsml_factor=hsml_factor, nperside=3, periodic=periodic, offsets=np.full(3, 0.5), massgenerator=makemasses, unitrho=unitrho, bbox=bbox.v, recenter=center.v, ) ad = ds.all_data() # print(ad[('gas', 'position')]) outgridsize = 10 width = 2.5 _center = center.to("cm").v.copy() _center[axis] += zoff # we don't actually want a plot, it's just a straightforward, # common way to get an frb / image array slc = yt.SlicePlot( ds, axis, ("gas", "density"), width=(width, "cm"), buff_size=(outgridsize,) * 2, center=(_center, "cm"), ) img = slc.frb.data[("gas", "density")] # center is same in non-projection coords if axis == 0: ci = 1 else: ci = 0 gridcens = ( _center[ci] - 0.5 * width + 0.5 * width / outgridsize + np.arange(outgridsize) * width / outgridsize ) xgrid = np.repeat(gridcens, outgridsize) ygrid = np.tile(gridcens, outgridsize) zgrid = np.full(outgridsize**2, _center[axis]) gridcoords = np.empty((outgridsize**2, 3), dtype=xgrid.dtype) if axis == 2: gridcoords[:, 0] = xgrid gridcoords[:, 1] = ygrid gridcoords[:, 2] = zgrid elif axis == 0: gridcoords[:, 0] = zgrid gridcoords[:, 1] = xgrid gridcoords[:, 2] = ygrid elif axis == 1: gridcoords[:, 0] = ygrid gridcoords[:, 1] = zgrid gridcoords[:, 2] = xgrid ad = ds.all_data() sphcoords = np.array( [ (ad[("gas", "x")]).to("cm"), (ad[("gas", "y")]).to("cm"), (ad[("gas", "z")]).to("cm"), ] ).T # print("sphcoords:") # print(sphcoords) # print("gridcoords:") # print(gridcoords) dists = distancematrix( gridcoords, sphcoords, periodic=(periodic,) * 3, periods=np.array([3.0, 3.0, 3.0]), ) # print("dists <= 1:") # print(dists <= 1) sml = (ad[("gas", "smoothing_length")]).to("cm") normkern = cubicspline_python(dists / sml.v[np.newaxis, :]) sphcontr = normkern / sml[np.newaxis, :] ** 3 * ad[("gas", "mass")] contsum = np.sum(sphcontr, axis=1) sphweights = ( normkern / sml[np.newaxis, :] ** 3 * ad[("gas", "mass")] / ad[("gas", "density")] ) weights = np.sum(sphweights, axis=1) nzeromask = np.logical_not(weights == 0) expected = np.zeros(weights.shape, weights.dtype) expected[nzeromask] = contsum[nzeromask] / weights[nzeromask] expected = expected.reshape((outgridsize, outgridsize)) # expected[np.isnan(expected)] = 0.0 # convention in the slices # print("expected:\n", expected) # print("recovered:\n", img.v) assert_rel_equal(expected, img.v, 5) @pytest.mark.parametrize("periodic", [True, False]) @pytest.mark.parametrize("shiftcenter", [False, True]) @pytest.mark.parametrize("northvector", [None, (1.0e-4, 1.0, 0.0)]) @pytest.mark.parametrize("zoff", [0.0, 0.1, 0.5, 1.0]) def test_sph_slice_general_offaxis( northvector: tuple[float, float, float] | None, shiftcenter: bool, zoff: float, periodic: bool, ) -> None: """ Same as the on-axis slices, but we rotate the basis vectors to test whether roations are handled ok. 
the rotation is chosen to be small so that in/exclusion of particles within bboxes, etc. works out the same way. Particles at [0.5, 1.5, 2.5] (in each coordinate) smoothing lengths 0.25 all particles have mass 1., density 1.5, except the single center particle, with mass 2., density 3. Parameters: ----------- northvector: tuple y-axis direction in the final plot (direction vector) shiftcenter: bool shift the coordinates to center the projection on. (The grid is offset to this same center) zoff: float offset of the slice plane from the SPH particle center plane periodic: bool assume periodic boundary conditions, or not Returns: -------- None """ if shiftcenter: center = np.array((0.625, 0.625, 0.625)) # cm else: center = np.array((1.5, 1.5, 1.5)) # cm bbox = unyt.unyt_array(np.array([[0.0, 3.0], [0.0, 3.0], [0.0, 3.0]]), "cm") hsml_factor = 0.5 unitrho = 1.5 # test correct centering, particle selection def makemasses(i, j, k): if i == j == k == 1: return 2.0 else: return 1.0 # try to make sure dl differences from periodic wrapping are small epsilon = 1e-4 projaxis = np.array([epsilon, 0.00, np.sqrt(1.0 - epsilon**2)]) e1dir = projaxis / np.sqrt(np.sum(projaxis**2)) if northvector is None: e2dir = np.array([0.0, 1.0, 0.0]) else: e2dir = np.asarray(northvector) e2dir = e2dir - np.sum(e1dir * e2dir) * e2dir # orthonormalize e2dir /= np.sqrt(np.sum(e2dir**2)) e3dir = np.cross(e2dir, e1dir) outgridsize = 10 width = 2.5 _center = center.copy() _center += zoff * e1dir ds = fake_sph_flexible_grid_ds( hsml_factor=hsml_factor, nperside=3, periodic=periodic, offsets=np.full(3, 0.5), massgenerator=makemasses, unitrho=unitrho, bbox=bbox.v, recenter=center, e1hat=e1dir, e2hat=e2dir, e3hat=e3dir, ) # source = ds.all_data() # couple to dataset -> right unit registry center = ds.arr(center, "cm") # print("position:\n", source["gas", "position"]) slc = yt.SlicePlot( ds, e1dir, ("gas", "density"), width=(width, "cm"), buff_size=(outgridsize,) * 2, center=(_center, "cm"), north_vector=e2dir, ) img = slc.frb.data[("gas", "density")] # center is same in x/y (e3dir/e2dir) gridcenx = ( np.dot(_center, e3dir) - 0.5 * width + 0.5 * width / outgridsize + np.arange(outgridsize) * width / outgridsize ) gridceny = ( np.dot(_center, e2dir) - 0.5 * width + 0.5 * width / outgridsize + np.arange(outgridsize) * width / outgridsize ) xgrid = np.repeat(gridcenx, outgridsize) ygrid = np.tile(gridceny, outgridsize) zgrid = np.full(outgridsize**2, np.dot(_center, e1dir)) gridcoords = ( xgrid[:, np.newaxis] * e3dir[np.newaxis, :] + ygrid[:, np.newaxis] * e2dir[np.newaxis, :] + zgrid[:, np.newaxis] * e1dir[np.newaxis, :] ) # print("gridcoords:") # print(gridcoords) ad = ds.all_data() sphcoords = np.array( [ (ad[("gas", "x")]).to("cm"), (ad[("gas", "y")]).to("cm"), (ad[("gas", "z")]).to("cm"), ] ).T dists = distancematrix( gridcoords, sphcoords, periodic=(periodic,) * 3, periods=np.array([3.0, 3.0, 3.0]), ) sml = (ad[("gas", "smoothing_length")]).to("cm") normkern = cubicspline_python(dists / sml.v[np.newaxis, :]) sphcontr = normkern / sml[np.newaxis, :] ** 3 * ad[("gas", "mass")] contsum = np.sum(sphcontr, axis=1) sphweights = ( normkern / sml[np.newaxis, :] ** 3 * ad[("gas", "mass")] / ad[("gas", "density")] ) weights = np.sum(sphweights, axis=1) nzeromask = np.logical_not(weights == 0) expected = np.zeros(weights.shape, weights.dtype) expected[nzeromask] = contsum[nzeromask] / weights[nzeromask] expected = expected.reshape((outgridsize, outgridsize)) expected = expected.T # transposed for image plotting # 
expected[np.isnan(expected)] = 0.0 # convention in the slices # print(axis, shiftcenter, depth, periodic, weighted) # print("expected:\n", expected) # print("recovered:\n", img.v) assert_rel_equal(expected, img.v, 4) # only axis-aligned; testing YTArbitraryGrid, YTCoveringGrid @pytest.mark.parametrize("periodic", [True, False, (True, True, False)]) @pytest.mark.parametrize("wholebox", [True, False]) def test_sph_grid( periodic: bool | tuple[bool, bool, bool], wholebox: bool, ) -> None: bbox = np.array([[-1.0, 3.0], [1.0, 5.2], [-1.0, 3.0]]) ds = fake_random_sph_ds(50, bbox, periodic=periodic) if not hasattr(periodic, "__len__"): periodic = (periodic,) * 3 if wholebox: left = bbox[:, 0].copy() level = 2 ncells = np.array([2**level] * 3) # print("left: ", left) # print("ncells: ", ncells) resgrid = ds.covering_grid(level, tuple(left), ncells) right = bbox[:, 1].copy() xedges = np.linspace(left[0], right[0], ncells[0] + 1) yedges = np.linspace(left[1], right[1], ncells[1] + 1) zedges = np.linspace(left[2], right[2], ncells[2] + 1) else: left = np.array([-1.0, 1.8, -1.0]) right = np.array([2.5, 5.2, 2.5]) ncells = np.array([3, 4, 4]) resgrid = ds.arbitrary_grid(left, right, dims=ncells) xedges = np.linspace(left[0], right[0], ncells[0] + 1) yedges = np.linspace(left[1], right[1], ncells[1] + 1) zedges = np.linspace(left[2], right[2], ncells[2] + 1) res = resgrid["gas", "density"] xcens = 0.5 * (xedges[:-1] + xedges[1:]) ycens = 0.5 * (yedges[:-1] + yedges[1:]) zcens = 0.5 * (zedges[:-1] + zedges[1:]) ad = ds.all_data() sphcoords = np.array( [ (ad[("gas", "x")]).to("cm"), (ad[("gas", "y")]).to("cm"), (ad[("gas", "z")]).to("cm"), ] ).T gridx, gridy, gridz = np.meshgrid(xcens, ycens, zcens, indexing="ij") outshape = gridx.shape gridx = gridx.flatten() gridy = gridy.flatten() gridz = gridz.flatten() gridcoords = np.array([gridx, gridy, gridz]).T periods = bbox[:, 1] - bbox[:, 0] dists = distancematrix(gridcoords, sphcoords, periodic=periodic, periods=periods) sml = (ad[("gas", "smoothing_length")]).to("cm") normkern = cubicspline_python(dists / sml.v[np.newaxis, :]) sphcontr = normkern / sml[np.newaxis, :] ** 3 * ad[("gas", "mass")] contsum = np.sum(sphcontr, axis=1) sphweights = ( normkern / sml[np.newaxis, :] ** 3 * ad[("gas", "mass")] / ad[("gas", "density")] ) weights = np.sum(sphweights, axis=1) nzeromask = np.logical_not(weights == 0) expected = np.zeros(weights.shape, weights.dtype) expected[nzeromask] = contsum[nzeromask] / weights[nzeromask] expected = expected.reshape(outshape) # expected[np.isnan(expected)] = 0.0 # convention in the slices # print(axis, shiftcenter, depth, periodic, weighted) # print("expected:\n", expected) # print("recovered:\n", res.v) assert_rel_equal(expected, res.v, 4) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/coordinates/tests/test_spherical_coordinates.py0000644000175100001770000000323714714401662025512 0ustar00runnerdocker# Some tests for the Spherical coordinates handler import numpy as np from numpy.testing import assert_almost_equal, assert_equal from yt.testing import fake_amr_ds # Our canonical tests are that we can access all of our fields and we can # compute our volume correctly. def test_spherical_coordinates(): # We're going to load up a simple AMR grid and check its volume # calculations and path length calculations. 
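    # Added, hedged reference: in spherical coordinates the volume element
    # is dV = r**2 * sin(theta) * dr * dtheta * dphi, so integrating over
    # the full (r, theta, phi) domain gives (4/3) * pi * R**3, which is
    # what the cell_volume assertion below checks.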
    ds = fake_amr_ds(geometry="spherical")
    axes = ["r", "theta", "phi"]
    for i, axis in enumerate(axes):
        dd = ds.all_data()
        fi = ("index", axis)
        fd = ("index", f"d{axis}")
        ma = np.argmax(dd[fi])
        assert_equal(dd[fi][ma] + dd[fd][ma] / 2.0, ds.domain_right_edge[i].d)
        mi = np.argmin(dd[fi])
        assert_equal(dd[fi][mi] - dd[fd][mi] / 2.0, ds.domain_left_edge[i].d)
        assert_equal(dd[fd].max(), (ds.domain_width / ds.domain_dimensions)[i].d)
    # Note that we're using a lot of funny transforms to get to this, so we do
    # not expect to get actual agreement. This is a bit of a shame, but I
    # don't think it is avoidable as of right now. Real datasets will almost
    # certainly be correct, if this is correct to 3 decimal places.
    assert_almost_equal(
        dd["index", "cell_volume"].sum(dtype="float64"),
        (4.0 / 3.0) * np.pi * ds.domain_width[0] ** 3,
    )
    assert_equal(dd["index", "path_element_r"], dd["index", "dr"])
    assert_equal(
        dd["index", "path_element_theta"], dd["index", "r"] * dd["index", "dtheta"]
    )
    assert_equal(
        dd["index", "path_element_phi"],
        (dd["index", "r"] * dd["index", "dphi"] * np.sin(dd["index", "theta"])),
    )
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/fake_octree.pyx0000644000175100001770000000507114714401662017070 0ustar00runnerdocker
# distutils: include_dirs = LIB_DIR
# distutils: libraries = STD_LIBS
"""
Make a fake octree, deposit a particle at every leaf
"""

cimport numpy as np

from libc.stdlib cimport RAND_MAX, rand

from .oct_visitors cimport cind

import numpy as np

from .oct_container cimport Oct, SparseOctreeContainer


# Create a balanced octree by a random walk that recursively
# subdivides
def create_fake_octree(SparseOctreeContainer oct_handler,
                       long max_noct, long max_level,
                       np.ndarray[np.int32_t, ndim=1] ndd,
                       np.ndarray[np.float64_t, ndim=1] dle,
                       np.ndarray[np.float64_t, ndim=1] dre,
                       float fsubdivide):
    cdef int[3] dd  # hold the grid dimensions (octants per side)
    cdef int[3] ind  # hold the octant index
    cdef long i
    cdef long cur_leaf = 0
    cdef np.ndarray[np.uint8_t, ndim=2] mask
    for i in range(3):
        ind[i] = 0
        dd[i] = ndd[i]
    oct_handler.allocate_domains([max_noct])
    parent = oct_handler.next_root(1, ind)
    parent.domain = 1
    cur_leaf = 8  # we've added one parent...
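    # Added note on leaf accounting: the root oct contributes 8 leaf
    # children up front, and every later subdivision in subdivide() turns
    # one leaf into eight, a net gain of 7 (matching `cur_leaf += 7` there).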
mask = np.ones((max_noct,8),dtype='uint8') while oct_handler.domains[0].n_assigned < max_noct: print("root: nocts ", oct_handler.domains[0].n_assigned) cur_leaf = subdivide(oct_handler, parent, ind, dd, cur_leaf, 0, max_noct, max_level, fsubdivide, mask) return cur_leaf cdef long subdivide(SparseOctreeContainer oct_handler, Oct *parent, int ind[3], int dd[3], long cur_leaf, long cur_level, long max_noct, long max_level, float fsubdivide, np.ndarray[np.uint8_t, ndim=2] mask): print("child", parent.file_ind, ind[0], ind[1], ind[2], cur_leaf, cur_level) cdef int ddr[3] cdef int ii cdef long i cdef float rf #random float from 0-1 if cur_level >= max_level: return cur_leaf if oct_handler.domains[0].n_assigned >= max_noct: return cur_leaf for i in range(3): ind[i] = ((rand() * 1.0 / RAND_MAX) * dd[i]) ddr[i] = 2 rf = rand() * 1.0 / RAND_MAX if rf > fsubdivide: ii = cind(ind[0], ind[1], ind[2]) if parent.children[ii] == NULL: cur_leaf += 7 oct = oct_handler.next_child(1, ind, parent) oct.domain = 1 cur_leaf = subdivide(oct_handler, oct, ind, ddr, cur_leaf, cur_level + 1, max_noct, max_level, fsubdivide, mask) return cur_leaf ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/geometry_enum.py0000644000175100001770000000111214714401662017300 0ustar00runnerdockerimport sys from enum import auto if sys.version_info >= (3, 11): from enum import StrEnum else: from yt._maintenance.backports import StrEnum # register all valid geometries class Geometry(StrEnum): CARTESIAN = auto() CYLINDRICAL = auto() POLAR = auto() SPHERICAL = auto() GEOGRAPHIC = auto() INTERNAL_GEOGRAPHIC = auto() SPECTRAL_CUBE = auto() def __str__(self): # Implemented for backward compatibility. if sys.version_info >= (3, 11): return super().__str__() else: return self.name.lower() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/geometry_handler.py0000644000175100001770000004063514714401662017766 0ustar00runnerdockerimport abc import os import weakref import numpy as np from yt._maintenance.deprecation import issue_deprecation_warning from yt.config import ytcfg from yt.units._numpy_wrapper_functions import uconcatenate from yt.units.yt_array import YTArray from yt.utilities.exceptions import YTFieldNotFound from yt.utilities.io_handler import io_registry from yt.utilities.logger import ytLogger as mylog from yt.utilities.on_demand_imports import _h5py as h5py from yt.utilities.parallel_tools.parallel_analysis_interface import ( ParallelAnalysisInterface, parallel_root_only, ) class Index(ParallelAnalysisInterface, abc.ABC): """The base index class""" _unsupported_objects: tuple[str, ...] = () _index_properties: tuple[str, ...] = () def __init__(self, ds, dataset_type): ParallelAnalysisInterface.__init__(self) self.dataset = weakref.proxy(ds) self.ds = self.dataset self._initialize_state_variables() mylog.debug("Initializing data storage.") self._initialize_data_storage() mylog.debug("Setting up domain geometry.") self._setup_geometry() mylog.debug("Initializing data grid data IO") self._setup_data_io() # Note that this falls under the "geometry" object since it's # potentially quite expensive, and should be done with the indexing. mylog.debug("Detecting fields.") self._detect_output_fields() @abc.abstractmethod def _detect_output_fields(self): pass def _icoords_to_fcoords( self, icoords: np.ndarray, ires: np.ndarray, axes: tuple[int, ...] 
| None = None, ) -> tuple[np.ndarray, np.ndarray]: # What's the use of raising NotImplementedError for this, when it's an # abstract base class? Well, only *some* of the subclasses have it -- # and for those that *don't*, we should not be calling it -- and since # it's a semi-private method, it shouldn't be called outside of yt # machinery. So we shouldn't ever get here! raise NotImplementedError def _initialize_state_variables(self): self._parallel_locking = False self._data_file = None self._data_mode = None self.num_grids = None def _initialize_data_storage(self): if not ytcfg.get("yt", "serialize"): return fn = self.ds.storage_filename if fn is None: if os.path.isfile( os.path.join(self.directory, f"{self.ds.unique_identifier}.yt") ): fn = os.path.join(self.directory, f"{self.ds.unique_identifier}.yt") else: fn = os.path.join(self.directory, f"{self.dataset.basename}.yt") dir_to_check = os.path.dirname(fn) if dir_to_check == "": dir_to_check = "." # We have four options: # Writeable, does not exist : create, open as append # Writeable, does exist : open as append # Not writeable, does not exist : do not attempt to open # Not writeable, does exist : open as read-only exists = os.path.isfile(fn) if not exists: writeable = os.access(dir_to_check, os.W_OK) else: writeable = os.access(fn, os.W_OK) writeable = writeable and not ytcfg.get("yt", "only_deserialize") # We now have our conditional stuff self.comm.barrier() if not writeable and not exists: return if writeable: try: if not exists: self.__create_data_file(fn) self._data_mode = "a" except OSError: self._data_mode = None return else: self._data_mode = "r" self.__data_filename = fn self._data_file = h5py.File(fn, mode=self._data_mode) def __create_data_file(self, fn): # Note that this used to be parallel_root_only; it no longer is, # because we have better logic to decide who owns the file. f = h5py.File(fn, mode="a") f.close() def _setup_data_io(self): if getattr(self, "io", None) is not None: return self.io = io_registry[self.dataset_type](self.dataset) @parallel_root_only def save_data( self, array, node, name, set_attr=None, force=False, passthrough=False ): """ Arbitrary numpy data will be saved to the region in the datafile described by *node* and *name*. If data file does not exist, it throws no error and simply does not save. """ if self._data_mode != "a": return try: node_loc = self._data_file[node] if name in node_loc and force: mylog.info("Overwriting node %s/%s", node, name) del self._data_file[node][name] elif name in node_loc and passthrough: return except Exception: pass myGroup = self._data_file["/"] for q in node.split("/"): if q: myGroup = myGroup.require_group(q) arr = myGroup.create_dataset(name, data=array) if set_attr is not None: for i, j in set_attr.items(): arr.attrs[i] = j self._data_file.flush() def _reload_data_file(self, *args, **kwargs): if self._data_file is None: return self._data_file.close() del self._data_file self._data_file = h5py.File(self.__data_filename, mode=self._data_mode) def get_data(self, node, name): """ Return the dataset with a given *name* located at *node* in the datafile. 
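        A hypothetical call (sketch; the node and dataset names here are
        made up)::

            arr = index.get_data("/some/node", "some_dataset")

        This returns the stored array, or None if the node or name is
        absent.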
""" if self._data_file is None: return None if node[0] != "/": node = f"/{node}" myGroup = self._data_file["/"] for group in node.split("/"): if group: if group not in myGroup: return None myGroup = myGroup[group] if name not in myGroup: return None full_name = f"{node}/{name}" try: return self._data_file[full_name][:] except TypeError: return self._data_file[full_name] def _get_particle_type_counts(self): # this is implemented by subclasses raise NotImplementedError def _close_data_file(self): if self._data_file: self._data_file.close() del self._data_file self._data_file = None def _split_fields(self, fields): # This will split fields into either generated or read fields fields_to_read, fields_to_generate = [], [] for ftype, fname in fields: if fname in self.field_list or (ftype, fname) in self.field_list: fields_to_read.append((ftype, fname)) elif ( fname in self.ds.derived_field_list or (ftype, fname) in self.ds.derived_field_list ): fields_to_generate.append((ftype, fname)) else: raise YTFieldNotFound((ftype, fname), self.ds) return fields_to_read, fields_to_generate def _read_particle_fields(self, fields, dobj, chunk=None): if len(fields) == 0: return {}, [] fields_to_read, fields_to_generate = self._split_fields(fields) if len(fields_to_read) == 0: return {}, fields_to_generate selector = dobj.selector if chunk is None: self._identify_base_chunk(dobj) chunks = self._chunk_io(dobj, cache=False) fields_to_return = self.io._read_particle_selection( chunks, selector, fields_to_read ) return fields_to_return, fields_to_generate def _read_fluid_fields(self, fields, dobj, chunk=None): if len(fields) == 0: return {}, [] fields_to_read, fields_to_generate = self._split_fields(fields) if len(fields_to_read) == 0: return {}, fields_to_generate selector = dobj.selector if chunk is None: self._identify_base_chunk(dobj) chunk_size = dobj.size else: chunk_size = chunk.data_size fields_to_return = self.io._read_fluid_selection( self._chunk_io(dobj), selector, fields_to_read, chunk_size ) return fields_to_return, fields_to_generate def _chunk(self, dobj, chunking_style, ngz=0, **kwargs): # A chunk is either None or (grids, size) if dobj._current_chunk is None: self._identify_base_chunk(dobj) if ngz != 0 and chunking_style != "spatial": raise NotImplementedError if chunking_style == "all": return self._chunk_all(dobj, **kwargs) elif chunking_style == "spatial": return self._chunk_spatial(dobj, ngz, **kwargs) elif chunking_style == "io": return self._chunk_io(dobj, **kwargs) else: raise NotImplementedError def cacheable_property(func): # not quite equivalent to functools.cached_property # this decorator allows cached to be disabled via a self._cache flag attribute n = f"_{func.__name__}" @property def cacheable_func(self): if self._cache and getattr(self, n, None) is not None: return getattr(self, n) if self.data_size is None: tr = self._accumulate_values(n[1:]) else: tr = func(self) if self._cache: setattr(self, n, tr) return tr return cacheable_func class YTDataChunk: def __init__( self, dobj, chunk_type, objs, data_size=None, field_type=None, cache=False, fast_index=None, ): self.dobj = dobj self.chunk_type = chunk_type self.objs = objs self.data_size = data_size self._field_type = field_type self._cache = cache self._fast_index = fast_index def _accumulate_values(self, method): # We call this generically. It's somewhat slower, since we're doing # costly getattr functions, but this allows us to generalize. 
mname = f"select_{method}" arrs = [] for obj in self._fast_index or self.objs: f = getattr(obj, mname) arrs.append(f(self.dobj)) if method == "dtcoords": arrs = [arr[0] for arr in arrs] elif method == "tcoords": arrs = [arr[1] for arr in arrs] if len(arrs) == 0: self.data_size = 0 return np.empty((0, 3), dtype="float64") arrs = uconcatenate(arrs) self.data_size = arrs.shape[0] return arrs @cacheable_property def fcoords(self): if self._fast_index is not None: ci = self._fast_index.select_fcoords(self.dobj.selector, self.data_size) ci = YTArray(ci, units="code_length", registry=self.dobj.ds.unit_registry) return ci ci = np.empty((self.data_size, 3), dtype="float64") ci = YTArray(ci, units="code_length", registry=self.dobj.ds.unit_registry) if self.data_size == 0: return ci ind = 0 for obj in self._fast_index or self.objs: c = obj.select_fcoords(self.dobj) if c.shape[0] == 0: continue ci.d[ind : ind + c.shape[0], :] = c ind += c.shape[0] return ci @cacheable_property def icoords(self): if self._fast_index is not None: ci = self._fast_index.select_icoords(self.dobj.selector, self.data_size) return ci ci = np.empty((self.data_size, 3), dtype="int64") if self.data_size == 0: return ci ind = 0 for obj in self._fast_index or self.objs: c = obj.select_icoords(self.dobj) if c.shape[0] == 0: continue ci[ind : ind + c.shape[0], :] = c ind += c.shape[0] return ci @cacheable_property def fwidth(self): if self._fast_index is not None: ci = self._fast_index.select_fwidth(self.dobj.selector, self.data_size) ci = YTArray(ci, units="code_length", registry=self.dobj.ds.unit_registry) return ci ci = np.empty((self.data_size, 3), dtype="float64") ci = YTArray(ci, units="code_length", registry=self.dobj.ds.unit_registry) if self.data_size == 0: return ci ind = 0 for obj in self._fast_index or self.objs: c = obj.select_fwidth(self.dobj) if c.shape[0] == 0: continue ci.d[ind : ind + c.shape[0], :] = c ind += c.shape[0] return ci @cacheable_property def ires(self): if self._fast_index is not None: ci = self._fast_index.select_ires(self.dobj.selector, self.data_size) return ci ci = np.empty(self.data_size, dtype="int64") if self.data_size == 0: return ci ind = 0 for obj in self._fast_index or self.objs: c = obj.select_ires(self.dobj) if c.shape == 0: continue ci[ind : ind + c.size] = c ind += c.size return ci @cacheable_property def tcoords(self): self.dtcoords return self._tcoords @cacheable_property def dtcoords(self): ct = np.empty(self.data_size, dtype="float64") cdt = np.empty(self.data_size, dtype="float64") self._tcoords = ct # Se this for tcoords if self.data_size == 0: return cdt ind = 0 for obj in self._fast_index or self.objs: gdt, gt = obj.select_tcoords(self.dobj) if gt.size == 0: continue ct[ind : ind + gt.size] = gt cdt[ind : ind + gdt.size] = gdt ind += gt.size return cdt @cacheable_property def fcoords_vertex(self): nodes_per_elem = self.dobj.index.meshes[0].connectivity_indices.shape[1] dim = self.dobj.ds.dimensionality ci = np.empty((self.data_size, nodes_per_elem, dim), dtype="float64") ci = YTArray(ci, units="code_length", registry=self.dobj.ds.unit_registry) if self.data_size == 0: return ci ind = 0 for obj in self.objs: c = obj.select_fcoords_vertex(self.dobj) if c.shape[0] == 0: continue ci.d[ind : ind + c.shape[0], :, :] = c ind += c.shape[0] return ci class ChunkDataCache: def __init__(self, base_iter, preload_fields, geometry_handler, max_length=256): # At some point, max_length should instead become a heuristic function, # potentially looking at estimated memory usage. 
Note that this never # initializes the iterator; it assumes the iterator is already created, # and it calls next() on it. self.base_iter = base_iter.__iter__() self.queue = [] self.max_length = max_length self.preload_fields = preload_fields self.geometry_handler = geometry_handler self.cache = {} def __iter__(self): return self def __next__(self): if len(self.queue) == 0: for _ in range(self.max_length): try: self.queue.append(next(self.base_iter)) except StopIteration: break # If it's still zero ... if len(self.queue) == 0: raise StopIteration chunk = YTDataChunk(None, "cache", self.queue, cache=False) self.cache = ( self.geometry_handler.io._read_chunk_data(chunk, self.preload_fields) or {} ) g = self.queue.pop(0) g._initialize_cache(self.cache.pop(g.id, {})) return g def is_curvilinear(geo): # tell geometry is curvilinear or not issue_deprecation_warning( "the is_curvilear() function is deprecated. " "Instead, compare the geometry object directly with yt.geometry.geometry_enum.Geometry " "enum members, as for instance:\n" "if is_curvilinear(geometry):\n ...\n" "should be rewritten as:" "if geometry is Geometry.POLAR or geometry is Geometry.CYLINDRICAL or geometry is Geometry.SPHERICAL:\n ...", stacklevel=3, since="4.2", ) if geo in ["polar", "cylindrical", "spherical"]: return True else: return False ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/grid_container.pxd0000644000175100001770000000351414714401662017563 0ustar00runnerdocker""" Matching points on the grid to specific grids """ import numpy as np cimport cython cimport numpy as np from libc.stdlib cimport free, malloc from yt.geometry cimport grid_visitors from yt.geometry.grid_visitors cimport ( GridTreeNode, GridTreeNodePadded, GridVisitorData, grid_visitor_function, ) from yt.utilities.lib.bitarray cimport bitarray from yt.utilities.lib.fp_utils cimport iclip from .selection_routines cimport SelectorObject, _ensure_code cdef class GridTree: cdef GridTreeNode *grids cdef GridTreeNode *root_grids cdef int num_grids cdef int num_root_grids cdef int num_leaf_grids cdef public bitarray mask cdef void setup_data(self, GridVisitorData *data) cdef void visit_grids(self, GridVisitorData *data, grid_visitor_function *func, SelectorObject selector) cdef void recursively_visit_grid(self, GridVisitorData *data, grid_visitor_function *func, SelectorObject selector, GridTreeNode *grid, np.uint8_t *buf = ?) 
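# Added usage sketch (hypothetical variable names): from Python this is
# typically driven as
#
#     pts = MatchPointsToGrids(tree, num_points, x, y, z)
#     grid_ids = pts.find_points_in_tree()
#
# where grid_ids[i] is the index of the leaf grid containing point i,
# or -1 if no grid contains it.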
cdef class MatchPointsToGrids: cdef int num_points cdef np.float64_t *xp cdef np.float64_t *yp cdef np.float64_t *zp cdef GridTree tree cdef np.int64_t *point_grids cdef np.uint8_t check_position(self, np.int64_t pt_index, np.float64_t x, np.float64_t y, np.float64_t z, GridTreeNode *grid) cdef np.uint8_t is_in_grid(self, np.float64_t x, np.float64_t y, np.float64_t z, GridTreeNode *grid) cdef extern from "platform_dep.h" nogil: double rint(double x) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/grid_container.pyx0000644000175100001770000003117714714401662017616 0ustar00runnerdocker# distutils: include_dirs = LIB_DIR # distutils: libraries = STD_LIBS """ Matching points on the grid to specific grids """ import numpy as np cimport cython cimport numpy as np from yt.utilities.lib.bitarray cimport bitarray @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef GridTreeNode Grid_initialize(np.ndarray[np.float64_t, ndim=1] le, np.ndarray[np.float64_t, ndim=1] re, np.ndarray[np.int32_t, ndim=1] dims, int num_children, int level, int index): cdef GridTreeNode node cdef int i node.index = index node.level = level for i in range(3): node.left_edge[i] = le[i] node.right_edge[i] = re[i] node.dims[i] = dims[i] node.dds[i] = (re[i] - le[i])/dims[i] node.start_index[i] = rint(le[i] / node.dds[i]) node.num_children = num_children if num_children <= 0: node.children = NULL return node node.children = malloc( sizeof(GridTreeNode *) * num_children) for i in range(num_children): node.children[i] = NULL return node cdef class GridTree: @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def __cinit__(self, int num_grids, np.ndarray[np.float64_t, ndim=2] left_edge, np.ndarray[np.float64_t, ndim=2] right_edge, np.ndarray[np.int32_t, ndim=2] dimensions, np.ndarray[np.int64_t, ndim=1] parent_ind, np.ndarray[np.int64_t, ndim=1] level, np.ndarray[np.int64_t, ndim=1] num_children): cdef int i, j, k cdef np.ndarray[np.int64_t, ndim=1] child_ptr child_ptr = np.zeros(num_grids, dtype='int64') self.num_grids = num_grids self.num_root_grids = 0 self.num_leaf_grids = 0 self.grids = malloc( sizeof(GridTreeNode) * num_grids) for i in range(num_grids): self.grids[i] = Grid_initialize(left_edge[i,:], right_edge[i,:], dimensions[i,:], num_children[i], level[i], i) if level[i] == 0: self.num_root_grids += 1 if num_children[i] == 0: self.num_leaf_grids += 1 self.root_grids = malloc( sizeof(GridTreeNode) * self.num_root_grids) k = 0 for i in range(num_grids): j = parent_ind[i] if j >= 0: self.grids[j].children[child_ptr[j]] = &self.grids[i] child_ptr[j] += 1 else: if k >= self.num_root_grids: raise RuntimeError self.root_grids[k] = self.grids[i] k = k + 1 def __init__(self, *args, **kwargs): self.mask = None def __iter__(self): yield self @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def return_tree_info(self): cdef int i, j levels = [] indices = [] nchild = [] children = [] for i in range(self.num_grids): childs = [] levels.append(self.grids[i].level) indices.append(self.grids[i].index) nchild.append(self.grids[i].num_children) for j in range(self.grids[i].num_children): childs.append(self.grids[i].children[j].index) children.append(childs) return indices, levels, nchild, children @property def grid_arrays(self): cdef GridTreeNodePadded[:] grids grids = \ ( self.grids) grids_basic = np.asarray(grids) # This next bit is necessary because as of 0.23.4, Cython can't make # nested 
dtypes automatically where you have a property that is # something like float[3]. So we unroll all of those, then re-roll # them in a new dtype. dtn = {} dt = grids_basic.dtype for name in dt.names: d, o = dt.fields[name] n = name if name.endswith("_x"): f = (d.char, 3) n = name[:-2] elif name.endswith("_y") or name.endswith("_z"): continue else: f = d.char dtn[n] = (f, o) return grids_basic.view(dtype=np.dtype(dtn)) cdef void setup_data(self, GridVisitorData *data): # Being handed a new GVD object, we initialize it to sane defaults. data.index = 0 data.global_index = 0 data.n_tuples = 0 data.child_tuples = NULL data.array = NULL data.ref_factor = 2 #### FIX THIS cdef void visit_grids(self, GridVisitorData *data, grid_visitor_function *func, SelectorObject selector): # This iterates over all root grids, given a selector+data, and then # visits each one and its children. cdef int i # Because of confusion about mapping of children to parents, we are # going to do this the stupid way for now. cdef GridTreeNode *grid cdef np.uint8_t *buf = NULL if self.mask is not None: buf = self.mask.buf for i in range(self.num_root_grids): grid = &self.root_grids[i] self.recursively_visit_grid(data, func, selector, grid, buf) grid_visitors.free_tuples(data) cdef void recursively_visit_grid(self, GridVisitorData *data, grid_visitor_function *func, SelectorObject selector, GridTreeNode *grid, np.uint8_t *buf = NULL): # Visit this grid and all of its child grids, with a given grid visitor # function. We early terminate if we are not selected by the selector. cdef int i data.grid = grid if selector.select_bbox(grid.left_edge, grid.right_edge) == 0: # Note that this does not increment the global_index. return grid_visitors.setup_tuples(data) selector.visit_grid_cells(data, func, buf) for i in range(grid.num_children): self.recursively_visit_grid(data, func, selector, grid.children[i], buf) def count(self, SelectorObject selector): # Use the counting grid visitor cdef GridVisitorData data self.setup_data(&data) cdef np.uint64_t size = 0 cdef int i for i in range(self.num_grids): size += (self.grids[i].dims[0] * self.grids[i].dims[1] * self.grids[i].dims[2]) cdef bitarray mask = bitarray(size) data.array = mask.buf self.visit_grids(&data, grid_visitors.mask_cells, selector) self.mask = mask size = 0 self.setup_data(&data) data.array = (&size) self.visit_grids(&data, grid_visitors.count_cells, selector) return size def select_icoords(self, SelectorObject selector, np.uint64_t size = -1): # Fill icoords with a selector cdef GridVisitorData data self.setup_data(&data) if size == -1: size = 0 data.array = (&size) self.visit_grids(&data, grid_visitors.count_cells, selector) cdef np.ndarray[np.int64_t, ndim=2] icoords icoords = np.empty((size, 3), dtype="int64") data.array = icoords.data self.visit_grids(&data, grid_visitors.icoords_cells, selector) return icoords def select_ires(self, SelectorObject selector, np.uint64_t size = -1): # Fill ires with a selector cdef GridVisitorData data self.setup_data(&data) if size == -1: size = 0 data.array = (&size) self.visit_grids(&data, grid_visitors.count_cells, selector) cdef np.ndarray[np.int64_t, ndim=1] ires ires = np.empty(size, dtype="int64") data.array = ires.data self.visit_grids(&data, grid_visitors.ires_cells, selector) return ires def select_fcoords(self, SelectorObject selector, np.uint64_t size = -1): # Fill fcoords with a selector cdef GridVisitorData data self.setup_data(&data) if size == -1: size = 0 data.array = (&size) self.visit_grids(&data, 
grid_visitors.count_cells, selector) cdef np.ndarray[np.float64_t, ndim=2] fcoords fcoords = np.empty((size, 3), dtype="float64") data.array = fcoords.data self.visit_grids(&data, grid_visitors.fcoords_cells, selector) return fcoords def select_fwidth(self, SelectorObject selector, np.uint64_t size = -1): # Fill fwidth with a selector cdef GridVisitorData data self.setup_data(&data) if size == -1: size = 0 data.array = (&size) self.visit_grids(&data, grid_visitors.count_cells, selector) cdef np.ndarray[np.float64_t, ndim=2] fwidth fwidth = np.empty((size, 3), dtype="float64") data.array = fwidth.data self.visit_grids(&data, grid_visitors.fwidth_cells, selector) return fwidth cdef class MatchPointsToGrids: @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def __cinit__(self, GridTree tree, int num_points, np.ndarray[np.float64_t, ndim=1] x, np.ndarray[np.float64_t, ndim=1] y, np.ndarray[np.float64_t, ndim=1] z): cdef int i self.num_points = num_points self.xp = malloc( sizeof(np.float64_t) * num_points) self.yp = malloc( sizeof(np.float64_t) * num_points) self.zp = malloc( sizeof(np.float64_t) * num_points) self.point_grids = malloc( sizeof(np.int64_t) * num_points) for i in range(num_points): self.xp[i] = x[i] self.yp[i] = y[i] self.zp[i] = z[i] self.point_grids[i] = -1 self.tree = tree @cython.boundscheck(False) @cython.wraparound(False) def find_points_in_tree(self): cdef np.ndarray[np.int64_t, ndim=1] pt_grids cdef int i, j cdef np.uint8_t in_grid pt_grids = np.zeros(self.num_points, dtype='int64') for i in range(self.num_points): in_grid = 0 for j in range(self.tree.num_root_grids): if not in_grid: in_grid = self.check_position(i, self.xp[i], self.yp[i], self.zp[i], &self.tree.root_grids[j]) for i in range(self.num_points): pt_grids[i] = self.point_grids[i] return pt_grids @cython.boundscheck(False) @cython.wraparound(False) cdef np.uint8_t check_position(self, np.int64_t pt_index, np.float64_t x, np.float64_t y, np.float64_t z, GridTreeNode * grid): cdef int i cdef np.uint8_t in_grid in_grid = self.is_in_grid(x, y, z, grid) if in_grid: if grid.num_children > 0: in_grid = 0 for i in range(grid.num_children): if not in_grid: in_grid = self.check_position(pt_index, x, y, z, grid.children[i]) if not in_grid: self.point_grids[pt_index] = grid.index in_grid = 1 else: self.point_grids[pt_index] = grid.index in_grid = 1 return in_grid @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef np.uint8_t is_in_grid(self, np.float64_t x, np.float64_t y, np.float64_t z, GridTreeNode * grid): if x >= grid.right_edge[0]: return 0 if y >= grid.right_edge[1]: return 0 if z >= grid.right_edge[2]: return 0 if x < grid.left_edge[0]: return 0 if y < grid.left_edge[1]: return 0 if z < grid.left_edge[2]: return 0 return 1 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/grid_geometry_handler.py0000644000175100001770000004430014714401662020764 0ustar00runnerdockerimport abc import weakref from collections import defaultdict import numpy as np from yt.arraytypes import blankRecordArray from yt.config import ytcfg from yt.fields.derived_field import ValidateSpatial from yt.fields.field_detector import FieldDetector from yt.funcs import ensure_numpy_array, iter_fields from yt.geometry.geometry_handler import ChunkDataCache, Index, YTDataChunk from yt.utilities.definitions import MAXLEVEL from yt.utilities.logger import ytLogger as mylog from .grid_container import GridTree, MatchPointsToGrids 
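# A hedged, illustrative sketch (not part of yt's API; the function name is
# made up): how the per-grid arrays that GridIndex collects map onto the
# GridTree constructor used by _get_grid_tree() below. GridTree expects
# float64 edge arrays, int32 dimensions, and int64 parent indices, levels,
# and child counts, with parent_ind == -1 marking root grids.
def _example_build_grid_tree(
    num_grids, left_edge, right_edge, dimensions, parent_ind, level, num_children
):
    return GridTree(
        num_grids,
        np.asarray(left_edge, dtype="float64"),
        np.asarray(right_edge, dtype="float64"),
        np.asarray(dimensions, dtype="int32"),
        np.asarray(parent_ind, dtype="int64"),
        np.asarray(level, dtype="int64"),
        np.asarray(num_children, dtype="int64"),
    )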
class GridIndex(Index, abc.ABC): """The index class for patch and block AMR datasets.""" float_type = "float64" _preload_implemented = False _index_properties = ( "grid_left_edge", "grid_right_edge", "grid_levels", "grid_particle_count", "grid_dimensions", ) def _setup_geometry(self): mylog.debug("Counting grids.") self._count_grids() mylog.debug("Initializing grid arrays.") self._initialize_grid_arrays() mylog.debug("Parsing index.") self._parse_index() mylog.debug("Constructing grid objects.") self._populate_grid_objects() mylog.debug("Re-examining index") self._initialize_level_stats() @abc.abstractmethod def _count_grids(self): pass @abc.abstractmethod def _parse_index(self): pass @abc.abstractmethod def _populate_grid_objects(self): pass def __del__(self): del self.grid_dimensions del self.grid_left_edge del self.grid_right_edge del self.grid_levels del self.grid_particle_count del self.grids @property def parameters(self): return self.dataset.parameters def _detect_output_fields_backup(self): # grab fields from backup file as well, if present return def select_grids(self, level): """ Returns an array of grids at *level*. """ return self.grids[self.grid_levels.flat == level] def get_levels(self): for level in range(self.max_level + 1): yield self.select_grids(level) def _initialize_grid_arrays(self): mylog.debug("Allocating arrays for %s grids", self.num_grids) self.grid_dimensions = np.ones((self.num_grids, 3), "int32") self.grid_left_edge = self.ds.arr( np.zeros((self.num_grids, 3), self.float_type), "code_length" ) self.grid_right_edge = self.ds.arr( np.ones((self.num_grids, 3), self.float_type), "code_length" ) self.grid_levels = np.zeros((self.num_grids, 1), "int32") self.grid_particle_count = np.zeros((self.num_grids, 1), "int32") def clear_all_data(self): """ This routine clears all the data currently being held onto by the grids and the data io handler. """ for g in self.grids: g.clear_data() self.io.queue.clear() def get_smallest_dx(self): """ Returns (in code units) the smallest cell size in the simulation. 
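        For example (sketch, assuming a loaded dataset ``ds``)::

            dx_min = ds.index.get_smallest_dx()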
""" return self.select_grids(self.grid_levels.max())[0].dds[:].min() def _get_particle_type_counts(self): return {self.ds.particle_types_raw[0]: self.grid_particle_count.sum()} def _initialize_level_stats(self): # Now some statistics: # 0 = number of grids # 1 = number of cells # 2 = blank desc = {"names": ["numgrids", "numcells", "level"], "formats": ["int64"] * 3} self.level_stats = blankRecordArray(desc, MAXLEVEL) self.level_stats["level"] = list(range(MAXLEVEL)) self.level_stats["numgrids"] = [0 for i in range(MAXLEVEL)] self.level_stats["numcells"] = [0 for i in range(MAXLEVEL)] for level in range(self.max_level + 1): self.level_stats[level]["numgrids"] = np.sum(self.grid_levels == level) li = self.grid_levels[:, 0] == level self.level_stats[level]["numcells"] = ( self.grid_dimensions[li, :].prod(axis=1).sum() ) @property def grid_corners(self): return np.array( [ [ self.grid_left_edge[:, 0], self.grid_left_edge[:, 1], self.grid_left_edge[:, 2], ], [ self.grid_right_edge[:, 0], self.grid_left_edge[:, 1], self.grid_left_edge[:, 2], ], [ self.grid_right_edge[:, 0], self.grid_right_edge[:, 1], self.grid_left_edge[:, 2], ], [ self.grid_left_edge[:, 0], self.grid_right_edge[:, 1], self.grid_left_edge[:, 2], ], [ self.grid_left_edge[:, 0], self.grid_left_edge[:, 1], self.grid_right_edge[:, 2], ], [ self.grid_right_edge[:, 0], self.grid_left_edge[:, 1], self.grid_right_edge[:, 2], ], [ self.grid_right_edge[:, 0], self.grid_right_edge[:, 1], self.grid_right_edge[:, 2], ], [ self.grid_left_edge[:, 0], self.grid_right_edge[:, 1], self.grid_right_edge[:, 2], ], ], dtype="float64", ) def lock_grids_to_parents(self): r"""This function locks grid edges to their parents. This is useful in cases where the grid structure may be somewhat irregular, or where setting the left and right edges is a lossy process. It is designed to correct situations where left/right edges may be set slightly incorrectly, resulting in discontinuities in images and the like. """ mylog.info("Locking grids to parents.") for i, g in enumerate(self.grids): si = g.get_global_startindex() g.LeftEdge = self.ds.domain_left_edge + g.dds * si g.RightEdge = g.LeftEdge + g.ActiveDimensions * g.dds self.grid_left_edge[i, :] = g.LeftEdge self.grid_right_edge[i, :] = g.RightEdge def print_stats(self): """ Prints out (stdout) relevant information about the simulation """ header = "{:>3}\t{:>6}\t{:>14}\t{:>14}".format( "level", "# grids", "# cells", "# cells^3" ) print(header) print(f"{len(header.expandtabs()) * '-'}") for level in range(MAXLEVEL): if (self.level_stats["numgrids"][level]) == 0: continue print( "% 3i\t% 6i\t% 14i\t% 14i" % ( level, self.level_stats["numgrids"][level], self.level_stats["numcells"][level], np.ceil(self.level_stats["numcells"][level] ** (1.0 / 3)), ) ) dx = self.select_grids(level)[0].dds[0] print("-" * 46) print( " \t% 6i\t% 14i" % (self.level_stats["numgrids"].sum(), self.level_stats["numcells"].sum()) ) print("\n") try: print(f"z = {self['CosmologyCurrentRedshift']:0.8f}") except Exception: pass print( "t = {:0.8e} = {:0.8e} = {:0.8e}".format( self.ds.current_time.in_units("code_time"), self.ds.current_time.in_units("s"), self.ds.current_time.in_units("yr"), ) ) print("\nSmallest Cell:") for item in ("Mpc", "pc", "AU", "cm"): print(f"\tWidth: {dx.in_units(item):0.3e}") def _find_field_values_at_points(self, fields, coords): r"""Find the value of fields at a set of coordinates. Returns the values [field1, field2,...] of the fields at the given (x, y, z) points. 
Returns a numpy array of field values cross coords """ coords = self.ds.arr(ensure_numpy_array(coords), "code_length") grids = self._find_points(coords[:, 0], coords[:, 1], coords[:, 2])[0] fields = list(iter_fields(fields)) mark = np.zeros(3, dtype="int64") out = [] # create point -> grid mapping grid_index = {} for coord_index, grid in enumerate(grids): if grid not in grid_index: grid_index[grid] = [] grid_index[grid].append(coord_index) out = [] for field in fields: funit = self.ds._get_field_info(field).units out.append(self.ds.arr(np.empty(len(coords)), funit)) for grid in grid_index: cellwidth = (grid.RightEdge - grid.LeftEdge) / grid.ActiveDimensions for field_index, field in enumerate(fields): for coord_index in grid_index[grid]: mark = (coords[coord_index, :] - grid.LeftEdge) / cellwidth mark = np.array(mark, dtype="int64") out[field_index][coord_index] = grid[field][ mark[0], mark[1], mark[2] ] if len(fields) == 1: return out[0] return out def _find_points(self, x, y, z): """ Returns the (objects, indices) of leaf grids containing a number of (x,y,z) points """ x = ensure_numpy_array(x) y = ensure_numpy_array(y) z = ensure_numpy_array(z) if not len(x) == len(y) == len(z): raise ValueError("Arrays of indices must be of the same size") grid_tree = self._get_grid_tree() pts = MatchPointsToGrids(grid_tree, len(x), x, y, z) ind = pts.find_points_in_tree() return self.grids[ind], ind def _get_grid_tree(self): left_edge = self.ds.arr(np.zeros((self.num_grids, 3)), "code_length") right_edge = self.ds.arr(np.zeros((self.num_grids, 3)), "code_length") level = np.zeros((self.num_grids), dtype="int64") parent_ind = np.zeros((self.num_grids), dtype="int64") num_children = np.zeros((self.num_grids), dtype="int64") dimensions = np.zeros((self.num_grids, 3), dtype="int32") for i, grid in enumerate(self.grids): left_edge[i, :] = grid.LeftEdge right_edge[i, :] = grid.RightEdge level[i] = grid.Level if grid.Parent is None: parent_ind[i] = -1 else: parent_ind[i] = grid.Parent.id - grid.Parent._id_offset num_children[i] = np.int64(len(grid.Children)) dimensions[i, :] = grid.ActiveDimensions return GridTree( self.num_grids, left_edge, right_edge, dimensions, parent_ind, level, num_children, ) def convert(self, unit): return self.dataset.conversion_factors[unit] def _identify_base_chunk(self, dobj): fast_index = None if dobj._type_name == "grid": dobj._chunk_info = np.empty(1, dtype="object") dobj._chunk_info[0] = weakref.proxy(dobj) elif getattr(dobj, "_grids", None) is None: gi = dobj.selector.select_grids( self.grid_left_edge, self.grid_right_edge, self.grid_levels ) if any(g.filename is not None for g in self.grids[gi]): _gsort = _grid_sort_mixed else: _gsort = _grid_sort_id grids = sorted(self.grids[gi], key=_gsort) dobj._chunk_info = np.empty(len(grids), dtype="object") for i, g in enumerate(grids): dobj._chunk_info[i] = g # These next two lines, when uncommented, turn "on" the fast index. 
# if dobj._type_name != "grid": # fast_index = self._get_grid_tree() if getattr(dobj, "size", None) is None: dobj.size = self._count_selection(dobj, fast_index=fast_index) if getattr(dobj, "shape", None) is None: dobj.shape = (dobj.size,) dobj._current_chunk = list( self._chunk_all(dobj, cache=False, fast_index=fast_index) )[0] def _count_selection(self, dobj, grids=None, fast_index=None): if fast_index is not None: return fast_index.count(dobj.selector) if grids is None: grids = dobj._chunk_info count = sum(g.count(dobj.selector) for g in grids) return count def _chunk_all(self, dobj, cache=True, fast_index=None): gobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info) fast_index = fast_index or getattr(dobj._current_chunk, "_fast_index", None) yield YTDataChunk(dobj, "all", gobjs, dobj.size, cache, fast_index=fast_index) def _chunk_spatial(self, dobj, ngz, sort=None, preload_fields=None): gobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info) if sort in ("+level", "level"): giter = sorted(gobjs, key=lambda g: g.Level) elif sort == "-level": giter = sorted(gobjs, key=lambda g: -g.Level) elif sort is None: giter = gobjs if preload_fields is None: preload_fields = [] preload_fields, _ = self._split_fields(preload_fields) if self._preload_implemented and len(preload_fields) > 0 and ngz == 0: giter = ChunkDataCache(list(giter), preload_fields, self) for og in giter: if ngz > 0: g = og.retrieve_ghost_zones(ngz, [], smoothed=True) else: g = og size = self._count_selection(dobj, [og]) if size == 0: continue # We don't want to cache any of the masks or icoords or fcoords for # individual grids. yield YTDataChunk(dobj, "spatial", [g], size, cache=False) _grid_chunksize = 1000 def _chunk_io( self, dobj, cache=True, local_only=False, preload_fields=None, chunk_sizing="auto", ): # local_only is only useful for inline datasets and requires # implementation by subclasses. if preload_fields is None: preload_fields = [] preload_fields, _ = self._split_fields(preload_fields) gfiles = defaultdict(list) gobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info) fast_index = dobj._current_chunk._fast_index for g in gobjs: # Force to be a string because sometimes g.filename is None. gfiles[str(g.filename)].append(g) # We can apply a heuristic here to make sure we aren't loading too # many grids all at once. if chunk_sizing == "auto": chunk_ngrids = len(gobjs) if chunk_ngrids > 0: nproc = int(ytcfg.get("yt", "internals", "global_parallel_size")) chunking_factor = np.int64( np.ceil(self._grid_chunksize * nproc / chunk_ngrids) ) size = max(self._grid_chunksize // chunking_factor, 1) else: size = self._grid_chunksize elif chunk_sizing == "config_file": size = ytcfg.get("yt", "chunk_size") elif chunk_sizing == "just_one": size = 1 elif chunk_sizing == "old": size = self._grid_chunksize else: raise RuntimeError( f"{chunk_sizing} is an invalid value for the 'chunk_sizing' argument." ) for fn in sorted(gfiles): gs = gfiles[fn] for grids in (gs[pos : pos + size] for pos in range(0, len(gs), size)): dc = YTDataChunk( dobj, "io", grids, self._count_selection(dobj, grids), cache=cache, fast_index=fast_index, ) # We allow four full chunks to be included. with self.io.preload(dc, preload_fields, 4.0 * size): yield dc def _icoords_to_fcoords( self, icoords: np.ndarray, ires: np.ndarray, axes: tuple[int, ...] | None = None, ) -> tuple[np.ndarray, np.ndarray]: """ Accepts icoords and ires and returns appropriate fcoords and fwidth. 
Mostly useful for cases where we have irregularly spaced or structured grids. """ dds = self.ds.domain_width[axes,] / ( self.ds.domain_dimensions[axes,] * self.ds.refine_by ** ires[:, None] ) pos = (0.5 + icoords) * dds + self.ds.domain_left_edge[axes,] return pos, dds def _add_mesh_sampling_particle_field(self, deposit_field, ftype, ptype): units = self.ds.field_info[ftype, deposit_field].units take_log = self.ds.field_info[ftype, deposit_field].take_log field_name = f"cell_{ftype}_{deposit_field}" def _mesh_sampling_particle_field(field, data): pos = data[ptype, "particle_position"] field_values = data[ftype, deposit_field] if isinstance(data, FieldDetector): return np.zeros(pos.shape[0]) i, j, k = np.floor((pos - data.LeftEdge) / data.dds).astype("int64").T # Make sure all particles are within the current grid, otherwise return nan maxi, maxj, maxk = field_values.shape mask = (i < maxi) & (j < maxj) & (k < maxk) mask &= (i >= 0) & (j >= 0) & (k >= 0) result = np.full(len(pos), np.nan, dtype="float64") if result.shape[0] > 0: result[mask] = field_values[i[mask], j[mask], k[mask]] return data.ds.arr(result, field_values.units) self.ds.add_field( (ptype, field_name), function=_mesh_sampling_particle_field, sampling_type="particle", units=units, take_log=take_log, validators=[ValidateSpatial()], ) def _grid_sort_id(g): return g.id def _grid_sort_mixed(g): if g.filename is None: return str(g.id) return g.filename ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/grid_visitors.pxd0000644000175100001770000000465014714401662017465 0ustar00runnerdocker""" Grid visitor definitions file """ cimport numpy as np cdef struct GridTreeNode: np.int32_t num_children np.int32_t level np.int64_t index np.float64_t left_edge[3] np.float64_t right_edge[3] GridTreeNode **children np.int64_t start_index[3] np.int32_t dims[3] np.float64_t dds[3] cdef struct GridTreeNodePadded: np.int32_t num_children np.int32_t level np.int64_t index np.float64_t left_edge_x np.float64_t left_edge_y np.float64_t left_edge_z np.float64_t right_edge_x np.float64_t right_edge_y np.float64_t right_edge_z np.int64_t children_pointers np.int64_t start_index_x np.int64_t start_index_y np.int64_t start_index_z np.int32_t dims_x np.int32_t dims_y np.int32_t dims_z np.float64_t dds_x np.float64_t dds_y np.float64_t dds_z cdef struct GridVisitorData: GridTreeNode *grid np.uint64_t index np.uint64_t global_index np.int64_t pos[3] # position in ints int n_tuples int **child_tuples # [N_child][6], where 0-1 are x_start, x_end, etc. void *array int ref_factor # This may change on a grid-by-grid basis # It is the number of cells a child grid has per dimension # in a cell of this grid. cdef void free_tuples(GridVisitorData *data) nogil cdef void setup_tuples(GridVisitorData *data) nogil cdef np.uint8_t check_child_masked(GridVisitorData *data) nogil ctypedef void grid_visitor_function(GridVisitorData *data, np.uint8_t selected) nogil # This is similar in spirit to the way oct visitor functions work. However, # there are a few important differences. Because the grid objects are expected # to be bigger, we don't need to pass them along -- we will not be recursively # visiting. So the GridVisitorData will be updated in between grids. # Furthermore, we're only going to use them for a much smaller subset of # operations. 
All child mask evaluation is going to be conducted inside the # outermost level of the visitor function, and visitor functions will receive # information about whether they have been selected and whether they are # covered by child cells. cdef grid_visitor_function count_cells cdef grid_visitor_function mask_cells cdef grid_visitor_function icoords_cells cdef grid_visitor_function ires_cells cdef grid_visitor_function fcoords_cells cdef grid_visitor_function fwidth_cells ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/grid_visitors.pyx0000644000175100001770000001160114714401662017504 0ustar00runnerdocker# distutils: include_dirs = LIB_DIR # distutils: libraries = STD_LIBS """ Grid visitor functions """ cimport cython cimport numpy as np from libc.stdlib cimport free, malloc from yt.utilities.lib.bitarray cimport ba_set_value from yt.utilities.lib.fp_utils cimport iclip cdef void free_tuples(GridVisitorData *data) nogil: # This wipes out the tuples, which is necessary since they are # heap-allocated cdef int i if data.child_tuples == NULL: return for i in range(data.n_tuples): free(data.child_tuples[i]) free(data.child_tuples) data.child_tuples = NULL data.n_tuples = 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void setup_tuples(GridVisitorData *data) nogil: # This sets up child-mask tuples. Rather than a single mask that covers # everything, we instead allocate pairs of integers that are start/stop # positions for child masks. This may not be considerably more efficient # memory-wise, but it is easier to keep and save when going through # multiple grids and selectors. cdef int i, j cdef np.int64_t si, ei cdef GridTreeNode *g cdef GridTreeNode *c free_tuples(data) g = data.grid data.child_tuples = malloc(sizeof(int*) * g.num_children) for i in range(g.num_children): c = g.children[i] data.child_tuples[i] = malloc(sizeof(int) * 6) # Now we fill them in for j in range(3): si = (c.start_index[j] / data.ref_factor) - g.start_index[j] ei = si + c.dims[j]/data.ref_factor - 1 data.child_tuples[i][j*2+0] = iclip(si, 0, g.dims[j] - 1) data.child_tuples[i][j*2+1] = iclip(ei, 0, g.dims[j] - 1) data.n_tuples = g.num_children @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef np.uint8_t check_child_masked(GridVisitorData *data) nogil: # This simply checks if we're inside any of the tuples. Probably not the # most efficient way, but the GVD* passed in has a position affiliated with # it, and we can very easily look for that inside here. cdef int i, j, k cdef int *tup for i in range(data.n_tuples): # k is if we're inside a given child tuple. We check each one # individually, and invalidate if we're outside. k = 1 tup = data.child_tuples[i] for j in range(3): # Check if pos is outside in any of the three dimensions if data.pos[j] < tup[j*2+0] or data.pos[j] > tup[j*2+1]: k = 0 break if k == 1: return 1 # Return 1 for child masked return 0 # Only return 0 if it doesn't match any of the children @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void count_cells(GridVisitorData *data, np.uint8_t selected) nogil: # Simply increment for each one, if we've selected it. 
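    # The caller aims data.array at a single uint64 counter (see the select_*
    # methods in grid_container.pyx), which enables a two-pass pattern: visit
    # once with count_cells to size the output buffer, then visit again with
    # a fill visitor such as fcoords_cells or fwidth_cells.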
if selected == 0: return cdef np.uint64_t *count = data.array count[0] += 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void mask_cells(GridVisitorData *data, np.uint8_t selected) nogil: # Set our bitarray -- we're creating a mask -- if we are selected. if selected == 0: return cdef np.uint8_t *mask = data.array ba_set_value(mask, data.global_index, 1) # No need to increment anything. @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void icoords_cells(GridVisitorData *data, np.uint8_t selected) nogil: # Nice and easy icoord setter. if selected == 0: return cdef int i cdef np.int64_t *icoords = data.array for i in range(3): icoords[data.index * 3 + i] = data.pos[i] + data.grid.start_index[i] data.index += 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void ires_cells(GridVisitorData *data, np.uint8_t selected) nogil: # Fill with the level value. if selected == 0: return cdef np.int64_t *ires = data.array ires[data.index] = data.grid.level data.index += 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void fwidth_cells(GridVisitorData *data, np.uint8_t selected) nogil: # Fill with our dds. if selected == 0: return cdef int i cdef np.float64_t *fwidth = data.array for i in range(3): fwidth[data.index * 3 + i] = data.grid.dds[i] data.index += 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void fcoords_cells(GridVisitorData *data, np.uint8_t selected) nogil: # Simple cell-centered position filling. if selected == 0: return cdef int i cdef np.float64_t *fcoords = data.array for i in range(3): fcoords[data.index * 3 + i] = data.grid.left_edge[i] + \ (0.5 + data.pos[i])*data.grid.dds[i] data.index += 1 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/oct_container.pxd0000644000175100001770000000766514714401662017436 0ustar00runnerdocker""" Oct definitions file """ cimport cython cimport numpy as np from libc.math cimport floor from libc.stdlib cimport bsearch, free, malloc, qsort, realloc from yt.geometry cimport oct_visitors, selection_routines from yt.utilities.lib.allocation_container cimport AllocationContainer, ObjectPool from yt.utilities.lib.fp_utils cimport * from .oct_visitors cimport Oct, OctInfo, OctVisitor, cind cdef int ORDER_MAX cdef struct OctKey: np.int64_t key Oct *node # These next two are for particle sparse octrees. np.int64_t *indices np.int64_t pcount cdef struct OctList cdef struct OctList: OctList *next Oct *o # NOTE: This object *has* to be the same size as the AllocationContainer # object. There's an assert in the __cinit__ function. cdef struct OctAllocationContainer: np.uint64_t n np.uint64_t n_assigned np.uint64_t offset np.int64_t con_id # container id Oct *my_objs cdef class OctObjectPool(ObjectPool): cdef inline OctAllocationContainer *get_cont(self, int i): return (&self.containers[i]) cdef OctList *OctList_append(OctList *list, Oct *o) cdef int OctList_count(OctList *list) cdef void OctList_delete(OctList *list) cdef class OctreeContainer: cdef public OctObjectPool domains cdef Oct ****root_mesh cdef int partial_coverage cdef int level_offset cdef int nn[3] cdef np.uint8_t nz cdef np.float64_t DLE[3] cdef np.float64_t DRE[3] cdef public np.int64_t nocts cdef public int num_domains cdef Oct *get(self, np.float64_t ppos[3], OctInfo *oinfo = ?, int max_level = ?) 
noexcept nogil cdef int get_root(self, int ind[3], Oct **o) noexcept nogil cdef Oct **neighbors(self, OctInfo *oinfo, np.int64_t *nneighbors, Oct *o, bint periodicity[3]) # This function must return the offset from global-to-local domains; i.e., # AllocationContainer.offset if such a thing exists. cdef np.int64_t get_domain_offset(self, int domain_id) cdef void visit_all_octs(self, selection_routines.SelectorObject selector, OctVisitor visitor, int vc = ?, np.int64_t *indices = ?) cdef Oct *next_root(self, int domain_id, int ind[3]) cdef Oct *next_child(self, int domain_id, int ind[3], Oct *parent) except? NULL cdef void append_domain(self, np.int64_t domain_count) # The fill_style is the ordering, C or F, of the octs in the file. "o" # corresponds to C, and "r" is for Fortran. cdef public object fill_style cdef public int max_level cpdef void fill_level( self, const int level, const np.uint8_t[::1] level_inds, const np.uint8_t[::1] cell_inds, const np.int64_t[::1] file_inds, dict dest_fields, dict source_fields, np.int64_t offset = ? ) cpdef int fill_level_with_domain( self, const int level, const np.uint8_t[::1] level_inds, const np.uint8_t[::1] cell_inds, const np.int64_t[::1] file_inds, const np.int32_t[::1] domain_inds, dict dest_fields, dict source_fields, const np.int32_t domain, np.int64_t offset = ? ) cdef class SparseOctreeContainer(OctreeContainer): cdef OctKey *root_nodes cdef void *tree_root cdef int num_root cdef int max_root cdef void key_to_ipos(self, np.int64_t key, np.int64_t pos[3]) cdef np.int64_t ipos_to_key(self, int pos[3]) noexcept nogil cdef class RAMSESOctreeContainer(SparseOctreeContainer): pass cdef extern from "tsearch.h" nogil: void *tsearch(const void *key, void **rootp, int (*compar)(const void *, const void *)) void *tfind(const void *key, const void **rootp, int (*compar)(const void *, const void *)) void *tdelete(const void *key, void **rootp, int (*compar)(const void *, const void *)) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/oct_container.pyx0000644000175100001770000013462114714401662017454 0ustar00runnerdocker# distutils: sources = yt/utilities/lib/tsearch.c # distutils: include_dirs = LIB_DIR # distutils: libraries = STD_LIBS """ Oct container """ cimport cython cimport numpy as np import numpy as np from libc.math cimport floor from .oct_visitors cimport ( NeighbourCellIndexVisitor, NeighbourCellVisitor, OctPadded, StoreIndex, ) from .selection_routines cimport AlwaysSelector, SelectorObject ORDER_MAX = 20 _ORDER_MAX = ORDER_MAX cdef extern from "stdlib.h": # NOTE that size_t might not be int void *alloca(int) # Here is the strategy for RAMSES containers: # * Read each domain individually, creating *all* octs found in that domain # file, even if they reside on other CPUs. # * Only allocate octs that reside on >= domain # * For all octs, insert into tree, which may require traversing existing # octs # * Note that this does not allow one component of an ObjectPool (an # AllocationContainer) to exactly be a chunk, but it is close. For IO # chunking, we can theoretically examine those octs that live inside a # given allocator. 
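# Usage sketch for the (de)serialization pair defined below -- a hedged
# example, with `octree` standing in for an existing OctreeContainer:
#
#     header = octree.save_octree()             # dict; 'octree' holds ref_mask
#     clone = OctreeContainer.load_octree(header)
#
# load_octree() rebuilds the root mesh and recursively re-creates child octs
# from the ref_mask that save_octree() recorded.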
cdef class OctreeContainer: def __init__(self, oct_domain_dimensions, domain_left_edge, domain_right_edge, partial_coverage = 0, num_zones = 2): # This will just initialize the root mesh octs self.nz = num_zones self.partial_coverage = partial_coverage cdef int i for i in range(3): self.nn[i] = oct_domain_dimensions[i] self.num_domains = 0 self.level_offset = 0 self.domains = OctObjectPool() self.nocts = 0 # Increment when initialized for i in range(3): self.DLE[i] = domain_left_edge[i] #0 self.DRE[i] = domain_right_edge[i] #num_grid self._initialize_root_mesh() self.fill_style = "o" def _initialize_root_mesh(self): self.root_mesh = malloc(sizeof(void*) * self.nn[0]) for i in range(self.nn[0]): self.root_mesh[i] = malloc(sizeof(void*) * self.nn[1]) for j in range(self.nn[1]): self.root_mesh[i][j] = malloc(sizeof(void*) * self.nn[2]) for k in range(self.nn[2]): self.root_mesh[i][j][k] = NULL @property def oct_arrays(self): return self.domains.to_arrays() @classmethod def load_octree(cls, header): cdef np.ndarray[np.uint8_t, ndim=1] ref_mask ref_mask = header['octree'] cdef OctreeContainer obj = cls(header['dims'], header['left_edge'], header['right_edge'], num_zones = header['num_zones'], partial_coverage = header['partial_coverage']) # NOTE: We do not allow domain/file indices to be specified. cdef SelectorObject selector = AlwaysSelector(None) cdef oct_visitors.LoadOctree visitor visitor = oct_visitors.LoadOctree(obj, -1) cdef int i, j, k visitor.global_index = -1 visitor.level = 0 visitor.nz = visitor.nzones = 1 visitor.max_level = 0 assert(ref_mask.shape[0] / float(visitor.nzones) == (ref_mask.shape[0]/float(visitor.nzones))) obj.allocate_domains([ref_mask.shape[0] / visitor.nzones]) cdef np.float64_t pos[3] cdef np.float64_t dds[3] # This dds is the oct-width for i in range(3): dds[i] = (obj.DRE[i] - obj.DLE[i]) / obj.nn[i] # Pos is the center of the octs cdef OctAllocationContainer *cur = obj.domains.get_cont(0) cdef Oct *o cdef np.uint64_t nfinest = 0 visitor.ref_mask = ref_mask visitor.octs = cur.my_objs visitor.nocts = &cur.n_assigned visitor.nfinest = &nfinest pos[0] = obj.DLE[0] + dds[0]/2.0 for i in range(obj.nn[0]): pos[1] = obj.DLE[1] + dds[1]/2.0 for j in range(obj.nn[1]): pos[2] = obj.DLE[2] + dds[2]/2.0 for k in range(obj.nn[2]): if obj.root_mesh[i][j][k] != NULL: raise RuntimeError o = &cur.my_objs[cur.n_assigned] o.domain_ind = o.file_ind = 0 o.domain = 1 obj.root_mesh[i][j][k] = o cur.n_assigned += 1 visitor.pos[0] = i visitor.pos[1] = j visitor.pos[2] = k # Always visit covered selector.recursively_visit_octs( obj.root_mesh[i][j][k], pos, dds, 0, visitor, 1) pos[2] += dds[2] pos[1] += dds[1] pos[0] += dds[0] obj.nocts = cur.n_assigned if obj.nocts * visitor.nz != ref_mask.size: raise KeyError(ref_mask.size, obj.nocts, obj.nz, obj.partial_coverage, visitor.nzones) obj.max_level = visitor.max_level return obj def __dealloc__(self): if self.root_mesh == NULL: return for i in range(self.nn[0]): if self.root_mesh[i] == NULL: continue for j in range(self.nn[1]): if self.root_mesh[i][j] == NULL: continue free(self.root_mesh[i][j]) if self.root_mesh[i] == NULL: continue free(self.root_mesh[i]) free(self.root_mesh) @cython.cdivision(True) cdef void visit_all_octs(self, SelectorObject selector, OctVisitor visitor, int vc = -1, np.int64_t *indices = NULL): cdef int i, j, k if vc == -1: vc = self.partial_coverage visitor.global_index = -1 visitor.level = 0 cdef np.float64_t pos[3] cdef np.float64_t dds[3] # This dds is the oct-width for i in range(3): dds[i] = (self.DRE[i] - 
self.DLE[i]) / self.nn[i] # Pos is the center of the octs pos[0] = self.DLE[0] + dds[0]/2.0 for i in range(self.nn[0]): pos[1] = self.DLE[1] + dds[1]/2.0 for j in range(self.nn[1]): pos[2] = self.DLE[2] + dds[2]/2.0 for k in range(self.nn[2]): if self.root_mesh[i][j][k] == NULL: raise KeyError(i,j,k) visitor.pos[0] = i visitor.pos[1] = j visitor.pos[2] = k selector.recursively_visit_octs( self.root_mesh[i][j][k], pos, dds, 0, visitor, vc) pos[2] += dds[2] pos[1] += dds[1] pos[0] += dds[0] cdef np.int64_t get_domain_offset(self, int domain_id): return 0 def _check_root_mesh(self): cdef count = 0 for i in range(self.nn[0]): for j in range(self.nn[1]): for k in range(self.nn[2]): if self.root_mesh[i][j][k] == NULL: print("Missing ", i, j, k) count += 1 print("Missing total of %s out of %s" % (count, self.nn[0] * self.nn[1] * self.nn[2])) cdef int get_root(self, int ind[3], Oct **o) noexcept nogil: cdef int i for i in range(3): if ind[i] < 0 or ind[i] >= self.nn[i]: o[0] = NULL return 1 o[0] = self.root_mesh[ind[0]][ind[1]][ind[2]] return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef Oct *get(self, np.float64_t ppos[3], OctInfo *oinfo = NULL, int max_level = 99) noexcept nogil: #Given a floating point position, retrieve the most #refined oct at that time cdef int ind32[3] cdef np.int64_t ipos[3] cdef np.float64_t dds[3] cdef np.float64_t cp[3] cdef Oct *cur cdef Oct *next cdef int i cur = next = NULL cdef np.int64_t ind[3] cdef np.int64_t level = -1 for i in range(3): dds[i] = (self.DRE[i] - self.DLE[i])/self.nn[i] ind[i] = (floor((ppos[i] - self.DLE[i])/dds[i])) cp[i] = (ind[i] + 0.5) * dds[i] + self.DLE[i] ipos[i] = 0 # We add this to ind later, so it should be zero. ind32[i] = ind[i] self.get_root(ind32, &next) # We want to stop recursing when there's nowhere else to go while next != NULL and level < max_level: level += 1 for i in range(3): ipos[i] = (ipos[i] << 1) + ind[i] cur = next for i in range(3): dds[i] = dds[i] / 2.0 if cp[i] > ppos[i]: ind[i] = 0 cp[i] -= dds[i] / 2.0 else: ind[i] = 1 cp[i] += dds[i]/2.0 if cur.children != NULL: next = cur.children[cind(ind[0],ind[1],ind[2])] else: next = NULL if oinfo == NULL: return cur cdef np.float64_t factor = 1.0 / self.nz * 2 for i in range(3): # We don't normally need to change dds[i] as it has been halved # from the oct width, thus making it already the cell width. # But, since not everything has the cell width equal to have the # width of the oct, we need to apply "factor". oinfo.dds[i] = dds[i] * factor # Cell width oinfo.ipos[i] = ipos[i] oinfo.left_edge[i] = oinfo.ipos[i] * (oinfo.dds[i] * self.nz) + self.DLE[i] oinfo.level = level return cur def locate_positions(self, np.float64_t[:,:] positions): """ This routine, meant to be called by other internal routines, returns a list of oct IDs and a dictionary of Oct info for all the positions supplied. Positions must be in code_length. 
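        A minimal usage sketch (hypothetical two-point input, with ``octree``
        an existing container)::

            >>> import numpy as np
            >>> pos = np.array([[0.1, 0.2, 0.3], [0.5, 0.5, 0.5]])
            >>> oct_ids, oct_info = octree.locate_positions(pos)
            >>> oct_info[oct_ids[0]]["level"]  # also: left_edge, right_edge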
""" cdef np.float64_t factor = self.nz cdef dict all_octs = {} cdef OctInfo oi cdef Oct* o = NULL cdef np.float64_t pos[3] cdef np.ndarray[np.uint8_t, ndim=1] recorded cdef np.ndarray[np.int64_t, ndim=1] oct_id oct_id = np.ones(positions.shape[0], dtype="int64") * -1 recorded = np.zeros(self.nocts, dtype="uint8") cdef np.int64_t i, j for i in range(positions.shape[0]): for j in range(3): pos[j] = positions[i,j] o = self.get(pos, &oi) if o == NULL: raise RuntimeError if recorded[o.domain_ind] == 0: left_edge = np.asarray(oi.left_edge).copy() dds = np.asarray(oi.dds).copy() right_edge = left_edge + dds*factor all_octs[o.domain_ind] = dict( left_edge = left_edge, right_edge = right_edge, level = oi.level ) recorded[o.domain_ind] = 1 oct_id[i] = o.domain_ind return oct_id, all_octs def domain_identify(self, SelectorObject selector): cdef np.ndarray[np.uint8_t, ndim=1] domain_mask domain_mask = np.zeros(self.num_domains, dtype="uint8") cdef oct_visitors.IdentifyOcts visitor visitor = oct_visitors.IdentifyOcts(self) visitor.domain_mask = domain_mask self.visit_all_octs(selector, visitor) cdef int i domain_ids = [] for i in range(self.num_domains): if domain_mask[i] == 1: domain_ids.append(i+1) return domain_ids @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef Oct** neighbors(self, OctInfo *oi, np.int64_t *nneighbors, Oct *o, bint periodicity[3]): # We are going to do a brute-force search here. # This is not the most efficient -- in fact, it's relatively bad. But # we will attempt to improve it in a future iteration, where we will # grow a stack of parent Octs. # Note that in the first iteration, we will just find the up-to-27 # neighbors, including the main oct. cdef np.int64_t i, j, k, n, level, ii, dlevel cdef int ind[3] cdef OctList *olist cdef OctList *my_list my_list = olist = NULL cdef Oct *cand cdef np.int64_t npos[3] cdef np.int64_t ndim[3] # Now we get our boundaries for this level, so that we can wrap around # if need be. # ndim is the oct dimensions of the level, not the cell dimensions. for i in range(3): ndim[i] = ((self.DRE[i] - self.DLE[i]) / oi.dds[i]) # Here we adjust for oi.dds meaning *cell* width. ndim[i] = (ndim[i] / self.nz) my_list = olist = OctList_append(NULL, o) for i in range(3): npos[0] = (oi.ipos[0] + (1 - i)) if not periodicity[0] and not \ (0 <= npos[0] < ndim[0]): continue elif npos[0] < 0: npos[0] += ndim[0] elif npos[0] >= ndim[0]: npos[0] -= ndim[0] for j in range(3): npos[1] = (oi.ipos[1] + (1 - j)) if not periodicity[1] and not \ (0 <= npos[1] < ndim[1]): continue elif npos[1] < 0: npos[1] += ndim[1] elif npos[1] >= ndim[1]: npos[1] -= ndim[1] for k in range(3): npos[2] = (oi.ipos[2] + (1 - k)) if not periodicity[2] and not \ (0 <= npos[2] < ndim[2]): continue if npos[2] < 0: npos[2] += ndim[2] if npos[2] >= ndim[2]: npos[2] -= ndim[2] # Now we have our npos, which we just need to find. # Level 0 gets bootstrapped for n in range(3): ind[n] = ((npos[n] >> (oi.level)) & 1) cand = NULL self.get_root(ind, &cand) # We should not get a NULL if we handle periodicity # correctly, but we might. 
if cand == NULL: continue for level in range(1, oi.level+1): dlevel = oi.level - level if cand.children == NULL: break for n in range(3): ind[n] = (npos[n] >> dlevel) & 1 ii = cind(ind[0],ind[1],ind[2]) if cand.children[ii] == NULL: break cand = cand.children[ii] if cand.children != NULL: olist = OctList_subneighbor_find( olist, cand, i, j, k) else: olist = OctList_append(olist, cand) olist = my_list cdef int noct = OctList_count(olist) cdef Oct **neighbors neighbors = malloc(sizeof(Oct*)*noct) for i in range(noct): neighbors[i] = olist.o olist = olist.next OctList_delete(my_list) nneighbors[0] = noct return neighbors @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def mask(self, SelectorObject selector, np.int64_t num_cells = -1, int domain_id = -1): if num_cells == -1: num_cells = selector.count_octs(self, domain_id) cdef np.ndarray[np.uint8_t, ndim=4] mask cdef oct_visitors.MaskOcts visitor visitor = oct_visitors.MaskOcts(self, domain_id) cdef int ns = self.nz mask = np.zeros((num_cells, ns, ns, ns), dtype="uint8") visitor.mask = mask self.visit_all_octs(selector, visitor) return mask.astype("bool") @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def icoords(self, SelectorObject selector, np.int64_t num_cells = -1, int domain_id = -1): if num_cells == -1: num_cells = selector.count_oct_cells(self, domain_id) cdef oct_visitors.ICoordsOcts visitor visitor = oct_visitors.ICoordsOcts(self, domain_id) cdef np.ndarray[np.int64_t, ndim=2] coords coords = np.empty((num_cells, 3), dtype="int64") visitor.icoords = coords self.visit_all_octs(selector, visitor) return coords @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def ires(self, SelectorObject selector, np.int64_t num_cells = -1, int domain_id = -1): cdef int i if num_cells == -1: num_cells = selector.count_oct_cells(self, domain_id) cdef oct_visitors.IResOcts visitor visitor = oct_visitors.IResOcts(self, domain_id) #Return the 'resolution' of each cell; ie the level cdef np.ndarray[np.int64_t, ndim=1] res res = np.empty(num_cells, dtype="int64") visitor.ires = res self.visit_all_octs(selector, visitor) if self.level_offset > 0: for i in range(num_cells): res[i] += self.level_offset return res @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def fwidth(self, SelectorObject selector, np.int64_t num_cells = -1, int domain_id = -1): if num_cells == -1: num_cells = selector.count_oct_cells(self, domain_id) cdef oct_visitors.FWidthOcts visitor visitor = oct_visitors.FWidthOcts(self, domain_id) cdef np.ndarray[np.float64_t, ndim=2] fwidth fwidth = np.empty((num_cells, 3), dtype="float64") visitor.fwidth = fwidth self.visit_all_octs(selector, visitor) cdef np.float64_t base_dx for i in range(3): base_dx = (self.DRE[i] - self.DLE[i])/self.nn[i] fwidth[:,i] *= base_dx return fwidth @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def fcoords(self, SelectorObject selector, np.int64_t num_cells = -1, int domain_id = -1): if num_cells == -1: num_cells = selector.count_oct_cells(self, domain_id) cdef oct_visitors.FCoordsOcts visitor visitor = oct_visitors.FCoordsOcts(self, domain_id) #Return the floating point unitary position of every cell cdef np.ndarray[np.float64_t, ndim=2] coords coords = np.empty((num_cells, 3), dtype="float64") visitor.fcoords = coords self.visit_all_octs(selector, visitor) cdef int i cdef np.float64_t base_dx for i in range(3): base_dx = (self.DRE[i] - self.DLE[i])/self.nn[i] coords[:,i] 
*= base_dx coords[:,i] += self.DLE[i] return coords def save_octree(self): # Get the header header = dict(dims = (self.nn[0], self.nn[1], self.nn[2]), left_edge = (self.DLE[0], self.DLE[1], self.DLE[2]), right_edge = (self.DRE[0], self.DRE[1], self.DRE[2]), num_zones = self.nz, partial_coverage = self.partial_coverage) cdef SelectorObject selector = AlwaysSelector(None) # domain_id = -1 here, because we want *every* oct cdef oct_visitors.StoreOctree visitor visitor = oct_visitors.StoreOctree(self, -1) visitor.nz = 1 cdef np.ndarray[np.uint8_t, ndim=1] ref_mask ref_mask = np.zeros(self.nocts * visitor.nzones, dtype="uint8") - 1 visitor.ref_mask = ref_mask # Enforce partial_coverage here self.visit_all_octs(selector, visitor, 1) header['octree'] = ref_mask return header def selector_fill(self, SelectorObject selector, np.ndarray source, np.ndarray dest = None, np.int64_t offset = 0, int dims = 1, int domain_id = -1): # This is actually not correct. The hard part is that we need to # iterate the same way visit_all_octs does, but we need to track the # number of octs total visited. cdef np.int64_t num_cells = -1 if dest is None: # Note that RAMSES can have partial refinement inside an Oct. This # means we actually do want the number of Octs, not the number of # cells. num_cells = selector.count_oct_cells(self, domain_id) dest = np.zeros((num_cells, dims), dtype=source.dtype, order='C') if dims != 1: raise RuntimeError # Just make sure that we're in the right shape. Ideally this will not # duplicate memory. Since we're in Cython, we want to avoid modifying # the .shape attributes directly. dest = dest.reshape((num_cells, 1)) source = source.reshape((source.shape[0], source.shape[1], source.shape[2], source.shape[3], dims)) cdef OctVisitor visitor cdef oct_visitors.CopyArrayI64 visitor_i64 cdef oct_visitors.CopyArrayF64 visitor_f64 if source.dtype != dest.dtype: raise RuntimeError if source.dtype == np.int64: visitor_i64 = oct_visitors.CopyArrayI64(self, domain_id) visitor_i64.source = source visitor_i64.dest = dest visitor = visitor_i64 elif source.dtype == np.float64: visitor_f64 = oct_visitors.CopyArrayF64(self, domain_id) visitor_f64.source = source visitor_f64.dest = dest visitor = visitor_f64 else: raise NotImplementedError visitor.index = offset # We only need this so we can continue calculating the offset visitor.dims = dims self.visit_all_octs(selector, visitor) if (visitor.global_index + 1) * visitor.nzones * visitor.dims > source.size: print("GLOBAL INDEX RAN AHEAD.",) print (visitor.global_index + 1) * visitor.nzones * visitor.dims - source.size print(dest.size, source.size, num_cells) raise RuntimeError if visitor.index > dest.size: print("DEST INDEX RAN AHEAD.",) print(visitor.index - dest.size) print (visitor.global_index + 1) * visitor.nzones * visitor.dims, source.size print(num_cells) raise RuntimeError if num_cells >= 0: return dest return visitor.index - offset def domain_ind(self, selector, int domain_id = -1): cdef np.ndarray[np.int64_t, ndim=1] ind # Here's where we grab the masked items. 
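        # ind has one entry per oct in the container, keyed by domain_ind:
        # octs reached by the traversal receive a running index, while octs
        # never visited keep the -1 sentinel.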
ind = np.full(self.nocts, -1, 'int64') cdef oct_visitors.IndexOcts visitor visitor = oct_visitors.IndexOcts(self, domain_id) visitor.oct_index = ind self.visit_all_octs(selector, visitor) return ind @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def add(self, int curdom, int curlevel, np.ndarray[np.float64_t, ndim=2] pos, int skip_boundary = 1, int count_boundary = 0, np.ndarray[np.uint64_t, ndim=1, cast=True] levels = None ): # In this function, if we specify curlevel = -1, then we query the # (optional) levels array for the oct level. cdef int no, p, i cdef int ind[3] cdef int nb = 0 cdef Oct *cur cdef np.float64_t pp[3] cdef np.float64_t cp[3] cdef np.float64_t dds[3] no = pos.shape[0] #number of octs if curdom > self.num_domains: return 0 cdef OctAllocationContainer *cont = self.domains.get_cont(curdom - 1) cdef int initial = cont.n_assigned cdef int in_boundary = 0 # How do we bootstrap ourselves? for p in range(no): # We allow specifying curlevel = -1 to query from the levels array # instead. if curlevel == -1: this_level = levels[p] else: this_level = curlevel #for every oct we're trying to add find the #floating point unitary position on this level in_boundary = 0 for i in range(3): pp[i] = pos[p, i] dds[i] = (self.DRE[i] - self.DLE[i])/self.nn[i] ind[i] = ((pp[i] - self.DLE[i])/dds[i]) cp[i] = (ind[i] + 0.5) * dds[i] + self.DLE[i] if ind[i] < 0 or ind[i] >= self.nn[i]: in_boundary = 1 if skip_boundary == in_boundary == 1: nb += count_boundary continue cur = self.next_root(curdom, ind) if cur == NULL: raise RuntimeError # Now we find the location we want # Note that RAMSES I think 1-findiceses levels, but we don't. for _ in range(this_level): # At every level, find the cell this oct # lives inside for i in range(3): #as we get deeper, oct size halves dds[i] = dds[i] / 2.0 if cp[i] > pp[i]: ind[i] = 0 cp[i] -= dds[i]/2.0 else: ind[i] = 1 cp[i] += dds[i]/2.0 # Check if it has not been allocated cur = self.next_child(curdom, ind, cur) # Now we should be at the right level cur.domain = curdom cur.file_ind = p return cont.n_assigned - initial + nb def allocate_domains(self, domain_counts): cdef int count, i self.num_domains = len(domain_counts) # 1-indexed for i, count in enumerate(domain_counts): self.domains.append(count) cdef void append_domain(self, np.int64_t domain_count): self.num_domains += 1 self.domains.append(domain_count) cdef Oct* next_root(self, int domain_id, int ind[3]): cdef Oct *next = self.root_mesh[ind[0]][ind[1]][ind[2]] if next != NULL: return next cdef OctAllocationContainer *cont = self.domains.get_cont(domain_id - 1) if cont.n_assigned >= cont.n: raise RuntimeError next = &cont.my_objs[cont.n_assigned] cont.n_assigned += 1 self.root_mesh[ind[0]][ind[1]][ind[2]] = next self.nocts += 1 return next cdef Oct* next_child(self, int domain_id, int ind[3], Oct *parent) except? NULL: cdef int i cdef Oct *next = NULL if parent.children != NULL: next = parent.children[cind(ind[0],ind[1],ind[2])] else: # This *8 does NOT need to be made generic. 
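            # An Oct always branches 2 x 2 x 2 = 8 ways; num_zones (nz) only
            # sets how many zones each oct carries, not the branching factor.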
parent.children = malloc(sizeof(Oct *) * 8) for i in range(8): parent.children[i] = NULL if next != NULL: return next cdef OctAllocationContainer *cont = self.domains.get_cont(domain_id - 1) if cont.n_assigned >= cont.n: raise RuntimeError next = &cont.my_objs[cont.n_assigned] cont.n_assigned += 1 parent.children[cind(ind[0],ind[1],ind[2])] = next self.nocts += 1 return next def file_index_octs(self, SelectorObject selector, int domain_id, num_cells = -1): # We create oct arrays of the correct size cdef np.ndarray[np.uint8_t, ndim=1] levels cdef np.ndarray[np.uint8_t, ndim=1] cell_inds cdef np.ndarray[np.int64_t, ndim=1] file_inds if num_cells < 0: num_cells = selector.count_oct_cells(self, domain_id) # Initialize variables with dummy values levels = np.full(num_cells, 255, dtype="uint8") file_inds = np.full(num_cells, -1, dtype="int64") cell_inds = np.full(num_cells, 8, dtype="uint8") cdef oct_visitors.FillFileIndicesO visitor_o cdef oct_visitors.FillFileIndicesR visitor_r if self.fill_style == "r": visitor_r = oct_visitors.FillFileIndicesR(self, domain_id) visitor_r.levels = levels visitor_r.file_inds = file_inds visitor_r.cell_inds = cell_inds visitor = visitor_r elif self.fill_style == "o": visitor_o = oct_visitors.FillFileIndicesO(self, domain_id) visitor_o.levels = levels visitor_o.file_inds = file_inds visitor_o.cell_inds = cell_inds visitor = visitor_o else: raise RuntimeError self.visit_all_octs(selector, visitor) return levels, cell_inds, file_inds def morton_index_octs(self, SelectorObject selector, int domain_id, num_cells = -1): cdef np.int64_t i cdef np.uint8_t[:] levels cdef np.uint64_t[:] morton_inds if num_cells < 0: num_cells = selector.count_oct_cells(self, domain_id) levels = np.zeros(num_cells, dtype="uint8") morton_inds = np.zeros(num_cells, dtype="uint64") for i in range(num_cells): levels[i] = 100 morton_inds[i] = 0 cdef oct_visitors.MortonIndexOcts visitor visitor = oct_visitors.MortonIndexOcts(self, domain_id) visitor.level_arr = levels visitor.morton_ind = morton_inds self.visit_all_octs(selector, visitor) return levels, morton_inds def domain_count(self, SelectorObject selector): # We create oct arrays of the correct size cdef np.ndarray[np.int64_t, ndim=1] domain_counts domain_counts = np.zeros(self.num_domains, dtype="int64") cdef oct_visitors.CountByDomain visitor visitor = oct_visitors.CountByDomain(self, -1) visitor.domain_counts = domain_counts self.visit_all_octs(selector, visitor) return domain_counts @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cpdef void fill_level( self, const int level, const np.uint8_t[::1] levels, const np.uint8_t[::1] cell_inds, const np.int64_t[::1] file_inds, dict dest_fields, dict source_fields, np.int64_t offset = 0 ): cdef np.float64_t[:, :] source cdef np.float64_t[::1] dest cdef int i, lvl for key in dest_fields: dest = dest_fields[key] source = source_fields[key] for i in range(levels.shape[0]): lvl = levels[i] if lvl != level: continue if file_inds[i] < 0: dest[i + offset] = np.nan else: dest[i + offset] = source[file_inds[i], cell_inds[i]] def fill_index(self, SelectorObject selector = AlwaysSelector(None)): """Get the on-file index of each cell""" cdef StoreIndex visitor cdef np.int64_t[:, :, :, :] cell_inds cell_inds = np.full((self.nocts, 2, 2, 2), -1, dtype=np.int64) visitor = StoreIndex(self, -1) visitor.cell_inds = cell_inds self.visit_all_octs(selector, visitor) return np.asarray(cell_inds) def fill_octcellindex_neighbours(self, SelectorObject selector, int num_octs=-1, int 
domain_id=-1, int n_ghost_zones=1): """Compute the oct and cell indices of all the cells within all selected octs, extended by one cell in all directions (for ghost zones computations). Parameters ---------- selector : SelectorObject Selector for the octs to compute neighbour of num_octs : int, optional The number of octs to read in domain_id : int, optional The domain to perform the selection over Returns ------- oct_inds : int64 ndarray (nocts*8, ) The on-domain index of the octs containing each cell cell_inds : uint8 ndarray (nocts*8, ) The index of the cell in its parent oct Note ---- oct_inds/cell_inds """ if num_octs == -1: num_octs = selector.count_octs(self, domain_id) cdef NeighbourCellIndexVisitor visitor cdef np.uint8_t[::1] cell_inds cdef np.int64_t[::1] oct_inds cell_inds = np.full(num_octs*4**3, 8, dtype=np.uint8) oct_inds = np.full(num_octs*4**3, -1, dtype=np.int64) visitor = NeighbourCellIndexVisitor(self, -1, n_ghost_zones) visitor.cell_inds = cell_inds visitor.domain_inds = oct_inds self.visit_all_octs(selector, visitor) return np.asarray(oct_inds), np.asarray(cell_inds) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cpdef int fill_level_with_domain( self, const int level, const np.uint8_t[::1] level_inds, const np.uint8_t[::1] cell_inds, const np.int64_t[::1] file_inds, const np.int32_t[::1] domain_inds, dict dest_fields, dict source_fields, const np.int32_t domain, np.int64_t offset = 0 ): """Similar to fill_level but accepts a domain argument. This is particularly useful for frontends that have buffer zones at CPU boundaries. These buffer oct cells have a different domain than the local one and are usually not read, but one has to read them e.g. to compute ghost zones. """ cdef np.float64_t[:, :] source cdef np.float64_t[::1] dest cdef int i, count, lev cdef np.int32_t dom for key in dest_fields: dest = dest_fields[key] source = source_fields[key] count = 0 for i in range(level_inds.shape[0]): lev = level_inds[i] dom = domain_inds[i] if lev != level or dom != domain: continue count += 1 if file_inds[i] < 0: dest[i + offset] = np.nan else: dest[i + offset] = source[file_inds[i], cell_inds[i]] return count @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def file_index_octs_with_ghost_zones( self, SelectorObject selector, int domain_id, int num_cells=1, int n_ghost_zones=1): """Similar as file_index_octs, but returns the level, cell index, file index and domain of the neighbouring cells as well. Arguments --------- selector : SelectorObject The selector object. It is expected to select all cells for a given oct. domain_id : int The domain to select. Set to -1 to select all domains. num_cells : int, optional The total number of cells (accounting for a 1-cell thick ghost zone layer). Returns ------- levels : uint8, shape (num_cells,) The level of each cell of the super oct cell_inds : uint8, shape (num_cells, ) The index of each cell of the super oct within its own oct file_inds : int64, shape (num_cells, ) The on-file position of the cell. See notes below. domains : int32, shape (num_cells) The domain to which the cells belongs. See notes below. Notes ----- The algorithm constructs a "super-oct" around each oct (see sketch below, where the original oct cells are marked with an x). Note that for sparse octrees (such as RAMSES'), the neighbouring cells may belong to another domain (this is stored in `domains`). 
If the dataset provides buffer zones between domains (such as RAMSES), this may be stored locally and can be accessed directly. +---+---+---+---+ | | | | | |---+---+---+---| | | x | x | | |---+---+---+---| | | x | x | | |---+---+---+---| | | | | | +---+---+---+---+ """ cdef int num_octs if num_cells < 0: num_octs = selector.count_octs(self, domain_id) num_cells = num_octs * 4**3 cdef NeighbourCellVisitor visitor cdef np.ndarray[np.uint8_t, ndim=1] levels cdef np.ndarray[np.uint8_t, ndim=1] cell_inds cdef np.ndarray[np.int64_t, ndim=1] file_inds cdef np.ndarray[np.int32_t, ndim=1] domains levels = np.full(num_cells, 255, dtype="uint8") file_inds = np.full(num_cells, -1, dtype="int64") cell_inds = np.full(num_cells, 8, dtype="uint8") domains = np.full(num_cells, -1, dtype="int32") visitor = NeighbourCellVisitor(self, -1, n_ghost_zones) # output: level, file_ind and cell_ind of the neighbouring cells visitor.levels = levels visitor.file_inds = file_inds visitor.cell_inds = cell_inds visitor.domains = domains # direction to explore and extra parameters of the visitor visitor.octree = self visitor.last = -1 # Compute indices self.visit_all_octs(selector, visitor) return levels, cell_inds, file_inds, domains def finalize(self): cdef SelectorObject selector = AlwaysSelector(None) cdef oct_visitors.AssignDomainInd visitor visitor = oct_visitors.AssignDomainInd(self, 1) self.visit_all_octs(selector, visitor) assert ((visitor.global_index+1)*visitor.nzones == visitor.index) cdef int root_node_compare(const void *a, const void *b) noexcept nogil: cdef OctKey *ao cdef OctKey *bo ao = a bo = b if ao.key < bo.key: return -1 elif ao.key == bo.key: return 0 else: return 1 cdef class SparseOctreeContainer(OctreeContainer): def __init__(self, domain_dimensions, domain_left_edge, domain_right_edge, num_zones = 2): cdef int i self.partial_coverage = 1 self.nz = num_zones for i in range(3): self.nn[i] = domain_dimensions[i] self.domains = OctObjectPool() self.num_domains = 0 self.level_offset = 0 self.nocts = 0 # Increment when initialized self.root_mesh = NULL self.root_nodes = NULL self.tree_root = NULL self.num_root = 0 self.max_root = 0 # We don't initialize the octs yet for i in range(3): self.DLE[i] = domain_left_edge[i] #0 self.DRE[i] = domain_right_edge[i] #num_grid self.fill_style = "r" @classmethod def load_octree(cls, header): raise NotImplementedError def save_octree(self): raise NotImplementedError cdef int get_root(self, int ind[3], Oct **o) noexcept nogil: o[0] = NULL cdef np.int64_t key = self.ipos_to_key(ind) cdef OctKey okey cdef OctKey **oresult = NULL okey.key = key okey.node = NULL oresult = tfind(&okey, &self.tree_root, root_node_compare) if oresult != NULL: o[0] = oresult[0].node return 1 return 0 cdef void key_to_ipos(self, np.int64_t key, np.int64_t pos[3]): # Note: this is the result of doing # for i in range(20): # ukey |= (1 << i) cdef np.int64_t ukey = 1048575 cdef int j for j in range(3): pos[2 - j] = ((key & ukey)) key = key >> 20 cdef np.int64_t ipos_to_key(self, int pos[3]) noexcept nogil: # We (hope) that 20 bits is enough for each index. cdef int i cdef np.int64_t key = 0 for i in range(3): # Note the casting here. Bitshifting can cause issues otherwise. 
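        # Worked example: pos = (3, 5, 7) packs to
        #     (3 << 40) | (5 << 20) | 7
        # and key_to_ipos above unpacks it again by masking 20 bits at a time
        # (ukey = 1048575 = 2**20 - 1) and shifting per axis.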
key |= ((pos[i]) << 20 * (2 - i)) return key @cython.cdivision(True) cdef void visit_all_octs(self, SelectorObject selector, OctVisitor visitor, int vc = -1, np.int64_t *indices = NULL): cdef int i, j cdef np.int64_t key visitor.global_index = -1 visitor.level = 0 if vc == -1: vc = self.partial_coverage cdef np.float64_t pos[3] cdef np.float64_t dds[3] # This dds is the oct-width for i in range(3): dds[i] = (self.DRE[i] - self.DLE[i]) / self.nn[i] # Pos is the center of the octs cdef Oct *o for i in range(self.num_root): o = self.root_nodes[i].node key = self.root_nodes[i].key self.key_to_ipos(key, visitor.pos) for j in range(3): pos[j] = self.DLE[j] + (visitor.pos[j] + 0.5) * dds[j] selector.recursively_visit_octs( o, pos, dds, 0, visitor, vc) if indices != NULL: indices[i] = visitor.index cdef np.int64_t get_domain_offset(self, int domain_id): return 0 # We no longer have a domain offset. cdef Oct* next_root(self, int domain_id, int ind[3]): cdef Oct *next = NULL self.get_root(ind, &next) if next != NULL: return next cdef OctAllocationContainer *cont = self.domains.get_cont(domain_id - 1) if cont.n_assigned >= cont.n: print("Too many assigned.") return NULL if self.num_root >= self.max_root: print("Too many roots.") return NULL next = &cont.my_objs[cont.n_assigned] cont.n_assigned += 1 cdef np.int64_t key = 0 cdef OctKey *ikey = &self.root_nodes[self.num_root] key = self.ipos_to_key(ind) self.root_nodes[self.num_root].key = key self.root_nodes[self.num_root].node = next tsearch(ikey, &self.tree_root, root_node_compare) self.num_root += 1 self.nocts += 1 return next def allocate_domains(self, domain_counts, int root_nodes): OctreeContainer.allocate_domains(self, domain_counts) self.root_nodes = malloc(sizeof(OctKey) * root_nodes) self.max_root = root_nodes for i in range(root_nodes): self.root_nodes[i].key = -1 self.root_nodes[i].node = NULL def __dealloc__(self): # This gets called BEFORE the superclass deallocation. But, both get # called. cdef OctKey *ikey for i in range(self.num_root): ikey = &self.root_nodes[i] tdelete(ikey, &self.tree_root, root_node_compare) if self.root_nodes != NULL: free(self.root_nodes) cdef class ARTOctreeContainer(OctreeContainer): def __init__(self, oct_domain_dimensions, domain_left_edge, domain_right_edge, partial_coverage = 0, num_zones = 2): OctreeContainer.__init__(self, oct_domain_dimensions, domain_left_edge, domain_right_edge, partial_coverage, num_zones) self.fill_style = "r" cdef OctList *OctList_subneighbor_find(OctList *olist, Oct *top, int i, int j, int k): if top.children == NULL: return olist # The i, j, k here are the offsets of "top" with respect to # the oct for whose neighbors we are searching. # Note that this will be recursively called. We will evaluate either 1, 2, # or 4 octs for children and potentially adding them. In fact, this will # be 2**(num_zero) where num_zero is the number of indices that are equal # to zero; i.e., the number of dimensions along which we are aligned. # For now, we assume we will not be doing this along all three zeros, # because that would be pretty tricky. 
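    # Worked example: (i, j, k) = (0, 1, 1) gives ind = (1, 0, 0), so we take
    # the single near child along x (off = 0) but both children along y and z:
    # 1 * 2 * 2 = 4 = 2**2 candidates, with num_zero = 2 as described above.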
if i == j == k == 1: return olist cdef np.int64_t n[3] cdef np.int64_t ind[3] cdef np.int64_t off[3][2] cdef np.int64_t ii, ij, ik, ci ind[0] = 1 - i ind[1] = 1 - j ind[2] = 1 - k for ii in range(3): if ind[ii] == 0: n[ii] = 2 off[ii][0] = 0 off[ii][1] = 1 elif ind[ii] == -1: n[ii] = 1 off[ii][0] = 1 elif ind[ii] == 1: n[ii] = 1 off[ii][0] = 0 for ii in range(n[0]): for ij in range(n[1]): for ik in range(n[2]): ci = cind(off[0][ii], off[1][ij], off[2][ik]) cand = top.children[ci] if cand.children != NULL: olist = OctList_subneighbor_find(olist, cand, i, j, k) else: olist = OctList_append(olist, cand) return olist cdef OctList *OctList_append(OctList *olist, Oct *o): cdef OctList *this = olist if this == NULL: this = malloc(sizeof(OctList)) this.next = NULL this.o = o return this while this.next != NULL: this = this.next this.next = malloc(sizeof(OctList)) this = this.next this.o = o this.next = NULL return this cdef int OctList_count(OctList *olist): cdef OctList *this = olist cdef int i = 0 # Count the list while this != NULL: i += 1 this = this.next return i cdef void OctList_delete(OctList *olist): cdef OctList *next cdef OctList *this = olist while this != NULL: next = this.next free(this) this = next cdef class OctObjectPool(ObjectPool): # This is an inherited version of the ObjectPool that provides setup and # teardown functions for the individually allocated objects. These allow # us to initialize the Octs to default values, and we can also free any # allocated memory in them. Implementing _con_to_array also provides the # opportunity to supply views of the octs in Python code. def __cinit__(self): # Base class will ALSO be called self.itemsize = sizeof(Oct) assert(sizeof(OctAllocationContainer) == sizeof(AllocationContainer)) cdef void setup_objs(self, void *obj, np.uint64_t n, np.uint64_t offset, np.int64_t con_id): cdef Oct* octs = obj for n in range(n): octs[n].file_ind = octs[n].domain = - 1 octs[n].domain_ind = n + offset octs[n].children = NULL cdef void teardown_objs(self, void *obj, np.uint64_t n, np.uint64_t offset, np.int64_t con_id): cdef np.uint64_t i cdef Oct *my_octs = obj for i in range(n): if my_octs[i].children != NULL: free(my_octs[i].children) free(obj) def _con_to_array(self, int i): cdef AllocationContainer *obj = &self.containers[i] if obj.n_assigned == 0: return None cdef OctPadded[:] mm = ( obj.my_objs) rv = np.asarray(mm) return rv ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/oct_geometry_handler.py0000644000175100001770000001111114714401662020616 0ustar00runnerdockerimport numpy as np from yt.fields.field_detector import FieldDetector from yt.geometry.geometry_handler import Index from yt.utilities.logger import ytLogger as mylog class OctreeIndex(Index): """The Index subclass for oct AMR datasets""" def _setup_geometry(self): mylog.debug("Initializing Octree Geometry Handler.") self._initialize_oct_handler() def get_smallest_dx(self): """ Returns (in code units) the smallest cell size in the simulation. 
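        For an octree this is domain_width / (domain_dimensions * 2**max_level),
        minimized over the three axes, as computed below.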
""" return ( self.dataset.domain_width / (self.dataset.domain_dimensions * 2 ** (self.max_level)) ).min() def convert(self, unit): return self.dataset.conversion_factors[unit] def _add_mesh_sampling_particle_field(self, deposit_field, ftype, ptype): units = self.ds.field_info[ftype, deposit_field].units take_log = self.ds.field_info[ftype, deposit_field].take_log field_name = f"cell_{ftype}_{deposit_field}" def _cell_index(field, data): # Get the position of the particles pos = data[ptype, "particle_position"] Npart = pos.shape[0] ret = np.zeros(Npart, dtype="float64") tmp = np.zeros(Npart, dtype="float64") if isinstance(data, FieldDetector): return ret remaining = np.ones(Npart, dtype=bool) Nremaining = Npart Nobjs = len(data._current_chunk.objs) Nbits = int(np.ceil(np.log2(Nobjs))) # Sort objs by decreasing number of octs enumerated_objs = sorted( enumerate(data._current_chunk.objs), key=lambda arg: arg[1].oct_handler.nocts, reverse=True, ) for i, obj in enumerated_objs: if Nremaining == 0: break icell = ( obj["index", "ones"].T.reshape(-1).astype(np.int64).cumsum().value - 1 ) mesh_data = ((icell << Nbits) + i).astype(np.float64) # Access the mesh data and attach them to their particles tmp[:Nremaining] = obj.mesh_sampling_particle_field( pos[remaining], mesh_data ) ret[remaining] = tmp[:Nremaining] remaining[remaining] = np.isnan(tmp[:Nremaining]) Nremaining = remaining.sum() return data.ds.arr(ret, units="1") def _mesh_sampling_particle_field(field, data): """ Create a grid field for particle quantities using given method. """ ones = data[ptype, "particle_ones"] # Access "cell_index" field Npart = ones.shape[0] ret = np.zeros(Npart) cell_index = np.array(data[ptype, "cell_index"], np.int64) if isinstance(data, FieldDetector): return ret # The index of the obj is stored on the first bits Nobjs = len(data._current_chunk.objs) Nbits = int(np.ceil(np.log2(Nobjs))) icell = cell_index >> Nbits iobj = cell_index - (icell << Nbits) for i, subset in enumerate(data._current_chunk.objs): mask = iobj == i subset.field_parameters = data.field_parameters cell_data = subset[ftype, deposit_field].T.reshape(-1) ret[mask] = cell_data[icell[mask]] return data.ds.arr(ret, units=cell_data.units) if (ptype, "cell_index") not in self.ds.derived_field_list: self.ds.add_field( (ptype, "cell_index"), function=_cell_index, sampling_type="particle", units="1", ) self.ds.add_field( (ptype, field_name), function=_mesh_sampling_particle_field, sampling_type="particle", units=units, take_log=take_log, ) def _icoords_to_fcoords( self, icoords: np.ndarray, ires: np.ndarray, axes: tuple[int, ...] | None = None, ) -> tuple[np.ndarray, np.ndarray]: """ Accepts icoords and ires and returns appropriate fcoords and fwidth. Mostly useful for cases where we have irregularly spaced or structured grids. 
""" dds = self.ds.domain_width[axes,] / ( self.ds.domain_dimensions[axes,] * self.ds.refine_by ** ires[:, None] ) pos = (0.5 + icoords) * dds + self.ds.domain_left_edge[axes,] return pos, dds ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/oct_visitors.pxd0000644000175100001770000001105714714401662017324 0ustar00runnerdocker""" Oct visitor definitions file """ cimport numpy as np cdef struct Oct cdef struct Oct: np.int64_t file_ind # index with respect to the order in which it was # added np.int64_t domain_ind # index within the global set of domains np.int64_t domain # (opt) addl int index Oct **children # Up to 8 long cdef struct OctInfo: np.float64_t left_edge[3] np.float64_t dds[3] np.int64_t ipos[3] np.int32_t level cdef struct OctPadded: np.int64_t file_ind np.int64_t domain_ind np.int64_t domain np.int64_t padding cdef class OctVisitor: cdef np.uint64_t index cdef np.uint64_t last cdef np.int64_t global_index cdef np.int64_t pos[3] # position in ints cdef np.uint8_t ind[3] # cell position cdef int dims cdef np.int32_t domain cdef np.int8_t level cdef np.int8_t nz # This is number of zones along each dimension. 1 => 1 zones, 2 => 8, etc. # To calculate nzones, nz**3 cdef np.int32_t nzones # There will also be overrides for the memoryviews associated with the # specific instance. cdef void visit(self, Oct*, np.uint8_t selected) cdef inline int oind(self): cdef int d = self.nz return (((self.ind[0]*d)+self.ind[1])*d+self.ind[2]) cdef inline int rind(self): cdef int d = self.nz return (((self.ind[2]*d)+self.ind[1])*d+self.ind[0]) cdef class CountTotalOcts(OctVisitor): pass cdef class CountTotalCells(OctVisitor): pass cdef class MarkOcts(OctVisitor): # Unused cdef np.uint8_t[:,:,:,:] mark cdef class MaskOcts(OctVisitor): cdef np.uint8_t[:,:,:,:] mask cdef class IndexOcts(OctVisitor): cdef np.int64_t[:] oct_index cdef class MaskedIndexOcts(OctVisitor): cdef np.int64_t[:] oct_index cdef np.uint8_t[:] oct_mask cdef class IndexMaskMapOcts(OctVisitor): cdef np.int64_t[:] oct_index cdef np.uint8_t[:] oct_mask cdef np.int64_t[:] map_domain_ind cdef np.uint64_t map_index cdef class ICoordsOcts(OctVisitor): cdef np.int64_t[:,:] icoords cdef class IResOcts(OctVisitor): cdef np.int64_t[:] ires cdef class FCoordsOcts(OctVisitor): cdef np.float64_t[:,:] fcoords cdef class FWidthOcts(OctVisitor): cdef np.float64_t[:,:] fwidth cdef class CopyArrayI64(OctVisitor): cdef np.int64_t[:,:,:,:,:,:] source cdef np.int64_t[:,:] dest cdef class CopyArrayF64(OctVisitor): cdef np.float64_t[:,:,:,:,:] source cdef np.float64_t[:,:] dest cdef class CopyFileIndArrayI8(OctVisitor): cdef np.int64_t root cdef np.uint8_t[:] source cdef np.uint8_t[:] dest cdef class IdentifyOcts(OctVisitor): cdef np.uint8_t[:] domain_mask cdef class AssignDomainInd(OctVisitor): pass cdef class FillFileIndicesO(OctVisitor): cdef np.uint8_t[:] levels cdef np.int64_t[:] file_inds cdef np.uint8_t[:] cell_inds cdef class FillFileIndicesR(OctVisitor): cdef np.uint8_t[:] levels cdef np.int64_t[:] file_inds cdef np.uint8_t[:] cell_inds cdef class CountByDomain(OctVisitor): cdef np.int64_t[:] domain_counts cdef class StoreOctree(OctVisitor): cdef np.uint8_t[:] ref_mask cdef class LoadOctree(OctVisitor): cdef np.uint8_t[:] ref_mask cdef Oct* octs cdef np.uint64_t *nocts cdef np.uint64_t *nfinest cdef np.uint64_t max_level cdef class MortonIndexOcts(OctVisitor): cdef np.uint8_t[:] level_arr cdef np.uint64_t[:] morton_ind cdef inline int cind(int i, int j, int k) noexcept nogil: # THIS 
ONLY WORKS FOR CHILDREN. It is not general for zones. return (((i*2)+j)*2+k) from .oct_container cimport OctreeContainer cdef class StoreIndex(OctVisitor): cdef np.int64_t[:,:,:,:] cell_inds # cimport oct_container cdef class BaseNeighbourVisitor(OctVisitor): cdef int idim # 0,1,2 for x,y,z cdef int direction # +1 for +x, -1 for -x cdef np.uint8_t neigh_ind[3] cdef bint other_oct cdef Oct *neighbour cdef OctreeContainer octree cdef OctInfo oi cdef int n_ghost_zones cdef void set_neighbour_info(self, Oct *o, int ishift[3]) cdef inline np.uint8_t neighbour_rind(self): cdef int d = self.nz return (((self.neigh_ind[2]*d)+self.neigh_ind[1])*d+self.neigh_ind[0]) cdef class NeighbourCellIndexVisitor(BaseNeighbourVisitor): cdef np.uint8_t[::1] cell_inds cdef np.int64_t[::1] domain_inds cdef class NeighbourCellVisitor(BaseNeighbourVisitor): cdef np.uint8_t[::1] levels cdef np.int64_t[::1] file_inds cdef np.uint8_t[::1] cell_inds cdef np.int32_t[::1] domains ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/oct_visitors.pyx0000644000175100001770000005043614714401662017355 0ustar00runnerdocker# distutils: include_dirs = LIB_DIR # distutils: libraries = STD_LIBS """ Oct visitor functions """ cimport cython cimport numpy as np import numpy as np from libc.stdlib cimport malloc from yt.utilities.lib.fp_utils cimport * from yt.utilities.lib.geometry_utils cimport encode_morton_64bit from .oct_container cimport OctreeContainer # Now some visitor functions cdef class OctVisitor: def __init__(self, OctreeContainer octree, int domain_id = -1): cdef int i self.index = 0 self.last = -1 self.global_index = -1 for i in range(3): self.pos[i] = -1 self.ind[i] = -1 self.dims = 0 self.domain = domain_id self.level = -1 self.nz = octree.nz self.nzones = self.nz**3 cdef void visit(self, Oct* o, np.uint8_t selected): raise NotImplementedError # This copies an integer array from the source to the destination, based on the # selection criteria. cdef class CopyArrayI64(OctVisitor): @cython.boundscheck(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): # We should always have global_index less than our source. # "last" here tells us the dimensionality of the array. if selected == 0: return # There are this many records between "octs" self.dest[self.index, :] = self.source[ self.ind[2], self.ind[1], self.ind[0], self.global_index, :] self.index += 1 # This copies a floating point array from the source to the destination, based # on the selection criteria. cdef class CopyArrayF64(OctVisitor): @cython.boundscheck(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): # We should always have global_index less than our source. # "last" here tells us the dimensionality of the array. 
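# Shape sketch (from the memoryview declarations in oct_visitors.pxd): with
# nz = 2 the float64 source view is (nz, nz, nz, n_octs, n_fields), so a
# selected cell at ind = (1, 0, 1) inside the oct with global_index g copies
# the row source[1, 0, 1, g, :] into dest[self.index, :] before the flat
# index advances by one.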
if selected == 0: return # There are this many records between "octs" self.dest[self.index, :] = self.source[ self.ind[2], self.ind[1], self.ind[0], self.global_index, :] self.index += 1 # This copies a bit array from source to the destination, based on file_ind cdef class CopyFileIndArrayI8(OctVisitor): def __init__(self, OctreeContainer octree, int domain_id = -1): super(CopyFileIndArrayI8, self).__init__(octree, domain_id) self.root = -1 @cython.boundscheck(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): if self.level == 0: self.root += 1 if self.last != o.domain_ind: self.last = o.domain_ind self.dest[o.domain_ind] = self.source[self.root] self.index += 1 # This counts the number of octs, selected or not, that the selector hits. # Note that the selector will not recursively visit unselected octs, so this is # still useful. cdef class CountTotalOcts(OctVisitor): @cython.boundscheck(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): # Count even if not selected. # Number of *octs* visited. if self.last != o.domain_ind: self.index += 1 self.last = o.domain_ind # This counts the number of selected cells. cdef class CountTotalCells(OctVisitor): @cython.boundscheck(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): # Number of *cells* visited and selected. self.index += selected # Every time a cell is visited, mark it. This will be for all visited octs. cdef class MarkOcts(OctVisitor): @cython.boundscheck(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): # We mark them even if they are not selected if self.last != o.domain_ind: self.last = o.domain_ind self.index += 1 self.mark[self.index, self.ind[2], self.ind[1], self.ind[0]] = 1 # Mask all the selected cells. cdef class MaskOcts(OctVisitor): @cython.boundscheck(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): if selected == 0: return self.mask[self.global_index, self.ind[2], self.ind[1], self.ind[0]] = 1 # Compute a mapping from domain_ind to flattened index. cdef class IndexOcts(OctVisitor): @cython.boundscheck(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): # Note that we provide an index even if the cell is not selected. if self.last != o.domain_ind: self.last = o.domain_ind self.oct_index[o.domain_ind] = self.index self.index += 1 # Compute a mapping from domain_ind to flattened index with some octs masked. cdef class MaskedIndexOcts(OctVisitor): @cython.boundscheck(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): # Note that we provide an index even if the cell is not selected. if self.last != o.domain_ind: self.last = o.domain_ind if self.oct_mask[o.domain_ind] == 1: self.oct_index[o.domain_ind] = self.index self.index += 1 # Compute a mapping from domain_ind to flattened index checking mask. 
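# Usage sketch for these index-mapping visitors (driven from Cython; the
# unmasked case is shown and the surrounding setup is assumed):
#
#     oct_index = np.zeros(octree.nocts, dtype="int64") - 1
#     visitor = IndexOcts(octree, domain_id)
#     visitor.oct_index = oct_index
#     octree.visit_all_octs(selector, visitor)
#     # oct_index[o.domain_ind] is now each visited oct's flattened
#     # position, or -1 if the selector never reached that oct.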
cdef class IndexMaskMapOcts(OctVisitor): def __init__(self, OctreeContainer octree, int domain_id = -1): super(IndexMaskMapOcts, self).__init__(octree, domain_id) self.map_index = 0 @cython.boundscheck(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): if self.last != o.domain_ind: self.last = o.domain_ind if self.oct_mask[o.domain_ind] == 1: if self.map_domain_ind[self.map_index] >= 0: self.oct_index[self.map_domain_ind[self.map_index]] = self.index self.map_index += 1 self.index += 1 # Integer coordinates cdef class ICoordsOcts(OctVisitor): @cython.boundscheck(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): if selected == 0: return cdef int i for i in range(3): self.icoords[self.index,i] = (self.pos[i] * self.nz) + self.ind[i] self.index += 1 # Level cdef class IResOcts(OctVisitor): @cython.boundscheck(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): if selected == 0: return self.ires[self.index] = self.level self.index += 1 # Floating point coordinates cdef class FCoordsOcts(OctVisitor): @cython.cdivision(True) @cython.boundscheck(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): # Note that this does not actually give the correct floating point # coordinates. It gives them in some unit system where the domain is 1.0 # in all directions, and assumes that they will be scaled later. if selected == 0: return cdef int i cdef np.float64_t c, dx dx = 1.0 / ((self.nz) << self.level) for i in range(3): c = ((self.pos[i] * self.nz) + self.ind[i]) self.fcoords[self.index,i] = (c + 0.5) * dx self.index += 1 # Floating point widths; domain modifications are done later. cdef class FWidthOcts(OctVisitor): @cython.cdivision(True) @cython.boundscheck(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): # Note that this does not actually give the correct floating point # coordinates. It gives them in some unit system where the domain is 1.0 # in all directions, and assumes that they will be scaled later. if selected == 0: return cdef int i cdef np.float64_t dx dx = 1.0 / (self.nz << self.level) for i in range(3): self.fwidth[self.index,i] = dx self.index += 1 # Mark which domains are touched by a selector. cdef class IdentifyOcts(OctVisitor): @cython.boundscheck(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): # We assume that our domain has *already* been selected by, which means # we'll get all cells within the domain for a by-domain selector and all # cells within the domain *and* selector for the selector itself. if selected == 0: return self.domain_mask[o.domain - 1] = 1 # Assign domain indices to octs cdef class AssignDomainInd(OctVisitor): @cython.boundscheck(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): o.domain_ind = self.global_index self.index += 1 # From the file, fill in C order cdef class FillFileIndicesO(OctVisitor): @cython.boundscheck(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): # We fill these arrays, then inside the level filler we use these as # indices as we fill a second array from the self. 
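# Two-pass sketch (hypothetical frontend code, names assumed): after this
# visitor runs, entry n describes one selected cell, so a level filler
# can read the on-disk arrays once per level:
#
#     for n in range(ncells):
#         if levels[n] == L:
#             dest[n] = file_data_at_level_L[file_inds[n], cell_inds[n]]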
if selected == 0: return self.levels[self.index] = self.level self.file_inds[self.index] = o.file_ind self.cell_inds[self.index] = self.oind() self.index +=1 # From the file, fill in F order cdef class FillFileIndicesR(OctVisitor): @cython.boundscheck(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): # We fill these arrays, then inside the level filler we use these as # indices as we fill a second array from the self. if selected == 0: return self.levels[self.index] = self.level self.file_inds[self.index] = o.file_ind self.cell_inds[self.index] = self.rind() self.index +=1 # Count octs by domain cdef class CountByDomain(OctVisitor): @cython.boundscheck(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): if selected == 0: return # NOTE: We do this for every *cell*. self.domain_counts[o.domain - 1] += 1 # Store the refinement mapping of the octree to be loaded later cdef class StoreOctree(OctVisitor): @cython.boundscheck(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): if o.children == NULL: # Not refined. res = 0 else: res = 1 self.ref_mask[self.index] = res self.index += 1 # Go from a refinement mapping to a new octree cdef class LoadOctree(OctVisitor): @cython.boundscheck(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): cdef int i, ii ii = cind(self.ind[0], self.ind[1], self.ind[2]) if self.level > self.max_level: self.max_level = self.level if self.ref_mask[self.index] == 0: # We only want to do this once. Otherwise we end up with way too many # nfinest for our tastes. if o.file_ind == -1: o.children = NULL o.file_ind = self.nfinest[0] o.domain = 1 self.nfinest[0] += 1 elif self.ref_mask[self.index] > 0: if self.ref_mask[self.index] != 1 and self.ref_mask[self.index] != 8: print("ARRAY CLUE: ", self.ref_mask[self.index], "UNKNOWN") raise RuntimeError if o.children == NULL: o.children = malloc(sizeof(Oct *) * 8) for i in range(8): o.children[i] = NULL for i in range(8): o.children[ii + i] = &self.octs[self.nocts[0]] o.children[ii + i].domain_ind = self.nocts[0] o.children[ii + i].file_ind = -1 o.children[ii + i].domain = -1 o.children[ii + i].children = NULL self.nocts[0] += 1 else: print("SOMETHING IS AMISS", self.index) raise RuntimeError self.index += 1 cdef class MortonIndexOcts(OctVisitor): @cython.boundscheck(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): if selected == 0: return cdef np.int64_t coord[3] cdef int i for i in range(3): coord[i] = (self.pos[i] * self.nz) + self.ind[i] if (coord[i] < 0): raise RuntimeError("Oct coordinate in dimension {} is ".format(i)+ "negative. 
({})".format(coord[i])) self.level_arr[self.index] = self.level self.morton_ind[self.index] = encode_morton_64bit( np.uint64(coord[0]), np.uint64(coord[1]), np.uint64(coord[2])) self.index += 1 # Store cell index cdef class StoreIndex(OctVisitor): @cython.boundscheck(False) @cython.wraparound(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): if not selected: return self.cell_inds[o.domain_ind, self.ind[2], self.ind[1], self.ind[0]] = self.index self.index += 1 cdef class BaseNeighbourVisitor(OctVisitor): def __init__(self, OctreeContainer octree, int domain_id=-1, int n_ghost_zones=1): self.octree = octree self.neigh_ind[0] = 0 self.neigh_ind[1] = 0 self.neigh_ind[2] = 0 self.n_ghost_zones = n_ghost_zones super(BaseNeighbourVisitor, self).__init__(octree, domain_id) @cython.boundscheck(False) @cython.wraparound(False) @cython.initializedcheck(False) cdef void set_neighbour_info(self, Oct *o, int ishift[3]): cdef int i cdef np.float64_t c, dx cdef np.int64_t ipos cdef np.float64_t fcoords[3] cdef Oct *neighbour cdef bint local_oct cdef bint other_oct dx = 1.0 / (self.nz << self.level) local_oct = True # Compute position of neighbouring cell for i in range(3): c = (self.pos[i] * self.nz) fcoords[i] = (c + 0.5 + ishift[i]) * dx / self.octree.nn[i] # Assuming periodicity if fcoords[i] < 0: fcoords[i] += 1 elif fcoords[i] > 1: fcoords[i] -= 1 local_oct &= (0 <= ishift[i] <= 1) other_oct = not local_oct # Use octree to find neighbour if other_oct: neighbour = self.octree.get(fcoords, &self.oi, max_level=self.level) else: neighbour = o self.oi.level = self.level for i in range(3): self.oi.ipos[i] = (self.pos[i] * self.nz) + ishift[i] # Extra step - compute cell position in neighbouring oct (and store in oi.ipos) if self.oi.level == self.level - 1: for i in range(3): ipos = (((self.pos[i] * self.nz) + ishift[i])) >> 1 if (self.oi.ipos[i] << 1) == ipos: self.oi.ipos[i] = 0 else: self.oi.ipos[i] = 1 self.neighbour = neighbour # Index of neighbouring cell within its oct for i in range(3): self.neigh_ind[i] = (ishift[i]) % 2 self.other_oct = other_oct if other_oct: if self.neighbour != NULL: if self.oi.level == self.level - 1: # Position within neighbouring oct is stored in oi.ipos for i in range(3): self.neigh_ind[i] = self.oi.ipos[i] elif self.oi.level != self.level: print('This should not happen! %s %s' % (self.oi.level, self.level)) self.neighbour = NULL # Store neighbouring cell index in current cell cdef class NeighbourCellIndexVisitor(BaseNeighbourVisitor): # This piece of code is very much optimizable. Here are possible routes to achieve # much better performance: # - Work oct by oct, which would reduce the number of neighbor lookup # from 4³=64 to 3³=27, # - Use faster neighbor lookup method(s). For now, all searches are started from # the root mesh down to leaf nodes, but we could instead go up the tree from the # central oct then down to find all neighbors (see e.g. # https://geidav.wordpress.com/2017/12/02/advanced-octrees-4-finding-neighbor-nodes/). # - Pre-compute the face-neighbors of all octs. # Note that for the last point, algorithms exist that generate the neighbors of all # octs in O(1) time (https://link.springer.com/article/10.1007/s13319-015-0060-9) # during the octree construction. # Another possible solution would be to keep a unordered hash map of all the octs # indexed by their (3-integers) position. With such structure, finding a neighbor # takes O(1) time. 
This could even come as a replacement of the current # pointer-based octree structure. @cython.boundscheck(False) @cython.wraparound(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): cdef int i, j, k cdef int ishift[3] cdef np.uint8_t neigh_cell_ind cdef np.int64_t neigh_domain_ind if selected == 0: return # Work at oct level if self.last == o.domain_ind: return self.last = o.domain_ind cdef int i0, i1 i0 = -self.n_ghost_zones i1 = 2 + self.n_ghost_zones # Loop over cells in and directly around oct for i in range(i0, i1): ishift[0] = i for j in range(i0, i1): ishift[1] = j for k in range(i0, i1): ishift[2] = k self.set_neighbour_info(o, ishift) if not self.other_oct: neigh_domain_ind = o.domain_ind neigh_cell_ind = self.neighbour_rind() elif self.neighbour != NULL: neigh_domain_ind = self.neighbour.domain_ind neigh_cell_ind = self.neighbour_rind() else: neigh_domain_ind = -1 neigh_cell_ind = 8 self.cell_inds[self.index] = neigh_cell_ind self.domain_inds[self.index] = neigh_domain_ind self.index += 1 # Store file position + cell of neighbouring cell in current cell cdef class NeighbourCellVisitor(BaseNeighbourVisitor): @cython.boundscheck(False) @cython.wraparound(False) @cython.initializedcheck(False) cdef void visit(self, Oct* o, np.uint8_t selected): cdef int i, j, k cdef int ishift[3] cdef np.int64_t neigh_file_ind cdef np.uint8_t neigh_cell_ind cdef np.int32_t neigh_domain cdef np.uint8_t neigh_level if selected == 0: return # Work at oct level if self.last == o.domain_ind: return self.last = o.domain_ind cdef int i0, i1 i0 = -self.n_ghost_zones i1 = 2 + self.n_ghost_zones # Loop over cells in and directly around oct for i in range(i0, i1): ishift[0] = i for j in range(i0, i1): ishift[1] = j for k in range(i0, i1): ishift[2] = k self.set_neighbour_info(o, ishift) if not self.other_oct: neigh_level = self.level neigh_domain = o.domain neigh_file_ind = o.file_ind neigh_cell_ind = self.neighbour_rind() elif self.neighbour != NULL: neigh_level = self.oi.level neigh_domain = self.neighbour.domain neigh_file_ind = self.neighbour.file_ind neigh_cell_ind = self.neighbour_rind() else: neigh_level = 255 neigh_domain = -1 neigh_file_ind = -1 neigh_cell_ind = 8 self.levels[self.index] = neigh_level self.file_inds[self.index] = neigh_file_ind self.cell_inds[self.index] = neigh_cell_ind self.domains[self.index] = neigh_domain self.index += 1 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/particle_deposit.pxd0000644000175100001770000001072714714401662020132 0ustar00runnerdocker""" Particle Deposition onto Octs """ cimport numpy as np import numpy as np cimport cython from libc.math cimport sqrt from libc.stdlib cimport free, malloc from yt.utilities.lib.fp_utils cimport * from .oct_container cimport Oct, OctreeContainer cdef extern from "numpy/npy_math.h": double NPY_PI cdef extern from "platform_dep.h": void *alloca(int) cdef inline int gind(int i, int j, int k, int dims[3]): # The ordering is such that we want i to vary the slowest in this instance, # even though in other instances it varies the fastest. To see this in # action, try looking at the results of an n_ref=256 particle CIC plot, # which shows it the most clearly. 
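# For example, with dims = (4, 4, 4): gind(1, 2, 3, dims) evaluates to
# ((1*4)+2)*4+3 = 27, i.e. k varies fastest and i slowest in the flattened
# deposition buffer.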
return ((i*dims[1])+j)*dims[2]+k #################################################### # Standard SPH kernel for use with the Grid method # #################################################### cdef inline np.float64_t sph_kernel_cubic(np.float64_t x) noexcept nogil: cdef np.float64_t kernel # C is 8/pi cdef np.float64_t C = 2.5464790894703255 if x <= 0.5: kernel = 1.-6.*x*x*(1.-x) elif x>0.5 and x<=1.0: kernel = 2.*(1.-x)*(1.-x)*(1.-x) else: kernel = 0. return kernel * C ######################################################## # Alternative SPH kernels for use with the Grid method # ######################################################## # quartic spline cdef inline np.float64_t sph_kernel_quartic(np.float64_t x) noexcept nogil: cdef np.float64_t kernel cdef np.float64_t C = 9.71404681957369 # 5.**6/512/np.pi if x < 1: kernel = (1.-x)**4 if x < 3./5: kernel -= 5*(3./5-x)**4 if x < 1./5: kernel += 10*(1./5-x)**4 else: kernel = 0. return kernel * C # quintic spline cdef inline np.float64_t sph_kernel_quintic(np.float64_t x) noexcept nogil: cdef np.float64_t kernel cdef np.float64_t C = 17.403593027098754 # 3.**7/40/np.pi if x < 1: kernel = (1.-x)**5 if x < 2./3: kernel -= 6*(2./3-x)**5 if x < 1./3: kernel += 15*(1./3-x)**5 else: kernel = 0. return kernel * C # Wendland C2 cdef inline np.float64_t sph_kernel_wendland2(np.float64_t x) noexcept nogil: cdef np.float64_t kernel cdef np.float64_t C = 3.3422538049298023 # 21./2/np.pi if x < 1: kernel = (1.-x)**4 * (1+4*x) else: kernel = 0. return kernel * C # Wendland C4 cdef inline np.float64_t sph_kernel_wendland4(np.float64_t x) noexcept nogil: cdef np.float64_t kernel cdef np.float64_t C = 4.923856051905513 # 495./32/np.pi if x < 1: kernel = (1.-x)**6 * (1+6*x+35./3*x**2) else: kernel = 0. return kernel * C # Wendland C6 cdef inline np.float64_t sph_kernel_wendland6(np.float64_t x) noexcept nogil: cdef np.float64_t kernel cdef np.float64_t C = 6.78895304126366 # 1365./64/np.pi if x < 1: kernel = (1.-x)**8 * (1+8*x+25*x**2+32*x**3) else: kernel = 0. return kernel * C cdef inline np.float64_t sph_kernel_dummy(np.float64_t x) noexcept nogil: return 0 # I don't know the way to use a dict in a cdef class. # So in order to mimic a registry functionality, # I manually created a function to lookup the kernel functions. 
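# Normalization check (a sketch in plain Python, using scipy only for the
# quadrature; not part of this module): every kernel above is defined so
# that its integral over the 3D unit-radius support is one, e.g. for the
# cubic spline:
#
#     import numpy as np
#     from scipy.integrate import quad
#     def W(q):  # cubic spline on 0 <= q <= 1, C = 8/pi
#         C = 2.5464790894703255
#         return C * (1 - 6*q*q*(1 - q)) if q <= 0.5 else C * 2*(1 - q)**3
#     quad(lambda q: 4 * np.pi * q * q * W(q), 0, 1)[0]   # ~= 1.0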
ctypedef np.float64_t (*kernel_func) (np.float64_t) noexcept nogil cdef inline kernel_func get_kernel_func(str kernel_name) noexcept nogil: with gil: if kernel_name == 'cubic': return sph_kernel_cubic elif kernel_name == 'quartic': return sph_kernel_quartic elif kernel_name == 'quintic': return sph_kernel_quintic elif kernel_name == 'wendland2': return sph_kernel_wendland2 elif kernel_name == 'wendland4': return sph_kernel_wendland4 elif kernel_name == 'wendland6': return sph_kernel_wendland6 elif kernel_name == 'none': return sph_kernel_dummy else: raise NotImplementedError cdef class ParticleDepositOperation: # We assume each will allocate and define their own temporary storage cdef kernel_func sph_kernel cdef public object nvals cdef public int update_values cdef int process(self, int dim[3], int ipart, np.float64_t left_edge[3], np.float64_t dds[3], np.int64_t offset, np.float64_t ppos[3], np.float64_t[:] fields, np.int64_t domain_ind) except -1 nogil ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/particle_deposit.pyx0000644000175100001770000005037314714401662020160 0ustar00runnerdocker# distutils: include_dirs = LIB_DIR # distutils: libraries = STD_LIBS """ Particle Deposition onto Cells """ cimport numpy as np import numpy as np cimport cython from libc.math cimport sqrt from yt.utilities.lib.fp_utils cimport * from .oct_container cimport Oct, OctInfo, OctreeContainer from yt.utilities.lib.misc_utilities import OnceIndirect cdef append_axes(np.ndarray arr, int naxes): if arr.ndim == naxes: return arr # Avoid copies arr2 = arr.view() arr2.shape = arr2.shape + (1,) * (naxes - arr2.ndim) return arr2 cdef class ParticleDepositOperation: def __init__(self, nvals, kernel_name): # nvals is a tuple containing the active dimensions of the # grid to deposit onto and the number of grids, # (nx, ny, nz, ngrids) self.nvals = nvals self.update_values = 0 # This is the default self.sph_kernel = get_kernel_func(kernel_name) def initialize(self, *args): raise NotImplementedError def finalize(self, *args): raise NotImplementedError @cython.boundscheck(False) @cython.wraparound(False) def process_octree(self, OctreeContainer octree, np.ndarray[np.int64_t, ndim=1] dom_ind, np.ndarray[np.float64_t, ndim=2] positions, fields = None, int domain_id = -1, int domain_offset = 0, lvlmax = None): cdef int nf, i, j if fields is None: fields = [] nf = len(fields) cdef np.float64_t[::cython.view.indirect, ::1] field_pointers if nf > 0: field_pointers = OnceIndirect(fields) cdef np.float64_t pos[3] cdef np.float64_t[:] field_vals = np.empty(nf, dtype="float64") cdef int dims[3] dims[0] = dims[1] = dims[2] = octree.nz cdef OctInfo oi cdef np.int64_t offset, moff cdef Oct *oct cdef np.int8_t use_lvlmax moff = octree.get_domain_offset(domain_id + domain_offset) if lvlmax is None: use_lvlmax = False lvlmax = [] else: use_lvlmax = True cdef np.ndarray[np.int32_t, ndim=1] lvlmaxval = np.asarray(lvlmax, dtype=np.int32) for i in range(positions.shape[0]): # We should check if particle remains inside the Oct here for j in range(nf): field_vals[j] = field_pointers[j,i] for j in range(3): pos[j] = positions[i, j] # This line should be modified to have it return the index into an # array based on whatever cutting of the domain we have done. This # may or may not include the domain indices that we have # previously generated. This way we can support not knowing the # full octree structure. 
All we *really* care about is some # arbitrary offset into a field value for deposition. if not use_lvlmax: oct = octree.get(pos, &oi) else: oct = octree.get(pos, &oi, max_level=lvlmaxval[i]) # This next line is unfortunate. Basically it says, sometimes we # might have particles that belong to octs outside our domain. # For the distributed-memory octrees, this will manifest as a NULL # oct. For the non-distributed memory octrees, we'll simply see # this as a domain_id that is not the current domain id. Note that # this relies on the idea that all the particles in a region are # all fed to sequential domain subsets, which will not be true with # RAMSES, where we *will* miss particles that live in ghost # regions on other processors. Addressing this is on the TODO # list. if oct == NULL or (domain_id > 0 and oct.domain != domain_id): continue # Note that this has to be our local index, not our in-file index. offset = dom_ind[oct.domain_ind - moff] if offset < 0: continue # Check that we found the oct ... self.process(dims, i, oi.left_edge, oi.dds, offset, pos, field_vals, oct.domain_ind) if self.update_values == 1: for j in range(nf): field_pointers[j][i] = field_vals[j] @cython.boundscheck(False) @cython.wraparound(False) def process_grid(self, gobj, np.ndarray[np.float64_t, ndim=2, cast=True] positions, fields = None): cdef int nf, i, j if fields is None: fields = [] if positions.shape[0] == 0: return nf = len(fields) cdef np.float64_t[:] field_vals = np.empty(nf, dtype="float64") cdef np.float64_t[::cython.view.indirect, ::1] field_pointers if nf > 0: field_pointers = OnceIndirect(fields) cdef np.float64_t pos[3] cdef np.int64_t gid = getattr(gobj, "id", -1) cdef np.float64_t dds[3] cdef np.float64_t left_edge[3] cdef np.float64_t right_edge[3] cdef int dims[3] for i in range(3): dds[i] = gobj.dds[i] left_edge[i] = gobj.LeftEdge[i] right_edge[i] = gobj.RightEdge[i] dims[i] = gobj.ActiveDimensions[i] for i in range(positions.shape[0]): # Now we process for j in range(nf): field_vals[j] = field_pointers[j,i] for j in range(3): pos[j] = positions[i, j] continue_loop = False for j in range(3): if pos[j] < left_edge[j] or pos[j] > right_edge[j]: continue_loop = True if continue_loop: continue self.process(dims, i, left_edge, dds, 0, pos, field_vals, gid) if self.update_values == 1: for j in range(nf): field_pointers[j][i] = field_vals[j] cdef int process(self, int dim[3], int ipart, np.float64_t left_edge[3], np.float64_t dds[3], np.int64_t offset, np.float64_t ppos[3], np.float64_t[:] fields, np.int64_t domain_ind) except -1 nogil: with gil: raise NotImplementedError cdef class CountParticles(ParticleDepositOperation): cdef np.int64_t[:,:,:,:] count def initialize(self): # Create a numpy array accessible to python self.count = append_axes( np.zeros(self.nvals, dtype="int64", order='F'), 4) @cython.cdivision(True) @cython.boundscheck(False) cdef int process(self, int dim[3], int ipart, np.float64_t left_edge[3], np.float64_t dds[3], np.int64_t offset, # offset into IO field np.float64_t ppos[3], # this particle's position np.float64_t[:] fields, np.int64_t domain_ind ) except -1 nogil: # here we do our thing; this is the kernel cdef int ii[3] cdef int i for i in range(3): ii[i] = ((ppos[i] - left_edge[i])/dds[i]) self.count[ii[2], ii[1], ii[0], offset] += 1 return 0 def finalize(self): arr = np.asarray(self.count) arr.shape = self.nvals return arr.astype("float64") deposit_count = CountParticles cdef class SimpleSmooth(ParticleDepositOperation): # Note that this does nothing at the edges. 
So it will give a poor # estimate there, and since Octrees are mostly edges, this will be a very # poor SPH kernel. cdef np.float64_t[:,:,:,:] data cdef np.float64_t[:,:,:,:] temp def initialize(self): self.data = append_axes( np.zeros(self.nvals, dtype="float64", order='F'), 4) self.temp = append_axes( np.zeros(self.nvals, dtype="float64", order='F'), 4) @cython.cdivision(True) @cython.boundscheck(False) cdef int process(self, int dim[3], int ipart, np.float64_t left_edge[3], np.float64_t dds[3], np.int64_t offset, np.float64_t ppos[3], np.float64_t[:] fields, np.int64_t domain_ind ) except -1 nogil: cdef int ii[3] cdef int ib0[3] cdef int ib1[3] cdef int i, j, k, half_len cdef np.float64_t idist[3] cdef np.float64_t kernel_sum, dist # Smoothing length is fields[0] kernel_sum = 0.0 for i in range(3): ii[i] = ((ppos[i] - left_edge[i])/dds[i]) half_len = (fields[0]/dds[i]) + 1 ib0[i] = ii[i] - half_len ib1[i] = ii[i] + half_len if ib0[i] >= dim[i] or ib1[i] <0: return 0 ib0[i] = iclip(ib0[i], 0, dim[i] - 1) ib1[i] = iclip(ib1[i], 0, dim[i] - 1) for i from ib0[0] <= i <= ib1[0]: idist[0] = (ii[0] - i) * dds[0] idist[0] *= idist[0] for j from ib0[1] <= j <= ib1[1]: idist[1] = (ii[1] - j) * dds[1] idist[1] *= idist[1] for k from ib0[2] <= k <= ib1[2]: idist[2] = (ii[2] - k) * dds[2] idist[2] *= idist[2] dist = idist[0] + idist[1] + idist[2] # Calculate distance in multiples of the smoothing length dist = sqrt(dist) / fields[0] with gil: self.temp[k,j,i,offset] = self.sph_kernel(dist) kernel_sum += self.temp[k,j,i,offset] # Having found the kernel, deposit accordingly into gdata for i from ib0[0] <= i <= ib1[0]: for j from ib0[1] <= j <= ib1[1]: for k from ib0[2] <= k <= ib1[2]: dist = self.temp[k,j,i,offset] / kernel_sum self.data[k,j,i,offset] += fields[1] * dist return 0 def finalize(self): return self.odata deposit_simple_smooth = SimpleSmooth cdef class SumParticleField(ParticleDepositOperation): cdef np.float64_t[:,:,:,:] sum def initialize(self): self.sum = append_axes( np.zeros(self.nvals, dtype="float64", order='F'), 4) @cython.cdivision(True) @cython.boundscheck(False) cdef int process(self, int dim[3], int ipart, np.float64_t left_edge[3], np.float64_t dds[3], np.int64_t offset, np.float64_t ppos[3], np.float64_t[:] fields, np.int64_t domain_ind ) except -1 nogil: cdef int ii[3] cdef int i for i in range(3): ii[i] = ((ppos[i] - left_edge[i]) / dds[i]) self.sum[ii[2], ii[1], ii[0], offset] += fields[0] return 0 def finalize(self): sum = np.asarray(self.sum) sum.shape = self.nvals return sum deposit_sum = SumParticleField cdef class StdParticleField(ParticleDepositOperation): # Thanks to Britton and MJ Turk for the link # to a single-pass STD # http://www.cs.berkeley.edu/~mhoemmen/cs194/Tutorials/variance.pdf cdef np.float64_t[:,:,:,:] mk cdef np.float64_t[:,:,:,:] qk cdef np.float64_t[:,:,:,:] i def initialize(self): # we do this in a single pass, but need two scalar # per cell, M_k, and Q_k and also the number of particles # deposited into each one # the M_k term self.mk = append_axes( np.zeros(self.nvals, dtype="float64", order='F'), 4) self.qk = append_axes( np.zeros(self.nvals, dtype="float64", order='F'), 4) self.i = append_axes( np.zeros(self.nvals, dtype="float64", order='F'), 4) @cython.cdivision(True) @cython.boundscheck(False) cdef int process(self, int dim[3], int ipart, np.float64_t left_edge[3], np.float64_t dds[3], np.int64_t offset, np.float64_t ppos[3], np.float64_t[:] fields, np.int64_t domain_ind ) except -1 nogil: cdef int ii[3] cdef int i cdef float k, mk, qk for i 
in range(3): ii[i] = ((ppos[i] - left_edge[i])/dds[i]) k = self.i[ii[2], ii[1], ii[0], offset] mk = self.mk[ii[2], ii[1], ii[0], offset] qk = self.qk[ii[2], ii[1], ii[0], offset] if k == 0.0: # Initialize cell values self.mk[ii[2], ii[1], ii[0], offset] = fields[0] else: self.mk[ii[2], ii[1], ii[0], offset] = mk + (fields[0] - mk) / k self.qk[ii[2], ii[1], ii[0], offset] = \ qk + (k - 1.0) * (fields[0] - mk) * (fields[0] - mk) / k self.i[ii[2], ii[1], ii[0], offset] += 1 return 0 def finalize(self): # This is the standard variance # if we want sample variance divide by (self.oi - 1.0) i = np.asarray(self.i) std2 = np.asarray(self.qk) / i std2[i == 0.0] = 0.0 std2.shape = self.nvals return np.sqrt(std2) deposit_std = StdParticleField cdef class CICDeposit(ParticleDepositOperation): cdef np.float64_t[:,:,:,:] field cdef public object ofield def initialize(self): if not all(_ > 1 for _ in self.nvals[:-1]): from yt.utilities.exceptions import YTBoundsDefinitionError raise YTBoundsDefinitionError( "CIC requires minimum of 2 zones in all spatial dimensions.", self.nvals[:-1]) self.field = append_axes( np.zeros(self.nvals, dtype="float64", order='F'), 4) @cython.cdivision(True) @cython.boundscheck(False) cdef int process(self, int dim[3], int ipart, np.float64_t left_edge[3], np.float64_t dds[3], np.int64_t offset, # offset into IO field np.float64_t ppos[3], # this particle's position np.float64_t[:] fields, np.int64_t domain_ind ) except -1 nogil: cdef int i, j, k cdef int ind[3] cdef np.float64_t rpos[3] cdef np.float64_t rdds[3][2] # Compute the position of the central cell for i in range(3): rpos[i] = (ppos[i]-left_edge[i])/dds[i] rpos[i] = fclip(rpos[i], 0.5001, dim[i]-0.5001) ind[i] = (rpos[i] + 0.5) # Note these are 1, then 0 rdds[i][1] = ( ind[i]) + 0.5 - rpos[i] rdds[i][0] = 1.0 - rdds[i][1] for i in range(2): for j in range(2): for k in range(2): self.field[ind[2] - k, ind[1] - j, ind[0] - i, offset] += \ fields[0]*rdds[0][i]*rdds[1][j]*rdds[2][k] return 0 def finalize(self): rv = np.asarray(self.field) rv.shape = self.nvals return rv deposit_cic = CICDeposit cdef class WeightedMeanParticleField(ParticleDepositOperation): # Deposit both mass * field and mass into two scalars # then in finalize divide mass * field / mass cdef np.float64_t[:,:,:,:] wf cdef np.float64_t[:,:,:,:] w def initialize(self): self.wf = append_axes( np.zeros(self.nvals, dtype='float64', order='F'), 4) self.w = append_axes( np.zeros(self.nvals, dtype='float64', order='F'), 4) @cython.cdivision(True) @cython.boundscheck(False) cdef int process(self, int dim[3], int ipart, np.float64_t left_edge[3], np.float64_t dds[3], np.int64_t offset, np.float64_t ppos[3], np.float64_t[:] fields, np.int64_t domain_ind ) except -1 nogil: cdef int ii[3] cdef int i for i in range(3): ii[i] = ((ppos[i] - left_edge[i]) / dds[i]) self.w[ii[2], ii[1], ii[0], offset] += fields[1] self.wf[ii[2], ii[1], ii[0], offset] += fields[0] * fields[1] return 0 def finalize(self): wf = np.asarray(self.wf) w = np.asarray(self.w) with np.errstate(divide='ignore', invalid='ignore'): rv = wf / w rv.shape = self.nvals return rv deposit_weighted_mean = WeightedMeanParticleField cdef class MeshIdentifier(ParticleDepositOperation): # This is a tricky one! 
What it does is put into the particle array the # value of the oct or block (grids will always be zero) identifier that a # given particle resides in def initialize(self): self.update_values = 1 @cython.cdivision(True) @cython.boundscheck(False) cdef int process(self, int dim[3], int ipart, np.float64_t left_edge[3], np.float64_t dds[3], np.int64_t offset, np.float64_t ppos[3], np.float64_t[:] fields, np.int64_t domain_ind ) except -1 nogil: fields[0] = domain_ind return 0 def finalize(self): return deposit_mesh_id = MeshIdentifier cdef class CellIdentifier(ParticleDepositOperation): cdef np.int64_t[:] indexes, cell_index # This method stores the offset of the grid containing each particle # and compute the index of the cell in the oct. def initialize(self, int npart): self.indexes = np.zeros(npart, dtype=np.int64, order='F') - 1 self.cell_index = np.zeros(npart, dtype=np.int64, order='F') - 1 @cython.cdivision(True) @cython.boundscheck(False) cdef int process(self, int dim[3], int ipart, np.float64_t left_edge[3], np.float64_t dds[3], np.int64_t offset, np.float64_t ppos[3], np.float64_t[:] fields, np.int64_t domain_ind ) except -1 nogil: cdef int i, icell self.indexes[ipart] = offset icell = 0 for i in range(3): if ppos[i] > left_edge[i] + dds[i]: icell |= 4 >> i # Compute cell index self.cell_index[ipart] = icell return 0 def finalize(self): return np.asarray(self.indexes), np.asarray(self.cell_index) deposit_cell_id = CellIdentifier cdef class NNParticleField(ParticleDepositOperation): cdef np.float64_t[:,:,:,:] nnfield cdef np.float64_t[:,:,:,:] distfield def initialize(self): self.nnfield = append_axes( np.zeros(self.nvals, dtype="float64", order='F'), 4) self.distfield = append_axes( np.zeros(self.nvals, dtype="float64", order='F'), 4) self.distfield[:] = np.inf @cython.cdivision(True) @cython.boundscheck(False) cdef int process(self, int dim[3], int ipart, np.float64_t left_edge[3], np.float64_t dds[3], np.int64_t offset, np.float64_t ppos[3], np.float64_t[:] fields, np.int64_t domain_ind ) except -1 nogil: # This one is a bit slow. Every grid cell is going to be iterated # over, and we're going to deposit particles in it. 
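# Cost sketch: each particle scans all dim[0]*dim[1]*dim[2] cells, so a
# single oct (nz = 2) costs 8 distance tests per particle while a 32**3
# grid costs 32768 -- O(N_particles * N_cells) overall.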
cdef int i, j, k cdef np.float64_t r2 cdef np.float64_t gpos[3] gpos[0] = left_edge[0] + 0.5 * dds[0] for i in range(dim[0]): gpos[1] = left_edge[1] + 0.5 * dds[1] for j in range(dim[1]): gpos[2] = left_edge[2] + 0.5 * dds[2] for k in range(dim[2]): r2 = ((ppos[0] - gpos[0])*(ppos[0] - gpos[0]) + (ppos[1] - gpos[1])*(ppos[1] - gpos[1]) + (ppos[2] - gpos[2])*(ppos[2] - gpos[2])) if r2 < self.distfield[k,j,i,offset]: self.distfield[k,j,i,offset] = r2 self.nnfield[k,j,i,offset] = fields[0] gpos[2] += dds[2] gpos[1] += dds[1] gpos[0] += dds[0] return 0 def finalize(self): nn = np.asarray(self.nnfield) nn.shape = self.nvals return nn deposit_nearest = NNParticleField ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/particle_geometry_handler.py0000644000175100001770000004414314714401662021647 0ustar00runnerdockerimport collections import errno import os import struct import weakref import numpy as np from ewah_bool_utils.ewah_bool_wrap import BoolArrayCollection from yt.data_objects.index_subobjects.particle_container import ParticleContainer from yt.funcs import get_pbar, is_sequence, only_on_root from yt.geometry.geometry_handler import Index, YTDataChunk from yt.geometry.particle_oct_container import ParticleBitmap from yt.utilities.lib.fnv_hash import fnv_hash from yt.utilities.logger import ytLogger as mylog from yt.utilities.parallel_tools.parallel_analysis_interface import parallel_objects def validate_index_order(index_order): if index_order is None: index_order = (6, 2) elif not is_sequence(index_order): index_order = (int(index_order), 1) else: if len(index_order) != 2: raise RuntimeError( f"Tried to load a dataset with index_order={index_order}, but " "index_order\nmust be an integer or a two-element tuple of " "integers." ) index_order = tuple(int(o) for o in index_order) return index_order class ParticleIndexInfo: def __init__(self, order1, order2, filename, mutable_index): self._order1 = order1 self._order2 = order2 self._order2_orig = order2 self.filename = filename self.mutable_index = mutable_index self._is_loaded = False @property def order1(self): return self._order1 @property def order2(self): return self._order2 @order2.setter def order2(self, value): if value == self._order2: # do nothing if nothing changes return mylog.debug("Updating index_order2 from %s to %s", self._order2, value) self._order2 = value @property def order2_orig(self): return self._order2_orig class ParticleIndex(Index): """The Index subclass for particle datasets""" def __init__(self, ds, dataset_type): self.dataset_type = dataset_type self.dataset = weakref.proxy(ds) self.float_type = np.float64 super().__init__(ds, dataset_type) self._initialize_index() def _setup_geometry(self): self.regions = None def get_smallest_dx(self): """ Returns (in code units) the smallest cell size in the simulation. 
""" return self.ds.arr(0, "code_length") def _get_particle_type_counts(self): result = collections.defaultdict(lambda: 0) for df in self.data_files: for k in df.total_particles.keys(): result[k] += df.total_particles[k] return dict(result) def convert(self, unit): return self.dataset.conversion_factors[unit] @property def chunksize(self): # This can be overridden in subclasses return 64**3 _data_files = None @property def data_files(self): if self._data_files is not None: return self._data_files self._setup_filenames() return self._data_files @data_files.setter def data_files(self, value): self._data_files = value _total_particles = None @property def total_particles(self): if self._total_particles is not None: return self._total_particles self._total_particles = sum( sum(d.total_particles.values()) for d in self.data_files ) return self._total_particles def _setup_filenames(self): template = self.dataset.filename_template ndoms = self.dataset.file_count cls = self.dataset._file_class self.data_files = [] fi = 0 for i in range(int(ndoms)): start = 0 if self.chunksize > 0: end = start + self.chunksize else: end = None while True: try: _filename = template % {"num": i} df = cls(self.dataset, self.io, _filename, fi, (start, end)) except FileNotFoundError: mylog.warning( "Failed to load '%s' (missing file or directory)", _filename ) break if max(df.total_particles.values()) == 0: break fi += 1 self.data_files.append(df) if end is None: break start = end end += self.chunksize def _initialize_index(self): ds = self.dataset only_on_root( mylog.info, "Allocating for %0.4g particles", self.total_particles, global_rootonly=True, ) # if we have not yet set domain_left_edge and domain_right_edge then do # an I/O pass over the particle coordinates to determine a bounding box if self.ds.domain_left_edge is None: min_ppos = np.empty(3, dtype="float64") min_ppos[:] = np.nan max_ppos = np.empty(3, dtype="float64") max_ppos[:] = np.nan only_on_root( mylog.info, "Bounding box cannot be inferred from metadata, reading " "particle positions to infer bounding box", ) for df in self.data_files: for _, ppos in self.io._yield_coordinates(df): min_ppos = np.nanmin(np.vstack([min_ppos, ppos]), axis=0) max_ppos = np.nanmax(np.vstack([max_ppos, ppos]), axis=0) only_on_root( mylog.info, f"Load this dataset with bounding_box=[{min_ppos}, {max_ppos}] " "to avoid I/O overhead from inferring bounding_box.", ) ds.domain_left_edge = ds.arr(1.05 * min_ppos, "code_length") ds.domain_right_edge = ds.arr(1.05 * max_ppos, "code_length") ds.domain_width = ds.domain_right_edge - ds.domain_left_edge # use a trivial morton index for datasets containing a single chunk if len(self.data_files) == 1: order1 = 1 order2 = 1 mutable_index = False else: mutable_index = ds.index_order is None index_order = validate_index_order(ds.index_order) order1 = index_order[0] order2 = index_order[1] if order1 == 1 and order2 == 1: dont_cache = True else: dont_cache = False if not hasattr(self.ds, "_file_hash"): self.ds._file_hash = self._generate_hash() # Load Morton index from file if provided fname = getattr(ds, "index_filename", None) or f"{ds.parameter_filename}.ewah" self.pii = ParticleIndexInfo(order1, order2, fname, mutable_index) self.regions = ParticleBitmap( ds.domain_left_edge, ds.domain_right_edge, ds.periodicity, self.ds._file_hash, len(self.data_files), index_order1=self.pii.order1, index_order2=self.pii.order2_orig, ) dont_load = dont_cache and not hasattr(ds, "index_filename") try: if dont_load: raise OSError rflag, max_hsml = 
self.regions.load_bitmasks(fname) if max_hsml > 0.0 and self.pii.mutable_index: self._order2_update(max_hsml) rflag = self.regions.check_bitmasks() self._initialize_frontend_specific() if rflag == 0: raise OSError self.pii._is_loaded = True except (OSError, struct.error): self.regions.reset_bitmasks() max_hsml = self._initialize_coarse_index() self._initialize_refined_index() wdir = os.path.dirname(fname) if not dont_cache and os.access(wdir, os.W_OK): # Sometimes os mis-reports whether a directory is writable, # So pass if writing the bitmask file fails. try: self.regions.save_bitmasks(fname, max_hsml) except OSError: pass rflag = self.regions.check_bitmasks() self.ds.index_order = (self.pii.order1, self.pii.order2) def _order2_update(self, max_hsml): # By passing this in, we only allow index_order2 to be increased by # two at most, never increased. One place this becomes particularly # useful is in the case of an extremely small section of gas # particles embedded in a much much larger domain. The max # smoothing length will be quite small, so based on the larger # domain, it will correspond to a very very high index order, which # is a large amount of memory! Having multiple indexes, one for # each particle type, would fix this. new_order2 = self.regions.update_mi2(max_hsml, self.pii.order2 + 2) self.pii.order2 = new_order2 def _initialize_coarse_index(self): max_hsml = 0.0 pb = get_pbar("Initializing coarse index ", len(self.data_files)) for i, data_file in parallel_objects(enumerate(self.data_files)): pb.update(i + 1) for ptype, pos in self.io._yield_coordinates(data_file): ds = self.ds if hasattr(ds, "_sph_ptypes") and ptype == ds._sph_ptypes[0]: hsml = self.io._get_smoothing_length( data_file, pos.dtype, pos.shape ) if hsml is not None and hsml.size > 0.0: max_hsml = max(max_hsml, hsml.max()) else: hsml = None self.regions._coarse_index_data_file(pos, hsml, data_file.file_id) pb.finish() self.regions.masks = self.comm.mpi_allreduce(self.regions.masks, op="sum") self.regions.particle_counts = self.comm.mpi_allreduce( self.regions.particle_counts, op="sum" ) for data_file in self.data_files: self.regions._set_coarse_index_data_file(data_file.file_id) self.regions.find_collisions_coarse() if max_hsml > 0.0 and self.pii.mutable_index: self._order2_update(max_hsml) return max_hsml def _initialize_refined_index(self): mask = self.regions.masks.sum(axis=1).astype("uint8") max_npart = max(sum(d.total_particles.values()) for d in self.data_files) * 28 sub_mi1 = np.zeros(max_npart, "uint64") sub_mi2 = np.zeros(max_npart, "uint64") pb = get_pbar("Initializing refined index", len(self.data_files)) mask_threshold = getattr(self, "_index_mask_threshold", 2) count_threshold = getattr(self, "_index_count_threshold", 256) mylog.debug( "Using estimated thresholds of %s and %s for refinement", mask_threshold, count_threshold, ) total_refined = 0 total_coarse_refined = ( (mask >= 2) & (self.regions.particle_counts > count_threshold) ).sum() mylog.debug( "This should produce roughly %s zones, for %s of the domain", total_coarse_refined, 100 * total_coarse_refined / mask.size, ) storage = {} for sto, (i, data_file) in parallel_objects( enumerate(self.data_files), storage=storage ): coll = None pb.update(i + 1) nsub_mi = 0 for ptype, pos in self.io._yield_coordinates(data_file): if pos.size == 0: continue if hasattr(self.ds, "_sph_ptypes") and ptype == self.ds._sph_ptypes[0]: hsml = self.io._get_smoothing_length( data_file, pos.dtype, pos.shape ) else: hsml = None nsub_mi, coll = 
self.regions._refined_index_data_file( coll, pos, hsml, mask, sub_mi1, sub_mi2, data_file.file_id, nsub_mi, count_threshold=count_threshold, mask_threshold=mask_threshold, ) total_refined += nsub_mi sto.result_id = i if coll is None: coll_str = b"" else: coll_str = coll.dumps() sto.result = (data_file.file_id, coll_str) pb.finish() for i in sorted(storage): file_id, coll_str = storage[i] coll = BoolArrayCollection() coll.loads(coll_str) self.regions.bitmasks.append(file_id, coll) self.regions.find_collisions_refined() def _detect_output_fields(self): # TODO: Add additional fields dsl = [] units = {} pcounts = self._get_particle_type_counts() field_cache = {} for dom in self.data_files: if dom.filename in field_cache: fl, _units = field_cache[dom.filename] else: fl, _units = self.io._identify_fields(dom) field_cache[dom.filename] = fl, _units units.update(_units) dom._calculate_offsets(fl, pcounts) for f in fl: if f not in dsl: dsl.append(f) self.field_list = dsl ds = self.dataset ds.particle_types = tuple({pt for pt, ds in dsl}) # This is an attribute that means these particle types *actually* # exist. As in, they are real, in the dataset. ds.field_units.update(units) ds.particle_types_raw = ds.particle_types def _identify_base_chunk(self, dobj): # Must check that chunk_info contains the right number of ghost zones if getattr(dobj, "_chunk_info", None) is None: if isinstance(dobj, ParticleContainer): dobj._chunk_info = [dobj] else: # TODO: only return files if getattr(dobj.selector, "is_all_data", False): nfiles = self.regions.nfiles dfi = np.arange(nfiles) else: dfi, file_masks, addfi = self.regions.identify_file_masks( dobj.selector ) nfiles = len(file_masks) dobj._chunk_info = [None for _ in range(nfiles)] # The following was moved here from ParticleContainer in order # to make the ParticleContainer object pickleable. By having # the base_selector as its own argument, we avoid having to # rebuild the index on unpickling a ParticleContainer. if hasattr(dobj, "base_selector"): base_selector = dobj.base_selector base_region = dobj.base_region else: base_region = dobj base_selector = dobj.selector for i in range(nfiles): domain_id = i + 1 dobj._chunk_info[i] = ParticleContainer( base_region, base_selector, [self.data_files[dfi[i]]], domain_id=domain_id, ) # NOTE: One fun thing about the way IO works is that it # consolidates things quite nicely. So we should feel free to # create as many objects as part of the chunk as we want, since # it'll take the set() of them. So if we break stuff up like # this here, we end up in a situation where we have the ability # to break things down further later on for buffer zones and the # like. 
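# Example (sketch): if the selector touches the files dfi = [0, 2, 5], the
# loop above builds three single-file ParticleContainer chunks carrying
# domain_id 1, 2 and 3 respectively, which the IO layer is then free to
# regroup when it actually reads.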
        (dobj._current_chunk,) = self._chunk_all(dobj)

    def _chunk_all(self, dobj):
        oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
        yield YTDataChunk(dobj, "all", oobjs, None)

    def _chunk_spatial(self, dobj, ngz, sort=None, preload_fields=None):
        sobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
        for og in sobjs:
            with og._expand_data_files():
                if ngz > 0:
                    g = og.retrieve_ghost_zones(ngz, [], smoothed=True)
                else:
                    g = og
                yield YTDataChunk(dobj, "spatial", [g])

    def _chunk_io(self, dobj, cache=True, local_only=False):
        oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
        for container in oobjs:
            yield YTDataChunk(dobj, "io", [container], None, cache=cache)

    def _generate_hash(self):
        # Generate an FNV hash by creating a byte array containing the
        # modification time of, as well as the first and last 1 MB of data
        # in, every output file.
        ret = bytearray()
        for pfile in self.data_files:
            # only look at "real" files, not "fake" files generated by the
            # chunking system
            if pfile.start not in (0, None):
                continue
            try:
                mtime = os.path.getmtime(pfile.filename)
            except OSError as e:
                if e.errno == errno.ENOENT:
                    # this is an in-memory file so we return with a dummy
                    # value
                    return -1
                else:
                    raise
            ret.extend(str(mtime).encode("utf-8"))
            size = os.path.getsize(pfile.filename)
            if size > 1e6:
                size = int(1e6)
            with open(pfile.filename, "rb") as fh:
                # read in first and last 1 MB of data
                data = fh.read(size)
                fh.seek(-size, os.SEEK_END)
                data += fh.read(size)
            ret.extend(data)
        return fnv_hash(ret)

    def _initialize_frontend_specific(self):
        """This is for frontend-specific initialization code

        If there are frontend-specific things that need to be set while
        creating the index, this function forces these operations to happen
        in cases where we are reloading the index from a sidecar file.
        """
        pass

# File: yt-4.4.0/yt/geometry/particle_oct_container.pyx

# distutils: language = c++
# distutils: extra_compile_args = CPP14_FLAG
# distutils: include_dirs = LIB_DIR
# distutils: libraries = EWAH_LIBS
"""
Oct container tuned for Particles
"""

from ewah_bool_utils.ewah_bool_array cimport (
    bool_array,
    ewah_bool_array,
    ewah_bool_iterator,
    ewah_word_type,
)
from libc.math cimport ceil, log2
from libc.stdlib cimport free, malloc
from libcpp.map cimport map as cmap
from libcpp.vector cimport vector

import numpy as np

cimport cython
cimport numpy as np
from cpython.exc cimport PyErr_CheckSignals
from cython.operator cimport dereference, preincrement

from yt.geometry cimport oct_visitors
from yt.utilities.lib.fnv_hash cimport c_fnv_hash as fnv_hash
from yt.utilities.lib.fp_utils cimport *
from yt.utilities.lib.geometry_utils cimport (
    bounded_morton,
    bounded_morton_dds,
    bounded_morton_split_dds,
    bounded_morton_split_relative_dds,
    decode_morton_64bit,
    encode_morton_64bit,
    morton_neighbors_coarse,
    morton_neighbors_refined,
)

from .oct_container cimport (
    ORDER_MAX,
    Oct,
    OctKey,
    OctreeContainer,
    SparseOctreeContainer,
)
from .oct_visitors cimport cind
from .selection_routines cimport AlwaysSelector, SelectorObject

from yt.funcs import get_pbar

from ewah_bool_utils.ewah_bool_wrap cimport BoolArrayCollection

import os

from ewah_bool_utils.ewah_bool_wrap cimport (
    BoolArrayCollectionUncompressed as BoolArrayColl,
    FileBitmasks,
    SparseUnorderedRefinedBitmaskSet as SparseUnorderedRefinedBitmask,
)

_bitmask_version = np.uint64(5)

ctypedef cmap[np.uint64_t, bool_array] CoarseRefinedSets

cdef class ParticleOctreeContainer(OctreeContainer):
    cdef Oct** oct_list
    # The starting oct index of each domain
    cdef np.int64_t *dom_offsets
    # How many particles do we keep before refining
    cdef public int n_ref

    def allocate_root(self):
        cdef int i, j, k
        cdef Oct *cur
        for i in range(self.nn[0]):
            for j in range(self.nn[1]):
                for k in range(self.nn[2]):
                    cur = self.allocate_oct()
                    self.root_mesh[i][j][k] = cur

    def __dealloc__(self):
        # Call the freemem ops on every oct of the root mesh recursively
        cdef int i, j, k
        if self.root_mesh == NULL:
            return
        for i in range(self.nn[0]):
            if self.root_mesh[i] == NULL:
                continue
            for j in range(self.nn[1]):
                if self.root_mesh[i][j] == NULL:
                    continue
                for k in range(self.nn[2]):
                    if self.root_mesh[i][j][k] == NULL:
                        continue
                    self.visit_free(self.root_mesh[i][j][k])
        free(self.oct_list)
        free(self.dom_offsets)

    cdef void visit_free(self, Oct *o):
        # Free the memory for this oct recursively
        cdef int i, j, k
        for i in range(2):
            for j in range(2):
                for k in range(2):
                    if o.children != NULL \
                       and o.children[cind(i,j,k)] != NULL:
                        self.visit_free(o.children[cind(i,j,k)])
        free(o.children)
        free(o)

    def clear_fileind(self):
        cdef int i, j, k
        for i in range(self.nn[0]):
            for j in range(self.nn[1]):
                for k in range(self.nn[2]):
                    self.visit_clear(self.root_mesh[i][j][k])

    cdef void visit_clear(self, Oct *o):
        # Clear the file indices for this oct recursively
        cdef int i, j, k
        o.file_ind = 0
        for i in range(2):
            for j in range(2):
                for k in range(2):
                    if o.children != NULL \
                       and o.children[cind(i,j,k)] != NULL:
                        self.visit_clear(o.children[cind(i,j,k)])

    def __iter__(self):
        # Get the next oct, which will traverse domains.
        # Note that oct containers can be sorted
        # so that consecutive octs are on the same domain.
        cdef int oi
        cdef Oct *o
        for oi in range(self.nocts):
            o = self.oct_list[oi]
            yield \
                (o.file_ind, o.domain_ind, o.domain)

    def allocate_domains(self, domain_counts):
        pass

    def finalize(self, int domain_id = 0):
        # This will sort the octs in the oct list so that domains appear
        # consecutively, and then find the oct index/offset for every domain
        cdef int max_level = 0
        self.oct_list = <Oct**> malloc(sizeof(Oct*)*self.nocts)
        cdef np.int64_t i = 0, lpos = 0
        # Note that we now assign them in the same order they will be visited
        # by recursive visitors.
        for i in range(self.nn[0]):
            for j in range(self.nn[1]):
                for k in range(self.nn[2]):
                    self.visit_assign(self.root_mesh[i][j][k], &lpos,
                                      0, &max_level)
        assert(lpos == self.nocts)
        for i in range(self.nocts):
            self.oct_list[i].domain_ind = i
            self.oct_list[i].domain = domain_id
        self.max_level = max_level

    cdef visit_assign(self, Oct *o, np.int64_t *lpos, int level,
                      int *max_level):
        cdef int i, j, k
        self.oct_list[lpos[0]] = o
        lpos[0] += 1
        max_level[0] = imax(max_level[0], level)
        for i in range(2):
            for j in range(2):
                for k in range(2):
                    if o.children != NULL \
                       and o.children[cind(i,j,k)] != NULL:
                        self.visit_assign(o.children[cind(i,j,k)], lpos,
                                          level + 1, max_level)
        return

    cdef np.int64_t get_domain_offset(self, int domain_id):
        return 0

    cdef Oct* allocate_oct(self):
        # Allocate the memory, set to NULL or -1.
        # We reserve space for n_ref particles, but keep
        # track of how many are used with np initially 0.
        self.nocts += 1
        cdef Oct *my_oct = <Oct*> malloc(sizeof(Oct))
        my_oct.domain = -1
        my_oct.file_ind = 0
        my_oct.domain_ind = self.nocts - 1
        my_oct.children = NULL
        return my_oct

    @cython.boundscheck(False)
    @cython.wraparound(False)
    @cython.cdivision(True)
    def add(self, np.ndarray[np.uint64_t, ndim=1] indices,
            np.uint8_t order = ORDER_MAX):
        # Add each particle to the root oct; then, if that oct has children,
        # add it to them recursively. If the child needs to be refined
        # because of the maximum particle count, do so.
        cdef np.int64_t no = indices.shape[0], p
        cdef np.uint64_t index
        cdef int i, level
        cdef int ind[3]
        if self.root_mesh[0][0][0] == NULL:
            self.allocate_root()
        cdef np.uint64_t *data = <np.uint64_t *> indices.data
        for p in range(no):
            # We have morton indices, which means we choose left and right by
            # looking at (MAX_ORDER - level) & with the values 1, 2, 4.
            level = 0
            index = indices[p]
            if index == FLAG:
                # This is a marker for the index not being inside the domain
                # we're interested in.
                continue
            # Convert morton index to 3D index of octree root
            for i in range(3):
                ind[i] = (index >> ((order - level)*3 + (2 - i))) & 1
            cur = self.root_mesh[ind[0]][ind[1]][ind[2]]
            if cur == NULL:
                raise RuntimeError
            # Continue refining the octree until you reach the level of the
            # morton indexing order. Along the way, use the prefix to count
            # previous indices at levels in the octree.
            while (cur.file_ind + 1) > self.n_ref:
                if level >= order:
                    break  # Just dump it here.
                level += 1
                for i in range(3):
                    ind[i] = (index >> ((order - level)*3 + (2 - i))) & 1
                if cur.children == NULL or \
                   cur.children[cind(ind[0],ind[1],ind[2])] == NULL:
                    cur = self.refine_oct(cur, index, level, order)
                    self.filter_particles(cur, data, p, level, order)
                else:
                    cur = cur.children[cind(ind[0],ind[1],ind[2])]
            # If our n_ref is 1, we are always refining, which means we're an
            # index octree. In this case, we should store the index for fast
            # lookup later on when we find neighbors and the like.
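            # (A sketch of what that fast lookup enables: with n_ref == 1
            # every leaf keeps its own Morton index in file_ind, so a later
            # call like get_from_index(mi) can walk straight down the tree
            # and recover the leaf for any neighboring index without a
            # search.)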
if self.n_ref == 1: cur.file_ind = index else: cur.file_ind += 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef Oct *refine_oct(self, Oct *o, np.uint64_t index, int level, np.uint8_t order): #Allocate and initialize child octs #Attach particles to child octs #Remove particles from this oct entirely cdef int i, j, k cdef int ind[3] cdef Oct *noct # TODO: This does not need to be changed. o.children = malloc(sizeof(Oct *)*8) for i in range(2): for j in range(2): for k in range(2): noct = self.allocate_oct() noct.domain = o.domain noct.file_ind = 0 o.children[cind(i,j,k)] = noct o.file_ind = self.n_ref + 1 for i in range(3): ind[i] = (index >> ((order - level)*3 + (2 - i))) & 1 noct = o.children[cind(ind[0],ind[1],ind[2])] return noct cdef void filter_particles(self, Oct *o, np.uint64_t *data, np.int64_t p, int level, np.uint8_t order): # Now we look at the last nref particles to decide where they go. # If p: Loops over all previous morton indices # If n_ref: Loops over n_ref previous morton indices cdef int n = imin(p, self.n_ref) cdef np.uint64_t *arr = data + imax(p - self.n_ref, 0) cdef np.uint64_t prefix1, prefix2 # Now we figure out our prefix, which is the oct address at this level. # As long as we're actually in Morton order, we do not need to worry # about *any* of the other children of the oct. prefix1 = data[p] >> (order - level)*3 for i in range(n): prefix2 = arr[i] >> (order - level)*3 if (prefix1 == prefix2): o.file_ind += 1 # Says how many morton indices are in this octant? def recursively_count(self): #Visit every cell, accumulate the # of cells per level cdef int i, j, k cdef np.int64_t counts[128] for i in range(128): counts[i] = 0 for i in range(self.nn[0]): for j in range(self.nn[1]): for k in range(self.nn[2]): if self.root_mesh[i][j][k] != NULL: self.visit(self.root_mesh[i][j][k], counts) level_counts = {} for i in range(128): if counts[i] == 0: break level_counts[i] = counts[i] return level_counts cdef visit(self, Oct *o, np.int64_t *counts, level = 0): cdef int i, j, k counts[level] += 1 for i in range(2): for j in range(2): for k in range(2): if o.children != NULL \ and o.children[cind(i,j,k)] != NULL: self.visit(o.children[cind(i,j,k)], counts, level + 1) return @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef Oct *get_from_index(self, np.uint64_t mi, np.uint8_t order = ORDER_MAX, int max_level = 99): cdef Oct *cur cdef Oct *next cur = next = NULL cdef int i cdef np.int64_t level = -1 cdef int ind32[3] cdef np.uint64_t ind[3] cdef np.uint64_t index # Get level offset cdef int level_offset[3] for i in range(3): level_offset[i] = np.log2(self.nn[i]) if (1 << level_offset[i]) != self.nn[i]: raise Exception("Octree does not have roots along dimension {} in a power of 2 ".format(i)) for i in range(2,3): if level_offset[i] != level_offset[0]: raise Exception("Octree must have the same number of roots in each dimension for this.") # Get root for index index = (mi >> ((order - level_offset[0])*3)) decode_morton_64bit(index, ind) for i in range(3): ind32[i] = ind[i] self.get_root(ind32, &next) # We want to stop recursing when there's nowhere else to go level = level_offset[0] max_level = min(max_level, order) while next != NULL and level <= max_level: level += 1 for i in range(3): ind[i] = (mi >> ((order - level)*3 + (2 - i))) & 1 cur = next if cur.children != NULL: next = cur.children[cind(ind[0],ind[1],ind[2])] else: next = NULL return cur def apply_domain(self, int domain_id, BoolArrayCollection mask, int 
masklevel): cdef SelectorObject selector = AlwaysSelector(None) ind = self.domain_ind(selector, mask = mask, masklevel = masklevel) for i in range(self.nocts): if ind[i] < 0: continue self.oct_list[i].domain = domain_id super(ParticleOctreeContainer,self).domain_ind(selector, domain_id = domain_id) def domain_ind(self, selector, int domain_id = -1, BoolArrayCollection mask = None, int masklevel = 99): if mask is None: return super(ParticleOctreeContainer,self).domain_ind(selector, domain_id = domain_id) # Create mask for octs that are touched by the mask cdef ewah_bool_array *ewah_slct = mask.ewah_keys cdef ewah_bool_iterator *iter_set = new ewah_bool_iterator(ewah_slct[0].begin()) cdef ewah_bool_iterator *iter_end = new ewah_bool_iterator(ewah_slct[0].end()) cdef np.ndarray[np.uint8_t, ndim=1] oct_mask oct_mask = np.zeros(self.nocts, 'uint8') cdef Oct *o cdef int coct, cmi coct = cmi = 0 while iter_set[0] != iter_end[0]: mi = dereference(iter_set[0]) o = self.get_from_index(mi, order = masklevel) if o != NULL: _mask_children(oct_mask, o) coct += 1 cmi += 1 preincrement(iter_set[0]) # Get domain ind cdef np.ndarray[np.int64_t, ndim=1] ind ind = np.zeros(self.nocts, 'int64') - 1 cdef oct_visitors.MaskedIndexOcts visitor visitor = oct_visitors.MaskedIndexOcts(self, domain_id) visitor.oct_index = ind visitor.oct_mask = oct_mask self.visit_all_octs(selector, visitor) return ind cdef void _mask_children(np.ndarray[np.uint8_t] mask, Oct *cur): cdef int i, j, k if cur == NULL: return mask[cur.domain_ind] = 1 if cur.children == NULL: return for i in range(2): for j in range(2): for k in range(2): _mask_children(mask, cur.children[cind(i,j,k)]) cdef np.uint64_t ONEBIT=1 cdef np.uint64_t FLAG = ~(0) cdef class ParticleBitmap: cdef np.float64_t left_edge[3] cdef np.float64_t right_edge[3] cdef np.uint8_t periodicity[3] cdef np.float64_t dds[3] cdef np.float64_t dds_mi1[3] cdef np.float64_t dds_mi2[3] cdef np.float64_t idds[3] cdef np.int32_t dims[3] cdef np.int64_t file_hash cdef np.uint64_t directional_max2[3] cdef np.int64_t hash_value cdef public np.uint64_t nfiles cdef public np.int32_t index_order1 cdef public np.int32_t index_order2 cdef public object masks cdef public object particle_counts cdef public object counts cdef public object max_count cdef public object _last_selector cdef public object _last_return_values cdef public object _cached_octrees cdef public object _last_octree_subset cdef public object _last_oct_handler cdef public object _prev_octree_subset cdef public object _prev_oct_handler cdef np.uint32_t *file_markers cdef np.uint64_t n_file_markers cdef np.uint64_t file_marker_i cdef public FileBitmasks bitmasks cdef public BoolArrayCollection collisions cdef public int _used_mi2 def __init__(self, left_edge, right_edge, periodicity, file_hash, nfiles, index_order1, index_order2): # TODO: Set limit on maximum orders? cdef int i self._cached_octrees = {} self._last_selector = None self._last_return_values = None self._last_octree_subset = None self._last_oct_handler = None self._prev_octree_subset = None self._prev_oct_handler = None self.file_hash = file_hash self.nfiles = nfiles for i in range(3): self.left_edge[i] = left_edge[i] self.right_edge[i] = right_edge[i] self.periodicity[i] = periodicity[i] self.dims[i] = (1< 0: return self.index_order2 cdef np.uint64_t index_order2 = 2 for i in range(3): # Note we're casting to signed here, to avoid negative issues. 
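            # A quick numeric sketch of the estimate below: with a coarse
            # cell width dds_mi1[i] = 1.0 and a characteristic particle size
            # of 0.1, ceil(log2(1.0 / 0.1)) = ceil(3.32...) = 4, so 2**4
            # refined cells per coarse cell along this axis would resolve
            # the characteristic size.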
if self.dds_mi1[i] < characteristic_size: continue index_order2 = max(index_order2, ceil(log2(self.dds_mi1[i] / characteristic_size))) index_order2 = i64min(max_index_order2, index_order2) self._update_mi2(index_order2) return self.index_order2 cdef void _update_mi2(self, np.uint64_t index_order2): self.index_order2 = index_order2 mi2_max = (1 << self.index_order2) - 1 self.directional_max2[0] = encode_morton_64bit(mi2_max, 0, 0) self.directional_max2[1] = encode_morton_64bit(0, mi2_max, 0) self.directional_max2[2] = encode_morton_64bit(0, 0, mi2_max) for i in range(3): self.dds_mi2[i] = self.dds_mi1[i] / (1< RE[i]: axiter[i][1] = -1 axiterv[i][1] = -DW[i] for xi in range(2): if axiter[0][xi] == 999: continue s_ppos[0] = ppos[0] + axiterv[0][xi] for yi in range(2): if axiter[1][yi] == 999: continue s_ppos[1] = ppos[1] + axiterv[1][yi] for zi in range(2): if axiter[2][zi] == 999: continue s_ppos[2] = ppos[2] + axiterv[2][zi] # OK, now we compute the left and right edges for this shift. for i in range(3): clip_pos_l[i] = fmax(s_ppos[i] - radius, LE[i] + dds[i]/10) clip_pos_r[i] = fmin(s_ppos[i] + radius, RE[i] - dds[i]/10) bounded_morton_split_dds(clip_pos_l[0], clip_pos_l[1], clip_pos_l[2], LE, dds, bounds[0]) bounded_morton_split_dds(clip_pos_r[0], clip_pos_r[1], clip_pos_r[2], LE, dds, bounds[1]) # We go to the upper bound plus one so that we have *inclusive* loops -- the upper bound # is the cell *index*, so we want to make sure we include that cell. This is also why # we don't need to worry about mi_max being the max index rather than the cell count. for xex in range(bounds[0][0], bounds[1][0] + 1): for yex in range(bounds[0][1], bounds[1][1] + 1): for zex in range(bounds[0][2], bounds[1][2] + 1): miex = encode_morton_64bit(xex, yex, zex) mask[miex] = 1 particle_counts[miex] += 1 if miex >= msize: raise IndexError( "Index for a softening region " f"({miex}) exceeds " f"max ({msize})") @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def _set_coarse_index_data_file(self, np.uint64_t file_id): return self.__set_coarse_index_data_file(file_id) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void __set_coarse_index_data_file(self, np.uint64_t file_id): cdef np.int64_t i cdef FileBitmasks bitmasks = self.bitmasks cdef np.ndarray[np.uint8_t, ndim=1] mask = self.masks[:,file_id] # Add in order for i in range(mask.shape[0]): if mask[i] == 1: bitmasks._set_coarse(file_id, i) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) def _refined_index_data_file(self, BoolArrayCollection in_collection, np.ndarray[cython.floating, ndim=2] pos, np.ndarray[cython.floating, ndim=1] hsml, np.ndarray[np.uint8_t, ndim=1] mask, np.ndarray[np.uint64_t, ndim=1] sub_mi1, np.ndarray[np.uint64_t, ndim=1] sub_mi2, np.uint64_t file_id, np.int64_t nsub_mi, np.uint64_t count_threshold = 128, np.uint8_t mask_threshold = 2): self._used_mi2 = 1 if in_collection is None: in_collection = BoolArrayCollection() cdef BoolArrayCollection _in_coll = in_collection out_collection = self.__refined_index_data_file(_in_coll, pos, hsml, mask, count_threshold, mask_threshold) return 0, out_collection @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) cdef BoolArrayCollection __refined_index_data_file( self, BoolArrayCollection in_collection, np.ndarray[cython.floating, ndim=2] pos, np.ndarray[cython.floating, ndim=1] hsml, np.ndarray[np.uint8_t, ndim=1] mask, 
np.uint64_t count_threshold, np.uint8_t mask_threshold ): # Initialize cdef np.int64_t p, sorted_ind cdef np.uint64_t i cdef np.uint64_t mi1, mi2 cdef np.float64_t ppos[3] cdef np.float64_t s_ppos[3] # shifted ppos cdef int skip cdef BoolArrayCollection this_collection, out_collection cdef np.uint64_t bounds[2][3] cdef np.uint8_t fully_enclosed cdef np.float64_t LE[3] cdef np.float64_t RE[3] cdef np.float64_t DW[3] cdef np.uint8_t PER[3] cdef np.float64_t dds1[3] cdef np.float64_t dds2[3] cdef np.float64_t radius cdef np.uint64_t mi_split1[3] cdef np.uint64_t mi_split2[3] cdef np.uint64_t miex1 cdef np.uint64_t[:] particle_counts = self.particle_counts cdef np.uint64_t xex, yex, zex cdef np.float64_t clip_pos_l[3] cdef np.float64_t clip_pos_r[3] cdef int axiter[3][2] cdef np.float64_t axiterv[3][2] cdef CoarseRefinedSets coarse_refined_map cdef np.uint64_t nfully_enclosed = 0, n_calls = 0 cdef np.uint64_t max_mi1_elements = 1 << (3*self.index_order1) cdef np.uint64_t max_mi2_elements = 1 << (3*self.index_order2) cdef np.ndarray[np.uint64_t, ndim=1] refined_count = np.zeros(max_mi1_elements, dtype="uint64") # Copy things from structure (type cast) for i in range(3): LE[i] = self.left_edge[i] RE[i] = self.right_edge[i] PER[i] = self.periodicity[i] dds1[i] = self.dds_mi1[i] dds2[i] = self.dds_mi2[i] DW[i] = RE[i] - LE[i] axiter[i][0] = 0 # We always do an offset of 0 axiterv[i][0] = 0.0 cdef np.ndarray[np.uint64_t, ndim=1] morton_indices = np.empty(pos.shape[0], dtype="u8") for p in range(pos.shape[0]): morton_indices[p] = bounded_morton(pos[p, 0], pos[p, 1], pos[p, 2], LE, RE, self.index_order1) # Loop over positions skipping those outside the domain cdef np.ndarray[np.uint64_t, ndim=1, cast=True] sorted_order if hsml is None: # casting to uint64 for compatibility with 32 bits systems # see https://github.com/yt-project/yt/issues/3656 sorted_order = np.argsort(morton_indices).astype(np.uint64, copy=False) else: sorted_order = np.argsort(hsml)[::-1].astype(np.uint64, copy=False) for sorted_ind in range(sorted_order.shape[0]): p = sorted_order[sorted_ind] skip = 0 for i in range(3): axiter[i][1] = 999 if not (LE[i] <= pos[p, i] < RE[i]): skip = 1 break ppos[i] = pos[p,i] if skip == 1: continue # Only look if collision at coarse index mi1 = bounded_morton_split_dds(ppos[0], ppos[1], ppos[2], LE, dds1, mi_split1) if hsml is None: if mask[mi1] < mask_threshold \ or particle_counts[mi1] < count_threshold: continue # Determine sub index within cell of primary index mi2 = bounded_morton_split_relative_dds( ppos[0], ppos[1], ppos[2], LE, dds1, dds2, mi_split2) if refined_count[mi1] == 0: coarse_refined_map[mi1].padWithZeroes(max_mi2_elements) if not coarse_refined_map[mi1].get(mi2): coarse_refined_map[mi1].set(mi2) refined_count[mi1] += 1 else: # only hit if we have smoothing lengths. 
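                # (Aside, a sketch of the two-level key convention used here:
                # a full Morton index mi of order (order1 + order2) splits as
                #
                #     mi1 = mi >> (3 * order2)               # coarse cell
                #     mi2 = mi & ((1 << (3 * order2)) - 1)   # bit within mi1
                #
                # so each colliding coarse cell carries its own bitmap of
                # 2**(3*order2) refined bits.)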
# We have to do essentially the identical process to in the coarse indexing, # except here we need to fill in all the subranges as well as the coarse ranges # Note that we are also doing the null case, where we do no shifting radius = hsml[p] #if mask[mi1] <= 4: # only one thing in this area # continue for i in range(3): if PER[i] and ppos[i] - radius < LE[i]: axiter[i][1] = +1 axiterv[i][1] = DW[i] elif PER[i] and ppos[i] + radius > RE[i]: axiter[i][1] = -1 axiterv[i][1] = -DW[i] for xi in range(2): if axiter[0][xi] == 999: continue s_ppos[0] = ppos[0] + axiterv[0][xi] for yi in range(2): if axiter[1][yi] == 999: continue s_ppos[1] = ppos[1] + axiterv[1][yi] for zi in range(2): if axiter[2][zi] == 999: continue s_ppos[2] = ppos[2] + axiterv[2][zi] # OK, now we compute the left and right edges for this shift. for i in range(3): # casting to int64 is not nice but is so we can have negative values we clip clip_pos_l[i] = fmax(s_ppos[i] - radius, LE[i] + dds1[i]/10) clip_pos_r[i] = fmin(s_ppos[i] + radius, RE[i] - dds1[i]/10) bounded_morton_split_dds(clip_pos_l[0], clip_pos_l[1], clip_pos_l[2], LE, dds1, bounds[0]) bounded_morton_split_dds(clip_pos_r[0], clip_pos_r[1], clip_pos_r[2], LE, dds1, bounds[1]) # We go to the upper bound plus one so that we have *inclusive* loops -- the upper bound # is the cell *index*, so we want to make sure we include that cell. This is also why # we don't need to worry about mi_max being the max index rather than the cell count. # One additional thing to note is that for all of # the *internal* cells, i.e., those that are both # greater than the left edge and less than the # right edge, we are fully enclosed. for xex in range(bounds[0][0], bounds[1][0] + 1): for yex in range(bounds[0][1], bounds[1][1] + 1): for zex in range(bounds[0][2], bounds[1][2] + 1): miex1 = encode_morton_64bit(xex, yex, zex) if mask[miex1] < mask_threshold or \ particle_counts[miex1] < count_threshold: continue # this explicitly requires that it be *between* # them, not overlapping if xex > bounds[0][0] and xex < bounds[1][0] and \ yex > bounds[0][1] and yex < bounds[1][1] and \ zex > bounds[0][2] and zex < bounds[1][2]: fully_enclosed = 1 else: fully_enclosed = 0 # Now we need to fill our sub-range if refined_count[miex1] == 0: coarse_refined_map[miex1].padWithZeroes(max_mi2_elements) elif refined_count[miex1] >= max_mi2_elements: continue if fully_enclosed == 1: nfully_enclosed += 1 coarse_refined_map[miex1].inplace_logicalxor( coarse_refined_map[miex1]) coarse_refined_map[miex1].inplace_logicalnot() refined_count[miex1] = max_mi2_elements continue n_calls += 1 refined_count[miex1] += self.__fill_refined_ranges(s_ppos, radius, LE, RE, dds1, xex, yex, zex, dds2, coarse_refined_map[miex1]) cdef np.uint64_t vec_i cdef bool_array *buf = NULL cdef ewah_word_type w this_collection = BoolArrayCollection() cdef ewah_bool_array *refined_arr = NULL for it1 in coarse_refined_map: mi1 = it1.first refined_arr = &this_collection.ewah_coll[0][mi1] this_collection.ewah_keys[0].set(mi1) this_collection.ewah_refn[0].set(mi1) buf = &it1.second for vec_i in range(buf.sizeInBytes() / sizeof(ewah_word_type)): w = buf.getWord(vec_i) refined_arr.addWord(w) out_collection = BoolArrayCollection() in_collection._logicalor(this_collection, out_collection) return out_collection @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) cdef np.int64_t __fill_refined_ranges(self, np.float64_t s_ppos[3], np.float64_t radius, np.float64_t LE[3], np.float64_t RE[3], 
np.float64_t dds1[3], np.uint64_t xex, np.uint64_t yex, np.uint64_t zex, np.float64_t dds2[3], bool_array &refined_set) except -1: cdef int i cdef np.uint64_t bounds_l[3] cdef np.uint64_t bounds_r[3] cdef np.uint64_t miex2, miex2_min, miex2_max cdef np.float64_t clip_pos_l[3] cdef np.float64_t clip_pos_r[3] cdef np.float64_t cell_edge_l, cell_edge_r cdef np.uint64_t ex1[3] cdef np.uint64_t xiex_min, yiex_min, ziex_min cdef np.uint64_t xiex_max, yiex_max, ziex_max cdef np.uint64_t old_nsub = refined_set.numberOfOnes() ex1[0] = xex ex1[1] = yex ex1[2] = zex # Check a few special cases for i in range(3): # Figure out our bounds inside our coarse cell, in the space of the # full domain cell_edge_l = ex1[i] * dds1[i] + LE[i] cell_edge_r = cell_edge_l + dds1[i] if s_ppos[i] + radius < cell_edge_l or s_ppos[i] - radius > cell_edge_r: return 0 clip_pos_l[i] = fmax(s_ppos[i] - radius, cell_edge_l + dds2[i]/2.0) clip_pos_r[i] = fmin(s_ppos[i] + radius, cell_edge_r - dds2[i]/2.0) miex2_min = bounded_morton_split_relative_dds(clip_pos_l[0], clip_pos_l[1], clip_pos_l[2], LE, dds1, dds2, bounds_l) miex2_max = bounded_morton_split_relative_dds(clip_pos_r[0], clip_pos_r[1], clip_pos_r[2], LE, dds1, dds2, bounds_r) xex_max = self.directional_max2[0] yex_max = self.directional_max2[1] zex_max = self.directional_max2[2] xiex_min = miex2_min & xex_max yiex_min = miex2_min & yex_max ziex_min = miex2_min & zex_max xiex_max = miex2_max & xex_max yiex_max = miex2_max & yex_max ziex_max = miex2_max & zex_max # This could *probably* be sped up by iterating over words. for miex2 in range(miex2_min, miex2_max + 1): #miex2 = encode_morton_64bit(xex2, yex2, zex2) #decode_morton_64bit(miex2, ex2) # Let's check all our cases here if (miex2 & xex_max) < (xiex_min): continue if (miex2 & xex_max) > (xiex_max): continue if (miex2 & yex_max) < (yiex_min): continue if (miex2 & yex_max) > (yiex_max): continue if (miex2 & zex_max) < (ziex_min): continue if (miex2 & zex_max) > (ziex_max): continue refined_set.set(miex2) return refined_set.numberOfOnes() - old_nsub @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) def _set_refined_index_data_file(self, np.ndarray[np.uint64_t, ndim=1] sub_mi1, np.ndarray[np.uint64_t, ndim=1] sub_mi2, np.uint64_t file_id, np.int64_t nsub_mi): return self.__set_refined_index_data_file(sub_mi1, sub_mi2, file_id, nsub_mi) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) cdef void __set_refined_index_data_file(self, np.ndarray[np.uint64_t, ndim=1] sub_mi1, np.ndarray[np.uint64_t, ndim=1] sub_mi2, np.uint64_t file_id, np.int64_t nsub_mi): cdef FileBitmasks bitmasks = self.bitmasks bitmasks._set_refined_index_array(file_id, nsub_mi, sub_mi1, sub_mi2) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) def find_collisions(self, verbose=False): cdef tuple cc, rc cc, rc = self.bitmasks._find_collisions(self.collisions,verbose) return cc, rc @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) def find_collisions_coarse(self, verbose=False, file_list = None): cdef int nc, nm nc, nm = self.bitmasks._find_collisions_coarse(self.collisions, verbose, file_list) return nc, nm @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) def find_uncontaminated(self, np.uint32_t ifile, BoolArrayCollection mask, BoolArrayCollection mask2 = 
None): cdef np.ndarray[np.uint8_t, ndim=1] arr = np.zeros((1 << (self.index_order1 * 3)),'uint8') cdef np.uint8_t[:] arr_view = arr self.bitmasks._select_uncontaminated(ifile, mask, arr_view, mask2) return arr @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) def find_contaminated(self, np.uint32_t ifile, BoolArrayCollection mask, BoolArrayCollection mask2 = None): cdef np.ndarray[np.uint8_t, ndim=1] arr = np.zeros((1 << (self.index_order1 * 3)),'uint8') cdef np.uint8_t[:] arr_view = arr cdef np.ndarray[np.uint8_t, ndim=1] sfiles = np.zeros(self.nfiles,'uint8') cdef np.uint8_t[:] sfiles_view = sfiles self.bitmasks._select_contaminated(ifile, mask, arr_view, sfiles_view, mask2) return arr, np.where(sfiles)[0].astype('uint32') @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) def find_collisions_refined(self, verbose=False): cdef np.int32_t nc, nm nc, nm = self.bitmasks._find_collisions_refined(self.collisions,verbose) return nc, nm def get_bitmasks(self): return self.bitmasks def iseq_bitmask(self, solf): return self.bitmasks._iseq(solf.get_bitmasks()) def save_bitmasks(self, fname, max_hsml): import h5py cdef bytes serial_BAC cdef np.uint64_t ifile with h5py.File(fname, mode="a") as fp: try: grp = fp[str(self.hash_value)] grp.clear() except KeyError: grp = fp.create_group(str(self.hash_value)) grp.attrs["bitmask_version"] = _bitmask_version grp.attrs["nfiles"] = self.nfiles grp.attrs["max_hsml"] = max_hsml # Add some attrs for convenience. They're not read back. grp.attrs["file_hash"] = self.file_hash grp.attrs["left_edge"] = self.left_edge grp.attrs["right_edge"] = self.right_edge grp.attrs["periodicity"] = self.periodicity grp.attrs["index_order1"] = self.index_order1 grp.attrs["index_order2"] = self.index_order2 for ifile in range(self.nfiles): serial_BAC = self.bitmasks._dumps(ifile) grp.create_dataset(f"nfile_{ifile:05}", data=np.void(serial_BAC)) serial_BAC = self.collisions._dumps() grp.create_dataset("collisions", data=np.void(serial_BAC)) def check_bitmasks(self): return self.bitmasks._check() def reset_bitmasks(self): self.bitmasks._reset() def load_bitmasks(self, fname): import h5py cdef bint read_flag = 1 cdef bint irflag cdef np.uint64_t ver cdef bint overwrite = 0 # Verify that file is correct version if not os.path.isfile(fname): raise OSError with h5py.File(fname, mode="r") as fp: try: grp = fp[str(self.hash_value)] except KeyError: raise OSError(f"Index not found in the {fname}") ver = grp.attrs["bitmask_version"] try: max_hsml = grp.attrs["max_hsml"] except KeyError: raise OSError(f"'max_hsml' not found in the {fname}") if ver == self.nfiles and ver != _bitmask_version: overwrite = 1 ver = 0 # Original bitmaps had number of files first if ver != _bitmask_version: raise OSError("The file format of the index has changed since " "this file was created. 
It will be replaced with an " "updated version.") # Read bitmap for each file pb = get_pbar("Loading particle index", self.nfiles) for ifile in range(self.nfiles): pb.update(ifile+1) irflag = self.bitmasks._loads(ifile, grp[f"nfile_{ifile:05}"][...].tobytes()) if irflag == 0: read_flag = 0 pb.finish() # Collisions irflag = self.collisions._loads(grp["collisions"][...].tobytes()) if irflag == 0: read_flag = 0 # Save in correct format if overwrite == 1: self.save_bitmasks(fname, max_hsml) return read_flag, max_hsml def print_info(self): cdef np.uint64_t ifile for ifile in range(self.nfiles): self.bitmasks.print_info(ifile, "File: %03d" % ifile) def count_coarse(self, ifile): r"""Get the number of coarse cells set for a file.""" return self.bitmasks.count_coarse(ifile) def count_refined(self, ifile): r"""Get the number of cells refined for a file.""" return self.bitmasks.count_refined(ifile) def count_total(self, ifile): r"""Get the total number of cells set for a file.""" return self.bitmasks.count_total(ifile) def check(self): cdef np.uint64_t mi1 cdef ewah_bool_array arr_totref, arr_tottwo cdef ewah_bool_array arr, arr_any, arr_two, arr_swap cdef vector[size_t] vec_totref cdef vector[size_t].iterator it_mi1 cdef int nm = 0, nc = 0 cdef np.uint64_t ifile, nbitmasks nbitmasks = len(self.bitmasks) # Locate all indices with second level refinement for ifile in range(self.nfiles): arr = ( self.bitmasks.ewah_refn)[ifile][0] arr_totref.logicalor(arr,arr_totref) # Count collections & second level indices vec_totref = arr_totref.toArray() it_mi1 = vec_totref.begin() while it_mi1 != vec_totref.end(): mi1 = dereference(it_mi1) arr_any.reset() arr_two.reset() for ifile in range(nbitmasks): if self.bitmasks._isref(ifile, mi1) == 1: arr = ( self.bitmasks.ewah_coll)[ifile][0][mi1] arr_any.logicaland(arr, arr_two) # Indices in previous files arr_any.logicalor(arr, arr_swap) # All second level indices arr_any = arr_swap arr_two.logicalor(arr_tottwo,arr_tottwo) nc += arr_tottwo.numberOfOnes() nm += arr_any.numberOfOnes() preincrement(it_mi1) # nc: total number of second level morton indices that are repeated # nm: total number of second level morton indices print("Total of %s / %s collisions (% 3.5f%%)" % (nc, nm, 100.0*float(nc)/nm)) def primary_indices(self): mi = ( self.collisions.ewah_keys)[0].toArray() return np.array(mi,'uint64') def file_ownership_mask(self, fid): cdef BoolArrayCollection out out = self.bitmasks._get_bitmask( fid) return out def finalize(self): return # self.index_octree = ParticleOctreeContainer([1,1,1], # [self.left_edge[0], self.left_edge[1], self.left_edge[2]], # [self.right_edge[0], self.right_edge[1], self.right_edge[2]], # num_zones = 1 # ) # self.index_octree.n_ref = 1 # mi = ( self.collisions.ewah_keys)[0].toArray() # Change from vector to numpy # mi = mi.astype("uint64") # self.index_octree.add(mi, self.index_order1) # self.index_octree.finalize() def get_DLE(self): cdef int i cdef np.ndarray[np.float64_t, ndim=1] DLE DLE = np.zeros(3, dtype='float64') for i in range(3): DLE[i] = self.left_edge[i] return DLE def get_DRE(self): cdef int i cdef np.ndarray[np.float64_t, ndim=1] DRE DRE = np.zeros(3, dtype='float64') for i in range(3): DRE[i] = self.right_edge[i] return DRE @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def get_ghost_zones(self, SelectorObject selector, int ngz, BoolArrayCollection dmask = None, bint coarse_ghosts = False): cdef BoolArrayCollection gmask, gmask2, out cdef np.ndarray[np.uint8_t, ndim=1] periodic = 
selector.get_periodicity() cdef bint periodicity[3] cdef int i for i in range(3): periodicity[i] = periodic[i] if dmask is None: dmask = BoolArrayCollection() gmask2 = BoolArrayCollection() morton_selector = ParticleBitmapSelector(selector,self,ngz=0) morton_selector.fill_masks(dmask, gmask2) gmask = BoolArrayCollection() dmask._get_ghost_zones(ngz, self.index_order1, self.index_order2, periodicity, gmask, coarse_ghosts) _dfiles, gfiles = self.masks_to_files(dmask, gmask) out = BoolArrayCollection() gmask._logicalor(dmask, out) return gfiles, out @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def selector2mask(self, SelectorObject selector): cdef BoolArrayCollection cmask = BoolArrayCollection() cdef ParticleBitmapSelector morton_selector morton_selector = ParticleBitmapSelector(selector,self,ngz=0) morton_selector.fill_masks(cmask) return cmask @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def mask2files(self, BoolArrayCollection cmask): cdef np.ndarray[np.uint32_t, ndim=1] file_idx file_idx = self.mask_to_files(cmask) return file_idx @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def mask2filemasks(self, BoolArrayCollection cmask, np.ndarray[np.uint32_t, ndim=1] file_idx): cdef BoolArrayCollection fmask cdef np.int32_t fid cdef np.ndarray[object, ndim=1] file_masks cdef int i # Get bitmasks for parts of files touching the selector file_masks = np.array([BoolArrayCollection() for i in range(len(file_idx))], dtype="object") for i, (fid, fmask) in enumerate(zip(file_idx,file_masks)): self.bitmasks._logicaland( fid, cmask, fmask) return file_masks @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def filemasks2addfiles(self, np.ndarray[object, ndim=1] file_masks): cdef list addfile_idx addfile_idx = len(file_masks)*[None] for i, fmask in enumerate(file_masks): addfile_idx[i] = self.mask_to_files(fmask).astype('uint32') return addfile_idx @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def identify_file_masks(self, SelectorObject selector): cdef BoolArrayCollection cmask = BoolArrayCollection() cdef BoolArrayCollection fmask cdef np.int32_t fid cdef np.ndarray[object, ndim=1] file_masks cdef np.ndarray[np.uint32_t, ndim=1] file_idx cdef list addfile_idx # Get bitmask for selector cdef ParticleBitmapSelector morton_selector morton_selector = ParticleBitmapSelector(selector, self, ngz=0) morton_selector.fill_masks(cmask) # Get bitmasks for parts of files touching the selector file_idx = self.mask_to_files(cmask) file_masks = np.array([BoolArrayCollection() for i in range(len(file_idx))], dtype="object") addfile_idx = len(file_idx)*[None] for i, (fid, fmask) in enumerate(zip(file_idx,file_masks)): self.bitmasks._logicaland( fid, cmask, fmask) addfile_idx[i] = self.mask_to_files(fmask).astype('uint32') return file_idx.astype('uint32'), file_masks, addfile_idx @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def identify_data_files(self, SelectorObject selector, int ngz = 0): cdef BoolArrayCollection cmask_s = BoolArrayCollection() cdef BoolArrayCollection cmask_g = BoolArrayCollection() # Find mask of selected morton indices cdef ParticleBitmapSelector morton_selector morton_selector = ParticleBitmapSelector(selector, self, ngz=ngz) morton_selector.fill_masks(cmask_s, cmask_g) return self.masks_to_files(cmask_s, cmask_g), (cmask_s, cmask_g) def mask_to_files(self, BoolArrayCollection mm_s): cdef FileBitmasks mm_d = 
self.bitmasks cdef np.uint32_t ifile cdef np.ndarray[np.uint8_t, ndim=1] file_mask_p file_mask_p = np.zeros(self.nfiles, dtype="uint8") # Compare with mask of particles for ifile in range(self.nfiles): # Only continue if the file is not already selected if file_mask_p[ifile] == 0: if mm_d._intersects(ifile, mm_s): file_mask_p[ifile] = 1 cdef np.ndarray[np.int32_t, ndim=1] file_idx_p file_idx_p = np.where(file_mask_p)[0].astype('int32') return file_idx_p.astype('uint32') def masks_to_files(self, BoolArrayCollection mm_s, BoolArrayCollection mm_g): cdef FileBitmasks mm_d = self.bitmasks cdef np.uint32_t ifile cdef np.ndarray[np.uint8_t, ndim=1] file_mask_p cdef np.ndarray[np.uint8_t, ndim=1] file_mask_g file_mask_p = np.zeros(self.nfiles, dtype="uint8") file_mask_g = np.zeros(self.nfiles, dtype="uint8") # Compare with mask of particles for ifile in range(self.nfiles): # Only continue if the file is not already selected if file_mask_p[ifile] == 0: if mm_d._intersects(ifile, mm_s): file_mask_p[ifile] = 1 file_mask_g[ifile] = 0 # No intersection elif mm_d._intersects(ifile, mm_g): file_mask_g[ifile] = 1 cdef np.ndarray[np.int32_t, ndim=1] file_idx_p cdef np.ndarray[np.int32_t, ndim=1] file_idx_g file_idx_p = np.where(file_mask_p)[0].astype('int32') file_idx_g = np.where(file_mask_g)[0].astype('int32') return file_idx_p.astype('uint32'), file_idx_g.astype('uint32') @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def construct_octree(self, index, io_handler, data_files, num_zones, BoolArrayCollection selector_mask, BoolArrayCollection base_mask = None): cdef np.uint64_t total_pcount cdef np.uint64_t i, j, k cdef int ind[3] cdef np.uint64_t ind64[3] cdef ParticleBitmapOctreeContainer octree cdef np.uint64_t mi, mi_root cdef np.ndarray pos cdef np.ndarray[np.float32_t, ndim=2] pos32 cdef np.ndarray[np.float64_t, ndim=2] pos64 cdef np.float64_t ppos[3] cdef np.float64_t DLE[3] cdef np.float64_t DRE[3] cdef int bitsize = 0 for i in range(3): DLE[i] = self.left_edge[i] DRE[i] = self.right_edge[i] cdef np.ndarray[np.uint64_t, ndim=1] morton_ind # Determine cells that need to be added to the octree cdef np.uint64_t nroot = selector_mask._count_total() # Now we can actually create a sparse octree. 
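        # In outline, the construction below: allocate a sparse forest with
        # one root octree node per selected coarse cell, gather the Morton
        # index of every particle in the touched files, sort those indices,
        # and let octree.add() refine any oct where more than n_ref indices
        # share its prefix.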
octree = ParticleBitmapOctreeContainer( (self.dims[0], self.dims[1], self.dims[2]), (self.left_edge[0], self.left_edge[1], self.left_edge[2]), (self.right_edge[0], self.right_edge[1], self.right_edge[2]), nroot, num_zones) octree.n_ref = index.dataset.n_ref octree.level_offset = self.index_order1 octree.allocate_domains() # Add roots based on the mask cdef np.uint64_t croot = 0 cdef ewah_bool_array *ewah_slct = selector_mask.ewah_keys cdef ewah_bool_array *ewah_base if base_mask is not None: ewah_base = base_mask.ewah_keys else: ewah_base = NULL cdef ewah_bool_iterator *iter_set = new ewah_bool_iterator(ewah_slct[0].begin()) cdef ewah_bool_iterator *iter_end = new ewah_bool_iterator(ewah_slct[0].end()) cdef np.ndarray[np.uint8_t, ndim=1] slct_arr slct_arr = np.zeros((1 << (self.index_order1 * 3)),'uint8') while iter_set[0] != iter_end[0]: mi = dereference(iter_set[0]) if ewah_base != NULL and ewah_base[0].get(mi) == 0: octree._index_base_roots[croot] = 0 slct_arr[mi] = 2 else: slct_arr[mi] = 1 decode_morton_64bit(mi, ind64) for j in range(3): ind[j] = ind64[j] octree.next_root(1, ind) croot += 1 preincrement(iter_set[0]) assert(croot == nroot) if ewah_base != NULL: assert(np.sum(octree._index_base_roots) == ewah_base[0].numberOfOnes()) # Get morton indices for all particles in this file and those # contaminating cells it has majority control of. files_touched = data_files #+ buffer_files # datafile object from ID goes here total_pcount = 0 for data_file in files_touched: total_pcount += sum(data_file.total_particles.values()) morton_ind = np.empty(total_pcount, dtype='uint64') total_pcount = 0 cdef np.uint64_t base_pcount = 0 for data_file in files_touched: # We now get our particle positions for pos in io_handler._yield_coordinates(data_file): pos32 = pos64 = None bitsize = 0 if pos.dtype == np.float32: pos32 = pos bitsize = 32 for j in range(pos.shape[0]): for k in range(3): ppos[k] = pos32[j,k] mi = bounded_morton(ppos[0], ppos[1], ppos[2], DLE, DRE, ORDER_MAX) mi_root = mi >> (3*(ORDER_MAX-self.index_order1)) if slct_arr[mi_root] > 0: morton_ind[total_pcount] = mi total_pcount += 1 if slct_arr[mi_root] == 1: base_pcount += 1 elif pos.dtype == np.float64: pos64 = pos bitsize = 64 for j in range(pos.shape[0]): for k in range(3): ppos[k] = pos64[j,k] mi = bounded_morton(ppos[0], ppos[1], ppos[2], DLE, DRE, ORDER_MAX) mi_root = mi >> (3*(ORDER_MAX-self.index_order1)) if slct_arr[mi_root] > 0: morton_ind[total_pcount] = mi total_pcount += 1 if slct_arr[mi_root] == 1: base_pcount += 1 else: raise RuntimeError morton_ind = morton_ind[:total_pcount] morton_ind.sort() octree.add(morton_ind, self.index_order1) octree.finalize() return octree cdef class ParticleBitmapSelector: cdef SelectorObject selector cdef ParticleBitmap bitmap cdef np.uint32_t ngz cdef np.float64_t DLE[3] cdef np.float64_t DRE[3] cdef bint periodicity[3] cdef np.uint32_t order1 cdef np.uint32_t order2 cdef np.uint64_t max_index1 cdef np.uint64_t max_index2 cdef np.uint64_t s1 cdef np.uint64_t s2 cdef void* pointers[11] cdef np.uint64_t[:,:] ind1_n cdef np.uint64_t[:,:] ind2_n cdef np.uint32_t[:,:] neighbors cdef np.uint64_t[:] neighbor_list1 cdef np.uint64_t[:] neighbor_list2 cdef np.uint32_t nfiles cdef np.uint8_t[:] file_mask_p cdef np.uint8_t[:] file_mask_g # Uncompressed boolean cdef np.uint8_t[:] refined_select_bool cdef np.uint8_t[:] refined_ghosts_bool cdef np.uint8_t[:] coarse_select_bool cdef np.uint8_t[:] coarse_ghosts_bool cdef SparseUnorderedRefinedBitmask refined_ghosts_list cdef BoolArrayColl select_ewah cdef 
BoolArrayColl ghosts_ewah def __cinit__(self, selector, bitmap, ngz=0): cdef int i cdef np.ndarray[np.uint8_t, ndim=1] periodicity = np.zeros(3, dtype='uint8') cdef np.ndarray[np.float64_t, ndim=1] DLE = np.zeros(3, dtype='float64') cdef np.ndarray[np.float64_t, ndim=1] DRE = np.zeros(3, dtype='float64') self.selector = selector self.bitmap = bitmap self.ngz = ngz # Things from the bitmap & selector periodicity = selector.get_periodicity() DLE = bitmap.get_DLE() DRE = bitmap.get_DRE() for i in range(3): self.DLE[i] = DLE[i] self.DRE[i] = DRE[i] self.periodicity[i] = periodicity[i] self.order1 = bitmap.index_order1 self.order2 = bitmap.index_order2 self.nfiles = bitmap.nfiles self.max_index1 = (1 << self.order1) self.max_index2 = (1 << self.order2) self.s1 = (1 << (self.order1*3)) self.s2 = (1 << (self.order2*3)) self.neighbors = np.zeros((2*ngz+1, 3), dtype='uint32') self.ind1_n = np.zeros((2*ngz+1, 3), dtype='uint64') self.ind2_n = np.zeros((2*ngz+1, 3), dtype='uint64') self.neighbor_list1 = np.zeros((2*ngz+1)**3, dtype='uint64') self.neighbor_list2 = np.zeros((2*ngz+1)**3, dtype='uint64') self.file_mask_p = np.zeros(bitmap.nfiles, dtype='uint8') self.file_mask_g = np.zeros(bitmap.nfiles, dtype='uint8') self.refined_select_bool = np.zeros(self.s2, 'uint8') self.refined_ghosts_bool = np.zeros(self.s2, 'uint8') self.coarse_select_bool = np.zeros(self.s1, 'uint8') self.coarse_ghosts_bool = np.zeros(self.s1, 'uint8') self.refined_ghosts_list = SparseUnorderedRefinedBitmask() self.select_ewah = BoolArrayColl(self.s1, self.s2) self.ghosts_ewah = BoolArrayColl(self.s1, self.s2) def fill_masks(self, BoolArrayCollection mm_s, BoolArrayCollection mm_g = None): # Normal variables cdef int i cdef np.int32_t level = 0 cdef np.uint64_t mi1 mi1 = ~(0) cdef np.float64_t pos[3] cdef np.float64_t dds[3] cdef np.uint64_t cur_ind[3] for i in range(3): cur_ind[i] = 0 pos[i] = self.DLE[i] dds[i] = self.DRE[i] - self.DLE[i] if mm_g is None: mm_g = BoolArrayCollection() # Uncompressed version cdef BoolArrayColl mm_s0 cdef BoolArrayColl mm_g0 mm_s0 = BoolArrayColl(self.s1, self.s2) mm_g0 = BoolArrayColl(self.s1, self.s2) # Recurse cdef np.float64_t rpos[3] for i in range(3): rpos[i] = self.DRE[i] - self.bitmap.dds_mi2[i]/2.0 sbbox = self.selector.select_bbox_edge(pos, rpos) if sbbox == 1: for mi1 in range(self.s1): mm_s0._set_coarse(mi1) mm_s0._compress(mm_s) return else: self.recursive_morton_mask(level, pos, dds, mi1, cur_ind) # Set coarse morton indices in order self.set_coarse_bool(mm_s0, mm_g0) self.set_refined_list(mm_s0, mm_g0) self.set_refined_bool(mm_s0, mm_g0) # Compress mm_s0._compress(mm_s) mm_g0._compress(mm_g) def find_files(self, np.ndarray[np.uint8_t, ndim=1] file_mask_p, np.ndarray[np.uint8_t, ndim=1] file_mask_g): cdef np.uint64_t i cdef np.int32_t level = 0 cdef np.uint64_t mi1 mi1 = ~(0) cdef np.float64_t pos[3] cdef np.float64_t dds[3] for i in range(3): pos[i] = self.DLE[i] dds[i] = self.DRE[i] - self.DLE[i] # Fill with input for i in range(self.nfiles): self.file_mask_p[i] = file_mask_p[i] self.file_mask_g[i] = file_mask_g[i] # Recurse self.recursive_morton_files(level, pos, dds, mi1) # Fill with results for i in range(self.nfiles): file_mask_p[i] = self.file_mask_p[i] if file_mask_p[i]: file_mask_g[i] = 0 else: file_mask_g[i] = self.file_mask_g[i] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) cdef bint is_refined(self, np.uint64_t mi1): return self.bitmap.collisions._isref(mi1) @cython.boundscheck(False) 
@cython.wraparound(False) @cython.cdivision(True) cdef bint is_refined_files(self, np.uint64_t mi1): cdef np.uint64_t i if self.bitmap.collisions._isref(mi1): # Don't refine if files all selected already for i in range(self.nfiles): if self.file_mask_p[i] == 0: if self.bitmap.bitmasks._isref(i, mi1) == 1: return 1 return 0 else: return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) cdef void add_coarse(self, np.uint64_t mi1, int bbox = 2): self.coarse_select_bool[mi1] = 1 # Neighbors if (self.ngz > 0) and (bbox == 2): if not self.is_refined(mi1): self.add_neighbors_coarse(mi1) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) cdef void set_files_coarse(self, np.uint64_t mi1): cdef np.uint64_t i cdef bint flag_ref = self.is_refined(mi1) # Flag files at coarse level if flag_ref == 0: for i in range(self.nfiles): if self.file_mask_p[i] == 0: if self.bitmap.bitmasks._get_coarse(i, mi1) == 1: self.file_mask_p[i] = 1 # Neighbors if (flag_ref == 0) and (self.ngz > 0): self.set_files_neighbors_coarse(mi1) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) cdef int add_refined(self, np.uint64_t mi1, np.uint64_t mi2, int bbox = 2) except -1: self.refined_select_bool[mi2] = 1 # Neighbors if (self.ngz > 0) and (bbox == 2): self.add_neighbors_refined(mi1, mi2) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) cdef void set_files_refined(self, np.uint64_t mi1, np.uint64_t mi2): cdef np.uint64_t i # Flag files for i in range(self.nfiles): if self.file_mask_p[i] == 0: if self.bitmap.bitmasks._get(i, mi1, mi2): self.file_mask_p[i] = 1 # Neighbors if (self.ngz > 0): self.set_files_neighbors_refined(mi1, mi2) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) cdef void add_neighbors_coarse(self, np.uint64_t mi1): cdef np.uint64_t m cdef np.uint32_t ntot cdef np.uint64_t mi1_n ntot = morton_neighbors_coarse(mi1, self.max_index1, self.periodicity, self.ngz, self.neighbors, self.ind1_n, self.neighbor_list1) for m in range(ntot): mi1_n = self.neighbor_list1[m] self.coarse_ghosts_bool[mi1_n] = 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) cdef void set_files_neighbors_coarse(self, np.uint64_t mi1): cdef np.uint64_t i, m cdef np.uint32_t ntot cdef np.uint64_t mi1_n ntot = morton_neighbors_coarse(mi1, self.max_index1, self.periodicity, self.ngz, self.neighbors, self.ind1_n, self.neighbor_list1) for m in range(ntot): mi1_n = self.neighbor_list1[m] for i in range(self.nfiles): if self.file_mask_g[i] == 0: if self.bitmap.bitmasks._get_coarse(i, mi1_n): self.file_mask_g[i] = 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) cdef void add_neighbors_refined(self, np.uint64_t mi1, np.uint64_t mi2): cdef int m cdef np.uint32_t ntot cdef np.uint64_t mi1_n, mi2_n ntot = morton_neighbors_refined(mi1, mi2, self.max_index1, self.max_index2, self.periodicity, self.ngz, self.neighbors, self.ind1_n, self.ind2_n, self.neighbor_list1, self.neighbor_list2) for m in range(ntot): mi1_n = self.neighbor_list1[m] mi2_n = self.neighbor_list2[m] self.coarse_ghosts_bool[mi1_n] = 1 # Ghost cells are added at the refined level regardless of if the # coarse cell containing it is refined in the selector. 
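            # Bookkeeping note: a neighbor that stays within the current
            # coarse cell can be recorded in the dense per-cell bool array,
            # while one that spills into a *different* coarse cell is pushed
            # onto the sparse (mi1, mi2) list, since no dense scratch array
            # exists for that cell.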
if mi1_n == mi1: self.refined_ghosts_bool[mi2_n] = 1 else: self.refined_ghosts_list._set(mi1_n, mi2_n) # alternative implementation by Meagan Lang # see ed95b1ac2f7105092b1116f9c76568ae27024751 # Ghost cells are only added at the refined level if the coarse # index for the ghost cell is refined in the selector. #if mi1_n == mi1: # self.refined_ghosts_bool[mi2_n] = 1 #elif self.is_refined(mi1_n) == 1: # self.refined_ghosts_list._set(mi1_n, mi2_n) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) cdef void set_files_neighbors_refined(self, np.uint64_t mi1, np.uint64_t mi2): cdef int i, m cdef np.uint32_t ntot cdef np.uint64_t mi1_n, mi2_n ntot = morton_neighbors_refined(mi1, mi2, self.max_index1, self.max_index2, self.periodicity, self.ngz, self.neighbors, self.ind1_n, self.ind2_n, self.neighbor_list1, self.neighbor_list2) for m in range(ntot): mi1_n = self.neighbor_list1[m] mi2_n = self.neighbor_list2[m] if self.is_refined(mi1_n) == 1: for i in range(self.nfiles): if self.file_mask_g[i] == 0: if self.bitmap.bitmasks._get(i, mi1_n, mi2_n) == 1: self.file_mask_g[i] = 1 else: for i in range(self.nfiles): if self.file_mask_g[i] == 0: if self.bitmap.bitmasks._get_coarse(i, mi1_n) == 1: self.file_mask_g[i] = 1 break # If not refined, only one file should be selected @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void set_coarse_list(self, BoolArrayColl mm_s, BoolArrayColl mm_g): self.coarse_select_list._fill_bool(mm_s) self.coarse_ghosts_list._fill_bool(mm_g) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void set_refined_list(self, BoolArrayColl mm_s, BoolArrayColl mm_g): self.refined_ghosts_list._fill_bool(mm_g) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void set_coarse_bool(self, BoolArrayColl mm_s, BoolArrayColl mm_g): cdef np.uint64_t mi1 mm_s._set_coarse_array_ptr(&self.coarse_select_bool[0]) for mi1 in range(self.s1): self.coarse_select_bool[mi1] = 0 mm_g._set_coarse_array_ptr(&self.coarse_ghosts_bool[0]) for mi1 in range(self.s1): self.coarse_ghosts_bool[mi1] = 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void set_refined_bool(self, BoolArrayColl mm_s, BoolArrayColl mm_g): mm_s._append(self.select_ewah) mm_g._append(self.ghosts_ewah) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) cdef void push_refined_bool(self, np.uint64_t mi1): cdef np.uint64_t mi2 self.select_ewah._set_refined_array_ptr(mi1, &self.refined_select_bool[0]) for mi2 in range(self.s2): self.refined_select_bool[mi2] = 0 self.ghosts_ewah._set_refined_array_ptr(mi1, &self.refined_ghosts_bool[0]) for mi2 in range(self.s2): self.refined_ghosts_bool[mi2] = 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void add_ghost_zones(self, BoolArrayColl mm_s, BoolArrayColl mm_g): cdef np.uint64_t mi1, mi2 # Get ghost zones, unordered for mi1 in range(self.s1): if mm_s._get_coarse(mi1): if self.is_refined(mi1): for mi2 in range(self.s2): if mm_s._get(mi1, mi2): self.add_neighbors_refined(mi1, mi2) # self.push_refined_bool(mi1) self.ghosts_ewah._set_refined_array_ptr(mi1, &self.refined_ghosts_bool[0]) for mi2 in range(self.s2): self.refined_ghosts_bool[mi2] = 0 else: self.add_neighbors_coarse(mi1) # Add ghost zones to bool array in order mm_g._set_coarse_array_ptr(&self.coarse_ghosts_bool[0]) for mi1 in range(self.s1): 
self.coarse_ghosts_bool[mi1] = 0 self.refined_ghosts_list._fill_bool(mm_g) mm_g._append(self.ghosts_ewah) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int fill_subcells_mi1(self, np.uint64_t nlevel, np.uint64_t ind1[3]) except -1: cdef np.uint64_t imi, fmi cdef np.uint64_t mi cdef np.uint64_t shift_by = 3 * (self.bitmap.index_order1 - nlevel) imi = encode_morton_64bit(ind1[0], ind1[1], ind1[2]) << shift_by fmi = imi + (1 << shift_by) for mi in range(imi, fmi): self.add_coarse(mi, 1) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int fill_subcells_mi2(self, np.uint64_t nlevel, np.uint64_t mi1, np.uint64_t ind2[3]) except -1: cdef np.uint64_t imi, fmi cdef np.uint64_t shift_by = 3 * ((self.bitmap.index_order2 + self.bitmap.index_order1) - nlevel) imi = encode_morton_64bit(ind2[0], ind2[1], ind2[2]) << shift_by fmi = imi + (1 << shift_by) for mi2 in range(imi, fmi): self.add_refined(mi1, mi2, 1) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int recursive_morton_mask( self, np.int32_t level, np.float64_t pos[3], np.float64_t dds[3], np.uint64_t mi1, np.uint64_t cur_ind[3]) except -1: cdef np.uint64_t mi2 cdef np.float64_t npos[3] cdef np.float64_t rpos[3] cdef np.float64_t ndds[3] cdef np.uint64_t nlevel cdef np.uint64_t ncur_ind[3] cdef np.uint64_t* zeros = [0, 0, 0] cdef int i, j, k, sbbox PyErr_CheckSignals() for i in range(3): ndds[i] = dds[i]/2 nlevel = level + 1 # Loop over octs for i in range(2): npos[0] = pos[0] + i*ndds[0] rpos[0] = npos[0] + ndds[0] ncur_ind[0] = (cur_ind[0] << 1) + i for j in range(2): npos[1] = pos[1] + j*ndds[1] rpos[1] = npos[1] + ndds[1] ncur_ind[1] = (cur_ind[1] << 1) + j for k in range(2): npos[2] = pos[2] + k*ndds[2] rpos[2] = npos[2] + ndds[2] ncur_ind[2] = (cur_ind[2] << 1) + k # Only recurse into selected cells sbbox = self.selector.select_bbox_edge(npos, rpos) if sbbox == 0: continue if nlevel < self.order1: if sbbox == 1: self.fill_subcells_mi1(nlevel, ncur_ind) else: self.recursive_morton_mask( nlevel, npos, ndds, mi1, ncur_ind) elif nlevel == self.order1: mi1 = encode_morton_64bit( ncur_ind[0], ncur_ind[1], ncur_ind[2]) if sbbox == 2: # an edge cell if self.is_refined(mi1) == 1: # note we pass zeros here in the last argument # this is because we now need to generate # *refined* indices above order1 so we need to # start a new running count of refined indices. 
# # note that recursive_morton_mask does not # mutate the last argument (a new index is # calculated in each stack frame) so this is # safe self.recursive_morton_mask( nlevel, npos, ndds, mi1, zeros) self.add_coarse(mi1, sbbox) self.push_refined_bool(mi1) elif nlevel < (self.order1 + self.order2): if sbbox == 1: self.fill_subcells_mi2(nlevel, mi1, ncur_ind) else: self.recursive_morton_mask( nlevel, npos, ndds, mi1, ncur_ind) elif nlevel == (self.order1 + self.order2): mi2 = encode_morton_64bit( ncur_ind[0], ncur_ind[1], ncur_ind[2]) self.add_refined(mi1, mi2, sbbox) return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void recursive_morton_files(self, np.int32_t level, np.float64_t pos[3], np.float64_t dds[3], np.uint64_t mi1): cdef np.uint64_t mi2 cdef np.float64_t npos[3] cdef np.float64_t rpos[3] cdef np.float64_t ndds[3] cdef np.uint64_t nlevel cdef np.float64_t DLE[3] cdef np.uint64_t ind1[3] cdef int i, j, k, m for i in range(3): ndds[i] = dds[i]/2 nlevel = level + 1 # Loop over octs for i in range(2): npos[0] = pos[0] + i*ndds[0] rpos[0] = npos[0] + ndds[0] for j in range(2): npos[1] = pos[1] + j*ndds[1] rpos[1] = npos[1] + ndds[1] for k in range(2): npos[2] = pos[2] + k*ndds[2] rpos[2] = npos[2] + ndds[2] # Only recurse into selected cells if not self.selector.select_bbox(npos, rpos): continue if nlevel < self.order1: self.recursive_morton_files(nlevel, npos, ndds, mi1) elif nlevel == self.order1: mi1 = bounded_morton_dds(npos[0], npos[1], npos[2], self.DLE, ndds) if self.is_refined_files(mi1): self.recursive_morton_files(nlevel, npos, ndds, mi1) self.set_files_coarse(mi1) elif nlevel < (self.order1 + self.order2): self.recursive_morton_files(nlevel, npos, ndds, mi1) elif nlevel == (self.order1 + self.order2): decode_morton_64bit(mi1,ind1) for m in range(3): DLE[m] = self.DLE[m] + ndds[m]*ind1[m]*self.max_index2 mi2 = bounded_morton_dds(npos[0], npos[1], npos[2], DLE, ndds) self.set_files_refined(mi1,mi2) cdef class ParticleBitmapOctreeContainer(SparseOctreeContainer): cdef Oct** oct_list cdef public int n_ref cdef int loaded # Loaded with load_octree? 
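    # The _ptr_* members below are the raw malloc'd buffers; the typed
    # memoryviews declared alongside them are views onto the same memory,
    # so the two must stay in sync and the buffers be freed exactly once
    # (see __dealloc__).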
cdef np.uint8_t* _ptr_index_base_roots cdef np.uint8_t* _ptr_index_base_octs cdef np.uint64_t* _ptr_octs_per_root cdef public np.uint8_t[:] _index_base_roots cdef public np.uint8_t[:] _index_base_octs cdef np.uint64_t[:] _octs_per_root cdef public int overlap_cells def __init__(self, domain_dimensions, domain_left_edge, domain_right_edge, int num_root, num_zones = 2): super(ParticleBitmapOctreeContainer, self).__init__( domain_dimensions, domain_left_edge, domain_right_edge, num_zones) self.loaded = 0 self.fill_style = "o" self.partial_coverage = 2 self.overlap_cells = 0 # Now the overrides self.max_level = -1 self.max_root = num_root self.root_nodes = malloc(sizeof(OctKey) * num_root) self._ptr_index_base_roots = malloc(sizeof(np.uint8_t) * num_root) self._ptr_octs_per_root = malloc(sizeof(np.uint64_t) * num_root) for i in range(num_root): self.root_nodes[i].key = -1 self.root_nodes[i].node = NULL self._ptr_index_base_roots[i] = 1 self._ptr_octs_per_root[i] = 0 self._index_base_roots = self._ptr_index_base_roots self._octs_per_root = self._ptr_octs_per_root def allocate_domains(self, counts = None): if counts is None: counts = [self.max_root] OctreeContainer.allocate_domains(self, counts) def finalize(self): # Assign domain ind cdef SelectorObject selector = AlwaysSelector(None) selector.overlap_cells = self.overlap_cells cdef oct_visitors.AssignDomainInd visitor visitor = oct_visitors.AssignDomainInd(self) self.visit_all_octs(selector, visitor) assert ((visitor.global_index+1)*visitor.nz == visitor.index) # Copy indexes self._ptr_index_base_octs = malloc(sizeof(np.uint8_t)*self.nocts) self._index_base_octs = self._ptr_index_base_octs cdef np.int64_t nprev_octs = 0 cdef int i for i in range(self.num_root): self._index_base_octs[nprev_octs:(nprev_octs+self._octs_per_root[i])] = self._index_base_roots[i] nprev_octs += self._octs_per_root[i] cdef visit_assign(self, Oct *o, np.int64_t *lpos, int level, int *max_level, np.int64_t index_root): cdef int i, j, k if o.children == NULL: self.oct_list[lpos[0]] = o self._index_base_octs[lpos[0]] = self._index_base_roots[index_root] lpos[0] += 1 max_level[0] = imax(max_level[0], level) for i in range(2): for j in range(2): for k in range(2): if o.children != NULL \ and o.children[cind(i,j,k)] != NULL: self.visit_assign(o.children[cind(i,j,k)], lpos, level + 1, max_level, index_root) return cdef Oct* allocate_oct(self): #Allocate the memory, set to NULL or -1 #We reserve space for n_ref particles, but keep #track of how many are used with np initially 0 self.nocts += 1 cdef Oct *my_oct = malloc(sizeof(Oct)) my_oct.domain = -1 my_oct.file_ind = 0 my_oct.domain_ind = self.nocts - 1 my_oct.children = NULL return my_oct def get_index_base_octs(self, np.int64_t[:] domain_ind): cdef np.int64_t ndst = np.max(domain_ind) + 1 ind = np.zeros(ndst, 'int64') - 1 self._get_index_base_octs(ind, domain_ind) return ind[ind >= 0] cdef void _get_index_base_octs(self, np.int64_t[:] ind, np.int64_t[:] domain_ind): cdef SelectorObject selector = AlwaysSelector(None) selector.overlap_cells = self.overlap_cells cdef oct_visitors.IndexMaskMapOcts visitor visitor = oct_visitors.IndexMaskMapOcts(self) visitor.oct_mask = self._index_base_octs visitor.oct_index = ind visitor.map_domain_ind = domain_ind self.visit_all_octs(selector, visitor) def __dealloc__(self): #Call the freemem ops on every ocy #of the root mesh recursively cdef int i if self.root_nodes== NULL: return if self.loaded == 0: for i in range(self.max_root): if self.root_nodes[i].node == NULL: continue 
                self.visit_free(self.root_nodes[i].node, 0)
            self.root_nodes = NULL
        free(self.oct_list)
        free(self._ptr_index_base_roots)
        free(self._ptr_index_base_octs)
        free(self._ptr_octs_per_root)
        self.oct_list = NULL

    cdef void visit_free(self, Oct *o, int free_this):
        # Free the memory for this oct recursively
        cdef int i, j, k
        for i in range(2):
            for j in range(2):
                for k in range(2):
                    if o.children != NULL \
                       and o.children[cind(i,j,k)] != NULL:
                        self.visit_free(o.children[cind(i,j,k)], 1)
        if o.children != NULL:
            free(o.children)
        if free_this == 1:
            free(o)

    @cython.boundscheck(False)
    @cython.wraparound(False)
    @cython.cdivision(True)
    cdef void recursive_add(self, Oct *o, np.ndarray[np.uint64_t, ndim=1] indices,
                            int level, int *max_level, int domain_id, int *count):
        cdef np.int64_t no = indices.shape[0], beg, end, nind
        cdef np.int64_t index
        cdef int i, j, k
        cdef int ind[3]
        cdef Oct *noct
        beg = end = 0
        if level > max_level[0]:
            max_level[0] = level
        # Initialize children
        if o.children == NULL:
            o.children = <Oct **> malloc(sizeof(Oct *)*8)
            for i in range(2):
                for j in range(2):
                    for k in range(2):
                        o.children[cind(i,j,k)] = NULL
                        # noct = self.allocate_oct()
                        # noct.domain = o.domain
                        # noct.file_ind = 0
                        # o.children[cind(i,j,k)] = noct
        # Loop through sets of particles with matching prefix at this level
        while end < no:
            beg = end
            index = (indices[beg] >> ((ORDER_MAX - level)*3))
            while (end < no) and (index == (indices[end] >> ((ORDER_MAX - level)*3))):
                end += 1
            nind = (end - beg)
            # Add oct
            for i in range(3):
                ind[i] = ((index >> (2 - i)) & 1)
            # noct = o.children[cind(ind[0],ind[1],ind[2])]
            if o.children[cind(ind[0],ind[1],ind[2])] != NULL:
                raise Exception('Child was already initialized...')
            noct = self.allocate_oct()
            noct.domain = o.domain
            o.children[cind(ind[0],ind[1],ind[2])] = noct
            # Don't add it to the list if it will be refined
            if nind > self.n_ref and level < ORDER_MAX:
                self.nocts -= 1
                noct.domain_ind = -1  # overwritten by finalize
            else:
                count[0] += 1
            noct.file_ind = o.file_ind
            # noct.file_ind = nind
            # o.file_ind = self.n_ref + 1
            # Refine oct or add its children
            if nind > self.n_ref and level < ORDER_MAX:
                self.recursive_add(noct, indices[beg:end], level+1,
                                   max_level, domain_id, count)

    @cython.boundscheck(False)
    @cython.wraparound(False)
    @cython.cdivision(True)
    def add(self, np.ndarray[np.uint64_t, ndim=1] indices,
            np.uint64_t order1, int domain_id = -1):
        # Add this particle to the root oct
        # Then if that oct has children, add it to them recursively
        # If the child needs to be refined because of max particles, do so
        cdef Oct *root = NULL
        cdef np.int64_t no = indices.shape[0], beg, end, index
        cdef int i
        cdef int ind[3]
        cdef np.uint64_t ind64[3]
        cdef int max_level = self.max_level
        # Note what we're doing here: we have decided the root will always be
        # zero, since we're in a forest of octrees, where the root_mesh node is
        # the level 0.  This means our morton indices should be made with
        # respect to that, which means we need to keep a few different arrays
        # of them.
cdef np.int64_t index_root = 0 cdef int root_count beg = end = 0 self._octs_per_root[:] = 1 # Roots count regardless while end < no: # Determine number of octs with this prefix beg = end index = (indices[beg] >> ((ORDER_MAX - self.level_offset)*3)) while (end < no) and (index == (indices[end] >> ((ORDER_MAX - self.level_offset)*3))): end += 1 # Find root for prefix decode_morton_64bit(index, ind64) for i in range(3): ind[i] = ind64[i] while (index_root < self.num_root) and \ (self.ipos_to_key(ind) != self.root_nodes[index_root].key): index_root += 1 if index_root >= self.num_root: raise Exception('No root found for {},{},{}'.format(ind[0],ind[1],ind[2])) root = self.root_nodes[index_root].node # self.get_root(ind, &root) # if root == NULL: # raise Exception('No root found for {},{},{}'.format(ind[0],ind[1],ind[2])) root.file_ind = index_root # Refine root as necessary if (end - beg) > self.n_ref: root_count = 0 self.nocts -= 1 self.recursive_add(root, indices[beg:end], self.level_offset+1, &max_level, domain_id, &root_count) self._octs_per_root[index_root] = root_count self.max_level = max_level assert(self.nocts == np.sum(self._octs_per_root)) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef Oct *refine_oct(self, Oct *o, np.uint64_t index, int level): #Allocate and initialize child octs #Attach particles to child octs #Remove particles from this oct entirely cdef int i, j, k cdef int ind[3] cdef Oct *noct # Initialize empty children if o.children == NULL: o.children = malloc(sizeof(Oct *)*8) # This version can be used to just add the child containing the index # for i in range(2): # for j in range(2): # for k in range(2): # o.children[cind(i,j,k)] = NULL # # Only allocate and count the indexed oct # for i in range(3): # ind[i] = (index >> ((ORDER_MAX - level)*3 + (2 - i))) & 1 # noct = self.allocate_oct() # noct.domain = o.domain # noct.file_ind = 0 # o.children[cind(ind[0],ind[1],ind[2])] = noct # o.file_ind = self.n_ref + 1 for i in range(2): for j in range(2): for k in range(2): noct = self.allocate_oct() noct.domain = o.domain noct.file_ind = 0 o.children[cind(i,j,k)] = noct o.file_ind = self.n_ref + 1 for i in range(3): ind[i] = (index >> ((ORDER_MAX - level)*3 + (2 - i))) & 1 noct = o.children[cind(ind[0],ind[1],ind[2])] return noct cdef void filter_particles(self, Oct *o, np.uint64_t *data, np.int64_t p, int level): # Now we look at the last nref particles to decide where they go. cdef int n = imin(p, self.n_ref) cdef np.uint64_t *arr = data + imax(p - self.n_ref, 0) # Now we figure out our prefix, which is the oct address at this level. # As long as we're actually in Morton order, we do not need to worry # about *any* of the other children of the oct. 
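        # A small worked illustration of the prefix test below (with a
        # hypothetical ORDER_MAX of 3, chosen only to keep the bit strings
        # short):
        #
        #     a = 0b101011000      # Morton index of particle A
        #     b = 0b101011111      # Morton index of particle B
        #     shift = (3 - 1) * 3  # compare at level 1
        #     (a >> shift) == (b >> shift)  # True: both top octants are 0b101
        #
        # Shifting away the low 3*(ORDER_MAX - level) bits leaves only the
        # oct address at `level`, which is exactly how prefix1 and prefix2
        # are formed here.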
prefix1 = data[p] >> (ORDER_MAX - level)*3 for i in range(n): prefix2 = arr[i] >> (ORDER_MAX - level)*3 if (prefix1 == prefix2): o.file_ind += 1 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/particle_smooth.pxd0000644000175100001770000000613514714401662017772 0ustar00runnerdocker""" Particle Deposition onto Octs """ cimport numpy as np import numpy as np cimport cython from libc.math cimport sqrt from libc.stdlib cimport free, malloc, qsort from yt.utilities.lib.distance_queue cimport ( DistanceQueue, Neighbor_compare, NeighborList, r2dist, ) from yt.utilities.lib.fp_utils cimport * from .oct_container cimport Oct, OctreeContainer from .particle_deposit cimport get_kernel_func, gind, kernel_func cdef extern from "platform_dep.h": void *alloca(int) cdef class ParticleSmoothOperation: # We assume each will allocate and define their own temporary storage cdef kernel_func sph_kernel cdef public object nvals cdef np.float64_t DW[3] cdef int nfields cdef int maxn cdef bint periodicity[3] # Note that we are preallocating here, so this is *not* threadsafe. cdef void (*pos_setup)(np.float64_t ipos[3], np.float64_t opos[3]) cdef void neighbor_process(self, int dim[3], np.float64_t left_edge[3], np.float64_t dds[3], np.float64_t[:,:] ppos, np.float64_t **fields, np.int64_t[:] doffs, np.int64_t **nind, np.int64_t[:] pinds, np.int64_t[:] pcounts, np.int64_t offset, np.float64_t **index_fields, OctreeContainer octree, np.int64_t domain_id, int *nsize, np.float64_t[:,:] oct_left_edges, np.float64_t[:,:] oct_dds, DistanceQueue dq) cdef int neighbor_search(self, np.float64_t pos[3], OctreeContainer octree, np.int64_t **nind, int *nsize, np.int64_t nneighbors, np.int64_t domain_id, Oct **oct = ?, int extra_layer = ?) 
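    # Informal contract for neighbor_search (see the implementation in
    # particle_smooth.pyx): it fills nind[0] with the domain-local indices
    # of the octs neighboring `pos`, growing the buffer (and *nsize) with
    # realloc as needed, and returns the number of entries written.
    # Duplicates are flagged in place with -1 rather than compacted, so
    # callers must skip negative entries.  When `oct` is supplied and the
    # query lands in the same oct as the previous call, the cached
    # neighbor list is reused unchanged.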
cdef void neighbor_process_particle(self, np.float64_t cpos[3], np.float64_t[:,:] ppos, np.float64_t **fields, np.int64_t[:] doffs, np.int64_t **nind, np.int64_t[:] pinds, np.int64_t[:] pcounts, np.int64_t offset, np.float64_t **index_fields, OctreeContainer octree, np.int64_t domain_id, int *nsize, DistanceQueue dq) cdef void neighbor_find(self, np.int64_t nneighbors, np.int64_t *nind, np.int64_t[:] doffs, np.int64_t[:] pcounts, np.int64_t[:] pinds, np.float64_t[:,:] ppos, np.float64_t cpos[3], np.float64_t[:,:] oct_left_edges, np.float64_t[:,:] oct_dds, DistanceQueue dq) cdef void process(self, np.int64_t offset, int i, int j, int k, int dim[3], np.float64_t cpos[3], np.float64_t **fields, np.float64_t **index_fields, DistanceQueue dq) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/particle_smooth.pyx0000644000175100001770000010171314714401662020015 0ustar00runnerdocker# distutils: include_dirs = LIB_DIR # distutils: libraries = STD_LIBS """ Particle smoothing in cells """ cimport numpy as np import numpy as np cimport cython from cpython.exc cimport PyErr_CheckSignals from libc.math cimport cos, sin, sqrt from libc.stdlib cimport free, malloc, realloc from .oct_container cimport Oct, OctInfo, OctreeContainer cdef void spherical_coord_setup(np.float64_t ipos[3], np.float64_t opos[3]): opos[0] = ipos[0] * sin(ipos[1]) * cos(ipos[2]) opos[1] = ipos[0] * sin(ipos[1]) * sin(ipos[2]) opos[2] = ipos[0] * cos(ipos[1]) cdef void cart_coord_setup(np.float64_t ipos[3], np.float64_t opos[3]): opos[0] = ipos[0] opos[1] = ipos[1] opos[2] = ipos[2] cdef class ParticleSmoothOperation: def __init__(self, nvals, nfields, max_neighbors, kernel_name): # This is the set of cells, in grids, blocks or octs, we are handling. self.nvals = nvals self.nfields = nfields self.maxn = max_neighbors self.sph_kernel = get_kernel_func(kernel_name) def initialize(self, *args): raise NotImplementedError def finalize(self, *args): raise NotImplementedError @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) @cython.initializedcheck(False) def process_octree(self, OctreeContainer mesh_octree, np.int64_t [:] mdom_ind, np.float64_t[:,:] positions, np.float64_t[:,:] oct_positions, fields = None, int domain_id = -1, int domain_offset = 0, periodicity = (True, True, True), index_fields = None, OctreeContainer particle_octree = None, np.int64_t [:] pdom_ind = None, geometry = "cartesian"): # This will be a several-step operation. # # We first take all of our particles and assign them to Octs. If they # are not in an Oct, we will assume they are out of bounds. Note that # this means that if we have loaded neighbor particles for which an Oct # does not exist, we are going to be discarding them -- so sparse # octrees will need to ensure that neighbor octs *exist*. Particles # will be assigned in a new NumPy array. Note that this incurs # overhead, but reduces complexity as we will now be able to use # argsort. # # After the particles have been assigned to Octs, we process each Oct # individually. We will do this by calling "get" for the *first* # particle in each set of Octs in the sorted list. After this, we get # neighbors for each Oct. # # Now, with the set of neighbors (and thus their indices) we allocate # an array of particles and their fields, fill these in, and call our # process function. # # This is not terribly efficient -- for starters, the neighbor function # is not the most efficient yet. 
        # We will also need to handle some mechanism of an expandable array
        # for holding pointers to Octs, so that we can deal with >27
        # neighbors.
        if particle_octree is None:
            particle_octree = mesh_octree
            pdom_ind = mdom_ind
        cdef int nf, i, j
        cdef int dims[3]
        cdef np.float64_t **field_pointers
        cdef np.float64_t **index_field_pointers
        cdef np.float64_t pos[3]
        cdef int nsize = 0
        cdef np.int64_t *nind = NULL
        cdef OctInfo moi
        cdef Oct *oct
        cdef np.int64_t offset, poff
        cdef np.int64_t moff_p, moff_m
        cdef np.int64_t[:] pind, doff, pdoms, pcount
        cdef np.ndarray[np.float64_t, ndim=1] tarr
        cdef np.ndarray[np.float64_t, ndim=4] iarr
        cdef np.float64_t[:,:] cart_positions
        cdef np.float64_t[:,:] oct_left_edges, oct_dds
        cdef OctInfo oinfo
        if geometry == "cartesian":
            self.pos_setup = cart_coord_setup
            cart_positions = positions
        elif geometry == "spherical":
            self.pos_setup = spherical_coord_setup
            cart_positions = np.empty((positions.shape[0], 3), dtype="float64")
            cart_positions[:,0] = positions[:,0] * \
                                  np.sin(positions[:,1]) * \
                                  np.cos(positions[:,2])
            cart_positions[:,1] = positions[:,0] * \
                                  np.sin(positions[:,1]) * \
                                  np.sin(positions[:,2])
            cart_positions[:,2] = positions[:,0] * \
                                  np.cos(positions[:,1])
            periodicity = (False, False, False)
        else:
            raise NotImplementedError
        dims[0] = dims[1] = dims[2] = mesh_octree.nz
        cdef int nz = dims[0] * dims[1] * dims[2]
        # pcount is the number of particles per oct.
        pcount = np.zeros_like(pdom_ind)
        oct_left_edges = np.zeros((pdom_ind.shape[0], 3), dtype='float64')
        oct_dds = np.zeros_like(oct_left_edges)
        # doff is the offset to a given oct in the sorted particles.
        doff = np.zeros_like(pdom_ind) - 1
        moff_p = particle_octree.get_domain_offset(domain_id + domain_offset)
        moff_m = mesh_octree.get_domain_offset(domain_id + domain_offset)
        # pdoms points particles at their octs.  So the value in this array,
        # for a given index, is the local oct index.
        pdoms = np.zeros(positions.shape[0], dtype="int64") - 1
        if fields is None:
            fields = []
        nf = len(fields)
        field_pointers = <np.float64_t **> alloca(sizeof(np.float64_t *) * nf)
        for i in range(nf):
            tarr = fields[i]
            field_pointers[i] = <np.float64_t *> tarr.data
        if index_fields is None:
            index_fields = []
        nf = len(index_fields)
        index_field_pointers = <np.float64_t **> alloca(sizeof(np.float64_t *) * nf)
        for i in range(nf):
            iarr = index_fields[i]
            index_field_pointers[i] = <np.float64_t *> iarr.data
        for i in range(3):
            self.DW[i] = (mesh_octree.DRE[i] - mesh_octree.DLE[i])
            self.periodicity[i] = periodicity[i]
        cdef np.float64_t factor = particle_octree.nz
        for i in range(positions.shape[0]):
            for j in range(3):
                pos[j] = positions[i, j]
            oct = particle_octree.get(pos, &oinfo)
            if oct == NULL or (domain_id > 0 and oct.domain != domain_id):
                continue
            # Note that this has to be our local index, not our in-file index.
            # This is the particle count, which we'll use once we have sorted
            # the particles to calculate the offsets into each oct's particles.
            offset = oct.domain_ind - moff_p
            pcount[offset] += 1
            pdoms[i] = offset  # We store the *actual* offset.
            # store oct positions and dds to avoid searching for neighbors
            # in octs that we know are too far away
            for j in range(3):
                oct_left_edges[offset, j] = oinfo.left_edge[j]
                oct_dds[offset, j] = oinfo.dds[j] * factor
        # Now we have oct assignments.  Let's sort them.
        # Note that what we will be providing to our processing functions will
        # actually be indirectly-sorted fields.  This preserves memory at the
        # expense of additional pointer lookups.
        pind = np.asarray(np.argsort(pdoms), dtype='int64', order='C')
        # So what this means is that we now have all the oct-0 particle indices
        # in order, then the oct-1, etc etc.
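        # A concrete (illustrative) example of this indirection:
        #
        #     pdoms = np.array([2, 0, 2, 1])  # local oct id per particle
        #     pind = np.argsort(pdoms)        # -> [1, 3, 0, 2]
        #
        # Walking pind visits oct 0's particle first, then oct 1's, then
        # oct 2's, without ever rearranging the particle data itself.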
# This now gives us the indices to the particles for each domain. for i in range(positions.shape[0]): # This value, poff, is the index of the particle in the *unsorted* # arrays. poff = pind[i] offset = pdoms[poff] # If we have yet to assign the starting index to this oct, we do so # now. if doff[offset] < 0: doff[offset] = i #print(domain_id, domain_offset, moff_p, moff_m) #raise RuntimeError # Now doff is full of offsets to the first entry in the pind that # refers to that oct's particles. cdef np.ndarray[np.uint8_t, ndim=1] visited visited = np.zeros(mdom_ind.shape[0], dtype="uint8") cdef int nproc = 0 # This should be thread-private if we ever go to OpenMP cdef DistanceQueue dist_queue = DistanceQueue(self.maxn) dist_queue._setup(self.DW, self.periodicity) for i in range(oct_positions.shape[0]): if (i % 10000) == 0: PyErr_CheckSignals() for j in range(3): pos[j] = oct_positions[i, j] oct = mesh_octree.get(pos, &moi) offset = mdom_ind[oct.domain_ind - moff_m] * nz if visited[oct.domain_ind - moff_m] == 1: continue visited[oct.domain_ind - moff_m] = 1 if offset < 0: continue nproc += 1 self.neighbor_process( dims, moi.left_edge, moi.dds, cart_positions, field_pointers, doff, &nind, pind, pcount, offset, index_field_pointers, particle_octree, domain_id, &nsize, oct_left_edges, oct_dds, dist_queue) #print("VISITED", visited.sum(), visited.size,) #print(100.0*float(visited.sum())/visited.size) if nind != NULL: free(nind) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) @cython.initializedcheck(False) def process_particles(self, OctreeContainer particle_octree, np.ndarray[np.int64_t, ndim=1] pdom_ind, np.ndarray[np.float64_t, ndim=2] positions, fields = None, int domain_id = -1, int domain_offset = 0, periodicity = (True, True, True), geometry = "cartesian"): # The other functions in this base class process particles in a way # that results in a modification to the *mesh*. This function is # designed to process neighboring particles in such a way that a new # *particle* field is defined -- this means that new particle # attributes (*not* mesh attributes) can be created that rely on the # values of nearby particles. For instance, a smoothing kernel, or a # nearest-neighbor field. 
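        # As a sketch of how this is used (hypothetical driver code, not
        # part of this module; npart, octree, pdom_ind and hsml_out are
        # assumed names): NthNeighborDistanceSmooth below fills, for each
        # particle, the distance to its maxn-th nearest neighbor -- a
        # common choice of adaptive smoothing length:
        #
        #     op = NthNeighborDistanceSmooth(npart, 1, 64, "cubic")
        #     op.process_particles(octree, pdom_ind, positions, [hsml_out])
        #
        # where hsml_out is a preallocated float64 array of length npart.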
        cdef int nf, i, j, k
        cdef np.float64_t **field_pointers
        cdef np.float64_t pos[3]
        cdef int nsize = 0
        cdef np.int64_t *nind = NULL
        cdef Oct *oct
        cdef np.int64_t offset
        cdef np.int64_t moff_p, pind0, poff
        cdef np.int64_t[:] pind, doff, pdoms, pcount
        cdef np.ndarray[np.float64_t, ndim=1] tarr
        cdef np.ndarray[np.float64_t, ndim=2] cart_positions
        if geometry == "cartesian":
            self.pos_setup = cart_coord_setup
            cart_positions = positions
        elif geometry == "spherical":
            self.pos_setup = spherical_coord_setup
            cart_positions = np.empty((positions.shape[0], 3), dtype="float64")
            cart_positions[:,0] = positions[:,0] * \
                                  np.sin(positions[:,1]) * \
                                  np.cos(positions[:,2])
            cart_positions[:,1] = positions[:,0] * \
                                  np.sin(positions[:,1]) * \
                                  np.sin(positions[:,2])
            cart_positions[:,2] = positions[:,0] * \
                                  np.cos(positions[:,1])
            periodicity = (False, False, False)
        else:
            raise NotImplementedError
        pcount = np.zeros_like(pdom_ind)
        doff = np.zeros_like(pdom_ind) - 1
        moff_p = particle_octree.get_domain_offset(domain_id + domain_offset)
        pdoms = np.zeros(positions.shape[0], dtype="int64") - 1
        if fields is None:
            fields = []
        nf = len(fields)
        field_pointers = <np.float64_t **> alloca(sizeof(np.float64_t *) * nf)
        for i in range(nf):
            tarr = fields[i]
            field_pointers[i] = <np.float64_t *> tarr.data
        for i in range(3):
            self.DW[i] = (particle_octree.DRE[i] - particle_octree.DLE[i])
            self.periodicity[i] = periodicity[i]
        for i in range(positions.shape[0]):
            for j in range(3):
                pos[j] = positions[i, j]
            oct = particle_octree.get(pos)
            if oct == NULL or (domain_id > 0 and oct.domain != domain_id):
                continue
            # Note that this has to be our local index, not our in-file index.
            # This is the particle count, which we'll use once we have sorted
            # the particles to calculate the offsets into each oct's particles.
            offset = oct.domain_ind - moff_p
            pcount[offset] += 1
            pdoms[i] = offset  # We store the *actual* offset.
        # Now we have oct assignments.  Let's sort them.
        # Note that what we will be providing to our processing functions will
        # actually be indirectly-sorted fields.  This preserves memory at the
        # expense of additional pointer lookups.
        pind = np.asarray(np.argsort(pdoms), dtype='int64', order='C')
        # So what this means is that we now have all the oct-0 particle indices
        # in order, then the oct-1, etc etc.
        # This now gives us the indices to the particles for each domain.
        for i in range(positions.shape[0]):
            # This value, poff, is the index of the particle in the *unsorted*
            # arrays.
            poff = pind[i]
            offset = pdoms[poff]
            # If we have yet to assign the starting index to this oct, we do so
            # now.
            if doff[offset] < 0:
                doff[offset] = i
        #print(domain_id, domain_offset, moff_p, moff_m)
        #raise RuntimeError
        # Now doff is full of offsets to the first entry in the pind that
        # refers to that oct's particles.
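        # Illustration of the access pattern this enables (used verbatim in
        # the loop below): the particles belonging to oct i are
        #
        #     for j in range(pcount[i]):
        #         p = pind[doff[i] + j]   # index into the *unsorted* arrays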
        # This should be thread-private if we ever go to OpenMP
        cdef DistanceQueue dist_queue = DistanceQueue(self.maxn)
        dist_queue._setup(self.DW, self.periodicity)
        for i in range(doff.shape[0]):
            if doff[i] < 0:
                continue
            offset = pind[doff[i]]
            for j in range(3):
                pos[j] = positions[offset, j]
            for j in range(pcount[i]):
                pind0 = pind[doff[i] + j]
                for k in range(3):
                    pos[k] = positions[pind0, k]
                self.neighbor_process_particle(pos, cart_positions,
                                               field_pointers, doff, &nind,
                                               pind, pcount, pind0,
                                               NULL, particle_octree, domain_id,
                                               &nsize, dist_queue)
        #print("VISITED", visited.sum(), visited.size,)
        #print(100.0*float(visited.sum())/visited.size)
        if nind != NULL:
            free(nind)

    cdef int neighbor_search(self, np.float64_t pos[3], OctreeContainer octree,
                             np.int64_t **nind, int *nsize,
                             np.int64_t nneighbors, np.int64_t domain_id,
                             Oct **oct = NULL, int extra_layer = 0):
        cdef OctInfo oi
        cdef Oct *ooct
        cdef Oct **neighbors
        cdef Oct **first_layer
        cdef int j, n, total_neighbors = 0, initial_layer = 0
        cdef int layer_ind = 0
        cdef np.int64_t moff = octree.get_domain_offset(domain_id)
        ooct = octree.get(pos, &oi)
        if oct != NULL and ooct == oct[0]:
            return nneighbors
        oct[0] = ooct
        if nind[0] == NULL:
            nsize[0] = 27
            nind[0] = <np.int64_t *> malloc(sizeof(np.int64_t)*nsize[0])
        # This is our "seed" set of neighbors.  If we are asked to, we will
        # create a master list of neighbors that is much bigger and includes
        # everything.
        layer_ind = 0
        first_layer = NULL
        while 1:
            neighbors = octree.neighbors(&oi, &nneighbors, ooct, self.periodicity)
            # Now we have all our neighbors.  And, we should be set for what
            # else we need to do.
            if total_neighbors + nneighbors > nsize[0]:
                nind[0] = <np.int64_t *> realloc(
                    nind[0], sizeof(np.int64_t)*(nneighbors + total_neighbors))
                nsize[0] = nneighbors + total_neighbors
            for j in range(nneighbors):
                # Particle octree neighbor indices
                nind[0][j + total_neighbors] = neighbors[j].domain_ind - moff
            total_neighbors += nneighbors
            if extra_layer == 0:
                # Not adding on any additional layers here.
                free(neighbors)
                neighbors = NULL
                break
            if initial_layer == 0:
                initial_layer = nneighbors
                first_layer = neighbors
            else:
                # Allocated internally; we free this in the loops if we aren't
                # tracking it
                free(neighbors)
                neighbors = NULL
            ooct = first_layer[layer_ind]
            layer_ind += 1
            if layer_ind == initial_layer:
                break
        for j in range(total_neighbors):
            # Particle octree neighbor indices
            if nind[0][j] == -1: continue
            for n in range(j):
                if nind[0][j] == nind[0][n]:
                    nind[0][j] = -1
        # This is allocated by the neighbors function, so we deallocate it.
        if first_layer != NULL:
            free(first_layer)
        return total_neighbors

    @cython.cdivision(True)
    @cython.boundscheck(False)
    @cython.wraparound(False)
    @cython.initializedcheck(False)
    def process_grid(self, gobj,
                     np.ndarray[np.float64_t, ndim=2] positions,
                     fields = None):
        raise NotImplementedError

    cdef void process(self, np.int64_t offset, int i, int j, int k,
                      int dim[3], np.float64_t cpos[3], np.float64_t **fields,
                      np.float64_t **ifields, DistanceQueue dq):
        raise NotImplementedError

    @cython.cdivision(True)
    @cython.boundscheck(False)
    @cython.wraparound(False)
    @cython.initializedcheck(False)
    cdef void neighbor_find(self,
                            np.int64_t nneighbors,
                            np.int64_t *nind,
                            np.int64_t[:] doffs,
                            np.int64_t[:] pcounts,
                            np.int64_t[:] pinds,
                            np.float64_t[:,:] ppos,
                            np.float64_t cpos[3],
                            np.float64_t[:,:] oct_left_edges,
                            np.float64_t[:,:] oct_dds,
                            DistanceQueue dq
                            ):
        # We are now given the number of neighbors, the indices into the
        # domains for them, and the number of particles for each.
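        # The early-termination test below reduces, per dimension, to the
        # standard point-to-interval distance (periodically wrapped); in
        # plain Python one axis of it reads:
        #
        #     d0 = ex0 - x      # positive if x lies left of the box
        #     d1 = x - ex1      # positive if x lies right of the box
        #     dist = max(0.0, d0, d1)
        #
        # so dist is zero whenever x falls inside [ex0, ex1], and r2 sums
        # dist**2 over the three axes.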
cdef int ni, i, j, k cdef np.int64_t offset, pn, pc cdef np.float64_t pos[3] cdef np.float64_t ex[2] cdef np.float64_t DR[2] cdef np.float64_t r2_trunc, r2, dist dq.neighbor_reset() for ni in range(nneighbors): if nind[ni] == -1: continue # terminate early if all 8 corners of oct are farther away than # most distant currently known neighbor if oct_left_edges != None and dq.curn == dq.maxn: r2_trunc = dq.neighbors[dq.curn - 1].r2 # iterate over each dimension in the outer loop so we can # consolidate temporary storage # What this next bit does is figure out which component is the # closest, of each possible permutation. # k here is the dimension r2 = 0.0 for k in range(3): # We start at left edge, then do halfway, then right edge. ex[0] = oct_left_edges[nind[ni], k] ex[1] = ex[0] + oct_dds[nind[ni], k] # There are three possibilities; we are between, left-of, # or right-of the extrema. Thanks to # http://stackoverflow.com/questions/5254838/calculating-distance-between-a-point-and-a-rectangular-box-nearest-point # for some help. This has been modified to account for # periodicity. dist = 0.0 DR[0] = (ex[0] - cpos[k]) DR[1] = (cpos[k] - ex[1]) for j in range(2): if not self.periodicity[k]: pass elif (DR[j] > self.DW[k]/2.0): DR[j] -= self.DW[k] elif (DR[j] < -self.DW[k]/2.0): DR[j] += self.DW[k] dist = fmax(dist, DR[j]) r2 += dist*dist if r2 > r2_trunc: continue offset = doffs[nind[ni]] pc = pcounts[nind[ni]] for i in range(pc): pn = pinds[offset + i] for j in range(3): pos[j] = ppos[pn, j] dq.neighbor_eval(pn, pos, cpos) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) @cython.initializedcheck(False) cdef void neighbor_process(self, int dim[3], np.float64_t left_edge[3], np.float64_t dds[3], np.float64_t[:,:] ppos, np.float64_t **fields, np.int64_t [:] doffs, np.int64_t **nind, np.int64_t [:] pinds, np.int64_t[:] pcounts, np.int64_t offset, np.float64_t **index_fields, OctreeContainer octree, np.int64_t domain_id, int *nsize, np.float64_t[:,:] oct_left_edges, np.float64_t[:,:] oct_dds, DistanceQueue dq): # Note that we assume that fields[0] == smoothing length in the native # units supplied. We can now iterate over every cell in the block and # every particle to find the nearest. We will use a priority heap. cdef int i, j, k, ntot, nntot, m, nneighbors cdef np.float64_t cpos[3] cdef np.float64_t opos[3] cdef Oct* oct = NULL cpos[0] = left_edge[0] + 0.5*dds[0] for i in range(dim[0]): cpos[1] = left_edge[1] + 0.5*dds[1] for j in range(dim[1]): cpos[2] = left_edge[2] + 0.5*dds[2] for k in range(dim[2]): self.pos_setup(cpos, opos) nneighbors = self.neighbor_search(opos, octree, nind, nsize, nneighbors, domain_id, &oct, 0) self.neighbor_find(nneighbors, nind[0], doffs, pcounts, pinds, ppos, opos, oct_left_edges, oct_dds, dq) # Now we have all our neighbors in our neighbor list. 
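                    # At this point dq holds at most maxn candidates ordered
                    # by squared distance; dq.neighbors[dq.curn-1].r2 is the
                    # current worst match.  The block below is purely a
                    # diagnostic for an under-filled queue.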
                    if dq.curn < -1 * dq.maxn:
                        ntot = nntot = 0
                        for m in range(nneighbors):
                            if nind[0][m] < 0: continue
                            nntot += 1
                            ntot += pcounts[nind[0][m]]
                        print("SOMETHING WRONG", dq.curn, nneighbors, ntot, nntot)
                    self.process(offset, i, j, k, dim, opos, fields,
                                 index_fields, dq)
                    cpos[2] += dds[2]
                cpos[1] += dds[1]
            cpos[0] += dds[0]

    @cython.cdivision(True)
    @cython.boundscheck(False)
    @cython.wraparound(False)
    @cython.initializedcheck(False)
    cdef void neighbor_process_particle(self, np.float64_t cpos[3],
                                        np.float64_t[:,:] ppos,
                                        np.float64_t **fields,
                                        np.int64_t[:] doffs, np.int64_t **nind,
                                        np.int64_t[:] pinds, np.int64_t[:] pcounts,
                                        np.int64_t offset,
                                        np.float64_t **index_fields,
                                        OctreeContainer octree, np.int64_t domain_id,
                                        int *nsize, DistanceQueue dq):
        # Note that we assume that fields[0] == smoothing length in the native
        # units supplied.  We can now iterate over every cell in the block and
        # every particle to find the nearest.  We will use a priority heap.
        cdef int i, j, k
        cdef int dim[3]
        cdef Oct *oct = NULL
        cdef np.int64_t nneighbors = 0
        i = j = k = 0
        dim[0] = dim[1] = dim[2] = 1
        cdef np.float64_t opos[3]
        self.pos_setup(cpos, opos)
        nneighbors = self.neighbor_search(opos, octree,
                                          nind, nsize, nneighbors, domain_id,
                                          &oct, 0)
        self.neighbor_find(nneighbors, nind[0], doffs, pcounts, pinds, ppos,
                           opos, None, None, dq)
        self.process(offset, i, j, k, dim, opos, fields, index_fields, dq)

cdef class VolumeWeightedSmooth(ParticleSmoothOperation):
    # This smoothing function evaluates the field, *without* normalization, at
    # every point in the *mesh*.  Applying a normalization results in
    # non-conservation of mass when smoothing density; to avoid this, we do not
    # apply this normalization factor.  The SPLASH paper
    # (http://arxiv.org/abs/0709.0832v1) discusses this in detail; what we are
    # applying here is equation 6, with variable smoothing lengths (eq 13).
    cdef np.float64_t **fp
    cdef public object vals

    def initialize(self):
        cdef int i
        if self.nfields < 4:
            # We need four fields -- the mass should be the first, then the
            # smoothing length for particles, the normalization factor to
            # ensure mass conservation, then the field we're smoothing.
            raise RuntimeError
        cdef np.ndarray tarr
        self.fp = <np.float64_t **> malloc(
            sizeof(np.float64_t *) * (self.nfields - 3))
        self.vals = []
        # We usually only allocate one field; if we are doing multiple field,
        # single-pass smoothing, then we might have more.
        for i in range(self.nfields - 3):
            tarr = np.zeros(self.nvals, dtype="float64", order="F")
            self.vals.append(tarr)
            self.fp[i] = <np.float64_t *> tarr.data

    def finalize(self):
        free(self.fp)
        return self.vals

    @cython.cdivision(True)
    @cython.boundscheck(False)
    @cython.wraparound(False)
    @cython.initializedcheck(False)
    cdef void process(self, np.int64_t offset, int i, int j, int k,
                      int dim[3], np.float64_t cpos[3], np.float64_t **fields,
                      np.float64_t **index_fields, DistanceQueue dq):
        # We have our i, j, k for our cell, as well as the cell position.
        # We also have a list of neighboring particles with particle numbers.
        cdef int n, fi
        cdef np.float64_t weight, r2, val, hsml, dens, mass, max_r
        cdef np.float64_t max_hsml, ihsml, ihsml3, kern
        cdef np.int64_t pn
        # We get back our mass
        # rho_i = sum(j = 1 .. n) m_j * W_ij
        max_r = sqrt(dq.neighbors[dq.curn-1].r2)
        max_hsml = index_fields[0][gind(i,j,k,dim) + offset]
        for n in range(dq.curn):
            # No normalization for the moment.
            # fields[1] is the smoothing length.
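            # In symbols, the accumulation below computes, for this cell,
            #     val += (m_j / rho_j) * W(|r_j| / h_j) / h_j**3 * F_j
            # summed over the retained neighbors j -- eq. 6 of the SPLASH
            # paper cited above, with the variable-h prefactor folded into
            # `weight`.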
r2 = dq.neighbors[n].r2 pn = dq.neighbors[n].pn # Smoothing kernel weight function mass = fields[0][pn] hsml = fields[1][pn] dens = fields[2][pn] if hsml < 0: hsml = max_r if hsml == 0: continue ihsml = 1.0/hsml hsml = fmax(max_hsml/2.0, hsml) ihsml3 = ihsml*ihsml*ihsml # Usually this density has been computed if dens == 0.0: continue weight = (mass / dens) * ihsml3 kern = self.sph_kernel(sqrt(r2) * ihsml) weight *= kern # Mass of the particle times the value for fi in range(self.nfields - 3): val = fields[fi + 3][pn] self.fp[fi][gind(i,j,k,dim) + offset] += val * weight return volume_weighted_smooth = VolumeWeightedSmooth cdef class NearestNeighborSmooth(ParticleSmoothOperation): cdef np.float64_t *fp cdef public object vals def initialize(self): cdef np.ndarray tarr assert(self.nfields == 1) tarr = np.zeros(self.nvals, dtype="float64", order="F") self.vals = tarr self.fp = tarr.data def finalize(self): return self.vals @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) @cython.initializedcheck(False) cdef void process(self, np.int64_t offset, int i, int j, int k, int dim[3], np.float64_t cpos[3], np.float64_t **fields, np.float64_t **index_fields, DistanceQueue dq): # We have our i, j, k for our cell, as well as the cell position. # We also have a list of neighboring particles with particle numbers. cdef np.int64_t pn # We get back our mass # rho_i = sum(j = 1 .. n) m_j * W_ij pn = dq.neighbors[0].pn self.fp[gind(i,j,k,dim) + offset] = fields[0][pn] #self.fp[gind(i,j,k,dim) + offset] = dq.neighbors[0].r2 return nearest_smooth = NearestNeighborSmooth cdef class IDWInterpolationSmooth(ParticleSmoothOperation): cdef np.float64_t *fp cdef public int p2 cdef public object vals def initialize(self): cdef np.ndarray tarr assert(self.nfields == 1) tarr = np.zeros(self.nvals, dtype="float64", order="F") self.vals = tarr self.fp = tarr.data self.p2 = 2 # Power, for IDW, in units of 2. So we only do even p's. def finalize(self): return self.vals @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) @cython.initializedcheck(False) cdef void process(self, np.int64_t offset, int i, int j, int k, int dim[3], np.float64_t cpos[3], np.float64_t **fields, np.float64_t **index_fields, DistanceQueue dq): # We have our i, j, k for our cell, as well as the cell position. # We also have a list of neighboring particles with particle numbers. cdef np.int64_t pn, ni, di cdef np.float64_t total_weight = 0.0, total_value = 0.0, r2, val, w # We're going to do a very simple IDW average if dq.neighbors[0].r2 == 0.0: pn = dq.neighbors[0].pn self.fp[gind(i,j,k,dim) + offset] = fields[0][pn] for ni in range(dq.curn): r2 = dq.neighbors[ni].r2 val = fields[0][dq.neighbors[ni].pn] w = r2 for di in range(self.p2 - 1): w *= r2 total_value += w * val total_weight += w self.fp[gind(i,j,k,dim) + offset] = total_value / total_weight return idw_smooth = IDWInterpolationSmooth cdef class NthNeighborDistanceSmooth(ParticleSmoothOperation): def initialize(self): return def finalize(self): return @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) @cython.initializedcheck(False) cdef void process(self, np.int64_t offset, int i, int j, int k, int dim[3], np.float64_t cpos[3], np.float64_t **fields, np.float64_t **index_fields, DistanceQueue dq): cdef np.float64_t max_r # We assume "offset" here is the particle index. 
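        # i.e. fields[0][offset] receives the distance to the farthest of
        # the dq.curn retained neighbors -- for a full queue, the distance
        # to the maxn-th nearest neighbor of this particle.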
max_r = sqrt(dq.neighbors[dq.curn-1].r2) fields[0][offset] = max_r nth_neighbor_smooth = NthNeighborDistanceSmooth cdef class SmoothedDensityEstimate(ParticleSmoothOperation): def initialize(self): return def finalize(self): return @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) @cython.initializedcheck(False) cdef void process(self, np.int64_t offset, int i, int j, int k, int dim[3], np.float64_t cpos[3], np.float64_t **fields, np.float64_t **index_fields, DistanceQueue dq): cdef np.float64_t r2, hsml, dens, mass, weight, lw cdef int pn # We assume "offset" here is the particle index. hsml = sqrt(dq.neighbors[dq.curn-1].r2) dens = 0.0 weight = 0.0 for pn in range(dq.curn): mass = fields[0][dq.neighbors[pn].pn] r2 = dq.neighbors[pn].r2 lw = self.sph_kernel(sqrt(r2) / hsml) dens += mass * lw weight = (4.0/3.0) * 3.1415926 * hsml**3 fields[1][offset] = dens/weight density_smooth = SmoothedDensityEstimate ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/selection_routines.pxd0000644000175100001770000000753514714401662020520 0ustar00runnerdocker""" Geometry selection routine imports. """ cimport numpy as np from yt.geometry.grid_visitors cimport ( GridTreeNode, GridVisitorData, check_child_masked, grid_visitor_function, ) from yt.utilities.lib.fp_utils cimport _ensure_code from yt.utilities.lib.geometry_utils cimport decode_morton_64bit from .oct_container cimport OctreeContainer from .oct_visitors cimport Oct, OctVisitor cdef class SelectorObject: cdef public np.int32_t min_level cdef public np.int32_t max_level cdef public int overlap_cells cdef public np.float64_t domain_width[3] cdef public np.float64_t domain_center[3] cdef public bint periodicity[3] cdef bint _hash_initialized cdef np.int64_t _hash cdef void recursively_visit_octs(self, Oct *root, np.float64_t pos[3], np.float64_t dds[3], int level, OctVisitor visitor, int visit_covered = ?) cdef void visit_oct_cells(self, Oct *root, Oct *ch, np.float64_t spos[3], np.float64_t sdds[3], OctVisitor visitor, int i, int j, int k) cdef int select_grid(self, np.float64_t left_edge[3], np.float64_t right_edge[3], np.int32_t level, Oct *o = ?) noexcept nogil cdef int select_grid_edge(self, np.float64_t left_edge[3], np.float64_t right_edge[3], np.int32_t level, Oct *o = ?) noexcept nogil cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3]) noexcept nogil cdef int select_point(self, np.float64_t pos[3]) noexcept nogil cdef int select_sphere(self, np.float64_t pos[3], np.float64_t radius) noexcept nogil cdef int select_bbox(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil cdef int select_bbox_edge(self, np.float64_t left_edge[3], np.float64_t right_edge[3]) noexcept nogil cdef int fill_mask_selector_regular_grid(self, np.float64_t left_edge[3], np.float64_t right_edge[3], np.float64_t dds[3], int dim[3], np.ndarray[np.uint8_t, ndim=3, cast=True] child_mask, np.ndarray[np.uint8_t, ndim=3] mask, int level) cdef int fill_mask_selector(self, np.float64_t left_edge[3], np.float64_t right_edge[3], np.float64_t **dds, int dim[3], np.ndarray[np.uint8_t, ndim=3, cast=True] child_mask, np.ndarray[np.uint8_t, ndim=3] mask, int level) cdef void visit_grid_cells(self, GridVisitorData *data, grid_visitor_function *func, np.uint8_t *cached_mask = ?) 
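    # Return-code convention used by the bbox tests (relied on by, e.g.,
    # recursive_morton_mask in particle_oct_container): 0 means no overlap
    # with the selector, 1 means fully selected, and the *_edge variants
    # return 2 for a box that only partially overlaps, which is the signal
    # to recurse further.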
# compute periodic distance (if periodicity set) # assuming 0->domain_width[d] coordinates cdef np.float64_t periodic_difference( self, np.float64_t x1, np.float64_t x2, int d) noexcept nogil cdef class AlwaysSelector(SelectorObject): pass cdef class OctreeSubsetSelector(SelectorObject): cdef public SelectorObject base_selector cdef public np.int64_t domain_id cdef class BooleanSelector(SelectorObject): cdef public SelectorObject sel1 cdef public SelectorObject sel2 cdef inline np.float64_t _periodic_dist(np.float64_t x1, np.float64_t x2, np.float64_t dw, bint periodic) noexcept nogil: cdef np.float64_t rel = x1 - x2 if not periodic: return rel if rel > dw * 0.5: rel -= dw elif rel < -dw * 0.5: rel += dw return rel ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/selection_routines.pyx0000644000175100001770000001470214714401662020537 0ustar00runnerdocker# distutils: include_dirs = LIB_DIR # distutils: libraries = STD_LIBS """ Geometry selection routines. """ import numpy as np cimport cython cimport numpy as np from libc.math cimport sqrt from libc.stdlib cimport free, malloc from yt.utilities.lib.bitarray cimport ba_get_value from yt.utilities.lib.fnv_hash cimport c_fnv_hash as fnv_hash from yt.utilities.lib.fp_utils cimport fclip, fmax, fmin, iclip from yt.utilities.lib.grid_traversal cimport walk_volume from yt.utilities.lib.volume_container cimport VolumeContainer from .oct_container cimport Oct, OctreeContainer from .oct_visitors cimport cind cdef extern from "math.h": double exp(double x) noexcept nogil float expf(float x) noexcept nogil long double expl(long double x) noexcept nogil double floor(double x) noexcept nogil double ceil(double x) noexcept nogil double fmod(double x, double y) noexcept nogil double log2(double x) noexcept nogil long int lrint(double x) noexcept nogil double fabs(double x) noexcept nogil # use this as an epsilon test for grids aligned with selector # define here to avoid the gil later cdef np.float64_t grid_eps = np.finfo(np.float64).eps grid_eps = 0.0 cdef inline np.float64_t dot(np.float64_t* v1, np.float64_t* v2) noexcept nogil: return v1[0]*v2[0] + v1[1]*v2[1] + v1[2]*v2[2] cdef inline np.float64_t norm(np.float64_t* v) noexcept nogil: return sqrt(dot(v, v)) # These routines are separated into a couple different categories: # # * Routines for identifying intersections of an object with a bounding box # * Routines for identifying cells/points inside a bounding box that # intersect with an object # * Routines that speed up some type of geometric calculation # First, bounding box / object intersection routines. # These all respect the interface "dobj" and a set of left_edges, right_edges, # sometimes also accepting level and mask information. 
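# For instance, convert_mask_to_indices below gathers the (i, j, k)
# triples of all set cells in a 3D mask; a pure-NumPy rendering of the
# same operation (for orientation only) is np.argwhere(mask), with the
# columns reversed when transpose is requested.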
@cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def convert_mask_to_indices(np.ndarray[np.uint8_t, ndim=3, cast=True] mask, int count, int transpose = 0): cdef int i, j, k, cpos cdef np.ndarray[np.int64_t, ndim=2] indices indices = np.zeros((count, 3), dtype='int64') cpos = 0 for i in range(mask.shape[0]): for j in range(mask.shape[1]): for k in range(mask.shape[2]): if mask[i, j, k] == 1: if transpose == 1: indices[cpos, 0] = k indices[cpos, 1] = j indices[cpos, 2] = i else: indices[cpos, 0] = i indices[cpos, 1] = j indices[cpos, 2] = k cpos += 1 return indices @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef _mask_fill(np.ndarray[np.float64_t, ndim=1] out, np.int64_t offset, np.ndarray[np.uint8_t, ndim=3, cast=True] mask, np.ndarray[cython.floating, ndim=3] vals): cdef np.int64_t count = 0 cdef int i, j, k for i in range(mask.shape[0]): for j in range(mask.shape[1]): for k in range(mask.shape[2]): if mask[i, j, k] == 1: out[offset + count] = vals[i, j, k] count += 1 return count def mask_fill(np.ndarray[np.float64_t, ndim=1] out, np.int64_t offset, np.ndarray[np.uint8_t, ndim=3, cast=True] mask, np.ndarray vals): if vals.dtype == np.float32: return _mask_fill[np.float32_t](out, offset, mask, vals) elif vals.dtype == np.float64: return _mask_fill[np.float64_t](out, offset, mask, vals) else: raise RuntimeError @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def points_in_cells( np.float64_t[:] cx, np.float64_t[:] cy, np.float64_t[:] cz, np.float64_t[:] dx, np.float64_t[:] dy, np.float64_t[:] dz, np.float64_t[:] px, np.float64_t[:] py, np.float64_t[:] pz): # Take a list of cells and particles and calculate which particles # are enclosed within one of the cells. This is used for querying # particle fields on clump/contour objects. # We use brute force since the cells are a relatively unordered collection. 
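    # The cost is O(n_p * n_c) with an early exit per particle.  A
    # vectorized sketch of the same membership test against one cell c
    # (shown for reference, not used here) would be:
    #
    #     inside = (np.abs(px - cx[c]) <= 0.5 * dx[c]) & \
    #              (np.abs(py - cy[c]) <= 0.5 * dy[c]) & \
    #              (np.abs(pz - cz[c]) <= 0.5 * dz[c])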
cdef int p, c, n_p, n_c cdef np.ndarray[np.uint8_t, ndim=1, cast=True] mask n_p = px.size n_c = cx.size mask = np.zeros(n_p, dtype="bool") for p in range(n_p): for c in range(n_c): if (fabs(px[p] - cx[c]) <= 0.5 * dx[c] and fabs(py[p] - cy[c]) <= 0.5 * dy[c] and fabs(pz[p] - cz[c]) <= 0.5 * dz[c]): mask[p] = True break return mask def bbox_intersects( SelectorObject selector, np.float64_t[::1] left_edges, np.float64_t dx ): cdef np.float64_t[3] right_edges right_edges[0] = left_edges[0] + dx right_edges[1] = left_edges[1] + dx right_edges[2] = left_edges[2] + dx return selector.select_bbox(&left_edges[0], right_edges) == 1 def fully_contains( SelectorObject selector, np.float64_t[::1] left_edges, np.float64_t dx, ): cdef np.float64_t[3] right_edges right_edges[0] = left_edges[0] + dx right_edges[1] = left_edges[1] + dx right_edges[2] = left_edges[2] + dx return selector.select_bbox_edge(&left_edges[0], right_edges) == 1 include "_selection_routines/selector_object.pxi" include "_selection_routines/point_selector.pxi" include "_selection_routines/sphere_selector.pxi" include "_selection_routines/region_selector.pxi" include "_selection_routines/cut_region_selector.pxi" include "_selection_routines/disk_selector.pxi" include "_selection_routines/cutting_plane_selector.pxi" include "_selection_routines/slice_selector.pxi" include "_selection_routines/ortho_ray_selector.pxi" include "_selection_routines/ray_selector.pxi" include "_selection_routines/data_collection_selector.pxi" include "_selection_routines/ellipsoid_selector.pxi" include "_selection_routines/grid_selector.pxi" include "_selection_routines/octree_subset_selector.pxi" include "_selection_routines/indexed_octree_subset_selector.pxi" include "_selection_routines/always_selector.pxi" include "_selection_routines/compose_selector.pxi" include "_selection_routines/halo_particles_selector.pxi" include "_selection_routines/boolean_selectors.pxi" ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3671532 yt-4.4.0/yt/geometry/tests/0000755000175100001770000000000014714401715015215 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/tests/__init__.py0000644000175100001770000000000014714401662017315 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/tests/fake_octree.py0000644000175100001770000000272514714401662020045 0ustar00runnerdockerimport numpy as np from yt.geometry.fake_octree import create_fake_octree from yt.geometry.oct_container import ParticleOctreeContainer, RAMSESOctreeContainer nocts = 3 max_level = 12 dn = 2 dd = np.ones(3, dtype="i4") * dn dle = np.ones(3, dtype="f8") * 0.0 dre = np.ones(3, dtype="f8") fsub = 0.25 domain = 1 oct_handler = RAMSESOctreeContainer(dd, dle, dre) leaves = create_fake_octree(oct_handler, nocts, max_level, dd, dle, dre, fsub) mask = np.ones((nocts, 8), dtype="bool") cell_count = nocts * 8 oct_counts = oct_handler.count_levels(max_level, 1, mask) level_counts = np.concatenate( ( [ 0, ], np.cumsum(oct_counts), ) ) fc = oct_handler.fcoords(domain, mask, cell_count, level_counts.copy()) leavesb = oct_handler.count_leaves(mask) assert leaves == leavesb # Now take the fcoords, call them particles and recreate the same octree print("particle-based recreate") oct_handler2 = ParticleOctreeContainer(dd, dle, dre) oct_handler2.allocate_domains([nocts]) oct_handler2.n_ref = 1 # specifically make a 
maximum of 1 particle per oct oct_handler2.add(fc, 1) print("added particles") cell_count2 = nocts * 8 oct_counts2 = oct_handler.count_levels(max_level, 1, mask) level_counts2 = np.concatenate( ( [ 0, ], np.cumsum(oct_counts), ) ) fc2 = oct_handler.fcoords(domain, mask, cell_count, level_counts.copy()) leaves2 = oct_handler2.count_leaves(mask) assert leaves == leaves2 print("success") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/tests/test_ewah_write_load.py0000644000175100001770000000211014714401662021756 0ustar00runnerdockerfrom yt.loaders import load from yt.sample_data.api import _get_test_data_dir_path from yt.testing import assert_array_equal, requires_file @requires_file("TNGHalo/halo_59.hdf5") def test_ewah_write_load(tmp_path): mock_file = tmp_path / "halo_59.hdf5" mock_file.symlink_to(_get_test_data_dir_path() / "TNGHalo" / "halo_59.hdf5") masses = [] opts = [ (None, True, (6, 2, 4), False), (None, True, (6, 2, 4), True), ((5, 3), False, (5, 3, 3), False), ((5, 3), False, (5, 3, 3), True), ] for opt in opts: ds = load(mock_file, index_order=opt[0]) _, c = ds.find_max(("gas", "density")) assert ds.index.pii.mutable_index is opt[1] assert ds.index.pii.order1 == opt[2][0] assert ds.index.pii.order2_orig == opt[2][1] assert ds.index.pii.order2 == opt[2][2] assert ds.index.pii._is_loaded is opt[3] c += ds.quan(0.5, "Mpc") sp = ds.sphere(c, (20.0, "kpc")) mass = sp.sum(("gas", "mass")) masses.append(mass) assert_array_equal(masses[0], masses[1:]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/tests/test_geometries.py0000644000175100001770000000171014714401662020771 0ustar00runnerdockerfrom importlib.util import find_spec import pytest import yt from yt.testing import fake_amr_ds @pytest.mark.parametrize( "geometry", [ "cartesian", "polar", "cylindrical", "spherical", "geographic", "internal_geographic", "spectral_cube", ], ) def test_testable_geometries(geometry): # check that initializing a simple fake dataset works in any geometry ds = fake_amr_ds(fields=[("gas", "density")], units=["g/cm**3"], geometry=geometry) # make sure basic plotting works for axis in range(3): if "geographic" in geometry and axis == 2 and find_spec("cartopy") is None: pytest.skip( reason=( "cannot test this case with vanilla yt (requires cartopy) " "see https://github.com/yt-project/yt/issues/4182" ) ) yt.SlicePlot(ds, axis, ("gas", "density"), buff_size=(8, 8)) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/tests/test_grid_container.py0000644000175100001770000001063514714401662021623 0ustar00runnerdockerimport random import numpy as np from numpy.testing import assert_equal, assert_raises from yt.loaders import load_amr_grids def setup_test_ds(): """Prepare setup specific environment""" grid_data = [ { "left_edge": [0.0, 0.0, 0.0], "right_edge": [1.0, 1.0, 1.0], "level": 0, "dimensions": [16, 16, 16], }, { "left_edge": [0.25, 0.25, 0.25], "right_edge": [0.75, 0.75, 0.75], "level": 1, "dimensions": [16, 16, 16], }, { "left_edge": [0.25, 0.25, 0.375], "right_edge": [0.5, 0.5, 0.625], "level": 2, "dimensions": [16, 16, 16], }, { "left_edge": [0.5, 0.5, 0.375], "right_edge": [0.75, 0.75, 0.625], "level": 2, "dimensions": [16, 16, 16], }, { "left_edge": [0.3125, 0.3125, 0.4375], "right_edge": [0.4375, 0.4375, 0.5625], "level": 3, "dimensions": [16, 16, 16], }, { "left_edge": [0.5625, 0.5625, 0.4375], 
"right_edge": [0.6875, 0.6875, 0.5625], "level": 3, "dimensions": [16, 16, 16], }, ] for grid in grid_data: grid["density"] = ( np.random.random(grid["dimensions"]) * 2 ** grid["level"], "g/cm**3", ) return load_amr_grids(grid_data, [16, 16, 16]) def test_grid_tree(): """Main test suite for GridTree""" test_ds = setup_test_ds() grid_tree = test_ds.index._get_grid_tree() indices, levels, nchild, children = grid_tree.return_tree_info() grid_levels = [grid.Level for grid in test_ds.index.grids] grid_indices = [grid.id - grid._id_offset for grid in test_ds.index.grids] grid_nchild = [len(grid.Children) for grid in test_ds.index.grids] assert_equal(levels, grid_levels) assert_equal(indices, grid_indices) assert_equal(nchild, grid_nchild) for i, grid in enumerate(test_ds.index.grids): if grid_nchild[i] > 0: grid_children = np.array( [child.id - child._id_offset for child in grid.Children] ) assert_equal(grid_children, children[i]) def test_find_points(): """Main test suite for MatchPoints""" num_points = 100 test_ds = setup_test_ds() randx = np.random.uniform( low=test_ds.domain_left_edge[0], high=test_ds.domain_right_edge[0], size=num_points, ) randy = np.random.uniform( low=test_ds.domain_left_edge[1], high=test_ds.domain_right_edge[1], size=num_points, ) randz = np.random.uniform( low=test_ds.domain_left_edge[2], high=test_ds.domain_right_edge[2], size=num_points, ) point_grids, point_grid_inds = test_ds.index._find_points(randx, randy, randz) grid_inds = np.zeros((num_points), dtype="int64") for ind, ixx, iyy, izz in zip(range(num_points), randx, randy, randz, strict=True): pos = np.array([ixx, iyy, izz]) pt_level = -1 for grid in test_ds.index.grids: if ( np.all(pos >= grid.LeftEdge) and np.all(pos <= grid.RightEdge) and grid.Level > pt_level ): pt_level = grid.Level grid_inds[ind] = grid.id - grid._id_offset assert_equal(point_grid_inds, grid_inds) # Test whether find_points works for lists point_grids, point_grid_inds = test_ds.index._find_points( randx.tolist(), randy.tolist(), randz.tolist() ) assert_equal(point_grid_inds, grid_inds) # Test if find_points works for scalar ind = random.randint(0, num_points - 1) point_grids, point_grid_inds = test_ds.index._find_points( randx[ind], randy[ind], randz[ind] ) assert_equal(point_grid_inds, grid_inds[ind]) # Test if find_points fails properly for non equal indices' array sizes assert_raises(ValueError, test_ds.index._find_points, [0], 1.0, [2, 3]) def test_grid_arrays_view(): ds = setup_test_ds() tree = ds.index._get_grid_tree() grid_arr = tree.grid_arrays assert_equal(grid_arr["left_edge"], ds.index.grid_left_edge) assert_equal(grid_arr["right_edge"], ds.index.grid_right_edge) assert_equal(grid_arr["dims"], ds.index.grid_dimensions) assert_equal(grid_arr["level"], ds.index.grid_levels[:, 0]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/tests/test_grid_index.py0000644000175100001770000000151614714401662020746 0ustar00runnerdockerfrom yt.testing import assert_allclose_units, fake_amr_ds def test_icoords_to_ires(): for geometry in ("cartesian", "spherical", "cylindrical"): ds = fake_amr_ds(geometry=geometry) dd = ds.r[:] icoords = dd.icoords ires = dd.ires fcoords, fwidth = ds.index._icoords_to_fcoords(icoords, ires) assert_allclose_units(fcoords, dd.fcoords, rtol=1e-14) assert_allclose_units(fwidth, dd.fwidth, rtol=1e-14) fcoords_xz, fwidth_xz = ds.index._icoords_to_fcoords( icoords[:, (0, 2)], ires, axes=(0, 2) ) assert_allclose_units(fcoords_xz[:, 0], dd.fcoords[:, 0]) 
assert_allclose_units(fcoords_xz[:, 1], dd.fcoords[:, 2]) assert_allclose_units(fwidth_xz[:, 0], dd.fwidth[:, 0]) assert_allclose_units(fwidth_xz[:, 1], dd.fwidth[:, 2]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/tests/test_particle_deposit.py0000644000175100001770000000710014714401662022157 0ustar00runnerdockerfrom numpy.testing import assert_allclose, assert_array_less, assert_raises import yt from yt.loaders import load from yt.testing import fake_random_ds, requires_file, requires_module from yt.utilities.exceptions import YTBoundsDefinitionError def test_cic_deposit(): ds = fake_random_ds(64, nprocs=8, particles=64**3) my_reg = ds.arbitrary_grid( ds.domain_left_edge, ds.domain_right_edge, dims=[1, 800, 800] ) f = ("deposit", "all_cic") assert_raises(YTBoundsDefinitionError, my_reg.__getitem__, f) RAMSES = "output_00080/info_00080.txt" RAMSES_small = "ramses_new_format/output_00002/info_00002.txt" ISOGAL = "IsolatedGalaxy/galaxy0030/galaxy0030" @requires_file(RAMSES) def test_one_zone_octree_deposit(): ds = load(RAMSES) # Get a sphere centred on the main halo hpos = ds.arr( [0.5215110772898429, 0.5215110772898429, 0.5215110772898429], "code_length" ) hrvir = ds.quan(0.042307235300540924, "Mpc") sp = ds.sphere(hpos, hrvir * 10) assert sp["deposit", "io_cic"].shape == (1,) @requires_module("h5py") @requires_file(RAMSES) @requires_file(ISOGAL) def test_mesh_sampling(): for fn in (RAMSES, ISOGAL): ds = yt.load(fn) ds.add_mesh_sampling_particle_field(("index", "x"), ptype="all") ds.add_mesh_sampling_particle_field(("index", "dx"), ptype="all") dx = ds.r["all", "cell_index_dx"] xc = ds.r["all", "cell_index_x"] xp = ds.r["all", "particle_position_x"] dist = xp - xc assert_array_less(dist, dx) assert_array_less(-dist, dx) @requires_module("h5py") @requires_file(RAMSES) @requires_file(ISOGAL) def test_mesh_sampling_for_filtered_particles(): for fn in (RAMSES, ISOGAL): ds = yt.load(fn) @yt.particle_filter(requires=["particle_position_x"], filtered_type="io") def left(pfilter, data): return ( data[pfilter.filtered_type, "particle_position_x"].to("code_length") < 0.5 ) ds.add_particle_filter("left") for f in (("index", "x"), ("index", "dx"), ("gas", "density")): ds.add_mesh_sampling_particle_field(f, ptype="io") ds.add_mesh_sampling_particle_field(f, ptype="left") data_sources = (ds.all_data(), ds.box([0] * 3, [0.1] * 3)) def test_source(ptype, src): # Test accessing src[ptype, "cell_index_x"] src[ptype, "cell_index_dx"] src[ptype, "cell_gas_density"] for ptype in ("io", "left"): for src in data_sources: test_source(ptype, src) @requires_file(RAMSES) def test_mesh_sampling_with_indexing(): # Access with index caching ds = yt.load(RAMSES) ds.add_mesh_sampling_particle_field(("gas", "density"), ptype="all") ad = ds.all_data() ad["all", "cell_index"] v1 = ad["all", "cell_gas_density"] # Access with no index caching ds = yt.load(RAMSES) ds.add_mesh_sampling_particle_field(("gas", "density"), ptype="all") ad = ds.all_data() v2 = ad["all", "cell_gas_density"] # Check same answer is returned assert_allclose(v1, v2) @requires_file(RAMSES_small) def test_mesh_sampling_vs_field_value_at_point(): all_ds = (fake_random_ds(ndims=3, particles=500), yt.load(RAMSES_small)) for ds in all_ds: ds.add_mesh_sampling_particle_field(("gas", "density"), ptype="all") val = ds.r["all", "cell_gas_density"] ref = ds.find_field_values_at_points( ("gas", "density"), ds.r["all", "particle_position"] ) assert_allclose(val, ref) 
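# Naming convention exercised above: add_mesh_sampling_particle_field
# with field ("gas", "density") and ptype="all" defines
# ("all", "cell_gas_density"), i.e. "cell_" + the input field name,
# giving each particle the value of its containing mesh cell.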
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/tests/test_particle_octree.py0000644000175100001770000006430714714401662022005 0ustar00runnerdockerimport os import numpy as np from numpy.testing import assert_array_equal, assert_equal import yt.units.dimensions as dimensions from yt.geometry.oct_container import _ORDER_MAX from yt.geometry.particle_oct_container import ParticleBitmap, ParticleOctreeContainer from yt.geometry.selection_routines import RegionSelector from yt.testing import requires_module from yt.units.unit_registry import UnitRegistry from yt.units.yt_array import YTArray from yt.utilities.lib.geometry_utils import ( get_hilbert_indices, get_hilbert_points, get_morton_indices, get_morton_points, ) NPART = 32**3 DLE = np.array([0.0, 0.0, 0.0]) DRE = np.array([10.0, 10.0, 10.0]) DW = DRE - DLE PER = np.array([0, 0, 0], "bool") dx = DW / (2**_ORDER_MAX) def test_add_particles_random(): np.random.seed(0x4D3D3D3) pos = np.random.normal(0.5, scale=0.05, size=(NPART, 3)) * (DRE - DLE) + DLE # Now convert to integers for i in range(3): np.clip(pos[:, i], DLE[i], DRE[i], pos[:, i]) # Convert to integers pos = np.floor((pos - DLE) / dx).astype("uint64") morton = get_morton_indices(pos) morton.sort() for ndom in [1, 2, 4, 8]: octree = ParticleOctreeContainer((1, 1, 1), DLE, DRE) octree.n_ref = 32 for split in np.array_split(morton, ndom): octree.add(split) octree.finalize() # This visits every oct. tc = octree.recursively_count() total_count = np.zeros(len(tc), dtype="int32") for i in sorted(tc): total_count[i] = tc[i] assert_equal(octree.nocts, total_count.sum()) # This visits every cell -- including those covered by octs. # for dom in range(ndom): # level_count += octree.count_levels(total_count.size-1, dom, mask) assert_equal(total_count, [1, 8, 64, 64, 256, 536, 1856, 1672]) class FakeDS: domain_left_edge = None domain_right_edge = None domain_width = None unit_registry = UnitRegistry() unit_registry.add("code_length", 1.0, dimensions.length) periodicity = (False, False, False) class FakeRegion: def __init__(self, nfiles, periodic=False): self.ds = FakeDS() self.ds.domain_left_edge = YTArray( [0.0, 0.0, 0.0], "code_length", registry=self.ds.unit_registry ) self.ds.domain_right_edge = YTArray( [nfiles, nfiles, nfiles], "code_length", registry=self.ds.unit_registry, dtype="float64", ) self.ds.domain_width = self.ds.domain_right_edge - self.ds.domain_left_edge self.ds.periodicity = (periodic, periodic, periodic) self.nfiles = nfiles def set_edges(self, file_id, dx=0.1): self.left_edge = YTArray( [file_id + dx, 0.0, 0.0], "code_length", registry=self.ds.unit_registry ) self.right_edge = YTArray( [file_id + 1 - dx, self.nfiles, self.nfiles], "code_length", registry=self.ds.unit_registry, ) class FakeBoxRegion: def __init__(self, nfiles, left_edge, right_edge): self.ds = FakeDS() self.ds.domain_left_edge = YTArray( left_edge, "code_length", registry=self.ds.unit_registry ) self.ds.domain_right_edge = YTArray( right_edge, "code_length", registry=self.ds.unit_registry ) self.ds.domain_width = self.ds.domain_right_edge - self.ds.domain_left_edge self.nfiles = nfiles def set_edges(self, center, width): self.left_edge = self.ds.domain_left_edge + self.ds.domain_width * ( center - width / 2 ) self.right_edge = self.ds.domain_left_edge + self.ds.domain_width * ( center + width / 2 ) def FakeBitmap( npart, nfiles, order1, order2, left_edge=None, right_edge=None, periodicity=None, decomp="sliced", buff=0.1, distrib="uniform", 
fname=None, ): if left_edge is None: left_edge = np.array([0.0, 0.0, 0.0]) if right_edge is None: right_edge = np.array([1.0, 1.0, 1.0]) if periodicity is None: periodicity = np.array([0, 0, 0], "bool") reg = ParticleBitmap( left_edge, right_edge, periodicity, 12345, nfiles, order1, order2 ) # Load from file if it exists if isinstance(fname, str) and os.path.isfile(fname): reg.load_bitmasks(fname) else: # Create positions for each file posgen = yield_fake_decomp( decomp, npart, nfiles, left_edge, right_edge, buff=buff, distrib=distrib ) # Coarse index max_npart = 0 for i, (pos, hsml) in enumerate(posgen): max_npart = max(max_npart, pos.shape[0]) reg._coarse_index_data_file(pos, hsml, i) reg._set_coarse_index_data_file(i) if i != (nfiles - 1): raise RuntimeError( f"There are positions for {i + 1} files, but there should be {nfiles}." ) # Refined index mask = reg.masks.sum(axis=1).astype("uint8") sub_mi1 = np.zeros(max_npart, "uint64") sub_mi2 = np.zeros(max_npart, "uint64") posgen = yield_fake_decomp( decomp, npart, nfiles, left_edge, right_edge, buff=buff, distrib=distrib ) coll = None for i, (pos, hsml) in enumerate(posgen): nsub_mi, coll = reg._refined_index_data_file( coll, pos, hsml, mask, sub_mi1, sub_mi2, i, 0, count_threshold=1, mask_threshold=2, ) reg.bitmasks.append(i, coll) # Save if file name provided if isinstance(fname, str): reg.save_bitmasks(fname) return reg def test_bitmap_no_collisions(): # Test init for slabs of points in x left_edge = np.array([0.0, 0.0, 0.0]) right_edge = np.array([1.0, 1.0, 1.0]) periodicity = np.array([0, 0, 0], "bool") npart = 100 nfiles = 2 file_hash = 12345 order1 = 2 order2 = 2 reg = ParticleBitmap( left_edge, right_edge, periodicity, file_hash, nfiles, order1, order2 ) # Coarse index posgen = yield_fake_decomp("sliced", npart, nfiles, left_edge, right_edge) max_npart = 0 for i, (pos, hsml) in enumerate(posgen): reg._coarse_index_data_file(pos, hsml, i) max_npart = max(max_npart, pos.shape[0]) reg._set_coarse_index_data_file(i) assert_equal(reg.count_total(i), np.sum(reg.masks[:, i])) mask = reg.masks.sum(axis=1).astype("uint8") ncoll = np.sum(mask > 1) nc, nm = reg.find_collisions_coarse() assert_equal(nc, 0, "%d coarse collisions" % nc) assert_equal(ncoll, nc, "%d in mask, %d in bitmap" % (ncoll, nc)) # Refined index sub_mi1 = np.zeros(max_npart, "uint64") sub_mi2 = np.zeros(max_npart, "uint64") posgen = yield_fake_decomp("sliced", npart, nfiles, left_edge, right_edge) coll = None for i, (pos, hsml) in enumerate(posgen): nsub_mi, coll = reg._refined_index_data_file( coll, pos, hsml, mask, sub_mi1, sub_mi2, i, 0, count_threshold=1, mask_threshold=2, ) reg.bitmasks.append(i, coll) assert_equal(reg.count_refined(i), 0) nr, nm = reg.find_collisions_refined() assert_equal(nr, 0, "%d collisions" % nr) def test_bitmap_collisions(): # Test init for slabs of points in x left_edge = np.array([0.0, 0.0, 0.0]) right_edge = np.array([1.0, 1.0, 1.0]) periodicity = np.array([0, 0, 0], "bool") nfiles = 2 file_hash = 12345 order1 = 2 order2 = 2 reg = ParticleBitmap( left_edge, right_edge, periodicity, file_hash, nfiles, order1, order2 ) # Use same points for all files to force collisions pos = cell_centers(order1 + order2, left_edge, right_edge) hsml = None # Coarse index max_npart = 0 for i in range(nfiles): reg._coarse_index_data_file(pos, hsml, i) max_npart = max(max_npart, pos.shape[0]) reg._set_coarse_index_data_file(i) assert_equal(reg.count_total(i), np.sum(reg.masks[:, i])) mask = reg.masks.sum(axis=1).astype("uint8") ncoll = np.sum(mask > 1) nc, nm = 
reg.find_collisions_coarse() assert_equal(ncoll, nc, "%d in mask, %d in bitmap" % (ncoll, nc)) assert_equal(nc, 2 ** (3 * order1), "%d coarse collisions" % nc) # Refined index sub_mi1 = np.zeros(max_npart, "uint64") sub_mi2 = np.zeros(max_npart, "uint64") for i in range(nfiles): nsub_mi, coll = reg._refined_index_data_file( None, pos, hsml, mask, sub_mi1, sub_mi2, i, 0, count_threshold=1, mask_threshold=2, ) reg.bitmasks.append(i, coll) assert_equal(reg.count_refined(i), ncoll) nr, nm = reg.find_collisions_refined() assert_equal(nr, 2 ** (3 * (order1 + order2)), "%d collisions" % nr) @requires_module("h5py") def test_bitmap_save_load(): # Test init for slabs of points in x left_edge = np.array([0.0, 0.0, 0.0]) right_edge = np.array([1.0, 1.0, 1.0]) periodicity = np.array([0, 0, 0], "bool") npart = NPART file_hash = 12345 nfiles = 32 order1 = 2 order2 = 2 fname_fmt = "temp_bitmasks{}.dat" i = 0 fname = fname_fmt.format(i) while os.path.isfile(fname): i += 1 fname = fname_fmt.format(i) # Create bitmap and save to file reg0 = FakeBitmap(npart, nfiles, order1, order2, left_edge, right_edge, periodicity) reg0.save_bitmasks(fname, 0.0) # Attempt to load bitmap reg1 = ParticleBitmap( left_edge, right_edge, periodicity, file_hash, nfiles, order1, order2 ) reg1.load_bitmasks(fname) assert reg0.iseq_bitmask(reg1) # Remove file os.remove(fname) def test_bitmap_select(): np.random.seed(0x4D3D3D3) dx = 0.1 for periodic in [False, True]: for nfiles in [2, 15, 31, 32, 33]: # Now we create particles # Note: we set order1 to log2(nfiles) here for testing purposes to # ensure no collisions order1 = int(np.ceil(np.log2(nfiles))) # Ensures zero collisions order2 = 2 # No overlap for N = nfiles exact_division = nfiles == (1 << order1) div = float(nfiles) / float(1 << order1) reg = FakeBitmap( nfiles**3, nfiles, order1, order2, decomp="grid", left_edge=np.array([0.0, 0.0, 0.0]), right_edge=np.array([nfiles, nfiles, nfiles]), periodicity=np.array([periodic, periodic, periodic]), ) # Loop over regions selecting single files fr = FakeRegion(nfiles, periodic=periodic) for i in range(nfiles): fr.set_edges(i, dx) selector = RegionSelector(fr) (df, gf), (dmask, gmask) = reg.identify_data_files(selector, ngz=1) if exact_division: assert_equal(len(df), 1, f"selector {i}, number of files") assert_equal(df[0], i, f"selector {i}, file selected") if periodic and (nfiles != 2): ans_gf = sorted([(i - 1) % nfiles, (i + 1) % nfiles]) elif i == 0: ans_gf = [i + 1] elif i == (nfiles - 1): ans_gf = [i - 1] else: ans_gf = [i - 1, i + 1] assert_equal( len(gf), len(ans_gf), f"selector {i}, number of ghost files", ) for i in range(len(gf)): assert_equal(gf[i], ans_gf[i], f"selector {i}, ghost files") else: lf_frac = np.floor(float(fr.left_edge[0]) / div) * div rf_frac = np.floor(float(fr.right_edge[0]) / div) * div # Selected files lf = int( np.floor(lf_frac) if ((lf_frac % 0.5) == 0) else np.round(lf_frac) ) rf = int( np.floor(rf_frac) if ((rf_frac % 0.5) == 0) else np.round(rf_frac) ) if (rf + 0.5) >= (rf_frac + div): rf -= 1 if (lf + 0.5) <= (lf_frac - div): lf += 1 df_ans = np.arange(max(lf, 0), min(rf + 1, nfiles)) assert_array_equal(df, df_ans, f"selector {i}, file array") # Ghost zones selected files lf_ghost = int( np.floor(lf_frac - div) if (((lf_frac - div) % 0.5) == 0) else np.round(lf_frac - div) ) rf_ghost = int( np.floor(rf_frac + div) if (((rf_frac + div) % 0.5) == 0) else np.round(rf_frac + div) ) if not periodic: lf_ghost = max(lf_ghost, 0) rf_ghost = min(rf_ghost, nfiles - 1) if (rf_ghost + 0.5) >= (rf_frac + 2 * 
div): rf_ghost -= 1 gf_ans = [] if lf_ghost < lf: gf_ans.append(lf_ghost % nfiles) if rf_ghost > rf: gf_ans.append(rf_ghost % nfiles) gf_ans = np.array(sorted(gf_ans)) assert_array_equal(gf, gf_ans, f"selector {i}, ghost file array") def cell_centers(order, left_edge, right_edge): ndim = left_edge.size ncells = 2**order dx = (right_edge - left_edge) / (2 * ncells) d = [ np.linspace(left_edge[i] + dx[i], right_edge[i] - dx[i], ncells) for i in range(ndim) ] dd = np.meshgrid(*d) return np.vstack([x.flatten() for x in dd]).T def fake_decomp_random(npart, nfiles, ifile, DLE, DRE, buff=0.0): np.random.seed(0x4D3D3D3 + ifile) nPF = int(npart / nfiles) nR = npart % nfiles if ifile == 0: nPF += nR pos = np.empty((nPF, 3), "float64") for i in range(3): pos[:, i] = np.random.uniform(DLE[i], DRE[i], nPF) return pos def fake_decomp_sliced(npart, nfiles, ifile, DLE, DRE, buff=0.0): np.random.seed(0x4D3D3D3 + ifile) DW = DRE - DLE div = DW / nfiles nPF = int(npart / nfiles) nR = npart % nfiles inp = nPF if ifile == 0: inp += nR iLE = DLE[0] + ifile * div[0] iRE = iLE + div[0] if ifile != 0: iLE -= buff * div[0] if ifile != (nfiles - 1): iRE += buff * div[0] pos = np.empty((inp, 3), dtype="float") pos[:, 0] = np.random.uniform(iLE, iRE, inp) for i in range(1, 3): pos[:, i] = np.random.uniform(DLE[i], DRE[i], inp) return pos def makeall_decomp_hilbert_gaussian( npart, nfiles, DLE, DRE, buff=0.0, order=6, verbose=False, fname_base=None, nchunk=10, width=None, center=None, frac_random=0.1, ): import pickle np.random.seed(0x4D3D3D3) DW = DRE - DLE if fname_base is None: fname_base = f"hilbert{order}_gaussian_np{npart}_nf{nfiles}_" if width is None: width = 0.1 * DW if center is None: center = DLE + 0.5 * DW def load_pos(file_id): filename = fname_base + f"file{file_id}" if os.path.isfile(filename): fd = open(filename, "rb") positions = pickle.load(fd) fd.close() else: positions = np.empty((0, 3), dtype="float64") return positions def save_pos(file_id, positions): filename = fname_base + f"file{file_id}" fd = open(filename, "wb") pickle.dump(positions, fd) fd.close() npart_rnd = int(frac_random * npart) npart_gau = npart - npart_rnd dim_hilbert = 1 << order nH = dim_hilbert**3 if nH < nfiles: raise ValueError("Fewer hilbert cells than files.") nHPF = nH / nfiles rHPF = nH % nfiles for ichunk in range(nchunk): inp = npart_gau / nchunk if ichunk == 0: inp += npart_gau % nchunk pos = np.empty((inp, 3), dtype="float64") ind = np.empty((inp, 3), dtype="int64") for k in range(3): pos[:, k] = np.clip( np.random.normal(center[k], width[k], inp), DLE[k], DRE[k] - (1.0e-9) * DW[k], ) ind[:, k] = (pos[:, k] - DLE[k]) / (DW[k] / dim_hilbert) harr = get_hilbert_indices(order, ind) farr = (harr - rHPF) / nHPF for ifile in range(nfiles): ipos = load_pos(ifile) if ifile == 0: idx = farr <= ifile # Put remainders in first file else: idx = farr == ifile ipos = np.concatenate((ipos, pos[idx, :]), axis=0) save_pos(ifile, ipos) # Random for ifile in range(nfiles): ipos = load_pos(ifile) ipos_rnd = fake_decomp_hilbert_uniform( npart_rnd, nfiles, ifile, DLE, DRE, buff=buff, order=order, verbose=verbose ) ipos = np.concatenate((ipos, ipos_rnd), axis=0) save_pos(ifile, ipos) def fake_decomp_hilbert_gaussian( npart, nfiles, ifile, DLE, DRE, buff=0.0, order=6, verbose=False, fname=None ): np.random.seed(0x4D3D3D3) DW = DRE - DLE dim_hilbert = 1 << order nH = dim_hilbert**3 if nH < nfiles: raise Exception("Fewer hilbert cells than files.") nHPF = nH / nfiles rHPF = nH % nfiles hdiv = DW / dim_hilbert if ifile == 0: hlist = np.arange(0, 
nHPF + rHPF, dtype="int64") else: hlist = np.arange(ifile * nHPF + rHPF, (ifile + 1) * nHPF + rHPF, dtype="int64") hpos = get_hilbert_points(order, hlist) iLE = np.empty((len(hlist), 3), dtype="float") iRE = np.empty((len(hlist), 3), dtype="float") count = np.zeros(3, dtype="int64") pos = np.empty((npart, 3), dtype="float") for k in range(3): iLE[:, k] = DLE[k] + hdiv[k] * hpos[:, k] iRE[:, k] = iLE[:, k] + hdiv[k] iLE[hpos[:, k] != 0, k] -= buff * hdiv[k] iRE[hpos[:, k] != (dim_hilbert - 1), k] += buff * hdiv[k] gpos = np.clip( np.random.normal(DLE[k] + DW[k] / 2.0, DW[k] / 10.0, npart), DLE[k], DRE[k] ) for ipos in gpos: for i in range(len(hlist)): if iLE[i, k] <= ipos < iRE[i, k]: pos[count[k], k] = ipos count[k] += 1 break return pos[: count.min(), :] def fake_decomp_hilbert_uniform( npart, nfiles, ifile, DLE, DRE, buff=0.0, order=6, verbose=False ): np.random.seed(0x4D3D3D3 + ifile) DW = DRE - DLE dim_hilbert = 1 << order nH = dim_hilbert**3 if nH < nfiles: raise Exception("Fewer hilbert cells than files.") nHPF = nH / nfiles rHPF = nH % nfiles nPH = npart / nH nRH = npart % nH hind = np.arange(nH, dtype="int64") hpos = get_hilbert_points(order, hind) hdiv = DW / dim_hilbert if ifile == 0: hlist = range(0, nHPF + rHPF) nptot = nPH * len(hlist) + nRH else: hlist = range(ifile * nHPF + rHPF, (ifile + 1) * nHPF + rHPF) nptot = nPH * len(hlist) pos = np.empty((nptot, 3), dtype="float") pc = 0 for i in hlist: iLE = DLE + hdiv * hpos[i, :] iRE = iLE + hdiv for k in range(3): # Don't add buffer past domain bounds if hpos[i, k] != 0: iLE[k] -= buff * hdiv[k] if hpos[i, k] != (dim_hilbert - 1): iRE[k] += buff * hdiv[k] inp = nPH if (ifile == 0) and (i == 0): inp += nRH for k in range(3): pos[pc : (pc + inp), k] = np.random.uniform(iLE[k], iRE[k], inp) pc += inp return pos def fake_decomp_morton( npart, nfiles, ifile, DLE, DRE, buff=0.0, order=6, verbose=False ): np.random.seed(0x4D3D3D3 + ifile) DW = DRE - DLE dim_morton = 1 << order nH = dim_morton**3 if nH < nfiles: raise Exception("Fewer morton cells than files.") nHPF = nH / nfiles rHPF = nH % nfiles nPH = npart / nH nRH = npart % nH hind = np.arange(nH, dtype="uint64") hpos = get_morton_points(hind) hdiv = DW / dim_morton if ifile == 0: hlist = range(0, nHPF + rHPF) nptot = nPH * len(hlist) + nRH else: hlist = range(ifile * nHPF + rHPF, (ifile + 1) * nHPF + rHPF) nptot = nPH * len(hlist) pos = np.empty((nptot, 3), dtype="float") pc = 0 for i in hlist: iLE = DLE + hdiv * hpos[i, :] iRE = iLE + hdiv for k in range(3): # Don't add buffer past domain bounds if hpos[i, k] != 0: iLE[k] -= buff * hdiv[k] if hpos[i, k] != (dim_morton - 1): iRE[k] += buff * hdiv[k] inp = nPH if (ifile == 0) and (i == 0): inp += nRH for k in range(3): pos[pc : (pc + inp), k] = np.random.uniform(iLE[k], iRE[k], inp) pc += inp return pos def fake_decomp_grid(npart, nfiles, ifile, DLE, DRE, buff=0.0, verbose=False): # TODO: handle 'remainder' particles np.random.seed(0x4D3D3D3 + ifile) DW = DRE - DLE nYZ = int(np.sqrt(npart / nfiles)) div = DW / nYZ Y, Z = np.mgrid[ DLE[1] + 0.1 * div[1] : DRE[1] - 0.1 * div[1] : nYZ * 1j, DLE[2] + 0.1 * div[2] : DRE[2] - 0.1 * div[2] : nYZ * 1j, ] X = 0.5 * div[0] * np.ones(Y.shape, dtype="float64") + div[0] * ifile pos = np.array([X.ravel(), Y.ravel(), Z.ravel()], dtype="float64").transpose() return pos def yield_fake_decomp(decomp, npart, nfiles, DLE, DRE, **kws): hsml = None for ifile in range(nfiles): yield fake_decomp(decomp, npart, nfiles, ifile, DLE, DRE, **kws), hsml def fake_decomp( decomp, npart, nfiles, ifile, DLE, DRE, 
distrib="uniform", fname=None, **kws ): import pickle if fname is None and distrib == "gaussian": fname = f"{decomp}6_{distrib}_np{npart}_nf{nfiles}_file{ifile}" if fname is not None and os.path.isfile(fname): fd = open(fname, "rb") pos = pickle.load(fd) fd.close() return pos if decomp.startswith("zoom_"): zoom_factor = 5 decomp_zoom = decomp.split("zoom_")[-1] zoom_npart = npart / 2 zoom_rem = npart % 2 pos1 = fake_decomp( decomp_zoom, zoom_npart + zoom_rem, nfiles, ifile, DLE, DRE, distrib=distrib, **kws, ) DLE_zoom = DLE + 0.5 * DW * (1.0 - 1.0 / float(zoom_factor)) DRE_zoom = DLE_zoom + DW / zoom_factor pos2 = fake_decomp( decomp_zoom, zoom_npart, nfiles, ifile, DLE_zoom, DRE_zoom, distrib=distrib, **kws, ) pos = np.concatenate((pos1, pos2), axis=0) elif "_" in decomp: decomp_list = decomp.split("_") decomp_np = npart / len(decomp_list) decomp_nr = npart % len(decomp_list) pos = np.empty((0, 3), dtype="float") for i, idecomp in enumerate(decomp_list): inp = decomp_np if i == 0: inp += decomp_nr ipos = fake_decomp( idecomp, inp, nfiles, ifile, DLE, DRE, distrib=distrib, **kws ) pos = np.concatenate((pos, ipos), axis=0) # A perfect grid, no overlap between files elif decomp == "grid": pos = fake_decomp_grid(npart, nfiles, ifile, DLE, DRE, **kws) # Completely random data set elif decomp == "random": if distrib == "uniform": pos = fake_decomp_random(npart, nfiles, ifile, DLE, DRE, **kws) else: raise ValueError( f"Unsupported value {distrib} for input parameter 'distrib'" ) # Each file contains a slab (part of x domain, all of y/z domain) elif decomp == "sliced": if distrib == "uniform": pos = fake_decomp_sliced(npart, nfiles, ifile, DLE, DRE, **kws) else: raise ValueError( f"Unsupported value {distrib} for input parameter 'distrib'" ) # Particles are assigned to files based on their location on a # Peano-Hilbert curve of order 6 elif decomp.startswith("hilbert"): if decomp == "hilbert": kws["order"] = 6 else: kws["order"] = int(decomp.split("hilbert")[-1]) if distrib == "uniform": pos = fake_decomp_hilbert_uniform(npart, nfiles, ifile, DLE, DRE, **kws) elif distrib == "gaussian": makeall_decomp_hilbert_gaussian( npart, nfiles, DLE, DRE, fname_base=fname.split("file")[0], **kws ) pos = fake_decomp( decomp, npart, nfiles, ifile, DLE, DRE, distrib=distrib, fname=fname, **kws, ) else: raise ValueError( f"Unsupported value {distrib} for input parameter 'distrib'" ) # Particles are assigned to files based on their location on a # Morton ordered Z-curve of order 6 elif decomp.startswith("morton"): if decomp == "morton": kws["order"] = 6 else: kws["order"] = int(decomp.split("morton")[-1]) if distrib == "uniform": pos = fake_decomp_morton(npart, nfiles, ifile, DLE, DRE, **kws) else: raise ValueError( f"Unsupported value {distrib} for input parameter 'distrib'" ) else: raise ValueError(f"Unsupported value {decomp} for input parameter 'decomp'") # Save if fname is not None: fd = open(fname, "wb") pickle.dump(pos, fd) fd.close() return pos ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/tests/test_pickleable_selections.py0000644000175100001770000000220714714401662023153 0ustar00runnerdockerimport pickle import numpy as np from numpy.testing import assert_equal from yt.testing import fake_particle_ds def test_pickleability(): # tests the pickleability of the selection objects. 
def pickle_test(sel_obj): assert_fields = sel_obj._get_state_attnames() # the attrs used in get/set state new_sel_obj = pickle.loads(pickle.dumps(sel_obj)) for attr in assert_fields: assert_equal(getattr(new_sel_obj, attr), getattr(sel_obj, attr)) # list of selection types and argument tuples for each selection type c = np.array([0.5, 0.5, 0.5]) sargs = ( ("point", (c,)), ("sphere", (c, 0.25)), ("box", (c - 0.3, c)), ("ellipsoid", (c, 0.3, 0.2, 0.1, c - 0.4, 0.2)), ("disk", (c, [1, 0, 0], 0.2, 0.2)), ("cutting", ([0.1, 0.2, -0.9], [0.5, 0.42, 0.6])), ("ortho_ray", ("z", c)), ("ray", (c, [0.1, 0.1, 0.1])), ) # load fake data ds = fake_particle_ds() for sel_type, args in sargs: sel = getattr(ds, sel_type)(*args) # instantiate this selection type pickle_test(sel.selector) # make sure it (un)pickles ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/unstructured_mesh_handler.py0000644000175100001770000000565414714401662021720 0ustar00runnerdockerimport os import weakref import numpy as np from yt.geometry.geometry_handler import Index, YTDataChunk from yt.utilities.lib.mesh_utilities import smallest_fwidth from yt.utilities.logger import ytLogger as mylog class UnstructuredIndex(Index): """The Index subclass for unstructured and hexahedral mesh datasets.""" _unsupported_objects = ("proj", "covering_grid", "smoothed_covering_grid") def __init__(self, ds, dataset_type): self.dataset_type = dataset_type self.dataset = weakref.proxy(ds) self.index_filename = self.dataset.parameter_filename self.directory = os.path.dirname(self.index_filename) self.float_type = np.float64 super().__init__(ds, dataset_type) def _setup_geometry(self): mylog.debug("Initializing Unstructured Mesh Geometry Handler.") self._initialize_mesh() def get_smallest_dx(self): """ Returns (in code units) the smallest cell size in the simulation. """ dx = min( smallest_fwidth( mesh.connectivity_coords, mesh.connectivity_indices, mesh._index_offset ) for mesh in self.meshes ) return dx def convert(self, unit): return self.dataset.conversion_factors[unit] def _initialize_mesh(self): raise NotImplementedError def _identify_base_chunk(self, dobj): if getattr(dobj, "_chunk_info", None) is None: dobj._chunk_info = self.meshes if getattr(dobj, "size", None) is None: dobj.size = self._count_selection(dobj) dobj._current_chunk = list(self._chunk_all(dobj))[0] def _count_selection(self, dobj, meshes=None): if meshes is None: meshes = dobj._chunk_info count = sum(m.count(dobj.selector) for m in meshes) return count def _chunk_all(self, dobj, cache=True): oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info) yield YTDataChunk(dobj, "all", oobjs, dobj.size, cache) def _chunk_spatial(self, dobj, ngz, sort=None, preload_fields=None): sobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info) # This is where we will perform cutting of the Octree and # load-balancing. That may require a specialized selector object to # cut based on some space-filling curve index. 
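# For now each mesh is its own spatial chunk; when ngz > 0 the mesh is # replaced by a ghost-zone-padded copy, and empty selections are skipped.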
for og in sobjs: if ngz > 0: g = og.retrieve_ghost_zones(ngz, [], smoothed=True) else: g = og size = self._count_selection(dobj, [og]) if size == 0: continue yield YTDataChunk(dobj, "spatial", [g], size) def _chunk_io(self, dobj, cache=True, local_only=False): oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info) for subset in oobjs: s = self._count_selection(dobj, oobjs) yield YTDataChunk(dobj, "io", [subset], s, cache=cache) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/geometry/vectorized_ops.h0000644000175100001770000000012714714401662017264 0ustar00runnerdockertypedef double v4df __attribute__ ((vector_size(32))); // vector of four double floats ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/loaders.py0000644000175100001770000021516014714401662014231 0ustar00runnerdocker""" This module gathers all user-facing functions with a `load_` prefix. """ import atexit import os import sys import time import types import warnings from collections.abc import Mapping from pathlib import Path from typing import TYPE_CHECKING, Any, Union, cast from urllib.parse import urlsplit import numpy as np from more_itertools import always_iterable from yt._maintenance.deprecation import ( future_positional_only, issue_deprecation_warning, ) from yt._typing import AnyFieldKey, AxisOrder, FieldKey from yt.data_objects.static_output import Dataset from yt.funcs import levenshtein_distance from yt.sample_data.api import lookup_on_disk_data from yt.utilities.decompose import decompose_array, get_psize from yt.utilities.exceptions import ( MountError, YTAmbiguousDataType, YTIllDefinedAMR, YTSimulationNotIdentified, YTUnidentifiedDataType, ) from yt.utilities.hierarchy_inspection import find_lowest_subclasses from yt.utilities.lib.misc_utilities import get_box_grids_level from yt.utilities.logger import ytLogger as mylog from yt.utilities.object_registries import ( output_type_registry, simulation_time_series_registry, ) from yt.utilities.on_demand_imports import _pooch as pooch, _ratarmount as ratarmount from yt.utilities.parallel_tools.parallel_analysis_interface import ( parallel_root_only_then_broadcast, ) if TYPE_CHECKING: from multiprocessing.connection import Connection # --- Loaders for known data formats --- # FUTURE: embedded warnings need to have their stacklevel decremented when this decorator is removed @future_positional_only({0: "fn"}, since="4.2") def load(fn: Union[str, "os.PathLike[str]"], *args, hint: str | None = None, **kwargs): """ Load a Dataset or DatasetSeries object. The data format is automatically discovered, and the exact return type is the corresponding subclass of :class:`yt.data_objects.static_output.Dataset`. A :class:`yt.data_objects.time_series.DatasetSeries` is created if the first argument is a pattern. Parameters ---------- fn : str, os.Pathlike[str] A path to the data location. This can be a file name, directory name, a glob pattern, or a url (for data types that support it). hint : str, optional Only classes whose name include a hint are considered. If loading fails with a YTAmbiguousDataType exception, this argument can be used to lift ambiguity. Hints are case insensitive. Additional arguments, if any, are passed down to the return class. Returns ------- :class:`yt.data_objects.static_output.Dataset` object If fn is a single path, create a Dataset from the appropriate subclass. 
:class:`yt.data_objects.time_series.DatasetSeries` If fn is a glob pattern (i.e. containing wildcards '[]?!*'), create a series. Raises ------ FileNotFoundError If fn does not match any existing file or directory. yt.utilities.exceptions.YTUnidentifiedDataType If fn matches existing files or directories with undetermined format. yt.utilities.exceptions.YTAmbiguousDataType If the data format matches more than one class of similar specialization levels. """ from importlib.metadata import entry_points from yt.frontends import _all # type: ignore [attr-defined] # noqa _input_fn = fn fn = os.path.expanduser(fn) if any(wildcard in fn for wildcard in "[]?!*"): from yt.data_objects.time_series import DatasetSeries return DatasetSeries(fn, *args, hint=hint, **kwargs) # This will raise FileNotFoundError if the path isn't matched # either in the current dir or yt.config.ytcfg['data_dir_directory'] if not fn.startswith("http"): fn = str(lookup_on_disk_data(fn)) external_frontends = entry_points(group="yt.frontends") # Ensure that external frontends are loaded for entrypoint in external_frontends: entrypoint.load() candidates: list[type[Dataset]] = [] for cls in output_type_registry.values(): if cls._is_valid(fn, *args, **kwargs): candidates.append(cls) # Filter the candidates if a hint was given if hint is not None: candidates = [c for c in candidates if hint.lower() in c.__name__.lower()] # Find only the lowest subclasses, i.e. most specialised front ends candidates = find_lowest_subclasses(candidates) if len(candidates) == 1: cls = candidates[0] if missing := cls._missing_load_requirements(): warnings.warn( f"This dataset appears to be of type {cls.__name__}, " "but the following requirements are currently missing: " f"{', '.join(missing)}\n" "Please verify your installation.", stacklevel=3, ) return cls(fn, *args, **kwargs) if len(candidates) > 1: raise YTAmbiguousDataType(_input_fn, candidates) raise YTUnidentifiedDataType(_input_fn, *args, **kwargs) def load_simulation(fn, simulation_type, find_outputs=False): """ Load a simulation time series object of the specified simulation type. Parameters ---------- fn : str, os.Pathlike, or byte (types supported by os.path.expandusers) Name of the data file or directory. simulation_type : str E.g. 'Enzo' find_outputs : bool Defaults to False Raises ------ FileNotFoundError If fn is not found. yt.utilities.exceptions.YTSimulationNotIdentified If simulation_type is unknown. 
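Examples
--------
A minimal invocation (the parameter file shown is one of yt's sample Enzo datasets; substitute your own output):

>>> import yt
>>> es = yt.load_simulation("enzo_tiny_cosmology/32Mpc_32.enzo", "Enzo")
>>> es.get_time_series()  # doctest: +SKIP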
""" from yt.frontends import _all # noqa fn = str(lookup_on_disk_data(fn)) try: cls = simulation_time_series_registry[simulation_type] except KeyError as e: raise YTSimulationNotIdentified(simulation_type) from e return cls(fn, find_outputs=find_outputs) # --- Loaders for generic ("stream") data --- def _sanitize_axis_order_args( geometry: str | tuple[str, AxisOrder], axis_order: AxisOrder | None ) -> tuple[str, AxisOrder | None]: # this entire function should be removed at the end of its deprecation cycle geometry_str: str if isinstance(geometry, tuple): issue_deprecation_warning( f"Received a tuple as {geometry=}\n" "Use the `axis_order` argument instead.", since="4.2", stacklevel=4, ) geometry_str, axis_order = geometry else: geometry_str = geometry return geometry_str, axis_order def load_uniform_grid( data, domain_dimensions, length_unit=None, bbox=None, nprocs=1, sim_time=0.0, mass_unit=None, time_unit=None, velocity_unit=None, magnetic_unit=None, periodicity=(True, True, True), geometry="cartesian", unit_system="cgs", default_species_fields=None, *, axis_order: AxisOrder | None = None, cell_widths=None, parameters=None, dataset_name: str = "UniformGridData", ): r"""Load a uniform grid of data into yt as a :class:`~yt.frontends.stream.data_structures.StreamHandler`. This should allow a uniform grid of data to be loaded directly into yt and analyzed as would any others. This comes with several caveats: * Units will be incorrect unless the unit system is explicitly specified. * Some functions may behave oddly, and parallelism will be disappointing or non-existent in most cases. * Particles may be difficult to integrate. Particle fields are detected as one-dimensional fields. Parameters ---------- data : dict This is a dict of numpy arrays, (numpy array, unit spec) tuples. Functions may also be supplied in place of numpy arrays as long as the subsequent argument nprocs is not specified to be greater than 1. Supplied functions much accepts the arguments (grid_object, field_name) and return numpy arrays. The keys to the dict are the field names. domain_dimensions : array_like This is the domain dimensions of the grid length_unit : string Unit to use for lengths. Defaults to unitless. bbox : array_like (xdim:zdim, LE:RE), optional Size of computational domain in units specified by length_unit. Defaults to a cubic unit-length domain. nprocs: integer, optional If greater than 1, will create this number of subarrays out of data sim_time : float, optional The simulation time in seconds mass_unit : string Unit to use for masses. Defaults to unitless. time_unit : string Unit to use for times. Defaults to unitless. velocity_unit : string Unit to use for velocities. Defaults to unitless. magnetic_unit : string Unit to use for magnetic fields. Defaults to unitless. periodicity : tuple of booleans Determines whether the data will be treated as periodic along each axis geometry : string (or tuple, deprecated) "cartesian", "cylindrical", "polar", "spherical", "geographic" or "spectral_cube". [DEPRECATED]: Optionally, a tuple can be provided to specify the axis ordering. For instance, to specify that the axis ordering should be z, x, y, this would be: ("cartesian", ("z", "x", "y")). The same can be done for other coordinates, for instance: ("spherical", ("theta", "phi", "r")). default_species_fields : string, optional If set, default species fields are created for H and He which also determine the mean molecular weight. Options are "ionized" and "neutral". 
axis_order: tuple of three strings, optional Force axis ordering, e.g. ("z", "y", "x") with cartesian geometry. Otherwise use geometry-specific default ordering. cell_widths: list, optional If set, cell_widths is a list of arrays with an array for each dimension, specifying the cell spacing in that dimension. Must be consistent with the domain_dimensions. parameters: dictionary, optional Optional dictionary used to populate the dataset parameters, useful for storing dataset metadata. dataset_name: string, optional Optional string used to assign a name to the dataset. Stream datasets will use this value in place of a filename (in image prefixing, etc.) Examples -------- >>> np.random.seed(int(0x4D3D3D3)) >>> bbox = np.array([[0.0, 1.0], [-1.5, 1.5], [1.0, 2.5]]) >>> arr = np.random.random((128, 128, 128)) >>> data = dict(density=arr) >>> ds = load_uniform_grid(data, arr.shape, length_unit="cm", bbox=bbox, nprocs=12) >>> dd = ds.all_data() >>> dd["gas", "density"] unyt_array([0.76017901, 0.96855994, 0.49205428, ..., 0.78798258, 0.97569432, 0.99453904], 'g/cm**3') """ from yt.frontends.stream.data_structures import ( StreamDataset, StreamDictFieldHandler, StreamHandler, ) from yt.frontends.stream.definitions import ( assign_particle_data, process_data, set_particle_types, ) from yt.frontends.stream.misc import _validate_cell_widths geometry, axis_order = _sanitize_axis_order_args(geometry, axis_order) domain_dimensions = np.array(domain_dimensions) if bbox is None: bbox = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]], "float64") domain_left_edge = np.array(bbox[:, 0], "float64") domain_right_edge = np.array(bbox[:, 1], "float64") grid_levels = np.zeros(nprocs, dtype="int32").reshape((nprocs, 1)) # First we fix our field names, apply units to data # and check for consistency of field shapes field_units, data, number_of_particles = process_data( data, grid_dims=tuple(domain_dimensions), allow_callables=nprocs == 1 ) sfh = StreamDictFieldHandler() if number_of_particles > 0: particle_types = set_particle_types(data) # Used much further below.
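# (Editor's note) Fields stored as 1D arrays, or keyed with an ("io", name) # tuple, are treated as particle fields: they are popped from `data` into # `pdata` below and attached to the dataset afterwards via assign_particle_data().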
pdata: dict[str | FieldKey, Any] = {"number_of_particles": number_of_particles} for key in list(data.keys()): if len(data[key].shape) == 1 or key[0] == "io": field: FieldKey if not isinstance(key, tuple): field = ("io", key) mylog.debug("Reassigning '%s' to '%s'", key, field) else: key = cast(FieldKey, key) field = key sfh._additional_fields += (field,) pdata[field] = data.pop(key) else: particle_types = {} if cell_widths is not None: cell_widths = _validate_cell_widths(cell_widths, domain_dimensions) if nprocs > 1: temp = {} new_data = {} # type: ignore [var-annotated] for key in data.keys(): psize = get_psize(np.array(data[key].shape), nprocs) ( grid_left_edges, grid_right_edges, shapes, slices, grid_cell_widths, ) = decompose_array( data[key].shape, psize, bbox, cell_widths=cell_widths, ) cell_widths = grid_cell_widths grid_dimensions = np.array(list(shapes), dtype="int32") temp[key] = [data[key][slice] for slice in slices] for gid in range(nprocs): new_data[gid] = {} for key in temp.keys(): new_data[gid].update({key: temp[key][gid]}) sfh.update(new_data) del new_data, temp else: sfh.update({0: data}) grid_left_edges = domain_left_edge grid_right_edges = domain_right_edge grid_dimensions = domain_dimensions.reshape(nprocs, 3).astype("int32") if cell_widths is not None: cell_widths = [ cell_widths, ] if length_unit is None: length_unit = "code_length" if mass_unit is None: mass_unit = "code_mass" if time_unit is None: time_unit = "code_time" if velocity_unit is None: velocity_unit = "code_velocity" if magnetic_unit is None: magnetic_unit = "code_magnetic" handler = StreamHandler( grid_left_edges, grid_right_edges, grid_dimensions, grid_levels, -np.ones(nprocs, dtype="int64"), np.zeros(nprocs, dtype="int64").reshape(nprocs, 1), # particle count np.zeros(nprocs).reshape((nprocs, 1)), sfh, field_units, (length_unit, mass_unit, time_unit, velocity_unit, magnetic_unit), particle_types=particle_types, periodicity=periodicity, cell_widths=cell_widths, parameters=parameters, ) handler.name = dataset_name # type: ignore [attr-defined] handler.domain_left_edge = domain_left_edge # type: ignore [attr-defined] handler.domain_right_edge = domain_right_edge # type: ignore [attr-defined] handler.refine_by = 2 # type: ignore [attr-defined] if np.all(domain_dimensions[1:] == 1): dimensionality = 1 elif domain_dimensions[2] == 1: dimensionality = 2 else: dimensionality = 3 handler.dimensionality = dimensionality # type: ignore [attr-defined] handler.domain_dimensions = domain_dimensions # type: ignore [attr-defined] handler.simulation_time = sim_time # type: ignore [attr-defined] handler.cosmology_simulation = 0 # type: ignore [attr-defined] sds = StreamDataset( handler, geometry=geometry, axis_order=axis_order, unit_system=unit_system, default_species_fields=default_species_fields, ) # Now figure out where the particles go if number_of_particles > 0: # This will update the stream handler too assign_particle_data(sds, pdata, bbox) return sds def load_amr_grids( grid_data, domain_dimensions, bbox=None, sim_time=0.0, length_unit=None, mass_unit=None, time_unit=None, velocity_unit=None, magnetic_unit=None, periodicity=(True, True, True), geometry="cartesian", refine_by=2, unit_system="cgs", default_species_fields=None, *, parameters=None, dataset_name: str = "AMRGridData", axis_order: AxisOrder | None = None, ): r"""Load a set of grids of data into yt as a :class:`~yt.frontends.stream.data_structures.StreamHandler`. 
This should allow a sequence of grids of varying resolution of data to be loaded directly into yt and analyzed as would any others. This comes with several caveats: * Units will be incorrect unless the unit system is explicitly specified. * Some functions may behave oddly, and parallelism will be disappointing or non-existent in most cases. * Particles may be difficult to integrate. * No consistency checks are performed on the index. Parameters ---------- grid_data : list of dicts This is a list of dicts. Each dict must have entries "left_edge", "right_edge", "dimensions", "level", and then any remaining entries are assumed to be fields. Field entries must map to an NDArray *or* a function with the signature (grid_object, field_name) -> NDArray. The grid_data may also include a particle count. If no particle count is supplied, the dataset is understood to contain no particles. The grid_data will be modified in place and can't be assumed to be static. Grid data may also be supplied as a tuple of (NDarray or function, unit string) to specify the units. domain_dimensions : array_like These are the domain dimensions of the grid. length_unit : string or float Unit to use for lengths. Defaults to unitless. If set to be a string, the bbox dimensions are assumed to be in the corresponding units. If set to a float, the value is assumed to be the conversion from bbox dimensions to centimeters. mass_unit : string or float Unit to use for masses. Defaults to unitless. time_unit : string or float Unit to use for times. Defaults to unitless. velocity_unit : string or float Unit to use for velocities. Defaults to unitless. magnetic_unit : string or float Unit to use for magnetic fields. Defaults to unitless. bbox : array_like (xdim:zdim, LE:RE), optional Size of computational domain in units specified by length_unit. Defaults to a cubic unit-length domain. sim_time : float, optional The simulation time in seconds periodicity : tuple of booleans Determines whether the data will be treated as periodic along each axis geometry : string (or tuple, deprecated) "cartesian", "cylindrical", "polar", "spherical", "geographic" or "spectral_cube". [DEPRECATED]: Optionally, a tuple can be provided to specify the axis ordering. For instance, to specify that the axis ordering should be z, x, y, this would be: ("cartesian", ("z", "x", "y")). The same can be done for other coordinates, for instance: ("spherical", ("theta", "phi", "r")). refine_by : integer or list/array of integers. Specifies the refinement ratio between levels. Defaults to 2. This can be an array, in which case it specifies the ratio for each dimension. For instance, this can be used to say that some datasets have refinement of 1 in one dimension, indicating that they span the full range in that dimension. default_species_fields : string, optional If set, default species fields are created for H and He which also determine the mean molecular weight. Options are "ionized" and "neutral". parameters: dictionary, optional Optional dictionary used to populate the dataset parameters, useful for storing dataset metadata. dataset_name: string, optional Optional string used to assign a name to the dataset. Stream datasets will use this value in place of a filename (in image prefixing, etc.) axis_order: tuple of three strings, optional Force axis ordering, e.g. ("z", "y", "x") with cartesian geometry. Otherwise use geometry-specific default ordering. Examples -------- >>> grid_data = [ ... dict( ... left_edge=[0.0, 0.0, 0.0], ... right_edge=[1.0, 1.0, 1.0], ... level=0, ...
dimensions=[32, 32, 32], ... number_of_particles=0, ... ), ... dict( ... left_edge=[0.25, 0.25, 0.25], ... right_edge=[0.75, 0.75, 0.75], ... level=1, ... dimensions=[32, 32, 32], ... number_of_particles=0, ... ), ... ] ... >>> for g in grid_data: ... g["gas", "density"] = ( ... np.random.random(g["dimensions"]) * 2 ** g["level"], ... "g/cm**3", ... ) ... >>> ds = load_amr_grids(grid_data, [32, 32, 32], length_unit=1.0) """ from yt.frontends.stream.data_structures import ( StreamDataset, StreamDictFieldHandler, StreamHandler, ) from yt.frontends.stream.definitions import process_data, set_particle_types geometry, axis_order = _sanitize_axis_order_args(geometry, axis_order) domain_dimensions = np.array(domain_dimensions) ngrids = len(grid_data) if bbox is None: bbox = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]], "float64") domain_left_edge = np.array(bbox[:, 0], "float64") domain_right_edge = np.array(bbox[:, 1], "float64") grid_levels = np.zeros((ngrids, 1), dtype="int32") grid_left_edges = np.zeros((ngrids, 3), dtype="float64") grid_right_edges = np.zeros((ngrids, 3), dtype="float64") grid_dimensions = np.zeros((ngrids, 3), dtype="int32") number_of_particles = np.zeros((ngrids, 1), dtype="int64") parent_ids = np.zeros(ngrids, dtype="int64") - 1 sfh = StreamDictFieldHandler() for i, g in enumerate(grid_data): grid_left_edges[i, :] = g.pop("left_edge") grid_right_edges[i, :] = g.pop("right_edge") grid_dimensions[i, :] = g.pop("dimensions") grid_levels[i, :] = g.pop("level") field_units, data, n_particles = process_data( g, grid_dims=tuple(grid_dimensions[i, :]) ) number_of_particles[i, :] = n_particles sfh[i] = data # We now reconstruct our parent ids, so that our particle assignment can # proceed. mask = np.empty(ngrids, dtype="int32") for gi in range(ngrids): get_box_grids_level( grid_left_edges[gi, :], grid_right_edges[gi, :], grid_levels[gi].item() + 1, grid_left_edges, grid_right_edges, grid_levels, mask, ) ids = np.where(mask.astype("bool")) for ci in ids: parent_ids[ci] = gi # Check if the grid structure is properly aligned (bug #1295) for lvl in range(grid_levels.min() + 1, grid_levels.max() + 1): idx = grid_levels.flatten() == lvl dims = domain_dimensions * refine_by ** (lvl - 1) for iax, ax in enumerate("xyz"): cell_edges = np.linspace( domain_left_edge[iax], domain_right_edge[iax], dims[iax], endpoint=False ) if set(grid_left_edges[idx, iax]) - set(cell_edges): raise YTIllDefinedAMR(lvl, ax) if length_unit is None: length_unit = "code_length" if mass_unit is None: mass_unit = "code_mass" if time_unit is None: time_unit = "code_time" if velocity_unit is None: velocity_unit = "code_velocity" if magnetic_unit is None: magnetic_unit = "code_magnetic" particle_types = {} for grid in sfh.values(): particle_types.update(set_particle_types(grid)) handler = StreamHandler( grid_left_edges, grid_right_edges, grid_dimensions, grid_levels, parent_ids, number_of_particles, np.zeros(ngrids).reshape((ngrids, 1)), sfh, field_units, (length_unit, mass_unit, time_unit, velocity_unit, magnetic_unit), particle_types=particle_types, periodicity=periodicity, parameters=parameters, ) handler.name = dataset_name # type: ignore [attr-defined] handler.domain_left_edge = domain_left_edge # type: ignore [attr-defined] handler.domain_right_edge = domain_right_edge # type: ignore [attr-defined] handler.refine_by = refine_by # type: ignore [attr-defined] if np.all(domain_dimensions[1:] == 1): dimensionality = 1 elif domain_dimensions[2] == 1: dimensionality = 2 else: dimensionality = 3 handler.dimensionality 
= dimensionality # type: ignore [attr-defined] handler.domain_dimensions = domain_dimensions # type: ignore [attr-defined] handler.simulation_time = sim_time # type: ignore [attr-defined] handler.cosmology_simulation = 0 # type: ignore [attr-defined] sds = StreamDataset( handler, geometry=geometry, axis_order=axis_order, unit_system=unit_system, default_species_fields=default_species_fields, ) return sds def load_particles( data: Mapping[AnyFieldKey, np.ndarray | tuple[np.ndarray, str]], length_unit=None, bbox=None, sim_time=None, mass_unit=None, time_unit=None, velocity_unit=None, magnetic_unit=None, periodicity=(True, True, True), geometry="cartesian", unit_system="cgs", data_source=None, default_species_fields=None, *, axis_order: AxisOrder | None = None, parameters=None, dataset_name: str = "ParticleData", ): r"""Load a set of particles into yt as a :class:`~yt.frontends.stream.data_structures.StreamParticleHandler`. This will allow a collection of particle data to be loaded directly into yt and analyzed as would any others. This comes with several caveats: * There must be sufficient space in memory to contain all the particle data. * Parallelism will be disappointing or non-existent in most cases. * Fluid fields are not supported. Note: in order for the dataset to take advantage of SPH functionality, the following two fields must be provided: * ('io', 'density') * ('io', 'smoothing_length') Parameters ---------- data : dict This is a dict of numpy arrays or (numpy array, unit name) tuples, where the keys are the field names. Particles positions must be named "particle_position_x", "particle_position_y", and "particle_position_z". length_unit : float Conversion factor from simulation length units to centimeters bbox : array_like (xdim:zdim, LE:RE), optional Size of computational domain in units of the length_unit sim_time : float, optional The simulation time in seconds mass_unit : float Conversion factor from simulation mass units to grams time_unit : float Conversion factor from simulation time units to seconds velocity_unit : float Conversion factor from simulation velocity units to cm/s magnetic_unit : float Conversion factor from simulation magnetic units to gauss periodicity : tuple of booleans Determines whether the data will be treated as periodic along each axis geometry : string (or tuple, deprecated) "cartesian", "cylindrical", "polar", "spherical", "geographic" or "spectral_cube". data_source : YTSelectionContainer, optional If set, parameters like `bbox`, `sim_time`, and code units are derived from it. default_species_fields : string, optional If set, default species fields are created for H and He which also determine the mean molecular weight. Options are "ionized" and "neutral". axis_order: tuple of three strings, optional Force axis ordering, e.g. ("z", "y", "x") with cartesian geometry Otherwise use geometry-specific default ordering. parameters: dictionary, optional Optional dictionary used to populate the dataset parameters, useful for storing dataset metadata. dataset_name: string, optional Optional string used to assign a name to the dataset. Stream datasets will use this value in place of a filename (in image prefixing, etc.) Examples -------- >>> pos = [np.random.random(128 * 128 * 128) for i in range(3)] >>> data = dict( ... particle_position_x=pos[0], ... particle_position_y=pos[1], ... particle_position_z=pos[2], ... 
) >>> bbox = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]) >>> ds = load_particles(data, 3.08e24, bbox=bbox) """ from yt.frontends.stream.data_structures import ( StreamDictFieldHandler, StreamHandler, StreamParticlesDataset, ) from yt.frontends.stream.definitions import process_data, set_particle_types domain_dimensions = np.ones(3, "int32") nprocs = 1 # Parse bounding box if data_source is not None: le, re = data_source.get_bbox() le = le.to_value("code_length") re = re.to_value("code_length") bbox = list(zip(le, re, strict=True)) if bbox is None: bbox = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]], "float64") else: bbox = np.array(bbox) domain_left_edge = np.array(bbox[:, 0], "float64") domain_right_edge = np.array(bbox[:, 1], "float64") grid_levels = np.zeros(nprocs, dtype="int32").reshape((nprocs, 1)) # Parse simulation time if data_source is not None: sim_time = data_source.ds.current_time if sim_time is None: sim_time = 0.0 else: sim_time = float(sim_time) # Parse units def parse_unit(unit, dimension): if unit is None: unit = "code_" + dimension if data_source is not None: unit = getattr(data_source.ds, dimension + "_unit", unit) return unit length_unit = parse_unit(length_unit, "length") mass_unit = parse_unit(mass_unit, "mass") time_unit = parse_unit(time_unit, "time") velocity_unit = parse_unit(velocity_unit, "velocity") magnetic_unit = parse_unit(magnetic_unit, "magnetic") # Preprocess data field_units, data, _ = process_data(data) sfh = StreamDictFieldHandler() pdata: dict[AnyFieldKey, np.ndarray | tuple[np.ndarray, str]] = {} for key in data.keys(): field: FieldKey if not isinstance(key, tuple): field = ("io", key) mylog.debug("Reassigning '%s' to '%s'", key, field) else: field = key pdata[field] = data[key] sfh._additional_fields += (field,) data = pdata # Drop reference count particle_types = set_particle_types(data) sfh.update({"stream_file": data}) grid_left_edges = domain_left_edge grid_right_edges = domain_right_edge grid_dimensions = domain_dimensions.reshape(nprocs, 3).astype("int32") # I'm not sure we need any of this. 
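# (Editor's note) StreamHandler is grid-oriented, so even a particle-only # stream is described as a single level-0 "grid" spanning the whole domain; # the zero-filled arrays below are placeholders for that grid.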
handler = StreamHandler( grid_left_edges, grid_right_edges, grid_dimensions, grid_levels, -np.ones(nprocs, dtype="int64"), np.zeros(nprocs, dtype="int64").reshape(nprocs, 1), # Temporary np.zeros(nprocs).reshape((nprocs, 1)), sfh, field_units, (length_unit, mass_unit, time_unit, velocity_unit, magnetic_unit), particle_types=particle_types, periodicity=periodicity, parameters=parameters, ) handler.name = dataset_name # type: ignore [attr-defined] handler.domain_left_edge = domain_left_edge # type: ignore [attr-defined] handler.domain_right_edge = domain_right_edge # type: ignore [attr-defined] handler.refine_by = 2 # type: ignore [attr-defined] handler.dimensionality = 3 # type: ignore [attr-defined] handler.domain_dimensions = domain_dimensions # type: ignore [attr-defined] handler.simulation_time = sim_time # type: ignore [attr-defined] handler.cosmology_simulation = 0 # type: ignore [attr-defined] sds = StreamParticlesDataset( handler, geometry=geometry, unit_system=unit_system, default_species_fields=default_species_fields, axis_order=axis_order, ) return sds def load_hexahedral_mesh( data, connectivity, coordinates, length_unit=None, bbox=None, sim_time=0.0, mass_unit=None, time_unit=None, velocity_unit=None, magnetic_unit=None, periodicity=(True, True, True), geometry="cartesian", unit_system="cgs", *, axis_order: AxisOrder | None = None, parameters=None, dataset_name: str = "HexahedralMeshData", ): r"""Load a hexahedral mesh of data into yt as a :class:`~yt.frontends.stream.data_structures.StreamHandler`. This should allow a semistructured grid of data to be loaded directly into yt and analyzed as would any others. This comes with several caveats: * Units will be incorrect unless the data has already been converted to cgs. * Some functions may behave oddly, and parallelism will be disappointing or non-existent in most cases. * Particles may be difficult to integrate. Particle fields are detected as one-dimensional fields. The number of particles is set by the "number_of_particles" key in data. Parameters ---------- data : dict This is a dict of numpy arrays, where the keys are the field names. There must only be one. Note that the data in the numpy arrays should define the cell-averaged value of the quantity in the hexahedral cell. connectivity : array_like This should be of size (N,8) where N is the number of zones. coordinates : array_like This should be of size (M,3) where M is the number of vertices indicated in the connectivity matrix. bbox : array_like (xdim:zdim, LE:RE), optional Size of computational domain in units of the length unit. sim_time : float, optional The simulation time in seconds mass_unit : string Unit to use for masses. Defaults to unitless. time_unit : string Unit to use for times. Defaults to unitless. velocity_unit : string Unit to use for velocities. Defaults to unitless. magnetic_unit : string Unit to use for magnetic fields. Defaults to unitless. periodicity : tuple of booleans Determines whether the data will be treated as periodic along each axis geometry : string (or tuple, deprecated) "cartesian", "cylindrical", "polar", "spherical", "geographic" or "spectral_cube". [DEPRECATED]: Optionally, a tuple can be provided to specify the axis ordering. For instance, to specify that the axis ordering should be z, x, y, this would be: ("cartesian", ("z", "x", "y")). The same can be done for other coordinates, for instance: ("spherical", ("theta", "phi", "r")). axis_order: tuple of three strings, optional Force axis ordering, e.g.
("z", "y", "x") with cartesian geometry Otherwise use geometry-specific default ordering. parameters: dictionary, optional Optional dictionary used to populate the dataset parameters, useful for storing dataset metadata. dataset_name: string, optional Optional string used to assign a name to the dataset. Stream datasets will use this value in place of a filename (in image prefixing, etc.) """ from yt.frontends.stream.data_structures import ( StreamDictFieldHandler, StreamHandler, StreamHexahedralDataset, ) from yt.frontends.stream.definitions import process_data, set_particle_types geometry, axis_order = _sanitize_axis_order_args(geometry, axis_order) domain_dimensions = np.ones(3, "int32") * 2 nprocs = 1 if bbox is None: bbox = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]], "float64") domain_left_edge = np.array(bbox[:, 0], "float64") domain_right_edge = np.array(bbox[:, 1], "float64") grid_levels = np.zeros(nprocs, dtype="int32").reshape((nprocs, 1)) field_units, data, _ = process_data(data) sfh = StreamDictFieldHandler() particle_types = set_particle_types(data) sfh.update({"connectivity": connectivity, "coordinates": coordinates, 0: data}) # Simple check for axis length correctness if len(data) > 0: fn = sorted(data)[0] array_values = data[fn] if array_values.size != connectivity.shape[0]: mylog.error( "Dimensions of array must be one fewer than the coordinate set." ) raise RuntimeError grid_left_edges = domain_left_edge grid_right_edges = domain_right_edge grid_dimensions = domain_dimensions.reshape(nprocs, 3).astype("int32") if length_unit is None: length_unit = "code_length" if mass_unit is None: mass_unit = "code_mass" if time_unit is None: time_unit = "code_time" if velocity_unit is None: velocity_unit = "code_velocity" if magnetic_unit is None: magnetic_unit = "code_magnetic" # I'm not sure we need any of this. handler = StreamHandler( grid_left_edges, grid_right_edges, grid_dimensions, grid_levels, -np.ones(nprocs, dtype="int64"), np.zeros(nprocs, dtype="int64").reshape(nprocs, 1), # Temporary np.zeros(nprocs).reshape((nprocs, 1)), sfh, field_units, (length_unit, mass_unit, time_unit, velocity_unit, magnetic_unit), particle_types=particle_types, periodicity=periodicity, parameters=parameters, ) handler.name = dataset_name # type: ignore [attr-defined] handler.domain_left_edge = domain_left_edge # type: ignore [attr-defined] handler.domain_right_edge = domain_right_edge # type: ignore [attr-defined] handler.refine_by = 2 # type: ignore [attr-defined] handler.dimensionality = 3 # type: ignore [attr-defined] handler.domain_dimensions = domain_dimensions # type: ignore [attr-defined] handler.simulation_time = sim_time # type: ignore [attr-defined] handler.cosmology_simulation = 0 # type: ignore [attr-defined] sds = StreamHexahedralDataset( handler, geometry=geometry, axis_order=axis_order, unit_system=unit_system ) return sds def load_octree( octree_mask, data, bbox=None, sim_time=0.0, length_unit=None, mass_unit=None, time_unit=None, velocity_unit=None, magnetic_unit=None, periodicity=(True, True, True), over_refine_factor=None, num_zones=2, partial_coverage=1, unit_system="cgs", default_species_fields=None, *, parameters=None, domain_dimensions=None, dataset_name: str = "OctreeData", ): r"""Load an octree mask into yt. Octrees can be saved out by calling save_octree on an OctreeContainer. This enables them to be loaded back in. This will initialize an Octree of data. Note that fluid fields will not work yet, or possibly ever. 
Parameters ---------- octree_mask : np.ndarray[uint8_t] This is a depth-first refinement mask for an Octree. It should be of size n_octs * 8 (but see note about the root oct below), where each item is 1 for an oct-cell being refined and 0 for it not being refined. For num_zones != 2, the children count will still be 8, so there will still be n_octs * 8 entries. Note that if the root oct is not refined, there will be only one entry for the root, so the size of the mask will be (n_octs - 1)*8 + 1. data : dict A dictionary of 1D arrays. Note that the size of these arrays must equal the number of "False" values in the ``octree_mask``. bbox : array_like (xdim:zdim, LE:RE), optional Size of computational domain in units of length sim_time : float, optional The simulation time in seconds length_unit : string Unit to use for lengths. Defaults to unitless. mass_unit : string Unit to use for masses. Defaults to unitless. time_unit : string Unit to use for times. Defaults to unitless. velocity_unit : string Unit to use for velocities. Defaults to unitless. magnetic_unit : string Unit to use for magnetic fields. Defaults to unitless. periodicity : tuple of booleans Determines whether the data will be treated as periodic along each axis partial_coverage : boolean Whether or not an oct can be refined cell-by-cell, or whether all 8 get refined. default_species_fields : string, optional If set, default species fields are created for H and He which also determine the mean molecular weight. Options are "ionized" and "neutral". num_zones : int The number of zones along each dimension in an oct parameters: dictionary, optional Optional dictionary used to populate the dataset parameters, useful for storing dataset metadata. domain_dimensions : 3 elements array-like, optional This is the domain dimensions of the root *mesh*, which can be used to specify (indirectly) the number of root oct nodes. dataset_name : string, optional Optional string used to assign a name to the dataset. Stream datasets will use this value in place of a filename (in image prefixing, etc.) Examples -------- >>> import numpy as np >>> oct_mask = np.zeros(33) # 5 refined values gives 7 * 4 + 5 octs to mask >>> oct_mask[[0, 5, 7, 16]] = 8 >>> octree_mask = np.array(oct_mask, dtype=np.uint8) >>> quantities = {} >>> quantities["gas", "density"] = np.random.random((29, 1)) # num of false's >>> bbox = np.array([[-10.0, 10.0], [-10.0, 10.0], [-10.0, 10.0]]) >>> ds = load_octree( ... octree_mask=octree_mask, ... data=quantities, ... bbox=bbox, ... num_zones=1, ... partial_coverage=0, ... ) """ from yt.frontends.stream.data_structures import ( StreamDictFieldHandler, StreamHandler, StreamOctreeDataset, ) from yt.frontends.stream.definitions import process_data, set_particle_types if not isinstance(octree_mask, np.ndarray) or octree_mask.dtype != np.uint8: raise TypeError("octree_mask should be a Numpy array with type uint8") nz = num_zones # for compatibility if over_refine_factor is not None: nz = 1 << over_refine_factor if domain_dimensions is None: # We assume that if it isn't specified, it defaults to the number of # zones (i.e., a single root oct.)
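# For illustration (the numbers here are an assumption, not from the
# docstring): with the default num_zones=2 the fallback below gives
# domain_dimensions=(2, 2, 2), i.e. a single root oct, while passing
# domain_dimensions=(4, 4, 4) would describe a 2x2x2 grid of root octs.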
domain_dimensions = np.array([nz, nz, nz]) else: domain_dimensions = np.array(domain_dimensions) nprocs = 1 if bbox is None: bbox = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]], "float64") domain_left_edge = np.array(bbox[:, 0], "float64") domain_right_edge = np.array(bbox[:, 1], "float64") grid_levels = np.zeros(nprocs, dtype="int32").reshape((nprocs, 1)) field_units, data, _ = process_data(data) sfh = StreamDictFieldHandler() particle_types = set_particle_types(data) sfh.update({0: data}) grid_left_edges = domain_left_edge grid_right_edges = domain_right_edge grid_dimensions = domain_dimensions.reshape(nprocs, 3).astype("int32") if length_unit is None: length_unit = "code_length" if mass_unit is None: mass_unit = "code_mass" if time_unit is None: time_unit = "code_time" if velocity_unit is None: velocity_unit = "code_velocity" if magnetic_unit is None: magnetic_unit = "code_magnetic" # I'm not sure we need any of this. handler = StreamHandler( grid_left_edges, grid_right_edges, grid_dimensions, grid_levels, -np.ones(nprocs, dtype="int64"), np.zeros(nprocs, dtype="int64").reshape(nprocs, 1), # Temporary np.zeros(nprocs).reshape((nprocs, 1)), sfh, field_units, (length_unit, mass_unit, time_unit, velocity_unit, magnetic_unit), particle_types=particle_types, periodicity=periodicity, parameters=parameters, ) handler.name = dataset_name # type: ignore [attr-defined] handler.domain_left_edge = domain_left_edge # type: ignore [attr-defined] handler.domain_right_edge = domain_right_edge # type: ignore [attr-defined] handler.refine_by = 2 # type: ignore [attr-defined] handler.dimensionality = 3 # type: ignore [attr-defined] handler.domain_dimensions = domain_dimensions # type: ignore [attr-defined] handler.simulation_time = sim_time # type: ignore [attr-defined] handler.cosmology_simulation = 0 # type: ignore [attr-defined] sds = StreamOctreeDataset( handler, unit_system=unit_system, default_species_fields=default_species_fields ) sds.octree_mask = octree_mask # type: ignore [attr-defined] sds.partial_coverage = partial_coverage # type: ignore [attr-defined] sds.num_zones = num_zones # type: ignore [attr-defined] return sds def load_unstructured_mesh( connectivity, coordinates, node_data=None, elem_data=None, length_unit=None, bbox=None, sim_time=0.0, mass_unit=None, time_unit=None, velocity_unit=None, magnetic_unit=None, periodicity=(False, False, False), geometry="cartesian", unit_system="cgs", *, axis_order: AxisOrder | None = None, parameters=None, dataset_name: str = "UnstructuredMeshData", ): r"""Load an unstructured mesh of data into yt as a :class:`~yt.frontends.stream.data_structures.StreamHandler`. This should allow unstructured mesh data to be loaded directly into yt and analyzed as would any other dataset. Not all functionality for visualization will be present, and some analysis functions may not yet have been implemented. Particle fields are detected as one-dimensional fields. The number of particles is set by the "number_of_particles" key in data. In the parameter descriptions below, a "vertex" is a 3D point in space, an "element" is a single polyhedron whose location is defined by a set of vertices, and a "mesh" is a set of polyhedral elements, each with the same number of vertices. Parameters ---------- connectivity : list of array_like or array_like This should either be a single 2D array or list of 2D arrays. If this is a list, each element in the list corresponds to the connectivity information for a distinct mesh.
Each array can have different connectivity length and should be of shape (N,M) where N is the number of elements and M is the number of vertices per element. coordinates : array_like The 3D coordinates of mesh vertices. This should be of size (L, D) where L is the number of vertices and D is the number of coordinates per vertex (the spatial dimensions of the dataset). Currently this must be either 2 or 3. When loading more than one mesh, the data for each mesh should be concatenated into a single coordinates array. node_data : dict or list of dicts For a single mesh, a dict mapping field names to 2D numpy arrays, representing data defined at element vertices. For multiple meshes, this must be a list of dicts. Note that these are not the values as a function of the coordinates, but of the connectivity. Their shape should be the same as the connectivity. This means that if the data is in the shape of the coordinates, you may need to reshape them using the `connectivity` array as an index. elem_data : dict or list of dicts For a single mesh, a dict mapping field names to 1D numpy arrays, where each array has a length equal to the number of elements. The data must be defined at the center of each mesh element and there must be only one data value for each element. For multiple meshes, this must be a list of dicts, with one dict for each mesh. bbox : array_like (xdim:zdim, LE:RE), optional Size of computational domain in units of the length unit. sim_time : float, optional The simulation time in seconds length_unit : string Unit to use for length. Defaults to unitless. mass_unit : string Unit to use for masses. Defaults to unitless. time_unit : string Unit to use for times. Defaults to unitless. velocity_unit : string Unit to use for velocities. Defaults to unitless. magnetic_unit : string Unit to use for magnetic fields. Defaults to unitless. periodicity : tuple of booleans Determines whether the data will be treated as periodic along each axis geometry : string (or tuple, deprecated) "cartesian", "cylindrical", "polar", "spherical", "geographic" or "spectral_cube". [DEPRECATED]: Optionally, a tuple can be provided to specify the axis ordering. For instance, to specify that the axis ordering should be z, x, y, this would be: ("cartesian", ("z", "x", "y")). The same can be done for other coordinates, for instance: ("spherical", ("theta", "phi", "r")). axis_order: tuple of three strings, optional Force axis ordering, e.g. ("z", "y", "x") with cartesian geometry Otherwise use geometry-specific default ordering. parameters: dictionary, optional Optional dictionary used to populate the dataset parameters, useful for storing dataset metadata. dataset_name: string, optional Optional string used to assign a name to the dataset. Stream datasets will use this value in place of a filename (in image prefixing, etc.) Examples -------- Load a simple mesh consisting of two tets. >>> # Coordinates for vertices of two tetrahedra >>> coordinates = np.array( ... [ ... [0.0, 0.0, 0.5], ... [0.0, 1.0, 0.5], ... [0.5, 1, 0.5], ... [0.5, 0.5, 0.0], ... [0.5, 0.5, 1.0], ... ] ... ) >>> # The indices in the coordinates array of mesh vertices. >>> # This mesh has two elements. >>> connectivity = np.array([[0, 1, 2, 4], [0, 1, 2, 3]]) >>> # Field data defined at the centers of the two mesh elements. >>> elem_data = {("connect1", "elem_field"): np.array([1, 2])} >>> # Field data defined at node vertices >>> node_data = { ... ("connect1", "node_field"): np.array( ... [[0.0, 1.0, 2.0, 4.0], [0.0, 1.0, 2.0, 3.0]] ... ) ... 
} >>> ds = load_unstructured_mesh( ... connectivity, coordinates, elem_data=elem_data, node_data=node_data ... ) """ from yt.frontends.exodus_ii.util import get_num_pseudo_dims from yt.frontends.stream.data_structures import ( StreamDictFieldHandler, StreamHandler, StreamUnstructuredMeshDataset, ) from yt.frontends.stream.definitions import process_data, set_particle_types geometry, axis_order = _sanitize_axis_order_args(geometry, axis_order) dimensionality = coordinates.shape[1] domain_dimensions = np.ones(3, "int32") * 2 nprocs = 1 if elem_data is None and node_data is None: raise RuntimeError("No data supplied in load_unstructured_mesh.") connectivity = list(always_iterable(connectivity, base_type=np.ndarray)) num_meshes = max(1, len(connectivity)) elem_data = list(always_iterable(elem_data, base_type=dict)) or [{}] * num_meshes node_data = list(always_iterable(node_data, base_type=dict)) or [{}] * num_meshes data = [{} for i in range(num_meshes)] # type: ignore [var-annotated] for elem_dict, data_dict in zip(elem_data, data, strict=True): for field, values in elem_dict.items(): data_dict[field] = values for node_dict, data_dict in zip(node_data, data, strict=True): for field, values in node_dict.items(): data_dict[field] = values if bbox is None: bbox = [ [ coordinates[:, i].min() - 0.1 * abs(coordinates[:, i].min()), coordinates[:, i].max() + 0.1 * abs(coordinates[:, i].max()), ] for i in range(dimensionality) ] if dimensionality < 3: bbox.append([0.0, 1.0]) if dimensionality < 2: bbox.append([0.0, 1.0]) # handle pseudo-dims here num_pseudo_dims = get_num_pseudo_dims(coordinates) dimensionality -= num_pseudo_dims for i in range(dimensionality, 3): bbox[i][0] = 0.0 bbox[i][1] = 1.0 bbox = np.array(bbox, dtype=np.float64) domain_left_edge = np.array(bbox[:, 0], "float64") domain_right_edge = np.array(bbox[:, 1], "float64") grid_levels = np.zeros(nprocs, dtype="int32").reshape((nprocs, 1)) field_units = {} particle_types = {} sfh = StreamDictFieldHandler() sfh.update({"connectivity": connectivity, "coordinates": coordinates}) for i, d in enumerate(data): _f_unit, _data, _ = process_data(d) field_units.update(_f_unit) sfh[i] = _data particle_types.update(set_particle_types(d)) grid_left_edges = domain_left_edge grid_right_edges = domain_right_edge grid_dimensions = domain_dimensions.reshape(nprocs, 3).astype("int32") if length_unit is None: length_unit = "code_length" if mass_unit is None: mass_unit = "code_mass" if time_unit is None: time_unit = "code_time" if velocity_unit is None: velocity_unit = "code_velocity" if magnetic_unit is None: magnetic_unit = "code_magnetic" # I'm not sure we need any of this. 
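# Note on the positional arguments to StreamHandler below: after the
# edge/dimension/level arrays, the -np.ones and np.zeros arrays fill the
# parent-id, particle-count, and processor-id slots; for in-memory stream
# data these are effectively placeholders (hence the "Temporary" remark).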
handler = StreamHandler( grid_left_edges, grid_right_edges, grid_dimensions, grid_levels, -np.ones(nprocs, dtype="int64"), np.zeros(nprocs, dtype="int64").reshape(nprocs, 1), # Temporary np.zeros(nprocs).reshape((nprocs, 1)), sfh, field_units, (length_unit, mass_unit, time_unit, velocity_unit, magnetic_unit), particle_types=particle_types, periodicity=periodicity, parameters=parameters, ) handler.name = dataset_name # type: ignore [attr-defined] handler.domain_left_edge = domain_left_edge # type: ignore [attr-defined] handler.domain_right_edge = domain_right_edge # type: ignore [attr-defined] handler.refine_by = 2 # type: ignore [attr-defined] handler.dimensionality = dimensionality # type: ignore [attr-defined] handler.domain_dimensions = domain_dimensions # type: ignore [attr-defined] handler.simulation_time = sim_time # type: ignore [attr-defined] handler.cosmology_simulation = 0 # type: ignore [attr-defined] sds = StreamUnstructuredMeshDataset( handler, geometry=geometry, axis_order=axis_order, unit_system=unit_system ) fluid_types = ["all"] for i in range(1, num_meshes + 1): fluid_types += ["connect%d" % i] sds.fluid_types = tuple(fluid_types) def flatten(l): return [item for sublist in l for item in sublist] sds._node_fields = flatten([[f[1] for f in m] for m in node_data if m]) # type: ignore [attr-defined] sds._elem_fields = flatten([[f[1] for f in m] for m in elem_data if m]) # type: ignore [attr-defined] sds.default_field = [f for f in sds.field_list if f[0] == "connect1"][-1] sds.default_fluid_type = sds.default_field[0] return sds # --- Loader for yt sample datasets --- @parallel_root_only_then_broadcast def _get_sample_data( fn: str | None = None, *, progressbar: bool = True, timeout=None, **kwargs ): # this isolates all the filename management and downloading so that it # can be restricted to a single process if running in parallel. Returns # the loadable_path as well as the kwargs dictionary, which is modified # by this function (note that the kwargs are returned explicitly rather than # relying on in-place modification so that the updated kwargs can be # broadcast to other processes during parallel execution). import tarfile from yt.sample_data.api import ( _download_sample_data_file, _get_test_data_dir_path, get_data_registry_table, get_download_cache_dir, ) pooch_logger = pooch.utils.get_logger() # normalize path for platform portability # for consistency with yt.load, we also convert to str explicitly, # which gives us support Path objects for free fn = str(fn).replace("/", os.path.sep) topdir, _, specific_file = fn.partition(os.path.sep) registry_table = get_data_registry_table() known_names: list[str] = registry_table.dropna()["filename"].to_list() if topdir not in known_names: msg = f"'{topdir}' is not an available dataset." lexical_distances: list[tuple[str, int]] = [ (name, levenshtein_distance(name, topdir)) for name in known_names ] suggestions: list[str] = [name for name, dist in lexical_distances if dist < 4] if len(suggestions) == 1: msg += f" Did you mean '{suggestions[0]}' ?" 
elif suggestions: msg += " Did you mean to type any of the following ?\n\n " msg += "\n ".join(f"'{_}'" for _ in suggestions) raise ValueError(msg) # PR 3089 # note: in the future the registry table should be reindexed # so that the following line can be replaced with # # specs = registry_table.loc[fn] # # however we don't want to do it right now because the "filename" column is # currently incomplete specs = registry_table.query(f"`filename` == '{topdir}'").iloc[0] load_name = specific_file or specs["load_name"] or "" if not isinstance(specs["load_kwargs"], dict): raise ValueError( "The requested dataset seems to be improperly registered.\n" "Tip: the entry in yt/sample_data_registry.json may be inconsistent with " "https://github.com/yt-project/website/blob/master/data/datafiles.json\n" "Please report this to https://github.com/yt-project/yt/issues/new" ) load_kwargs = {**specs["load_kwargs"], **kwargs} save_dir = _get_test_data_dir_path() data_path = save_dir.joinpath(fn) if save_dir.joinpath(topdir).exists(): # if the data is already available locally, `load_sample` # only acts as a thin wrapper around `load` if load_name and os.sep not in fn: data_path = data_path.joinpath(load_name) mylog.info("Sample dataset found in '%s'", data_path) if timeout is not None: mylog.info("Ignoring the `timeout` keyword argument received.") return data_path, load_kwargs mylog.info("'%s' is not available locally. Looking up online.", fn) # effectively silence the pooch's logger and create our own log instead pooch_logger.setLevel(100) mylog.info("Downloading from %s", specs["url"]) # downloading via a pooch.Pooch instance behind the scenes filename = urlsplit(specs["url"]).path.split("/")[-1] tmp_file = _download_sample_data_file( filename, progressbar=progressbar, timeout=timeout ) # pooch has functionalities to unpack downloaded archive files, # but it needs to be told in advance that we are downloading a tarball. # Since that information is not necessarily trivial to guess from the filename, # we rely on the standard library to perform a conditional unpacking instead. if tarfile.is_tarfile(tmp_file): mylog.info("Untaring downloaded file to '%s'", save_dir) with tarfile.open(tmp_file) as fh: def is_within_directory(directory, target): abs_directory = os.path.abspath(directory) abs_target = os.path.abspath(target) prefix = os.path.commonprefix([abs_directory, abs_target]) return prefix == abs_directory def safe_extract(tar, path=".", members=None, *, numeric_owner=False): for member in tar.getmembers(): member_path = os.path.join(path, member.name) if not is_within_directory(path, member_path): raise Exception("Attempted Path Traversal in Tar File") if sys.version_info >= (3, 12): # the filter argument is new in Python 3.12, but not specifying it # explicitly raises a deprecation warning on 3.12 and 3.13 extractall_kwargs = {"filter": "data"} else: extractall_kwargs = {} tar.extractall( path, members, numeric_owner=numeric_owner, **extractall_kwargs ) safe_extract(fh, save_dir) os.remove(tmp_file) else: os.replace(tmp_file, os.path.join(save_dir, fn)) loadable_path = Path.joinpath(save_dir, fn) if load_name not in str(loadable_path): loadable_path = loadable_path.joinpath(load_name, specific_file) try: # clean cache dir get_download_cache_dir().rmdir() except OSError: # cache dir isn't empty pass return loadable_path, load_kwargs def load_sample( fn: str | None = None, *, progressbar: bool = True, timeout=None, **kwargs ): r""" Load sample data with yt. 
This is a simple wrapper around :func:`~yt.loaders.load` to include fetching data with pooch from a remote source. The data registry table can be retrieved and visualized using :func:`~yt.sample_data.api.get_data_registry_table`. The `filename` column contains usable keys that can be passed as the first positional argument to load_sample. Some data samples contain a series of datasets. It may be required to supply the relative path to a specific dataset. Parameters ---------- fn: str The `filename` of the dataset to load, as defined in the data registry table. progressbar: bool Display a progress bar (tqdm). timeout: float or int (optional) Maximum waiting time, in seconds, after which download is aborted. `None` means "no limit". This parameter is passed down to requests.get via pooch.HTTPDownloader Notes ----- - This function is experimental as of yt 4.0.0, do not rely on its exact behaviour. - Any additional keyword argument is passed down to :func:`~yt.loaders.load`. - In case of collision with predefined keyword arguments as set in the data registry, the ones passed to this function take priority. - Datasets with slashes '/' in their names can safely be used even on Windows. On the contrary, paths using backslashes '\' won't work outside of Windows, so it is recommended to favour the UNIX convention ('/') in scripts that are meant to be cross-platform. - This function requires pandas and pooch. - Corresponding sample data live at https://yt-project.org/data """ if fn is None: print( "One can see which sample datasets are available at: https://yt-project.org/data\n" "or alternatively by running: yt.sample_data.api.get_data_registry_table()", file=sys.stderr, ) return None loadable_path, load_kwargs = _get_sample_data( fn, progressbar=progressbar, timeout=timeout, **kwargs ) return load(loadable_path, **load_kwargs) def _mount_helper( archive: str, mountPoint: str, ratarmount_kwa: dict, conn: "Connection" ): try: fuseOperationsObject = ratarmount.TarMount( pathToMount=archive, mountPoint=mountPoint, lazyMounting=True, **ratarmount_kwa, ) fuseOperationsObject.use_ns = True conn.send(True) except Exception: conn.send(False) raise ratarmount.fuse.FUSE( operations=fuseOperationsObject, mountpoint=mountPoint, foreground=True, nothreads=True, ) # --- Loader for tar-based datasets --- def load_archive( fn: str | Path, path: str, ratarmount_kwa: dict | None = None, mount_timeout: float = 1.0, *args, **kwargs, ) -> Dataset: r""" Load archived data with yt. This is a wrapper around :func:`~yt.loaders.load` that mounts the archive as a read-only filesystem, loads the dataset from it, and unmounts it at exit. Parameters ---------- fn: str The `filename` of the archive containing the dataset. path: str The path to the dataset in the archive. ratarmount_kwa: dict, optional Optional parameters to pass to ratarmount to mount the archive. mount_timeout: float, optional The timeout to wait for ratarmount to mount the archive. Default is 1s. Notes ----- - The function is experimental and may or may not work depending on your setup. - Any additional keyword argument is passed down to :func:`~yt.loaders.load`. - This function requires ratarmount to be installed. - This function does not work on Windows systems.
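Examples
--------
A sketch of typical usage; the archive name and the internal dataset path
are illustrative and assume the tarball unpacks to a matching layout:

>>> ds = load_archive(
...     "IsolatedGalaxy.tar.gz", "IsolatedGalaxy/galaxy0030/galaxy0030"
... )
>>> ds.dismount()  # optionally unmount manually instead of waiting for exit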
""" import tarfile from multiprocessing import Pipe, Process warnings.warn( "The 'load_archive' function is still experimental and may be unstable.", stacklevel=2, ) fn = os.path.expanduser(fn) # This will raise FileNotFoundError if the path isn't matched # either in the current dir or yt.config.ytcfg['data_dir_directory'] if not fn.startswith("http"): fn = str(lookup_on_disk_data(fn)) if ratarmount_kwa is None: ratarmount_kwa = {} try: tarfile.open(fn) except tarfile.ReadError as exc: raise YTUnidentifiedDataType(fn, *args, **kwargs) from exc # Note: the temporary directory will be created by ratarmount tempdir = fn + ".mount" tempdir_base = tempdir i = 0 while os.path.exists(tempdir): i += 1 tempdir = f"{tempdir_base}.{i}" parent_conn, child_conn = Pipe() proc = Process(target=_mount_helper, args=(fn, tempdir, ratarmount_kwa, child_conn)) proc.start() if not parent_conn.recv(): raise MountError(f"An error occurred while mounting {fn} in {tempdir}") # Note: the mounting needs to happen in another process which needs to be # run in the foreground (otherwise it may unmount). To prevent a race # condition here, we wait for the folder to be mounted within a reasonable # time. t = 0.0 while t < mount_timeout: if os.path.ismount(tempdir): break time.sleep(0.1) t += 0.1 else: raise MountError(f"Folder {tempdir} does not appear to be mounted") # We need to kill the process at exit (to force unmounting) def umount_callback(): proc.terminate() atexit.register(umount_callback) # Alternatively, can dismount manually def del_callback(self): proc.terminate() atexit.unregister(umount_callback) ds = load(os.path.join(tempdir, path), *args, **kwargs) ds.dismount = types.MethodType(del_callback, ds) return ds def load_hdf5_file( fn: Union[str, "os.PathLike[str]"], root_node: str | None = "/", fields: list[str] | None = None, bbox: np.ndarray | None = None, nchunks: int = 0, dataset_arguments: dict | None = None, ): """ Create a (grid-based) yt dataset given the path to an hdf5 file. This function accepts a filename, as well as (potentially) a bounding box, the root node where fields are stored, and the number of chunks to attempt to decompose the object into. This function will then introspect that HDF5 file, attempt to determine the available fields, and then return a :class:`yt.data_objects.static_output.Dataset` object. However, unlike the other loaders, the data is *not* required to be preloaded into memory, and will only be loaded *on demand*. This does not yet work with particle-type datasets. Parameters ---------- fn : str, os.PathLike[str] A path to the hdf5 file that contains the data. root_node: str, optional If the fields to be loaded are stored under an HDF5 group object, specify it here. Otherwise, the fields are assumed to be at the root level of the HDF5 file hierarchy. fields : list of str, optional The fields to be included as part of the dataset. If this is not included, all of the datasets under *root_node* will be included. If your file contains, for instance, a "parameters" node at the root level next to other fields, it would be (mistakenly) included. This allows you to specify only those that should be included. bbox : array_like (xdim:zdim, LE:RE), optional If supplied, this will be the bounding box for the dataset. If not supplied, it will be assumed to be from 0 to 1 in all dimensions. nchunks : int, optional How many chunks should this dataset be split into? If 0 or not supplied, yt will attempt to ensure that there is one chunk for every 64**3 zones in the dataset.
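For example, under this heuristic a dataset of shape (256, 256, 256) is decomposed into 256**3 // 64**3 = 64 chunks.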
dataset_arguments : dict, optional Any additional arguments that should be passed to :class:`yt.loaders.load_amr_grids`, including things like the unit length and the coordinates. Returns ------- :class:`yt.data_objects.static_output.Dataset` object This returns an instance of a dataset, created with `load_amr_grids`, that can read from the HDF5 file supplied to this function. An open handle to the HDF5 file is retained. Raises ------ FileNotFoundError If fn does not match any existing file or directory. """ from yt.utilities.on_demand_imports import _h5py as h5py dataset_arguments = dataset_arguments or {} if bbox is None: bbox = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]) mylog.info("Assuming unitary (0..1) bounding box.") def _read_data(handle, root_node): def _reader(grid, field_name): ftype, fname = field_name si = grid.get_global_startindex() ei = si + grid.ActiveDimensions return handle[root_node][fname][si[0] : ei[0], si[1] : ei[1], si[2] : ei[2]] return _reader fn = str(lookup_on_disk_data(fn)) handle = h5py.File(fn, "r") reader = _read_data(handle, root_node) if fields is None: fields = list(handle[root_node].keys()) mylog.debug("Identified fields %s", fields) shape = handle[root_node][fields[0]].shape if nchunks <= 0: # We apply a pretty simple heuristic here. We don't want more than # about 64^3 zones per chunk. So ... full_size = np.prod(shape) nchunks = full_size // (64**3) mylog.info("Auto-guessing %s chunks from a size of %s", nchunks, full_size) grid_data = [] psize = get_psize(np.array(shape), nchunks) left_edges, right_edges, shapes, _, _ = decompose_array(shape, psize, bbox) for le, re, s in zip(left_edges, right_edges, shapes, strict=True): data = {_: reader for _ in fields} data.update({"left_edge": le, "right_edge": re, "dimensions": s, "level": 0}) grid_data.append(data) return load_amr_grids(grid_data, shape, bbox=bbox, **dataset_arguments) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/py.typed0000644000175100001770000000000014714401662013706 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3671532 yt-4.4.0/yt/sample_data/0000755000175100001770000000000014714401715014472 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/sample_data/__init__.py0000644000175100001770000000000014714401662016572 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/sample_data/api.py0000644000175100001770000001405114714401662015617 0ustar00runnerdocker""" This is a collection of helper functions to yt.load_sample """ import json import re import sys from functools import lru_cache from pathlib import Path from typing import Optional from warnings import warn from yt.config import ytcfg from yt.utilities.on_demand_imports import ( _pandas as pd, _pooch as pooch, _requests as requests, ) num_exp = re.compile(r"\d*(\.\d*)?") byte_unit_exp = re.compile(r"[KMGT]?B") def _parse_byte_size(s: str): """ Convert a string size specification to integer byte size. This function should be insensitive to case and whitespace. It doesn't always return an int, as a temporary measure to deal with missing or corrupted data in the registry. This should be fixed in the future. 
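Sizes are interpreted with binary prefixes (1 KB = 1024 B), and results are rounded to four significant digits (via a "%.3e" float format), which accounts for the approximate values in the examples below.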
Examples -------- # most of the following examples are adapted from # https://stackoverflow.com/a/31631711/5489622 >>> _parse_byte_size(None) >>> from numpy import nan >>> _parse_byte_size(nan) >>> _parse_byte_size("1B") 1 >>> _parse_byte_size("1.00 KB") 1024 >>> _parse_byte_size("488.28 KB") 500000 >>> _parse_byte_size("1.00 MB") 1049000 >>> _parse_byte_size("47.68 MB") 50000000 >>> _parse_byte_size("1.00 GB") 1074000000 >>> _parse_byte_size("4.66 GB") 5004000000 >>> _parse_byte_size("1.00 TB") 1100000000000 >>> _parse_byte_size("4.55 TB") 5003000000000 """ try: s = s.upper() except AttributeError: # input is not a string (likely a np.nan) return pd.NA match = re.search(num_exp, s) if match is None: raise ValueError val = float(match.group()) match = re.search(byte_unit_exp, s) if match is None: raise ValueError unit = match.group() prefixes = ["B", "K", "M", "G", "T"] raw_res = val * 1024 ** prefixes.index(unit[0]) return int(float(f"{raw_res:.3e}")) def _get_sample_data_registry(): import importlib.resources as importlib_resources return json.loads( importlib_resources.files("yt") .joinpath("sample_data_registry.json") .read_bytes() ) @lru_cache(maxsize=128) def get_data_registry_table(): """ Load the sample data registry as a pandas.DataFrame instance. This function is considered experimental and is exposed for exploratory purposes. The output format is subject to change. The output of this function is cached so it will only generate one request per session. """ # it would be nicer to have an actual api on the yt website server, # but this will do for now api_url = "https://raw.githubusercontent.com/yt-project/website/master/data/datafiles.json" response = requests.get(api_url) if not response.ok: raise RuntimeError( "Could not retrieve registry data. Please check your network setup." ) website_json = response.json() # this dict follows this schema: {frontend_name: {flat dataframe-like}} columns = ["code", "filename", "size", "url", "description"] website_table = pd.concat(pd.DataFrame(d) for d in website_json.values())[columns] # add an int-type byte size column # note that we cast to pandas specific type "Int64" because we expect missing values # see https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html#integer-dtypes-and-missing-data website_table["byte_size"] = ( website_table["size"].apply(_parse_byte_size).astype("Int64") ) # normalize urls to match the local json website_table["url"] = website_table["url"].apply( lambda u: u.replace("http:", "https:") ) # load local data pooch_table = pd.DataFrame(_get_sample_data_registry().values()) # merge tables unified_table = website_table.merge(pooch_table, on="url", how="outer") # PR 3089 # ideally we should be able to do this, but it's not possible # at the time of writing because the "filename" column is incomplete # unified_table.set_index("filename", inplace=True) # unified_table.index.rename("id", inplace=True) return unified_table def _get_test_data_dir_path(): p = Path(ytcfg.get("yt", "test_data_dir")) if p.is_dir(): return p warn( "Storage directory from yt config doesn't exist " f"(currently set to '{p}'). " "Current working directory will be used instead." ) return Path.cwd() def lookup_on_disk_data(fn) -> Path: """ Look for data file/dir on disk. Returns ------- pathlib.Path to a file/dir matching fn if any Raises ------ FileNotFoundError """ path = Path(fn).expanduser().resolve() if path.exists(): return path err_msg = f"No such file or directory: '{fn}'."
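# If the path does not resolve as given, fall back to searching under the
# configured test data directory before raising FileNotFoundError.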
test_data_dir = _get_test_data_dir_path() if not test_data_dir.is_dir(): raise FileNotFoundError(err_msg) alt_path = _get_test_data_dir_path().joinpath(fn).resolve() if alt_path != path: if alt_path.exists(): return alt_path err_msg += f"\n(Also tried '{alt_path}')." raise FileNotFoundError(err_msg) def get_download_cache_dir(): return _get_test_data_dir_path() / "yt_download_cache" _POOCHIE = None def _get_pooch_instance(): global _POOCHIE if _POOCHIE is None: data_registry = get_data_registry_table() cache_storage = get_download_cache_dir() registry = {k: v["hash"] for k, v in _get_sample_data_registry().items()} _POOCHIE = pooch.create( path=cache_storage, base_url="https://yt-project.org/data/", registry=registry, ) return _POOCHIE def _download_sample_data_file( filename, progressbar=True, timeout: Optional[int] = None ): """ Download a file by url. Returns ------- storage_filename : location of the downloaded file """ downloader = pooch.HTTPDownloader(progressbar=progressbar, timeout=timeout) poochie = _get_pooch_instance() poochie.fetch(filename, downloader=downloader) return poochie.path / filename ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/sample_data_registry.json0000644000175100001770000011072414714401662017323 0ustar00runnerdocker{ "A2052.tar.gz": { "hash": "a6a427750945b36a7b7478e3e301e4aba75e95af11bdda2e9ef759c9bbcde5e5", "load_kwargs": {}, "load_name": "xray_fits/A2052_merged_0.3-2_match-core_tmap_bgecorr.fits", "url": "https://yt-project.org/data/A2052.tar.gz" }, "AM06.tar.gz": { "hash": "d95090c2c65725411ad5c11f1627c311ec144f93ed04a88a4a4e446aeb0420a6", "load_kwargs": {}, "load_name": "AM06.out1.00400.athdf", "url": "https://yt-project.org/data/AM06.tar.gz" }, "ActiveParticleCosmology.tar.gz": { "hash": "6a52b6539db12f29c4e4209705ffa823bb8027f65bf9431cf068d3ffdb28f55e", "load_kwargs": {}, "load_name": "DD0046/DD0046", "url": "https://yt-project.org/data/ActiveParticleCosmology.tar.gz" }, "ActiveParticleTwoSphere.tar.gz": { "hash": "7b056da9180417c9339bb5c3b98f5dfa5ac33f2f6c2a0d5b70c624a41dd60d4e", "load_kwargs": {}, "load_name": "DD0011/DD0011", "url": "https://yt-project.org/data/ActiveParticleTwoSphere.tar.gz" }, "ArepoBullet.tar.gz": { "hash": "17afd084a0e527e9dccf4cf0dc7e9296a61e44faec3bbc1f9efb3609d9df1025", "load_kwargs": {}, "load_name": "snapshot_150.hdf5", "url": "https://yt-project.org/data/ArepoBullet.tar.gz" }, "ArepoCosmicRays.tar.gz": { "hash": "e664ea7f634fff778511e9ae7b2e375ba9c5099e427f7c037d48bba9944388f4", "load_kwargs": {}, "load_name": "snapshot_039.hdf5", "url": "https://yt-project.org/data/ArepoCosmicRays.tar.gz" }, "BigEndianGadgetBinary.tar.gz": { "hash": "0ab8d69b7c0c1a74282c567823916cc947f9ead7ab54b749a38a17c158b371c9", "load_kwargs": {}, "load_name": "BigEndianGadgetBinary", "url": "https://yt-project.org/data/BigEndianGadgetBinary.tar.gz" }, "C15-3D-3deg.tar.gz": { "hash": "b59a4ea80b3e5bf0c8fc1ab960241b23e8cba6d7346cab25d071d9a63c3be2b7", "load_kwargs": {}, "load_name": "chimera_002715000_grid_1_01.h5", "url": "https://yt-project.org/data/C15-3D-3deg.tar.gz" }, "CfRadialGrid.tar.gz": { "hash": "22a45b322773d4b2864ccab93c521d8e3d637d81c8d8c0852e1a67c7fa883e0c", "load_kwargs": {}, "load_name": "grid1.nc", "url": "https://yt-project.org/data/CfRadialGrid.tar.gz" }, "ChollaSimple.tar.gz": { "hash": "aa277471f694f9de2e46c190efd8cd631c0694044aa29cb8431f127319774d73", "load_kwargs": {}, "load_name": "0.h5", "url": "https://yt-project.org/data/ChollaSimple.tar.gz" }, 
"ClusterMerger.tar.gz": { "hash": "862a537bdecb8de363f20abec751665fd2b9208c29ae66331105cf848e7a0033", "load_kwargs": {}, "load_name": "Data_000100", "url": "https://yt-project.org/data/ClusterMerger.tar.gz" }, "CompactObjects.tar.gz": { "hash": "e24f275d86c95900f62c640c8f5fcc4fec3310e09893b6ed6a9f1e7adc6b9d64", "load_kwargs": {}, "load_name": null, "url": "https://yt-project.org/data/CompactObjects.tar.gz" }, "D9p_500.tar.gz": { "hash": "6d62244dbb4d0974f2e63dd191c3aaf65a8a5ebb90c202e40e871576c3267040", "load_kwargs": {}, "load_name": "10MpcBox_HartGal_csf_a0.500.d", "url": "https://yt-project.org/data/D9p_500.tar.gz" }, "DICEGalaxyDisk.tar.gz": { "hash": "e7cb5255be0b6e712620d337d019239980827ce458585fe5f3141d0f1d910721", "load_kwargs": {}, "load_name": "output_00001", "url": "https://yt-project.org/data/DICEGalaxyDisk.tar.gz" }, "DICEGalaxyDisk_nonCosmological.tar.gz": { "hash": "f95032fc9dc61e760e3919f2c3a59960684d21b9a44cc34cb1a15953de18f30d", "load_kwargs": {}, "load_name": "output_00002/info_00002.txt", "url": "https://yt-project.org/data/DICEGalaxyDisk_nonCosmological.tar.gz" }, "DMonly.tar.gz": { "hash": "45ad285c0df4d7c9e08f497047ac04da373580cd70a8e6fefbe824726235c5c7", "load_kwargs": {}, "load_name": "PMcrs0.0100.DAT", "url": "https://yt-project.org/data/DMonly.tar.gz" }, "DeeplyNestedZoom.tar.gz": { "hash": "cf6c33fc9d743624f9ce95b947df18bae6159afc1eb18e65aecc1b57cc9432ed", "load_kwargs": {}, "load_name": "DD0025/data0025", "url": "https://yt-project.org/data/DeeplyNestedZoom.tar.gz" }, "DensTurbMag.tar.gz": { "hash": "43f3e03d1065d233d315c620316ba8fd877a94bd368d911905dbf81861beb47f", "load_kwargs": {}, "load_name": "DensTurbMag_hdf5_plt_cnt_0015", "url": "https://yt-project.org/data/DensTurbMag.tar.gz" }, "EAGLE_6.tar.gz": { "hash": "234309547fdce3f68e0399bc6f565364ec3bdd1887ce4e414071e55654157652", "load_kwargs": {}, "load_name": "eagle_0005.hdf5", "url": "https://yt-project.org/data/EAGLE_6.tar.gz" }, "ENZOE_orszag-tang_0.5.tar.gz": { "hash": "dbd93068131d3d100c186adb801302a303ac2c5c54118ab5f9e19b409eee5efc", "load_kwargs": {}, "load_name": "ENZOE_orszag-tang_0.5.block_list", "url": "https://yt-project.org/data/ENZOE_orszag-tang_0.5.tar.gz" }, "ENZOP_DD0140.tar.gz": { "hash": "8df14326a82845479398447b8b55bd93d8eea1c5f31657536b390e6703458e0d", "load_kwargs": {}, "load_name": "ENZOP_DD0140.block_list", "url": "https://yt-project.org/data/ENZOP_DD0140.tar.gz" }, "EnzoKelvinHelmholtz.tar.gz": { "hash": "750dcd29a060aee7b5607afdb30a6e3c2b465be60fc5e022c19a78ab04b35a39", "load_kwargs": {}, "load_name": "DD0011/DD0011", "url": "https://yt-project.org/data/EnzoKelvinHelmholtz.tar.gz" }, "Enzo_64.tar.gz": { "hash": "2b17fffb6a2e8e471ef85dbdbfa5568f09f7bdb77715c69b40dfe48be8f71bc9", "load_kwargs": {}, "load_name": "DD0043/data0043", "url": "https://yt-project.org/data/Enzo_64.tar.gz" }, "ExodusII_tests.tar.gz": { "hash": "d21c5ef320f2b4705ec4cacf26488a3a59544fab8cc3a9726e8989ad1bd8d177", "load_kwargs": {}, "load_name": "ExodusII/gold.e", "url": "https://yt-project.org/data/ExodusII_tests.tar.gz" }, "F37_80.tar.gz": { "hash": "ff0eeb2690ec53f25def74a346fdef07878e46d16424ce4f5d31d8c841408a46", "load_kwargs": {}, "load_name": "chimera_00001_grid_1_01.h5", "url": "https://yt-project.org/data/F37_80.tar.gz" }, "FIRE_M12i_ref11.tar.gz": { "hash": "74488ca3158810ad76ed65bf43a5769b899b63d3985a9b43ff55bffd04bf3af2", "load_kwargs": {}, "load_name": "snapshot_600.hdf5", "url": "https://yt-project.org/data/FIRE_M12i_ref11.tar.gz" }, "GDFClumps.tar.gz": { "hash": 
"d9d81f934bdc8e1459bb04471fdcafcea2c2600d326e20a3d922b4c148f7783b", "load_kwargs": {}, "load_name": "clumps.h5", "url": "https://yt-project.org/data/GDFClumps.tar.gz" }, "Gadget3-snap-format2.tar.gz": { "hash": "bedc0f1979f0dd0cc9707c60a806171fe47b4bece0d29b474ff4030b560d5660", "load_kwargs": {}, "load_name": "Gadget3-snap-format2", "url": "https://yt-project.org/data/Gadget3-snap-format2.tar.gz" }, "GadgetDiskGalaxy.tar.gz": { "hash": "262bbd552786c133918a858c8ce09f5348583733fb906f6defbb6375b5a7951e", "load_kwargs": {}, "load_name": "snapshot_200.hdf5", "url": "https://yt-project.org/data/GadgetDiskGalaxy.tar.gz" }, "GalaxyClusterMerger.tar.gz": { "hash": "64d2909369fc0ca7a810734aaa53a6db2f526b101993c401e6b66fc0db597ed7", "load_kwargs": {}, "load_name": "fiducial_1to3_b0.273d_hdf5_plt_cnt_0175", "url": "https://yt-project.org/data/GalaxyClusterMerger.tar.gz" }, "GasSloshing.tar.gz": { "hash": "3e16c5975f310bafd1e859080310de7ca01d5aa055b3504c10138c93154202a4", "load_kwargs": {}, "load_name": "sloshing_nomag2_hdf5_plt_cnt_0150", "url": "https://yt-project.org/data/GasSloshing.tar.gz" }, "GasSloshingLowRes.tar.gz": { "hash": "1eb53d8c0b90e2ebb95aee129046bd230f5296347c90a44558b6f2bdb79951ed", "load_kwargs": {}, "load_name": "sloshing_low_res_hdf5_plt_cnt_0300", "url": "https://yt-project.org/data/GasSloshingLowRes.tar.gz" }, "GaussianBeam.tar.gz": { "hash": "5200448cee1cc03f8ba630c5f88c6a258f9d55cab5672ce8847035d701cfef15", "load_kwargs": {}, "load_name": "plt03008", "url": "https://yt-project.org/data/GaussianBeam.tar.gz" }, "GaussianCloud.tar.gz": { "hash": "edf869828cc14c2f47a389a62bf5a9bec22fba0de7e5c5380c6eb1acca9aa6a1", "load_kwargs": {}, "load_name": "data.0077.3d.hdf5", "url": "https://yt-project.org/data/GaussianCloud.tar.gz" }, "HiresIsolatedGalaxy.tar.gz": { "hash": "1b310a243158dbb9f42b9a71a1c71512a24f1e3cac906713b518555f8bc350c6", "load_kwargs": {}, "load_name": "DD0044/DD0044", "url": "https://yt-project.org/data/HiresIsolatedGalaxy.tar.gz" }, "InteractingJets.tar.gz": { "hash": "e6a0d55df9763842b94fdd453b69b1f2a392f19b4188a9d08c58bf63613c281f", "load_kwargs": {}, "load_name": "jet_000002", "url": "https://yt-project.org/data/InteractingJets.tar.gz" }, "IsolatedGalaxy.tar.gz": { "hash": "fc081bd4420efd02f7ba2db7eaf4bff0299d5cc4da93436be55a30c219daaf18", "load_kwargs": {}, "load_name": "galaxy0030/galaxy0030", "url": "https://yt-project.org/data/IsolatedGalaxy.tar.gz" }, "IsolatedGalaxy_Gravity.tar.gz": { "hash": "2b7faa6a2472d4afe526678775c208242605a8918ace99154bd4ecb6df266505", "load_kwargs": {}, "load_name": "galaxy0030/galaxy0030", "url": "https://yt-project.org/data/IsolatedGalaxy_Gravity.tar.gz" }, "IsothermalCollapse.tar.gz": { "hash": "1420074fdef1c00692bdd4dee31b5b92eaebd668a80510dc554942590b0dae47", "load_kwargs": { "bounding_box": [ [ -3, 3 ], [ -3, 3 ], [ -3, 3 ] ], "unit_base": { "UnitLength_in_cm": 5e+16, "UnitMass_in_g": 1.98992e+33, "UnitVelocity_in_cm_per_s": 46385.19 } }, "load_name": "snap_505.hdf5", "url": "https://yt-project.org/data/IsothermalCollapse.tar.gz" }, "IsothermalSphere.tar.gz": { "hash": "787b0aba3446c62d07caafd7db8e0f225d52b78ce2afe6d86c2535898fc7f4e5", "load_kwargs": {}, "load_name": "data.0000.3d.hdf5", "url": "https://yt-project.org/data/IsothermalSphere.tar.gz" }, "JetICMWall.tar.gz": { "hash": "4f156be55eb96d994001b64965017abf55728ce406bdbdbf5b0de3b3238b905a", "load_kwargs": {}, "load_name": "Data_000060", "url": "https://yt-project.org/data/JetICMWall.tar.gz" }, "KelvinHelmholtz.tar.gz": { "hash": 
"a70ccdab287e6b24553881d24a08a4025079b0e40eea9097c7ef686166ce571d", "load_kwargs": {}, "load_name": "data.0004.hdf5", "url": "https://yt-project.org/data/KelvinHelmholtz.tar.gz" }, "KeplerianDisk.tar.gz": { "hash": "488307807a1e53c404b316f28915d614aa6673defd7b2291e3f26b7fc7839506", "load_kwargs": {}, "load_name": "disk.out1.00000.athdf", "url": "https://yt-project.org/data/KeplerianDisk.tar.gz" }, "KeplerianRing.tar.gz": { "hash": "b2e26d9f98319289283e44e4303a3ac0bfc937931290d8aff86e437503634eab", "load_kwargs": {}, "load_name": "keplerian_ring_0020.hdf5", "url": "https://yt-project.org/data/KeplerianRing.tar.gz" }, "LangmuirWave_v2.tar.gz": { "hash": "27ab056b79b598b4b88e8aa4be042282c07899e85206dd42a506639a69e31270", "load_kwargs": {}, "load_name": "plt00020_v2", "url": "https://yt-project.org/data/LangmuirWave_v2.tar.gz" }, "Laser.tar.gz": { "hash": "7850a9f8cba1fcab4712b269fbea707dbfbaf611605e319cefe444bf9a1347f0", "load_kwargs": {}, "load_name": "plt00015", "url": "https://yt-project.org/data/Laser.tar.gz" }, "LocBub_dust.tar.gz": { "hash": "1ad9728ca982616686d9bd6c1c959fece3c067ecbbba64ff8c2c5bfcc1deed01", "load_kwargs": {}, "load_name": "LocBub_dust_hdf5_plt_cnt_0220", "url": "https://yt-project.org/data/LocBub_dust.tar.gz" }, "MagneticumCluster.tar.gz": { "hash": "403c2224c205a44ee92a5925053be8aef6e5d530d9297c18151cdb0f588c32a7", "load_kwargs": { "long_ids": true, "field_spec": "magneticum_box2_hr" }, "load_name": "snap_132", "url": "https://yt-project.org/data/MagneticumCluster.tar.gz" }, "MHDBlast.tar.gz": { "hash": "fee1139df19448bae966a5396775476935886e69e3898fd7cf71d54548c4e14b", "load_kwargs": {}, "load_name": "id0/Blast.0100.vtk", "url": "https://yt-project.org/data/MHDBlast.tar.gz" }, "MHDCTOrszagTang.tar.gz": { "hash": "90d69d9ed5488051fdbab2ebddf1b45647ab513b19ab0b04bb9ea65b0bbbab26", "load_kwargs": {}, "load_name": "DD0004/data0004", "url": "https://yt-project.org/data/MHDCTOrszagTang.tar.gz" }, "MHDOrszagTangVortex.tar.gz": { "hash": "24c97644ab8620333b0839ac69e0e4021820681eebafe96882fdf3b8fbec1270", "load_kwargs": {}, "load_name": "Data_000018", "url": "https://yt-project.org/data/MHDOrszagTangVortex.tar.gz" }, "MHDSloshing.tar.gz": { "hash": "f0d0b4e26145a2f7445c3afed8ed0294026a2209496c477cc0d1a7a0f022bf38", "load_kwargs": { "units_override": { "length_unit": [1.0, "Mpc"], "mass_unit": [100000000000000.0, "Msun"], "time_unit": [1.0, "Myr"] } }, "load_name": "virgo_low_res.0054.vtk", "url": "https://yt-project.org/data/MHDSloshing.tar.gz" }, "MHD_Cyl3d_hdf5_plt_cnt_0100.tar.gz": { "hash": "f3ad1a0d01e44617a2291733cdceb9b7a54a4f3551155e5ec1b14ee012357dc8", "load_kwargs": {}, "load_name": "MHD_Cyl3d_hdf5_plt_cnt_0100.hdf5", "url": "https://yt-project.org/data/MHD_Cyl3d_hdf5_plt_cnt_0100.tar.gz" }, "MOOSE_Sample_data.tar.gz": { "hash": "0a43e40931e25c4a0e4f205b9902666c2d7a89cd5959e2d5eb7819059cabe01f", "load_kwargs": {}, "load_name": "out.e", "url": "https://yt-project.org/data/MOOSE_Sample_data.tar.gz" }, "MoabTest.tar.gz": { "hash": "22609631c0e417964e0247b4ad62199a60c2d7979f38f8587960a9f8b2147fab", "load_kwargs": {}, "load_name": "fng_usrbin22.h5m", "url": "https://yt-project.org/data/MoabTest.tar.gz" }, "MultiRegion.tar.gz": { "hash": "6acf8e21f2b847102cfbb9f6e22b9b919a9b807e22b2176854fead43e3344e9d", "load_kwargs": {}, "load_name": "two_region_example_out.e", "url": "https://yt-project.org/data/MultiRegion.tar.gz" }, "Nyx_LyA.tar.gz": { "hash": "db9fff2e78ac326780d9ba4fcdb0f9fe30440dd9100b6c50cb1f97722f2c8ea8", "load_kwargs": {}, "load_name": "plt00000", "url": 
"https://yt-project.org/data/Nyx_LyA.tar.gz" }, "Orbit.tar.gz": { "hash": "3df7988b555c5ff17e73f4675167c78baa33993e0c321d56c8da542c5ae03a4b", "load_kwargs": {}, "load_name": "orbit_hdf5_chk_0014", "url": "https://yt-project.org/data/Orbit.tar.gz" }, "PlasmaAcceleration_v2.tar.gz": { "hash": "45eea2e3b1d9bd796816c1d300e5ccd4db57c9609ad8dc9b93a157a90c3a89a4", "load_kwargs": {}, "load_name": "plt00030_v2", "url": "https://yt-project.org/data/PlasmaAcceleration_v2.tar.gz" }, "Plummer.tar.gz": { "hash": "3ef04dad7aee393a9cb08cf6b8616f271aa733a53a07fdfad34b8effcb550998", "load_kwargs": {}, "load_name": "plummer_000000", "url": "https://yt-project.org/data/Plummer.tar.gz" }, "PopIII_mini.tar.gz": { "hash": "184dfeb09474fad8ec91474d44b75f6a7d33be99c021b76902d91baad98ddca6", "load_kwargs": {}, "load_name": "DD0034/DD0034", "url": "https://yt-project.org/data/PopIII_mini.tar.gz" }, "RT_particles.tar.gz": { "hash": "69f7a96dbc8fad60defc03f1a2e864de4ffbf5d881d7cac2c8727ae317e0a24f", "load_kwargs": {}, "load_name": "plt00050", "url": "https://yt-project.org/data/RT_particles.tar.gz" }, "RadAdvect.tar.gz": { "hash": "0511e8d45cdb9b268089bc4ef4608a58b7218cf0db9da8d83a14cbd34722784b", "load_kwargs": {}, "load_name": "plt00000", "url": "https://yt-project.org/data/RadAdvect.tar.gz" }, "RadTube.tar.gz": { "hash": "43a2f00c435cbaa4708d6e7e5e3ff0784699f5d28bc435ecbd677f64e6351837", "load_kwargs": {}, "load_name": "plt00500", "url": "https://yt-project.org/data/RadTube.tar.gz" }, "RamPressureStripping.tar.gz": { "hash": "4ef4f9f9127668699111091d31ece0fff6e7ab2e9e6af7c547810df31db4a40b", "load_kwargs": { "units_override": { "length_unit": 8.0236e+22, "mass_unit": 5.1649293e+39, "time_unit": 308600000000000.0 } }, "load_name": "id0/rps.0062.vtk", "url": "https://yt-project.org/data/RamPressureStripping.tar.gz" }, "SecondOrderQuads.tar.gz": { "hash": "b00000d4a6ec3907e2f22cbf58c7b582a9a63ca2792a0419fb56a8733b1ec904", "load_kwargs": {}, "load_name": "lid_driven_out.e", "url": "https://yt-project.org/data/SecondOrderQuads.tar.gz" }, "SecondOrderTets.tar.gz": { "hash": "73bbd2bf10d02287f15303215952d560969daab1919001969c29991561e26cb9", "load_kwargs": {}, "load_name": "tet10_unstructured_out.e", "url": "https://yt-project.org/data/SecondOrderTets.tar.gz" }, "SecondOrderTris.tar.gz": { "hash": "d5a4e05117ceb53b31e1ecef077355b05dbd8a43b6773751b48390ef80795165", "load_kwargs": {}, "load_name": "RZ_p_no_parts_do_nothing_bcs_cone_out.e", "url": "https://yt-project.org/data/SecondOrderTris.tar.gz" }, "Sedov3D.tar.gz": { "hash": "4e3220744024333293c707bafe64706d325e1f6ee0914bee4be78c3b48d3f97e", "load_kwargs": {}, "load_name": "Sedov_3d/sedov_hdf5_chk_0000", "url": "https://yt-project.org/data/Sedov3D.tar.gz" }, "ShockCloud.tar.gz": { "hash": "5a41fe85c293b5714fa3fb989c7d905ed8b6cc92276c524ec786933081176027", "load_kwargs": {}, "load_name": "id0/Cloud.0050.vtk", "url": "https://yt-project.org/data/ShockCloud.tar.gz" }, "SimbaExample.tar.gz": { "hash": "6770e4d5ead92cfc966bca5b8735903389809885f0f1b50523448f68bc53a5ec", "load_kwargs": {}, "load_name": "simba_example.hdf5", "url": "https://yt-project.org/data/SimbaExample.tar.gz" }, "StarParticles.tar.gz": { "hash": "83fbcb7d520cc49c25430003a9e1f1ca8ffed7ab9f83319aa91faa45494ed485", "load_kwargs": {}, "load_name": "plrd01000", "url": "https://yt-project.org/data/StarParticles.tar.gz" }, "TNGHalo.tar.gz": { "hash": "759b517500362d1ccd6286fe8854b1a303ecbce4678b9ff7ce541cfecda8a604", "load_kwargs": {}, "load_name": "halo_59.hdf5", "url": 
"https://yt-project.org/data/TNGHalo.tar.gz" }, "TipsyAuxiliary.tar.gz": { "hash": "c27205c3b5f673b9822c76212c59c9632027210a6ee5f3c2e9eac213b805a143", "load_kwargs": {}, "load_name": "ascii/onestar.00001", "url": "https://yt-project.org/data/TipsyAuxiliary.tar.gz" }, "TipsyGalaxy.tar.gz": { "hash": "34228eec5d43a8c0465b785efea39e4e963390730919e0bf819f8fca474d3341", "load_kwargs": {}, "load_name": "galaxy.00300", "url": "https://yt-project.org/data/TipsyGalaxy.tar.gz" }, "ToroShockTube.tar.gz": { "hash": "016541e0e600f77b820bfc7281bafc2e3c3a1bd6ae11567a0dce7247032a5f6a", "load_kwargs": {}, "load_name": "DD0001/data0001", "url": "https://yt-project.org/data/ToroShockTube.tar.gz" }, "TurbBoxLowRes.tar.gz": { "hash": "96b9e08dcf007bd4410ed98f9252aad33138515b8e83f4168f4f7bcda5e5bc27", "load_kwargs": {}, "load_name": "data.0005.3d.hdf5", "url": "https://yt-project.org/data/TurbBoxLowRes.tar.gz" }, "UnigridData.tar.gz": { "hash": "8efffc5377c99eaa45c086ac730d056b890c8358f17e8a2f7d8c281384fc2ab7", "load_kwargs": {}, "load_name": "velocity_field_20.fits", "url": "https://yt-project.org/data/UnigridData.tar.gz" }, "WDMerger_hdf5_chk_1000.tar.gz": { "hash": "a3806579f6e25b61ddc918dd6c3004d69cda88a94be3967111702cf7636ba158", "load_kwargs": {}, "load_name": "WDMerger_hdf5_chk_1000.hdf5", "url": "https://yt-project.org/data/WDMerger_hdf5_chk_1000.tar.gz" }, "WaveDarkMatter.tar.gz": { "hash": "4996fbf505ea56a196970d8920e9e46d0da70fecc792355dabaab6be8e5f0172", "load_kwargs": {}, "load_name": "psiDM_000020", "url": "https://yt-project.org/data/WaveDarkMatter.tar.gz" }, "WindTunnel.tar.gz": { "hash": "414b923d59f671196f2185c5216ae241156619b5042854df6236dea9fa10ee2b", "load_kwargs": {}, "load_name": "windtunnel_4lev_hdf5_plt_cnt_0030", "url": "https://yt-project.org/data/WindTunnel.tar.gz" }, "ZeldovichPancake.tar.gz": { "hash": "88467b9babe6886ce393ee9ee01ade9be3842a32cef9946863c68f765b8ebd32", "load_kwargs": {}, "load_name": "plt32.2d.hdf5", "url": "https://yt-project.org/data/ZeldovichPancake.tar.gz" }, "acisf05356N003_evt2.fits.gz": { "hash": "6fc406dd056b525592f704b202dadf52c41024660dbbd96cfbeb830d710a70d8", "load_kwargs": {}, "load_name": null, "url": "https://yt-project.org/data/acisf05356N003_evt2.fits.gz" }, "agora_1e11.00400.tar.gz": { "hash": "3a8820735bcf5067dfbe0fcd90d79121a801ea1a34c7d6de7e09185d3c635e6a", "load_kwargs": {}, "load_name": "agora_1e11.00400", "url": "https://yt-project.org/data/agora_1e11.00400.tar.gz" }, "ahf_halos.tar.gz": { "hash": "66d60ab80520cdeb1510a9c7e48a214d79f2559f7808bc346722028b5b48f8f7", "load_kwargs": {}, "load_name": "snap_N64L16_068.parameter", "url": "https://yt-project.org/data/ahf_halos.tar.gz" }, "athenapk_cluster.tar.gz": { "hash": "95f641b31cca00a39fa728be367861db9abc75ec3e3adcd9103325f675d3e9c1", "load_kwargs": {}, "load_name": "athenapk_cluster.restart.00000.rhdf", "url": "https://yt-project.org/data/athenapk_cluster.tar.gz" }, "athenapk_disk.tar.gz": { "hash": "3bd59a494a10b28cd691d1f69700f9e100a6a3a32541b0e16348d81163404cd0", "load_kwargs": {}, "load_name": "athenapk_disk.prim.00000.phdf", "url": "https://yt-project.org/data/athenapk_disk.tar.gz" }, "bw_cartesian_3d.tar.gz": { "hash": "de8b3b4c2a8c93f857c6729a118960eb1bcf244e1de07aee231d6fcc589c5628", "load_kwargs": {}, "load_name": "output0001.dat", "url": "https://yt-project.org/data/bw_cartesian_3d.tar.gz" }, "bw_cylindrical_3d.tar.gz": { "hash": "7392e60dabd7ef636ce98929c56d4d8ece89958716a67e32792dce5981d64709", "load_kwargs": {}, "load_name": "output0001.dat", "url": 
"https://yt-project.org/data/bw_cylindrical_3d.tar.gz" }, "bw_polar_2d.tar.gz": { "hash": "85308582ab55049474a9a302d1e88459a89f4e8925c5b901fbf3c590b1f269ab", "load_kwargs": {}, "load_name": "output0001.dat", "url": "https://yt-project.org/data/bw_polar_2d.tar.gz" }, "bw_spherical_2d.tar.gz": { "hash": "2407f8f3352d7f2687a608bdfb7143e892d54b983d5a0d1f869132f41c98dbcc", "load_kwargs": {}, "load_name": "output0001.dat", "url": "https://yt-project.org/data/bw_spherical_2d.tar.gz" }, "kh2d.tar.gz": { "hash": "2ea94f4f24b3423dcb67dc1b1eb28c83a02b66b6c79b29d1703b7aa2aa76b71e", "load_kwargs": {}, "load_name": "output0001.dat", "url": "https://yt-project.org/data/kh2d.tar.gz" }, "kh3d.tar.gz": { "hash": "ae354a29eb88e2cd6720efdb98de96a22b4988364cd65cf8f8dce071914430c3", "load_kwargs": {}, "load_name": "output0001.dat", "url": "https://yt-project.org/data/kh3d.tar.gz" }, "mhd_jet.tar.gz": { "hash": "24b7d24ec10d876dcc508563d516e1a4fe323150a6d5aef2f4640b020da817d7", "load_kwargs": {}, "load_name": "Jet0003.dat", "url": "https://yt-project.org/data/mhd_jet.tar.gz" }, "riemann1d.tar.gz": { "hash": "377483c6b74c1826b5f7c5e2b52a5b8ffeb974515d62d6135208cc96f7f09ae4", "load_kwargs": {}, "load_name": "output0001.dat", "url": "https://yt-project.org/data/riemann1d.tar.gz" }, "rmi_dust_2d.tar.gz": { "hash": "96fafc63755b065ffa17d63a7ab5f01d4183ed1b10728ce1c5a23281cb43f5a6", "load_kwargs": {"parfiles": ["rmi_dust_2d/amrvac.par"]}, "load_name": "output0001.dat", "url": "https://yt-project.org/data/rmi_dust_2d.tar.gz" }, "solar_prom2d.tar.gz": { "hash": "a8ddb034c27badf8160af1c4622ec9e9835c4434a5556e55212df6803c5020dc", "load_kwargs": {"parfiles": ["solar_prom2d/amrvac.par"]}, "load_name": "output0001.dat", "url": "https://yt-project.org/data/solar_prom2d.tar.gz" }, "arbor.tar.gz": { "hash": "d6c89c6496acdd92ef086736b8644996de58b1f4292b67b01d5db94ce60210fb", "load_kwargs": {}, "load_name": null, "url": "https://yt-project.org/data/arbor.tar.gz" }, "big_tipsy.tar.gz": { "hash": "fe7a5b1b7bb3d960a3ccb77457c039a010c05b5caa10b6891add3274fb9f7e36", "load_kwargs": {}, "load_name": "g1536.00256", "url": "https://yt-project.org/data/big_tipsy.tar.gz" }, "c5.tar.gz": { "hash": "db007304b128472d2439c946a8169f5cc00e0701bbcda18012e1beb8cd30dba6", "load_kwargs": {}, "load_name": "c5.h5m", "url": "https://yt-project.org/data/c5.tar.gz" }, "castro_sedov_1d_cyl_plt00150.tar.gz": { "hash": "c96a8fdf3cd43563a88e34569a6b37e57a0349692a9100ea65252506a167f08f", "load_kwargs": {}, "load_name": "", "url": "https://yt-project.org/data/castro_sedov_1d_cyl_plt00150.tar.gz" }, "castro_sedov_2d_cyl_in_cart_plt00150.tar.gz": { "hash": "8b73a37a0503dba593af65f8bf16dfcc11ac825d671f11e5a364a2dee63c9047", "load_kwargs": {}, "load_name": null, "url": "https://yt-project.org/data/castro_sedov_2d_cyl_in_cart_plt00150.tar.gz" }, "castro_sod_x_plt00036.tar.gz": { "hash": "3f0a586b41e7b54fa2b3cddd50f9384feb2efe1fe1a815e7348965ae7bf88f78", "load_kwargs": {}, "load_name": null, "url": "https://yt-project.org/data/castro_sod_x_plt00036.tar.gz" }, "check_pooch.py": { "hash": "d73fa43c3d56b3487ba96d45c0e7198f3d5471475f19105ee4fcd6442e2a3024", "load_kwargs": {}, "load_name": null, "url": "https://yt-project.org/data/check_pooch.py.tar.gz" }, "cm1_tornado_lofs.tar.gz": { "hash": "7a84f688b363b79d2ac7bf53d1a963f01d07926a0d268e4163ba87ea81699706", "load_kwargs": {}, "load_name": "nc4_cm1_lofs_tornado_test.nc", "url": "https://yt-project.org/data/cm1_tornado_lofs.tar.gz" }, "datafiles.json": { "hash": 
"56732241ddfe5770d0dacc52c1957a0c8b13ca095fd0d8f029dbd263be714838", "load_kwargs": {}, "load_name": null, "url": "https://yt-project.org/data/datafiles.json.tar.gz" }, "enzo_cosmology_plus.tar.gz": { "hash": "dab1d5cdf30ec6a99b556fe20cc91ba9a3b37c079b608de7cb92a240b2a943c8", "load_kwargs": {}, "load_name": "DD0046/DD0046", "url": "https://yt-project.org/data/enzo_cosmology_plus.tar.gz" }, "enzo_tiny_cosmology.tar.gz": { "hash": "91b04e443dfdafc2de234d7bb6f7ab4a0a970ec43e6ff5c59e0c9ded8fa41fa9", "load_kwargs": {}, "load_name": "DD0046/DD0046", "url": "https://yt-project.org/data/enzo_tiny_cosmology.tar.gz" }, "example-2d.tar.gz": { "hash": "d9f93ec4c83bd2abe14ea7bac447d154f44b86087e62945de8ce8e295f0d231d", "load_kwargs": {}, "load_name": "hdf5/data00000100.h5", "url": "https://yt-project.org/data/example-2d.tar.gz" }, "example-3d.tar.gz": { "hash": "42f860e42e80885d80f4ecfa84f58a8b083bb69d6b9e3b1bae1517003cb163c3", "load_kwargs": {}, "load_name": "hdf5/data00000100.h5", "url": "https://yt-project.org/data/example-3d.tar.gz" }, "fiducial_1to1_b0.tar.gz": { "hash": "1af47a8230addb96055a5c73d1fab9b413f566df191d03c60f0f202df845e7ff", "load_kwargs": {}, "load_name": "fiducial_1to1_b0_hdf5_part_0004", "url": "https://yt-project.org/data/fiducial_1to1_b0.tar.gz" }, "fiducial_1to3_b1.tar.gz": { "hash": "23c9faaa1af084b186f4115662967473b1b5cfe76b9afe232be800b6bd7da941", "load_kwargs": {}, "load_name": "fiducial_1to3_b1_hdf5_part_0080", "url": "https://yt-project.org/data/fiducial_1to3_b1.tar.gz" }, "gadget_fof_halos.tar.gz": { "hash": "77430eaca62ccc0c476d863f0beff5b0c71fa827b7a2fc33aa9cf803e001cc90", "load_kwargs": {}, "load_name": "groups_042/fof_subhalo_tab_042.0.hdf5", "url": "https://yt-project.org/data/gadget_fof_halos.tar.gz" }, "gadget_halos.tar.gz": { "hash": "1eb2c77700fac6eb8f9435549e0613ce38035c9106b295b9d5f6c70c82542842", "load_kwargs": {}, "load_name": "data/groups_076/fof_subhalo_tab_076.0.hdf5", "url": "https://yt-project.org/data/gadget_halos.tar.gz" }, "geos.tar.gz": { "hash": "2fbf9537be8c4f5da84ea22ed0971ded1a9aa71140fddda92f00f20df6fc43d3", "load_kwargs": {}, "load_name": "GEOS.fp.asm.inst3_3d_aer_Nv.20180822_0900.V01.nc4", "url": "https://yt-project.org/data/geos.tar.gz" }, "gizmo_64.tar.gz": { "hash": "11b8f425832bc1579854e013390483c9131ef5f27e3a74c8685a6d6dfcb809e5", "load_kwargs": {}, "load_name": "output/snap_N64L16_135.hdf5", "url": "https://yt-project.org/data/gizmo_64.tar.gz" }, "gizmo_cosmology_plus.tar.gz": { "hash": "73062872b13c6a8c86ac134b18e177c3526250000cef5559ceb0b2383cbaba6b", "load_kwargs": {}, "load_name": "snap_N128L16_131.hdf5", "url": "https://yt-project.org/data/gizmo_cosmology_plus.tar.gz" }, "gizmo_mhd_mwdisk.tar.gz": { "hash": "6f64248af83f50543173ade9c2c3f33ef1eae762b8e97196da5caf79ba4c5651", "load_kwargs": {}, "load_name": "gizmo_mhd_mwdisk.hdf5", "url": "https://yt-project.org/data/gizmo_mhd_mwdisk.tar.gz" }, "gizmo_zeldovich.tar.gz": { "hash": "5f1a73ead736024ffb6c099f84e834ec927d2818c93d7c775d9ff625adaa7c51", "load_kwargs": {}, "load_name": "snapshot_076_wi_gizver.hdf5", "url": "https://yt-project.org/data/gizmo_zeldovich.tar.gz" }, "grs-50-cube.fits.gz": { "hash": "1aff152f616d2626c8515c1493e65f07843d9b5f2ba73b7e4271a2dc6a9e6da7", "load_kwargs": {}, "load_name": null, "url": "https://yt-project.org/data/grs-50-cube.fits.gz" }, "halo1e11_run1.00400.tar.gz": { "hash": "414d05f9d5aa2dc2811e55fab59ad2c1426ec51675aaf41db773b24cd2f0aa5f", "load_kwargs": {}, "load_name": "halo1e11_run1.00400", "url": 
"https://yt-project.org/data/halo1e11_run1.00400.tar.gz" }, "hello-0210.tar.gz": { "hash": "d18964e339c8ddc2a56859edd2cd853648b914e63434a59d4a457a876d9496f3", "load_kwargs": {}, "load_name": "hello-0210.block_list", "url": "https://yt-project.org/data/hello-0210.tar.gz" }, "m33_hi.fits.gz": { "hash": "6a44b6c352a5c98c58f0b2b7498fd87e2430c530971eaff68b843146d82d6609", "load_kwargs": {}, "load_name": null, "url": "https://yt-project.org/data/m33_hi.fits.gz" }, "maestro_subCh_plt00248.tar.gz": { "hash": "6de1b63b63ce9738e4596ccf4cc75087d5029c892c07f40c2c69887e76b8ec1a", "load_kwargs": {}, "load_name": "maestro_subCh_plt00248", "url": "https://yt-project.org/data/maestro_subCh_plt00248.tar.gz" }, "maestro_xrb_lores_23437.tar.gz": { "hash": "a17b63609a2c50a829e26420f093ee898237b1ef8256ef38ee8be56a6f824460", "load_kwargs": {}, "load_name": null, "url": "https://yt-project.org/data/maestro_xrb_lores_23437.tar.gz" }, "medium_tipsy.tar.gz": { "hash": "b77c1802c5ea6cf3ab3a698a741fbfdbbcdb561bf57be2fe7a3c9d00980ff438", "load_kwargs": {}, "load_name": "g1536.00256", "url": "https://yt-project.org/data/medium_tipsy.tar.gz" }, "nyx_sedov_plt00086.tgz": { "hash": "308f6b2a81363c5aada35941c199f149d8688f8ce65803f33d6dedce43a57a1c", "load_kwargs": {}, "load_name": null, "url": "https://yt-project.org/data/nyx_sedov_plt00086.tgz" }, "nyx_small.tar.gz": { "hash": "1eefb443c2de6d9c6f4546bb87e1bde89ae98adb97c860a6065ac3177ecf1127", "load_kwargs": {}, "load_name": "nyx_small_00000", "url": "https://yt-project.org/data/nyx_small.tar.gz" }, "output_00080.tar.gz": { "hash": "175b18fbf72246c36442d7886f04194e1ea4d9d0b2bac2d5568a887d5f51bf29", "load_kwargs": {}, "load_name": "info_00080.txt", "url": "https://yt-project.org/data/output_00080.tar.gz" }, "output_00080_halos.tar.gz": { "hash": "fb8f573ea7cb183e1531091b3fce91cf7015cfdf5ccbce9ef2742e3135b52d4c", "load_kwargs": {}, "load_name": null, "url": "https://yt-project.org/data/output_00080_halos.tar.gz" }, "output_00101.tar.gz": { "hash": "7ff2287c225d4e68f7ec04efd183732bc7ae5a2686c0d7626b4f9f4b6361f6df", "load_kwargs": {}, "load_name": "info_00101.txt", "url": "https://yt-project.org/data/output_00101.tar.gz" }, "owls_fof_halos.tar.gz": { "hash": "7ed6bd5f2eab16e8614f30676d7a0129c5fda98f994c06000e9fae8275a14c90", "load_kwargs": {}, "load_name": "groups_008/group_008.0.hdf5", "url": "https://yt-project.org/data/owls_fof_halos.tar.gz" }, "parthenon_advection.tar.gz": { "hash": "93e9e747cc9c0f230f87ea960a116c7e8c117b348f1488d72402aaaa906e4e89", "load_kwargs": {}, "load_name": "advection_2d.out0.final.phdf", "url": "https://yt-project.org/data/parthenon_advection.tar.gz" }, "ramses_empty_record.tar.gz": { "hash": "1d119d8afa144abce5b980e5ba47c45bdfe5c851de8c7a43823b62524b0bd6e0", "load_kwargs": {}, "load_name": "output_00003/info_00003.txt", "url": "https://yt-project.org/data/ramses_empty_record.tar.gz" }, "ramses_extra_fields_small.tar.gz": { "hash": "8d2dde6e6150f0767ebe3ace84e2e790503154c5d90474008afdfecbac68ffed", "load_kwargs": {}, "load_name": "output_00001/info_00001.txt", "url": "https://yt-project.org/data/ramses_extra_fields_small.tar.gz" }, "ramses_mhd_128.tar.gz": { "hash": "9bb48f090e2fdb795b2be49f571edecdd944c389c62283e5357a226406504d76", "load_kwargs": {}, "load_name": "output_00027/info_00027.txt", "url": "https://yt-project.org/data/ramses_mhd_128.tar.gz" }, "ramses_mhd_amr.tar.gz": { "hash": "bd7e0a11f971e246a3393272be4e4b7e0d84e85351bdd2acac2bd8025cf00774", "load_kwargs": {}, "load_name": "output_00019/info_00019.txt", "url": 
"https://yt-project.org/data/ramses_mhd_amr.tar.gz" }, "ramses_new_format.tar.gz": { "hash": "6ef426587454846456473c2026229d620e588a1fa9cbe409a0eb0a11517958c6", "load_kwargs": {}, "load_name": "output_00002/info_00002.txt", "url": "https://yt-project.org/data/ramses_new_format.tar.gz" }, "ramses_rt_00088.tar.gz": { "hash": "bdeb1480aa3e669f972af982ad89cd17f45749a90e3ba74dc2370201bdda4b3c", "load_kwargs": {}, "load_name": "output_00088/info_00088.txt", "url": "https://yt-project.org/data/ramses_rt_00088.tar.gz" }, "ramses_sink_00016.tar.gz": { "hash": "e28306c6e6cfc84ac0fcc209bb04b9677efd1916a2a1bcd5cb0b3f1ec90ce83a", "load_kwargs": {}, "load_name": "output_00016/info_00016.txt", "url": "https://yt-project.org/data/ramses_sink_00016.tar.gz" }, "ramses_star_formation.tar.gz": { "hash": "7513bfe14e270de6f643623815d815cf9f4adf0bf7dfb33ff3bc20fcd3f965e6", "load_kwargs": {}, "load_name": "output_00013/info_00013.txt", "url": "https://yt-project.org/data/ramses_star_formation.tar.gz" }, "rockstar_halos.tar.gz": { "hash": "5090052faad3a397ab69d833a15b1779466e1893a024dca55de18bfd9aeae774", "load_kwargs": {}, "load_name": "halos_0.0.bin", "url": "https://yt-project.org/data/rockstar_halos.tar.gz" }, "sedov_1d_sph_plt00120.tar.gz": { "hash": "25e8316a64a747043c2c95bfaaf6092a5aa2748a8e31295fa86a0bb793e7c1af", "load_kwargs": {}, "load_name": null, "url": "https://yt-project.org/data/sedov_1d_sph_plt00120.tar.gz" }, "sedov_tst_0004.tar.gz": { "hash": "f0c592efde0257efea77e743167439da6df44cde71a9917b305751a8818eabb5", "load_kwargs": {}, "load_name": "sedov/sedov_tst_0004.h5", "url": "https://yt-project.org/data/sedov_tst_0004.tar.gz" }, "sizmbhloz-clref04SNth-rs9_a0.9011.tar.gz": { "hash": "3fc14b8e48e9b80c4ea4b324c5815eccb4ca04fe90903ce77f1496838ad764d4", "load_kwargs": {}, "load_name": "sizmbhloz-clref04SNth-rs9_a0.9011.art", "url": "https://yt-project.org/data/sizmbhloz-clref04SNth-rs9_a0.9011.tar.gz" }, "SmartStars.tar.gz": { "hash": "990ff61aa85408d33c4c6be7181e1f64fcd738526dcf0e34be5605d8bb158605", "load_kwargs": {}, "load_name": "DD0100/output_0100", "url": "https://yt-project.org/data/SmartStars.tar.gz" }, "snapshot_010.tar.gz": { "hash": "c98fa744de250d02833b643e3250dce2cf2c7a622c42c96b201024ee28582ba9", "load_kwargs": {}, "load_name": "snapshot_010", "url": "https://yt-project.org/data/snapshot_010.tar.gz" }, "snapshot_028_z000p000.tar.gz": { "hash": "35b00963a0b34f315054a7b254854859f626155dc368ac2c83158c15ce73e6d1", "load_kwargs": {}, "load_name": "snap_028_z000p000.0.hdf5", "url": "https://yt-project.org/data/snapshot_028_z000p000.tar.gz" }, "snapshot_033.tar.gz": { "hash": "f15788bd298c1f292548d2773f8232ad1f2e41723e210d200b479cb97c4d21bd", "load_kwargs": {}, "load_name": "snap_033.0.hdf5", "url": "https://yt-project.org/data/snapshot_033.tar.gz" }, "snipshot_399_z000p000.tar.gz": { "hash": "a4009624898d9926a90dffa79f9eaf8fe3bde9ba5adaf40577bc36d932016672", "load_kwargs": {}, "load_name": "snip_399_z000p000.0.hdf5", "url": "https://yt-project.org/data/snipshot_399_z000p000.tar.gz" }, "test_outputs.tar.gz": { "hash": "f7f59ee7e8b9e45d42e005b8116f896ce85ea1ea4a7a1c9b44c6de5b2f09a625", "load_kwargs": {}, "load_name": "RadTube/plt00500", "url": "https://yt-project.org/data/test_outputs.tar.gz" }, "tiny_fof_halos.tar.gz": { "hash": "48555ba2c482be88d7944a79ecf27967b517366d4d1075a7a5639e6d2339d8f0", "load_kwargs": {}, "load_name": "DD0045/DD0045.0.h5", "url": "https://yt-project.org/data/tiny_fof_halos.tar.gz" }, "ytdata_test.tar.gz": { "hash": 
"cafb2b06ab3190ba17909585b58a4724e25f27ac72f11d6dff1a482146eb8958", "load_kwargs": {}, "load_name": "slice.h5", "url": "https://yt-project.org/data/ytdata_test.tar.gz" } } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/startup_tasks.py0000644000175100001770000001247114714401662015507 0ustar00runnerdocker# This handles the command line. import argparse import os import signal import sys from yt.config import ytcfg from yt.funcs import ( mylog, paste_traceback, paste_traceback_detailed, signal_ipython, signal_print_traceback, ) from yt.utilities import rpdb exe_name = os.path.basename(sys.executable) # At import time, we determined whether or not we're being run in parallel. def turn_on_parallelism(): parallel_capable = False try: # we import this to check if mpi4py is installed from mpi4py import MPI # NOQA except ImportError as e: mylog.error( "Warning: Attempting to turn on parallelism, " "but mpi4py import failed. Try pip install mpi4py." ) raise e # Now we have to turn on the parallelism from the perspective of the # parallel_analysis_interface from yt.utilities.parallel_tools.parallel_analysis_interface import ( enable_parallelism, ) parallel_capable = enable_parallelism() return parallel_capable # This fallback is for Paraview: # We use two signals, SIGUSR1 and SIGUSR2. In a non-threaded environment, # we set up handlers to process these by printing the current stack and to # raise a RuntimeError. The latter can be used, inside pdb, to catch an error # and then examine the current stack. try: signal.signal(signal.SIGUSR1, signal_print_traceback) mylog.debug("SIGUSR1 registered for traceback printing") signal.signal(signal.SIGUSR2, signal_ipython) mylog.debug("SIGUSR2 registered for IPython Insertion") except (ValueError, RuntimeError, AttributeError): # Not in main thread pass class SetExceptionHandling(argparse.Action): def __call__(self, parser, namespace, values, option_string=None): # If we recognize one of the arguments on the command line as indicating a # different mechanism for handling tracebacks, we attach one of those handlers # and remove the argument from sys.argv. # if self.dest == "paste": sys.excepthook = paste_traceback mylog.debug("Enabling traceback pasting") elif self.dest == "paste-detailed": sys.excepthook = paste_traceback_detailed mylog.debug("Enabling detailed traceback pasting") elif self.dest == "detailed": import cgitb cgitb.enable(format="text") mylog.debug("Enabling detailed traceback reporting") elif self.dest == "rpdb": sys.excepthook = rpdb.rpdb_excepthook mylog.debug("Enabling remote debugging") class SetConfigOption(argparse.Action): def __call__(self, parser, namespace, values, option_string=None): param, val = values.split("=") mylog.debug("Overriding config: %s = %s", param, val) ytcfg["yt", param] = val if param == "log_level": # special case mylog.setLevel(int(val)) class YTParser(argparse.ArgumentParser): def error(self, message): """error(message: string) Prints a help message that is more detailed than the argparse default and then exits. 
""" self.print_help(sys.stderr) self.exit(2, f"{self.prog}: error: {message}\n") parser = YTParser(description="yt command line arguments") parser.add_argument( "--config", action=SetConfigOption, help="Set configuration option, in the form param=value", ) parser.add_argument( "--paste", action=SetExceptionHandling, help="Paste traceback to paste.yt-project.org", nargs=0, ) parser.add_argument( "--paste-detailed", action=SetExceptionHandling, help="Paste a detailed traceback with local variables to " + "paste.yt-project.org", nargs=0, ) parser.add_argument( "--detailed", action=SetExceptionHandling, help="Display detailed traceback.", nargs=0, ) parser.add_argument( "--rpdb", action=SetExceptionHandling, help="Enable remote pdb interaction (for parallel debugging).", nargs=0, ) parser.add_argument( "--parallel", action="store_true", default=False, dest="parallel", help="Run in MPI-parallel mode (must be launched as an MPI task)", ) if not hasattr(sys, "argv") or sys.argv is None: sys.argv = [] unparsed_args: list[str] = [] parallel_capable = False if not ytcfg.get("yt", "internals", "command_line"): opts, unparsed_args = parser.parse_known_args() # THIS IS NOT SUCH A GOOD IDEA: # sys.argv = [a for a in unparsed_args] if opts.parallel: parallel_capable = turn_on_parallelism() subparsers = parser.add_subparsers( title="subcommands", dest="subcommands", description="Valid subcommands", ) else: subparsers = parser.add_subparsers( title="subcommands", dest="subcommands", description="Valid subcommands", ) def print_help(*args, **kwargs): parser.print_help() help_parser = subparsers.add_parser("help", help="Print help message") help_parser.set_defaults(func=print_help) if parallel_capable: pass elif ( exe_name in ["mpi4py", "embed_enzo", "python{}.{}-mpi".format(*sys.version_info)] or "_parallel" in dir(sys) or any("ipengine" in arg for arg in sys.argv) or any("cluster-id" in arg for arg in sys.argv) ): parallel_capable = turn_on_parallelism() else: parallel_capable = False ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/testing.py0000644000175100001770000017520414714401662014261 0ustar00runnerdockerimport hashlib import itertools as it import os import pickle import shutil import sys import tempfile import unittest from collections.abc import Callable, Mapping from functools import wraps from importlib.util import find_spec from shutil import which from typing import TYPE_CHECKING from unittest import SkipTest import matplotlib import numpy as np from more_itertools import always_iterable from numpy.random import RandomState from unyt.exceptions import UnitOperationError from yt._maintenance.deprecation import issue_deprecation_warning from yt.config import ytcfg from yt.frontends.stream.data_structures import StreamParticlesDataset from yt.funcs import is_sequence from yt.loaders import load, load_particles from yt.units.yt_array import YTArray, YTQuantity if TYPE_CHECKING: from collections.abc import Mapping from yt._typing import AnyFieldKey ANSWER_TEST_TAG = "answer_test" # Expose assert_true and assert_less_equal from unittest.TestCase # this is adopted from nose. Doing this here allows us to avoid importing # nose at the top level. 
def _deprecated_assert_func(func):
    @wraps(func)
    def retf(*args, **kwargs):
        issue_deprecation_warning(
            f"yt.testing.{func.__name__} is deprecated",
            since="4.2",
            stacklevel=3,
        )
        return func(*args, **kwargs)

    return retf


class _Dummy(unittest.TestCase):
    def nop(self):
        pass


_t = _Dummy("nop")
assert_true = _deprecated_assert_func(_t.assertTrue)
assert_less_equal = _deprecated_assert_func(_t.assertLessEqual)


def assert_rel_equal(a1, a2, decimals, err_msg="", verbose=True):
    from numpy.testing import assert_almost_equal

    # We have nan checks in here because occasionally we have fields that get
    # weighted without non-zero weights. I'm looking at you, particle fields!
    if isinstance(a1, np.ndarray):
        assert a1.size == a2.size
        # Mask out NaNs
        assert (np.isnan(a1) == np.isnan(a2)).all()
        a1[np.isnan(a1)] = 1.0
        a2[np.isnan(a2)] = 1.0
        # Mask out 0
        ind1 = np.array(np.abs(a1) < np.finfo(a1.dtype).eps)
        ind2 = np.array(np.abs(a2) < np.finfo(a2.dtype).eps)
        assert (ind1 == ind2).all()
        a1[ind1] = 1.0
        a2[ind2] = 1.0
    elif np.any(np.isnan(a1)) and np.any(np.isnan(a2)):
        return True
    if not isinstance(a1, np.ndarray) and a1 == a2 == 0.0:
        # NANS!
        a1 = a2 = 1.0
    return assert_almost_equal(
        np.array(a1) / np.array(a2), 1.0, decimals, err_msg=err_msg, verbose=verbose
    )


# tested: volume integral is 1.
def cubicspline_python(
    x: float | np.ndarray,
) -> np.ndarray:
    """
    cubic spline SPH kernel function for testing against more efficient
    cython methods

    Parameters
    ----------
    x: impact parameter / smoothing length [dimensionless]

    Returns
    -------
    value of the kernel function
    """
    # C is 8/pi
    _c = 8.0 / np.pi
    x = np.asarray(x)
    kernel = np.zeros(x.shape, dtype=x.dtype)
    half1 = np.where(np.logical_and(x >= 0.0, x <= 0.5))
    kernel[half1] = 1.0 - 6.0 * x[half1] ** 2 * (1.0 - x[half1])
    half2 = np.where(np.logical_and(x > 0.5, x <= 1.0))
    kernel[half2] = 2.0 * (1.0 - x[half2]) ** 3
    return kernel * _c


def integrate_kernel(
    kernelfunc: Callable[[float], float], b: float, hsml: float
) -> float:
    """
    integrates a kernel function over a line passing entirely through it

    Parameters:
    -----------
    kernelfunc: the kernel function to integrate
    b: impact parameter
    hsml: smoothing length [same units as impact parameter]

    Returns:
    --------
    the integral of the SPH kernel function.
    units: 1 / units of b and hsml
    """
    pre = 1.0 / hsml**2
    x = b / hsml
    xmax = np.sqrt(1.0 - x**2)
    xmin = -1.0 * xmax
    xe = np.linspace(xmin, xmax, 500)  # shape: 500, x.shape
    xc = 0.5 * (xe[:-1, ...] + xe[1:, ...])
    dx = np.diff(xe, axis=0)
    spv = kernelfunc(np.sqrt(xc**2 + x**2))
    integral = np.sum(spv * dx, axis=0)
    return pre * integral


_zeroperiods = np.array([0.0, 0.0, 0.0])


def distancematrix(
    pos3_i0: np.ndarray,
    pos3_i1: np.ndarray,
    periodic: tuple[bool, bool, bool] = (True,) * 3,
    periods: np.ndarray = _zeroperiods,
) -> np.ndarray:
    """
    Calculates the distances between two arrays of points.

    Parameters:
    ----------
    pos3_i0: shape (first number of points, 3)
        positions of the first set of points. The second index is for
        positions along the different cartesian axes
    pos3_i1: shape (second number of points, 3)
        as pos3_i0, but for the second set of points
    periodic: are the positions along each axis periodic (True) or not
    periods: the periods along each axis. Ignored if positions in a given
        direction are not periodic.

    Returns:
    --------
    a 2D-array of distances between positions `pos3_i0` (changes along
    index 0) and `pos3_i1` (changes along index 1)
    """
    d2 = np.zeros((len(pos3_i0), len(pos3_i1)), dtype=pos3_i0.dtype)
    for ax in range(3):
        # 'center on' pos3_i1
        _d = pos3_i0[:, ax, np.newaxis] - pos3_i1[np.newaxis, :, ax]
        if periodic[ax]:
            _period = periods[ax]
            _d += 0.5 * _period  # center on half box size
            _d %= _period  # wrap coordinate to 0 -- boxsize range
            _d -= 0.5 * _period  # center back to zero
        d2 += _d**2
    return np.sqrt(d2)


def amrspace(extent, levels=7, cells=8):
    """Creates two numpy arrays representing the left and right bounds of
    an AMR grid as well as an array for the AMR level of each cell.

    Parameters
    ----------
    extent : array-like
        This is a sequence of length 2*ndims that is the bounds of each
        dimension. For example, the 2D unit square would be given by
        [0.0, 1.0, 0.0, 1.0]. A 3D cylindrical grid may look like
        [0.0, 2.0, -1.0, 1.0, 0.0, 2*np.pi].
    levels : int or sequence of ints, optional
        This is the number of AMR refinement levels. If given as a sequence
        (of length ndims), then each dimension will be refined down to this
        level. All values in this array must be the same or zero. A zero
        valued dimension indicates that this dim should not be refined.
        Taking the 3D cylindrical example above if we don't want to refine
        theta but want r and z at 5 we would set levels=(5, 5, 0).
    cells : int, optional
        This is the number of cells per refinement level.

    Returns
    -------
    left : float ndarray, shape=(npoints, ndims)
        The left AMR grid points.
    right : float ndarray, shape=(npoints, ndims)
        The right AMR grid points.
    level : int ndarray, shape=(npoints,)
        The AMR level for each point.

    Examples
    --------
    >>> l, r, lvl = amrspace([0.0, 2.0, 1.0, 2.0, 0.0, 3.14], levels=(3, 3, 0), cells=2)
    >>> print(l)
    [[ 0. 1. 0. ]
     [ 0.25 1. 0. ]
     [ 0. 1.125 0. ]
     [ 0.25 1.125 0. ]
     [ 0.5 1. 0. ]
     [ 0. 1.25 0. ]
     [ 0.5 1.25 0. ]
     [ 1. 1. 0. ]
     [ 0. 1.5 0. ]
     [ 1. 1.5 0.
]] """ extent = np.asarray(extent, dtype="f8") dextent = extent[1::2] - extent[::2] ndims = len(dextent) if isinstance(levels, int): minlvl = maxlvl = levels levels = np.array([levels] * ndims, dtype="int32") else: levels = np.asarray(levels, dtype="int32") minlvl = levels.min() maxlvl = levels.max() if minlvl != maxlvl and (minlvl != 0 or {minlvl, maxlvl} != set(levels)): raise ValueError("all levels must have the same value or zero.") dims_zero = levels == 0 dims_nonzero = ~dims_zero ndims_nonzero = dims_nonzero.sum() npoints = (cells**ndims_nonzero - 1) * maxlvl + 1 left = np.empty((npoints, ndims), dtype="float64") right = np.empty((npoints, ndims), dtype="float64") level = np.empty(npoints, dtype="int32") # fill zero dims left[:, dims_zero] = extent[::2][dims_zero] right[:, dims_zero] = extent[1::2][dims_zero] # fill non-zero dims dcell = 1.0 / cells left_slice = tuple( ( slice(extent[2 * n], extent[2 * n + 1], extent[2 * n + 1]) if dims_zero[n] else slice(0.0, 1.0, dcell) ) for n in range(ndims) ) right_slice = tuple( ( slice(extent[2 * n + 1], extent[2 * n], -extent[2 * n + 1]) if dims_zero[n] else slice(dcell, 1.0 + dcell, dcell) ) for n in range(ndims) ) left_norm_grid = np.reshape(np.mgrid[left_slice].T.flat[ndims:], (-1, ndims)) lng_zero = left_norm_grid[:, dims_zero] lng_nonzero = left_norm_grid[:, dims_nonzero] right_norm_grid = np.reshape(np.mgrid[right_slice].T.flat[ndims:], (-1, ndims)) rng_zero = right_norm_grid[:, dims_zero] rng_nonzero = right_norm_grid[:, dims_nonzero] level[0] = maxlvl left[0, :] = extent[::2] right[0, dims_zero] = extent[1::2][dims_zero] right[0, dims_nonzero] = (dcell**maxlvl) * dextent[dims_nonzero] + extent[::2][ dims_nonzero ] for i, lvl in enumerate(range(maxlvl, 0, -1)): start = (cells**ndims_nonzero - 1) * i + 1 stop = (cells**ndims_nonzero - 1) * (i + 1) + 1 dsize = dcell ** (lvl - 1) * dextent[dims_nonzero] level[start:stop] = lvl left[start:stop, dims_zero] = lng_zero left[start:stop, dims_nonzero] = lng_nonzero * dsize + extent[::2][dims_nonzero] right[start:stop, dims_zero] = rng_zero right[start:stop, dims_nonzero] = ( rng_nonzero * dsize + extent[::2][dims_nonzero] ) return left, right, level def _check_field_unit_args_helper(args: dict, default_args: dict): values = list(args.values()) keys = list(args.keys()) if all(v is None for v in values): for key in keys: args[key] = default_args[key] elif None in values: raise ValueError( "Error in creating a fake dataset:" f" either all or none of the following arguments need to specified: {keys}." ) elif any(len(v) != len(values[0]) for v in values): raise ValueError( "Error in creating a fake dataset:" f" all the following arguments must have the same length: {keys}." 
) return list(args.values()) _fake_random_ds_default_fields = ("density", "velocity_x", "velocity_y", "velocity_z") _fake_random_ds_default_units = ("g/cm**3", "cm/s", "cm/s", "cm/s") _fake_random_ds_default_negative = (False, False, False, False) def fake_random_ds( ndims, peak_value=1.0, fields=None, units=None, particle_fields=None, particle_field_units=None, negative=False, nprocs=1, particles=0, length_unit=1.0, unit_system="cgs", bbox=None, default_species_fields=None, ): from yt.loaders import load_uniform_grid prng = RandomState(0x4D3D3D3) if not is_sequence(ndims): ndims = [ndims, ndims, ndims] else: assert len(ndims) == 3 if not is_sequence(negative): if fields: negative = [negative for f in fields] else: negative = None fields, units, negative = _check_field_unit_args_helper( { "fields": fields, "units": units, "negative": negative, }, { "fields": _fake_random_ds_default_fields, "units": _fake_random_ds_default_units, "negative": _fake_random_ds_default_negative, }, ) offsets = [] for n in negative: if n: offsets.append(0.5) else: offsets.append(0.0) data = {} for field, offset, u in zip(fields, offsets, units, strict=True): v = (prng.random_sample(ndims) - offset) * peak_value if field[0] == "all": v = v.ravel() data[field] = (v, u) if particles: if particle_fields is not None: for field, unit in zip(particle_fields, particle_field_units, strict=True): if field in ("particle_position", "particle_velocity"): data["io", field] = (prng.random_sample((int(particles), 3)), unit) else: data["io", field] = (prng.random_sample(size=int(particles)), unit) else: for f in (f"particle_position_{ax}" for ax in "xyz"): data["io", f] = (prng.random_sample(size=particles), "code_length") for f in (f"particle_velocity_{ax}" for ax in "xyz"): data["io", f] = (prng.random_sample(size=particles) - 0.5, "cm/s") data["io", "particle_mass"] = (prng.random_sample(particles), "g") ug = load_uniform_grid( data, ndims, length_unit=length_unit, nprocs=nprocs, unit_system=unit_system, bbox=bbox, default_species_fields=default_species_fields, ) return ug _geom_transforms = { # These are the bounds we want. Cartesian we just assume goes 0 .. 1. 
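    # (Clarifying note, inferred from fake_amr_ds below: each entry maps a
    # geometry name to its default (domain_left_edge, domain_right_edge)
    # pair of 3-tuples; fake_amr_ds falls back on these bounds when no
    # explicit domain edges are passed in.)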
"cartesian": ((0.0, 0.0, 0.0), (1.0, 1.0, 1.0)), "spherical": ((0.0, 0.0, 0.0), (1.0, np.pi, 2 * np.pi)), "cylindrical": ((0.0, 0.0, 0.0), (1.0, 1.0, 2.0 * np.pi)), # rzt "polar": ((0.0, 0.0, 0.0), (1.0, 2.0 * np.pi, 1.0)), # rtz "geographic": ((-90.0, -180.0, 0.0), (90.0, 180.0, 1000.0)), # latlonalt "internal_geographic": ((-90.0, -180.0, 0.0), (90.0, 180.0, 1000.0)), # latlondep "spectral_cube": ((0.0, 0.0, 0.0), (1.0, 1.0, 1.0)), } _fake_amr_ds_default_fields = ("Density",) _fake_amr_ds_default_units = ("g/cm**3",) def fake_amr_ds( fields=None, units=None, geometry="cartesian", particles=0, length_unit=None, *, domain_left_edge=None, domain_right_edge=None, ): from yt.loaders import load_amr_grids fields, units = _check_field_unit_args_helper( { "fields": fields, "units": units, }, { "fields": _fake_amr_ds_default_fields, "units": _fake_amr_ds_default_units, }, ) prng = RandomState(0x4D3D3D3) default_LE, default_RE = _geom_transforms[geometry] LE = np.array(domain_left_edge or default_LE, dtype="float64") RE = np.array(domain_right_edge or default_RE, dtype="float64") data = [] for gspec in _amr_grid_index: level, left_edge, right_edge, dims = gspec left_edge = left_edge * (RE - LE) + LE right_edge = right_edge * (RE - LE) + LE gdata = { "level": level, "left_edge": left_edge, "right_edge": right_edge, "dimensions": dims, } for f, u in zip(fields, units, strict=True): gdata[f] = (prng.random_sample(dims), u) if particles: for i, f in enumerate(f"particle_position_{ax}" for ax in "xyz"): pdata = prng.random_sample(particles) pdata /= right_edge[i] - left_edge[i] pdata += left_edge[i] gdata["io", f] = (pdata, "code_length") for f in (f"particle_velocity_{ax}" for ax in "xyz"): gdata["io", f] = (prng.random_sample(particles) - 0.5, "cm/s") gdata["io", "particle_mass"] = (prng.random_sample(particles), "g") data.append(gdata) bbox = np.array([LE, RE]).T return load_amr_grids( data, [32, 32, 32], geometry=geometry, bbox=bbox, length_unit=length_unit ) _fake_particle_ds_default_fields = ( "particle_position_x", "particle_position_y", "particle_position_z", "particle_mass", "particle_velocity_x", "particle_velocity_y", "particle_velocity_z", ) _fake_particle_ds_default_units = ("cm", "cm", "cm", "g", "cm/s", "cm/s", "cm/s") _fake_particle_ds_default_negative = (False, False, False, False, True, True, True) def fake_particle_ds( fields=None, units=None, negative=None, npart=16**3, length_unit=1.0, data=None, ): from yt.loaders import load_particles prng = RandomState(0x4D3D3D3) if negative is not None and not is_sequence(negative): negative = [negative for f in fields] fields, units, negative = _check_field_unit_args_helper( { "fields": fields, "units": units, "negative": negative, }, { "fields": _fake_particle_ds_default_fields, "units": _fake_particle_ds_default_units, "negative": _fake_particle_ds_default_negative, }, ) offsets = [] for n in negative: if n: offsets.append(0.5) else: offsets.append(0.0) data = data if data else {} for field, offset, u in zip(fields, offsets, units, strict=True): if field in data: v = data[field] continue if "position" in field: v = prng.normal(loc=0.5, scale=0.25, size=npart) np.clip(v, 0.0, 1.0, v) v = prng.random_sample(npart) - offset data[field] = (v, u) bbox = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]) ds = load_particles(data, 1.0, bbox=bbox) return ds def fake_tetrahedral_ds(): from yt.frontends.stream.sample_data.tetrahedral_mesh import ( _connectivity, _coordinates, ) from yt.loaders import load_unstructured_mesh prng = RandomState(0x4D3D3D3) # 
the distance from the origin node_data = {} dist = np.sum(_coordinates**2, 1) node_data["connect1", "test"] = dist[_connectivity] # each element gets a random number elem_data = {} elem_data["connect1", "elem"] = prng.rand(_connectivity.shape[0]) ds = load_unstructured_mesh( _connectivity, _coordinates, node_data=node_data, elem_data=elem_data ) return ds def fake_hexahedral_ds(fields=None): from yt.frontends.stream.sample_data.hexahedral_mesh import ( _connectivity, _coordinates, ) from yt.loaders import load_unstructured_mesh prng = RandomState(0x4D3D3D3) # the distance from the origin node_data = {} dist = np.sum(_coordinates**2, 1) node_data["connect1", "test"] = dist[_connectivity - 1] for field in always_iterable(fields): node_data["connect1", field] = dist[_connectivity - 1] # each element gets a random number elem_data = {} elem_data["connect1", "elem"] = prng.rand(_connectivity.shape[0]) ds = load_unstructured_mesh( _connectivity - 1, _coordinates, node_data=node_data, elem_data=elem_data ) return ds def small_fake_hexahedral_ds(): from yt.loaders import load_unstructured_mesh _coordinates = np.array( [ [-1.0, -1.0, -1.0], [0.0, -1.0, -1.0], [-0.0, 0.0, -1.0], [-1.0, -0.0, -1.0], [-1.0, -1.0, 0.0], [-0.0, -1.0, 0.0], [-0.0, 0.0, -0.0], [-1.0, 0.0, -0.0], ] ) _connectivity = np.array([[1, 2, 3, 4, 5, 6, 7, 8]]) # the distance from the origin node_data = {} dist = np.sum(_coordinates**2, 1) node_data["connect1", "test"] = dist[_connectivity - 1] ds = load_unstructured_mesh(_connectivity - 1, _coordinates, node_data=node_data) return ds def fake_stretched_ds(N=16): from yt.loaders import load_uniform_grid rng = np.random.default_rng(seed=0x4D3D3D3) data = {"density": rng.random((N, N, N))} cell_widths = [] for _ in range(3): cw = rng.random(N) cw /= cw.sum() cell_widths.append(cw) return load_uniform_grid( data, [N, N, N], bbox=np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]), cell_widths=cell_widths, ) def fake_vr_orientation_test_ds(N=96, scale=1): """ create a toy dataset that puts a sphere at (0,0,0), a single cube on +x, two cubes on +y, and three cubes on +z in a domain from [-1*scale,1*scale]**3. The lower planes (x = -1*scale, y = -1*scale, z = -1*scale) are also given non-zero values. This dataset allows you to easily explore orientations and handiness in VR and other renderings Parameters ---------- N : integer The number of cells along each direction scale : float A spatial scale, the domain boundaries will be multiplied by scale to test datasets that have spatial different scales (e.g. 
data in CGS units) """ from yt.loaders import load_uniform_grid xmin = ymin = zmin = -1.0 * scale xmax = ymax = zmax = 1.0 * scale dcoord = (xmax - xmin) / N arr = np.zeros((N, N, N), dtype=np.float64) arr[:, :, :] = 1.0e-4 bbox = np.array([[xmin, xmax], [ymin, ymax], [zmin, zmax]]) # coordinates -- in the notation data[i, j, k] x = (np.arange(N) + 0.5) * dcoord + xmin y = (np.arange(N) + 0.5) * dcoord + ymin z = (np.arange(N) + 0.5) * dcoord + zmin x3d, y3d, z3d = np.meshgrid(x, y, z, indexing="ij") # sphere at the origin c = np.array([0.5 * (xmin + xmax), 0.5 * (ymin + ymax), 0.5 * (zmin + zmax)]) r = np.sqrt((x3d - c[0]) ** 2 + (y3d - c[1]) ** 2 + (z3d - c[2]) ** 2) arr[r < 0.05] = 1.0 arr[abs(x3d - xmin) < 2 * dcoord] = 0.3 arr[abs(y3d - ymin) < 2 * dcoord] = 0.3 arr[abs(z3d - zmin) < 2 * dcoord] = 0.3 # single cube on +x xc = 0.75 * scale dx = 0.05 * scale idx = np.logical_and( np.logical_and(x3d > xc - dx, x3d < xc + dx), np.logical_and( np.logical_and(y3d > -dx, y3d < dx), np.logical_and(z3d > -dx, z3d < dx) ), ) arr[idx] = 1.0 # two cubes on +y dy = 0.05 * scale for yc in [0.65 * scale, 0.85 * scale]: idx = np.logical_and( np.logical_and(y3d > yc - dy, y3d < yc + dy), np.logical_and( np.logical_and(x3d > -dy, x3d < dy), np.logical_and(z3d > -dy, z3d < dy) ), ) arr[idx] = 0.8 # three cubes on +z dz = 0.05 * scale for zc in [0.5 * scale, 0.7 * scale, 0.9 * scale]: idx = np.logical_and( np.logical_and(z3d > zc - dz, z3d < zc + dz), np.logical_and( np.logical_and(x3d > -dz, x3d < dz), np.logical_and(y3d > -dz, y3d < dz) ), ) arr[idx] = 0.6 data = {"density": (arr, "g/cm**3")} ds = load_uniform_grid(data, arr.shape, bbox=bbox) return ds def fake_sph_orientation_ds(): """Returns an in-memory SPH dataset useful for testing This dataset should have one particle at the origin, one more particle along the x axis, two along y, and three along z. All particles will have non-overlapping smoothing regions with a radius of 0.25, masses of 1, and densities of 1, and zero velocity. """ from yt import load_particles npart = 7 # one particle at the origin, one particle along x-axis, two along y, # three along z data = { "particle_position_x": (np.array([0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]), "cm"), "particle_position_y": (np.array([0.0, 0.0, 1.0, 2.0, 0.0, 0.0, 0.0]), "cm"), "particle_position_z": (np.array([0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 3.0]), "cm"), "particle_mass": (np.ones(npart), "g"), "particle_velocity_x": (np.zeros(npart), "cm/s"), "particle_velocity_y": (np.zeros(npart), "cm/s"), "particle_velocity_z": (np.zeros(npart), "cm/s"), "smoothing_length": (0.25 * np.ones(npart), "cm"), "density": (np.ones(npart), "g/cm**3"), "temperature": (np.ones(npart), "K"), } bbox = np.array([[-4, 4], [-4, 4], [-4, 4]]) return load_particles(data=data, length_unit=1.0, bbox=bbox) def fake_sph_grid_ds(hsml_factor=1.0): """Returns an in-memory SPH dataset useful for testing This dataset should have 27 particles with the particles arranged uniformly on a 3D grid. The bottom left corner is (0.5,0.5,0.5) and the top right corner is (2.5,2.5,2.5). All particles will have non-overlapping smoothing regions with a radius of 0.05, masses of 1, and densities of 1, and zero velocity. 
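    Examples
    --------
    A minimal sketch of typical usage (illustrative, so marked to be
    skipped by doctest runners):

    >>> ds = fake_sph_grid_ds()  # doctest: +SKIP
    >>> ds.all_data()["io", "particle_mass"].size  # doctest: +SKIP
    27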
""" from yt import load_particles npart = 27 x = np.empty(npart) y = np.empty(npart) z = np.empty(npart) tot = 0 for i in range(0, 3): for j in range(0, 3): for k in range(0, 3): x[tot] = i + 0.5 y[tot] = j + 0.5 z[tot] = k + 0.5 tot += 1 data = { "particle_position_x": (x, "cm"), "particle_position_y": (y, "cm"), "particle_position_z": (z, "cm"), "particle_mass": (np.ones(npart), "g"), "particle_velocity_x": (np.zeros(npart), "cm/s"), "particle_velocity_y": (np.zeros(npart), "cm/s"), "particle_velocity_z": (np.zeros(npart), "cm/s"), "smoothing_length": (0.05 * np.ones(npart) * hsml_factor, "cm"), "density": (np.ones(npart), "g/cm**3"), "temperature": (np.ones(npart), "K"), } bbox = np.array([[0, 3], [0, 3], [0, 3]]) return load_particles(data=data, length_unit=1.0, bbox=bbox) def constantmass(i: int, j: int, k: int) -> float: return 1.0 _xhat = np.array([1, 0, 0]) _yhat = np.array([0, 1, 0]) _zhat = np.array([0, 0, 1]) _floathalves = 0.5 * np.ones((3,), dtype=np.float64) def fake_sph_flexible_grid_ds( hsml_factor: float = 1.0, nperside: int = 3, periodic: bool = True, e1hat: np.ndarray = _xhat, e2hat: np.ndarray = _yhat, e3hat: np.ndarray = _zhat, offsets: np.ndarray = _floathalves, massgenerator: Callable[[int, int, int], float] = constantmass, unitrho: float = 1.0, bbox: np.ndarray | None = None, recenter: np.ndarray | None = None, ) -> StreamParticlesDataset: """Returns an in-memory SPH dataset useful for testing Parameters: ----------- hsml_factor: all particles have smoothing lengths of `hsml_factor` * 0.5 nperside: the dataset will have `nperside`**3 particles, arranged uniformly on a 3D grid periodic: are the positions taken to be periodic? (applies to all coordinate axes) e1hat: shape (3,) the first basis vector defining the 3D grid. If the basis vectors are not normalized to 1 or not orthogonal, the spacing or overlap between SPH particles will be affected, but this is allowed. e2hat: shape (3,) the second basis vector defining the 3D grid. (See `e1hat`.) e3hat: shape (3,) the third basis vector defining the 3D grid. (See `e1hat`.) offsets: shape (3,) the the zero point of the 3D grid along each coordinate axis massgenerator: a function assigning a mass to each particle, as a function of the e[1-3]hat indices, in order unitrho: defines the density for a particle with mass 1 ('g'), and the standard (uniform) grid `hsml_factor`. bbox: if np.ndarray, shape is (2, 3) the assumed enclosing volume of the particles. Should enclose all the coordinate values. If not specified, a bbox is defined which encloses all coordinates values with a margin. If `periodic`, the size of the `bbox` along each coordinate is also the period along that axis. recenter: if not `None`, after generating the grid, the positions are periodically shifted to move the old center to this positions. Useful for testing periodicity handling. This shift is relative to the halfway positions of the bbox edges. Returns: -------- A `StreamParticlesDataset` object with particle positions, masses, velocities (zero), smoothing lengths, and densities specified. Values are in cgs units. 
""" npart = nperside**3 pos = np.empty((npart, 3), dtype=np.float64) mass = np.empty((npart,), dtype=np.float64) for i in range(0, nperside): for j in range(0, nperside): for k in range(0, nperside): _pos = ( (offsets[0] + i) * e1hat + (offsets[1] + j) * e2hat + (offsets[2] + k) * e3hat ) ind = nperside**2 * i + nperside * j + k pos[ind, :] = _pos mass[ind] = massgenerator(i, j, k) rho = unitrho * mass if bbox is None: eps = 1e-3 margin = (1.0 + eps) * hsml_factor bbox = np.array( [ [np.min(pos[:, 0]) - margin, np.max(pos[:, 0]) + margin], [np.min(pos[:, 1]) - margin, np.max(pos[:, 1]) + margin], [np.min(pos[:, 2]) - margin, np.max(pos[:, 2]) + margin], ] ) if recenter is not None: periods = bbox[:, 1] - bbox[:, 0] # old center -> new position pos += -0.5 * periods[np.newaxis, :] + recenter[np.newaxis, :] # wrap coordinates -> all in [0, boxsize) range pos %= periods[np.newaxis, :] # shift back to original bbox range pos += (bbox[:, 0])[np.newaxis, :] if not periodic: # remove points outside bbox to avoid errors: okinds = np.ones(len(mass), dtype=bool) for ax in [0, 1, 2]: okinds &= pos[:, ax] < bbox[ax, 1] okinds &= pos[:, ax] >= bbox[ax, 0] npart = sum(okinds) else: okinds = np.ones((npart,), dtype=bool) data: Mapping[AnyFieldKey, tuple[np.ndarray, str]] = { "particle_position_x": (np.copy(pos[okinds, 0]), "cm"), "particle_position_y": (np.copy(pos[okinds, 1]), "cm"), "particle_position_z": (np.copy(pos[okinds, 2]), "cm"), "particle_mass": (np.copy(mass[okinds]), "g"), "particle_velocity_x": (np.zeros(npart), "cm/s"), "particle_velocity_y": (np.zeros(npart), "cm/s"), "particle_velocity_z": (np.zeros(npart), "cm/s"), "smoothing_length": (np.ones(npart) * 0.5 * hsml_factor, "cm"), "density": (np.copy(rho[okinds]), "g/cm**3"), } ds = load_particles( data=data, bbox=bbox, periodicity=(periodic,) * 3, length_unit=1.0, mass_unit=1.0, time_unit=1.0, velocity_unit=1.0, ) ds.kernel_name = "cubic" return ds def fake_random_sph_ds( npart: int, bbox: np.ndarray, periodic: bool | tuple[bool, bool, bool] = True, massrange: tuple[float, float] = (0.5, 2.0), hsmlrange: tuple[float, float] = (0.5, 2.0), unitrho: float = 1.0, ) -> StreamParticlesDataset: """Returns an in-memory SPH dataset useful for testing Parameters: ----------- npart: number of particles to generate bbox: shape: (3, 2), units: "cm" the assumed enclosing volume of the particles. Particle positions are drawn uniformly from these ranges. periodic: are the positions taken to be periodic? If a single value, that value is applied to all axes massrange: particle masses are drawn uniformly from this range (unit: "g") hsmlrange: units: "cm" particle smoothing lengths are drawn uniformly from this range unitrho: defines the density for a particle with mass 1 ("g"), and smoothing length 1 ("cm"). Returns: -------- A `StreamParticlesDataset` object with particle positions, masses, velocities (zero), smoothing lengths, and densities specified. Values are in cgs units. 
""" if not hasattr(periodic, "__len__"): periodic = (periodic,) * 3 gen = np.random.default_rng(seed=0) posx = gen.uniform(low=bbox[0][0], high=bbox[0][1], size=npart) posy = gen.uniform(low=bbox[1][0], high=bbox[1][1], size=npart) posz = gen.uniform(low=bbox[2][0], high=bbox[2][1], size=npart) mass = gen.uniform(low=massrange[0], high=massrange[1], size=npart) hsml = gen.uniform(low=hsmlrange[0], high=hsmlrange[1], size=npart) dens = mass / hsml**3 * unitrho data: Mapping[AnyFieldKey, tuple[np.ndarray, str]] = { "particle_position_x": (posx, "cm"), "particle_position_y": (posy, "cm"), "particle_position_z": (posz, "cm"), "particle_mass": (mass, "g"), "particle_velocity_x": (np.zeros(npart), "cm/s"), "particle_velocity_y": (np.zeros(npart), "cm/s"), "particle_velocity_z": (np.zeros(npart), "cm/s"), "smoothing_length": (hsml, "cm"), "density": (dens, "g/cm**3"), } ds = load_particles( data=data, bbox=bbox, periodicity=periodic, length_unit=1.0, mass_unit=1.0, time_unit=1.0, velocity_unit=1.0, ) ds.kernel_name = "cubic" return ds def construct_octree_mask(prng=RandomState(0x1D3D3D3), refined=None): # noqa B008 # Implementation taken from url: # http://docs.hyperion-rt.org/en/stable/advanced/indepth_oct.html if refined in (None, True): refined = [True] if not refined: refined = [False] return refined # Loop over subcells for _ in range(8): # Insert criterion for whether cell should be sub-divided. Here we # just use a random number to demonstrate. divide = prng.random_sample() < 0.12 # Append boolean to overall list refined.append(divide) # If the cell is sub-divided, recursively divide it further if divide: construct_octree_mask(prng, refined) return refined def fake_octree_ds( prng=RandomState(0x4D3D3D3), # noqa B008 refined=None, quantities=None, bbox=None, sim_time=0.0, length_unit=None, mass_unit=None, time_unit=None, velocity_unit=None, magnetic_unit=None, periodicity=(True, True, True), num_zones=2, partial_coverage=1, unit_system="cgs", ): from yt.loaders import load_octree octree_mask = np.asarray( construct_octree_mask(prng=prng, refined=refined), dtype=np.uint8 ) particles = np.sum(np.invert(octree_mask)) if quantities is None: quantities = {} quantities["gas", "density"] = prng.random_sample((particles, 1)) quantities["gas", "velocity_x"] = prng.random_sample((particles, 1)) quantities["gas", "velocity_y"] = prng.random_sample((particles, 1)) quantities["gas", "velocity_z"] = prng.random_sample((particles, 1)) ds = load_octree( octree_mask=octree_mask, data=quantities, bbox=bbox, sim_time=sim_time, length_unit=length_unit, mass_unit=mass_unit, time_unit=time_unit, velocity_unit=velocity_unit, magnetic_unit=magnetic_unit, periodicity=periodicity, partial_coverage=partial_coverage, num_zones=num_zones, unit_system=unit_system, ) return ds def add_noise_fields(ds): """Add 4 classes of noise fields to a dataset""" prng = RandomState(0x4D3D3D3) def _binary_noise(field, data): """random binary data""" return prng.randint(low=0, high=2, size=data.size).astype("float64") def _positive_noise(field, data): """random strictly positive data""" return prng.random_sample(data.size) + 1e-16 def _negative_noise(field, data): """random negative data""" return -prng.random_sample(data.size) def _even_noise(field, data): """random data with mixed signs""" return 2 * prng.random_sample(data.size) - 1 ds.add_field(("gas", "noise0"), _binary_noise, sampling_type="cell") ds.add_field(("gas", "noise1"), _positive_noise, sampling_type="cell") ds.add_field(("gas", "noise2"), _negative_noise, 
sampling_type="cell") ds.add_field(("gas", "noise3"), _even_noise, sampling_type="cell") def expand_keywords(keywords, full=False): """ expand_keywords is a means for testing all possible keyword arguments in the nosetests. Simply pass it a dictionary of all the keyword arguments and all of the values for these arguments that you want to test. It will return a list of kwargs dicts containing combinations of the various kwarg values you passed it. These can then be passed to the appropriate function in nosetests. If full=True, then every possible combination of keywords is produced, otherwise, every keyword option is included at least once in the output list. Be careful, by using full=True, you may be in for an exponentially larger number of tests! Parameters ---------- keywords : dict a dictionary where the keys are the keywords for the function, and the values of each key are the possible values that this key can take in the function full : bool if set to True, every possible combination of given keywords is returned Returns ------- array of dicts An array of dictionaries to be individually passed to the appropriate function matching these kwargs. Examples -------- >>> keywords = {} >>> keywords["dpi"] = (50, 100, 200) >>> keywords["cmap"] = ("cmyt.arbre", "cmyt.kelp") >>> list_of_kwargs = expand_keywords(keywords) >>> print(list_of_kwargs) array([{'cmap': 'cmyt.arbre', 'dpi': 50}, {'cmap': 'cmyt.kelp', 'dpi': 100}, {'cmap': 'cmyt.arbre', 'dpi': 200}], dtype=object) >>> list_of_kwargs = expand_keywords(keywords, full=True) >>> print(list_of_kwargs) array([{'cmap': 'cmyt.arbre', 'dpi': 50}, {'cmap': 'cmyt.arbre', 'dpi': 100}, {'cmap': 'cmyt.arbre', 'dpi': 200}, {'cmap': 'cmyt.kelp', 'dpi': 50}, {'cmap': 'cmyt.kelp', 'dpi': 100}, {'cmap': 'cmyt.kelp', 'dpi': 200}], dtype=object) >>> for kwargs in list_of_kwargs: ... write_projection(*args, **kwargs) """ issue_deprecation_warning( "yt.testing.expand_keywords is deprecated", since="4.2", stacklevel=3 ) # if we want every possible combination of keywords, use iter magic if full: keys = sorted(keywords) list_of_kwarg_dicts = np.array( [ dict(zip(keys, prod, strict=True)) for prod in it.product(*(keywords[key] for key in keys)) ] ) # if we just want to probe each keyword, but not necessarily every # combination else: # Determine the maximum number of values any of the keywords has num_lists = 0 for val in keywords.values(): if isinstance(val, str): num_lists = max(1.0, num_lists) else: num_lists = max(len(val), num_lists) # Construct array of kwargs dicts, each element of the list is a different # **kwargs dict. 
each kwargs dict gives a different combination of # the possible values of the kwargs # initialize array list_of_kwarg_dicts = np.array([{} for x in range(num_lists)]) # fill in array for i in np.arange(num_lists): list_of_kwarg_dicts[i] = {} for key in keywords.keys(): # if it's a string, use it (there's only one) if isinstance(keywords[key], str): list_of_kwarg_dicts[i][key] = keywords[key] # if there are more options, use the i'th val elif i < len(keywords[key]): list_of_kwarg_dicts[i][key] = keywords[key][i] # if there are not more options, use the 0'th val else: list_of_kwarg_dicts[i][key] = keywords[key][0] return list_of_kwarg_dicts def skip(reason: str): # a drop-in replacement for pytest.mark.skip decorator with nose-compatibility def dec(func): @wraps(func) def wrapper(*args, **kwargs): raise SkipTest(reason) return wrapper return dec def skipif(condition: bool, reason: str): # a drop-in replacement for pytest.mark.skipif decorator with nose-compatibility def dec(func): if condition: return skip(reason)(func) else: return func return dec def requires_module(module): """ Decorator that takes a module name as an argument and tries to import it. If the module imports without issue, the function is returned, but if not, a null function is returned. This is so tests that depend on certain modules being imported will not fail if the module is not installed on the testing platform. """ return skipif(find_spec(module) is None, reason=f"Missing required module {module}") def requires_module_pytest(*module_names): """ This is a replacement for yt.testing.requires_module that's compatible with pytest, and accepts an arbitrary number of requirements to avoid stacking decorators Important: this is meant to decorate test functions only, it won't work as a decorator to fixture functions. It's meant to be imported as >>> from yt.testing import requires_module_pytest as requires_module So that it can be later renamed to `requires_module`. 
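    A sketch of the intended use (the test and module names here are
    illustrative):

    >>> @requires_module_pytest("h5py", "astropy")  # doctest: +SKIP
    ... def test_with_optional_deps():
    ...     import h5py  # safe: the test is skipped when h5py is missing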
""" # note: import pytest here so that it is not a hard requirement for # importing yt.testing see https://github.com/yt-project/yt/issues/4507 import pytest def deco(func): missing = [name for name in module_names if find_spec(name) is None] # note that order between these two decorators matters @pytest.mark.skipif( missing, reason=f"missing requirement(s): {', '.join(missing)}", ) @wraps(func) def inner_func(*args, **kwargs): return func(*args, **kwargs) return inner_func return deco def requires_file(req_file): condition = ( not os.path.exists(req_file) and not os.path.exists(os.path.join(ytcfg.get("yt", "test_data_dir"), req_file)) and not ytcfg.get("yt", "internals", "strict_requires") ) return skipif(condition, reason=f"Missing required file {req_file}") def disable_dataset_cache(func): @wraps(func) def newfunc(*args, **kwargs): restore_cfg_state = False if not ytcfg.get("yt", "skip_dataset_cache"): ytcfg["yt", "skip_dataset_cache"] = True restore_cfg_state = True rv = func(*args, **kwargs) if restore_cfg_state: ytcfg["yt", "skip_dataset_cache"] = False return rv return newfunc @disable_dataset_cache def units_override_check(fn): from numpy.testing import assert_equal units_list = ["length", "time", "mass", "velocity", "magnetic", "temperature"] ds1 = load(fn) units_override = {} attrs1 = [] attrs2 = [] for u in units_list: unit_attr = getattr(ds1, f"{u}_unit", None) if unit_attr is not None: attrs1.append(unit_attr) units_override[f"{u}_unit"] = (unit_attr.v, unit_attr.units) del ds1 ds2 = load(fn, units_override=units_override) assert len(ds2.units_override) > 0 for u in units_list: unit_attr = getattr(ds2, f"{u}_unit", None) if unit_attr is not None: attrs2.append(unit_attr) assert_equal(attrs1, attrs2) # This is an export of the 40 grids in IsolatedGalaxy that are of level 4 or # lower. It's just designed to give a sample AMR index to deal with. 
_amr_grid_index = [ [0, [0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [32, 32, 32]], [1, [0.25, 0.21875, 0.25], [0.5, 0.5, 0.5], [16, 18, 16]], [1, [0.5, 0.21875, 0.25], [0.75, 0.5, 0.5], [16, 18, 16]], [1, [0.21875, 0.5, 0.25], [0.5, 0.75, 0.5], [18, 16, 16]], [1, [0.5, 0.5, 0.25], [0.75, 0.75, 0.5], [16, 16, 16]], [1, [0.25, 0.25, 0.5], [0.5, 0.5, 0.75], [16, 16, 16]], [1, [0.5, 0.25, 0.5], [0.75, 0.5, 0.75], [16, 16, 16]], [1, [0.25, 0.5, 0.5], [0.5, 0.75, 0.75], [16, 16, 16]], [1, [0.5, 0.5, 0.5], [0.75, 0.75, 0.75], [16, 16, 16]], [2, [0.5, 0.5, 0.5], [0.71875, 0.71875, 0.71875], [28, 28, 28]], [3, [0.5, 0.5, 0.5], [0.6640625, 0.65625, 0.6796875], [42, 40, 46]], [4, [0.5, 0.5, 0.5], [0.59765625, 0.6015625, 0.6015625], [50, 52, 52]], [2, [0.28125, 0.5, 0.5], [0.5, 0.734375, 0.71875], [28, 30, 28]], [3, [0.3359375, 0.5, 0.5], [0.5, 0.671875, 0.6640625], [42, 44, 42]], [4, [0.40625, 0.5, 0.5], [0.5, 0.59765625, 0.59765625], [48, 50, 50]], [2, [0.5, 0.28125, 0.5], [0.71875, 0.5, 0.71875], [28, 28, 28]], [3, [0.5, 0.3359375, 0.5], [0.671875, 0.5, 0.6640625], [44, 42, 42]], [4, [0.5, 0.40625, 0.5], [0.6015625, 0.5, 0.59765625], [52, 48, 50]], [2, [0.28125, 0.28125, 0.5], [0.5, 0.5, 0.71875], [28, 28, 28]], [3, [0.3359375, 0.3359375, 0.5], [0.5, 0.5, 0.671875], [42, 42, 44]], [ 4, [0.46484375, 0.37890625, 0.50390625], [0.4765625, 0.390625, 0.515625], [6, 6, 6], ], [4, [0.40625, 0.40625, 0.5], [0.5, 0.5, 0.59765625], [48, 48, 50]], [2, [0.5, 0.5, 0.28125], [0.71875, 0.71875, 0.5], [28, 28, 28]], [3, [0.5, 0.5, 0.3359375], [0.6796875, 0.6953125, 0.5], [46, 50, 42]], [4, [0.5, 0.5, 0.40234375], [0.59375, 0.6015625, 0.5], [48, 52, 50]], [2, [0.265625, 0.5, 0.28125], [0.5, 0.71875, 0.5], [30, 28, 28]], [3, [0.3359375, 0.5, 0.328125], [0.5, 0.65625, 0.5], [42, 40, 44]], [4, [0.40234375, 0.5, 0.40625], [0.5, 0.60546875, 0.5], [50, 54, 48]], [2, [0.5, 0.265625, 0.28125], [0.71875, 0.5, 0.5], [28, 30, 28]], [3, [0.5, 0.3203125, 0.328125], [0.6640625, 0.5, 0.5], [42, 46, 44]], [4, [0.5, 0.3984375, 0.40625], [0.546875, 0.5, 0.5], [24, 52, 48]], [4, [0.546875, 0.41796875, 0.4453125], [0.5625, 0.4375, 0.5], [8, 10, 28]], [4, [0.546875, 0.453125, 0.41796875], [0.5546875, 0.48046875, 0.4375], [4, 14, 10]], [4, [0.546875, 0.4375, 0.4375], [0.609375, 0.5, 0.5], [32, 32, 32]], [4, [0.546875, 0.4921875, 0.41796875], [0.56640625, 0.5, 0.4375], [10, 4, 10]], [ 4, [0.546875, 0.48046875, 0.41796875], [0.5703125, 0.4921875, 0.4375], [12, 6, 10], ], [4, [0.55859375, 0.46875, 0.43359375], [0.5703125, 0.48046875, 0.4375], [6, 6, 2]], [2, [0.265625, 0.28125, 0.28125], [0.5, 0.5, 0.5], [30, 28, 28]], [3, [0.328125, 0.3359375, 0.328125], [0.5, 0.5, 0.5], [44, 42, 44]], [4, [0.4140625, 0.40625, 0.40625], [0.5, 0.5, 0.5], [44, 48, 48]], ] def check_results(func): r"""This is a decorator for a function to verify that the (numpy ndarray) result of a function is what it should be. This function is designed to be used for very light answer testing. Essentially, it wraps around a larger function that returns a numpy array, and that has results that should not change. It is not necessarily used inside the testing scripts themselves, but inside testing scripts written by developers during the testing of pull requests and new functionality. If a hash is specified, it "wins" and the others are ignored. Otherwise, tolerance is 1e-8 (just above single precision.) The correct results will be stored if the command line contains --answer-reference , and otherwise it will compare against the results on disk. 
The filename will be func_results_ref_FUNCNAME.cpkl where FUNCNAME is the name of the function being tested. If you would like more control over the name of the pickle file the results are stored in, you can pass the result_basename keyword argument to the function you are testing. The check_results decorator will use the value of the keyword to construct the filename of the results data file. If result_basename is not specified, the name of the testing function is used. This will raise an exception if the results are not correct. Examples -------- >>> @check_results ... def my_func(ds): ... return ds.domain_width >>> my_func(ds) >>> @check_results ... def field_checker(dd, field_name): ... return dd[field_name] >>> field_checker(ds.all_data(), "density", result_basename="density") """ def compute_results(func): @wraps(func) def _func(*args, **kwargs): name = kwargs.pop("result_basename", func.__name__) rv = func(*args, **kwargs) if hasattr(rv, "convert_to_base"): rv.convert_to_base() _rv = rv.ndarray_view() else: _rv = rv mi = _rv.min() ma = _rv.max() st = _rv.std(dtype="float64") su = _rv.sum(dtype="float64") si = _rv.size ha = hashlib.md5(_rv.tobytes()).hexdigest() fn = f"func_results_ref_{name}.cpkl" with open(fn, "wb") as f: pickle.dump((mi, ma, st, su, si, ha), f) return rv return _func import yt.startup_tasks as _startup_tasks unparsed_args = _startup_tasks.unparsed_args if "--answer-reference" in unparsed_args: return compute_results(func) def compare_results(func): @wraps(func) def _func(*args, **kwargs): from numpy.testing import assert_allclose, assert_equal name = kwargs.pop("result_basename", func.__name__) rv = func(*args, **kwargs) if hasattr(rv, "convert_to_base"): rv.convert_to_base() _rv = rv.ndarray_view() else: _rv = rv vals = ( _rv.min(), _rv.max(), _rv.std(dtype="float64"), _rv.sum(dtype="float64"), _rv.size, hashlib.md5(_rv.tobytes()).hexdigest(), ) fn = f"func_results_ref_{name}.cpkl" if not os.path.exists(fn): print("Answers need to be created with --answer-reference .") return False with open(fn, "rb") as f: ref = pickle.load(f) print(f"Sizes: {vals[4] == ref[4]} ({vals[4]}, {ref[4]})") assert_allclose(vals[0], ref[0], 1e-8, err_msg="min") assert_allclose(vals[1], ref[1], 1e-8, err_msg="max") assert_allclose(vals[2], ref[2], 1e-8, err_msg="std") assert_allclose(vals[3], ref[3], 1e-8, err_msg="sum") assert_equal(vals[4], ref[4]) print("Hashes equal: %s" % (vals[-1] == ref[-1])) return rv return _func return compare_results(func) def periodicity_cases(ds): # This is a generator that yields things near the corners. It's good for # getting different places to check periodicity. yield (ds.domain_left_edge + ds.domain_right_edge) / 2.0 dx = ds.domain_width / ds.domain_dimensions # We start one dx in, and only go to one in as well. for i in (1, ds.domain_dimensions[0] - 2): for j in (1, ds.domain_dimensions[1] - 2): for k in (1, ds.domain_dimensions[2] - 2): center = dx * np.array([i, j, k]) + ds.domain_left_edge yield center def run_nose( verbose=False, run_answer_tests=False, answer_big_data=False, call_pdb=False, module=None, ): issue_deprecation_warning( "yt.run_nose (aka yt.testing.run_nose) is deprecated. 
" "Please do not rely on this function as it will be removed " "in the process of migrating yt tests from nose to pytest.", stacklevel=3, since="4.1", ) from yt.utilities.logger import ytLogger as mylog from yt.utilities.on_demand_imports import _nose orig_level = mylog.getEffectiveLevel() mylog.setLevel(50) nose_argv = sys.argv nose_argv += ["--exclude=answer_testing", "--detailed-errors", "--exe"] if call_pdb: nose_argv += ["--pdb", "--pdb-failures"] if verbose: nose_argv.append("-v") if run_answer_tests: nose_argv.append("--with-answer-testing") if answer_big_data: nose_argv.append("--answer-big-data") if module: nose_argv.append(module) initial_dir = os.getcwd() yt_file = os.path.abspath(__file__) yt_dir = os.path.dirname(yt_file) if os.path.samefile(os.path.dirname(yt_dir), initial_dir): # Provide a nice error message to work around nose bug # see https://github.com/nose-devs/nose/issues/701 raise RuntimeError( """ The yt.run_nose function does not work correctly when invoked in the same directory as the installed yt package. Try starting a python session in a different directory before invoking yt.run_nose again. Alternatively, you can also run the "nosetests" executable in the current directory like so: $ nosetests """ ) os.chdir(yt_dir) try: _nose.run(argv=nose_argv) finally: os.chdir(initial_dir) mylog.setLevel(orig_level) def assert_allclose_units(actual, desired, rtol=1e-7, atol=0, **kwargs): """Raise an error if two objects are not equal up to desired tolerance This is a wrapper for :func:`numpy.testing.assert_allclose` that also verifies unit consistency Parameters ---------- actual : array-like Array obtained (possibly with attached units) desired : array-like Array to compare with (possibly with attached units) rtol : float, optional Relative tolerance, defaults to 1e-7 atol : float or quantity, optional Absolute tolerance. If units are attached, they must be consistent with the units of ``actual`` and ``desired``. If no units are attached, assumes the same units as ``desired``. Defaults to zero. Notes ----- Also accepts additional keyword arguments accepted by :func:`numpy.testing.assert_allclose`, see the documentation of that function for details. 
""" from numpy.testing import assert_allclose # Create a copy to ensure this function does not alter input arrays act = YTArray(actual) des = YTArray(desired) try: des = des.in_units(act.units) except UnitOperationError as e: raise AssertionError( f"Units of actual ({act.units}) and desired ({des.units}) " "do not have equivalent dimensions" ) from e rt = YTArray(rtol) if not rt.units.is_dimensionless: raise AssertionError(f"Units of rtol ({rt.units}) are not dimensionless") if not isinstance(atol, YTArray): at = YTQuantity(atol, des.units) try: at = at.in_units(act.units) except UnitOperationError as e: raise AssertionError( f"Units of atol ({at.units}) and actual ({act.units}) " "do not have equivalent dimensions" ) from e # units have been validated, so we strip units before calling numpy # to avoid spurious errors act = act.value des = des.value rt = rt.value at = at.value return assert_allclose(act, des, rt, at, **kwargs) def assert_fname(fname): """Function that checks file type using libmagic""" if fname is None: return with open(fname, "rb") as fimg: data = fimg.read() image_type = "" # see http://www.w3.org/TR/PNG/#5PNG-file-signature if data.startswith(b"\211PNG\r\n\032\n"): image_type = ".png" # see http://www.mathguide.de/info/tools/media-types/image/jpeg elif data.startswith(b"\377\330"): image_type = ".jpeg" elif data.startswith(b"%!PS-Adobe"): data_str = data.decode("utf-8", "ignore") if "EPSF" in data_str[: data_str.index("\n")]: image_type = ".eps" else: image_type = ".ps" elif data.startswith(b"%PDF"): image_type = ".pdf" extension = os.path.splitext(fname)[1] assert image_type == extension, ( f"Expected an image of type {extension!r} but {fname!r} " "is an image of type {image_type!r}" ) def requires_backend(backend): """Decorator to check for a specified matplotlib backend. This decorator returns the decorated function if the specified `backend` is same as of `matplotlib.get_backend()`, otherwise returns null function. It could be used to execute function only when a particular `backend` of matplotlib is being used. Parameters ---------- backend : String The value which is compared with the current matplotlib backend in use. """ return skipif( backend.lower() != matplotlib.get_backend().lower(), reason=f"'{backend}' backend not in use", ) def requires_external_executable(*names): missing = [name for name in names if which(name) is None] return skipif( len(missing) > 0, reason=f"missing external executable(s): {', '.join(missing)}" ) class TempDirTest(unittest.TestCase): """ A test class that runs in a temporary directory and removes it afterward. """ def setUp(self): self.curdir = os.getcwd() self.tmpdir = tempfile.mkdtemp() os.chdir(self.tmpdir) def tearDown(self): os.chdir(self.curdir) shutil.rmtree(self.tmpdir) class ParticleSelectionComparison: """ This is a test helper class that takes a particle dataset, caches the particles it has on disk (manually reading them using lower-level IO routines) and then received a data object that it compares against manually running the data object's selection routines. All supplied data objects must be created from the input dataset. 
""" def __init__(self, ds): self.ds = ds # Construct an index so that we get all the data_files ds.index particles = {} # hsml is the smoothing length we use for radial selection hsml = {} for data_file in ds.index.data_files: for ptype, pos_arr in ds.index.io._yield_coordinates(data_file): particles.setdefault(ptype, []).append(pos_arr) if ptype in getattr(ds, "_sph_ptypes", ()): hsml.setdefault(ptype, []).append( ds.index.io._get_smoothing_length( data_file, pos_arr.dtype, pos_arr.shape ) ) for ptype in particles: particles[ptype] = np.concatenate(particles[ptype]) if ptype in hsml: hsml[ptype] = np.concatenate(hsml[ptype]) self.particles = particles self.hsml = hsml def compare_dobj_selection(self, dobj): from numpy.testing import assert_array_almost_equal_nulp for ptype in sorted(self.particles): x, y, z = self.particles[ptype].T # Set our radii to zero for now, I guess? radii = self.hsml.get(ptype, 0.0) sel_index = dobj.selector.select_points(x, y, z, radii) if sel_index is None: sel_pos = np.empty((0, 3)) else: sel_pos = self.particles[ptype][sel_index, :] obj_results = [] for chunk in dobj.chunks([], "io"): obj_results.append(chunk[ptype, "particle_position"]) if any(_.size > 0 for _ in obj_results): obj_results = np.concatenate(obj_results, axis=0) else: obj_results = np.empty((0, 3)) # Sometimes we get unitary scaling or other floating point noise. 5 # NULP should be OK. This is mostly for stuff like Rockstar, where # the f32->f64 casting happens at different places depending on # which code path we use. assert_array_almost_equal_nulp( np.asarray(sel_pos), np.asarray(obj_results), 5 ) def run_defaults(self): """ This runs lots of samples that touch different types of wraparounds. Specifically, it does: * sphere in center with radius 0.1 unitary * sphere in center with radius 0.2 unitary * sphere in each of the eight corners of the domain with radius 0.1 unitary * sphere in center with radius 0.5 unitary * box that covers 0.1 .. 0.9 * box from 0.8 .. 
0.85 * box from 0.3..0.6, 0.2..0.8, 0.0..0.1 """ sp1 = self.ds.sphere("c", (0.1, "unitary")) self.compare_dobj_selection(sp1) sp2 = self.ds.sphere("c", (0.2, "unitary")) self.compare_dobj_selection(sp2) centers = [ [0.04, 0.5, 0.5], [0.5, 0.04, 0.5], [0.5, 0.5, 0.04], [0.04, 0.04, 0.04], [0.96, 0.5, 0.5], [0.5, 0.96, 0.5], [0.5, 0.5, 0.96], [0.96, 0.96, 0.96], ] r = self.ds.quan(0.1, "unitary") for center in centers: c = self.ds.arr(center, "unitary") + self.ds.domain_left_edge.in_units( "unitary" ) if not all(self.ds.periodicity): # filter out the periodic bits for non-periodic datasets if any(c - r < self.ds.domain_left_edge) or any( c + r > self.ds.domain_right_edge ): continue sp = self.ds.sphere(c, (0.1, "unitary")) self.compare_dobj_selection(sp) sp = self.ds.sphere("c", (0.5, "unitary")) self.compare_dobj_selection(sp) dd = self.ds.all_data() self.compare_dobj_selection(dd) # This is in raw numbers, so we can offset for the left edge LE = self.ds.domain_left_edge.in_units("unitary").d reg1 = self.ds.r[ (0.1 + LE[0], "unitary") : (0.9 + LE[0], "unitary"), (0.1 + LE[1], "unitary") : (0.9 + LE[1], "unitary"), (0.1 + LE[2], "unitary") : (0.9 + LE[2], "unitary"), ] self.compare_dobj_selection(reg1) reg2 = self.ds.r[ (0.8 + LE[0], "unitary") : (0.85 + LE[0], "unitary"), (0.8 + LE[1], "unitary") : (0.85 + LE[1], "unitary"), (0.8 + LE[2], "unitary") : (0.85 + LE[2], "unitary"), ] self.compare_dobj_selection(reg2) reg3 = self.ds.r[ (0.3 + LE[0], "unitary") : (0.6 + LE[0], "unitary"), (0.2 + LE[1], "unitary") : (0.8 + LE[1], "unitary"), (0.0 + LE[2], "unitary") : (0.1 + LE[2], "unitary"), ] self.compare_dobj_selection(reg3) def _deprecated_numpy_testing_reexport(func): import numpy.testing as npt npt_func = getattr(npt, func.__name__) @wraps(npt_func) def retf(*args, **kwargs): __tracebackhide__ = True # Hide traceback for pytest issue_deprecation_warning( f"yt.testing.{func.__name__} is a pure re-export of " f"numpy.testing.{func.__name__}, it will stop working in the future. " "Please import this function directly from numpy instead.", since="4.2", stacklevel=3, ) return npt_func(*args, **kwargs) return retf @_deprecated_numpy_testing_reexport def assert_array_equal(): ... @_deprecated_numpy_testing_reexport def assert_almost_equal(): ... @_deprecated_numpy_testing_reexport def assert_equal(): ... @_deprecated_numpy_testing_reexport def assert_array_less(): ... @_deprecated_numpy_testing_reexport def assert_string_equal(): ... @_deprecated_numpy_testing_reexport def assert_array_almost_equal_nulp(): ... @_deprecated_numpy_testing_reexport def assert_allclose(): ... @_deprecated_numpy_testing_reexport def assert_raises(): ... @_deprecated_numpy_testing_reexport def assert_approx_equal(): ... @_deprecated_numpy_testing_reexport def assert_array_almost_equal(): ... 
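

# A short illustration (not part of the module) of what the re-exports above
# mean for downstream code: importing one of these names from yt.testing
# still works, but each call now emits a DeprecationWarning, so
#
#     from yt.testing import assert_array_equal
#     assert_array_equal([1, 2, 3], [1, 2, 3])  # works, but warns
#
# should be migrated to the direct numpy import:
#
#     from numpy.testing import assert_array_equal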
# yt-4.4.0/yt/tests/__init__.py (empty)


# yt-4.4.0/yt/tests/test_external_frontends.py
import importlib.metadata

import pytest

import yt
from yt.data_objects.static_output import Dataset
from yt.geometry.grid_geometry_handler import GridIndex
from yt.utilities.object_registries import output_type_registry


class MockEntryPoint:
    @classmethod
    def load(cls):
        class MockHierarchy(GridIndex):
            grid = None

        class ExtDataset(Dataset):
            _index_class = MockHierarchy

            def _parse_parameter_file(self):
                self.current_time = 1.0
                self.cosmological_simulation = 0

            def _set_code_unit_attributes(self):
                self.length_unit = self.quan(1.0, "code_length")
                self.mass_unit = self.quan(1.0, "code_mass")
                self.time_unit = self.quan(1.0, "code_time")

            @classmethod
            def _is_valid(cls, filename, *args, **kwargs):
                return filename.endswith("mock")


@pytest.fixture()
def mock_external_frontend(monkeypatch):
    def mock_entry_points(group=None):
        return [MockEntryPoint]

    monkeypatch.setattr(importlib.metadata, "entry_points", mock_entry_points)
    assert "ExtDataset" not in output_type_registry
    yield
    assert "ExtDataset" in output_type_registry
    # teardown to avoid test pollution
    output_type_registry.pop("ExtDataset")


@pytest.mark.usefixtures("mock_external_frontend")
def test_external_frontend(tmp_path):
    test_file = tmp_path / "tmp.mock"
    test_file.write_text("")  # create the file
    assert test_file.is_file()
    ds = yt.load(test_file)
    assert "ExtDataset" in ds.__class__.__name__


# yt-4.4.0/yt/tests/test_funcs.py
import os

from nose.tools import assert_raises
from numpy.testing import assert_equal

from yt.funcs import (
    just_one,
    levenshtein_distance,
    simple_download_file,
    validate_axis,
    validate_center,
)
from yt.testing import fake_amr_ds
from yt.units import YTArray, YTQuantity


def test_validate_axis():
    validate_axis(None, 0)
    validate_axis(None, "X")

    ds = fake_amr_ds(geometry="cylindrical")
    ds.slice("Theta", 0.25)

    with assert_raises(TypeError) as ex:
        # default geometry is cartesian
        ds = fake_amr_ds()
        ds.slice("r", 0.25)
    desired = "Expected axis to be any of [0, 1, 2, 'x', 'y', 'z', 'X', 'Y', 'Z'], received 'r'"
    actual = str(ex.exception)
    assert actual == desired


def test_validate_center():
    validate_center("max")
    validate_center("MIN_")

    with assert_raises(TypeError) as ex:
        validate_center("avg")
    desired = (
        "Expected 'center' to be in ['c', 'center', 'm', 'max', 'min'] "
        "or the prefix to be 'max_'/'min_', received 'avg'."
    )
    assert_equal(str(ex.exception), desired)

    validate_center(YTQuantity(0.25, "cm"))
    validate_center([0.25, 0.25, 0.25])

    class CustomCenter:
        def __init__(self, center):
            self.center = center

    with assert_raises(TypeError) as ex:
        validate_center(CustomCenter(10))
    desired = (
        "Expected 'center' to be a numeric object of type "
        "list/tuple/np.ndarray/YTArray/YTQuantity, received "
        "'yt.tests.test_funcs.test_validate_center.<locals>."
        "CustomCenter'."
) assert_equal(str(ex.exception)[:50], desired[:50]) def test_just_one(): # Check that behaviour of this function is consistent before and after refactor # PR 2893 for unit in ["mm", "cm", "km", "pc", "g", "kg", "M_sun"]: obj = YTArray([0.0, 1.0], unit) expected = YTQuantity(obj.flat[0], obj.units, registry=obj.units.registry) jo = just_one(obj) assert jo == expected def test_levenshtein(): assert_equal(levenshtein_distance("abcdef", "abcdef"), 0) # Deletions / additions assert_equal(levenshtein_distance("abcdef", "abcde"), 1) assert_equal(levenshtein_distance("abcdef", "abcd"), 2) assert_equal(levenshtein_distance("abcdef", "abc"), 3) assert_equal(levenshtein_distance("abcdf", "abcdef"), 1) assert_equal(levenshtein_distance("cdef", "abcdef"), 2) assert_equal(levenshtein_distance("bde", "abcdef"), 3) # Substitutions assert_equal(levenshtein_distance("abcd", "abc_"), 1) assert_equal(levenshtein_distance("abcd", "ab__"), 2) assert_equal(levenshtein_distance("abcd", "a___"), 3) assert_equal(levenshtein_distance("abcd", "____"), 4) # Deletion + Substitutions assert_equal(levenshtein_distance("abcd", "abc_z"), 2) assert_equal(levenshtein_distance("abcd", "ab__zz"), 4) assert_equal(levenshtein_distance("abcd", "a___zzz"), 6) assert_equal(levenshtein_distance("abcd", "____zzzz"), 8) # Max distance assert_equal(levenshtein_distance("abcd", "", max_dist=0), 1) assert_equal(levenshtein_distance("abcd", "", max_dist=3), 4) assert_equal(levenshtein_distance("abcd", "", max_dist=10), 4) assert_equal(levenshtein_distance("abcd", "", max_dist=1), 2) assert_equal(levenshtein_distance("abcd", "a", max_dist=2), 3) assert_equal(levenshtein_distance("abcd", "ad", max_dist=2), 2) assert_equal(levenshtein_distance("abcd", "abd", max_dist=2), 1) assert_equal(levenshtein_distance("abcd", "abcd", max_dist=2), 0) def test_simple_download_file(): fn = simple_download_file("http://yt-project.org", "simple-download-file") try: assert fn == "simple-download-file" assert os.path.exists("simple-download-file") finally: # Clean up after ourselves. try: os.unlink("simple-download-file") except FileNotFoundError: pass with assert_raises(RuntimeError) as ex: simple_download_file("http://yt-project.org/404", "simple-download-file") desired = "Attempt to download file from http://yt-project.org/404 failed with error 404: Not Found." 
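    # The exact wording, including the upstream "404: Not Found" status text,
    # is part of the expected message checked below.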
actual = str(ex.exception) assert actual == desired assert not os.path.exists("simple-download-file") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/tests/test_load_archive.py0000644000175100001770000000735514714401662017426 0ustar00runnerdockerimport os import sys import tarfile import time import pytest from yt.config import ytcfg from yt.loaders import load_archive from yt.sample_data.api import _download_sample_data_file, get_data_registry_table from yt.testing import requires_module_pytest from yt.utilities.exceptions import YTUnidentifiedDataType @pytest.fixture() def data_registry(): yield get_data_registry_table() @pytest.fixture() def tmp_data_dir(tmp_path): pre_test_data_dir = ytcfg["yt", "test_data_dir"] ytcfg.set("yt", "test_data_dir", str(tmp_path)) yield tmp_path ytcfg.set("yt", "test_data_dir", pre_test_data_dir) # Note: ratarmount cannot currently be installed on Windows as of v0.8.1 @pytest.mark.skipif( sys.platform.startswith("win"), reason="ratarmount cannot currently be installed on Windows as of v0.8.1", ) @pytest.mark.skipif( os.environ.get("JENKINS_HOME") is not None, reason="Archive mounting times out on Jenkins.", ) @requires_module_pytest("pooch", "ratarmount") @pytest.mark.parametrize( "fn, exact_loc, class_", [ ( "ToroShockTube.tar.gz", "ToroShockTube/DD0001/data0001", "EnzoDataset", ), ( "ramses_sink_00016.tar.gz", "ramses_sink_00016/output_00016", "RAMSESDataset", ), ], ) @pytest.mark.parametrize("archive_suffix", ["", ".gz"]) def test_load_archive( fn, exact_loc, class_: str, archive_suffix, tmp_data_dir, data_registry ): # Download the sample .tar.gz'd file targz_path = _download_sample_data_file(filename=fn) tar_path = targz_path.with_suffix(archive_suffix) if tar_path != targz_path: # Open the tarfile and uncompress it to .tar, .tar.gz, and .tar.bz2 files with tarfile.open(targz_path, mode="r:*") as targz: mode = "w" + archive_suffix.replace(".", ":") with tarfile.open(tar_path, mode=mode) as tar: for member in targz.getmembers(): content = targz.extractfile(member) tar.addfile(member, fileobj=content) # Now try to open the .tar.* files warn_msg = "The 'load_archive' function is still experimental and may be unstable." with pytest.warns(UserWarning, match=warn_msg): ds = load_archive(tar_path, exact_loc, mount_timeout=10) assert type(ds).__name__ == class_ # Make sure the index is readable ds.index # Check cleanup mount_path = tar_path.with_name(tar_path.name + ".mount") assert mount_path.is_mount() ## Manually dismount ds.dismount() ## The dismounting happens concurrently, wait a few sec. time.sleep(2) ## Mount path should not exist anymore *and* have been deleted assert not mount_path.is_mount() assert not mount_path.exists() @pytest.mark.skipif( sys.platform.startswith("win"), reason="ratarmount cannot currently be installed on Windows as of v0.8.1", ) @pytest.mark.skipif( os.environ.get("JENKINS_HOME") is not None, reason="Archive mounting times out on Jenkins.", ) @pytest.mark.filterwarnings( "ignore:The 'load_archive' function is still experimental and may be unstable." 
)
@requires_module_pytest("pooch", "ratarmount")
def test_load_invalid_archive(tmp_data_dir, data_registry):
    # Archive does not exist
    with pytest.raises(FileNotFoundError):
        load_archive("this_file_does_not_exist.tar.gz", "invalid_location")

    targz_path = _download_sample_data_file(filename="ToroShockTube.tar.gz")

    # File does not exist
    with pytest.raises(FileNotFoundError):
        load_archive(targz_path, "invalid_location")

    # File exists but is not recognized
    with pytest.raises(YTUnidentifiedDataType):
        load_archive(targz_path, "ToroShockTube/DD0001/data0001.memorymap")


# yt-4.4.0/yt/tests/test_load_errors.py
import re

import pytest

from yt.data_objects.static_output import Dataset
from yt.geometry.grid_geometry_handler import GridIndex
from yt.loaders import load, load_simulation
from yt.utilities.exceptions import (
    YTAmbiguousDataType,
    YTSimulationNotIdentified,
    YTUnidentifiedDataType,
)
from yt.utilities.object_registries import output_type_registry


@pytest.fixture
def tmp_data_dir(tmp_path):
    from yt.config import ytcfg

    pre_test_data_dir = ytcfg["yt", "test_data_dir"]
    ytcfg.set("yt", "test_data_dir", str(tmp_path))
    yield tmp_path
    ytcfg.set("yt", "test_data_dir", pre_test_data_dir)


@pytest.mark.usefixtures("tmp_data_dir")
def test_load_not_a_file(tmp_path):
    with pytest.raises(FileNotFoundError):
        load(tmp_path / "not_a_file")


@pytest.mark.parametrize("simtype", ["Enzo", "unregistered_simulation_type"])
@pytest.mark.usefixtures("tmp_data_dir")
def test_load_simulation_not_a_file(simtype, tmp_path):
    # it is preferable to report the most important problem in an error message
    # (missing data is worse than a typo in simulation_type)
    # so we make sure the error raised is not YTSimulationNotIdentified,
    # even with an absurd simulation type
    with pytest.raises(FileNotFoundError):
        load_simulation(tmp_path / "not_a_file", simtype)


@pytest.fixture()
def tmp_path_with_empty_file(tmp_path):
    empty_file_path = tmp_path / "empty_file"
    empty_file_path.touch()
    return tmp_path, empty_file_path


def test_load_unidentified_data_dir(tmp_path_with_empty_file):
    tmp_path, empty_file_path = tmp_path_with_empty_file
    with pytest.raises(YTUnidentifiedDataType):
        load(tmp_path)


def test_load_unidentified_data_file(tmp_path_with_empty_file):
    tmp_path, empty_file_path = tmp_path_with_empty_file
    with pytest.raises(YTUnidentifiedDataType):
        load(empty_file_path)


def test_load_simulation_unidentified_data_dir(tmp_path_with_empty_file):
    tmp_path, empty_file_path = tmp_path_with_empty_file
    with pytest.raises(YTSimulationNotIdentified):
        load_simulation(tmp_path, "unregistered_simulation_type")


def test_load_simulation_unidentified_data_file(tmp_path_with_empty_file):
    tmp_path, empty_file_path = tmp_path_with_empty_file
    with pytest.raises(YTSimulationNotIdentified):
        load_simulation(
            empty_file_path,
            "unregistered_simulation_type",
        )


@pytest.fixture()
def ambiguous_dataset_classes():
    # We deliberately set up a situation where two Dataset subclasses
    # that aren't parents are considered valid.
    # We implement the bare minimum for these classes to be actually
    # loadable in order to test hints.
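    # MockDataset and both of its subclasses report themselves as valid, so
    # load() cannot disambiguate on its own; the user-supplied hint is what
    # test_load_ambiguous_data_with_hint exercises below.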
class MockHierarchy(GridIndex): pass class MockDataset(Dataset): _index_class = MockHierarchy def _parse_parameter_file(self, *args, **kwargs): self.current_time = -1.0 self.cosmological_simulation = 0 def _set_code_unit_attributes(self, *args, **kwargs): self.length_unit = self.quan(1, "m") self.mass_unit = self.quan(1, "kg") self.time_unit = self.quan(1, "s") @classmethod def _is_valid(cls, *args, **kwargs): return True class AlphaDataset(MockDataset): @classmethod def _is_valid(cls, *args, **kwargs): return True class BetaDataset(MockDataset): @classmethod def _is_valid(cls, *args, **kwargs): return True yield # teardown to avoid possible breakage in following tests output_type_registry.pop("MockDataset") output_type_registry.pop("AlphaDataset") output_type_registry.pop("BetaDataset") @pytest.mark.usefixtures("ambiguous_dataset_classes") def test_load_ambiguous_data(tmp_path): with pytest.raises(YTAmbiguousDataType): load(tmp_path) file = tmp_path / "fake_datafile0011.dump" file.touch() pattern = str(tmp_path / "fake_datafile00??.dump") # loading a DatasetSeries should not crash until an item is retrieved ts = load(pattern) with pytest.raises(YTAmbiguousDataType): ts[0] @pytest.mark.parametrize( "hint, expected_type", [ ("alpha", "AlphaDataset"), ("al", "AlphaDataset"), ("ph", "AlphaDataset"), ("beta", "BetaDataset"), ("BeTA", "BetaDataset"), ("b", "BetaDataset"), ("mock", "MockDataset"), ("MockDataset", "MockDataset"), ], ) @pytest.mark.usefixtures("ambiguous_dataset_classes") def test_load_ambiguous_data_with_hint(hint, expected_type, tmp_path): ds = load(tmp_path, hint=hint) assert type(ds).__name__ == expected_type file1 = tmp_path / "fake_datafile0011.dump" file2 = tmp_path / "fake_datafile0022.dump" file1.touch() file2.touch() pattern = str(tmp_path / "fake_datafile00??.dump") ts = load(pattern, hint=hint) ds = ts[0] assert type(ds).__name__ == expected_type ds = ts[1] assert type(ds).__name__ == expected_type @pytest.fixture() def catchall_dataset_class(): # define a Dataset class matching any input, # just so that we don't have to require an actual # dataset for some tests class MockHierarchy(GridIndex): pass class MockDataset(Dataset): _index_class = MockHierarchy def _parse_parameter_file(self, *args, **kwargs): self.current_time = -1.0 self.cosmological_simulation = 0 def _set_code_unit_attributes(self, *args, **kwargs): self.length_unit = self.quan(1, "m") self.mass_unit = self.quan(1, "kg") self.time_unit = self.quan(1, "s") @classmethod def _is_valid(cls, *args, **kwargs): return True yield # teardown to avoid possible breakage in following tests output_type_registry.pop("MockDataset") @pytest.mark.usefixtures("catchall_dataset_class") def test_depr_load_keyword(tmp_path): with pytest.deprecated_call( match=r"Using the 'fn' argument as keyword \(on position 0\) is deprecated\.", ): load(fn=tmp_path) @pytest.fixture() def invalid_dataset_class_with_missing_requirements(): # define a Dataset class matching any input, # just so that we don't have to require an actual # dataset for some tests class MockHierarchy(GridIndex): pass class MockDataset(Dataset): _load_requirements = ["mock_package_name"] _index_class = MockHierarchy def _parse_parameter_file(self, *args, **kwargs): self.current_time = -1.0 self.cosmological_simulation = 0 def _set_code_unit_attributes(self, *args, **kwargs): self.length_unit = self.quan(1, "m") self.mass_unit = self.quan(1, "kg") self.time_unit = self.quan(1, "s") @classmethod def _is_valid(cls, *args, **kwargs): return False yield # teardown to 
avoid possible breakage in following tests output_type_registry.pop("MockDataset") @pytest.mark.usefixtures("invalid_dataset_class_with_missing_requirements") def test_all_invalid_with_missing_requirements(tmp_path): with pytest.raises( YTUnidentifiedDataType, match=re.compile( re.escape(f"Could not determine input format from {tmp_path!r}\n") + "The following types could not be thorougly checked against your data because " "their requirements are missing. " "You may want to inspect this list and check your installation:\n" r".*" r"- MockDataset \(requires: mock_package_name\)" r".*" "\n\n" "Please make sure you are running a sufficiently recent version of yt.", re.DOTALL, ), ): load(tmp_path) @pytest.fixture() def valid_dataset_class_with_missing_requirements(): # define a Dataset class matching any input, # just so that we don't have to require an actual # dataset for some tests class MockHierarchy(GridIndex): pass class MockDataset(Dataset): _load_requirements = ["mock_package_name"] _index_class = MockHierarchy def _parse_parameter_file(self, *args, **kwargs): self.current_time = -1.0 self.cosmological_simulation = 0 def _set_code_unit_attributes(self, *args, **kwargs): self.length_unit = self.quan(1, "m") self.mass_unit = self.quan(1, "kg") self.time_unit = self.quan(1, "s") @classmethod def _is_valid(cls, *args, **kwargs): return True yield # teardown to avoid possible breakage in following tests output_type_registry.pop("MockDataset") @pytest.mark.usefixtures("valid_dataset_class_with_missing_requirements") def test_single_valid_with_missing_requirements(tmp_path): with pytest.warns( UserWarning, match=( "This dataset appears to be of type MockDataset, " "but the following requirements are currently missing: mock_package_name\n" "Please verify your installation." ), ): load(tmp_path) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/tests/test_load_sample.py0000644000175100001770000001415114714401662017256 0ustar00runnerdockerimport logging import os import re import sys import textwrap import numpy as np import pytest from yt.config import ytcfg from yt.loaders import load_sample from yt.sample_data.api import ( get_data_registry_table, get_download_cache_dir, ) from yt.testing import requires_module_pytest from yt.utilities.logger import ytLogger @pytest.fixture() def tmp_data_dir(tmp_path): pre_test_data_dir = ytcfg["yt", "test_data_dir"] ytcfg.set("yt", "test_data_dir", str(tmp_path)) yield tmp_path ytcfg.set("yt", "test_data_dir", pre_test_data_dir) @pytest.fixture() def capturable_logger(caplog): """ This set the minimal conditions to make pytest's caplog fixture usable. """ propagate = ytLogger.propagate ytLogger.propagate = True with caplog.at_level(logging.INFO, "yt"): yield ytLogger.propagate = propagate @requires_module_pytest("pandas", "pooch", "f90nml") @pytest.mark.usefixtures("capturable_logger") @pytest.mark.parametrize( "fn, archive, exact_loc, class_", [ ( "ToroShockTube", "ToroShockTube.tar.gz", "ToroShockTube/DD0001/data0001", "EnzoDataset", ), ( "KeplerianDisk", "KeplerianDisk.tar.gz", "KeplerianDisk/disk.out1.00000.athdf", "AthenaPPDataset", ), # trying with an exact name as well ( "KeplerianDisk/disk.out1.00000.athdf", "KeplerianDisk.tar.gz", "KeplerianDisk/disk.out1.00000.athdf", "AthenaPPDataset", ), # check this special case because it relies on implementations # details in the AMRVAC frontend (using parfiles) # and could easily fail to load. 
See GH PR #3343 ( "rmi_dust_2d", "rmi_dust_2d.tar.gz", "rmi_dust_2d/output0001.dat", "AMRVACDataset", ), ], ) def test_load_sample_small_dataset( fn, archive, exact_loc, class_: str, tmp_data_dir, caplog ): cache_path = get_download_cache_dir() assert not cache_path.exists() ds = load_sample(fn, progressbar=False, timeout=30) assert type(ds).__name__ == class_ assert not cache_path.exists() text = textwrap.dedent( f""" '{fn.replace('/', os.path.sep)}' is not available locally. Looking up online. Downloading from https://yt-project.org/data/{archive} Untaring downloaded file to '{str(tmp_data_dir)}' """ ).strip("\n") expected = [("yt", 20, message) for message in text.split("\n")] assert caplog.record_tuples[:3] == expected caplog.clear() # loading a second time should not result in a download request ds2 = load_sample(fn) assert type(ds2).__name__ == class_ assert caplog.record_tuples[0] == ( "yt", 20, f"Sample dataset found in '{os.path.join(str(tmp_data_dir), *exact_loc.split('/'))}'", ) @pytest.mark.skipif( sys.platform.startswith("win"), # flakyness is probably due to Windows' infamous lack of time resolution # overall, this test doesn't seem worth it. reason="This test is flaky on Windows", ) @requires_module_pytest("pandas", "pooch") @pytest.mark.usefixtures("capturable_logger") def test_load_sample_timeout(tmp_data_dir, caplog): from requests.exceptions import ConnectTimeout, ReadTimeout # note that requests is a direct dependency to pooch, # so we don't need to mark it in a decorator. with pytest.raises((ConnectTimeout, ReadTimeout)): load_sample("IsolatedGalaxy", progressbar=False, timeout=0.00001) @requires_module_pytest("pandas", "requests") @pytest.mark.xfail(reason="Registry is currently incomplete.") def test_registry_integrity(): reg = get_data_registry_table() assert not any(reg.isna()) @pytest.fixture() def sound_subreg(): # this selection is needed because the full dataset is incomplete # so we test only the values that can be parsed reg = get_data_registry_table() return reg[["size", "byte_size"]][reg["size"].notna()] @requires_module_pytest("pandas", "requests") def test_registry_byte_size_dtype(sound_subreg): from pandas import Int64Dtype assert sound_subreg["byte_size"].dtype == Int64Dtype() @requires_module_pytest("pandas", "requests") def test_registry_byte_size_sign(sound_subreg): np.testing.assert_array_less(0, sound_subreg["byte_size"]) @requires_module_pytest("pandas", "requests") def test_unknown_filename(): fake_name = "these_are_not_the_files_your_looking_for" with pytest.raises(ValueError, match=f"'{fake_name}' is not an available dataset."): load_sample(fake_name) @requires_module_pytest("pandas", "requests") def test_typo_filename(): wrong_name = "Isolatedgalaxy" with pytest.raises( ValueError, match=re.escape( f"'{wrong_name}' is not an available dataset. Did you mean 'IsolatedGalaxy' ?" 
), ): load_sample(wrong_name, timeout=1) @pytest.fixture() def fake_data_dir_in_a_vaccum(tmp_path): pre_test_data_dir = ytcfg["yt", "test_data_dir"] ytcfg.set("yt", "test_data_dir", "/does/not/exist") origin = os.getcwd() os.chdir(tmp_path) yield ytcfg.set("yt", "test_data_dir", pre_test_data_dir) os.chdir(origin) @pytest.mark.skipif( sys.platform.startswith("win"), reason="can't figure out how to match the warning message in a cross-platform way", ) @requires_module_pytest("pandas", "pooch") @pytest.mark.usefixtures("fake_data_dir_in_a_vaccum") def test_data_dir_broken(): # check that load_sample still works if the test_data_dir # isn't properly set, in which case we should dl to the # cwd and issue a warning. msg = ( r"Storage directory from yt config doesn't exist " r"\(currently set to '/does/not/exist'\)\. " r"Current working directory will be used instead\." ) with pytest.warns(UserWarning, match=msg): load_sample("ToroShockTube") def test_filename_none(capsys): assert load_sample() is None captured = capsys.readouterr() assert "yt.sample_data.api.get_data_registry_table" in captured.err ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/tests/test_testing.py0000644000175100001770000000142014714401662016446 0ustar00runnerdockerfrom unittest import SkipTest import matplotlib from yt.testing import requires_backend active_backend = matplotlib.get_backend() inactive_backend = ({"gtkagg", "macosx", "wx", "tkagg"} - {active_backend}).pop() def test_requires_inactive_backend(): @requires_backend(inactive_backend) def foo(): return try: foo() except SkipTest: pass else: raise AssertionError( "@requires_backend appears to be broken (skip was expected)" ) def test_requires_active_backend(): @requires_backend(active_backend) def foo(): return try: foo() except SkipTest: raise AssertionError( "@requires_backend appears to be broken (skip was not expected)" ) from None ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/tests/test_version.py0000644000175100001770000000215514714401662016464 0ustar00runnerdockerimport pytest import yt from yt._version import VersionTuple, _parse_to_version_info @pytest.mark.parametrize( "version_str, expected", ( ("4.1.0", VersionTuple(4, 1, 0, "final", 0)), ("6.2.5", VersionTuple(6, 2, 5, "final", 0)), ("4.1.dev0", VersionTuple(4, 1, 0, "alpha", 0)), ("4.1.0rc", VersionTuple(4, 1, 0, "candidate", 0)), ("4.1.0rc1", VersionTuple(4, 1, 0, "candidate", 1)), ("4.1.0rc2", VersionTuple(4, 1, 0, "candidate", 2)), ), ) def test_parse_version_info(version_str, expected): actual = _parse_to_version_info(version_str) assert actual == expected def test_version_tuple_comp(): # exercise comparison with builtin tuples # comparison results do not matter for this test yt.version_info > (4,) # noqa: B015 yt.version_info > (4, 1) # noqa: B015 yt.version_info > (4, 1, 0) # noqa: B015 yt.version_info < (4, 1, 0) # noqa: B015 yt.version_info <= (4, 1, 0) # noqa: B015 yt.version_info >= (4, 1, 0) # noqa: B015 yt.version_info == (4, 1, 0) # noqa: B015 assert isinstance(yt.version_info, tuple) ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.371153 yt-4.4.0/yt/units/0000755000175100001770000000000014714401715013362 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/units/__init__.py0000644000175100001770000000652114714401662015500 0ustar00runnerdockerfrom unyt.array 
import (
    loadtxt,
    savetxt,
    unyt_array,
    unyt_quantity,
)
from unyt.unit_object import Unit, define_unit  # NOQA: F401
from unyt.unit_registry import UnitRegistry  # NOQA: F401
from unyt.unit_systems import UnitSystem  # NOQA: F401

from yt.units.physical_constants import *
from yt.units.physical_constants import _ConstantContainer
from yt.units.unit_symbols import *
from yt.units.unit_symbols import _SymbolContainer
from yt.utilities.exceptions import YTArrayTooLargeToDisplay

from yt.units._numpy_wrapper_functions import (
    uconcatenate,
    ucross,
    udot,
    uhstack,
    uintersect1d,
    unorm,
    ustack,
    uunion1d,
    uvstack,
)

YTArray = unyt_array
YTQuantity = unyt_quantity


class UnitContainer:
    """A container for units and constants to associate with a dataset

    This object is usually accessed on a Dataset instance via ``ds.units``.

    Parameters
    ----------
    registry : UnitRegistry instance
        A unit registry to associate with units and constants accessed
        on this object.

    Example
    -------

    >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    >>> code_mass = ds.units.code_mass
    >>> (12 * code_mass).to("Msun")
    unyt_quantity(4.89719136e+11, 'Msun')
    >>> code_mass.registry is ds.unit_registry
    True
    >>> ds.units.newtons_constant
    unyt_quantity(6.67384e-08, 'cm**3/(g*s**2)')

    """

    def __init__(self, registry):
        self.unit_symbols = _SymbolContainer(registry)
        self.physical_constants = _ConstantContainer(registry)

    def __dir__(self):
        all_dir = self.unit_symbols.__dir__() + self.physical_constants.__dir__()
        all_dir += object.__dir__(self)
        return list(set(all_dir))

    def __getattr__(self, item):
        pc = self.physical_constants
        us = self.unit_symbols
        ret = getattr(us, item, None) or getattr(pc, item, None)
        if not ret:
            raise AttributeError(item)
        return ret


def display_ytarray(arr):
    r"""
    Display a YTArray in a Jupyter widget that enables unit switching.

    The array returned by this function is read-only, and only works with
    arrays of size 3 or lower.

    Parameters
    ----------
    arr : YTArray
        The Array to display; must be of size 3 or lower.

    Examples
    --------
    >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    >>> display_ytarray(ds.domain_width)
    """
    if arr.size > 3:
        raise YTArrayTooLargeToDisplay(arr.size, 3)
    import ipywidgets

    unit_registry = arr.units.registry
    equiv = unit_registry.list_same_dimensions(arr.units)
    dropdown = ipywidgets.Dropdown(options=sorted(equiv), value=str(arr.units))

    def arr_updater(arr, texts):
        def _value_updater(change):
            arr2 = arr.in_units(change["new"])
            if arr2.shape == ():
                arr2 = [arr2]
            for v, t in zip(arr2, texts):
                t.value = str(v.value)

        return _value_updater

    if arr.shape == ():
        arr_iter = [arr]
    else:
        arr_iter = arr
    texts = [ipywidgets.Text(value=str(_.value), disabled=True) for _ in arr_iter]
    dropdown.observe(arr_updater(arr, texts), names="value")
    return ipywidgets.HBox(texts + [dropdown])


def _wrap_display_ytarray(arr):
    from IPython.core.display import display

    display(display_ytarray(arr))


# yt-4.4.0/yt/units/_numpy_wrapper_functions.py
# This module is not part of the public namespace `yt.units`
# It is home to wrapper functions that are directly copied from unyt 2.9.2
# We vendor them as a transition step towards unyt 3.0 (in development),
# where these wrapper functions are deprecated and should be replaced
# with the vanilla numpy API
# FUTURE:
# - require unyt>=3.0
# - deprecate these functions in yt too
from unyt import unyt_array, unyt_quantity

import numpy as np


def _validate_numpy_wrapper_units(v, arrs):
    if not any(isinstance(a, unyt_array) for a in arrs):
        return v
    if not all(isinstance(a, unyt_array) for a in arrs):
        raise RuntimeError("Not all of your arrays are unyt_arrays.")
    a1 = arrs[0]
    if not all(a.units == a1.units for a in arrs[1:]):
        raise RuntimeError("Your arrays must have identical units.")
    v.units = a1.units
    return v


def uconcatenate(arrs, axis=0):
    """Concatenate a sequence of arrays.

    This wrapper around numpy.concatenate preserves units. All input arrays
    must have the same units. See the documentation of numpy.concatenate for
    full details.

    Examples
    --------
    >>> from unyt import cm
    >>> A = [1, 2, 3]*cm
    >>> B = [2, 3, 4]*cm
    >>> uconcatenate((A, B))
    unyt_array([1, 2, 3, 2, 3, 4], 'cm')
    """
    v = np.concatenate(arrs, axis=axis)
    v = _validate_numpy_wrapper_units(v, arrs)
    return v


def ucross(arr1, arr2, registry=None, axisa=-1, axisb=-1, axisc=-1, axis=None):
    """Applies the cross product to two YT arrays.

    This wrapper around numpy.cross preserves units.
    See the documentation of numpy.cross for full details.
    """
    v = np.cross(arr1, arr2, axisa=axisa, axisb=axisb, axisc=axisc, axis=axis)
    units = arr1.units * arr2.units
    arr = unyt_array(v, units, registry=registry)
    return arr


def uintersect1d(arr1, arr2, assume_unique=False):
    """Find the sorted unique elements of the two input arrays.

    A wrapper around numpy.intersect1d that preserves units. All input arrays
    must have the same units. See the documentation of numpy.intersect1d for
    full details.

    Examples
    --------
    >>> from unyt import cm
    >>> A = [1, 2, 3]*cm
    >>> B = [2, 3, 4]*cm
    >>> uintersect1d(A, B)
    unyt_array([2, 3], 'cm')
    """
    v = np.intersect1d(arr1, arr2, assume_unique=assume_unique)
    v = _validate_numpy_wrapper_units(v, [arr1, arr2])
    return v


def uunion1d(arr1, arr2):
    """Find the union of two arrays.

    A wrapper around numpy.union1d that preserves units. All input arrays
    must have the same units. See the documentation of numpy.union1d for
    full details.

    Examples
    --------
    >>> from unyt import cm
    >>> A = [1, 2, 3]*cm
    >>> B = [2, 3, 4]*cm
    >>> uunion1d(A, B)
    unyt_array([1, 2, 3, 4], 'cm')
    """
    v = np.union1d(arr1, arr2)
    v = _validate_numpy_wrapper_units(v, [arr1, arr2])
    return v


def unorm(data, ord=None, axis=None, keepdims=False):
    """Matrix or vector norm that preserves units

    This is a wrapper around np.linalg.norm that preserves units. See
    the documentation for that function for descriptions of the keyword
    arguments.

    Examples
    --------
    >>> from unyt import km
    >>> data = [1, 2, 3]*km
    >>> print(unorm(data))
    3.7416573867739413 km
    """
    norm = np.linalg.norm(data, ord=ord, axis=axis, keepdims=keepdims)
    if norm.shape == ():
        return unyt_quantity(norm, data.units)
    return unyt_array(norm, data.units)


def udot(op1, op2):
    """Matrix or vector dot product that preserves units

    This is a wrapper around np.dot that preserves units.

    Examples
    --------
    >>> from unyt import km, s
    >>> a = np.eye(2)*km
    >>> b = (np.ones((2, 2)) * 2)*s
    >>> print(udot(a, b))
    [[2. 2.]
     [2. 2.]] km*s
    """
    dot = np.dot(op1.d, op2.d)
    units = op1.units * op2.units
    if dot.shape == ():
        return unyt_quantity(dot, units)
    return unyt_array(dot, units)


def uvstack(arrs):
    """Stack arrays in sequence vertically (row wise) while preserving units

    This is a wrapper around np.vstack that preserves units.

    Examples
    --------
    >>> from unyt import km
    >>> a = [1, 2, 3]*km
    >>> b = [2, 3, 4]*km
    >>> print(uvstack([a, b]))
    [[1 2 3]
     [2 3 4]] km
    """
    v = np.vstack(arrs)
    v = _validate_numpy_wrapper_units(v, arrs)
    return v


def uhstack(arrs):
    """Stack arrays in sequence horizontally while preserving units

    This is a wrapper around np.hstack that preserves units.

    Examples
    --------
    >>> from unyt import km
    >>> a = [1, 2, 3]*km
    >>> b = [2, 3, 4]*km
    >>> print(uhstack([a, b]))
    [1 2 3 2 3 4] km
    >>> a = [[1],[2],[3]]*km
    >>> b = [[2],[3],[4]]*km
    >>> print(uhstack([a, b]))
    [[1 2]
     [2 3]
     [3 4]] km
    """
    v = np.hstack(arrs)
    v = _validate_numpy_wrapper_units(v, arrs)
    return v


def ustack(arrs, axis=0):
    """Join a sequence of arrays along a new axis while preserving units

    The axis parameter specifies the index of the new axis in the
    dimensions of the result. For example, if ``axis=0`` it will be the
    first dimension and if ``axis=-1`` it will be the last dimension.

    This is a wrapper around np.stack that preserves units. See the
    documentation for np.stack for full details.
Examples -------- >>> from unyt import km >>> a = [1, 2, 3]*km >>> b = [2, 3, 4]*km >>> print(ustack([a, b])) [[1 2 3] [2 3 4]] km """ v = np.stack(arrs, axis=axis) v = _validate_numpy_wrapper_units(v, arrs) return v ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/units/dimensions.py0000644000175100001770000000016314714401662016105 0ustar00runnerdockerfrom unyt.dimensions import current_mks # explicit re-export to enable type checking from unyt.dimensions import * ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/units/equivalencies.py0000644000175100001770000000004114714401662016565 0ustar00runnerdockerfrom unyt.equivalencies import * ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/units/physical_constants.py0000644000175100001770000000273414714401662017653 0ustar00runnerdockerfrom unyt.array import unyt_quantity from unyt.unit_systems import add_constants from yt.units.unit_registry import default_unit_registry add_constants(globals(), registry=default_unit_registry) class _ConstantContainer: """A container for physical constants to associate with a dataset. This object is usually accessed on a Dataset instance via ``ds.units.physical_constants``. Parameters ---------- registry : UnitRegistry instance A unit registry to associate with units constants accessed on this object. Example ------- >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> ds.units.physical_constants.newtons_constant unyt_quantity(6.67384e-08, 'cm**3/(g*s**2)') """ def __init__(self, registry): self._registry = registry self._cache = {} def __dir__(self): ret = [p for p in globals() if not p.startswith("_")] + object.__dir__(self) return list(set(ret)) def __getattr__(self, item): if item in self._cache: return self._cache[item] if item in globals(): const = globals()[item].copy() const.units.registry = self._registry const.convert_to_base(self._registry.unit_system) const_v, const_unit = const.v, const.units ret = unyt_quantity(const_v, const_unit, registry=self._registry) self._cache[item] = ret return ret raise AttributeError(item) ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.371153 yt-4.4.0/yt/units/tests/0000755000175100001770000000000014714401715014524 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/units/tests/__init__.py0000644000175100001770000000000014714401662016624 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/units/tests/test_magnetic_code_units.py0000644000175100001770000000432114714401662022141 0ustar00runnerdockerimport numpy as np from yt.loaders import load_uniform_grid from numpy.testing import assert_allclose def test_magnetic_code_units(): sqrt4pi = np.sqrt(4.0 * np.pi) ddims = (16,) * 3 data = {"density": (np.random.uniform(size=ddims), "g/cm**3")} ds1 = load_uniform_grid( data, ddims, magnetic_unit=(sqrt4pi, "gauss"), unit_system="cgs" ) assert_allclose(ds1.magnetic_unit.value, sqrt4pi) assert str(ds1.magnetic_unit.units) == "G" mucu = ds1.magnetic_unit.to("code_magnetic") assert_allclose(mucu.value, 1.0) assert str(mucu.units) == "code_magnetic" ds2 = load_uniform_grid(data, ddims, magnetic_unit=(1.0, "T"), unit_system="cgs") assert_allclose(ds2.magnetic_unit.value, 10000.0) assert str(ds2.magnetic_unit.units) == "G" 
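    # Whatever unit system is used for display ("G" here), converting the
    # magnetic unit back to code units should give exactly 1.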
mucu = ds2.magnetic_unit.to("code_magnetic") assert_allclose(mucu.value, 1.0) assert str(mucu.units) == "code_magnetic" ds3 = load_uniform_grid(data, ddims, magnetic_unit=(1.0, "T"), unit_system="mks") assert_allclose(ds3.magnetic_unit.value, 1.0) assert str(ds3.magnetic_unit.units) == "T" mucu = ds3.magnetic_unit.to("code_magnetic") assert_allclose(mucu.value, 1.0) assert str(mucu.units) == "code_magnetic" ds4 = load_uniform_grid( data, ddims, magnetic_unit=(1.0, "gauss"), unit_system="mks" ) assert_allclose(ds4.magnetic_unit.value, 1.0e-4) assert str(ds4.magnetic_unit.units) == "T" mucu = ds4.magnetic_unit.to("code_magnetic") assert_allclose(mucu.value, 1.0) assert str(mucu.units) == "code_magnetic" ds5 = load_uniform_grid( data, ddims, magnetic_unit=(1.0, "uG"), unit_system="mks" ) assert_allclose(ds5.magnetic_unit.value, 1.0e-10) assert str(ds5.magnetic_unit.units) == "T" mucu = ds5.magnetic_unit.to("code_magnetic") assert_allclose(mucu.value, 1.0) assert str(mucu.units) == "code_magnetic" ds6 = load_uniform_grid( data, ddims, magnetic_unit=(1.0, "nT"), unit_system="cgs" ) assert_allclose(ds6.magnetic_unit.value, 1.0e-5) assert str(ds6.magnetic_unit.units) == "G" mucu = ds6.magnetic_unit.to("code_magnetic") assert_allclose(mucu.value, 1.0) assert str(mucu.units) == "code_magnetic" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/units/unit_lookup_table.py0000644000175100001770000000004614714401662017454 0ustar00runnerdockerfrom unyt._unit_lookup_table import * ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/units/unit_object.py0000644000175100001770000000003714714401662016242 0ustar00runnerdockerfrom unyt.unit_object import * ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/units/unit_registry.py0000644000175100001770000000032614714401662016645 0ustar00runnerdockerfrom unyt.dimensions import dimensionless from unyt.unit_registry import * default_unit_registry = UnitRegistry(unit_system="cgs") # type: ignore default_unit_registry.add("h", 1.0, dimensionless, tex_repr=r"h") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/units/unit_symbols.py0000644000175100001770000000267714714401662016500 0ustar00runnerdockerfrom unyt.unit_object import Unit from unyt.unit_systems import add_symbols from yt.units.unit_registry import default_unit_registry add_symbols(globals(), registry=default_unit_registry) class _SymbolContainer: """A container for units to associate with a dataset. This object is usually accessed on a Dataset instance via ``ds.units.unit_symbols``. Parameters ---------- registry : UnitRegistry instance A unit registry to associate with units accessed on this object. 
Example ------- >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> code_mass = ds.units.code_mass >>> (12 * code_mass).to("Msun") unyt_quantity(4.89719136e+11, 'Msun') >>> code_mass.registry is ds.unit_registry True """ def __init__(self, registry): self._registry = registry self._cache = {} def __dir__(self): ret = [u for u in globals() if not u.startswith("_")] ret += list(self._registry.keys()) ret += object.__dir__(self) return list(set(ret)) def __getattr__(self, item): if item in self._cache: return self._cache[item] if hasattr(globals(), item): ret = Unit(globals()[item].expr, registry=self._registry) elif item in self._registry: ret = Unit(item, registry=self._registry) else: raise AttributeError(item) self._cache[item] = ret return ret ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/units/unit_systems.py0000644000175100001770000000127114714401662016504 0ustar00runnerdockerfrom unyt.unit_systems import * def create_code_unit_system(unit_registry, current_mks_unit=None): code_unit_system = UnitSystem( name=unit_registry.unit_system_id, length_unit="code_length", mass_unit="code_mass", time_unit="code_time", temperature_unit="code_temperature", current_mks_unit=current_mks_unit, registry=unit_registry, ) code_unit_system["velocity"] = "code_velocity" if current_mks_unit: code_unit_system["magnetic_field_mks"] = "code_magnetic" else: code_unit_system["magnetic_field_cgs"] = "code_magnetic" code_unit_system["pressure"] = "code_pressure" return code_unit_system ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/units/yt_array.py0000644000175100001770000000023114714401662015563 0ustar00runnerdockerfrom unyt.array import * # NOQA: F403, F401 from yt.funcs import array_like_field # NOQA: F401 from yt.units import YTArray, YTQuantity # NOQA: F401 ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3791533 yt-4.4.0/yt/utilities/0000755000175100001770000000000014714401715014233 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/__init__.py0000644000175100001770000000000014714401662016333 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3791533 yt-4.4.0/yt/utilities/amr_kdtree/0000755000175100001770000000000014714401715016350 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/amr_kdtree/__init__.py0000644000175100001770000000003714714401662020462 0ustar00runnerdocker""" Initialize amr_kdtree """ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/amr_kdtree/amr_kdtools.py0000644000175100001770000000256314714401662021247 0ustar00runnerdockerimport numpy as np from yt.funcs import mylog def receive_and_reduce(comm, incoming_rank, image, add_to_front, *, use_opacity=True): mylog.debug("Receiving image from %04i", incoming_rank) # mylog.debug( '%04i receiving image from %04i'%(self.comm.rank,back.owner)) arr2 = comm.recv_array(incoming_rank, incoming_rank).reshape( (image.shape[0], image.shape[1], image.shape[2]) ) if not use_opacity: np.add(image, arr2, image) return image if add_to_front: front = arr2 back = image else: front = image back = arr2 if image.shape[2] == 3: # Assume Projection Camera, Add np.add(image, front, image) return image ta 
= 1.0 - front[:, :, 3] np.maximum(ta, 0.0, ta) # This now does the following calculation, but in a memory # conservative fashion # image[:,:,i ] = front[:,:,i] + ta*back[:,:,i] image = back.copy() for i in range(4): np.multiply(image[:, :, i], ta, image[:, :, i]) np.add(image, front, image) return image def send_to_parent(comm, outgoing_rank, image): mylog.debug("Sending image to %04i", outgoing_rank) comm.send_array(image, outgoing_rank, tag=comm.rank) def scatter_image(comm, root, image): mylog.debug("Scattering from %04i", root) image = comm.mpi_bcast(image, root=root) return image ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/amr_kdtree/amr_kdtree.py0000644000175100001770000005225514714401662021051 0ustar00runnerdockerimport operator import numpy as np from yt.funcs import is_sequence, mylog from yt.geometry.grid_geometry_handler import GridIndex from yt.utilities.amr_kdtree.amr_kdtools import ( receive_and_reduce, scatter_image, send_to_parent, ) from yt.utilities.lib.amr_kdtools import Node from yt.utilities.lib.partitioned_grid import PartitionedGrid from yt.utilities.math_utils import periodic_position from yt.utilities.on_demand_imports import _h5py as h5py from yt.utilities.parallel_tools.parallel_analysis_interface import ( ParallelAnalysisInterface, ) steps = np.array( [ [-1, -1, -1], [-1, -1, 0], [-1, -1, 1], [-1, 0, -1], [-1, 0, 0], [-1, 0, 1], [-1, 1, -1], [-1, 1, 0], [-1, 1, 1], [0, -1, -1], [0, -1, 0], [0, -1, 1], [0, 0, -1], # [ 0, 0, 0], [0, 0, 1], [0, 1, -1], [0, 1, 0], [0, 1, 1], [1, -1, -1], [1, -1, 0], [1, -1, 1], [1, 0, -1], [1, 0, 0], [1, 0, 1], [1, 1, -1], [1, 1, 0], [1, 1, 1], ] ) def _apply_log(data, log_changed, log_new): """Helper used to set log10/10^ to data in AMRKDTree""" if not log_changed: return if log_new: np.log10(data, data) else: np.power(10.0, data, data) class Tree: def __init__( self, ds, comm_rank=0, comm_size=1, left=None, right=None, min_level=None, max_level=None, data_source=None, ): self.ds = ds try: self._id_offset = ds.index.grids[0]._id_offset except AttributeError: self._id_offset = 0 if data_source is None: data_source = ds.all_data() self.data_source = data_source if left is None: left = np.array([-np.inf] * 3) if right is None: right = np.array([np.inf] * 3) if min_level is None: min_level = 0 if max_level is None: max_level = ds.index.max_level self.min_level = min_level self.max_level = max_level self.comm_rank = comm_rank self.comm_size = comm_size self.trunk = Node(None, None, None, left, right, -1, 1) self.build() def add_grids(self, grids): gles = np.array([g.LeftEdge for g in grids]) gres = np.array([g.RightEdge for g in grids]) gids = np.array([g.id for g in grids], dtype="int64") self.trunk.add_grids( gids.size, gles, gres, gids, self.comm_rank, self.comm_size ) del gles, gres, gids, grids def build(self): lvl_range = range(self.min_level, self.max_level + 1) for lvl in lvl_range: # grids = self.data_source.select_grids(lvl) grids = np.array( [b for b, mask in self.data_source.blocks if b.Level == lvl] ) if len(grids) == 0: continue self.add_grids(grids) def check_tree(self): for node in self.trunk.depth_traverse(): if node.grid == -1: continue grid = self.ds.index.grids[node.grid - self._id_offset] dds = grid.dds gle = grid.LeftEdge nle = self.ds.arr(node.get_left_edge(), units="code_length") nre = self.ds.arr(node.get_right_edge(), units="code_length") li = np.rint((nle - gle) / dds).astype("int32") ri = np.rint((nre - gle) / dds).astype("int32") dims = 
ri - li assert np.all(grid.LeftEdge <= nle) assert np.all(grid.RightEdge >= nre) assert np.all(dims > 0) # print(grid, dims, li, ri) # Calculate the Volume vol = self.trunk.kd_sum_volume() mylog.debug("AMRKDTree volume = %e", vol) self.trunk.kd_node_check() def sum_cells(self, all_cells=False): cells = 0 for node in self.trunk.depth_traverse(): if node.grid == -1: continue if not all_cells and not node.kd_is_leaf(): continue grid = self.ds.index.grids[node.grid - self._id_offset] dds = grid.dds gle = grid.LeftEdge nle = self.ds.arr(node.get_left_edge(), units="code_length") nre = self.ds.arr(node.get_right_edge(), units="code_length") li = np.rint((nle - gle) / dds).astype("int32") ri = np.rint((nre - gle) / dds).astype("int32") dims = ri - li cells += np.prod(dims) return cells class AMRKDTree(ParallelAnalysisInterface): r"""A KDTree for AMR data. Not applicable to particle or octree-based datasets. """ fields = None log_fields = None no_ghost = True def __init__(self, ds, min_level=None, max_level=None, data_source=None): if not issubclass(ds.index.__class__, GridIndex): raise RuntimeError( "AMRKDTree does not support particle or octree-based data." ) ParallelAnalysisInterface.__init__(self) self.ds = ds self.current_vcds = [] self.current_saved_grids = [] self.bricks = [] self.brick_dimensions = [] self.sdx = ds.index.get_smallest_dx() self._initialized = False try: self._id_offset = ds.index.grids[0]._id_offset except AttributeError: self._id_offset = 0 if data_source is None: data_source = self.ds.all_data() self.data_source = data_source mylog.debug("Building AMRKDTree") self.tree = Tree( ds, self.comm.rank, self.comm.size, min_level=min_level, max_level=max_level, data_source=data_source, ) def set_fields(self, fields, log_fields, no_ghost, force=False): new_fields = self.data_source._determine_fields(fields) regenerate_data = ( self.fields is None or len(self.fields) != len(new_fields) or self.fields != new_fields or force ) if not is_sequence(log_fields): log_fields = [log_fields] new_log_fields = list(log_fields) self.tree.trunk.set_dirty(regenerate_data) self.fields = new_fields if self.log_fields is not None and not regenerate_data: flip_log = list(map(operator.ne, self.log_fields, new_log_fields)) else: flip_log = [False] * len(new_log_fields) self.log_fields = new_log_fields self.no_ghost = no_ghost del self.bricks, self.brick_dimensions self.brick_dimensions = [] bricks = [] for b in self.traverse(): list(map(_apply_log, b.my_data, flip_log, self.log_fields)) bricks.append(b) self.bricks = np.array(bricks) self.brick_dimensions = np.array(self.brick_dimensions) self._initialized = True def initialize_source(self, fields, log_fields, no_ghost): if ( fields == self.fields and log_fields == self.log_fields and no_ghost == self.no_ghost ): return self.set_fields(fields, log_fields, no_ghost) def traverse(self, viewpoint=None): for node in self.tree.trunk.kd_traverse(viewpoint=viewpoint): yield self.get_brick_data(node) def slice_traverse(self, viewpoint=None): if not hasattr(self.ds.index, "grid"): raise NotImplementedError for node in self.tree.trunk.kd_traverse(viewpoint=viewpoint): grid = self.ds.index.grids[node.grid - self._id_offset] dds = grid.dds gle = grid.LeftEdge.in_units("code_length").ndarray_view() nle = node.get_left_edge() nre = node.get_right_edge() li = np.rint((nle - gle) / dds).astype("int32") ri = np.rint((nre - gle) / dds).astype("int32") dims = ri - li sl = (slice(li[0], ri[0]), slice(li[1], ri[1]), slice(li[2], ri[2])) gi = grid.get_global_startindex() + 
li yield grid, node, (sl, dims, gi) def get_node(self, nodeid): path = np.binary_repr(nodeid) depth = 1 temp = self.tree.trunk for depth in range(1, len(path)): if path[depth] == "0": temp = temp.left else: temp = temp.right assert temp is not None return temp def locate_node(self, pos): return self.tree.trunk.find_node(pos) def get_reduce_owners(self): owners = {} for bottom_id in range(self.comm.size, 2 * self.comm.size): temp = self.get_node(bottom_id) owners[temp.node_id] = temp.node_id - self.comm.size while temp is not None: if temp.parent is None: break if temp == temp.parent.right: break temp = temp.parent owners[temp.node_id] = owners[temp.left.node_id] return owners def reduce_tree_images(self, image, viewpoint, *, use_opacity=True): if self.comm.size <= 1: return image myrank = self.comm.rank nprocs = self.comm.size owners = self.get_reduce_owners() node = self.get_node(nprocs + myrank) while owners[node.parent.node_id] == myrank: split_dim = node.parent.get_split_dim() split_pos = node.parent.get_split_pos() add_to_front = viewpoint[split_dim] >= split_pos image = receive_and_reduce( self.comm, owners[node.parent.right.node_id], image, add_to_front, use_opacity=use_opacity, ) if node.parent.node_id == 1: break else: node = node.parent else: send_to_parent(self.comm, owners[node.parent.node_id], image) return scatter_image(self.comm, owners[1], image) def get_brick_data(self, node): if node.data is not None and not node.dirty: return node.data grid = self.ds.index.grids[node.grid - self._id_offset] dds = grid.dds.ndarray_view() gle = grid.LeftEdge.ndarray_view() nle = node.get_left_edge() nre = node.get_right_edge() li = np.rint((nle - gle) / dds).astype("int32") ri = np.rint((nre - gle) / dds).astype("int32") dims = ri - li assert np.all(grid.LeftEdge <= nle) assert np.all(grid.RightEdge >= nre) if grid in self.current_saved_grids and not node.dirty: dds = self.current_vcds[self.current_saved_grids.index(grid)] else: dds = [] vcd = grid.get_vertex_centered_data( self.fields, smoothed=True, no_ghost=self.no_ghost ) for i, field in enumerate(self.fields): if self.log_fields[i]: v = vcd[field].astype("float64") v[v <= 0] = np.nan dds.append(np.log10(v)) else: dds.append(vcd[field].astype("float64")) self.current_saved_grids.append(grid) self.current_vcds.append(dds) if self.data_source.selector is None: mask = np.ones(dims, dtype="uint8") else: mask, _ = self.data_source.selector.fill_mask_regular_grid(grid) mask = mask[li[0] : ri[0], li[1] : ri[1], li[2] : ri[2]].astype("uint8") data = [ d[li[0] : ri[0] + 1, li[1] : ri[1] + 1, li[2] : ri[2] + 1].copy() for d in dds ] brick = PartitionedGrid( grid.id, data, mask, nle.copy(), nre.copy(), dims.astype("int64") ) node.data = brick node.dirty = False if not self._initialized: self.brick_dimensions.append(dims) return brick def locate_neighbors(self, grid, ci): r"""Given a grid and cell index, finds the 26 neighbor grids and cell indices. Parameters ---------- grid: Grid Object Grid containing the cell of interest ci: array-like The cell index of the cell of interest Returns ------- grids: Numpy array of Grid objects cis: List of neighbor cell index tuples Both of these are neighbors that, relative to the current cell index (i,j,k), are ordered as: (i-1, j-1, k-1), (i-1, j-1, k ), (i-1, j-1, k+1), ... (i-1, j , k-1), (i-1, j , k ), (i-1, j , k+1), ... (i+1, j+1, k-1), (i-1, j-1, k ), (i+1, j+1, k+1) That is they start from the lower left and proceed to upper right varying the third index most frequently. 
Note that the center cell (i,j,k) is omitted. """ ci = np.array(ci) center_dds = grid.dds position = grid.LeftEdge + (np.array(ci) + 0.5) * grid.dds grids = np.empty(26, dtype="object") cis = np.empty([26, 3], dtype="int64") offs = 0.5 * (center_dds + self.sdx) new_cis = ci + steps in_grid = np.all((new_cis >= 0) * (new_cis < grid.ActiveDimensions), axis=1) new_positions = position + steps * offs new_positions = [periodic_position(p, self.ds) for p in new_positions] grids[in_grid] = grid get_them = np.argwhere(in_grid).ravel() cis[in_grid] = new_cis[in_grid] if (in_grid).sum() > 0: grids[np.logical_not(in_grid)] = [ self.ds.index.grids[ self.locate_node(new_positions[i]).grid - self._id_offset ] for i in get_them ] cis[np.logical_not(in_grid)] = [ (new_positions[i] - grids[i].LeftEdge) / grids[i].dds for i in get_them ] cis = [tuple(_ci) for _ci in cis] return grids, cis def locate_neighbors_from_position(self, position): r"""Given a position, finds the 26 neighbor grids and cell indices. This is a mostly a wrapper for locate_neighbors. Parameters ---------- position: array-like Position of interest Returns ------- grids: Numpy array of Grid objects cis: List of neighbor cell index tuples Both of these are neighbors that, relative to the current cell index (i,j,k), are ordered as: (i-1, j-1, k-1), (i-1, j-1, k ), (i-1, j-1, k+1), ... (i-1, j , k-1), (i-1, j , k ), (i-1, j , k+1), ... (i+1, j+1, k-1), (i-1, j-1, k ), (i+1, j+1, k+1) That is they start from the lower left and proceed to upper right varying the third index most frequently. Note that the center cell (i,j,k) is omitted. """ position = np.array(position) grid = self.ds.index.grids[self.locate_node(position).grid - self._id_offset] ci = ((position - grid.LeftEdge) / grid.dds).astype("int64") return self.locate_neighbors(grid, ci) def store_kd_bricks(self, fn=None): if not self._initialized: self.initialize_source() if fn is None: fn = f"{self.ds}_kd_bricks.h5" if self.comm.rank != 0: self.comm.recv_array(self.comm.rank - 1, tag=self.comm.rank - 1) f = h5py.File(fn, mode="w") for node in self.tree.depth_traverse(): i = node.node_id if node.data is not None: for fi, field in enumerate(self.fields): try: f.create_dataset( f"/brick_{hex(i)}_{field}", data=node.data.my_data[fi].astype("float64"), ) except Exception: pass f.close() del f if self.comm.rank != (self.comm.size - 1): self.comm.send_array([0], self.comm.rank + 1, tag=self.comm.rank) def load_kd_bricks(self, fn=None): if fn is None: fn = f"{self.ds}_kd_bricks.h5" if self.comm.rank != 0: self.comm.recv_array(self.comm.rank - 1, tag=self.comm.rank - 1) try: f = h5py.File(fn, mode="a") for node in self.tree.depth_traverse(): i = node.node_id if node.grid != -1: data = [ f[f"brick_{hex(i)}_{field}"][:].astype("float64") for field in self.fields ] node.data = PartitionedGrid( node.grid.id, data, node.l_corner.copy(), node.r_corner.copy(), node.dims.astype("int64"), ) self.bricks.append(node.data) self.brick_dimensions.append(node.dims) self.bricks = np.array(self.bricks) self.brick_dimensions = np.array(self.brick_dimensions) self._initialized = True f.close() del f except Exception: pass if self.comm.rank != (self.comm.size - 1): self.comm.send_array([0], self.comm.rank + 1, tag=self.comm.rank) def join_parallel_trees(self): if self.comm.size == 0: return nid, pid, lid, rid, les, res, gid, splitdims, splitposs = self.get_node_arrays() nid = self.comm.par_combine_object(nid, "cat", "list") pid = self.comm.par_combine_object(pid, "cat", "list") lid = 
self.comm.par_combine_object(lid, "cat", "list") rid = self.comm.par_combine_object(rid, "cat", "list") gid = self.comm.par_combine_object(gid, "cat", "list") les = self.comm.par_combine_object(les, "cat", "list") res = self.comm.par_combine_object(res, "cat", "list") splitdims = self.comm.par_combine_object(splitdims, "cat", "list") splitposs = self.comm.par_combine_object(splitposs, "cat", "list") nid = np.array(nid) self.rebuild_tree_from_array( nid, pid, lid, rid, les, res, gid, splitdims, splitposs ) def get_node_arrays(self): nids = [] leftids = [] rightids = [] parentids = [] les = [] res = [] gridids = [] splitdims = [] splitposs = [] for node in self.tree.trunk.depth_first_touch(): nids.append(node.node_id) les.append(node.get_left_edge()) res.append(node.get_right_edge()) if node.left is None: leftids.append(-1) else: leftids.append(node.left.node_id) if node.right is None: rightids.append(-1) else: rightids.append(node.right.node_id) if node.parent is None: parentids.append(-1) else: parentids.append(node.parent.node_id) if node.grid is None: gridids.append(-1) else: gridids.append(node.grid) splitdims.append(node.get_split_dim()) splitposs.append(node.get_split_pos()) return ( nids, parentids, leftids, rightids, les, res, gridids, splitdims, splitposs, ) def rebuild_tree_from_array( self, nids, pids, lids, rids, les, res, gids, splitdims, splitposs ): del self.tree.trunk self.tree.trunk = Node(None, None, None, les[0], res[0], gids[0], nids[0]) N = nids.shape[0] for i in range(N): n = self.get_node(nids[i]) n.set_left_edge(les[i]) n.set_right_edge(res[i]) if lids[i] != -1 and n.left is None: n.left = Node( n, None, None, np.zeros(3, dtype="float64"), np.zeros(3, dtype="float64"), -1, lids[i], ) if rids[i] != -1 and n.right is None: n.right = Node( n, None, None, np.zeros(3, dtype="float64"), np.zeros(3, dtype="float64"), -1, rids[i], ) if gids[i] != -1: n.grid = gids[i] if splitdims[i] != -1: n.create_split(splitdims[i], splitposs[i]) mylog.info( "AMRKDTree rebuilt, Final Volume: %e", self.tree.trunk.kd_sum_volume() ) return self.tree.trunk def count_volume(self): return self.tree.trunk.kd_sum_volume() def count_cells(self): return self.tree.sum_cells() if __name__ == "__main__": from time import time import yt ds = yt.load("/Users/skillman/simulations/DD1717/DD1717") ds.index t1 = time() hv = AMRKDTree(ds) t2 = time() print(hv.tree.trunk.kd_sum_volume()) print(hv.tree.trunk.kd_node_check()) print(f"Time: {t2 - t1:e} seconds") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/amr_kdtree/api.py0000644000175100001770000000004214714401662017470 0ustar00runnerdockerfrom .amr_kdtree import AMRKDTree ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3791533 yt-4.4.0/yt/utilities/answer_testing/0000755000175100001770000000000014714401715017267 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/answer_testing/__init__.py0000644000175100001770000000007014714401662021376 0ustar00runnerdocker""" The components of the Enzo testing mechanism """ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/answer_testing/answer_tests.py0000644000175100001770000002441314714401662022367 0ustar00runnerdocker""" Title: answer_tests.py Purpose: Contains answer tests that are used by yt's various frontends """ import hashlib import os import tempfile 
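# Illustrative sketch (not yt API; the helper name is hypothetical): the
# answer tests below reduce large results to stable MD5 hex digests so that
# only short strings need to be stored and compared. The core pattern:
#
#   import hashlib
#   import numpy as np
#
#   def digest_named_arrays(named_arrays):
#       md5 = hashlib.md5()
#       for name in sorted(named_arrays):
#           md5.update(name.encode("utf8"))
#           md5.update(np.ascontiguousarray(named_arrays[name]).tobytes())
#       return md5.hexdigest()
#
#   digest_named_arrays({"density": np.arange(8.0)})  # -> 32-char hex digest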
import matplotlib.image as mpimg import numpy as np import yt.visualization.plot_window as pw from yt.utilities.answer_testing.framework import create_obj from yt.utilities.answer_testing.testing_utilities import ( _create_phase_plot_attribute_plot, _create_plot_window_attribute_plot, ) def grid_hierarchy(ds): result = {} result["grid_dimensions"] = ds.index.grid_dimensions result["grid_left_edge"] = ds.index.grid_left_edge result["grid_right_edge"] = ds.index.grid_right_edge result["grid_levels"] = ds.index.grid_levels result["grid_particle_count"] = ds.index.grid_particle_count return result def parentage_relationships(ds): parents = [] children = [] for g in ds.index.grids: p = g.Parent if p is None: parents.append(-1) elif hasattr(p, "id"): parents.append(p.id) else: parents = parents + [pg.id for pg in p] children = children + [c.id for c in g.Children] result = np.array(parents + children) return result def grid_values(ds, field): # The hashing is done here so that there is only one entry for # the test that contains info about all of the grids as opposed # to having a separate 'grid_id : grid_hash' pair for each grid # since that makes the answer file much larger result = None for g in ds.index.grids: if result is None: result = hashlib.md5(bytes(g.id) + g[field].tobytes()) else: result.update(bytes(g.id) + g[field].tobytes()) g.clear_data() return result.hexdigest() def projection_values(ds, axis, field, weight_field, dobj_type): if dobj_type is not None: dobj = create_obj(ds, dobj_type) else: dobj = None if ds.domain_dimensions[axis] == 1: # This originally returned None, but None can't be converted # to a bytes array (for hashing), so use -1 as a string, # since ints can't be converted to bytes either return bytes(str(-1).encode("utf-8")) proj = ds.proj(field, axis, weight_field=weight_field, data_source=dobj) # This is to try and remove python-specific anchors in the yaml # answer file. Also, using __repr__() results in weird strings # of strings that make comparison fail even though the data is # the same result = None for k, v in proj.field_data.items(): k = k.__repr__().encode("utf8") if result is None: result = hashlib.md5(k + v.tobytes()) else: result.update(k + v.tobytes()) return result.hexdigest() def field_values(ds, field, obj_type=None, particle_type=False): # If needed build an instance of the dataset type obj = create_obj(ds, obj_type) determined_field = obj._determine_fields(field)[0] fd = ds.field_info[determined_field] # Get the proper weight field depending on if we're looking at # particles or not if particle_type: weight_field = (determined_field[0], "particle_ones") elif fd.is_sph_field: weight_field = (determined_field[0], "ones") else: weight_field = ("index", "ones") # Get the average, min, and max avg = obj.quantities.weighted_average_quantity( determined_field, weight=weight_field ) minimum, maximum = obj.quantities.extrema(field) # Return as a hashable bytestring return np.array([avg, minimum, maximum]) def pixelized_projection_values(ds, axis, field, weight_field=None, dobj_type=None): if dobj_type is not None: obj = create_obj(ds, dobj_type) else: obj = None proj = ds.proj(field, axis, weight_field=weight_field, data_source=obj) frb = proj.to_frb((1.0, "unitary"), 256) frb.render(field) if weight_field is not None: frb.render(weight_field) d = frb.data for f in proj.field_data: # Sometimes f will be a tuple. d[f"{f}_sum"] = proj.field_data[f].sum(dtype="float64") # This is to try and remove python-specific anchors in the yaml # answer file. 
Also, using __repr__() results in weird strings # of strings that make comparison fail even though the data is # the same result = None for k, v in d.items(): k = k.__repr__().encode("utf8") if result is None: result = hashlib.md5(k + v.tobytes()) else: result.update(k + v.tobytes()) return result.hexdigest() def small_patch_amr(ds, field, weight, axis, ds_obj): hex_digests = {} # Grid hierarchy test gh_hd = grid_hierarchy(ds) hex_digests["grid_hierarchy"] = gh_hd # Parentage relationships test pr_hd = parentage_relationships(ds) hex_digests["parentage_relationships"] = pr_hd # Grid values, projection values, and field values tests gv_hd = grid_values(ds, field) hex_digests["grid_values"] = gv_hd fv_hd = field_values(ds, field, ds_obj) hex_digests["field_values"] = fv_hd pv_hd = projection_values(ds, axis, field, weight, ds_obj) hex_digests["projection_values"] = pv_hd return hex_digests def big_patch_amr(ds, field, weight, axis, ds_obj): hex_digests = {} # Grid hierarchy test gh_hd = grid_hierarchy(ds) hex_digests["grid_hierarchy"] = gh_hd # Parentage relationships test pr_hd = parentage_relationships(ds) hex_digests["parentage_relationships"] = pr_hd # Grid values, projection values, and field values tests gv_hd = grid_values(ds, field) hex_digests["grid_values"] = gv_hd ppv_hd = pixelized_projection_values(ds, axis, field, weight, ds_obj) hex_digests["pixelized_projection_values"] = ppv_hd return hex_digests def generic_array(func, args=None, kwargs=None): if args is None: args = [] if kwargs is None: kwargs = {} return func(*args, **kwargs) def sph_answer(ds, ds_str_repr, ds_nparticles, field, weight, ds_obj, axis): # Make sure we're dealing with the right dataset assert str(ds) == ds_str_repr # Set up keys of test names hex_digests = {} dd = ds.all_data() assert dd["particle_position"].shape == (ds_nparticles, 3) tot = sum( dd[ptype, "particle_position"].shape[0] for ptype in ds.particle_types if ptype != "all" ) # Check assert tot == ds_nparticles dobj = create_obj(ds, ds_obj) s1 = dobj["ones"].sum() s2 = sum(mask.sum() for block, mask in dobj.blocks) assert s1 == s2 if field[0] in ds.particle_types: particle_type = True else: particle_type = False if not particle_type: ppv_hd = pixelized_projection_values(ds, axis, field, weight, ds_obj) hex_digests["pixelized_projection_values"] = ppv_hd fv_hd = field_values(ds, field, ds_obj, particle_type=particle_type) hex_digests["field_values"] = fv_hd return hex_digests def get_field_size_and_mean(ds, field, geometric): if geometric: obj = ds.all_data() else: obj = ds.data return np.array([obj[field].size, obj[field].mean()]) def plot_window_attribute( ds, plot_field, plot_axis, attr_name, attr_args, plot_type="SlicePlot", callback_id="", callback_runners=None, ): if callback_runners is None: callback_runners = [] plot = _create_plot_window_attribute_plot(ds, plot_type, plot_field, plot_axis, {}) for r in callback_runners: r(plot_field, plot) attr = getattr(plot, attr_name) attr(*attr_args[0], **attr_args[1]) tmpfd, tmpname = tempfile.mkstemp(suffix=".png") os.close(tmpfd) plot.save(name=tmpname) image = mpimg.imread(tmpname) os.remove(tmpname) return image def phase_plot_attribute( ds_fn, x_field, y_field, z_field, attr_name, attr_args, plot_type="PhasePlot", plot_kwargs=None, ): if plot_kwargs is None: plot_kwargs = {} data_source = ds_fn.all_data() plot = _create_phase_plot_attribute_plot( data_source, x_field, y_field, z_field, plot_type, plot_kwargs ) attr = getattr(plot, attr_name) attr(*attr_args[0], **attr_args[1]) tmpfd, tmpname = 
tempfile.mkstemp(suffix=".png") os.close(tmpfd) plot.save(name=tmpname) image = mpimg.imread(tmpname) os.remove(tmpname) return image def generic_image(img_fname): from yt._maintenance.deprecation import issue_deprecation_warning issue_deprecation_warning( "yt.utilities.answer_testing.answer_tests.generic_image is deprecated " "and will be removed in a future version. Please use pytest-mpl instead", since="4.4", stacklevel=2, ) img_data = mpimg.imread(img_fname) return img_data def axial_pixelization(ds): r""" This test is typically used once per geometry or coordinates type. Feed it a dataset, and it checks that the results of basic pixelization don't change. """ for i, axis in enumerate(ds.coordinates.axis_order): (bounds, center, display_center) = pw.get_window_parameters( axis, ds.domain_center, None, ds ) slc = ds.slice(axis, center[i]) xax = ds.coordinates.axis_name[ds.coordinates.x_axis[axis]] yax = ds.coordinates.axis_name[ds.coordinates.y_axis[axis]] pix_x = ds.coordinates.pixelize(axis, slc, xax, bounds, (512, 512)) pix_y = ds.coordinates.pixelize(axis, slc, yax, bounds, (512, 512)) # Wipe out all NaNs pix_x[np.isnan(pix_x)] = 0.0 pix_y[np.isnan(pix_y)] = 0.0 pix_x pix_y return pix_x, pix_y def extract_connected_sets(ds_fn, data_source, field, num_levels, min_val, max_val): n, all_sets = data_source.extract_connected_sets( field, num_levels, min_val, max_val ) result = [] for level in all_sets: for set_id in all_sets[level]: result.append( [ all_sets[level][set_id]["cell_mass"].size, all_sets[level][set_id]["cell_mass"].sum(), ] ) result = np.array(result) return result def VR_image_comparison(scene): tmpfd, tmpname = tempfile.mkstemp(suffix=".png") os.close(tmpfd) scene.save(tmpname, sigma_clip=1.0) image = mpimg.imread(tmpname) os.remove(tmpname) return image ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/answer_testing/api.py0000644000175100001770000000010014714401662020402 0ustar00runnerdockerfrom yt.utilities.answer_testing.framework import AnswerTesting ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/answer_testing/framework.py0000644000175100001770000011623314714401662021645 0ustar00runnerdocker""" Title: framework.py Purpose: Contains answer tests that are used by yt's various frontends """ import contextlib import hashlib import logging import os import pickle import shelve import sys import tempfile import time import warnings import zlib from collections import defaultdict import numpy as np from matplotlib import image as mpimg from matplotlib.testing.compare import compare_images from nose.plugins import Plugin from numpy.testing import assert_almost_equal, assert_equal from yt.config import ytcfg from yt.data_objects.static_output import Dataset from yt.funcs import get_pbar, get_yt_version from yt.loaders import load, load_simulation from yt.testing import ( assert_allclose_units, assert_rel_equal, skipif, ) from yt.utilities.exceptions import ( YTAmbiguousDataType, YTCloudError, YTNoAnswerNameSpecified, YTNoOldAnswer, YTUnidentifiedDataType, ) from yt.utilities.logger import disable_stream_logging from yt.visualization import ( image_writer as image_writer, particle_plots as particle_plots, plot_window as pw, profile_plotter as profile_plotter, ) mylog = logging.getLogger("nose.plugins.answer-testing") run_big_data = False # Set the latest gold and local standard filenames _latest = ytcfg.get("yt", "gold_standard_filename") 
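# Illustrative sketch (not yt API): several of the image tests in this file
# and the previous one rasterize a plot through a throwaway PNG: create a
# temporary file, save the plot into it, read it back as an array, then
# delete it. Stripped of yt specifics (``render_png`` is a hypothetical
# stand-in for something like ``lambda name: plot.save(name=name)``):
#
#   import os
#   import tempfile
#   import matplotlib.image as mpimg
#
#   def rasterize(render_png):
#       tmpfd, tmpname = tempfile.mkstemp(suffix=".png")
#       os.close(tmpfd)  # mkstemp leaves the descriptor open; close it first
#       try:
#           render_png(tmpname)
#           return mpimg.imread(tmpname)
#       finally:
#           os.remove(tmpname)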
_latest_local = ytcfg.get("yt", "local_standard_filename") _url_path = ytcfg.get("yt", "answer_tests_url") class AnswerTesting(Plugin): name = "answer-testing" _my_version = None def options(self, parser, env=os.environ): super().options(parser, env=env) parser.add_option( "--answer-name", dest="answer_name", metavar="str", default=None, help="The name of the standard to store/compare against", ) parser.add_option( "--answer-store", dest="store_results", metavar="bool", default=False, action="store_true", help="Should we store this result instead of comparing?", ) parser.add_option( "--local", dest="local_results", default=False, action="store_true", help="Store/load reference results locally?", ) parser.add_option( "--answer-big-data", dest="big_data", default=False, help="Should we run against big data, too?", action="store_true", ) parser.add_option( "--local-dir", dest="output_dir", metavar="str", help="The name of the directory to store local results", ) @property def my_version(self, version=None): if self._my_version is not None: return self._my_version if version is None: try: version = get_yt_version() except Exception: version = f"UNKNOWN{time.time()}" self._my_version = version return self._my_version def configure(self, options, conf): super().configure(options, conf) if not self.enabled: return disable_stream_logging() # Parse through the storage flags to make sense of them # and use reasonable defaults # If we're storing the data, default storage name is local # latest version if options.store_results: if options.answer_name is None: self.store_name = _latest_local else: self.store_name = options.answer_name self.compare_name = None # if we're not storing, then we're comparing, and we want default # comparison name to be the latest gold standard # either on network or local else: if options.answer_name is None: if options.local_results: self.compare_name = _latest_local else: self.compare_name = _latest else: self.compare_name = options.answer_name self.store_name = self.my_version self.store_results = options.store_results ytcfg["yt", "internals", "within_testing"] = True AnswerTestingTest.result_storage = self.result_storage = defaultdict(dict) if self.compare_name == "SKIP": self.compare_name = None elif self.compare_name == "latest": self.compare_name = _latest # Local/Cloud storage if options.local_results: if options.output_dir is None: print("Please supply an output directory with the --local-dir option") sys.exit(1) storage_class = AnswerTestLocalStorage output_dir = os.path.realpath(options.output_dir) # Fix up filename for local storage if self.compare_name is not None: self.compare_name = os.path.join( output_dir, self.compare_name, self.compare_name ) # Create a local directory only when `options.answer_name` is # provided. If it is not provided then creating local directory # will depend on the `AnswerTestingTest.answer_name` value of the # test, this case is handled in AnswerTestingTest class. 
if options.store_results and options.answer_name is not None: name_dir_path = os.path.join(output_dir, self.store_name) if not os.path.isdir(name_dir_path): os.makedirs(name_dir_path) self.store_name = os.path.join(name_dir_path, self.store_name) else: storage_class = AnswerTestCloudStorage # Initialize answer/reference storage AnswerTestingTest.reference_storage = self.storage = storage_class( self.compare_name, self.store_name ) AnswerTestingTest.options = options self.local_results = options.local_results global run_big_data run_big_data = options.big_data def finalize(self, result=None): if not self.store_results: return self.storage.dump(self.result_storage) def help(self): return "yt answer testing support" class AnswerTestStorage: def __init__(self, reference_name=None, answer_name=None): self.reference_name = reference_name self.answer_name = answer_name self.cache = {} def dump(self, result_storage, result): raise NotImplementedError def get(self, ds_name, default=None): raise NotImplementedError class AnswerTestCloudStorage(AnswerTestStorage): def get(self, ds_name, default=None): import urllib.error import urllib.request if self.reference_name is None: return default if ds_name in self.cache: return self.cache[ds_name] url = _url_path.format(self.reference_name, ds_name) try: resp = urllib.request.urlopen(url) except urllib.error.HTTPError as exc: raise YTNoOldAnswer(url) from exc else: for _ in range(3): try: data = resp.read() except Exception: time.sleep(0.01) else: # We were successful break else: # Raise error if all tries were unsuccessful raise YTCloudError(url) # This is dangerous, but we have a controlled S3 environment rv = pickle.loads(data) self.cache[ds_name] = rv return rv def progress_callback(self, current, total): self.pbar.update(current) def dump(self, result_storage): if self.answer_name is None: return # This is where we dump our result storage up to Amazon, if we are able # to. import pyrax credentials = os.path.expanduser(os.path.join("~", ".yt", "rackspace")) pyrax.set_credential_file(credentials) cf = pyrax.cloudfiles c = cf.get_container("yt-answer-tests") pb = get_pbar("Storing results ", len(result_storage)) for i, ds_name in enumerate(result_storage): pb.update(i + 1) rs = pickle.dumps(result_storage[ds_name]) object_name = f"{self.answer_name}_{ds_name}" if object_name in c.get_object_names(): obj = c.get_object(object_name) c.delete_object(obj) c.store_object(object_name, rs) pb.finish() class AnswerTestLocalStorage(AnswerTestStorage): def dump(self, result_storage): # The 'tainted' attribute is automatically set to 'True' # if the dataset required for an answer test is missing # (see can_run_ds(). # This logic check prevents creating a shelve with empty answers. 
storage_is_tainted = result_storage.get("tainted", False) if self.answer_name is None or storage_is_tainted: return # Store data using shelve ds = shelve.open(self.answer_name, protocol=-1) for ds_name in result_storage: answer_name = f"{ds_name}" if answer_name in ds: mylog.info("Overwriting %s", answer_name) ds[answer_name] = result_storage[ds_name] ds.close() def get(self, ds_name, default=None): if self.reference_name is None: return default # Read data using shelve answer_name = f"{ds_name}" os.makedirs(os.path.dirname(self.reference_name), exist_ok=True) ds = shelve.open(self.reference_name, protocol=-1) try: result = ds[answer_name] except KeyError: result = default ds.close() return result @contextlib.contextmanager def temp_cwd(cwd): oldcwd = os.getcwd() os.chdir(cwd) yield os.chdir(oldcwd) def can_run_ds(ds_fn, file_check=False): result_storage = AnswerTestingTest.result_storage if isinstance(ds_fn, Dataset): return result_storage is not None path = ytcfg.get("yt", "test_data_dir") if not os.path.isdir(path): return False if file_check: return os.path.isfile(os.path.join(path, ds_fn)) and result_storage is not None try: load(ds_fn) except FileNotFoundError: if ytcfg.get("yt", "internals", "strict_requires"): if result_storage is not None: result_storage["tainted"] = True raise return False except (YTUnidentifiedDataType, YTAmbiguousDataType): return False return result_storage is not None def data_dir_load(ds_fn, cls=None, args=None, kwargs=None): args = args or () kwargs = kwargs or {} path = ytcfg.get("yt", "test_data_dir") if isinstance(ds_fn, Dataset): return ds_fn if not os.path.isdir(path): return False if cls is None: ds = load(ds_fn, *args, **kwargs) else: ds = cls(os.path.join(path, ds_fn), *args, **kwargs) ds.index return ds def data_dir_load_v2(fn, *args, **kwargs): # a version of data_dir_load without type flexibility # that is simpler to reason about path = os.path.join(ytcfg.get("yt", "test_data_dir"), fn) return load(path, *args, **kwargs) def sim_dir_load(sim_fn, path=None, sim_type="Enzo", find_outputs=False): if path is None and not os.path.exists(sim_fn): raise OSError if os.path.exists(sim_fn) or not path: path = "." return load_simulation( os.path.join(path, sim_fn), sim_type, find_outputs=find_outputs ) class AnswerTestingTest: reference_storage = None result_storage = None prefix = "" options = None # This variable should be set if we are not providing `--answer-name` as # command line parameter while running yt's answer testing using nosetests. answer_name = None def __init__(self, ds_fn): if ds_fn is None: self.ds = None elif isinstance(ds_fn, Dataset): self.ds = ds_fn else: self.ds = data_dir_load(ds_fn, kwargs={"unit_system": "code"}) def __call__(self): if AnswerTestingTest.result_storage is None: return nv = self.run() # Test answer name should be provided either as command line parameters # or by setting AnswerTestingTest.answer_name if self.options.answer_name is None and self.answer_name is None: raise YTNoAnswerNameSpecified() # This is for running answer test when `--answer-name` is not set in # nosetests command line arguments. 
In this case, set the answer_name # from the `answer_name` keyword in the test case if self.options.answer_name is None: pyver = f"py{sys.version_info.major}{sys.version_info.minor}" self.answer_name = f"{pyver}_{self.answer_name}" answer_store_dir = os.path.realpath(self.options.output_dir) ref_name = os.path.join( answer_store_dir, self.answer_name, self.answer_name ) self.reference_storage.reference_name = ref_name self.reference_storage.answer_name = ref_name # If we are generating golden answers (passed --answer-store arg): # - create the answer directory for this test # - self.reference_storage.answer_name will be path to answer files if self.options.store_results: answer_test_dir = os.path.join(answer_store_dir, self.answer_name) if not os.path.isdir(answer_test_dir): os.makedirs(answer_test_dir) self.reference_storage.reference_name = None if self.reference_storage.reference_name is not None: # Compare test generated values against the golden answer dd = self.reference_storage.get(self.storage_name) if dd is None or self.description not in dd: raise YTNoOldAnswer(f"{self.storage_name} : {self.description}") ov = dd[self.description] self.compare(nv, ov) else: # Store results, hence do nothing (in case of --answer-store arg) ov = None self.result_storage[self.storage_name][self.description] = nv @property def storage_name(self): if self.prefix != "": return f"{self.prefix}_{self.ds}" return str(self.ds) def compare(self, new_result, old_result): raise RuntimeError def create_plot(self, ds, plot_type, plot_field, plot_axis, plot_kwargs=None): # plot_type should be a string # plot_kwargs should be a dict if plot_type is None: raise RuntimeError("Must explicitly request a plot type") cls = getattr(pw, plot_type, None) if cls is None: cls = getattr(particle_plots, plot_type) plot = cls(*(ds, plot_axis, plot_field), **plot_kwargs) return plot @property def sim_center(self): """ This returns the center of the domain. """ return 0.5 * (self.ds.domain_right_edge + self.ds.domain_left_edge) @property def max_dens_location(self): """ This is a helper function to return the location of the most dense point. """ return self.ds.find_max(("gas", "density"))[1] @property def entire_simulation(self): """ Return an unsorted array of values that cover the entire domain. """ return self.ds.all_data() @property def description(self): obj_type = getattr(self, "obj_type", None) if obj_type is None: oname = "all" else: oname = "_".join(str(s) for s in obj_type) args = [self._type_name, str(self.ds), oname] args += [str(getattr(self, an)) for an in self._attrs] suffix = getattr(self, "suffix", None) if suffix: args.append(suffix) return "_".join(args).replace(".", "_") class FieldValuesTest(AnswerTestingTest): _type_name = "FieldValues" _attrs = ("field",) def __init__(self, ds_fn, field, obj_type=None, particle_type=False, decimals=10): super().__init__(ds_fn) self.obj_type = obj_type self.field = field self.particle_type = particle_type self.decimals = decimals def run(self): obj = create_obj(self.ds, self.obj_type) field = obj._determine_fields(self.field)[0] fd = self.ds.field_info[field] if self.particle_type: weight_field = (field[0], "particle_ones") elif fd.is_sph_field: weight_field = (field[0], "ones") else: weight_field = ("index", "ones") avg = obj.quantities.weighted_average_quantity(field, weight=weight_field) mi, ma = obj.quantities.extrema(self.field) return [avg, mi, ma] def compare(self, new_result, old_result): err_msg = f"Field values for {self.field} not equal." 
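        # Illustrative aside (not executed here): the convention throughout
        # these compare() methods is that ``decimals=N`` means a relative
        # tolerance of 10**-N, with unyt arrays stripped to plain ndarrays
        # via their ``.d`` view first, roughly:
        #
        #   from numpy.testing import assert_allclose
        #   assert_allclose(getattr(new, "d", new), getattr(old, "d", old),
        #                   rtol=10.0 ** -decimals)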
if hasattr(new_result, "d"): new_result = new_result.d if hasattr(old_result, "d"): old_result = old_result.d if self.decimals is None: assert_equal(new_result, old_result, err_msg=err_msg, verbose=True) else: # What we do here is check if the old_result has units; if not, we # assume they will be the same as the units of new_result. if isinstance(old_result, np.ndarray) and not hasattr( old_result, "in_units" ): # coerce it here to the same units old_result = old_result * new_result[0].uq assert_allclose_units( new_result, old_result, 10.0 ** (-self.decimals), err_msg=err_msg, verbose=True, ) class AllFieldValuesTest(AnswerTestingTest): _type_name = "AllFieldValues" _attrs = ("field",) def __init__(self, ds_fn, field, obj_type=None, decimals=None): super().__init__(ds_fn) self.obj_type = obj_type self.field = field self.decimals = decimals def run(self): obj = create_obj(self.ds, self.obj_type) return obj[self.field] def compare(self, new_result, old_result): err_msg = f"All field values for {self.field} not equal." if hasattr(new_result, "d"): new_result = new_result.d if hasattr(old_result, "d"): old_result = old_result.d if self.decimals is None: assert_equal(new_result, old_result, err_msg=err_msg, verbose=True) else: assert_rel_equal( new_result, old_result, self.decimals, err_msg=err_msg, verbose=True ) class ProjectionValuesTest(AnswerTestingTest): _type_name = "ProjectionValues" _attrs = ("field", "axis", "weight_field") def __init__( self, ds_fn, axis, field, weight_field=None, obj_type=None, decimals=10 ): super().__init__(ds_fn) self.axis = axis self.field = field self.weight_field = weight_field self.obj_type = obj_type self.decimals = decimals def run(self): if self.obj_type is not None: obj = create_obj(self.ds, self.obj_type) else: obj = None if self.ds.domain_dimensions[self.axis] == 1: return None proj = self.ds.proj( self.field, self.axis, weight_field=self.weight_field, data_source=obj ) return proj.field_data def compare(self, new_result, old_result): if new_result is None: return assert len(new_result) == len(old_result) nind, oind = None, None for k in new_result: assert k in old_result if oind is None: oind = np.array(np.isnan(old_result[k])) np.logical_or(oind, np.isnan(old_result[k]), oind) if nind is None: nind = np.array(np.isnan(new_result[k])) np.logical_or(nind, np.isnan(new_result[k]), nind) oind = ~oind nind = ~nind for k in new_result: err_msg = f"{k} values of {self.field} ({self.weight_field} weighted) projection (axis {self.axis}) not equal." if k == "weight_field": # Our weight_field can vary between unit systems, whereas we # can do a unitful comparison for the other fields. So we do # not do the test here. 
continue nres, ores = new_result[k][nind], old_result[k][oind] if hasattr(nres, "d"): nres = nres.d if hasattr(ores, "d"): ores = ores.d if self.decimals is None: assert_equal(nres, ores, err_msg=err_msg) else: assert_allclose_units( nres, ores, 10.0**-(self.decimals), err_msg=err_msg ) class PixelizedProjectionValuesTest(AnswerTestingTest): _type_name = "PixelizedProjectionValues" _attrs = ("field", "axis", "weight_field") def __init__(self, ds_fn, axis, field, weight_field=None, obj_type=None): super().__init__(ds_fn) self.axis = axis self.field = field self.weight_field = weight_field self.obj_type = obj_type def _get_frb(self, obj): proj = self.ds.proj( self.field, self.axis, weight_field=self.weight_field, data_source=obj ) frb = proj.to_frb((1.0, "unitary"), 256) return proj, frb def run(self): if self.obj_type is not None: obj = create_obj(self.ds, self.obj_type) else: obj = None proj, frb = self._get_frb(obj) frb.render(self.field) if self.weight_field is not None: frb.render(self.weight_field) d = frb.data for f in proj.field_data: # Sometimes f will be a tuple. d[f"{f}_sum"] = proj.field_data[f].sum(dtype="float64") return d def compare(self, new_result, old_result): assert len(new_result) == len(old_result) for k in new_result: assert k in old_result for k in new_result: # weight_field does not have units, so we do not directly compare them if k == "weight_field_sum": continue try: assert_allclose_units(new_result[k], old_result[k], 1e-10) except AssertionError: dump_images(new_result[k], old_result[k]) raise class PixelizedParticleProjectionValuesTest(PixelizedProjectionValuesTest): def _get_frb(self, obj): proj_plot = particle_plots.ParticleProjectionPlot( self.ds, self.axis, [self.field], weight_field=self.weight_field ) return proj_plot.data_source, proj_plot.frb class GridValuesTest(AnswerTestingTest): _type_name = "GridValues" _attrs = ("field",) def __init__(self, ds_fn, field): super().__init__(ds_fn) self.field = field def run(self): hashes = {} for g in self.ds.index.grids: hashes[g.id] = hashlib.md5(g[self.field].tobytes()).hexdigest() g.clear_data() return hashes def compare(self, new_result, old_result): assert len(new_result) == len(old_result) for k in new_result: assert k in old_result for k in new_result: if hasattr(new_result[k], "d"): new_result[k] = new_result[k].d if hasattr(old_result[k], "d"): old_result[k] = old_result[k].d assert_equal(new_result[k], old_result[k]) class VerifySimulationSameTest(AnswerTestingTest): _type_name = "VerifySimulationSame" _attrs = () def __init__(self, simulation_obj): self.ds = simulation_obj def run(self): result = [ds.current_time for ds in self.ds] return result def compare(self, new_result, old_result): assert_equal( len(new_result), len(old_result), err_msg="Number of outputs not equal.", verbose=True, ) for i in range(len(new_result)): assert_equal( new_result[i], old_result[i], err_msg="Output times not equal.", verbose=True, ) class GridHierarchyTest(AnswerTestingTest): _type_name = "GridHierarchy" _attrs = () def run(self): result = {} result["grid_dimensions"] = self.ds.index.grid_dimensions result["grid_left_edges"] = self.ds.index.grid_left_edge result["grid_right_edges"] = self.ds.index.grid_right_edge result["grid_levels"] = self.ds.index.grid_levels result["grid_particle_count"] = self.ds.index.grid_particle_count return result def compare(self, new_result, old_result): for k in new_result: if hasattr(new_result[k], "d"): new_result[k] = new_result[k].d if hasattr(old_result[k], "d"): old_result[k] = 
old_result[k].d assert_equal(new_result[k], old_result[k]) class ParentageRelationshipsTest(AnswerTestingTest): _type_name = "ParentageRelationships" _attrs = () def run(self): result = {} result["parents"] = [] result["children"] = [] for g in self.ds.index.grids: p = g.Parent if p is None: result["parents"].append(None) elif hasattr(p, "id"): result["parents"].append(p.id) else: result["parents"].append([pg.id for pg in p]) result["children"].append([c.id for c in g.Children]) return result def compare(self, new_result, old_result): for newp, oldp in zip( new_result["parents"], old_result["parents"], strict=True, ): assert newp == oldp for newc, oldc in zip( new_result["children"], old_result["children"], strict=True, ): assert newc == oldc def dump_images(new_result, old_result, decimals=10): tmpfd, old_image = tempfile.mkstemp(prefix="baseline_", suffix=".png") os.close(tmpfd) tmpfd, new_image = tempfile.mkstemp(prefix="thisPR_", suffix=".png") os.close(tmpfd) image_writer.write_projection(new_result, new_image) image_writer.write_projection(old_result, old_image) results = compare_images(old_image, new_image, 10 ** (-decimals)) if results is not None: tempfiles = [ line.strip() for line in results.split("\n") if line.endswith(".png") ] for fn in tempfiles: sys.stderr.write(f"\n[[ATTACHMENT|{fn}]]") sys.stderr.write("\n") def ensure_image_comparability(a, b): # pad nans to the right and the bottom of two images to make them comparable # via matplotlib if they do not have the same shape if a.shape == b.shape: return a, b assert a.shape[2:] == b.shape[2:] warnings.warn( f"Images have different shapes {a.shape} and {b.shape}. " "Padding nans to make them comparable.", stacklevel=2, ) smallest_containing_shape = ( max(a.shape[0], b.shape[0]), max(a.shape[1], b.shape[1]), *a.shape[2:], ) pa = np.full(smallest_containing_shape, np.nan) pa[: a.shape[0], : a.shape[1], ...] = a pb = np.full(smallest_containing_shape, np.nan) pb[: b.shape[0], : b.shape[1], ...] 
= b return pa, pb def compare_image_lists(new_result, old_result, decimals): fns = [] for _ in range(2): tmpfd, tmpname = tempfile.mkstemp(suffix=".png") os.close(tmpfd) fns.append(tmpname) num_images = len(old_result) assert num_images > 0 for i in range(num_images): expected = pickle.loads(zlib.decompress(old_result[i])) actual = pickle.loads(zlib.decompress(new_result[i])) expected_p, actual_p = ensure_image_comparability(expected, actual) mpimg.imsave(fns[0], expected_p) mpimg.imsave(fns[1], actual_p) results = compare_images(fns[0], fns[1], 10 ** (-decimals)) if results is not None: tempfiles = [ line.strip() for line in results.split("\n") if line.endswith(".png") ] for fn, img, padded in zip( tempfiles, (expected, actual), (expected_p, actual_p), strict=True, ): # padded images are convenient for comparison # but what we really want to store and upload # are the actual results if padded.shape != img.shape: mpimg.imsave(fn, img) if os.environ.get("JENKINS_HOME") is not None: for fn in tempfiles: sys.stderr.write(f"\n[[ATTACHMENT|{fn}]]") sys.stderr.write("\n") assert_equal(results, None, results) for fn in fns: os.remove(fn) class PlotWindowAttributeTest(AnswerTestingTest): _type_name = "PlotWindowAttribute" _attrs = ( "plot_type", "plot_field", "plot_axis", "attr_name", "attr_args", "callback_id", ) def __init__( self, ds_fn: str, plot_field: str, plot_axis: str, attr_name: str | None = None, attr_args: tuple | None = None, decimals: int | None = 12, plot_type: str | None = "SlicePlot", callback_id: str | None = "", callback_runners: tuple | None = None, ): super().__init__(ds_fn) self.plot_type = plot_type self.plot_field = plot_field self.plot_axis = plot_axis self.plot_kwargs = {} self.attr_name = attr_name self.attr_args = attr_args self.decimals = decimals # callback_id is so that we don't have to hash the actual callbacks # run, but instead we call them something self.callback_id = callback_id if callback_runners is None: callback_runners = () self.callback_runners = callback_runners def run(self): plot = self.create_plot( self.ds, self.plot_type, self.plot_field, self.plot_axis, self.plot_kwargs ) for r in self.callback_runners: r(self, plot) if self.attr_name and self.attr_args: attr = getattr(plot, self.attr_name) attr(*self.attr_args[0], **self.attr_args[1]) tmpfd, tmpname = tempfile.mkstemp(suffix=".png") os.close(tmpfd) plot.save(name=tmpname) image = mpimg.imread(tmpname) os.remove(tmpname) return [zlib.compress(image.dumps())] def compare(self, new_result, old_result): compare_image_lists(new_result, old_result, self.decimals) class PhasePlotAttributeTest(AnswerTestingTest): _type_name = "PhasePlotAttribute" _attrs = ("plot_type", "x_field", "y_field", "z_field", "attr_name", "attr_args") def __init__( self, ds_fn, x_field, y_field, z_field, attr_name, attr_args, decimals, plot_type="PhasePlot", ): super().__init__(ds_fn) self.data_source = self.ds.all_data() self.plot_type = plot_type self.x_field = x_field self.y_field = y_field self.z_field = z_field self.plot_kwargs = {} self.attr_name = attr_name self.attr_args = attr_args self.decimals = decimals def create_plot( self, data_source, x_field, y_field, z_field, plot_type, plot_kwargs=None ): # plot_type should be a string # plot_kwargs should be a dict if plot_type is None: raise RuntimeError("Must explicitly request a plot type") cls = getattr(profile_plotter, plot_type, None) if cls is None: cls = getattr(particle_plots, plot_type) plot = cls(*(data_source, x_field, y_field, z_field), **plot_kwargs) return plot 
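    # Illustrative sketch (hypothetical helper, not yt API): the nan-padding
    # done by ``ensure_image_comparability`` above generalizes to any pair of
    # float arrays of equal rank:
    #
    #   import numpy as np
    #
    #   def pad_to_common(a, b):
    #       shape = tuple(max(x, y) for x, y in zip(a.shape, b.shape))
    #       pa = np.full(shape, np.nan)
    #       pa[tuple(map(slice, a.shape))] = a
    #       pb = np.full(shape, np.nan)
    #       pb[tuple(map(slice, b.shape))] = b
    #       return pa, pb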
def run(self): plot = self.create_plot( self.data_source, self.x_field, self.y_field, self.z_field, self.plot_type, self.plot_kwargs, ) attr = getattr(plot, self.attr_name) attr(*self.attr_args[0], **self.attr_args[1]) tmpfd, tmpname = tempfile.mkstemp(suffix=".png") os.close(tmpfd) plot.save(name=tmpname) image = mpimg.imread(tmpname) os.remove(tmpname) return [zlib.compress(image.dumps())] def compare(self, new_result, old_result): compare_image_lists(new_result, old_result, self.decimals) class GenericArrayTest(AnswerTestingTest): _type_name = "GenericArray" _attrs = ("array_func_name", "args", "kwargs") def __init__(self, ds_fn, array_func, args=None, kwargs=None, decimals=None): super().__init__(ds_fn) self.array_func = array_func self.array_func_name = array_func.__name__ self.args = args self.kwargs = kwargs self.decimals = decimals def run(self): if self.args is None: args = [] else: args = self.args if self.kwargs is None: kwargs = {} else: kwargs = self.kwargs return self.array_func(*args, **kwargs) def compare(self, new_result, old_result): if not isinstance(new_result, dict): new_result = {"answer": new_result} old_result = {"answer": old_result} assert_equal( len(new_result), len(old_result), err_msg="Number of outputs not equal.", verbose=True, ) for k in new_result: if hasattr(new_result[k], "d"): new_result[k] = new_result[k].d if hasattr(old_result[k], "d"): old_result[k] = old_result[k].d if self.decimals is None: assert_almost_equal(new_result[k], old_result[k]) else: assert_allclose_units( new_result[k], old_result[k], 10 ** (-self.decimals) ) class AxialPixelizationTest(AnswerTestingTest): # This test is typically used once per geometry or coordinates type. # Feed it a dataset, and it checks that the results of basic pixelization # don't change. 
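    # Illustrative aside: like every AnswerTestingTest subclass in this file,
    # this class only supplies the pieces that the base class' __call__
    # stitches together, roughly:
    #
    #   class MyAnswerTest(AnswerTestingTest):
    #       _type_name = "MyAnswer"  # prefix used to build self.description
    #       _attrs = ()              # attribute names appended to it
    #
    #       def run(self):
    #           ...                  # compute and return the new result
    #
    #       def compare(self, new_result, old_result):
    #           ...                  # raise AssertionError on mismatch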
_type_name = "AxialPixelization" _attrs = ("geometry",) def __init__(self, ds_fn, decimals=None): super().__init__(ds_fn) self.decimals = decimals self.geometry = self.ds.coordinates.name def run(self): rv = {} ds = self.ds for i, axis in enumerate(ds.coordinates.axis_order): (bounds, center, display_center) = pw.get_window_parameters( axis, ds.domain_center, None, ds ) slc = ds.slice(axis, center[i]) xax = ds.coordinates.axis_name[ds.coordinates.x_axis[axis]] yax = ds.coordinates.axis_name[ds.coordinates.y_axis[axis]] pix_x = ds.coordinates.pixelize(axis, slc, ("gas", xax), bounds, (512, 512)) pix_y = ds.coordinates.pixelize(axis, slc, ("gas", yax), bounds, (512, 512)) # Wipe out invalid values (fillers) pix_x[~np.isfinite(pix_x)] = 0.0 pix_y[~np.isfinite(pix_y)] = 0.0 rv[f"{axis}_x"] = pix_x rv[f"{axis}_y"] = pix_y return rv def compare(self, new_result, old_result): assert_equal( len(new_result), len(old_result), err_msg="Number of outputs not equal.", verbose=True, ) for k in new_result: if hasattr(new_result[k], "d"): new_result[k] = new_result[k].d if hasattr(old_result[k], "d"): old_result[k] = old_result[k].d if self.decimals is None: assert_almost_equal(new_result[k], old_result[k]) else: assert_allclose_units( new_result[k], old_result[k], 10 ** (-self.decimals) ) def requires_answer_testing(): return skipif( AnswerTestingTest.result_storage is None, reason="answer testing storage is not properly setup", ) def requires_ds(ds_fn, big_data=False, file_check=False): condition = (big_data and not run_big_data) or not can_run_ds(ds_fn, file_check) return skipif(condition, reason=f"cannot load dataset {ds_fn}") def small_patch_amr(ds_fn, fields, input_center="max", input_weight=("gas", "density")): if not can_run_ds(ds_fn): return dso = [None, ("sphere", (input_center, (0.1, "unitary")))] yield GridHierarchyTest(ds_fn) yield ParentageRelationshipsTest(ds_fn) for field in fields: yield GridValuesTest(ds_fn, field) for dobj_name in dso: for axis in [0, 1, 2]: for weight_field in [None, input_weight]: yield ProjectionValuesTest( ds_fn, axis, field, weight_field, dobj_name ) yield FieldValuesTest(ds_fn, field, dobj_name) def big_patch_amr(ds_fn, fields, input_center="max", input_weight=("gas", "density")): if not can_run_ds(ds_fn): return dso = [None, ("sphere", (input_center, (0.1, "unitary")))] yield GridHierarchyTest(ds_fn) yield ParentageRelationshipsTest(ds_fn) for field in fields: yield GridValuesTest(ds_fn, field) for axis in [0, 1, 2]: for dobj_name in dso: for weight_field in [None, input_weight]: yield PixelizedProjectionValuesTest( ds_fn, axis, field, weight_field, dobj_name ) def _particle_answers( ds, ds_str_repr, ds_nparticles, fields, proj_test_class, center="c" ): if not can_run_ds(ds): return assert_equal(str(ds), ds_str_repr) dso = [None, ("sphere", (center, (0.1, "unitary")))] dd = ds.all_data() # this needs to explicitly be "all" assert_equal(dd["all", "particle_position"].shape, (ds_nparticles, 3)) tot = sum( dd[ptype, "particle_position"].shape[0] for ptype in ds.particle_types_raw ) assert_equal(tot, ds_nparticles) for dobj_name in dso: for field, weight_field in fields.items(): particle_type = field[0] in ds.particle_types for axis in [0, 1, 2]: if not particle_type: yield proj_test_class(ds, axis, field, weight_field, dobj_name) yield FieldValuesTest(ds, field, dobj_name, particle_type=particle_type) def nbody_answer(ds, ds_str_repr, ds_nparticles, fields, center="c"): return _particle_answers( ds, ds_str_repr, ds_nparticles, fields, 
PixelizedParticleProjectionValuesTest, center=center, ) def sph_answer(ds, ds_str_repr, ds_nparticles, fields, center="c"): return _particle_answers( ds, ds_str_repr, ds_nparticles, fields, PixelizedProjectionValuesTest, center=center, ) def create_obj(ds, obj_type): # obj_type should be tuple of # ( obj_name, ( args ) ) if obj_type is None: return ds.all_data() cls = getattr(ds, obj_type[0]) obj = cls(*obj_type[1]) return obj ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/answer_testing/level_sets_tests.py0000644000175100001770000000236114714401662023233 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_equal from yt.utilities.answer_testing.framework import AnswerTestingTest class ExtractConnectedSetsTest(AnswerTestingTest): _type_name = "ExtractConnectedSets" _attrs = () def __init__(self, ds_fn, data_source, field, num_levels, min_val, max_val): super().__init__(ds_fn) self.data_source = data_source self.field = field self.num_levels = num_levels self.min_val = min_val self.max_val = max_val def run(self): n, all_sets = self.data_source.extract_connected_sets( self.field, self.num_levels, self.min_val, self.max_val ) result = [] for level in all_sets: for set_id in all_sets[level]: result.append( [ all_sets[level][set_id]["cell_mass"].size, all_sets[level][set_id]["cell_mass"].sum(), ] ) result = np.array(result) return result def compare(self, new_result, old_result): err_msg = f"Size and/or mass of connected sets do not agree for {self.ds_fn}." assert_equal(new_result, old_result, err_msg=err_msg, verbose=True) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/answer_testing/testing_utilities.py0000644000175100001770000003525714714401662023426 0ustar00runnerdockerimport functools import hashlib import inspect import os import numpy as np import pytest import yaml import yt.visualization.particle_plots as particle_plots import yt.visualization.plot_window as pw import yt.visualization.profile_plotter as profile_plotter from yt.config import ytcfg from yt.data_objects.selection_objects.region import YTRegion from yt.data_objects.static_output import Dataset from yt.loaders import load, load_simulation from yt.utilities.on_demand_imports import _h5py as h5py from yt.visualization.volume_rendering.scene import Scene def _streamline_for_io(params): r""" Put test results in a more io-friendly format. Many of yt's tests use objects such as tuples as test parameters (fields, for instance), but when these objects are written to a yaml file, yaml includes python specific anchors that make the file harder to read and less portable. The goal of this function is to convert these objects to strings (using __repr__() has it's own issues) in order to solve this problem. Parameters ---------- params : dict The dictionary of test parameters in the form {param_name : param_value}. Returns ------- streamlined_params : dict The dictionary of parsed and converted {param_name : param_value} pairs. 
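    Examples
    --------
    Illustrative only: a field tuple used as a key is flattened to a plain
    string by ``_iterable_to_string``:

    >>> _streamline_for_io({("gas", "density"): None})
    {'tuple_gas_density': None}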
""" streamlined_params = {} for key, value in params.items(): # Check for user-defined functions if inspect.isfunction(key): key = key.__name__ if inspect.isfunction(value): value = value.__name__ # The key can be nested iterables, e.g., # d = [None, ('sphere', (center, (0.1, 'unitary')))] so we need # to use recursion if not isinstance(key, str) and hasattr(key, "__iter__"): key = _iterable_to_string(key) # The value can also be nested iterables if not isinstance(value, str) and hasattr(value, "__iter__"): value = _iterable_to_string(value) # Scene objects need special treatment to make them more IO friendly if isinstance(value, Scene): value = "Scene" elif isinstance(value, YTRegion): value = "Region" streamlined_params[key] = value return streamlined_params def _iterable_to_string(iterable): r""" An extension of streamline_for_io that does the work of making an iterable more io-friendly. Parameters ---------- iterable : python iterable The object to be parsed and converted. Returns ------- result : str The io-friendly version of the passed iterable. """ result = iterable.__class__.__name__ for elem in iterable: # Check for user-defined functions if inspect.isfunction(elem): result += "_" + elem.__name__ # Non-string iterables (e.g., lists, tuples, etc.) elif not isinstance(elem, str) and hasattr(elem, "__iter__"): result += "_" + _iterable_to_string(elem) # Non-string non-iterables (ints, floats, etc.) elif not isinstance(elem, str) and not hasattr(elem, "__iter__"): result += "_" + str(elem) # Strings elif isinstance(elem, str): result += "_" + elem return result def _hash_results(results): r""" Driver function for hashing the test result. Parameters ---------- results : dict Dictionary of {test_name : test_result} pairs. Returns ------- results : dict Same as the passed results, but the test_results are now hex digests of the hashed test_result. """ # Here, results should be comprised of only the tests, not the test # parameters # Use a new dictionary so as to not overwrite the non-hashed test # results in case those are to be saved hashed_results = {} for test_name, test_value in results.items(): hashed_results[test_name] = generate_hash(test_value) return hashed_results def _hash_dict(data): r""" Specifically handles hashing a dictionary object. Parameters ---------- data : dict The dictionary to be hashed. Returns ------- hd.hexdigest : str The hex digest of the hashed dictionary. """ hd = None for key, value in data.items(): # Some keys are tuples, not strings if not isinstance(key, str): key = key.__repr__() # Test suites can return values that are dictionaries of other tests if isinstance(value, dict): hashed_data = _hash_dict(value) else: hashed_data = bytearray(key.encode("utf8")) + bytearray(value) # If we're returning from a recursive call (and therefore hashed_data # is a hex digest), we need to encode the string before it can be # hashed if isinstance(hashed_data, str): hashed_data = hashed_data.encode("utf8") if hd is None: hd = hashlib.md5(hashed_data) else: hd.update(hashed_data) return hd.hexdigest() def generate_hash(data): r""" Actually performs the hash operation. Parameters ---------- data : python object Data to be hashed. Returns ------- hd : str Hex digest of the hashed data. """ # Sometimes md5 complains that the data is not contiguous, so we # make it so here if isinstance(data, np.ndarray): data = np.ascontiguousarray(data) elif isinstance(data, str): data = data.encode("utf-8") # Try to hash. 
Some tests return hashable types (like ndarrays) and # others don't (such as dictionaries) try: hd = hashlib.md5(data).hexdigest() # Handle those tests that return non-hashable types. This is done # here instead of in the tests themselves to try and reduce boilerplate # and provide a central location where all of this is done in case it needs # to be changed except TypeError: if isinstance(data, dict): hd = _hash_dict(data) elif data is None: hd = hashlib.md5(bytes(str(-1).encode("utf-8"))).hexdigest() else: raise return hd def _save_result(data, output_file): r""" Saves the test results to the desired answer file. Parameters ---------- data : dict Test results to be saved. output_file : str Name of file to save results to. """ with open(output_file, "a") as f: yaml.dump(data, f) def _save_raw_arrays(arrays, answer_file, func_name): r""" Saves the raw arrays produced from answer tests to a file. The structure of `answer_file` is: each test function (e.g., test_toro1d[0-None-None-0]) forms a group. Within each group is a hdf5 dataset named after the test (e.g., field_values). The value stored in each dataset is the raw array corresponding to that test and function. Parameters ---------- arrays : dict Keys are the test name (e.g. field_values) and the values are the actual answer arrays produced by the test. answer_file : str The name of the file to save the answers to, in hdf5 format. func_name : str The name of the function (possibly augmented by pytest with test parameters) that called the test functions (e.g, test_toro1d). """ with h5py.File(answer_file, "a") as f: grp = f.create_group(func_name) for test_name, test_data in arrays.items(): # Some answer tests (e.g., grid_values, projection_values) # return a dictionary, which cannot be handled by h5py if isinstance(test_data, dict): sub_grp = grp.create_group(test_name) _parse_raw_answer_dict(test_data, sub_grp) else: # Some tests return None, which hdf5 can't handle, and there is # no proxy, so we have to make one ourselves. Using -1 if test_data is None: test_data = -1 grp.create_dataset(test_name, data=test_data) def _parse_raw_answer_dict(d, h5grp): for k, v in d.items(): if isinstance(v, dict): h5_sub_grp = h5grp.create_group(k) _parse_raw_answer_dict(v, h5_sub_grp) else: k = str(k) h5grp.create_dataset(k, data=v) def _compare_raw_arrays(arrays, answer_file, func_name): r""" Reads in previously saved raw array data and compares the current results with the old ones. The structure of `answer_file` is: each test function (e.g., test_toro1d[0-None-None-0]) forms a group. Within each group is a hdf5 dataset named after the test (e.g., field_values). The value stored in each dataset is the raw array corresponding to that test and function. Parameters ---------- arrays : dict Keys are the test name (e.g. field_values) and the values are the actual answer arrays produced by the test. answer_file : str The name of the file to load the answers from, in hdf5 format. func_name : str The name of the function (possibly augmented by pytest with test parameters) that called the test functions (e.g, test_toro1d). """ with h5py.File(answer_file, "r") as f: for test_name, new_answer in arrays.items(): np.testing.assert_array_equal(f[func_name][test_name][:], new_answer) def can_run_ds(ds_fn, file_check=False): r""" Validates whether or not a given input can be loaded and used as a Dataset object. 
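Examples -------- A sketch, assuming the ``IsolatedGalaxy`` sample dataset has been downloaded into the configured ``test_data_dir`` (any loadable dataset behaves the same way): >>> can_run_ds("IsolatedGalaxy/galaxy0030/galaxy0030") True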
""" if isinstance(ds_fn, Dataset): return True path = ytcfg.get("yt", "test_data_dir") if not os.path.isdir(path): return False if file_check: return os.path.isfile(os.path.join(path, ds_fn)) try: load(ds_fn) return True except FileNotFoundError: return False def can_run_sim(sim_fn, sim_type, file_check=False): r""" Validates whether or not a given input can be used as a simulation time-series object. """ path = ytcfg.get("yt", "test_data_dir") if not os.path.isdir(path): return False if file_check: return os.path.isfile(os.path.join(path, sim_fn)) try: load_simulation(sim_fn, sim_type) except FileNotFoundError: return False return True def data_dir_load(ds_fn, cls=None, args=None, kwargs=None): r""" Loads a sample dataset from the designated test_data_dir for use in testing. """ args = args or () kwargs = kwargs or {} path = ytcfg.get("yt", "test_data_dir") # Some frontends require their field_lists during test parameterization. # If the data isn't found, the parameterizing functions return None, since # pytest.skip cannot be called outside of a test or fixture. if ds_fn is None: raise FileNotFoundError if not os.path.isdir(path): raise FileNotFoundError if isinstance(ds_fn, Dataset): return ds_fn if cls is None: ds = load(ds_fn, *args, **kwargs) else: ds = cls(os.path.join(path, ds_fn), *args, **kwargs) ds.index return ds def requires_ds(ds_fn, file_check=False): r""" Meta-wrapper for specifying required data for a test and checking if said data exists. """ def ffalse(func): @functools.wraps(func) def skip(*args, **kwargs): msg = f"{ds_fn} not found, skipping {func.__name__}." pytest.skip(msg) return skip def ftrue(func): @functools.wraps(func) def true_wrapper(*args, **kwargs): return func(*args, **kwargs) return true_wrapper if not can_run_ds(ds_fn, file_check): return ffalse else: return ftrue def requires_sim(sim_fn, sim_type, file_check=False): r""" Meta-wrapper for specifying a required simulation for a test and checking if said simulation exists. """ def ffalse(func): @functools.wraps(func) def skip(*args, **kwargs): msg = f"{sim_fn} not found, skipping {func.__name__}." pytest.skip(msg) return skip def ftrue(func): @functools.wraps(func) def true_wrapper(*args, **kwargs): return func return true_wrapper if not can_run_sim(sim_fn, sim_type, file_check): return ffalse else: return ftrue def _create_plot_window_attribute_plot(ds, plot_type, field, axis, pkwargs=None): r""" Convenience function used in plot_window_attribute_test. Parameters ---------- ds : Dataset The Dataset object from which the plotting data is taken. plot_type : string Type of plot to make (e.g., SlicePlot). field : yt field The field (e.g, density) to plot. axis : int The plot axis to plot or project along. pkwargs : dict Any keywords to be passed when creating the plot. """ if plot_type is None: raise RuntimeError("Must explicitly request a plot type") cls = getattr(pw, plot_type, None) if cls is None: cls = getattr(particle_plots, plot_type) plot = cls(ds, axis, field, **pkwargs) return plot def _create_phase_plot_attribute_plot( data_source, x_field, y_field, z_field, plot_type, plot_kwargs=None ): r""" Convenience function used in phase_plot_attribute_test. Parameters ---------- data_source : Dataset object The Dataset object from which the plotting data is taken. x_field : yt field Field to plot on x-axis. y_field : yt field Field to plot on y-axis. z_field : yt field Field to plot on z-axis. plot_type : string Type of plot to make (e.g., SlicePlot). 
plot_kwargs : dict Any keywords to be passed when creating the plot. """ if plot_type is None: raise RuntimeError("Must explicitly request a plot type") if plot_kwargs is None: plot_kwargs = {} cls = getattr(profile_plotter, plot_type, None) if cls is None: cls = getattr(particle_plots, plot_type) plot = cls(*(data_source, x_field, y_field, z_field), **plot_kwargs) return plot def get_parameterization(fname): """ Returns a dataset's field list to make test parameterization easier. Some tests (such as those that use the toro1d dataset in enzo) check every field in a dataset. In order to parametrize the tests without having to hardcode a list of every field, this function is used. Additionally, if the dataset cannot be found, this function enables pytest to mark the test as failed without the whole test run crashing, since the parameterization happens at import time. """ try: ds = data_dir_load(fname) return ds.field_list except FileNotFoundError: return [ None, ] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/api.py0000644000175100001770000000004014714401662015351 0ustar00runnerdocker""" API for yt.utilities """ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/chemical_formulas.py0000644000175100001770000000333214714401662020264 0ustar00runnerdockerimport re from .periodic_table import periodic_table from .physical_ratios import _primordial_mass_fraction class ChemicalFormula: def __init__(self, formula_string): # See YTEP-0003 for information on the format. self.formula_string = formula_string self.elements = [] if "_" in self.formula_string: molecule, ionization = self.formula_string.split("_") if ionization[0] == "p": charge = int(ionization[1:]) elif ionization[0] == "m": charge = -int(ionization[1:]) else: raise NotImplementedError elif self.formula_string.startswith("El"): molecule = self.formula_string charge = -1 else: molecule = self.formula_string charge = 0 self.charge = charge for element, count in re.findall(r"([A-Z][a-z]*)(\d*)", molecule): if count == "": count = 1 self.elements.append((periodic_table[element], int(count))) self.weight = sum(n * e.weight for e, n in self.elements) def __repr__(self): return self.formula_string def compute_mu(ion_state): if ion_state == "ionized" or ion_state is None: # full ionization n_H = 2.0 # fully ionized hydrogen gives two particles n_He = 3.0 # fully ionized helium gives three particles elif ion_state == "neutral": n_H = 1.0 # neutral hydrogen gives one particle n_He = 1.0 # neutral helium gives one particle else: raise ValueError(f"Unknown ion_state: {ion_state}") muinv = n_H * _primordial_mass_fraction["H"] / ChemicalFormula("H").weight muinv += n_He * _primordial_mass_fraction["He"] / ChemicalFormula("He").weight return 1.0 / muinv ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/command_line.py0000644000175100001770000013041214714401662017234 0ustar00runnerdockerimport argparse import base64 import json import os import pprint import sys import textwrap from typing import Any import numpy as np from more_itertools import always_iterable from yt.config import ytcfg from yt.funcs import ( download_file, enable_plugins, ensure_dir, ensure_dir_exists, get_git_version, mylog, update_git, ) from yt.loaders import load from yt.utilities.exceptions import YTFieldNotParseable, YTUnidentifiedDataType from yt.utilities.metadata import get_metadata from yt.visualization.plot_window import ProjectionPlot, SlicePlot # isort: off # This needs
to be set before importing startup_tasks ytcfg["yt", "internals", "command_line"] = True # isort: skip from yt.startup_tasks import parser, subparsers # isort: skip # noqa: E402 # isort: on # loading field plugins for backward compatibility, since this module # used to do "from yt.mods import *" try: enable_plugins() except FileNotFoundError: pass _default_colormap = ytcfg.get("yt", "default_colormap") def _fix_ds(arg, *args, **kwargs): if os.path.isdir(f"{arg}") and os.path.exists(f"{arg}/{arg}"): ds = load(f"{arg}/{arg}", *args, **kwargs) elif os.path.isdir(f"{arg}.dir") and os.path.exists(f"{arg}.dir/{arg}"): ds = load(f"{arg}.dir/{arg}", *args, **kwargs) elif arg.endswith(".index"): ds = load(arg[:-10], *args, **kwargs) else: ds = load(arg, *args, **kwargs) return ds def _add_arg(sc, arg): if isinstance(arg, str): arg = _common_options[arg].copy() elif isinstance(arg, tuple): exclusive, *args = arg if exclusive: grp = sc.add_mutually_exclusive_group() else: grp = sc.add_argument_group() for arg in args: _add_arg(grp, arg) return argc = dict(arg.items()) argnames = [] if "short" in argc: argnames.append(argc.pop("short")) if "longname" in argc: argnames.append(argc.pop("longname")) sc.add_argument(*argnames, **argc) def _print_failed_source_update(reinstall=False): print() print("The yt package is not installed from a git repository,") print("so you must update this installation manually.") if "Continuum Analytics" in sys.version or "Anaconda" in sys.version: # see http://stackoverflow.com/a/21318941/1382869 for why we need # to check both Continuum *and* Anaconda print() print("Since it looks like you are using a python installation") print("that is managed by conda, you may want to do:") print() print(" $ conda update yt") print() print("to update your yt installation.") if reinstall: print() print("To update all of your packages, you can do:") print() print(" $ conda update --all") else: print("If you manage your python dependencies with pip, you may") print("want to do:") print() print(" $ python -m pip install -U yt") print() print("to update your yt installation.") def _print_installation_information(path): import yt print() print("yt module located at:") print(f" {path}") if "YT_DEST" in os.environ: spath = os.path.join(os.environ["YT_DEST"], "src", "yt-supplemental") if os.path.isdir(spath): print("The supplemental repositories are located at:") print(f" {spath}") print() print("The current version of yt is:") print() print("---") print(f"Version = {yt.__version__}") vstring = get_git_version(path) if vstring is not None: print(f"Changeset = {vstring.strip()}") print("---") return vstring class FileStreamer: final_size = None next_sent = 0 chunksize = 100 * 1024 def __init__(self, f, final_size=None): location = f.tell() f.seek(0, os.SEEK_END) self.final_size = f.tell() - location f.seek(location) self.f = f def __iter__(self): from tqdm import tqdm with tqdm( total=self.final_size, desc="Uploading file", unit="B", unit_scale=True ) as pbar: while self.f.tell() < self.final_size: yield self.f.read(self.chunksize) pbar.update(self.chunksize) _subparsers = {None: subparsers} _subparsers_description = { "config": "Get and set configuration values for yt", } class YTCommandSubtype(type): def __init__(cls, name, b, d): type.__init__(cls, name, b, d) if cls.name is None: return if cls.subparser not in _subparsers: try: description = _subparsers_description[cls.subparser] except KeyError: description = cls.subparser parent_parser = argparse.ArgumentParser(add_help=False) p = 
subparsers.add_parser( cls.subparser, help=description, description=description, parents=[parent_parser], ) _subparsers[cls.subparser] = p.add_subparsers( title=cls.subparser, dest=cls.subparser ) sp = _subparsers[cls.subparser] for name in always_iterable(cls.name): sc = sp.add_parser(name, description=cls.description, help=cls.description) sc.set_defaults(func=cls.run) for arg in cls.args: _add_arg(sc, arg) class YTCommand(metaclass=YTCommandSubtype): args: tuple[str | dict[str, Any], ...] = () name: str | list[str] | None = None description: str = "" aliases = () ndatasets: int = 1 subparser: str | None = None @classmethod def run(cls, args): self = cls() # Check for some things we know; for instance, comma separated # field names should be parsed as tuples. if getattr(args, "field", None) is not None and "," in args.field: if args.field.count(",") > 1: raise YTFieldNotParseable(args.field) args.field = tuple(_.strip() for _ in args.field.split(",")) if getattr(args, "weight", None) is not None and "," in args.weight: if args.weight.count(",") > 1: raise YTFieldNotParseable(args.weight) args.weight = tuple(_.strip() for _ in args.weight.split(",")) # Some commands need to be run repeatedly on datasets # In fact, this is the rule and the opposite is the exception # BUT, we only want to parse the arguments once. if cls.ndatasets > 1: self(args) else: ds_args = getattr(args, "ds", []) if len(ds_args) > 1: datasets = args.ds for ds in datasets: args.ds = ds self(args) elif len(ds_args) == 0: datasets = [] self(args) else: args.ds = getattr(args, "ds", [None])[0] self(args) class GetParameterFiles(argparse.Action): def __call__(self, parser, namespace, values, option_string=None): if len(values) == 1: datasets = values elif len(values) == 2 and namespace.basename is not None: datasets = [ "%s%04i" % (namespace.basename, r) for r in range(int(values[0]), int(values[1]), namespace.skip) ] else: datasets = values namespace.ds = [_fix_ds(ds) for ds in datasets] _common_options = { "all": { "longname": "--all", "dest": "reinstall", "default": False, "action": "store_true", "help": ( "Reinstall the full yt stack in the current location. " "This option has been deprecated and will not have any " "effect." 
), }, "ds": { "short": "ds", "action": GetParameterFiles, "nargs": "+", "help": "datasets to run on", }, "ods": { "action": GetParameterFiles, "dest": "ds", "nargs": "*", "help": "(Optional) datasets to run on", }, "axis": { "short": "-a", "longname": "--axis", "action": "store", "type": int, "dest": "axis", "default": 4, "help": "Axis (4 for all three)", }, "log": { "short": "-l", "longname": "--log", "action": "store_true", "dest": "takelog", "default": True, "help": "Use logarithmic scale for image", }, "linear": { "longname": "--linear", "action": "store_false", "dest": "takelog", "help": "Use linear scale for image", }, "text": { "short": "-t", "longname": "--text", "action": "store", "type": str, "dest": "text", "default": None, "help": "Textual annotation", }, "field": { "short": "-f", "longname": "--field", "action": "store", "type": str, "dest": "field", "default": "density", "help": ("Field to color by, use a comma to separate field tuple values"), }, "weight": { "short": "-g", "longname": "--weight", "action": "store", "type": str, "dest": "weight", "default": None, "help": ( "Field to weight projections with, " "use a comma to separate field tuple values" ), }, "cmap": { "longname": "--colormap", "action": "store", "type": str, "dest": "cmap", "default": _default_colormap, "help": "Colormap name", }, "zlim": { "short": "-z", "longname": "--zlim", "action": "store", "type": float, "dest": "zlim", "default": None, "nargs": 2, "help": "Color limits (min, max)", }, "dex": { "longname": "--dex", "action": "store", "type": float, "dest": "dex", "default": None, "nargs": 1, "help": "Number of dex above min to display", }, "width": { "short": "-w", "longname": "--width", "action": "store", "type": float, "dest": "width", "default": None, "help": "Width in specified units", }, "unit": { "short": "-u", "longname": "--unit", "action": "store", "type": str, "dest": "unit", "default": "1", "help": "Desired axes units", }, "center": { "short": "-c", "longname": "--center", "action": "store", "type": float, "dest": "center", "default": None, "nargs": 3, "help": "Center, space separated (-1 -1 -1 for max)", }, "max": { "short": "-m", "longname": "--max", "action": "store_true", "dest": "max", "default": False, "help": "Center the plot on the density maximum", }, "bn": { "short": "-b", "longname": "--basename", "action": "store", "type": str, "dest": "basename", "default": None, "help": "Basename of datasets", }, "output": { "short": "-o", "longname": "--output", "action": "store", "type": str, "dest": "output", "default": "frames/", "help": "Folder in which to place output images", }, "outputfn": { "short": "-o", "longname": "--output", "action": "store", "type": str, "dest": "output", "default": None, "help": "File in which to place output", }, "skip": { "short": "-s", "longname": "--skip", "action": "store", "type": int, "dest": "skip", "default": 1, "help": "Skip factor for outputs", }, "proj": { "short": "-p", "longname": "--projection", "action": "store_true", "dest": "projection", "default": False, "help": "Use a projection rather than a slice", }, "maxw": { "longname": "--max-width", "action": "store", "type": float, "dest": "max_width", "default": 1.0, "help": "Maximum width in code units", }, "minw": { "longname": "--min-width", "action": "store", "type": float, "dest": "min_width", "default": 50, "help": "Minimum width in units of smallest dx (default: 50)", }, "nframes": { "short": "-n", "longname": "--nframes", "action": "store", "type": int, "dest": "nframes", "default": 100, 
"help": "Number of frames to generate", }, "slabw": { "longname": "--slab-width", "action": "store", "type": float, "dest": "slab_width", "default": 1.0, "help": "Slab width in specified units", }, "slabu": { "short": "-g", "longname": "--slab-unit", "action": "store", "type": str, "dest": "slab_unit", "default": "1", "help": "Desired units for the slab", }, "ptype": { "longname": "--particle-type", "action": "store", "type": int, "dest": "ptype", "default": 2, "help": "Particle type to select", }, "agecut": { "longname": "--age-cut", "action": "store", "type": float, "dest": "age_filter", "default": None, "nargs": 2, "help": "Bounds for the field to select", }, "uboxes": { "longname": "--unit-boxes", "action": "store_true", "dest": "unit_boxes", "help": "Display heldsul unit boxes", }, "thresh": { "longname": "--threshold", "action": "store", "type": float, "dest": "threshold", "default": None, "help": "Density threshold", }, "dm_only": { "longname": "--all-particles", "action": "store_false", "dest": "dm_only", "default": True, "help": "Use all particles", }, "grids": { "longname": "--show-grids", "action": "store_true", "dest": "grids", "default": False, "help": "Show the grid boundaries", }, "time": { "longname": "--time", "action": "store_true", "dest": "time", "default": False, "help": "Print time in years on image", }, "contours": { "longname": "--contours", "action": "store", "type": int, "dest": "contours", "default": None, "help": "Number of Contours for Rendering", }, "contour_width": { "longname": "--contour_width", "action": "store", "type": float, "dest": "contour_width", "default": None, "help": "Width of gaussians used for rendering.", }, "enhance": { "longname": "--enhance", "action": "store_true", "dest": "enhance", "default": False, "help": "Enhance!", }, "valrange": { "short": "-r", "longname": "--range", "action": "store", "type": float, "dest": "valrange", "default": None, "nargs": 2, "help": "Range, space separated", }, "up": { "longname": "--up", "action": "store", "type": float, "dest": "up", "default": None, "nargs": 3, "help": "Up, space separated", }, "viewpoint": { "longname": "--viewpoint", "action": "store", "type": float, "dest": "viewpoint", "default": [1.0, 1.0, 1.0], "nargs": 3, "help": "Viewpoint, space separated", }, "pixels": { "longname": "--pixels", "action": "store", "type": int, "dest": "pixels", "default": None, "help": "Number of Pixels for Rendering", }, "halos": { "longname": "--halos", "action": "store", "type": str, "dest": "halos", "default": "multiple", "help": "Run halo profiler on a 'single' halo or 'multiple' halos.", }, "halo_radius": { "longname": "--halo_radius", "action": "store", "type": float, "dest": "halo_radius", "default": 0.1, "help": "Constant radius for profiling halos if using hop output files with no " + "radius entry. Default: 0.1.", }, "halo_radius_units": { "longname": "--halo_radius_units", "action": "store", "type": str, "dest": "halo_radius_units", "default": "1", "help": "Units for radius used with --halo_radius flag. " + "Default: '1' (code units).", }, "halo_hop_style": { "longname": "--halo_hop_style", "action": "store", "type": str, "dest": "halo_hop_style", "default": "new", "help": "Style of hop output file. 
" + "'new' for yt_hop files and 'old' for enzo_hop files.", }, "halo_dataset": { "longname": "--halo_dataset", "action": "store", "type": str, "dest": "halo_dataset", "default": None, "help": "HaloProfiler dataset.", }, "make_profiles": { "longname": "--make_profiles", "action": "store_true", "default": False, "help": "Make profiles with halo profiler.", }, "make_projections": { "longname": "--make_projections", "action": "store_true", "default": False, "help": "Make projections with halo profiler.", }, } class YTInstInfoCmd(YTCommand): name = ["instinfo", "version"] args = ( { "short": "-u", "longname": "--update-source", "action": "store_true", "default": False, "help": "Update the yt installation, if able", }, { "short": "-o", "longname": "--output-version", "action": "store", "default": None, "dest": "outputfile", "help": "File into which the current revision number will be stored", }, ) description = """ Get some information about the yt installation """ def __call__(self, opts): import importlib.resources as importlib_resources path = os.path.dirname(importlib_resources.files("yt")) vstring = _print_installation_information(path) if vstring is not None: print("This installation CAN be automatically updated.") if opts.update_source: update_git(path) elif opts.update_source: _print_failed_source_update() if vstring is not None and opts.outputfile is not None: open(opts.outputfile, "w").write(vstring) class YTLoadCmd(YTCommand): name = "load" description = """ Load a single dataset into an IPython instance """ args = ("ds",) def __call__(self, args): if args.ds is None: print("Could not load file.") sys.exit() import IPython import yt local_ns = {} local_ns["ds"] = args.ds local_ns["pf"] = args.ds local_ns["yt"] = yt try: from traitlets.config.loader import Config except ImportError: from IPython.config.loader import Config cfg = Config() # prepend sys.path with current working directory sys.path.insert(0, "") IPython.embed(config=cfg, user_ns=local_ns) class YTMapserverCmd(YTCommand): args = ( "proj", "field", "weight", "linear", "center", "width", "cmap", { "short": "-a", "longname": "--axis", "action": "store", "type": int, "dest": "axis", "default": 0, "help": "Axis", }, { "short": "-o", "longname": "--host", "action": "store", "type": str, "dest": "host", "default": None, "help": "IP Address to bind on", }, {"short": "ds", "nargs": 1, "type": str, "help": "The dataset to load."}, ) name = "mapserver" description = """ Serve a plot in a GMaps-style interface """ def __call__(self, args): from yt.frontends.ramses.data_structures import RAMSESDataset from yt.visualization.mapserver.pannable_map import PannableMapServer # For RAMSES datasets, use the bbox feature to make the dataset load faster if RAMSESDataset._is_valid(args.ds) and args.center and args.width: kwa = { "bbox": [ [c - args.width / 2 for c in args.center], [c + args.width / 2 for c in args.center], ] } else: kwa = {} ds = _fix_ds(args.ds, **kwa) if args.center and args.width: center = args.center width = args.width ad = ds.box( left_edge=[c - args.width / 2 for c in args.center], right_edge=[c + args.width / 2 for c in args.center], ) else: center = [0.5] * 3 width = 1.0 ad = ds.all_data() if args.axis >= 4: print("Doesn't work with multiple axes!") return if args.projection: p = ProjectionPlot( ds, args.axis, args.field, weight_field=args.weight, data_source=ad, center=center, width=width, ) else: p = SlicePlot( ds, args.axis, args.field, data_source=ad, center=center, width=width ) p.set_log("all", args.takelog) 
p.set_cmap("all", args.cmap) PannableMapServer(p.data_source, args.field, args.takelog, args.cmap) try: import bottle except ImportError as e: raise ImportError( "The mapserver functionality requires the bottle " "package to be installed. Please install using `pip " "install bottle`." ) from e bottle.debug(True) if args.host is not None: colonpl = args.host.find(":") if colonpl >= 0: port = int(args.host.split(":")[-1]) args.host = args.host[:colonpl] else: port = 8080 bottle.run(server="auto", host=args.host, port=port) else: bottle.run(server="auto") class YTPastebinCmd(YTCommand): name = "pastebin" args = ( { "short": "-l", "longname": "--language", "action": "store", "default": None, "dest": "language", "help": "Use syntax highlighter for the file in language", }, { "short": "-L", "longname": "--languages", "action": "store_true", "default": False, "dest": "languages", "help": "Retrieve a list of supported languages", }, { "short": "-e", "longname": "--encoding", "action": "store", "default": "utf-8", "dest": "encoding", "help": "Specify the encoding of a file (default is " "utf-8 or guessing if available)", }, { "short": "-b", "longname": "--open-browser", "action": "store_true", "default": False, "dest": "open_browser", "help": "Open the paste in a web browser", }, { "short": "-p", "longname": "--private", "action": "store_true", "default": False, "dest": "private", "help": "Paste as private", }, { "short": "-c", "longname": "--clipboard", "action": "store_true", "default": False, "dest": "clipboard", "help": "File to output to; else, print.", }, {"short": "file", "type": str}, ) description = """ Post a script to an anonymous pastebin """ def __call__(self, args): from yt.utilities import lodgeit as lo lo.main( args.file, languages=args.languages, language=args.language, encoding=args.encoding, open_browser=args.open_browser, private=args.private, clipboard=args.clipboard, ) class YTPastebinGrabCmd(YTCommand): args = ({"short": "number", "type": str},) name = "pastebin_grab" description = """ Print an online pastebin to STDOUT for local use. 
""" def __call__(self, args): from yt.utilities import lodgeit as lo lo.main(None, download=args.number) class YTPlotCmd(YTCommand): args = ( "width", "unit", "bn", "proj", "center", "zlim", "axis", "field", "weight", "skip", "cmap", "output", "grids", "time", "ds", "max", "log", "linear", { "short": "-fu", "longname": "--field-unit", "action": "store", "type": str, "dest": "field_unit", "default": None, "help": "Desired field units", }, { "longname": "--show-scale-bar", "action": "store_true", "help": "Annotate the plot with the scale", }, ) name = "plot" description = """ Create a set of images """ def __call__(self, args): ds = args.ds center = args.center if args.center == (-1, -1, -1): mylog.info("No center fed in; seeking.") v, center = ds.find_max("density") if args.max: v, center = ds.find_max("density") elif args.center is None: center = 0.5 * (ds.domain_left_edge + ds.domain_right_edge) center = np.array(center) if ds.dimensionality < 3: dummy_dimensions = np.nonzero(ds.index.grids[0].ActiveDimensions <= 1) axes = dummy_dimensions[0][0] elif args.axis == 4: axes = range(3) else: axes = args.axis unit = args.unit if unit is None: unit = "unitary" if args.width is None: width = None else: width = (args.width, args.unit) for ax in always_iterable(axes): mylog.info("Adding plot for axis %i", ax) if args.projection: plt = ProjectionPlot( ds, ax, args.field, center=center, width=width, weight_field=args.weight, ) else: plt = SlicePlot(ds, ax, args.field, center=center, width=width) if args.grids: plt.annotate_grids() if args.time: plt.annotate_timestamp() if args.show_scale_bar: plt.annotate_scale() if args.field_unit: plt.set_unit(args.field, args.field_unit) plt.set_cmap(args.field, args.cmap) plt.set_log(args.field, args.takelog) if args.zlim: plt.set_zlim(args.field, *args.zlim) ensure_dir_exists(args.output) plt.save(os.path.join(args.output, f"{ds}")) class YTRPDBCmd(YTCommand): name = "rpdb" description = """ Connect to a currently running (on localhost) rpd session. Commands run with --rpdb will trigger an rpdb session with any uncaught exceptions. """ args = ( { "short": "-t", "longname": "--task", "action": "store", "default": 0, "dest": "task", "help": "Open a web browser.", }, ) def __call__(self, args): from . 
import rpdb rpdb.run_rpdb(int(args.task)) class YTStatsCmd(YTCommand): args = ( "outputfn", "bn", "skip", "ds", "field", { "longname": "--max", "action": "store_true", "default": False, "dest": "max", "help": "Display maximum of field requested through -f option.", }, { "longname": "--min", "action": "store_true", "default": False, "dest": "min", "help": "Display minimum of field requested through -f option.", }, ) name = "stats" description = """ Print stats and max/min value of a given field (if requested), for one or more datasets (default field is density) """ def __call__(self, args): ds = args.ds ds.print_stats() vals = {} field = ds._get_field_info(args.field) if args.max: vals["max"] = ds.find_max(field) print(f"Maximum {field.name}: {vals['max'][0]:0.5e} at {vals['max'][1]}") if args.min: vals["min"] = ds.find_min(field) print(f"Minimum {field.name}: {vals['min'][0]:0.5e} at {vals['min'][1]}") if args.output is not None: t = ds.current_time.to("yr") with open(args.output, "a") as f: f.write(f"{ds} ({t:0.5e})\n") if "min" in vals: f.write( f"Minimum {field.name} is {vals['min'][0]:0.5e} at {vals['min'][1]}\n" ) if "max" in vals: f.write( f"Maximum {field.name} is {vals['max'][0]:0.5e} at {vals['max'][1]}\n" ) class YTUpdateCmd(YTCommand): args = ("all",) name = "update" description = """ Update the yt installation to the most recent version """ def __call__(self, opts): import importlib.resources as importlib_resources path = os.path.dirname(importlib_resources.files("yt")) vstring = _print_installation_information(path) if vstring is not None: print() print("This installation CAN be automatically updated.") update_git(path) else: _print_failed_source_update(opts.reinstall) class YTDeleteImageCmd(YTCommand): args = ({"short": "delete_hash", "type": str},) description = """ Delete image from imgur.com. """ name = "delete_image" def __call__(self, args): import urllib.error import urllib.request headers = {"Authorization": f"Client-ID {ytcfg.get('yt', 'imagebin_api_key')}"} delete_url = ytcfg.get("yt", "imagebin_delete_url") req = urllib.request.Request( delete_url.format(delete_hash=args.delete_hash), headers=headers, method="DELETE", ) try: response = urllib.request.urlopen(req).read().decode() except urllib.error.HTTPError as e: print("ERROR", e) return {"deleted": False} rv = json.loads(response) if "success" in rv and rv["success"]: print("\nImage successfully deleted!\n") else: print() print("Something has gone wrong! Here is the server response:") print() pprint.pprint(rv) class YTUploadImageCmd(YTCommand): args = ({"short": "file", "type": str},) description = """ Upload an image to imgur.com. Must be PNG. """ name = "upload_image" def __call__(self, args): import urllib.error import urllib.parse import urllib.request filename = args.file if not filename.endswith(".png"): print("File must be a PNG file!") return 1 headers = {"Authorization": f"Client-ID {ytcfg.get('yt', 'imagebin_api_key')}"} image_data = base64.b64encode(open(filename, "rb").read()) parameters = { "image": image_data, "type": "base64", "name": filename, "title": f"{filename} uploaded by yt", } data = urllib.parse.urlencode(parameters).encode("utf-8") req = urllib.request.Request( ytcfg.get("yt", "imagebin_upload_url"), data=data, headers=headers ) try: response = urllib.request.urlopen(req).read().decode() except urllib.error.HTTPError as e: print("ERROR", e) return {"uploaded": False} rv = json.loads(response) if "data" in rv and "link" in rv["data"]: print() print("Image successfully uploaded!
You can find it at:") print(f" {rv['data']['link']}") print() print("If you'd like to delete it, use the following") print(f" yt delete_image {rv['data']['deletehash']}") print() else: print() print("Something has gone wrong! Here is the server response:") print() pprint.pprint(rv) class YTUploadFileCmd(YTCommand): args = ({"short": "file", "type": str},) description = """ Upload a file to yt's curldrop. """ name = "upload" def __call__(self, args): from yt.utilities.on_demand_imports import _requests as requests fs = iter(FileStreamer(open(args.file, "rb"))) upload_url = ytcfg.get("yt", "curldrop_upload_url") r = requests.put(upload_url + "/" + os.path.basename(args.file), data=fs) print() print(r.text) class YTConfigLocalConfigHandler: def load_config(self, args) -> None: import os from yt.config import YTConfig from yt.utilities.configure import CONFIG local_config_file = YTConfig.get_local_config_file() global_config_file = YTConfig.get_global_config_file() local_exists = os.path.exists(local_config_file) global_exists = os.path.exists(global_config_file) local_arg_exists = hasattr(args, "local") global_arg_exists = hasattr(args, "global") config_file: str | None = None if getattr(args, "local", False): config_file = local_config_file elif getattr(args, "global", False): config_file = global_config_file else: if local_exists and global_exists: s = ( "Yt detected a local and a global configuration file, refusing " "to proceed.\n" f"Local config file: {local_config_file}\n" f"Global config file: {global_config_file}" ) # Only print the info about "--global" and "--local" if they exist if local_arg_exists and global_arg_exists: s += ( "\n" # missing eol from previous string "Specify which one you want to use using the `--local` or the " "`--global` flags." 
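# the --local/--global flags are only defined for some subcommands, # so the hint is only shown when both arguments actually exist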
) sys.exit(s) elif local_exists: config_file = local_config_file elif global_exists: config_file = global_config_file if config_file is None: print("WARNING: no configuration file installed.", file=sys.stderr) else: print( f"INFO: reading configuration file: {config_file}", file=sys.stderr ) CONFIG.read(config_file) self.config_file = config_file _global_local_args = ( { "short": "--local", "action": "store_true", "help": "Store the configuration in the local configuration file.", }, { "short": "--global", "action": "store_true", "help": "Store the configuration in the global configuration file.", }, ) class YTConfigGetCmd(YTCommand, YTConfigLocalConfigHandler): subparser = "config" name = "get" description = "get a config value" args = ( {"short": "section", "help": "The section containing the option."}, {"short": "option", "help": "The option to retrieve."}, *_global_local_args, ) def __call__(self, args): from yt.utilities.configure import get_config self.load_config(args) print(get_config(args.section, args.option)) class YTConfigSetCmd(YTCommand, YTConfigLocalConfigHandler): subparser = "config" name = "set" description = "set a config value" args = ( {"short": "section", "help": "The section containing the option."}, {"short": "option", "help": "The option to set."}, {"short": "value", "help": "The value to set the option to."}, *_global_local_args, ) def __call__(self, args): from yt.utilities.configure import set_config self.load_config(args) if self.config_file is None: self.config_file = os.path.join(os.getcwd(), "yt.toml") print( f"INFO: configuration will be written to {self.config_file}", file=sys.stderr, ) set_config(args.section, args.option, args.value, self.config_file) class YTConfigRemoveCmd(YTCommand, YTConfigLocalConfigHandler): subparser = "config" name = "rm" description = "remove a config option" args = ( {"short": "section", "help": "The section containing the option."}, {"short": "option", "help": "The option to remove."}, *_global_local_args, ) def __call__(self, args): from yt.utilities.configure import rm_config self.load_config(args) rm_config(args.section, args.option, self.config_file) class YTConfigListCmd(YTCommand, YTConfigLocalConfigHandler): subparser = "config" name = "list" description = "show the config content" args = _global_local_args def __call__(self, args): from yt.utilities.configure import write_config self.load_config(args) write_config(sys.stdout) class YTConfigPrintPath(YTCommand, YTConfigLocalConfigHandler): subparser = "config" name = "print-path" description = "show path to the config file" args = _global_local_args def __call__(self, args): self.load_config(args) print(self.config_file) class YTSearchCmd(YTCommand): args = ( { "short": "-o", "longname": "--output", "action": "store", "type": str, "dest": "output", "default": "yt_index.json", "help": "File in which to place output", }, { "longname": "--check-all", "short": "-a", "help": "Attempt to load every file", "action": "store_true", "default": False, "dest": "check_all", }, { "longname": "--full", "short": "-f", "help": "Output full contents of parameter file", "action": "store_true", "default": False, "dest": "full_output", }, ) description = """ Attempt to find outputs that yt can recognize in directories. 
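Example invocation (run from the directory to be indexed; the output file name here is illustrative, the default is yt_index.json): yt search --check-all -o my_index.json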
""" name = "search" def __call__(self, args): from yt.utilities.object_registries import output_type_registry candidates = [] for base, dirs, files in os.walk(".", followlinks=True): print("(% 10i candidates) Examining %s" % (len(candidates), base)) recurse = [] if args.check_all: candidates.extend([os.path.join(base, _) for _ in files]) for _, otr in sorted(output_type_registry.items()): c, r = otr._guess_candidates(base, dirs, files) candidates.extend([os.path.join(base, _) for _ in c]) recurse.append(r) if len(recurse) > 0 and not all(recurse): del dirs[:] # Now we have a ton of candidates. We're going to do something crazy # and try to load each one. records = [] for i, c in enumerate(sorted(candidates)): print("(% 10i/% 10i) Evaluating %s" % (i, len(candidates), c)) try: record = get_metadata(c, args.full_output) except YTUnidentifiedDataType: continue records.append(record) with open(args.output, "w") as f: json.dump(records, f, indent=4) print(f"Identified {len(records)} records output to {args.output}") class YTDownloadData(YTCommand): args = ( { "short": "filename", "action": "store", "type": str, "help": "The name of the file to download", "nargs": "?", "default": "", }, { "short": "location", "action": "store", "type": str, "nargs": "?", "help": "The location in which to place the file, can be " '"supp_data_dir", "test_data_dir", or any valid ' "path on disk. ", "default": "", }, { "longname": "--overwrite", "short": "-c", "help": "Overwrite existing file.", "action": "store_true", "default": False, }, { "longname": "--list", "short": "-l", "help": "Display all available files.", "action": "store_true", "default": False, }, ) description = """ Download a file from http://yt-project.org/data and save it to a particular location. Files can be saved to the locations provided by the "test_data_dir" or "supp_data_dir" configuration entries, or any valid path to a location on disk. """ name = "download" def __call__(self, args): if args.list: self.get_list() return if not args.filename: raise RuntimeError( "You need to provide a filename. See --help " "for details or use --list to get available " "datasets." ) elif not args.location: raise RuntimeError( "You need to specify download location. See --help for details." ) data_url = f"http://yt-project.org/data/{args.filename}" if args.location in ["test_data_dir", "supp_data_dir"]: data_dir = ytcfg.get("yt", args.location) if data_dir == "/does/not/exist": raise RuntimeError(f"'{args.location}' is not configured!") else: data_dir = args.location if not os.path.exists(data_dir): print(f"The directory '{data_dir}' does not exist. 
Creating...") ensure_dir(data_dir) data_file = os.path.join(data_dir, args.filename) if os.path.exists(data_file) and not args.overwrite: raise OSError(f"File '{data_file}' exists and overwrite=False!") print(f"Attempting to download file: {args.filename}") fn = download_file(data_url, data_file) if not os.path.exists(fn): raise OSError(f"The file '{args.filename}' did not download!!") print(f"File: {args.filename} downloaded successfully to {data_file}") def get_list(self): import urllib.request data = ( urllib.request.urlopen("http://yt-project.org/data/datafiles.json") .read() .decode("utf8") ) data = json.loads(data) for key in data: for ds in data[key]: ds["fullname"] = ds["url"].replace("http://yt-project.org/data/", "") print("{fullname} ({size}) type: {code}".format(**ds)) for line in textwrap.wrap(ds["description"]): print("\t", line) def run_main(): args = parser.parse_args() # The following is a workaround for a nasty Python 3 bug: # http://bugs.python.org/issue16308 # http://bugs.python.org/issue9253 try: args.func except AttributeError: parser.print_help() sys.exit(0) args.func(args) if __name__ == "__main__": run_main() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/configuration_tree.py0000644000175100001770000001417314714401662020502 0ustar00runnerdockerclass ConfigNode: def __init__(self, key, parent=None): self.key = key self.children = {} self.parent = parent def add(self, key, child): self.children[key] = child child.parent = self def update(self, other, extra_data=None): def _recursive_upsert(other_dict, keys): for key, val in other_dict.items(): new_keys = keys + [key] if isinstance(val, dict): _recursive_upsert(val, new_keys) else: self.upsert_from_list(new_keys, val, extra_data) _recursive_upsert(other, keys=[]) def get_child(self, key, constructor=None): if key in self.children: child = self.children[key] elif constructor is not None: child = self.children[key] = constructor() else: raise KeyError(f"Cannot get key {key}") return child def add_child(self, key): self.get_child(key, lambda: ConfigNode(key, parent=self)) def remove_child(self, key): self.children.pop(key) def upsert_from_list(self, keys, value, extra_data=None): key, *next_keys = keys if len(next_keys) == 0: # reach the end of the upsert leaf = self.get_child( key, lambda: ConfigLeaf( key, parent=self, value=value, extra_data=extra_data ), ) leaf.value = value leaf.extra_data = extra_data if not isinstance(leaf, ConfigLeaf): raise RuntimeError(f"Expected a ConfigLeaf, got {leaf}!") else: next_node = self.get_child(key, lambda: ConfigNode(key, parent=self)) if not isinstance(next_node, ConfigNode): raise RuntimeError(f"Expected a ConfigNode, got {next_node}!") next_node.upsert_from_list(next_keys, value, extra_data) def get_from_list(self, key_list): next, *key_list_remainder = key_list child = self.get_child(next) if len(key_list_remainder) == 0: return child else: return child.get_from_list(key_list_remainder) def get(self, *keys): return self.get_from_list(keys) def get_leaf(self, *keys, callback=lambda leaf: leaf.value): leaf = self.get_from_list(keys) return callback(leaf) def pop_leaf(self, keys): *node_keys, leaf_key = keys node = self.get_from_list(node_keys) node.children.pop(leaf_key) def get_deepest_leaf(self, *keys, callback=lambda leaf: leaf.value): root_key, *keys, leaf_key = keys root_node = self.get_child(root_key) node_list = [root_node] node = root_node # Traverse the tree down following the keys for k in keys: try: 
node = node.get_child(k) node_list.append(node) except KeyError: break # For each node, starting from the deepest, try to find the leaf for node in reversed(node_list): try: leaf = node.get_child(leaf_key) if not isinstance(leaf, ConfigLeaf): raise RuntimeError(f"Expected a ConfigLeaf, got {leaf}!") return callback(leaf) except KeyError: continue raise KeyError(f"Cannot find any node that contains the leaf {leaf_key}.") def serialize(self): retval = {} for key, child in self.children.items(): retval[key] = child.serialize() return retval @staticmethod def from_dict(other, parent=None, **kwa): me = ConfigNode(None, parent=parent) for key, val in other.items(): if isinstance(val, dict): me.add(key, ConfigNode.from_dict(val, parent=me, **kwa)) else: me.add(key, ConfigLeaf(key, parent=me, value=val, **kwa)) return me def _as_dict_with_count(self, callback): data = {} total_count = 0 for key, child in self.children.items(): if isinstance(child, ConfigLeaf): total_count += 1 data[key] = callback(child) elif isinstance(child, ConfigNode): child_data, count = child._as_dict_with_count(callback) total_count += count if count > 0: data[key] = child_data return data, total_count def as_dict(self, callback=lambda child: child.value): data, _ = self._as_dict_with_count(callback) return data def __repr__(self): return f"" def __contains__(self, item): return item in self.children # Add support for IPython rich display # see https://ipython.readthedocs.io/en/stable/config/integrating.html def _repr_json_(self): return self.as_dict() class ConfigLeaf: def __init__(self, key, parent: ConfigNode, value, extra_data=None): self.key = key # the name of the config leaf self._value = value self.parent = parent self.extra_data = extra_data def serialize(self): return self.value def get_tree(self): node = self parents = [] while node is not None: parents.append(node) node = node.parent return reversed(parents) @property def value(self): return self._value @value.setter def value(self, new_value): if type(self.value) is type(new_value): self._value = new_value else: tree = self.get_tree() tree_str = ".".join(node.key for node in tree if node.key) msg = f"Error when setting {tree_str}.\n" msg += ( "Tried to assign a value of type " f"{type(new_value)}, expected type {type(self.value)}." ) source = self.extra_data.get("source", None) if source: msg += f"\nThis entry was last modified in {source}."
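# config leaves are type-stable by design: assigning a value of a # different type raises instead of silently coercing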
raise TypeError(msg) def __repr__(self): return f"" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/configure.py0000644000175100001770000001364514714401662016600 0ustar00runnerdockerimport os import sys import warnings from collections.abc import Callable from pathlib import Path from more_itertools import always_iterable from yt.utilities.configuration_tree import ConfigLeaf, ConfigNode configuration_callbacks: list[Callable[["YTConfig"], None]] = [] def config_dir(): config_root = os.environ.get( "XDG_CONFIG_HOME", os.path.join(os.path.expanduser("~"), ".config") ) conf_dir = os.path.join(config_root, "yt") return conf_dir class YTConfig: def __init__(self, defaults=None): if defaults is None: defaults = {} self.config_root = ConfigNode(None) def get(self, section, *keys, callback=None): node_or_leaf = self.config_root.get(section, *keys) if isinstance(node_or_leaf, ConfigLeaf): if callback is not None: return callback(node_or_leaf) return node_or_leaf.value return node_or_leaf def get_most_specific(self, section, *keys, **kwargs): use_fallback = "fallback" in kwargs fallback = kwargs.pop("fallback", None) try: return self.config_root.get_deepest_leaf(section, *keys) except KeyError as err: if use_fallback: return fallback else: raise err def update(self, new_values, metadata=None): if metadata is None: metadata = {} self.config_root.update(new_values, metadata) def has_section(self, section): try: self.config_root.get_child(section) return True except KeyError: return False def add_section(self, section): self.config_root.add_child(section) def remove_section(self, section): if self.has_section(section): self.config_root.remove_child(section) return True else: return False def set(self, *args, metadata=None): section, *keys, value = args if metadata is None: metadata = {"source": "runtime"} self.config_root.upsert_from_list( [section] + list(keys), value, extra_data=metadata ) def remove(self, *args): self.config_root.pop_leaf(args) def read(self, file_names): file_names_read = [] for fname in always_iterable(file_names): if not os.path.exists(fname): continue metadata = {"source": f"file: {fname}"} if sys.version_info >= (3, 11): import tomllib else: import tomli as tomllib try: with open(fname, "rb") as fh: data = tomllib.load(fh) except tomllib.TOMLDecodeError as exc: warnings.warn( f"Could not load configuration file {fname} (invalid TOML: {exc})", stacklevel=2, ) else: self.update(data, metadata=metadata) file_names_read.append(fname) return file_names_read def write(self, file_handler): import tomli_w value = self.config_root.as_dict() config_as_str = tomli_w.dumps(value) try: file_path = Path(file_handler) except TypeError: if not hasattr(file_handler, "write"): raise TypeError( f"Expected a path to a file, or a writable object, got {file_handler}" ) from None file_handler.write(config_as_str) else: pdir = file_path.parent if not pdir.exists(): warnings.warn( f"{pdir!s} does not exist, creating it (recursively)", stacklevel=2 ) os.makedirs(pdir) file_path.write_text(config_as_str) @staticmethod def get_global_config_file(): return os.path.join(config_dir(), "yt.toml") @staticmethod def get_local_config_file(): path = Path.cwd() while path.parent is not path: candidate = path.joinpath("yt.toml") if candidate.is_file(): return os.path.abspath(candidate) else: path = path.parent return os.path.join(os.path.abspath(os.curdir), "yt.toml") def __setitem__(self, args, value): section, *keys = always_iterable(args) 
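# dict-style assignment, e.g. ytcfg["yt", "log_level"] = 10, delegates # to set(), which records the change as a runtime modification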
self.set(section, *keys, value, metadata=None) def __getitem__(self, key): section, *keys = always_iterable(key) return self.get(section, *keys) def __contains__(self, item): return item in self.config_root # Add support for IPython rich display # see https://ipython.readthedocs.io/en/stable/config/integrating.html def _repr_json_(self): return self.config_root._repr_json_() CONFIG = YTConfig() def _cast_bool_helper(value): if value in ("true", "True", True): return True elif value in ("false", "False", False): return False else: raise ValueError("Cannot safely cast to bool") def _expand_all(s): return os.path.expandvars(os.path.expanduser(s)) def _cast_value_helper(value, types=(_cast_bool_helper, int, float, _expand_all)): for t in types: try: retval = t(value) return retval except ValueError: pass def get_config(section, option): *option_path, option_name = option.split(".") return CONFIG.get(section, *option_path, option_name) def set_config(section, option, value, config_file): if not CONFIG.has_section(section): CONFIG.add_section(section) option_path = option.split(".") CONFIG.set(section, *option_path, _cast_value_helper(value)) write_config(config_file) def write_config(config_file): CONFIG.write(config_file) def rm_config(section, option, config_file): option_path = option.split(".") CONFIG.remove(section, *option_path) write_config(config_file) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/cosmology.py0000644000175100001770000005154114714401662016627 0ustar00runnerdockerimport functools import numpy as np from yt.units import dimensions from yt.units.unit_object import Unit # type: ignore from yt.units.unit_registry import UnitRegistry # type: ignore from yt.units.yt_array import YTArray, YTQuantity from yt.utilities.physical_constants import ( gravitational_constant_cgs as G, speed_of_light_cgs, ) class Cosmology: r""" Create a cosmology calculator to compute cosmological distances and times. For an explanation of the various cosmological measures, see, for example Hogg (1999, https://arxiv.org/abs/astro-ph/9905116). WARNING: Cosmological distance calculations return values that are either in the comoving or proper frame, depending on the specific quantity. For simplicity, the proper and comoving frames are set equal to each other within the cosmology calculator. This means that for some distance value, x, x.to("Mpc") and x.to("Mpccm") will be the same. The user should take care to understand which reference frame is correct for the given calculation. Parameters ---------- hubble_constant : float The Hubble parameter at redshift zero in units of 100 km/s/Mpc. Default: 0.71. omega_matter : the fraction of the energy density of the Universe in matter at redshift zero. Default: 0.27. omega_lambda : the fraction of the energy density of the Universe in a cosmological constant. Default: 0.73. omega_radiation : the fraction of the energy density of the Universe in relativistic matter at redshift zero. omega_curvature : the fraction of the energy density of the Universe in curvature. Default: 0.0. unit_system : :class:`yt.units.unit_systems.UnitSystem`, optional The units system to use when making calculations. If not specified, cgs units are assumed. use_dark_factor: Bool, optional The flag to either use the cosmological constant (False, default) or to use the parameterization of w(a) as given in Linder 2002. This, along with w_0 and w_a, only matters in the function expansion_factor. 
    w_0 : float, optional
        The Linder 2002 parameterization of w(a) is: w(a) = w_0 + w_a(1 - a).
        w_0 is w(a = 1). Only matters if use_dark_factor = True.
        Default: -1.0 (the cosmological constant case).
    w_a : float, optional
        See w_0. w_a is the derivative of w(a) evaluated at a = 1.
        Default: 0.0 (the cosmological constant case).

    Examples
    --------
    >>> from yt.utilities.cosmology import Cosmology
    >>> co = Cosmology()
    >>> print(co.t_from_z(0.0).in_units("Gyr"))
    """

    def __init__(
        self,
        hubble_constant=0.71,
        omega_matter=0.27,
        omega_lambda=0.73,
        omega_radiation=0.0,
        omega_curvature=0.0,
        unit_registry=None,
        unit_system="cgs",
        use_dark_factor=False,
        w_0=-1.0,
        w_a=0.0,
    ):
        self.omega_matter = float(omega_matter)
        self.omega_radiation = float(omega_radiation)
        self.omega_lambda = float(omega_lambda)
        self.omega_curvature = float(omega_curvature)
        hubble_constant = float(hubble_constant)
        if unit_registry is None:
            unit_registry = UnitRegistry(unit_system=unit_system)
            unit_registry.add("h", hubble_constant, dimensions.dimensionless, r"h")
            for my_unit in ["m", "pc", "AU", "au"]:
                new_unit = f"{my_unit}cm"
                my_u = Unit(my_unit, registry=unit_registry)
                # technically not true, but distances here are actually comoving
                unit_registry.add(
                    new_unit,
                    my_u.base_value,
                    dimensions.length,
                    f"\\rm{{{my_unit}}}/(1+z)",
                    prefixable=True,
                )
        self.unit_registry = unit_registry
        self.hubble_constant = self.quan(hubble_constant, "100*km/s/Mpc")
        self.unit_system = unit_system
        # For non-standard dark energy. If False, use the default cosmological
        # constant. This only affects the expansion_factor function.
        self.use_dark_factor = use_dark_factor
        self.w_0 = w_0
        self.w_a = w_a

    def hubble_distance(self):
        r"""
        The distance corresponding to c / h, where c is the speed of light
        and h is the Hubble parameter in units of 1 / time.
        """
        return self.quan(speed_of_light_cgs / self.hubble_constant).in_base(
            self.unit_system
        )

    def comoving_radial_distance(self, z_i, z_f):
        r"""
        The comoving distance along the line of sight to an object at
        redshift z_f, viewed at redshift z_i.

        Parameters
        ----------
        z_i : float
            The redshift of the observer.
        z_f : float
            The redshift of the observed object.

        Examples
        --------
        >>> from yt.utilities.cosmology import Cosmology
        >>> co = Cosmology()
        >>> print(co.comoving_radial_distance(0.0, 1.0).in_units("Mpccm"))
        """
        return (
            self.hubble_distance()
            * trapezoid_int(self.inverse_expansion_factor, z_i, z_f)
        ).in_base(self.unit_system)

    def comoving_transverse_distance(self, z_i, z_f):
        r"""
        When multiplied by some angle, the distance between two objects
        observed at redshift z_f with an angular separation given by that
        angle, viewed by an observer at redshift z_i (Hogg 1999).

        Parameters
        ----------
        z_i : float
            The redshift of the observer.
        z_f : float
            The redshift of the observed object.
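        Notes
        -----
        Writing the comoving radial distance as D_C and the Hubble distance
        as D_H, the implementation below evaluates the standard
        curvature-dependent form (Hogg 1999):
        D_M = D_H / sqrt(omega_curvature) * sinh(sqrt(omega_curvature) * D_C / D_H)
        for positive curvature, the analogous sin form (with
        |omega_curvature|) for negative curvature, and D_M = D_C in the
        flat case.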
Examples -------- >>> from yt.utilities.cosmology import Cosmology >>> co = Cosmology() >>> print(co.comoving_transverse_distance(0.0, 1.0).in_units("Mpccm")) """ if self.omega_curvature > 0: return ( self.hubble_distance() / np.sqrt(self.omega_curvature) * np.sinh( np.sqrt(self.omega_curvature) * self.comoving_radial_distance(z_i, z_f) / self.hubble_distance() ) ).in_base(self.unit_system) elif self.omega_curvature < 0: return ( self.hubble_distance() / np.sqrt(np.fabs(self.omega_curvature)) * np.sin( np.sqrt(np.fabs(self.omega_curvature)) * self.comoving_radial_distance(z_i, z_f) / self.hubble_distance() ) ).in_base(self.unit_system) else: return self.comoving_radial_distance(z_i, z_f) def comoving_volume(self, z_i, z_f): r""" "The comoving volume is the volume measure in which number densities of non-evolving objects locked into Hubble flow are constant with redshift." -- Hogg (1999) Parameters ---------- z_i : float The lower redshift of the interval. z_f : float The higher redshift of the interval. Examples -------- >>> from yt.utilities.cosmology import Cosmology >>> co = Cosmology() >>> print(co.comoving_volume(0.0, 1.0).in_units("Gpccm**3")) """ if self.omega_curvature > 1e-10: return ( 2 * np.pi * np.power(self.hubble_distance(), 3) / self.omega_curvature * ( self.comoving_transverse_distance(z_i, z_f) / self.hubble_distance() * np.sqrt( 1 + self.omega_curvature * np.sqrt( self.comoving_transverse_distance(z_i, z_f) / self.hubble_distance() ) ) - np.sinh( np.fabs(self.omega_curvature) * self.comoving_transverse_distance(z_i, z_f) / self.hubble_distance() ) / np.sqrt(self.omega_curvature) ) ).in_base(self.unit_system) elif self.omega_curvature < -1e-10: return ( 2 * np.pi * np.power(self.hubble_distance(), 3) / np.fabs(self.omega_curvature) * ( self.comoving_transverse_distance(z_i, z_f) / self.hubble_distance() * np.sqrt( 1 + self.omega_curvature * np.sqrt( self.comoving_transverse_distance(z_i, z_f) / self.hubble_distance() ) ) - np.arcsin( np.fabs(self.omega_curvature) * self.comoving_transverse_distance(z_i, z_f) / self.hubble_distance() ) / np.sqrt(np.fabs(self.omega_curvature)) ) ).in_base(self.unit_system) else: return ( 4 * np.pi * np.power(self.comoving_transverse_distance(z_i, z_f), 3) / 3 ).in_base(self.unit_system) def angular_diameter_distance(self, z_i, z_f): r""" Following Hogg (1999), the angular diameter distance is 'the ratio of an object's physical transverse size to its angular size in radians.' Parameters ---------- z_i : float The redshift of the observer. z_f : float The redshift of the observed object. Examples -------- >>> from yt.utilities.cosmology import Cosmology >>> co = Cosmology() >>> print(co.angular_diameter_distance(0.0, 1.0).in_units("Mpc")) """ return ( self.comoving_transverse_distance(0, z_f) / (1 + z_f) - self.comoving_transverse_distance(0, z_i) / (1 + z_i) ).in_base(self.unit_system) def angular_scale(self, z_i, z_f): r""" The proper transverse distance between two points at redshift z_f observed at redshift z_i per unit of angular separation. Parameters ---------- z_i : float The redshift of the observer. z_f : float The redshift of the observed object. 
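        Notes
        -----
        This is the angular diameter distance between z_i and z_f divided
        by one radian, so the result carries units of length per angle and
        converts directly to, e.g., kpc / arcsec.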
Examples -------- >>> from yt.utilities.cosmology import Cosmology >>> co = Cosmology() >>> print(co.angular_scale(0.0, 1.0).in_units("kpc / arcsec")) """ scale = self.angular_diameter_distance(z_i, z_f) / self.quan(1, "radian") return scale.in_base(self.unit_system) def luminosity_distance(self, z_i, z_f): r""" The distance that would be inferred from the inverse-square law of light and the measured flux and luminosity of the observed object. Parameters ---------- z_i : float The redshift of the observer. z_f : float The redshift of the observed object. Examples -------- >>> from yt.utilities.cosmology import Cosmology >>> co = Cosmology() >>> print(co.luminosity_distance(0.0, 1.0).in_units("Mpc")) """ return ( self.comoving_transverse_distance(0, z_f) * (1 + z_f) - self.comoving_transverse_distance(0, z_i) * (1 + z_i) ).in_base(self.unit_system) def lookback_time(self, z_i, z_f): r""" The difference in the age of the Universe between the redshift interval z_i to z_f. Parameters ---------- z_i : float The lower redshift of the interval. z_f : float The higher redshift of the interval. Examples -------- >>> from yt.utilities.cosmology import Cosmology >>> co = Cosmology() >>> print(co.lookback_time(0.0, 1.0).in_units("Gyr")) """ return ( trapezoid_int(self.age_integrand, z_i, z_f) / self.hubble_constant ).in_base(self.unit_system) def critical_density(self, z): r""" The density required for closure of the Universe at a given redshift in the proper frame. Parameters ---------- z : float Redshift. Examples -------- >>> from yt.utilities.cosmology import Cosmology >>> co = Cosmology() >>> print(co.critical_density(0.0).in_units("g/cm**3")) >>> print(co.critical_density(0).in_units("Msun/Mpc**3")) """ return (3.0 * self.hubble_parameter(z) ** 2 / 8.0 / np.pi / G).in_base( self.unit_system ) def hubble_parameter(self, z): r""" The value of the Hubble parameter at a given redshift. Parameters ---------- z: float Redshift. Examples -------- >>> from yt.utilities.cosmology import Cosmology >>> co = Cosmology() >>> print(co.hubble_parameter(1.0).in_units("km/s/Mpc")) """ return self.hubble_constant.in_base(self.unit_system) * self.expansion_factor(z) def age_integrand(self, z): return 1.0 / (z + 1) / self.expansion_factor(z) def expansion_factor(self, z): r""" The ratio between the Hubble parameter at a given redshift and redshift zero. This is also the primary function integrated to calculate the cosmological distances. """ # Use non-standard dark energy if self.use_dark_factor: dark_factor = self.get_dark_factor(z) # Use default cosmological constant else: dark_factor = 1.0 zp1 = 1 + z return np.sqrt( self.omega_matter * zp1**3 + self.omega_curvature * zp1**2 + self.omega_radiation * zp1**4 + self.omega_lambda * dark_factor ) def inverse_expansion_factor(self, z): return 1.0 / self.expansion_factor(z) def path_length_function(self, z): return ((1 + z) ** 2) * self.inverse_expansion_factor(z) def path_length(self, z_i, z_f): return trapezoid_int(self.path_length_function, z_i, z_f) def t_from_a(self, a): """ Compute the age of the Universe for a given scale factor. Parameters ---------- a : float Scale factor. Examples -------- >>> from yt.utilities.cosmology import Cosmology >>> co = Cosmology() >>> print(co.t_from_a(1.0).in_units("Gyr")) """ # Interpolate from a table of log(a) vs. 
log(t) la = np.log10(a) la_i = min(-6, np.asarray(la).min() - 3) la_f = np.asarray(la).max() bins_per_dex = 1000 n_bins = int((la_f - la_i) * bins_per_dex + 1) la_bins = np.linspace(la_i, la_f, n_bins) z_bins = 1.0 / np.power(10, la_bins) - 1 # Integrate in redshift. lt = trapezoid_cumulative_integral(self.age_integrand, z_bins) # Add a minus sign because we've switched the integration limits. table = InterpTable(la_bins[1:], np.log10(-lt)) t = np.power(10, table(la)) return (t / self.hubble_constant).in_base(self.unit_system) def t_from_z(self, z): """ Compute the age of the Universe for a given redshift. Parameters ---------- z : float Redshift. Examples -------- >>> from yt.utilities.cosmology import Cosmology >>> co = Cosmology() >>> print(co.t_from_z(0.0).in_units("Gyr")) """ return self.t_from_a(1.0 / (1.0 + z)) def a_from_t(self, t): """ Compute the scale factor for a given age of the Universe. Parameters ---------- t : YTQuantity or float Time since the Big Bang. If a float is given, units are assumed to be seconds. Examples -------- >>> from yt.utilities.cosmology import Cosmology >>> co = Cosmology() >>> print(co.a_from_t(4.0e17)) """ if not isinstance(t, YTArray): t = self.arr(t, "s") lt = np.log10((t * self.hubble_constant).to("")) # Interpolate from a table of log(a) vs. log(t) # Make initial guess for bounds and widen if necessary. la_i = -6 la_f = 6 bins_per_dex = 1000 iter = 0 while True: good = True n_bins = int((la_f - la_i) * bins_per_dex + 1) la_bins = np.linspace(la_i, la_f, n_bins) z_bins = 1.0 / np.power(10, la_bins) - 1 # Integrate in redshift. lt_bins = trapezoid_cumulative_integral(self.age_integrand, z_bins) # Add a minus sign because we've switched the integration limits. table = InterpTable(np.log10(-lt_bins), la_bins[1:]) la = table(lt) # We want to have the la_bins lower bound be decently # below the minimum calculated la values. laa = np.asarray(la) if laa.min() < la_i + 2: la_i -= 3 good = False if laa.max() > la_f: la_f = laa.max() + 1 good = False if good: break iter += 1 if iter > 10: raise RuntimeError("a_from_t calculation did not converge!") a = np.power(10, table(lt)) return a def z_from_t(self, t): """ Compute the redshift for a given age of the Universe. Parameters ---------- t : YTQuantity or float Time since the Big Bang. If a float is given, units are assumed to be seconds. Examples -------- >>> from yt.utilities.cosmology import Cosmology >>> co = Cosmology() >>> print(co.z_from_t(4.0e17)) """ a = self.a_from_t(t) return 1.0 / a - 1.0 def get_dark_factor(self, z): """ This function computes the additional term that enters the expansion factor when using non-standard dark energy. See Dolag et al 2004 eq. 7 for ref (but note that there's a typo in his eq. There should be no negative sign). At the moment, this only works using the parameterization given in Linder 2002 eq. 7: w(a) = w0 + wa(1 - a) = w0 + wa * z / (1+z). This gives rise to an analytic expression. It is also only functional for Gadget simulations, at the moment. 
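        Explicitly, the factor computed below is
        rho_DE(a) / rho_DE(a=1) = a**(-3 * (1 + w_0 + w_a)) * exp(-3 * w_a * (1 - a)),
        which multiplies omega_lambda inside expansion_factor.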
Parameters ---------- z: float Redshift """ # Get value of scale factor a corresponding to redshift z scale_factor = 1.0 / (1.0 + z) # Evaluate exponential using Linder02 parameterization dark_factor = np.power( scale_factor, -3.0 * (1.0 + self.w_0 + self.w_a) ) * np.exp(-3.0 * self.w_a * (1.0 - scale_factor)) return dark_factor _arr = None @property def arr(self): if self._arr is not None: return self._arr self._arr = functools.partial(YTArray, registry=self.unit_registry) return self._arr _quan = None @property def quan(self): if self._quan is not None: return self._quan self._quan = functools.partial(YTQuantity, registry=self.unit_registry) return self._quan def trapzint(f, a, b, bins=10000): from yt._maintenance.deprecation import issue_deprecation_warning issue_deprecation_warning( "yt.utilities.cosmology.trapzint is an alias " "to yt.utilities.cosmology.trapezoid_int, " "and will be removed in a future version. " "Please use yt.utilities.cosmology.trapezoid_int directly.", since="4.3.0", stacklevel=3, ) return trapezoid_int(f, a, b, bins) def trapezoid_int(f, a, b, bins=10000): from yt._maintenance.numpy2_compat import trapezoid zbins = np.logspace(np.log10(a + 1), np.log10(b + 1), bins) - 1 return trapezoid(f(zbins[:-1]), x=zbins[:-1], dx=np.diff(zbins)) def trapezoid_cumulative_integral(f, x): """ Perform cumulative integration using the trapezoid rule. """ fy = f(x) return (0.5 * (fy[:-1] + fy[1:]) * np.diff(x)).cumsum() class InterpTable: """ Generate a function to linearly interpolate from provided arrays. """ def __init__(self, x, y): self.x = x self.y = y def __call__(self, val): i = np.clip(np.digitize(val, self.x) - 1, 0, self.x.size - 2) slope = (self.y[i + 1] - self.y[i]) / (self.x[i + 1] - self.x[i]) return slope * (val - self.x[i]) + self.y[i] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/cython_fortran_utils.pxd0000644000175100001770000000112114714401662021223 0ustar00runnerdockercimport numpy as np from libc.stdio cimport FILE ctypedef np.int32_t INT32_t ctypedef np.int64_t INT64_t ctypedef np.float64_t DOUBLE_t cdef class FortranFile: cdef FILE* cfile cdef bint _closed cpdef INT64_t skip(self, INT64_t n=*) except -1 cdef INT64_t get_size(self, str dtype) cpdef INT32_t read_int(self) except? -1 cpdef np.ndarray read_vector(self, str dtype) cdef int read_vector_inplace(self, str dtype, void *data) cpdef INT64_t tell(self) except -1 cpdef INT64_t seek(self, INT64_t pos, INT64_t whence=*) except -1 cpdef void close(self) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/cython_fortran_utils.pyx0000644000175100001770000002466514714401662021272 0ustar00runnerdocker# distutils: libraries = STD_LIBS cimport numpy as np import numpy as np from libc.stdio cimport * cdef INT32_SIZE = sizeof(np.int32_t) cdef DOUBLE_SIZE = sizeof(np.float64_t) cdef class FortranFile: """This class provides facilities to interact with files written in fortran-record format. Since this is a non-standard file format, whose contents depend on the compiler and the endianness of the machine, caution is advised. This code will assume that the record header is written as a 32bit (4byte) signed integer. The code also assumes that the records use the system's local endianness. Notes ----- Since the assumed record header is an signed integer on 32bit, it will overflow at 2**31=2147483648 elements. 
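    A minimal usage sketch (the file name "fort.3" and the attribute layout
    below are illustrative only):

    >>> with FortranFile("fort.3") as f:
    ...     header = f.read_attrs([("ncpu", 1, "i"), ("nfiles", 2, "i")])
    ...     data = f.read_vector("d")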
This module has been inspired by scipy's FortranFile, especially the docstrings. """ def __cinit__(self, str fname): self.cfile = fopen(fname.encode('utf-8'), 'rb') self._closed = False if self.cfile is NULL: self._closed = True raise FileNotFoundError(fname.encode('utf-8')) def __enter__(self): return self def __exit__(self, type, value, traceback): self.close() cpdef INT64_t skip(self, INT64_t n=1) except -1: """Skip records. Parameters ---------- - n : integer The number of records to skip Returns ------- value : int Returns 0 on success. """ cdef INT32_t s1, s2, i if self._closed: raise ValueError("Read of closed file.") for i in range(n): fread(&s1, INT32_SIZE, 1, self.cfile) fseek(self.cfile, s1, SEEK_CUR) fread(&s2, INT32_SIZE, 1, self.cfile) if s1 != s2: raise IOError('Sizes do not agree in the header and footer for ' 'this record - check header dtype. Got %s and %s' % (s1, s2)) return 0 cdef INT64_t get_size(self, str dtype): """Return the size of an element given its datatype. Parameters ---------- dtype : str The dtype, see note for details about the values of dtype. Returns ------- size : int The size in byte of the dtype Note: ----- See https://docs.python.org/3.5/library/struct.html#format-characters for details about the formatting characters. """ if dtype == 'i': return 4 elif dtype == 'd': return 8 elif dtype == 'f': return 4 elif dtype == 'l': return 8 else: # Fallback to (slow) numpy-based to compute the size return np.dtype(dtype).itemsize cpdef np.ndarray read_vector(self, str dtype): """Reads a record from the file and return it as numpy array. Parameters ---------- d : data type This is the datatype (from the struct module) that we should read. Returns ------- tr : numpy.ndarray This is the vector of values read from the file. Examples -------- >>> f = FortranFile("fort.3") >>> rv = f.read_vector("d") # Read a float64 array >>> rv = f.read_vector("i") # Read an int32 array """ cdef INT32_t s1, s2, size cdef np.ndarray data if self._closed: raise ValueError("I/O operation on closed file.") size = self.get_size(dtype) fread(&s1, INT32_SIZE, 1, self.cfile) # Check record is compatible with data type if s1 % size != 0: raise ValueError('Size obtained (%s) does not match with the expected ' 'size (%s) of multi-item record' % (s1, size)) data = np.empty(s1 // size, dtype=dtype) fread(data.data, size, s1 // size, self.cfile) fread(&s2, INT32_SIZE, 1, self.cfile) if s1 != s2: raise IOError('Sizes do not agree in the header and footer for ' 'this record - check header dtype') return data cdef int read_vector_inplace(self, str dtype, void *data): """Reads a record from the file. Parameters ---------- d : data type This is the datatype (from the struct module) that we should read. data : void* The pointer where to store the data. It should be preallocated and have the correct size. """ cdef INT32_t s1, s2, size if self._closed: raise ValueError("I/O operation on closed file.") size = self.get_size(dtype) fread(&s1, INT32_SIZE, 1, self.cfile) # Check record is compatible with data type if s1 % size != 0: raise ValueError('Size obtained (%s) does not match with the expected ' 'size (%s) of multi-item record' % (s1, size)) fread(data, size, s1 // size, self.cfile) fread(&s2, INT32_SIZE, 1, self.cfile) if s1 != s2: raise IOError('Sizes do not agree in the header and footer for ' 'this record - check header dtype') cpdef INT32_t read_int(self) except? -1: """Reads a single int32 from the file and return it. Returns ------- data : int32 The value. 
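        Notes
        -----
        On disk, the record is expected as a 4-byte size header, the int32
        payload, and a matching 4-byte size footer; if the header and
        footer disagree, an IOError is raised.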
        Examples
        --------
        >>> f = FortranFile("fort.3")
        >>> n = f.read_int()  # Read a single int32
        """
        cdef INT32_t s1, s2
        cdef INT32_t data

        if self._closed:
            raise ValueError("I/O operation on closed file.")

        fread(&s1, INT32_SIZE, 1, self.cfile)
        if s1 != INT32_SIZE:
            raise ValueError('Size obtained (%s) does not match with the expected '
                             'size (%s) of record' % (s1, INT32_SIZE))
        fread(&data, INT32_SIZE, s1 // INT32_SIZE, self.cfile)
        fread(&s2, INT32_SIZE, 1, self.cfile)
        if s1 != s2:
            raise IOError('Sizes do not agree in the header and footer for '
                          'this record - check header dtype')
        return data

    def read_attrs(self, object attrs):
        """This function reads from that file according to a definition of
        attributes, returning a dictionary.

        Fortran unformatted files provide total bytesize at the beginning
        and end of a record. By correlating the components of that record
        with attribute names, we construct a dictionary that gets returned.

        Note that this function is used for reading sequentially-written
        records. If you have records that were written simultaneously, see
        read_record in yt.utilities.fortran_utils.

        Parameters
        ----------
        attrs : iterable of iterables
            This object should be an iterable of one of the formats:
            [ (attr_name, count, struct type), ... ]
            [ ((name1, name2, name3), count, vector type), ... ]
            [ ((name1, name2, name3), count, 'type type type'), ... ]
            [ (attr_name, count, struct type, optional), ... ]
            `optional` : boolean.
                If True, the attribute can be stored as an empty Fortran record.

        Returns
        -------
        values : dict
            This will return a dict of iterables of the components of the
            values in the file.

        Examples
        --------
        >>> header = [("ncpu", 1, "i"), ("nfiles", 2, "i")]
        >>> f = FortranFile("fort.3")
        >>> rv = f.read_attrs(header)
        """
        cdef str dtype
        cdef int n
        cdef dict data
        cdef key
        cdef np.ndarray tmp
        cdef bint optional

        if self._closed:
            raise ValueError("I/O operation on closed file.")

        data = {}
        for a in attrs:
            if len(a) == 3:
                key, n, dtype = a
                optional = False
            else:
                key, n, dtype, optional = a
            if n == 1:
                tmp = self.read_vector(dtype)
                if len(tmp) == 0 and optional:
                    continue
                elif (len(tmp) == 1) or (n == -1):
                    data[key] = tmp[0]
                else:
                    raise ValueError("Expected a record of length %s, got %s (%s)"
                                     % (n, len(tmp), key))
            else:
                tmp = self.read_vector(dtype)
                if (len(tmp) == 0 and optional):
                    continue
                elif (len(tmp) != n) and (n != -1):
                    raise ValueError("Expected a record of length %s, got %s (%s)"
                                     % (n, len(tmp), key))
                if isinstance(key, tuple):
                    # There are multiple keys
                    for ikey in range(n):
                        data[key[ikey]] = tmp[ikey]
                else:
                    data[key] = tmp
        return data

    cpdef INT64_t tell(self) except -1:
        """Return current stream position."""
        cdef INT64_t pos
        if self._closed:
            raise ValueError("I/O operation on closed file.")
        pos = ftell(self.cfile)
        return pos

    cpdef INT64_t seek(self, INT64_t pos, INT64_t whence=SEEK_SET) except -1:
        """Change stream position.

        Parameters
        ----------
        pos : int
            Change the stream position to the given byte offset. The offset
            is interpreted relative to the position indicated by whence.
        whence : int
            Determine how pos is interpreted. Can be any of
            * 0 -- start of stream (the default); offset should be zero or positive
            * 1 -- current stream position; offset may be negative
            * 2 -- end of stream; offset is usually negative

        Returns
        -------
        pos : int
            The new absolute position.
        """
        if self._closed:
            raise ValueError("I/O operation on closed file.")
        if whence < 0 or whence > 2:
            raise ValueError("whence argument can be 0, 1, or 2.
Got %s" % whence) fseek(self.cfile, pos, whence) return self.tell() cpdef void close(self): """Close the file descriptor. This method has no effect if the file is already closed. """ if self._closed: return fclose(self.cfile) self._closed = True def __dealloc__(self): if not self._closed: self.close() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/decompose.py0000644000175100001770000001166314714401662016573 0ustar00runnerdockerimport numpy as np def SIEVE_PRIMES(x): return x and x[:1] + SIEVE_PRIMES([n for n in x if n % x[0]]) def decompose_to_primes(max_prime): """Decompose number into the primes""" for prime in SIEVE_PRIMES(list(range(2, max_prime))): if prime * prime > max_prime: break while max_prime % prime == 0: yield prime max_prime //= prime if max_prime > 1: yield max_prime def decompose_array(shape, psize, bbox, *, cell_widths=None): """Calculate list of product(psize) subarrays of arr, along with their left and right edges """ return split_array(bbox[:, 0], bbox[:, 1], shape, psize, cell_widths=cell_widths) def evaluate_domain_decomposition(n_d, pieces, ldom): """Evaluate longest to shortest edge ratio BEWARE: lot's of magic here""" eff_dim = (n_d > 1).sum() exp = float(eff_dim - 1) / float(eff_dim) ideal_bsize = eff_dim * pieces ** (1.0 / eff_dim) * np.prod(n_d) ** exp mask = np.where(n_d > 1) nd_arr = np.array(n_d, dtype=np.float64)[mask] bsize = int(np.sum(ldom[mask] / nd_arr * np.prod(nd_arr))) load_balance = float(np.prod(n_d)) / ( float(pieces) * np.prod((n_d - 1) // ldom + 1) ) # 0.25 is magic number quality = load_balance / (1 + 0.25 * (bsize / ideal_bsize - 1.0)) # \todo add a factor that estimates lower cost when x-direction is # not chopped too much # \deprecated estimate these magic numbers quality *= 1.0 - (0.001 * ldom[0] + 0.0001 * ldom[1]) / pieces if np.any(ldom > n_d): quality = 0 return quality def factorize_number(pieces): """Return array consisting of prime, its power and number of different decompositions in three dimensions for this prime """ factors = list(decompose_to_primes(pieces)) temp = np.bincount(factors) return np.array( [ (prime, temp[prime], (temp[prime] + 1) * (temp[prime] + 2) // 2) for prime in np.unique(factors) ], dtype="int64", ) def get_psize(n_d, pieces): """Calculate the best division of array into px*py*pz subarrays. The goal is to minimize the ratio of longest to shortest edge to minimize the amount of inter-process communication. 
""" fac = factorize_number(pieces) nfactors = len(fac[:, 2]) best = 0.0 p_size = np.ones(3, dtype=np.int64) if pieces == 1: return p_size while np.all(fac[:, 2] > 0): ldom = np.ones(3, dtype=np.int64) for nfac in range(nfactors): i = int(np.sqrt(0.25 + 2 * (fac[nfac, 2] - 1)) - 0.5) k = fac[nfac, 2] - (1 + i * (i + 1) // 2) i = fac[nfac, 1] - i j = fac[nfac, 1] - (i + k) ldom *= fac[nfac, 0] ** np.array([i, j, k]) quality = evaluate_domain_decomposition(n_d, pieces, ldom) if quality > best: best = quality p_size = ldom # search for next unique combination for j in range(nfactors): if fac[j, 2] > 1: fac[j, 2] -= 1 break else: if j < nfactors - 1: fac[j, 2] = int((fac[j, 1] + 1) * (fac[j, 1] + 2) / 2) else: fac[:, 2] = 0 # no more combinations to try return p_size def split_array(gle, gre, shape, psize, *, cell_widths=None): """Split array into px*py*pz subarrays.""" n_d = np.array(shape, dtype=np.int64) dds = (gre - gle) / shape left_edges = [] right_edges = [] shapes = [] slices = [] if cell_widths is None: cell_widths_by_grid = None else: cell_widths_by_grid = [] for i in range(psize[0]): for j in range(psize[1]): for k in range(psize[2]): piece = np.array((i, j, k), dtype=np.int64) lei = n_d * piece // psize rei = n_d * (piece + np.ones(3, dtype=np.int64)) // psize if cell_widths is not None: cws = [] offset_le = [] offset_re = [] for idim in range(3): cws.append(cell_widths[idim][lei[idim] : rei[idim]]) offset_le.append(np.sum(cell_widths[idim][0 : lei[idim]])) offset_re.append(offset_le[idim] + np.sum(cws[idim])) cell_widths_by_grid.append(cws) offset_re = np.array(offset_re) offset_le = np.array(offset_le) else: offset_le = lei * dds offset_re = rei * dds lle = gle + offset_le lre = gle + offset_re left_edges.append(lle) right_edges.append(lre) shapes.append(rei - lei) slices.append(np.s_[lei[0] : rei[0], lei[1] : rei[1], lei[2] : rei[2]]) return left_edges, right_edges, shapes, slices, cell_widths_by_grid ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/definitions.py0000644000175100001770000000160114714401662017117 0ustar00runnerdockerfrom .physical_ratios import ( au_per_mpc, cm_per_mpc, km_per_mpc, kpc_per_mpc, miles_per_mpc, mpc_per_mpc, pc_per_mpc, rsun_per_mpc, sec_per_day, sec_per_Gyr, sec_per_Myr, sec_per_year, ) # The number of levels we expect to have at most MAXLEVEL = 48 # How many of each thing are in an Mpc mpc_conversion = { "Mpc": mpc_per_mpc, "mpc": mpc_per_mpc, "kpc": kpc_per_mpc, "pc": pc_per_mpc, "au": au_per_mpc, "rsun": rsun_per_mpc, "miles": miles_per_mpc, "km": km_per_mpc, "cm": cm_per_mpc, } # Nicely formatted versions of common length units formatted_length_unit_names = { "au": "AU", "rsun": r"R_\odot", "code_length": r"code\ length", } # How many seconds are in each thing sec_conversion = { "Gyr": sec_per_Gyr, "Myr": sec_per_Myr, "years": sec_per_year, "days": sec_per_day, } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/exceptions.py0000644000175100001770000006571114714401662017001 0ustar00runnerdocker# We don't need to import 'exceptions' import os.path from unyt.exceptions import UnitOperationError from yt._typing import FieldKey class YTException(Exception): pass # Data access exceptions: class YTUnidentifiedDataType(YTException): def __init__(self, filename, *args, **kwargs): self.filename = filename self.args = args self.kwargs = kwargs def __str__(self): from yt.utilities.hierarchy_inspection import ( 
            get_classes_with_missing_requirements,
        )

        msg = f"Could not determine input format from {self.filename!r}"
        if self.args:
            msg += ", " + (", ".join(f"{a!r}" for a in self.args))
        if self.kwargs:
            msg += ", " + (", ".join(f"{k}={v!r}" for k, v in self.kwargs.items()))
        msg += "\n"
        if len(unusable_classes := get_classes_with_missing_requirements()) > 0:
            msg += (
                "The following types could not be thoroughly checked against your data because "
                "their requirements are missing. "
                "You may want to inspect this list and check your installation:"
            )
            for cls, missing in unusable_classes.items():
                requirements_str = ", ".join(missing)
                msg += f"\n- {cls.__name__} (requires: {requirements_str})"
            msg += "\n\n"
        msg += "Please make sure you are running a sufficiently recent version of yt."
        return msg


class YTAmbiguousDataType(YTUnidentifiedDataType):
    def __init__(self, filename, candidates):
        self.filename = filename
        self.candidates = candidates

    def __str__(self):
        msg = f"Multiple data type candidates for {self.filename}\n"
        msg += "The following independent classes were detected as valid:\n"
        for c in self.candidates:
            msg += f"{c}\n"
        msg += (
            "This degeneracy can be lifted using the `hint` keyword argument in yt.load"
        )
        return msg


class YTSphereTooSmall(YTException):
    def __init__(self, ds, radius, smallest_cell):
        self.ds = ds
        self.radius = radius
        self.smallest_cell = smallest_cell

    def __str__(self):
        return f"{self.radius:0.5e} < {self.smallest_cell:0.5e}"


class YTAxesNotOrthogonalError(YTException):
    def __init__(self, axes):
        self.axes = axes

    def __str__(self):
        return f"The supplied axes are not orthogonal. {self.axes}"


class YTNoDataInObjectError(YTException):
    def __init__(self, obj):
        self.obj_type = getattr(obj, "_type_name", "")

    def __str__(self):
        s = "The object requested has no data included in it."
        if self.obj_type == "slice":
            s += " It may lie on a grid face. Try offsetting slightly."
        return s


class YTFieldNotFound(YTException):
    def __init__(self, field, ds):
        self.field = field
        self.ds = ds

    def _get_suggestions(self) -> list[FieldKey]:
        from yt.funcs import levenshtein_distance

        field = self.field
        ds = self.ds

        suggestions = {}
        if not isinstance(field, tuple):
            ftype, fname = None, field
        elif field[1] is None:
            ftype, fname = None, field[0]
        else:
            ftype, fname = field

        # Limit the suggestions to a distance of 3 (at most 3 edits)
        # This is very arbitrary, but is picked so that...
        # - small typos lead to meaningful suggestions (e.g. `densty` -> `density`)
        # - we don't suggest unrelated things (e.g. `pressure` -> `density` has a
        #   distance of 6, we definitely do not want it)
        # A threshold of 3 seems like a good middle point.
        max_distance = 3

        # Suggest (ftype, fname), with alternative ftype
        for ft, fn in ds.derived_field_list:
            if fn.lower() == fname.lower() and (
                ftype is None or ft.lower() != ftype.lower()
            ):
                suggestions[ft, fn] = 0

        if ftype is not None:
            # Suggest close matches using Levenshtein distance
            fields_str = {_: str(_).lower() for _ in ds.derived_field_list}
            field_str = str(field).lower()
            for (ft, fn), fs in fields_str.items():
                distance = levenshtein_distance(field_str, fs, max_dist=max_distance)
                if distance < max_distance:
                    if (ft, fn) in suggestions:
                        continue
                    suggestions[ft, fn] = distance

        # Return suggestions sorted by increasing distance (first are most likely)
        return [
            (ft, fn)
            for (ft, fn), distance in sorted(suggestions.items(), key=lambda v: v[1])
        ]

    def __str__(self):
        msg = f"Could not find field {self.field!r} in {self.ds}."
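        # Suggestions are best-effort: _get_suggestions needs a dataset
        # exposing derived_field_list, so fall back to no suggestions when
        # that lookup is impossible (e.g. a field key such as an Ellipsis
        # coming from ds.r[...]).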
        try:
            suggestions = self._get_suggestions()
        except AttributeError:
            # This may happen if passing a field that is e.g. an Ellipsis,
            # e.g. when using ds.r[...]
            suggestions = []
        if suggestions:
            msg += "\nDid you mean:\n\t"
            msg += "\n\t".join(str(_) for _ in suggestions)
        return msg


class YTParticleTypeNotFound(YTException):
    def __init__(self, fname, ds):
        self.fname = fname
        self.ds = ds

    def __str__(self):
        return f"Could not find particle_type {self.fname!r} in {self.ds}."


class YTSceneFieldNotFound(YTException):
    pass


class YTCouldNotGenerateField(YTFieldNotFound):
    def __str__(self):
        return f"Field {self.field!r} in {self.ds} could not be generated."


class YTFieldTypeNotFound(YTException):
    def __init__(self, ftype, ds=None):
        self.ftype = ftype
        self.ds = ds

    def __str__(self):
        if self.ds is not None and self.ftype in self.ds.particle_types:
            return (
                f"Could not find field type {self.ftype!r}. "
                "This field type is a known particle type for this dataset. "
                "Try adding this field with sampling_type='particle'."
            )
        else:
            return f"Could not find field type {self.ftype!r}."


class YTSimulationNotIdentified(YTException):
    def __init__(self, sim_type):
        self.sim_type = sim_type

    def __str__(self):
        from yt.utilities.object_registries import simulation_time_series_registry

        return (
            f"Simulation time-series type {self.sim_type!r} not defined. "
            f"Supported types are {list(simulation_time_series_registry)}"
        )


class YTCannotParseFieldDisplayName(YTException):
    def __init__(self, field_name, display_name, mathtext_error):
        self.field_name = field_name
        self.display_name = display_name
        self.mathtext_error = mathtext_error

    def __str__(self):
        return (
            f"The display name {self.display_name!r} of the derived field {self.field_name!r} "
            f"contains the following LaTeX parser errors:\n{self.mathtext_error}"
        )


class YTCannotParseUnitDisplayName(YTException):
    def __init__(self, field_name, unit_name, mathtext_error):
        self.field_name = field_name
        self.unit_name = unit_name
        self.mathtext_error = mathtext_error

    def __str__(self):
        return (
            f"The unit display name {self.unit_name!r} of the derived field {self.field_name!r} "
            f"contains the following LaTeX parser errors:\n{self.mathtext_error}"
        )


class InvalidSimulationTimeSeries(YTException):
    def __init__(self, message):
        self.message = message

    def __str__(self):
        return self.message


class MissingParameter(YTException):
    def __init__(self, ds, parameter):
        self.ds = ds
        self.parameter = parameter

    def __str__(self):
        return f"dataset {self.ds} is missing the {self.parameter!r} parameter."


class NoStoppingCondition(YTException):
    def __init__(self, ds):
        self.ds = ds

    def __str__(self):
        return f"Simulation {self.ds} has no stopping condition. StopTime or StopCycle should be set."


class YTNotInsideNotebook(YTException):
    def __str__(self):
        return "This function only works from within an IPython Notebook."


class YTCoordinateNotImplemented(YTException):
    def __str__(self):
        return "This coordinate is not implemented for this geometry type."


# defined for backward compatibility, for code written before yt 4.0
YTUnitOperationError = UnitOperationError


class YTUnitNotRecognized(YTException):
    def __init__(self, unit):
        self.unit = unit

    def __str__(self):
        return f"This dataset doesn't recognize {self.unit!r}"


class YTFieldUnitError(YTException):
    def __init__(self, field_info, returned_units):
        self.msg = (
            f"The field function associated with the field {field_info.name!r} returned "
            f"data with units {returned_units!r} but was defined with units {field_info.units!r}."
        )

    def __str__(self):
        return self.msg


class YTFieldUnitParseError(YTException):
    def __init__(self, field_info):
        self.msg = (
            f"The field {field_info.name!r} has unparsable units {field_info.units!r}."
        )

    def __str__(self):
        return self.msg


class YTSpatialFieldUnitError(YTException):
    def __init__(self, field):
        self.msg = (
            f"Field {field!r} is a spatial field with unknown units, but "
            "spatial fields must have explicitly defined units. Add the "
            "field with explicit 'units' to clear this error."
        )

    def __str__(self):
        return self.msg


class YTHubRegisterError(YTException):
    def __str__(self):
        return (
            "You must create an API key before uploading. See "
            "https://data.yt-project.org/getting_started.html"
        )


class YTNoFilenamesMatchPattern(YTException):
    def __init__(self, pattern):
        self.pattern = pattern

    def __str__(self):
        return f"No filenames were found to match the pattern: {self.pattern!r}"


class YTNoOldAnswer(YTException):
    def __init__(self, path):
        self.path = path

    def __str__(self):
        return f"There is no old answer available.\n{self.path!r}"


class YTNoAnswerNameSpecified(YTException):
    def __init__(self, message=None):
        if message is None or message == "":
            message = (
                "Answer name not provided for the answer testing test."
                "\n Please specify --answer-name= in"
                " command line mode or in AnswerTestingTest.answer_name"
                " variable."
            )
        self.message = message

    def __str__(self):
        return str(self.message)


class YTCloudError(YTException):
    def __init__(self, path):
        self.path = path

    def __str__(self):
        return (
            f"Failed to retrieve cloud data. Connection may be broken.\n {self.path!r}"
        )


class YTEllipsoidOrdering(YTException):
    def __init__(self, ds, A, B, C):
        self.ds = ds
        self._A = A
        self._B = B
        self._C = C

    def __str__(self):
        return "Must have A>=B>=C"


class EnzoTestOutputFileNonExistent(YTException):
    def __init__(self, filename):
        self.filename = filename
        self.testname = os.path.basename(os.path.dirname(filename))

    def __str__(self):
        return (
            f"Enzo test output file (OutputLog) not generated for: {self.testname!r}.\n"
            "Test did not complete."
        )


class YTNoAPIKey(YTException):
    def __init__(self, service, config_name):
        self.service = service
        self.config_name = config_name

    def __str__(self):
        from yt.config import config_dir

        try:
            conf = os.path.join(config_dir(), "yt", "yt.toml")
        except Exception:
            # this is really not a good time to raise another exception
            conf = "yt's configuration file"
        return f"You need to set an API key for {self.service!r} in {conf} as {self.config_name!r}"


class YTTooManyVertices(YTException):
    def __init__(self, nv, fn):
        self.nv = nv
        self.fn = fn

    def __str__(self):
        s = f"There are too many vertices ({self.nv}) to upload to Sketchfab. "
        s += f"Your model has been saved as {self.fn}. You should upload manually."
        return s


class YTInvalidWidthError(YTException):
    def __init__(self, width):
        self.error = f"width ({str(width)}) is invalid"

    def __str__(self):
        return str(self.error)


class YTFieldNotParseable(YTException):
    def __init__(self, field):
        self.field = field

    def __str__(self):
        return f"Cannot identify field {self.field!r}"


class YTDataSelectorNotImplemented(YTException):
    def __init__(self, class_name):
        self.class_name = class_name

    def __str__(self):
        return f"Data selector {self.class_name!r} not implemented."


class YTParticleDepositionNotImplemented(YTException):
    def __init__(self, class_name):
        self.class_name = class_name

    def __str__(self):
        return f"Particle deposition method {self.class_name!r} not implemented."
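# A quick illustration (hypothetical values, not part of the yt API): these
# exception classes defer message construction to __str__, so raising stays
# cheap and the formatting cost is only paid when the error is displayed:
#
#     try:
#         raise YTDataSelectorNotImplemented("fancy_selector")
#     except YTDataSelectorNotImplemented as err:
#         print(err)  # Data selector 'fancy_selector' not implemented.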
class YTDomainOverflow(YTException): def __init__(self, mi, ma, dle, dre): self.mi = mi self.ma = ma self.dle = dle self.dre = dre def __str__(self): return ( f"Particle bounds {self.mi} and {self.ma} " f"exceed domain bounds {self.dle} and {self.dre}" ) class YTIntDomainOverflow(YTException): def __init__(self, dims, dd): self.dims = dims self.dd = dd def __str__(self): return f"Integer domain overflow: {self.dims} in {self.dd}" class YTIllDefinedFilter(YTException): def __init__(self, filter, s1, s2): self.filter = filter self.s1 = s1 self.s2 = s2 def __str__(self): return ( f"Filter {self.filter!r} ill-defined. " f"Applied to shape {self.s1} but is shape {self.s2}." ) class YTIllDefinedParticleFilter(YTException): def __init__(self, filter, missing): self.filter = filter self.missing = missing def __str__(self): msg = ( '\nThe fields\n\t{},\nrequired by the "{}" particle filter, ' "are not defined for this dataset." ) f = self.filter return msg.format("\n".join(str(m) for m in self.missing), f.name) class YTIllDefinedBounds(YTException): def __init__(self, lb, ub): self.lb = lb self.ub = ub def __str__(self): v = f"The bounds {self.lb:0.3e} and {self.ub:0.3e} are ill-defined. " v += "Typically this happens when a log binning is specified " v += "and zero or negative values are given for the bounds." return v class YTObjectNotImplemented(YTException): def __init__(self, ds, obj_name): self.ds = ds self.obj_name = obj_name def __str__(self): return f"The object type {self.obj_name!r} is not implemented for the dataset {self.ds!s}" class YTParticleOutputFormatNotImplemented(YTException): def __str__(self): return "The particle output format is not supported." class YTFileNotParseable(YTException): def __init__(self, fname, line): self.fname = fname self.line = line def __str__(self): return f"Error while parsing file {self.fname!r} at line {self.line}" class YTRockstarMultiMassNotSupported(YTException): def __init__(self, mi, ma, ptype): self.mi = mi self.ma = ma self.ptype = ptype def __str__(self): v = f"Particle type '{self.ptype}' has minimum mass {self.mi:0.3e} and maximum " v += f"mass {self.ma:0.3e}. Multi-mass particles are not currently supported." return v class YTTooParallel(YTException): def __str__(self): return "You've used too many processors for this dataset." class YTElementTypeNotRecognized(YTException): def __init__(self, dim, num_nodes): self.dim = dim self.num_nodes = num_nodes def __str__(self): return f"Element type not recognized - dim = {self.dim}, num_nodes = {self.num_nodes}" class YTDuplicateFieldInProfile(YTException): def __init__(self, field, new_spec, old_spec): self.field = field self.new_spec = new_spec self.old_spec = old_spec def __str__(self): r = f"""Field {self.field} already exists with field spec: {self.old_spec} But being asked to add it with: {self.new_spec}""" return r class YTInvalidPositionArray(YTException): def __init__(self, shape, dimensions): self.shape = shape self.dimensions = dimensions def __str__(self): r = f"""Position arrays must be length and shape (N,3). But this one has {self.dimensions} and {self.shape}.""" return r class YTIllDefinedCutRegion(YTException): def __init__(self, conditions): self.conditions = conditions def __str__(self): r = ( "Can't mix particle/discrete and fluid/mesh conditions or quantities. 
" "Conditions specified:\n" ) r += "\n".join(c for c in self.conditions) return r class YTMixedCutRegion(YTException): def __init__(self, conditions, field): self.conditions = conditions self.field = field def __str__(self): r = f"""Can't mix particle/discrete and fluid/mesh conditions or quantities. Field: {self.field} and Conditions specified: """ r += "\n".join(c for c in self.conditions) return r class YTGDFAlreadyExists(YTException): def __init__(self, filename): self.filename = filename def __str__(self): return f"A file already exists at {self.filename} and overwrite=False." class YTNonIndexedDataContainer(YTException): def __init__(self, cont): self.cont = cont def __str__(self): class_name = self.cont.__class__.__name__ return ( f"The data container type ({class_name}) is an unindexed type. " "Operations such as ires, icoords, fcoords and fwidth will not work on it.\n" "Did you just attempt to perform an off-axis operation ? " "Be sure to consult the latest documentation to see whether the operation " "you tried is actually supported for your data type." ) class YTGDFUnknownGeometry(YTException): def __init__(self, geometry): self.geometry = geometry def __str__(self): return ( """Unknown geometry %i. Please refer to GDF standard for more information""" % self.geometry ) class YTInvalidUnitEquivalence(YTException): def __init__(self, equiv, unit1, unit2): self.equiv = equiv self.unit1 = unit1 self.unit2 = unit2 def __str__(self): return f"The unit equivalence {self.equiv!r} does not exist for the units {self.unit1!r} and {self.unit2!r}." class YTPlotCallbackError(YTException): def __init__(self, callback): self.callback = "annotate_" + callback def __str__(self): return f"{self.callback} callback failed" class YTUnsupportedPlotCallback(YTPlotCallbackError): def __init__(self, callback: str, plot_type: str) -> None: super().__init__(callback) self.plot_type = plot_type def __str__(self): return f"The `{self.plot_type}` class currently doesn't support the `{self.callback}` method." class YTPixelizeError(YTException): def __init__(self, message): self.message = message def __str__(self): return self.message class YTDimensionalityError(YTException): def __init__(self, wrong, right): self.wrong = wrong self.right = right def __str__(self): return f"Dimensionality specified was {self.wrong} but we need {self.right}" class YTInvalidShaderType(YTException): def __init__(self, source): self.source = source def __str__(self): return f"Can't identify shader_type for file {self.source!r}" class YTInvalidFieldType(YTException): def __init__(self, fields): self.fields = fields def __str__(self): return ( "\nSlicePlot, ProjectionPlot, and OffAxisProjectionPlot can " "only plot fields that\n" "are defined on a mesh or for SPH particles, but received the " "following N-body\n" "particle fields:\n\n" f" {self.fields!r}\n\n" "Did you mean to use ParticlePlot or plot a deposited particle " "field instead?" 
        )


class YTUnknownUniformKind(YTException):
    def __init__(self, kind):
        self.kind = kind

    def __str__(self):
        return f"Can't determine kind specification for {self.kind!r}"


class YTUnknownUniformSize(YTException):
    def __init__(self, size_spec):
        self.size_spec = size_spec

    def __str__(self):
        return f"Can't determine size specification for {self.size_spec!r}"


class YTDataTypeUnsupported(YTException):
    def __init__(self, this, supported):
        self.supported = supported
        self.this = this

    def __str__(self):
        v = f"This operation is not supported for data of geometry {self.this!r}; "
        v += f"it supports data of geometries {self.supported!r}"
        return v


class YTBoundsDefinitionError(YTException):
    def __init__(self, message, bounds):
        self.bounds = bounds
        self.message = message

    def __str__(self):
        v = f"This operation has encountered a bounds error: {self.message} "
        v += f"\nSpecified bounds are {self.bounds!r}."
        return v


def screen_one_element_list(lis):
    if len(lis) == 1:
        return lis[0]
    return lis


class YTIllDefinedProfile(YTException):
    def __init__(self, bin_fields, fields, weight_field, is_pfield):
        nbin = len(bin_fields)
        nfields = len(fields)
        self.bin_fields = screen_one_element_list(bin_fields)
        self.bin_fields_ptype = screen_one_element_list(is_pfield[:nbin])
        self.fields = screen_one_element_list(fields)
        self.fields_ptype = screen_one_element_list(is_pfield[nbin : nbin + nfields])
        self.weight_field = weight_field
        if self.weight_field is not None:
            self.weight_field_ptype = is_pfield[-1]

    def __str__(self):
        msg = (
            "\nCannot create a profile object that mixes particle and mesh "
            "fields.\n\n"
            "Received the following bin_fields:\n\n"
            " %s, particle_type = %s\n\n"
            "Profile fields:\n\n"
            " %s, particle_type = %s\n"
        )
        msg = msg % (
            self.bin_fields,
            self.bin_fields_ptype,
            self.fields,
            self.fields_ptype,
        )
        if self.weight_field is not None:
            weight_msg = "\nAnd weight field:\n\n %s, particle_type = %s\n"
            weight_msg = weight_msg % (self.weight_field, self.weight_field_ptype)
        else:
            weight_msg = ""
        return msg + weight_msg


class YTProfileDataShape(YTException):
    def __init__(self, field1, shape1, field2, shape2):
        self.field1 = field1
        self.shape1 = shape1
        self.field2 = field2
        self.shape2 = shape2

    def __str__(self):
        return (
            f"Profile fields must have same shape: {self.field1!r} has "
            f"shape {self.shape1} and {self.field2!r} has shape {self.shape2}."
        )


class YTBooleanObjectError(YTException):
    def __init__(self, bad_object):
        self.bad_object = bad_object

    def __str__(self):
        v = f"Supplied:\n{self.bad_object}\nto a boolean operation"
        v += " but it is not a YTSelectionContainer3D object."
        return v


class YTBooleanObjectsWrongDataset(YTException):
    def __init__(self):
        pass

    def __str__(self):
        return "Boolean data objects must share a common dataset object."
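# Illustration (hypothetical field names and shapes): YTProfileDataShape
# formats its message from the stored attributes, e.g.
#
#     err = YTProfileDataShape(("gas", "density"), (64,), ("gas", "temperature"), (32,))
#     str(err)
#     # -> "Profile fields must have same shape: ('gas', 'density') has
#     #     shape (64,) and ('gas', 'temperature') has shape (32,)."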
class YTIllDefinedAMR(YTException):
    def __init__(self, level, axis):
        self.level = level
        self.axis = axis

    def __str__(self):
        return (
            f"Grids on the level {self.level} are not properly aligned with cell edges "
            f"on the parent level ({self.axis!r} axis)"
        )


class YTIllDefinedParticleData(YTException):
    pass


class YTIllDefinedAMRData(YTException):
    pass


class YTInconsistentGridFieldShape(YTException):
    def __init__(self, shapes):
        self.shapes = shapes

    def __str__(self):
        msg = "Not all grid-based fields have the same shape!\n"
        for name, shape in self.shapes:
            msg += f" Field {name!r} has shape {shape}.\n"
        return msg


class YTInconsistentParticleFieldShape(YTException):
    def __init__(self, ptype, shapes):
        self.ptype = ptype
        self.shapes = shapes

    def __str__(self):
        msg = f"Not all fields with field type {self.ptype!r} have the same shape!\n"
        for name, shape in self.shapes:
            field = (self.ptype, name)
            msg += f" Field {field} has shape {shape}.\n"
        return msg


class YTInconsistentGridFieldShapeGridDims(YTException):
    def __init__(self, shapes, grid_dims):
        self.shapes = shapes
        self.grid_dims = grid_dims

    def __str__(self):
        msg = "Not all grid-based fields match the grid dimensions! "
        msg += f"Grid dims are {self.grid_dims}, "
        msg += "and the following fields have shapes that do not match them:\n"
        for name, shape in self.shapes:
            if shape != self.grid_dims:
                msg += f" Field {name} has shape {shape}.\n"
        return msg


class YTCommandRequiresModule(YTException):
    def __init__(self, module: str):
        self.module = module

    def __str__(self):
        msg = f"This command requires {self.module!r} to be installed.\n\n"
        msg += f"Please install {self.module!r} with the package manager "
        msg += "appropriate for your Python environment, e.g.:\n"
        msg += f" conda install {self.module}\n"
        msg += "or:\n"
        msg += f" python -m pip install {self.module}\n"
        return msg


class YTModuleRemoved(YTException):
    def __init__(self, name, new_home=None, info=None):
        message = f"The {name} module has been removed from yt."
        if new_home is not None:
            message += f"\nIt has been moved to {new_home}."
        if info is not None:
            message += f"\nFor more information, see {info}."
        super().__init__(message)


class YTArrayTooLargeToDisplay(YTException):
    def __init__(self, size, max_size):
        self.size = size
        self.max_size = max_size

    def __str__(self):
        msg = f"The requested array is of size {self.size}.\n"
        msg += "We do not support displaying arrays larger\n"
        msg += f"than size {self.max_size}."
        return msg


class YTConfigurationError(YTException):
    pass


class GenerationInProgress(Exception):
    def __init__(self, fields):
        self.fields = fields


class MountError(Exception):
    def __init__(self, message):
        self.message = message


././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0
yt-4.4.0/yt/utilities/file_handler.py0000644000175100001770000000763014714401662017230 0ustar00runnerdocker
from contextlib import contextmanager

from yt._maintenance.deprecation import issue_deprecation_warning
from yt.utilities.on_demand_imports import NotAModule, _h5py as h5py


def valid_hdf5_signature(fn: str, /) -> bool:
    signature = b"\x89HDF\r\n\x1a\n"
    try:
        with open(fn, "rb") as f:
            header = f.read(8)
        return header == signature
    except Exception:
        return False


def warn_h5py(fn):
    issue_deprecation_warning(
        "warn_h5py is not used within yt any more and is deprecated.", since="4.3"
    )
    needs_h5py = valid_hdf5_signature(fn)
    if needs_h5py and isinstance(h5py.File, NotAModule):
        raise RuntimeError(
            "This appears to be an HDF5 file, but h5py is not installed."
) class HDF5FileHandler: handle = None def __init__(self, filename): self.handle = h5py.File(filename, mode="r") def __getitem__(self, key): return self.handle[key] def __contains__(self, item): return item in self.handle def __len__(self): return len(self.handle) @property def attrs(self): return self.handle.attrs def keys(self): return list(self.handle.keys()) def items(self): return list(self.handle.items()) def close(self): if self.handle is not None: self.handle.close() class FITSFileHandler(HDF5FileHandler): def __init__(self, filename): from yt.utilities.on_demand_imports import _astropy if isinstance(filename, _astropy.pyfits.hdu.image._ImageBaseHDU): self.handle = _astropy.pyfits.HDUList(filename) elif isinstance(filename, _astropy.pyfits.HDUList): self.handle = filename else: self.handle = _astropy.pyfits.open( filename, memmap=True, do_not_scale_image_data=True, ignore_blank=True ) self._fits_files = [] def __del__(self): for f in self._fits_files: f.close() del self._fits_files del self.handle self.handle = None def close(self): self.handle.close() def valid_netcdf_signature(fn: str, /) -> bool: try: with open(fn, "rb") as f: header = f.read(4) except Exception: return False else: return header in (b"CDF\x01", b"CDF\x02") or ( fn.endswith((".nc", ".nc4")) and valid_hdf5_signature(fn) ) def valid_netcdf_classic_signature(filename): issue_deprecation_warning( "valid_netcdf_classic_signature is not used within yt any more and is deprecated.", since="4.3", ) signature_v1 = b"CDF\x01" signature_v2 = b"CDF\x02" try: with open(filename, "rb") as f: header = f.read(4) return header == signature_v1 or header == signature_v2 except Exception: return False def warn_netcdf(fn): # There are a few variants of the netCDF format. issue_deprecation_warning( "warn_netcdf is not used within yt any more and is deprecated.", since="4.3" ) classic = valid_netcdf_classic_signature(fn) # NetCDF-4 Classic files are HDF5 files constrained to the Classic # data model used by netCDF-3. netcdf4_classic = valid_hdf5_signature(fn) and fn.endswith((".nc", ".nc4")) needs_netcdf = classic or netcdf4_classic from yt.utilities.on_demand_imports import _netCDF4 as netCDF4 if needs_netcdf and isinstance(netCDF4.Dataset, NotAModule): raise RuntimeError( "This appears to be a netCDF file, but the " "python bindings for netCDF4 are not installed." 
) class NetCDF4FileHandler: def __init__(self, filename): self.filename = filename @contextmanager def open_ds(self, **kwargs): from yt.utilities.on_demand_imports import _netCDF4 as netCDF4 ds = netCDF4.Dataset(self.filename, mode="r", **kwargs) yield ds ds.close() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/flagging_methods.py0000644000175100001770000001356414714401662020120 0ustar00runnerdockerimport numpy as np # For modern purposes from yt.utilities.lib.misc_utilities import grow_flagging_field flagging_method_registry = {} class RegisteredFlaggingMethod(type): def __init__(cls, name, b, d): type.__init__(cls, name, b, d) class FlaggingMethod: _skip_add = False def __init_subclass__(cls, *args, **kwargs): super().__init_subclass__(*args, **kwargs) if hasattr(cls, "_type_name") and not cls._skip_add: flagging_method_registry[cls._type_name] = cls class OverDensity(FlaggingMethod): _type_name = "overdensity" def __init__(self, over_density): self.over_density = over_density def __call__(self, grid): rho = grid["gas", "density"] / (grid.ds.refine_by**grid.Level) return rho > self.over_density class FlaggingGrid: def __init__(self, grid, methods): self.grid = grid flagged = np.zeros(grid.ActiveDimensions, dtype="bool") for method in methods: flagged |= method(self.grid) self.flagged = grow_flagging_field(flagged) self.subgrids = [] self.left_index = grid.get_global_startindex() self.dimensions = grid.ActiveDimensions.copy() def find_subgrids(self): if not np.any(self.flagged): return [] psg = ProtoSubgrid(self.flagged, self.left_index, self.dimensions) sgl = [psg] index = 0 while index < len(sgl): psg = sgl[index] psg.shrink() if psg.dimensions.prod() == 0: sgl[index] = None continue while not psg.acceptable: new_psgs = [] for dim in np.argsort(psg.dimensions)[::-1]: new_psgs = psg.find_by_zero_signature(dim) if len(new_psgs) > 1: break if len(new_psgs) <= 1: new_psgs = psg.find_by_second_derivative() psg = new_psgs[0] sgl[index] = psg sgl.extend(new_psgs[1:]) psg.shrink() index += 1 return sgl # Much or most of this is directly translated from Enzo class ProtoSubgrid: def __init__(self, flagged_base, left_index, dimensions, offset=(0, 0, 0)): self.left_index = left_index.copy() self.dimensions = dimensions.copy() self.flagged = flagged_base[ offset[0] : offset[0] + dimensions[0], offset[1] : offset[1] + dimensions[1], offset[2] : offset[2] + dimensions[2], ] self.compute_signatures() def compute_signatures(self): self.sigs = [] for dim in range(3): d1 = (dim + 1) % 3 d2 = int(dim == 0) self.sigs.append(self.flagged.sum(axis=d1).sum(axis=d2)) @property def acceptable(self): return float(self.flagged.sum()) / self.flagged.size > 0.2 def shrink(self): new_ind = [] for dim in range(3): sig = self.sigs[dim] new_start = 0 while sig[new_start] == 0: new_start += 1 new_end = sig.size while sig[new_end - 1] == 0: new_end -= 1 self.dimensions[dim] = new_end - new_start self.left_index[dim] += new_start new_ind.append((new_start, new_end)) self.flagged = self.flagged[ new_ind[0][0] : new_ind[0][1], new_ind[1][0] : new_ind[1][1], new_ind[2][0] : new_ind[2][1], ] self.compute_signatures() def find_by_zero_signature(self, dim): sig = self.sigs[dim] grid_ends = np.zeros((sig.size, 2), dtype="int64") ng = 0 i = 0 while i < sig.size: if sig[i] != 0: grid_ends[ng, 0] = i while i < sig.size and sig[i] != 0: i += 1 grid_ends[ng, 1] = i - 1 ng += 1 i += 1 new_grids = [] for si, ei in grid_ends[:ng, :]: li = self.left_index.copy() dims = 
self.dimensions.copy() li[dim] += si dims[dim] = ei - si offset = [0, 0, 0] offset[dim] = si new_grids.append(ProtoSubgrid(self.flagged, li, dims, offset)) return new_grids def find_by_second_derivative(self): max_strength = 0 max_axis = -1 for dim in range(3): sig = self.sigs[dim] sd = sig[:-2] - 2.0 * sig[1:-1] + sig[2:] center = int((self.flagged.shape[dim] - 1) / 2) strength = zero_strength = zero_cross = 0 for i in range(1, sig.size - 2): # Note that sd is offset by one if sd[i - 1] * sd[i] < 0: strength = np.abs(sd[i - 1] - sd[i]) # TODO this differs from what I could find in ENZO # there's |center - i| < |center - zero_cross| instead # additionally zero_cross is undefined in first pass if strength > zero_strength or ( strength == zero_strength and np.abs(center - i) < np.abs(zero_cross - i) ): zero_strength = strength zero_cross = i if zero_strength > max_strength: max_axis = dim dims = self.dimensions.copy() li = self.left_index.copy() dims[max_axis] = zero_cross psg1 = ProtoSubgrid(self.flagged, li, dims) li[max_axis] += zero_cross dims[max_axis] = self.dimensions[max_axis] - zero_cross offset = np.zeros(3) offset[max_axis] = zero_cross psg2 = ProtoSubgrid(self.flagged, li, dims, offset) return [psg1, psg2] def __str__(self): return f"LI: ({self.left_index}) DIMS: ({self.dimensions})" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/fortran_utils.py0000644000175100001770000002325114714401662017504 0ustar00runnerdockerimport io import os import struct import numpy as np def read_attrs(f, attrs, endian="="): r"""This function accepts a file pointer and reads from that file pointer according to a definition of attributes, returning a dictionary. Fortran unformatted files provide total bytesize at the beginning and end of a record. By correlating the components of that record with attribute names, we construct a dictionary that gets returned. Note that this function is used for reading sequentially-written records. If you have many written that were written simultaneously, see read_record. Parameters ---------- f : File object An open file object. Should have been opened in mode rb. attrs : iterable of iterables This object should be an iterable of one of the formats: [ (attr_name, count, struct type), ... ]. [ ((name1,name2,name3),count, vector type] [ ((name1,name2,name3),count, 'type type type'] endian : str '=' is native, '>' is big, '<' is little endian Returns ------- values : dict This will return a dict of iterables of the components of the values in the file. Examples -------- >>> header = [("ncpu", 1, "i"), ("nfiles", 2, "i")] >>> f = open("fort.3", "rb") >>> rv = read_attrs(f, header) """ vv = {} net_format = endian for _a, n, t in attrs: for end in "@=<>": t = t.replace(end, "") net_format += "".join(["I"] + ([t] * n) + ["I"]) size = struct.calcsize(net_format) vals = list(struct.unpack(net_format, f.read(size))) vv = {} for a, n, t in attrs: for end in "@=<>": t = t.replace(end, "") if isinstance(a, tuple): n = len(a) s1 = vals.pop(0) v = [vals.pop(0) for i in range(n)] s2 = vals.pop(0) if s1 != s2: size = struct.calcsize(endian + "I" + "".join(n * [t]) + "I") raise OSError( "An error occurred while reading a Fortran record. " "Got a different size at the beginning and at the " "end of the record: %s %s", s1, s2, ) if n == 1: v = v[0] if isinstance(a, tuple): if len(a) != len(v): raise OSError( "An error occurred while reading a Fortran " "record. 
Record length is not equal to expected " f"length: {len(a)} {len(v)}" ) for k, val in zip(a, v, strict=True): vv[k] = val else: vv[a] = v return vv def read_cattrs(f, attrs, endian="="): r"""This function accepts a file pointer to a C-binary file and reads from that file pointer according to a definition of attributes, returning a dictionary. This function performs very similarly to read_attrs, except it does not add on any record padding. It is thus useful for using the same header types as in read_attrs, but for C files rather than Fortran. Parameters ---------- f : File object An open file object. Should have been opened in mode rb. attrs : iterable of iterables This object should be an iterable of one of the formats: [ (attr_name, count, struct type), ... ]. [ ((name1,name2,name3),count, vector type] [ ((name1,name2,name3),count, 'type type type'] endian : str '=' is native, '>' is big, '<' is little endian Returns ------- values : dict This will return a dict of iterables of the components of the values in the file. Examples -------- >>> header = [("ncpu", 1, "i"), ("nfiles", 2, "i")] >>> f = open("cdata.bin", "rb") >>> rv = read_cattrs(f, header) """ vv = {} net_format = endian for _a, n, t in attrs: for end in "@=<>": t = t.replace(end, "") net_format += "".join([t] * n) size = struct.calcsize(net_format) vals = list(struct.unpack(net_format, f.read(size))) vv = {} for a, n, t in attrs: for end in "@=<>": t = t.replace(end, "") if isinstance(a, tuple): n = len(a) v = [vals.pop(0) for i in range(n)] if n == 1: v = v[0] if isinstance(a, tuple): if len(a) != len(v): raise OSError( "An error occurred while reading a Fortran " "record. Record length is not equal to expected " f"length: {len(a)} {len(v)}" ) for k, val in zip(a, v, strict=True): vv[k] = val else: vv[a] = v return vv def read_vector(f, d, endian="="): r"""This function accepts a file pointer and reads from that file pointer a vector of values. Parameters ---------- f : File object An open file object. Should have been opened in mode rb. d : data type This is the datatype (from the struct module) that we should read. endian : str '=' is native, '>' is big, '<' is little endian Returns ------- tr : numpy.ndarray This is the vector of values read from the file. Examples -------- >>> f = open("fort.3", "rb") >>> rv = read_vector(f, "d") """ pad_fmt = f"{endian}I" pad_size = struct.calcsize(pad_fmt) vec_len = struct.unpack(pad_fmt, f.read(pad_size))[0] # bytes vec_fmt = f"{endian}{d}" vec_size = struct.calcsize(vec_fmt) if vec_len % vec_size != 0: raise OSError( "An error occurred while reading a Fortran record. " "Vector length is not compatible with data type: %s %s", vec_len, vec_size, ) vec_num = int(vec_len / vec_size) if isinstance(f, io.IOBase): tr = np.frombuffer(f.read(vec_len), vec_fmt, count=vec_num) else: tr = np.frombuffer(f, vec_fmt, count=vec_num) vec_len2 = struct.unpack(pad_fmt, f.read(pad_size))[0] if vec_len != vec_len2: raise OSError( "An error occurred while reading a Fortran record. " "Got a different size at the beginning and at the " "end of the record: %s %s", vec_len, vec_len2, ) return tr def skip(f, n=1, endian="="): r"""This function accepts a file pointer and skips a Fortran unformatted record. Optionally check that the skip was done correctly by checking the pad bytes. Parameters ---------- f : File object An open file object. Should have been opened in mode rb. n : int Number of records to skip. 
    endian : str
        '=' is native, '>' is big, '<' is little endian

    Returns
    -------
    skipped:
        The number of elements in the skipped array

    Examples
    --------

    >>> f = open("fort.3", "rb")
    >>> skip(f, 3)
    """
    skipped = np.zeros(n, dtype=np.int32)
    fmt = endian + "I"
    fmt_size = struct.calcsize(fmt)
    for i in range(n):
        # Each record is laid out as [4-byte size][payload][4-byte size].
        size = f.read(fmt_size)
        s1 = struct.unpack(fmt, size)[0]
        f.seek(s1, os.SEEK_CUR)
        # Read the trailing size marker and check it against the leading one.
        s2 = struct.unpack(fmt, f.read(fmt_size))[0]
        if s1 != s2:
            raise OSError(
                "An error occurred while reading a Fortran record. "
                "Got a different size at the beginning and at the "
                "end of the record: %s %s",
                s1,
                s2,
            )
        skipped[i] = s1 // fmt_size
    return skipped


def peek_record_size(f, endian="="):
    r"""This function accepts a file handle and returns the size of the next
    record and then rewinds the file to the previous position.

    Parameters
    ----------
    f : File object
        An open file object. Should have been opened in mode rb.
    endian : str
        '=' is native, '>' is big, '<' is little endian

    Returns
    -------
    Number of bytes in the next record
    """
    pos = f.tell()
    # Honor the endian argument when unpacking the size marker.
    fmt = endian + "I"
    s = struct.unpack(fmt, f.read(struct.calcsize(fmt)))
    f.seek(pos)
    return s[0]


def read_record(f, rspec, endian="="):
    r"""This function accepts a file pointer and reads from that file pointer
    a single "record" with different components.

    Fortran unformatted files provide total bytesize at the beginning and end
    of a record. By correlating the components of that record with attribute
    names, we construct a dictionary that gets returned.

    Parameters
    ----------
    f : File object
        An open file object. Should have been opened in mode rb.
    rspec : iterable of iterables
        This object should be an iterable of the format
        [ (attr_name, count, struct type), ... ].
    endian : str
        '=' is native, '>' is big, '<' is little endian

    Returns
    -------
    values : dict
        This will return a dict of iterables of the components of the values
        in the file.

    Examples
    --------

    >>> header = [("ncpu", 1, "i"), ("nfiles", 2, "i")]
    >>> f = open("fort.3", "rb")
    >>> rv = read_record(f, header)
    """
    vv = {}
    net_format = endian + "I"
    for _a, n, t in rspec:
        t = t if len(t) == 1 else t[-1]
        net_format += f"{n}{t}"
    net_format += "I"
    size = struct.calcsize(net_format)
    vals = list(struct.unpack(net_format, f.read(size)))
    s1, s2 = vals.pop(0), vals.pop(-1)
    if s1 != s2:
        raise OSError(
            "An error occurred while reading a Fortran record. 
Got " "a different size at the beginning and at the end of " "the record: %s %s", s1, s2, ) pos = 0 for a, n, _t in rspec: vv[a] = vals[pos : pos + n] pos += n return vv ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3791533 yt-4.4.0/yt/utilities/grid_data_format/0000755000175100001770000000000014714401715017521 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/grid_data_format/__init__.py0000644000175100001770000000003214714401662021626 0ustar00runnerdockerfrom .conversion import * ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3791533 yt-4.4.0/yt/utilities/grid_data_format/conversion/0000755000175100001770000000000014714401715021706 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/grid_data_format/conversion/__init__.py0000644000175100001770000000016114714401662024016 0ustar00runnerdockerfrom .conversion_abc import Converter from .conversion_athena import AthenaConverter, AthenaDistributedConverter ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/grid_data_format/conversion/conversion_abc.py0000644000175100001770000000024614714401662025255 0ustar00runnerdockerclass Converter: def __init__(self, basename, outname=None): self.basename = basename self.outname = outname def convert(self): pass ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/grid_data_format/conversion/conversion_athena.py0000644000175100001770000004640114714401662025773 0ustar00runnerdockerimport os from glob import glob import numpy as np from yt.utilities.grid_data_format.conversion.conversion_abc import Converter from yt.utilities.on_demand_imports import _h5py as h5py translation_dict = {} translation_dict["density"] = "density" translation_dict["total_energy"] = "specific_energy" translation_dict["velocity_x"] = "velocity_x" translation_dict["velocity_y"] = "velocity_y" translation_dict["velocity_z"] = "velocity_z" translation_dict["cell_centered_B_x"] = "mag_field_x" translation_dict["cell_centered_B_y"] = "mag_field_y" translation_dict["cell_centered_B_z"] = "mag_field_z" class AthenaDistributedConverter(Converter): def __init__(self, basename, outname=None, source_dir=None, field_conversions=None): self.fields = [] self.current_time = 0.0 name = basename.split(".") self.ddn = int(name[1]) if source_dir is None: source_dir = "./" self.source_dir = source_dir self.basename = name[0] if outname is None: outname = self.basename + ".%04i" % self.ddn + ".gdf" self.outname = outname if field_conversions is None: field_conversions = {} self.field_conversions = field_conversions self.handle = None def parse_line(self, line, grid): # grid is a dictionary splitup = line.strip().split() if "vtk" in splitup: grid["vtk_version"] = splitup[-1] elif "Really" in splitup: grid["time"] = splitup[-1] self.current_time = grid["time"] elif "PRIMITIVE" in splitup: grid["time"] = float(splitup[4].rstrip(",")) grid["level"] = int(splitup[6].rstrip(",")) grid["domain"] = int(splitup[8].rstrip(",")) self.current_time = grid["time"] elif "DIMENSIONS" in splitup: grid["dimensions"] = np.array(splitup[-3:], dtype="int64") elif "ORIGIN" in splitup: grid["left_edge"] = np.array(splitup[-3:], dtype="float64") elif "SPACING" in splitup: grid["dds"] = 
np.array(splitup[-3:], dtype="float64") elif "CELL_DATA" in splitup: grid["ncells"] = int(splitup[-1]) elif "SCALARS" in splitup: field = splitup[1] grid["read_field"] = field grid["read_type"] = "scalar" elif "VECTORS" in splitup: field = splitup[1] grid["read_field"] = field grid["read_type"] = "vector" def write_gdf_field(self, fn, grid_number, field, data): f = self.handle ## --------- Store Grid Data --------- ## if "grid_%010i" % grid_number not in f["data"].keys(): g = f["data"].create_group("grid_%010i" % grid_number) else: g = f["data"]["grid_%010i" % grid_number] name = field try: name = translation_dict[name] except Exception: pass # print 'Writing %s' % name if name not in g.keys(): g.create_dataset(name, data=data) def read_and_write_index(self, basename, ddn, gdf_name): """Read Athena legacy vtk file from multiple cpus""" proc_names = glob(os.path.join(self.source_dir, "id*")) # print('Reading a dataset from %i Processor Files' % len(proc_names)) N = len(proc_names) grid_dims = np.empty([N, 3], dtype="int64") grid_left_edges = np.empty([N, 3], dtype="float64") grid_dds = np.empty([N, 3], dtype="float64") grid_levels = np.zeros(N, dtype="int64") grid_parent_ids = -1 * np.ones(N, dtype="int64") grid_particle_counts = np.zeros([N, 1], dtype="int64") for i in range(N): if i == 0: fn = os.path.join( self.source_dir, f"id{i}", basename + f".{ddn:04d}.vtk" ) else: fn = os.path.join( self.source_dir, f"id{i}", basename + f"-id{i}.{ddn:04d}.vtk" ) print(f"Reading file {fn}") f = open(fn, "rb") grid = {} grid["read_field"] = None grid["read_type"] = None line = f.readline() while grid["read_field"] is None: self.parse_line(line, grid) if "SCALAR" in line.strip().split(): break if "VECTOR" in line.strip().split(): break if "TABLE" in line.strip().split(): break if len(line) == 0: break del line line = f.readline() if len(line) == 0: break if np.prod(grid["dimensions"]) != grid["ncells"]: grid["dimensions"] -= 1 grid["dimensions"][grid["dimensions"] == 0] = 1 if np.prod(grid["dimensions"]) != grid["ncells"]: print( "product of dimensions %i not equal to number of cells %i" % (np.prod(grid["dimensions"]), grid["ncells"]) ) raise TypeError # Append all hierarchy info before reading this grid's data grid_dims[i] = grid["dimensions"] grid_left_edges[i] = grid["left_edge"] grid_dds[i] = grid["dds"] # grid_ncells[i]=grid['ncells'] del grid f.close() del f f = self.handle ## --------- Begin level nodes --------- ## g = f.create_group("gridded_data_format") g.attrs["format_version"] = np.float32(1.0) g.attrs["data_software"] = "athena" f.create_group("data") f.create_group("field_types") f.create_group("particle_types") pars_g = f.create_group("simulation_parameters") gles = grid_left_edges gdims = grid_dims dle = np.min(gles, axis=0) dre = np.max(gles + grid_dims * grid_dds, axis=0) glis = ((gles - dle) / grid_dds).astype("int64") ddims = (dre - dle) / grid_dds[0] # grid_left_index f.create_dataset("grid_left_index", data=glis) # grid_dimensions f.create_dataset("grid_dimensions", data=gdims) # grid_level f.create_dataset("grid_level", data=grid_levels) ## ----------QUESTIONABLE NEXT LINE--------- ## # This data needs two dimensions for now. 
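        # grid_particle_counts is allocated above as np.zeros([N, 1]) rather
        # than np.zeros(N): the dataset is deliberately written with two
        # dimensions (one row per grid), which the existing comment suggests
        # the current GDF reader expects.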
f.create_dataset("grid_particle_count", data=grid_particle_counts) # grid_parent_id f.create_dataset("grid_parent_id", data=grid_parent_ids) ## --------- Done with top level nodes --------- ## pars_g.attrs["refine_by"] = np.int64(1) pars_g.attrs["dimensionality"] = np.int64(3) pars_g.attrs["domain_dimensions"] = ddims pars_g.attrs["current_time"] = self.current_time pars_g.attrs["domain_left_edge"] = dle pars_g.attrs["domain_right_edge"] = dre pars_g.attrs["unique_identifier"] = "athenatest" pars_g.attrs["cosmological_simulation"] = np.int64(0) pars_g.attrs["num_ghost_zones"] = np.int64(0) pars_g.attrs["field_ordering"] = np.int64(1) pars_g.attrs["boundary_conditions"] = np.int64([0] * 6) # For Now # Extra pars: # pars_g.attrs['n_cells'] = grid['ncells'] pars_g.attrs["vtk_version"] = 1.0 # Add particle types # Nothing to do here # Add particle field attributes # f.close() def read_and_write_data(self, basename, ddn, gdf_name): proc_names = glob(os.path.join(self.source_dir, "id*")) # print('Reading a dataset from %i Processor Files' % len(proc_names)) N = len(proc_names) for i in range(N): if i == 0: fn = os.path.join( self.source_dir, f"id{i}", basename + f".{ddn:04d}.vtk" ) else: fn = os.path.join( self.source_dir, +f"id{i}", basename + f"-id{i}.{ddn:04d}.vtk" ) f = open(fn, "rb") # print('Reading data from %s' % fn) line = f.readline() while line != "": if len(line) == 0: break splitup = line.strip().split() if "DIMENSIONS" in splitup: grid_dims = np.array(splitup[-3:], dtype="int64") line = f.readline() continue elif "CELL_DATA" in splitup: grid_ncells = int(splitup[-1]) line = f.readline() if np.prod(grid_dims) != grid_ncells: grid_dims -= 1 grid_dims[grid_dims == 0] = 1 if np.prod(grid_dims) != grid_ncells: print( "product of dimensions %i not equal to number of cells %i" % (np.prod(grid_dims), grid_ncells) ) raise TypeError break else: del line line = f.readline() read_table = False while line != "": if len(line) == 0: break splitup = line.strip().split() if "SCALARS" in splitup: field = splitup[1] if not read_table: line = f.readline() # Read the lookup table line read_table = True data = np.fromfile(f, dtype=">f4", count=grid_ncells).reshape( grid_dims, order="F" ) if i == 0: self.fields.append(field) # print('writing field %s' % field) self.write_gdf_field(gdf_name, i, field, data) read_table = False elif "VECTORS" in splitup: field = splitup[1] data = np.fromfile(f, dtype=">f4", count=3 * grid_ncells) data_x = data[0::3].reshape(grid_dims, order="F") data_y = data[1::3].reshape(grid_dims, order="F") data_z = data[2::3].reshape(grid_dims, order="F") if i == 0: self.fields.append(field + "_x") self.fields.append(field + "_y") self.fields.append(field + "_z") # print('writing field %s' % field) self.write_gdf_field(gdf_name, i, field + "_x", data_x) self.write_gdf_field(gdf_name, i, field + "_y", data_y) self.write_gdf_field(gdf_name, i, field + "_z", data_z) del data, data_x, data_y, data_z del line line = f.readline() f.close() del f f = self.handle field_g = f["field_types"] # Add Field Attributes for name in self.fields: tname = name try: tname = translation_dict[name] except Exception: pass this_field = field_g.create_group(tname) if name in self.field_conversions.keys(): this_field.attrs["field_to_cgs"] = self.field_conversions[name] else: this_field.attrs["field_to_cgs"] = np.float64("1.0") # For Now def convert(self, index=True, data=True): self.handle = h5py.File(self.outname, mode="a") if index: self.read_and_write_index(self.basename, self.ddn, self.outname) if data: 
self.read_and_write_data(self.basename, self.ddn, self.outname) self.handle.close() class AthenaConverter(Converter): def __init__(self, basename, outname=None, field_conversions=None): self.fields = [] self.basename = basename name = basename.split(".") fn = "%s.%04i" % (name[0], int(name[1])) self.ddn = int(name[1]) self.basename = fn if outname is None: outname = fn + ".gdf" self.outname = outname if field_conversions is None: field_conversions = {} self.field_conversions = field_conversions def parse_line(self, line, grid): # grid is a dictionary splitup = line.strip().split() if "vtk" in splitup: grid["vtk_version"] = splitup[-1] elif "Really" in splitup: grid["time"] = splitup[-1] elif "DIMENSIONS" in splitup: grid["dimensions"] = np.array(splitup[-3:], dtype="int64") elif "ORIGIN" in splitup: grid["left_edge"] = np.array(splitup[-3:], dtype="float64") elif "SPACING" in splitup: grid["dds"] = np.array(splitup[-3:], dtype="float64") elif "CELL_DATA" in splitup: grid["ncells"] = int(splitup[-1]) elif "SCALARS" in splitup: field = splitup[1] grid["read_field"] = field grid["read_type"] = "scalar" elif "VECTORS" in splitup: field = splitup[1] grid["read_field"] = field grid["read_type"] = "vector" def read_grid(self, filename): """Read Athena legacy vtk file from single cpu""" f = open(filename, "rb") # print('Reading from %s'%filename) grid = {} grid["read_field"] = None grid["read_type"] = None table_read = False line = f.readline() while line != "": while grid["read_field"] is None: self.parse_line(line, grid) if grid["read_type"] == "vector": break if not table_read: line = f.readline() if "TABLE" in line.strip().split(): table_read = True if len(line) == 0: break if len(line) == 0: break if np.prod(grid["dimensions"]) != grid["ncells"]: grid["dimensions"] -= 1 if np.prod(grid["dimensions"]) != grid["ncells"]: print( "product of dimensions %i not equal to number of cells %i" % (np.prod(grid["dimensions"]), grid["ncells"]) ) raise TypeError if grid["read_type"] == "scalar": grid[grid["read_field"]] = np.fromfile( f, dtype=">f4", count=grid["ncells"] ).reshape(grid["dimensions"], order="F") self.fields.append(grid["read_field"]) elif grid["read_type"] == "vector": data = np.fromfile(f, dtype=">f4", count=3 * grid["ncells"]) grid[grid["read_field"] + "_x"] = data[0::3].reshape( grid["dimensions"], order="F" ) grid[grid["read_field"] + "_y"] = data[1::3].reshape( grid["dimensions"], order="F" ) grid[grid["read_field"] + "_z"] = data[2::3].reshape( grid["dimensions"], order="F" ) self.fields.append(grid["read_field"] + "_x") self.fields.append(grid["read_field"] + "_y") self.fields.append(grid["read_field"] + "_z") else: raise TypeError grid["read_field"] = None grid["read_type"] = None line = f.readline() if len(line) == 0: break grid["right_edge"] = grid["left_edge"] + grid["dds"] * (grid["dimensions"]) return grid def write_to_gdf(self, fn, grid): f = h5py.File(fn, mode="a") ## --------- Begin level nodes --------- ## g = f.create_group("gridded_data_format") g.attrs["format_version"] = np.float32(1.0) g.attrs["data_software"] = "athena" data_g = f.create_group("data") field_g = f.create_group("field_types") f.create_group("particle_types") pars_g = f.create_group("simulation_parameters") dle = grid["left_edge"] # True only in this case of one grid for the domain gles = np.array([grid["left_edge"]]) gdims = np.array([grid["dimensions"]]) glis = ((gles - dle) / grid["dds"]).astype("int64") # grid_left_index f.create_dataset("grid_left_index", data=glis) # grid_dimensions 
f.create_dataset("grid_dimensions", data=gdims) levels = np.array([0], dtype="int64") # unigrid example # grid_level f.create_dataset("grid_level", data=levels) ## ----------QUESTIONABLE NEXT LINE--------- ## # This data needs two dimensions for now. n_particles = np.array([[0]], dtype="int64") # grid_particle_count f.create_dataset("grid_particle_count", data=n_particles) # Assume -1 means no parent. parent_ids = np.array([-1], dtype="int64") # grid_parent_id f.create_dataset("grid_parent_id", data=parent_ids) ## --------- Done with top level nodes --------- ## f.create_group("index") ## --------- Store Grid Data --------- ## g0 = data_g.create_group("grid_%010i" % 0) for field in self.fields: name = field if field in translation_dict.keys(): name = translation_dict[name] if name not in g0.keys(): g0.create_dataset(name, data=grid[field]) ## --------- Store Particle Data --------- ## # Nothing to do ## --------- Attribute Tables --------- ## pars_g.attrs["refine_by"] = np.int64(1) pars_g.attrs["dimensionality"] = np.int64(3) pars_g.attrs["domain_dimensions"] = grid["dimensions"] try: pars_g.attrs["current_time"] = grid["time"] except Exception: pars_g.attrs["current_time"] = 0.0 pars_g.attrs["domain_left_edge"] = grid["left_edge"] # For Now pars_g.attrs["domain_right_edge"] = grid["right_edge"] # For Now pars_g.attrs["unique_identifier"] = "athenatest" pars_g.attrs["cosmological_simulation"] = np.int64(0) pars_g.attrs["num_ghost_zones"] = np.int64(0) pars_g.attrs["field_ordering"] = np.int64(0) pars_g.attrs["boundary_conditions"] = np.int64([0] * 6) # For Now # Extra pars: pars_g.attrs["n_cells"] = grid["ncells"] pars_g.attrs["vtk_version"] = grid["vtk_version"] # Add Field Attributes for name in g0.keys(): tname = name try: tname = translation_dict[name] except Exception: pass this_field = field_g.create_group(tname) if name in self.field_conversions.keys(): this_field.attrs["field_to_cgs"] = self.field_conversions[name] else: this_field.attrs["field_to_cgs"] = np.float64("1.0") # For Now # Add particle types # Nothing to do here # Add particle field attributes f.close() def convert(self): grid = self.read_grid(self.basename + ".vtk") self.write_to_gdf(self.outname, grid) # import sys # if __name__ == '__main__': # n = sys.argv[-1] # n = n.split('.') # fn = '%s.%04i'%(n[0],int(n[1])) # grid = read_grid(fn+'.vtk') # write_to_hdf5(fn+'.gdf',grid) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3831534 yt-4.4.0/yt/utilities/grid_data_format/docs/0000755000175100001770000000000014714401715020451 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/grid_data_format/docs/IRATE_notes.txt0000644000175100001770000000350514714401662023272 0ustar00runnerdockerHere is info from Erik Tollerud about the IRATE data format. The bitbucket project is at https://bitbucket.org/eteq/irate-format and I've posted a copy of the docs at http://www.physics.uci.edu/~etolleru/irate-docs/ , in particular http://www.physics.uci.edu/~etolleru/irate-docs/formatspec.html , which details the actual requirements for data to fit in the format. 
As far as I can tell, the following steps are needed to make GDF fit inside the IRATE format: *move everything except "/simulation_parameters" into a group named "/GridData" *rename "/simulation_parameters" to "SimulationParameters" *remove the 'field_types' group (this is not absolutely necessary, but the convention we had in mind for IRATE is that the dataset names themselves (e.g. the datasets like /data/gridxxxxxx/density) serve as the field definitions. * The unit information that's in 'field_types' should then be attributes in either "/GridData" or "/GridData/data" following the naming scheme e.g. "densityunitcgs" following the unit form given in the IRATE doc and an additional attribute e.g. "densityunitname" should be added with the human-readable name of the unit. This unit information can also live at the dataset level, but it probably makes more sense to put it instead at the higher level (IRATE supports both ways of doing it) * The Cosmology group (as defined in the IRATE specification) must be added - for simulations that are not technically "cosmological", you can just use one of the default cosmologies (WMAP7 is a reasonable choice - there's a function in the IRATE tools that automatically takes care of all the details for this). * optional: redo all the group names to follow the CamelCase convention - that's what we've been using elsewhere in IRATE. This is an arbitrary choice, but it would be nice for it to be consistent throughout the format. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/grid_data_format/docs/gdf_specification.txt0000644000175100001770000002552614714401662024665 0ustar00runnerdockerGridded Data Format =================== This is a pre-release of version 1.0 of this format. Lots of formats have come before, but this one is simple and will work with yt; the idea is to create an import and export function in yt that will read this, so that other codes (such as ZEUS-MP) can export directly to it or convert their data to it, and so that yt can export to it from any format it recognizes and reads. Caveats and Notes ----------------- #. We avoid having many attributes on many nodes, as access can be quite slow #. Cartesian data only for now #. All grids must have the same number of ghost zones. #. If "/grid_parent" does not exist, parentage relationships will be reconstructed and assumed to allow multiple grids #. No parentage can skip levels #. All grids are at the same time #. This format is designed for single-fluid calculations (with color fields) but it should be viewed as extensible to multiple-fluids. #. All fluid quantities are assumed to be in every grid, filling every zone. Inside a given grid, for a given particle type, all the affiliated fields must be the same length. (i.e., dark matter's velocity must be the same in all dimensions.) #. Everything is in a single file; for extremely large datasets, the user may utilize HDF5 external links to link to files other than the primary. (This enables, for instance, Enzo datasets to have only a thin wrapper that creates this format.) #. All fluid fields in this version of the format are assumed to have the dimensionality of the grid they reside in plus any ghost zones, plus any additionally dimensionality required by the staggering property. #. Particles may have dataspaces affiliated with them. (See Enzo's OutputParticleTypeGrouping for more information.) This enables a light wrapper around data formats with interspersed particle types. #. 
Boundary conditions are very simply specified -- future revisions will feature more complicated and rich specifications for the boundary. Furthermore, we make a distinction between fluid quantities and particle quantities. Particles remain affiliated with grid nodes. Positions of particles are global, but this will change with future versions of this document. Format Declaration ------------------ The file type is HDF5. We require version 1.8 or greater. At the root level, this group must exist: :: /gridded_data_format This must contain the (float) attribute ``format_version``. This document describes version 1.0. Optional attributes may exist: ``data_software`` string, references the application creating the file, not the author of the data ``data_software_version`` string, should reference a unique version number ``data_author`` string, references the person or persons who created the data, should include an email address ``data_comment`` string, anything about the data Top Level Nodes --------------- At least five top-level groups must exist, although some may be empty. :: /gridded_data_format /data /simulation_parameters /field_types /particle_types Additionally, the grid structure elements must exist. The 0-indexed index into this array defines a unique "Grid ID". ``/grid_left_index`` (int64, Nx3): global, relative to current level, and only the active region ``/grid_dimensions`` (int64, Nx3): only the active regions ``/grid_level`` (int64, N): level, indexed by zero ``/grid_particle_count`` (int64, N): total number of particles. (May change in subsequent versions.) ``/grid_parent_id`` (int64, N): optional, may only reference a single parent Grid Fields ----------- Underneath ``/data/`` there must be entries for every grid, of the format ``/data/grid_%010i``. These grids need no attributes, and underneath them datasets live. Fluid Fields ++++++++++++ For every grid we then define ``/data/grid_%010i/%(field)s``. Where ``%(field)s`` draws from all of the fields defined. We define no standard for which fields must be present, only the names and units. Units should always be ''proper'' cgs (or conversion factors should be supplied, below), and field names should be drawn from this list, with these names. Not all fields must be represented. Field must extend beyond the active region if ghost zones are included. All pre-defined fields are assumed to be cell-centered unless this is overridden in ``field_types``. * ``density`` (g/cc) * ``temperature`` (K) * ``specific_thermal_energy`` (erg/g) * ``specific_energy`` (erg/g, includes kinetic and magnetic) * ``magnetic_energy`` (erg/g) * ``velocity_x`` (cm/s) * ``velocity_y`` (cm/s) * ``velocity_z`` (cm/s) * ``species_density_%s`` (g/cc) where %s is the species name including ionization state, such as H2I, HI, HII, CO, "elec" for electron * ``mag_field_x`` * ``mag_field_y`` * ``mag_field_z`` Particle Fields +++++++++++++++ Particles are more expensive to sort and identify based on "type" -- for instance, dark matter versus star particles. The particles should be separated based on type, under the group ``/data/grid_%010i/particles/``. The particles group will have sub-groups, each of which will be named after the type of particle it represents. We only specify "dark_matter" as a type; anything else must be specified as described below. 
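For example, a grid holding dark matter plus a hypothetical "star" particle
type would lay out its particle groups as::

   /data/grid_0000000000/particles/dark_matter/
   /data/grid_0000000000/particles/star/

(The "star" type is illustrative only; any type other than dark matter must
be given its own entry under ``/particle_types``, as specified later in this
document.)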
Each node, for instance ``/data/grid_%010i/particles/dark_matter/``, must contain the following fields: * ``mass`` (g) * ``id`` * ``position_x`` (in physical units) * ``position_y`` (in physical units) * ``position_z`` (in physical units) * ``velocity_x`` (cm/s) * ``velocity_y`` (cm/s) * ``velocity_z`` (cm/s) * ``dataspace`` (optional) an HDF5 dataspace to be used when opening all affiliated fields. If this is to be used, it must be appropriately set in the particle type definition. This is of type ``H5T_STD_REF_DSETREG``. (See Enzo's OutputParticleTypeGrouping for an example.) Additional Fields +++++++++++++++++ Any additional fields from the data can be added, but must have a corresponding entry in the root field table (described below.) The naming scheme is to be as explicit as possible, with units in cgs (or a conversion factor to the standard cgs unit, in the field table.) Attribute Table --------------- In the root node, we define several groups which contain attributes. Simulation Parameters +++++++++++++++++++++ These attributes will all be associated with ``/simulation_parameters``. ``refine_by`` relative global refinement ``dimensionality`` 1-, 2- or 3-D data ``domain_dimensions`` dimensions in the top grid ``current_time`` current time in simulation, in seconds, from "start" of simulation ``domain_left_edge`` the left edge of the domain, in cm ``domain_right_edge`` the right edge of the domain, in cm ``unique_identifier`` regarded as a string, but can be anything ``cosmological_simulation`` 0 or 1 ``num_ghost_zones`` integer ``field_ordering`` integer: 0 for C, 1 for Fortran ``boundary_conditions`` integer (6): 0 for periodic, 1 for mirrored, 2 for outflow. Needs one for each face of the cube. Any past the dimensionality should be set to -1. The order of specification goes left in 0th dimension, right in 0th dimension, left in 1st dimension, right in 1st dimensions, left in 2nd dimension, right in 2nd dimension. Note also that yt does not currently support non-periodic boundary conditions, and that the assumption of periodicity shows up primarily in plots and covering grids. Optionally, attributes for cosmological simulations can be provided, if cosmological_simulation above is set to 1: * current_redshift * omega_matter (at z=0) * omega_lambda (at z=0) * hubble_constant (h100) Fluid Field Attributes ++++++++++++++++++++++ Every field that is included that is not both in CGS already and in the list above requires parameters. If a field is in the above list but is not in CGS, only the field_to_cgs attribute is necessary. These will be stored under ``/field_types`` and each must possess the following attributes: ``field_name`` a string that will be used to describe the field; can contain spaces. ``field_to_cgs`` a float that will be used to convert the field to cgs units, if necessary. Set to 1.0 if no conversion necessary. Note that if non-CGS units are desired this field should simply be viewed as the value by which field values are multiplied to get to some internally consistent unit system. ``field_units`` a string that names the units. ``staggering`` an integer: 0 for cell-centered, 1 for face-centered, 2 for vertex-centered. Non-cellcentered data will be linearly-interpolated; more complicated reconstruction will be defined in a future version of this standard; for 1.0 we only allow for simple definitions. Particle Types ++++++++++++++ Every particle type that is not recognized (i.e., all non-Dark Matter types) needs to have an entry under ``/particle_types``. 
Each entry must possess the following attributes: ``particle_type_name`` a string that will be used to describe the field; can contain spaces. ``particle_use_dataspace`` (optional) if 1, the dataspace (see particle field definition above) will be used for all particle fields for this type of particle. Useful if a given type of particle is embedded inside a larger list of different types of particle. ``particle_type_num`` an integer giving the total number of particles of this type. For instance, to define a particle of type ``accreting_black_hole``, the file must contain ``/particle_types/accreting_black_hole``, with the ``particle_type_name`` attribute of "Accreting Black Hole". Particle Field Attributes +++++++++++++++++++++++++ Every particle type that contains a new field (for instance, ``accretion_rate``) needs to have an entry under ``/particle_types/{particle_type_name}/{field_name}`` containing the following attributes: ``field_name`` a string that will be used to describe the field; can contain spaces. ``field_to_cgs`` a float that will be used to convert the field to cgs units, if necessary. Set to 1.0 if no conversion necessary. ``field_units`` a string that names the units. Role of YT ---------- yt will provide a reader for this data, so that any data in this format can be used by the code. Additionally, the names and specifications in this code reflect the internal yt data structures. yt will also provide a writer for this data, which will operate on any existing data format. Provided that a simulation code can read this data, this will enable cross-platform comparison. Furthermore, any external piece of software (i.e., Stranger) that implements reading this format will be able to read any format of data that yt understands. Example File ------------ An example file constructed from the ``RD0005-mine`` dataset is available at http://yt.enzotools.org/files/RD0005.gdf . It is not yet a complete conversion, but it is a working proof of concept. Readers and writers are forthcoming. ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3831534 yt-4.4.0/yt/utilities/grid_data_format/scripts/0000755000175100001770000000000014714401715021210 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/grid_data_format/scripts/convert_distributed_athena.py0000644000175100001770000000140414714401662027164 0ustar00runnerdockerimport sys from grid_data_format import AthenaDistributedConverter # Assumes that last input is the basename for the athena dataset. # i.e. kh_3d_mhd_hlld_128_beta5000_sub_tanhd.0030 basename = sys.argv[-1] converter = AthenaDistributedConverter(basename) converter.convert() # If you have units information, set up a conversion dictionary for # each field. Each key is the name of the field that Athena uses. # Each value is what you have to multiply the raw output from Athena # by to get cgs units. 
# code_to_cgs = {'density':1.0e3, # 'total_energy':1.0e-3, # 'velocity_x':1.2345, # 'velocity_y':1.2345, # 'velocity_z':1.2345} # converter = AthenaDistributedConverter(basename, field_conversions=code_to_cgs) # converter.convert() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/grid_data_format/scripts/convert_single_athena.py0000644000175100001770000000135614714401662026131 0ustar00runnerdockerimport sys from grid_data_format import AthenaConverter # Assumes that last input is the basename for the athena dataset. # i.e. kh_3d_mhd_hlld_128_beta5000_sub_tanhd.0030 basename = sys.argv[-1] converter = AthenaConverter(basename) converter.convert() # If you have units information, set up a conversion dictionary for # each field. Each key is the name of the field that Athena uses. # Each value is what you have to multiply the raw output from Athena # by to get cgs units. # code_to_cgs = {'density':1.0e3, # 'total_energy':1.0e-3, # 'velocity_x':1.2345, # 'velocity_y':1.2345, # 'velocity_z':1.2345} # converter = AthenaDistributedConverter(basename, field_conversions=code_to_cgs) # converter.convert() ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3831534 yt-4.4.0/yt/utilities/grid_data_format/tests/0000755000175100001770000000000014714401715020663 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/grid_data_format/tests/__init__.py0000644000175100001770000000000014714401662022763 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/grid_data_format/tests/test_writer.py0000644000175100001770000000230214714401662023606 0ustar00runnerdockerimport os import shutil import tempfile from numpy.testing import assert_equal from yt.frontends.gdf.data_structures import GDFDataset from yt.loaders import load from yt.testing import fake_random_ds, requires_module from yt.utilities.grid_data_format.writer import write_to_gdf from yt.utilities.on_demand_imports import _h5py as h5py TEST_AUTHOR = "yt test runner" TEST_COMMENT = "Testing write_to_gdf" def setup_module(): """Test specific setup.""" from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True @requires_module("h5py") def test_write_gdf(): """Main test suite for write_gdf""" tmpdir = tempfile.mkdtemp() tmpfile = os.path.join(tmpdir, "test_gdf.h5") try: test_ds = fake_random_ds(64) write_to_gdf( test_ds, tmpfile, data_author=TEST_AUTHOR, data_comment=TEST_COMMENT ) del test_ds assert isinstance(load(tmpfile), GDFDataset) h5f = h5py.File(tmpfile, mode="r") gdf = h5f["gridded_data_format"].attrs assert_equal(gdf["data_author"], TEST_AUTHOR) assert_equal(gdf["data_comment"], TEST_COMMENT) h5f.close() finally: shutil.rmtree(tmpdir) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/grid_data_format/writer.py0000644000175100001770000002706414714401662021421 0ustar00runnerdockerimport os import sys from contextlib import contextmanager import numpy as np from yt import __version__ as yt_version from yt.funcs import iter_fields from yt.utilities.exceptions import YTGDFAlreadyExists from yt.utilities.on_demand_imports import _h5py as h5py from yt.utilities.parallel_tools.parallel_analysis_interface import ( communication_system, parallel_objects, ) def write_to_gdf( ds, gdf_path, fields=None, 
data_author=None, data_comment=None, dataset_units=None, particle_type_name="dark_matter", overwrite=False, **kwargs, ): """ Write a dataset to the given path in the Grid Data Format. Parameters ---------- ds : Dataset object The yt data to write out. gdf_path : string The path of the file to output. fields The field or list of fields to write out. If None, defaults to ds.field_list. data_author : string, optional The name of the author who wrote the data. Default: None. data_comment : string, optional A descriptive comment. Default: None. dataset_units : dictionary, optional A dictionary of (value, unit) tuples to set the default units of the dataset. Keys can be: * "length_unit" * "time_unit" * "mass_unit" * "velocity_unit" * "magnetic_unit" If not specified, these will carry over from the parent dataset. particle_type_name : string, optional The particle type of the particles in the dataset. Default: "dark_matter" overwrite : boolean, optional Whether or not to overwrite an already existing file. If False, attempting to overwrite an existing file will result in an exception. Examples -------- >>> dataset_units = {"length_unit": (1.0, "Mpc"), "time_unit": (1.0, "Myr")} >>> write_to_gdf( ... ds, ... "clumps.h5", ... data_author="John ZuHone", ... dataset_units=dataset_units, ... data_comment="My Really Cool Dataset", ... overwrite=True, ... ) """ if fields is None: fields = ds.field_list fields = list(iter_fields(fields)) with _create_new_gdf( ds, gdf_path, data_author, data_comment, dataset_units=dataset_units, particle_type_name=particle_type_name, overwrite=overwrite, ) as f: # now add the fields one-by-one _write_fields_to_gdf(ds, f, fields, particle_type_name) def save_field(ds, fields, field_parameters=None): """ Write a single field associated with the dataset ds to the backup file. Parameters ---------- ds : Dataset object The yt dataset that the field is associated with. fields : field of list of fields The name(s) of the field(s) to save. field_parameters : dictionary A dictionary of field parameters to set. """ fields = list(iter_fields(fields)) for field_name in fields: if isinstance(field_name, tuple): field_name = field_name[1] field_obj = ds._get_field_info(field_name) if field_obj.sampling_type == "particle": print("Saving particle fields currently not supported.") return with _get_backup_file(ds) as f: # now save the field _write_fields_to_gdf( ds, f, fields, particle_type_name="dark_matter", field_parameters=field_parameters, ) def _write_fields_to_gdf( ds, fhandle, fields, particle_type_name, field_parameters=None ): for field in fields: # add field info to field_types group g = fhandle["field_types"] # create the subgroup with the field's name fi = ds._get_field_info(field) ftype, fname = fi.name try: sg = g.create_group(fname) except ValueError: print("Error - File already contains field called " + field) sys.exit(1) # grab the display name and units from the field info container. display_name = fi.display_name units = fi.units # check that they actually contain something... if display_name: sg.attrs["field_name"] = np.bytes_(display_name) else: sg.attrs["field_name"] = np.bytes_(str(field)) if units: sg.attrs["field_units"] = np.bytes_(units) else: sg.attrs["field_units"] = np.bytes_("None") # @todo: is this always true? sg.attrs["staggering"] = 0 # first we must create the datasets on all processes. 
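    # (Note: when the file was opened with h5py's parallel "mpio" driver --
    # see _create_new_gdf below -- dataset creation is a collective HDF5
    # operation, so every rank must execute the create_dataset calls that
    # follow, including for grids whose data it will never touch.)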
g = fhandle["data"] for grid in ds.index.grids: for field in fields: # sanitize get the field info object fi = ds._get_field_info(field) ftype, fname = fi.name grid_group = g["grid_%010i" % (grid.id - grid._id_offset)] particles_group = grid_group["particles"] pt_group = particles_group[particle_type_name] if fi.sampling_type == "particle": # particle data pt_group.create_dataset(fname, grid.ActiveDimensions, dtype="float64") else: # a field grid_group.create_dataset(fname, grid.ActiveDimensions, dtype="float64") # now add the actual data, grid by grid g = fhandle["data"] data_source = ds.all_data() citer = data_source.chunks([], "io", local_only=True) for region in parallel_objects(citer): # is there a better way to the get the grids on each chunk? for chunk in ds.index._chunk_io(region): for grid in chunk.objs: for field in fields: # sanitize and get the field info object fi = ds._get_field_info(field) ftype, fname = fi.name # set field parameters, if specified if field_parameters is not None: for k, v in field_parameters.items(): grid.set_field_parameter(k, v) grid_group = g["grid_%010i" % (grid.id - grid._id_offset)] particles_group = grid_group["particles"] pt_group = particles_group[particle_type_name] # add the field data to the grid group # Check if this is a real field or particle data. grid.get_data(field) units = fhandle["field_types"][fname].attrs["field_units"] if fi.sampling_type == "particle": # particle data dset = pt_group[fname] dset[:] = grid[field].in_units(units) else: # a field dset = grid_group[fname] dset[:] = grid[field].in_units(units) @contextmanager def _get_backup_file(ds): backup_filename = ds.backup_filename if os.path.exists(backup_filename): # backup file already exists, open it. We use parallel # h5py if it is available if communication_system.communicators[-1].size > 1 and h5py.get_config().mpi: mpi4py_communicator = communication_system.communicators[-1].comm f = h5py.File( backup_filename, mode="r+", driver="mpio", comm=mpi4py_communicator ) else: f = h5py.File(backup_filename, mode="r+") yield f f.close() else: # backup file does not exist, create it with _create_new_gdf( ds, backup_filename, data_author=None, data_comment=None, particle_type_name="dark_matter", ) as f: yield f @contextmanager def _create_new_gdf( ds, gdf_path, data_author=None, data_comment=None, dataset_units=None, particle_type_name="dark_matter", overwrite=False, **kwargs, ): # Make sure we have the absolute path to the file first gdf_path = os.path.abspath(gdf_path) # Is the file already there? If so, are we allowing # overwriting? if os.path.exists(gdf_path) and not overwrite: raise YTGDFAlreadyExists(gdf_path) ### # Create and open the file with h5py. We use parallel # h5py if it is available. 
### if communication_system.communicators[-1].size > 1 and h5py.get_config().mpi: mpi4py_communicator = communication_system.communicators[-1].comm f = h5py.File(gdf_path, mode="w", driver="mpio", comm=mpi4py_communicator) else: f = h5py.File(gdf_path, mode="w") ### # "gridded_data_format" group ### g = f.create_group("gridded_data_format") g.attrs["data_software"] = "yt" g.attrs["data_software_version"] = yt_version if data_author is not None: g.attrs["data_author"] = data_author if data_comment is not None: g.attrs["data_comment"] = data_comment ### # "simulation_parameters" group ### g = f.create_group("simulation_parameters") g.attrs["refine_by"] = ds.refine_by g.attrs["dimensionality"] = ds.dimensionality g.attrs["domain_dimensions"] = ds.domain_dimensions g.attrs["current_time"] = ds.current_time g.attrs["domain_left_edge"] = ds.domain_left_edge g.attrs["domain_right_edge"] = ds.domain_right_edge g.attrs["unique_identifier"] = ds.unique_identifier g.attrs["cosmological_simulation"] = ds.cosmological_simulation # @todo: Where is this in the yt API? g.attrs["num_ghost_zones"] = 0 # @todo: Where is this in the yt API? g.attrs["field_ordering"] = 0 # @todo: not yet supported by yt. g.attrs["boundary_conditions"] = np.array([0, 0, 0, 0, 0, 0], "int32") if ds.cosmological_simulation: g.attrs["current_redshift"] = ds.current_redshift g.attrs["omega_matter"] = ds.omega_matter g.attrs["omega_lambda"] = ds.omega_lambda g.attrs["hubble_constant"] = ds.hubble_constant if dataset_units is None: dataset_units = {} g = f.create_group("dataset_units") for u in ["length", "time", "mass", "velocity", "magnetic"]: unit_name = u + "_unit" if unit_name in dataset_units: value, units = dataset_units[unit_name] else: attr = getattr(ds, unit_name) value = float(attr) units = str(attr.units) d = g.create_dataset(unit_name, data=value) d.attrs["unit"] = units ### # "field_types" group ### g = f.create_group("field_types") ### # "particle_types" group ### g = f.create_group("particle_types") # @todo: Particle type iterator sg = g.create_group(particle_type_name) sg["particle_type_name"] = np.bytes_(particle_type_name) ### # root datasets -- info about the grids ### f["grid_dimensions"] = ds.index.grid_dimensions f["grid_left_index"] = np.array( [grid.get_global_startindex() for grid in ds.index.grids] ).reshape(ds.index.grid_dimensions.shape[0], 3) f["grid_level"] = ds.index.grid_levels.flat # @todo: Fill with proper values f["grid_parent_id"] = -np.ones(ds.index.grid_dimensions.shape[0]) f["grid_particle_count"] = ds.index.grid_particle_count ### # "data" group -- where we should spend the most time ### g = f.create_group("data") for grid in ds.index.grids: # add group for this grid grid_group = g.create_group("grid_%010i" % (grid.id - grid._id_offset)) # add group for the particles on this grid particles_group = grid_group.create_group("particles") particles_group.create_group(particle_type_name) yield f # close the file when done f.close() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/hierarchy_inspection.py0000644000175100001770000000265414714401662021026 0ustar00runnerdockerimport inspect from collections import Counter from typing import TypeVar from more_itertools import flatten from yt.data_objects.static_output import Dataset from yt.utilities.object_registries import output_type_registry T = TypeVar("T") def find_lowest_subclasses(candidates: list[type[T]]) -> list[type[T]]: """ This function takes a list of classes, and returns 
only the ones that are are not super classes of any others in the list. i.e. the ones that are at the bottom of the specified mro of classes. Parameters ---------- candidates : Iterable An iterable object that is a collection of classes to find the lowest subclass of. Returns ------- result : list A list of classes which are not super classes for any others in candidates. """ count = Counter(flatten(inspect.getmro(c) for c in candidates)) return [x for x in candidates if count[x] == 1] def get_classes_with_missing_requirements() -> dict[type[Dataset], list[str]]: # We need a function here rather than an global constant registry because: # - computation should be delayed until needed so that the result is independent of import order # - tests might (temporarily) mutate output_type_registry return { cls: missing for cls in sorted(output_type_registry.values(), key=lambda c: c.__name__) if (missing := cls._missing_load_requirements()) } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/initial_conditions.py0000644000175100001770000000667614714401662020507 0ustar00runnerdockerimport numpy as np class FluidOperator: def apply(self, ds): for g in ds.index.grids: self(g) class TopHatSphere(FluidOperator): def __init__(self, radius, center, fields): self.radius = radius self.center = center self.fields = fields def __call__(self, grid, sub_select=None): r = np.zeros(grid.ActiveDimensions, dtype="float64") for i, ax in enumerate("xyz"): np.add(r, (grid[ax].to_ndarray() - self.center[i]) ** 2.0, r) np.sqrt(r, r) ind = r <= self.radius if sub_select is not None: ind &= sub_select for field, val in self.fields.items(): grid[field][ind] = val class CoredSphere(FluidOperator): def __init__(self, core_radius, radius, center, fields): self.radius = radius self.center = center self.fields = fields self.core_radius = core_radius def __call__(self, grid, sub_select=None): r = np.zeros(grid.ActiveDimensions, dtype="float64") r2 = self.radius**2 cr2 = self.core_radius**2 for i, ax in enumerate("xyz"): np.add(r, (grid[ax] - self.center[i]) ** 2.0, r) np.maximum(r, cr2, r) ind = r <= r2 if sub_select is not None: ind &= sub_select for field, (outer_val, inner_val) in self.fields.items(): val = ((r[ind] - cr2) / (r2 - cr2)) ** 0.5 * (outer_val - inner_val) grid[field][ind] = val + inner_val class BetaModelSphere(FluidOperator): def __init__(self, beta, core_radius, radius, center, fields): self.radius = radius self.center = center self.fields = fields self.core_radius = core_radius self.beta = beta def __call__(self, grid, sub_select=None): r = np.zeros(grid.ActiveDimensions, dtype="float64") r2 = self.radius**2 cr2 = self.core_radius**2 for i, ax in enumerate("xyz"): np.add(r, (grid[ax].ndarray_view() - self.center[i]) ** 2.0, r) ind = r <= r2 if sub_select is not None: ind &= sub_select for field, core_val in self.fields.items(): val = core_val * (1.0 + r[ind] / cr2) ** (-1.5 * self.beta) grid[field][ind] = val class NFWModelSphere(FluidOperator): def __init__(self, scale_radius, radius, center, fields): self.radius = radius self.center = center self.fields = fields self.scale_radius = scale_radius # unitless def __call__(self, grid, sub_select=None): r = np.zeros(grid.ActiveDimensions, dtype="float64") for i, ax in enumerate("xyz"): np.add(r, (grid[ax].ndarray_view() - self.center[i]) ** 2.0, r) np.sqrt(r, r) ind = r <= self.radius r /= self.scale_radius if sub_select is not None: ind &= sub_select for field, scale_val in self.fields.items(): 
val = scale_val / (r[ind] * (1.0 + r[ind]) ** 2) grid[field][ind] = val class RandomFluctuation(FluidOperator): def __init__(self, fields): self.fields = fields def __call__(self, grid, sub_select=None): if sub_select is None: sub_select = Ellipsis rng = np.random.default_rng() for field, mag in self.fields.items(): vals = grid[field][sub_select] rc = 1.0 + (rng.random(vals.shape) - 0.5) * mag grid[field][sub_select] *= rc ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/io_handler.py0000644000175100001770000002346214714401662016721 0ustar00runnerdockerimport os from collections import defaultdict from collections.abc import Iterator, Mapping from contextlib import contextmanager from functools import _make_key, lru_cache import numpy as np from yt._typing import FieldKey, ParticleCoordinateTuple from yt.geometry.selection_routines import GridSelector from yt.utilities.on_demand_imports import _h5py as h5py io_registry = {} use_caching = 0 def _make_io_key(args, *_args, **kwargs): self, obj, field, ctx = args # Ignore self because we have a self-specific cache return _make_key((obj.id, field), *_args, **kwargs) class BaseIOHandler: _vector_fields: dict[str, int] = {} _dataset_type: str _particle_reader = False _cache_on = False _misses = 0 _hits = 0 def __init_subclass__(cls, *args, **kwargs): super().__init_subclass__(*args, **kwargs) if hasattr(cls, "_dataset_type"): io_registry[cls._dataset_type] = cls if use_caching and hasattr(cls, "_read_obj_field"): cls._read_obj_field = lru_cache( maxsize=use_caching, typed=True, make_key=_make_io_key )(cls._read_obj_field) def __init__(self, ds): self.queue = defaultdict(dict) self.ds = ds self._last_selector_id = None self._last_selector_counts = None self._array_fields = {} self._cached_fields = {} # We need a function for reading a list of sets # and a function for *popping* from a queue all the appropriate sets @contextmanager def preload(self, chunk, fields: list[FieldKey], max_size): yield self def peek(self, grid, field): return self.queue[grid.id].get(field, None) def push(self, grid, field, data): if grid.id in self.queue and field in self.queue[grid.id]: raise ValueError self.queue[grid][field] = data def _field_in_backup(self, grid, backup_file, field_name): if os.path.exists(backup_file): fhandle = h5py.File(backup_file, mode="r") g = fhandle["data"] grid_group = g["grid_%010i" % (grid.id - grid._id_offset)] if field_name in grid_group: return_val = True else: return_val = False fhandle.close() return return_val else: return False def _read_data_set(self, grid, field): # check backup file first. if field not found, # call frontend-specific io method backup_filename = grid.ds.backup_filename if not os.path.exists(backup_filename): return self._read_data(grid, field) elif self._field_in_backup(grid, backup_filename, field): fhandle = h5py.File(backup_filename, mode="r") g = fhandle["data"] grid_group = g["grid_%010i" % (grid.id - grid._id_offset)] data = grid_group[field][:] fhandle.close() return data else: return self._read_data(grid, field) # Now we define our interface def _read_data(self, grid, field): pass def _read_fluid_selection( self, chunks, selector, fields: list[FieldKey], size ) -> Mapping[FieldKey, np.ndarray]: # This function has an interesting history. It previously was mandate # to be defined by all of the subclasses. 
        # But, to avoid having to rewrite a whole bunch of IO handlers all at
        # once, and to allow a better abstraction for grid-based frontends,
        # we're now defining it in the base class.
        rv = {}
        nodal_fields = []
        for field in fields:
            finfo = self.ds.field_info[field]
            nodal_flag = finfo.nodal_flag
            if np.any(nodal_flag):
                num_nodes = 2 ** sum(nodal_flag)
                rv[field] = np.empty((size, num_nodes), dtype="=f8")
                nodal_fields.append(field)
            else:
                rv[field] = np.empty(size, dtype="=f8")
        ind = {field: 0 for field in fields}
        for field, obj, data in self.io_iter(chunks, fields):
            if data is None:
                continue
            if isinstance(selector, GridSelector) and field not in nodal_fields:
                ind[field] += data.size
                rv[field] = data.copy()
            else:
                ind[field] += obj.select(selector, data, rv[field], ind[field])
        return rv

    def io_iter(self, chunks, fields: list[FieldKey]):
        raise NotImplementedError(
            "Subclasses must implement io_iter in order to use the default "
            "implementation of Dataset._read_fluid_selection. "
            "Custom implementations of the latter may not rely on this method."
        )

    def _read_data_slice(self, grid, field, axis, coord):
        sl = [slice(None), slice(None), slice(None)]
        sl[axis] = slice(coord, coord + 1)
        tr = self._read_data_set(grid, field)[tuple(sl)]
        if tr.dtype == "float32":
            tr = tr.astype("float64")
        return tr

    def _read_field_names(self, grid):
        pass

    @property
    def _read_exception(self):
        return None

    def _read_chunk_data(self, chunk, fields):
        return {}

    def _read_particle_coords(
        self, chunks, ptf: defaultdict[str, list[str]]
    ) -> Iterator[ParticleCoordinateTuple]:
        # An iterator that yields particle coordinates for each chunk by
        # particle type. Must be implemented by each frontend. Must yield a
        # tuple of (particle type, xyz, hsml) by chunk. If the frontend does
        # not have a smoothing length, yield (particle type, xyz, 0.0)
        raise NotImplementedError

    def _read_particle_data_file(self, data_file, ptf, selector=None):
        # each frontend needs to implement this: read from a data_file object
        # and return a dict of fields for that data_file
        raise NotImplementedError

    def _read_particle_selection(
        self, chunks, selector, fields: list[FieldKey]
    ) -> dict[FieldKey, np.ndarray]:
        data: dict[FieldKey, list[np.ndarray]] = {}
        # Initialize containers for tracking particle, field information
        # ptf (particle field types) maps particle type to list of on-disk
        # fields to read
        # field_maps stores fields, accounting for field unions
        ptf: defaultdict[str, list[str]] = defaultdict(list)
        field_maps: defaultdict[FieldKey, list[FieldKey]] = defaultdict(list)
        # We first need a set of masks for each particle type
        chunks = list(chunks)
        unions = self.ds.particle_unions
        # What we need is a mapping from particle types to return types
        for field in fields:
            ftype, fname = field
            # We should add a check for p.fparticle_unions or something here
            if ftype in unions:
                for pt in unions[ftype]:
                    ptf[pt].append(fname)
                    field_maps[pt, fname].append(field)
            else:
                ptf[ftype].append(fname)
                field_maps[field].append(field)
            data[field] = []
        # Now we read.
        for field_r, vals in self._read_particle_fields(chunks, ptf, selector):
            # Note that we now need to check the mappings
            for field_f in field_maps[field_r]:
                data[field_f].append(vals)
        rv: dict[FieldKey, np.ndarray] = {}  # the return dictionary
        fields = list(data.keys())
        for field_f in fields:
            # We need to ensure the arrays have the right shape if there are no
            # particles in them.
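            # (For a vector field, _vector_fields maps the field name to its
            # per-particle width, so the empty case below produces shape
            # (0, width) instead of (0,); _array_fields works the same way
            # for fixed-size array fields.)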
total = sum(_.size for _ in data[field_f]) if total > 0: vals = data.pop(field_f) # note: numpy.concatenate has a dtype argument that would avoid # a copy using .astype(...), available in numpy>=1.20 rv[field_f] = np.concatenate(vals, axis=0).astype("float64") else: shape = [0] if field_f[1] in self._vector_fields: shape.append(self._vector_fields[field_f[1]]) elif field_f[1] in self._array_fields: shape.append(self._array_fields[field_f[1]]) rv[field_f] = np.empty(shape, dtype="float64") return rv def _read_particle_fields(self, chunks, ptf, selector): # Now we have all the sizes, and we can allocate for data_file in self._sorted_chunk_iterator(chunks): data_file_data = self._read_particle_data_file(data_file, ptf, selector) # temporary trickery so it's still an iterator, need to adjust # the io_handler.BaseIOHandler.read_particle_selection() method # to not use an iterator. yield from data_file_data.items() @staticmethod def _get_data_files(chunks): chunks = list(chunks) data_files = set() for chunk in chunks: for obj in chunk.objs: data_files.update(obj.data_files) return data_files def _sorted_chunk_iterator(self, chunks): data_files = self._get_data_files(chunks) yield from sorted(data_files, key=lambda x: (x.filename, x.start)) # As a note: we don't *actually* want this to be how it is forever. There's no # reason we need to have the fluid and particle IO handlers separated. But, # for keeping track of which frontend is which, this is a useful abstraction. class BaseParticleIOHandler(BaseIOHandler): pass class IOHandlerExtracted(BaseIOHandler): _dataset_type = "extracted" def _read_data_set(self, grid, field): return grid.base_grid[field] / grid.base_grid.convert(field) def _read_data_slice(self, grid, field, axis, coord): sl = [slice(None), slice(None), slice(None)] sl[axis] = slice(coord, coord + 1) return grid.base_grid[field][tuple(sl)] / grid.base_grid.convert(field) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3951538 yt-4.4.0/yt/utilities/lib/0000755000175100001770000000000014714401715015001 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/__init__.py0000644000175100001770000000002514714401662017110 0ustar00runnerdocker"""Blank __init__""" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/_octree_raytracing.hpp0000644000175100001770000003005114714401662021355 0ustar00runnerdocker#include #include #include #include #include #include #include #include #include #include typedef double F; const int Ndim = 3; /* A simple node struct that contains a key and a fixed number of children, typically Nchildren = 2**Ndim */ template struct GenericNode { // Tree data GenericNode** children = nullptr; GenericNode* parent = nullptr; // Node data keyType key; int level = 0; bool terminal = false; int index = -1; }; template struct RayInfo { std::vector keys; std::vector t; RayInfo() {}; RayInfo(int N) { if (N > 0) { keys.reserve(N); t.reserve(2*N); } } }; struct Ray { std::array o; // Origin std::array d; // Direction F tmin = -1e99; F tmax = 1e99; Ray(const F* _o, const F* _d, const F _tmin, const F _tmax) : tmin(_tmin), tmax(_tmax) { for (auto idim = 0; idim < Ndim; ++idim) { o[idim] = _o[idim]; } F dd = 0; for (auto idim = 0; idim < Ndim; ++idim) { dd += _d[idim] * _d[idim]; } dd = sqrt(dd); for (auto idim = 0; idim < Ndim; ++idim) { d[idim] = _d[idim] / dd; } }; Ray() {}; }; /* 
Converts an array of integer position into a flattened index. The fast varying index is the last one. */ inline unsigned char ijk2iflat(const std::array ijk) { unsigned char iflat = 0; for (auto i : ijk) { iflat += i; iflat <<= 1; } return iflat >> 1; }; /* Converts a flattened index into an array of integer position. The fast varying index is the last one. */ inline std::array iflat2ijk(unsigned char iflat) { std::array ijk; for (auto idim = Ndim-1; idim >= 0; --idim) { ijk[idim] = iflat & 0b1; iflat >>= 1; } return ijk; }; /* A class to build an octree and cast rays through it. */ template class Octree { typedef GenericNode Node; typedef std::vector keyVector; typedef std::array Pos; typedef std::array iPos; typedef std::array ucPos; private: const unsigned char twotondim; const int maxDepth; Pos size; Pos DLE; // Domain left edge Pos DRE; // Domain right edge Node* root; int global_index = 0; public: Octree(int _maxDepth, F* _DLE, F* _DRE) : twotondim (1<children = (Node**) malloc(sizeof(Node*)*twotondim); for (auto i = 0; i < twotondim; ++i) root->children = nullptr; } ~Octree() { recursive_remove_node(root); }; /* Insert a new node in the tree. */ Node* insert_node(const iPos ipos, const int lvl, keyType key) { assert(lvl <= maxDepth); // std::cerr << "Inserting at level: " << lvl << "/" << maxDepth << std::endl; // this is 0b100..., where the 1 is at position maxDepth uint64_t mask = 1<<(maxDepth - 1); iPos ijk = ipos; std::array bitMask; Node* node = root; Node* child = nullptr; // Go down the tree for (auto ibit = maxDepth-1; ibit >= maxDepth - lvl; --ibit) { // Find children based on bits for (auto idim = 0; idim < Ndim; ++idim) { bitMask[idim] = ijk[idim] & mask; } mask >>= 1; auto iflat = ijk2iflat(bitMask); // Create child if it does not exist yet child = create_get_node(node, iflat); node = child; } // Mark last node as terminal node->terminal = true; node->key = key; return node; } Node* insert_node(const int* ipos, const int lvl, keyType key) { std::array ipos_as_arr; for (auto idim = 0; idim < Ndim; ++idim) ipos_as_arr[idim] = ipos[idim]; return insert_node(ipos_as_arr, lvl, key); } void insert_node_no_ret(const int* ipos, const int lvl, keyType key) { insert_node(ipos, lvl, key); } // Perform multiple ray cast RayInfo** cast_rays(const F *origins, const F *directions, const int Nrays) { RayInfo **ray_infos = (RayInfo**) malloc(sizeof(RayInfo*)*Nrays); int Nfound = 0; #pragma omp parallel for shared(ray_infos, Nfound) schedule(static, 100) for (auto i = 0; i < Nrays; ++i) { std::vector tList; ray_infos[i] = new RayInfo(Nfound); auto ri = ray_infos[i]; Ray r(&origins[3*i], &directions[3*i], -1e99, 1e99); cast_ray(&r, ri->keys, ri->t); // Keep track of the number of cells hit to preallocate the next ray info container Nfound = std::max(Nfound, (int) ri->keys.size()); } return ray_infos; } // Perform single ray tracing void cast_ray(Ray *r, keyVector &keyList, std::vector &tList) { // Boolean mask for direction unsigned char a = 0; unsigned char bmask = twotondim >> 1; // Put ray in positive direction and store info in bitmask "a" for (auto idim = 0; idim < Ndim; ++idim) { if (r->d[idim] < 0.0) { r->o[idim] = size[idim]-r->o[idim]; r->d[idim] = -r->d[idim]; a |= bmask; } bmask >>= 1; } // Compute intersection points Pos t0, t1; for (auto idim = 0; idim < Ndim; ++idim){ t0[idim] = (DLE[idim] - r->o[idim]) / r->d[idim]; t1[idim] = (DRE[idim] - r->o[idim]) / r->d[idim]; } // If entry point is smaller than exit point, find path in octree if (*std::max_element(t0.begin(), t0.end()) 
< *std::min_element(t1.begin(), t1.end())) proc_subtree(t0[0], t0[1], t0[2], t1[0], t1[1], t1[2], root, a, keyList, tList); } void cast_ray(double* o, double* d, keyVector &keyList, std::vector &tList) { Ray r(o, d, -1e99, 1e99); cast_ray(&r, keyList, tList); } private: /* Upsert a node as a child of another. This will create a new node as a child of the current one, or return an existing one if it already exists */ Node* create_get_node(Node* parent, const int iflat) { // Create children if not already existing if (parent->children == nullptr) { parent->children = (Node**) malloc(sizeof(Node*)*twotondim); for (auto i = 0; i < twotondim; ++i) parent->children[i] = nullptr; } if (parent->children[iflat] == nullptr) { Node* node = new Node(); node->level = parent->level + 1; node->index = global_index; node->parent = parent; ++global_index; parent->children[iflat] = node; } return parent->children[iflat]; } /* Recursively free memory. */ void recursive_remove_node(Node* node) { if (node->children) { for (auto i = 0; i < twotondim; ++i) { auto child = node->children[i]; if (child) { recursive_remove_node(child); } delete child; } free(node->children); } } /* Traverse the tree, assuming that the ray intersects From http://wscg.zcu.cz/wscg2000/Papers_2000/X31.pdf */ void proc_subtree(const F tx0, const F ty0, const F tz0, const F tx1, const F ty1, const F tz1, const Node *n, const unsigned char a, keyVector &keyList, std::vector &tList, int lvl=0) { // Check if exit face is not in our back if (tx1 < 0 || ty1 < 0 || tz1 < 0) return; // Exit if the node is null (happens if it hasn't been added to the tree) if (!n) return; // Process leaf node if (n->terminal) { keyList.push_back(n->key); // Push entry & exit t tList.push_back(std::max(std::max(tx0, ty0), tz0)); tList.push_back(std::min(std::min(tx1, ty1), tz1)); assert(n->children == nullptr); return; } // Early break for leafs without children that aren't terminal if (n->children == nullptr) return; // Compute middle intersection F txm, tym, tzm; txm = (tx0 + tx1) * 0.5; tym = (ty0 + ty1) * 0.5; tzm = (tz0 + tz1) * 0.5; unsigned char iNode = first_node(tx0, ty0, tz0, txm, tym, tzm); // Iterate over children do { switch (iNode) { case 0: proc_subtree(tx0, ty0, tz0, txm, tym, tzm, n->children[a], a, keyList, tList, lvl+1); iNode = next_node(txm, tym, tzm, 4, 2, 1); break; case 1: proc_subtree(tx0, ty0, tzm, txm, tym, tz1, n->children[1^a], a, keyList, tList, lvl+1); iNode = next_node(txm, tym, tz1, 5, 3, 8); break; case 2: proc_subtree(tx0, tym, tz0, txm, ty1, tzm, n->children[2^a], a, keyList, tList, lvl+1); iNode = next_node(txm, ty1, tzm, 6, 8, 3); break; case 3: proc_subtree(tx0, tym, tzm, txm, ty1, tz1, n->children[3^a], a, keyList, tList, lvl+1); iNode = next_node(txm, ty1, tz1, 7, 8, 8); break; case 4: proc_subtree(txm, ty0, tz0, tx1, tym, tzm, n->children[4^a], a, keyList, tList, lvl+1); iNode = next_node(tx1, tym, tzm, 8, 6, 5); break; case 5: proc_subtree(txm, ty0, tzm, tx1, tym, tz1, n->children[5^a], a, keyList, tList, lvl+1); iNode = next_node(tx1, tym, tz1, 8, 7, 8); break; case 6: proc_subtree(txm, tym, tz0, tx1, ty1, tzm, n->children[6^a], a, keyList, tList, lvl+1); iNode = next_node(tx1, ty1, tzm, 8, 8, 7); break; case 7: proc_subtree(txm, tym, tzm, tx1, ty1, tz1, n->children[7^a], a, keyList, tList, lvl+1); iNode = 8; break; } } while (iNode < twotondim); } // From "An Efficient Parametric Algorithm for Octree Traversal" by Revelles, Urena, & Lastra inline unsigned char first_node(const F tx0, const F ty0, const F tz0, const F 
txm, const F tym, const F tzm) {
    unsigned char index = 0;
    if (tx0 >= std::max(ty0, tz0)) { // enters YZ plane
      if (tym < tx0) index |= 0b010;
      if (tzm < tx0) index |= 0b001;
    } else if (ty0 >= std::max(tx0, tz0)) { // enters XZ plane
      if (txm < ty0) index |= 0b100;
      if (tzm < ty0) index |= 0b001;
    } else { // enters XY plane
      if (txm < tz0) index |= 0b100;
      if (tym < tz0) index |= 0b010;
    }
    return index;
  }

  // From "An Efficient Parametric Algorithm for Octree Traversal" by Revelles, Urena, & Lastra
  inline unsigned char next_node(const F tx, const F ty, const F tz,
                                 const uint8_t ix, const uint8_t iy, const uint8_t iz) {
    if(tx < std::min(ty, tz)) {         // YZ plane
      return ix;
    } else if (ty < std::min(tx, tz)) { // XZ plane
      return iy;
    } else {                            // XY plane
      return iz;
    }
  }
};
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/_octree_raytracing.pxd0000644000175100001770000000174114714401662021365 0ustar00runnerdocker
"""This is a wrapper around the C++ class to efficiently cast rays into an
octree. It relies on the seminal paper by J. Revelles, C. Ureña and M. Lastra.
"""

cimport numpy as np

import numpy as np

cimport cython
from libcpp.vector cimport vector

from cython.parallel import parallel, prange

from libc.stdlib cimport free, malloc

from .grid_traversal cimport sampler_function
from .image_samplers cimport ImageAccumulator, ImageSampler
from .partitioned_grid cimport PartitionedGrid
from .volume_container cimport VolumeContainer


cdef extern from "_octree_raytracing.hpp":
    cdef cppclass RayInfo[T]:
        vector[T] keys
        vector[double] t

    cdef cppclass Octree[T] nogil:
        Octree(int depth, double* LE, double* RE)
        void insert_node_no_ret(const int* ipos, const int lvl, T key)
        void cast_ray(double* origins, double* directions, vector[T] keyList, vector[double] tList)


cdef class _OctreeRayTracing:
    cdef Octree[int]* oct
    cdef int depth
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/_octree_raytracing.pyx0000644000175100001770000000177214714401662021416 0ustar00runnerdocker
# distutils: language = c++
# distutils: extra_compile_args = CPP14_FLAG OMP_ARGS
"""This is a wrapper around the C++ class to efficiently cast rays into an
octree. It relies on the seminal paper by J. Revelles, C. Ureña and M. Lastra.
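A minimal usage sketch (hypothetical values; the integer keys are whatever
cell identifiers the caller wants back from cast_ray):

    import numpy as np
    LE, RE = np.zeros(3), np.ones(3)
    tracer = _OctreeRayTracing(LE, RE, 3)        # depth-3 tree on [0, 1]^3
    ipos = np.array([[0, 0, 0]], dtype='int32')  # integer position at that depth
    level = np.array([3], dtype='int32')
    keys = np.array([0], dtype='int32')
    tracer.add_nodes(ipos, level, keys)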
""" cimport numpy as np import numpy as np cimport cython cdef class _OctreeRayTracing: def __init__(self, np.ndarray LE, np.ndarray RE, int depth): cdef double* LE_ptr = LE.data cdef double* RE_ptr = RE.data self.oct = new Octree[int](depth, LE_ptr, RE_ptr) self.depth = depth @cython.boundscheck(False) @cython.wraparound(False) def add_nodes(self, int[:, :] ipos_view, int[:] lvl_view, int[:] key): cdef int i cdef int ii[3] for i in range(len(key)): ii[0] = ipos_view[i, 0] ii[1] = ipos_view[i, 1] ii[2] = ipos_view[i, 2] self.oct.insert_node_no_ret(ii, lvl_view[i], key[i]) def __dealloc__(self): del self.oct ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/allocation_container.pxd0000644000175100001770000000134014714401662021704 0ustar00runnerdocker""" An allocation container and memory pool """ cimport numpy as np from libc.stdlib cimport free, malloc, realloc cdef struct AllocationContainer: np.uint64_t n np.uint64_t n_assigned np.uint64_t offset np.int64_t con_id # container id void *my_objs cdef class ObjectPool: cdef public np.uint64_t itemsize cdef np.uint64_t n_con cdef AllocationContainer* containers cdef void allocate_objs(self, int n_objs, np.int64_t con_id = ?) except * cdef void setup_objs(self, void *obj, np.uint64_t count, np.uint64_t offset, np.int64_t con_id) cdef void teardown_objs(self, void *obj, np.uint64_t n, np.uint64_t offset, np.int64_t con_id) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/allocation_container.pyx0000644000175100001770000001026114714401662021733 0ustar00runnerdocker # distutils: libraries = STD_LIBS """ An allocation container and memory pool """ cimport numpy as np import numpy as np cdef class ObjectPool: def __cinit__(self): """This class is *not* meant to be initialized directly, but instead through subclasses. Those subclasses need to implement at a minimum the setting of itemsize, but can optionally also implement setup_objs and teardown_objs to either set default values or initialize additional pointers and values, and then free them.""" self.containers = NULL self.n_con = 0 self.itemsize = -1 # Never use the base class def __dealloc__(self): cdef int i cdef AllocationContainer *obj for i in range(self.n_con): obj = &self.containers[i] self.teardown_objs(obj.my_objs, obj.n, obj.offset, obj.con_id) if self.containers != NULL: free(self.containers) def __getitem__(self, int i): """This returns an array view (if possible and implemented in a subclass) on the pool of objects specified by i.""" return self._con_to_array(i) def __len__(self): return self.n_con def append(self, int n_objs, np.int64_t con_id = -1): """This allocates a new batch of n_objs objects, with the container id specified as con_id. 
Return value is a view on them.""" self.allocate_objs(n_objs, con_id) return self[self.n_con - 1] cdef void allocate_objs(self, int n_objs, np.int64_t con_id = -1) except *: cdef AllocationContainer *n_cont cdef AllocationContainer *prev self.containers = realloc( self.containers, sizeof(AllocationContainer) * (self.n_con + 1)) n_cont = &self.containers[self.n_con] if self.n_con == 0: n_cont.offset = 0 else: prev = &self.containers[self.n_con - 1] n_cont.offset = prev.offset + prev.n self.n_con += 1 n_cont.my_objs = malloc(self.itemsize * n_objs) if n_cont.my_objs == NULL: raise MemoryError n_cont.n = n_objs n_cont.n_assigned = 0 n_cont.con_id = con_id self.setup_objs(n_cont.my_objs, n_objs, n_cont.offset, n_cont.con_id) cdef void setup_objs(self, void *obj, np.uint64_t count, np.uint64_t offset, np.int64_t con_id): """This can be overridden in the subclass, where it can initialize any of the allocated objects.""" pass cdef void teardown_objs(self, void *obj, np.uint64_t n, np.uint64_t offset, np.int64_t con_id): # We assume that these are all allocated and have no sub-allocations """If overridden, additional behavior can be provided during teardown of allocations. For instance, if you allocate some memory on each of the allocated objects.""" if obj != NULL: free(obj) def to_arrays(self): rv = [] cdef int i for i in range(self.n_con): rv.append(self._con_to_array(i)) return rv def _con_to_array(self, int i): """This has to be implemented in a subclass, and should return an appropriate np.asarray() of a memoryview of self.my_objs.""" raise NotImplementedError cdef class BitmaskPool(ObjectPool): def __cinit__(self): """This is an example implementation of object pools for bitmasks (uint8) arrays. It lets you reasonably quickly allocate a set of uint8 arrays which can be accessed and modified, and then virtually append to that.""" # Base class will ALSO be called self.itemsize = sizeof(np.uint8_t) cdef void setup_objs(self, void *obj, np.uint64_t n, np.uint64_t offset, np.int64_t con_id): cdef np.uint64_t i cdef np.uint8_t *mask = obj for i in range(n): mask[i] = 0 def _con_to_array(self, int i): cdef AllocationContainer *obj = &self.containers[i] cdef np.uint8_t[:] o = ( obj.my_objs) rv = np.asarray(o) return rv ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/alt_ray_tracers.pyx0000644000175100001770000002004714714401662020725 0ustar00runnerdocker# distutils: libraries = STD_LIBS """ """ import numpy as np cimport cython cimport libc.math as math cimport numpy as np @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def _cyl2cart(np.ndarray[np.float64_t, ndim=2] x): """Converts points in cylindrical coordinates to cartesian points.""" # NOTE this should be removed once the coord interface comes online cdef int i, I cdef np.ndarray[np.float64_t, ndim=2] xcart I = x.shape[0] xcart = np.empty((I, x.shape[1]), dtype='float64') for i in range(I): xcart[i,0] = x[i,0] * math.cos(x[i,2]) xcart[i,1] = x[i,0] * math.sin(x[i,2]) xcart[i,2] = x[i,1] return xcart @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def _cart_intersect(np.ndarray[np.float64_t, ndim=1] a, np.ndarray[np.float64_t, ndim=1] b, np.ndarray[np.float64_t, ndim=2] c, np.ndarray[np.float64_t, ndim=2] d): """Finds the times and locations of the lines defined by a->b and c->d in cartesian space. a and b must be 1d points. c and d must be 2d arrays of equal shape whose second dim is the same length as a and b. 
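For example, a ray from a=(0,0,0) to b=(1,0,0) crossed by a segment from
c=(0.5,-1,0) to d=(0.5,1,0) gives t[0]=0.5 and loc[0]=(0.5,0,0).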
""" cdef int i, I cdef np.ndarray[np.float64_t, ndim=1] r, s, t cdef np.ndarray[np.float64_t, ndim=2] loc I = c.shape[0] shape = (I, c.shape[1]) t = np.empty(I, dtype='float64') loc = np.empty(shape, dtype='float64') r = b - a for i in range(I): s = d[i] - c[i] t[i] = (np.cross(c[i] - a, s).sum()) / (np.cross(r, s).sum()) loc[i] = a + t[i]*r return t, loc @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def cylindrical_ray_trace(np.ndarray[np.float64_t, ndim=1] p1, np.ndarray[np.float64_t, ndim=1] p2, np.ndarray[np.float64_t, ndim=2] left_edges, np.ndarray[np.float64_t, ndim=2] right_edges): """Computes straight (cartesian) rays being traced through a cylindrical geometry. Parameters ---------- p1 : length 3 float ndarray start point for ray p2 : length 3 float ndarray stop point for ray left_edges : 2d ndarray left edges of grid cells right_edges : 2d ndarray right edges of grid cells Returns ------- t : 1d float ndarray ray parametric time on range [0,1] s : 1d float ndarray ray parametric distance on range [0,len(ray)] rztheta : 2d float ndarray ray grid cell intersections in cylindrical coordinates inds : 1d int ndarray indexes into the grid cells which the ray crosses in order. """ cdef np.int64_t i, I cdef np.float64_t a, b, bsqrd, twoa cdef np.ndarray[np.float64_t, ndim=1] p1cart, p2cart, dpcart, t, s, \ rleft, rright, zleft, zright, \ cleft, cright, thetaleft, thetaright, \ tmleft, tpleft, tmright, tpright, tsect cdef np.ndarray[np.int64_t, ndim=1, cast=True] inds, tinds, sinds cdef np.ndarray[np.float64_t, ndim=2] xyz, rztheta, ptemp, b1, b2, dsect # set up points ptemp = np.array([p1, p2]) ptemp = _cyl2cart(ptemp) p1cart = ptemp[0] p2cart = ptemp[1] dpcart = p2cart - p1cart # set up components rleft = left_edges[:,0] rright = right_edges[:,0] zleft = left_edges[:,1] zright = right_edges[:,1] a = dpcart[0] * dpcart[0] + dpcart[1] * dpcart[1] b = 2 * dpcart[0] * p1cart[0] + 2 * dpcart[1] * p1cart[1] cleft = p1cart[0] * p1cart[0] + p1cart[1] * p1cart[1] - rleft * rleft cright = p1cart[0] * p1cart[0] + p1cart[1] * p1cart[1] - rright * rright twoa = 2 * a bsqrd = b * b # Compute positive and negative times and associated masks I = np.intp(left_edges.shape[0]) tmleft = np.empty(I, dtype='float64') tpleft = np.empty(I, dtype='float64') tmright = np.empty(I, dtype='float64') tpright = np.empty(I, dtype='float64') for i in range(I): tmleft[i] = (-b - math.sqrt(bsqrd - 4*a*cleft[i])) / twoa tpleft[i] = (-b + math.sqrt(bsqrd - 4*a*cleft[i])) / twoa tmright[i] = (-b - math.sqrt(bsqrd - 4*a*cright[i])) / twoa tpright[i] = (-b + math.sqrt(bsqrd - 4*a*cright[i])) / twoa tmmright = np.logical_and(~np.isnan(tmright), rright <= p1[0]) tpmright = np.logical_and(~np.isnan(tpright), rright <= p2[0]) tmmleft = np.logical_and(~np.isnan(tmleft), rleft <= p1[0]) tpmleft = np.logical_and(~np.isnan(tpleft), rleft <= p2[0]) # compute first cut of indexes and thetas, which # have been filtered by those values for which intersection # times are impossible (see above masks). Note that this is # still independent of z. 
inds = np.unique(np.concatenate([np.argwhere(tmmleft).flat, np.argwhere(tpmleft).flat, np.argwhere(tmmright).flat, np.argwhere(tpmright).flat,])) if 0 == inds.shape[0]: inds = np.arange(np.intp(I)) thetaleft = np.empty(I) thetaleft.fill(p1[2]) thetaright = np.empty(I) thetaright.fill(p2[2]) else: rleft = rleft[inds] rright = rright[inds] zleft = zleft[inds] zright = zright[inds] thetaleft = np.arctan2((p1cart[1] + tmleft[inds]*dpcart[1]), (p1cart[0] + tmleft[inds]*dpcart[0])) nans = np.isnan(thetaleft) thetaleft[nans] = np.arctan2((p1cart[1] + tpleft[inds[nans]]*dpcart[1]), (p1cart[0] + tpleft[inds[nans]]*dpcart[0])) thetaright = np.arctan2((p1cart[1] + tmright[inds]*dpcart[1]), (p1cart[0] + tmright[inds]*dpcart[0])) nans = np.isnan(thetaright) thetaright[nans] = np.arctan2((p1cart[1] + tpright[inds[nans]]*dpcart[1]), (p1cart[0] + tpright[inds[nans]]*dpcart[0])) # Set up the cell boundary arrays b1 = np.concatenate([[rleft, zright, thetaleft], [rleft, zleft, thetaleft], [rleft, zleft, thetaleft], [rright, zleft, thetaright], [rleft, zleft, thetaleft], [rright, zright, thetaleft], [rleft, zright, thetaleft], [rright, zleft, thetaleft], ], axis=1).T b2 = np.concatenate([[rright, zright, thetaright], [rright, zleft, thetaright], [rleft, zright, thetaleft], [rright, zright, thetaright], [rleft, zleft, thetaright], [rright, zright, thetaright], [rleft, zright, thetaright], [rright, zleft, thetaright], ], axis=1).T inds = np.concatenate([inds, inds, inds, inds, inds, inds, inds, inds]) # find intersections and compute return values tsect, dsect = _cart_intersect(p1cart, p2cart, _cyl2cart(b1), _cyl2cart(b2)) tmask = np.logical_and(0.0<=tsect, tsect<=1.0) ret = np.unique(tsect[tmask], return_index=True) tsect, tinds = ret[0], ret[1].astype('int64') inds = inds[tmask][tinds] xyz = dsect[tmask][tinds] s = np.sqrt(((xyz - p1cart) * (xyz - p1cart)).sum(axis=1)) ret = np.unique(s, return_index=True) s, sinds = ret[0], ret[1].astype('int64') inds = inds[sinds] xyz = xyz[sinds] t = s/np.sqrt((dpcart*dpcart).sum()) sinds = s.argsort() s = s[sinds] t = t[sinds] inds = inds[sinds] xyz = xyz[sinds] rztheta = np.concatenate([np.sqrt(xyz[:,0] * xyz[:,0] + xyz[:,1] * xyz[:,1])[:,np.newaxis], xyz[:,2:3], np.arctan2(xyz[:,1], xyz[:,0])[:,np.newaxis]], axis=1) return t, s, rztheta, inds #rztheta[:,2] = 0.0 + (rztheta[:,2] - np.pi*3/2)%(2*np.pi) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/amr_kdtools.pxd0000644000175100001770000000441514714401662020041 0ustar00runnerdocker""" AMR kD-Tree Cython Tools """ cimport numpy as np cdef struct Split: int dim np.float64_t pos cdef class Node: cdef public Node left cdef public Node right cdef public Node parent cdef public int grid cdef public bint dirty cdef public np.int64_t node_id cdef public np.int64_t node_ind cdef np.float64_t left_edge[3] cdef np.float64_t right_edge[3] cdef public data cdef Split * split cdef int level cdef int point_in_node(self, np.float64_t[:] point) cdef Node _find_node(self, np.float64_t[:] point) cdef int _kd_is_leaf(self) cdef add_grid(self, np.float64_t[:] gle, np.float64_t[:] gre, int gid, int rank, int size) cdef insert_grid(self, np.float64_t[:] gle, np.float64_t[:] gre, int grid_id, int rank, int size) cpdef add_grids(self, int ngrids, np.float64_t[:,:] gles, np.float64_t[:,:] gres, np.int64_t[:] gids, int rank, int size) cdef int should_i_split(self, int rank, int size) cdef void insert_grids(self, int ngrids, np.float64_t[:,:] gles, np.float64_t[:,:] gres, 
np.int64_t[:] gids, int rank, int size)
    cdef split_grid(self, np.float64_t[:] gle, np.float64_t[:] gre,
                    int gid, int rank, int size)
    cdef int split_grids(self, int ngrids, np.float64_t[:,:] gles,
                         np.float64_t[:,:] gres, np.int64_t[:] gids,
                         int rank, int size)
    cdef geo_split(self, np.float64_t[:] gle, np.float64_t[:] gre,
                   int grid_id, int rank, int size)
    cdef void divide(self, Split * split)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/amr_kdtools.pyx0000644000175100001770000006475714714401662020075 0ustar00runnerdocker
# distutils: libraries = STD_LIBS
"""
AMR kD-Tree Cython Tools

"""

import numpy as np

cimport cython
cimport numpy as np
from cython.view cimport array as cvarray
from libc.stdlib cimport free, malloc


@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True)
cdef class Node:

    def __cinit__(self,
                  Node parent,
                  Node left,
                  Node right,
                  np.float64_t[:] left_edge,
                  np.float64_t[:] right_edge,
                  int grid,
                  np.int64_t node_id):
        self.dirty = False
        self.left = left
        self.right = right
        self.parent = parent
        cdef int i
        for i in range(3):
            self.left_edge[i] = left_edge[i]
            self.right_edge[i] = right_edge[i]
        self.grid = grid
        self.node_id = node_id
        self.split = NULL

    def print_me(self):
        print('Node %i' % self.node_id)
        print('\t le: %e %e %e' % (self.left_edge[0], self.left_edge[1],
                                   self.left_edge[2]))
        print('\t re: %e %e %e' % (self.right_edge[0], self.right_edge[1],
                                   self.right_edge[2]))
        print('\t grid: %i' % self.grid)

    def get_split_dim(self):
        if self.split != NULL:
            return self.split.dim
        else:
            return -1

    def get_split_pos(self):
        if self.split != NULL:
            return self.split.pos
        else:
            return np.nan

    def set_left_edge(self, np.float64_t[:] left_edge):
        cdef int i
        for i in range(3):
            self.left_edge[i] = left_edge[i]

    def set_right_edge(self, np.float64_t[:] right_edge):
        cdef int i
        for i in range(3):
            self.right_edge[i] = right_edge[i]

    def create_split(self, dim, pos):
        split = <Split *> malloc(sizeof(Split))
        split.dim = dim
        split.pos = pos
        self.split = split

    def __dealloc__(self):
        if self.split != NULL:
            free(self.split)

    # Begin input of converted methods

    def get_left_edge(self):
        le = np.empty(3, dtype='float64')
        for i in range(3):
            le[i] = self.left_edge[i]
        return le

    def get_right_edge(self):
        re = np.empty(3, dtype='float64')
        for i in range(3):
            re[i] = self.right_edge[i]
        return re

    def set_dirty(self, bint state):
        cdef Node node
        for node in self.depth_traverse():
            node.dirty = state

    def kd_traverse(self, viewpoint=None):
        cdef Node node
        if viewpoint is None:
            for node in self.depth_traverse():
                if node._kd_is_leaf() == 1 and node.grid != -1:
                    yield node
        else:
            for node in self.viewpoint_traverse(viewpoint):
                if node._kd_is_leaf() == 1 and node.grid != -1:
                    yield node

    def add_pygrid(self, np.float64_t[:] gle, np.float64_t[:] gre,
                   int gid, int rank, int size):
        """
        The entire purpose of this function is to move everything from
        ndarrays to internal C pointers.
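        A hypothetical call, handing a grid that spans the unit cube to a
        single-rank tree:

            node.add_pygrid(np.zeros(3), np.ones(3), 0, 0, 1)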
""" cdef np.float64_t[:] pgles = cvarray(format="d", shape=(3,), itemsize=sizeof(np.float64_t)) cdef np.float64_t[:] pgres = cvarray(format="d", shape=(3,), itemsize=sizeof(np.float64_t)) cdef int j for j in range(3): pgles[j] = gle[j] pgres[j] = gre[j] self.add_grid(pgles, pgres, gid, rank, size) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef add_grid(self, np.float64_t[:] gle, np.float64_t[:] gre, int gid, int rank, int size): if not should_i_build(self, rank, size): return if self._kd_is_leaf() == 1: self.insert_grid(gle, gre, gid, rank, size) else: less_id = gle[self.split.dim] < self.split.pos if less_id: self.left.add_grid(gle, gre, gid, rank, size) greater_id = gre[self.split.dim] > self.split.pos if greater_id: self.right.add_grid(gle, gre, gid, rank, size) return @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef insert_grid(self, np.float64_t[:] gle, np.float64_t[:] gre, int grid_id, int rank, int size): if not should_i_build(self, rank, size): return # If we should continue to split based on parallelism, do so! if self.should_i_split(rank, size): self.geo_split(gle, gre, grid_id, rank, size) return cdef int contained = 1 for i in range(3): if gle[i] > self.left_edge[i] or\ gre[i] < self.right_edge[i]: contained *= 0 if contained == 1: self.grid = grid_id assert(self.grid != -1) return # Split the grid cdef int check = self.split_grid(gle, gre, grid_id, rank, size) # If check is -1, then we have found a place where there are no choices. # Exit out and set the node to None. if check == -1: self.grid = -1 return @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cpdef add_grids(self, int ngrids, np.float64_t[:,:] gles, np.float64_t[:,:] gres, np.int64_t[:] gids, int rank, int size): cdef int i, j, nless, ngreater, index cdef np.float64_t[:,:] less_gles, less_gres, greater_gles, greater_gres cdef np.int64_t[:] l_ids, g_ids if not should_i_build(self, rank, size): return if self._kd_is_leaf() == 1: self.insert_grids(ngrids, gles, gres, gids, rank, size) return less_ids = cvarray(format="q", shape=(ngrids,), itemsize=sizeof(np.int64_t)) greater_ids = cvarray(format="q", shape=(ngrids,), itemsize=sizeof(np.int64_t)) nless = 0 ngreater = 0 for i in range(ngrids): if gles[i, self.split.dim] < self.split.pos: less_ids[nless] = i nless += 1 if gres[i, self.split.dim] > self.split.pos: greater_ids[ngreater] = i ngreater += 1 #print('nless: %i' % nless) #print('ngreater: %i' % ngreater) if nless > 0: less_gles = cvarray(format="d", shape=(nless,3), itemsize=sizeof(np.float64_t)) less_gres = cvarray(format="d", shape=(nless,3), itemsize=sizeof(np.float64_t)) l_ids = cvarray(format="q", shape=(nless,), itemsize=sizeof(np.int64_t)) for i in range(nless): index = less_ids[i] l_ids[i] = gids[index] for j in range(3): less_gles[i,j] = gles[index,j] less_gres[i,j] = gres[index,j] self.left.add_grids(nless, less_gles, less_gres, l_ids, rank, size) if ngreater > 0: greater_gles = cvarray(format="d", shape=(ngreater,3), itemsize=sizeof(np.float64_t)) greater_gres = cvarray(format="d", shape=(ngreater,3), itemsize=sizeof(np.float64_t)) g_ids = cvarray(format="q", shape=(ngreater,), itemsize=sizeof(np.int64_t)) for i in range(ngreater): index = greater_ids[i] g_ids[i] = gids[index] for j in range(3): greater_gles[i,j] = gles[index,j] greater_gres[i,j] = gres[index,j] self.right.add_grids(ngreater, greater_gles, greater_gres, g_ids, rank, size) return @cython.boundscheck(False) @cython.wraparound(False) 
@cython.cdivision(True) cdef int should_i_split(self, int rank, int size): if self.node_id < size and self.node_id > 0: return 1 return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void insert_grids(self, int ngrids, np.float64_t[:,:] gles, np.float64_t[:,:] gres, np.int64_t[:] gids, int rank, int size): if not should_i_build(self, rank, size) or ngrids == 0: return cdef int contained = 1 cdef int check if ngrids == 1: # If we should continue to split based on parallelism, do so! if self.should_i_split(rank, size): self.geo_split(gles[0,:], gres[0,:], gids[0], rank, size) return for i in range(3): contained *= gles[0,i] <= self.left_edge[i] contained *= gres[0,i] >= self.right_edge[i] if contained == 1: # print('Node fully contained, setting to grid: %i' % gids[0]) self.grid = gids[0] assert(self.grid != -1) return # Split the grids check = self.split_grids(ngrids, gles, gres, gids, rank, size) # If check is -1, then we have found a place where there are no choices. # Exit out and set the node to None. if check == -1: self.grid = -1 return @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef split_grid(self, np.float64_t[:] gle, np.float64_t[:] gre, int gid, int rank, int size): cdef int j cdef np.uint8_t[:] less_ids, greater_ids data = cvarray(format="d", shape=(1,2,3), itemsize=sizeof(np.float64_t)) for j in range(3): data[0,0,j] = gle[j] data[0,1,j] = gre[j] less_ids = cvarray(format="B", shape=(1,), itemsize=sizeof(np.uint8_t)) greater_ids = cvarray(format="B", shape=(1,), itemsize=sizeof(np.uint8_t)) best_dim, split_pos, nless, ngreater = \ kdtree_get_choices(1, data, self.left_edge, self.right_edge, less_ids, greater_ids) # If best_dim is -1, then we have found a place where there are no choices. # Exit out and set the node to None. if best_dim == -1: return -1 split = malloc(sizeof(Split)) split.dim = best_dim split.pos = split_pos # Create a Split self.divide(split) # Populate Left Node #print('Inserting left node', self.left_edge, self.right_edge) if nless == 1: self.left.insert_grid(gle, gre, gid, rank, size) # Populate Right Node #print('Inserting right node', self.left_edge, self.right_edge) if ngreater == 1: self.right.insert_grid(gle, gre, gid, rank, size) return 0 #@cython.boundscheck(False) #@cython.wraparound(False) #@cython.cdivision(True) cdef int split_grids(self, int ngrids, np.float64_t[:,:] gles, np.float64_t[:,:] gres, np.int64_t[:] gids, int rank, int size): # Find a Split cdef int i, j, index cdef np.float64_t[:,:] less_gles, less_gres, greater_gles, greater_gres cdef np.int64_t[:] l_ids, g_ids if ngrids == 0: return 0 data = cvarray(format="d", shape=(ngrids,2,3), itemsize=sizeof(np.float64_t)) for i in range(ngrids): for j in range(3): data[i,0,j] = gles[i,j] data[i,1,j] = gres[i,j] less_ids = cvarray(format="B", shape=(ngrids,), itemsize=sizeof(np.uint8_t)) greater_ids = cvarray(format="B", shape=(ngrids,), itemsize=sizeof(np.uint8_t)) best_dim, split_pos, nless, ngreater = \ kdtree_get_choices(ngrids, data, self.left_edge, self.right_edge, less_ids, greater_ids) # If best_dim is -1, then we have found a place where there are no choices. # Exit out and set the node to None. 
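        # (best_dim comes from kdtree_get_choices, defined at the bottom of
        # this module, which scans the unique grid edges along each dimension
        # and returns -1 only when no candidate edge lies strictly inside
        # this node.)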
if best_dim == -1: print('Failed to split grids.') return -1 split = malloc(sizeof(Split)) split.dim = best_dim split.pos = split_pos # Create a Split self.divide(split) less_index = cvarray(format="q", shape=(ngrids,), itemsize=sizeof(np.int64_t)) greater_index = cvarray(format="q", shape=(ngrids,), itemsize=sizeof(np.int64_t)) nless = 0 ngreater = 0 for i in range(ngrids): if less_ids[i] == 1: less_index[nless] = i nless += 1 if greater_ids[i] == 1: greater_index[ngreater] = i ngreater += 1 if nless > 0: less_gles = cvarray(format="d", shape=(nless,3), itemsize=sizeof(np.float64_t)) less_gres = cvarray(format="d", shape=(nless,3), itemsize=sizeof(np.float64_t)) l_ids = cvarray(format="q", shape=(nless,), itemsize=sizeof(np.int64_t)) for i in range(nless): index = less_index[i] l_ids[i] = gids[index] for j in range(3): less_gles[i,j] = gles[index,j] less_gres[i,j] = gres[index,j] # Populate Left Node #print('Inserting left node', self.left_edge, self.right_edge) self.left.insert_grids(nless, less_gles, less_gres, l_ids, rank, size) if ngreater > 0: greater_gles = cvarray(format="d", shape=(ngreater,3), itemsize=sizeof(np.float64_t)) greater_gres = cvarray(format="d", shape=(ngreater,3), itemsize=sizeof(np.float64_t)) g_ids = cvarray(format="q", shape=(ngreater,), itemsize=sizeof(np.int64_t)) for i in range(ngreater): index = greater_index[i] g_ids[i] = gids[index] for j in range(3): greater_gles[i,j] = gles[index,j] greater_gres[i,j] = gres[index,j] # Populate Right Node #print('Inserting right node', self.left_edge, self.right_edge) self.right.insert_grids(ngreater, greater_gles, greater_gres, g_ids, rank, size) return 0 cdef geo_split(self, np.float64_t[:] gle, np.float64_t[:] gre, int grid_id, int rank, int size): cdef int big_dim = 0 cdef int i cdef np.float64_t v, my_max = 0.0 for i in range(3): v = gre[i] - gle[i] if v > my_max: my_max = v big_dim = i new_pos = (gre[big_dim] + gle[big_dim])/2. 
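        # (geo_split always bisects the grid's longest dimension at its
        # midpoint, so splits that exist purely for parallel decomposition
        # keep the two children balanced.)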
lnew_gle = cvarray(format="d", shape=(3,), itemsize=sizeof(np.float64_t)) lnew_gre = cvarray(format="d", shape=(3,), itemsize=sizeof(np.float64_t)) rnew_gle = cvarray(format="d", shape=(3,), itemsize=sizeof(np.float64_t)) rnew_gre = cvarray(format="d", shape=(3,), itemsize=sizeof(np.float64_t)) for j in range(3): lnew_gle[j] = gle[j] lnew_gre[j] = gre[j] rnew_gle[j] = gle[j] rnew_gre[j] = gre[j] split = malloc(sizeof(Split)) split.dim = big_dim split.pos = new_pos # Create a Split self.divide(split) #lnew_gre[big_dim] = new_pos # Populate Left Node #print('Inserting left node', self.left_edge, self.right_edge) self.left.insert_grid(lnew_gle, lnew_gre, grid_id, rank, size) #rnew_gle[big_dim] = new_pos # Populate Right Node #print('Inserting right node', self.left_edge, self.right_edge) self.right.insert_grid(rnew_gle, rnew_gre, grid_id, rank, size) return cdef void divide(self, Split * split): # Create a Split self.split = split cdef np.float64_t[:] le = np.empty(3, dtype='float64') cdef np.float64_t[:] re = np.empty(3, dtype='float64') cdef int i for i in range(3): le[i] = self.left_edge[i] re[i] = self.right_edge[i] re[split.dim] = split.pos self.left = Node(self, None, None, le, re, self.grid, _lchild_id(self.node_id)) re[split.dim] = self.right_edge[split.dim] le[split.dim] = split.pos self.right = Node(self, None, None, le, re, self.grid, _rchild_id(self.node_id)) return # def kd_sum_volume(self): cdef np.float64_t vol = 1.0 if (self.left is None) and (self.right is None): if self.grid == -1: return 0.0 for i in range(3): vol *= self.right_edge[i] - self.left_edge[i] return vol else: return self.left.kd_sum_volume() + self.right.kd_sum_volume() def kd_node_check(self): assert (self.left is None) == (self.right is None) if (self.left is None) and (self.right is None): if self.grid != -1: return np.prod(self.right_edge - self.left_edge) else: return 0.0 else: return self.left.kd_node_check()+self.right.kd_node_check() def kd_is_leaf(self): cdef int has_l_child = self.left == None cdef int has_r_child = self.right == None assert has_l_child == has_r_child return has_l_child cdef int _kd_is_leaf(self): if self.left is None or self.right is None: return 1 return 0 def depth_traverse(self, max_node=None): ''' Yields a depth-first traversal of the kd tree always going to the left child before the right. ''' current = self previous = None if max_node is None: max_node = np.inf while current is not None: yield current current, previous = step_depth(current, previous) if current is None: break if current.node_id >= max_node: current = current.parent previous = current.right def depth_first_touch(self, max_node=None): ''' Yields a depth-first traversal of the kd tree always going to the left child before the right. ''' current = self previous = None if max_node is None: max_node = np.inf while current is not None: if previous is None or previous.parent != current: yield current current, previous = step_depth(current, previous) if current is None: break if current.node_id >= max_node: current = current.parent previous = current.right def breadth_traverse(self): ''' Yields a breadth-first traversal of the kd tree always going to the left child before the right. ''' current = self previous = None while current is not None: yield current current, previous = step_depth(current, previous) def viewpoint_traverse(self, viewpoint): ''' Yields a viewpoint dependent traversal of the kd-tree. Starts with nodes furthest away from viewpoint. 
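        This back-to-front ordering is the one painter's-algorithm style
        compositing wants; a hypothetical consumer would look like:

            for node in tree.viewpoint_traverse(camera_pos):
                ...  # composite this node's contribution over the image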
''' current = self previous = None while current is not None: yield current current, previous = step_viewpoint(current, previous, viewpoint) cdef int point_in_node(self, np.float64_t[:] point): cdef int i cdef int inside = 1 for i in range(3): inside *= self.left_edge[i] <= point[i] inside *= self.right_edge[i] > point[i] return inside cdef Node _find_node(self, np.float64_t[:] point): while self._kd_is_leaf() == 0: if point[self.split.dim] < self.split.pos: self = self.left else: self = self.right return self def find_node(self, np.float64_t[:] point): """ Find the AMRKDTree node enclosing a position """ assert(self.point_in_node(point)) return self._find_node(point) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef inline np.int64_t _lchild_id(np.int64_t node_id): return (node_id<<1) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef inline np.int64_t _rchild_id(np.int64_t node_id): return (node_id<<1) + 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef inline np.int64_t _parent_id(np.int64_t node_id): return (node_id-1) >> 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int should_i_build(Node node, int rank, int size): if (node.node_id < size) or (node.node_id >= 2*size): return 1 elif node.node_id - size == rank: return 1 else: return 0 def step_depth(Node current, Node previous): ''' Takes a single step in the depth-first traversal ''' if current._kd_is_leaf() == 1: # At a leaf, move back up previous = current current = current.parent elif current.parent is previous: # Moving down, go left first previous = current if current.left is not None: current = current.left elif current.right is not None: current = current.right else: current = current.parent elif current.left is previous: # Moving up from left, go right previous = current if current.right is not None: current = current.right else: current = current.parent elif current.right is previous: # Moving up from right child, move up previous = current current = current.parent return current, previous def step_viewpoint(Node current, Node previous, viewpoint): ''' Takes a single step in the viewpoint based traversal. Always goes to the node furthest away from viewpoint first. 
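    At each split, the child on the viewpoint's side of the splitting plane
    is saved for last; e.g. while descending, if viewpoint[split.dim] <=
    split.pos the right child is visited before the left one, mirroring the
    branches below.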
''' if current._kd_is_leaf() == 1: # At a leaf, move back up previous = current current = current.parent elif current.split.dim is None: # This is a dead node previous = current current = current.parent elif current.parent is previous: # Moving down previous = current if viewpoint[current.split.dim] <= current.split.pos: if current.right is not None: current = current.right else: previous = current.right else: if current.left is not None: current = current.left else: previous = current.left elif current.right is previous: # Moving up from right previous = current if viewpoint[current.split.dim] <= current.split.pos: if current.left is not None: current = current.left else: current = current.parent else: current = current.parent elif current.left is previous: # Moving up from left child previous = current if viewpoint[current.split.dim] > current.split.pos: if current.right is not None: current = current.right else: current = current.parent else: current = current.parent return current, previous @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef kdtree_get_choices(int n_grids, np.float64_t[:,:,:] data, np.float64_t[:] l_corner, np.float64_t[:] r_corner, np.uint8_t[:] less_ids, np.uint8_t[:] greater_ids, ): cdef int i, j, k, dim, n_unique, best_dim, my_split cdef np.float64_t split cdef np.float64_t[:,:] uniquedims cdef np.float64_t[:] uniques uniquedims = cvarray(format="d", shape=(3, 2*n_grids), itemsize=sizeof(np.float64_t)) my_max = 0 my_split = 0 best_dim = -1 for dim in range(3): n_unique = 0 uniques = uniquedims[dim] for i in range(n_grids): # Check for disqualification for j in range(2): # print("Checking against", i,j,dim,data[i,j,dim]) if not (l_corner[dim] < data[i][j][dim] and data[i][j][dim] < r_corner[dim]): # print("Skipping ", data[i,j,dim], l_corner[dim], r_corner[dim]) continue skipit = 0 # Add our left ... for k in range(n_unique): if uniques[k] == data[i][j][dim]: skipit = 1 # print("Identified", uniques[k], data[i,j,dim], n_unique) break if skipit == 0: uniques[n_unique] = data[i][j][dim] n_unique += 1 if n_unique > my_max: best_dim = dim my_max = n_unique my_split = (n_unique-1)/2 if best_dim == -1: return -1, 0, 0, 0 # I recognize how lame this is. 
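    # (The unique candidate edges found above are sorted below and the median
    # entry is taken as the split position, which tends to balance the number
    # of grids falling on each side.)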
cdef np.ndarray[np.float64_t, ndim=1] tarr = np.empty(my_max, dtype='float64') for i in range(my_max): # print("Setting tarr: ", i, uniquedims[best_dim][i]) tarr[i] = uniquedims[best_dim][i] tarr.sort() split = tarr[my_split] cdef int nless=0, ngreater=0 for i in range(n_grids): if data[i][0][best_dim] < split: less_ids[i] = 1 nless += 1 else: less_ids[i] = 0 if data[i][1][best_dim] > split: greater_ids[i] = 1 ngreater += 1 else: greater_ids[i] = 0 # Return out unique values return best_dim, split, nless, ngreater ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/api.py0000644000175100001770000000072414714401662016130 0ustar00runnerdockerfrom .basic_octree import * from .contour_finding import * from .depth_first_octree import * from .fortran_reader import * from .grid_traversal import * from .image_utilities import * from .interpolators import * from .line_integral_convolution import * from .marching_cubes import * from .mesh_utilities import * from .misc_utilities import * from .particle_mesh_operations import * from .points_in_volume import * from .quad_tree import * from .write_array import * ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/autogenerated_element_samplers.pxd0000644000175100001770000000444314714401662023772 0ustar00runnerdockercdef void Q1Function3D(double* fx, double* x, double* vertices, double* phys_x) noexcept nogil cdef void Q1Jacobian3D(double* rcol, double* scol, double* tcol, double* x, double* vertices, double* phys_x) noexcept nogil cdef void Q1Function2D(double* fx, double* x, double* vertices, double* phys_x) noexcept nogil cdef void Q1Jacobian2D(double* rcol, double* scol, double* x, double* vertices, double* phys_x) noexcept nogil cdef void Q2Function2D(double* fx, double* x, double* vertices, double* phys_x) noexcept nogil cdef void Q2Jacobian2D(double* rcol, double* scol, double* x, double* vertices, double* phys_x) noexcept nogil cdef void Tet2Function3D(double* fx, double* x, double* vertices, double* phys_x) noexcept nogil cdef void Tet2Jacobian3D(double* rcol, double* scol, double* tcol, double* x, double* vertices, double* phys_x) noexcept nogil cdef void T2Function2D(double* fx, double* x, double* vertices, double* phys_x) noexcept nogil cdef void T2Jacobian2D(double* rcol, double* scol, double* x, double* vertices, double* phys_x) noexcept nogil cdef void W1Function3D(double* fx, double* x, double* vertices, double* phys_x) noexcept nogil cdef void W1Jacobian3D(double* rcol, double* scol, double* tcol, double* x, double* vertices, double* phys_x) noexcept nogil ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/autogenerated_element_samplers.pyx0000644000175100001770000004747314714401662024031 0ustar00runnerdocker# This file contains auto-generated functions for sampling # inside finite element solutions for various mesh types. # To see how the code generation works in detail, see # yt/utilities/mesh_code_generation.py. 
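# Editorial note: every generated sampler below follows the same pattern,
# evaluating f(x) = sum_i N_i(x) * v_i - phys_x, where the N_i are the
# element's shape functions on the reference cell. A hedged NumPy sketch of
# the 3D Q1 (trilinear) case, equivalent to Q1Function3D below up to
# vectorization (the helper name and array layout are illustrative, not part
# of this module):

import numpy as np


def _q1_function_3d_sketch(x, vertices, phys_x):
    # vertices: (8, 3) hexahedron corners in the vertex ordering used by
    # Q1Function3D; x: reference coordinates in [-1, 1]^3
    signs = np.array(
        [
            [-1, -1, -1], [1, -1, -1], [1, 1, -1], [-1, 1, -1],
            [-1, -1, 1], [1, -1, 1], [1, 1, 1], [-1, 1, 1],
        ]
    )
    # N_i(x) = (1 + s_ix*x0) * (1 + s_iy*x1) * (1 + s_iz*x2) / 8
    weights = 0.125 * np.prod(1 + signs * np.asarray(x), axis=1)
    return weights @ vertices - np.asarray(phys_x)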
cimport cython from libc.math cimport pow @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void Q1Function3D(double* fx, double* x, double* vertices, double* phys_x) noexcept nogil: fx[0] = 0.125*(1 - x[0])*(1 - x[1])*(1 - x[2])*vertices[0] + 0.125*(1 - x[0])*(1 - x[1])*(1 + x[2])*vertices[12] + 0.125*(1 - x[0])*(1 + x[1])*(1 - x[2])*vertices[9] + 0.125*(1 - x[0])*(1 + x[1])*(1 + x[2])*vertices[21] + 0.125*(1 + x[0])*(1 - x[1])*(1 - x[2])*vertices[3] + 0.125*(1 + x[0])*(1 - x[1])*(1 + x[2])*vertices[15] + 0.125*(1 + x[0])*(1 + x[1])*(1 - x[2])*vertices[6] + 0.125*(1 + x[0])*(1 + x[1])*(1 + x[2])*vertices[18] - phys_x[0] fx[1] = 0.125*(1 - x[0])*(1 - x[1])*(1 - x[2])*vertices[1] + 0.125*(1 - x[0])*(1 - x[1])*(1 + x[2])*vertices[13] + 0.125*(1 - x[0])*(1 + x[1])*(1 - x[2])*vertices[10] + 0.125*(1 - x[0])*(1 + x[1])*(1 + x[2])*vertices[22] + 0.125*(1 + x[0])*(1 - x[1])*(1 - x[2])*vertices[4] + 0.125*(1 + x[0])*(1 - x[1])*(1 + x[2])*vertices[16] + 0.125*(1 + x[0])*(1 + x[1])*(1 - x[2])*vertices[7] + 0.125*(1 + x[0])*(1 + x[1])*(1 + x[2])*vertices[19] - phys_x[1] fx[2] = 0.125*(1 - x[0])*(1 - x[1])*(1 - x[2])*vertices[2] + 0.125*(1 - x[0])*(1 - x[1])*(1 + x[2])*vertices[14] + 0.125*(1 - x[0])*(1 + x[1])*(1 - x[2])*vertices[11] + 0.125*(1 - x[0])*(1 + x[1])*(1 + x[2])*vertices[23] + 0.125*(1 + x[0])*(1 - x[1])*(1 - x[2])*vertices[5] + 0.125*(1 + x[0])*(1 - x[1])*(1 + x[2])*vertices[17] + 0.125*(1 + x[0])*(1 + x[1])*(1 - x[2])*vertices[8] + 0.125*(1 + x[0])*(1 + x[1])*(1 + x[2])*vertices[20] - phys_x[2] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void Q1Jacobian3D(double* rcol, double* scol, double* tcol, double* x, double* vertices, double* phys_x) noexcept nogil: rcol[0] = -0.125*(1 - x[1])*(1 - x[2])*vertices[0] + 0.125*(1 - x[1])*(1 - x[2])*vertices[3] - 0.125*(1 - x[1])*(1 + x[2])*vertices[12] + 0.125*(1 - x[1])*(1 + x[2])*vertices[15] + 0.125*(1 + x[1])*(1 - x[2])*vertices[6] - 0.125*(1 + x[1])*(1 - x[2])*vertices[9] + 0.125*(1 + x[1])*(1 + x[2])*vertices[18] - 0.125*(1 + x[1])*(1 + x[2])*vertices[21] scol[0] = -0.125*(1 - x[0])*(1 - x[2])*vertices[0] + 0.125*(1 - x[0])*(1 - x[2])*vertices[9] - 0.125*(1 - x[0])*(1 + x[2])*vertices[12] + 0.125*(1 - x[0])*(1 + x[2])*vertices[21] - 0.125*(1 + x[0])*(1 - x[2])*vertices[3] + 0.125*(1 + x[0])*(1 - x[2])*vertices[6] - 0.125*(1 + x[0])*(1 + x[2])*vertices[15] + 0.125*(1 + x[0])*(1 + x[2])*vertices[18] tcol[0] = -0.125*(1 - x[0])*(1 - x[1])*vertices[0] + 0.125*(1 - x[0])*(1 - x[1])*vertices[12] - 0.125*(1 - x[0])*(1 + x[1])*vertices[9] + 0.125*(1 - x[0])*(1 + x[1])*vertices[21] - 0.125*(1 + x[0])*(1 - x[1])*vertices[3] + 0.125*(1 + x[0])*(1 - x[1])*vertices[15] - 0.125*(1 + x[0])*(1 + x[1])*vertices[6] + 0.125*(1 + x[0])*(1 + x[1])*vertices[18] rcol[1] = -0.125*(1 - x[1])*(1 - x[2])*vertices[1] + 0.125*(1 - x[1])*(1 - x[2])*vertices[4] - 0.125*(1 - x[1])*(1 + x[2])*vertices[13] + 0.125*(1 - x[1])*(1 + x[2])*vertices[16] + 0.125*(1 + x[1])*(1 - x[2])*vertices[7] - 0.125*(1 + x[1])*(1 - x[2])*vertices[10] + 0.125*(1 + x[1])*(1 + x[2])*vertices[19] - 0.125*(1 + x[1])*(1 + x[2])*vertices[22] scol[1] = -0.125*(1 - x[0])*(1 - x[2])*vertices[1] + 0.125*(1 - x[0])*(1 - x[2])*vertices[10] - 0.125*(1 - x[0])*(1 + x[2])*vertices[13] + 0.125*(1 - x[0])*(1 + x[2])*vertices[22] - 0.125*(1 + x[0])*(1 - x[2])*vertices[4] + 0.125*(1 + x[0])*(1 - x[2])*vertices[7] - 0.125*(1 + x[0])*(1 + x[2])*vertices[16] + 0.125*(1 + x[0])*(1 + x[2])*vertices[19] tcol[1] = -0.125*(1 - x[0])*(1 - 
x[1])*vertices[1] + 0.125*(1 - x[0])*(1 - x[1])*vertices[13] - 0.125*(1 - x[0])*(1 + x[1])*vertices[10] + 0.125*(1 - x[0])*(1 + x[1])*vertices[22] - 0.125*(1 + x[0])*(1 - x[1])*vertices[4] + 0.125*(1 + x[0])*(1 - x[1])*vertices[16] - 0.125*(1 + x[0])*(1 + x[1])*vertices[7] + 0.125*(1 + x[0])*(1 + x[1])*vertices[19] rcol[2] = -0.125*(1 - x[1])*(1 - x[2])*vertices[2] + 0.125*(1 - x[1])*(1 - x[2])*vertices[5] - 0.125*(1 - x[1])*(1 + x[2])*vertices[14] + 0.125*(1 - x[1])*(1 + x[2])*vertices[17] + 0.125*(1 + x[1])*(1 - x[2])*vertices[8] - 0.125*(1 + x[1])*(1 - x[2])*vertices[11] + 0.125*(1 + x[1])*(1 + x[2])*vertices[20] - 0.125*(1 + x[1])*(1 + x[2])*vertices[23] scol[2] = -0.125*(1 - x[0])*(1 - x[2])*vertices[2] + 0.125*(1 - x[0])*(1 - x[2])*vertices[11] - 0.125*(1 - x[0])*(1 + x[2])*vertices[14] + 0.125*(1 - x[0])*(1 + x[2])*vertices[23] - 0.125*(1 + x[0])*(1 - x[2])*vertices[5] + 0.125*(1 + x[0])*(1 - x[2])*vertices[8] - 0.125*(1 + x[0])*(1 + x[2])*vertices[17] + 0.125*(1 + x[0])*(1 + x[2])*vertices[20] tcol[2] = -0.125*(1 - x[0])*(1 - x[1])*vertices[2] + 0.125*(1 - x[0])*(1 - x[1])*vertices[14] - 0.125*(1 - x[0])*(1 + x[1])*vertices[11] + 0.125*(1 - x[0])*(1 + x[1])*vertices[23] - 0.125*(1 + x[0])*(1 - x[1])*vertices[5] + 0.125*(1 + x[0])*(1 - x[1])*vertices[17] - 0.125*(1 + x[0])*(1 + x[1])*vertices[8] + 0.125*(1 + x[0])*(1 + x[1])*vertices[20] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void Q1Function2D(double* fx, double* x, double* vertices, double* phys_x) noexcept nogil: fx[0] = 0.25*(1 - x[0])*(1 - x[1])*vertices[0] + 0.25*(1 - x[0])*(1 + x[1])*vertices[6] + 0.25*(1 + x[0])*(1 - x[1])*vertices[2] + 0.25*(1 + x[0])*(1 + x[1])*vertices[4] - phys_x[0] fx[1] = 0.25*(1 - x[0])*(1 - x[1])*vertices[1] + 0.25*(1 - x[0])*(1 + x[1])*vertices[7] + 0.25*(1 + x[0])*(1 - x[1])*vertices[3] + 0.25*(1 + x[0])*(1 + x[1])*vertices[5] - phys_x[1] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void Q1Jacobian2D(double* rcol, double* scol, double* x, double* vertices, double* phys_x) noexcept nogil: rcol[0] = -0.25*(1 - x[1])*vertices[0] + 0.25*(1 - x[1])*vertices[2] + 0.25*(1 + x[1])*vertices[4] - 0.25*(1 + x[1])*vertices[6] scol[0] = -0.25*(1 - x[0])*vertices[0] + 0.25*(1 - x[0])*vertices[6] - 0.25*(1 + x[0])*vertices[2] + 0.25*(1 + x[0])*vertices[4] rcol[1] = -0.25*(1 - x[1])*vertices[1] + 0.25*(1 - x[1])*vertices[3] + 0.25*(1 + x[1])*vertices[5] - 0.25*(1 + x[1])*vertices[7] scol[1] = -0.25*(1 - x[0])*vertices[1] + 0.25*(1 - x[0])*vertices[7] - 0.25*(1 + x[0])*vertices[3] + 0.25*(1 + x[0])*vertices[5] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void Q2Function2D(double* fx, double* x, double* vertices, double* phys_x) noexcept nogil: fx[0] = (1 + x[0])*(-1 + x[0])*(1 + x[1])*(-1 + x[1])*vertices[16] - 0.5*(1 + x[0])*(-1 + x[0])*x[1]*(-1 + x[1])*vertices[8] - 0.5*(1 + x[0])*(-1 + x[0])*x[1]*(1 + x[1])*vertices[12] - phys_x[0] - 0.5*x[0]*(-1 + x[0])*(1 + x[1])*(-1 + x[1])*vertices[14] + 0.25*x[0]*(-1 + x[0])*x[1]*(-1 + x[1])*vertices[0] + 0.25*x[0]*(-1 + x[0])*x[1]*(1 + x[1])*vertices[6] - 0.5*x[0]*(1 + x[0])*(1 + x[1])*(-1 + x[1])*vertices[10] + 0.25*x[0]*(1 + x[0])*x[1]*(-1 + x[1])*vertices[2] + 0.25*x[0]*(1 + x[0])*x[1]*(1 + x[1])*vertices[4] fx[1] = (1 + x[0])*(-1 + x[0])*(1 + x[1])*(-1 + x[1])*vertices[17] - 0.5*(1 + x[0])*(-1 + x[0])*x[1]*(-1 + x[1])*vertices[9] - 0.5*(1 + x[0])*(-1 + x[0])*x[1]*(1 + x[1])*vertices[13] - phys_x[1] - 0.5*x[0]*(-1 + x[0])*(1 + x[1])*(-1 + 
x[1])*vertices[15] + 0.25*x[0]*(-1 + x[0])*x[1]*(-1 + x[1])*vertices[1] + 0.25*x[0]*(-1 + x[0])*x[1]*(1 + x[1])*vertices[7] - 0.5*x[0]*(1 + x[0])*(1 + x[1])*(-1 + x[1])*vertices[11] + 0.25*x[0]*(1 + x[0])*x[1]*(-1 + x[1])*vertices[3] + 0.25*x[0]*(1 + x[0])*x[1]*(1 + x[1])*vertices[5] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void Q2Jacobian2D(double* rcol, double* scol, double* x, double* vertices, double* phys_x) noexcept nogil: rcol[0] = -0.5*(-1 + x[0])*(1 + x[1])*(-1 + x[1])*vertices[14] + (-1 + x[0])*(1 + x[1])*(-1 + x[1])*vertices[16] + 0.25*(-1 + x[0])*x[1]*(-1 + x[1])*vertices[0] - 0.5*(-1 + x[0])*x[1]*(-1 + x[1])*vertices[8] + 0.25*(-1 + x[0])*x[1]*(1 + x[1])*vertices[6] - 0.5*(-1 + x[0])*x[1]*(1 + x[1])*vertices[12] - 0.5*(1 + x[0])*(1 + x[1])*(-1 + x[1])*vertices[10] + (1 + x[0])*(1 + x[1])*(-1 + x[1])*vertices[16] + 0.25*(1 + x[0])*x[1]*(-1 + x[1])*vertices[2] - 0.5*(1 + x[0])*x[1]*(-1 + x[1])*vertices[8] + 0.25*(1 + x[0])*x[1]*(1 + x[1])*vertices[4] - 0.5*(1 + x[0])*x[1]*(1 + x[1])*vertices[12] - 0.5*x[0]*(1 + x[1])*(-1 + x[1])*vertices[10] - 0.5*x[0]*(1 + x[1])*(-1 + x[1])*vertices[14] + 0.25*x[0]*x[1]*(-1 + x[1])*vertices[0] + 0.25*x[0]*x[1]*(-1 + x[1])*vertices[2] + 0.25*x[0]*x[1]*(1 + x[1])*vertices[4] + 0.25*x[0]*x[1]*(1 + x[1])*vertices[6] scol[0] = -0.5*(-1 + x[1])*(1 + x[0])*(-1 + x[0])*vertices[8] + (-1 + x[1])*(1 + x[0])*(-1 + x[0])*vertices[16] + 0.25*(-1 + x[1])*x[0]*(-1 + x[0])*vertices[0] - 0.5*(-1 + x[1])*x[0]*(-1 + x[0])*vertices[14] + 0.25*(-1 + x[1])*x[0]*(1 + x[0])*vertices[2] - 0.5*(-1 + x[1])*x[0]*(1 + x[0])*vertices[10] - 0.5*(1 + x[1])*(1 + x[0])*(-1 + x[0])*vertices[12] + (1 + x[1])*(1 + x[0])*(-1 + x[0])*vertices[16] + 0.25*(1 + x[1])*x[0]*(-1 + x[0])*vertices[6] - 0.5*(1 + x[1])*x[0]*(-1 + x[0])*vertices[14] + 0.25*(1 + x[1])*x[0]*(1 + x[0])*vertices[4] - 0.5*(1 + x[1])*x[0]*(1 + x[0])*vertices[10] - 0.5*x[1]*(1 + x[0])*(-1 + x[0])*vertices[8] - 0.5*x[1]*(1 + x[0])*(-1 + x[0])*vertices[12] + 0.25*x[1]*x[0]*(-1 + x[0])*vertices[0] + 0.25*x[1]*x[0]*(-1 + x[0])*vertices[6] + 0.25*x[1]*x[0]*(1 + x[0])*vertices[2] + 0.25*x[1]*x[0]*(1 + x[0])*vertices[4] rcol[1] = -0.5*(-1 + x[0])*(1 + x[1])*(-1 + x[1])*vertices[15] + (-1 + x[0])*(1 + x[1])*(-1 + x[1])*vertices[17] + 0.25*(-1 + x[0])*x[1]*(-1 + x[1])*vertices[1] - 0.5*(-1 + x[0])*x[1]*(-1 + x[1])*vertices[9] + 0.25*(-1 + x[0])*x[1]*(1 + x[1])*vertices[7] - 0.5*(-1 + x[0])*x[1]*(1 + x[1])*vertices[13] - 0.5*(1 + x[0])*(1 + x[1])*(-1 + x[1])*vertices[11] + (1 + x[0])*(1 + x[1])*(-1 + x[1])*vertices[17] + 0.25*(1 + x[0])*x[1]*(-1 + x[1])*vertices[3] - 0.5*(1 + x[0])*x[1]*(-1 + x[1])*vertices[9] + 0.25*(1 + x[0])*x[1]*(1 + x[1])*vertices[5] - 0.5*(1 + x[0])*x[1]*(1 + x[1])*vertices[13] - 0.5*x[0]*(1 + x[1])*(-1 + x[1])*vertices[11] - 0.5*x[0]*(1 + x[1])*(-1 + x[1])*vertices[15] + 0.25*x[0]*x[1]*(-1 + x[1])*vertices[1] + 0.25*x[0]*x[1]*(-1 + x[1])*vertices[3] + 0.25*x[0]*x[1]*(1 + x[1])*vertices[5] + 0.25*x[0]*x[1]*(1 + x[1])*vertices[7] scol[1] = -0.5*(-1 + x[1])*(1 + x[0])*(-1 + x[0])*vertices[9] + (-1 + x[1])*(1 + x[0])*(-1 + x[0])*vertices[17] + 0.25*(-1 + x[1])*x[0]*(-1 + x[0])*vertices[1] - 0.5*(-1 + x[1])*x[0]*(-1 + x[0])*vertices[15] + 0.25*(-1 + x[1])*x[0]*(1 + x[0])*vertices[3] - 0.5*(-1 + x[1])*x[0]*(1 + x[0])*vertices[11] - 0.5*(1 + x[1])*(1 + x[0])*(-1 + x[0])*vertices[13] + (1 + x[1])*(1 + x[0])*(-1 + x[0])*vertices[17] + 0.25*(1 + x[1])*x[0]*(-1 + x[0])*vertices[7] - 0.5*(1 + x[1])*x[0]*(-1 + x[0])*vertices[15] + 0.25*(1 + x[1])*x[0]*(1 + x[0])*vertices[5] - 
0.5*(1 + x[1])*x[0]*(1 + x[0])*vertices[11] - 0.5*x[1]*(1 + x[0])*(-1 + x[0])*vertices[9] - 0.5*x[1]*(1 + x[0])*(-1 + x[0])*vertices[13] + 0.25*x[1]*x[0]*(-1 + x[0])*vertices[1] + 0.25*x[1]*x[0]*(-1 + x[0])*vertices[7] + 0.25*x[1]*x[0]*(1 + x[0])*vertices[3] + 0.25*x[1]*x[0]*(1 + x[0])*vertices[5] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void Tet2Function3D(double* fx, double* x, double* vertices, double* phys_x) noexcept nogil: fx[0] = (-x[0] + 2*pow(x[0], 2))*vertices[3] + (-x[1] + 2*pow(x[1], 2))*vertices[6] + (-x[2] + 2*pow(x[2], 2))*vertices[9] + (4*x[0] - 4*x[0]*x[1] - 4*x[0]*x[2] - 4*pow(x[0], 2))*vertices[12] + (4*x[1] - 4*x[1]*x[0] - 4*x[1]*x[2] - 4*pow(x[1], 2))*vertices[18] + (4*x[2] - 4*x[2]*x[0] - 4*x[2]*x[1] - 4*pow(x[2], 2))*vertices[21] + (1 - 3*x[0] + 4*x[0]*x[1] + 4*x[0]*x[2] + 2*pow(x[0], 2) - 3*x[1] + 4*x[1]*x[2] + 2*pow(x[1], 2) - 3*x[2] + 2*pow(x[2], 2))*vertices[0] - phys_x[0] + 4*x[0]*x[1]*vertices[15] + 4*x[0]*x[2]*vertices[24] + 4*x[1]*x[2]*vertices[27] fx[1] = (-x[0] + 2*pow(x[0], 2))*vertices[4] + (-x[1] + 2*pow(x[1], 2))*vertices[7] + (-x[2] + 2*pow(x[2], 2))*vertices[10] + (4*x[0] - 4*x[0]*x[1] - 4*x[0]*x[2] - 4*pow(x[0], 2))*vertices[13] + (4*x[1] - 4*x[1]*x[0] - 4*x[1]*x[2] - 4*pow(x[1], 2))*vertices[19] + (4*x[2] - 4*x[2]*x[0] - 4*x[2]*x[1] - 4*pow(x[2], 2))*vertices[22] + (1 - 3*x[0] + 4*x[0]*x[1] + 4*x[0]*x[2] + 2*pow(x[0], 2) - 3*x[1] + 4*x[1]*x[2] + 2*pow(x[1], 2) - 3*x[2] + 2*pow(x[2], 2))*vertices[1] - phys_x[1] + 4*x[0]*x[1]*vertices[16] + 4*x[0]*x[2]*vertices[25] + 4*x[1]*x[2]*vertices[28] fx[2] = (-x[0] + 2*pow(x[0], 2))*vertices[5] + (-x[1] + 2*pow(x[1], 2))*vertices[8] + (-x[2] + 2*pow(x[2], 2))*vertices[11] + (4*x[0] - 4*x[0]*x[1] - 4*x[0]*x[2] - 4*pow(x[0], 2))*vertices[14] + (4*x[1] - 4*x[1]*x[0] - 4*x[1]*x[2] - 4*pow(x[1], 2))*vertices[20] + (4*x[2] - 4*x[2]*x[0] - 4*x[2]*x[1] - 4*pow(x[2], 2))*vertices[23] + (1 - 3*x[0] + 4*x[0]*x[1] + 4*x[0]*x[2] + 2*pow(x[0], 2) - 3*x[1] + 4*x[1]*x[2] + 2*pow(x[1], 2) - 3*x[2] + 2*pow(x[2], 2))*vertices[2] - phys_x[2] + 4*x[0]*x[1]*vertices[17] + 4*x[0]*x[2]*vertices[26] + 4*x[1]*x[2]*vertices[29] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void Tet2Jacobian3D(double* rcol, double* scol, double* tcol, double* x, double* vertices, double* phys_x) noexcept nogil: rcol[0] = (-1 + 4*x[0])*vertices[3] + (-3 + 4*x[0] + 4*x[1] + 4*x[2])*vertices[0] + (4 - 8*x[0] - 4*x[1] - 4*x[2])*vertices[12] + 4*x[1]*vertices[15] - 4*x[1]*vertices[18] - 4*x[2]*vertices[21] + 4*x[2]*vertices[24] scol[0] = (-1 + 4*x[1])*vertices[6] + (-3 + 4*x[0] + 4*x[1] + 4*x[2])*vertices[0] + (4 - 4*x[0] - 8*x[1] - 4*x[2])*vertices[18] - 4*x[0]*vertices[12] + 4*x[0]*vertices[15] - 4*x[2]*vertices[21] + 4*x[2]*vertices[27] tcol[0] = (-1 + 4*x[2])*vertices[9] + (-3 + 4*x[0] + 4*x[1] + 4*x[2])*vertices[0] + (4 - 4*x[0] - 4*x[1] - 8*x[2])*vertices[21] - 4*x[0]*vertices[12] + 4*x[0]*vertices[24] - 4*x[1]*vertices[18] + 4*x[1]*vertices[27] rcol[1] = (-1 + 4*x[0])*vertices[4] + (-3 + 4*x[0] + 4*x[1] + 4*x[2])*vertices[1] + (4 - 8*x[0] - 4*x[1] - 4*x[2])*vertices[13] + 4*x[1]*vertices[16] - 4*x[1]*vertices[19] - 4*x[2]*vertices[22] + 4*x[2]*vertices[25] scol[1] = (-1 + 4*x[1])*vertices[7] + (-3 + 4*x[0] + 4*x[1] + 4*x[2])*vertices[1] + (4 - 4*x[0] - 8*x[1] - 4*x[2])*vertices[19] - 4*x[0]*vertices[13] + 4*x[0]*vertices[16] - 4*x[2]*vertices[22] + 4*x[2]*vertices[28] tcol[1] = (-1 + 4*x[2])*vertices[10] + (-3 + 4*x[0] + 4*x[1] + 4*x[2])*vertices[1] + (4 - 4*x[0] - 4*x[1] - 
8*x[2])*vertices[22] - 4*x[0]*vertices[13] + 4*x[0]*vertices[25] - 4*x[1]*vertices[19] + 4*x[1]*vertices[28] rcol[2] = (-1 + 4*x[0])*vertices[5] + (-3 + 4*x[0] + 4*x[1] + 4*x[2])*vertices[2] + (4 - 8*x[0] - 4*x[1] - 4*x[2])*vertices[14] + 4*x[1]*vertices[17] - 4*x[1]*vertices[20] - 4*x[2]*vertices[23] + 4*x[2]*vertices[26] scol[2] = (-1 + 4*x[1])*vertices[8] + (-3 + 4*x[0] + 4*x[1] + 4*x[2])*vertices[2] + (4 - 4*x[0] - 8*x[1] - 4*x[2])*vertices[20] - 4*x[0]*vertices[14] + 4*x[0]*vertices[17] - 4*x[2]*vertices[23] + 4*x[2]*vertices[29] tcol[2] = (-1 + 4*x[2])*vertices[11] + (-3 + 4*x[0] + 4*x[1] + 4*x[2])*vertices[2] + (4 - 4*x[0] - 4*x[1] - 8*x[2])*vertices[23] - 4*x[0]*vertices[14] + 4*x[0]*vertices[26] - 4*x[1]*vertices[20] + 4*x[1]*vertices[29] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void T2Function2D(double* fx, double* x, double* vertices, double* phys_x) noexcept nogil: fx[0] = (-x[0] + 2*pow(x[0], 2))*vertices[2] + (-x[1] + 2*pow(x[1], 2))*vertices[4] + (-4*x[0]*x[1] + 4*x[1] - 4*pow(x[1], 2))*vertices[10] + (4*x[0] - 4*x[0]*x[1] - 4*pow(x[0], 2))*vertices[6] + (1 - 3*x[0] + 4*x[0]*x[1] + 2*pow(x[0], 2) - 3*x[1] + 2*pow(x[1], 2))*vertices[0] - phys_x[0] + 4*x[0]*x[1]*vertices[8] fx[1] = (-x[0] + 2*pow(x[0], 2))*vertices[3] + (-x[1] + 2*pow(x[1], 2))*vertices[5] + (-4*x[0]*x[1] + 4*x[1] - 4*pow(x[1], 2))*vertices[11] + (4*x[0] - 4*x[0]*x[1] - 4*pow(x[0], 2))*vertices[7] + (1 - 3*x[0] + 4*x[0]*x[1] + 2*pow(x[0], 2) - 3*x[1] + 2*pow(x[1], 2))*vertices[1] - phys_x[1] + 4*x[0]*x[1]*vertices[9] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void T2Jacobian2D(double* rcol, double* scol, double* x, double* vertices, double* phys_x) noexcept nogil: rcol[0] = (-1 + 4*x[0])*vertices[2] + (-3 + 4*x[0] + 4*x[1])*vertices[0] + (4 - 8*x[0] - 4*x[1])*vertices[6] + 4*x[1]*vertices[8] - 4*x[1]*vertices[10] scol[0] = (-1 + 4*x[1])*vertices[4] + (-3 + 4*x[0] + 4*x[1])*vertices[0] + (4 - 4*x[0] - 8*x[1])*vertices[10] - 4*x[0]*vertices[6] + 4*x[0]*vertices[8] rcol[1] = (-1 + 4*x[0])*vertices[3] + (-3 + 4*x[0] + 4*x[1])*vertices[1] + (4 - 8*x[0] - 4*x[1])*vertices[7] + 4*x[1]*vertices[9] - 4*x[1]*vertices[11] scol[1] = (-1 + 4*x[1])*vertices[5] + (-3 + 4*x[0] + 4*x[1])*vertices[1] + (4 - 4*x[0] - 8*x[1])*vertices[11] - 4*x[0]*vertices[7] + 4*x[0]*vertices[9] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void W1Function3D(double* fx, double* x, double* vertices, double* phys_x) noexcept nogil: fx[0] = 0.5*(1 - x[0] - x[1])*(1 - x[2])*vertices[0] + 0.5*(1 - x[0] - x[1])*(1 + x[2])*vertices[9] - phys_x[0] + 0.5*x[0]*(1 - x[2])*vertices[3] + 0.5*x[0]*(1 + x[2])*vertices[12] + 0.5*x[1]*(1 - x[2])*vertices[6] + 0.5*x[1]*(1 + x[2])*vertices[15] fx[1] = 0.5*(1 - x[0] - x[1])*(1 - x[2])*vertices[1] + 0.5*(1 - x[0] - x[1])*(1 + x[2])*vertices[10] - phys_x[1] + 0.5*x[0]*(1 - x[2])*vertices[4] + 0.5*x[0]*(1 + x[2])*vertices[13] + 0.5*x[1]*(1 - x[2])*vertices[7] + 0.5*x[1]*(1 + x[2])*vertices[16] fx[2] = 0.5*(1 - x[0] - x[1])*(1 - x[2])*vertices[2] + 0.5*(1 - x[0] - x[1])*(1 + x[2])*vertices[11] - phys_x[2] + 0.5*x[0]*(1 - x[2])*vertices[5] + 0.5*x[0]*(1 + x[2])*vertices[14] + 0.5*x[1]*(1 - x[2])*vertices[8] + 0.5*x[1]*(1 + x[2])*vertices[17] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void W1Jacobian3D(double* rcol, double* scol, double* tcol, double* x, double* vertices, double* phys_x) noexcept nogil: rcol[0] = -0.5*(1 - x[2])*vertices[0] + 0.5*(1 - x[2])*vertices[3] 
- 0.5*(1 + x[2])*vertices[9] + 0.5*(1 + x[2])*vertices[12] scol[0] = -0.5*(1 - x[2])*vertices[0] + 0.5*(1 - x[2])*vertices[6] - 0.5*(1 + x[2])*vertices[9] + 0.5*(1 + x[2])*vertices[15] tcol[0] = -0.5*(1 - x[0] - x[1])*vertices[0] + 0.5*(1 - x[0] - x[1])*vertices[9] - 0.5*x[0]*vertices[3] + 0.5*x[0]*vertices[12] - 0.5*x[1]*vertices[6] + 0.5*x[1]*vertices[15] rcol[1] = -0.5*(1 - x[2])*vertices[1] + 0.5*(1 - x[2])*vertices[4] - 0.5*(1 + x[2])*vertices[10] + 0.5*(1 + x[2])*vertices[13] scol[1] = -0.5*(1 - x[2])*vertices[1] + 0.5*(1 - x[2])*vertices[7] - 0.5*(1 + x[2])*vertices[10] + 0.5*(1 + x[2])*vertices[16] tcol[1] = -0.5*(1 - x[0] - x[1])*vertices[1] + 0.5*(1 - x[0] - x[1])*vertices[10] - 0.5*x[0]*vertices[4] + 0.5*x[0]*vertices[13] - 0.5*x[1]*vertices[7] + 0.5*x[1]*vertices[16] rcol[2] = -0.5*(1 - x[2])*vertices[2] + 0.5*(1 - x[2])*vertices[5] - 0.5*(1 + x[2])*vertices[11] + 0.5*(1 + x[2])*vertices[14] scol[2] = -0.5*(1 - x[2])*vertices[2] + 0.5*(1 - x[2])*vertices[8] - 0.5*(1 + x[2])*vertices[11] + 0.5*(1 + x[2])*vertices[17] tcol[2] = -0.5*(1 - x[0] - x[1])*vertices[2] + 0.5*(1 - x[0] - x[1])*vertices[11] - 0.5*x[0]*vertices[5] + 0.5*x[0]*vertices[14] - 0.5*x[1]*vertices[8] + 0.5*x[1]*vertices[17] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/basic_octree.pyx0000644000175100001770000005673014714401662020201 0ustar00runnerdocker # distutils: libraries = STD_LIBS """ A refine-by-two AMR-specific octree """ import numpy as np cimport cython # Double up here for def'd functions cimport numpy as np cimport numpy as cnp from libc.stdlib cimport free, malloc from yt.utilities.lib.fp_utils cimport imax import sys cdef extern from "platform_dep.h": # NOTE that size_t might not be int void *alloca(int) cdef extern from "math.h": double sqrt(double x) cdef inline np.float64_t f64max(np.float64_t f0, np.float64_t f1): if f0 > f1: return f0 return f1 cdef struct OctreeNode: np.float64_t *val np.float64_t weight_val np.int64_t pos[3] np.uint64_t level int nvals int max_level # The maximum level under this node with mass. OctreeNode *children[2][2][2] OctreeNode *parent OctreeNode *next OctreeNode *up_next cdef void OTN_add_value(OctreeNode *self, np.float64_t *val, np.float64_t weight_val, int level, int treecode): cdef int i for i in range(self.nvals): self.val[i] += val[i] self.weight_val += weight_val if treecode and val[0] > 0.: self.max_level = imax(self.max_level, level) cdef void OTN_refine(OctreeNode *self, int incremental = 0): cdef int i, j, k cdef np.int64_t npos[3] for i in range(2): npos[0] = self.pos[0] * 2 + i for j in range(2): npos[1] = self.pos[1] * 2 + j # We have to be careful with allocation... for k in range(2): npos[2] = self.pos[2] * 2 + k self.children[i][j][k] = OTN_initialize( npos, self.nvals, self.val, self.weight_val, self.level + 1, self, incremental) if incremental: return for i in range(self.nvals): self.val[i] = 0.0 self.weight_val = 0.0 cdef OctreeNode *OTN_initialize(np.int64_t pos[3], int nvals, np.float64_t *val, np.float64_t weight_val, int level, OctreeNode *parent, int incremental = 0): cdef OctreeNode *node cdef int i, j, k node = malloc(sizeof(OctreeNode)) node.pos[0] = pos[0] node.pos[1] = pos[1] node.pos[2] = pos[2] node.nvals = nvals node.parent = parent node.next = NULL node.up_next = NULL node.max_level = 0 node.val = malloc( nvals * sizeof(np.float64_t)) if incremental: for i in range(nvals): node.val[i] = 0. node.weight_val = 0. 
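    # In incremental mode the new node starts from zero and accumulates
    # contributions later through OTN_add_value; otherwise it inherits a
    # copy of the parent's values, which OTN_refine then zeroes out on the
    # parent.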
else: for i in range(nvals): node.val[i] = val[i] node.weight_val = weight_val for i in range(2): for j in range(2): for k in range(2): node.children[i][j][k] = NULL node.level = level return node cdef void OTN_free(OctreeNode *node): cdef int i, j, k for i in range(2): for j in range(2): for k in range(2): if node.children[i][j][k] == NULL: continue OTN_free(node.children[i][j][k]) free(node.val) free(node) cdef class Octree: cdef int nvals cdef np.int64_t po2[80] cdef OctreeNode ****root_nodes cdef np.int64_t top_grid_dims[3] cdef int incremental # Below is for the treecode. cdef np.float64_t opening_angle # We'll store dist here so it doesn't have to be calculated twice. cdef np.float64_t dist cdef np.float64_t root_dx[3] cdef OctreeNode *last_node def __cinit__(self, np.ndarray[np.int64_t, ndim=1] top_grid_dims, int nvals, int incremental = False): cdef np.uint64_t i, j, k self.incremental = incremental cdef np.int64_t pos[3] cdef np.float64_t *vals = alloca( sizeof(np.float64_t)*nvals) cdef np.float64_t weight_val = 0.0 self.nvals = nvals for i in range(nvals): vals[i] = 0.0 self.top_grid_dims[0] = top_grid_dims[0] self.top_grid_dims[1] = top_grid_dims[1] self.top_grid_dims[2] = top_grid_dims[2] # This wouldn't be necessary if we did bitshifting... for i in range(80): self.po2[i] = 2**i # Cython doesn't seem to like sizeof(OctreeNode ***) self.root_nodes = \ malloc(sizeof(void*) * top_grid_dims[0]) # We initialize our root values to 0.0. for i in range(top_grid_dims[0]): pos[0] = i self.root_nodes[i] = \ malloc(sizeof(OctreeNode **) * top_grid_dims[1]) for j in range(top_grid_dims[1]): pos[1] = j self.root_nodes[i][j] = \ malloc(sizeof(OctreeNode *) * top_grid_dims[1]) for k in range(top_grid_dims[2]): pos[2] = k self.root_nodes[i][j][k] = OTN_initialize( pos, nvals, vals, weight_val, 0, NULL) cdef void add_to_position(self, int level, np.int64_t pos[3], np.float64_t *val, np.float64_t weight_val, treecode): cdef int i, j, k, L cdef OctreeNode *node node = self.find_on_root_level(pos, level) cdef np.int64_t fac for L in range(level): if self.incremental: OTN_add_value(node, val, weight_val, level, treecode) if node.children[0][0][0] == NULL: OTN_refine(node, self.incremental) # Maybe we should use bitwise operators? fac = self.po2[level - L - 1] i = (pos[0] >= fac*(2*node.pos[0]+1)) j = (pos[1] >= fac*(2*node.pos[1]+1)) k = (pos[2] >= fac*(2*node.pos[2]+1)) node = node.children[i][j][k] OTN_add_value(node, val, weight_val, level, treecode) cdef OctreeNode *find_on_root_level(self, np.int64_t pos[3], int level): # We need this because the root level won't just have four children # So we find on the root level, then we traverse the tree. 
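        # A worked example with hypothetical numbers (not from the original
        # source): for level = 2 we have po2[level] = 4, so a cell at
        # pos = (5, 9, 2) falls under root node (5 // 4, 9 // 4, 2 // 4)
        # = (1, 2, 0), since each root node spans po2[level] cells along
        # each axis at that level.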
cdef np.int64_t i, j, k i = (pos[0] / self.po2[level]) j = (pos[1] / self.po2[level]) k = (pos[2] / self.po2[level]) return self.root_nodes[i][j][k] @cython.boundscheck(False) @cython.wraparound(False) def add_array_to_tree(self, int level, np.ndarray[np.int64_t, ndim=1] pxs, np.ndarray[np.int64_t, ndim=1] pys, np.ndarray[np.int64_t, ndim=1] pzs, np.ndarray[np.float64_t, ndim=2] pvals, np.ndarray[np.float64_t, ndim=1] pweight_vals, int treecode = 0): cdef int npx = pxs.shape[0] cdef int p cdef cnp.float64_t *vals cdef cnp.float64_t *data = pvals.data cdef cnp.int64_t pos[3] for p in range(npx): vals = data + self.nvals*p pos[0] = pxs[p] pos[1] = pys[p] pos[2] = pzs[p] self.add_to_position(level, pos, vals, pweight_vals[p], treecode) def add_grid_to_tree(self, int level, np.ndarray[np.int64_t, ndim=1] start_index, np.ndarray[np.float64_t, ndim=2] pvals, np.ndarray[np.float64_t, ndim=2] wvals, np.ndarray[np.int32_t, ndim=2] cm): pass @cython.boundscheck(False) @cython.wraparound(False) def get_all_from_level(self, int level, int count_only = 0): cdef int i, j, k cdef int total = 0 for i in range(self.top_grid_dims[0]): for j in range(self.top_grid_dims[1]): for k in range(self.top_grid_dims[2]): total += self.count_at_level(self.root_nodes[i][j][k], level) if count_only: return total # Allocate our array cdef np.ndarray[np.int64_t, ndim=2] npos cdef np.ndarray[np.float64_t, ndim=2] nvals cdef np.ndarray[np.float64_t, ndim=1] nwvals npos = np.zeros( (total, 3), dtype='int64') nvals = np.zeros( (total, self.nvals), dtype='float64') nwvals = np.zeros( total, dtype='float64') cdef np.int64_t curpos = 0 cdef np.int64_t *pdata = npos.data cdef np.float64_t *vdata = nvals.data cdef np.float64_t *wdata = nwvals.data for i in range(self.top_grid_dims[0]): for j in range(self.top_grid_dims[1]): for k in range(self.top_grid_dims[2]): curpos += self.fill_from_level(self.root_nodes[i][j][k], level, curpos, pdata, vdata, wdata) return npos, nvals, nwvals cdef int count_at_level(self, OctreeNode *node, int level): cdef int i, j, k # We only really return a non-zero, calculated value if we are at the # level in question. if node.level == level: if self.incremental: return 1 # We return 1 if there are no finer points at this level and zero # if there are return (node.children[0][0][0] == NULL) if node.children[0][0][0] == NULL: return 0 cdef int count = 0 for i in range(2): for j in range(2): for k in range(2): count += self.count_at_level(node.children[i][j][k], level) return count cdef int fill_from_level(self, OctreeNode *node, int level, np.int64_t curpos, np.int64_t *pdata, np.float64_t *vdata, np.float64_t *wdata): cdef int i, j, k if node.level == level: if node.children[0][0][0] != NULL and not self.incremental: return 0 for i in range(self.nvals): vdata[self.nvals * curpos + i] = node.val[i] wdata[curpos] = node.weight_val pdata[curpos * 3] = node.pos[0] pdata[curpos * 3 + 1] = node.pos[1] pdata[curpos * 3 + 2] = node.pos[2] return 1 if node.children[0][0][0] == NULL: return 0 cdef np.int64_t added = 0 for i in range(2): for j in range(2): for k in range(2): added += self.fill_from_level(node.children[i][j][k], level, curpos + added, pdata, vdata, wdata) return added @cython.cdivision(True) cdef np.float64_t fbe_node_separation(self, OctreeNode *node1, OctreeNode *node2): # Find the distance between the two nodes. cdef np.float64_t dx1, dx2, p1, p2, dist cdef int i dist = 0.0 for i in range(3): # Discover the appropriate dx for each node/dim. 
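            # Numeric sketch with assumed values (not from the source): for
            # root_dx[i] = 1.0, a level-2 node has dx = 1.0 / 2**2 = 0.25,
            # so pos[i] = 3 is cell-centered at 3 * 0.25 + 0.25 / 2 = 0.875.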
dx1 = self.root_dx[i] / ( self.po2[node1.level]) dx2 = self.root_dx[i] / ( self.po2[node2.level]) # The added term is to re-cell center the data. p1 = ( node1.pos[i]) * dx1 + dx1/2. p2 = ( node2.pos[i]) * dx2 + dx2/2. dist += (p1 - p2) * (p1 - p2) dist = sqrt(dist) return dist @cython.cdivision(True) cdef np.float64_t fbe_opening_angle(self, OctreeNode *node1, OctreeNode *node2): # Calculate the opening angle of node2 upon the center of node1. # In order to keep things simple, we will not assume symmetry in all # three directions of the octree, and we'll use the largest dimension # if the tree is not symmetric. This is not strictly the opening angle # the purest sense, but it's slightly more accurate, so it's OK. # This is done in code units to match the distance calculation. cdef np.float64_t d2, dx2, dist cdef np.int64_t n2 cdef int i d2 = 0.0 if node1 is node2: return 100000.0 # Just some large number. if self.top_grid_dims[1] == self.top_grid_dims[0] and \ self.top_grid_dims[2] == self.top_grid_dims[0]: # Symmetric n2 = self.po2[node2.level] * self.top_grid_dims[0] d2 = 1. / ( n2) else: # Not symmetric for i in range(3): n2 = self.po2[node2.level] * self.top_grid_dims[i] dx2 = 1. / ( n2) d2 = f64max(d2, dx2) # Now calculate the opening angle. dist = self.fbe_node_separation(node1, node2) self.dist = dist return d2 / dist cdef void set_next(self, OctreeNode *node, int treecode): # This sets up the linked list, pointing node.next to the next node # in the iteration order. cdef int i, j, k if treecode and node.val[0] is not 0.: # In a treecode, we only make a new next link if this node has mass. self.last_node.next = node self.last_node = node elif treecode and node.val[0] is 0.: # No mass means it's children have no mass, no need to dig an # further. return else: # We're not doing the treecode, but we still want a linked list, # we don't care about val[0] necessarily. self.last_node.next = node self.last_node = node if node.children[0][0][0] is NULL: return for i in range(2): for j in range(2): for k in range(2): self.set_next(node.children[i][j][k], treecode) return cdef void set_up_next(self, OctreeNode *node): # This sets up a second linked list, pointing node.up_next to the next # node in the list that is at the same or lower (coarser) level than # this node. This is useful in the treecode for skipping over nodes # that don't need to be inspected. cdef OctreeNode *initial_next cdef OctreeNode *temp_next initial_next = node.next temp_next = node.next if node.next is NULL: return while temp_next.level > node.level: temp_next = temp_next.next if temp_next is NULL: break node.up_next = temp_next self.set_up_next(initial_next) def finalize(self, int treecode = 0): # Set up the linked list for the nodes. # Set treecode = 1 if nodes with no mass are to be skipped in the # list. cdef int i, j, k, sum, top_grid_total, ii, jj, kk self.last_node = self.root_nodes[0][0][0] for i in range(self.top_grid_dims[0]): for j in range(self.top_grid_dims[1]): for k in range(self.top_grid_dims[2]): self.set_next(self.root_nodes[i][j][k], treecode) # Now we want to link to the next node in the list that is # on a level the same or lower (coarser) than us. This will be used # during a treecode search so we can skip higher-level (finer) nodes. 
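        # Hypothetical illustration of the up_next links: if the .next list
        # runs A(level 0) -> B(level 1) -> C(level 1) -> D(level 0), then
        # A.up_next is D, so a treecode walk that accepts all of A's subtree
        # as one mass can hop straight over B and C.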
sum = 1 top_grid_total = self.top_grid_dims[0] * self.top_grid_dims[1] * \ self.top_grid_dims[2] for i in range(self.top_grid_dims[0]): for j in range(self.top_grid_dims[1]): for k in range(self.top_grid_dims[2]): self.set_up_next(self.root_nodes[i][j][k]) # Point the root_nodes.up_next to the next root_node in the # list, except for the last one which stays pointing to NULL. if sum < top_grid_total - 1: ii = i jj = j kk = (k + 1) % self.top_grid_dims[2] if kk < k: jj = (j + 1) % self.top_grid_dims[1] if jj < j: ii = (i + 1) % self.top_grid_dims[0] self.root_nodes[i][j][k].up_next = \ self.root_nodes[ii][jj][kk] sum += 1 @cython.cdivision(True) cdef np.float64_t fbe_main(self, np.float64_t potential, int truncate, np.float64_t kinetic): # The work is done here. Starting at the top of the linked list of # nodes, cdef np.float64_t angle, dist cdef OctreeNode *this_node cdef OctreeNode *pair_node this_node = self.root_nodes[0][0][0] while this_node is not NULL: # Iterate down the list to a node that either has no children and # is at the max_level of the tree, or to a node where # all of its children are massless. The second case is when data # from a level that isn't the deepest has been added to the tree. while this_node.max_level is not this_node.level: this_node = this_node.next # In case we reach the end of the list... if this_node is NULL: break if this_node is NULL: break if truncate and potential > kinetic: print('Truncating...') break pair_node = this_node.next while pair_node is not NULL: # If pair_node is massless, we can jump to up_next, because # nothing pair_node contains will have mass either. # I think that this should primarily happen for root_nodes # created for padding to make the region cubical. if pair_node.val[0] is 0.0: pair_node = pair_node.up_next continue # If pair_node is a childless node, or is a coarser node with # no children, we can calculate the pot # right now, and get a new pair_node. if pair_node.max_level is pair_node.level: dist = self.fbe_node_separation(this_node, pair_node) potential += this_node.val[0] * pair_node.val[0] / dist if truncate and potential > kinetic: break pair_node = pair_node.next continue # Next, if the opening angle to pair_node is small enough, # calculate the potential and get a new pair_node using # up_next because we don't need to look at pair_node's children. angle = self.fbe_opening_angle(this_node, pair_node) if angle < self.opening_angle: # self.dist was just set in fbe_opening_angle, so we # can use it here without re-calculating it for these two # nodes. potential += this_node.val[0] * pair_node.val[0] / self.dist if truncate and potential > kinetic: break # We can skip all the nodes that are contained within # pair_node, saving time walking the linked list. pair_node = pair_node.up_next # If we've gotten this far, pair_node has children, but it's # too coarse, so we simply dig deeper using .next. else: pair_node = pair_node.next # We've exhausted the pair_nodes. # Now we find a new this_node in the list, and do more searches # over pair_node. this_node = this_node.next return potential @cython.boundscheck(False) @cython.wraparound(False) def find_binding_energy(self, int truncate, np.float64_t kinetic, np.ndarray[np.float64_t, ndim=1] root_dx, float opening_angle = 1.0): r"""Find the binding energy of an ensemble of data points using the treecode method. Note: The first entry of the vals array MUST be Mass. Any other values will be ignored, including the weight array. 
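        Examples
        --------
        A hypothetical invocation (values assumed for illustration only),
        for an octree whose first value field is mass:

        >>> pot = octree.find_binding_energy(0, 0.0, np.ones(3),
        ...                                  opening_angle=1.0)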
""" # The real work is done in fbe_main(), this just sets things up # and returns the potential. cdef int i cdef np.float64_t potential potential = 0.0 self.opening_angle = opening_angle for i in range(3): self.root_dx[i] = root_dx[i] potential = self.fbe_main(potential, truncate, kinetic) return potential cdef int node_ID(self, OctreeNode *node): # Returns an unique ID for this node based on its position and level. cdef int ID, offset, root cdef np.uint64_t i cdef np.int64_t this_grid_dims[3] offset = 0 root = 1 for i in range(3): root *= self.top_grid_dims[i] this_grid_dims[i] = self.top_grid_dims[i] * 2**node.level for i in range(node.level): offset += root * 2**(3 * i) ID = offset + (node.pos[0] + this_grid_dims[0] * (node.pos[1] + \ this_grid_dims[1] * node.pos[2])) return ID cdef int node_ID_on_level(self, OctreeNode *node): # Returns the node ID on node.level for this node. cdef int ID, i cdef np.int64_t this_grid_dims[3] for i in range(3): this_grid_dims[i] = self.top_grid_dims[i] * 2**node.level ID = node.pos[0] + this_grid_dims[0] * (node.pos[1] + \ this_grid_dims[1] * node.pos[2]) return ID cdef void print_node_info(self, OctreeNode *node): cdef int i, j, k line = "%d\t" % self.node_ID(node) if node.next is not NULL: line += "%d\t" % self.node_ID(node.next) else: line += "-1\t" if node.up_next is not NULL: line += "%d\t" % self.node_ID(node.up_next) else: line += "-1\t" line += "%d\t%d\t%d\t%d\t" % (node.level,node.pos[0],node.pos[1],node.pos[2]) for i in range(node.nvals): line += "%1.5e\t" % node.val[i] line += "%f\t" % node.weight_val line += "%s\t%s\t" % (node.children[0][0][0] is not NULL, node.parent is not NULL) if node.children[0][0][0] is not NULL: nline = "" for i in range(2): for j in range(2): for k in range(2): nline += "%d," % self.node_ID(node.children[i][j][k]) line += nline print(line) return cdef void iterate_print_nodes(self, OctreeNode *node): cdef int i, j, k self.print_node_info(node) if node.children[0][0][0] is NULL: return for i in range(2): for j in range(2): for k in range(2): self.iterate_print_nodes(node.children[i][j][k]) return def print_all_nodes(self): r""" Prints out information about all the nodes in the octree. Parameters ---------- None. Examples -------- >>> octree.print_all_nodes() (many lines of data) """ cdef int i, j, k sys.stdout.flush() sys.stderr.flush() line = "ID\tnext\tup_n\tlevel\tx\ty\tz\t" for i in range(self.nvals): line += "val%d\t\t" % i line += "weight\t\tchild?\tparent?\tchildren" print(line) for i in range(self.top_grid_dims[0]): for j in range(self.top_grid_dims[1]): for k in range(self.top_grid_dims[2]): self.iterate_print_nodes(self.root_nodes[i][j][k]) sys.stdout.flush() sys.stderr.flush() return def __dealloc__(self): cdef int i, j, k for i in range(self.top_grid_dims[0]): for j in range(self.top_grid_dims[1]): for k in range(self.top_grid_dims[2]): OTN_free(self.root_nodes[i][j][k]) free(self.root_nodes[i][j]) free(self.root_nodes[i]) free(self.root_nodes) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/bitarray.pxd0000644000175100001770000000673614714401662017350 0ustar00runnerdocker""" Bit array functions """ import numpy as np cimport cython cimport numpy as np cdef inline void ba_set_value(np.uint8_t *buf, np.uint64_t ind, np.uint8_t val) noexcept nogil: # This assumes 8 bit buffer. If value is greater than zero (thus allowing # us to use 1-255 as 'True') then we identify first the index in the buffer # we are setting. 
    # We do this by truncating the index by bit-shifting it to the right
    # three times, essentially dividing it by eight (and taking the floor.)
    # The next step is to turn *on* what we're attempting to turn on, which
    # means taking our index and truncating it to the first 3 bits (which we
    # do with an & operation) and then turning on the correct bit.
    #
    # So if we're asking for index 33 in the bitarray, we would want the 5th
    # uint8 element (index 4), then the 2nd bit (index 1).
    #
    # To turn it on, we logical *or* with that.  To turn it off, we logical
    # *and* with the *inverse*, which will allow everything *but* that bit
    # to stay on.
    if val > 0:
        buf[ind >> 3] |= (1 << (ind & 7))
    else:
        buf[ind >> 3] &= ~(1 << (ind & 7))

cdef inline np.uint8_t ba_get_value(np.uint8_t *buf, np.uint64_t ind) noexcept nogil:
    cdef np.uint8_t rv = (buf[ind >> 3] & (1 << (ind & 7)))
    if rv == 0: return 0
    return 1

cdef inline void ba_set_range(np.uint8_t *buf, np.uint64_t start_ind,
                              np.uint64_t stop_ind, np.uint8_t val) nogil:
    # Should this be inclusive of both end points?  I think it should not,
    # to match slicing semantics.
    #
    # We need to figure out the first and last values, and then we just set
    # the ones in-between to 255.
    if stop_ind < start_ind: return
    cdef np.uint64_t i
    cdef np.uint8_t j, bitmask
    cdef np.uint64_t buf_start = start_ind >> 3
    cdef np.uint64_t buf_stop = stop_ind >> 3
    cdef np.uint8_t start_j = start_ind & 7
    cdef np.uint8_t stop_j = stop_ind & 7
    if buf_start == buf_stop:
        for j in range(start_j, stop_j):
            ba_set_value(&buf[buf_start], j, val)
        return
    bitmask = 0
    for j in range(start_j, 8):
        bitmask |= (1 << j)
    if val > 0:
        buf[buf_start] |= bitmask
    else:
        buf[buf_start] &= ~bitmask
    if val > 0:
        bitmask = 255
    else:
        bitmask = 0
    for i in range(buf_start + 1, buf_stop):
        buf[i] = bitmask
    bitmask = 0
    for j in range(0, stop_j):
        bitmask |= (1 << j)
    if val > 0:
        buf[buf_stop] |= bitmask
    else:
        buf[buf_stop] &= ~bitmask

cdef inline np.uint8_t _num_set_bits(np.uint8_t b):
    # https://stackoverflow.com/questions/30688465/how-to-check-the-number-of-set-bits-in-an-8-bit-unsigned-char
    b = b - ((b >> 1) & 0x55)
    b = (b & 0x33) + ((b >> 2) & 0x33)
    return (((b + (b >> 4)) & 0x0F) * 0x01)

cdef class bitarray:
    cdef np.uint8_t *buf
    cdef np.uint64_t size
    cdef np.uint64_t buf_size  # Not exactly the same
    cdef np.uint8_t final_bitmask
    cdef public object ibuf

    cdef void _set_value(self, np.uint64_t ind, np.uint8_t val)
    cdef np.uint8_t _query_value(self, np.uint64_t ind)
    cdef void _set_range(self, np.uint64_t start, np.uint64_t stop,
                         np.uint8_t val)
    cdef np.uint64_t _count(self)
    cdef bitarray _logical_and(self, bitarray other, bitarray result = *)
    cdef bitarray _logical_or(self, bitarray other, bitarray result = *)
    cdef bitarray _logical_xor(self, bitarray other, bitarray result = *)

yt-4.4.0/yt/utilities/lib/bitarray.pyx

# distutils: libraries = STD_LIBS
"""
Bit array functions
"""

import numpy as np

cimport cython
cimport numpy as np


cdef class bitarray:
    @cython.boundscheck(False)
    @cython.wraparound(False)
    @cython.cdivision(True)
    def __cinit__(self, np.int64_t size = -1,
                  np.ndarray[np.uint8_t, ndim=1, cast=True] arr = None):
        r"""This is a bitarray, which flips individual bits to on/off inside
        a uint8 container array.

        By encoding on/off inside each bit in a uint8 array, we can compress
        boolean information down by up to a factor of 8.  Either an input
        array or a size must be provided.

        Parameters
        ----------
        size : int
            The size we should pre-allocate.
        arr : array-like
            An input array to turn into a bitarray.

        Examples
        --------
        >>> arr_in1 = np.array([True, True, False])
        >>> arr_in2 = np.array([False, True, True])
        >>> a = ba.bitarray(arr = arr_in1)
        >>> b = ba.bitarray(arr = arr_in2)
        >>> print(a & b)
        >>> print((a & b).as_bool_array())
        """
        cdef np.uint64_t i
        if size == -1 and arr is None:
            raise RuntimeError
        elif size == -1:
            size = arr.size
        elif size != -1 and arr is not None:
            if size != arr.size:
                raise RuntimeError
        self.buf_size = (size >> 3)
        cdef np.uint8_t bitmask = 255
        if (size & 7) != 0:
            # We need an extra one if we've got any lingering bits
            self.buf_size += 1
            bitmask = 0
            for i in range(size & 7):
                bitmask |= (1 << i)
        self.final_bitmask = bitmask
        cdef np.ndarray[np.uint8_t, ndim=1] ibuf_t = np.zeros(self.buf_size,
                                                              "uint8")
        self.ibuf = ibuf_t
        self.buf = <np.uint8_t *> ibuf_t.data
        self.size = size
        if arr is not None:
            self.set_from_array(arr)
        else:
            for i in range(self.buf_size):
                self.buf[i] = 0

    @cython.boundscheck(False)
    @cython.wraparound(False)
    @cython.cdivision(True)
    def set_from_array(self, np.ndarray[np.uint8_t, cast=True] arr not None):
        r"""Given an array that is either uint8_t or boolean, set the values
        of this array to match it.

        Parameters
        ----------
        arr : array, castable to uint8
            The array we set from.
        """
        cdef np.uint64_t i, j
        cdef np.uint8_t *btemp = self.buf
        arr = np.ascontiguousarray(arr)
        j = 0
        for i in range(self.size):
            btemp[i >> 3] = btemp[i >> 3] | (arr[i] << (j))
            j += 1
            if j == 8: j = 0

    @cython.boundscheck(False)
    @cython.wraparound(False)
    @cython.cdivision(True)
    def as_bool_array(self):
        r"""Return a copy of this array, as a boolean array.

        All of the values encoded in this bitarray are expanded into boolean
        values in a new array and returned.

        Returns
        -------
        arr : numpy array of type bool
            The uint8 values expanded into boolean values
        """
        cdef np.uint64_t i, j
        cdef np.uint8_t *btemp = self.buf
        cdef np.ndarray[np.uint8_t, ndim=1] output
        output = np.zeros(self.size, "uint8")
        j = 0
        for i in range(self.size):
            output[i] = (btemp[i >> 3] >> (j)) & 1
            j += 1
            if j == 8: j = 0
        return output.astype("bool")

    cdef void _set_value(self, np.uint64_t ind, np.uint8_t val):
        ba_set_value(self.buf, ind, val)

    def set_value(self, np.uint64_t ind, np.uint8_t val):
        r"""Set the on/off value of a given bit.

        Modify the value encoded in a given index.

        Parameters
        ----------
        ind : int
            The index to query in the bitarray.
        val : bool or uint8_t
            What to set the index to

        Examples
        --------
        >>> arr_in = np.array([True, True, False])
        >>> a = ba.bitarray(arr = arr_in)
        >>> print(a.set_value(2, 1))
        """
        ba_set_value(self.buf, ind, val)

    cdef np.uint8_t _query_value(self, np.uint64_t ind):
        return ba_get_value(self.buf, ind)

    def query_value(self, np.uint64_t ind):
        r"""Query the on/off value of a given bit.

        Return the value encoded in a given index.

        Parameters
        ----------
        ind : int
            The index to query in the bitarray.

        Examples
        --------
        >>> arr_in = np.array([True, True, False])
        >>> a = ba.bitarray(arr = arr_in)
        >>> print(a.query_value(2))
        """
        return ba_get_value(self.buf, ind)

    @cython.boundscheck(False)
    @cython.wraparound(False)
    @cython.cdivision(True)
    cdef void _set_range(self, np.uint64_t start, np.uint64_t stop,
                         np.uint8_t val):
        ba_set_range(self.buf, start, stop, val)

    @cython.boundscheck(False)
    @cython.wraparound(False)
    @cython.cdivision(True)
    def set_range(self, np.uint64_t start, np.uint64_t stop, np.uint8_t val):
        r"""Set a range of values to on/off.  Uses slice-style indexing.  No
        return value.

        Parameters
        ----------
        start : int
            The starting component of a slice.
        stop : int
            The ending component of a slice.
val : bool or uint8_t What to set the range to Examples -------- >>> arr_in = np.array([True, True, False, True, True, False]) >>> a = ba.bitarray(arr = arr_in) >>> a.set_range(0, 3, 0) """ ba_set_range(self.buf, start, stop, val) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef np.uint64_t _count(self): cdef np.uint64_t count = 0 cdef np.uint64_t i self.buf[self.buf_size - 1] &= self.final_bitmask for i in range(self.buf_size): count += _num_set_bits(self.buf[i]) return count @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def count(self): r"""Count the number of values set in the array. Parameters ---------- Examples -------- >>> arr_in = np.array([True, True, False, True, True, False]) >>> a = ba.bitarray(arr = arr_in) >>> a.count() """ return self._count() cdef bitarray _logical_and(self, bitarray other, bitarray result = None): # Create a place to put it. Note that we might have trailing values, # we actually need to reset the ending set. if other.size != self.size: raise IndexError if result is None: result = bitarray(self.size) for i in range(self.buf_size): result.buf[i] = other.buf[i] & self.buf[i] result.buf[self.buf_size - 1] &= self.final_bitmask return result def logical_and(self, bitarray other, bitarray result = None): return self._logical_and(other, result) def __and__(self, bitarray other): # Wrap it directly here. return self.logical_and(other) def __iand__(self, bitarray other): rv = self.logical_and(other, self) return rv cdef bitarray _logical_or(self, bitarray other, bitarray result = None): if other.size != self.size: raise IndexError if result is None: result = bitarray(self.size) for i in range(self.buf_size): result.buf[i] = other.buf[i] | self.buf[i] result.buf[self.buf_size - 1] &= self.final_bitmask return result def logical_or(self, bitarray other, bitarray result = None): return self._logical_or(other, result) def __or__(self, bitarray other): return self.logical_or(other) def __ior__(self, bitarray other): return self.logical_or(other, self) cdef bitarray _logical_xor(self, bitarray other, bitarray result = None): if other.size != self.size: raise IndexError if result is None: result = bitarray(self.size) for i in range(self.buf_size): result.buf[i] = other.buf[i] ^ self.buf[i] result.buf[self.buf_size - 1] &= self.final_bitmask return result def logical_xor(self, bitarray other, bitarray result = None): return self._logical_xor(other, result) def __xor__(self, bitarray other): return self.logical_xor(other) def __ixor__(self, bitarray other): return self.logical_xor(other, self) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/bounded_priority_queue.pxd0000644000175100001770000000237714714401662022315 0ustar00runnerdocker""" A cython implementation of the bounded priority queue This is a priority queue that only keeps track of smallest k values that have been added to it. 
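As a worked illustration (not from the original text): with max_elements = 3,
adding the values 5., 1., 4. and then 2. leaves the queue holding
{1., 2., 4.}, with the largest retained value, 4., at the root of the heap so
that it is the first to be displaced by any smaller addition.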
""" import numpy as np cimport numpy as np cdef class BoundedPriorityQueue: cdef public np.float64_t[:] heap cdef np.float64_t* heap_ptr cdef public np.int64_t[:] pids cdef np.int64_t* pids_ptr cdef int use_pids cdef np.intp_t size cdef np.intp_t max_elements cdef int max_heapify(self, np.intp_t index) except -1 nogil cdef int propagate_up(self, np.intp_t index) except -1 nogil cdef int add(self, np.float64_t val) except -1 nogil cdef int add_pid(self, np.float64_t val, np.int64_t pid) except -1 nogil cdef int heap_append(self, np.float64_t val, np.int64_t ind) except -1 nogil cdef np.float64_t extract_max(self) except -1 nogil cdef int validate_heap(self) except -1 nogil cdef class NeighborList: cdef public np.float64_t[:] data cdef np.float64_t* data_ptr cdef public np.int64_t[:] pids cdef np.int64_t* pids_ptr cdef np.intp_t size cdef np.intp_t _max_size cdef int _update_memview(self) except -1 cdef int _extend(self) except -1 nogil cdef int add_pid(self, np.float64_t val, np.int64_t ind) except -1 nogil ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/bounded_priority_queue.pyx0000644000175100001770000001747114714401662022343 0ustar00runnerdocker""" A cython implementation of the bounded priority queue This is a priority queue that only keeps track of smallest k values that have been added to it. This priority queue is implemented with the configuration of having the largest element at the beginning - this exploited to store nearest neighbour lists. """ import numpy as np cimport cython cimport numpy as np from cpython.mem cimport PyMem_Free, PyMem_Malloc, PyMem_Realloc cdef class BoundedPriorityQueue: def __cinit__(self, np.intp_t max_elements, np.intp_t pids=0): self.max_elements = max_elements # mark invalid recently values with -1 self.heap = np.zeros(max_elements)-1 self.heap_ptr = &(self.heap[0]) # only allocate memory if we intend to store particle ID's self.use_pids = pids if pids == 1: self.pids = np.zeros(max_elements, dtype="int64")-1 self.pids_ptr = &(self.pids[0]) self.size = 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) cdef int max_heapify(self, np.intp_t index) except -1 nogil: cdef np.intp_t left = 2 * index + 1 cdef np.intp_t right = 2 * index + 2 cdef np.intp_t largest = index if left < self.size and self.heap_ptr[left] > self.heap_ptr[largest]: largest = left if right < self.size and self.heap_ptr[right] > self.heap_ptr[largest]: largest = right if largest != index: self.heap_ptr[index], self.heap_ptr[largest] = \ self.heap_ptr[largest], self.heap_ptr[index] if self.use_pids: self.pids_ptr[index], self.pids_ptr[largest] = \ self.pids_ptr[largest], self.pids_ptr[index] self.max_heapify(largest) return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) cdef int propagate_up(self, np.intp_t index) except -1 nogil: while index != 0 and self.heap_ptr[(index - 1) // 2] < self.heap_ptr[index]: self.heap_ptr[index], self.heap_ptr[(index - 1) // 2] = self.heap_ptr[(index - 1) // 2], self.heap_ptr[index] if self.use_pids: self.pids_ptr[index], self.pids_ptr[(index - 1) // 2] = self.pids_ptr[(index - 1) // 2], self.pids_ptr[index] index = (index - 1) // 2 return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) cdef int add(self, np.float64_t val) except -1 nogil: # if not at max size append, if at max size, only append if 
smaller than # the maximum value if self.size == self.max_elements: if val < self.heap_ptr[0]: self.extract_max() self.heap_append(val, -1) else: self.heap_append(val, -1) return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) cdef int add_pid(self, np.float64_t val, np.int64_t ind) except -1 nogil: if self.size == self.max_elements: if val < self.heap_ptr[0]: self.extract_max() self.heap_append(val, ind) else: self.heap_append(val, ind) return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) cdef int heap_append(self, np.float64_t val, np.int64_t ind) except -1 nogil: self.heap_ptr[self.size] = val if self.use_pids: self.pids_ptr[self.size] = ind self.size += 1 self.propagate_up(self.size - 1) return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) cdef np.float64_t extract_max(self) except -1 nogil: cdef np.float64_t maximum = self.heap_ptr[0] cdef np.float64_t val cdef np.int64_t ind val = self.heap_ptr[self.size-1] self.heap_ptr[self.size-1] = -1 if self.use_pids: ind = self.pids_ptr[self.size-1] self.pids_ptr[self.size-1] = -1 self.size -= 1 if self.size > 0: self.heap_ptr[0] = val if self.use_pids: self.pids_ptr[0] = ind self.max_heapify(0) return maximum cdef int validate_heap(self) except -1 nogil: # this function loops through every element in the heap, if any children # are greater than their parents then we return zero, which is an error # as the heap condition is not satisfied cdef int i, index for i in range(self.size-1, -1, -1): index = i while index != 0: if self.heap_ptr[index] > self.heap_ptr[(index - 1) // 2]: return 0 index = (index - 1) // 2 return 1 cdef class NeighborList: def __cinit__(self, np.intp_t init_size=32): self.size = 0 self._max_size = init_size self.data_ptr = PyMem_Malloc( self._max_size * sizeof(np.float64_t) ) self.pids_ptr = PyMem_Malloc( self._max_size * sizeof(np.int64_t) ) self._update_memview() def __dealloc__(self): PyMem_Free(self.data_ptr) PyMem_Free(self.pids_ptr) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) cdef int _update_memview(self) except -1: self.data = self.data_ptr self.pids = self.pids_ptr @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) cdef int _extend(self) except -1 nogil: if self.size == self._max_size: self._max_size *= 2 with gil: self.data_ptr = PyMem_Realloc( self.data_ptr, self._max_size * sizeof(np.float64_t) ) self.pids_ptr = PyMem_Realloc( self.pids_ptr, self._max_size * sizeof(np.int64_t) ) self._update_memview() return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @cython.initializedcheck(False) cdef int add_pid(self, np.float64_t val, np.int64_t ind) except -1 nogil: self._extend() self.data_ptr[self.size] = val self.pids_ptr[self.size] = ind self.size += 1 return 0 # these are test functions which are called from # yt/utilities/lib/tests/test_nn.py # they are stored here to allow easy interaction with functions not exposed at # the python level def validate_pid(): m = BoundedPriorityQueue(5, True) # Add elements to the queue elements = [0.1, 0.25, 1.33, 0.5, 3.2, 4.6, 2.0, 0.4, 4.0, .001] pids = [1,2,3,4,5,6,7,8,9,10] for el, pid in zip(elements, pids): m.add_pid(el, pid) m.extract_max() m.extract_max() m.extract_max() return np.asarray(m.heap), np.asarray(m.pids) def 
validate(): m = BoundedPriorityQueue(5) # Add elements to the queue for el in [0.1, 0.25, 1.33, 0.5, 3.2, 4.6, 2.0, 0.4, 4.0, .001]: m.add(el) m.extract_max() m.extract_max() m.extract_max() return np.asarray(m.heap) def validate_nblist(): nblist = NeighborList(init_size=2) for i in range(4): nblist.add_pid(1.0, i) # Copy is necessary here. Without it, the allocated memory would be freed. # Leaving random data array. return np.asarray(nblist.data).copy(), np.asarray(nblist.pids).copy() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/bounding_volume_hierarchy.pxd0000644000175100001770000000600714714401662022754 0ustar00runnerdockercimport cython import numpy as np cimport numpy as np from yt.utilities.lib.element_mappings cimport ElementSampler from yt.utilities.lib.primitives cimport BBox, Ray cdef extern from "mesh_triangulation.h": enum: MAX_NUM_TRI int HEX_NV int HEX_NT int TETRA_NV int TETRA_NT int WEDGE_NV int WEDGE_NT int triangulate_hex[MAX_NUM_TRI][3] int triangulate_tetra[MAX_NUM_TRI][3] int triangulate_wedge[MAX_NUM_TRI][3] int hex20_faces[6][8] int tet10_faces[4][6] # node for the bounding volume hierarchy cdef struct BVHNode: np.int64_t begin np.int64_t end BVHNode* left BVHNode* right BBox bbox # pointer to function that computes primitive intersection ctypedef np.int64_t (*intersect_func_type)(const void* primitives, const np.int64_t item, Ray* ray) noexcept nogil # pointer to function that computes primitive centroids ctypedef void (*centroid_func_type)(const void *primitives, const np.int64_t item, np.float64_t[3] centroid) noexcept nogil # pointer to function that computes primitive bounding boxes ctypedef void (*bbox_func_type)(const void *primitives, const np.int64_t item, BBox* bbox) noexcept nogil cdef class BVH: cdef BVHNode* root cdef void* primitives cdef np.int64_t* prim_ids cdef np.float64_t** centroids cdef BBox* bboxes cdef np.float64_t* vertices cdef np.float64_t* field_data cdef np.int64_t num_prim_per_elem cdef np.int64_t num_prim cdef np.int64_t num_elem cdef np.int64_t num_verts_per_elem cdef np.int64_t num_field_per_elem cdef int[MAX_NUM_TRI][3] tri_array cdef ElementSampler sampler cdef centroid_func_type get_centroid cdef bbox_func_type get_bbox cdef intersect_func_type get_intersect cdef np.int64_t _partition(self, np.int64_t begin, np.int64_t end, np.int64_t ax, np.float64_t split) noexcept nogil cdef void _set_up_triangles(self, np.float64_t[:, :] vertices, np.int64_t[:, :] indices) noexcept nogil cdef void _set_up_patches(self, np.float64_t[:, :] vertices, np.int64_t[:, :] indices) noexcept nogil cdef void _set_up_tet_patches(self, np.float64_t[:, :] vertices, np.int64_t[:, :] indices) noexcept nogil cdef void intersect(self, Ray* ray) noexcept nogil cdef void _get_node_bbox(self, BVHNode* node, np.int64_t begin, np.int64_t end) noexcept nogil cdef void _recursive_intersect(self, Ray* ray, BVHNode* node) noexcept nogil cdef BVHNode* _recursive_build(self, np.int64_t begin, np.int64_t end) noexcept nogil cdef void _recursive_free(self, BVHNode* node) noexcept nogil ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/bounding_volume_hierarchy.pyx0000644000175100001770000004467014714401662023011 0ustar00runnerdocker# distutils: libraries = STD_LIBS # distutils: include_dirs = LIB_DIR # distutils: language = c++ # distutils: extra_compile_args = CPP14_FLAG OMP_ARGS # distutils: extra_link_args = CPP14_FLAG OMP_ARGS 
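# A rough pure-Python mirror of the in-place midpoint partition performed by
# BVH._partition below -- a sketch over an assumed list of centroid tuples,
# not part of the compiled module.  Primitives whose centroid exceeds
# ``split`` along axis ``ax`` are swapped toward the front, and the boundary
# index is returned:
#
#     def partition(prims, ax, split):
#         mid = 0
#         for i in range(len(prims)):
#             if prims[i][ax] > split:
#                 prims[i], prims[mid] = prims[mid], prims[i]
#                 mid += 1
#         return mid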
cimport cython import numpy as np cimport numpy as np from libc.math cimport fabs from libc.stdlib cimport free, malloc from cython.parallel import parallel, prange from yt.utilities.lib.element_mappings cimport ( ElementSampler, P1Sampler3D, Q1Sampler3D, S2Sampler3D, Tet2Sampler3D, W1Sampler3D, ) from yt.utilities.lib.primitives cimport ( BBox, Patch, Ray, TetPatch, Triangle, patch_bbox, patch_centroid, ray_bbox_intersect, ray_patch_intersect, ray_tet_patch_intersect, ray_triangle_intersect, tet_patch_bbox, tet_patch_centroid, triangle_bbox, triangle_centroid, ) from .image_samplers cimport ImageSampler cdef ElementSampler Q1Sampler = Q1Sampler3D() cdef ElementSampler P1Sampler = P1Sampler3D() cdef ElementSampler W1Sampler = W1Sampler3D() cdef ElementSampler S2Sampler = S2Sampler3D() cdef ElementSampler Tet2Sampler = Tet2Sampler3D() cdef extern from "platform_dep.h" nogil: double fmax(double x, double y) double fmin(double x, double y) # define some constants cdef np.float64_t INF = np.inf cdef np.int64_t LEAF_SIZE = 16 cdef class BVH: ''' This class implements a bounding volume hierarchy (BVH), a spatial acceleration structure for fast ray-tracing. A BVH is like a kd-tree, except that instead of partitioning the *volume* of the parent to create the children, we partition the primitives themselves into 'left' or 'right' sub-trees. The bounding volume for a node is then determined by computing the bounding volume of the primitives that belong to it. This allows us to quickly discard primitives that are not close to intersecting a given ray. This class is currently used to provide software 3D rendering support for finite element datasets. For 1st-order meshes, every element of the mesh is triangulated, and this set of triangles forms the primitives that will be used for the ray-trace. The BVH can then quickly determine which element is hit by each ray associated with the image plane, and the appropriate interpolation can be performed to sample the finite element solution at that hit position. Currently, 2nd-order meshes are only supported for 20-node hexahedral elements. There, the primitive type is a bi-quadratic patch instead of a triangle, and each intersection involves computing a Newton-Raphson solve. See yt/utilities/lib/primitives.pyx for the definitions of both of these primitive types. ''' @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def __cinit__(self, np.float64_t[:, :] vertices, np.int64_t[:, :] indices, np.float64_t[:, :] field_data): self.num_elem = indices.shape[0] self.num_verts_per_elem = indices.shape[1] self.num_field_per_elem = field_data.shape[1] # We need to figure out what kind of elements we've been handed. if self.num_verts_per_elem == 8: self.num_prim_per_elem = HEX_NT self.tri_array = triangulate_hex self.sampler = Q1Sampler elif self.num_verts_per_elem == 6: self.num_prim_per_elem = WEDGE_NT self.tri_array = triangulate_wedge self.sampler = W1Sampler elif self.num_verts_per_elem == 4: self.num_prim_per_elem = TETRA_NT self.tri_array = triangulate_tetra self.sampler = P1Sampler elif self.num_verts_per_elem == 20: self.num_prim_per_elem = 6 self.sampler = S2Sampler elif self.num_verts_per_elem == 10: self.num_prim_per_elem = 4 self.sampler = Tet2Sampler else: raise NotImplementedError("Could not determine element type for " "nverts = %d. 
" % self.num_verts_per_elem) self.num_prim = self.num_prim_per_elem*self.num_elem # allocate storage cdef np.int64_t v_size = self.num_verts_per_elem * self.num_elem * 3 self.vertices = malloc(v_size * sizeof(np.float64_t)) cdef np.int64_t f_size = self.num_field_per_elem * self.num_elem self.field_data = malloc(f_size * sizeof(np.float64_t)) self.prim_ids = malloc(self.num_prim * sizeof(np.int64_t)) self.centroids = malloc(self.num_prim * sizeof(np.float64_t*)) cdef np.int64_t i for i in range(self.num_prim): self.centroids[i] = malloc(3*sizeof(np.float64_t)) self.bboxes = malloc(self.num_prim * sizeof(BBox)) # create data buffers cdef np.int64_t j, k cdef np.int64_t field_offset, vertex_offset for i in range(self.num_elem): for j in range(self.num_verts_per_elem): vertex_offset = i*self.num_verts_per_elem*3 + j*3 for k in range(3): self.vertices[vertex_offset + k] = vertices[indices[i,j]][k] field_offset = i*self.num_field_per_elem for j in range(self.num_field_per_elem): self.field_data[field_offset + j] = field_data[i][j] # set up primitives if self.num_verts_per_elem == 20: self.primitives = malloc(self.num_prim * sizeof(Patch)) self.get_centroid = patch_centroid self.get_bbox = patch_bbox self.get_intersect = ray_patch_intersect self._set_up_patches(vertices, indices) elif self.num_verts_per_elem == 10: self.primitives = malloc(self.num_prim * sizeof(TetPatch)) self.get_centroid = tet_patch_centroid self.get_bbox = tet_patch_bbox self.get_intersect = ray_tet_patch_intersect self._set_up_tet_patches(vertices, indices) else: self.primitives = malloc(self.num_prim * sizeof(Triangle)) self.get_centroid = triangle_centroid self.get_bbox = triangle_bbox self.get_intersect = ray_triangle_intersect self._set_up_triangles(vertices, indices) self.root = self._recursive_build(0, self.num_prim) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void _set_up_patches(self, np.float64_t[:, :] vertices, np.int64_t[:, :] indices) noexcept nogil: cdef Patch* patch cdef np.int64_t i, j, k, ind, idim cdef np.int64_t offset, prim_index for i in range(self.num_elem): offset = self.num_prim_per_elem*i for j in range(self.num_prim_per_elem): # for each face prim_index = offset + j patch = &( self.primitives)[prim_index] self.prim_ids[prim_index] = prim_index patch.elem_id = i for k in range(8): # for each vertex ind = hex20_faces[j][k] for idim in range(3): # for each spatial dimension (yikes) patch.v[k][idim] = vertices[indices[i, ind]][idim] self.get_centroid(self.primitives, prim_index, self.centroids[prim_index]) self.get_bbox(self.primitives, prim_index, &(self.bboxes[prim_index])) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void _set_up_tet_patches(self, np.float64_t[:, :] vertices, np.int64_t[:, :] indices) noexcept nogil: cdef TetPatch* tet_patch cdef np.int64_t i, j, k, ind, idim cdef np.int64_t offset, prim_index for i in range(self.num_elem): offset = self.num_prim_per_elem*i for j in range(self.num_prim_per_elem): # for each face prim_index = offset + j tet_patch = &( self.primitives)[prim_index] self.prim_ids[prim_index] = prim_index tet_patch.elem_id = i for k in range(6): # for each vertex ind = tet10_faces[j][k] for idim in range(3): # for each spatial dimension (yikes) tet_patch.v[k][idim] = vertices[indices[i, ind]][idim] self.get_centroid(self.primitives, prim_index, self.centroids[prim_index]) self.get_bbox(self.primitives, prim_index, &(self.bboxes[prim_index])) @cython.boundscheck(False) @cython.wraparound(False) 
@cython.cdivision(True) cdef void _set_up_triangles(self, np.float64_t[:, :] vertices, np.int64_t[:, :] indices) noexcept nogil: # fill our array of primitives cdef np.int64_t offset, tri_index cdef np.int64_t v0, v1, v2 cdef Triangle* tri cdef np.int64_t i, j, k for i in range(self.num_elem): offset = self.num_prim_per_elem*i for j in range(self.num_prim_per_elem): tri_index = offset + j self.prim_ids[tri_index] = tri_index tri = &( self.primitives)[tri_index] tri.elem_id = i v0 = indices[i][self.tri_array[j][0]] v1 = indices[i][self.tri_array[j][1]] v2 = indices[i][self.tri_array[j][2]] for k in range(3): tri.p0[k] = vertices[v0][k] tri.p1[k] = vertices[v1][k] tri.p2[k] = vertices[v2][k] self.get_centroid(self.primitives, tri_index, self.centroids[tri_index]) self.get_bbox(self.primitives, tri_index, &(self.bboxes[tri_index])) cdef void _recursive_free(self, BVHNode* node) noexcept nogil: if node.end - node.begin > LEAF_SIZE: self._recursive_free(node.left) self._recursive_free(node.right) free(node) def __dealloc__(self): if self.root == NULL: return self._recursive_free(self.root) free(self.primitives) free(self.prim_ids) for i in range(self.num_prim): free(self.centroids[i]) free(self.centroids) free(self.bboxes) free(self.field_data) free(self.vertices) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef np.int64_t _partition(self, np.int64_t begin, np.int64_t end, np.int64_t ax, np.float64_t split) noexcept nogil: # this re-orders the primitive array so that all of the primitives # to the left of mid have centroids less than or equal to "split" # along the direction "ax". All the primitives to the right of mid # will have centroids *greater* than "split" along "ax". cdef np.int64_t mid = begin while (begin != end): if self.centroids[mid][ax] > split: mid += 1 elif self.centroids[begin][ax] > split: self.prim_ids[mid], self.prim_ids[begin] = \ self.prim_ids[begin], self.prim_ids[mid] self.centroids[mid], self.centroids[begin] = \ self.centroids[begin], self.centroids[mid] self.bboxes[mid], self.bboxes[begin] = \ self.bboxes[begin], self.bboxes[mid] mid += 1 begin += 1 return mid @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void _get_node_bbox(self, BVHNode* node, np.int64_t begin, np.int64_t end) noexcept nogil: cdef np.int64_t i, j cdef BBox box = self.bboxes[begin] for i in range(begin+1, end): for j in range(3): box.left_edge[j] = fmin(box.left_edge[j], self.bboxes[i].left_edge[j]) box.right_edge[j] = fmax(box.right_edge[j], self.bboxes[i].right_edge[j]) node.bbox = box @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void intersect(self, Ray* ray) noexcept nogil: self._recursive_intersect(ray, self.root) if ray.elem_id < 0: return cdef np.float64_t[3] position cdef np.int64_t i for i in range(3): position[i] = ray.origin[i] + ray.t_far*ray.direction[i] cdef np.float64_t* vertex_ptr cdef np.float64_t* field_ptr vertex_ptr = self.vertices + ray.elem_id*self.num_verts_per_elem*3 field_ptr = self.field_data + ray.elem_id*self.num_field_per_elem cdef np.float64_t[4] mapped_coord self.sampler.map_real_to_unit(mapped_coord, vertex_ptr, position) if self.num_field_per_elem == 1: ray.data_val = field_ptr[0] else: ray.data_val = self.sampler.sample_at_unit_point(mapped_coord, field_ptr) ray.near_boundary = self.sampler.check_mesh_lines(mapped_coord) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void _recursive_intersect(self, Ray* ray, BVHNode* node) 
noexcept nogil: # check for bbox intersection: if not ray_bbox_intersect(ray, node.bbox): return # check for leaf cdef np.int64_t i if (node.end - node.begin) <= LEAF_SIZE: for i in range(node.begin, node.end): self.get_intersect(self.primitives, self.prim_ids[i], ray) return # if not leaf, intersect with left and right children self._recursive_intersect(ray, node.left) self._recursive_intersect(ray, node.right) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef BVHNode* _recursive_build(self, np.int64_t begin, np.int64_t end) noexcept nogil: cdef BVHNode *node = malloc(sizeof(BVHNode)) node.begin = begin node.end = end self._get_node_bbox(node, begin, end) # check for leaf if (end - begin) <= LEAF_SIZE: return node # we use the "split in the middle of the longest axis approach" # see: http://www.vadimkravcenko.com/bvh-tree-building/ # compute longest dimension cdef np.int64_t ax = 0 cdef np.float64_t d = fabs(node.bbox.right_edge[0] - node.bbox.left_edge[0]) if fabs(node.bbox.right_edge[1] - node.bbox.left_edge[1]) > d: ax = 1 if fabs(node.bbox.right_edge[2] - node.bbox.left_edge[2]) > d: ax = 2 # split in half along that dimension cdef np.float64_t split = 0.5*(node.bbox.right_edge[ax] + node.bbox.left_edge[ax]) # sort triangle list cdef np.int64_t mid = self._partition(begin, end, ax, split) if(mid == begin or mid == end): mid = begin + (end-begin)/2 # recursively build sub-trees node.left = self._recursive_build(begin, mid) node.right = self._recursive_build(mid, end) return node @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void cast_rays(np.float64_t* image, const np.float64_t* origins, const np.float64_t* direction, const int N, BVH bvh) noexcept nogil: cdef Ray* ray cdef int i, j, k with nogil, parallel(): ray = malloc(sizeof(Ray)) for k in range(3): ray.direction[k] = direction[k] ray.inv_dir[k] = 1.0 / direction[k] for i in prange(N): for j in range(3): ray.origin[j] = origins[N*j + i] ray.t_far = INF ray.t_near = 0.0 ray.data_val = 0 ray.elem_id = -1 bvh.intersect(ray) image[i] = ray.data_val free(ray) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def test_ray_trace(np.ndarray[np.float64_t, ndim=1] image, np.ndarray[np.float64_t, ndim=2] origins, np.ndarray[np.float64_t, ndim=1] direction, BVH bvh): cdef int N = origins.shape[0] cast_rays(&image[0], &origins[0, 0], &direction[0], N, bvh) cdef class BVHMeshSampler(ImageSampler): @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def __call__(self, BVH bvh, int num_threads = 0): ''' This function is supposed to cast the rays and return the image. 
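        For each pixel (vi, vj), self.vector_function supplies a ray origin
        and direction; the ray is traced through the BVH, and the sampled
        field value, element id, mesh-line flag, and t_far depth are written
        into self.image, self.image_used, self.mesh_lines, and self.zbuffer.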
''' cdef int vi, vj, i, j cdef np.float64_t *v_pos cdef np.float64_t *v_dir cdef np.int64_t nx, ny, size cdef np.float64_t width[3] for i in range(3): width[i] = self.width[i] nx = self.nv[0] ny = self.nv[1] size = nx * ny cdef Ray* ray with nogil, parallel(): ray = malloc(sizeof(Ray)) v_pos = malloc(3 * sizeof(np.float64_t)) v_dir = malloc(3 * sizeof(np.float64_t)) for j in prange(size): vj = j % ny vi = (j - vj) / ny vj = vj self.vector_function(self, vi, vj, width, v_dir, v_pos) for i in range(3): ray.origin[i] = v_pos[i] ray.direction[i] = v_dir[i] ray.inv_dir[i] = 1.0 / v_dir[i] ray.t_far = 1e37 ray.t_near = 0.0 ray.data_val = 0 ray.elem_id = -1 bvh.intersect(ray) self.image[vi, vj, 0] = ray.data_val self.image_used[vi, vj] = ray.elem_id self.mesh_lines[vi, vj] = ray.near_boundary self.zbuffer[vi, vj] = ray.t_far free(v_pos) free(v_dir) free(ray) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/contour_finding.pxd0000644000175100001770000000243714714401662020714 0ustar00runnerdocker""" Contour finding exports """ cimport cython cimport numpy as np cdef inline np.int64_t i64max(np.int64_t i0, np.int64_t i1): if i0 > i1: return i0 return i1 cdef inline np.int64_t i64min(np.int64_t i0, np.int64_t i1): if i0 < i1: return i0 return i1 cdef extern from "math.h": double fabs(double x) cdef extern from "stdlib.h": # NOTE that size_t might not be int void *alloca(int) cdef struct ContourID cdef struct ContourID: np.int64_t contour_id ContourID *parent ContourID *next ContourID *prev np.int64_t count cdef struct CandidateContour cdef struct CandidateContour: np.int64_t contour_id np.int64_t join_id CandidateContour *next cdef ContourID *contour_create(np.int64_t contour_id, ContourID *prev = ?) cdef void contour_delete(ContourID *node) cdef ContourID *contour_find(ContourID *node) cdef void contour_union(ContourID *node1, ContourID *node2) cdef int candidate_contains(CandidateContour *first, np.int64_t contour_id, np.int64_t join_id = ?) cdef CandidateContour *candidate_add(CandidateContour *first, np.int64_t contour_id, np.int64_t join_id = ?) 
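The ContourID declarations above form a disjoint-set (union-find) forest: contour_find walks a node to its root while compressing the path, and contour_union merges two trees by count, with ties broken by contour_id. The following pure-Python sketch is illustrative only -- it is not part of yt; the real implementations are the C-level functions in contour_finding.pyx below.

class ContourID:
    # Minimal Python analogue of the C-level ContourID struct.
    def __init__(self, contour_id):
        self.contour_id = contour_id
        self.parent = None  # None plays the role of NULL
        self.count = 1

def contour_find(node):
    # Find the root, then re-parent every node on the path directly to it,
    # mirroring the path-compression loop in contour_finding.pyx.
    root = node
    while root.parent is not None and root.parent is not root:
        root = root.parent
    root.parent = None
    while node.parent is not None:
        next_node = node.parent
        root.count += node.count
        node.count = 0
        node.parent = root
        node = next_node
    return root

def contour_union(node1, node2):
    # Union by count, with ties broken by the smaller contour_id, as in the
    # C implementation.
    node1, node2 = contour_find(node1), contour_find(node2)
    if node1 is node2:
        return
    if node1.count > node2.count:
        pri, sec = node1, node2
    elif node2.count > node1.count:
        pri, sec = node2, node1
    elif node1.contour_id < node2.contour_id:
        pri, sec = node1, node2
    else:
        pri, sec = node2, node1
    pri.count += sec.count
    sec.count = 0
    sec.parent = pri

# Example: after linking ids 5 and 7, both resolve to the same root.
a, b = ContourID(5), ContourID(7)
contour_union(a, b)
assert contour_find(a) is contour_find(b)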
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/contour_finding.pyx0000644000175100001770000006673414714401662020753 0ustar00runnerdocker# distutils: libraries = STD_LIBS # distutils: include_dirs = LIB_DIR_GEOM """ A two-pass contour finding algorithm """ from __future__ import print_function import numpy as np cimport cython cimport numpy as np from libc.stdlib cimport free, malloc, realloc from yt.geometry.oct_container cimport OctInfo, OctreeContainer from yt.geometry.oct_visitors cimport Oct from .amr_kdtools cimport Node from .partitioned_grid cimport PartitionedGrid from .volume_container cimport VolumeContainer, vc_index, vc_pos_index import sys cdef inline ContourID *contour_create(np.int64_t contour_id, ContourID *prev = NULL): node = malloc(sizeof(ContourID)) #print("Creating contour with id", contour_id) node.contour_id = contour_id node.next = node.parent = NULL node.prev = prev node.count = 1 if prev != NULL: prev.next = node return node cdef inline void contour_delete(ContourID *node): if node.prev != NULL: node.prev.next = node.next if node.next != NULL: node.next.prev = node.prev free(node) cdef inline ContourID *contour_find(ContourID *node): cdef ContourID *temp cdef ContourID *root root = node # First we find the root while root.parent != NULL and root.parent != root: root = root.parent if root == root.parent: root.parent = NULL # Now, we update everything along the tree. # So now everything along the line to the root has the parent set to the # root. while node.parent != NULL: temp = node.parent root.count += node.count node.count = 0 node.parent = root node = temp return root cdef inline void contour_union(ContourID *node1, ContourID *node2): if node1 == node2: return node1 = contour_find(node1) node2 = contour_find(node2) if node1 == node2: return cdef ContourID *pri cdef ContourID *sec if node1.count > node2.count: pri = node1 sec = node2 elif node2.count > node1.count: pri = node2 sec = node1 # might be a tie elif node1.contour_id < node2.contour_id: pri = node1 sec = node2 else: pri = node2 sec = node1 pri.count += sec.count sec.count = 0 sec.parent = pri cdef inline int candidate_contains(CandidateContour *first, np.int64_t contour_id, np.int64_t join_id = -1): while first != NULL: if first.contour_id == contour_id \ and first.join_id == join_id: return 1 first = first.next return 0 cdef inline CandidateContour *candidate_add(CandidateContour *first, np.int64_t contour_id, np.int64_t join_id = -1): cdef CandidateContour *node node = malloc(sizeof(CandidateContour)) node.contour_id = contour_id node.join_id = join_id node.next = first return node cdef class ContourTree: # This class is essentially a Union-Find algorithm. What we want to do is # to, given a connection between two objects, identify the unique ID for # those two objects. So what we have is a collection of contours, and they # eventually all get joined and contain lots of individual IDs. But it's # easy to find the *first* contour, i.e., the primary ID, for each of the # subsequent IDs. # # This means that we can connect id 202483 to id 2472, and if id 2472 is # connected to id 143, the connection will *actually* be from 202483 to # 143. In this way we can speed up joining things and knowing their # "canonical" id. # # This is a multi-step process, since we first want to connect all of the # contours, then we end up wanting to coalesce them, and ultimately we join # them at the end. 
The join produces a table that maps the initial to the # final, and we can go through and just update all of those. cdef ContourID *first cdef ContourID *last def clear(self): # Here, we wipe out ALL of our contours, but not the pointers to them cdef ContourID *cur cdef ContourID *next cur = self.first while cur != NULL: next = cur.next free(cur) cur = next self.first = self.last = NULL def __init__(self): self.first = self.last = NULL @cython.boundscheck(False) @cython.wraparound(False) def add_contours(self, np.ndarray[np.int64_t, ndim=1] contour_ids): # This adds new contours, from the given contour IDs, to the tree. # Each one can be connected to a parent, as well as to next/prev in the # set of contours belonging to this tree. cdef int i, n n = contour_ids.shape[0] cdef ContourID *cur = self.last for i in range(n): #print(i, contour_ids[i]) cur = contour_create(contour_ids[i], cur) if self.first == NULL: self.first = cur self.last = cur def add_contour(self, np.int64_t contour_id): self.last = contour_create(contour_id, self.last) def cull_candidates(self, np.ndarray[np.int64_t, ndim=3] candidates): # This function looks at each preliminary contour ID belonging to a # given collection of values, and then if need be it creates a new # contour for it. cdef int i, j, k, ni, nj, nk, nc cdef CandidateContour *first = NULL cdef CandidateContour *temp cdef np.int64_t cid nc = 0 ni = candidates.shape[0] nj = candidates.shape[1] nk = candidates.shape[2] for i in range(ni): for j in range(nj): for k in range(nk): cid = candidates[i,j,k] if cid == -1: continue if candidate_contains(first, cid) == 0: nc += 1 first = candidate_add(first, cid) cdef np.ndarray[np.int64_t, ndim=1] contours contours = np.empty(nc, dtype="int64") i = 0 # This removes all the temporary contours for this set of contours and # instead constructs a final list of them. while first != NULL: contours[i] = first.contour_id i += 1 temp = first.next free(first) first = temp return contours def cull_joins(self, np.ndarray[np.int64_t, ndim=2] cjoins): # This coalesces contour IDs, so that we have only the final name # resolutions -- the .join_id from a candidate. So many items will map # to a single join_id. 
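        # For example, the rows [(5, 2), (7, 2), (5, 2)] collapse to the two
        # unique pairs (5, 2) and (7, 2): each surviving contour_id carries
        # exactly one final join_id.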
cdef int i, ni, nc cdef CandidateContour *first = NULL cdef CandidateContour *temp cdef np.int64_t cid1, cid2 nc = 0 ni = cjoins.shape[0] for i in range(ni): cid1 = cjoins[i,0] cid2 = cjoins[i,1] if cid1 == -1: continue if cid2 == -1: continue if candidate_contains(first, cid1, cid2) == 0: nc += 1 first = candidate_add(first, cid1, cid2) cdef np.ndarray[np.int64_t, ndim=2] contours contours = np.empty((nc,2), dtype="int64") i = 0 while first != NULL: contours[i,0] = first.contour_id contours[i,1] = first.join_id i += 1 temp = first.next free(first) first = temp return contours @cython.boundscheck(False) @cython.wraparound(False) def add_joins(self, np.ndarray[np.int64_t, ndim=2] join_tree): cdef int i, n, ins cdef np.int64_t cid1, cid2 # Okay, this requires lots of iteration, unfortunately cdef ContourID *cur cdef ContourID *c1 cdef ContourID *c2 n = join_tree.shape[0] #print("Counting") #print("Checking", self.count()) for i in range(n): ins = 0 cid1 = join_tree[i, 0] cid2 = join_tree[i, 1] c1 = c2 = NULL cur = self.first #print("Looking for ", cid1, cid2) while c1 == NULL or c2 == NULL: if cur.contour_id == cid1: c1 = contour_find(cur) if cur.contour_id == cid2: c2 = contour_find(cur) ins += 1 cur = cur.next if cur == NULL: break if c1 == NULL or c2 == NULL: if c1 == NULL: print(" Couldn't find ", cid1) if c2 == NULL: print(" Couldn't find ", cid2) print(" Inspected ", ins) raise RuntimeError else: c1.count = c2.count = 0 contour_union(c1, c2) def count(self): cdef int n = 0 cdef ContourID *cur = self.first while cur != NULL: cur = cur.next n += 1 return n @cython.boundscheck(False) @cython.wraparound(False) def export(self): cdef int n = self.count() cdef ContourID *cur cdef ContourID *root cur = self.first cdef np.ndarray[np.int64_t, ndim=2] joins joins = np.empty((n, 2), dtype="int64") n = 0 while cur != NULL: root = contour_find(cur) joins[n, 0] = cur.contour_id joins[n, 1] = root.contour_id cur = cur.next n += 1 return joins def __dealloc__(self): self.clear() cdef class TileContourTree: cdef np.float64_t min_val cdef np.float64_t max_val def __init__(self, np.float64_t min_val, np.float64_t max_val): self.min_val = min_val self.max_val = max_val @cython.boundscheck(False) @cython.wraparound(False) def identify_contours(self, np.ndarray[np.float64_t, ndim=3] values, np.ndarray[np.int64_t, ndim=3] contour_ids, np.ndarray[np.uint8_t, ndim=3] mask, np.int64_t start): # This just looks at neighbor values and tries to identify which zones # are touching by face within a given brick. 
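        # The (oi, oj, ok) triple loop below scans the 3x3x3 neighborhood of
        # each in-range zone (skipping the center), creating a fresh contour
        # for the zone and union-ing it with any already-labeled neighbor.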
cdef int i, j, k, ni, nj, nk, offset cdef int off_i, off_j, off_k, oi, ok, oj cdef ContourID *cur = NULL cdef ContourID *c1 cdef ContourID *c2 cdef np.float64_t v cdef np.int64_t nc ni = values.shape[0] nj = values.shape[1] nk = values.shape[2] nc = 0 cdef ContourID **container = malloc( sizeof(ContourID*)*ni*nj*nk) for i in range(ni*nj*nk): container[i] = NULL for i in range(ni): for j in range(nj): for k in range(nk): v = values[i,j,k] if mask[i,j,k] == 0: continue if v < self.min_val or v > self.max_val: continue nc += 1 c1 = contour_create(nc + start) cur = container[i*nj*nk + j*nk + k] = c1 for oi in range(3): off_i = oi - 1 + i if not (0 <= off_i < ni): continue for oj in range(3): off_j = oj - 1 + j if not (0 <= off_j < nj): continue for ok in range(3): if oi == oj == ok == 1: continue off_k = ok - 1 + k if not (0 <= off_k < nk): continue if off_k > k and off_j > j and off_i > i: continue offset = off_i*nj*nk + off_j*nk + off_k c2 = container[offset] if c2 == NULL: continue c2 = contour_find(c2) cur.count = c2.count = 0 contour_union(cur, c2) cur = contour_find(cur) for i in range(ni): for j in range(nj): for k in range(nk): c1 = container[i*nj*nk + j*nk + k] if c1 == NULL: continue c1 = contour_find(c1) contour_ids[i,j,k] = c1.contour_id for i in range(ni*nj*nk): if container[i] != NULL: free(container[i]) free(container) return nc @cython.boundscheck(False) @cython.wraparound(False) def link_node_contours(Node trunk, contours, ContourTree tree, np.ndarray[np.int64_t, ndim=1] node_ids): cdef int n_nodes = node_ids.shape[0] cdef np.int64_t node_ind cdef VolumeContainer **vcs = malloc( sizeof(VolumeContainer*) * n_nodes) cdef int i cdef PartitionedGrid pg for i in range(n_nodes): pg = contours[node_ids[i]][2] vcs[i] = pg.container cdef np.ndarray[np.uint8_t] examined = np.zeros(n_nodes, "uint8") for _, cinfo in sorted(contours.items(), key = lambda a: -a[1][0]): _, node_ind, pg, _ = cinfo construct_boundary_relationships(trunk, tree, node_ind, examined, vcs, node_ids) examined[node_ind] = 1 cdef inline void get_spos(VolumeContainer *vc, int i, int j, int k, int axis, np.float64_t *spos): spos[0] = vc.left_edge[0] + i * vc.dds[0] spos[1] = vc.left_edge[1] + j * vc.dds[1] spos[2] = vc.left_edge[2] + k * vc.dds[2] spos[axis] += 0.5 * vc.dds[axis] cdef inline int spos_contained(VolumeContainer *vc, np.float64_t *spos): cdef int i for i in range(3): if spos[i] <= vc.left_edge[i] or spos[i] >= vc.right_edge[i]: return 0 return 1 @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef void construct_boundary_relationships(Node trunk, ContourTree tree, np.int64_t nid, np.ndarray[np.uint8_t, ndim=1] examined, VolumeContainer **vcs, np.ndarray[np.int64_t, ndim=1] node_ids): # We only look at the boundary and find the nodes next to it. # Contours is a dict, keyed by the node.id. 
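    # For each axis we walk both boundary faces of this node's brick, probe
    # the sample position half a cell beyond the face, locate the adjacent
    # node via trunk._find_node, and record candidate (c1, c2) contour joins
    # that cull_joins and add_joins then merge.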
cdef int i, j, off_i, off_j, oi, oj, ax, ax0, ax1, n1, n2 cdef np.int64_t c1, c2 cdef Node adj_node cdef VolumeContainer *vc1 cdef VolumeContainer *vc0 = vcs[nid] cdef int s = (vc0.dims[1]*vc0.dims[0] + vc0.dims[0]*vc0.dims[2] + vc0.dims[1]*vc0.dims[2]) * 18 # We allocate an array of fixed (maximum) size cdef np.ndarray[np.int64_t, ndim=2] joins = np.zeros((s, 2), dtype="int64") cdef int ti = 0, side, m1, m2, index cdef int pos[3] cdef int my_pos[3] cdef np.float64_t spos[3] for ax in range(3): ax0 = (ax + 1) % 3 ax1 = (ax + 2) % 3 n1 = vc0.dims[ax0] n2 = vc0.dims[ax1] for i in range(n1): for j in range(n2): for off_i in range(3): oi = off_i - 1 if i == 0 and oi == -1: continue if i == n1 - 1 and oi == 1: continue for off_j in range(3): oj = off_j - 1 if j == 0 and oj == -1: continue if j == n2 - 1 and oj == 1: continue pos[ax0] = i + oi pos[ax1] = j + oj my_pos[ax0] = i my_pos[ax1] = j for side in range(2): # We go off each end of the block. if side == 0: pos[ax] = -1 my_pos[ax] = 0 else: pos[ax] = vc0.dims[ax] my_pos[ax] = vc0.dims[ax]-1 get_spos(vc0, pos[0], pos[1], pos[2], ax, spos) adj_node = trunk._find_node(spos) vc1 = vcs[adj_node.node_ind] if spos_contained(vc1, spos): index = vc_index(vc0, my_pos[0], my_pos[1], my_pos[2]) m1 = vc0.mask[index] c1 = (vc0.data[0])[index] index = vc_pos_index(vc1, spos) m2 = vc1.mask[index] c2 = (vc1.data[0])[index] if m1 == 1 and m2 == 1 and c1 > -1 and c2 > -1: if examined[adj_node.node_ind] == 0: joins[ti,0] = i64max(c1,c2) joins[ti,1] = i64min(c1,c2) else: joins[ti,0] = c1 joins[ti,1] = c2 ti += 1 if ti == 0: return new_joins = tree.cull_joins(joins[:ti,:]) tree.add_joins(new_joins) @cython.boundscheck(False) @cython.wraparound(False) def update_joins(np.ndarray[np.int64_t, ndim=2] joins, np.ndarray[np.int64_t, ndim=3] contour_ids, np.ndarray[np.int64_t, ndim=1] final_joins): cdef int j, nj, nf cdef int ci, cj, ck nj = joins.shape[0] nf = final_joins.shape[0] for ci in range(contour_ids.shape[0]): for cj in range(contour_ids.shape[1]): for ck in range(contour_ids.shape[2]): if contour_ids[ci,cj,ck] == -1: continue for j in range(nj): if contour_ids[ci,cj,ck] == joins[j,0]: contour_ids[ci,cj,ck] = joins[j,1] break for j in range(nf): if contour_ids[ci,cj,ck] == final_joins[j]: contour_ids[ci,cj,ck] = j + 1 break cdef class FOFNode: cdef np.int64_t tag, count def __init__(self, np.int64_t tag): self.tag = tag self.count = 0 cdef class ParticleContourTree(ContourTree): cdef np.float64_t linking_length, linking_length2 cdef np.float64_t DW[3] cdef np.float64_t DLE[3] cdef np.float64_t DRE[3] cdef bint periodicity[3] cdef int minimum_count def __init__(self, linking_length, periodicity = (True, True, True), int minimum_count = 8): cdef int i self.linking_length = linking_length self.linking_length2 = linking_length * linking_length self.first = self.last = NULL for i in range(3): self.periodicity[i] = periodicity[i] self.minimum_count = minimum_count @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def identify_contours(self, OctreeContainer octree, np.ndarray[np.int64_t, ndim=1] dom_ind, np.ndarray[cython.floating, ndim=2] positions, np.ndarray[np.int64_t, ndim=1] particle_ids, int domain_id, int domain_offset): cdef np.ndarray[np.int64_t, ndim=1] pdoms, pcount, pind, doff cdef np.float64_t pos[3] cdef Oct *oct = NULL cdef Oct **neighbors = NULL cdef OctInfo oi cdef ContourID *c0 cdef np.int64_t moff = octree.get_domain_offset(domain_id + domain_offset) cdef np.int64_t i, j, k, n, nneighbors = -1, pind0, offset cdef int 
counter = 0 cdef int verbose = 0 pcount = np.zeros_like(dom_ind) doff = np.zeros_like(dom_ind) - 1 # First, we find the oct for each particle. pdoms = np.zeros(positions.shape[0], dtype="int64") pdoms -= -1 # First we allocate our container cdef ContourID **container = malloc( sizeof(ContourID*) * positions.shape[0]) for i in range(3): self.DW[i] = (octree.DRE[i] - octree.DLE[i]) self.DLE[i] = octree.DLE[i] self.DRE[i] = octree.DRE[i] for i in range(positions.shape[0]): counter += 1 container[i] = NULL for j in range(3): pos[j] = positions[i, j] oct = octree.get(pos, NULL) if oct == NULL or (domain_id > 0 and oct.domain != domain_id): continue offset = oct.domain_ind - moff pcount[offset] += 1 pdoms[i] = offset pind = np.argsort(pdoms) cdef np.int64_t *ipind = pind.data cdef cython.floating *fpos = positions.data # pind is now the pointer into the position and particle_ids array. for i in range(positions.shape[0]): offset = pdoms[pind[i]] if doff[offset] < 0: doff[offset] = i del pdoms cdef int nsize = 27 cdef np.int64_t *nind = malloc(sizeof(np.int64_t)*nsize) counter = 0 cdef np.int64_t frac = (doff.shape[0] / 20.0) if verbose == 1: print("Will be outputting every", frac, file=sys.stderr) for i in range(doff.shape[0]): if verbose == 1 and counter >= frac: counter = 0 print("FOF-ing % 5.1f%% done" % ((100.0 * i)/doff.size), file=sys.stderr) counter += 1 # Any particles found for this oct? if doff[i] < 0: continue offset = pind[doff[i]] # This can probably be replaced at some point with a faster lookup. for j in range(3): pos[j] = positions[offset, j] oct = octree.get(pos, &oi) if oct == NULL or (domain_id > 0 and oct.domain != domain_id): continue # Now we have our primary oct, so we will get its neighbors. neighbors = octree.neighbors(&oi, &nneighbors, oct, self.periodicity) # Now we have all our neighbors. And, we should be set for what # else we need to do. if nneighbors > nsize: nind = realloc( nind, sizeof(np.int64_t)*nneighbors) nsize = nneighbors for j in range(nneighbors): nind[j] = neighbors[j].domain_ind - moff for n in range(j): if nind[j] == nind[n]: nind[j] = -1 break # This is allocated by the neighbors function, so we deallocate it. free(neighbors) # We might know that all our internal particles are linked. # Otherwise, we look at each particle. for j in range(pcount[i]): # Note that this offset is the particle index pind0 = pind[doff[i] + j] # Look at each neighboring oct for k in range(nneighbors): if nind[k] == -1: continue offset = doff[nind[k]] if offset < 0: continue # NOTE: doff[i] will not monotonically increase. So we # need a unique ID for each container that we are # accessing. self.link_particles(container, fpos, ipind, pcount[nind[k]], offset, pind0, doff[i] + j) cdef np.ndarray[np.int64_t, ndim=1] contour_ids contour_ids = np.ones(positions.shape[0], dtype="int64") contour_ids *= -1 # Perform one last contour_find on each. Note that we no longer need # to look at any of the doff or internal offset stuff. 
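        # After this final contour_find pass, each root labels a complete
        # friends-of-friends group; we keep only groups with at least
        # minimum_count members and tag every member with the particle id of
        # its group's root particle.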
for i in range(positions.shape[0]): if container[i] == NULL: continue container[i] = contour_find(container[i]) for i in range(positions.shape[0]): if container[i] == NULL: continue c0 = container[i] if c0.count < self.minimum_count: continue contour_ids[i] = particle_ids[pind[c0.contour_id]] free(container) del pind return contour_ids @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef void link_particles(self, ContourID **container, cython.floating *positions, np.int64_t *pind, np.int64_t pcount, np.int64_t noffset, np.int64_t pind0, np.int64_t poffset): # Now we look at each particle and evaluate it cdef np.float64_t pos0[3] cdef np.float64_t pos1[3] cdef np.float64_t edges[2][3] cdef int link cdef ContourID *c0 cdef ContourID *c1 cdef np.int64_t pind1 cdef int i, j # We use pid here so that we strictly take new ones. # Note that pind0 will not monotonically increase, but c0 = container[pind0] if c0 == NULL: c0 = container[pind0] = contour_create(poffset, self.last) self.last = c0 if self.first == NULL: self.first = c0 c0 = container[pind0] = contour_find(c0) for i in range(3): # We make a very conservative guess here about the edges. pos0[i] = positions[pind0*3 + i] edges[0][i] = pos0[i] - self.linking_length*1.01 edges[1][i] = pos0[i] + self.linking_length*1.01 if edges[0][i] < self.DLE[i] or edges[0][i] > self.DRE[i]: # We skip this one, since we're close to the boundary edges[0][i] = -1e30 edges[1][i] = 1e30 # Lets set up some bounds for the particles. Maybe we can get away # with reducing our number of calls to r2dist_early. for i in range(pcount): pind1 = pind[noffset + i] if pind1 == pind0: continue c1 = container[pind1] if c1 != NULL and c1.contour_id == c0.contour_id: # Already linked. continue for j in range(3): pos1[j] = positions[pind1*3 + j] link = r2dist_early(pos0, pos1, self.DW, self.periodicity, self.linking_length2, edges) if link == 0: continue if c1 == NULL: c0.count += 1 container[pind1] = c0 elif c0.contour_id != c1.contour_id: contour_union(c0, c1) c0 = container[pind1] = container[pind0] = contour_find(c0) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef inline int r2dist_early(np.float64_t ppos[3], np.float64_t cpos[3], np.float64_t DW[3], bint periodicity[3], np.float64_t max_r2, np.float64_t edges[2][3]): cdef int i cdef np.float64_t r2, DR r2 = 0.0 for i in range(3): if cpos[i] < edges[0][i]: return 0 if cpos[i] > edges[1][i]: return 0 for i in range(3): DR = (ppos[i] - cpos[i]) if not periodicity[i]: pass elif (DR > DW[i]/2.0): DR -= DW[i] elif (DR < -DW[i]/2.0): DR += DW[i] r2 += DR * DR if r2 > max_r2: return 0 return 1 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/cosmology_time.pyx0000644000175100001770000000317014714401662020576 0ustar00runnerdocker# distutils: language = c++ # distutils: libraries = STD_LIBS cimport numpy as np import numpy as np from libc.math cimport sqrt import cython @cython.cdivision(True) cdef inline double _a_dot(double a, double h0, double om_m, double om_l) noexcept: om_k = 1.0 - om_m - om_l return h0 * a * sqrt(om_m * (a ** -3) + om_k * (a ** -2) + om_l) @cython.cdivision(True) cpdef double _a_dot_recip(double a, double h0, double om_m, double om_l): return 1. 
/ _a_dot(a, h0, om_m, om_l) cdef inline double _da_dtau(double a, double h0, double om_m, double om_l) noexcept: return a**2 * _a_dot(a, h0, om_m, om_l) @cython.cdivision(True) cpdef double _da_dtau_recip(double a, double h0, double om_m, double om_l) noexcept: return 1. / _da_dtau(a, h0, om_m, om_l) def t_frw(ds, z): from scipy.integrate import quad aexp = 1 / (1 + z) h0 = ds.hubble_constant om_m = ds.omega_matter om_l = ds.omega_lambda conv = ds.quan(0.01, "Mpc/km*s").to("Gyr") if isinstance(z, (int, float)): return ds.quan( quad(_a_dot_recip, 0, aexp, args=(h0, om_m, om_l))[0], units=conv, ) return ds.arr( [quad(_a_dot_recip, 0, a, args=(h0, om_m, om_l))[0] for a in aexp], units=conv, ) def tau_frw(ds, z): from scipy.integrate import quad aexp = 1 / (1 + z) h0 = ds.hubble_constant om_m = ds.omega_matter om_l = ds.omega_lambda if isinstance(z, (int, float)): return quad(_da_dtau_recip, 1, aexp, args=(h0, om_m, om_l))[0] return np.asarray( [quad(_da_dtau_recip, 1, a, args=(h0, om_m, om_l))[0] for a in aexp], ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3951538 yt-4.4.0/yt/utilities/lib/cykdtree/0000755000175100001770000000000014714401715016613 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/cykdtree/__init__.py0000644000175100001770000000126414714401662020730 0ustar00runnerdockerfrom yt.utilities.lib.cykdtree import plot # NOQA from yt.utilities.lib.cykdtree.kdtree import PyKDTree, PyNode # NOQA def make_tree(pts, **kwargs): r"""Build a KD-tree for a set of points. Args: pts (np.ndarray of float64): (n,m) Array of n mD points. \*\*kwargs: Additional keyword arguments are passed to the appropriate class for constructing the tree. Returns: T (:class:`cykdtree.PyKDTree`): KDTree object. Raises: ValueError: If `pts` is not a 2D array.
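    Example:
        A minimal sketch; ``leafsize`` is assumed here to be a keyword
        accepted by :class:`cykdtree.PyKDTree`:

            >>> import numpy as np
            >>> from yt.utilities.lib.cykdtree import make_tree
            >>> pts = np.random.rand(100, 3)  # n = 100 points in m = 3 dimensions
            >>> T = make_tree(pts, leafsize=32)
            >>> T.num_leaves >= 1
            True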
""" # Check input if pts.ndim != 2: raise ValueError("pts must be a 2D array of ND coordinates") T = PyKDTree(pts, **kwargs) return T ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/cykdtree/c_kdtree.cpp0000644000175100001770000000000014714401662021066 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/cykdtree/c_kdtree.hpp0000644000175100001770000006752314714401662021122 0ustar00runnerdocker#include #include #include #include #include #include #include #include "c_utils.hpp" #define LEAF_MAX 4294967295 template T deserialize_scalar(std::istream &is) { T scalar; is.read((char*)&scalar, sizeof(T)); return scalar; } template void serialize_scalar(std::ostream &os, const T &scalar) { os.write((char*)&scalar, sizeof(scalar)); } template T* deserialize_pointer_array(std::istream &is, uint64_t len) { T* arr = (T*)malloc(len*sizeof(T)); is.read((char*)&arr[0], len*sizeof(T)); return arr; } template void serialize_pointer_array(std::ostream &os, const T* array, uint64_t len) { os.write((char*)array, len*sizeof(T)); } class Node { public: bool is_empty; bool is_leaf; uint32_t leafid; uint32_t ndim; double *left_edge; double *right_edge; uint64_t left_idx; uint64_t children; bool *periodic_left; bool *periodic_right; std::vector > left_neighbors; std::vector > right_neighbors; std::vector all_neighbors; std::vector left_nodes; // innernode parameters uint32_t split_dim; double split; Node *less; Node *greater; // empty node constructor Node() { is_empty = true; is_leaf = false; leafid = LEAF_MAX; ndim = 0; left_edge = NULL; right_edge = NULL; periodic_left = NULL; periodic_right = NULL; less = NULL; greater = NULL; } // empty node with some info Node(uint32_t ndim0, double *le, double *re, bool *ple, bool *pre) { is_empty = true; is_leaf = false; leafid = 4294967295; ndim = ndim0; left_edge = (double*)malloc(ndim*sizeof(double)); right_edge = (double*)malloc(ndim*sizeof(double)); periodic_left = (bool*)malloc(ndim*sizeof(bool)); periodic_right = (bool*)malloc(ndim*sizeof(bool)); memcpy(left_edge, le, ndim*sizeof(double)); memcpy(right_edge, re, ndim*sizeof(double)); memcpy(periodic_left, ple, ndim*sizeof(bool)); memcpy(periodic_right, pre, ndim*sizeof(bool)); less = NULL; greater = NULL; for (uint32_t i=0; i left_nodes0) { is_empty = false; is_leaf = false; leafid = 4294967295; ndim = ndim0; left_idx = Lidx; split_dim = sdim0; split = split0; less = lnode; greater = gnode; children = lnode->children + gnode->children; left_edge = (double*)malloc(ndim*sizeof(double)); right_edge = (double*)malloc(ndim*sizeof(double)); periodic_left = (bool*)malloc(ndim*sizeof(bool)); periodic_right = (bool*)malloc(ndim*sizeof(bool)); memcpy(left_edge, le, ndim*sizeof(double)); memcpy(right_edge, re, ndim*sizeof(double)); memcpy(periodic_left, ple, ndim*sizeof(bool)); memcpy(periodic_right, pre, ndim*sizeof(bool)); for (uint32_t d = 0; d < ndim; d++) left_nodes.push_back(left_nodes0[d]); left_neighbors = std::vector >(ndim); right_neighbors = std::vector >(ndim); } // leafnode constructor Node(uint32_t ndim0, double *le, double *re, bool *ple, bool *pre, uint64_t Lidx, uint64_t n, int leafid0, std::vector left_nodes0) { is_empty = false; is_leaf = true; leafid = leafid0; ndim = ndim0; split = 0.0; split_dim = 0; left_idx = Lidx; less = NULL; greater = NULL; children = n; left_edge = (double*)malloc(ndim*sizeof(double)); right_edge = 
(double*)malloc(ndim*sizeof(double)); periodic_left = (bool*)malloc(ndim*sizeof(bool)); periodic_right = (bool*)malloc(ndim*sizeof(bool)); memcpy(left_edge, le, ndim*sizeof(double)); memcpy(right_edge, re, ndim*sizeof(double)); memcpy(periodic_left, ple, ndim*sizeof(bool)); memcpy(periodic_right, pre, ndim*sizeof(bool)); for (uint32_t d = 0; d < ndim; d++) left_nodes.push_back(left_nodes0[d]); left_neighbors = std::vector >(ndim); right_neighbors = std::vector >(ndim); for (uint32_t d = 0; d < ndim; d++) { if ((left_nodes[d]) && (!(left_nodes[d]->is_empty))) add_neighbors(left_nodes[d], d); } } Node(std::istream &is) { // Note that Node instances initialized via this method do not have // any neighbor information. We will build neighbor information later // by walking the tree bool check_bit = deserialize_scalar(is); if (!check_bit) { // something has gone terribly wrong so we crash abort(); } is_empty = deserialize_scalar(is); is_leaf = deserialize_scalar(is); leafid = deserialize_scalar(is); ndim = deserialize_scalar(is); left_edge = deserialize_pointer_array(is, ndim); right_edge = deserialize_pointer_array(is, ndim); left_idx = deserialize_scalar(is); children = deserialize_scalar(is); periodic_left = deserialize_pointer_array(is, ndim); periodic_right = deserialize_pointer_array(is, ndim); split_dim = deserialize_scalar(is); split = deserialize_scalar(is); less = NULL; greater = NULL; left_neighbors = std::vector >(ndim); right_neighbors = std::vector >(ndim); for (uint32_t i=0; i(os, true); serialize_scalar(os, is_empty); serialize_scalar(os, is_leaf); serialize_scalar(os, leafid); serialize_scalar(os, ndim); serialize_pointer_array(os, left_edge, ndim); serialize_pointer_array(os, right_edge, ndim); serialize_scalar(os, left_idx); serialize_scalar(os, children); serialize_pointer_array(os, periodic_left, ndim); serialize_pointer_array(os, periodic_right, ndim); serialize_scalar(os, split_dim); serialize_scalar(os, split); } ~Node() { if (left_edge) free(left_edge); if (right_edge) free(right_edge); if (periodic_left) free(periodic_left); if (periodic_right) free(periodic_right); } friend std::ostream &operator<<(std::ostream &os, const Node &node) { // this is available for nicely formatted debugging, use serialize // to save data to disk os << "is_empty: " << node.is_empty << std::endl; os << "is_leaf: " << node.is_leaf << std::endl; os << "leafid: " << node.leafid << std::endl; os << "ndim: " << node.ndim << std::endl; os << "left_edge: "; for (uint32_t i = 0; i < node.ndim; i++) { os << node.left_edge[i] << " "; } os << std::endl; os << "right_edge: "; for (uint32_t i = 0; i < node.ndim; i++) { os << node.right_edge[i] << " "; } os << std::endl; os << "left_idx: " << node.left_idx << std::endl; os << "children: " << node.children << std::endl; os << "periodic_left: "; for (uint32_t i = 0; i < node.ndim; i++) { os << node.periodic_left[i] << " "; } os << std::endl; os << "periodic_right: "; for (uint32_t i = 0; i < node.ndim; i++) { os << node.periodic_right[i] << " "; } os << std::endl; os << "split_dim: " << node.split_dim << std::endl; os << "split: " << node.split << std::endl; for (uint32_t i=0; i < node.left_nodes.size(); i++) { os << node.left_nodes[i] << std::endl; if (node.left_nodes[i]) { os << node.left_nodes[i]->left_idx << std::endl; os << node.left_nodes[i]->children << std::endl; } } return os; } Node* copy() { Node *out; if (is_empty) { if (left_edge) { out = new Node(ndim, left_edge, right_edge, periodic_left, periodic_right); } else { out = new Node(); } } else 
if (is_leaf) { std::vector left_nodes_copy; for (uint32_t d = 0; d < ndim; d++) left_nodes_copy.push_back(NULL); out = new Node(ndim, left_edge, right_edge, periodic_left, periodic_right, left_idx, children, leafid, left_nodes_copy); } else { Node *lnode = less->copy(); Node *gnode = greater->copy(); std::vector left_nodes_copy; for (uint32_t d = 0; d < ndim; d++) left_nodes_copy.push_back(NULL); out = new Node(ndim, left_edge, right_edge, periodic_left, periodic_right, left_idx, split_dim, split, lnode, gnode, left_nodes_copy); std::vector::iterator it; for (uint32_t d = 0; d < ndim; d++) { for (it = left_neighbors[d].begin(); it != left_neighbors[d].end(); it++) { out->left_neighbors[d].push_back(*it); } for (it = right_neighbors[d].begin(); it != right_neighbors[d].end(); it++) { out->right_neighbors[d].push_back(*it); } } } return out; } void update_ids(uint32_t add_to) { leafid += add_to; uint32_t i; for (uint32_t d = 0; d < ndim; d++) { for (i = 0; i < left_neighbors[d].size(); i++) left_neighbors[d][i] += add_to; for (i = 0; i < right_neighbors[d].size(); i++) right_neighbors[d][i] += add_to; } for (i = 0; i < all_neighbors.size(); i++) all_neighbors[i] += add_to; } void print_neighbors() { uint32_t i, j; // Left printf("left: ["); for (i = 0; i < ndim; i++) { printf("["); for (j = 0; j < left_neighbors[i].size(); j++) printf("%u ", left_neighbors[i][j]); printf("] "); } printf("]\n"); // Right printf("right: ["); for (i = 0; i < ndim; i++) { printf("["); for (j = 0; j < right_neighbors[i].size(); j++) printf("%u ", right_neighbors[i][j]); printf("] "); } printf("]\n"); } void add_neighbors(Node* curr, uint32_t dim) { if (curr->is_leaf) { left_neighbors[dim].push_back(curr->leafid); curr->right_neighbors[dim].push_back(leafid); } else { if (curr->split_dim == dim) { add_neighbors(curr->greater, dim); } else { if (curr->split > this->right_edge[curr->split_dim]) add_neighbors(curr->less, dim); else if (curr->split < this->left_edge[curr->split_dim]) add_neighbors(curr->greater, dim); else { add_neighbors(curr->less, dim); add_neighbors(curr->greater, dim); } } } } void clear_neighbors() { uint32_t d; for (d = 0; d < ndim; d++) { left_neighbors[d].clear(); right_neighbors[d].clear(); } } bool is_left_node(Node *lnode, uint32_t ldim) { uint32_t d; for (d = 0; d < ndim; d++) { if (d == ldim) continue; if (right_edge[d] < lnode->left_edge[d]) return false; if (left_edge[d] > lnode->right_edge[d]) return false; } return true; } void select_unique_neighbors() { if (!is_leaf) return; uint32_t d; std::vector::iterator last; for (d = 0; d < ndim; d++) { // left std::sort(left_neighbors[d].begin(), left_neighbors[d].end()); last = std::unique(left_neighbors[d].begin(), left_neighbors[d].end()); left_neighbors[d].erase(last, left_neighbors[d].end()); // right std::sort(right_neighbors[d].begin(), right_neighbors[d].end()); last = std::unique(right_neighbors[d].begin(), right_neighbors[d].end()); right_neighbors[d].erase(last, right_neighbors[d].end()); } } void join_neighbors() { if (!is_leaf) return; uint32_t d; std::vector::iterator last; // Create concatenated vector and remove duplicates all_neighbors = left_neighbors[0]; for (d = 1; d < ndim; d++) all_neighbors.insert(all_neighbors.end(), left_neighbors[d].begin(), left_neighbors[d].end()); for (d = 0; d < ndim; d++) all_neighbors.insert(all_neighbors.end(), right_neighbors[d].begin(), right_neighbors[d].end()); // Get unique std::sort(all_neighbors.begin(), all_neighbors.end()); last = std::unique(all_neighbors.begin(), 
all_neighbors.end()); all_neighbors.erase(last, all_neighbors.end()); } bool check_overlap(Node other, uint32_t dim) { if (other.right_edge[dim] < left_edge[dim]) return false; else if (other.left_edge[dim] > right_edge[dim]) return false; else return true; } }; void write_tree_nodes(std::ostream &os, Node* node) { if (node) { // depth first search of tree below node, writing each node to os // as we go node->serialize(os); write_tree_nodes(os, node->less); write_tree_nodes(os, node->greater); } else { // write null character to indicate empty node serialize_scalar(os, false); } } Node* read_tree_nodes(std::istream &is, std::vector &leaves, std::vector &left_nodes) { Node* node = new Node(is); node->left_nodes = left_nodes; bool is_leaf = true; if (is.peek()) { // read left subtree node->less = read_tree_nodes(is, leaves, left_nodes); is_leaf = false; } else { // no left children is.get(); node->less = NULL; } if (is.peek()) { // read right subtree std::vector greater_left_nodes = left_nodes; greater_left_nodes[node->split_dim] = node->less; node->greater = read_tree_nodes(is, leaves, greater_left_nodes); is_leaf = false; } else { // no right children is.get(); node->greater = NULL; } if (is_leaf) { leaves.push_back(node); for (uint32_t d = 0; d < node->ndim; d++) { if ((node->left_nodes[d]) && (!(node->left_nodes[d]->is_empty))) { node->add_neighbors(node->left_nodes[d], d); } } } return node; } void free_tree_nodes(Node* node) { if (node) { free_tree_nodes(node->less); free_tree_nodes(node->greater); delete node; } } class KDTree { public: bool is_partial; bool skip_dealloc_root; bool use_sliding_midpoint; uint64_t* all_idx; uint64_t npts; uint32_t ndim; uint64_t left_idx; int64_t data_version; bool *periodic_left; bool *periodic_right; uint32_t leafsize; double* domain_left_edge; double* domain_right_edge; double* domain_width; bool* periodic; bool any_periodic; double* domain_mins; double* domain_maxs; uint32_t num_leaves; std::vector leaves; Node* root; // KDTree() {} KDTree(double *pts, uint64_t *idx, uint64_t n, uint32_t m, uint32_t leafsize0, double *left_edge, double *right_edge, bool *periodic_left0, bool *periodic_right0, double *domain_mins0, double *domain_maxs0, int64_t dversion, bool use_sliding_midpoint0 = false, bool dont_build = false) { is_partial = true; skip_dealloc_root = false; use_sliding_midpoint = use_sliding_midpoint0; all_idx = idx; npts = n; ndim = m; leafsize = leafsize0; domain_left_edge = (double*)malloc(ndim*sizeof(double)); domain_right_edge = (double*)malloc(ndim*sizeof(double)); periodic_left = (bool*)malloc(ndim*sizeof(bool)); periodic_right = (bool*)malloc(ndim*sizeof(bool)); data_version = dversion; periodic = (bool*)malloc(ndim*sizeof(bool)); domain_mins = NULL; domain_maxs = NULL; domain_width = (double*)malloc(ndim*sizeof(double)); num_leaves = 0; memcpy(domain_left_edge, left_edge, ndim*sizeof(double)); memcpy(domain_right_edge, right_edge, ndim*sizeof(double)); memcpy(periodic_left, periodic_left0, ndim*sizeof(bool)); memcpy(periodic_right, periodic_right0, ndim*sizeof(bool)); if (domain_mins0) { domain_mins = (double*)malloc(ndim*sizeof(double)); memcpy(domain_mins, domain_mins0, ndim*sizeof(double)); } else if (pts) { domain_mins = min_pts(pts, n, m); } if (domain_maxs0) { domain_maxs = (double*)malloc(ndim*sizeof(double)); memcpy(domain_maxs, domain_maxs0, ndim*sizeof(double)); } else if (pts) { domain_maxs = max_pts(pts, n, m); } any_periodic = false; for (uint32_t d = 0; d < ndim; d++) { if ((periodic_left[d]) && (periodic_right[d])) { 
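      // A dimension is treated as periodic only when both of its domain
      // faces wrap; any_periodic records whether any dimension does.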
periodic[d] = true; any_periodic = true; } else { periodic[d] = false; } } for (uint32_t d = 0; d < ndim; d++) domain_width[d] = domain_right_edge[d] - domain_left_edge[d]; if ((pts) && (!(dont_build))) build_tree(pts); } KDTree(double *pts, uint64_t *idx, uint64_t n, uint32_t m, uint32_t leafsize0, double *left_edge, double *right_edge, bool *periodic0, int64_t dversion, bool use_sliding_midpoint0 = false, bool dont_build = false) { is_partial = false; skip_dealloc_root = false; use_sliding_midpoint = use_sliding_midpoint0; left_idx = 0; all_idx = idx; npts = n; ndim = m; leafsize = leafsize0; domain_left_edge = (double*)malloc(ndim*sizeof(double)); domain_right_edge = (double*)malloc(ndim*sizeof(double)); data_version = dversion; periodic_left = (bool*)malloc(ndim*sizeof(bool)); periodic_right = (bool*)malloc(ndim*sizeof(bool)); periodic = (bool*)malloc(ndim*sizeof(bool)); domain_mins = NULL; domain_maxs = NULL; domain_width = (double*)malloc(ndim*sizeof(double)); num_leaves = 0; memcpy(domain_left_edge, left_edge, ndim*sizeof(double)); memcpy(domain_right_edge, right_edge, ndim*sizeof(double)); memcpy(periodic, periodic0, ndim*sizeof(bool)); if (pts) { domain_mins = min_pts(pts, n, m); domain_maxs = max_pts(pts, n, m); } any_periodic = false; for (uint32_t d = 0; d < ndim; d++) { if (periodic[d]) { periodic_left[d] = true; periodic_right[d] = true; any_periodic = true; } else { periodic_left[d] = false; periodic_right[d] = false; } } for (uint32_t d = 0; d < ndim; d++) domain_width[d] = domain_right_edge[d] - domain_left_edge[d]; if ((pts) && (!(dont_build))) build_tree(pts); } KDTree(std::istream &is) { data_version = deserialize_scalar(is); is_partial = deserialize_scalar(is); use_sliding_midpoint = deserialize_scalar(is); npts = deserialize_scalar(is); all_idx = deserialize_pointer_array(is, npts); ndim = deserialize_scalar(is); left_idx = deserialize_scalar(is); periodic = deserialize_pointer_array(is, ndim); periodic_left = deserialize_pointer_array(is, ndim); periodic_right = deserialize_pointer_array(is, ndim); any_periodic = deserialize_scalar(is); leafsize = deserialize_scalar(is); domain_left_edge = deserialize_pointer_array(is, ndim); domain_right_edge = deserialize_pointer_array(is, ndim); domain_width = deserialize_pointer_array(is, ndim); domain_mins = deserialize_pointer_array(is, ndim); domain_maxs = deserialize_pointer_array(is, ndim); num_leaves = deserialize_scalar(is); std::vector left_nodes; for (uint32_t i=0; i < ndim; i++) { left_nodes.push_back(NULL); } root = read_tree_nodes(is, leaves, left_nodes); finalize_neighbors(); } void serialize(std::ostream &os) { serialize_scalar(os, data_version); serialize_scalar(os, is_partial); serialize_scalar(os, use_sliding_midpoint); serialize_scalar(os, npts); serialize_pointer_array(os, all_idx, npts); serialize_scalar(os, ndim); serialize_scalar(os, left_idx); serialize_pointer_array(os, periodic, ndim); serialize_pointer_array(os, periodic_left, ndim); serialize_pointer_array(os, periodic_right, ndim); serialize_scalar(os, any_periodic); serialize_scalar(os, leafsize); serialize_pointer_array(os, domain_left_edge, ndim); serialize_pointer_array(os, domain_right_edge, ndim); serialize_pointer_array(os, domain_width, ndim); serialize_pointer_array(os, domain_mins, ndim); serialize_pointer_array(os, domain_maxs, ndim); serialize_scalar(os, num_leaves); write_tree_nodes(os, root); } ~KDTree() { if (!(skip_dealloc_root)) free_tree_nodes(root); free(domain_left_edge); free(domain_right_edge); free(domain_width); if (domain_mins) 
free(domain_mins); if (domain_maxs) free(domain_maxs); free(periodic); free(periodic_left); free(periodic_right); } void consolidate_edges(double *leaves_le, double *leaves_re) { for (uint32_t k = 0; k < num_leaves; k++) { memcpy(leaves_le+ndim*leaves[k]->leafid, leaves[k]->left_edge, ndim*sizeof(double)); memcpy(leaves_re+ndim*leaves[k]->leafid, leaves[k]->right_edge, ndim*sizeof(double)); } } void build_tree(double* all_pts) { uint32_t d; double *LE = (double*)malloc(ndim*sizeof(double)); double *RE = (double*)malloc(ndim*sizeof(double)); bool *PLE = (bool*)malloc(ndim*sizeof(bool)); bool *PRE = (bool*)malloc(ndim*sizeof(bool)); double *mins = (double*)malloc(ndim*sizeof(double)); double *maxs = (double*)malloc(ndim*sizeof(double)); std::vector left_nodes; if (!(domain_mins)) domain_mins = min_pts(all_pts, npts, ndim); if (!(domain_maxs)) domain_maxs = max_pts(all_pts, npts, ndim); for (d = 0; d < ndim; d++) { LE[d] = domain_left_edge[d]; RE[d] = domain_right_edge[d]; PLE[d] = periodic_left[d]; PRE[d] = periodic_right[d]; mins[d] = domain_mins[d]; maxs[d] = domain_maxs[d]; left_nodes.push_back(NULL); } root = build(0, npts, LE, RE, PLE, PRE, all_pts, mins, maxs, left_nodes); free(LE); free(RE); free(PLE); free(PRE); free(mins); free(maxs); // Finalize neighbors finalize_neighbors(); } void finalize_neighbors() { uint32_t d; // Add periodic neighbors if (any_periodic) set_neighbors_periodic(); // Remove duplicate neighbors for (d = 0; d < num_leaves; d++) { leaves[d]->select_unique_neighbors(); leaves[d]->join_neighbors(); } } void clear_neighbors() { std::vector::iterator it; for (it = leaves.begin(); it != leaves.end(); it++) (*it)->clear_neighbors(); } void set_neighbors_periodic() { uint32_t d0; Node* leaf; Node *prev; uint64_t i, j; // Periodic neighbors for (i = 0; i < num_leaves; i++) { leaf = leaves[i]; for (d0 = 0; d0 < ndim; d0++) { if (!leaf->periodic_left[d0]) continue; for (j = i; j < num_leaves; j++) { prev = leaves[j]; if (!prev->periodic_right[d0]) continue; add_neighbors_periodic(leaf, prev, d0); } } } } void add_neighbors_periodic(Node *leaf, Node *prev, uint32_t d0) { uint32_t d, ndim_escape; bool match; if (!leaf->periodic_left[d0]) return; if (!prev->periodic_right[d0]) return; match = true; ndim_escape = 0; for (d = 0; d < ndim; d++) { if (d == d0) continue; if (leaf->left_edge[d] >= prev->right_edge[d]) { if (!(leaf->periodic_right[d] && prev->periodic_left[d])) { match = false; break; } else { ndim_escape++; } } if (leaf->right_edge[d] <= prev->left_edge[d]) { if (!(prev->periodic_right[d] && leaf->periodic_left[d])) { match = false; break; } else { ndim_escape++; } } } if ((match) && (ndim_escape < (ndim-1))) { // printf("%d: %d, %d (%d)\n", d0, leaf->leafid, prev->leafid, ndim_escape); leaf->left_neighbors[d0].push_back(prev->leafid); prev->right_neighbors[d0].push_back(leaf->leafid); } } Node* build(uint64_t Lidx, uint64_t n, double *LE, double *RE, bool *PLE, bool *PRE, double* all_pts, double *mins, double *maxes, std::vector left_nodes) { // Create leaf if (n < leafsize) { Node* out = new Node(ndim, LE, RE, PLE, PRE, Lidx, n, num_leaves, left_nodes); num_leaves++; leaves.push_back(out); return out; } else { // Split uint32_t dmax, d; int64_t split_idx = 0; double split_val = 0.0; dmax = split(all_pts, all_idx, Lidx, n, ndim, mins, maxes, split_idx, split_val, use_sliding_midpoint); if (maxes[dmax] == mins[dmax]) { // all points singular Node* out = new Node(ndim, LE, RE, PLE, PRE, Lidx, n, num_leaves, left_nodes); num_leaves++; leaves.push_back(out); return 
out; } // Get new boundaries uint64_t Nless = split_idx-Lidx+1; uint64_t Ngreater = n - Nless; double *lessmaxes = (double*)malloc(ndim*sizeof(double)); double *lessright = (double*)malloc(ndim*sizeof(double)); bool *lessPRE = (bool*)malloc(ndim*sizeof(bool)); double *greatermins = (double*)malloc(ndim*sizeof(double)); double *greaterleft = (double*)malloc(ndim*sizeof(double)); bool *greaterPLE = (bool*)malloc(ndim*sizeof(bool)); std::vector greater_left_nodes; for (d = 0; d < ndim; d++) { lessmaxes[d] = maxes[d]; lessright[d] = RE[d]; lessPRE[d] = PRE[d]; greatermins[d] = mins[d]; greaterleft[d] = LE[d]; greaterPLE[d] = PLE[d]; greater_left_nodes.push_back(left_nodes[d]); } lessmaxes[dmax] = split_val; lessright[dmax] = split_val; lessPRE[dmax] = false; greatermins[dmax] = split_val; greaterleft[dmax] = split_val; greaterPLE[dmax] = false; // Build less and greater nodes Node* less = build(Lidx, Nless, LE, lessright, PLE, lessPRE, all_pts, mins, lessmaxes, left_nodes); greater_left_nodes[dmax] = less; Node* greater = build(Lidx+Nless, Ngreater, greaterleft, RE, greaterPLE, PRE, all_pts, greatermins, maxes, greater_left_nodes); // Create innernode referencing child nodes Node* out = new Node(ndim, LE, RE, PLE, PRE, Lidx, dmax, split_val, less, greater, left_nodes); free(lessright); free(greaterleft); free(lessPRE); free(greaterPLE); free(lessmaxes); free(greatermins); return out; } } double* wrap_pos(double* pos) { uint32_t d; double* wrapped_pos = (double*)malloc(ndim*sizeof(double)); for (d = 0; d < ndim; d++) { if (periodic[d]) { if (pos[d] < domain_left_edge[d]) { wrapped_pos[d] = domain_right_edge[d] - fmod((domain_right_edge[d] - pos[d]),domain_width[d]); } else { wrapped_pos[d] = domain_left_edge[d] + fmod((pos[d] - domain_left_edge[d]),domain_width[d]); } } else { wrapped_pos[d] = pos[d]; } } return wrapped_pos; } Node* search(double* pos0, bool dont_wrap = false) { uint32_t i; Node* out = NULL; bool valid; // Wrap positions double* pos; if ((!dont_wrap) && (any_periodic)) pos = wrap_pos(pos0); // allocates new array else pos = pos0; // Ensure that pos is in root, early return NULL if it's not valid = true; for (i = 0; i < ndim; i++) { if (pos[i] < root->left_edge[i]) { valid = false; break; } if (pos[i] >= root->right_edge[i]) { valid = false; break; } } // Traverse tree looking for leaf containing pos if (valid) { out = root; while (!(out->is_leaf)) { if (pos[out->split_dim] < out->split) out = out->less; else out = out->greater; } } if ((!dont_wrap) && (any_periodic)) free(pos); return out; } std::vector get_neighbor_ids(double* pos) { Node* leaf; std::vector neighbors; leaf = search(pos); if (leaf) neighbors = leaf->all_neighbors; return neighbors; } }; ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/cykdtree/c_utils.cpp0000644000175100001770000001446114714401662020770 0ustar00runnerdocker#include #include #include #include #include #include #include "c_utils.hpp" #include bool isEqual(double f1, double f2) { return (fabs(f1 - f2) <= FLT_EPSILON); } double* max_pts(double *pts, uint64_t n, uint32_t m) { double* max = (double*)std::malloc(m*sizeof(double)); uint32_t d; for (d = 0; d < m; d++) max[d] = -DBL_MAX; // pts[d]; for (uint64_t i = 0; i < n; i++) { for (d = 0; d < m; d++) { if (pts[m*i + d] > max[d]) max[d] = pts[m*i + d]; } } return max; } double* min_pts(double *pts, uint64_t n, uint32_t m) { double* min = (double*)std::malloc(m*sizeof(double)); uint32_t d; for (d = 0; d < m; d++) min[d] = DBL_MAX; 
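  // Sentinel initialization: every real coordinate compares below DBL_MAX
  // (and above -DBL_MAX in max_pts), so the first scan over the points
  // always overwrites it.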
// pts[d]; for (uint64_t i = 0; i < n; i++) { for (d = 0; d < m; d++) { if (pts[m*i + d] < min[d]) min[d] = pts[m*i + d]; } } return min; } uint64_t argmax_pts_dim(double *pts, uint64_t *idx, uint32_t m, uint32_t d, uint64_t Lidx, uint64_t Ridx) { double max = -DBL_MAX; uint64_t idx_max = Lidx; for (uint64_t i = Lidx; i <= Ridx; i++) { if (pts[m*idx[i] + d] > max) { max = pts[m*idx[i] + d]; idx_max = i; } } return idx_max; } uint64_t argmin_pts_dim(double *pts, uint64_t *idx, uint32_t m, uint32_t d, uint64_t Lidx, uint64_t Ridx) { double min = DBL_MAX; uint64_t idx_min = Lidx; for (uint64_t i = Lidx; i <= Ridx; i++) { if (pts[m*idx[i] + d] < min) { min = pts[m*idx[i] + d]; idx_min = i; } } return idx_min; } // http://www.comp.dit.ie/rlawlor/Alg_DS/sorting/quickSort.c void quickSort(double *pts, uint64_t *idx, uint32_t ndim, uint32_t d, int64_t l, int64_t r) { int64_t j; if( l < r ) { // divide and conquer j = partition(pts, idx, ndim, d, l, r, (l+r)/2); quickSort(pts, idx, ndim, d, l, j-1); quickSort(pts, idx, ndim, d, j+1, r); } } void insertSort(double *pts, uint64_t *idx, uint32_t ndim, uint32_t d, int64_t l, int64_t r) { int64_t i, j; uint64_t t; if (r <= l) return; for (i = l+1; i <= r; i++) { t = idx[i]; j = i - 1; while ((j >= l) && (pts[ndim*idx[j]+d] > pts[ndim*t+d])) { idx[j+1] = idx[j]; j--; } idx[j+1] = t; } } int64_t pivot(double *pts, uint64_t *idx, uint32_t ndim, uint32_t d, int64_t l, int64_t r) { if (r < l) { return -1; } else if (r == l) { return l; } else if ((r - l) < 5) { insertSort(pts, idx, ndim, d, l, r); return (l+r)/2; } int64_t i, subr, m5; uint64_t t; int64_t nsub = 0; for (i = l; i <= r; i+=5) { subr = i + 4; if (subr > r) subr = r; insertSort(pts, idx, ndim, d, i, subr); m5 = (i+subr)/2; t = idx[m5]; idx[m5] = idx[l + nsub]; idx[l + nsub] = t; nsub++; } return pivot(pts, idx, ndim, d, l, l+nsub-1); // return select(pts, idx, ndim, d, l, l+nsub-1, (nsub/2)+(nsub%2)); } int64_t partition_given_pivot(double *pts, uint64_t *idx, uint32_t ndim, uint32_t d, int64_t l, int64_t r, double pivot) { // If all less than pivot, j will remain r // If all greater than pivot, j will be l-1 if (r < l) return -1; int64_t i, j, tp = -1; uint64_t t; for (i = l, j = r; i <= j; ) { if ((pts[ndim*idx[i]+d] > pivot) && (pts[ndim*idx[j]+d] <= pivot)) { t = idx[i]; idx[i] = idx[j]; idx[j] = t; } if (isEqual(pts[ndim*idx[i]+d], pivot)) tp = i; // if (pts[ndim*idx[i]+d] == pivot) tp = i; if (pts[ndim*idx[i]+d] <= pivot) i++; if (pts[ndim*idx[j]+d] > pivot) j--; } if ((tp >= 0) && (tp != j)) { t = idx[tp]; idx[tp] = idx[j]; idx[j] = t; } return j; } int64_t partition(double *pts, uint64_t *idx, uint32_t ndim, uint32_t d, int64_t l, int64_t r, int64_t p) { double pivot; int64_t j; uint64_t t; if (r < l) return -1; pivot = pts[ndim*idx[p]+d]; t = idx[p]; idx[p] = idx[l]; idx[l] = t; j = partition_given_pivot(pts, idx, ndim, d, l+1, r, pivot); t = idx[l]; idx[l] = idx[j]; idx[j] = t; return j; } // https://en.wikipedia.org/wiki/Median_of_medians int64_t select(double *pts, uint64_t *idx, uint32_t ndim, uint32_t d, int64_t l0, int64_t r0, int64_t n) { int64_t p; int64_t l = l0, r = r0; while ( 1 ) { if (l == r) return l; p = pivot(pts, idx, ndim, d, l, r); p = partition(pts, idx, ndim, d, l, r, p); if (p < 0) return -1; else if (n == (p-l0+1)) { return p; } else if (n < (p-l0+1)) { r = p - 1; } else { l = p + 1; } } } uint32_t split(double *all_pts, uint64_t *all_idx, uint64_t Lidx, uint64_t n, uint32_t ndim, double *mins, double *maxes, int64_t &split_idx, double &split_val, bool 
               use_sliding_midpoint) {
  // Return immediately if variables empty
  if ((n == 0) || (ndim == 0)) {
    split_idx = -1;
    split_val = 0.0;
    return 0;
  }
  // Find dimension to split along
  uint32_t dmax, d;
  dmax = 0;
  for (d = 1; d < ndim; d++)
    if ((maxes[d]-mins[d]) > (maxes[dmax]-mins[dmax]))
      dmax = d;
  if (maxes[dmax] == mins[dmax]) {
    // all points singular
    return ndim;
  }
  if (use_sliding_midpoint) {
    // Split at middle, then slide midpoint as necessary
    split_val = (mins[dmax] + maxes[dmax])/2.0;
    split_idx = partition_given_pivot(all_pts, all_idx, ndim, dmax,
                                      Lidx, Lidx+n-1, split_val);
    if (split_idx == (int64_t)(Lidx-1)) {
      uint64_t t;
      split_idx = argmin_pts_dim(all_pts, all_idx, ndim, dmax, Lidx, Lidx+n-1);
      t = all_idx[split_idx];
      all_idx[split_idx] = all_idx[Lidx];
      all_idx[Lidx] = t;
      split_idx = Lidx;
      split_val = all_pts[ndim*all_idx[split_idx] + dmax];
    } else if (split_idx == (int64_t)(Lidx+n-1)) {
      uint64_t t;
      split_idx = argmax_pts_dim(all_pts, all_idx, ndim, dmax, Lidx, Lidx+n-1);
      t = all_idx[split_idx];
      all_idx[split_idx] = all_idx[Lidx+n-1];
      all_idx[Lidx+n-1] = t;
      split_idx = Lidx+n-2;
      split_val = all_pts[ndim*all_idx[split_idx] + dmax];
    }
  } else {
    // Find median along dimension
    int64_t nsel = (n/2) + (n%2);
    split_idx = select(all_pts, all_idx, ndim, dmax, Lidx, Lidx+n-1, nsel);
    split_val = all_pts[ndim*all_idx[split_idx] + dmax];
  }
  return dmax;
}
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/cykdtree/c_utils.hpp0000644000175100001770000000322514714401662020771 0ustar00runnerdocker#include <math.h>
#include <float.h>
#include <stdint.h>
#include <cstdlib>
#include <cstdio>
#include <cstring>
#include <iostream>
#include <vector>

bool isEqual(double f1, double f2);
double* max_pts(double *pts, uint64_t n, uint32_t m);
double* min_pts(double *pts, uint64_t n, uint32_t m);
uint64_t argmin_pts_dim(double *pts, uint64_t *idx, uint32_t m, uint32_t d,
                        uint64_t Lidx, uint64_t Ridx);
uint64_t argmax_pts_dim(double *pts, uint64_t *idx, uint32_t m, uint32_t d,
                        uint64_t Lidx, uint64_t Ridx);
void quickSort(double *pts, uint64_t *idx, uint32_t ndim, uint32_t d,
               int64_t l, int64_t r);
void insertSort(double *pts, uint64_t *idx, uint32_t ndim, uint32_t d,
                int64_t l, int64_t r);
int64_t pivot(double *pts, uint64_t *idx, uint32_t ndim, uint32_t d,
              int64_t l, int64_t r);
int64_t partition_given_pivot(double *pts, uint64_t *idx, uint32_t ndim,
                              uint32_t d, int64_t l, int64_t r, double pivot);
int64_t partition(double *pts, uint64_t *idx, uint32_t ndim, uint32_t d,
                  int64_t l, int64_t r, int64_t p);
int64_t select(double *pts, uint64_t *idx, uint32_t ndim, uint32_t d,
               int64_t l, int64_t r, int64_t n);
uint32_t split(double *all_pts, uint64_t *all_idx, uint64_t Lidx, uint64_t n,
               uint32_t ndim, double *mins, double *maxes,
               int64_t &split_idx, double &split_val,
               bool use_sliding_midpoint = false);
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/cykdtree/kdtree.pxd0000644000175100001770000000761014714401662020613 0ustar00runnerdockercimport numpy as np
from libc.stdint cimport int32_t, int64_t, uint32_t, uint64_t
from libcpp cimport bool
from libcpp.pair cimport pair
from libcpp.vector cimport vector

cdef extern from "<iostream>" namespace "std":
    cdef cppclass istream:
        pass
    cdef cppclass ostream:
        pass

# the following extern definitions adapted from
# http://stackoverflow.com/a/31009461/1382869
# obviously std::ios_base isn't a namespace, but this lets
# Cython generate the correct C++ code
cdef extern from "<fstream>" namespace "std::ios_base":
    cdef cppclass open_mode:
        pass
    cdef open_mode binary
    # you can define other constants as needed

cdef extern from "<fstream>" namespace "std":
    cdef cppclass ofstream(ostream):
        # constructors
        ofstream(const char*) except +
        ofstream(const char*, open_mode) except+
    cdef cppclass ifstream(istream):
        # constructors
        ifstream(const char*) except +
        ifstream(const char*, open_mode) except+

cdef extern from "c_kdtree.hpp":
    cdef cppclass Node:
        bool is_leaf
        uint32_t leafid
        uint32_t ndim
        double *left_edge
        double *right_edge
        uint64_t left_idx
        uint64_t children
        uint32_t split_dim
        double split
        Node* less
        Node* greater
        bool *periodic_left
        bool *periodic_right
        vector[vector[uint32_t]] left_neighbors
        vector[vector[uint32_t]] right_neighbors
        vector[uint32_t] all_neighbors
    cdef cppclass KDTree:
        uint64_t* all_idx
        uint64_t npts
        uint32_t ndim
        int64_t data_version
        uint32_t leafsize
        double* domain_left_edge
        double* domain_right_edge
        double* domain_width
        double* domain_mins
        double* domain_maxs
        bool* periodic
        bool any_periodic
        uint32_t num_leaves
        vector[Node*] leaves
        Node* root
        KDTree(double *pts, uint64_t *idx, uint64_t n, uint32_t m,
               uint32_t leafsize0, double *left_edge, double *right_edge,
               bool *periodic, int64_t data_version)
        KDTree(double *pts, uint64_t *idx, uint64_t n, uint32_t m,
               uint32_t leafsize0, double *left_edge, double *right_edge,
               bool *periodic, int64_t data_version,
               bool use_sliding_midpoint)
        KDTree(istream &ist)
        void serialize(ostream &os)
        double* wrap_pos(double* pos) noexcept nogil
        vector[uint32_t] get_neighbor_ids(double* pos) noexcept nogil
        Node* search(double* pos) noexcept nogil
        Node* search(double* pos, bool dont_wrap) noexcept nogil
        void consolidate_edges(double *leaves_le, double *leaves_re)

cdef class PyNode:
    cdef Node *_node
    cdef readonly np.uint32_t id
    cdef readonly np.uint64_t npts
    cdef readonly np.uint32_t ndim
    cdef readonly np.uint32_t num_leaves
    cdef readonly np.uint64_t start_idx
    cdef readonly np.uint64_t stop_idx
    cdef double *_domain_width
    cdef readonly object left_neighbors, right_neighbors
    cdef void _init_node(self, Node* node, uint32_t num_leaves,
                         double *domain_width)

cdef class PyKDTree:
    cdef KDTree *_tree
    cdef readonly uint64_t npts
    cdef readonly uint32_t ndim
    cdef readonly uint32_t num_leaves
    cdef readonly uint32_t leafsize
    cdef readonly int64_t data_version
    cdef double *_left_edge
    cdef double *_right_edge
    cdef bool *_periodic
    cdef readonly object leaves
    cdef readonly object _idx
    cdef void _init_tree(self, KDTree* tree)
    cdef void _make_tree(self, double *pts, bool use_sliding_midpoint)
    cdef void _make_leaves(self)
    cdef np.ndarray[np.uint32_t, ndim=1] _get_neighbor_ids(self, np.ndarray[double, ndim=1] pos)
    cdef np.ndarray[np.uint32_t, ndim=1] _get_neighbor_ids_3(self, np.float64_t pos[3])
    cdef PyNode _get(self, np.ndarray[double, ndim=1] pos)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/cykdtree/kdtree.pyx0000644000175100001770000004604614714401662020646 0ustar00runnerdocker# distutils: libraries = STD_LIBS
# Note that we used to include the empty c_kdtree.cpp file, but that seems to break cythonize.
# distutils: sources = yt/utilities/lib/cykdtree/c_utils.cpp
# distutils: depends = yt/utilities/lib/cykdtree/c_kdtree.hpp, yt/utilities/lib/cykdtree/c_utils.hpp
# distutils: language = c++
# distutils: extra_compile_args = CPP11_FLAG
import numpy as np

cimport numpy as np

from cpython cimport bool as pybool
from cython.operator cimport dereference
from libc.stdint cimport uint32_t, uint64_t
from libc.stdlib cimport free, malloc
from libcpp cimport bool as cbool


cdef class PyNode:
    r"""A container for leaf info.

    Attributes:
        npts (np.uint64_t): Number of points in this node.
        ndim (np.uint32_t): Number of dimensions in domain.
        num_leaves (np.uint32_t): Number of leaves in the tree containing
            this node.
        start_idx (np.uint64_t): Index where indices for this node begin.
        stop_idx (np.uint64_t): One past the end of indices for this node.
        left_edge (np.ndarray of float64): Minimum bounds of this node in
            each dimension.
        right_edge (np.ndarray of float64): Maximum bounds of this node in
            each dimension.
        periodic_left (np.ndarray of bool): Periodicity of minimum bounds.
        periodic_right (np.ndarray of bool): Periodicity of maximum bounds.
        domain_width (np.ndarray of float64): Width of the total domain in
            each dimension.
        left_neighbors (list of lists): Indices of neighbor leaves at the
            minimum bounds in each dimension.
        right_neighbors (list of lists): Indices of neighbor leaves at the
            maximum bounds in each dimension.

    """

    cdef void _init_node(self, Node* node, uint32_t num_leaves,
                         double *domain_width):
        cdef np.uint32_t i, j
        self._node = node
        self.id = node.leafid
        self.npts = node.children
        self.ndim = node.ndim
        self.num_leaves = num_leaves
        self.start_idx = node.left_idx
        self.stop_idx = (node.left_idx + node.children)
        self._domain_width = domain_width
        self.left_neighbors = [None for i in range(self.ndim)]
        self.right_neighbors = [None for i in range(self.ndim)]
        for i in range(self.ndim):
            self.left_neighbors[i] = [node.left_neighbors[i][j] for j in
                                      range(node.left_neighbors[i].size())]
            self.right_neighbors[i] = [node.right_neighbors[i][j] for j in
                                       range(node.right_neighbors[i].size())]

    def __cinit__(self):
        # Initialize everything to NULL/0/None to prevent seg fault
        self._node = NULL
        self.id = 0
        self.npts = 0
        self.ndim = 0
        self.num_leaves = 0
        self.start_idx = 0
        self.stop_idx = 0
        self._domain_width = NULL
        self.left_neighbors = None
        self.right_neighbors = None

    def __init__(self):
        pass

    def __repr__(self):
        nchars = 1 + len(str(self.__class__.__name__))
        return ('%s(id=%i, npts=%i, start_idx=%i, stop_idx=%i,\n' +
                ' ' * nchars + 'left_edge=%s,\n' +
                ' ' * nchars + 'right_edge=%s)') % (
            self.__class__.__name__,
            self.id, self.npts, self.start_idx, self.stop_idx,
            self.left_edge, self.right_edge,
        )

    @property
    def periodic_left(self):
        cdef cbool[:] view = <cbool[:self.ndim]> self._node.periodic_left
        return np.asarray(view)

    @property
    def periodic_right(self):
        cdef cbool[:] view = <cbool[:self.ndim]> self._node.periodic_right
        return np.asarray(view)

    @property
    def left_edge(self):
        cdef np.float64_t[:] view = <np.float64_t[:self.ndim]> self._node.left_edge
        return np.asarray(view)

    @property
    def right_edge(self):
        cdef np.float64_t[:] view = <np.float64_t[:self.ndim]> self._node.right_edge
        return np.asarray(view)

    @property
    def domain_width(self):
        cdef np.float64_t[:] view = <np.float64_t[:self.ndim]> self._domain_width
        return np.asarray(view)

    @property
    def slice(self):
        """slice: Slice of kdtree indices contained by this node."""
        return slice(self.start_idx, self.stop_idx)

    @property
    def neighbors(self):
        """list of int: Indices of all neighboring leaves including this
        leaf."""
        cdef np.uint32_t i
        cdef object out
        cdef vector[uint32_t] vout = self._node.all_neighbors
        out = [vout[i] for i in range(vout.size())]
        return out

    def assert_equal(self, PyNode solf):
        """Assert that node properties are equal."""
        np.testing.assert_equal(self.npts, solf.npts)
        np.testing.assert_equal(self.ndim, solf.ndim)
        np.testing.assert_equal(self.num_leaves, solf.num_leaves)
        np.testing.assert_equal(self.id, solf.id)
        np.testing.assert_equal(self.start_idx, solf.start_idx)
        np.testing.assert_equal(self.stop_idx, solf.stop_idx)
        np.testing.assert_array_equal(self.left_edge, solf.left_edge)
        np.testing.assert_array_equal(self.right_edge, solf.right_edge)
        np.testing.assert_array_equal(self.periodic_left, solf.periodic_left)
        np.testing.assert_array_equal(self.periodic_right, solf.periodic_right)
        for i in range(self.ndim):
            np.testing.assert_equal(self.left_neighbors[i], solf.left_neighbors[i])
            np.testing.assert_equal(self.right_neighbors[i], solf.right_neighbors[i])
        np.testing.assert_equal(self.neighbors, solf.neighbors)


cdef class PyKDTree:
    r"""Construct a KDTree for a set of points.

    Args:
        pts (np.ndarray of double): (n,m) array of n coordinates in a
            m-dimensional domain.
        left_edge (np.ndarray of double): (m,) domain minimum in each
            dimension.
        right_edge (np.ndarray of double): (m,) domain maximum in each
            dimension.
        periodic (bool or np.ndarray of bool, optional): Truth of the domain
            periodicity overall (if bool), or in each dimension (if
            np.ndarray). Defaults to `False`.
        leafsize (int, optional): The maximum number of points that should be
            in a leaf. Defaults to 10000.
        nleaves (int, optional): The number of leaves that should be in the
            resulting tree. If greater than 0, leafsize is adjusted to produce
            a tree with 2**(ceil(log2(nleaves))) leaves. The leafsize keyword
            argument is ignored if nleaves is greater than zero. Defaults to 0.
        data_version (int, optional): An optional user-provided integer that
            can be used to uniquely identify the data used to generate the
            KDTree. This is useful if you save the kdtree to disk and restore
            it later and need to verify that the underlying data is the same.
        use_sliding_midpoint (bool, optional): If True, the sliding midpoint
            rule is used to perform splits. Otherwise, the median is used.
            Defaults to False.

    Raises:
        ValueError: If `leafsize < 2`. This currently segfaults.

    Attributes:
        npts (uint64): Number of points in the tree.
        ndim (uint32): Number of dimensions points occupy.
        data_version (int64): User-provided version number (defaults to 0).
        num_leaves (uint32): Number of leaves in the tree.
        leafsize (uint32): Maximum number of points a leaf can have.
        leaves (list of `cykdtree.PyNode`): Tree leaves.
        idx (np.ndarray of uint64): Indices sorting the points by leaf.
        left_edge (np.ndarray of double): (m,) domain minimum in each
            dimension.
        right_edge (np.ndarray of double): (m,) domain maximum in each
            dimension.
        domain_width (np.ndarray of double): (m,) domain width in each
            dimension.
        periodic (np.ndarray of bool): Truth of domain periodicity in each
            dimension.
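    Example:
        A minimal construction sketch; the sizes, edges, and keyword values
        below are illustrative only::

            >>> import numpy as np
            >>> from yt.utilities.lib.cykdtree import PyKDTree
            >>> pts = np.random.rand(100, 2).astype('float64')
            >>> tree = PyKDTree(pts, np.zeros(2), np.ones(2),
            ...                 periodic=True, leafsize=10)  # doctest: +SKIP
            >>> tree.idx.size  # doctest: +SKIP
            100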
""" cdef void _init_tree(self, KDTree* tree): self._tree = tree self.ndim = tree.ndim self.data_version = tree.data_version self.npts = tree.npts self.num_leaves = tree.num_leaves self.leafsize = tree.leafsize self._make_leaves() self._idx = np.empty(self.npts, 'uint64') cdef uint64_t i for i in range(self.npts): self._idx[i] = tree.all_idx[i] def __cinit__(self): # Initialize everything to NULL/0/None to prevent seg fault self._tree = NULL self.npts = 0 self.ndim = 0 self.num_leaves = 0 self.leafsize = 0 self._left_edge = NULL self._right_edge = NULL self._periodic = NULL self.leaves = None self._idx = None def __init__(self, np.ndarray[double, ndim=2] pts = None, left_edge = None, right_edge = None, periodic = False, int leafsize = 10000, int nleaves = 0, data_version = None, use_sliding_midpoint = False): # Return with nothing set if points not provided if pts is None: return # Set leafsize of number of leaves provided if nleaves > 0: nleaves = (2**np.ceil(np.log2(nleaves))) leafsize = pts.shape[0]//nleaves + 1 if (leafsize < 2): # This is here to prevent segfault. The cpp code needs modified to # support leafsize = 1 raise ValueError("'leafsize' cannot be smaller than 2.") if left_edge is None: left_edge = np.min(pts, axis=0) else: left_edge = np.array(left_edge) if right_edge is None: right_edge = np.max(pts, axis=0) else: right_edge = np.array(right_edge) if data_version is None: data_version = 0 self.data_version = data_version cdef uint32_t i self.npts = pts.shape[0] self.ndim = pts.shape[1] assert(left_edge.size == self.ndim) assert(right_edge.size == self.ndim) self.leafsize = leafsize self._left_edge = malloc(self.ndim*sizeof(double)) self._right_edge = malloc(self.ndim*sizeof(double)) self._periodic = malloc(self.ndim*sizeof(cbool)) for i in range(self.ndim): self._left_edge[i] = left_edge[i] self._right_edge[i] = right_edge[i] if isinstance(periodic, pybool): for i in range(self.ndim): self._periodic[i] = periodic else: for i in range(self.ndim): self._periodic[i] = periodic[i] # Create tree and leaves self._make_tree(&pts[0,0], use_sliding_midpoint) self._make_leaves() def __dealloc__(self): if self._tree != NULL: del self._tree if self._left_edge != NULL: free(self._left_edge) if self._right_edge != NULL: free(self._right_edge) if self._periodic != NULL: free(self._periodic) def assert_equal(self, PyKDTree solf, pybool strict_idx = True): r"""Compare this tree to another tree. Args: solf (PyKDTree): Another KDTree to compare with this one. strict_idx (bool, optional): If True, the index vectors are compared for equality element by element. If False, corresponding leaves must contain the same indices, but they can be in any order. Defaults to True. Raises: AssertionError: If there are mismatches between any of the two trees' parameters. 
""" np.testing.assert_equal(self.npts, solf.npts) np.testing.assert_equal(self.ndim, solf.ndim) np.testing.assert_equal(self.data_version, solf.data_version) np.testing.assert_equal(self.leafsize, solf.leafsize) np.testing.assert_equal(self.num_leaves, solf.num_leaves) np.testing.assert_array_equal(self.left_edge, solf.left_edge) np.testing.assert_array_equal(self.right_edge, solf.right_edge) np.testing.assert_array_equal(self.periodic, solf.periodic) # Compare index at the leaf level since we only care that the leaves # contain the same points if strict_idx: np.testing.assert_array_equal(self._idx, solf._idx) for i in range(self.num_leaves): self.leaves[i].assert_equal(solf.leaves[i]) if not strict_idx: np.testing.assert_array_equal( np.sort(self._idx[self.leaves[i].slice]), np.sort(solf._idx[solf.leaves[i].slice])) cdef void _make_tree(self, double *pts, bool use_sliding_midpoint): r"""Carry out creation of KDTree at C++ level.""" cdef uint64_t[:] idx = np.arange(self.npts).astype('uint64') self._tree = new KDTree(pts, &idx[0], self.npts, self.ndim, self.leafsize, self._left_edge, self._right_edge, self._periodic, self.data_version, use_sliding_midpoint) self._idx = idx cdef void _make_leaves(self): r"""Create a list of Python leaf objects from C++ leaves.""" self.num_leaves = self._tree.leaves.size() self.leaves = [None for _ in xrange(self.num_leaves)] cdef Node* leafnode cdef PyNode leafnode_py for k in xrange(self.num_leaves): leafnode = self._tree.leaves[k] leafnode_py = PyNode() leafnode_py._init_node(leafnode, self.num_leaves, self._tree.domain_width) self.leaves[leafnode.leafid] = leafnode_py @property def left_edge(self): cdef np.float64_t[:] view = self._tree.domain_left_edge return np.asarray(view) @property def right_edge(self): cdef np.float64_t[:] view = self._tree.domain_right_edge return np.asarray(view) @property def domain_width(self): cdef np.float64_t[:] view = self._tree.domain_width return np.asarray(view) @property def periodic(self): cdef cbool[:] view = self._tree.periodic # return np.asarray(view) cdef object out = np.empty(self.ndim, 'bool') cdef np.uint32_t i for i in range(self.ndim): out[i] = view[i] return out def leaf_idx(self, np.uint32_t leafid): r"""Get array of indices for points in a leaf. Args: leafid (np.uint32_t): Unique index of the leaf in question. Returns: np.ndarray of np.uint64_t: Indices of points belonging to leaf. """ cdef np.ndarray[np.uint64_t] out = self._idx[self.leaves[leafid].slice] return out cdef np.ndarray[np.uint32_t, ndim=1] _get_neighbor_ids(self, np.ndarray[double, ndim=1] pos): cdef np.uint32_t i cdef vector[uint32_t] vout = self._tree.get_neighbor_ids(&pos[0]) cdef np.ndarray[np.uint32_t, ndim=1] out = np.empty(vout.size(), 'uint32') for i in xrange(vout.size()): out[i] = vout[i] return out @property def idx(self): return np.asarray(self._idx) def get_neighbor_ids(self, np.ndarray[double, ndim=1] pos): r"""Return the IDs of leaves containing & neighboring a given position. Args: pos (np.ndarray of double): Coordinates. Returns: np.ndarray of uint32: Leaves containing/neighboring `pos`. Raises: ValueError: If pos is not contained within the KDTree. 
""" return self._get_neighbor_ids(pos) cdef np.ndarray[np.uint32_t, ndim=1] _get_neighbor_ids_3(self, np.float64_t pos[3]): cdef np.uint32_t i cdef vector[uint32_t] vout = self._tree.get_neighbor_ids(&pos[0]) cdef np.ndarray[np.uint32_t, ndim=1] out = np.empty(vout.size(), 'uint32') for i in xrange(vout.size()): out[i] = vout[i] return out cdef PyNode _get(self, np.ndarray[double, ndim=1] pos): assert(len(pos) == self.ndim) cdef Node* leafnode = self._tree.search(&pos[0]) if leafnode == NULL: raise ValueError("Position is not within the kdtree root node.") cdef PyNode out = self.leaves[leafnode.leafid] return out def get(self, np.ndarray[double, ndim=1] pos): r"""Return the leaf containing a given position. Args: pos (np.ndarray of double): Coordinates. Returns: :class:`cykdtree.PyNode`: Leaf containing `pos`. Raises: ValueError: If pos is not contained within the KDTree. """ return self._get(pos) def consolidate_edges(self): r"""Return arrays of the left and right edges for all leaves in the tree on each process. Returns: tuple(np.ndarray of double, np.ndarray of double): The left (first array) and right (second array) edges of each leaf (1st array dimension), in each dimension (2nd array dimension). """ cdef np.ndarray[np.float64_t, ndim=2] leaves_le cdef np.ndarray[np.float64_t, ndim=2] leaves_re leaves_le = np.empty((self.num_leaves, self.ndim), 'float64') leaves_re = np.empty((self.num_leaves, self.ndim), 'float64') self._tree.consolidate_edges(&leaves_le[0,0], &leaves_re[0,0]) return (leaves_le, leaves_re) def save(self, str filename): r"""Saves the PyKDTree to disk as raw binary data. Note that this file may not necessarily be portable. Args: filename (string): Name of the file to serialize the kdtree to """ cdef KDTree* my_tree = self._tree cdef ofstream* outputter = new ofstream(filename.encode('utf8'), binary) try: my_tree.serialize(dereference(outputter)) finally: del outputter @classmethod def from_file(cls, str filename, data_version=None): r"""Create a PyKDTree from a binary file created by ``PyKDTree.save()`` Note that loading a file created on another machine may create a corrupted PyKDTree instance. Args: filename (string): Name of the file to load the kdtree from data_version (int): A unique integer corresponding to the data being loaded. If the loaded data_version does not match the data_version supplied here then an OSError is raised. Optional. Returns: :class:`cykdtree.PyKDTree`: A KDTree restored from the file """ cdef ifstream* inputter = new ifstream(filename.encode(), binary) cdef PyKDTree ret = cls() if data_version is None: data_version = 0 try: ret._init_tree(new KDTree(dereference(inputter))) finally: del inputter return ret ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/cykdtree/plot.py0000644000175100001770000001232114714401662020143 0ustar00runnerdockerimport numpy as np def _plot2D_root( seg, pts=None, txt=None, plotfile=None, point_kw=None, box_kw=None, axs=None, subplot_kw=None, gridspec_kw=None, fig_kw=None, save_kw=None, title=None, xlabel="x", ylabel="y", label_kw=None, ): r"""Plot a 2D kd-tree. Args: seg (list of np.ndarray): Line segments to plot defining box edges. pts (np.ndarray, optional): Points contained by the kdtree. Defaults to None if not provided and points are not plotted. txt (list of tuples, optional): Each tuple contains the (x, y, string) information for text labels to be added to the boxes. Defaults to None and text is not added. 
plotfile (:obj:`str`, optional): Full path to file where the plot should be saved. If None, the plot is displayed. Defaults to None point_kw (:obj:`dict`, optional): Keywords passed directly to :func:`matplotlib.pyplot.scatter` for drawing the points. Defaults to empty dict. box_kw (:obj:`dict`, optional): Keywords passed directly to :class:`matplotlib.collections.LineCollection` for drawing the leaf boxes. Defaults to empty dict. axs (:obj:`matplotlib.pyplot.Axes`, optional): Axes that should be used for plotting. Defaults to None and new axes are created. subplot_kw (:obj:`dict`, optional): Keywords passed directly to :meth:`matplotlib.figure.Figure.add_subplot`. Defaults to {}. gridspec_kw (:obj:`dict`, optional): Keywords passed directly to :class:`matplotlib.gridspec.GridSpec`. Defaults to empty dict. fig_kw (:obj:`dict`, optional): Keywords passed directly to :func:`matplotlib.pyplot.figure`. Defaults to empty dict. save_kw (:obj:`dict`, optional): Keywords passed directly to :func:`matplotlib.pyplot.savefig`. Defaults to empty dict. title (:obj:`str`, optional): Title that the plot should be given. Defaults to None and no title is displayed. xlabel (:obj:`str`, optional): Label for the x-axis. Defaults to 'x'. ylabel (:obj:`str`, optional): Label for the y-axis. Defaults to 'y'. label_kw (:obj:`dict`, optional): Keywords passed directly to :class:`matplotlib.text.Text` when creating box labels. Defaults to empty dict. Returns: :obj:`matplotlib.pyplot.Axes`: Axes containing the plot. """ import matplotlib.pyplot as plt from matplotlib.collections import LineCollection if point_kw is None: point_kw = {} if box_kw is None: box_kw = {} if subplot_kw is None: subplot_kw = {} if gridspec_kw is None: gridspec_kw = {} if fig_kw is None: fig_kw = {} if save_kw is None: save_kw = {} if label_kw is None: label_kw = {} # Axes creation if axs is None: plt.close("all") fig, axs = plt.subplots( subplot_kw=subplot_kw, gridspec_kw=gridspec_kw, **fig_kw ) # Labels if title is not None: axs.set_title(title) axs.set_xlabel(xlabel, **label_kw) axs.set_ylabel(ylabel, **label_kw) # Plot points if isinstance(pts, list): for p in pts: if p is not None: axs.scatter(p[:, 0], p[:, 1], **point_kw) elif pts is not None: axs.scatter(pts[:, 0], pts[:, 1], **point_kw) # Plot boxes lc = LineCollection(seg, **box_kw) axs.add_collection(lc) # Labels if txt is not None: # label_kw.setdefault('axes', axs) label_kw.setdefault("verticalalignment", "bottom") label_kw.setdefault("horizontalalignment", "left") for t in txt: plt.text(*t, **label_kw) axs.autoscale() axs.margins(0.1) # Save if plotfile is not None: plt.savefig(plotfile, **save_kw) else: plt.show() # Return axes return axs def plot2D_serial(tree, pts=None, label_boxes=False, **kwargs): r"""Plot a 2D kd-tree constructed in serial. Parameters ---------- tree: :class:`cykdtree.kdtree.PyKDTree` kd-tree class. pts: np.ndarray, optional Points contained by the kdtree. label_boxes: bool If True, leaves in the tree are labeled with their index. Defaults to False. Additional keywords are passed to :func:`cykdtree.plot._plot2D_root`. Returns ------- :obj:`matplotlib.pyplot.Axes`: Axes containing the plot. 
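    Examples
    --------
    Illustrative only; the output file name is arbitrary::

        >>> axs = plot2D_serial(tree, pts, label_boxes=True,
        ...                     plotfile="tree2D.png")  # doctest: +SKIP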
""" # Box edges seg = [] for leaf in tree.leaves: le = leaf.left_edge re = leaf.right_edge # Top seg.append(np.array([[le[0], re[1]], [re[0], re[1]]], "float")) # Bottom seg.append(np.array([[le[0], le[1]], [re[0], le[1]]], "float")) # Left seg.append(np.array([[le[0], le[1]], [le[0], re[1]]], "float")) # Right seg.append(np.array([[re[0], le[1]], [re[0], re[1]]], "float")) # Labels txt = None if label_boxes: txt = [] for leaf in tree.leaves: txt.append((leaf.left_edge[0], leaf.left_edge[1], "%d" % leaf.id)) # Return axes return _plot2D_root(seg, pts=pts, txt=txt, **kwargs) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3991537 yt-4.4.0/yt/utilities/lib/cykdtree/tests/0000755000175100001770000000000014714401715017755 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/cykdtree/tests/__init__.py0000644000175100001770000001667114714401662022102 0ustar00runnerdockerimport cProfile import itertools import pstats import sys import time from datetime import datetime from subprocess import PIPE, Popen import numpy as np from nose.tools import nottest def assert_less_equal(x, y): size_match = True try: xshape = (1,) yshape = (1,) if isinstance(x, np.ndarray) or isinstance(y, np.ndarray): if isinstance(x, np.ndarray): xshape = x.shape if isinstance(y, np.ndarray): yshape = y.shape size_match = xshape == yshape assert (x <= y).all() else: assert x <= y except: if not size_match: raise AssertionError( "Shape mismatch\n\n" + f"x.shape: {str(x.shape)}\ny.shape: {str(y.shape)}\n" ) raise AssertionError( "Variables are not less-equal ordered\n\n" + f"x: {str(x)}\ny: {str(y)}\n" ) def call_subprocess(np, func, args, kwargs): # Create string with arguments & kwargs args_str = "" for a in args: args_str += f"{a}," for k, v in kwargs.items(): args_str += f"{k}={v}," if args_str.endswith(","): args_str = args_str[:-1] cmd = [ "mpirun", "-n", str(np), sys.executable, "-c", f"'from {func.__module__} import {func.__name__}; {func.__name__}({args_str})'", ] cmd = " ".join(cmd) print(f"Running the following command:\n{cmd}") p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=True) output, err = p.communicate() exit_code = p.returncode print(output.decode("utf-8")) if exit_code != 0: print(err.decode("utf-8")) raise Exception("Error on spawned process. 
See output.") return None return output.decode("utf-8") def iter_dict(dicts): try: return ( dict(itertools.izip(dicts, x)) for x in itertools.product(*dicts.itervalues()) ) except AttributeError: # python 3 return (dict(zip(dicts, x)) for x in itertools.product(*dicts.values())) def parametrize(**pargs): for k in pargs.keys(): if not isinstance(pargs[k], (tuple, list)): pargs[k] = (pargs[k],) def dec(func): def pfunc(kwargs0): # Wrapper so that name encodes parameters def wrapped(*args, **kwargs): kwargs.update(**kwargs0) return func(*args, **kwargs) wrapped.__name__ = func.__name__ for k, v in kwargs0.items(): wrapped.__name__ += f"_{k}{v}" return wrapped def func_param(*args, **kwargs): out = [] for ipargs in iter_dict(pargs): out.append(pfunc(ipargs)(*args, **kwargs)) return out func_param.__name__ = func.__name__ return func_param return dec np.random.seed(100) pts2 = np.random.rand(100, 2).astype("float64") pts3 = np.random.rand(100, 3).astype("float64") rand_state = np.random.get_state() left_neighbors_x = [[], [0], [1], [2], [], [], [4, 5], [5]] # None # None # None left_neighbors_y = [ [], # None [], # None [], # None [], # None [0, 1], [4], [1, 2, 3], [6], ] left_neighbors_x_periodic = [[3], [0], [1], [2], [6], [6, 7], [4, 5], [5]] left_neighbors_y_periodic = [[5], [5, 7], [7], [7], [0, 1], [4], [1, 2, 3], [6]] @nottest def make_points_neighbors(periodic=False): ndim = 2 npts = 50 leafsize = 10 np.random.set_state(rand_state) pts = np.random.rand(npts, ndim).astype("float64") left_edge = np.zeros(ndim, "float64") right_edge = np.ones(ndim, "float64") if periodic: lx = left_neighbors_x_periodic ly = left_neighbors_y_periodic else: lx = left_neighbors_x ly = left_neighbors_y num_leaves = len(lx) ln = [lx, ly] rn = [[[] for i in range(num_leaves)] for _ in range(ndim)] for d in range(ndim): for i in range(num_leaves): for j in ln[d][i]: rn[d][j].append(i) for i in range(num_leaves): rn[d][i] = list(set(rn[d][i])) return pts, left_edge, right_edge, leafsize, ln, rn @nottest def make_points(npts, ndim, leafsize=10, distrib="rand", seed=100): ndim = int(ndim) npts = int(npts) leafsize = int(leafsize) np.random.seed(seed) LE = 0.0 RE = 1.0 left_edge = LE * np.ones(ndim, "float64") right_edge = RE * np.ones(ndim, "float64") if npts <= 0: npts = 100 leafsize = 10 if ndim == 2: pts = pts2 elif ndim == 3: pts = pts3 else: pts = np.random.rand(npts, ndim).astype("float64") else: if distrib == "rand": pts = np.random.rand(npts, ndim).astype("float64") elif distrib == "uniform": pts = np.random.uniform(low=LE, high=RE, size=(npts, ndim)) elif distrib in ("gaussian", "normal"): pts = np.random.normal( loc=(LE + RE) / 2.0, scale=(RE - LE) / 4.0, size=(npts, ndim) ) np.clip(pts, LE, RE) else: raise ValueError(f"Invalid 'distrib': {distrib}") return pts, left_edge, right_edge, leafsize @nottest def run_test( npts, ndim, nproc=0, distrib="rand", periodic=False, leafsize=10, profile=False, suppress_final_output=False, **kwargs, ): r"""Run a routine with a designated number of points & dimensions on a selected number of processors. Args: npart (int): Number of particles. nproc (int): Number of processors. ndim (int): Number of dimensions. distrib (str, optional): Distribution that should be used when generating points. Defaults to 'rand'. periodic (bool, optional): If True, the domain is assumed to be periodic. Defaults to False. leafsize (int, optional): Maximum number of points that should be in an leaf. Defaults to 10. profile (bool, optional): If True cProfile is used. Defaults to False. 
suppress_final_output (bool, optional): If True, the final output from spawned MPI processes is suppressed. This is mainly for timing purposes. Defaults to False. """ from yt.utilities.lib.cykdtree import make_tree unique_str = datetime.today().strftime("%Y%j%H%M%S") pts, left_edge, right_edge, leafsize = make_points( npts, ndim, leafsize=leafsize, distrib=distrib ) # Set keywords for multiprocessing version if nproc > 1: kwargs["suppress_final_output"] = suppress_final_output if profile: kwargs["profile"] = f"{unique_str}_mpi_profile.dat" # Run if profile: pr = cProfile.Profile() t0 = time.time() pr.enable() make_tree( pts, nproc=nproc, left_edge=left_edge, right_edge=right_edge, periodic=periodic, leafsize=leafsize, **kwargs, ) if profile: pr.disable() t1 = time.time() ps = pstats.Stats(pr) ps.add(kwargs["profile"]) if isinstance(profile, str): ps.dump_stats(profile) print(f"Stats saved to {profile}") else: sort_key = "tottime" ps.sort_stats(sort_key).print_stats(25) # ps.sort_stats(sort_key).print_callers(5) print(f"{t1 - t0} s according to 'time'") return ps ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/cykdtree/tests/scaling.py0000644000175100001770000001716014714401662021755 0ustar00runnerdockerr"""Routines for tracking the scaling of the triangulation routines.""" import cProfile import os import pstats import time import numpy as np from yt.utilities.lib.cykdtree.tests import run_test def stats_run( npart, nproc, ndim, periodic=False, overwrite=False, display=False, suppress_final_output=False, ): r"""Get timing stats using :package:`cProfile`. Args: npart (int): Number of particles. nproc (int): Number of processors. ndim (int): Number of dimensions. periodic (bool, optional): If True, the domain is assumed to be periodic. Defaults to False. overwrite (bool, optional): If True, the existing file for this set of input parameters if overwritten. Defaults to False. suppress_final_output (bool, optional): If True, the final output from spawned MPI processes is suppressed. This is mainly for timing purposes. Defaults to False. display (bool, optional): If True, display the profile results. Defaults to False. """ perstr = "" outstr = "" if periodic: perstr = "_periodic" if suppress_final_output: outstr = "_noout" fname_stat = f"stat_{npart}part_{nproc}proc_{ndim}dim{perstr}{outstr}.txt" if overwrite or not os.path.isfile(fname_stat): cProfile.run( "from yt.utilities.lib.cykdtree.tests import run_test; " + f"run_test({npart}, {ndim}, nproc={nproc}, " + f"periodic={periodic}, " + f"suppress_final_output={suppress_final_output})", fname_stat, ) if display: p = pstats.Stats(fname_stat) p.sort_stats("time").print_stats(10) return p return fname_stat def time_run( npart, nproc, ndim, nrep=1, periodic=False, leafsize=10, suppress_final_output=False ): r"""Get running times using :package:`time`. Args: npart (int): Number of particles. nproc (int): Number of processors. ndim (int): Number of dimensions. nrep (int, optional): Number of times the run should be performed to get an average. Defaults to 1. periodic (bool, optional): If True, the domain is assumed to be periodic. Defaults to False. leafsize (int, optional): The maximum number of points that should be in any leaf in the tree. Defaults to 10. suppress_final_output (bool, optional): If True, the final output from spawned MPI processes is suppressed. This is mainly for timing purposes. Defaults to False. 
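    Example:
        Hypothetical serial timing call; actual times are machine
        dependent::

            >>> mean_t, std_t = time_run(int(1e4), 1, 2, nrep=3)  # doctest: +SKIP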
""" times = np.empty(nrep, "float") for i in range(nrep): t1 = time.time() run_test( npart, ndim, nproc=nproc, periodic=periodic, leafsize=leafsize, suppress_final_output=suppress_final_output, ) t2 = time.time() times[i] = t2 - t1 return np.mean(times), np.std(times) def strong_scaling( npart=1e6, nrep=1, periodic=False, leafsize=10, overwrite=True, suppress_final_output=False, ): r"""Plot the scaling with number of processors for a particular function. Args: npart (int, optional): Number of particles. Defaults to 1e6. nrep (int, optional): Number of times the run should be performed to get an average. Defaults to 1. periodic (bool, optional): If True, the domain is assumed to be periodic. Defaults to False. leafsize (int, optional): The maximum number of points that should be in any leaf in the tree. Defaults to 10. overwrite (bool, optional): If True, the existing file for this set of input parameters if overwritten. Defaults to False. suppress_final_output (bool, optional): If True, the final output from spawned MPI processes is suppressed. This is mainly for timing purposes. Defaults to False. """ import matplotlib.pyplot as plt npart = int(npart) perstr = "" outstr = "" if periodic: perstr = "_periodic" if suppress_final_output: outstr = "_noout" fname_plot = ( f"plot_strong_scaling_nproc_{npart}part{perstr}_{leafsize}leafsize{outstr}.png" ) nproc_list = [1, 2, 4, 8] # , 16] ndim_list = [2, 3, 4] clr_list = ["b", "r", "g", "m"] times = np.empty((len(nproc_list), len(ndim_list), 2), "float") for j, nproc in enumerate(nproc_list): for i, ndim in enumerate(ndim_list): times[j, i, 0], times[j, i, 1] = time_run( npart, nproc, ndim, nrep=nrep, periodic=periodic, leafsize=leafsize, suppress_final_output=suppress_final_output, ) print(f"Finished {ndim}D on {nproc}.") fig, axs = plt.subplots(1, 1) for i in range(len(ndim_list)): ndim = ndim_list[i] clr = clr_list[i] axs.errorbar( nproc_list, times[:, i, 0], yerr=times[:, i, 1], fmt=clr, label=f"ndim = {ndim}", ) axs.set_xlabel("# of Processors") axs.set_ylabel("Time (s)") axs.legend() fig.savefig(fname_plot) print(" " + fname_plot) def weak_scaling( npart=1e4, nrep=1, periodic=False, leafsize=10, overwrite=True, suppress_final_output=False, ): r"""Plot the scaling with number of processors with a constant number of particles per processor for a particular function. Args: npart (int, optional): Number of particles per processor. Defaults to 1e4. nrep (int, optional): Number of times the run should be performed to get an average. Defaults to 1. periodic (bool, optional): If True, the domain is assumed to be periodic. Defaults to False. leafsize (int, optional): The maximum number of points that should be in any leaf in the tree. Defaults to 10. overwrite (bool, optional): If True, the existing file for this set of input parameters if overwritten. Defaults to False. suppress_final_output (bool, optional): If True, the final output from spawned MPI processes is suppressed. This is mainly for timing purposes. Defaults to False. 
""" import matplotlib.pyplot as plt npart = int(npart) perstr = "" outstr = "" if periodic: perstr = "_periodic" if suppress_final_output: outstr = "_noout" fname_plot = ( f"plot_weak_scaling_nproc_{npart}part{perstr}_{leafsize}leafsize{outstr}.png" ) nproc_list = [1, 2, 4, 8, 16] ndim_list = [2, 3] clr_list = ["b", "r", "g", "m"] times = np.empty((len(nproc_list), len(ndim_list), 2), "float") for j, nproc in enumerate(nproc_list): for i, ndim in enumerate(ndim_list): times[j, i, 0], times[j, i, 1] = time_run( npart * nproc, nproc, ndim, nrep=nrep, periodic=periodic, leafsize=leafsize, suppress_final_output=suppress_final_output, ) fig, axs = plt.subplots(1, 1) for i in range(len(ndim_list)): ndim = ndim_list[i] clr = clr_list[i] axs.errorbar( nproc_list, times[:, i, 0], yerr=times[:, i, 1], fmt=clr, label=f"ndim = {ndim}", ) axs.set_xlabel("# of Processors") axs.set_ylabel("Time (s)") axs.legend() fig.savefig(fname_plot) print(" " + fname_plot) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/cykdtree/tests/test_kdtree.py0000644000175100001770000001005014714401662022641 0ustar00runnerdockerimport tempfile import time import numpy as np from nose.tools import assert_raises import yt.utilities.lib.cykdtree as cykdtree from yt.utilities.lib.cykdtree.tests import ( make_points, make_points_neighbors, parametrize, ) @parametrize( npts=100, ndim=(2, 3), periodic=(False, True), use_sliding_midpoint=(False, True) ) def test_PyKDTree(npts=100, ndim=2, periodic=False, use_sliding_midpoint=False): pts, le, re, ls = make_points(npts, ndim) cykdtree.PyKDTree( pts, le, re, leafsize=ls, periodic=periodic, use_sliding_midpoint=use_sliding_midpoint, ) def test_PyKDTree_errors(): pts, le, re, ls = make_points(100, 2) assert_raises(ValueError, cykdtree.PyKDTree, pts, le, re, leafsize=1) @parametrize(npts=100, ndim=(2, 3), periodic=(False, True)) def test_search(npts=100, ndim=2, periodic=False): pts, le, re, ls = make_points(npts, ndim) tree = cykdtree.PyKDTree(pts, le, re, leafsize=ls, periodic=periodic) pos_list = [le, (le + re) / 2.0] if periodic: pos_list.append(re) for pos in pos_list: leaf = tree.get(pos) leaf.neighbors @parametrize(npts=100, ndim=(2, 3)) def test_search_errors(npts=100, ndim=2): pts, le, re, ls = make_points(npts, ndim) tree = cykdtree.PyKDTree(pts, le, re, leafsize=ls) assert_raises(ValueError, tree.get, re) @parametrize(periodic=(False, True)) def test_neighbors(periodic=False): pts, le, re, ls, left_neighbors, right_neighbors = make_points_neighbors( periodic=periodic ) tree = cykdtree.PyKDTree(pts, le, re, leafsize=ls, periodic=periodic) for leaf in tree.leaves: out_str = str(leaf.id) try: for d in range(tree.ndim): out_str += f"\nleft: {d} {leaf.left_neighbors[d]} {left_neighbors[d][leaf.id]}" assert len(left_neighbors[d][leaf.id]) == len(leaf.left_neighbors[d]) for i in range(len(leaf.left_neighbors[d])): assert left_neighbors[d][leaf.id][i] == leaf.left_neighbors[d][i] out_str += f"\nright: {d} {leaf.right_neighbors[d]} {right_neighbors[d][leaf.id]}" assert len(right_neighbors[d][leaf.id]) == len(leaf.right_neighbors[d]) for i in range(len(leaf.right_neighbors[d])): assert right_neighbors[d][leaf.id][i] == leaf.right_neighbors[d][i] except Exception as e: for leaf in tree.leaves: print(leaf.id, leaf.left_edge, leaf.right_edge) print(out_str) raise e @parametrize(npts=100, ndim=(2, 3), periodic=(False, True)) def test_get_neighbor_ids(npts=100, ndim=2, periodic=False): pts, le, re, ls = make_points(npts, 
ndim) tree = cykdtree.PyKDTree(pts, le, re, leafsize=ls, periodic=periodic) pos_list = [le, (le + re) / 2.0] if periodic: pos_list.append(re) for pos in pos_list: tree.get_neighbor_ids(pos) def time_tree_construction(Ntime, LStime, ndim=2): pts, le, re, ls = make_points(Ntime, ndim, leafsize=LStime) t0 = time.time() cykdtree.PyKDTree(pts, le, re, leafsize=LStime) t1 = time.time() print(f"{Ntime} {ndim}D points, leafsize {LStime}: took {t1 - t0} s") def time_neighbor_search(Ntime, LStime, ndim=2): pts, le, re, ls = make_points(Ntime, ndim, leafsize=LStime) tree = cykdtree.PyKDTree(pts, le, re, leafsize=LStime) t0 = time.time() tree.get_neighbor_ids(0.5 * np.ones(tree.ndim, "double")) t1 = time.time() print(f"{Ntime} {ndim}D points, leafsize {LStime}: took {t1 - t0} s") def test_save_load(): for periodic in (True, False): for ndim in range(1, 5): pts, le, re, ls = make_points(100, ndim) tree = cykdtree.PyKDTree( pts, le, re, leafsize=ls, periodic=periodic, data_version=ndim + 12 ) with tempfile.NamedTemporaryFile(delete=False) as tf: tree.save(tf.name) restore_tree = cykdtree.PyKDTree.from_file(tf.name) tree.assert_equal(restore_tree) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/cykdtree/tests/test_plot.py0000644000175100001770000000101314714401662022340 0ustar00runnerdockerimport os from yt.utilities.lib.cykdtree.kdtree import PyKDTree from yt.utilities.lib.cykdtree.plot import plot2D_serial from yt.utilities.lib.cykdtree.tests import make_points def test_plot2D_serial(): fname_test = "test_plot2D_serial.png" pts, le, re, ls = make_points(100, 2) tree = PyKDTree(pts, le, re, leafsize=ls) axs = plot2D_serial( tree, pts, title="Serial Test", plotfile=fname_test, label_boxes=True ) os.remove(fname_test) # plot2D_serial(tree, pts, axs=axs) del axs ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/cykdtree/tests/test_utils.py0000644000175100001770000001405014714401662022527 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_equal from yt.utilities.lib.cykdtree import utils # type: ignore from yt.utilities.lib.cykdtree.tests import assert_less_equal, parametrize def test_max_pts(): pts = np.arange(5 * 3, dtype="float64").reshape((5, 3)) out = utils.py_max_pts(pts) np.testing.assert_allclose(out, np.max(pts, axis=0)) def test_min_pts(): pts = np.arange(5 * 3, dtype="float64").reshape((5, 3)) out = utils.py_min_pts(pts) np.testing.assert_allclose(out, np.min(pts, axis=0)) @parametrize(N=(10), ndim=(2, 3), Lidx=(0, 5), Ridx=(5, 9)) def test_argmax_pts_dim(N=10, ndim=2, Lidx=0, Ridx=9): d = ndim - 1 pts = np.random.rand(N, ndim).astype("float64") idx = np.argsort(pts[:, d]).astype("uint64") out = utils.py_argmax_pts_dim(pts, idx, d, Lidx, Ridx) assert_equal(out, np.argmax(pts[idx[Lidx : (Ridx + 1)], d]) + Lidx) @parametrize(N=(10), ndim=(2, 3), Lidx=(0, 5), Ridx=(5, 9)) def test_argmin_pts_dim(N=10, ndim=2, Lidx=0, Ridx=9): d = ndim - 1 pts = np.random.rand(N, ndim).astype("float64") idx = np.argsort(pts[:, d]).astype("uint64") out = utils.py_argmin_pts_dim(pts, idx, d, Lidx, Ridx) assert_equal(out, np.argmin(pts[idx[Lidx : (Ridx + 1)], d]) + Lidx) @parametrize(N=(0, 10, 11), ndim=(2, 3)) def test_quickSort(N=10, ndim=2): d = ndim - 1 np.random.seed(10) pts = np.random.rand(N, ndim).astype("float64") idx = utils.py_quickSort(pts, d) assert_equal(idx.size, N) if N != 0: np.testing.assert_allclose(idx, np.argsort(pts[:, d])) 
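
# Illustrative usage sketch: applying the index array returned by
# ``utils.py_quickSort`` must reproduce a NumPy sort of the chosen column.
# The leading underscore keeps the test runner from collecting this helper.
def _demo_quickSort_matches_sort(N=10, ndim=2):
    np.random.seed(10)
    pts = np.random.rand(N, ndim).astype("float64")
    # py_quickSort returns the indices that order column ndim-1
    order = np.asarray(utils.py_quickSort(pts, ndim - 1))
    np.testing.assert_allclose(pts[order, ndim - 1], np.sort(pts[:, ndim - 1]))
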
@parametrize(N=(0, 10, 11), ndim=(2, 3)) def test_insertSort(N=10, ndim=2): d = ndim - 1 np.random.seed(10) pts = np.random.rand(N, ndim).astype("float64") idx = utils.py_insertSort(pts, d) assert_equal(idx.size, N) if N != 0: np.testing.assert_allclose(idx, np.argsort(pts[:, d])) @parametrize(N=(0, 10, 11), ndim=(2, 3)) def test_pivot(N=10, ndim=2): d = ndim - 1 np.random.seed(10) pts = np.random.rand(N, ndim).astype("float64") q, idx = utils.py_pivot(pts, d) if N == 0: np.testing.assert_equal(q, -1) else: piv = pts[idx[q], d] nmax = 7 * N / 10 + 6 assert_less_equal(np.sum(pts[:, d] < piv), nmax) assert_less_equal(np.sum(pts[:, d] > piv), nmax) @parametrize(N=(0, 10, 11), ndim=(2, 3)) def test_partition(N=10, ndim=2): d = ndim - 1 p = 0 np.random.seed(10) pts = np.random.rand(N, ndim).astype("float64") q, idx = utils.py_partition(pts, d, p) if N == 0: assert_equal(q, -1) else: piv = pts[p, d] np.testing.assert_approx_equal(pts[idx[q], d], piv) np.testing.assert_array_less(pts[idx[:q], d], piv) np.testing.assert_array_less(piv, pts[idx[(q + 1) :], d]) @parametrize(N=(0, 10, 11), ndim=(2, 3)) def test_partition_given_pivot(N=10, ndim=2): d = ndim - 1 np.random.seed(10) pts = np.random.rand(N, ndim).astype("float64") if N == 0: piv_list = [0.5] else: piv_list = [0.5, np.median(pts[:, d])] for piv in piv_list: q, idx = utils.py_partition_given_pivot(pts, d, piv) if N == 0: assert_equal(q, -1) else: assert_less_equal(pts[idx[q], d], piv) np.testing.assert_array_less(pts[idx[:q], d], piv) np.testing.assert_array_less(piv, pts[idx[(q + 1) :], d]) @parametrize(N=(0, 10, 11), ndim=(2, 3)) def test_select(N=10, ndim=2): d = ndim - 1 np.random.seed(10) pts = np.random.rand(N, ndim).astype("float64") p = int(N) // 2 + int(N) % 2 q, idx = utils.py_select(pts, d, p) assert_equal(idx.size, N) if N == 0: assert_equal(q, -1) else: assert_equal(q, p - 1) med = np.median(pts[:, d]) np.testing.assert_array_less(pts[idx[:q], d], med) np.testing.assert_array_less(med, pts[idx[(q + 1) :], d]) if N % 2: np.testing.assert_approx_equal(pts[idx[q], d], med) else: np.testing.assert_array_less(pts[idx[q], d], med) @parametrize(N=(0, 10, 11), ndim=(2, 3), use_sliding_midpoint=(False, True)) def test_split(N=10, ndim=2, use_sliding_midpoint=False): np.random.seed(10) pts = np.random.rand(N, ndim).astype("float64") p = int(N) // 2 + int(N) % 2 q, d, idx = utils.py_split(pts, use_sliding_midpoint=use_sliding_midpoint) assert_equal(idx.size, N) if N == 0: assert_equal(q, -1) else: if use_sliding_midpoint: # Midpoint med = 0.5 * (np.min(pts[:, d]) + np.max(pts[:, d])) np.testing.assert_array_less(pts[idx[:q], d], med) np.testing.assert_array_less(med, pts[idx[(q + 1) :], d]) np.testing.assert_array_less(pts[idx[q], d], med) # Sliding midpoint (slide to minimum) q, d, idx = utils.py_split( pts, mins=-1 * np.ones(ndim), maxs=np.ones(ndim), use_sliding_midpoint=True, ) med = np.min(pts[:, d]) assert_equal(q, 0) np.testing.assert_array_less(pts[idx[:q], d], med) np.testing.assert_array_less(med, pts[idx[(q + 1) :], d]) np.testing.assert_approx_equal(pts[idx[q], d], med) # Sliding midpoint (slide to maximum) q, d, idx = utils.py_split( pts, mins=np.zeros(ndim), maxs=2 * np.ones(ndim), use_sliding_midpoint=True, ) med = np.max(pts[:, d]) assert_equal(q, N - 2) np.testing.assert_array_less(pts[idx[: (q + 1)], d], med) np.testing.assert_approx_equal(pts[idx[q + 1], d], med) else: assert_equal(q, p - 1) med = np.median(pts[:, d]) np.testing.assert_array_less(pts[idx[:q], d], med) np.testing.assert_array_less(med, pts[idx[(q + 1) 
:], d]) if N % 2: np.testing.assert_approx_equal(pts[idx[q], d], med) else: np.testing.assert_array_less(pts[idx[q], d], med) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/cykdtree/utils.pxd0000644000175100001770000000376314714401662020502 0ustar00runnerdockercimport numpy as np from libc.stdint cimport int32_t, int64_t, uint32_t, uint64_t from libcpp cimport bool from libcpp.pair cimport pair from libcpp.vector cimport vector cdef extern from "c_utils.hpp": double* max_pts(double *pts, uint64_t n, uint64_t m) double* min_pts(double *pts, uint64_t n, uint64_t m) uint64_t argmax_pts_dim(double *pts, uint64_t *idx, uint32_t m, uint32_t d, uint64_t Lidx, uint64_t Ridx) uint64_t argmin_pts_dim(double *pts, uint64_t *idx, uint32_t m, uint32_t d, uint64_t Lidx, uint64_t Ridx) void quickSort(double *pts, uint64_t *idx, uint32_t ndim, uint32_t d, int64_t l, int64_t r) int64_t partition(double *pts, uint64_t *idx, uint32_t ndim, uint32_t d, int64_t l, int64_t r, int64_t p) int64_t partition_given_pivot(double *pts, uint64_t *idx, uint32_t ndim, uint32_t d, int64_t l, int64_t r, double pivot) int64_t select(double *pts, uint64_t *idx, uint32_t ndim, uint32_t d, int64_t l, int64_t r, int64_t n) int64_t pivot(double *pts, uint64_t *idx, uint32_t ndim, uint32_t d, int64_t l, int64_t r) void insertSort(double *pts, uint64_t *idx, uint32_t ndim, uint32_t d, int64_t l, int64_t r) uint32_t split(double *all_pts, uint64_t *all_idx, uint64_t Lidx, uint64_t n, uint32_t ndim, double *mins, double *maxes, int64_t &split_idx, double &split_val) uint32_t split(double *all_pts, uint64_t *all_idx, uint64_t Lidx, uint64_t n, uint32_t ndim, double *mins, double *maxes, int64_t &split_idx, double &split_val, bool use_sliding_midpoint) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/cykdtree/utils.pyx0000644000175100001770000003031114714401662020514 0ustar00runnerdocker# distutils: libraries = STD_LIBS # distutils: sources = yt/utilities/lib/cykdtree/c_utils.cpp # distutils: depends = yt/utilities/lib/cykdtree/c_utils.hpp # distutils: language = c++ # distutils: extra_compile_args = CPP11_FLAG import numpy as np cimport numpy as np from libc.stdint cimport int64_t, uint32_t, uint64_t from libcpp cimport bool as cbool def py_max_pts(np.ndarray[np.float64_t, ndim=2] pos): r"""Get the maximum of points along each coordinate. Args: pos (np.ndarray of float64): (n,m) array of n m-D coordinates. Returns: np.ndarray of float64: Maximum of pos along each coordinate. """ cdef uint64_t n = pos.shape[0] cdef uint32_t m = pos.shape[1] cdef np.float64_t* cout = max_pts(&pos[0,0], n, m) cdef uint32_t i = 0 cdef np.ndarray[np.float64_t] out = np.zeros(m, 'float64') for i in range(m): out[i] = cout[i] #free(cout) return out def py_min_pts(np.ndarray[np.float64_t, ndim=2] pos): r"""Get the minimum of points along each coordinate. Args: pos (np.ndarray of float64): (n,m) array of n m-D coordinates. Returns: np.ndarray of float64: Minimum of pos along each coordinate. 
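    Example:
        Mirrors the check in the accompanying tests::

            >>> import numpy as np
            >>> pts = np.arange(15, dtype='float64').reshape((5, 3))
            >>> py_min_pts(pts)  # doctest: +SKIP
            array([0., 1., 2.])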
""" cdef uint64_t n = pos.shape[0] cdef uint32_t m = pos.shape[1] cdef np.float64_t* cout = min_pts(&pos[0,0], n, m) cdef uint32_t i = 0 cdef np.ndarray[np.float64_t] out = np.zeros(m, 'float64') for i in range(m): out[i] = cout[i] #free(cout) return out def py_argmax_pts_dim(np.ndarray[np.float64_t, ndim=2] pos, uint64_t[:] idx, np.uint32_t d, int Lidx0, int Ridx0): r"""Get the maximum of points along one dimension for a subset of the point indices. This is essentially max(pos[idx[Lidx:(Ridx+1)], d]). Args: pos (np.ndarray of float64): (n,m) array of n m-D coordinates. idx (np.ndarray of uint64_t): (n,) array of indices for positions. d (uint32_t): Dimension to compute maximum along. Lidx (int): Index in idx that search should begin at. Ridx (int): Index in idx that search should end at. Returns: uint64_t: Index in idx that provides maximum position in the subset indices along dimension d. """ cdef np.intp_t n = pos.shape[0] cdef uint32_t m = pos.shape[1] cdef uint64_t Lidx = 0 cdef uint64_t Ridx = (n-1) if (Lidx0 < 0): Lidx = (n + Lidx0) elif Lidx0 >= n: raise Exception("Left index (%d) exceeds size of positions array (%d)." % (Lidx0, n)) else: Lidx = Lidx0 if (Ridx0 < 0): Ridx = (n + Ridx0) elif Ridx0 >= n: raise Exception("Right index (%d) exceeds size of positions array (%d)." % (Ridx0, n)) else: Ridx = Ridx0 cdef np.uint64_t cout = Lidx if (Ridx > Lidx): cout = argmax_pts_dim(&pos[0,0], &idx[0], m, d, Lidx, Ridx) return cout def py_argmin_pts_dim(np.ndarray[np.float64_t, ndim=2] pos, uint64_t[:] idx, np.uint32_t d, int Lidx0, int Ridx0): r"""Get the minimum of points along one dimension for a subset of the point indices. This is essentially min(pos[idx[Lidx:(Ridx+1)], d]). Args: pos (np.ndarray of float64): (n,m) array of n m-D coordinates. idx (np.ndarray of uint64_t): (n,) array of indices for positions. d (uint32_t): Dimension to compute minimum along. Lidx (int): Index in idx that search should begin at. Ridx (int): Index in idx that search should end at. Returns: uint64_t: Index in idx that provides minimum position in the subset indices along dimension d. """ cdef uint64_t n = pos.shape[0] cdef uint32_t m = pos.shape[1] cdef uint64_t Lidx = 0 cdef uint64_t Ridx = n if (Lidx0 < 0): Lidx = (n + Lidx0) else: Lidx = Lidx0 if (Ridx0 < 0): Ridx = (n + Ridx0) else: Ridx = Ridx0 cdef np.uint64_t cout = Lidx if (Ridx > Lidx): cout = argmin_pts_dim(&pos[0,0], &idx[0], m, d, Lidx, Ridx) return cout def py_quickSort(np.ndarray[np.float64_t, ndim=2] pos, np.uint32_t d): r"""Get the indices required to sort coordinates along one dimension. Args: pos (np.ndarray of float64): (n,m) array of n m-D coordinates. d (np.uint32_t): Dimension that pos should be sorted along. Returns: np.ndarray of uint64: Indices that sort pos along dimension d. """ cdef np.intp_t ndim = pos.shape[1] cdef int64_t l = 0 cdef int64_t r = pos.shape[0]-1 cdef uint64_t[:] idx idx = np.arange(pos.shape[0]).astype('uint64') cdef double *ptr_pos = NULL cdef uint64_t *ptr_idx = NULL if pos.shape[0] != 0: ptr_pos = &pos[0,0] ptr_idx = &idx[0] quickSort(ptr_pos, ptr_idx, ndim, d, l, r) return idx def py_insertSort(np.ndarray[np.float64_t, ndim=2] pos, np.uint32_t d): r"""Get the indices required to sort coordinates along one dimension. Args: pos (np.ndarray of float64): (n,m) array of n m-D coordinates. d (np.uint32_t): Dimension that pos should be sorted along. Returns: np.ndarray of uint64: Indices that sort pos along dimension d. 
""" cdef np.intp_t ndim = pos.shape[1] cdef int64_t l = 0 cdef int64_t r = pos.shape[0]-1 cdef uint64_t[:] idx idx = np.arange(pos.shape[0]).astype('uint64') cdef double *ptr_pos = NULL cdef uint64_t *ptr_idx = NULL if pos.shape[0] != 0: ptr_pos = &pos[0,0] ptr_idx = &idx[0] insertSort(ptr_pos, ptr_idx, ndim, d, l, r) return idx def py_pivot(np.ndarray[np.float64_t, ndim=2] pos, np.uint32_t d): r"""Get the index of the median of medians along one dimension and indices that partition pos according to the median of medians. Args: pos (np.ndarray of float64): (n,m) array of n m-D coordinates. d (np.uint32_t): Dimension that pos should be partitioned along. Returns: tuple of int64 and np.ndarray of uint64: Index q of idx that is the pivot. All elements of idx before the pivot will be less than the pivot. If there is an odd number of points, the pivot will be the median. """ cdef np.intp_t ndim = pos.shape[1] cdef int64_t l = 0 cdef int64_t r = pos.shape[0]-1 cdef uint64_t[:] idx idx = np.arange(pos.shape[0]).astype('uint64') cdef double *ptr_pos = NULL cdef uint64_t *ptr_idx = NULL if pos.shape[0] != 0: ptr_pos = &pos[0,0] ptr_idx = &idx[0] cdef int64_t q = pivot(ptr_pos, ptr_idx, ndim, d, l, r) return q, idx def py_partition(np.ndarray[np.float64_t, ndim=2] pos, np.uint32_t d, np.int64_t p): r"""Get the indices required to partition coordinates along one dimension. Args: pos (np.ndarray of float64): (n,m) array of n m-D coordinates. d (np.uint32_t): Dimension that pos should be partitioned along. p (np.int64_t): Element of pos[:,d] that should be used as the pivot to partition pos. Returns: tuple of int64 and np.ndarray of uint64: Location of the pivot in the partitioned array and the indices required to partition the array such that elements before the pivot are smaller and elements after the pivot are larger. """ cdef np.intp_t ndim = pos.shape[1] cdef int64_t l = 0 cdef int64_t r = pos.shape[0]-1 cdef uint64_t[:] idx idx = np.arange(pos.shape[0]).astype('uint64') cdef double *ptr_pos = NULL cdef uint64_t *ptr_idx = NULL if pos.shape[0] != 0: ptr_pos = &pos[0,0] ptr_idx = &idx[0] cdef int64_t q = partition(ptr_pos, ptr_idx, ndim, d, l, r, p) return q, idx def py_partition_given_pivot(np.ndarray[np.float64_t, ndim=2] pos, np.uint32_t d, np.float64_t pval): r"""Get the indices required to partition coordinates along one dimension. Args: pos (np.ndarray of float64): (n,m) array of n m-D coordinates. d (np.uint32_t): Dimension that pos should be partitioned along. pval (np.float64_t): Value that should be used to partition pos. Returns: tuple of int64 and np.ndarray of uint64: Location of the largest value that is smaller than pval in partitioned array and the indices required to partition the array such that elements before the pivot are smaller and elements after the pivot are larger. """ cdef np.intp_t ndim = pos.shape[1] cdef int64_t l = 0 cdef int64_t r = pos.shape[0]-1 cdef uint64_t[:] idx idx = np.arange(pos.shape[0]).astype('uint64') cdef double *ptr_pos = NULL cdef uint64_t *ptr_idx = NULL if pos.shape[0] != 0: ptr_pos = &pos[0,0] ptr_idx = &idx[0] cdef int64_t q = partition_given_pivot(ptr_pos, ptr_idx, ndim, d, l, r, pval) return q, idx def py_select(np.ndarray[np.float64_t, ndim=2] pos, np.uint32_t d, np.int64_t t): r"""Get the indices required to partition coordinates such that the first t elements in pos[:,d] are the smallest t elements in pos[:,d]. Args: pos (np.ndarray of float64): (n,m) array of n m-D coordinates. d (np.uint32_t): Dimension that pos should be partitioned along. 
t (np.int64_t): Number of smallest elements in pos[:,d] that should be partitioned. Returns: tuple of int64 and np.ndarray of uint64: Location of element t in the partitioned array and the indices required to partition the array such that elements before element t are smaller and elements after the pivot are larger. """ cdef np.intp_t ndim = pos.shape[1] cdef int64_t l = 0 cdef int64_t r = pos.shape[0]-1 cdef uint64_t[:] idx idx = np.arange(pos.shape[0]).astype('uint64') cdef double *ptr_pos = NULL cdef uint64_t *ptr_idx = NULL if pos.shape[0] != 0: ptr_pos = &pos[0,0] ptr_idx = &idx[0] cdef int64_t q = select(ptr_pos, ptr_idx, ndim, d, l, r, t) return q, idx def py_split(np.ndarray[np.float64_t, ndim=2] pos, np.ndarray[np.float64_t, ndim=1] mins = None, np.ndarray[np.float64_t, ndim=1] maxs = None, bool use_sliding_midpoint = False): r"""Get the indices required to split the positions equally along the largest dimension. Args: pos (np.ndarray of float64): (n,m) array of n m-D coordinates. mins (np.ndarray of float64, optional): (m,) array of mins. Defaults to None and is set to mins of pos along each dimension. maxs (np.ndarray of float64, optional): (m,) array of maxs. Defaults to None and is set to maxs of pos along each dimension. use_sliding_midpoint (bool, optional): If True, the sliding midpoint rule is used to split the positions. Defaults to False. Returns: tuple(int64, uint32, np.ndarray of uint64): The index of the split in the partitioned array, the dimension of the split, and the indices required to partition the array. """ cdef np.intp_t npts = pos.shape[0] cdef np.intp_t ndim = pos.shape[1] cdef uint64_t Lidx = 0 cdef uint64_t[:] idx idx = np.arange(pos.shape[0]).astype('uint64') cdef double *ptr_pos = NULL cdef uint64_t *ptr_idx = NULL cdef double *ptr_mins = NULL cdef double *ptr_maxs = NULL if (npts != 0) and (ndim != 0): if mins is None: mins = np.min(pos, axis=0) if maxs is None: maxs = np.max(pos, axis=0) ptr_pos = &pos[0,0] ptr_idx = &idx[0] ptr_mins = &mins[0] ptr_maxs = &maxs[0] cdef int64_t q = 0 cdef double split_val = 0.0 cdef cbool c_midpoint_flag = use_sliding_midpoint cdef uint32_t dsplit = split(ptr_pos, ptr_idx, Lidx, npts, ndim, ptr_mins, ptr_maxs, q, split_val, c_midpoint_flag) return q, dsplit, idx ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3991537 yt-4.4.0/yt/utilities/lib/cykdtree/windows/0000755000175100001770000000000014714401715020305 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/cykdtree/windows/stdint.h0000644000175100001770000001763414714401662021777 0ustar00runnerdocker// ISO C9x compliant stdint.h for Microsoft Visual Studio // Based on ISO/IEC 9899:TC2 Committee draft (May 6, 2005) WG14/N1124 // // Copyright (c) 2006-2013 Alexander Chemeris // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are met: // // 1. Redistributions of source code must retain the above copyright notice, // this list of conditions and the following disclaimer. // // 2. Redistributions in binary form must reproduce the above copyright // notice, this list of conditions and the following disclaimer in the // documentation and/or other materials provided with the distribution. // // 3. 
Neither the name of the product nor the names of its contributors may // be used to endorse or promote products derived from this software // without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED // WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF // MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO // EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, // PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; // OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR // OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF // ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // /////////////////////////////////////////////////////////////////////////////// #ifndef _MSC_VER // [ #error "Use this header only with Microsoft Visual C++ compilers!" #endif // _MSC_VER ] #ifndef _MSC_STDINT_H_ // [ #define _MSC_STDINT_H_ #if _MSC_VER > 1000 #pragma once #endif #if _MSC_VER >= 1600 // [ #include #else // ] _MSC_VER >= 1600 [ #include // For Visual Studio 6 in C++ mode and for many Visual Studio versions when // compiling for ARM we should wrap include with 'extern "C++" {}' // or compiler give many errors like this: // error C2733: second C linkage of overloaded function 'wmemchr' not allowed #ifdef __cplusplus extern "C" { #endif # include #ifdef __cplusplus } #endif // Define _W64 macros to mark types changing their size, like intptr_t. #ifndef _W64 # if !defined(__midl) && (defined(_X86_) || defined(_M_IX86)) && _MSC_VER >= 1300 # define _W64 __w64 # else # define _W64 # endif #endif // 7.18.1 Integer types // 7.18.1.1 Exact-width integer types // Visual Studio 6 and Embedded Visual C++ 4 doesn't // realize that, e.g. char has the same size as __int8 // so we give up on __intX for them. 
#if (_MSC_VER < 1300) typedef signed char int8_t; typedef signed short int16_t; typedef signed int int32_t; typedef unsigned char uint8_t; typedef unsigned short uint16_t; typedef unsigned int uint32_t; #else typedef signed __int8 int8_t; typedef signed __int16 int16_t; typedef signed __int32 int32_t; typedef unsigned __int8 uint8_t; typedef unsigned __int16 uint16_t; typedef unsigned __int32 uint32_t; #endif typedef signed __int64 int64_t; typedef unsigned __int64 uint64_t; // 7.18.1.2 Minimum-width integer types typedef int8_t int_least8_t; typedef int16_t int_least16_t; typedef int32_t int_least32_t; typedef int64_t int_least64_t; typedef uint8_t uint_least8_t; typedef uint16_t uint_least16_t; typedef uint32_t uint_least32_t; typedef uint64_t uint_least64_t; // 7.18.1.3 Fastest minimum-width integer types typedef int8_t int_fast8_t; typedef int16_t int_fast16_t; typedef int32_t int_fast32_t; typedef int64_t int_fast64_t; typedef uint8_t uint_fast8_t; typedef uint16_t uint_fast16_t; typedef uint32_t uint_fast32_t; typedef uint64_t uint_fast64_t; // 7.18.1.4 Integer types capable of holding object pointers #ifdef _WIN64 // [ typedef signed __int64 intptr_t; typedef unsigned __int64 uintptr_t; #else // _WIN64 ][ typedef _W64 signed int intptr_t; typedef _W64 unsigned int uintptr_t; #endif // _WIN64 ] // 7.18.1.5 Greatest-width integer types typedef int64_t intmax_t; typedef uint64_t uintmax_t; // 7.18.2 Limits of specified-width integer types #if !defined(__cplusplus) || defined(__STDC_LIMIT_MACROS) // [ See footnote 220 at page 257 and footnote 221 at page 259 // 7.18.2.1 Limits of exact-width integer types #define INT8_MIN ((int8_t)_I8_MIN) #define INT8_MAX _I8_MAX #define INT16_MIN ((int16_t)_I16_MIN) #define INT16_MAX _I16_MAX #define INT32_MIN ((int32_t)_I32_MIN) #define INT32_MAX _I32_MAX #define INT64_MIN ((int64_t)_I64_MIN) #define INT64_MAX _I64_MAX #define UINT8_MAX _UI8_MAX #define UINT16_MAX _UI16_MAX #define UINT32_MAX _UI32_MAX #define UINT64_MAX _UI64_MAX // 7.18.2.2 Limits of minimum-width integer types #define INT_LEAST8_MIN INT8_MIN #define INT_LEAST8_MAX INT8_MAX #define INT_LEAST16_MIN INT16_MIN #define INT_LEAST16_MAX INT16_MAX #define INT_LEAST32_MIN INT32_MIN #define INT_LEAST32_MAX INT32_MAX #define INT_LEAST64_MIN INT64_MIN #define INT_LEAST64_MAX INT64_MAX #define UINT_LEAST8_MAX UINT8_MAX #define UINT_LEAST16_MAX UINT16_MAX #define UINT_LEAST32_MAX UINT32_MAX #define UINT_LEAST64_MAX UINT64_MAX // 7.18.2.3 Limits of fastest minimum-width integer types #define INT_FAST8_MIN INT8_MIN #define INT_FAST8_MAX INT8_MAX #define INT_FAST16_MIN INT16_MIN #define INT_FAST16_MAX INT16_MAX #define INT_FAST32_MIN INT32_MIN #define INT_FAST32_MAX INT32_MAX #define INT_FAST64_MIN INT64_MIN #define INT_FAST64_MAX INT64_MAX #define UINT_FAST8_MAX UINT8_MAX #define UINT_FAST16_MAX UINT16_MAX #define UINT_FAST32_MAX UINT32_MAX #define UINT_FAST64_MAX UINT64_MAX // 7.18.2.4 Limits of integer types capable of holding object pointers #ifdef _WIN64 // [ # define INTPTR_MIN INT64_MIN # define INTPTR_MAX INT64_MAX # define UINTPTR_MAX UINT64_MAX #else // _WIN64 ][ # define INTPTR_MIN INT32_MIN # define INTPTR_MAX INT32_MAX # define UINTPTR_MAX UINT32_MAX #endif // _WIN64 ] // 7.18.2.5 Limits of greatest-width integer types #define INTMAX_MIN INT64_MIN #define INTMAX_MAX INT64_MAX #define UINTMAX_MAX UINT64_MAX // 7.18.3 Limits of other integer types #ifdef _WIN64 // [ # define PTRDIFF_MIN _I64_MIN # define PTRDIFF_MAX _I64_MAX #else // _WIN64 ][ # define PTRDIFF_MIN _I32_MIN # define 
PTRDIFF_MAX  _I32_MAX
#endif  // _WIN64 ]

#define SIG_ATOMIC_MIN  INT_MIN
#define SIG_ATOMIC_MAX  INT_MAX

#ifndef SIZE_MAX // [
#  ifdef _WIN64 // [
#     define SIZE_MAX  _UI64_MAX
#  else // _WIN64 ][
#     define SIZE_MAX  _UI32_MAX
#  endif // _WIN64 ]
#endif // SIZE_MAX ]

// WCHAR_MIN and WCHAR_MAX are also defined in <wchar.h>
#ifndef WCHAR_MIN // [
#  define WCHAR_MIN  0
#endif  // WCHAR_MIN ]
#ifndef WCHAR_MAX // [
#  define WCHAR_MAX  _UI16_MAX
#endif  // WCHAR_MAX ]

#define WINT_MIN  0
#define WINT_MAX  _UI16_MAX

#endif // __STDC_LIMIT_MACROS ]

// 7.18.4 Limits of other integer types

#if !defined(__cplusplus) || defined(__STDC_CONSTANT_MACROS) // [   See footnote 224 at page 260

// 7.18.4.1 Macros for minimum-width integer constants

#define INT8_C(val)  val##i8
#define INT16_C(val) val##i16
#define INT32_C(val) val##i32
#define INT64_C(val) val##i64

#define UINT8_C(val)  val##ui8
#define UINT16_C(val) val##ui16
#define UINT32_C(val) val##ui32
#define UINT64_C(val) val##ui64

// 7.18.4.2 Macros for greatest-width integer constants
// These #ifndef's are needed to prevent collisions with <stdint.h>.
// Check out Issue 9 for the details.
#ifndef INTMAX_C //   [
#  define INTMAX_C   INT64_C
#endif // INTMAX_C ]
#ifndef UINTMAX_C //  [
#  define UINTMAX_C  UINT64_C
#endif // UINTMAX_C ]

#endif // __STDC_CONSTANT_MACROS ]

#endif // _MSC_VER >= 1600 ]

#endif // _MSC_STDINT_H_ ]

yt-4.4.0/yt/utilities/lib/cyoctree.pyx

# distutils: libraries = STD_LIBS
"""
CyOctree building, loading and refining routines
"""

cimport cython
cimport libc.math as math
cimport numpy as np

import numpy as np

from libc.stdlib cimport free, malloc

from yt.geometry.particle_deposit cimport get_kernel_func, kernel_func

np.import_array()

cdef struct Octree:
    # Array of 3*num_nodes [x1, y1, z1, x2, y2, z2, ...]
    np.float64_t * node_positions
    # 1 or 0 of whether the oct has refined to make children
    np.uint8_t * refined
    # Each oct stores the depth in the tree: the root node is 0
    np.uint8_t * depth
    # pstart and pend tell us which particles in the pidx are stored in each oct
    np.int64_t * pstart
    np.int64_t * pend
    # This tells us the index of each child
    np.int64_t * children
    # Here we store the coordinates and IDs of all the particles in the tree
    np.float64_t * pposx
    np.float64_t * pposy
    np.float64_t * pposz
    np.int64_t * pidx
    # The max number of particles per leaf; if above, we refine
    np.int64_t n_ref
    # The number of particles in our tree, e.g. the length of ppos and pidx
    np.int64_t num_particles
    # The total size of the octree (x, y, z)
    np.float64_t * size
    # The current number of nodes in the octree
    np.int64_t num_nodes
    # The maximum depth before we stop refining, and the maximum number of
    # nodes we can fit in our array
    np.uint8_t max_depth
    np.int64_t max_num_nodes

@cython.boundscheck(False)
@cython.wraparound(False)
cdef int octree_build_node(Octree * tree, long int node_idx):
    """
    This is the main function in the building of the octree.

    This function takes in the tree and the index of an oct to process. If
    the oct has too many particles (> n_ref) and its depth is less than the
    maximum tree depth, then we refine. Refining creates 8 sub-octs within
    our oct, and we identify the particles that fall in each of them. We
    then recursively call this function on each of the sub-octs; a
    pure-Python sketch of this loop follows.
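    As an illustration only (not the compiled code path), the recursion can
    be sketched in pure Python; ``split_into_octants`` and ``add_child`` are
    hypothetical stand-ins for ``split_helper`` and the child bookkeeping
    done below:

        def build_node(tree, node_idx):
            # leaf: few enough particles, or maximum depth reached
            if (tree.pend[node_idx] - tree.pstart[node_idx] <= tree.n_ref
                    or tree.depth[node_idx] >= tree.max_depth):
                return
            tree.refined[node_idx] = 1
            splits = split_into_octants(tree, node_idx)  # 9 split indices
            for n in range(8):
                child = tree.add_child(node_idx, n, splits[n], splits[n + 1])
                build_node(tree, child)  # recurse into each sub-oct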
    Parameters
    ----------
    tree : Octree *
        A pointer to the octree
    node_idx : long int
        The index of the current node we are processing

    Returns
    -------
    int
        Success of tree build
    """
    cdef np.int64_t splits[9]
    cdef np.int64_t i, j, k, n, start, end
    cdef np.float64_t lx, ly, lz, sz, inv_size

    # If we are running out of space in our tree, then we *try* to
    # reallocate a tree of double the size
    if (tree.num_nodes + 8) >= tree.max_num_nodes:
        if octree_reallocate(tree, tree.max_num_nodes * 2):
            return 1

    if (
        (tree.pend[node_idx] - tree.pstart[node_idx] > tree.n_ref) and
        (tree.depth[node_idx] < tree.max_depth)
    ):
        tree.refined[node_idx] = 1

        # As we have decided to refine, we need to know which of the particles
        # in this oct will go into each of the 8 child octs
        split_helper(tree, node_idx, splits)

        # Figure out the size of the current oct
        inv_size = 1. / 2.**tree.depth[node_idx]
        sx = tree.size[0] * inv_size
        sy = tree.size[1] * inv_size
        sz = tree.size[2] * inv_size

        lx = tree.node_positions[3*node_idx] - sx/2.
        ly = tree.node_positions[(3*node_idx)+1] - sy/2.
        lz = tree.node_positions[(3*node_idx)+2] - sz/2.

        # Loop through and generate the children AND recursively refine...
        n = 0
        for i in range(2):
            for j in range(2):
                for k in range(2):
                    start = splits[n]
                    end = splits[n + 1]

                    child = tree.num_nodes
                    # Store the child location
                    tree.children[8*node_idx + n] = child

                    tree.node_positions[child*3] = lx + sx*i
                    tree.node_positions[(child*3)+1] = ly + sy*j
                    tree.node_positions[(child*3)+2] = lz + sz*k

                    tree.refined[child] = 0
                    tree.depth[child] = tree.depth[node_idx] + 1
                    tree.pstart[child] = start
                    tree.pend[child] = end

                    tree.num_nodes += 1

                    # Recursively refine child
                    if octree_build_node(tree, child):
                        return 1

                    n += 1

    return 0

@cython.boundscheck(False)
@cython.wraparound(False)
cdef int octree_allocate(Octree * octree, long int num_nodes):
    """
    This is the main allocation function for the octree. We allocate all of
    the arrays required to store information about every single oct in the
    tree.

    Parameters
    ----------
    octree : Octree *
        A pointer to the octree
    num_nodes : long int
        The maximum number of nodes to allocate for

    Returns
    -------
    int
        Success of allocation
    """
    octree.node_positions = <np.float64_t *> malloc(
        num_nodes * 3 * sizeof(np.float64_t))
    if octree.node_positions == NULL: return 1

    octree.size = <np.float64_t *> malloc(3 * sizeof(np.float64_t))
    if octree.size == NULL: return 1

    octree.children = <np.int64_t *> malloc(8 * num_nodes * sizeof(np.int64_t))
    if octree.children == NULL: return 1

    octree.pstart = <np.int64_t *> malloc(num_nodes * sizeof(np.int64_t))
    if octree.pstart == NULL: return 1

    octree.pend = <np.int64_t *> malloc(num_nodes * sizeof(np.int64_t))
    if octree.pend == NULL: return 1

    octree.refined = <np.uint8_t *> malloc(num_nodes * sizeof(np.int8_t))
    if octree.refined == NULL: return 1

    octree.depth = <np.uint8_t *> malloc(num_nodes * sizeof(np.int8_t))
    if octree.depth == NULL: return 1

    return 0

@cython.boundscheck(False)
@cython.wraparound(False)
cdef int octree_reallocate(Octree * octree, long int num_nodes):
    """
    This function re-allocates all of the arrays malloc'd in
    `octree_allocate`. See Notes for when we want to re-allocate.

    Parameters
    ----------
    octree : Octree *
        A pointer to the octree
    num_nodes : long int
        The maximum number of nodes to (re)allocate for

    Returns
    -------
    int
        Success of the reallocation

    Notes
    -----
    Why do we want to re-allocate? There are two cases:
    1) The octree is still building and we have run out of space, so we have
       asked for an increased number of nodes and are reallocating each array
    2) We have finished building the octree and we have used fewer nodes than
       we originally allocated.
We are now shrinking the octree and giving the spare memory back. """ cdef np.float64_t * old_arr cdef np.int64_t * old_arr_int cdef np.uint8_t * old_arr_uint cdef np.int64_t i old_arr = octree.node_positions octree.node_positions = malloc(num_nodes * 3 * sizeof(np.float64_t)) if octree.node_positions == NULL: return 1 for i in range(3*octree.num_nodes): octree.node_positions[i] = old_arr[i] free(old_arr) old_arr_int = octree.children octree.children = malloc(num_nodes * 8 * sizeof(np.int64_t)) if octree.children == NULL: return 1 for i in range(8*octree.num_nodes): octree.children[i] = old_arr_int[i] free(old_arr_int) old_arr_int = octree.pstart octree.pstart = malloc(num_nodes * sizeof(np.int64_t)) if octree.pstart == NULL: return 1 for i in range(octree.num_nodes): octree.pstart[i] = old_arr_int[i] free(old_arr_int) old_arr_int = octree.pend octree.pend = malloc(num_nodes * sizeof(np.int64_t)) if octree.pend == NULL: return 1 for i in range(octree.num_nodes): octree.pend[i] = old_arr_int[i] free(old_arr_int) old_arr_uint = octree.refined octree.refined = malloc(num_nodes * sizeof(np.int8_t)) if octree.refined == NULL: return 1 for i in range(octree.num_nodes): octree.refined[i] = old_arr_uint[i] free(old_arr_uint) old_arr_uint = octree.depth octree.depth = malloc(num_nodes * sizeof(np.int8_t)) if octree.depth == NULL: return 1 for i in range(octree.num_nodes): octree.depth[i] = old_arr_uint[i] free(old_arr_uint) octree.max_num_nodes = num_nodes return 0 @cython.boundscheck(False) @cython.wraparound(False) cdef void octree_deallocate(Octree * octree): """ Just free-ing every array we allocated to ensure we don't leak. Parameter --------- octree : Octree * Pointer to the octree """ free(octree.node_positions) free(octree.size) free(octree.children) free(octree.pstart) free(octree.pend) free(octree.refined) free(octree.depth) free(octree.pidx) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef class CyOctree: """ This a class to store the underlying octree and particle data that can be interacted with from both Cython and Python """ cdef Octree * c_octree cdef np.float64_t[::1, :] input_positions cdef np.int64_t n_ref cdef np.float64_t[:] left_edge cdef np.float64_t[:] right_edge cdef np.uint8_t max_depth cdef kernel_func kernel def __init__( self, np.float64_t[:, :] input_pos, left_edge=None, right_edge=None, np.int64_t n_ref=32, np.uint8_t max_depth=200 ): """ Octree initialiser. We copy the inputted particle positions and make the root node. We then refine the octree until every leaf has either less particles than n_ref or is at the maximum depth. Finally, we re-allocate all of the memory required by tree to ensure we do not use more memory than required. Parameters ---------- input_pos : 2D memory view Particles positions in the format (num_particles, 3) {left,right}_edge : ndarray xyz coordinates of the lower left (upper right) corner of the octree. n_ref : int, default: 32 The maximum number of particles per leaf, if more, the oct will refine max_depth : int, default: 200 The maximum depth the octree will refine to. 
        If we set this too high then we may hit a stack overflow due to the
        recursive nature of the build.
        """
        self.n_ref = n_ref
        self.max_depth = max_depth

        self.input_positions = np.asfortranarray(input_pos, dtype=np.float64)

        if self._allocate_octree():
            raise MemoryError("Unable to allocate memory required for octree build.")
        self._make_root(left_edge, right_edge)

        if octree_build_node(self.c_octree, 0):
            raise MemoryError("Unable to allocate memory required for octree build.")
        if octree_reallocate(self.c_octree, self.c_octree.num_nodes):
            raise MemoryError("Unable to allocate memory required for octree build.")

    def __del__(self):
        """
        Make sure we clean up properly!
        """
        octree_deallocate(self.c_octree)
        free(self.c_octree)

    @property
    def bound_particles(self):
        """
        The particle selection may include SPH particles whose smoothing
        volumes overlap the octree domain. However, if a particle's center
        is NOT in the octree, it is not included. So the number of particles
        passed to the tree *may* not be equal to the number of particles
        which are bound by the tree.
        """
        return self.c_octree.pend[0]

    @property
    def num_nodes(self):
        """
        The total number of nodes after tree construction
        """
        return self.c_octree.num_nodes

    @property
    def node_positions(self):
        """
        The centre of every node within the octree
        """
        cdef np.npy_intp shape[2]
        shape[0] = self.c_octree.num_nodes
        shape[1] = 3
        arr = np.PyArray_SimpleNewFromData(
            2, &shape[0], np.NPY_FLOAT64, <void *> self.c_octree.node_positions)
        return np.copy(arr)

    @property
    def node_refined(self):
        """
        An array of length num_nodes which contains either True / False for
        whether each cell has refined or not, e.g. False for a leaf, True
        for a node
        """
        cdef np.npy_intp shape
        shape = self.c_octree.num_nodes
        arr = np.PyArray_SimpleNewFromData(
            1, &shape, np.NPY_UINT8, <void *> self.c_octree.refined)
        return np.copy(arr).astype(np.bool_)

    @property
    def node_depth(self):
        """
        The depth for each node in the tree. The root node is defined as a
        depth of 0.
        """
        cdef np.npy_intp shape = self.c_octree.num_nodes
        arr = np.PyArray_SimpleNewFromData(
            1, &shape, np.NPY_UINT8, <void *> self.c_octree.depth)
        return np.copy(arr)

    @property
    def node_sizes(self):
        """
        The size of each node in the x, y and z directions. We calculate
        this on the fly, as we know the size of the whole tree and the depth
        of each node.
        """
        cdef np.int64_t i
        sizes = np.zeros((self.c_octree.num_nodes, 3), dtype=np.float64)
        sizes[:, 0] = self.c_octree.size[0]
        sizes[:, 1] = self.c_octree.size[1]
        sizes[:, 2] = self.c_octree.size[2]
        for i in range(self.c_octree.num_nodes):
            sizes[i, :] /= (2.0**self.c_octree.depth[i] / 2.0)
        return sizes

    def _make_root(self, left_edge, right_edge):
        """
        The root node is the hardest node to build, as we need to find out
        which particles it contains; sieving them into children afterwards
        is easy.

        In the case that the left_edge/right_edge is not defined then we
        select a tree that is sufficiently big to contain every particle.
        However, if they are defined then we need to loop through and find
        out which particles are actually in the tree.

        Parameters
        ----------
        {left,right}_edge : ndarray
            xyz coordinates of the lower left (upper right) corner of the
            octree. If None, the tree will be made large enough to encompass
            all particles.
        """
        cdef int i = 0

        # How many particles are there?
        self.c_octree.num_particles = self.input_positions.shape[0]

        # We now number all of the particles in the tree.
This allows us # to shuffle the pids and say for example Oct11 contains particles # 7 to 11 # This pidx[7:12] would give the input indices of the particles we # store. This proxy allows us to re-arrange the particles without # re-arranging the users input data. self.c_octree.pidx = malloc( self.c_octree.num_particles * sizeof(np.int64_t) ) for i in range(0, self.c_octree.num_particles): self.c_octree.pidx[i] = i if left_edge is None: # If the edges are None, then we can just find the loop through # and find them out left_edge = np.zeros(3, dtype=np.float64) right_edge = np.zeros(3, dtype=np.float64) left_edge[0] = self.c_octree.pposx[0] left_edge[1] = self.c_octree.pposy[0] left_edge[2] = self.c_octree.pposz[0] right_edge[0] = self.c_octree.pposx[0] right_edge[1] = self.c_octree.pposy[0] right_edge[2] = self.c_octree.pposz[0] for i in range(self.c_octree.num_particles): left_edge[0] = min(self.c_octree.pposx[i], left_edge[0]) left_edge[1] = min(self.c_octree.pposy[i], left_edge[1]) left_edge[2] = min(self.c_octree.pposz[i], left_edge[2]) right_edge[0] = max(self.c_octree.pposx[i], right_edge[0]) right_edge[1] = max(self.c_octree.pposy[i], right_edge[1]) right_edge[2] = max(self.c_octree.pposz[i], right_edge[2]) self.c_octree.pstart[0] = 0 self.c_octree.pend[0] = self.input_positions.shape[0] else: # Damn! The user did supply a left and right so we need to find # which particles are in the range left_edge = left_edge.astype(np.float64) right_edge = right_edge.astype(np.float64) # We loop through the particles and arrange them such that particles # in the tree are to the left of the split and the particles not # are to the right # e.g. # pidx = [1, 2, 3, 5 | 0, 4] # where split = 4 and particles 0 and 4 are not in the tree split = select( self.c_octree, left_edge, right_edge, 0, self.input_positions.shape[0]) self.c_octree.pstart[0] = 0 self.c_octree.pend[0] = split # Set the total size of the tree size = (right_edge - left_edge) / 2.0 center = (right_edge + left_edge) / 2.0 self.left_edge = left_edge self.right_edge = right_edge # Now we add the data about the root node! self.c_octree.node_positions[0] = center[0] self.c_octree.node_positions[1] = center[1] self.c_octree.node_positions[2] = center[2] self.c_octree.size[0] = size[0] self.c_octree.size[1] = size[1] self.c_octree.size[2] = size[2] # We are not refined yet self.c_octree.refined[0] = 0 self.c_octree.depth[0] = 0 def _allocate_octree(self): """ This actually allocates the C struct Octree """ self.c_octree = malloc(sizeof(Octree)) self.c_octree.n_ref = self.n_ref self.c_octree.num_nodes = 1 # This is sort of an arbitrary guess but it doesn't matter because # we will increase this value and attempt to reallocate if it is too # small self.c_octree.max_num_nodes = max(self.input_positions.shape[0] / self.n_ref, 1) self.c_octree.max_depth = self.max_depth # Fast C pointers to the particle coordinates self.c_octree.pposx = &self.input_positions[0, 0] self.c_octree.pposy = &self.input_positions[0, 1] self.c_octree.pposz = &self.input_positions[0, 2] if octree_allocate(self.c_octree, self.c_octree.max_num_nodes): return 1 cdef void smooth_onto_cells( self, np.float64_t[:] buff, np.float64_t[:] buff_den, np.float64_t posx, np.float64_t posy, np.float64_t posz, np.float64_t hsml, np.float64_t prefactor, np.float64_t prefactor_norm, long int num_node, int use_normalization=0 ): """ We smooth a field onto cells within an octree using SPH deposition. To achieve this we loop through every oct in the tree and check if it is a leaf. 
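    Children that cannot possibly overlap the particle's smoothing sphere
    are pruned during this walk. A hedged NumPy rendering of that test (the
    names here are illustrative, not part of the yt API):

        import numpy as np

        def may_intersect(node_center, node_half_diagonal, ppos, hsml):
            # conservative sphere-vs-sphere overlap test, mirroring the
            # ``diff - child_node_size - hsml < 0`` criterion used below
            return np.linalg.norm(node_center - ppos) < node_half_diagonal + hsml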
        A leaf just means that an oct has not refined, and thus has no
        children.

        Parameters
        ----------
        buff : memoryview
            The array which we are depositing the field onto; it has the
            length of the number of leaves.
        buff_den : memoryview
            The array we deposit just mass onto to allow normalization
        pos<> : float64_t
            The x, y, and z coordinates of the particle we are depositing
        hsml : float64_t
            The smoothing length of the particle
        prefactor(_norm) : float64_t
            This is a pre-computed value, based on the particle's
            properties, used in the deposition
        num_node : long int
            The current node we are checking to see if refined or not
        use_normalization : int, default: 0
            Do we want a normalized sph field? If so, fill the buff_den.
        """
        cdef Octree * tree = self.c_octree
        cdef double q_ij, diff_x, diff_y, diff_z, diff, sx, sy, sz
        cdef int i
        cdef long int child_node

        if tree.refined[num_node] == 0:
            diff_x = tree.node_positions[3*num_node] - posx
            diff_y = tree.node_positions[3*num_node+1] - posy
            diff_z = tree.node_positions[3*num_node+2] - posz
            q_ij = math.sqrt(diff_x*diff_x + diff_y*diff_y + diff_z*diff_z)
            q_ij /= hsml

            buff[num_node] += (prefactor * self.kernel(q_ij))
            if use_normalization:
                buff_den[num_node] += (prefactor_norm * self.kernel(q_ij))
        else:
            # All direct children of the current node are the same size, thus
            # we can compute their size once, outside of the loop
            sz_factor = 1.0 / 2.0**(tree.depth[num_node] + 1)
            sqrt_sz_factor = math.sqrt(sz_factor)
            sx = tree.size[0]
            sy = tree.size[1]
            sz = tree.size[2]
            child_node_size = sqrt_sz_factor * math.sqrt(sx*sx + sy*sy + sz*sz)

            for i in range(8):
                child_node = tree.children[8*num_node + i]
                diff_x = tree.node_positions[3*child_node] - posx
                diff_y = tree.node_positions[3*child_node+1] - posy
                diff_z = tree.node_positions[3*child_node+2] - posz
                diff = math.sqrt(diff_x*diff_x + diff_y*diff_y + diff_z*diff_z)

                # Could the current particle possibly intersect this child node?
                if diff - child_node_size - hsml < 0:
                    self.smooth_onto_cells(buff, buff_den,
                                           posx, posy, posz,
                                           hsml, prefactor, prefactor_norm,
                                           child_node,
                                           use_normalization=use_normalization)

    def interpolate_sph_cells(self,
                              np.float64_t[:] buff,
                              np.float64_t[:] buff_den,
                              np.float64_t[:] posx,
                              np.float64_t[:] posy,
                              np.float64_t[:] posz,
                              np.float64_t[:] pmass,
                              np.float64_t[:] pdens,
                              np.float64_t[:] hsml,
                              np.float64_t[:] field,
                              kernel_name="cubic",
                              int use_normalization=0):
        """
        We loop through every particle in the simulation and deposit its
        properties onto all of the leaves that it intersects.

        Parameters
        ----------
        buff : memoryview
            The array which we are depositing the field onto; it has the
            length of the number of leaves.
        buff_den : memoryview
            The array we deposit just mass onto to allow normalization
        pos<> : memoryview
            The x, y, and z coordinates of all the particles
        pmass : memoryview
            The mass of the particles
        pdens : memoryview
            The density of the particles
        hsml : memoryview
            The smoothing lengths of the particles
        field : memoryview
            The field we are depositing for each particle
        kernel_name : str, default: "cubic"
            Choice of kernel for SPH deposition
        use_normalization : int, default: 0
            Do we want a normalized sph field? If so, fill the buff_den.
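        Examples
        --------
        A hedged sketch of typical usage; the particle arrays (``pos``,
        ``mass``, ``dens``, ``hsml``) are illustrative names, not part of
        the yt API:

        >>> tree = CyOctree(pos.astype(np.float64), n_ref=32)
        >>> buff = np.zeros(tree.num_nodes, dtype=np.float64)
        >>> buff_den = np.zeros(tree.num_nodes, dtype=np.float64)
        >>> tree.interpolate_sph_cells(
        ...     buff, buff_den, pos[:, 0], pos[:, 1], pos[:, 2],
        ...     mass, dens, hsml, dens, kernel_name="cubic")

        Internally each particle contributes ``prefactor * W(q)`` to a leaf,
        with ``prefactor = m / (rho * h**3) * field`` and ``q = r / h``.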
""" self.kernel = get_kernel_func(kernel_name) cdef int i cdef double prefactor, prefactor_norm for i in range(posx.shape[0]): prefactor = pmass[i] / pdens[i] / hsml[i]**3 prefactor_norm = prefactor prefactor *= field[i] self.smooth_onto_cells(buff, buff_den, posx[i], posy[i], posz[i], hsml[i], prefactor, prefactor_norm, 0, use_normalization=use_normalization) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef np.int64_t separate( np.float64_t * pos, np.int64_t * pidx, double value, np.int64_t start, np.int64_t end ) noexcept nogil: """ This is a simple utility function which takes a selection of particles and re-arranges them such that values below `value` are to the left of split and values above are to the right. Parameters ---------- pos : float64_t * Pointer to the coordinates we are splitting along pidx : int64_t & Pointer to the corresponding particle IDs value : double The value to split the data along start : int64_t Index of first particle in the current node end : int64_t Index of the last particle in the current node """ cdef np.int64_t index cdef np.int64_t split = start for index in range(start, end): if pos[pidx[index]] < value: pidx[split], pidx[index] = pidx[index], pidx[split] split += 1 return split @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void split_helper(Octree * tree, np.int64_t node_idx, np.int64_t * splits): """ A utility function to split a collection of particles along the x, y and z direction such that we identify the particles within the 8 children of an oct. We first split the particles in the x direction, then we split the particles in the y and the z directions. This is currently hardcoded for 8 octs but can be readily extended to allow over-refinement. The splits[0] and splits[1] tell oct one the start and last particle The splits[1] and splits[2] tell oct two the start and last particle and so on Parameters ---------- tree : Octree * A pointer to the octree node_idx : int64_t The index of the oct we are splitting splits : int64_t * Pointer to split array which stores the start and end indices. 
It needs to be N+1 long where N is the number of children """ splits[0] = tree.pstart[node_idx] splits[8] = tree.pend[node_idx] splits[4] = separate( tree.pposx, tree.pidx, tree.node_positions[3*node_idx], splits[0], splits[8]) splits[2] = separate( tree.pposy, tree.pidx, tree.node_positions[(3*node_idx)+1], splits[0], splits[4]) splits[6] = separate( tree.pposy, tree.pidx, tree.node_positions[(3*node_idx)+1], splits[4], splits[8]) splits[1] = separate( tree.pposz, tree.pidx, tree.node_positions[(3*node_idx)+2], splits[0], splits[2]) splits[3] = separate( tree.pposz, tree.pidx, tree.node_positions[(3*node_idx)+2], splits[2], splits[4]) splits[5] = separate( tree.pposz, tree.pidx, tree.node_positions[(3*node_idx)+2], splits[4], splits[6]) splits[7] = separate( tree.pposz, tree.pidx, tree.node_positions[(3*node_idx)+2], splits[6], splits[8]) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef np.int64_t select( Octree * octree, np.float64_t[::1] left_edge, np.float64_t[::1] right_edge, np.int64_t start, np.int64_t end ) noexcept nogil: """ Re-arrange the input particles such that those outside the bounds of the tree occur after the split index and can thus be ignored for the remainder of the tree construction Parameters ---------- tree : Octree * A pointer to the octree left_edge : ndarray The coords of the lower left corner of the box right_edge : ndarray The coords of the upper right corner of the box start : int64_t The first particle in the bounds (0) end : int64_t The last particle in the bounds """ cdef np.int64_t index cdef np.int64_t split = start cdef np.float64_t * posx = octree.pposx cdef np.float64_t * posy = octree.pposy cdef np.float64_t * posz = octree.pposz cdef np.int64_t * pidx = octree.pidx for index in range(start, end): if posx[pidx[index]] < right_edge[0] and posx[pidx[index]] > left_edge[0]: if posy[pidx[index]] < right_edge[1] and posy[pidx[index]] > left_edge[1]: if posz[pidx[index]] < right_edge[2] and posz[pidx[index]] > left_edge[2]: if split < index: pidx[split], pidx[index] = pidx[index], pidx[split] split += 1 return split ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/depth_first_octree.pyx0000644000175100001770000001431214714401662021421 0ustar00runnerdocker # distutils: libraries = STD_LIBS """ This is a recursive function to return a depth-first octree """ import numpy as np cimport cython cimport numpy as np cdef class position: cdef public int output_pos, refined_pos def __cinit__(self): self.output_pos = 0 self.refined_pos = 0 cdef class OctreeGrid: cdef public object child_indices, fields, left_edges, dimensions, dx cdef public int level, offset def __cinit__(self, np.ndarray[np.int32_t, ndim=3] child_indices, np.ndarray[np.float64_t, ndim=4] fields, np.ndarray[np.float64_t, ndim=1] left_edges, np.ndarray[np.int32_t, ndim=1] dimensions, np.ndarray[np.float64_t, ndim=1] dx, int level, int offset): self.child_indices = child_indices self.fields = fields self.left_edges = left_edges self.dimensions = dimensions self.dx = dx self.level = level self.offset = offset cdef class OctreeGridList: cdef public object grids def __cinit__(self, grids): self.grids = grids def __getitem__(self, int item): return self.grids[item] @cython.boundscheck(False) def RecurseOctreeDepthFirst(int i_i, int j_i, int k_i, int i_f, int j_f, int k_f, position curpos, int gi, np.ndarray[np.float64_t, ndim=2] output, np.ndarray[np.int32_t, ndim=1] refined, OctreeGridList grids): #cdef 
int s = curpos cdef int i, i_off, j, j_off, k, k_off, ci, fi cdef int child_i, child_j, child_k cdef OctreeGrid child_grid cdef OctreeGrid grid = grids[gi] cdef np.ndarray[np.float64_t, ndim=4] fields = grid.fields cdef np.ndarray[np.float64_t, ndim=1] leftedges = grid.left_edges cdef np.float64_t dx = grid.dx[0] cdef np.float64_t child_dx cdef np.ndarray[np.float64_t, ndim=1] child_leftedges cdef np.float64_t cx, cy, cz #here we go over the 8 octants #in general however, a mesh cell on this level #may have more than 8 children on the next level #so we find the int float center (cxyz) of each child cell # and from that find the child cell indices for i_off in range(i_f): i = i_off + i_i #index cx = (leftedges[0] + i*dx) for j_off in range(j_f): j = j_off + j_i cy = (leftedges[1] + j*dx) for k_off in range(k_f): k = k_off + k_i cz = (leftedges[2] + k*dx) ci = grid.child_indices[i,j,k] if ci == -1: for fi in range(fields.shape[0]): output[curpos.output_pos,fi] = fields[fi,i,j,k] refined[curpos.refined_pos] = 0 curpos.output_pos += 1 curpos.refined_pos += 1 else: refined[curpos.refined_pos] = 1 curpos.refined_pos += 1 child_grid = grids[ci-grid.offset] child_dx = child_grid.dx[0] child_leftedges = child_grid.left_edges child_i = int((cx - child_leftedges[0])/child_dx) child_j = int((cy - child_leftedges[1])/child_dx) child_k = int((cz - child_leftedges[2])/child_dx) # s = Recurs..... RecurseOctreeDepthFirst(child_i, child_j, child_k, 2, 2, 2, curpos, ci - grid.offset, output, refined, grids) @cython.boundscheck(False) def RecurseOctreeByLevels(int i_i, int j_i, int k_i, int i_f, int j_f, int k_f, np.ndarray[np.int32_t, ndim=1] curpos, int gi, np.ndarray[np.float64_t, ndim=2] output, np.ndarray[np.int32_t, ndim=2] genealogy, np.ndarray[np.float64_t, ndim=2] corners, OctreeGridList grids): cdef np.int32_t i, i_off, j, j_off, k, k_off, ci, fi cdef int child_i, child_j, child_k cdef OctreeGrid child_grid cdef OctreeGrid grid = grids[gi-1] cdef int level = grid.level cdef np.ndarray[np.int32_t, ndim=3] child_indices = grid.child_indices cdef np.ndarray[np.float64_t, ndim=4] fields = grid.fields cdef np.ndarray[np.float64_t, ndim=1] leftedges = grid.left_edges cdef np.float64_t dx = grid.dx[0] cdef np.float64_t child_dx cdef np.ndarray[np.float64_t, ndim=1] child_leftedges cdef np.float64_t cx, cy, cz cdef int cp s = None for i_off in range(i_f): i = i_off + i_i cx = (leftedges[0] + i*dx) for j_off in range(j_f): j = j_off + j_i cy = (leftedges[1] + j*dx) for k_off in range(k_f): k = k_off + k_i cz = (leftedges[2] + k*dx) cp = curpos[level] corners[cp, 0] = cx corners[cp, 1] = cy corners[cp, 2] = cz genealogy[curpos[level], 2] = level # always output data for fi in range(fields.shape[0]): output[cp,fi] = fields[fi,i,j,k] ci = child_indices[i,j,k] if ci > -1: child_grid = grids[ci-1] child_dx = child_grid.dx[0] child_leftedges = child_grid.left_edges child_i = int((cx-child_leftedges[0])/child_dx) child_j = int((cy-child_leftedges[1])/child_dx) child_k = int((cz-child_leftedges[2])/child_dx) # set current child id to id of next cell to examine genealogy[cp, 0] = curpos[level+1] # set next parent id to id of current cell genealogy[curpos[level+1]:curpos[level+1]+8, 1] = cp s = RecurseOctreeByLevels(child_i, child_j, child_k, 2, 2, 2, curpos, ci, output, genealogy, corners, grids) curpos[level] += 1 return s ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/distance_queue.pxd0000644000175100001770000000233514714401662020520 
0ustar00runnerdocker""" A queue for evaluating distances to discrete points """ cimport cython cimport numpy as np import numpy as np from libc.stdlib cimport free, malloc from libc.string cimport memmove # THESE TWO STRUCTS MUST BE EQUIVALENT cdef struct ItemList: np.int64_t ind np.float64_t value cdef struct NeighborList: np.int64_t pn # Particle number np.float64_t r2 # radius**2 cdef int Neighbor_compare(void *on1, void *on2) nogil cdef np.float64_t r2dist(np.float64_t ppos[3], np.float64_t cpos[3], np.float64_t DW[3], bint periodicity[3], np.float64_t max_dist2) cdef class PriorityQueue: cdef int maxn cdef int curn cdef ItemList* items cdef void item_reset(self) cdef int item_insert(self, np.int64_t i, np.float64_t value) cdef class DistanceQueue(PriorityQueue): cdef np.float64_t DW[3] cdef bint periodicity[3] cdef NeighborList* neighbors # flat array cdef void _setup(self, np.float64_t DW[3], bint periodicity[3]) cdef void neighbor_eval(self, np.int64_t pn, np.float64_t ppos[3], np.float64_t cpos[3]) cdef void neighbor_reset(self) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/distance_queue.pyx0000644000175100001770000001336514714401662020552 0ustar00runnerdocker # distutils: libraries = STD_LIBS """ Distance queue implementation """ cimport numpy as np import numpy as np cimport cython cdef int Neighbor_compare(void *on1, void *on2) noexcept nogil: cdef NeighborList *n1 cdef NeighborList *n2 n1 = on1 n2 = on2 # Note that we set this up so that "greatest" evaluates to the *end* of the # list, so we can do standard radius comparisons. if n1.r2 < n2.r2: return -1 elif n1.r2 == n2.r2: return 0 else: return 1 @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) @cython.initializedcheck(False) cdef np.float64_t r2dist(np.float64_t ppos[3], np.float64_t cpos[3], np.float64_t DW[3], bint periodicity[3], np.float64_t max_dist2): cdef int i cdef np.float64_t r2, DR r2 = 0.0 for i in range(3): DR = (ppos[i] - cpos[i]) if not periodicity[i]: pass elif (DR > DW[i]/2.0): DR -= DW[i] elif (DR < -DW[i]/2.0): DR += DW[i] r2 += DR * DR if max_dist2 >= 0.0 and r2 > max_dist2: return -1.0 return r2 cdef class PriorityQueue: """This class acts as a "priority-queue." It was extracted from the DistanceQueue object so that it could serve as a general-purpose method for storing the N-most "valuable" objects. It's relatively simple, in that it provides storage for a single int64 (which is usually an 'index' into an external array or list) and a single float64 "value" associated with them. You can insert new objects, and then if it's already at maxn objects in it, it'll bump the least valuable one off the end. Of particular note is that *lower* values are considered to be *more valuable* than higher ones; this is because our typical use case is to store radii. """ def __cinit__(self, int maxn): self.maxn = maxn self.curn = 0 self.items = malloc( sizeof(ItemList) * self.maxn) cdef void item_reset(self): for i in range(self.maxn): self.items[i].value = 1e300 self.items[i].ind = -1 self.curn = 0 cdef int item_insert(self, np.int64_t ind, np.float64_t value): cdef int i, di if self.curn == 0: self.items[0].value = value self.items[0].ind = ind self.curn += 1 return 0 # Now insert in a sorted way di = -1 for i in range(self.curn - 1, -1, -1): # We are checking if i is less than us, to see if we should insert # to the right (i.e., i+1). 
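        # An equivalent pure-Python model of this structure (illustrative
        # only, assuming maxn=3) keeps a sorted list and drops the largest:
        #
        #     import bisect
        #     items = []          # (value, ind) pairs, ascending by value
        #     def insert(ind, value, maxn=3):
        #         bisect.insort(items, (value, ind))
        #         if len(items) > maxn:
        #             items.pop()  # bump the least valuable (largest value)
        #
        # The memmove below performs the same "shift right and insert".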
if self.items[i].value < value: di = i break # The outermost one is already too small. if di == self.maxn - 1: return -1 if (self.maxn - (di + 2)) > 0: memmove( (self.items + di + 2), (self.items + di + 1), sizeof(ItemList) * (self.maxn - (di + 2))) self.items[di + 1].value = value self.items[di + 1].ind = ind if self.curn < self.maxn: self.curn += 1 return di + 1 cdef class DistanceQueue: """This is a distance queue object, designed to incrementally evaluate N positions against a reference point. It is initialized with the maximum number that are to be retained (i.e., maxn-nearest neighbors).""" def __cinit__(self, int maxn): if sizeof(ItemList) != sizeof(NeighborList): # This should almost never, ever happen unless we do something very # wrong, and must be broken at compile time. raise RuntimeError self.neighbors = self.items self.neighbor_reset() for i in range(3): self.DW[i] = 0 self.periodicity[i] = False cdef void _setup(self, np.float64_t DW[3], bint periodicity[3]): for i in range(3): self.DW[i] = DW[i] self.periodicity[i] = periodicity[i] def setup(self, np.float64_t[:] DW, periodicity): for i in range(3): self.DW[i] = DW[i] self.periodicity[i] = periodicity[i] def __dealloc__(self): free(self.neighbors) cdef void neighbor_eval(self, np.int64_t pn, np.float64_t ppos[3], np.float64_t cpos[3]): # Here's a python+numpy simulator of this: # http://paste.yt-project.org/show/5445/ cdef np.float64_t r2, r2_trunc if self.curn == self.maxn: # Truncate calculation if it's bigger than this in any dimension r2_trunc = self.neighbors[self.curn - 1].r2 else: # Don't truncate our calculation r2_trunc = -1 r2 = r2dist(ppos, cpos, self.DW, self.periodicity, r2_trunc) if r2 == -1: return self.item_insert(pn, r2) cdef void neighbor_reset(self): self.item_reset() def find_nearest(self, np.float64_t[:] center, np.float64_t[:,:] points): """This function accepts a center and a set of [N,3] points, and it will return the indices into the points array of the maxn closest neighbors.""" cdef int i, j cdef np.float64_t ppos[3] cdef np.float64_t cpos[3] self.neighbor_reset() for i in range(3): cpos[i] = center[i] for j in range(points.shape[0]): for i in range(3): ppos[i] = points[j,i] self.neighbor_eval(j, ppos, cpos) rv = np.empty(self.curn, dtype="int64") for i in range(self.curn): rv[i] = self.neighbors[i].pn return rv ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/element_mappings.pxd0000644000175100001770000002152014714401662021046 0ustar00runnerdockercimport cython cimport numpy as np from numpy cimport ndarray import numpy as np from libc.math cimport fabs, fmax cdef class ElementSampler: # how close a point has to be to the element # to get counted as "inside". This is in the # mapped coordinates of the element. 
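    # For orientation (illustrative, not part of the declarations): a point
    # is mapped into the element's reference coordinates and accepted if it
    # lies within the unit element up to inclusion_tol. For a 1D P1 element
    # (see P1Sampler1D below) that reads:
    #
    #     xi = -1.0 + 2.0 * (x - v0) / (v1 - v0)   # physical -> mapped
    #     inside = abs(xi) - 1.0 <= inclusion_tol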
cdef np.float64_t inclusion_tol cdef int num_mapped_coords cdef void map_real_to_unit(self, double* mapped_x, double* vertices, double* physical_x) noexcept nogil cdef double sample_at_unit_point(self, double* coord, double* vals) noexcept nogil cdef double sample_at_real_point(self, double* vertices, double* field_values, double* physical_x) noexcept nogil cdef int check_inside(self, double* mapped_coord) noexcept nogil cdef int check_mesh_lines(self, double* mapped_coord) noexcept nogil cdef class P1Sampler1D(ElementSampler): cdef void map_real_to_unit(self, double* mapped_x, double* vertices, double* physical_x) noexcept nogil cdef double sample_at_unit_point(self, double* coord, double* vals) noexcept nogil cdef int check_inside(self, double* mapped_coord) noexcept nogil cdef class P1Sampler2D(ElementSampler): cdef void map_real_to_unit(self, double* mapped_x, double* vertices, double* physical_x) noexcept nogil cdef double sample_at_unit_point(self, double* coord, double* vals) noexcept nogil cdef int check_inside(self, double* mapped_coord) noexcept nogil cdef class P1Sampler3D(ElementSampler): cdef void map_real_to_unit(self, double* mapped_x, double* vertices, double* physical_x) noexcept nogil cdef double sample_at_unit_point(self, double* coord, double* vals) noexcept nogil cdef int check_inside(self, double* mapped_coord) noexcept nogil cdef int check_mesh_lines(self, double* mapped_coord) noexcept nogil # This typedef defines a function pointer that defines the system # of equations that will be solved by the NonlinearSolveSamplers. # # inputs: # x - pointer to the mapped coordinate # vertices - pointer to the element vertices # phys_x - pointer to the physical coordinate # # outputs: # # fx - the result of solving the system, should be close to 0 # once it is converged. # ctypedef void (*func_type)(double* fx, double* x, double* vertices, double* phys_x) noexcept nogil # This typedef defines a function pointer that defines the Jacobian # matrix used by the NonlinearSolveSampler3D. Subclasses needed to # define a Jacobian function in this form. # # inputs: # x - pointer to the mapped coordinate # vertices - pointer to the element vertices # phys_x - pointer to the physical coordinate # # outputs: # # rcol - the first column of the jacobian # scol - the second column of the jacobian # tcol - the third column of the jaocobian # ctypedef void (*jac_type3D)(double* rcol, double* scol, double* tcol, double* x, double* vertices, double* phys_x) noexcept nogil # This typedef defines a function pointer that defines the Jacobian # matrix used by the NonlinearSolveSampler2D. Subclasses needed to # define a Jacobian function in this form. 
# # inputs: # x - pointer to the mapped coordinate # vertices - pointer to the element vertices # phys_x - pointer to the physical coordinate # # outputs: # # rcol - the first column of the jacobian # scol - the second column of the jacobian # ctypedef void (*jac_type2D)(double* rcol, double* scol, double* x, double* vertices, double* phys_x) noexcept nogil cdef class NonlinearSolveSampler3D(ElementSampler): cdef int dim cdef int max_iter cdef np.float64_t tolerance cdef func_type func cdef jac_type3D jac cdef void map_real_to_unit(self, double* mapped_x, double* vertices, double* physical_x) noexcept nogil cdef class Q1Sampler3D(NonlinearSolveSampler3D): cdef void map_real_to_unit(self, double* mapped_x, double* vertices, double* physical_x) noexcept nogil cdef double sample_at_unit_point(self, double* coord, double* vals) noexcept nogil cdef int check_inside(self, double* mapped_coord) noexcept nogil cdef int check_mesh_lines(self, double* mapped_coord) noexcept nogil cdef class W1Sampler3D(NonlinearSolveSampler3D): cdef void map_real_to_unit(self, double* mapped_x, double* vertices, double* physical_x) noexcept nogil cdef double sample_at_unit_point(self, double* coord, double* vals) noexcept nogil cdef int check_inside(self, double* mapped_coord) noexcept nogil cdef int check_mesh_lines(self, double* mapped_coord) noexcept nogil cdef class S2Sampler3D(NonlinearSolveSampler3D): cdef void map_real_to_unit(self, double* mapped_x, double* vertices, double* physical_x) noexcept nogil cdef double sample_at_unit_point(self, double* coord, double* vals) noexcept nogil cdef int check_inside(self, double* mapped_coord) noexcept nogil cdef int check_mesh_lines(self, double* mapped_coord) noexcept nogil cdef class NonlinearSolveSampler2D(ElementSampler): cdef int dim cdef int max_iter cdef np.float64_t tolerance cdef func_type func cdef jac_type2D jac cdef void map_real_to_unit(self, double* mapped_x, double* vertices, double* physical_x) noexcept nogil cdef class Q1Sampler2D(NonlinearSolveSampler2D): cdef void map_real_to_unit(self, double* mapped_x, double* vertices, double* physical_x) noexcept nogil cdef double sample_at_unit_point(self, double* coord, double* vals) noexcept nogil cdef int check_inside(self, double* mapped_coord) noexcept nogil cdef class Q2Sampler2D(NonlinearSolveSampler2D): cdef void map_real_to_unit(self, double* mapped_x, double* vertices, double* physical_x) noexcept nogil cdef double sample_at_unit_point(self, double* coord, double* vals) noexcept nogil cdef int check_inside(self, double* mapped_coord) noexcept nogil cdef class T2Sampler2D(NonlinearSolveSampler2D): cdef void map_real_to_unit(self, double* mapped_x, double* vertices, double* physical_x) noexcept nogil cdef double sample_at_unit_point(self, double* coord, double* vals) noexcept nogil cdef int check_inside(self, double* mapped_coord) noexcept nogil cdef class Tet2Sampler3D(NonlinearSolveSampler3D): cdef void map_real_to_unit(self, double* mapped_x, double* vertices, double* physical_x) noexcept nogil cdef double sample_at_unit_point(self, double* coord, double* vals) noexcept nogil cdef int check_inside(self, double* mapped_coord) noexcept nogil ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/element_mappings.pyx0000644000175100001770000013211214714401662021073 0ustar00runnerdocker# distutils: libraries = STD_LIBS """ This file contains coordinate mappings between physical coordinates and those defined on unit elements, as well as doing 
the corresponding intracell interpolation on finite element data. """ cimport cython cimport numpy as np from numpy cimport ndarray import numpy as np from libc.math cimport fabs from yt.utilities.lib.autogenerated_element_samplers cimport ( Q1Function2D, Q1Function3D, Q1Jacobian2D, Q1Jacobian3D, Q2Function2D, Q2Jacobian2D, T2Function2D, T2Jacobian2D, Tet2Function3D, Tet2Jacobian3D, W1Function3D, W1Jacobian3D, ) cdef extern from "platform_dep.h": double fmax(double x, double y) noexcept nogil @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef double determinant_3x3(double* col0, double* col1, double* col2) noexcept nogil: return col0[0]*col1[1]*col2[2] - col0[0]*col1[2]*col2[1] - \ col0[1]*col1[0]*col2[2] + col0[1]*col1[2]*col2[0] + \ col0[2]*col1[0]*col2[1] - col0[2]*col1[1]*col2[0] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef double maxnorm(double* f, int dim) noexcept nogil: cdef double err cdef int i err = fabs(f[0]) for i in range(1, dim): err = fmax(err, fabs(f[i])) return err cdef class ElementSampler: ''' This is a base class for sampling the value of a finite element solution at an arbitrary point inside a mesh element. In general, this will be done by transforming the requested physical coordinate into a mapped coordinate system, sampling the solution in mapped coordinates, and returning the result. This is not to be used directly; use one of the subclasses instead. ''' def __init__(self): self.inclusion_tol = 1.0e-8 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void map_real_to_unit(self, double* mapped_x, double* vertices, double* physical_x) noexcept nogil: pass @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef double sample_at_unit_point(self, double* coord, double* vals) noexcept nogil: pass @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int check_inside(self, double* mapped_coord) noexcept nogil: pass @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def check_contains(self, np.float64_t[:,::1] vertices, np.float64_t[:,::1] positions): cdef np.ndarray[np.float64_t, ndim=2] mapped_coords cdef np.ndarray[np.uint8_t, ndim=1] mask mapped_coords = self.map_reals_to_unit(vertices, positions) mask = np.zeros(mapped_coords.shape[0], dtype=np.uint8) cdef double[3] mapped_coord cdef int i, j for i in range(mapped_coords.shape[0]): for j in range(mapped_coords.shape[1]): mapped_coord[j] = mapped_coords[i, j] mask[i] = self.check_inside(mapped_coord) return mask @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int check_mesh_lines(self, double* mapped_coord) noexcept nogil: pass @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef double sample_at_real_point(self, double* vertices, double* field_values, double* physical_x) noexcept nogil: cdef double val cdef double mapped_coord[4] self.map_real_to_unit(mapped_coord, vertices, physical_x) val = self.sample_at_unit_point(mapped_coord, field_values) return val @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def map_reals_to_unit(self, np.float64_t[:,::1] vertices, np.float64_t[:,::1] positions): cdef double mapped_x[3] cdef int i, n # We have N vertices, which each have three components. 
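        # Typical use from Python (a hedged sketch; the array names are
        # illustrative):
        #
        #     sampler = P1Sampler3D()
        #     mapped = sampler.map_reals_to_unit(vertices, positions)
        #     mask = sampler.check_contains(vertices, positions)
        #
        # Each row of ``mapped`` holds the reference-element coordinates of
        # the corresponding physical position.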
cdef np.ndarray[np.float64_t, ndim=2] output_coords output_coords = np.zeros((positions.shape[0], positions.shape[1]), dtype="float64") # Now for each position, we map for n in range(positions.shape[0]): self.map_real_to_unit(mapped_x, &vertices[0,0], &positions[n, 0]) for i in range(positions.shape[1]): output_coords[n, i] = mapped_x[i] return output_coords def sample_at_real_points(self, np.float64_t[:,::1] vertices, np.float64_t[::1] field_values, np.float64_t[:,::1] positions): cdef np.ndarray[np.float64_t, ndim=1] output_values output_values = np.zeros(positions.shape[0], dtype="float64") for n in range(positions.shape[0]): output_values[n] = self.sample_at_real_point( &vertices[0,0], &field_values[0], &positions[n,0]) return output_values cdef class P1Sampler1D(ElementSampler): ''' This implements sampling inside a linear, 1D element. ''' def __init__(self): super(P1Sampler1D, self).__init__() self.num_mapped_coords = 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void map_real_to_unit(self, double* mapped_x, double* vertices, double* physical_x) noexcept nogil: mapped_x[0] = -1.0 + 2.0*(physical_x[0] - vertices[0]) / (vertices[1] - vertices[0]) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef double sample_at_unit_point(self, double* coord, double* vals) noexcept nogil: return vals[0] * (1 - coord[0]) / 2.0 + vals[1] * (1.0 + coord[0]) / 2.0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int check_inside(self, double* mapped_coord) noexcept nogil: if (fabs(mapped_coord[0]) - 1.0 > self.inclusion_tol): return 0 return 1 cdef class P1Sampler2D(ElementSampler): ''' This implements sampling inside a linear, triangular mesh element. This mapping is linear and can be inverted easily. Note that this implementation uses triangular (or barycentric) coordinates. ''' def __init__(self): super(P1Sampler2D, self).__init__() self.num_mapped_coords = 3 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void map_real_to_unit(self, double* mapped_x, double* vertices, double* physical_x) noexcept nogil: cdef double[3] col0 cdef double[3] col1 cdef double[3] col2 col0[0] = vertices[0] col0[1] = vertices[1] col0[2] = 1.0 col1[0] = vertices[2] col1[1] = vertices[3] col1[2] = 1.0 col2[0] = vertices[4] col2[1] = vertices[5] col2[2] = 1.0 det = determinant_3x3(col0, col1, col2) mapped_x[0] = ((vertices[3] - vertices[5])*physical_x[0] + \ (vertices[4] - vertices[2])*physical_x[1] + \ (vertices[2]*vertices[5] - vertices[4]*vertices[3])) / det mapped_x[1] = ((vertices[5] - vertices[1])*physical_x[0] + \ (vertices[0] - vertices[4])*physical_x[1] + \ (vertices[4]*vertices[1] - vertices[0]*vertices[5])) / det mapped_x[2] = 1.0 - mapped_x[1] - mapped_x[0] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef double sample_at_unit_point(self, double* coord, double* vals) noexcept nogil: return vals[0]*coord[0] + vals[1]*coord[1] + vals[2]*coord[2] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int check_inside(self, double* mapped_coord) noexcept nogil: # for triangles, we check whether all mapped_coords are # between 0 and 1, to within the inclusion tolerance cdef int i for i in range(3): if (mapped_coord[i] < -self.inclusion_tol or mapped_coord[i] - 1.0 > self.inclusion_tol): return 0 return 1 cdef class P1Sampler3D(ElementSampler): ''' This implements sampling inside a linear, tetrahedral mesh element. 
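    For intuition, the barycentric inversion this sampler performs can be
    written as a small NumPy sketch (illustrative only, for the reference
    tetrahedron):

        import numpy as np
        verts = np.array([[0., 0., 0.], [1., 0., 0.],
                          [0., 1., 0.], [0., 0., 1.]])
        x = np.array([0.25, 0.25, 0.25])
        T = (verts[:3] - verts[3]).T             # columns are edge vectors
        lam = np.linalg.solve(T, x - verts[3])
        bary = np.append(lam, 1.0 - lam.sum())   # all in [0, 1] iff inside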
This mapping is linear and can be inverted easily. ''' def __init__(self): super(P1Sampler3D, self).__init__() self.num_mapped_coords = 4 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void map_real_to_unit(self, double* mapped_x, double* vertices, double* physical_x) noexcept nogil: cdef int i cdef double d cdef double[3] bvec cdef double[3] col0 cdef double[3] col1 cdef double[3] col2 # here, we express positions relative to the 4th element, # which is selected by vertices[9] for i in range(3): bvec[i] = physical_x[i] - vertices[9 + i] col0[i] = vertices[0 + i] - vertices[9 + i] col1[i] = vertices[3 + i] - vertices[9 + i] col2[i] = vertices[6 + i] - vertices[9 + i] d = determinant_3x3(col0, col1, col2) mapped_x[0] = determinant_3x3(bvec, col1, col2)/d mapped_x[1] = determinant_3x3(col0, bvec, col2)/d mapped_x[2] = determinant_3x3(col0, col1, bvec)/d mapped_x[3] = 1.0 - mapped_x[0] - mapped_x[1] - mapped_x[2] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef double sample_at_unit_point(self, double* coord, double* vals) noexcept nogil: return vals[0]*coord[0] + vals[1]*coord[1] + \ vals[2]*coord[2] + vals[3]*coord[3] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int check_inside(self, double* mapped_coord) noexcept nogil: # for tetrahedra, we check whether all mapped coordinates # are within 0 and 1, to within the inclusion tolerance cdef int i for i in range(4): if (mapped_coord[i] < -self.inclusion_tol or mapped_coord[i] - 1.0 > self.inclusion_tol): return 0 return 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int check_mesh_lines(self, double* mapped_coord) noexcept nogil: cdef double u, v, w cdef double thresh = 2.0e-2 if mapped_coord[0] == 0: u = mapped_coord[1] v = mapped_coord[2] w = mapped_coord[3] elif mapped_coord[1] == 0: u = mapped_coord[2] v = mapped_coord[3] w = mapped_coord[0] elif mapped_coord[2] == 0: u = mapped_coord[1] v = mapped_coord[3] w = mapped_coord[0] else: u = mapped_coord[1] v = mapped_coord[2] w = mapped_coord[0] if ((u < thresh) or (v < thresh) or (w < thresh) or (fabs(u - 1) < thresh) or (fabs(v - 1) < thresh) or (fabs(w - 1) < thresh)): return 1 return -1 cdef class NonlinearSolveSampler3D(ElementSampler): ''' This is a base class for handling element samplers that require a nonlinear solve to invert the mapping between coordinate systems. To do this, we perform Newton-Raphson iteration using a specified system of equations with an analytic Jacobian matrix. This solver is hard-coded for 3D for reasons of efficiency. This is not to be used directly, use one of the subclasses instead. ''' def __init__(self): super(NonlinearSolveSampler3D, self).__init__() self.tolerance = 1.0e-9 self.max_iter = 10 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void map_real_to_unit(self, double* mapped_x, double* vertices, double* physical_x) noexcept nogil: ''' A thorough description of Newton's method and modifications for global convergence can be found in Dennis's text "Numerical Methods for Unconstrained Optimization and Nonlinear Equations." 
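In outline: each iteration solves J(x) s_n = -f(x) for the Newton step
(via Cramer's rule on the 3x3 Jacobian whose columns are r, s, and t),
proposes x_new = x + lam * s_n with lam = 1, and halves lam until the
residual passes the sufficient-decrease test
err_plus <= err_c * (1 - alpha * lam), or lam drops below min_lam.
The variables used below are: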
x: solution vector; holds unit/mapped coordinates xk: temporary vector for holding solution of current iteration f: residual vector r, s, t: three columns of Jacobian matrix corresponding to unit/mapped coordinates r, s, and t d: Jacobian determinant s_n: Newton step vector lam: fraction of Newton step by which to change x alpha: constant proportional to how much residual required to decrease. 1e-4 is value of alpha recommended by Dennis err_c: Error of current iteration err_plus: Error of next iteration min_lam: minimum fraction of Newton step that the line search is allowed to take. General experience suggests that lambda values smaller than 1e-3 will not significantly reduce the residual, but we set to 1e-6 just to be safe ''' cdef int i cdef double d, lam cdef double[3] f cdef double[3] r cdef double[3] s cdef double[3] t cdef double[3] x, xk, s_n cdef int iterations = 0 cdef double err_c, err_plus cdef double alpha = 1e-4 cdef double min_lam = 1e-6 # initial guess for i in range(3): x[i] = 0.0 # initial error norm self.func(f, x, vertices, physical_x) err_c = maxnorm(f, 3) # begin Newton iteration while (err_c > self.tolerance and iterations < self.max_iter): self.jac(r, s, t, x, vertices, physical_x) d = determinant_3x3(r, s, t) s_n[0] = - (determinant_3x3(f, s, t)/d) s_n[1] = - (determinant_3x3(r, f, t)/d) s_n[2] = - (determinant_3x3(r, s, f)/d) xk[0] = x[0] + s_n[0] xk[1] = x[1] + s_n[1] xk[2] = x[2] + s_n[2] self.func(f, xk, vertices, physical_x) err_plus = maxnorm(f, 3) lam = 1 while err_plus > err_c * (1. - alpha * lam) and lam > min_lam: lam = lam / 2 xk[0] = x[0] + lam * s_n[0] xk[1] = x[1] + lam * s_n[1] xk[2] = x[2] + lam * s_n[2] self.func(f, xk, vertices, physical_x) err_plus = maxnorm(f, 3) x[0] = xk[0] x[1] = xk[1] x[2] = xk[2] err_c = err_plus iterations += 1 if (err_c > self.tolerance): # we did not converge, set bogus value for i in range(3): mapped_x[i] = -99.0 else: for i in range(3): mapped_x[i] = x[i] cdef class Q1Sampler3D(NonlinearSolveSampler3D): ''' This implements sampling inside a 3D, linear, hexahedral mesh element. ''' def __init__(self): super(Q1Sampler3D, self).__init__() self.num_mapped_coords = 3 self.dim = 3 self.func = Q1Function3D self.jac = Q1Jacobian3D @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef double sample_at_unit_point(self, double* coord, double* vals) noexcept nogil: cdef double F, rm, rp, sm, sp, tm, tp rm = 1.0 - coord[0] rp = 1.0 + coord[0] sm = 1.0 - coord[1] sp = 1.0 + coord[1] tm = 1.0 - coord[2] tp = 1.0 + coord[2] F = vals[0]*rm*sm*tm + vals[1]*rp*sm*tm + vals[2]*rp*sp*tm + vals[3]*rm*sp*tm + \ vals[4]*rm*sm*tp + vals[5]*rp*sm*tp + vals[6]*rp*sp*tp + vals[7]*rm*sp*tp return 0.125*F @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int check_inside(self, double* mapped_coord) noexcept nogil: # for hexes, the mapped coordinates all go from -1 to 1 # if we are inside the element. 
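# Worked check of the trilinear basis above: at the element center,
# coord = (0, 0, 0), all of rm, rp, sm, sp, tm, tp equal 1, so 0.125*F is
# the mean of the eight nodal values; at the corner coord = (-1, -1, -1),
# only the vals[0] term survives (rm = sm = tm = 2, rp = sp = tp = 0),
# giving 0.125 * 8 * vals[0] = vals[0], as expected for an interpolant.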
if (fabs(mapped_coord[0]) - 1.0 > self.inclusion_tol or fabs(mapped_coord[1]) - 1.0 > self.inclusion_tol or fabs(mapped_coord[2]) - 1.0 > self.inclusion_tol): return 0 return 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int check_mesh_lines(self, double* mapped_coord) noexcept nogil: if (fabs(fabs(mapped_coord[0]) - 1.0) < 1e-1 and fabs(fabs(mapped_coord[1]) - 1.0) < 1e-1): return 1 elif (fabs(fabs(mapped_coord[0]) - 1.0) < 1e-1 and fabs(fabs(mapped_coord[2]) - 1.0) < 1e-1): return 1 elif (fabs(fabs(mapped_coord[1]) - 1.0) < 1e-1 and fabs(fabs(mapped_coord[2]) - 1.0) < 1e-1): return 1 else: return -1 cdef class S2Sampler3D(NonlinearSolveSampler3D): ''' This implements sampling inside a 3D, 20-node hexahedral mesh element. ''' def __init__(self): super(S2Sampler3D, self).__init__() self.num_mapped_coords = 3 self.dim = 3 self.func = S2Function3D self.jac = S2Jacobian3D @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef double sample_at_unit_point(self, double* coord, double* vals) noexcept nogil: cdef double F, r, s, t, rm, rp, sm, sp, tm, tp r = coord[0] rm = 1.0 - r rp = 1.0 + r s = coord[1] sm = 1.0 - s sp = 1.0 + s t = coord[2] tm = 1.0 - t tp = 1.0 + t F = rm*sm*tm*(-r - s - t - 2.0)*vals[0] \ + rp*sm*tm*( r - s - t - 2.0)*vals[1] \ + rp*sp*tm*( r + s - t - 2.0)*vals[2] \ + rm*sp*tm*(-r + s - t - 2.0)*vals[3] \ + rm*sm*tp*(-r - s + t - 2.0)*vals[4] \ + rp*sm*tp*( r - s + t - 2.0)*vals[5] \ + rp*sp*tp*( r + s + t - 2.0)*vals[6] \ + rm*sp*tp*(-r + s + t - 2.0)*vals[7] \ + 2.0*(1.0 - r*r)*sm*tm*vals[8] \ + 2.0*rp*(1.0 - s*s)*tm*vals[9] \ + 2.0*(1.0 - r*r)*sp*tm*vals[10] \ + 2.0*rm*(1.0 - s*s)*tm*vals[11] \ + 2.0*rm*sm*(1.0 - t*t)*vals[12] \ + 2.0*rp*sm*(1.0 - t*t)*vals[13] \ + 2.0*rp*sp*(1.0 - t*t)*vals[14] \ + 2.0*rm*sp*(1.0 - t*t)*vals[15] \ + 2.0*(1.0 - r*r)*sm*tp*vals[16] \ + 2.0*rp*(1.0 - s*s)*tp*vals[17] \ + 2.0*(1.0 - r*r)*sp*tp*vals[18] \ + 2.0*rm*(1.0 - s*s)*tp*vals[19] return 0.125*F @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int check_inside(self, double* mapped_coord) noexcept nogil: if (fabs(mapped_coord[0]) - 1.0 > self.inclusion_tol or fabs(mapped_coord[1]) - 1.0 > self.inclusion_tol or fabs(mapped_coord[2]) - 1.0 > self.inclusion_tol): return 0 return 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int check_mesh_lines(self, double* mapped_coord) noexcept nogil: if (fabs(fabs(mapped_coord[0]) - 1.0) < 1e-1 and fabs(fabs(mapped_coord[1]) - 1.0) < 1e-1): return 1 elif (fabs(fabs(mapped_coord[0]) - 1.0) < 1e-1 and fabs(fabs(mapped_coord[2]) - 1.0) < 1e-1): return 1 elif (fabs(fabs(mapped_coord[1]) - 1.0) < 1e-1 and fabs(fabs(mapped_coord[2]) - 1.0) < 1e-1): return 1 else: return -1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef inline void S2Function3D(double* fx, double* x, double* vertices, double* phys_x) noexcept nogil: cdef int i cdef double r, s, t, rm, rp, sm, sp, tm, tp r = x[0] rm = 1.0 - r rp = 1.0 + r s = x[1] sm = 1.0 - s sp = 1.0 + s t = x[2] tm = 1.0 - t tp = 1.0 + t for i in range(3): fx[i] = rm*sm*tm*(-r - s - t - 2.0)*vertices[0 + i] \ + rp*sm*tm*( r - s - t - 2.0)*vertices[3 + i] \ + rp*sp*tm*( r + s - t - 2.0)*vertices[6 + i] \ + rm*sp*tm*(-r + s - t - 2.0)*vertices[9 + i] \ + rm*sm*tp*(-r - s + t - 2.0)*vertices[12 + i] \ + rp*sm*tp*( r - s + t - 2.0)*vertices[15 + i] \ + rp*sp*tp*( r + s + t - 2.0)*vertices[18 + i] \ + rm*sp*tp*(-r + s + t - 2.0)*vertices[21 + i] \ + 2.0*(1.0 - 
r*r)*sm*tm*vertices[24 + i] \ + 2.0*rp*(1.0 - s*s)*tm*vertices[27 + i] \ + 2.0*(1.0 - r*r)*sp*tm*vertices[30 + i] \ + 2.0*rm*(1.0 - s*s)*tm*vertices[33 + i] \ + 2.0*rm*sm*(1.0 - t*t)*vertices[36 + i] \ + 2.0*rp*sm*(1.0 - t*t)*vertices[39 + i] \ + 2.0*rp*sp*(1.0 - t*t)*vertices[42 + i] \ + 2.0*rm*sp*(1.0 - t*t)*vertices[45 + i] \ + 2.0*(1.0 - r*r)*sm*tp*vertices[48 + i] \ + 2.0*rp*(1.0 - s*s)*tp*vertices[51 + i] \ + 2.0*(1.0 - r*r)*sp*tp*vertices[54 + i] \ + 2.0*rm*(1.0 - s*s)*tp*vertices[57 + i] \ - 8.0*phys_x[i] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef inline void S2Jacobian3D(double* rcol, double* scol, double* tcol, double* x, double* vertices, double* phys_x) noexcept nogil: cdef int i cdef double r, s, t, rm, rp, sm, sp, tm, tp r = x[0] rm = 1.0 - r rp = 1.0 + r s = x[1] sm = 1.0 - s sp = 1.0 + s t = x[2] tm = 1.0 - t tp = 1.0 + t for i in range(3): rcol[i] = (sm*tm*(r + s + t + 2.0) - rm*sm*tm)*vertices[0 + i] \ + (sm*tm*(r - s - t - 2.0) + rp*sm*tm)*vertices[3 + i] \ + (sp*tm*(r + s - t - 2.0) + rp*sp*tm)*vertices[6 + i] \ + (sp*tm*(r - s + t + 2.0) - rm*sp*tm)*vertices[9 + i] \ + (sm*tp*(r + s - t + 2.0) - rm*sm*tp)*vertices[12 + i] \ + (sm*tp*(r - s + t - 2.0) + rp*sm*tp)*vertices[15 + i] \ + (sp*tp*(r + s + t - 2.0) + rp*sp*tp)*vertices[18 + i] \ + (sp*tp*(r - s - t + 2.0) - rm*sp*tp)*vertices[21 + i] \ - 4.0*r*sm*tm*vertices[24 + i] \ + 2.0*(1.0 - s*s)*tm*vertices[27 + i] \ - 4.0*r*sp*tm*vertices[30 + i] \ - 2.0*(1.0 - s*s)*tm*vertices[33 + i] \ - 2.0*sm*(1.0 - t*t)*vertices[36 + i] \ + 2.0*sm*(1.0 - t*t)*vertices[39 + i] \ + 2.0*sp*(1.0 - t*t)*vertices[42 + i] \ - 2.0*sp*(1.0 - t*t)*vertices[45 + i] \ - 4.0*r*sm*tp*vertices[48 + i] \ + 2.0*(1.0 - s*s)*tp*vertices[51 + i] \ - 4.0*r*sp*tp*vertices[54 + i] \ - 2.0*(1.0 - s*s)*tp*vertices[57 + i] scol[i] = ( rm*tm*(r + s + t + 2.0) - rm*sm*tm)*vertices[0 + i] \ + (-rp*tm*(r - s - t - 2.0) - rp*sm*tm)*vertices[3 + i] \ + ( rp*tm*(r + s - t - 2.0) + rp*sp*tm)*vertices[6 + i] \ + (-rm*tm*(r - s + t + 2.0) + rm*sp*tm)*vertices[9 + i] \ + ( rm*tp*(r + s - t + 2.0) - rm*sm*tp)*vertices[12 + i] \ + (-rp*tp*(r - s + t - 2.0) - rp*sm*tp)*vertices[15 + i] \ + ( rp*tp*(r + s + t - 2.0) + rp*sp*tp)*vertices[18 + i] \ + (-rm*tp*(r - s - t + 2.0) + rm*sp*tp)*vertices[21 + i] \ - 2.0*(1.0 - r*r)*tm*vertices[24 + i] \ - 4.0*rp*s*tm*vertices[27 + i] \ + 2.0*(1.0 - r*r)*tm*vertices[30 + i] \ - 4.0*rm*s*tm*vertices[33 + i] \ - 2.0*rm*(1.0 - t*t)*vertices[36 + i] \ - 2.0*rp*(1.0 - t*t)*vertices[39 + i] \ + 2.0*rp*(1.0 - t*t)*vertices[42 + i] \ + 2.0*rm*(1.0 - t*t)*vertices[45 + i] \ - 2.0*(1.0 - r*r)*tp*vertices[48 + i] \ - 4.0*rp*s*tp*vertices[51 + i] \ + 2.0*(1.0 - r*r)*tp*vertices[54 + i] \ - 4.0*rm*s*tp*vertices[57 + i] tcol[i] = ( rm*sm*(r + s + t + 2.0) - rm*sm*tm)*vertices[0 + i] \ + (-rp*sm*(r - s - t - 2.0) - rp*sm*tm)*vertices[3 + i] \ + (-rp*sp*(r + s - t - 2.0) - rp*sp*tm)*vertices[6 + i] \ + ( rm*sp*(r - s + t + 2.0) - rm*sp*tm)*vertices[9 + i] \ + (-rm*sm*(r + s - t + 2.0) + rm*sm*tp)*vertices[12 + i] \ + ( rp*sm*(r - s + t - 2.0) + rp*sm*tp)*vertices[15 + i] \ + ( rp*sp*(r + s + t - 2.0) + rp*sp*tp)*vertices[18 + i] \ + (-rm*sp*(r - s - t + 2.0) + rm*sp*tp)*vertices[21 + i] \ - 2.0*(1.0 - r*r)*sm*vertices[24 + i] \ - 2.0*rp*(1.0 - s*s)*vertices[27 + i] \ - 2.0*(1.0 - r*r)*sp*vertices[30 + i] \ - 2.0*rm*(1.0 - s*s)*vertices[33 + i] \ - 4.0*rm*sm*t*vertices[36 + i] \ - 4.0*rp*sm*t*vertices[39 + i] \ - 4.0*rp*sp*t*vertices[42 + i] \ - 4.0*rm*sp*t*vertices[45 + i] \ + 2.0*(1.0 - r*r)*sm*vertices[48 + 
i] \ + 2.0*rp*(1.0 - s*s)*vertices[51 + i] \ + 2.0*(1.0 - r*r)*sp*vertices[54 + i] \ + 2.0*rm*(1.0 - s*s)*vertices[57 + i] cdef class W1Sampler3D(NonlinearSolveSampler3D): ''' This implements sampling inside a 3D, linear, wedge mesh element. ''' def __init__(self): super(W1Sampler3D, self).__init__() self.num_mapped_coords = 3 self.dim = 3 self.func = W1Function3D self.jac = W1Jacobian3D @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef double sample_at_unit_point(self, double* coord, double* vals) noexcept nogil: cdef double F F = vals[0]*(1.0 - coord[0] - coord[1])*(1.0 - coord[2]) + \ vals[1]*coord[0]*(1.0 - coord[2]) + \ vals[2]*coord[1]*(1.0 - coord[2]) + \ vals[3]*(1.0 - coord[0] - coord[1])*(1.0 + coord[2]) + \ vals[4]*coord[0]*(1.0 + coord[2]) + \ vals[5]*coord[1]*(1.0 + coord[2]) return F / 2.0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int check_inside(self, double* mapped_coord) noexcept nogil: # for wedges the bounds of the mapped coordinates are: # 0 <= mapped_coord[0] <= 1 - mapped_coord[1] # 0 <= mapped_coord[1] # -1 <= mapped_coord[2] <= 1 if (mapped_coord[0] < -self.inclusion_tol or mapped_coord[0] + mapped_coord[1] - 1.0 > self.inclusion_tol): return 0 if (mapped_coord[1] < -self.inclusion_tol): return 0 if (fabs(mapped_coord[2]) - 1.0 > self.inclusion_tol): return 0 return 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int check_mesh_lines(self, double* mapped_coord) noexcept nogil: cdef double r, s cdef double thresh = 5.0e-2 r = mapped_coord[0] s = mapped_coord[1] cdef int near_edge_r, near_edge_s, near_edge_t near_edge_r = (r < thresh) or (fabs(r + s - 1.0) < thresh) near_edge_s = (s < thresh) near_edge_t = fabs(fabs(mapped_coord[2]) - 1.0) < thresh # we use ray.instID to pass back whether the ray is near the # element boundary or not (used to annotate mesh lines) if (near_edge_r and near_edge_s): return 1 elif (near_edge_r and near_edge_t): return 1 elif (near_edge_s and near_edge_t): return 1 else: return -1 cdef class NonlinearSolveSampler2D(ElementSampler): ''' This is a base class for handling element samplers that require a nonlinear solve to invert the mapping between coordinate systems. To do this, we perform Newton-Raphson iteration using a specified system of equations with an analytic Jacobian matrix. This solver is hard-coded for 2D for reasons of efficiency. This is not to be used directly, use one of the subclasses instead. ''' def __init__(self): super(NonlinearSolveSampler2D, self).__init__() self.tolerance = 1.0e-9 self.max_iter = 10 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void map_real_to_unit(self, double* mapped_x, double* vertices, double* physical_x) noexcept nogil: ''' A thorough description of Newton's method and modifications for global convergence can be found in Dennis's text "Numerical Methods for Unconstrained Optimization and Nonlinear Equations." x: solution vector; holds unit/mapped coordinates xk: temporary vector for holding solution of current iteration f: residual vector A: Jacobian matrix (derivative of residual vector wrt x) d: Jacobian determinant s_n: Newton step vector lam: fraction of Newton step by which to change x alpha: constant proportional to how much residual required to decrease. 1e-4 is value of alpha recommended by Dennis err_c: Error of current iteration err_plus: Error of next iteration min_lam: minimum fraction of Newton step that the line search is allowed to take. 
General experience suggests that lambda values smaller than 1e-3 will not significantly reduce the residual, but we set to 1e-6 just to be safe ''' cdef int i cdef double d, lam cdef double[2] f cdef double[2] x, xk, s_n cdef double[4] A cdef int iterations = 0 cdef double err_c, err_plus cdef double alpha = 1e-4 cdef double min_lam = 1e-6 # initial guess for i in range(2): x[i] = 0.0 # initial error norm self.func(f, x, vertices, physical_x) err_c = maxnorm(f, 2) # begin Newton iteration while (err_c > self.tolerance and iterations < self.max_iter): self.jac(&A[0], &A[2], x, vertices, physical_x) d = (A[0]*A[3] - A[1]*A[2]) s_n[0] = -( A[3]*f[0] - A[2]*f[1]) / d s_n[1] = -(-A[1]*f[0] + A[0]*f[1]) / d xk[0] = x[0] + s_n[0] xk[1] = x[1] + s_n[1] self.func(f, xk, vertices, physical_x) err_plus = maxnorm(f, 2) lam = 1 while err_plus > err_c * (1. - alpha * lam) and lam > min_lam: lam = lam / 2 xk[0] = x[0] + lam * s_n[0] xk[1] = x[1] + lam * s_n[1] self.func(f, xk, vertices, physical_x) err_plus = maxnorm(f, 2) x[0] = xk[0] x[1] = xk[1] err_c = err_plus iterations += 1 if (err_c > self.tolerance): # we did not converge, set bogus value for i in range(2): mapped_x[i] = -99.0 else: for i in range(2): mapped_x[i] = x[i] cdef class Q1Sampler2D(NonlinearSolveSampler2D): ''' This implements sampling inside a 2D, linear, quadrilateral mesh element. ''' def __init__(self): super(Q1Sampler2D, self).__init__() self.num_mapped_coords = 2 self.dim = 2 self.func = Q1Function2D self.jac = Q1Jacobian2D @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef double sample_at_unit_point(self, double* coord, double* vals) noexcept nogil: cdef double F, rm, rp, sm, sp rm = 1.0 - coord[0] rp = 1.0 + coord[0] sm = 1.0 - coord[1] sp = 1.0 + coord[1] F = vals[0]*rm*sm + vals[1]*rp*sm + vals[2]*rp*sp + vals[3]*rm*sp return 0.25*F @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int check_inside(self, double* mapped_coord) noexcept nogil: # for quads, we check whether the mapped_coord is between # -1 and 1 in both directions. if (fabs(mapped_coord[0]) - 1.0 > self.inclusion_tol or fabs(mapped_coord[1]) - 1.0 > self.inclusion_tol): return 0 return 1 cdef class Q2Sampler2D(NonlinearSolveSampler2D): ''' This implements sampling inside a 2D, quadratic, quadrilateral mesh element. ''' def __init__(self): super(Q2Sampler2D, self).__init__() self.num_mapped_coords = 2 self.dim = 2 self.func = Q2Function2D self.jac = Q2Jacobian2D @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef double sample_at_unit_point(self, double* coord, double* vals) noexcept nogil: cdef double[9] phi cdef double rv = 0 zet = coord[0] eta = coord[1] zetm = coord[0] - 1. zetp = coord[0] + 1. etam = coord[1] - 1. etap = coord[1] + 1. phi[0] = zet * zetm * eta * etam / 4. phi[1] = zet * zetp * eta * etam / 4. phi[2] = zet * zetp * eta * etap / 4. phi[3] = zet * zetm * eta * etap / 4. phi[4] = zetp * zetm * eta * etam / -2. phi[5] = zet * zetp * etap * etam / -2. phi[6] = zetp * zetm * eta * etap / -2. phi[7] = zet * zetm * etap * etam / -2. phi[8] = zetp * zetm * etap * etam for i in range(9): rv += vals[i] * phi[i] return rv @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int check_inside(self, double* mapped_coord) noexcept nogil: # for quads, we check whether the mapped_coord is between # -1 and 1 in both directions. 
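# Worked check of the biquadratic basis above: at the element center,
# coord = (0, 0), zet = eta = 0 makes phi[0] through phi[7] vanish, while
# phi[8] = zetp*zetm*etap*etam = (1)(-1)(1)(-1) = 1, so the sample reduces
# to vals[8], the value at the center node, as it should.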
if (fabs(mapped_coord[0]) - 1.0 > self.inclusion_tol or fabs(mapped_coord[1]) - 1.0 > self.inclusion_tol): return 0 return 1 cdef class T2Sampler2D(NonlinearSolveSampler2D): ''' This implements sampling inside a 2D, quadratic, triangular mesh element. Note that this implementation uses canonical coordinates. ''' def __init__(self): super(T2Sampler2D, self).__init__() self.num_mapped_coords = 2 self.dim = 2 self.func = T2Function2D self.jac = T2Jacobian2D @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef double sample_at_unit_point(self, double* coord, double* vals) noexcept nogil: cdef double phi0, phi1, phi2, phi3, phi4, phi5, c0sq, c1sq, c0c1 c0sq = coord[0] * coord[0] c1sq = coord[1] * coord[1] c0c1 = coord[0] * coord[1] phi0 = 1 - 3 * coord[0] + 2 * c0sq - 3 * coord[1] + \ 2 * c1sq + 4 * c0c1 phi1 = -coord[0] + 2 * c0sq phi2 = -coord[1] + 2 * c1sq phi3 = 4 * coord[0] - 4 * c0sq - 4 * c0c1 phi4 = 4 * c0c1 phi5 = 4 * coord[1] - 4 * c1sq - 4 * c0c1 return vals[0]*phi0 + vals[1]*phi1 + vals[2]*phi2 + vals[3]*phi3 + \ vals[4]*phi4 + vals[5]*phi5 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int check_inside(self, double* mapped_coord) noexcept nogil: # for canonical tris, we check whether the mapped_coords are between # 0 and 1. if (mapped_coord[0] < -self.inclusion_tol or \ mapped_coord[1] < -self.inclusion_tol or \ mapped_coord[0] + mapped_coord[1] - 1.0 > self.inclusion_tol): return 0 return 1 cdef class Tet2Sampler3D(NonlinearSolveSampler3D): ''' This implements sampling inside a 3D, quadratic, tetrahedral mesh element. Note that this implementation uses canonical coordinates. ''' def __init__(self): super(Tet2Sampler3D, self).__init__() self.num_mapped_coords = 3 self.dim = 3 self.func = Tet2Function3D self.jac = Tet2Jacobian3D @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef double sample_at_unit_point(self, double* coord, double* vals) noexcept nogil: cdef double[10] phi cdef double coordsq[3] cdef int i cdef double return_value = 0 for i in range(3): coordsq[i] = coord[i] * coord[i] phi[0] = 1 - 3 * coord[0] + 2 * coordsq[0] - 3 * coord[1] + \ 2 * coordsq[1] - 3 * coord[2] + 2 * coordsq[2] + \ 4 * coord[0] * coord[1] + 4 * coord[0] * coord[2] + \ 4 * coord[1] * coord[2] phi[1] = -coord[0] + 2 * coordsq[0] phi[2] = -coord[1] + 2 * coordsq[1] phi[3] = -coord[2] + 2 * coordsq[2] phi[4] = 4 * coord[0] - 4 * coordsq[0] - 4 * coord[0] * coord[1] - \ 4 * coord[0] * coord[2] phi[5] = 4 * coord[0] * coord[1] phi[6] = 4 * coord[1] - 4 * coordsq[1] - 4 * coord[0] * coord[1] - \ 4 * coord[1] * coord[2] phi[7] = 4 * coord[2] - 4 * coordsq[2] - 4 * coord[2] * coord[0] - \ 4 * coord[2] * coord[1] phi[8] = 4 * coord[0] * coord[2] phi[9] = 4 * coord[1] * coord[2] for i in range(10): return_value += phi[i] * vals[i] return return_value @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int check_inside(self, double* mapped_coord) noexcept nogil: # for canonical tets, we check whether the mapped_coords are between # 0 and 1. 
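# Worked check of the quadratic basis above: at the canonical vertex
# coord = (1, 0, 0), phi[1] = -1 + 2 = 1 while phi[0] = 1 - 3 + 2 = 0 and
# all other phi vanish, so the sample returns vals[1]; an edge midpoint
# such as coord = (0.5, 0, 0) likewise selects vals[4].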
if (mapped_coord[0] < -self.inclusion_tol or \ mapped_coord[1] < -self.inclusion_tol or \ mapped_coord[2] < -self.inclusion_tol or \ mapped_coord[0] + mapped_coord[1] + mapped_coord[2] - 1.0 > \ self.inclusion_tol): return 0 return 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int check_mesh_lines(self, double* mapped_coord) noexcept nogil: cdef double u, v cdef double thresh = 2.0e-2 if mapped_coord[0] == 0: u = mapped_coord[1] v = mapped_coord[2] elif mapped_coord[1] == 0: u = mapped_coord[2] v = mapped_coord[0] elif mapped_coord[2] == 0: u = mapped_coord[1] v = mapped_coord[0] else: u = mapped_coord[1] v = mapped_coord[2] if ((u < thresh) or (v < thresh) or (fabs(u - 1) < thresh) or (fabs(v - 1) < thresh)): return 1 return -1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def test_hex_sampler(np.ndarray[np.float64_t, ndim=2] vertices, np.ndarray[np.float64_t, ndim=1] field_values, np.ndarray[np.float64_t, ndim=1] physical_x): cdef double val cdef Q1Sampler3D sampler = Q1Sampler3D() val = sampler.sample_at_real_point( vertices.data, field_values.data, physical_x.data) return val @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def test_hex20_sampler(np.ndarray[np.float64_t, ndim=2] vertices, np.ndarray[np.float64_t, ndim=1] field_values, np.ndarray[np.float64_t, ndim=1] physical_x): cdef double val cdef S2Sampler3D sampler = S2Sampler3D() val = sampler.sample_at_real_point( vertices.data, field_values.data, physical_x.data) return val @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def test_tetra_sampler(np.ndarray[np.float64_t, ndim=2] vertices, np.ndarray[np.float64_t, ndim=1] field_values, np.ndarray[np.float64_t, ndim=1] physical_x): cdef double val sampler = P1Sampler3D() val = sampler.sample_at_real_point( vertices.data, field_values.data, physical_x.data) return val @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def test_wedge_sampler(np.ndarray[np.float64_t, ndim=2] vertices, np.ndarray[np.float64_t, ndim=1] field_values, np.ndarray[np.float64_t, ndim=1] physical_x): cdef double val cdef W1Sampler3D sampler = W1Sampler3D() val = sampler.sample_at_real_point( vertices.data, field_values.data, physical_x.data) return val @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def test_linear1D_sampler(np.ndarray[np.float64_t, ndim=2] vertices, np.ndarray[np.float64_t, ndim=1] field_values, np.ndarray[np.float64_t, ndim=1] physical_x): cdef double val sampler = P1Sampler1D() val = sampler.sample_at_real_point( vertices.data, field_values.data, physical_x.data) return val @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def test_tri_sampler(np.ndarray[np.float64_t, ndim=2] vertices, np.ndarray[np.float64_t, ndim=1] field_values, np.ndarray[np.float64_t, ndim=1] physical_x): cdef double val sampler = P1Sampler2D() val = sampler.sample_at_real_point( vertices.data, field_values.data, physical_x.data) return val @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def test_quad_sampler(np.ndarray[np.float64_t, ndim=2] vertices, np.ndarray[np.float64_t, ndim=1] field_values, np.ndarray[np.float64_t, ndim=1] physical_x): cdef double val sampler = Q1Sampler2D() val = sampler.sample_at_real_point( vertices.data, field_values.data, physical_x.data) return val @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def 
test_quad2_sampler(np.ndarray[np.float64_t, ndim=2] vertices, np.ndarray[np.float64_t, ndim=1] field_values, np.ndarray[np.float64_t, ndim=1] physical_x): cdef double val sampler = Q2Sampler2D() val = sampler.sample_at_real_point( vertices.data, field_values.data, physical_x.data) return val @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def test_tri2_sampler(np.ndarray[np.float64_t, ndim=2] vertices, np.ndarray[np.float64_t, ndim=1] field_values, np.ndarray[np.float64_t, ndim=1] physical_x): cdef double val sampler = T2Sampler2D() val = sampler.sample_at_real_point( vertices.data, field_values.data, physical_x.data) return val @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def test_tet2_sampler(np.ndarray[np.float64_t, ndim=2] vertices, np.ndarray[np.float64_t, ndim=1] field_values, np.ndarray[np.float64_t, ndim=1] physical_x): cdef double val sampler = Tet2Sampler3D() val = sampler.sample_at_real_point( vertices.data, field_values.data, physical_x.data) return val ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.3991537 yt-4.4.0/yt/utilities/lib/embree_mesh/0000755000175100001770000000000014714401715017254 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/embree_mesh/__init__.py0000644000175100001770000000000014714401662021354 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/embree_mesh/mesh_construction.pxd0000644000175100001770000000145414714401662023544 0ustar00runnerdockercimport numpy as np from pyembree.rtcore cimport Triangle, Vec3f, Vertex ctypedef struct MeshDataContainer: Vertex* vertices # array of triangle vertices Triangle* indices # which vertices belong to which triangles double* field_data # the field values at the vertices int* element_indices # which vertices belong to which *element* int tpe # the number of triangles per element int vpe # the number of vertices per element int fpe # the number of field values per element ctypedef struct Patch: float[8][3] v unsigned int geomID np.float64_t* vertices np.float64_t* field_data ctypedef struct Tet_Patch: float[6][3] v unsigned int geomID np.float64_t* vertices np.float64_t* field_data ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/embree_mesh/mesh_construction.pyx0000644000175100001770000003236714714401662023600 0ustar00runnerdocker# distutils: include_dirs = EMBREE_INC_DIR # distutils: library_dirs = EMBREE_LIB_DIR # distutils: libraries = EMBREE_LIBS # distutils: language = c++ """ This file contains the ElementMesh classes, which represent the target that the rays will be cast at when rendering finite element data. This class handles the interface between the internal representation of the mesh and the pyembree representation. Note - this file is only used for the Embree-accelerated ray-tracer. 
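The classes below upload the geometry to an Embree scene in one of two
forms: linear elements are decomposed into triangles (12 per hexahedron,
two per face; four per tetrahedron), while 2nd-order elements are
represented by quadratic surface patches. In both cases, per-element
field values are attached as Embree user data so that the samplers in
mesh_samplers.pyx can be run as intersection filters when a ray hits
the mesh.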
""" import numpy as np cimport numpy as np cimport pyembree.rtcore_geometry as rtcg cimport pyembree.rtcore_geometry_user as rtcgu from libc.stdlib cimport free, malloc from mesh_intersection cimport ( patchBoundsFunc, patchIntersectFunc, tet_patchBoundsFunc, tet_patchIntersectFunc, ) from mesh_samplers cimport sample_hex, sample_tetra, sample_wedge from mesh_traversal cimport YTEmbreeScene from pyembree.rtcore cimport Triangle, Vertex from yt.utilities.exceptions import YTElementTypeNotRecognized cdef extern from "mesh_triangulation.h": enum: MAX_NUM_TRI int HEX_NV int HEX_NT int TETRA_NV int TETRA_NT int WEDGE_NV int WEDGE_NT int triangulate_hex[MAX_NUM_TRI][3] int triangulate_tetra[MAX_NUM_TRI][3] int triangulate_wedge[MAX_NUM_TRI][3] int hex20_faces[6][8] int tet10_faces[4][6] cdef class LinearElementMesh: r''' This creates a 1st-order mesh to be ray-traced with embree. Currently, we handle non-triangular mesh types by converting them to triangular meshes. This class performs this transformation. Currently, this is implemented for hexahedral and tetrahedral meshes. Parameters ---------- scene : EmbreeScene This is the scene to which the constructed polygons will be added. vertices : a np.ndarray of floats. This specifies the x, y, and z coordinates of the vertices in the polygon mesh. This should either have the shape (num_vertices, 3). For example, vertices[2][1] should give the y-coordinate of the 3rd vertex in the mesh. indices : a np.ndarray of ints This should either have the shape (num_elements, 4) or (num_elements, 8) for tetrahedral and hexahedral meshes, respectively. For tetrahedral meshes, each element will be represented by four triangles in the scene. For hex meshes, each element will be represented by 12 triangles, 2 for each face. For hex meshes, we assume that the node ordering is as defined here: http://homepages.cae.wisc.edu/~tautges/papers/cnmev3.pdf ''' cdef Vertex* vertices cdef Triangle* indices cdef unsigned int mesh cdef double* field_data cdef rtcg.RTCFilterFunc filter_func # triangles per element, vertices per element, and field points per # element, respectively: cdef int tpe, vpe, fpe cdef int[MAX_NUM_TRI][3] tri_array cdef int* element_indices cdef MeshDataContainer datac def __init__(self, YTEmbreeScene scene, np.ndarray vertices, np.ndarray indices, np.ndarray data): # We need to figure out what kind of elements we've been handed. 
if indices.shape[1] == 8: self.vpe = HEX_NV self.tpe = HEX_NT self.tri_array = triangulate_hex elif indices.shape[1] == 6: self.vpe = WEDGE_NV self.tpe = WEDGE_NT self.tri_array = triangulate_wedge elif indices.shape[1] == 4: self.vpe = TETRA_NV self.tpe = TETRA_NT self.tri_array = triangulate_tetra else: raise YTElementTypeNotRecognized(vertices.shape[1], indices.shape[1]) self._build_from_indices(scene, vertices, indices) self._set_field_data(scene, data) self._set_sampler_type(scene) cdef void _build_from_indices(self, YTEmbreeScene scene, np.ndarray vertices_in, np.ndarray indices_in): cdef int i, j cdef int nv = vertices_in.shape[0] cdef int ne = indices_in.shape[0] cdef int nt = self.tpe*ne cdef unsigned int mesh = rtcg.rtcNewTriangleMesh(scene.scene_i, rtcg.RTC_GEOMETRY_STATIC, nt, nv, 1) # first just copy over the vertices cdef Vertex* vertices = malloc(nv * sizeof(Vertex)) for i in range(nv): vertices[i].x = vertices_in[i, 0] vertices[i].y = vertices_in[i, 1] vertices[i].z = vertices_in[i, 2] rtcg.rtcSetBuffer(scene.scene_i, mesh, rtcg.RTC_VERTEX_BUFFER, vertices, 0, sizeof(Vertex)) # now build up the triangles cdef Triangle* triangles = malloc(nt * sizeof(Triangle)) for i in range(ne): for j in range(self.tpe): triangles[self.tpe*i+j].v0 = indices_in[i][self.tri_array[j][0]] triangles[self.tpe*i+j].v1 = indices_in[i][self.tri_array[j][1]] triangles[self.tpe*i+j].v2 = indices_in[i][self.tri_array[j][2]] rtcg.rtcSetBuffer(scene.scene_i, mesh, rtcg.RTC_INDEX_BUFFER, triangles, 0, sizeof(Triangle)) cdef int* element_indices = malloc(ne * self.vpe * sizeof(int)) for i in range(ne): for j in range(self.vpe): element_indices[i*self.vpe + j] = indices_in[i][j] self.element_indices = element_indices self.vertices = vertices self.indices = triangles self.mesh = mesh cdef void _set_field_data(self, YTEmbreeScene scene, np.ndarray data_in): cdef int ne = data_in.shape[0] self.fpe = data_in.shape[1] cdef double* field_data = malloc(ne * self.fpe * sizeof(double)) for i in range(ne): for j in range(self.fpe): field_data[i*self.fpe+j] = data_in[i][j] self.field_data = field_data cdef MeshDataContainer datac datac.vertices = self.vertices datac.indices = self.indices datac.field_data = self.field_data datac.element_indices = self.element_indices datac.tpe = self.tpe datac.vpe = self.vpe datac.fpe = self.fpe self.datac = datac rtcg.rtcSetUserData(scene.scene_i, self.mesh, &self.datac) cdef void _set_sampler_type(self, YTEmbreeScene scene): if self.vpe == 4: self.filter_func = sample_tetra elif self.vpe == 6: self.filter_func = sample_wedge elif self.vpe == 8: self.filter_func = sample_hex else: raise NotImplementedError("Sampler type not implemented.") rtcg.rtcSetIntersectionFilterFunction(scene.scene_i, self.mesh, self.filter_func) def __dealloc__(self): free(self.field_data) free(self.element_indices) free(self.vertices) free(self.indices) cdef class QuadraticElementMesh: r''' This creates a mesh of quadratic patches corresponding to the faces of 2nd-order Lagrange elements for direct rendering via embree. Currently, this is implemented for 20-point hexahedral meshes only. Parameters ---------- scene : EmbreeScene This is the scene to which the constructed patches will be added. vertices : a np.ndarray of floats. This specifies the x, y, and z coordinates of the vertices in the mesh. This should either have the shape (num_vertices, 3). For example, vertices[2][1] should give the y-coordinate of the 3rd vertex in the mesh. 
indices : a np.ndarray of ints This should have the shape (num_elements, 20). Each hex will be represented in the scene by 6 bi-quadratic patches. We assume that the node ordering is as defined here: http://homepages.cae.wisc.edu/~tautges/papers/cnmev3.pdf ''' cdef void* patches cdef np.float64_t* vertices cdef np.float64_t* field_data cdef unsigned int mesh # patches per element, vertices per element, vertices per face, # and field points per element, respectively: cdef int ppe, vpe, vpf, fpe def __init__(self, YTEmbreeScene scene, np.ndarray vertices, np.ndarray indices, np.ndarray field_data): # 20-point hexes if indices.shape[1] == 20: self.vpe = 20 self.ppe = 6 self.vpf = 8 self._build_from_indices_hex20(scene, vertices, indices, field_data) # 10-point tets elif indices.shape[1] == 10: self.vpe = 10 self.ppe = 4 self.vpf = 6 self._build_from_indices_tet10(scene, vertices, indices, field_data) else: raise NotImplementedError cdef void _build_from_indices_hex20(self, YTEmbreeScene scene, np.ndarray vertices_in, np.ndarray indices_in, np.ndarray field_data): cdef int i, j, k, ind, idim cdef int ne = indices_in.shape[0] cdef int npatch = self.ppe*ne cdef unsigned int mesh = rtcgu.rtcNewUserGeometry(scene.scene_i, npatch) cdef np.ndarray[np.float64_t, ndim=2] element_vertices cdef Patch* patches = malloc(npatch * sizeof(Patch)) self.vertices = malloc(self.vpe * ne * 3 * sizeof(np.float64_t)) self.field_data = malloc(self.vpe * ne * sizeof(np.float64_t)) for i in range(ne): element_vertices = vertices_in[indices_in[i]] for j in range(self.vpe): self.field_data[i*self.vpe + j] = field_data[i][j] for k in range(3): self.vertices[i*self.vpe*3 + j*3 + k] = element_vertices[j][k] cdef Patch* patch for i in range(ne): # for each element element_vertices = vertices_in[indices_in[i]] for j in range(self.ppe): # for each face patch = &(patches[i*self.ppe+j]) patch.geomID = mesh for k in range(self.vpf): # for each vertex ind = hex20_faces[j][k] for idim in range(3): # for each spatial dimension (yikes) patch.v[k][idim] = element_vertices[ind][idim] patch.vertices = self.vertices + i*self.vpe*3 patch.field_data = self.field_data + i*self.vpe self.patches = patches self.mesh = mesh rtcg.rtcSetUserData(scene.scene_i, self.mesh, patches) rtcgu.rtcSetBoundsFunction(scene.scene_i, self.mesh, patchBoundsFunc) rtcgu.rtcSetIntersectFunction(scene.scene_i, self.mesh, patchIntersectFunc) cdef void _build_from_indices_tet10(self, YTEmbreeScene scene, np.ndarray vertices_in, np.ndarray indices_in, np.ndarray field_data): cdef int i, j, k, ind, idim cdef int ne = indices_in.shape[0] cdef int npatch = self.ppe*ne cdef unsigned int mesh = rtcgu.rtcNewUserGeometry(scene.scene_i, npatch) cdef np.ndarray[np.float64_t, ndim=2] element_vertices cdef Tet_Patch* patches = malloc(npatch * sizeof(Tet_Patch)) self.vertices = malloc(self.vpe * ne * 3 * sizeof(np.float64_t)) self.field_data = malloc(self.vpe * ne * sizeof(np.float64_t)) for i in range(ne): element_vertices = vertices_in[indices_in[i]] for j in range(self.vpe): self.field_data[i*self.vpe + j] = field_data[i][j] for k in range(3): self.vertices[i*self.vpe*3 + j*3 + k] = element_vertices[j][k] cdef Tet_Patch* patch for i in range(ne): # for each element element_vertices = vertices_in[indices_in[i]] for j in range(self.ppe): # for each face patch = &(patches[i*self.ppe+j]) patch.geomID = mesh for k in range(self.vpf): # for each vertex ind = tet10_faces[j][k] for idim in range(3): # for each spatial dimension (yikes) patch.v[k][idim] = element_vertices[ind][idim] 
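# Each Tet_Patch stores the six control points of one curved face in
# patch.v, plus pointers back into this element's vertex and field-value
# buffers; the intersection filter later uses these to map a hit point
# into unit coordinates and sample the solution there.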
patch.vertices = self.vertices + i*self.vpe*3 patch.field_data = self.field_data + i*self.vpe self.patches = patches self.mesh = mesh rtcg.rtcSetUserData(scene.scene_i, self.mesh, patches) rtcgu.rtcSetBoundsFunction(scene.scene_i, self.mesh, tet_patchBoundsFunc) rtcgu.rtcSetIntersectFunction(scene.scene_i, self.mesh, tet_patchIntersectFunc) def __dealloc__(self): free(self.patches) free(self.vertices) free(self.field_data) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/embree_mesh/mesh_intersection.pxd0000644000175100001770000000143414714401662023516 0ustar00runnerdockercimport cython cimport pyembree.rtcore as rtc cimport pyembree.rtcore_geometry as rtcg cimport pyembree.rtcore_ray as rtcr from .mesh_construction cimport Patch, Tet_Patch cdef void patchIntersectFunc(Patch* patches, rtcr.RTCRay& ray, size_t item) noexcept nogil cdef void patchBoundsFunc(Patch* patches, size_t item, rtcg.RTCBounds* bounds_o) noexcept nogil cdef void tet_patchIntersectFunc(Tet_Patch* tet_patches, rtcr.RTCRay& ray, size_t item) noexcept nogil cdef void tet_patchBoundsFunc(Tet_Patch* tet_patches, size_t item, rtcg.RTCBounds* bounds_o) noexcept nogil ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/embree_mesh/mesh_intersection.pyx0000644000175100001770000001004414714401662023540 0ustar00runnerdocker# distutils: include_dirs = EMBREE_INC_DIR # distutils: library_dirs = EMBREE_LIB_DIR # distutils: libraries = EMBREE_LIBS # distutils: language = c++ """ This file contains functions used for performing ray-tracing with Embree for 2nd-order Lagrange Elements. Note - this file is only used for the Embree-accelerated ray-tracer. """ cimport cython cimport pyembree.rtcore_geometry as rtcg cimport pyembree.rtcore_ray as rtcr from libc.math cimport fabs, fmax, fmin from yt.utilities.lib.primitives cimport ( RayHitData, compute_patch_hit, compute_tet_patch_hit, ) from .mesh_samplers cimport sample_hex20, sample_tet10 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void patchBoundsFunc(Patch* patches, size_t item, rtcg.RTCBounds* bounds_o) noexcept nogil: cdef Patch patch = patches[item] cdef float lo_x = 1.0e300 cdef float lo_y = 1.0e300 cdef float lo_z = 1.0e300 cdef float hi_x = -1.0e300 cdef float hi_y = -1.0e300 cdef float hi_z = -1.0e300 cdef int i for i in range(8): lo_x = fmin(lo_x, patch.v[i][0]) lo_y = fmin(lo_y, patch.v[i][1]) lo_z = fmin(lo_z, patch.v[i][2]) hi_x = fmax(hi_x, patch.v[i][0]) hi_y = fmax(hi_y, patch.v[i][1]) hi_z = fmax(hi_z, patch.v[i][2]) bounds_o.lower_x = lo_x bounds_o.lower_y = lo_y bounds_o.lower_z = lo_z bounds_o.upper_x = hi_x bounds_o.upper_y = hi_y bounds_o.upper_z = hi_z @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void patchIntersectFunc(Patch* patches, rtcr.RTCRay& ray, size_t item) noexcept nogil: cdef Patch patch = patches[item] cdef RayHitData hd = compute_patch_hit(patch.v, ray.org, ray.dir) # only count this is it's the closest hit if (hd.t < ray.tnear or hd.t > ray.Ng[0]): return if (fabs(hd.u) <= 1.0 and fabs(hd.v) <= 1.0 and hd.converged): # we have a hit, so update ray information ray.u = hd.u ray.v = hd.v ray.geomID = patch.geomID ray.primID = item ray.Ng[0] = hd.t # sample the solution at the calculated point sample_hex20(patches, ray) return @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void tet_patchBoundsFunc(Tet_Patch* 
tet_patches, size_t item, rtcg.RTCBounds* bounds_o) noexcept nogil: cdef Tet_Patch tet_patch = tet_patches[item] cdef float lo_x = 1.0e300 cdef float lo_y = 1.0e300 cdef float lo_z = 1.0e300 cdef float hi_x = -1.0e300 cdef float hi_y = -1.0e300 cdef float hi_z = -1.0e300 cdef int i for i in range(6): lo_x = fmin(lo_x, tet_patch.v[i][0]) lo_y = fmin(lo_y, tet_patch.v[i][1]) lo_z = fmin(lo_z, tet_patch.v[i][2]) hi_x = fmax(hi_x, tet_patch.v[i][0]) hi_y = fmax(hi_y, tet_patch.v[i][1]) hi_z = fmax(hi_z, tet_patch.v[i][2]) bounds_o.lower_x = lo_x bounds_o.lower_y = lo_y bounds_o.lower_z = lo_z bounds_o.upper_x = hi_x bounds_o.upper_y = hi_y bounds_o.upper_z = hi_z @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void tet_patchIntersectFunc(Tet_Patch* tet_patches, rtcr.RTCRay& ray, size_t item) noexcept nogil: cdef Tet_Patch tet_patch = tet_patches[item] cdef RayHitData hd = compute_tet_patch_hit(tet_patch.v, ray.org, ray.dir) # only count this is it's the closest hit if (hd.t < ray.tnear or hd.t > ray.Ng[0]): return if (hd.converged and 0 <= hd.u and 0 <= hd.v and hd.u + hd.v <= 1): # we have a hit, so update ray information ray.u = hd.u ray.v = hd.v ray.geomID = tet_patch.geomID ray.primID = item ray.Ng[0] = hd.t # sample the solution at the calculated point sample_tet10(tet_patches, ray) return ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/embree_mesh/mesh_samplers.pxd0000644000175100001770000000105214714401662022632 0ustar00runnerdockercimport cython cimport pyembree.rtcore as rtc cimport pyembree.rtcore_ray as rtcr cdef void sample_hex(void* userPtr, rtcr.RTCRay& ray) noexcept nogil cdef void sample_wedge(void* userPtr, rtcr.RTCRay& ray) noexcept nogil cdef void sample_tetra(void* userPtr, rtcr.RTCRay& ray) noexcept nogil cdef void sample_hex20(void* userPtr, rtcr.RTCRay& ray) noexcept nogil cdef void sample_tet10(void* userPtr, rtcr.RTCRay& ray) noexcept nogil ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/embree_mesh/mesh_samplers.pyx0000644000175100001770000002166514714401662022673 0ustar00runnerdocker# distutils: include_dirs = EMBREE_INC_DIR # distutils: library_dirs = EMBREE_LIB_DIR # distutils: libraries = EMBREE_LIBS # distutils: language = c++ """ This file contains functions that sample a surface mesh at the point hit by a ray. These can be used with pyembree in the form of "filter feedback functions." Note - this file is only used for the Embree-accelerated ray-tracer. 
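Each sampler recovers the element (or patch) hit by the ray, maps the
physical hit position into the element's unit/mapped coordinates, and
evaluates the finite element solution there. The sampled field value is
passed back through ray.time, and proximity to an element boundary
(used to annotate mesh lines) is passed back through ray.instID.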
""" cimport cython cimport pyembree.rtcore_ray as rtcr from pyembree.rtcore cimport Triangle from yt.utilities.lib.element_mappings cimport ( ElementSampler, P1Sampler3D, Q1Sampler3D, S2Sampler3D, Tet2Sampler3D, W1Sampler3D, ) from yt.utilities.lib.primitives cimport patchSurfaceFunc, tet_patchSurfaceFunc from .mesh_construction cimport MeshDataContainer, Patch, Tet_Patch cdef ElementSampler Q1Sampler = Q1Sampler3D() cdef ElementSampler P1Sampler = P1Sampler3D() cdef ElementSampler S2Sampler = S2Sampler3D() cdef ElementSampler W1Sampler = W1Sampler3D() cdef ElementSampler Tet2Sampler = Tet2Sampler3D() @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void get_hit_position(double* position, void* userPtr, rtcr.RTCRay& ray) noexcept nogil: cdef int primID, i cdef double[3][3] vertex_positions cdef Triangle tri cdef MeshDataContainer* data primID = ray.primID data = userPtr tri = data.indices[primID] vertex_positions[0][0] = data.vertices[tri.v0].x vertex_positions[0][1] = data.vertices[tri.v0].y vertex_positions[0][2] = data.vertices[tri.v0].z vertex_positions[1][0] = data.vertices[tri.v1].x vertex_positions[1][1] = data.vertices[tri.v1].y vertex_positions[1][2] = data.vertices[tri.v1].z vertex_positions[2][0] = data.vertices[tri.v2].x vertex_positions[2][1] = data.vertices[tri.v2].y vertex_positions[2][2] = data.vertices[tri.v2].z for i in range(3): position[i] = vertex_positions[0][i]*(1.0 - ray.u - ray.v) + \ vertex_positions[1][i]*ray.u + \ vertex_positions[2][i]*ray.v @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void sample_hex(void* userPtr, rtcr.RTCRay& ray) noexcept nogil: cdef int ray_id, elem_id, i cdef double val cdef double[8] field_data cdef int[8] element_indices cdef double[24] vertices cdef double[3] position cdef MeshDataContainer* data data = userPtr ray_id = ray.primID if ray_id == -1: return # ray_id records the id number of the hit according to # embree, in which the primitives are triangles. Here, # we convert this to the element id by dividing by the # number of triangles per element. elem_id = ray_id / data.tpe get_hit_position(position, userPtr, ray) for i in range(8): element_indices[i] = data.element_indices[elem_id*8+i] for i in range(data.fpe): field_data[i] = data.field_data[elem_id*data.fpe+i] for i in range(8): vertices[i*3] = data.vertices[element_indices[i]].x vertices[i*3 + 1] = data.vertices[element_indices[i]].y vertices[i*3 + 2] = data.vertices[element_indices[i]].z # we use ray.time to pass the value of the field cdef double mapped_coord[3] Q1Sampler.map_real_to_unit(mapped_coord, vertices, position) if data.fpe == 1: val = field_data[0] else: val = Q1Sampler.sample_at_unit_point(mapped_coord, field_data) ray.time = val # we use ray.instID to pass back whether the ray is near the # element boundary or not (used to annotate mesh lines) ray.instID = Q1Sampler.check_mesh_lines(mapped_coord) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void sample_wedge(void* userPtr, rtcr.RTCRay& ray) noexcept nogil: cdef int ray_id, elem_id, i cdef double val cdef double[6] field_data cdef int[6] element_indices cdef double[18] vertices cdef double[3] position cdef MeshDataContainer* data data = userPtr ray_id = ray.primID if ray_id == -1: return # ray_id records the id number of the hit according to # embree, in which the primitives are triangles. Here, # we convert this to the element id by dividing by the # number of triangles per element. 
elem_id = ray_id / data.tpe get_hit_position(position, userPtr, ray) for i in range(6): element_indices[i] = data.element_indices[elem_id*6+i] for i in range(data.fpe): field_data[i] = data.field_data[elem_id*data.fpe+i] for i in range(6): vertices[i*3] = data.vertices[element_indices[i]].x vertices[i*3 + 1] = data.vertices[element_indices[i]].y vertices[i*3 + 2] = data.vertices[element_indices[i]].z # we use ray.time to pass the value of the field cdef double mapped_coord[3] W1Sampler.map_real_to_unit(mapped_coord, vertices, position) if data.fpe == 1: val = field_data[0] else: val = W1Sampler.sample_at_unit_point(mapped_coord, field_data) ray.time = val ray.instID = W1Sampler.check_mesh_lines(mapped_coord) @cython.boundscheck(False) @cython.wraparound(False) @cython.initializedcheck(False) @cython.cdivision(True) cdef void sample_hex20(void* userPtr, rtcr.RTCRay& ray) noexcept nogil: cdef int ray_id, i cdef double val cdef double[3] position cdef float[3] pos cdef Patch* data data = userPtr ray_id = ray.primID if ray_id == -1: return cdef Patch patch = data[ray_id] # ray_id records the id number of the hit according to # embree, in which the primitives are patches. Here, # we convert this to the element id by dividing by the # number of patches per element. # fills "position" with the physical position of the hit patchSurfaceFunc(data[ray_id].v, ray.u, ray.v, pos) for i in range(3): position[i] = pos[i] # we use ray.time to pass the value of the field cdef double mapped_coord[3] S2Sampler.map_real_to_unit(mapped_coord, patch.vertices, position) val = S2Sampler.sample_at_unit_point(mapped_coord, patch.field_data) ray.time = val ray.instID = S2Sampler.check_mesh_lines(mapped_coord) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void sample_tetra(void* userPtr, rtcr.RTCRay& ray) noexcept nogil: cdef int ray_id, elem_id, i cdef double val cdef double[4] field_data cdef int[4] element_indices cdef double[12] vertices cdef double[3] position cdef MeshDataContainer* data data = userPtr ray_id = ray.primID if ray_id == -1: return get_hit_position(position, userPtr, ray) # ray_id records the id number of the hit according to # embree, in which the primitives are triangles. Here, # we convert this to the element id by dividing by the # number of triangles per element. elem_id = ray_id / data.tpe for i in range(4): element_indices[i] = data.element_indices[elem_id*4+i] vertices[i*3] = data.vertices[element_indices[i]].x vertices[i*3 + 1] = data.vertices[element_indices[i]].y vertices[i*3 + 2] = data.vertices[element_indices[i]].z for i in range(data.fpe): field_data[i] = data.field_data[elem_id*data.fpe+i] # we use ray.time to pass the value of the field cdef double mapped_coord[4] P1Sampler.map_real_to_unit(mapped_coord, vertices, position) if data.fpe == 1: val = field_data[0] else: val = P1Sampler.sample_at_unit_point(mapped_coord, field_data) ray.time = val ray.instID = P1Sampler.check_mesh_lines(mapped_coord) @cython.boundscheck(False) @cython.wraparound(False) @cython.initializedcheck(False) @cython.cdivision(True) cdef void sample_tet10(void* userPtr, rtcr.RTCRay& ray) noexcept nogil: cdef int ray_id, i cdef double val cdef double[3] position cdef float[3] pos cdef Tet_Patch* data data = userPtr ray_id = ray.primID if ray_id == -1: return cdef Tet_Patch tet_patch = data[ray_id] # ray_id records the id number of the hit according to # embree, in which the primitives are patches. 
Here, # we convert this to the element id by dividing by the # number of patches per element. # fills "position" with the physical position of the hit tet_patchSurfaceFunc(data[ray_id].v, ray.u, ray.v, pos) for i in range(3): position[i] = pos[i] # we use ray.time to pass the value of the field cdef double mapped_coord[3] Tet2Sampler.map_real_to_unit(mapped_coord, tet_patch.vertices, position) val = Tet2Sampler.sample_at_unit_point(mapped_coord, tet_patch.field_data) ray.time = val ray.instID = Tet2Sampler.check_mesh_lines(mapped_coord) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/embree_mesh/mesh_traversal.pxd0000644000175100001770000000022514714401662023010 0ustar00runnerdockercimport pyembree.rtcore cimport pyembree.rtcore_ray cimport pyembree.rtcore_scene as rtcs cdef class YTEmbreeScene: cdef rtcs.RTCScene scene_i ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/embree_mesh/mesh_traversal.pyx0000644000175100001770000000532114714401662023037 0ustar00runnerdocker# distutils: include_dirs = EMBREE_INC_DIR # distutils: library_dirs = EMBREE_LIB_DIR # distutils: libraries = EMBREE_LIBS # distutils: language = c++ """ This file contains the MeshSampler classes, which handles casting rays at a mesh source using either pyembree or the cython ray caster. """ cimport cython cimport numpy as np import numpy as np cimport pyembree.rtcore as rtc cimport pyembree.rtcore_geometry as rtcg cimport pyembree.rtcore_ray as rtcr cimport pyembree.rtcore_scene as rtcs from libc.stdlib cimport free, malloc from yt.utilities.lib.image_samplers cimport ImageSampler rtc.rtcInit(NULL) rtc.rtcSetErrorFunction(error_printer) cdef void error_printer(const rtc.RTCError code, const char *_str): print("ERROR CAUGHT IN EMBREE") rtc.print_error(code) print("ERROR MESSAGE:", _str) cdef class YTEmbreeScene: def __init__(self): self.scene_i = rtcs.rtcNewScene(rtcs.RTC_SCENE_STATIC, rtcs.RTC_INTERSECT1) def __dealloc__(self): rtcs.rtcDeleteScene(self.scene_i) cdef class EmbreeMeshSampler(ImageSampler): @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def __call__(self, YTEmbreeScene scene, int num_threads = 0): ''' This function is supposed to cast the rays and return the image. 
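For each of the nx * ny pixels, a ray origin and direction are built by
self.vector_function, the ray is traced through the committed Embree
scene, and the results are read back into the image buffers: the sampled
field value from ray.time, the primitive id from ray.primID, the
mesh-line flag from ray.instID, and the hit distance from ray.tfar.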
''' rtcs.rtcCommit(scene.scene_i) cdef int vi, vj, i, j cdef np.float64_t *v_pos cdef np.float64_t *v_dir cdef np.int64_t nx, ny, size cdef np.float64_t width[3] for i in range(3): width[i] = self.width[i] nx = self.nv[0] ny = self.nv[1] size = nx * ny cdef rtcr.RTCRay ray v_pos = malloc(3 * sizeof(np.float64_t)) v_dir = malloc(3 * sizeof(np.float64_t)) for j in range(size): vj = j % ny vi = (j - vj) / ny vj = vj self.vector_function(self, vi, vj, width, v_dir, v_pos) for i in range(3): ray.org[i] = v_pos[i] ray.dir[i] = v_dir[i] ray.tnear = 0.0 ray.tfar = 1e37 ray.geomID = rtcg.RTC_INVALID_GEOMETRY_ID ray.primID = rtcg.RTC_INVALID_GEOMETRY_ID ray.instID = rtcg.RTC_INVALID_GEOMETRY_ID ray.mask = -1 ray.time = 0 ray.Ng[0] = 1e37 # we use this to track the hit distance rtcs.rtcIntersect(scene.scene_i, ray) self.image[vi, vj, 0] = ray.time self.image_used[vi, vj] = ray.primID self.mesh_lines[vi, vj] = ray.instID self.zbuffer[vi, vj] = ray.tfar free(v_pos) free(v_dir) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/endian_swap.h0000644000175100001770000000110014714401662017433 0ustar00runnerdocker/* These macros are taken from Paul Bourke's page at http://local.wasp.uwa.edu.au/~pbourke/dataformats/fortran/ The current year is 2010, and evidently we still have to deal with endianness. */ #define SWAP_2(x) ( (((x) & 0xff) << 8) | ((unsigned short)(x) >> 8) ) #define SWAP_4(x) ( ((x) << 24) | (((x) << 8) & 0x00ff0000) | \ (((x) >> 8) & 0x0000ff00) | ((x) >> 24) ) #define FIX_SHORT(x) (*(unsigned short *)&(x) = SWAP_2(*(unsigned short *)&(x))) #define FIX_LONG(x) (*(unsigned *)&(x) = SWAP_4(*(unsigned *)&(x))) #define FIX_FLOAT(x) FIX_LONG(x) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/field_interpolation_tables.pxd0000644000175100001770000001131714714401662023106 0ustar00runnerdocker# distutils: language = c++ # distutils: extra_compile_args = CPP14_FLAG # distutils: extra_link_args = CPP14_FLAG """ Field Interpolation Tables """ cimport cython cimport numpy as np from libc.stdlib cimport malloc from yt.utilities.lib.fp_utils cimport fabs, fclip, fmax, fmin, iclip, imax, imin cdef extern from "" namespace "std": bint isnormal(double x) noexcept nogil cdef extern from "platform_dep_math.hpp": bint __isnormal(double) noexcept nogil cdef struct FieldInterpolationTable: # Note that we make an assumption about retaining a reference to values # externally. 
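# The table implements a uniform piecewise-linear lookup: for a sample x,
# the bin is b = floor((x - bounds[0]) * idbin), and the interpolated
# value is values[b] + (x - d0[b]) * dy[b], where d0[b] is the left edge
# of the bin and dy[b] is the precomputed slope across it (see
# FIT_get_value below).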
np.float64_t *values np.float64_t bounds[2] np.float64_t dbin np.float64_t idbin np.float64_t *d0 np.float64_t *dy int field_id int weight_field_id int weight_table_id int nbins @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef inline void FIT_initialize_table(FieldInterpolationTable *fit, int nbins, np.float64_t *values, np.float64_t bounds1, np.float64_t bounds2, int field_id, int weight_field_id, int weight_table_id) noexcept nogil: cdef int i fit.bounds[0] = bounds1; fit.bounds[1] = bounds2 fit.nbins = nbins fit.dbin = (fit.bounds[1] - fit.bounds[0])/(fit.nbins-1) fit.idbin = 1.0/fit.dbin # Better not pull this out from under us, yo fit.values = values fit.d0 = malloc(sizeof(np.float64_t) * nbins) fit.dy = malloc(sizeof(np.float64_t) * nbins) for i in range(nbins-1): fit.d0[i] = fit.bounds[0] + i * fit.dbin fit.dy[i] = (fit.values[i + 1] - fit.values[i]) * fit.idbin fit.field_id = field_id fit.weight_field_id = weight_field_id fit.weight_table_id = weight_table_id @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef inline np.float64_t FIT_get_value(const FieldInterpolationTable *fit, np.float64_t dvs[6]) noexcept nogil: cdef np.float64_t dd, dout cdef int bin_id if dvs[fit.field_id] >= fit.bounds[1] or dvs[fit.field_id] <= fit.bounds[0]: return 0.0 if not __isnormal(dvs[fit.field_id]): return 0.0 bin_id = ((dvs[fit.field_id] - fit.bounds[0]) * fit.idbin) bin_id = iclip(bin_id, 0, fit.nbins-2) dd = dvs[fit.field_id] - fit.d0[bin_id] # x - x0 dout = fit.values[bin_id] + dd * fit.dy[bin_id] cdef int wfi = fit.weight_field_id if wfi != -1: dout *= dvs[wfi] return dout @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef inline void FIT_eval_transfer( const np.float64_t dt, np.float64_t *dvs, np.float64_t *rgba, const int n_fits, const FieldInterpolationTable fits[6], const int field_table_ids[6], const int grey_opacity) noexcept nogil: cdef int i, fid cdef np.float64_t ta cdef np.float64_t istorage[6] cdef np.float64_t trgba[6] for i in range(n_fits): istorage[i] = FIT_get_value(&fits[i], dvs) for i in range(n_fits): fid = fits[i].weight_table_id if fid != -1: istorage[i] *= istorage[fid] for i in range(6): trgba[i] = istorage[field_table_ids[i]] if grey_opacity == 1: ta = fmax(1.0 - dt*trgba[3], 0.0) for i in range(4): rgba[i] = dt*trgba[i] + ta*rgba[i] else: for i in range(3): ta = fmax(1.0-dt*trgba[i], 0.0) rgba[i] = dt*trgba[i] + ta*rgba[i] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef inline void FIT_eval_transfer_with_light(np.float64_t dt, np.float64_t *dvs, np.float64_t *grad, np.float64_t *l_dir, np.float64_t *l_rgba, np.float64_t *rgba, int n_fits, FieldInterpolationTable fits[6], int field_table_ids[6], int grey_opacity) noexcept nogil: cdef int i, fid cdef np.float64_t ta, dot_prod cdef np.float64_t istorage[6] cdef np.float64_t trgba[6] dot_prod = 0.0 for i in range(3): dot_prod += l_dir[i]*grad[i] #dot_prod = fmax(0.0, dot_prod) for i in range(6): istorage[i] = 0.0 for i in range(n_fits): istorage[i] = FIT_get_value(&fits[i], dvs) for i in range(n_fits): fid = fits[i].weight_table_id if fid != -1: istorage[i] *= istorage[fid] for i in range(6): trgba[i] = istorage[field_table_ids[i]] if grey_opacity == 1: ta = fmax(1.0-dt*(trgba[0] + trgba[1] + trgba[2]), 0.0) for i in range(3): rgba[i] = (1.-ta)*trgba[i]*(1. + dot_prod*l_rgba[i]) + ta * rgba[i] else: for i in range(3): ta = fmax(1.0-dt*trgba[i], 0.0) rgba[i] = (1.-ta)*trgba[i]*(1. 
+ dot_prod*l_rgba[i]) + ta * rgba[i] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/fixed_interpolator.cpp0000644000175100001770000001116414714401662021412 0ustar00runnerdocker/******************************************************************************* *******************************************************************************/ // // A small, tiny, itty bitty module for computation-intensive interpolation // that I can't seem to make fast in Cython // #include "fixed_interpolator.hpp" #define VINDEX(A,B,C) data[((((A)+ci[0])*(ds[1]+1)+((B)+ci[1]))*(ds[2]+1)+ci[2]+(C))] // (((C*ds[1])+B)*ds[0]+A) #define OINDEX(A,B,C) data[(A)*(ds[1]+1)*(ds[2]+1)+(B)*ds[2]+(B)+(C)] npy_float64 fast_interpolate(int ds[3], int ci[3], npy_float64 dp[3], npy_float64 *data) { int i; npy_float64 dv, dm[3]; for(i=0;i<3;i++)dm[i] = (1.0 - dp[i]); dv = 0.0; dv += VINDEX(0,0,0) * (dm[0]*dm[1]*dm[2]); dv += VINDEX(0,0,1) * (dm[0]*dm[1]*dp[2]); dv += VINDEX(0,1,0) * (dm[0]*dp[1]*dm[2]); dv += VINDEX(0,1,1) * (dm[0]*dp[1]*dp[2]); dv += VINDEX(1,0,0) * (dp[0]*dm[1]*dm[2]); dv += VINDEX(1,0,1) * (dp[0]*dm[1]*dp[2]); dv += VINDEX(1,1,0) * (dp[0]*dp[1]*dm[2]); dv += VINDEX(1,1,1) * (dp[0]*dp[1]*dp[2]); /*assert(dv < -20);*/ return dv; } npy_float64 offset_interpolate(int ds[3], npy_float64 dp[3], npy_float64 *data) { npy_float64 dv, vz[4]; dv = 1.0 - dp[2]; vz[0] = dv*OINDEX(0,0,0) + dp[2]*OINDEX(0,0,1); vz[1] = dv*OINDEX(0,1,0) + dp[2]*OINDEX(0,1,1); vz[2] = dv*OINDEX(1,0,0) + dp[2]*OINDEX(1,0,1); vz[3] = dv*OINDEX(1,1,0) + dp[2]*OINDEX(1,1,1); dv = 1.0 - dp[1]; vz[0] = dv*vz[0] + dp[1]*vz[1]; vz[1] = dv*vz[2] + dp[1]*vz[3]; dv = 1.0 - dp[0]; vz[0] = dv*vz[0] + dp[0]*vz[1]; return vz[0]; } void offset_fill(int ds[3], npy_float64 *data, npy_float64 gridval[8]) { gridval[0] = OINDEX(0,0,0); gridval[1] = OINDEX(1,0,0); gridval[2] = OINDEX(1,1,0); gridval[3] = OINDEX(0,1,0); gridval[4] = OINDEX(0,0,1); gridval[5] = OINDEX(1,0,1); gridval[6] = OINDEX(1,1,1); gridval[7] = OINDEX(0,1,1); } void vertex_interp(npy_float64 v1, npy_float64 v2, npy_float64 isovalue, npy_float64 vl[3], npy_float64 dds[3], npy_float64 x, npy_float64 y, npy_float64 z, int vind1, int vind2) { /*if (fabs(isovalue - v1) < 0.000001) return 0.0; if (fabs(isovalue - v2) < 0.000001) return 1.0; if (fabs(v1 - v2) < 0.000001) return 0.0;*/ int i; static npy_float64 cverts[8][3] = {{0,0,0}, {1,0,0}, {1,1,0}, {0,1,0}, {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1}}; npy_float64 mu = ((isovalue - v1) / (v2 - v1)); if (fabs(1.0 - isovalue/v1) < 0.000001) mu = 0.0; if (fabs(1.0 - isovalue/v2) < 0.000001) mu = 1.0; if (fabs(v1/v2) < 0.000001) mu = 0.0; vl[0] = x; vl[1] = y; vl[2] = z; for (i=0;i<3;i++) vl[i] += dds[i] * cverts[vind1][i] + dds[i] * mu*(cverts[vind2][i] - cverts[vind1][i]); } npy_float64 trilinear_interpolate(int ds[3], int ci[3], npy_float64 dp[3], npy_float64 *data) { /* dims is one less than the dimensions of the array */ int i; npy_float64 dm[3], vz[4]; //dp is the distance to the plane. 
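// Worked weight check (illustrative numbers, not from the source): for
// dp = {0.25, 0.5, 0.5} the successive z/y/x lerps below are equivalent to
// weighting each corner by a product such as dm[0]*dm[1]*dm[2] =
// 0.75*0.5*0.5 = 0.1875 for corner (0,0,0) and dp[0]*dp[1]*dp[2] = 0.0625
// for corner (1,1,1); the eight weights always sum to 1.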
dm is val, dp = 1-val for(i=0;i<3;i++)dm[i] = (1.0 - dp[i]); //First interpolate in z vz[0] = dm[2]*VINDEX(0,0,0) + dp[2]*VINDEX(0,0,1); vz[1] = dm[2]*VINDEX(0,1,0) + dp[2]*VINDEX(0,1,1); vz[2] = dm[2]*VINDEX(1,0,0) + dp[2]*VINDEX(1,0,1); vz[3] = dm[2]*VINDEX(1,1,0) + dp[2]*VINDEX(1,1,1); //Then in y vz[0] = dm[1]*vz[0] + dp[1]*vz[1]; vz[1] = dm[1]*vz[2] + dp[1]*vz[3]; //Then in x vz[0] = dm[0]*vz[0] + dp[0]*vz[1]; /*assert(dv < -20);*/ return vz[0]; } void eval_gradient(int ds[3], npy_float64 dp[3], npy_float64 *data, npy_float64 *grad) { // We just take some small value int i; npy_float64 denom, plus, minus, backup, normval; normval = 0.0; for (i = 0; i < 3; i++) { backup = dp[i]; grad[i] = 0.0; if (dp[i] >= 0.95) {plus = dp[i]; minus = dp[i] - 0.05;} else if (dp[i] <= 0.05) {plus = dp[i] + 0.05; minus = 0.0;} else {plus = dp[i] + 0.05; minus = dp[i] - 0.05;} //fprintf(stderr, "DIM: %d %0.3lf %0.3lf\n", i, plus, minus); denom = plus - minus; dp[i] = plus; grad[i] += offset_interpolate(ds, dp, data) / denom; dp[i] = minus; grad[i] -= offset_interpolate(ds, dp, data) / denom; dp[i] = backup; normval += grad[i]*grad[i]; } if (normval != 0.0){ normval = sqrt(normval); for (i = 0; i < 3; i++) grad[i] /= -normval; //fprintf(stderr, "Normval: %0.3lf %0.3lf %0.3lf %0.3lf\n", // normval, grad[0], grad[1], grad[2]); }else{ grad[0]=grad[1]=grad[2]=0.0; } } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/fixed_interpolator.hpp0000644000175100001770000000214014714401662021411 0ustar00runnerdocker/******************************************************************************* *******************************************************************************/ // // A small, tiny, itty bitty module for computation-intensive interpolation // that I can't seem to make fast in Cython // #include "Python.h" #include #include #include #include #include "numpy/ndarrayobject.h" npy_float64 fast_interpolate(int ds[3], int ci[3], npy_float64 dp[3], npy_float64 *data); npy_float64 offset_interpolate(int ds[3], npy_float64 dp[3], npy_float64 *data); npy_float64 trilinear_interpolate(int ds[3], int ci[3], npy_float64 dp[3], npy_float64 *data); void eval_gradient(int ds[3], npy_float64 dp[3], npy_float64 *data, npy_float64 *grad); void offset_fill(int *ds, npy_float64 *data, npy_float64 *gridval); void vertex_interp(npy_float64 v1, npy_float64 v2, npy_float64 isovalue, npy_float64 vl[3], npy_float64 dds[3], npy_float64 x, npy_float64 y, npy_float64 z, int vind1, int vind2); ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/fixed_interpolator.pxd0000644000175100001770000000202114714401662021413 0ustar00runnerdocker""" Fixed interpolator includes """ cimport numpy as np cdef extern from "fixed_interpolator.hpp": np.float64_t fast_interpolate(int ds[3], int ci[3], np.float64_t dp[3], np.float64_t *data) noexcept nogil np.float64_t offset_interpolate(int ds[3], np.float64_t dp[3], np.float64_t *data) noexcept nogil np.float64_t trilinear_interpolate(int ds[3], int ci[3], np.float64_t dp[3], np.float64_t *data) noexcept nogil void eval_gradient(int ds[3], np.float64_t dp[3], np.float64_t *data, np.float64_t grad[3]) noexcept nogil void offset_fill(int *ds, np.float64_t *data, np.float64_t *gridval) noexcept nogil void vertex_interp(np.float64_t v1, np.float64_t v2, np.float64_t isovalue, np.float64_t vl[3], np.float64_t dds[3], np.float64_t x, np.float64_t y, np.float64_t z, int 
vind1, int vind2) noexcept nogil

yt-4.4.0/yt/utilities/lib/fnv_hash.pxd

"""
Definitions for fnv_hash
"""

import numpy as np
cimport numpy as np

cdef np.int64_t c_fnv_hash(unsigned char[:] octets) noexcept nogil

yt-4.4.0/yt/utilities/lib/fnv_hash.pyx

# distutils: libraries = STD_LIBS
# distutils: include_dirs = LIB_DIR
"""
Fast hashing routines
"""

import numpy as np

cimport cython
cimport numpy as np

@cython.wraparound(False)
@cython.boundscheck(False)
cdef np.int64_t c_fnv_hash(unsigned char[:] octets) noexcept nogil:
    # https://bitbucket.org/yt_analysis/yt/issues/1052/field-access-tests-fail-under-python3
    # FNV hash cf. http://www.isthe.com/chongo/tech/comp/fnv/index.html
    cdef np.int64_t hash_val = 2166136261
    cdef int i
    for i in range(octets.shape[0]):
        hash_val = hash_val ^ octets[i]
        hash_val = hash_val * 16777619
    return hash_val

def fnv_hash(octets):
    """
    Create a FNV hash from a bytestring.
    Info: http://www.isthe.com/chongo/tech/comp/fnv/index.html

    Parameters
    ----------
    octets : bytestring
        The string of bytes to generate a hash from.
    """
    return c_fnv_hash(octets)

yt-4.4.0/yt/utilities/lib/fortran_reader.pyx

# distutils: libraries = STD_LIBS
"""
Simple readers for fortran unformatted data, specifically for the Tiger code.
"""

import numpy as np

cimport cython
cimport numpy as np
from libc.stdio cimport FILE, fclose, fopen

#cdef inline int imax(int i0, int i1):
    #if i0 > i1: return i0
    #return i1

cdef extern from "endian_swap.h":
    void FIX_SHORT( unsigned short )
    void FIX_LONG( unsigned )
    void FIX_FLOAT( float )

cdef extern from "platform_dep.h":
    void *alloca(size_t)

cdef extern from "stdio.h":
    cdef int SEEK_SET
    cdef int SEEK_CUR
    cdef int SEEK_END
    int fseek(FILE *stream, long offset, int whence)
    size_t fread(void *ptr, size_t size, size_t nmemb, FILE *stream)
    long ftell(FILE *stream)
    char *fgets(char *s, int size, FILE *stream)

@cython.boundscheck(False)
@cython.wraparound(False)
def read_and_seek(char *filename, np.int64_t offset1,
                  np.int64_t offset2, np.ndarray buffer, int bytes):
    cdef FILE *f = fopen(filename, "rb")
    cdef void *buf = buffer.data
    cdef char line[1024]
    cdef size_t n = 1023
    fseek(f, offset1, SEEK_SET)
    fgets(line, n, f)
    fseek(f, offset2, SEEK_CUR)
    fread(buf, 1, bytes, f)
    fclose(f)

def count_art_octs(char *fn, long offset, int min_level, int max_level,
                   int nhydro_vars, level_info):
    cdef int nchild = 8
    cdef int next_record = -1, nLevel = -1
    cdef int dummy_records[9]
    cdef int readin = -1
    cdef FILE *f = fopen(fn, "rb")
    fseek(f, offset, SEEK_SET)
    for _ in range(min_level + 1, max_level + 1):
        fread(dummy_records, sizeof(int), 2, f)
        fread(&nLevel, sizeof(int), 1, f); FIX_LONG(nLevel)
        print(level_info)
        level_info.append(nLevel)
        fread(dummy_records, sizeof(int), 2, f)
        fread(&next_record, sizeof(int), 1, f); FIX_LONG(next_record)
        print("Record size is:", next_record)
        # Offset for one record header we just read
        next_record = (nLevel * (next_record + 2*sizeof(int))) - sizeof(int)
        fseek(f, next_record, SEEK_CUR)
        # Now we skip the second section
        fread(&readin, sizeof(int), 1, f); FIX_LONG(readin)
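        # Framing sketch (standard Fortran unformatted records, big-endian):
        # each record is laid out as [int32 nbytes][payload][int32 nbytes],
        # which is why one record costs payload + 2*sizeof(int) bytes in the
        # seek above; FIX_LONG byte-swaps each header in place via the
        # SWAP_4 macro from endian_swap.h.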
nhydro_vars = next_record//4-2-3 #nhvar in daniel's code #record length is normally 2 pad bytes, 8 + 2 hvars (the 2 is nchem) # and then 3 vars, but we can find nhvars only here and not in other # file headers next_record = (2*sizeof(int) + readin) * (nLevel * nchild) next_record -= sizeof(int) fseek(f, next_record, SEEK_CUR) print("nhvars",nhydro_vars) fclose(f) def read_art_tree(char *fn, long offset, int min_level, int max_level, np.ndarray[np.int64_t, ndim=2] oct_indices, np.ndarray[np.int64_t, ndim=1] oct_levels, np.ndarray[np.int64_t, ndim=2] oct_info): # np.ndarray[np.int64_t, ndim=1] oct_mask, # np.ndarray[np.int64_t, ndim=1] oct_parents, # This accepts the filename of the ART header and an integer offset that # points to the start of the record *following* the reading of iOctFree and # nOct. For those following along at home, we only need to read: # iOctPr, iOctLv cdef int nchild = 8 cdef int iOct, nLevel, ic1 cdef np.int64_t next_record = -1 cdef long long child_record cdef int iOctPs[3] cdef np.int64_t dummy_records[9] cdef int readin = -1 cdef FILE *f = fopen(fn, "rb") fseek(f, offset, SEEK_SET) cdef int Level = -1 cdef int * iNOLL = alloca(sizeof(int)*(max_level-min_level+1)) cdef int * iHOLL = alloca(sizeof(int)*(max_level-min_level+1)) cdef int iOctMax = 0 level_offsets = [0] for _ in range(min_level + 1, max_level + 1): fread(&readin, sizeof(int), 1, f); FIX_LONG(readin) fread(&Level, sizeof(int), 1, f); FIX_LONG(Level) fread(&iNOLL[Level], sizeof(int), 1, f); FIX_LONG(iNOLL[Level]) fread(&iHOLL[Level], sizeof(int), 1, f); FIX_LONG(iHOLL[Level]) fread(&readin, sizeof(int), 1, f); FIX_LONG(readin) iOct = iHOLL[Level] - 1 nLevel = iNOLL[Level] #print("Reading Hierarchy for Level", Lev, Level, nLevel, iOct) #print(ftell(f)) for ic1 in range(nLevel): iOctMax = max(iOctMax, iOct) #print(readin, iOct, nLevel, sizeof(int)) next_record = ftell(f) fread(&readin, sizeof(int), 1, f); FIX_LONG(readin) assert readin==52 next_record += readin + sizeof(int) fread(iOctPs, sizeof(int), 3, f) FIX_LONG(iOctPs[0]); FIX_LONG(iOctPs[1]); FIX_LONG(iOctPs[2]) oct_indices[iOct, 0] = iOctPs[0] oct_indices[iOct, 1] = iOctPs[1] oct_indices[iOct, 2] = iOctPs[2] oct_info[iOct, 1] = ic1 #grid_info[iOct, 2] = iOctPr # we don't seem to need this fread(dummy_records, sizeof(int), 6, f) # skip Nb fread(&readin, sizeof(int), 1, f); FIX_LONG(readin) #oct_parents[iOct] = readin - 1 fread(&readin, sizeof(int), 1, f); FIX_LONG(readin) oct_levels[iOct] = readin fread(&iOct, sizeof(int), 1, f); FIX_LONG(iOct) iOct -= 1 assert next_record > 0 fseek(f, next_record, SEEK_SET) fread(&readin, sizeof(int), 1, f); FIX_LONG(readin) assert readin==52 level_offsets.append(ftell(f)) #skip over the hydro variables #find the length of one child section #print('measuring child record ',) fread(&next_record, sizeof(int), 1, f) #print(next_record,) FIX_LONG(next_record) #print(next_record) fseek(f,ftell(f)-sizeof(int),SEEK_SET) #rewind #This is a sloppy fix; next_record is 64bit #and I don't think FIX_LONG(next_record) is working #correctly for 64bits if next_record > 4294967296L: next_record -= 4294967296L assert next_record == 56 #find the length of all of the children section child_record = ftell(f) + (next_record+2*sizeof(int))*nLevel*nchild #print('Skipping over hydro vars', ftell(f), child_record) fseek(f, child_record, SEEK_SET) # for ic1 in range(nLevel * nchild): # fread(&next_record, sizeof(int), 1, f); FIX_LONG(next_record) # fread(&idc, sizeof(int), 1, f); FIX_LONG(idc); idc -= 1 + (128**3) # fread(&cm, sizeof(int), 1, 
f); FIX_LONG(cm)
    #     #if cm == 0: oct_mask[idc] = 1
    #     #else: total_masked += 1
    #     assert next_record > 0
    #     fseek(f, next_record - sizeof(int), SEEK_CUR)
    fclose(f)
    return level_offsets

def read_art_root_vars(char *fn, long root_grid_offset,
                       int nhydro_vars, int nx, int ny, int nz,
                       int ix, int iy, int iz, fields, var):
    cdef FILE *f = fopen(fn, "rb")
    cdef int j,l, cell_record_size = nhydro_vars * sizeof(float)
    cdef float temp = -1
    l=0
    fseek(f, root_grid_offset, SEEK_SET)
    # Now we seek out the cell we want
    cdef int my_offset = (((iz * ny) + iy) * nx + ix)
    #print(cell_record_size, my_offset, ftell(f))
    fseek(f, cell_record_size * my_offset, SEEK_CUR)
    #(((C)*GridDimension[1]+(B))*GridDimension[0]+A)
    for j in range(nhydro_vars):
        fread(&temp, sizeof(float), 1, f)
        if j in fields:
            FIX_FLOAT(temp)
            var[l]=temp
            l+=1
    fclose(f)

cdef void read_art_vars(FILE *f,
                        int min_level, int max_level, int nhydro_vars,
                        int grid_level, long grid_id, long child_offset,
                        fields, np.ndarray[np.int64_t, ndim=1] level_offsets,
                        var):
    # nhydro_vars is the number of columns- 3 (adjusting for vars)
    # this is normally 10=(8+2chem species)
    cdef int record_size = 2+1+1+nhydro_vars+2
    cdef float temp = -1.0
    cdef float varpad[2]
    cdef int new_padding = -1
    cdef int padding[3]
    cdef long offset = 8*grid_id*record_size*sizeof(float)
    fseek(f, level_offsets[grid_level] + offset, SEEK_SET)
    for j in range(8): #iterate over the children
        l = 0
        fread(padding, sizeof(int), 3, f); FIX_LONG(padding[0])
        #print("Record Size", padding[0])
        # This should be replaced by an fread of nhydro_vars length
        for k in range(nhydro_vars): #iterate over the record
            fread(&temp, sizeof(float), 1, f); FIX_FLOAT(temp)
            #print(k, temp)
            if k in fields:
                var[j,l] = temp
                l += 1
        fread(varpad, sizeof(float), 2, f)
        fread(&new_padding, sizeof(int), 1, f); FIX_LONG(new_padding)
        assert(padding[0] == new_padding)

@cython.cdivision(True)
@cython.boundscheck(False)
@cython.wraparound(False)
def read_art_grid(int varindex,
                  np.ndarray[np.int64_t, ndim=1] start_index,
                  np.ndarray[np.int32_t, ndim=1] grid_dims,
                  np.ndarray[np.float32_t, ndim=3] data,
                  np.ndarray[np.uint8_t, ndim=3] filled,
                  np.ndarray[np.float32_t, ndim=2] level_data,
                  int level, int ref_factor, component_grid_info):
    cdef int gi, i, j, k, grid_id
    cdef int ir, jr, kr
    cdef int offi, offj, offk, odind
    cdef np.int64_t di, dj, dk
    cdef np.ndarray[np.int64_t, ndim=1] ogrid_info
    cdef np.ndarray[np.int64_t, ndim=1] og_start_index
    cdef np.float64_t temp_data
    cdef np.int64_t end_index[3]
    cdef int kr_offset, jr_offset, ir_offset
    cdef int to_fill = 0
    # Note that indexing into a cell is:
    #   (k*2 + j)*2 + i
    for i in range(3):
        end_index[i] = start_index[i] + grid_dims[i]
    for gi in range(len(component_grid_info)):
        ogrid_info = component_grid_info[gi]
        grid_id = ogrid_info[1]
        og_start_index = ogrid_info[3:6] #the oct left edge
        for i in range(2*ref_factor):
            di = i + og_start_index[0] * ref_factor
            if di < start_index[0] or di >= end_index[0]: continue
            ir = (i / ref_factor)
            for j in range(2 * ref_factor):
                dj = j + og_start_index[1] * ref_factor
                if dj < start_index[1] or dj >= end_index[1]: continue
                jr = (j / ref_factor)
                for k in range(2 * ref_factor):
                    dk = k + og_start_index[2] * ref_factor
                    if dk < start_index[2] or dk >= end_index[2]: continue
                    kr = (k / ref_factor)
                    offi = di - start_index[0]
                    offj = dj - start_index[1]
                    offk = dk - start_index[2]
                    #print(offi, filled.shape[0],)
                    #print(offj, filled.shape[1],)
                    #print(offk, filled.shape[2])
                    if filled[offi, offj, offk] == 1: continue
                    if level > 0:
                        odind = (kr*2 + jr)*2 + ir
                        # Replace with an ART-specific
reader #temp_data = local_hydro_data.m_var_array[ # level][8*offset + odind] temp_data = level_data[varindex, 8*grid_id + odind] else: kr_offset = kr + (start_index[0] / ref_factor) jr_offset = jr + (start_index[1] / ref_factor) ir_offset = ir + (start_index[2] / ref_factor) odind = (kr_offset * grid_dims[0] + jr_offset)*grid_dims[1] + ir_offset temp_data = level_data[varindex, odind] data[offi, offj, offk] = temp_data filled[offi, offj, offk] = 1 to_fill += 1 return to_fill @cython.cdivision(True) @cython.boundscheck(True) @cython.wraparound(False) def fill_child_mask(np.ndarray[np.int64_t, ndim=2] file_locations, np.ndarray[np.int64_t, ndim=1] grid_le, np.ndarray[np.uint8_t, ndim=4] art_child_masks, np.ndarray[np.uint8_t, ndim=3] child_mask): #loop over file_locations, for each row extracting the index & LE #of the oct we will pull pull from art_child_masks #then use the art_child_masks info to fill in child_mask cdef int i,ioct,x,y,z cdef int nocts = file_locations.shape[0] cdef int lex,ley,lez for i in range(nocts): ioct = file_locations[i,1] #from fortran to python indexing? lex = file_locations[i,3] - grid_le[0] #the oct left edge x ley = file_locations[i,4] - grid_le[1] lez = file_locations[i,5] - grid_le[2] for x in range(2): for y in range(2): for z in range(2): child_mask[lex+x,ley+y,lez+z] = art_child_masks[ioct,x,y,z] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/fp_utils.pxd0000644000175100001770000000311414714401662017343 0ustar00runnerdocker""" Shareable definitions for common fp/int Cython utilities """ cimport cython cimport numpy as np cdef inline np.int64_t imax(np.int64_t i0, np.int64_t i1) noexcept nogil: if i0 > i1: return i0 return i1 cdef inline np.float64_t fmax(np.float64_t f0, np.float64_t f1) noexcept nogil: if f0 > f1: return f0 return f1 cdef inline np.int64_t imin(np.int64_t i0, np.int64_t i1) noexcept nogil: if i0 < i1: return i0 return i1 cdef inline np.float64_t fmin(np.float64_t f0, np.float64_t f1) noexcept nogil: if f0 < f1: return f0 return f1 cdef inline np.float64_t fabs(np.float64_t f0) noexcept nogil: if f0 < 0.0: return -f0 return f0 cdef inline np.int64_t iclip(np.int64_t i, np.int64_t a, np.int64_t b) noexcept nogil: if i < a: return a if i > b: return b return i cdef inline np.int64_t i64clip(np.int64_t i, np.int64_t a, np.int64_t b) noexcept nogil: if i < a: return a if i > b: return b return i cdef inline np.float64_t fclip(np.float64_t f, np.float64_t a, np.float64_t b) noexcept nogil: return fmin(fmax(f, a), b) cdef inline np.int64_t i64max(np.int64_t i0, np.int64_t i1) noexcept nogil: if i0 > i1: return i0 return i1 cdef inline np.int64_t i64min(np.int64_t i0, np.int64_t i1) noexcept nogil: if i0 < i1: return i0 return i1 cdef inline _ensure_code(arr): if hasattr(arr, "units"): if "code_length" == str(arr.units): return arr arr.convert_to_units("code_length") return arr ctypedef fused any_float: np.float32_t np.float64_t ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/geometry_utils.pxd0000644000175100001770000003222314714401662020574 0ustar00runnerdocker""" Particle Deposition onto Octs """ cimport cython cimport numpy as np from libc.float cimport DBL_MANT_DIG from libc.math cimport frexp, ldexp, sqrt cdef enum: ORDER_MAX=20 cdef enum: # TODO: Handle error for indices past max INDEX_MAX_64=2097151 cdef enum: XSHIFT=2 YSHIFT=1 ZSHIFT=0 @cython.cdivision(True) @cython.boundscheck(False) 
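# ifrexp below returns the mantissa of x rescaled to a DBL_MANT_DIG-bit
# integer and stores the exponent in e; as a hand check (not from the
# source), frexp(6.5) = (0.8125, 3), so ifrexp(6.5, &e) sets e = 3 and
# returns 0.8125 * 2**DBL_MANT_DIG. xor_msb uses this pair to find the
# highest differing bit of two doubles.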
@cython.wraparound(False) cdef inline np.int64_t ifrexp(np.float64_t x, np.int64_t *e): cdef np.float64_t m cdef int e0 = 0 m = frexp(x,&e0) e[0] = e0 return ldexp(m,DBL_MANT_DIG) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef inline np.int64_t msdb(np.int64_t a, np.int64_t b): """Get the most significant differing bit between a and b.""" cdef np.int64_t c, ndx c = a ^ b ndx = 0 while (0 < c): c = (c >> 1) ndx+=1 return ndx @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef inline np.int64_t xor_msb(np.float64_t a, np.float64_t b): """Get the exponent of the highest differing bit between a and b""" # Get mantissa and exponents for each number cdef np.int64_t a_m, a_e, b_m, b_e, x, y, z b_e = 0 a_e = 0 a_m = ifrexp(a,&a_e) b_m = ifrexp(b,&b_e) x = ((a_e+1)*DBL_MANT_DIG) y = ((b_e+1)*DBL_MANT_DIG) # Compare mantissa if exponents equal if x == y: if a_m == b_m: return 0 z = msdb(a_m,b_m) #if 1: return z x = x - z return x-1 # required so that xor_msb(0.0,1.0)!=xor_msb(1.0,1.0) # Otherwise return largest exponent if y < x: return x else: return y @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef inline int compare_floats_morton(np.float64_t p[3], np.float64_t q[3]): cdef int j, out, dim cdef np.int64_t x, y x = -9999999999 y = 0 dim = 0 for j in range(3):#[::-1]: y = xor_msb(p[j],q[j]) if x < y: x = y dim = j if p[dim] < q[dim]: out = 1 else: out = 0 return out @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef inline np.float64_t euclidean_distance(np.float64_t[:] p, np.float64_t[:] q): cdef int j cdef np.float64_t d d = 0.0 for j in range(3): d+=(p[j]-q[j])**2 return sqrt(d) # Todo: allow radius reported independently in each dimension for rectangular domain @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef inline np.float64_t smallest_quadtree_box(np.float64_t p[3], np.float64_t q[3], np.int32_t order, np.float64_t DLE[3], np.float64_t DRE[3], np.float64_t *cx, np.float64_t *cy, np.float64_t *cz): cdef int j cdef np.float64_t c[3] cdef np.uint64_t pidx[3] # cdef np.uint64_t qidx[3] for j in range(3): pidx[j] = 0 # qidx[j] = 0 cdef np.uint64_t pidx_next[3] cdef np.uint64_t qidx_next[3] cdef np.float64_t dds[3] cdef np.float64_t rad cdef int lvl = 0 cdef int done = 0 while not done: if (lvl+1 >= order): done = 1 for j in range(3): dds[j] = (DRE[j] - DLE[j])/(1 << ( lvl+1)) pidx_next[j] = ((p[j] - DLE[j])/dds[j]) qidx_next[j] = ((q[j] - DLE[j])/dds[j]) for j in range(3): if pidx_next[j]!=qidx_next[j]: done = 1 break if not done: for j in range(3): pidx[j] = pidx_next[j] # qidx[j] = qidx_next[j] lvl+=1 rad = 0.0 for j in range(3): dds[j] = (DRE[j] - DLE[j])/(1 << lvl) c[j] = dds[j]*(pidx[j]+0.5) rad+=((dds[j]/2.0)**2) cx[0] = c[0] cy[0] = c[1] cz[0] = c[2] return sqrt(rad) #----------------------------------------------------------------------------- # 21 bits spread over 64 with 3 bits in between @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef inline np.uint64_t spread_64bits_by3(np.uint64_t x): x=(x&(0x00000000001FFFFF)) x=(x|(x<<20))*(0x000001FFC00003FF) #----------------------------------------------------------------------------- # 21 bits spread over 64 with 2 bits in between @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef inline np.uint64_t spread_64bits_by2(np.uint64_t x): # This magic comes from 
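# A tiny spread example (values assumed for illustration): spreading
# x = 0b101 with two zero bits between each input bit yields 0b1000001,
# so three coordinates spread this way can be OR'd into one Morton key
# without their bits colliding.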
http://stackoverflow.com/questions/1024754/how-to-compute-a-3d-morton-number-interleave-the-bits-of-3-ints # Only reversible up to 2097151 # Select highest 21 bits (Required to be reversible to 21st bit) # x = ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---k jihg fedc ba98 7654 3210 x=(x&(0x00000000001FFFFF)) # x = ---- ---- ---- ---- ---- ---k jihg fedc ba-- ---- ---- ---- ---- --98 7654 3210 x=(x|(x<<20))&(0x000001FFC00003FF) # x = ---- ---- ---- -kji hgf- ---- ---- -edc ba-- ---- ---- 9876 5--- ---- ---4 3210 x=(x|(x<<10))&(0x0007E007C00F801F) # x = ---- ---- -kji h--- -gf- ---- -edc ---- ba-- ---- 987- ---6 5--- ---4 32-- --10 x=(x|(x<<4))&(0x00786070C0E181C3) # x = ---- ---k ji-- h--g --f- ---e d--c --b- -a-- --98 --7- -6-- 5--- -43- -2-- 1--0 x=(x|(x<<2))&(0x0199219243248649) # x = ---- -kj- -i-- h--g --f- -e-- d--c --b- -a-- 9--8 --7- -6-- 5--4 --3- -2-- 1--0 x=(x|(x<<2))&(0x0649249249249249) # x = ---k --j- -i-- h--g --f- -e-- d--c --b- -a-- 9--8 --7- -6-- 5--4 --3- -2-- 1--0 x=(x|(x<<2))&(0x1249249249249249) return x @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef inline np.uint64_t compact_64bits_by2(np.uint64_t x): # Reversed magic x=x&(0x1249249249249249) x=(x|(x>>2))&(0x0649249249249249) x=(x|(x>>2))&(0x0199219243248649) x=(x|(x>>2))&(0x00786070C0E181C3) x=(x|(x>>4))&(0x0007E007C00F801F) x=(x|(x>>10))&(0x000001FFC00003FF) x=(x|(x>>20))&(0x00000000001FFFFF) return x #----------------------------------------------------------------------------- # 10 bits spread over 32 with 2 bits in between @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef inline np.uint32_t spread_32bits_by2(np.uint32_t x): # Only reversible up to 1023 # Select highest 10 bits (Required to be reversible to 10st bit) # x = ---- ---- ---- ---- ---- --98 7654 3210 x=(x&(0x000003FF)) # x = ---- --98 ---- ---- ---- ---- 7654 3210 x=(x|(x<<16))&(0xFF0000FF) # x = ---- --98 ---- ---- 7654 ---- ---- 3210 x=(x|(x<<8))&(0x0300F00F) # x = ---- --98 ---- 76-- --54 ---- 32-- --10 x=(x|(x<<4))&(0x030C30C3) # x = ---- 9--8 --7- -6-- 5--4 --3- -2-- 1--0 x=(x|(x<<2))&(0x09249249) return x @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef inline np.uint32_t compact_32bits_by2(np.uint32_t x): # Reversed magic x=x&(0x09249249) x=(x|(x>>2))&(0x030C30C3) x=(x|(x>>4))&(0x0300F00F) x=(x|(x>>8))&(0xFF0000FF) x=(x|(x>>16))&(0x000003FF) return x @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef inline np.uint64_t masked_merge_64bit(np.uint64_t a, np.uint64_t b, np.uint64_t mask): # https://graphics.stanford.edu/~seander/bithacks.html#MaskedMerge return a ^ ((a ^ b) & mask) @cython.cdivision(True) cdef inline np.uint64_t encode_morton_64bit(np.uint64_t x_ind, np.uint64_t y_ind, np.uint64_t z_ind): cdef np.uint64_t mi mi = 0 mi |= spread_64bits_by2(z_ind)<>XSHIFT) p[1] = compact_64bits_by2(mi>>YSHIFT) p[2] = compact_64bits_by2(mi>>ZSHIFT) @cython.cdivision(True) cdef inline np.uint64_t bounded_morton(np.float64_t x, np.float64_t y, np.float64_t z, np.float64_t *DLE, np.float64_t *DRE, np.int32_t order): cdef int i cdef np.float64_t dds[3] cdef np.uint64_t x_ind, y_ind, z_ind cdef np.uint64_t mi for i in range(3): dds[i] = (DRE[i] - DLE[i]) / (1 << order) x_ind = ((x - DLE[0])/dds[0]) y_ind = ((y - DLE[1])/dds[1]) z_ind = ((z - DLE[2])/dds[2]) mi = encode_morton_64bit(x_ind,y_ind,z_ind) return mi @cython.cdivision(True) cdef inline np.uint64_t bounded_morton_relative(np.float64_t x, np.float64_t y, 
np.float64_t z, np.float64_t *DLE, np.float64_t *DRE, np.int32_t order1, np.int32_t order2): cdef int i cdef np.float64_t dds1[3] cdef np.float64_t dds2[3] cdef np.float64_t DLE2[3] cdef np.uint64_t x_ind, y_ind, z_ind cdef np.uint64_t mi2 for i in range(3): dds1[i] = (DRE[i] - DLE[i]) / (1 << order1) dds2[i] = dds1[i] / (1 << order2) DLE2[0] = ( ((x - DLE[0])/dds1[0])) * dds1[0] DLE2[1] = ( ((y - DLE[1])/dds1[1])) * dds1[1] DLE2[2] = ( ((z - DLE[2])/dds1[2])) * dds1[2] x_ind = ((x - DLE2[0])/dds2[0]) y_ind = ((y - DLE2[1])/dds2[1]) z_ind = ((z - DLE2[2])/dds2[2]) mi2 = encode_morton_64bit(x_ind,y_ind,z_ind) return mi2 # This doesn't seem to be much, if at all, faster... @cython.cdivision(True) cdef inline np.uint64_t bounded_morton_dds(np.float64_t x, np.float64_t y, np.float64_t z, np.float64_t *DLE, np.float64_t *dds): cdef np.uint64_t x_ind, y_ind, z_ind cdef np.uint64_t mi x_ind = ((x - DLE[0])/dds[0]) y_ind = ((y - DLE[1])/dds[1]) z_ind = ((z - DLE[2])/dds[2]) mi = encode_morton_64bit(x_ind,y_ind,z_ind) return mi @cython.cdivision(True) cdef inline np.uint64_t bounded_morton_relative_dds(np.float64_t x, np.float64_t y, np.float64_t z, np.float64_t *DLE, np.float64_t *dds1, np.float64_t *dds2): cdef np.float64_t DLE2[3] cdef np.uint64_t x_ind, y_ind, z_ind cdef np.uint64_t mi2 DLE2[0] = ( ((x - DLE[0])/dds1[0])) * dds1[0] DLE2[1] = ( ((y - DLE[1])/dds1[1])) * dds1[1] DLE2[2] = ( ((z - DLE[2])/dds1[2])) * dds1[2] x_ind = ((x - DLE2[0])/dds2[0]) y_ind = ((y - DLE2[1])/dds2[1]) z_ind = ((z - DLE2[2])/dds2[2]) mi2 = encode_morton_64bit(x_ind,y_ind,z_ind) return mi2 @cython.cdivision(True) cdef inline np.uint64_t bounded_morton_split_dds(np.float64_t x, np.float64_t y, np.float64_t z, np.float64_t *DLE, np.float64_t *dds, np.uint64_t *p): cdef np.uint64_t mi p[0] = ((x - DLE[0])/dds[0]) p[1] = ((y - DLE[1])/dds[1]) p[2] = ((z - DLE[2])/dds[2]) mi = encode_morton_64bit(p[0], p[1], p[2]) return mi @cython.cdivision(True) cdef inline np.uint64_t bounded_morton_split_relative_dds(np.float64_t x, np.float64_t y, np.float64_t z, np.float64_t *DLE, np.float64_t *dds1, np.float64_t *dds2, np.uint64_t *p2): cdef np.float64_t DLE2[3] cdef np.uint64_t mi2 DLE2[0] = DLE[0] + ( ((x - DLE[0])/dds1[0])) * dds1[0] DLE2[1] = DLE[1] + ( ((y - DLE[1])/dds1[1])) * dds1[1] DLE2[2] = DLE[2] + ( ((z - DLE[2])/dds1[2])) * dds1[2] p2[0] = ((x - DLE2[0])/dds2[0]) p2[1] = ((y - DLE2[1])/dds2[1]) p2[2] = ((z - DLE2[2])/dds2[2]) mi2 = encode_morton_64bit(p2[0], p2[1], p2[2]) return mi2 cdef np.uint32_t morton_neighbors_coarse(np.uint64_t mi1, np.uint64_t max_index1, bint periodicity[3], np.uint32_t nn, np.uint32_t[:,:] index, np.uint64_t[:,:] ind1_n, np.uint64_t[:] neighbors) cdef np.uint32_t morton_neighbors_refined(np.uint64_t mi1, np.uint64_t mi2, np.uint64_t max_index1, np.uint64_t max_index2, bint periodicity[3], np.uint32_t nn, np.uint32_t[:,:] index, np.uint64_t[:,:] ind1_n, np.uint64_t[:,:] ind2_n, np.uint64_t[:] neighbors1, np.uint64_t[:] neighbors2) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/geometry_utils.pyx0000644000175100001770000014130514714401662020623 0ustar00runnerdocker# distutils: libraries = STD_LIBS # distutils: language = c++ # distutils: extra_compile_args = CPP14_FLAG OMP_ARGS # distutils: extra_link_args = CPP14_FLAG OMP_ARGS """ Simple integrators for the radiative transfer equation """ import numpy as np cimport cython cimport numpy as np from libc.math cimport copysign, fabs, log2 from libc.stdlib cimport free, 
malloc from yt.utilities.lib.fp_utils cimport i64clip from yt.utilities.exceptions import YTDomainOverflow from yt.utilities.lib.vec3_ops cimport L2_norm, cross, dot, subtract cdef extern from "math.h": double exp(double x) noexcept nogil float expf(float x) noexcept nogil long double expl(long double x) noexcept nogil double floor(double x) noexcept nogil double ceil(double x) noexcept nogil double fmod(double x, double y) noexcept nogil double fabs(double x) noexcept nogil cdef extern from "platform_dep.h": long int lrint(double x) noexcept nogil @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef np.int64_t graycode(np.int64_t x): return x^(x>>1) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef np.int64_t igraycode(np.int64_t x): cdef np.int64_t i, j if x == 0: return x m = ceil(log2(x)) + 1 i, j = x, 1 while j < m: i = i ^ (x>>j) j += 1 return i @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef np.int64_t direction(np.int64_t x, np.int64_t n): #assert x < 2**n if x == 0: return 0 elif x%2 == 0: return tsb(x-1, n)%n else: return tsb(x, n)%n @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef np.int64_t tsb(np.int64_t x, np.int64_t width): #assert x < 2**width cdef np.int64_t i = 0 while x&1 and i <= width: x = x >> 1 i += 1 return i @cython.cpow(True) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef np.int64_t bitrange(np.int64_t x, np.int64_t width, np.int64_t start, np.int64_t end): return x >> (width-end) & ((2**(end-start))-1) @cython.cpow(True) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef np.int64_t rrot(np.int64_t x, np.int64_t i, np.int64_t width): i = i%width x = (x>>i) | (x<>width-i) return x&(2**width-1) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef np.int64_t transform(np.int64_t entry, np.int64_t direction, np.int64_t width, np.int64_t x): return rrot((x^entry), direction + 1, width) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef np.int64_t entry(np.int64_t x): if x == 0: return 0 return graycode(2*((x-1)/2)) @cython.cpow(True) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef np.int64_t setbit(np.int64_t x, np.int64_t w, np.int64_t i, np.int64_t b): if b == 1: return x | 2**(w-i-1) elif b == 0: return x & ~2**(w-i-1) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def spread_bits(np.uint64_t x): return spread_64bits_by2(x) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def compact_bits(np.uint64_t x): return compact_64bits_by2(x) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def lsz(np.uint64_t v, int stride = 1, int start = 0): cdef int c c = start while ((np.uint64(1) << np.uint64(c)) & np.uint64(v)): c += stride return c @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def lsb(np.uint64_t v, int stride = 1, int start = 0): cdef int c c = start while (np.uint64(v) << np.uint64(c)) and not ((np.uint64(1) << np.uint64(c)) & np.uint64(v)): c += stride return c @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def bitwise_addition(np.uint64_t x, np.int64_t y0, int stride = 1, int start = 0): if (y0 == 0): return x cdef int end, p, pstart cdef list mstr cdef np.uint64_t m, y, out y = np.uint64(np.abs(y0)) if (y0 > 0): func_ls = lsz else: 
func_ls = lsb # Continue until all bits added p = 0 out = x while (y >> p): if (y & (1 << p)): # Get end point pstart = start + p*stride end = func_ls(out,stride=stride,start=pstart) # Create mask mstr = (end + 1) * ['0'] for i in range(pstart,end+1,stride): mstr[i] = '1' m = int(''.join(mstr[::-1]), 2) # Invert portion in mask # print(mstr[::-1]) # print(y,p,(pstart,end+1),bin(m),bin(out),bin(~out)) out = masked_merge_64bit(out, ~out, m) # Move to next bit p += 1 return out @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef np.int64_t point_to_hilbert(int order, np.int64_t p[3]): cdef np.int64_t h, e, d, l, b, w, i, x h = e = d = 0 for i in range(order): l = 0 for x in range(3): b = bitrange(p[3-x-1], order, i, i+1) l |= (b<= INDEX_MAX_64: raise ValueError("Point exceeds max ({}) ".format(INDEX_MAX_64)+ "for 64bit interleave.") p[j] = left_index[j] morton_index = point_to_morton(p) return morton_index @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def get_morton_indices(np.ndarray[np.uint64_t, ndim=2] left_index): cdef np.int64_t i cdef int j cdef np.ndarray[np.uint64_t, ndim=1] morton_indices cdef np.uint64_t p[3] morton_indices = np.zeros(left_index.shape[0], 'uint64') for i in range(left_index.shape[0]): for j in range(3): if left_index[i, j] >= INDEX_MAX_64: raise ValueError("Point exceeds max ({}) ".format(INDEX_MAX_64)+ "for 64bit interleave.") p[j] = left_index[i, j] morton_indices[i] = point_to_morton(p) return morton_indices @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def get_morton_indices_unravel(np.ndarray[np.uint64_t, ndim=1] left_x, np.ndarray[np.uint64_t, ndim=1] left_y, np.ndarray[np.uint64_t, ndim=1] left_z): cdef np.int64_t i cdef np.ndarray[np.uint64_t, ndim=1] morton_indices cdef np.uint64_t p[3] morton_indices = np.zeros(left_x.shape[0], 'uint64') for i in range(left_x.shape[0]): p[0] = left_x[i] p[1] = left_y[i] p[2] = left_z[i] for j in range(3): if p[j] >= INDEX_MAX_64: raise ValueError("Point exceeds max ({}) ".format(INDEX_MAX_64)+ "for 64 bit interleave.") morton_indices[i] = point_to_morton(p) return morton_indices @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def get_morton_point(np.uint64_t index): cdef int j cdef np.uint64_t p[3] cdef np.ndarray[np.uint64_t, ndim=1] position position = np.zeros(3, 'uint64') morton_to_point(index, p) for j in range(3): position[j] = p[j] return position @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def get_morton_points(np.ndarray[np.uint64_t, ndim=1] indices): # This is inspired by the scurve package by user cortesi on GH. 
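    # Decoding hand check (not from the source): 53 = 0b110101 de-interleaves
    # to x = 0b11, y = 0b10, z = 0b01, so
    # get_morton_points(np.array([53], dtype='uint64')) gives [[3, 2, 1]].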
cdef int i, j cdef np.uint64_t p[3] cdef np.ndarray[np.uint64_t, ndim=2] positions positions = np.zeros((indices.shape[0], 3), 'uint64') for i in range(indices.shape[0]): morton_to_point(indices[i], p) for j in range(3): positions[i, j] = p[j] return positions @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def get_morton_neighbors_coarse(mi1, max_index1, periodic, nn): cdef int i cdef np.uint32_t ntot cdef np.ndarray[np.uint32_t, ndim=2] index = np.zeros((2*nn+1,3), dtype='uint32') cdef np.ndarray[np.uint64_t, ndim=2] ind1_n = np.zeros((2*nn+1,3), dtype='uint64') cdef np.ndarray[np.uint64_t, ndim=1] neighbors = np.zeros((2*nn+1)**3, dtype='uint64') cdef bint periodicity[3] if periodic: for i in range(3): periodicity[i] = 1 else: for i in range(3): periodicity[i] = 0 ntot = morton_neighbors_coarse(mi1, max_index1, periodicity, nn, index, ind1_n, neighbors) return np.resize(neighbors, (ntot,)) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef np.uint32_t morton_neighbors_coarse(np.uint64_t mi1, np.uint64_t max_index1, bint periodicity[3], np.uint32_t nn, np.uint32_t[:,:] index, np.uint64_t[:,:] ind1_n, np.uint64_t[:] neighbors): cdef np.uint32_t ntot = 0 cdef np.uint64_t ind1[3] cdef np.uint32_t count[3] cdef np.uint32_t origin[3] cdef np.int64_t adv cdef int i, j, k, ii, ij, ik for i in range(3): count[i] = 0 origin[i] = 0 # Get indices decode_morton_64bit(mi1,ind1) # Determine which directions are valid for j,i in enumerate(range(-nn,(nn+1))): if i == 0: for k in range(3): ind1_n[j,k] = ind1[k] index[count[k],k] = j origin[k] = count[k] count[k] += 1 else: for k in range(3): adv = ((ind1[k]) + i) if (adv < 0): if periodicity[k]: while adv < 0: adv += max_index1 ind1_n[j,k] = (adv % max_index1) else: continue elif (adv >= max_index1): if periodicity[k]: ind1_n[j,k] = (adv % max_index1) else: continue else: ind1_n[j,k] = (adv) # print(i,k,adv,max_index1,ind1_n[j,k],adv % max_index1) index[count[k],k] = j count[k] += 1 # Iterate over ever combinations for ii in range(count[0]): i = index[ii,0] for ij in range(count[1]): j = index[ij,1] for ik in range(count[2]): k = index[ik,2] if (ii != origin[0]) or (ij != origin[1]) or (ik != origin[2]): neighbors[ntot] = encode_morton_64bit(ind1_n[i,0], ind1_n[j,1], ind1_n[k,2]) ntot += 1 return ntot @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def get_morton_neighbors_refined(mi1, mi2, max_index1, max_index2, periodic, nn): cdef int i cdef np.uint32_t ntot cdef np.ndarray[np.uint32_t, ndim=2] index = np.zeros((2*nn+1,3), dtype='uint32') cdef np.ndarray[np.uint64_t, ndim=2] ind1_n = np.zeros((2*nn+1,3), dtype='uint64') cdef np.ndarray[np.uint64_t, ndim=2] ind2_n = np.zeros((2*nn+1,3), dtype='uint64') cdef np.ndarray[np.uint64_t, ndim=1] neighbors1 = np.zeros((2*nn+1)**3, dtype='uint64') cdef np.ndarray[np.uint64_t, ndim=1] neighbors2 = np.zeros((2*nn+1)**3, dtype='uint64') cdef bint periodicity[3] if periodic: for i in range(3): periodicity[i] = 1 else: for i in range(3): periodicity[i] = 0 ntot = morton_neighbors_refined(mi1, mi2, max_index1, max_index2, periodicity, nn, index, ind1_n, ind2_n, neighbors1, neighbors2) return np.resize(neighbors1, (ntot,)), np.resize(neighbors2, (ntot,)) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef np.uint32_t morton_neighbors_refined(np.uint64_t mi1, np.uint64_t mi2, np.uint64_t max_index1, np.uint64_t max_index2, bint periodicity[3], np.uint32_t nn, np.uint32_t[:,:] index, np.uint64_t[:,:] 
ind1_n, np.uint64_t[:,:] ind2_n, np.uint64_t[:] neighbors1, np.uint64_t[:] neighbors2): cdef np.uint32_t ntot = 0 cdef np.uint64_t ind1[3] cdef np.uint64_t ind2[3] cdef np.uint32_t count[3] cdef np.uint32_t origin[3] cdef np.int64_t adv, maj, rem, adv1 cdef int i, j, k, ii, ij, ik for i in range(3): count[i] = 0 origin[i] = 0 # Get indices decode_morton_64bit(mi1,ind1) decode_morton_64bit(mi2,ind2) # Determine which directions are valid for j,i in enumerate(range(-nn,(nn+1))): if i == 0: for k in range(3): ind1_n[j,k] = ind1[k] ind2_n[j,k] = ind2[k] index[count[k],k] = j origin[k] = count[k] count[k] += 1 else: for k in range(3): adv = (ind2[k] + i) maj = adv / (max_index2) rem = adv % (max_index2) if adv < 0: adv1 = (ind1[k] + (maj-1)) if adv1 < 0: if periodicity[k]: while adv1 < 0: adv1 += max_index1 ind1_n[j,k] = adv1 else: continue else: ind1_n[j,k] = adv1 while adv < 0: adv += max_index2 ind2_n[j,k] = adv elif adv >= max_index2: adv1 = (ind1[k] + maj) if adv1 >= max_index1: if periodicity[k]: ind1_n[j,k] = (adv1 % max_index1) else: continue else: ind1_n[j,k] = adv1 ind2_n[j,k] = rem else: ind1_n[j,k] = ind1[k] ind2_n[j,k] = (adv) index[count[k],k] = j count[k] += 1 # Iterate over ever combinations for ii in range(count[0]): i = index[ii,0] for ij in range(count[1]): j = index[ij,1] for ik in range(count[2]): k = index[ik,2] if (ii != origin[0]) or (ij != origin[1]) or (ik != origin[2]): neighbors1[ntot] = encode_morton_64bit(ind1_n[i,0], ind1_n[j,1], ind1_n[k,2]) neighbors2[ntot] = encode_morton_64bit(ind2_n[i,0], ind2_n[j,1], ind2_n[k,2]) ntot += 1 return ntot @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def morton_neighbor_periodic(np.ndarray[np.uint64_t,ndim=1] p, list dim_list, list num_list, np.uint64_t max_index): cdef np.uint64_t p1[3] cdef int j, dim, num for j in range(3): p1[j] = np.uint64(p[j]) for dim,num in zip(dim_list,num_list): p1[dim] = np.uint64((np.int64(p[dim]) + num) % max_index) return np.int64(point_to_morton(p1)) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def morton_neighbor_bounded(np.ndarray[np.uint64_t,ndim=1] p, list dim_list, list num_list, np.uint64_t max_index): cdef np.int64_t x cdef np.uint64_t p1[3] cdef int j, dim, num for j in range(3): p1[j] = np.uint64(p[j]) for dim,num in zip(dim_list,num_list): x = np.int64(p[dim]) + num if (x >= 0) and (x < max_index): p1[dim] = np.uint64(x) else: return np.int64(-1) return np.int64(point_to_morton(p1)) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def morton_neighbor(np.ndarray[np.uint64_t,ndim=1] p, list dim_list, list num_list, np.uint64_t max_index, periodic = False): if periodic: return morton_neighbor_periodic(p, dim_list, num_list, max_index) else: return morton_neighbor_bounded(p, dim_list, num_list, max_index) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def get_morton_neighbors(np.ndarray[np.uint64_t,ndim=1] mi, int order = ORDER_MAX, periodic = False): """Returns array of neighboring morton indices""" # Declare cdef int i, j, k, l, n cdef np.uint64_t max_index cdef np.ndarray[np.uint64_t, ndim=2] p cdef np.int64_t nmi cdef np.ndarray[np.uint64_t, ndim=1] mi_neighbors p = get_morton_points(mi) mi_neighbors = np.zeros(26*mi.shape[0], 'uint64') n = 0 max_index = np.int64(1 << order) # Define function if periodic: fneighbor = morton_neighbor_periodic else: fneighbor = morton_neighbor_bounded for i in range(mi.shape[0]): for j in range(3): # +1 in dimension j nmi = 
fneighbor(p[i,:],[j],[+1],max_index)
            if nmi > 0:
                mi_neighbors[n] = np.uint64(nmi)
                n+=1
            # +/- in dimension k
            for k in range(j+1,3):
                # +1 in dimension k
                nmi = fneighbor(p[i,:],[j,k],[+1,+1],max_index)
                if nmi > 0:
                    mi_neighbors[n] = np.uint64(nmi)
                    n+=1
                # +/- in dimension l
                for l in range(k+1,3):
                    nmi = fneighbor(p[i,:],[j,k,l],[+1,+1,+1],max_index)
                    if nmi > 0:
                        mi_neighbors[n] = np.uint64(nmi)
                        n+=1
                    nmi = fneighbor(p[i,:],[j,k,l],[+1,+1,-1],max_index)
                    if nmi > 0:
                        mi_neighbors[n] = np.uint64(nmi)
                        n+=1
                # -1 in dimension k
                nmi = fneighbor(p[i,:],[j,k],[+1,-1],max_index)
                if nmi > 0:
                    mi_neighbors[n] = np.uint64(nmi)
                    n+=1
                # +/- in dimension l
                for l in range(k+1,3):
                    nmi = fneighbor(p[i,:],[j,k,l],[+1,-1,+1],max_index)
                    if nmi > 0:
                        mi_neighbors[n] = np.uint64(nmi)
                        n+=1
                    nmi = fneighbor(p[i,:],[j,k,l],[+1,-1,-1],max_index)
                    if nmi > 0:
                        mi_neighbors[n] = np.uint64(nmi)
                        n+=1
            # -1 in dimension j
            nmi = fneighbor(p[i,:],[j],[-1],max_index)
            if nmi > 0:
                mi_neighbors[n] = np.uint64(nmi)
                n+=1
            # +/- in dimension k
            for k in range(j+1,3):
                # +1 in dimension k
                nmi = fneighbor(p[i,:],[j,k],[-1,+1],max_index)
                if nmi > 0:
                    mi_neighbors[n] = np.uint64(nmi)
                    n+=1
                # +/- in dimension l
                for l in range(k+1,3):
                    nmi = fneighbor(p[i,:],[j,k,l],[-1,+1,+1],max_index)
                    if nmi > 0:
                        mi_neighbors[n] = np.uint64(nmi)
                        n+=1
                    nmi = fneighbor(p[i,:],[j,k,l],[-1,+1,-1],max_index)
                    if nmi > 0:
                        mi_neighbors[n] = np.uint64(nmi)
                        n+=1
                # -1 in dimension k
                nmi = fneighbor(p[i,:],[j,k],[-1,-1],max_index)
                if nmi > 0:
                    mi_neighbors[n] = np.uint64(nmi)
                    n+=1
                # +/- in dimension l
                for l in range(k+1,3):
                    nmi = fneighbor(p[i,:],[j,k,l],[-1,-1,+1],max_index)
                    if nmi > 0:
                        mi_neighbors[n] = np.uint64(nmi)
                        n+=1
                    nmi = fneighbor(p[i,:],[j,k,l],[-1,-1,-1],max_index)
                    if nmi > 0:
                        mi_neighbors[n] = np.uint64(nmi)
                        n+=1
    mi_neighbors = np.resize(mi_neighbors,(n,))
    return np.unique(np.hstack([mi,mi_neighbors]))

def ifrexp_cy(np.float64_t x):
    cdef np.int64_t e, m
    m = ifrexp(x, &e)
    return m,e

def msdb_cy(np.int64_t a, np.int64_t b):
    return msdb(a,b)

def xor_msb_cy(np.float64_t a, np.float64_t b):
    return xor_msb(a,b)

def morton_qsort_swap(np.ndarray[np.uint64_t, ndim=1] ind,
                      np.uint64_t a, np.uint64_t b):
    # http://www.geeksforgeeks.org/iterative-quick-sort/
    cdef np.int64_t t = ind[a]
    ind[a] = ind[b]
    ind[b] = t

def morton_qsort_partition(np.ndarray[cython.floating, ndim=2] pos,
                           np.int64_t l, np.int64_t h,
                           np.ndarray[np.uint64_t, ndim=1] ind,
                           use_loop = False):
    # Initialize
    cdef int k
    cdef np.int64_t i, j
    cdef np.float64_t ppos[3]
    cdef np.float64_t ipos[3]
    cdef np.uint64_t done, pivot
    if use_loop:
        # http://www.geeksforgeeks.org/iterative-quick-sort/
        # A bit slower
        # Set starting point & pivot
        i = (l - 1)
        for k in range(3):
            ppos[k] = pos[ind[h],k]
        # Loop over array moving ind for points smaller than pivot to front
        for j in range(l, h):
            for k in range(3):
                ipos[k] = pos[ind[j],k]
            if compare_floats_morton(ipos,ppos):
                i+=1
                morton_qsort_swap(ind,i,j)
        # Swap the pivot to the midpoint in the partition
        i+=1
        morton_qsort_swap(ind,i,h)
        return i
    else:
        # Set starting point & pivot
        i = l-1
        j = h
        done = 0
        pivot = ind[h]
        for k in range(3):
            ppos[k] = pos[pivot,k]
        # Loop until entire array processed
        while not done:
            # Process bottom
            while not done:
                i+=1
                if i == j:
                    done = 1
                    break
                for k in range(3):
                    ipos[k] = pos[ind[i],k]
                if compare_floats_morton(ppos,ipos):
                    ind[j] = ind[i]
                    break
            # Process top
            while not done:
                j-=1
                if j == i:
                    done = 1
                    break
                for k in range(3):
                    ipos[k] = pos[ind[j],k]
                if compare_floats_morton(ipos,ppos):
                    ind[i] = ind[j]
                    break
        ind[j] =
pivot return j @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def morton_qsort_recursive(np.ndarray[cython.floating, ndim=2] pos, np.int64_t l, np.int64_t h, np.ndarray[np.uint64_t, ndim=1] ind, use_loop = False): # http://www.geeksforgeeks.org/iterative-quick-sort/ cdef np.int64_t p if (l < h): p = morton_qsort_partition(pos, l, h, ind, use_loop=use_loop) morton_qsort_recursive(pos, l, p-1, ind, use_loop=use_loop) morton_qsort_recursive(pos, p+1, h, ind, use_loop=use_loop) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def morton_qsort_iterative(np.ndarray[cython.floating, ndim=2] pos, np.int64_t l, np.int64_t h, np.ndarray[np.uint64_t, ndim=1] ind, use_loop = False): # http://www.geeksforgeeks.org/iterative-quick-sort/ # Auxiliary stack cdef np.ndarray[np.int64_t, ndim=1] stack = np.zeros(h-l+1, dtype=np.int64) cdef np.int64_t top = -1 cdef np.int64_t p top+=1 stack[top] = l top+=1 stack[top] = h # Pop from stack until it's empty while (top >= 0): # Get next set h = stack[top] top-=1 l = stack[top] top-=1 # Partition p = morton_qsort_partition(pos, l, h, ind, use_loop=use_loop) # Add left partition to the stack if (p-1) > l: top+=1 stack[top] = l top+=1 stack[top] = p - 1 # Add right partition to the stack if (p+1) < h: top+=1 stack[top] = p + 1 top+=1 stack[top] = h @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def morton_qsort(np.ndarray[cython.floating, ndim=2] pos, np.int64_t l, np.int64_t h, np.ndarray[np.uint64_t, ndim=1] ind, recursive = False, use_loop = False): #get_morton_argsort1(pos,l,h,ind) if recursive: morton_qsort_recursive(pos,l,h,ind,use_loop=use_loop) else: morton_qsort_iterative(pos,l,h,ind,use_loop=use_loop) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def get_morton_argsort1(np.ndarray[cython.floating, ndim=2] pos, np.int64_t start, np.int64_t end, np.ndarray[np.uint64_t, ndim=1] ind): # Return if only one position selected if start >= end: return # Initialize cdef np.int64_t top top = morton_qsort_partition(pos,start,end,ind) # Do remaining parts on either side of pivot, sort side first if (top-1-start < end-(top+1)): get_morton_argsort1(pos,start,top-1,ind) get_morton_argsort1(pos,top+1,end,ind) else: get_morton_argsort1(pos,top+1,end,ind) get_morton_argsort1(pos,start,top-1,ind) return def compare_morton(np.ndarray[cython.floating, ndim=1] p0, np.ndarray[cython.floating, ndim=1] q0): cdef np.float64_t p[3] cdef np.float64_t q[3] # cdef np.int64_t iep,ieq,imp,imq cdef int j for j in range(3): p[j] = p0[j] q[j] = q0[j] # imp = ifrexp(p[j],&iep) # imq = ifrexp(q[j],&ieq) # print(j,p[j],q[j],xor_msb(p[j],q[j]),'m=',imp,imq,'e=',iep,ieq) return compare_floats_morton(p,q) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef np.int64_t position_to_morton(np.ndarray[cython.floating, ndim=1] pos_x, np.ndarray[cython.floating, ndim=1] pos_y, np.ndarray[cython.floating, ndim=1] pos_z, np.float64_t dds[3], np.float64_t DLE[3], np.float64_t DRE[3], np.ndarray[np.uint64_t, ndim=1] ind, int filter): cdef np.uint64_t ii[3] cdef np.float64_t p[3] cdef np.int64_t i, j, use cdef np.uint64_t DD[3] cdef np.uint64_t FLAG = ~(0) for i in range(3): DD[i] = ((DRE[i] - DLE[i]) / dds[i]) for i in range(pos_x.shape[0]): use = 1 p[0] = pos_x[i] p[1] = pos_y[i] p[2] = pos_z[i] for j in range(3): if p[j] < DLE[j] or p[j] > DRE[j]: if filter == 1: # We only allow 20 levels, so this is inaccessible use = 0 break return i ii[j] = ((p[j] - 
DLE[j])/dds[j]) ii[j] = i64clip(ii[j], 0, DD[j] - 1) if use == 0: ind[i] = FLAG continue ind[i] = encode_morton_64bit(ii[0],ii[1],ii[2]) return pos_x.shape[0] def compute_morton(np.ndarray pos_x, np.ndarray pos_y, np.ndarray pos_z, domain_left_edge, domain_right_edge, filter_bbox = False, order = ORDER_MAX): cdef int i cdef int filter if filter_bbox: filter = 1 else: filter = 0 cdef np.float64_t dds[3] cdef np.float64_t DLE[3] cdef np.float64_t DRE[3] for i in range(3): DLE[i] = domain_left_edge[i] DRE[i] = domain_right_edge[i] dds[i] = (DRE[i] - DLE[i]) / (1 << order) cdef np.ndarray[np.uint64_t, ndim=1] ind ind = np.zeros(pos_x.shape[0], dtype="uint64") cdef np.int64_t rv if pos_x.dtype == np.float32: rv = position_to_morton[np.float32_t]( pos_x, pos_y, pos_z, dds, DLE, DRE, ind, filter) elif pos_x.dtype == np.float64: rv = position_to_morton[np.float64_t]( pos_x, pos_y, pos_z, dds, DLE, DRE, ind, filter) else: print("Could not identify dtype.", pos_x.dtype) raise NotImplementedError if rv < pos_x.shape[0]: mis = (pos_x.min(), pos_y.min(), pos_z.min()) mas = (pos_x.max(), pos_y.max(), pos_z.max()) raise YTDomainOverflow(mis, mas, domain_left_edge, domain_right_edge) return ind @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def dist(np.ndarray[np.float64_t, ndim=1] p0, np.ndarray[np.float64_t, ndim=1] q0): cdef int j cdef np.float64_t p[3] cdef np.float64_t q[3] for j in range(3): p[j] = p0[j] q[j] = q0[j] return euclidean_distance(p,q) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def dist_to_box(np.ndarray[np.float64_t, ndim=1] p, np.ndarray[np.float64_t, ndim=1] cbox, np.float64_t rbox): cdef int j cdef np.float64_t d = 0.0 for j in range(3): d+= max((cbox[j]-rbox)-p[j],0.0,p[j]-(cbox[j]+rbox))**2 return np.sqrt(d) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def solution_radius(np.ndarray[np.float64_t, ndim=2] P, int k, np.uint64_t i, np.ndarray[np.uint64_t, ndim=1] idx, int order, np.ndarray[np.float64_t, ndim=1] DLE, np.ndarray[np.float64_t, ndim=1] DRE): c = np.zeros(3, dtype=np.float64) return quadtree_box(P[i,:],P[idx[k-1],:],order,DLE,DRE,c) @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def knn_direct(np.ndarray[np.float64_t, ndim=2] P, np.uint64_t k, np.uint64_t i, np.ndarray[np.uint64_t, ndim=1] idx, return_dist = False, return_rad = False): """Directly compute the k nearest neighbors by sorting on distance. Args: P (np.ndarray): (N,d) array of points to search sorted by Morton order. k (int): number of nearest neighbors to find. i (int): index of point that nearest neighbors should be found for. idx (np.ndarray): indices of points from P to be considered. return_dist (Optional[bool]): If True, distances to the k nearest neighbors are also returned (in order of proximity). (default = False) return_rad (Optional[bool]): If True, distance to farthest nearest neighbor is also returned. This is set to False if return_dist is True. (default = False) Returns: np.ndarray: Indices of k nearest neighbors to point i. 
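    Example (hypothetical values; assumes P is already sorted in Morton
    order, as required):

        idx = np.arange(P.shape[0], dtype='uint64')
        nbrs, dists = knn_direct(P, 5, 0, idx, return_dist=True)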
""" cdef int j,m cdef np.int64_t[:] sort_fwd cdef np.float64_t[:] ipos cdef np.float64_t[:] jpos cdef np.float64_t[:] dist = np.zeros(len(idx), dtype='float64') ipos = np.zeros(3) jpos = np.zeros(3) for m in range(3): ipos[m] = P[i,m] for j in range(len(idx)): for m in range(3): jpos[m] = P[idx[j],m] dist[j] = euclidean_distance(ipos, jpos) # casting to uint64 for compatibility with 32 bits systems # see https://github.com/yt-project/yt/issues/3656 sort_fwd = np.argsort(dist, kind='mergesort')[:k].astype(np.int64, copy=False) if return_dist: return np.array(idx)[sort_fwd], np.array(dist)[sort_fwd] elif return_rad: return np.array(idx)[sort_fwd], np.array(dist)[sort_fwd][k-1] else: return np.array(idx)[sort_fwd] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def quadtree_box(np.ndarray[np.float64_t, ndim=1] p, np.ndarray[np.float64_t, ndim=1] q, int order, np.ndarray[np.float64_t, ndim=1] DLE, np.ndarray[np.float64_t, ndim=1] DRE, np.ndarray[np.float64_t, ndim=1] c): # Declare & transfer values to ctypes cdef int j cdef np.float64_t ppos[3] cdef np.float64_t qpos[3] cdef np.float64_t rbox cdef np.float64_t cbox[3] cdef np.float64_t DLE1[3] cdef np.float64_t DRE1[3] for j in range(3): ppos[j] = p[j] qpos[j] = q[j] DLE1[j] = DLE[j] DRE1[j] = DRE[j] # Get smallest box containing p & q rbox = smallest_quadtree_box(ppos,qpos,order,DLE1,DRE1, &cbox[0],&cbox[1],&cbox[2]) # Transfer values to python array for j in range(3): c[j] = cbox[j] return rbox @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def csearch_morton(np.ndarray[np.float64_t, ndim=2] P, int k, np.uint64_t i, np.ndarray[np.uint64_t, ndim=1] Ai, np.uint64_t l, np.uint64_t h, int order, np.ndarray[np.float64_t, ndim=1] DLE, np.ndarray[np.float64_t, ndim=1] DRE, int nu = 4): """Expand search concentrically to determine set of k nearest neighbors for point i. Args: P (np.ndarray): (N,d) array of points to search sorted by Morton order. k (int): number of nearest neighbors to find. i (int): index of point that nearest neighbors should be found for. Ai (np.ndarray): (N,k) array of partial nearest neighbor indices. l (int): index of lowest point to consider in addition to Ai. h (int): index of highest point to consider in addition to Ai. order (int): Maximum depth that Morton order quadtree should reach. DLE (np.float64[3]): 3 floats defining domain lower bounds in each dim. DRE (np.float64[3]): 3 floats defining domain upper bounds in each dim. nu (int): minimum number of points before a direct knn search is performed. (default = 4) Returns: np.ndarray: (N,k) array of nearest neighbor indices. 
Raises: ValueError: If l < i < h. """ if (l < i) and (h > i): raise ValueError("Both l and h must be on the same side of i.") m = np.uint64((h + l)/2) # New range is small enough to consider directly if (h-l) < nu: if m > i: return knn_direct(P,k,i,np.hstack((Ai,np.arange(l,h+1,dtype=np.uint64)))) else: return knn_direct(P,k,i,np.hstack((np.arange(l,h+1,dtype=np.uint64),Ai))) # Add middle point if m > i: Ai,rad_Ai = knn_direct(P,k,i,np.hstack((Ai,m)).astype(np.uint64),return_rad=True) else: Ai,rad_Ai = knn_direct(P,k,i,np.hstack((m,Ai)).astype(np.uint64),return_rad=True) cbox_sol = np.zeros(3,dtype=np.float64) rbox_sol = quadtree_box(P[i,:],P[Ai[k-1],:],order,DLE,DRE,cbox_sol) # Return current solution if hl box is outside current solution's box # Uses actual box cbox_hl = np.zeros(3,dtype=np.float64) rbox_hl = quadtree_box(P[l,:],P[h,:],order,DLE,DRE,cbox_hl) if dist_to_box(cbox_sol,cbox_hl,rbox_hl) >= 1.5*rbox_sol: print('{} outside: rad = {}, rbox = {}, dist = {}'.format(m,rad_Ai,rbox_sol,dist_to_box(P[i,:],cbox_hl,rbox_hl))) return Ai # Expand search to lower/higher indices as needed if i < m: # They are already sorted... Ai = csearch_morton(P,k,i,Ai,l,m-1,order,DLE,DRE,nu=nu) if compare_morton(P[m,:],P[i,:]+dist(P[i,:],P[Ai[k-1],:])): Ai = csearch_morton(P,k,i,Ai,m+1,h,order,DLE,DRE,nu=nu) else: Ai = csearch_morton(P,k,i,Ai,m+1,h,order,DLE,DRE,nu=nu) if compare_morton(P[i,:]-dist(P[i,:],P[Ai[k-1],:]),P[m,:]): Ai = csearch_morton(P,k,i,Ai,l,m-1,order,DLE,DRE,nu=nu) return Ai @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def knn_morton(np.ndarray[np.float64_t, ndim=2] P0, int k, np.uint64_t i0, float c = 1.0, int nu = 4, issorted = False, int order = ORDER_MAX, np.ndarray[np.float64_t, ndim=1] DLE = np.zeros(3,dtype=np.float64), np.ndarray[np.float64_t, ndim=1] DRE = np.zeros(3,dtype=np.float64)): """Get the indices of the k nearest neighbors to point i. Args: P (np.ndarray): (N,d) array of points to search. k (int): number of nearest neighbors to find. i (np.uint64): index of point to find neighbors for. c (float): factor determining how many indices before/after i are used in the initial search (i-c*k to i+c*k, default = 1.0) nu (int): minimum number of points before a direct knn search is performed. (default = 4) issorted (Optional[bool]): if True, P is assumed to be sorted already according to Morton order. order (int): Maximum depth that Morton order quadtree should reach. If not provided, ORDER_MAX is used. DLE (np.ndarray): (d,) array of domain lower bounds in each dimension. If not provided, this is determined from the points. DRE (np.ndarray): (d,) array of domain upper bounds in each dimension. If not provided, this is determined from the points. Returns: np.ndarray: (k,) indices of the k nearest neighbors to point i. 
""" cdef int j cdef np.uint64_t i cdef np.int64_t N = P0.shape[0] cdef np.ndarray[np.float64_t, ndim=2] P cdef np.ndarray[np.uint64_t, ndim=1] sort_fwd = np.arange(N,dtype=np.uint64) cdef np.ndarray[np.uint64_t, ndim=1] sort_rev = np.arange(N,dtype=np.uint64) cdef np.ndarray[np.uint64_t, ndim=1] Ai cdef np.int64_t idxmin, idxmax, u, l cdef np.uint64_t I # Sort if necessary if issorted: P = P0 i = i0 else: morton_qsort(P0,0,N-1,sort_fwd) sort_rev = np.argsort(sort_fwd).astype(np.uint64) P = P0[sort_fwd,:] i = sort_rev[i0] # Check domain and set if singular for j in range(3): if DLE[j] == DRE[j]: DLE[j] = min(P[:,j]) DRE[j] = max(P[:,j]) # Get initial guess bassed on position in z-order idxmin = max(i-c*k, 0) idxmax = min(i+c*k, N-1) Ai = np.hstack((np.arange(idxmin,i,dtype=np.uint64), np.arange(i+1,idxmax+1,dtype=np.uint64))) Ai,rad_Ai = knn_direct(P,k,i,Ai,return_rad=True) # Get radius of solution cbox_Ai = np.zeros(3,dtype=np.float64) rbox_Ai = quadtree_box(P[i,:],P[Ai[k-1],:],order,DLE,DRE,cbox_Ai) rad_Ai = rbox_Ai # Extend upper bound to match lower bound if idxmax < (N-1): if compare_morton(P[i,:]+rad_Ai,P[idxmax,:]): u = i else: I = 1 while (idxmax+(2**I) < N) and compare_morton(P[idxmax+(2**I),:],P[i,:]+rad_Ai): I+=1 u = min(idxmax+(2**I),N-1) Ai = csearch_morton(P,k,i,Ai,min(idxmax+1,N-1),u,order,DLE,DRE,nu=nu) else: u = idxmax # Extend lower bound to match upper bound if idxmin > 0: if compare_morton(P[idxmin,:],P[i,:]-rad_Ai): l = i else: I = 1 while (idxmin-(2**I) >= 0) and compare_morton(P[i,:]-rad_Ai,P[idxmin-(2**I),:]): I+=1 l = max(idxmin-(2**I),0) Ai = csearch_morton(P,k,i,Ai,l,max(idxmin-1,0),order,DLE,DRE,nu=nu) else: l = idxmin # Return indices of neighbors in the correct order if issorted: return Ai else: return sort_fwd[Ai] cdef struct PointSet cdef struct PointSet: int count # First index is point index, second is xyz np.float64_t points[2][3] PointSet *next cdef inline void get_intersection(np.float64_t p0[3], np.float64_t p1[3], int ax, np.float64_t coord, PointSet *p): cdef np.float64_t vec[3] cdef np.float64_t t for j in range(3): vec[j] = p1[j] - p0[j] if vec[ax] == 0.0: return # bail if the line is in the plane t = (coord - p0[ax])/vec[ax] # We know that if they're on opposite sides, it has to intersect. And we # won't get called otherwise. for j in range(3): p.points[p.count][j] = p0[j] + vec[j] * t p.count += 1 @cython.cdivision(True) def triangle_plane_intersect(int ax, np.float64_t coord, np.ndarray[np.float64_t, ndim=3] triangles): cdef np.float64_t p0[3] cdef np.float64_t p1[3] cdef np.float64_t p2[3] cdef np.float64_t E0[3] cdef np.float64_t E1[3] cdef np.float64_t tri_norm[3] cdef np.float64_t plane_norm[3] cdef np.float64_t dp cdef int i, j, k, count, ntri, nlines nlines = 0 ntri = triangles.shape[0] cdef PointSet *first cdef PointSet *last cdef PointSet *points first = last = points = NULL for i in range(ntri): count = 0 # Here p0 holds the triangle's zeroth node coordinates, # p1 holds the first node's coordinates, and # p2 holds the second node's coordinates for j in range(3): p0[j] = triangles[i, 0, j] p1[j] = triangles[i, 1, j] p2[j] = triangles[i, 2, j] plane_norm[j] = 0.0 plane_norm[ax] = 1.0 subtract(p1, p0, E0) subtract(p2, p0, E1) cross(E0, E1, tri_norm) dp = dot(tri_norm, plane_norm) dp /= L2_norm(tri_norm) # Skip if triangle is close to being parallel to plane. if (fabs(dp) > 0.995): continue # Now for each line segment (01, 12, 20) we check to see how many cross # the coordinate of the slice. 
# Here, the components of p2 are either +1 or -1 depending on whether the # node's coordinate corresponding to the slice axis is greater than the # coordinate of the slice. p2[0] -> node 0; p2[1] -> node 1; p2[2] -> node 2 for j in range(3): # Add 0 so that any -0s become +0s. Necessary for consistent determination # of plane intersection p2[j] = copysign(1.0, triangles[i, j, ax] - coord + 0) if p2[0] * p2[1] < 0: count += 1 if p2[1] * p2[2] < 0: count += 1 if p2[2] * p2[0] < 0: count += 1 if count == 2: nlines += 1 elif count == 3: raise RuntimeError("It should be geometrically impossible for a plane " "to intersect all three legs of a triangle. Please contact " "the yt developers with your mesh") else: continue points = <PointSet *> malloc(sizeof(PointSet)) points.count = 0 points.next = NULL # Here p0 and p1 again hold node coordinates if p2[0] * p2[1] < 0: # intersection of 01 triangle segment with plane for j in range(3): p0[j] = triangles[i, 0, j] p1[j] = triangles[i, 1, j] get_intersection(p0, p1, ax, coord, points) if p2[1] * p2[2] < 0: # intersection of 12 triangle segment with plane for j in range(3): p0[j] = triangles[i, 1, j] p1[j] = triangles[i, 2, j] get_intersection(p0, p1, ax, coord, points) if p2[2] * p2[0] < 0: # intersection of 20 triangle segment with plane for j in range(3): p0[j] = triangles[i, 2, j] p1[j] = triangles[i, 0, j] get_intersection(p0, p1, ax, coord, points) if last != NULL: last.next = points if first == NULL: first = points last = points points = first cdef np.ndarray[np.float64_t, ndim=3] line_segments line_segments = np.empty((nlines, 2, 3), dtype="float64") k = 0 while points != NULL: for j in range(3): line_segments[k, 0, j] = points.points[0][j] line_segments[k, 1, j] = points.points[1][j] k += 1 last = points points = points.next free(last) return line_segments ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/grid_traversal.pxd0000644000175100001770000000474514714401662020531 0ustar00runnerdocker""" Definitions for the traversal code """ import numpy as np cimport cython cimport numpy as np from .image_samplers cimport ImageSampler from .volume_container cimport VolumeContainer, vc_index, vc_pos_index ctypedef void sampler_function( VolumeContainer *vc, np.float64_t v_pos[3], np.float64_t v_dir[3], np.float64_t enter_t, np.float64_t exit_t, int index[3], void *data) noexcept nogil #----------------------------------------------------------------------------- # walk_volume(VolumeContainer *vc, np.float64_t v_pos[3], np.float64_t v_dir[3], sampler_function *sample, # void *data, np.float64_t *return_t = NULL, np.float64_t max_t = 1.0) # vc VolumeContainer* : Pointer to the volume container to be traversed. # v_pos np.float64_t[3] : The x,y,z coordinates of the ray's origin. # v_dir np.float64_t[3] : The x,y,z coordinates of the ray's direction. # sample sampler_function* : Pointer to the sampler function to be used. # return_t np.float64_t* : Pointer to the final value of t that is still inside the volume container. Defaulted to NULL. # max_t np.float64_t : The maximum value of t that the ray is allowed to travel. Defaulted to 1.0 (no restriction). # # Note: 't' is not time here. Rather, it is a factor representing the difference between the initial point 'v_pos' # and the end point, which we might call v_end. It is scaled such that v_pos + v_dir * t = v_pos at t = 0.0, and # v_end at t = 1.0. Therefore, if max_t is set to 1.0, there is no restriction on t. # # Written by the yt Development Team. 
# Encapsulates the Amanatides & Woo "Fast Traversal Voxel Algorithm" to walk over a volume container 'vc' # The function occurs in two phases, initialization and traversal. # See: https://www.researchgate.net/publication/2611491_A_Fast_Voxel_Traversal_Algorithm_for_Ray_Tracing # Returns: The number of voxels hit during the traversal phase. If the traversal phase is not reached, returns 0. #----------------------------------------------------------------------------- cdef int walk_volume(VolumeContainer *vc, np.float64_t v_pos[3], np.float64_t v_dir[3], sampler_function *sampler, void *data, np.float64_t *return_t = *, np.float64_t max_t = *) noexcept nogil ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/grid_traversal.pyx0000644000175100001770000003061514714401662020561 0ustar00runnerdocker# distutils: include_dirs = LIB_DIR # distutils: libraries = STD_LIBS # distutils: language = c++ # distutils: extra_compile_args = CPP14_FLAG # distutils: extra_link_args = CPP14_FLAG """ Simple integrators for the radiative transfer equation """ import numpy as np cimport cython cimport numpy as np from libc.math cimport atan2, cos, fabs, floor, sin, sqrt from yt.utilities.lib.fp_utils cimport fmin @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int walk_volume(VolumeContainer *vc, np.float64_t v_pos[3], np.float64_t v_dir[3], sampler_function *sample, void *data, np.float64_t *return_t = NULL, np.float64_t max_t = 1.0) noexcept nogil: cdef int cur_ind[3] cdef int step[3] cdef int x, y, i, hit, direction cdef np.float64_t intersect_t = 1.1 cdef np.float64_t iv_dir[3] cdef np.float64_t tmax[3] cdef np.float64_t tdelta[3] cdef np.float64_t exit_t = -1.0, enter_t = -1.0 cdef np.float64_t tl, temp_x, temp_y = -1 if max_t > 1.0: max_t = 1.0 direction = -1 if vc.left_edge[0] <= v_pos[0] and v_pos[0] < vc.right_edge[0] and \ vc.left_edge[1] <= v_pos[1] and v_pos[1] < vc.right_edge[1] and \ vc.left_edge[2] <= v_pos[2] and v_pos[2] < vc.right_edge[2]: intersect_t = 0.0 direction = 3 for i in range(3): if (v_dir[i] < 0): step[i] = -1 elif (v_dir[i] == 0.0): step[i] = 0 continue else: step[i] = 1 iv_dir[i] = 1.0/v_dir[i] if direction == 3: continue x = (i+1) % 3 y = (i+2) % 3 if step[i] > 0: tl = (vc.left_edge[i] - v_pos[i])*iv_dir[i] else: tl = (vc.right_edge[i] - v_pos[i])*iv_dir[i] temp_x = (v_pos[x] + tl*v_dir[x]) temp_y = (v_pos[y] + tl*v_dir[y]) if fabs(temp_x - vc.left_edge[x]) < 1e-10*vc.dds[x]: temp_x = vc.left_edge[x] elif fabs(temp_x - vc.right_edge[x]) < 1e-10*vc.dds[x]: temp_x = vc.right_edge[x] if fabs(temp_y - vc.left_edge[y]) < 1e-10*vc.dds[y]: temp_y = vc.left_edge[y] elif fabs(temp_y - vc.right_edge[y]) < 1e-10*vc.dds[y]: temp_y = vc.right_edge[y] if vc.left_edge[x] <= temp_x and temp_x <= vc.right_edge[x] and \ vc.left_edge[y] <= temp_y and temp_y <= vc.right_edge[y] and \ 0.0 <= tl and tl < intersect_t: direction = i intersect_t = tl if enter_t >= 0.0: intersect_t = enter_t if not ((0.0 <= intersect_t) and (intersect_t < max_t)): return 0 for i in range(3): # Two things have to be set inside this loop. 
# cur_ind[i], the current index of the grid cell the ray is in # tmax[i], the 't' until it crosses out of the grid cell tdelta[i] = step[i] * iv_dir[i] * vc.dds[i] if i == direction and step[i] > 0: # Intersection with the left face in this direction cur_ind[i] = 0 elif i == direction and step[i] < 0: # Intersection with the right face in this direction cur_ind[i] = vc.dims[i] - 1 else: # We are somewhere in the middle of the face temp_x = intersect_t * v_dir[i] + v_pos[i] # current position temp_y = ((temp_x - vc.left_edge[i])*vc.idds[i]) # There are some really tough cases where we just within a couple # least significant places of the edge, and this helps prevent # killing the calculation through a segfault in those cases. if -1 < temp_y < 0 and step[i] > 0: temp_y = 0.0 elif vc.dims[i] - 1 < temp_y < vc.dims[i] and step[i] < 0: temp_y = vc.dims[i] - 1 cur_ind[i] = (floor(temp_y)) if step[i] > 0: temp_y = (cur_ind[i] + 1) * vc.dds[i] + vc.left_edge[i] elif step[i] < 0: temp_y = cur_ind[i] * vc.dds[i] + vc.left_edge[i] tmax[i] = (temp_y - v_pos[i]) * iv_dir[i] if step[i] == 0: tmax[i] = 1e60 # We have to jumpstart our calculation for i in range(3): if cur_ind[i] == vc.dims[i] and step[i] >= 0: return 0 if cur_ind[i] == -1 and step[i] <= -1: return 0 enter_t = intersect_t hit = 0 while 1: hit += 1 if tmax[0] < tmax[1]: if tmax[0] < tmax[2]: i = 0 else: i = 2 else: if tmax[1] < tmax[2]: i = 1 else: i = 2 exit_t = fmin(tmax[i], max_t) sample(vc, v_pos, v_dir, enter_t, exit_t, cur_ind, data) cur_ind[i] += step[i] enter_t = tmax[i] tmax[i] += tdelta[i] if cur_ind[i] < 0 or cur_ind[i] >= vc.dims[i] or enter_t >= max_t: break if return_t != NULL: return_t[0] = exit_t return hit def hp_pix2vec_nest(long nside, long ipix): raise NotImplementedError # cdef double v[3] # healpix_interface.pix2vec_nest(nside, ipix, v) # cdef np.ndarray[np.float64_t, ndim=1] tr = np.empty((3,), dtype='float64') # tr[0] = v[0] # tr[1] = v[1] # tr[2] = v[2] # return tr def arr_pix2vec_nest(long nside, np.ndarray[np.int64_t, ndim=1] aipix): raise NotImplementedError # cdef int n = aipix.shape[0] # cdef int i # cdef double v[3] # cdef long ipix # cdef np.ndarray[np.float64_t, ndim=2] tr = np.zeros((n, 3), dtype='float64') # for i in range(n): # ipix = aipix[i] # healpix_interface.pix2vec_nest(nside, ipix, v) # tr[i,0] = v[0] # tr[i,1] = v[1] # tr[i,2] = v[2] # return tr def hp_vec2pix_nest(long nside, double x, double y, double z): raise NotImplementedError # cdef double v[3] # v[0] = x # v[1] = y # v[2] = z # cdef long ipix # healpix_interface.vec2pix_nest(nside, v, &ipix) # return ipix def arr_vec2pix_nest(long nside, np.ndarray[np.float64_t, ndim=1] x, np.ndarray[np.float64_t, ndim=1] y, np.ndarray[np.float64_t, ndim=1] z): raise NotImplementedError # cdef int n = x.shape[0] # cdef int i # cdef double v[3] # cdef long ipix # cdef np.ndarray[np.int64_t, ndim=1] tr = np.zeros(n, dtype='int64') # for i in range(n): # v[0] = x[i] # v[1] = y[i] # v[2] = z[i] # healpix_interface.vec2pix_nest(nside, v, &ipix) # tr[i] = ipix # return tr def hp_pix2ang_nest(long nside, long ipnest): raise NotImplementedError # cdef double theta, phi # healpix_interface.pix2ang_nest(nside, ipnest, &theta, &phi) # return (theta, phi) def arr_pix2ang_nest(long nside, np.ndarray[np.int64_t, ndim=1] aipnest): raise NotImplementedError # cdef int n = aipnest.shape[0] # cdef int i # cdef long ipnest # cdef np.ndarray[np.float64_t, ndim=2] tr = np.zeros((n, 2), dtype='float64') # cdef double theta, phi # for i in range(n): # ipnest = aipnest[i] # 
healpix_interface.pix2ang_nest(nside, ipnest, &theta, &phi) # tr[i,0] = theta # tr[i,1] = phi # return tr def hp_ang2pix_nest(long nside, double theta, double phi): raise NotImplementedError # cdef long ipix # healpix_interface.ang2pix_nest(nside, theta, phi, &ipix) # return ipix def arr_ang2pix_nest(long nside, np.ndarray[np.float64_t, ndim=1] atheta, np.ndarray[np.float64_t, ndim=1] aphi): raise NotImplementedError # cdef int n = atheta.shape[0] # cdef int i # cdef long ipnest # cdef np.ndarray[np.int64_t, ndim=1] tr = np.zeros(n, dtype='int64') # cdef double theta, phi # for i in range(n): # theta = atheta[i] # phi = aphi[i] # healpix_interface.ang2pix_nest(nside, theta, phi, &ipnest) # tr[i] = ipnest # return tr @cython.boundscheck(False) @cython.cdivision(False) @cython.wraparound(False) def pixelize_healpix(long nside, np.ndarray[np.float64_t, ndim=1] values, long ntheta, long nphi, np.ndarray[np.float64_t, ndim=2] irotation): raise NotImplementedError # # We will first to pix2vec, rotate, then calculate the angle # cdef int i, j, thetai, phii # cdef long ipix # cdef double v0[3], v1[3] # cdef double pi = 3.1415926 # cdef np.float64_t pi2 = pi/2.0 # cdef np.float64_t phi, theta # cdef np.ndarray[np.float64_t, ndim=2] results # cdef np.ndarray[np.int32_t, ndim=2] count # results = np.zeros((ntheta, nphi), dtype="float64") # count = np.zeros((ntheta, nphi), dtype="int32") # cdef np.float64_t phi0 = 0 # cdef np.float64_t dphi = 2.0 * pi/(nphi-1) # cdef np.float64_t theta0 = 0 # cdef np.float64_t dtheta = pi/(ntheta-1) # # We assume these are the rotated theta and phi # for thetai in range(ntheta): # theta = theta0 + dtheta * thetai # for phii in range(nphi): # phi = phi0 + dphi * phii # # We have our rotated vector # v1[0] = cos(phi) * sin(theta) # v1[1] = sin(phi) * sin(theta) # v1[2] = cos(theta) # # Now we rotate back # for i in range(3): # v0[i] = 0 # for j in range(3): # v0[i] += v1[j] * irotation[j,i] # # Get the pixel this vector is inside # healpix_interface.vec2pix_nest(nside, v0, &ipix) # results[thetai, phii] = values[ipix] # count[i, j] += 1 # return results, count def healpix_aitoff_proj(np.ndarray[np.float64_t, ndim=1] pix_image, long nside, np.ndarray[np.float64_t, ndim=2] image, np.ndarray[np.float64_t, ndim=2] irotation): raise NotImplementedError # cdef double pi = np.pi # cdef int i, j, k, l # cdef np.float64_t x, y, z, zb # cdef np.float64_t dx, dy, inside # cdef double v0[3], v1[3] # dx = 2.0 / (image.shape[1] - 1) # dy = 2.0 / (image.shape[0] - 1) # cdef np.float64_t s2 = sqrt(2.0) # cdef long ipix # for i in range(image.shape[1]): # x = (-1.0 + i*dx)*s2*2.0 # for j in range(image.shape[0]): # y = (-1.0 + j * dy)*s2 # zb = (x*x/8.0 + y*y/2.0 - 1.0) # if zb > 0: continue # z = (1.0 - (x/4.0)**2.0 - (y/2.0)**2.0) # z = sqrt(z) # # Longitude # phi = (2.0*atan(z*x/(2.0 * (2.0*z*z-1.0))) + pi) # # Latitude # # We shift it into co-latitude # theta = (asin(z*y) + pi/2.0) # # Now to account for rotation we translate into vectors # v1[0] = cos(phi) * sin(theta) # v1[1] = sin(phi) * sin(theta) # v1[2] = cos(theta) # for k in range(3): # v0[k] = 0 # for l in range(3): # v0[k] += v1[l] * irotation[l,k] # healpix_interface.vec2pix_nest(nside, v0, &ipix) # #print("Rotated", v0[0], v0[1], v0[2], v1[0], v1[1], v1[2], ipix, pix_image[ipix]) # image[j, i] = pix_image[ipix] def arr_fisheye_vectors(int resolution, np.float64_t fov, int nimx=1, int nimy=1, int nimi=0, int nimj=0, np.float64_t off_theta=0.0, np.float64_t off_phi=0.0): # We now follow figures 4-7 of: # 
http://paulbourke.net/miscellaneous/domefisheye/fisheye/ # ...but all in Cython. cdef np.ndarray[np.float64_t, ndim=3] vp cdef int i, j cdef np.float64_t r, phi, theta, px, py cdef np.float64_t fov_rad = fov * np.pi / 180.0 cdef int nx = resolution//nimx cdef int ny = resolution//nimy vp = np.zeros((nx,ny, 3), dtype="float64") for i in range(nx): px = (2.0 * (nimi*nx + i)) / resolution - 1.0 for j in range(ny): py = (2.0 * (nimj*ny + j)) / resolution - 1.0 r = sqrt(px*px + py*py) if r > 1.01: vp[i,j,0] = vp[i,j,1] = vp[i,j,2] = 0.0 continue phi = atan2(py, px) theta = r * fov_rad / 2.0 theta += off_theta phi += off_phi vp[i,j,0] = sin(theta) * cos(phi) vp[i,j,1] = sin(theta) * sin(phi) vp[i,j,2] = cos(theta) return vp ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/healpix_interface.pxd0000644000175100001770000000070714714401662021175 0ustar00runnerdocker""" A light interface to a few HEALPix routines """ import numpy as np cimport cython cimport numpy as np from libc.stdio cimport FILE, fclose, fopen cdef extern from "healpix_vectors.h": int pix2vec_nest(long nside, long ipix, double *v) void vec2pix_nest(long nside, double *vec, long *ipix) void pix2ang_nest(long nside, long ipix, double *theta, double *phi) void ang2pix_nest(long nside, double theta, double phi, long *ipix) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/image_samplers.pxd0000644000175100001770000000505514714401662020514 0ustar00runnerdocker""" Definitions for image samplers """ import numpy as np cimport cython cimport numpy as np from .partitioned_grid cimport PartitionedGrid from .volume_container cimport VolumeContainer cdef enum: Nch = 4 # NOTE: We don't want to import the field_interpolator_tables here, as it # breaks a bunch of C++ interop. Maybe some day it won't. So, we just forward # declare. 
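# For orientation: the struct below is only forward-declared in this .pxd;
# its full layout is the one given in image_samplers.pyx, roughly:
#
#     cdef struct VolumeRenderAccumulator:
#         int n_fits
#         int n_samples
#         FieldInterpolationTable *fits
#         ...
#
# Code that only needs to hold a VolumeRenderAccumulator* can therefore
# cimport this file without pulling the field interpolation tables into
# its C++ translation units.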
cdef struct VolumeRenderAccumulator ctypedef int calculate_extent_function(ImageSampler image, VolumeContainer *vc, np.int64_t rv[4]) except -1 nogil ctypedef void generate_vector_info_function(ImageSampler im, np.int64_t vi, np.int64_t vj, np.float64_t width[2], np.float64_t v_dir[3], np.float64_t v_pos[3]) noexcept nogil cdef struct ImageAccumulator: np.float64_t rgba[Nch] void *supp_data cdef class ImageSampler: cdef np.float64_t[:,:,:] vp_pos cdef np.float64_t[:,:,:] vp_dir cdef np.float64_t *center cdef np.float64_t[:,:,:] image cdef np.float64_t[:,:] zbuffer cdef np.int64_t[:,:] image_used cdef np.int64_t[:,:] mesh_lines cdef np.float64_t pdx, pdy cdef np.float64_t bounds[4] cdef np.float64_t[:,:] camera_data # position, width, unit_vec[0,2] cdef int nv[2] cdef np.float64_t *x_vec cdef np.float64_t *y_vec cdef public object acenter, aimage, ax_vec, ay_vec cdef public object azbuffer cdef public object aimage_used cdef public object amesh_lines cdef void *supp_data cdef np.float64_t width[3] cdef public object lens_type cdef public str volume_method cdef calculate_extent_function *extent_function cdef generate_vector_info_function *vector_function cdef void setup(self, PartitionedGrid pg) @staticmethod cdef void sample(VolumeContainer *vc, np.float64_t v_pos[3], np.float64_t v_dir[3], np.float64_t enter_t, np.float64_t exit_t, int index[3], void *data) noexcept nogil cdef class ProjectionSampler(ImageSampler): pass cdef class InterpolatedProjectionSampler(ImageSampler): cdef VolumeRenderAccumulator *vra cdef public object tf_obj cdef public object my_field_tables cdef class VolumeRenderSampler(ImageSampler): cdef VolumeRenderAccumulator *vra cdef public object tf_obj cdef public object my_field_tables cdef object tree_containers cdef class LightSourceRenderSampler(ImageSampler): cdef VolumeRenderAccumulator *vra cdef public object tf_obj cdef public object my_field_tables ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/image_samplers.pyx0000644000175100001770000005602414714401662020543 0ustar00runnerdocker# distutils: include_dirs = LIB_DIR # distutils: extra_compile_args = CPP14_FLAG OMP_ARGS # distutils: extra_link_args = CPP14_FLAG OMP_ARGS # distutils: libraries = STD_LIBS FIXED_INTERP # distutils: language = c++ """ Image sampler definitions """ import numpy as np cimport cython from libc.math cimport sqrt from libc.stdlib cimport free, malloc from yt.utilities.lib cimport lenses from yt.utilities.lib.fp_utils cimport fclip, i64clip, imin from .field_interpolation_tables cimport ( FieldInterpolationTable, FIT_eval_transfer, FIT_eval_transfer_with_light, FIT_initialize_table, ) from .fixed_interpolator cimport eval_gradient, offset_interpolate from .grid_traversal cimport sampler_function, walk_volume from yt.funcs import mylog from ._octree_raytracing cimport RayInfo, _OctreeRayTracing cdef extern from "platform_dep.h": long int lrint(double x) noexcept nogil from cython.parallel import parallel, prange from cpython.exc cimport PyErr_CheckSignals cdef struct VolumeRenderAccumulator: int n_fits int n_samples FieldInterpolationTable *fits int field_table_ids[6] np.float64_t star_coeff np.float64_t star_er np.float64_t star_sigma_num np.float64_t *light_dir np.float64_t *light_rgba int grey_opacity cdef class ImageSampler: def __init__(self, np.float64_t[:,:,:] vp_pos, np.float64_t[:,:,:] vp_dir, np.ndarray[np.float64_t, ndim=1] center, bounds, np.ndarray[np.float64_t, ndim=3] image, np.ndarray[np.float64_t, 
ndim=1] x_vec, np.ndarray[np.float64_t, ndim=1] y_vec, np.ndarray[np.float64_t, ndim=1] width, str volume_method, *args, **kwargs): cdef int i self.volume_method = volume_method camera_data = kwargs.pop("camera_data", None) if camera_data is not None: self.camera_data = camera_data zbuffer = kwargs.pop("zbuffer", None) if zbuffer is None: zbuffer = np.ones((image.shape[0], image.shape[1]), "float64") image_used = np.zeros((image.shape[0], image.shape[1]), "int64") mesh_lines = np.zeros((image.shape[0], image.shape[1]), "int64") self.lens_type = kwargs.pop("lens_type", None) if self.lens_type == "plane-parallel": self.extent_function = lenses.calculate_extent_plane_parallel self.vector_function = lenses.generate_vector_info_plane_parallel else: if not (vp_pos.shape[0] == vp_dir.shape[0] == image.shape[0]) or \ not (vp_pos.shape[1] == vp_dir.shape[1] == image.shape[1]): msg = "Bad lens shape / direction for %s\n" % (self.lens_type) msg += "Shapes: (%s - %s - %s) and (%s - %s - %s)" % ( vp_pos.shape[0], vp_dir.shape[0], image.shape[0], vp_pos.shape[1], vp_dir.shape[1], image.shape[1]) raise RuntimeError(msg) if camera_data is not None and self.lens_type == 'perspective': self.extent_function = lenses.calculate_extent_perspective else: self.extent_function = lenses.calculate_extent_null self.\ vector_function = lenses.generate_vector_info_null # These assignments are so we can track the objects and prevent their # de-allocation from reference counts. Note that we do this to the # "atleast_3d" versions. Also, note that we re-assign the input # arguments. self.vp_pos = vp_pos self.vp_dir = vp_dir self.image = self.aimage = image self.acenter = center self.center = center.data self.ax_vec = x_vec self.x_vec = x_vec.data self.ay_vec = y_vec self.y_vec = y_vec.data self.zbuffer = zbuffer self.azbuffer = np.asarray(zbuffer) self.image_used = image_used self.aimage_used = np.asarray(image_used) self.mesh_lines = mesh_lines self.amesh_lines = np.asarray(mesh_lines) self.nv[0] = image.shape[0] self.nv[1] = image.shape[1] for i in range(4): self.bounds[i] = bounds[i] self.pdx = (bounds[1] - bounds[0])/self.nv[0] self.pdy = (bounds[3] - bounds[2])/self.nv[1] for i in range(3): self.width[i] = width[i] def __call__(self, PartitionedGrid pg, **kwa): if self.volume_method == 'KDTree': return self.cast_through_kdtree(pg, **kwa) elif self.volume_method == 'Octree': return self.cast_through_octree(pg, **kwa) else: raise NotImplementedError( 'Volume rendering has not been implemented for method: "%s"' % self.volume_method ) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def cast_through_kdtree(self, PartitionedGrid pg, int num_threads = 0): # This routine will iterate over all of the vectors and cast each in # turn. 
Might benefit from a more sophisticated intersection check, # like http://courses.csusm.edu/cs697exz/ray_box.htm cdef int vi, vj, hit, i, j cdef VolumeContainer *vc = pg.container self.setup(pg) cdef np.float64_t *v_pos cdef np.float64_t *v_dir cdef np.float64_t max_t hit = 0 cdef np.int64_t nx, ny, size cdef np.int64_t iter[4] self.extent_function(self, vc, iter) iter[0] = i64clip(iter[0]-1, 0, self.nv[0]) iter[1] = i64clip(iter[1]+1, 0, self.nv[0]) iter[2] = i64clip(iter[2]-1, 0, self.nv[1]) iter[3] = i64clip(iter[3]+1, 0, self.nv[1]) nx = (iter[1] - iter[0]) ny = (iter[3] - iter[2]) size = nx * ny cdef ImageAccumulator *idata cdef np.float64_t width[3] cdef int chunksize = 100 for i in range(3): width[i] = self.width[i] with nogil, parallel(num_threads = num_threads): idata = malloc(sizeof(ImageAccumulator)) idata.supp_data = self.supp_data v_pos = malloc(3 * sizeof(np.float64_t)) v_dir = malloc(3 * sizeof(np.float64_t)) for j in prange(size, schedule="static", chunksize=chunksize): vj = j % ny vi = (j - vj) / ny + iter[0] vj = vj + iter[2] # Dynamically calculate the position self.vector_function(self, vi, vj, width, v_dir, v_pos) for i in range(Nch): idata.rgba[i] = self.image[vi, vj, i] max_t = fclip(self.zbuffer[vi, vj], 0.0, 1.0) walk_volume(vc, v_pos, v_dir, self.sample, ( idata), NULL, max_t) if (j % (10*chunksize)) == 0: with gil: PyErr_CheckSignals() for i in range(Nch): self.image[vi, vj, i] = idata.rgba[i] idata.supp_data = NULL free(idata) free(v_pos) free(v_dir) return hit @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def cast_through_octree(self, PartitionedGrid pg, _OctreeRayTracing oct, int num_threads = 0): cdef RayInfo[int]* ri self.setup(pg) cdef sampler_function* sampler = self.sample cdef np.int64_t nx, ny, size cdef VolumeContainer *vc cdef ImageAccumulator *idata nx = self.nv[0] ny = self.nv[1] size = nx * ny cdef int i, j, k, vi, vj, icell cdef int[3] index = [0, 0, 0] cdef int chunksize = 100 cdef int n_fields = pg.container.n_fields cdef np.float64_t vp_dir_len mylog.debug('Integrating rays') with nogil, parallel(num_threads=num_threads): idata = malloc(sizeof(ImageAccumulator)) idata.supp_data = self.supp_data ri = new RayInfo[int]() vc = malloc(sizeof(VolumeContainer)) vc.n_fields = 1 vc.data = malloc(sizeof(np.float64_t*)) vc.mask = malloc(8*sizeof(np.uint8_t)) # The actual dimensions are 2x2x2, but the sampler # assumes vertex-centred data for a 1x1x1 lattice (i.e. 
# 2^3 vertices) vc.dims[0] = 1 vc.dims[1] = 1 vc.dims[2] = 1 for j in prange(size, schedule='static', chunksize=chunksize): vj = j % ny vi = (j - vj) / ny vp_dir_len = sqrt( self.vp_dir[vi, vj, 0]**2 + self.vp_dir[vi, vj, 1]**2 + self.vp_dir[vi, vj, 2]**2) # Cast ray oct.oct.cast_ray(&self.vp_pos[vi, vj, 0], &self.vp_dir[vi, vj, 0], ri.keys, ri.t) # Contains the ordered indices of the cells hit by the ray # and the entry/exit t values if ri.keys.size() == 0: continue for i in range(Nch): idata.rgba[i] = self.image[vi, vj, i] for i in range(8): vc.mask[i] = 1 # Iterate over cells for i in range(ri.keys.size()): icell = ri.keys[i] for k in range(n_fields): vc.data[k] = &pg.container.data[k][14*icell] # Fill the volume container with the current boundaries for k in range(3): vc.left_edge[k] = pg.container.data[0][14*icell+8+k] vc.right_edge[k] = pg.container.data[0][14*icell+11+k] vc.dds[k] = (vc.right_edge[k] - vc.left_edge[k]) vc.idds[k] = 1/vc.dds[k] # Now call the sampler sampler( vc, &self.vp_pos[vi, vj, 0], &self.vp_dir[vi, vj, 0], ri.t[2*i ]/vp_dir_len, ri.t[2*i+1]/vp_dir_len, index, idata ) for i in range(Nch): self.image[vi, vj, i] = idata.rgba[i] # Empty keys and t ri.keys.clear() ri.t.clear() del ri free(vc.data) free(vc.mask) free(vc) idata.supp_data = NULL free(idata) mylog.debug('Done integration') cdef void setup(self, PartitionedGrid pg): return @staticmethod cdef void sample( VolumeContainer *vc, np.float64_t v_pos[3], np.float64_t v_dir[3], np.float64_t enter_t, np.float64_t exit_t, int index[3], void *data) noexcept nogil: return def ensure_code_unit_params(self, params): for param_name in ['center', 'vp_pos', 'vp_dir', 'width']: param = params[param_name] if hasattr(param, 'in_units'): params[param_name] = param.in_units('code_length') bounds = params['bounds'] if hasattr(bounds[0], 'units'): params['bounds'] = tuple(b.in_units('code_length').d for b in bounds) return params cdef class ProjectionSampler(ImageSampler): @staticmethod cdef void sample( VolumeContainer *vc, np.float64_t v_pos[3], np.float64_t v_dir[3], np.float64_t enter_t, np.float64_t exit_t, int index[3], void *data) noexcept nogil: cdef ImageAccumulator *im = data cdef int i cdef np.float64_t dl = (exit_t - enter_t) cdef int di = (index[0]*vc.dims[1]+index[1])*vc.dims[2]+index[2] for i in range(imin(4, vc.n_fields)): im.rgba[i] += vc.data[i][di] * dl cdef class InterpolatedProjectionSampler(ImageSampler): def __cinit__(self, np.ndarray vp_pos, np.ndarray vp_dir, np.ndarray[np.float64_t, ndim=1] center, bounds, np.ndarray[np.float64_t, ndim=3] image, np.ndarray[np.float64_t, ndim=1] x_vec, np.ndarray[np.float64_t, ndim=1] y_vec, np.ndarray[np.float64_t, ndim=1] width, str volume_method, n_samples = 10, **kwargs ): ImageSampler.__init__(self, vp_pos, vp_dir, center, bounds, image, x_vec, y_vec, width, volume_method, **kwargs) # Now we handle tf_obj self.vra = \ malloc(sizeof(VolumeRenderAccumulator)) self.vra.n_samples = n_samples self.supp_data = self.vra @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @staticmethod cdef void sample( VolumeContainer *vc, np.float64_t v_pos[3], np.float64_t v_dir[3], np.float64_t enter_t, np.float64_t exit_t, int index[3], void *data) noexcept nogil: cdef ImageAccumulator *im = data cdef VolumeRenderAccumulator *vri = \ im.supp_data # we assume this has vertex-centered data. 
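# For vertex-centered data, cell (i, j, k) of a dims[0] x dims[1] x dims[2]
# block interpolates from a (dims+1)^3 corner lattice, so the flattened
# index of the cell's low corner is
#     i*(dims[1]+1)*(dims[2]+1) + j*(dims[2]+1) + k
# Quick sanity check on a hypothetical 2x2x2 block (3x3x3 vertices):
# index (1, 1, 1) gives offset 1*9 + 1*3 + 1 = 13, the central vertex of
# the 27-vertex lattice, as expected. The ray segment [enter_t, exit_t] is
# then split into n_samples pieces of length dt, with dp starting half a
# step in so each sample sits at the midpoint of its sub-interval.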
cdef int offset = index[0] * (vc.dims[1] + 1) * (vc.dims[2] + 1) \ + index[1] * (vc.dims[2] + 1) + index[2] cdef np.float64_t dp[3] cdef np.float64_t ds[3] cdef np.float64_t dt = (exit_t - enter_t) / vri.n_samples cdef np.float64_t dvs[6] for i in range(3): dp[i] = (enter_t + 0.5 * dt) * v_dir[i] + v_pos[i] dp[i] -= index[i] * vc.dds[i] + vc.left_edge[i] dp[i] *= vc.idds[i] ds[i] = v_dir[i] * vc.idds[i] * dt for i in range(vri.n_samples): for j in range(vc.n_fields): dvs[j] = offset_interpolate(vc.dims, dp, vc.data[j] + offset) for j in range(imin(3, vc.n_fields)): im.rgba[j] += dvs[j] * dt for j in range(3): dp[j] += ds[j] cdef class VolumeRenderSampler(ImageSampler): def __cinit__(self, np.ndarray vp_pos, np.ndarray vp_dir, np.ndarray[np.float64_t, ndim=1] center, bounds, np.ndarray[np.float64_t, ndim=3] image, np.ndarray[np.float64_t, ndim=1] x_vec, np.ndarray[np.float64_t, ndim=1] y_vec, np.ndarray[np.float64_t, ndim=1] width, str volume_method, tf_obj, n_samples = 10, **kwargs ): ImageSampler.__init__(self, vp_pos, vp_dir, center, bounds, image, x_vec, y_vec, width, volume_method, **kwargs) cdef int i cdef np.ndarray[np.float64_t, ndim=1] temp # Now we handle tf_obj self.vra = \ malloc(sizeof(VolumeRenderAccumulator)) self.vra.fits = \ malloc(sizeof(FieldInterpolationTable) * 6) self.vra.n_fits = tf_obj.n_field_tables assert(self.vra.n_fits <= 6) self.vra.grey_opacity = getattr(tf_obj, "grey_opacity", 0) self.vra.n_samples = n_samples self.my_field_tables = [] for i in range(self.vra.n_fits): temp = tf_obj.tables[i].y FIT_initialize_table(&self.vra.fits[i], temp.shape[0], temp.data, tf_obj.tables[i].x_bounds[0], tf_obj.tables[i].x_bounds[1], tf_obj.field_ids[i], tf_obj.weight_field_ids[i], tf_obj.weight_table_ids[i]) self.my_field_tables.append((tf_obj.tables[i], tf_obj.tables[i].y)) for i in range(6): self.vra.field_table_ids[i] = tf_obj.field_table_ids[i] self.supp_data = self.vra @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @staticmethod cdef void sample( VolumeContainer *vc, np.float64_t v_pos[3], np.float64_t v_dir[3], np.float64_t enter_t, np.float64_t exit_t, int index[3], void *data) noexcept nogil: cdef ImageAccumulator *im = data cdef VolumeRenderAccumulator *vri = \ im.supp_data # we assume this has vertex-centered data. 
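# Unlike the projection samplers, this sampler (a) skips any cell whose
# entry in vc.mask is zero, and (b) feeds each interpolated sample through
# the transfer function (FIT_eval_transfer) so that color and opacity
# accumulate into im.rgba, rather than summing field values directly.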
cdef int offset = index[0] * (vc.dims[1] + 1) * (vc.dims[2] + 1) \ + index[1] * (vc.dims[2] + 1) + index[2] cdef int cell_offset = index[0] * (vc.dims[1]) * (vc.dims[2]) \ + index[1] * (vc.dims[2]) + index[2] if vc.mask[cell_offset] != 1: return cdef np.float64_t dp[3] cdef np.float64_t ds[3] cdef np.float64_t dt = (exit_t - enter_t) / vri.n_samples cdef np.float64_t dvs[6] for i in range(3): dp[i] = (enter_t + 0.5 * dt) * v_dir[i] + v_pos[i] dp[i] -= index[i] * vc.dds[i] + vc.left_edge[i] dp[i] *= vc.idds[i] ds[i] = v_dir[i] * vc.idds[i] * dt for i in range(vri.n_samples): for j in range(vc.n_fields): dvs[j] = offset_interpolate(vc.dims, dp, vc.data[j] + offset) FIT_eval_transfer(dt, dvs, im.rgba, vri.n_fits, vri.fits, vri.field_table_ids, vri.grey_opacity) for j in range(3): dp[j] += ds[j] def __dealloc__(self): for i in range(self.vra.n_fits): free(self.vra.fits[i].d0) free(self.vra.fits[i].dy) free(self.vra.fits) free(self.vra) cdef class LightSourceRenderSampler(ImageSampler): def __cinit__(self, np.ndarray vp_pos, np.ndarray vp_dir, np.ndarray[np.float64_t, ndim=1] center, bounds, np.ndarray[np.float64_t, ndim=3] image, np.ndarray[np.float64_t, ndim=1] x_vec, np.ndarray[np.float64_t, ndim=1] y_vec, np.ndarray[np.float64_t, ndim=1] width, str volume_method, tf_obj, n_samples = 10, light_dir=(1.,1.,1.), light_rgba=(1.,1.,1.,1.), **kwargs): ImageSampler.__init__(self, vp_pos, vp_dir, center, bounds, image, x_vec, y_vec, width, volume_method, **kwargs) cdef int i cdef np.ndarray[np.float64_t, ndim=1] temp # Now we handle tf_obj self.vra = \ malloc(sizeof(VolumeRenderAccumulator)) self.vra.fits = \ malloc(sizeof(FieldInterpolationTable) * 6) self.vra.n_fits = tf_obj.n_field_tables assert(self.vra.n_fits <= 6) self.vra.grey_opacity = getattr(tf_obj, "grey_opacity", 0) self.vra.n_samples = n_samples self.vra.light_dir = malloc(sizeof(np.float64_t) * 3) self.vra.light_rgba = malloc(sizeof(np.float64_t) * 4) light_dir /= np.sqrt(light_dir[0] * light_dir[0] + light_dir[1] * light_dir[1] + light_dir[2] * light_dir[2]) for i in range(3): self.vra.light_dir[i] = light_dir[i] for i in range(4): self.vra.light_rgba[i] = light_rgba[i] self.my_field_tables = [] for i in range(self.vra.n_fits): temp = tf_obj.tables[i].y FIT_initialize_table(&self.vra.fits[i], temp.shape[0], temp.data, tf_obj.tables[i].x_bounds[0], tf_obj.tables[i].x_bounds[1], tf_obj.field_ids[i], tf_obj.weight_field_ids[i], tf_obj.weight_table_ids[i]) self.my_field_tables.append((tf_obj.tables[i], tf_obj.tables[i].y)) for i in range(6): self.vra.field_table_ids[i] = tf_obj.field_table_ids[i] self.supp_data = self.vra @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) @staticmethod cdef void sample( VolumeContainer *vc, np.float64_t v_pos[3], np.float64_t v_dir[3], np.float64_t enter_t, np.float64_t exit_t, int index[3], void *data) noexcept nogil: cdef ImageAccumulator *im = data cdef VolumeRenderAccumulator *vri = \ im.supp_data # we assume this has vertex-centered data. 
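# This sampler additionally estimates the local gradient of field 0
# (eval_gradient) at every sample point and hands it to
# FIT_eval_transfer_with_light, which treats it as a surface normal and
# modulates the accumulated color by light_dir / light_rgba; in effect a
# simple directional (Lambertian-style) lighting model.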
cdef int offset = index[0] * (vc.dims[1] + 1) * (vc.dims[2] + 1) \ + index[1] * (vc.dims[2] + 1) + index[2] cdef np.float64_t dp[3] cdef np.float64_t ds[3] cdef np.float64_t dt = (exit_t - enter_t) / vri.n_samples cdef np.float64_t dvs[6] cdef np.float64_t *grad grad = malloc(3 * sizeof(np.float64_t)) for i in range(3): dp[i] = (enter_t + 0.5 * dt) * v_dir[i] + v_pos[i] dp[i] -= index[i] * vc.dds[i] + vc.left_edge[i] dp[i] *= vc.idds[i] ds[i] = v_dir[i] * vc.idds[i] * dt for i in range(vri.n_samples): for j in range(vc.n_fields): dvs[j] = offset_interpolate(vc.dims, dp, vc.data[j] + offset) eval_gradient(vc.dims, dp, vc.data[0] + offset, grad) FIT_eval_transfer_with_light(dt, dvs, grad, vri.light_dir, vri.light_rgba, im.rgba, vri.n_fits, vri.fits, vri.field_table_ids, vri.grey_opacity) for j in range(3): dp[j] += ds[j] free(grad) def __dealloc__(self): for i in range(self.vra.n_fits): free(self.vra.fits[i].d0) free(self.vra.fits[i].dy) free(self.vra.light_dir) free(self.vra.light_rgba) free(self.vra.fits) free(self.vra) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/image_utilities.pyx0000644000175100001770000003415114714401662020725 0ustar00runnerdocker # distutils: libraries = STD_LIBS # distutils: extra_compile_args = OMP_ARGS # distutils: extra_link_args = OMP_ARGS """ Utilities for images """ import numpy as np cimport numpy as np cimport cython from libc.math cimport ceil, floor, log2, sqrt from libc.stdlib cimport free, malloc from yt.utilities.lib.fp_utils cimport iclip, imin, imax, fclip, fmin, fmax from cython.parallel import prange, parallel @cython.wraparound(False) def add_points_to_greyscale_image( np.ndarray[np.float64_t, ndim=2] buffer, np.ndarray[np.uint8_t, ndim=2] buffer_mask, np.ndarray[np.float64_t, ndim=1] px, np.ndarray[np.float64_t, ndim=1] py, np.ndarray[np.float64_t, ndim=1] pv): cdef int i, j, pi cdef int npx = px.shape[0] cdef int xs = buffer.shape[0] cdef int ys = buffer.shape[1] for pi in range(npx): j = (xs * px[pi]) i = (ys * py[pi]) if (i < 0) or (i >= buffer.shape[0]) or (j < 0) or (j >= buffer.shape[1]): # some particles might intersect the image buffer # but actually be centered out of bounds. Skip those. 
# see https://github.com/yt-project/yt/issues/4603 continue buffer[i, j] += pv[pi] buffer_mask[i, j] = 1 return cdef inline int ij2idx(const int i, const int j, const int Nx) noexcept nogil: return i * Nx + j @cython.cdivision(True) @cython.wraparound(False) @cython.boundscheck(False) cdef void _add_cell_to_image_offaxis( const np.float64_t dx, const np.float64_t w, const np.float64_t q, const np.float64_t cell_max_width, const np.float64_t x, const np.float64_t y, const int Nx, const int Ny, const int Nsx, const int Nsy, np.float64_t* buffer, np.float64_t* buffer_weight, const int max_depth, const np.float64_t[:, :, ::1] stamp, const np.uint8_t[:, :, ::1] stamp_mask, ) noexcept nogil: cdef np.float64_t lx, rx, ly, ry cdef int j, k, depth cdef int jmin, jmax, kmin, kmax, jj1, jj2, kk1, kk2, itmp cdef np.float64_t cell_max_half_width = cell_max_width / 2 cdef np.float64_t xx1, xx2, yy1, yy2, dvx, dvy, sw, sq, tmp, dx_loc, dy_loc cdef np.float64_t dx3 = dx * dx * dx * Nx * Ny lx = x - cell_max_half_width rx = x + cell_max_half_width ly = y - cell_max_half_width ry = y + cell_max_half_width # Compute the range of pixels that the cell may overlap jmin = imax(floor(lx * Nx), 0) jmax = imax(ceil(rx * Nx), Nx - 1) kmin = imin(floor(ly * Ny), 0) kmax = imax(ceil(ry * Ny), Ny - 1) # If the cell is fully within one pixel if (jmax == jmin + 1) and (kmax == kmin + 1): buffer[ij2idx(jmin, kmin, Nx)] += q * dx3 buffer_weight[ij2idx(jmin, kmin, Nx)] += w * dx3 return # Our 'stamp' has multiple resolutions, select the one # that is at a higher resolution than the pixel # we are projecting onto with at least 4 pixels on the diagonal depth = iclip( (ceil(log2(4 * sqrt(3) * dx * fmax(Nx, Ny)))), 1, max_depth - 1, ) jmax = imin(Nsx, 1 << depth) kmax = imin(Nsy, 1 << depth) dx_loc = cell_max_width / jmax dy_loc = cell_max_width / kmax for j in range(jmax): xx1 = ((j - jmax / 2.) * dx_loc + x) * Nx xx2 = ((j + 1 - jmax / 2.) * dx_loc + x) * Nx jj1 = xx1 jj2 = xx2 # The subcell is out of the projected area if jj2 < 0 or jj1 >= Nx: continue # Fraction of overlap with the pixel in x direction dvx = fclip((jj2 - xx1) / (xx2 - xx1), 0., 1.) for k in range(kmax): if stamp_mask[depth, j, k] == 0: continue yy1 = ((k - kmax / 2.) * dy_loc + y) * Ny yy2 = ((k + 1 - kmax / 2.) * dy_loc + y) * Ny kk1 = yy1 kk2 = yy2 # The subcell is out of the projected area if kk2 < 0 or kk1 >= Ny: continue tmp = stamp[depth, j, k] * dx3 sw = tmp * w sq = tmp * q # Fraction of overlap with the pixel in y direction dvy = fclip((kk2 - yy1) / (yy2 - yy1), 0., 1.) 
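# The subcell can straddle a pixel boundary in both x and y, so its
# contribution is split bilinearly over up to four pixels:
#     (jj1, kk1): dvx*dvy            (jj1, kk2): dvx*(1 - dvy)
#     (jj2, kk1): (1 - dvx)*dvy      (jj2, kk2): (1 - dvx)*(1 - dvy)
# e.g. dvx = 0.25, dvy = 0.5 deposits fractions 0.125, 0.125, 0.375 and
# 0.375, which sum to 1 and so conserve the subcell's total weight.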
if jj1 >= 0 and kk1 >= 0: tmp = dvx * dvy itmp = ij2idx(jj1, kk1, Nx) buffer[itmp] += sq * tmp buffer_weight[itmp] += sw * tmp if jj1 >= 0 and kk2 < Ny: tmp = dvx * (1 - dvy) itmp = ij2idx(jj1, kk2, Nx) buffer[itmp] += sq * tmp buffer_weight[itmp] += sw * tmp if jj2 < Nx and kk1 >= 0: tmp = (1 - dvx) * dvy itmp = ij2idx(jj2, kk1, Nx) buffer[itmp] += sq * tmp buffer_weight[itmp] += sw * tmp if jj2 < Nx and kk2 < Ny: tmp = (1 - dvx) * (1 - dvy) itmp = ij2idx(jj2, kk2, Nx) buffer[itmp] += sq * tmp buffer_weight[itmp] += sw * tmp @cython.boundscheck(False) @cython.cdivision(True) @cython.wraparound(False) def add_cells_to_image_offaxis( *, const np.float64_t[:, ::1] Xp, const np.float64_t[::1] dXp, const np.float64_t[::1] qty, const np.float64_t[::1] weight, const np.float64_t[:, :] rotation, np.float64_t[:, ::1] buffer, np.float64_t[:, ::1] buffer_weight, const int Nx, const int Ny, const int Npix_min = 4, ): cdef np.ndarray[np.float64_t, ndim=1] center = np.array([0.5, 0.5, 0.5]) cdef np.float64_t w0 = 1 / sqrt(3.) cdef int i, j, k cdef np.ndarray[np.float64_t, ndim=1] a = np.array([1., 0, 0]) * w0 cdef np.ndarray[np.float64_t, ndim=1] b = np.array([0, 1., 0]) * w0 cdef np.ndarray[np.float64_t, ndim=1] c = np.array([0, 0, 1.]) * w0 a = np.dot(rotation, a) b = np.dot(rotation, b) c = np.dot(rotation, c) cdef np.ndarray[np.float64_t, ndim=1] o = center - (a + b + c) / 2 cdef int Nsx, Nsy cdef np.float64_t dx_max = np.max(dXp) # The largest cell needs to be resolved by at least this number of pixels Nsx = max(Npix_min, int(ceil(2 * dx_max * sqrt(3) * Nx))) Nsy = max(Npix_min, int(ceil(2 * dx_max * sqrt(3) * Ny))) cdef int max_depth = int(ceil(log2(max(Nsx, Nsy)))) cdef int depth cdef np.ndarray[np.float64_t, ndim=3] stamp_arr = np.zeros((max_depth, Nsx, Nsy), dtype=float) cdef np.ndarray[np.uint8_t, ndim=3] stamp_mask_arr = np.zeros((max_depth, Nsx, Nsy), dtype=np.uint8) cdef np.float64_t[:, :, ::1] stamp = stamp_arr cdef np.uint8_t[:, :, ::1] stamp_mask = stamp_mask_arr # Precompute the mip for depth in range(max_depth): if depth == 0: stamp[0, 0, 0] = 1 continue direct_integrate_cube( o, a, b, c, stamp_arr[depth, :, :], stamp_mask_arr[depth, :, :], imin(1 << depth, Nsx), imin(1 << depth, Nsy), ) stamp_arr[depth] /= np.sum(stamp[depth]) # Iterate over all cells, applying the stamp cdef np.float64_t x, y, dx cdef np.float64_t[:, ::1] rotation_view = np.ascontiguousarray(rotation) cdef np.float64_t w, q, cell_max_width, sq3 sq3 = sqrt(3.) 
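# Note that sq3 = sqrt(3) converts a cell's side length dx into its body
# diagonal, an upper bound on the cell's projected footprint under any
# rotation, so the stamp placement below cannot miss covered pixels.
# Each thread in the parallel region accumulates into private lbuffer /
# lbuffer_weight copies that are reduced into the shared buffers once its
# loop finishes, avoiding concurrent writes to the same pixel.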
# Local buffers cdef np.float64_t *lbuffer cdef np.float64_t *lbuffer_weight cdef int num_particles = len(Xp) with nogil, parallel(): lbuffer = malloc(sizeof(np.float64_t*) * Nx * Ny) lbuffer_weight = malloc(sizeof(np.float64_t*) * Nx * Ny) for j in range(Nx * Ny): lbuffer[j] = 0 lbuffer_weight[j] = 0 for i in prange(num_particles, schedule="runtime"): dx = dXp[i] w = weight[i] q = qty[i] cell_max_width = dx * sq3 x = ( rotation_view[0, 0] * Xp[i, 0] + rotation_view[0, 1] * Xp[i, 1] + rotation_view[0, 2] * Xp[i, 2] ) + 0.5 y = ( rotation_view[1, 0] * Xp[i, 0] + rotation_view[1, 1] * Xp[i, 1] + rotation_view[1, 2] * Xp[i, 2] ) + 0.5 _add_cell_to_image_offaxis( dx, w, q, cell_max_width, x, y, Nx, Ny, Nsx, Nsy, lbuffer, lbuffer_weight, max_depth, stamp, stamp_mask ) # Copy back data in main buffer with gil: for j in range(Nx): for k in range(Ny): buffer[j, k] += lbuffer[ij2idx(j, k, Nx)] buffer_weight[j, k] += lbuffer_weight[ij2idx(j, k, Nx)] # Free memory free(lbuffer) free(lbuffer_weight) @cython.boundscheck(False) @cython.wraparound(False) cdef inline np.float64_t det2d(const np.float64_t[::1] a, const np.float64_t[::1] b) noexcept nogil: return a[0] * b[1] - a[1] * b[0] @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) cdef bint check_in_parallelogram( const np.float64_t[::1] PA, const np.float64_t[::1] PQ, const np.float64_t[::1] PR, const int signPQ, const int signPR, np.float64_t[2] out ) noexcept nogil: cdef np.float64_t det_PQR = det2d(PQ, PR) if det_PQR == 0: out[0] = -1 out[1] = -1 return False out[0] = -det2d(PA, PQ) / det_PQR out[1] = det2d(PA, PR) / det_PQR if 0 <= signPQ * out[0] <= 1 and 0 <= signPR * out[1] <= 1: return True out[0] = -1 out[1] = -1 return False @cython.boundscheck(False) @cython.cdivision(True) cdef int direct_integrate_cube( np.ndarray[np.float64_t, ndim=1] O, np.ndarray[np.float64_t, ndim=1] u, np.ndarray[np.float64_t, ndim=1] v, np.ndarray[np.float64_t, ndim=1] w, np.ndarray[np.float64_t, ndim=2] buffer, np.ndarray[np.uint8_t, ndim=2] buffer_mask, const int Nx, const int Ny, ) except -1: """ Compute depth of cube from direct integration of entry/exit points of rays """ cdef np.float64_t[::1] u2d = u[:2] cdef np.float64_t[::1] v2d = v[:2] cdef np.float64_t[::1] w2d = w[:2] cdef np.float64_t[::1] Oback = O + u + v + w cdef np.float64_t[::1] X = np.zeros(2) cdef np.float64_t[::1] OfrontA = np.zeros(2) cdef np.float64_t[::1] ObackA = np.zeros(2) cdef np.float64_t inv_dx = 1. / Nx cdef np.float64_t inv_dy = 1. 
/ Ny cdef np.float64_t[2] nm cdef bint within cdef np.float64_t zmin, zmax, z cdef int Nhit, i, j for i in range(Nx): X[0] = (i + 0.5) * inv_dx OfrontA[0] = X[0] - O[0] ObackA[0] = X[0] - Oback[0] for j in range(Ny): zmin = np.inf zmax = -np.inf Nhit = 0 X[1] = (j + 0.5) * inv_dy OfrontA[1] = X[1] - O[1] ObackA[1] = X[1] - Oback[1] within = check_in_parallelogram(OfrontA, v2d, u2d, 1, 1, nm) if within: z = O[2] + nm[0] * u[2] + nm[1] * v[2] zmin = fmin(z, zmin) zmax = fmax(z, zmax) Nhit += 1 within = check_in_parallelogram(OfrontA, w2d, v2d, 1, 1, nm) if within: z = O[2] + nm[0] * v[2] + nm[1] * w[2] zmin = fmin(z, zmin) zmax = fmax(z, zmax) Nhit += 1 within = check_in_parallelogram(OfrontA, w2d, u2d, 1, 1, nm) if within: z = O[2] + nm[0] * u[2] + nm[1] * w[2] zmin = fmin(z, zmin) zmax = fmax(z, zmax) Nhit += 1 within = check_in_parallelogram(ObackA, v2d, u2d, -1, -1, nm) if within: z = Oback[2] + nm[0] * u[2] + nm[1] * v[2] zmin = fmin(z, zmin) zmax = fmax(z, zmax) Nhit += 1 within = check_in_parallelogram(ObackA, w2d, v2d, -1, -1, nm) if within: z = Oback[2] + nm[0] * v[2] + nm[1] * w[2] zmin = fmin(z, zmin) zmax = fmax(z, zmax) Nhit += 1 within = check_in_parallelogram(ObackA, w2d, u2d, -1, -1, nm) if within: z = Oback[2] + nm[0] * u[2] + nm[1] * w[2] zmin = fmin(z, zmin) zmax = fmax(z, zmax) Nhit += 1 if Nhit == 0: continue elif Nhit == 1: raise RuntimeError("This should not happen") else: buffer[i, j] += zmax - zmin buffer_mask[i, j] = 1 def add_points_to_image( np.ndarray[np.uint8_t, ndim=3] buffer, np.ndarray[np.float64_t, ndim=1] px, np.ndarray[np.float64_t, ndim=1] py, np.float64_t pv): cdef int i, j, k, pi cdef int npx = px.shape[0] cdef int xs = buffer.shape[0] cdef int ys = buffer.shape[1] cdef int v v = iclip((pv * 255), 0, 255) for pi in range(npx): j = (xs * px[pi]) i = (ys * py[pi]) for k in range(3): buffer[i, j, k] = v buffer[i, j, 3] = 255 return def add_rgba_points_to_image( np.ndarray[np.float64_t, ndim=3] buffer, np.ndarray[np.float64_t, ndim=1] px, np.ndarray[np.float64_t, ndim=1] py, np.ndarray[np.float64_t, ndim=2] rgba, ): """ Splat rgba points onto an image Given an image buffer, add colors to pixels defined by fractional positions px and py, with colors rgba. px and py are one dimensional arrays, and rgba is a an array of rgba values. 
""" cdef int i, j, k, pi cdef int npart = px.shape[0] cdef int xs = buffer.shape[0] cdef int ys = buffer.shape[1] #iv = iclip((pv * 255), 0, 255) for pi in range(npart): j = (xs * px[pi]) i = (ys * py[pi]) if i < 0 or j < 0 or i >= xs or j >= ys: continue for k in range(4): buffer[i, j, k] += rgba[pi, k] return ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/interpolators.pyx0000644000175100001770000002315714714401662020461 0ustar00runnerdocker # distutils: libraries = STD_LIBS """ Simple interpolators """ import numpy as np cimport cython cimport numpy as np from yt.utilities.lib.fp_utils cimport iclip @cython.cdivision(True) @cython.wraparound(False) @cython.boundscheck(False) cpdef void UnilinearlyInterpolate(np.ndarray[np.float64_t, ndim=1] table, np.ndarray[np.float64_t, ndim=1] x_vals, np.ndarray[np.float64_t, ndim=1] x_bins, np.ndarray[np.int32_t, ndim=1] x_is, np.ndarray[np.float64_t, ndim=1] output): cdef double x, xp, xm cdef int i, x_i for i in range(x_vals.shape[0]): x_i = x_is[i] x = x_vals[i] dx_inv = 1.0 / (x_bins[x_i+1] - x_bins[x_i]) xp = (x - x_bins[x_i]) * dx_inv xm = (x_bins[x_i+1] - x) * dx_inv output[i] = table[x_i ] * (xm) \ + table[x_i+1] * (xp) @cython.cdivision(True) @cython.wraparound(False) @cython.boundscheck(False) cpdef void BilinearlyInterpolate(np.ndarray[np.float64_t, ndim=2] table, np.ndarray[np.float64_t, ndim=1] x_vals, np.ndarray[np.float64_t, ndim=1] y_vals, np.ndarray[np.float64_t, ndim=1] x_bins, np.ndarray[np.float64_t, ndim=1] y_bins, np.ndarray[np.int32_t, ndim=1] x_is, np.ndarray[np.int32_t, ndim=1] y_is, np.ndarray[np.float64_t, ndim=1] output): cdef double x, xp, xm cdef double y, yp, ym cdef double dx_inv, dy_inv cdef int i, x_i, y_i for i in range(x_vals.shape[0]): x_i = x_is[i] y_i = y_is[i] x = x_vals[i] y = y_vals[i] dx_inv = 1.0 / (x_bins[x_i+1] - x_bins[x_i]) dy_inv = 1.0 / (y_bins[y_i+1] - y_bins[y_i]) xp = (x - x_bins[x_i]) * dx_inv yp = (y - y_bins[y_i]) * dy_inv xm = (x_bins[x_i+1] - x) * dx_inv ym = (y_bins[y_i+1] - y) * dy_inv output[i] = table[x_i , y_i ] * (xm*ym) \ + table[x_i+1, y_i ] * (xp*ym) \ + table[x_i , y_i+1] * (xm*yp) \ + table[x_i+1, y_i+1] * (xp*yp) @cython.cdivision(True) @cython.wraparound(False) @cython.boundscheck(False) cpdef void TrilinearlyInterpolate(np.ndarray[np.float64_t, ndim=3] table, np.ndarray[np.float64_t, ndim=1] x_vals, np.ndarray[np.float64_t, ndim=1] y_vals, np.ndarray[np.float64_t, ndim=1] z_vals, np.ndarray[np.float64_t, ndim=1] x_bins, np.ndarray[np.float64_t, ndim=1] y_bins, np.ndarray[np.float64_t, ndim=1] z_bins, np.ndarray[np.int64_t, ndim=1] x_is, np.ndarray[np.int64_t, ndim=1] y_is, np.ndarray[np.int64_t, ndim=1] z_is, np.ndarray[np.float64_t, ndim=1] output): cdef double x, xp, xm cdef double y, yp, ym cdef double z, zp, zm cdef double dx_inv, dy_inv, dz_inv cdef int i, x_i, y_i, z_i for i in range(x_vals.shape[0]): x_i = x_is[i] y_i = y_is[i] z_i = z_is[i] x = x_vals[i] y = y_vals[i] z = z_vals[i] dx_inv = 1.0 / (x_bins[x_i+1] - x_bins[x_i]) dy_inv = 1.0 / (y_bins[y_i+1] - y_bins[y_i]) dz_inv = 1.0 / (z_bins[z_i+1] - z_bins[z_i]) xp = (x - x_bins[x_i]) * dx_inv yp = (y - y_bins[y_i]) * dy_inv zp = (z - z_bins[z_i]) * dz_inv xm = (x_bins[x_i+1] - x) * dx_inv ym = (y_bins[y_i+1] - y) * dy_inv zm = (z_bins[z_i+1] - z) * dz_inv output[i] = table[x_i ,y_i ,z_i ] * (xm*ym*zm) \ + table[x_i+1,y_i ,z_i ] * (xp*ym*zm) \ + table[x_i ,y_i+1,z_i ] * (xm*yp*zm) \ + table[x_i ,y_i ,z_i+1] * (xm*ym*zp) \ + table[x_i+1,y_i ,z_i+1] 
* (xp*ym*zp) \ + table[x_i ,y_i+1,z_i+1] * (xm*yp*zp) \ + table[x_i+1,y_i+1,z_i ] * (xp*yp*zm) \ + table[x_i+1,y_i+1,z_i+1] * (xp*yp*zp) @cython.cdivision(True) @cython.wraparound(False) @cython.boundscheck(False) cpdef void QuadrilinearlyInterpolate(np.ndarray[np.float64_t, ndim=4] table, np.ndarray[np.float64_t, ndim=1] x_vals, np.ndarray[np.float64_t, ndim=1] y_vals, np.ndarray[np.float64_t, ndim=1] z_vals, np.ndarray[np.float64_t, ndim=1] w_vals, np.ndarray[np.float64_t, ndim=1] x_bins, np.ndarray[np.float64_t, ndim=1] y_bins, np.ndarray[np.float64_t, ndim=1] z_bins, np.ndarray[np.float64_t, ndim=1] w_bins, np.ndarray[np.int64_t, ndim=1] x_is, np.ndarray[np.int64_t, ndim=1] y_is, np.ndarray[np.int64_t, ndim=1] z_is, np.ndarray[np.int64_t, ndim=1] w_is, np.ndarray[np.float64_t, ndim=1] output): cdef double x, xp, xm cdef double y, yp, ym cdef double z, zp, zm cdef double w, wp, wm cdef double dx_inv, dy_inv, dz_inv, dw_inv cdef int i, x_i, y_i, z_i, w_i for i in range(x_vals.shape[0]): x_i = x_is[i] y_i = y_is[i] z_i = z_is[i] w_i = w_is[i] x = x_vals[i] y = y_vals[i] z = z_vals[i] w = w_vals[i] dx_inv = 1.0 / (x_bins[x_i+1] - x_bins[x_i]) dy_inv = 1.0 / (y_bins[y_i+1] - y_bins[y_i]) dz_inv = 1.0 / (z_bins[z_i+1] - z_bins[z_i]) dw_inv = 1.0 / (w_bins[w_i+1] - w_bins[w_i]) xp = (x - x_bins[x_i]) * dx_inv yp = (y - y_bins[y_i]) * dy_inv zp = (z - z_bins[z_i]) * dz_inv wp = (w - w_bins[w_i]) * dw_inv xm = (x_bins[x_i+1] - x) * dx_inv ym = (y_bins[y_i+1] - y) * dy_inv zm = (z_bins[z_i+1] - z) * dz_inv wm = (w_bins[w_i+1] - w) * dw_inv output[i] = table[x_i ,y_i ,z_i ,w_i ] * (xm*ym*zm*wm) \ + table[x_i+1,y_i ,z_i ,w_i ] * (xp*ym*zm*wm) \ + table[x_i ,y_i+1,z_i ,w_i ] * (xm*yp*zm*wm) \ + table[x_i ,y_i ,z_i+1,w_i ] * (xm*ym*zp*wm) \ + table[x_i ,y_i ,z_i ,w_i+1] * (xm*ym*zm*wp) \ + table[x_i+1,y_i ,z_i ,w_i+1] * (xp*ym*zm*wp) \ + table[x_i ,y_i+1,z_i ,w_i+1] * (xm*yp*zm*wp) \ + table[x_i ,y_i ,z_i+1,w_i+1] * (xm*ym*zp*wp) \ + table[x_i+1,y_i ,z_i+1,w_i ] * (xp*ym*zp*wm) \ + table[x_i ,y_i+1,z_i+1,w_i ] * (xm*yp*zp*wm) \ + table[x_i+1,y_i+1,z_i ,w_i ] * (xp*yp*zm*wm) \ + table[x_i+1,y_i ,z_i+1,w_i+1] * (xp*ym*zp*wp) \ + table[x_i ,y_i+1,z_i+1,w_i+1] * (xm*yp*zp*wp) \ + table[x_i+1,y_i+1,z_i ,w_i+1] * (xp*yp*zm*wp) \ + table[x_i+1,y_i+1,z_i+1,w_i ] * (xp*yp*zp*wm) \ + table[x_i+1,y_i+1,z_i+1,w_i+1] * (xp*yp*zp*wp) @cython.cdivision(True) @cython.wraparound(False) @cython.boundscheck(False) cpdef void ghost_zone_interpolate(int rf, np.ndarray[np.float64_t, ndim=3] input_field, np.ndarray[np.float64_t, ndim=1] input_left, np.ndarray[np.float64_t, ndim=3] output_field, np.ndarray[np.float64_t, ndim=1] output_left): cdef int oi, oj, ok cdef int ii, ij, ik cdef np.float64_t xp, xm, yp, ym, zp, zm, temp cdef np.float64_t ods[3] cdef np.float64_t ids[3] cdef np.float64_t iids[3] cdef np.float64_t opos[3] cdef np.float64_t ropos[3] cdef int i for i in range(3): temp = input_left[i] + (rf * (input_field.shape[i] - 1)) ids[i] = (temp - input_left[i])/(input_field.shape[i]-1) temp = output_left[i] + output_field.shape[i] - 1 ods[i] = (temp - output_left[i])/(output_field.shape[i]-1) iids[i] = 1.0/ids[i] opos[0] = output_left[0] for oi in range(output_field.shape[0]): ropos[0] = ((opos[0] - input_left[0]) * iids[0]) ii = iclip( ropos[0], 0, input_field.shape[0] - 2) xp = ropos[0] - ii xm = 1.0 - xp opos[1] = output_left[1] for oj in range(output_field.shape[1]): ropos[1] = ((opos[1] - input_left[1]) * iids[1]) ij = iclip( ropos[1], 0, input_field.shape[1] - 2) yp = ropos[1] - ij ym = 1.0 - yp opos[2] = 
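# A minimal pure-NumPy sketch of what BilinearlyInterpolate computes, handy as
# a cross-check when debugging.  This helper is illustrative only and is not
# part of yt's compiled API; `x_is`/`y_is` are the left-bin indices, exactly
# as the cpdef variants above expect them.
def _bilinear_reference(table, x_vals, y_vals, x_bins, y_bins, x_is, y_is):
    # Fractional position within each bin; xm = 1 - xp by construction.
    xp = (x_vals - x_bins[x_is]) / (x_bins[x_is + 1] - x_bins[x_is])
    yp = (y_vals - y_bins[y_is]) / (y_bins[y_is + 1] - y_bins[y_is])
    xm = 1.0 - xp
    ym = 1.0 - yp
    return (table[x_is, y_is] * xm * ym
            + table[x_is + 1, y_is] * xp * ym
            + table[x_is, y_is + 1] * xm * yp
            + table[x_is + 1, y_is + 1] * xp * yp)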
@cython.cdivision(True)
@cython.wraparound(False)
@cython.boundscheck(False)
cpdef void ghost_zone_interpolate(int rf,
                                  np.ndarray[np.float64_t, ndim=3] input_field,
                                  np.ndarray[np.float64_t, ndim=1] input_left,
                                  np.ndarray[np.float64_t, ndim=3] output_field,
                                  np.ndarray[np.float64_t, ndim=1] output_left):
    cdef int oi, oj, ok
    cdef int ii, ij, ik
    cdef np.float64_t xp, xm, yp, ym, zp, zm, temp
    cdef np.float64_t ods[3]
    cdef np.float64_t ids[3]
    cdef np.float64_t iids[3]
    cdef np.float64_t opos[3]
    cdef np.float64_t ropos[3]
    cdef int i
    for i in range(3):
        temp = input_left[i] + (rf * (input_field.shape[i] - 1))
        ids[i] = (temp - input_left[i])/(input_field.shape[i]-1)
        temp = output_left[i] + output_field.shape[i] - 1
        ods[i] = (temp - output_left[i])/(output_field.shape[i]-1)
        iids[i] = 1.0/ids[i]
    opos[0] = output_left[0]
    for oi in range(output_field.shape[0]):
        ropos[0] = ((opos[0] - input_left[0]) * iids[0])
        ii = iclip(<int> ropos[0], 0, input_field.shape[0] - 2)
        xp = ropos[0] - ii
        xm = 1.0 - xp
        opos[1] = output_left[1]
        for oj in range(output_field.shape[1]):
            ropos[1] = ((opos[1] - input_left[1]) * iids[1])
            ij = iclip(<int> ropos[1], 0, input_field.shape[1] - 2)
            yp = ropos[1] - ij
            ym = 1.0 - yp
            opos[2] = output_left[2]
            for ok in range(output_field.shape[2]):
                ropos[2] = ((opos[2] - input_left[2]) * iids[2])
                ik = iclip(<int> ropos[2], 0, input_field.shape[2] - 2)
                zp = ropos[2] - ik
                zm = 1.0 - zp
                output_field[oi,oj,ok] = \
                      input_field[ii  ,ij  ,ik  ] * (xm*ym*zm) \
                    + input_field[ii+1,ij  ,ik  ] * (xp*ym*zm) \
                    + input_field[ii  ,ij+1,ik  ] * (xm*yp*zm) \
                    + input_field[ii  ,ij  ,ik+1] * (xm*ym*zp) \
                    + input_field[ii+1,ij  ,ik+1] * (xp*ym*zp) \
                    + input_field[ii  ,ij+1,ik+1] * (xm*yp*zp) \
                    + input_field[ii+1,ij+1,ik  ] * (xp*yp*zm) \
                    + input_field[ii+1,ij+1,ik+1] * (xp*yp*zp)
                opos[2] += ods[2]
            opos[1] += ods[1]
        opos[0] += ods[0]
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/lenses.pxd0000644000175100001770000000173514714401662017016 0ustar00runnerdocker
"""
Definitions for the lens code
"""

import numpy as np

cimport cython
cimport numpy as np
from libc.math cimport (
    M_PI,
    acos,
    asin,
    atan,
    atan2,
    cos,
    exp,
    fabs,
    floor,
    log2,
    sin,
    sqrt,
)

from yt.utilities.lib.fp_utils cimport fclip, fmax, fmin, i64clip, iclip, imax, imin

from .image_samplers cimport (
    ImageSampler,
    calculate_extent_function,
    generate_vector_info_function,
)
from .vec3_ops cimport L2_norm, dot, fma, subtract
from .volume_container cimport VolumeContainer

cdef extern from "platform_dep.h":
    long int lrint(double x) noexcept nogil

cdef extern from "limits.h":
    cdef int SHRT_MAX

cdef generate_vector_info_function generate_vector_info_plane_parallel
cdef generate_vector_info_function generate_vector_info_null
cdef calculate_extent_function calculate_extent_plane_parallel
cdef calculate_extent_function calculate_extent_perspective
cdef calculate_extent_function calculate_extent_null
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/lenses.pyx0000644000175100001770000001611314714401662017037 0ustar00runnerdocker
# distutils: libraries = STD_LIBS
"""
Functions for computing the extent of lenses and whatnot
"""

import numpy as np

cimport cython
cimport numpy as np

from .image_samplers cimport ImageSampler


@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True)
cdef int calculate_extent_plane_parallel(ImageSampler image,
            VolumeContainer *vc, np.int64_t rv[4]) except -1 nogil:
    # We do this for all eight corners
    cdef np.float64_t temp
    cdef np.float64_t *edges[2]
    cdef np.float64_t cx, cy
    cdef np.float64_t extrema[4]
    cdef int i, j, k
    edges[0] = vc.left_edge
    edges[1] = vc.right_edge
    extrema[0] = extrema[2] = 1e300; extrema[1] = extrema[3] = -1e300
    for i in range(2):
        for j in range(2):
            for k in range(2):
                # This should rotate it into the vector plane
                temp  = edges[i][0] * image.x_vec[0]
                temp += edges[j][1] * image.x_vec[1]
                temp += edges[k][2] * image.x_vec[2]
                if temp < extrema[0]: extrema[0] = temp
                if temp > extrema[1]: extrema[1] = temp
                temp  = edges[i][0] * image.y_vec[0]
                temp += edges[j][1] * image.y_vec[1]
                temp += edges[k][2] * image.y_vec[2]
                if temp < extrema[2]: extrema[2] = temp
                if temp > extrema[3]: extrema[3] = temp
    cx = cy = 0.0
    for i in range(3):
        cx += image.center[i] * image.x_vec[i]
        cy += image.center[i] * image.y_vec[i]
    rv[0] = lrint((extrema[0] - cx - image.bounds[0])/image.pdx)
    rv[1] = rv[0] + lrint((extrema[1] - extrema[0])/image.pdx)
    rv[2] = lrint((extrema[2] - cy - image.bounds[2])/image.pdy)
    rv[3] = rv[2] + lrint((extrema[3] - extrema[2])/image.pdy)
    return 0
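# A rough NumPy illustration of the plane-parallel extent logic above: project
# all eight box corners onto the image-plane basis vectors and take the
# min/max along each axis.  This is a sketch for orientation only (``corners``
# is an assumed (8, 3) array), not part of the compiled module; the cdef path
# above additionally converts the extrema into pixel indices using the image
# bounds and pdx/pdy.
def _plane_parallel_extent_sketch(corners, x_vec, y_vec):
    cx = corners @ x_vec  # coordinate of each corner along the image x axis
    cy = corners @ y_vec  # coordinate of each corner along the image y axis
    return cx.min(), cx.max(), cy.min(), cy.max()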
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True)
cdef int calculate_extent_perspective(ImageSampler image,
            VolumeContainer *vc, np.int64_t rv[4]) except -1 nogil:

    cdef np.float64_t cam_pos[3]
    cdef np.float64_t cam_width[3]
    cdef np.float64_t north_vector[3]
    cdef np.float64_t east_vector[3]
    cdef np.float64_t normal_vector[3]
    cdef np.float64_t vertex[3]
    cdef np.float64_t pos1[3]
    cdef np.float64_t sight_vector[3]
    cdef np.float64_t sight_center[3]
    cdef np.float64_t corners[3][8]
    cdef float sight_vector_norm, sight_angle_cos, sight_length, dx, dy
    cdef int i, iv, px, py
    cdef int min_px, min_py, max_px, max_py

    min_px = SHRT_MAX
    min_py = SHRT_MAX
    max_px = -SHRT_MAX
    max_py = -SHRT_MAX

    # calculate vertices for 8 corners of vc
    corners[0][0] = vc.left_edge[0]
    corners[0][1] = vc.right_edge[0]
    corners[0][2] = vc.right_edge[0]
    corners[0][3] = vc.left_edge[0]
    corners[0][4] = vc.left_edge[0]
    corners[0][5] = vc.right_edge[0]
    corners[0][6] = vc.right_edge[0]
    corners[0][7] = vc.left_edge[0]

    corners[1][0] = vc.left_edge[1]
    corners[1][1] = vc.left_edge[1]
    corners[1][2] = vc.right_edge[1]
    corners[1][3] = vc.right_edge[1]
    corners[1][4] = vc.left_edge[1]
    corners[1][5] = vc.left_edge[1]
    corners[1][6] = vc.right_edge[1]
    corners[1][7] = vc.right_edge[1]

    corners[2][0] = vc.left_edge[2]
    corners[2][1] = vc.left_edge[2]
    corners[2][2] = vc.left_edge[2]
    corners[2][3] = vc.left_edge[2]
    corners[2][4] = vc.right_edge[2]
    corners[2][5] = vc.right_edge[2]
    corners[2][6] = vc.right_edge[2]
    corners[2][7] = vc.right_edge[2]

    # This code was ported from
    # yt.visualization.volume_rendering.lens.PerspectiveLens.project_to_plane()
    for i in range(3):
        cam_pos[i] = image.camera_data[0, i]
        cam_width[i] = image.camera_data[1, i]
        east_vector[i] = image.camera_data[2, i]
        north_vector[i] = image.camera_data[3, i]
        normal_vector[i] = image.camera_data[4, i]

    for iv in range(8):
        vertex[0] = corners[0][iv]
        vertex[1] = corners[1][iv]
        vertex[2] = corners[2][iv]

        cam_width[1] = cam_width[0] * image.nv[1] / image.nv[0]

        subtract(vertex, cam_pos, sight_vector)
        fma(cam_width[2], normal_vector, cam_pos, sight_center)

        sight_vector_norm = L2_norm(sight_vector)

        if sight_vector_norm != 0:
            for i in range(3):
                sight_vector[i] /= sight_vector_norm

        sight_angle_cos = dot(sight_vector, normal_vector)
        sight_angle_cos = fclip(sight_angle_cos, -1.0, 1.0)

        if acos(sight_angle_cos) < 0.5 * M_PI and sight_angle_cos != 0.0:
            sight_length = cam_width[2] / sight_angle_cos
        else:
            sight_length = sqrt(cam_width[0] * cam_width[0] +
                                cam_width[1] * cam_width[1])
            sight_length /= sqrt(1.0 - sight_angle_cos * sight_angle_cos)

        fma(sight_length, sight_vector, cam_pos, pos1)
        subtract(pos1, sight_center, pos1)
        dx = dot(pos1, east_vector)
        dy = dot(pos1, north_vector)

        px = int(image.nv[0] * 0.5 + image.nv[0] / cam_width[0] * dx)
        py = int(image.nv[1] * 0.5 + image.nv[1] / cam_width[1] * dy)

        min_px = min(min_px, px)
        max_px = max(max_px, px)
        min_py = min(min_py, py)
        max_py = max(max_py, py)

    rv[0] = max(min_px, 0)
    rv[1] = min(max_px, image.nv[0])
    rv[2] = max(min_py, 0)
    rv[3] = min(max_py, image.nv[1])

    return 0

# We do this for a bunch of lenses.  Fallback is to grab them from the vector
# info supplied.
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True)
cdef int calculate_extent_null(ImageSampler image,
            VolumeContainer *vc, np.int64_t rv[4]) except -1 nogil:
    rv[0] = 0
    rv[1] = image.nv[0]
    rv[2] = 0
    rv[3] = image.nv[1]
    return 0

@cython.boundscheck(False)
@cython.wraparound(False)
cdef void generate_vector_info_plane_parallel(ImageSampler im,
            np.int64_t vi, np.int64_t vj,
            np.float64_t width[2],
            # Now outbound
            np.float64_t v_dir[3], np.float64_t v_pos[3]) noexcept nogil:
    cdef int i
    cdef np.float64_t px, py
    px = width[0] * (<np.float64_t> vi)/(im.nv[0]-1) - width[0]/2.0
    py = width[1] * (<np.float64_t> vj)/(im.nv[1]-1) - width[1]/2.0
    # atleast_3d will add to beginning and end
    v_pos[0] = im.vp_pos[0,0,0]*px + im.vp_pos[0,3,0]*py + im.vp_pos[0,9,0]
    v_pos[1] = im.vp_pos[0,1,0]*px + im.vp_pos[0,4,0]*py + im.vp_pos[0,10,0]
    v_pos[2] = im.vp_pos[0,2,0]*px + im.vp_pos[0,5,0]*py + im.vp_pos[0,11,0]
    for i in range(3):
        v_dir[i] = im.vp_dir[0,i,0]

@cython.boundscheck(False)
@cython.wraparound(False)
cdef void generate_vector_info_null(ImageSampler im,
            np.int64_t vi, np.int64_t vj,
            np.float64_t width[2],
            # Now outbound
            np.float64_t v_dir[3], np.float64_t v_pos[3]) noexcept nogil:
    cdef int i
    for i in range(3):
        # Here's a funny thing: we use vi here because our *image* will be
        # flattened.  That means that im.nv will be a better one-d offset,
        # since vp_pos has funny strides.
        v_pos[i] = im.vp_pos[vi, vj, i]
        v_dir[i] = im.vp_dir[vi, vj, i]
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/line_integral_convolution.pyx0000644000175100001770000000415414714401662023023 0ustar00runnerdocker
"""
Utilities for line integral convolution annotation
"""

import numpy as np

cimport cython
cimport numpy as np


@cython.cdivision(True)
cdef void _advance_2d(double vx, double vy,
                      int* x, int* y,
                      double* fx, double* fy,
                      int w, int h):
    cdef double tx, ty
    if vx>=0:
        tx = (1-fx[0])/vx
    else:
        tx = -fx[0]/vx
    if vy>=0:
        ty = (1-fy[0])/vy
    else:
        ty = -fy[0]/vy
    if tx<ty:
        if vx>=0:
            x[0]+=1
            fx[0]=0
        else:
            x[0]-=1
            fx[0]=1
        fy[0]+=tx*vy
    else:
        if vy>=0:
            y[0]+=1
            fy[0]=0
        else:
            y[0]-=1
            fy[0]=1
        fx[0]+=ty*vx
    if x[0]>=w:
        x[0]=w-1
    if x[0]<0:
        x[0]=0
    if y[0]<0:
        y[0]=0
    if y[0]>=h:
        y[0]=h-1

def line_integral_convolution_2d(
        np.ndarray[double, ndim=3] vectors,
        np.ndarray[double, ndim=2] texture,
        np.ndarray[double, ndim=1] kernel):
    cdef int i,j,l,x,y
    cdef int h,w,kernellen
    cdef double fx, fy
    cdef np.ndarray[double, ndim=2] result

    w = vectors.shape[0]
    h = vectors.shape[1]
    kernellen = kernel.shape[0]

    result = np.zeros((w,h),dtype=np.double)

    vectors = vectors[...,::-1].copy()
    for i in range(w):
        for j in range(h):
            if vectors[i,j,0]==0 and vectors[i,j,1]==0:
                continue
            x = i
            y = j
            fx = 0.5
            fy = 0.5

            # Convolve forward along the streamline from the kernel midpoint...
            l = kernellen//2
            result[i,j] += kernel[l]*texture[x,y]
            while l<kernellen-1:
                _advance_2d(vectors[x,y,0],vectors[x,y,1],
                            &x, &y, &fx, &fy, w, h)
                l+=1
                result[i,j] += kernel[l]*texture[x,y]

            # ...then restart at the seed pixel and convolve backward.
            x = i
            y = j
            fx = 0.5
            fy = 0.5

            l = kernellen//2
            while l>0:
                _advance_2d(-vectors[x,y,0],-vectors[x,y,1],
                            &x, &y, &fx, &fy, w, h)
                l-=1
                result[i,j] += kernel[l]*texture[x,y]

    return result
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/marching_cubes.h0000644000175100001770000004265014714401662020133 0ustar00runnerdocker
int edge_table[256] = {
0x0  , 0x109, 0x203, 0x30a, 0x406, 0x50f, 0x605, 0x70c,
0x80c, 0x905, 0xa0f, 0xb06, 0xc0a, 0xd03, 0xe09, 0xf00,
0x190, 0x99 , 0x393, 0x29a, 0x596, 0x49f, 0x795, 0x69c,
0x99c, 0x895, 0xb9f, 0xa96, 0xd9a, 0xc93, 0xf99, 0xe90,
0x230, 0x339, 0x33 , 0x13a, 0x636, 0x73f, 0x435, 0x53c,
0xa3c, 0xb35, 0x83f, 0x936, 0xe3a, 0xf33, 0xc39, 0xd30,
0x3a0, 0x2a9, 0x1a3, 0xaa , 0x7a6, 0x6af, 0x5a5, 0x4ac,
0xbac, 0xaa5,
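/* Each edge_table entry is a 12-bit mask over one of the 256 corner sign
   configurations: bit i is set when cube edge i is crossed by the
   isosurface.  march_cubes() in marching_cubes.pyx tests these bits with
   the constants 1, 2, 4, ..., 2048 to decide which edges to interpolate. */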
0x9af, 0x8a6, 0xfaa, 0xea3, 0xda9, 0xca0, 0x460, 0x569, 0x663, 0x76a, 0x66 , 0x16f, 0x265, 0x36c, 0xc6c, 0xd65, 0xe6f, 0xf66, 0x86a, 0x963, 0xa69, 0xb60, 0x5f0, 0x4f9, 0x7f3, 0x6fa, 0x1f6, 0xff , 0x3f5, 0x2fc, 0xdfc, 0xcf5, 0xfff, 0xef6, 0x9fa, 0x8f3, 0xbf9, 0xaf0, 0x650, 0x759, 0x453, 0x55a, 0x256, 0x35f, 0x55 , 0x15c, 0xe5c, 0xf55, 0xc5f, 0xd56, 0xa5a, 0xb53, 0x859, 0x950, 0x7c0, 0x6c9, 0x5c3, 0x4ca, 0x3c6, 0x2cf, 0x1c5, 0xcc , 0xfcc, 0xec5, 0xdcf, 0xcc6, 0xbca, 0xac3, 0x9c9, 0x8c0, 0x8c0, 0x9c9, 0xac3, 0xbca, 0xcc6, 0xdcf, 0xec5, 0xfcc, 0xcc , 0x1c5, 0x2cf, 0x3c6, 0x4ca, 0x5c3, 0x6c9, 0x7c0, 0x950, 0x859, 0xb53, 0xa5a, 0xd56, 0xc5f, 0xf55, 0xe5c, 0x15c, 0x55 , 0x35f, 0x256, 0x55a, 0x453, 0x759, 0x650, 0xaf0, 0xbf9, 0x8f3, 0x9fa, 0xef6, 0xfff, 0xcf5, 0xdfc, 0x2fc, 0x3f5, 0xff , 0x1f6, 0x6fa, 0x7f3, 0x4f9, 0x5f0, 0xb60, 0xa69, 0x963, 0x86a, 0xf66, 0xe6f, 0xd65, 0xc6c, 0x36c, 0x265, 0x16f, 0x66 , 0x76a, 0x663, 0x569, 0x460, 0xca0, 0xda9, 0xea3, 0xfaa, 0x8a6, 0x9af, 0xaa5, 0xbac, 0x4ac, 0x5a5, 0x6af, 0x7a6, 0xaa , 0x1a3, 0x2a9, 0x3a0, 0xd30, 0xc39, 0xf33, 0xe3a, 0x936, 0x83f, 0xb35, 0xa3c, 0x53c, 0x435, 0x73f, 0x636, 0x13a, 0x33 , 0x339, 0x230, 0xe90, 0xf99, 0xc93, 0xd9a, 0xa96, 0xb9f, 0x895, 0x99c, 0x69c, 0x795, 0x49f, 0x596, 0x29a, 0x393, 0x99 , 0x190, 0xf00, 0xe09, 0xd03, 0xc0a, 0xb06, 0xa0f, 0x905, 0x80c, 0x70c, 0x605, 0x50f, 0x406, 0x30a, 0x203, 0x109, 0x0 }; int tri_table[256][16] = { {-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {0, 8, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {0, 1, 9, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {1, 8, 3, 9, 8, 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {1, 2, 10, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {0, 8, 3, 1, 2, 10, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {9, 2, 10, 0, 2, 9, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {2, 8, 3, 2, 10, 8, 10, 9, 8, -1, -1, -1, -1, -1, -1, -1}, {3, 11, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {0, 11, 2, 8, 11, 0, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {1, 9, 0, 2, 3, 11, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {1, 11, 2, 1, 9, 11, 9, 8, 11, -1, -1, -1, -1, -1, -1, -1}, {3, 10, 1, 11, 10, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {0, 10, 1, 0, 8, 10, 8, 11, 10, -1, -1, -1, -1, -1, -1, -1}, {3, 9, 0, 3, 11, 9, 11, 10, 9, -1, -1, -1, -1, -1, -1, -1}, {9, 8, 10, 10, 8, 11, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {4, 7, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {4, 3, 0, 7, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {0, 1, 9, 8, 4, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {4, 1, 9, 4, 7, 1, 7, 3, 1, -1, -1, -1, -1, -1, -1, -1}, {1, 2, 10, 8, 4, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {3, 4, 7, 3, 0, 4, 1, 2, 10, -1, -1, -1, -1, -1, -1, -1}, {9, 2, 10, 9, 0, 2, 8, 4, 7, -1, -1, -1, -1, -1, -1, -1}, {2, 10, 9, 2, 9, 7, 2, 7, 3, 7, 9, 4, -1, -1, -1, -1}, {8, 4, 7, 3, 11, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {11, 4, 7, 11, 2, 4, 2, 0, 4, -1, -1, -1, -1, -1, -1, -1}, {9, 0, 1, 8, 4, 7, 2, 3, 11, -1, -1, -1, -1, -1, -1, -1}, {4, 7, 11, 9, 4, 11, 9, 11, 2, 9, 2, 1, -1, -1, -1, -1}, {3, 10, 1, 3, 11, 10, 7, 8, 4, -1, -1, -1, -1, -1, -1, -1}, {1, 11, 10, 1, 4, 11, 1, 0, 4, 7, 11, 4, -1, -1, -1, -1}, {4, 7, 8, 9, 0, 11, 9, 11, 10, 11, 0, 3, -1, -1, -1, -1}, {4, 7, 11, 4, 11, 9, 9, 11, 10, -1, -1, -1, -1, -1, -1, -1}, {9, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {9, 5, 4, 0, 8, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {0, 5, 4, 1, 5, 0, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {8, 5, 4, 8, 3, 5, 3, 
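/* Each 16-entry tri_table row encodes up to five triangles as triples of
   cube-edge indices (0-11), padded and terminated with -1; march_cubes()
   reads three indices at a time and stops at the first -1. */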
1, 5, -1, -1, -1, -1, -1, -1, -1}, {1, 2, 10, 9, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {3, 0, 8, 1, 2, 10, 4, 9, 5, -1, -1, -1, -1, -1, -1, -1}, {5, 2, 10, 5, 4, 2, 4, 0, 2, -1, -1, -1, -1, -1, -1, -1}, {2, 10, 5, 3, 2, 5, 3, 5, 4, 3, 4, 8, -1, -1, -1, -1}, {9, 5, 4, 2, 3, 11, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {0, 11, 2, 0, 8, 11, 4, 9, 5, -1, -1, -1, -1, -1, -1, -1}, {0, 5, 4, 0, 1, 5, 2, 3, 11, -1, -1, -1, -1, -1, -1, -1}, {2, 1, 5, 2, 5, 8, 2, 8, 11, 4, 8, 5, -1, -1, -1, -1}, {10, 3, 11, 10, 1, 3, 9, 5, 4, -1, -1, -1, -1, -1, -1, -1}, {4, 9, 5, 0, 8, 1, 8, 10, 1, 8, 11, 10, -1, -1, -1, -1}, {5, 4, 0, 5, 0, 11, 5, 11, 10, 11, 0, 3, -1, -1, -1, -1}, {5, 4, 8, 5, 8, 10, 10, 8, 11, -1, -1, -1, -1, -1, -1, -1}, {9, 7, 8, 5, 7, 9, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {9, 3, 0, 9, 5, 3, 5, 7, 3, -1, -1, -1, -1, -1, -1, -1}, {0, 7, 8, 0, 1, 7, 1, 5, 7, -1, -1, -1, -1, -1, -1, -1}, {1, 5, 3, 3, 5, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {9, 7, 8, 9, 5, 7, 10, 1, 2, -1, -1, -1, -1, -1, -1, -1}, {10, 1, 2, 9, 5, 0, 5, 3, 0, 5, 7, 3, -1, -1, -1, -1}, {8, 0, 2, 8, 2, 5, 8, 5, 7, 10, 5, 2, -1, -1, -1, -1}, {2, 10, 5, 2, 5, 3, 3, 5, 7, -1, -1, -1, -1, -1, -1, -1}, {7, 9, 5, 7, 8, 9, 3, 11, 2, -1, -1, -1, -1, -1, -1, -1}, {9, 5, 7, 9, 7, 2, 9, 2, 0, 2, 7, 11, -1, -1, -1, -1}, {2, 3, 11, 0, 1, 8, 1, 7, 8, 1, 5, 7, -1, -1, -1, -1}, {11, 2, 1, 11, 1, 7, 7, 1, 5, -1, -1, -1, -1, -1, -1, -1}, {9, 5, 8, 8, 5, 7, 10, 1, 3, 10, 3, 11, -1, -1, -1, -1}, {5, 7, 0, 5, 0, 9, 7, 11, 0, 1, 0, 10, 11, 10, 0, -1}, {11, 10, 0, 11, 0, 3, 10, 5, 0, 8, 0, 7, 5, 7, 0, -1}, {11, 10, 5, 7, 11, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {10, 6, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {0, 8, 3, 5, 10, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {9, 0, 1, 5, 10, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {1, 8, 3, 1, 9, 8, 5, 10, 6, -1, -1, -1, -1, -1, -1, -1}, {1, 6, 5, 2, 6, 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {1, 6, 5, 1, 2, 6, 3, 0, 8, -1, -1, -1, -1, -1, -1, -1}, {9, 6, 5, 9, 0, 6, 0, 2, 6, -1, -1, -1, -1, -1, -1, -1}, {5, 9, 8, 5, 8, 2, 5, 2, 6, 3, 2, 8, -1, -1, -1, -1}, {2, 3, 11, 10, 6, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {11, 0, 8, 11, 2, 0, 10, 6, 5, -1, -1, -1, -1, -1, -1, -1}, {0, 1, 9, 2, 3, 11, 5, 10, 6, -1, -1, -1, -1, -1, -1, -1}, {5, 10, 6, 1, 9, 2, 9, 11, 2, 9, 8, 11, -1, -1, -1, -1}, {6, 3, 11, 6, 5, 3, 5, 1, 3, -1, -1, -1, -1, -1, -1, -1}, {0, 8, 11, 0, 11, 5, 0, 5, 1, 5, 11, 6, -1, -1, -1, -1}, {3, 11, 6, 0, 3, 6, 0, 6, 5, 0, 5, 9, -1, -1, -1, -1}, {6, 5, 9, 6, 9, 11, 11, 9, 8, -1, -1, -1, -1, -1, -1, -1}, {5, 10, 6, 4, 7, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {4, 3, 0, 4, 7, 3, 6, 5, 10, -1, -1, -1, -1, -1, -1, -1}, {1, 9, 0, 5, 10, 6, 8, 4, 7, -1, -1, -1, -1, -1, -1, -1}, {10, 6, 5, 1, 9, 7, 1, 7, 3, 7, 9, 4, -1, -1, -1, -1}, {6, 1, 2, 6, 5, 1, 4, 7, 8, -1, -1, -1, -1, -1, -1, -1}, {1, 2, 5, 5, 2, 6, 3, 0, 4, 3, 4, 7, -1, -1, -1, -1}, {8, 4, 7, 9, 0, 5, 0, 6, 5, 0, 2, 6, -1, -1, -1, -1}, {7, 3, 9, 7, 9, 4, 3, 2, 9, 5, 9, 6, 2, 6, 9, -1}, {3, 11, 2, 7, 8, 4, 10, 6, 5, -1, -1, -1, -1, -1, -1, -1}, {5, 10, 6, 4, 7, 2, 4, 2, 0, 2, 7, 11, -1, -1, -1, -1}, {0, 1, 9, 4, 7, 8, 2, 3, 11, 5, 10, 6, -1, -1, -1, -1}, {9, 2, 1, 9, 11, 2, 9, 4, 11, 7, 11, 4, 5, 10, 6, -1}, {8, 4, 7, 3, 11, 5, 3, 5, 1, 5, 11, 6, -1, -1, -1, -1}, {5, 1, 11, 5, 11, 6, 1, 0, 11, 7, 11, 4, 0, 4, 11, -1}, {0, 5, 9, 0, 6, 5, 0, 3, 6, 11, 6, 3, 8, 4, 7, -1}, {6, 5, 9, 6, 9, 11, 4, 7, 9, 7, 11, 9, -1, -1, -1, -1}, {10, 4, 9, 6, 4, 10, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {4, 10, 
6, 4, 9, 10, 0, 8, 3, -1, -1, -1, -1, -1, -1, -1}, {10, 0, 1, 10, 6, 0, 6, 4, 0, -1, -1, -1, -1, -1, -1, -1}, {8, 3, 1, 8, 1, 6, 8, 6, 4, 6, 1, 10, -1, -1, -1, -1}, {1, 4, 9, 1, 2, 4, 2, 6, 4, -1, -1, -1, -1, -1, -1, -1}, {3, 0, 8, 1, 2, 9, 2, 4, 9, 2, 6, 4, -1, -1, -1, -1}, {0, 2, 4, 4, 2, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {8, 3, 2, 8, 2, 4, 4, 2, 6, -1, -1, -1, -1, -1, -1, -1}, {10, 4, 9, 10, 6, 4, 11, 2, 3, -1, -1, -1, -1, -1, -1, -1}, {0, 8, 2, 2, 8, 11, 4, 9, 10, 4, 10, 6, -1, -1, -1, -1}, {3, 11, 2, 0, 1, 6, 0, 6, 4, 6, 1, 10, -1, -1, -1, -1}, {6, 4, 1, 6, 1, 10, 4, 8, 1, 2, 1, 11, 8, 11, 1, -1}, {9, 6, 4, 9, 3, 6, 9, 1, 3, 11, 6, 3, -1, -1, -1, -1}, {8, 11, 1, 8, 1, 0, 11, 6, 1, 9, 1, 4, 6, 4, 1, -1}, {3, 11, 6, 3, 6, 0, 0, 6, 4, -1, -1, -1, -1, -1, -1, -1}, {6, 4, 8, 11, 6, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {7, 10, 6, 7, 8, 10, 8, 9, 10, -1, -1, -1, -1, -1, -1, -1}, {0, 7, 3, 0, 10, 7, 0, 9, 10, 6, 7, 10, -1, -1, -1, -1}, {10, 6, 7, 1, 10, 7, 1, 7, 8, 1, 8, 0, -1, -1, -1, -1}, {10, 6, 7, 10, 7, 1, 1, 7, 3, -1, -1, -1, -1, -1, -1, -1}, {1, 2, 6, 1, 6, 8, 1, 8, 9, 8, 6, 7, -1, -1, -1, -1}, {2, 6, 9, 2, 9, 1, 6, 7, 9, 0, 9, 3, 7, 3, 9, -1}, {7, 8, 0, 7, 0, 6, 6, 0, 2, -1, -1, -1, -1, -1, -1, -1}, {7, 3, 2, 6, 7, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {2, 3, 11, 10, 6, 8, 10, 8, 9, 8, 6, 7, -1, -1, -1, -1}, {2, 0, 7, 2, 7, 11, 0, 9, 7, 6, 7, 10, 9, 10, 7, -1}, {1, 8, 0, 1, 7, 8, 1, 10, 7, 6, 7, 10, 2, 3, 11, -1}, {11, 2, 1, 11, 1, 7, 10, 6, 1, 6, 7, 1, -1, -1, -1, -1}, {8, 9, 6, 8, 6, 7, 9, 1, 6, 11, 6, 3, 1, 3, 6, -1}, {0, 9, 1, 11, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {7, 8, 0, 7, 0, 6, 3, 11, 0, 11, 6, 0, -1, -1, -1, -1}, {7, 11, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {7, 6, 11, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {3, 0, 8, 11, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {0, 1, 9, 11, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {8, 1, 9, 8, 3, 1, 11, 7, 6, -1, -1, -1, -1, -1, -1, -1}, {10, 1, 2, 6, 11, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {1, 2, 10, 3, 0, 8, 6, 11, 7, -1, -1, -1, -1, -1, -1, -1}, {2, 9, 0, 2, 10, 9, 6, 11, 7, -1, -1, -1, -1, -1, -1, -1}, {6, 11, 7, 2, 10, 3, 10, 8, 3, 10, 9, 8, -1, -1, -1, -1}, {7, 2, 3, 6, 2, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {7, 0, 8, 7, 6, 0, 6, 2, 0, -1, -1, -1, -1, -1, -1, -1}, {2, 7, 6, 2, 3, 7, 0, 1, 9, -1, -1, -1, -1, -1, -1, -1}, {1, 6, 2, 1, 8, 6, 1, 9, 8, 8, 7, 6, -1, -1, -1, -1}, {10, 7, 6, 10, 1, 7, 1, 3, 7, -1, -1, -1, -1, -1, -1, -1}, {10, 7, 6, 1, 7, 10, 1, 8, 7, 1, 0, 8, -1, -1, -1, -1}, {0, 3, 7, 0, 7, 10, 0, 10, 9, 6, 10, 7, -1, -1, -1, -1}, {7, 6, 10, 7, 10, 8, 8, 10, 9, -1, -1, -1, -1, -1, -1, -1}, {6, 8, 4, 11, 8, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {3, 6, 11, 3, 0, 6, 0, 4, 6, -1, -1, -1, -1, -1, -1, -1}, {8, 6, 11, 8, 4, 6, 9, 0, 1, -1, -1, -1, -1, -1, -1, -1}, {9, 4, 6, 9, 6, 3, 9, 3, 1, 11, 3, 6, -1, -1, -1, -1}, {6, 8, 4, 6, 11, 8, 2, 10, 1, -1, -1, -1, -1, -1, -1, -1}, {1, 2, 10, 3, 0, 11, 0, 6, 11, 0, 4, 6, -1, -1, -1, -1}, {4, 11, 8, 4, 6, 11, 0, 2, 9, 2, 10, 9, -1, -1, -1, -1}, {10, 9, 3, 10, 3, 2, 9, 4, 3, 11, 3, 6, 4, 6, 3, -1}, {8, 2, 3, 8, 4, 2, 4, 6, 2, -1, -1, -1, -1, -1, -1, -1}, {0, 4, 2, 4, 6, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {1, 9, 0, 2, 3, 4, 2, 4, 6, 4, 3, 8, -1, -1, -1, -1}, {1, 9, 4, 1, 4, 2, 2, 4, 6, -1, -1, -1, -1, -1, -1, -1}, {8, 1, 3, 8, 6, 1, 8, 4, 6, 6, 10, 1, -1, -1, -1, -1}, {10, 1, 0, 10, 0, 6, 6, 0, 4, -1, -1, -1, -1, -1, -1, -1}, {4, 6, 3, 4, 3, 8, 6, 10, 3, 0, 3, 9, 10, 9, 3, -1}, 
{10, 9, 4, 6, 10, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {4, 9, 5, 7, 6, 11, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {0, 8, 3, 4, 9, 5, 11, 7, 6, -1, -1, -1, -1, -1, -1, -1}, {5, 0, 1, 5, 4, 0, 7, 6, 11, -1, -1, -1, -1, -1, -1, -1}, {11, 7, 6, 8, 3, 4, 3, 5, 4, 3, 1, 5, -1, -1, -1, -1}, {9, 5, 4, 10, 1, 2, 7, 6, 11, -1, -1, -1, -1, -1, -1, -1}, {6, 11, 7, 1, 2, 10, 0, 8, 3, 4, 9, 5, -1, -1, -1, -1}, {7, 6, 11, 5, 4, 10, 4, 2, 10, 4, 0, 2, -1, -1, -1, -1}, {3, 4, 8, 3, 5, 4, 3, 2, 5, 10, 5, 2, 11, 7, 6, -1}, {7, 2, 3, 7, 6, 2, 5, 4, 9, -1, -1, -1, -1, -1, -1, -1}, {9, 5, 4, 0, 8, 6, 0, 6, 2, 6, 8, 7, -1, -1, -1, -1}, {3, 6, 2, 3, 7, 6, 1, 5, 0, 5, 4, 0, -1, -1, -1, -1}, {6, 2, 8, 6, 8, 7, 2, 1, 8, 4, 8, 5, 1, 5, 8, -1}, {9, 5, 4, 10, 1, 6, 1, 7, 6, 1, 3, 7, -1, -1, -1, -1}, {1, 6, 10, 1, 7, 6, 1, 0, 7, 8, 7, 0, 9, 5, 4, -1}, {4, 0, 10, 4, 10, 5, 0, 3, 10, 6, 10, 7, 3, 7, 10, -1}, {7, 6, 10, 7, 10, 8, 5, 4, 10, 4, 8, 10, -1, -1, -1, -1}, {6, 9, 5, 6, 11, 9, 11, 8, 9, -1, -1, -1, -1, -1, -1, -1}, {3, 6, 11, 0, 6, 3, 0, 5, 6, 0, 9, 5, -1, -1, -1, -1}, {0, 11, 8, 0, 5, 11, 0, 1, 5, 5, 6, 11, -1, -1, -1, -1}, {6, 11, 3, 6, 3, 5, 5, 3, 1, -1, -1, -1, -1, -1, -1, -1}, {1, 2, 10, 9, 5, 11, 9, 11, 8, 11, 5, 6, -1, -1, -1, -1}, {0, 11, 3, 0, 6, 11, 0, 9, 6, 5, 6, 9, 1, 2, 10, -1}, {11, 8, 5, 11, 5, 6, 8, 0, 5, 10, 5, 2, 0, 2, 5, -1}, {6, 11, 3, 6, 3, 5, 2, 10, 3, 10, 5, 3, -1, -1, -1, -1}, {5, 8, 9, 5, 2, 8, 5, 6, 2, 3, 8, 2, -1, -1, -1, -1}, {9, 5, 6, 9, 6, 0, 0, 6, 2, -1, -1, -1, -1, -1, -1, -1}, {1, 5, 8, 1, 8, 0, 5, 6, 8, 3, 8, 2, 6, 2, 8, -1}, {1, 5, 6, 2, 1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {1, 3, 6, 1, 6, 10, 3, 8, 6, 5, 6, 9, 8, 9, 6, -1}, {10, 1, 0, 10, 0, 6, 9, 5, 0, 5, 6, 0, -1, -1, -1, -1}, {0, 3, 8, 5, 6, 10, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {10, 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {11, 5, 10, 7, 5, 11, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {11, 5, 10, 11, 7, 5, 8, 3, 0, -1, -1, -1, -1, -1, -1, -1}, {5, 11, 7, 5, 10, 11, 1, 9, 0, -1, -1, -1, -1, -1, -1, -1}, {10, 7, 5, 10, 11, 7, 9, 8, 1, 8, 3, 1, -1, -1, -1, -1}, {11, 1, 2, 11, 7, 1, 7, 5, 1, -1, -1, -1, -1, -1, -1, -1}, {0, 8, 3, 1, 2, 7, 1, 7, 5, 7, 2, 11, -1, -1, -1, -1}, {9, 7, 5, 9, 2, 7, 9, 0, 2, 2, 11, 7, -1, -1, -1, -1}, {7, 5, 2, 7, 2, 11, 5, 9, 2, 3, 2, 8, 9, 8, 2, -1}, {2, 5, 10, 2, 3, 5, 3, 7, 5, -1, -1, -1, -1, -1, -1, -1}, {8, 2, 0, 8, 5, 2, 8, 7, 5, 10, 2, 5, -1, -1, -1, -1}, {9, 0, 1, 5, 10, 3, 5, 3, 7, 3, 10, 2, -1, -1, -1, -1}, {9, 8, 2, 9, 2, 1, 8, 7, 2, 10, 2, 5, 7, 5, 2, -1}, {1, 3, 5, 3, 7, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {0, 8, 7, 0, 7, 1, 1, 7, 5, -1, -1, -1, -1, -1, -1, -1}, {9, 0, 3, 9, 3, 5, 5, 3, 7, -1, -1, -1, -1, -1, -1, -1}, {9, 8, 7, 5, 9, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {5, 8, 4, 5, 10, 8, 10, 11, 8, -1, -1, -1, -1, -1, -1, -1}, {5, 0, 4, 5, 11, 0, 5, 10, 11, 11, 3, 0, -1, -1, -1, -1}, {0, 1, 9, 8, 4, 10, 8, 10, 11, 10, 4, 5, -1, -1, -1, -1}, {10, 11, 4, 10, 4, 5, 11, 3, 4, 9, 4, 1, 3, 1, 4, -1}, {2, 5, 1, 2, 8, 5, 2, 11, 8, 4, 5, 8, -1, -1, -1, -1}, {0, 4, 11, 0, 11, 3, 4, 5, 11, 2, 11, 1, 5, 1, 11, -1}, {0, 2, 5, 0, 5, 9, 2, 11, 5, 4, 5, 8, 11, 8, 5, -1}, {9, 4, 5, 2, 11, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {2, 5, 10, 3, 5, 2, 3, 4, 5, 3, 8, 4, -1, -1, -1, -1}, {5, 10, 2, 5, 2, 4, 4, 2, 0, -1, -1, -1, -1, -1, -1, -1}, {3, 10, 2, 3, 5, 10, 3, 8, 5, 4, 5, 8, 0, 1, 9, -1}, {5, 10, 2, 5, 2, 4, 1, 9, 2, 9, 4, 2, -1, -1, -1, -1}, {8, 4, 5, 8, 5, 3, 3, 5, 1, -1, -1, -1, -1, -1, -1, -1}, {0, 4, 5, 1, 0, 5, -1, -1, -1, -1, -1, 
-1, -1, -1, -1, -1}, {8, 4, 5, 8, 5, 3, 9, 0, 5, 0, 3, 5, -1, -1, -1, -1}, {9, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {4, 11, 7, 4, 9, 11, 9, 10, 11, -1, -1, -1, -1, -1, -1, -1}, {0, 8, 3, 4, 9, 7, 9, 11, 7, 9, 10, 11, -1, -1, -1, -1}, {1, 10, 11, 1, 11, 4, 1, 4, 0, 7, 4, 11, -1, -1, -1, -1}, {3, 1, 4, 3, 4, 8, 1, 10, 4, 7, 4, 11, 10, 11, 4, -1}, {4, 11, 7, 9, 11, 4, 9, 2, 11, 9, 1, 2, -1, -1, -1, -1}, {9, 7, 4, 9, 11, 7, 9, 1, 11, 2, 11, 1, 0, 8, 3, -1}, {11, 7, 4, 11, 4, 2, 2, 4, 0, -1, -1, -1, -1, -1, -1, -1}, {11, 7, 4, 11, 4, 2, 8, 3, 4, 3, 2, 4, -1, -1, -1, -1}, {2, 9, 10, 2, 7, 9, 2, 3, 7, 7, 4, 9, -1, -1, -1, -1}, {9, 10, 7, 9, 7, 4, 10, 2, 7, 8, 7, 0, 2, 0, 7, -1}, {3, 7, 10, 3, 10, 2, 7, 4, 10, 1, 10, 0, 4, 0, 10, -1}, {1, 10, 2, 8, 7, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {4, 9, 1, 4, 1, 7, 7, 1, 3, -1, -1, -1, -1, -1, -1, -1}, {4, 9, 1, 4, 1, 7, 0, 8, 1, 8, 7, 1, -1, -1, -1, -1}, {4, 0, 3, 7, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {4, 8, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {9, 10, 8, 10, 11, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {3, 0, 9, 3, 9, 11, 11, 9, 10, -1, -1, -1, -1, -1, -1, -1}, {0, 1, 10, 0, 10, 8, 8, 10, 11, -1, -1, -1, -1, -1, -1, -1}, {3, 1, 10, 11, 3, 10, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {1, 2, 11, 1, 11, 9, 9, 11, 8, -1, -1, -1, -1, -1, -1, -1}, {3, 0, 9, 3, 9, 11, 1, 2, 9, 2, 11, 9, -1, -1, -1, -1}, {0, 2, 11, 8, 0, 11, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {3, 2, 11, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {2, 3, 8, 2, 8, 10, 10, 8, 9, -1, -1, -1, -1, -1, -1, -1}, {9, 10, 2, 0, 9, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {2, 3, 8, 2, 8, 10, 0, 1, 8, 1, 10, 8, -1, -1, -1, -1}, {1, 10, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {1, 3, 8, 9, 1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {0, 9, 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {0, 3, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1} }; ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/marching_cubes.pyx0000644000175100001770000003723214714401662020524 0ustar00runnerdocker# distutils: include_dirs = LIB_DIR # distutils: libraries = STD_LIBS FIXED_INTERP # distutils: language = c++ """ Marching cubes implementation """ cimport cython cimport numpy as np import numpy as np from libc.math cimport sqrt from libc.stdlib cimport free, malloc from .fixed_interpolator cimport ( eval_gradient, offset_fill, offset_interpolate, vertex_interp, ) from yt.units.yt_array import YTArray cdef extern from "marching_cubes.h": int tri_table[256][16] int edge_table[256] cdef struct Triangle: Triangle *next np.float64_t p[3][3] np.float64_t val[3] # Usually only use one value cdef struct TriangleCollection: int count Triangle *first Triangle *current cdef Triangle *AddTriangle(Triangle *self, np.float64_t p0[3], np.float64_t p1[3], np.float64_t p2[3]): cdef Triangle *nn = malloc(sizeof(Triangle)) if self != NULL: self.next = nn cdef int i for i in range(3): nn.p[0][i] = p0[i] for i in range(3): nn.p[1][i] = p1[i] for i in range(3): nn.p[2][i] = p2[i] nn.next = NULL return nn cdef int CountTriangles(Triangle *first): cdef int count = 0 cdef Triangle *this = first while this != NULL: count += 1 this = this.next return count cdef void FillTriangleValues(np.ndarray[np.float64_t, ndim=1] values, Triangle *first, int nskip = 1): cdef Triangle *this = first cdef int i = 0 cdef int j 
while this != NULL: for j in range(nskip): values[i*nskip + j] = this.val[j] i += 1 this = this.next cdef void WipeTriangles(Triangle *first): cdef Triangle *this = first cdef Triangle *last while this != NULL: last = this this = this.next free(last) cdef void FillAndWipeTriangles(np.ndarray[np.float64_t, ndim=2] vertices, Triangle *first): cdef int count = 0 cdef Triangle *this = first cdef Triangle *last cdef int i, j while this != NULL: for i in range(3): for j in range(3): vertices[count, j] = this.p[i][j] count += 1 # Do it at the end because it's an index last = this this = this.next free(last) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int march_cubes( np.float64_t gv[8], np.float64_t isovalue, np.float64_t dds[3], np.float64_t x, np.float64_t y, np.float64_t z, TriangleCollection *triangles): cdef np.float64_t vertlist[12][3] cdef int cubeindex = 0 cdef int n cdef int nt = 0 for n in range(8): if gv[n] < isovalue: cubeindex |= (1 << n) if edge_table[cubeindex] == 0: return 0 if (edge_table[cubeindex] & 1): # 0,0,0 with 1,0,0 vertex_interp(gv[0], gv[1], isovalue, vertlist[0], dds, x, y, z, 0, 1) if (edge_table[cubeindex] & 2): # 1,0,0 with 1,1,0 vertex_interp(gv[1], gv[2], isovalue, vertlist[1], dds, x, y, z, 1, 2) if (edge_table[cubeindex] & 4): # 1,1,0 with 0,1,0 vertex_interp(gv[2], gv[3], isovalue, vertlist[2], dds, x, y, z, 2, 3) if (edge_table[cubeindex] & 8): # 0,1,0 with 0,0,0 vertex_interp(gv[3], gv[0], isovalue, vertlist[3], dds, x, y, z, 3, 0) if (edge_table[cubeindex] & 16): # 0,0,1 with 1,0,1 vertex_interp(gv[4], gv[5], isovalue, vertlist[4], dds, x, y, z, 4, 5) if (edge_table[cubeindex] & 32): # 1,0,1 with 1,1,1 vertex_interp(gv[5], gv[6], isovalue, vertlist[5], dds, x, y, z, 5, 6) if (edge_table[cubeindex] & 64): # 1,1,1 with 0,1,1 vertex_interp(gv[6], gv[7], isovalue, vertlist[6], dds, x, y, z, 6, 7) if (edge_table[cubeindex] & 128): # 0,1,1 with 0,0,1 vertex_interp(gv[7], gv[4], isovalue, vertlist[7], dds, x, y, z, 7, 4) if (edge_table[cubeindex] & 256): # 0,0,0 with 0,0,1 vertex_interp(gv[0], gv[4], isovalue, vertlist[8], dds, x, y, z, 0, 4) if (edge_table[cubeindex] & 512): # 1,0,0 with 1,0,1 vertex_interp(gv[1], gv[5], isovalue, vertlist[9], dds, x, y, z, 1, 5) if (edge_table[cubeindex] & 1024): # 1,1,0 with 1,1,1 vertex_interp(gv[2], gv[6], isovalue, vertlist[10], dds, x, y, z, 2, 6) if (edge_table[cubeindex] & 2048): # 0,1,0 with 0,1,1 vertex_interp(gv[3], gv[7], isovalue, vertlist[11], dds, x, y, z, 3, 7) n = 0 while 1: triangles.current = AddTriangle(triangles.current, vertlist[tri_table[cubeindex][n ]], vertlist[tri_table[cubeindex][n+1]], vertlist[tri_table[cubeindex][n+2]]) triangles.count += 1 nt += 1 if triangles.first == NULL: triangles.first = triangles.current n += 3 if tri_table[cubeindex][n] == -1: break return nt @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def march_cubes_grid(np.float64_t isovalue, np.ndarray[np.float64_t, ndim=3] values, np.ndarray[np.uint8_t, ndim=3, cast=True] mask, np.ndarray[np.float64_t, ndim=1] left_edge, np.ndarray[np.float64_t, ndim=1] dxs, obj_sample = None, int sample_type = 1): cdef int dims[3] cdef int i, j, k, n, m, nt cdef int offset cdef np.float64_t gv[8] cdef np.float64_t pos[3] cdef np.float64_t point[3] cdef np.float64_t idds[3] cdef np.float64_t *intdata = NULL cdef np.float64_t *sdata = NULL cdef np.float64_t do_sample cdef np.ndarray[np.float64_t, ndim=3] sample cdef np.ndarray[np.float64_t, ndim=1] sampled cdef TriangleCollection 
triangles cdef Triangle *last cdef Triangle *current if obj_sample is not None: sample = obj_sample sdata = sample.data do_sample = sample_type # 1 for face, 2 for vertex else: do_sample = 0 for i in range(3): dims[i] = values.shape[i] - 1 idds[i] = 1.0 / dxs[i] triangles.first = triangles.current = NULL last = current = NULL triangles.count = 0 cdef np.float64_t *data = values.data cdef np.float64_t *dds = dxs.data pos[0] = left_edge[0] for i in range(dims[0]): pos[1] = left_edge[1] for j in range(dims[1]): pos[2] = left_edge[2] for k in range(dims[2]): if mask[i,j,k] == 1: offset = i * (dims[1] + 1) * (dims[2] + 1) \ + j * (dims[2] + 1) + k intdata = data + offset offset_fill(dims, intdata, gv) nt = march_cubes(gv, isovalue, dds, pos[0], pos[1], pos[2], &triangles) if nt == 0 or do_sample == 0: pos[2] += dds[2] continue if last == NULL and triangles.first != NULL: current = triangles.first last = NULL elif last != NULL: current = last.next if do_sample == 1: # At each triangle's center, sample our secondary field while current != NULL: for n in range(3): point[n] = 0.0 for n in range(3): for m in range(3): point[m] += (current.p[n][m]-pos[m])*idds[m] for n in range(3): point[n] /= 3.0 current.val[0] = offset_interpolate(dims, point, sdata + offset) last = current if current.next == NULL: break current = current.next elif do_sample == 2: while current != NULL: for n in range(3): for m in range(3): point[m] = (current.p[n][m]-pos[m])*idds[m] current.val[n] = offset_interpolate(dims, point, sdata + offset) last = current if current.next == NULL: break current = current.next pos[2] += dds[2] pos[1] += dds[1] pos[0] += dds[0] # Hallo, we are all done. cdef np.ndarray[np.float64_t, ndim=2] vertices vertices = np.zeros((triangles.count*3,3), dtype='float64') if do_sample == 0: FillAndWipeTriangles(vertices, triangles.first) return vertices cdef int nskip = 0 if do_sample == 1: nskip = 1 elif do_sample == 2: nskip = 3 sampled = np.zeros(triangles.count * nskip, dtype='float64') FillTriangleValues(sampled, triangles.first, nskip) FillAndWipeTriangles(vertices, triangles.first) if hasattr(obj_sample, 'units'): sampled = YTArray(sampled, obj_sample.units) return vertices, sampled @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def march_cubes_grid_flux( np.float64_t isovalue, np.ndarray[np.float64_t, ndim=3] values, np.ndarray[np.float64_t, ndim=3] v1, np.ndarray[np.float64_t, ndim=3] v2, np.ndarray[np.float64_t, ndim=3] v3, np.ndarray[np.float64_t, ndim=3] flux_field, np.ndarray[np.uint8_t, ndim=3, cast=True] mask, np.ndarray[np.float64_t, ndim=1] left_edge, np.ndarray[np.float64_t, ndim=1] dxs): cdef int dims[3] cdef int i, j, k, n, m cdef int offset cdef np.float64_t gv[8] cdef np.float64_t *intdata = NULL cdef TriangleCollection triangles cdef Triangle *current = NULL cdef Triangle *last = NULL cdef np.float64_t *data = values.data cdef np.float64_t *v1data = v1.data cdef np.float64_t *v2data = v2.data cdef np.float64_t *v3data = v3.data cdef np.float64_t *fdata = flux_field.data cdef np.float64_t *dds = dxs.data cdef np.float64_t flux = 0.0 cdef np.float64_t temp, area, s cdef np.float64_t center[3] cdef np.float64_t point[3] cdef np.float64_t cell_pos[3] cdef np.float64_t fv[3] cdef np.float64_t idds[3] cdef np.float64_t normal[3] for i in range(3): dims[i] = values.shape[i] - 1 idds[i] = 1.0 / dds[i] triangles.first = triangles.current = NULL triangles.count = 0 cell_pos[0] = left_edge[0] for i in range(dims[0]): cell_pos[1] = left_edge[1] for j in 
range(dims[1]): cell_pos[2] = left_edge[2] for k in range(dims[2]): if mask[i,j,k] == 1: offset = i * (dims[1] + 1) * (dims[2] + 1) \ + j * (dims[2] + 1) + k intdata = data + offset offset_fill(dims, intdata, gv) march_cubes(gv, isovalue, dds, cell_pos[0], cell_pos[1], cell_pos[2], &triangles) # Now our triangles collection has a bunch. We now # calculate fluxes for each. if last == NULL and triangles.first != NULL: current = triangles.first last = NULL elif last != NULL: current = last.next while current != NULL: # Calculate the center of the triangle wval = 0.0 for n in range(3): center[n] = 0.0 for n in range(3): for m in range(3): point[m] = (current.p[n][m]-cell_pos[m])*idds[m] # Now we calculate the value at this point temp = offset_interpolate(dims, point, intdata) #print("something", temp, point[0], point[1], point[2]) wval += temp for m in range(3): center[m] += temp * point[m] # Now we divide by our normalizing factor for n in range(3): center[n] /= wval # We have our center point of the triangle, in 0..1 # coordinates. So now we interpolate our three # fields. fv[0] = offset_interpolate(dims, center, v1data + offset) fv[1] = offset_interpolate(dims, center, v2data + offset) fv[2] = offset_interpolate(dims, center, v3data + offset) # We interpolate again the actual value data wval = offset_interpolate(dims, center, fdata + offset) # Now we have our flux vector and our field value! # We just need a normal vector with which we can # dot it. The normal should be equal to the gradient # in the center of the triangle, or thereabouts. eval_gradient(dims, center, intdata, normal) temp = 0.0 for n in range(3): temp += normal[n]*normal[n] # Take the negative, to ensure it points inwardly temp = -sqrt(temp) # Dump this somewhere for now temp = wval * (fv[0] * normal[0] + fv[1] * normal[1] + fv[2] * normal[2])/temp # Now we need the area of the triangle. This will take # a lot of time to calculate compared to the rest. # We use Heron's formula. for n in range(3): fv[n] = 0.0 for n in range(3): fv[0] += (current.p[0][n] - current.p[2][n]) * (current.p[0][n] - current.p[2][n]) fv[1] += (current.p[1][n] - current.p[0][n]) * (current.p[1][n] - current.p[0][n]) fv[2] += (current.p[2][n] - current.p[1][n]) * (current.p[2][n] - current.p[1][n]) s = 0.0 for n in range(3): fv[n] = sqrt(fv[n]) s += 0.5 * fv[n] area = (s*(s-fv[0])*(s-fv[1])*(s-fv[2])) area = sqrt(area) flux += temp*area last = current if current.next == NULL: break current = current.next cell_pos[2] += dds[2] cell_pos[1] += dds[1] cell_pos[0] += dds[0] # Hallo, we are all done. WipeTriangles(triangles.first) return flux ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/mesh_triangulation.h0000644000175100001770000000327714714401662021060 0ustar00runnerdocker#define MAX_NUM_TRI 12 #define HEX_NV 8 #define HEX_NT 12 #define TETRA_NV 4 #define TETRA_NT 4 #define WEDGE_NV 6 #define WEDGE_NT 8 // This array is used to triangulate the hexahedral mesh elements // Each element has six faces with two triangles each. // The vertex ordering convention is assumed to follow that used // here: http://homepages.cae.wisc.edu/~tautges/papers/cnmev3.pdf // Note that this is the case for Exodus II data. 
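// Concretely: row t of triangulate_hex holds the three element-local vertex
// indices of triangle t, so a hex element with connectivity c emits the
// vertices c[triangulate_hex[t][0..2]] for t = 0..HEX_NT-1, giving two
// triangles per quad face.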
int triangulate_hex[MAX_NUM_TRI][3] = { {0, 2, 1}, {0, 3, 2}, // Face is 3 2 1 0 {4, 5, 6}, {4, 6, 7}, // Face is 4 5 6 7 {0, 1, 5}, {0, 5, 4}, // Face is 0 1 5 4 {1, 2, 6}, {1, 6, 5}, // Face is 1 2 6 5 {0, 7, 3}, {0, 4, 7}, // Face is 3 0 4 7 {3, 6, 2}, {3, 7, 6} // Face is 2 3 7 6 }; // Similarly, this is used to triangulate the tetrahedral cells int triangulate_tetra[MAX_NUM_TRI][3] = { {0, 1, 3}, {2, 3, 1}, {0, 3, 2}, {0, 2, 1}, {-1, -1, -1}, {-1, -1, -1}, {-1, -1, -1}, {-1, -1, -1}, {-1, -1, -1}, {-1, -1, -1}, {-1, -1, -1}, {-1, -1, -1} }; // Triangulate wedges int triangulate_wedge[MAX_NUM_TRI][3] = { {3, 0, 1}, {4, 3, 1}, {2, 5, 4}, {2, 4, 1}, {0, 3, 2}, {2, 3, 5}, {3, 4, 5}, {0, 2, 1}, {-1, -1, -1}, {-1, -1, -1}, {-1, -1, -1}, {-1, -1, -1} }; // This is used to select faces from a 20-sided hex element int hex20_faces[6][8] = { {0, 1, 5, 4, 12, 8, 13, 16}, {1, 2, 6, 5, 13, 9, 14, 17}, {3, 2, 6, 7, 15, 10, 14, 18}, {0, 3, 7, 4, 12, 11, 15, 19}, {4, 5, 6, 7, 19, 16, 17, 18}, {0, 1, 2, 3, 11, 8, 9, 10} }; // This is used to select faces from a second-order tet element int tet10_faces[4][6] = { {0, 1, 3, 4, 8, 7}, {2, 3, 1, 9, 8, 5}, {0, 3, 2, 7, 9, 6}, {0, 2, 1, 6, 5, 4} }; ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/mesh_triangulation.pyx0000644000175100001770000002343014714401662021442 0ustar00runnerdocker""" This file contains code for triangulating unstructured meshes. That is, for every element in the mesh, it breaks up the element into some number of triangles, returning a triangle mesh instead. It also contains code for removing duplicate triangles from the resulting mesh using a hash-table approach, so that we don't waste time rendering impossible-to-see triangles. This code is currently used by the OpenGL-accelerated unstructured mesh renderer, as well as when annotating mesh lines on regular slices. """ import numpy as np cimport cython cimport numpy as np from libc.stdlib cimport free, malloc from yt.utilities.exceptions import YTElementTypeNotRecognized cdef extern from "mesh_triangulation.h": enum: MAX_NUM_TRI int HEX_NV int HEX_NT int TETRA_NV int TETRA_NT int WEDGE_NV int WEDGE_NT int triangulate_hex[MAX_NUM_TRI][3] int triangulate_tetra[MAX_NUM_TRI][3] int triangulate_wedge[MAX_NUM_TRI][3] int hex20_faces[6][8] int tet10_faces[4][6] cdef struct TriNode: np.uint64_t key np.int64_t elem np.int64_t tri[3] TriNode* next_node cdef np.int64_t triangles_are_equal(np.int64_t tri1[3], np.int64_t tri2[3]) noexcept nogil: cdef np.int64_t found for i in range(3): found = False for j in range(3): if tri1[i] == tri2[j]: found = True if not found: return 0 return 1 cdef np.uint64_t triangle_hash(np.int64_t tri[3]) noexcept nogil: # http://stackoverflow.com/questions/1536393/good-hash-function-for-permutations cdef np.uint64_t h = 1 for i in range(3): h *= (1779033703 + 2*tri[i]) return h // 2 # should be enough, consider dynamic resizing in the future cdef np.int64_t TABLE_SIZE = 2**24 cdef class TriSet: ''' This is a hash table data structure for rapidly identifying the exterior triangles in a polygon mesh. We loop over each triangle in each element and update the TriSet for each one. We keep only the triangles that appear once, as these make up the exterior of the mesh. 
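    A triangle shared by two elements is interior: it is inserted on the
    first visit and deleted again on the second, so after all elements are
    processed only the faces seen exactly once -- the exterior -- remain.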
''' cdef TriNode **table cdef np.uint64_t num_items def __cinit__(self): self.table = malloc(TABLE_SIZE * sizeof(TriNode*)) for i in range(TABLE_SIZE): self.table[i] = NULL self.num_items = 0 def __dealloc__(self): cdef np.int64_t i cdef TriNode *node cdef TriNode *delete_node for i in range(TABLE_SIZE): node = self.table[i] while (node != NULL): delete_node = node node = node.next_node free(delete_node) self.table[i] = NULL free(self.table) @cython.boundscheck(False) @cython.wraparound(False) def get_exterior_tris(self): ''' Returns two numpy arrays, one storing the exterior triangle indices and the other storing the corresponding element ids. ''' cdef np.int64_t[:, ::1] tri_indices = np.empty((self.num_items, 3), dtype="int64") cdef np.int64_t[::1] element_map = np.empty(self.num_items, dtype="int64") cdef TriNode* node cdef np.int64_t counter = 0 cdef np.int64_t i, j for i in range(TABLE_SIZE): node = self.table[i] while node != NULL: for j in range(3): tri_indices[counter, j] = node.tri[j] element_map[counter] = node.elem counter += 1 node = node.next_node return tri_indices, element_map cdef TriNode* _allocate_new_node(self, np.int64_t tri[3], np.uint64_t key, np.int64_t elem) noexcept nogil: cdef TriNode* new_node = malloc(sizeof(TriNode)) new_node.key = key new_node.elem = elem new_node.tri[0] = tri[0] new_node.tri[1] = tri[1] new_node.tri[2] = tri[2] new_node.next_node = NULL self.num_items += 1 return new_node @cython.cdivision(True) cdef void update(self, np.int64_t tri[3], np.int64_t elem) noexcept nogil: cdef np.uint64_t key = triangle_hash(tri) cdef np.uint64_t index = key % TABLE_SIZE cdef TriNode *node = self.table[index] if node == NULL: self.table[index] = self._allocate_new_node(tri, key, elem) return if key == node.key and triangles_are_equal(node.tri, tri): # this triangle is already here, delete it self.table[index] = node.next_node free(node) self.num_items -= 1 return elif node.next_node == NULL: node.next_node = self._allocate_new_node(tri, key, elem) return # walk through node list cdef TriNode* prev = node node = node.next_node while node != NULL: if key == node.key and triangles_are_equal(node.tri, tri): # this triangle is already here, delete it prev.next_node = node.next_node free(node) self.num_items -= 1 return if node.next_node == NULL: # we have reached the end; add new node node.next_node = self._allocate_new_node(tri, key, elem) return prev = node node = node.next_node cdef class MeshInfoHolder: cdef np.int64_t num_elem cdef np.int64_t num_tri cdef np.int64_t num_verts cdef np.int64_t VPE # num verts per element cdef np.int64_t TPE # num tris per element cdef int[MAX_NUM_TRI][3] tri_array def __cinit__(self, np.int64_t[:, ::1] indices): ''' This class is used to store metadata about the type of mesh being used. ''' self.num_elem = indices.shape[0] self.VPE = indices.shape[1] if (self.VPE == 8 or self.VPE == 20 or self.VPE == 27): self.TPE = HEX_NT self.tri_array = triangulate_hex elif (self.VPE == 4 or self.VPE == 10): self.TPE = TETRA_NT self.tri_array = triangulate_tetra elif self.VPE == 6: self.TPE = WEDGE_NT self.tri_array = triangulate_wedge else: raise YTElementTypeNotRecognized(3, self.VPE) self.num_tri = self.TPE * self.num_elem self.num_verts = self.num_tri * 3 @cython.boundscheck(False) @cython.wraparound(False) def cull_interior_triangles(np.int64_t[:, ::1] indices): ''' This is used to remove interior triangles from the mesh before rendering it on the GPU. 
''' cdef MeshInfoHolder m = MeshInfoHolder(indices) cdef TriSet s = TriSet() cdef np.int64_t i, j, k cdef np.int64_t tri[3] for i in range(m.num_elem): for j in range(m.TPE): for k in range(3): tri[k] = indices[i, m.tri_array[j][k]] s.update(tri, i) return s.get_exterior_tris() @cython.boundscheck(False) @cython.wraparound(False) def get_vertex_data(np.float64_t[:, ::1] coords, np.float64_t[:, ::1] data, np.int64_t[:, ::1] indices): ''' This converts the data array from the shape (num_elem, conn_length) to (num_verts, ). ''' cdef MeshInfoHolder m = MeshInfoHolder(indices) cdef np.int64_t num_verts = coords.shape[0] cdef np.float32_t[:] vertex_data = np.zeros(num_verts, dtype="float32") cdef np.int64_t i, j for i in range(m.num_elem): for j in range(m.VPE): vertex_data[indices[i, j]] = data[i, j] return vertex_data @cython.boundscheck(False) @cython.wraparound(False) def triangulate_mesh(np.float64_t[:, ::1] coords, np.ndarray data, np.int64_t[:, ::1] indices): ''' This converts a mesh into a flattened triangle array suitable for rendering on the GPU. ''' cdef np.int64_t[:, ::1] exterior_tris cdef np.int64_t[::1] element_map exterior_tris, element_map = cull_interior_triangles(indices) cdef np.int64_t num_tri = exterior_tris.shape[0] cdef np.int64_t num_verts = 3 * num_tri cdef np.int64_t num_coords = 3 * num_verts cdef np.float32_t[:] vertex_data if data.ndim == 2: vertex_data = get_vertex_data(coords, data, indices) else: vertex_data = data.astype("float32") cdef np.int32_t[:] tri_indices = np.empty(num_verts, dtype=np.int32) cdef np.float32_t[:] tri_data = np.empty(num_verts, dtype=np.float32) cdef np.float32_t[:] tri_coords = np.empty(num_coords, dtype=np.float32) cdef np.int64_t vert_index, i, j, k for i in range(num_tri): for j in range(3): vert_index = i*3 + j if data.ndim == 1: tri_data[vert_index] = vertex_data[element_map[i]] else: tri_data[vert_index] = vertex_data[exterior_tris[i, j]] tri_indices[vert_index] = vert_index for k in range(3): tri_coords[vert_index*3 + k] = coords[exterior_tris[i, j], k] return np.array(tri_coords), np.array(tri_data), np.array(tri_indices) @cython.boundscheck(False) @cython.wraparound(False) def triangulate_indices(np.int64_t[:, ::1] indices): ''' This is like triangulate_mesh, except it only considers the connectivity information, instead of also copying the vertex coordinates and the data values. 
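    The output is an integer array of shape (num_elem * TPE, 3), where TPE is
    the number of triangles per element (12 for hexes, 4 for tets, 8 for
    wedges).  Unlike triangulate_mesh, no interior triangles are culled here.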
''' cdef MeshInfoHolder m = MeshInfoHolder(indices) cdef np.int64_t[:, ::1] tri_indices = np.empty((m.num_tri, 3), dtype=np.int_) cdef np.int64_t i, j, k for i in range(m.num_elem): for j in range(m.TPE): for k in range(3): tri_indices[i*m.TPE + j, k] = indices[i, m.tri_array[j][k]] return np.array(tri_indices) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/mesh_utilities.pyx0000644000175100001770000000650614714401662020602 0ustar00runnerdocker # distutils: libraries = STD_LIBS """ Utilities for unstructured and semi-structured meshes """ import numpy as np cimport cython cimport numpy as np from yt.utilities.lib.fp_utils cimport fmax, fmin cdef extern from "platform_dep.h": double rint(double x) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def fill_fcoords(np.ndarray[np.float64_t, ndim=2] coords, np.ndarray[np.int64_t, ndim=2] indices, int offset = 0): cdef np.ndarray[np.float64_t, ndim=2] fcoords cdef int nc = indices.shape[0] cdef int nv = indices.shape[1] cdef np.float64_t pos[3] cdef int i, j, k fcoords = np.empty((nc, 3), dtype="float64") for i in range(nc): for j in range(3): pos[j] = 0.0 for j in range(nv): for k in range(3): pos[k] += coords[indices[i, j] - offset, k] for j in range(3): pos[j] /= nv fcoords[i, j] = pos[j] return fcoords @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def fill_fwidths(np.ndarray[np.float64_t, ndim=2] coords, np.ndarray[np.int64_t, ndim=2] indices, int offset = 0): cdef np.ndarray[np.float64_t, ndim=2] fwidths cdef int nc = indices.shape[0] cdef int nv = indices.shape[1] if nv != 8: raise NotImplementedError cdef np.float64_t LE[3] cdef np.float64_t RE[3] cdef int i, j, k cdef np.float64_t pos fwidths = np.empty((nc, 3), dtype="float64") for i in range(nc): for j in range(3): LE[j] = 1e60 RE[j] = -1e60 for j in range(nv): for k in range(3): pos = coords[indices[i, j] - offset, k] LE[k] = fmin(pos, LE[k]) RE[k] = fmax(pos, RE[k]) for j in range(3): fwidths[i, j] = RE[j] - LE[j] return fwidths @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def smallest_fwidth(np.ndarray[np.float64_t, ndim=2] coords, np.ndarray[np.int64_t, ndim=2] indices, int offset = 0): cdef np.float64_t fwidth = 1e60, pos cdef int nc = indices.shape[0] cdef int nv = indices.shape[1] if nv != 8: raise NotImplementedError cdef np.float64_t LE[3] cdef np.float64_t RE[3] cdef int i, j, k for i in range(nc): for j in range(3): LE[j] = 1e60 RE[j] = -1e60 for j in range(nv): for k in range(3): pos = coords[indices[i, j] - offset, k] LE[k] = fmin(pos, LE[k]) RE[k] = fmax(pos, RE[k]) for j in range(3): fwidth = fmin(fwidth, RE[j] - LE[j]) return fwidth @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def clamp_edges(np.float64_t[:] edge, np.float64_t[:] pleft, np.float64_t[:] pdx): """Clamp edge to pleft + pdx*n where n is the closest integer Note that edge is modified in-place. 
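    For example, with pleft = [0.0] and pdx = [0.5], an edge value of 1.2
    gives start_index = 2.4, which rounds to 2.0, so the edge is snapped to
    2.0 * 0.5 + 0.0 = 1.0.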
""" cdef np.float64_t start_index cdef np.float64_t integer_index cdef np.intp_t shape = edge.shape[0] for i in range(shape): start_index = (edge[i] - pleft[i]) / pdx[i] integer_index = rint(start_index) edge[i] = integer_index * pdx[i] + pleft[i] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/misc_utilities.pyx0000644000175100001770000013060014714401662020572 0ustar00runnerdocker# distutils: libraries = STD_LIBS # distutils: language = c++ # distutils: extra_compile_args = CPP14_FLAG OMP_ARGS # distutils: extra_link_args = CPP14_FLAG OMP_ARGS """ Simple utilities that don't fit anywhere else """ import numpy as np from yt.funcs import get_pbar from yt.units.yt_array import YTArray cimport cython cimport numpy as np from cpython cimport buffer from cython.view cimport memoryview from libc.math cimport abs, sqrt from libc.stdlib cimport free, malloc from libc.string cimport strcmp from yt.geometry.selection_routines cimport _ensure_code from yt.utilities.lib.fp_utils cimport fmax, fmin cdef extern from "platform_dep.h": # NOTE that size_t might not be int void *alloca(int) from cython.parallel import prange from cpython.exc cimport PyErr_CheckSignals @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def new_bin_profile1d(np.ndarray[np.intp_t, ndim=1] bins_x, np.ndarray[np.float64_t, ndim=1] wsource, np.ndarray[np.float64_t, ndim=2] bsource, np.ndarray[np.float64_t, ndim=1] wresult, np.ndarray[np.float64_t, ndim=2] bresult, np.ndarray[np.float64_t, ndim=2] mresult, np.ndarray[np.float64_t, ndim=2] qresult, np.ndarray[np.uint8_t, ndim=1, cast=True] used): cdef int n, fi, bin cdef np.float64_t wval, bval, oldwr, bval_mresult cdef int nb = bins_x.shape[0] cdef int nf = bsource.shape[1] for n in range(nb): bin = bins_x[n] wval = wsource[n] # Skip field value entries where the weight field is zero if wval == 0: continue oldwr = wresult[bin] wresult[bin] += wval for fi in range(nf): bval = bsource[n,fi] bval_mresult = bval - mresult[bin,fi] # qresult has to have the previous wresult qresult[bin,fi] += oldwr * wval * bval_mresult * bval_mresult / \ (oldwr + wval) bresult[bin,fi] += wval*bval # mresult needs the new wresult mresult[bin,fi] += wval * bval_mresult / wresult[bin] used[bin] = 1 return @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def new_bin_profile2d(np.ndarray[np.intp_t, ndim=1] bins_x, np.ndarray[np.intp_t, ndim=1] bins_y, np.ndarray[np.float64_t, ndim=1] wsource, np.ndarray[np.float64_t, ndim=2] bsource, np.ndarray[np.float64_t, ndim=2] wresult, np.ndarray[np.float64_t, ndim=3] bresult, np.ndarray[np.float64_t, ndim=3] mresult, np.ndarray[np.float64_t, ndim=3] qresult, np.ndarray[np.uint8_t, ndim=2, cast=True] used): cdef int n, fi, bin_x, bin_y cdef np.float64_t wval, bval, oldwr, bval_mresult cdef int nb = bins_x.shape[0] cdef int nf = bsource.shape[1] for n in range(nb): bin_x = bins_x[n] bin_y = bins_y[n] wval = wsource[n] # Skip field value entries where the weight field is zero if wval == 0: continue oldwr = wresult[bin_x, bin_y] wresult[bin_x,bin_y] += wval for fi in range(nf): bval = bsource[n,fi] bval_mresult = bval - mresult[bin_x,bin_y,fi] # qresult has to have the previous wresult qresult[bin_x,bin_y,fi] += oldwr * wval * bval_mresult * bval_mresult / \ (oldwr + wval) bresult[bin_x,bin_y,fi] += wval*bval # mresult needs the new wresult mresult[bin_x,bin_y,fi] += wval * bval_mresult / wresult[bin_x,bin_y] used[bin_x,bin_y] = 1 return 
@cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def new_bin_profile3d(np.ndarray[np.intp_t, ndim=1] bins_x, np.ndarray[np.intp_t, ndim=1] bins_y, np.ndarray[np.intp_t, ndim=1] bins_z, np.ndarray[np.float64_t, ndim=1] wsource, np.ndarray[np.float64_t, ndim=2] bsource, np.ndarray[np.float64_t, ndim=3] wresult, np.ndarray[np.float64_t, ndim=4] bresult, np.ndarray[np.float64_t, ndim=4] mresult, np.ndarray[np.float64_t, ndim=4] qresult, np.ndarray[np.uint8_t, ndim=3, cast=True] used): cdef int n, fi, bin_x, bin_y, bin_z cdef np.float64_t wval, bval, oldwr, bval_mresult cdef int nb = bins_x.shape[0] cdef int nf = bsource.shape[1] for n in range(nb): bin_x = bins_x[n] bin_y = bins_y[n] bin_z = bins_z[n] wval = wsource[n] # Skip field value entries where the weight field is zero if wval == 0: continue oldwr = wresult[bin_x, bin_y, bin_z] wresult[bin_x,bin_y,bin_z] += wval for fi in range(nf): bval = bsource[n,fi] bval_mresult = bval - mresult[bin_x,bin_y,bin_z,fi] # qresult has to have the previous wresult qresult[bin_x,bin_y,bin_z,fi] += \ oldwr * wval * bval_mresult * bval_mresult / \ (oldwr + wval) bresult[bin_x,bin_y,bin_z,fi] += wval*bval # mresult needs the new wresult mresult[bin_x,bin_y,bin_z,fi] += wval * bval_mresult / \ wresult[bin_x,bin_y,bin_z] used[bin_x,bin_y,bin_z] = 1 return @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def lines(np.float64_t[:,:,:] image, np.int64_t[:] xs, np.int64_t[:] ys, np.float64_t[:,:] colors, int points_per_color=1, int thick=1, int flip=0, int crop = 0): cdef int nx = image.shape[0] cdef int ny = image.shape[1] cdef int nl = xs.shape[0] cdef np.float64_t alpha[4] cdef np.float64_t outa cdef int i, j, xi, yi cdef int dx, dy, sx, sy, e2, err cdef np.int64_t x0, x1, y0, y1 cdef int has_alpha = (image.shape[2] == 4) cdef int no_color = (image.shape[2] < 3) for j in range(0, nl, 2): # From wikipedia http://en.wikipedia.org/wiki/Bresenham's_line_algorithm x0 = xs[j] y0 = ys[j] x1 = xs[j+1] y1 = ys[j+1] dx = abs(x1-x0) dy = abs(y1-y0) if crop == 1 and (dx > nx/2.0 or dy > ny/2.0): continue err = dx - dy if no_color: for i in range(4): alpha[i] = colors[j, 0] elif has_alpha: for i in range(4): alpha[i] = colors[j/points_per_color,i] else: for i in range(3): alpha[i] = colors[j/points_per_color,3]*\ colors[j/points_per_color,i] if x0 < x1: sx = 1 else: sx = -1 if y0 < y1: sy = 1 else: sy = -1 while(1): if (x0 < thick and sx == -1): break elif (x0 >= nx-thick+1 and sx == 1): break elif (y0 < thick and sy == -1): break elif (y0 >= ny-thick+1 and sy == 1): break if x0 >= thick and x0 < nx-thick and y0 >= thick and y0 < ny-thick: for xi in range(x0-thick/2, x0+(1+thick)/2): for yi in range(y0-thick/2, y0+(1+thick)/2): if flip: yi0 = ny - yi else: yi0 = yi if no_color: image[xi, yi0, 0] = fmin(alpha[0], image[xi, yi0, 0]) elif has_alpha: image[xi, yi0, 3] = outa = alpha[3] + image[xi, yi0, 3]*(1-alpha[3]) if outa != 0.0: outa = 1.0/outa for i in range(3): image[xi, yi0, i] = \ ((1.-alpha[3])*image[xi, yi0, i]*image[xi, yi0, 3] + alpha[3]*alpha[i])*outa else: for i in range(3): image[xi, yi0, i] = \ (1.-alpha[i])*image[xi,yi0,i] + alpha[i] if (x0 == x1 and y0 == y1): break e2 = 2*err if e2 > -dy: err = err - dy x0 += sx if e2 < dx : err = err + dx y0 += sy return @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def zlines(np.ndarray[np.float64_t, ndim=3] image, np.ndarray[np.float64_t, ndim=2] zbuffer, np.ndarray[np.int64_t, ndim=1] xs, np.ndarray[np.int64_t, ndim=1] ys, 
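# zs (next argument) carries per-vertex depth; zlines draws a pixel only
# where the segment is nearer than the current zbuffer value.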
np.ndarray[np.float64_t, ndim=1] zs, np.ndarray[np.float64_t, ndim=2] colors, int points_per_color=1, int thick=1, int flip=0, int crop = 0): cdef int nx = image.shape[0] cdef int ny = image.shape[1] cdef int nl = xs.shape[0] cdef np.float64_t[:] alpha cdef int i, j, c cdef int dx, dy, sx, sy, e2, err cdef np.int64_t x0, x1, y0, y1, yi0 cdef np.float64_t z0, z1, dzx, dzy alpha = np.zeros(4) for j in range(0, nl, 2): # From wikipedia http://en.wikipedia.org/wiki/Bresenham's_line_algorithm x0 = xs[j] y0 = ys[j] x1 = xs[j+1] y1 = ys[j+1] z0 = zs[j] z1 = zs[j+1] dx = abs(x1-x0) dy = abs(y1-y0) dzx = (z1-z0) / (dx * dx + dy * dy) * dx dzy = (z1-z0) / (dx * dx + dy * dy) * dy err = dx - dy if crop == 1 and (dx > nx/2.0 or dy > ny/2.0): continue c = j/points_per_color/2 for i in range(3): alpha[i] = colors[c, i] * colors[c, 3] alpha[3] = colors[c, 3] if x0 < x1: sx = 1 else: sx = -1 if y0 < y1: sy = 1 else: sy = -1 while(1): if (x0 < thick and sx == -1): break elif (x0 >= nx-thick+1 and sx == 1): break elif (y0 < thick and sy == -1): break elif (y0 >= ny-thick+1 and sy == 1): break if x0 >= thick and x0 < nx-thick and y0 >= thick and y0 < ny-thick: for _ in range(x0-thick/2, x0+(1+thick)/2): for yi in range(y0-thick/2, y0+(1+thick)/2): if flip: yi0 = ny - yi else: yi0 = yi if z0 < zbuffer[x0, yi0]: if alpha[3] != 1.0: talpha = image[x0, yi0, 3] image[x0, yi0, 3] = alpha[3] + talpha * (1 - alpha[3]) for i in range(3): if image[x0, yi0, 3] == 0.0: image[x0, yi0, i] = 0.0 else: image[x0, yi0, i] = (alpha[3]*alpha[i] + image[x0, yi0, i]*talpha*(1.0-alpha[3]))/image[x0,yi0,3] else: for i in range(4): image[x0, yi0, i] = alpha[i] if (1.0 - image[x0, yi0, 3] < 1.0e-4): image[x0, yi0, 3] = 1.0 zbuffer[x0, yi0] = z0 if (x0 == x1 and y0 == y1): break e2 = 2*err if e2 > -dy: err = err - dy x0 += sx z0 += dzx if e2 < dx : err = err + dx y0 += sy z0 += dzy # assert(np.abs(z0 - z1) < 1.0e-3 * (np.abs(z0) + np.abs(z1))) return @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def zpoints(np.ndarray[np.float64_t, ndim=3] image, np.ndarray[np.float64_t, ndim=2] zbuffer, np.ndarray[np.int64_t, ndim=1] xs, np.ndarray[np.int64_t, ndim=1] ys, np.ndarray[np.float64_t, ndim=1] zs, np.ndarray[np.float64_t, ndim=2] colors, np.ndarray[np.int64_t, ndim=1] radii, #pixels int points_per_color=1, int thick=1, int flip=0): cdef int nx = image.shape[0] cdef int ny = image.shape[1] cdef np.float64_t[:] alpha cdef np.float64_t talpha cdef int i, j, c cdef np.int64_t kx, ky, r, r2 cdef np.int64_t[:] idx, ks cdef np.int64_t x0, y0, yi0 cdef np.float64_t z0 alpha = np.zeros(4) #the sources must be ordered along z to avoid edges when two overlap idx = np.argsort(zs) for j in idx: r = radii[j] r2 = int((r+0.3)*(r+0.3)) #0.3 to get nicer shape ks = np.arange(-r, r+1, dtype=np.int64) z0 = zs[j] for kx in ks: x0 = xs[j]+kx if (x0 < 0 or x0 >= nx): continue for ky in ks: y0 = ys[j]+ky if (y0 < 0 or y0 >= ny): continue if (kx*kx + ky*ky > r2): continue c = j/points_per_color for i in range(3): alpha[i] = colors[c, i] * colors[c, 3] alpha[3] = colors[c, 3] if flip: yi0 = ny - y0 else: yi0 = y0 if z0 < zbuffer[x0, yi0]: if alpha[3] != 1.0: talpha = image[x0, yi0, 3] image[x0, yi0, 3] = alpha[3] + talpha * (1 - alpha[3]) for i in range(3): image[x0, yi0, i] = (alpha[3]*alpha[i] + image[x0, yi0, i]*talpha*(1.0-alpha[3]))/image[x0,yi0,3] if image[x0, yi0, 3] == 0.0: image[x0, yi0, i] = 0.0 else: for i in range(4): image[x0, yi0, i] = alpha[i] if (1.0 - image[x0, yi0, 3] < 1.0e-4): zbuffer[x0, yi0] = z0 return def 
rotate_vectors(np.ndarray[np.float64_t, ndim=3] vecs, np.ndarray[np.float64_t, ndim=2] R): cdef int nx = vecs.shape[0] cdef int ny = vecs.shape[1] rotated = np.empty((nx,ny,3),dtype='float64') for i in range(nx): for j in range(ny): for k in range(3): rotated[i,j,k] =\ R[k,0]*vecs[i,j,0]+R[k,1]*vecs[i,j,1]+R[k,2]*vecs[i,j,2] return rotated @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def get_color_bounds(np.ndarray[np.float64_t, ndim=1] px, np.ndarray[np.float64_t, ndim=1] py, np.ndarray[np.float64_t, ndim=1] pdx, np.ndarray[np.float64_t, ndim=1] pdy, np.ndarray[np.float64_t, ndim=1] value, np.float64_t leftx, np.float64_t rightx, np.float64_t lefty, np.float64_t righty, np.float64_t mindx = -1, np.float64_t maxdx = -1): cdef int i cdef np.float64_t mi = 1e100, ma = -1e100, v cdef int npx = px.shape[0] with nogil: for i in range(npx): v = value[i] if v < mi or v > ma: if px[i] + pdx[i] < leftx: continue if px[i] - pdx[i] > rightx: continue if py[i] + pdy[i] < lefty: continue if py[i] - pdy[i] > righty: continue if pdx[i] < mindx or pdy[i] < mindx: continue if maxdx > 0 and (pdx[i] > maxdx or pdy[i] > maxdx): continue if v < mi: mi = v if v > ma: ma = v return (mi, ma) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def kdtree_get_choices(np.ndarray[np.float64_t, ndim=3] data, np.ndarray[np.float64_t, ndim=1] l_corner, np.ndarray[np.float64_t, ndim=1] r_corner): cdef int i, j, k, dim, n_unique, best_dim, n_grids, my_split n_grids = data.shape[0] cdef np.float64_t **uniquedims cdef np.float64_t *uniques cdef np.float64_t split uniquedims = alloca(3 * sizeof(np.float64_t*)) for i in range(3): uniquedims[i] = \ alloca(2*n_grids * sizeof(np.float64_t)) my_max = 0 best_dim = -1 my_split = -1 for dim in range(3): n_unique = 0 uniques = uniquedims[dim] for i in range(n_grids): # Check for disqualification for j in range(2): #print("Checking against", i,j,dim,data[i,j,dim]) if not (l_corner[dim] < data[i, j, dim] and data[i, j, dim] < r_corner[dim]): #print("Skipping ", data[i,j,dim]) continue skipit = 0 # Add our left ... for k in range(n_unique): if uniques[k] == data[i, j, dim]: skipit = 1 #print("Identified", uniques[k], data[i,j,dim], n_unique) break if skipit == 0: uniques[n_unique] = data[i, j, dim] n_unique += 1 if n_unique > my_max: best_dim = dim my_max = n_unique my_split = (n_unique-1)/2 # I recognize how lame this is. 
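# At this point best_dim is the dimension with the most unique grid-edge
# values strictly inside (l_corner, r_corner), and my_split indexes the
# median of those unique values once they are sorted below.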
cdef np.ndarray[np.float64_t, ndim=1] tarr = np.empty(my_max, dtype='float64') for i in range(my_max): #print("Setting tarr: ", i, uniquedims[best_dim][i]) tarr[i] = uniquedims[best_dim][i] tarr.sort() if my_split < 0: raise RuntimeError split = tarr[my_split] cdef np.ndarray[np.uint8_t, ndim=1] less_ids = np.empty(n_grids, dtype='uint8') cdef np.ndarray[np.uint8_t, ndim=1] greater_ids = np.empty(n_grids, dtype='uint8') for i in range(n_grids): if data[i, 0, best_dim] < split: less_ids[i] = 1 else: less_ids[i] = 0 if data[i, 1, best_dim] > split: greater_ids[i] = 1 else: greater_ids[i] = 0 # Return out unique values return best_dim, split, less_ids.view("bool"), greater_ids.view("bool") @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def get_box_grids_level(np.ndarray[np.float64_t, ndim=1] left_edge, np.ndarray[np.float64_t, ndim=1] right_edge, int level, np.ndarray[np.float64_t, ndim=2] left_edges, np.ndarray[np.float64_t, ndim=2] right_edges, np.ndarray[np.int32_t, ndim=2] levels, np.ndarray[np.int32_t, ndim=1] mask, int min_index = 0): cdef int i, n cdef int nx = left_edges.shape[0] cdef int inside cdef np.float64_t eps = np.finfo(np.float64).eps for i in range(nx): if i < min_index or levels[i,0] != level: mask[i] = 0 continue inside = 1 for n in range(3): if (right_edges[i,n] - left_edge[n]) <= eps or \ (right_edge[n] - left_edges[i,n]) <= eps: inside = 0 break if inside == 1: mask[i] = 1 else: mask[i] = 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def get_box_grids_below_level( np.ndarray[np.float64_t, ndim=1] left_edge, np.ndarray[np.float64_t, ndim=1] right_edge, int level, np.ndarray[np.float64_t, ndim=2] left_edges, np.ndarray[np.float64_t, ndim=2] right_edges, np.ndarray[np.int32_t, ndim=2] levels, np.ndarray[np.int32_t, ndim=1] mask, int min_level = 0): cdef int i, n cdef int nx = left_edges.shape[0] cdef int inside cdef np.float64_t eps = np.finfo(np.float64).eps for i in range(nx): mask[i] = 0 if levels[i,0] <= level and levels[i,0] >= min_level: inside = 1 for n in range(3): if (right_edges[i,n] - left_edge[n]) <= eps or \ (right_edge[n] - left_edges[i,n]) <= eps: inside = 0 break if inside == 1: mask[i] = 1 @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def obtain_position_vector( data, field_names = (("index", "x"), ("index", "y"), ("index", "z")) ): # This is just to let the pointers exist and whatnot. We can't cdef them # inside conditionals. 
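# The 1D branch below handles flat (particle-style) fields and the 3D branch
# handles grid-shaped fields; both return positions relative to the "center"
# field parameter, converted to the units of the first coordinate field.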
cdef np.ndarray[np.float64_t, ndim=1] xf cdef np.ndarray[np.float64_t, ndim=1] yf cdef np.ndarray[np.float64_t, ndim=1] zf cdef np.ndarray[np.float64_t, ndim=2] rf cdef np.ndarray[np.float64_t, ndim=3] xg cdef np.ndarray[np.float64_t, ndim=3] yg cdef np.ndarray[np.float64_t, ndim=3] zg cdef np.ndarray[np.float64_t, ndim=4] rg cdef np.float64_t c[3] cdef int i, j, k units = data[field_names[0]].units center = data.get_field_parameter("center").to(units) c[0] = center[0]; c[1] = center[1]; c[2] = center[2] if len(data[field_names[0]].shape) == 1: # One dimensional data xf = data[field_names[0]] yf = data[field_names[1]] zf = data[field_names[2]] rf = YTArray(np.empty((3, xf.shape[0]), 'float64'), xf.units) for i in range(xf.shape[0]): rf[0, i] = xf[i] - c[0] rf[1, i] = yf[i] - c[1] rf[2, i] = zf[i] - c[2] return rf else: # Three dimensional data xg = data[field_names[0]] yg = data[field_names[1]] zg = data[field_names[2]] shape = (3, xg.shape[0], xg.shape[1], xg.shape[2]) rg = YTArray(np.empty(shape, 'float64'), xg.units) #rg = YTArray(rg, xg.units) for i in range(xg.shape[0]): for j in range(xg.shape[1]): for k in range(xg.shape[2]): rg[0,i,j,k] = xg[i,j,k] - c[0] rg[1,i,j,k] = yg[i,j,k] - c[1] rg[2,i,j,k] = zg[i,j,k] - c[2] return rg @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def obtain_relative_velocity_vector( data, field_names = (("gas", "velocity_x"), ("gas", "velocity_y"), ("gas", "velocity_z")), bulk_vector = "bulk_velocity" ): # This is just to let the pointers exist and whatnot. We can't cdef them # inside conditionals. cdef np.ndarray[np.float64_t, ndim=1] vxf cdef np.ndarray[np.float64_t, ndim=1] vyf cdef np.ndarray[np.float64_t, ndim=1] vzf cdef np.ndarray[np.float64_t, ndim=2] rvf cdef np.ndarray[np.float64_t, ndim=3] vxg cdef np.ndarray[np.float64_t, ndim=3] vyg cdef np.ndarray[np.float64_t, ndim=3] vzg cdef np.ndarray[np.float64_t, ndim=4] rvg cdef np.float64_t bv[3] cdef int i, j, k, dim units = data[field_names[0]].units bulk_vector = data.get_field_parameter(bulk_vector).to(units) dim = data[field_names[0]].ndim if dim == 1: # One dimensional data vxf = data[field_names[0]].astype("float64") vyf = data[field_names[1]].astype("float64") vzf = data[field_names[2]].astype("float64") vyf.convert_to_units(vxf.units) vzf.convert_to_units(vxf.units) rvf = YTArray(np.empty((3, vxf.shape[0]), 'float64'), vxf.units) if bulk_vector is None: bv[0] = bv[1] = bv[2] = 0.0 else: bulk_vector = bulk_vector.in_units(vxf.units) bv[0] = bulk_vector[0] bv[1] = bulk_vector[1] bv[2] = bulk_vector[2] for i in range(vxf.shape[0]): rvf[0, i] = vxf[i] - bv[0] rvf[1, i] = vyf[i] - bv[1] rvf[2, i] = vzf[i] - bv[2] return rvf elif dim == 3: # Three dimensional data vxg = data[field_names[0]].astype("float64") vyg = data[field_names[1]].astype("float64") vzg = data[field_names[2]].astype("float64") vyg.convert_to_units(vxg.units) vzg.convert_to_units(vxg.units) shape = (3, vxg.shape[0], vxg.shape[1], vxg.shape[2]) rvg = YTArray(np.empty(shape, 'float64'), vxg.units) if bulk_vector is None: bv[0] = bv[1] = bv[2] = 0.0 else: bulk_vector = bulk_vector.in_units(vxg.units) bv[0] = bulk_vector[0] bv[1] = bulk_vector[1] bv[2] = bulk_vector[2] for i in range(vxg.shape[0]): for j in range(vxg.shape[1]): for k in range(vxg.shape[2]): rvg[0,i,j,k] = vxg[i,j,k] - bv[0] rvg[1,i,j,k] = vyg[i,j,k] - bv[1] rvg[2,i,j,k] = vzg[i,j,k] - bv[2] return rvg else: raise NotImplementedError(f"Unsupported dimensionality `{dim}`.") def grow_flagging_field(oofield): cdef np.ndarray[np.uint8_t, 
ndim=3] ofield = oofield.astype("uint8") cdef np.ndarray[np.uint8_t, ndim=3] nfield nfield = np.zeros_like(ofield) cdef int i, j, k, ni, nj, nk cdef int oi, oj, ok for ni in range(ofield.shape[0]): for nj in range(ofield.shape[1]): for nk in range(ofield.shape[2]): for oi in range(3): i = ni + (oi - 1) if i < 0 or i >= ofield.shape[0]: continue for oj in range(3): j = nj + (oj - 1) if j < 0 or j >= ofield.shape[1]: continue for ok in range(3): k = nk + (ok - 1) if k < 0 or k >= ofield.shape[2]: continue if ofield[i, j, k] == 1: nfield[ni, nj, nk] = 1 return nfield.astype("bool") @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def fill_region( input_fields, output_fields, np.int32_t output_level, np.ndarray[np.int64_t, ndim=1] left_index, np.ndarray[np.int64_t, ndim=2] ipos, np.ndarray[np.int64_t, ndim=1] ires, np.ndarray[np.int64_t, ndim=1] level_dims, np.ndarray[np.int64_t, ndim=1] refine_by ): cdef int i, n cdef np.int64_t tot = 0, oi, oj, ok cdef np.int64_t rf[3] cdef np.int64_t iind[3] cdef np.int64_t oind[3] cdef np.int64_t dim[3] cdef np.ndarray[np.float64_t, ndim=3] ofield cdef np.ndarray[np.float64_t, ndim=1] ifield nf = len(input_fields) # The variable offsets governs for each dimension and each possible # wrapping if we do it. Then the wi, wj, wk indices check into each # [dim][wrap] inside the loops. cdef int wi, wj, wk cdef int offsets[3][3] cdef np.int64_t off for i in range(3): # Offsets here is a way of accounting for periodicity. It keeps track # of how to offset our grid as we loop over the icoords. dim[i] = output_fields[0].shape[i] offsets[i][0] = offsets[i][2] = 0 offsets[i][1] = 1 if left_index[i] < 0: offsets[i][2] = 1 if left_index[i] + dim[i] >= level_dims[i]: offsets[i][0] = 1 for n in range(nf): tot = 0 ofield = output_fields[n] ifield = input_fields[n] for i in range(ipos.shape[0]): for k in range(3): rf[k] = refine_by[k]**(output_level - ires[i]) for wi in range(3): if offsets[0][wi] == 0: continue off = (left_index[0] + level_dims[0]*(wi-1)) iind[0] = ipos[i, 0] * rf[0] - off # rf here is the "refinement factor", or, the number of zones # that this zone could potentially contribute to our filled # grid. 
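# Each input zone therefore maps onto an rf[0] x rf[1] x rf[2] block of
# output zones; oind below is the output index, clipped to [0, dim) after
# applying the periodic offset.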
for oi in range(rf[0]): # Now we need to apply our offset oind[0] = oi + iind[0] if oind[0] < 0: continue elif oind[0] >= dim[0]: break for wj in range(3): if offsets[1][wj] == 0: continue off = (left_index[1] + level_dims[1]*(wj-1)) iind[1] = ipos[i, 1] * rf[1] - off for oj in range(rf[1]): oind[1] = oj + iind[1] if oind[1] < 0: continue elif oind[1] >= dim[1]: break for wk in range(3): if offsets[2][wk] == 0: continue off = (left_index[2] + level_dims[2]*(wk-1)) iind[2] = ipos[i, 2] * rf[2] - off for ok in range(rf[2]): oind[2] = ok + iind[2] if oind[2] < 0: continue elif oind[2] >= dim[2]: break ofield[oind[0], oind[1], oind[2]] = \ ifield[i] tot += 1 return tot @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def flip_bitmask(np.ndarray[np.float64_t, ndim=1] vals, np.float64_t left_edge, np.float64_t right_edge, np.uint64_t nbins): cdef np.uint64_t i, bin_id cdef np.float64_t idx = nbins / (right_edge - left_edge) cdef np.ndarray[np.uint8_t, ndim=1, cast=True] bitmask bitmask = np.zeros(nbins, dtype="uint8") for i in range(vals.shape[0]): bin_id = ((vals[i] - left_edge)*idx) bitmask[bin_id] = 1 return bitmask #@cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def flip_morton_bitmask(np.ndarray[np.uint64_t, ndim=1] morton_indices, int max_order): # We assume that the morton_indices are fed to us in a setup that allows # for 20 levels. This means that we shift right by 3*(20-max_order) (or is # that a fencepost?) cdef np.uint64_t mi, i cdef np.ndarray[np.uint8_t, ndim=1, cast=True] bitmask # Note that this will fail if it's too big, since numpy will check nicely # the memory availability. I guess. bitmask = np.zeros(1 << (3*max_order), dtype="uint8") for i in range(morton_indices.shape[0]): mi = (morton_indices[i] >> (3 * (20-max_order))) bitmask[mi] = 1 return bitmask #@cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def count_collisions(np.ndarray[np.uint8_t, ndim=2] masks): cdef int i, j, k cdef np.ndarray[np.uint32_t, ndim=1] counts cdef np.ndarray[np.uint8_t, ndim=1] collides counts = np.zeros(masks.shape[1], dtype="uint32") collides = np.zeros(masks.shape[1], dtype="uint8") for i in range(masks.shape[1]): print(i) for j in range(masks.shape[1]): collides[j] = 0 for k in range(masks.shape[0]): if masks[k,i] == 0: continue for j in range(masks.shape[1]): if j == i: continue if masks[k,j] == 1: collides[j] = 1 counts[i] = collides.sum() return counts @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def fill_region_float(np.ndarray[np.float64_t, ndim=2] fcoords, np.ndarray[np.float64_t, ndim=2] fwidth, np.ndarray[np.float64_t, ndim=1] data, np.ndarray[np.float64_t, ndim=1] box_left_edge, np.ndarray[np.float64_t, ndim=1] box_right_edge, np.ndarray[np.float64_t, ndim=3] dest, int antialias = 1, period = None, int check_period = 1): cdef np.float64_t ds_period[3] cdef np.float64_t box_dds[3] cdef np.float64_t box_idds[3] cdef np.float64_t width[3] cdef np.float64_t LE[3] cdef np.float64_t RE[3] cdef np.int64_t i, j, k, p, xi, yi cdef np.int64_t dims[3] cdef np.int64_t ld[3] cdef np.int64_t ud[3] cdef np.float64_t overlap[3] cdef np.float64_t dsp cdef np.float64_t osp[3] cdef np.float64_t odsp[3] cdef np.float64_t sp[3] cdef np.float64_t lfd[3] cdef np.float64_t ufd[3] # These are the temp vars we get from the arrays # Some periodicity helpers cdef int diter[3][2] cdef np.float64_t diterv[3][2] if period is not None: for i in range(3): ds_period[i] = period[i] else: ds_period[0] = 
ds_period[1] = ds_period[2] = 0.0 box_left_edge = _ensure_code(box_left_edge) box_right_edge = _ensure_code(box_right_edge) _ensure_code(fcoords) _ensure_code(fwidth) for i in range(3): LE[i] = box_left_edge[i] RE[i] = box_right_edge[i] width[i] = RE[i] - LE[i] dims[i] = dest.shape[i] box_dds[i] = width[i] / dims[i] box_idds[i] = 1.0/box_dds[i] diter[i][0] = diter[i][1] = 0 diterv[i][0] = diterv[i][1] = 0.0 overlap[i] = 1.0 with nogil: for p in range(fcoords.shape[0]): for i in range(3): diter[i][1] = 999 odsp[i] = fwidth[p,i]*0.5 osp[i] = fcoords[p,i] # already centered overlap[i] = 1.0 dsp = data[p] if check_period == 1: for i in range(3): if (osp[i] - odsp[i] < LE[i]): diter[i][1] = +1 diterv[i][1] = ds_period[i] elif (osp[i] + odsp[i] > RE[i]): diter[i][1] = -1 diterv[i][1] = -ds_period[i] for xi in range(2): if diter[0][xi] == 999: continue sp[0] = osp[0] + diterv[0][xi] if (sp[0] + odsp[0] < LE[0]) or (sp[0] - odsp[0] > RE[0]): continue for yi in range(2): if diter[1][yi] == 999: continue sp[1] = osp[1] + diterv[1][yi] if (sp[1] + odsp[1] < LE[1]) or (sp[1] - odsp[1] > RE[1]): continue for zi in range(2): if diter[2][zi] == 999: continue sp[2] = osp[2] + diterv[2][zi] if (sp[2] + odsp[2] < LE[2]) or (sp[2] - odsp[2] > RE[2]): continue for i in range(3): ld[i] = fmax(((sp[i]-odsp[i]-LE[i])*box_idds[i]),0) # NOTE: This is a different way of doing it than in the C # routines. In C, we were implicitly casting the # initialization to int, but *not* the conditional, which # was allowed an extra value: # for(j=lc;j fmin(((sp[i]+odsp[i]-LE[i])*box_idds[i] + 1), dims[i]) for i in range(ld[0], ud[0]): if antialias == 1: lfd[0] = box_dds[0] * i + LE[0] ufd[0] = box_dds[0] * (i + 1) + LE[0] overlap[0] = ((fmin(ufd[0], sp[0]+odsp[0]) - fmax(lfd[0], (sp[0]-odsp[0])))*box_idds[0]) if overlap[0] < 0.0: continue for j in range(ld[1], ud[1]): if antialias == 1: lfd[1] = box_dds[1] * j + LE[1] ufd[1] = box_dds[1] * (j + 1) + LE[1] overlap[1] = ((fmin(ufd[1], sp[1]+odsp[1]) - fmax(lfd[1], (sp[1]-odsp[1])))*box_idds[1]) if overlap[1] < 0.0: continue for k in range(ld[2], ud[2]): if antialias == 1: lfd[2] = box_dds[2] * k + LE[2] ufd[2] = box_dds[2] * (k + 1) + LE[2] overlap[2] = ((fmin(ufd[2], sp[2]+odsp[2]) - fmax(lfd[2], (sp[2]-odsp[2])))*box_idds[2]) if overlap[2] < 0.0: continue dest[i,j,k] += dsp * (overlap[0]*overlap[1]*overlap[2]) else: dest[i,j,k] = dsp @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def gravitational_binding_energy( np.float64_t[:] mass, np.float64_t[:] x, np.float64_t[:] y, np.float64_t[:] z, int truncate, np.float64_t kinetic, int num_threads = 0): cdef int q_outer, q_inner, n_q cdef np.float64_t mass_o, x_o, y_o, z_o cdef np.float64_t mass_i, x_i, y_i, z_i cdef np.float64_t total_potential = 0. cdef np.float64_t this_potential n_q = mass.size pbar = get_pbar("Calculating potential for %d cells with %d thread(s)" % (n_q,num_threads), n_q) # using reversed iterator in order to make use of guided scheduling # (inner loop is getting more and more expensive) for q_outer in prange(n_q - 1,-1,-1, nogil=True,schedule='guided',num_threads=num_threads): this_potential = 0. 
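# Each outer iteration accumulates this particle's pair potential with all
# later particles, so the double loop visits every unordered pair exactly
# once; total work is O(N^2 / 2), which is why the loop runs under prange
# with guided scheduling.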
mass_o = mass[q_outer] x_o = x[q_outer] y_o = y[q_outer] z_o = z[q_outer] for q_inner in range(q_outer + 1, n_q): mass_i = mass[q_inner] x_i = x[q_inner] y_i = y[q_inner] z_i = z[q_inner] # not using += operator so that variable is not automatically reduced this_potential = this_potential + mass_o * mass_i / \ sqrt((x_i - x_o) * (x_i - x_o) + (y_i - y_o) * (y_i - y_o) + (z_i - z_o) * (z_i - z_o)) total_potential += this_potential if truncate and this_potential / kinetic > 1.: break with gil: PyErr_CheckSignals() # this call is not thread safe, but it gives a reasonable approximation pbar.update() pbar.finish() return total_potential # The OnceIndirect code is from: # http://stackoverflow.com/questions/10465091/assembling-a-cython-memoryview-from-numpy-arrays/12991519#12991519 # This is under the CC-BY-SA license. cdef class OnceIndirect: cdef object _objects cdef void** buf cdef int ndim cdef int n_rows cdef int buf_len cdef Py_ssize_t* shape cdef Py_ssize_t* strides cdef Py_ssize_t* suboffsets cdef Py_ssize_t itemsize cdef bytes format cdef int is_readonly def __cinit__(self, object rows, want_writable=True, want_format=True, allow_indirect=False): """ Set want_writable to False if you don't want writable data. (This may prevent copies.) Set want_format to False if your input doesn't support PyBUF_FORMAT (unlikely) Set allow_indirect to True if you are ok with the memoryview being indirect in dimensions other than the first. (This may prevent copies.) An example usage: cdef double[::cython.view.indirect, ::1] vectors = OnceIndirect([object.vector for object in objects]) """ demand = buffer.PyBUF_INDIRECT if allow_indirect else buffer.PyBUF_STRIDES if want_writable: demand |= buffer.PyBUF_WRITABLE if want_format: demand |= buffer.PyBUF_FORMAT self._objects = [memoryview(row, demand) for row in rows] self.n_rows = len(self._objects) self.buf_len = sizeof(void*) * self.n_rows self.buf = malloc(self.buf_len) self.ndim = 1 + self._objects[0].ndim self.shape = malloc(sizeof(Py_ssize_t) * self.ndim) self.strides = malloc(sizeof(Py_ssize_t) * self.ndim) self.suboffsets = malloc(sizeof(Py_ssize_t) * self.ndim) cdef memoryview example_obj = self._objects[0] self.itemsize = example_obj.itemsize if want_format: self.format = example_obj.view.format else: self.format = None self.is_readonly |= example_obj.view.readonly for dim in range(self.ndim): if dim == 0: self.shape[dim] = self.n_rows self.strides[dim] = sizeof(void*) self.suboffsets[dim] = 0 else: self.shape[dim] = example_obj.view.shape[dim - 1] self.strides[dim] = example_obj.view.strides[dim - 1] if example_obj.view.suboffsets == NULL: self.suboffsets[dim] = -1 else: self.suboffsets[dim] = example_obj.suboffsets[dim - 1] cdef memoryview obj cdef int i = 0 for obj in self._objects: assert_similar(example_obj, obj) self.buf[i] = obj.view.buf i += 1 def __getbuffer__(self, Py_buffer* buff, int flags): if (flags & buffer.PyBUF_INDIRECT) != buffer.PyBUF_INDIRECT: raise Exception("don't want to copy data") if flags & buffer.PyBUF_WRITABLE and self.is_readonly: raise Exception("couldn't provide writable, you should have demanded it earlier") if flags & buffer.PyBUF_FORMAT: if self.format is None: raise Exception("couldn't provide format, you should have demanded it earlier") buff.format = self.format else: buff.format = NULL buff.buf = self.buf buff.obj = self buff.len = self.buf_len buff.readonly = self.is_readonly buff.ndim = self.ndim buff.shape = self.shape buff.strides = self.strides buff.suboffsets = self.suboffsets buff.itemsize = 
self.itemsize buff.internal = NULL def __dealloc__(self): free(self.buf) free(self.shape) free(self.strides) free(self.suboffsets) cdef int assert_similar(memoryview left_, memoryview right_) except -1: cdef Py_buffer left = left_.view cdef Py_buffer right = right_.view assert left.ndim == right.ndim cdef int i for i in range(left.ndim): assert left.shape[i] == right.shape[i], (left_.shape, right_.shape) assert left.strides[i] == right.strides[i], (left_.strides, right_.strides) if left.suboffsets == NULL: assert right.suboffsets == NULL, (left_.suboffsets, right_.suboffsets) else: for i in range(left.ndim): assert left.suboffsets[i] == right.suboffsets[i], (left_.suboffsets, right_.suboffsets) if left.format == NULL: assert right.format == NULL, (bytes(left.format), bytes(right.format)) else: #alternatively, compare as Python strings: #assert bytes(left.format) == bytes(right.format) assert strcmp(left.format, right.format) == 0, (bytes(left.format), bytes(right.format)) return 0 def _obtain_coords_and_widths(np.int64_t[:] icoords, np.int64_t[:] ires, np.float64_t[:] cell_widths, np.float64_t offset): # This function only accepts *one* axis of icoords, because we will be # looping over the axes in python. This also simplifies cell_width # allocation and computation. cdef np.ndarray[np.float64_t, ndim=1] fcoords = np.zeros(icoords.size, dtype="f8") cdef np.ndarray[np.float64_t, ndim=1] fwidth = np.zeros(icoords.size, dtype="f8") cdef np.ndarray[np.float64_t, ndim=1] cell_centers = np.zeros(cell_widths.size, dtype="f8") cdef int i cdef np.float64_t pos = offset for i in range(cell_widths.size): cell_centers[i] = pos + 0.5 * cell_widths[i] pos += cell_widths[i] for i in range(icoords.size): fcoords[i] = cell_centers[icoords[i]] fwidth[i] = cell_widths[icoords[i]] return fcoords, fwidth ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/octree_raytracing.py0000644000175100001770000000757414714401662021075 0ustar00runnerdockerfrom itertools import product import numpy as np from yt.funcs import mylog from yt.utilities.lib._octree_raytracing import _OctreeRayTracing class OctreeRayTracing: octree = None data_source = None log_fields = None fields = None # Internal data _cell_index = None _tvalues = None def __init__(self, data_source): self.data_source = data_source ds = data_source.ds LE = np.array([0, 0, 0], dtype=np.float64) RE = np.array([1, 1, 1], dtype=np.float64) lvl_min = ds.min_level + 1 # This is the max refinement so that the smallest cells have size # 1/2**depth depth = lvl_min + ds.max_level + 1 self.octree = _OctreeRayTracing(LE, RE, depth) ds = data_source.ds xyz = np.stack([data_source[key].to_value("unitary") for key in "xyz"], axis=-1) lvl = data_source["grid_level"].value.astype("int64", copy=False) + lvl_min ipos = np.floor(xyz * (1 << depth)).astype("int64") mylog.debug("Adding cells to volume") self.octree.add_nodes( ipos.astype(np.int32), lvl.astype(np.int32), np.arange(len(ipos), dtype=np.int32), ) def vertex_centered_data(self, field): data_source = self.data_source chunks = data_source.index._chunk(data_source, "spatial", ngz=1) finfo = data_source.ds._get_field_info(field) units = finfo.units rv = data_source.ds.arr( np.zeros((2, 2, 2, data_source.ires.size), dtype="float64"), units ) binary_3D_index_iter = product(*[range(2)] * 3) ind = {(i, j, k): 0 for i, j, k in binary_3D_index_iter} for chunk in chunks: with data_source._chunked_read(chunk): gz = data_source._current_chunk.objs[0] 
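# gz is the ghost-zone-padded grid object for this chunk; its base grid
# (wogz below) is used to select only the cells inside the data source.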
gz.field_parameters = data_source.field_parameters wogz = gz._base_grid vertex_data = gz.get_vertex_centered_data([field])[field] for i, j, k in product(*[range(2)] * 3): ind[i, j, k] += wogz.select( data_source.selector, vertex_data[i : i + 2, j : j + 2, k : k + 2, ...], rv[i, j, k, :], ind[i, j, k], ) return rv def set_fields(self, fields, log_fields, no_ghost, force=False): if no_ghost: raise NotImplementedError("Ghost zones are required with Octree datasets") if len(fields) != 1: raise ValueError( "Can only set one fields at a time. " "This is likely a bug, and should be reported." ) field = self.data_source._determine_fields(fields)[0] take_log = log_fields[0] vertex_data = self.vertex_centered_data(field) if take_log: vertex_data = np.log10(vertex_data) # Vertex_data has shape (2, 2, 2, ...) # Note: here we have the wrong ordering within the oct (classical Fortran/C # ordering issue) so we need to swap axis 0 and 2. self.data = vertex_data.swapaxes(0, 2).reshape(8, -1) def cast_rays(self, vp_pos, vp_dir): """Cast the rays through the oct. Parameters ---------- vp_pos : float arrays (Nrays, Ndim) vp_dir : float arrays (Nrays, Ndim) The position (unitary) and direction of each ray Returns ------- cell_index : list of integer arrays of shape (Ncell) For each ray, contains an ordered array of cell ids that it intersects with tvalues : list of float arrays of shape (Ncell, 2) The t value at entry and exit for each cell. """ if not self._cell_index: self._cell_index, self._tvalues = self.octree.cast_rays(vp_pos, vp_dir) return self._cell_index, self._tvalues ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/origami.pyx0000644000175100001770000000315214714401662017174 0ustar00runnerdocker# distutils: sources = yt/utilities/lib/origami_tags.c # distutils: include_dirs = LIB_DIR # distutils: depends = yt/utilities/lib/origami_tags.h # distutils: libraries = STD_LIBS """ This calls the ORIGAMI routines """ import numpy as np cimport numpy as np from libc.stdlib cimport free, malloc cdef extern from "origami_tags.h": int compute_tags(int ng, double boxsize, double **r, int npart, unsigned char *m) cdef int printed_citation = 0 def run_origami(np.ndarray[np.float64_t, ndim=1] pos_x, np.ndarray[np.float64_t, ndim=1] pos_y, np.ndarray[np.float64_t, ndim=1] pos_z, double boxsize): # We assume these have been passed in in the correct order and # C-contiguous. global printed_citation if printed_citation == 0: print("ORIGAMI was developed by Bridget Falck and Mark Neyrinck.") print("Please cite Falck, Neyrinck, & Szalay 2012, ApJ, 754, 2, 125.") printed_citation = 1 cdef int npart = pos_x.size if npart == 1: return np.zeros(1, dtype="uint8") assert(sizeof(unsigned char) == sizeof(np.uint8_t)) assert(sizeof(double) == sizeof(np.float64_t)) cdef int ng = np.round(npart**(1./3)) assert(ng**3 == npart) cdef double **r = malloc(sizeof(double *) * 3) r[0] = pos_x.data r[1] = pos_y.data r[2] = pos_z.data cdef np.ndarray[np.uint8_t, ndim=1] tags = np.zeros(npart, dtype="uint8") cdef void *m = tags.data compute_tags(ng, boxsize, r, npart, m) free(r) return tags ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/origami_tags.c0000644000175100001770000001330214714401662017612 0ustar00runnerdocker// This code was originally written by Bridget Falck and Mark Neyrinck. // They have agreed to release it under the terms of the BSD license. 
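// The routines below tag each particle's morphology (void, wall, filament,
// or halo) by counting how many of the grid axes and diagonals show a
// phase-space fold, i.e. a neighbor that has crossed the particle, following
// Falck, Neyrinck & Szalay (2012).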
#include "origami_tags.h" int isneg(int h) { return (int)(h < 0); } int par(int i, int j, int k, int ng) { return i + (j + k*ng)*ng; } int compute_tags(int ng, double boxsize, double **r, int np, unsigned char *m) { /* Note that the particles must be fed in according to the order specified in * the README file */ double xmin,xmax,ymin,ymax,zmin,zmax; double negb2, b2; int ng4, h, i,i2, x,y,z,nhalo,nhalo0,nhalo1,nhalo2,nhaloany; unsigned char *m0,*m1,*m2, mn,m0n,m1n,m2n; /*Morphology tag */ double dx,d1,d2; b2 = boxsize/2.; negb2 = -boxsize/2.; ng4=ng/4; /* Boxsize should be the range in r, yielding a range 0-1 */ printf("%d particles\n",np);fflush(stdout); xmin = BF; xmax = -BF; ymin = BF; ymax = -BF; zmin = BF; zmax = -BF; m0 = (unsigned char *)malloc(np*sizeof(unsigned char)); /* for the diagonals */ m1 = (unsigned char *)malloc(np*sizeof(unsigned char)); m2 = (unsigned char *)malloc(np*sizeof(unsigned char)); for (i=0; ixmax) xmax = r[0][i]; if (r[1][i]ymax) ymax = r[1][i]; if (r[2][i]zmax) zmax = r[2][i]; m[i] = 1; m0[i] = 1; m1[i] = 1; m2[i] = 1; } if (m==NULL) { printf("Morphology array cannot be allocated.\n"); return 1; } // printf("np: %d, x: %f,%f; y: %f,%f; z: %f,%f\n",np,xmin,xmax, ymin,ymax, zmin,zmax); fflush(stdout); printf("Calculating ORIGAMI morphology.\n"); for (x=0; x b2) dx -= boxsize; if (dx < 0.) { /*printf("x:%d %d %d %d %f\n",x,y,z,h,dx);*/ if (m[i] % 2 > 0) { m[i] *= 2; } if (m[i2] % 2 > 0){ m[i2] *= 2; } break; } } for (h=1; h b2) dx -= boxsize; if (dx < 0.) { /*printf("y:%d %d %d %d %f\n",x,y,z,h,dx);*/ if (m[i] % 3 > 0) { m[i] *= 3; } if (m[i2] % 3 > 0){ m[i2] *= 3; } break; } } for (h=1; h b2) dx -= boxsize; if (dx < 0.) { /*printf("z:%d %d %d %d %f\n",x,y,z,h,dx);*/ if (m[i] % 5 > 0) { m[i] *= 5; } if (m[i2] % 5 > 0){ m[i2] *= 5; } break; } } // Now do diagonal directions for (h=1; h b2) d1 -= boxsize; if (d2 < negb2) d2 += boxsize; if (d2 > b2) d2 -= boxsize; if ((d1 + d2)*h < 0.) { m0[i] *= 2; break; } } for (h=1; h b2) d1 -= boxsize; if (d2 < negb2) d2 += boxsize; if (d2 > b2) d2 -= boxsize; if ((d1 - d2)*h < 0.) { m0[i] *= 3; break; } } // y for (h=1; h b2) d1 -= boxsize; if (d2 < negb2) d2 += boxsize; if (d2 > b2) d2 -= boxsize; if ((d1 + d2)*h < 0.) { m1[i] *= 2; break; } } for (h=1; h b2) d1 -= boxsize; if (d2 < negb2) d2 += boxsize; if (d2 > b2) d2 -= boxsize; if ((d1 - d2)*h < 0.) { m1[i] *= 3; break; } } // z for (h=1; h b2) d1 -= boxsize; if (d2 < negb2) d2 += boxsize; if (d2 > b2) d2 -= boxsize; if ((d1 + d2)*h < 0.) { m2[i] *=2; break; } } for (h=1; h b2) d1 -= boxsize; if (d2 < negb2) d2 += boxsize; if (d2 > b2) d2 -= boxsize; if ((d1 - d2)*h < 0.) { m2[i] *= 3; break; } } } } } nhalo = 0; nhalo0 = 0; nhalo1 = 0; nhalo2 = 0; nhaloany = 0; for (i=0;i #include #define BF 1e30 #define max(A,B) (((A)>(B)) ? (A):(B)) #define goodmod(A,B) (((A) >= (B)) ? (A-B):(((A) < 0) ? 
(A+B):(A))) int isneg(int h); int par(int i, int j, int k, int ng); int compute_tags(int ng, double boxsize, double **r, int npart, unsigned char *m); #endif // __ORIGAMI_TAGS_H__ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/particle_kdtree_tools.pxd0000644000175100001770000000101414714401662022074 0ustar00runnerdockercimport cython cimport numpy as np from yt.utilities.lib.bounded_priority_queue cimport BoundedPriorityQueue from yt.utilities.lib.cykdtree.kdtree cimport KDTree, uint64_t cdef struct axes_range: int start int stop int step cdef int set_axes_range(axes_range *axes, int skipaxis) cdef int find_neighbors(np.float64_t * pos, np.float64_t[:, ::1] tree_positions, BoundedPriorityQueue queue, KDTree * c_tree, uint64_t skipidx, axes_range * axes) except -1 nogil ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/particle_kdtree_tools.pyx0000644000175100001770000003063114714401662022130 0ustar00runnerdocker# distutils: language = c++ """ Cython tools for working with the PyKDTree particle KDTree. """ import numpy as np cimport cython cimport numpy as np from cpython.exc cimport PyErr_CheckSignals from libc.math cimport sqrt from yt.utilities.lib.cykdtree.kdtree cimport KDTree, Node, PyKDTree, uint32_t, uint64_t from yt.funcs import get_pbar from yt.geometry.particle_deposit cimport get_kernel_func, kernel_func from yt.utilities.lib.bounded_priority_queue cimport BoundedPriorityQueue, NeighborList cdef int CHUNKSIZE = 4096 # This structure allows the nearest neighbor finding to consider a subset of # spatial dimensions, i.e the spatial separation in the x and z coordinates # could be consider by using set_axes_range(axes, 1), this would cause the while # loops to skip the y dimensions, without the performance hit of an if statement cdef struct axes_range: int start int stop int step # skipaxis: x=0, y=1, z=2 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int set_axes_range(axes_range *axes, int skipaxis): axes.start = 0 axes.stop = 3 axes.step = 1 if skipaxis == 0: axes.start = 1 if skipaxis == 1: axes.step = 2 if skipaxis == 2: axes.stop = 2 return 0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def generate_smoothing_length(np.float64_t[:, ::1] tree_positions, PyKDTree kdtree, int n_neighbors): """Calculate array of distances to the nth nearest neighbor Parameters ---------- tree_positions: arrays of floats with shape (n_particles, 3) The positions of particles in kdtree sorted order. Currently assumed to be 3D positions. 
kdtree: A PyKDTree instance A kdtree to do nearest neighbors searches with n_neighbors: The neighbor number to calculate the distance to Returns ------- smoothing_lengths: arrays of floats with shape (n_particles, ) The calculated smoothing lengths """ cdef int i cdef KDTree * c_tree = kdtree._tree cdef int n_particles = tree_positions.shape[0] cdef np.float64_t * pos cdef np.float64_t[:] smoothing_length = np.empty(n_particles) cdef BoundedPriorityQueue queue = BoundedPriorityQueue(n_neighbors) # We are using all spatial dimensions cdef axes_range axes set_axes_range(&axes, -1) pbar = get_pbar("Generate smoothing length", n_particles) with nogil: for i in range(n_particles): # Reset queue to "empty" state, doing it this way avoids # needing to reallocate memory queue.size = 0 if i % CHUNKSIZE == 0: with gil: pbar.update(i-1) PyErr_CheckSignals() pos = &(tree_positions[i, 0]) find_neighbors(pos, tree_positions, queue, c_tree, i, &axes) smoothing_length[i] = sqrt(queue.heap_ptr[0]) pbar.update(n_particles-1) pbar.finish() return np.asarray(smoothing_length) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def estimate_density(np.float64_t[:, ::1] tree_positions, np.float64_t[:] mass, np.float64_t[:] smoothing_length, PyKDTree kdtree, kernel_name="cubic"): """Estimate density using SPH gather method. Parameters ---------- tree_positions: array of floats with shape (n_particles, 3) The positions of particles in kdtree sorted order. Currently assumed to be 3D positions. mass: array of floats with shape (n_particles) The masses of particles in kdtree sorted order. smoothing_length: array of floats with shape (n_particles) The smoothing lengths of particles in kdtree sorted order. kdtree: A PyKDTree instance A kdtree to do nearest neighbors searches with. kernel_name: str The name of the kernel function to use in density estimation. Returns ------- density: array of floats with shape (n_particles) The calculated density. """ cdef int i, j, k cdef KDTree * c_tree = kdtree._tree cdef int n_particles = tree_positions.shape[0] cdef np.float64_t h_i2, ih_i2, q_ij cdef np.float64_t * pos cdef np.float64_t[:] density = np.empty(n_particles) cdef kernel_func kernel = get_kernel_func(kernel_name) cdef NeighborList nblist = NeighborList() # We are using all spatial dimensions cdef axes_range axes set_axes_range(&axes, -1) pbar = get_pbar("Estimating density", n_particles) with nogil: for i in range(n_particles): # Reset list to "empty" state, doing it this way avoids # needing to reallocate memory nblist.size = 0 if i % CHUNKSIZE == 0: with gil: pbar.update(i - 1) PyErr_CheckSignals() pos = &(tree_positions[i, 0]) h_i2 = smoothing_length[i] ** 2 find_neighbors_ball(pos, h_i2, tree_positions, nblist, c_tree, i, &axes) ih_i2 = 1.0 / h_i2 # See eq. 
10 of Price 2012 density[i] = mass[i] * kernel(0) for k in range(nblist.size): j = nblist.pids[k] q_ij = sqrt(nblist.data[k] * ih_i2) density[i] += mass[j] * kernel(q_ij) pbar.update(n_particles - 1) pbar.finish() return np.asarray(density) @cython.boundscheck(False) @cython.wraparound(False) cdef int find_neighbors(np.float64_t * pos, np.float64_t[:, ::1] tree_positions, BoundedPriorityQueue queue, KDTree * c_tree, uint64_t skipidx, axes_range * axes) except -1 nogil: cdef Node* leafnode # Make an initial guess based on the closest node leafnode = c_tree.search(&pos[0]) process_node_points(leafnode, queue, tree_positions, pos, skipidx, axes) # Traverse the rest of the kdtree to finish the neighbor list find_knn(c_tree.root, queue, tree_positions, pos, leafnode.leafid, skipidx, axes) return 0 @cython.boundscheck(False) @cython.wraparound(False) cdef int find_knn(Node* node, BoundedPriorityQueue queue, np.float64_t[:, ::1] tree_positions, np.float64_t* pos, uint32_t skipleaf, uint64_t skipidx, axes_range * axes, ) except -1 nogil: # if we aren't a leaf then we keep traversing until we find a leaf, else we # we actually begin to check the leaf if not node.is_leaf: if not cull_node(node.less, pos, queue, skipleaf, axes): find_knn(node.less, queue, tree_positions, pos, skipleaf, skipidx, axes) if not cull_node(node.greater, pos, queue, skipleaf, axes): find_knn(node.greater, queue, tree_positions, pos, skipleaf, skipidx, axes) else: if not cull_node(node, pos, queue, skipleaf, axes): process_node_points(node, queue, tree_positions, pos, skipidx, axes) return 0 @cython.boundscheck(False) @cython.wraparound(False) cdef inline int cull_node(Node* node, np.float64_t* pos, BoundedPriorityQueue queue, uint32_t skipleaf, axes_range * axes, ) except -1 nogil: cdef int k cdef np.float64_t v cdef np.float64_t tpos, ndist = 0 if node.leafid == skipleaf: return True k = axes.start while k < axes.stop: v = pos[k] if v < node.left_edge[k]: tpos = node.left_edge[k] - v elif v > node.right_edge[k]: tpos = v - node.right_edge[k] else: tpos = 0 ndist += tpos*tpos k += axes.step return (ndist > queue.heap[0] and queue.size == queue.max_elements) @cython.boundscheck(False) @cython.wraparound(False) cdef inline int process_node_points(Node* node, BoundedPriorityQueue queue, np.float64_t[:, ::1] positions, np.float64_t* pos, int skipidx, axes_range * axes, ) except -1 nogil: cdef uint64_t i, k cdef np.float64_t tpos, sq_dist for i in range(node.left_idx, node.left_idx + node.children): if i == skipidx: continue sq_dist = 0.0 k = axes.start while k < axes.stop: tpos = positions[i, k] - pos[k] sq_dist += tpos*tpos k += axes.step queue.add_pid(sq_dist, i) return 0 @cython.boundscheck(False) @cython.wraparound(False) cdef int find_neighbors_ball(np.float64_t * pos, np.float64_t r2, np.float64_t[:, ::1] tree_positions, NeighborList nblist, KDTree * c_tree, uint64_t skipidx, axes_range * axes ) except -1 nogil: """Find neighbors within a ball.""" cdef Node* leafnode # Make an initial guess based on the closest node leafnode = c_tree.search(&pos[0]) process_node_points_ball(leafnode, nblist, tree_positions, pos, r2, skipidx, axes) # Traverse the rest of the kdtree to finish the neighbor list find_ball(c_tree.root, nblist, tree_positions, pos, r2, leafnode.leafid, skipidx, axes) return 0 @cython.boundscheck(False) @cython.wraparound(False) cdef int find_ball(Node* node, NeighborList nblist, np.float64_t[:, ::1] tree_positions, np.float64_t* pos, np.float64_t r2, uint32_t skipleaf, uint64_t skipidx, axes_range * axes, ) 
except -1 nogil: """Traverse the k-d tree to process leaf nodes.""" if not node.is_leaf: if not cull_node_ball(node.less, pos, r2, skipleaf, axes): find_ball(node.less, nblist, tree_positions, pos, r2, skipleaf, skipidx, axes) if not cull_node_ball(node.greater, pos, r2, skipleaf, axes): find_ball(node.greater, nblist, tree_positions, pos, r2, skipleaf, skipidx, axes) else: if not cull_node_ball(node, pos, r2, skipleaf, axes): process_node_points_ball(node, nblist, tree_positions, pos, r2, skipidx, axes) return 0 @cython.boundscheck(False) @cython.wraparound(False) cdef inline int cull_node_ball(Node* node, np.float64_t* pos, np.float64_t r2, uint32_t skipleaf, axes_range * axes, ) except -1 nogil: """Check if the node does not intersect with the ball at all.""" cdef int k cdef np.float64_t v cdef np.float64_t tpos, ndist = 0 if node.leafid == skipleaf: return True k = axes.start while k < axes.stop: v = pos[k] if v < node.left_edge[k]: tpos = node.left_edge[k] - v elif v > node.right_edge[k]: tpos = v - node.right_edge[k] else: tpos = 0 ndist += tpos*tpos k += axes.step return ndist > r2 @cython.boundscheck(False) @cython.wraparound(False) cdef inline int process_node_points_ball(Node* node, NeighborList nblist, np.float64_t[:, ::1] positions, np.float64_t* pos, np.float64_t r2, int skipidx, axes_range * axes, ) except -1 nogil: """Add points from the leaf node within the ball to the neighbor list.""" cdef uint64_t i, k cdef np.float64_t tpos, sq_dist for i in range(node.left_idx, node.left_idx + node.children): if i == skipidx: continue sq_dist = 0.0 k = axes.start while k < axes.stop: tpos = positions[i, k] - pos[k] sq_dist += tpos*tpos k += axes.step if (sq_dist < r2): nblist.add_pid(sq_dist, i) return 0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/particle_mesh_operations.pyx0000644000175100001770000003431414714401662022633 0ustar00runnerdocker # distutils: libraries = STD_LIBS """ Simple integrators for the radiative transfer equation """ cimport cython cimport numpy as np import numpy as np from yt.utilities.lib.fp_utils cimport fclip @cython.boundscheck(False) @cython.wraparound(False) def CICDeposit_3(np.ndarray[np.float64_t, ndim=1] posx, np.ndarray[np.float64_t, ndim=1] posy, np.ndarray[np.float64_t, ndim=1] posz, np.ndarray[np.float64_t, ndim=1] mass, np.int64_t npositions, np.ndarray[np.float64_t, ndim=3] field, np.ndarray[np.float64_t, ndim=1] leftEdge, np.ndarray[np.int32_t, ndim=1] gridDimension, np.float64_t cellSize): cdef int i1, j1, k1, n cdef np.float64_t xpos, ypos, zpos cdef np.float64_t fact, edge0, edge1, edge2 cdef np.float64_t le0, le1, le2 cdef np.float64_t dx, dy, dz, dx2, dy2, dz2 edge0 = ( gridDimension[0]) - 0.5001 edge1 = ( gridDimension[1]) - 0.5001 edge2 = ( gridDimension[2]) - 0.5001 fact = 1.0 / cellSize le0 = leftEdge[0] le1 = leftEdge[1] le2 = leftEdge[2] for n in range(npositions): # Compute the position of the central cell xpos = (posx[n] - le0)*fact ypos = (posy[n] - le1)*fact zpos = (posz[n] - le2)*fact if (xpos < 0.5001) or (xpos > edge0): continue if (ypos < 0.5001) or (ypos > edge1): continue if (zpos < 0.5001) or (zpos > edge2): continue i1 = (xpos + 0.5) j1 = (ypos + 0.5) k1 = (zpos + 0.5) # Compute the weights dx = ( i1) + 0.5 - xpos dy = ( j1) + 0.5 - ypos dz = ( k1) + 0.5 - zpos dx2 = 1.0 - dx dy2 = 1.0 - dy dz2 = 1.0 - dz # Interpolate from field into sumfield field[i1-1,j1-1,k1-1] += mass[n] * dx * dy * dz field[i1 ,j1-1,k1-1] += mass[n] * dx2 * dy * dz 
field[i1-1,j1 ,k1-1] += mass[n] * dx * dy2 * dz field[i1 ,j1 ,k1-1] += mass[n] * dx2 * dy2 * dz field[i1-1,j1-1,k1 ] += mass[n] * dx * dy * dz2 field[i1 ,j1-1,k1 ] += mass[n] * dx2 * dy * dz2 field[i1-1,j1 ,k1 ] += mass[n] * dx * dy2 * dz2 field[i1 ,j1 ,k1 ] += mass[n] * dx2 * dy2 * dz2 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def CICDeposit_2(np.float64_t[:] posx, np.float64_t[:] posy, np.float64_t[:] mass, np.int64_t npositions, np.float64_t[:, :] field, np.uint8_t[:, :] field_mask, np.float64_t[:] x_bin_edges, np.float64_t[:] y_bin_edges): cdef int i1, j1, n cdef np.float64_t xpos, ypos cdef np.float64_t edgex, edgey cdef np.float64_t dx, dy, ddx, ddy, ddx2, ddy2 edgex = ( x_bin_edges.shape[0] - 1) + 0.5001 edgey = ( y_bin_edges.shape[0] - 1) + 0.5001 # We are always dealing with uniformly spaced bins for CiC dx = x_bin_edges[1] - x_bin_edges[0] dy = y_bin_edges[1] - y_bin_edges[0] for n in range(npositions): # Compute the position of the central cell xpos = (posx[n] - x_bin_edges[0])/dx ypos = (posy[n] - y_bin_edges[0])/dy if (xpos < -0.5001) or (xpos > edgex): continue if (ypos < -0.5001) or (ypos > edgey): continue i1 = (xpos + 0.5) j1 = (ypos + 0.5) # Compute the weights ddx = ( i1) + 0.5 - xpos ddy = ( j1) + 0.5 - ypos ddx2 = 1.0 - ddx ddy2 = 1.0 - ddy # Deposit onto field if i1 > 0 and j1 > 0: field[i1-1,j1-1] += mass[n] * ddx * ddy field_mask[i1-1,j1-1] = 1 if j1 > 0 and i1 < field.shape[0]: field[i1 ,j1-1] += mass[n] * ddx2 * ddy field_mask[i1,j1-1] = 1 if i1 > 0 and j1 < field.shape[1]: field[i1-1,j1 ] += mass[n] * ddx * ddy2 field_mask[i1-1,j1] = 1 if i1 < field.shape[0] and j1 < field.shape[1]: field[i1 ,j1 ] += mass[n] * ddx2 * ddy2 field_mask[i1,j1] = 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def NGPDeposit_2(np.float64_t[:] posx, np.float64_t[:] posy, np.float64_t[:] mass, np.int64_t npositions, np.float64_t[:, :] field, np.uint8_t[:, :] field_mask, np.float64_t[:] x_bin_edges, np.float64_t[:] y_bin_edges): cdef int i, j, i1, j1, n cdef np.float64_t xpos, ypos cdef np.float64_t[2] x_endpoints cdef np.float64_t[2] y_endpoints x_endpoints = (x_bin_edges[0], x_bin_edges[x_bin_edges.shape[0] - 1]) y_endpoints = (y_bin_edges[0], y_bin_edges[y_bin_edges.shape[0] - 1]) for n in range(npositions): xpos = posx[n] ypos = posy[n] if (xpos < x_endpoints[0]) or (xpos > x_endpoints[1]): continue if (ypos < y_endpoints[0]) or (ypos > y_endpoints[1]): continue for i in range(x_bin_edges.shape[0]): if (xpos >= x_bin_edges[i]) and (xpos < x_bin_edges[i+1]): i1 = i break for j in range(y_bin_edges.shape[0]): if (ypos >= y_bin_edges[j]) and (ypos < y_bin_edges[j+1]): j1 = j break # Deposit onto field field[i1,j1] += mass[n] field_mask[i1,j1] = 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def sample_field_at_positions(np.ndarray[np.float64_t, ndim=3] arr, np.ndarray[np.float64_t, ndim=1] left_edge, np.ndarray[np.float64_t, ndim=1] right_edge, np.ndarray[np.float64_t, ndim=1] pos_x, np.ndarray[np.float64_t, ndim=1] pos_y, np.ndarray[np.float64_t, ndim=1] pos_z): cdef np.float64_t idds[3] cdef int dims[3] cdef int ind[3] cdef int i, npart npart = pos_x.shape[0] cdef np.ndarray[np.float64_t, ndim=1] sample sample = np.zeros(npart, dtype='float64') for i in range(3): dims[i] = arr.shape[i] idds[i] = ( dims[i]) / (right_edge[i] - left_edge[i]) for i in range(npart): if not ((left_edge[0] <= pos_x[i] <= right_edge[0]) and (left_edge[1] <= pos_y[i] <= right_edge[1]) and (left_edge[2] <= pos_z[i] 
<= right_edge[2])): continue ind[0] = ((pos_x[i] - left_edge[0]) * idds[0]) ind[1] = ((pos_y[i] - left_edge[1]) * idds[1]) ind[2] = ((pos_z[i] - left_edge[2]) * idds[2]) sample[i] = arr[ind[0], ind[1], ind[2]] return sample @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def CICSample_3(np.ndarray[np.float64_t, ndim=1] posx, np.ndarray[np.float64_t, ndim=1] posy, np.ndarray[np.float64_t, ndim=1] posz, np.ndarray[np.float64_t, ndim=1] sample, np.int64_t npositions, np.ndarray[np.float64_t, ndim=3] field, np.ndarray[np.float64_t, ndim=1] leftEdge, np.ndarray[np.int32_t, ndim=1] gridDimension, np.float64_t cellSize): cdef int i1, j1, k1, n cdef np.float64_t xpos, ypos, zpos cdef np.float64_t fact, edge0, edge1, edge2 cdef np.float64_t le0, le1, le2 cdef np.float64_t dx, dy, dz, dx2, dy2, dz2 edge0 = ( gridDimension[0]) - 0.5001 edge1 = ( gridDimension[1]) - 0.5001 edge2 = ( gridDimension[2]) - 0.5001 fact = 1.0 / cellSize le0 = leftEdge[0] le1 = leftEdge[1] le2 = leftEdge[2] for n in range(npositions): # Compute the position of the central cell xpos = (posx[n]-le0)*fact ypos = (posy[n]-le1)*fact zpos = (posz[n]-le2)*fact if (xpos < -1 or ypos < -1 or zpos < -1 or xpos >= edge0+1.5001 or ypos >= edge1+1.5001 or zpos >= edge2+1.5001): continue xpos = fclip(xpos, 0.5001, edge0) ypos = fclip(ypos, 0.5001, edge1) zpos = fclip(zpos, 0.5001, edge2) i1 = (xpos + 0.5) j1 = (ypos + 0.5) k1 = (zpos + 0.5) # Compute the weights dx = ( i1) + 0.5 - xpos dy = ( j1) + 0.5 - ypos dz = ( k1) + 0.5 - zpos dx2 = 1.0 - dx dy2 = 1.0 - dy dz2 = 1.0 - dz # Interpolate from field onto the particle sample[n] = (field[i1-1,j1-1,k1-1] * dx * dy * dz + field[i1 ,j1-1,k1-1] * dx2 * dy * dz + field[i1-1,j1 ,k1-1] * dx * dy2 * dz + field[i1 ,j1 ,k1-1] * dx2 * dy2 * dz + field[i1-1,j1-1,k1 ] * dx * dy * dz2 + field[i1 ,j1-1,k1 ] * dx2 * dy * dz2 + field[i1-1,j1 ,k1 ] * dx * dy2 * dz2 + field[i1 ,j1 ,k1 ] * dx2 * dy2 * dz2) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def assign_particles_to_cells(np.ndarray[np.int32_t, ndim=1] levels, #for cells np.ndarray[np.float32_t, ndim=2] left_edges, #many cells np.ndarray[np.float32_t, ndim=2] right_edges, np.ndarray[np.float32_t, ndim=1] pos_x, #particle np.ndarray[np.float32_t, ndim=1] pos_y, np.ndarray[np.float32_t, ndim=1] pos_z): #for every cell, assign the particles belonging to it, #skipping previously assigned particles cdef long level_max = np.max(levels) cdef long i,j,level cdef long npart = pos_x.shape[0] cdef long ncells = left_edges.shape[0] cdef np.ndarray[np.int32_t, ndim=1] assign = np.zeros(npart,dtype='int32')-1 for level in range(level_max,0,-1): #start with the finest level for i in range(ncells): #go through every cell on the finest level first if not levels[i] == level: continue for j in range(npart): #iterate over all particles, skip if assigned if assign[j]>-1: continue if (left_edges[i,0] <= pos_x[j] <= right_edges[i,0]): if (left_edges[i,1] <= pos_y[j] <= right_edges[i,1]): if (left_edges[i,2] <= pos_z[j] <= right_edges[i,2]): assign[j]=i return assign @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def assign_particles_to_cell_lists(np.ndarray[np.int32_t, ndim=1] levels, #for cells np.ndarray[np.int32_t,ndim=1] assign, np.int64_t level_max, np.ndarray[np.float32_t, ndim=2] left_edges, #many cells np.ndarray[np.float32_t, ndim=2] right_edges, np.ndarray[np.float32_t, ndim=1] pos_x, #particle np.ndarray[np.float32_t, ndim=1] pos_y, np.ndarray[np.float32_t, ndim=1] pos_z): 
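#like assign_particles_to_cells above, but additionally records, for every
#cell, the list of particle indices that were assigned to it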
    # For every cell, assign the particles belonging to it,
    # skipping previously assigned particles.
    # TODO: instead of iterating over every particle, could use a kdtree
    cdef long i, j, level
    cdef long npart = pos_x.shape[0]
    cdef long ncells = left_edges.shape[0]
    index_lists = []
    for level in range(level_max, -1, -1):
        # start with the finest level
        for i in range(ncells):
            # go through every cell on the finest level first
            if not levels[i] == level:
                continue
            index_list = []
            for j in range(npart):
                # iterate over all particles, skip if already assigned
                if assign[j] > -1:
                    continue
                if (left_edges[i, 0] <= pos_x[j] <= right_edges[i, 0]):
                    if (left_edges[i, 1] <= pos_y[j] <= right_edges[i, 1]):
                        if (left_edges[i, 2] <= pos_z[j] <= right_edges[i, 2]):
                            assign[j] = i
                            index_list.append(j)
            index_lists.append(index_list)
    return assign, index_lists

@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True)
def recursive_particle_assignment(grids, grid,
                                  np.ndarray[np.float32_t, ndim=2] left_edges,  # many cells
                                  np.ndarray[np.float32_t, ndim=2] right_edges,
                                  np.ndarray[np.float32_t, ndim=1] pos_x,       # particle
                                  np.ndarray[np.float32_t, ndim=1] pos_y,
                                  np.ndarray[np.float32_t, ndim=1] pos_z):
    # Start on level zero and grid particles onto every mesh. Every particle
    # we are fed can be assumed to exist on our grid. We must fill in the
    # grid_particle_count array and particle_indices for every grid.
    cdef long i, j
    cdef long npart = pos_x.shape[0]
    cdef np.ndarray[np.int32_t, ndim=1] assigned = np.zeros(npart, dtype='int32')
    cdef np.ndarray[np.int32_t, ndim=1] never_assigned = np.ones(npart, dtype='int32')
    for i in np.unique(grid.child_index_mask):
        if i == -1:
            continue
        # assigned to this subgrid
        assigned = np.zeros(npart, dtype='int32')
        for j in range(npart):
            if (left_edges[i, 0] <= pos_x[j] <= right_edges[i, 0]):
                if (left_edges[i, 1] <= pos_y[j] <= right_edges[i, 1]):
                    if (left_edges[i, 2] <= pos_z[j] <= right_edges[i, 2]):
                        assigned[j] = 1
                        never_assigned[j] = 0
        if np.sum(assigned) > 0:
            recursive_particle_assignment(grids, grid, left_edges, right_edges,
                                          pos_x[assigned], pos_y[assigned], pos_z[assigned])
    # now we have assigned particles to other subgrids; we are left with
    # particles on our grid

yt-4.4.0/yt/utilities/lib/partitioned_grid.pxd

"""
Definitions for the partitioned grid
"""

import numpy as np

cimport cython
cimport numpy as np

from .volume_container cimport VolumeContainer


cdef class PartitionedGrid:
    cdef public object my_data
    cdef public object source_mask
    cdef public object LeftEdge
    cdef public object RightEdge
    cdef public int parent_grid_id
    cdef VolumeContainer *container
    cdef np.float64_t star_er
    cdef np.float64_t star_sigma_num
    cdef np.float64_t star_coeff
    cdef void get_vector_field(self,
                               np.float64_t pos[3],
                               np.float64_t *vel,
                               np.float64_t *vel_mag)

yt-4.4.0/yt/utilities/lib/partitioned_grid.pyx

# distutils: include_dirs = LIB_DIR
# distutils: libraries = STD_LIBS FIXED_INTERP
# distutils: language = c++
# distutils: extra_compile_args = CPP14_FLAG
# distutils: extra_link_args = CPP14_FLAG
"""
Image sampler definitions
"""

import numpy as np

cimport cython
cimport numpy as np
from libc.stdlib cimport free, malloc

from .fixed_interpolator cimport offset_interpolate
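# Rough usage sketch (values here are illustrative, not taken from this
# file): a PartitionedGrid wraps one brick of vertex-centered field data
# plus a mask, for use by the volume renderer and streamline integrator.
#
#     import numpy as np
#     data = [np.random.random((9, 9, 9))]     # one field on an 8^3 brick
#     mask = np.ones((8, 8, 8), dtype="uint8")
#     LE, RE = np.zeros(3), np.ones(3)
#     dims = np.array([8, 8, 8], dtype="int64")
#     pg = PartitionedGrid(0, data, mask, LE, RE, dims)
#
# integrate_streamline() below then advances a position through the brick
# with a classic fourth-order Runge-Kutta step built on get_vector_field().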
cdef class PartitionedGrid: @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def __cinit__(self, int parent_grid_id, data, mask, np.ndarray[np.float64_t, ndim=1] left_edge, np.ndarray[np.float64_t, ndim=1] right_edge, np.ndarray[np.int64_t, ndim=1] dims, int n_fields = -1): # The data is likely brought in via a slice, so we copy it cdef np.ndarray[np.float64_t, ndim=3] tdata cdef np.ndarray[np.uint8_t, ndim=3] mask_data self.container = NULL self.parent_grid_id = parent_grid_id self.LeftEdge = left_edge self.RightEdge = right_edge self.container = \ malloc(sizeof(VolumeContainer)) cdef VolumeContainer *c = self.container # convenience if n_fields == -1: n_fields = len(data) cdef int n_data = len(data) c.n_fields = n_fields for i in range(3): c.left_edge[i] = left_edge[i] c.right_edge[i] = right_edge[i] c.dims[i] = dims[i] c.dds[i] = (c.right_edge[i] - c.left_edge[i])/dims[i] c.idds[i] = 1.0/c.dds[i] self.my_data = data self.source_mask = mask mask_data = mask c.data = malloc(sizeof(np.float64_t*) * n_fields) for i in range(n_data): tdata = data[i] c.data[i] = tdata.data c.mask = mask_data.data def __dealloc__(self): # The data fields are not owned by the container, they are owned by us! # So we don't need to deallocate them. if self.container == NULL: return if self.container.data != NULL: free(self.container.data) free(self.container) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def integrate_streamline(self, pos, np.float64_t h, mag): cdef np.float64_t cmag[1] cdef np.float64_t k1[3] cdef np.float64_t k2[3] cdef np.float64_t k3[3] cdef np.float64_t k4[3] cdef np.float64_t newpos[3] cdef np.float64_t oldpos[3] for i in range(3): newpos[i] = oldpos[i] = pos[i] self.get_vector_field(newpos, k1, cmag) for i in range(3): newpos[i] = oldpos[i] + 0.5*k1[i]*h if not (self.LeftEdge[0] < newpos[0] and newpos[0] < self.RightEdge[0] and \ self.LeftEdge[1] < newpos[1] and newpos[1] < self.RightEdge[1] and \ self.LeftEdge[2] < newpos[2] and newpos[2] < self.RightEdge[2]): if mag is not None: mag[0] = cmag[0] for i in range(3): pos[i] = newpos[i] return self.get_vector_field(newpos, k2, cmag) for i in range(3): newpos[i] = oldpos[i] + 0.5*k2[i]*h if not (self.LeftEdge[0] <= newpos[0] and newpos[0] <= self.RightEdge[0] and \ self.LeftEdge[1] <= newpos[1] and newpos[1] <= self.RightEdge[1] and \ self.LeftEdge[2] <= newpos[2] and newpos[2] <= self.RightEdge[2]): if mag is not None: mag[0] = cmag[0] for i in range(3): pos[i] = newpos[i] return self.get_vector_field(newpos, k3, cmag) for i in range(3): newpos[i] = oldpos[i] + k3[i]*h if not (self.LeftEdge[0] <= newpos[0] and newpos[0] <= self.RightEdge[0] and \ self.LeftEdge[1] <= newpos[1] and newpos[1] <= self.RightEdge[1] and \ self.LeftEdge[2] <= newpos[2] and newpos[2] <= self.RightEdge[2]): if mag is not None: mag[0] = cmag[0] for i in range(3): pos[i] = newpos[i] return self.get_vector_field(newpos, k4, cmag) for i in range(3): pos[i] = oldpos[i] + h*(k1[i]/6.0 + k2[i]/3.0 + k3[i]/3.0 + k4[i]/6.0) if mag is not None: for i in range(3): newpos[i] = pos[i] self.get_vector_field(newpos, k4, cmag) mag[0] = cmag[0] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void get_vector_field(self, np.float64_t pos[3], np.float64_t *vel, np.float64_t *vel_mag): cdef np.float64_t dp[3] cdef int ci[3] cdef VolumeContainer *c = self.container # convenience for i in range(3): ci[i] = (int)((pos[i]-self.LeftEdge[i])/c.dds[i]) dp[i] = (pos[i] - ci[i]*c.dds[i] - 
self.LeftEdge[i])/c.dds[i] cdef int offset = ci[0] * (c.dims[1] + 1) * (c.dims[2] + 1) \ + ci[1] * (c.dims[2] + 1) + ci[2] vel_mag[0] = 0.0 for i in range(3): vel[i] = offset_interpolate(c.dims, dp, c.data[i] + offset) vel_mag[0] += vel[i]*vel[i] vel_mag[0] = np.sqrt(vel_mag[0]) if vel_mag[0] != 0.0: for i in range(3): vel[i] /= vel_mag[0] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/pixelization_constants.cpp0000644000175100001770000000624714714401662022332 0ustar00runnerdocker/******************************************************************************* *******************************************************************************/ // // Some Cython versions don't like module-level constants, so we'll put them // here. // #include "pixelization_constants.hpp" /* Six faces, two vectors for each, two indices for each vector. The function below unrolls how these are defined. Some info can be found at: http://www.mscsoftware.com/training_videos/patran/Reverb_help/index.html#page/Finite%20Element%20Modeling/elem_lib_topics.16.8.html This is [6][2][2] in shape. Here are the faces and their four edges each: F1 1 2 3 4 F2 5 6 7 8 F3 1 10 5 9 F4 2 11 6 10 F5 3 12 7 11 F6 4 9 8 12 The edges are then defined by: E1 1 2 E2 2 6 E3 6 5 E4 5 1 E5 4 3 E6 3 7 E7 7 8 E8 8 4 E9 1 4 E10 2 3 E11 6 7 E12 5 8 Now we unroll these here ... */ const npy_uint8 hex_face_defs[MAX_NUM_FACES][2][2] = { /* Note that the first of each pair is the shared vertex */ {{1, 0}, {1, 5}}, {{2, 3}, {2, 6}}, {{1, 0}, {1, 2}}, {{5, 1}, {5, 6}}, {{4, 5}, {4, 7}}, {{0, 4}, {0, 3}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}} }; /* http://www.mscsoftware.com/training_videos/patran/Reverb_help/index.html#page/Finite%2520Element%2520Modeling/elem_lib_topics.16.6.html F1 1 2 3 F2 1 5 4 F3 2 6 5 F4 3 4 6 The edges are then defined by: E1 1 2 E2 2 3 E3 3 1 E4 1 4 E5 2 4 E6 3 4 */ const npy_uint8 tetra_face_defs[MAX_NUM_FACES][2][2] = { {{1, 0}, {1, 2}}, {{1, 0}, {1, 3}}, {{2, 1}, {2, 3}}, {{3, 0}, {3, 2}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}} }; /* http://www.mscsoftware.com/training_videos/patran/Reverb_help/index.html#page/Finite%2520Element%2520Modeling/elem_lib_topics.16.7.html F1 1 2 3 * F2 4 5 6 * F3 1 8 4 7 F4 2 9 5 8 F5 3 7 6 9 The edges are then defined by: E1 2 1 E2 1 3 E3 3 2 E4 5 4 E5 4 6 E6 6 5 E7 2 5 E8 1 4 E9 3 6 */ const npy_uint8 wedge_face_defs[MAX_NUM_FACES][2][2] = { {{0, 1}, {0, 2}}, {{3, 4}, {3, 5}}, {{0, 1}, {0, 3}}, {{2, 0}, {2, 5}}, {{1, 2}, {1, 4}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}}, {{255, 255}, {255, 255}} }; ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/pixelization_constants.hpp0000644000175100001770000000132714714401662022331 
/*******************************************************************************
 *******************************************************************************/
//
// Some Cython versions don't like module-level constants, so we'll put them
// here.
//

#include "Python.h"
/* NOTE: the names of the system headers included here were lost; the four
   below are plausible reconstructions, not verified against the original. */
#include <stdio.h>
#include <math.h>
#include <signal.h>
#include <ctype.h>

#include "numpy/ndarrayobject.h"

#define MAX_NUM_FACES 16

#define HEX_IND 0
#define HEX_NF 6
#define TETRA_IND 1
#define TETRA_NF 4
#define WEDGE_IND 2
#define WEDGE_NF 5

extern const npy_uint8 hex_face_defs[MAX_NUM_FACES][2][2];
extern const npy_uint8 tetra_face_defs[MAX_NUM_FACES][2][2];
extern const npy_uint8 wedge_face_defs[MAX_NUM_FACES][2][2];

yt-4.4.0/yt/utilities/lib/pixelization_routines.pyx

# distutils: include_dirs = LIB_DIR
# distutils: extra_compile_args = CPP14_FLAG OMP_ARGS
# distutils: extra_link_args = CPP14_FLAG OMP_ARGS
# distutils: language = c++
# distutils: libraries = STD_LIBS
# distutils: sources = yt/utilities/lib/pixelization_constants.cpp
"""
Pixelization routines
"""

import numpy as np

cimport cython
cimport libc.math as math
cimport numpy as np
from cython.view cimport array as cvarray

from yt.utilities.lib.fp_utils cimport (
    any_float,
    fabs,
    fmax,
    fmin,
    i64max,
    i64min,
    iclip,
)

from yt.utilities.exceptions import YTElementTypeNotRecognized, YTPixelizeError

from cpython.exc cimport PyErr_CheckSignals
from cython.parallel cimport parallel, prange
from libc.stdlib cimport free, malloc

from yt.geometry.particle_deposit cimport get_kernel_func, kernel_func
from yt.utilities.lib.element_mappings cimport (
    ElementSampler,
    P1Sampler1D,
    P1Sampler2D,
    P1Sampler3D,
    Q1Sampler2D,
    Q1Sampler3D,
    Q2Sampler2D,
    S2Sampler3D,
    T2Sampler2D,
    Tet2Sampler3D,
    W1Sampler3D,
)

from .vec3_ops cimport cross, dot, subtract

from yt.funcs import get_pbar

from yt.utilities.lib.bounded_priority_queue cimport BoundedPriorityQueue
from yt.utilities.lib.cykdtree.kdtree cimport KDTree, PyKDTree
from yt.utilities.lib.particle_kdtree_tools cimport (
    axes_range,
    find_neighbors,
    set_axes_range,
)

cdef int TABLE_NVALS=512

cdef extern from "pixelization_constants.hpp":
    enum:
        MAX_NUM_FACES

    int HEX_IND
    int HEX_NF
    np.uint8_t hex_face_defs[MAX_NUM_FACES][2][2]

    int TETRA_IND
    int TETRA_NF
    np.uint8_t tetra_face_defs[MAX_NUM_FACES][2][2]

    int WEDGE_IND
    int WEDGE_NF
    np.uint8_t wedge_face_defs[MAX_NUM_FACES][2][2]

cdef extern from "numpy/npy_math.h":
    double NPY_PI


@cython.cdivision(True)
@cython.boundscheck(False)
@cython.wraparound(False)
def pixelize_cartesian(np.float64_t[:,:] buff,
                       any_float[:] px,
                       any_float[:] py,
                       any_float[:] pdx,
                       any_float[:] pdy,
                       any_float[:] data,
                       bounds,
                       int antialias = 1,
                       period = None,
                       int check_period = 1,
                       np.float64_t line_width = 0.0,
                       *,
                       int return_mask = 0,
                       ):
    cdef np.float64_t x_min, x_max, y_min, y_max
    cdef np.float64_t period_x = 0.0, period_y = 0.0
    cdef np.float64_t width, height, px_dx, px_dy, ipx_dx, ipx_dy
    cdef np.float64_t ld_x, ld_y, cx, cy
    cdef int i, j, p, xi, yi
    cdef int lc, lr, rc, rr
    cdef np.float64_t lypx, rypx, lxpx, rxpx, overlap1, overlap2
    # These are the temp vars we get from the arrays
    cdef np.float64_t oxsp, oysp, xsp, ysp, dxsp, dysp, dsp
    # Some periodicity helpers
    cdef int xiter[2]
    cdef int yiter[2]
    cdef np.float64_t xiterv[2]
    cdef np.float64_t yiterv[2]
    cdef np.ndarray[np.uint8_t, ndim=2] mask_arr = np.zeros_like(buff, dtype="uint8")
    cdef np.uint8_t[:, :] mask = mask_arr
    if
period is not None: period_x = period[0] period_y = period[1] x_min = bounds[0] x_max = bounds[1] y_min = bounds[2] y_max = bounds[3] width = x_max - x_min height = y_max - y_min px_dx = width / ( buff.shape[1]) px_dy = height / ( buff.shape[0]) ipx_dx = 1.0 / px_dx ipx_dy = 1.0 / px_dy if px.shape[0] != py.shape[0] or \ px.shape[0] != pdx.shape[0] or \ px.shape[0] != pdy.shape[0] or \ px.shape[0] != data.shape[0]: raise YTPixelizeError("Arrays are not of correct shape.") xiter[0] = yiter[0] = 0 xiterv[0] = yiterv[0] = 0.0 # Here's a basic outline of what we're going to do here. The xiter and # yiter variables govern whether or not we should check periodicity -- are # we both close enough to the edge that it would be important *and* are we # periodic? # # The other variables are all either pixel positions or data positions. # Pixel positions will vary regularly from the left edge of the window to # the right edge of the window; px_dx and px_dy are the dx (cell width, not # half-width). ipx_dx and ipx_dy are the inverse, for quick math. # # The values in xsp, dxsp, x_min and their y counterparts, are the # data-space coordinates, and are related to the data fed in. We make some # modifications for periodicity. # # Inside the finest loop, we compute the "left column" (lc) and "lower row" # (lr) and then iterate up to "right column" (rc) and "uppeR row" (rr), # depositing into them the data value. Overlap computes the relative # overlap of a data value with a pixel. # # NOTE ON ROWS AND COLUMNS: # # The way that images are plotting in matplotlib is somewhat different # from what most might expect. The first axis of the array plotted is # what varies along the x axis. So for instance, if you supply # origin='lower' and plot the results of an mgrid operation, at a fixed # 'y' value you will see the results of that array held constant in the # first dimension. Here is some example code: # # import matplotlib.pyplot as plt # import numpy as np # x, y = np.mgrid[0:1:100j,0:1:100j] # plt.imshow(x, interpolation='nearest', origin='lower') # plt.imshow(y, interpolation='nearest', origin='lower') # # The values in the image: # lower left: arr[0,0] # lower right: arr[0,-1] # upper left: arr[-1,0] # upper right: arr[-1,-1] # # So what we want here is to fill an array such that we fill: # first axis : y_min .. y_max # second axis: x_min .. x_max with nogil: for p in range(px.shape[0]): xiter[1] = yiter[1] = 999 xiterv[1] = yiterv[1] = 0.0 oxsp = px[p] oysp = py[p] dxsp = pdx[p] dysp = pdy[p] dsp = data[p] if check_period == 1: if (oxsp - dxsp < x_min): xiter[1] = +1 xiterv[1] = period_x elif (oxsp + dxsp > x_max): xiter[1] = -1 xiterv[1] = -period_x if (oysp - dysp < y_min): yiter[1] = +1 yiterv[1] = period_y elif (oysp + dysp > y_max): yiter[1] = -1 yiterv[1] = -period_y overlap1 = overlap2 = 1.0 for xi in range(2): if xiter[xi] == 999: continue xsp = oxsp + xiterv[xi] if (xsp + dxsp < x_min) or (xsp - dxsp > x_max): continue for yi in range(2): if yiter[yi] == 999: continue ysp = oysp + yiterv[yi] if (ysp + dysp < y_min) or (ysp - dysp > y_max): continue lc = fmax(((xsp-dxsp-x_min)*ipx_dx),0) lr = fmax(((ysp-dysp-y_min)*ipx_dy),0) # NOTE: This is a different way of doing it than in the C # routines. In C, we were implicitly casting the # initialization to int, but *not* the conditional, which # was allowed an extra value: # for(j=lc;j fmin(((xsp+dxsp-x_min)*ipx_dx + 1), buff.shape[1]) rr = fmin(((ysp+dysp-y_min)*ipx_dy + 1), buff.shape[0]) # Note that we're iterating here over *y* in the i # direction. 
See the note above about this. for i in range(lr, rr): lypx = px_dy * i + y_min rypx = px_dy * (i+1) + y_min if antialias == 1: overlap2 = ((fmin(rypx, ysp+dysp) - fmax(lypx, (ysp-dysp)))*ipx_dy) if overlap2 < 0.0: continue for j in range(lc, rc): lxpx = px_dx * j + x_min rxpx = px_dx * (j+1) + x_min if line_width > 0: # Here, we figure out if we're within # line_width*px_dx of the cell edge # Midpoint of x: cx = (rxpx+lxpx)*0.5 ld_x = fmin(fabs(cx - (xsp+dxsp)), fabs(cx - (xsp-dxsp))) ld_x *= ipx_dx # Midpoint of y: cy = (rypx+lypx)*0.5 ld_y = fmin(fabs(cy - (ysp+dysp)), fabs(cy - (ysp-dysp))) ld_y *= ipx_dy if ld_x <= line_width or ld_y <= line_width: buff[i,j] = 1.0 mask[i,j] = 1 elif antialias == 1: overlap1 = ((fmin(rxpx, xsp+dxsp) - fmax(lxpx, (xsp-dxsp)))*ipx_dx) if overlap1 < 0.0: continue # This next line is not commented out because # it's an oddity; we actually want to skip # depositing if the overlap is zero, and that's # how it used to work when we were more # conservative about the iteration indices. # This will reduce artifacts if we ever move to # compositing instead of replacing bitmaps. if overlap1 * overlap2 < 1.e-6: continue # make sure pixel value is not a NaN before incrementing it if buff[i,j] != buff[i,j]: buff[i,j] = 0.0 buff[i,j] += (dsp * overlap1) * overlap2 mask[i,j] = 1 else: buff[i,j] = dsp mask[i,j] = 1 if return_mask: return mask_arr.astype("bool") @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def pixelize_cartesian_nodal(np.float64_t[:,:] buff, np.float64_t[:] px, np.float64_t[:] py, np.float64_t[:] pz, np.float64_t[:] pdx, np.float64_t[:] pdy, np.float64_t[:] pdz, np.float64_t[:, :] data, np.float64_t coord, bounds, int antialias = 1, period = None, int check_period = 1, *, int return_mask = 0, ): cdef np.float64_t x_min, x_max, y_min, y_max cdef np.float64_t period_x = 0.0, period_y = 0.0 cdef np.float64_t width, height, px_dx, px_dy, ipx_dx, ipx_dy cdef np.float64_t cx, cy, cz cdef int i, j, p, xi, yi cdef int lc, lr, rc, rr cdef np.float64_t lypx, rypx, lxpx, rxpx, overlap1, overlap2 # These are the temp vars we get from the arrays cdef np.float64_t oxsp, oysp, ozsp cdef np.float64_t xsp, ysp, zsp cdef np.float64_t dxsp, dysp, dzsp # Some periodicity helpers cdef int xiter[2] cdef int yiter[2] cdef int ii, jj, kk, ind cdef np.float64_t xiterv[2] cdef np.float64_t yiterv[2] if period is not None: period_x = period[0] period_y = period[1] x_min = bounds[0] x_max = bounds[1] y_min = bounds[2] y_max = bounds[3] width = x_max - x_min height = y_max - y_min px_dx = width / ( buff.shape[1]) px_dy = height / ( buff.shape[0]) ipx_dx = 1.0 / px_dx ipx_dy = 1.0 / px_dy if px.shape[0] != py.shape[0] or \ px.shape[0] != pz.shape[0] or \ px.shape[0] != pdx.shape[0] or \ px.shape[0] != pdy.shape[0] or \ px.shape[0] != pdz.shape[0] or \ px.shape[0] != data.shape[0]: raise YTPixelizeError("Arrays are not of correct shape.") xiter[0] = yiter[0] = 0 xiterv[0] = yiterv[0] = 0.0 # Here's a basic outline of what we're going to do here. The xiter and # yiter variables govern whether or not we should check periodicity -- are # we both close enough to the edge that it would be important *and* are we # periodic? # # The other variables are all either pixel positions or data positions. # Pixel positions will vary regularly from the left edge of the window to # the right edge of the window; px_dx and px_dy are the dx (cell width, not # half-width). ipx_dx and ipx_dy are the inverse, for quick math. 
# # The values in xsp, dxsp, x_min and their y counterparts, are the # data-space coordinates, and are related to the data fed in. We make some # modifications for periodicity. # # Inside the finest loop, we compute the "left column" (lc) and "lower row" # (lr) and then iterate up to "right column" (rc) and "uppeR row" (rr), # depositing into them the data value. Overlap computes the relative # overlap of a data value with a pixel. # # NOTE ON ROWS AND COLUMNS: # # The way that images are plotting in matplotlib is somewhat different # from what most might expect. The first axis of the array plotted is # what varies along the x axis. So for instance, if you supply # origin='lower' and plot the results of an mgrid operation, at a fixed # 'y' value you will see the results of that array held constant in the # first dimension. Here is some example code: # # import matplotlib.pyplot as plt # import numpy as np # x, y = np.mgrid[0:1:100j,0:1:100j] # plt.imshow(x, interpolation='nearest', origin='lower') # plt.imshow(y, interpolation='nearest', origin='lower') # # The values in the image: # lower left: arr[0,0] # lower right: arr[0,-1] # upper left: arr[-1,0] # upper right: arr[-1,-1] # # So what we want here is to fill an array such that we fill: # first axis : y_min .. y_max # second axis: x_min .. x_max cdef np.ndarray[np.uint8_t, ndim=2] mask_arr = np.zeros_like(buff, dtype="uint8") cdef np.uint8_t[:, :] mask = mask_arr with nogil: for p in range(px.shape[0]): xiter[1] = yiter[1] = 999 xiterv[1] = yiterv[1] = 0.0 oxsp = px[p] oysp = py[p] ozsp = pz[p] dxsp = pdx[p] dysp = pdy[p] dzsp = pdz[p] if check_period == 1: if (oxsp - dxsp < x_min): xiter[1] = +1 xiterv[1] = period_x elif (oxsp + dxsp > x_max): xiter[1] = -1 xiterv[1] = -period_x if (oysp - dysp < y_min): yiter[1] = +1 yiterv[1] = period_y elif (oysp + dysp > y_max): yiter[1] = -1 yiterv[1] = -period_y overlap1 = overlap2 = 1.0 zsp = ozsp for xi in range(2): if xiter[xi] == 999: continue xsp = oxsp + xiterv[xi] if (xsp + dxsp < x_min) or (xsp - dxsp > x_max): continue for yi in range(2): if yiter[yi] == 999: continue ysp = oysp + yiterv[yi] if (ysp + dysp < y_min) or (ysp - dysp > y_max): continue lc = fmax(((xsp-dxsp-x_min)*ipx_dx),0) lr = fmax(((ysp-dysp-y_min)*ipx_dy),0) # NOTE: This is a different way of doing it than in the C # routines. In C, we were implicitly casting the # initialization to int, but *not* the conditional, which # was allowed an extra value: # for(j=lc;j fmin(((xsp+dxsp-x_min)*ipx_dx + 1), buff.shape[1]) rr = fmin(((ysp+dysp-y_min)*ipx_dy + 1), buff.shape[0]) # Note that we're iterating here over *y* in the i # direction. See the note above about this. 
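                    # Each cell carries 8 nodal values, and
                    # ind = 4*ii + 2*jj + kk below picks one of them via the
                    # bit pattern (ii, jj, kk), i.e. the low (0) or high (1)
                    # side of the cell in x, y, z respectively; e.g. the
                    # (high-x, low-y, high-z) node is 4*1 + 2*0 + 1 = 5.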
for i in range(lr, rr): lypx = px_dy * i + y_min rypx = px_dy * (i+1) + y_min for j in range(lc, rc): lxpx = px_dx * j + x_min rxpx = px_dx * (j+1) + x_min cx = (rxpx+lxpx)*0.5 cy = (rypx+lypx)*0.5 cz = coord ii = (cx - xsp + dxsp) jj = (cy - ysp + dysp) kk = (cz - zsp + dzsp) ind = 4*ii + 2*jj + kk buff[i,j] = data[p, ind] mask[i,j] = 1 if return_mask: return mask_arr.astype("bool") @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def pixelize_off_axis_cartesian( np.float64_t[:,:] buff, np.float64_t[:] x, np.float64_t[:] y, np.float64_t[:] z, np.float64_t[:] px, np.float64_t[:] py, np.float64_t[:] pdx, np.float64_t[:] pdy, np.float64_t[:] pdz, np.float64_t[:] center, np.float64_t[:,:] inv_mat, np.int64_t[:] indices, np.float64_t[:] data, bounds, *, int return_mask=0, ): cdef np.float64_t x_min, x_max, y_min, y_max cdef np.float64_t width, height, px_dx, px_dy, ipx_dx, ipx_dy, md cdef int i, j, p, ip cdef int lc, lr, rc, rr # These are the temp vars we get from the arrays cdef np.float64_t xsp, ysp, zsp, dxsp, dysp, dzsp, dsp cdef np.float64_t pxsp, pysp, cxpx, cypx, cx, cy, cz # Some periodicity helpers cdef np.ndarray[np.int64_t, ndim=2] mask x_min = bounds[0] x_max = bounds[1] y_min = bounds[2] y_max = bounds[3] width = x_max - x_min height = y_max - y_min px_dx = width / ( buff.shape[1]) px_dy = height / ( buff.shape[0]) ipx_dx = 1.0 / px_dx ipx_dy = 1.0 / px_dy if px.shape[0] != py.shape[0] or \ px.shape[0] != pdx.shape[0] or \ px.shape[0] != pdy.shape[0] or \ px.shape[0] != pdz.shape[0] or \ px.shape[0] != indices.shape[0] or \ px.shape[0] != data.shape[0]: raise YTPixelizeError("Arrays are not of correct shape.") mask = np.zeros((buff.shape[0], buff.shape[1]), "int64") with nogil: for ip in range(indices.shape[0]): p = indices[ip] xsp = x[p] ysp = y[p] zsp = z[p] pxsp = px[p] pysp = py[p] dxsp = pdx[p] dysp = pdy[p] dzsp = pdz[p] dsp = data[p] # Any point we want to plot is at most this far from the center md = 2.0 * math.sqrt(dxsp*dxsp + dysp*dysp + dzsp*dzsp) if pxsp + md < x_min or \ pxsp - md > x_max or \ pysp + md < y_min or \ pysp - md > y_max: continue lc = fmax(((pxsp - md - x_min)*ipx_dx),0) lr = fmax(((pysp - md - y_min)*ipx_dy),0) rc = fmin(((pxsp + md - x_min)*ipx_dx + 1), buff.shape[1]) rr = fmin(((pysp + md - y_min)*ipx_dy + 1), buff.shape[0]) for i in range(lr, rr): cypx = px_dy * (i + 0.5) + y_min for j in range(lc, rc): cxpx = px_dx * (j + 0.5) + x_min cx = inv_mat[0,0]*cxpx + inv_mat[0,1]*cypx + center[0] cy = inv_mat[1,0]*cxpx + inv_mat[1,1]*cypx + center[1] cz = inv_mat[2,0]*cxpx + inv_mat[2,1]*cypx + center[2] if fabs(xsp - cx) * 0.99 > dxsp or \ fabs(ysp - cy) * 0.99 > dysp or \ fabs(zsp - cz) * 0.99 > dzsp: continue mask[i, j] += 1 # make sure pixel value is not a NaN before incrementing it if buff[i,j] != buff[i,j]: buff[i,j] = 0.0 buff[i, j] += dsp for i in range(buff.shape[0]): for j in range(buff.shape[1]): if mask[i,j] == 0: continue buff[i,j] /= mask[i,j] if return_mask: return mask!=0 @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def pixelize_cylinder(np.float64_t[:,:] buff, np.float64_t[:] radius, np.float64_t[:] dradius, np.float64_t[:] theta, np.float64_t[:] dtheta, np.float64_t[:] field, extents, *, int return_mask=0, ): cdef np.float64_t x, y, dx, dy, r0, theta0 cdef np.float64_t rmin, rmax, tmin, tmax, x0, y0, x1, y1, xp, yp cdef np.float64_t r_i, theta_i, dr_i, dtheta_i cdef np.float64_t r_inc, theta_inc cdef np.float64_t costheta, sintheta cdef int i, i1, pi, pj cdef np.float64_t twoPI = 
2 * NPY_PI cdef int imin, imax imin = np.asarray(radius).argmin() imax = np.asarray(radius).argmax() rmin = radius[imin] - dradius[imin] rmax = radius[imax] + dradius[imax] imin = np.asarray(theta).argmin() imax = np.asarray(theta).argmax() tmin = theta[imin] - dtheta[imin] tmax = theta[imax] + dtheta[imax] cdef np.ndarray[np.uint8_t, ndim=2] mask_arr = np.zeros_like(buff, dtype="uint8") cdef np.uint8_t[:, :] mask = mask_arr x0, x1, y0, y1 = extents dx = (x1 - x0) / buff.shape[0] dy = (y1 - y0) / buff.shape[1] cdef np.float64_t rbounds[2] cdef np.float64_t prbounds[2] cdef np.float64_t ptbounds[2] cdef np.float64_t corners[8] # Find our min and max r corners[0] = x0*x0+y0*y0 corners[1] = x1*x1+y0*y0 corners[2] = x0*x0+y1*y1 corners[3] = x1*x1+y1*y1 corners[4] = x0*x0 corners[5] = x1*x1 corners[6] = y0*y0 corners[7] = y1*y1 rbounds[0] = rbounds[1] = corners[0] for i in range(8): rbounds[0] = fmin(rbounds[0], corners[i]) rbounds[1] = fmax(rbounds[1], corners[i]) rbounds[0] = math.sqrt(rbounds[0]) rbounds[1] = math.sqrt(rbounds[1]) # If we include the origin in either direction, we need to have radius of # zero as our lower bound. if x0 < 0 and x1 > 0: rbounds[0] = 0.0 if y0 < 0 and y1 > 0: rbounds[0] = 0.0 r_inc = 0.5 * fmin(dx, dy) with nogil: for i in range(radius.shape[0]): r0 = radius[i] theta0 = theta[i] dr_i = dradius[i] dtheta_i = dtheta[i] # Skip out early if we're offsides, for zoomed in plots if r0 + dr_i < rbounds[0] or r0 - dr_i > rbounds[1]: continue theta_i = theta0 - dtheta_i theta_inc = r_inc / (r0 + dr_i) while theta_i < theta0 + dtheta_i: r_i = r0 - dr_i costheta = math.cos(theta_i) sintheta = math.sin(theta_i) while r_i < r0 + dr_i: if rmax <= r_i: r_i += r_inc continue y = r_i * costheta x = r_i * sintheta pi = ((x - x0)/dx) pj = ((y - y0)/dy) if pi >= 0 and pi < buff.shape[0] and \ pj >= 0 and pj < buff.shape[1]: # we got a pixel that intersects the grid cell # now check that this pixel doesn't go beyond the data domain xp = x0 + pi*dx yp = y0 + pj*dy corners[0] = xp*xp + yp*yp corners[1] = xp*xp + (yp+dy)**2 corners[2] = (xp+dx)**2 + yp*yp corners[3] = (xp+dx)**2 + (yp+dy)**2 prbounds[0] = prbounds[1] = corners[3] for i1 in range(3): prbounds[0] = fmin(prbounds[0], corners[i1]) prbounds[1] = fmax(prbounds[1], corners[i1]) prbounds[0] = math.sqrt(prbounds[0]) prbounds[1] = math.sqrt(prbounds[1]) corners[0] = math.atan2(xp, yp) corners[1] = math.atan2(xp, yp+dy) corners[2] = math.atan2(xp+dx, yp) corners[3] = math.atan2(xp+dx, yp+dy) ptbounds[0] = ptbounds[1] = corners[3] for i1 in range(3): ptbounds[0] = fmin(ptbounds[0], corners[i1]) ptbounds[1] = fmax(ptbounds[1], corners[i1]) # shift to a [0, 2*PI] interval # note: with fmod, the sign of the returned value # matches the sign of the first argument, so need # to offset by 2pi to ensure a positive result in [0, 2pi] ptbounds[0] = math.fmod(ptbounds[0]+twoPI, twoPI) ptbounds[1] = math.fmod(ptbounds[1]+twoPI, twoPI) if prbounds[0] >= rmin and prbounds[1] <= rmax and \ ptbounds[0] >= tmin and ptbounds[1] <= tmax: buff[pi, pj] = field[i] mask[pi, pj] = 1 r_i += r_inc theta_i += theta_inc if return_mask: return mask_arr.astype("bool") cdef int aitoff_Lambda_btheta_to_xy(np.float64_t Lambda, np.float64_t btheta, np.float64_t *x, np.float64_t *y) except -1: cdef np.float64_t z = math.sqrt(1 + math.cos(btheta) * math.cos(Lambda / 2.0)) x[0] = 2.0 * math.cos(btheta) * math.sin(Lambda / 2.0) / z y[0] = math.sin(btheta) / z return 0 @cython.cdivision(True) @cython.boundscheck(False) @cython.wraparound(False) def 
pixelize_aitoff(np.float64_t[:] azimuth, np.float64_t[:] dazimuth, np.float64_t[:] colatitude, np.float64_t[:] dcolatitude, buff_size, np.float64_t[:] field, bounds, # this is a 4-tuple input_img = None, np.float64_t azimuth_offset = 0.0, np.float64_t colatitude_offset = 0.0, *, int return_mask = 0 ): # http://paulbourke.net/geometry/transformationprojection/ # (Lambda) longitude is -PI to PI (longitude = azimuth - PI) # (btheta) latitude is -PI/2 to PI/2 (latitude = PI/2 - colatitude) # # z^2 = 1 + cos(latitude) cos(longitude/2) # x = cos(latitude) sin(longitude/2) / z # y = sin(latitude) / z cdef np.ndarray[np.float64_t, ndim=2] img cdef int i, j, nf, fi cdef np.float64_t x, y, z, zb cdef np.float64_t dx, dy, xw, yw cdef np.float64_t Lambda0, btheta0, Lambda_p, dLambda_p, btheta_p, dbtheta_p cdef np.float64_t PI = np.pi cdef np.float64_t s2 = math.sqrt(2.0) cdef np.float64_t xmax, ymax, xmin, ymin nf = field.shape[0] if input_img is None: img = np.zeros((buff_size[0], buff_size[1])) img[:] = np.nan else: img = input_img cdef np.ndarray[np.uint8_t, ndim=2] mask_arr = np.ones_like(img, dtype="uint8") cdef np.uint8_t[:, :] mask = mask_arr # Okay, here's our strategy. We compute the bounds in x and y, which will # be a rectangle, and then for each x, y position we check to see if it's # within our Lambda. This will cost *more* computations of the # (x,y)->(Lambda,btheta) calculation, but because we no longer have to search # through the Lambda, btheta arrays, it should be faster. xw = bounds[1] - bounds[0] yw = bounds[3] - bounds[2] dx = xw / (img.shape[0] - 1) dy = yw / (img.shape[1] - 1) x = y = 0 for fi in range(nf): Lambda_p = (azimuth[fi] + azimuth_offset) - PI dLambda_p = dazimuth[fi] btheta_p = PI/2.0 - (colatitude[fi] + colatitude_offset) dbtheta_p = dcolatitude[fi] # Four transformations aitoff_Lambda_btheta_to_xy(Lambda_p - dLambda_p, btheta_p - dbtheta_p, &x, &y) xmin = x xmax = x ymin = y ymax = y aitoff_Lambda_btheta_to_xy(Lambda_p - dLambda_p, btheta_p + dbtheta_p, &x, &y) xmin = fmin(xmin, x) xmax = fmax(xmax, x) ymin = fmin(ymin, y) ymax = fmax(ymax, y) aitoff_Lambda_btheta_to_xy(Lambda_p + dLambda_p, btheta_p - dbtheta_p, &x, &y) xmin = fmin(xmin, x) xmax = fmax(xmax, x) ymin = fmin(ymin, y) ymax = fmax(ymax, y) aitoff_Lambda_btheta_to_xy(Lambda_p + dLambda_p, btheta_p + dbtheta_p, &x, &y) xmin = fmin(xmin, x) xmax = fmax(xmax, x) ymin = fmin(ymin, y) ymax = fmax(ymax, y) # special cases where the projection of the cell isn't # bounded by the rectangle (in image space) that bounds its corners. # Note that performance may take a serious hit here. The overarching algorithm # is optimized for cells with small angular width. if xmin * xmax < 0.0: # on the central meridian aitoff_Lambda_btheta_to_xy(0.0, btheta_p - dbtheta_p, &x, &y) ymin = fmin(ymin, y) ymax = fmax(ymax, y) aitoff_Lambda_btheta_to_xy(0.0, btheta_p + dbtheta_p, &x, &y) ymin = fmin(ymin, y) ymax = fmax(ymax, y) if ymin * ymax < 0.0: # on the equator aitoff_Lambda_btheta_to_xy(Lambda_p - dLambda_p, 0.0, &x, &y) xmin = fmin(xmin, x) xmax = fmax(xmax, x) aitoff_Lambda_btheta_to_xy(Lambda_p + dLambda_p, 0.0, &x, &y) xmin = fmin(xmin, x) xmax = fmax(xmax, x) # Now we have the (projected rectangular) bounds. # Shift into normalized image coords xmin = (xmin - bounds[0]) xmax = (xmax - bounds[0]) ymin = (ymin - bounds[2]) ymax = (ymax - bounds[2]) # Finally, select a rectangular region in image space # that fully contains the projected data point. 
# We'll reject image pixels in that rectangle that are # not actually intersecting the data point as we go. x0 = (xmin / dx) x1 = (xmax / dx) + 1 y0 = (ymin / dy) y1 = (ymax / dy) + 1 for i in range(x0, x1): x = (bounds[0] + i * dx) / 2.0 for j in range(y0, y1): y = (bounds[2] + j * dy) zb = (x*x + y*y - 1.0) if zb > 0: continue z = (1.0 - 0.5*x*x - 0.5*y*y) z = math.sqrt(z) # Longitude Lambda0 = 2.0*math.atan(z*x*s2/(2.0*z*z-1.0)) # Latitude # We shift it into co-latitude btheta0 = math.asin(z*y*s2) # Now we just need to figure out which pixel contributes. # We do not have a fast search. if not (Lambda_p - dLambda_p <= Lambda0 <= Lambda_p + dLambda_p): continue if not (btheta_p - dbtheta_p <= btheta0 <= btheta_p + dbtheta_p): continue img[i, j] = field[fi] mask[i, j] = 1 if return_mask: return img, mask_arr.astype("bool") else: return img # This function accepts a set of vertices (for a polyhedron) that are # assumed to be in order for bottom, then top, in the same clockwise or # counterclockwise direction (i.e., like points 1-8 in Figure 4 of the ExodusII # manual). It will then either *match* or *fill* the results. If it is # matching, it will early terminate with a 0 or final-terminate with a 1 if the # results match. Otherwise, it will fill the signs with -1's and 1's to show # the sign of the dot product of the point with the cross product of the face. cdef int check_face_dot(int nvertices, np.float64_t point[3], np.float64_t **vertices, np.int8_t *signs, int match): # Because of how we are doing this, we do not *care* what the signs are or # how the faces are ordered, we only care if they match between the point # and the centroid. # So, let's compute these vectors. See above where these are written out # for ease of use. cdef np.float64_t vec1[3] cdef np.float64_t vec2[3] cdef np.float64_t cp_vec[3] cdef np.float64_t npoint[3] cdef np.float64_t dp cdef np.uint8_t faces[MAX_NUM_FACES][2][2] cdef np.uint8_t nf if nvertices == 4: faces = tetra_face_defs nf = TETRA_NF elif nvertices == 6: faces = wedge_face_defs nf = WEDGE_NF elif nvertices == 8: faces = hex_face_defs nf = HEX_NF else: return -1 cdef int n, vi1a, vi1b, vi2a, vi2b for n in range(nf): vi1a = faces[n][0][0] vi1b = faces[n][0][1] vi2a = faces[n][1][0] vi2b = faces[n][1][1] # Shared vertex is vi1a and vi2a subtract(vertices[vi1b], vertices[vi1a], vec1) subtract(vertices[vi2b], vertices[vi2a], vec2) subtract(point, vertices[vi1b], npoint) cross(vec1, vec2, cp_vec) dp = dot(cp_vec, npoint) if match == 0: if dp < 0: signs[n] = -1 else: signs[n] = 1 else: if dp <= 0 and signs[n] < 0: continue elif dp >= 0 and signs[n] > 0: continue else: # mismatch! return 0 return 1 def pixelize_element_mesh(np.ndarray[np.float64_t, ndim=2] coords, np.ndarray[np.int64_t, ndim=2] conn, buff_size, np.ndarray[np.float64_t, ndim=2] field, extents, int index_offset = 0, *, return_mask=False, ): cdef np.ndarray[np.float64_t, ndim=3] img img = np.zeros(buff_size, dtype="float64") img[:] = np.nan cdef np.ndarray[np.uint8_t, ndim=3] mask_arr = np.ones_like(img, dtype="uint8") cdef np.uint8_t[:, :, :] mask = mask_arr # Two steps: # 1. Is image point within the mesh bounding box? # 2. Is image point within the mesh element? # Second is more intensive. It will convert the element vertices to the # mapped coordinate system, and check whether the result in in-bounds or not # Note that we have to have a pseudo-3D pixel buffer. One dimension will # always be 1. 
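    # In outline, the loop below does:
    #     for each element:
    #         gather its vertices and field values; compute bounds [LE, RE]
    #         for each pixel centre ppoint overlapping those bounds:
    #             map ppoint into the element's reference coordinates
    #             if the mapped point lies inside the reference element:
    #                 sample the field there and deposit into img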
cdef np.float64_t pLE[3] cdef np.float64_t pRE[3] cdef np.float64_t LE[3] cdef np.float64_t RE[3] cdef int use cdef np.int64_t n, i, pi, pj, pk, ci, cj cdef np.int64_t pstart[3] cdef np.int64_t pend[3] cdef np.float64_t ppoint[3] cdef np.float64_t idds[3] cdef np.float64_t dds[3] cdef np.float64_t *vertices cdef np.float64_t *field_vals cdef int nvertices = conn.shape[1] cdef int ndim = coords.shape[1] cdef int num_field_vals = field.shape[1] cdef double[4] mapped_coord cdef ElementSampler sampler # Pick the right sampler and allocate storage for the mapped coordinate if ndim == 3 and nvertices == 4: sampler = P1Sampler3D() elif ndim == 3 and nvertices == 6: sampler = W1Sampler3D() elif ndim == 3 and nvertices == 8: sampler = Q1Sampler3D() elif ndim == 3 and nvertices == 20: sampler = S2Sampler3D() elif ndim == 2 and nvertices == 3: sampler = P1Sampler2D() elif ndim == 1 and nvertices == 2: sampler = P1Sampler1D() elif ndim == 2 and nvertices == 4: sampler = Q1Sampler2D() elif ndim == 2 and nvertices == 9: sampler = Q2Sampler2D() elif ndim == 2 and nvertices == 6: sampler = T2Sampler2D() elif ndim == 3 and nvertices == 10: sampler = Tet2Sampler3D() else: raise YTElementTypeNotRecognized(ndim, nvertices) # if we are in 2D land, the 1 cell thick dimension had better be 'z' if ndim == 2: if buff_size[2] != 1: raise RuntimeError("Slices of 2D datasets must be " "perpendicular to the 'z' direction.") # allocate temporary storage vertices = malloc(ndim * sizeof(np.float64_t) * nvertices) field_vals = malloc(sizeof(np.float64_t) * num_field_vals) # fill the image bounds and pixel size information here for i in range(ndim): pLE[i] = extents[i][0] pRE[i] = extents[i][1] dds[i] = (pRE[i] - pLE[i])/buff_size[i] if dds[i] == 0.0: idds[i] = 0.0 else: idds[i] = 1.0 / dds[i] with cython.boundscheck(False): for ci in range(conn.shape[0]): # Fill the vertices LE[0] = LE[1] = LE[2] = 1e60 RE[0] = RE[1] = RE[2] = -1e60 for n in range(num_field_vals): field_vals[n] = field[ci, n] for n in range(nvertices): cj = conn[ci, n] - index_offset for i in range(ndim): vertices[ndim*n + i] = coords[cj, i] LE[i] = fmin(LE[i], vertices[ndim*n+i]) RE[i] = fmax(RE[i], vertices[ndim*n+i]) use = 1 for i in range(ndim): if RE[i] < pLE[i] or LE[i] >= pRE[i]: use = 0 break pstart[i] = i64max( ((LE[i] - pLE[i])*idds[i]) - 1, 0) pend[i] = i64min( ((RE[i] - pLE[i])*idds[i]) + 1, img.shape[i]-1) # override for the low-dimensional case if ndim < 3: pstart[2] = 0 pend[2] = 0 if ndim < 2: pstart[1] = 0 pend[1] = 0 if use == 0: continue # Now our bounding box intersects, so we get the extents of our pixel # region which overlaps with the bounding box, and we'll check each # pixel in there. for pi in range(pstart[0], pend[0] + 1): ppoint[0] = (pi + 0.5) * dds[0] + pLE[0] for pj in range(pstart[1], pend[1] + 1): ppoint[1] = (pj + 0.5) * dds[1] + pLE[1] for pk in range(pstart[2], pend[2] + 1): ppoint[2] = (pk + 0.5) * dds[2] + pLE[2] # Now we just need to figure out if our ppoint is within # our set of vertices. 
sampler.map_real_to_unit(mapped_coord, vertices, ppoint) if not sampler.check_inside(mapped_coord): continue if (num_field_vals == 1): img[pi, pj, pk] = field_vals[0] else: img[pi, pj, pk] = sampler.sample_at_unit_point(mapped_coord, field_vals) mask[pi, pj, pk] = 1 free(vertices) free(field_vals) if return_mask: return img, mask_arr.astype("bool") else: return img # used as a cache to avoid repeatedly creating # instances of SPHKernelInterpolationTable kernel_tables = {} cdef class SPHKernelInterpolationTable: cdef public object kernel_name cdef kernel_func kernel cdef np.float64_t[::1] table cdef np.float64_t[::1] q2_vals cdef np.float64_t q2_range, iq2_range def __init__(self, kernel_name): self.kernel_name = kernel_name self.kernel = get_kernel_func(kernel_name) self.populate_table() @cython.initializedcheck(False) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef np.float64_t integrate_q2(self, np.float64_t q2) noexcept nogil: # See equation 30 of the SPLASH paper cdef int i # Our bounds are -sqrt(R*R - q2) and sqrt(R*R-q2) # And our R is always 1; note that our smoothing kernel functions # expect it to run from 0 .. 1, so we multiply the integrand by 2 cdef int N = 200 cdef np.float64_t qz cdef np.float64_t R = 1 cdef np.float64_t R0 = -math.sqrt(R*R-q2) cdef np.float64_t R1 = math.sqrt(R*R-q2) cdef np.float64_t dR = (R1-R0)/N # Set to our bounds cdef np.float64_t integral = 0.0 integral += self.kernel(math.sqrt(R0*R0 + q2)) integral += self.kernel(math.sqrt(R1*R1 + q2)) # We're going to manually conduct a trapezoidal integration for i in range(1, N): qz = R0 + i * dR integral += 2.0*self.kernel(math.sqrt(qz*qz + q2)) integral *= (R1-R0)/(2*N) return integral def populate_table(self): cdef int i self.table = cvarray(format="d", shape=(TABLE_NVALS,), itemsize=sizeof(np.float64_t)) self.q2_vals = cvarray(format="d", shape=(TABLE_NVALS,), itemsize=sizeof(np.float64_t)) # We run from 0 to 1 here over R for i in range(TABLE_NVALS): self.q2_vals[i] = i * 1.0/(TABLE_NVALS-1) self.table[i] = self.integrate_q2(self.q2_vals[i]) self.q2_range = self.q2_vals[TABLE_NVALS-1] - self.q2_vals[0] self.iq2_range = (TABLE_NVALS-1)/self.q2_range @cython.initializedcheck(False) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef inline np.float64_t interpolate(self, np.float64_t q2) noexcept nogil: cdef int index cdef np.float64_t F_interpolate index = ((q2 - self.q2_vals[0])*(self.iq2_range)) if index >= TABLE_NVALS: return 0.0 F_interpolate = self.table[index] + ( (self.table[index+1] - self.table[index]) *(q2 - self.q2_vals[index])*self.iq2_range) return F_interpolate def interpolate_array(self, np.float64_t[:] q2_vals): cdef np.float64_t[:] ret = np.empty(q2_vals.shape[0]) cdef int i for i in range(q2_vals.shape[0]): ret[i] = self.interpolate(q2_vals[i]) return np.array(ret) @cython.initializedcheck(False) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def pixelize_sph_kernel_projection( np.float64_t[:, :] buff, np.uint8_t[:, :] mask, any_float[:] posx, any_float[:] posy, any_float[:] posz, any_float[:] hsml, any_float[:] pmass, any_float[:] pdens, any_float[:] quantity_to_smooth, bounds, kernel_name="cubic", weight_field=None, _check_period = (1, 1, 1), period=None): cdef np.intp_t xsize, ysize cdef np.float64_t x_min, x_max, y_min, y_max, z_min, z_max, prefactor_j cdef np.int64_t xi, yi, x0, x1, y0, y1, xxi, yyi cdef np.float64_t q_ij2, posx_diff, posy_diff, ih_j2 cdef np.float64_t x, y, dx, dy, idx, idy, 
h_j2, px, py, pz cdef np.float64_t period_x = 0, period_y = 0, period_z = 0 cdef int i, j, ii, jj, kk cdef np.float64_t[:] _weight_field cdef int * xiter cdef int * yiter cdef int * ziter cdef np.float64_t * xiterv cdef np.float64_t * yiterv cdef np.float64_t * ziterv cdef np.int8_t[3] check_period if weight_field is not None: _weight_field = weight_field if period is not None: period_x = period[0] period_y = period[1] period_z = period[2] for i in range(3): check_period[i] = np.int8(_check_period[i]) # we find the x and y range over which we have pixels and we find how many # pixels we have in each dimension xsize, ysize = buff.shape[0], buff.shape[1] x_min = bounds[0] x_max = bounds[1] y_min = bounds[2] y_max = bounds[3] z_min = bounds[4] z_max = bounds[5] dx = (x_max - x_min) / xsize dy = (y_max - y_min) / ysize idx = 1.0/dx idy = 1.0/dy if kernel_name not in kernel_tables: kernel_tables[kernel_name] = SPHKernelInterpolationTable(kernel_name) cdef SPHKernelInterpolationTable itab = kernel_tables[kernel_name] with nogil, parallel(): # loop through every particle # NOTE: this loop can be quite time consuming. However it is easily # parallelizable in multiple ways, such as: # 1) use multiple cores to process individual particles (the outer loop) # 2) use multiple cores to process individual pixels for a given particle # (the inner loops) # Depending on the ratio of particles' "sphere of influence" (a.k.a. the smoothing # length) to the physical width of the pixels, different parallelization # strategies may yield different speed-ups. Strategy #1 works better in the case # of lots of itty bitty particles. Strategy #2 works well when we have a # not-very-large-number of reasonably large-compared-to-pixels particles. We # currently employ #1 as its workload is more even and consistent, even though it # comes with a price of an additional, per thread memory for storing the # intermediate results. local_buff = malloc(sizeof(np.float64_t) * xsize * ysize) xiterv = malloc(sizeof(np.float64_t) * 2) yiterv = malloc(sizeof(np.float64_t) * 2) ziterv = malloc(sizeof(np.float64_t) * 2) xiter = malloc(sizeof(int) * 2) yiter = malloc(sizeof(int) * 2) ziter = malloc(sizeof(int) * 2) xiter[0] = yiter[0] = ziter[0] = 0 xiterv[0] = yiterv[0] = ziterv[0] = 0.0 for i in range(xsize * ysize): local_buff[i] = 0.0 for j in prange(0, posx.shape[0], schedule="dynamic"): if j % 100000 == 0: with gil: PyErr_CheckSignals() xiter[1] = yiter[1] = ziter[1] = 999 if check_period[0] == 1: if posx[j] - hsml[j] < x_min: xiter[1] = +1 xiterv[1] = period_x elif posx[j] + hsml[j] > x_max: xiter[1] = -1 xiterv[1] = -period_x if check_period[1] == 1: if posy[j] - hsml[j] < y_min: yiter[1] = +1 yiterv[1] = period_y elif posy[j] + hsml[j] > y_max: yiter[1] = -1 yiterv[1] = -period_y if check_period[2] == 1: if posz[j] - hsml[j] < z_min: ziter[1] = +1 ziterv[1] = period_z elif posz[j] + hsml[j] > z_max: ziter[1] = -1 ziterv[1] = -period_z # we set the smoothing length squared with lower limit of the pixel # Nope! that causes weird grid resolution dependences and increases # total values when resolution elements have hsml < grid spacing h_j2 = hsml[j]*hsml[j] ih_j2 = 1.0/h_j2 prefactor_j = pmass[j] / pdens[j] / hsml[j]**2 * quantity_to_smooth[j] if weight_field is not None: prefactor_j *= _weight_field[j] # Discussion point: do we want the hsml margin on the z direction? 
# it's consistent with Ray and Region selections, I think, # but does tend to 'tack on' stuff compared to the nominal depth for kk in range(2): # discard if z is outside bounds if ziter[kk] == 999: continue pz = posz[j] + ziterv[kk] ## removed hsml 'margin' in the projection direction to avoid ## double-counting particles near periodic edges ## and adding extra 'depth' to projections #if (pz + hsml[j] < z_min) or (pz - hsml[j] > z_max): continue if (pz < z_min) or (pz > z_max): continue for ii in range(2): if xiter[ii] == 999: continue px = posx[j] + xiterv[ii] if (px + hsml[j] < x_min) or (px - hsml[j] > x_max): continue for jj in range(2): if yiter[jj] == 999: continue py = posy[j] + yiterv[jj] if (py + hsml[j] < y_min) or (py - hsml[j] > y_max): continue # here we find the pixels which this particle contributes to x0 = ((px - hsml[j] - x_min)*idx) x1 = ((px + hsml[j] - x_min)*idx) x0 = iclip(x0-1, 0, xsize) x1 = iclip(x1+1, 0, xsize) y0 = ((py - hsml[j] - y_min)*idy) y1 = ((py + hsml[j] - y_min)*idy) y0 = iclip(y0-1, 0, ysize) y1 = iclip(y1+1, 0, ysize) # found pixels we deposit on, loop through those pixels for xi in range(x0, x1): # we use the centre of the pixel to calculate contribution x = (xi + 0.5) * dx + x_min posx_diff = px - x posx_diff = posx_diff * posx_diff if posx_diff > h_j2: continue for yi in range(y0, y1): y = (yi + 0.5) * dy + y_min posy_diff = py - y posy_diff = posy_diff * posy_diff if posy_diff > h_j2: continue q_ij2 = (posx_diff + posy_diff) * ih_j2 if q_ij2 >= 1: continue # see equation 32 of the SPLASH paper # now we just use the kernel projection local_buff[xi + yi*xsize] += prefactor_j * itab.interpolate(q_ij2) mask[xi, yi] = 1 with gil: for xxi in range(xsize): for yyi in range(ysize): buff[xxi, yyi] += local_buff[xxi + yyi*xsize] free(local_buff) free(xiterv) free(yiterv) free(xiter) free(yiter) return mask @cython.boundscheck(False) @cython.wraparound(False) def interpolate_sph_positions_gather(np.float64_t[:] buff, np.float64_t[:, ::1] tree_positions, np.float64_t[:, ::1] field_positions, np.float64_t[:] hsml, np.float64_t[:] pmass, np.float64_t[:] pdens, np.float64_t[:] quantity_to_smooth, PyKDTree kdtree, int use_normalization=1, kernel_name="cubic", pbar=None, int num_neigh=32): """ This function takes in arbitrary positions, field_positions, at which to perform a nearest neighbor search and perform SPH interpolation. The results are stored in the buffer, buff, which is in the same order as the field_positions are put in. """ cdef np.float64_t q_ij, h_j2, ih_j2, prefactor_j, smoothed_quantity_j cdef np.float64_t * pos_ptr cdef int i, particle, index cdef BoundedPriorityQueue queue = BoundedPriorityQueue(num_neigh, True) cdef np.float64_t[:] buff_den cdef KDTree * ctree = kdtree._tree # Which dimensions shall we use for spatial distances? 
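    # Gather-style SPH estimate at a point r_i (cf. SPLASH eqs. 6, 9, 11):
    #     A(r_i) ~= sum_j (m_j / (rho_j * h_j**3)) * A_j * w(|r_i - r_j| / H_i)
    # where the sum runs over the num_neigh nearest particles, h_j is particle
    # j's smoothing length, and H_i is the distance to the furthest of those
    # neighbors. With use_normalization, the result is divided by the same sum
    # with A_j = 1, so a constant field is reproduced exactly.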
cdef axes_range axes set_axes_range(&axes, -1) # Only allocate memory if we are using normalization if use_normalization: buff_den = np.zeros(buff.shape[0], dtype="float64") kernel = get_kernel_func(kernel_name) # Loop through all the positions we want to interpolate the SPH field onto with nogil: for i in range(0, buff.shape[0]): queue.size = 0 # Update the current position pos_ptr = &field_positions[i, 0] # Use the KDTree to find the nearest neighbors find_neighbors(pos_ptr, tree_positions, queue, ctree, -1, &axes) # Set the smoothing length squared to the square of the distance # of the furthest nearest neighbor h_j2 = queue.heap[0] ih_j2 = 1.0/h_j2 # Loop through each nearest neighbor and add contribution to the # buffer for index in range(queue.max_elements): particle = queue.pids[index] # Calculate contribution of this particle prefactor_j = (pmass[particle] / pdens[particle] / hsml[particle]**3) q_ij = math.sqrt(queue.heap[index]*ih_j2) smoothed_quantity_j = (prefactor_j * quantity_to_smooth[particle] * kernel(q_ij)) # See equations 6, 9, and 11 of the SPLASH paper buff[i] += smoothed_quantity_j if use_normalization: buff_den[i] += prefactor_j * kernel(q_ij) if use_normalization: normalization_1d_utility(buff, buff_den) @cython.boundscheck(False) @cython.wraparound(False) def interpolate_sph_grid_gather(np.float64_t[:, :, :] buff, np.float64_t[:, ::1] tree_positions, np.float64_t[:] bounds, np.float64_t[:] hsml, np.float64_t[:] pmass, np.float64_t[:] pdens, np.float64_t[:] quantity_to_smooth, PyKDTree kdtree, int use_normalization=1, kernel_name="cubic", pbar=None, int num_neigh=32, *, int return_mask=0, ): """ This function takes in the bounds and number of cells in a grid (well, actually we implicitly calculate this from the size of buff). Then we can perform nearest neighbor search and SPH interpolation at the centre of each cell in the grid. """ cdef np.float64_t q_ij, h_j2, ih_j2, prefactor_j, smoothed_quantity_j cdef np.float64_t dx, dy, dz cdef np.float64_t[::1] pos = np.zeros(3, dtype="float64") cdef np.float64_t * pos_ptr = &pos[0] cdef int i, j, k, particle, index cdef BoundedPriorityQueue queue = BoundedPriorityQueue(num_neigh, True) cdef np.float64_t[:, :, :] buff_den cdef KDTree * ctree = kdtree._tree cdef int prog # Which dimensions shall we use for spatial distances? 
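    # Same gather-style estimate as interpolate_sph_positions_gather above,
    # evaluated at the centre of each cell of the grid.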
cdef axes_range axes set_axes_range(&axes, -1) # Only allocate memory if we are using normalization if use_normalization: buff_den = np.zeros([buff.shape[0], buff.shape[1], buff.shape[2]], dtype="float64") kernel = get_kernel_func(kernel_name) dx = (bounds[1] - bounds[0]) / buff.shape[0] dy = (bounds[3] - bounds[2]) / buff.shape[1] dz = (bounds[5] - bounds[4]) / buff.shape[2] # Loop through all the positions we want to interpolate the SPH field onto pbar = get_pbar(title="Interpolating (gather) SPH field", maxval=(buff.shape[0]*buff.shape[1]*buff.shape[2] // 10000)*10000) cdef np.ndarray[np.uint8_t, ndim=3] mask_arr = np.zeros_like(buff, dtype="uint8") cdef np.uint8_t[:, :, :] mask = mask_arr prog = 0 with nogil: for i in range(0, buff.shape[0]): for j in range(0, buff.shape[1]): for k in range(0, buff.shape[2]): prog += 1 if prog % 10000 == 0: with gil: PyErr_CheckSignals() pbar.update(prog) queue.size = 0 # Update the current position pos[0] = bounds[0] + (i + 0.5) * dx pos[1] = bounds[2] + (j + 0.5) * dy pos[2] = bounds[4] + (k + 0.5) * dz # Use the KDTree to find the nearest neighbors find_neighbors(pos_ptr, tree_positions, queue, ctree, -1, &axes) # Set the smoothing length squared to the square of the distance # of the furthest nearest neighbor h_j2 = queue.heap[0] ih_j2 = 1.0/h_j2 # Loop through each nearest neighbor and add contribution to the # buffer for index in range(queue.max_elements): particle = queue.pids[index] # Calculate contribution of this particle prefactor_j = (pmass[particle] / pdens[particle] / hsml[particle]**3) q_ij = math.sqrt(queue.heap[index]*ih_j2) smoothed_quantity_j = (prefactor_j * quantity_to_smooth[particle] * kernel(q_ij)) # See equations 6, 9, and 11 of the SPLASH paper buff[i, j, k] += smoothed_quantity_j mask[i, j, k] = 1 if use_normalization: buff_den[i, j, k] += prefactor_j * kernel(q_ij) if use_normalization: normalization_3d_utility(buff, buff_den) if return_mask: return mask_arr.astype("bool") @cython.initializedcheck(False) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def pixelize_sph_kernel_slice( np.float64_t[:, :] buff, np.uint8_t[:, :] mask, np.float64_t[:] posx, np.float64_t[:] posy, np.float64_t[:] posz, np.float64_t[:] hsml, np.float64_t[:] pmass, np.float64_t[:] pdens, np.float64_t[:] quantity_to_smooth, bounds, np.float64_t slicez, kernel_name="cubic", _check_period = (1, 1, 1), period=None): #print("bounds, slicez, kernel_name, check_period, period") #print(bounds) #print(slicez) #print(kernel_name) #print(check_period) #print(period) #print() # bounds are [x0, x1, y0, y1], slicez is the single coordinate # of the slice along the normal direction. 
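    # Each particle j deposits, at every pixel centre within h_j of it
    # (cf. SPLASH eq. 4 with eqs. 6, 9, 11),
    #     (m_j / (rho_j * h_j**3)) * A_j * w(sqrt(dx^2 + dy^2 + dz^2) / h_j)
    # where dz is the particle's (possibly periodically wrapped) offset from
    # the slice plane at z = slicez.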
# similar method to pixelize_sph_kernel_projection cdef np.intp_t xsize, ysize cdef np.float64_t x_min, x_max, y_min, y_max, prefactor_j cdef np.int64_t xi, yi, x0, x1, y0, y1, xxi, yyi cdef np.float64_t q_ij, posx_diff, posy_diff, posz_diff, ih_j cdef np.float64_t x, y, dx, dy, idx, idy, h_j2, h_j, px, py, pz cdef int i, j, ii, jj cdef np.float64_t period_x = 0, period_y = 0, period_z = 0 cdef int * xiter cdef int * yiter cdef np.float64_t * xiterv cdef np.float64_t * yiterv cdef np.int8_t[3] check_period if period is not None: period_x = period[0] period_y = period[1] period_z = period[2] for i in range(3): check_period[i] = np.int8(_check_period[i]) xsize, ysize = buff.shape[0], buff.shape[1] x_min = bounds[0] x_max = bounds[1] y_min = bounds[2] y_max = bounds[3] dx = (x_max - x_min) / xsize dy = (y_max - y_min) / ysize idx = 1.0/dx idy = 1.0/dy kernel = get_kernel_func(kernel_name) #print('particle index, ii, jj, px, py, pz') with nogil, parallel(): # NOTE see note in pixelize_sph_kernel_projection local_buff = malloc(sizeof(np.float64_t) * xsize * ysize) xiterv = malloc(sizeof(np.float64_t) * 2) yiterv = malloc(sizeof(np.float64_t) * 2) xiter = malloc(sizeof(int) * 2) yiter = malloc(sizeof(int) * 2) xiter[0] = yiter[0] = 0 xiterv[0] = yiterv[0] = 0.0 for i in range(xsize * ysize): local_buff[i] = 0.0 for j in prange(0, posx.shape[0], schedule="dynamic"): if j % 100000 == 0: with gil: PyErr_CheckSignals() #with gil: # print(j) xiter[1] = yiter[1] = 999 pz = posz[j] if check_period[0] == 1: if posx[j] - hsml[j] < x_min: xiter[1] = 1 xiterv[1] = period_x elif posx[j] + hsml[j] > x_max: xiter[1] = -1 xiterv[1] = -period_x if check_period[1] == 1: if posy[j] - hsml[j] < y_min: yiter[1] = 1 yiterv[1] = period_y elif posy[j] + hsml[j] > y_max: yiter[1] = -1 yiterv[1] = -period_y if check_period[2] == 1: # z of particle might be < hsml from the slice plane # but across a periodic boundary if posz[j] - hsml[j] > slicez: pz = posz[j] - period_z elif posz[j] + hsml[j] < slicez: pz = posz[j] + period_z h_j2 = hsml[j] * hsml[j] #fmax(hsml[j]*hsml[j], dx*dy) h_j = hsml[j] #math.sqrt(h_j2) ih_j = 1.0/h_j posz_diff = pz - slicez posz_diff = posz_diff * posz_diff if posz_diff > h_j2: continue prefactor_j = pmass[j] / pdens[j] / hsml[j]**3 prefactor_j *= quantity_to_smooth[j] for ii in range(2): if xiter[ii] == 999: continue px = posx[j] + xiterv[ii] if (px + hsml[j] < x_min) or (px - hsml[j] > x_max): continue for jj in range(2): if yiter[jj] == 999: continue py = posy[j] + yiterv[jj] if (py + hsml[j] < y_min) or (py - hsml[j] > y_max): continue x0 = ( (px - hsml[j] - x_min) * idx) x1 = ( (px + hsml[j] - x_min) * idx) x0 = iclip(x0-1, 0, xsize) x1 = iclip(x1+1, 0, xsize) y0 = ( (py - hsml[j] - y_min) * idy) y1 = ( (py + hsml[j] - y_min) * idy) y0 = iclip(y0-1, 0, ysize) y1 = iclip(y1+1, 0, ysize) #with gil: # print(ii, jj, px, py, pz) # Now we know which pixels to deposit onto for this particle, # so loop over them and add this particle's contribution for xi in range(x0, x1): x = (xi + 0.5) * dx + x_min posx_diff = px - x posx_diff = posx_diff * posx_diff if posx_diff > h_j2: continue for yi in range(y0, y1): y = (yi + 0.5) * dy + y_min posy_diff = py - y posy_diff = posy_diff * posy_diff if posy_diff > h_j2: continue # see equation 4 of the SPLASH paper q_ij = math.sqrt(posx_diff + posy_diff + posz_diff) * ih_j if q_ij >= 1: continue # see equations 6, 9, and 11 of the SPLASH paper local_buff[xi + yi*xsize] += prefactor_j * kernel(q_ij) mask[xi, yi] = 1 with gil: for xxi in range(xsize): for yyi in 
range(ysize): buff[xxi, yyi] += local_buff[xxi + yyi*xsize] free(local_buff) free(xiterv) free(yiterv) free(xiter) free(yiter) @cython.initializedcheck(False) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def pixelize_sph_kernel_arbitrary_grid(np.float64_t[:, :, :] buff, np.float64_t[:] posx, np.float64_t[:] posy, np.float64_t[:] posz, np.float64_t[:] hsml, np.float64_t[:] pmass, np.float64_t[:] pdens, np.float64_t[:] quantity_to_smooth, bounds, pbar=None, kernel_name="cubic", check_period=True, period=None): cdef np.intp_t xsize, ysize, zsize cdef np.float64_t x_min, x_max, y_min, y_max, z_min, z_max, prefactor_j cdef np.int64_t xi, yi, zi, x0, x1, y0, y1, z0, z1 cdef np.float64_t q_ij, posx_diff, posy_diff, posz_diff, px, py, pz cdef np.float64_t x, y, z, dx, dy, dz, idx, idy, idz, h_j2, h_j, ih_j # cdef np.float64_t h_j3 cdef int j, ii, jj, kk cdef np.float64_t period_x = 0, period_y = 0, period_z = 0 cdef int xiter[2] cdef int yiter[2] cdef int ziter[2] cdef np.float64_t xiterv[2] cdef np.float64_t yiterv[2] cdef np.float64_t ziterv[2] cdef int[3] periodic xiter[0] = yiter[0] = ziter[0] = 0 xiterv[0] = yiterv[0] = ziterv[0] = 0.0 if hasattr(check_period, "__len__"): periodic[0] = int(check_period[0]) periodic[1] = int(check_period[1]) periodic[2] = int(check_period[2]) else: _cp = int(check_period) periodic[0] = _cp periodic[1] = _cp periodic[2] = _cp if period is not None: period_x = period[0] period_y = period[1] period_z = period[2] xsize, ysize, zsize = buff.shape[0], buff.shape[1], buff.shape[2] x_min = bounds[0] x_max = bounds[1] y_min = bounds[2] y_max = bounds[3] z_min = bounds[4] z_max = bounds[5] dx = (x_max - x_min) / xsize dy = (y_max - y_min) / ysize dz = (z_max - z_min) / zsize idx = 1.0/dx idy = 1.0/dy idz = 1.0/dz kernel = get_kernel_func(kernel_name) # nogil seems dangerous here, but there are no actual parallel # sections (e.g., prange instead of range) used here. # However, for future writers: # !! the final buff array mutation has no protections against # !! race conditions (e.g., OpenMP's atomic read/write), and # !! cython doesn't seem to provide such options. # (other routines in this file use private variable buffer arrays # and add everything together at the end, but grid arrays can get # big fast, and having such a large array in each thread could # cause memory use issues.) 
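# ---------------------------------------------------------------------------
# Editor's sketch (not part of yt): the "private buffer per worker, reduce
# serially at the end" pattern that the note above contrasts with this
# serial routine. All names are hypothetical; a ThreadPoolExecutor is used
# only to illustrate the data flow, not to reproduce the OpenMP parallelism
# or its performance.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def _deposit_private(shape, updates):
    # Each worker owns a private grid, so concurrent "+=" on the same voxel
    # can never race; the cost is one full grid per worker, which is the
    # memory concern the note above raises for large arbitrary grids.
    local = np.zeros(shape)
    for (i, j, k), val in updates:
        local[i, j, k] += val
    return local

def deposit_parallel(shape, chunks, max_workers=4):
    # chunks: a list of per-worker update lists.
    with ThreadPoolExecutor(max_workers=max_workers) as ex:
        partials = ex.map(_deposit_private, (shape,) * len(chunks), chunks)
        # The serial reduction is the only place the shared grid is written.
        return sum(partials, np.zeros(shape))
# ---------------------------------------------------------------------------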
with nogil: # TODO make this parallel without using too much memory for j in range(0, posx.shape[0]): if j % 50000 == 0: with gil: if(pbar is not None): pbar.update(50000) PyErr_CheckSignals() # end with gil xiter[1] = yiter[1] = ziter[1] = 999 xiterv[1] = yiterv[1] = ziterv[1] = 0.0 if periodic[0] == 1: if posx[j] - hsml[j] < x_min: xiter[1] = +1 xiterv[1] = period_x elif posx[j] + hsml[j] > x_max: xiter[1] = -1 xiterv[1] = -period_x if periodic[1] == 1: if posy[j] - hsml[j] < y_min: yiter[1] = +1 yiterv[1] = period_y elif posy[j] + hsml[j] > y_max: yiter[1] = -1 yiterv[1] = -period_y if periodic[2] == 1: if posz[j] - hsml[j] < z_min: ziter[1] = +1 ziterv[1] = period_z elif posz[j] + hsml[j] > z_max: ziter[1] = -1 ziterv[1] = -period_z #h_j3 = fmax(hsml[j]*hsml[j]*hsml[j], dx*dy*dz) h_j = hsml[j] #math.cbrt(h_j3) h_j2 = h_j*h_j ih_j = 1/h_j prefactor_j = pmass[j] / pdens[j] / hsml[j]**3 * quantity_to_smooth[j] for ii in range(2): if xiter[ii] == 999: continue px = posx[j] + xiterv[ii] if (px + hsml[j] < x_min) or (px - hsml[j] > x_max): continue for jj in range(2): if yiter[jj] == 999: continue py = posy[j] + yiterv[jj] if (py + hsml[j] < y_min) or (py - hsml[j] > y_max): continue for kk in range(2): if ziter[kk] == 999: continue pz = posz[j] + ziterv[kk] if (pz + hsml[j] < z_min) or (pz - hsml[j] > z_max): continue x0 = ( (px - hsml[j] - x_min) * idx) x1 = ( (px + hsml[j] - x_min) * idx) x0 = iclip(x0-1, 0, xsize) x1 = iclip(x1+1, 0, xsize) y0 = ( (py - hsml[j] - y_min) * idy) y1 = ( (py + hsml[j] - y_min) * idy) y0 = iclip(y0-1, 0, ysize) y1 = iclip(y1+1, 0, ysize) z0 = ( (pz - hsml[j] - z_min) * idz) z1 = ( (pz + hsml[j] - z_min) * idz) z0 = iclip(z0-1, 0, zsize) z1 = iclip(z1+1, 0, zsize) # Now we know which voxels to deposit onto for this particle, # so loop over them and add this particle's contribution for xi in range(x0, x1): x = (xi + 0.5) * dx + x_min posx_diff = px - x posx_diff = posx_diff * posx_diff if posx_diff > h_j2: continue for yi in range(y0, y1): y = (yi + 0.5) * dy + y_min posy_diff = py - y posy_diff = posy_diff * posy_diff if posy_diff > h_j2: continue for zi in range(z0, z1): z = (zi + 0.5) * dz + z_min posz_diff = pz - z posz_diff = posz_diff * posz_diff if posz_diff > h_j2: continue # see equation 4 of the SPLASH paper q_ij = math.sqrt(posx_diff + posy_diff + posz_diff) * ih_j if q_ij >= 1: continue # shared variable buff should not # be mutatated in a nogil section # where different threads may change # the same array element buff[xi, yi, zi] += prefactor_j \ * kernel(q_ij) def pixelize_element_mesh_line(np.ndarray[np.float64_t, ndim=2] coords, np.ndarray[np.int64_t, ndim=2] conn, np.ndarray[np.float64_t, ndim=1] start_point, np.ndarray[np.float64_t, ndim=1] end_point, npoints, np.ndarray[np.float64_t, ndim=2] field, int index_offset = 0): # This routine chooses the correct element sampler to interpolate field # values at evenly spaced points along a sampling line cdef np.float64_t *vertices cdef np.float64_t *field_vals cdef int nvertices = conn.shape[1] cdef int ndim = coords.shape[1] cdef int num_field_vals = field.shape[1] cdef int num_plot_nodes = npoints cdef int num_intervals = npoints - 1 cdef double[4] mapped_coord cdef ElementSampler sampler cdef np.ndarray[np.float64_t, ndim=1] lin_vec cdef np.ndarray[np.float64_t, ndim=1] lin_inc cdef np.ndarray[np.float64_t, ndim=2] lin_sample_points cdef np.int64_t i, n, j, k cdef np.ndarray[np.float64_t, ndim=1] arc_length cdef np.float64_t lin_length, inc_length cdef np.ndarray[np.float64_t, ndim=1] plot_values 
cdef np.float64_t sample_point[3] lin_vec = np.zeros(ndim, dtype="float64") lin_inc = np.zeros(ndim, dtype="float64") lin_sample_points = np.zeros((num_plot_nodes, ndim), dtype="float64") arc_length = np.zeros(num_plot_nodes, dtype="float64") plot_values = np.zeros(num_plot_nodes, dtype="float64") # Pick the right sampler and allocate storage for the mapped coordinate if ndim == 3 and nvertices == 4: sampler = P1Sampler3D() elif ndim == 3 and nvertices == 6: sampler = W1Sampler3D() elif ndim == 3 and nvertices == 8: sampler = Q1Sampler3D() elif ndim == 3 and nvertices == 20: sampler = S2Sampler3D() elif ndim == 2 and nvertices == 3: sampler = P1Sampler2D() elif ndim == 1 and nvertices == 2: sampler = P1Sampler1D() elif ndim == 2 and nvertices == 4: sampler = Q1Sampler2D() elif ndim == 2 and nvertices == 9: sampler = Q2Sampler2D() elif ndim == 2 and nvertices == 6: sampler = T2Sampler2D() elif ndim == 3 and nvertices == 10: sampler = Tet2Sampler3D() else: raise YTElementTypeNotRecognized(ndim, nvertices) # allocate temporary storage vertices = malloc(ndim * sizeof(np.float64_t) * nvertices) field_vals = malloc(sizeof(np.float64_t) * num_field_vals) lin_vec = end_point - start_point lin_length = np.linalg.norm(lin_vec) lin_inc = lin_vec / num_intervals inc_length = lin_length / num_intervals for j in range(ndim): lin_sample_points[0, j] = start_point[j] arc_length[0] = 0 for i in range(1, num_intervals + 1): for j in range(ndim): lin_sample_points[i, j] = lin_sample_points[i-1, j] + lin_inc[j] arc_length[i] = arc_length[i-1] + inc_length for i in range(num_intervals + 1): for j in range(3): if j < ndim: sample_point[j] = lin_sample_points[i][j] else: sample_point[j] = 0 for ci in range(conn.shape[0]): for n in range(num_field_vals): field_vals[n] = field[ci, n] # Fill the vertices for n in range(nvertices): cj = conn[ci, n] - index_offset for k in range(ndim): vertices[ndim*n + k] = coords[cj, k] sampler.map_real_to_unit(mapped_coord, vertices, sample_point) if not sampler.check_inside(mapped_coord) and ci != conn.shape[0] - 1: continue elif not sampler.check_inside(mapped_coord): raise ValueError("Check to see that both starting and ending line points " "are within the domain of the mesh.") plot_values[i] = sampler.sample_at_unit_point(mapped_coord, field_vals) break free(vertices) free(field_vals) return arc_length, plot_values # intended for use in ParticleImageBuffer @cython.boundscheck(False) @cython.wraparound(False) def rotate_particle_coord_pib(np.float64_t[:] px, np.float64_t[:] py, np.float64_t[:] pz, center, width, normal_vector, north_vector): # We want to do two rotations, one to first rotate our coordinates to have # the normal vector be the z-axis (i.e., the viewer's perspective), and then # another rotation to make the north-vector be the y-axis (i.e., north). 
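# ---------------------------------------------------------------------------
# Editor's sketch (not part of yt): the two-step rotation described above,
# composed into a single matrix using this module's own get_rotation_matrix
# (defined later in this file). The input vectors here are made up for
# illustration; north must be perpendicular to normal for both checks to
# hold exactly.
import numpy as np

normal_vector = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
north_vector = np.array([0.0, 0.0, 1.0])
z_axis = np.array([0.0, 0.0, 1.0])
y_axis = np.array([0.0, 1.0, 0.0])

R1 = np.asarray(get_rotation_matrix(normal_vector, z_axis))      # normal -> z
R2 = np.asarray(get_rotation_matrix(R1 @ north_vector, y_axis))  # north -> y
R = R2 @ R1  # the total rotation, applied to positions as R @ position
assert np.allclose(R @ normal_vector, z_axis)
assert np.allclose(R @ north_vector, y_axis)
# ---------------------------------------------------------------------------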
# Fortunately, total_rotation_matrix = rotation_matrix_1 x rotation_matrix_2 cdef int num_particles = np.size(px) cdef np.float64_t[:] z_axis = np.array([0., 0., 1.], dtype="float64") cdef np.float64_t[:] y_axis = np.array([0., 1., 0.], dtype="float64") cdef np.float64_t[:, :] normal_rotation_matrix cdef np.float64_t[:] transformed_north_vector cdef np.float64_t[:, :] north_rotation_matrix cdef np.float64_t[:, :] rotation_matrix normal_rotation_matrix = get_rotation_matrix(normal_vector, z_axis) transformed_north_vector = np.matmul(normal_rotation_matrix, north_vector) north_rotation_matrix = get_rotation_matrix(transformed_north_vector, y_axis) rotation_matrix = np.matmul(north_rotation_matrix, normal_rotation_matrix) cdef np.float64_t[:] px_rotated = np.empty(num_particles, dtype="float64") cdef np.float64_t[:] py_rotated = np.empty(num_particles, dtype="float64") cdef np.float64_t[:] coordinate_matrix = np.empty(3, dtype="float64") cdef np.float64_t[:] rotated_coordinates cdef np.float64_t[:] rotated_center rotated_center = rotation_matmul( rotation_matrix, np.array([center[0], center[1], center[2]])) # set up the rotated bounds cdef np.float64_t rot_bounds_x0 = rotated_center[0] - width[0] / 2 cdef np.float64_t rot_bounds_x1 = rotated_center[0] + width[0] / 2 cdef np.float64_t rot_bounds_y0 = rotated_center[1] - width[1] / 2 cdef np.float64_t rot_bounds_y1 = rotated_center[1] + width[1] / 2 for i in range(num_particles): coordinate_matrix[0] = px[i] coordinate_matrix[1] = py[i] coordinate_matrix[2] = pz[i] rotated_coordinates = rotation_matmul( rotation_matrix, coordinate_matrix) px_rotated[i] = rotated_coordinates[0] py_rotated[i] = rotated_coordinates[1] return px_rotated, py_rotated, rot_bounds_x0, rot_bounds_x1, rot_bounds_y0, rot_bounds_y1 # version intended for SPH off-axis slices/projections # includes dealing with periodic boundaries, but also # shifts particles so center -> origin. # therefore, don't want to use this in the ParticleImageBuffer, # which expects differently centered coordinates. @cython.boundscheck(False) @cython.wraparound(False) def rotate_particle_coord(np.float64_t[:] px, np.float64_t[:] py, np.float64_t[:] pz, center, bounds, periodic, width, depth, normal_vector, north_vector): # We want to do two rotations, one to first rotate our coordinates to have # the normal vector be the z-axis (i.e., the viewer's perspective), and then # another rotation to make the north-vector be the y-axis (i.e., north). 
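# ---------------------------------------------------------------------------
# Editor's sketch (not part of yt): a vectorized NumPy version of the
# minimum-image recentering that the loop below performs particle by
# particle. The name recenter_periodic is hypothetical.
import numpy as np

def recenter_periodic(coords, center, bounds, periodic):
    # coords: (N, 3); bounds: (x0, x1, y0, y1, z0, z1). Returns coordinates
    # relative to center, wrapped so each periodic component lies within
    # half a period, which makes centers near a periodic edge work.
    out = coords - center
    for ax in range(3):
        if not periodic[ax]:
            continue
        period = bounds[2 * ax + 1] - bounds[2 * ax]
        out[:, ax][out[:, ax] < -0.5 * period] += period
        out[:, ax][out[:, ax] > 0.5 * period] -= period
    return out
# ---------------------------------------------------------------------------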
# Fortunately, total_rotation_matrix = rotation_matrix_1 x rotation_matrix_2 cdef np.int64_t num_particles = np.size(px) cdef np.float64_t[:] z_axis = np.array([0., 0., 1.], dtype="float64") cdef np.float64_t[:] y_axis = np.array([0., 1., 0.], dtype="float64") cdef np.float64_t[:, :] normal_rotation_matrix cdef np.float64_t[:] transformed_north_vector cdef np.float64_t[:, :] north_rotation_matrix cdef np.float64_t[:, :] rotation_matrix normal_rotation_matrix = get_rotation_matrix(normal_vector, z_axis) transformed_north_vector = np.matmul(normal_rotation_matrix, north_vector) north_rotation_matrix = get_rotation_matrix(transformed_north_vector, y_axis) rotation_matrix = np.matmul(north_rotation_matrix, normal_rotation_matrix) cdef np.float64_t[:] px_rotated = np.empty(num_particles, dtype="float64") cdef np.float64_t[:] py_rotated = np.empty(num_particles, dtype="float64") cdef np.float64_t[:] pz_rotated = np.empty(num_particles, dtype="float64") cdef np.float64_t[:] coordinate_matrix = np.empty(3, dtype="float64") cdef np.float64_t[:] rotated_coordinates cdef np.float64_t[:] rotated_center cdef np.int64_t i cdef int ax #rotated_center = rotation_matmul( # rotation_matrix, np.array([center[0], center[1], center[2]])) rotated_center = np.zeros((3,), dtype=center.dtype) # set up the rotated bounds cdef np.float64_t rot_bounds_x0 = rotated_center[0] - 0.5 * width[0] cdef np.float64_t rot_bounds_x1 = rotated_center[0] + 0.5 * width[0] cdef np.float64_t rot_bounds_y0 = rotated_center[1] - 0.5 * width[1] cdef np.float64_t rot_bounds_y1 = rotated_center[1] + 0.5 * width[1] cdef np.float64_t rot_bounds_z0 = rotated_center[2] - 0.5 * depth cdef np.float64_t rot_bounds_z1 = rotated_center[2] + 0.5 * depth for i in range(num_particles): coordinate_matrix[0] = px[i] coordinate_matrix[1] = py[i] coordinate_matrix[2] = pz[i] # centering: # make sure this also works for centers close to periodic edges # added consequence: the center is placed at the origin # (might as well keep it there in these temporary coordinates) for ax in range(3): # assumed center is zero even if non-periodic coordinate_matrix[ax] -= center[ax] if not periodic[ax]: continue period = bounds[2 * ax + 1] - bounds[2 * ax] # abs. difference between points in the volume is <= period if coordinate_matrix[ax] < -0.5 * period: coordinate_matrix[ax] += period if coordinate_matrix[ax] > 0.5 * period: coordinate_matrix[ax] -= period rotated_coordinates = rotation_matmul( rotation_matrix, coordinate_matrix) px_rotated[i] = rotated_coordinates[0] py_rotated[i] = rotated_coordinates[1] pz_rotated[i] = rotated_coordinates[2] return (px_rotated, py_rotated, pz_rotated, rot_bounds_x0, rot_bounds_x1, rot_bounds_y0, rot_bounds_y1, rot_bounds_z0, rot_bounds_z1) @cython.boundscheck(False) @cython.wraparound(False) def off_axis_projection_SPH(np.float64_t[:] px, np.float64_t[:] py, np.float64_t[:] pz, np.float64_t[:] particle_masses, np.float64_t[:] particle_densities, np.float64_t[:] smoothing_lengths, bounds, center, width, periodic, np.float64_t[:] quantity_to_smooth, np.float64_t[:, :] projection_array, np.uint8_t[:, :] mask, normal_vector, north_vector, weight_field=None, depth=None, kernel_name="cubic"): # periodic: periodicity of the data set: # Do nothing in event of a 0 normal vector if np.allclose(normal_vector, 0.): return if depth is None: # set to volume diagonal + margin -> won't exclude anything depth = 2. 
* np.sqrt((bounds[1] - bounds[0])**2 + (bounds[3] - bounds[2])**2 + (bounds[5] - bounds[4])**2) px_rotated, py_rotated, pz_rotated, \ rot_bounds_x0, rot_bounds_x1, \ rot_bounds_y0, rot_bounds_y1, \ rot_bounds_z0, rot_bounds_z1 = rotate_particle_coord(px, py, pz, center, bounds, periodic, width, depth, normal_vector, north_vector) # check_period=0: assumed to be a small region compared to the box # size. The rotation already ensures that a center close to a # periodic edge works out fine. # since the simple single-coordinate modulo math periodicity # does not apply to the *rotated* coordinates, the periodicity # approach implemented for this along-axis projection method # would fail here check_period = np.array([0, 0, 0], dtype="int") pixelize_sph_kernel_projection(projection_array, mask, px_rotated, py_rotated, pz_rotated, smoothing_lengths, particle_masses, particle_densities, quantity_to_smooth, [rot_bounds_x0, rot_bounds_x1, rot_bounds_y0, rot_bounds_y1, rot_bounds_z0, rot_bounds_z1], weight_field=weight_field, _check_period=check_period, kernel_name=kernel_name) # like slice pixelization, but for off-axis planes def pixelize_sph_kernel_cutting( np.float64_t[:, :] buff, np.uint8_t[:, :] mask, np.float64_t[:] posx, np.float64_t[:] posy, np.float64_t[:] posz, np.float64_t[:] hsml, np.float64_t[:] pmass, np.float64_t[:] pdens, np.float64_t[:] quantity_to_smooth, center, widthxy, normal_vector, north_vector, boxbounds, periodic, kernel_name="cubic", int check_period=1): if check_period == 0: periodic = np.zeros(3, dtype=bool) posx_rot, posy_rot, posz_rot, \ rot_bounds_x0, rot_bounds_x1, \ rot_bounds_y0, rot_bounds_y1, \ rot_bounds_z0, _ = rotate_particle_coord(posx, posy, posz, center, boxbounds, periodic, widthxy, 0., normal_vector, north_vector) bounds_rot = np.array([rot_bounds_x0, rot_bounds_x1, rot_bounds_y0, rot_bounds_y1]) slicez_rot = rot_bounds_z0 pixelize_sph_kernel_slice(buff, mask, posx_rot, posy_rot, posz_rot, hsml, pmass, pdens, quantity_to_smooth, bounds_rot, slicez_rot, kernel_name=kernel_name, _check_period=np.zeros(3, dtype="int"), period=None) @cython.boundscheck(False) @cython.wraparound(False) cdef np.float64_t[:] rotation_matmul(np.float64_t[:, :] rotation_matrix, np.float64_t[:] coordinate_matrix): cdef np.float64_t[:] out = np.zeros(3) for i in range(3): for j in range(3): out[i] += rotation_matrix[i, j] * coordinate_matrix[j] return out @cython.boundscheck(False) @cython.wraparound(False) cpdef np.float64_t[:, :] get_rotation_matrix(np.float64_t[:] normal_vector, np.float64_t[:] final_vector): """ Returns a numpy rotation matrix corresponding to the rotation of the given normal vector to the specified final_vector. See https://math.stackexchange.com/a/476311 although note we return the inverse of what's specified there. """ cdef np.float64_t[:] normal_unit_vector = normal_vector / np.linalg.norm(normal_vector) cdef np.float64_t[:] final_unit_vector = final_vector / np.linalg.norm(final_vector) cdef np.float64_t[:] v = np.cross(final_unit_vector, normal_unit_vector) cdef np.float64_t s = np.linalg.norm(v) cdef np.float64_t c = np.dot(final_unit_vector, normal_unit_vector) # if the normal vector is identical to the final vector, just return the # identity matrix if np.isclose(c, 1, rtol=1e-09): return np.identity(3, dtype="float64") # if the normal vector is the negative final vector, return the appropriate # rotation matrix for flipping your coordinate system. 
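# ---------------------------------------------------------------------------
# Editor's sketch (not part of yt): a quick numerical check of the general
# branch of this function, i.e. that R = (I + K + K^2/(1+c))^-1 maps
# normal_vector onto final_vector and is a proper rotation. The vectors are
# made up for illustration.
import numpy as np

n = np.array([0.3, -0.5, 0.8]); n /= np.linalg.norm(n)
f = np.array([0.0, 0.0, 1.0])
R = np.asarray(get_rotation_matrix(n, f))
assert np.allclose(R @ n, f)                 # rotates normal onto final
assert np.allclose(R @ R.T, np.identity(3))  # orthogonal, i.e. a rotation
# (the antiparallel special case, s ~ 0 with c ~ -1, is the branch below)
# ---------------------------------------------------------------------------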
if np.isclose(s, 0, rtol=1e-09): return np.array([[0, -1, 0],[-1, 0, 0],[0, 0, -1]], dtype="float64") cdef np.float64_t[:, :] cross_product_matrix = np.array([[0, -1 * v[2], v[1]], [v[2], 0, -1 * v[0]], [-1 * v[1], v[0], 0]], dtype="float64") return np.linalg.inv(np.identity(3, dtype="float64") + cross_product_matrix + np.matmul(cross_product_matrix, cross_product_matrix) * 1/(1+c)) @cython.boundscheck(False) @cython.wraparound(False) def normalization_3d_utility(np.float64_t[:, :, :] num, np.float64_t[:, :, :] den): cdef int i, j, k for i in range(num.shape[0]): for j in range(num.shape[1]): for k in range(num.shape[2]): if den[i, j, k] != 0.0: num[i, j, k] = num[i, j, k] / den[i, j, k] @cython.boundscheck(False) @cython.wraparound(False) def normalization_2d_utility(np.float64_t[:, :] num, np.float64_t[:, :] den): cdef int i, j for i in range(num.shape[0]): for j in range(num.shape[1]): if den[i, j] != 0.0: num[i, j] = num[i, j] / den[i, j] @cython.boundscheck(False) @cython.wraparound(False) def normalization_1d_utility(np.float64_t[:] num, np.float64_t[:] den): cdef int i for i in range(num.shape[0]): if den[i] != 0.0: num[i] = num[i] / den[i] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/platform_dep.h0000644000175100001770000000105414714401662017627 0ustar00runnerdocker#include #ifdef MS_WIN32 #include "malloc.h" /* note: the following implicitly sets a mininum VS version: conservative minimum is _MSC_VER >= 1928 (VS 2019, 16.8), but may work for VS 2015 but that has not been tested. see https://github.com/yt-project/yt/pull/4980 and https://learn.microsoft.com/en-us/cpp/overview/visual-cpp-language-conformance */ #include #include #elif defined(__FreeBSD__) #include #include #include #else #include #include "alloca.h" #include #endif ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/platform_dep_math.hpp0000644000175100001770000000107114714401662021177 0ustar00runnerdocker/* This file provides a compatibility layout between MSVC, and different version of GCC. MSVC does not define isnormal in the std:: namespace, so we cannot import it from , but from instead. However with GCC-5, there is a clash between the definition of isnormal in and using C++14, so we need to import from cmath instead. 
*/ #if _MSC_VER #include inline bool __isnormal(double x) { return isnormal(x); } #elif defined(__FreeBSD__) #else #include inline bool __isnormal(double x) { return std::isnormal(x); } #endif ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/points_in_volume.pyx0000644000175100001770000002216114714401662021137 0ustar00runnerdocker # distutils: libraries = STD_LIBS """ Checks for points contained in a volume """ import numpy as np cimport cython cimport numpy as np from libc.math cimport sqrt cdef extern from "math.h": double fabs(double x) @cython.wraparound(False) @cython.boundscheck(False) def planar_points_in_volume( np.ndarray[np.float64_t, ndim=2] points, np.ndarray[np.int8_t, ndim=1] pmask, # pixel mask np.ndarray[np.float64_t, ndim=1] left_edge, np.ndarray[np.float64_t, ndim=1] right_edge, np.ndarray[np.int32_t, ndim=3] mask, float dx): cdef np.ndarray[np.int8_t, ndim=1] \ valid = np.zeros(points.shape[0], dtype='int8') cdef int i, dim, count cdef int ex cdef double dx_inv cdef unsigned int idx[3] count = 0 dx_inv = 1.0 / dx for i in xrange(points.shape[0]): if pmask[i] == 0: continue ex = 1 for dim in xrange(3): if points[i,dim] < left_edge[dim] or points[i,dim] > right_edge[dim]: valid[i] = ex = 0 break if ex == 1: for dim in xrange(3): idx[dim] = \ ((points[i,dim] - left_edge[dim]) * dx_inv) if mask[idx[0], idx[1], idx[2]] == 1: valid[i] = 1 count += 1 cdef np.ndarray[np.int32_t, ndim=1] result = np.empty(count, dtype='int32') count = 0 for i in xrange(points.shape[0]): if valid[i] == 1 and pmask[i] == 1: result[count] = i count += 1 return result cdef inline void set_rotated_pos( np.float64_t cp[3], np.float64_t rdds[3][3], np.float64_t rorigin[3], int i, int j, int k): cdef int oi for oi in range(3): cp[oi] = rdds[0][oi] * (0.5 + i) \ + rdds[1][oi] * (0.5 + j) \ + rdds[2][oi] * (0.5 + k) \ + rorigin[oi] #@cython.wraparound(False) #@cython.boundscheck(False) def grid_points_in_volume( np.ndarray[np.float64_t, ndim=1] box_lengths, np.ndarray[np.float64_t, ndim=1] box_origin, np.ndarray[np.float64_t, ndim=2] rot_mat, np.ndarray[np.float64_t, ndim=1] grid_left_edge, np.ndarray[np.float64_t, ndim=1] grid_right_edge, np.ndarray[np.float64_t, ndim=1] dds, np.ndarray[np.int32_t, ndim=3] mask, int break_first): cdef int n[3] cdef int i, j, k cdef np.float64_t rds[3][3] cdef np.float64_t cur_pos[3] cdef np.float64_t rorigin[3] for i in range(3): rorigin[i] = 0.0 for i in range(3): n[i] = mask.shape[i] for j in range(3): # Set up our transposed dx, which has a component in every # direction rds[i][j] = dds[i] * rot_mat[j,i] # In our rotated coordinate system, the box origin is 0,0,0 # so we subtract the box_origin from the grid_origin and rotate # that rorigin[j] += (grid_left_edge[i] - box_origin[i]) * rot_mat[j,i] for i in range(n[0]): for j in range(n[1]): for k in range(n[2]): set_rotated_pos(cur_pos, rds, rorigin, i, j, k) if (cur_pos[0] > box_lengths[0]): continue if (cur_pos[1] > box_lengths[1]): continue if (cur_pos[2] > box_lengths[2]): continue if (cur_pos[0] < 0.0): continue if (cur_pos[1] < 0.0): continue if (cur_pos[2] < 0.0): continue if break_first: if mask[i,j,k]: return 1 else: mask[i,j,k] = 1 return 0 cdef void normalize_vector(np.float64_t vec[3]): cdef int i cdef np.float64_t norm = 0.0 for i in range(3): norm += vec[i]*vec[i] norm = sqrt(norm) for i in range(3): vec[i] /= norm cdef void get_cross_product(np.float64_t v1[3], np.float64_t v2[3], np.float64_t cp[3]): cp[0] = v1[1]*v2[2] - v1[2]*v2[1] 
    cp[1] = v1[2]*v2[0] - v1[0]*v2[2]
    cp[2] = v1[0]*v2[1] - v1[1]*v2[0]
    #print(cp[0], cp[1], cp[2])

cdef int check_projected_overlap(
        np.float64_t sep_ax[3], np.float64_t sep_vec[3], int gi,
        np.float64_t b_vec[3][3], np.float64_t g_vec[3][3]):
    cdef int g_ax, b_ax
    cdef np.float64_t tba, tga, ba, ga, sep_dot
    ba = ga = sep_dot = 0.0
    for g_ax in range(3):
        # We need the grid vectors, which we'll precompute here
        tba = tga = 0.0
        for b_ax in range(3):
            tba += b_vec[g_ax][b_ax] * sep_vec[b_ax]
            tga += g_vec[g_ax][b_ax] * sep_vec[b_ax]
        ba += fabs(tba)
        ga += fabs(tga)
        sep_dot += sep_vec[g_ax] * sep_ax[g_ax]
    #print(sep_vec[0], sep_vec[1], sep_vec[2],)
    #print(sep_ax[0], sep_ax[1], sep_ax[2])
    return (fabs(sep_dot) > ba+ga)

# Now we do
@cython.wraparound(False)
@cython.boundscheck(False)
def find_grids_in_inclined_box(
        np.ndarray[np.float64_t, ndim=2] box_vectors,
        np.ndarray[np.float64_t, ndim=1] box_center,
        np.ndarray[np.float64_t, ndim=2] grid_left_edges,
        np.ndarray[np.float64_t, ndim=2] grid_right_edges):
    # http://www.gamasutra.com/view/feature/3383/simple_intersection_tests_for_games.php?page=5
    cdef int n = grid_right_edges.shape[0]
    cdef int g_ax, b_ax, gi
    cdef np.float64_t b_vec[3][3]
    cdef np.float64_t g_vec[3][3]
    cdef np.float64_t a_vec[3][3]
    cdef np.float64_t sep_ax[15][3]
    cdef np.float64_t sep_vec[3]
    cdef np.ndarray[np.int32_t, ndim=1] good = np.zeros(n, dtype='int32')
    cdef np.ndarray[np.float64_t, ndim=2] grid_centers
    # Fill in our axis unit vectors
    for b_ax in range(3):
        for g_ax in range(3):
            a_vec[b_ax][g_ax] = (b_ax == g_ax)
    grid_centers = (grid_right_edges + grid_left_edges)/2.0
    # Now we pre-compute our candidate separating axes, because the unit
    # vectors for all the grids are identical
    for b_ax in range(3):
        # We have 6 principal axes we already know, which are the grid (domain)
        # principal axes and the box axes
        sep_ax[b_ax][0] = sep_ax[b_ax][1] = sep_ax[b_ax][2] = 0.0
        sep_ax[b_ax][b_ax] = 1.0 # delta_ijk, for grid axes
        for g_ax in range(3):
            b_vec[b_ax][g_ax] = 0.5*box_vectors[b_ax,g_ax]
            sep_ax[b_ax + 3][g_ax] = b_vec[b_ax][g_ax] # box axes
        normalize_vector(sep_ax[b_ax + 3])
        for g_ax in range(3):
            get_cross_product(b_vec[b_ax], a_vec[g_ax], sep_ax[b_ax*3 + g_ax + 6])
            normalize_vector(sep_ax[b_ax*3 + g_ax + 6])
    for gi in range(n):
        for g_ax in range(3):
            # Calculate the separation vector
            sep_vec[g_ax] = grid_centers[gi, g_ax] - box_center[g_ax]
            # Calculate the grid axis lengths
            g_vec[g_ax][0] = g_vec[g_ax][1] = g_vec[g_ax][2] = 0.0
            g_vec[g_ax][g_ax] = 0.5 * (grid_right_edges[gi, g_ax]
                                     - grid_left_edges[gi, g_ax])
        for b_ax in range(15):
            #print(b_ax,)
            if check_projected_overlap(
                    sep_ax[b_ax], sep_vec, gi, b_vec, g_vec):
                good[gi] = 1
                break
    return good

def calculate_fill_grids(int fill_level, int refratio, int last_level,
                         np.ndarray[np.int64_t, ndim=1] domain_width,
                         np.ndarray[np.int64_t, ndim=1] cg_start_index,
                         np.ndarray[np.int32_t, ndim=1] cg_dims,
                         np.ndarray[np.int64_t, ndim=1] g_start_index,
                         np.ndarray[np.int32_t, ndim=1] g_dims,
                         np.ndarray[np.uint8_t, ndim=3, cast=True] g_child_mask):
    cdef np.int64_t cgstart[3]
    cdef np.int64_t gstart[3]
    cdef np.int64_t cgend[3]
    cdef np.int64_t gend[3]
    cdef np.int64_t dw[3]
    cdef np.int64_t cxi, cyi, czi, gxi, gyi, gzi, ci, cj, ck
    cdef int i, total = 0
    for i in range(3):
        dw[i] = domain_width[i]
        cgstart[i] = cg_start_index[i]
        gstart[i] = g_start_index[i]
        cgend[i] = cgstart[i] + cg_dims[i]
        gend[i] = gstart[i] + g_dims[i]
    for cxi in range(cgstart[0], cgend[0]+1):
        ci = (cxi % dw[0])
        if ci < 0: ci += dw[0]
        if ci < gstart[0]*refratio or ci >= gend[0]*refratio: continue
        gxi = (<np.int64_t> (ci / refratio)) - gstart[0]
        for cyi in range(cgstart[1], cgend[1]):
            cj = (cyi % dw[1])
            if cj < 0: cj += dw[1]
            if cj < gstart[1]*refratio or cj >= gend[1]*refratio: continue
            gyi = (<np.int64_t> (cj / refratio)) - gstart[1]
            for czi in range(cgstart[2], cgend[2]):
                ck = (czi % dw[2])
                if ck < 0: ck += dw[2]
                if ck < gstart[2]*refratio or ck >= gend[2]*refratio: continue
                gzi = (<np.int64_t> (ck / refratio)) - gstart[2]
                if last_level or g_child_mask[gxi, gyi, gzi] > 0:
                    total += 1
    return total
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/primitives.pxd0000644000175100001770000001022614714401662017713 0ustar00runnerdocker
cimport cython
import numpy as np
cimport numpy as np

from .vec3_ops cimport cross, dot, subtract

cdef struct Ray:
    np.float64_t origin[3]
    np.float64_t direction[3]
    np.float64_t inv_dir[3]
    np.float64_t data_val
    np.float64_t t_near
    np.float64_t t_far
    np.int64_t elem_id
    np.int64_t near_boundary

cdef struct BBox:
    np.float64_t left_edge[3]
    np.float64_t right_edge[3]

cdef struct RayHitData:
    np.float64_t u
    np.float64_t v
    np.float64_t t
    np.int64_t converged

cdef struct Triangle:
    np.float64_t p0[3]
    np.float64_t p1[3]
    np.float64_t p2[3]
    np.int64_t elem_id

cdef np.int64_t ray_bbox_intersect(Ray* ray, const BBox bbox) noexcept nogil

cdef np.int64_t ray_triangle_intersect(const void* primitives,
                                       const np.int64_t item,
                                       Ray* ray) noexcept nogil

cdef void triangle_centroid(const void *primitives,
                            const np.int64_t item,
                            np.float64_t[3] centroid) noexcept nogil

cdef void triangle_bbox(const void *primitives,
                        const np.int64_t item,
                        BBox* bbox) noexcept nogil

cdef struct Patch:
    np.float64_t[8][3] v  # 8 vertices per patch
    np.int64_t elem_id

cdef void patchSurfaceFunc(const cython.floating[8][3] verts,
                           const cython.floating u,
                           const cython.floating v,
                           cython.floating[3] S) noexcept nogil

cdef void patchSurfaceDerivU(const cython.floating[8][3] verts,
                             const cython.floating u,
                             const cython.floating v,
                             cython.floating[3] Su) noexcept nogil

cdef void patchSurfaceDerivV(const cython.floating[8][3] verts,
                             const cython.floating u,
                             const cython.floating v,
                             cython.floating[3] Sv) noexcept nogil

cdef RayHitData compute_patch_hit(cython.floating[8][3] verts,
                                  cython.floating[3] ray_origin,
                                  cython.floating[3] ray_direction) noexcept nogil

cdef np.int64_t ray_patch_intersect(const void* primitives,
                                    const np.int64_t item,
                                    Ray* ray) noexcept nogil

cdef void patch_centroid(const void *primitives,
                         const np.int64_t item,
                         np.float64_t[3] centroid) noexcept nogil

cdef void patch_bbox(const void *primitives,
                     const np.int64_t item,
                     BBox* bbox) noexcept nogil

cdef struct TetPatch:
    np.float64_t[6][3] v  # 6 vertices per patch
    np.int64_t elem_id

cdef RayHitData compute_tet_patch_hit(cython.floating[6][3] verts,
                                      cython.floating[3] ray_origin,
                                      cython.floating[3] ray_direction) noexcept nogil

cdef void tet_patchSurfaceFunc(const cython.floating[6][3] verts,
                               const cython.floating u,
                               const cython.floating v,
                               cython.floating[3] S) noexcept nogil

cdef void tet_patchSurfaceDerivU(const cython.floating[6][3] verts,
                                 const cython.floating u,
                                 const cython.floating v,
                                 cython.floating[3] Su) noexcept nogil

cdef void tet_patchSurfaceDerivV(const cython.floating[6][3] verts,
                                 const cython.floating u,
                                 const cython.floating v,
                                 cython.floating[3] Sv) noexcept nogil

cdef np.int64_t ray_tet_patch_intersect(const void* primitives,
                                        const np.int64_t item,
                                        Ray* ray) noexcept nogil

cdef void tet_patch_centroid(const void *primitives,
                             const np.int64_t item,
                             np.float64_t[3] centroid) noexcept nogil

cdef
void tet_patch_bbox(const void *primitives, const np.int64_t item, BBox* bbox) noexcept nogil ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/primitives.pyx0000644000175100001770000004517414714401662017752 0ustar00runnerdocker# distutils: libraries = STD_LIBS """ This file contains definitions of the various primitives that can be used by the Cython ray-tracer for unstructured mesh rendering. To define a new primitive type, you need to define a struct that represents it. You also need to provide three functions: 1. A function that computes the intersection between a given ray and a given primitive. 2. A function that computes the centroid of the primitive type. 3. A function that computes the axis-aligned bounding box of a given primitive. """ cimport cython import numpy as np cimport numpy as np from libc.math cimport fabs from yt.utilities.lib.vec3_ops cimport L2_norm, cross, distance, dot, subtract cdef np.float64_t DETERMINANT_EPS = 1.0e-10 cdef np.float64_t INF = np.inf cdef extern from "platform_dep.h" nogil: double fmax(double x, double y) double fmin(double x, double y) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef np.int64_t ray_bbox_intersect(Ray* ray, const BBox bbox) noexcept nogil: ''' This returns an integer flag that indicates whether a ray and a bounding box intersect. It does not modify either either the ray or the box. ''' # https://tavianator.com/fast-branchless-raybounding-box-intersections/ cdef np.float64_t tmin = -INF cdef np.float64_t tmax = INF cdef np.int64_t i cdef np.float64_t t1, t2 for i in range(3): t1 = (bbox.left_edge[i] - ray.origin[i])*ray.inv_dir[i] t2 = (bbox.right_edge[i] - ray.origin[i])*ray.inv_dir[i] tmin = fmax(tmin, fmin(t1, t2)) tmax = fmin(tmax, fmax(t1, t2)) return tmax >= fmax(tmin, 0.0) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef np.int64_t ray_triangle_intersect(const void* primitives, const np.int64_t item, Ray* ray) noexcept nogil: ''' This returns an integer flag that indicates whether a triangle is the closest hit for the ray so far. If it is, the ray is updated to store the current triangle index and the distance to the first hit. The triangle used is the one indexed by "item" in the array of primitives. ''' # https://en.wikipedia.org/wiki/M%C3%B6ller%E2%80%93Trumbore_intersection_algorithm cdef Triangle tri = ( primitives)[item] # edge vectors cdef np.float64_t e1[3] cdef np.float64_t e2[3] subtract(tri.p1, tri.p0, e1) subtract(tri.p2, tri.p0, e2) cdef np.float64_t P[3] cross(ray.direction, e2, P) cdef np.float64_t det, inv_det det = dot(e1, P) if(det > -DETERMINANT_EPS and det < DETERMINANT_EPS): return False inv_det = 1.0 / det cdef np.float64_t T[3] subtract(ray.origin, tri.p0, T) cdef np.float64_t u = dot(T, P) * inv_det if(u < 0.0 or u > 1.0): return False cdef np.float64_t Q[3] cross(T, e1, Q) cdef np.float64_t v = dot(ray.direction, Q) * inv_det if(v < 0.0 or u + v > 1.0): return False cdef np.float64_t t = dot(e2, Q) * inv_det if(t > DETERMINANT_EPS and t < ray.t_far): ray.t_far = t ray.elem_id = tri.elem_id return True return False @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void triangle_centroid(const void *primitives, const np.int64_t item, np.float64_t[3] centroid) noexcept nogil: ''' This computes the centroid of the input triangle. The triangle used is the one indexed by "item" in the array of primitives. 
The result will be stored in the numpy array passed in as "centroid". ''' cdef Triangle tri = ( primitives)[item] cdef np.int64_t i for i in range(3): centroid[i] = (tri.p0[i] + tri.p1[i] + tri.p2[i]) / 3.0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void triangle_bbox(const void *primitives, const np.int64_t item, BBox* bbox) noexcept nogil: ''' This computes the bounding box of the input triangle. The triangle used is the one indexed by "item" in the array of primitives. The result will be stored in the input BBox. ''' cdef Triangle tri = ( primitives)[item] cdef np.int64_t i for i in range(3): bbox.left_edge[i] = fmin(fmin(tri.p0[i], tri.p1[i]), tri.p2[i]) bbox.right_edge[i] = fmax(fmax(tri.p0[i], tri.p1[i]), tri.p2[i]) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void patchSurfaceFunc(const cython.floating[8][3] verts, const cython.floating u, const cython.floating v, cython.floating[3] S) noexcept nogil: ''' This function is a parametric representation of the surface of a bi-quadratic patch. The inputs are the eight nodes that define a face of a 20-node hex element, and two parameters u and v that vary from -1 to 1 and tell you where you are on the surface of the patch. The output is the array 'S' that stores the physical (x, y, z) position of the corresponding point on the patch. This function is needed to compute the intersection of rays and bi-quadratic patches. ''' cdef int i for i in range(3): S[i] = 0.25*(1.0 - u)*(1.0 - v)*(-u - v - 1)*verts[0][i] + \ 0.25*(1.0 + u)*(1.0 - v)*( u - v - 1)*verts[1][i] + \ 0.25*(1.0 + u)*(1.0 + v)*( u + v - 1)*verts[2][i] + \ 0.25*(1.0 - u)*(1.0 + v)*(-u + v - 1)*verts[3][i] + \ 0.5*(1 - u)*(1 - v*v)*verts[4][i] + \ 0.5*(1 - u*u)*(1 - v)*verts[5][i] + \ 0.5*(1 + u)*(1 - v*v)*verts[6][i] + \ 0.5*(1 - u*u)*(1 + v)*verts[7][i] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void patchSurfaceDerivU(const cython.floating[8][3] verts, const cython.floating u, const cython.floating v, cython.floating[3] Su) noexcept nogil: ''' This function computes the derivative of the S(u, v) function w.r.t u. ''' cdef int i for i in range(3): Su[i] = (-0.25*(v - 1.0)*(u + v + 1) - 0.25*(u - 1.0)*(v - 1.0))*verts[0][i] + \ (-0.25*(v - 1.0)*(u - v - 1) - 0.25*(u + 1.0)*(v - 1.0))*verts[1][i] + \ ( 0.25*(v + 1.0)*(u + v - 1) + 0.25*(u + 1.0)*(v + 1.0))*verts[2][i] + \ ( 0.25*(v + 1.0)*(u - v + 1) + 0.25*(u - 1.0)*(v + 1.0))*verts[3][i] + \ 0.5*(v*v - 1.0)*verts[4][i] + u*(v - 1.0)*verts[5][i] - \ 0.5*(v*v - 1.0)*verts[6][i] - u*(v + 1.0)*verts[7][i] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void patchSurfaceDerivV(const cython.floating[8][3] verts, const cython.floating u, const cython.floating v, cython.floating[3] Sv) noexcept nogil: ''' This function computes the derivative of the S(u, v) function w.r.t v. 
''' cdef int i for i in range(3): Sv[i] = (-0.25*(u - 1.0)*(u + v + 1) - 0.25*(u - 1.0)*(v - 1.0))*verts[0][i] + \ (-0.25*(u + 1.0)*(u - v - 1) + 0.25*(u + 1.0)*(v - 1.0))*verts[1][i] + \ ( 0.25*(u + 1.0)*(u + v - 1) + 0.25*(u + 1.0)*(v + 1.0))*verts[2][i] + \ ( 0.25*(u - 1.0)*(u - v + 1) - 0.25*(u - 1.0)*(v + 1.0))*verts[3][i] + \ 0.5*(u*u - 1.0)*verts[5][i] + v*(u - 1.0)*verts[4][i] - \ 0.5*(u*u - 1.0)*verts[7][i] - v*(u + 1.0)*verts[6][i] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef RayHitData compute_patch_hit(cython.floating[8][3] verts, cython.floating[3] ray_origin, cython.floating[3] ray_direction) noexcept nogil: """ This function iteratively computes whether the bi-quadratic patch defined by the eight input nodes intersects with the given ray. Either way, information about the potential hit is stored in the returned RayHitData. """ # first we compute the two planes that define the ray. cdef cython.floating[3] n, N1, N2 cdef cython.floating A = dot(ray_direction, ray_direction) for i in range(3): n[i] = ray_direction[i] / A if ((fabs(n[0]) > fabs(n[1])) and (fabs(n[0]) > fabs(n[2]))): N1[0] = n[1] N1[1] =-n[0] N1[2] = 0.0 else: N1[0] = 0.0 N1[1] = n[2] N1[2] =-n[1] cross(N1, n, N2) cdef cython.floating d1 = -dot(N1, ray_origin) cdef cython.floating d2 = -dot(N2, ray_origin) # the initial guess is set to zero cdef cython.floating u = 0.0 cdef cython.floating v = 0.0 cdef cython.floating[3] S patchSurfaceFunc(verts, u, v, S) cdef cython.floating fu = dot(N1, S) + d1 cdef cython.floating fv = dot(N2, S) + d2 cdef cython.floating err = fmax(fabs(fu), fabs(fv)) # begin Newton iteration cdef cython.floating tol = 1.0e-5 cdef int iterations = 0 cdef int max_iter = 10 cdef cython.floating[3] Su cdef cython.floating[3] Sv cdef cython.floating J11, J12, J21, J22, det while ((err > tol) and (iterations < max_iter)): # compute the Jacobian patchSurfaceDerivU(verts, u, v, Su) patchSurfaceDerivV(verts, u, v, Sv) J11 = dot(N1, Su) J12 = dot(N1, Sv) J21 = dot(N2, Su) J22 = dot(N2, Sv) det = (J11*J22 - J12*J21) # update the u, v values u -= ( J22*fu - J12*fv) / det v -= (-J21*fu + J11*fv) / det patchSurfaceFunc(verts, u, v, S) fu = dot(N1, S) + d1 fv = dot(N2, S) + d2 err = fmax(fabs(fu), fabs(fv)) iterations += 1 # t is the distance along the ray to this hit cdef cython.floating t = distance(S, ray_origin) / L2_norm(ray_direction) # return hit data cdef RayHitData hd hd.u = u hd.v = v hd.t = t hd.converged = (iterations < max_iter) return hd @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef np.int64_t ray_patch_intersect(const void* primitives, const np.int64_t item, Ray* ray) noexcept nogil: ''' This returns an integer flag that indicates whether the given patch is the closest hit for the ray so far. If it is, the ray is updated to store the current primitive index and the distance to the first hit. The patch used is the one indexed by "item" in the array of primitives. 
''' cdef Patch patch = ( primitives)[item] cdef RayHitData hd = compute_patch_hit(patch.v, ray.origin, ray.direction) # only count this is it's the closest hit if (hd.t < ray.t_near or hd.t > ray.t_far): return False if (fabs(hd.u) <= 1.0 and fabs(hd.v) <= 1.0 and hd.converged): # we have a hit, so update ray information ray.t_far = hd.t ray.elem_id = patch.elem_id return True return False @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void patch_centroid(const void *primitives, const np.int64_t item, np.float64_t[3] centroid) noexcept nogil: ''' This computes the centroid of the input patch. The patch used is the one indexed by "item" in the array of primitives. The result will be stored in the numpy array passed in as "centroid". ''' cdef np.int64_t i, j cdef Patch patch = ( primitives)[item] for j in range(3): centroid[j] = 0.0 for i in range(8): for j in range(3): centroid[j] += patch.v[i][j] for j in range(3): centroid[j] /= 8.0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void patch_bbox(const void *primitives, const np.int64_t item, BBox* bbox) noexcept nogil: ''' This computes the bounding box of the input patch. The patch used is the one indexed by "item" in the array of primitives. The result will be stored in the input BBox. ''' cdef np.int64_t i, j cdef Patch patch = ( primitives)[item] for j in range(3): bbox.left_edge[j] = patch.v[0][j] bbox.right_edge[j] = patch.v[0][j] for i in range(1, 8): for j in range(3): bbox.left_edge[j] = fmin(bbox.left_edge[j], patch.v[i][j]) bbox.right_edge[j] = fmax(bbox.right_edge[j], patch.v[i][j]) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void tet_patchSurfaceFunc(const cython.floating[6][3] verts, const cython.floating u, const cython.floating v, cython.floating[3] S) noexcept nogil: cdef int i # Computes for canonical triangle coordinates for i in range(3): S[i] = (1.0 - 3.0*u + 2.0*u*u - 3.0*v + 2.0*v*v + 4.0*u*v)*verts[0][i] + \ (-u + 2.0*u*u)*verts[1][i] + \ (-v + 2.0*v*v)*verts[2][i] + \ (4.0*u - 4.0*u*u - 4.0*u*v)*verts[3][i] + \ (4.0*u*v)*verts[4][i] + \ (4.0*v - 4.0*v*v - 4.0*u*v)*verts[5][i] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void tet_patchSurfaceDerivU(const cython.floating[6][3] verts, const cython.floating u, const cython.floating v, cython.floating[3] Su) noexcept nogil: cdef int i # Computes for canonical triangle coordinates for i in range(3): Su[i] = (-3.0 + 4.0*u + 4.0*v)*verts[0][i] + \ (-1.0 + 4.0*u)*verts[1][i] + \ (4.0 - 8.0*u - 4.0*v)*verts[3][i] + \ (4.0*v)*verts[4][i] + \ (-4.0*v)*verts[5][i] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void tet_patchSurfaceDerivV(const cython.floating[6][3] verts, const cython.floating u, const cython.floating v, cython.floating[3] Sv) noexcept nogil: cdef int i # Computes for canonical triangle coordinates for i in range(3): Sv[i] = (-3.0 + 4.0*v + 4.0*u)*verts[0][i] + \ (-1.0 + 4.0*v)*verts[2][i] + \ (-4.0*u)*verts[3][i] + \ (4.0*u)*verts[4][i] + \ (4.0 - 8.0*v - 4.0*u)*verts[5][i] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef RayHitData compute_tet_patch_hit(cython.floating[6][3] verts, cython.floating[3] ray_origin, cython.floating[3] ray_direction) noexcept nogil: # first we compute the two planes that define the ray. 
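# ---------------------------------------------------------------------------
# Editor's sketch (not part of yt): the two-plane ray parameterization used
# here and in compute_patch_hit above. A ray is written as the intersection
# of two planes with normals N1 and N2 perpendicular to each other and to
# the ray; a surface point S lies on the ray iff N1.S + d1 = N2.S + d2 = 0,
# which is exactly what the Newton iteration drives to zero.
import numpy as np

def ray_planes(origin, direction):
    n = direction / np.dot(direction, direction)
    if abs(n[0]) > abs(n[1]) and abs(n[0]) > abs(n[2]):
        N1 = np.array([n[1], -n[0], 0.0])
    else:
        N1 = np.array([0.0, n[2], -n[1]])
    N2 = np.cross(N1, n)
    return N1, -np.dot(N1, origin), N2, -np.dot(N2, origin)

# Any point on the ray satisfies both plane equations (to round-off):
o, d = np.array([0.0, 1.0, 2.0]), np.array([0.3, -0.2, 0.9])
N1, d1, N2, d2 = ray_planes(o, d)
assert np.allclose([np.dot(N1, o + 2.5 * d) + d1,
                    np.dot(N2, o + 2.5 * d) + d2], 0.0)
# ---------------------------------------------------------------------------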
cdef cython.floating[3] n, N1, N2 cdef cython.floating A = dot(ray_direction, ray_direction) for i in range(3): n[i] = ray_direction[i] / A if ((fabs(n[0]) > fabs(n[1])) and (fabs(n[0]) > fabs(n[2]))): N1[0] = n[1] N1[1] =-n[0] N1[2] = 0.0 else: N1[0] = 0.0 N1[1] = n[2] N1[2] =-n[1] cross(N1, n, N2) cdef cython.floating d1 = -dot(N1, ray_origin) cdef cython.floating d2 = -dot(N2, ray_origin) # the initial guess is set to zero cdef cython.floating u = 0.0 cdef cython.floating v = 0.0 cdef cython.floating[3] S tet_patchSurfaceFunc(verts, u, v, S) cdef cython.floating fu = dot(N1, S) + d1 cdef cython.floating fv = dot(N2, S) + d2 cdef cython.floating err = fmax(fabs(fu), fabs(fv)) # begin Newton iteration cdef cython.floating tol = 1.0e-5 cdef int iterations = 0 cdef int max_iter = 10 cdef cython.floating[3] Su cdef cython.floating[3] Sv cdef cython.floating J11, J12, J21, J22, det while ((err > tol) and (iterations < max_iter)): # compute the Jacobian tet_patchSurfaceDerivU(verts, u, v, Su) tet_patchSurfaceDerivV(verts, u, v, Sv) J11 = dot(N1, Su) J12 = dot(N1, Sv) J21 = dot(N2, Su) J22 = dot(N2, Sv) det = (J11*J22 - J12*J21) # update the u, v values u -= ( J22*fu - J12*fv) / det v -= (-J21*fu + J11*fv) / det tet_patchSurfaceFunc(verts, u, v, S) fu = dot(N1, S) + d1 fv = dot(N2, S) + d2 err = fmax(fabs(fu), fabs(fv)) iterations += 1 # t is the distance along the ray to this hit cdef cython.floating t = distance(S, ray_origin) / L2_norm(ray_direction) # return hit data cdef RayHitData hd hd.u = u hd.v = v hd.t = t hd.converged = (iterations < max_iter) return hd @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef np.int64_t ray_tet_patch_intersect(const void* primitives, const np.int64_t item, Ray* ray) noexcept nogil: cdef TetPatch tet_patch = ( primitives)[item] cdef RayHitData hd = compute_tet_patch_hit(tet_patch.v, ray.origin, ray.direction) # only count this is it's the closest hit if (hd.t < ray.t_near or hd.t > ray.t_far): return False if (0 <= hd.u and 0 <= hd.v and hd.u + hd.v <= 1.0 and hd.converged): # we have a hit, so update ray information ray.t_far = hd.t ray.elem_id = tet_patch.elem_id return True return False @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void tet_patch_centroid(const void *primitives, const np.int64_t item, np.float64_t[3] centroid) noexcept nogil: cdef np.int64_t i, j cdef TetPatch tet_patch = ( primitives)[item] for j in range(3): centroid[j] = 0.0 for i in range(6): for j in range(3): centroid[j] += tet_patch.v[i][j] for j in range(3): centroid[j] /= 6.0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void tet_patch_bbox(const void *primitives, const np.int64_t item, BBox* bbox) noexcept nogil: cdef np.int64_t i, j cdef TetPatch tet_patch = ( primitives)[item] for j in range(3): bbox.left_edge[j] = tet_patch.v[0][j] bbox.right_edge[j] = tet_patch.v[0][j] for i in range(1, 6): for j in range(3): bbox.left_edge[j] = fmin(bbox.left_edge[j], tet_patch.v[i][j]) bbox.right_edge[j] = fmax(bbox.right_edge[j], tet_patch.v[i][j]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/quad_tree.pyx0000644000175100001770000005325214714401662017524 0ustar00runnerdocker # distutils: libraries = STD_LIBS """ A refine-by-two AMR-specific quadtree """ import numpy as np cimport cython cimport numpy as np from libc.stdlib cimport free, malloc from yt.utilities.lib.fp_utils cimport fmax, fmin from 
yt.utilities.exceptions import YTIntDomainOverflow cdef extern from "platform_dep.h": # NOTE that size_t might not be int void *alloca(int) cdef struct QuadTreeNode: np.float64_t *val np.float64_t weight_val np.int64_t pos[2] QuadTreeNode *children[2][2] ctypedef void QTN_combine(QuadTreeNode *self, np.float64_t *val, np.float64_t weight_val, int nvals) cdef void QTN_add_value(QuadTreeNode *self, np.float64_t *val, np.float64_t weight_val, int nvals): cdef int i for i in range(nvals): self.val[i] += val[i] self.weight_val += weight_val cdef void QTN_max_value(QuadTreeNode *self, np.float64_t *val, np.float64_t weight_val, int nvals): cdef int i for i in range(nvals): self.val[i] = fmax(val[i], self.val[i]) self.weight_val = 1.0 cdef void QTN_min_value(QuadTreeNode *self, np.float64_t *val, np.float64_t weight_val, int nvals): cdef int i #cdef np.float64_t *big_num = 1.0 #big_num = 1.0 #1e10 for i in range(nvals): if self.val[i] == 0: self.val[i] = 1e50 # end if self.val[i] = fmin(val[i], self.val[i]) self.weight_val = 1.0 cdef void QTN_refine(QuadTreeNode *self, int nvals): cdef int i, j cdef np.int64_t npos[2] cdef np.float64_t *tvals = alloca( sizeof(np.float64_t) * nvals) for i in range(nvals): tvals[i] = 0.0 for i in range(2): npos[0] = self.pos[0] * 2 + i for j in range(2): npos[1] = self.pos[1] * 2 + j # We have to be careful with allocation... self.children[i][j] = QTN_initialize( npos, nvals, tvals, 0.0) cdef QuadTreeNode *QTN_initialize(np.int64_t pos[2], int nvals, np.float64_t *val, np.float64_t weight_val): cdef QuadTreeNode *node cdef int i, j node = malloc(sizeof(QuadTreeNode)) node.pos[0] = pos[0] node.pos[1] = pos[1] node.val = malloc( nvals * sizeof(np.float64_t)) for i in range(2): for j in range(2): node.children[i][j] = NULL if val != NULL: for i in range(nvals): node.val[i] = val[i] node.weight_val = weight_val return node cdef void QTN_free(QuadTreeNode *node): cdef int i, j for i in range(2): for j in range(2): if node.children[i][j] == NULL: continue QTN_free(node.children[i][j]) free(node.val) free(node) cdef class QuadTree: cdef int nvals cdef QuadTreeNode ***root_nodes cdef public np.int64_t top_grid_dims[2] cdef int merged cdef int num_cells cdef QTN_combine *combine cdef np.float64_t bounds[4] cdef np.float64_t dds[2] cdef np.int64_t last_dims[2] cdef int max_level def __cinit__(self, np.ndarray[np.int64_t, ndim=1] top_grid_dims, int nvals, bounds, method = "integrate"): if method == "integrate": self.combine = QTN_add_value elif method == "mip" or \ method == "max": self.combine = QTN_max_value elif method == "min": self.combine = QTN_min_value else: raise NotImplementedError(f"Unknown projection method {self.method!r}") self.merged = 1 self.max_level = 0 cdef int i, j cdef np.int64_t pos[2] cdef np.float64_t *vals = malloc( sizeof(np.float64_t)*nvals) cdef np.float64_t weight_val = 0.0 self.nvals = nvals for i in range(nvals): vals[i] = 0.0 for i in range(4): self.bounds[i] = bounds[i] self.top_grid_dims[0] = top_grid_dims[0] self.top_grid_dims[1] = top_grid_dims[1] self.dds[0] = (self.bounds[1] - self.bounds[0])/self.top_grid_dims[0] self.dds[1] = (self.bounds[3] - self.bounds[2])/self.top_grid_dims[1] self.root_nodes = \ malloc(sizeof(QuadTreeNode **) * top_grid_dims[0]) # We initialize our root values to 0.0. 
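# ---------------------------------------------------------------------------
# Editor's sketch (not part of yt): how a cell index at a given level maps
# onto this refine-by-two tree. The root cell of index i at `level` is
# i >> level (cf. find_on_root_level), and the bits of i read from the top
# pick one child per step down, which is the index math add_to_position
# uses below. The name descend_path is hypothetical.
def descend_path(pos_x, pos_y, level):
    root = (pos_x >> level, pos_y >> level)
    path = [((pos_x >> (level - L - 1)) & 1,
             (pos_y >> (level - L - 1)) & 1) for L in range(level)]
    return root, path

# Cell (5, 2) at level 3 lives under root (0, 0); the children chosen are
# the top-down bits of 5 = 0b101 paired with those of 2 = 0b010.
assert descend_path(5, 2, 3) == ((0, 0), [(1, 0), (0, 1), (1, 0)])
# ---------------------------------------------------------------------------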
for i in range(top_grid_dims[0]): pos[0] = i self.root_nodes[i] = \ malloc(sizeof(QuadTreeNode *) * top_grid_dims[1]) for j in range(top_grid_dims[1]): pos[1] = j self.root_nodes[i][j] = QTN_initialize( pos, nvals, vals, weight_val) self.num_cells = self.top_grid_dims[0] * self.top_grid_dims[1] free(vals) cdef int count_total_cells(self, QuadTreeNode *root): cdef int total = 0 cdef int i, j if root.children[0][0] == NULL: return 1 for i in range(2): for j in range(2): total += self.count_total_cells(root.children[i][j]) return total + 1 @cython.boundscheck(False) @cython.wraparound(False) cdef int fill_buffer(self, QuadTreeNode *root, int curpos, np.ndarray[np.int32_t, ndim=1] refined, np.ndarray[np.float64_t, ndim=2] values, np.ndarray[np.float64_t, ndim=1] wval): cdef int i, j for i in range(self.nvals): values[curpos, i] = root.val[i] wval[curpos] = root.weight_val if root.children[0][0] != NULL: refined[curpos] = 1 else: return curpos+1 curpos += 1 for i in range(2): for j in range(2): curpos = self.fill_buffer(root.children[i][j], curpos, refined, values, wval) return curpos @cython.boundscheck(False) @cython.wraparound(False) cdef int unfill_buffer(self, QuadTreeNode *root, int curpos, np.ndarray[np.int32_t, ndim=1] refined, np.ndarray[np.float64_t, ndim=2] values, np.ndarray[np.float64_t, ndim=1] wval): cdef int i, j for i in range(self.nvals): root.val[i] = values[curpos, i] root.weight_val = wval[curpos] if refined[curpos] == 0: return curpos+1 curpos += 1 cdef QuadTreeNode *child cdef np.int64_t pos[2] for i in range(2): for j in range(2): pos[0] = root.pos[0]*2 + i pos[1] = root.pos[1]*2 + j child = QTN_initialize(pos, self.nvals, NULL, 0.0) root.children[i][j] = child curpos = self.unfill_buffer(child, curpos, refined, values, wval) return curpos @cython.boundscheck(False) @cython.wraparound(False) def frombuffer(self, np.ndarray[np.int32_t, ndim=1] refined, np.ndarray[np.float64_t, ndim=2] values, np.ndarray[np.float64_t, ndim=1] wval, method): if method == "mip" or method == "max" or method == -1: self.merged = -1 if method == "min" or method == -2: self.merged = -2 elif method == "integrate" or method == 1: self.merged = 1 cdef int curpos = 0 self.num_cells = wval.shape[0] for i in range(self.top_grid_dims[0]): for j in range(self.top_grid_dims[1]): curpos = self.unfill_buffer(self.root_nodes[i][j], curpos, refined, values, wval) @cython.boundscheck(False) @cython.wraparound(False) def tobuffer(self): cdef int total = self.num_cells # We now have four buffers: # Refined or not (total,) int32 # Values in each node (total, nvals) float64 # Weight values in each node (total,) float64 cdef np.ndarray[np.int32_t, ndim=1] refined refined = np.zeros(total, dtype='int32') cdef np.ndarray[np.float64_t, ndim=2] values values = np.zeros((total, self.nvals), dtype='float64') cdef np.ndarray[np.float64_t, ndim=1] wval wval = np.zeros(total, dtype='float64') cdef int curpos = 0 for i in range(self.top_grid_dims[0]): for j in range(self.top_grid_dims[1]): curpos = self.fill_buffer(self.root_nodes[i][j], curpos, refined, values, wval) return (refined, values, wval) def get_args(self): return (self.top_grid_dims[0], self.top_grid_dims[1], self.nvals) cdef int add_to_position(self, int level, np.int64_t pos[2], np.float64_t *val, np.float64_t weight_val, int skip = 0): cdef int i, j, L cdef QuadTreeNode *node node = self.find_on_root_level(pos, level) if node == NULL: return -1 if level > self.max_level: self.max_level = level for L in range(level): if node.children[0][0] == NULL: 
QTN_refine(node, self.nvals) self.num_cells += 4 # Maybe we should use bitwise operators? i = (pos[0] >> (level - L - 1)) & 1 j = (pos[1] >> (level - L - 1)) & 1 node = node.children[i][j] if skip == 1: return 0 self.combine(node, val, weight_val, self.nvals) return 0 @cython.cdivision(True) cdef QuadTreeNode *find_on_root_level(self, np.int64_t pos[2], int level): # We need this because the root level won't just have four children # So we find on the root level, then we traverse the tree. cdef np.int64_t i, j i = pos[0] >> level j = pos[1] >> level if i >= self.top_grid_dims[0] or i < 0 or \ j >= self.top_grid_dims[1] or j < 0: self.last_dims[0] = i self.last_dims[1] = j return NULL return self.root_nodes[i][j] @cython.boundscheck(False) @cython.wraparound(False) def add_array_to_tree(self, int level, np.ndarray[np.int64_t, ndim=1] pxs, np.ndarray[np.int64_t, ndim=1] pys, np.ndarray[np.float64_t, ndim=2] pvals, np.ndarray[np.float64_t, ndim=1] pweight_vals, int skip = 0): cdef int p cdef np.float64_t *vals cdef np.float64_t *data = pvals.data cdef np.int64_t pos[2] for p in range(pxs.shape[0]): vals = data + self.nvals*p pos[0] = pxs[p] pos[1] = pys[p] self.add_to_position(level, pos, vals, pweight_vals[p], skip) return @cython.boundscheck(False) @cython.wraparound(False) def add_chunk_to_tree(self, np.ndarray[np.int64_t, ndim=1] pxs, np.ndarray[np.int64_t, ndim=1] pys, np.ndarray[np.int64_t, ndim=1] level, np.ndarray[np.float64_t, ndim=2] pvals, np.ndarray[np.float64_t, ndim=1] pweight_vals): cdef int ps = pxs.shape[0] cdef int p, rv cdef np.float64_t *vals cdef np.float64_t *data = pvals.data cdef np.int64_t pos[2] for p in range(ps): vals = data + self.nvals*p pos[0] = pxs[p] pos[1] = pys[p] rv = self.add_to_position(level[p], pos, vals, pweight_vals[p]) if rv == -1: raise YTIntDomainOverflow( (self.last_dims[0], self.last_dims[1]), (self.top_grid_dims[0], self.top_grid_dims[1])) return @cython.boundscheck(False) @cython.wraparound(False) def initialize_chunk(self, np.ndarray[np.int64_t, ndim=1] pxs, np.ndarray[np.int64_t, ndim=1] pys, np.ndarray[np.int64_t, ndim=1] level): cdef int num = pxs.shape[0] cdef int p, rv cdef np.int64_t pos[2] for p in range(num): pos[0] = pxs[p] pos[1] = pys[p] rv = self.add_to_position(level[p], pos, NULL, 0.0, 1) if rv == -1: raise YTIntDomainOverflow( (self.last_dims[0], self.last_dims[1]), (self.top_grid_dims[0], self.top_grid_dims[1])) return @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def get_all(self, int count_only = 0, int method = 1): cdef int i, j, vi cdef int total = 0 self.merged = method for i in range(self.top_grid_dims[0]): for j in range(self.top_grid_dims[1]): total += self.count(self.root_nodes[i][j]) if count_only: return total # Allocate our array cdef np.ndarray[np.int64_t, ndim=1] oix cdef np.ndarray[np.int64_t, ndim=1] oiy cdef np.ndarray[np.int64_t, ndim=1] oires cdef np.ndarray[np.float64_t, ndim=2] nvals cdef np.ndarray[np.float64_t, ndim=1] nwvals oix = np.zeros(total, dtype='int64') oiy = np.zeros(total, dtype='int64') oires = np.zeros(total, dtype='int64') nvals = np.zeros( (total, self.nvals), dtype='float64') nwvals = np.zeros( total, dtype='float64') cdef np.int64_t curpos = 0 cdef np.int64_t *ix = oix.data cdef np.int64_t *iy = oiy.data cdef np.int64_t *ires = oires.data cdef np.float64_t *vdata = nvals.data cdef np.float64_t *wdata = nwvals.data cdef np.float64_t wtoadd cdef np.float64_t *vtoadd = malloc( sizeof(np.float64_t)*self.nvals) for i in range(self.top_grid_dims[0]): for j in 
range(self.top_grid_dims[1]): for vi in range(self.nvals): vtoadd[vi] = 0.0 wtoadd = 0.0 curpos += self.fill(self.root_nodes[i][j], curpos, ix, iy, ires, vdata, wdata, vtoadd, wtoadd, 0) free(vtoadd) return oix, oiy, oires, nvals, nwvals cdef int count(self, QuadTreeNode *node): cdef int i, j cdef int count = 0 if node.children[0][0] == NULL: return 1 for i in range(2): for j in range(2): count += self.count(node.children[i][j]) return count @cython.cdivision(True) cdef int fill(self, QuadTreeNode *node, np.int64_t curpos, np.int64_t *ix, np.int64_t *iy, np.int64_t *ires, np.float64_t *vdata, np.float64_t *wdata, np.float64_t *vtoadd, np.float64_t wtoadd, np.int64_t level): cdef int i, j, n if node.children[0][0] == NULL: if self.merged == -1: for i in range(self.nvals): vdata[self.nvals * curpos + i] = fmax(node.val[i], vtoadd[i]) elif self.merged == -2: for i in range(self.nvals): vdata[self.nvals * curpos + i] = fmin(node.val[i], vtoadd[i]) wdata[curpos] = 1.0 else: for i in range(self.nvals): vdata[self.nvals * curpos + i] = node.val[i] + vtoadd[i] wdata[curpos] = node.weight_val + wtoadd ires[curpos] = level ix[curpos] = node.pos[0] iy[curpos] = node.pos[1] return 1 cdef np.int64_t added = 0 cdef np.float64_t *vorig vorig = malloc(sizeof(np.float64_t) * self.nvals) if self.merged == 1: for i in range(self.nvals): vorig[i] = vtoadd[i] vtoadd[i] += node.val[i] wtoadd += node.weight_val elif self.merged == -1 or self.merged == -2: for i in range(self.nvals): vtoadd[i] = node.val[i] for i in range(2): for j in range(2): if self.merged == -1: for n in range(self.nvals): vtoadd[n] = node.val[n] added += self.fill(node.children[i][j], curpos + added, ix, iy, ires, vdata, wdata, vtoadd, wtoadd, level + 1) if self.merged == 1: for i in range(self.nvals): vtoadd[i] = vorig[i] wtoadd -= node.weight_val free(vorig) return added @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def fill_image(self, np.ndarray[np.float64_t, ndim=2] buffer, _bounds, int val_index = 0, int weighted = 0): cdef np.float64_t pos[2] cdef np.float64_t dds[2] cdef int nn[2] cdef int i, j cdef np.float64_t bounds[4] cdef np.float64_t opos[4] cdef np.float64_t weight = 0.0, value = 0.0 cdef np.float64_t *wval = NULL if weighted == 1: wval = &weight for i in range(4): bounds[i] = _bounds[i] for i in range(2): nn[i] = buffer.shape[i] dds[i] = (bounds[i*2 + 1] - bounds[i*2])/nn[i] pos[0] = bounds[0] opos[0] = opos[1] = pos[0] + dds[0] for i in range(nn[0]): pos[1] = bounds[2] opos[2] = opos[3] = pos[1] + dds[1] for j in range(nn[1]): # We start at level zero. In the future we could optimize by # retaining oct information from previous cells. 
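                # opos caches the bounds of the cell found on the previous
                # pass; we only re-walk the tree (via find_value_at_pos) when
                # the sample point has left that cell.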
                if not ((opos[0] <= pos[0] <= opos[1]) and
                        (opos[2] <= pos[1] <= opos[3])):
                    value = self.find_value_at_pos(pos, val_index, opos, wval)
                buffer[i,j] = value
                if weighted == 1:
                    buffer[i,j] /= weight
                pos[1] += dds[1]
            pos[0] += dds[0]

    @cython.boundscheck(False)
    @cython.wraparound(False)
    @cython.cdivision(True)
    cdef np.float64_t find_value_at_pos(self, np.float64_t pos[2],
                                        int val_index, np.float64_t opos[4],
                                        np.float64_t *wval = NULL):
        cdef int i
        cdef np.int64_t ind[2]
        cdef np.float64_t cp[2]
        cdef np.float64_t dds[2]
        cdef np.float64_t value = 0.0
        cdef np.float64_t weight = 0.0
        cdef QuadTreeNode *cur
        for i in range(2):
            ind[i] = <np.int64_t> (pos[i]/self.dds[i])
            cp[i] = (ind[i] + 0.5) * self.dds[i]
            dds[i] = self.dds[i]
        cur = self.root_nodes[ind[0]][ind[1]]
        value += cur.val[val_index]
        weight += cur.weight_val
        while cur.children[0][0] != NULL:
            for i in range(2):
                # Note: below we offset cp by half a dx to recenter it
                # after descending to the next level.
                dds[i] = dds[i] * 0.5
                if cp[i] >= pos[i]:
                    ind[i] = 0
                    cp[i] -= dds[i] * 0.5
                else:
                    ind[i] = 1
                    cp[i] += dds[i] * 0.5
            cur = cur.children[ind[0]][ind[1]]
            value += cur.val[val_index]
            weight += cur.weight_val
        opos[0] = cp[0] - dds[0] * 0.5
        opos[1] = cp[0] + dds[0] * 0.5
        opos[2] = cp[1] - dds[1] * 0.5
        opos[3] = cp[1] + dds[1] * 0.5
        if wval != NULL:
            wval[0] = weight
        return value

    def __dealloc__(self):
        cdef int i, j
        for i in range(self.top_grid_dims[0]):
            for j in range(self.top_grid_dims[1]):
                QTN_free(self.root_nodes[i][j])
            free(self.root_nodes[i])
        free(self.root_nodes)

cdef void QTN_merge_nodes(QuadTreeNode *n1, QuadTreeNode *n2,
                          int nvals, QTN_combine *func):
    # We have four choices when merging nodes.
    # 1. If both nodes have no refinement, then we add values of n2 to n1.
    # 2. If both have refinement, we call QTN_merge_nodes on all four children.
    # 3. If n2 has refinement and n1 does not, we detach n2's children and
    #    attach them to n1.
    # 4. If n1 has refinement and n2 does not, we add the value of n2 to n1.
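    #
    # In every case the node values themselves are combined up front by
    # func(); the branches below only need to reconcile the child pointers.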
    cdef int i, j
    func(n1, n2.val, n2.weight_val, nvals)
    if n1.children[0][0] == n2.children[0][0] == NULL:
        pass
    elif n1.children[0][0] != NULL and n2.children[0][0] != NULL:
        for i in range(2):
            for j in range(2):
                QTN_merge_nodes(n1.children[i][j], n2.children[i][j],
                                nvals, func)
    elif n1.children[0][0] == NULL and n2.children[0][0] != NULL:
        for i in range(2):
            for j in range(2):
                n1.children[i][j] = n2.children[i][j]
                n2.children[i][j] = NULL
    elif n1.children[0][0] != NULL and n2.children[0][0] == NULL:
        pass
    else:
        raise RuntimeError

def merge_quadtrees(QuadTree qt1, QuadTree qt2, method = 1):
    cdef int i, j
    qt1.num_cells = 0
    cdef QTN_combine *func
    if method == 1:
        qt1.merged = 1
        func = QTN_add_value
    elif method == -1:
        qt1.merged = -1
        func = QTN_max_value
    elif method == -2:
        qt1.merged = -2
        func = QTN_min_value
    else:
        raise NotImplementedError
    if qt1.merged != 0 or qt2.merged != 0:
        assert(qt1.merged == qt2.merged)
    for i in range(qt1.top_grid_dims[0]):
        for j in range(qt1.top_grid_dims[1]):
            QTN_merge_nodes(qt1.root_nodes[i][j], qt2.root_nodes[i][j],
                            qt1.nvals, func)
            qt1.num_cells += qt1.count_total_cells(
                qt1.root_nodes[i][j])
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/ragged_arrays.pyx0000644000175100001770000000440714714401662020363 0ustar00runnerdocker"""
Some simple operations for operating on ragged arrays
"""

import numpy as np

cimport cython
cimport numpy as np

cdef fused numpy_dt:
    np.float32_t
    np.float64_t
    np.int32_t
    np.int64_t

cdef numpy_dt r_min(numpy_dt a, numpy_dt b):
    if a < b: return a
    return b

cdef numpy_dt r_max(numpy_dt a, numpy_dt b):
    if a > b: return a
    return b

cdef numpy_dt r_add(numpy_dt a, numpy_dt b):
    return a + b

cdef numpy_dt r_subtract(numpy_dt a, numpy_dt b):
    return a - b

cdef numpy_dt r_multiply(numpy_dt a, numpy_dt b):
    return a * b

@cython.cdivision(True)
cdef numpy_dt r_divide(numpy_dt a, numpy_dt b):
    return a / b

def index_unop(np.ndarray[numpy_dt, ndim=1] values,
               np.ndarray[np.int64_t, ndim=1] indices,
               np.ndarray[np.int64_t, ndim=1] sizes,
               operation):
    cdef numpy_dt mi, ma
    if numpy_dt == np.float32_t:
        dt = "float32"
        mi = np.finfo(dt).min
        ma = np.finfo(dt).max
    elif numpy_dt == np.float64_t:
        dt = "float64"
        mi = np.finfo(dt).min
        ma = np.finfo(dt).max
    elif numpy_dt == np.int32_t:
        dt = "int32"
        mi = np.iinfo(dt).min
        ma = np.iinfo(dt).max
    elif numpy_dt == np.int64_t:
        dt = "int64"
        mi = np.iinfo(dt).min
        ma = np.iinfo(dt).max
    cdef np.ndarray[numpy_dt] out_values = np.zeros(sizes.size, dtype=dt)
    cdef numpy_dt (*func)(numpy_dt a, numpy_dt b)
    # Now we figure out our function.  At present, we only allow reductions
    # that are commutative and associative (sum, prod, max, min), because
    # they are easy to bootstrap from an identity element.
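    #
    # Illustrative usage sketch (an added example, not part of the original
    # source): ``indices`` selects entries of ``values`` and ``sizes``
    # splits that selection into consecutive groups, one reduction each:
    #     values  = np.array([1., 2., 3., 4., 5.])
    #     indices = np.array([0, 1, 2, 3, 4], dtype="int64")
    #     sizes   = np.array([2, 3], dtype="int64")
    #     index_unop(values, indices, sizes, "sum")  # -> [3., 12.]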
cdef numpy_dt ival, val if operation == "sum": ival = 0 func = r_add elif operation == "prod": ival = 1 func = r_multiply elif operation == "max": ival = mi func = r_max elif operation == "min": ival = ma func = r_min else: raise NotImplementedError cdef np.int64_t i, ind_ind, ind_arr ind_ind = 0 for i in range(sizes.size): # Each entry in sizes is the size of the array val = ival for _ in range(sizes[i]): ind_arr = indices[ind_ind] val = func(val, values[ind_arr]) ind_ind += 1 out_values[i] = val return out_values ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.403154 yt-4.4.0/yt/utilities/lib/tests/0000755000175100001770000000000014714401715016143 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/tests/__init__.py0000644000175100001770000000000014714401662020243 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/tests/test_allocation_container.py0000644000175100001770000000121014714401662023736 0ustar00runnerdockerfrom numpy.testing import assert_array_equal, assert_equal from yt.utilities.lib.allocation_container import BitmaskPool def test_bitmask_pool(): bmp = BitmaskPool() assert_equal(len(bmp), 0) bmp.append(100) assert_equal(len(bmp), 1) assert_equal(bmp[0].size, 100) bmp.append(200) assert_equal(len(bmp), 2) assert_equal(bmp[0].size, 100) assert_equal(bmp[1].size, 200) assert_equal(sum(_.size for _ in bmp.to_arrays()), 300) arrs = bmp.to_arrays() assert_equal(arrs[0].size, 100) assert_equal(arrs[1].size, 200) arrs[0][:] = 1 arrs = bmp.to_arrays() assert_array_equal(arrs[0], 1) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/tests/test_alt_ray_tracers.py0000644000175100001770000000656214714401662022744 0ustar00runnerdocker"""Tests for non-cartesian ray tracers.""" import numpy as np from numpy.testing import assert_equal from yt.testing import amrspace from yt.utilities.lib.alt_ray_tracers import _cyl2cart, cylindrical_ray_trace left_grid = right_grid = amr_levels = center_grid = data = None old_settings = None def setup_module(): # set up some sample cylindrical grid data, radiating out from center global left_grid, right_grid, amr_levels, center_grid, data, old_settings old_settings = np.geterr() np.seterr(all="ignore") l1, r1, lvl1 = amrspace([0.0, 1.0, 0.0, -1.0, 0.0, 2 * np.pi], levels=(7, 7, 0)) l2, r2, lvl2 = amrspace([0.0, 1.0, 0.0, 1.0, 0.0, 2 * np.pi], levels=(7, 7, 0)) left_grid = np.concatenate([l1, l2], axis=0) right_grid = np.concatenate([r1, r2], axis=0) amr_levels = np.concatenate([lvl1, lvl2], axis=0) center_grid = (left_grid + right_grid) / 2.0 data = np.cos(np.sqrt(np.sum(center_grid[:, :2] ** 2, axis=1))) ** 2 # cos^2 def teardown_module(): np.seterr(**old_settings) point_pairs = np.array( [ # p1 p2 ([0.5, -1.0, 0.0], [1.0, 1.0, 0.75 * np.pi]), # Everything different ([0.5, -1.0, 0.0], [0.5, 1.0, 0.75 * np.pi]), # r same ([0.5, -1.0, 0.0], [0.5, 1.0, np.pi]), # diagonal through z-axis # straight through z-axis ([0.5, 0.0, 0.0], [0.5, 0.0, np.pi]), # ([0.5, 0.0, np.pi*3/2 + 0.0], [0.5, 0.0, np.pi*3/2 + np.pi]), # ([0.5, 0.0, np.pi/2 + 0.0], [0.5, 0.0, np.pi/2 + np.pi]), # ([0.5, 0.0, np.pi + 0.0], [0.5, 0.0, np.pi + np.pi]), # const z, not through z-axis ([0.5, 0.1, 0.0], [0.5, 0.1, 0.75 * np.pi]), # ([0.5, 0.1, np.pi + 0.0], [0.5, 0.1, np.pi + 0.75*np.pi]), # ([0.5, 0.1, 
np.pi*3/2 + 0.0], [0.5, 0.1, np.pi*3/2 + 0.75*np.pi]), # ([0.5, 0.1, np.pi/2 + 0.0], [0.5, 0.1, np.pi/2 + 0.75*np.pi]), # ([0.5, 0.1, 2*np.pi + 0.0], [0.5, 0.1, 2*np.pi + 0.75*np.pi]), # ([0.5, 0.1, np.pi/4 + 0.0], [0.5, 0.1, np.pi/4 + 0.75*np.pi]), # ([0.5, 0.1, np.pi*3/8 + 0.0], [0.5, 0.1, np.pi*3/8 + 0.75*np.pi]), ( [0.5, -1.0, 0.75 * np.pi], [1.0, 1.0, 0.75 * np.pi], ), # r,z different - theta same ([0.5, -1.0, 0.75 * np.pi], [0.5, 1.0, 0.75 * np.pi]), # z-axis parallel ([0.0, -1.0, 0.0], [0.0, 1.0, 0.0]), # z-axis itself ] ) def check_monotonic_inc(arr): assert np.all(0.0 <= (arr[1:] - arr[:-1])) def check_bounds(arr, blower, bupper): assert np.all(blower <= arr) assert np.all(bupper >= arr) def test_cylindrical_ray_trace(): for pair in point_pairs: p1, p2 = pair p1cart, p2cart = _cyl2cart(pair) pathlen = np.sqrt(np.sum((p2cart - p1cart) ** 2)) t, s, rztheta, inds = cylindrical_ray_trace(p1, p2, left_grid, right_grid) npoints = len(t) check_monotonic_inc(t) assert 0.0 <= t[0] assert t[-1] <= 1.0 check_monotonic_inc(s) assert 0.0 <= s[0] assert s[-1] <= pathlen assert_equal(npoints, len(s)) assert_equal((npoints, 3), rztheta.shape) check_bounds(rztheta[:, 0], 0.0, 1.0) check_bounds(rztheta[:, 1], -1.0, 1.0) check_bounds(rztheta[:, 2], 0.0, 2 * np.pi) check_monotonic_inc(rztheta[:, 2]) assert_equal(npoints, len(inds)) check_bounds(inds, 0, len(left_grid) - 1) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/tests/test_bitarray.py0000644000175100001770000001316714714401662021402 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_array_equal, assert_equal import yt.utilities.lib.bitarray as ba def test_inout_bitarray(): # Check that we can do it for bitarrays that are funny-shaped rng = np.random.default_rng() for i in range(7): # Check we can feed in an array arr_in = rng.random(32**3 + i) > 0.5 b = ba.bitarray(arr=arr_in) if i > 0: assert_equal(b.ibuf.size, (32**3) / 8.0 + 1) arr_out = b.as_bool_array() assert_equal(arr_in, arr_out) # Let's check we can do it without feeding it at first b = ba.bitarray(size=arr_in.size) b.set_from_array(arr_in) arr_out = b.as_bool_array() assert_equal(arr_in, arr_out) # Try a big array arr_in = rng.random(32**3 + i) > 0.5 b = ba.bitarray(arr=arr_in) arr_out = b.as_bool_array() assert_equal(arr_in, arr_out) assert_equal(b.count(), arr_in.sum()) # Let's check we can do something interesting. 
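    # Namely: bitwise ops on the packed uint8 buffers (ibuf) should agree
    # with the corresponding boolean ops on the unpacked arrays, for &, |
    # and ^ alike.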
arr_in1 = rng.random(32**3) > 0.5 arr_in2 = rng.random(32**3) > 0.5 b1 = ba.bitarray(arr=arr_in1) b2 = ba.bitarray(arr=arr_in2) b3 = ba.bitarray(arr=(arr_in1 & arr_in2)) assert_equal((b1.ibuf & b2.ibuf), b3.ibuf) assert_equal(b1.count(), arr_in1.sum()) assert_equal(b2.count(), arr_in2.sum()) # Let's check the logical and operation b4 = b1.logical_and(b2) assert_equal(b4.count(), b3.count()) assert_array_equal(b4.as_bool_array(), b3.as_bool_array()) b5 = b1 & b2 assert_equal(b5.count(), b3.count()) assert_array_equal(b5.as_bool_array(), b3.as_bool_array()) b1 &= b2 assert_equal(b1.count(), b4.count()) assert_array_equal(b1.as_bool_array(), b4.as_bool_array()) # Repeat this, but with the logical or operators b1 = ba.bitarray(arr=arr_in1) b2 = ba.bitarray(arr=arr_in2) b3 = ba.bitarray(arr=(arr_in1 | arr_in2)) assert_equal((b1.ibuf | b2.ibuf), b3.ibuf) assert_equal(b1.count(), arr_in1.sum()) assert_equal(b2.count(), arr_in2.sum()) # Let's check the logical and operation b4 = b1.logical_or(b2) assert_equal(b4.count(), b3.count()) assert_array_equal(b4.as_bool_array(), b3.as_bool_array()) b5 = b1 | b2 assert_equal(b5.count(), b3.count()) assert_array_equal(b5.as_bool_array(), b3.as_bool_array()) b1 |= b2 assert_equal(b1.count(), b4.count()) assert_array_equal(b1.as_bool_array(), b4.as_bool_array()) # Repeat this, but with the logical xor operators b1 = ba.bitarray(arr=arr_in1) b2 = ba.bitarray(arr=arr_in2) b3 = ba.bitarray(arr=(arr_in1 ^ arr_in2)) assert_equal((b1.ibuf ^ b2.ibuf), b3.ibuf) assert_equal(b1.count(), arr_in1.sum()) assert_equal(b2.count(), arr_in2.sum()) # Let's check the logical and operation b4 = b1.logical_xor(b2) assert_equal(b4.count(), b3.count()) assert_array_equal(b4.as_bool_array(), b3.as_bool_array()) b5 = b1 ^ b2 assert_equal(b5.count(), b3.count()) assert_array_equal(b5.as_bool_array(), b3.as_bool_array()) b1 ^= b2 assert_equal(b1.count(), b4.count()) assert_array_equal(b1.as_bool_array(), b4.as_bool_array()) b = ba.bitarray(10) for i in range(10): b.set_value(i, 2) # 2 should evaluate to True arr = b.as_bool_array() assert_equal(arr[: i + 1].all(), True) assert_equal(arr[i + 1 :].any(), False) for i in range(10): b.set_value(i, 0) arr = b.as_bool_array() assert_equal(arr.any(), False) b.set_value(7, 1) arr = b.as_bool_array() assert_array_equal(arr, [0, 0, 0, 0, 0, 0, 0, 1, 0, 0]) b.set_value(2, 1) arr = b.as_bool_array() assert_array_equal(arr, [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]) def test_set_range(): b = ba.bitarray(127) # Test once where we're in the middle of start and end bits b.set_range(4, 65, 1) comparison_array = np.zeros(127, dtype="uint8") comparison_array[4:65] = 1 arr = b.as_bool_array().astype("uint8") assert_array_equal(arr, comparison_array) assert_equal(b.count(), comparison_array.sum()) # Test when we start and stop in the same byte b = ba.bitarray(127) b.set_range(4, 6, 1) comparison_array = np.zeros(127, dtype="uint8") comparison_array[4:6] = 1 arr = b.as_bool_array().astype("uint8") assert_array_equal(arr, comparison_array) assert_equal(b.count(), comparison_array.sum()) # Test now where we're in the middle of start b = ba.bitarray(64) b.set_range(33, 36, 1) comparison_array = np.zeros(64, dtype="uint8") comparison_array[33:36] = 1 arr = b.as_bool_array().astype("uint8") assert_array_equal(arr, comparison_array) assert_equal(b.count(), comparison_array.sum()) # Now we test when we end on a byte edge, but we have 65 entries b = ba.bitarray(65) b.set_range(32, 64, 1) comparison_array = np.zeros(65, dtype="uint8") comparison_array[32:64] = 1 arr = 
b.as_bool_array().astype("uint8") assert_array_equal(arr, comparison_array) assert_equal(b.count(), comparison_array.sum()) # Let's do the inverse b = ba.bitarray(127) b.set_range(0, 127, 1) assert_equal(b.as_bool_array().all(), True) b.set_range(0, 127, 0) assert_equal(b.as_bool_array().any(), False) b.set_range(3, 9, 1) comparison_array = np.zeros(127, dtype="uint8") comparison_array[3:9] = 1 arr = b.as_bool_array().astype("uint8") assert_array_equal(arr, comparison_array) assert_equal(b.count(), comparison_array.sum()) # Now let's overlay some zeros b.set_range(7, 10, 0) comparison_array[7:10] = 0 arr = b.as_bool_array().astype("uint8") assert_array_equal(arr, comparison_array) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/tests/test_bounding_volume_hierarchy.py0000644000175100001770000000263014714401662025010 0ustar00runnerdockerimport numpy as np import yt from yt.testing import requires_file from yt.utilities.lib.bounding_volume_hierarchy import BVH, test_ray_trace from yt.visualization.volume_rendering.api import Camera, Scene def get_rays(camera): normal_vector = camera.unit_vectors[2].d W = np.array([8.0, 8.0]) N = np.array([800, 800]) dx = W / N x_points = np.linspace((-N[0] / 2 + 0.5) * dx[0], (N[0] / 2 - 0.5) * dx[0], N[0]) y_points = np.linspace((-N[1] / 2 + 0.5) * dx[1], (N[1] / 2 - 0.5) * dx[0], N[1]) X, Y = np.meshgrid(x_points, y_points) result = np.dot(camera.unit_vectors[0:2].T, [X.ravel(), Y.ravel()]) vec_origins = camera.scene.arr(result.T, "unitary") + camera.position return np.array(vec_origins), np.array(normal_vector) fn = "MOOSE_sample_data/out.e-s010" @requires_file(fn) def test_bounding_volume_hierarchy(): ds = yt.load(fn) vertices = ds.index.meshes[0].connectivity_coords indices = ds.index.meshes[0].connectivity_indices - 1 ad = ds.all_data() field_data = ad["connect1", "diffused"] bvh = BVH(vertices, indices, field_data) sc = Scene() cam = Camera(sc) cam.set_position(np.array([8.0, 8.0, 8.0])) cam.focus = np.array([0.0, 0.0, 0.0]) origins, direction = get_rays(cam) image = np.empty(800 * 800, np.float64) test_ray_trace(image, origins, direction, bvh) image = image.reshape((800, 800)) return image ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/tests/test_element_mappings.py0000644000175100001770000001342514714401662023111 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_almost_equal from yt.utilities.lib.element_mappings import ( test_hex20_sampler, test_hex_sampler, test_linear1D_sampler, test_quad2_sampler, test_quad_sampler, test_tet2_sampler, test_tetra_sampler, test_tri2_sampler, test_tri_sampler, test_wedge_sampler, ) def check_all_vertices(sampler, vertices, field_values): NV = vertices.shape[0] NDIM = vertices.shape[1] x = np.empty(NDIM) for i in range(NV): x = vertices[i] val = sampler(vertices, field_values, x) assert_almost_equal(val, field_values[i]) def test_P1Sampler1D(): vertices = np.array([[0.1], [0.3]]) field_values = np.array([1.0, 2.0]) check_all_vertices(test_linear1D_sampler, vertices, field_values) def test_P1Sampler2D(): vertices = np.array([[0.1, 0.2], [0.6, 0.3], [0.2, 0.7]]) field_values = np.array([1.0, 2.0, 3.0]) check_all_vertices(test_tri_sampler, vertices, field_values) def test_P1Sampler3D(): vertices = np.array( [[0.1, 0.1, 0.1], [0.6, 0.3, 0.2], [0.2, 0.7, 0.2], [0.4, 0.4, 0.7]] ) field_values = np.array([1.0, 2.0, 3.0, 4.0]) 
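    # Linear (P1) shape functions are interpolatory: sampling at a vertex
    # must reproduce that vertex's field value (to numerical precision),
    # which is what check_all_vertices asserts.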
check_all_vertices(test_tetra_sampler, vertices, field_values) def test_Q1Sampler2D(): vertices = np.array([[0.1, 0.2], [0.6, 0.3], [0.7, 0.9], [0.2, 0.7]]) field_values = np.array([1.0, 2.0, 3.0, 4.0]) check_all_vertices(test_quad_sampler, vertices, field_values) def test_Q2Sampler2D(): vertices = np.array( [ [2.0, 3.0], [7.0, 4.0], [10.0, 15.0], [4.0, 12.0], [4.5, 3.5], [8.5, 9.5], [7.0, 13.5], [3.0, 7.5], [5.75, 8.5], ] ) field_values = np.array([7.0, 27.0, 40.0, 12.0, 13.0, 30.0, 22.0, 9.0, 16.0]) check_all_vertices(test_quad2_sampler, vertices, field_values) def test_Q1Sampler3D(): vertices = np.array( [ [2.00657905, 0.6888599, 1.4375], [1.8658198, 1.00973171, 1.4375], [1.97881594, 1.07088163, 1.4375], [2.12808879, 0.73057381, 1.4375], [2.00657905, 0.6888599, 1.2], [1.8658198, 1.00973171, 1.2], [1.97881594, 1.07088163, 1.2], [2.12808879, 0.73057381, 1.2], ] ) field_values = np.array( [ 0.4526278, 0.45262656, 0.45262657, 0.4526278, 0.54464296, 0.54464149, 0.5446415, 0.54464296, ] ) check_all_vertices(test_hex_sampler, vertices, field_values) def test_S2Sampler3D(): vertices = np.array( [ [3.00608789e-03, 4.64941000e-02, -3.95758979e-04], [3.03202730e-03, 4.64941000e-02, 0.00000000e00], [3.03202730e-03, 4.70402000e-02, 3.34389809e-20], [3.00608789e-03, 4.70402000e-02, -3.95758979e-04], [2.45511948e-03, 4.64941000e-02, -3.23222611e-04], [2.47630461e-03, 4.64941000e-02, 1.20370622e-35], [2.47630461e-03, 4.70402000e-02, 3.34389809e-20], [2.45511948e-03, 4.70402000e-02, -3.23222611e-04], [3.01905760e-03, 4.64941000e-02, -1.97879489e-04], [3.03202730e-03, 4.67671500e-02, 3.34389809e-20], [3.01905760e-03, 4.70402000e-02, -1.97879489e-04], [3.00608789e-03, 4.67671500e-02, -3.95758979e-04], [2.73060368e-03, 4.64941000e-02, -3.59490795e-04], [2.75416596e-03, 4.64941000e-02, -1.86574463e-34], [2.75416596e-03, 4.70402000e-02, 6.68779617e-20], [2.73060368e-03, 4.70402000e-02, -3.59490795e-04], [2.47100265e-03, 4.64941000e-02, -1.61958070e-04], [2.47630461e-03, 4.67671500e-02, 1.67194904e-20], [2.47100265e-03, 4.70402000e-02, -1.61958070e-04], [2.45511948e-03, 4.67671500e-02, -3.23222611e-04], ] ) field_values = np.array( [ 659.80151432, 650.95995348, 650.02809796, 658.81589888, 659.77560908, 650.93582507, 649.99987015, 658.78508795, 655.38073390, 650.49402572, 654.42199842, 659.30870660, 659.78856170, 650.94788928, 650.01398406, 658.80049342, 655.35571708, 650.46784761, 654.39247905, 659.28034852, ] ) check_all_vertices(test_hex20_sampler, vertices, field_values) def test_W1Sampler3D(): vertices = np.array( [ [-0.34641016, 0.3, 0.0], [-0.31754265, 0.25, 0.0], [-0.28867513, 0.3, 0.0], [-0.34641016, 0.3, 0.05], [-0.31754265, 0.25, 0.05], [-0.28867513, 0.3, 0.05], ] ) field_values = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]) check_all_vertices(test_wedge_sampler, vertices, field_values) def test_T2Sampler2D(): vertices = np.array( [[0.1, 0.2], [0.3, 0.5], [0.2, 0.9], [0.2, 0.35], [0.25, 0.7], [0.15, 0.55]] ) field_values = np.array([15.0, 37.0, 49.0, 32.0, 46.0, 24.0]) check_all_vertices(test_tri2_sampler, vertices, field_values) def test_Tet2Sampler3D(): vertices = np.array( [ [0.3, -0.4, 0.6], [1.7, -0.7, 0.8], [0.4, 1.2, 0.4], [0.4, -0.2, 2.0], [1.0, -0.55, 0.7], [1.05, 0.25, 0.6], [0.35, 0.4, 0.5], [0.35, -0.3, 1.3], [1.05, -0.45, 1.4], [0.4, 0.5, 1.2], ] ) field_values = np.array( [15.0, 37.0, 49.0, 24.0, 30.0, 44.0, 20.0, 17.0, 32.0, 36.0] ) check_all_vertices(test_tet2_sampler, vertices, field_values) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 
yt-4.4.0/yt/utilities/lib/tests/test_fill_region.py0000644000175100001770000000254014714401662022047 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_equal from yt.utilities.lib.misc_utilities import fill_region NDIM = 32 def test_fill_region(): for level in range(2): rf = 2**level output_fields = [ np.zeros((NDIM * rf, NDIM * rf, NDIM * rf), "float64") for i in range(3) ] input_fields = [np.empty(NDIM**3, "float64") for i in range(3)] v = np.mgrid[ 0.0 : 1.0 : NDIM * 1j, 0.0 : 1.0 : NDIM * 1j, 0.0 : 1.0 : NDIM * 1j ] input_fields[0][:] = v[0].ravel() input_fields[1][:] = v[1].ravel() input_fields[2][:] = v[2].ravel() left_index = np.zeros(3, "int64") ipos = np.empty((NDIM**3, 3), dtype="int64") ind = np.indices((NDIM, NDIM, NDIM)) ipos[:, 0] = ind[0].ravel() ipos[:, 1] = ind[1].ravel() ipos[:, 2] = ind[2].ravel() ires = np.zeros(NDIM * NDIM * NDIM, "int64") ddims = np.array([NDIM, NDIM, NDIM], dtype="int64") * rf fill_region( input_fields, output_fields, level, left_index, ipos, ires, ddims, np.array([2, 2, 2], dtype="i8"), ) for r in range(level + 1): for o, i in zip(output_fields, v, strict=True): assert_equal(o[r::rf, r::rf, r::rf], i) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/tests/test_geometry_utils.py0000644000175100001770000007131514714401662022637 0ustar00runnerdockerimport numpy as np from numpy.testing import ( assert_array_equal, assert_array_less, assert_equal, assert_raises, ) from yt.testing import fake_random_ds from yt.utilities.lib.misc_utilities import ( obtain_position_vector, obtain_relative_velocity_vector, ) _fields = ("density", "velocity_x", "velocity_y", "velocity_z") _units = ("g/cm**3", "cm/s", "cm/s", "cm/s") # TODO: error compact/spread bits for incorrect size # TODO: test msdb for [0,0], [1,1], [2,2] etc. 
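

# A pure-Python reference for the bit patterns asserted below (an added
# sketch, not part of the original test suite): spread_bits inserts two
# zero bits between each of the low 21 bits of its argument, and
# compact_bits undoes that.
def _reference_spread_bits(x):
    out = 0
    for bit in range(21):
        # Move bit `bit` of x to position 3*bit of the result.
        out |= ((x >> bit) & 1) << (3 * bit)
    return out
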
def test_spread_bits(): from yt.utilities.lib.geometry_utils import spread_bits li = [ ( np.uint64(0b111111111111111111111), np.uint64(0b1001001001001001001001001001001001001001001001001001001001001), ) ] for i, ans in li: out = spread_bits(i) assert_equal(out, ans) def test_compact_bits(): from yt.utilities.lib.geometry_utils import compact_bits li = [ ( np.uint64(0b111111111111111111111), np.uint64(0b1001001001001001001001001001001001001001001001001001001001001), ) ] for ans, i in li: out = compact_bits(i) assert_equal(out, ans) def test_spread_and_compact_bits(): from yt.utilities.lib.geometry_utils import compact_bits, spread_bits li = [np.uint64(0b111111111111111111111)] for ans in li: mi = spread_bits(ans) out = compact_bits(mi) assert_equal(out, ans) def test_lsz(): from yt.utilities.lib.geometry_utils import lsz li = [ ( np.uint64(0b1001001001001001001001001001001001001001001001001001001001001), 3 * 21, 3, 0, ), ( np.uint64(0b1001001001001001001001001001001001001001001001001001001001000), 3 * 0, 3, 0, ), ( np.uint64(0b1001001001001001001001001001001001001001001001001001001000001), 3 * 1, 3, 0, ), ( np.uint64(0b1001001001001001001001001001001001001001001001001001000001001), 3 * 2, 3, 0, ), ( np.uint64(0b10010010010010010010010010010010010010010010010010010010010010), 3 * 0, 3, 0, ), ( np.uint64( 0b100100100100100100100100100100100100100100100100100100100100100 ), 3 * 0, 3, 0, ), (np.uint64(0b100), 0, 1, 0), (np.uint64(0b100), 1, 1, 1), (np.uint64(0b100), 3, 1, 2), (np.uint64(0b100), 3, 1, 3), ] for i, ans, stride, start in li: out = lsz(i, stride=stride, start=start) assert_equal(out, ans) def test_lsb(): from yt.utilities.lib.geometry_utils import lsb li = [ ( np.uint64(0b1001001001001001001001001001001001001001001001001001001001001), 3 * 0, ), ( np.uint64(0b1001001001001001001001001001001001001001001001001001001001000), 3 * 1, ), ( np.uint64(0b1001001001001001001001001001001001001001001001001001001000000), 3 * 2, ), ( np.uint64(0b1001001001001001001001001001001001001001001001001001000000000), 3 * 3, ), ( np.uint64(0b10010010010010010010010010010010010010010010010010010010010010), 3 * 21, ), ( np.uint64( 0b100100100100100100100100100100100100100100100100100100100100100 ), 3 * 21, ), ] for i, ans in li: out = lsb(i, stride=3) assert_equal(out, ans) def test_bitwise_addition(): from yt.utilities.lib.geometry_utils import bitwise_addition # TODO: Handle negative & periodic boundaries lz = [ (0, 1), # (0,-1), (1, 1), (1, 2), (1, 4), (1, -1), (2, 1), (2, 2), (2, -1), (2, -2), (3, 1), (3, 5), (3, -1), ] for i, a in lz: i = np.uint64(i) a = np.int64(a) out = bitwise_addition(i, a, stride=1, start=0) assert_equal(out, i + a) # def test_add_to_morton_coord(): # from yt.utilities.lib.geometry_utils import add_to_morton_coord def test_get_morton_indices(): from yt.utilities.lib.geometry_utils import ( get_morton_indices, get_morton_indices_unravel, ) INDEX_MAX_64 = np.uint64(2097151) li = np.arange(6, dtype=np.uint64).reshape((2, 3)) mi_ans = np.array([10, 229], dtype=np.uint64) mi_out = get_morton_indices(li) mi_out2 = get_morton_indices_unravel(li[:, 0], li[:, 1], li[:, 2]) assert_array_equal(mi_out, mi_ans) assert_array_equal(mi_out2, mi_ans) li[0, :] = INDEX_MAX_64 * np.ones(3, dtype=np.uint64) assert_raises(ValueError, get_morton_indices, li) assert_raises(ValueError, get_morton_indices_unravel, li[:, 0], li[:, 1], li[:, 2]) def test_get_morton_points(): from yt.utilities.lib.geometry_utils import get_morton_points mi = np.array([10, 229], dtype=np.uint64) li_ans = np.arange(6, 
dtype=np.uint64).reshape((2, 3)) li_out = get_morton_points(mi) assert_array_equal(li_out, li_ans) def test_compare_morton(): # TODO: Add error messages to assertions from yt.utilities.lib.geometry_utils import compare_morton # Diagonal p = np.array([0.0, 0.0, 0.0], dtype=np.float64) q = np.array([1.0, 1.0, 1.0], dtype=np.float64) assert_equal(compare_morton(p, q), 1) assert_equal(compare_morton(q, p), 0) assert_equal(compare_morton(p, p), 0) # 1-1 vs 0-1 p = np.array([1.0, 1.0, 0.0], dtype=np.float64) q = np.array([1.0, 1.0, 1.0], dtype=np.float64) assert_equal(compare_morton(p, q), 1) assert_equal(compare_morton(q, p), 0) assert_equal(compare_morton(p, p), 0) # x advance, y decrease p = np.array([0.0, 1.0, 0.0], dtype=np.float64) q = np.array([1.0, 0.0, 0.0], dtype=np.float64) assert_equal(compare_morton(p, q), 1) assert_equal(compare_morton(q, p), 0) assert_equal(compare_morton(p, p), 0) # x&y advance, z decrease p = np.array([0.0, 0.0, 1.0], dtype=np.float64) q = np.array([1.0, 1.0, 0.0], dtype=np.float64) assert_equal(compare_morton(p, q), 1) assert_equal(compare_morton(q, p), 0) assert_equal(compare_morton(p, p), 0) def test_get_morton_neighbors_coarse(): from yt.utilities.lib.geometry_utils import get_morton_neighbors_coarse imax = 5 ngz = 1 tests = { (7, 1): np.array( [ 35, 49, 56, 48, 33, 40, 32, 42, 34, 3, 17, 24, 16, 1, 8, 0, 10, 2, 21, 28, 20, 5, 12, 4, 14, 6, ], dtype="uint64", ), (7, 0): np.array( [ 35, 49, 56, 48, 33, 40, 32, 42, 34, 3, 17, 24, 16, 1, 8, 0, 10, 2, 21, 28, 20, 5, 12, 4, 14, 6, ], dtype="uint64", ), (0, 1): np.array( [ 4, 6, 7, 70, 132, 133, 196, 5, 68, 256, 258, 259, 322, 384, 385, 448, 257, 320, 2, 3, 66, 128, 129, 192, 1, 64, ], dtype="uint64", ), (0, 0): np.array([4, 6, 7, 5, 2, 3, 1], dtype="uint64"), (448, 1): np.array( [ 192, 64, 0, 9, 82, 18, 27, 128, 137, 228, 100, 36, 45, 118, 54, 63, 164, 173, 320, 256, 265, 338, 274, 283, 384, 393, ], dtype="uint64", ), (448, 0): np.array([228, 118, 63, 173, 338, 283, 393], dtype="uint64"), } for (mi1, periodic), ans in tests.items(): n1 = get_morton_neighbors_coarse(mi1, imax, periodic, ngz) assert_equal(np.sort(n1), np.sort(ans)) def test_get_morton_neighbors_refined(): from yt.utilities.lib.geometry_utils import get_morton_neighbors_refined imax1 = 5 imax2 = 5 ngz = 1 tests = { (7, 7, 1): ( np.array( [ 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, ], dtype="uint64", ), np.array( [ 35, 49, 56, 48, 33, 40, 32, 42, 34, 3, 17, 24, 16, 1, 8, 0, 10, 2, 21, 28, 20, 5, 12, 4, 14, 6, ], dtype="uint64", ), ), (7, 7, 0): ( np.array( [ 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, ], dtype="uint64", ), np.array( [ 35, 49, 56, 48, 33, 40, 32, 42, 34, 3, 17, 24, 16, 1, 8, 0, 10, 2, 21, 28, 20, 5, 12, 4, 14, 6, ], dtype="uint64", ), ), (0, 0, 1): ( np.array( [ 0, 0, 0, 64, 128, 128, 192, 0, 64, 256, 256, 256, 320, 384, 384, 448, 256, 320, 0, 0, 64, 128, 128, 192, 0, 64, ], dtype="uint64", ), np.array( [ 4, 6, 7, 70, 132, 133, 196, 5, 68, 256, 258, 259, 322, 384, 385, 448, 257, 320, 2, 3, 66, 128, 129, 192, 1, 64, ], dtype="uint64", ), ), (0, 0, 0): ( np.array([0, 0, 0, 0, 0, 0, 0], dtype="uint64"), np.array([4, 6, 7, 5, 2, 3, 1], dtype="uint64"), ), (448, 448, 1): ( np.array( [ 192, 64, 0, 64, 192, 128, 192, 128, 192, 448, 320, 256, 320, 448, 384, 448, 384, 448, 320, 256, 320, 448, 384, 448, 384, 448, ], dtype="uint64", ), np.array( [ 192, 64, 0, 9, 82, 18, 27, 128, 137, 228, 100, 36, 45, 118, 54, 63, 164, 173, 320, 256, 265, 338, 274, 283, 384, 393, ], 
dtype="uint64", ), ), (448, 448, 0): ( np.array([448, 448, 448, 448, 448, 448, 448], dtype="uint64"), np.array([228, 118, 63, 173, 338, 283, 393], dtype="uint64"), ), } for (mi1, mi2, periodic), (ans1, ans2) in tests.items(): n1, n2 = get_morton_neighbors_refined(mi1, mi2, imax1, imax2, periodic, ngz) assert_equal(np.sort(n1), np.sort(ans1)) assert_equal(np.sort(n2), np.sort(ans2)) def test_morton_neighbor(): from yt.utilities.lib.geometry_utils import get_morton_indices, morton_neighbor order = 20 imax = np.uint64(1 << order) p = np.array( [ [imax / 2, imax / 2, imax / 2], [imax / 2, imax / 2, 0], [imax / 2, imax / 2, imax], ], dtype=np.uint64, ) p_ans = np.array( [ [imax / 2, imax / 2, imax / 2 + 1], [imax / 2, imax / 2, imax / 2 - 1], [imax / 2, imax / 2, imax - 1], [imax / 2, imax / 2, 1], [imax / 2, imax / 2 + 1, imax / 2 + 1], [imax / 2 - 1, imax / 2 - 1, imax / 2], [imax / 2 - 1, imax / 2, imax / 2 + 1], [imax / 2, imax / 2 - 1, imax - 1], [imax / 2, imax / 2 + 1, 1], ], dtype=np.uint64, ) mi_ans = get_morton_indices(p_ans) assert_equal(morton_neighbor(p[0, :], [2], [+1], imax), mi_ans[0]) assert_equal(morton_neighbor(p[0, :], [2], [-1], imax), mi_ans[1]) assert_equal(morton_neighbor(p[1, :], [2], [-1], imax, periodic=False), -1) assert_equal(morton_neighbor(p[2, :], [2], [+1], imax, periodic=False), -1) assert_equal(morton_neighbor(p[1, :], [2], [-1], imax, periodic=True), mi_ans[2]) assert_equal(morton_neighbor(p[2, :], [2], [+1], imax, periodic=True), mi_ans[3]) assert_equal(morton_neighbor(p[0, :], [1, 2], [+1, +1], imax), mi_ans[4]) assert_equal(morton_neighbor(p[0, :], [0, 1], [-1, -1], imax), mi_ans[5]) assert_equal(morton_neighbor(p[0, :], [0, 2], [-1, +1], imax), mi_ans[6]) assert_equal(morton_neighbor(p[1, :], [1, 2], [-1, -1], imax, periodic=False), -1) assert_equal(morton_neighbor(p[2, :], [1, 2], [+1, +1], imax, periodic=False), -1) assert_equal( morton_neighbor(p[1, :], [1, 2], [-1, -1], imax, periodic=True), mi_ans[7] ) assert_equal( morton_neighbor(p[2, :], [1, 2], [+1, +1], imax, periodic=True), mi_ans[8] ) def test_get_morton_neighbors(): from yt.utilities.lib.geometry_utils import get_morton_indices, get_morton_neighbors order = 20 imax = 1 << order p = np.array( [ [imax / 2, imax / 2, imax / 2], [imax / 2, imax / 2, 0], [imax / 2, imax / 2, imax], ], dtype=np.uint64, ) pn_non = [ np.array( [ # x +/- 1 [imax / 2 + 1, imax / 2, imax / 2], [imax / 2 + 1, imax / 2 + 1, imax / 2], [imax / 2 + 1, imax / 2 + 1, imax / 2 + 1], [imax / 2 + 1, imax / 2 + 1, imax / 2 - 1], [imax / 2 + 1, imax / 2 - 1, imax / 2], [imax / 2 + 1, imax / 2 - 1, imax / 2 + 1], [imax / 2 + 1, imax / 2 - 1, imax / 2 - 1], [imax / 2 + 1, imax / 2, imax / 2 + 1], [imax / 2 + 1, imax / 2, imax / 2 - 1], [imax / 2 - 1, imax / 2, imax / 2], [imax / 2 - 1, imax / 2 + 1, imax / 2], [imax / 2 - 1, imax / 2 + 1, imax / 2 + 1], [imax / 2 - 1, imax / 2 + 1, imax / 2 - 1], [imax / 2 - 1, imax / 2 - 1, imax / 2], [imax / 2 - 1, imax / 2 - 1, imax / 2 + 1], [imax / 2 - 1, imax / 2 - 1, imax / 2 - 1], [imax / 2 - 1, imax / 2, imax / 2 + 1], [imax / 2 - 1, imax / 2, imax / 2 - 1], # y +/- 1 [imax / 2, imax / 2 + 1, imax / 2], [imax / 2, imax / 2 + 1, imax / 2 + 1], [imax / 2, imax / 2 + 1, imax / 2 - 1], [imax / 2, imax / 2 - 1, imax / 2], [imax / 2, imax / 2 - 1, imax / 2 + 1], [imax / 2, imax / 2 - 1, imax / 2 - 1], # x +/- 1 [imax / 2, imax / 2, imax / 2 + 1], [imax / 2, imax / 2, imax / 2 - 1], ], dtype=np.uint64, ), np.array( [ # x +/- 1 [imax / 2 + 1, imax / 2, 0], [imax / 2 + 1, imax / 2 + 1, 0], [imax / 2 
+ 1, imax / 2 + 1, 1], [imax / 2 + 1, imax / 2 - 1, 0], [imax / 2 + 1, imax / 2 - 1, 1], [imax / 2 + 1, imax / 2, 1], [imax / 2 - 1, imax / 2, 0], [imax / 2 - 1, imax / 2 + 1, 0], [imax / 2 - 1, imax / 2 + 1, 1], [imax / 2 - 1, imax / 2 - 1, 0], [imax / 2 - 1, imax / 2 - 1, 1], [imax / 2 - 1, imax / 2, 1], # y +/- 1 [imax / 2, imax / 2 + 1, 0], [imax / 2, imax / 2 + 1, 1], [imax / 2, imax / 2 - 1, 0], [imax / 2, imax / 2 - 1, 1], # z +/- 1 [imax / 2, imax / 2, 0 + 1], ], dtype=np.uint64, ), np.array( [ # x +/- 1 [imax / 2 + 1, imax / 2, imax], [imax / 2 + 1, imax / 2 + 1, imax], [imax / 2 + 1, imax / 2 + 1, imax - 1], [imax / 2 + 1, imax / 2 - 1, imax], [imax / 2 + 1, imax / 2 - 1, imax - 1], [imax / 2 + 1, imax / 2, imax - 1], [imax / 2 - 1, imax / 2, imax], [imax / 2 - 1, imax / 2 + 1, imax], [imax / 2 - 1, imax / 2 + 1, imax - 1], [imax / 2 - 1, imax / 2 - 1, imax], [imax / 2 - 1, imax / 2 - 1, imax - 1], [imax / 2 - 1, imax / 2, imax - 1], # y +/- 1 [imax / 2, imax / 2 + 1, imax], [imax / 2, imax / 2 + 1, imax - 1], [imax / 2, imax / 2 - 1, imax], [imax / 2, imax / 2 - 1, imax - 1], # z +/- 1 [imax / 2, imax / 2, imax - 1], ], dtype=np.uint64, ), ] pn_per = [ np.array( [ # x +/- 1 [imax / 2 + 1, imax / 2, imax / 2], [imax / 2 + 1, imax / 2 + 1, imax / 2], [imax / 2 + 1, imax / 2 + 1, imax / 2 + 1], [imax / 2 + 1, imax / 2 + 1, imax / 2 - 1], [imax / 2 + 1, imax / 2 - 1, imax / 2], [imax / 2 + 1, imax / 2 - 1, imax / 2 + 1], [imax / 2 + 1, imax / 2 - 1, imax / 2 - 1], [imax / 2 + 1, imax / 2, imax / 2 + 1], [imax / 2 + 1, imax / 2, imax / 2 - 1], [imax / 2 - 1, imax / 2, imax / 2], [imax / 2 - 1, imax / 2 + 1, imax / 2], [imax / 2 - 1, imax / 2 + 1, imax / 2 + 1], [imax / 2 - 1, imax / 2 + 1, imax / 2 - 1], [imax / 2 - 1, imax / 2 - 1, imax / 2], [imax / 2 - 1, imax / 2 - 1, imax / 2 + 1], [imax / 2 - 1, imax / 2 - 1, imax / 2 - 1], [imax / 2 - 1, imax / 2, imax / 2 + 1], [imax / 2 - 1, imax / 2, imax / 2 - 1], # y +/- 1 [imax / 2, imax / 2 + 1, imax / 2], [imax / 2, imax / 2 + 1, imax / 2 + 1], [imax / 2, imax / 2 + 1, imax / 2 - 1], [imax / 2, imax / 2 - 1, imax / 2], [imax / 2, imax / 2 - 1, imax / 2 + 1], [imax / 2, imax / 2 - 1, imax / 2 - 1], # z +/- 1 [imax / 2, imax / 2, imax / 2 + 1], [imax / 2, imax / 2, imax / 2 - 1], ], dtype=np.uint64, ), np.array( [ # x +/- 1 [imax / 2 + 1, imax / 2, 0], [imax / 2 + 1, imax / 2 + 1, 0], [imax / 2 + 1, imax / 2 + 1, 1], [imax / 2 + 1, imax / 2 + 1, imax - 1], [imax / 2 + 1, imax / 2 - 1, 0], [imax / 2 + 1, imax / 2 - 1, 1], [imax / 2 + 1, imax / 2 - 1, imax - 1], [imax / 2 + 1, imax / 2, 1], [imax / 2 + 1, imax / 2, imax - 1], [imax / 2 - 1, imax / 2, 0], [imax / 2 - 1, imax / 2 + 1, 0], [imax / 2 - 1, imax / 2 + 1, 1], [imax / 2 - 1, imax / 2 + 1, imax - 1], [imax / 2 - 1, imax / 2 - 1, 0], [imax / 2 - 1, imax / 2 - 1, 1], [imax / 2 - 1, imax / 2 - 1, imax - 1], [imax / 2 - 1, imax / 2, 1], [imax / 2 - 1, imax / 2, imax - 1], # y +/- 1 [imax / 2, imax / 2 + 1, 0], [imax / 2, imax / 2 + 1, 1], [imax / 2, imax / 2 + 1, imax - 1], [imax / 2, imax / 2 - 1, 0], [imax / 2, imax / 2 - 1, 1], [imax / 2, imax / 2 - 1, imax - 1], # z +/- 1 [imax / 2, imax / 2, 0 + 1], [imax / 2, imax / 2, imax - 1], ], dtype=np.uint64, ), np.array( [ # x +/- 1 [imax / 2 + 1, imax / 2, imax], [imax / 2 + 1, imax / 2 + 1, imax], [imax / 2 + 1, imax / 2 + 1, 1], [imax / 2 + 1, imax / 2 + 1, imax - 1], [imax / 2 + 1, imax / 2 - 1, imax], [imax / 2 + 1, imax / 2 - 1, 1], [imax / 2 + 1, imax / 2 - 1, imax - 1], [imax / 2 + 1, imax / 2, 1], [imax / 2 + 1, imax / 2, imax - 
1], [imax / 2 - 1, imax / 2, imax], [imax / 2 - 1, imax / 2 + 1, imax], [imax / 2 - 1, imax / 2 + 1, 1], [imax / 2 - 1, imax / 2 + 1, imax - 1], [imax / 2 - 1, imax / 2 - 1, imax], [imax / 2 - 1, imax / 2 - 1, 1], [imax / 2 - 1, imax / 2 - 1, imax - 1], [imax / 2 - 1, imax / 2, 1], [imax / 2 - 1, imax / 2, imax - 1], # y +/- 1 [imax / 2, imax / 2 + 1, imax], [imax / 2, imax / 2 + 1, 1], [imax / 2, imax / 2 + 1, imax - 1], [imax / 2, imax / 2 - 1, imax], [imax / 2, imax / 2 - 1, 1], [imax / 2, imax / 2 - 1, imax - 1], # z +/- 1 [imax / 2, imax / 2, 1], [imax / 2, imax / 2, imax - 1], ], dtype=np.uint64, ), ] mi = get_morton_indices(p) N = mi.shape[0] # Non-periodic for i in range(N): out = get_morton_neighbors( np.array([mi[i]], dtype=np.uint64), order=order, periodic=False ) ans = get_morton_indices(np.vstack([p[i, :], pn_non[i]])) assert_array_equal(np.unique(out), np.unique(ans), err_msg=f"Non-periodic: {i}") # Periodic for i in range(N): out = get_morton_neighbors( np.array([mi[i]], dtype=np.uint64), order=order, periodic=True ) ans = get_morton_indices(np.vstack([p[i, :], pn_per[i]])) assert_array_equal(np.unique(out), np.unique(ans), err_msg=f"Periodic: {i}") def test_dist(): from yt.utilities.lib.geometry_utils import dist p = np.array([0.0, 0.0, 0.0], dtype=np.float64) q = np.array([0.0, 0.0, 0.0], dtype=np.float64) assert_equal(dist(p, q), 0.0) p = np.array([0.0, 0.0, 0.0], dtype=np.float64) q = np.array([1.0, 0.0, 0.0], dtype=np.float64) assert_equal(dist(p, q), 1.0) p = np.array([0.0, 0.0, 0.0], dtype=np.float64) q = np.array([1.0, 1.0, 0.0], dtype=np.float64) assert_equal(dist(p, q), np.sqrt(2.0)) p = np.array([0.0, 0.0, 0.0], dtype=np.float64) q = np.array([1.0, 1.0, 1.0], dtype=np.float64) assert_equal(dist(p, q), np.sqrt(3.0)) def test_knn_direct(seed=1): from yt.utilities.lib.geometry_utils import knn_direct np.random.seed(seed) k = 64 N = 1e5 idx = np.arange(N, dtype=np.uint64) rad = np.arange(N, dtype=np.float64) pos = np.vstack(3 * [rad**2 / 3.0]).T sort_shf = np.arange(N, dtype=np.uint64) for _ in range(20): np.random.shuffle(sort_shf) sort_ans = np.argsort(sort_shf)[:k] sort_out = knn_direct(pos[sort_shf, :], k, sort_ans[0], idx) assert_array_equal(sort_out, sort_ans) # TODO: test of quadtree (.pxd) def test_obtain_position_vector(): ds = fake_random_ds( 64, nprocs=8, fields=_fields, units=_units, negative=[False, True, True, True] ) dd = ds.sphere((0.5, 0.5, 0.5), 0.2) coords = obtain_position_vector(dd) r = np.sqrt(np.sum(coords * coords, axis=0)) assert_array_less(r.max(), 0.2) assert_array_less(0.0, r.min()) def test_obtain_relative_velocity_vector(): ds = fake_random_ds( 64, nprocs=8, fields=_fields, units=_units, negative=[False, True, True, True] ) dd = ds.all_data() vels = obtain_relative_velocity_vector(dd) assert_array_equal(vels[0, :], dd["gas", "velocity_x"]) assert_array_equal(vels[1, :], dd["gas", "velocity_y"]) assert_array_equal(vels[2, :], dd["gas", "velocity_z"]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/tests/test_nn.py0000644000175100001770000000172314714401662020173 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_array_equal from yt.utilities.lib.bounded_priority_queue import ( validate, validate_nblist, validate_pid, ) # These test functions use utility functions in # yt.utilities.lib.bounded_priority_queue # to test functions which are not exposed at a python level def test_bounded_priority_queue(): dists = validate() answers = np.array([0.1, 
0.001, -1.0, -1.0, -1.0]) assert_array_equal(answers, dists) def test_bounded_priority_queue_pid(): dists, pids = validate_pid() answers = np.array([0.1, 0.001, -1.0, -1.0, -1.0]) answers_pids = np.array([1, 10, -1, -1, -1]) assert_array_equal(answers, dists) assert_array_equal(answers_pids, pids) def test_neighbor_list(): data, pids = validate_nblist() answers_data = np.array([1.0, 1.0, 1.0, 1.0]) answers_pids = np.array([0, 1, 2, 3]) assert_array_equal(answers_data, data) assert_array_equal(answers_pids, pids) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/tests/test_ragged_arrays.py0000644000175100001770000000325614714401662022375 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_equal from yt.testing import assert_rel_equal from yt.utilities.lib.ragged_arrays import index_unop operations = ((np.sum, "sum"), (np.prod, "prod"), (np.max, "max"), (np.min, "min")) dtypes = ( (-1e8, 1e8, "float32"), (-1e8, 1e8, "float64"), (-10000, 10000, "int32"), (-100000000, 100000000, "int64"), ) def test_index_unop(): np.random.seed(0x4D3D3D3) indices = np.arange(1000, dtype="int64") np.random.shuffle(indices) sizes = np.array([200, 50, 50, 100, 32, 32, 32, 32, 32, 64, 376], dtype="int64") for mi, ma, dtype in dtypes: for op, operation in operations: # Create a random set of values values = np.random.random(1000) if operation != "prod": values = values * ma + (ma - mi) if operation == "prod" and dtype.startswith("int"): values = values.astype(dtype) values[values != 0] = 1 values[values == 0] = -1 values = values.astype(dtype) out_values = index_unop(values, indices, sizes, operation) i = 0 for j, v in enumerate(sizes): arr = values[indices[i : i + v]] if dtype == "float32": # Numpy 1.9.1 changes the accumulator type to promote assert_rel_equal(op(arr), out_values[j], 6) elif dtype == "float64": # Numpy 1.9.1 changes the accumulator type to promote assert_rel_equal(op(arr), out_values[j], 12) else: assert_equal(op(arr), out_values[j]) i += v ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/tests/test_sample.py0000644000175100001770000000202314714401662021033 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_allclose from yt.utilities.lib.particle_mesh_operations import CICSample_3 def test_sample(): grid = {} dims = np.array([64, 64, 64], dtype="int32") inds = np.indices(dims) grid["x"] = inds[0] + 0.5 grid["y"] = inds[1] + 0.5 grid["z"] = inds[2] + 0.5 num_particles = np.int64(1000) xp = np.random.uniform(low=1.0, high=63.0, size=num_particles) yp = np.random.uniform(low=1.0, high=63.0, size=num_particles) zp = np.random.uniform(low=1.0, high=63.0, size=num_particles) xfield = np.zeros(num_particles) yfield = np.zeros(num_particles) zfield = np.zeros(num_particles) dx = 1.0 le = np.zeros(3) CICSample_3(xp, yp, zp, xfield, num_particles, grid["x"], le, dims, dx) CICSample_3(xp, yp, zp, yfield, num_particles, grid["y"], le, dims, dx) CICSample_3(xp, yp, zp, zfield, num_particles, grid["z"], le, dims, dx) assert_allclose(xp, xfield) assert_allclose(yp, yfield) assert_allclose(zp, zfield) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/tsearch.c0000644000175100001770000000615014714401662016601 0ustar00runnerdocker/* * Tree search generalized from Knuth (6.2.2) Algorithm T just like * the AT&T man page says. 
* * The node_t structure is for internal use only, lint doesn't grok it. * * Written by reading the System V Interface Definition, not the code. * * Totally public domain. */ /*LINTLIBRARY*/ #include "tsearch.h" #include typedef struct node_t { char *key; struct node_t *left, *right; } node; /* find or insert datum into search tree */ void * tsearch(const void *vkey, void **vrootp, int (*compar)(const void *, const void *)) { node *q; char *key = (char *)vkey; node **rootp = (node **)vrootp; if (rootp == (struct node_t **)0) return ((void *)0); while (*rootp != (struct node_t *)0) { /* Knuth's T1: */ int r; if ((r = (*compar)(key, (*rootp)->key)) == 0) /* T2: */ return ((void *)*rootp); /* we found it! */ rootp = (r < 0) ? &(*rootp)->left : /* T3: follow left branch */ &(*rootp)->right; /* T4: follow right branch */ } q = (node *) malloc(sizeof(node)); /* T5: key not found */ if (q != (struct node_t *)0) { /* make new node */ *rootp = q; /* link new node to old */ q->key = key; /* initialize new node */ q->left = q->right = (struct node_t *)0; } return ((void *)q); } /* find datum in search tree */ void * tfind(const void *vkey, void **vrootp, int (*compar)(const void *, const void *)) { char *key = (char *)vkey; node **rootp = (node **)vrootp; if (rootp == (struct node_t **)0) return ((void *)0); while (*rootp != (struct node_t *)0) { /* Knuth's T1: */ int r; if ((r = (*compar)(key, (*rootp)->key)) == 0) /* T2: */ return ((void *)*rootp); /* we found it! */ rootp = (r < 0) ? &(*rootp)->left : /* T3: follow left branch */ &(*rootp)->right; /* T4: follow right branch */ } return ((void *)0); /* T5: key not found */ } /* delete node with given key */ void * tdelete(const void *vkey, void **vrootp, int (*compar)(const void *, const void *)) { node **rootp = (node **)vrootp; char *key = (char *)vkey; node *p = (node *)1; node *q; node *r; int cmp; if (rootp == (struct node_t **)0 || *rootp == (struct node_t *)0) return ((struct node_t *)0); while ((cmp = (*compar)(key, (*rootp)->key)) != 0) { p = *rootp; rootp = (cmp < 0) ? &(*rootp)->left : /* follow left branch */ &(*rootp)->right; /* follow right branch */ if (*rootp == (struct node_t *)0) return ((void *)0); /* key not found */ } r = (*rootp)->right; /* D1: */ if ((q = (*rootp)->left) == (struct node_t *)0) /* Left (struct node_t *)0? */ q = r; else if (r != (struct node_t *)0) { /* Right link is null? */ if (r->left == (struct node_t *)0) { /* D2: Find successor */ r->left = q; q = r; } else { /* D3: Find (struct node_t *)0 link */ for (q = r->left; q->left != (struct node_t *)0; q = r->left) r = q; r->left = q->right; q->left = (*rootp)->left; q->right = (*rootp)->right; } } free((struct node_t *) *rootp); /* D4: Free node */ *rootp = q; /* link parent to new node */ return(p); } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/tsearch.h0000644000175100001770000000116714714401662016611 0ustar00runnerdocker/* * Tree search generalized from Knuth (6.2.2) Algorithm T just like * the AT&T man page says. * * The node_t structure is for internal use only, lint doesn't grok it. * * Written by reading the System V Interface Definition, not the code. * * Totally public domain. 
*/ /*LINTLIBRARY*/ #ifndef TSEARCH_H #define TSEARCH_H void * tsearch(const void *vkey, void **vrootp, int (*compar)(const void *, const void *)); void * tfind(const void *vkey, void **vrootp, int (*compar)(const void *, const void *)); void * tdelete(const void *vkey, void **vrootp, int (*compar)(const void *, const void *)); #endif ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/vec3_ops.pxd0000644000175100001770000000345114714401662017243 0ustar00runnerdockercimport cython from libc.math cimport sqrt @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef inline cython.floating dot(const cython.floating[3] a, const cython.floating[3] b) noexcept nogil: return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef inline void cross(const cython.floating[3] a, const cython.floating[3] b, cython.floating c[3]) noexcept nogil: c[0] = a[1]*b[2] - a[2]*b[1] c[1] = a[2]*b[0] - a[0]*b[2] c[2] = a[0]*b[1] - a[1]*b[0] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef inline void subtract(const cython.floating[3] a, const cython.floating[3] b, cython.floating c[3]) noexcept nogil: c[0] = a[0] - b[0] c[1] = a[1] - b[1] c[2] = a[2] - b[2] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef inline cython.floating distance(const cython.floating[3] a, const cython.floating[3] b) noexcept nogil: return sqrt((a[0] - b[0])**2 + (a[1] - b[1])**2 +(a[2] - b[2])**2) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef inline void fma(const cython.floating f, const cython.floating[3] a, const cython.floating[3] b, cython.floating[3] c) noexcept nogil: c[0] = f * a[0] + b[0] c[1] = f * a[1] + b[1] c[2] = f * a[2] + b[2] @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef inline cython.floating L2_norm(const cython.floating[3] a) noexcept nogil: return sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/volume_container.pxd0000644000175100001770000000633414714401662021076 0ustar00runnerdocker""" A volume container """ cimport numpy as np cdef struct VolumeContainer: #----------------------------------------------------------------------------- # Encapsulates a volume container used for volume rendering. # # n_fields int : The number of fields available to the volume renderer. # data np.float64_t** : The data within the volume container. # mask np.uint8_t* : The mask of the volume container. It has dimensions one fewer in each direction than data. # left_edge np.float64_t[3] : The left edge of the volume container's bounding box. # right_edge np.float64_t[3] : The right edge of the volume container's bounding box. # np.float64_t dds[3] : The delta dimensions, such that dds[0] = ddx, dds[1] = ddy, dds[2] = ddz. # np.float64_t idds[3] : The inverse delta dimensions. Same as dds pattern, but the inverse. i.e. idds[0] = inv_ddx. # dims int[3] : The dimensions of the volume container. dims[0] = x, dims[1] = y, dims[2] = z. 
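    # (idds[i] == 1.0 / dds[i]; vc_pos_index below multiplies by idds rather
    # than dividing by dds.)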
    #-----------------------------------------------------------------------------
    int n_fields
    np.float64_t **data
    np.uint8_t *mask
    np.float64_t left_edge[3]
    np.float64_t right_edge[3]
    np.float64_t dds[3]
    np.float64_t idds[3]
    int dims[3]

cdef inline int vc_index(VolumeContainer *vc, int i, int j, int k):
    #-----------------------------------------------------------------------------
    # vc_index(VolumeContainer *vc, int i, int j, int k)
    #    vc VolumeContainer* : Pointer to the volume container to be indexed.
    #    i int : The first dimension coordinate.
    #    j int : The second dimension coordinate.
    #    k int : The third dimension coordinate.
    #
    # Returns the flattened (1-D) index into the volume container for the
    # coordinates i, j, k.  This is used for 3-dimensional access in a flat
    # container using C-ordering.  It is calculated by:
    #    vc_index = i * vc.dims[1] * vc.dims[2] + j * vc.dims[2] + k
    # and then simplified (as shown below) by factoring out one
    # multiplication operation.
    #
    # 2-dimensional example:
    #    A 4 x 3 array may be represented as:
    #        a = [0,  1,  2,
    #             3,  4,  5,
    #             6,  7,  8,
    #             9, 10, 11]
    #    or similarly, in a flat container in row-major order as:
    #        a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
    #
    #    To access the element at (1,1) in the flat container:
    #        i * dims[1] + j
    #        = 1 * 3 + 1
    #        = 4
    #    The 3-dimensional case (presented below) is similar.
    #-----------------------------------------------------------------------------
    return (i*vc.dims[1]+j)*vc.dims[2]+k

cdef inline int vc_pos_index(VolumeContainer *vc, np.float64_t *spos):
    cdef int index[3]
    cdef int i
    for i in range(3):
        index[i] = <int> ((spos[i] - vc.left_edge[i]) * vc.idds[i])
    return vc_index(vc, index[0], index[1], index[2])
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lib/write_array.pyx0000644000175100001770000000232314714401662020074 0ustar00runnerdocker"""
Faster, cythonized file IO
"""

import numpy as np

cimport cython
cimport numpy as np

DTYPE = np.float64
ctypedef np.float64_t DTYPE_t

@cython.boundscheck(False)
def write_3D_array(np.ndarray[DTYPE_t, ndim=3] data, fhandle):
    assert data.dtype == DTYPE
    cdef int Nx, Ny, Nz
    Nx = data.shape[0]
    Ny = data.shape[1]
    Nz = data.shape[2]
    cdef unsigned int i, j, k
    # Loop with the first axis varying fastest so values are written in
    # Fortran (column-major) order, one value per line.
    for i in range(Nz):
        for j in range(Ny):
            for k in range(Nx):
                fhandle.write(str(data[k, j, i]) + '\n')

@cython.boundscheck(False)
def write_3D_vector_array(np.ndarray[DTYPE_t, ndim=3] data_x,
                          np.ndarray[DTYPE_t, ndim=3] data_y,
                          np.ndarray[DTYPE_t, ndim=3] data_z,
                          fhandle):
    assert data_x.dtype == DTYPE
    cdef int Nx, Ny, Nz
    Nx = data_x.shape[0]
    Ny = data_x.shape[1]
    Nz = data_x.shape[2]
    cdef unsigned int i, j, k
    # Same traversal order as write_3D_array, three components per line.
    for i in range(Nz):
        for j in range(Ny):
            for k in range(Nx):
                fx = data_x[k, j, i]
                fy = data_y[k, j, i]
                fz = data_z[k, j, i]
                fhandle.write('{} {} {} \n'.format(fx, fy, fz))
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/linear_interpolators.py0000644000175100001770000003563214714401662021056 0ustar00runnerdockerimport numpy as np

import yt.utilities.lib.interpolators as lib
from yt.funcs import mylog


class UnilinearFieldInterpolator:
    def __init__(self, table, boundaries, field_names, truncate=False):
        r"""Initialize a 1D interpolator for field data.

        table : array
            The data table over which interpolation is performed.
        boundaries: tuple or array
            If a tuple, this should specify the lower and upper bounds
            for the bins of the data table.  This assumes the bins are
            evenly spaced.
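            (Internally a tuple is expanded with np.linspace into exactly
            table.shape[0] bin values, so the bin values are the coordinates
            of the table entries themselves.)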
If an array, this specifies the bins explicitly. field_names: str Name of the field to be used as input data for interpolation. truncate : bool If False, an exception is raised if the input values are outside the bounds of the table. If True, extrapolation is performed. Examples -------- ad = ds.all_data() table_data = np.random.random(64) interp = UnilinearFieldInterpolator(table_data, (0.0, 1.0), "x", truncate=True) field_data = interp(ad) """ self.table = table.astype("float64") self.truncate = truncate self.x_name = field_names if isinstance(boundaries, np.ndarray): if boundaries.size != table.shape[0]: mylog.error("Bins array not the same length as the data.") raise ValueError self.x_bins = boundaries else: x0, x1 = boundaries self.x_bins = np.linspace(x0, x1, table.shape[0], dtype="float64") def __call__(self, data_object): orig_shape = data_object[self.x_name].shape x_vals = data_object[self.x_name].ravel().astype("float64") x_i = (np.digitize(x_vals, self.x_bins) - 1).astype("int32") if np.any((x_i == -1) | (x_i == len(self.x_bins) - 1)): if not self.truncate: mylog.error( "Sorry, but your values are outside " "the table! Dunno what to do, so dying." ) mylog.error("Error was in: %s", data_object) raise ValueError else: x_i = np.minimum(np.maximum(x_i, 0), len(self.x_bins) - 2) my_vals = np.zeros(x_vals.shape, dtype="float64") lib.UnilinearlyInterpolate(self.table, x_vals, self.x_bins, x_i, my_vals) my_vals.shape = orig_shape return my_vals class BilinearFieldInterpolator: def __init__(self, table, boundaries, field_names, truncate=False): r"""Initialize a 2D interpolator for field data. table : array The data table over which interpolation is performed. boundaries: tuple Either a tuple of lower and upper bounds for the x and y bins given as (x0, x1, y0, y1) or a tuple of two arrays containing the x and y bins. field_names: list Names of the fields to be used as input data for interpolation. truncate : bool If False, an exception is raised if the input values are outside the bounds of the table. If True, extrapolation is performed. Examples -------- ad = ds.all_data() table_data = np.random.random((64, 64)) interp = BilinearFieldInterpolator(table_data, (0.0, 1.0, 0.0, 1.0), ["x", "y"], truncate=True) field_data = interp(ad) """ self.table = table.astype("float64") self.truncate = truncate self.x_name, self.y_name = field_names if len(boundaries) == 4: x0, x1, y0, y1 = boundaries self.x_bins = np.linspace(x0, x1, table.shape[0], dtype="float64") self.y_bins = np.linspace(y0, y1, table.shape[1], dtype="float64") elif len(boundaries) == 2: if boundaries[0].size != table.shape[0]: mylog.error("X bins array not the same length as the data.") raise ValueError if boundaries[1].size != table.shape[1]: mylog.error("Y bins array not the same length as the data.") raise ValueError self.x_bins = boundaries[0] self.y_bins = boundaries[1] else: mylog.error( "Boundaries must be given as (x0, x1, y0, y1) or as (x_bins, y_bins)" ) raise ValueError def __call__(self, data_object): orig_shape = data_object[self.x_name].shape x_vals = data_object[self.x_name].ravel().astype("float64") y_vals = data_object[self.y_name].ravel().astype("float64") x_i = (np.digitize(x_vals, self.x_bins) - 1).astype("int32") y_i = (np.digitize(y_vals, self.y_bins) - 1).astype("int32") if np.any((x_i == -1) | (x_i == len(self.x_bins) - 1)) or np.any( (y_i == -1) | (y_i == len(self.y_bins) - 1) ): if not self.truncate: mylog.error( "Sorry, but your values are outside " "the table! Dunno what to do, so dying." 
) mylog.error("Error was in: %s", data_object) raise ValueError else: x_i = np.minimum(np.maximum(x_i, 0), len(self.x_bins) - 2) y_i = np.minimum(np.maximum(y_i, 0), len(self.y_bins) - 2) my_vals = np.zeros(x_vals.shape, dtype="float64") lib.BilinearlyInterpolate( self.table, x_vals, y_vals, self.x_bins, self.y_bins, x_i, y_i, my_vals ) my_vals.shape = orig_shape return my_vals class TrilinearFieldInterpolator: def __init__(self, table, boundaries, field_names, truncate=False): r"""Initialize a 3D interpolator for field data. table : array The data table over which interpolation is performed. boundaries: tuple Either a tuple of lower and upper bounds for the x, y, and z bins given as (x0, x1, y0, y1, z0, z1) or a tuple of three arrays containing the x, y, and z bins. field_names: list Names of the fields to be used as input data for interpolation. truncate : bool If False, an exception is raised if the input values are outside the bounds of the table. If True, extrapolation is performed. Examples -------- ad = ds.all_data() table_data = np.random.random((64, 64, 64)) interp = TrilinearFieldInterpolator(table_data, (0.0, 1.0, 0.0, 1.0, 0.0, 1.0), ["x", "y", "z"], truncate=True) field_data = interp(ad) """ self.table = table.astype("float64") self.truncate = truncate self.x_name, self.y_name, self.z_name = field_names if len(boundaries) == 6: x0, x1, y0, y1, z0, z1 = boundaries self.x_bins = np.linspace(x0, x1, table.shape[0], dtype="float64") self.y_bins = np.linspace(y0, y1, table.shape[1], dtype="float64") self.z_bins = np.linspace(z0, z1, table.shape[2], dtype="float64") elif len(boundaries) == 3: if boundaries[0].size != table.shape[0]: mylog.error("X bins array not the same length as the data.") raise ValueError if boundaries[1].size != table.shape[1]: mylog.error("Y bins array not the same length as the data.") raise ValueError if boundaries[2].size != table.shape[2]: mylog.error("Z bins array not the same length as the data.") raise ValueError self.x_bins = boundaries[0] self.y_bins = boundaries[1] self.z_bins = boundaries[2] else: mylog.error( "Boundaries must be given as (x0, x1, y0, y1, z0, z1) " "or as (x_bins, y_bins, z_bins)" ) raise ValueError def __call__(self, data_object): orig_shape = data_object[self.x_name].shape x_vals = data_object[self.x_name].ravel().astype("float64") y_vals = data_object[self.y_name].ravel().astype("float64") z_vals = data_object[self.z_name].ravel().astype("float64") x_i = np.digitize(x_vals, self.x_bins).astype("int64") - 1 y_i = np.digitize(y_vals, self.y_bins).astype("int64") - 1 z_i = np.digitize(z_vals, self.z_bins).astype("int64") - 1 if ( np.any((x_i == -1) | (x_i == len(self.x_bins) - 1)) or np.any((y_i == -1) | (y_i == len(self.y_bins) - 1)) or np.any((z_i == -1) | (z_i == len(self.z_bins) - 1)) ): if not self.truncate: mylog.error( "Sorry, but your values are outside " "the table! Dunno what to do, so dying." ) mylog.error("Error was in: %s", data_object) raise ValueError else: x_i = np.minimum(np.maximum(x_i, 0), len(self.x_bins) - 2) y_i = np.minimum(np.maximum(y_i, 0), len(self.y_bins) - 2) z_i = np.minimum(np.maximum(z_i, 0), len(self.z_bins) - 2) my_vals = np.zeros(x_vals.shape, dtype="float64") lib.TrilinearlyInterpolate( self.table, x_vals, y_vals, z_vals, self.x_bins, self.y_bins, self.z_bins, x_i, y_i, z_i, my_vals, ) my_vals.shape = orig_shape return my_vals class QuadrilinearFieldInterpolator: def __init__(self, table, boundaries, field_names, truncate=False): r"""Initialize a 4D interpolator for field data. 
        table : array
            The data table over which interpolation is performed.
        boundaries: tuple
            Either a tuple of lower and upper bounds for the x, y, z, and w
            bins given as (x0, x1, y0, y1, z0, z1, w0, w1) or a tuple of four
            arrays containing the x, y, z, and w bins.
        field_names: list
            Names of the fields to be used as input data for interpolation.
        truncate : bool
            If False, an exception is raised if the input values are outside
            the bounds of the table. If True, extrapolation is performed.

        Examples
        --------
        ad = ds.all_data()
        table_data = np.random.random((64, 64, 64, 64))
        interp = QuadrilinearFieldInterpolator(
            table_data,
            (0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0),
            ["x", "y", "z", "w"],
            truncate=True,
        )
        field_data = interp(ad)
        """
        self.table = table.astype("float64")
        self.truncate = truncate
        self.x_name, self.y_name, self.z_name, self.w_name = field_names
        if len(boundaries) == 8:
            x0, x1, y0, y1, z0, z1, w0, w1 = boundaries
            self.x_bins = np.linspace(x0, x1, table.shape[0]).astype("float64")
            self.y_bins = np.linspace(y0, y1, table.shape[1]).astype("float64")
            self.z_bins = np.linspace(z0, z1, table.shape[2]).astype("float64")
            self.w_bins = np.linspace(w0, w1, table.shape[3]).astype("float64")
        elif len(boundaries) == 4:
            if boundaries[0].size != table.shape[0]:
                mylog.error("X bins array not the same length as the data.")
                raise ValueError
            if boundaries[1].size != table.shape[1]:
                mylog.error("Y bins array not the same length as the data.")
                raise ValueError
            if boundaries[2].size != table.shape[2]:
                mylog.error("Z bins array not the same length as the data.")
                raise ValueError
            if boundaries[3].size != table.shape[3]:
                mylog.error("W bins array not the same length as the data.")
                raise ValueError
            self.x_bins = boundaries[0]
            self.y_bins = boundaries[1]
            self.z_bins = boundaries[2]
            self.w_bins = boundaries[3]
        else:
            mylog.error(
                "Boundaries must be given as (x0, x1, y0, y1, z0, z1, w0, w1) "
                "or as (x_bins, y_bins, z_bins, w_bins)"
            )
            raise ValueError

    def __call__(self, data_object):
        orig_shape = data_object[self.x_name].shape
        x_vals = data_object[self.x_name].ravel().astype("float64")
        y_vals = data_object[self.y_name].ravel().astype("float64")
        z_vals = data_object[self.z_name].ravel().astype("float64")
        w_vals = data_object[self.w_name].ravel().astype("float64")
        x_i = np.digitize(x_vals, self.x_bins).astype("int64") - 1
        y_i = np.digitize(y_vals, self.y_bins).astype("int64") - 1
        z_i = np.digitize(z_vals, self.z_bins).astype("int64") - 1
        w_i = np.digitize(w_vals, self.w_bins).astype("int64") - 1
        if (
            np.any((x_i == -1) | (x_i == len(self.x_bins) - 1))
            or np.any((y_i == -1) | (y_i == len(self.y_bins) - 1))
            or np.any((z_i == -1) | (z_i == len(self.z_bins) - 1))
            or np.any((w_i == -1) | (w_i == len(self.w_bins) - 1))
        ):
            if not self.truncate:
                mylog.error(
                    "Sorry, but your values are outside "
                    "the table! Dunno what to do, so dying."
) mylog.error("Error was in: %s", data_object) raise ValueError else: x_i = np.minimum(np.maximum(x_i, 0), len(self.x_bins) - 2) y_i = np.minimum(np.maximum(y_i, 0), len(self.y_bins) - 2) z_i = np.minimum(np.maximum(z_i, 0), len(self.z_bins) - 2) w_i = np.minimum(np.maximum(w_i, 0), len(self.w_bins) - 2) my_vals = np.zeros(x_vals.shape, dtype="float64") lib.QuadrilinearlyInterpolate( self.table, x_vals, y_vals, z_vals, w_vals, self.x_bins, self.y_bins, self.z_bins, self.w_bins, x_i, y_i, z_i, w_i, my_vals, ) my_vals.shape = orig_shape return my_vals def get_centers(ds, filename, center_cols, radius_col, unit="1"): """ Return an iterator over EnzoSphere objects generated from the appropriate columns in *filename*. Optionally specify the *unit* radius is in. """ for line in open(filename): if line.startswith("#"): continue vals = line.split() x, y, z = (float(vals[i]) for i in center_cols) r = float(vals[radius_col]) yield ds.sphere([x, y, z], r / ds[unit]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/lodgeit.py0000644000175100001770000002424214714401662016241 0ustar00runnerdocker""" LodgeIt! ~~~~~~~~ A script that pastes stuff into the enzotools pastebin on paste.enzotools.org. Modified (very, very slightly) from the original script by the authors below. .lodgeitrc / _lodgeitrc ----------------------- Under UNIX create a file called ``~/.lodgeitrc``, under Windows create a file ``%APPDATA%/_lodgeitrc`` to override defaults:: language=default_language clipboard=true/false open_browser=true/false encoding=fallback_charset :authors: 2007-2008 Georg Brandl , 2006 Armin Ronacher , 2006 Matt Good , 2005 Raphael Slinckx """ import os import sys from optparse import OptionParser SCRIPT_NAME = os.path.basename(sys.argv[0]) VERSION = "0.3" SERVICE_URL = "http://paste.yt-project.org/" SETTING_KEYS = ["author", "title", "language", "private", "clipboard", "open_browser"] # global server proxy _xmlrpc_service = None def fail(msg, code): """Bail out with an error message.""" print(f"ERROR: {msg}", file=sys.stderr) sys.exit(code) def load_default_settings(): """Load the defaults from the lodgeitrc file.""" settings = { "language": None, "clipboard": True, "open_browser": False, "encoding": "iso-8859-15", } rcfile = None if os.name == "posix": rcfile = os.path.expanduser("~/.lodgeitrc") elif os.name == "nt" and "APPDATA" in os.environ: rcfile = os.path.expandvars(r"$APPDATA\_lodgeitrc") if rcfile: try: f = open(rcfile) for line in f: if line.strip()[:1] in "#;": continue p = line.split("=", 1) if len(p) == 2: key = p[0].strip().lower() if key in settings: if key in ("clipboard", "open_browser"): settings[key] = p[1].strip().lower() in ( "true", "1", "on", "yes", ) else: settings[key] = p[1].strip() f.close() except OSError: pass settings["tags"] = [] settings["title"] = None return settings def make_utf8(text, encoding): """Convert a text to UTF-8, brute-force.""" try: u = str(text, "utf-8") uenc = "utf-8" except UnicodeError: try: u = str(text, encoding) uenc = "utf-8" except UnicodeError: u = str(text, "iso-8859-15", "ignore") uenc = "iso-8859-15" try: import chardet except ImportError: return u.encode("utf-8") d = chardet.detect(text) if d["encoding"] == uenc: return u.encode("utf-8") return str(text, d["encoding"], "ignore").encode("utf-8") def get_xmlrpc_service(): """Create the XMLRPC server proxy and cache it.""" global _xmlrpc_service import xmlrpc.client if _xmlrpc_service is None: try: _xmlrpc_service = 
xmlrpc.client.ServerProxy( SERVICE_URL + "xmlrpc/", allow_none=True ) except Exception as err: fail(f"Could not connect to Pastebin: {err}", -1) return _xmlrpc_service def copy_url(url): """Copy the url into the clipboard.""" # try windows first try: import win32clipboard except ImportError: # then give pbcopy a try. do that before gtk because # gtk might be installed on os x but nobody is interested # in the X11 clipboard there. from subprocess import PIPE, Popen try: client = Popen(["pbcopy"], stdin=PIPE) except OSError: try: import pygtk pygtk.require("2.0") import gobject import gtk except ImportError: return gtk.clipboard_get(gtk.gdk.SELECTION_CLIPBOARD).set_text(url) gobject.idle_add(gtk.main_quit) gtk.main() else: client.stdin.write(url) client.stdin.close() client.wait() else: win32clipboard.OpenClipboard() win32clipboard.EmptyClipboard() win32clipboard.SetClipboardText(url) win32clipboard.CloseClipboard() def open_webbrowser(url): """Open a new browser window.""" import webbrowser webbrowser.open(url) def language_exists(language): """Check if a language alias exists.""" xmlrpc = get_xmlrpc_service() langs = xmlrpc.pastes.getLanguages() return language in langs def get_mimetype(data, filename): """Try to get MIME type from data.""" try: import gnomevfs except ImportError: from mimetypes import guess_type if filename: return guess_type(filename)[0] else: if filename: return gnomevfs.get_mime_type(os.path.abspath(filename)) return gnomevfs.get_mime_type_for_data(data) def print_languages(): """Print a list of all supported languages, with description.""" xmlrpc = get_xmlrpc_service() languages = xmlrpc.pastes.getLanguages().items() def cmp(x, y): # emulate Python2's builtin cmp function # https://docs.python.org/2.7/library/functions.html#cmp # https://docs.python.org/3/whatsnew/3.0.html#ordering-comparisons return (x > y) - (x < y) languages.sort(lambda a, b: cmp(a[1].lower(), b[1].lower())) print("Supported Languages:") for alias, name in languages: print(f" {alias:<30}{name}") def download_paste(uid): """Download a paste given by ID.""" xmlrpc = get_xmlrpc_service() paste = xmlrpc.pastes.getPaste(uid) if not paste: fail(f'Paste "{uid}" does not exist.', 5) code = paste["code"] print(code) def create_paste(code, language, filename, mimetype, private): """Create a new paste.""" xmlrpc = get_xmlrpc_service() rv = xmlrpc.pastes.newPaste(language, code, None, filename, mimetype, private) if not rv: fail("Could not create paste. 
Something went wrong on the server side.", 4) return rv def compile_paste(filenames, langopt): """Create a single paste out of zero, one or multiple files.""" def read_file(f): try: return f.read() finally: f.close() mime = "" lang = langopt or "" if not filenames: data = read_file(sys.stdin) if not langopt: mime = get_mimetype(data, "") or "" fname = "" elif len(filenames) == 1: fname = filenames[0] data = read_file(open(filenames[0], "rb")) if not langopt: mime = get_mimetype(data, filenames[0]) or "" else: result = [] for fname in filenames: data = read_file(open(fname, "rb")) if langopt: result.append(f"### {fname} [{langopt}]\n\n") else: result.append(f"### {fname}\n\n") result.append(data) result.append("\n\n") data = "".join(result) lang = "multi" return data, lang, fname, mime def main( filename, languages=False, language=None, encoding="utf-8", open_browser=False, private=False, clipboard=False, download=None, ): """Paste a given script into a pastebin using the Lodgeit tool.""" # usage = ('Usage: %%prog [options] [FILE ...]\n\n' # 'Read the files and paste their contents to %s.\n' # 'If no file is given, read from standard input.\n' # 'If multiple files are given, they are put into a single paste.' # % SERVICE_URL) # parser = OptionParser(usage=usage) # # settings = load_default_settings() # # parser.add_option('-v', '--version', action='store_true', # help='Print script version') # parser.add_option('-L', '--languages', action='store_true', default=False, # help='Retrieve a list of supported languages') # parser.add_option('-l', '--language', default=settings['language'], # help='Used syntax highlighter for the file') # parser.add_option('-e', '--encoding', default=settings['encoding'], # help='Specify the encoding of a file (default is ' # 'utf-8 or guessing if available)') # parser.add_option('-b', '--open-browser', dest='open_browser', # action='store_true', # default=settings['open_browser'], # help='Open the paste in a web browser') # parser.add_option('-p', '--private', action='store_true', default=False, # help='Paste as private') # parser.add_option('--no-clipboard', dest='clipboard', # action='store_false', # default=settings['clipboard'], # help="Don't copy the url into the clipboard") # parser.add_option('--download', metavar='UID', # help='Download a given paste') # # opts, args = parser.parse_args() # if languages: print_languages() return elif download: download_paste(download) return # check language if given if language and not language_exists(language): print(f"Language {language} is not supported.") return # load file(s) args = [filename] try: data, language, filename, mimetype = compile_paste(args, language) except Exception as err: fail(f"Error while reading the file(s): {err}", 2) if not data: fail("Aborted, no content to paste.", 4) # create paste code = make_utf8(data, encoding).decode("utf-8") pid = create_paste(code, language, filename, mimetype, private) url = f"{SERVICE_URL}show/{pid}/" print(url) if open_browser: open_webbrowser(url) if clipboard: copy_url(url) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/logger.py0000644000175100001770000001063414714401662016071 0ustar00runnerdockerimport logging import sys from collections.abc import Callable from yt.utilities.configure import YTConfig, configuration_callbacks _yt_sh: logging.StreamHandler | None = None _original_emitter: Callable[[logging.LogRecord], None] | None = None def set_log_level(level): """ Select which minimal logging 
level should be displayed. Parameters ---------- level: int or str Possible values by increasing level: 0 or "notset" 1 or "all" 10 or "debug" 20 or "info" 30 or "warning" 40 or "error" 50 or "critical" """ # this is a user-facing interface to avoid importing from yt.utilities in user code. if isinstance(level, str): level = level.upper() if level == "ALL": # non-standard alias level = 1 ytLogger.setLevel(level) ytLogger.debug("Set log level to %s", level) ytLogger = logging.getLogger("yt") class DuplicateFilter(logging.Filter): """A filter that removes duplicated successive log entries.""" # source # https://stackoverflow.com/questions/44691558/suppress-multiple-messages-with-same-content-in-python-logging-module-aka-log-co def filter(self, record): current_log = (record.module, record.levelno, record.msg, record.args) if current_log != getattr(self, "last_log", None): self.last_log = current_log return True return False ytLogger.addFilter(DuplicateFilter()) class DeprecatedFieldFilter(logging.Filter): """A filter that suppresses repeated logging of deprecated field warnings""" def __init__(self, name=""): self.logged_fields = [] super().__init__(name=name) def filter(self, record): if not record.msg.startswith("The Derived Field"): return True field = record.args[0] if field in self.logged_fields: return False self.logged_fields.append(field) return True ytLogger.addFilter(DeprecatedFieldFilter()) # This next bit is grabbed from: # http://stackoverflow.com/questions/384076/how-can-i-make-the-python-logging-output-to-be-colored def add_coloring_to_emit_ansi(fn): # add methods we need to the class def new(*args): levelno = args[0].levelno if levelno >= 50: color = "\x1b[31m" # red elif levelno >= 40: color = "\x1b[31m" # red elif levelno >= 30: color = "\x1b[33m" # yellow elif levelno >= 20: color = "\x1b[32m" # green elif levelno >= 10: color = "\x1b[35m" # pink else: color = "\x1b[0m" # normal ln = color + args[0].levelname + "\x1b[0m" args[0].levelname = ln return fn(*args) return new ufstring = "%(name)-3s: [%(levelname)-9s] %(asctime)s %(message)s" cfstring = "%(name)-3s: [%(levelname)-18s] %(asctime)s %(message)s" def colorize_logging(): f = logging.Formatter(cfstring) ytLogger.handlers[0].setFormatter(f) ytLogger.handlers[0].emit = add_coloring_to_emit_ansi(ytLogger.handlers[0].emit) def uncolorize_logging(): global _original_emitter, _yt_sh if None not in (_original_emitter, _yt_sh): f = logging.Formatter(ufstring) ytLogger.handlers[0].setFormatter(f) _yt_sh.emit = _original_emitter def disable_stream_logging(): if len(ytLogger.handlers) > 0: ytLogger.removeHandler(ytLogger.handlers[0]) h = logging.NullHandler() ytLogger.addHandler(h) def _runtime_configuration(ytcfg: YTConfig) -> None: # only run this at the end of yt.__init__, after yt.config.ytcfg was instantiated global _original_emitter, _yt_sh if ytcfg.get("yt", "stdout_stream_logging"): stream = sys.stdout else: stream = sys.stderr _level = min(max(ytcfg.get("yt", "log_level"), 0), 50) if ytcfg.get("yt", "suppress_stream_logging"): disable_stream_logging() else: _yt_sh = logging.StreamHandler(stream=stream) # create formatter and add it to the handlers formatter = logging.Formatter(ufstring) _yt_sh.setFormatter(formatter) # add the handler to the logger ytLogger.addHandler(_yt_sh) ytLogger.setLevel(_level) ytLogger.propagate = False _original_emitter = _yt_sh.emit if ytcfg.get("yt", "colored_logs"): colorize_logging() configuration_callbacks.append(_runtime_configuration) 
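A minimal usage sketch for the logging interface above (illustrative only: the messages and levels are arbitrary, and whether a record is actually printed also depends on the handler configuration performed in _runtime_configuration):

# Sketch: driving yt's logger with integer levels and the string aliases
# accepted by set_log_level.
from yt.utilities.logger import set_log_level, ytLogger

set_log_level("warning")      # case-insensitive string alias for level 30
ytLogger.info("suppressed")   # INFO (20) is below the threshold
ytLogger.error("passes")      # ERROR (40) clears the threshold

set_log_level("all")          # non-standard alias, mapped to level 1
ytLogger.debug("now clears the level filter")

set_log_level(20)             # plain integers work as well (INFO)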
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/math_utils.py0000644000175100001770000013736114714401662016772 0ustar00runnerdockerimport math import numpy as np from yt.units.yt_array import YTArray prec_accum = { int: np.int64, np.int8: np.int64, np.int16: np.int64, np.int32: np.int64, np.int64: np.int64, np.uint8: np.uint64, np.uint16: np.uint64, np.uint32: np.uint64, np.uint64: np.uint64, float: np.float64, np.float16: np.float64, np.float32: np.float64, np.float64: np.float64, complex: np.complex128, np.complex64: np.complex128, np.complex128: np.complex128, np.dtype("int"): np.int64, np.dtype("int8"): np.int64, np.dtype("int16"): np.int64, np.dtype("int32"): np.int64, np.dtype("int64"): np.int64, np.dtype("uint8"): np.uint64, np.dtype("uint16"): np.uint64, np.dtype("uint32"): np.uint64, np.dtype("uint64"): np.uint64, np.dtype("float"): np.float64, np.dtype("float16"): np.float64, np.dtype("float32"): np.float64, np.dtype("float64"): np.float64, np.dtype("complex"): np.complex128, np.dtype("complex64"): np.complex128, np.dtype("complex128"): np.complex128, } def periodic_position(pos, ds): r"""Assuming periodicity, find the periodic position within the domain. Parameters ---------- pos : array An array of floats. ds : ~yt.data_objects.static_output.Dataset A simulation static output. Examples -------- >>> a = np.array([1.1, 0.5, 0.5]) >>> data = {"Density": np.ones([32, 32, 32])} >>> ds = load_uniform_grid(data, [32, 32, 32], 1.0) >>> ppos = periodic_position(a, ds) >>> ppos array([ 0.1, 0.5, 0.5]) """ off = (pos - ds.domain_left_edge) % ds.domain_width return ds.domain_left_edge + off def periodic_dist(a, b, period, periodicity=(True, True, True)): r"""Find the Euclidean periodic distance between two sets of points. Parameters ---------- a : array or list Either an ndim long list of coordinates corresponding to a single point or an (ndim, npoints) list of coordinates for many points in space. b : array of list Either an ndim long list of coordinates corresponding to a single point or an (ndim, npoints) list of coordinates for many points in space. period : float or array or list If the volume is symmetrically periodic, this can be a single float, otherwise an array or list of floats giving the periodic size of the volume for each dimension. periodicity : An ndim-element tuple of booleans If an entry is true, the domain is assumed to be periodic along that direction. 
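    Returns
    -------
    dist : float or array
        The minimum-image (periodic) Euclidean distance between ``a`` and
        ``b``: a scalar for a single pair of points, otherwise an array of
        length npoints.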
    Examples
    --------
    >>> a = [0.1, 0.1, 0.1]
    >>> b = [0.9, 0.9, 0.9]
    >>> period = 1.0
    >>> dist = periodic_dist(a, b, 1.0)
    >>> dist
    0.346410161514
    """
    a = np.array(a)
    b = np.array(b)
    period = np.array(period)
    if period.size == 1:
        period = np.array([period, period, period])
    if a.shape != b.shape:
        raise RuntimeError("Arrays must be the same shape.")
    if period.shape != a.shape and len(a.shape) > 1:
        n_tup = tuple(1 for i in range(a.ndim - 1))
        period = np.tile(np.reshape(period, (a.shape[0],) + n_tup), (1,) + a.shape[1:])
    elif len(a.shape) == 1:
        a = np.reshape(a, (a.shape[0],) + (1, 1))
        b = np.reshape(b, (a.shape[0],) + (1, 1))
        period = np.reshape(period, (a.shape[0],) + (1, 1))
    c = np.empty((2,) + a.shape, dtype="float64")
    c[0, :] = np.abs(a - b)
    p_directions = [i for i, p in enumerate(periodicity) if p]
    np_directions = [i for i, p in enumerate(periodicity) if not p]
    for d in p_directions:
        c[1, d, :] = period[d, :] - np.abs(a - b)[d, :]
    for d in np_directions:
        c[1, d, :] = c[0, d, :]
    d = np.amin(c, axis=0) ** 2
    r2 = d.sum(axis=0)
    if r2.size == 1:
        return np.sqrt(r2[0, 0])
    return np.sqrt(r2)


def periodic_ray(start, end, left=None, right=None):
    """
    periodic_ray(start, end, left=None, right=None)

    Break up a periodic ray into non-periodic segments.
    Accepts start and end points of the periodic ray as YTArrays.
    Accepts optional left and right edges of the periodic volume as
    YTArrays.
    Returns a list of lists of coordinates, where each element of the
    top-most list is a 2-list of start coords and end coords of the
    non-periodic ray:
    [[[x0start, y0start, z0start], [x0end, y0end, z0end]],
     [[x1start, y1start, z1start], [x1end, y1end, z1end]],
     ...,]

    Parameters
    ----------
    start : array
        The starting coordinate of the ray.
    end : array
        The ending coordinate of the ray.
    left : optional, array
        The left corner of the periodic domain. If not given, an array of
        zeros with same size as the starting coordinate is used.
    right : optional, array
        The right corner of the periodic domain. If not given, an array of
        ones with same size as the starting coordinate is used.
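    Returns
    -------
    segments : list
        A list of ``[start, end]`` coordinate pairs, one for each
        non-periodic segment of the ray.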
Examples -------- >>> import yt >>> start = yt.YTArray([0.5, 0.5, 0.5]) >>> end = yt.YTArray([1.25, 1.25, 1.25]) >>> periodic_ray(start, end) [ [ YTArray([0.5, 0.5, 0.5]) (dimensionless), YTArray([1., 1., 1.]) (dimensionless) ], [ YTArray([0., 0., 0.]) (dimensionless), YTArray([0.25, 0.25, 0.25]) (dimensionless) ] ] """ if left is None: left = np.zeros(start.shape) if right is None: right = np.ones(start.shape) dim = right - left vector = end - start wall = np.zeros_like(start) close = np.zeros(start.shape, dtype=object) left_bound = vector < 0 right_bound = vector > 0 no_bound = vector == 0.0 bound = vector != 0.0 wall[left_bound] = left[left_bound] close[left_bound] = np.max wall[right_bound] = right[right_bound] close[right_bound] = np.min wall[no_bound] = np.inf close[no_bound] = np.min segments = [] this_start = start.copy() this_end = end.copy() t = 0.0 tolerance = 1e-6 while t < 1.0 - tolerance: hit_left = (this_start <= left) & (vector < 0) if (hit_left).any(): this_start[hit_left] += dim[hit_left] this_end[hit_left] += dim[hit_left] hit_right = (this_start >= right) & (vector > 0) if (hit_right).any(): this_start[hit_right] -= dim[hit_right] this_end[hit_right] -= dim[hit_right] nearest = vector.unit_array * np.array( [close[q]([this_end[q], wall[q]]) for q in range(start.size)] ) dt = ((nearest - this_start) / vector)[bound].min() now = this_start + vector * dt close_enough = np.abs(now - nearest) / np.abs(vector.max()) < 1e-10 now[close_enough] = nearest[close_enough] segments.append([this_start.copy(), now.copy()]) this_start = now.copy() t += dt return segments def euclidean_dist(a, b): r"""Find the Euclidean distance between two points. Parameters ---------- a : array or list Either an ndim long list of coordinates corresponding to a single point or an (ndim, npoints) list of coordinates for many points in space. b : array or list Either an ndim long list of coordinates corresponding to a single point or an (ndim, npoints) list of coordinates for many points in space. Examples -------- >>> a = [0.1, 0.1, 0.1] >>> b = [0.9, 0, 9, 0.9] >>> period = 1.0 >>> dist = euclidean_dist(a, b) >>> dist 1.38564064606 """ a = np.array(a) b = np.array(b) if a.shape != b.shape: RuntimeError("Arrays must be the same shape.") c = a.copy() np.subtract(c, b, c) np.power(c, 2, c) c = c.sum(axis=0) if isinstance(c, np.ndarray): np.sqrt(c, c) else: # This happens if a and b only have one entry. c = math.sqrt(c) return c def rotate_vector_3D(a, dim, angle): r"""Rotates the elements of an array around an axis by some angle. Given an array of 3D vectors a, this rotates them around a coordinate axis by a clockwise angle. An alternative way to think about it is the coordinate axes are rotated counterclockwise, which changes the directions of the vectors accordingly. Parameters ---------- a : array An array of 3D vectors with dimension Nx3. dim : integer A integer giving the axis around which the vectors will be rotated. (x, y, z) = (0, 1, 2). angle : float The angle in radians through which the vectors will be rotated clockwise. 
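    Returns
    -------
    rotated : array
        The rotated vector(s), with the same shape as the input ``a``.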
Examples -------- >>> a = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1], [3, 4, 5]]) >>> b = rotate_vector_3D(a, 2, np.pi / 2) >>> print(b) [[ 1.00000000e+00 -1.00000000e+00 0.00000000e+00] [ 6.12323400e-17 -1.00000000e+00 1.00000000e+00] [ 1.00000000e+00 6.12323400e-17 1.00000000e+00] [ 1.00000000e+00 -1.00000000e+00 1.00000000e+00] [ 4.00000000e+00 -3.00000000e+00 5.00000000e+00]] """ mod = False if len(a.shape) == 1: mod = True a = np.array([a]) if a.shape[1] != 3: raise ValueError("The second dimension of the array a must be == 3!") if dim == 0: R = np.array( [ [1, 0, 0], [0, np.cos(angle), np.sin(angle)], [0, -np.sin(angle), np.cos(angle)], ] ) elif dim == 1: R = np.array( [ [np.cos(angle), 0, -np.sin(angle)], [0, 1, 0], [np.sin(angle), 0, np.cos(angle)], ] ) elif dim == 2: R = np.array( [ [np.cos(angle), np.sin(angle), 0], [-np.sin(angle), np.cos(angle), 0], [0, 0, 1], ] ) else: raise ValueError("dim must be 0, 1, or 2!") if mod: return np.dot(R, a.T).T[0] else: return np.dot(R, a.T).T def modify_reference_frame(CoM, L, P=None, V=None): r"""Rotates and translates data into a new reference frame to make calculations easier. This is primarily useful for calculations of halo data. The data is translated into the center of mass frame. Next, it is rotated such that the angular momentum vector for the data is aligned with the z-axis. Put another way, if one calculates the angular momentum vector on the data that comes out of this function, it will always be along the positive z-axis. If the center of mass is re-calculated, it will be at the origin. Parameters ---------- CoM : array The center of mass in 3D. L : array The angular momentum vector. Optional -------- P : array The positions of the data to be modified (i.e. particle or grid cell positions). The array should be Nx3. V : array The velocities of the data to be modified (i.e. particle or grid cell velocities). The array should be Nx3. Returns ------- L : array The angular momentum vector equal to [0, 0, 1] modulo machine error. P : array The modified positional data. Only returned if P is not None V : array The modified velocity data. Only returned if V is not None Examples -------- >>> CoM = np.array([0.5, 0.5, 0.5]) >>> L = np.array([1, 0, 0]) >>> P = np.array([[1, 0.5, 0.5], [0, 0.5, 0.5], [0.5, 0.5, 0.5], [0, 0, 0]]) >>> V = p.copy() >>> LL, PP, VV = modify_reference_frame(CoM, L, P, V) >>> LL array([ 6.12323400e-17, 0.00000000e+00, 1.00000000e+00]) >>> PP array([[ 3.06161700e-17, 0.00000000e+00, 5.00000000e-01], [ -3.06161700e-17, 0.00000000e+00, -5.00000000e-01], [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00], [ 5.00000000e-01, -5.00000000e-01, -5.00000000e-01]]) >>> VV array([[ -5.00000000e-01, 5.00000000e-01, 1.00000000e+00], [ -5.00000000e-01, 5.00000000e-01, 3.06161700e-17], [ -5.00000000e-01, 5.00000000e-01, 5.00000000e-01], [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00]]) """ # First translate the positions to center of mass reference frame. if P is not None: P = P - CoM # is the L vector pointing in the Z direction? 
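    # If L has no x- or y-component, it is already (anti)parallel to the
    # z-axis; the rotation formulas below would divide by |(Lx, Ly)| = 0,
    # so that case is handled separately (at most a sign flip is needed).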
if np.inner(L[:2], L[:2]) == 0.0: # the reason why we need to explicitly check for the above # is that formula is used in denominator # this just checks if we need to flip the z axis or not if L[2] < 0.0: # this is just a simple flip in direction of the z axis if P is not None: P = -P if V is not None: V = -V # return the values if V is None and P is not None: return L, P elif P is None and V is not None: return L, V else: return L, P, V # Normal vector is not aligned with simulation Z axis # Therefore we are going to have to apply a rotation # Now find the angle between modified L and the x-axis. LL = L.copy() LL[2] = 0.0 theta = np.arccos(np.inner(LL, [1.0, 0.0, 0.0]) / np.inner(LL, LL) ** 0.5) if L[1] < 0.0: theta = -theta # Now rotate all the position, velocity, and L vectors by this much around # the z axis. if P is not None: P = rotate_vector_3D(P, 2, theta) if V is not None: V = rotate_vector_3D(V, 2, theta) L = rotate_vector_3D(L, 2, theta) # Now find the angle between L and the z-axis. theta = np.arccos(np.inner(L, [0.0, 0.0, 1.0]) / np.inner(L, L) ** 0.5) # This time we rotate around the y axis. if P is not None: P = rotate_vector_3D(P, 1, theta) if V is not None: V = rotate_vector_3D(V, 1, theta) L = rotate_vector_3D(L, 1, theta) # return the values if V is None and P is not None: return L, P elif P is None and V is not None: return L, V else: return L, P, V def compute_rotational_velocity(CoM, L, P, V): r"""Computes the rotational velocity for some data around an axis. This is primarily for halo computations. Given some data, this computes the circular rotational velocity of each point (particle) in reference to the axis defined by the angular momentum vector. This is accomplished by converting the reference frame of the center of mass of the halo. Parameters ---------- CoM : array The center of mass in 3D. L : array The angular momentum vector. P : array The positions of the data to be modified (i.e. particle or grid cell positions). The array should be Nx3. V : array The velocities of the data to be modified (i.e. particle or grid cell velocities). The array should be Nx3. Returns ------- v : array An array N elements long that gives the circular rotational velocity for each datum (particle). Examples -------- >>> CoM = np.array([0, 0, 0]) >>> L = np.array([0, 0, 1]) >>> P = np.array([[1, 0, 0], [1, 1, 1], [0, 0, 1], [1, 1, 0]]) >>> V = np.array([[0, 1, 10], [-1, -1, -1], [1, 1, 1], [1, -1, -1]]) >>> circV = compute_rotational_velocity(CoM, L, P, V) >>> circV array([ 1. , 0. , 0. , 1.41421356]) """ # First we translate into the simple coordinates. L, P, V = modify_reference_frame(CoM, L, P, V) # Find the vector in the plane of the galaxy for each position point # that is perpendicular to the radial vector. radperp = np.cross([0, 0, 1], P) # Find the component of the velocity along the radperp vector. # Unf., I don't think there's a better way to do this. res = np.empty(V.shape[0], dtype="float64") for i, rp in enumerate(radperp): temp = np.dot(rp, V[i]) / np.dot(rp, rp) * rp res[i] = np.dot(temp, temp) ** 0.5 return res def compute_parallel_velocity(CoM, L, P, V): r"""Computes the parallel velocity for some data around an axis. This is primarily for halo computations. Given some data, this computes the velocity component along the angular momentum vector. This is accomplished by converting the reference frame of the center of mass of the halo. Parameters ---------- CoM : array The center of mass in 3D. L : array The angular momentum vector. 
P : array The positions of the data to be modified (i.e. particle or grid cell positions). The array should be Nx3. V : array The velocities of the data to be modified (i.e. particle or grid cell velocities). The array should be Nx3. Returns ------- v : array An array N elements long that gives the parallel velocity for each datum (particle). Examples -------- >>> CoM = np.array([0, 0, 0]) >>> L = np.array([0, 0, 1]) >>> P = np.array([[1, 0, 0], [1, 1, 1], [0, 0, 1], [1, 1, 0]]) >>> V = np.array([[0, 1, 10], [-1, -1, -1], [1, 1, 1], [1, -1, -1]]) >>> paraV = compute_parallel_velocity(CoM, L, P, V) >>> paraV array([10, -1, 1, -1]) """ # First we translate into the simple coordinates. L, P, V = modify_reference_frame(CoM, L, P, V) # And return just the z-axis velocities. return V[:, 2] def compute_radial_velocity(CoM, L, P, V): r"""Computes the radial velocity for some data around an axis. This is primarily for halo computations. Given some data, this computes the radial velocity component for the data. This is accomplished by converting the reference frame of the center of mass of the halo. Parameters ---------- CoM : array The center of mass in 3D. L : array The angular momentum vector. P : array The positions of the data to be modified (i.e. particle or grid cell positions). The array should be Nx3. V : array The velocities of the data to be modified (i.e. particle or grid cell velocities). The array should be Nx3. Returns ------- v : array An array N elements long that gives the radial velocity for each datum (particle). Examples -------- >>> CoM = np.array([0, 0, 0]) >>> L = np.array([0, 0, 1]) >>> P = np.array([[1, 0, 0], [1, 1, 1], [0, 0, 1], [1, 1, 0]]) >>> V = np.array([[0, 1, 10], [-1, -1, -1], [1, 1, 1], [1, -1, -1]]) >>> radV = compute_radial_velocity(CoM, L, P, V) >>> radV array([ 1. , 1.41421356 , 0. , 0.]) """ # First we translate into the simple coordinates. L, P, V = modify_reference_frame(CoM, L, P, V) # We find the tangential velocity by dotting the velocity vector # with the cylindrical radial vector for this point. # Unf., I don't think there's a better way to do this. P[:, 2] = 0 res = np.empty(V.shape[0], dtype="float64") for i, rad in enumerate(P): temp = np.dot(rad, V[i]) / np.dot(rad, rad) * rad res[i] = np.dot(temp, temp) ** 0.5 return res def compute_cylindrical_radius(CoM, L, P, V): r"""Compute the radius for some data around an axis in cylindrical coordinates. This is primarily for halo computations. Given some data, this computes the cylindrical radius for each point. This is accomplished by converting the reference frame of the center of mass of the halo. Parameters ---------- CoM : array The center of mass in 3D. L : array The angular momentum vector. P : array The positions of the data to be modified (i.e. particle or grid cell positions). The array should be Nx3. V : array The velocities of the data to be modified (i.e. particle or grid cell velocities). The array should be Nx3. Returns ------- cyl_r : array An array N elements long that gives the radial velocity for each datum (particle). Examples -------- >>> CoM = np.array([0, 0, 0]) >>> L = np.array([0, 0, 1]) >>> P = np.array([[1, 0, 0], [1, 1, 1], [0, 0, 1], [1, 1, 0]]) >>> V = np.array([[0, 1, 10], [-1, -1, -1], [1, 1, 1], [1, -1, -1]]) >>> cyl_r = compute_cylindrical_radius(CoM, L, P, V) >>> cyl_r array([ 1. , 1.41421356, 0. , 1.41421356]) """ # First we translate into the simple coordinates. 
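    # After this call the angular momentum vector points along +z, so the
    # cylindrical radius reduces to the in-plane distance sqrt(x**2 + y**2).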
L, P, V = modify_reference_frame(CoM, L, P, V) # Demote all the positions to the z=0 plane, which makes the distance # calculation very easy. P[:, 2] = 0 return np.sqrt((P * P).sum(axis=1)) def ortho_find(vec1): r"""Find two complementary orthonormal vectors to a given vector. For any given non-zero vector, there are infinite pairs of vectors orthonormal to it. This function gives you one arbitrary pair from that set along with the normalized version of the original vector. Parameters ---------- vec1 : array_like An array or list to represent a 3-vector. Returns ------- vec1 : array The original 3-vector array after having been normalized. vec2 : array A 3-vector array which is orthonormal to vec1. vec3 : array A 3-vector array which is orthonormal to vec1 and vec2. Raises ------ ValueError If input vector is the zero vector. Notes ----- Our initial vector is `vec1` which consists of 3 components: `x1`, `y1`, and `z1`. ortho_find determines a vector, `vec2`, which is orthonormal to `vec1` by finding a vector which has a zero-value dot-product with `vec1`. .. math:: vec1 \cdot vec2 = x_1 x_2 + y_1 y_2 + z_1 z_2 = 0 As a starting point, we arbitrarily choose `vec2` to have `x2` = 1, `y2` = 0: .. math:: vec1 \cdot vec2 = x_1 + (z_1 z_2) = 0 \rightarrow z_2 = -(x_1 / z_1) Of course, this will fail if `z1` = 0, in which case, let's say use `z2` = 1 and `x2` = 0: .. math:: \rightarrow y_2 = -(z_1 / y_1) Similarly, if `y1` = 0, this case will fail, in which case we use `y2` = 1 and `z2` = 0: .. math:: \rightarrow x_2 = -(y_1 / x_1) Since we don't allow `vec1` to be zero, all cases are accounted for. Producing `vec3`, the complementary orthonormal vector to `vec1` and `vec2` is accomplished by simply taking the cross product of `vec1` and `vec2`. Examples -------- >>> a = [1.0, 2.0, 3.0] >>> a, b, c = ortho_find(a) >>> a array([ 0.26726124, 0.53452248, 0.80178373]) >>> b array([ 0.9486833 , 0. , -0.31622777]) >>> c array([-0.16903085, 0.84515425, -0.50709255]) """ vec1 = np.array(vec1, dtype=np.float64) # Normalize norm = np.sqrt(np.vdot(vec1, vec1)) if norm == 0: raise ValueError("Zero vector used as input.") vec1 /= norm x1 = vec1[0] y1 = vec1[1] z1 = vec1[2] if z1 != 0: x2 = 1.0 y2 = 0.0 z2 = -(x1 / z1) norm2 = (1.0 + z2**2.0) ** (0.5) elif y1 != 0: x2 = 0.0 z2 = 1.0 y2 = -(z1 / y1) norm2 = (1.0 + y2**2.0) ** (0.5) else: y2 = 1.0 z2 = 0.0 x2 = -(y1 / x1) norm2 = (1.0 + z2**2.0) ** (0.5) vec2 = np.array([x2, y2, z2]) vec2 /= norm2 vec3 = np.cross(vec1, vec2) return vec1, vec2, vec3 def quartiles(a, axis=None, out=None, overwrite_input=False): """ Compute the quartile values (25% and 75%) along the specified axis in the same way that the numpy.median calculates the median (50%) value alone a specified axis. Check numpy.median for details, as it is virtually the same algorithm. Returns an array of the quartiles of the array elements [lower quartile, upper quartile]. Parameters ---------- a : array_like Input array or object that can be converted to an array. axis : {None, int}, optional Axis along which the quartiles are computed. The default (axis=None) is to compute the quartiles along a flattened version of the array. out : ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type (of the output) will be cast if necessary. overwrite_input : {False, True}, optional If True, then allow use of memory of input array (a) for calculations. The input array will be modified by the call to quartiles. 
This will save memory when you do not need to preserve the contents of the input array. Treat the input as undefined, but it will probably be fully or partially sorted. Default is False. Note that, if `overwrite_input` is True and the input is not already an ndarray, an error will be raised. Returns ------- quartiles : ndarray A new 2D array holding the result (unless `out` is specified, in which case that array is returned instead). If the input contains integers, or floats of smaller precision than 64, then the output data-type is float64. Otherwise, the output data-type is the same as that of the input. See Also -------- numpy.median, numpy.mean, numpy.percentile Notes ----- Given a vector V of length N, the quartiles of V are the 25% and 75% values of a sorted copy of V, ``V_sorted`` - i.e., ``V_sorted[(N-1)/4]`` and ``3*V_sorted[(N-1)/4]``, when N is odd. When N is even, it is the average of the two values bounding these values of ``V_sorted``. Examples -------- >>> a = np.arange(100).reshape(10, 10) >>> a array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [10, 11, 12, 13, 14, 15, 16, 17, 18, 19], [20, 21, 22, 23, 24, 25, 26, 27, 28, 29], [30, 31, 32, 33, 34, 35, 36, 37, 38, 39], [40, 41, 42, 43, 44, 45, 46, 47, 48, 49], [50, 51, 52, 53, 54, 55, 56, 57, 58, 59], [60, 61, 62, 63, 64, 65, 66, 67, 68, 69], [70, 71, 72, 73, 74, 75, 76, 77, 78, 79], [80, 81, 82, 83, 84, 85, 86, 87, 88, 89], [90, 91, 92, 93, 94, 95, 96, 97, 98, 99]]) >>> mu.quartiles(a) array([ 24.5, 74.5]) >>> mu.quartiles(a, axis=0) array([[ 15., 16., 17., 18., 19., 20., 21., 22., 23., 24.], [ 65., 66., 67., 68., 69., 70., 71., 72., 73., 74.]]) >>> mu.quartiles(a, axis=1) array([[ 1.5, 11.5, 21.5, 31.5, 41.5, 51.5, 61.5, 71.5, 81.5, 91.5], [ 6.5, 16.5, 26.5, 36.5, 46.5, 56.5, 66.5, 76.5, 86.5, 96.5]]) """ if overwrite_input: if axis is None: a_sorted = sorted(a.ravel()) else: a.sort(axis=axis) a_sorted = a else: a_sorted = np.sort(a, axis=axis) if axis is None: axis = 0 indexer = [slice(None)] * a_sorted.ndim indices = [int(a_sorted.shape[axis] / 4), int(a_sorted.shape[axis] * 0.75)] result = [] for index in indices: if a_sorted.shape[axis] % 2 == 1: # index with slice to allow mean (below) to work indexer[axis] = slice(index, index + 1) else: indexer[axis] = slice(index - 1, index + 1) # special cases for small arrays if a_sorted.shape[axis] == 2: # index with slice to allow mean (below) to work indexer[axis] = slice(index, index + 1) # Use mean in odd and even case to coerce data type # and check, use out array. result.append(np.mean(a_sorted[tuple(indexer)], axis=axis, out=out)) return np.array(result) def get_perspective_matrix(fovy, aspect, z_near, z_far): """ Given a field of view in radians, an aspect ratio, and a near and far plane distance, this routine computes the transformation matrix corresponding to perspective projection using homogeneous coordinates. Parameters ---------- fovy : scalar The angle in degrees of the field of view. aspect : scalar The aspect ratio of width / height for the projection. z_near : scalar The distance of the near plane from the camera. z_far : scalar The distance of the far plane from the camera. Returns ------- persp_matrix : ndarray A new 4x4 2D array. Represents a perspective transformation in homogeneous coordinates. Note that this matrix does not actually perform the projection. After multiplying a 4D vector of the form (x_0, y_0, z_0, 1.0), the point will be transformed to some (x_1, y_1, z_1, w). 
The final projection is applied by performing a divide by w, that is (x_1/w, y_1/w, z_1/w, w/w). The matrix uses a row-major ordering, rather than the column major ordering typically used by OpenGL. Notes ----- The usage of 4D homogeneous coordinates is for OpenGL and GPU hardware that automatically performs the divide by w operation. See the following for more details about the OpenGL perspective matrices. https://www.tomdalling.com/blog/modern-opengl/explaining-homogenous-coordinates-and-projective-geometry/ http://www.songho.ca/opengl/gl_projectionmatrix.html """ tan_half_fovy = np.tan(np.radians(fovy) / 2) result = np.zeros((4, 4), dtype="float32", order="C") # result[0][0] = 1 / (aspect * tan_half_fovy) # result[1][1] = 1 / tan_half_fovy # result[2][2] = - (z_far + z_near) / (z_far - z_near) # result[3][2] = -1 # result[2][3] = -(2 * z_far * z_near) / (z_far - z_near) f = z_far n = z_near t = tan_half_fovy * n b = -t * aspect r = t * aspect l = -t * aspect result[0][0] = (2 * n) / (r - l) result[2][0] = (r + l) / (r - l) result[1][1] = (2 * n) / (t - b) result[1][2] = (t + b) / (t - b) result[2][2] = -(f + n) / (f - n) result[2][3] = -2 * f * n / (f - n) result[3][2] = -1 return result def get_orthographic_matrix(maxr, aspect, z_near, z_far): """ Given a field of view in radians, an aspect ratio, and a near and far plane distance, this routine computes the transformation matrix corresponding to perspective projection using homogeneous coordinates. Parameters ---------- maxr : scalar should be ``max(|x|, |y|)`` aspect : scalar The aspect ratio of width / height for the projection. z_near : scalar The distance of the near plane from the camera. z_far : scalar The distance of the far plane from the camera. Returns ------- persp_matrix : ndarray A new 4x4 2D array. Represents a perspective transformation in homogeneous coordinates. Note that this matrix does not actually perform the projection. After multiplying a 4D vector of the form (x_0, y_0, z_0, 1.0), the point will be transformed to some (x_1, y_1, z_1, w). The final projection is applied by performing a divide by w, that is (x_1/w, y_1/w, z_1/w, w/w). The matrix uses a row-major ordering, rather than the column major ordering typically used by OpenGL. Notes ----- The usage of 4D homogeneous coordinates is for OpenGL and GPU hardware that automatically performs the divide by w operation. See the following for more details about the OpenGL perspective matrices. http://www.scratchapixel.com/lessons/3d-basic-rendering/perspective-and-orthographic-projection-matrix/orthographic-projection-matrix https://www.tomdalling.com/blog/modern-opengl/explaining-homogenous-coordinates-and-projective-geometry/ http://www.songho.ca/opengl/gl_projectionmatrix.html """ r = maxr * aspect t = maxr l = -r b = -t result = np.zeros((4, 4), dtype="float32", order="C") result[0][0] = 2.0 / (r - l) result[1][1] = 2.0 / (t - b) result[2][2] = -2.0 / (z_far - z_near) result[3][3] = 1 result[3][0] = -(r + l) / (r - l) result[3][1] = -(t + b) / (t - b) result[3][2] = -(z_far + z_near) / (z_far - z_near) return result def get_lookat_matrix(eye, center, up): """ Given the position of a camera, the point it is looking at, and an up-direction. Computes the lookat matrix that moves all vectors such that the camera is at the origin of the coordinate system, looking down the z-axis. Parameters ---------- eye : array_like The position of the camera. Must be 3D. center : array_like The location that the camera is looking at. Must be 3D. 
up : array_like The direction that is considered up for the camera. Must be 3D. Returns ------- lookat_matrix : ndarray A new 4x4 2D array in homogeneous coordinates. This matrix moves all vectors in the same way required to move the camera to the origin of the coordinate system, with it pointing down the negative z-axis. """ eye = np.array(eye) center = np.array(center) up = np.array(up) f = (center - eye) / np.linalg.norm(center - eye) s = np.cross(f, up) / np.linalg.norm(np.cross(f, up)) u = np.cross(s, f) result = np.zeros((4, 4), dtype="float32", order="C") result[0][0] = s[0] result[0][1] = s[1] result[0][2] = s[2] result[1][0] = u[0] result[1][1] = u[1] result[1][2] = u[2] result[2][0] = -f[0] result[2][1] = -f[1] result[2][2] = -f[2] result[0][3] = -np.dot(s, eye) result[1][3] = -np.dot(u, eye) result[2][3] = np.dot(f, eye) result[3][3] = 1.0 return result def get_translate_matrix(dx, dy, dz): """ Given a movement amount for each coordinate, creates a translation matrix that moves the vector by each amount. Parameters ---------- dx : scalar A translation amount for the x-coordinate dy : scalar A translation amount for the y-coordinate dz : scalar A translation amount for the z-coordinate Returns ------- trans_matrix : ndarray A new 4x4 2D array. Represents a translation by dx, dy and dz in each coordinate respectively. """ result = np.zeros((4, 4), dtype="float32", order="C") result[0][0] = 1.0 result[1][1] = 1.0 result[2][2] = 1.0 result[3][3] = 1.0 result[0][3] = dx result[1][3] = dy result[2][3] = dz return result def get_scale_matrix(dx, dy, dz): """ Given a scaling factor for each coordinate, returns a matrix that corresponds to the given scaling amounts. Parameters ---------- dx : scalar A scaling factor for the x-coordinate. dy : scalar A scaling factor for the y-coordinate. dz : scalar A scaling factor for the z-coordinate. Returns ------- scale_matrix : ndarray A new 4x4 2D array. Represents a scaling by dx, dy, and dz in each coordinate respectively. """ result = np.zeros((4, 4), dtype="float32", order="C") result[0][0] = dx result[1][1] = dy result[2][2] = dz result[3][3] = 1 return result def get_rotation_matrix(theta, rot_vector): """ Given an angle theta and a 3D vector rot_vector, this routine computes the rotation matrix corresponding to rotating theta radians about rot_vector. Parameters ---------- theta : scalar The angle in radians. rot_vector : array_like The axis of rotation. Must be 3D. Returns ------- rot_matrix : ndarray A new 3x3 2D array. This is the representation of a rotation of theta radians about rot_vector in the simulation box coordinate frame See Also -------- ortho_find Examples -------- >>> a = [0, 1, 0] >>> theta = 0.785398163 # pi/4 >>> rot = mu.get_rotation_matrix(theta, a) >>> rot array([[ 0.70710678, 0. , 0.70710678], [ 0. , 1. , 0. ], [-0.70710678, 0. , 0.70710678]]) >>> np.dot(rot, a) array([ 0., 1., 0.]) # since a is an eigenvector by construction >>> np.dot(rot, [1, 0, 0]) array([ 0.70710678, 0. , -0.70710678]) """ ux = rot_vector[0] uy = rot_vector[1] uz = rot_vector[2] cost = np.cos(theta) sint = np.sin(theta) R = np.array( [ [ cost + ux**2 * (1 - cost), ux * uy * (1 - cost) - uz * sint, ux * uz * (1 - cost) + uy * sint, ], [ uy * ux * (1 - cost) + uz * sint, cost + uy**2 * (1 - cost), uy * uz * (1 - cost) - ux * sint, ], [ uz * ux * (1 - cost) - uy * sint, uz * uy * (1 - cost) + ux * sint, cost + uz**2 * (1 - cost), ], ] ) return R def quaternion_mult(q1, q2): """ Multiply two quaternions. 
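    Quaternion multiplication is not commutative: in general,
    quaternion_mult(q1, q2) != quaternion_mult(q2, q1).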
The inputs are 4-component numpy arrays in the order [w, x, y, z]. """ w = q1[0] * q2[0] - q1[1] * q2[1] - q1[2] * q2[2] - q1[3] * q2[3] x = q1[0] * q2[1] + q1[1] * q2[0] + q1[2] * q2[3] - q1[3] * q2[2] y = q1[0] * q2[2] + q1[2] * q2[0] + q1[3] * q2[1] - q1[1] * q2[3] z = q1[0] * q2[3] + q1[3] * q2[0] + q1[1] * q2[2] - q1[2] * q2[1] return np.array([w, x, y, z]) def quaternion_to_rotation_matrix(quaternion): """ This converts a quaternion representation of on orientation to a rotation matrix. The input is a 4-component numpy array in the order [w, x, y, z], and the output is a 3x3 matrix stored as a 2D numpy array. We follow the approach in "3D Math Primer for Graphics and Game Development" by Dunn and Parberry. """ w = quaternion[0] x = quaternion[1] y = quaternion[2] z = quaternion[3] R = np.empty((3, 3), dtype=np.float64) R[0][0] = 1.0 - 2.0 * y**2 - 2.0 * z**2 R[0][1] = 2.0 * x * y + 2.0 * w * z R[0][2] = 2.0 * x * z - 2.0 * w * y R[1][0] = 2.0 * x * y - 2.0 * w * z R[1][1] = 1.0 - 2.0 * x**2 - 2.0 * z**2 R[1][2] = 2.0 * y * z + 2.0 * w * x R[2][0] = 2.0 * x * z + 2.0 * w * y R[2][1] = 2.0 * y * z - 2.0 * w * x R[2][2] = 1.0 - 2.0 * x**2 - 2.0 * y**2 return R def rotation_matrix_to_quaternion(rot_matrix): """ Convert a rotation matrix-based representation of an orientation to a quaternion. The input should be a 3x3 rotation matrix, while the output will be a 4-component numpy array. We follow the approach in "3D Math Primer for Graphics and Game Development" by Dunn and Parberry. """ m11 = rot_matrix[0][0] m12 = rot_matrix[0][1] m13 = rot_matrix[0][2] m21 = rot_matrix[1][0] m22 = rot_matrix[1][1] m23 = rot_matrix[1][2] m31 = rot_matrix[2][0] m32 = rot_matrix[2][1] m33 = rot_matrix[2][2] four_w_squared_minus_1 = m11 + m22 + m33 four_x_squared_minus_1 = m11 - m22 - m33 four_y_squared_minus_1 = m22 - m11 - m33 four_z_squared_minus_1 = m33 - m11 - m22 max_index = 0 four_max_squared_minus_1 = four_w_squared_minus_1 if four_x_squared_minus_1 > four_max_squared_minus_1: four_max_squared_minus_1 = four_x_squared_minus_1 max_index = 1 if four_y_squared_minus_1 > four_max_squared_minus_1: four_max_squared_minus_1 = four_y_squared_minus_1 max_index = 2 if four_z_squared_minus_1 > four_max_squared_minus_1: four_max_squared_minus_1 = four_z_squared_minus_1 max_index = 3 max_val = 0.5 * np.sqrt(four_max_squared_minus_1 + 1.0) mult = 0.25 / max_val if max_index == 0: w = max_val x = (m23 - m32) * mult y = (m31 - m13) * mult z = (m12 - m21) * mult elif max_index == 1: x = max_val w = (m23 - m32) * mult y = (m12 + m21) * mult z = (m31 + m13) * mult elif max_index == 2: y = max_val w = (m31 - m13) * mult x = (m12 + m21) * mult z = (m23 + m32) * mult elif max_index == 3: z = max_val w = (m12 - m21) * mult x = (m31 + m13) * mult y = (m23 + m32) * mult return np.array([w, x, y, z]) def get_sph_r(coords): # The spherical coordinates radius is simply the magnitude of the # coordinate vector. return np.sqrt(np.sum(coords**2, axis=0)) def resize_vector(vector, vector_array): if len(vector_array.shape) == 4: res_vector = np.resize(vector, (3, 1, 1, 1)) else: res_vector = np.resize(vector, (3, 1)) return res_vector def normalize_vector(vector): # this function normalizes # an input vector L2 = np.atleast_1d(np.linalg.norm(vector)) L2[L2 == 0] = 1.0 vector = vector / L2 return vector def get_sph_theta(coords, normal): # The angle (theta) with respect to the normal (J), is the arccos # of the dot product of the normal with the normalized coordinate # vector. 
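    # That is, theta = arccos((J . r) / |r|) with |J| = 1, giving values
    # in [0, pi]: 0 along the normal, pi anti-parallel to it.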
    res_normal = resize_vector(normal, coords)

    # normalize the normal vector, since the arccos expression below
    # assumes that it is a unit vector
    res_normal = normalize_vector(res_normal)

    tile_shape = [1] + list(coords.shape)[1:]

    J = np.tile(res_normal, tile_shape)

    JdotCoords = np.sum(J * coords, axis=0)

    with np.errstate(invalid="ignore"):
        ret = np.arccos(JdotCoords / np.sqrt(np.sum(coords**2, axis=0)))

    ret[np.isnan(ret)] = 0

    return ret


def get_sph_phi(coords, normal):
    # We have freedom with respect to what axis (xprime) to define
    # the disk angle. Here I've chosen to use the axis that is
    # perpendicular to the normal and the y-axis. When normal ==
    # y-hat, then set xprime = z-hat. With this definition, when
    # normal == z-hat (as is typical), then xprime == x-hat.
    #
    # The angle is then given by the arctan of the ratio of the
    # yprime-component and the xprime-component of the coordinate
    # vector.

    normal = normalize_vector(normal)
    (zprime, xprime, yprime) = ortho_find(normal)

    res_xprime = resize_vector(xprime, coords)
    res_yprime = resize_vector(yprime, coords)

    tile_shape = [1] + list(coords.shape)[1:]
    Jx = np.tile(res_xprime, tile_shape)
    Jy = np.tile(res_yprime, tile_shape)

    Px = np.sum(Jx * coords, axis=0)
    Py = np.sum(Jy * coords, axis=0)

    return np.arctan2(Py, Px)


def get_cyl_r(coords, normal):
    # The cross product of the normal (J) with a coordinate vector
    # gives a vector of magnitude equal to the cylindrical radius.

    res_normal = resize_vector(normal, coords)
    res_normal = normalize_vector(res_normal)
    tile_shape = [1] + list(coords.shape)[1:]
    J = np.tile(res_normal, tile_shape)

    JcrossCoords = np.cross(J, coords, axisa=0, axisb=0, axisc=0)
    return np.sqrt(np.sum(JcrossCoords**2, axis=0))


def get_cyl_z(coords, normal):
    # The dot product of the normal (J) with the coordinate vector
    # gives the cylindrical height.
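    # (Illustrative): with normal = [0, 0, 1], the point (3, 4, 5) has
    # cylindrical height z = 5 here and cylindrical radius r = 5 from
    # get_cyl_r above, since |(0, 0, 1) x (3, 4, 5)| = |(-4, 3, 0)| = 5.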
res_normal = resize_vector(normal, coords) res_normal = normalize_vector(res_normal) tile_shape = [1] + list(coords.shape)[1:] J = np.tile(res_normal, tile_shape) return np.sum(J * coords, axis=0) def get_cyl_theta(coords, normal): # This is identical to the spherical phi component return get_sph_phi(coords, normal) def get_cyl_r_component(vectors, theta, normal): # The r of a vector is the vector dotted with rhat normal = normalize_vector(normal) (zprime, xprime, yprime) = ortho_find(normal) res_xprime = resize_vector(xprime, vectors) res_yprime = resize_vector(yprime, vectors) tile_shape = [1] + list(vectors.shape)[1:] Jx = np.tile(res_xprime, tile_shape) Jy = np.tile(res_yprime, tile_shape) rhat = Jx * np.cos(theta) + Jy * np.sin(theta) return np.sum(vectors * rhat, axis=0) def get_cyl_theta_component(vectors, theta, normal): # The theta component of a vector is the vector dotted with thetahat normal = normalize_vector(normal) (zprime, xprime, yprime) = ortho_find(normal) res_xprime = resize_vector(xprime, vectors) res_yprime = resize_vector(yprime, vectors) tile_shape = [1] + list(vectors.shape)[1:] Jx = np.tile(res_xprime, tile_shape) Jy = np.tile(res_yprime, tile_shape) thetahat = -Jx * np.sin(theta) + Jy * np.cos(theta) return np.sum(vectors * thetahat, axis=0) def get_cyl_z_component(vectors, normal): # The z component of a vector is the vector dotted with zhat normal = normalize_vector(normal) (zprime, xprime, yprime) = ortho_find(normal) res_zprime = resize_vector(zprime, vectors) tile_shape = [1] + list(vectors.shape)[1:] zhat = np.tile(res_zprime, tile_shape) return np.sum(vectors * zhat, axis=0) def get_sph_r_component(vectors, theta, phi, normal): # The r component of a vector is the vector dotted with rhat normal = normalize_vector(normal) (zprime, xprime, yprime) = ortho_find(normal) res_xprime = resize_vector(xprime, vectors) res_yprime = resize_vector(yprime, vectors) res_zprime = resize_vector(zprime, vectors) tile_shape = [1] + list(vectors.shape)[1:] Jx, Jy, Jz = ( YTArray(np.tile(rprime, tile_shape), "") for rprime in (res_xprime, res_yprime, res_zprime) ) rhat = ( Jx * np.sin(theta) * np.cos(phi) + Jy * np.sin(theta) * np.sin(phi) + Jz * np.cos(theta) ) return np.sum(vectors * rhat, axis=0) def get_sph_phi_component(vectors, phi, normal): # The phi component of a vector is the vector dotted with phihat normal = normalize_vector(normal) (zprime, xprime, yprime) = ortho_find(normal) res_xprime = resize_vector(xprime, vectors) res_yprime = resize_vector(yprime, vectors) tile_shape = [1] + list(vectors.shape)[1:] Jx = YTArray(np.tile(res_xprime, tile_shape), "") Jy = YTArray(np.tile(res_yprime, tile_shape), "") phihat = -Jx * np.sin(phi) + Jy * np.cos(phi) return np.sum(vectors * phihat, axis=0) def get_sph_theta_component(vectors, theta, phi, normal): # The theta component of a vector is the vector dotted with thetahat normal = normalize_vector(normal) (zprime, xprime, yprime) = ortho_find(normal) res_xprime = resize_vector(xprime, vectors) res_yprime = resize_vector(yprime, vectors) res_zprime = resize_vector(zprime, vectors) tile_shape = [1] + list(vectors.shape)[1:] Jx, Jy, Jz = ( YTArray(np.tile(rprime, tile_shape), "") for rprime in (res_xprime, res_yprime, res_zprime) ) thetahat = ( Jx * np.cos(theta) * np.cos(phi) + Jy * np.cos(theta) * np.sin(phi) - Jz * np.sin(theta) ) return np.sum(vectors * thetahat, axis=0) def compute_stddev_image(buff2, buff): """ This function computes the standard deviation of a weighted projection. 
    It defines the standard deviation as sigma^2 = <v^2> - <v>^2, where the
    angle brackets indicate averages (with the weight) and v is the field in
    question. There is the possibility that if the projection at a particular
    location is of a constant or a single cell/particle, then <v^2> = <v>^2
    and instead of getting zero one gets roundoff error that results in
    sigma^2 < 0, which is unphysical. We handle this here by performing the
    subtraction and checking that any and all negative values of sigma^2 can
    be attributed to roundoff and thus be safely set to zero. We error out if
    this is not the case.

    There are ways of computing the standard deviation that are designed to
    avoid this catastrophic cancellation, but this would require rather
    substantial and invasive changes to the projection machinery so for the
    time being it is avoided.

    Parameters
    ----------
    buff2 : ImageArray
        The image of the weighted average of the field squared
    buff : ImageArray
        The image of the weighted average of the field
    """
    buff1 = buff * buff
    y = buff2 - buff1
    close_negs = np.isclose(np.asarray(buff2), np.asarray(buff1))[y < 0.0]
    if close_negs.all():
        y[y < 0.0] = 0.0
    else:
        raise ValueError(
            "Something went wrong, there are significant negative "
            "variances in the standard deviation image!"
        )
    return np.sqrt(y)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/mesh_code_generation.py0000644000175100001770000001333314714401662020752 0ustar00runnerdockerimport yaml
from sympy import Matrix, MatrixSymbol, ccode, diff, symarray

# define some templates used below
fun_signature = """cdef void %s(double* fx, double* x, double* vertices, double* phys_x) noexcept nogil"""

fun_dec_template = fun_signature + " \n"
fun_def_template = (
    """@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True) \n"""
    + fun_signature
    + ": \n"
)

jac_signature_3D = """cdef void %s(double* rcol, double* scol, double* tcol, double* x, double* vertices, double* phys_x) noexcept nogil"""

jac_dec_template_3D = jac_signature_3D + " \n"
jac_def_template_3D = (
    """@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True) \n"""
    + jac_signature_3D
    + ": \n"
)

jac_signature_2D = """cdef void %s(double* rcol, double* scol, double* x, double* vertices, double* phys_x) noexcept nogil"""
jac_dec_template_2D = jac_signature_2D + " \n"
jac_def_template_2D = (
    """@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True) \n"""
    + jac_signature_2D
    + ": \n"
)

file_header = (
    "# This file contains auto-generated functions for sampling \n"
    + "# inside finite element solutions for various mesh types. \n"
    + "# To see how the code generation works in detail, see \n"
    + "# yt/utilities/mesh_code_generation.py. \n"
)


class MeshCodeGenerator:
    """
    A class for automatically generating the functions and jacobians used for
    sampling inside finite element calculations.
    """

    def __init__(self, mesh_data):
        """
        Mesh data should be a dictionary containing information about the
        type of elements used. See yt/utilities/mesh_types.yaml for more
        information.
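        For example (mirroring the Quad4 entry of that file), a minimal
        mesh_data dictionary looks like::

            mesh_data = {
                "mesh_type": "Q1",
                "num_dim": 2,
                "num_vertices": 4,
                "num_mapped_coords": 2,
                "shape_functions": "[(1 - x[0])*(1 - x[1])/4., "
                "(1 + x[0])*(1 - x[1])/4., "
                "(1 + x[0])*(1 + x[1])/4., "
                "(1 - x[0])*(1 + x[1])/4.]",
            }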
""" self.mesh_type = mesh_data["mesh_type"] self.num_dim = mesh_data["num_dim"] self.num_vertices = mesh_data["num_vertices"] self.num_mapped_coords = mesh_data["num_mapped_coords"] x = MatrixSymbol("x", self.num_mapped_coords, 1) self.x = x self.N = Matrix(eval(mesh_data["shape_functions"])) self._compute_jacobian() def _compute_jacobian(self): assert self.num_vertices == len(self.N) assert self.num_dim == self.num_mapped_coords self.X = MatrixSymbol("vertices", self.num_vertices, self.num_dim) self.fx = MatrixSymbol("fx", self.num_dim, 1) self.physical_position = MatrixSymbol("phys_x", self.num_dim, 1) self.f = (self.N.T * Matrix(self.X)).T - self.physical_position self.J = symarray("J", (self.num_dim, self.num_dim)) for i in range(self.num_dim): for j, var in enumerate(self.x): self.J[i][j] = diff(self.f[i, 0], var) self.rcol = MatrixSymbol("rcol", self.num_dim, 1) self.scol = MatrixSymbol("scol", self.num_dim, 1) self.tcol = MatrixSymbol("tcol", self.num_dim, 1) self.function_name = "%sFunction%dD" % (self.mesh_type, self.num_dim) self.function_header = fun_def_template % self.function_name self.function_declaration = fun_dec_template % self.function_name self.jacobian_name = "%sJacobian%dD" % (self.mesh_type, self.num_dim) if self.num_dim == 3: self.jacobian_header = jac_def_template_3D % self.jacobian_name self.jacobian_declaration = jac_dec_template_3D % self.jacobian_name elif self.num_dim == 2: self.jacobian_header = jac_def_template_2D % self.jacobian_name self.jacobian_declaration = jac_dec_template_2D % self.jacobian_name def get_interpolator_definition(self): """ This returns the function definitions for the given mesh type. """ function_code = self.function_header for i in range(self.num_dim): function_code += "\t" + ccode(self.f[i, 0], self.fx[i, 0]) + "\n" jacobian_code = self.jacobian_header for i in range(self.num_dim): jacobian_code += "\t" + ccode(self.J[i, 0], self.rcol[i, 0]) + "\n" jacobian_code += "\t" + ccode(self.J[i, 1], self.scol[i, 0]) + "\n" if self.num_dim == 2: continue jacobian_code += "\t" + ccode(self.J[i, 2], self.tcol[i, 0]) + "\n" return function_code, jacobian_code def get_interpolator_declaration(self): """ This returns the function declarations for the given mesh type. 
""" return self.function_declaration, self.jacobian_declaration if __name__ == "__main__": with open("mesh_types.yaml") as f: lines = f.read() mesh_types = yaml.load(lines, Loader=yaml.FullLoader) pxd_file = open("lib/autogenerated_element_samplers.pxd", "w") pyx_file = open("lib/autogenerated_element_samplers.pyx", "w") pyx_file.write(file_header) pyx_file.write("\n \n") pyx_file.write("cimport cython \n") pyx_file.write("from libc.math cimport pow \n") pyx_file.write("\n \n") for _, mesh_data in sorted(mesh_types.items()): codegen = MeshCodeGenerator(mesh_data) function_code, jacobian_code = codegen.get_interpolator_definition() function_decl, jacobian_decl = codegen.get_interpolator_declaration() pxd_file.write(function_decl) pxd_file.write("\n \n") pxd_file.write(jacobian_decl) pxd_file.write("\n \n") pyx_file.write(function_code) pyx_file.write("\n \n") pyx_file.write(jacobian_code) pyx_file.write("\n \n") pxd_file.close() pyx_file.close() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/mesh_types.yaml0000644000175100001770000000462314714401662017305 0ustar00runnerdockerHex8: mesh_type: Q1 num_dim: 3 num_vertices: 8 num_mapped_coords: 3 shape_functions: | [(1 - x[0])*(1 - x[1])*(1 - x[2])/8., (1 + x[0])*(1 - x[1])*(1 - x[2])/8., (1 + x[0])*(1 + x[1])*(1 - x[2])/8., (1 - x[0])*(1 + x[1])*(1 - x[2])/8., (1 - x[0])*(1 - x[1])*(1 + x[2])/8., (1 + x[0])*(1 - x[1])*(1 + x[2])/8., (1 + x[0])*(1 + x[1])*(1 + x[2])/8., (1 - x[0])*(1 + x[1])*(1 + x[2])/8.] Quad4: mesh_type: Q1 num_dim: 2 num_vertices: 4 num_mapped_coords: 2 shape_functions: | [(1 - x[0])*(1 - x[1])/4., (1 + x[0])*(1 - x[1])/4., (1 + x[0])*(1 + x[1])/4., (1 - x[0])*(1 + x[1])/4.] Quad9: mesh_type: Q2 num_dim: 2 num_vertices: 9 num_mapped_coords: 2 shape_functions: | [x[0] * (x[0] - 1) * x[1] * (x[1] - 1) / 4., x[0] * (x[0] + 1) * x[1] * (x[1] - 1) / 4., x[0] * (x[0] + 1) * x[1] * (x[1] + 1) / 4., x[0] * (x[0] - 1) * x[1] * (x[1] + 1) / 4., (x[0] + 1) * (x[0] - 1) * x[1] * (x[1] - 1) / -2., x[0] * (x[0] + 1) * (x[1] + 1) * (x[1] - 1) / -2., (x[0] + 1) * (x[0] - 1) * x[1] * (x[1] + 1) / -2., x[0] * (x[0] - 1) * (x[1] + 1) * (x[1] - 1) / -2., (x[0] + 1) * (x[0] - 1) * (x[1] + 1) * (x[1] - 1)] Wedge6: mesh_type: W1 num_dim: 3 num_vertices: 6 num_mapped_coords: 3 shape_functions: | [(1 - x[0] - x[1]) * (1 - x[2]) / 2., x[0] * (1 - x[2]) / 2., x[1] * (1 - x[2]) / 2., (1 - x[0] - x[1]) * (1 + x[2]) / 2., x[0] * (1 + x[2]) / 2., x[1] * (1 + x[2]) / 2.] 
Tri6: mesh_type: T2 num_dim: 2 num_vertices: 6 num_mapped_coords: 2 shape_functions: | [1 - 3 * x[0] + 2 * x[0]**2 - 3 * x[1] + 2 * x[1]**2 + 4 * x[0] * x[1], -x[0] + 2 * x[0]**2, -x[1] + 2 * x[1]**2, 4 * x[0] - 4 * x[0]**2 - 4 * x[0] * x[1], 4 * x[0] * x[1], 4 * x[1] - 4 * x[1]**2 - 4 * x[0] * x[1]] Tet10: mesh_type: Tet2 num_dim: 3 num_vertices: 10 num_mapped_coords: 3 shape_functions: | [1 - 3 * x[0] + 2 * x[0]**2 - 3 * x[1] + 2 * x[1]**2 - 3 * x[2] + 2 * x[2]**2 + 4 * x[0] * x[1] + 4 * x[0] * x[2] + 4 * x[1] * x[2], -x[0] + 2 * x[0]**2, -x[1] + 2 * x[1]**2, -x[2] + 2 * x[2]**2, 4 * x[0] - 4 * x[0]**2 - 4 * x[0] * x[1] - 4 * x[0] * x[2], 4 * x[0] * x[1], 4 * x[1] - 4 * x[1]**2 - 4 * x[1] * x[0] - 4 * x[1] * x[2], 4 * x[2] - 4 * x[2]**2 - 4 * x[2] * x[0] - 4 * x[2] * x[1], 4 * x[0] * x[2], 4 * x[1] * x[2]] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/metadata.py0000644000175100001770000000157514714401662016376 0ustar00runnerdockerfrom yt.loaders import load DEFAULT_ATTRS = ( "dimensionality", "refine_by", "domain_dimensions", "current_time", "domain_left_edge", "domain_right_edge", "unique_identifier", "current_redshift", "cosmological_simulation", "omega_matter", "omega_lambda", "hubble_constant", "dataset_type", ) def get_metadata(path, full_output=False, attrs=DEFAULT_ATTRS): ds = load(path) metadata = {"filename": path} for a in attrs: v = getattr(ds, a, None) if v is None: continue if hasattr(v, "tolist"): v = v.tolist() metadata[a] = v if full_output: params = {} for p, v in ds.parameters.items(): if hasattr(v, "tolist"): v = v.tolist() params[p] = v metadata["params"] = params ds.close() return metadata ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/minimal_representation.py0000644000175100001770000002213214714401662021356 0ustar00runnerdockerimport abc import json import os from uuid import uuid4 import numpy as np from yt.funcs import compare_dicts, is_sequence from yt.units.yt_array import YTArray, YTQuantity from yt.utilities.on_demand_imports import _h5py as h5py def _sanitize_list(flist): temp = [] for item in flist: if isinstance(item, str): temp.append(item.encode("latin-1")) elif isinstance(item, tuple) and all(isinstance(i, str) for i in item): temp.append(tuple(_sanitize_list(list(item)))) else: temp.append(item) return temp def _serialize_to_h5(g, cdict): for item in cdict: if isinstance(cdict[item], (YTQuantity, YTArray)): g[item] = cdict[item].d g[item].attrs["units"] = str(cdict[item].units) elif isinstance(cdict[item], dict): _serialize_to_h5(g.create_group(item), cdict[item]) elif cdict[item] is None: g[item] = "None" elif isinstance(cdict[item], list): g[item] = _sanitize_list(cdict[item]) elif isinstance(cdict[item], tuple) and all( isinstance(i, str) for i in cdict[item] ): g[item] = tuple(_sanitize_list(cdict[item])) else: g[item] = cdict[item] def _deserialize_from_h5(g, ds): result = {} for item in g: if item == "chunks": continue if "units" in g[item].attrs: if is_sequence(g[item]): result[item] = ds.arr(g[item][:], g[item].attrs["units"]) else: result[item] = ds.quan(g[item][()], g[item].attrs["units"]) elif isinstance(g[item], h5py.Group): result[item] = _deserialize_from_h5(g[item], ds) elif g[item] == "None": result[item] = None else: try: result[item] = g[item][:] # try array except ValueError: result[item] = g[item][()] # fallback to scalar return result class ContainerClass: pass class 
MinimalRepresentation(metaclass=abc.ABCMeta):
    def _update_attrs(self, obj, attr_list):
        for attr in attr_list:
            setattr(self, attr, getattr(obj, attr, None))
        if hasattr(obj, "ds"):
            self.output_hash = obj.ds._hash()
            self._ds_mrep = obj.ds._mrep
        if hasattr(obj, "data_source"):
            self.data_source_hash = obj.data_source._hash

    def __init__(self, obj):
        self._update_attrs(obj, self._attr_list)

    @abc.abstractmethod
    def _generate_post(self):
        pass

    @property
    @abc.abstractmethod
    def _attr_list(self):
        pass

    def _return_filtered_object(self, attrs):
        new_attrs = tuple(attr for attr in self._attr_list if attr not in attrs)
        new_class = type(
            f"Filtered{self.__class__.__name__}",
            (FilteredRepresentation,),
            {"_attr_list": new_attrs},
        )
        return new_class(self)

    @property
    def _attrs(self):
        return {attr: getattr(self, attr) for attr in self._attr_list}

    @classmethod
    def _from_metadata(cls, metadata):
        cc = ContainerClass()
        # iterate over (attribute, value) pairs; items() is needed here,
        # since the values alone would not unpack into a pair
        for a, v in metadata.items():
            setattr(cc, a, v)
        return cls(cc)

    def store(self, storage):
        if hasattr(self, "_ds_mrep"):
            self._ds_mrep.store(storage)
        metadata, (final_name, chunks) = self._generate_post()
        metadata["obj_type"] = self.type
        # the file must be opened in append mode ("a"), since we create
        # groups and datasets below; a read-only handle would fail
        with h5py.File(storage, mode="a") as h5f:
            dset = str(uuid4())[:8]
            h5f.create_group(dset)
            _serialize_to_h5(h5f[dset], metadata)
            if len(chunks) > 0:
                g = h5f[dset].create_group("chunks")
                g.attrs["final_name"] = final_name
                for fname, fdata in chunks:
                    if isinstance(fname, (tuple, list)):
                        fname = "*".join(fname)
                    if isinstance(fdata, (YTQuantity, YTArray)):
                        g.create_dataset(fname, data=fdata.d, compression="lzf")
                        g[fname].attrs["units"] = str(fdata.units)
                    else:
                        g.create_dataset(fname, data=fdata, compression="lzf")

    def restore(self, storage, ds):  # noqa: B027
        pass

    def upload(self):
        raise NotImplementedError("This method hasn't been ported to python 3")

    def load(self, storage):
        raise NotImplementedError("This method hasn't been ported to python 3")

    def dump(self, storage):
        raise NotImplementedError("This method hasn't been ported to python 3")


class FilteredRepresentation(MinimalRepresentation):
    def _generate_post(self):
        raise RuntimeError


class MinimalDataset(MinimalRepresentation):
    _attr_list = (
        "dimensionality",
        "refine_by",
        "domain_dimensions",
        "current_time",
        "domain_left_edge",
        "domain_right_edge",
        "unique_identifier",
        "current_redshift",
        "output_hash",
        "cosmological_simulation",
        "omega_matter",
        "omega_lambda",
        "hubble_constant",
        "name",
    )
    type = "simulation_output"

    def __init__(self, obj):
        super().__init__(obj)
        self.output_hash = obj._hash()
        self.name = str(obj)

    def _generate_post(self):
        metadata = self._attrs
        chunks = []
        return (metadata, (None, chunks))


class MinimalMappableData(MinimalRepresentation):
    _attr_list: tuple[str, ...]
= ( "field_data", "field", "weight_field", "axis", "output_hash", "vm_type", ) def _generate_post(self): nobj = self._return_filtered_object(("field_data",)) metadata = nobj._attrs chunks = [(arr, self.field_data[arr]) for arr in self.field_data] return (metadata, ("field_data", chunks)) def _read_chunks(self, g, ds): for fname in g.keys(): if "*" in fname: arr = tuple(fname.split("*")) else: arr = fname try: self.field_data[arr] = ds.arr(g[fname][:], g[fname].attrs["units"]) except KeyError: self.field_data[arr] = g[fname][:] class MinimalProjectionData(MinimalMappableData): type = "proj" vm_type = "Projection" _attr_list = ( "field_data", "field", "weight_field", "axis", "output_hash", "center", "method", "field_parameters", "data_source_hash", ) def restore(self, storage, ds): if hasattr(self, "_ds_mrep"): self._ds_mrep.restore(storage, ds) metadata, (final_name, chunks) = self._generate_post() with h5py.File(storage, mode="r") as h5f: for dset in h5f: stored_metadata = _deserialize_from_h5(h5f[dset], ds) if compare_dicts(metadata, stored_metadata): self._read_chunks(h5f[dset]["chunks"], ds) return True return False class MinimalSliceData(MinimalMappableData): type = "slice" vm_type = "Slice" weight_field = "None" class MinimalImageCollectionData(MinimalRepresentation): type = "image_collection" _attr_list = ("name", "output_hash", "images", "image_metadata") def _generate_post(self): nobj = self._return_filtered_object(("images",)) metadata = nobj._attrs chunks = list(self.images) return (metadata, ("images", chunks)) _hub_categories = ( "News", "Documents", "Simulation Management", "Data Management", "Analysis and Visualization", "Paper Repositories", "Astrophysical Utilities", "yt Scripts", ) class MinimalProjectDescription(MinimalRepresentation): type = "project" _attr_list = ("title", "url", "description", "category", "image_url") def __init__(self, title, url, description, category, image_url=""): assert category in _hub_categories self.title = title self.url = url self.description = description self.category = category self.image_url = image_url def _generate_post(self): metadata = self._attrs return (metadata, ("chunks", [])) class MinimalNotebook(MinimalRepresentation): type = "notebook" _attr_list = ("title",) def __init__(self, filename, title=None): # First we read in the data if not os.path.isfile(filename): raise OSError(filename) self.data = open(filename).read() if title is None: title = json.loads(self.data)["metadata"]["name"] self.title = title self.data = np.fromstring(self.data, dtype="c") def _generate_post(self): metadata = self._attrs chunks = [("notebook", self.data)] return (metadata, ("chunks", chunks)) class ImageCollection: def __init__(self, ds, name): self.ds = ds self.name = name self.images = [] self.image_metadata = [] def add_image(self, fn, descr): self.image_metadata.append(descr) self.images.append((os.path.basename(fn), np.fromfile(fn, dtype="c"))) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/nodal_data_utils.py0000644000175100001770000000262314714401662020117 0ustar00runnerdockerimport numpy as np _index_map = np.array( [ [0, 0, 0, 0, 0, 0, 0, 0], [0, 1, 0, 1, 0, 1, 0, 1], [0, 0, 1, 1, 0, 0, 1, 1], [0, 1, 2, 3, 0, 1, 2, 3], [0, 0, 0, 0, 1, 1, 1, 1], [0, 1, 0, 1, 2, 3, 2, 3], [0, 0, 1, 1, 2, 2, 3, 3], [0, 1, 2, 3, 4, 5, 6, 7], ] ) def _get_linear_index(nodal_flag): if len(nodal_flag) == 2: return 1 * nodal_flag[1] + 2 * nodal_flag[0] else: return 1 * nodal_flag[2] + 2 * 
nodal_flag[1] + 4 * nodal_flag[0] def _get_indices(nodal_flag): li = _get_linear_index(nodal_flag) return _index_map[li] def get_nodal_data(data_source, field): finfo = data_source.ds._get_field_info(field) nodal_flag = finfo.nodal_flag field_data = data_source[field] inds = _get_indices(nodal_flag) return field_data[:, inds] def get_nodal_slices(shape, nodal_flag, dim): slices = [] dir_slices = [[] for _ in range(dim)] for i in range(dim): if nodal_flag[i]: dir_slices[i] = [slice(0, shape[i] - 1), slice(1, shape[i])] else: dir_slices[i] = [slice(0, shape[i])] for sl_i in dir_slices[0]: for sl_j in dir_slices[1]: if dim > 2: for sl_k in dir_slices[2]: slices.append([sl_i, sl_j, sl_k]) else: slices.append([sl_i, sl_j]) return tuple(slices) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/object_registries.py0000644000175100001770000000171514714401662020320 0ustar00runnerdocker# These are some of the data object registries that are used in different places in the # code. Not all of the self-registering objects are included in these. from typing import TYPE_CHECKING if TYPE_CHECKING: from yt.data_objects.analyzer_objects import AnalysisTask from yt.data_objects.data_containers import YTDataContainer from yt.data_objects.derived_quantities import DerivedQuantity from yt.data_objects.static_output import Dataset from yt.data_objects.time_series import DatasetSeries from yt.visualization.volume_rendering.old_camera import Camera analysis_task_registry: dict[str, type["AnalysisTask"]] = {} derived_quantity_registry: dict[str, type["DerivedQuantity"]] = {} output_type_registry: dict[str, type["Dataset"]] = {} simulation_time_series_registry: dict[str, type["DatasetSeries"]] = {} # TODO: split into 2 registries to avoid a typing.Union data_object_registry: dict[str, "type[YTDataContainer] | type[Camera]"] = {} ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/on_demand_imports.py0000644000175100001770000002515614714401662020320 0ustar00runnerdockerimport sys from functools import wraps from importlib.util import find_spec class NotAModule: """ A class to implement an informative error message that will be outputted if someone tries to use an on-demand import without having the requisite package installed. """ def __init__(self, pkg_name, exc: BaseException | None = None): self.pkg_name = pkg_name self._original_exception = exc error_note = ( f"Something went wrong while trying to lazy-import {pkg_name}. 
" f"Please make sure that {pkg_name} is properly installed.\n" "If the problem persists, please file an issue at " "https://github.com/yt-project/yt/issues/new" ) self.error: BaseException if exc is None: self.error = ImportError(error_note) elif sys.version_info >= (3, 11): exc.add_note(error_note) self.error = exc else: # mimic Python 3.11 behaviour: # preserve error message and traceback self.error = type(exc)(f"{exc!s}\n{error_note}").with_traceback( exc.__traceback__ ) def __getattr__(self, item): raise self.error def __call__(self, *args, **kwargs): raise self.error def __repr__(self) -> str: if self._original_exception is None: return f"NotAModule({self.pkg_name!r})" else: return f"NotAModule({self.pkg_name!r}, {self._original_exception!r})" class OnDemand: _default_factory: type[NotAModule] = NotAModule def __init_subclass__(cls): if not cls.__name__.endswith("_imports"): raise TypeError(f"class {cls}'s name needs to be suffixed '_imports'") def __new__(cls): if cls is OnDemand: raise TypeError("The OnDemand base class cannot be instantiated.") else: return object.__new__(cls) @property def _name(self) -> str: _name, _, _suffix = self.__class__.__name__.rpartition("_") return _name @property def __is_available__(self) -> bool: # special protocol to support testing framework return find_spec(self._name) is not None def safe_import(func): @property @wraps(func) def inner(self): try: return func(self) except ImportError as exc: return self._default_factory(self._name, exc) return inner class netCDF4_imports(OnDemand): @safe_import def Dataset(self): from netCDF4 import Dataset return Dataset _netCDF4 = netCDF4_imports() class astropy_imports(OnDemand): @safe_import def log(self): from astropy import log if log.exception_logging_enabled(): log.disable_exception_logging() return log @safe_import def pyfits(self): from astropy.io import fits return fits @safe_import def pywcs(self): import astropy.wcs as pywcs self.log return pywcs @safe_import def units(self): from astropy import units self.log return units @safe_import def conv(self): import astropy.convolution as conv self.log return conv @safe_import def time(self): import astropy.time as time self.log return time @safe_import def wcsaxes(self): from astropy.visualization import wcsaxes self.log return wcsaxes @safe_import def WCS(self): from astropy.wcs import WCS self.log return WCS _astropy = astropy_imports() class regions_imports(OnDemand): @safe_import def Regions(self): from regions import Regions return Regions _regions = regions_imports() class NotCartopy(NotAModule): """ A custom class to return error messages dependent on system installation for cartopy imports. """ def __init__(self, pkg_name, exc: BaseException | None = None): super().__init__(pkg_name, exc) if any(s in sys.version for s in ("Anaconda", "Continuum")): # the conda-based installs of cartopy don't have issues with the # GEOS library, so the error message for users with conda can be # relatively short. Discussion related to this is in # yt-project/yt#1966 self.error = ImportError( f"This functionality requires the {self.pkg_name} " "package to be installed." 
) else: self.error = ImportError( f"This functionality requires the {pkg_name} " "package to be installed.\n" "For further instruction please refer to Cartopy's documentation\n" "https://scitools.org.uk/cartopy/docs/latest/installing.html" ) class cartopy_imports(OnDemand): _default_factory = NotCartopy @safe_import def crs(self): import cartopy.crs as crs return crs _cartopy = cartopy_imports() class pooch_imports(OnDemand): @safe_import def HTTPDownloader(self): from pooch import HTTPDownloader return HTTPDownloader @safe_import def utils(self): from pooch import utils return utils @safe_import def create(self): from pooch import create return create _pooch = pooch_imports() class pyart_imports(OnDemand): @safe_import def io(self): from pyart import io return io @safe_import def map(self): from pyart import map return map _pyart = pyart_imports() class xarray_imports(OnDemand): @safe_import def open_dataset(self): from xarray import open_dataset return open_dataset _xarray = xarray_imports() class scipy_imports(OnDemand): @safe_import def signal(self): from scipy import signal return signal @safe_import def spatial(self): from scipy import spatial return spatial @safe_import def ndimage(self): from scipy import ndimage return ndimage # Optimize is not presently used by yt, but appears here to enable # required functionality in yt extension, trident @safe_import def optimize(self): from scipy import optimize return optimize _scipy = scipy_imports() class h5py_imports(OnDemand): def __init__(self): # this ensures the import ordering between netcdf4 and h5py. If h5py is # imported first, can get file lock errors on some systems (including travis-ci) # so we need to do this before initializing h5py_imports()! # similar to this issue https://github.com/pydata/xarray/issues/2560 if find_spec("h5py") is None or find_spec("netCDF4") is None: return try: import netCDF4 # noqa F401 except ImportError: pass @safe_import def File(self): from h5py import File return File @safe_import def Group(self): from h5py import Group return Group @safe_import def Dataset(self): from h5py import Dataset return Dataset @safe_import def get_config(self): from h5py import get_config return get_config @safe_import def h5f(self): from h5py import h5f return h5f @safe_import def h5p(self): from h5py import h5p return h5p @safe_import def h5d(self): from h5py import h5d return h5d @safe_import def h5s(self): from h5py import h5s return h5s _h5py = h5py_imports() class nose_imports(OnDemand): @safe_import def run(self): from nose import run return run _nose = nose_imports() class libconf_imports(OnDemand): @safe_import def load(self): from libconf import load return load _libconf = libconf_imports() class yaml_imports(OnDemand): @safe_import def load(self): from yaml import load return load @safe_import def FullLoader(self): from yaml import FullLoader return FullLoader _yaml = yaml_imports() class NotMiniball(NotAModule): def __init__(self, pkg_name): super().__init__(pkg_name) str = ( "This functionality requires the %s package to be installed. " "Installation instructions can be found at " "https://github.com/weddige/miniball or alternatively you can " "install via `python -m pip install MiniballCpp`." 
        )
        self.error = ImportError(str % self.pkg_name)


class miniball_imports(OnDemand):
    @safe_import
    def Miniball(self):
        from miniball import Miniball

        return Miniball


_miniball = miniball_imports()


class f90nml_imports(OnDemand):
    @safe_import
    def read(self):
        from f90nml import read

        return read

    @safe_import
    def Namelist(self):
        from f90nml import Namelist

        return Namelist


_f90nml = f90nml_imports()


class requests_imports(OnDemand):
    @safe_import
    def post(self):
        from requests import post

        return post

    @safe_import
    def put(self):
        from requests import put

        return put

    @safe_import
    def codes(self):
        from requests import codes

        return codes

    @safe_import
    def get(self):
        from requests import get

        return get

    @safe_import
    def exceptions(self):
        from requests import exceptions

        return exceptions


_requests = requests_imports()


class pandas_imports(OnDemand):
    @safe_import
    def NA(self):
        from pandas import NA

        return NA

    @safe_import
    def DataFrame(self):
        from pandas import DataFrame

        return DataFrame

    @safe_import
    def concat(self):
        from pandas import concat

        return concat

    @safe_import
    def read_csv(self):
        from pandas import read_csv

        return read_csv


_pandas = pandas_imports()


class firefly_imports(OnDemand):
    @safe_import
    def data_reader(self):
        import firefly.data_reader as data_reader

        return data_reader

    @safe_import
    def server(self):
        import firefly.server as server

        return server


_firefly = firefly_imports()


# Note: ratarmount may fail with an OSError on import if libfuse is missing
class ratarmount_imports(OnDemand):
    @safe_import
    def TarMount(self):
        from ratarmount import TarMount

        return TarMount

    @safe_import
    def fuse(self):
        from ratarmount import fuse

        return fuse


_ratarmount = ratarmount_imports()
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/operator_registry.py0000644000175100001770000000051014714401662020365 0ustar00runnerdockerimport copy
from collections import UserDict


class OperatorRegistry(UserDict):
    def find(self, op, *args, **kwargs):
        if isinstance(op, str):
            # Lookup, assuming string or hashable object
            op = copy.deepcopy(self[op])
            op.args = args
            op.kwargs = kwargs
        return op
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/orientation.py0000644000175100001770000000703414714401662017145 0ustar00runnerdockerimport numpy as np

from yt.units.yt_array import YTArray
from yt.utilities.exceptions import YTException


def _aligned(a, b):
    aligned_component = np.abs(np.dot(a, b) / np.linalg.norm(a) / np.linalg.norm(b))
    return np.isclose(aligned_component, 1.0, 1.0e-13)


def _validate_unit_vectors(normal_vector, north_vector):
    # Make sure vectors are unitless
    if north_vector is not None:
        north_vector = YTArray(north_vector, "", dtype="float64")
    if normal_vector is not None:
        normal_vector = YTArray(normal_vector, "", dtype="float64")

    if not np.dot(normal_vector, normal_vector) > 0:
        raise YTException("normal_vector cannot be the zero vector.")
    if north_vector is not None and _aligned(north_vector, normal_vector):
        raise YTException("normal_vector and north_vector cannot be aligned.")

    return normal_vector, north_vector


class Orientation:
    def __init__(self, normal_vector, north_vector=None, steady_north=False):
        r"""An object that returns a set of basis vectors for orienting
        cameras and data containers.

        Parameters
        ----------
        normal_vector : array_like
            A vector normal to the image plane
        north_vector : array_like, optional
            The 'up' direction to orient the image plane.
            If not specified, it is calculated automatically.
        steady_north : bool, optional
            Boolean to control whether to normalize the north_vector
            by subtracting off the dot product of it and the normal
            vector. Makes it easier to do rotations along a single
            axis. If north_vector is specified, steady_north is
            switched to True. Default: False

        """
        normal_vector, north_vector = _validate_unit_vectors(
            normal_vector, north_vector
        )
        self.steady_north = steady_north
        if north_vector is not None:
            self.steady_north = True
        self.north_vector = north_vector
        self._setup_normalized_vectors(normal_vector, north_vector)
        if self.north_vector is None:
            self.north_vector = self.unit_vectors[1]

    def _setup_normalized_vectors(self, normal_vector, north_vector):
        normal_vector, north_vector = _validate_unit_vectors(
            normal_vector, north_vector
        )
        # Now we set up our various vectors
        normal_vector /= np.sqrt(np.dot(normal_vector, normal_vector))
        if north_vector is None:
            vecs = np.identity(3)
            t = np.cross(normal_vector, vecs).sum(axis=1)
            ax = t.argmax()
            east_vector = np.cross(vecs[ax, :], normal_vector).ravel()
            # self.north_vector must remain None otherwise rotations about a fixed axis
            # will break. The north_vector calculated here will still be included
            # in self.unit_vectors.
            north_vector = np.cross(normal_vector, east_vector).ravel()
        else:
            if self.steady_north or (np.dot(north_vector, normal_vector) != 0.0):
                north_vector = (
                    north_vector - np.dot(north_vector, normal_vector) * normal_vector
                )
            east_vector = np.cross(north_vector, normal_vector).ravel()
        north_vector /= np.sqrt(np.dot(north_vector, north_vector))
        east_vector /= np.sqrt(np.dot(east_vector, east_vector))
        self.normal_vector = normal_vector
        self.north_vector = north_vector
        self.unit_vectors = YTArray([east_vector, north_vector, normal_vector], "")
        self.inv_mat = np.linalg.pinv(self.unit_vectors)
././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.403154 yt-4.4.0/yt/utilities/parallel_tools/0000755000175100001770000000000014714401715017247 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/parallel_tools/__init__.py0000644000175100001770000000004214714401662021355 0ustar00runnerdocker"""
Tools for parallelism.
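The subpackage collects controller_system, io_runner,
parallel_analysis_interface, and task_queue.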
""" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/parallel_tools/controller_system.py0000644000175100001770000000201314714401662023405 0ustar00runnerdockerfrom abc import abstractmethod from .parallel_analysis_interface import ProcessorPool class WorkSplitter: def __init__(self, controller, group1, group2): self.group1 = group1 self.group2 = group2 self.controller = controller @classmethod def setup(cls, ng1, ng2): pp, wg = ProcessorPool.from_sizes( [(1, "controller"), (ng1, "group1"), (ng2, "group2")] ) groupc = pp["controller"] group1 = pp["group1"] group2 = pp["group2"] obj = cls(groupc, group1, group2) obj.run(wg.name) def run(self, name): if name == "controller": self.run_controller() elif name == "group1": self.run_group1() elif name == "group2": self.run_group2() else: raise NotImplementedError @abstractmethod def run_controller(self): pass @abstractmethod def run_group1(self): pass @abstractmethod def run_group2(self): pass ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/parallel_tools/io_runner.py0000644000175100001770000001375214714401662021632 0ustar00runnerdockerimport time from contextlib import contextmanager import numpy as np from yt.utilities.io_handler import BaseIOHandler from yt.utilities.logger import ytLogger as mylog from .parallel_analysis_interface import ProcessorPool, parallel_objects try: from .parallel_analysis_interface import MPI except ImportError: pass YT_TAG_MESSAGE = 317 # Cell 317 knows where to go class IOCommunicator(BaseIOHandler): def __init__(self, ds, wg, pool): mylog.info("Initializing IOCommunicator") self.ds = ds self.wg = wg # We don't need to use this! self.pool = pool self.comm = pool.comm # We read our grids here self.grids = [] storage = {} grids = ds.index.grids.tolist() grids.sort(key=lambda a: a.filename) for sto, g in parallel_objects(grids, storage=storage): sto.result = self.comm.rank sto.result_id = g.id self.grids.append(g) self._id_offset = ds.index.grids[0]._id_offset mylog.info("Reading from disk ...") self.initialize_data() mylog.info("Broadcasting ...") self.comm.comm.bcast(storage, root=wg.ranks[0]) mylog.info("Done.") self.hooks = [] def initialize_data(self): ds = self.ds fields = [ f for f in ds.field_list if not ds.field_info[f].sampling_type == "particle" ] pfields = [ f for f in ds.field_list if ds.field_info[f].sampling_type == "particle" ] # Preload is only defined for Enzo ... 
        if ds.index.io._dataset_type == "enzo_packed_3d":
            self.queue = ds.index.io.queue
            ds.index.io.preload(self.grids, fields)
            for g in self.grids:
                for f in fields:
                    if f not in self.queue[g.id]:
                        d = np.zeros(g.ActiveDimensions, dtype="float64")
                        self.queue[g.id][f] = d
                for f in pfields:
                    self.queue[g.id][f] = self._read(g, f)
        else:
            self.queue = {}
            for g in self.grids:
                # each grid needs its own dict before fields can be stored
                self.queue[g.id] = {}
                for f in fields + pfields:
                    self.queue[g.id][f] = ds.index.io._read(g, f)

    def _read(self, g, f):
        fi = self.ds.field_info[f]
        if fi.sampling_type == "particle" and g.NumberOfParticles == 0:
            # because this gets upcast to float
            return np.array([], dtype="float64")
        try:
            temp = self.ds.index.io._read_data_set(g, f)
        except Exception:  # self.ds.index.io._read_exception as exc:
            if fi.not_in_all:
                temp = np.zeros(g.ActiveDimensions, dtype="float64")
            else:
                raise
        return temp

    def wait(self):
        status = MPI.Status()
        while True:
            if self.comm.comm.Iprobe(MPI.ANY_SOURCE, YT_TAG_MESSAGE, status=status):
                msg = self.comm.comm.recv(source=status.source, tag=YT_TAG_MESSAGE)
                if msg["op"] == "end":
                    mylog.debug("Shutting down IO.")
                    break
                self._send_data(msg, status.source)
                status = MPI.Status()
            else:
                time.sleep(1e-2)

    def _send_data(self, msg, dest):
        grid_id = msg["grid_id"]
        field = msg["field"]
        ts = self.queue[grid_id][field].astype("float64")
        mylog.debug("Opening send to %s (%s)", dest, ts.shape)
        self.hooks.append(self.comm.comm.Isend([ts, MPI.DOUBLE], dest=dest))


class IOHandlerRemote(BaseIOHandler):
    _dataset_type = "remote"

    def __init__(self, ds, wg, pool):
        self.ds = ds
        self.wg = wg  # probably won't need
        self.pool = pool
        self.comm = pool.comm
        self.proc_map = self.comm.comm.bcast(None, root=pool["io"].ranks[0])
        super().__init__()

    def _read_data_set(self, grid, field):
        dest = self.proc_map[grid.id]
        msg = {"grid_id": grid.id, "field": field, "op": "read"}
        mylog.debug("Requesting %s for %s from %s", field, grid, dest)
        if self.ds.field_info[field].sampling_type == "particle":
            data = np.empty(grid.NumberOfParticles, "float64")
        else:
            data = np.empty(grid.ActiveDimensions, "float64")
        hook = self.comm.comm.Irecv([data, MPI.DOUBLE], source=dest)
        self.comm.comm.send(msg, dest=dest, tag=YT_TAG_MESSAGE)
        mylog.debug("Waiting for data.")
        MPI.Request.Wait(hook)
        return data

    def _read_data_slice(self, grid, field, axis, coord):
        sl = [slice(None), slice(None), slice(None)]
        sl[axis] = slice(coord, coord + 1)
        # sl = tuple(reversed(sl))
        return self._read_data_set(grid, field)[tuple(sl)]

    def terminate(self):
        msg = {"op": "end"}
        if self.wg.comm.rank == 0:
            for rank in self.pool["io"].ranks:
                mylog.debug("Sending termination message to %s", rank)
                self.comm.comm.send(msg, dest=rank, tag=YT_TAG_MESSAGE)


@contextmanager
def remote_io(ds, wg, pool):
    original_io = ds.index.io
    ds.index.io = IOHandlerRemote(ds, wg, pool)
    yield
    ds.index.io.terminate()
    ds.index.io = original_io


def io_nodes(fn, n_io, n_work, func, *args, **kwargs):
    from yt.loaders import load

    pool, wg = ProcessorPool.from_sizes([(n_io, "io"), (n_work, "work")])
    rv = None
    if wg.name == "work":
        ds = load(fn)
        with remote_io(ds, wg, pool):
            rv = func(ds, *args, **kwargs)
    elif wg.name == "io":
        ds = load(fn)
        io = IOCommunicator(ds, wg, pool)
        io.wait()
    # We should broadcast the result
    rv = pool.comm.mpi_bcast(rv, root=pool["work"].ranks[0])
    pool.free_all()
    mylog.debug("Return value: %s", rv)
    return rv


# Here is an example of how to use this functionality.
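# (Editorial note, assumptions flagged): in the sketch below, 8 ranks serve
# IO and 24 do the work, so it presumes a launch along the lines of
# `mpirun -n 32`; "DD0087/DD0087" is an Enzo-style output path.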
if __name__ == "__main__": def gq(ds): dd = ds.all_data() return dd.quantities["TotalQuantity"](("gas", "cell_mass")) q = io_nodes("DD0087/DD0087", 8, 24, gq) mylog.info(q) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/parallel_tools/parallel_analysis_interface.py0000644000175100001770000014101714714401662025345 0ustar00runnerdockerimport itertools import logging import os import sys import traceback from functools import wraps from io import StringIO import numpy as np from more_itertools import always_iterable import yt.utilities.logger from yt.config import ytcfg from yt.data_objects.image_array import ImageArray from yt.funcs import is_sequence from yt.units.unit_registry import UnitRegistry # type: ignore from yt.units.yt_array import YTArray from yt.utilities.exceptions import YTNoDataInObjectError from yt.utilities.lib.quad_tree import QuadTree, merge_quadtrees from yt.utilities.logger import ytLogger as mylog # We default to *no* parallelism unless it gets turned on, in which case this # will be changed. MPI = None parallel_capable = False dtype_names = { "float32": "MPI.FLOAT", "float64": "MPI.DOUBLE", "int32": "MPI.INT", "int64": "MPI.LONG", "c": "MPI.CHAR", } op_names = {"sum": "MPI.SUM", "min": "MPI.MIN", "max": "MPI.MAX"} class FilterAllMessages(logging.Filter): """ This is a simple filter for logging.Logger's that won't let any messages pass. """ def filter(self, record): return 0 # Set up translation table and import things def traceback_writer_hook(file_suffix=""): def write_to_file(exc_type, exc, tb): sys.__excepthook__(exc_type, exc, tb) fn = f"yt_traceback{file_suffix}" with open(fn, "w") as fhandle: traceback.print_exception(exc_type, exc, tb, file=fhandle) print(f"Wrote traceback to {fn}") MPI.COMM_WORLD.Abort(1) return write_to_file def default_mpi_excepthook(exception_type, exception_value, tb): traceback.print_tb(tb) mylog.error("%s: %s", exception_type.__name__, exception_value) comm = yt.communication_system.communicators[-1] if comm.size > 1: mylog.error("Error occurred on rank %d.", comm.rank) MPI.COMM_WORLD.Abort(1) def enable_parallelism(suppress_logging: bool = False, communicator=None) -> bool: """ This method is used inside a script to turn on MPI parallelism, via mpi4py. More information about running yt in parallel can be found here: https://yt-project.org/docs/3.0/analyzing/parallel_computation.html Parameters ---------- suppress_logging : bool If set to True, only rank 0 will log information after the initial setup of MPI. communicator : mpi4py.MPI.Comm The MPI communicator to use. This controls which processes yt can see. If not specified, will be set to COMM_WORLD. Returns ------- parallel_capable: bool True if the call was successful. False otherwise. """ global parallel_capable, MPI try: from mpi4py import MPI as _MPI except ImportError: mylog.error("Could not enable parallelism: mpi4py is not installed") return False MPI = _MPI exe_name = os.path.basename(sys.executable) # if no communicator specified, set to COMM_WORLD if communicator is None: communicator = MPI.COMM_WORLD.Dup() else: communicator = communicator.Dup() parallel_capable = communicator.size > 1 if not parallel_capable: mylog.error( "Could not enable parallelism: only one mpi process is running. 
" "To remedy this, launch the Python interpreter as\n" " mpirun -n python3 .py # with X > 1 ", ) return False mylog.info( "Global parallel computation enabled: %s / %s", communicator.rank, communicator.size, ) communication_system.push(communicator) ytcfg["yt", "internals", "global_parallel_rank"] = communicator.rank ytcfg["yt", "internals", "global_parallel_size"] = communicator.size ytcfg["yt", "internals", "parallel"] = True if exe_name == "embed_enzo" or ("_parallel" in dir(sys) and sys._parallel): # type: ignore ytcfg["yt", "inline"] = True yt.utilities.logger.uncolorize_logging() # Even though the uncolorize function already resets the format string, # we reset it again so that it includes the processor. f = logging.Formatter( "P%03i %s" % (communicator.rank, yt.utilities.logger.ufstring) ) if len(yt.utilities.logger.ytLogger.handlers) > 0: yt.utilities.logger.ytLogger.handlers[0].setFormatter(f) if ytcfg.get("yt", "parallel_traceback"): sys.excepthook = traceback_writer_hook("_%03i" % communicator.rank) else: sys.excepthook = default_mpi_excepthook if ytcfg.get("yt", "log_level") < 20: yt.utilities.logger.ytLogger.warning( "Log Level is set low -- this could affect parallel performance!" ) dtype_names.update( { "float32": MPI.FLOAT, "float64": MPI.DOUBLE, "int32": MPI.INT, "int64": MPI.LONG, "c": MPI.CHAR, } ) op_names.update({"sum": MPI.SUM, "min": MPI.MIN, "max": MPI.MAX}) # Turn off logging on all but the root rank, if specified. if suppress_logging: if communicator.rank > 0: mylog.addFilter(FilterAllMessages()) return True # Because the dtypes will == correctly but do not hash the same, we need this # function for dictionary access. def get_mpi_type(dtype): for dt, val in dtype_names.items(): if dt == dtype: return val class ObjectIterator: """ This is a generalized class that accepts a list of objects and then attempts to intelligently iterate over them. """ def __init__(self, pobj, just_list=False, attr="_grids"): self.pobj = pobj if hasattr(pobj, attr) and getattr(pobj, attr) is not None: gs = getattr(pobj, attr) else: gs = getattr(pobj._data_source, attr) if len(gs) == 0: raise YTNoDataInObjectError(pobj) if hasattr(gs[0], "proc_num"): # This one sort of knows about MPI, but not quite self._objs = [ g for g in gs if g.proc_num == ytcfg.get("yt", "internals", "topcomm_parallel_rank") ] self._use_all = True else: self._objs = gs if hasattr(self._objs[0], "filename"): self._objs = sorted(self._objs, key=lambda g: g.filename) self._use_all = False self.ng = len(self._objs) self.just_list = just_list def __iter__(self): yield from self._objs class ParallelObjectIterator(ObjectIterator): """ This takes an object, *pobj*, that implements ParallelAnalysisInterface, and then does its thing, calling initialize and finalize on the object. """ def __init__(self, pobj, just_list=False, attr="_grids", round_robin=False): ObjectIterator.__init__(self, pobj, just_list, attr=attr) # pobj has to be a ParallelAnalysisInterface, so it must have a .comm # object. self._offset = pobj.comm.rank self._skip = pobj.comm.size # Note that we're doing this in advance, and with a simple means # of choosing them; more advanced methods will be explored later. 
if self._use_all: self.my_obj_ids = np.arange(len(self._objs)) else: if not round_robin: self.my_obj_ids = np.array_split( np.arange(len(self._objs)), self._skip )[self._offset] else: self.my_obj_ids = np.arange(len(self._objs))[self._offset :: self._skip] def __iter__(self): for gid in self.my_obj_ids: yield self._objs[gid] if not self.just_list: self.pobj._finalize_parallel() def parallel_simple_proxy(func): """ This is a decorator that broadcasts the result of computation on a single processor to all other processors. To do so, it uses the _processing and _distributed flags in the object to check for blocks. Meant only to be used on objects that subclass :class:`~yt.utilities.parallel_tools.parallel_analysis_interface.ParallelAnalysisInterface`. """ @wraps(func) def single_proc_results(self, *args, **kwargs): retval = None if hasattr(self, "dont_wrap"): if func.__name__ in self.dont_wrap: return func(self, *args, **kwargs) if not parallel_capable or self._processing or not self._distributed: return func(self, *args, **kwargs) comm = _get_comm((self,)) if self._owner == comm.rank: self._processing = True retval = func(self, *args, **kwargs) self._processing = False # To be sure we utilize the root= kwarg, we manually access the .comm # attribute, which must be an instance of MPI.Intracomm, and call bcast # on that. retval = comm.comm.bcast(retval, root=self._owner) return retval return single_proc_results class ParallelDummy(type): """ This is a base class that, on instantiation, replaces all attributes that don't start with ``_`` with :func:`~yt.utilities.parallel_tools.parallel_analysis_interface.parallel_simple_proxy`-wrapped attributes. Used as a metaclass. """ def __init__(cls, name, bases, d): super().__init__(name, bases, d) skip = d.pop("dont_wrap", []) extra = d.pop("extra_wrap", []) for attrname in d: if attrname.startswith("_") or attrname in skip: if attrname not in extra: continue attr = getattr(cls, attrname) if callable(attr): setattr(cls, attrname, parallel_simple_proxy(attr)) def parallel_passthrough(func): """ If we are not run in parallel, this function passes the input back as output; otherwise, the function gets called. Used as a decorator. """ @wraps(func) def passage(self, *args, **kwargs): if not self._distributed: return args[0] return func(self, *args, **kwargs) return passage def _get_comm(args): if len(args) > 0 and hasattr(args[0], "comm"): comm = args[0].comm else: comm = communication_system.communicators[-1] return comm def parallel_blocking_call(func): """ This decorator blocks on entry and exit of a function. """ @wraps(func) def barrierize(*args, **kwargs): if not parallel_capable: return func(*args, **kwargs) mylog.debug("Entering barrier before %s", func.__name__) comm = _get_comm(args) comm.barrier() retval = func(*args, **kwargs) mylog.debug("Entering barrier after %s", func.__name__) comm.barrier() return retval return barrierize def parallel_root_only_then_broadcast(func): """ This decorator blocks and calls the function on the root processor and then broadcasts the results to the other processors. 
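    The broadcast return value also serves as a success check: if the root
    rank produced a falsey result, every rank raises RuntimeError.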
""" @wraps(func) def root_only(*args, **kwargs): if not parallel_capable: return func(*args, **kwargs) comm = _get_comm(args) rv0 = None if comm.rank == 0: try: rv0 = func(*args, **kwargs) except Exception: traceback.print_last() rv = comm.mpi_bcast(rv0) if not rv: raise RuntimeError return rv return root_only def parallel_root_only(func): """ This decorator blocks and calls the function on the root processor, but does not broadcast results to the other processors. """ @wraps(func) def root_only(*args, **kwargs): if not parallel_capable: return func(*args, **kwargs) comm = _get_comm(args) rv = None if comm.rank == 0: try: rv = func(*args, **kwargs) all_clear = 1 except Exception: traceback.print_last() all_clear = 0 else: all_clear = None all_clear = comm.mpi_bcast(all_clear) if not all_clear: raise RuntimeError return rv return root_only class Workgroup: def __init__(self, size, ranks, comm, name): self.size = size self.ranks = ranks self.comm = comm self.name = name class ProcessorPool: comm = None size = None ranks = None available_ranks = None tasks = None def __init__(self): self.comm = communication_system.communicators[-1] self.size = self.comm.size self.ranks = list(range(self.size)) self.available_ranks = list(range(self.size)) self.workgroups = [] def add_workgroup(self, size=None, ranks=None, name=None): if size is None: size = len(self.available_ranks) if len(self.available_ranks) < size: mylog.error( "Not enough resources available, asked for %d have %d", size, self.available_ranks, ) raise RuntimeError if ranks is None: ranks = [self.available_ranks.pop(0) for i in range(size)] # Default name to the workgroup number. if name is None: name = str(len(self.workgroups)) group = self.comm.comm.Get_group().Incl(ranks) new_comm = self.comm.comm.Create(group) if self.comm.rank in ranks: communication_system.communicators.append(Communicator(new_comm)) self.workgroups.append(Workgroup(len(ranks), ranks, new_comm, name)) def free_workgroup(self, workgroup): # If you want to actually delete the workgroup you will need to # pop it out of the self.workgroups list so you don't have references # that are left dangling, e.g. see free_all() below. for i in workgroup.ranks: if self.comm.rank == i: communication_system.communicators.pop() self.available_ranks.append(i) self.available_ranks.sort() def free_all(self): for wg in self.workgroups: self.free_workgroup(wg) while self.workgroups: self.workgroups.pop(0) @classmethod def from_sizes(cls, sizes): pool = cls() rank = pool.comm.rank for i, size in enumerate(always_iterable(sizes)): if is_sequence(size): size, name = size else: name = "workgroup_%02i" % i pool.add_workgroup(size, name=name) for wg in pool.workgroups: if rank in wg.ranks: workgroup = wg return pool, workgroup def __getitem__(self, key): for wg in self.workgroups: if wg.name == key: return wg raise KeyError(key) class ResultsStorage: slots = ["result", "result_id"] result = None result_id = None def parallel_objects(objects, njobs=0, storage=None, barrier=True, dynamic=False): r"""This function dispatches components of an iterable to different processors. The parallel_objects function accepts an iterable, *objects*, and based on the number of jobs requested and number of available processors, decides how to dispatch individual objects to processors or sets of processors. This can implicitly include multi-level parallelism, such that the processor groups assigned each object can be composed of several or even hundreds of processors. 

    *storage* is also available, for collation of results at the end of the
    iteration loop. Calls to this function can be nested.

    This should not be used to iterate over datasets --
    :class:`~yt.data_objects.time_series.DatasetSeries` provides a much nicer
    interface for that.

    Parameters
    ----------
    objects : Iterable
        The list of objects to dispatch to different processors.
    njobs : int
        How many jobs to spawn. By default, one job will be dispatched for
        each available processor.
    storage : dict
        This is a dictionary, which will be filled with results during the
        course of the iteration. The keys will be the dataset indices and the
        values will be whatever is assigned to the *result* attribute on the
        storage during iteration.
    barrier : bool
        Should a barrier be placed at the end of iteration?
    dynamic : bool
        This governs whether or not dynamic load balancing will be enabled.
        This requires one dedicated processor; if this is enabled with a set
        of 128 processors available, only 127 will be available to iterate
        over objects as one will be load balancing the rest.

    Examples
    --------
    Here is a simple example of iterating over a set of centers and making
    slice plots centered at each.

    >>> for c in parallel_objects(centers):
    ...     SlicePlot(ds, "x", "Density", center=c).save()
    ...

    Here's an example of calculating the angular momentum vector of a set of
    spheres, but with a set of four jobs of multiple processors each. Note
    that we also store the results.

    >>> storage = {}
    >>> for sto, c in parallel_objects(centers, njobs=4, storage=storage):
    ...     sp = ds.sphere(c, (100, "kpc"))
    ...     sto.result = sp.quantities["AngularMomentumVector"]()
    ...
    >>> for sphere_id, L in sorted(storage.items()):
    ...     print(centers[sphere_id], L)
    ...

    """
    if dynamic:
        from .task_queue import dynamic_parallel_objects

        yield from dynamic_parallel_objects(objects, njobs=njobs, storage=storage)
        return

    if not parallel_capable:
        njobs = 1
    my_communicator = communication_system.communicators[-1]
    my_size = my_communicator.size
    if njobs <= 0:
        njobs = my_size
    if njobs > my_size:
        mylog.error(
            "You have asked for %s jobs, but you only have %s processors.",
            njobs,
            my_size,
        )
        raise RuntimeError
    my_rank = my_communicator.rank
    all_new_comms = np.array_split(np.arange(my_size), njobs)
    for i, comm_set in enumerate(all_new_comms):
        if my_rank in comm_set:
            my_new_id = i
            break
    if parallel_capable:
        communication_system.push_with_ids(all_new_comms[my_new_id].tolist())
    to_share = {}
    # If our objects object is slice-aware, like time series data objects are,
    # this will prevent intermediate objects from being created.
    oiter = itertools.islice(enumerate(objects), my_new_id, None, njobs)
    for result_id, obj in oiter:
        if storage is not None:
            rstore = ResultsStorage()
            rstore.result_id = result_id
            yield rstore, obj
            to_share[rstore.result_id] = rstore.result
        else:
            yield obj
    if parallel_capable:
        communication_system.pop()
    if storage is not None:
        # Now we have to broadcast it
        new_storage = my_communicator.par_combine_object(
            to_share, datatype="dict", op="join"
        )
        storage.update(new_storage)
    if barrier:
        my_communicator.barrier()


def parallel_ring(objects, generator_func, mutable=False):
    r"""This function loops in a ring around a set of objects, yielding the
    results of generator_func and passing from one processor to another to
    avoid IO or expensive computation.

    This function is designed to operate in sequence on a set of objects,
    where the creation of those objects might be expensive. For instance,
    this could be a set of particles that are costly to read from disk.

    Processor N will run generator_func on an object, and the results of that
    will both be yielded and passed to processor N - 1.  If the length of the
    objects is not equal to the number of processors, then the final processor
    in the top communicator will re-generate the data as needed.

    In all likelihood, this function will only be useful internally to yt.

    Parameters
    ----------
    objects : Iterable
        The list of objects to operate on.
    generator_func : callable
        This function will be called on each object, and the results yielded.
        It must return a single NumPy array; for multiple values, it needs to
        have a custom dtype.
    mutable : bool
        Should the arrays be considered mutable?  Currently, this will only
        work if the number of processors equals the number of objects.

    Examples
    --------
    Here is a simple example of a ring loop around a set of integers, with a
    custom dtype.

    >>> dt = np.dtype([("x", "float64"), ("y", "float64"), ("z", "float64")])
    >>> def gfunc(o):
    ...     np.random.seed(o)
    ...     rv = np.empty(1000, dtype=dt)
    ...     rv["x"] = np.random.random(1000)
    ...     rv["y"] = np.random.random(1000)
    ...     rv["z"] = np.random.random(1000)
    ...     return rv
    ...
    >>> obj = range(8)
    >>> for obj, arr in parallel_ring(obj, gfunc):
    ...     print(arr["x"].sum(), arr["y"].sum(), arr["z"].sum())
    ...

    """
    if mutable:
        raise NotImplementedError
    my_comm = communication_system.communicators[-1]
    my_size = my_comm.size
    my_rank = my_comm.rank  # This will also be the first object we access
    if not parallel_capable and not mutable:
        for obj in objects:
            yield obj, generator_func(obj)
        return
    generate_endpoints = len(objects) != my_size
    # gback False: send the object backwards
    # gforw False: receive an object from forwards
    if len(objects) == my_size:
        generate_endpoints = False
        gback = False
        gforw = False
    else:
        # In this case, the first processor (my_rank == 0) will generate.
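        # When the ring is "open" (len(objects) != comm size), gback marks
        # rank 0, which does not pass its data backwards around the ring, and
        # gforw marks the last rank, which regenerates each object itself
        # rather than receiving it from the rank ahead (see the idata/gforw
        # handling in the send/recv loop below).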
        generate_endpoints = True
        gback = my_rank == 0
        gforw = my_rank == my_size - 1
    if generate_endpoints and mutable:
        raise NotImplementedError
    # Now we need to do pairwise sends
    source = (my_rank + 1) % my_size
    dest = (my_rank - 1) % my_size
    oiter = itertools.islice(itertools.cycle(objects), my_rank, my_rank + len(objects))
    idata = None
    isize = np.zeros((1,), dtype="int64")
    osize = np.zeros((1,), dtype="int64")
    for obj in oiter:
        if idata is None or gforw:
            idata = generator_func(obj)
            idtype = odtype = idata.dtype
            if get_mpi_type(idtype) is None:
                idtype = "c"
        yield obj, idata
        # We first send to the previous processor
        tags = []
        if not gforw:
            tags.append(my_comm.mpi_nonblocking_recv(isize, source))
        if not gback:
            osize[0] = idata.size
            tags.append(my_comm.mpi_nonblocking_send(osize, dest))
        my_comm.mpi_Request_Waitall(tags)
        odata = idata
        tags = []
        if not gforw:
            idata = np.empty(isize[0], dtype=odtype)
            tags.append(
                my_comm.mpi_nonblocking_recv(idata.view(idtype), source, dtype=idtype)
            )
        if not gback:
            tags.append(
                my_comm.mpi_nonblocking_send(odata.view(idtype), dest, dtype=idtype)
            )
        my_comm.mpi_Request_Waitall(tags)
        del odata


class CommunicationSystem:
    communicators: list["Communicator"] = []

    def __init__(self):
        self.communicators.append(Communicator(None))

    def push(self, new_comm):
        if not isinstance(new_comm, Communicator):
            new_comm = Communicator(new_comm)
        self.communicators.append(new_comm)
        self._update_parallel_state(new_comm)

    def push_with_ids(self, ids):
        group = self.communicators[-1].comm.Get_group().Incl(ids)
        new_comm = self.communicators[-1].comm.Create(group)
        self.push(new_comm)
        return new_comm

    def _update_parallel_state(self, new_comm):
        ytcfg["yt", "internals", "topcomm_parallel_size"] = new_comm.size
        ytcfg["yt", "internals", "topcomm_parallel_rank"] = new_comm.rank
        if new_comm.rank > 0 and ytcfg.get("yt", "serialize"):
            ytcfg["yt", "only_deserialize"] = True

    def pop(self):
        self.communicators.pop()
        self._update_parallel_state(self.communicators[-1])


def _reconstruct_communicator():
    return communication_system.communicators[-1]


class Communicator:
    comm = None
    _grids = None
    _distributed = None
    __tocast = "c"

    def __init__(self, comm=None):
        self.comm = comm
        self._distributed = comm is not None and self.comm.size > 1

    def __del__(self):
        if self.comm is not None:
            self.comm.Free()

    """
    This is an interface specification providing several useful utility
    functions for analyzing something in parallel.
    """

    def __reduce__(self):
        # We don't try to reconstruct any of the properties of the communicator
        # or the processors.  In general, we don't want to.
        return (_reconstruct_communicator, ())

    def barrier(self):
        if not self._distributed:
            return
        mylog.debug("Opening MPI Barrier on %s", self.comm.rank)
        self.comm.Barrier()

    def mpi_exit_test(self, data=False):
        # data==True -> exit. data==False -> no exit
        mine, statuses = self.mpi_info_dict(data)
        if True in statuses.values():
            raise RuntimeError("Fatal error. Exiting.")
        return None

    @parallel_passthrough
    def par_combine_object(self, data, op, datatype=None):
        # op can be chosen from:
        #   cat
        #   join
        # data is selected to be of types:
        #   np.ndarray
        #   dict
        #   data field dict
        if datatype is not None:
            pass
        elif isinstance(data, dict):
            datatype = "dict"
        elif isinstance(data, np.ndarray):
            datatype = "array"
        elif isinstance(data, list):
            datatype = "list"
        # Now we have our datatype, and we conduct our operation
        if datatype == "dict" and op == "join":
            if self.comm.rank == 0:
                for i in range(1, self.comm.size):
                    data.update(self.comm.recv(source=i, tag=0))
            else:
                self.comm.send(data, dest=0, tag=0)
            # Send the keys first, then each item one by one
            # This is to prevent MPI from crashing when sending more
            # than 2GiB of data over the network.
            keys = self.comm.bcast(list(data.keys()), root=0)
            for key in keys:
                tmp = data.get(key, None)
                data[key] = self.comm.bcast(tmp, root=0)
            return data
        elif datatype == "dict" and op == "cat":
            field_keys = sorted(data.keys())
            size = data[field_keys[0]].shape[-1]
            sizes = np.zeros(self.comm.size, dtype="int64")
            outsize = np.array(size, dtype="int64")
            self.comm.Allgather([outsize, 1, MPI.LONG], [sizes, 1, MPI.LONG])
            # This nested concatenate is to get the shapes to work out correctly;
            # if we just add [0] to sizes, it will broadcast a summation, not a
            # concatenation.
            offsets = np.add.accumulate(np.concatenate([[0], sizes]))[:-1]
            arr_size = self.comm.allreduce(size, op=MPI.SUM)
            for key in field_keys:
                dd = data[key]
                rv = self.alltoallv_array(dd, arr_size, offsets, sizes)
                data[key] = rv
            return data
        elif datatype == "array" and op == "cat":
            if data is None:
                ncols = -1
                size = 0
                dtype = "float64"
                mylog.warning(
                    "Array passed to par_combine_object was None. "
                    "Setting dtype to float64. This may break things!"
                )
            else:
                dtype = data.dtype
                if len(data) == 0:
                    ncols = -1
                    size = 0
                elif len(data.shape) == 1:
                    ncols = 1
                    size = data.shape[0]
                else:
                    ncols, size = data.shape
            ncols = self.comm.allreduce(ncols, op=MPI.MAX)
            if ncols == 0:
                data = np.zeros(0, dtype=dtype)  # This only works for
            elif data is None:
                data = np.zeros((ncols, 0), dtype=dtype)
            size = data.shape[-1]
            sizes = np.zeros(self.comm.size, dtype="int64")
            outsize = np.array(size, dtype="int64")
            self.comm.Allgather([outsize, 1, MPI.LONG], [sizes, 1, MPI.LONG])
            # This nested concatenate is to get the shapes to work out correctly;
            # if we just add [0] to sizes, it will broadcast a summation, not a
            # concatenation.
            offsets = np.add.accumulate(np.concatenate([[0], sizes]))[:-1]
            arr_size = self.comm.allreduce(size, op=MPI.SUM)
            data = self.alltoallv_array(data, arr_size, offsets, sizes)
            return data
        elif datatype == "list" and op == "cat":
            recv_data = self.comm.allgather(data)
            # Now flatten into a single list, since this
            # returns us a list of lists.
            data = []
            while recv_data:
                data.extend(recv_data.pop(0))
            return data
        raise NotImplementedError

    @parallel_passthrough
    def mpi_bcast(self, data, root=0):
        # The second check below makes sure that we know how to communicate
        # this type of array.  Otherwise, we'll pickle it.
if isinstance(data, np.ndarray) and get_mpi_type(data.dtype) is not None: if self.comm.rank == root: if isinstance(data, YTArray): info = ( data.shape, data.dtype, str(data.units), data.units.registry.lut, ) if isinstance(data, ImageArray): info += ("ImageArray",) else: info += ("YTArray",) else: info = (data.shape, data.dtype) else: info = () info = self.comm.bcast(info, root=root) if self.comm.rank != root: if len(info) == 5: registry = UnitRegistry(lut=info[3], add_default_symbols=False) if info[-1] == "ImageArray": data = ImageArray( np.empty(info[0], dtype=info[1]), units=info[2], registry=registry, ) else: data = YTArray( np.empty(info[0], dtype=info[1]), info[2], registry=registry ) else: data = np.empty(info[0], dtype=info[1]) mpi_type = get_mpi_type(info[1]) self.comm.Bcast([data, mpi_type], root=root) return data else: # Use pickled methods. data = self.comm.bcast(data, root=root) return data def preload(self, grids, fields, io_handler): # This is non-functional. return @parallel_passthrough def mpi_allreduce(self, data, dtype=None, op="sum"): op = op_names[op] if isinstance(data, np.ndarray) and data.dtype != bool: if dtype is None: dtype = data.dtype if dtype != data.dtype: data = data.astype(dtype) temp = data.copy() self.comm.Allreduce( [temp, get_mpi_type(dtype)], [data, get_mpi_type(dtype)], op ) return data else: # We use old-school pickling here on the assumption the arrays are # relatively small ( < 1e7 elements ) return self.comm.allreduce(data, op) ### # Non-blocking stuff. ### def mpi_nonblocking_recv(self, data, source, tag=0, dtype=None): if not self._distributed: return -1 if dtype is None: dtype = data.dtype mpi_type = get_mpi_type(dtype) return self.comm.Irecv([data, mpi_type], source, tag) def mpi_nonblocking_send(self, data, dest, tag=0, dtype=None): if not self._distributed: return -1 if dtype is None: dtype = data.dtype mpi_type = get_mpi_type(dtype) return self.comm.Isend([data, mpi_type], dest, tag) def mpi_Request_Waitall(self, hooks): if not self._distributed: return MPI.Request.Waitall(hooks) def mpi_Request_Waititer(self, hooks): for _hook in hooks: req = MPI.Request.Waitany(hooks) yield req def mpi_Request_Testall(self, hooks): """ This returns False if any of the request hooks are un-finished, and True if they are all finished. """ if not self._distributed: return True return MPI.Request.Testall(hooks) ### # End non-blocking stuff. ### ### # Parallel rank and size properties. 
    ###

    @property
    def size(self):
        if not self._distributed:
            return 1
        return self.comm.size

    @property
    def rank(self):
        if not self._distributed:
            return 0
        return self.comm.rank

    def mpi_info_dict(self, info):
        if not self._distributed:
            return 0, {0: info}
        data = None
        if self.comm.rank == 0:
            data = {0: info}
            for i in range(1, self.comm.size):
                data[i] = self.comm.recv(source=i, tag=0)
        else:
            self.comm.send(info, dest=0, tag=0)
        mylog.debug("Opening MPI Broadcast on %s", self.comm.rank)
        data = self.comm.bcast(data, root=0)
        return self.comm.rank, data

    def claim_object(self, obj):
        if not self._distributed:
            return
        obj._owner = self.comm.rank
        obj._distributed = True

    def do_not_claim_object(self, obj):
        if not self._distributed:
            return
        obj._owner = -1
        obj._distributed = True

    def write_on_root(self, fn):
        if not self._distributed:
            return open(fn, "w")
        if self.comm.rank == 0:
            return open(fn, "w")
        else:
            return StringIO()

    def get_filename(self, prefix, rank=None):
        if not self._distributed:
            return prefix
        if rank is None:
            return "%s_%04i" % (prefix, self.comm.rank)
        else:
            return "%s_%04i" % (prefix, rank)

    def is_mine(self, obj):
        if not obj._distributed:
            return True
        return obj._owner == self.comm.rank

    def send_quadtree(self, target, buf, tgd, args):
        sizebuf = np.zeros(1, "int64")
        sizebuf[0] = buf[0].size
        self.comm.Send([sizebuf, MPI.LONG], dest=target)
        self.comm.Send([buf[0], MPI.INT], dest=target)
        self.comm.Send([buf[1], MPI.DOUBLE], dest=target)
        self.comm.Send([buf[2], MPI.DOUBLE], dest=target)

    def recv_quadtree(self, target, tgd, args):
        sizebuf = np.zeros(1, "int64")
        self.comm.Recv(sizebuf, source=target)
        buf = [
            np.empty((sizebuf[0],), "int32"),
            np.empty((sizebuf[0], args[2]), "float64"),
            np.empty((sizebuf[0],), "float64"),
        ]
        self.comm.Recv([buf[0], MPI.INT], source=target)
        self.comm.Recv([buf[1], MPI.DOUBLE], source=target)
        self.comm.Recv([buf[2], MPI.DOUBLE], source=target)
        return buf

    @parallel_passthrough
    def merge_quadtree_buffers(self, qt, merge_style):
        # This is a modified version of pairwise reduction from Lisandro Dalcin,
        # in the reductions demo of mpi4py
        size = self.comm.size
        rank = self.comm.rank
        mask = 1
        args = qt.get_args()  # Will always be the same
        tgd = np.array([args[0], args[1]], dtype="int64")
        sizebuf = np.zeros(1, "int64")
        while mask < size:
            if (mask & rank) != 0:
                target = (rank & ~mask) % size
                # print("SENDING FROM %02i to %02i" % (rank, target))
                buf = qt.tobuffer()
                self.send_quadtree(target, buf, tgd, args)
                # qt = self.recv_quadtree(target, tgd, args)
            else:
                target = rank | mask
                if target < size:
                    # print("RECEIVING FROM %02i on %02i" % (target, rank))
                    buf = self.recv_quadtree(target, tgd, args)
                    qto = QuadTree(tgd, args[2], qt.bounds)
                    qto.frombuffer(buf[0], buf[1], buf[2], merge_style)
                    merge_quadtrees(qt, qto, style=merge_style)
                    del qto
                    # self.send_quadtree(target, qt, tgd, args)
            mask <<= 1
        if rank == 0:
            buf = qt.tobuffer()
            sizebuf[0] = buf[0].size
        self.comm.Bcast([sizebuf, MPI.LONG], root=0)
        if rank != 0:
            buf = [
                np.empty((sizebuf[0],), "int32"),
                np.empty((sizebuf[0], args[2]), "float64"),
                np.empty((sizebuf[0],), "float64"),
            ]
        self.comm.Bcast([buf[0], MPI.INT], root=0)
        self.comm.Bcast([buf[1], MPI.DOUBLE], root=0)
        self.comm.Bcast([buf[2], MPI.DOUBLE], root=0)
        self.refined = buf[0]
        if rank != 0:
            qt = QuadTree(tgd, args[2], qt.bounds)
            qt.frombuffer(buf[0], buf[1], buf[2], merge_style)
        return qt

    def send_array(self, arr, dest, tag=0):
        if not isinstance(arr, np.ndarray):
            self.comm.send((None, None),
dest=dest, tag=tag) self.comm.send(arr, dest=dest, tag=tag) return tmp = arr.view(self.__tocast) # Cast to CHAR # communicate type and shape and optionally units if isinstance(arr, YTArray): unit_metadata = (str(arr.units), arr.units.registry.lut) if isinstance(arr, ImageArray): unit_metadata += ("ImageArray",) else: unit_metadata += ("YTArray",) else: unit_metadata = () self.comm.send((arr.dtype.str, arr.shape) + unit_metadata, dest=dest, tag=tag) self.comm.Send([arr, MPI.CHAR], dest=dest, tag=tag) del tmp def recv_array(self, source, tag=0): metadata = self.comm.recv(source=source, tag=tag) dt, ne = metadata[:2] if ne is None and dt is None: return self.comm.recv(source=source, tag=tag) arr = np.empty(ne, dtype=dt) if len(metadata) == 5: registry = UnitRegistry(lut=metadata[3], add_default_symbols=False) if metadata[-1] == "ImageArray": arr = ImageArray(arr, units=metadata[2], registry=registry) else: arr = YTArray(arr, metadata[2], registry=registry) tmp = arr.view(self.__tocast) self.comm.Recv([tmp, MPI.CHAR], source=source, tag=tag) return arr def alltoallv_array(self, send, total_size, offsets, sizes): if len(send.shape) > 1: recv = [] for i in range(send.shape[0]): recv.append( self.alltoallv_array(send[i, :].copy(), total_size, offsets, sizes) ) recv = np.array(recv) return recv offset = offsets[self.comm.rank] tmp_send = send.view(self.__tocast) recv = np.empty(total_size, dtype=send.dtype) if isinstance(send, YTArray): # We assume send.units is consistent with the units # on the receiving end. if isinstance(send, ImageArray): recv = ImageArray(recv, units=send.units) else: recv = YTArray(recv, send.units) recv[offset : offset + send.size] = send[:] dtr = send.dtype.itemsize / tmp_send.dtype.itemsize # > 1 roff = [off * dtr for off in offsets] rsize = [siz * dtr for siz in sizes] tmp_recv = recv.view(self.__tocast) self.comm.Allgatherv( (tmp_send, tmp_send.size, MPI.CHAR), (tmp_recv, (rsize, roff), MPI.CHAR) ) return recv def probe_loop(self, tag, callback): while True: st = MPI.Status() self.comm.Probe(MPI.ANY_SOURCE, tag=tag, status=st) try: callback(st) except StopIteration: mylog.debug("Probe loop ending.") break communication_system = CommunicationSystem() class ParallelAnalysisInterface: comm = None _grids = None _distributed = None def __init__(self, comm=None): if comm is None: self.comm = communication_system.communicators[-1] else: self.comm = comm self._grids = self.comm._grids self._distributed = self.comm._distributed def _get_objs(self, attr, *args, **kwargs): if self._distributed: rr = kwargs.pop("round_robin", False) self._initialize_parallel(*args, **kwargs) return ParallelObjectIterator(self, attr=attr, round_robin=rr) return ObjectIterator(self, attr=attr) def _get_grids(self, *args, **kwargs): if self._distributed: self._initialize_parallel(*args, **kwargs) return ParallelObjectIterator(self, attr="_grids") return ObjectIterator(self, attr="_grids") def _get_grid_objs(self): if self._distributed: return ParallelObjectIterator(self, True, attr="_grids") return ObjectIterator(self, True, attr="_grids") def get_dependencies(self, fields): deps = [] fi = self.ds.field_info for field in fields: if any(getattr(v, "ghost_zones", 0) > 0 for v in fi[field].validators): continue deps += list( always_iterable(fi[field].get_dependencies(ds=self.ds).requested) ) return list(set(deps)) def _initialize_parallel(self): pass def _finalize_parallel(self): pass def partition_index_2d(self, axis): if not self._distributed: return False, self.index.grid_collection(self.center, 
self.index.grids) xax = self.ds.coordinates.x_axis[axis] yax = self.ds.coordinates.y_axis[axis] cc = MPI.Compute_dims(self.comm.size, 2) mi = self.comm.rank cx, cy = np.unravel_index(mi, cc) x = np.mgrid[0 : 1 : (cc[0] + 1) * 1j][cx : cx + 2] y = np.mgrid[0 : 1 : (cc[1] + 1) * 1j][cy : cy + 2] DLE, DRE = self.ds.domain_left_edge.copy(), self.ds.domain_right_edge.copy() LE = np.ones(3, dtype="float64") * DLE RE = np.ones(3, dtype="float64") * DRE LE[xax] = x[0] * (DRE[xax] - DLE[xax]) + DLE[xax] RE[xax] = x[1] * (DRE[xax] - DLE[xax]) + DLE[xax] LE[yax] = y[0] * (DRE[yax] - DLE[yax]) + DLE[yax] RE[yax] = y[1] * (DRE[yax] - DLE[yax]) + DLE[yax] mylog.debug("Dimensions: %s %s", LE, RE) reg = self.ds.region(self.center, LE, RE) return True, reg def partition_index_3d(self, ds, padding=0.0, rank_ratio=1): LE, RE = np.array(ds.left_edge), np.array(ds.right_edge) # We need to establish if we're looking at a subvolume, in which case # we *do* want to pad things. if (LE == self.ds.domain_left_edge).all() and ( RE == self.ds.domain_right_edge ).all(): subvol = False else: subvol = True if not self._distributed and not subvol: return False, LE, RE, ds if not self._distributed and subvol: return True, LE, RE, self.ds.region(self.center, LE - padding, RE + padding) elif ytcfg.get("yt", "inline"): # At this point, we want to identify the root grid tile to which # this processor is assigned. # The only way I really know how to do this is to get the level-0 # grid that belongs to this processor. grids = self.ds.index.select_grids(0) root_grids = [g for g in grids if g.proc_num == self.comm.rank] if len(root_grids) != 1: raise RuntimeError # raise KeyError LE = root_grids[0].LeftEdge RE = root_grids[0].RightEdge return True, LE, RE, self.ds.region(self.center, LE, RE) cc = MPI.Compute_dims(self.comm.size / rank_ratio, 3) mi = self.comm.rank % (self.comm.size // rank_ratio) cx, cy, cz = np.unravel_index(mi, cc) x = np.mgrid[LE[0] : RE[0] : (cc[0] + 1) * 1j][cx : cx + 2] y = np.mgrid[LE[1] : RE[1] : (cc[1] + 1) * 1j][cy : cy + 2] z = np.mgrid[LE[2] : RE[2] : (cc[2] + 1) * 1j][cz : cz + 2] LE = np.array([x[0], y[0], z[0]], dtype="float64") RE = np.array([x[1], y[1], z[1]], dtype="float64") if padding > 0: return True, LE, RE, self.ds.region(self.center, LE - padding, RE + padding) return False, LE, RE, self.ds.region(self.center, LE, RE) def partition_region_3d(self, left_edge, right_edge, padding=0.0, rank_ratio=1): """ Given a region, it subdivides it into smaller regions for parallel analysis. """ LE, RE = left_edge[:], right_edge[:] if not self._distributed: raise NotImplementedError cc = MPI.Compute_dims(self.comm.size / rank_ratio, 3) mi = self.comm.rank % (self.comm.size // rank_ratio) cx, cy, cz = np.unravel_index(mi, cc) x = np.mgrid[LE[0] : RE[0] : (cc[0] + 1) * 1j][cx : cx + 2] y = np.mgrid[LE[1] : RE[1] : (cc[1] + 1) * 1j][cy : cy + 2] z = np.mgrid[LE[2] : RE[2] : (cc[2] + 1) * 1j][cz : cz + 2] LE = np.array([x[0], y[0], z[0]], dtype="float64") RE = np.array([x[1], y[1], z[1]], dtype="float64") if padding > 0: return True, LE, RE, self.ds.region(self.center, LE - padding, RE + padding) return False, LE, RE, self.ds.region(self.center, LE, RE) def partition_index_3d_bisection_list(self): """ Returns an array that is used to drive _partition_index_3d_bisection, below. 
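
        Each entry of the returned list is a two-element [cut dimension,
        number of cuts] pair; the cuts cycle through x, y, and z wherever the
        factorization of the communicator size allows, so that successive
        bisections alternate axes.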
""" def factor(n): if n == 1: return [1] i = 2 limit = n**0.5 while i <= limit: if n % i == 0: ret = factor(n / i) ret.append(i) return ret i += 1 return [n] cc = MPI.Compute_dims(self.comm.size, 3) si = self.comm.size factors = factor(si) xyzfactors = [factor(cc[0]), factor(cc[1]), factor(cc[2])] # Each entry of cuts is a two element list, that is: # [cut dim, number of cuts] cuts = [] # The higher cuts are in the beginning. # We're going to do our best to make the cuts cyclic, i.e. x, then y, # then z, etc... lastdim = 0 for f in factors: nextdim = (lastdim + 1) % 3 while True: if f in xyzfactors[nextdim]: cuts.append([nextdim, f]) topop = xyzfactors[nextdim].index(f) xyzfactors[nextdim].pop(topop) lastdim = nextdim break nextdim = (nextdim + 1) % 3 return cuts class GroupOwnership(ParallelAnalysisInterface): def __init__(self, items): ParallelAnalysisInterface.__init__(self) self.num_items = len(items) self.items = items assert self.num_items >= self.comm.size self.owned = list(range(self.comm.size)) self.pointer = 0 if parallel_capable: communication_system.push_with_ids([self.comm.rank]) def __del__(self): if parallel_capable: communication_system.pop() def inc(self, n=-1): old_item = self.item if n == -1: n = self.comm.size for _ in range(n): if self.pointer >= self.num_items - self.comm.size: break self.owned[self.pointer % self.comm.size] += self.comm.size self.pointer += 1 if self.item is not old_item: self.switch() def dec(self, n=-1): old_item = self.item if n == -1: n = self.comm.size for _ in range(n): if self.pointer == 0: break self.owned[(self.pointer - 1) % self.comm.size] -= self.comm.size self.pointer -= 1 if self.item is not old_item: self.switch() _last = None @property def item(self): own = self.owned[self.comm.rank] if self._last != own: self._item = self.items[own] self._last = own return self._item def switch(self): pass ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/parallel_tools/task_queue.py0000644000175100001770000001323514714401662021774 0ustar00runnerdockerimport numpy as np from yt.funcs import mylog from .parallel_analysis_interface import ( ResultsStorage, _get_comm, communication_system, parallel_capable, ) messages = { "task": {"msg": "next"}, "result": {"msg": "result"}, "task_req": {"msg": "task_req"}, "end": {"msg": "no_more_tasks"}, } class TaskQueueNonRoot: def __init__(self, tasks, comm, subcomm): self.tasks = tasks self.results = {} self.comm = comm self.subcomm = subcomm def send_result(self, result): new_msg = messages["result"].copy() new_msg["value"] = result if self.subcomm.rank == 0: self.comm.comm.send(new_msg, dest=0, tag=1) self.subcomm.barrier() def __next__(self): msg = messages["task_req"].copy() if self.subcomm.rank == 0: self.comm.comm.send(msg, dest=0, tag=1) msg = self.comm.comm.recv(source=0, tag=2) msg = self.subcomm.bcast(msg, root=0) if msg["msg"] == messages["end"]["msg"]: mylog.debug("Notified to end") raise StopIteration return msg["value"] # For Python 2 compatibility next = __next__ def __iter__(self): return self def run(self, callable): for task in self: self.send_result(callable(task)) return self.finalize() def finalize(self, vals=None): return self.comm.comm.bcast(vals, root=0) class TaskQueueRoot(TaskQueueNonRoot): def __init__(self, tasks, comm, njobs): self.njobs = njobs self.tasks = tasks self.results = {} self.assignments = {} self._notified = 0 self._current = 0 self._remaining = len(self.tasks) self.comm = comm # Set up threading here # 
self.dist = threading.Thread(target=self.handle_assignments) # self.dist.daemon = True # self.dist.start() def run(self, func=None): self.comm.probe_loop(1, self.handle_assignment) return self.finalize(self.results) def insert_result(self, source_id, rstore): task_id = rstore.result_id self.results[task_id] = rstore.result def assign_task(self, source_id): if self._remaining == 0: mylog.debug("Notifying %s to end", source_id) msg = messages["end"].copy() self._notified += 1 else: msg = messages["task"].copy() task_id = self._current task = self.tasks[task_id] self.assignments[source_id] = task_id self._current += 1 self._remaining -= 1 msg["value"] = task self.comm.comm.send(msg, dest=source_id, tag=2) def handle_assignment(self, status): msg = self.comm.comm.recv(source=status.source, tag=1) if msg["msg"] == messages["result"]["msg"]: self.insert_result(status.source, msg["value"]) elif msg["msg"] == messages["task_req"]["msg"]: self.assign_task(status.source) else: mylog.error("GOT AN UNKNOWN MESSAGE: %s", msg) raise RuntimeError if self._notified >= self.njobs: raise StopIteration def task_queue(func, tasks, njobs=0): comm = _get_comm(()) if not parallel_capable: mylog.error("Cannot create task queue for serial process.") raise RuntimeError my_size = comm.comm.size if njobs <= 0: njobs = my_size - 1 if njobs >= my_size: mylog.error( "You have asked for %s jobs, but only %s processors are available.", njobs, (my_size - 1), ) raise RuntimeError my_rank = comm.rank all_new_comms = np.array_split(np.arange(1, my_size), njobs) all_new_comms.insert(0, np.array([0])) for i, comm_set in enumerate(all_new_comms): if my_rank in comm_set: my_new_id = i break subcomm = communication_system.push_with_ids(all_new_comms[my_new_id].tolist()) if comm.comm.rank == 0: my_q = TaskQueueRoot(tasks, comm, njobs) else: my_q = TaskQueueNonRoot(None, comm, subcomm) communication_system.pop() return my_q.run(func) def dynamic_parallel_objects(tasks, njobs=0, storage=None, broadcast=True): comm = _get_comm(()) if not parallel_capable: mylog.error("Cannot create task queue for serial process.") raise RuntimeError my_size = comm.comm.size if njobs <= 0: njobs = my_size - 1 if njobs >= my_size: mylog.error( "You have asked for %s jobs, but only %s processors are available.", njobs, (my_size - 1), ) raise RuntimeError my_rank = comm.rank all_new_comms = np.array_split(np.arange(1, my_size), njobs) all_new_comms.insert(0, np.array([0])) for i, comm_set in enumerate(all_new_comms): if my_rank in comm_set: my_new_id = i break subcomm = communication_system.push_with_ids(all_new_comms[my_new_id].tolist()) if comm.comm.rank == 0: my_q = TaskQueueRoot(tasks, comm, njobs) my_q.comm.probe_loop(1, my_q.handle_assignment) else: my_q = TaskQueueNonRoot(None, comm, subcomm) if storage is None: for task in my_q: yield task else: for task in my_q: rstore = ResultsStorage() yield rstore, task my_q.send_result(rstore) if storage is not None: if broadcast: my_results = my_q.comm.comm.bcast(my_q.results, root=0) else: my_results = my_q.results storage.update(my_results) communication_system.pop() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/parameter_file_storage.py0000644000175100001770000001423414714401662021315 0ustar00runnerdockerimport os.path from itertools import islice from yt.config import ytcfg from yt.funcs import mylog from yt.utilities.object_registries import output_type_registry from yt.utilities.parallel_tools.parallel_analysis_interface import ( 
parallel_simple_proxy, ) _field_names = ("hash", "bn", "fp", "tt", "ctid", "class_name", "last_seen") class NoParameterShelf(Exception): pass class UnknownDatasetType(Exception): def __init__(self, name): self.name = name def __str__(self): return f"{self.name}" def __repr__(self): return f"{self.name}" class ParameterFileStore: """ This class is designed to be a semi-persistent storage for parameter files. By identifying each dataset with a unique hash, objects can be stored independently of datasets -- when an object is loaded, the dataset is as well, based on the hash. For storage concerns, only a few hundred will be retained in cache. """ _shared_state = {} # type: ignore _distributed = True _processing = False _owner = 0 _register = True def __new__(cls, *p, **k): self = object.__new__(cls, *p, **k) self.__dict__ = cls._shared_state return self def __init__(self, in_memory=False): """ Create the dataset database if yt is configured to store them. Otherwise, use read-only settings. """ if not self._register: return if ytcfg.get("yt", "store_parameter_files"): self._read_only = False self.init_db() self._records = self.read_db() else: self._read_only = True self._records = {} self._register = False @parallel_simple_proxy def init_db(self): """ This function ensures that the storage database exists and can be used. """ dbn = self._get_db_name() dbdir = os.path.dirname(dbn) try: if not os.path.isdir(dbdir): os.mkdir(dbdir) except OSError as exc: raise NoParameterShelf from exc open(dbn, "ab") # make sure it exists, allow to close # Now we read in all our records and return them # these will be broadcast def _get_db_name(self): base_file_name = ytcfg.get("yt", "parameter_file_store") if not os.access(os.path.expanduser("~/"), os.W_OK): return os.path.abspath(base_file_name) return os.path.expanduser(f"~/.yt/{base_file_name}") def get_ds_hash(self, hash): """This returns a dataset based on a hash.""" return self._convert_ds(self._records[hash]) def get_ds_ctid(self, ctid): """This returns a dataset based on a CurrentTimeIdentifier.""" for h in self._records: if self._records[h]["ctid"] == ctid: return self._convert_ds(self._records[h]) def _adapt_ds(self, ds): """This turns a dataset into a CSV entry.""" return { "bn": ds.basename, "fp": ds.directory, "tt": ds.current_time, "ctid": ds.unique_identifier, "class_name": ds.__class__.__name__, "last_seen": ds._instantiated, } def _convert_ds(self, ds_dict): """This turns a CSV entry into a dataset.""" bn = ds_dict["bn"] fp = ds_dict["fp"] fn = os.path.join(fp, bn) class_name = ds_dict["class_name"] if class_name not in output_type_registry: raise UnknownDatasetType(class_name) mylog.info("Checking %s", fn) if os.path.exists(fn): ds = output_type_registry[class_name](os.path.join(fp, bn)) else: raise OSError # This next one is to ensure that we manually update the last_seen # record *now*, for during write_out. self._records[ds._hash()]["last_seen"] = ds._instantiated return ds def check_ds(self, ds): """ This will ensure that the dataset (*ds*) handed to it is recorded in the storage unit. In doing so, it will update path and "last_seen" information. 
""" hash = ds._hash() if hash not in self._records: self.insert_ds(ds) return ds_dict = self._records[hash] self._records[hash]["last_seen"] = ds._instantiated if ds_dict["bn"] != ds.basename or ds_dict["fp"] != ds.directory: self.wipe_hash(hash) self.insert_ds(ds) def insert_ds(self, ds): """This will insert a new *ds* and flush the database to disk.""" self._records[ds._hash()] = self._adapt_ds(ds) self.flush_db() def wipe_hash(self, hash): """ This removes a *hash* corresponding to a dataset from the storage. """ if hash not in self._records: return del self._records[hash] self.flush_db() def flush_db(self): """This flushes the storage to disk.""" if self._read_only: return self._write_out() self.read_db() def get_recent(self, n=10): recs = sorted(self._records.values(), key=lambda a: -a["last_seen"])[:n] return recs @parallel_simple_proxy def _write_out(self): import csv if self._read_only: return fn = self._get_db_name() f = open(f"{fn}.tmp", "w") w = csv.DictWriter(f, _field_names) maxn = ytcfg.get("yt", "maximum_stored_datasets") # number written for h, v in islice( sorted(self._records.items(), key=lambda a: -a[1]["last_seen"]), 0, maxn ): v["hash"] = h w.writerow(v) f.close() os.rename(f"{fn}.tmp", fn) @parallel_simple_proxy def read_db(self): """This will read the storage device from disk.""" import csv f = open(self._get_db_name()) vals = csv.DictReader(f, _field_names) db = {} for v in vals: db[v.pop("hash")] = v if v["last_seen"] is None: v["last_seen"] = 0.0 else: v["last_seen"] = float(v["last_seen"]) f.close() return db ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/particle_generator.py0000644000175100001770000004122314714401662020461 0ustar00runnerdockerimport numpy as np from yt.funcs import get_pbar from yt.units._numpy_wrapper_functions import uconcatenate from yt.utilities.lib.particle_mesh_operations import CICSample_3 class ParticleGenerator: def __init__(self, ds, num_particles, field_list, ptype="io"): """ Base class for generating particle fields which may be applied to streams. Normally this would not be called directly, since it doesn't really do anything except allocate memory. Takes a *ds* to serve as the basis for determining grids, the number of particles *num_particles*, a list of fields, *field_list*, and the particle type *ptype*, which has a default value of "io". 
""" self.ds = ds self.num_particles = num_particles self.field_list = [ (ptype, fd) if isinstance(fd, str) else fd for fd in field_list ] self.field_list.append((ptype, "particle_index")) self.field_units = { (ptype, f"particle_position_{ax}"): "code_length" for ax in "xyz" } self.field_units[ptype, "particle_index"] = "" self.ptype = ptype self._set_default_fields() try: self.posx_index = self.field_list.index(self.default_fields[0]) self.posy_index = self.field_list.index(self.default_fields[1]) self.posz_index = self.field_list.index(self.default_fields[2]) except Exception as e: raise KeyError( "You must specify position fields: " + " ".join(f"particle_position_{ax}" for ax in "xyz") ) from e self.index_index = self.field_list.index((ptype, "particle_index")) self.num_grids = self.ds.index.num_grids self.NumberOfParticles = np.zeros(self.num_grids, dtype="int64") self.ParticleGridIndices = np.zeros(self.num_grids + 1, dtype="int64") self.num_fields = len(self.field_list) self.particles = np.zeros( (self.num_particles, self.num_fields), dtype="float64" ) def _set_default_fields(self): self.default_fields = [ (self.ptype, "particle_position_x"), (self.ptype, "particle_position_y"), (self.ptype, "particle_position_z"), ] def has_key(self, key): """ Check to see if *key* is in the particle field list. """ return key in self.field_list def keys(self): """ Return the list of particle fields. """ return self.field_list def __getitem__(self, key): """ Get the field associated with key. """ return self.particles[:, self.field_list.index(key)] def __setitem__(self, key, val): """ Sets a field to be some other value. Note that we assume that the particles have been sorted by grid already, so make sure the setting of the field is consistent with this. """ self.particles[:, self.field_list.index(key)] = val[:] def __len__(self): """ The number of particles """ return self.num_particles def get_for_grid(self, grid): """ Return a dict containing all of the particle fields in the specified grid. """ ind = grid.id - grid._id_offset start = self.ParticleGridIndices[ind] end = self.ParticleGridIndices[ind + 1] tr = {} for field in self.field_list: fi = self.field_list.index(field) if field in self.field_units: tr[field] = self.ds.arr( self.particles[start:end, fi], self.field_units[field] ) else: tr[field] = self.particles[start:end, fi] return tr def _setup_particles(self, x, y, z, setup_fields=None): """ Assigns grids to particles and sets up particle positions. *setup_fields* is a dict of fields other than the particle positions to set up. """ particle_grids, particle_grid_inds = self.ds.index._find_points(x, y, z) idxs = np.argsort(particle_grid_inds) self.particles[:, self.posx_index] = x[idxs] self.particles[:, self.posy_index] = y[idxs] self.particles[:, self.posz_index] = z[idxs] self.NumberOfParticles = np.bincount( particle_grid_inds.astype("intp"), minlength=self.num_grids ) if self.num_grids > 1: np.add.accumulate( self.NumberOfParticles.squeeze(), out=self.ParticleGridIndices[1:] ) else: self.ParticleGridIndices[1] = self.NumberOfParticles.squeeze() if setup_fields is not None: for key, value in setup_fields.items(): field = (self.ptype, key) if isinstance(key, str) else key if field not in self.default_fields: self.particles[:, self.field_list.index(field)] = value[idxs] def assign_indices(self, function=None, **kwargs): """ Assign unique indices to the particles. The default is to just use numpy.arange, but any function may be supplied with keyword arguments. 
""" if function is None: self.particles[:, self.index_index] = np.arange(self.num_particles) else: self.particles[:, self.index_index] = function(**kwargs) def map_grid_fields_to_particles(self, mapping_dict): r""" For the fields in *mapping_dict*, map grid fields to the particles using CIC sampling. Examples -------- >>> field_map = { ... "density": "particle_density", ... "temperature": "particle_temperature", ... } >>> particles.map_grid_fields_to_particles(field_map) """ pbar = get_pbar("Mapping fields to particles", self.num_grids) for i, grid in enumerate(self.ds.index.grids): pbar.update(i + 1) if self.NumberOfParticles[i] > 0: start = self.ParticleGridIndices[i] end = self.ParticleGridIndices[i + 1] # Note we add one ghost zone to the grid! cube = grid.retrieve_ghost_zones(1, list(mapping_dict.keys())) le = np.array(grid.LeftEdge).astype(np.float64) dims = np.array(grid.ActiveDimensions).astype(np.int32) for gfield, pfield in mapping_dict.items(): self.field_units[pfield] = cube[gfield].units field_index = self.field_list.index(pfield) CICSample_3( self.particles[start:end, self.posx_index], self.particles[start:end, self.posy_index], self.particles[start:end, self.posz_index], self.particles[start:end, field_index], np.int64(self.NumberOfParticles[i]), cube[gfield], le, dims, grid.dds[0], ) pbar.finish() def apply_to_stream(self, overwrite=False, **kwargs): """ Apply the particles to a grid-based stream dataset. If particles already exist, and overwrite=False, do not overwrite them, but add the new ones to them. """ grid_data = [] for i, g in enumerate(self.ds.index.grids): data = {} number_of_particles = self.NumberOfParticles[i] if not overwrite: number_of_particles += g.NumberOfParticles grid_particles = self.get_for_grid(g) for field in self.field_list: if number_of_particles > 0: if ( g.NumberOfParticles > 0 and not overwrite and field in self.ds.field_list ): # We have particles in this grid, we're not # overwriting them, and the field is in the field # list already data[field] = uconcatenate([g[field], grid_particles[field]]) else: # Otherwise, simply add the field in data[field] = grid_particles[field] else: # We don't have particles in this grid data[field] = np.array([], dtype="float64") grid_data.append(data) self.ds.index.update_data(grid_data) class FromListParticleGenerator(ParticleGenerator): def __init__(self, ds, num_particles, data, ptype="io"): r""" Generate particle fields from array-like lists contained in a dict. Parameters ---------- ds : `Dataset` The dataset which will serve as the base for these particles. num_particles : int The number of particles in the dict. data : dict of NumPy arrays The particle fields themselves. ptype : string, optional The particle type for these particle fields. Default: "io" Examples -------- >>> num_p = 100000 >>> posx = np.random.random(num_p) >>> posy = np.random.random(num_p) >>> posz = np.random.random(num_p) >>> mass = np.ones(num_p) >>> data = { ... "particle_position_x": posx, ... "particle_position_y": posy, ... "particle_position_z": posz, ... "particle_mass": mass, ... 
} >>> particles = FromListParticleGenerator(ds, num_p, data) """ field_list = list(data.keys()) if "particle_position_x" in data: x = data.pop("particle_position_x") y = data.pop("particle_position_y") z = data.pop("particle_position_z") elif (ptype, "particle_position_x") in data: x = data.pop((ptype, "particle_position_x")) y = data.pop((ptype, "particle_position_y")) z = data.pop((ptype, "particle_position_z")) xcond = np.logical_or(x < ds.domain_left_edge[0], x >= ds.domain_right_edge[0]) ycond = np.logical_or(y < ds.domain_left_edge[1], y >= ds.domain_right_edge[1]) zcond = np.logical_or(z < ds.domain_left_edge[2], z >= ds.domain_right_edge[2]) cond = np.logical_or(xcond, ycond) cond = np.logical_or(zcond, cond) if np.any(cond): raise ValueError("Some particles are outside of the domain!!!") super().__init__(ds, num_particles, field_list, ptype=ptype) self._setup_particles(x, y, z, setup_fields=data) class LatticeParticleGenerator(ParticleGenerator): def __init__( self, ds, particles_dims, particles_left_edge, particles_right_edge, field_list, ptype="io", ): r""" Generate particles in a lattice arrangement. Parameters ---------- ds : `Dataset` The dataset which will serve as the base for these particles. particles_dims : int, array-like The number of particles along each dimension particles_left_edge : float, array-like The 'left-most' starting positions of the lattice. particles_right_edge : float, array-like The 'right-most' ending positions of the lattice. field_list : list of strings A list of particle fields ptype : string, optional The particle type for these particle fields. Default: "io" Examples -------- >>> dims = (128, 128, 128) >>> le = np.array([0.25, 0.25, 0.25]) >>> re = np.array([0.75, 0.75, 0.75]) >>> fields = [ ... ("all", "particle_position_x"), ... ("all", "particle_position_y"), ... ("all", "particle_position_z"), ... ("all", "particle_density"), ... ("all", "particle_temperature"), ... ] >>> particles = LatticeParticleGenerator(ds, dims, le, re, fields) """ num_x = particles_dims[0] num_y = particles_dims[1] num_z = particles_dims[2] xmin = particles_left_edge[0] ymin = particles_left_edge[1] zmin = particles_left_edge[2] xmax = particles_right_edge[0] ymax = particles_right_edge[1] zmax = particles_right_edge[2] DLE = ds.domain_left_edge.in_units("code_length").d DRE = ds.domain_right_edge.in_units("code_length").d xcond = (xmin < DLE[0]) or (xmax >= DRE[0]) ycond = (ymin < DLE[1]) or (ymax >= DRE[1]) zcond = (zmin < DLE[2]) or (zmax >= DRE[2]) cond = xcond or ycond or zcond if cond: raise ValueError("Proposed bounds for particles are outside domain!!!") super().__init__(ds, num_x * num_y * num_z, field_list, ptype=ptype) dx = (xmax - xmin) / (num_x - 1) dy = (ymax - ymin) / (num_y - 1) dz = (zmax - zmin) / (num_z - 1) inds = np.indices((num_x, num_y, num_z)) xpos = inds[0] * dx + xmin ypos = inds[1] * dy + ymin zpos = inds[2] * dz + zmin self._setup_particles(xpos.flat[:], ypos.flat[:], zpos.flat[:]) class WithDensityParticleGenerator(ParticleGenerator): def __init__( self, ds, data_source, num_particles, field_list, density_field=("gas", "density"), ptype="io", ): r""" Generate particles based on a density field. Parameters ---------- ds : `Dataset` The dataset which will serve as the base for these particles. data_source : `yt.data_objects.selection_objects.base_objects.YTSelectionContainer` The data source containing the density field. 
num_particles : int The number of particles to be generated field_list : list of strings A list of particle fields density_field : tuple, optional A density field which will serve as the distribution function for the particle positions. Theoretically, this could be any 'per-volume' field. ptype : string, optional The particle type for these particle fields. Default: "io" Examples -------- >>> sphere = ds.sphere(ds.domain_center, 0.5) >>> num_p = 100000 >>> fields = [ ... ("all", "particle_position_x"), ... ("all", "particle_position_y"), ... ("all", "particle_position_z"), ... ("all", "particle_density"), ... ("all", "particle_temperature"), ... ] >>> particles = WithDensityParticleGenerator( ... ds, sphere, num_particles, fields, density_field="Dark_Matter_Density" ... ) """ super().__init__(ds, num_particles, field_list, ptype=ptype) num_cells = len(data_source["index", "x"].flat) max_mass = ( data_source[density_field] * data_source["gas", "cell_volume"] ).max() num_particles_left = num_particles all_x = [] all_y = [] all_z = [] pbar = get_pbar("Generating Particles", num_particles) tot_num_accepted = 0 rng = np.random.default_rng() while num_particles_left > 0: m = rng.uniform(high=1.01 * max_mass, size=num_particles_left) idxs = rng.integers(low=0, high=num_cells, size=num_particles_left) m_true = ( data_source[density_field] * data_source["gas", "cell_volume"] ).flat[idxs] accept = m <= m_true num_accepted = accept.sum() accepted_idxs = idxs[accept] xpos = ( data_source["index", "x"].flat[accepted_idxs] + rng.uniform(low=-0.5, high=0.5, size=num_accepted) * data_source["index", "dx"].flat[accepted_idxs] ) ypos = ( data_source["index", "y"].flat[accepted_idxs] + rng.uniform(low=-0.5, high=0.5, size=num_accepted) * data_source["index", "dy"].flat[accepted_idxs] ) zpos = ( data_source["index", "z"].flat[accepted_idxs] + rng.uniform(low=-0.5, high=0.5, size=num_accepted) * data_source["index", "dz"].flat[accepted_idxs] ) all_x.append(xpos) all_y.append(ypos) all_z.append(zpos) num_particles_left -= num_accepted tot_num_accepted += num_accepted pbar.update(tot_num_accepted) pbar.finish() x = uconcatenate(all_x) y = uconcatenate(all_y) z = uconcatenate(all_z) self._setup_particles(x, y, z) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/performance_counters.py0000644000175100001770000000755014714401662021040 0ustar00runnerdockerimport atexit import time from bisect import insort from collections import defaultdict from datetime import datetime as dt from functools import wraps from yt.config import ytcfg from yt.funcs import mylog class PerformanceCounters: _shared_state = {} # type: ignore def __new__(cls, *args, **kwargs): self = object.__new__(cls, *args, **kwargs) self.__dict__ = cls._shared_state return self def __init__(self): self.counters = defaultdict(lambda: 0.0) self.counting = defaultdict(lambda: False) self.starttime = defaultdict(lambda: 0) self.endtime = defaultdict(lambda: 0) self._on = ytcfg.get("yt", "time_functions") self.exit() def __call__(self, name): if not self._on: return if self.counting[name]: self.counters[name] = time.time() - self.counters[name] self.counting[name] = False self.endtime[name] = dt.now() else: self.counters[name] = time.time() self.counting[name] = True self.starttime[name] = dt.now() def call_func(self, func): if not self._on: return func @wraps(func) def func_wrapper(*args, **kwargs): self(func.__name__) func(*args, **kwargs) self(func.__name__) return func_wrapper def 
print_stats(self): mylog.info("Current counter status:\n") times = [] for i in self.counters: insort(times, [self.starttime[i], i, 1]) # 1 for 'on' if not self.counting[i]: insort(times, [self.endtime[i], i, 0]) # 0 for 'off' shifts = {} order = [] endtimes = {} shift = 0 multi = 5 for i in times: # a starting entry if i[2] == 1: shifts[i[1]] = shift order.append(i[1]) shift += 1 if i[2] == 0: shift -= 1 endtimes[i[1]] = self.counters[i[1]] line = "" for i in order: if self.counting[i]: line = "%s%s%i : %s : still running\n" % ( line, " " * shifts[i] * multi, shifts[i], i, ) else: line = "%s%s%i : %s : %0.3e\n" % ( line, " " * shifts[i] * multi, shifts[i], i, self.counters[i], ) mylog.info("\n%s", line) def exit(self): if self._on: atexit.register(self.print_stats) yt_counters = PerformanceCounters() time_function = yt_counters.call_func class ProfilingController: def __init__(self): self.profilers = {} def profile_function(self, function_name): def wrapper(func): try: import cProfile except ImportError: return func my_prof = cProfile.Profile() self.profilers[function_name] = my_prof @wraps(func) def run_in_profiler(*args, **kwargs): my_prof.enable() func(*args, **kwargs) my_prof.disable() return run_in_profiler return wrapper def write_out(self, filename_prefix): if ytcfg.get("yt", "internals", "parallel"): pfn = "%s_%03i_%03i" % ( filename_prefix, ytcfg.get("yt", "internals", "global_parallel_rank"), ytcfg.get("yt", "internals", "global_parallel_size"), ) else: pfn = f"{filename_prefix}" for n, p in sorted(self.profilers.items()): fn = f"{pfn}_{n}.cprof" mylog.info("Dumping %s into %s", n, fn) p.dump_stats(fn) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/periodic_table.py0000644000175100001770000001441314714401662017556 0ustar00runnerdockerimport numbers import numpy as np _elements = ( (1, 1.0079400000, "Hydrogen", "H"), (2, 4.0026020000, "Helium", "He"), (3, 6.9410000000, "Lithium", "Li"), (4, 9.0121820000, "Beryllium", "Be"), (5, 10.8110000000, "Boron", "B"), (6, 12.0107000000, "Carbon", "C"), (7, 14.0067000000, "Nitrogen", "N"), (8, 15.9994000000, "Oxygen", "O"), (9, 18.9994000000, "Fluorine", "F"), (10, 20.1797000000, "Neon", "Ne"), (11, 22.9897692800, "Sodium", "Na"), (12, 24.3050000000, "Magnesium", "Mg"), (13, 26.9815386000, "Aluminium", "Al"), (14, 28.0855000000, "Silicon", "Si"), (15, 30.9737620000, "Phosphorus", "P"), (16, 32.0650000000, "Sulphur", "S"), (17, 35.4530000000, "Chlorine", "Cl"), (18, 39.9480000000, "Argon", "Ar"), (19, 39.0983000000, "Potassium", "K"), (20, 40.0780000000, "Calcium", "Ca"), (21, 44.9559120000, "Scandium", "Sc"), (22, 47.8670000000, "Titanium", "Ti"), (23, 50.9415000000, "Vanadium", "V"), (24, 51.9961000000, "Chromium", "Cr"), (25, 54.9380450000, "Manganese", "Mn"), (26, 55.8450000000, "Iron", "Fe"), (27, 58.9331950000, "Cobalt", "Co"), (28, 58.6934000000, "Nickel", "Ni"), (29, 63.5460000000, "Copper", "Cu"), (30, 65.3800000000, "Zinc", "Zn"), (31, 69.7230000000, "Gallium", "Ga"), (32, 72.6400000000, "Germanium", "Ge"), (33, 74.9216000000, "Arsenic", "As"), (34, 78.9600000000, "Selenium", "Se"), (35, 79.9040000000, "Bromine", "Br"), (36, 83.7980000000, "Krypton", "Kr"), (37, 85.4678000000, "Rubidium", "Rb"), (38, 87.6200000000, "Strontium", "Sr"), (39, 88.9058500000, "Yttrium", "Y"), (40, 91.2240000000, "Zirkonium", "Zr"), (41, 92.9063800000, "Niobium", "Nb"), (42, 95.9600000000, "Molybdaenum", "Mo"), (43, 98.0000000000, "Technetium", "Tc"), (44, 101.0700000000, "Ruthenium", 
"Ru"), (45, 102.9055000000, "Rhodium", "Rh"), (46, 106.4200000000, "Palladium", "Pd"), (47, 107.8682000000, "Silver", "Ag"), (48, 112.4110000000, "Cadmium", "Cd"), (49, 114.8180000000, "Indium", "In"), (50, 118.7100000000, "Tin", "Sn"), (51, 121.7600000000, "Antimony", "Sb"), (52, 127.6000000000, "Tellurium", "Te"), (53, 126.9044700000, "Iodine", "I"), (54, 131.2930000000, "Xenon", "Xe"), (55, 132.9054519000, "Cesium", "Cs"), (56, 137.3270000000, "Barium", "Ba"), (57, 138.9054700000, "Lanthanum", "La"), (58, 140.1160000000, "Cerium", "Ce"), (59, 140.9076500000, "Praseodymium", "Pr"), (60, 144.2420000000, "Neodymium", "Nd"), (61, 145.0000000000, "Promethium", "Pm"), (62, 150.3600000000, "Samarium", "Sm"), (63, 151.9640000000, "Europium", "Eu"), (64, 157.2500000000, "Gadolinium", "Gd"), (65, 158.9253500000, "Terbium", "Tb"), (66, 162.5001000000, "Dysprosium", "Dy"), (67, 164.9303200000, "Holmium", "Ho"), (68, 167.2590000000, "Erbium", "Er"), (69, 168.9342100000, "Thulium", "Tm"), (70, 173.0540000000, "Ytterbium", "Yb"), (71, 174.9668000000, "Lutetium", "Lu"), (72, 178.4900000000, "Hafnium", "Hf"), (73, 180.9478800000, "Tantalum", "Ta"), (74, 183.8400000000, "Tungsten", "W"), (75, 186.2070000000, "Rhenium", "Re"), (76, 190.2300000000, "Osmium", "Os"), (77, 192.2170000000, "Iridium", "Ir"), (78, 192.0840000000, "Platinum", "Pt"), (79, 196.9665690000, "Gold", "Au"), (80, 200.5900000000, "Hydrargyrum", "Hg"), (81, 204.3833000000, "Thallium", "Tl"), (82, 207.2000000000, "Lead", "Pb"), (83, 208.9804010000, "Bismuth", "Bi"), (84, 210.0000000000, "Polonium", "Po"), (85, 210.0000000000, "Astatine", "At"), (86, 220.0000000000, "Radon", "Rn"), (87, 223.0000000000, "Francium", "Fr"), (88, 226.0000000000, "Radium", "Ra"), (89, 227.0000000000, "Actinium", "Ac"), (90, 232.0380600000, "Thorium", "Th"), (91, 231.0358800000, "Protactinium", "Pa"), (92, 238.0289100000, "Uranium", "U"), (93, 237.0000000000, "Neptunium", "Np"), (94, 244.0000000000, "Plutonium", "Pu"), (95, 243.0000000000, "Americium", "Am"), (96, 247.0000000000, "Curium", "Cm"), (97, 247.0000000000, "Berkelium", "Bk"), (98, 251.0000000000, "Californium", "Cf"), (99, 252.0000000000, "Einsteinium", "Es"), (100, 257.0000000000, "Fermium", "Fm"), (101, 258.0000000000, "Mendelevium", "Md"), (102, 259.0000000000, "Nobelium", "No"), (103, 262.0000000000, "Lawrencium", "Lr"), (104, 261.0000000000, "Rutherfordium", "Rf"), (105, 262.0000000000, "Dubnium", "Db"), (106, 266.0000000000, "Seaborgium", "Sg"), (107, 264.0000000000, "Bohrium", "Bh"), (108, 277.0000000000, "Hassium", "Hs"), (109, 268.0000000000, "Meitnerium", "Mt"), (110, 271.0000000000, "Ununnilium", "Ds"), (111, 272.0000000000, "Unununium", "Rg"), (112, 285.0000000000, "Ununbium", "Uub"), (113, 284.0000000000, "Ununtrium", "Uut"), (114, 289.0000000000, "Ununquadium", "Uuq"), (115, 288.0000000000, "Ununpentium", "Uup"), (116, 292.0000000000, "Ununhexium", "Uuh"), (118, 294.0000000000, "Ununoctium", "Uuo"), # Now some special cases that are *not* elements (-1, 2.014102, "Deuterium", "D"), (-1, 0.00054858, "Electron", "El"), ) class Element: def __init__(self, num, weight, name, symbol): self.num = num self.weight = weight self.name = name self.symbol = symbol def __repr__(self): return f"Element: {self.symbol} ({self.name})" class PeriodicTable: def __init__(self): self.elements_by_number = {} self.elements_by_name = {} self.elements_by_symbol = {} for num, weight, name, symbol in _elements: e = Element(num, weight, name, symbol) self.elements_by_number[num] = e self.elements_by_name[name] = e 
self.elements_by_symbol[symbol] = e def __getitem__(self, key): if isinstance(key, (np.number, numbers.Number)): d = self.elements_by_number elif isinstance(key, str): if len(key) <= 2: d = self.elements_by_symbol elif len(key) == 3 and key[0] == "U": d = self.elements_by_symbol else: d = self.elements_by_name else: raise KeyError(key) return d[key] periodic_table = PeriodicTable() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/physical_constants.py0000644000175100001770000001013414714401662020515 0ustar00runnerdockerfrom math import pi from yt.units.yt_array import YTQuantity from yt.utilities.physical_ratios import ( amu_grams, boltzmann_constant_erg_per_K, mass_earth_grams, mass_electron_grams, mass_hydrogen_grams, mass_jupiter_grams, mass_mars_grams, mass_mercury_grams, mass_neptune_grams, mass_saturn_grams, mass_sun_grams, mass_uranus_grams, mass_venus_grams, newton_cgs, planck_cgs, planck_charge_esu, planck_energy_erg, planck_length_cm, planck_mass_grams, planck_temperature_K, planck_time_s, speed_of_light_cm_per_s, standard_gravity_cm_per_s2, ) mass_electron_cgs = YTQuantity(mass_electron_grams, "g") mass_electron = mass_electron_cgs me = mass_electron_cgs amu_cgs = YTQuantity(amu_grams, "g") amu = amu_cgs Na = 1 / amu_cgs mass_hydrogen_cgs = YTQuantity(mass_hydrogen_grams, "g") mass_hydrogen = mass_hydrogen_cgs mp = mass_hydrogen_cgs mh = mp # Velocities speed_of_light_cgs = YTQuantity(speed_of_light_cm_per_s, "cm/s") speed_of_light = speed_of_light_cgs clight = speed_of_light_cgs c = speed_of_light_cgs # Cross Sections # 8*pi/3 (alpha*hbar*c/(2*pi))**2 cross_section_thompson_cgs = YTQuantity(6.65245854533e-25, "cm**2") cross_section_thompson = cross_section_thompson_cgs thompson_cross_section = cross_section_thompson_cgs sigma_thompson = cross_section_thompson_cgs # Charge charge_proton_cgs = YTQuantity(4.8032056e-10, "esu") charge_proton = charge_proton_cgs proton_charge = charge_proton_cgs elementary_charge = charge_proton_cgs qp = charge_proton_cgs # Physical Constants boltzmann_constant_cgs = YTQuantity(boltzmann_constant_erg_per_K, "erg/K") boltzmann_constant = boltzmann_constant_cgs kboltz = boltzmann_constant_cgs kb = kboltz gravitational_constant_cgs = YTQuantity(newton_cgs, "cm**3/g/s**2") gravitational_constant = gravitational_constant_cgs G = gravitational_constant_cgs planck_constant_cgs = YTQuantity(planck_cgs, "erg*s") planck_constant = planck_constant_cgs hcgs = planck_constant_cgs hbar = 0.5 * hcgs / pi stefan_boltzmann_constant_cgs = YTQuantity(5.670373e-5, "erg/cm**2/s**1/K**4") stefan_boltzmann_constant = stefan_boltzmann_constant_cgs Tcmb = YTQuantity(2.726, "K") # Current CMB temperature CMB_temperature = Tcmb # Solar System mass_sun_cgs = YTQuantity(mass_sun_grams, "g") mass_sun = mass_sun_cgs solar_mass = mass_sun_cgs msun = mass_sun_cgs # Standish, E.M. (1995) "Report of the IAU WGAS Sub-Group on Numerical Standards", # in Highlights of Astronomy (I. Appenzeller, ed.), Table 1, # Kluwer Academic Publishers, Dordrecht. 
# REMARK: following masses include whole systems (planet + moons) mass_jupiter_cgs = YTQuantity(mass_jupiter_grams, "g") mass_jupiter = mass_jupiter_cgs jupiter_mass = mass_jupiter_cgs mass_mercury_cgs = YTQuantity(mass_mercury_grams, "g") mass_mercury = mass_mercury_cgs mercury_mass = mass_mercury_cgs mass_venus_cgs = YTQuantity(mass_venus_grams, "g") mass_venus = mass_venus_cgs venus_mass = mass_venus_cgs mass_earth_cgs = YTQuantity(mass_earth_grams, "g") mass_earth = mass_earth_cgs earth_mass = mass_earth_cgs mearth = mass_earth_cgs mass_mars_cgs = YTQuantity(mass_mars_grams, "g") mass_mars = mass_mars_cgs mars_mass = mass_mars_cgs mass_saturn_cgs = YTQuantity(mass_saturn_grams, "g") mass_saturn = mass_saturn_cgs saturn_mass = mass_saturn_cgs mass_uranus_cgs = YTQuantity(mass_uranus_grams, "g") mass_uranus = mass_uranus_cgs uranus_mass = mass_uranus_cgs mass_neptune_cgs = YTQuantity(mass_neptune_grams, "g") mass_neptune = mass_neptune_cgs neptune_mass = mass_neptune_cgs # Planck units m_pl = planck_mass = YTQuantity(planck_mass_grams, "g") l_pl = planck_length = YTQuantity(planck_length_cm, "cm") t_pl = planck_time = YTQuantity(planck_time_s, "s") E_pl = planck_energy = YTQuantity(planck_energy_erg, "erg") q_pl = planck_charge = YTQuantity(planck_charge_esu, "esu") T_pl = planck_temperature = YTQuantity(planck_temperature_K, "K") # MKS E&M units mu_0 = YTQuantity(4.0e-7 * pi, "N/A**2") eps_0 = (1.0 / (clight**2 * mu_0)).in_units("C**2/N/m**2") # Misc standard_gravity_cgs = YTQuantity(standard_gravity_cm_per_s2, "cm/s**2") standard_gravity = standard_gravity_cgs ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/physical_ratios.py0000644000175100001770000001111314714401662020000 0ustar00runnerdockerimport numpy as np # # Physical Constants and Units Conversion Factors # # Values for these constants, unless otherwise noted, are drawn from IAU, # IUPAC, NIST, and NASA data, whichever is newer.
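# Naming convention used throughout this file: a factor A_per_B converts a
# value expressed in B to A by multiplication, and B_per_A is defined below as
# its reciprocal. Worked example: 1 km = 1e5 cm, so cm_per_km == 1e5 and
# km_per_cm == 1e-5.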
# http://maia.usno.navy.mil/NSFA/IAU2009_consts.html # http://goldbook.iupac.org/list_goldbook_phys_constants_defs.html # http://physics.nist.gov/cuu/Constants/index.html # http://nssdc.gsfc.nasa.gov/planetary/factsheet/jupiterfact.html # Elementary masses mass_electron_grams = 9.10938291e-28 amu_grams = 1.660538921e-24 mass_hydrogen_grams = 1.007947 * amu_grams # Solar values (see Mamajek 2012) # https://sites.google.com/site/mamajeksstarnotes/bc-scale mass_sun_grams = 1.98841586e33 temp_sun_kelvin = 5870.0 luminosity_sun_ergs_per_sec = 3.8270e33 # Consistent with solar abundances used in Cloudy metallicity_sun = 0.01295 # Conversion Factors: X au * mpc_per_au = Y mpc # length mpc_per_mpc = 1e0 mpc_per_kpc = 1e-3 mpc_per_pc = 1e-6 mpc_per_au = 4.84813682e-12 mpc_per_rsun = 2.253962e-14 mpc_per_rearth = 2.06470307893e-16 mpc_per_rjup = 2.26566120943e-15 mpc_per_miles = 5.21552871e-20 mpc_per_km = 3.24077929e-20 mpc_per_cm = 3.24077929e-25 kpc_per_cm = mpc_per_cm / mpc_per_kpc pc_per_cm = mpc_per_cm / mpc_per_pc km_per_pc = 3.08567758e13 km_per_m = 1e-3 km_per_cm = 1e-5 m_per_cm = 1e-2 ly_per_cm = 1.05702341e-18 rsun_per_cm = 1.4378145e-11 rearth_per_cm = 1.56961033e-9 # Mean (volumetric) radius rjup_per_cm = 1.43039006737e-10 # Mean (volumetric) radius au_per_cm = 6.68458712e-14 ang_per_cm = 1.0e8 m_per_fpc = 0.0324077929 kpc_per_mpc = 1.0 / mpc_per_kpc pc_per_mpc = 1.0 / mpc_per_pc au_per_mpc = 1.0 / mpc_per_au rsun_per_mpc = 1.0 / mpc_per_rsun rearth_per_mpc = 1.0 / mpc_per_rearth rjup_per_mpc = 1.0 / mpc_per_rjup miles_per_mpc = 1.0 / mpc_per_miles km_per_mpc = 1.0 / mpc_per_km cm_per_mpc = 1.0 / mpc_per_cm cm_per_kpc = 1.0 / kpc_per_cm cm_per_km = 1.0 / km_per_cm cm_per_m = 1.0 / m_per_cm pc_per_km = 1.0 / km_per_pc cm_per_pc = 1.0 / pc_per_cm cm_per_ly = 1.0 / ly_per_cm cm_per_rsun = 1.0 / rsun_per_cm cm_per_rearth = 1.0 / rearth_per_cm cm_per_rjup = 1.0 / rjup_per_cm cm_per_au = 1.0 / au_per_cm cm_per_ang = 1.0 / ang_per_cm # time # "IAU Style Manual" by G.A. Wilkins, Comm. 5, in IAU Transactions XXB (1989) sec_per_Gyr = 31.5576e15 sec_per_Myr = 31.5576e12 sec_per_kyr = 31.5576e9 sec_per_year = 31.5576e6 sec_per_day = 86400.0 sec_per_hr = 3600.0 sec_per_min = 60.0 day_per_year = 365.25 # velocities, accelerations speed_of_light_cm_per_s = 2.99792458e10 standard_gravity_cm_per_s2 = 9.80665e2 # some constants newton_cgs = 6.67384e-8 planck_cgs = 6.62606957e-27 # temperature / energy boltzmann_constant_erg_per_K = 1.3806488e-16 erg_per_eV = 1.602176562e-12 erg_per_keV = erg_per_eV * 1.0e3 K_per_keV = erg_per_keV / boltzmann_constant_erg_per_K keV_per_K = 1.0 / K_per_keV keV_per_erg = 1.0 / erg_per_keV eV_per_erg = 1.0 / erg_per_eV kelvin_per_rankine = 5.0 / 9.0 # Solar System masses # Standish, E.M. (1995) "Report of the IAU WGAS Sub-Group on Numerical Standards", # in Highlights of Astronomy (I. Appenzeller, ed.), Table 1, # Kluwer Academic Publishers, Dordrecht. 
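# The planetary masses below are the solar mass divided by the dimensionless
# IAU sun-to-system mass ratios. As a worked check using values from this
# file: mass_sun_grams / 328900.56 = 1.98841586e33 / 3.2890056e5
# ~= 6.0456e27 g, the combined Earth + Moon mass.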
# REMARK: following masses include whole systems (planet + moons) mass_jupiter_grams = mass_sun_grams / 1047.3486 mass_mercury_grams = mass_sun_grams / 6023600.0 mass_venus_grams = mass_sun_grams / 408523.71 mass_earth_grams = mass_sun_grams / 328900.56 mass_mars_grams = mass_sun_grams / 3098708.0 mass_saturn_grams = mass_sun_grams / 3497.898 mass_uranus_grams = mass_sun_grams / 22902.98 mass_neptune_grams = mass_sun_grams / 19412.24 # flux jansky_cgs = 1.0e-23 # Cosmological constants # Calculated with H = 100 km/s/Mpc, value given in units of h^2 g cm^-3 # Multiply by h^2 to get the critical density in units of g cm^-3 rho_crit_g_cm3_h2 = 1.8784710838431654e-29 primordial_H_mass_fraction = 0.76 _primordial_mass_fraction = { "H": primordial_H_mass_fraction, "He": (1 - primordial_H_mass_fraction), } # Misc. Approximations mass_mean_atomic_cosmology = 1.22 mass_mean_atomic_galactic = 2.3 # Miscellaneous HUGE = 1.0e90 TINY = 1.0e-40 # Planck units hbar_cgs = 0.5 * planck_cgs / np.pi planck_mass_grams = np.sqrt(hbar_cgs * speed_of_light_cm_per_s / newton_cgs) planck_length_cm = np.sqrt(hbar_cgs * newton_cgs / speed_of_light_cm_per_s**3) planck_time_s = planck_length_cm / speed_of_light_cm_per_s planck_energy_erg = ( planck_mass_grams * speed_of_light_cm_per_s * speed_of_light_cm_per_s ) planck_temperature_K = planck_energy_erg / boltzmann_constant_erg_per_K planck_charge_esu = np.sqrt(hbar_cgs * speed_of_light_cm_per_s) # Imperial and other non-metric units grams_per_pound = 453.59237 pascal_per_atm = 101325.0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/png_writer.py0000644000175100001770000000131714714401662016770 0ustar00runnerdockerfrom io import BytesIO import PIL from PIL import Image from PIL.PngImagePlugin import PngInfo from .._version import __version__ as yt_version def call_png_write_png(buffer, fileobj, dpi): metadata = PngInfo() metadata.add_text("Software", f"PIL-{PIL.__version__}|yt-{yt_version}") Image.fromarray(buffer).save( fileobj, dpi=(dpi, dpi), format="png", pnginfo=metadata ) def write_png(buffer, filename, dpi=100): with open(filename, "wb") as fileobj: call_png_write_png(buffer, fileobj, dpi) def write_png_to_string(buffer, dpi=100): fileobj = BytesIO() call_png_write_png(buffer, fileobj, dpi) png_str = fileobj.getvalue() fileobj.close() return png_str ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/rpdb.py0000644000175100001770000000637514714401662015550 0ustar00runnerdockerimport cmd import signal import sys import traceback from io import StringIO from xmlrpc.client import ServerProxy from xmlrpc.server import SimpleXMLRPCServer from yt.config import ytcfg class PdbXMLRPCServer(SimpleXMLRPCServer): """ shutdown-enabled XMLRPCServer from http://code.activestate.com/recipes/114579/ """ finished = False def register_signal(self, signum): signal.signal(signum, self.signal_handler) def signal_handler(self, signum, frame): print("Caught signal", signum) self.shutdown() def shutdown(self): self.finished = True return 1 def serve_forever(self): while not self.finished: self.handle_request() print("DONE SERVING") def rpdb_excepthook(exc_type, exc, tb): traceback.print_exception(exc_type, exc, tb) task = ytcfg.get("yt", "internals", "global_parallel_rank") size = ytcfg.get("yt", "internals", "global_parallel_size") print(f"Starting RPDB server on task {task} ; connect with 'yt rpdb -t {task}'") handler = pdb_handler(tb) server = 
PdbXMLRPCServer(("localhost", 8010 + task)) server.register_introspection_functions() server.register_instance(handler) server.register_function(server.shutdown) server.serve_forever() server.server_close() if size > 1: from mpi4py import MPI # This COMM_WORLD is okay. We want to barrierize here, while waiting # for shutdown from the rest of the parallel group. If you are running # with --rpdb it is assumed you know what you are doing and you won't # let this get out of hand. MPI.COMM_WORLD.Barrier() class pdb_handler: def __init__(self, tb): import pdb self.cin = StringIO() sys.stdin = self.cin self.cout = StringIO() sys.stdout = self.cout sys.stderr = self.cout self.debugger = pdb.Pdb(stdin=self.cin, stdout=self.cout) self.debugger.reset() self.debugger.setup(tb.tb_frame, tb) def execute(self, line): tt = self.cout.tell() self.debugger.onecmd(line) self.cout.seek(tt) return self.cout.read() class rpdb_cmd(cmd.Cmd): def __init__(self, proxy): self.proxy = proxy cmd.Cmd.__init__(self) print(self.proxy.execute("bt")) def default(self, line): print(self.proxy.execute(line)) def do_shutdown(self, args): print(self.proxy.shutdown()) return True def do_help(self, line): print(self.proxy.execute(f"help {line}")) def postcmd(self, stop, line): return stop def postloop(self): try: self.proxy.shutdown() except Exception: pass __header = """ You're in a remote PDB session with task %(task)s You can run PDB commands, and when you're done, type 'shutdown' to quit. """ def run_rpdb(task=None): port = 8010 if task is None: try: task = int(sys.argv[-1]) except Exception: pass port += task sp = ServerProxy(f"http://localhost:{port}/") try: pp = rpdb_cmd(sp) except OSError: print("Connection refused. Is the server running?") sys.exit(1) pp.cmdloop(__header % {"task": port - 8010}) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/sdf.py0000644000175100001770000013340114714401662015364 0ustar00runnerdockerimport os from collections import UserDict from io import StringIO import numpy as np from yt.funcs import mylog def get_thingking_deps(): try: from thingking.arbitrary_page import PageCacheURL from thingking.httpmmap import HTTPArray except ImportError: raise ImportError( "This functionality requires the thingking package to be installed" ) from None return HTTPArray, PageCacheURL _types = { "int16_t": "int16", "uint16_t": "uint16", "int": "int32", "int32_t": "int32", "uint32_t": "uint32", "int64_t": "int64", "uint64_t": "uint64", "float": "float32", "double": "float64", "unsigned int": "I", "unsigned char": "B", "char": "B", } _rev_types = {} for v, t in _types.items(): _rev_types[t] = v _rev_types["= left, axis=1) np.logical_and(mask, np.all(pos < right, axis=1), mask) else: np.logical_and(mask, np.all(pos >= left, axis=1), mask) np.logical_and(mask, np.all(pos < right, axis=1), mask) return mask return myfilter
Probably should check # first if it is even needed for a given left/right for i in range(3): pos[:, i] = np.mod(pos[:, i] - left[i], domain_width[i]) + left[i] # Now get all particles that are within the radius if mask is None: mask = ((pos - center) ** 2).sum(axis=1) ** 0.5 < radius else: np.multiply(mask, np.linalg.norm(pos - center, 2) < radius, mask) return mask return myfilter def _ensure_xyz_fields(fields): for f in "xyz": if f not in fields: fields.append(f) def spread_bitsv(ival, level): res = np.zeros_like(ival, dtype="int64") for i in range(level): ares = np.bitwise_and(ival, 1 << i) << (i * 2) np.bitwise_or(res, ares, res) return res def get_keyv(iarr, level): i1, i2, i3 = (v.astype("int64") for v in iarr) i1 = spread_bitsv(i1, level) i2 = spread_bitsv(i2, level) << 1 i3 = spread_bitsv(i3, level) << 2 np.bitwise_or(i1, i2, i1) np.bitwise_or(i1, i3, i1) return i1 class DataStruct: """docstring for DataStruct""" _offset = 0 def __init__(self, dtypes, num, filename): self.filename = filename self.dtype = np.dtype(dtypes) self.size = num self.itemsize = self.dtype.itemsize self.data = {} self.handle = None def set_offset(self, offset): self._offset = offset if self.size == -1: file_size = os.path.getsize(self.filename) file_size -= offset self.size = float(file_size) / self.itemsize assert int(self.size) == self.size def build_memmap(self): assert self.size != -1 self.handle = np.memmap( self.filename, dtype=self.dtype, mode="r", shape=self.size, offset=self._offset, ) for k in self.dtype.names: self.data[k] = self.handle[k] def __del__(self): if self.handle is not None: try: self.handle.close() except AttributeError: pass del self.handle self.handle = None def __getitem__(self, key): mask = None if isinstance(key, (int, np.integer)): if key == -1: key = slice(-1, None) else: key = slice(key, key + 1) elif isinstance(key, np.ndarray): mask = key key = slice(None, None) if not isinstance(key, slice): raise NotImplementedError if key.start is None: key = slice(0, key.stop) if key.stop is None: key = slice(key.start, self.size) if key.start < 0: key = slice(self.size + key.start, key.stop) if key.stop < 0: key = slice(key.start, self.size + key.stop) arr = self.handle[key.start : key.stop] if mask is None: return arr else: return arr[mask] class RedirectArray: """docstring for RedirectArray""" def __init__(self, http_array, key): self.http_array = http_array self.key = key self.size = http_array.shape self.dtype = http_array.dtype[key] def __getitem__(self, sl): if isinstance(sl, int): return self.http_array[sl][self.key][0] return self.http_array[sl][self.key] class HTTPDataStruct(DataStruct): """docstring for HTTPDataStruct""" def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) HTTPArray, PageCacheURL = get_thingking_deps() self.HTTPArray = HTTPArray self.pcu = PageCacheURL(self.filename) def set_offset(self, offset): self._offset = offset if self.size == -1: # Read small piece: file_size = self.pcu.total_size file_size -= offset self.size = float(file_size) / self.itemsize assert int(self.size) == self.size def build_memmap(self): assert self.size != -1 mylog.info( "Building memmap with offset: %i and size %i", self._offset, self.size ) self.handle = self.HTTPArray( self.filename, dtype=self.dtype, shape=self.size, offset=self._offset ) for k in self.dtype.names: self.data[k] = RedirectArray(self.handle, k) class SDFRead(UserDict): _eof = "SDF-EOH" _data_struct = DataStruct def __init__(self, filename=None, header=None): r"""Read an SDF file, loading parameters
and variables. Given an SDF file (see https://bitbucket.org/JohnSalmon/sdf), parse the ASCII header and construct numpy memmap array access. Parameters ---------- filename: string The filename associated with the data to be loaded. header: string, optional If separate from the data file, a file containing the header can be specified. Default: None. Returns ------- self : SDFRead object Dict-like container of parameters and data. References ---------- SDF is described here: J. K. Salmon and M. S. Warren. Self-Describing File (SDF) Library. Zenodo, Jun 2014. URL https://bitbucket.org/JohnSalmon/sdf. Examples -------- >>> sdf = SDFRead("data.sdf", header="data.hdr") >>> print(sdf.parameters) >>> print(sdf["x"]) """ super().__init__() self.filename = filename if header is None: header = filename self.header = header self.parameters = {} self.structs = [] self.comments = [] if filename is not None: self.parse_header() self.set_offsets() self.load_memmaps() def write(self, filename): f = open(filename, "w") f.write("# SDF 1.0\n") f.write(f"parameter byteorder = {self.parameters['byteorder']};\n") for c in self.comments: if "\x0c" in c: continue if "SDF 1.0" in c: continue f.write(f"{c}") for k, v in sorted(self.parameters.items()): if k == "byteorder": continue try: t = _rev_types[v.dtype.name] except Exception: t = type(v).__name__ if t == str.__name__: f.write(f'parameter {k} = "{v}";\n') else: f.write(f"{t} {k} = {v};\n") struct_order = [] for s in self.structs: f.write("struct {\n") to_write = [] for var in s.dtype.descr: k, v = var[0], _rev_types[var[1]] to_write.append(k) f.write(f"\t{v} {k};\n") f.write("}[%i];\n" % s.size) struct_order.append(to_write) f.write("#\x0c\n") f.write("# SDF-EOH\n") return struct_order, f def __repr__(self): disp = f" file: {self.filename}\n" disp += "parameters: \n" for k, v in self.parameters.items(): disp += f"\t{k}: {v}\n" disp += "arrays: \n" for k, v in self.items(): disp += f"\t{k}[{v.size}]\n" return disp def parse_header(self): """docstring for parse_header""" # Pre-process ascfile = open(self.header) while True: l = ascfile.readline() if self._eof in l: break self.parse_line(l, ascfile) hoff = ascfile.tell() ascfile.close() if self.header != self.filename: hoff = 0 self.parameters["header_offset"] = hoff def parse_line(self, line, ascfile): """Parse a line of sdf""" if "struct" in line: self.parse_struct(line, ascfile) return if "#" in line: self.comments.append(line) return spl = _lstrip(line.split("=")) vtype, vname = _lstrip(spl[0].split()) vname = vname.strip("[]") vval = spl[-1].strip(";") if vtype == "parameter": self.parameters[vname] = vval return elif vtype == "char": vtype = "str" try: vval = eval("np." + vtype + f"({vval})") except AttributeError: if vtype not in _types: mylog.warning("Skipping parameter %s", vname) return vval = eval("np." + _types[vtype] + f"({vval})") self.parameters[vname] = vval def parse_struct(self, line, ascfile): assert "struct" in line str_types = [] l = ascfile.readline() while "}" not in l: vtype, vnames = _get_struct_vars(l) for v in vnames: str_types.append((v, vtype)) l = ascfile.readline() spec_chars = r"{}[]\;\n\\" num = l.strip(spec_chars) if len(num) == 0: # We need to compute the number of records. The DataStruct will # handle this. 
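# A record count of -1 is a sentinel: DataStruct.set_offset() (defined above)
# infers the count from the bytes remaining past the header offset divided by
# the struct's itemsize.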
num = "-1" num = int(num) struct = self._data_struct(str_types, num, self.filename) self.structs.append(struct) return def set_offsets(self): running_off = self.parameters["header_offset"] for struct in self.structs: struct.set_offset(running_off) running_off += struct.size * struct.itemsize return def load_memmaps(self): for struct in self.structs: struct.build_memmap() self.update(struct.data) class HTTPSDFRead(SDFRead): r"""Read an SDF file hosted on the internet. Given an SDF file (see https://bitbucket.org/JohnSalmon/sdf), parse the ASCII header and construct numpy memmap array access. Parameters ---------- filename : string The filename associated with the data to be loaded. header : string, optional If separate from the data file, a file containing the header can be specified. Default: None. Returns ------- self : SDFRead object Dict-like container of parameters and data. References ---------- SDF is described here: J. K. Salmon and M. S. Warren. Self-Describing File (SDF) Library. Zenodo, Jun 2014. URL https://bitbucket.org/JohnSalmon/sdf. Examples -------- >>> sdf = SDFRead("data.sdf", header="data.hdr") >>> print(sdf.parameters) >>> print(sdf["x"]) """ _data_struct = HTTPDataStruct def __init__(self, *args, **kwargs): HTTPArray, _ = get_thingking_deps() self.HTTPArray = HTTPArray super().__init__(*args, **kwargs) def parse_header(self): """docstring for parse_header""" # Pre-process ascfile = self.HTTPArray(self.header) max_header_size = 1024 * 1024 lines = StringIO(ascfile[:max_header_size].data[:]) while True: l = lines.readline() if self._eof in l: break self.parse_line(l, lines) hoff = lines.tell() if self.header != self.filename: hoff = 0 self.parameters["header_offset"] = hoff def load_sdf(filename, header=None): r"""Load an SDF file. Given an SDF file (see https://bitbucket.org/JohnSalmon/sdf), parse the ASCII header and construct numpy memmap array access. The file can be either local (on a hard drive, for example), or remote (on the World Wide Web). Parameters ---------- filename: string The filename or WWW address associated with the data to be loaded. header: string, optional If separate from the data file, a file containing the header can be specified. Default: None. Returns ------- sdf : SDFRead object Dict-like container of parameters and data. References ---------- SDF is described here: J. K. Salmon and M. S. Warren. Self-Describing File (SDF) Library. Zenodo, Jun 2014. URL https://bitbucket.org/JohnSalmon/sdf. Examples -------- >>> sdf = SDFRead("data.sdf", header="data.hdr") >>> print(sdf.parameters) >>> print(sdf["x"]) """ if "http" in filename: sdf = HTTPSDFRead(filename, header=header) else: sdf = SDFRead(filename, header=header) return sdf def _shift_periodic(pos, left, right, domain_width): """ Periodically shift positions that are right of left+domain_width to the left, and those left of right-domain_width to the right. """ for i in range(3): mask = pos[:, i] >= left[i] + domain_width[i] pos[mask, i] -= domain_width[i] mask = pos[:, i] < right[i] - domain_width[i] pos[mask, i] += domain_width[i] return class SDFIndex: """docstring for SDFIndex This provides an index mechanism into the full SDF Dataset. 
Most useful class methods: get_cell_data(level, cell_iarr, fields) iter_bbox_data(left, right, fields) iter_sphere_data(center, radius, fields) """ def __init__(self, sdfdata, indexdata, level=None): super().__init__() self.sdfdata = sdfdata self.indexdata = indexdata if level is None: level = self.indexdata.parameters.get("level", None) self.level = level self.rmin = None self.rmax = None self.domain_width = None self.domain_buffer = 0 self.domain_dims = 0 self.domain_active_dims = 0 self.wandering_particles = False self.valid_indexdata = True self.masks = { "p": int("011" * level, 2), "t": int("101" * level, 2), "r": int("110" * level, 2), "z": int("011" * level, 2), "y": int("101" * level, 2), "x": int("110" * level, 2), 2: int("011" * level, 2), 1: int("101" * level, 2), 0: int("110" * level, 2), } self.dim_slices = { "p": slice(0, None, 3), "t": slice(1, None, 3), "r": slice(2, None, 3), "z": slice(0, None, 3), "y": slice(1, None, 3), "x": slice(2, None, 3), 2: slice(0, None, 3), 1: slice(1, None, 3), 0: slice(2, None, 3), } self.set_bounds() self._midx_version = self.indexdata.parameters.get("midx_version", 0) if self._midx_version >= 1.0: max_key = self.get_key(np.array([2**self.level - 1] * 3, dtype="int64")) else: max_key = self.indexdata["index"][-1] self._max_key = max_key def _fix_rexact(self, rmin, rmax): center = 0.5 * (rmax + rmin) mysize = rmax - rmin mysize *= 1.0 + 4.0 * np.finfo(np.float32).eps self.rmin = center - 0.5 * mysize self.rmax = center + 0.5 * mysize def set_bounds(self): if ( "x_min" in self.sdfdata.parameters and "x_max" in self.sdfdata.parameters ) or ( "theta_min" in self.sdfdata.parameters and "theta_max" in self.sdfdata.parameters ): if "x_min" in self.sdfdata.parameters: rmin = np.array( [ self.sdfdata.parameters["x_min"], self.sdfdata.parameters["y_min"], self.sdfdata.parameters["z_min"], ] ) rmax = np.array( [ self.sdfdata.parameters["x_max"], self.sdfdata.parameters["y_max"], self.sdfdata.parameters["z_max"], ] ) elif "theta_min" in self.sdfdata.parameters: rmin = np.array( [ self.sdfdata.parameters["r_min"], self.sdfdata.parameters["theta_min"], self.sdfdata.parameters["phi_min"], ] ) rmax = np.array( [ self.sdfdata.parameters["r_max"], self.sdfdata.parameters["theta_max"], self.sdfdata.parameters["phi_max"], ] ) self._fix_rexact(rmin, rmax) self.true_domain_left = self.rmin.copy() self.true_domain_right = self.rmax.copy() self.true_domain_width = self.rmax - self.rmin self.domain_width = self.rmax - self.rmin self.domain_dims = 1 << self.level self.domain_buffer = 0 self.domain_active_dims = self.domain_dims else: mylog.debug("Setting up older data") rx = self.sdfdata.parameters.get("Rx") ry = self.sdfdata.parameters.get("Ry") rz = self.sdfdata.parameters.get("Rz") a = self.sdfdata.parameters.get("a", 1.0) rmin = -a * np.array([rx, ry, rz]) rmax = a * np.array([rx, ry, rz]) self.true_domain_left = rmin.copy() self.true_domain_right = rmax.copy() self.true_domain_width = rmax - rmin expand_root = 0.0 morton_xyz = self.sdfdata.parameters.get("morton_xyz", False) if not morton_xyz: mylog.debug("Accounting for wandering particles") self.wandering_particles = True ic_Nmesh = self.sdfdata.parameters.get("ic_Nmesh", 0) # Expand root for non power-of-2 if ic_Nmesh != 0: f2 = 1 << int(np.log2(ic_Nmesh - 1) + 1) if f2 != ic_Nmesh: expand_root = 1.0 * f2 / ic_Nmesh - 1.0 mylog.debug("Expanding: %s, %s, %s", f2, ic_Nmesh, expand_root) rmin *= 1.0 + expand_root rmax *= 1.0 + expand_root self._fix_rexact(rmin, rmax) self.domain_width = self.rmax - self.rmin
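# At index level L the domain is tiled by 2**L cells per dimension; the
# buffer computed below widens the nominal box so that "wandering" particles
# slightly outside it still map onto valid Morton cells.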
self.domain_dims = 1 << self.level self.domain_buffer = ( self.domain_dims - int(self.domain_dims / (1.0 + expand_root)) ) / 2 self.domain_active_dims = self.domain_dims - 2 * self.domain_buffer mylog.debug("MIDX rmin: %s, rmax: %s", self.rmin, self.rmax) mylog.debug( "MIDX: domain_width: %s, domain_dims: %s, domain_active_dims: %s ", self.domain_width, self.domain_dims, self.domain_active_dims, ) def spread_bits(self, ival, level=None): if level is None: level = self.level res = 0 for i in range(level): res |= ((ival >> i) & 1) << (i * 3) return res def get_key(self, iarr, level=None): if level is None: level = self.level i1, i2, i3 = (v.astype("int64") for v in iarr) return ( self.spread_bits(i1, level) | self.spread_bits(i2, level) << 1 | self.spread_bits(i3, level) << 2 ) def spread_bitsv(self, ival, level=None): if level is None: level = self.level return spread_bitsv(ival, level) def get_keyv(self, iarr, level=None): if level is None: level = self.level return get_keyv(iarr, level) def get_key_slow(self, iarr, level=None): if level is None: level = self.level i1, i2, i3 = iarr rep1 = np.binary_repr(i1, width=self.level) rep2 = np.binary_repr(i2, width=self.level) rep3 = np.binary_repr(i3, width=self.level) inter = np.zeros(self.level * 3, dtype="c") inter[self.dim_slices[0]] = rep1 inter[self.dim_slices[1]] = rep2 inter[self.dim_slices[2]] = rep3 return int(inter.tobytes(), 2) def get_key_ijk(self, i1, i2, i3, level=None): return self.get_key(np.array([i1, i2, i3]), level=level) def get_slice_key(self, ind, dim="r"): slb = np.binary_repr(ind, width=self.level) expanded = np.array([0] * self.level * 3, dtype="c") expanded[self.dim_slices[dim]] = slb return int(expanded.tobytes(), 2) def get_ind_from_key(self, key, dim="r"): ind = [0, 0, 0] br = np.binary_repr(key, width=self.level * 3) for dim in range(3): ind[dim] = int(br[self.dim_slices[dim]], 2) return ind def get_slice_chunks(self, slice_dim, slice_index): sl_key = self.get_slice_key(slice_index, dim=slice_dim) mask = (self.indexdata["index"] & ~self.masks[slice_dim]) == sl_key offsets = self.indexdata["base"][mask] lengths = self.indexdata["len"][mask] return mask, offsets, lengths def get_ibbox_slow(self, ileft, iright): """ Given left and right indices, return a mask and set of offsets+lengths into the sdf data. """ mask = np.zeros(self.indexdata["index"].shape, dtype="bool") ileft = np.array(ileft, dtype="int64") iright = np.array(iright, dtype="int64") for i in range(3): left_key = self.get_slice_key(ileft[i], dim=i) right_key = self.get_slice_key(iright[i], dim=i) dim_inds = self.indexdata["index"] & ~self.masks[i] mask *= (dim_inds >= left_key) * (dim_inds <= right_key) del dim_inds offsets = self.indexdata["base"][mask] lengths = self.indexdata["len"][mask] return mask, offsets, lengths def get_ibbox(self, ileft, iright): """ Given left and right indices, return a mask and set of offsets+lengths into the sdf data. """ # print('Getting data from ileft to iright:', ileft, iright) ix, iy, iz = (iright - ileft) * 1j mylog.debug("MIDX IBBOX: %s %s %s %s %s", ileft, iright, ix, iy, iz) # plus 1 that is sliced, plus a bit since mgrid is not inclusive Z, Y, X = np.mgrid[ ileft[2] : iright[2] + 1.01, ileft[1] : iright[1] + 1.01, ileft[0] : iright[0] + 1.01, ] mask = slice(0, -1, None) X = X[mask, mask, mask].astype("int64").ravel() Y = Y[mask, mask, mask].astype("int64").ravel() Z = Z[mask, mask, mask].astype("int64").ravel() if self.wandering_particles: # Need to get padded bbox around the border to catch # wandering particles. 
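# dmask flags requested cells that fall in the buffer region outside the
# active domain; their keys are collected here, before the periodic wrap
# below, so boundary-layer particles are not dropped.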
dmask = X < self.domain_buffer dmask += Y < self.domain_buffer dmask += Z < self.domain_buffer dmask += X >= self.domain_dims dmask += Y >= self.domain_dims dmask += Z >= self.domain_dims dinds = self.get_keyv([X[dmask], Y[dmask], Z[dmask]]) dinds = dinds[dinds < self._max_key] dinds = dinds[self.indexdata["len"][dinds] > 0] # print('Getting boundary layers for wanderers, cells: %i' % dinds.size) # Correct for periodicity X[X < self.domain_buffer] += self.domain_active_dims Y[Y < self.domain_buffer] += self.domain_active_dims Z[Z < self.domain_buffer] += self.domain_active_dims X[X >= self.domain_buffer + self.domain_active_dims] -= self.domain_active_dims Y[Y >= self.domain_buffer + self.domain_active_dims] -= self.domain_active_dims Z[Z >= self.domain_buffer + self.domain_active_dims] -= self.domain_active_dims # print('periodic:', X.min(), X.max(), Y.min(), Y.max(), Z.min(), Z.max()) indices = self.get_keyv([X, Y, Z]) # Only mask out if we are actually getting data rather than getting indices into # a space. if self.valid_indexdata: indices = indices[indices < self._max_key] # indices = indices[self.indexdata['len'][indices] > 0] # Faster for sparse lookups. Need better heuristic. new_indices = [] for ind in indices: if self.indexdata["len"][ind] > 0: new_indices.append(ind) indices = np.array(new_indices, dtype="int64") # indices = np.array([self.get_key_ijk(x, y, z) for x, y, z in zip(X, Y, Z)]) # Here we sort the indices to batch consecutive reads together. if self.wandering_particles: indices = np.sort(np.append(indices, dinds)) else: indices = np.sort(indices) return indices def get_bbox(self, left, right): """ Given left and right edges, return the sorted Morton keys of the index cells overlapping the bounding box. """ ileft = np.floor((left - self.rmin) / self.domain_width * self.domain_dims) iright = np.floor((right - self.rmin) / self.domain_width * self.domain_dims) if np.any((iright - ileft) > self.domain_dims): mylog.warning( "Attempting to get data from bounding box larger than the domain. " "You may want to check your units." ) # iright[iright <= ileft+1] += 1 return self.get_ibbox(ileft, iright) def get_nparticles_bbox(self, left, right): """ Given left and right edges, return total number of particles present. """ ileft = np.floor((left - self.rmin) / self.domain_width * self.domain_dims) iright = np.floor((right - self.rmin) / self.domain_width * self.domain_dims) indices = self.get_ibbox(ileft, iright) npart = 0 for ind in indices: npart += self.indexdata["len"][ind] return npart def get_data(self, chunk, fields): data = {} for field in fields: data[field] = self.sdfdata[field][chunk] return data def get_next_nonzero_chunk(self, key, stop=None): # These next two while loops are to squeeze the keys if they are empty. # Would be better to go through and set base equal to the last non-zero base. if stop is None: stop = self._max_key while key < stop: if self.indexdata["len"][key] == 0: # print('Squeezing keys, incrementing') key += 1 else: break return key def get_previous_nonzero_chunk(self, key, stop=None): # These next two while loops are to squeeze the keys if they are empty. # Would be better to go through and set base equal to the last non-zero base.
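# Mirror of get_next_nonzero_chunk above: walk backward from `key` until a
# chunk with a nonzero particle count is found (or `stop` is reached).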
if stop is None: stop = self.indexdata["index"][0] while key > stop: if self.indexdata["len"][key] == 0: # print('Squeezing keys, decrementing') key -= 1 else: break return key def iter_data(self, inds, fields): num_inds = len(inds) num_reads = 0 mylog.debug("MIDX Reading %i chunks", num_inds) i = 0 while i < num_inds: ind = inds[i] base = self.indexdata["base"][ind] length = self.indexdata["len"][ind] # Concatenate aligned reads nexti = i + 1 combined = 0 while nexti < num_inds: nextind = inds[nexti] # print( # "b: %i l: %i end: %i next: %i" # % (base, length, base + length, self.indexdata["base"][nextind]) # ) if combined < 1024 and base + length == self.indexdata["base"][nextind]: length += self.indexdata["len"][nextind] i += 1 nexti += 1 combined += 1 else: break chunk = slice(base, base + length) mylog.debug( "Reading chunk %i of length %i after catting %i starting at %i", i, length, combined, ind, ) num_reads += 1 if length > 0: data = self.get_data(chunk, fields) yield data del data i += 1 mylog.debug("Read %i chunks, batched into %i reads", num_inds, num_reads) def filter_particles(self, myiter, myfilter): for data in myiter: mask = myfilter(data) if mask.sum() == 0: continue filtered = {} for f in data.keys(): filtered[f] = data[f][mask] yield filtered def filter_bbox(self, left, right, myiter): """ Filter data by masking out data outside of a bbox defined by left/right. Account for periodicity of data, allowing left/right to be outside of the domain. """ for data in myiter: # mask = np.zeros_like(data, dtype='bool') pos = np.array([data["x"].copy(), data["y"].copy(), data["z"].copy()]).T DW = self.true_domain_width # This hurts, but is useful for periodicity. Probably should check first # if it is even needed for a given left/right _shift_periodic(pos, left, right, DW) # Now get all particles that are within the bbox mask = np.all(pos >= left, axis=1) * np.all(pos < right, axis=1) # print('Mask shape, sum:', mask.shape, mask.sum()) mylog.debug( "Filtering particles, returning %i out of %i", mask.sum(), mask.shape[0] ) if not np.any(mask): continue filtered = {ax: pos[:, i][mask] for i, ax in enumerate("xyz")} for f in data.keys(): if f in "xyz": continue filtered[f] = data[f][mask] # for i, ax in enumerate('xyz'): # #print(left, right) # assert np.all(filtered[ax] >= left[i]) # assert np.all(filtered[ax] < right[i]) yield filtered def filter_sphere(self, center, radius, myiter): """ Filter data by masking out data outside of a sphere defined by a center and radius. Account for periodicity of data, allowing left/right to be outside of the domain. """ # Get left/right for periodicity considerations left = center - radius right = center + radius for data in myiter: pos = np.array([data["x"].copy(), data["y"].copy(), data["z"].copy()]).T DW = self.true_domain_width _shift_periodic(pos, left, right, DW) # Now get all particles that are within the sphere mask = ((pos - center) ** 2).sum(axis=1) ** 0.5 < radius mylog.debug( "Filtering particles, returning %i out of %i", mask.sum(), mask.shape[0] ) if not np.any(mask): continue filtered = {ax: pos[:, i][mask] for i, ax in enumerate("xyz")} for f in data.keys(): if f in "xyz": continue filtered[f] = data[f][mask] yield filtered def iter_filtered_bbox_fields(self, left, right, data, pos_fields, fields): """ This function should be destroyed, as it will only work with units. 
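Positions are converted to comoving Mpc/h internally, so ``left``, ``right``,
and the position fields must carry units (hence the caveat above).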
""" kpcuq = left.in_units("kpccm").uq mpcuq = left.in_units("Mpccm/h").uq DW = (self.true_domain_width * kpcuq).in_units("Mpc/h") if pos_fields is None: pos_fields = "x", "y", "z" xf, yf, zf = pos_fields mylog.debug("Using position fields: %s", pos_fields) # I'm sorry. pos = ( mpcuq * np.array( [ data[xf].in_units("Mpccm/h"), data[yf].in_units("Mpccm/h"), data[zf].in_units("Mpccm/h"), ] ).T ) # This hurts, but is useful for periodicity. Probably should check first # if it is even needed for a given left/right _shift_periodic(pos, left, right, DW) mylog.debug( "Periodic filtering, %s %s %s %s", left, right, pos.min(axis=0), pos.max(axis=0), ) # Now get all particles that are within the bbox mask = np.all(pos >= left, axis=1) * np.all(pos < right, axis=1) mylog.debug( "Filtering particles, returning %i out of %i", mask.sum(), mask.shape[0] ) if np.any(mask): for i, f in enumerate(pos_fields): yield f, pos[:, i][mask] for f in fields: if f in pos_fields: continue # print('yielding nonpos field', f) yield f, data[f][mask] def iter_bbox_data(self, left, right, fields): """ Iterate over all data within a bounding box defined by a left and a right. """ _ensure_xyz_fields(fields) mylog.debug("MIDX Loading region from %s to %s", left, right) inds = self.get_bbox(left, right) # Need to put left/right in float32 to avoid fp roundoff errors # in the bbox later. # left = left.astype('float32') # right = right.astype('float32') # my_filter = bbox_filter(left, right, self.true_domain_width) yield from self.filter_bbox(left, right, self.iter_data(inds, fields)) # for dd in self.filter_particles( # self.iter_data(inds, fields), # my_filter): # yield dd def iter_sphere_data(self, center, radius, fields): """ Iterate over all data within some sphere defined by a center and a radius. """ _ensure_xyz_fields(fields) mylog.debug("MIDX Loading spherical region %s to %s", center, radius) inds = self.get_bbox(center - radius, center + radius) yield from self.filter_sphere(center, radius, self.iter_data(inds, fields)) def iter_ibbox_data(self, left, right, fields): mylog.debug("MIDX Loading region from %s to %s", left, right) inds = self.get_ibbox(left, right) return self.iter_data(inds, fields) def get_contiguous_chunk(self, left_key, right_key, fields): lbase = 0 if left_key > self._max_key: raise RuntimeError( "Left key is too large. Key: %i Max Key: %i" % (left_key, self._max_key) ) right_key = min(right_key, self._max_key) left_key = self.get_next_nonzero_chunk(left_key, right_key - 1) right_key = self.get_previous_nonzero_chunk(right_key, left_key) lbase = self.indexdata["base"][left_key] rbase = self.indexdata["base"][right_key] rlen = self.indexdata["len"][right_key] length = rbase + rlen - lbase if length > 0: mylog.debug( "Getting contiguous chunk of size %i starting at %i", length, lbase ) return self.get_data(slice(lbase, lbase + length), fields) def get_key_data(self, key, fields): if key > self._max_key: raise RuntimeError( "Left key is too large. 
Key: %i Max Key: %i" % (key, self._max_key) ) base = self.indexdata["base"][key] length = self.indexdata["len"][key] - base if length > 0: mylog.debug( "Getting contiguous chunk of size %i starting at %i", length, base ) return self.get_data(slice(base, base + length), fields) def iter_slice_data(self, slice_dim, slice_index, fields): mask, offsets, lengths = self.get_slice_chunks(slice_dim, slice_index) for off, l in zip(offsets, lengths, strict=True): data = {} chunk = slice(off, off + l) for field in fields: data[field] = self.sdfdata[field][chunk] yield data del data def get_key_bounds(self, level, cell_iarr): """ Get index keys for index file supplied. level: int Requested level cell_iarr: array-like, length 3 Requested cell from given level. Returns: lmax_lk, lmax_rk """ shift = self.level - level level_buff = 0 level_lk = self.get_key(cell_iarr + level_buff) level_rk = self.get_key(cell_iarr + level_buff) + 1 lmax_lk = level_lk << shift * 3 lmax_rk = ((level_rk) << shift * 3) - 1 # print( # "Level ", # level, # np.binary_repr(level_lk, width=self.level * 3), # np.binary_repr(level_rk, width=self.level * 3), # ) # print( # "Level ", # self.level, # np.binary_repr(lmax_lk, width=self.level * 3), # np.binary_repr(lmax_rk, width=self.level * 3), # ) return lmax_lk, lmax_rk def find_max_cell(self): max_cell = np.argmax(self.indexdata["len"][:]) return max_cell def find_max_cell_center(self): max_cell = self.find_max_cell() cell_ijk = np.array( self.get_ind_from_key(self.indexdata["index"][max_cell]), dtype="int64" ) position = (cell_ijk + 0.5) * (self.domain_width / self.domain_dims) + self.rmin return position def get_cell_data(self, level, cell_iarr, fields): """ Get data from requested cell This uses the raw cell index, and doesn't account for periodicity or an expanded domain (non-power of 2). level: int Requested level cell_iarr: array-like, length 3 Requested cell from given level. fields: list Requested fields Returns: cell_data: dict Dictionary of field_name, field_data """ cell_iarr = np.array(cell_iarr, dtype="int64") lk, rk = self.get_key_bounds(level, cell_iarr) mylog.debug("Reading contiguous chunk from %i to %i", lk, rk) return self.get_contiguous_chunk(lk, rk, fields) def get_cell_bbox(self, level, cell_iarr): """Get floating point bounding box for a given midx cell Returns: bbox: array-like of shape (3,2) """ cell_iarr = np.array(cell_iarr, dtype="int64") cell_width = self.get_cell_width(level) le = self.rmin + cell_iarr * cell_width re = le + cell_width bbox = np.array([le, re]).T assert bbox.shape == (3, 2) return bbox def iter_padded_bbox_data(self, level, cell_iarr, pad, fields): """ Yields data chunks for a cell on the given level plus a padding around the cell, for a list of fields. Yields: dd: A dictionaries of data. 
Example: for chunk in midx.iter_padded_bbox_data( 6, np.array([128]*3), 8.0, ['x','y','z','ident']): print(chunk['x'].max()) """ _ensure_xyz_fields(fields) bbox = self.get_cell_bbox(level, cell_iarr) filter_left = bbox[:, 0] - pad filter_right = bbox[:, 1] + pad # Center cell for dd in self.filter_bbox( filter_left, filter_right, [self.get_cell_data(level, cell_iarr, fields)] ): yield dd del dd # Bottom & Top pbox = bbox.copy() pbox[0, 0] -= pad[0] pbox[0, 1] += pad[0] pbox[1, 0] -= pad[1] pbox[1, 1] += pad[1] pbox[2, 0] -= pad[2] pbox[2, 1] = bbox[2, 0] for dd in self.filter_bbox( filter_left, filter_right, self.iter_bbox_data(pbox[:, 0], pbox[:, 1], fields), ): yield dd del dd pbox[2, 0] = bbox[2, 1] pbox[2, 1] = pbox[2, 0] + pad[2] for dd in self.filter_bbox( filter_left, filter_right, self.iter_bbox_data(pbox[:, 0], pbox[:, 1], fields), ): yield dd del dd # Front & Back pbox = bbox.copy() pbox[0, 0] -= pad[0] pbox[0, 1] += pad[0] pbox[1, 0] -= pad[1] pbox[1, 1] = bbox[1, 0] for dd in self.filter_bbox( filter_left, filter_right, self.iter_bbox_data(pbox[:, 0], pbox[:, 1], fields), ): yield dd del dd pbox[1, 0] = bbox[1, 1] pbox[1, 1] = pbox[1, 0] + pad[1] for dd in self.filter_bbox( filter_left, filter_right, self.iter_bbox_data(pbox[:, 0], pbox[:, 1], fields), ): yield dd del dd # Left & Right pbox = bbox.copy() pbox[0, 0] -= pad[0] pbox[0, 1] = bbox[0, 0] for dd in self.filter_bbox( filter_left, filter_right, self.iter_bbox_data(pbox[:, 0], pbox[:, 1], fields), ): yield dd del dd pbox[0, 0] = bbox[0, 1] pbox[0, 1] = pbox[0, 0] + pad[0] for dd in self.filter_bbox( filter_left, filter_right, self.iter_bbox_data(pbox[:, 0], pbox[:, 1], fields), ): yield dd del dd def get_padded_bbox_data(self, level, cell_iarr, pad, fields): """ Return list of data chunks for a cell on the given level plus a padding around the cell, for a list of fields. Returns ------- data: list A list of dictionaries of data. Examples -------- >>> chunks = midx.get_padded_bbox_data( ... 6, np.array([128] * 3), 8.0, ["x", "y", "z", "ident"] ... 
) """ _ensure_xyz_fields(fields) data = [] for dd in self.iter_padded_bbox_data(level, cell_iarr, pad, fields): data.append(dd) return data def get_cell_width(self, level): return self.domain_width / 2**level def iter_padded_bbox_keys(self, level, cell_iarr, pad): """ Returns: bbox: array-like of shape (3,2) """ bbox = self.get_cell_bbox(level, cell_iarr) # Need to get all of these low_key, high_key = self.get_key_bounds(level, cell_iarr) yield from range(low_key, high_key) # Bottom & Top pbox = bbox.copy() pbox[0, 0] -= pad[0] pbox[0, 1] += pad[0] pbox[1, 0] -= pad[1] pbox[1, 1] += pad[1] pbox[2, 0] -= pad[2] pbox[2, 1] = bbox[2, 0] yield from self.get_bbox(pbox[:, 0], pbox[:, 1]) pbox[2, 0] = bbox[2, 1] pbox[2, 1] = pbox[2, 0] + pad[2] yield from self.get_bbox(pbox[:, 0], pbox[:, 1]) # Front & Back pbox = bbox.copy() pbox[0, 0] -= pad[0] pbox[0, 1] += pad[0] pbox[1, 0] -= pad[1] pbox[1, 1] = bbox[1, 0] yield from self.get_bbox(pbox[:, 0], pbox[:, 1]) pbox[1, 0] = bbox[1, 1] pbox[1, 1] = pbox[1, 0] + pad[1] yield from self.get_bbox(pbox[:, 0], pbox[:, 1]) # Left & Right pbox = bbox.copy() pbox[0, 0] -= pad[0] pbox[0, 1] = bbox[0, 0] yield from self.get_bbox(pbox[:, 0], pbox[:, 1]) pbox[0, 0] = bbox[0, 1] pbox[0, 1] = pbox[0, 0] + pad[0] yield from self.get_bbox(pbox[:, 0], pbox[:, 1]) ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.403154 yt-4.4.0/yt/utilities/tests/0000755000175100001770000000000014714401715015375 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/tests/__init__.py0000644000175100001770000000000014714401662017475 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/tests/cosmology_answers.yml0000644000175100001770000000605714714401662021706 0ustar00runnerdockercosmologies: EdS: {hubble_constant: 0.7, omega_lambda: 0.0, omega_matter: 1.0, omega_radiation: 0.0} LCDM: {hubble_constant: 0.7, omega_lambda: 0.7, omega_matter: 0.3, omega_radiation: 0.0} omega_radiation: {hubble_constant: 0.7, omega_lambda: 0.7, omega_matter: 0.2999, omega_radiation: 0.0001} open: {hubble_constant: 0.7, omega_lambda: 0.0, omega_matter: 0.3, omega_radiation: 0.0} functions: age_integrand: answers: {EdS: 0.17677669529663687, LCDM: 0.28398091712353246, omega_radiation: 0.2839442815151592, open: 0.21926450482675733} args: [1] angular_diameter_distance: answers: {EdS: -47.63859400006291, LCDM: 74.71628440466395, omega_radiation: 74.61156201248941, open: 114.42229537588507} args: [1, 2] units: Mpc angular_scale: answers: {EdS: -47.63859400006291, LCDM: 74.71628440466395, omega_radiation: 74.61156201248941, open: 114.42229537588507} args: [1, 2] units: Mpc/radian comoving_radial_distance: answers: {EdS: 1111.3289801408714, LCDM: 1875.857637458183, omega_radiation: 1875.4640874252457, open: 1448.958605095395} args: [1, 2] units: Mpc comoving_transverse_distance: answers: {EdS: 1111.3289801408714, LCDM: 1875.857637458183, omega_radiation: 1875.4640874252457, open: 1468.3857559045264} args: [1, 2] units: Mpc comoving_volume: answers: {EdS: 5.749320615429804, LCDM: 27.64956077763837, omega_radiation: 27.632162011458195, open: 82.84024895030079} args: [1, 2] units: Gpc**3 critical_density: answers: {EdS: 1088.0152888198547, LCDM: 421.6059244176937, omega_radiation: 421.7147259465755, open: 707.2099377329054} args: [1] units: Msun/kpc**3 expansion_factor: answers: {EdS: 2.8284271247461903, LCDM: 
1.7606816861659007, omega_radiation: 1.7609088562444108, open: 2.2803508501982757} args: [1] hubble_distance: answers: {EdS: 4282.749400000001, LCDM: 4282.749400000001, omega_radiation: 4282.749400000001, open: 4282.749400000001} units: Mpc hubble_parameter: answers: {EdS: 197.98989873223329, LCDM: 123.24771803161306, omega_radiation: 123.26361993710874, open: 159.6245595138793} args: [1] units: km/s/Mpc inverse_expansion_factor: answers: {EdS: 0.35355339059327373, LCDM: 0.5679618342470649, omega_radiation: 0.5678885630303184, open: 0.43852900965351466} args: [1] lookback_time: answers: {EdS: 1500.1343648374723, LCDM: 2524.828936828046, omega_radiation: 2524.3141825048465, open: 1949.3932768737345} args: [1, 2] units: Myr luminosity_distance: answers: {EdS: 5842.669112528931, LCDM: 8931.175468218224, omega_radiation: 8929.836295237686, open: 8369.418267416617} args: [1, 2] units: Mpc path_length: answers: {EdS: 1.5782728314065704, LCDM: 2.679181396559161, omega_radiation: 2.6785873459658784, open: 2.0714508064868244} args: [1, 2] path_length_function: answers: {EdS: 1.414213562373095, LCDM: 2.2718473369882597, omega_radiation: 2.2715542521212737, open: 1.7541160386140586} args: [1] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/tests/test_amr_kdtree.py0000644000175100001770000000161614714401662021130 0ustar00runnerdockerimport itertools import numpy as np from numpy.testing import assert_almost_equal from yt.testing import fake_amr_ds def test_amr_kdtree_set_fields(): ds = fake_amr_ds(fields=["density", "pressure"], units=["g/cm**3", "dyn/cm**2"]) dd = ds.all_data() fields = ds.field_list dd.tiles.set_fields(fields, [True, True], False) gold = {} for i, block in enumerate(dd.tiles.traverse()): gold[i] = [data.copy() for data in block.my_data] for log_fields in itertools.product([True, False], [True, False]): dd.tiles.set_fields(fields, log_fields, False) for iblock, block in enumerate(dd.tiles.traverse()): for i in range(len(fields)): if log_fields[i]: data = block.my_data[i] else: data = np.log10(block.my_data[i]) assert_almost_equal(gold[iblock][i], data) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/tests/test_chemical_formulas.py0000644000175100001770000000204514714401662022465 0ustar00runnerdockerfrom numpy.testing import assert_allclose, assert_equal from yt.utilities.chemical_formulas import ChemicalFormula, compute_mu from yt.utilities.periodic_table import periodic_table _molecules = ( ("H2O_p1", (("H", 2), ("O", 1)), 1), ("H2O_m1", (("H", 2), ("O", 1)), -1), ("H2O", (("H", 2), ("O", 1)), 0), ("H2SO4", (("H", 2), ("S", 1), ("O", 4)), 0), # Now a harder one ("UuoMtUuq3", (("Uuo", 1), ("Mt", 1), ("Uuq", 3)), 0), ) def test_formulas(): for formula, components, charge in _molecules: f = ChemicalFormula(formula) w = sum(n * periodic_table[e].weight for e, n in components) assert_equal(f.charge, charge) assert_equal(f.weight, w) for (n, c1), (e, c2) in zip(components, f.elements, strict=True): assert_equal(n, e.symbol) assert_equal(c1, c2) def test_default_mu(): assert_allclose(compute_mu(None), 0.5924489101195808) assert_allclose(compute_mu("ionized"), 0.5924489101195808) assert_allclose(compute_mu("neutral"), 1.2285402715185552) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/tests/test_config.py0000644000175100001770000001237414714401662020263 0ustar00runnerdockerimport contextlib 
import os import shutil import sys import tempfile import unittest import unittest.mock as mock from io import StringIO import yt.config import yt.utilities.command_line from yt.utilities.configure import YTConfig _TEST_PLUGIN = "_test_plugin.py" # NOTE: the normalization of the crazy camel-case will be checked _DUMMY_CFG_INI = f"""[yt] logLevel = 49 pluginfilename = {_TEST_PLUGIN} boolean_stuff = True chunk_size = 3 """ _DUMMY_CFG_TOML = f"""[yt] log_level = 49 plugin_filename = "{_TEST_PLUGIN}" boolean_stuff = true chunk_size = 3 """ @contextlib.contextmanager def captureOutput(): oldout, olderr = sys.stdout, sys.stderr try: out = [StringIO(), StringIO()] sys.stdout, sys.stderr = out yield out finally: sys.stdout, sys.stderr = oldout, olderr out[0] = out[0].getvalue() out[1] = out[1].getvalue() class SysExitException(Exception): pass class TestYTConfig(unittest.TestCase): def setUp(self): self.xdg_config_home = os.environ.get("XDG_CONFIG_HOME") self.tmpdir = tempfile.mkdtemp() os.environ["XDG_CONFIG_HOME"] = self.tmpdir os.mkdir(os.path.join(self.tmpdir, "yt")) # run inside another temporary directory to avoid polluting the # local space when we dump configuration to a local yt.toml file self.origin = os.getcwd() os.chdir(tempfile.mkdtemp()) def tearDown(self): shutil.rmtree(self.tmpdir) if self.xdg_config_home: os.environ["XDG_CONFIG_HOME"] = self.xdg_config_home else: os.environ.pop("XDG_CONFIG_HOME") os.chdir(self.origin) def _runYTConfig(self, args): args = ["yt", "config"] + args retcode = 0 with ( mock.patch.object(sys, "argv", args), mock.patch("sys.exit", side_effect=SysExitException) as exit, captureOutput() as output, ): try: yt.utilities.command_line.run_main() except SysExitException: args = exit.mock_calls[0][1] retcode = args[0] if len(args) else 0 return {"rc": retcode, "stdout": output[0], "stderr": output[1]} def _testKeyValue(self, key, val_set, val_get): info = self._runYTConfig(["set", "yt", key, str(val_set)]) self.assertEqual(info["rc"], 0) info = self._runYTConfig(["get", "yt", key]) self.assertEqual(info["rc"], 0) self.assertEqual(info["stdout"].strip(), str(val_get)) info = self._runYTConfig(["rm", "yt", key]) self.assertEqual(info["rc"], 0) def _testKeyTypeError(self, key, val1, val2, expect_error): info = self._runYTConfig(["set", "yt", key, str(val1)]) self.assertEqual(info["rc"], 0) if expect_error: with self.assertRaises(TypeError): info = self._runYTConfig(["set", "yt", key, str(val2)]) else: info = self._runYTConfig(["set", "yt", key, str(val2)]) info = self._runYTConfig(["rm", "yt", key]) self.assertEqual(info["rc"], 0) class TestYTConfigCommands(TestYTConfig): def testConfigCommands(self): def remove_spaces_and_breaks(s): return "".join(s.split()) self.assertFalse(os.path.exists(YTConfig.get_global_config_file())) info = self._runYTConfig(["--help"]) self.assertEqual(info["rc"], 0) self.assertEqual(info["stderr"], "") self.assertIn( remove_spaces_and_breaks("Get and set configuration values for yt"), remove_spaces_and_breaks(info["stdout"]), ) info = self._runYTConfig(["list"]) self.assertEqual(info["rc"], 0) self.assertEqual(info["stdout"], "") self._testKeyValue("internals.parallel", True, True) self._testKeyValue( "test_data_dir", "~/yt-data", os.path.expanduser("~/yt-data") ) self._testKeyValue( "test_data_dir", "$HOME/yt-data", os.path.expandvars("$HOME/yt-data") ) with self.assertRaises(KeyError): self._runYTConfig(["get", "yt", "foo"]) # Check TypeErrors are raised when changing the type of an entry self._testKeyTypeError("foo.bar", "test", 10, 
expect_error=True) self._testKeyTypeError("foo.bar", "test", False, expect_error=True) # Check no type errors are raised when *not* changing the type self._testKeyTypeError("foo.bar", 10, 20, expect_error=False) self._testKeyTypeError("foo.bar", "foo", "bar", expect_error=False) def tearDown(self): if os.path.exists(YTConfig.get_global_config_file()): os.remove(YTConfig.get_global_config_file()) super().tearDown() class TestYTConfigGlobalLocal(TestYTConfig): def setUp(self): super().setUp() with open(YTConfig.get_local_config_file(), mode="w") as f: f.writelines("[yt]\n") with open(YTConfig.get_global_config_file(), mode="w") as f: f.writelines("[yt]\n") def testAmbiguousConfig(self): info = self._runYTConfig(["list"]) self.assertFalse(info["rc"] == 0) for cmd in (["list", "--local"], ["list", "--global"]): info = self._runYTConfig(cmd) self.assertEqual(info["rc"], 0) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/tests/test_coordinate_conversions.py0000644000175100001770000001357114714401662023575 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_array_almost_equal from yt.utilities.math_utils import ( get_cyl_r, get_cyl_r_component, get_cyl_theta, get_cyl_theta_component, get_cyl_z, get_cyl_z_component, get_sph_phi, get_sph_phi_component, get_sph_r, get_sph_r_component, get_sph_theta, get_sph_theta_component, ) # Randomly generated coordinates in the domain [[-1,1],[-1,1],[-1,1]] coords = np.array( [ [-0.41503037, -0.22102472, -0.55774212], [0.73828247, -0.17913899, 0.64076921], [0.08922066, -0.94254844, -0.61774511], [0.10173242, -0.95789145, 0.16294352], [0.73186508, -0.3109153, 0.75728738], [0.8757989, -0.41475119, -0.57039201], [0.58040762, 0.81969082, 0.46759728], [-0.89983356, -0.9853683, -0.38355343], ] ).T def test_spherical_coordinate_conversion(): normal = [0, 0, 1] real_r = [ 0.72950559, 0.99384957, 1.13047198, 0.97696269, 1.09807968, 1.12445067, 1.10788685, 1.38843954, ] real_theta = [ 2.44113629, 0.87012028, 2.14891444, 1.4032274, 0.80979483, 2.10280198, 1.13507735, 1.85068416, ] real_phi = [ -2.65224483, -0.23804243, -1.47641858, -1.46498842, -0.40172325, -0.4422801, 0.95466734, -2.31085392, ] calc_r = get_sph_r(coords) calc_theta = get_sph_theta(coords, normal) calc_phi = get_sph_phi(coords, normal) assert_array_almost_equal(calc_r, real_r) assert_array_almost_equal(calc_theta, real_theta) assert_array_almost_equal(calc_phi, real_phi) normal = [1, 0, 0] real_theta = [ 2.17598842, 0.73347681, 1.49179079, 1.46647589, 0.8412984, 0.67793705, 1.0193883, 2.27586987, ] real_phi = [ -1.94809584, 1.843405, -2.56143151, 2.97309903, 1.96037671, -2.1995016, 0.51841239, -2.77038877, ] calc_theta = get_sph_theta(coords, normal) calc_phi = get_sph_phi(coords, normal) assert_array_almost_equal(calc_theta, real_theta) assert_array_almost_equal(calc_phi, real_phi) def test_cylindrical_coordinate_conversion(): normal = [0, 0, 1] real_r = [ 0.47021498, 0.75970506, 0.94676179, 0.96327853, 0.79516968, 0.96904193, 1.00437346, 1.3344104, ] real_theta = [ -2.65224483, -0.23804243, -1.47641858, -1.46498842, -0.40172325, -0.4422801, 0.95466734, -2.31085392, ] real_z = [ -0.55774212, 0.64076921, -0.61774511, 0.16294352, 0.75728738, -0.57039201, 0.46759728, -0.38355343, ] calc_r = get_cyl_r(coords, normal) calc_theta = get_cyl_theta(coords, normal) calc_z = get_cyl_z(coords, normal) assert_array_almost_equal(calc_r, real_r) assert_array_almost_equal(calc_theta, real_theta) assert_array_almost_equal(calc_z,
real_z) normal = [1, 0, 0] real_r = [ 0.59994016, 0.66533898, 1.12694569, 0.97165149, 0.81862843, 0.70524152, 0.94368441, 1.05738542, ] real_theta = [ -1.94809584, 1.843405, -2.56143151, 2.97309903, 1.96037671, -2.1995016, 0.51841239, -2.77038877, ] real_z = [ -0.41503037, 0.73828247, 0.08922066, 0.10173242, 0.73186508, 0.8757989, 0.58040762, -0.89983356, ] calc_r = get_cyl_r(coords, normal) calc_theta = get_cyl_theta(coords, normal) calc_z = get_cyl_z(coords, normal) assert_array_almost_equal(calc_r, real_r) assert_array_almost_equal(calc_theta, real_theta) assert_array_almost_equal(calc_z, real_z) def test_spherical_coordinate_projections(): normal = [0, 0, 1] theta = get_sph_theta(coords, normal) phi = get_sph_phi(coords, normal) zero = np.tile(0, coords.shape[1]) # Purely radial field vecs = np.array( [np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)] ) assert_array_almost_equal(zero, get_sph_theta_component(vecs, theta, phi, normal)) assert_array_almost_equal(zero, get_sph_phi_component(vecs, phi, normal)) # Purely toroidal field vecs = np.array([-np.sin(phi), np.cos(phi), zero]) assert_array_almost_equal(zero, get_sph_theta_component(vecs, theta, phi, normal)) assert_array_almost_equal(zero, get_sph_r_component(vecs, theta, phi, normal)) # Purely poloidal field vecs = np.array( [np.cos(theta) * np.cos(phi), np.cos(theta) * np.sin(phi), -np.sin(theta)] ) assert_array_almost_equal(zero, get_sph_phi_component(vecs, phi, normal)) assert_array_almost_equal(zero, get_sph_r_component(vecs, theta, phi, normal)) def test_cylindrical_coordinate_projections(): normal = [0, 0, 1] theta = get_cyl_theta(coords, normal) z = get_cyl_z(coords, normal) zero = np.tile(0, coords.shape[1]) # Purely radial field vecs = np.array([np.cos(theta), np.sin(theta), zero]) assert_array_almost_equal(zero, get_cyl_theta_component(vecs, theta, normal)) assert_array_almost_equal(zero, get_cyl_z_component(vecs, normal)) # Purely toroidal field vecs = np.array([-np.sin(theta), np.cos(theta), zero]) assert_array_almost_equal(zero, get_cyl_z_component(vecs, normal)) assert_array_almost_equal(zero, get_cyl_r_component(vecs, theta, normal)) # Purely z field vecs = np.array([zero, zero, z]) assert_array_almost_equal(zero, get_cyl_theta_component(vecs, theta, normal)) assert_array_almost_equal(zero, get_cyl_r_component(vecs, theta, normal)) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/tests/test_cosmology.py0000644000175100001770000002065614714401662021033 0ustar00runnerdockerimport os import numpy as np from numpy.testing import assert_almost_equal, assert_equal from yt.testing import assert_rel_equal, requires_file, requires_module from yt.units.yt_array import YTArray, YTQuantity from yt.utilities.answer_testing.framework import data_dir_load from yt.utilities.cosmology import Cosmology from yt.utilities.on_demand_imports import _yaml as yaml local_dir = os.path.dirname(os.path.abspath(__file__)) def z_from_t_analytic(my_time, hubble_constant=0.7, omega_matter=0.3, omega_lambda=0.7): """ Compute the redshift from time after the big bang. This is based on Enzo's CosmologyComputeExpansionFactor.C, but altered to use physical units. """ hubble_constant = YTQuantity(hubble_constant, "100*km/s/Mpc") omega_curvature = 1.0 - omega_matter - omega_lambda OMEGA_TOLERANCE = 1e-5 ETA_TOLERANCE = 1.0e-10 # Convert the time to Time * H0. 
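    # For orientation: with hubble_constant = 0.7 (i.e. H0 = 70 km/s/Mpc),
    # 1/H0 is roughly 13.97 Gyr, so e.g. my_time = 13.8 Gyr corresponds to a
    # dimensionless t0 of about 13.8 / 13.97 ~ 0.99 in the conversion below.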
if not isinstance(my_time, YTArray): my_time = YTArray(my_time, "s") t0 = (my_time.in_units("s") * hubble_constant.in_units("1/s")).to_ndarray() # For a flat universe with omega_matter = 1, it's easy. if np.fabs(omega_matter - 1) < OMEGA_TOLERANCE and omega_lambda < OMEGA_TOLERANCE: a = np.power(1.5 * t0, 2.0 / 3.0) # For omega_matter < 1 and omega_lambda == 0 see # Peebles 1993, eq. 13-3, 13-10. # Actually, this is a little tricky since we must solve an equation # of the form eta - np.sinh(eta) + x = 0.. elif omega_matter < 1 and omega_lambda < OMEGA_TOLERANCE: x = 2 * t0 * np.power(1.0 - omega_matter, 1.5) / omega_matter # Compute eta in a three step process, first from a third-order # Taylor expansion of the formula above, then use that in a fifth-order # approximation. Then finally, iterate on the formula itself, solving for # eta. This works well because parts 1 & 2 are an excellent approximation # when x is small and part 3 converges quickly when x is large. eta = np.power(6 * x, 1.0 / 3.0) # part 1 eta = np.power(120 * x / (20 + eta * eta), 1.0 / 3.0) # part 2 mask = np.ones(eta.size, dtype=bool) max_iter = 1000 for i in range(max_iter): # part 3 eta_old = eta[mask] eta[mask] = np.arcsinh(eta[mask] + x[mask]) mask[mask] = np.fabs(eta[mask] - eta_old) >= ETA_TOLERANCE if not mask.any(): break if i == max_iter - 1: raise RuntimeError("No convergence after %d iterations." % i) # Now use eta to compute the expansion factor (eq. 13-10, part 2). a = omega_matter / (2.0 * (1.0 - omega_matter)) * (np.cosh(eta) - 1.0) # For flat universe, with non-zero omega_lambda, see eq. 13-20. elif np.fabs(omega_curvature) < OMEGA_TOLERANCE and omega_lambda > OMEGA_TOLERANCE: a = np.power(omega_matter / (1 - omega_matter), 1.0 / 3.0) * np.power( np.sinh(1.5 * np.sqrt(1.0 - omega_matter) * t0), 2.0 / 3.0 ) else: raise NotImplementedError redshift = (1.0 / a) - 1.0 return redshift def t_from_z_analytic(z, hubble_constant=0.7, omega_matter=0.3, omega_lambda=0.7): """ Compute the age of the Universe from redshift. This is based on Enzo's CosmologyComputeTimeFromRedshift.C, but altered to use physical units. """ hubble_constant = YTQuantity(hubble_constant, "100*km/s/Mpc") omega_curvature = 1.0 - omega_matter - omega_lambda # For a flat universe with omega_matter = 1, things are easy. if omega_matter == 1.0 and omega_lambda == 0.0: t0 = 2.0 / 3.0 / np.power(1 + z, 1.5) # For omega_matter < 1 and omega_lambda == 0 see # Peebles 1993, eq. 13-3, 13-10. elif omega_matter < 1 and omega_lambda == 0: eta = np.arccosh(1 + 2 * (1 - omega_matter) / omega_matter / (1 + z)) t0 = ( omega_matter / (2 * np.power(1.0 - omega_matter, 1.5)) * (np.sinh(eta) - eta) ) # For flat universe, with non-zero omega_lambda, see eq. 13-20. elif np.fabs(omega_curvature) < 1.0e-3 and omega_lambda != 0: t0 = ( 2.0 / 3.0 / np.sqrt(1 - omega_matter) * np.arcsinh( np.sqrt((1 - omega_matter) / omega_matter) / np.power(1 + z, 1.5) ) ) else: raise NotImplementedError(f"{hubble_constant}, {omega_matter}, {omega_lambda}") # Now convert from Time * H0 to time. my_time = t0 / hubble_constant return my_time def test_z_t_roundtrip(): """ Make sure t_from_z and z_from_t are consistent. """ co = Cosmology() # random sample in log(a) from -6 to 6 my_random = np.random.RandomState(6132305) la = 12 * my_random.random_sample(10000) - 6 z1 = 1 / np.power(10, la) - 1 t = co.t_from_z(z1) z2 = co.z_from_t(t) assert_rel_equal(z1, z2, 4) def test_z_t_analytic(): """ Test z/t conversions against analytic solutions. 
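    The analytic references implemented above follow Enzo and Peebles (1993):
    a(t) proportional to t**(2/3) for an Einstein-de Sitter universe, the
    parametric sinh(eta) solution (eq. 13-10) for open, matter-only models,
    and the sinh**(2/3) solution (eq. 13-20) for flat models with a
    cosmological constant.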
""" cosmos = ( {"hubble_constant": 0.7, "omega_matter": 0.3, "omega_lambda": 0.7}, {"hubble_constant": 0.7, "omega_matter": 1.0, "omega_lambda": 0.0}, {"hubble_constant": 0.7, "omega_matter": 0.3, "omega_lambda": 0.0}, ) for cosmo in cosmos: omega_curvature = 1 - cosmo["omega_matter"] - cosmo["omega_lambda"] co = Cosmology(omega_curvature=omega_curvature, **cosmo) # random sample in log(a) from -6 to 6 my_random = np.random.RandomState(10132324) la = 12 * my_random.random_sample(1000) - 6 z = 1 / np.power(10, la) - 1 t_an = t_from_z_analytic(z, **cosmo).to("Gyr") t_co = co.t_from_z(z).to("Gyr") assert_rel_equal( t_an, t_co, 4, err_msg=f"t_from_z does not match analytic version for cosmology {cosmo}.", ) # random sample in log(t/t0) from -3 to 1 t0 = np.power(10, 4 * my_random.random_sample(1000) - 3) t = (t0 / co.hubble_constant).to("Gyr") z_an = z_from_t_analytic(t, **cosmo) z_co = co.z_from_t(t) # compare scale factors since z approaches 0 assert_rel_equal( 1 / (1 + z_an), 1 / (1 + z_co), 5, err_msg=f"z_from_t does not match analytic version for cosmology {cosmo}.", ) def test_dark_factor(): """ Test that dark factor returns same value for when not being used and when w_0 = -1 and w_z = 0. """ co = Cosmology(w_0=-1, w_a=0, use_dark_factor=False) assert_equal(co.get_dark_factor(0), 1.0) co.use_dark_factor = True assert_equal(co.get_dark_factor(0), 1.0) @requires_module("yaml") def test_cosmology_calculator_answers(): """ Test cosmology calculator functions against previously calculated values. """ fn = os.path.join(local_dir, "cosmology_answers.yml") with open(fn) as fh: data = yaml.load(fh, Loader=yaml.FullLoader) cosmologies = data["cosmologies"] functions = data["functions"] for cname, copars in cosmologies.items(): omega_curvature = ( 1 - copars["omega_matter"] - copars["omega_lambda"] - copars["omega_radiation"] ) cosmology = Cosmology(omega_curvature=omega_curvature, **copars) for fname, finfo in functions.items(): func = getattr(cosmology, fname) args = finfo.get("args", []) val = func(*args) units = finfo.get("units") if units is not None: val.convert_to_units(units) val = float(val) err_msg = ( "{} answer has changed for {} cosmology, old: {:f}, new: {:f}.".format( fname, cname, finfo["answers"][cname], val, ) ) assert_almost_equal(val, finfo["answers"][cname], 10, err_msg=err_msg) enzotiny = "enzo_tiny_cosmology/DD0020/DD0020" @requires_module("h5py") @requires_file(enzotiny) def test_dataset_cosmology_calculator(): """ Test datasets's cosmology calculator against standalone. 
""" ds = data_dir_load(enzotiny) co = Cosmology( hubble_constant=ds.hubble_constant, omega_matter=ds.omega_matter, omega_lambda=ds.omega_lambda, ) v1 = ds.cosmology.comoving_radial_distance(1, 5).to("Mpccm").v v2 = co.comoving_radial_distance(1, 5).to("Mpccm").v assert_equal(v1, v2) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/tests/test_cython_fortran_utils.py0000644000175100001770000000150714714401662023271 0ustar00runnerdockerimport struct import numpy as np import pytest from yt.utilities.cython_fortran_utils import FortranFile def test_raise_error_when_file_does_not_exist(): with pytest.raises(FileNotFoundError): FortranFile("/this/file/does/not/exist") def test_read(tmp_path): dummy_file = tmp_path / "test.bin" # Write a Fortran-formatted file containing one record with 4 doubles # The format is a 32bit integer with value 4*sizeof(double)=32 # followed by 4 doubles and another 32bit integer with value 32 # Note that there is no memory alignment, hence the "=" below buff = struct.pack("=i 4d i", 32, 1.0, 2.0, 3.0, 4.0, 32) dummy_file.write_bytes(buff) with FortranFile(str(dummy_file)) as f: np.testing.assert_equal( f.read_vector("d"), [1.0, 2.0, 3.0, 4.0], ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/tests/test_decompose.py0000644000175100001770000000732714714401662020776 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_almost_equal, assert_array_equal import yt.utilities.decompose as dec def test_psize_2d(): procs = dec.get_psize(np.array([5, 1, 7]), 6) assert_array_equal(procs, np.array([3, 1, 2])) procs = dec.get_psize(np.array([1, 7, 5]), 6) assert_array_equal(procs, np.array([1, 2, 3])) procs = dec.get_psize(np.array([7, 5, 1]), 6) assert_array_equal(procs, np.array([2, 3, 1])) def test_psize_3d(): procs = dec.get_psize(np.array([33, 35, 37]), 12) assert_array_equal(procs, np.array([3, 2, 2])) def test_decomposition_2d(): array = np.ones((7, 5, 1)) bbox = np.array([[-0.7, 0.0], [1.5, 2.0], [0.0, 0.7]]) ledge, redge, shapes, slices, _ = dec.decompose_array( array.shape, np.array([2, 3, 1]), bbox ) data = [array[slice] for slice in slices] assert_array_equal(data[1].shape, np.array([3, 2, 1])) gold_le = np.array( [ [-0.7, 1.5, 0.0], [-0.7, 1.6, 0.0], [-0.7, 1.8, 0.0], [-0.4, 1.5, 0.0], [-0.4, 1.6, 0.0], [-0.4, 1.8, 0.0], ] ) assert_almost_equal(ledge, gold_le, 8) gold_re = np.array( [ [-0.4, 1.6, 0.7], [-0.4, 1.8, 0.7], [-0.4, 2.0, 0.7], [0.0, 1.6, 0.7], [0.0, 1.8, 0.7], [0.0, 2.0, 0.7], ] ) assert_almost_equal(redge, gold_re, 8) def test_decomposition_3d(): array = np.ones((33, 35, 37)) bbox = np.array([[0.0, 1.0], [-1.5, 1.5], [1.0, 2.5]]) ledge, redge, shapes, slices, _ = dec.decompose_array( array.shape, np.array([3, 2, 2]), bbox ) data = [array[slice] for slice in slices] assert_array_equal(data[0].shape, np.array([11, 17, 18])) gold_le = np.array( [ [0.00000, -1.50000, 1.00000], [0.00000, -1.50000, 1.72973], [0.00000, -0.04286, 1.00000], [0.00000, -0.04286, 1.72973], [0.33333, -1.50000, 1.00000], [0.33333, -1.50000, 1.72973], [0.33333, -0.04286, 1.00000], [0.33333, -0.04286, 1.72973], [0.66667, -1.50000, 1.00000], [0.66667, -1.50000, 1.72973], [0.66667, -0.04286, 1.00000], [0.66667, -0.04286, 1.72973], ] ) assert_almost_equal(ledge, gold_le, 5) gold_re = np.array( [ [0.33333, -0.04286, 1.72973], [0.33333, -0.04286, 2.50000], [0.33333, 1.50000, 1.72973], [0.33333, 1.50000, 2.50000], [0.66667, -0.04286, 1.72973], 
[0.66667, -0.04286, 2.50000], [0.66667, 1.50000, 1.72973], [0.66667, 1.50000, 2.50000], [1.00000, -0.04286, 1.72973], [1.00000, -0.04286, 2.50000], [1.00000, 1.50000, 1.72973], [1.00000, 1.50000, 2.50000], ] ) assert_almost_equal(redge, gold_re, 5) def test_decomposition_with_cell_widths(): array = np.ones((33, 35, 37)) bbox = np.array([[0.0, 1.0], [-1.5, 1.5], [1.0, 2.5]]) # build some cell widths, rescale to match bounding box cell_widths = [] for idim in range(3): wid = bbox[idim][1] - bbox[idim][0] cws = np.random.random((array.shape[idim],)) factor = wid / cws.sum() cell_widths.append(factor * cws) ledge, redge, _, _, widths_by_grid = dec.decompose_array( array.shape, np.array([3, 2, 2]), bbox, cell_widths=cell_widths ) for grid_id in range(len(ledge)): grid_wid = redge[grid_id] - ledge[grid_id] cws = widths_by_grid[grid_id] cws_wid = np.array([np.sum(cws[dim]) for dim in range(3)]) assert_almost_equal(grid_wid, cws_wid, 5) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/tests/test_flagging_methods.py0000644000175100001770000000056314714401662022314 0ustar00runnerdockerimport numpy as np from yt.testing import fake_random_ds from yt.utilities.flagging_methods import flagging_method_registry def test_over_density(): ds = fake_random_ds(64) ds.index od_flag = flagging_method_registry["overdensity"](0.75) criterion = ds.index.grids[0]["gas", "density"] > 0.75 assert np.all(od_flag(ds.index.grids[0]) == criterion) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/tests/test_hierarchy_inspection.py0000644000175100001770000000260114714401662023217 0ustar00runnerdocker""" Created on Wed Feb 18 18:24:09 2015 @author: stuart """ from ..hierarchy_inspection import find_lowest_subclasses class level1: pass class level1a: pass class level2(level1): pass class level3(level2): pass class level4(level3): pass def test_empty(): result = find_lowest_subclasses([]) assert len(result) == 0 def test_single(): result = find_lowest_subclasses([level2]) assert len(result) == 1 assert result[0] is level2 def test_two_classes(): result = find_lowest_subclasses([level1, level2]) assert len(result) == 1 assert result[0] is level2 def test_four_deep(): result = find_lowest_subclasses([level1, level2, level3, level4]) assert len(result) == 1 assert result[0] is level4 def test_four_deep_outoforder(): result = find_lowest_subclasses([level2, level3, level1, level4]) assert len(result) == 1 assert result[0] is level4 def test_diverging_tree(): result = find_lowest_subclasses([level1, level2, level3, level1a]) assert len(result) == 2 assert level1a in result and level3 in result def test_without_parents(): result = find_lowest_subclasses([level1, level3]) assert len(result) == 1 assert result[0] is level3 def test_without_grandparents(): result = find_lowest_subclasses([level1, level4]) assert len(result) == 1 assert result[0] is level4 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/tests/test_interpolators.py0000644000175100001770000001201014714401662021706 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_array_almost_equal, assert_array_equal import yt.utilities.linear_interpolators as lin from yt.testing import fake_random_ds from yt.utilities.lib.interpolators import ghost_zone_interpolate def test_linear_interpolator_1d(): random_data = np.random.random(64) fv = {"x": 
np.mgrid[0.0:1.0:64j]} # evenly spaced bins ufi = lin.UnilinearFieldInterpolator(random_data, (0.0, 1.0), "x", True) assert_array_equal(ufi(fv), random_data) # randomly spaced bins size = 64 shift = (1.0 / size) * np.random.random(size) - (0.5 / size) fv["x"] += shift ufi = lin.UnilinearFieldInterpolator( random_data, np.linspace(0.0, 1.0, size) + shift, "x", True ) assert_array_almost_equal(ufi(fv), random_data, 15) def test_linear_interpolator_2d(): random_data = np.random.random((64, 64)) # evenly spaced bins fv = dict(zip("xy", np.mgrid[0.0:1.0:64j, 0.0:1.0:64j], strict=True)) bfi = lin.BilinearFieldInterpolator(random_data, (0.0, 1.0, 0.0, 1.0), "xy", True) assert_array_equal(bfi(fv), random_data) # randomly spaced bins size = 64 bins = np.linspace(0.0, 1.0, size) shifts = {ax: (1.0 / size) * np.random.random(size) - (0.5 / size) for ax in "xy"} fv["x"] += shifts["x"][:, np.newaxis] fv["y"] += shifts["y"] bfi = lin.BilinearFieldInterpolator( random_data, (bins + shifts["x"], bins + shifts["y"]), "xy", True ) assert_array_almost_equal(bfi(fv), random_data, 15) def test_linear_interpolator_3d(): random_data = np.random.random((64, 64, 64)) # evenly spaced bins fv = dict(zip("xyz", np.mgrid[0.0:1.0:64j, 0.0:1.0:64j, 0.0:1.0:64j], strict=True)) tfi = lin.TrilinearFieldInterpolator( random_data, (0.0, 1.0, 0.0, 1.0, 0.0, 1.0), "xyz", True ) assert_array_almost_equal(tfi(fv), random_data) # randomly spaced bins size = 64 bins = np.linspace(0.0, 1.0, size) shifts = {ax: (1.0 / size) * np.random.random(size) - (0.5 / size) for ax in "xyz"} fv["x"] += shifts["x"][:, np.newaxis, np.newaxis] fv["y"] += shifts["y"][:, np.newaxis] fv["z"] += shifts["z"] tfi = lin.TrilinearFieldInterpolator( random_data, (bins + shifts["x"], bins + shifts["y"], bins + shifts["z"]), "xyz", True, ) assert_array_almost_equal(tfi(fv), random_data, 15) def test_linear_interpolator_4d(): random_data = np.random.random((64, 64, 64, 64)) # evenly spaced bins fv = dict( zip( "xyzw", np.mgrid[0.0:1.0:64j, 0.0:1.0:64j, 0.0:1.0:64j, 0.0:1.0:64j], strict=True, ) ) tfi = lin.QuadrilinearFieldInterpolator( random_data, (0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0), "xyzw", True ) assert_array_almost_equal(tfi(fv), random_data) # randomly spaced bins size = 64 bins = np.linspace(0.0, 1.0, size) shifts = {ax: (1.0 / size) * np.random.random(size) - (0.5 / size) for ax in "xyzw"} fv["x"] += shifts["x"][:, np.newaxis, np.newaxis, np.newaxis] fv["y"] += shifts["y"][:, np.newaxis, np.newaxis] fv["z"] += shifts["z"][:, np.newaxis] fv["w"] += shifts["w"] tfi = lin.QuadrilinearFieldInterpolator( random_data, ( bins + shifts["x"], bins + shifts["y"], bins + shifts["z"], bins + shifts["w"], ), "xyzw", True, ) assert_array_almost_equal(tfi(fv), random_data, 15) def test_ghost_zone_extrapolation(): ds = fake_random_ds(16) g = ds.index.grids[0] vec = g.get_vertex_centered_data( [("index", "x"), ("index", "y"), ("index", "z")], no_ghost=True ) for i, ax in enumerate("xyz"): xc = g["index", ax] tf = lin.TrilinearFieldInterpolator( xc, ( g.LeftEdge[0] + g.dds[0] / 2.0, g.RightEdge[0] - g.dds[0] / 2.0, g.LeftEdge[1] + g.dds[1] / 2.0, g.RightEdge[1] - g.dds[1] / 2.0, g.LeftEdge[2] + g.dds[2] / 2.0, g.RightEdge[2] - g.dds[2] / 2.0, ), ["x", "y", "z"], truncate=True, ) lx, ly, lz = np.mgrid[ g.LeftEdge[0] : g.RightEdge[0] : (g.ActiveDimensions[0] + 1) * 1j, g.LeftEdge[1] : g.RightEdge[1] : (g.ActiveDimensions[1] + 1) * 1j, g.LeftEdge[2] : g.RightEdge[2] : (g.ActiveDimensions[2] + 1) * 1j, ] xi = tf({"x": lx, "y": ly, "z": lz}) xz = 
np.zeros(g.ActiveDimensions + 1) ghost_zone_interpolate( 1, xc, np.array([0.5, 0.5, 0.5], dtype="f8"), xz, np.array([0.0, 0.0, 0.0], dtype="f8"), ) ii = (lx, ly, lz)[i] assert_array_equal(ii, vec["index", ax]) assert_array_equal(ii, xi) assert_array_equal(ii, xz) def test_get_vertex_centered_data(): ds = fake_random_ds(16) g = ds.index.grids[0] g.get_vertex_centered_data([("gas", "density")], no_ghost=True) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/tests/test_minimal_representation.py0000644000175100001770000000303314714401662023556 0ustar00runnerdockerimport os.path from numpy.testing import assert_equal, assert_raises import yt from yt.config import ytcfg from yt.testing import requires_file, requires_module G30 = "IsolatedGalaxy/galaxy0030/galaxy0030" old_serialize = None def setup_module(): global old_serialize old_serialize = ytcfg.get("yt", "serialize") ytcfg["yt", "serialize"] = True def teardown_module(): ytcfg["yt", "serialize"] = old_serialize @requires_module("h5py") @requires_file(G30) def test_store(): ds = yt.load(G30) store = ds.parameter_filename + ".yt" field = "density" if os.path.isfile(store): os.remove(store) proj1 = ds.proj(field, "z") sp = ds.sphere(ds.domain_center, (4, "kpc")) proj2 = ds.proj(field, "z", data_source=sp) proj1_c = ds.proj(field, "z") assert_equal(proj1[field], proj1_c[field]) proj2_c = ds.proj(field, "z", data_source=sp) assert_equal(proj2[field], proj2_c[field]) def fail_for_different_method(): proj2_c = ds.proj(field, "z", data_source=sp, method="max") assert_equal(proj2[field], proj2_c[field]) # A note here: a unyt.exceptions.UnitOperationError is raised # and caught by numpy, which reraises a ValueError assert_raises(ValueError, fail_for_different_method) def fail_for_different_source(): sp = ds.sphere(ds.domain_center, (2, "kpc")) proj2_c = ds.proj(field, "z", data_source=sp, method="integrate") assert_equal(proj2_c[field], proj2[field]) assert_raises(AssertionError, fail_for_different_source) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/tests/test_on_demand_imports.py0000644000175100001770000000327214714401662022514 0ustar00runnerdockerimport pytest from yt.utilities.on_demand_imports import OnDemand, safe_import def test_access_available_module(): class os_imports(OnDemand): @safe_import def path(self): from os import path return path _os = os_imports() _os.path.join("eggs", "saussage") def test_access_unavailable_module(): class Bacon_imports(OnDemand): @safe_import def spam(self): from Bacon import spam return spam _bacon = Bacon_imports() with pytest.raises( ImportError, match=r"No module named 'Bacon'", ) as excinfo: _bacon.spam() # yt should add information to the original error message # but this done slightly differently in Python>=3.11 # (using exception notes), so we can't just match the error message # directly. Instead this implements a Python-version agnostic check # that the user-visible error message is what we expect. complete_error_message = excinfo.exconly() assert complete_error_message == ( "ModuleNotFoundError: No module named 'Bacon'\n" "Something went wrong while trying to lazy-import Bacon. 
" "Please make sure that Bacon is properly installed.\n" "If the problem persists, please file an issue at " "https://github.com/yt-project/yt/issues/new" ) def test_class_invalidation(): with pytest.raises( TypeError, match="class .*'s name needs to be suffixed '_imports'" ): class Bacon(OnDemand): pass def test_base_class_instanciation(): with pytest.raises( TypeError, match="The OnDemand base class cannot be instantiated." ): OnDemand() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/tests/test_particle_generator.py0000644000175100001770000001244314714401662022664 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_almost_equal, assert_equal from yt.loaders import load_uniform_grid from yt.units._numpy_wrapper_functions import uconcatenate from yt.utilities.particle_generator import ( FromListParticleGenerator, LatticeParticleGenerator, WithDensityParticleGenerator, ) def test_particle_generator(): # First generate our dataset domain_dims = (32, 32, 32) dens = np.zeros(domain_dims) + 0.1 temp = 4.0 * np.ones(domain_dims) fields = {"density": (dens, "code_mass/code_length**3"), "temperature": (temp, "K")} ds = load_uniform_grid(fields, domain_dims, 1.0) # Now generate particles from density field_list = [ ("io", "particle_position_x"), ("io", "particle_position_y"), ("io", "particle_position_z"), ("io", "particle_index"), ("io", "particle_gas_density"), ] num_particles = 10000 field_dict = {("gas", "density"): ("io", "particle_gas_density")} sphere = ds.sphere(ds.domain_center, 0.45) particles1 = WithDensityParticleGenerator(ds, sphere, num_particles, field_list) particles1.assign_indices() particles1.map_grid_fields_to_particles(field_dict) # Test to make sure we ended up with the right number of particles per grid particles1.apply_to_stream() particles_per_grid1 = [grid.NumberOfParticles for grid in ds.index.grids] assert_equal(particles_per_grid1, particles1.NumberOfParticles) particles_per_grid1 = [ len(grid["all", "particle_position_x"]) for grid in ds.index.grids ] assert_equal(particles_per_grid1, particles1.NumberOfParticles) tags = uconcatenate([grid["all", "particle_index"] for grid in ds.index.grids]) assert np.unique(tags).size == num_particles del tags # Set up a lattice of particles pdims = np.array([32, 32, 32]) def new_indices(): # We just add new indices onto the existing ones return np.arange(np.prod(pdims)) + num_particles le = np.array([0.25, 0.25, 0.25]) re = np.array([0.75, 0.75, 0.75]) particles2 = LatticeParticleGenerator(ds, pdims, le, re, field_list) particles2.assign_indices(function=new_indices) particles2.map_grid_fields_to_particles(field_dict) # Test lattice positions xpos = np.unique(particles2["io", "particle_position_x"]) ypos = np.unique(particles2["io", "particle_position_y"]) zpos = np.unique(particles2["io", "particle_position_z"]) xpred = np.linspace(le[0], re[0], num=pdims[0], endpoint=True) ypred = np.linspace(le[1], re[1], num=pdims[1], endpoint=True) zpred = np.linspace(le[2], re[2], num=pdims[2], endpoint=True) assert_almost_equal(xpos, xpred) assert_almost_equal(ypos, ypred) assert_almost_equal(zpos, zpred) del xpos, ypos, zpos del xpred, ypred, zpred # Test the number of particles again particles2.apply_to_stream() particles_per_grid2 = [grid.NumberOfParticles for grid in ds.index.grids] assert_equal( particles_per_grid2, particles1.NumberOfParticles + particles2.NumberOfParticles ) [grid.field_data.clear() for grid in ds.index.grids] particles_per_grid2 = 
[ len(grid["all", "particle_position_x"]) for grid in ds.index.grids ] assert_equal( particles_per_grid2, particles1.NumberOfParticles + particles2.NumberOfParticles ) # Test the uniqueness of tags tags = np.concatenate([grid["all", "particle_index"] for grid in ds.index.grids]) tags.sort() assert_equal(tags, np.arange(np.prod(pdims) + num_particles)) del tags # Now dump all of these particle fields out into a dict pdata = {} dd = ds.all_data() for field in field_list: pdata[field] = dd[field] # Test the "from-list" generator and particle field overwrite num_particles3 = num_particles + np.prod(pdims) particles3 = FromListParticleGenerator(ds, num_particles3, pdata) particles3.apply_to_stream(overwrite=True) # Test the number of particles again particles_per_grid3 = [grid.NumberOfParticles for grid in ds.index.grids] assert_equal( particles_per_grid3, particles1.NumberOfParticles + particles2.NumberOfParticles ) particles_per_grid2 = [ len(grid["all", "particle_position_z"]) for grid in ds.index.grids ] assert_equal( particles_per_grid3, particles1.NumberOfParticles + particles2.NumberOfParticles ) assert_equal(particles_per_grid2, particles_per_grid3) # Test adding in particles with a different particle type num_star_particles = 20000 pdata2 = { ("star", "particle_position_x"): np.random.uniform(size=num_star_particles), ("star", "particle_position_y"): np.random.uniform(size=num_star_particles), ("star", "particle_position_z"): np.random.uniform(size=num_star_particles), } particles4 = FromListParticleGenerator(ds, num_star_particles, pdata2, ptype="star") particles4.apply_to_stream() dd = ds.all_data() assert dd["star", "particle_position_x"].size == num_star_particles assert dd["io", "particle_position_x"].size == num_particles3 assert dd["all", "particle_position_x"].size == num_star_particles + num_particles3 del pdata del pdata2 del ds del particles1 del particles2 del particles4 del fields del dens del temp ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/tests/test_periodic_table.py0000644000175100001770000000122414714401662021753 0ustar00runnerdockerfrom numpy.testing import assert_equal from yt.utilities.periodic_table import _elements, periodic_table def test_element_accuracy(): for num, w, name, sym in _elements: e0 = periodic_table[num] e1 = periodic_table[name] e2 = periodic_table[sym] # If num == -1, then we are in one of the things like Deuterium or El # that are not elements by themselves. 
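    # (For instance, assuming the table's Deuterium entry is one of these
    # num == -1 cases, periodic_table["D"] and periodic_table["Deuterium"]
    # should resolve to the same Element instance, which is what the identity
    # checks below rely on.)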
        if num == -1:
            e0 = e1
        assert_equal(id(e0), id(e1))
        assert_equal(id(e0), id(e2))
        assert_equal(e0.num, num)
        assert_equal(e0.weight, w)
        assert_equal(e0.name, name)
        assert_equal(e0.symbol, sym)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0
yt-4.4.0/yt/utilities/tests/test_periodicity.py0000644000175100001770000000524414714401662021340 0ustar00runnerdockerimport numpy as np
from numpy.testing import assert_almost_equal

from yt.testing import fake_random_ds
from yt.utilities.math_utils import euclidean_dist, periodic_dist


def setup_module():
    from yt.config import ytcfg

    ytcfg["yt", "internals", "within_testing"] = True


def test_periodicity():
    # First test the simple case where we find the distance between two points
    a = [0.1, 0.1, 0.1]
    b = [0.9, 0.9, 0.9]
    period = 1.0

    dist = periodic_dist(a, b, period)
    assert_almost_equal(dist, 0.34641016151377535)
    dist = periodic_dist(a, b, period, (True, False, False))
    assert_almost_equal(dist, 1.1489125293076059)
    dist = periodic_dist(a, b, period, (False, True, False))
    assert_almost_equal(dist, 1.1489125293076059)
    dist = periodic_dist(a, b, period, (False, False, True))
    assert_almost_equal(dist, 1.1489125293076059)
    dist = periodic_dist(a, b, period, (True, True, False))
    assert_almost_equal(dist, 0.84852813742385713)
    dist = periodic_dist(a, b, period, (True, False, True))
    assert_almost_equal(dist, 0.84852813742385713)
    dist = periodic_dist(a, b, period, (False, True, True))
    assert_almost_equal(dist, 0.84852813742385713)
    dist = euclidean_dist(a, b)
    assert_almost_equal(dist, 1.3856406460551021)

    # Now test the more complicated cases where we're calculating radii based
    # on data objects
    ds = fake_random_ds(64)

    # First we test flattened data
    data = ds.all_data()
    positions = np.array([data["index", ax] for ax in "xyz"])
    c = [0.1, 0.1, 0.1]
    n_tup = tuple(1 for i in range(positions.ndim - 1))
    center = np.tile(
        np.reshape(np.array(c), (positions.shape[0],) + n_tup),
        (1,) + positions.shape[1:],
    )

    dist = periodic_dist(positions, center, period, ds.periodicity)
    assert_almost_equal(dist.min(), 0.00270632938683)
    assert_almost_equal(dist.max(), 0.863319074398)

    dist = euclidean_dist(positions, center)
    assert_almost_equal(dist.min(), 0.00270632938683)
    assert_almost_equal(dist.max(), 1.54531407988)

    # Then grid-like data
    data = ds.index.grids[0]
    positions = np.array([data["index", ax] for ax in "xyz"])
    c = [0.1, 0.1, 0.1]
    n_tup = tuple(1 for i in range(positions.ndim - 1))
    center = np.tile(
        np.reshape(np.array(c), (positions.shape[0],) + n_tup),
        (1,) + positions.shape[1:],
    )

    dist = periodic_dist(positions, center, period, ds.periodicity)
    assert_almost_equal(dist.min(), 0.00270632938683)
    assert_almost_equal(dist.max(), 0.863319074398)

    dist = euclidean_dist(positions, center)
    assert_almost_equal(dist.min(), 0.00270632938683)
    assert_almost_equal(dist.max(), 1.54531407988)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0
yt-4.4.0/yt/utilities/tests/test_selectors.py0000644000175100001770000001363614714401662021023 0ustar00runnerdockerimport numpy as np
from numpy.testing import assert_array_less, assert_equal

from yt.testing import fake_random_ds
from yt.utilities.math_utils import periodic_dist


def setup_module():
    from yt.config import ytcfg

    ytcfg["yt", "internals", "within_testing"] = True


def test_point_selector():
    # generate fake amr data
    bbox = np.array([[-1.0, 1.0], [-1.0, 1.0], [-1.0, 1.0]])
    ds = fake_random_ds(16, nprocs=7, bbox=bbox)
    assert all(ds.periodicity)

    dd = ds.all_data()
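    # positions is shaped (ncells, 3) after the transpose: one row of
    # cell-center coordinates per cell, with the matching half-widths in delta.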
    positions = np.array([dd["index", ax] for ax in "xyz"]).T
    delta = 0.5 * np.array([dd["index", f"d{ax}"] for ax in "xyz"]).T

    # ensure cell centers and corners always return one and
    # only one point object
    for p in positions:
        data = ds.point(p)
        assert_equal(data["index", "ones"].shape[0], 1)
    for p in positions - delta:
        data = ds.point(p)
        assert_equal(data["index", "ones"].shape[0], 1)
    for p in positions + delta:
        data = ds.point(p)
        assert_equal(data["index", "ones"].shape[0], 1)


def test_sphere_selector():
    # generate fake data with a number of non-cubical grids
    ds = fake_random_ds(64, nprocs=51)
    assert all(ds.periodicity)

    # aligned tests
    spheres = [[0.0, 0.0, 0.0], [0.5, 0.5, 0.5], [1.0, 1.0, 1.0], [0.25, 0.75, 0.25]]

    for center in spheres:
        data = ds.sphere(center, 0.25)
        # WARNING: this value has not been externally verified
        dd = ds.all_data()
        dd.set_field_parameter("center", ds.arr(center, "code_length"))
        n_outside = (dd["index", "radius"] >= 0.25).sum()
        assert_equal(
            data["index", "radius"].size + n_outside, dd["index", "radius"].size
        )

        positions = np.array([data["index", ax] for ax in "xyz"])
        centers = (
            np.tile(data.center, data["index", "x"].shape[0])
            .reshape(data["index", "x"].shape[0], 3)
            .transpose()
        )
        dist = periodic_dist(
            positions,
            centers,
            ds.domain_right_edge - ds.domain_left_edge,
            ds.periodicity,
        )
        # WARNING: this value has not been externally verified
        assert_array_less(dist, 0.25)


def test_ellipsoid_selector():
    # generate fake data with a number of non-cubical grids
    ds = fake_random_ds(64, nprocs=51)
    assert all(ds.periodicity)

    ellipsoids = [[0.0, 0.0, 0.0], [0.5, 0.5, 0.5], [1.0, 1.0, 1.0], [0.25, 0.75, 0.25]]

    # spherical ellipsoid tests
    ratios = 3 * [0.25]
    for center in ellipsoids:
        data = ds.ellipsoid(
            center, ratios[0], ratios[1], ratios[2], np.array([1.0, 0.0, 0.0]), 0.0
        )
        data.get_data()

        dd = ds.all_data()
        dd.set_field_parameter("center", ds.arr(center, "code_length"))
        n_outside = (dd["index", "radius"] >= ratios[0]).sum()
        assert_equal(
            data["index", "radius"].size + n_outside, dd["index", "radius"].size
        )

        positions = np.array([data["index", ax] for ax in "xyz"])
        centers = (
            np.tile(data.center, data.shape[0]).reshape(data.shape[0], 3).transpose()
        )
        dist = periodic_dist(
            positions,
            centers,
            ds.domain_right_edge - ds.domain_left_edge,
            ds.periodicity,
        )
        # WARNING: this value has not been externally verified
        assert_array_less(dist, ratios[0])

    # aligned ellipsoid tests
    ratios = [0.25, 0.1, 0.1]
    for center in ellipsoids:
        data = ds.ellipsoid(
            center, ratios[0], ratios[1], ratios[2], np.array([1.0, 0.0, 0.0]), 0.0
        )

        # hack to compute elliptic distance
        dist2 = np.zeros(data["index", "ones"].shape[0])
        for i, ax in enumerate(("index", k) for k in "xyz"):
            positions = np.zeros((3, data["index", "ones"].shape[0]))
            positions[i, :] = data[ax]
            centers = np.zeros((3, data["index", "ones"].shape[0]))
            centers[i, :] = center[i]
            dist2 += (
                periodic_dist(
                    positions,
                    centers,
                    ds.domain_right_edge - ds.domain_left_edge,
                    ds.periodicity,
                )
                / ratios[i]
            ) ** 2
        # WARNING: this value has not been externally verified
        assert_array_less(dist2, 1.0)


def test_slice_selector():
    # generate fake data with a number of non-cubical grids
    ds = fake_random_ds(64, nprocs=51)
    assert all(ds.periodicity)

    for i, d in enumerate(("index", k) for k in "xyz"):
        for coord in np.arange(0.0, 1.0, 0.1):
            data = ds.slice(i, coord)
            data.get_data()
            v = data[d].to_ndarray()
            assert_equal(data.shape[0], 64**2)
            assert_equal(data["index", "ones"].shape[0], 64**2)
            assert_array_less(np.abs(v - coord), 1.0 / 128.0 + 1e-6)

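# (The test below relies on the fact that a cutting plane whose normal is the
# i-th unit vector, passing through center[i] = coord, should select exactly
# the same cells as the corresponding axis-aligned slice; lexsort is only used
# to put the two selections into a common order before comparing them.)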
def test_cutting_plane_selector():
    # generate fake data with a number of non-cubical grids
    ds = fake_random_ds(64, nprocs=51)
    assert all(ds.periodicity)

    # test cutting plane against orthogonal plane
    for i in range(3):
        norm = np.zeros(3)
        norm[i] = 1.0

        for coord in np.arange(0, 1.0, 0.1):
            center = np.zeros(3)
            center[i] = coord

            data = ds.slice(i, coord)
            data.get_data()
            data2 = ds.cutting(norm, center)
            data2.get_data()

            assert data.shape[0] == data2.shape[0]

            cells1 = np.lexsort(
                (data["index", "x"], data["index", "y"], data["index", "z"])
            )
            cells2 = np.lexsort(
                (data2["index", "x"], data2["index", "y"], data2["index", "z"])
            )
            for d2 in "xyz":
                assert_equal(data["index", d2][cells1], data2["index", d2][cells2])


# def test_region_selector():
#
# def test_disk_selector():
#
# def test_orthoray_selector():
#
# def test_ray_selector():
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0
yt-4.4.0/yt/utilities/tests/test_set_log_level.py0000644000175100001770000000153214714401662021633 0ustar00runnerdockerfrom numpy.testing import assert_raises

from yt.utilities.logger import set_log_level

old_level = None


def setup_module():
    global old_level
    from yt.utilities.logger import ytLogger

    old_level = ytLogger.level


def teardown_module():
    set_log_level(old_level)


def test_valid_level():
    # test a subset of valid entries to cover
    # - case-insensitivity
    # - integer values
    # - "all" alias, which isn't standard
    for lvl in ("all", "ALL", 10, 42, "info", "warning", "ERROR", "CRITICAL"):
        set_log_level(lvl)


def test_invalid_level():
    # these are the exceptions raised by logging.Logger.setLevel
    # since they are perfectly clear and readable, we check that nothing else
    # happens in the wrapper
    assert_raises(TypeError, set_log_level, 1.5)
    assert_raises(ValueError, set_log_level, "invalid_level")
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0
yt-4.4.0/yt/utilities/tree_container.py0000644000175100001770000000065214714401662017612 0ustar00runnerdockerclass TreeContainer:
    r"""A recursive data container for things like merger trees and
    clump-finder trees.
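    Iterating over a node yields the node itself, then recurses depth-first
    through whatever is stored under the attribute named by ``_child_attr``.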
""" _child_attr = "children" def __init__(self): setattr(self, self._child_attr, None) def __iter__(self): yield self children = getattr(self, self._child_attr) if children is None: return for child in children: yield from child ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/utilities/voropp.pyx0000644000175100001770000000433314714401662016326 0ustar00runnerdocker""" Wrapping code for voro++ """ cimport libcpp from cython.operator cimport dereference as deref import numpy as np cimport cython cimport numpy as np cdef extern from "voro++.hh" namespace "voro": cdef cppclass c_loop_all cdef cppclass voronoicell: double volume() cdef cppclass container: container(double xmin, double xmax, double ymin, double ymax, double zmin, double zmax, int nx, int ny, int nz, libcpp.bool xper, libcpp.bool yper, libcpp.bool zper, int alloc) void put(int n, double x, double y, double z) void store_cell_volumes(double *vols) int compute_cell(voronoicell c, c_loop_all vl) double sum_cell_volumes() cdef cppclass c_loop_all: c_loop_all(container &con) int inc() int start() cdef class VoronoiVolume: cdef container *my_con cdef public int npart def __init__(self, xi, yi, zi, left_edge, right_edge): self.my_con = new container(left_edge[0], right_edge[0], left_edge[1], right_edge[1], left_edge[2], right_edge[2], xi, yi, zi, False, False, False, 8) self.npart = 0 def __dealloc__(self): del self.my_con @cython.boundscheck(False) @cython.wraparound(False) def add_array(self, np.ndarray[np.float64_t, ndim=1] xpos, np.ndarray[np.float64_t, ndim=1] ypos, np.ndarray[np.float64_t, ndim=1] zpos): cdef int i for i in range(xpos.shape[0]): self.my_con.put(self.npart, xpos[i], ypos[i], zpos[i]) self.npart += 1 @cython.boundscheck(False) @cython.wraparound(False) def get_volumes(self): cdef np.ndarray vol = np.zeros(self.npart, 'double') #self.my_con.store_cell_volumes(vdouble) cdef c_loop_all *vl = new c_loop_all(deref(self.my_con)) cdef voronoicell c if not vl.start(): return cdef int i = 0 while 1: if self.my_con.compute_cell(c, deref(vl)): vol[i] = c.volume() if not vl.inc(): break i += 1 del vl return vol ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.411154 yt-4.4.0/yt/visualization/0000755000175100001770000000000014714401715015121 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/__init__.py0000644000175100001770000000023214714401662017230 0ustar00runnerdocker""" Raven ===== Raven is the plotting interface, with support for several different engines. Well, two for now, but maybe more later. Who knows? 
""" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/_colormap_data.py0000644000175100001770000161646114714401662020457 0ustar00runnerdocker### Auto-generated colormap tables, taken from Matplotlib ### import numpy as np from numpy import array color_map_luts = {} ### IDL colormap 0 :: B-W LINEAR ### color_map_luts['idl00'] = \ ( array([ 0.0000000, 0.0039062, 0.0078125, 0.0117188, 0.0156250, 0.0195312, 0.0234375, 0.0273438, 0.0312500, 0.0351562, 0.0390625, 0.0429688, 0.0468750, 0.0507812, 0.0546875, 0.0585938, 0.0625000, 0.0664062, 0.0703125, 0.0742188, 0.0781250, 0.0820312, 0.0859375, 0.0898438, 0.0937500, 0.0976562, 0.1015625, 0.1054688, 0.1093750, 0.1132812, 0.1171875, 0.1210938, 0.1250000, 0.1289062, 0.1328125, 0.1367188, 0.1406250, 0.1445312, 0.1484375, 0.1523438, 0.1562500, 0.1601562, 0.1640625, 0.1679688, 0.1718750, 0.1757812, 0.1796875, 0.1835938, 0.1875000, 0.1914062, 0.1953125, 0.1992188, 0.2031250, 0.2070312, 0.2109375, 0.2148438, 0.2187500, 0.2226562, 0.2265625, 0.2304688, 0.2343750, 0.2382812, 0.2421875, 0.2460938, 0.2500000, 0.2539062, 0.2578125, 0.2617188, 0.2656250, 0.2695312, 0.2734375, 0.2773438, 0.2812500, 0.2851562, 0.2890625, 0.2929688, 0.2968750, 0.3007812, 0.3046875, 0.3085938, 0.3125000, 0.3164062, 0.3203125, 0.3242188, 0.3281250, 0.3320312, 0.3359375, 0.3398438, 0.3437500, 0.3476562, 0.3515625, 0.3554688, 0.3593750, 0.3632812, 0.3671875, 0.3710938, 0.3750000, 0.3789062, 0.3828125, 0.3867188, 0.3906250, 0.3945312, 0.3984375, 0.4023438, 0.4062500, 0.4101562, 0.4140625, 0.4179688, 0.4218750, 0.4257812, 0.4296875, 0.4335938, 0.4375000, 0.4414062, 0.4453125, 0.4492188, 0.4531250, 0.4570312, 0.4609375, 0.4648438, 0.4687500, 0.4726562, 0.4765625, 0.4804688, 0.4843750, 0.4882812, 0.4921875, 0.4960938, 0.5000000, 0.5039062, 0.5078125, 0.5117188, 0.5156250, 0.5195312, 0.5234375, 0.5273438, 0.5312500, 0.5351562, 0.5390625, 0.5429688, 0.5468750, 0.5507812, 0.5546875, 0.5585938, 0.5625000, 0.5664062, 0.5703125, 0.5742188, 0.5781250, 0.5820312, 0.5859375, 0.5898438, 0.5937500, 0.5976562, 0.6015625, 0.6054688, 0.6093750, 0.6132812, 0.6171875, 0.6210938, 0.6250000, 0.6289062, 0.6328125, 0.6367188, 0.6406250, 0.6445312, 0.6484375, 0.6523438, 0.6562500, 0.6601562, 0.6640625, 0.6679688, 0.6718750, 0.6757812, 0.6796875, 0.6835938, 0.6875000, 0.6914062, 0.6953125, 0.6992188, 0.7031250, 0.7070312, 0.7109375, 0.7148438, 0.7187500, 0.7226562, 0.7265625, 0.7304688, 0.7343750, 0.7382812, 0.7421875, 0.7460938, 0.7500000, 0.7539062, 0.7578125, 0.7617188, 0.7656250, 0.7695312, 0.7734375, 0.7773438, 0.7812500, 0.7851562, 0.7890625, 0.7929688, 0.7968750, 0.8007812, 0.8046875, 0.8085938, 0.8125000, 0.8164062, 0.8203125, 0.8242188, 0.8281250, 0.8320312, 0.8359375, 0.8398438, 0.8437500, 0.8476562, 0.8515625, 0.8554688, 0.8593750, 0.8632812, 0.8671875, 0.8710938, 0.8750000, 0.8789062, 0.8828125, 0.8867188, 0.8906250, 0.8945312, 0.8984375, 0.9023438, 0.9062500, 0.9101562, 0.9140625, 0.9179688, 0.9218750, 0.9257812, 0.9296875, 0.9335938, 0.9375000, 0.9414062, 0.9453125, 0.9492188, 0.9531250, 0.9570312, 0.9609375, 0.9648438, 0.9687500, 0.9726562, 0.9765625, 0.9804688, 0.9843750, 0.9882812, 0.9921875, 0.9960938]), array([ 0.0000000, 0.0039062, 0.0078125, 0.0117188, 0.0156250, 0.0195312, 0.0234375, 0.0273438, 0.0312500, 0.0351562, 0.0390625, 0.0429688, 0.0468750, 0.0507812, 0.0546875, 0.0585938, 0.0625000, 0.0664062, 0.0703125, 0.0742188, 0.0781250, 0.0820312, 0.0859375, 0.0898438, 0.0937500, 0.0976562, 0.1015625, 0.1054688, 
0.1093750, 0.1132812, 0.1171875, 0.1210938, 0.1250000, 0.1289062, 0.1328125, 0.1367188, 0.1406250, 0.1445312, 0.1484375, 0.1523438, 0.1562500, 0.1601562, 0.1640625, 0.1679688, 0.1718750, 0.1757812, 0.1796875, 0.1835938, 0.1875000, 0.1914062, 0.1953125, 0.1992188, 0.2031250, 0.2070312, 0.2109375, 0.2148438, 0.2187500, 0.2226562, 0.2265625, 0.2304688, 0.2343750, 0.2382812, 0.2421875, 0.2460938, 0.2500000, 0.2539062, 0.2578125, 0.2617188, 0.2656250, 0.2695312, 0.2734375, 0.2773438, 0.2812500, 0.2851562, 0.2890625, 0.2929688, 0.2968750, 0.3007812, 0.3046875, 0.3085938, 0.3125000, 0.3164062, 0.3203125, 0.3242188, 0.3281250, 0.3320312, 0.3359375, 0.3398438, 0.3437500, 0.3476562, 0.3515625, 0.3554688, 0.3593750, 0.3632812, 0.3671875, 0.3710938, 0.3750000, 0.3789062, 0.3828125, 0.3867188, 0.3906250, 0.3945312, 0.3984375, 0.4023438, 0.4062500, 0.4101562, 0.4140625, 0.4179688, 0.4218750, 0.4257812, 0.4296875, 0.4335938, 0.4375000, 0.4414062, 0.4453125, 0.4492188, 0.4531250, 0.4570312, 0.4609375, 0.4648438, 0.4687500, 0.4726562, 0.4765625, 0.4804688, 0.4843750, 0.4882812, 0.4921875, 0.4960938, 0.5000000, 0.5039062, 0.5078125, 0.5117188, 0.5156250, 0.5195312, 0.5234375, 0.5273438, 0.5312500, 0.5351562, 0.5390625, 0.5429688, 0.5468750, 0.5507812, 0.5546875, 0.5585938, 0.5625000, 0.5664062, 0.5703125, 0.5742188, 0.5781250, 0.5820312, 0.5859375, 0.5898438, 0.5937500, 0.5976562, 0.6015625, 0.6054688, 0.6093750, 0.6132812, 0.6171875, 0.6210938, 0.6250000, 0.6289062, 0.6328125, 0.6367188, 0.6406250, 0.6445312, 0.6484375, 0.6523438, 0.6562500, 0.6601562, 0.6640625, 0.6679688, 0.6718750, 0.6757812, 0.6796875, 0.6835938, 0.6875000, 0.6914062, 0.6953125, 0.6992188, 0.7031250, 0.7070312, 0.7109375, 0.7148438, 0.7187500, 0.7226562, 0.7265625, 0.7304688, 0.7343750, 0.7382812, 0.7421875, 0.7460938, 0.7500000, 0.7539062, 0.7578125, 0.7617188, 0.7656250, 0.7695312, 0.7734375, 0.7773438, 0.7812500, 0.7851562, 0.7890625, 0.7929688, 0.7968750, 0.8007812, 0.8046875, 0.8085938, 0.8125000, 0.8164062, 0.8203125, 0.8242188, 0.8281250, 0.8320312, 0.8359375, 0.8398438, 0.8437500, 0.8476562, 0.8515625, 0.8554688, 0.8593750, 0.8632812, 0.8671875, 0.8710938, 0.8750000, 0.8789062, 0.8828125, 0.8867188, 0.8906250, 0.8945312, 0.8984375, 0.9023438, 0.9062500, 0.9101562, 0.9140625, 0.9179688, 0.9218750, 0.9257812, 0.9296875, 0.9335938, 0.9375000, 0.9414062, 0.9453125, 0.9492188, 0.9531250, 0.9570312, 0.9609375, 0.9648438, 0.9687500, 0.9726562, 0.9765625, 0.9804688, 0.9843750, 0.9882812, 0.9921875, 0.9960938]), array([ 0.0000000, 0.0039062, 0.0078125, 0.0117188, 0.0156250, 0.0195312, 0.0234375, 0.0273438, 0.0312500, 0.0351562, 0.0390625, 0.0429688, 0.0468750, 0.0507812, 0.0546875, 0.0585938, 0.0625000, 0.0664062, 0.0703125, 0.0742188, 0.0781250, 0.0820312, 0.0859375, 0.0898438, 0.0937500, 0.0976562, 0.1015625, 0.1054688, 0.1093750, 0.1132812, 0.1171875, 0.1210938, 0.1250000, 0.1289062, 0.1328125, 0.1367188, 0.1406250, 0.1445312, 0.1484375, 0.1523438, 0.1562500, 0.1601562, 0.1640625, 0.1679688, 0.1718750, 0.1757812, 0.1796875, 0.1835938, 0.1875000, 0.1914062, 0.1953125, 0.1992188, 0.2031250, 0.2070312, 0.2109375, 0.2148438, 0.2187500, 0.2226562, 0.2265625, 0.2304688, 0.2343750, 0.2382812, 0.2421875, 0.2460938, 0.2500000, 0.2539062, 0.2578125, 0.2617188, 0.2656250, 0.2695312, 0.2734375, 0.2773438, 0.2812500, 0.2851562, 0.2890625, 0.2929688, 0.2968750, 0.3007812, 0.3046875, 0.3085938, 0.3125000, 0.3164062, 0.3203125, 0.3242188, 0.3281250, 0.3320312, 0.3359375, 0.3398438, 0.3437500, 0.3476562, 0.3515625, 0.3554688, 0.3593750, 0.3632812, 
0.3671875, 0.3710938, 0.3750000, 0.3789062, 0.3828125, 0.3867188, 0.3906250, 0.3945312, 0.3984375, 0.4023438, 0.4062500, 0.4101562, 0.4140625, 0.4179688, 0.4218750, 0.4257812, 0.4296875, 0.4335938, 0.4375000, 0.4414062, 0.4453125, 0.4492188, 0.4531250, 0.4570312, 0.4609375, 0.4648438, 0.4687500, 0.4726562, 0.4765625, 0.4804688, 0.4843750, 0.4882812, 0.4921875, 0.4960938, 0.5000000, 0.5039062, 0.5078125, 0.5117188, 0.5156250, 0.5195312, 0.5234375, 0.5273438, 0.5312500, 0.5351562, 0.5390625, 0.5429688, 0.5468750, 0.5507812, 0.5546875, 0.5585938, 0.5625000, 0.5664062, 0.5703125, 0.5742188, 0.5781250, 0.5820312, 0.5859375, 0.5898438, 0.5937500, 0.5976562, 0.6015625, 0.6054688, 0.6093750, 0.6132812, 0.6171875, 0.6210938, 0.6250000, 0.6289062, 0.6328125, 0.6367188, 0.6406250, 0.6445312, 0.6484375, 0.6523438, 0.6562500, 0.6601562, 0.6640625, 0.6679688, 0.6718750, 0.6757812, 0.6796875, 0.6835938, 0.6875000, 0.6914062, 0.6953125, 0.6992188, 0.7031250, 0.7070312, 0.7109375, 0.7148438, 0.7187500, 0.7226562, 0.7265625, 0.7304688, 0.7343750, 0.7382812, 0.7421875, 0.7460938, 0.7500000, 0.7539062, 0.7578125, 0.7617188, 0.7656250, 0.7695312, 0.7734375, 0.7773438, 0.7812500, 0.7851562, 0.7890625, 0.7929688, 0.7968750, 0.8007812, 0.8046875, 0.8085938, 0.8125000, 0.8164062, 0.8203125, 0.8242188, 0.8281250, 0.8320312, 0.8359375, 0.8398438, 0.8437500, 0.8476562, 0.8515625, 0.8554688, 0.8593750, 0.8632812, 0.8671875, 0.8710938, 0.8750000, 0.8789062, 0.8828125, 0.8867188, 0.8906250, 0.8945312, 0.8984375, 0.9023438, 0.9062500, 0.9101562, 0.9140625, 0.9179688, 0.9218750, 0.9257812, 0.9296875, 0.9335938, 0.9375000, 0.9414062, 0.9453125, 0.9492188, 0.9531250, 0.9570312, 0.9609375, 0.9648438, 0.9687500, 0.9726562, 0.9765625, 0.9804688, 0.9843750, 0.9882812, 0.9921875, 0.9960938]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 1 :: BLUE ### color_map_luts['idl01'] = \ ( array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 
# ... remaining channel arrays for the preceding IDL colormap (numeric data
# omitted). As throughout this file, each colormap entry is a tuple of four
# 256-entry arrays -- red, green, and blue lookup tables with values
# normalized to [0, 1], plus a constant-1.0 alpha channel.

### IDL colormap 2 :: GRN-RED-BLU-WHT ###
color_map_luts['idl02'] = \
    ( array([...]), array([...]), array([...]), array([...]), )  # R, G, B, alpha

### IDL colormap 3 :: RED TEMPERATURE ###
color_map_luts['idl03'] = \
    ( array([...]), array([...]), array([...]), array([...]), )  # R, G, B, alpha

### IDL colormap 4 :: BLUE ###
color_map_luts['idl04'] = \
    ( array([...]), array([...]), array([...]), array([...]), )  # R, G, B, alpha

### IDL colormap 5 :: STD GAMMA-II ###
color_map_luts['idl05'] = \
    ( array([...]), array([...]), array([...]), array([...]), )  # R, G, B, alpha

### IDL colormap 6 :: PRISM ###
color_map_luts['idl06'] = \
    ( array([...]), array([...]), array([...]), array([...]), )  # R, G, B, alpha

### IDL colormap 7 :: RED-PURPLE ###
color_map_luts['idl07'] = \
    ( array([...]), array([...]), array([...]), array([...]), )  # R, G, B, alpha

### IDL colormap 8 :: GREEN ###
color_map_luts['idl08'] = \
    ( array([...]), array([...]), array([...]), array([...]), )  # R, G, B, alpha
0.8671875, 0.8750000, 0.8789062, 0.8867188, 0.8945312, 0.8984375, 0.9062500, 0.9140625, 0.9179688, 0.9257812, 0.9296875, 0.9375000, 0.9453125, 0.9492188, 0.9570312, 0.9609375, 0.9687500, 0.9765625, 0.9804688, 0.9882812, 0.9960938]), array([ 0.0000000, 0.0039062, 0.0078125, 0.0117188, 0.0156250, 0.0195312, 0.0234375, 0.0273438, 0.0312500, 0.0351562, 0.0390625, 0.0429688, 0.0468750, 0.0507812, 0.0546875, 0.0585938, 0.0625000, 0.0664062, 0.0703125, 0.0742188, 0.0781250, 0.0820312, 0.0859375, 0.0898438, 0.0937500, 0.0976562, 0.1015625, 0.1054688, 0.1093750, 0.1132812, 0.1171875, 0.1210938, 0.1250000, 0.1289062, 0.1328125, 0.1367188, 0.1406250, 0.1445312, 0.1484375, 0.1523438, 0.1562500, 0.1601562, 0.1640625, 0.1679688, 0.1718750, 0.1757812, 0.1796875, 0.1835938, 0.1875000, 0.1914062, 0.1953125, 0.1992188, 0.2031250, 0.2070312, 0.2109375, 0.2148438, 0.2187500, 0.2226562, 0.2265625, 0.2304688, 0.2343750, 0.2382812, 0.2421875, 0.2460938, 0.2500000, 0.2539062, 0.2578125, 0.2617188, 0.2656250, 0.2695312, 0.2734375, 0.2773438, 0.2812500, 0.2851562, 0.2890625, 0.2929688, 0.2968750, 0.3007812, 0.3046875, 0.3085938, 0.3125000, 0.3164062, 0.3203125, 0.3242188, 0.3281250, 0.3320312, 0.3359375, 0.3398438, 0.3437500, 0.3476562, 0.3515625, 0.3554688, 0.3593750, 0.3632812, 0.3671875, 0.3710938, 0.3750000, 0.3789062, 0.3828125, 0.3867188, 0.3906250, 0.3945312, 0.3984375, 0.4023438, 0.4062500, 0.4101562, 0.4140625, 0.4179688, 0.4218750, 0.4257812, 0.4296875, 0.4335938, 0.4375000, 0.4414062, 0.4453125, 0.4492188, 0.4531250, 0.4570312, 0.4609375, 0.4648438, 0.4687500, 0.4726562, 0.4765625, 0.4804688, 0.4843750, 0.4882812, 0.4921875, 0.4960938, 0.5000000, 0.5039062, 0.5078125, 0.5117188, 0.5156250, 0.5195312, 0.5234375, 0.5273438, 0.5312500, 0.5351562, 0.5390625, 0.5429688, 0.5468750, 0.5507812, 0.5546875, 0.5585938, 0.5625000, 0.5664062, 0.5703125, 0.5742188, 0.5781250, 0.5820312, 0.5859375, 0.5898438, 0.5937500, 0.5976562, 0.6015625, 0.6054688, 0.6093750, 0.6132812, 0.6171875, 0.6210938, 0.6250000, 0.6289062, 0.6328125, 0.6367188, 0.6406250, 0.6445312, 0.6484375, 0.6523438, 0.6562500, 0.6601562, 0.6640625, 0.6679688, 0.6718750, 0.6757812, 0.6796875, 0.6835938, 0.6875000, 0.6914062, 0.6953125, 0.6992188, 0.7031250, 0.7070312, 0.7109375, 0.7148438, 0.7187500, 0.7226562, 0.7265625, 0.7304688, 0.7343750, 0.7382812, 0.7421875, 0.7460938, 0.7500000, 0.7539062, 0.7578125, 0.7617188, 0.7656250, 0.7695312, 0.7734375, 0.7773438, 0.7812500, 0.7851562, 0.7890625, 0.7929688, 0.7968750, 0.8007812, 0.8046875, 0.8085938, 0.8125000, 0.8164062, 0.8203125, 0.8242188, 0.8281250, 0.8320312, 0.8359375, 0.8398438, 0.8437500, 0.8476562, 0.8515625, 0.8554688, 0.8593750, 0.8632812, 0.8671875, 0.8710938, 0.8750000, 0.8789062, 0.8828125, 0.8867188, 0.8906250, 0.8945312, 0.8984375, 0.9023438, 0.9062500, 0.9101562, 0.9140625, 0.9179688, 0.9218750, 0.9257812, 0.9296875, 0.9335938, 0.9375000, 0.9414062, 0.9453125, 0.9492188, 0.9531250, 0.9570312, 0.9609375, 0.9648438, 0.9687500, 0.9726562, 0.9765625, 0.9804688, 0.9843750, 0.9882812, 0.9921875, 0.9960938]), array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 
0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0117188, 0.0234375, 0.0390625, 0.0507812, 0.0664062, 0.0781250, 0.0937500, 0.1054688, 0.1210938, 0.1328125, 0.1445312, 0.1601562, 0.1718750, 0.1875000, 0.1992188, 0.2148438, 0.2265625, 0.2421875, 0.2539062, 0.2656250, 0.2812500, 0.2929688, 0.3085938, 0.3203125, 0.3359375, 0.3476562, 0.3632812, 0.3750000, 0.3867188, 0.4023438, 0.4140625, 0.4296875, 0.4414062, 0.4570312, 0.4687500, 0.4843750, 0.4960938, 0.5078125, 0.5234375, 0.5351562, 0.5507812, 0.5625000, 0.5781250, 0.5898438, 0.6054688, 0.6171875, 0.6289062, 0.6445312, 0.6562500, 0.6718750, 0.6835938, 0.6992188, 0.7109375, 0.7265625, 0.7382812, 0.7500000, 0.7656250, 0.7773438, 0.7929688, 0.8046875, 0.8203125, 0.8320312, 0.8476562, 0.8593750, 0.8710938, 0.8867188, 0.8984375, 0.9140625, 0.9257812, 0.9414062, 0.9531250, 0.9687500, 0.9804688, 0.9960938]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 9 :: GRN ### color_map_luts['idl09'] = \ ( array([ 0.0039062, 0.0039062, 0.0039062, 0.0078125, 0.0078125, 0.0078125, 0.0117188, 0.0117188, 0.0156250, 0.0156250, 0.0156250, 0.0195312, 0.0195312, 0.0195312, 0.0234375, 0.0234375, 0.0273438, 0.0273438, 0.0273438, 0.0312500, 0.0312500, 0.0351562, 0.0351562, 0.0351562, 0.0390625, 0.0390625, 0.0390625, 0.0429688, 0.0429688, 0.0468750, 0.0468750, 0.0468750, 0.0507812, 0.0507812, 0.0507812, 0.0546875, 0.0546875, 0.0585938, 0.0585938, 0.0585938, 0.0625000, 0.0625000, 0.0664062, 0.0664062, 0.0664062, 0.0703125, 0.0703125, 0.0703125, 0.0742188, 0.0742188, 0.0781250, 0.0781250, 0.0781250, 0.0820312, 0.0820312, 0.0820312, 0.0859375, 0.0859375, 0.0898438, 0.0898438, 0.0898438, 0.0937500, 0.0937500, 0.0976562, 0.0976562, 0.0976562, 0.1015625, 0.1015625, 0.1015625, 0.1054688, 0.1054688, 0.1093750, 0.1093750, 0.1093750, 0.1132812, 0.1132812, 0.1132812, 0.1171875, 0.1171875, 0.1210938, 0.1210938, 0.1210938, 0.1250000, 0.1250000, 0.1289062, 0.1289062, 0.1289062, 0.1328125, 0.1328125, 0.1328125, 0.1367188, 0.1367188, 0.1406250, 0.1406250, 0.1406250, 0.1445312, 0.1445312, 0.1445312, 0.1484375, 0.1484375, 0.1523438, 0.1523438, 0.1523438, 0.1562500, 0.1562500, 0.1601562, 0.1601562, 0.1601562, 0.1640625, 0.1640625, 0.1640625, 0.1679688, 0.1679688, 0.1718750, 0.1718750, 0.1718750, 0.1757812, 0.1757812, 0.1757812, 0.1796875, 0.1796875, 0.1835938, 0.1835938, 0.1835938, 0.1875000, 0.1875000, 0.1914062, 0.1914062, 0.1914062, 0.1953125, 0.1953125, 0.1953125, 0.1992188, 0.1992188, 0.2031250, 0.2031250, 0.2031250, 0.2109375, 0.2109375, 0.2148438, 0.2187500, 0.2187500, 0.2226562, 0.2265625, 0.2265625, 0.2304688, 0.2343750, 0.2343750, 0.2382812, 0.2421875, 0.2421875, 0.2460938, 0.2500000, 0.2500000, 0.2578125, 0.2656250, 0.2734375, 0.2773438, 0.2851562, 0.2929688, 0.3007812, 0.3085938, 0.3164062, 0.3242188, 0.3320312, 0.3359375, 0.3437500, 0.3515625, 0.3593750, 0.3671875, 0.3750000, 0.3828125, 0.3867188, 0.3945312, 0.4023438, 0.4101562, 0.4179688, 0.4257812, 0.4335938, 0.4414062, 0.4453125, 0.4531250, 0.4609375, 0.4687500, 0.4765625, 0.4843750, 0.4921875, 0.4960938, 0.5039062, 0.5117188, 0.5195312, 0.5273438, 0.5351562, 0.5429688, 0.5507812, 0.5546875, 0.5625000, 0.5703125, 0.5781250, 0.5859375, 0.5937500, 0.6015625, 0.6093750, 0.6132812, 0.6210938, 0.6289062, 0.6367188, 0.6445312, 0.6523438, 0.6601562, 0.6640625, 0.6718750, 0.6796875, 0.6875000, 0.6953125, 0.7031250, 0.7109375, 0.7187500, 0.7226562, 0.7304688, 0.7382812, 0.7460938, 0.7539062, 0.7617188, 0.7695312, 0.7734375, 0.7812500, 0.7890625, 0.7968750, 0.8046875, 0.8125000, 0.8203125, 0.8281250, 0.8320312, 0.8398438, 0.8476562, 0.8554688, 0.8632812, 0.8710938, 0.8789062, 0.8828125, 0.8906250, 0.8984375, 0.9062500, 0.9140625, 0.9218750, 0.9296875, 0.9375000, 0.9414062, 0.9492188, 0.9570312, 0.9648438, 0.9726562, 0.9804688, 0.9882812, 0.9960938]), array([ 0.0000000, 0.0000000, 0.0039062, 0.0078125, 0.0078125, 0.0117188, 0.0156250, 0.0156250, 0.0195312, 0.0234375, 0.0234375, 0.0273438, 0.0312500, 0.0312500, 0.0351562, 0.0390625, 0.0390625, 0.0429688, 0.0468750, 0.0468750, 0.0507812, 0.0546875, 0.0546875, 0.0585938, 0.0625000, 0.0625000, 0.0664062, 0.0703125, 0.0703125, 0.0742188, 0.0781250, 0.0781250, 0.0820312, 0.0859375, 0.0859375, 0.0898438, 0.0937500, 0.0937500, 0.0976562, 0.1015625, 0.1015625, 0.1054688, 0.1093750, 0.1093750, 0.1132812, 0.1171875, 0.1171875, 0.1210938, 0.1250000, 0.1250000, 0.1289062, 0.1328125, 0.1328125, 
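# NOTE: each ``color_map_luts`` entry in this file is a 4-tuple of
# 256-element arrays which, from their values, appear to be the (red,
# green, blue, alpha) channels of the colormap, normalized to [0, 1];
# the fourth array is 1.0 everywhere, i.e. fully opaque.  A minimal
# sketch of turning one of these LUTs into a matplotlib colormap,
# assuming only numpy and the stock ``matplotlib.colors.ListedColormap``
# API (this comment sits inside an open bracket, so the LUT data that
# continues below is unaffected):
#
#     import numpy as np
#     from matplotlib.colors import ListedColormap
#
#     r, g, b, a = color_map_luts['idl09']       # four length-256 channels
#     rgba = np.column_stack([r, g, b, a])       # -> shape (256, 4)
#     cmap = ListedColormap(rgba, name='idl09')  # usable with imshow, etc.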
0.1367188, 0.1406250, 0.1406250, 0.1445312, 0.1484375, 0.1484375, 0.1523438, 0.1562500, 0.1562500, 0.1601562, 0.1640625, 0.1640625, 0.1679688, 0.1718750, 0.1718750, 0.1757812, 0.1796875, 0.1796875, 0.1835938, 0.1875000, 0.1875000, 0.1914062, 0.1953125, 0.1953125, 0.1992188, 0.2031250, 0.2031250, 0.2070312, 0.2109375, 0.2109375, 0.2148438, 0.2187500, 0.2187500, 0.2226562, 0.2265625, 0.2265625, 0.2304688, 0.2343750, 0.2343750, 0.2382812, 0.2421875, 0.2421875, 0.2460938, 0.2500000, 0.2500000, 0.2539062, 0.2578125, 0.2578125, 0.2617188, 0.2656250, 0.2656250, 0.2695312, 0.2734375, 0.2734375, 0.2773438, 0.2812500, 0.2812500, 0.2851562, 0.2890625, 0.2890625, 0.2929688, 0.2968750, 0.2968750, 0.3007812, 0.3046875, 0.3046875, 0.3085938, 0.3125000, 0.3125000, 0.3164062, 0.3203125, 0.3203125, 0.3242188, 0.3281250, 0.3281250, 0.3320312, 0.3359375, 0.3398438, 0.3437500, 0.3515625, 0.3554688, 0.3593750, 0.3671875, 0.3710938, 0.3750000, 0.3828125, 0.3867188, 0.3906250, 0.3984375, 0.4023438, 0.4062500, 0.4140625, 0.4179688, 0.4218750, 0.4296875, 0.4335938, 0.4375000, 0.4453125, 0.4492188, 0.4531250, 0.4609375, 0.4648438, 0.4687500, 0.4765625, 0.4804688, 0.4843750, 0.4921875, 0.4960938, 0.5000000, 0.5078125, 0.5117188, 0.5156250, 0.5234375, 0.5273438, 0.5312500, 0.5390625, 0.5429688, 0.5468750, 0.5546875, 0.5585938, 0.5664062, 0.5703125, 0.5742188, 0.5820312, 0.5859375, 0.5898438, 0.5976562, 0.6015625, 0.6054688, 0.6132812, 0.6171875, 0.6210938, 0.6289062, 0.6328125, 0.6367188, 0.6445312, 0.6484375, 0.6523438, 0.6601562, 0.6640625, 0.6679688, 0.6757812, 0.6796875, 0.6835938, 0.6914062, 0.6953125, 0.6992188, 0.7070312, 0.7109375, 0.7148438, 0.7226562, 0.7265625, 0.7304688, 0.7382812, 0.7421875, 0.7460938, 0.7539062, 0.7578125, 0.7617188, 0.7695312, 0.7734375, 0.7812500, 0.7851562, 0.7890625, 0.7968750, 0.8007812, 0.8046875, 0.8125000, 0.8164062, 0.8203125, 0.8281250, 0.8320312, 0.8359375, 0.8437500, 0.8476562, 0.8515625, 0.8593750, 0.8632812, 0.8671875, 0.8750000, 0.8789062, 0.8828125, 0.8906250, 0.8945312, 0.8984375, 0.9062500, 0.9101562, 0.9140625, 0.9218750, 0.9257812, 0.9296875, 0.9375000, 0.9414062, 0.9453125, 0.9531250, 0.9570312, 0.9609375, 0.9687500, 0.9726562, 0.9765625, 0.9843750, 0.9882812, 0.9960938]), array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0117188, 0.0117188, 0.0117188, 0.0117188, 0.0156250, 0.0156250, 0.0156250, 0.0195312, 0.0195312, 0.0195312, 0.0195312, 0.0234375, 0.0234375, 0.0234375, 0.0234375, 0.0273438, 0.0273438, 0.0273438, 0.0273438, 0.0312500, 0.0312500, 0.0312500, 0.0312500, 0.0351562, 0.0351562, 0.0351562, 0.0390625, 0.0390625, 0.0390625, 0.0390625, 0.0429688, 0.0429688, 0.0429688, 0.0429688, 0.0468750, 0.0468750, 0.0468750, 0.0468750, 0.0507812, 0.0507812, 0.0507812, 0.0507812, 0.0546875, 0.0546875, 0.0546875, 0.0585938, 0.0585938, 0.0585938, 0.0585938, 0.0625000, 0.0625000, 0.0625000, 0.0625000, 0.0664062, 0.0664062, 0.0664062, 0.0664062, 0.0703125, 0.0703125, 0.0703125, 0.0703125, 0.0742188, 0.0742188, 0.0742188, 0.0781250, 0.0781250, 0.0781250, 0.0781250, 0.0820312, 0.0820312, 0.0820312, 0.0820312, 0.0859375, 0.0859375, 0.0859375, 0.0859375, 0.0898438, 0.0898438, 0.0898438, 0.0898438, 0.0937500, 0.0937500, 0.0937500, 0.0976562, 0.0976562, 0.0976562, 0.0976562, 0.1015625, 0.1015625, 0.1015625, 0.1015625, 0.1054688, 0.1054688, 0.1054688, 0.1054688, 0.1093750, 0.1093750, 0.1093750, 0.1093750, 0.1132812, 0.1132812, 0.1132812, 0.1171875, 0.1171875, 0.1171875, 0.1171875, 0.1210938, 
0.1210938, 0.1210938, 0.1210938, 0.1250000, 0.1250000, 0.1250000, 0.1250000, 0.1289062, 0.1289062, 0.1289062, 0.1289062, 0.1328125, 0.1328125, 0.1328125, 0.1367188, 0.1367188, 0.1367188, 0.1367188, 0.1406250, 0.1406250, 0.1406250, 0.1406250, 0.1445312, 0.1445312, 0.1445312, 0.1445312, 0.1484375, 0.1484375, 0.1484375, 0.1484375, 0.1523438, 0.1523438, 0.1523438, 0.1562500, 0.1562500, 0.1562500, 0.1562500, 0.1601562, 0.1601562, 0.1601562, 0.1601562, 0.1640625, 0.1640625, 0.1640625, 0.1640625, 0.1679688, 0.1679688, 0.1679688, 0.1679688, 0.1718750, 0.1718750, 0.1718750, 0.1757812, 0.1757812, 0.1757812, 0.1757812, 0.1796875, 0.1796875, 0.1796875, 0.1796875, 0.1835938, 0.1835938, 0.1835938, 0.1835938, 0.1875000, 0.1835938, 0.1914062, 0.2031250, 0.2148438, 0.2265625, 0.2382812, 0.2500000, 0.2617188, 0.2734375, 0.2851562, 0.2968750, 0.3085938, 0.3203125, 0.3320312, 0.3437500, 0.3515625, 0.3632812, 0.3750000, 0.3867188, 0.3984375, 0.4101562, 0.4218750, 0.4335938, 0.4453125, 0.4570312, 0.4687500, 0.4804688, 0.4921875, 0.5039062, 0.5117188, 0.5234375, 0.5351562, 0.5468750, 0.5585938, 0.5703125, 0.5820312, 0.5937500, 0.6054688, 0.6171875, 0.6289062, 0.6406250, 0.6523438, 0.6640625, 0.6718750, 0.6835938, 0.6953125, 0.7070312, 0.7187500, 0.7304688, 0.7421875, 0.7539062, 0.7656250, 0.7773438, 0.7890625, 0.8007812, 0.8125000, 0.8242188, 0.8320312, 0.8437500, 0.8554688, 0.8671875, 0.8789062, 0.8906250, 0.9023438, 0.9140625, 0.9257812, 0.9375000, 0.9492188, 0.9609375, 0.9726562, 0.9843750, 0.9960938]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 10 :: GREEN-PINK ### color_map_luts['idl10'] = \ ( array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 
0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0195312, 0.0390625, 0.0585938, 0.0781250, 0.0976562, 0.1171875, 0.1367188, 0.1562500, 0.1757812, 0.1953125, 0.2148438, 0.2343750, 0.2539062, 0.2734375, 0.2929688, 0.3125000, 0.3281250, 0.3437500, 0.3593750, 0.3750000, 0.3945312, 0.4140625, 0.4335938, 0.4531250, 0.4726562, 0.4921875, 0.5117188, 0.5312500, 0.5507812, 0.5703125, 0.5898438, 0.6093750, 0.6210938, 0.6328125, 0.6445312, 0.6562500, 0.6679688, 0.6796875, 0.6914062, 0.7031250, 0.7148438, 0.7265625, 0.7382812, 0.7500000, 0.7617188, 0.7734375, 0.7851562, 0.7968750, 0.8085938, 0.8203125, 0.8320312, 0.8437500, 0.8554688, 0.8671875, 0.8789062, 0.8906250, 0.9023438, 0.9140625, 0.9257812, 0.9375000, 0.9492188, 0.9609375, 0.9726562, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9843750, 0.9882812, 0.9921875, 0.9960938]), array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0546875, 0.1093750, 0.1679688, 0.2226562, 0.2812500, 0.3164062, 0.3515625, 0.3867188, 0.4218750, 0.4570312, 0.4921875, 0.5273438, 0.5625000, 0.5976562, 0.6328125, 0.6679688, 0.7031250, 0.6992188, 0.6953125, 0.6914062, 0.6875000, 0.6835938, 0.6796875, 0.6757812, 0.6718750, 0.6679688, 0.6640625, 0.6601562, 0.6562500, 0.6523438, 0.6484375, 0.6445312, 0.6406250, 0.6367188, 0.6328125, 0.6289062, 0.6250000, 0.6210938, 0.6171875, 0.6132812, 0.6093750, 0.6015625, 0.5937500, 0.5859375, 0.5781250, 0.5742188, 0.5703125, 0.5664062, 0.5625000, 0.5585938, 0.5546875, 0.5507812, 0.5468750, 0.5429688, 0.5390625, 0.5351562, 0.5312500, 0.5273438, 0.5234375, 0.5195312, 0.5156250, 0.5117188, 0.5078125, 0.5039062, 0.5000000, 0.4921875, 0.4843750, 0.4765625, 0.4687500, 0.4648438, 0.4609375, 0.4570312, 0.4531250, 0.4492188, 0.4453125, 0.4414062, 0.4375000, 0.4335938, 0.4296875, 0.4257812, 0.4218750, 0.4179688, 0.4140625, 0.4101562, 0.4062500, 0.4023438, 0.3984375, 0.3945312, 0.3906250, 0.3867188, 0.3828125, 0.3789062, 0.3750000, 0.3671875, 0.3593750, 0.3515625, 0.3437500, 0.3398438, 0.3359375, 0.3320312, 0.3281250, 0.3242188, 0.3203125, 0.3164062, 0.3125000, 0.3085938, 0.3046875, 0.3007812, 0.2968750, 0.2929688, 0.2890625, 0.2851562, 0.2812500, 0.2773438, 0.2734375, 0.2695312, 0.2656250, 0.2578125, 0.2500000, 0.2421875, 0.2343750, 0.2304688, 0.2265625, 
0.2226562, 0.2187500, 0.2109375, 0.2031250, 0.1953125, 0.1875000, 0.1835938, 0.1796875, 0.1757812, 0.1718750, 0.1640625, 0.1562500, 0.1484375, 0.1406250, 0.1328125, 0.1250000, 0.1171875, 0.1093750, 0.1015625, 0.0937500, 0.0859375, 0.0781250, 0.0742188, 0.0703125, 0.0664062, 0.0625000, 0.0546875, 0.0468750, 0.0390625, 0.0312500, 0.0234375, 0.0156250, 0.0078125, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0156250, 0.0312500, 0.0468750, 0.0625000, 0.0781250, 0.0937500, 0.1093750, 0.1250000, 0.1406250, 0.1562500, 0.1718750, 0.1875000, 0.2031250, 0.2187500, 0.2343750, 0.2500000, 0.2656250, 0.2812500, 0.2968750, 0.3125000, 0.3281250, 0.3437500, 0.3593750, 0.3750000, 0.3906250, 0.4062500, 0.4218750, 0.4375000, 0.4531250, 0.4687500, 0.4843750, 0.5000000, 0.5156250, 0.5312500, 0.5468750, 0.5625000, 0.5742188, 0.5859375, 0.5976562, 0.6093750, 0.6250000, 0.6406250, 0.6562500, 0.6718750, 0.6875000, 0.7031250, 0.7187500, 0.7343750, 0.7500000, 0.7656250, 0.7812500, 0.7968750, 0.8125000, 0.8281250, 0.8437500, 0.8593750, 0.8750000, 0.8906250, 0.9062500, 0.9218750, 0.9375000, 0.9531250, 0.9687500, 0.9843750, 0.9882812, 0.9921875, 0.9960938]), array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0039062, 0.0117188, 0.0156250, 0.0234375, 0.0312500, 0.0390625, 0.0468750, 0.0546875, 0.0585938, 0.0664062, 0.0742188, 0.0820312, 0.0898438, 0.0976562, 0.1054688, 0.1132812, 0.1210938, 0.1289062, 0.1367188, 0.1445312, 0.1484375, 0.1562500, 0.1640625, 0.1718750, 0.1796875, 0.1875000, 0.1953125, 0.2070312, 0.2109375, 0.2187500, 0.2265625, 0.2343750, 0.2421875, 0.2500000, 0.2578125, 0.2656250, 0.2695312, 0.2773438, 0.2851562, 0.2929688, 0.3007812, 0.3085938, 0.3164062, 0.3242188, 0.3281250, 0.3359375, 0.3437500, 0.3515625, 0.3593750, 0.3671875, 0.3750000, 0.3867188, 0.3906250, 0.3984375, 0.4062500, 0.4140625, 0.4218750, 0.4296875, 0.4375000, 0.4453125, 0.4531250, 0.4609375, 0.4687500, 0.4765625, 0.4804688, 0.4882812, 0.4960938, 0.5039062, 0.5117188, 0.5195312, 0.5273438, 0.5351562, 0.5390625, 0.5468750, 0.5546875, 0.5625000, 0.5703125, 0.5781250, 0.5859375, 0.5976562, 0.6015625, 0.6093750, 0.6171875, 0.6250000, 0.6328125, 0.6406250, 0.6484375, 0.6562500, 0.6640625, 0.6718750, 0.6796875, 0.6875000, 0.6914062, 0.6992188, 0.7070312, 0.7148438, 0.7226562, 0.7304688, 0.7382812, 0.7460938, 0.7539062, 0.7617188, 0.7695312, 0.7773438, 0.7773438, 0.7734375, 0.7695312, 0.7656250, 0.7656250, 0.7656250, 0.7656250, 0.7617188, 0.7617188, 0.7578125, 0.7539062, 0.7500000, 0.7500000, 0.7460938, 0.7460938, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7382812, 0.7382812, 0.7343750, 0.7304688, 0.7265625, 0.7265625, 0.7226562, 0.7226562, 0.7187500, 0.7187500, 0.7226562, 0.7265625, 0.7304688, 0.7304688, 0.7304688, 0.7304688, 0.7304688, 0.7304688, 0.7304688, 0.7304688, 0.7343750, 0.7343750, 0.7343750, 0.7343750, 0.7382812, 0.7382812, 0.7382812, 0.7382812, 0.7382812, 0.7382812, 0.7382812, 0.7382812, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7460938, 0.7460938, 0.7460938, 0.7460938, 0.7460938, 0.7500000, 0.7539062, 0.7578125, 0.7617188, 
0.7656250, 0.7695312, 0.7734375, 0.7773438, 0.7812500, 0.7851562, 0.7890625, 0.7929688, 0.7929688, 0.7968750, 0.8007812, 0.8046875, 0.8085938, 0.8125000, 0.8164062, 0.8203125, 0.8242188, 0.8281250, 0.8320312, 0.8359375, 0.8359375, 0.8398438, 0.8437500, 0.8476562, 0.8515625, 0.8554688, 0.8593750, 0.8632812, 0.8671875, 0.8710938, 0.8750000, 0.8789062, 0.8828125, 0.8867188, 0.8906250, 0.8945312, 0.8984375, 0.9023438, 0.9062500, 0.9101562, 0.9140625, 0.9179688, 0.9218750, 0.9257812, 0.9257812, 0.9296875, 0.9335938, 0.9375000, 0.9414062, 0.9453125, 0.9492188, 0.9531250, 0.9570312, 0.9609375, 0.9648438, 0.9687500, 0.9726562, 0.9765625, 0.9804688, 0.9843750, 0.9882812, 0.9921875, 0.9960938]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 11 :: BLUE-RED ### color_map_luts['idl11'] = \ ( array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0156250, 0.0312500, 0.0468750, 0.0625000, 0.0781250, 0.0937500, 
0.1093750, 0.1250000, 0.1406250, 0.1562500, 0.1718750, 0.1875000, 0.2031250, 0.2187500, 0.2343750, 0.2500000, 0.2656250, 0.2812500, 0.2968750, 0.3125000, 0.3320312, 0.3476562, 0.3632812, 0.3789062, 0.3945312, 0.4101562, 0.4257812, 0.4414062, 0.4570312, 0.4726562, 0.4882812, 0.5039062, 0.5195312, 0.5351562, 0.5507812, 0.5664062, 0.5820312, 0.5976562, 0.6132812, 0.6289062, 0.6445312, 0.6640625, 0.6796875, 0.6953125, 0.7109375, 0.7265625, 0.7421875, 0.7578125, 0.7734375, 0.7890625, 0.8046875, 0.8203125, 0.8359375, 0.8515625, 0.8671875, 0.8828125, 0.8984375, 0.9140625, 0.9296875, 0.9453125, 0.9609375, 0.9765625, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938]), array([ 0.0000000, 0.0039062, 0.0078125, 0.0117188, 0.0156250, 0.0312500, 0.0468750, 0.0625000, 0.0820312, 0.0976562, 0.1132812, 0.1289062, 0.1484375, 0.1640625, 0.1796875, 0.1953125, 0.2148438, 0.2304688, 0.2460938, 0.2617188, 0.2812500, 0.2968750, 0.3125000, 0.3281250, 0.3476562, 0.3632812, 0.3789062, 0.3945312, 0.4140625, 0.4296875, 0.4453125, 0.4609375, 0.4804688, 0.4960938, 0.5117188, 0.5273438, 0.5468750, 0.5625000, 0.5781250, 0.5937500, 0.6132812, 0.6289062, 0.6445312, 0.6601562, 0.6796875, 0.6953125, 0.7109375, 0.7265625, 0.7460938, 0.7617188, 0.7773438, 0.7929688, 0.8125000, 0.8281250, 0.8437500, 0.8593750, 0.8789062, 0.8945312, 0.9101562, 0.9257812, 0.9453125, 0.9609375, 0.9765625, 0.9960938, 0.9960938, 0.9804688, 0.9648438, 0.9492188, 0.9335938, 0.9179688, 0.9023438, 0.8867188, 0.8710938, 0.8554688, 0.8398438, 0.8242188, 0.8085938, 0.7929688, 0.7773438, 0.7617188, 0.7460938, 0.7304688, 0.7148438, 0.6992188, 0.6835938, 0.6640625, 0.6484375, 0.6328125, 0.6171875, 0.6015625, 0.5859375, 0.5703125, 0.5546875, 0.5390625, 0.5234375, 0.5078125, 0.4921875, 0.4765625, 0.4609375, 0.4453125, 0.4296875, 0.4140625, 0.3984375, 0.3828125, 0.3671875, 0.3515625, 0.3320312, 0.3164062, 0.3007812, 0.2851562, 0.2695312, 0.2539062, 0.2382812, 0.2226562, 0.2070312, 0.1914062, 0.1757812, 0.1601562, 0.1445312, 0.1289062, 0.1132812, 0.0976562, 0.0820312, 0.0664062, 0.0507812, 0.0351562, 0.0195312, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 
0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000]), array([ 0.0000000, 0.0039062, 0.0078125, 0.0117188, 0.0156250, 0.0312500, 0.0468750, 0.0625000, 0.0820312, 0.0976562, 0.1132812, 0.1289062, 0.1484375, 0.1640625, 0.1796875, 0.1953125, 0.2148438, 0.2304688, 0.2460938, 0.2617188, 0.2812500, 0.2968750, 0.3125000, 0.3281250, 0.3476562, 0.3632812, 0.3789062, 0.3945312, 0.4140625, 0.4296875, 0.4453125, 0.4609375, 0.4804688, 0.4960938, 0.5117188, 0.5273438, 0.5468750, 0.5625000, 0.5781250, 0.5937500, 0.6132812, 0.6289062, 0.6445312, 0.6601562, 0.6796875, 0.6953125, 0.7109375, 0.7265625, 0.7460938, 0.7617188, 0.7773438, 0.7929688, 0.8125000, 0.8281250, 0.8437500, 0.8593750, 0.8789062, 0.8945312, 0.9101562, 0.9257812, 0.9453125, 0.9609375, 0.9765625, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9804688, 0.9648438, 0.9492188, 0.9335938, 0.9179688, 0.9023438, 0.8867188, 0.8710938, 0.8515625, 0.8359375, 0.8203125, 0.8046875, 0.7890625, 0.7734375, 0.7578125, 0.7421875, 0.7265625, 0.7070312, 0.6914062, 0.6757812, 0.6601562, 0.6445312, 0.6289062, 0.6132812, 0.5976562, 0.5820312, 0.5625000, 0.5468750, 0.5312500, 0.5156250, 0.5000000, 0.4843750, 0.4687500, 0.4531250, 0.4375000, 0.4179688, 0.4023438, 0.3867188, 0.3710938, 0.3554688, 0.3398438, 0.3242188, 0.3085938, 0.2929688, 0.2734375, 0.2578125, 0.2421875, 0.2265625, 0.2109375, 0.1953125, 0.1796875, 0.1640625, 0.1484375, 0.1289062, 0.1132812, 0.0976562, 0.0820312, 0.0664062, 0.0507812, 0.0351562, 0.0195312, 0.0000000, 0.0000000]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 12 :: 16 LEVEL ### color_map_luts['idl12'] = \ ( array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8593750, 
0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8632812, 0.8632812, 0.8632812, 0.8632812, 0.8632812, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8710938, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938]), array([ 0.0000000, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938]), array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 
0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.3281250, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.7031250, 0.7031250, 0.7031250, 0.7031250, 0.7031250, 0.7031250, 0.7031250, 0.7031250, 0.7031250, 0.7031250, 0.7031250, 0.7031250, 0.7031250, 0.7031250, 0.7031250, 0.7031250, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.2500000, 0.2500000, 0.2500000, 0.2500000, 0.2500000, 0.2500000, 0.2500000, 0.2500000, 0.2500000, 0.2500000, 0.2500000, 0.2500000, 0.2500000, 0.2500000, 0.2500000, 0.2500000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.7421875, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.8593750, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 13 :: RAINBOW ### color_map_luts['idl13'] = \ ( array([ 0.0000000, 0.0156250, 0.0351562, 0.0507812, 0.0703125, 0.0859375, 0.1054688, 0.1210938, 0.1406250, 0.1562500, 0.1757812, 0.1953125, 0.2109375, 0.2265625, 0.2382812, 0.2500000, 0.2656250, 0.2695312, 0.2812500, 0.2890625, 0.3007812, 0.3085938, 0.3125000, 0.3203125, 0.3242188, 0.3320312, 0.3281250, 0.3359375, 0.3398438, 0.3437500, 0.3359375, 0.3398438, 0.3398438, 0.3398438, 0.3320312, 0.3281250, 0.3281250, 0.3281250, 0.3242188, 0.3085938, 0.3046875, 0.3007812, 0.2968750, 0.2773438, 0.2734375, 0.2656250, 0.2578125, 0.2343750, 0.2265625, 0.2148438, 0.2070312, 0.1796875, 0.1679688, 0.1562500, 0.1406250, 0.1289062, 0.0976562, 0.0820312, 0.0625000, 0.0468750, 0.0156250, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0156250, 0.0312500, 0.0468750, 0.0820312, 0.0976562, 0.1132812, 0.1289062, 0.1640625, 0.1796875, 0.1992188, 0.2148438, 0.2460938, 0.2617188, 0.2812500, 0.2968750, 0.3125000, 0.3476562, 0.3632812, 0.3789062, 0.3945312, 0.4296875, 0.4453125, 0.4648438, 0.4804688, 0.5117188, 0.5273438, 0.5468750, 0.5625000, 0.5976562, 0.6132812, 0.6289062, 0.6445312, 0.6601562, 0.6953125, 0.7109375, 0.7304688, 0.7460938, 0.7773438, 0.7929688, 0.8125000, 0.8281250, 0.8632812, 0.8789062, 0.8945312, 0.9101562, 0.9453125, 0.9609375, 0.9765625, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938]), array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 
0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0156250, 0.0312500, 0.0625000, 0.0820312, 0.0976562, 0.1132812, 0.1484375, 0.1640625, 0.1796875, 0.1992188, 0.2148438, 0.2460938, 0.2617188, 0.2812500, 0.2968750, 0.3281250, 0.3476562, 0.3632812, 0.3789062, 0.4140625, 0.4296875, 0.4453125, 0.4648438, 0.4960938, 0.5117188, 0.5273438, 0.5468750, 0.5625000, 0.5937500, 0.6132812, 0.6289062, 0.6445312, 0.6796875, 0.6953125, 0.7109375, 0.7304688, 0.7617188, 0.7773438, 0.7929688, 0.8125000, 0.8437500, 0.8593750, 0.8789062, 0.8945312, 0.9101562, 0.9453125, 0.9609375, 0.9765625, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9765625, 0.9453125, 0.9296875, 0.9101562, 0.8945312, 0.8632812, 0.8437500, 0.8281250, 0.8125000, 0.7773438, 0.7617188, 0.7460938, 0.7304688, 0.6953125, 0.6796875, 0.6640625, 0.6445312, 0.6289062, 0.5976562, 0.5781250, 0.5625000, 0.5468750, 0.5117188, 0.4960938, 0.4804688, 0.4648438, 0.4296875, 0.4140625, 0.3984375, 0.3789062, 0.3476562, 0.3320312, 0.3125000, 0.2968750, 0.2812500, 0.2460938, 0.2304688, 0.2148438, 0.1992188, 0.1640625, 0.1484375, 0.1328125, 0.1132812, 0.0820312, 0.0664062, 0.0468750, 0.0312500, 0.0000000]), array([ 0.0000000, 0.0117188, 0.0273438, 0.0390625, 0.0546875, 0.0742188, 0.0898438, 0.1093750, 0.1250000, 0.1484375, 0.1679688, 0.1875000, 0.2070312, 0.2304688, 0.2460938, 0.2656250, 0.2812500, 0.3007812, 0.3164062, 0.3359375, 0.3554688, 0.3710938, 0.3906250, 0.4062500, 0.4257812, 0.4414062, 0.4609375, 0.4765625, 0.4960938, 0.5156250, 0.5312500, 0.5507812, 0.5664062, 0.5859375, 0.6015625, 0.6210938, 0.6367188, 0.6562500, 0.6757812, 0.6914062, 0.7109375, 0.7265625, 0.7460938, 0.7617188, 0.7812500, 0.7968750, 0.8164062, 0.8359375, 0.8515625, 0.8710938, 0.8867188, 0.9062500, 0.9218750, 0.9414062, 0.9570312, 0.9765625, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 
0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9609375, 0.9453125, 0.9296875, 0.9101562, 0.8789062, 0.8593750, 0.8437500, 0.8281250, 0.7929688, 0.7773438, 0.7617188, 0.7460938, 0.7304688, 0.6953125, 0.6796875, 0.6640625, 0.6445312, 0.6132812, 0.5937500, 0.5781250, 0.5625000, 0.5273438, 0.5117188, 0.4960938, 0.4804688, 0.4453125, 0.4296875, 0.4140625, 0.3984375, 0.3789062, 0.3476562, 0.3281250, 0.3125000, 0.2968750, 0.2617188, 0.2460938, 0.2304688, 0.2148438, 0.1796875, 0.1640625, 0.1484375, 0.1328125, 0.0976562, 0.0820312, 0.0625000, 0.0468750, 0.0312500, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 14 :: STEPS ### color_map_luts['idl14'] = \ ( array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 
0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0273438, 0.0585938, 0.0898438, 0.1210938, 0.1523438, 0.1835938, 0.2148438, 0.2460938, 0.2734375, 0.3046875, 0.3359375, 0.3671875, 0.3984375, 0.4296875, 0.4609375, 0.4921875, 0.5234375, 0.5546875, 0.5898438, 0.6210938, 0.6562500, 0.6875000, 0.7187500, 0.7539062, 0.7851562, 0.8203125, 0.8515625, 0.8828125, 0.9179688, 0.9492188, 0.9843750, 0.0000000, 0.0039062, 0.0078125, 0.0117188, 0.0156250, 0.0195312, 0.0234375, 0.0273438, 0.0312500, 0.0351562, 0.0390625, 0.0429688, 0.0468750, 0.0546875, 0.0625000, 0.0703125, 0.0781250, 0.0898438, 0.0976562, 0.1054688, 0.1132812, 0.1250000, 0.1328125, 0.1406250, 0.1484375, 0.1601562, 0.1718750, 0.1835938, 0.1953125, 0.2070312, 0.2187500, 0.2304688, 0.2460938, 0.2578125, 0.2695312, 0.2812500, 0.2929688, 0.3046875, 0.3203125, 0.3320312, 0.3476562, 0.3632812, 0.3789062, 0.3945312, 0.4101562, 0.4218750, 0.4375000, 0.4531250, 0.4687500, 0.4843750, 0.5000000, 0.5117188, 0.5273438, 0.5429688, 0.5585938, 0.5742188, 0.5898438, 0.6054688, 0.6210938, 0.6367188, 0.6523438, 0.6679688, 0.6835938, 0.6953125, 0.7070312, 0.7226562, 0.7343750, 0.7500000, 0.7617188, 0.7734375, 0.7890625, 0.8007812, 0.8164062, 0.8281250, 0.8437500, 0.8515625, 0.8593750, 0.8710938, 0.8789062, 0.8867188, 0.8984375, 0.9062500, 0.9140625, 0.9257812, 0.9335938, 0.9414062, 0.9531250, 0.9531250, 0.9570312, 0.9609375, 0.9648438, 0.9648438, 0.9687500, 0.9726562, 0.9765625, 0.9765625, 0.9804688, 0.9843750, 0.9882812, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938]), array([ 0.0000000, 0.1640625, 0.3320312, 0.4960938, 0.6640625, 0.8281250, 0.9960938, 0.9609375, 0.9218750, 0.8828125, 0.8437500, 0.8046875, 0.7695312, 0.7304688, 0.6914062, 0.6523438, 0.6132812, 0.5781250, 0.5390625, 0.5000000, 0.4609375, 0.4218750, 0.3867188, 0.3476562, 0.3085938, 0.2695312, 0.2304688, 0.1953125, 0.1562500, 0.1171875, 0.0781250, 0.0390625, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 
0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0039062, 0.0039062, 0.0117188, 0.0195312, 0.0312500, 0.0390625, 0.0468750, 0.0585938, 0.0664062, 0.0742188, 0.0859375, 0.0937500, 0.1015625, 0.1132812, 0.1210938, 0.1289062, 0.1406250, 0.1406250, 0.1484375, 0.1562500, 0.1640625, 0.1757812, 0.1875000, 0.1992188, 0.2109375, 0.2265625, 0.2382812, 0.2500000, 0.2617188, 0.2734375, 0.2851562, 0.3007812, 0.3203125, 0.3398438, 0.3593750, 0.3750000, 0.3906250, 0.4062500, 0.4218750, 0.4414062, 0.4648438, 0.4882812, 0.5117188, 0.5390625, 0.5625000, 0.5898438, 0.6132812, 0.6406250, 0.6679688, 0.6953125, 0.7226562, 0.7539062, 0.7812500, 0.8125000, 0.8398438, 0.8710938, 0.8945312, 0.9179688, 0.9453125, 0.9687500, 0.9960938]), array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0312500, 0.0625000, 0.0937500, 0.1250000, 0.1601562, 0.1914062, 0.2226562, 0.2539062, 0.2890625, 0.3203125, 0.3515625, 0.3828125, 0.4140625, 0.4492188, 0.4804688, 0.5117188, 0.5429688, 0.5781250, 0.6093750, 0.6406250, 0.6718750, 0.7031250, 0.7382812, 0.7695312, 0.8007812, 0.8320312, 0.8671875, 0.8984375, 0.9296875, 0.9609375, 0.9960938, 0.0000000, 0.0195312, 0.0390625, 0.0585938, 0.0820312, 0.1015625, 0.1210938, 0.1445312, 0.1640625, 0.1835938, 0.2070312, 0.2265625, 0.2460938, 0.2695312, 0.2890625, 0.3085938, 0.3320312, 0.3476562, 0.3671875, 0.3828125, 0.4023438, 0.4218750, 0.4375000, 0.4570312, 0.4726562, 0.4921875, 0.5117188, 0.5273438, 0.5468750, 0.5625000, 0.5820312, 0.6015625, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 
0.0039062, 0.0039062, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0078125, 0.0117188, 0.0156250, 0.0195312, 0.0273438, 0.0351562, 0.0468750, 0.0546875, 0.0664062, 0.0781250, 0.0898438, 0.1054688, 0.1171875, 0.1328125, 0.1523438, 0.1718750, 0.1914062, 0.2148438, 0.2343750, 0.2539062, 0.2773438, 0.2968750, 0.3203125, 0.3476562, 0.3789062, 0.4062500, 0.4375000, 0.4687500, 0.5000000, 0.5312500, 0.5664062, 0.5976562, 0.6328125, 0.6679688, 0.7031250, 0.7382812, 0.7734375, 0.8085938, 0.8476562, 0.8750000, 0.9062500, 0.9335938, 0.9648438, 0.9960938]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 15 :: STERN SPECIAL ### color_map_luts['idl15'] = \ ( array([ 0.0000000, 0.0703125, 0.1406250, 0.2109375, 0.2812500, 0.3515625, 0.4218750, 0.4960938, 0.5664062, 0.6367188, 0.7070312, 0.7773438, 0.8476562, 0.9179688, 0.9921875, 0.9726562, 0.9531250, 0.9335938, 0.9140625, 0.8945312, 0.8710938, 0.8515625, 0.8320312, 0.8125000, 0.7929688, 0.7695312, 0.7500000, 0.7304688, 0.7109375, 0.6914062, 0.6718750, 0.6484375, 0.6289062, 0.6093750, 0.5898438, 0.5703125, 0.5468750, 0.5273438, 0.5078125, 0.4882812, 0.4687500, 0.4492188, 0.4257812, 0.4062500, 0.3867188, 0.3671875, 0.3476562, 0.3242188, 0.3046875, 0.2851562, 0.2656250, 0.2460938, 0.2265625, 0.2031250, 0.1835938, 0.1640625, 0.1445312, 0.1250000, 0.1015625, 0.0820312, 0.0625000, 0.0429688, 0.0234375, 0.0000000, 0.2500000, 0.2539062, 0.2578125, 0.2617188, 0.2656250, 0.2695312, 0.2734375, 0.2773438, 0.2812500, 0.2851562, 0.2890625, 0.2929688, 0.2968750, 0.3007812, 0.3046875, 0.3085938, 0.3125000, 0.3164062, 0.3203125, 0.3242188, 0.3281250, 0.3320312, 0.3359375, 0.3398438, 0.3437500, 0.3476562, 0.3515625, 0.3554688, 0.3593750, 0.3632812, 0.3671875, 0.3710938, 0.3750000, 0.3789062, 0.3828125, 0.3867188, 0.3906250, 0.3945312, 0.3984375, 0.4023438, 0.4062500, 0.4101562, 0.4140625, 0.4179688, 0.4218750, 0.4257812, 
0.4296875, 0.4335938, 0.4375000, 0.4414062, 0.4453125, 0.4492188, 0.4531250, 0.4570312, 0.4609375, 0.4648438, 0.4687500, 0.4726562, 0.4765625, 0.4804688, 0.4843750, 0.4882812, 0.4921875, 0.4960938, 0.5000000, 0.5039062, 0.5078125, 0.5117188, 0.5156250, 0.5195312, 0.5234375, 0.5273438, 0.5312500, 0.5351562, 0.5390625, 0.5429688, 0.5468750, 0.5507812, 0.5546875, 0.5585938, 0.5625000, 0.5664062, 0.5703125, 0.5742188, 0.5781250, 0.5820312, 0.5859375, 0.5898438, 0.5937500, 0.5976562, 0.6015625, 0.6054688, 0.6093750, 0.6132812, 0.6171875, 0.6210938, 0.6250000, 0.6289062, 0.6328125, 0.6367188, 0.6406250, 0.6445312, 0.6484375, 0.6523438, 0.6562500, 0.6601562, 0.6640625, 0.6679688, 0.6718750, 0.6757812, 0.6796875, 0.6835938, 0.6875000, 0.6914062, 0.6953125, 0.6992188, 0.7031250, 0.7070312, 0.7109375, 0.7148438, 0.7187500, 0.7226562, 0.7265625, 0.7304688, 0.7343750, 0.7382812, 0.7421875, 0.7460938, 0.7500000, 0.7539062, 0.7578125, 0.7617188, 0.7656250, 0.7695312, 0.7734375, 0.7773438, 0.7812500, 0.7851562, 0.7890625, 0.7929688, 0.7968750, 0.8007812, 0.8046875, 0.8085938, 0.8125000, 0.8164062, 0.8203125, 0.8242188, 0.8281250, 0.8320312, 0.8359375, 0.8398438, 0.8437500, 0.8476562, 0.8515625, 0.8554688, 0.8593750, 0.8632812, 0.8671875, 0.8710938, 0.8750000, 0.8789062, 0.8828125, 0.8867188, 0.8906250, 0.8945312, 0.8984375, 0.9023438, 0.9062500, 0.9101562, 0.9140625, 0.9179688, 0.9218750, 0.9257812, 0.9296875, 0.9335938, 0.9375000, 0.9414062, 0.9453125, 0.9492188, 0.9531250, 0.9570312, 0.9609375, 0.9648438, 0.9687500, 0.9726562, 0.9765625, 0.9804688, 0.9843750, 0.9882812, 0.9921875, 0.9960938]), array([ 0.0000000, 0.0039062, 0.0078125, 0.0117188, 0.0156250, 0.0195312, 0.0234375, 0.0273438, 0.0312500, 0.0351562, 0.0390625, 0.0429688, 0.0468750, 0.0507812, 0.0546875, 0.0585938, 0.0625000, 0.0664062, 0.0703125, 0.0742188, 0.0781250, 0.0820312, 0.0859375, 0.0898438, 0.0937500, 0.0976562, 0.1015625, 0.1054688, 0.1093750, 0.1132812, 0.1171875, 0.1210938, 0.1250000, 0.1289062, 0.1328125, 0.1367188, 0.1406250, 0.1445312, 0.1484375, 0.1523438, 0.1562500, 0.1601562, 0.1640625, 0.1679688, 0.1718750, 0.1757812, 0.1796875, 0.1835938, 0.1875000, 0.1914062, 0.1953125, 0.1992188, 0.2031250, 0.2070312, 0.2109375, 0.2148438, 0.2187500, 0.2226562, 0.2265625, 0.2304688, 0.2343750, 0.2382812, 0.2421875, 0.2460938, 0.2500000, 0.2539062, 0.2578125, 0.2617188, 0.2656250, 0.2695312, 0.2734375, 0.2773438, 0.2812500, 0.2851562, 0.2890625, 0.2929688, 0.2968750, 0.3007812, 0.3046875, 0.3085938, 0.3125000, 0.3164062, 0.3203125, 0.3242188, 0.3281250, 0.3320312, 0.3359375, 0.3398438, 0.3437500, 0.3476562, 0.3515625, 0.3554688, 0.3593750, 0.3632812, 0.3671875, 0.3710938, 0.3750000, 0.3789062, 0.3828125, 0.3867188, 0.3906250, 0.3945312, 0.3984375, 0.4023438, 0.4062500, 0.4101562, 0.4140625, 0.4179688, 0.4218750, 0.4257812, 0.4296875, 0.4335938, 0.4375000, 0.4414062, 0.4453125, 0.4492188, 0.4531250, 0.4570312, 0.4609375, 0.4648438, 0.4687500, 0.4726562, 0.4765625, 0.4804688, 0.4843750, 0.4882812, 0.4921875, 0.4960938, 0.5000000, 0.5039062, 0.5078125, 0.5117188, 0.5156250, 0.5195312, 0.5234375, 0.5273438, 0.5312500, 0.5351562, 0.5390625, 0.5429688, 0.5468750, 0.5507812, 0.5546875, 0.5585938, 0.5625000, 0.5664062, 0.5703125, 0.5742188, 0.5781250, 0.5820312, 0.5859375, 0.5898438, 0.5937500, 0.5976562, 0.6015625, 0.6054688, 0.6093750, 0.6132812, 0.6171875, 0.6210938, 0.6250000, 0.6289062, 0.6328125, 0.6367188, 0.6406250, 0.6445312, 0.6484375, 0.6523438, 0.6562500, 0.6601562, 0.6640625, 0.6679688, 0.6718750, 0.6757812, 0.6796875, 0.6835938, 
0.6875000, 0.6914062, 0.6953125, 0.6992188, 0.7031250, 0.7070312, 0.7109375, 0.7148438, 0.7187500, 0.7226562, 0.7265625, 0.7304688, 0.7343750, 0.7382812, 0.7421875, 0.7460938, 0.7500000, 0.7539062, 0.7578125, 0.7617188, 0.7656250, 0.7695312, 0.7734375, 0.7773438, 0.7812500, 0.7851562, 0.7890625, 0.7929688, 0.7968750, 0.8007812, 0.8046875, 0.8085938, 0.8125000, 0.8164062, 0.8203125, 0.8242188, 0.8281250, 0.8320312, 0.8359375, 0.8398438, 0.8437500, 0.8476562, 0.8515625, 0.8554688, 0.8593750, 0.8632812, 0.8671875, 0.8710938, 0.8750000, 0.8789062, 0.8828125, 0.8867188, 0.8906250, 0.8945312, 0.8984375, 0.9023438, 0.9062500, 0.9101562, 0.9140625, 0.9179688, 0.9218750, 0.9257812, 0.9296875, 0.9335938, 0.9375000, 0.9414062, 0.9453125, 0.9492188, 0.9531250, 0.9570312, 0.9609375, 0.9648438, 0.9687500, 0.9726562, 0.9765625, 0.9804688, 0.9843750, 0.9882812, 0.9921875, 0.9960938]), array([ 0.0000000, 0.0039062, 0.0117188, 0.0195312, 0.0273438, 0.0351562, 0.0429688, 0.0507812, 0.0585938, 0.0664062, 0.0742188, 0.0820312, 0.0898438, 0.0976562, 0.1054688, 0.1132812, 0.1210938, 0.1289062, 0.1367188, 0.1445312, 0.1523438, 0.1601562, 0.1679688, 0.1757812, 0.1835938, 0.1914062, 0.1992188, 0.2070312, 0.2148438, 0.2226562, 0.2304688, 0.2382812, 0.2460938, 0.2539062, 0.2617188, 0.2695312, 0.2773438, 0.2851562, 0.2929688, 0.3007812, 0.3085938, 0.3164062, 0.3242188, 0.3320312, 0.3398438, 0.3476562, 0.3554688, 0.3632812, 0.3710938, 0.3789062, 0.3867188, 0.3945312, 0.4023438, 0.4101562, 0.4179688, 0.4257812, 0.4335938, 0.4414062, 0.4492188, 0.4570312, 0.4648438, 0.4726562, 0.4804688, 0.4882812, 0.4960938, 0.5039062, 0.5117188, 0.5195312, 0.5273438, 0.5351562, 0.5429688, 0.5507812, 0.5585938, 0.5664062, 0.5742188, 0.5820312, 0.5898438, 0.5976562, 0.6054688, 0.6132812, 0.6210938, 0.6289062, 0.6367188, 0.6445312, 0.6523438, 0.6601562, 0.6679688, 0.6757812, 0.6835938, 0.6914062, 0.6992188, 0.7070312, 0.7148438, 0.7226562, 0.7304688, 0.7382812, 0.7460938, 0.7539062, 0.7617188, 0.7695312, 0.7773438, 0.7851562, 0.7929688, 0.8007812, 0.8085938, 0.8164062, 0.8242188, 0.8320312, 0.8398438, 0.8476562, 0.8554688, 0.8632812, 0.8710938, 0.8789062, 0.8867188, 0.8945312, 0.9023438, 0.9101562, 0.9179688, 0.9257812, 0.9335938, 0.9414062, 0.9492188, 0.9570312, 0.9648438, 0.9726562, 0.9804688, 0.9882812, 0.9960938, 0.9804688, 0.9648438, 0.9492188, 0.9296875, 0.9140625, 0.8984375, 0.8828125, 0.8632812, 0.8476562, 0.8320312, 0.8164062, 0.7968750, 0.7812500, 0.7656250, 0.7500000, 0.7304688, 0.7148438, 0.6992188, 0.6835938, 0.6640625, 0.6484375, 0.6328125, 0.6171875, 0.5976562, 0.5820312, 0.5664062, 0.5507812, 0.5312500, 0.5156250, 0.5000000, 0.4843750, 0.4648438, 0.4492188, 0.4335938, 0.4179688, 0.3984375, 0.3828125, 0.3671875, 0.3515625, 0.3320312, 0.3164062, 0.3007812, 0.2851562, 0.2656250, 0.2500000, 0.2343750, 0.2187500, 0.1992188, 0.1835938, 0.1679688, 0.1523438, 0.1328125, 0.1171875, 0.1015625, 0.0859375, 0.0664062, 0.0507812, 0.0351562, 0.0195312, 0.0000000, 0.0117188, 0.0273438, 0.0429688, 0.0585938, 0.0742188, 0.0859375, 0.1015625, 0.1171875, 0.1328125, 0.1484375, 0.1601562, 0.1757812, 0.1914062, 0.2070312, 0.2226562, 0.2343750, 0.2500000, 0.2656250, 0.2812500, 0.2968750, 0.3085938, 0.3242188, 0.3398438, 0.3554688, 0.3710938, 0.3828125, 0.3984375, 0.4140625, 0.4296875, 0.4453125, 0.4570312, 0.4726562, 0.4882812, 0.5039062, 0.5195312, 0.5351562, 0.5468750, 0.5625000, 0.5781250, 0.5937500, 0.6093750, 0.6210938, 0.6367188, 0.6523438, 0.6679688, 0.6835938, 0.6953125, 0.7109375, 0.7265625, 0.7421875, 0.7578125, 0.7695312, 0.7851562, 
0.8007812, 0.8164062, 0.8320312, 0.8437500, 0.8593750, 0.8750000, 0.8906250, 0.9062500, 0.9179688, 0.9335938, 0.9492188, 0.9648438, 0.9804688, 0.9960938]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 16 :: Haze ### color_map_luts['idl16'] = \ ( array([ 0.6523438, 0.6523438, 0.9960938, 0.9921875, 0.9726562, 0.9648438, 0.9570312, 0.9492188, 0.9453125, 0.9375000, 0.9296875, 0.9218750, 0.9140625, 0.9062500, 0.8984375, 0.8906250, 0.8828125, 0.8750000, 0.8671875, 0.8593750, 0.8515625, 0.8437500, 0.8359375, 0.8281250, 0.8203125, 0.8125000, 0.8046875, 0.7968750, 0.7890625, 0.7812500, 0.7734375, 0.7656250, 0.7578125, 0.7500000, 0.7421875, 0.7343750, 0.7265625, 0.7187500, 0.7109375, 0.7031250, 0.6953125, 0.6875000, 0.6796875, 0.6718750, 0.6640625, 0.6562500, 0.6484375, 0.6406250, 0.6328125, 0.6250000, 0.6171875, 0.6093750, 0.6015625, 0.5937500, 0.5859375, 0.5781250, 0.5703125, 0.5625000, 0.5546875, 0.5507812, 0.5429688, 0.5351562, 0.5273438, 0.5195312, 0.5117188, 0.5039062, 0.4960938, 0.4882812, 0.4804688, 0.4726562, 0.4648438, 0.4570312, 0.4492188, 0.4414062, 0.4335938, 0.4257812, 0.4179688, 0.4101562, 0.4023438, 0.3945312, 0.3867188, 0.3789062, 0.3710938, 0.3632812, 0.3554688, 0.3476562, 0.3398438, 0.3320312, 0.3242188, 0.3164062, 0.3085938, 0.3007812, 0.2929688, 0.2851562, 0.2773438, 0.2695312, 0.2617188, 0.2539062, 0.2460938, 0.2382812, 0.2304688, 0.2226562, 0.2148438, 0.2070312, 0.1992188, 0.1914062, 0.1835938, 0.1757812, 0.1679688, 0.1601562, 0.1562500, 0.1484375, 0.1406250, 0.1328125, 0.1250000, 0.1171875, 0.1093750, 0.1015625, 0.0937500, 0.0859375, 0.0781250, 0.0703125, 0.0625000, 0.0546875, 0.0468750, 0.0507812, 0.0312500, 0.0234375, 0.0156250, 0.0156250, 0.0234375, 0.0273438, 0.0351562, 0.0429688, 0.0507812, 0.0585938, 0.0664062, 0.0742188, 0.0820312, 0.0898438, 0.0976562, 0.1054688, 0.1132812, 0.1210938, 0.1289062, 0.1367188, 0.1445312, 0.1523438, 0.1601562, 0.1679688, 0.1757812, 0.1835938, 0.1914062, 0.1992188, 0.2070312, 0.2148438, 0.2226562, 0.2304688, 0.2382812, 0.2460938, 0.2539062, 0.2617188, 0.2695312, 0.2773438, 0.2851562, 0.2929688, 0.3007812, 0.3085938, 0.3164062, 0.3242188, 0.3320312, 0.3398438, 0.3476562, 0.3554688, 0.3632812, 0.3710938, 0.3789062, 0.3867188, 0.3945312, 0.4023438, 0.4101562, 0.4179688, 0.4218750, 0.4296875, 0.4375000, 
0.4453125, 0.4531250, 0.4609375, 0.4687500, 0.4765625, 0.4843750, 0.4921875, 0.5000000, 0.5078125, 0.5156250, 0.5234375, 0.5312500, 0.5390625, 0.5468750, 0.5546875, 0.5625000, 0.5703125, 0.5781250, 0.5859375, 0.5937500, 0.6015625, 0.6093750, 0.6171875, 0.6250000, 0.6328125, 0.6406250, 0.6484375, 0.6562500, 0.6640625, 0.6718750, 0.6796875, 0.6875000, 0.6953125, 0.7031250, 0.7109375, 0.7187500, 0.7265625, 0.7343750, 0.7421875, 0.7500000, 0.7578125, 0.7656250, 0.7734375, 0.7812500, 0.7890625, 0.7968750, 0.8046875, 0.8125000, 0.8203125, 0.8242188, 0.8320312, 0.8398438, 0.8476562, 0.8554688, 0.8632812, 0.8710938, 0.8789062, 0.8867188, 0.8945312, 0.9023438, 0.9101562, 0.9179688, 0.9257812, 0.9335938, 0.9414062, 0.9492188, 0.9570312, 0.9648438, 0.9726562, 0.9804688, 0.9804688]), array([ 0.4375000, 0.4375000, 0.8320312, 0.8281250, 0.8203125, 0.8164062, 0.8125000, 0.8046875, 0.8007812, 0.7929688, 0.7890625, 0.7812500, 0.7773438, 0.7734375, 0.7656250, 0.7617188, 0.7539062, 0.7500000, 0.7460938, 0.7382812, 0.7343750, 0.7265625, 0.7226562, 0.7148438, 0.7109375, 0.7070312, 0.6992188, 0.6953125, 0.6875000, 0.6835938, 0.6796875, 0.6718750, 0.6679688, 0.6601562, 0.6562500, 0.6484375, 0.6445312, 0.6406250, 0.6328125, 0.6289062, 0.6210938, 0.6171875, 0.6132812, 0.6054688, 0.6015625, 0.5937500, 0.5898438, 0.5859375, 0.5781250, 0.5742188, 0.5664062, 0.5625000, 0.5546875, 0.5507812, 0.5468750, 0.5390625, 0.5351562, 0.5273438, 0.5234375, 0.5195312, 0.5117188, 0.5078125, 0.5000000, 0.4960938, 0.4882812, 0.4843750, 0.4804688, 0.4726562, 0.4687500, 0.4609375, 0.4570312, 0.4531250, 0.4453125, 0.4414062, 0.4335938, 0.4296875, 0.4257812, 0.4179688, 0.4140625, 0.4062500, 0.4023438, 0.3945312, 0.3906250, 0.3867188, 0.3789062, 0.3750000, 0.3671875, 0.3632812, 0.3593750, 0.3515625, 0.3476562, 0.3398438, 0.3359375, 0.3281250, 0.3242188, 0.3203125, 0.3125000, 0.3085938, 0.3007812, 0.2968750, 0.2929688, 0.2851562, 0.2812500, 0.2734375, 0.2695312, 0.2656250, 0.2578125, 0.2539062, 0.2460938, 0.2421875, 0.2343750, 0.2304688, 0.2265625, 0.2187500, 0.2148438, 0.2070312, 0.2031250, 0.1992188, 0.1914062, 0.1875000, 0.1796875, 0.1757812, 0.1679688, 0.1640625, 0.1601562, 0.1523438, 0.1484375, 0.1406250, 0.1367188, 0.1328125, 0.1250000, 0.1210938, 0.1250000, 0.1289062, 0.1328125, 0.1406250, 0.1445312, 0.1484375, 0.1562500, 0.1601562, 0.1640625, 0.1718750, 0.1757812, 0.1796875, 0.1875000, 0.1914062, 0.1953125, 0.2031250, 0.2070312, 0.2109375, 0.2187500, 0.2226562, 0.2265625, 0.2343750, 0.2382812, 0.2421875, 0.2500000, 0.2539062, 0.2578125, 0.2656250, 0.2695312, 0.2734375, 0.2812500, 0.2851562, 0.2929688, 0.2968750, 0.3007812, 0.3085938, 0.3125000, 0.3164062, 0.3242188, 0.3281250, 0.3320312, 0.3398438, 0.3437500, 0.3476562, 0.3554688, 0.3593750, 0.3632812, 0.3710938, 0.3750000, 0.3789062, 0.3867188, 0.3906250, 0.3945312, 0.4023438, 0.4062500, 0.4101562, 0.4179688, 0.4218750, 0.4257812, 0.4335938, 0.4375000, 0.4414062, 0.4492188, 0.4531250, 0.4570312, 0.4648438, 0.4687500, 0.4726562, 0.4804688, 0.4843750, 0.4882812, 0.4960938, 0.5000000, 0.5039062, 0.5117188, 0.5156250, 0.5195312, 0.5273438, 0.5312500, 0.5351562, 0.5429688, 0.5468750, 0.5507812, 0.5585938, 0.5625000, 0.5664062, 0.5742188, 0.5781250, 0.5820312, 0.5898438, 0.5937500, 0.5976562, 0.6054688, 0.6093750, 0.6132812, 0.6210938, 0.6250000, 0.6289062, 0.6367188, 0.6406250, 0.6445312, 0.6523438, 0.6562500, 0.6601562, 0.6679688, 0.6718750, 0.6757812, 0.6835938, 0.6875000, 0.6914062, 0.6992188, 0.7031250, 0.7070312, 0.7148438, 0.7187500, 0.7226562, 0.7304688, 0.7343750, 0.7382812, 
0.7460938, 0.7500000, 0.7539062, 0.7617188, 0.7617188]), array([ 0.9960938, 0.9960938, 0.9921875, 0.9804688, 0.9765625, 0.9726562, 0.9687500, 0.9648438, 0.9609375, 0.9570312, 0.9531250, 0.9492188, 0.9453125, 0.9414062, 0.9375000, 0.9335938, 0.9296875, 0.9257812, 0.9218750, 0.9179688, 0.9140625, 0.9101562, 0.9062500, 0.9023438, 0.8984375, 0.8945312, 0.8906250, 0.8867188, 0.8828125, 0.8789062, 0.8750000, 0.8710938, 0.8671875, 0.8632812, 0.8593750, 0.8554688, 0.8515625, 0.8476562, 0.8437500, 0.8398438, 0.8359375, 0.8320312, 0.8281250, 0.8242188, 0.8203125, 0.8164062, 0.8125000, 0.8085938, 0.8046875, 0.8007812, 0.7968750, 0.7929688, 0.7890625, 0.7851562, 0.7812500, 0.7773438, 0.7734375, 0.7695312, 0.7656250, 0.7617188, 0.7578125, 0.7539062, 0.7500000, 0.7460938, 0.7421875, 0.7382812, 0.7343750, 0.7304688, 0.7265625, 0.7226562, 0.7187500, 0.7148438, 0.7109375, 0.7070312, 0.7031250, 0.6992188, 0.6953125, 0.6914062, 0.6875000, 0.6835938, 0.6796875, 0.6757812, 0.6718750, 0.6679688, 0.6640625, 0.6601562, 0.6562500, 0.6523438, 0.6484375, 0.6445312, 0.6406250, 0.6367188, 0.6328125, 0.6289062, 0.6250000, 0.6210938, 0.6171875, 0.6132812, 0.6093750, 0.6054688, 0.6015625, 0.5976562, 0.5937500, 0.5898438, 0.5859375, 0.5820312, 0.5781250, 0.5742188, 0.5703125, 0.5664062, 0.5625000, 0.5585938, 0.5546875, 0.5507812, 0.5468750, 0.5429688, 0.5390625, 0.5351562, 0.5312500, 0.5273438, 0.5234375, 0.5195312, 0.5156250, 0.5117188, 0.5078125, 0.5039062, 0.5000000, 0.4960938, 0.4921875, 0.4882812, 0.4843750, 0.4804688, 0.4765625, 0.4726562, 0.4687500, 0.4648438, 0.4609375, 0.4570312, 0.4531250, 0.4492188, 0.4453125, 0.4414062, 0.4375000, 0.4335938, 0.4296875, 0.4257812, 0.4218750, 0.4179688, 0.4140625, 0.4101562, 0.4062500, 0.4023438, 0.3984375, 0.3945312, 0.3906250, 0.3867188, 0.3828125, 0.3789062, 0.3750000, 0.3710938, 0.3671875, 0.3632812, 0.3593750, 0.3554688, 0.3515625, 0.3476562, 0.3437500, 0.3398438, 0.3359375, 0.3320312, 0.3281250, 0.3242188, 0.3203125, 0.3164062, 0.3125000, 0.3085938, 0.3046875, 0.3007812, 0.2968750, 0.2929688, 0.2890625, 0.2851562, 0.2812500, 0.2773438, 0.2734375, 0.2695312, 0.2656250, 0.2617188, 0.2578125, 0.2539062, 0.2500000, 0.2460938, 0.2421875, 0.2382812, 0.2343750, 0.2304688, 0.2265625, 0.2226562, 0.2187500, 0.2148438, 0.2109375, 0.2070312, 0.2031250, 0.1992188, 0.1953125, 0.1914062, 0.1875000, 0.1835938, 0.1796875, 0.1757812, 0.1718750, 0.1679688, 0.1640625, 0.1601562, 0.1562500, 0.1523438, 0.1484375, 0.1445312, 0.1406250, 0.1367188, 0.1328125, 0.1289062, 0.1250000, 0.1210938, 0.1171875, 0.1132812, 0.1093750, 0.1054688, 0.1015625, 0.0976562, 0.0937500, 0.0898438, 0.0859375, 0.0820312, 0.0781250, 0.0742188, 0.0703125, 0.0664062, 0.0625000, 0.0585938, 0.0546875, 0.0507812, 0.0468750, 0.0429688, 0.0507812, 0.0351562, 0.0312500, 0.0273438, 0.0234375, 0.0195312, 0.0156250, 0.0117188, 0.0078125, 0.0039062, 0.0000000, 0.0000000]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 17 :: Blue - Pastel - Red ### color_map_luts['idl17'] = \ ( array([ 0.1289062, 0.1289062, 0.1250000, 0.1210938, 0.1210938, 0.1171875, 0.1132812, 0.1093750, 0.1054688, 0.1015625, 0.0976562, 0.0937500, 0.0898438, 0.0859375, 0.0820312, 0.0781250, 0.0742188, 0.0664062, 0.0625000, 0.0585938, 0.0546875, 0.0468750, 0.0429688, 0.0507812, 0.0312500, 0.0273438, 0.0195312, 0.0156250, 0.0078125, 0.0039062, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0078125, 0.0156250, 0.0195312, 0.0273438, 0.0312500, 0.0507812, 0.0429688, 0.0507812, 0.0546875, 0.0585938, 0.0664062, 0.0703125, 0.0742188, 0.0781250, 0.0820312, 0.1132812, 0.1406250, 0.1640625, 0.1875000, 0.2070312, 0.2265625, 0.2382812, 0.2539062, 0.2656250, 0.2734375, 0.2812500, 0.2851562, 0.2890625, 0.2890625, 0.2890625, 0.2929688, 0.2929688, 0.2968750, 0.2968750, 0.2968750, 0.3007812, 0.2968750, 0.3007812, 0.3046875, 0.3085938, 0.3125000, 0.3164062, 0.3242188, 0.3281250, 0.3320312, 0.3359375, 0.3437500, 0.3476562, 0.3554688, 0.3593750, 0.3554688, 0.3671875, 0.3750000, 0.3828125, 0.3906250, 0.3984375, 0.4062500, 0.4140625, 0.4218750, 0.4257812, 0.4335938, 0.4375000, 0.4414062, 0.4492188, 0.4531250, 0.4570312, 0.4648438, 0.4687500, 0.4726562, 0.4804688, 0.4843750, 0.4882812, 0.4960938, 0.5000000, 0.5039062, 0.5117188, 0.5156250, 0.5195312, 0.5273438, 0.5312500, 0.5351562, 0.5390625, 0.5468750, 0.5507812, 0.5546875, 0.5625000, 0.5664062, 0.5703125, 0.5781250, 0.5820312, 0.5859375, 0.5937500, 0.5976562, 0.6015625, 0.6093750, 0.6132812, 0.6171875, 0.6250000, 0.6289062, 0.6328125, 0.6406250, 0.6445312, 0.6484375, 0.6562500, 0.6601562, 0.6640625, 0.6718750, 0.6757812, 0.6796875, 0.6835938, 0.6914062, 0.6953125, 0.6992188, 0.7070312, 0.7109375, 0.7148438, 0.7226562, 0.7265625, 0.7304688, 0.7382812, 0.7421875, 0.7460938, 0.7539062, 0.7539062]), array([ 0.0000000, 
0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0039062, 0.0117188, 0.0195312, 0.0234375, 0.0312500, 0.0507812, 0.0468750, 0.0507812, 0.0585938, 0.0664062, 0.0742188, 0.0820312, 0.0898438, 0.0976562, 0.1054688, 0.1132812, 0.1210938, 0.1289062, 0.1367188, 0.1445312, 0.1523438, 0.1640625, 0.1718750, 0.1796875, 0.1914062, 0.1992188, 0.2070312, 0.2187500, 0.2265625, 0.2343750, 0.2460938, 0.2539062, 0.2656250, 0.2773438, 0.2851562, 0.2968750, 0.3046875, 0.3164062, 0.3281250, 0.3398438, 0.3476562, 0.3593750, 0.3710938, 0.3828125, 0.3945312, 0.4062500, 0.4179688, 0.4296875, 0.4414062, 0.4531250, 0.4648438, 0.4765625, 0.4882812, 0.5000000, 0.5117188, 0.5234375, 0.5390625, 0.5507812, 0.5625000, 0.5781250, 0.5898438, 0.6015625, 0.6171875, 0.6289062, 0.6445312, 0.6562500, 0.6718750, 0.6835938, 0.6992188, 0.7109375, 0.7265625, 0.7421875, 0.7539062, 0.7695312, 0.7851562, 0.8007812, 0.8164062, 0.8281250, 0.8437500, 0.8593750, 0.8750000, 0.8906250, 0.9062500, 0.9140625, 0.9179688, 0.9218750, 0.9296875, 0.9335938, 0.9375000, 0.9414062, 0.9492188, 0.9531250, 0.9570312, 0.9609375, 0.9648438, 0.9726562, 0.9726562, 0.9609375, 0.9531250, 0.9414062, 0.9257812, 0.9062500, 0.8867188, 0.8710938, 0.8515625, 0.8359375, 0.8164062, 0.8007812, 0.7851562, 0.7656250, 0.7500000, 0.7343750, 0.7187500, 0.7031250, 0.6835938, 0.6679688, 0.6523438, 0.6367188, 0.6250000, 0.6093750, 0.5937500, 0.5781250, 0.5625000, 0.5507812, 0.5351562, 0.5195312, 0.5078125, 0.4921875, 0.4804688, 0.4648438, 0.4531250, 0.4531250, 0.4492188, 0.4414062, 0.4375000, 0.4296875, 0.4257812, 0.4062500, 0.4062500, 0.4023438, 0.3984375, 0.3945312, 0.3906250, 0.3867188, 0.3828125, 0.3789062, 0.3750000, 0.3710938, 0.3671875, 0.3632812, 0.3593750, 0.3437500, 0.3437500, 0.3437500, 0.3398438, 0.3359375, 0.3320312, 0.3320312, 0.3281250, 0.3242188, 0.3203125, 0.3125000, 0.3085938, 0.3046875, 0.2968750, 0.2929688, 0.2890625, 0.2851562, 0.2773438, 0.2734375, 0.2695312, 0.2656250, 0.2617188, 0.2578125, 0.2500000, 0.2460938, 0.2382812, 0.2265625, 0.2187500, 0.2109375, 0.2031250, 0.1914062, 0.1835938, 0.1757812, 0.1679688, 0.1601562, 0.1562500, 0.1484375, 0.1406250, 0.1328125, 0.1289062, 0.1210938, 0.1132812, 0.1093750, 0.1015625, 0.0976562, 0.0898438, 0.0859375, 0.0820312, 0.0742188, 0.0703125, 0.0664062, 0.0625000, 0.0546875, 0.0507812, 0.0468750, 0.0429688, 0.0507812, 0.0351562, 0.0312500, 0.0273438, 0.0273438, 0.0234375, 0.0195312, 0.0156250, 0.0156250, 0.0117188, 0.0078125, 0.0078125, 0.0039062, 0.0039062, 0.0000000, 0.0000000, 0.0000000, 0.0000000]), array([ 0.3750000, 0.3750000, 0.3789062, 0.3867188, 0.3906250, 0.3945312, 0.3984375, 0.4062500, 0.4101562, 0.4140625, 0.4179688, 0.4218750, 0.4296875, 0.4335938, 0.4375000, 0.4414062, 0.4492188, 0.4531250, 0.4570312, 0.4609375, 0.4648438, 0.4726562, 0.4765625, 0.4804688, 0.4843750, 0.4882812, 0.4960938, 0.5000000, 0.5039062, 0.5078125, 0.5156250, 0.5195312, 0.5234375, 0.5273438, 0.5312500, 0.5390625, 0.5429688, 0.5468750, 0.5507812, 0.5585938, 0.5625000, 0.5664062, 0.5703125, 0.5742188, 0.5820312, 0.5859375, 0.5898438, 0.5937500, 0.6015625, 0.6054688, 0.6093750, 0.6132812, 0.6171875, 0.6250000, 0.6289062, 0.6328125, 0.6367188, 0.6406250, 0.6484375, 0.6523438, 0.6562500, 0.6601562, 0.6679688, 0.6718750, 0.6757812, 0.6796875, 0.6835938, 
0.6914062, 0.6953125, 0.6992188, 0.7031250, 0.7109375, 0.7148438, 0.7187500, 0.7226562, 0.7265625, 0.7343750, 0.7382812, 0.7421875, 0.7460938, 0.7539062, 0.7578125, 0.7617188, 0.7656250, 0.7695312, 0.7773438, 0.7812500, 0.7851562, 0.7890625, 0.7929688, 0.8007812, 0.8046875, 0.8085938, 0.8125000, 0.8203125, 0.8242188, 0.8281250, 0.8320312, 0.8359375, 0.8437500, 0.8476562, 0.8515625, 0.8554688, 0.8632812, 0.8671875, 0.8710938, 0.8750000, 0.8789062, 0.8867188, 0.8906250, 0.8945312, 0.8984375, 0.9062500, 0.9101562, 0.9062500, 0.8984375, 0.8945312, 0.8867188, 0.8789062, 0.8710938, 0.8671875, 0.8593750, 0.8515625, 0.8437500, 0.8359375, 0.8281250, 0.8203125, 0.8125000, 0.5820312, 0.5546875, 0.5273438, 0.4960938, 0.4609375, 0.4335938, 0.4023438, 0.3710938, 0.3437500, 0.3164062, 0.2890625, 0.2617188, 0.2343750, 0.2070312, 0.1835938, 0.1562500, 0.1328125, 0.1093750, 0.0859375, 0.0859375, 0.0898438, 0.0937500, 0.0976562, 0.1015625, 0.1054688, 0.1093750, 0.1093750, 0.1132812, 0.1171875, 0.1171875, 0.1210938, 0.1210938, 0.1250000, 0.1250000, 0.1132812, 0.1093750, 0.1015625, 0.0976562, 0.0898438, 0.0859375, 0.0781250, 0.0703125, 0.0664062, 0.0625000, 0.0585938, 0.0546875, 0.0507812, 0.0468750, 0.0429688, 0.0507812, 0.0351562, 0.0312500, 0.0273438, 0.0234375, 0.0195312, 0.0195312, 0.0156250, 0.0117188, 0.0117188, 0.0078125, 0.0039062, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 18 :: Pastels ### color_map_luts['idl18'] = \ ( array([ 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 
0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9882812, 0.9804688, 0.9726562, 0.9648438, 0.9570312, 0.9492188, 0.9414062, 0.9335938, 0.9257812, 0.9179688, 0.9101562, 0.9023438, 0.8945312, 0.8867188, 0.8789062, 0.8710938, 0.8632812, 0.8554688, 0.8476562, 0.8437500, 0.8359375, 0.8281250, 0.8203125, 0.8125000, 0.8046875, 0.7968750, 0.7890625, 0.7812500, 0.7734375, 0.7656250, 0.7578125, 0.7500000, 0.7421875, 0.7343750, 0.7265625, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0078125, 0.0312500, 0.0507812, 0.0742188, 0.0976562, 0.1171875, 0.1406250, 0.1640625, 0.1875000, 0.2070312, 0.2304688, 0.2539062, 0.2773438, 0.2968750, 0.3203125, 0.3437500, 0.3632812, 0.3867188, 0.4101562, 0.4335938, 0.4531250, 0.4765625, 0.5000000, 0.5195312, 0.5429688, 0.5664062, 0.5898438, 0.6093750, 0.6328125, 0.6562500, 0.6757812, 0.6992188, 0.7226562, 0.7460938, 0.7656250, 0.7890625, 0.8125000, 0.8359375, 0.8554688, 0.8789062, 0.9023438, 0.9218750, 0.9453125, 0.9687500, 0.9921875, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938]), array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 
0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.5468750, 0.5703125, 0.5937500, 0.6132812, 0.6367188, 0.6601562, 0.6796875, 0.7031250, 0.7265625, 0.7500000, 0.7695312, 0.7929688, 0.8164062, 0.8359375, 0.8593750, 0.8828125, 0.9062500, 0.9257812, 0.9492188, 0.9726562, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9765625, 0.9531250, 0.9296875, 0.9101562, 0.8867188, 0.8632812, 0.8398438, 0.8203125, 0.7968750, 0.7734375, 0.7539062, 0.7304688, 0.7070312, 0.6835938, 0.6640625, 0.6640625]), array([ 0.2812500, 0.2812500, 0.2890625, 0.2968750, 0.3046875, 0.3125000, 0.3203125, 0.3281250, 0.3359375, 0.3437500, 0.3515625, 0.3554688, 0.3632812, 0.3710938, 0.3789062, 0.3867188, 0.3945312, 0.4023438, 0.4101562, 0.4179688, 0.4257812, 0.4335938, 0.4414062, 0.4492188, 0.4570312, 0.4648438, 0.4726562, 0.4804688, 0.4882812, 0.4960938, 0.5039062, 0.5117188, 0.5195312, 0.5273438, 0.5351562, 0.5429688, 0.5507812, 0.5546875, 0.5625000, 0.5703125, 0.5781250, 0.5859375, 0.5937500, 0.6015625, 0.6093750, 0.6171875, 0.6250000, 0.6328125, 0.6406250, 0.6484375, 0.6562500, 0.6640625, 0.6718750, 0.6796875, 0.6875000, 0.6953125, 0.7031250, 0.7109375, 0.7187500, 0.7265625, 0.7343750, 0.7421875, 0.7460938, 0.7539062, 0.7617188, 0.7695312, 0.7773438, 0.7851562, 0.7929688, 0.8007812, 0.8085938, 0.8164062, 0.8242188, 0.8320312, 0.8398438, 0.8476562, 0.8554688, 0.8632812, 0.8710938, 0.8789062, 0.8867188, 0.8945312, 0.9023438, 0.9101562, 0.9179688, 0.9257812, 0.9335938, 0.9414062, 0.9453125, 0.9531250, 0.9609375, 0.9687500, 0.9765625, 0.9843750, 0.9921875, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 
0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9726562, 0.9492188, 0.9257812, 0.9062500, 0.8828125, 0.8593750, 0.8359375, 0.8164062, 0.7929688, 0.7695312, 0.7500000, 0.7265625, 0.7031250, 0.6796875, 0.6601562, 0.6367188, 0.6132812, 0.5937500, 0.5703125, 0.5468750, 0.5234375, 0.5039062, 0.4804688, 0.4570312, 0.4375000, 0.4140625, 0.3906250, 0.3671875, 0.3476562, 0.3242188, 0.3007812, 0.2773438, 0.2578125, 0.2343750, 0.2109375, 0.1914062, 0.1679688, 0.1445312, 0.1210938, 0.1015625, 0.0781250, 0.0546875, 0.0351562, 0.0117188, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 19 :: Hue Sat Lightness 1 ### color_map_luts['idl19'] = \ ( array([ 0.9804688, 0.9804688, 0.9804688, 0.9804688, 0.9843750, 0.9843750, 0.9882812, 0.9843750, 0.9843750, 0.9804688, 0.9804688, 0.9765625, 0.9765625, 0.9726562, 0.9726562, 0.9726562, 0.9687500, 0.9687500, 0.9648438, 0.9648438, 0.9609375, 0.9609375, 0.9609375, 0.9570312, 0.9570312, 0.9531250, 0.9531250, 0.9492188, 0.9492188, 0.9492188, 0.9453125, 0.9453125, 0.9414062, 0.9414062, 0.9414062, 0.9375000, 0.9375000, 0.9375000, 0.9335938, 0.9335938, 0.9335938, 0.9296875, 0.9296875, 0.9257812, 0.9179688, 0.8984375, 0.8828125, 0.8671875, 0.8476562, 0.8320312, 0.8164062, 0.8007812, 0.7851562, 0.7695312, 0.7578125, 0.7421875, 0.7265625, 0.7148438, 0.6992188, 0.6875000, 0.6757812, 0.6601562, 0.6484375, 0.6367188, 0.6250000, 0.6132812, 0.6015625, 0.5898438, 0.5781250, 0.5664062, 0.5585938, 0.5468750, 0.5351562, 0.5273438, 0.5195312, 0.5078125, 0.5000000, 0.4921875, 0.4843750, 0.4765625, 0.4687500, 0.4609375, 
0.4531250, 0.4453125, 0.4375000, 0.4335938, 0.4257812, 0.4257812, 0.4296875, 0.4335938, 0.4414062, 0.4453125, 0.4492188, 0.4531250, 0.4570312, 0.4609375, 0.4648438, 0.4726562, 0.4765625, 0.4804688, 0.4882812, 0.4882812, 0.4921875, 0.4960938, 0.5000000, 0.5039062, 0.5117188, 0.5156250, 0.5195312, 0.5273438, 0.5312500, 0.5351562, 0.5390625, 0.5429688, 0.5468750, 0.5468750, 0.5507812, 0.5546875, 0.5625000, 0.5664062, 0.5703125, 0.5742188, 0.5781250, 0.5820312, 0.5859375, 0.5898438, 0.5937500, 0.5976562, 0.6015625, 0.6054688, 0.6093750, 0.6132812, 0.6171875, 0.6210938, 0.6250000, 0.6289062, 0.6328125, 0.6367188, 0.6406250, 0.6445312, 0.6484375, 0.6523438, 0.6562500, 0.6601562, 0.6640625, 0.6679688, 0.6679688, 0.6718750, 0.6796875, 0.6796875, 0.6835938, 0.6875000, 0.6914062, 0.6953125, 0.6992188, 0.7031250, 0.7070312, 0.7109375, 0.7148438, 0.7187500, 0.7226562, 0.7226562, 0.7265625, 0.7304688, 0.7343750, 0.7382812, 0.7421875, 0.7421875, 0.7460938, 0.7500000, 0.7539062, 0.7578125, 0.7617188, 0.7656250, 0.7695312, 0.7773438, 0.7812500, 0.7890625, 0.7929688, 0.7968750, 0.8046875, 0.8085938, 0.8125000, 0.8203125, 0.8242188, 0.8281250, 0.8320312, 0.8359375, 0.8398438, 0.8437500, 0.8476562, 0.8515625, 0.8554688, 0.8593750, 0.8593750, 0.8632812, 0.8671875, 0.8710938, 0.8750000, 0.8789062, 0.8828125, 0.8828125, 0.8867188, 0.8906250, 0.8906250, 0.8945312, 0.8984375, 0.8984375, 0.9023438, 0.9062500, 0.9062500, 0.9101562, 0.9101562, 0.9140625, 0.9179688, 0.9179688, 0.9179688, 0.9218750, 0.9218750, 0.9218750, 0.9257812, 0.9257812, 0.9257812, 0.9296875, 0.9296875, 0.9335938, 0.9335938, 0.9335938, 0.9375000, 0.9375000, 0.9375000, 0.9414062, 0.9453125, 0.9453125, 0.9453125, 0.9453125, 0.9492188, 0.9492188, 0.9531250, 0.9531250, 0.9531250, 0.9570312, 0.9570312, 0.9609375, 0.9609375, 0.9609375, 0.9648438, 0.9648438, 0.9687500, 0.9687500, 0.9726562, 0.9726562, 0.9765625, 0.9765625, 0.9804688, 0.9804688]), array([ 0.0000000, 0.0000000, 0.0000000, 0.0039062, 0.0039062, 0.0078125, 0.0078125, 0.0156250, 0.0195312, 0.0273438, 0.0312500, 0.0273438, 0.0351562, 0.0507812, 0.0312500, 0.0585938, 0.0664062, 0.0703125, 0.0781250, 0.0820312, 0.0898438, 0.0937500, 0.1015625, 0.1054688, 0.1093750, 0.1171875, 0.1210938, 0.1289062, 0.1328125, 0.1367188, 0.1445312, 0.1484375, 0.1562500, 0.1601562, 0.1640625, 0.1718750, 0.1757812, 0.1796875, 0.1875000, 0.1914062, 0.1953125, 0.2031250, 0.2070312, 0.2148438, 0.2187500, 0.2226562, 0.2304688, 0.2343750, 0.2382812, 0.2460938, 0.2500000, 0.2539062, 0.2578125, 0.2656250, 0.2695312, 0.2734375, 0.2812500, 0.2851562, 0.2890625, 0.2968750, 0.3007812, 0.3046875, 0.3085938, 0.3164062, 0.3203125, 0.3242188, 0.3281250, 0.3320312, 0.3359375, 0.3398438, 0.3437500, 0.3515625, 0.3554688, 0.3593750, 0.3632812, 0.3710938, 0.3750000, 0.3789062, 0.3828125, 0.3867188, 0.3945312, 0.3984375, 0.4023438, 0.4062500, 0.4101562, 0.4179688, 0.4218750, 0.4296875, 0.4453125, 0.4609375, 0.4765625, 0.4882812, 0.5039062, 0.5156250, 0.5312500, 0.5429688, 0.5585938, 0.5703125, 0.5820312, 0.5937500, 0.6093750, 0.6171875, 0.6289062, 0.6406250, 0.6523438, 0.6640625, 0.6757812, 0.6835938, 0.6953125, 0.7070312, 0.7187500, 0.7265625, 0.7343750, 0.7460938, 0.7539062, 0.7617188, 0.7695312, 0.7773438, 0.7851562, 0.7929688, 0.8007812, 0.8085938, 0.8164062, 0.8242188, 0.8320312, 0.8398438, 0.8437500, 0.8515625, 0.8593750, 0.8632812, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8710938, 0.8710938, 
0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8789062, 0.8789062, 0.8789062, 0.8789062, 0.8789062, 0.8789062, 0.8789062, 0.8828125, 0.8828125, 0.8828125, 0.8828125, 0.8828125, 0.8828125, 0.8867188, 0.8867188, 0.8867188, 0.8867188, 0.8867188, 0.8906250, 0.8906250, 0.8906250, 0.8906250, 0.8906250, 0.8945312, 0.8945312, 0.8945312, 0.8945312, 0.8945312, 0.8945312, 0.8984375, 0.8984375, 0.8984375, 0.8984375, 0.9023438, 0.9023438, 0.9062500, 0.9062500, 0.9062500, 0.9062500, 0.9062500, 0.9101562, 0.9101562, 0.9101562, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9179688, 0.9179688, 0.9179688, 0.9218750, 0.9218750, 0.9218750, 0.9218750, 0.9218750, 0.9257812, 0.9257812, 0.9257812, 0.9257812, 0.9296875, 0.9296875, 0.9296875, 0.9335938, 0.9335938, 0.9335938, 0.9375000, 0.9375000, 0.9414062, 0.9414062, 0.9453125, 0.9453125, 0.9453125, 0.9492188, 0.9492188, 0.9531250, 0.9531250, 0.9570312, 0.9570312, 0.9609375, 0.9609375, 0.9648438, 0.9648438, 0.9687500, 0.9687500, 0.9726562, 0.9726562, 0.9765625, 0.9765625, 0.9765625]), array([ 0.0117188, 0.0117188, 0.0351562, 0.0585938, 0.0859375, 0.1093750, 0.1328125, 0.1601562, 0.1875000, 0.2148438, 0.2421875, 0.2578125, 0.2851562, 0.3164062, 0.3281250, 0.3671875, 0.3906250, 0.4140625, 0.4375000, 0.4609375, 0.4843750, 0.5078125, 0.5273438, 0.5507812, 0.5742188, 0.5937500, 0.6132812, 0.6328125, 0.6562500, 0.6757812, 0.6953125, 0.7148438, 0.7304688, 0.7500000, 0.7695312, 0.7851562, 0.8046875, 0.8203125, 0.8398438, 0.8554688, 0.8710938, 0.8867188, 0.9023438, 0.9179688, 0.9257812, 0.9257812, 0.9218750, 0.9218750, 0.9218750, 0.9179688, 0.9179688, 0.9179688, 0.9179688, 0.9140625, 0.9140625, 0.9140625, 0.9101562, 0.9101562, 0.9101562, 0.9062500, 0.9062500, 0.9062500, 0.9062500, 0.9023438, 0.9023438, 0.9023438, 0.8984375, 0.8984375, 0.8984375, 0.8984375, 0.8984375, 0.8945312, 0.8945312, 0.8945312, 0.8945312, 0.8906250, 0.8906250, 0.8906250, 0.8906250, 0.8906250, 0.8867188, 0.8867188, 0.8867188, 0.8867188, 0.8867188, 0.8828125, 0.8828125, 0.8828125, 0.8828125, 0.8828125, 0.8789062, 0.8789062, 0.8789062, 0.8789062, 0.8789062, 0.8789062, 0.8789062, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8632812, 0.8593750, 0.8515625, 0.8476562, 0.8437500, 0.8359375, 0.8320312, 0.8281250, 0.8242188, 0.8203125, 0.8125000, 0.8085938, 0.8046875, 0.8046875, 0.8007812, 0.7968750, 0.7929688, 0.7890625, 0.7890625, 0.7851562, 0.7812500, 0.7812500, 0.7773438, 0.7734375, 0.7734375, 0.7695312, 0.7695312, 0.7695312, 0.7656250, 0.7695312, 0.7656250, 0.7656250, 0.7617188, 0.7617188, 0.7617188, 0.7617188, 0.7617188, 0.7617188, 0.7617188, 0.7617188, 0.7617188, 0.7617188, 0.7617188, 0.7656250, 0.7656250, 0.7695312, 0.7734375, 0.7773438, 0.7812500, 0.7851562, 0.7851562, 0.7890625, 0.7929688, 0.7968750, 0.8007812, 0.8007812, 0.8046875, 0.8085938, 0.8125000, 0.8164062, 0.8164062, 0.8203125, 0.8242188, 0.8281250, 0.8281250, 0.8320312, 0.8320312, 0.8359375, 0.8398438, 0.8437500, 0.8437500, 0.8476562, 0.8515625, 0.8515625, 0.8554688, 0.8593750, 0.8632812, 0.8632812, 0.8671875, 0.8710938, 0.8710938, 0.8750000, 0.8789062, 0.8828125, 
0.8828125, 0.8867188, 0.8906250, 0.8906250, 0.8945312, 0.8984375, 0.8984375, 0.9023438, 0.9062500, 0.9062500, 0.9101562, 0.9101562, 0.9140625, 0.9179688, 0.9179688, 0.9218750, 0.9257812, 0.9257812, 0.9296875, 0.9296875, 0.9335938, 0.9375000, 0.9375000, 0.9414062, 0.9414062, 0.9453125, 0.9492188, 0.9492188, 0.9531250, 0.9531250, 0.9570312, 0.9609375, 0.9609375, 0.9648438, 0.9648438, 0.9687500, 0.9687500, 0.9726562, 0.9726562, 0.9765625, 0.9765625, 0.9765625]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 20 :: Hue Sat Lightness 2 ### color_map_luts['idl20'] = \ ( array([ 0.9882812, 0.9882812, 0.9804688, 0.9765625, 0.9765625, 0.9726562, 0.9726562, 0.9687500, 0.9687500, 0.9648438, 0.9648438, 0.9609375, 0.9609375, 0.9609375, 0.9570312, 0.9570312, 0.9531250, 0.9531250, 0.9531250, 0.9492188, 0.9492188, 0.9453125, 0.9453125, 0.9453125, 0.9414062, 0.9414062, 0.9375000, 0.9375000, 0.9375000, 0.9335938, 0.9335938, 0.9335938, 0.9296875, 0.9296875, 0.9296875, 0.9257812, 0.9257812, 0.9218750, 0.9218750, 0.9218750, 0.9179688, 0.9179688, 0.9179688, 0.9179688, 0.9140625, 0.9101562, 0.9101562, 0.9062500, 0.9062500, 0.9023438, 0.9023438, 0.8984375, 0.8945312, 0.8945312, 0.8906250, 0.8867188, 0.8828125, 0.8828125, 0.8789062, 0.8750000, 0.8710938, 0.8710938, 0.8671875, 0.8632812, 0.8593750, 0.8554688, 0.8515625, 0.8476562, 0.8437500, 0.8398438, 0.8359375, 0.8320312, 0.8281250, 0.8242188, 0.8203125, 0.8125000, 0.8085938, 0.8046875, 0.8007812, 0.7968750, 0.7890625, 0.7812500, 0.7773438, 0.7734375, 0.7656250, 0.7617188, 0.7578125, 0.7500000, 0.7500000, 0.7460938, 0.7460938, 0.7421875, 0.7382812, 0.7343750, 0.7304688, 0.7226562, 0.7226562, 0.7187500, 0.7148438, 0.7148438, 0.7109375, 0.7070312, 0.7031250, 0.6992188, 0.6953125, 0.6914062, 0.6875000, 0.6835938, 0.6796875, 0.6757812, 0.6718750, 0.6718750, 0.6679688, 0.6640625, 0.6601562, 0.6562500, 0.6523438, 0.6484375, 0.6445312, 0.6445312, 0.6406250, 0.6367188, 0.6328125, 0.6289062, 0.6250000, 0.6210938, 0.6171875, 0.6132812, 0.6093750, 0.6054688, 0.6015625, 0.5976562, 0.5937500, 0.5898438, 0.5820312, 0.5781250, 0.5742188, 0.5703125, 0.5664062, 0.5625000, 0.5585938, 0.5546875, 0.5468750, 0.5429688, 0.5390625, 0.5390625, 0.5351562, 0.5312500, 0.5273438, 0.5234375, 0.5156250, 0.5117188, 0.5078125, 0.5039062, 0.5000000, 
0.4960938, 0.4921875, 0.4882812, 0.4843750, 0.4765625, 0.4726562, 0.4687500, 0.4648438, 0.4609375, 0.4609375, 0.4531250, 0.4453125, 0.4414062, 0.4375000, 0.4335938, 0.4335938, 0.4257812, 0.4218750, 0.4257812, 0.4335938, 0.4453125, 0.4492188, 0.4570312, 0.4609375, 0.4687500, 0.4804688, 0.4882812, 0.4960938, 0.5039062, 0.5117188, 0.5234375, 0.5351562, 0.5429688, 0.5507812, 0.5625000, 0.5742188, 0.5859375, 0.5937500, 0.6054688, 0.6171875, 0.6289062, 0.6406250, 0.6523438, 0.6679688, 0.6796875, 0.6914062, 0.7070312, 0.7187500, 0.7343750, 0.7500000, 0.7617188, 0.7773438, 0.7929688, 0.8085938, 0.8242188, 0.8437500, 0.8593750, 0.8750000, 0.8906250, 0.9101562, 0.9257812, 0.9296875, 0.9335938, 0.9335938, 0.9375000, 0.9375000, 0.9375000, 0.9414062, 0.9414062, 0.9414062, 0.9453125, 0.9453125, 0.9492188, 0.9492188, 0.9492188, 0.9531250, 0.9531250, 0.9570312, 0.9570312, 0.9570312, 0.9609375, 0.9609375, 0.9648438, 0.9648438, 0.9648438, 0.9687500, 0.9687500, 0.9726562, 0.9726562, 0.9765625, 0.9765625, 0.9804688, 0.9804688, 0.9843750, 0.9843750, 0.9882812, 0.9882812, 0.9921875, 0.9921875, 0.9921875, 0.9921875]), array([ 0.9843750, 0.9843750, 0.9765625, 0.9765625, 0.9726562, 0.9726562, 0.9687500, 0.9687500, 0.9648438, 0.9648438, 0.9609375, 0.9609375, 0.9570312, 0.9570312, 0.9531250, 0.9531250, 0.9492188, 0.9453125, 0.9453125, 0.9414062, 0.9414062, 0.9375000, 0.9335938, 0.9335938, 0.9296875, 0.9296875, 0.9257812, 0.9218750, 0.9218750, 0.9179688, 0.9179688, 0.9140625, 0.9101562, 0.9101562, 0.9062500, 0.9062500, 0.9023438, 0.8984375, 0.8984375, 0.8945312, 0.8906250, 0.8906250, 0.8867188, 0.8828125, 0.8828125, 0.8789062, 0.8750000, 0.8710938, 0.8710938, 0.8671875, 0.8632812, 0.8632812, 0.8593750, 0.8554688, 0.8515625, 0.8515625, 0.8476562, 0.8476562, 0.8437500, 0.8398438, 0.8359375, 0.8359375, 0.8320312, 0.8281250, 0.8242188, 0.8242188, 0.8164062, 0.8164062, 0.8125000, 0.8125000, 0.8046875, 0.8046875, 0.8007812, 0.8007812, 0.7968750, 0.7890625, 0.7890625, 0.7851562, 0.7851562, 0.7812500, 0.7773438, 0.7695312, 0.7695312, 0.7656250, 0.7656250, 0.7617188, 0.7617188, 0.7578125, 0.7578125, 0.7578125, 0.7578125, 0.7578125, 0.7578125, 0.7578125, 0.7617188, 0.7578125, 0.7617188, 0.7617188, 0.7617188, 0.7656250, 0.7656250, 0.7656250, 0.7695312, 0.7695312, 0.7734375, 0.7734375, 0.7773438, 0.7773438, 0.7812500, 0.7812500, 0.7851562, 0.7890625, 0.7929688, 0.7968750, 0.7968750, 0.8007812, 0.8046875, 0.8085938, 0.8125000, 0.8203125, 0.8242188, 0.8281250, 0.8320312, 0.8398438, 0.8437500, 0.8476562, 0.8554688, 0.8593750, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8789062, 0.8789062, 0.8789062, 0.8789062, 0.8789062, 0.8789062, 0.8789062, 0.8828125, 0.8828125, 0.8828125, 0.8828125, 0.8828125, 0.8867188, 0.8867188, 0.8867188, 0.8867188, 0.8867188, 0.8867188, 0.8906250, 0.8906250, 0.8906250, 0.8906250, 0.8945312, 0.8945312, 0.8945312, 0.8945312, 0.8945312, 0.8984375, 0.8984375, 0.8984375, 0.8984375, 0.9023438, 0.9023438, 0.9023438, 0.9023438, 0.9062500, 0.9062500, 0.9062500, 0.9062500, 0.9101562, 0.9101562, 0.9101562, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9179688, 0.9179688, 0.9179688, 0.9218750, 0.9218750, 0.9218750, 0.9257812, 0.9257812, 0.9257812, 0.9296875, 0.9296875, 0.9179688, 0.9023438, 0.8867188, 0.8710938, 0.8515625, 
0.8359375, 0.8203125, 0.8007812, 0.7851562, 0.7656250, 0.7460938, 0.7304688, 0.7109375, 0.6914062, 0.6718750, 0.6523438, 0.6328125, 0.6132812, 0.5898438, 0.5703125, 0.5468750, 0.5273438, 0.5039062, 0.4804688, 0.4570312, 0.4335938, 0.4140625, 0.3867188, 0.3632812, 0.3320312, 0.3046875, 0.2890625, 0.2617188, 0.2382812, 0.2109375, 0.1835938, 0.1562500, 0.1289062, 0.1054688, 0.1054688]), array([ 0.9843750, 0.9843750, 0.9765625, 0.9765625, 0.9726562, 0.9726562, 0.9687500, 0.9687500, 0.9648438, 0.9648438, 0.9609375, 0.9609375, 0.9570312, 0.9570312, 0.9531250, 0.9531250, 0.9492188, 0.9492188, 0.9492188, 0.9453125, 0.9453125, 0.9414062, 0.9414062, 0.9414062, 0.9375000, 0.9375000, 0.9335938, 0.9335938, 0.9335938, 0.9296875, 0.9296875, 0.9296875, 0.9257812, 0.9257812, 0.9257812, 0.9257812, 0.9218750, 0.9218750, 0.9218750, 0.9218750, 0.9179688, 0.9179688, 0.9179688, 0.9179688, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9101562, 0.9101562, 0.9101562, 0.9062500, 0.9062500, 0.9062500, 0.9062500, 0.9023438, 0.9023438, 0.9023438, 0.8984375, 0.8984375, 0.8984375, 0.8984375, 0.8984375, 0.8945312, 0.8945312, 0.8945312, 0.8945312, 0.8906250, 0.8906250, 0.8906250, 0.8906250, 0.8906250, 0.8867188, 0.8867188, 0.8867188, 0.8867188, 0.8867188, 0.8828125, 0.8828125, 0.8828125, 0.8828125, 0.8828125, 0.8789062, 0.8828125, 0.8789062, 0.8789062, 0.8789062, 0.8789062, 0.8750000, 0.8789062, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8710938, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8671875, 0.8593750, 0.8515625, 0.8476562, 0.8398438, 0.8320312, 0.8242188, 0.8203125, 0.8125000, 0.8046875, 0.7968750, 0.7890625, 0.7812500, 0.7695312, 0.7617188, 0.7539062, 0.7460938, 0.7343750, 0.7265625, 0.7148438, 0.7070312, 0.6953125, 0.6875000, 0.6757812, 0.6640625, 0.6562500, 0.6445312, 0.6328125, 0.6210938, 0.6093750, 0.5976562, 0.5859375, 0.5703125, 0.5585938, 0.5468750, 0.5351562, 0.5195312, 0.5039062, 0.4921875, 0.4765625, 0.4609375, 0.4531250, 0.4375000, 0.4179688, 0.4140625, 0.4101562, 0.4101562, 0.4062500, 0.3984375, 0.3906250, 0.3867188, 0.3867188, 0.3789062, 0.3750000, 0.3671875, 0.3632812, 0.3632812, 0.3554688, 0.3515625, 0.3437500, 0.3437500, 0.3359375, 0.3320312, 0.3242188, 0.3242188, 0.3164062, 0.3125000, 0.3046875, 0.3046875, 0.2968750, 0.2929688, 0.2851562, 0.2812500, 0.2773438, 0.2734375, 0.2695312, 0.2617188, 0.2578125, 0.2539062, 0.2460938, 0.2421875, 0.2421875, 0.2304688, 0.2265625, 0.2265625, 0.2148438, 0.2109375, 0.2109375, 0.1992188, 0.1953125, 0.1914062, 0.1835938, 0.1796875, 0.1757812, 0.1679688, 0.1640625, 0.1601562, 0.1523438, 0.1484375, 0.1406250, 0.1367188, 0.1328125, 0.1250000, 0.1210938, 0.1171875, 0.1093750, 0.1054688, 0.0976562, 0.0937500, 0.0898438, 0.0859375, 0.0781250, 0.0703125, 0.0664062, 0.0585938, 0.0546875, 0.0507812, 0.0429688, 0.0390625, 0.0312500, 0.0273438, 0.0195312, 0.0156250, 0.0078125, 0.0039062, 0.0000000, 0.0000000]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 21 :: Hue Sat Value 1 ### color_map_luts['idl21'] = \ ( array([ 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9843750, 0.9648438, 0.9453125, 0.9257812, 0.9101562, 0.8906250, 0.8710938, 0.8554688, 0.8359375, 0.8203125, 0.8007812, 0.7851562, 0.7656250, 0.7500000, 0.7343750, 0.7187500, 0.6992188, 0.6835938, 0.6679688, 0.6523438, 0.6367188, 0.6210938, 0.6054688, 0.5898438, 0.5742188, 0.5625000, 0.5468750, 0.5312500, 0.5195312, 0.5039062, 0.4882812, 0.4765625, 0.4609375, 0.4492188, 0.4375000, 0.4218750, 0.4101562, 0.3984375, 0.3867188, 0.3710938, 0.3593750, 0.3476562, 0.3359375, 0.3320312, 0.3359375, 0.3398438, 0.3437500, 0.3476562, 0.3515625, 0.3554688, 0.3593750, 0.3632812, 0.3671875, 0.3710938, 0.3750000, 0.3789062, 0.3828125, 0.3867188, 0.3906250, 0.3945312, 0.3984375, 0.4023438, 0.4062500, 0.4101562, 0.4140625, 0.4179688, 0.4218750, 0.4257812, 0.4296875, 0.4335938, 0.4375000, 0.4414062, 0.4453125, 0.4492188, 0.4531250, 0.4570312, 0.4609375, 0.4648438, 0.4687500, 0.4726562, 0.4765625, 0.4804688, 0.4843750, 0.4882812, 0.4921875, 0.4960938, 0.5000000, 0.5039062, 0.5078125, 0.5117188, 0.5156250, 0.5195312, 0.5234375, 0.5273438, 0.5312500, 0.5351562, 0.5390625, 0.5429688, 0.5468750, 0.5507812, 0.5546875, 0.5585938, 0.5625000, 0.5664062, 0.5703125, 0.5742188, 0.5781250, 0.5820312, 0.5859375, 0.5898438, 0.5937500, 0.5976562, 0.6015625, 0.6054688, 0.6093750, 0.6132812, 0.6171875, 0.6210938, 0.6250000, 0.6289062, 0.6328125, 0.6367188, 0.6406250, 0.6445312, 0.6484375, 0.6523438, 0.6562500, 0.6601562, 0.6640625, 0.6718750, 0.6835938, 0.6914062, 0.7031250, 0.7148438, 0.7265625, 0.7343750, 0.7460938, 0.7578125, 0.7656250, 0.7773438, 0.7851562, 0.7929688, 0.8046875, 0.8125000, 0.8203125, 0.8320312, 0.8398438, 0.8476562, 0.8554688, 0.8632812, 0.8710938, 0.8789062, 0.8867188, 0.8945312, 0.8984375, 0.9062500, 0.9140625, 0.9218750, 0.9257812, 0.9335938, 0.9375000, 0.9453125, 0.9492188, 0.9570312, 0.9609375, 0.9648438, 0.9687500, 0.9765625, 0.9804688, 0.9843750, 0.9882812, 0.9921875, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 
0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938]), array([ 0.0000000, 0.0000000, 0.0039062, 0.0078125, 0.0117188, 0.0156250, 0.0195312, 0.0234375, 0.0273438, 0.0312500, 0.0351562, 0.0390625, 0.0429688, 0.0468750, 0.0507812, 0.0546875, 0.0585938, 0.0625000, 0.0664062, 0.0703125, 0.0742188, 0.0781250, 0.0820312, 0.0859375, 0.0898438, 0.0937500, 0.0976562, 0.1015625, 0.1054688, 0.1093750, 0.1132812, 0.1171875, 0.1210938, 0.1250000, 0.1289062, 0.1328125, 0.1367188, 0.1406250, 0.1445312, 0.1484375, 0.1523438, 0.1562500, 0.1601562, 0.1640625, 0.1679688, 0.1718750, 0.1757812, 0.1796875, 0.1835938, 0.1875000, 0.1914062, 0.1953125, 0.1992188, 0.2031250, 0.2070312, 0.2109375, 0.2148438, 0.2187500, 0.2226562, 0.2265625, 0.2304688, 0.2343750, 0.2382812, 0.2421875, 0.2460938, 0.2500000, 0.2500000, 0.2539062, 0.2578125, 0.2617188, 0.2656250, 0.2695312, 0.2734375, 0.2773438, 0.2812500, 0.2851562, 0.2890625, 0.2929688, 0.2968750, 0.3007812, 0.3046875, 0.3085938, 0.3125000, 0.3164062, 0.3203125, 0.3242188, 0.3281250, 0.3398438, 0.3593750, 0.3789062, 0.3984375, 0.4179688, 0.4335938, 0.4531250, 0.4726562, 0.4882812, 0.5078125, 0.5234375, 0.5429688, 0.5585938, 0.5742188, 0.5937500, 0.6093750, 0.6250000, 0.6406250, 0.6562500, 0.6718750, 0.6875000, 0.7031250, 0.7187500, 0.7343750, 0.7500000, 0.7656250, 0.7773438, 0.7929688, 0.8085938, 0.8203125, 0.8359375, 0.8476562, 0.8632812, 0.8750000, 0.8906250, 0.9023438, 0.9140625, 0.9296875, 0.9414062, 0.9531250, 0.9648438, 0.9765625, 0.9882812, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9921875, 0.9882812, 0.9843750, 0.9804688, 0.9804688, 0.9765625, 0.9726562, 0.9726562, 0.9687500, 0.9648438, 0.9648438, 0.9609375, 0.9609375, 0.9609375, 0.9570312, 0.9570312, 0.9570312, 0.9570312, 0.9531250, 0.9531250, 0.9531250, 0.9531250, 0.9531250, 0.9531250, 0.9570312, 0.9570312, 0.9570312, 0.9570312, 0.9609375, 0.9609375, 0.9609375, 0.9648438, 0.9648438, 0.9687500, 0.9726562, 0.9726562, 0.9765625, 0.9804688, 0.9804688, 0.9804688]), array([ 0.0117188, 0.0117188, 0.0390625, 0.0664062, 0.0898438, 0.1171875, 0.1445312, 0.1718750, 0.1953125, 0.2226562, 0.2460938, 0.2734375, 0.2968750, 0.3203125, 0.3476562, 0.3710938, 0.3945312, 0.4179688, 0.4453125, 0.4687500, 0.4921875, 0.5156250, 0.5390625, 0.5625000, 0.5820312, 0.6054688, 0.6289062, 0.6523438, 0.6718750, 0.6953125, 0.7187500, 0.7382812, 0.7617188, 0.7812500, 0.8007812, 0.8242188, 0.8437500, 0.8632812,
0.8867188, 0.9062500, 0.9257812, 0.9453125, 0.9648438, 0.9843750, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9882812, 0.9765625, 0.9648438, 0.9531250, 0.9453125, 0.9335938, 0.9218750, 0.9140625, 0.9023438, 0.8945312, 0.8828125, 0.8750000, 0.8632812, 0.8554688, 0.8476562, 0.8359375, 0.8281250, 0.8203125, 0.8125000, 0.8046875, 0.7968750, 0.7890625, 0.7812500, 0.7734375, 0.7656250, 0.7578125, 0.7500000, 0.7460938, 0.7382812, 0.7304688, 0.7265625, 0.7187500, 0.7148438, 0.7070312, 0.7031250, 0.6992188, 0.6914062, 0.6875000, 0.6835938, 0.6796875, 0.6757812, 0.6718750, 0.6679688, 0.6679688, 0.6718750, 0.6757812, 0.6796875, 0.6835938, 0.6875000, 0.6914062, 0.6953125, 0.6992188, 0.7031250, 0.7070312, 0.7109375, 0.7148438, 0.7187500, 0.7226562, 0.7265625, 0.7304688, 0.7343750, 0.7382812, 0.7421875, 0.7460938, 0.7460938, 0.7500000, 0.7539062, 0.7578125, 0.7617188, 0.7656250, 0.7695312, 0.7734375, 0.7773438, 0.7812500, 0.7851562, 0.7890625, 0.7929688, 0.7968750, 0.8007812, 0.8046875, 0.8085938, 0.8125000, 0.8164062, 0.8203125, 0.8242188, 0.8281250, 0.8320312, 0.8359375, 0.8398438, 0.8437500, 0.8476562, 0.8515625, 0.8554688, 0.8593750, 0.8632812, 0.8671875, 0.8710938, 0.8750000, 0.8789062, 0.8828125, 0.8867188, 0.8906250, 0.8945312, 0.8984375, 0.9023438, 0.9062500, 0.9101562, 0.9140625, 0.9179688, 0.9218750, 0.9257812, 0.9296875, 0.9335938, 0.9375000, 0.9414062, 0.9453125, 0.9492188, 0.9531250, 0.9570312, 0.9609375, 0.9648438, 0.9687500, 0.9726562, 0.9765625, 0.9804688, 0.9804688]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 22 :: Hue Sat Value 2 ### color_map_luts['idl22'] = \ ( array([ 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9921875, 0.9882812, 0.9843750, 0.9804688, 0.9726562, 0.9687500, 0.9648438, 0.9609375, 0.9531250, 0.9492188, 0.9414062, 0.9375000, 0.9296875, 0.9257812, 0.9179688, 0.9101562, 0.9023438, 0.8984375, 0.8906250, 0.8828125, 0.8750000, 0.8671875, 0.8593750, 0.8515625, 0.8437500, 0.8320312, 0.8242188, 0.8164062, 0.8085938, 0.7968750, 0.7890625, 0.7773438, 0.7695312, 0.7578125, 0.7500000, 0.7382812, 0.7304688, 0.7187500, 0.7070312, 0.6953125, 0.6835938, 0.6718750, 0.6640625, 0.6562500, 0.6523438, 0.6484375, 0.6445312, 0.6406250, 0.6367188, 0.6328125, 0.6289062, 0.6250000, 0.6210938, 0.6171875, 0.6132812, 0.6093750, 0.6054688, 0.6015625, 0.5976562, 0.5937500, 0.5898438, 0.5859375, 0.5820312, 0.5781250, 0.5742188, 0.5703125, 0.5664062, 0.5625000, 0.5585938, 0.5546875, 0.5507812, 0.5468750, 0.5429688, 0.5390625, 0.5351562, 0.5312500, 0.5273438, 0.5234375, 0.5195312, 0.5156250, 0.5117188, 0.5078125, 0.5039062, 0.5000000, 0.4960938, 0.4921875, 0.4882812, 0.4843750, 0.4804688, 0.4765625, 0.4726562, 0.4687500, 0.4648438, 0.4609375, 0.4570312, 0.4531250, 0.4492188, 0.4453125, 0.4414062, 0.4375000, 0.4335938, 0.4296875, 0.4257812, 0.4218750, 0.4179688, 0.4140625, 0.4101562, 0.4062500, 0.4023438, 0.3984375, 0.3945312, 0.3906250, 0.3867188, 0.3828125, 0.3789062, 0.3750000, 0.3710938, 0.3671875, 0.3632812, 0.3593750, 0.3554688, 0.3515625, 0.3476562, 0.3437500, 0.3398438, 0.3359375, 0.3320312, 0.3320312, 0.3281250, 0.3281250, 0.3437500, 0.3554688, 0.3671875, 0.3789062, 0.3906250, 0.4023438, 0.4179688, 0.4296875, 0.4453125, 0.4570312, 0.4726562, 0.4843750, 0.5000000, 0.5117188, 0.5273438, 0.5429688, 0.5585938, 0.5703125, 0.5859375, 0.6015625, 0.6171875, 0.6328125, 0.6484375, 0.6640625, 0.6796875, 0.6992188, 0.7148438, 0.7304688, 0.7460938, 0.7656250, 0.7812500, 0.8007812, 0.8164062, 0.8359375, 0.8515625, 0.8710938, 0.8906250, 0.9101562, 0.9257812, 0.9453125, 0.9648438, 0.9843750, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938]), array([ 0.9882812, 0.9882812, 0.9843750, 0.9804688, 0.9765625, 0.9726562, 0.9687500, 0.9648438, 0.9609375, 0.9570312, 0.9531250, 0.9492188, 0.9453125, 0.9414062, 0.9375000, 0.9335938, 0.9296875, 0.9257812, 0.9218750, 0.9179688, 0.9140625, 0.9101562, 0.9062500, 0.9023438, 0.8984375, 0.8945312, 0.8906250, 0.8867188, 0.8828125, 0.8789062, 0.8750000, 0.8710938, 0.8671875, 0.8632812, 0.8593750, 0.8554688, 0.8515625, 0.8476562, 0.8437500, 0.8398438, 0.8359375, 0.8320312, 0.8281250, 0.8242188, 0.8203125, 0.8164062, 
0.8125000, 0.8085938, 0.8046875, 0.8007812, 0.7968750, 0.7929688, 0.7890625, 0.7851562, 0.7812500, 0.7773438, 0.7734375, 0.7695312, 0.7656250, 0.7617188, 0.7578125, 0.7539062, 0.7500000, 0.7460938, 0.7421875, 0.7382812, 0.7343750, 0.7304688, 0.7265625, 0.7226562, 0.7187500, 0.7148438, 0.7109375, 0.7070312, 0.7031250, 0.6992188, 0.6953125, 0.6914062, 0.6875000, 0.6835938, 0.6796875, 0.6757812, 0.6718750, 0.6679688, 0.6640625, 0.6601562, 0.6601562, 0.6601562, 0.6640625, 0.6679688, 0.6718750, 0.6757812, 0.6796875, 0.6875000, 0.6914062, 0.6953125, 0.7031250, 0.7070312, 0.7148438, 0.7187500, 0.7265625, 0.7343750, 0.7382812, 0.7460938, 0.7539062, 0.7617188, 0.7695312, 0.7773438, 0.7851562, 0.7929688, 0.8007812, 0.8085938, 0.8164062, 0.8242188, 0.8359375, 0.8437500, 0.8515625, 0.8632812, 0.8710938, 0.8828125, 0.8906250, 0.9023438, 0.9101562, 0.9218750, 0.9335938, 0.9414062, 0.9531250, 0.9648438, 0.9765625, 0.9882812, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9843750, 0.9648438, 0.9453125, 0.9257812, 0.9062500, 0.8828125, 0.8632812, 0.8437500, 0.8242188, 0.8007812, 0.7812500, 0.7578125, 0.7382812, 0.7148438, 0.6953125, 0.6718750, 0.6484375, 0.6250000, 0.6054688, 0.5820312, 0.5585938, 0.5351562, 0.5117188, 0.4882812, 0.4648438, 0.4414062, 0.4179688, 0.3906250, 0.3671875, 0.3437500, 0.3164062, 0.2929688, 0.2695312, 0.2421875, 0.2187500, 0.1914062, 0.1640625, 0.1406250, 0.1132812, 0.1132812]), array([ 0.9882812, 0.9882812, 0.9843750, 0.9804688, 0.9765625, 0.9726562, 0.9687500, 0.9687500, 0.9648438, 0.9648438, 0.9609375, 0.9609375, 0.9570312, 0.9570312, 0.9531250, 0.9531250, 0.9531250, 0.9531250, 0.9492188, 0.9492188, 0.9492188, 0.9492188, 0.9492188, 0.9492188, 0.9492188, 0.9492188, 0.9531250, 0.9531250, 0.9531250, 0.9531250, 0.9570312, 0.9570312, 0.9609375, 0.9609375, 0.9648438, 0.9648438, 0.9687500, 0.9726562, 0.9765625, 0.9765625, 0.9804688, 0.9843750, 0.9882812, 0.9921875, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 
0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9882812, 0.9765625, 0.9648438, 0.9531250, 0.9414062, 0.9257812, 0.9140625, 0.9023438, 0.8867188, 0.8750000, 0.8593750, 0.8476562, 0.8320312, 0.8203125, 0.8046875, 0.7929688, 0.7773438, 0.7617188, 0.7460938, 0.7304688, 0.7148438, 0.6992188, 0.6835938, 0.6679688, 0.6523438, 0.6367188, 0.6210938, 0.6054688, 0.5859375, 0.5703125, 0.5546875, 0.5351562, 0.5195312, 0.5000000, 0.4843750, 0.4648438, 0.4453125, 0.4296875, 0.4101562, 0.3906250, 0.3710938, 0.3515625, 0.3359375, 0.3242188, 0.3203125, 0.3164062, 0.3125000, 0.3085938, 0.3046875, 0.3007812, 0.2968750, 0.2929688, 0.2890625, 0.2851562, 0.2812500, 0.2773438, 0.2734375, 0.2695312, 0.2656250, 0.2617188, 0.2578125, 0.2539062, 0.2500000, 0.2460938, 0.2421875, 0.2382812, 0.2343750, 0.2304688, 0.2265625, 0.2226562, 0.2187500, 0.2148438, 0.2109375, 0.2070312, 0.2031250, 0.1992188, 0.1953125, 0.1914062, 0.1875000, 0.1835938, 0.1796875, 0.1757812, 0.1718750, 0.1679688, 0.1640625, 0.1601562, 0.1562500, 0.1523438, 0.1484375, 0.1445312, 0.1406250, 0.1367188, 0.1328125, 0.1289062, 0.1250000, 0.1210938, 0.1171875, 0.1132812, 0.1093750, 0.1054688, 0.1015625, 0.0976562, 0.0937500, 0.0898438, 0.0859375, 0.0820312, 0.0781250, 0.0742188, 0.0703125, 0.0664062, 0.0625000, 0.0585938, 0.0546875, 0.0507812, 0.0468750, 0.0429688, 0.0390625, 0.0351562, 0.0312500, 0.0273438, 0.0234375, 0.0195312, 0.0156250, 0.0117188, 0.0078125, 0.0078125]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 23 :: Purple-Red + Stripes ### color_map_luts['idl23'] = \ ( array([ 0.0000000, 0.0000000, 0.4960938, 0.7460938, 0.7343750, 0.7265625, 0.7187500, 0.7109375, 0.6992188, 0.6914062, 0.5468750, 0.5390625, 0.6640625, 0.6562500, 0.6445312, 0.6367188, 0.6289062, 0.6171875, 0.6093750, 0.6015625, 0.4726562, 0.4648438, 0.5742188, 0.5664062, 0.5546875, 0.5468750, 0.5390625, 0.5273438, 0.5195312, 0.5117188, 0.4023438, 0.3945312, 0.4843750, 0.4765625, 0.4648438, 0.4570312, 0.4492188, 0.4375000, 0.4296875, 0.4218750, 0.3281250, 0.3203125, 0.3945312, 0.3867188, 0.3750000, 0.3671875, 0.3593750, 0.3476562, 0.3398438, 0.3320312, 0.2578125, 0.2500000, 0.3046875,
0.2968750, 0.2851562, 0.2773438, 0.2695312, 0.2578125, 0.2500000, 0.2421875, 0.1835938, 0.1796875, 0.2148438, 0.2031250, 0.1953125, 0.1875000, 0.1796875, 0.1679688, 0.1601562, 0.1523438, 0.1132812, 0.1054688, 0.1250000, 0.1132812, 0.1054688, 0.0976562, 0.0898438, 0.0781250, 0.0703125, 0.0625000, 0.0429688, 0.0351562, 0.0351562, 0.0234375, 0.0156250, 0.0078125, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0156250, 0.0468750, 0.0703125, 0.0937500, 0.1171875, 0.1406250, 0.1640625, 0.1875000, 0.2109375, 0.1875000, 0.2070312, 0.2812500, 0.3046875, 0.3320312, 0.3554688, 0.3789062, 0.4023438, 0.4257812, 0.4492188, 0.3789062, 0.3984375, 0.5195312, 0.5429688, 0.5664062, 0.5898438, 0.6132812, 0.6367188, 0.6640625, 0.6875000, 0.5664062, 0.5859375, 0.7578125, 0.7812500, 0.8046875, 0.8281250, 0.8515625, 0.8750000, 0.8984375, 0.9218750, 0.7578125, 0.7773438, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.7968750, 0.7968750, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.7968750, 0.7968750, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.7968750, 0.7968750, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.7968750, 0.7968750, 0.9960938, 0.9960938, 0.9960938, 0.9960938]), array([ 0.0000000, 0.0000000, 0.4960938, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0234375, 0.0468750, 0.0703125, 0.0742188, 0.0937500, 0.1406250, 0.1640625, 0.1875000, 0.2109375, 0.2343750, 0.2578125, 0.2812500, 0.3046875, 0.2656250, 0.2812500, 0.3789062, 0.4023438, 0.4257812, 0.4492188, 0.4726562, 0.4960938, 0.5195312, 0.5429688, 0.4531250, 0.4726562, 0.6132812, 0.6367188, 0.6640625, 0.6875000, 0.7109375, 0.7343750, 0.7578125, 
0.7812500, 0.6445312, 0.6640625, 0.8515625, 0.8750000, 0.8984375, 0.9218750, 0.9453125, 0.9687500, 0.9960938, 0.9960938, 0.7968750, 0.7968750, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.7968750, 0.7968750, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.7968750, 0.7968750, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.7968750, 0.7968750, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.7968750, 0.7968750, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.7968750, 0.7968750, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.7968750, 0.7968750, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.7968750, 0.7968750, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.7968750, 0.7968750, 0.9960938, 0.9687500, 0.9453125, 0.9179688, 0.8945312, 0.8710938, 0.8437500, 0.8203125, 0.6367188, 0.6171875, 0.7460938, 0.7187500, 0.6953125, 0.6718750, 0.6445312, 0.6210938, 0.5976562, 0.5703125, 0.4375000, 0.4179688, 0.4960938, 0.4726562, 0.4492188, 0.4296875, 0.4062500, 0.3828125, 0.3593750, 0.3359375, 0.2500000, 0.2343750, 0.2695312, 0.2460938, 0.2226562, 0.2031250, 0.1796875, 0.1562500, 0.1328125, 0.1093750, 0.0703125, 0.0507812, 0.0429688, 0.0195312, 0.0000000, 0.0000000]), array([ 0.0000000, 0.0000000, 0.4960938, 0.7460938, 0.7500000, 0.7500000, 0.7539062, 0.7578125, 0.7617188, 0.7617188, 0.6132812, 0.6132812, 0.7734375, 0.7734375, 0.7773438, 0.7812500, 0.7851562, 0.7890625, 0.7890625, 0.7929688, 0.6367188, 0.6406250, 0.8007812, 0.8046875, 0.8085938, 0.8125000, 0.8125000, 0.8164062, 0.8203125, 0.8242188, 0.6601562, 0.6640625, 0.8320312, 0.8359375, 0.8398438, 0.8398438, 0.8437500, 0.8476562, 0.8515625, 0.8515625, 0.6835938, 0.6875000, 0.8632812, 0.8632812, 0.8671875, 0.8710938, 0.8750000, 0.8789062, 0.8789062, 0.8828125, 0.7070312, 0.7109375, 0.8906250, 0.8945312, 0.8984375, 0.9023438, 0.9023438, 0.9062500, 0.9101562, 0.9140625, 0.7343750, 0.7343750, 0.9218750, 0.9257812, 0.9296875, 0.9296875, 0.9335938, 0.9375000, 0.9414062, 0.9414062, 0.7578125, 0.7578125, 0.9531250, 0.9570312, 0.9570312, 0.9609375, 0.9648438, 0.9687500, 0.9687500, 0.9726562, 0.7812500, 0.7812500, 0.9804688, 0.9843750, 0.9882812, 0.9921875, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.7968750, 0.7968750, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.7968750, 0.7968750, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.7968750, 0.7968750, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.7968750, 0.7968750, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9687500, 0.7578125, 0.7382812, 0.8984375, 0.8750000, 0.8515625, 0.8281250, 0.8046875, 0.7812500, 0.7578125, 0.7343750, 0.5664062, 0.5468750, 0.6601562, 0.6367188, 0.6132812, 0.5898438, 0.5664062, 0.5429688, 0.5195312, 0.4960938, 0.3789062, 0.3593750, 0.4257812, 0.4023438, 0.3789062, 0.3554688, 0.3281250, 0.3046875, 0.2812500, 0.2578125, 0.1875000, 0.1679688, 0.1875000, 0.1640625, 0.1406250, 0.1171875, 0.0937500, 0.0703125, 0.0468750, 0.0234375, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 
0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0078125, 0.0195312, 0.0273438, 0.0390625, 0.0468750, 0.0585938, 0.0664062, 0.0625000, 0.0703125, 0.0976562, 0.1093750, 0.1171875, 0.1289062, 0.1367188, 0.1484375, 0.1562500, 0.1679688, 0.1406250, 0.1484375, 0.1992188, 0.1875000, 0.1796875, 0.1718750, 0.1601562, 0.1523438, 0.1445312, 0.1328125, 0.0976562, 0.0937500, 0.1054688, 0.0976562, 0.0898438, 0.0781250, 0.0703125, 0.0625000, 0.0507812, 0.0429688, 0.0273438, 0.0195312, 0.0156250, 0.0078125, 0.0000000, 0.0000000]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 24 :: Beach ### color_map_luts['idl24'] = \ ( array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0156250, 0.0312500, 0.0468750, 0.0664062, 0.0820312, 0.0976562, 0.1132812, 0.1328125, 0.1484375, 0.1640625, 0.1796875, 0.1992188, 0.2148438, 0.2304688, 0.2500000, 0.2656250, 0.2812500, 0.2968750, 0.3164062, 0.3320312, 0.3476562, 0.3632812, 0.3828125, 0.3984375, 0.4140625, 0.4296875, 0.4492188, 0.4648438, 0.4804688, 0.5000000, 0.5156250, 0.5312500, 0.5468750, 0.2968750, 
0.2968750, 0.2968750, 0.2968750, 0.3164062, 0.3320312, 0.3476562, 0.3632812, 0.3828125, 0.3984375, 0.4140625, 0.4296875, 0.4492188, 0.4648438, 0.4804688, 0.5000000, 0.5156250, 0.5312500, 0.5468750, 0.5664062, 0.5820312, 0.5976562, 0.6132812, 0.6132812, 0.6289062, 0.6406250, 0.6523438, 0.6640625, 0.6757812, 0.6914062, 0.7031250, 0.7148438, 0.7265625, 0.7382812, 0.7539062, 0.7656250, 0.7773438, 0.7890625, 0.8007812, 0.8164062, 0.8281250, 0.8398438, 0.8515625, 0.8515625, 0.8554688, 0.8593750, 0.8671875, 0.8710938, 0.8750000, 0.8789062, 0.8828125, 0.8867188, 0.8945312, 0.8984375, 0.9023438, 0.9062500, 0.9101562, 0.9140625, 0.9218750, 0.9257812, 0.9296875, 0.9335938, 0.9375000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000]), array([ 0.9804688, 0.9804688, 0.9726562, 0.9648438, 0.9570312, 0.9492188, 0.9414062, 0.9296875, 0.9218750, 0.9140625, 0.9062500, 0.8984375, 0.8906250, 0.8828125, 0.8750000, 0.8632812, 0.8554688, 0.8476562, 0.8398438, 0.8320312, 0.8242188, 0.8164062, 0.8046875, 0.7968750, 0.7890625, 0.7812500, 0.7734375, 0.7656250, 0.7578125, 0.7500000, 0.7382812, 0.7304688, 0.7226562, 0.7148438, 0.7070312, 0.6992188, 0.6914062, 0.6796875, 0.6718750, 0.6640625, 0.6562500, 0.6484375, 0.6406250, 0.6328125, 0.6250000, 0.6132812, 0.6054688, 0.5976562, 0.5898438, 0.5820312, 0.5742188, 0.5664062, 0.5546875, 0.5468750, 0.5390625, 0.5312500, 0.5234375, 0.5156250, 0.5078125, 0.5000000, 0.5000000, 0.5039062, 0.5117188, 0.5195312, 0.5234375, 0.5312500, 0.5390625, 0.5429688, 0.5507812, 0.5585938, 0.5664062, 0.5703125, 0.5781250, 0.5859375, 0.5898438, 0.5976562, 0.6054688, 0.6132812, 0.6171875, 0.6250000, 0.6328125, 0.6367188, 0.6445312, 0.6523438, 0.6562500, 0.6640625, 0.6718750, 0.6796875, 0.6835938, 0.6914062, 0.6992188, 0.7031250, 0.7109375, 0.7109375, 0.6914062, 0.6679688, 0.6445312, 0.6250000, 0.6015625, 0.5820312, 0.5585938, 0.5390625, 0.5156250, 0.4921875, 0.4726562, 0.4492188, 0.4296875, 0.4062500, 0.3867188, 0.3632812, 0.3437500, 0.3203125, 0.2968750, 0.2773438, 0.2539062, 0.2343750, 0.2109375, 0.1914062, 0.1679688, 0.1445312, 0.1250000, 0.1015625, 0.0820312, 0.0585938, 0.0390625, 0.0156250, 0.0000000, 0.0000000, 0.0000000, 0.0156250, 0.0312500, 0.0468750, 0.0664062, 0.0820312, 0.0976562, 0.1132812, 0.1328125, 0.1484375, 0.1640625, 0.1796875, 0.1992188, 0.2148438, 0.2304688, 0.2500000, 0.2656250, 0.2812500, 0.2968750, 0.3164062, 0.3164062, 0.3320312, 0.3476562, 0.3632812, 0.3828125, 0.3984375, 0.4140625, 0.4296875, 0.4492188, 0.4648438, 0.4804688, 0.5000000, 0.5156250, 0.5312500, 0.5468750, 0.5664062, 0.5820312, 0.5976562, 0.6132812, 0.6328125, 0.6328125, 0.6484375, 0.6640625, 0.6796875, 0.6992188, 0.7148438, 0.7304688, 0.7500000, 0.7656250, 0.7812500, 0.7968750, 0.8164062, 0.8320312, 0.8476562, 0.8632812, 0.8828125, 0.8984375, 0.9140625, 0.9296875, 0.9492188, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000,
0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000]), array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0156250, 0.0312500, 0.0468750, 0.0664062, 0.0820312, 0.0976562, 0.1132812, 0.1328125, 0.1484375, 0.1640625, 0.1796875, 0.1992188, 0.2148438, 0.2304688, 0.2500000, 0.2656250, 0.2812500, 0.2968750, 0.3164062, 0.3320312, 0.3476562, 0.3632812, 0.3828125, 0.3984375, 0.4140625, 0.4296875, 0.4492188, 0.4648438, 0.4804688, 0.5000000, 0.5156250, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.5312500, 0.2968750, 0.2968750, 0.2968750, 0.2812500, 0.2656250, 0.2500000, 0.2343750, 0.2187500, 0.2031250, 0.1875000, 0.1718750, 0.1562500, 0.1406250, 0.1250000, 0.1093750, 0.0898438, 0.0742188, 0.0585938, 0.0429688, 0.0273438, 0.0117188, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0156250, 0.0312500, 0.0468750, 0.0664062, 0.0820312, 0.0976562, 0.1132812, 0.1328125, 0.1484375, 0.1640625, 0.1796875, 0.1992188, 0.2148438, 0.2304688, 0.2500000, 0.2656250, 0.2812500, 0.2968750, 0.3164062, 0.3164062, 0.3281250, 0.3437500, 0.3554688, 0.3710938, 0.3828125, 0.3984375, 0.4101562, 0.4257812, 0.4414062, 0.4531250, 0.4687500, 0.4804688, 0.4960938, 0.5078125, 0.5234375, 0.5351562, 0.5507812, 0.5664062, 0.5781250, 0.5937500, 0.6054688, 0.6210938, 0.6210938, 0.6328125, 0.6484375, 0.6640625, 0.6757812, 0.6914062, 0.7070312, 0.7187500, 0.7343750, 0.7500000, 0.7617188, 0.7773438, 0.7929688, 0.8046875, 0.8203125, 0.8359375, 0.8476562, 0.8632812, 0.8789062, 0.8906250, 0.9062500, 0.9218750, 0.9335938, 0.0000000, 0.0000000, 0.0000000]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 25 :: Mac Style ### color_map_luts['idl25'] = \ ( array([ 0.4843750, 0.4843750, 0.4687500, 0.4492188, 0.4335938, 0.4140625, 0.3984375, 0.3789062, 0.3632812, 0.3437500, 0.3281250, 0.3085938, 0.2929688, 0.2734375, 0.2578125, 0.2382812, 0.2226562, 0.2031250, 0.1875000, 0.1679688, 0.1523438, 0.1328125, 0.1171875, 0.0976562, 0.0820312, 0.0625000, 0.0468750, 0.0273438, 0.0117188, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0039062, 0.0195312, 0.0351562, 0.0546875, 0.0742188, 0.0898438, 0.1093750, 0.1250000, 0.1445312, 0.1601562, 0.1796875, 0.1953125, 0.2148438, 0.2304688, 0.2500000, 0.2656250, 0.2851562, 0.3007812, 0.3203125, 0.3359375, 0.3554688, 0.3710938, 0.3906250, 0.4062500, 0.4257812, 0.4414062, 0.4609375, 0.4804688, 0.4921875, 0.5156250, 0.5273438, 0.5507812, 0.5625000, 0.5859375, 0.5976562, 0.6210938, 0.6328125, 0.6562500, 0.6679688, 0.6914062, 0.7031250, 0.7265625, 0.7382812, 0.7617188, 0.7734375, 0.7968750, 0.8085938, 0.8320312, 0.8437500, 0.8671875, 0.8789062, 0.9023438, 0.9140625, 0.9375000, 0.9492188, 0.9726562, 0.9843750, 0.9960938, 0.9921875, 0.9960938,
0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9960938, 0.9960938]), array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0078125, 0.0234375, 0.0429688, 0.0585938, 0.0781250, 0.0937500, 0.1132812, 0.1289062, 0.1484375, 0.1640625, 0.1835938, 0.1992188, 0.2187500, 0.2343750, 0.2539062, 0.2695312, 0.2890625, 0.3046875, 0.3242188, 0.3398438, 0.3593750, 0.3750000, 0.3945312, 0.4101562, 0.4296875, 0.4453125, 0.4648438, 0.4765625, 0.5000000, 0.5117188, 0.5351562, 0.5468750, 0.5703125, 0.5820312, 0.6054688, 0.6171875, 0.6406250, 0.6523438, 0.6757812, 0.6875000, 0.7109375, 0.7226562, 0.7460938, 0.7578125, 0.7812500, 0.7929688, 0.8164062, 0.8281250, 0.8515625, 0.8632812, 0.8867188, 0.8984375, 0.9218750, 0.9375000, 0.9570312, 0.9726562, 0.9921875, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9843750, 0.9648438, 0.9492188, 0.9296875, 0.9140625, 0.8945312, 0.8789062, 0.8593750, 0.8437500, 0.8242188, 0.8085938, 0.7890625, 0.7734375, 0.7539062, 0.7382812, 0.7187500, 0.7031250, 0.6835938, 0.6679688, 0.6484375, 0.6328125, 0.6132812, 0.5976562, 0.5781250, 0.5625000, 0.5429688, 0.5273438, 0.5117188, 0.4921875, 0.4765625, 0.4570312, 0.4414062, 0.4218750, 0.4062500, 0.3867188, 0.3710938, 0.3515625, 0.3359375, 0.3164062, 0.3007812, 0.2812500, 0.2656250, 0.2460938, 0.2304688, 0.2109375, 0.1953125, 0.1757812, 0.1601562, 0.1406250, 0.1250000, 0.1054688, 0.0898438, 0.0703125, 0.0546875, 0.0351562, 0.0195312, 0.0195312]), array([ 0.9960938, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 
0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9921875, 0.9960938, 0.9765625, 0.9648438, 0.9414062, 0.9296875, 0.9062500, 0.8945312, 0.8710938, 0.8593750, 0.8359375, 0.8242188, 0.8007812, 0.7890625, 0.7656250, 0.7539062, 0.7304688, 0.7187500, 0.6953125, 0.6835938, 0.6601562, 0.6484375, 0.6250000, 0.6132812, 0.5898438, 0.5781250, 0.5546875, 0.5429688, 0.5195312, 0.5078125, 0.4882812, 0.4726562, 0.4531250, 0.4375000, 0.4179688, 0.4023438, 0.3828125, 0.3671875, 0.3476562, 0.3320312, 0.3125000, 0.2968750, 0.2773438, 0.2617188, 0.2421875, 0.2265625, 0.2070312, 0.1914062, 0.1718750, 0.1562500, 0.1367188, 0.1210938, 0.1015625, 0.0859375, 0.0664062, 0.0507812, 0.0312500, 0.0156250, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 26 :: Eos A ### color_map_luts['idl26'] = \ ( array([ 0.0000000, 0.0000000, 0.4960938, 0.4804688, 0.4648438, 0.4453125, 0.4296875, 0.4140625, 0.3984375, 0.3789062, 0.3281250, 0.3125000, 0.2968750, 0.3125000, 0.2968750, 0.2812500, 0.2617188, 0.2460938, 0.2304688, 0.2148438, 0.1757812, 0.1640625, 0.1484375, 0.1484375, 0.1289062, 0.1132812, 0.0976562, 0.0820312, 0.0625000, 0.0468750, 0.0273438, 0.0117188, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0312500, 0.0664062, 0.0976562, 0.1328125, 0.1640625, 0.1992188, 0.2304688, 0.2382812, 0.2656250, 0.2968750, 0.3632812, 0.3984375, 0.4296875, 0.4648438, 0.4960938, 0.5312500, 0.5625000, 0.5351562, 0.5664062, 0.5976562, 0.6953125, 0.7304688, 0.7617188, 0.7968750, 0.8281250, 0.8632812, 0.8945312, 0.8359375, 0.8632812, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.8945312, 0.9882812, 0.9804688, 0.9726562, 0.9687500, 0.9609375, 0.9531250, 0.9453125, 0.8476562, 0.8398438, 0.8320312, 0.9218750, 0.9140625, 0.9062500, 0.8984375, 0.8945312, 0.8867188, 0.8789062, 0.7851562, 0.7812500, 0.7734375, 0.8515625, 0.8476562, 0.8398438, 0.8320312, 0.8242188, 0.8203125, 0.8125000, 0.7265625, 0.7187500, 0.7109375, 0.7851562, 0.7773438, 0.7734375, 0.7656250, 0.7578125, 0.7500000, 0.7460938, 0.6640625, 0.6601562, 0.6523438, 0.7187500, 0.7109375, 0.7031250, 0.6992188, 0.6914062, 0.6835938, 0.6796875, 0.6054688, 0.5976562, 0.5898438, 0.6523438, 0.6445312, 0.6367188, 0.6289062, 0.6250000, 0.6171875, 0.6093750, 0.5429688, 0.5390625, 0.5312500, 0.5820312, 0.5781250, 0.5703125, 0.5625000, 0.5585938, 0.5507812, 0.5429688, 0.4843750, 0.4765625, 0.4687500, 0.5156250, 0.5078125, 0.5078125]), array([ 0.0000000, 0.0000000, 0.4960938, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 
0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0664062, 0.1328125, 0.1992188, 0.2656250, 0.3320312, 0.3984375, 0.4648438, 0.4765625, 0.5351562, 0.5976562, 0.7304688, 0.7968750, 0.8632812, 0.9296875, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.8945312, 0.9765625, 0.9609375, 0.9453125, 0.9296875, 0.9101562, 0.8945312, 0.8789062, 0.7734375, 0.7617188, 0.7460938, 0.8125000, 0.7968750, 0.7773438, 0.7617188, 0.7460938, 0.7304688, 0.7109375, 0.6250000, 0.6093750, 0.5937500, 0.6445312, 0.6289062, 0.6132812, 0.5976562, 0.5781250, 0.5625000, 0.5468750, 0.4765625, 0.4609375, 0.4453125, 0.4882812, 0.4804688, 0.4726562, 0.4648438, 0.4531250, 0.4453125, 0.4375000, 0.3867188, 0.3789062, 0.3710938, 0.4062500, 0.3984375, 0.3867188, 0.3789062, 0.3710938, 0.3632812, 0.3554688, 0.3125000, 0.3046875, 0.2968750, 0.3203125, 0.3125000, 0.3046875, 0.2968750, 0.2890625, 0.2812500, 0.2734375, 0.2382812, 0.2304688, 0.2226562, 0.2382812, 0.2304688, 0.2226562, 0.2148438, 0.2070312, 0.1953125, 0.1875000, 0.1640625, 0.1562500, 0.1484375, 0.1562500, 0.1484375, 0.1406250, 0.1289062, 0.1210938, 0.1132812, 0.1054688, 0.0859375, 0.0820312, 0.0742188, 0.0742188, 0.0625000, 0.0546875, 0.0468750, 0.0507812, 0.0312500, 0.0234375, 0.0117188, 0.0039062, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000]), array([ 0.0000000, 0.0000000, 0.4960938, 0.7539062, 0.7617188, 0.7695312, 0.7773438, 0.7851562, 0.7968750, 0.8046875, 0.7304688, 0.7382812, 0.7460938, 0.8359375, 0.8437500, 0.8515625, 0.8632812, 0.8710938, 0.8789062, 0.8867188, 0.8046875, 0.8125000, 0.8203125, 0.9179688, 0.9296875, 0.9375000, 0.9453125, 0.9531250, 0.9609375, 0.9687500, 0.8789062, 0.8867188, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9296875, 0.8632812, 0.7148438, 0.6562500, 0.5937500, 0.5976562, 0.5273438, 0.4609375, 0.3945312, 0.3281250, 0.2617188, 0.1953125, 0.1171875, 0.0585938, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 
0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 27 :: Eos B ### color_map_luts['idl27'] = \ ( array([ 0.9960938, 0.9960938, 0.4960938, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 
0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0351562, 0.0703125, 0.1054688, 0.1406250, 0.1757812, 0.1914062, 0.2226562, 0.2812500, 0.3164062, 0.3554688, 0.3906250, 0.4257812, 0.4609375, 0.4960938, 0.5312500, 0.5117188, 0.5429688, 0.6367188, 0.6757812, 0.7109375, 0.7460938, 0.7812500, 0.8164062, 0.8515625, 0.8867188, 0.8320312, 0.8632812, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.9960938, 0.9882812, 0.9804688, 0.9726562, 0.9648438, 0.9570312, 0.9492188, 0.9414062, 0.8398438, 0.8359375, 0.9179688, 0.9140625, 0.9062500, 0.8984375, 0.8906250, 0.8828125, 0.8750000, 0.8671875, 0.7734375, 0.7656250, 0.8437500, 0.8398438, 0.8320312, 0.8242188, 0.8164062, 0.8085938, 0.8007812, 0.7929688, 0.7070312, 0.6992188, 0.7695312, 0.7656250, 0.7578125, 0.7500000, 0.7421875, 0.7343750, 0.7265625, 0.7187500, 0.6406250, 0.6328125, 0.6953125, 0.6875000, 0.6835938, 0.6757812, 0.6679688, 0.6601562, 0.6523438, 0.6445312, 0.5742188, 0.5664062, 0.6210938, 0.6132812, 0.6093750, 0.6015625, 0.5937500, 0.5859375, 0.5781250, 0.5703125, 0.5078125, 0.5000000, 0.5468750, 0.5390625, 0.5351562, 0.5273438, 0.5195312, 0.5117188, 0.5039062, 0.4960938, 0.4414062, 0.4335938, 0.4726562, 0.4648438, 0.4570312, 0.4570312]), array([ 0.9960938, 0.9960938, 0.4960938, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0273438, 0.0585938, 0.0898438, 0.1210938, 0.1523438, 0.1835938, 0.2148438, 0.2226562, 0.2500000, 0.3085938, 0.3398438, 0.3710938, 0.4023438, 0.4335938, 0.4648438, 0.4960938, 0.5273438, 0.5039062, 0.5312500, 0.6210938, 0.6523438, 0.6835938, 0.7148438, 0.7460938, 0.7773438, 0.8085938, 0.8398438, 0.7812500, 0.8085938, 0.9335938, 0.9648438, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.8945312, 0.8945312, 0.9960938, 0.9765625, 0.9609375, 
0.9453125, 0.9296875, 0.9101562, 0.8945312, 0.8789062, 0.7734375, 0.7617188, 0.8281250, 0.8125000, 0.7968750, 0.7773438, 0.7617188, 0.7460938, 0.7304688, 0.7109375, 0.6250000, 0.6093750, 0.6601562, 0.6445312, 0.6289062, 0.6132812, 0.5976562, 0.5781250, 0.5625000, 0.5468750, 0.4765625, 0.4609375, 0.4960938, 0.4882812, 0.4804688, 0.4726562, 0.4648438, 0.4531250, 0.4453125, 0.4375000, 0.3867188, 0.3789062, 0.4140625, 0.4062500, 0.3984375, 0.3867188, 0.3789062, 0.3710938, 0.3632812, 0.3554688, 0.3125000, 0.3046875, 0.3281250, 0.3203125, 0.3125000, 0.3046875, 0.2968750, 0.2890625, 0.2812500, 0.2734375, 0.2382812, 0.2304688, 0.2460938, 0.2382812, 0.2304688, 0.2226562, 0.2148438, 0.2070312, 0.1953125, 0.1875000, 0.1640625, 0.1562500, 0.1640625, 0.1562500, 0.1484375, 0.1406250, 0.1289062, 0.1210938, 0.1132812, 0.1054688, 0.0859375, 0.0820312, 0.0820312, 0.0742188, 0.0625000, 0.0546875, 0.0468750, 0.0507812, 0.0312500, 0.0234375, 0.0117188, 0.0039062, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000]), array([ 0.9960938, 0.9960938, 0.4960938, 0.5117188, 0.5312500, 0.5468750, 0.5625000, 0.5781250, 0.5976562, 0.6132812, 0.5664062, 0.5820312, 0.6640625, 0.6796875, 0.6953125, 0.7109375, 0.7304688, 0.7460938, 0.7617188, 0.7773438, 0.7148438, 0.7304688, 0.8281250, 0.8437500, 0.8632812, 0.8789062, 0.8945312, 0.9101562, 0.9296875, 0.9453125, 0.8632812, 0.8789062, 0.9960938, 0.9648438, 0.9335938, 0.9023438, 0.8710938, 0.8398438, 0.8085938, 0.7773438, 0.6718750, 0.6406250, 0.6835938, 0.6523438, 0.6210938, 0.5898438, 0.5585938, 0.5273438, 0.4960938, 0.4648438, 0.3906250, 0.3632812, 0.3710938, 0.3398438, 0.3085938, 0.2773438, 0.2460938, 0.2148438, 0.1835938, 0.1523438, 0.1093750, 0.0820312, 0.0585938, 0.0273438, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 
0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 28 :: Hardcandy ### color_map_luts['idl28'] = \ ( array([ 0.0156250, 0.0156250, 0.0195312, 0.0273438, 0.0351562, 0.0429688, 0.0468750, 0.0546875, 0.0585938, 0.0625000, 0.0664062, 0.0664062, 0.0703125, 0.0703125, 0.0703125, 0.0664062, 0.0625000, 0.0585938, 0.0546875, 0.0507812, 0.0429688, 0.0507812, 0.0312500, 0.0195312, 0.0117188, 0.0039062, 0.0000000, 0.0117188, 0.0195312, 0.0273438, 0.0507812, 0.0468750, 0.0507812, 0.0585938, 0.0664062, 0.0703125, 0.0742188, 0.0742188, 0.0742188, 0.0742188, 0.0742188, 0.0703125, 0.0625000, 0.0546875, 0.0468750, 0.0351562, 0.0195312, 0.0078125, 0.0078125, 0.0234375, 0.0429688, 0.0664062, 0.0898438, 0.1132812, 0.1406250, 0.1640625, 0.1914062, 0.2187500, 0.2500000, 0.2773438, 0.3046875, 0.3320312, 0.3632812, 0.3906250, 0.4140625, 0.4414062, 0.4648438, 0.4882812, 0.5078125, 0.5273438, 0.5429688, 0.5585938, 0.5703125, 0.5820312, 0.5859375, 0.5898438, 0.5937500, 0.5898438, 0.5859375, 0.5781250, 0.5664062, 0.5546875, 0.5390625, 0.5195312, 0.4960938, 0.4726562, 0.4453125, 0.4179688, 0.3867188, 0.3515625, 0.3203125, 0.2851562, 0.2460938, 0.2109375, 0.1718750, 0.1367188, 0.0976562, 0.0625000, 0.0234375, 0.0039062, 0.0507812, 0.0703125, 0.0976562, 
0.1250000, 0.1484375, 0.1718750, 0.1875000, 0.2031250, 0.2148438, 0.2187500, 0.2226562, 0.2226562, 0.2187500, 0.2070312, 0.1953125, 0.1757812, 0.1523438, 0.1289062, 0.0976562, 0.0625000, 0.0234375, 0.0117188, 0.0585938, 0.1054688, 0.1562500, 0.2070312, 0.2617188, 0.3164062, 0.3750000, 0.4335938, 0.4921875, 0.5507812, 0.6093750, 0.6640625, 0.7226562, 0.7773438, 0.8281250, 0.8789062, 0.9257812, 0.9687500, 0.9843750, 0.9453125, 0.9140625, 0.8867188, 0.8671875, 0.8476562, 0.8359375, 0.8281250, 0.8281250, 0.8320312, 0.8437500, 0.8593750, 0.8789062, 0.9062500, 0.9375000, 0.9726562, 0.9804688, 0.9335938, 0.8828125, 0.8320312, 0.7734375, 0.7148438, 0.6523438, 0.5859375, 0.5195312, 0.4531250, 0.3828125, 0.3164062, 0.2500000, 0.1835938, 0.1171875, 0.0546875, 0.0039062, 0.0585938, 0.1132812, 0.1640625, 0.2109375, 0.2500000, 0.2851562, 0.3164062, 0.3398438, 0.3554688, 0.3671875, 0.3710938, 0.3710938, 0.3593750, 0.3437500, 0.3203125, 0.2929688, 0.2578125, 0.2148438, 0.1640625, 0.1093750, 0.0507812, 0.0117188, 0.0820312, 0.1562500, 0.2343750, 0.3164062, 0.4023438, 0.4882812, 0.5742188, 0.6640625, 0.7539062, 0.8437500, 0.9335938, 0.9726562, 0.8906250, 0.8085938, 0.7304688, 0.6562500, 0.5898438, 0.5234375, 0.4648438, 0.4140625, 0.3710938, 0.3320312, 0.3007812, 0.2773438, 0.2617188, 0.2539062, 0.2578125, 0.2656250, 0.2812500, 0.3085938, 0.3398438, 0.3828125, 0.4296875, 0.4843750, 0.5468750, 0.6171875, 0.6914062, 0.7695312, 0.8554688, 0.9414062, 0.9570312, 0.8632812, 0.7656250, 0.6679688, 0.5664062, 0.4687500, 0.3710938, 0.2734375, 0.1796875, 0.0898438, 0.0039062, 0.0742188, 0.1523438, 0.2226562, 0.2890625, 0.3476562, 0.3984375, 0.4375000, 0.4726562, 0.4960938, 0.4960938]), array([ 0.1992188, 0.1992188, 0.2265625, 0.2539062, 0.2812500, 0.3085938, 0.3320312, 0.3593750, 0.3867188, 0.4101562, 0.4335938, 0.4570312, 0.4843750, 0.5039062, 0.5273438, 0.5507812, 0.5703125, 0.5937500, 0.6132812, 0.6328125, 0.6523438, 0.6679688, 0.6875000, 0.7031250, 0.7187500, 0.7343750, 0.7460938, 0.7617188, 0.7734375, 0.7851562, 0.7929688, 0.8046875, 0.8125000, 0.8203125, 0.8281250, 0.8320312, 0.8359375, 0.8398438, 0.8437500, 0.8437500, 0.8437500, 0.8437500, 0.8437500, 0.8398438, 0.8398438, 0.8320312, 0.8281250, 0.8203125, 0.8164062, 0.8046875, 0.7968750, 0.7851562, 0.7773438, 0.7617188, 0.7500000, 0.7382812, 0.7226562, 0.7070312, 0.6914062, 0.6718750, 0.6562500, 0.6367188, 0.6171875, 0.5976562, 0.5781250, 0.5546875, 0.5351562, 0.5117188, 0.4882812, 0.4648438, 0.4414062, 0.4140625, 0.3906250, 0.3632812, 0.3398438, 0.3125000, 0.2851562, 0.2617188, 0.2343750, 0.2070312, 0.1796875, 0.1523438, 0.1250000, 0.0976562, 0.0703125, 0.0429688, 0.0156250, 0.0078125, 0.0351562, 0.0625000, 0.0859375, 0.1132812, 0.1406250, 0.1640625, 0.1914062, 0.2148438, 0.2382812, 0.2656250, 0.2890625, 0.3085938, 0.3320312, 0.3554688, 0.3750000, 0.3945312, 0.4140625, 0.4335938, 0.4531250, 0.4687500, 0.4882812, 0.5039062, 0.5195312, 0.5312500, 0.5468750, 0.5585938, 0.5703125, 0.5820312, 0.5898438, 0.5976562, 0.6054688, 0.6132812, 0.6210938, 0.6250000, 0.6289062, 0.6328125, 0.6328125, 0.6328125, 0.6328125, 0.6328125, 0.6328125, 0.6289062, 0.6250000, 0.6210938, 0.6132812, 0.6054688, 0.5976562, 0.5898438, 0.5781250, 0.5703125, 0.5585938, 0.5429688, 0.5312500, 0.5156250, 0.5000000, 0.4843750, 0.4687500, 0.4492188, 0.4335938, 0.4140625, 0.3945312, 0.3710938, 0.3515625, 0.3281250, 0.3085938, 0.2851562, 0.2617188, 0.2382812, 0.2109375, 0.1875000, 0.1640625, 0.1367188, 0.1093750, 0.0859375, 0.0585938, 0.0312500, 0.0039062, 0.0195312, 0.0468750, 0.0742188, 0.1015625, 
0.1289062, 0.1562500, 0.1835938, 0.2109375, 0.2382812, 0.2617188, 0.2890625, 0.3164062, 0.3437500, 0.3671875, 0.3945312, 0.4179688, 0.4414062, 0.4687500, 0.4921875, 0.5117188, 0.5351562, 0.5585938, 0.5781250, 0.6015625, 0.6210938, 0.6406250, 0.6562500, 0.6757812, 0.6914062, 0.7070312, 0.7226562, 0.7382812, 0.7539062, 0.7656250, 0.7773438, 0.7890625, 0.7968750, 0.8085938, 0.8164062, 0.8242188, 0.8281250, 0.8359375, 0.8398438, 0.8437500, 0.8437500, 0.8437500, 0.8437500, 0.8437500, 0.8437500, 0.8398438, 0.8359375, 0.8320312, 0.8281250, 0.8203125, 0.8125000, 0.8046875, 0.7929688, 0.7851562, 0.7734375, 0.7578125, 0.7460938, 0.7304688, 0.7187500, 0.7031250, 0.6835938, 0.6679688, 0.6484375, 0.6289062, 0.6093750, 0.5898438, 0.5703125, 0.5468750, 0.5273438, 0.5039062, 0.4804688, 0.4570312, 0.4335938, 0.4062500, 0.3828125, 0.3554688, 0.3320312, 0.3046875, 0.2773438, 0.2500000, 0.2226562, 0.1953125, 0.1718750, 0.1445312, 0.1171875, 0.0898438, 0.0898438]), array([ 0.4531250, 0.4531250, 0.4101562, 0.3671875, 0.3281250, 0.2890625, 0.2500000, 0.2148438, 0.1796875, 0.1484375, 0.1171875, 0.0937500, 0.0703125, 0.0468750, 0.0312500, 0.0195312, 0.0078125, 0.0000000, 0.0000000, 0.0000000, 0.0039062, 0.0117188, 0.0234375, 0.0507812, 0.0546875, 0.0781250, 0.1015625, 0.1289062, 0.1601562, 0.1953125, 0.2304688, 0.2656250, 0.3046875, 0.3437500, 0.3867188, 0.4257812, 0.4687500, 0.5117188, 0.5546875, 0.5976562, 0.6367188, 0.6796875, 0.7187500, 0.7539062, 0.7890625, 0.8242188, 0.8515625, 0.8828125, 0.9062500, 0.9296875, 0.9492188, 0.9648438, 0.9765625, 0.9843750, 0.9882812, 0.9882812, 0.9882812, 0.9804688, 0.9726562, 0.9609375, 0.9414062, 0.9218750, 0.8984375, 0.8750000, 0.8437500, 0.8125000, 0.7812500, 0.7421875, 0.7070312, 0.6679688, 0.6250000, 0.5859375, 0.5429688, 0.5000000, 0.4570312, 0.4140625, 0.3750000, 0.3320312, 0.2929688, 0.2539062, 0.2187500, 0.1835938, 0.1523438, 0.1210938, 0.0937500, 0.0703125, 0.0507812, 0.0351562, 0.0195312, 0.0078125, 0.0000000, 0.0000000, 0.0000000, 0.0039062, 0.0117188, 0.0195312, 0.0351562, 0.0546875, 0.0742188, 0.0976562, 0.1250000, 0.1562500, 0.1875000, 0.2226562, 0.2617188, 0.2968750, 0.3398438, 0.3789062, 0.4218750, 0.4648438, 0.5078125, 0.5468750, 0.5898438, 0.6328125, 0.6718750, 0.7109375, 0.7500000, 0.7851562, 0.8164062, 0.8476562, 0.8789062, 0.9023438, 0.9257812, 0.9453125, 0.9609375, 0.9726562, 0.9843750, 0.9882812, 0.9921875, 0.9882812, 0.9843750, 0.9726562, 0.9609375, 0.9453125, 0.9257812, 0.9023438, 0.8789062, 0.8476562, 0.8164062, 0.7851562, 0.7500000, 0.7109375, 0.6718750, 0.6328125, 0.5898438, 0.5468750, 0.5078125, 0.4648438, 0.4218750, 0.3789062, 0.3398438, 0.2968750, 0.2617188, 0.2226562, 0.1875000, 0.1562500, 0.1250000, 0.0976562, 0.0742188, 0.0546875, 0.0351562, 0.0195312, 0.0117188, 0.0039062, 0.0000000, 0.0000000, 0.0000000, 0.0078125, 0.0195312, 0.0351562, 0.0507812, 0.0703125, 0.0937500, 0.1210938, 0.1523438, 0.1835938, 0.2187500, 0.2539062, 0.2929688, 0.3320312, 0.3750000, 0.4140625, 0.4570312, 0.5000000, 0.5429688, 0.5859375, 0.6250000, 0.6679688, 0.7070312, 0.7421875, 0.7812500, 0.8125000, 0.8437500, 0.8750000, 0.8984375, 0.9218750, 0.9414062, 0.9609375, 0.9726562, 0.9804688, 0.9882812, 0.9882812, 0.9882812, 0.9843750, 0.9765625, 0.9648438, 0.9492188, 0.9296875, 0.9062500, 0.8828125, 0.8515625, 0.8242188, 0.7890625, 0.7539062, 0.7187500, 0.6796875, 0.6367188, 0.5976562, 0.5546875, 0.5117188, 0.4687500, 0.4257812, 0.3867188, 0.3437500, 0.3046875, 0.2656250, 0.2304688, 0.1953125, 0.1601562, 0.1289062, 0.1015625, 0.0781250, 0.0546875, 0.0507812, 0.0234375, 
0.0117188, 0.0039062, 0.0000000, 0.0000000, 0.0000000, 0.0078125, 0.0195312, 0.0312500, 0.0468750, 0.0703125, 0.0937500, 0.1171875, 0.1484375, 0.1796875, 0.2148438, 0.2500000, 0.2890625, 0.3281250, 0.3671875, 0.4101562, 0.4101562]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 29 :: Nature ### color_map_luts['idl29'] = \ ( array([ 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4296875, 0.4453125, 0.4804688, 0.4960938, 0.5117188, 0.5273438, 0.5585938, 0.5742188, 0.5859375, 0.6132812, 0.6250000, 0.6367188, 0.6445312, 0.6640625, 0.6718750, 0.6796875, 0.6875000, 0.6992188, 0.7031250, 0.7031250, 0.7070312, 0.7109375, 0.7070312, 0.7070312, 0.7031250, 0.6992188, 0.6914062, 0.6796875, 0.6718750, 0.6640625, 0.6562500, 0.6367188, 0.6250000, 0.6132812, 0.6015625, 0.5742188, 0.5585938, 0.5429688, 0.5117188, 0.4960938, 0.4804688, 0.4648438, 0.4296875, 0.4101562, 0.3906250, 0.3750000, 0.3398438, 0.3203125, 0.3007812, 0.2656250, 0.2460938, 0.2304688, 0.2109375, 0.1796875, 0.1640625, 0.1484375, 0.1171875, 0.1015625, 0.0898438, 0.0742188, 0.0507812, 0.0507812, 0.0312500, 0.0195312, 0.0039062, 0.0000000, 0.0078125, 0.0195312, 0.0234375, 0.0234375, 0.0273438, 0.0312500, 0.0273438, 0.0273438, 0.0234375, 0.0195312, 0.0117188, 0.0078125, 0.0039062, 0.0117188, 0.0195312, 0.0312500, 0.0507812, 0.0625000, 0.0742188, 0.1015625, 0.1171875, 0.1328125, 0.1484375, 0.1796875, 0.1953125, 0.2109375, 0.2304688, 0.2656250, 0.2851562, 0.3007812, 0.3359375, 0.3554688, 0.3750000, 0.3906250, 0.4296875, 0.4453125, 0.4648438, 0.4960938, 0.5117188, 0.5273438, 0.5429688, 0.5742188, 0.5859375, 0.6015625, 0.6132812, 0.6367188, 0.6445312, 0.6562500, 0.6718750, 0.6796875, 0.6875000, 0.6914062, 0.7031250, 0.7031250, 0.7070312, 0.7109375, 0.7070312, 0.7070312, 0.7031250, 0.6992188, 0.6914062, 0.6875000, 0.6796875, 0.6640625, 0.6562500, 0.6445312, 0.6250000, 0.6132812, 0.6015625, 0.5859375, 0.5585938, 0.5429688, 0.5273438, 0.4960938, 0.4804688, 0.4648438, 0.4453125, 0.4101562, 0.3906250, 0.3750000, 0.3554688, 0.3203125, 0.3007812, 0.2851562, 0.2460938, 0.2304688, 0.2109375, 0.1953125, 0.1640625, 0.1484375, 0.1328125, 0.1171875, 0.0898438, 0.0742188, 0.0625000, 0.0507812, 0.0312500, 0.0195312, 0.0117188, 0.0000000, 0.0078125, 0.0117188, 0.0234375, 
0.0234375, 0.0273438, 0.0273438, 0.0273438, 0.0273438, 0.0234375, 0.0234375, 0.0117188, 0.0078125, 0.0000000, 0.0117188, 0.0195312, 0.0312500, 0.0507812, 0.0625000, 0.0742188, 0.0898438, 0.1171875, 0.1328125, 0.1484375, 0.1640625, 0.1953125, 0.2109375, 0.2304688, 0.2460938, 0.2851562, 0.3007812, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.3203125, 0.0000000, 0.9960938, 0.9960938]), array([ 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9804688, 0.9882812, 0.9531250, 0.9335938, 0.9179688, 0.9023438, 0.8750000, 0.8671875, 0.8554688, 0.8437500, 0.8359375, 0.8359375, 0.8320312, 0.8359375, 0.8398438, 0.8437500, 0.8515625, 0.8710938, 0.8828125, 0.8945312, 0.9257812, 0.9414062, 0.9609375, 0.9804688, 0.9687500, 0.9492188, 0.9257812, 0.8789062, 0.8515625, 0.8281250, 0.8007812, 0.7500000, 0.7265625, 0.6992188, 0.6757812, 0.6250000, 0.6015625, 0.5781250, 0.5351562, 0.5156250, 0.4960938, 0.4765625, 0.4414062, 0.4257812, 0.4101562, 0.3984375, 0.3750000, 0.3671875, 0.3593750, 0.3476562, 0.3437500, 0.3437500, 0.3437500, 0.3437500, 0.3476562, 0.3515625, 0.3632812, 0.3710938, 0.3789062, 0.3867188, 0.4062500, 0.4179688, 0.4296875, 0.4414062, 0.4687500, 0.4843750, 0.4960938, 0.5234375, 0.5390625, 0.5507812, 0.5664062, 0.5937500, 0.6054688, 0.6171875, 0.6406250, 0.6484375, 0.6601562, 0.6679688, 0.6835938, 0.6875000, 0.6953125, 0.6992188, 0.7031250, 0.7031250, 0.7031250, 0.7031250, 0.6992188, 0.6953125, 0.6875000, 0.6757812, 0.6679688, 0.6601562, 0.6484375, 0.6289062, 0.6171875, 0.6015625, 0.5781250, 0.5625000, 0.5468750, 0.5312500, 0.5000000, 0.4843750, 0.4687500, 0.4375000, 0.4218750, 0.4062500, 0.3906250, 0.3593750, 0.3437500, 0.3320312, 0.3164062, 0.2890625, 0.2773438, 0.2656250, 0.2421875, 0.2343750, 0.2226562, 0.2148438, 0.1992188, 0.1953125, 0.1875000, 0.1796875, 0.1757812, 0.1718750, 0.1718750, 0.1679688, 0.1679688, 0.1679688, 0.1718750, 0.1718750, 0.1757812, 0.1796875, 0.1835938, 0.1875000, 0.1914062, 0.1992188, 0.2070312, 0.2109375, 0.2148438, 0.2226562, 0.2304688, 0.2343750, 0.2382812, 0.2460938, 0.2500000, 0.2500000, 0.2539062, 0.2578125, 0.2617188, 0.2617188, 0.2617188, 0.2617188, 0.2617188, 0.2617188, 0.2578125, 0.2578125, 0.2539062, 0.2500000, 0.2421875, 0.2382812, 0.2343750, 0.2265625, 0.2187500, 0.2148438, 0.2070312, 0.1953125, 0.1875000, 0.1796875, 0.1679688, 0.1601562, 0.1523438, 0.1445312, 0.1328125, 0.1250000, 0.1171875, 0.1093750, 0.0976562, 0.0898438, 0.0859375, 0.0742188, 0.0664062, 0.0625000, 0.0585938, 0.0468750, 0.0429688, 0.0507812, 0.0312500, 0.0273438, 0.0234375, 0.0234375, 0.0156250, 0.0156250, 0.0117188, 0.0078125, 0.0039062, 0.0039062, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 
0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.9960938, 0.9960938]), array([ 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.8671875, 0.8203125, 0.7265625, 0.6835938, 0.6406250, 0.5976562, 0.5156250, 0.4765625, 0.4414062, 0.3710938, 0.3398438, 0.3085938, 0.2812500, 0.2304688, 0.2109375, 0.1914062, 0.1718750, 0.1445312, 0.1328125, 0.1250000, 0.1171875, 0.1132812, 0.1171875, 0.1210938, 0.1328125, 0.1445312, 0.1562500, 0.1914062, 0.2109375, 0.2304688, 0.2578125, 0.3085938, 0.3398438, 0.3710938, 0.4062500, 0.4765625, 0.5156250, 0.5546875, 0.6406250, 0.6835938, 0.7265625, 0.7734375, 0.8671875, 0.9140625, 0.9609375, 0.9804688, 0.8867188, 0.8359375, 0.7890625, 0.6914062, 0.6445312, 0.5976562, 0.5507812, 0.4609375, 0.4179688, 0.3750000, 0.2929688, 0.2539062, 0.2187500, 0.1835938, 0.1171875, 0.0859375, 0.0585938, 0.0351562, 0.0078125, 0.0273438, 0.0468750, 0.0742188, 0.0859375, 0.0937500, 0.0976562, 0.1054688, 0.1015625, 0.0976562, 0.0859375, 0.0742188, 0.0625000, 0.0468750, 0.0078125, 0.0078125, 0.0351562, 0.0585938, 0.1171875, 0.1484375, 0.1835938, 0.2539062, 0.2929688, 0.3320312, 0.3750000, 0.4609375, 0.5039062, 0.5507812, 0.5976562, 0.6914062, 0.7382812, 0.7890625, 0.8828125, 0.9335938, 0.9804688, 0.9609375, 0.8671875, 0.8203125, 0.7734375, 0.6835938, 0.6406250, 0.5976562, 0.5546875, 0.4765625, 0.4414062, 0.4062500, 0.3710938, 0.3085938, 0.2812500, 0.2578125, 0.2109375, 0.1914062, 0.1718750, 0.1562500, 0.1328125, 0.1250000, 0.1210938, 0.1132812, 0.1171875, 0.1210938, 0.1250000, 0.1445312, 0.1562500, 0.1718750, 0.1914062, 0.2304688, 0.2578125, 0.2812500, 0.3398438, 0.3710938, 0.4062500, 0.4414062, 0.5156250, 0.5546875, 0.5976562, 0.6835938, 0.7265625, 0.7734375, 0.8203125, 0.9140625, 0.9609375, 0.9804688, 0.9335938, 0.8359375, 0.7890625, 0.7382812, 0.6445312, 0.5976562, 0.5507812, 0.5039062, 0.4179688, 0.3750000, 0.3320312, 0.2929688, 0.2187500, 0.1835938, 0.1484375, 0.0859375, 0.0585938, 0.0351562, 0.0078125, 0.0273438, 0.0468750, 0.0625000, 0.0859375, 0.0937500, 0.0976562, 0.1015625, 0.1015625, 0.0976562, 0.0937500, 0.0859375, 0.0625000, 0.0468750, 0.0273438, 0.0078125, 0.0351562, 0.0585938, 0.0859375, 0.1484375, 0.1835938, 0.2187500, 0.2929688, 0.3320312, 0.3750000, 0.4179688, 0.5039062, 0.5507812, 0.5976562, 0.6445312, 0.7382812, 0.7890625, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.8359375, 0.0000000, 0.9960938, 0.9960938]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 30 :: Ocean ### color_map_luts['idl30'] = \ ( array([ 0.2109375, 0.2109375, 0.2304688, 0.2460938, 0.2656250, 0.2851562, 0.3007812, 0.3203125, 0.3203125, 0.3398438, 0.3554688, 0.3750000, 0.3906250, 0.4101562, 0.4296875, 0.4453125, 0.4648438, 0.4804688, 0.4960938, 0.5117188, 0.5273438, 0.5429688, 0.5585938, 0.5585938, 0.5742188, 0.5859375, 0.6015625, 0.6132812, 0.6250000, 0.6367188, 0.6445312, 0.6562500, 0.6640625, 0.6718750, 0.6796875, 0.6875000, 0.6914062, 0.6914062, 0.6992188, 0.7031250, 0.7031250, 0.7070312, 0.7070312, 0.7109375, 0.7070312, 0.7070312, 0.7031250, 0.7031250, 0.6992188, 0.6914062, 0.6875000, 0.6796875, 0.6796875, 0.6718750, 0.6640625, 0.6562500, 0.6445312, 0.6367188, 0.6250000, 0.6132812, 0.6015625, 0.5859375, 0.5742188, 0.5585938, 0.5429688, 0.5273438, 0.5117188, 0.5117188, 0.4960938, 0.4804688, 0.4648438, 0.4453125, 0.4296875, 0.4101562, 0.3906250, 0.3750000, 0.3554688, 0.3359375, 0.3203125, 0.3007812, 0.2851562, 0.2656250, 0.2656250, 0.2460938, 0.2304688, 0.2109375, 0.1953125, 0.1796875, 0.1640625, 0.1484375, 0.1328125, 0.1171875, 0.1015625, 0.0898438, 0.0742188, 0.0625000, 0.0507812, 0.0507812, 0.0507812, 0.0312500, 0.0195312, 0.0117188, 0.0039062, 0.0000000, 0.0078125, 0.0117188, 0.0195312, 0.0234375, 0.0234375, 0.0273438, 0.0273438, 0.0312500, 0.0312500, 0.0273438, 0.0273438, 0.0234375, 0.0234375, 0.0195312, 0.0117188, 0.0078125, 0.0000000, 0.0039062, 0.0117188, 0.0195312, 0.0312500, 0.0507812, 0.0507812, 0.0507812, 0.0625000, 0.0742188, 0.0898438, 0.1015625, 0.1171875, 0.1328125, 0.1484375, 0.1640625, 0.1796875, 0.1953125, 0.2109375, 0.2304688, 0.2460938, 0.2656250, 0.2656250, 0.2851562, 0.3007812, 0.3203125, 0.3398438, 0.3554688, 0.3750000, 0.3906250, 0.4101562, 0.4296875, 0.4453125, 0.4648438, 0.4804688, 0.4960938, 0.5117188, 0.5117188, 0.5273438, 0.5429688, 0.5585938, 0.5742188, 0.5859375, 0.6015625, 0.6132812, 0.6250000, 0.6367188, 0.6445312, 0.6562500, 0.6640625, 0.6718750, 0.6718750, 0.6796875, 0.6875000, 0.6914062, 0.6992188, 0.7031250, 0.7031250, 0.7070312, 0.7070312, 0.7109375, 0.7070312, 0.7070312, 0.7031250, 0.7031250, 0.6992188, 0.6992188, 0.6914062, 0.6875000, 0.6796875, 0.6718750, 0.6640625, 0.6562500, 0.6445312, 0.6367188, 0.6250000, 0.6132812, 0.6015625, 0.5859375, 0.5742188, 0.5585938, 0.5585938, 0.5429688, 0.5273438, 0.5117188, 0.4960938, 0.4804688, 0.4648438, 0.4453125, 0.4296875, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 0.4101562, 
0.0000000, 0.9960938, 0.9960938]), array([ 0.2617188, 0.2617188, 0.2617188, 0.2617188, 0.2617188, 0.2617188, 0.2617188, 0.2578125, 0.2578125, 0.2578125, 0.2539062, 0.2500000, 0.2500000, 0.2460938, 0.2421875, 0.2382812, 0.2343750, 0.2304688, 0.2226562, 0.2187500, 0.2148438, 0.2109375, 0.2070312, 0.2070312, 0.2031250, 0.1992188, 0.1914062, 0.1875000, 0.1835938, 0.1835938, 0.1796875, 0.1757812, 0.1718750, 0.1718750, 0.1718750, 0.1679688, 0.1679688, 0.1679688, 0.1679688, 0.1679688, 0.1718750, 0.1718750, 0.1757812, 0.1796875, 0.1835938, 0.1875000, 0.1953125, 0.1992188, 0.2070312, 0.2148438, 0.2226562, 0.2343750, 0.2343750, 0.2421875, 0.2539062, 0.2656250, 0.2773438, 0.2890625, 0.3046875, 0.3164062, 0.3320312, 0.3437500, 0.3593750, 0.3750000, 0.3906250, 0.4062500, 0.4218750, 0.4218750, 0.4375000, 0.4531250, 0.4687500, 0.4843750, 0.5000000, 0.5156250, 0.5312500, 0.5468750, 0.5625000, 0.5781250, 0.5898438, 0.6015625, 0.6171875, 0.6289062, 0.6289062, 0.6406250, 0.6484375, 0.6601562, 0.6679688, 0.6757812, 0.6835938, 0.6875000, 0.6953125, 0.6992188, 0.7031250, 0.7031250, 0.7031250, 0.7031250, 0.7031250, 0.7031250, 0.6992188, 0.6992188, 0.6953125, 0.6875000, 0.6835938, 0.6757812, 0.6679688, 0.6601562, 0.6484375, 0.6406250, 0.6289062, 0.6171875, 0.6054688, 0.5937500, 0.5937500, 0.5781250, 0.5664062, 0.5507812, 0.5390625, 0.5234375, 0.5117188, 0.4960938, 0.4843750, 0.4687500, 0.4570312, 0.4414062, 0.4296875, 0.4179688, 0.4062500, 0.4062500, 0.3945312, 0.3867188, 0.3789062, 0.3710938, 0.3632812, 0.3554688, 0.3515625, 0.3476562, 0.3437500, 0.3437500, 0.3437500, 0.3437500, 0.3437500, 0.3476562, 0.3476562, 0.3554688, 0.3593750, 0.3671875, 0.3750000, 0.3867188, 0.3984375, 0.4101562, 0.4257812, 0.4414062, 0.4570312, 0.4765625, 0.4960938, 0.5156250, 0.5351562, 0.5351562, 0.5546875, 0.5781250, 0.6015625, 0.6250000, 0.6484375, 0.6757812, 0.6992188, 0.7265625, 0.7500000, 0.7773438, 0.8007812, 0.8281250, 0.8515625, 0.8515625, 0.8789062, 0.9023438, 0.9257812, 0.9492188, 0.9687500, 0.9921875, 0.9804688, 0.9609375, 0.9414062, 0.9257812, 0.9101562, 0.8945312, 0.8828125, 0.8710938, 0.8710938, 0.8593750, 0.8515625, 0.8437500, 0.8398438, 0.8359375, 0.8320312, 0.8320312, 0.8359375, 0.8359375, 0.8437500, 0.8476562, 0.8554688, 0.8671875, 0.8750000, 0.8750000, 0.8906250, 0.9023438, 0.9179688, 0.9335938, 0.9531250, 0.9687500, 0.9882812, 0.9804688, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.0000000, 0.9960938, 0.9960938]), array([ 0.5507812, 0.5507812, 0.5976562, 0.6445312, 0.6914062, 0.7382812, 0.7890625, 0.8359375, 0.8359375, 0.8867188, 0.9335938, 0.9804688, 0.9609375, 0.9140625, 0.8671875, 0.8203125, 0.7734375, 0.7265625, 0.6835938, 0.6406250, 0.5976562, 0.5546875, 0.5156250, 0.5156250, 0.4765625, 0.4414062, 0.4062500, 0.3710938, 0.3398438, 0.3085938, 0.2812500, 0.2578125, 0.2304688, 0.2109375, 0.1914062, 0.1718750, 0.1562500, 0.1562500, 0.1445312, 0.1328125, 0.1250000, 0.1210938, 0.1171875, 0.1132812, 0.1171875, 0.1210938, 0.1250000, 0.1328125, 0.1445312, 0.1562500, 0.1718750, 0.1914062, 0.1914062, 0.2109375, 0.2304688, 0.2578125, 0.2812500, 0.3085938, 0.3398438, 0.3710938, 0.4062500, 0.4414062, 
0.4765625, 0.5156250, 0.5546875, 0.5976562, 0.6406250, 0.6406250, 0.6835938, 0.7265625, 0.7734375, 0.8203125, 0.8671875, 0.9140625, 0.9609375, 0.9804688, 0.9335938, 0.8828125, 0.8359375, 0.7890625, 0.7382812, 0.6914062, 0.6914062, 0.6445312, 0.5976562, 0.5507812, 0.5039062, 0.4609375, 0.4179688, 0.3750000, 0.3320312, 0.2929688, 0.2539062, 0.2187500, 0.1835938, 0.1484375, 0.1171875, 0.1171875, 0.0859375, 0.0585938, 0.0351562, 0.0078125, 0.0078125, 0.0273438, 0.0468750, 0.0625000, 0.0742188, 0.0859375, 0.0937500, 0.0976562, 0.1015625, 0.1054688, 0.1054688, 0.1015625, 0.0976562, 0.0937500, 0.0859375, 0.0742188, 0.0625000, 0.0468750, 0.0273438, 0.0078125, 0.0078125, 0.0351562, 0.0585938, 0.0859375, 0.1171875, 0.1171875, 0.1484375, 0.1835938, 0.2187500, 0.2539062, 0.2929688, 0.3320312, 0.3750000, 0.4179688, 0.4609375, 0.5039062, 0.5507812, 0.5976562, 0.6445312, 0.6914062, 0.6914062, 0.7382812, 0.7890625, 0.8359375, 0.8867188, 0.9335938, 0.9804688, 0.9609375, 0.9140625, 0.8671875, 0.8203125, 0.7734375, 0.7265625, 0.6835938, 0.6406250, 0.6406250, 0.5976562, 0.5546875, 0.5156250, 0.4765625, 0.4414062, 0.4062500, 0.3710938, 0.3398438, 0.3085938, 0.2812500, 0.2578125, 0.2304688, 0.2109375, 0.2109375, 0.1914062, 0.1718750, 0.1562500, 0.1445312, 0.1328125, 0.1250000, 0.1210938, 0.1171875, 0.1132812, 0.1171875, 0.1210938, 0.1250000, 0.1328125, 0.1445312, 0.1445312, 0.1562500, 0.1718750, 0.1914062, 0.2109375, 0.2304688, 0.2578125, 0.2812500, 0.3085938, 0.3398438, 0.3710938, 0.4062500, 0.4414062, 0.4765625, 0.5156250, 0.5156250, 0.5546875, 0.5976562, 0.6406250, 0.6835938, 0.7265625, 0.7734375, 0.8203125, 0.8671875, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.9140625, 0.0000000, 0.9960938, 0.9960938]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 31 :: Peppermint ### color_map_luts['idl31'] = \ ( array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 
0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.3125000, 0.3125000, 0.3164062, 0.3125000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.9375000, 0.9375000, 0.9375000, 0.9375000]), array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0625000, 0.0625000, 0.0625000, 0.0625000, 0.0625000, 0.0625000, 0.0625000, 0.0625000, 0.0625000, 0.0625000, 0.0625000, 0.0625000, 0.0625000, 0.0625000, 0.0625000, 0.0625000, 0.1250000, 0.1250000, 0.1250000, 0.1250000, 0.1250000, 0.1250000, 0.1250000, 0.1250000, 0.1250000, 0.1250000, 0.1250000, 0.1250000, 0.1250000, 0.1250000, 0.1250000, 0.1250000, 0.1875000, 0.1875000, 0.1875000, 0.1875000, 0.1875000, 0.1875000, 0.1875000, 0.1875000, 0.1875000, 0.1875000, 0.1875000, 0.1875000, 0.1875000, 0.1875000, 0.1875000, 0.1875000, 0.2500000, 0.2500000, 0.2500000, 0.2500000, 0.2500000, 0.2500000, 
0.2500000, 0.2500000, 0.2500000, 0.2500000, 0.2500000, 0.2500000, 0.2500000, 0.2500000, 0.2500000, 0.2500000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.3125000, 0.3750000, 0.3750000, 0.3750000, 0.3750000, 0.3750000, 0.3750000, 0.3750000, 0.3750000, 0.3750000, 0.3750000, 0.3750000, 0.3750000, 0.3750000, 0.3750000, 0.3750000, 0.3750000, 0.4375000, 0.4375000, 0.4375000, 0.4375000, 0.4375000, 0.4375000, 0.4375000, 0.4375000, 0.4375000, 0.4375000, 0.4375000, 0.4375000, 0.4375000, 0.4375000, 0.4375000, 0.4375000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5000000, 0.5625000, 0.5625000, 0.5625000, 0.5625000, 0.5625000, 0.5625000, 0.5625000, 0.5625000, 0.5625000, 0.5625000, 0.5625000, 0.5625000, 0.5625000, 0.5625000, 0.5625000, 0.5625000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6875000, 0.6875000, 0.6875000, 0.6875000, 0.6875000, 0.6875000, 0.6875000, 0.6875000, 0.6875000, 0.6875000, 0.6875000, 0.6875000, 0.6875000, 0.6875000, 0.6875000, 0.6875000, 0.7500000, 0.7500000, 0.7500000, 0.7500000, 0.7500000, 0.7500000, 0.7539062, 0.7500000, 0.7500000, 0.7500000, 0.7500000, 0.7500000, 0.7500000, 0.7500000, 0.7500000, 0.7500000, 0.8125000, 0.8125000, 0.8125000, 0.8125000, 0.8125000, 0.8125000, 0.8125000, 0.8125000, 0.8125000, 0.8125000, 0.8125000, 0.8125000, 0.8125000, 0.8125000, 0.8125000, 0.8125000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.8750000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.9375000, 0.9375000]), array([ 0.3125000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 
0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.9375000, 0.0000000, 0.3125000, 0.6250000, 0.6250000]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 32 :: Plasma ### color_map_luts['idl32'] = \ ( array([ 0.0078125, 0.0078125, 0.0117188, 0.0156250, 0.0195312, 0.0234375, 0.0273438, 0.0312500, 0.0351562, 0.0507812, 0.0429688, 0.0507812, 0.0546875, 0.0585938, 0.0625000, 0.0664062, 0.0703125, 0.0742188, 0.0820312, 0.0859375, 0.0898438, 0.0937500, 0.0976562, 0.1015625, 0.1054688, 0.1093750, 0.1171875, 0.1210938, 0.1250000, 0.1289062, 0.1328125, 0.1367188, 0.1406250, 0.1445312, 0.1484375, 0.1523438, 0.1523438, 0.1562500, 0.1601562, 0.1640625, 0.1679688, 0.1718750, 0.1757812, 0.1796875, 0.1796875, 0.1835938, 0.1875000, 0.1914062, 0.1953125, 0.1992188, 0.2031250, 0.2070312, 0.2109375, 0.2148438, 0.2187500, 0.2226562, 0.2265625, 0.2304688, 0.2343750, 0.2382812, 0.2421875, 0.2460938, 0.2539062, 0.2578125, 0.2617188, 0.2656250, 0.2734375, 0.2773438, 0.2812500, 0.2890625, 0.2929688, 0.3007812, 0.3046875, 0.3085938, 0.3164062, 0.3203125, 0.3281250, 0.3320312, 0.3359375, 
0.3437500, 0.3476562, 0.3515625, 0.3593750, 0.3632812, 0.3671875, 0.3710938, 0.3789062, 0.3828125, 0.3867188, 0.3906250, 0.3945312, 0.3984375, 0.4023438, 0.4023438, 0.4062500, 0.4101562, 0.4140625, 0.4179688, 0.4179688, 0.4218750, 0.4218750, 0.4257812, 0.4296875, 0.4296875, 0.4335938, 0.4335938, 0.4375000, 0.4414062, 0.4414062, 0.4453125, 0.4453125, 0.4492188, 0.4531250, 0.4570312, 0.4570312, 0.4609375, 0.4648438, 0.4687500, 0.4726562, 0.4765625, 0.4804688, 0.4843750, 0.4882812, 0.4960938, 0.5000000, 0.5039062, 0.5117188, 0.5156250, 0.5234375, 0.5273438, 0.5351562, 0.5390625, 0.5468750, 0.5546875, 0.5585938, 0.5664062, 0.5742188, 0.5820312, 0.5859375, 0.5937500, 0.6015625, 0.6054688, 0.6132812, 0.6171875, 0.6250000, 0.6289062, 0.6367188, 0.6406250, 0.6484375, 0.6523438, 0.6562500, 0.6601562, 0.6640625, 0.6679688, 0.6718750, 0.6757812, 0.6796875, 0.6796875, 0.6835938, 0.6835938, 0.6875000, 0.6875000, 0.6914062, 0.6914062, 0.6953125, 0.6953125, 0.6953125, 0.6953125, 0.6992188, 0.6992188, 0.6992188, 0.7031250, 0.7031250, 0.7031250, 0.7070312, 0.7070312, 0.7109375, 0.7109375, 0.7148438, 0.7187500, 0.7187500, 0.7226562, 0.7265625, 0.7304688, 0.7343750, 0.7421875, 0.7460938, 0.7500000, 0.7578125, 0.7617188, 0.7695312, 0.7734375, 0.7812500, 0.7890625, 0.7968750, 0.8046875, 0.8125000, 0.8203125, 0.8281250, 0.8359375, 0.8437500, 0.8515625, 0.8593750, 0.8671875, 0.8750000, 0.8789062, 0.8867188, 0.8945312, 0.9023438, 0.9062500, 0.9140625, 0.9218750, 0.9257812, 0.9296875, 0.9335938, 0.9414062, 0.9453125, 0.9453125, 0.9492188, 0.9531250, 0.9570312, 0.9570312, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9609375, 0.9648438, 0.9648438, 0.9648438, 0.9648438, 0.9648438, 0.9648438, 0.9648438, 0.9648438, 0.9648438, 0.9648438, 0.9648438, 0.9648438, 0.9648438, 0.9687500, 0.9687500, 0.9687500, 0.9726562, 0.9765625, 0.9804688, 0.9804688, 0.9843750, 0.9921875, 0.9960938, 0.9921875, 0.9843750, 0.9804688, 0.9726562, 0.9648438, 0.9648438]), array([ 0.0742188, 0.0742188, 0.0781250, 0.0859375, 0.0976562, 0.1132812, 0.1289062, 0.1484375, 0.1718750, 0.1992188, 0.2265625, 0.2539062, 0.2851562, 0.3164062, 0.3476562, 0.3828125, 0.4140625, 0.4492188, 0.4843750, 0.5156250, 0.5468750, 0.5781250, 0.6054688, 0.6328125, 0.6601562, 0.6835938, 0.7031250, 0.7187500, 0.7343750, 0.7460938, 0.7539062, 0.7578125, 0.7617188, 0.7578125, 0.7539062, 0.7460938, 0.7343750, 0.7187500, 0.7031250, 0.6835938, 0.6601562, 0.6328125, 0.6054688, 0.5781250, 0.5468750, 0.5156250, 0.4843750, 0.4492188, 0.4179688, 0.3828125, 0.3476562, 0.3164062, 0.2851562, 0.2539062, 0.2265625, 0.1992188, 0.1718750, 0.1484375, 0.1289062, 0.1132812, 0.0976562, 0.0859375, 0.0781250, 0.0742188, 0.0742188, 0.0742188, 0.0781250, 0.0859375, 0.0976562, 0.1132812, 0.1289062, 0.1484375, 0.1718750, 0.1992188, 0.2265625, 0.2539062, 0.2851562, 0.3164062, 0.3476562, 0.3828125, 0.4179688, 0.4492188, 0.4843750, 0.5156250, 0.5468750, 0.5781250, 0.6054688, 0.6328125, 0.6601562, 0.6835938, 0.7031250, 0.7187500, 0.7343750, 0.7460938, 0.7539062, 0.7578125, 0.7617188, 0.7578125, 0.7539062, 0.7460938, 0.7343750, 0.7187500, 0.7031250, 0.6835938, 0.6601562, 0.6328125, 0.6054688, 0.5781250, 0.5468750, 0.5156250, 0.4843750, 0.4492188, 0.4140625, 0.3828125, 0.3476562, 0.3164062, 0.2851562, 0.2539062, 0.2265625, 0.1992188, 0.1718750, 0.1484375, 0.1289062, 0.1132812, 0.0976562, 0.0859375, 0.0781250, 0.0742188, 0.0742188, 0.0742188, 0.0781250, 0.0859375, 0.0976562, 0.1132812, 0.1289062, 0.1484375, 0.1718750, 0.1992188, 0.2265625, 0.2539062, 0.2851562, 0.3164062, 0.3476562, 0.3828125, 0.4179688, 
0.4492188, 0.4843750, 0.5156250, 0.5468750, 0.5781250, 0.6054688, 0.6328125, 0.6601562, 0.6835938, 0.7031250, 0.7187500, 0.7343750, 0.7460938, 0.7539062, 0.7578125, 0.7617188, 0.7578125, 0.7539062, 0.7460938, 0.7343750, 0.7187500, 0.7031250, 0.6835938, 0.6601562, 0.6328125, 0.6054688, 0.5781250, 0.5468750, 0.5156250, 0.4843750, 0.4492188, 0.4140625, 0.3828125, 0.3476562, 0.3164062, 0.2851562, 0.2539062, 0.2265625, 0.1992188, 0.1718750, 0.1484375, 0.1289062, 0.1132812, 0.0976562, 0.0859375, 0.0781250, 0.0742188, 0.0742188, 0.0742188, 0.0781250, 0.0859375, 0.0976562, 0.1132812, 0.1289062, 0.1484375, 0.1718750, 0.1992188, 0.2265625, 0.2539062, 0.2851562, 0.3164062, 0.3476562, 0.3828125, 0.4179688, 0.4492188, 0.4843750, 0.5156250, 0.5468750, 0.5781250, 0.6054688, 0.6328125, 0.6601562, 0.6835938, 0.7031250, 0.7187500, 0.7343750, 0.7460938, 0.7539062, 0.7578125, 0.7617188, 0.7578125, 0.7539062, 0.7460938, 0.7343750, 0.7187500, 0.7031250, 0.6835938, 0.6601562, 0.6328125, 0.6054688, 0.5781250, 0.5468750, 0.5156250, 0.4843750, 0.4492188, 0.4140625, 0.3828125, 0.3476562, 0.3164062, 0.2851562, 0.2539062, 0.2265625, 0.1992188, 0.1718750, 0.1484375, 0.1289062, 0.1132812, 0.0976562, 0.0859375, 0.0781250, 0.0781250]), array([ 0.0273438, 0.0273438, 0.0429688, 0.0585938, 0.0742188, 0.0859375, 0.1015625, 0.1171875, 0.1328125, 0.1523438, 0.1679688, 0.1835938, 0.1992188, 0.2148438, 0.2304688, 0.2460938, 0.2617188, 0.2773438, 0.2929688, 0.3046875, 0.3203125, 0.3359375, 0.3515625, 0.3632812, 0.3789062, 0.3906250, 0.4023438, 0.4179688, 0.4296875, 0.4414062, 0.4531250, 0.4648438, 0.4765625, 0.4882812, 0.5000000, 0.5078125, 0.5195312, 0.5312500, 0.5429688, 0.5507812, 0.5625000, 0.5742188, 0.5859375, 0.5976562, 0.6093750, 0.6210938, 0.6328125, 0.6445312, 0.6601562, 0.6718750, 0.6875000, 0.7031250, 0.7187500, 0.7343750, 0.7500000, 0.7656250, 0.7851562, 0.8007812, 0.8203125, 0.8398438, 0.8593750, 0.8789062, 0.8984375, 0.9179688, 0.9414062, 0.9609375, 0.9804688, 0.9882812, 0.9687500, 0.9492188, 0.9257812, 0.9062500, 0.8867188, 0.8671875, 0.8476562, 0.8281250, 0.8085938, 0.7890625, 0.7734375, 0.7578125, 0.7421875, 0.7265625, 0.7109375, 0.6992188, 0.6875000, 0.6718750, 0.6640625, 0.6523438, 0.6445312, 0.6328125, 0.6250000, 0.6171875, 0.6132812, 0.6054688, 0.6015625, 0.5937500, 0.5898438, 0.5859375, 0.5781250, 0.5742188, 0.5703125, 0.5625000, 0.5585938, 0.5507812, 0.5468750, 0.5390625, 0.5312500, 0.5234375, 0.5156250, 0.5039062, 0.4921875, 0.4804688, 0.4687500, 0.4531250, 0.4414062, 0.4218750, 0.4062500, 0.3867188, 0.3671875, 0.3476562, 0.3242188, 0.3046875, 0.2812500, 0.2539062, 0.2304688, 0.2070312, 0.1796875, 0.1523438, 0.1250000, 0.0976562, 0.0703125, 0.0429688, 0.0195312, 0.9882812, 0.9609375, 0.9335938, 0.9101562, 0.8867188, 0.8632812, 0.8398438, 0.8203125, 0.8007812, 0.7812500, 0.7617188, 0.7460938, 0.7343750, 0.7187500, 0.7070312, 0.6953125, 0.6875000, 0.6796875, 0.6718750, 0.6679688, 0.6640625, 0.6601562, 0.6601562, 0.6562500, 0.6562500, 0.6562500, 0.6562500, 0.6601562, 0.6601562, 0.6601562, 0.6601562, 0.6640625, 0.6640625, 0.6640625, 0.6601562, 0.6601562, 0.6562500, 0.6523438, 0.6484375, 0.6406250, 0.6328125, 0.6250000, 0.6132812, 0.6015625, 0.5859375, 0.5703125, 0.5507812, 0.5312500, 0.5117188, 0.4882812, 0.4648438, 0.4375000, 0.4140625, 0.3828125, 0.3554688, 0.3242188, 0.2929688, 0.2617188, 0.2265625, 0.1953125, 0.1601562, 0.1289062, 0.0937500, 0.0625000, 0.0312500, 0.9960938, 0.9648438, 0.9335938, 0.9062500, 0.8789062, 0.8554688, 0.8320312, 0.8085938, 0.7890625, 0.7695312, 0.7539062, 0.7382812, 0.7265625, 
0.7148438, 0.7070312, 0.6992188, 0.6953125, 0.6914062, 0.6914062, 0.6914062, 0.6953125, 0.6992188, 0.7031250, 0.7070312, 0.7148438, 0.7187500, 0.7265625, 0.7343750, 0.7421875, 0.7500000, 0.7539062, 0.7617188, 0.7656250, 0.7695312, 0.7734375, 0.7734375, 0.7734375, 0.7734375, 0.7695312, 0.7617188, 0.7539062, 0.7460938, 0.7343750, 0.7187500, 0.7031250, 0.6835938, 0.6601562, 0.6367188, 0.6093750, 0.5820312, 0.5546875, 0.5195312, 0.4882812, 0.4531250, 0.4179688, 0.3789062, 0.3398438, 0.3398438]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 33 :: Blue-Red ### color_map_luts['idl33'] = \ ( array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0117188, 0.0273438, 0.0429688, 0.0585938, 0.0742188, 0.0898438, 0.1054688, 0.1210938, 0.1367188, 0.1523438, 0.1679688, 0.1835938, 0.1992188, 0.2148438, 0.2304688, 0.2460938, 0.2617188, 0.2773438, 0.2929688, 0.3085938, 0.3242188, 0.3398438, 0.3554688, 0.3710938, 0.3867188, 0.4023438, 0.4179688, 0.4335938, 0.4492188, 0.4648438, 0.4804688, 0.4960938, 0.5117188, 0.5273438, 0.5429688, 0.5585938, 0.5742188, 0.5898438, 0.6054688, 0.6210938, 0.6367188, 0.6523438, 0.6679688, 0.6835938, 0.6992188, 0.7148438, 0.7304688, 0.7460938, 0.7617188, 0.7773438, 0.7929688, 0.8085938, 0.8242188, 0.8398438, 
0.8554688, 0.8710938, 0.8867188, 0.9023438, 0.9179688, 0.9335938, 0.9492188, 0.9648438, 0.9804688, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9765625, 0.9609375, 0.9414062, 0.9257812, 0.9101562, 0.8906250, 0.8750000, 0.8554688, 0.8398438, 0.8242188, 0.8046875, 0.7890625, 0.7695312, 0.7539062, 0.7382812, 0.7187500, 0.7031250, 0.6835938, 0.6679688, 0.6523438, 0.6328125, 0.6171875, 0.5976562, 0.5820312, 0.5664062, 0.5468750, 0.5312500, 0.5117188, 0.5117188]), array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0117188, 0.0273438, 0.0429688, 0.0585938, 0.0742188, 0.0898438, 0.1054688, 0.1210938, 0.1367188, 0.1523438, 0.1679688, 0.1835938, 0.1992188, 0.2148438, 0.2304688, 0.2460938, 0.2617188, 0.2773438, 0.2929688, 0.3085938, 0.3242188, 0.3398438, 0.3554688, 0.3710938, 0.3867188, 0.4023438, 0.4179688, 0.4335938, 0.4492188, 0.4648438, 0.4804688, 0.4960938, 0.5117188, 0.5273438, 0.5429688, 0.5585938, 0.5742188, 0.5898438, 0.6054688, 0.6210938, 0.6367188, 0.6523438, 0.6679688, 0.6835938, 0.6992188, 0.7148438, 0.7304688, 0.7460938, 0.7617188, 0.7773438, 0.7929688, 0.8085938, 0.8242188, 0.8398438, 0.8554688, 0.8710938, 0.8867188, 0.9023438, 0.9179688, 0.9335938, 0.9492188, 0.9648438, 0.9804688, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9804688, 0.9648438, 0.9492188, 0.9335938, 0.9179688, 0.9023438, 0.8867188, 0.8710938, 0.8554688, 0.8398438, 0.8242188, 0.8085938, 0.7929688, 0.7773438, 0.7617188, 0.7460938, 0.7304688, 0.7148438, 0.6992188, 0.6835938, 0.6679688, 0.6523438, 0.6367188, 0.6210938, 0.6054688, 0.5898438, 0.5742188, 0.5585938, 0.5429688, 0.5273438, 0.5117188, 0.4960938, 0.4804688, 0.4648438, 0.4492188, 0.4335938, 0.4179688, 0.4023438, 0.3867188, 0.3710938, 0.3554688, 0.3398438, 0.3242188, 0.3085938, 0.2929688, 0.2773438, 0.2617188, 0.2460938, 0.2304688, 0.2148438, 0.1992188, 0.1835938, 0.1679688, 0.1523438, 0.1367188, 0.1210938, 
0.1054688, 0.0898438, 0.0742188, 0.0585938, 0.0429688, 0.0273438, 0.0117188, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000]), array([ 0.5117188, 0.5117188, 0.5273438, 0.5429688, 0.5585938, 0.5742188, 0.5898438, 0.6054688, 0.6210938, 0.6367188, 0.6523438, 0.6679688, 0.6835938, 0.6992188, 0.7148438, 0.7304688, 0.7460938, 0.7617188, 0.7773438, 0.7929688, 0.8085938, 0.8242188, 0.8398438, 0.8554688, 0.8710938, 0.8867188, 0.9023438, 0.9179688, 0.9335938, 0.9492188, 0.9648438, 0.9804688, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9804688, 0.9648438, 0.9492188, 0.9335938, 0.9179688, 0.9023438, 0.8867188, 0.8710938, 0.8554688, 0.8398438, 0.8242188, 0.8085938, 0.7929688, 0.7773438, 0.7617188, 0.7460938, 0.7304688, 0.7148438, 0.6992188, 0.6835938, 0.6679688, 0.6523438, 0.6367188, 0.6210938, 0.6054688, 0.5898438, 0.5742188, 0.5585938, 0.5429688, 0.5273438, 0.5117188, 0.4960938, 0.4804688, 0.4648438, 0.4492188, 0.4335938, 0.4179688, 0.4023438, 0.3867188, 0.3710938, 0.3554688, 0.3398438, 0.3242188, 0.3085938, 0.2929688, 0.2773438, 0.2617188, 0.2460938, 0.2304688, 0.2148438, 0.1992188, 0.1835938, 0.1679688, 0.1523438, 0.1367188, 0.1210938, 0.1054688, 0.0898438, 0.0742188, 0.0585938, 0.0429688, 0.0273438, 0.0117188, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 34 :: Rainbow ### color_map_luts['idl34'] = \ ( array([ 0.4843750, 0.4843750, 0.4687500, 0.4492188, 0.4335938, 0.4140625, 0.3984375, 0.3789062, 0.3632812, 0.3437500, 0.3281250, 0.3085938, 0.2929688, 0.2734375, 0.2578125, 0.2382812, 0.2226562, 0.2031250, 0.1875000, 0.1679688, 0.1523438, 0.1328125, 0.1171875, 0.0976562, 0.0820312, 0.0625000, 0.0468750, 0.0273438, 0.0117188, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0039062, 0.0195312, 0.0507812, 0.0546875, 0.0742188, 0.0898438, 0.1093750, 0.1250000, 0.1445312, 0.1601562, 0.1796875, 0.1953125, 0.2148438, 0.2304688, 0.2500000, 0.2656250, 0.2851562, 0.3007812, 0.3203125, 0.3359375, 0.3554688, 0.3710938, 0.3906250, 0.4062500, 0.4257812, 0.4414062, 0.4609375, 0.4804688, 0.4960938, 0.5156250, 0.5312500, 0.5507812, 0.5664062, 0.5859375, 0.6015625, 0.6210938, 0.6367188, 0.6562500, 0.6718750, 0.6914062, 0.7070312, 0.7265625, 0.7421875, 0.7617188, 0.7773438, 0.7968750, 0.8125000, 0.8320312, 0.8476562, 0.8671875, 0.8828125, 0.9023438, 0.9179688, 0.9375000, 0.9531250, 0.9726562, 0.9882812, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 
0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938]), array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0078125, 0.0234375, 0.0429688, 0.0585938, 0.0781250, 0.0937500, 0.1132812, 0.1289062, 0.1484375, 0.1640625, 0.1835938, 0.1992188, 0.2187500, 0.2343750, 0.2539062, 0.2695312, 0.2890625, 0.3046875, 0.3242188, 0.3398438, 0.3593750, 0.3750000, 0.3945312, 0.4101562, 0.4296875, 0.4453125, 0.4648438, 0.4804688, 0.5000000, 0.5156250, 0.5351562, 0.5507812, 0.5703125, 0.5859375, 0.6054688, 0.6210938, 0.6406250, 0.6562500, 0.6757812, 0.6914062, 0.7109375, 0.7265625, 0.7460938, 0.7617188, 0.7812500, 0.7968750, 0.8164062, 0.8320312, 0.8515625, 0.8671875, 0.8867188, 0.9023438, 0.9218750, 0.9414062, 0.9570312, 0.9765625, 0.9921875, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9843750, 0.9687500, 0.9492188, 0.9335938, 0.9140625, 0.8984375, 0.8789062, 0.8632812, 0.8437500, 0.8281250, 0.8085938, 0.7929688, 0.7734375, 0.7578125, 0.7382812, 0.7226562, 0.7031250, 0.6875000, 0.6679688, 0.6523438, 0.6328125, 0.6171875, 0.5976562, 0.5820312, 0.5625000, 0.5468750, 0.5273438, 0.5117188, 0.4921875, 0.4765625, 0.4570312, 0.4414062, 0.4218750, 0.4062500, 0.3867188, 0.3710938, 0.3515625, 0.3359375, 0.3164062, 0.3007812, 0.2812500, 0.2656250, 0.2460938, 0.2304688, 0.2109375, 0.1953125, 0.1757812, 0.1601562, 0.1406250, 0.1250000, 0.1054688, 0.0898438, 0.0703125, 0.0546875, 0.0351562, 0.0195312, 0.0195312]), array([ 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 
0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9804688, 0.9648438, 0.9453125, 0.9296875, 0.9101562, 0.8945312, 0.8750000, 0.8593750, 0.8398438, 0.8242188, 0.8046875, 0.7890625, 0.7695312, 0.7539062, 0.7343750, 0.7187500, 0.6992188, 0.6835938, 0.6640625, 0.6484375, 0.6289062, 0.6132812, 0.5937500, 0.5781250, 0.5585938, 0.5429688, 0.5234375, 0.5078125, 0.4882812, 0.4726562, 0.4531250, 0.4375000, 0.4179688, 0.4023438, 0.3828125, 0.3671875, 0.3476562, 0.3320312, 0.3125000, 0.2968750, 0.2773438, 0.2617188, 0.2421875, 0.2265625, 0.2070312, 0.1914062, 0.1718750, 0.1562500, 0.1367188, 0.1210938, 0.1015625, 0.0859375, 0.0664062, 0.0507812, 0.0312500, 0.0156250, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]), )
### IDL colormap 35 :: Blue Waves ###
color_map_luts['idl35'] = \
( array([ 0.3203125, 0.3203125, 0.3007812, 0.2851562, 0.2656250, 0.2460938, 0.2304688, 0.2109375,
0.1953125, 0.1796875, 0.1640625, 0.1484375, 0.1328125, 0.1171875, 0.1015625, 0.0898438,
0.0742188, 0.0625000, 0.0507812, 0.0390625, 0.0312500, 0.0195312, 0.0117188, 0.0039062,
0.0000000, 0.0078125, 0.0117188, 0.0195312, 0.0234375, 0.0234375, 0.0273438, 0.0273438,
0.0312500, 0.0273438, 0.0273438, 0.0234375, 0.0234375, 0.0195312, 0.0117188, 0.0078125,
0.0000000, 0.0039062, 0.0117188, 0.0195312, 0.0312500, 0.0390625, 0.0507812, 0.0625000,
0.0742188, 0.0898438, 0.1015625, 0.1171875, 0.1328125, 0.1484375, 0.1640625, 0.1796875,
0.1953125, 0.2109375, 0.2304688, 0.2460938, 0.2656250, 0.2851562, 0.3007812, 0.3203125,
0.3398438, 0.3554688, 0.3750000, 0.3906250, 0.4101562, 0.4296875, 0.4453125, 0.4648438,
0.4804688, 0.4960938, 0.5117188, 0.5273438, 0.5429688, 0.5585938, 0.5742188, 0.5859375,
0.6015625, 0.6132812, 0.6250000, 0.6367188, 0.6445312, 0.6562500, 0.6640625, 0.6718750,
0.6796875, 0.6875000, 0.6914062, 0.6992188, 0.7031250, 0.7031250, 0.7070312, 0.7070312,
0.7109375, 0.7070312, 0.7070312, 0.7031250, 0.7031250, 0.6992188, 0.6914062, 0.6875000,
0.6796875, 0.6718750, 0.6640625, 0.6562500, 0.6445312, 0.6367188, 0.6250000, 0.6132812,
0.6015625, 0.5859375, 0.5742188, 0.5585938, 0.5429688, 0.5273438, 0.5117188, 0.4960938,
0.4804688, 0.4648438, 0.4453125, 0.4296875, 0.4101562, 0.3906250, 0.3750000, 0.3554688,
0.3359375, 0.3203125, 0.3007812, 0.2851562, 0.2656250, 0.2460938, 0.2304688, 0.2109375,
0.1953125, 0.1796875, 0.1640625, 0.1484375, 0.1328125, 0.1171875, 0.1015625, 0.0898438,
0.0742188, 0.0625000, 0.0507812, 0.0390625, 0.0312500, 0.0195312, 0.0117188, 0.0039062,
0.0000000, 0.0078125, 0.0117188, 0.0195312, 0.0234375, 0.0234375, 0.0273438, 0.0273438,
0.0312500, 0.0273438, 0.0273438, 0.0234375, 0.0234375, 0.0195312, 0.0117188, 0.0078125,
0.0000000, 0.0039062, 0.0117188, 0.0195312, 0.0312500, 0.0390625, 0.0507812, 0.0625000,
0.0742188, 0.0898438, 0.1015625, 0.1171875, 0.1328125, 0.1484375, 0.1640625, 0.1796875,
0.1953125, 0.2109375, 0.2304688, 0.2460938, 0.2656250, 0.2851562, 0.3007812, 0.3203125,
0.3398438, 0.3554688, 0.3750000, 0.3906250, 0.4101562, 0.4296875, 0.4453125, 0.4648438,
0.4804688, 0.4960938, 0.5117188, 0.5273438, 0.5429688, 0.5585938, 0.5742188, 0.5859375,
0.6015625, 0.6132812, 0.6250000, 0.6367188, 0.6445312, 0.6562500, 0.6640625, 0.6718750,
0.6796875, 0.6875000, 0.6914062, 0.6992188, 0.7031250, 0.7031250, 0.7070312, 0.7070312,
0.7109375, 0.7070312, 0.7070312, 0.7031250, 0.7031250, 0.6992188, 0.6914062, 0.6875000,
0.6796875, 0.6718750, 0.6640625, 0.6562500, 0.6445312, 0.6367188, 0.6250000, 0.6132812,
0.6015625, 0.5859375, 0.5742188, 0.5585938, 0.5429688, 0.5273438, 0.5117188, 0.4960938,
0.4804688, 0.4648438, 0.4453125, 0.4296875, 0.4101562, 0.3906250, 0.3750000, 0.3750000]),
array([ 0.0000000, 0.0000000, 0.0039062, 0.0039062, 0.0078125, 0.0078125, 0.0117188, 0.0156250,
0.0156250, 0.0195312, 0.0234375, 0.0234375, 0.0273438, 0.0312500, 0.0351562, 0.0390625,
0.0429688, 0.0468750, 0.0507812, 0.0585938, 0.0625000, 0.0664062, 0.0742188, 0.0781250,
0.0859375, 0.0898438, 0.0976562, 0.1054688, 0.1093750, 0.1171875, 0.1250000, 0.1328125,
0.1367188, 0.1445312, 0.1523438, 0.1601562, 0.1679688, 0.1757812, 0.1796875, 0.1875000,
0.1953125, 0.1992188, 0.2070312, 0.2148438, 0.2187500,
0.2265625, 0.2304688, 0.2343750, 0.2382812, 0.2421875, 0.2460938, 0.2500000, 0.2539062, 0.2578125, 0.2578125, 0.2617188, 0.2617188, 0.2617188, 0.2617188, 0.2617188, 0.2617188, 0.2617188, 0.2617188, 0.2578125, 0.2578125, 0.2539062, 0.2500000, 0.2500000, 0.2460938, 0.2421875, 0.2382812, 0.2343750, 0.2304688, 0.2226562, 0.2187500, 0.2148438, 0.2109375, 0.2070312, 0.2031250, 0.1992188, 0.1914062, 0.1875000, 0.1835938, 0.1835938, 0.1796875, 0.1757812, 0.1718750, 0.1718750, 0.1718750, 0.1679688, 0.1679688, 0.1679688, 0.1679688, 0.1718750, 0.1718750, 0.1757812, 0.1796875, 0.1835938, 0.1875000, 0.1953125, 0.1992188, 0.2070312, 0.2148438, 0.2226562, 0.2343750, 0.2421875, 0.2539062, 0.2656250, 0.2773438, 0.2890625, 0.3046875, 0.3164062, 0.3320312, 0.3437500, 0.3593750, 0.3750000, 0.3906250, 0.4062500, 0.4218750, 0.4375000, 0.4531250, 0.4687500, 0.4843750, 0.5000000, 0.5156250, 0.5312500, 0.5468750, 0.5625000, 0.5781250, 0.5898438, 0.6015625, 0.6171875, 0.6289062, 0.6406250, 0.6484375, 0.6601562, 0.6679688, 0.6757812, 0.6835938, 0.6875000, 0.6953125, 0.6992188, 0.7031250, 0.7031250, 0.7031250, 0.7031250, 0.7031250, 0.6992188, 0.6992188, 0.6953125, 0.6875000, 0.6835938, 0.6757812, 0.6679688, 0.6601562, 0.6484375, 0.6406250, 0.6289062, 0.6171875, 0.6054688, 0.5937500, 0.5781250, 0.5664062, 0.5507812, 0.5390625, 0.5234375, 0.5117188, 0.4960938, 0.4843750, 0.4687500, 0.4570312, 0.4414062, 0.4296875, 0.4179688, 0.4062500, 0.3945312, 0.3867188, 0.3789062, 0.3710938, 0.3632812, 0.3554688, 0.3515625, 0.3476562, 0.3437500, 0.3437500, 0.3437500, 0.3437500, 0.3437500, 0.3476562, 0.3554688, 0.3593750, 0.3671875, 0.3750000, 0.3867188, 0.3984375, 0.4101562, 0.4257812, 0.4414062, 0.4570312, 0.4765625, 0.4960938, 0.5156250, 0.5351562, 0.5546875, 0.5781250, 0.6015625, 0.6250000, 0.6484375, 0.6757812, 0.6992188, 0.7265625, 0.7500000, 0.7773438, 0.8007812, 0.8281250, 0.8515625, 0.8789062, 0.9023438, 0.9257812, 0.9492188, 0.9687500, 0.9921875, 0.9804688, 0.9609375, 0.9414062, 0.9257812, 0.9101562, 0.8945312, 0.8828125, 0.8710938, 0.8593750, 0.8515625, 0.8437500, 0.8398438, 0.8359375, 0.8320312, 0.8320312, 0.8359375, 0.8359375, 0.8437500, 0.8476562, 0.8554688, 0.8671875, 0.8750000, 0.8906250, 0.9023438, 0.9179688, 0.9335938, 0.9531250, 0.9687500, 0.9882812, 0.9804688, 0.9609375, 0.9375000, 0.9140625, 0.9140625]), array([ 0.8359375, 0.8359375, 0.7890625, 0.7382812, 0.6914062, 0.6445312, 0.5976562, 0.5507812, 0.5039062, 0.4609375, 0.4179688, 0.3750000, 0.3320312, 0.2929688, 0.2539062, 0.2187500, 0.1835938, 0.1484375, 0.1171875, 0.0859375, 0.0585938, 0.0351562, 0.0078125, 0.0078125, 0.0273438, 0.0468750, 0.0625000, 0.0742188, 0.0859375, 0.0937500, 0.0976562, 0.1015625, 0.1054688, 0.1015625, 0.0976562, 0.0937500, 0.0859375, 0.0742188, 0.0625000, 0.0468750, 0.0273438, 0.0078125, 0.0078125, 0.0351562, 0.0585938, 0.0859375, 0.1171875, 0.1484375, 0.1835938, 0.2187500, 0.2539062, 0.2929688, 0.3320312, 0.3750000, 0.4179688, 0.4609375, 0.5039062, 0.5507812, 0.5976562, 0.6445312, 0.6914062, 0.7382812, 0.7890625, 0.8359375, 0.8867188, 0.9335938, 0.9804688, 0.9609375, 0.9140625, 0.8671875, 0.8203125, 0.7734375, 0.7265625, 0.6835938, 0.6406250, 0.5976562, 0.5546875, 0.5156250, 0.4765625, 0.4414062, 0.4062500, 0.3710938, 0.3398438, 0.3085938, 0.2812500, 0.2578125, 0.2304688, 0.2109375, 0.1914062, 0.1718750, 0.1562500, 0.1445312, 0.1328125, 0.1250000, 0.1210938, 0.1171875, 0.1132812, 0.1171875, 0.1210938, 0.1250000, 0.1328125, 0.1445312, 0.1562500, 0.1718750, 0.1914062, 0.2109375, 0.2304688, 0.2578125, 0.2812500, 0.3085938, 0.3398438, 
0.3710938, 0.4062500, 0.4414062, 0.4765625, 0.5156250, 0.5546875, 0.5976562, 0.6406250,
0.6835938, 0.7265625, 0.7734375, 0.8203125, 0.8671875, 0.9140625, 0.9609375, 0.9804688,
0.9335938, 0.8828125, 0.8359375, 0.7890625, 0.7382812, 0.6914062, 0.6445312, 0.5976562,
0.5507812, 0.5039062, 0.4609375, 0.4179688, 0.3750000, 0.3320312, 0.2929688, 0.2539062,
0.2187500, 0.1835938, 0.1484375, 0.1171875, 0.0859375, 0.0585938, 0.0351562, 0.0078125,
0.0078125, 0.0273438, 0.0468750, 0.0625000, 0.0742188, 0.0859375, 0.0937500, 0.0976562,
0.1015625, 0.1054688, 0.1015625, 0.0976562, 0.0937500, 0.0859375, 0.0742188, 0.0625000,
0.0468750, 0.0273438, 0.0078125, 0.0078125, 0.0351562, 0.0585938, 0.0859375, 0.1171875,
0.1484375, 0.1835938, 0.2187500, 0.2539062, 0.2929688, 0.3320312, 0.3750000, 0.4179688,
0.4609375, 0.5039062, 0.5507812, 0.5976562, 0.6445312, 0.6914062, 0.7382812, 0.7890625,
0.8359375, 0.8867188, 0.9335938, 0.9804688, 0.9609375, 0.9140625, 0.8671875, 0.8203125,
0.7734375, 0.7265625, 0.6835938, 0.6406250, 0.5976562, 0.5546875, 0.5156250, 0.4765625,
0.4414062, 0.4062500, 0.3710938, 0.3398438, 0.3085938, 0.2812500, 0.2578125, 0.2304688,
0.2109375, 0.1914062, 0.1718750, 0.1562500, 0.1445312, 0.1328125, 0.1250000, 0.1210938,
0.1171875, 0.1132812, 0.1171875, 0.1210938, 0.1250000, 0.1328125, 0.1445312, 0.1562500,
0.1718750, 0.1914062, 0.2109375, 0.2304688, 0.2578125, 0.2812500, 0.3085938, 0.3398438,
0.3710938, 0.4062500, 0.4414062, 0.4765625, 0.5156250, 0.5546875, 0.5976562, 0.6406250,
0.6835938, 0.7265625, 0.7734375, 0.8203125, 0.8671875, 0.9140625, 0.9609375, 0.9804688,
0.9804688]),
array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), )
### IDL colormap 36 :: Volcano ###
color_map_luts['idl36'] = \
( array([ 0.2500000, 0.2500000, 0.2343750, 0.2226562, 0.2109375, 0.1992188, 0.1875000, 0.1757812,
0.1640625, 0.1562500, 0.1445312, 0.1367188, 0.1250000, 0.1171875, 0.1093750, 0.1015625,
0.0937500, 0.0859375, 0.0781250, 0.0742188, 0.0664062, 0.0625000, 0.0546875, 0.0507812,
0.0468750, 0.0429688, 0.0390625, 0.0351562, 0.0351562, 0.0312500, 0.0273438, 0.0273438,
0.0273438, 0.0273438, 0.0273438, 0.0273438, 0.0273438, 0.0273438, 0.0273438, 0.0312500,
0.0312500, 0.0351562, 0.0390625, 0.0429688, 0.0468750, 0.0507812, 0.0546875, 0.0585938,
0.0664062, 0.0703125, 0.0781250, 0.0859375, 0.0898438,
0.0976562, 0.1054688, 0.1171875, 0.1250000, 0.1328125, 0.1445312, 0.1523438, 0.1640625,
0.1718750, 0.1835938, 0.1953125, 0.2070312, 0.2187500, 0.2304688, 0.2460938, 0.2578125,
0.2734375, 0.2851562, 0.3007812, 0.3125000, 0.3281250, 0.3437500, 0.3593750, 0.3750000,
0.3906250, 0.4062500, 0.4257812, 0.4414062, 0.4570312, 0.4765625, 0.4921875, 0.5117188,
0.5312500, 0.5468750, 0.5664062, 0.5859375, 0.6054688, 0.6250000, 0.6445312, 0.6640625,
0.6835938, 0.7031250, 0.7265625, 0.7460938, 0.7656250, 0.7851562, 0.8085938, 0.8281250,
0.8515625, 0.8710938, 0.8945312, 0.9140625, 0.9375000, 0.9609375, 0.9804688, 0.9882812,
0.9648438, 0.9414062, 0.9218750, 0.8984375, 0.8750000, 0.8515625, 0.8320312, 0.8085938,
0.7851562, 0.7617188, 0.7382812, 0.7148438, 0.6953125, 0.6718750, 0.6484375, 0.6250000,
0.6015625, 0.5781250, 0.5585938, 0.5351562, 0.5117188, 0.4882812, 0.4687500, 0.4453125,
0.4218750, 0.4023438, 0.3789062, 0.3593750, 0.3359375, 0.3125000, 0.2929688, 0.2734375,
0.2500000, 0.2304688, 0.2070312, 0.1875000, 0.1679688, 0.1484375, 0.1289062, 0.1093750,
0.0898438, 0.0703125, 0.0507812, 0.0312500, 0.0117188, 0.0039062, 0.0195312, 0.0390625,
0.0585938, 0.0742188, 0.0898438, 0.1093750, 0.1250000, 0.1406250, 0.1562500, 0.1718750,
0.1875000, 0.2031250, 0.2187500, 0.2343750, 0.2460938, 0.2617188, 0.2734375, 0.2851562,
0.3007812, 0.3125000, 0.3242188, 0.3359375, 0.3476562, 0.3593750, 0.3671875, 0.3789062,
0.3906250, 0.3984375, 0.4062500, 0.4179688, 0.4257812, 0.4335938, 0.4414062, 0.4453125,
0.4531250, 0.4609375, 0.4648438, 0.4726562, 0.4765625, 0.4804688, 0.4843750, 0.4882812,
0.4921875, 0.4960938, 0.5000000, 0.5000000, 0.5039062, 0.5039062, 0.5039062, 0.5039062,
0.5039062, 0.5039062, 0.5039062, 0.5039062, 0.5039062, 0.5000000, 0.4960938, 0.4960938,
0.4921875, 0.4882812, 0.4843750, 0.4804688, 0.4765625, 0.4687500, 0.4648438, 0.4570312,
0.4531250, 0.4453125, 0.4375000, 0.4296875, 0.4218750, 0.4140625, 0.4023438, 0.3945312,
0.3867188, 0.3750000, 0.3632812, 0.3554688, 0.3437500, 0.3320312, 0.3203125, 0.3085938,
0.2929688, 0.2812500, 0.2695312, 0.2539062, 0.2421875, 0.2265625, 0.2109375, 0.1953125,
0.1835938, 0.1679688, 0.1523438, 0.1328125, 0.1171875, 0.1015625, 0.0859375, 0.0664062,
0.0507812, 0.0312500, 0.0312500]),
array([ 0.1367188, 0.1367188, 0.1562500, 0.1718750, 0.1914062, 0.2109375, 0.2265625, 0.2460938,
0.2617188, 0.2812500, 0.2968750, 0.3125000, 0.3281250, 0.3476562, 0.3632812, 0.3750000,
0.3906250, 0.4062500, 0.4218750, 0.4335938, 0.4492188, 0.4609375, 0.4726562, 0.4843750,
0.4960938, 0.5078125, 0.5195312, 0.5273438, 0.5351562, 0.5468750, 0.5546875, 0.5625000,
0.5664062, 0.5742188, 0.5781250, 0.5859375, 0.5898438, 0.5937500, 0.5937500, 0.5976562,
0.5976562, 0.5976562, 0.5976562, 0.5976562, 0.5976562, 0.5976562, 0.5937500, 0.5898438,
0.5859375, 0.5820312, 0.5781250, 0.5703125, 0.5664062, 0.5585938, 0.5507812, 0.5429688,
0.5351562, 0.5234375, 0.5156250, 0.5039062, 0.4921875, 0.4804688, 0.4687500, 0.4570312,
0.4414062, 0.4296875, 0.4140625, 0.4023438, 0.3867188, 0.3710938, 0.3554688, 0.3398438,
0.3242188, 0.3085938, 0.2890625, 0.2734375, 0.2578125, 0.2382812, 0.2226562, 0.2031250,
0.1835938, 0.1679688, 0.1484375, 0.1289062, 0.1132812, 0.0937500, 0.0742188, 0.0585938,
0.0390625, 0.0195312, 0.0039062, 0.0117188, 0.0273438, 0.0468750, 0.0625000, 0.0781250,
0.0976562, 0.1132812, 0.1289062, 0.1445312, 0.1601562, 0.1757812, 0.1914062, 0.2070312,
0.2187500, 0.2343750, 0.2460938, 0.2578125, 0.2695312, 0.2812500, 0.2929688, 0.3046875,
0.3125000, 0.3242188, 0.3320312, 0.3398438, 0.3476562, 0.3554688, 0.3593750,
0.3671875, 0.3710938, 0.3750000, 0.3789062, 0.3828125, 0.3867188, 0.3867188, 0.3867188,
0.3867188, 0.3867188, 0.3867188, 0.3867188, 0.3828125, 0.3789062, 0.3789062, 0.3710938,
0.3671875, 0.3632812, 0.3554688, 0.3476562, 0.3437500, 0.3359375, 0.3242188, 0.3164062,
0.3046875, 0.2968750, 0.2851562, 0.2734375, 0.2617188, 0.2500000, 0.2382812, 0.2226562,
0.2109375, 0.1953125, 0.1796875, 0.1640625, 0.1484375, 0.1328125, 0.1171875, 0.1015625,
0.0859375, 0.0664062, 0.0507812, 0.0351562, 0.0156250, 0.0000000, 0.0156250, 0.0351562,
0.0507812, 0.0703125, 0.0898438, 0.1054688, 0.1250000, 0.1445312, 0.1601562, 0.1796875,
0.1992188, 0.2148438, 0.2343750, 0.2500000, 0.2695312, 0.2851562, 0.3007812, 0.3203125,
0.3359375, 0.3515625, 0.3671875, 0.3828125, 0.3984375, 0.4101562, 0.4257812, 0.4375000,
0.4531250, 0.4648438, 0.4765625, 0.4882812, 0.5000000, 0.5117188, 0.5195312, 0.5312500,
0.5390625, 0.5468750, 0.5546875, 0.5625000, 0.5703125, 0.5742188, 0.5820312, 0.5859375,
0.5898438, 0.5937500, 0.5937500, 0.5976562, 0.5976562, 0.5976562, 0.5976562, 0.5976562,
0.5976562, 0.5937500, 0.5937500, 0.5898438, 0.5859375, 0.5820312, 0.5742188, 0.5703125,
0.5625000, 0.5546875, 0.5468750, 0.5390625, 0.5312500, 0.5195312, 0.5117188, 0.5000000,
0.4882812, 0.4765625, 0.4648438, 0.4531250, 0.4375000, 0.4257812, 0.4101562, 0.3945312,
0.3828125, 0.3671875, 0.3515625, 0.3359375, 0.3164062, 0.3007812, 0.2851562, 0.2656250,
0.2500000, 0.2343750, 0.2148438, 0.1953125, 0.1796875, 0.1601562, 0.1445312, 0.1250000,
0.1250000]),
array([ 0.4531250, 0.4531250, 0.4101562, 0.3671875, 0.3281250, 0.2890625, 0.2500000, 0.2148438,
0.1796875, 0.1484375, 0.1171875, 0.0937500, 0.0703125, 0.0468750, 0.0312500, 0.0195312,
0.0078125, 0.0000000, 0.0000000, 0.0000000, 0.0039062, 0.0117188, 0.0234375, 0.0390625,
0.0546875, 0.0781250, 0.1015625, 0.1289062, 0.1601562, 0.1953125, 0.2304688, 0.2656250,
0.3046875, 0.3437500, 0.3867188, 0.4257812, 0.4687500, 0.5117188, 0.5546875, 0.5976562,
0.6367188, 0.6796875, 0.7187500, 0.7539062, 0.7890625, 0.8242188, 0.8515625, 0.8828125,
0.9062500, 0.9296875, 0.9492188, 0.9648438, 0.9765625, 0.9843750, 0.9882812, 0.9882812,
0.9882812, 0.9804688, 0.9726562, 0.9609375, 0.9414062, 0.9218750, 0.8984375, 0.8750000,
0.8437500, 0.8125000, 0.7812500, 0.7421875, 0.7070312, 0.6679688, 0.6250000, 0.5859375,
0.5429688, 0.5000000, 0.4570312, 0.4140625, 0.3750000, 0.3320312, 0.2929688, 0.2539062,
0.2187500, 0.1835938, 0.1523438, 0.1210938, 0.0937500, 0.0703125, 0.0507812, 0.0351562,
0.0195312, 0.0078125, 0.0000000, 0.0000000, 0.0000000, 0.0039062, 0.0117188, 0.0195312,
0.0351562, 0.0546875, 0.0742188, 0.0976562, 0.1250000, 0.1562500, 0.1875000, 0.2226562,
0.2617188, 0.2968750, 0.3398438, 0.3789062, 0.4218750, 0.4648438, 0.5078125, 0.5468750,
0.5898438, 0.6328125, 0.6718750, 0.7109375, 0.7500000, 0.7851562, 0.8164062, 0.8476562,
0.8789062, 0.9023438, 0.9257812, 0.9453125, 0.9609375, 0.9726562, 0.9843750, 0.9882812,
0.9921875, 0.9882812, 0.9843750, 0.9726562, 0.9609375, 0.9453125, 0.9257812, 0.9023438,
0.8789062, 0.8476562, 0.8164062, 0.7851562, 0.7500000, 0.7109375, 0.6718750, 0.6328125,
0.5898438, 0.5468750, 0.5078125, 0.4648438, 0.4218750, 0.3789062, 0.3398438, 0.2968750,
0.2617188, 0.2226562, 0.1875000, 0.1562500, 0.1250000, 0.0976562, 0.0742188, 0.0546875,
0.0351562, 0.0195312, 0.0117188, 0.0039062, 0.0000000, 0.0000000, 0.0000000, 0.0078125,
0.0195312, 0.0351562, 0.0507812, 0.0703125, 0.0937500, 0.1210938, 0.1523438, 0.1835938,
0.2187500, 0.2539062, 0.2929688, 0.3320312, 0.3750000, 0.4140625, 0.4570312, 0.5000000,
0.5429688,
0.5859375, 0.6250000, 0.6679688, 0.7070312, 0.7421875, 0.7812500, 0.8125000, 0.8437500,
0.8750000, 0.8984375, 0.9218750, 0.9414062, 0.9609375, 0.9726562, 0.9804688, 0.9882812,
0.9882812, 0.9882812, 0.9843750, 0.9765625, 0.9648438, 0.9492188, 0.9296875, 0.9062500,
0.8828125, 0.8515625, 0.8242188, 0.7890625, 0.7539062, 0.7187500, 0.6796875, 0.6367188,
0.5976562, 0.5546875, 0.5117188, 0.4687500, 0.4257812, 0.3867188, 0.3437500, 0.3046875,
0.2656250, 0.2304688, 0.1953125, 0.1601562, 0.1289062, 0.1015625, 0.0781250, 0.0546875,
0.0390625, 0.0234375, 0.0117188, 0.0039062, 0.0000000, 0.0000000, 0.0000000, 0.0078125,
0.0195312, 0.0312500, 0.0468750, 0.0703125, 0.0937500, 0.1171875, 0.1484375, 0.1796875,
0.2148438, 0.2500000, 0.2890625, 0.3281250, 0.3671875, 0.4101562, 0.4101562]),
array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), )
### IDL colormap 37 :: Waves ###
color_map_luts['idl37'] = \
( array([ 0.4843750, 0.4843750, 0.4726562, 0.4609375, 0.4492188, 0.4375000, 0.4257812, 0.4140625,
0.4023438, 0.3906250, 0.3789062, 0.3671875, 0.3554688, 0.3437500, 0.3320312, 0.3203125,
0.3085938, 0.2968750, 0.2851562, 0.2734375, 0.2656250, 0.2539062, 0.2421875, 0.2343750,
0.2226562, 0.2109375, 0.2031250, 0.1914062, 0.1835938, 0.1757812, 0.1640625, 0.1562500,
0.1484375, 0.1406250, 0.1289062, 0.1210938, 0.1132812, 0.1054688, 0.0976562, 0.0937500,
0.0859375, 0.0781250, 0.0742188, 0.0664062, 0.0585938, 0.0546875, 0.0507812, 0.0429688,
0.0390625, 0.0351562, 0.0312500, 0.0273438, 0.0234375, 0.0195312, 0.0156250, 0.0156250,
0.0117188, 0.0078125, 0.0078125, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062,
0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0078125, 0.0078125,
0.0117188, 0.0156250, 0.0156250, 0.0195312, 0.0234375, 0.0273438, 0.0312500, 0.0351562,
0.0390625, 0.0429688, 0.0507812, 0.0546875, 0.0585938, 0.0664062, 0.0742188, 0.0781250,
0.0859375, 0.0937500, 0.0976562, 0.1054688, 0.1132812, 0.1210938, 0.1289062, 0.1406250,
0.1484375, 0.1562500, 0.1640625, 0.1757812, 0.1835938, 0.1914062, 0.2031250, 0.2109375,
0.2226562, 0.2343750, 0.2421875, 0.2539062, 0.2656250, 0.2734375, 0.2851562, 0.2968750,
0.3085938, 0.3203125, 0.3320312, 0.3437500, 0.3554688, 0.3671875, 0.3789062, 0.3906250,
0.4023438, 0.4140625, 0.4257812, 0.4375000, 0.4492188, 0.4609375, 0.4726562, 0.4843750,
0.5000000, 0.5117188, 0.5234375, 0.5351562, 0.5468750, 0.5585938, 0.5703125, 0.5820312, 0.5937500, 0.6054688, 0.6171875, 0.6289062, 0.6406250, 0.6523438, 0.6640625, 0.6757812, 0.6875000, 0.6992188, 0.7109375, 0.7226562, 0.7304688, 0.7421875, 0.7539062, 0.7617188, 0.7734375, 0.7851562, 0.7929688, 0.8046875, 0.8125000, 0.8203125, 0.8320312, 0.8398438, 0.8476562, 0.8554688, 0.8671875, 0.8750000, 0.8828125, 0.8906250, 0.8984375, 0.9023438, 0.9101562, 0.9179688, 0.9218750, 0.9296875, 0.9375000, 0.9414062, 0.9453125, 0.9531250, 0.9570312, 0.9609375, 0.9648438, 0.9687500, 0.9726562, 0.9765625, 0.9804688, 0.9804688, 0.9843750, 0.9882812, 0.9882812, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9960938, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9882812, 0.9882812, 0.9843750, 0.9804688, 0.9804688, 0.9765625, 0.9726562, 0.9687500, 0.9648438, 0.9609375, 0.9570312, 0.9531250, 0.9453125, 0.9414062, 0.9375000, 0.9296875, 0.9218750, 0.9179688, 0.9101562, 0.9023438, 0.8984375, 0.8906250, 0.8828125, 0.8750000, 0.8671875, 0.8554688, 0.8476562, 0.8398438, 0.8320312, 0.8203125, 0.8125000, 0.8046875, 0.7929688, 0.7851562, 0.7734375, 0.7617188, 0.7539062, 0.7421875, 0.7304688, 0.7226562, 0.7109375, 0.6992188, 0.6875000, 0.6757812, 0.6640625, 0.6523438, 0.6406250, 0.6289062, 0.6171875, 0.6054688, 0.5937500, 0.5820312, 0.5703125, 0.5585938, 0.5468750, 0.5351562, 0.5234375, 0.5234375]), array([ 0.4726562, 0.4726562, 0.5507812, 0.6054688, 0.6250000, 0.6054688, 0.5507812, 0.4726562, 0.3789062, 0.2812500, 0.2031250, 0.1484375, 0.1328125, 0.1484375, 0.2031250, 0.2812500, 0.3750000, 0.4726562, 0.5507812, 0.6054688, 0.6250000, 0.6054688, 0.5507812, 0.4726562, 0.3789062, 0.2812500, 0.2031250, 0.1484375, 0.1328125, 0.1484375, 0.2031250, 0.2812500, 0.3750000, 0.4726562, 0.5507812, 0.6054688, 0.6250000, 0.6054688, 0.5507812, 0.4726562, 0.3789062, 0.2812500, 0.2031250, 0.1484375, 0.1328125, 0.1484375, 0.2031250, 0.2812500, 0.3750000, 0.4726562, 0.5507812, 0.6054688, 0.6250000, 0.6054688, 0.5507812, 0.4726562, 0.3789062, 0.2812500, 0.2031250, 0.1484375, 0.1328125, 0.1484375, 0.2031250, 0.2812500, 0.3750000, 0.4726562, 0.5507812, 0.6054688, 0.6250000, 0.6054688, 0.5507812, 0.4726562, 0.3789062, 0.2812500, 0.2031250, 0.1484375, 0.1328125, 0.1484375, 0.2031250, 0.2812500, 0.3750000, 0.4726562, 0.5507812, 0.6054688, 0.6250000, 0.6054688, 0.5507812, 0.4726562, 0.3789062, 0.2812500, 0.2031250, 0.1484375, 0.1328125, 0.1484375, 0.2031250, 0.2812500, 0.3750000, 0.4726562, 0.5507812, 0.6054688, 0.6250000, 0.6054688, 0.5507812, 0.4726562, 0.3789062, 0.2812500, 0.2031250, 0.1484375, 0.1328125, 0.1484375, 0.2031250, 0.2812500, 0.3750000, 0.4726562, 0.5507812, 0.6054688, 0.6250000, 0.6054688, 0.5507812, 0.4726562, 0.3750000, 0.2812500, 0.2031250, 0.1484375, 0.1328125, 0.1484375, 0.2031250, 0.2812500, 0.3789062, 0.4726562, 0.5507812, 0.6054688, 0.6250000, 0.6054688, 0.5507812, 0.4726562, 0.3750000, 0.2812500, 0.2031250, 0.1484375, 0.1328125, 0.1484375, 0.2031250, 0.2812500, 0.3789062, 0.4726562, 0.5507812, 0.6054688, 0.6250000, 0.6054688, 0.5507812, 0.4726562, 0.3750000, 0.2812500, 0.2031250, 0.1484375, 0.1328125, 0.1484375, 0.2031250, 0.2812500, 0.3789062, 0.4726562, 0.5507812, 0.6054688, 0.6250000, 0.6054688, 0.5507812, 0.4726562, 0.3750000, 0.2812500, 0.2031250, 0.1484375, 0.1328125, 0.1484375, 0.2031250, 0.2812500, 0.3789062, 0.4726562, 0.5507812, 0.6054688, 0.6250000, 0.6054688, 0.5507812, 0.4726562, 0.3750000, 0.2812500, 0.2031250, 0.1484375, 0.1328125, 0.1484375, 0.2031250, 0.2812500, 0.3789062, 0.4726562, 
0.5507812, 0.6054688, 0.6250000, 0.6054688, 0.5507812, 0.4726562, 0.3750000, 0.2812500,
0.2031250, 0.1484375, 0.1328125, 0.1484375, 0.2031250, 0.2812500, 0.3789062, 0.4726562,
0.5507812, 0.6054688, 0.6250000, 0.6054688, 0.5507812, 0.4726562, 0.3750000, 0.2812500,
0.2031250, 0.1484375, 0.1328125, 0.1484375, 0.2031250, 0.2812500, 0.3789062, 0.4726562,
0.5507812, 0.6054688, 0.6250000, 0.6054688, 0.5507812, 0.4726562, 0.3750000, 0.2812500,
0.2031250, 0.1484375, 0.1328125, 0.1484375, 0.2031250, 0.2812500, 0.3789062, 0.4726562,
0.5507812, 0.6054688, 0.6250000, 0.6054688, 0.5507812, 0.4726562, 0.3750000, 0.2812500,
0.2031250, 0.1484375, 0.1328125, 0.1484375, 0.2031250, 0.2031250]),
array([ 0.5117188, 0.5117188, 0.5234375, 0.5351562, 0.5468750, 0.5585938, 0.5703125, 0.5820312,
0.5937500, 0.6054688, 0.6171875, 0.6289062, 0.6406250, 0.6523438, 0.6640625, 0.6757812,
0.6875000, 0.6992188, 0.7109375, 0.7226562, 0.7304688, 0.7421875, 0.7539062, 0.7617188,
0.7734375, 0.7851562, 0.7929688, 0.8046875, 0.8125000, 0.8203125, 0.8320312, 0.8398438,
0.8476562, 0.8554688, 0.8671875, 0.8750000, 0.8828125, 0.8906250, 0.8984375, 0.9023438,
0.9101562, 0.9179688, 0.9218750, 0.9296875, 0.9375000, 0.9414062, 0.9453125, 0.9531250,
0.9570312, 0.9609375, 0.9648438, 0.9687500, 0.9726562, 0.9765625, 0.9804688, 0.9804688,
0.9843750, 0.9882812, 0.9882812, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9921875,
0.9960938, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9921875, 0.9882812, 0.9882812,
0.9843750, 0.9804688, 0.9804688, 0.9765625, 0.9726562, 0.9687500, 0.9648438, 0.9609375,
0.9570312, 0.9531250, 0.9453125, 0.9414062, 0.9375000, 0.9296875, 0.9218750, 0.9179688,
0.9101562, 0.9023438, 0.8984375, 0.8906250, 0.8828125, 0.8750000, 0.8671875, 0.8554688,
0.8476562, 0.8398438, 0.8320312, 0.8203125, 0.8125000, 0.8046875, 0.7929688, 0.7851562,
0.7734375, 0.7617188, 0.7539062, 0.7421875, 0.7304688, 0.7226562, 0.7109375, 0.6992188,
0.6875000, 0.6757812, 0.6640625, 0.6523438, 0.6406250, 0.6289062, 0.6171875, 0.6054688,
0.5937500, 0.5820312, 0.5703125, 0.5585938, 0.5468750, 0.5351562, 0.5234375, 0.5117188,
0.4960938, 0.4843750, 0.4726562, 0.4609375, 0.4492188, 0.4375000, 0.4257812, 0.4140625,
0.4023438, 0.3906250, 0.3789062, 0.3671875, 0.3554688, 0.3437500, 0.3320312, 0.3203125,
0.3085938, 0.2968750, 0.2851562, 0.2734375, 0.2656250, 0.2539062, 0.2421875, 0.2343750,
0.2226562, 0.2109375, 0.2031250, 0.1914062, 0.1835938, 0.1757812, 0.1640625, 0.1562500,
0.1484375, 0.1406250, 0.1289062, 0.1210938, 0.1132812, 0.1054688, 0.0976562, 0.0937500,
0.0859375, 0.0781250, 0.0742188, 0.0664062, 0.0585938, 0.0546875, 0.0507812, 0.0429688,
0.0390625, 0.0351562, 0.0312500, 0.0273438, 0.0234375, 0.0195312, 0.0156250, 0.0156250,
0.0117188, 0.0078125, 0.0078125, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062,
0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0039062, 0.0078125, 0.0078125,
0.0117188, 0.0156250, 0.0156250, 0.0195312, 0.0234375, 0.0273438, 0.0312500, 0.0351562,
0.0390625, 0.0429688, 0.0507812, 0.0546875, 0.0585938, 0.0664062, 0.0742188, 0.0781250,
0.0859375, 0.0937500, 0.0976562, 0.1054688, 0.1132812, 0.1210938, 0.1289062, 0.1406250,
0.1484375, 0.1562500, 0.1640625, 0.1757812, 0.1835938, 0.1914062, 0.2031250, 0.2109375,
0.2226562, 0.2343750, 0.2421875, 0.2539062, 0.2656250, 0.2734375, 0.2851562, 0.2968750,
0.3085938, 0.3203125, 0.3320312, 0.3437500, 0.3554688, 0.3671875, 0.3789062, 0.3906250,
0.4023438, 0.4140625, 0.4257812, 0.4375000, 0.4492188, 0.4609375, 0.4726562, 0.4726562]),
array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 38 :: Rainbow18 ### color_map_luts['idl38'] = \ ( array([ 0.0000000, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 
0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.6835938, 0.6835938, 0.6835938, 0.6835938, 0.6835938, 0.6835938, 0.6835938, 0.6835938, 0.6835938, 0.6835938, 0.6835938, 0.6835938, 0.6835938, 0.6835938, 0.9960938, 0.9960938]), array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.5468750, 0.5468750, 0.5468750, 0.5468750, 0.5468750, 0.5468750, 0.5468750, 0.5468750, 0.5468750, 0.5468750, 0.5468750, 0.5468750, 0.5468750, 0.5468750, 0.5468750, 0.5468750, 0.6640625, 0.6640625, 0.6640625, 0.6640625, 0.6640625, 0.6640625, 0.6640625, 0.6640625, 0.6640625, 0.6640625, 0.6640625, 0.6640625, 0.6640625, 0.6640625, 0.6640625, 0.6640625, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.6250000, 0.4882812, 0.4882812, 0.4882812, 0.4882812, 0.4882812, 0.4882812, 0.4882812, 0.4882812, 0.4882812, 0.4882812, 0.4882812, 0.4882812, 0.4882812, 0.4882812, 0.4882812, 0.4882812, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.1953125, 0.9960938, 0.9960938]), array([ 0.0000000, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 
0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.5859375, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.7812500, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.4687500, 0.4687500, 0.4687500, 0.4687500, 0.4687500, 0.4687500, 0.4687500, 0.4687500, 0.4687500, 0.4687500, 0.4687500, 0.4687500, 0.4687500, 0.4687500, 0.4687500, 0.4687500, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.3906250, 0.2929688, 0.2929688, 0.2929688, 0.2929688, 0.2929688, 0.2929688, 0.2929688, 0.2929688, 0.2929688, 0.2929688, 0.2929688, 0.2929688, 0.2929688, 0.2929688, 0.9960938, 0.9960938]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 39 :: Rainbow + white ### color_map_luts['idl39'] = \ ( array([ 0.0000000, 0.0156250, 0.0351562, 0.0507812, 0.0703125, 0.0859375, 0.1054688, 0.1210938, 0.1406250, 0.1562500, 0.1757812, 0.1953125, 0.2265625, 0.2382812, 0.2500000, 0.2656250, 0.2695312, 0.2812500, 0.2890625, 0.3007812, 0.3085938, 0.3125000, 0.3203125, 0.3242188, 0.3281250, 0.3359375, 0.3398438, 0.3437500, 0.3359375, 0.3398438, 0.3398438, 0.3398438, 0.3320312, 0.3281250, 0.3281250, 0.3281250, 0.3085938, 0.3046875, 0.3007812, 0.2968750, 0.2773438, 0.2734375, 0.2656250, 0.2578125, 0.2343750, 0.2265625, 0.2148438, 0.1796875, 0.1679688, 0.1562500, 0.1406250, 0.1289062, 0.0976562, 0.0820312, 0.0625000, 0.0468750, 0.0156250, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0156250, 0.0312500, 0.0468750, 0.0820312, 0.0976562, 0.1132812, 0.1640625, 0.1796875, 0.1992188, 0.2148438, 0.2460938, 0.2617188, 0.2812500, 0.2968750, 0.3125000, 0.3476562, 0.3632812, 0.3789062, 0.4296875, 0.4453125, 0.4648438, 0.4804688, 0.5117188, 0.5273438, 0.5468750, 0.5625000, 0.5976562, 0.6132812, 0.6289062, 0.6445312, 0.6953125, 0.7109375, 0.7304688, 0.7460938, 0.7773438, 0.7929688, 0.8125000, 0.8281250, 0.8632812, 0.8789062, 0.8945312, 0.9453125, 0.9609375, 0.9765625, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938]), array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 
0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0156250, 0.0625000, 0.0820312, 0.0976562, 0.1132812, 0.1484375, 0.1640625, 0.1796875, 0.1992188, 0.2148438, 0.2460938, 0.2617188, 0.2812500, 0.3281250, 0.3476562, 0.3632812, 0.3789062, 0.4140625, 0.4296875, 0.4453125, 0.4648438, 0.4960938, 0.5117188, 0.5273438, 0.5468750, 0.5937500, 0.6132812, 0.6289062, 0.6445312, 0.6796875, 0.6953125, 0.7109375, 0.7304688, 0.7617188, 0.7773438, 0.7929688, 0.8437500, 0.8593750, 0.8789062, 0.8945312, 0.9101562, 0.9453125, 0.9609375, 0.9765625, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9765625, 0.9453125, 0.9296875, 0.9101562, 0.8945312, 0.8632812, 0.8437500, 0.8281250, 0.7773438, 0.7617188, 0.7460938, 0.7304688, 0.6953125, 0.6796875, 0.6640625, 0.6445312, 0.6289062, 0.5976562, 0.5781250, 0.5625000, 0.5117188, 0.4960938, 0.4804688, 0.4648438, 0.4296875, 0.4140625, 0.3984375, 0.3789062, 0.3476562, 0.3320312, 0.3125000, 0.2968750, 0.2460938, 0.2304688, 0.2148438, 0.1992188, 0.1640625, 0.1484375, 0.1328125, 0.1132812, 0.0820312, 0.0664062, 0.0468750, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.9960938]), array([ 0.0000000, 0.0117188, 0.0273438, 0.0390625, 0.0546875, 0.0742188, 0.0898438, 0.1093750, 0.1250000, 0.1484375, 0.1679688, 0.1875000, 0.2304688, 0.2460938, 0.2656250, 0.2812500, 0.3007812, 0.3164062, 0.3359375, 0.3554688, 0.3710938, 0.3906250, 0.4062500, 0.4257812, 0.4609375, 0.4765625, 0.4960938, 0.5156250, 0.5312500, 0.5507812, 0.5664062, 0.5859375, 0.6015625, 0.6210938, 0.6367188, 0.6562500, 0.6914062, 0.7109375, 0.7265625, 0.7460938, 0.7617188, 0.7812500, 0.7968750, 0.8164062, 0.8359375, 0.8515625, 0.8710938, 0.9062500, 0.9218750, 0.9414062, 0.9570312, 0.9765625, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 
0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9609375, 0.9453125, 0.9296875, 0.8789062, 0.8593750, 0.8437500, 0.8281250, 0.7929688, 0.7773438, 0.7617188, 0.7460938, 0.7304688, 0.6953125, 0.6796875, 0.6640625, 0.6132812, 0.5937500, 0.5781250, 0.5625000, 0.5273438, 0.5117188, 0.4960938, 0.4804688, 0.4453125, 0.4296875, 0.4140625, 0.3984375, 0.3476562, 0.3281250, 0.3125000, 0.2968750, 0.2617188, 0.2460938, 0.2304688, 0.2148438, 0.1796875, 0.1640625, 0.1484375, 0.0976562, 0.0820312, 0.0625000, 0.0468750, 0.0312500, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.9960938]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) ### IDL colormap 40 :: Rainbow + black ### color_map_luts['idl40'] = \ ( array([ 0.0000000, 0.0156250, 0.0351562, 0.0507812, 0.0703125, 0.0859375, 0.1054688, 0.1210938, 0.1406250, 0.1562500, 0.1757812, 0.1953125, 0.2265625, 0.2382812, 0.2500000, 0.2656250, 0.2695312, 0.2812500, 0.2890625, 0.3007812, 0.3085938, 0.3125000, 0.3203125, 0.3242188, 0.3281250, 0.3359375, 0.3398438, 
0.3437500, 0.3359375, 0.3398438, 0.3398438, 0.3398438, 0.3320312, 0.3281250, 0.3281250, 0.3281250, 0.3085938, 0.3046875, 0.3007812, 0.2968750, 0.2773438, 0.2734375, 0.2656250, 0.2578125, 0.2343750, 0.2265625, 0.2148438, 0.1796875, 0.1679688, 0.1562500, 0.1406250, 0.1289062, 0.0976562, 0.0820312, 0.0625000, 0.0468750, 0.0156250, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0156250, 0.0312500, 0.0468750, 0.0820312, 0.0976562, 0.1132812, 0.1640625, 0.1796875, 0.1992188, 0.2148438, 0.2460938, 0.2617188, 0.2812500, 0.2968750, 0.3125000, 0.3476562, 0.3632812, 0.3789062, 0.4296875, 0.4453125, 0.4648438, 0.4804688, 0.5117188, 0.5273438, 0.5468750, 0.5625000, 0.5976562, 0.6132812, 0.6289062, 0.6445312, 0.6953125, 0.7109375, 0.7304688, 0.7460938, 0.7773438, 0.7929688, 0.8125000, 0.8281250, 0.8632812, 0.8789062, 0.8945312, 0.9453125, 0.9609375, 0.9765625, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.0000000]), array([ 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0156250, 0.0625000, 0.0820312, 0.0976562, 0.1132812, 0.1484375, 0.1640625, 0.1796875, 0.1992188, 0.2148438, 0.2460938, 0.2617188, 0.2812500, 0.3281250, 0.3476562, 0.3632812, 0.3789062, 0.4140625, 0.4296875, 0.4453125, 0.4648438, 0.4960938, 0.5117188, 0.5273438, 0.5468750, 0.5937500, 0.6132812, 0.6289062, 0.6445312, 0.6796875, 0.6953125, 0.7109375, 0.7304688, 0.7617188, 0.7773438, 
0.7929688, 0.8437500, 0.8593750, 0.8789062, 0.8945312, 0.9101562, 0.9453125, 0.9609375, 0.9765625, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9765625, 0.9453125, 0.9296875, 0.9101562, 0.8945312, 0.8632812, 0.8437500, 0.8281250, 0.7773438, 0.7617188, 0.7460938, 0.7304688, 0.6953125, 0.6796875, 0.6640625, 0.6445312, 0.6289062, 0.5976562, 0.5781250, 0.5625000, 0.5117188, 0.4960938, 0.4804688, 0.4648438, 0.4296875, 0.4140625, 0.3984375, 0.3789062, 0.3476562, 0.3320312, 0.3125000, 0.2968750, 0.2460938, 0.2304688, 0.2148438, 0.1992188, 0.1640625, 0.1484375, 0.1328125, 0.1132812, 0.0820312, 0.0664062, 0.0468750, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000]), array([ 0.0000000, 0.0117188, 0.0273438, 0.0390625, 0.0546875, 0.0742188, 0.0898438, 0.1093750, 0.1250000, 0.1484375, 0.1679688, 0.1875000, 0.2304688, 0.2460938, 0.2656250, 0.2812500, 0.3007812, 0.3164062, 0.3359375, 0.3554688, 0.3710938, 0.3906250, 0.4062500, 0.4257812, 0.4609375, 0.4765625, 0.4960938, 0.5156250, 0.5312500, 0.5507812, 0.5664062, 0.5859375, 0.6015625, 0.6210938, 0.6367188, 0.6562500, 0.6914062, 0.7109375, 0.7265625, 0.7460938, 0.7617188, 0.7812500, 0.7968750, 0.8164062, 0.8359375, 0.8515625, 0.8710938, 0.9062500, 0.9218750, 0.9414062, 0.9570312, 0.9765625, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9960938, 0.9609375, 0.9453125, 0.9296875, 0.8789062, 0.8593750, 0.8437500, 0.8281250, 0.7929688, 0.7773438, 0.7617188, 0.7460938, 0.7304688, 0.6953125, 0.6796875, 0.6640625, 0.6132812, 0.5937500, 0.5781250, 0.5625000, 0.5273438, 0.5117188, 0.4960938, 0.4804688, 0.4453125, 0.4296875, 0.4140625, 0.3984375, 0.3476562, 0.3281250, 0.3125000, 0.2968750, 0.2617188, 0.2460938, 0.2304688, 0.2148438, 0.1796875, 0.1640625, 0.1484375, 0.0976562, 0.0820312, 0.0625000, 0.0468750, 0.0312500, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 
0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.0000000]), array([ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) color_map_luts["doom"] = ( array([ 0, 31, 23, 75, 255, 27, 19, 11, 7, 47, 35, 23, 15, 79, 71, 63, 255, 247, 243, 235, 231, 223, 219, 211, 203, 199, 191, 187, 179, 175, 167, 163, 155, 151, 143, 139, 131, 127, 119, 115, 107, 103, 95, 91, 83, 79, 71, 67, 255, 255, 255, 255, 255, 255, 255, 255, 255, 247, 239, 231, 223, 215, 207, 203, 191, 179, 171, 163, 155, 143, 135, 127, 119, 107, 95, 83, 75, 63, 51, 43, 239, 231, 223, 219, 211, 203, 199, 191, 183, 179, 171, 167, 159, 151, 147, 139, 131, 127, 119, 111, 107, 99, 91, 87, 79, 71, 67, 59, 55, 47, 39, 35, 119, 111, 103, 95, 91, 83, 75, 67, 63, 55, 47, 39, 31, 23, 19, 11, 191, 183, 175, 167, 159, 155, 147, 139, 131, 123, 119, 111, 103, 95, 87, 83, 159, 143, 131, 119, 103, 91, 79, 67, 123, 111, 103, 91, 83, 71, 63, 55, 255, 235, 215, 195, 175, 155, 135, 115, 255, 255, 255, 255, 255, 255, 255, 255, 255, 239, 227, 215, 203, 191, 179, 167, 155, 139, 127, 115, 103, 91, 79, 67, 231, 199, 171, 143, 115, 83, 55, 27, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255, 255, 255, 243, 235, 223, 215, 203, 195, 183, 175, 255, 255, 255, 255, 255, 255, 255, 255, 167, 159, 147, 135, 79, 67, 55, 47, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 207, 
159, 111, 167]) / 255.0, array([ 0, 23, 15, 75, 255, 27, 19, 11, 7, 55, 43, 31, 23, 59, 51, 43, 183, 171, 163, 151, 143, 135, 123, 115, 107, 99, 91, 87, 79, 71, 63, 59, 51, 47, 43, 35, 31, 27, 23, 19, 15, 11, 7, 7, 7, 0, 0, 0, 235, 227, 219, 211, 207, 199, 191, 187, 179, 171, 163, 155, 147, 139, 131, 127, 123, 115, 111, 107, 99, 95, 87, 83, 79, 71, 67, 63, 55, 47, 43, 35, 239, 231, 223, 219, 211, 203, 199, 191, 183, 179, 171, 167, 159, 151, 147, 139, 131, 127, 119, 111, 107, 99, 91, 87, 79, 71, 67, 59, 55, 47, 39, 35, 255, 239, 223, 207, 191, 175, 159, 147, 131, 115, 99, 83, 67, 51, 35, 23, 167, 159, 151, 143, 135, 127, 123, 115, 107, 99, 95, 87, 83, 75, 67, 63, 131, 119, 107, 95, 83, 71, 59, 51, 127, 115, 107, 99, 87, 79, 71, 63, 255, 219, 187, 155, 123, 91, 67, 43, 255, 219, 187, 155, 123, 95, 63, 31, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 231, 199, 171, 143, 115, 83, 55, 27, 0, 0, 0, 0, 0, 0, 0, 0, 255, 235, 215, 199, 179, 163, 143, 127, 115, 111, 103, 95, 87, 79, 71, 67, 255, 255, 255, 255, 255, 255, 255, 255, 63, 55, 47, 35, 59, 47, 35, 27, 0, 0, 0, 0, 0, 0, 0, 0, 159, 231, 123, 0, 0, 0, 0, 107]) / 255.0, array([ 0, 11, 7, 75, 255, 27, 19, 11, 7, 31, 15, 7, 0, 43, 35, 27, 183, 171, 163, 151, 143, 135, 123, 115, 107, 99, 91, 87, 79, 71, 63, 59, 51, 47, 43, 35, 31, 27, 23, 19, 15, 11, 7, 7, 7, 0, 0, 0, 223, 211, 199, 187, 179, 167, 155, 147, 131, 123, 115, 107, 99, 91, 83, 79, 75, 71, 67, 63, 59, 55, 51, 47, 43, 39, 35, 31, 27, 23, 19, 15, 239, 231, 223, 219, 211, 203, 199, 191, 183, 179, 171, 167, 159, 151, 147, 139, 131, 127, 119, 111, 107, 99, 91, 87, 79, 71, 67, 59, 55, 47, 39, 35, 111, 103, 95, 87, 79, 71, 63, 55, 47, 43, 35, 27, 23, 15, 11, 7, 143, 135, 127, 119, 111, 107, 99, 91, 87, 79, 75, 67, 63, 55, 51, 47, 99, 83, 75, 63, 51, 43, 35, 27, 99, 87, 79, 71, 59, 51, 43, 39, 115, 87, 67, 47, 31, 19, 7, 0, 255, 219, 187, 155, 123, 95, 63, 31, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255, 255, 255, 255, 227, 203, 179, 155, 131, 107, 83, 255, 219, 187, 155, 123, 91, 59, 27, 23, 15, 15, 11, 7, 0, 0, 0, 255, 215, 179, 143, 107, 71, 35, 0, 0, 0, 0, 0, 39, 27, 19, 11, 83, 71, 59, 47, 35, 23, 11, 0, 67, 75, 255, 255, 207, 155, 107, 107]) / 255.0, np.ones(256), ) # Aliases color_map_luts['B-W LINEAR'] = color_map_luts['idl00'] color_map_luts['BLUE'] = color_map_luts['idl01'] color_map_luts['GRN-RED-BLU-WHT'] = color_map_luts['idl02'] color_map_luts['RED TEMPERATURE'] = color_map_luts['idl03'] color_map_luts['BLUE'] = color_map_luts['idl04'] color_map_luts['STD GAMMA-II'] = color_map_luts['idl05'] color_map_luts['PRISM'] = color_map_luts['idl06'] color_map_luts['RED-PURPLE'] = color_map_luts['idl07'] color_map_luts['GREEN'] = color_map_luts['idl08'] color_map_luts['GRN'] = color_map_luts['idl09'] color_map_luts['GREEN-PINK'] = color_map_luts['idl10'] color_map_luts['BLUE-RED'] = color_map_luts['idl11'] color_map_luts['16 LEVEL'] = color_map_luts['idl12'] color_map_luts['RAINBOW'] = color_map_luts['idl13'] color_map_luts['STEPS'] = color_map_luts['idl14'] color_map_luts['STERN SPECIAL'] = color_map_luts['idl15'] color_map_luts['Haze'] = color_map_luts['idl16'] color_map_luts['Blue - Pastel - Red'] = color_map_luts['idl17'] color_map_luts['Pastels'] = color_map_luts['idl18'] color_map_luts['Hue Sat Lightness 1'] = color_map_luts['idl19'] color_map_luts['Hue Sat Lightness 2'] = color_map_luts['idl20'] color_map_luts['Hue Sat Value 1'] = color_map_luts['idl21'] color_map_luts['Hue Sat Value 2'] = color_map_luts['idl22'] color_map_luts['Purple-Red + 
Stripes'] = color_map_luts['idl23'] color_map_luts['Beach'] = color_map_luts['idl24'] color_map_luts['Mac Style'] = color_map_luts['idl25'] color_map_luts['Eos A'] = color_map_luts['idl26'] color_map_luts['Eos B'] = color_map_luts['idl27'] color_map_luts['Hardcandy'] = color_map_luts['idl28'] color_map_luts['Nature'] = color_map_luts['idl29'] color_map_luts['Ocean'] = color_map_luts['idl30'] color_map_luts['Peppermint'] = color_map_luts['idl31'] color_map_luts['Plasma'] = color_map_luts['idl32'] color_map_luts['Blue-Red'] = color_map_luts['idl33'] color_map_luts['Rainbow'] = color_map_luts['idl34'] color_map_luts['Blue Waves'] = color_map_luts['idl35'] color_map_luts['Volcano'] = color_map_luts['idl36'] color_map_luts['Waves'] = color_map_luts['idl37'] color_map_luts['Rainbow18'] = color_map_luts['idl38'] color_map_luts['Rainbow + white'] = color_map_luts['idl39'] color_map_luts['Rainbow + black'] = color_map_luts['idl40'] # Create a reversed LUT for each of the above defined LUTs # and append a "_r" (for reversal. consistent with MPL convention). # So for example, the reversal of "Waves" is "Waves_r" temp = {} for k,v in color_map_luts.items(): temp[k+"_r"] = (v[0][::-1], v[1][::-1], v[2][::-1], v[3][::-1]) color_map_luts.update(temp) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/_commons.py0000644000175100001770000001660114714401662017312 0ustar00runnerdockerimport os import warnings from functools import wraps from typing import TYPE_CHECKING, TypeVar import matplotlib as mpl from matplotlib.ticker import SymmetricalLogLocator from more_itertools import always_iterable from yt.config import ytcfg if TYPE_CHECKING: from matplotlib.backend_bases import FigureCanvasBase _DEFAULT_FONT_PROPERTIES = None def get_default_font_properties(): global _DEFAULT_FONT_PROPERTIES if _DEFAULT_FONT_PROPERTIES is None: import importlib.resources as importlib_resources _yt_style = mpl.rc_params_from_file( importlib_resources.files("yt") / "default.mplstyle", use_default_template=False, ) _DEFAULT_FONT_PROPERTIES = { "family": _yt_style["font.family"][0], "math_fontfamily": _yt_style["mathtext.fontset"], } return _DEFAULT_FONT_PROPERTIES def _get_supported_image_file_formats(): from matplotlib.backend_bases import FigureCanvasBase return frozenset(FigureCanvasBase.get_supported_filetypes().keys()) def _get_supported_canvas_classes(): from matplotlib.backends.backend_agg import FigureCanvasAgg from matplotlib.backends.backend_pdf import FigureCanvasPdf from matplotlib.backends.backend_ps import FigureCanvasPS from matplotlib.backends.backend_svg import FigureCanvasSVG return frozenset( (FigureCanvasAgg, FigureCanvasPdf, FigureCanvasPS, FigureCanvasSVG) ) def get_canvas_class(suffix: str) -> type["FigureCanvasBase"]: s = suffix.removeprefix(".") if s not in _get_supported_image_file_formats(): raise ValueError(f"Unsupported file format '{suffix}'.") for cls in _get_supported_canvas_classes(): if s in cls.get_supported_filetypes(): return cls raise RuntimeError( "Something went terribly wrong. " f"File extension '{suffix}' is supposed to be supported " "but no compatible backend was found." ) def validate_image_name(filename, suffix: str | None = None) -> str: """ Build a valid image filename with a specified extension (default to png). The suffix parameter is ignored if the input filename has a valid extension already. Otherwise, suffix is appended to the filename, replacing any existing extension. 
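Examples (illustrative sketches; they assume a stock matplotlib install where both "png" and "pdf" are supported file formats):

>>> validate_image_name("out")
'out.png'
>>> validate_image_name("out.pdf")
'out.pdf'
>>> validate_image_name("out", suffix=".pdf")
'out.pdf'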
""" name, psuffix = os.path.splitext(filename) psuffix = psuffix.removeprefix(".") if suffix is not None: suffix = suffix.removeprefix(".") if psuffix in _get_supported_image_file_formats(): if suffix in _get_supported_image_file_formats() and suffix != psuffix: warnings.warn( f"Received two valid image formats {psuffix!r} (from filename) " f"and {suffix!r} (from suffix). The former is ignored.", stacklevel=2, ) return f"{name}.{suffix}" return str(filename) if suffix is None: suffix = "png" if suffix not in _get_supported_image_file_formats(): raise ValueError(f"Unsupported file format {suffix!r}") return f"{filename}.{suffix}" def get_canvas(figure, filename): name, suffix = os.path.splitext(filename) if not suffix: raise ValueError( f"Can not determine canvas class from filename '{filename}' " f"without an extension." ) return get_canvas_class(suffix)(figure) def invalidate_plot(f): @wraps(f) def newfunc(self, *args, **kwargs): retv = f(self, *args, **kwargs) self._plot_valid = False return retv return newfunc def invalidate_data(f): @wraps(f) def newfunc(self, *args, **kwargs): retv = f(self, *args, **kwargs) self._data_valid = False self._plot_valid = False return retv return newfunc def invalidate_figure(f): @wraps(f) def newfunc(self, *args, **kwargs): retv = f(self, *args, **kwargs) for field in self.plots.keys(): self.plots[field].figure = None self.plots[field].axes = None self.plots[field].cax = None self._setup_plots() return retv return newfunc def validate_plot(f): @wraps(f) def newfunc(self, *args, **kwargs): # TODO: _profile_valid and _data_valid seem to play very similar roles, # there's probably room to abstract these into a common operation if hasattr(self, "_data_valid") and not self._data_valid: self._recreate_frb() if hasattr(self, "_profile_valid") and not self._profile_valid: self._recreate_profile() if not self._plot_valid: # it is the responsibility of _setup_plots to # call plot.run_callbacks() self._setup_plots() retv = f(self, *args, **kwargs) return retv return newfunc T = TypeVar("T", tuple, list) def _swap_axes_extents(extent: T) -> T: """ swaps the x and y extent values, preserving type of extent Parameters ---------- extent : sequence of four unyt quantities the current 4-element tuple or list of unyt quantities describing the plot extent. extent = (xmin, xmax, ymin, ymax). Returns ------- tuple or list the extent axes swapped, now with (ymin, ymax, xmin, xmax). 
""" extent_swapped = [extent[2], extent[3], extent[0], extent[1]] return type(extent)(extent_swapped) def _swap_arg_pair_order(*args): """ flips adjacent argument pairs, useful for swapping x-y plot arguments Parameters ---------- *args argument pairs, must have an even number of *args Returns ------- tuple args with order of pairs switched, i.e,: _swap_arg_pair_order(x, y, px, py) returns: y, x, py, px """ if len(args) % 2 != 0: raise TypeError("Number of arguments must be even.") n_pairs = len(args) // 2 new_args = [] for i in range(n_pairs): x_id = i * 2 new_args.append(args[x_id + 1]) new_args.append(args[x_id]) return tuple(new_args) class _MPL38_SymmetricalLogLocator(SymmetricalLogLocator): # Backporting behaviour from matplotlib 3.8 (in development at the time of writing) # see https://github.com/matplotlib/matplotlib/pull/25970 def __init__(self, *args, **kwargs): if mpl.__version_info__ >= (3, 8): raise RuntimeError( "_MPL38_SymmetricalLogLocator is not needed with matplotlib>=3.8" ) super().__init__(*args, **kwargs) def tick_values(self, vmin, vmax): linthresh = self._linthresh if vmax < vmin: vmin, vmax = vmax, vmin if -linthresh <= vmin < vmax <= linthresh: # only the linear range is present return sorted({vmin, 0, vmax}) return super().tick_values(vmin, vmax) def get_default_from_config(data_source, *, field, keys, defaults): _keys = list(always_iterable(keys)) _defaults = list(always_iterable(defaults)) ftype, fname = data_source._determine_fields(field)[0] ret = [ ytcfg.get_most_specific("plot", ftype, fname, key, fallback=default) for key, default in zip(_keys, _defaults, strict=True) ] if len(ret) == 1: return ret[0] else: return ret def _get_units_label(units: str) -> str: if r"\frac" in units: return rf"$\ \ \left({units}\right)$" elif units: return rf"$\ \ ({units})$" else: return "" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/_handlers.py0000644000175100001770000004043614714401662017442 0ustar00runnerdockerimport weakref from numbers import Real from typing import TYPE_CHECKING, Any, Literal, Optional, TypeAlias, Union import matplotlib as mpl import numpy as np import unyt as un from matplotlib.colors import Colormap, LogNorm, Normalize, SymLogNorm from unyt import unyt_quantity from yt._typing import Quantity, Unit from yt.config import ytcfg from yt.funcs import get_brewer_cmap, is_sequence, mylog if TYPE_CHECKING: # RGBColorType, RGBAColorType and ColorType are backported from matplotlib 3.8.0 RGBColorType = tuple[float, float, float] | str RGBAColorType = Union[ # noqa: UP007 str, # "none" or "#RRGGBBAA"/"#RGBA" hex strings tuple[float, float, float, float], # 2 tuple (color, alpha) representations, not infinitely recursive # RGBColorType includes the (str, float) tuple, even for RGBA strings tuple[RGBColorType, float], # (4-tuple, float) is odd, but accepted as the outer float overriding A of 4-tuple tuple[tuple[float, float, float, float], float], ] ColorType = RGBColorType | RGBAColorType # this type alias is unique to the present module ColormapInput: TypeAlias = Colormap | str | None class NormHandler: """ A bookkeeper class that can hold a fully defined norm object, or dynamically build one on demand according to a set of constraints. If a fully defined norm object is added, any existing constraints are dropped, and vice versa. These rules are implemented with properties and watcher patterns. 
It also keeps track of display units so that vmin, vmax and linthresh can be updated with implicit units. """ # using slots here to minimize the risk of introducing bugs # since attribute names are essential to this class's implementation __slots__ = ( "data_source", "ds", "_display_units", "_vmin", "_vmax", "_dynamic_range", "_norm_type", "_linthresh", "_norm", "prefer_log", ) _constraint_attrs: list[str] = [ "vmin", "vmax", "dynamic_range", "norm_type", "linthresh", ] def __init__( self, data_source, *, display_units: un.Unit, vmin: un.unyt_quantity | None = None, vmax: un.unyt_quantity | None = None, dynamic_range: float | None = None, norm_type: type[Normalize] | None = None, norm: Normalize | None = None, linthresh: float | None = None, ): self.data_source = weakref.proxy(data_source) self.ds = data_source.ds # should already be a weakref proxy self._display_units = display_units self._norm = norm self._vmin = vmin self._vmax = vmax self._dynamic_range = dynamic_range self._norm_type = norm_type self._linthresh = linthresh self.prefer_log = True if self.norm is not None and self.has_constraints: raise TypeError( "NormHandler input is malformed. " "A norm cannot be passed alongside other constraints." ) def _get_constraints(self) -> dict[str, Any]: return { attr: getattr(self, attr) for attr in self.__class__._constraint_attrs if getattr(self, attr) is not None } @property def has_constraints(self) -> bool: return bool(self._get_constraints()) def _reset_constraints(self) -> None: constraints = self._get_constraints() if not constraints: return msg = ", ".join([f"{name}={value}" for name, value in constraints.items()]) mylog.warning("Dropping norm constraints (%s)", msg) for name in constraints.keys(): setattr(self, name, None) def _reset_norm(self) -> None: if self.norm is None: return mylog.warning("Dropping norm (%s)", self.norm) self._norm = None def to_float(self, val: un.unyt_quantity) -> float: return float(val.to(self.display_units).d) def to_quan(self, val) -> un.unyt_quantity: if isinstance(val, un.unyt_quantity): return self.ds.quan(val) elif ( is_sequence(val) and len(val) == 2 and isinstance(val[0], Real) and isinstance(val[1], (str, un.Unit)) ): return self.ds.quan(*val) elif isinstance(val, Real): return self.ds.quan(val, self.display_units) else: raise TypeError(f"Could not convert {val!r} to unyt_quantity") @property def display_units(self) -> un.Unit: return self._display_units @display_units.setter def display_units(self, newval: Unit) -> None: self._display_units = un.Unit(newval, registry=self.ds.unit_registry) def _set_quan_attr(self, attr: str, newval: Quantity | float | None) -> None: if newval is None: setattr(self, attr, None) else: try: quan = self.to_quan(newval) except TypeError as exc: raise TypeError( "Expected None, a float, or a unyt_quantity, " f"received {newval} with type {type(newval)}" ) from exc else: setattr(self, attr, quan) @property def vmin(self) -> un.unyt_quantity | Literal["min"] | None: return self._vmin @vmin.setter def vmin(self, newval: Quantity | float | Literal["min"] | None) -> None: self._reset_norm() if newval == "min": self._vmin = "min" else: self._set_quan_attr("_vmin", newval) @property def vmax(self) -> un.unyt_quantity | Literal["max"] | None: return self._vmax @vmax.setter def vmax(self, newval: Quantity | float | Literal["max"] | None) -> None: self._reset_norm() if newval == "max": self._vmax = "max" else: self._set_quan_attr("_vmax", newval) @property def dynamic_range(self) -> float | None: return
self._dynamic_range @dynamic_range.setter def dynamic_range(self, newval: float | None) -> None: if newval is None: return try: newval = float(newval) except TypeError: raise TypeError( f"Expected a float. Received {newval} with type {type(newval)}" ) from None if newval == 0: raise ValueError("Dynamic range cannot be zero.") if newval == 1: raise ValueError("Dynamic range cannot be unity.") self._reset_norm() self._dynamic_range = newval def get_dynamic_range( self, dvmin: float | None, dvmax: float | None ) -> tuple[float, float]: if self.dynamic_range is None: raise RuntimeError( "Something went terribly wrong in setting up a dynamic range" ) if self.vmax is None: if self.vmin is None: raise TypeError( "Cannot set dynamic range with neither " "vmin and vmax being constrained." ) if dvmin is None: raise RuntimeError( "Something went terribly wrong in setting up a dynamic range" ) return dvmin, dvmin * self.dynamic_range elif self.vmin is None: if dvmax is None: raise RuntimeError( "Something went terribly wrong in setting up a dynamic range" ) return dvmax / self.dynamic_range, dvmax else: raise TypeError( "Cannot set dynamic range with both " "vmin and vmax already constrained." ) @property def norm_type(self) -> type[Normalize] | None: return self._norm_type @norm_type.setter def norm_type(self, newval: type[Normalize] | None) -> None: if not ( newval is None or (isinstance(newval, type) and issubclass(newval, Normalize)) ): raise TypeError( "Expected a subclass of matplotlib.colors.Normalize, " f"received {newval} with type {type(newval)}" ) self._reset_norm() if newval is not SymLogNorm: self.linthresh = None self._norm_type = newval @property def norm(self) -> Normalize | None: return self._norm @norm.setter def norm(self, newval: Normalize) -> None: if not isinstance(newval, Normalize): raise TypeError( "Expected a matplotlib.colors.Normalize object, " f"received {newval} with type {type(newval)}" ) self._reset_constraints() self._norm = newval @property def linthresh(self) -> float | None: return self._linthresh @linthresh.setter def linthresh(self, newval: Quantity | float | None) -> None: self._reset_norm() self._set_quan_attr("_linthresh", newval) if self._linthresh is not None and self._linthresh <= 0: raise ValueError( f"linthresh can only be set to strictly positive values, got {newval}" ) if newval is not None: self.norm_type = SymLogNorm def get_norm(self, data: np.ndarray, *args, **kw) -> Normalize: if self.norm is not None: return self.norm dvmin = dvmax = None finite_values_mask = np.isfinite(data) if self.vmin is not None and not ( isinstance(self.vmin, str) and self.vmin == "min" ): dvmin = self.to_float(self.vmin) elif np.any(finite_values_mask): dvmin = self.to_float(np.nanmin(data[finite_values_mask])) if self.vmax is not None and not ( isinstance(self.vmax, str) and self.vmax == "max" ): dvmax = self.to_float(self.vmax) elif np.any(finite_values_mask): dvmax = self.to_float(np.nanmax(data[finite_values_mask])) if self.dynamic_range is not None: dvmin, dvmax = self.get_dynamic_range(dvmin, dvmax) if dvmin is None: dvmin = 1 * getattr(data, "units", 1) kw.setdefault("vmin", dvmin) if dvmax is None: dvmax = 1 * getattr(data, "units", 1) kw.setdefault("vmax", dvmax) norm_type: type[Normalize] if data.ndim == 3: assert data.shape[-1] == 4 # this is an RGBA array, only linear normalization makes sense here norm_type = Normalize elif self.norm_type is not None: # this is a convenience mechanism for backward compat, # allowing to toggle between lin and log scaling without 
detailed user input norm_type = self.norm_type else: if ( not self.prefer_log or kw["vmin"] == kw["vmax"] or not np.any(finite_values_mask) ): norm_type = Normalize elif kw["vmin"] <= 0: # note: see issue 3944 (and PRs and issues linked therein) for a # discussion on when to switch to SymLog and related questions # of how to calculate a default linthresh value. norm_type = SymLogNorm else: norm_type = LogNorm if norm_type is SymLogNorm: if self.linthresh is not None: linthresh = self.to_float(self.linthresh) else: linthresh = self._guess_linthresh(data[finite_values_mask]) kw.setdefault("linthresh", linthresh) kw.setdefault("base", 10) return norm_type(*args, **kw) def _guess_linthresh(self, finite_plot_data): # finite_plot_data is the ImageArray or ColorbarHandler data, already # filtered to be finite values # get the extrema for the negative and positive values separately # neg_min -> neg_max -> 0 -> pos_min -> pos_max def get_minmax(data): if len(data) > 0: return np.nanmin(data), np.nanmax(data) return None, None pos_min, pos_max = get_minmax(finite_plot_data[finite_plot_data > 0]) neg_min, neg_max = get_minmax(finite_plot_data[finite_plot_data < 0]) has_pos = pos_min is not None has_neg = neg_min is not None # the starting guess is the absolute value of the point closest to 0 # (remember: neg_max is closer to 0 than neg_min) if has_pos and has_neg: linthresh = np.min((-neg_max, pos_min)) elif has_pos: linthresh = pos_min elif has_neg: linthresh = -neg_max else: # this condition should be handled before here in get_norm raise RuntimeError("No finite data points.") log10_linthresh = np.log10(linthresh) # if either the pos or neg ranges exceed cutoff_sigdigs, then # linthresh is shifted to decrease the range to avoid floating point # precision errors in the normalization. 
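# worked example (hypothetical values): for finite data spanning
# [-2.0, -0.5] and [0.25, 8.0], the starting guess above is
# min(-(-0.5), 0.25) = 0.25; the positive range then spans
# log10(8.0) - log10(0.25) ~ 1.5 decades, well below the cutoff
# defined just below, so the guess is kept as-is.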
cutoff_sigdigs = 15 # max allowable range in significant digits if has_pos and np.log10(pos_max) - log10_linthresh > cutoff_sigdigs: linthresh = pos_max / (10.0**cutoff_sigdigs) log10_linthresh = np.log10(linthresh) if has_neg and np.log10(-neg_min) - log10_linthresh > cutoff_sigdigs: linthresh = np.abs(neg_min) / (10.0**cutoff_sigdigs) if isinstance(linthresh, unyt_quantity): # if the original plot_data has units, linthresh will have units here return self.to_float(linthresh) return linthresh class ColorbarHandler: __slots__ = ("_draw_cbar", "_draw_minorticks", "_cmap", "_background_color") def __init__( self, *, draw_cbar: bool = True, draw_minorticks: bool = True, cmap: "ColormapInput" = None, background_color: str | None = None, ): self._draw_cbar = draw_cbar self._draw_minorticks = draw_minorticks self._cmap: Colormap | None = None self._set_cmap(cmap) self._background_color: ColorType | None = background_color @property def draw_cbar(self) -> bool: return self._draw_cbar @draw_cbar.setter def draw_cbar(self, newval) -> None: if not isinstance(newval, bool): raise TypeError( f"Expected a boolean, got {newval} with type {type(newval)}" ) self._draw_cbar = newval @property def draw_minorticks(self) -> bool: return self._draw_minorticks @draw_minorticks.setter def draw_minorticks(self, newval) -> None: if not isinstance(newval, bool): raise TypeError( f"Expected a boolean, got {newval} with type {type(newval)}" ) self._draw_minorticks = newval @property def cmap(self) -> Colormap: return self._cmap or mpl.colormaps[ytcfg.get("yt", "default_colormap")] @cmap.setter def cmap(self, newval: "ColormapInput") -> None: self._set_cmap(newval) def _set_cmap(self, newval: "ColormapInput") -> None: # a separate setter function is better supported by type checkers (mypy) # than relying purely on a property setter to narrow type # from ColormapInput to Colormap if isinstance(newval, Colormap) or newval is None: self._cmap = newval elif isinstance(newval, str): self._cmap = mpl.colormaps[newval] elif is_sequence(newval): # type: ignore[unreachable] # tuple colormaps are from palettable (or brewer2mpl) self._cmap = get_brewer_cmap(newval) else: raise TypeError( "Expected a colormap object or name, " f"got {newval} with type {type(newval)}" ) @property def background_color(self) -> "ColorType": return self._background_color or "white" @background_color.setter def background_color(self, newval: Optional["ColorType"]) -> None: if newval is None: self._background_color = self.cmap(0) else: self._background_color = newval @property def has_background_color(self) -> bool: return self._background_color is not None ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/api.py0000644000175100001770000000200714714401662016244 0ustar00runnerdockerfrom .base_plot_types import get_multi_plot from .color_maps import add_colormap, make_colormap, show_colormaps from .fits_image import ( FITSImageData, FITSOffAxisProjection, FITSOffAxisSlice, FITSParticleOffAxisProjection, FITSParticleProjection, FITSProjection, FITSSlice, ) from .fixed_resolution import FixedResolutionBuffer, ParticleImageBuffer from .image_writer import ( apply_colormap, map_to_colors, multi_image_composite, scale_image, splat_points, write_bitmap, write_image, write_projection, ) from .line_plot import LineBuffer, LinePlot from .particle_plots import ParticlePhasePlot, ParticlePlot, ParticleProjectionPlot from .plot_modifications import PlotCallback, callback_registry from .plot_window 
import ( AxisAlignedProjectionPlot, AxisAlignedSlicePlot, OffAxisProjectionPlot, OffAxisSlicePlot, ProjectionPlot, SlicePlot, plot_2d, ) from .profile_plotter import PhasePlot, ProfilePlot from .streamlines import Streamlines ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/base_plot_types.py0000644000175100001770000005577614714401662020714 0ustar00runnerdockerimport sys import warnings from abc import ABC from io import BytesIO from typing import TYPE_CHECKING, Optional, TypedDict import matplotlib import numpy as np from matplotlib.scale import SymmetricalLogTransform from matplotlib.ticker import LogFormatterMathtext from yt._typing import AlphaT from yt.funcs import ( get_interactivity, is_sequence, matplotlib_style_context, mylog, setdefault_mpl_metadata, setdefaultattr, ) from yt.visualization._handlers import ColorbarHandler, NormHandler from ._commons import ( get_canvas, validate_image_name, ) if matplotlib.__version_info__ >= (3, 8): from matplotlib.ticker import SymmetricalLogLocator else: from ._commons import _MPL38_SymmetricalLogLocator as SymmetricalLogLocator if TYPE_CHECKING: from typing import Literal from matplotlib.axes import Axes from matplotlib.axis import Axis from matplotlib.figure import Figure from matplotlib.transforms import Transform class FormatKwargs(TypedDict): style: Literal["scientific"] scilimits: tuple[int, int] useMathText: bool BACKEND_SPECS = { "GTK": ["backend_gtk", "FigureCanvasGTK", "FigureManagerGTK"], "GTKAgg": ["backend_gtkagg", "FigureCanvasGTKAgg", None], "GTKCairo": ["backend_gtkcairo", "FigureCanvasGTKCairo", None], "MacOSX": ["backend_macosx", "FigureCanvasMac", "FigureManagerMac"], "Qt5Agg": ["backend_qt5agg", "FigureCanvasQTAgg", None], "QtAgg": ["backend_qtagg", "FigureCanvasQTAgg", None], "TkAgg": ["backend_tkagg", "FigureCanvasTkAgg", None], "WX": ["backend_wx", "FigureCanvasWx", None], "WXAgg": ["backend_wxagg", "FigureCanvasWxAgg", None], "GTK3Cairo": [ "backend_gtk3cairo", "FigureCanvasGTK3Cairo", "FigureManagerGTK3Cairo", ], "GTK3Agg": ["backend_gtk3agg", "FigureCanvasGTK3Agg", "FigureManagerGTK3Agg"], "WebAgg": ["backend_webagg", "FigureCanvasWebAgg", None], "nbAgg": ["backend_nbagg", "FigureCanvasNbAgg", "FigureManagerNbAgg"], "agg": ["backend_agg", "FigureCanvasAgg", None], } class CallbackWrapper: def __init__(self, viewer, window_plot, frb, field, font_properties, font_color): self.frb = frb self.data = frb.data_source self._axes = window_plot.axes self._figure = window_plot.figure if len(self._axes.images) > 0: self.raw_image_shape = self._axes.images[0]._A.shape if viewer._has_swapped_axes: # store the original un-transposed shape self.raw_image_shape = self.raw_image_shape[1], self.raw_image_shape[0] if frb.axis is not None: DD = frb.ds.domain_width xax = frb.ds.coordinates.x_axis[frb.axis] yax = frb.ds.coordinates.y_axis[frb.axis] self._period = (DD[xax], DD[yax]) self.ds = frb.ds self.xlim = viewer.xlim self.ylim = viewer.ylim self._swap_axes = viewer._has_swapped_axes self._flip_horizontal = viewer._flip_horizontal # needed for quiver self._flip_vertical = viewer._flip_vertical # needed for quiver # an important note on _swap_axes: _swap_axes will swap x,y arguments # in callbacks (e.g., plt.plot(x,y) will be plt.plot(y, x). The xlim # and ylim arguments above, and internal callback references to coordinates # are the **unswapped** ranges. 
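# illustrative sketch (hypothetical values): with _swap_axes=True, a
# callback that would draw plt.plot([0, 1], [0, 2]) ends up issuing
# plt.plot([0, 2], [0, 1]), while self.xlim and self.ylim above still
# describe the unswapped x and y ranges.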
self._axes_unit_names = viewer._axes_unit_names if "OffAxisSlice" in viewer._plot_type: self._type_name = "CuttingPlane" else: self._type_name = viewer._plot_type self.aspect = window_plot._aspect self.font_properties = font_properties self.font_color = font_color self.field = field self._transform = viewer._transform class PlotMPL: """A base class for all yt plots made using matplotlib, that is backend independent.""" def __init__( self, fsize, axrect: tuple[float, float, float, float], *, norm_handler: NormHandler, figure: Optional["Figure"] = None, axes: Optional["Axes"] = None, ): """Initialize PlotMPL class""" import matplotlib.figure self._plot_valid = True if figure is None: if not is_sequence(fsize): fsize = (fsize, fsize) self.figure = matplotlib.figure.Figure(figsize=fsize, frameon=True) else: figure.set_size_inches(fsize) self.figure = figure if axes is None: self._create_axes(axrect) else: axes.clear() axes.set_position(axrect) self.axes = axes self.interactivity = get_interactivity() figure_canvas, figure_manager = self._get_canvas_classes() self.canvas = figure_canvas(self.figure) if figure_manager is not None: self.manager = figure_manager(self.canvas, 1) self.axes.tick_params( which="both", axis="both", direction="in", top=True, right=True ) self.norm_handler = norm_handler def _create_axes(self, axrect: tuple[float, float, float, float]) -> None: self.axes = self.figure.add_axes(axrect) def _get_canvas_classes(self): if self.interactivity: key = str(matplotlib.get_backend()) else: key = "agg" module, fig_canvas, fig_manager = BACKEND_SPECS[key] mod = __import__( "matplotlib.backends", globals(), locals(), [module], 0, ) submod = getattr(mod, module) FigureCanvas = getattr(submod, fig_canvas) if fig_manager is not None: FigureManager = getattr(submod, fig_manager) return FigureCanvas, FigureManager return FigureCanvas, None def save(self, name, mpl_kwargs=None, canvas=None): """Choose backend and save image to disk""" if mpl_kwargs is None: mpl_kwargs = {} name = validate_image_name(name) setdefault_mpl_metadata(mpl_kwargs, name) try: canvas = get_canvas(self.figure, name) except ValueError: canvas = self.canvas mylog.info("Saving plot %s", name) with matplotlib_style_context(): canvas.print_figure(name, **mpl_kwargs) return name def show(self): try: self.manager.show() except AttributeError: self.canvas.show() def _get_labels(self): ax = self.axes labels = ax.xaxis.get_ticklabels() + ax.yaxis.get_ticklabels() labels += ax.xaxis.get_minorticklabels() labels += ax.yaxis.get_minorticklabels() labels += [ ax.title, ax.xaxis.label, ax.yaxis.label, ax.xaxis.get_offset_text(), ax.yaxis.get_offset_text(), ] return labels def _set_font_properties(self, font_properties, font_color): for label in self._get_labels(): label.set_fontproperties(font_properties) if font_color is not None: label.set_color(font_color) def _repr_png_(self): from matplotlib.backends.backend_agg import FigureCanvasAgg canvas = FigureCanvasAgg(self.figure) f = BytesIO() with matplotlib_style_context(): canvas.print_figure(f) f.seek(0) return f.read() class ImagePlotMPL(PlotMPL, ABC): """A base class for yt plots made using imshow""" _default_font_size = 18.0 def __init__( self, fsize=None, axrect=None, caxrect=None, *, norm_handler: NormHandler, colorbar_handler: ColorbarHandler, figure: Optional["Figure"] = None, axes: Optional["Axes"] = None, cax: Optional["Axes"] = None, ): """Initialize ImagePlotMPL class object""" self._transform: Transform | None setdefaultattr(self, "_transform", None) 
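# note: a None _transform means _init_image below falls back to
# self.axes.transData, i.e. the image is drawn in plain data coordinates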
self.colorbar_handler = colorbar_handler _missing_layout_specs = [_ is None for _ in (fsize, axrect, caxrect)] if all(_missing_layout_specs): fsize, axrect, caxrect = self._get_best_layout() elif any(_missing_layout_specs): raise TypeError( "ImagePlotMPL cannot be initialized with partially specified layout." ) super().__init__( fsize, axrect, norm_handler=norm_handler, figure=figure, axes=axes ) if cax is None: self.cax = self.figure.add_axes(caxrect) else: cax.clear() cax.set_position(caxrect) self.cax = cax def _setup_layout_constraints( self, figure_size: tuple[float, float] | float, fontsize: float ): # Setup base layout attributes # derived classes need to call this before super().__init__ # but they are free to do other stuff in between if isinstance(figure_size, tuple): assert len(figure_size) == 2 assert all(isinstance(_, float) for _ in figure_size) self._figure_size = figure_size else: assert isinstance(figure_size, float) self._figure_size = (figure_size, figure_size) self._draw_axes = True fontscale = float(fontsize) / self.__class__._default_font_size if fontscale < 1.0: fontscale = np.sqrt(fontscale) self._cb_size = 0.0375 * self._figure_size[0] self._ax_text_size = [1.2 * fontscale, 0.9 * fontscale] self._top_buff_size = 0.30 * fontscale self._aspect = 1.0 def _reset_layout(self) -> None: size, axrect, caxrect = self._get_best_layout() self.axes.set_position(axrect) self.cax.set_position(caxrect) self.figure.set_size_inches(*size) def _init_image(self, data, extent, aspect, *, alpha: AlphaT = None): """Store output of imshow in image variable""" norm = self.norm_handler.get_norm(data) extent = [float(e) for e in extent] if self._transform is None: # sets the transform to be an ax.TransData object, where the # coordinate system of the data is controlled by the xlim and ylim # of the data. transform = self.axes.transData else: transform = self._transform self._validate_axes_extent(extent, transform) self.image = self.axes.imshow( data.to_ndarray(), origin="lower", extent=extent, norm=norm, aspect=aspect, cmap=self.colorbar_handler.cmap, interpolation="nearest", transform=transform, alpha=alpha, ) self._set_axes() def _set_axes(self) -> None: fmt_kwargs: FormatKwargs = { "style": "scientific", "scilimits": (-2, 3), "useMathText": True, } self.image.axes.ticklabel_format(**fmt_kwargs) self.image.axes.set_facecolor(self.colorbar_handler.background_color) self.cax.tick_params(which="both", direction="in") # For creating a multipanel plot by ImageGrid # we may need the location keyword, which requires Matplotlib >= 3.7.0 cb_location = getattr(self.cax, "orientation", None) if matplotlib.__version_info__ >= (3, 7): self.cb = self.figure.colorbar(self.image, self.cax, location=cb_location) else: if cb_location in ["top", "bottom"]: warnings.warn( "Cannot properly set the orientation of colorbar. 
" "Consider upgrading matplotlib to version 3.7 or newer", stacklevel=6, ) self.cb = self.figure.colorbar(self.image, self.cax) cb_axis: Axis if self.cb.orientation == "vertical": cb_axis = self.cb.ax.yaxis else: cb_axis = self.cb.ax.xaxis cb_scale = cb_axis.get_scale() if cb_scale == "symlog": trf = cb_axis.get_transform() if not isinstance(trf, SymmetricalLogTransform): raise RuntimeError cb_axis.set_major_locator(SymmetricalLogLocator(trf)) cb_axis.set_major_formatter( LogFormatterMathtext(linthresh=trf.linthresh, base=trf.base) ) if cb_scale not in ("log", "symlog"): self.cb.ax.ticklabel_format(**fmt_kwargs) if self.colorbar_handler.draw_minorticks and cb_scale == "symlog": # no minor ticks are drawn by default in symlog, as of matplotlib 3.7.1 # see https://github.com/matplotlib/matplotlib/issues/25994 trf = cb_axis.get_transform() if not isinstance(trf, SymmetricalLogTransform): raise RuntimeError if float(trf.base).is_integer(): locator = SymmetricalLogLocator(trf, subs=list(range(1, int(trf.base)))) cb_axis.set_minor_locator(locator) elif self.colorbar_handler.draw_minorticks: self.cb.minorticks_on() else: self.cb.minorticks_off() def _validate_axes_extent(self, extent, transform): # if the axes are cartopy GeoAxes, this checks that the axes extent # is properly set. if "cartopy" not in sys.modules: # cartopy isn't already loaded, nothing to do here return from cartopy.mpl.geoaxes import GeoAxes if isinstance(self.axes, GeoAxes): # some projections have trouble when passing extents at or near the # limits. So we only set_extent when the plot is a subset of the # globe, within the tolerance of the transform. # note that `set_extent` here is setting the extent of the axes. # still need to pass the extent arg to imshow in order to # ensure that it is properly scaled. also note that set_extent # expects values in the coordinates of the transform: it will # calculate the coordinates in the projection. 
global_extent = transform.x_limits + transform.y_limits thresh = transform.threshold if all( abs(extent[ie]) < (abs(global_extent[ie]) - thresh) for ie in range(4) ): self.axes.set_extent(extent, crs=transform) def _get_best_layout(self): # this method is called in ImagePlotMPL.__init__ # required attributes # - self._figure_size: Union[float, Tuple[float, float]] # - self._aspect: float # - self._ax_text_size: Tuple[float, float] # - self._draw_axes: bool # - self.colorbar_handler: ColorbarHandler # optional attributes # - self._unit_aspect: float # Ensure the figure size along the long axis is always equal to _figure_size unit_aspect = getattr(self, "_unit_aspect", 1) if is_sequence(self._figure_size): x_fig_size, y_fig_size = self._figure_size y_fig_size *= unit_aspect else: x_fig_size = y_fig_size = self._figure_size scaling = self._aspect / unit_aspect if scaling < 1: x_fig_size *= scaling else: y_fig_size /= scaling if self.colorbar_handler.draw_cbar: cb_size = self._cb_size cb_text_size = self._ax_text_size[1] + 0.45 else: cb_size = x_fig_size * 0.04 cb_text_size = 0.0 if self._draw_axes: x_axis_size = self._ax_text_size[0] y_axis_size = self._ax_text_size[1] else: x_axis_size = x_fig_size * 0.04 y_axis_size = y_fig_size * 0.04 top_buff_size = self._top_buff_size if not self._draw_axes and not self.colorbar_handler.draw_cbar: x_axis_size = 0.0 y_axis_size = 0.0 cb_size = 0.0 cb_text_size = 0.0 top_buff_size = 0.0 xbins = np.array([x_axis_size, x_fig_size, cb_size, cb_text_size]) ybins = np.array([y_axis_size, y_fig_size, top_buff_size]) size = [xbins.sum(), ybins.sum()] x_frac_widths = xbins / size[0] y_frac_widths = ybins / size[1] # axrect is the rectangle defining the area of the # axis object of the plot. Its range goes from 0 to 1 in # x and y directions. The first two values are the x,y # start values of the axis object (lower left corner), and the # second two values are the size of the axis object. To get # the upper right corner, add the first x,y to the second x,y. axrect = ( x_frac_widths[0], y_frac_widths[0], x_frac_widths[1], y_frac_widths[1], ) # caxrect is the rectangle defining the area of the colorbar # axis object of the plot. It is defined just as the axrect # tuple is. caxrect = ( x_frac_widths[0] + x_frac_widths[1], y_frac_widths[0], x_frac_widths[2], y_frac_widths[1], ) return size, axrect, caxrect def _toggle_axes(self, choice, draw_frame=None): """ Turn on/off displaying the axis ticks and labels for a plot. Parameters ---------- choice : boolean If True, set the axes to be drawn. If False, set the axes to not be drawn. """ self._draw_axes = choice self._draw_frame = draw_frame if draw_frame is None: draw_frame = choice if self.colorbar_handler.has_background_color and not draw_frame: # workaround matplotlib's behaviour # last checked with Matplotlib 3.5 warnings.warn( f"Previously set background color {self.colorbar_handler.background_color} " "has no effect. 
Pass `draw_frame=True` if you wish to preserve background color.", stacklevel=4, ) self.axes.set_frame_on(draw_frame) self.axes.get_xaxis().set_visible(choice) self.axes.get_yaxis().set_visible(choice) self._reset_layout() def _toggle_colorbar(self, choice: bool): """ Turn on/off displaying the colorbar for a plot choice = True or False """ self.colorbar_handler.draw_cbar = choice self.cax.set_visible(choice) size, axrect, caxrect = self._get_best_layout() self.axes.set_position(axrect) self.cax.set_position(caxrect) self.figure.set_size_inches(*size) def _get_labels(self): labels = super()._get_labels() if getattr(self.cb, "orientation", "vertical") == "horizontal": cbaxis = self.cb.ax.xaxis else: cbaxis = self.cb.ax.yaxis labels += cbaxis.get_ticklabels() labels += [cbaxis.label, cbaxis.get_offset_text()] return labels def hide_axes(self, *, draw_frame=None): """ Hide the axes for a plot including ticks and labels """ self._toggle_axes(False, draw_frame) return self def show_axes(self): """ Show the axes for a plot including ticks and labels """ self._toggle_axes(True) return self def hide_colorbar(self): """ Hide the colorbar for a plot including ticks and labels """ self._toggle_colorbar(False) return self def show_colorbar(self): """ Show the colorbar for a plot including ticks and labels """ self._toggle_colorbar(True) return self def get_multi_plot(nx, ny, colorbar="vertical", bw=4, dpi=300, cbar_padding=0.4): r"""Construct a multiple axes plot object, with or without a colorbar, into which multiple plots may be inserted. This will create a set of :class:`matplotlib.axes.Axes`, all lined up into a grid, which are then returned to the user and which can be used to plot multiple plots on a single figure. Parameters ---------- nx : int Number of axes to create along the x-direction ny : int Number of axes to create along the y-direction colorbar : {'vertical', 'horizontal', None}, optional Should Axes objects for colorbars be allocated, and if so, should they correspond to the horizontal or vertical set of axes? bw : number The base height/width of an axes object inside the figure, in inches dpi : number The dots per inch fed into the Figure instantiation Returns ------- fig : :class:`matplotlib.figure.Figure` The figure created inside which the axes reside tr : list of list of :class:`matplotlib.axes.Axes` objects This is a list, where the inner list is along the x-axis and the outer is along the y-axis cbars : list of :class:`matplotlib.axes.Axes` objects Each of these is an axes onto which a colorbar can be placed. Notes ----- This is a simple implementation for a common use case. Viewing the source can be instructive, and is encouraged to see how to generate more complicated or more specific sets of multiplots for your own purposes. 
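    Examples
    --------
    A minimal sketch (the image data is random, purely for illustration):

    >>> import numpy as np
    >>> fig, axes, cbars = get_multi_plot(2, 2, colorbar="vertical", bw=4)
    >>> for row in axes:
    ...     for ax in row:
    ...         _ = ax.imshow(np.random.random((16, 16)))
    >>> fig.savefig("multi_panel.png")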
""" import matplotlib.figure from matplotlib.backends.backend_agg import FigureCanvasAgg hf, wf = 1.0 / ny, 1.0 / nx fudge_x = fudge_y = 1.0 if colorbar is None: fudge_x = fudge_y = 1.0 elif colorbar.lower() == "vertical": fudge_x = nx / (cbar_padding + nx) fudge_y = 1.0 elif colorbar.lower() == "horizontal": fudge_x = 1.0 fudge_y = ny / (cbar_padding + ny) fig = matplotlib.figure.Figure((bw * nx / fudge_x, bw * ny / fudge_y), dpi=dpi) fig.set_canvas(FigureCanvasAgg(fig)) fig.subplots_adjust( wspace=0.0, hspace=0.0, top=1.0, bottom=0.0, left=0.0, right=1.0 ) tr = [] for j in range(ny): tr.append([]) for i in range(nx): left = i * wf * fudge_x bottom = fudge_y * (1.0 - (j + 1) * hf) + (1.0 - fudge_y) ax = fig.add_axes([left, bottom, wf * fudge_x, hf * fudge_y]) tr[-1].append(ax) cbars = [] if colorbar is None: pass elif colorbar.lower() == "horizontal": for i in range(nx): # left, bottom, width, height # Here we want 0.10 on each side of the colorbar # We want it to be 0.05 tall # And we want a buffer of 0.15 ax = fig.add_axes( [ wf * (i + 0.10) * fudge_x, hf * fudge_y * 0.20, wf * (1 - 0.20) * fudge_x, hf * fudge_y * 0.05, ] ) cbars.append(ax) elif colorbar.lower() == "vertical": for j in range(ny): ax = fig.add_axes( [ wf * (nx + 0.05) * fudge_x, hf * fudge_y * (ny - (j + 0.95)), wf * fudge_x * 0.05, hf * fudge_y * 0.90, ] ) ax.clear() cbars.append(ax) return fig, tr, cbars ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/color_maps.py0000644000175100001770000003147214714401662017641 0ustar00runnerdockerimport cmyt # noqa: F401 import matplotlib as mpl import numpy as np from matplotlib.colors import LinearSegmentedColormap from yt.funcs import get_brewer_cmap from yt.utilities.logger import ytLogger as mylog from . import _colormap_data as _cm yt_colormaps = {} def add_colormap(name, cdict): """ Adds a colormap to the colormaps available in yt for this session """ # Note: this function modifies the global variable 'yt_colormaps' yt_colormaps[name] = LinearSegmentedColormap(name, cdict, 256) mpl.colormaps.register(yt_colormaps[name]) # YTEP-0040 backward compatibility layer # yt colormaps used to be defined here, but were migrated to an external # package, cmyt. In the process, 5 of them were renamed. We register them again here # under their historical names to preserves backwards compatibility. _HISTORICAL_ALIASES = { "arbre": "cmyt.arbre", "algae": "cmyt.algae", "bds_highcontrast": "cmyt.algae", "octarine": "cmyt.octarine", "dusk": "cmyt.dusk", "kamae": "cmyt.pastel", "kelp": "cmyt.kelp", "black_blueish": "cmyt.pixel_blue", "black_green": "cmyt.pixel_green", "purple_mm": "cmyt.xray", } def register_yt_colormaps_from_cmyt(): """ For backwards compatibility, register yt colormaps without the "cmyt." prefix, but do it in a collision-safe way. 
""" for hist_name, alias in _HISTORICAL_ALIASES.items(): # note that mpl.colormaps.__getitem__ returns *copies* cmap = mpl.colormaps[alias] cmap.name = hist_name mpl.colormaps.register(cmap) mpl.colormaps.register(cmap.reversed()) register_yt_colormaps_from_cmyt() # Add colormaps in _colormap_data.py that weren't defined here _vs = np.linspace(0, 1, 256) for k, v in list(_cm.color_map_luts.items()): if k in yt_colormaps: continue cdict = { "red": np.transpose([_vs, v[0], v[0]]), "green": np.transpose([_vs, v[1], v[1]]), "blue": np.transpose([_vs, v[2], v[2]]), } try: add_colormap(k, cdict) except ValueError: # expected if another map with identical name was already registered mylog.warning("cannot register colormap '%s' (naming collision)", k) def get_colormap_lut(cmap_id: tuple[str, str] | str): # "lut" stands for "lookup table". This function provides a consistent and # reusable accessor to a hidden (and by default, uninitialized) attribute # (`_lut`) in registered colormaps, from matplotlib or palettable. # colormap "lookup tables" are RGBA arrays in matplotlib, # and contain sufficient data to reconstruct the colormaps entirely. # This exists mostly for historical reasons, hence the custom output format. # It isn't meant as part of yt's public api. if isinstance(cmap_id, tuple) and len(cmap_id) == 2: cmap = get_brewer_cmap(cmap_id) elif isinstance(cmap_id, str): cmap = mpl.colormaps[cmap_id] else: raise TypeError( "Expected a string or a 2-tuple of strings as a colormap id. " f"Received: {cmap_id}" ) if not cmap._isinit: cmap._init() r = cmap._lut[:-3, 0] g = cmap._lut[:-3, 1] b = cmap._lut[:-3, 2] a = np.ones(b.shape) return [r, g, b, a] def show_colormaps(subset="all", filename=None): """ Displays the colormaps available to yt. Note, most functions can use both the matplotlib and the native yt colormaps; however, there are some special functions existing within image_writer.py (e.g. write_image() write_bitmap(), etc.), which cannot access the matplotlib colormaps. In addition to the colormaps listed, one can access the reverse of each colormap by appending a "_r" to any map. If you wish to only see certain colormaps, include them in the cmap_list attribute. Parameters ---------- subset : string, or list of strings, optional valid values : "all", "yt_native", or list of cmap names default : "all" As mentioned above, a few functions can only access yt_native colormaps. To display only the yt_native colormaps, set this to "yt_native". If you wish to only see a few colormaps side by side, you can include them as a list of colormap names. Example: ['cmyt.algae', 'gist_stern', 'cmyt.kamae', 'nipy_spectral'] filename : string, opt default: None If filename is set, then it will save the colormaps to an output file. If it is not set, it will "show" the result interactively. """ from matplotlib import pyplot as plt a = np.outer(np.arange(0, 1, 0.01), np.ones(10)) if subset == "all": maps = [ m for m in plt.colormaps() if (not m.startswith("idl")) & (not m.endswith("_r")) ] elif subset == "yt_native": maps = [ m for m in _cm.color_map_luts if (not m.startswith("idl")) & (not m.endswith("_r")) ] else: try: maps = [m for m in plt.colormaps() if m in subset] if len(maps) == 0: raise AttributeError except AttributeError as e: raise AttributeError( "show_colormaps requires subset attribute " "to be 'all', 'yt_native', or a list of " "valid colormap names." 
) from e maps = sorted(set(maps)) # scale the image size by the number of cmaps plt.figure(figsize=(2.0 * len(maps) / 10.0, 6)) plt.subplots_adjust(top=0.7, bottom=0.05, left=0.01, right=0.99) l = len(maps) + 1 for i, m in enumerate(maps): plt.subplot(1, l, i + 1) plt.axis("off") plt.imshow(a, aspect="auto", cmap=mpl.colormaps[m], origin="lower") plt.title(m, rotation=90, fontsize=10, verticalalignment="bottom") if filename is not None: plt.savefig(filename, dpi=100, facecolor="gray") else: plt.show() def make_colormap(ctuple_list, name=None, interpolate=True): """ This generates a custom colormap based on the colors and spacings you provide. Enter a ctuple_list, which consists of tuples of (color, spacing) to return a colormap appropriate for use in yt. If you specify a name, it will automatically be added to the current session as a valid colormap. Output colormap is in the format yt expects for adding a colormap to the current session: a dictionary with the appropriate RGB channels each consisting of a 256x3 array : First number is the number at which we are defining a color breakpoint Second number is the (0..1) number to interpolate to when coming *from below* Third number is the (0..1) number to interpolate to when coming *from above* Parameters ---------- ctuple_list: list of (color, float) tuples The ctuple_list consists of pairs of (color, interval) tuples identifying the colors to use in the colormap and the intervals they take to change to the next color in the list. A color can either be a string of the name of a color, or it can be an array of 3 floats, each representing the intensity of R, G, and B on a scale of 0 to 1. Valid color names and their equivalent arrays are listed below. Any interval can be given for the different color tuples, and the total of all the intervals will be scaled to the 256 output elements. If a ctuple_list ends with a color and a non-zero interval, a white 0-interval would be added to the end to finish the interpolation. To avoid finishing with white, specify your own zero-interval color at the end. name: string, optional If you wish this colormap to be added as a valid colormap to the current session, specify a name here. Default: None interpolate: boolean Designates whether or not the colormap will interpolate between the colors provided or just give solid colors across the intervals. Default: True Preset Color Options -------------------- 'white' : np.array([255, 255, 255 ])/255. 'gray' : np.array([130, 130, 130])/255. 'dgray' : np.array([80, 80, 80])/255. 'black' : np.array([0, 0, 0])/255. 'blue' : np.array([0, 0, 255])/255. 'dblue' : np.array([0, 0, 160])/255. 'purple' : np.array([100, 0, 200])/255. 'dpurple' : np.array([66, 0, 133])/255. 'dred' : np.array([160, 0, 0])/255. 'red' : np.array([255, 0, 0])/255. 'orange' : np.array([255, 128, 0])/255. 'dorange' : np.array([200,100, 0])/255. 'yellow' : np.array([255, 255, 0])/255. 'dyellow' : np.array([200, 200, 0])/255. 'green' : np.array([0, 255, 0])/255. 'dgreen' : np.array([0, 160, 0])/255. Examples -------- To obtain a colormap that starts at black with equal intervals in green, blue, red, yellow in that order and interpolation between those colors. (In reality, it starts at black, takes an interval of 10 to interpolate to green, then an interval of 10 to interpolate to blue, then an interval of 10 to interpolate to red.) 
>>> cm = make_colormap([("black", 10), ("green", 10), ("blue", 10), ("red", 0)]) To add a colormap that has five equal blocks of solid major colors to the current session as "steps": >>> make_colormap( ... [("red", 10), ("orange", 10), ("yellow", 10), ("green", 10), ("blue", 10)], ... name="steps", ... interpolate=False, ... ) To add a colormap that looks like the French flag (i.e. equal bands of blue, white, and red) using your own RGB keys, then to display it: >>> make_colormap( ... [([0, 0, 1], 10), ([1, 1, 1], 10), ([1, 0, 0], 10)], ... name="french_flag", ... interpolate=False, ... ) >>> show_colormaps(["french_flag"]) """ # aliases for different colors color_dict = { "white": np.array([255, 255, 255]) / 255.0, "gray": np.array([130, 130, 130]) / 255.0, "dgray": np.array([80, 80, 80]) / 255.0, "black": np.array([0, 0, 0]) / 255.0, "blue": np.array([0, 0, 255]) / 255.0, "dblue": np.array([0, 0, 160]) / 255.0, "purple": np.array([100, 0, 200]) / 255.0, "dpurple": np.array([66, 0, 133]) / 255.0, "dred": np.array([160, 0, 0]) / 255.0, "red": np.array([255, 0, 0]) / 255.0, "orange": np.array([255, 128, 0]) / 255.0, "dorange": np.array([200, 100, 0]) / 255.0, "yellow": np.array([255, 255, 0]) / 255.0, "dyellow": np.array([200, 200, 0]) / 255.0, "green": np.array([0, 255, 0]) / 255.0, "dgreen": np.array([0, 160, 0]) / 255.0, } cmap = np.zeros((256, 3)) # If the user provides a list with a non-zero final interval, it # doesn't make sense because you have an interval but no final # color to which it interpolates. So provide a 0-length white final # interval to end the previous interval in white. if ctuple_list[-1][1] != 0: ctuple_list.append(("white", 0)) # Figure out how many intervals there are total. rolling_index = 0 for i, (color, interval) in enumerate(ctuple_list): if isinstance(color, str): ctuple_list[i] = (color_dict[color], interval) rolling_index += interval scale = 256.0 / rolling_index n = len(ctuple_list) # Step through each ctuple and interpolate from one color to the # next over the interval provided rolling_index = 0 for i in range(n - 1): color, interval = ctuple_list[i] interval *= scale next_index = rolling_index + interval next_color, next_interval = ctuple_list[i + 1] if not interpolate: next_color = color # Interpolate the R, G, and B channels from one color to the next # Use np.round to make sure you're on a discrete index interval = int(np.round(next_index) - np.round(rolling_index)) for j in np.arange(3): cmap[int(np.rint(rolling_index)) : int(np.rint(next_index)), j] = ( np.linspace(color[j], next_color[j], num=interval) ) rolling_index = next_index # Return a dictionary with the appropriate RGB channels each consisting of # a 256x3 array in the format that is expected by add_colormap() to add a # colormap to the session. 
# The format is as follows: # First number is the number at which we are defining a color breakpoint # Second number is the (0..1) number to interpolate to when coming *from below* # Third number is the (0..1) number to interpolate to when coming *from above* _vs = np.linspace(0, 1, 256) cdict = { "red": np.transpose([_vs, cmap[:, 0], cmap[:, 0]]), "green": np.transpose([_vs, cmap[:, 1], cmap[:, 1]]), "blue": np.transpose([_vs, cmap[:, 2], cmap[:, 2]]), } if name is not None: add_colormap(name, cdict) return cdict ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/eps_writer.py0000644000175100001770000015707314714401662017674 0ustar00runnerdockerimport os import numpy as np import pyx from matplotlib import pyplot as plt from matplotlib.colors import LogNorm, Normalize from yt.config import ytcfg from yt.units.unit_object import Unit # type: ignore from yt.units.yt_array import YTQuantity from yt.utilities.logger import ytLogger as mylog from .plot_window import PlotWindow from .profile_plotter import PhasePlot, ProfilePlot def convert_frac_to_tex(string): frac_pos = string.find(r"\frac") result = string[frac_pos + 5 :] level = [0] * len(result) clevel = 0 for i in range(len(result)): if result[i] == "{": clevel += 1 elif result[i] == "}": clevel -= 1 level[i] = clevel div_pos = level.index(0) end_pos = level.index(0, div_pos + 1) result = ( r"${" + result[: div_pos + 1] + r"\over" + result[div_pos + 1 : end_pos] + r"}$" + result[end_pos:] ) result = result.replace(r"\ ", r"\;") return result def pyxize_label(string): frac_pos = string.find(r"\frac") if frac_pos >= 0: pre = string[:frac_pos] result = pre + convert_frac_to_tex(string) else: result = string result = result.replace("$", "") result = r"$" + result + r"$" return result class DualEPS: def __init__(self, figsize=(12, 12)): r"""Initializes the DualEPS class to which we can progressively add layers of vector graphics and compressed bitmaps. Parameters ---------- figsize : tuple of floats The width and height of a single figure in centimeters. """ pyx.unit.set(xscale=1.4) self.figsize = figsize self.canvas = None self.colormaps = None self.field = None self.axes_drawn = False def hello_world(self): r"""A simple test.""" if self.canvas is None: self.canvas = pyx.canvas.canvas() p = pyx.path.line(0, 0, 1, 1) self.canvas.stroke(p) self.canvas.text(0, 0, "Hello world.") # ============================================================================= def return_field(self, plot): if isinstance(plot, (PlotWindow, PhasePlot)): return list(plot.plots.keys())[0] else: return None # ============================================================================= def axis_box( self, xrange=(0, 1), yrange=(0, 1), xlabel="", ylabel="", xlog=False, ylog=False, xdata=None, ydata=None, tickcolor=None, bare_axes=False, pos=(0, 0), xaxis_side=0, yaxis_side=0, size=None, ): r"""Draws an axis box in the figure. Parameters ---------- xrange : tuple of floats The min and max of the x-axis yrange : tuple of floats The min and max of the y-axis xlabel : string Label for the x-axis ylabel : string Label for the y-axis xlog : boolean Flag to use a logarithmic x-axis ylog : boolean Flag to use a logarithmic y-axis tickcolor : `pyx.color.*.*` Color for the tickmarks. Example: pyx.color.cmyk.black bare_axes : boolean Set to true to have no annotations or tick marks on all of the axes. 
pos : tuple of floats (x,y) position in centimeters of the origin in the figure xaxis_side : integer Set to 0 for the x-axis annotations to be on the left. Set to 1 to print them on the right side. yaxis_side : integer Set to 0 for the y-axis annotations to be on the bottom. Set to 1 to print them on the top. size : tuple of floats Size of axis box in units of figsize Examples -------- >>> d = DualEPS() >>> d.axis_box(xrange=(0, 100), yrange=(1e-3, 1), ylog=True) >>> d.save_fig() """ if isinstance(xrange[0], YTQuantity): xrange = (xrange[0].value, xrange[1].value) if isinstance(yrange[0], YTQuantity): yrange = (yrange[0].value, yrange[1].value) if tickcolor is None: c1 = pyx.graph.axis.painter.regular(tickattrs=[pyx.color.cmyk.black]) c2 = pyx.graph.axis.painter.regular( tickattrs=[pyx.color.cmyk.black], labelattrs=None ) else: c1 = pyx.graph.axis.painter.regular(tickattrs=[tickcolor]) c2 = pyx.graph.axis.painter.regular(tickattrs=[tickcolor], labelattrs=None) if size is None: psize = self.figsize else: psize = (size[0] * self.figsize[0], size[1] * self.figsize[1]) xticklabels = True yticklabels = True if xaxis_side == 0: xleftlabel = xlabel xrightlabel = "" c1x = c1 c2x = c2 elif xaxis_side == 1: xleftlabel = "" xrightlabel = xlabel c1x = c2 c2x = c1 else: xticklabels = False xleftlabel = "" xrightlabel = "" c1x = c1 c2x = c2 if yaxis_side == 0: yleftlabel = ylabel yrightlabel = "" c1y = c1 c2y = c2 elif yaxis_side == 1: yleftlabel = "" yrightlabel = ylabel c1y = c2 c2y = c1 else: yticklabels = False yleftlabel = "" yrightlabel = "" c1y = c1 c2y = c2 if xlog: if xticklabels: xaxis = pyx.graph.axis.log( min=xrange[0], max=xrange[1], title=xleftlabel, painter=c1x ) xaxis2 = pyx.graph.axis.log( min=xrange[0], max=xrange[1], title=xrightlabel, painter=c2x ) else: xaxis = pyx.graph.axis.log( min=xrange[0], max=xrange[1], title=xleftlabel, painter=c1x, parter=None, ) xaxis2 = pyx.graph.axis.log( min=xrange[0], max=xrange[1], title=xrightlabel, painter=c2x, parter=None, ) else: if xticklabels: xaxis = pyx.graph.axis.lin( min=xrange[0], max=xrange[1], title=xleftlabel, painter=c1x ) xaxis2 = pyx.graph.axis.lin( min=xrange[0], max=xrange[1], title=xrightlabel, painter=c2x ) else: xaxis = pyx.graph.axis.lin( min=xrange[0], max=xrange[1], title=xleftlabel, painter=c1x, parter=None, ) xaxis2 = pyx.graph.axis.lin( min=xrange[0], max=xrange[1], title=xrightlabel, painter=c2x, parter=None, ) if ylog: if yticklabels: yaxis = pyx.graph.axis.log( min=yrange[0], max=yrange[1], title=yleftlabel, painter=c1y ) yaxis2 = pyx.graph.axis.log( min=yrange[0], max=yrange[1], title=yrightlabel, painter=c2y ) else: yaxis = pyx.graph.axis.log( min=yrange[0], max=yrange[1], title=yleftlabel, painter=c1y, parter=None, ) yaxis2 = pyx.graph.axis.log( min=yrange[0], max=yrange[1], title=yrightlabel, painter=c2y, parter=None, ) else: if yticklabels: yaxis = pyx.graph.axis.lin( min=yrange[0], max=yrange[1], title=yleftlabel, painter=c1y ) yaxis2 = pyx.graph.axis.lin( min=yrange[0], max=yrange[1], title=yrightlabel, painter=c2y ) else: yaxis = pyx.graph.axis.lin( min=yrange[0], max=yrange[1], title=yleftlabel, painter=c1y, parter=None, ) yaxis2 = pyx.graph.axis.lin( min=yrange[0], max=yrange[1], title=yrightlabel, painter=c2y, parter=None, ) if bare_axes: if ylog: yaxis = pyx.graph.axis.log( min=yrange[0], max=yrange[1], title=yleftlabel, parter=None ) yaxis2 = pyx.graph.axis.log( min=yrange[0], max=yrange[1], title=yrightlabel, parter=None ) else: yaxis = pyx.graph.axis.lin( min=yrange[0], max=yrange[1], title=yleftlabel, 
parter=None ) yaxis2 = pyx.graph.axis.lin( min=yrange[0], max=yrange[1], title=yrightlabel, parter=None ) if xlog: xaxis = pyx.graph.axis.log( min=xrange[0], max=xrange[1], title=xleftlabel, parter=None ) xaxis2 = pyx.graph.axis.log( min=xrange[0], max=xrange[1], title=xrightlabel, parter=None ) else: xaxis = pyx.graph.axis.lin( min=xrange[0], max=xrange[1], title=xleftlabel, parter=None ) xaxis2 = pyx.graph.axis.lin( min=xrange[0], max=xrange[1], title=xrightlabel, parter=None ) blank_data = pyx.graph.data.points([(-1e20, -1e20), (-1e19, -1e19)], x=1, y=2) if self.canvas is None: self.canvas = pyx.graph.graphxy( width=psize[0], height=psize[1], x=xaxis, y=yaxis, x2=xaxis2, y2=yaxis2, xpos=pos[0], ypos=pos[1], ) if xdata is None: self.canvas.plot(blank_data) else: data = pyx.graph.data.points(np.array([xdata, ydata]).T, x=1, y=2) self.canvas.plot( data, [pyx.graph.style.line([pyx.style.linewidth.Thick])] ) else: plot = pyx.graph.graphxy( width=psize[0], height=psize[1], x=xaxis, y=yaxis, x2=xaxis2, y2=yaxis2, xpos=pos[0], ypos=pos[1], ) if xdata is None: plot.plot(blank_data) else: data = pyx.graph.data.points(np.array([xdata, ydata]).T, x=1, y=2) plot.plot(data, [pyx.graph.style.line([pyx.style.linewidth.Thick])]) self.canvas.insert(plot) self.axes_drawn = True # ============================================================================= def axis_box_yt( self, plot, units=None, bare_axes=False, tickcolor=None, xlabel=None, ylabel=None, **kwargs, ): r"""Wrapper around DualEPS.axis_box to automatically fill in the axis ranges and labels from a yt plot. This also accepts any parameters that DualEPS.axis_box takes. Parameters ---------- plot : `yt.visualization.plot_window.PlotWindow` yt plot on which the axes are based. units : string Unit description that overrides yt's unit description. Only affects the axis label. bare_axes : boolean Set to true to have no annotations or tick marks on all of the axes. 
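        xlabel : string, optional
            If given, overrides the x-axis label deduced from the plot.
        ylabel : string, optional
            If given, overrides the y-axis label deduced from the plot.
        tickcolor : `pyx.color.*.*`, optional
            Color for the tick marks; when None, a default appropriate to
            the plot type is chosen.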
Examples -------- >>> p = SlicePlot(ds, 0, "density") >>> d = DualEPS() >>> d.axis_box_yt(p) >>> d.save_fig() """ if isinstance(plot, (PlotWindow, PhasePlot)): plot.refresh() if isinstance(plot, PlotWindow): data = plot.frb width = plot.width[0] if units is None: units = plot.ds.get_smallest_appropriate_unit(width) width = width.in_units(str(units)) xc = 0.5 * (plot.xlim[0] + plot.xlim[1]) yc = 0.5 * (plot.ylim[0] + plot.ylim[1]) _xrange = [(plot.xlim[i] - xc).in_units(units) for i in (0, 1)] _yrange = [(plot.ylim[i] - yc).in_units(units) for i in (0, 1)] _xlog = False _ylog = False if bare_axes: _xlabel = "" _ylabel = "" else: if xlabel is not None: _xlabel = xlabel else: if data.axis != 4: xi = plot.ds.coordinates.x_axis[data.axis] x_name = plot.ds.coordinates.axis_name[xi] _xlabel = f"{x_name} ({units})" else: _xlabel = f"x ({units})" if ylabel is not None: _ylabel = ylabel else: if data.axis != 4: yi = plot.ds.coordinates.y_axis[data.axis] y_name = plot.ds.coordinates.axis_name[yi] _ylabel = f"{y_name} ({units})" else: _ylabel = f"y ({units})" if tickcolor is None: _tickcolor = pyx.color.cmyk.white elif isinstance(plot, ProfilePlot): subplot = plot.axes.values()[0] # limits for axes xlimits = subplot.get_xlim() _xrange = ( YTQuantity(xlimits[0], "m"), YTQuantity(xlimits[1], "m"), ) # unit hardcoded but afaik it is not used anywhere so it doesn't matter if list(plot.axes.ylim.values())[0][0] is None: ylimits = subplot.get_ylim() else: ylimits = list(plot.axes.ylim.values())[0] _yrange = ( YTQuantity(ylimits[0], "m"), YTQuantity(ylimits[1], "m"), ) # unit hardcoded but afaik it is not used anywhere so it doesn't matter # axis labels xaxis = subplot.xaxis _xlabel = pyxize_label(xaxis.label.get_text()) yaxis = subplot.yaxis _ylabel = pyxize_label(yaxis.label.get_text()) # set log if necessary if subplot.get_xscale() == "log": _xlog = True else: _xlog = False if subplot.get_yscale() == "log": _ylog = True else: _ylog = False _tickcolor = None elif isinstance(plot, PhasePlot): k = list(plot.plots.keys())[0] _xrange = plot[k].axes.get_xlim() _yrange = plot[k].axes.get_ylim() _xlog = plot.profile.x_log _ylog = plot.profile.y_log if bare_axes: _xlabel = "" _ylabel = "" else: if xlabel is not None: _xlabel = xlabel else: _xlabel = plot[k].axes.get_xlabel() if ylabel is not None: _ylabel = ylabel else: _ylabel = plot[k].axes.get_ylabel() _xlabel = pyxize_label(_xlabel) _ylabel = pyxize_label(_ylabel) if tickcolor is None: _tickcolor = None elif isinstance(plot, np.ndarray): ax = plt.gca() _xrange = ax.get_xlim() _yrange = ax.get_ylim() _xlog = False _ylog = False if bare_axes: _xlabel = "" _ylabel = "" else: if xlabel is not None: _xlabel = xlabel else: _xlabel = ax.get_xlabel() if ylabel is not None: _ylabel = ylabel else: _ylabel = ax.get_ylabel() if tickcolor is None: _tickcolor = None else: _xrange = plot._axes.get_xlim() _yrange = plot._axes.get_ylim() _xlog = plot._log_x _ylog = plot._log_y if bare_axes: _xlabel = "" _ylabel = "" else: if xlabel is not None: _xlabel = xlabel else: _xlabel = plot._x_label if ylabel is not None: _ylabel = ylabel else: _ylabel = plot._y_label if tickcolor is None: _tickcolor = None if tickcolor is not None: _tickcolor = tickcolor self.axis_box( xrange=_xrange, yrange=_yrange, xlabel=_xlabel, ylabel=_ylabel, tickcolor=_tickcolor, xlog=_xlog, ylog=_ylog, bare_axes=bare_axes, **kwargs, ) # ============================================================================= def insert_image(self, filename, pos=(0, 0), size=None): r"""Inserts a JPEG file in the figure. 
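        The file is read with ``pyx.bitmap.jpegimage``, so the input must
        be a JPEG.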
Parameters ---------- filename : string Name of the JPEG file pos : tuple of floats Position of the origin of the image in centimeters size : tuple of floats Size of image in units of figsize Examples -------- >>> d = DualEPS() >>> d.axis_box(xrange=(0, 100), yrange=(1e-3, 1), ylog=True) >>> d.insert_image("image.jpg") >>> d.save_fig() """ if size is not None: width = size[0] * self.figsize[0] height = size[1] * self.figsize[1] else: width = self.figsize[0] height = self.figsize[1] image = pyx.bitmap.jpegimage(filename) if self.canvas is None: self.canvas = pyx.canvas.canvas() self.canvas.insert( pyx.bitmap.bitmap( pos[0], pos[1], image, compressmode=None, width=width, height=height ) ) # ============================================================================= def insert_image_yt(self, plot, field=None, pos=(0, 0), scale=1.0): r"""Inserts a bitmap taken from a yt plot. Parameters ---------- plot : `yt.visualization.plot_window.PlotWindow` yt plot that provides the image pos : tuple of floats Position of the origin of the image in centimeters. Examples -------- >>> p = SlicePlot(ds, 0, "density") >>> d = DualEPS() >>> d.axis_box_yt(p) >>> d.insert_image_yt(p) >>> d.save_fig() Notes ----- For best results, set use_colorbar=False when creating the yt image. """ from matplotlib.backends.backend_agg import FigureCanvasAgg # We need to remove the colorbar (if necessary), remove the # axes, and resize the figure to span the entire figure force_square = False if self.canvas is None: self.canvas = pyx.canvas.canvas() if isinstance(plot, (PlotWindow, PhasePlot)): if field is None: self.field = list(plot.plots.keys())[0] mylog.warning( "No field specified. Choosing first field (%s)", self.field ) else: self.field = plot.data_source._determine_fields(field)[0] if self.field not in plot.plots.keys(): raise RuntimeError(f"Field '{self.field}' does not exist!") if isinstance(plot, PlotWindow): plot.hide_colorbar() plot.hide_axes() else: plot.plots[self.field]._toggle_axes(False) plot.plots[self.field]._toggle_colorbar(False) plot.refresh() _p1 = plot.plots[self.field].figure force_square = True elif isinstance(plot, ProfilePlot): _p1 = plot.figures.items()[0][1] elif isinstance(plot, np.ndarray): plt.figure() iplot = plt.figimage(plot) _p1 = iplot.figure _p1.set_size_inches(self.figsize[0], self.figsize[1]) ax = plt.gca() _p1.add_axes(ax) else: raise RuntimeError("Unknown plot type") _p1.axes[0].set_position([0, 0, 1, 1]) # rescale figure _p1.set_facecolor("w") # set background color figure_canvas = FigureCanvasAgg(_p1) figure_canvas.draw() size = (_p1.get_size_inches() * _p1.dpi).astype("int64") # Account for non-square images after removing the colorbar. scale *= 1.0 - 1.0 / (_p1.dpi * self.figsize[0]) if force_square: yscale = scale * float(size[1]) / float(size[0]) else: yscale = scale image = pyx.bitmap.image(size[0], size[1], "RGB", figure_canvas.tostring_rgb()) self.canvas.insert( pyx.bitmap.bitmap( pos[0], pos[1], image, width=scale * self.figsize[0], height=yscale * self.figsize[1], ) ) # ============================================================================= def colorbar( self, name, zrange=(0, 1), label="", log=False, tickcolor=None, orientation="right", pos=None, shrink=1.0, ): r"""Places a colorbar adjacent to the current figure.
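        The colorbar is drawn onto the existing canvas, so an axis box or
        image must have been inserted first (e.g. via ``axis_box`` or
        ``insert_image``).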
Parameters ---------- name : string name of the (matplotlib) colormap to use zrange : tuple of floats min and max of the colorbar's range label : string colorbar label log : boolean Flag to use a logarithmic scale tickcolor : `pyx.color.*.*` Color for the tickmarks. Example: pyx.color.cmyk.black orientation : string Placement of the colorbar. Can be "left", "right", "top", or "bottom". pos : list of floats (x,y) position of the origin of the colorbar in centimeters. shrink : float Factor to shrink the colorbar's size. A value of 1 means the colorbar will have a height / width of the figure. Examples -------- >>> d = DualEPS() >>> d.axis_box(xrange=(0, 100), yrange=(1e-3, 1), ylog=True) >>> d.insert_image("image.jpg") >>> d.colorbar( ... "hot", xrange=(1e-2, 1e-4), log=True, label="Density [cm$^{-3}$]" ... ) >>> d.save_fig() """ if pos is None: pos = [0, 0] if orientation == "right": origin = (pos[0] + self.figsize[0] + 0.5, pos[1]) size = (0.1 * self.figsize[0], self.figsize[1]) imsize = (1, 256) elif orientation == "left": origin = (pos[0] - 0.5 - 0.1 * self.figsize[0], pos[1]) size = (0.1 * self.figsize[0], self.figsize[1]) imsize = (1, 256) elif orientation == "top": origin = (pos[0], pos[1] + self.figsize[1] + 0.5) imorigin = (pos[0] + self.figsize[0], pos[1] + self.figsize[1] + 0.5) size = (self.figsize[0], 0.1 * self.figsize[1]) imsize = (256, 1) elif orientation == "bottom": origin = (pos[0], pos[1] - 0.5 - 0.1 * self.figsize[1]) imorigin = (pos[0] + self.figsize[0], pos[1] - 0.5 - 0.1 * self.figsize[1]) size = (self.figsize[0], 0.1 * self.figsize[1]) imsize = (256, 1) else: raise RuntimeError(f"orientation {orientation} unknown") # If shrink is a scalar, then convert into tuple if not isinstance(shrink, (tuple, list)): shrink = (shrink, shrink) # Scale the colorbar shift = (0.5 * (1.0 - shrink[0]) * size[0], 0.5 * (1.0 - shrink[1]) * size[1]) # To facilitate stretching rather than shrinking # If stretched in both directions (makes no sense?) then y dominates. 
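        # Illustrative (hypothetical) values: shrink=(1.2, 1.0) takes the
        # first branch below and uses a small fixed horizontal offset;
        # shrink=(1.2, 1.2) triggers both branches, so the second assignment
        # wins and the vertical offset dominates, as noted above.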
if shrink[0] > 1.0: shift = (0.05 * self.figsize[0], 0.5 * (1.0 - shrink[1]) * size[1]) if shrink[1] > 1.0: shift = (0.5 * (1.0 - shrink[0]) * size[0], 0.05 * self.figsize[1]) size = (size[0] * shrink[0], size[1] * shrink[1]) origin = (origin[0] + shift[0], origin[1] + shift[1]) # Convert the colormap into a string x = np.linspace(1, 0, 256) cm_string = plt.get_cmap(name)(x, bytes=True)[:, 0:3].tobytes() cmap_im = pyx.bitmap.image(imsize[0], imsize[1], "RGB", cm_string) if orientation == "top" or orientation == "bottom": imorigin = (imorigin[0] - shift[0], imorigin[1] + shift[1]) self.canvas.insert( pyx.bitmap.bitmap( imorigin[0], imorigin[1], cmap_im, width=-size[0], height=size[1] ) ) else: self.canvas.insert( pyx.bitmap.bitmap( origin[0], origin[1], cmap_im, width=size[0], height=size[1] ) ) if tickcolor is None: c1 = pyx.graph.axis.painter.regular(tickattrs=[pyx.color.cmyk.black]) pyx.graph.axis.painter.regular( tickattrs=[pyx.color.cmyk.black], labelattrs=None ) else: c1 = pyx.graph.axis.painter.regular(tickattrs=[tickcolor]) pyx.graph.axis.painter.regular(tickattrs=[tickcolor], labelattrs=None) if log: yaxis = pyx.graph.axis.log( min=zrange[0], max=zrange[1], title=label, painter=c1 ) yaxis2 = pyx.graph.axis.log(min=zrange[0], max=zrange[1], parter=None) else: yaxis = pyx.graph.axis.lin( min=zrange[0], max=zrange[1], title=label, painter=c1 ) yaxis2 = pyx.graph.axis.lin(min=zrange[0], max=zrange[1], parter=None) xaxis = pyx.graph.axis.lin(parter=None) if orientation == "right": _colorbar = pyx.graph.graphxy( width=size[0], height=size[1], xpos=origin[0], ypos=origin[1], x=xaxis, y=yaxis2, y2=yaxis, ) elif orientation == "left": _colorbar = pyx.graph.graphxy( width=size[0], height=size[1], xpos=origin[0], ypos=origin[1], x=xaxis, y2=yaxis2, y=yaxis, ) elif orientation == "top": _colorbar = pyx.graph.graphxy( width=size[0], height=size[1], xpos=origin[0], ypos=origin[1], y=xaxis, x=yaxis2, x2=yaxis, ) elif orientation == "bottom": _colorbar = pyx.graph.graphxy( width=size[0], height=size[1], xpos=origin[0], ypos=origin[1], y=xaxis, x2=yaxis2, x=yaxis, ) blank_data = pyx.graph.data.points([(-1e10, -1e10), (-9e10, -9e10)], x=1, y=2) _colorbar.plot(blank_data) self.canvas.insert(_colorbar) # ============================================================================= def colorbar_yt(self, plot, field=None, cb_labels=None, **kwargs): r"""Wrapper around DualEPS.colorbar to take information from a yt plot. Accepts all parameters that DualEPS.colorbar takes. Parameters ---------- plot : A yt plot yt plot from which the information is taken. cb_labels : list of labels for the colorbars. List should be the same size as the number of colorbars used. Should be passed into this function by either the singleplot or multiplot api. 
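        field : field name, optional
            The field whose colormap, label, and data range are used. If
            None, the field stored by an earlier ``insert_image_yt`` call
            on this DualEPS instance is reused.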
Examples -------- >>> p = SlicePlot(ds, 0, "density") >>> p.hide_colorbar() >>> d = DualEPS() >>> d.axis_box_yt(p) >>> d.insert_image_yt(p) >>> d.colorbar_yt(p) >>> d.save_fig() """ if isinstance(plot, ProfilePlot): raise RuntimeError( "When using ProfilePlots you must either set yt_nocbar=True or provide " "colorbar flags so that the profiles don't have colorbars" ) _cmap = None if field is not None: self.field = plot.data_source._determine_fields(field)[0] if isinstance(plot, (PlotWindow, PhasePlot)): _cmap = plot[self.field].colorbar_handler.cmap else: if plot.cmap is not None: _cmap = plot.cmap.name if _cmap is None: _cmap = ytcfg.get("yt", "default_colormap") if isinstance(plot, (PlotWindow, PhasePlot)): if isinstance(plot, PlotWindow): try: _zlabel = plot.frb[self.field].info["label"] _unit = Unit( plot.frb[self.field].units, registry=plot.ds.unit_registry ) units = _unit.latex_representation() # PyX does not support \frac because it's based on TeX. units = pyxize_label(units) _zlabel += r" (" + units + r")" except NotImplementedError: print("Colorbar label not available") _zlabel = "" else: _, _, z_title = plot._get_field_title(self.field, plot.profile) _zlabel = pyxize_label(z_title) _zlabel = _zlabel.replace("_", r"\;") _p = plot.plots[self.field] _norm = _p.norm_handler.get_norm(plot.frb[self.field]) norm_type = type(_norm) if norm_type is LogNorm: _zlog = True elif norm_type is Normalize: # linear scaling _zlog = False else: raise RuntimeError( "eps_writer is not compatible with scalings other than linear and log, " f"received {norm_type}" ) _zrange = (_norm.vmin, _norm.vmax) else: _zlabel = plot._z_label.replace("_", r"\;") _zlog = plot._log_z _zrange = (plot.norm.vmin, plot.norm.vmax) if cb_labels is not None: # Overrides deduced labels _zlabel = cb_labels.pop() self.colorbar(_cmap, zrange=_zrange, label=_zlabel, log=_zlog, **kwargs) # ============================================================================= def circle( self, radius=0.2, loc=(0.5, 0.5), color=pyx.color.cmyk.white, linewidth=pyx.style.linewidth.normal, ): r"""Draws a circle in the current figure. Parameters ---------- radius : float Radius of the circle in units of figsize loc : tuple of floats Location of the circle's center in units of figsize color : `pyx.color.*.*` Color of the circle stroke. Example: pyx.color.cmyk.white linewidth : `pyx.style.linewidth.*` Width of the circle stroke width. Example: pyx.style.linewidth.normal Examples -------- >>> d = DualEPS() >>> d.axis_box(xrange=(0, 100), yrange=(1e-3, 1), ylog=True) >>> d.insert_image("image.jpg") >>> d.circle(radius=0.1, color=pyx.color.cmyk.Red) >>> d.save_fig() """ circle = pyx.path.circle( self.figsize[0] * loc[0], self.figsize[1] * loc[1], self.figsize[0] * radius ) self.canvas.stroke(circle, [color, linewidth]) # ============================================================================= def arrow( self, size=0.2, label="", loc=(0.05, 0.08), labelloc="top", color=pyx.color.cmyk.white, linewidth=pyx.style.linewidth.normal, rotation=0.0, ): r"""Draws an arrow in the current figure Parameters ---------- size : float Length of arrow (base to tip) in units of the figure size. label : string Annotation label of the arrow. loc : tuple of floats Location of the left hand side of the arrow in units of the figure size. rotation : float Orientation angle of the arrow in units of degrees labelloc : string Location of the label with respect to the line. Can be "top" or "bottom" color : `pyx.color.*.*` Color of the arrow. 
Example: pyx.color.cmyk.white linewidth : `pyx.style.linewidth.*` Width of the arrow. Example: pyx.style.linewidth.normal Examples -------- >>> d = DualEPS() >>> d.axis_box(xrange=(0, 100), yrange=(1e-3, 1), ylog=True) >>> d.insert_image("arrow_image.jpg") >>> d.arrow(size=0.2, label="Black Hole!", loc=(0.05, 0.1)) >>> d.save_fig() """ line = pyx.path.line( self.figsize[0] * loc[0], self.figsize[1] * loc[1], self.figsize[0] * (loc[0] + size * np.cos(np.pi * rotation / 180)), self.figsize[1] * (loc[1] + size * np.sin(np.pi * rotation / 180)), ) self.canvas.stroke(line, [linewidth, color, pyx.deco.earrow()]) if labelloc == "bottom": yoff = -0.1 * size valign = pyx.text.valign.top else: yoff = +0.1 * size valign = pyx.text.valign.bottom if label != "": self.canvas.text( self.figsize[0] * (loc[0] + 0.5 * size), self.figsize[1] * (loc[1] + yoff), label, [color, valign, pyx.text.halign.center], ) # ============================================================================= def scale_line( self, size=0.2, label="", loc=(0.05, 0.08), labelloc="top", color=pyx.color.cmyk.white, linewidth=pyx.style.linewidth.normal, ): r"""Draws a scale line in the current figure. Parameters ---------- size : float Length of the scale line in units of the figure size. label : string Annotation label of the scale line. loc : tuple of floats Location of the left hand side of the scale line in units of the figure size. labelloc : string Location of the label with respect to the line. Can be "top" or "bottom" color : `pyx.color.*.*` Color of the scale line. Example: pyx.color.cmyk.white linewidth : `pyx.style.linewidth.*` Width of the scale line. Example: pyx.style.linewidth.normal Examples -------- >>> d = DualEPS() >>> d.axis_box(xrange=(0, 100), yrange=(1e-3, 1), ylog=True) >>> d.insert_image("image.jpg") >>> d.scale_line(size=0.2, label="1 kpc", loc=(0.05, 0.1)) >>> d.save_fig() """ line = pyx.path.line( self.figsize[0] * loc[0], self.figsize[1] * loc[1], self.figsize[0] * (loc[0] + size), self.figsize[1] * loc[1], ) self.canvas.stroke(line, [linewidth, color]) line = pyx.path.line( self.figsize[0] * loc[0], self.figsize[1] * (loc[1] - 0.1 * size), self.figsize[0] * loc[0], self.figsize[1] * (loc[1] + 0.1 * size), ) self.canvas.stroke(line, [linewidth, color]) line = pyx.path.line( self.figsize[0] * (loc[0] + size), self.figsize[1] * (loc[1] - 0.1 * size), self.figsize[0] * (loc[0] + size), self.figsize[1] * (loc[1] + 0.1 * size), ) self.canvas.stroke(line, [linewidth, color]) if labelloc == "bottom": yoff = -0.1 * size valign = pyx.text.valign.top else: yoff = +0.1 * size valign = pyx.text.valign.bottom if label != "": self.canvas.text( self.figsize[0] * (loc[0] + 0.5 * size), self.figsize[1] * (loc[1] + yoff), label, [color, valign, pyx.text.halign.center], ) # ============================================================================= def title_box( self, text, color=pyx.color.cmyk.black, bgcolor=pyx.color.cmyk.white, loc=(0.02, 0.98), halign=pyx.text.halign.left, valign=pyx.text.valign.top, text_opts=None, ): r"""Inserts a box with text in the current figure. Parameters ---------- text : string String to insert in the textbox. color : `pyx.color.*.*` Color of the text. Example: pyx.color.cmyk.black bgcolor : `pyx.color.*.*` Color of the textbox background. Example: pyx.color.cmyk.white loc : tuple of floats Location of the textbox origin in units of the figure size. halign : `pyx.text.halign.*` Horizontal alignment of the text.
Example: pyx.text.halign.left valign : `pyx.text.valign.*` Vertical alignment of the text. Example: pyx.text.valign.top Examples -------- >>> d = DualEPS() >>> d.axis_box(xrange=(0, 100), yrange=(1e-3, 1), ylog=True) >>> d.insert_image("image.jpg") >>> d.title_box("Halo 1", loc=(0.05, 0.95)) >>> d.save_fig() """ if text_opts is None: text_opts = [] tbox = self.canvas.text( self.figsize[0] * loc[0], self.figsize[1] * loc[1], text, [color, valign, halign] + text_opts, ) if bgcolor is not None: tpath = tbox.bbox().enlarged(2 * pyx.unit.x_pt).path() self.canvas.draw(tpath, [pyx.deco.filled([bgcolor]), pyx.deco.stroked()]) self.canvas.insert(tbox) # ============================================================================= def save_fig(self, filename="test", format="eps", resolution=250): r"""Saves current figure to a file. Parameters ---------- filename : string Name of the saved file without the extension. format : string Format type. Can be "eps" or "pdf" Examples -------- >>> d = DualEPS() >>> d.axis_box(xrange=(0, 100), yrange=(1e-3, 1), ylog=True) """ filename = os.path.expanduser(filename) if format == "eps": self.canvas.writeEPSfile(filename) elif format == "pdf": self.canvas.writePDFfile(filename) elif format == "png": self.canvas.writeGSfile(filename + ".png", "png16m", resolution=resolution) elif format == "jpg": self.canvas.writeGSfile(filename + ".jpeg", "jpeg", resolution=resolution) else: raise RuntimeError(f"format {format} unknown.") # ============================================================================= # ============================================================================= # ============================================================================= def multiplot( ncol, nrow, yt_plots=None, fields=None, images=None, xranges=None, yranges=None, xlabels=None, ylabels=None, xdata=None, ydata=None, colorbars=None, shrink_cb=0.95, figsize=(8, 8), margins=(0, 0), titles=None, savefig=None, format="eps", yt_nocbar=False, bare_axes=False, xaxis_flags=None, yaxis_flags=None, cb_flags=None, cb_location=None, cb_labels=None, ): r"""Convenience routine to create a multi-panel figure from yt plots or JPEGs. The images are first placed from the origin, and then bottom-to-top and left-to-right. Parameters ---------- ncol : integer Number of columns in the figure. nrow : integer Number of rows in the figure. yt_plots : list of yt plot instances yt plots to include in the figure. images : list of strings JPEG filenames to include in the figure. xranges : list of tuples The min and max of the x-axes yranges : list of tuples The min and max of the y-axes xlabels : list of strings Labels for the x-axes ylabels : list of strings Labels for the y-axes colorbars : list of dicts Dicts that describe the type of colorbar to be used in each figure. Use the function return_colormap() to create these dicts. shrink_cb : float Factor by which the colorbar is shrunk. figsize : tuple of floats The width and height of a single figure in centimeters. margins : tuple of floats The horizontal and vertical margins between panels in centimeters. titles : list of strings Titles that are placed in textboxes in each panel. savefig : string Name of the saved file without the extension. format : string File format of the figure. eps or pdf accepted. yt_nocbar : boolean Flag to indicate whether or not colorbars are created. bare_axes : boolean Set to true to have no annotations or tick marks on all of the axes. cb_flags : list of booleans Flags for each plot to have a colorbar or not. 
cb_location : list of strings Strings to control the location of the colorbar (left, right, top, bottom) cb_labels : list of labels for the colorbars. List should be the same size as the number of colorbars used. Examples -------- >>> images = ["density.jpg", "hi_density.jpg", "entropy.jpg", "special.jpg"] >>> cbs = [] >>> cbs.append(return_colormap("cmyt.arbre", "Density [cm$^{-3}$]", (0, 10), False)) >>> cbs.append(return_colormap("cmyt.kelp", "HI Density", (0, 5), False)) >>> cbs.append(return_colormap("hot", r"Entropy [K cm$^2$]", (1e-2, 1e6), True)) >>> cbs.append(return_colormap("Spectral", "Stuff$_x$!", (1, 300), True)) >>> mp = multiplot( ... 2, ... 2, ... images=images, ... margins=(0.1, 0.1), ... titles=["1", "2", "3", "4"], ... xlabels=["one", "two"], ... ylabels=None, ... colorbars=cbs, ... shrink_cb=0.95, ... ) >>> mp.scale_line(label="$r_{vir}$", labelloc="top") >>> mp.save_fig("multiplot") Notes ----- If given both yt_plots and images, this will get preference to the yt plots. """ # Error check npanels = ncol * nrow if cb_labels is not None: cb_labels.reverse() # Because I pop the list if images is not None: if len(images) != npanels: raise RuntimeError( "Number of images (%d) doesn't match nrow(%d)" " x ncol(%d)." % (len(images), nrow, ncol) ) if yt_plots is None and images is None: raise RuntimeError("Must supply either yt_plots or image filenames.") if yt_plots is not None and images is not None: mylog.warning("Given both images and yt plots. Ignoring images.") if yt_plots is not None: _yt = True else: _yt = False if fields is None: fields = [None] * npanels # If no ranges or labels given and given only images, fill them in. if not _yt: if xranges is None: xranges = [] for _ in range(npanels): xranges.append((0, 1)) if yranges is None: yranges = [] for _ in range(npanels): yranges.append((0, 1)) if xlabels is None: xlabels = [] for _ in range(npanels): xlabels.append("") if ylabels is None: ylabels = [] for _ in range(npanels): ylabels.append("") d = DualEPS(figsize=figsize) for j in range(nrow): invj = nrow - j - 1 ypos = invj * (figsize[1] + margins[1]) for i in range(ncol): xpos = i * (figsize[0] + margins[0]) index = j * ncol + i if isinstance(yt_plots, list): this_plot = yt_plots[index] else: this_plot = yt_plots if j == nrow - 1: xaxis = 0 elif j == 0: xaxis = 1 else: xaxis = -1 if i == 0: yaxis = 0 elif i == ncol - 1: yaxis = 1 else: yaxis = -1 if xdata is None: _xdata = None else: _xdata = xdata[index] if ydata is None: _ydata = None else: _ydata = ydata[index] if xaxis_flags is not None: if xaxis_flags[index] is not None: xaxis = xaxis_flags[index] if yaxis_flags is not None: if yaxis_flags[index] is not None: yaxis = yaxis_flags[index] if _yt: this_plot._setup_plots() if xlabels is not None: xlabel = xlabels[i] else: xlabel = None if ylabels is not None: ylabel = ylabels[j] else: ylabel = None d.insert_image_yt(this_plot, pos=(xpos, ypos), field=fields[index]) d.axis_box_yt( this_plot, pos=(xpos, ypos), bare_axes=bare_axes, xaxis_side=xaxis, yaxis_side=yaxis, xlabel=xlabel, ylabel=ylabel, xdata=_xdata, ydata=_ydata, ) else: d.insert_image(images[index], pos=(xpos, ypos)) d.axis_box( pos=(xpos, ypos), xrange=xranges[index], yrange=yranges[index], xlabel=xlabels[i], ylabel=ylabels[j], bare_axes=bare_axes, xaxis_side=xaxis, yaxis_side=yaxis, xdata=_xdata, ydata=_ydata, ) if titles is not None: if titles[index] is not None: d.title_box( titles[index], loc=( i + 0.05 + i * margins[0] / figsize[0], j + 0.98 + j * margins[1] / figsize[1], ), ) # Insert colorbars after 
all axes are placed because we want to # put them on the edges of the bounding box. bbox = ( 100.0 * d.canvas.bbox().left().t, 100.0 * d.canvas.bbox().right().t - d.figsize[0], 100.0 * d.canvas.bbox().bottom().t, 100.0 * d.canvas.bbox().top().t - d.figsize[1], ) for j in range(nrow): invj = nrow - j - 1 ypos0 = invj * (figsize[1] + margins[1]) for i in range(ncol): xpos0 = i * (figsize[0] + margins[0]) index = j * ncol + i if isinstance(yt_plots, list): this_plot = yt_plots[index] else: this_plot = yt_plots if (not _yt and colorbars is not None) or (_yt and not yt_nocbar): if cb_flags is not None: if not cb_flags[index]: continue if cb_location is None: if ncol == 1: orientation = "right" elif i == 0: orientation = "left" elif i + 1 == ncol: orientation = "right" elif j == 0: orientation = "bottom" elif j + 1 == nrow: orientation = "top" else: orientation = None # Marker for interior plot else: if isinstance(cb_location, dict): if fields[index] not in cb_location.keys(): raise RuntimeError( f"{fields[index]} not found in cb_location dict" ) orientation = cb_location[fields[index]] elif isinstance(cb_location, list): orientation = cb_location[index] else: raise RuntimeError("Bad format: cb_location") if orientation == "right": xpos = bbox[1] ypos = ypos0 elif orientation == "left": xpos = bbox[0] ypos = ypos0 elif orientation == "bottom": ypos = bbox[2] xpos = xpos0 elif orientation == "top": ypos = bbox[3] xpos = xpos0 else: mylog.warning( "Unknown colorbar location %s. No colorbar displayed.", orientation, ) orientation = None # Marker for interior plot if orientation is not None: if _yt: # Set field if undefined if fields[index] is None: fields[index] = d.return_field(yt_plots[index]) d.colorbar_yt( this_plot, field=fields[index], pos=[xpos, ypos], shrink=shrink_cb, orientation=orientation, cb_labels=cb_labels, ) else: d.colorbar( colorbars[index]["cmap"], zrange=colorbars[index]["range"], label=colorbars[index]["name"], log=colorbars[index]["log"], orientation=orientation, pos=[xpos, ypos], shrink=shrink_cb, ) if savefig is not None: d.save_fig(savefig, format=format) return d # ============================================================================= def multiplot_yt(ncol, nrow, plots, fields=None, **kwargs): r"""Wrapper for multiplot that takes a yt PlotWindow Accepts all parameters used in multiplot. Parameters ---------- ncol : integer Number of columns in the figure. nrow : integer Number of rows in the figure. plots : ``PlotWindow`` instance, ``PhasePlot`` instance, or list of plots yt plots to be used. Examples -------- >>> p1 = SlicePlot(ds, 0, "density") >>> p1.set_width(10, "kpc") >>> p2 = SlicePlot(ds, 0, "temperature") >>> p2.set_width(10, "kpc") >>> p2.set_colormap("temperature", "hot") >>> sph = ds.sphere(ds.domain_center, (10, "kpc")) >>> p3 = PhasePlot( ... sph, "radius", "density", "temperature", weight_field="cell_mass" ... ) >>> p4 = PhasePlot(sph, "radius", "density", "pressure", "cell_mass") >>> mp = multiplot_yt( ... 2, ... 2, ... [p1, p2, p3, p4], ... savefig="yt", ... shrink_cb=0.9, ... bare_axes=True, ... yt_nocbar=False, ... margins=(0.5, 0.5), ... ) """ # Determine whether the plots are organized in a PlotWindow, or list # of PlotWindows if isinstance(plots, (PlotWindow, PhasePlot)): if fields is None: fields = plots.fields if len(fields) < nrow * ncol: raise RuntimeError( f"Number of plots ({len(fields)}) is less " f"than nrow({nrow}) x ncol({ncol})." 
) figure = multiplot(ncol, nrow, yt_plots=plots, fields=fields, **kwargs) elif isinstance(plots, list) and isinstance(plots[0], (PlotWindow, PhasePlot)): if len(plots) < nrow * ncol: raise RuntimeError( f"Number of plots ({len(plots)}) is less " f"than nrow({nrow}) x ncol({ncol})." ) figure = multiplot(ncol, nrow, yt_plots=plots, fields=fields, **kwargs) else: raise RuntimeError("Unknown plot type in multiplot_yt") return figure # ============================================================================= def single_plot( plot, field=None, figsize=(12, 12), cb_orient="right", bare_axes=False, savefig=None, colorbar=True, file_format="eps", **kwargs, ): r"""Wrapper for DualEPS routines to create a figure directly from a yt plot. Calls insert_image_yt, axis_box_yt, and colorbar_yt. Parameters ---------- plot : `yt.visualization.plot_window.PlotWindow` yt plot that provides the image and metadata field : string, optional The name of the field to plot. If None, the plot's current field is used. figsize : tuple of floats Size of the figure in centimeters. cb_orient : string Placement of the colorbar. Can be "left", "right", "top", or "bottom". bare_axes : boolean Set to true to have no annotations or tick marks on all of the axes. savefig : string Name of the saved file without the extension. colorbar : boolean Set to true to include a colorbar. file_format : string Format type. Can be "eps" or "pdf". Examples -------- >>> p = SlicePlot(ds, 0, "density") >>> p.set_width(0.1, "kpc") >>> single_plot(p, savefig="figure1") """ d = DualEPS(figsize=figsize) d.insert_image_yt(plot, field=field) d.axis_box_yt(plot, bare_axes=bare_axes, **kwargs) if colorbar and not isinstance(plot, ProfilePlot): d.colorbar_yt(plot, orientation=cb_orient) if savefig is not None: d.save_fig(savefig, format=file_format) return d # ============================================================================= def return_colormap(cmap=None, label="", range=(0, 1), log=False): r"""Returns a dict that describes a colorbar. Exclusively for use with multiplot.
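The returned dict has the keys "cmap", "name", "range", and "log", which are exactly the entries that multiplot reads from each element of its *colorbars* argument. A brief sketch of that round trip (the colormap, label, range, and image filename here are only illustrative): >>> cb = return_colormap("hot", "Temperature [K]", (1e4, 1e7), True) >>> sorted(cb.keys()) ['cmap', 'log', 'name', 'range'] >>> mp = multiplot(1, 1, images=["slice.jpg"], colorbars=[cb])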
Parameters ---------- cmap : string Name of the (matplotlib) colormap to use. label : string Colorbar label. range : tuple of floats Min and max of the colorbar's range. log : boolean Flag to use a logarithmic scale. Examples -------- >>> cb = return_colormap("cmyt.arbre", "Density [cm$^{-3}$]", (0, 10), False) """ if cmap is None: cmap = ytcfg.get("yt", "default_colormap") return {"cmap": cmap, "name": label, "range": range, "log": log} ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/fits_image.py0000644000175100001770000020405214714401662017606 0ustar00runnerdockerimport re import sys from functools import partial from numbers import Number as numeric_type import numpy as np from more_itertools import first, mark_ends from yt._typing import FieldKey from yt.data_objects.construction_data_containers import YTCoveringGrid from yt.data_objects.image_array import ImageArray from yt.fields.derived_field import DerivedField from yt.funcs import fix_axis, is_sequence, iter_fields, mylog, validate_moment from yt.units import dimensions from yt.units.unit_object import Unit # type: ignore from yt.units.yt_array import YTArray, YTQuantity from yt.utilities.math_utils import compute_stddev_image from yt.utilities.on_demand_imports import _astropy from yt.utilities.parallel_tools.parallel_analysis_interface import parallel_root_only from yt.visualization.fixed_resolution import FixedResolutionBuffer, ParticleImageBuffer from yt.visualization.particle_plots import ( ParticleAxisAlignedDummyDataSource, ParticleDummyDataSource, ParticleOffAxisDummyDataSource, ) from yt.visualization.volume_rendering.off_axis_projection import off_axis_projection class UnitfulHDU: def __init__(self, hdu): self.hdu = hdu self.header = self.hdu.header self.name = self.header["BTYPE"] self.units = self.header["BUNIT"] self.shape = self.hdu.shape @property def data(self): return YTArray(self.hdu.data, self.units) def __repr__(self): im_shape = " x ".join(str(s) for s in self.shape) return f"FITSImage: {self.name} ({im_shape}, {self.units})" class FITSImageData: def __init__( self, data, fields=None, length_unit=None, width=None, img_ctr=None, wcs=None, current_time=None, time_unit=None, mass_unit=None, velocity_unit=None, magnetic_unit=None, ds=None, unit_header=None, **kwargs, ): r"""Initialize a FITSImageData object. FITSImageData contains a collection of FITS ImageHDU instances and WCS information, along with units for each of the images. FITSImageData instances can be constructed from ImageArrays, NumPy arrays, dicts of such arrays, FixedResolutionBuffers, and YTCoveringGrids. The latter two are the most powerful because WCS information can be constructed automatically from their coordinates. Parameters ---------- data : FixedResolutionBuffer, YTCoveringGrid, ImageArray, numpy.ndarray, or dict of such arrays The data to be made into a FITS image or images. fields : single string or list of strings, optional The field names for the data. If *fields* is None and *data* has keys, it will use these for the fields. If *data* is just a single array one field name must be specified. length_unit : string The units of the WCS coordinates and the length unit of the file. Defaults to the length unit of the dataset, if there is one, or "cm" if there is not. width : float or YTQuantity The width of the image. Either a single value or iterable of values. If a float, assumed to be in *length_unit*.
Only used if this information is not already provided by *data*. img_ctr : array_like or YTArray The center coordinates of the image. If a list or NumPy array, it is assumed to be in *length_unit*. This will overwrite any center coordinates potentially provided by *data*. Default in other cases is [0.0]*(number of dimensions). wcs : `~astropy.wcs.WCS` instance, optional Supply an AstroPy WCS instance. Will override automatic WCS creation from FixedResolutionBuffers and YTCoveringGrids. current_time : float, tuple, or YTQuantity, optional The current time of the image(s). If not specified, one will be set from the dataset if there is one. If a float, it will be assumed to be in *time_unit* units. time_unit : string The default time units of the file. Defaults to "s". mass_unit : string The default mass units of the file. Defaults to "g". velocity_unit : string The default velocity units of the file. Defaults to "cm/s". magnetic_unit : string The default magnetic units of the file. Defaults to "gauss". ds : `~yt.data_objects.static_output.Dataset` instance, optional The dataset associated with the image(s), typically used to transfer metadata to the header(s). Does not need to be specified if *data* has a dataset as an attribute. Examples -------- >>> # This example uses a FRB. >>> ds = load("sloshing_nomag2_hdf5_plt_cnt_0150") >>> prj = ds.proj("kT", 2, weight_field=("gas", "density")) >>> frb = prj.to_frb((0.5, "Mpc"), 800) >>> # This example just uses the FRB and puts the coords in kpc. >>> f_kpc = FITSImageData( ... frb, fields="kT", length_unit="kpc", time_unit=(1.0, "Gyr") ... ) >>> # This example specifies a specific WCS. >>> from astropy.wcs import WCS >>> w = WCS(naxis=2) >>> w.wcs.crval = [30.0, 45.0] # RA, Dec in degrees >>> w.wcs.cunit = ["deg"] * 2 >>> nx, ny = 800, 800 >>> w.wcs.crpix = [0.5 * (nx + 1), 0.5 * (ny + 1)] >>> w.wcs.ctype = ["RA---TAN", "DEC--TAN"] >>> scale = 1.0 / 3600.0 # One arcsec per pixel >>> w.wcs.cdelt = [-scale, scale] >>> f_deg = FITSImageData(frb, fields="kT", wcs=w) >>> f_deg.writeto("temp.fits") """ if fields is not None: fields = list(iter_fields(fields)) if ds is None: ds = getattr(data, "ds", None) self.fields = [] self.field_units = {} if unit_header is None: self._set_units( ds, [length_unit, mass_unit, time_unit, velocity_unit, magnetic_unit] ) else: self._set_units_from_header(unit_header) wcs_unit = str(self.length_unit.units) self._fix_current_time(ds, current_time) if width is None: width = 1.0 if isinstance(width, tuple): if ds is None: width = YTQuantity(width[0], width[1]) else: width = ds.quan(width[0], width[1]) exclude_fields = [ "x", "y", "z", "px", "py", "pz", "pdx", "pdy", "pdz", "weight_field", ] if isinstance(data, _astropy.pyfits.PrimaryHDU): data = _astropy.pyfits.HDUList([data]) if isinstance(data, _astropy.pyfits.HDUList): self.hdulist = data for hdu in data: self.fields.append(hdu.header["btype"]) self.field_units[hdu.header["btype"]] = hdu.header["bunit"] self.shape = self.hdulist[0].shape self.dimensionality = len(self.shape) wcs_names = [key for key in self.hdulist[0].header if "WCSNAME" in key] for name in wcs_names: if name == "WCSNAME": key = " " else: key = name[-1] w = _astropy.pywcs.WCS( header=self.hdulist[0].header, key=key, naxis=self.dimensionality ) setattr(self, "wcs" + key.strip().lower(), w) return self.hdulist = _astropy.pyfits.HDUList() stddev = False if hasattr(data, "keys"): img_data = data if fields is None: fields = list(img_data.keys()) if hasattr(data, "data_source"): stddev = getattr(data.data_source,
"moment", 1) == 2 elif isinstance(data, np.ndarray): if fields is None: mylog.warning( "No field name given for this array. Calling it 'image_data'." ) fn = "image_data" fields = [fn] else: fn = fields[0] img_data = {fn: data} for fd in fields: if isinstance(fd, tuple): fname = fd[1] elif isinstance(fd, DerivedField): fname = fd.name[1] else: fname = fd if stddev: fname += "_stddev" self.fields.append(fname) # Sanity checking names s = set() duplicates = {f for f in self.fields if f in s or s.add(f)} if len(duplicates) > 0: for i, fd in enumerate(self.fields): if fd in duplicates: if isinstance(fields[i], tuple): ftype, fname = fields[i] elif isinstance(fields[i], DerivedField): ftype, fname = fields[i].name else: raise RuntimeError( f"Cannot distinguish between fields with same name {fd}!" ) self.fields[i] = f"{ftype}_{fname}" for is_first, _is_last, (i, (name, field)) in mark_ends( enumerate(zip(self.fields, fields, strict=True)) ): if name not in exclude_fields: this_img = img_data[field] if hasattr(img_data[field], "units"): has_code_unit = False for atom in this_img.units.expr.atoms(): if str(atom).startswith("code"): has_code_unit = True if has_code_unit: mylog.warning( "Cannot generate an image with code " "units. Converting to units in CGS." ) funits = this_img.units.get_base_equivalent("cgs") this_img.convert_to_units(funits) else: funits = this_img.units self.field_units[name] = str(funits) else: self.field_units[name] = "dimensionless" mylog.info("Making a FITS image of field %s", name) if isinstance(this_img, ImageArray): if i == 0: self.shape = this_img.shape[::-1] this_img = np.asarray(this_img) else: if i == 0: self.shape = this_img.shape this_img = np.asarray(this_img.T) if is_first: hdu = _astropy.pyfits.PrimaryHDU(this_img) else: hdu = _astropy.pyfits.ImageHDU(this_img) hdu.name = name hdu.header["btype"] = name hdu.header["bunit"] = re.sub("()", "", self.field_units[name]) for unit in ("length", "time", "mass", "velocity", "magnetic"): if unit == "magnetic": short_unit = "bf" else: short_unit = unit[0] key = f"{short_unit}unit" value = getattr(self, f"{unit}_unit") if value is not None: hdu.header[key] = float(value.value) hdu.header.comments[key] = f"[{value.units}]" hdu.header["time"] = float(self.current_time.value) if hasattr(self, "current_redshift"): hdu.header["HUBBLE"] = self.hubble_constant hdu.header["REDSHIFT"] = self.current_redshift self.hdulist.append(hdu) self.dimensionality = len(self.shape) if wcs is None: w = _astropy.pywcs.WCS( header=self.hdulist[0].header, naxis=self.dimensionality ) # FRBs and covering grids are special cases where # we have coordinate information, so we take advantage # of this and construct the WCS object if isinstance(img_data, FixedResolutionBuffer): dx = (img_data.bounds[1] - img_data.bounds[0]).to_value(wcs_unit) dy = (img_data.bounds[3] - img_data.bounds[2]).to_value(wcs_unit) dx /= self.shape[0] dy /= self.shape[1] if img_ctr is not None: xctr, yctr = img_ctr else: xctr = 0.5 * (img_data.bounds[1] + img_data.bounds[0]).to_value( wcs_unit ) yctr = 0.5 * (img_data.bounds[3] + img_data.bounds[2]).to_value( wcs_unit ) center = [xctr, yctr] cdelt = [dx, dy] elif isinstance(img_data, YTCoveringGrid): cdelt = img_data.dds.to_value(wcs_unit) if img_ctr is not None: center = img_ctr else: center = 0.5 * (img_data.left_edge + img_data.right_edge).to_value( wcs_unit ) else: # If img_data is just an array we use the width and img_ctr # parameters to determine the cell widths if img_ctr is None: img_ctr = np.zeros(3) if not 
is_sequence(width): width = [width] * self.dimensionality if isinstance(width[0], YTQuantity): cdelt = [ wh.to_value(wcs_unit) / n for wh, n in zip(width, self.shape, strict=True) ] else: cdelt = [ float(wh) / n for wh, n in zip(width, self.shape, strict=True) ] center = img_ctr[: self.dimensionality] w.wcs.crpix = 0.5 * (np.array(self.shape) + 1) w.wcs.crval = center w.wcs.cdelt = cdelt w.wcs.ctype = ["linear"] * self.dimensionality w.wcs.cunit = [wcs_unit] * self.dimensionality self.set_wcs(w) else: self.set_wcs(wcs) def _fix_current_time(self, ds, current_time): if ds is None: registry = None else: registry = ds.unit_registry tunit = Unit(self.time_unit, registry=registry) if current_time is None: if ds is not None: current_time = ds.current_time else: self.current_time = YTQuantity(0.0, "s") return elif isinstance(current_time, numeric_type): current_time = YTQuantity(current_time, tunit) elif isinstance(current_time, tuple): current_time = YTQuantity(current_time[0], current_time[1]) self.current_time = current_time.to(tunit) def _set_units(self, ds, base_units): if ds is not None: if getattr(ds, "cosmological_simulation", False): self.hubble_constant = ds.hubble_constant self.current_redshift = ds.current_redshift attrs = ( "length_unit", "mass_unit", "time_unit", "velocity_unit", "magnetic_unit", ) cgs_units = ("cm", "g", "s", "cm/s", "gauss") for unit, attr, cgs_unit in zip(base_units, attrs, cgs_units, strict=True): if unit is None: if ds is not None: u = getattr(ds, attr, None) elif attr == "velocity_unit": u = self.length_unit / self.time_unit elif attr == "magnetic_unit": u = np.sqrt( 4.0 * np.pi * self.mass_unit / (self.time_unit**2 * self.length_unit) ) else: u = cgs_unit else: u = unit if isinstance(u, str): uq = YTQuantity(1.0, u) elif isinstance(u, numeric_type): uq = YTQuantity(u, cgs_unit) elif isinstance(u, YTQuantity): uq = u.copy() elif isinstance(u, tuple): uq = YTQuantity(u[0], u[1]) else: uq = None if uq is not None: atoms = {str(a) for a in uq.units.expr.atoms()} if hasattr(self, "hubble_constant"): # Don't store cosmology units if str(uq.units).startswith("cm") or "h" in atoms or "a" in atoms: uq.convert_to_cgs() if any(a.startswith("code") for a in atoms): # Don't store code units mylog.warning( "Cannot use code units of '%s' " "when creating a FITSImageData instance! " "Converting to a cgs equivalent.", uq.units, ) uq.convert_to_cgs() if attr == "length_unit" and uq.value != 1.0: mylog.warning("Converting length units from %s to %s.", uq, uq.units) uq = YTQuantity(1.0, uq.units) setattr(self, attr, uq) def _set_units_from_header(self, header): if "hubble" in header: self.hubble_constant = header["HUBBLE"] self.current_redshift = header["REDSHIFT"] for unit in ["length", "time", "mass", "velocity", "magnetic"]: if unit == "magnetic": key = "BFUNIT" else: key = unit[0].upper() + "UNIT" if key not in header: continue u = header.comments[key].strip("[]") uq = YTQuantity(header[key], u) setattr(self, unit + "_unit", uq) def set_wcs(self, wcs, wcsname=None, suffix=None): """ Set the WCS coordinate information for all images with a WCS object *wcs*. """ if wcsname is None: wcs.wcs.name = "yt" else: wcs.wcs.name = wcsname h = wcs.to_header() if suffix is None: self.wcs = wcs else: setattr(self, "wcs" + suffix, wcs) for img in self.hdulist: for k, v in h.items(): kk = k if suffix is not None: kk += suffix img.header[kk] = v def change_image_name(self, old_name, new_name): """ Change the name of a FITS image. 
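Renaming updates the matching HDU's name and its "BTYPE" keyword, and re-keys the stored field units. A short sketch (assuming *fid* already contains an image named "density"; the names here are illustrative): >>> fid.change_image_name("density", "gas_density") >>> "gas_density" in fid.keys() True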
Parameters ---------- old_name : string The old name of the image. new_name : string The new name of the image. """ idx = self.fields.index(old_name) self.hdulist[idx].name = new_name self.hdulist[idx].header["BTYPE"] = new_name self.field_units[new_name] = self.field_units.pop(old_name) self.fields[idx] = new_name def convolve(self, field, kernel, **kwargs): """ Convolve an image with a kernel, either a simple Gaussian kernel or one provided by AstroPy. Currently, this only works for 2D images. All keyword arguments are passed to :meth:`~astropy.convolution.convolve`. Parameters ---------- field : string The name of the field to convolve. kernel : float, YTQuantity, (value, unit) tuple, or AstroPy Kernel object The kernel to convolve the image with. If this is an AstroPy Kernel object, the image will be convolved with it. Otherwise, it is assumed that the kernel is a Gaussian and that this value is the standard deviation. If a float, it is assumed that the units are pixels, but a (value, unit) tuple or YTQuantity can be supplied to specify the standard deviation in physical units. Examples -------- >>> fid = FITSSlice(ds, "z", ("gas", "density")) >>> fid.convolve("density", (3.0, "kpc")) """ if self.dimensionality == 3: raise RuntimeError("Convolution currently only works for 2D FITSImageData!") conv = _astropy.conv if field not in self.keys(): raise KeyError(f"{field} not an image!") idx = self.fields.index(field) if not isinstance(kernel, conv.Kernel): if not isinstance(kernel, numeric_type): unit = str(self.wcs.wcs.cunit[0]) pix_scale = YTQuantity(self.wcs.wcs.cdelt[0], unit) if isinstance(kernel, tuple): stddev = YTQuantity(kernel[0], kernel[1]).to(unit) else: stddev = kernel.to(unit) kernel = stddev / pix_scale kernel = conv.Gaussian2DKernel(x_stddev=kernel) self.hdulist[idx].data = conv.convolve(self.hdulist[idx].data, kernel, **kwargs) def update_header(self, field, key, value): """ Update the FITS header for *field* with a *key*, *value* pair. If *field* == "all", all headers will be updated. """ if field == "all": for img in self.hdulist: img.header[key] = value else: if field not in self.keys(): raise KeyError(f"{field} not an image!") idx = self.fields.index(field) self.hdulist[idx].header[key] = value def keys(self): return self.fields def has_key(self, key): return key in self.fields def values(self): return [self[k] for k in self.fields] def items(self): return [(k, self[k]) for k in self.fields] def __getitem__(self, item): return UnitfulHDU(self.hdulist[item]) def __repr__(self): return str([self[k] for k in self.keys()]) def info(self, output=None): """ Summarize the info of the HDUs in this `FITSImageData` instance. By default this function prints its results to the console and does not return a value; if *output* is set to False, it instead returns a list of tuples representing the FITSImageData info. Parameters ---------- output : file, boolean, optional A file-like object to write the output to. If `False`, does not output to a file and instead returns a list of tuples representing the FITSImageData info. Writes to ``sys.stdout`` by default. """ hinfo = self.hdulist.info(output=False) num_cols = len(hinfo[0]) if output is None: output = sys.stdout if num_cols == 8: header = "No. Name Ver Type Cards Dimensions Format Units" format = "{:3d} {:10} {:3} {:11} {:5d} {} {} {}" else: header = ( "No.
Name Type Cards Dimensions Format Units" ) format = "{:3d} {:10} {:11} {:5d} {} {} {}" if self.hdulist._file is None: name = "(No file associated with this FITSImageData)" else: name = self.hdulist._file.name results = [f"Filename: {name}", header] for line in hinfo: units = self.field_units[self.hdulist[line[0]].header["btype"]] summary = tuple(list(line[:-1]) + [units]) if output: results.append(format.format(*summary)) else: results.append(summary) if output: output.write("\n".join(results)) output.write("\n") output.flush() else: return results[2:] @parallel_root_only def writeto(self, fileobj, fields=None, overwrite=False, **kwargs): r""" Write all of the fields or a subset of them to a FITS file. Parameters ---------- fileobj : string The name of the file to write to. fields : list of strings, optional The fields to write to the file. If not specified all of the fields in the buffer will be written. overwrite : boolean Whether or not to overwrite a previously existing file. Default: False **kwargs Additional keyword arguments are passed to :meth:`~astropy.io.fits.HDUList.writeto`. """ if fields is None: hdus = self.hdulist else: hdus = _astropy.pyfits.HDUList() for field in fields: hdus.append(self.hdulist[field]) hdus.writeto(fileobj, overwrite=overwrite, **kwargs) def to_glue(self, label="yt", data_collection=None): """ Takes the data in the FITSImageData instance and exports it to Glue (http://glueviz.org) for interactive analysis. Optionally add a *label*. If you are already within the Glue environment, you can pass a *data_collection* object, otherwise Glue will be started. """ from glue.core import Data, DataCollection from glue.core.coordinates import coordinates_from_header try: from glue.app.qt.application import GlueApplication except ImportError: from glue.qt.glue_application import GlueApplication image = Data(label=label) image.coords = coordinates_from_header(self.wcs.to_header()) for k in self.fields: image.add_component(self[k].data, k) if data_collection is None: dc = DataCollection([image]) app = GlueApplication(dc) app.start() else: data_collection.append(image) def to_aplpy(self, **kwargs): """ Use APLpy (http://aplpy.github.io) for plotting. Returns an `aplpy.FITSFigure` instance. All keyword arguments are passed to the `aplpy.FITSFigure` constructor. """ import aplpy return aplpy.FITSFigure(self.hdulist, **kwargs) def get_data(self, field): """ Return the data array of the image corresponding to *field* with units attached. Deprecated. """ return self[field].data def set_unit(self, field, units): """ Set the units of *field* to *units*. """ if field not in self.keys(): raise KeyError(f"{field} not an image!") idx = self.fields.index(field) new_data = YTArray(self.hdulist[idx].data, self.field_units[field]).to(units) self.hdulist[idx].data = new_data.v self.hdulist[idx].header["bunit"] = units self.field_units[field] = units def pop(self, key): """ Remove a field with name *key* and return it as a new FITSImageData instance. """ if key not in self.keys(): raise KeyError(f"{key} not an image!") idx = self.fields.index(key) im = self.hdulist.pop(idx) self.field_units.pop(key) self.fields.remove(key) f = _astropy.pyfits.PrimaryHDU(im.data, header=im.header) return FITSImageData(f, current_time=f.header["TIME"], unit_header=f.header) def close(self): self.hdulist.close() @classmethod def from_file(cls, filename): """ Generate a FITSImageData instance from one previously written to disk. Parameters ---------- filename : string The name of the file to open. 
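Examples -------- A short sketch of the round trip with writeto (the filename is illustrative): >>> fid.writeto("sloshing.fits", overwrite=True) >>> fid2 = FITSImageData.from_file("sloshing.fits")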
""" f = _astropy.pyfits.open(filename, lazy_load_hdus=False) return cls(f, current_time=f[0].header["TIME"], unit_header=f[0].header) @classmethod def from_images(cls, image_list): """ Generate a new FITSImageData instance from a list of FITSImageData instances. Parameters ---------- image_list : list of FITSImageData instances The images to be combined. """ image_list = image_list if isinstance(image_list, list) else [image_list] first_image = first(image_list) w = first_image.wcs img_shape = first_image.shape data = [] for fid in image_list: assert_same_wcs(w, fid.wcs) if img_shape != fid.shape: raise RuntimeError("Images do not have the same shape!") for hdu in fid.hdulist: if len(data) == 0: data.append(_astropy.pyfits.PrimaryHDU(hdu.data, header=hdu.header)) else: data.append(_astropy.pyfits.ImageHDU(hdu.data, header=hdu.header)) data = _astropy.pyfits.HDUList(data) return cls( data, current_time=first_image.current_time, unit_header=first_image[0].header, ) def create_sky_wcs( self, sky_center, sky_scale, ctype=None, crota=None, cd=None, pc=None, wcsname="celestial", replace_old_wcs=True, ): """ Takes a Cartesian WCS and converts it to one in a sky-based coordinate system. Parameters ---------- sky_center : iterable of floats Reference coordinates of the WCS in degrees. sky_scale : tuple or YTQuantity Conversion between an angle unit and a length unit, e.g. (3.0, "arcsec/kpc") ctype : list of strings, optional The type of the coordinate system to create. Default: A "tangential" projection. crota : 2-element ndarray, optional Rotation angles between cartesian coordinates and the celestial coordinates. cd : 2x2-element ndarray, optional Dimensioned coordinate transformation matrix. pc : 2x2-element ndarray, optional Coordinate transformation matrix. wcsname : string, optional The name of the WCS to be stored in the FITS header. replace_old_wcs : boolean, optional If True, the original WCS will be overwritten but first copied to a second WCS ("WCSAXESA"). If False, this new WCS will be placed into the second WCS. """ if ctype is None: ctype = ["RA---TAN", "DEC--TAN"] old_wcs = self.wcs naxis = old_wcs.naxis crval = [sky_center[0], sky_center[1]] if isinstance(sky_scale, YTQuantity): scaleq = sky_scale else: scaleq = YTQuantity(sky_scale[0], sky_scale[1]) if scaleq.units.dimensions != dimensions.angle / dimensions.length: raise RuntimeError( f"sky_scale {sky_scale} not in correct dimensions of angle/length!" 
) deltas = old_wcs.wcs.cdelt units = [str(unit) for unit in old_wcs.wcs.cunit] new_dx = (YTQuantity(-deltas[0], units[0]) * scaleq).in_units("deg") new_dy = (YTQuantity(deltas[1], units[1]) * scaleq).in_units("deg") new_wcs = _astropy.pywcs.WCS(naxis=naxis) cdelt = [new_dx.v, new_dy.v] cunit = ["deg"] * 2 if naxis == 3: crval.append(old_wcs.wcs.crval[2]) cdelt.append(old_wcs.wcs.cdelt[2]) ctype.append(old_wcs.wcs.ctype[2]) cunit.append(old_wcs.wcs.cunit[2]) new_wcs.wcs.crpix = old_wcs.wcs.crpix new_wcs.wcs.cdelt = cdelt new_wcs.wcs.crval = crval new_wcs.wcs.cunit = cunit new_wcs.wcs.ctype = ctype if crota is not None: new_wcs.wcs.crota = crota if cd is not None: new_wcs.wcs.cd = cd if pc is not None: new_wcs.wcs.pc = pc if replace_old_wcs: self.set_wcs(new_wcs, wcsname=wcsname) self.set_wcs(old_wcs, wcsname="yt", suffix="a") else: self.set_wcs(new_wcs, wcsname=wcsname, suffix="a") class FITSImageBuffer(FITSImageData): pass def sanitize_fits_unit(unit): if unit == "Mpc": mylog.info("Changing FITS file length unit to kpc.") unit = "kpc" elif unit == "au": unit = "AU" return unit # This list allows one to determine which axes are the # correct axes of the image in a right-handed coordinate # system depending on which axis is sliced or projected axis_wcs = [[1, 2], [2, 0], [0, 1]] def construct_image( ds, axis, data_source, center, image_res, width, length_unit, origin="domain" ): if width is None: width = ds.domain_width[axis_wcs[axis]] unit = ds.get_smallest_appropriate_unit(width[0]) mylog.info( "Making an image of the entire domain, " "so setting the center to the domain center." ) else: width = ds.coordinates.sanitize_width(axis, width, None) unit = str(width[0].units) if is_sequence(image_res): nx, ny = image_res else: nx, ny = image_res, image_res dx = width[0] / nx dy = width[1] / ny crpix = [0.5 * (nx + 1), 0.5 * (ny + 1)] if unit == "unitary": unit = ds.get_smallest_appropriate_unit(ds.domain_width.max()) elif unit == "code_length": unit = ds.get_smallest_appropriate_unit(ds.quan(1.0, "code_length")) unit = sanitize_fits_unit(unit) if length_unit is None: length_unit = unit if any(char.isdigit() for char in length_unit) and "*" in length_unit: length_unit = length_unit.split("*")[-1] cunit = [length_unit] * 2 ctype = ["LINEAR"] * 2 cdelt = [dx.in_units(length_unit), dy.in_units(length_unit)] if origin == "domain": if is_sequence(axis): crval = center.in_units(length_unit) else: crval = [center[idx].in_units(length_unit) for idx in axis_wcs[axis]] elif origin == "image": crval = np.zeros(2) if hasattr(data_source, "to_frb"): if is_sequence(axis): frb = data_source.to_frb(width[0], (nx, ny), height=width[1]) else: frb = data_source.to_frb(width[0], (nx, ny), center=center, height=width[1]) elif isinstance(data_source, ParticleDummyDataSource): if hasattr(data_source, "normal_vector"): # If we have a normal vector, this means # that the data source is off-axis bounds = (-width[0] / 2, width[0] / 2, -width[1] / 2, width[1] / 2) periodic = False else: # Otherwise, this is an on-axis data source axes = axis_wcs[axis] bounds = ( center[axes[0]] - width[0] / 2, center[axes[0]] + width[0] / 2, center[axes[1]] - width[1] / 2, center[axes[1]] + width[1] / 2, ) periodic = all(ds.periodicity) frb = ParticleImageBuffer(data_source, bounds, (nx, ny), periodic=periodic) else: frb = None w = _astropy.pywcs.WCS(naxis=2) w.wcs.crpix = crpix w.wcs.cdelt = cdelt w.wcs.crval = crval w.wcs.cunit = cunit w.wcs.ctype = ctype return w, frb, length_unit def assert_same_wcs(wcs1, wcs2): from numpy.testing
import assert_allclose assert wcs1.naxis == wcs2.naxis for i in range(wcs1.naxis): assert wcs1.wcs.cunit[i] == wcs2.wcs.cunit[i] assert wcs1.wcs.ctype[i] == wcs2.wcs.ctype[i] assert_allclose(wcs1.wcs.crpix, wcs2.wcs.crpix) assert_allclose(wcs1.wcs.cdelt, wcs2.wcs.cdelt) assert_allclose(wcs1.wcs.crval, wcs2.wcs.crval) crota1 = getattr(wcs1.wcs, "crota", None) crota2 = getattr(wcs2.wcs, "crota", None) if crota1 is None or crota2 is None: assert crota1 == crota2 else: assert_allclose(wcs1.wcs.crota, wcs2.wcs.crota) cd1 = getattr(wcs1.wcs, "cd", None) cd2 = getattr(wcs2.wcs, "cd", None) if cd1 is None or cd2 is None: assert cd1 == cd2 else: assert_allclose(wcs1.wcs.cd, wcs2.wcs.cd) pc1 = getattr(wcs1.wcs, "pc", None) pc2 = getattr(wcs2.wcs, "pc", None) if pc1 is None or pc2 is None: assert pc1 == pc2 else: assert_allclose(wcs1.wcs.pc, wcs2.wcs.pc) class FITSSlice(FITSImageData): r""" Generate a FITSImageData of an on-axis slice. Parameters ---------- ds : :class:`~yt.data_objects.static_output.Dataset` The dataset object. axis : character or integer The axis of the slice. One of "x","y","z", or 0,1,2. fields : string or list of strings The fields to slice image_res : an int or 2-tuple of ints Specify the resolution of the resulting image. A single value will be used for both axes, whereas a tuple of values will be used for the individual axes. Default: 512 center : 'center', 'c', 'left', 'l', 'right', 'r', id of a global extremum, or array-like The coordinate of the selection's center. Defaults to the 'center', i.e. center of the domain. Centering on the min or max of a field is supported by passing a tuple such as ('min', ('gas', 'density')) or ('max', ('gas', 'temperature')). A single string may also be used (e.g. "min_density" or "max_temperature"), though it's not as flexible and does not allow to select an exact field/particle type. With this syntax, the first field matching the provided name is selected. 'max' or 'm' can be used as a shortcut for ('max', ('gas', 'density')) 'min' can be used as a shortcut for ('min', ('gas', 'density')) One can also select an exact point as a 3 element coordinate sequence, e.g. [0.5, 0.5, 0] Units can be specified by passing in *center* as a tuple containing a 3-element coordinate sequence and string unit name, e.g. ([0, 0.5, 0.5], "cm"), or by passing in a YTArray. Code units are assumed if unspecified. The domain edges along the selected *axis* can be selected with 'left'/'l' and 'right'/'r' respectively. width : tuple or a float. Width can have four different formats to support variable x and y widths. They are: ================================== ======================= format example ================================== ======================= (float, string) (10,'kpc') ((float, string), (float, string)) ((10,'kpc'),(15,'kpc')) float 0.2 (float, float) (0.2, 0.3) ================================== ======================= For example, (10, 'kpc') specifies a width that is 10 kiloparsecs wide in the x and y directions, ((10,'kpc'),(15,'kpc')) specifies a width that is 10 kiloparsecs wide along the x axis and 15 kiloparsecs wide along the y axis. In the other two examples, code units are assumed, for example (0.2, 0.3) specifies a width that has an x width of 0.2 and a y width of 0.3 in code units. length_unit : string, optional the length units that the coordinates are written in. The default is to use the default length unit of the dataset. origin : string The origin of the coordinate system in the file. If "domain", then the center coordinates will be the same as the center of the image as defined by the *center* keyword argument. If "image", then the center coordinates will be set to (0,0). Default: "domain" """ def __init__( self, ds, axis, fields, image_res=512, center="center", width=None, length_unit=None, *, origin="domain", **kwargs, ): fields = list(iter_fields(fields)) axis = fix_axis(axis, ds) center, dcenter = ds.coordinates.sanitize_center(center, axis) slc = ds.slice(axis, center[axis], **kwargs) w, frb, lunit = construct_image( ds, axis, slc, dcenter, image_res, width, length_unit, origin=origin, ) super().__init__(frb, fields=fields, length_unit=lunit, wcs=w) class FITSProjection(FITSImageData): r""" Generate a FITSImageData of an on-axis projection. Parameters ---------- ds : :class:`~yt.data_objects.static_output.Dataset` The dataset object. axis : character or integer The axis along which to project. One of "x","y","z", or 0,1,2. fields : string or list of strings The fields to project image_res : an int or 2-tuple of ints Specify the resolution of the resulting image. A single value will be used for both axes, whereas a tuple of values will be used for the individual axes. Default: 512 center : 'center', 'c', 'left', 'l', 'right', 'r', id of a global extremum, or array-like The coordinate of the selection's center. Defaults to the 'center', i.e. center of the domain. Centering on the min or max of a field is supported by passing a tuple such as ('min', ('gas', 'density')) or ('max', ('gas', 'temperature')). A single string may also be used (e.g. "min_density" or "max_temperature"), though it's not as flexible and does not allow to select an exact field/particle type. With this syntax, the first field matching the provided name is selected. 'max' or 'm' can be used as a shortcut for ('max', ('gas', 'density')) 'min' can be used as a shortcut for ('min', ('gas', 'density')) One can also select an exact point as a 3 element coordinate sequence, e.g. [0.5, 0.5, 0] Units can be specified by passing in *center* as a tuple containing a 3-element coordinate sequence and string unit name, e.g. ([0, 0.5, 0.5], "cm"), or by passing in a YTArray. Code units are assumed if unspecified. The domain edges along the selected *axis* can be selected with 'left'/'l' and 'right'/'r' respectively. width : tuple or a float. Width can have four different formats to support variable x and y widths. They are: ================================== ======================= format example ================================== ======================= (float, string) (10,'kpc') ((float, string), (float, string)) ((10,'kpc'),(15,'kpc')) float 0.2 (float, float) (0.2, 0.3) ================================== ======================= For example, (10, 'kpc') specifies a width that is 10 kiloparsecs wide in the x and y directions, ((10,'kpc'),(15,'kpc')) specifies a width that is 10 kiloparsecs wide along the x-axis and 15 kiloparsecs wide along the y-axis. In the other two examples, code units are assumed, for example (0.2, 0.3) specifies a width that has an x width of 0.2 and a y width of 0.3 in code units. weight_field : string The field used to weight the projection. length_unit : string, optional the length units that the coordinates are written in. The default is to use the default length unit of the dataset. origin : string The origin of the coordinate system in the file. If "domain", then the center coordinates will be the same as the center of the image as defined by the *center* keyword argument. If "image", then the center coordinates will be set to (0,0). Default: "domain" moment : integer, optional for a weighted projection, moment = 1 (the default) corresponds to a weighted average. moment = 2 corresponds to a weighted standard deviation. """ def __init__( self, ds, axis, fields, image_res=512, center="center", width=None, weight_field=None, length_unit=None, *, origin="domain", moment=1, **kwargs, ): fields = list(iter_fields(fields)) axis = fix_axis(axis, ds) center, dcenter = ds.coordinates.sanitize_center(center, axis) prj = ds.proj( fields[0], axis, weight_field=weight_field, moment=moment, **kwargs ) w, frb, lunit = construct_image( ds, axis, prj, dcenter, image_res, width, length_unit, origin=origin, ) super().__init__(frb, fields=fields, length_unit=lunit, wcs=w) class FITSParticleProjection(FITSImageData): r""" Generate a FITSImageData of an on-axis projection of a particle field. Parameters ---------- ds : :class:`~yt.data_objects.static_output.Dataset` The dataset object. axis : character or integer The axis along which to project. One of "x","y","z", or 0,1,2. fields : string or list of strings The fields to project image_res : an int or 2-tuple of ints Specify the resolution of the resulting image. A single value will be used for both axes, whereas a tuple of values will be used for the individual axes. Default: 512 center : 'center', 'c', 'left', 'l', 'right', 'r', id of a global extremum, or array-like The coordinate of the selection's center. Defaults to the 'center', i.e. center of the domain. Centering on the min or max of a field is supported by passing a tuple such as ('min', ('gas', 'density')) or ('max', ('gas', 'temperature')). A single string may also be used (e.g. "min_density" or "max_temperature"), though it's not as flexible and does not allow to select an exact field/particle type. With this syntax, the first field matching the provided name is selected. 'max' or 'm' can be used as a shortcut for ('max', ('gas', 'density')) 'min' can be used as a shortcut for ('min', ('gas', 'density')) One can also select an exact point as a 3 element coordinate sequence, e.g. [0.5, 0.5, 0] Units can be specified by passing in *center* as a tuple containing a 3-element coordinate sequence and string unit name, e.g. ([0, 0.5, 0.5], "cm"), or by passing in a YTArray. Code units are assumed if unspecified. The domain edges along the selected *axis* can be selected with 'left'/'l' and 'right'/'r' respectively. width : tuple or a float. Width can have four different formats to support variable x and y widths. They are: ================================== ======================= format example ================================== ======================= (float, string) (10,'kpc') ((float, string), (float, string)) ((10,'kpc'),(15,'kpc')) float 0.2 (float, float) (0.2, 0.3) ================================== ======================= For example, (10, 'kpc') specifies a width that is 10 kiloparsecs wide in the x and y directions, ((10,'kpc'),(15,'kpc')) specifies a width that is 10 kiloparsecs wide along the x-axis and 15 kiloparsecs wide along the y-axis. In the other two examples, code units are assumed, for example (0.2, 0.3) specifies a width that has an x width of 0.2 and a y width of 0.3 in code units. depth : A tuple or a float A tuple containing the depth to project through and the string key of the unit: (width, 'unit'). If set to a float, code units are assumed. Defaults to the entire domain. weight_field : string The field used to weight the projection. length_unit : string, optional the length units that the coordinates are written in. The default is to use the default length unit of the dataset. deposition : string, optional Controls the order of the interpolation of the particles onto the mesh. "ngp" is 0th-order "nearest-grid-point" method (the default), "cic" is 1st-order "cloud-in-cell". density : boolean, optional If True, the quantity to be projected will be divided by the area of the cells, to make a projected density of the quantity. Default: False field_parameters : dictionary A dictionary of field parameters that can be accessed by derived fields. data_source : yt.data_objects.data_containers.YTSelectionContainer, optional If specified, this will be the data source used for selecting regions to project. origin : string The origin of the coordinate system in the file. If "domain", then the center coordinates will be the same as the center of the image as defined by the *center* keyword argument. If "image", then the center coordinates will be set to (0,0). Default: "domain" """ def __init__( self, ds, axis, fields, image_res=512, center="center", width=None, depth=(1, "1"), weight_field=None, length_unit=None, deposition="ngp", density=False, field_parameters=None, data_source=None, *, origin="domain", ): fields = list(iter_fields(fields)) axis = fix_axis(axis, ds) center, dcenter = ds.coordinates.sanitize_center(center, axis) width = ds.coordinates.sanitize_width(axis, width, depth) width[-1].convert_to_units(width[0].units) if field_parameters is None: field_parameters = {} ps = ParticleAxisAlignedDummyDataSource( center, ds, axis, width, fields, weight_field=weight_field, field_parameters=field_parameters, data_source=data_source, deposition=deposition, density=density, ) w, frb, lunit = construct_image( ds, axis, ps, dcenter, image_res, width, length_unit, origin=origin ) super().__init__(frb, fields=fields, length_unit=lunit, wcs=w) class FITSParticleOffAxisProjection(FITSImageData): r""" Generate a FITSImageData of an off-axis projection of a particle field. Parameters ---------- ds : :class:`~yt.data_objects.static_output.Dataset` The dataset object. normal : a sequence of floats The vector normal to the projection plane. fields : string or list of strings The fields to project image_res : an int or 2-tuple of ints Specify the resolution of the resulting image. A single value will be used for both axes, whereas a tuple of values will be used for the individual axes. Default: 512 center : A sequence of floats, a string, or a tuple. The coordinate of the center of the image. If set to 'c', 'center' or left blank, the plot is centered on the middle of the domain. If set to 'max' or 'm', the center will be located at the maximum of the ('gas', 'density') field. Centering on the max or min of a specific field is supported by providing a tuple such as ("min","temperature") or ("max","dark_matter_density"). Units can be specified by passing in *center* as a tuple containing a coordinate and string unit name or by passing in a YTArray. If a list or unitless array is supplied, code units are assumed. width : tuple or a float. Width can have four different formats to support variable x and y widths. They are: ================================== ======================= format example ================================== ======================= (float, string) (10,'kpc') ((float, string), (float, string)) ((10,'kpc'),(15,'kpc')) float 0.2 (float, float) (0.2, 0.3) ================================== ======================= For example, (10, 'kpc') specifies a width that is 10 kiloparsecs wide in the x and y directions, ((10,'kpc'),(15,'kpc')) specifies a width that is 10 kiloparsecs wide along the x-axis and 15 kiloparsecs wide along the y-axis. In the other two examples, code units are assumed, for example (0.2, 0.3) specifies a width that has an x width of 0.2 and a y width of 0.3 in code units. depth : A tuple or a float A tuple containing the depth to project through and the string key of the unit: (width, 'unit'). If set to a float, code units are assumed. Defaults to the entire domain. weight_field : string The field used to weight the projection. length_unit : string, optional the length units that the coordinates are written in. The default is to use the default length unit of the dataset. deposition : string, optional Controls the order of the interpolation of the particles onto the mesh. "ngp" is 0th-order "nearest-grid-point" method (the default), "cic" is 1st-order "cloud-in-cell". density : boolean, optional If True, the quantity to be projected will be divided by the area of the cells, to make a projected density of the quantity. Default: False field_parameters : dictionary A dictionary of field parameters that can be accessed by derived fields. data_source : yt.data_objects.data_containers.YTSelectionContainer, optional If specified, this will be the data source used for selecting regions to project. north_vector : a sequence of floats A vector defining the 'up' direction in the plot. This option sets the orientation of the slicing plane. If not set, an arbitrary grid-aligned north-vector is chosen. """ def __init__( self, ds, normal, fields, image_res=512, center="c", width=None, depth=(1, "1"), weight_field=None, length_unit=None, deposition="ngp", density=False, field_parameters=None, data_source=None, north_vector=None, ): fields = list(iter_fields(fields)) center, dcenter = ds.coordinates.sanitize_center(center, None) width = ds.coordinates.sanitize_width(normal, width, depth) wd = tuple(el.in_units("code_length").v for el in width) if field_parameters is None: field_parameters = {} ps = ParticleOffAxisDummyDataSource( center, ds, normal, wd, fields, weight_field=weight_field, field_parameters=field_parameters, data_source=data_source, deposition=deposition, density=density, north_vector=north_vector, ) w, frb, lunit = construct_image( ds, normal, ps, dcenter, image_res, width, length_unit, origin="image", ) super().__init__(frb, fields=fields, length_unit=lunit, wcs=w) class FITSOffAxisSlice(FITSImageData): r""" Generate a FITSImageData of an off-axis slice. Parameters ---------- ds : :class:`~yt.data_objects.static_output.Dataset` The dataset object. normal : a sequence of floats The vector normal to the projection plane. fields : string or list of strings The fields to slice image_res : an int or 2-tuple of ints Specify the resolution of the resulting image. A single value will be used for both axes, whereas a tuple of values will be used for the individual axes. Default: 512 center : 'center', 'c', id of a global extremum, or array-like The coordinate of the selection's center. Defaults to the 'center', i.e. center of the domain. Centering on the min or max of a field is supported by passing a tuple such as ('min', ('gas', 'density')) or ('max', ('gas', 'temperature')). A single string may also be used (e.g. "min_density" or "max_temperature"), though it's not as flexible and does not allow to select an exact field/particle type. With this syntax, the first field matching the provided name is selected. 'max' or 'm' can be used as a shortcut for ('max', ('gas', 'density')) 'min' can be used as a shortcut for ('min', ('gas', 'density')) One can also select an exact point as a 3 element coordinate sequence, e.g. [0.5, 0.5, 0] Units can be specified by passing in *center* as a tuple containing a 3-element coordinate sequence and string unit name, e.g. ([0, 0.5, 0.5], "cm"), or by passing in a YTArray. Code units are assumed if unspecified. width : tuple or a float. Width can have four different formats to support variable x and y widths. They are: ================================== ======================= format example ================================== ======================= (float, string) (10,'kpc') ((float, string), (float, string)) ((10,'kpc'),(15,'kpc')) float 0.2 (float, float) (0.2, 0.3) ================================== ======================= For example, (10, 'kpc') specifies a width that is 10 kiloparsecs wide in the x and y directions, ((10,'kpc'),(15,'kpc')) specifies a width that is 10 kiloparsecs wide along the x axis and 15 kiloparsecs wide along the y axis. In the other two examples, code units are assumed, for example (0.2, 0.3) specifies a width that has an x width of 0.2 and a y width of 0.3 in code units. north_vector : a sequence of floats A vector defining the 'up' direction in the plot. This option sets the orientation of the slicing plane. If not set, an arbitrary grid-aligned north-vector is chosen. length_unit : string, optional the length units that the coordinates are written in. The default is to use the default length unit of the dataset. """ def __init__( self, ds, normal, fields, image_res=512, center="center", width=None, north_vector=None, length_unit=None, ): fields = list(iter_fields(fields)) center, dcenter = ds.coordinates.sanitize_center(center, axis=None) cut = ds.cutting(normal, center, north_vector=north_vector) center = ds.arr([0.0] * 2, "code_length") w, frb, lunit = construct_image( ds, normal, cut, center, image_res, width, length_unit ) super().__init__(frb, fields=fields, length_unit=lunit, wcs=w) class FITSOffAxisProjection(FITSImageData): r""" Generate a FITSImageData of an off-axis projection. Parameters ---------- ds : :class:`~yt.data_objects.static_output.Dataset` This is the dataset object corresponding to the simulation output to be plotted. normal : a sequence of floats The vector normal to the projection plane. fields : string, list of strings The name of the field(s) to be plotted. image_res : an int or 2-tuple of ints Specify the resolution of the resulting image. A single value will be used for both axes, whereas a tuple of values will be used for the individual axes. Default: 512 center : 'center', 'c', id of a global extremum, or array-like The coordinate of the selection's center. Defaults to the 'center', i.e. center of the domain. Centering on the min or max of a field is supported by passing a tuple such as ('min', ('gas', 'density')) or ('max', ('gas', 'temperature')). A single string may also be used (e.g.
"min_density" or "max_temperature"), though it's not as flexible and does not allow to select an exact field/particle type. With this syntax, the first field matching the provided name is selected. 'max' or 'm' can be used as a shortcut for ('max', ('gas', 'density')) 'min' can be used as a shortcut for ('min', ('gas', 'density')) One can also select an exact point as a 3 element coordinate sequence, e.g. [0.5, 0.5, 0] Units can be specified by passing in *center* as a tuple containing a 3-element coordinate sequence and string unit name, e.g. ([0, 0.5, 0.5], "cm"), or by passing in a YTArray. Code units are assumed if unspecified. width : tuple or a float. Width can have four different formats to support variable x and y widths. They are: ================================== ======================= format example ================================== ======================= (float, string) (10,'kpc') ((float, string), (float, string)) ((10,'kpc'),(15,'kpc')) float 0.2 (float, float) (0.2, 0.3) ================================== ======================= For example, (10, 'kpc') specifies a width that is 10 kiloparsecs wide in the x and y directions, ((10,'kpc'),(15,'kpc')) specifies a width that is 10 kiloparsecs wide along the x-axis and 15 kiloparsecs wide along the y-axis. In the other two examples, code units are assumed, for example (0.2, 0.3) specifies a width that has an x width of 0.2 and a y width of 0.3 in code units. depth : A tuple or a float A tuple containing the depth to project through and the string key of the unit: (width, 'unit'). If set to a float, code units are assumed weight_field : string The name of the weighting field. Set to None for no weight. north_vector : a sequence of floats A vector defining the 'up' direction in the plot. This option sets the orientation of the slicing plane. If not set, an arbitrary grid-aligned north-vector is chosen. method : string The method of projection. Valid methods are: "integrate" with no weight_field specified : integrate the requested field along the line of sight. "integrate" with a weight_field specified : weight the requested field by the weighting field and integrate along the line of sight. "sum" : This method is the same as integrate, except that it does not multiply by a path length when performing the integration, and is just a straight summation of the field along the given axis. WARNING: This should only be used for uniform resolution grid datasets, as other datasets may result in unphysical images. data_source : yt.data_objects.data_containers.YTSelectionContainer, optional If specified, this will be the data source used for selecting regions to project. length_unit : string, optional the length units that the coordinates are written in. The default is to use the default length unit of the dataset. moment : integer, optional for a weighted projection, moment = 1 (the default) corresponds to a weighted average. moment = 2 corresponds to a weighted standard deviation. 
""" def __init__( self, ds, normal, fields, center="center", width=(1.0, "unitary"), weight_field=None, image_res=512, data_source=None, north_vector=None, depth=(1.0, "unitary"), method="integrate", length_unit=None, *, moment=1, ): validate_moment(moment, weight_field) center, dcenter = ds.coordinates.sanitize_center(center, axis=None) fields = list(iter_fields(fields)) buf = {} width = ds.coordinates.sanitize_width(normal, width, depth) wd = tuple(el.in_units("code_length").v for el in width) if not is_sequence(image_res): image_res = (image_res, image_res) res = (image_res[0], image_res[1]) if data_source is None: source = ds.all_data() else: source = data_source fields = source._determine_fields(list(iter_fields(fields))) stddev_str = "_stddev" if moment == 2 else "" for item in fields: ftype, fname = item key = (ftype, f"{fname}{stddev_str}") buf[key] = off_axis_projection( source, center, normal, wd, res, item, north_vector=north_vector, method=method, weight=weight_field, depth=depth, ).swapaxes(0, 1) if moment == 2: def _sq_field(field, data, item: FieldKey): return data[item] ** 2 fd = ds._get_field_info(item) field_sq = (ftype, f"tmp_{fname}_squared") ds.add_field( field_sq, partial(_sq_field, item=item), sampling_type=fd.sampling_type, units=f"({fd.units})*({fd.units})", ) buff2 = off_axis_projection( source, center, normal, wd, res, field_sq, north_vector=north_vector, method=method, weight=weight_field, depth=depth, ).swapaxes(0, 1) buf[key] = compute_stddev_image(buff2, buf[key]) ds.field_info.pop(field_sq) center = ds.arr([0.0] * 2, "code_length") w, not_an_frb, lunit = construct_image( ds, normal, buf, center, image_res, width, length_unit ) super().__init__(buf, fields=list(buf.keys()), wcs=w, length_unit=lunit, ds=ds) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/fixed_resolution.py0000644000175100001770000007405014714401662021064 0ustar00runnerdockerimport sys import weakref from functools import partial from typing import TYPE_CHECKING import numpy as np from yt._maintenance.deprecation import issue_deprecation_warning from yt._typing import FieldKey, MaskT from yt.data_objects.image_array import ImageArray from yt.frontends.ytdata.utilities import save_as_dataset from yt.funcs import get_output_filename, iter_fields, mylog from yt.loaders import load_uniform_grid from yt.utilities.lib.api import ( # type: ignore CICDeposit_2, add_points_to_greyscale_image, ) from yt.utilities.lib.pixelization_routines import ( pixelize_cylinder, rotate_particle_coord_pib, ) from yt.utilities.math_utils import compute_stddev_image from yt.utilities.on_demand_imports import _h5py as h5py from .volume_rendering.api import off_axis_projection if sys.version_info >= (3, 12): from typing import override else: from typing_extensions import override if TYPE_CHECKING: from yt.visualization.fixed_resolution_filters import FixedResolutionBufferFilter class FixedResolutionBuffer: r""" FixedResolutionBuffer(data_source, bounds, buff_size, antialias = True) This accepts a 2D data object, such as a Projection or Slice, and implements a protocol for generating a pixelized, fixed-resolution image buffer. yt stores 2D AMR data internally as a set of 2D coordinates and the half-width of individual pixels. Converting this to an image buffer requires a deposition step, where individual variable-resolution pixels are deposited into a buffer of some resolution, to create an image. 
This object is an interface to that pixelization step: it can deposit multiple fields. It acts as a standard YTDataContainer object, such that dict-style access returns an image of a given field. Parameters ---------- data_source : :class:`yt.data_objects.construction_data_containers.YTQuadTreeProj` or :class:`yt.data_objects.selection_data_containers.YTSlice` This is the source to be pixelized, which can be a projection, slice or cutting plane. bounds : sequence of floats Bounds are the min and max in the image plane that we want our image to cover. It's in the order of (xmin, xmax, ymin, ymax), where the coordinates are all in the appropriate code units. buff_size : sequence of ints The size of the image to generate. antialias : boolean This can be true or false. It determines whether or not sub-pixel rendering is used during data deposition. periodic : boolean This can be true or false, and governs whether the pixelization will span the domain boundaries. filters : list of FixedResolutionBufferFilter objects (optional) Examples -------- To make a projection and then several images, you can generate a single FRB and then access multiple fields: >>> proj = ds.proj(0, ("gas", "density")) >>> frb1 = FixedResolutionBuffer(proj, (0.2, 0.3, 0.4, 0.5), (1024, 1024)) >>> print(frb1["gas", "density"].max()) 1.0914e-9 g/cm**3 >>> print(frb1["gas", "temperature"].max()) 104923.1 K """ _exclude_fields = ( "pz", "pdz", "dx", "x", "y", "z", "r", "dr", "phi", "dphi", "theta", "dtheta", ("index", "dx"), ("index", "x"), ("index", "y"), ("index", "z"), ("index", "r"), ("index", "dr"), ("index", "phi"), ("index", "dphi"), ("index", "theta"), ("index", "dtheta"), ) def __init__( self, data_source, bounds, buff_size, antialias=True, periodic=False, *, filters: list["FixedResolutionBufferFilter"] | None = None, ): self.data_source = data_source self.ds = data_source.ds self.bounds = bounds self.buff_size = (int(buff_size[0]), int(buff_size[1])) self.antialias = antialias self.data: dict[str, ImageArray] = {} self.mask: dict[str, MaskT] = {} self.axis = data_source.axis self.periodic = periodic self._data_valid = False # import type here to avoid import cycles # note that this import statement is actually crucial at runtime: # the filter methods for the present class are defined only when # fixed_resolution_filters is imported, so we need to guarantee # that it happens no later than instantiation from yt.visualization.fixed_resolution_filters import ( # noqa FixedResolutionBufferFilter, ) self._filters: list[FixedResolutionBufferFilter] = ( filters if filters is not None else [] ) ds = getattr(data_source, "ds", None) if ds is not None: ds.plots.append(weakref.proxy(self)) # Handle periodicity, just in case if self.data_source.axis is not None: DLE = self.ds.domain_left_edge DRE = self.ds.domain_right_edge DD = float(self.periodic) * (DRE - DLE) axis = self.data_source.axis xax = self.ds.coordinates.x_axis[axis] yax = self.ds.coordinates.y_axis[axis] self._period = (DD[xax], DD[yax]) self._edges = ((DLE[xax], DRE[xax]), (DLE[yax], DRE[yax])) def keys(self): return self.data.keys() def __delitem__(self, item): del self.data[item] def _generate_image_and_mask(self, item) -> None: mylog.info( "Making a fixed resolution buffer of (%s) %d by %d", item, self.buff_size[0], self.buff_size[1], ) bounds = [] for b in self.bounds: if hasattr(b, "in_units"): b = float(b.in_units("code_length")) bounds.append(b) buff, mask = self.ds.coordinates.pixelize( self.data_source.axis, self.data_source, item, bounds, 
self.buff_size,
            int(self.antialias),
            return_mask=True,
        )
        buff = self._apply_filters(buff)
        # FIXME FIXME FIXME we shouldn't need to do this for projections
        # but that will require fixing data object access for particle
        # projections
        try:
            if hasattr(item, "name"):
                it = item.name
            else:
                it = item
            units = self.data_source._projected_units[it]
        except (KeyError, AttributeError):
            units = self.data_source[item].units
        self.data[item] = ImageArray(buff, units=units, info=self._get_info(item))
        self.mask[item] = mask
        self._data_valid = True

    def __getitem__(self, item):
        # backward compatibility
        return self.get_image(item)

    def get_image(self, key, /) -> ImageArray:
        if not (key in self.data and self._data_valid):
            self._generate_image_and_mask(key)
        return self.data[key]

    def get_mask(self, key, /) -> MaskT:
        """Return the boolean array associated with an image with the same key.

        Elements set to True indicate pixels that were updated by a
        pixelisation routine.
        """
        if not (key in self.mask and self._data_valid):
            self._generate_image_and_mask(key)
        return self.mask[key]

    def render(self, item):
        # delegate to __getitem__ for historical reasons
        # this method exists for clarity of intention
        return self[item]

    def _apply_filters(self, buffer: np.ndarray) -> np.ndarray:
        for f in self._filters:
            buffer = f(buffer)
        return buffer

    def __setitem__(self, item, val):
        self.data[item] = val

    def _get_data_source_fields(self):
        exclude = self.data_source._key_fields + list(self._exclude_fields)
        fields = getattr(self.data_source, "fields", [])
        fields += getattr(self.data_source, "field_data", {}).keys()
        for f in fields:
            if f not in exclude and f[0] not in self.data_source.ds.particle_types:
                self.render(f)

    def _get_info(self, item):
        info = {}
        ftype, fname = field = self.data_source._determine_fields(item)[0]
        finfo = self.data_source.ds._get_field_info(field)
        info["data_source"] = self.data_source.__str__()
        info["axis"] = self.data_source.axis
        info["field"] = str(item)
        info["xlim"] = self.bounds[:2]
        info["ylim"] = self.bounds[2:]
        info["length_unit"] = self.data_source.ds.length_unit
        info["length_to_cm"] = info["length_unit"].in_cgs().to_ndarray()
        info["center"] = self.data_source.center
        try:
            info["coord"] = self.data_source.coord
        except AttributeError:
            pass
        try:
            info["weight_field"] = self.data_source.weight_field
        except AttributeError:
            pass
        info["label"] = finfo.get_latex_display_name()
        return info

    def convert_to_pixel(self, coords):
        r"""This function converts coordinates in code-space to pixel-space.

        Parameters
        ----------
        coords : sequence of array_like
            This is (x_coord, y_coord).  Because of the way the math is done,
            these can both be arrays.

        Returns
        -------
        output : sequence of array_like
            This returns px_coord, py_coord
        """
        dpx = (self.bounds[1] - self.bounds[0]) / self.buff_size[0]
        dpy = (self.bounds[3] - self.bounds[2]) / self.buff_size[1]
        px = (coords[0] - self.bounds[0]) / dpx
        py = (coords[1] - self.bounds[2]) / dpy
        return (px, py)

    def convert_distance_x(self, distance):
        r"""This function converts code-space distance into pixel-space
        distance in the x-coordinate.

        Parameters
        ----------
        distance : array_like
            This is x-distance in code-space you would like to convert.

        Returns
        -------
        output : array_like
            The return value is the distance in the x-pixel coordinates.
        """
        dpx = (self.bounds[1] - self.bounds[0]) / self.buff_size[0]
        return distance / dpx

    def convert_distance_y(self, distance):
        r"""This function converts code-space distance into pixel-space
        distance in the y-coordinate.

        Parameters
        ----------
        distance : array_like
            This is y-distance in code-space you would like to convert.

        Returns
        -------
        output : array_like
            The return value is the distance in the y-pixel coordinates.
        """
        dpy = (self.bounds[3] - self.bounds[2]) / self.buff_size[1]
        return distance / dpy

    def set_unit(self, field, unit, equivalency=None, equivalency_kwargs=None):
        """Sets a new unit for the requested field

        Parameters
        ----------
        field : string or field tuple
            The name of the field that is to be changed.
        unit : string or Unit object
            The name of the new unit.
        equivalency : string, optional
            If set, the equivalency to use to convert the current units to
            the new requested unit. If None, the unit conversion will be done
            without an equivalency.
        equivalency_kwargs : string, optional
            Keyword arguments to be passed to the equivalency. Only used if
            ``equivalency`` is set.
        """
        if equivalency_kwargs is None:
            equivalency_kwargs = {}
        field = self.data_source._determine_fields(field)[0]
        if equivalency is None:
            self[field].convert_to_units(unit)
        else:
            equiv_array = self[field].to_equivalent(
                unit, equivalency, **equivalency_kwargs
            )
            # equiv_array isn't necessarily an ImageArray. This is an issue
            # inherent to the way the unit system handles YTArray
            # subclasses and I don't see how to modify the unit system to
            # fix this. Instead, we paper over this issue and hard code
            # that equiv_array is an ImageArray
            self[field] = ImageArray(
                equiv_array,
                equiv_array.units,
                equiv_array.units.registry,
                self[field].info,
            )

    def export_hdf5(self, filename, fields=None):
        r"""Export a set of fields to a set of HDF5 datasets.

        This function will export any number of fields into datasets in a new
        HDF5 file.

        Parameters
        ----------
        filename : string
            This file will be opened in "append" mode.
        fields : list of strings
            These fields will be pixelized and output.
        """
        if fields is None:
            fields = list(self.data.keys())
        output = h5py.File(filename, mode="a")
        for field in fields:
            output.create_dataset("_".join(field), data=self[field])
        output.close()

    def to_fits_data(self, fields=None, other_keys=None, length_unit=None, **kwargs):
        r"""Export the fields in this FixedResolutionBuffer instance
        to a FITSImageData instance.

        This will export a set of FITS images of either the fields specified
        or all the fields already in the object.

        Parameters
        ----------
        fields : list of strings
            These fields will be pixelized and output. If "None", the keys of
            the FRB will be used.
        other_keys : dictionary, optional
            A set of header keys and values to write into the FITS header.
        length_unit : string, optional
            the length units that the coordinates are written in. The default
            is to use the default length unit of the dataset.
        """
        from yt.visualization.fits_image import FITSImageData

        if length_unit is None:
            length_unit = self.ds.length_unit

        if fields is None:
            fields = list(self.data.keys())
        else:
            fields = list(iter_fields(fields))

        if len(fields) == 0:
            raise RuntimeError(
                "No fields to export. Either pass a field or list of fields to "
                "to_fits_data or access a field from the FixedResolutionBuffer "
                "object."
            )

        fid = FITSImageData(self, fields=fields, length_unit=length_unit)
        if other_keys is not None:
            for k, v in other_keys.items():
                fid.update_header("all", k, v)
        return fid

    def export_dataset(self, fields=None, nprocs=1):
        r"""Export a set of pixelized fields to an in-memory dataset that can
        be analyzed as any other in yt. Unit information and other parameters
        (e.g., geometry, current_time, etc.) will be taken from the parent
        dataset.
Parameters ---------- fields : list of strings, optional These fields will be pixelized and output. If "None", the keys of the FRB will be used. nprocs: integer, optional If greater than 1, will create this number of subarrays out of data Examples -------- >>> import yt >>> ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150") >>> slc = ds.slice(2, 0.0) >>> frb = slc.to_frb((500.0, "kpc"), 500) >>> ds2 = frb.export_dataset( ... fields=[("gas", "density"), ("gas", "temperature")], nprocs=32 ... ) """ nx, ny = self.buff_size data = {} if fields is None: fields = list(self.keys()) for field in fields: arr = self[field] data[field] = (arr.d.T.reshape(nx, ny, 1), str(arr.units)) bounds = [b.in_units("code_length").v for b in self.bounds] bbox = np.array([[bounds[0], bounds[1]], [bounds[2], bounds[3]], [0.0, 1.0]]) return load_uniform_grid( data, [nx, ny, 1], length_unit=self.ds.length_unit, bbox=bbox, sim_time=self.ds.current_time.in_units("s").v, mass_unit=self.ds.mass_unit, time_unit=self.ds.time_unit, velocity_unit=self.ds.velocity_unit, magnetic_unit=self.ds.magnetic_unit, periodicity=(False, False, False), geometry=self.ds.geometry, nprocs=nprocs, ) def save_as_dataset(self, filename=None, fields=None): r"""Export a fixed resolution buffer to a reloadable yt dataset. This function will take a fixed resolution buffer and output a dataset containing either the fields presently existing or fields given in the ``fields`` list. The resulting dataset can be reloaded as a yt dataset. Parameters ---------- filename : str, optional The name of the file to be written. If None, the name will be a combination of the original dataset and the type of data container. fields : list of strings or tuples, optional If this is supplied, it is the list of fields to be saved to disk. If not supplied, all the fields that have been queried will be saved. Returns ------- filename : str The name of the file that has been created. 
Examples -------- >>> import yt >>> ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046") >>> proj = ds.proj(("gas", "density"), "x", weight_field=("gas", "density")) >>> frb = proj.to_frb(1.0, (800, 800)) >>> fn = frb.save_as_dataset(fields=[("gas", "density")]) >>> ds2 = yt.load(fn) >>> print(ds2.data["gas", "density"]) [[ 1.25025353e-30 1.25025353e-30 1.25025353e-30 ..., 7.90820691e-31 7.90820691e-31 7.90820691e-31] [ 1.25025353e-30 1.25025353e-30 1.25025353e-30 ..., 7.90820691e-31 7.90820691e-31 7.90820691e-31] [ 1.25025353e-30 1.25025353e-30 1.25025353e-30 ..., 7.90820691e-31 7.90820691e-31 7.90820691e-31] ..., [ 1.55834239e-30 1.55834239e-30 1.55834239e-30 ..., 8.51353199e-31 8.51353199e-31 8.51353199e-31] [ 1.55834239e-30 1.55834239e-30 1.55834239e-30 ..., 8.51353199e-31 8.51353199e-31 8.51353199e-31] [ 1.55834239e-30 1.55834239e-30 1.55834239e-30 ..., 8.51353199e-31 8.51353199e-31 8.51353199e-31]] g/cm**3 """ keyword = f"{str(self.ds)}_{self.data_source._type_name}_frb" filename = get_output_filename(filename, keyword, ".h5") data = {} if fields is not None: for f in self.data_source._determine_fields(fields): data[f] = self[f] else: data.update(self.data) ftypes = {field: "grid" for field in data} extra_attrs = { arg: getattr(self.data_source, arg, None) for arg in self.data_source._con_args + self.data_source._tds_attrs } extra_attrs["con_args"] = self.data_source._con_args extra_attrs["left_edge"] = self.ds.arr([self.bounds[0], self.bounds[2]]) extra_attrs["right_edge"] = self.ds.arr([self.bounds[1], self.bounds[3]]) # The data dimensions are [NY, NX] but buff_size is [NX, NY]. extra_attrs["ActiveDimensions"] = self.buff_size[::-1] extra_attrs["level"] = 0 extra_attrs["data_type"] = "yt_frb" extra_attrs["container_type"] = self.data_source._type_name extra_attrs["dimensionality"] = self.data_source._dimensionality save_as_dataset( self.ds, filename, data, field_types=ftypes, extra_attrs=extra_attrs ) return filename @property def limits(self): rv = {"x": None, "y": None, "z": None} xax = self.ds.coordinates.x_axis[self.axis] yax = self.ds.coordinates.y_axis[self.axis] xn = self.ds.coordinates.axis_name[xax] yn = self.ds.coordinates.axis_name[yax] rv[xn] = (self.bounds[0], self.bounds[1]) rv[yn] = (self.bounds[2], self.bounds[3]) return rv def setup_filters(self): issue_deprecation_warning( "The FixedResolutionBuffer.setup_filters method is now a no-op. ", stacklevel=3, since="4.1", ) class CylindricalFixedResolutionBuffer(FixedResolutionBuffer): """ This object is a subclass of :class:`yt.visualization.fixed_resolution.FixedResolutionBuffer` that supports non-aligned input data objects, primarily cutting planes. 
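    Examples
    --------
    A minimal sketch (the slice and radius here are illustrative assumptions;
    any slice through a cylindrical-geometry dataset would do):

    >>> slc = ds.slice("z", 0.0)
    >>> frb = CylindricalFixedResolutionBuffer(slc, 1.0, (512, 512))
    >>> frb["gas", "density"]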
""" def __init__(self, data_source, radius, buff_size, antialias=True, *, filters=None): self.data_source = data_source self.ds = data_source.ds self.radius = radius self.buff_size = buff_size self.antialias = antialias self.data = {} self._filters = filters if filters is not None else [] ds = getattr(data_source, "ds", None) if ds is not None: ds.plots.append(weakref.proxy(self)) @override def _generate_image_and_mask(self, item) -> None: buff = np.zeros(self.buff_size, dtype="f8") mask = pixelize_cylinder( buff, self.data_source["r"], self.data_source["dr"], self.data_source["theta"], self.data_source["dtheta"], self.data_source[item].astype("float64"), self.radius, return_mask=True, ) self.data[item] = ImageArray( buff, units=self.data_source[item].units, info=self._get_info(item) ) self.mask[item] = mask class OffAxisProjectionFixedResolutionBuffer(FixedResolutionBuffer): """ This object is a subclass of :class:`yt.visualization.fixed_resolution.FixedResolutionBuffer` that supports off axis projections. This calls the volume renderer. """ @override def _generate_image_and_mask(self, item) -> None: mylog.info( "Making a fixed resolution buffer of (%s) %d by %d", item, self.buff_size[0], self.buff_size[1], ) dd = self.data_source # only need the first two for SPH, # but need the third one for other data formats. width = self.ds.arr( ( self.bounds[1] - self.bounds[0], self.bounds[3] - self.bounds[2], self.bounds[5] - self.bounds[4], ) ) depth = dd.depth[0] if dd.depth is not None else None buff = off_axis_projection( dd.dd, dd.center, dd.normal_vector, width, self.buff_size, item, weight=dd.weight_field, volume=dd.volume, no_ghost=dd.no_ghost, interpolated=dd.interpolated, north_vector=dd.north_vector, depth=depth, method=dd.method, ) if self.data_source.moment == 2: def _sq_field(field, data, item: FieldKey): return data[item] ** 2 fd = self.ds._get_field_info(item) ftype, fname = item item_sq = (ftype, f"tmp_{fname}_squared") self.ds.add_field( item_sq, partial(_sq_field, item=item), sampling_type=fd.sampling_type, units=f"({fd.units})*({fd.units})", ) buff2 = off_axis_projection( dd.dd, dd.center, dd.normal_vector, width, self.buff_size, item_sq, weight=dd.weight_field, volume=dd.volume, no_ghost=dd.no_ghost, interpolated=dd.interpolated, north_vector=dd.north_vector, depth=dd.depth, method=dd.method, ) buff = compute_stddev_image(buff2, buff) self.ds.field_info.pop(item_sq) ia = ImageArray(buff.swapaxes(0, 1), info=self._get_info(item)) self.data[item] = ia self.mask[item] = None class ParticleImageBuffer(FixedResolutionBuffer): """ This object is a subclass of :class:`yt.visualization.fixed_resolution.FixedResolutionBuffer` that supports particle plots. It splats points onto an image buffer. 
""" def __init__( self, data_source, bounds, buff_size, antialias=True, periodic=False, *, filters=None, ): super().__init__( data_source, bounds, buff_size, antialias, periodic, filters=filters ) # set up the axis field names axis = self.axis if axis is not None: self.xax = self.ds.coordinates.x_axis[axis] self.yax = self.ds.coordinates.y_axis[axis] axis_name = self.ds.coordinates.axis_name self.x_field = f"particle_position_{axis_name[self.xax]}" self.y_field = f"particle_position_{axis_name[self.yax]}" @override def _generate_image_and_mask(self, item) -> None: deposition = self.data_source.deposition density = self.data_source.density mylog.info( "Splatting (%s) onto a %d by %d mesh using method '%s'", item, self.buff_size[0], self.buff_size[1], deposition, ) dd = self.data_source.dd ftype = item[0] if self.axis is None: wd = [] for w in self.data_source.width: if hasattr(w, "to_value"): w = w.to_value("code_length") wd.append(w) x_data, y_data, *bounds = rotate_particle_coord_pib( dd[ftype, "particle_position_x"].to_value("code_length"), dd[ftype, "particle_position_y"].to_value("code_length"), dd[ftype, "particle_position_z"].to_value("code_length"), self.data_source.center.to_value("code_length"), wd, self.data_source.normal_vector, self.data_source.north_vector, ) x_data = np.array(x_data) y_data = np.array(y_data) else: bounds = [] for b in self.bounds: if hasattr(b, "to_value"): b = b.to_value("code_length") bounds.append(b) x_data = dd[ftype, self.x_field].to_value("code_length") y_data = dd[ftype, self.y_field].to_value("code_length") data = dd[item] # handle periodicity dx = x_data - bounds[0] dy = y_data - bounds[2] if self.periodic: dx %= float(self._period[0].in_units("code_length")) dy %= float(self._period[1].in_units("code_length")) # convert to pixels px = dx / (bounds[1] - bounds[0]) py = dy / (bounds[3] - bounds[2]) # select only the particles that will actually show up in the image mask = np.logical_and( np.logical_and(px >= 0.0, px <= 1.0), np.logical_and(py >= 0.0, py <= 1.0) ) weight_field = self.data_source.weight_field if weight_field is None: weight_data = np.ones_like(data.v) else: weight_data = dd[weight_field] splat_vals = weight_data[mask] * data[mask] x_bin_edges = np.linspace(0.0, 1.0, self.buff_size[0] + 1) y_bin_edges = np.linspace(0.0, 1.0, self.buff_size[1] + 1) # splat particles buff = np.zeros(self.buff_size) buff_mask = np.zeros_like(buff, dtype="uint8") if deposition == "ngp": add_points_to_greyscale_image( buff, buff_mask, px[mask], py[mask], splat_vals ) elif deposition == "cic": CICDeposit_2( py[mask], px[mask], splat_vals, mask.sum(), buff, buff_mask, x_bin_edges, y_bin_edges, ) else: raise ValueError(f"Received unknown deposition method '{deposition}'") # remove values in no-particle region buff[buff_mask == 0] = np.nan # Normalize by the surface area of the pixel or volume of pencil if # requested info = self._get_info(item) if density: dpx = (bounds[1] - bounds[0]) / self.buff_size[0] dpy = (bounds[3] - bounds[2]) / self.buff_size[1] norm = self.ds.quan(dpx * dpy, "code_length**2").in_base() buff /= norm.v units = data.units / norm.units info["label"] += " $\\rm{Density}$" else: units = data.units # divide by the weight_field, if needed if weight_field is not None: weight_buff = np.zeros(self.buff_size) weight_buff_mask = np.zeros(self.buff_size, dtype="uint8") if deposition == "ngp": add_points_to_greyscale_image( weight_buff, weight_buff_mask, px[mask], py[mask], weight_data[mask] ) elif deposition == "cic": CICDeposit_2( py[mask], 
px[mask],
                weight_data[mask],
                mask.sum(),
                weight_buff,
                weight_buff_mask,
                y_bin_edges,
                x_bin_edges,
            )

            # remove values in no-particle region
            weight_buff[weight_buff_mask == 0] = np.nan
            locs = np.where(weight_buff > 0)
            buff[locs] /= weight_buff[locs]

        self.data[item] = ImageArray(buff, units=units, info=info)
        self.mask[item] = buff_mask != 0

    # over-ride the base class version, since we don't want to exclude
    # particle fields
    def _get_data_source_fields(self):
        exclude = self.data_source._key_fields + list(self._exclude_fields)
        fields = getattr(self.data_source, "fields", [])
        fields += getattr(self.data_source, "field_data", {}).keys()
        for f in fields:
            if f not in exclude:
                self.render(f)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/fixed_resolution_filters.py0000644000175100001770000000714114714401662022611 0ustar00runnerdockerfrom abc import ABC, abstractmethod
from functools import update_wrapper, wraps

import numpy as np

from yt._maintenance.deprecation import issue_deprecation_warning
from yt.visualization.fixed_resolution import FixedResolutionBuffer


def apply_filter(f):
    issue_deprecation_warning(
        "The apply_filter decorator is not used in yt any more and "
        "will be removed in a future version. "
        "Please do not use it.",
        stacklevel=3,
        since="4.1",
    )

    @wraps(f)
    def newfunc(self, *args, **kwargs):
        self._filters.append((f.__name__, (args, kwargs)))
        # Invalidate the data of the frb to force its regeneration
        self._data_valid = False
        return self

    return newfunc


class FixedResolutionBufferFilter(ABC):
    """
    This object allows applying a data transformation directly to a
    :class:`yt.visualization.fixed_resolution.FixedResolutionBuffer`
    """

    def __init_subclass__(cls, *args, **kwargs):
        if cls.__init__.__doc__ is None:
            # allow docstring definition at the class level instead of __init__
            cls.__init__.__doc__ = cls.__doc__
        # add a method to FixedResolutionBuffer
        method_name = "apply_" + cls._filter_name

        def closure(self, *args, **kwargs):
            self._filters.append(cls(*args, **kwargs))
            self._data_valid = False
            return self

        update_wrapper(
            wrapper=closure,
            wrapped=cls.__init__,
            assigned=("__annotations__", "__doc__"),
        )
        closure.__name__ = method_name
        setattr(FixedResolutionBuffer, method_name, closure)

    @abstractmethod
    def __init__(self, *args, **kwargs):
        """This method is required in subclasses, but the signature is arbitrary"""
        pass

    @abstractmethod
    def apply(self, buff: np.ndarray) -> np.ndarray:
        pass

    def __call__(self, buff: np.ndarray) -> np.ndarray:
        # alias to apply
        return self.apply(buff)


class FixedResolutionBufferGaussBeamFilter(FixedResolutionBufferFilter):
    """
    This filter convolves a
    :class:`yt.visualization.fixed_resolution.FixedResolutionBuffer` with a
    2D Gaussian that is 'nbeam' pixels wide and has standard deviation 'sigma'.
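    Examples
    --------
    Filters are attached to ``FixedResolutionBuffer`` as
    ``apply_<filter_name>`` methods by ``__init_subclass__`` above, so a
    sketch of typical usage is:

    >>> frb.apply_gauss_beam(nbeam=30, sigma=2.0)
    >>> frb["gas", "density"]  # regenerated with the filter applied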
""" _filter_name = "gauss_beam" def __init__(self, nbeam=30, sigma=2.0): self.nbeam = nbeam self.sigma = sigma def apply(self, buff): from yt.utilities.on_demand_imports import _scipy hnbeam = self.nbeam // 2 sigma = self.sigma l = np.linspace(-hnbeam, hnbeam, num=self.nbeam + 1) x, y = np.meshgrid(l, l) g2d = (1.0 / (sigma * np.sqrt(2.0 * np.pi))) * np.exp( -((x / sigma) ** 2 + (y / sigma) ** 2) / (2 * sigma**2) ) g2d /= g2d.max() npm, nqm = np.shape(buff) spl = _scipy.signal.convolve(buff, g2d) return spl[hnbeam : npm + hnbeam, hnbeam : nqm + hnbeam] class FixedResolutionBufferWhiteNoiseFilter(FixedResolutionBufferFilter): """ This filter adds white noise with the amplitude "bg_lvl" to :class:`yt.visualization.fixed_resolution.FixedResolutionBuffer`. If "bg_lvl" is not present, 10th percentile of the FRB's value is used instead. """ _filter_name = "white_noise" def __init__(self, bg_lvl=None): self.bg_lvl = bg_lvl def apply(self, buff): if self.bg_lvl is None: amp = np.percentile(buff, 10) else: amp = self.bg_lvl npm, nqm = np.shape(buff) rng = np.random.default_rng() return buff + rng.standard_normal((npm, nqm)) * amp ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/geo_plot_utils.py0000644000175100001770000000704314714401662020530 0ustar00runnerdockerfrom types import FunctionType from typing import Any valid_transforms: dict[str, FunctionType] = {} transform_list = [ "PlateCarree", "LambertConformal", "LambertCylindrical", "Mercator", "Miller", "Mollweide", "Orthographic", "Robinson", "Stereographic", "TransverseMercator", "InterruptedGoodeHomolosine", "RotatedPole", "OSGB", "EuroPP", "Geostationary", "Gnomonic", "NorthPolarStereo", "OSNI", "SouthPolarStereo", "AlbersEqualArea", "AzimuthalEquidistant", "Sinusoidal", "UTM", "NearsidePerspective", "LambertAzimuthalEqualArea", ] def cartopy_importer(transform_name): r"""Convenience function to import cartopy projection types""" def _func(*args, **kwargs): from yt.utilities.on_demand_imports import _cartopy as cartopy return getattr(cartopy.crs, transform_name)(*args, **kwargs) return _func def get_mpl_transform(mpl_proj) -> FunctionType | None: r"""This returns an instantiated transform function given a transform function name and arguments. Parameters ---------- mpl_proj : string or tuple the matplotlib projection type. Can take the form of string or tuple. Examples -------- >>> get_mpl_transform("PlateCarree") >>> get_mpl_transform( ... ("RotatedPole", (), {"pole_latitude": 37.5, "pole_longitude": 177.5}) ... ) """ # first check to see if the transform dict is empty, if it is fill it with # the cartopy functions if not valid_transforms: for mpl_transform in transform_list: valid_transforms[mpl_transform] = cartopy_importer(mpl_transform) # check to see if mpl_proj is a string or tuple, and construct args and # kwargs to pass to cartopy function based on that. 
key: str | None = None
    args: tuple = ()
    kwargs: dict[str, Any] = {}
    if isinstance(mpl_proj, str):
        key = mpl_proj
        instantiated_func = valid_transforms[key](*args, **kwargs)
    elif isinstance(mpl_proj, tuple):
        if len(mpl_proj) == 2:
            key, args = mpl_proj
            kwargs = {}
        elif len(mpl_proj) == 3:
            key, args, kwargs = mpl_proj
        else:
            raise ValueError(f"Expected a tuple with len 2 or 3, received {mpl_proj}")
        if not isinstance(key, str):
            raise TypeError(
                f"Expected a string as the first element in mpl_proj, got {key!r}"
            )
        instantiated_func = valid_transforms[key](*args, **kwargs)
    elif hasattr(mpl_proj, "globe"):
        # cartopy transforms have a globe method associated with them
        key = mpl_proj
        instantiated_func = mpl_proj
    elif hasattr(mpl_proj, "set_transform"):
        # mpl axes objects have a set_transform method, so we'll check if that
        # exists on something passed in.
        key = mpl_proj
        instantiated_func = mpl_proj
    elif hasattr(mpl_proj, "name"):
        # last we'll check if the transform is in the list of registered
        # projections in matplotlib.
        from matplotlib.projections import get_projection_names

        registered_projections = get_projection_names()
        if mpl_proj.name in registered_projections:
            key = mpl_proj
            instantiated_func = mpl_proj
        else:
            key = None
    # build in a check that if none of the above options are satisfied by what
    # the user passes that None is returned for the instantiated function
    if key is None:
        instantiated_func = None
    return instantiated_func
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/image_writer.py0000644000175100001770000003665114714401662020165 0ustar00runnerdockerimport numpy as np

from yt._maintenance.ipython_compat import IS_IPYTHON
from yt.config import ytcfg
from yt.funcs import mylog
from yt.units.yt_array import YTQuantity
from yt.utilities import png_writer as pw
from yt.utilities.exceptions import YTNotInsideNotebook
from yt.utilities.lib import image_utilities as au
from yt.visualization.color_maps import get_colormap_lut

from ._commons import get_canvas, validate_image_name


def scale_image(image, mi=None, ma=None):
    r"""Scale an image ([NxNxM] where M = 1-4) to be uint8 and values scaled
    from [0,255].

    Parameters
    ----------
    image : array_like or tuple of image info

    Examples
    --------
    >>> image = scale_image(image)
    >>> image = scale_image(image, mi=0, ma=1000)
    """
    if isinstance(image, np.ndarray) and image.dtype == np.uint8:
        return image
    if isinstance(image, (tuple, list)):
        image, mi, ma = image
    if mi is None:
        mi = image.min()
    if ma is None:
        ma = image.max()
    image = np.clip((image - mi) / (ma - mi) * 255, 0, 255).astype("uint8")
    return image


def multi_image_composite(
    fn, red_channel, blue_channel, green_channel=None, alpha_channel=None
):
    r"""Write an image with different color channels corresponding to different
    quantities.

    Accepts at least a red and a blue array, of shape (N,N) each, that are
    optionally scaled and composited into a final image, written into `fn`.
    Can also accept green and alpha.

    Parameters
    ----------
    fn : string
        Filename to save
    red_channel : array_like or tuple of image info
        Array, of shape (N,N), to be written into the red channel of the
        output image.  If not already uint8, will be converted (and scaled)
        into uint8.  Optionally, you can also specify a tuple that includes
        scaling information, in the form of (array_to_plot,
        min_value_to_scale, max_value_to_scale).
    blue_channel : array_like or tuple of image info
        Array, of shape (N,N), to be written into the blue channel of the
        output image.
        If not already uint8, will be converted (and scaled) into uint8.
        Optionally, you can also specify a tuple that includes scaling
        information, in the form of (array_to_plot, min_value_to_scale,
        max_value_to_scale).
    green_channel : array_like or tuple of image info, optional
        Array, of shape (N,N), to be written into the green channel of the
        output image.  If not already uint8, will be converted (and scaled)
        into uint8.  If not supplied, will be left empty.  Optionally, you
        can also specify a tuple that includes scaling information, in the
        form of (array_to_plot, min_value_to_scale, max_value_to_scale).
    alpha_channel : array_like or tuple of image info, optional
        Array, of shape (N,N), to be written into the alpha channel of the
        output image.  If not already uint8, will be converted (and scaled)
        into uint8.  If not supplied, will be made fully opaque.  Optionally,
        you can also specify a tuple that includes scaling information, in
        the form of (array_to_plot, min_value_to_scale, max_value_to_scale).

    Examples
    --------
    >>> red_channel = np.log10(frb["gas", "temperature"])
    >>> blue_channel = np.log10(frb["gas", "density"])
    >>> multi_image_composite("multi_channel1.png", red_channel, blue_channel)
    """
    red_channel = scale_image(red_channel)
    blue_channel = scale_image(blue_channel)
    if green_channel is None:
        green_channel = np.zeros(red_channel.shape, dtype="uint8")
    else:
        green_channel = scale_image(green_channel)
    if alpha_channel is None:
        alpha_channel = np.zeros(red_channel.shape, dtype="uint8") + 255
    else:
        alpha_channel = scale_image(alpha_channel)

    image = np.array([red_channel, green_channel, blue_channel, alpha_channel])
    image = image.transpose().copy()  # Have to make sure it's contiguous
    pw.write_png(image, fn)


def write_bitmap(bitmap_array, filename, max_val=None, transpose=False):
    r"""Write out a bitmapped image directly to a PNG file.

    This accepts a three- or four-channel `bitmap_array`.  If the image is not
    already uint8, it will be scaled and converted.  If it is four channel,
    only the first three channels will be scaled, while the fourth channel is
    assumed to be in the range of [0,1].  If it is not four channel, a fourth
    alpha channel will be added and set to fully opaque.  The resultant image
    will be directly written to `filename` as a PNG with no colormap applied.
    `max_val` is a value used if the array is passed in as anything other than
    uint8; it will be the value used for scaling and clipping in the first
    three channels when the array is converted.  Additionally, the minimum is
    assumed to be zero; this makes it primarily suited for the results of
    volume rendered images, rather than misaligned projections.

    Parameters
    ----------
    bitmap_array : array_like
        Array of shape (N,M,3) or (N,M,4), to be written.  If it is not
        already a uint8 array, it will be scaled and converted to uint8.
    filename : string
        Filename to save to.  If None, PNG contents will be returned as a
        string.
    max_val : float, optional
        The upper limit to clip values to in the output, if converting to
        uint8.  If `bitmap_array` is already uint8, this will be ignored.
    transpose : boolean, optional
        If transpose is False, we assume that the incoming bitmap_array is
        such that the first element resides in the upper-left corner.  If
        True, the first element will be placed in the lower-left corner.
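    Examples
    --------
    A minimal sketch (the array here is random data, purely illustrative):

    >>> import numpy as np
    >>> bitmap = np.random.random((256, 256, 3))
    >>> write_bitmap(bitmap, "rendering.png")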
""" if len(bitmap_array.shape) != 3 or bitmap_array.shape[-1] not in (3, 4): raise RuntimeError( "Expecting image array of shape (N,M,3) or " f"(N,M,4), received {str(bitmap_array.shape)}" ) if bitmap_array.dtype != np.uint8: s1, s2 = bitmap_array.shape[:2] if bitmap_array.shape[-1] == 3: alpha_channel = 255 * np.ones((s1, s2, 1), dtype="uint8") else: alpha_channel = (255 * bitmap_array[:, :, 3]).astype("uint8") alpha_channel.shape = s1, s2, 1 if max_val is None: max_val = bitmap_array[:, :, :3].max() if max_val == 0.0: # avoid dividing by zero for blank images max_val = 1.0 bitmap_array = np.clip(bitmap_array[:, :, :3] / max_val, 0.0, 1.0) * 255 bitmap_array = np.concatenate( [bitmap_array.astype("uint8"), alpha_channel], axis=-1 ) if transpose: bitmap_array = bitmap_array.swapaxes(0, 1).copy(order="C") if filename is not None: pw.write_png(bitmap_array, filename) else: return pw.write_png_to_string(bitmap_array.copy()) return bitmap_array def write_image(image, filename, color_bounds=None, cmap_name=None, func=lambda x: x): r"""Write out a floating point array directly to a PNG file, scaling it and applying a colormap. This function will scale an image and directly call libpng to write out a colormapped version of that image. It is designed for rapid-fire saving of image buffers generated using `yt.visualization.api.FixedResolutionBuffers` and the likes. Parameters ---------- image : array_like This is an (unscaled) array of floating point values, shape (N,N,) to save in a PNG file. filename : string Filename to save as. color_bounds : tuple of floats, optional The min and max to scale between. Outlying values will be clipped. cmap_name : string, optional An acceptable colormap. See either yt.visualization.color_maps or https://scipy-cookbook.readthedocs.io/items/Matplotlib_Show_colormaps.html . func : function, optional A function to transform the buffer before applying a colormap. Returns ------- scaled_image : uint8 image that has been saved Examples -------- >>> sl = ds.slice(0, 0.5, "Density") >>> frb1 = FixedResolutionBuffer(sl, (0.2, 0.3, 0.4, 0.5), (1024, 1024)) >>> write_image(frb1["gas", "density"], "saved.png") """ if cmap_name is None: cmap_name = ytcfg.get("yt", "default_colormap") if len(image.shape) == 3: mylog.info("Using only channel 1 of supplied image") image = image[:, :, 0] to_plot = apply_colormap(image, color_bounds=color_bounds, cmap_name=cmap_name) pw.write_png(to_plot, filename) return to_plot def apply_colormap(image, color_bounds=None, cmap_name=None, func=lambda x: x): r"""Apply a colormap to a floating point image, scaling to uint8. This function will scale an image and directly call libpng to write out a colormapped version of that image. It is designed for rapid-fire saving of image buffers generated using `yt.visualization.api.FixedResolutionBuffers` and the likes. Parameters ---------- image : array_like This is an (unscaled) array of floating point values, shape (N,N,) to save in a PNG file. color_bounds : tuple of floats, optional The min and max to scale between. Outlying values will be clipped. cmap_name : string, optional An acceptable colormap. See either yt.visualization.color_maps or https://scipy-cookbook.readthedocs.io/items/Matplotlib_Show_colormaps.html . func : function, optional A function to transform the buffer before applying a colormap. Returns ------- to_plot : uint8 image with colorbar applied. 
""" if cmap_name is None: cmap_name = ytcfg.get("yt", "default_colormap") from yt.data_objects.image_array import ImageArray image = ImageArray(func(image)) if color_bounds is None: mi = np.nanmin(image[~np.isinf(image)]) * image.uq ma = np.nanmax(image[~np.isinf(image)]) * image.uq color_bounds = mi, ma else: color_bounds = [YTQuantity(func(c), image.units) for c in color_bounds] image = (image - color_bounds[0]) / (color_bounds[1] - color_bounds[0]) to_plot = map_to_colors(image, cmap_name) to_plot = np.clip(to_plot, 0, 255) return to_plot def map_to_colors(buff, cmap_name): lut = get_colormap_lut(cmap_name) if isinstance(cmap_name, tuple): # If we are using the colorbrewer maps, don't interpolate shape = buff.shape # We add float_eps so that digitize doesn't go out of bounds x = np.mgrid[0.0 : 1.0 + np.finfo(np.float32).eps : lut[0].shape[0] * 1j] inds = np.digitize(buff.ravel(), x) inds.shape = (shape[0], shape[1]) mapped = np.dstack([(v[inds] * 255).astype("uint8") for v in lut]) del inds else: x = np.mgrid[0.0 : 1.0 : lut[0].shape[0] * 1j] mapped = np.dstack([(np.interp(buff, x, v) * 255).astype("uint8") for v in lut]) return mapped.copy("C") def splat_points(image, points_x, points_y, contribution=None, transposed=False): if contribution is None: contribution = 100.0 val = contribution * 1.0 / points_x.size if transposed: points_y = 1.0 - points_y points_x = 1.0 - points_x im = image.copy() au.add_points_to_image(im, points_x, points_y, val) return im def write_projection( data, filename, colorbar=True, colorbar_label=None, title=None, vmin=None, vmax=None, take_log=True, figsize=(8, 6), dpi=100, cmap_name=None, extent=None, xlabel=None, ylabel=None, ): r"""Write a projection or volume rendering to disk with a variety of pretty parameters such as limits, title, colorbar, etc. write_projection uses the standard matplotlib interface to create the figure. N.B. This code only works *after* you have created the projection using the standard framework (i.e. the Camera interface or off_axis_projection). Accepts an NxM sized array representing the projection itself as well as the filename to which you will save this figure. Note that the final resolution of your image will be a product of dpi/100 * figsize. Parameters ---------- data : array_like image array as output by off_axis_projection or camera.snapshot() filename : string the filename where the data will be saved colorbar : boolean do you want a colorbar generated to the right of the image? colorbar_label : string the label associated with your colorbar title : string the label at the top of the figure vmin : float or None the lower limit of the zaxis (part of matplotlib api) vmax : float or None the lower limit of the zaxis (part of matplotlib api) take_log : boolean plot the log of the data array (and take the log of the limits if set)? figsize : array_like width, height in inches of final image dpi : int final image resolution in pixels / inch cmap_name : string The name of the colormap. Examples -------- >>> image = off_axis_projection(ds, c, L, W, N, "Density", no_ghost=False) >>> write_projection( ... image, ... "test.png", ... colorbar_label="Column Density (cm$^{-2}$)", ... title="Offaxis Projection", ... vmin=1e-5, ... vmax=1e-3, ... take_log=True, ... ) """ if cmap_name is None: cmap_name = ytcfg.get("yt", "default_colormap") import matplotlib.colors import matplotlib.figure # If this is rendered as log, then apply now. 
if take_log: norm_cls = matplotlib.colors.LogNorm else: norm_cls = matplotlib.colors.Normalize norm = norm_cls(vmin=vmin, vmax=vmax) # Create the figure and paint the data on fig = matplotlib.figure.Figure(figsize=figsize) ax = fig.add_subplot(111) cax = ax.imshow( data.to_ndarray(), norm=norm, extent=extent, cmap=cmap_name, ) if title: ax.set_title(title) if xlabel: ax.set_xlabel(xlabel) if ylabel: ax.set_ylabel(ylabel) # Suppress the x and y pixel counts if extent is None: ax.set_xticks(()) ax.set_yticks(()) # Add a color bar and label if requested if colorbar: cbar = fig.colorbar(cax) if colorbar_label: cbar.ax.set_ylabel(colorbar_label) filename = validate_image_name(filename) canvas = get_canvas(fig, filename) mylog.info("Saving plot %s", filename) fig.tight_layout() canvas.print_figure(filename, dpi=dpi) return filename def display_in_notebook(image, max_val=None): """ A helper function to display images in an IPython notebook Must be run from within an IPython notebook, or else it will raise a YTNotInsideNotebook exception. Parameters ---------- image : array_like This is an (unscaled) array of floating point values, shape (N,N,3) or (N,N,4) to display in the notebook. The first three channels will be scaled automatically. max_val : float, optional The upper limit to clip values of the image. Only applies to the first three channels. """ if IS_IPYTHON: from IPython.core.displaypub import publish_display_data data = write_bitmap(image, None, max_val=max_val) publish_display_data( data={"image/png": data}, source="yt.visualization.image_writer.display_in_notebook", ) else: raise YTNotInsideNotebook ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/line_plot.py0000644000175100001770000003536314714401662017473 0ustar00runnerdockerfrom collections import defaultdict import numpy as np from matplotlib.colors import LogNorm, Normalize, SymLogNorm from yt.funcs import is_sequence, mylog from yt.units.unit_object import Unit # type: ignore from yt.units.yt_array import YTArray from yt.visualization.plot_container import ( BaseLinePlot, PlotDictionary, invalidate_plot, ) class LineBuffer: r""" LineBuffer(ds, start_point, end_point, npoints, label = None) This takes a data source and implements a protocol for generating a 'pixelized', fixed-resolution line buffer. In other words, LineBuffer takes a starting point, ending point, and number of sampling points and can subsequently generate YTArrays of field values along the sample points. Parameters ---------- ds : :class:`yt.data_objects.static_output.Dataset` This is the dataset object holding the data that can be sampled by the LineBuffer start_point : n-element list, tuple, ndarray, or YTArray Contains the coordinates of the first point for constructing the LineBuffer. Must contain n elements where n is the dimensionality of the dataset. end_point : n-element list, tuple, ndarray, or YTArray Contains the coordinates of the first point for constructing the LineBuffer. Must contain n elements where n is the dimensionality of the dataset. 
npoints : int How many points to sample between start_point and end_point Examples -------- >>> lb = yt.LineBuffer(ds, (0.25, 0, 0), (0.25, 1, 0), 100) >>> lb["all", "u"].max() 0.11562424257143075 dimensionless """ def __init__(self, ds, start_point, end_point, npoints, label=None): self.ds = ds self.start_point = _validate_point(start_point, ds, start=True) self.end_point = _validate_point(end_point, ds) self.npoints = npoints self.label = label self.data = {} def keys(self): return self.data.keys() def __setitem__(self, item, val): self.data[item] = val def __getitem__(self, item): if item in self.data: return self.data[item] mylog.info("Making a line buffer with %d points of %s", self.npoints, item) self.points, self.data[item] = self.ds.coordinates.pixelize_line( item, self.start_point, self.end_point, self.npoints ) return self.data[item] def __delitem__(self, item): del self.data[item] class LinePlotDictionary(PlotDictionary): def __init__(self, data_source): super().__init__(data_source) self.known_dimensions = {} def _sanitize_dimensions(self, item): field = self.data_source._determine_fields(item)[0] finfo = self.data_source.ds.field_info[field] dimensions = Unit( finfo.units, registry=self.data_source.ds.unit_registry ).dimensions if dimensions not in self.known_dimensions: self.known_dimensions[dimensions] = item return self.known_dimensions[dimensions] def __getitem__(self, item): ret_item = self._sanitize_dimensions(item) return super().__getitem__(ret_item) def __setitem__(self, item, value): ret_item = self._sanitize_dimensions(item) super().__setitem__(ret_item, value) def __contains__(self, item): ret_item = self._sanitize_dimensions(item) return super().__contains__(ret_item) class LinePlot(BaseLinePlot): r""" A class for constructing line plots Parameters ---------- ds : :class:`yt.data_objects.static_output.Dataset` This is the dataset object corresponding to the simulation output to be plotted. fields : string / tuple, or list of strings / tuples The name(s) of the field(s) to be plotted. start_point : n-element list, tuple, ndarray, or YTArray Contains the coordinates of the first point for constructing the line. Must contain n elements where n is the dimensionality of the dataset. end_point : n-element list, tuple, ndarray, or YTArray Contains the coordinates of the first point for constructing the line. Must contain n elements where n is the dimensionality of the dataset. npoints : int How many points to sample between start_point and end_point for constructing the line plot figure_size : int or two-element iterable of ints Size in inches of the image. Default: 5 (5x5) fontsize : int Font size for all text in the plot. Default: 14 field_labels : dictionary Keys should be the field names. 
Values should be latex-formattable strings used in the LinePlot legend Default: None Example ------- >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> plot = yt.LinePlot(ds, "density", [0, 0, 0], [1, 1, 1], 512) >>> plot.add_legend("density") >>> plot.set_x_unit("cm") >>> plot.set_unit("density", "kg/cm**3") >>> plot.save() """ _plot_dict_type = LinePlotDictionary _plot_type = "line_plot" _default_figure_size = (5.0, 5.0) _default_font_size = 14.0 def __init__( self, ds, fields, start_point, end_point, npoints, figure_size=None, fontsize: float | None = None, field_labels=None, ): """ Sets up figure and axes """ line = LineBuffer(ds, start_point, end_point, npoints, label=None) self.lines = [line] self._initialize_instance(self, ds, fields, figure_size, fontsize, field_labels) self._setup_plots() @classmethod def _initialize_instance( cls, obj, ds, fields, figure_size, fontsize, field_labels=None ): obj._x_unit = None obj._titles = {} data_source = ds.all_data() obj.fields = data_source._determine_fields(fields) obj.include_legend = defaultdict(bool) super(LinePlot, obj).__init__( data_source, figure_size=figure_size, fontsize=fontsize ) if field_labels is None: obj.field_labels = {} else: obj.field_labels = field_labels for f in obj.fields: if f not in obj.field_labels: obj.field_labels[f] = f[1] def _get_axrect(self): fontscale = self._font_properties._size / self.__class__._default_font_size top_buff_size = 0.35 * fontscale x_axis_size = 1.35 * fontscale y_axis_size = 0.7 * fontscale right_buff_size = 0.2 * fontscale if is_sequence(self.figure_size): figure_size = self.figure_size else: figure_size = (self.figure_size, self.figure_size) xbins = np.array([x_axis_size, figure_size[0], right_buff_size]) ybins = np.array([y_axis_size, figure_size[1], top_buff_size]) x_frac_widths = xbins / xbins.sum() y_frac_widths = ybins / ybins.sum() return ( x_frac_widths[0], y_frac_widths[0], x_frac_widths[1], y_frac_widths[1], ) @classmethod def from_lines( cls, ds, fields, lines, figure_size=None, font_size=None, field_labels=None ): """ A class method for constructing a line plot from multiple sampling lines Parameters ---------- ds : :class:`yt.data_objects.static_output.Dataset` This is the dataset object corresponding to the simulation output to be plotted. fields : field name or list of field names The name(s) of the field(s) to be plotted. lines : list of :class:`yt.visualization.line_plot.LineBuffer` instances The lines from which to sample data figure_size : int or two-element iterable of ints Size in inches of the image. Default: 5 (5x5) font_size : int Font size for all text in the plot. Default: 14 field_labels : dictionary Keys should be the field names. Values should be latex-formattable strings used in the LinePlot legend Default: None Example -------- >>> ds = yt.load( ... "SecondOrderTris/RZ_p_no_parts_do_nothing_bcs_cone_out.e", step=-1 ... ) >>> fields = [field for field in ds.field_list if field[0] == "all"] >>> lines = [ ... yt.LineBuffer(ds, [0.25, 0, 0], [0.25, 1, 0], 100, label="x = 0.25"), ... yt.LineBuffer(ds, [0.5, 0, 0], [0.5, 1, 0], 100, label="x = 0.5"), ... 
]
        >>> plot = yt.LinePlot.from_lines(ds, fields, lines)
        >>> plot.save()
        """
        obj = cls.__new__(cls)
        obj.lines = lines
        cls._initialize_instance(obj, ds, fields, figure_size, font_size, field_labels)
        obj._setup_plots()
        return obj

    def _setup_plots(self):
        if self._plot_valid:
            return
        for plot in self.plots.values():
            plot.axes.cla()
        for line in self.lines:
            dimensions_counter = defaultdict(int)
            for field in self.fields:
                finfo = self.ds.field_info[field]
                dimensions = Unit(
                    finfo.units, registry=self.ds.unit_registry
                ).dimensions
                dimensions_counter[dimensions] += 1
            for field in self.fields:
                # get plot instance
                plot = self._get_plot_instance(field)

                # calculate x and y
                x, y = self.ds.coordinates.pixelize_line(
                    field, line.start_point, line.end_point, line.npoints
                )

                # scale x and y to proper units
                if self._x_unit is None:
                    unit_x = x.units
                else:
                    unit_x = self._x_unit

                unit_y = plot.norm_handler.display_units

                x.convert_to_units(unit_x)
                y.convert_to_units(unit_y)

                # determine legend label
                str_seq = []
                str_seq.append(line.label)
                str_seq.append(self.field_labels[field])
                delim = "; "
                legend_label = delim.join(filter(None, str_seq))

                # apply plot to matplotlib axes
                plot.axes.plot(x, y, label=legend_label)

                # apply log transforms if requested
                norm = plot.norm_handler.get_norm(data=y)
                y_norm_type = type(norm)
                if y_norm_type is Normalize:
                    plot.axes.set_yscale("linear")
                elif y_norm_type is LogNorm:
                    plot.axes.set_yscale("log")
                elif y_norm_type is SymLogNorm:
                    plot.axes.set_yscale("symlog")
                else:
                    raise NotImplementedError(
                        f"LinePlot doesn't support y norm with type {type(norm)}"
                    )

                # set font properties
                plot._set_font_properties(self._font_properties, None)

                # set x and y axis labels
                axes_unit_labels = self._get_axes_unit_labels(unit_x, unit_y)

                if self._xlabel is not None:
                    x_label = self._xlabel
                else:
                    x_label = r"$\rm{Path\ Length" + axes_unit_labels[0] + "}$"

                if self._ylabel is not None:
                    y_label = self._ylabel
                else:
                    finfo = self.ds.field_info[field]
                    dimensions = Unit(
                        finfo.units, registry=self.ds.unit_registry
                    ).dimensions
                    if dimensions_counter[dimensions] > 1:
                        y_label = (
                            r"$\rm{Multiple\ Fields}$"
                            + r"$\rm{"
                            + axes_unit_labels[1]
                            + "}$"
                        )
                    else:
                        y_label = (
                            finfo.get_latex_display_name()
                            + r"$\rm{"
                            + axes_unit_labels[1]
                            + "}$"
                        )

                plot.axes.set_xlabel(x_label)
                plot.axes.set_ylabel(y_label)

                # apply title
                if field in self._titles:
                    plot.axes.set_title(self._titles[field])

                # apply legend
                dim_field = self.plots._sanitize_dimensions(field)
                if self.include_legend[dim_field]:
                    plot.axes.legend()

        self._plot_valid = True

    @invalidate_plot
    def annotate_legend(self, field):
        """
        Adds a legend to the `LinePlot` instance.
The `_sanitize_dimensions` call ensures that a legend label will be added for every field of a multi-field plot """ dim_field = self.plots._sanitize_dimensions(field) self.include_legend[dim_field] = True @invalidate_plot def set_x_unit(self, unit_name): """Set the unit to use along the x-axis Parameters ---------- unit_name: str The name of the unit to use for the x-axis unit """ self._x_unit = unit_name @invalidate_plot def set_unit(self, field, new_unit): """Set the unit used to plot the field Parameters ---------- field: str or field tuple The name of the field to set the units for new_unit: string or Unit object """ field = self.data_source._determine_fields(field)[0] pnh = self.plots[field].norm_handler pnh.display_units = new_unit @invalidate_plot def annotate_title(self, field, title): """Set the unit used to plot the field Parameters ---------- field: str or field tuple The name of the field to set the units for title: str The title to use for the plot """ self._titles[self.data_source._determine_fields(field)[0]] = title def _validate_point(point, ds, start=False): if not is_sequence(point): raise RuntimeError("Input point must be array-like") if not isinstance(point, YTArray): point = ds.arr(point, "code_length", dtype=np.float64) if len(point.shape) != 1: raise RuntimeError("Input point must be a 1D array") if point.shape[0] < ds.dimensionality: raise RuntimeError("Input point must have an element for each dimension") # need to pad to 3D elements to avoid issues later if point.shape[0] < 3: if start: val = 0 else: val = 1 point = np.append(point.d, [val] * (3 - ds.dimensionality)) * point.uq return point ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.411154 yt-4.4.0/yt/visualization/mapserver/0000755000175100001770000000000014714401715017125 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/mapserver/__init__.py0000644000175100001770000000000014714401662021225 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.411154 yt-4.4.0/yt/visualization/mapserver/html/0000755000175100001770000000000014714401715020071 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/mapserver/html/Leaflet.Coordinates-0.1.5.css0000644000175100001770000000124114714401662025066 0ustar00runnerdocker/* * From https://github.com/MrMufflon/Leaflet.Coordinates * * Fixed small issue about formatting by C. Cadiou (cphyc)) */ .leaflet-control-coordinates{background-color:#D8D8D8;background-color:rgba(255,255,255,.8);cursor:pointer}.leaflet-control-coordinates,.leaflet-control-coordinates .uiElement input{-webkit-border-radius:5px;-moz-border-radius:5px;border-radius:5px}.leaflet-control-coordinates .uiElement{margin:4px}.leaflet-control-coordinates .uiElement .labelFirst{margin-right:4px}.leaflet-control-coordinates .uiHidden{display:none}.leaflet-control-coordinates .uiElement.label{color:inherit;font-weight:inherit;font-size:inherit;padding:0;display:inherit} ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/mapserver/html/Leaflet.Coordinates-0.1.5.src.js0000644000175100001770000002204314714401662025503 0ustar00runnerdocker/* * From https://github.com/MrMufflon/Leaflet.Coordinates * * Fixed small issue about formatting by C. 
Cadiou (cphyc)) */ /* * L.Control.Coordinates is used for displaying current mouse coordinates on the map. */ L.Control.Coordinates = L.Control.extend({ options: { position: 'bottomright', //decimals used if not using DMS or labelFormatter functions decimals: 4, //decimalseperator used if not using DMS or labelFormatter functions decimalSeperator: ".", //label templates for usage if no labelFormatter function is defined labelTemplateLat: "Lat: {y}", labelTemplateLng: "Lng: {x}", //label formatter functions labelFormatterLat: undefined, labelFormatterLng: undefined, //switch on/off input fields on click enableUserInput: true, //use Degree-Minute-Second useDMS: false, //if true lat-lng instead of lng-lat label ordering is used useLatLngOrder: false, //if true user given coordinates are centered directly centerUserCoordinates: false, //leaflet marker type markerType: L.marker, //leaflet marker properties markerProps: {} }, onAdd: function(map) { this._map = map; var className = 'leaflet-control-coordinates', container = this._container = L.DomUtil.create('div', className), options = this.options; //label containers this._labelcontainer = L.DomUtil.create("div", "uiElement label", container); this._label = L.DomUtil.create("span", "labelFirst", this._labelcontainer); //input containers this._inputcontainer = L.DomUtil.create("div", "uiElement input uiHidden", container); var xSpan, ySpan; if (options.useLatLngOrder) { ySpan = L.DomUtil.create("span", "", this._inputcontainer); this._inputY = this._createInput("inputY", this._inputcontainer); xSpan = L.DomUtil.create("span", "", this._inputcontainer); this._inputX = this._createInput("inputX", this._inputcontainer); } else { xSpan = L.DomUtil.create("span", "", this._inputcontainer); this._inputX = this._createInput("inputX", this._inputcontainer); ySpan = L.DomUtil.create("span", "", this._inputcontainer); this._inputY = this._createInput("inputY", this._inputcontainer); } xSpan.innerHTML = options.labelTemplateLng.replace("{x}", ""); ySpan.innerHTML = options.labelTemplateLat.replace("{y}", ""); L.DomEvent.on(this._inputX, 'keyup', this._handleKeypress, this); L.DomEvent.on(this._inputY, 'keyup', this._handleKeypress, this); //connect to mouseevents map.on("mousemove", this._update, this); map.on('dragstart', this.collapse, this); map.whenReady(this._update, this); this._showsCoordinates = true; //whether or not to show inputs on click if (options.enableUserInput) { L.DomEvent.addListener(this._container, "click", this._switchUI, this); } return container; }, /** * Creates an input HTML element in given container with given classname */ _createInput: function(classname, container) { var input = L.DomUtil.create("input", classname, container); input.type = "text"; L.DomEvent.disableClickPropagation(input); return input; }, _clearMarker: function() { this._map.removeLayer(this._marker); }, /** * Called onkeyup of input fields */ _handleKeypress: function(e) { switch (e.keyCode) { case 27: //Esc this.collapse(); break; case 13: //Enter this._handleSubmit(); this.collapse(); break; default: //All keys this._handleSubmit(); break; } }, /** * Called on each keyup except ESC */ _handleSubmit: function() { var x = L.NumberFormatter.createValidNumber(this._inputX.value, this.options.decimalSeperator); var y = L.NumberFormatter.createValidNumber(this._inputY.value, this.options.decimalSeperator); if (x !== undefined && y !== undefined) { var marker = this._marker; if (!marker) { marker = this._marker = this._createNewMarker(); marker.on("click", 
this._clearMarker, this); } var ll = new L.LatLng(y, x); marker.setLatLng(ll); marker.addTo(this._map); if (this.options.centerUserCoordinates) { this._map.setView(ll, this._map.getZoom()); } } }, /** * Shows inputs fields */ expand: function() { this._showsCoordinates = false; this._map.off("mousemove", this._update, this); L.DomEvent.addListener(this._container, "mousemove", L.DomEvent.stop); L.DomEvent.removeListener(this._container, "click", this._switchUI, this); L.DomUtil.addClass(this._labelcontainer, "uiHidden"); L.DomUtil.removeClass(this._inputcontainer, "uiHidden"); }, /** * Creates the label according to given options and formatters */ _createCoordinateLabel: function(ll) { var opts = this.options, x, y; if (opts.customLabelFcn) { return opts.customLabelFcn(ll, opts); } if (opts.labelFormatterLng) { x = opts.labelFormatterLng(ll.lng); } else { x = L.Util.template(opts.labelTemplateLng, { x: this._getNumber(ll.lng, opts) }); } if (opts.labelFormatterLat) { y = opts.labelFormatterLat(ll.lat); } else { y = L.Util.template(opts.labelTemplateLat, { y: this._getNumber(ll.lat, opts) }); } if (opts.useLatLngOrder) { return y + " " + x; } return x + " " + y; }, /** * Returns a Number according to options (DMS or decimal) */ _getNumber: function(n, opts) { var res; if (opts.useDMS) { res = L.NumberFormatter.toDMS(n); } else { res = L.NumberFormatter.round(n, opts.decimals, opts.decimalSeperator); } return res; }, /** * Shows coordinate labels after user input has ended. Also * displays a marker with popup at the last input position. */ collapse: function() { if (!this._showsCoordinates) { this._map.on("mousemove", this._update, this); this._showsCoordinates = true; var opts = this.options; L.DomEvent.addListener(this._container, "click", this._switchUI, this); L.DomEvent.removeListener(this._container, "mousemove", L.DomEvent.stop); L.DomUtil.addClass(this._inputcontainer, "uiHidden"); L.DomUtil.removeClass(this._labelcontainer, "uiHidden"); if (this._marker) { var m = this._createNewMarker(), ll = this._marker.getLatLng(); m.setLatLng(ll); var container = L.DomUtil.create("div", ""); var label = L.DomUtil.create("div", "", container); label.innerHTML = this._ordinateLabel(ll); var close = L.DomUtil.create("a", "", container); close.innerHTML = "Remove"; close.href = "#"; var stop = L.DomEvent.stopPropagation; L.DomEvent .on(close, 'click', stop) .on(close, 'mousedown', stop) .on(close, 'dblclick', stop) .on(close, 'click', L.DomEvent.preventDefault) .on(close, 'click', function() { this._map.removeLayer(m); }, this); m.bindPopup(container); m.addTo(this._map); this._map.removeLayer(this._marker); this._marker = null; } } }, /** * Click callback for UI */ _switchUI: function(evt) { L.DomEvent.stop(evt); L.DomEvent.stopPropagation(evt); L.DomEvent.preventDefault(evt); if (this._showsCoordinates) { //show textfields this.expand(); } else { //show coordinates this.collapse(); } }, onRemove: function(map) { map.off("mousemove", this._update, this); }, /** * Mousemove callback function updating labels and input elements */ _update: function(evt) { var pos = evt.latlng, opts = this.options; if (pos) { pos = pos.wrap(); this._currentPos = pos; this._inputY.value = L.NumberFormatter.round(pos.lat, opts.decimals, opts.decimalSeperator); this._inputX.value = L.NumberFormatter.round(pos.lng, opts.decimals, opts.decimalSeperator); this._label.innerHTML = this._createCoordinateLabel(pos); } }, _createNewMarker: function() { return this.options.markerType(null, this.options.markerProps); } }); 
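/*
 * Illustrative usage sketch (not part of the upstream plugin; the map
 * variable and the option values below are hypothetical). Once this file
 * is loaded, the control can be attached through the L.control.coordinates
 * factory registered just below, which is how yt's map.js uses it:
 *
 *   var map = L.map('map', { crs: L.CRS.Simple });
 *   L.control.coordinates({
 *       position: "bottomleft",  // where to dock the readout
 *       decimals: 2,             // decimal places shown
 *       enableUserInput: false   // disable the click-to-edit input fields
 *   }).addTo(map);
 */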
//constructor registration L.control.coordinates = function(options) { return new L.Control.Coordinates(options); }; //map init hook L.Map.mergeOptions({ coordinateControl: false }); L.Map.addInitHook(function() { if (this.options.coordinateControl) { this.coordinateControl = new L.Control.Coordinates(); this.addControl(this.coordinateControl); } }); L.NumberFormatter = { round: function(num, dec, sep) { var res = L.Util.formatNum(num, dec) + "", numbers = res.split("."); if (numbers[1]) { var d = dec - numbers[1].length; for (; d > 0; d--) { numbers[1] += "0"; } res = numbers.join(sep || "."); } return res; }, toDMS: function(deg) { var d = Math.floor(Math.abs(deg)); var minfloat = (Math.abs(deg) - d) * 60; var m = Math.floor(minfloat); var secfloat = (minfloat - m) * 60; var s = Math.round(secfloat); if (s == 60) { m++; s = "00"; } if (m == 60) { d++; m = "00"; } if (s < 10) { s = "0" + s; } if (m < 10) { m = "0" + m; } var dir = ""; if (deg < 0) { dir = "-"; } return ("" + dir + d + "° " + m + "' " + s + "''"); }, createValidNumber: function(num, sep) { if (num && num.length > 0) { var numbers = num.split(sep || "."); try { var numRes = Number(numbers.join(".")); if (isNaN(numRes)) { return undefined; } return numRes; } catch (e) { return undefined; } } return undefined; } }; ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/mapserver/html/__init__.py0000644000175100001770000000000014714401662022171 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/mapserver/html/map.js0000644000175100001770000000641214714401662021210 0ustar00runnerdockerfunction setFullScreen () { $("#map").width($(window).width()); $("#map").height($(window).height()); } var SearchWidget = function () { var obj = { filter: function (searchStrs) { console.log("filtering on " + searchStrs); this._selector.each(function(i, el) { var val = $(el).text(); // Search var matched = searchStrs.map((str) => { return val.indexOf(str) !== -1; }).reduce((reduced, result) => { return reduced && result; }, true); if (matched) { $(el).show(); } else { $(el).hide(); } }); }, init: function () { var self = this; var searchElement = $('
'); var selector = $('.leaflet-control-layers-list label'); this._selector = selector; // Add input in the DOM selector.first().parent().prepend(searchElement); // Listen to keyboard input $('#filter input').keyup(function(ev) { const val = $(this).val(); self.filter(val.split(" ")); }); }, _selector: null }; obj.init(); return obj; }; $(document).ready(function() { // Initialize to full screen setFullScreen(); // initialize the map on the "map" div with a given center and zoom $.getJSON('/list', function(data) { var layers = [], layer_groups = [], default_layer = [null]; var layer_group = {}; // Loop over field types for (var type in data['data']) { var dtype = data['data'][type]; // Loop over fields of given type for (var field in dtype) { var loc = dtype[field] var field = loc[0], active = loc[1], url = 'map/' + field[0] + ',' + field[1] + '/{z}/{x}/{y}.png'; // Create new layer var layer = new L.TileLayer(url, {id: 'MapID', maxzoom: 18}); // Create readable name human_name = field.join(' '); // Store it layers.push(layer); layer_group[human_name] = layer; if (active) { default_layer[0] = layer; } } } var map = new L.Map('map', { crs: L.CRS.Simple, center: new L.LatLng(-128, -128), zoom: 4, layers: default_layer }); L.control.layers(layer_group).addTo(map); var unit = data['unit'], px2unit = data['px2unit'], decimals = 2; var fmt = (n) => { return L.NumberFormatter.round(n, decimals, ".") }; L.control.coordinates({ position: "bottomleft", //optional default "bootomright" decimals: 2, //optional default 4 decimalSeperator: ".", //optional default "." enableUserInput: false, //optional default true useDMS: false, //optional default false useLatLngOrder: false, //ordering of labels, default false-> lng-lat markerType: L.marker, //optional default L.marker labelFormatterLng : (lng) => { return fmt((lng+128)*px2unit) + " " + unit }, //optional default none, labelFormatterLat : (lat) => { return fmt((lat+128)*px2unit) + " " + unit }, //optional default none }).addTo(map); // Search widget var search = SearchWidget(); }); // Resize map automatically $(window).resize(setFullScreen); }); ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/mapserver/html/map_index.html0000644000175100001770000000154414714401662022730 0ustar00runnerdocker
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/mapserver/pannable_map.py0000644000175100001770000001211414714401662022114 0ustar00runnerdockerimport os from functools import wraps import bottle import numpy as np from yt.fields.derived_field import ValidateSpatial from yt.utilities.lib.misc_utilities import get_color_bounds from yt.utilities.png_writer import write_png_to_string from yt.visualization.fixed_resolution import FixedResolutionBuffer from yt.visualization.image_writer import apply_colormap local_dir = os.path.dirname(__file__) def exc_writeout(f): import traceback @wraps(f) def func(*args, **kwargs): try: rv = f(*args, **kwargs) return rv except Exception: traceback.print_exc(None, open("temp.exc", "w")) raise return func class PannableMapServer: _widget_name = "pannable_map" def __init__(self, data, field, takelog, cmap, route_prefix=""): self.data = data self.ds = data.ds self.field = field self.cmap = cmap bottle.route(f"{route_prefix}/map/:field/:L/:x/:y.png")(self.map) bottle.route(f"{route_prefix}/map/:field/:L/:x/:y.png")(self.map) bottle.route(f"{route_prefix}/")(self.index) bottle.route(f"{route_prefix}/:field")(self.index) bottle.route(f"{route_prefix}/index.html")(self.index) bottle.route(f"{route_prefix}/list", "GET")(self.list_fields) # This is a double-check, since we do not always mandate this for # slices: self.data[self.field] = self.data[self.field].astype("float64", copy=False) bottle.route(f"{route_prefix}/static/:path", "GET")(self.static) self.takelog = takelog self._lock = False for unit in ["Gpc", "Mpc", "kpc", "pc"]: v = self.ds.domain_width[0].in_units(unit).value if v > 1: break self.unit = unit self.px2unit = self.ds.domain_width[0].in_units(unit).value / 256 def lock(self): import time while self._lock: time.sleep(0.01) self._lock = True def unlock(self): self._lock = False def map(self, field, L, x, y): if "," in field: field = tuple(field.split(",")) cmap = self.cmap dd = 1.0 / (2.0 ** (int(L))) relx = int(x) * dd rely = int(y) * dd DW = self.ds.domain_right_edge - self.ds.domain_left_edge xl = self.ds.domain_left_edge[0] + relx * DW[0] yl = self.ds.domain_left_edge[1] + rely * DW[1] xr = xl + dd * DW[0] yr = yl + dd * DW[1] try: self.lock() w = 256 # pixels data = self.data[field] frb = FixedResolutionBuffer(self.data, (xl, xr, yl, yr), (w, w)) cmi, cma = get_color_bounds( self.data["px"], self.data["py"], self.data["pdx"], self.data["pdy"], data, self.ds.domain_left_edge[0], self.ds.domain_right_edge[0], self.ds.domain_left_edge[1], self.ds.domain_right_edge[1], dd * DW[0] / (64 * 256), dd * DW[0], ) finally: self.unlock() if self.takelog: cmi = np.log10(cmi) cma = np.log10(cma) to_plot = apply_colormap( np.log10(frb[field]), color_bounds=(cmi, cma), cmap_name=cmap ) else: to_plot = apply_colormap( frb[field], color_bounds=(cmi, cma), cmap_name=cmap ) rv = write_png_to_string(to_plot) return rv def index(self, field=None): if field is not None: self.field = field return bottle.static_file( "map_index.html", root=os.path.join(local_dir, "html") ) def static(self, path): if path[-4:].lower() in (".png", ".gif", ".jpg"): bottle.response.headers["Content-Type"] = f"image/{path[-3:].lower()}" elif path[-4:].lower() == ".css": bottle.response.headers["Content-Type"] = "text/css" elif path[-3:].lower() == ".js": bottle.response.headers["Content-Type"] = "text/javascript" full_path = os.path.join(os.path.join(local_dir, "html"), path) return open(full_path).read() def 
list_fields(self): d = {} # Add fluid fields (only gas for now) for ftype in self.ds.fluid_types: d[ftype] = [] for f in self.ds.derived_field_list: if f[0] != ftype: continue # Discard fields which need ghost zones for now df = self.ds.field_info[f] if any(isinstance(v, ValidateSpatial) for v in df.validators): continue # Discard cutting plane fields if "cutting" in f[1]: continue active = f[1] == self.field d[ftype].append((f, active)) print(self.px2unit, self.unit) return { "data": d, "px2unit": self.px2unit, "unit": self.unit, "active": self.field, } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/particle_plots.py0000644000175100001770000010401114714401662020515 0ustar00runnerdockerimport warnings from typing import Union import numpy as np from yt._maintenance.deprecation import issue_deprecation_warning from yt.data_objects.profiles import create_profile from yt.data_objects.static_output import Dataset from yt.funcs import fix_axis, iter_fields, mylog from yt.units.yt_array import YTArray from yt.utilities.orientation import Orientation from yt.visualization.fixed_resolution import ParticleImageBuffer from yt.visualization.profile_plotter import PhasePlot from .plot_window import ( NormalPlot, PWViewerMPL, get_axes_unit, get_oblique_window_parameters, get_window_parameters, ) class ParticleDummyDataSource: _type_name = "Particle" _dimensionality = 2 _con_args = ("center", "axis", "width", "fields", "weight_field") _tds_attrs = () _key_fields: list[str] = [] def __init__( self, center, ds, width, fields, dd, *, weight_field=None, field_parameters=None, deposition="ngp", density=False, ): self.center = center self.ds = ds self.width = width self.dd = dd if weight_field is not None: weight_field = self._determine_fields(weight_field)[0] self.weight_field = weight_field self.deposition = deposition self.density = density if field_parameters is None: self.field_parameters = {} else: self.field_parameters = field_parameters fields = self._determine_fields(fields) self.fields = fields def _determine_fields(self, *args): return self.dd._determine_fields(*args) def get_field_parameter(self, name, default=None): """ This is typically only used by derived field functions, but it returns parameters used to generate fields. """ if name in self.field_parameters: return self.field_parameters[name] else: return default class ParticleAxisAlignedDummyDataSource(ParticleDummyDataSource): def __init__( self, center, ds, axis, width, fields, *, weight_field=None, field_parameters=None, data_source=None, deposition="ngp", density=False, ): self.axis = axis LE = center - 0.5 * YTArray(width) RE = center + 0.5 * YTArray(width) for ax in range(3): if not ds.periodicity[ax]: LE[ax] = max(LE[ax], ds.domain_left_edge[ax]) RE[ax] = min(RE[ax], ds.domain_right_edge[ax]) dd = ds.region( center, LE, RE, fields, field_parameters=field_parameters, data_source=data_source, ) super().__init__( center, ds, width, fields, dd, weight_field=weight_field, field_parameters=field_parameters, deposition=deposition, density=density, ) class ParticleOffAxisDummyDataSource(ParticleDummyDataSource): def __init__( self, center, ds, normal_vector, width, fields, *, weight_field=None, field_parameters=None, data_source=None, deposition="ngp", density=False, north_vector=None, ): self.axis = None # always true for oblique data objects normal = np.array(normal_vector) normal = normal / np.linalg.norm(normal) # If north_vector is None, we set the default here. 
# This is chosen so that if normal_vector is one of the # cartesian coordinate axes, the projection will match # the corresponding on-axis projection. if north_vector is None: vecs = np.identity(3) t = np.cross(vecs, normal).sum(axis=1) ax = t.argmax() east_vector = np.cross(vecs[ax, :], normal).ravel() north = np.cross(normal, east_vector).ravel() else: north = np.array(north_vector) north = north / np.linalg.norm(north) self.normal_vector = normal self.north_vector = north if data_source is None: dd = ds.all_data() else: dd = data_source self.orienter = Orientation(normal_vector, north_vector=north_vector) super().__init__( center, ds, width, fields, dd, weight_field=weight_field, field_parameters=field_parameters, deposition=deposition, density=density, ) class ParticleProjectionPlot(NormalPlot): r"""Creates a particle plot from a dataset Given a ds object, a normal to project along, and a field name string, this will return a PWViewerMPL object containing the plot. The plot can be updated using one of the many helper functions defined in PlotWindow. Parameters ---------- ds : `Dataset` This is the dataset object corresponding to the simulation output to be plotted. normal : int, str, or 3-element sequence of floats This specifies the normal vector to the projection. Valid int values are 0, 1 and 2. Corresponding str values depend on the geometry of the dataset and are generally given by `ds.coordinates.axis_order`. E.g. in cartesian they are 'x', 'y' and 'z'. An arbitrary normal vector may be specified as a 3-element sequence of floats. fields : string, list or None If a string or list, the name of the particle field(s) to be used on the colorbar. The color shown will correspond to the sum of the given field along the line of sight. If None, the particle positions will be indicated using a fixed color, instead. Default is None. color : 'b', 'g', 'r', 'c', 'm', 'y', 'k', or 'w' One the matplotlib-recognized color strings. The color that will indicate the particle locations on the mesh. This argument is ignored if z_fields is not None. Default is 'b'. center : 'center', 'c', 'left', 'l', 'right', 'r', id of a global extremum, or array-like The coordinate of the selection's center. Defaults to the 'center', i.e. center of the domain. Centering on the min or max of a field is supported by passing a tuple such as ('min', ('gas', 'density')) or ('max', ('gas', 'temperature'). A single string may also be used (e.g. "min_density" or "max_temperature"), though it's not as flexible and does not allow to select an exact field/particle type. With this syntax, the first field matching the provided name is selected. 'max' or 'm' can be used as a shortcut for ('max', ('gas', 'density')) 'min' can be used as a shortcut for ('min', ('gas', 'density')) One can also select an exact point as a 3 element coordinate sequence, e.g. [0.5, 0.5, 0] Units can be specified by passing in *center* as a tuple containing a 3-element coordinate sequence and string unit name, e.g. ([0, 0.5, 0.5], "cm"), or by passing in a YTArray. Code units are assumed if unspecified. The domain edges along the selected *axis* can be selected with 'left'/'l' and 'right'/'r' respectively. width : tuple or a float. Width can have four different formats to support windows with variable x and y widths. 
They are: ================================== ======================= format example ================================== ======================= (float, string) (10,'kpc') ((float, string), (float, string)) ((10,'kpc'),(15,'kpc')) float 0.2 (float, float) (0.2, 0.3) ================================== ======================= For example, (10, 'kpc') requests a plot window that is 10 kiloparsecs wide in the x and y directions, ((10,'kpc'),(15,'kpc')) requests a window that is 10 kiloparsecs wide along the x-axis and 15 kiloparsecs wide along the y-axis. In the other two examples, code units are assumed, for example (0.2, 0.3) requests a plot that has an x width of 0.2 and a y width of 0.3 in code units. If units are provided the resulting plot axis labels will use the supplied units. depth : A tuple or a float A tuple containing the depth to project through and the string key of the unit: (width, 'unit'). If set to a float, code units are assumed. Defaults to the entire domain. weight_field : string The name of the weighting field. Set to None for no weight. If given, the plot will show a weighted average along the line of sight of the fields given in the ``fields`` argument. axes_unit : A string The name of the unit for the tick labels on the x and y axes. Defaults to None, which automatically picks an appropriate unit. If axes_unit is '1', 'u', or 'unitary', it will not display the units, and only show the axes name. origin : string or length 1, 2, or 3 sequence of strings The location of the origin of the plot coordinate system. This is represented by '-' separated string or a tuple of strings. In the first index the y-location is given by 'lower', 'upper', or 'center'. The second index is the x-location, given as 'left', 'right', or 'center'. Finally, whether the origin is applied in 'domain' space, plot 'window' space or 'native' simulation coordinate system is given. For example, both 'upper-right-domain' and ['upper', 'right', 'domain'] both place the origin in the upper right hand corner of domain space. If x or y are not given, a value is inferred. For instance, 'left-domain' corresponds to the lower-left hand corner of the simulation domain, 'center-domain' corresponds to the center of the simulation domain, or 'center-window' for the center of the plot window. Further examples: ================================== ============================ format example ================================== ============================ '{space}' 'domain' '{xloc}-{space}' 'left-window' '{yloc}-{space}' 'upper-domain' '{yloc}-{xloc}-{space}' 'lower-right-window' ('{space}',) ('window',) ('{xloc}', '{space}') ('right', 'domain') ('{yloc}', '{space}') ('lower', 'window') ('{yloc}', '{xloc}', '{space}') ('lower', 'right', 'window') ================================== ============================ fontsize : integer The size of the fonts for the axis, colorbar, and tick labels. field_parameters : dictionary A dictionary of field parameters than can be accessed by derived fields. window_size : float The size of the window on the longest axis (in units of inches), including the margins but not the colorbar. aspect : float The aspect ratio of the plot. Set to None for 1. data_source : YTSelectionContainer object The object to be used for data selection. Defaults to a region covering the entire simulation. deposition : string Controls the order of the interpolation of the particles onto the mesh. "ngp" is 0th-order "nearest-grid-point" method (the default), "cic" is 1st-order "cloud-in-cell". 
density : boolean If True, the quantity to be projected will be divided by the area of the cells, to make a projected density of the quantity. The plot name and units will also reflect this. Default: False north_vector : a sequence of floats A vector defining the 'up' direction in off-axis particle projection plots; not used if the plot is on-axis. This option sets the orientation of the projected plane. If not set, an arbitrary grid-aligned north-vector is chosen. Examples -------- This will save an image to the file 'galaxy0030_Particle_z_particle_mass.png' >>> from yt import load >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> p = yt.ParticleProjectionPlot(ds, 2, "particle_mass") >>> p.save() """ # ignoring type check here, because mypy doesn't allow __new__ methods to # return instances of subclasses. The design we use here is however based # on the pathlib.Path class from the standard library # https://github.com/python/mypy/issues/1020 def __new__( # type: ignore cls, ds, normal=None, *args, axis=None, **kwargs ) -> Union["AxisAlignedParticleProjectionPlot", "OffAxisParticleProjectionPlot"]: # TODO: when axis' deprecation expires, # remove default value for normal normal = cls._handle_normalaxis_parameters(normal=normal, axis=axis) if cls is ParticleProjectionPlot: normal = cls.sanitize_normal_vector(ds, normal) if isinstance(normal, str): cls = AxisAlignedParticleProjectionPlot else: cls = OffAxisParticleProjectionPlot self = object.__new__(cls) return self # type: ignore [return-value] @staticmethod def _handle_normalaxis_parameters(*, normal, axis) -> None: # TODO: when axis' deprecation expires, # remove this method entirely if axis is not None: issue_deprecation_warning( "Argument 'axis' is a deprecated alias for 'normal'.", since="4.2", stacklevel=4, ) if normal is not None: raise TypeError("Received incompatible arguments 'axis' and 'normal'") normal = axis if normal is None: raise TypeError("missing required positional argument: 'normal'") return normal class AxisAlignedParticleProjectionPlot(ParticleProjectionPlot, PWViewerMPL): _plot_type = "Particle" _frb_generator = ParticleImageBuffer def __init__( self, ds, normal=None, fields=None, color="b", center="center", width=None, depth=(1, "1"), weight_field=None, axes_unit=None, origin="center-window", fontsize=18, field_parameters=None, window_size=8.0, aspect=None, data_source=None, deposition="ngp", density=False, *, north_vector=None, axis=None, ): if north_vector is not None: # this kwarg exists only for symmetry reasons with OffAxisSlicePlot mylog.warning( "Ignoring 'north_vector' keyword as it is ill-defined for " "an AxisAlignedParticleProjectionPlot object." ) del north_vector normal = self._handle_normalaxis_parameters(normal=normal, axis=axis) # this will handle time series data and controllers ts = self._initialize_dataset(ds) self.ts = ts ds = self.ds = ts[0] normal = self.sanitize_normal_vector(ds, normal) if field_parameters is None: field_parameters = {} self.set_axes_unit(axes_unit or get_axes_unit(width, ds)) # if no fields are passed in, we simply mark the x and # y fields using a given color. Use the 'particle_ones' # field to do this. We also turn off the colorbar in # this case. 
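        # (For reference: ("all", "particle_ones") is a yt derived field that
        # is equal to 1 for every particle, so the deposited, self-weighted
        # image is constant; it can then be drawn in a single splat color,
        # with the colorbar hidden, rather than through a colormap.)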
use_cbar = True splat_color = None if fields is None: fields = [("all", "particle_ones")] weight_field = ("all", "particle_ones") use_cbar = False splat_color = color axis = fix_axis(normal, ds) (bounds, center, display_center) = get_window_parameters( axis, center, width, ds ) x_coord = ds.coordinates.x_axis[axis] y_coord = ds.coordinates.y_axis[axis] depth = ds.coordinates.sanitize_depth(depth) width = np.zeros_like(center) width[x_coord] = bounds[1] - bounds[0] width[y_coord] = bounds[3] - bounds[2] width[axis] = depth[0].in_units(width[x_coord].units) self.projected = weight_field is None ParticleSource = ParticleAxisAlignedDummyDataSource( center, ds, axis, width, fields, weight_field=weight_field, field_parameters=field_parameters, data_source=data_source, deposition=deposition, density=density, ) PWViewerMPL.__init__( self, ParticleSource, bounds, origin=origin, fontsize=fontsize, fields=fields, window_size=window_size, aspect=aspect, splat_color=splat_color, geometry=ds.geometry, periodic=True, oblique=False, ) if not use_cbar: self.hide_colorbar() class OffAxisParticleProjectionPlot(ParticleProjectionPlot, PWViewerMPL): _plot_type = "Particle" _frb_generator = ParticleImageBuffer def __init__( self, ds, normal=None, fields=None, color="b", center="center", width=None, depth=(1, "1"), weight_field=None, axes_unit=None, origin="center-window", fontsize=18, field_parameters=None, window_size=8.0, aspect=None, data_source=None, deposition="ngp", density=False, *, north_vector=None, axis=None, ): if data_source is not None: warnings.warn( "The 'data_source' argument has no effect for " "off-axis particle projections (not implemented)", stacklevel=2, ) del data_source if origin != "center-window": warnings.warn( "The 'origin' argument is ignored for off-axis " "particle projections, it is always 'center-window'", stacklevel=2, ) del origin normal = self._handle_normalaxis_parameters(normal=normal, axis=axis) # this will handle time series data and controllers ts = self._initialize_dataset(ds) self.ts = ts ds = self.ds = ts[0] normal = self.sanitize_normal_vector(ds, normal) if field_parameters is None: field_parameters = {} self.set_axes_unit(axes_unit or get_axes_unit(width, ds)) # if no fields are passed in, we simply mark the x and # y fields using a given color. Use the 'particle_ones' # field to do this. We also turn off the colorbar in # this case. use_cbar = True splat_color = None if fields is None: fields = [("all", "particle_ones")] weight_field = ("all", "particle_ones") use_cbar = False splat_color = color (bounds, center_rot) = get_oblique_window_parameters( normal, center, width, ds, depth=depth ) width = ds.coordinates.sanitize_width(normal, width, depth) self.projected = weight_field is None ParticleSource = ParticleOffAxisDummyDataSource( center_rot, ds, normal, width, fields, weight_field=weight_field, field_parameters=field_parameters, data_source=None, deposition=deposition, density=density, north_vector=north_vector, ) PWViewerMPL.__init__( self, ParticleSource, bounds, origin="center-window", fontsize=fontsize, fields=fields, window_size=window_size, aspect=aspect, splat_color=splat_color, geometry=ds.geometry, periodic=False, oblique=True, ) if not use_cbar: self.hide_colorbar() class ParticlePhasePlot(PhasePlot): r""" Create a 2d particle phase plot from a data source or from a `yt.data_objects.profiles.ParticleProfile` object. 
Given a data object (all_data, region, sphere, etc.), an x field, y field, and z field (or fields), this will create a particle plot by depositing the particles onto a two-dimensional mesh, using either nearest grid point or cloud-in-cell deposition. Parameters ---------- data_source : YTSelectionContainer or Dataset The data object to be profiled, such as all_data, region, or sphere. If data_source is a Dataset, data_source.all_data() will be used. x_field : str The x field for the mesh. y_field : str The y field for the mesh. z_fields : None, str, or list If None, particles will be splatted onto the mesh, but no colormap will be used. If str or list, the name of the field or fields to be displayed on the colorbar. The displayed values will correspond to the sum of the field or fields along the line of sight. Default: None. color : 'b', 'g', 'r', 'c', 'm', 'y', 'k', or 'w' One the matplotlib-recognized color strings. The color that will indicate the particle locations on the mesh. This argument is ignored if z_fields is not None. Default : 'b' x_bins : int The number of bins in x field for the mesh. Default: 800. y_bins : int The number of bins in y field for the mesh. Default: 800. weight_field : str The field to weight by. If given, the plot will show a weighted average along the line of sight of the fields given in the ``z_fields`` argument. Default: None. deposition : str Either 'ngp' or 'cic'. Controls what type of interpolation will be used to deposit the particle z_fields onto the mesh. Default: 'ngp' fontsize: int Font size for all text in the plot. Default: 18. figure_size : int Size in inches of the image. Default: 8 (8x8) shading : str This argument is directly passed down to matplotlib.axes.Axes.pcolormesh see https://matplotlib.org/3.3.1/gallery/images_contours_and_fields/pcolormesh_grids.html#sphx-glr-gallery-images-contours-and-fields-pcolormesh-grids-py # noqa Default: 'nearest' Examples -------- >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> ad = ds.all_data() >>> plot = ParticlePhasePlot( ... ad, ... "particle_position_x", ... "particle_position_y", ... ["particle_mass"], ... x_bins=800, ... y_bins=800, ... ) >>> plot.save() >>> # Change plot properties. >>> plot.set_log("particle_mass", True) >>> plot.set_unit("particle_position_x", "Mpc") >>> plot.set_unit("particle_velocity_z", "km/s") >>> plot.set_unit("particle_mass", "Msun") """ _plot_type = "ParticlePhase" def __init__( self, data_source, x_field, y_field, z_fields=None, color="b", x_bins=800, y_bins=800, weight_field=None, deposition="ngp", fontsize=18, figure_size=8.0, shading="nearest", ): if isinstance(data_source, Dataset): data_source = data_source.all_data() # if no z_fields are passed in, use a constant color if z_fields is None: self.use_cbar = False self.splat_color = color z_fields = [("all", "particle_ones")] profile = create_profile( data_source, [x_field, y_field], list(iter_fields(z_fields)), n_bins=[x_bins, y_bins], weight_field=weight_field, deposition=deposition, ) type(self)._initialize_instance( self, data_source, profile, fontsize, figure_size, shading ) def ParticlePlot(ds, x_field, y_field, z_fields=None, color="b", *args, **kwargs): r""" A factory function for :class:`yt.visualization.particle_plots.ParticleProjectionPlot` and :class:`yt.visualization.profile_plotter.ParticlePhasePlot` objects. This essentially allows for a single entry point to both types of particle plots, the distinction being determined by the fields passed in. 
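    For instance (an illustrative sketch; ``ds`` is the sample dataset
    loaded in the Examples section below):

    >>> # x/y particle positions span a valid spatial plane, so this call
    >>> # dispatches to ParticleProjectionPlot
    >>> p = yt.ParticlePlot(ds, "particle_position_x", "particle_position_y")
    >>> # position versus velocity is not a spatial plane, so this call
    >>> # dispatches to ParticlePhasePlot
    >>> p = yt.ParticlePlot(ds, "particle_position_x", "particle_velocity_z")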
    If the x_field and y_field combination corresponds to a valid, right-handed
    spatial plot, a ``ParticleProjectionPlot`` will be returned. This plot
    object can be updated using one of the many helper functions defined in
    ``PlotWindow``.

    If the x_field and y_field combination does not correspond to a valid
    ``ParticleProjectionPlot``, then a ``ParticlePhasePlot`` will be returned
    instead. This object can be modified by its own set of helper functions
    defined in PhasePlot. We note below which arguments are only accepted by
    ``ParticleProjectionPlot`` and which arguments are only accepted by
    ``ParticlePhasePlot``.

    Parameters
    ----------
    ds : :class:`yt.data_objects.static_output.Dataset`
        This is the dataset object corresponding to the simulation output to
        be plotted.
    x_field : string
        This is the particle field that will be plotted on the x-axis.
    y_field : string
        This is the particle field that will be plotted on the y-axis.
    z_fields : string, list, or None.
        If None, particles will be splatted onto the plot, but no colormap
        will be used. The particle color will instead be determined by the
        'color' argument. If str or list, the name of the field or fields to
        be displayed on the colorbar. Default: None.
    color : 'b', 'g', 'r', 'c', 'm', 'y', 'k', or 'w'
        One of the matplotlib-recognized color strings. The color that will
        indicate the particle locations on the plot. This argument is ignored
        if z_fields is not None. Default is 'b'.
    weight_field : string
        The name of the weighting field. Set to None for no weight.
    fontsize : integer
        The size of the fonts for the axis, colorbar, and tick labels.
    data_source : YTSelectionContainer Object
        Object to be used for data selection. Defaults to a region covering
        the entire simulation.
    center : 'center', 'c', 'left', 'l', 'right', 'r', id of a global extremum, or array-like
        The coordinate of the selection's center. Defaults to the 'center',
        i.e. center of the domain. Centering on the min or max of a field is
        supported by passing a tuple such as ('min', ('gas', 'density')) or
        ('max', ('gas', 'temperature')). A single string may also be used
        (e.g. "min_density" or "max_temperature"), though it's not as
        flexible and does not allow selecting an exact field/particle type.
        With this syntax, the first field matching the provided name is
        selected. 'max' or 'm' can be used as a shortcut for
        ('max', ('gas', 'density')). 'min' can be used as a shortcut for
        ('min', ('gas', 'density')). One can also select an exact point as a
        3 element coordinate sequence, e.g. [0.5, 0.5, 0]. Units can be
        specified by passing in *center* as a tuple containing a 3-element
        coordinate sequence and string unit name, e.g. ([0, 0.5, 0.5], "cm"),
        or by passing in a YTArray. Code units are assumed if unspecified.
        The domain edges along the selected *axis* can be selected with
        'left'/'l' and 'right'/'r' respectively. This argument is only
        accepted by ``ParticleProjectionPlot``.
    width : tuple or a float.
        Width can have four different formats to support windows with
        variable x and y widths. They are:

        ================================== =======================
        format                             example
        ================================== =======================
        (float, string)                    (10,'kpc')
        ((float, string), (float, string)) ((10,'kpc'),(15,'kpc'))
        float                              0.2
        (float, float)                     (0.2, 0.3)
        ================================== =======================

        For example, (10, 'kpc') requests a plot window that is 10
        kiloparsecs wide in the x and y directions,
        ((10,'kpc'),(15,'kpc')) requests a window that is 10 kiloparsecs
        wide along the x axis and 15 kiloparsecs wide along the y axis. In
        the other two examples, code units are assumed, for example
        (0.2, 0.3) requests a plot that has an x width of 0.2 and a y width
        of 0.3 in code units. If units are provided the resulting plot axis
        labels will use the supplied units. This argument is only accepted
        by ``ParticleProjectionPlot``.
    depth : A tuple or a float
        A tuple containing the depth to project through and the string key
        of the unit: (width, 'unit'). If set to a float, code units are
        assumed. Defaults to the entire domain. This argument is only
        accepted by ``ParticleProjectionPlot``.
    axes_unit : A string
        The name of the unit for the tick labels on the x and y axes.
        Defaults to None, which automatically picks an appropriate unit. If
        axes_unit is '1', 'u', or 'unitary', it will not display the units,
        and only show the axes name.
    origin : string or length 1, 2, or 3 sequence of strings
        The location of the origin of the plot coordinate system. This is
        represented by a '-' separated string or a tuple of strings. In the
        first index the y-location is given by 'lower', 'upper', or
        'center'. The second index is the x-location, given as 'left',
        'right', or 'center'. Finally, whether the origin is applied in
        'domain' space, plot 'window' space or 'native' simulation
        coordinate system is given. For example, 'upper-right-domain' and
        ['upper', 'right', 'domain'] both place the origin in the upper
        right hand corner of domain space. If x or y are not given, a value
        is inferred. For instance, 'left-domain' corresponds to the
        lower-left hand corner of the simulation domain, 'center-domain'
        corresponds to the center of the simulation domain, or
        'center-window' for the center of the plot window. Further examples:

        ================================== ============================
        format                             example
        ================================== ============================
        '{space}'                          'domain'
        '{xloc}-{space}'                   'left-window'
        '{yloc}-{space}'                   'upper-domain'
        '{yloc}-{xloc}-{space}'            'lower-right-window'
        ('{space}',)                       ('window',)
        ('{xloc}', '{space}')              ('right', 'domain')
        ('{yloc}', '{space}')              ('lower', 'window')
        ('{yloc}', '{xloc}', '{space}')    ('lower', 'right', 'window')
        ================================== ============================

        This argument is only accepted by ``ParticleProjectionPlot``.
    window_size : float
        The size of the window on the longest axis (in units of inches),
        including the margins but not the colorbar. This argument is only
        accepted by ``ParticleProjectionPlot``.
    aspect : float
        The aspect ratio of the plot. Set to None for 1. This argument is
        only accepted by ``ParticleProjectionPlot``.
    x_bins : int
        The number of bins in x field for the mesh. Defaults to 800. This
        argument is only accepted by ``ParticlePhasePlot``.
    y_bins : int
        The number of bins in y field for the mesh. Defaults to 800. This
        argument is only accepted by ``ParticlePhasePlot``.
    deposition : str
        Either 'ngp' or 'cic'. Controls what type of interpolation will be
        used to deposit the particle z_fields onto the mesh. Defaults to
        'ngp'.
    figure_size : int
        Size in inches of the image. Defaults to 8 (producing an 8x8 inch
        figure). This argument is only accepted by ``ParticlePhasePlot``.

    Examples
    --------

    >>> from yt import load
    >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
    >>> p = yt.ParticlePlot(
    ...     ds,
    ...     "particle_position_x",
    ...     "particle_position_y",
    ...     "particle_mass",
    ...     width=(0.5, 0.5),
    ... 
) >>> p.set_unit("particle_mass", "Msun") >>> p = yt.ParticlePlot(ds, "particle_position_x", "particle_velocity_z", color="g") """ dd = kwargs.get("data_source", None) if dd is None: dd = ds.all_data() x_field = dd._determine_fields(x_field)[0] y_field = dd._determine_fields(y_field)[0] direction = 3 # try potential axes for a ParticleProjectionPlot: for axis in [0, 1, 2]: xax = ds.coordinates.x_axis[axis] yax = ds.coordinates.y_axis[axis] ax_field_template = "particle_position_%s" xf = ax_field_template % ds.coordinates.axis_name[xax] yf = ax_field_template % ds.coordinates.axis_name[yax] if (x_field[1], y_field[1]) in [(xf, yf), (yf, xf)]: direction = axis break if direction < 3: # Make a ParticleProjectionPlot return ParticleProjectionPlot(ds, direction, z_fields, color, *args, **kwargs) # Does not correspond to any valid PlotWindow-style plot, # use ParticlePhasePlot instead else: return ParticlePhasePlot(dd, x_field, y_field, z_fields, color, *args, **kwargs) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/plot_container.py0000644000175100001770000011314714714401662020523 0ustar00runnerdockerimport abc import base64 import os import warnings from collections import defaultdict from functools import wraps from typing import Any, Final, Literal import matplotlib from matplotlib.colors import LogNorm, Normalize, SymLogNorm from unyt.dimensions import length from yt._maintenance.deprecation import issue_deprecation_warning from yt._maintenance.ipython_compat import IS_IPYTHON from yt._typing import FieldKey, Quantity from yt.config import ytcfg from yt.data_objects.time_series import DatasetSeries from yt.funcs import ensure_dir, is_sequence, iter_fields from yt.units.unit_object import Unit # type: ignore from yt.utilities.definitions import formatted_length_unit_names from yt.utilities.exceptions import YTConfigurationError, YTNotInsideNotebook from yt.visualization._commons import get_default_from_config from yt.visualization._handlers import ColorbarHandler, NormHandler from yt.visualization.base_plot_types import PlotMPL from ._commons import ( _get_units_label, get_default_font_properties, invalidate_data, invalidate_figure, invalidate_plot, validate_image_name, validate_plot, ) latex_prefixes = { "u": r"\mu", } def apply_callback(f): issue_deprecation_warning( "The apply_callback decorator is not used in yt any more and " "will be removed in a future version. " "Please do not use it.", stacklevel=3, since="4.1", ) @wraps(f) def newfunc(*args, **kwargs): args[0]._callbacks.append((f.__name__, (args, kwargs))) return args[0] return newfunc def accepts_all_fields(func): """ Decorate a function whose second argument is and deal with the special case field == 'all', looping over all fields already present in the PlotContainer object. 
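    (The decorated callable is expected to take the field as its second
    positional argument, i.e. ``f(self, field, *args, **kwargs)``, as noted
    in the comment below. Illustratively, ``plot.set_log("all", True)`` then
    calls ``set_log`` once per field already present in ``plot.plots`` and
    returns ``plot`` itself, so calls can be chained.)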
""" # This is to be applied to PlotContainer class methods with the following signature: # # f(self, field, *args, **kwargs) -> self @wraps(func) def newfunc(self, field, *args, **kwargs): if field == "all": field = self.plots.keys() for f in self.data_source._determine_fields(field): func(self, f, *args, **kwargs) return self return newfunc # define a singleton sentinel to be used as default value distinct from None class Unset: _instance = None def __new__(cls): if cls._instance is None: cls._instance = object.__new__(cls) return cls._instance UNSET: Final = Unset() class PlotDictionary(defaultdict): def __getitem__(self, item): return defaultdict.__getitem__( self, self.data_source._determine_fields(item)[0] ) def __setitem__(self, item, value): return defaultdict.__setitem__( self, self.data_source._determine_fields(item)[0], value ) def __contains__(self, item): return defaultdict.__contains__( self, self.data_source._determine_fields(item)[0] ) def __init__(self, data_source, default_factory=None): self.data_source = data_source return defaultdict.__init__(self, default_factory) class PlotContainer(abc.ABC): """A container for generic plots""" _plot_dict_type: type[PlotDictionary] = PlotDictionary _plot_type: str | None = None _plot_valid = False _default_figure_size = tuple(matplotlib.rcParams["figure.figsize"]) _default_font_size = 14.0 def __init__(self, data_source, figure_size=None, fontsize: float | None = None): from matplotlib.font_manager import FontProperties self.data_source = data_source self.ds = data_source.ds self.ts = self._initialize_dataset(self.ds) self.plots = self.__class__._plot_dict_type(data_source) self._set_figure_size(figure_size) if fontsize is None: fontsize = self.__class__._default_font_size font_dict = get_default_font_properties() | {"size": fontsize} self._font_properties = FontProperties(**font_dict) self._font_color = None self._xlabel = None self._ylabel = None self._minorticks: dict[FieldKey, bool] = {} @accepts_all_fields @invalidate_plot def set_log( self, field, log: bool | None = None, *, linthresh: float | Quantity | Literal["auto"] | None = None, symlog_auto: bool | None = None, # deprecated ): """set a field to log, linear, or symlog. Symlog scaling is a combination of linear and log, where from 0 to a threshold value, it operates as linear, and then beyond that it operates as log. Symlog can also work with negative values in log space as well as negative and positive values simultaneously and symmetrically. If symlog scaling is desired, please set log=True and either set symlog_auto=True or select a value for linthresh. Parameters ---------- field : string the field to set a transform if field == 'all', applies to all plots. log : boolean, optional set log to True for log scaling, False for linear scaling. linthresh : float, (float, str), unyt_quantity, or 'auto', optional when using symlog scaling, linthresh is the value at which scaling transitions from linear to logarithmic. linthresh must be positive. Note: setting linthresh will automatically enable symlog scale Note that *log* and *linthresh* are mutually exclusive arguments """ if log is None and linthresh is None and symlog_auto is None: raise TypeError("set_log requires log or linthresh be set") if symlog_auto is not None: issue_deprecation_warning( "the symlog_auto argument is deprecated. 
Use linthresh='auto' instead", since="4.1", stacklevel=5, ) if symlog_auto is True: linthresh = "auto" elif symlog_auto is False: pass else: raise TypeError( "Received invalid value for parameter symlog_auto. " f"Expected a boolean, got {symlog_auto!r}" ) if log is not None and linthresh is not None: # we do not raise an error here for backward compatibility warnings.warn( f"log={log} has no effect because linthresh specified. Using symlog.", stacklevel=4, ) pnh = self.plots[field].norm_handler if linthresh is not None: if isinstance(linthresh, str): if linthresh == "auto": pnh.norm_type = SymLogNorm else: raise ValueError( "Expected a number, a unyt_quantity, a (float, 'unit') tuple, or 'auto'. " f"Got linthresh={linthresh!r}" ) else: # pnh takes care of switching to symlog when linthresh is set pnh.linthresh = linthresh elif log is True: pnh.norm_type = LogNorm elif log is False: pnh.norm_type = Normalize else: raise TypeError( f"Could not parse arguments log={log!r}, linthresh={linthresh!r}" ) return self def get_log(self, field): """get the transform type of a field. Parameters ---------- field : string the field to get a transform if field == 'all', applies to all plots. """ # devnote : accepts_all_fields decorator is not applicable here because # the return variable isn't self issue_deprecation_warning( "The get_log method is not reliable and is deprecated. " "Please do not rely on it.", stacklevel=3, since="4.1", ) log = {} if field == "all": fields = list(self.plots.keys()) else: fields = field for field in self.data_source._determine_fields(fields): pnh = self.plots[field].norm_handler if pnh.norm is not None: log[field] = type(pnh.norm) is LogNorm elif pnh.norm_type is not None: log[field] = pnh.norm_type is LogNorm else: # the NormHandler object has no constraints yet # so we'll assume defaults log[field] = True return log @invalidate_plot def set_transform(self, field, name: str): field = self.data_source._determine_fields(field)[0] pnh = self.plots[field].norm_handler pnh.norm_type = { "linear": Normalize, "log10": LogNorm, "symlog": SymLogNorm, }[name] return self @accepts_all_fields @invalidate_plot def set_norm(self, field, norm: Normalize): r""" Set a custom ``matplotlib.colors.Normalize`` to plot *field*. Any constraints previously set with `set_log`, `set_zlim` will be dropped. Note that any float value attached to *norm* (e.g. vmin, vmax, vcenter ...) will be read in the current displayed units, which can be controlled with the `set_unit` method. Parameters ---------- field : str or tuple[str, str] if field == 'all', applies to all plots. norm : matplotlib.colors.Normalize see https://matplotlib.org/stable/tutorials/colors/colormapnorms.html """ pnh = self.plots[field].norm_handler pnh.norm = norm return self @accepts_all_fields @invalidate_plot def set_minorticks(self, field, state): """Turn minor ticks on or off in the current plot. Displaying minor ticks reduces performance; turn them off using set_minorticks('all', False) if drawing speed is a problem. Parameters ---------- field : string the field to remove minorticks if field == 'all', applies to all plots. state : bool the state indicating 'on' (True) or 'off' (False) """ self._minorticks[field] = state return self @abc.abstractmethod def _setup_plots(self): # Left blank to be overridden in subclasses pass def render(self) -> None: r"""Render plots. This operation is expensive and usually doesn't need to be requested explicitly. 
In most cases, yt handles rendering automatically and delays it as much as possible to avoid redundant calls on each plot modification (e.g. via `annotate_*` methods). However, valid use cases of this method include: - fine control of render (and clear) operations when yt plots are combined with plot customizations other than plot callbacks (`annotate_*`) - testing """ # this public API method should never be no-op, so we invalidate # the plot to force a fresh render in _setup_plots() self._plot_valid = False self._setup_plots() def _initialize_dataset(self, ts): if not isinstance(ts, DatasetSeries): if not is_sequence(ts): ts = [ts] ts = DatasetSeries(ts) return ts @invalidate_data def _switch_ds(self, new_ds, data_source=None): old_object = self.data_source name = old_object._type_name kwargs = {n: getattr(old_object, n) for n in old_object._con_args} kwargs["center"] = getattr(old_object, "center", None) if data_source is not None: if name != "proj": raise RuntimeError( "The data_source keyword argument " "is only defined for projections." ) kwargs["data_source"] = data_source self.ds = new_ds # A _hack_ for ParticleProjectionPlots if name == "Particle": from yt.visualization.particle_plots import ( ParticleAxisAlignedDummyDataSource, ) new_object = ParticleAxisAlignedDummyDataSource(ds=self.ds, **kwargs) else: new_object = getattr(new_ds, name)(**kwargs) self.data_source = new_object for d in "xyz": lim_name = d + "lim" if hasattr(self, lim_name): lim = getattr(self, lim_name) lim = tuple(new_ds.quan(l.value, str(l.units)) for l in lim) setattr(self, lim_name, lim) self.plots.data_source = new_object self._colorbar_label.data_source = new_object self._setup_plots() @validate_plot def __getitem__(self, item): return self.plots[item] def _set_font_properties(self): for f in self.plots: self.plots[f]._set_font_properties(self._font_properties, self._font_color) @invalidate_plot @invalidate_figure def set_font(self, font_dict=None): """ Set the font and font properties. Parameters ---------- font_dict : dict A dict of keyword parameters to be passed to :class:`matplotlib.font_manager.FontProperties`. Possible keys include: * family - The font family. Can be serif, sans-serif, cursive, 'fantasy' or 'monospace'. * style - The font style. Either normal, italic or oblique. * color - A valid color string like 'r', 'g', 'red', 'cobalt', and 'orange'. * variant - Either normal or small-caps. * size - Either a relative value of xx-small, x-small, small, medium, large, x-large, xx-large or an absolute font size, e.g. 12 * stretch - A numeric value in the range 0-1000 or one of ultra-condensed, extra-condensed, condensed, semi-condensed, normal, semi-expanded, expanded, extra-expanded or ultra-expanded * weight - A numeric value in the range 0-1000 or one of ultralight, light, normal, regular, book, medium, roman, semibold, demibold, demi, bold, heavy, extra bold, or black See the matplotlib font manager API documentation for more details. https://matplotlib.org/stable/api/font_manager_api.html Notes ----- Mathtext axis labels will only obey the `size` and `color` keyword. Examples -------- This sets the font to be 24-pt, blue, sans-serif, italic, and bold-face. >>> slc = SlicePlot(ds, "x", "Density") >>> slc.set_font( ... { ... "family": "sans-serif", ... "style": "italic", ... "weight": "bold", ... "size": 24, ... "color": "blue", ... } ... 
) """ from matplotlib.font_manager import FontProperties if font_dict is None: font_dict = {} if "color" in font_dict: self._font_color = font_dict.pop("color") # Set default values if the user does not explicitly set them. # this prevents reverting to the matplotlib defaults. _default_size = {"size": self.__class__._default_font_size} font_dict = get_default_font_properties() | _default_size | font_dict self._font_properties = FontProperties(**font_dict) return self def set_font_size(self, size): """Set the size of the font used in the plot This sets the font size by calling the set_font function. See set_font for more font customization options. Parameters ---------- size : float The absolute size of the font in points (1 pt = 1/72 inch). """ return self.set_font({"size": size}) def _set_figure_size(self, size): if size is None: self.figure_size = self.__class__._default_figure_size elif is_sequence(size): if len(size) != 2: raise TypeError(f"Expected a single float or a pair, got {size}") self.figure_size = float(size[0]), float(size[1]) else: self.figure_size = float(size) @invalidate_plot @invalidate_figure def set_figure_size(self, size): """Sets a new figure size for the plot parameters ---------- size : float, a sequence of two floats, or None The size of the figure (in units of inches), including the margins but not the colorbar. If a single float is passed, it's interpreted as the size along the long axis. Pass None to reset """ self._set_figure_size(size) return self @validate_plot def save( self, name: str | list[str] | tuple[str, ...] | None = None, suffix: str | None = None, mpl_kwargs: dict[str, Any] | None = None, ): """saves the plot to disk. Parameters ---------- name : string or tuple, optional The base of the filename. If name is a directory or if name is not set, the filename of the dataset is used. For a tuple, the resulting path will be given by joining the elements of the tuple suffix : string, optional Specify the image type by its suffix. If not specified, the output type will be inferred from the filename. Defaults to '.png'. mpl_kwargs : dict, optional A dict of keyword arguments to be passed to matplotlib. >>> slc.save(mpl_kwargs={"bbox_inches": "tight"}) """ names = [] if mpl_kwargs is None: mpl_kwargs = {} elif "format" in mpl_kwargs: new_suffix = mpl_kwargs.pop("format") if new_suffix != suffix: warnings.warn( f"Overriding suffix {suffix!r} with mpl_kwargs['format'] = {new_suffix!r}. 
" "Use the `suffix` argument directly to suppress this warning.", stacklevel=2, ) suffix = new_suffix if name is None: name = str(self.ds) elif isinstance(name, (list, tuple)): if not all(isinstance(_, str) for _ in name): raise TypeError( f"Expected a single str or an iterable of str, got {name!r}" ) name = os.path.join(*name) name = os.path.expanduser(name) parent_dir, _, prefix1 = name.replace(os.sep, "/").rpartition("/") parent_dir = parent_dir.replace("/", os.sep) if parent_dir and not os.path.isdir(parent_dir): ensure_dir(parent_dir) if name.endswith(("/", os.path.sep)): name = os.path.join(name, str(self.ds)) new_name = validate_image_name(name, suffix) if new_name == name: for v in self.plots.values(): out_name = v.save(name, mpl_kwargs) names.append(out_name) return names name = new_name prefix, suffix = os.path.splitext(name) if hasattr(self.data_source, "axis"): axis = self.ds.coordinates.axis_name.get(self.data_source.axis, "") else: axis = None weight = None stddev = None plot_type = self._plot_type if plot_type in ["Projection", "OffAxisProjection"]: weight = self.data_source.weight_field if weight is not None: weight = weight[1].replace(" ", "_") if getattr(self.data_source, "moment", 1) == 2: stddev = "standard_deviation" if "Cutting" in self.data_source.__class__.__name__: plot_type = "OffAxisSlice" for k, v in self.plots.items(): if isinstance(k, tuple): k = k[1] if plot_type is None: # implemented this check to make mypy happy, because we can't use str.join # with PlotContainer._plot_type = None raise TypeError(f"{self.__class__} is missing a _plot_type value (str)") name_elements = [prefix, plot_type] if axis: name_elements.append(axis) name_elements.append(k.replace(" ", "_")) if weight: name_elements.append(weight) if stddev: name_elements.append(stddev) name = "_".join(name_elements) + suffix names.append(v.save(name, mpl_kwargs)) return names @invalidate_data def refresh(self): # invalidate_data will take care of everything return self @validate_plot def show(self): r"""This will send any existing plots to the IPython notebook. If yt is being run from within an IPython session, and it is able to determine this, this function will send any existing plots to the notebook for display. If yt can't determine if it's inside an IPython session, it will raise YTNotInsideNotebook. Examples -------- >>> from yt import SlicePlot >>> slc = SlicePlot( ... ds, "x", [("gas", "density"), ("gas", "velocity_magnitude")] ... ) >>> slc.show() """ interactivity = self.plots[list(self.plots.keys())[0]].interactivity if interactivity: for v in sorted(self.plots.values()): v.show() else: if IS_IPYTHON: from IPython.display import display display(self) else: raise YTNotInsideNotebook @validate_plot def display(self, name=None, mpl_kwargs=None): """Will attempt to show the plot in in an IPython notebook. Failing that, the plot will be saved to disk.""" try: return self.show() except YTNotInsideNotebook: return self.save(name=name, mpl_kwargs=mpl_kwargs) @validate_plot def _repr_html_(self): """Return an html representation of the plot object. Will display as a png for each WindowPlotMPL instance in self.plots""" ret = "" for field in self.plots: img = base64.b64encode(self.plots[field]._repr_png_()).decode() ret += ( r'
' ) return ret @invalidate_plot def set_xlabel(self, label): r""" Allow the user to modify the X-axis title Defaults to the global value. Fontsize defaults to 18. Parameters ---------- label : str The new string for the x-axis. >>> plot.set_xlabel("H2I Number Density (cm$^{-3}$)") """ self._xlabel = label return self @invalidate_plot def set_ylabel(self, label): r""" Allow the user to modify the Y-axis title Defaults to the global value. Parameters ---------- label : str The new string for the y-axis. >>> plot.set_ylabel("Temperature (K)") """ self._ylabel = label return self def _get_axes_unit_labels(self, unit_x, unit_y): axes_unit_labels = ["", ""] comoving = False hinv = False for i, un in enumerate((unit_x, unit_y)): unn = None if hasattr(self.data_source, "axis"): if hasattr(self.ds.coordinates, "image_units"): # This *forces* an override unn = self.ds.coordinates.image_units[self.data_source.axis][i] elif hasattr(self.ds.coordinates, "default_unit_label"): axax = getattr(self.ds.coordinates, f"{'xy'[i]}_axis")[ self.data_source.axis ] unn = self.ds.coordinates.default_unit_label.get(axax, None) if unn in (1, "1", "dimensionless"): axes_unit_labels[i] = "" continue if unn is not None: axes_unit_labels[i] = _get_units_label(unn).strip("$") continue # Use sympy to factor h out of the unit. In this context 'un' # is a string, so we call the Unit constructor. expr = Unit(un, registry=self.ds.unit_registry).expr h_expr = Unit("h", registry=self.ds.unit_registry).expr # See http://docs.sympy.org/latest/modules/core.html#sympy.core.expr.Expr h_power = expr.as_coeff_exponent(h_expr)[1] # un is now the original unit, but with h factored out. un = str(expr * h_expr ** (-1 * h_power)) un_unit = Unit(un, registry=self.ds.unit_registry) cm = Unit("cm").expr if str(un).endswith("cm") and cm not in un_unit.expr.atoms(): comoving = True un = un[:-2] # no length units besides code_length end in h so this is safe if h_power == -1: hinv = True elif h_power != 0: # It doesn't make sense to scale a position by anything # other than h**-1 raise RuntimeError if un not in ["1", "u", "unitary"]: if un in formatted_length_unit_names: un = formatted_length_unit_names[un] else: un = Unit(un, registry=self.ds.unit_registry) un = un.latex_representation() if hinv: un = un + r"\,h^{-1}" if comoving: un = un + r"\,(1+z)^{-1}" pp = un[0] if pp in latex_prefixes: symbol_wo_prefix = un[1:] if symbol_wo_prefix in self.ds.unit_registry.prefixable_units: un = un.replace(pp, "{" + latex_prefixes[pp] + "}", 1) axes_unit_labels[i] = _get_units_label(un).strip("$") return axes_unit_labels def hide_colorbar(self, field=None): """ Hides the colorbar for a plot and updates the size of the plot accordingly. Defaults to operating on all fields for a PlotContainer object. Parameters ---------- field : string, field tuple, or list of strings or field tuples (optional) The name of the field(s) that we want to hide the colorbar. If None or 'all' is provided, will default to using all fields available for this object. Examples -------- This will save an image with no colorbar. >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> s = SlicePlot(ds, 2, "density", "c", (20, "kpc")) >>> s.hide_colorbar() >>> s.save() This will save an image with no axis or colorbar. 
>>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> s = SlicePlot(ds, 2, "density", "c", (20, "kpc")) >>> s.hide_axes() >>> s.hide_colorbar() >>> s.save() """ if field is None or field == "all": field = self.plots.keys() for f in self.data_source._determine_fields(field): self.plots[f].hide_colorbar() return self def show_colorbar(self, field=None): """ Shows the colorbar for a plot and updates the size of the plot accordingly. Defaults to operating on all fields for a PlotContainer object. See hide_colorbar(). Parameters ---------- field : string, field tuple, or list of strings or field tuples (optional) The name of the field(s) that we want to show the colorbar. """ if field is None: field = self.fields for f in iter_fields(field): self.plots[f].show_colorbar() return self def hide_axes(self, field=None, draw_frame=None): """ Hides the axes for a plot and updates the size of the plot accordingly. Defaults to operating on all fields for a PlotContainer object. Parameters ---------- field : string, field tuple, or list of strings or field tuples (optional) The name of the field(s) that we want to hide the axes. draw_frame : boolean If True, the axes frame will still be drawn. Defaults to False. See note below for more details. Examples -------- This will save an image with no axes. >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> s = SlicePlot(ds, 2, "density", "c", (20, "kpc")) >>> s.hide_axes() >>> s.save() This will save an image with no axis or colorbar. >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> s = SlicePlot(ds, 2, "density", "c", (20, "kpc")) >>> s.hide_axes() >>> s.hide_colorbar() >>> s.save() Note ---- By default, when removing the axes, the patch on which the axes are drawn is disabled, making it impossible to later change e.g. the background colour. To force the axes patch to be displayed while still hiding the axes, set the ``draw_frame`` keyword argument to ``True``. """ if field is None: field = self.fields for f in iter_fields(field): self.plots[f].hide_axes(draw_frame=draw_frame) return self def show_axes(self, field=None): """ Shows the axes for a plot and updates the size of the plot accordingly. Defaults to operating on all fields for a PlotContainer object. See hide_axes(). Parameters ---------- field : string, field tuple, or list of strings or field tuples (optional) The name of the field(s) that we want to show the axes. 
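
        Examples
        --------

        This mirrors the ``hide_axes`` example above (the dataset path is
        illustrative):

        >>> import yt
        >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
        >>> s = SlicePlot(ds, 2, "density", "c", (20, "kpc"))
        >>> s.hide_axes()
        >>> s.show_axes()
        >>> s.save()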
""" if field is None: field = self.fields for f in iter_fields(field): self.plots[f].show_axes() return self class ImagePlotContainer(PlotContainer, abc.ABC): """A container for plots with colorbars.""" _colorbar_valid = False def __init__(self, data_source, figure_size, fontsize): super().__init__(data_source, figure_size, fontsize) self._callbacks = [] self._colorbar_label = PlotDictionary(self.data_source, lambda: None) def _get_default_handlers( self, field, default_display_units: Unit ) -> tuple[NormHandler, ColorbarHandler]: usr_units_str = get_default_from_config( self.data_source, field=field, keys="units", defaults=[None] ) if usr_units_str is not None: usr_units = Unit(usr_units_str) d1 = usr_units.dimensions d2 = default_display_units.dimensions if d1 == d2: display_units = usr_units elif getattr(self, "projected", False) and d2 / d1 == length: path_length_units = Unit( ytcfg.get_most_specific( "plot", *field, "path_length_units", fallback="cm" ), registry=self.data_source.ds.unit_registry, ) display_units = usr_units * path_length_units else: raise YTConfigurationError( f"Invalid units in configuration file for field {field!r}. " f"Found {usr_units!r}" ) else: display_units = default_display_units pnh = NormHandler(self.data_source, display_units=display_units) cbh = ColorbarHandler( cmap=get_default_from_config( self.data_source, field=field, keys="cmap", defaults=[None], ) ) return pnh, cbh @accepts_all_fields @invalidate_plot def set_cmap(self, field, cmap): """set the colormap for one of the fields Parameters ---------- field : string the field to set the colormap if field == 'all', applies to all plots. cmap : string or tuple If a string, will be interpreted as name of the colormap. If a tuple, it is assumed to be of the form (name, type, number) to be used for palettable functionality. (name, type, number, bool) can be used to specify if a reverse colormap is to be used. """ self._colorbar_valid = False self.plots[field].colorbar_handler.cmap = cmap return self @accepts_all_fields @invalidate_plot def set_background_color(self, field, color=None): """set the background color to match provided color Parameters ---------- field : string the field to set the colormap if field == 'all', applies to all plots. color : string or RGBA tuple (optional) if set, set the background color to this color if unset, background color is set to the bottom value of the color map """ cbh = self[field].colorbar_handler cbh.background_color = color return self @accepts_all_fields @invalidate_plot def set_zlim( self, field, zmin: float | Quantity | Literal["min"] | Unset = UNSET, zmax: float | Quantity | Literal["max"] | Unset = UNSET, dynamic_range: float | None = None, ): """set the scale of the colormap Parameters ---------- field : string the field to set a colormap scale if field == 'all', applies to all plots. zmin : float, Quantity, or 'min' the new minimum of the colormap scale. If 'min', will set to the minimum value in the current view. zmax : float, Quantity, or 'max' the new maximum of the colormap scale. If 'max', will set to the maximum value in the current view. Other Parameters ---------------- dynamic_range : float (default: None) The dynamic range of the image. 
If zmin == None, will set zmin = zmax / dynamic_range If zmax == None, will set zmax = zmin * dynamic_range """ if zmin is UNSET and zmax is UNSET: raise TypeError("Missing required argument zmin or zmax") if zmin is UNSET: zmin = None elif zmin is None: # this sentinel value juggling is barely maintainable # this use case is deprecated so we can simplify the logic here # in the future and use `None` as the default value, # instead of the custom sentinel UNSET issue_deprecation_warning( "Passing `zmin=None` explicitly is deprecated. " "If you wish to explicitly set zmin to the minimal " "data value, pass `zmin='min'` instead. " "Otherwise leave this argument unset.", since="4.1", stacklevel=5, ) zmin = "min" if zmax is UNSET: zmax = None elif zmax is None: # see above issue_deprecation_warning( "Passing `zmax=None` explicitly is deprecated. " "If you wish to explicitly set zmax to the maximal " "data value, pass `zmax='max'` instead. " "Otherwise leave this argument unset.", since="4.1", stacklevel=5, ) zmax = "max" pnh = self.plots[field].norm_handler pnh.vmin = zmin pnh.vmax = zmax pnh.dynamic_range = dynamic_range return self @accepts_all_fields @invalidate_plot def set_colorbar_minorticks(self, field, state): """turn colorbar minor ticks on or off in the current plot Displaying minor ticks reduces performance; turn them off using set_colorbar_minorticks('all', False) if drawing speed is a problem. Parameters ---------- field : string the field to remove colorbar minorticks if field == 'all', applies to all plots. state : bool the state indicating 'on' (True) or 'off' (False) """ self.plots[field].colorbar_handler.draw_minorticks = state return self @invalidate_plot def set_colorbar_label(self, field, label): r""" Sets the colorbar label. Parameters ---------- field : str or tuple The name of the field to modify the label for. label : str The new label >>> plot.set_colorbar_label( ... ("gas", "density"), "Dark Matter Density (g cm$^{-3}$)" ... 
) """ field = self.data_source._determine_fields(field) self._colorbar_label[field] = label return self def _get_axes_labels(self, field): return (self._xlabel, self._ylabel, self._colorbar_label[field]) class BaseLinePlot(PlotContainer, abc.ABC): # A common ancestor to LinePlot and ProfilePlot @abc.abstractmethod def _get_axrect(self): pass def _get_plot_instance(self, field): if field in self.plots: return self.plots[field] axrect = self._get_axrect() pnh = NormHandler( self.data_source, display_units=self.data_source.ds.field_info[field].units ) finfo = self.data_source.ds._get_field_info(field) if not finfo.take_log: pnh.norm_type = Normalize plot = PlotMPL(self.figure_size, axrect, norm_handler=pnh) self.plots[field] = plot return plot ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/plot_modifications.py0000644000175100001770000036572714714401662021406 0ustar00runnerdockerimport inspect import re import sys import warnings from abc import ABC, abstractmethod from functools import update_wrapper from numbers import Integral, Number from typing import Any, TypeGuard import matplotlib import numpy as np from unyt import unyt_quantity from yt._maintenance.deprecation import issue_deprecation_warning from yt._typing import AnyFieldKey, FieldKey from yt.data_objects.data_containers import YTDataContainer from yt.data_objects.level_sets.clump_handling import Clump from yt.data_objects.selection_objects.cut_region import YTCutRegion from yt.frontends.ytdata.data_structures import YTClumpContainer from yt.funcs import is_sequence, mylog, validate_width_tuple from yt.geometry.api import Geometry from yt.geometry.unstructured_mesh_handler import UnstructuredIndex from yt.units import dimensions from yt.units._numpy_wrapper_functions import uhstack from yt.units.yt_array import YTArray, YTQuantity from yt.utilities.exceptions import ( YTDataTypeUnsupported, YTFieldNotFound, YTFieldTypeNotFound, YTUnsupportedPlotCallback, ) from yt.utilities.lib.geometry_utils import triangle_plane_intersect from yt.utilities.lib.line_integral_convolution import line_integral_convolution_2d from yt.utilities.lib.mesh_triangulation import triangulate_indices from yt.utilities.lib.pixelization_routines import ( pixelize_cartesian, pixelize_off_axis_cartesian, ) from yt.utilities.math_utils import periodic_ray from yt.visualization._commons import ( _swap_arg_pair_order, _swap_axes_extents, invalidate_plot, ) from yt.visualization.base_plot_types import CallbackWrapper from yt.visualization.image_writer import apply_colormap from yt.visualization.plot_window import PWViewerMPL if sys.version_info >= (3, 11): from typing import assert_never else: from typing_extensions import assert_never callback_registry: dict[str, type["PlotCallback"]] = {} def _validate_factor_tuple(factor) -> tuple[int, int]: if ( is_sequence(factor) and len(factor) == 2 and all(isinstance(_, Integral) for _ in factor) ): # - checking for "is_sequence" allows lists, numpy arrays and other containers # - checking for Integral type allows numpy integer types # in any case we return a with strict typing return (int(factor[0]), int(factor[1])) elif isinstance(factor, Integral): return (int(factor), int(factor)) else: raise TypeError( f"Expected a single, or a pair of integers, received {factor!r}" ) class PlotCallback(ABC): # _supported_geometries is set by subclasses of PlotCallback to a tuple of # strings corresponding to the names of the geometries that a callback # supports. 
By default it is None, which means it supports everything. # Note that if there's a coord_system parameter that is set to "axis" or # "figure" this is disregarded. If "force" is included in the tuple, it # will *not* check whether or not the coord_system is in axis or figure, # and will only look at the geometries. _supported_geometries: tuple[str, ...] | None = None _incompatible_plot_types: tuple[str, ...] = () def __init_subclass__(cls, *args, **kwargs): if inspect.isabstract(cls): return # register class callback_registry[cls.__name__] = cls # create a PWViewerMPL method by wrapping __init__ if cls.__init__.__doc__ is None: # allow docstring definition at the class level instead of __init__ cls.__init__.__doc__ = cls.__doc__ supported_geometries = cls._supported_geometries incompatible_plot_types = cls._incompatible_plot_types type_name = cls._type_name @invalidate_plot def closure(self, *args, **kwargs): nonlocal supported_geometries nonlocal incompatible_plot_types nonlocal type_name geom = self.ds.geometry if not ( supported_geometries is None or geom in supported_geometries or ( kwargs.get("coord_system") in ("axis", "figure") and "force" not in supported_geometries ) ): raise YTDataTypeUnsupported(geom, supported_geometries) if self._plot_type in incompatible_plot_types: raise YTUnsupportedPlotCallback(type_name, self._plot_type) self._callbacks.append(cls(*args, **kwargs)) return self update_wrapper( wrapper=closure, wrapped=cls.__init__, assigned=("__annotations__", "__doc__"), ) method_name = "annotate_" + type_name closure.__name__ = method_name setattr(PWViewerMPL, method_name, closure) @abstractmethod def __init__(self, *args, **kwargs) -> None: pass @abstractmethod def __call__(self, plot: CallbackWrapper) -> Any: pass def _project_coords(self, plot, coord): """ Convert coordinates from simulation data coordinates to projected data coordinates. Simulation data coordinates are three dimensional, and can either be specified as a YTArray or as a list or array in code_length units. Projected data units are 2D versions of the simulation data units relative to the axes of the final plot. """ if len(coord) == 3: if not isinstance(coord, YTArray): coord_copy = plot.data.ds.arr(coord, "code_length") else: # coord is being copied so that if the user has a unyt_array already # we don't change the user's version coord_copy = coord.to("code_length") ax = plot.data.axis # if this is an on-axis projection or slice, then # just grab the appropriate 2 coords for the on-axis view if ax is not None: (xi, yi) = ( plot.data.ds.coordinates.x_axis[ax], plot.data.ds.coordinates.y_axis[ax], ) ret_coord = (coord_copy[xi], coord_copy[yi]) # if this is an off-axis project or slice (ie cutting plane) # we have to calculate where the data coords fall in the projected # plane else: # transpose is just to get [[x1,x2,...],[y1,y2,...],[z1,z2,...]] # in the same order as plot.data.center for array arithmetic coord_vectors = coord_copy.transpose() - plot.data.center x = np.dot(coord_vectors, plot.data.orienter.unit_vectors[1]) y = np.dot(coord_vectors, plot.data.orienter.unit_vectors[0]) # Transpose into image coords. Due to VR being not a # right-handed coord system ret_coord = (y, x) # if the position is already two-coords, it is expected to be # in the proper projected orientation else: raise ValueError("'data' coordinates must be 3 dimensions") return ret_coord def _convert_to_plot(self, plot, coord, offset=True): """ Convert coordinates from projected data coordinates to PlotWindow plot coordinates. 
Projected data coordinates are two dimensional and refer to the location relative to the specific axes being plotted, although still in simulation units. PlotWindow plot coordinates are locations as found in the final plot, usually with the origin in the center of the image and the extent of the image defined by the final plot axis markers. """ # coord should be a 2 x ncoord array-like datatype. try: ncoord = np.array(coord).shape[1] except IndexError: ncoord = 1 # Convert the data and plot limits to tiled numpy arrays so that # convert_to_plot is automatically vectorized. phy_bounds = self._physical_bounds(plot) x0, x1, y0, y1 = (np.array(np.tile(xyi, ncoord)) for xyi in phy_bounds) plt_bounds = self._plot_bounds(plot) xx0, xx1, yy0, yy1 = (np.tile(xyi, ncoord) for xyi in plt_bounds) try: ccoord = np.array(coord.to("code_length")) except AttributeError: ccoord = np.array(coord) # We need a special case for when we are only given one coordinate. if ccoord.shape == (2,): return np.array( [ ((ccoord[0] - x0) / (x1 - x0) * (xx1 - xx0) + xx0)[0], ((ccoord[1] - y0) / (y1 - y0) * (yy1 - yy0) + yy0)[0], ] ) else: return np.array( [ (ccoord[0][:] - x0) / (x1 - x0) * (xx1 - xx0) + xx0, (ccoord[1][:] - y0) / (y1 - y0) * (yy1 - yy0) + yy0, ] ) def _sanitize_coord_system(self, plot, coord, coord_system): """ Given a set of one or more x,y (and z) coordinates and a coordinate system, convert the coordinates (and transformation) ready for final plotting. Parameters ---------- plot: a PlotMPL subclass The plot that we are converting coordinates for coord: array-like Coordinates in some coordinate system: [x,y,z]. Alternatively, can specify multiple coordinates as: [[x1,x2,...,xn], [y1, y2,...,yn], [z1,z2,...,zn]] coord_system: string Possible values include: * ``'data'`` 3D data coordinates relative to original dataset * ``'plot'`` 2D coordinates as defined by the final axis locations * ``'axis'`` 2D coordinates within the axis object from (0,0) in lower left to (1,1) in upper right. Same as matplotlib axis coords. * ``'figure'`` 2D coordinates within figure object from (0,0) in lower left to (1,1) in upper right. Same as matplotlib figure coords. """ # Assure coords are either a YTArray or numpy array coord = np.asanyarray(coord, dtype="float64") # if in data coords, project them to plot coords if coord_system == "data": if len(coord) < 3: raise ValueError( "Coordinates in 'data' coordinate system need to be in 3D" ) coord = self._project_coords(plot, coord) coord = self._convert_to_plot(plot, coord) # if in plot coords, define the transform correctly if coord_system == "data" or coord_system == "plot": self.transform = plot._axes.transData return coord # if in axis coords, define the transform correctly if coord_system == "axis": self.transform = plot._axes.transAxes if len(coord) > 2: raise ValueError( "Coordinates in 'axis' coordinate system need to be in 2D" ) return coord # if in figure coords, define the transform correctly elif coord_system == "figure": self.transform = plot._figure.transFigure return coord else: raise ValueError( "Argument coord_system must have a value of " "'data', 'plot', 'axis', or 'figure'." ) def _physical_bounds(self, plot): xlims = tuple(v.in_units("code_length") for v in plot.xlim) ylims = tuple(v.in_units("code_length") for v in plot.ylim) # _swap_axes note: do NOT need to unswap here because plot (a CallbackWrapper # instance) stores the x, y lims of the underlying data object. 
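        # e.g. for a plot window 10 code_length units wide and tall, centered
        # on the origin, this would return (-5, 5, -5, 5); the values stay in
        # the data object's frame even when the display axes are swapped
        # (numbers are illustrative)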
return xlims + ylims def _plot_bounds(self, plot): xlims = plot._axes.get_xlim() ylims = plot._axes.get_ylim() # _swap_axes note: because we are getting the plot limits from the axes # object, if the axes have been swapped, these will be reversed from the # _physical_bounds. So we need to unswap here, but not in _physical_bounds. if plot._swap_axes: return ylims + xlims return xlims + ylims def _pixel_scale(self, plot): x0, x1, y0, y1 = self._physical_bounds(plot) xx0, xx1, yy0, yy1 = self._plot_bounds(plot) dx = (xx1 - xx0) / (x1 - x0) dy = (yy1 - yy0) / (y1 - y0) return dx, dy def _set_font_properties(self, plot, labels, **kwargs): """ This sets all of the text instances created by a callback to have the same font size and properties as all of the other fonts in the figure. If kwargs are set, they override the defaults. """ # This is a little messy because there is no trivial way to update # a MPL.font_manager.FontProperties object with new attributes # aside from setting them individually. So we pick out the relevant # MPL.Text() kwargs from the local kwargs and let them override the # defaults. local_font_properties = plot.font_properties.copy() # Turn off the default TT font file, otherwise none of this works. local_font_properties.set_file(None) local_font_properties.set_family("stixgeneral") if "family" in kwargs: local_font_properties.set_family(kwargs["family"]) if "file" in kwargs: local_font_properties.set_file(kwargs["file"]) if "fontconfig_pattern" in kwargs: local_font_properties.set_fontconfig_pattern(kwargs["fontconfig_pattern"]) if "name" in kwargs: local_font_properties.set_name(kwargs["name"]) if "size" in kwargs: local_font_properties.set_size(kwargs["size"]) if "slant" in kwargs: local_font_properties.set_slant(kwargs["slant"]) if "stretch" in kwargs: local_font_properties.set_stretch(kwargs["stretch"]) if "style" in kwargs: local_font_properties.set_style(kwargs["style"]) if "variant" in kwargs: local_font_properties.set_variant(kwargs["variant"]) if "weight" in kwargs: local_font_properties.set_weight(kwargs["weight"]) # For each label, set the font properties and color to the figure # defaults if not already set in the callback itself for label in labels: if plot.font_color is not None and "color" not in kwargs: label.set_color(plot.font_color) label.set_fontproperties(local_font_properties) def _set_plot_limits(self, plot, extent=None) -> None: """ calls set_xlim, set_ylim for plot, accounting for swapped axes Parameters ---------- plot : CallbackWrapper a CallbackWrapper instance extent : tuple or list The raw extent (prior to swapping). if None, will fetch it. """ if extent is None: extent = self._plot_bounds(plot) if plot._swap_axes: extent = _swap_axes_extents(extent) plot._axes.set_xlim(extent[0], extent[1]) plot._axes.set_ylim(extent[2], extent[3]) @staticmethod def _sanitize_xy_order(plot, *args): """ flips x-y pairs of plot arguments if needed Parameters ---------- plot : CallbackWrapper a CallbackWrapper instance *args x, y plot arguments, must have an even number of *args Returns ------- tuple either the original args or new args, with (x, y) pairs switched. i.e., _sanitize_xy_order(plot, x, y, px, py) returns: x, y, px, py if plot._swap_axes is False y, x, py, px if plot._swap_axes is True """ if plot._swap_axes: return _swap_arg_pair_order(*args) return args class VelocityCallback(PlotCallback): """ Adds a 'quiver' plot of velocity to the plot, skipping all but every *factor* datapoint. 
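    *factor* may also be given as a ``(fx, fy)`` pair to set the skip
    independently along the two image axes.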
*scale* is the data units per arrow length unit using *scale_units* and *plot_args* allows you to pass in matplotlib arguments (see matplotlib.axes.Axes.quiver for more info). if *normalize* is True, the velocity fields will be scaled by their local (in-plane) length, allowing morphological features to be more clearly seen for fields with substantial variation in field strength. """ _type_name = "velocity" _supported_geometries = ( "cartesian", "spectral_cube", "polar", "cylindrical", "spherical", ) _incompatible_plot_types = ("OffAxisProjection", "Particle") def __init__( self, factor: tuple[int, int] | int = 16, *, scale=None, scale_units=None, normalize=False, plot_args=None, **kwargs, ): self.factor = _validate_factor_tuple(factor) self.scale = scale self.scale_units = scale_units self.normalize = normalize if plot_args is not None: issue_deprecation_warning( "`plot_args` is deprecated. " "You can now pass arbitrary keyword arguments instead of a dictionary.", since="4.1", stacklevel=5, ) plot_args.update(kwargs) else: plot_args = kwargs self.plot_args = plot_args def __call__(self, plot) -> "BaseQuiverCallback": ftype = plot.data._current_fluid_type # Instantiation of these is cheap geometry: Geometry = plot.data.ds.geometry if plot._type_name == "CuttingPlane": if geometry is Geometry.CARTESIAN: pass elif ( geometry is Geometry.POLAR or geometry is Geometry.CYLINDRICAL or geometry is Geometry.SPHERICAL or geometry is Geometry.GEOGRAPHIC or geometry is Geometry.INTERNAL_GEOGRAPHIC or geometry is Geometry.SPECTRAL_CUBE ): raise NotImplementedError( f"annotate_velocity is not supported for cutting plane for {geometry=}" ) else: assert_never(geometry) qcb: BaseQuiverCallback if plot._type_name == "CuttingPlane": qcb = CuttingQuiverCallback( (ftype, "cutting_plane_velocity_x"), (ftype, "cutting_plane_velocity_y"), factor=self.factor, scale=self.scale, normalize=self.normalize, scale_units=self.scale_units, **self.plot_args, ) else: xax = plot.data.ds.coordinates.x_axis[plot.data.axis] yax = plot.data.ds.coordinates.y_axis[plot.data.axis] axis_names = plot.data.ds.coordinates.axis_name bv = plot.data.get_field_parameter("bulk_velocity") if bv is not None: bv_x = bv[xax] bv_y = bv[yax] else: bv_x = bv_y = 0 if geometry is Geometry.POLAR or geometry is Geometry.CYLINDRICAL: if axis_names[plot.data.axis] == "z": # polar_z and cyl_z is aligned with cartesian_z # should convert r-theta plane to x-y plane xv = (ftype, "velocity_cartesian_x") yv = (ftype, "velocity_cartesian_y") else: xv = (ftype, f"velocity_{axis_names[xax]}") yv = (ftype, f"velocity_{axis_names[yax]}") elif geometry is Geometry.SPHERICAL: if axis_names[plot.data.axis] == "phi": xv = (ftype, "velocity_cylindrical_radius") yv = (ftype, "velocity_cylindrical_z") elif axis_names[plot.data.axis] == "theta": xv = (ftype, "velocity_conic_x") yv = (ftype, "velocity_conic_y") else: raise NotImplementedError( f"annotate_velocity is missing support for normal={axis_names[plot.data.axis]!r}" ) elif geometry is Geometry.CARTESIAN: xv = (ftype, f"velocity_{axis_names[xax]}") yv = (ftype, f"velocity_{axis_names[yax]}") elif ( geometry is Geometry.GEOGRAPHIC or geometry is Geometry.INTERNAL_GEOGRAPHIC or geometry is Geometry.SPECTRAL_CUBE ): raise NotImplementedError( f"annotate_velocity is not supported for {geometry=}" ) else: assert_never(geometry) # determine the full fields including field type xv = plot.data._determine_fields(xv)[0] yv = plot.data._determine_fields(yv)[0] qcb = QuiverCallback( xv, yv, factor=self.factor, 
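                # bv_x/bv_y forward any "bulk_velocity" field parameter found
                # on the data source above, so the arrows show velocities
                # relative to that bulk motion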
scale=self.scale, scale_units=self.scale_units, normalize=self.normalize, bv_x=bv_x, bv_y=bv_y, **self.plot_args, ) return qcb(plot) class MagFieldCallback(PlotCallback): """ Adds a 'quiver' plot of magnetic field to the plot, skipping all but every *factor* datapoint. *scale* is the data units per arrow length unit using *scale_units* and *plot_args* allows you to pass in matplotlib arguments (see matplotlib.axes.Axes.quiver for more info). if *normalize* is True, the magnetic fields will be scaled by their local (in-plane) length, allowing morphological features to be more clearly seen for fields with substantial variation in field strength. """ _type_name = "magnetic_field" _supported_geometries = ( "cartesian", "spectral_cube", "polar", "cylindrical", "spherical", ) _incompatible_plot_types = ("OffAxisProjection", "Particle") def __init__( self, factor: tuple[int, int] | int = 16, *, scale=None, scale_units=None, normalize=False, plot_args=None, **kwargs, ): self.factor = _validate_factor_tuple(factor) self.scale = scale self.scale_units = scale_units self.normalize = normalize if plot_args is not None: issue_deprecation_warning( "`plot_args` is deprecated. " "You can now pass arbitrary keyword arguments instead of a dictionary.", since="4.1", stacklevel=5, ) plot_args.update(kwargs) else: plot_args = kwargs self.plot_args = plot_args def __call__(self, plot) -> "BaseQuiverCallback": ftype = plot.data._current_fluid_type # Instantiation of these is cheap geometry: Geometry = plot.data.ds.geometry qcb: BaseQuiverCallback if plot._type_name == "CuttingPlane": if geometry is Geometry.CARTESIAN: pass elif ( geometry is Geometry.POLAR or geometry is Geometry.CYLINDRICAL or geometry is Geometry.SPHERICAL or geometry is Geometry.GEOGRAPHIC or geometry is Geometry.INTERNAL_GEOGRAPHIC or geometry is Geometry.SPECTRAL_CUBE ): raise NotImplementedError( f"annotate_magnetic_field is not supported for cutting plane for {geometry=}" ) else: assert_never(geometry) qcb = CuttingQuiverCallback( (ftype, "cutting_plane_magnetic_field_x"), (ftype, "cutting_plane_magnetic_field_y"), factor=self.factor, scale=self.scale, scale_units=self.scale_units, normalize=self.normalize, **self.plot_args, ) else: xax = plot.data.ds.coordinates.x_axis[plot.data.axis] yax = plot.data.ds.coordinates.y_axis[plot.data.axis] axis_names = plot.data.ds.coordinates.axis_name if geometry is Geometry.POLAR or geometry is Geometry.CYLINDRICAL: if axis_names[plot.data.axis] == "z": # polar_z and cyl_z is aligned with cartesian_z # should convert r-theta plane to x-y plane xv = (ftype, "magnetic_field_cartesian_x") yv = (ftype, "magnetic_field_cartesian_y") else: xv = (ftype, f"magnetic_field_{axis_names[xax]}") yv = (ftype, f"magnetic_field_{axis_names[yax]}") elif geometry is Geometry.SPHERICAL: if axis_names[plot.data.axis] == "phi": xv = (ftype, "magnetic_field_cylindrical_radius") yv = (ftype, "magnetic_field_cylindrical_z") elif axis_names[plot.data.axis] == "theta": xv = (ftype, "magnetic_field_conic_x") yv = (ftype, "magnetic_field_conic_y") else: raise NotImplementedError( f"annotate_magnetic_field is missing support for normal={axis_names[plot.data.axis]!r}" ) elif geometry is Geometry.CARTESIAN: xv = (ftype, f"magnetic_field_{axis_names[xax]}") yv = (ftype, f"magnetic_field_{axis_names[yax]}") elif ( geometry is Geometry.GEOGRAPHIC or geometry is Geometry.INTERNAL_GEOGRAPHIC or geometry is Geometry.SPECTRAL_CUBE ): raise NotImplementedError( f"annotate_magnetic_field is not supported for {geometry=}" ) else: 
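                # exhaustiveness guard: if a new Geometry member is added
                # without a branch above, type checkers flag this line and it
                # raises at runtime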
assert_never(geometry) qcb = QuiverCallback( xv, yv, factor=self.factor, scale=self.scale, scale_units=self.scale_units, normalize=self.normalize, **self.plot_args, ) return qcb(plot) class BaseQuiverCallback(PlotCallback, ABC): _incompatible_plot_types = ("OffAxisProjection", "Particle") def __init__( self, field_x, field_y, field_c=None, *, factor: tuple[int, int] | int = 16, scale=None, scale_units=None, normalize=False, plot_args=None, **kwargs, ): self.field_x = field_x self.field_y = field_y self.field_c = field_c self.factor = _validate_factor_tuple(factor) self.scale = scale self.scale_units = scale_units self.normalize = normalize if plot_args is None: plot_args = kwargs else: issue_deprecation_warning( "`plot_args` is deprecated. " "You can now pass arbitrary keyword arguments instead of a dictionary.", since="4.1", stacklevel=5, ) plot_args.update(kwargs) self.plot_args = plot_args @abstractmethod def _get_quiver_data(self, plot, bounds: tuple, nx: int, ny: int): # must return (pixX, pixY, pixC) arrays (pixC can be None) pass def __call__(self, plot): # construct mesh bounds = self._physical_bounds(plot) nx = plot.raw_image_shape[1] // self.factor[0] ny = plot.raw_image_shape[0] // self.factor[1] xx0, xx1, yy0, yy1 = self._plot_bounds(plot) if plot._transform is None: X, Y = np.meshgrid( np.linspace(xx0, xx1, nx, endpoint=True), np.linspace(yy0, yy1, ny, endpoint=True), ) else: # when we have a cartopy transform, provide the x, y values # in the coordinate reference system of the data and let cartopy # do the transformation. Also check for the exact bounds of the transform # which can cause issues with projections. tform_bnds = plot._transform.x_limits + plot._transform.y_limits if any(b.d == tb for b, tb in zip(bounds, tform_bnds, strict=True)): # note: cartopy will also raise its own warning, but it is useful to add this # warning as well since the only way to avoid the exact bounds is to change the # extent of the plot. warnings.warn( "Using the exact bounds of the transform may cause errors at the bounds." " To avoid this warning, adjust the width of your plot object to not include " "the bounds.", stacklevel=2, ) X, Y = np.meshgrid( np.linspace(bounds[0].d, bounds[1].d, nx, endpoint=True), np.linspace(bounds[2].d, bounds[3].d, ny, endpoint=True), ) pixX, pixY, pixC = self._get_quiver_data(plot, bounds, nx, ny) retv = self._finalize(plot, X, Y, pixX, pixY, pixC) self._set_plot_limits(plot, (xx0, xx1, yy0, yy1)) return retv def _finalize(self, plot, X, Y, pixX, pixY, pixC): if self.normalize: nn = np.sqrt(pixX**2 + pixY**2) nn = np.where(nn == 0, 1, nn) pixX /= nn pixY /= nn X, Y, pixX, pixY = self._sanitize_xy_order(plot, X, Y, pixX, pixY) # quiver plots ignore x, y axes inversions when using angles="uv" (the # default), so reverse the direction of the vectors when flipping the axis. if self.plot_args.get("angles", None) != "xy": if plot._flip_vertical: pixY = -1 * pixY if plot._flip_horizontal: pixX = -1 * pixX args = [X, Y, pixX, pixY] if pixC is not None: args.append(pixC) kwargs = { "scale": self.scale, "scale_units": self.scale_units, } kwargs.update(self.plot_args) if plot._transform is not None: if "transform" in kwargs: msg = ( "The base plot already has a transform set, ignoring the provided keyword " "argument and using the base transform." 
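                    # e.g. passing transform=ccrs.PlateCarree() to a quiver
                    # annotation on a cartopy-backed plot lands here; the
                    # plot's own transform wins (the ccrs usage is illustrative)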
                )
                warnings.warn(msg, stacklevel=2)
            kwargs["transform"] = plot._transform

        return plot._axes.quiver(*args, **kwargs)


class QuiverCallback(BaseQuiverCallback):
    """
    Adds a 'quiver' plot to any plot, using the *field_x* and *field_y*
    from the associated data, skipping all but every *factor* pixels.
    *field_c* is an optional field name used for color.
    *scale* is the data units per arrow length unit using *scale_units*
    and *plot_args* allows you to pass in matplotlib arguments (see
    matplotlib.axes.Axes.quiver for more info). If *normalize* is True, the
    fields will be scaled by their local (in-plane) length, allowing
    morphological features to be more clearly seen for fields with
    substantial variation in field strength.
    """

    _type_name = "quiver"
    _supported_geometries: tuple[str, ...] = (
        "cartesian",
        "spectral_cube",
        "polar",
        "cylindrical",
        "spherical",
        "geographic",
        "internal_geographic",
    )

    def __init__(
        self,
        field_x,
        field_y,
        field_c=None,
        *,
        factor: tuple[int, int] | int = 16,
        scale=None,
        scale_units=None,
        normalize=False,
        bv_x=0,
        bv_y=0,
        plot_args=None,
        **kwargs,
    ):
        super().__init__(
            field_x,
            field_y,
            field_c,
            factor=factor,
            scale=scale,
            scale_units=scale_units,
            normalize=normalize,
            plot_args=plot_args,
            **kwargs,
        )
        self.bv_x = bv_x
        self.bv_y = bv_y

    def _get_quiver_data(self, plot, bounds: tuple, nx: int, ny: int):
        # calls the pixelizer, returns pixX, pixY, pixC arrays
        def transform(field_name, vector_value):
            field_units = plot.data[field_name].units

            def _transformed_field(field, data):
                return data[field_name] - data.ds.arr(vector_value, field_units)

            plot.data.ds.add_field(
                ("gas", f"transformed_{field_name}"),
                sampling_type="cell",
                function=_transformed_field,
                units=field_units,
                display_field=False,
            )

        if self.bv_x != 0.0 or self.bv_y != 0.0:
            # We create a relative vector field
            transform(self.field_x, self.bv_x)
            transform(self.field_y, self.bv_y)
            field_x = f"transformed_{self.field_x}"
            field_y = f"transformed_{self.field_y}"
        else:
            field_x, field_y = self.field_x, self.field_y

        periodic = int(any(plot.data.ds.periodicity))
        pixX = plot.data.ds.coordinates.pixelize(
            plot.data.axis,
            plot.data,
            field_x,
            bounds,
            (nx, ny),
            False,  # antialias
            periodic,
        )
        pixY = plot.data.ds.coordinates.pixelize(
            plot.data.axis,
            plot.data,
            field_y,
            bounds,
            (nx, ny),
            False,  # antialias
            periodic,
        )
        if self.field_c is not None:
            pixC = plot.data.ds.coordinates.pixelize(
                plot.data.axis,
                plot.data,
                self.field_c,
                bounds,
                (nx, ny),
                False,  # antialias
                periodic,
            )
        else:
            pixC = None
        return pixX, pixY, pixC


class ContourCallback(PlotCallback):
    """
    Add contours in *field* to the plot.  *levels* governs the number of
    contours generated, *factor* governs the number of points used in the
    interpolation, *take_log* governs how it is contoured and *clim* gives
    the (lower, upper) limits for contouring.  An alternate data source can be
    specified with *data_source*, but by default the plot's data source will be
    queried.
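
    Examples
    --------

    A minimal illustration, reusing the sample dataset shown in other
    docstrings of this module:

    >>> import yt
    >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    >>> s = yt.SlicePlot(ds, "z", ("gas", "density"))
    >>> s.annotate_contour(("gas", "temperature"), levels=4)
    >>> s.save()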
""" _type_name = "contour" _supported_geometries = ("cartesian", "spectral_cube", "cylindrical") _incompatible_plot_types = ("Particle",) def __init__( self, field: AnyFieldKey, levels: int = 5, *, factor: tuple[int, int] | int = 4, clim: tuple[float, float] | None = None, label: bool = False, take_log: bool | None = None, data_source: YTDataContainer | None = None, plot_args: dict[str, Any] | None = None, text_args: dict[str, Any] | None = None, ncont: int | None = None, # deprecated ) -> None: if ncont is not None: issue_deprecation_warning( "The `ncont` keyword argument is deprecated, use `levels` instead.", since="4.1", stacklevel=5, ) levels = ncont if clim is not None and not isinstance(levels, (int, np.integer)): raise TypeError(f"clim requires levels be an integer, received {levels}") self.levels = levels self.field = field self.factor = _validate_factor_tuple(factor) self.clim = clim self.take_log = take_log self.plot_args = { "colors": "black", "linestyles": "solid", **(plot_args or {}), } self.label = label self.text_args = { "colors": "white", **(text_args or {}), } self.data_source = data_source def __call__(self, plot) -> None: from matplotlib.tri import LinearTriInterpolator, Triangulation # These need to be in code_length x0, x1, y0, y1 = self._physical_bounds(plot) # These are in plot coordinates, which may not be code coordinates. xx0, xx1, yy0, yy1 = self._plot_bounds(plot) # See the note about rows/columns in the pixelizer for more information # on why we choose the bounds we do numPoints_x = plot.raw_image_shape[1] numPoints_y = plot.raw_image_shape[0] # Go from data->plot coordinates dx, dy = self._pixel_scale(plot) # We want xi, yi in plot coordinates xi, yi = np.mgrid[ xx0 : xx1 : numPoints_x / (self.factor[0] * 1j), yy0 : yy1 : numPoints_y / (self.factor[1] * 1j), ] data = self.data_source or plot.data if plot._type_name in ["CuttingPlane", "Projection", "Slice"]: if plot._type_name == "CuttingPlane": x = data["px"] * dx y = data["py"] * dy z = data[self.field] elif plot._type_name in ["Projection", "Slice"]: # Makes a copy of the position fields "px" and "py" and adds the # appropriate shift to the copied field. 
                AllX = np.zeros(data["px"].size, dtype="bool")
                AllY = np.zeros(data["py"].size, dtype="bool")
                XShifted = data["px"].copy()
                YShifted = data["py"].copy()
                dom_x, dom_y = plot._period
                for shift in np.mgrid[-1:1:3j]:  # type: ignore [misc]
                    xlim = (data["px"] + shift * dom_x >= x0) & (
                        data["px"] + shift * dom_x <= x1
                    )
                    ylim = (data["py"] + shift * dom_y >= y0) & (
                        data["py"] + shift * dom_y <= y1
                    )
                    XShifted[xlim] += shift * dom_x
                    YShifted[ylim] += shift * dom_y
                    AllX |= xlim
                    AllY |= ylim
                # At this point XShifted and YShifted are the shifted arrays of
                # position data in data coordinates
                wI = AllX & AllY

                # This converts XShifted and YShifted into plot coordinates
                x = ((XShifted[wI] - x0) * dx).ndarray_view() + xx0
                y = ((YShifted[wI] - y0) * dy).ndarray_view() + yy0
                z = data[self.field][wI]

            # Both the input and output from the triangulator are in plot
            # coordinates
            triangulation = Triangulation(x, y)
            zi = LinearTriInterpolator(triangulation, z)(xi, yi)
        elif plot._type_name == "OffAxisProjection":
            zi = plot.frb[self.field][:: self.factor[0], :: self.factor[1]].transpose()

        take_log: bool
        if self.take_log is not None:
            take_log = self.take_log
        else:
            field = data._determine_fields([self.field])[0]
            take_log = plot.ds._get_field_info(field).take_log

        if take_log:
            zi = np.log10(zi)

        clim: tuple[float, float] | None
        if take_log and self.clim is not None:
            clim = np.log10(self.clim[0]), np.log10(self.clim[1])
        else:
            clim = self.clim

        levels: np.ndarray | int
        if clim is not None:
            levels = np.linspace(clim[0], clim[1], self.levels)
        else:
            levels = self.levels

        xi, yi = self._sanitize_xy_order(plot, xi, yi)
        cset = plot._axes.contour(xi, yi, zi, levels, **self.plot_args)
        self._set_plot_limits(plot, (xx0, xx1, yy0, yy1))

        if self.label:
            plot._axes.clabel(cset, **self.text_args)


class GridBoundaryCallback(PlotCallback):
    """
    Draws grids on an existing PlotWindow object. Adds grid boundaries to a
    plot, optionally with alpha-blending. By default, colors different levels
    of grids with different colors going from white to black, but you can
    change to any arbitrary colormap with the cmap keyword, to all black grid
    edges for all levels with cmap=None and edgecolors=None, or to an
    arbitrary single color for grid edges with edgecolors='YourChosenColor'
    defined in any of the standard ways (e.g., edgecolors='white',
    edgecolors='r', edgecolors='#00FFFF', or edgecolor='0.3', where the last
    is a float in 0-1 scale indicating gray). Note that setting edgecolors
    overrides cmap if you have both set to non-None values. Cutoff for display
    is at min_pix wide. draw_ids puts the grid id at the corner of the grid
    (but it's not so great in projections...). id_loc determines which corner
    holds the grid id. One can set the minimum and maximum levels of grids to
    display, and can change the linewidth of the displayed grids.
    """

    _type_name = "grids"
    _supported_geometries = ("cartesian", "spectral_cube", "cylindrical")
    _incompatible_plot_types = ("OffAxisSlice", "OffAxisProjection", "Particle")

    def __init__(
        self,
        alpha=0.7,
        min_pix=1,
        min_pix_ids=20,
        draw_ids=False,
        id_loc=None,
        periodic=True,
        min_level=None,
        max_level=None,
        cmap="B-W LINEAR_r",
        edgecolors=None,
        linewidth=1.0,
    ):
        self.alpha = alpha
        self.min_pix = min_pix
        self.min_pix_ids = min_pix_ids
        self.draw_ids = draw_ids  # put grid numbers in the corner.
        if id_loc is None:
            self.id_loc = "lower left"
        else:
            self.id_loc = id_loc.lower()  # Make case-insensitive
            if not self.draw_ids:
                mylog.warning(
                    "Supplied id_loc but draw_ids is False. Not drawing grid ids"
                )
        self.periodic = periodic
        self.min_level = min_level
        self.max_level = max_level
        self.linewidth = linewidth
        self.cmap = cmap
        self.edgecolors = edgecolors

    def __call__(self, plot):
        if plot.data.ds.geometry == "cylindrical" and plot.data.ds.dimensionality == 3:
            raise NotImplementedError(
                "Grid annotation is only supported for 2D cylindrical geometry, not 3D"
            )
        from matplotlib.colors import colorConverter

        x0, x1, y0, y1 = self._physical_bounds(plot)
        xx0, xx1, yy0, yy1 = self._plot_bounds(plot)
        (dx, dy) = self._pixel_scale(plot)
        (ypix, xpix) = plot.raw_image_shape
        ax = plot.data.axis
        px_index = plot.data.ds.coordinates.x_axis[ax]
        py_index = plot.data.ds.coordinates.y_axis[ax]
        DW = plot.data.ds.domain_width
        if self.periodic:
            pxs, pys = np.mgrid[-1:1:3j, -1:1:3j]
        else:
            pxs, pys = np.mgrid[0:0:1j, 0:0:1j]
        GLE, GRE, levels, block_ids = [], [], [], []
        for block, _mask in plot.data.blocks:
            GLE.append(block.LeftEdge.in_units("code_length"))
            GRE.append(block.RightEdge.in_units("code_length"))
            levels.append(block.Level)
            block_ids.append(block.id)
        if len(GLE) == 0:
            return
        # Retain both units and registry
        GLE = plot.ds.arr(GLE, units=GLE[0].units)
        GRE = plot.ds.arr(GRE, units=GRE[0].units)
        levels = np.array(levels)
        min_level = self.min_level or 0
        max_level = self.max_level or levels.max()

        # sort the four arrays in order of ascending level; this makes images look nicer
        new_indices = np.argsort(levels)
        levels = levels[new_indices]
        GLE = GLE[new_indices]
        GRE = GRE[new_indices]
        block_ids = np.array(block_ids)[new_indices]

        for px_off, py_off in zip(pxs.ravel(), pys.ravel(), strict=True):
            pxo = px_off * DW[px_index]
            pyo = py_off * DW[py_index]
            left_edge_x = np.array((GLE[:, px_index] + pxo - x0) * dx) + xx0
            left_edge_y = np.array((GLE[:, py_index] + pyo - y0) * dy) + yy0
            right_edge_x = np.array((GRE[:, px_index] + pxo - x0) * dx) + xx0
            right_edge_y = np.array((GRE[:, py_index] + pyo - y0) * dy) + yy0
            xwidth = xpix * (right_edge_x - left_edge_x) / (xx1 - xx0)
            ywidth = ypix * (right_edge_y - left_edge_y) / (yy1 - yy0)
            visible = np.logical_and(
                np.logical_and(xwidth > self.min_pix, ywidth > self.min_pix),
                np.logical_and(levels >= min_level, levels <= max_level),
            )

            # Grids can either be set by edgecolors OR a colormap.
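            # e.g. edgecolors="white" outlines every level in white, while the
            # default cmap runs from white at level 0 toward black at the
            # deepest level (per the class docstring)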
            if self.edgecolors is not None:
                edgecolors = colorConverter.to_rgba(self.edgecolors, alpha=self.alpha)
            else:
                # use colormap if not explicitly overridden by edgecolors
                if self.cmap is not None:
                    color_bounds = [0, max_level]
                    edgecolors = (
                        apply_colormap(
                            levels[visible] * 1.0,
                            color_bounds=color_bounds,
                            cmap_name=self.cmap,
                        )[0, :, :]
                        * 1.0
                        / 255.0
                    )
                    edgecolors[:, 3] = self.alpha
                else:
                    edgecolors = (0.0, 0.0, 0.0, self.alpha)

            if visible.nonzero()[0].size == 0:
                continue
            edge_x = (left_edge_x, left_edge_x, right_edge_x, right_edge_x)
            edge_y = (left_edge_y, right_edge_y, right_edge_y, left_edge_y)
            edge_x, edge_y = self._sanitize_xy_order(plot, edge_x, edge_y)
            verts = np.array([edge_x, edge_y])
            verts = verts.transpose()[visible, :, :]
            grid_collection = matplotlib.collections.PolyCollection(
                verts,
                facecolors="none",
                edgecolors=edgecolors,
                linewidth=self.linewidth,
            )
            plot._axes.add_collection(grid_collection)

            visible_ids = np.logical_and(
                np.logical_and(xwidth > self.min_pix_ids, ywidth > self.min_pix_ids),
                np.logical_and(levels >= min_level, levels <= max_level),
            )

            if self.draw_ids:
                plot_ids = np.where(visible_ids)[0]
                x = np.empty(plot_ids.size)
                y = np.empty(plot_ids.size)
                for i, n in enumerate(plot_ids):
                    if self.id_loc == "lower left":
                        x[i] = left_edge_x[n] + (2 * (xx1 - xx0) / xpix)
                        y[i] = left_edge_y[n] + (2 * (yy1 - yy0) / ypix)
                    elif self.id_loc == "lower right":
                        x[i] = right_edge_x[n] - (
                            (10 * len(str(block_ids[n])) - 2) * (xx1 - xx0) / xpix
                        )
                        y[i] = left_edge_y[n] + (2 * (yy1 - yy0) / ypix)
                    elif self.id_loc == "upper left":
                        x[i] = left_edge_x[n] + (2 * (xx1 - xx0) / xpix)
                        y[i] = right_edge_y[n] - (12 * (yy1 - yy0) / ypix)
                    elif self.id_loc == "upper right":
                        x[i] = right_edge_x[n] - (
                            (10 * len(str(block_ids[n])) - 2) * (xx1 - xx0) / xpix
                        )
                        y[i] = right_edge_y[n] - (12 * (yy1 - yy0) / ypix)
                    else:
                        raise RuntimeError(
                            f"Unrecognized id_loc value ({self.id_loc!r}). "
                            "Allowed values are 'lower left', 'lower right', "
                            "'upper left', and 'upper right'."
                        )
                    xi, yi = self._sanitize_xy_order(plot, x[i], y[i])
                    plot._axes.text(xi, yi, "%d" % block_ids[n], clip_on=True)


# when type-checking with MPL >= 3.8, use
# from matplotlib.typing import ColorType
_ColorType = Any


class StreamlineCallback(PlotCallback):
    """
    Plot streamlines using matplotlib.axes.Axes.streamplot

    Arguments
    ---------

    field_x: field key
        The "velocity"-analogous field along the horizontal direction.

    field_y: field key
        The "velocity"-analogous field along the vertical direction.

    linewidth: float, or field key (default: 1.0)
        A constant scalar will be passed directly to
        matplotlib.axes.Axes.streamplot
        A field key will first be interpreted by yt and produce the adequate
        2D array. Data fields are normalized by their maximum value, so the
        maximal linewidth is 1 by default. See `linewidth_upscaling` for fine
        tuning. Note that the absolute value is taken in all cases.

    linewidth_upscaling: float (default: 1.0)
        A constant multiplicative factor applied to linewidth.
        Final linewidth is obtained as:
        linewidth_upscaling * abs(linewidth) / max(abs(linewidth))

    color: a color identifier, or a field key
        (default: matplotlib.rcParams['lines.color'])
        A constant color identifier will be passed directly to
        matplotlib.axes.Axes.streamplot
        A field key will first be interpreted by yt and produce the adequate
        2D array.
        See
        https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.streamplot.html
        for how to customize color mapping using `cmap` and `norm` arguments.

    color_threshold: float or unyt_quantity (default: -inf)
        Regions where the field used for color is lower than this threshold
        will be masked. Only used if color is a field key.

    factor: int, or tuple[int, int] (default: 16)
        Fields are down-sampled by this factor with respect to the background
        image buffer size. A single integer factor will be used for both
        directions, but a tuple of 2 integers can be passed to set x and y
        downsampling independently.

    **kwargs:
        Any additional keyword arguments will be passed directly to
        matplotlib.axes.Axes.streamplot
    """

    _type_name = "streamlines"
    _supported_geometries = (
        "cartesian",
        "spectral_cube",
        "polar",
        "cylindrical",
        "spherical",
    )
    _incompatible_plot_types = ("OffAxisProjection", "Particle")

    def __init__(
        self,
        field_x: AnyFieldKey,
        field_y: AnyFieldKey,
        *,
        linewidth: float | AnyFieldKey = 1.0,
        linewidth_upscaling: float = 1.0,
        color: _ColorType | FieldKey | None = None,
        color_threshold: float | unyt_quantity = float("-inf"),
        factor: tuple[int, int] | int = 16,
        field_color=None,  # deprecated
        display_threshold=None,  # deprecated
        plot_args=None,  # deprecated
        **kwargs,
    ):
        self.field_x = field_x
        self.field_y = field_y

        if color is not None and field_color is not None:
            raise TypeError(
                "`color` and `field_color` keyword arguments "
                "cannot be set at the same time."
            )
        elif field_color is not None:
            issue_deprecation_warning(
                "The `field_color` keyword argument is deprecated. "
                "Use `color` instead.",
                since="4.3",
                stacklevel=5,
            )
            self._color = field_color
        else:
            self._color = color

        if color_threshold is not None and display_threshold is not None:
            raise TypeError(
                "`color_threshold` and `display_threshold` keyword arguments "
                "cannot be set at the same time."
            )
        elif display_threshold is not None:
            issue_deprecation_warning(
                "The `display_threshold` keyword argument is deprecated. "
                "Use `color_threshold` instead.",
                since="4.3",
                stacklevel=5,
            )
            self._color_threshold = display_threshold
        else:
            self._color_threshold = color_threshold

        self._linewidth = linewidth
        self._linewidth_upscaling = linewidth_upscaling
        self.factor = _validate_factor_tuple(factor)

        if plot_args is not None:
            issue_deprecation_warning(
                "`plot_args` is deprecated. "
                "You can now pass arbitrary keyword arguments instead of a dictionary.",
                since="4.1",
                stacklevel=5,
            )
            plot_args.update(kwargs)
        else:
            plot_args = kwargs
        self.plot_args = plot_args

    def __call__(self, plot) -> None:
        xx0, xx1, yy0, yy1 = self._plot_bounds(plot)

        # We are feeding this size into the pixelizer, where it will properly
        # set it in reverse order
        nx = plot.raw_image_shape[1] // self.factor[0]
        ny = plot.raw_image_shape[0] // self.factor[1]

        def pixelize(field):
            retv = plot.data.ds.coordinates.pixelize(
                plot.data.axis,
                plot.data,
                field=field,
                bounds=self._physical_bounds(plot),
                size=(nx, ny),
            )
            if plot._swap_axes:
                return retv.transpose()
            else:
                return retv

        def is_field_key(val) -> TypeGuard[AnyFieldKey]:
            if not is_sequence(val):
                return False
            try:
                plot.data._determine_fields(val)
            except (YTFieldNotFound, YTFieldTypeNotFound):
                return False
            else:
                return True

        pixX = pixelize(self.field_x)
        pixY = pixelize(self.field_y)

        if isinstance(self._linewidth, (int, float)):
            linewidth = self._linewidth_upscaling * self._linewidth
        elif is_field_key(self._linewidth):
            linewidth = pixelize(self._linewidth)
            linewidth *= self._linewidth_upscaling / np.abs(linewidth).max()
        else:
            raise TypeError(
                f"annotate_streamlines received linewidth={self._linewidth!r}, "
                f"with type {type(self._linewidth)}. Expected a float or a field key."
            )

        if is_field_key(self._color):
            color = pixelize(self._color)
            linewidth *= color > self._color_threshold
        else:
            if (_cmap := self.plot_args.get("cmap")) is not None:
                warnings.warn(
                    f"annotate_streamlines received color={self._color!r}, "
                    "which wasn't recognized as a field key. "
                    "It is assumed to be a fixed color identifier. "
                    f"Also received cmap={_cmap!r}, which will be ignored.",
                    stacklevel=5,
                )
            color = self._color

        X, Y = (
            np.linspace(xx0, xx1, nx, endpoint=True),
            np.linspace(yy0, yy1, ny, endpoint=True),
        )
        X, Y, pixX, pixY = self._sanitize_xy_order(plot, X, Y, pixX, pixY)
        if plot._swap_axes:
            # need an additional transpose here for streamline tracing
            X = X.transpose()
            Y = Y.transpose()
        streamplot_args = {
            "x": X,
            "y": Y,
            "u": pixX,
            "v": pixY,
            "color": color,
            "linewidth": linewidth,
            **self.plot_args,
        }
        plot._axes.streamplot(**streamplot_args)
        self._set_plot_limits(plot, (xx0, xx1, yy0, yy1))


class LinePlotCallback(PlotCallback):
    """
    Overplot a line with endpoints at p1 and p2.  p1 and p2
    should be 2D or 3D coordinates consistent with the coordinate
    system denoted in the "coord_system" keyword.

    Parameters
    ----------
    p1, p2 : 2- or 3-element tuples, lists, or arrays
        These are the coordinates of the endpoints of the line.

    coord_system : string, optional
        This string defines the coordinate system of the coordinates p1 and p2.
        Valid coordinates are:

        "data" -- the 3D dataset coordinates

        "plot" -- the 2D coordinates defined by the actual plot limits

        "axis" -- the MPL axis coordinates: (0,0) is lower left; (1,1) is
        upper right

        "figure" -- the MPL figure coordinates: (0,0) is lower left, (1,1)
        is upper right

    plot_args : dictionary, optional
        This dictionary is passed to the MPL plot function for generating
        the line.  By default, it is: {'color':'white', 'linewidth':2}

    Examples
    --------

    >>> # Overplot a diagonal white line from the lower left corner to upper
    >>> # right corner
    >>> import yt
    >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    >>> s = yt.SlicePlot(ds, "z", "density")
    >>> s.annotate_line([0, 0], [1, 1], coord_system="axis")
    >>> s.save()

    >>> # Overplot a red dashed line from data coordinate (0.1, 0.2, 0.3) to
    >>> # (0.5, 0.6, 0.7)
    >>> import yt
    >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    >>> s = yt.SlicePlot(ds, "z", "density")
    >>> s.annotate_line(
    ...     [0.1, 0.2, 0.3],
    ...     [0.5, 0.6, 0.7],
    ...     coord_system="data",
    ...     color="red",
    ...     linestyle="--",
    ... )
    >>> s.save()
    """

    _type_name = "line"
    _supported_geometries: tuple[str, ...] = (
        "cartesian",
        "spectral_cube",
        "polar",
        "cylindrical",
    )

    def __init__(
        self,
        p1,
        p2,
        *,
        coord_system="data",
        plot_args: dict[str, Any] | None = None,
        **kwargs,
    ):
        self.p1 = p1
        self.p2 = p2
        if plot_args is not None:
            issue_deprecation_warning(
                "`plot_args` is deprecated. 
" "You can now pass arbitrary keyword arguments instead of a dictionary.", since="4.1", stacklevel=5, ) self.plot_args = { "color": "white", "linewidth": 2, **(plot_args or {}), **kwargs, } self.coord_system = coord_system self.transform = None def __call__(self, plot): p1 = self._sanitize_coord_system(plot, self.p1, coord_system=self.coord_system) p2 = self._sanitize_coord_system(plot, self.p2, coord_system=self.coord_system) start_pt, end_pt = [p1[0], p2[0]], [p1[1], p2[1]] if plot._swap_axes: start_pt, end_pt = [p2[0], p1[0]], [p2[1], p1[1]] plot._axes.plot(start_pt, end_pt, transform=self.transform, **self.plot_args) self._set_plot_limits(plot) class CuttingQuiverCallback(BaseQuiverCallback): """ Get a quiver plot on top of a cutting plane, using *field_x* and *field_y*, skipping every *factor* datapoint in the discretization. *scale* is the data units per arrow length unit using *scale_units* and *plot_args* allows you to pass in matplotlib arguments (see matplotlib.axes.Axes.quiver for more info). if *normalize* is True, the fields will be scaled by their local (in-plane) length, allowing morphological features to be more clearly seen for fields with substantial variation in field strength. """ _type_name = "cquiver" _supported_geometries = ("cartesian", "spectral_cube") def _get_quiver_data(self, plot, bounds: tuple, nx: int, ny: int): # calls the pixelizer, returns pixX, pixY, pixC arrays indices = np.argsort(plot.data["index", "dx"])[::-1].astype(np.int_) pixX = np.zeros((ny, nx), dtype="f8") pixY = np.zeros((ny, nx), dtype="f8") pixelize_off_axis_cartesian( pixX, plot.data["index", "x"].to("code_length"), plot.data["index", "y"].to("code_length"), plot.data["index", "z"].to("code_length"), plot.data["px"], plot.data["py"], plot.data["pdx"], plot.data["pdy"], plot.data["pdz"], plot.data.center, plot.data._inv_mat, indices, plot.data[self.field_x], bounds, ) pixelize_off_axis_cartesian( pixY, plot.data["index", "x"].to("code_length"), plot.data["index", "y"].to("code_length"), plot.data["index", "z"].to("code_length"), plot.data["px"], plot.data["py"], plot.data["pdx"], plot.data["pdy"], plot.data["pdz"], plot.data.center, plot.data._inv_mat, indices, plot.data[self.field_y], bounds, ) if self.field_c is None: pixC = None else: pixC = np.zeros((ny, nx), dtype="f8") pixelize_off_axis_cartesian( pixC, plot.data["index", "x"].to("code_length"), plot.data["index", "y"].to("code_length"), plot.data["index", "z"].to("code_length"), plot.data["px"], plot.data["py"], plot.data["pdx"], plot.data["pdy"], plot.data["pdz"], plot.data.center, plot.data._inv_mat, indices, plot.data[self.field_c], bounds, ) return pixX, pixY, pixC class ClumpContourCallback(PlotCallback): """ Take a list of *clumps* and plot them as a set of contours. """ _type_name = "clumps" _supported_geometries = ("cartesian", "spectral_cube", "cylindrical") _incompatible_plot_types = ("OffAxisSlice", "OffAxisProjection", "Particle") def __init__(self, clumps, *, plot_args=None, **kwargs): self.clumps = clumps if plot_args is not None: issue_deprecation_warning( "`plot_args` is deprecated. 
" "You can now pass arbitrary keyword arguments instead of a dictionary.", since="4.1", stacklevel=5, ) plot_args.update(kwargs) else: plot_args = kwargs if "color" in plot_args: plot_args["colors"] = plot_args.pop("color") self.plot_args = plot_args def __call__(self, plot): bounds = self._physical_bounds(plot) extent = self._plot_bounds(plot) ax = plot.data.axis px_index = plot.data.ds.coordinates.x_axis[ax] py_index = plot.data.ds.coordinates.y_axis[ax] xf = plot.data.ds.coordinates.axis_name[px_index] yf = plot.data.ds.coordinates.axis_name[py_index] dxf = f"d{xf}" dyf = f"d{yf}" ny, nx = plot.raw_image_shape buff = np.zeros((nx, ny), dtype="float64") for i, clump in enumerate(reversed(self.clumps)): mylog.info("Pixelizing contour %s", i) if isinstance(clump, Clump): ftype = "index" elif isinstance(clump, YTClumpContainer): ftype = "grid" else: raise RuntimeError( f"Unknown field type for object of type {type(clump)}." ) xf_copy = clump[ftype, xf].copy().in_units("code_length") yf_copy = clump[ftype, yf].copy().in_units("code_length") temp = np.zeros((ny, nx), dtype="f8") pixelize_cartesian( temp, xf_copy, yf_copy, clump[ftype, dxf].in_units("code_length") / 2.0, clump[ftype, dyf].in_units("code_length") / 2.0, clump[ftype, dxf].d * 0.0 + i + 1, # inits inside Pixelize bounds, 0, ) buff = np.maximum(temp, buff) if plot._swap_axes: buff = buff.transpose() extent = (extent[2], extent[3], extent[0], extent[1]) self.rv = plot._axes.contour( buff, np.unique(buff), extent=extent, **self.plot_args ) class ArrowCallback(PlotCallback): """ Overplot arrow(s) pointing at position(s) for highlighting specific features. By default, arrow points from lower left to the designated position "pos" with arrow length "length". Alternatively, if "starting_pos" is set, arrow will stretch from "starting_pos" to "pos" and "length" will be disregarded. "coord_system" keyword refers to positions set in "pos" arg and "starting_pos" keyword, which by default are in data coordinates. "length", "width", "head_length", and "head_width" keywords for the arrow are all in axis units, ie relative to the size of the plot axes as 1, even if the position of the arrow is set relative to another coordinate system. Parameters ---------- pos : array-like These are the coordinates where the marker(s) will be overplotted Either as [x,y,z] or as [[x1,x2,...],[y1,y2,...],[z1,z2,...]] length : float, optional The length, in axis units, of the arrow. Default: 0.03 width : float, optional The width, in axis units, of the tail line of the arrow. Default: 0.003 head_length : float, optional The length, in axis units, of the head of the arrow. If set to None, use 1.5*head_width Default: None head_width : float, optional The width, in axis units, of the head of the arrow. Default: 0.02 starting_pos : 2- or 3-element tuple, list, or array, optional These are the coordinates from which the arrow starts towards its point. Not compatible with 'length' kwarg. coord_system : string, optional This string defines the coordinate system of the coordinates of pos Valid coordinates are: "data" -- the 3D dataset coordinates "plot" -- the 2D coordinates defined by the actual plot limits "axis" -- the MPL axis coordinates: (0,0) is lower left; (1,1) is upper right "figure" -- the MPL figure coordinates: (0,0) is lower left, (1,1) is upper right plot_args : dictionary, optional This dictionary is passed to the MPL arrow function for generating the arrow. 
By default, it is: {'color':'white'} Examples -------- >>> # Overplot an arrow pointing to feature at data coord: (0.2, 0.3, 0.4) >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> s = yt.SlicePlot(ds, "z", "density") >>> s.annotate_arrow([0.2, 0.3, 0.4]) >>> s.save() >>> # Overplot a red arrow with longer length pointing to plot coordinate >>> # (0.1, -0.1) >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> s = yt.SlicePlot(ds, "z", "density") >>> s.annotate_arrow( ... [0.1, -0.1], length=0.06, coord_system="plot", color="red" ... ) >>> s.save() """ _type_name = "arrow" _supported_geometries = ("cartesian", "spectral_cube", "cylindrical") def __init__( self, pos, *, length=0.03, width=0.0001, head_width=0.01, head_length=0.01, starting_pos=None, coord_system="data", plot_args: dict[str, Any] | None = None, # deprecated **kwargs, ): self.pos = pos self.length = length self.width = width self.head_width = head_width self.head_length = head_length self.starting_pos = starting_pos self.coord_system = coord_system self.transform = None if plot_args is not None: issue_deprecation_warning( "`plot_args` is deprecated. " "You can now pass arbitrary keyword arguments instead of a dictionary.", since="4.1", stacklevel=5, ) self.plot_args = { "color": "white", **(plot_args or {}), **kwargs, } def __call__(self, plot): x, y = self._sanitize_coord_system( plot, self.pos, coord_system=self.coord_system ) xx0, xx1, yy0, yy1 = self._plot_bounds(plot) # normalize all of the kwarg lengths to the plot size plot_diag = ((yy1 - yy0) ** 2 + (xx1 - xx0) ** 2) ** (0.5) length = self.length * plot_diag width = self.width * plot_diag head_width = self.head_width * plot_diag head_length = None if self.head_length is not None: head_length = self.head_length * plot_diag if self.starting_pos is not None: start_x, start_y = self._sanitize_coord_system( plot, self.starting_pos, coord_system=self.coord_system ) dx = x - start_x dy = y - start_y else: dx = (xx1 - xx0) * 2 ** (0.5) * length dy = (yy1 - yy0) * 2 ** (0.5) * length # If the arrow is 0 length if dx == dy == 0: warnings.warn("The arrow has zero length. Not annotating.", stacklevel=2) return x, y, dx, dy = self._sanitize_xy_order(plot, x, y, dx, dy) try: plot._axes.arrow( x - dx, y - dy, dx, dy, width=width, head_width=head_width, head_length=head_length, transform=self.transform, length_includes_head=True, **self.plot_args, ) except ValueError: for i in range(len(x)): plot._axes.arrow( x[i] - dx, y[i] - dy, dx, dy, width=width, head_width=head_width, head_length=head_length, transform=self.transform, length_includes_head=True, **self.plot_args, ) self._set_plot_limits(plot, (xx0, xx1, yy0, yy1)) class MarkerAnnotateCallback(PlotCallback): """ Overplot marker(s) at a position(s) for highlighting specific features. Parameters ---------- pos : array-like These are the coordinates where the marker(s) will be overplotted Either as [x,y,z] or as [[x1,x2,...],[y1,y2,...],[z1,z2,...]] marker : string, optional The shape of the marker to be passed to the MPL scatter function. By default, it is 'x', but other acceptable values are: '.', 'o', 'v', '^', 's', 'p' '*', etc. See matplotlib.markers for more information. 
coord_system : string, optional
        This string defines the coordinate system of the coordinates of pos.
        Valid coordinates are:

        "data" -- the 3D dataset coordinates

        "plot" -- the 2D coordinates defined by the actual plot limits

        "axis" -- the MPL axis coordinates: (0,0) is lower left; (1,1) is
        upper right

        "figure" -- the MPL figure coordinates: (0,0) is lower left, (1,1)
        is upper right

    plot_args : dictionary, optional
        This dictionary is passed to the MPL scatter function for generating
        the marker.  By default, it is: {'color':'white', 's':50}

    Examples
    --------
    >>> # Overplot a white X on a feature at data location (0.4, 0.5, 0.6)
    >>> import yt
    >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    >>> s = yt.SlicePlot(ds, "z", "density")
    >>> s.annotate_marker([0.4, 0.5, 0.6])
    >>> s.save()

    >>> # Overplot a big yellow circle at axis location (0.1, 0.2)
    >>> import yt
    >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    >>> s = yt.SlicePlot(ds, "z", "density")
    >>> s.annotate_marker(
    ...     [0.1, 0.2],
    ...     marker="o",
    ...     coord_system="axis",
    ...     color="yellow",
    ...     s=200,
    ... )
    >>> s.save()
    """

    _type_name = "marker"
    _supported_geometries = ("cartesian", "spectral_cube", "polar", "cylindrical")

    def __init__(
        self, pos, marker="x", *, coord_system="data", plot_args=None, **kwargs
    ):
        self.pos = pos
        self.marker = marker
        if plot_args is not None:
            issue_deprecation_warning(
                "`plot_args` is deprecated. "
                "You can now pass arbitrary keyword arguments instead of a dictionary.",
                since="4.1",
                stacklevel=5,
            )
        self.plot_args = {
            "color": "white",
            "s": 50,
            **(plot_args or {}),
            **kwargs,
        }
        self.coord_system = coord_system
        self.transform = None

    def __call__(self, plot):
        x, y = self._sanitize_coord_system(
            plot, self.pos, coord_system=self.coord_system
        )
        x, y = self._sanitize_xy_order(plot, x, y)
        plot._axes.scatter(
            x, y, marker=self.marker, transform=self.transform, **self.plot_args
        )
        self._set_plot_limits(plot)


class SphereCallback(PlotCallback):
    """
    Overplot a circle with designated center and radius with optional text.

    Parameters
    ----------
    center : 2- or 3-element tuple, list, or array
        These are the coordinates where the circle will be overplotted.

    radius : YTArray, float, or (value, unit) style tuple, e.g. (1, 'kpc')
        The radius of the circle; a bare float is interpreted in code units.

    circle_args : dict, optional
        This dictionary is passed to the MPL circle object.  By default,
        {'color':'white'}

    coord_system : string, optional
        This string defines the coordinate system of the coordinates of
        center.  Valid coordinates are:

        "data" -- the 3D dataset coordinates

        "plot" -- the 2D coordinates defined by the actual plot limits

        "axis" -- the MPL axis coordinates: (0,0) is lower left; (1,1) is
        upper right

        "figure" -- the MPL figure coordinates: (0,0) is lower left, (1,1)
        is upper right

    text : string, optional
        Optional text to include next to the circle.

    text_args : dictionary, optional
        This dictionary is passed to the MPL text function. 
By default, it is: {'color':'white'} Examples -------- >>> # Overplot a white circle of radius 100 kpc over the central galaxy >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> s = yt.SlicePlot(ds, "z", "density") >>> s.annotate_sphere([0.5, 0.5, 0.5], radius=(100, "kpc")) >>> s.save() """ _type_name = "sphere" _supported_geometries = ("cartesian", "spectral_cube", "polar", "cylindrical") def __init__( self, center, radius, *, coord_system="data", text=None, circle_args=None, text_args=None, ): self.center = center self.radius = radius self.circle_args = { "color": "white", "fill": False, **(circle_args or {}), } self.text = text self.text_args = { "color": "white", **(text_args or {}), } self.coord_system = coord_system self.transform = None def __call__(self, plot): from matplotlib.patches import Circle if is_sequence(self.radius): self.radius = plot.data.ds.quan(self.radius[0], self.radius[1]) self.radius = self.radius.in_units(plot.xlim[0].units) if isinstance(self.radius, YTQuantity): if isinstance(self.center, YTArray): units = self.center.units else: units = "code_length" self.radius = self.radius.to(units) # This assures the radius has the appropriate size in # the different coordinate systems, since one cannot simply # apply a different transform for a length in the same way # you can for a coordinate. if self.coord_system == "data" or self.coord_system == "plot": scaled_radius = self.radius * self._pixel_scale(plot)[0] else: scaled_radius = self.radius / (plot.xlim[1] - plot.xlim[0]) x, y = self._sanitize_coord_system( plot, self.center, coord_system=self.coord_system ) x, y = self._sanitize_xy_order(plot, x, y) cir = Circle( (x, y), scaled_radius.v, transform=self.transform, **self.circle_args ) plot._axes.add_patch(cir) if self.text is not None: label = plot._axes.text( x, y, self.text, transform=self.transform, **self.text_args ) self._set_font_properties(plot, [label], **self.text_args) self._set_plot_limits(plot) class TextLabelCallback(PlotCallback): """ Overplot text on the plot at a specified position. If you desire an inset box around your text, set one with the inset_box_args dictionary keyword. Parameters ---------- pos : 2- or 3-element tuple, list, or array These are the coordinates where the text will be overplotted text : string The text you wish to include coord_system : string, optional This string defines the coordinate system of the coordinates of pos Valid coordinates are: "data" -- the 3D dataset coordinates "plot" -- the 2D coordinates defined by the actual plot limits "axis" -- the MPL axis coordinates: (0,0) is lower left; (1,1) is upper right "figure" -- the MPL figure coordinates: (0,0) is lower left, (1,1) is upper right text_args : dictionary, optional This dictionary is passed to the MPL text function for generating the text. By default, it is: {'color':'white'} and uses the defaults for the other fonts in the image. inset_box_args : dictionary, optional A dictionary of any arbitrary parameters to be passed to the Matplotlib FancyBboxPatch object as the inset box around the text. 
Default: {} Examples -------- >>> # Overplot white text at data location [0.55, 0.7, 0.4] >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> s = yt.SlicePlot(ds, "z", "density") >>> s.annotate_text([0.55, 0.7, 0.4], "Here is a galaxy") >>> s.save() >>> # Overplot yellow text at axis location [0.2, 0.8] with >>> # a shaded inset box >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> s = yt.SlicePlot(ds, "z", "density") >>> s.annotate_text( ... [0.2, 0.8], ... "Here is a galaxy", ... coord_system="axis", ... text_args={"color": "yellow"}, ... inset_box_args={ ... "boxstyle": "square,pad=0.3", ... "facecolor": "black", ... "linewidth": 3, ... "edgecolor": "white", ... "alpha": 0.5, ... }, ... ) >>> s.save() """ _type_name = "text" _supported_geometries: tuple[str, ...] = ( "cartesian", "spectral_cube", "polar", "cylindrical", ) def __init__( self, pos, text, *, coord_system="data", text_args=None, inset_box_args=None, ): self.pos = pos self.text = text self.text_args = { "color": "white", **(text_args or {}), } self.inset_box_args = inset_box_args self.coord_system = coord_system self.transform = None def __call__(self, plot): kwargs = self.text_args.copy() x, y = self._sanitize_coord_system( plot, self.pos, coord_system=self.coord_system ) # Set the font properties of text from this callback to be # consistent with other text labels in this figure if self.inset_box_args is not None: kwargs["bbox"] = self.inset_box_args x, y = self._sanitize_xy_order(plot, x, y) label = plot._axes.text(x, y, self.text, transform=self.transform, **kwargs) self._set_font_properties(plot, [label], **kwargs) self._set_plot_limits(plot) class ParticleCallback(PlotCallback): """ Adds particle positions, based on a thick slab along *axis* with a *width* along the line of sight. *p_size* controls the number of pixels per particle, and *col* governs the color. *ptype* will restrict plotted particles to only those that are of a given type. *alpha* determines the opacity of the marker symbol used in the scatter. An alternate data source can be specified with *data_source*, but by default the plot's data source will be queried. """ _type_name = "particles" region = None _descriptor = None _supported_geometries = ("cartesian", "spectral_cube", "cylindrical") _incompatible_plot_types = ("OffAxisSlice", "OffAxisProjection") def __init__( self, width, p_size=1.0, col="k", marker="o", stride=1, ptype="all", alpha=1.0, data_source=None, ): self.width = width self.p_size = p_size self.color = col self.marker = marker self.stride = stride self.ptype = ptype self.alpha = alpha self.data_source = data_source def __call__(self, plot): data = plot.data if is_sequence(self.width): validate_width_tuple(self.width) self.width = plot.data.ds.quan(self.width[0], self.width[1]) elif isinstance(self.width, YTQuantity): self.width = plot.data.ds.quan(self.width.value, self.width.units) else: self.width = plot.data.ds.quan(self.width, "code_length") # we construct a rectangular prism x0, x1, y0, y1 = self._physical_bounds(plot) if isinstance(self.data_source, YTCutRegion): mylog.warning( "Parameter 'width' is ignored in annotate_particles if the " "data_source is a cut_region. " "See https://github.com/yt-project/yt/issues/1933 for further details." 
) self.region = self.data_source else: self.region = self._get_region((x0, x1), (y0, y1), plot.data.axis, data) ax = data.axis xax = plot.data.ds.coordinates.x_axis[ax] yax = plot.data.ds.coordinates.y_axis[ax] axis_names = plot.data.ds.coordinates.axis_name field_x = f"particle_position_{axis_names[xax]}" field_y = f"particle_position_{axis_names[yax]}" pt = self.ptype self.periodic_x = plot.data.ds.periodicity[xax] self.periodic_y = plot.data.ds.periodicity[yax] self.LE = plot.data.ds.domain_left_edge[xax], plot.data.ds.domain_left_edge[yax] self.RE = ( plot.data.ds.domain_right_edge[xax], plot.data.ds.domain_right_edge[yax], ) period_x = plot.data.ds.domain_width[xax] period_y = plot.data.ds.domain_width[yax] particle_x, particle_y = self._enforce_periodic( self.region[pt, field_x], self.region[pt, field_y], x0, x1, period_x, y0, y1, period_y, ) gg = ( (particle_x >= x0) & (particle_x <= x1) & (particle_y >= y0) & (particle_y <= y1) ) px, py = [particle_x[gg][:: self.stride], particle_y[gg][:: self.stride]] px, py = self._convert_to_plot(plot, [px, py]) px, py = self._sanitize_xy_order(plot, px, py) plot._axes.scatter( px, py, edgecolors="None", marker=self.marker, s=self.p_size, c=self.color, alpha=self.alpha, ) self._set_plot_limits(plot) def _enforce_periodic( self, particle_x, particle_y, x0, x1, period_x, y0, y1, period_y ): # duplicate particles if periodic in that direction AND if the plot # extends outside the domain boundaries. if self.periodic_x and x0 > self.RE[0]: particle_x = uhstack((particle_x, particle_x + period_x)) particle_y = uhstack((particle_y, particle_y)) if self.periodic_x and x1 < self.LE[0]: particle_x = uhstack((particle_x, particle_x - period_x)) particle_y = uhstack((particle_y, particle_y)) if self.periodic_y and y0 > self.RE[1]: particle_y = uhstack((particle_y, particle_y + period_y)) particle_x = uhstack((particle_x, particle_x)) if self.periodic_y and y1 < self.LE[1]: particle_y = uhstack((particle_y, particle_y - period_y)) particle_x = uhstack((particle_x, particle_x)) return particle_x, particle_y def _get_region(self, xlim, ylim, axis, data): LE, RE = [None] * 3, [None] * 3 ds = data.ds xax = ds.coordinates.x_axis[axis] yax = ds.coordinates.y_axis[axis] zax = axis LE[xax], RE[xax] = xlim LE[yax], RE[yax] = ylim LE[zax] = data.center[zax] - self.width * 0.5 LE[zax].convert_to_units("code_length") RE[zax] = LE[zax] + self.width if ( self.region is not None and np.all(self.region.left_edge <= LE) and np.all(self.region.right_edge >= RE) ): return self.region self.region = data.ds.region(data.center, LE, RE, data_source=self.data_source) return self.region class TitleCallback(PlotCallback): """ Accepts a *title* and adds it to the plot """ _type_name = "title" def __init__(self, title): self.title = title def __call__(self, plot): plot._axes.set_title(self.title) # Set the font properties of text from this callback to be # consistent with other text labels in this figure label = plot._axes.title self._set_font_properties(plot, [label]) class MeshLinesCallback(PlotCallback): """ Adds mesh lines to the plot. Only works for unstructured or semi-structured mesh data. For structured grid data, see GridBoundaryCallback or CellEdgesCallback. Parameters ---------- plot_args: dict, optional A dictionary of arguments that will be passed to matplotlib. 
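In practice these keyword arguments are forwarded to the
        ``matplotlib.collections.LineCollection`` that draws the element
        edges (via ``TriangleFacetsCallback`` below), so options such as
        ``color`` and ``linewidth`` behave as they do for any matplotlib
        line collection.

    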
Example ------- >>> import yt >>> ds = yt.load("MOOSE_sample_data/out.e-s010") >>> sl = yt.SlicePlot(ds, "z", ("connect2", "convected")) >>> sl.annotate_mesh_lines(color="black") """ _type_name = "mesh_lines" _supported_geometries = ("cartesian", "spectral_cube") _incompatible_plot_types = ("OffAxisSlice", "OffAxisProjection") def __init__(self, *, plot_args=None, **kwargs): if plot_args is not None: issue_deprecation_warning( "`plot_args` is deprecated. " "You can now pass arbitrary keyword arguments instead of a dictionary.", since="4.1", stacklevel=5, ) plot_args.update(kwargs) else: plot_args = kwargs self.plot_args = plot_args def promote_2d_to_3d(self, coords, indices, plot): new_coords = np.zeros((2 * coords.shape[0], 3)) new_connects = np.zeros( (indices.shape[0], 2 * indices.shape[1]), dtype=np.int64 ) new_coords[0 : coords.shape[0], 0:2] = coords new_coords[0 : coords.shape[0], 2] = plot.ds.domain_left_edge[2] new_coords[coords.shape[0] :, 0:2] = coords new_coords[coords.shape[0] :, 2] = plot.ds.domain_right_edge[2] new_connects[:, 0 : indices.shape[1]] = indices new_connects[:, indices.shape[1] :] = indices + coords.shape[0] return new_coords, new_connects def __call__(self, plot): index = plot.ds.index if not issubclass(type(index), UnstructuredIndex): raise RuntimeError( "Mesh line annotations only work for " "unstructured or semi-structured mesh data." ) for i, m in enumerate(index.meshes): try: ftype, fname = plot.field if ftype.startswith("connect") and int(ftype[-1]) - 1 != i: continue except ValueError: pass coords = m.connectivity_coords indices = m.connectivity_indices - m._index_offset num_verts = indices.shape[1] num_dims = coords.shape[1] if num_dims == 2 and num_verts == 3: coords, indices = self.promote_2d_to_3d(coords, indices, plot) elif num_dims == 2 and num_verts == 4: coords, indices = self.promote_2d_to_3d(coords, indices, plot) tri_indices = triangulate_indices(indices.astype(np.int_)) points = coords[tri_indices] tfc = TriangleFacetsCallback(points, **self.plot_args) tfc(plot) class TriangleFacetsCallback(PlotCallback): """ Intended for representing a slice of a triangular faceted geometry in a slice plot. Uses a set of *triangle_vertices* to find all triangles the plane of a SlicePlot intersects with. The lines between the intersection points of the triangles are then added to the plot to create an outline of the geometry represented by the triangles. """ _type_name = "triangle_facets" _supported_geometries = ("cartesian", "spectral_cube") def __init__(self, triangle_vertices, *, plot_args=None, **kwargs): if plot_args is not None: issue_deprecation_warning( "`plot_args` is deprecated. 
" "You can now pass arbitrary keyword arguments instead of a dictionary.", since="4.1", stacklevel=5, ) plot_args.update(kwargs) else: plot_args = kwargs self.plot_args = kwargs self.vertices = triangle_vertices def __call__(self, plot): ax = plot.data.axis xax = plot.data.ds.coordinates.x_axis[ax] yax = plot.data.ds.coordinates.y_axis[ax] if not hasattr(self.vertices, "in_units"): vertices = plot.data.pf.arr(self.vertices, "code_length") else: vertices = self.vertices l_cy = triangle_plane_intersect(plot.data.axis, plot.data.coord, vertices)[ :, :, (xax, yax) ] # l_cy is shape (nlines, 2, 2) # reformat for conversion to plot coordinates l_cy = np.rollaxis(l_cy, 0, 3) # shape is now (2, 2, nlines) # convert all line starting points l_cy[0] = self._convert_to_plot(plot, l_cy[0]) # convert all line ending points l_cy[1] = self._convert_to_plot(plot, l_cy[1]) if plot._swap_axes: # more convenient to swap the x, y values here before final roll x0, y0 = l_cy[0] # x, y values of start points x1, y1 = l_cy[1] # x, y values of end points l_cy[0] = np.vstack([y0, x0]) # swap x, y for start points l_cy[1] = np.vstack([y1, x1]) # swap x, y for end points # convert back to shape (nlines, 2, 2) l_cy = np.rollaxis(l_cy, 2, 0) # create line collection and add it to the plot lc = matplotlib.collections.LineCollection(l_cy, **self.plot_args) plot._axes.add_collection(lc) class TimestampCallback(PlotCallback): r""" Annotates the timestamp and/or redshift of the data output at a specified location in the image (either in a present corner, or by specifying (x,y) image coordinates with the x_pos, y_pos arguments. If no time_units are specified, it will automatically choose appropriate units. It allows for custom formatting of the time and redshift information, as well as the specification of an inset box around the text. Parameters ---------- x_pos, y_pos : floats, optional The image location of the timestamp in the coord system defined by the coord_system kwarg. Setting x_pos and y_pos overrides the corner parameter. corner : string, optional Corner sets up one of 4 predeterimined locations for the timestamp to be displayed in the image: 'upper_left', 'upper_right', 'lower_left', 'lower_right' (also allows None). This value will be overridden by the optional x_pos and y_pos keywords. time : boolean, optional Whether or not to show the ds.current_time of the data output. Can be used solo or in conjunction with redshift parameter. redshift : boolean, optional Whether or not to show the ds.current_time of the data output. Can be used solo or in conjunction with the time parameter. time_format : string, optional This specifies the format of the time output assuming "time" is the number of time and "unit" is units of the time (e.g. 's', 'Myr', etc.) The time can be specified to arbitrary precision according to printf formatting codes (defaults to .1f -- a float with 1 digits after decimal). Example: "Age = {time:.2f} {units}". time_unit : string, optional time_unit must be a valid yt time unit (e.g. 's', 'min', 'hr', 'yr', 'Myr', etc.) redshift_format : string, optional This specifies the format of the redshift output. The redshift can be specified to arbitrary precision according to printf formatting codes (defaults to 0.2f -- a float with 2 digits after decimal). Example: "REDSHIFT = {redshift:03.3g}", draw_inset_box : boolean, optional Whether or not an inset box should be included around the text If so, it uses the inset_box_args to set the matplotlib FancyBboxPatch object. 
coord_system : string, optional This string defines the coordinate system of the coordinates of pos Valid coordinates are: - "data": 3D dataset coordinates - "plot": 2D coordinates defined by the actual plot limits - "axis": MPL axis coordinates: (0,0) is lower left; (1,1) is upper right - "figure": MPL figure coordinates: (0,0) is lower left, (1,1) is upper right time_offset : float, (value, unit) tuple, or YTQuantity, optional Apply an offset to the time shown in the annotation from the value of the current time. If a scalar value with no units is passed in, the value of the *time_unit* kwarg is used for the units. Default: None, meaning no offset. text_args : dictionary, optional A dictionary of any arbitrary parameters to be passed to the Matplotlib text object. Defaults: ``{'color':'white', 'horizontalalignment':'center', 'verticalalignment':'top'}``. inset_box_args : dictionary, optional A dictionary of any arbitrary parameters to be passed to the Matplotlib FancyBboxPatch object as the inset box around the text. Defaults: ``{'boxstyle':'square', 'pad':0.3, 'facecolor':'black', 'linewidth':3, 'edgecolor':'white', 'alpha':0.5}`` Example ------- >>> import yt >>> ds = yt.load("Enzo_64/DD0020/data0020") >>> s = yt.SlicePlot(ds, "z", "density") >>> s.annotate_timestamp() """ _type_name = "timestamp" _supported_geometries = ( "cartesian", "spectral_cube", "cylindrical", "polar", "spherical", ) def __init__( self, x_pos=None, y_pos=None, corner="lower_left", *, time=True, redshift=False, time_format="t = {time:.1f} {units}", time_unit=None, redshift_format="z = {redshift:.2f}", draw_inset_box=False, coord_system="axis", time_offset=None, text_args=None, inset_box_args=None, ): # Set position based on corner argument. self.pos = (x_pos, y_pos) self.corner = corner self.time = time self.redshift = redshift self.time_format = time_format self.redshift_format = redshift_format self.time_unit = time_unit self.coord_system = coord_system self.time_offset = time_offset self.text_args = { "color": "white", "horizontalalignment": "center", "verticalalignment": "top", **(text_args or {}), } if draw_inset_box: self.inset_box_args = { "boxstyle": "square,pad=0.3", "facecolor": "black", "linewidth": 3, "edgecolor": "white", "alpha": 0.5, **(inset_box_args or {}), } else: self.inset_box_args = None def __call__(self, plot): # Setting pos overrides corner argument if self.pos[0] is None or self.pos[1] is None: if self.corner == "upper_left": self.pos = (0.03, 0.96) self.text_args["horizontalalignment"] = "left" self.text_args["verticalalignment"] = "top" elif self.corner == "upper_right": self.pos = (0.97, 0.96) self.text_args["horizontalalignment"] = "right" self.text_args["verticalalignment"] = "top" elif self.corner == "lower_left": self.pos = (0.03, 0.03) self.text_args["horizontalalignment"] = "left" self.text_args["verticalalignment"] = "bottom" elif self.corner == "lower_right": self.pos = (0.97, 0.03) self.text_args["horizontalalignment"] = "right" self.text_args["verticalalignment"] = "bottom" elif self.corner is None: self.pos = (0.5, 0.5) self.text_args["horizontalalignment"] = "center" self.text_args["verticalalignment"] = "center" else: raise ValueError( "Argument 'corner' must be set to " "'upper_left', 'upper_right', 'lower_left', " "'lower_right', or None" ) self.text = "" # If we're annotating the time, put it in the correct format if self.time: # If no time_units are set, then identify a best fit time unit if self.time_unit is None: if plot.ds._uses_code_time_unit: # if the unit system 
is in code units
                    # we should not convert to seconds for the plot.
                    self.time_unit = plot.ds.unit_system.base_units[dimensions.time]
                else:
                    # in the case of non-code units, pick the most appropriate
                    # time unit for the dataset's current time.
                    self.time_unit = plot.ds.get_smallest_appropriate_unit(
                        plot.ds.current_time, quantity="time"
                    )
            t = plot.ds.current_time.in_units(self.time_unit)
            if self.time_offset is not None:
                if isinstance(self.time_offset, tuple):
                    toffset = plot.ds.quan(self.time_offset[0], self.time_offset[1])
                elif isinstance(self.time_offset, Number):
                    toffset = plot.ds.quan(self.time_offset, self.time_unit)
                elif not isinstance(self.time_offset, YTQuantity):
                    raise RuntimeError(
                        "'time_offset' must be a float, tuple, or YTQuantity!"
                    )
                t -= toffset.in_units(self.time_unit)
            try:
                # here the time unit will be in brackets on the annotation.
                un = self.time_unit.latex_representation()
                time_unit = r"$\ \ (" + un + r")$"
            except AttributeError:
                time_unit = str(self.time_unit).replace("_", " ")
            self.text += self.time_format.format(time=float(t), units=time_unit)

        # If time and redshift both shown, do one on top of the other
        if self.time and self.redshift:
            self.text += "\n"

        if self.redshift and not hasattr(plot.data.ds, "current_redshift"):
            warnings.warn(
                f"dataset {plot.data.ds} does not have current_redshift attribute. "
                "Set redshift=False to silence this warning.",
                stacklevel=2,
            )
            self.redshift = False

        # If we're annotating the redshift, put it in the correct format
        if self.redshift:
            z = plot.data.ds.current_redshift
            # Replace instances of -0.0* with 0.0* to avoid
            # negative null redshifts (e.g., "-0.00").
            self.text += self.redshift_format.format(redshift=float(z))
            self.text = re.sub("-(0.0*)$", r"\g<1>", self.text)

        # This is just a fancy wrapper around the TextLabelCallback
        tcb = TextLabelCallback(
            self.pos,
            self.text,
            coord_system=self.coord_system,
            text_args=self.text_args,
            inset_box_args=self.inset_box_args,
        )
        return tcb(plot)


class ScaleCallback(PlotCallback):
    r"""
    Annotates the scale of the plot at a specified location in the image
    (either in a preset corner, or by specifying (x,y) image coordinates with
    the pos argument).  Coeff and units (e.g. 1 Mpc or 100 kpc) refer to the
    distance scale you desire to show on the plot.  If no coeff and units are
    specified, an appropriate pair will be determined such that your scale
    bar is never smaller than min_frac or greater than max_frac of your
    plottable axis length.  Additional customization of the scale bar is
    possible by adjusting the text_args and size_bar_args dictionaries.  The
    text_args dictionary accepts matplotlib's font_properties arguments to
    override the default font_properties for the current plot.  The
    size_bar_args dictionary accepts keyword arguments for the AnchoredSizeBar
    class in matplotlib's axes_grid toolkit.

    Parameters
    ----------
    corner : string, optional
        Corner sets up one of 4 predetermined locations for the scale bar
        to be displayed in the image: 'upper_left', 'upper_right',
        'lower_left', 'lower_right' (also allows None).  This value will be
        overridden by the optional 'pos' keyword.

    coeff : float, optional
        The coefficient of the unit defining the distance scale (e.g. 10
        kpc or 100 Mpc) for overplotting.  If set to None along with unit
        keyword, coeff will be automatically determined to be a power of 10
        relative to the best-fit unit.

    unit : string, optional
        unit must be a valid yt distance unit (e.g. 'm', 'km', 'AU', 'pc',
        'kpc', etc.) or set to None.  If set to None, will be automatically
        determined to be the best-fit to the data. 
pos : 2- or 3-element tuples, lists, or arrays, optional
        The image location of the scale bar in the plot coordinate system.
        Setting pos overrides the corner parameter.

    min_frac, max_frac: float, optional
        The minimum/maximum fraction of the axis width for the scale bar to
        extend.  A value of 1 would allow the scale bar to extend across the
        entire axis width.  Only used for automatically calculating best-fit
        coeff and unit when neither is specified, otherwise disregarded.

    coord_system : string, optional
        This string defines the coordinate system of the coordinates of pos.
        Valid coordinates are:

        - "data": 3D dataset coordinates
        - "plot": 2D coordinates defined by the actual plot limits
        - "axis": MPL axis coordinates: (0,0) is lower left; (1,1) is
          upper right
        - "figure": MPL figure coordinates: (0,0) is lower left, (1,1)
          is upper right

    text_args : dictionary, optional
        A dictionary of parameters used to update the font_properties
        for the text in this callback.  For any property not set, it will
        use the defaults of the plot.  Thus one can modify the text size with
        ``text_args={'size':24}``

    size_bar_args : dictionary, optional
        A dictionary of parameters to be passed to the Matplotlib
        AnchoredSizeBar initializer.
        Defaults: ``{'pad': 0.05, 'sep': 5, 'borderpad': 1, 'color': 'white'}``

    draw_inset_box : boolean, optional
        Whether or not an inset box should be included around the scale bar.

    inset_box_args : dictionary, optional
        A dictionary of keyword arguments to be passed to the matplotlib Patch
        object that represents the inset box.
        Defaults: ``{'facecolor': 'black', 'linewidth': 3, 'edgecolor':
        'white', 'alpha': 0.5, 'boxstyle': 'square'}``

    scale_text_format : string, optional
        This specifies the format of the scalebar value assuming "scale" is
        the numerical value and "units" is the unit of the scale (e.g. 'cm',
        'kpc', etc.)  The scale can be specified to arbitrary precision
        according to printf formatting codes.  The format string must only
        specify "scale" and "units".
        Example: "Length = {scale:.2f} {units}".
        Default: "{scale} {units}"

    Example
    -------

    >>> import yt
    >>> ds = yt.load("Enzo_64/DD0020/data0020")
    >>> s = yt.SlicePlot(ds, "z", "density")
    >>> s.annotate_scale()
    """

    _type_name = "scale"
    _supported_geometries = ("cartesian", "spectral_cube", "force")

    def __init__(
        self,
        *,
        corner="lower_right",
        coeff=None,
        unit=None,
        pos=None,
        max_frac=0.16,
        min_frac=0.015,
        coord_system="axis",
        text_args=None,
        size_bar_args=None,
        draw_inset_box=False,
        inset_box_args=None,
        scale_text_format="{scale} {units}",
    ):
        # Set position based on corner argument. 
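        # Note: 'pos' is stored as-is here; if it is None, the 'corner'
        # preset is translated into concrete axis coordinates in __call__
        # below, and an explicit 'pos' always takes precedence over 'corner'.
        # The dict merges below layer any user-supplied size_bar_args and
        # inset_box_args over the defaults; user values win because they
        # are unpacked last.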
self.corner = corner self.coeff = coeff self.unit = unit self.pos = pos self.max_frac = max_frac self.min_frac = min_frac self.coord_system = coord_system self.scale_text_format = scale_text_format self.size_bar_args = { "pad": 0.05, "sep": 5, "borderpad": 1, "color": "white", **(size_bar_args or {}), } self.inset_box_args = { "facecolor": "black", "linewidth": 3, "edgecolor": "white", "alpha": 0.5, "boxstyle": "square", **(inset_box_args or {}), } self.text_args = text_args or {} self.draw_inset_box = draw_inset_box def __call__(self, plot): from mpl_toolkits.axes_grid1.anchored_artists import AnchoredSizeBar # Callback only works for plots with axis ratios of 1 xsize = plot.xlim[1] - plot.xlim[0] # Setting pos overrides corner argument if self.pos is None: if self.corner == "upper_left": self.pos = (0.11, 0.952) elif self.corner == "upper_right": self.pos = (0.89, 0.952) elif self.corner == "lower_left": self.pos = (0.11, 0.052) elif self.corner == "lower_right": self.pos = (0.89, 0.052) elif self.corner is None: self.pos = (0.5, 0.5) else: raise ValueError( "Argument 'corner' must be set to " "'upper_left', 'upper_right', 'lower_left', " "'lower_right', or None" ) # When identifying a best fit distance unit, do not allow scale marker # to be greater than max_frac fraction of xaxis or under min_frac # fraction of xaxis max_scale = self.max_frac * xsize min_scale = self.min_frac * xsize # If no units are set, pick something sensible. if self.unit is None: # User has set the axes units and supplied a coefficient. if plot._axes_unit_names is not None and self.coeff is not None: self.unit = plot._axes_unit_names[0] # Nothing provided; identify a best fit distance unit. else: min_scale = plot.ds.get_smallest_appropriate_unit( min_scale, return_quantity=True ) max_scale = plot.ds.get_smallest_appropriate_unit( max_scale, return_quantity=True ) if self.coeff is None: self.coeff = max_scale.v self.unit = max_scale.units elif self.coeff is None: self.coeff = 1 self.scale = plot.ds.quan(self.coeff, self.unit) text = self.scale_text_format.format(scale=int(self.coeff), units=self.unit) image_scale = ( plot.frb.convert_distance_x(self.scale) / plot.frb.convert_distance_x(xsize) ).v size_vertical = self.size_bar_args.pop("size_vertical", 0.005 * plot.aspect) fontproperties = self.size_bar_args.pop( "fontproperties", plot.font_properties.copy() ) frameon = self.size_bar_args.pop("frameon", self.draw_inset_box) # FontProperties instances use set_() setter functions for key, val in self.text_args.items(): setter_func = "set_" + key try: getattr(fontproperties, setter_func)(val) except AttributeError as e: raise AttributeError( "Cannot set text_args keyword " f"to include {key!r} because MPL's fontproperties object does " f"not contain function {setter_func!r}" ) from e # this "anchors" the size bar to a box centered on self.pos in axis # coordinates self.size_bar_args["bbox_to_anchor"] = self.pos self.size_bar_args["bbox_transform"] = plot._axes.transAxes bar = AnchoredSizeBar( plot._axes.transAxes, image_scale, text, 10, size_vertical=size_vertical, fontproperties=fontproperties, frameon=frameon, **self.size_bar_args, ) bar.patch.set(**self.inset_box_args) plot._axes.add_artist(bar) return plot class RayCallback(PlotCallback): """ Adds a line representing the projected path of a ray across the plot. The ray can be either a YTOrthoRay, YTRay, or a LightRay object. annotate_ray() will properly account for periodic rays across the volume. 
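When the dataset is periodic, a ray that crosses a domain boundary is
    first broken into non-periodic segments at the domain edges, and each
    segment is overplotted individually; only the final segment carries the
    optional arrowhead.

    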
If arrow is set to True, uses the MPL.pyplot.arrow function, otherwise uses the MPL.pyplot.plot function to plot a normal line. Adjust plot_args accordingly. Parameters ---------- ray : YTOrthoRay, YTRay, or LightRay Ray is the object that we want to include. We overplot the projected trajectory of the ray. If the object is a trident.LightRay object, it will only plot the segment of the LightRay that intersects the dataset currently displayed. arrow : boolean, optional Whether or not to place an arrowhead on the front of the ray to denote direction Default: False plot_args : dictionary, optional A dictionary of any arbitrary parameters to be passed to the Matplotlib line object. Defaults: {'color':'white', 'linewidth':2}. Examples -------- >>> # Overplot a ray and an ortho_ray object on a projection >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> oray = ds.ortho_ray(1, (0.3, 0.4)) # orthoray down the y axis >>> ray = ds.ray((0.1, 0.2, 0.3), (0.6, 0.7, 0.8)) # arbitrary ray >>> p = yt.ProjectionPlot(ds, "z", "density") >>> p.annotate_ray(oray) >>> p.annotate_ray(ray) >>> p.save() >>> # Overplot a LightRay object on a projection >>> import yt >>> from trident import LightRay >>> ds = yt.load("enzo_cosmology_plus/RD0004/RD0004") >>> lr = LightRay( ... "enzo_cosmology_plus/AMRCosmology.enzo", "Enzo", 0.0, 0.1, time_data=False ... ) >>> lray = lr.make_light_ray(seed=1) >>> p = yt.ProjectionPlot(ds, "z", "density") >>> p.annotate_ray(lr) >>> p.save() """ _type_name = "ray" _supported_geometries = ("cartesian", "spectral_cube", "force") def __init__(self, ray, *, arrow=False, plot_args=None, **kwargs): self.ray = ray self.arrow = arrow if plot_args is not None: issue_deprecation_warning( "`plot_args` is deprecated. " "You can now pass arbitrary keyword arguments instead of a dictionary.", since="4.1", stacklevel=5, ) self.plot_args = { "color": "white", "linewidth": 2, **(plot_args or {}), **kwargs, } def _process_ray(self): """ Get the start_coord and end_coord of a ray object """ return (self.ray.start_point, self.ray.end_point) def _process_ortho_ray(self): """ Get the start_coord and end_coord of an ortho_ray object """ start_coord = self.ray.ds.domain_left_edge.copy() end_coord = self.ray.ds.domain_right_edge.copy() xax = self.ray.ds.coordinates.x_axis[self.ray.axis] yax = self.ray.ds.coordinates.y_axis[self.ray.axis] start_coord[xax] = end_coord[xax] = self.ray.coords[0] start_coord[yax] = end_coord[yax] = self.ray.coords[1] return (start_coord, end_coord) def _process_light_ray(self, plot): """ Get the start_coord and end_coord of a LightRay object. Identify which of the sections of the LightRay is in the dataset that is currently being plotted. 
If there is one, return the start and end of the corresponding ray segment.
        """
        for ray_ds in self.ray.light_ray_solution:
            if ray_ds["unique_identifier"] == str(plot.ds.unique_identifier):
                start_coord = plot.ds.arr(ray_ds["start"])
                end_coord = plot.ds.arr(ray_ds["end"])
                return (start_coord, end_coord)
        # if no intersection between the plotted dataset and the LightRay
        # return a false tuple to pass to start_coord
        return ((False, False), (False, False))

    def __call__(self, plot):
        type_name = getattr(self.ray, "_type_name", None)
        if type_name == "ray":
            start_coord, end_coord = self._process_ray()
        elif type_name == "ortho_ray":
            start_coord, end_coord = self._process_ortho_ray()
        elif hasattr(self.ray, "light_ray_solution"):
            start_coord, end_coord = self._process_light_ray(plot)
        else:
            raise ValueError("ray must be a YTRay, YTOrthoRay, or LightRay object.")

        # if start_coord and end_coord are all False, it means no
        # ray segment intersects this plot.
        if not all(start_coord) and not all(end_coord):
            return plot

        # if possible, break periodic ray into non-periodic
        # segments and add each of them individually
        if any(plot.ds.periodicity):
            segments = periodic_ray(
                start_coord.to("code_length"),
                end_coord.to("code_length"),
                left=plot.ds.domain_left_edge.to("code_length"),
                right=plot.ds.domain_right_edge.to("code_length"),
            )
        else:
            segments = [[start_coord, end_coord]]

        # To ensure that the last ray segment has an arrow if so desired
        # and all other ray segments are lines
        for segment in segments[:-1]:
            cb = LinePlotCallback(
                segment[0], segment[1], coord_system="data", **self.plot_args
            )
            cb(plot)
        segment = segments[-1]

        if self.arrow:
            cb = ArrowCallback(
                segment[1],
                starting_pos=segment[0],
                coord_system="data",
                **self.plot_args,
            )
        else:
            cb = LinePlotCallback(
                segment[0], segment[1], coord_system="data", **self.plot_args
            )
        cb(plot)
        return plot


class LineIntegralConvolutionCallback(PlotCallback):
    """
    Add a line integral convolution to the plot for vector field
    visualization.  Two components of the vector field need to be provided
    (e.g., velocity_x and velocity_y, or magnetic_field_x and
    magnetic_field_y).

    Parameters
    ----------
    field_x, field_y : string
        The names of the two components of the vector field to be visualized.

    texture : 2-d array with the same shape as the image, optional
        The texture that will be convolved when computing the line integral
        convolution.  By default, a white-noise background is used.

    kernellen : float, optional
        The length of the convolution kernel, i.e. the length over which
        the convolution is performed.  Longer kernellen values produce
        longer streamline structures.

    lim : 2-element tuple, list, or array, optional
        The value of the line integral convolution is clipped to the range
        of lim, which applies upper and lower bounds to the values and
        enhances the visibility of the plot.  Each element should be in the
        range [0, 1].

    cmap : string, optional
        The name of the colormap for the line integral convolution plot.

    alpha : float, optional
        The alpha value for the line integral convolution plot.

    const_alpha : boolean, optional
        If set to False (by default), alpha will be weighted spatially by
        the values of the line integral convolution; otherwise a constant
        value of the given alpha is used.

    Example
    -------

    >>> import yt
    >>> ds = yt.load("Enzo_64/DD0020/data0020")
    >>> s = yt.SlicePlot(ds, "z", "density")
    >>> s.annotate_line_integral_convolution(
    ...     "velocity_x", "velocity_y", lim=(0.5, 0.65)
    ... 
)
    """

    _type_name = "line_integral_convolution"
    _supported_geometries = (
        "cartesian",
        "spectral_cube",
        "polar",
        "cylindrical",
        "spherical",
    )
    _incompatible_plot_types = ("LineIntegralConvolutionCallback",)

    def __init__(
        self,
        field_x,
        field_y,
        texture=None,
        kernellen=50.0,
        lim=(0.5, 0.6),
        cmap="binary",
        alpha=0.8,
        const_alpha=False,
    ):
        self.field_x = field_x
        self.field_y = field_y
        self.texture = texture
        self.kernellen = kernellen
        self.lim = lim
        self.cmap = cmap
        self.alpha = alpha
        self.const_alpha = const_alpha

    def __call__(self, plot):
        from matplotlib import cm

        bounds = self._physical_bounds(plot)
        extent = self._plot_bounds(plot)

        # We are feeding this size into the pixelizer, where it will properly
        # set it in reverse order
        nx = plot.raw_image_shape[1]
        ny = plot.raw_image_shape[0]
        pixX = plot.data.ds.coordinates.pixelize(
            plot.data.axis, plot.data, self.field_x, bounds, (nx, ny)
        )
        pixY = plot.data.ds.coordinates.pixelize(
            plot.data.axis, plot.data, self.field_y, bounds, (nx, ny)
        )

        vectors = np.concatenate((pixX[..., np.newaxis], pixY[..., np.newaxis]), axis=2)

        if self.texture is None:
            prng = np.random.RandomState(0x4D3D3D3)
            self.texture = prng.random_sample((nx, ny))
        elif self.texture.shape != (nx, ny):
            raise ValueError(
                "'texture' must have the same shape "
                "as the output image (%d, %d)" % (nx, ny)
            )

        kernel = np.sin(
            np.arange(self.kernellen, dtype="float64") * np.pi / self.kernellen
        )

        lic_data = line_integral_convolution_2d(vectors, self.texture, kernel)
        lic_data = lic_data / lic_data.max()
        lic_data_clip = np.clip(lic_data, self.lim[0], self.lim[1])

        mask = ~(np.isfinite(pixX) & np.isfinite(pixY))
        lic_data[mask] = np.nan
        lic_data_clip[mask] = np.nan

        if plot._swap_axes:
            lic_data_clip = lic_data_clip.transpose()
            extent = (extent[2], extent[3], extent[0], extent[1])

        if self.const_alpha:
            plot._axes.imshow(
                lic_data_clip,
                extent=extent,
                cmap=self.cmap,
                alpha=self.alpha,
                origin="lower",
                aspect="auto",
            )
        else:
            lic_data_rgba = cm.ScalarMappable(norm=None, cmap=self.cmap).to_rgba(
                lic_data_clip
            )
            lic_data_clip_rescale = (lic_data_clip - self.lim[0]) / (
                self.lim[1] - self.lim[0]
            )
            lic_data_rgba[..., 3] = lic_data_clip_rescale * self.alpha
            plot._axes.imshow(
                lic_data_rgba,
                extent=extent,
                cmap=self.cmap,
                origin="lower",
                aspect="auto",
            )

        return plot


class CellEdgesCallback(PlotCallback):
    """
    Annotate cell edges.  This is done through a second call to pixelize,
    where the distance from a pixel to a cell boundary in pixels is compared
    against the `line_width` argument.  The secondary image is colored as
    `color` and overlaid with the `alpha` value.

    Parameters
    ----------
    line_width : float
        The width of the cell edge lines in normalized units relative to
        the size of the longest axis.  Default is 0.002 (0.2% of the
        longest axis).

    alpha : float
        When the second image is overlaid, it will have this level of alpha
        transparency.  Default is 1.0 (fully opaque).

    color : tuple of three floats or matplotlib color name
        This is the color of the cell edge values.  It defaults to black.
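
    Note that the edge image is rendered internally at high resolution
    (of order 1600 pixels along its long axis) before being overlaid, and
    ``line_width`` is scaled by that long-axis size, with a floor of one
    pixel.

    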
Examples
    --------
    >>> import yt
    >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    >>> s = yt.SlicePlot(ds, "z", "density")
    >>> s.annotate_cell_edges()
    >>> s.save()
    """

    _type_name = "cell_edges"
    _supported_geometries = ("cartesian", "spectral_cube", "cylindrical")

    def __init__(self, line_width=0.002, alpha=1.0, color="black"):
        from matplotlib.colors import ColorConverter

        conv = ColorConverter()
        self.line_width = line_width
        self.alpha = alpha
        # scale the RGB floats (in [0, 1]) *before* casting to uint8;
        # casting first would truncate non-integer channel values to zero
        self.color = (255 * np.array(conv.to_rgb(color))).astype("uint8")

    def __call__(self, plot):
        if plot.data.ds.geometry == "cylindrical" and plot.data.ds.dimensionality == 3:
            raise NotImplementedError(
                "Cell edge annotation is only supported "
                "for 2D cylindrical geometry, not 3D"
            )
        x0, x1, y0, y1 = self._physical_bounds(plot)
        nx = plot.raw_image_shape[1]
        ny = plot.raw_image_shape[0]
        aspect = float((y1 - y0) / (x1 - x0))
        pixel_aspect = float(ny) / nx
        relative_aspect = pixel_aspect / aspect
        if relative_aspect > 1:
            nx = int(nx / relative_aspect)
        else:
            ny = int(ny * relative_aspect)
        if aspect > 1:
            if nx < 1600:
                nx = int(1600.0 / nx * ny)
                ny = 1600
            long_axis = ny
        else:
            if ny < 1600:
                nx = int(1600.0 / ny * nx)
                ny = 1600
            long_axis = nx
        line_width = max(self.line_width * long_axis, 1.0)
        im = np.zeros((ny, nx), dtype="f8")
        pixelize_cartesian(
            im,
            plot.data["px"],
            plot.data["py"],
            plot.data["pdx"],
            plot.data["pdy"],
            plot.data["px"],  # dummy field
            (x0, x1, y0, y1),
            line_width=line_width,
        )
        # New image:
        im_buffer = np.zeros((ny, nx, 4), dtype="uint8")
        im_buffer[im > 0, 3] = 255
        im_buffer[im > 0, :3] = self.color
        extent = self._plot_bounds(plot)
        if plot._swap_axes:
            im_buffer = im_buffer.transpose((1, 0, 2))
        plot._axes.imshow(
            im_buffer,
            origin="lower",
            interpolation="bilinear",
            extent=extent,
            alpha=self.alpha,
        )
        self._set_plot_limits(plot, extent)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0
yt-4.4.0/yt/visualization/plot_window.py0000644000175100001770000033351614714401662020054 0ustar00runnerdockerimport abc
import sys
from collections import defaultdict
from numbers import Number
from typing import TYPE_CHECKING, Union

import matplotlib
import numpy as np
from more_itertools import always_iterable
from unyt.exceptions import UnitConversionError

from yt._maintenance.deprecation import issue_deprecation_warning
from yt._typing import AlphaT
from yt.data_objects.image_array import ImageArray
from yt.frontends.sph.data_structures import ParticleDataset
from yt.frontends.stream.data_structures import StreamParticlesDataset
from yt.frontends.ytdata.data_structures import YTSpatialPlotDataset
from yt.funcs import (
    fix_axis,
    fix_unitary,
    is_sequence,
    iter_fields,
    mylog,
    obj_length,
    parse_center_array,
    validate_moment,
)
from yt.geometry.api import Geometry
from yt.units.unit_object import Unit  # type: ignore
from yt.units.unit_registry import UnitParseError  # type: ignore
from yt.units.yt_array import YTArray, YTQuantity
from yt.utilities.exceptions import (
    YTCannotParseUnitDisplayName,
    YTDataTypeUnsupported,
    YTInvalidFieldType,
    YTPlotCallbackError,
    YTUnitNotRecognized,
)
from yt.utilities.math_utils import ortho_find
from yt.utilities.orientation import Orientation
from yt.visualization._handlers import ColorbarHandler, NormHandler
from yt.visualization.base_plot_types import CallbackWrapper, ImagePlotMPL

from ._commons import (
    _get_units_label,
    _swap_axes_extents,
    get_default_from_config,
)
from .fixed_resolution import (
    FixedResolutionBuffer,
    OffAxisProjectionFixedResolutionBuffer,
)
from .geo_plot_utils import 
get_mpl_transform from .plot_container import ( ImagePlotContainer, invalidate_data, invalidate_figure, invalidate_plot, ) if TYPE_CHECKING: from yt.visualization.plot_modifications import PlotCallback if sys.version_info >= (3, 11): from typing import assert_never else: from typing_extensions import assert_never def get_window_parameters(axis, center, width, ds): width = ds.coordinates.sanitize_width(axis, width, None) center, display_center = ds.coordinates.sanitize_center(center, axis) xax = ds.coordinates.x_axis[axis] yax = ds.coordinates.y_axis[axis] bounds = ( display_center[xax] - width[0] / 2, display_center[xax] + width[0] / 2, display_center[yax] - width[1] / 2, display_center[yax] + width[1] / 2, ) return (bounds, center, display_center) def get_oblique_window_parameters( normal, center, width, ds, depth=None, get3bounds=False ): center, display_center = ds.coordinates.sanitize_center(center, axis=None) width = ds.coordinates.sanitize_width(normal, width, depth) if len(width) == 2: # Transforming to the cutting plane coordinate system # the original dimensionless center messes up off-axis # SPH projections though -> don't use this center there center = ( (center - ds.domain_left_edge) / ds.domain_width - 0.5 ) * ds.domain_width (normal, perp1, perp2) = ortho_find(normal) mat = np.transpose(np.column_stack((perp1, perp2, normal))) center = np.dot(mat, center) w = tuple(el.in_units("code_length") for el in width) bounds = tuple(((2 * (i % 2)) - 1) * w[i // 2] / 2 for i in range(len(w) * 2)) if get3bounds and depth is None: # off-axis projection, depth not specified # -> set 'large enough' depth using half the box diagonal + margin d2 = ds.domain_width[0].in_units("code_length") ** 2 d2 += ds.domain_width[1].in_units("code_length") ** 2 d2 += ds.domain_width[2].in_units("code_length") ** 2 diag = np.sqrt(d2) bounds = bounds + (-0.51 * diag, 0.51 * diag) return (bounds, center) def get_axes_unit(width, ds): r""" Infers the axes unit names from the input width specification """ if ds.no_cgs_equiv_length: return ("code_length",) * 2 if is_sequence(width): if isinstance(width[1], str): axes_unit = (width[1], width[1]) elif is_sequence(width[1]): axes_unit = (width[0][1], width[1][1]) elif isinstance(width[0], YTArray): axes_unit = (str(width[0].units), str(width[1].units)) else: axes_unit = None else: if isinstance(width, YTArray): axes_unit = (str(width.units), str(width.units)) else: axes_unit = None return axes_unit def validate_mesh_fields(data_source, fields): # this check doesn't make sense for ytdata plot datasets, which # load mesh data as a particle field but nonetheless can still # make plots with it if isinstance(data_source.ds, YTSpatialPlotDataset): return canonical_fields = data_source._determine_fields(fields) invalid_fields = [] for field in canonical_fields: finfo = data_source.ds.field_info[field] if finfo.sampling_type == "particle": if not hasattr(data_source.ds, "_sph_ptypes"): pass elif finfo.is_sph_field: continue invalid_fields.append(field) if len(invalid_fields) > 0: raise YTInvalidFieldType(invalid_fields) class PlotWindow(ImagePlotContainer, abc.ABC): r""" A plotting mechanism based around the concept of a window into a data source. It can have arbitrary fields, each of which will be centered on the same viewpoint, but will have individual zlimits. The data and plot are updated separately, and each can be invalidated as the object is modified. Data is handled by a FixedResolutionBuffer object. 
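In typical use this class is not instantiated directly; concrete plot
    types such as ``SlicePlot`` and ``ProjectionPlot`` construct the window
    and its data source for you.  As a minimal sketch of the resulting
    workflow (using the ``IsolatedGalaxy`` sample dataset referenced
    throughout these docstrings):

    >>> import yt
    >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    >>> p = yt.SlicePlot(ds, "z", ("gas", "density"))
    >>> p.zoom(2)  # halve the window width (see zoom() below)
    >>> p.pan_rel((0.1, 0.0))  # shift by 10% of the field of view
    >>> p.save()

    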
Parameters ---------- data_source : :class:`yt.data_objects.selection_objects.base_objects.YTSelectionContainer2D` This is the source to be pixelized, which can be a projection, slice, or a cutting plane. bounds : sequence of floats Bounds are the min and max in the image plane that we want our image to cover. It's in the order of (xmin, xmax, ymin, ymax), where the coordinates are all in the appropriate code units. buff_size : sequence of ints The size of the image to generate. antialias : boolean This can be true or false. It determines whether or not sub-pixel rendering is used during data deposition. window_size : float The size of the window on the longest axis (in units of inches), including the margins but not the colorbar. """ def __init__( self, data_source, bounds, buff_size=(800, 800), antialias=True, periodic=True, origin="center-window", oblique=False, window_size=8.0, fields=None, fontsize=18, aspect=None, setup=False, *, geometry: Geometry = Geometry.CARTESIAN, ) -> None: # axis manipulation operations are callback-only: self._swap_axes_input = False self._flip_vertical = False self._flip_horizontal = False self.center = None self._periodic = periodic self.oblique = oblique self._equivalencies = defaultdict(lambda: (None, {})) # type: ignore [var-annotated] self.buff_size = buff_size self.antialias = antialias self._axes_unit_names = None self._transform = None self._projection = None self.aspect = aspect skip = list(FixedResolutionBuffer._exclude_fields) + data_source._key_fields fields = list(iter_fields(fields)) self.override_fields = list(set(fields).intersection(set(skip))) self.fields = [f for f in fields if f not in skip] self._frb: FixedResolutionBuffer | None = None super().__init__(data_source, window_size, fontsize) self._set_window(bounds) # this automatically updates the data and plot if origin != "native": if geometry is Geometry.CARTESIAN or geometry is Geometry.SPECTRAL_CUBE: pass elif ( geometry is Geometry.CYLINDRICAL or geometry is Geometry.POLAR or geometry is Geometry.SPHERICAL or geometry is Geometry.GEOGRAPHIC or geometry is Geometry.INTERNAL_GEOGRAPHIC ): mylog.info("Setting origin='native' for %s geometry.", geometry) origin = "native" else: assert_never(geometry) self.origin = origin if self.data_source.center is not None and not oblique: ax = self.data_source.axis xax = self.ds.coordinates.x_axis[ax] yax = self.ds.coordinates.y_axis[ax] center, display_center = self.ds.coordinates.sanitize_center( self.data_source.center, ax ) center = [display_center[xax], display_center[yax]] self.set_center(center) axname = self.ds.coordinates.axis_name[ax] transform = self.ds.coordinates.data_transform[axname] projection = self.ds.coordinates.data_projection[axname] self._projection = get_mpl_transform(projection) self._transform = get_mpl_transform(transform) self._setup_plots() for field in self.data_source._determine_fields(self.fields): finfo = self.data_source.ds._get_field_info(field) pnh = self.plots[field].norm_handler # take_log can be `None` so we explicitly compare against a boolean pnh.prefer_log = finfo.take_log is not False # override from user configuration if any log, linthresh = get_default_from_config( self.data_source, field=field, keys=["log", "linthresh"], defaults=[None, None], ) if linthresh is not None: self.set_log(field, linthresh=linthresh) elif log is not None: self.set_log(field, log) def __iter__(self): for ds in self.ts: mylog.warning("Switching to %s", ds) self._switch_ds(ds) yield self def piter(self, *args, **kwargs): for ds in 
self.ts.piter(*args, **kwargs): self._switch_ds(ds) yield self @property def frb(self): # Force the regeneration of the fixed resolution buffer # * if there's none # * if the data has been invalidated # * if the frb has been invalidated if not self._data_valid: self._recreate_frb() return self._frb @frb.setter def frb(self, value): self._frb = value self._data_valid = True @frb.deleter def frb(self): del self._frb self._frb = None def _recreate_frb(self): old_fields = None old_filters = [] # If we are regenerating an frb, we want to know what fields we had before if self._frb is not None: old_fields = list(self._frb.data.keys()) old_units = [_.units for _ in self._frb.data.values()] old_filters = self._frb._filters # Set the bounds if hasattr(self, "zlim"): # Support OffAxisProjectionPlot and OffAxisSlicePlot bounds = self.xlim + self.ylim + self.zlim else: bounds = self.xlim + self.ylim # Generate the FRB self.frb = self._frb_generator( self.data_source, bounds, self.buff_size, self.antialias, periodic=self._periodic, filters=old_filters, ) # At this point the frb has the valid bounds, size, aliasing, etc. if old_fields is not None: # Restore the old fields for key, units in zip(old_fields, old_units, strict=False): self._frb.render(key) equiv = self._equivalencies[key] if equiv[0] is None: self._frb[key].convert_to_units(units) else: self.frb.set_unit(key, units, equiv[0], equiv[1]) # Restore the override fields for key in self.override_fields: self._frb.render(key) @property def _has_swapped_axes(self): # note: we always run the validations here in case the states of # the conflicting attributes have changed. return self._validate_swap_axes(self._swap_axes_input) @invalidate_data def swap_axes(self): # toggles the swap_axes behavior new_swap_value = not self._swap_axes_input # note: we also validate here to catch invalid states immediately, even # though we validate on accessing the attribute in `_has_swapped_axes`. self._swap_axes_input = self._validate_swap_axes(new_swap_value) return self def _validate_swap_axes(self, swap_value: bool) -> bool: if swap_value and (self._transform or self._projection): mylog.warning("Cannot swap axes due to transform or projection") return False return swap_value @property def width(self): Wx = self.xlim[1] - self.xlim[0] Wy = self.ylim[1] - self.ylim[0] return (Wx, Wy) @property def bounds(self): return self.xlim + self.ylim @invalidate_data def zoom(self, factor): r"""This zooms the window by *factor* > 0. - zoom out with *factor* < 1 - zoom in with *factor* > 1 Parameters ---------- factor : float multiplier for the current width """ if factor <= 0: raise ValueError("Only positive zooming factors are meaningful.") Wx, Wy = self.width centerx = self.xlim[0] + Wx * 0.5 centery = self.ylim[0] + Wy * 0.5 nWx, nWy = Wx / factor, Wy / factor self.xlim = (centerx - nWx * 0.5, centerx + nWx * 0.5) self.ylim = (centery - nWy * 0.5, centery + nWy * 0.5) return self @invalidate_data def pan(self, deltas): r"""Pan the image by specifying absolute code unit coordinate deltas. Parameters ---------- deltas : Two-element sequence of floats, quantities, or (float, unit) tuples. (delta_x, delta_y). If a unit is not supplied, the unit is assumed to be code_length. """ if len(deltas) != 2: raise TypeError( f"The pan function accepts a two-element sequence.\nReceived {deltas}."
) if isinstance(deltas[0], Number) and isinstance(deltas[1], Number): deltas = ( self.ds.quan(deltas[0], "code_length"), self.ds.quan(deltas[1], "code_length"), ) elif isinstance(deltas[0], tuple) and isinstance(deltas[1], tuple): deltas = ( self.ds.quan(deltas[0][0], deltas[0][1]), self.ds.quan(deltas[1][0], deltas[1][1]), ) elif isinstance(deltas[0], YTQuantity) and isinstance(deltas[1], YTQuantity): pass else: raise TypeError( "The arguments of the pan function must be a sequence of floats,\n" f"quantities, or (float, unit) tuples. Received {deltas}" ) self.xlim = (self.xlim[0] + deltas[0], self.xlim[1] + deltas[0]) self.ylim = (self.ylim[0] + deltas[1], self.ylim[1] + deltas[1]) return self @invalidate_data def pan_rel(self, deltas): r"""Pan the image by specifying deltas relative to the field of view. Parameters ---------- deltas : sequence of floats (delta_x, delta_y) in *relative* code unit coordinates """ Wx, Wy = self.width self.xlim = (self.xlim[0] + Wx * deltas[0], self.xlim[1] + Wx * deltas[0]) self.ylim = (self.ylim[0] + Wy * deltas[1], self.ylim[1] + Wy * deltas[1]) return self @invalidate_plot def set_unit(self, field, new_unit, equivalency=None, equivalency_kwargs=None): """Sets a new unit for the requested field parameters ---------- field : string or field tuple The name of the field that is to be changed. new_unit : string or Unit object The unit to convert the field to. equivalency : string, optional If set, the equivalency to use to convert the current units to the new requested unit. If None, the unit conversion will be done without an equivalency. equivalency_kwargs : dict, optional Keyword arguments to be passed to the equivalency. Only used if ``equivalency`` is set. """ for f, u in zip(iter_fields(field), always_iterable(new_unit), strict=True): self.frb.set_unit(f, u, equivalency, equivalency_kwargs) self._equivalencies[f] = (equivalency, equivalency_kwargs) pnh = self.plots[f].norm_handler pnh.display_units = u return self @invalidate_plot def set_origin(self, origin): """Set the plot origin. Parameters ---------- origin : string or length 1, 2, or 3 sequence. The location of the origin of the plot coordinate system. This is typically represented by a '-' separated string or a tuple of strings. In the first index the y-location is given by 'lower', 'upper', or 'center'. The second index is the x-location, given as 'left', 'right', or 'center'. Finally, whether the origin is applied in 'domain' space, plot 'window' space or 'native' simulation coordinate system is given. For example, both 'upper-right-domain' and ['upper', 'right', 'domain'] place the origin in the upper right hand corner of domain space. If x or y are not given, a value is inferred. For instance, 'left-domain' corresponds to the lower-left hand corner of the simulation domain, 'center-domain' corresponds to the center of the simulation domain, or 'center-window' for the center of the plot window. In the event that none of these options place the origin in a desired location, a sequence of tuples and a string specifying the coordinate space can be given. If plain numeric types are input, units of `code_length` are assumed.
Further examples: =============================================== =============================== format example =============================================== =============================== '{space}' 'domain' '{xloc}-{space}' 'left-window' '{yloc}-{space}' 'upper-domain' '{yloc}-{xloc}-{space}' 'lower-right-window' ('{space}',) ('window',) ('{xloc}', '{space}') ('right', 'domain') ('{yloc}', '{space}') ('lower', 'window') ('{yloc}', '{xloc}', '{space}') ('lower', 'right', 'window') ((yloc, '{unit}'), (xloc, '{unit}'), '{space}') ((0, 'm'), (.4, 'm'), 'window') (xloc, yloc, '{space}') (0.23, 0.5, 'domain') =============================================== =============================== """ self.origin = origin return self @invalidate_plot @invalidate_figure def set_mpl_projection(self, mpl_proj): r""" Set the matplotlib projection type with a cartopy transform function. Given a string or a tuple argument, this will project the data onto the plot axes with the chosen transform function. Assumes that the underlying data has a PlateCarree transform type. To annotate the plot with coastlines or other annotations, `render()` will need to be called after this function to make the axes available for annotation. Parameters ---------- mpl_proj : string or tuple if passed as a string, mpl_proj is the specified projection type, if passed as a tuple, the tuple will take the form of ``("ProjectionType", (args))`` or ``("ProjectionType", (args), {kwargs})`` Valid projection type options include: 'PlateCarree', 'LambertConformal', 'LambertCylindrical', 'Mercator', 'Miller', 'Mollweide', 'Orthographic', 'Robinson', 'Stereographic', 'TransverseMercator', 'InterruptedGoodeHomolosine', 'RotatedPole', 'OSGB', 'EuroPP', 'Geostationary', 'Gnomonic', 'NorthPolarStereo', 'OSNI', 'SouthPolarStereo', 'AlbersEqualArea', 'AzimuthalEquidistant', 'Sinusoidal', 'UTM', 'NearsidePerspective', 'LambertAzimuthalEqualArea' Examples -------- This will create a Mollweide projection using Mollweide default values and annotate it with coastlines. >>> import yt >>> ds = yt.load("") >>> p = yt.SlicePlot(ds, "altitude", "AIRDENS") >>> p.set_mpl_projection("Mollweide") >>> p.render() >>> p.plots["AIRDENS"].axes.coastlines() >>> p.show() This will move the PlateCarree central longitude to 90 degrees and annotate with coastlines. >>> import yt >>> ds = yt.load("") >>> p = yt.SlicePlot(ds, "altitude", "AIRDENS") >>> p.set_mpl_projection( ... ("PlateCarree", (), {"central_longitude": 90, "globe": None}) ... ) >>> p.render() >>> p.plots["AIRDENS"].axes.set_global() >>> p.plots["AIRDENS"].axes.coastlines() >>> p.show() This will create a RotatedPole projection with the unrotated pole position at 37.5 degrees latitude and 177.5 degrees longitude by passing them in as args. >>> import yt >>> ds = yt.load("") >>> p = yt.SlicePlot(ds, "altitude", "AIRDENS") >>> p.set_mpl_projection(("RotatedPole", (177.5, 37.5))) >>> p.render() >>> p.plots["AIRDENS"].axes.set_global() >>> p.plots["AIRDENS"].axes.coastlines() >>> p.show() This will create a RotatedPole projection with the unrotated pole position at 37.5 degrees latitude and 177.5 degrees longitude by passing them in as kwargs. >>> import yt >>> ds = yt.load("") >>> p = yt.SlicePlot(ds, "altitude", "AIRDENS") >>> p.set_mpl_projection( ... ("RotatedPole", (), {"pole_latitude": 37.5, "pole_longitude": 177.5}) ...
) >>> p.render() >>> p.plots["AIRDENS"].axes.set_global() >>> p.plots["AIRDENS"].axes.coastlines() >>> p.show() """ self._projection = get_mpl_transform(mpl_proj) axname = self.ds.coordinates.axis_name[self.data_source.axis] transform = self.ds.coordinates.data_transform[axname] self._transform = get_mpl_transform(transform) return self @invalidate_data def _set_window(self, bounds): """Set the bounds of the plot window. This is normally only called internally, see set_width. Parameters ---------- bounds : a four element sequence of floats The x and y bounds, in the format (x0, x1, y0, y1) """ if self.center is not None: dx = bounds[1] - bounds[0] dy = bounds[3] - bounds[2] self.xlim = (self.center[0] - dx / 2.0, self.center[0] + dx / 2.0) self.ylim = (self.center[1] - dy / 2.0, self.center[1] + dy / 2.0) else: self.xlim = tuple(bounds[0:2]) self.ylim = tuple(bounds[2:4]) if len(bounds) == 6: # Support OffAxisProjectionPlot and OffAxisSlicePlot self.zlim = tuple(bounds[4:6]) mylog.info("xlim = %f %f", self.xlim[0], self.xlim[1]) mylog.info("ylim = %f %f", self.ylim[0], self.ylim[1]) if hasattr(self, "zlim"): mylog.info("zlim = %f %f", self.zlim[0], self.zlim[1]) @invalidate_data def set_width(self, width, unit=None): """set the width of the plot window parameters ---------- width : float, array of floats, (float, unit) tuple, or tuple of (float, unit) tuples. Width can have four different formats to support windows with variable x and y widths. They are: ================================== ======================= format example ================================== ======================= (float, string) (10,'kpc') ((float, string), (float, string)) ((10,'kpc'),(15,'kpc')) float 0.2 (float, float) (0.2, 0.3) ================================== ======================= For example, (10, 'kpc') requests a plot window that is 10 kiloparsecs wide in the x and y directions, ((10,'kpc'),(15,'kpc')) requests a window that is 10 kiloparsecs wide along the x axis and 15 kiloparsecs wide along the y axis. In the other two examples, code units are assumed, for example (0.2, 0.3) requests a plot that has an x width of 0.2 and a y width of 0.3 in code units. If units are provided the resulting plot axis labels will use the supplied units. unit : str the unit the width has been specified in. If width is a tuple, this argument is ignored. Defaults to code units. """ if isinstance(width, Number): if unit is None: width = (width, "code_length") else: width = (width, fix_unitary(unit)) axes_unit = get_axes_unit(width, self.ds) width = self.ds.coordinates.sanitize_width(self.frb.axis, width, None) centerx = (self.xlim[1] + self.xlim[0]) / 2.0 centery = (self.ylim[1] + self.ylim[0]) / 2.0 self.xlim = (centerx - width[0] / 2, centerx + width[0] / 2) self.ylim = (centery - width[1] / 2, centery + width[1] / 2) if hasattr(self, "zlim"): centerz = (self.zlim[1] + self.zlim[0]) / 2.0 mw = self.ds.arr(width).max() self.zlim = (centerz - mw / 2.0, centerz + mw / 2.0) self.set_axes_unit(axes_unit) return self @invalidate_data def set_center(self, new_center, unit="code_length"): """Sets a new center for the plot window parameters ---------- new_center : two element sequence of floats The coordinates of the new center of the image in the coordinate system defined by the plot axes. If the unit keyword is not specified, the coordinates are assumed to be in code units. unit : string The name of the unit new_center is given in. If new_center is a YTArray or tuple of YTQuantities, this keyword is ignored. 
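Examples
--------
A minimal usage sketch (the dataset path, field, and coordinates here are
illustrative, not prescriptive):

>>> from yt import load
>>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> p = SlicePlot(ds, "z", ("gas", "density"))
>>> p.set_center((0.25, 0.75))  # new center, in code_length by default
>>> p.set_center((10, 12), unit="kpc")  # or in an explicit unit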
""" error = RuntimeError( "\n" "new_center must be a two-element list or tuple of floats \n" "corresponding to a coordinate in the plot relative to \n" "the plot coordinate system.\n" ) if new_center is None: self.center = None elif is_sequence(new_center): if len(new_center) != 2: raise error for el in new_center: if not isinstance(el, Number) and not isinstance(el, YTQuantity): raise error if isinstance(new_center[0], Number): new_center = [self.ds.quan(c, unit) for c in new_center] self.center = new_center else: raise error self._set_window(self.bounds) return self @invalidate_data def set_antialias(self, aa): """Turn antialiasing on or off. parameters ---------- aa : boolean """ self.antialias = aa @invalidate_data def set_buff_size(self, size): """Sets a new buffer size for the fixed resolution buffer parameters ---------- size : int or two element sequence of ints The number of data elements in the buffer on the x and y axes. If a scalar is provided, then the buffer is assumed to be square. """ if is_sequence(size): self.buff_size = size else: self.buff_size = (size, size) return self @invalidate_plot def set_axes_unit(self, unit_name): r"""Set the unit for display on the x and y axes of the image. Parameters ---------- unit_name : string or two element tuple of strings A unit, available for conversion in the dataset, that the image extents will be displayed in. If set to None, any previous units will be reset. If the unit is None, the default is chosen. If unit_name is '1', 'u', or 'unitary', it will not display the units, and only show the axes name. If unit_name is a tuple, the first element is assumed to be the unit for the x axis and the second element the unit for the y axis. Raises ------ YTUnitNotRecognized If the unit is not known, this will be raised. Examples -------- >>> from yt import load >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> p = ProjectionPlot(ds, "y", "Density") >>> p.set_axes_unit("kpc") """ # blind except because it could be in conversion_factors or units if unit_name is not None: if isinstance(unit_name, str): unit_name = (unit_name, unit_name) for un in unit_name: try: self.ds.length_unit.in_units(un) except (UnitConversionError, UnitParseError) as e: raise YTUnitNotRecognized(un) from e self._axes_unit_names = unit_name return self @invalidate_plot def flip_horizontal(self): """ inverts the horizontal axis (the image's abscissa) """ self._flip_horizontal = not self._flip_horizontal return self @invalidate_plot def flip_vertical(self): """ inverts the vertical axis (the image's ordinate) """ self._flip_vertical = not self._flip_vertical return self def to_fits_data(self, fields=None, other_keys=None, length_unit=None, **kwargs): r"""Export the fields in this PlotWindow instance to a FITSImageData instance. This will export a set of FITS images of either the fields specified or all the fields already in the object. Parameters ---------- fields : list of strings These fields will be pixelized and output. If "None", the keys of the FRB will be used. other_keys : dictionary, optional A set of header keys and values to write into the FITS header. length_unit : string, optional the length units that the coordinates are written in. The default is to use the default length unit of the dataset. 
""" return self.frb.to_fits_data( fields=fields, other_keys=other_keys, length_unit=length_unit, **kwargs ) class PWViewerMPL(PlotWindow): """Viewer using matplotlib as a backend via the WindowPlotMPL.""" _current_field = None _frb_generator: type[FixedResolutionBuffer] | None = None _plot_type: str | None = None def __init__(self, *args, **kwargs) -> None: if self._frb_generator is None: self._frb_generator = kwargs.pop("frb_generator") if self._plot_type is None: self._plot_type = kwargs.pop("plot_type") self._splat_color = kwargs.pop("splat_color", None) PlotWindow.__init__(self, *args, **kwargs) # import type here to avoid import cycles # note that this import statement is actually crucial at runtime: # the filter methods for the present class are defined only when # fixed_resolution_filters is imported, so we need to guarantee # that it happens no later than instantiation self._callbacks: list[PlotCallback] = [] @property def _data_valid(self) -> bool: return self._frb is not None and self._frb._data_valid @_data_valid.setter def _data_valid(self, value): if self._frb is None: # we delegate the (in)validation responsibility to the FRB # if we don't have one yet, we can exit without doing anything return else: self._frb._data_valid = value def _setup_origin(self): origin = self.origin axis_index = self.data_source.axis xc = None yc = None if isinstance(origin, str): origin = tuple(origin.split("-")) if len(origin) > 3: raise ValueError( "Invalid origin argument with too many elements; " f"expected 1, 2 or 3 elements, got {self.origin!r}, counting {len(origin)} elements. " "Use '-' as a separator for string arguments." ) if len(origin) == 1: coord_system = origin[0] if coord_system not in ("window", "domain", "native"): raise ValueError( "Invalid origin argument. " "Single element specification must be 'window', 'domain', or 'native'. " f"Got {self.origin!r}" ) origin = ("lower", "left", coord_system) elif len(origin) == 2: err_msg = "Invalid origin argument. Using 2 elements:\n" if origin[0] in ("left", "right", "center"): o0map = {"left": "lower", "right": "upper", "center": "center"} origin = (o0map[origin[0]],) + origin elif origin[0] in ("lower", "upper"): origin = (origin[0], "center", origin[-1]) else: err_msg += " - the first one must be 'left', 'right', 'lower', 'upper' or 'center'\n" if origin[-1] not in ("window", "domain", "native"): err_msg += " - the second one must be 'window', 'domain', or 'native'\n" if len(err_msg.split("\n")) > 2: err_msg += f"Got {self.origin!r}" raise ValueError(err_msg) elif len(origin) == 3: err_msg = "Invalid origin argument. 
Using 3 elements:\n" if isinstance(origin[0], (int, float)): xc = self.ds.quan(origin[0], "code_length") elif isinstance(origin[0], tuple): xc = self.ds.quan(*origin[0]) elif origin[0] not in ("lower", "upper", "center"): err_msg += " - the first one must be 'lower', 'upper' or 'center' or a distance\n" if isinstance(origin[1], (int, float)): yc = self.ds.quan(origin[1], "code_length") elif isinstance(origin[1], tuple): yc = self.ds.quan(*origin[1]) elif origin[1] not in ("left", "right", "center"): err_msg += " - the second one must be 'left', 'right', 'center' or a distance\n" if origin[-1] not in ("window", "domain", "native"): err_msg += " - the third one must be 'window', 'domain', or 'native'\n" if len(err_msg.split("\n")) > 2: err_msg += f"Got {self.origin!r}" raise ValueError(err_msg) assert not isinstance(origin, str) assert len(origin) == 3 assert origin[2] in ("window", "domain", "native") if origin[2] == "window": xllim, xrlim = self.xlim yllim, yrlim = self.ylim elif origin[2] == "domain": xax = self.ds.coordinates.x_axis[axis_index] yax = self.ds.coordinates.y_axis[axis_index] xllim = self.ds.domain_left_edge[xax] xrlim = self.ds.domain_right_edge[xax] yllim = self.ds.domain_left_edge[yax] yrlim = self.ds.domain_right_edge[yax] elif origin[2] == "native": return (self.ds.quan(0.0, "code_length"), self.ds.quan(0.0, "code_length")) if xc is None and yc is None: assert origin[0] in ("lower", "upper", "center") assert origin[1] in ("left", "right", "center") if origin[0] == "lower": yc = yllim elif origin[0] == "upper": yc = yrlim elif origin[0] == "center": yc = (yllim + yrlim) / 2.0 if origin[1] == "left": xc = xllim elif origin[1] == "right": xc = xrlim elif origin[1] == "center": xc = (xllim + xrlim) / 2.0 x_in_bounds = xc >= xllim and xc <= xrlim y_in_bounds = yc >= yllim and yc <= yrlim if not x_in_bounds and not y_in_bounds: raise ValueError( "origin inputs not in bounds of specified coordinate system domain; " f"got {self.origin!r} Bounds are {xllim, xrlim} and {yllim, yrlim} respectively" ) return xc, yc def _setup_plots(self): from matplotlib.mathtext import MathTextParser if self._plot_valid: return if not self._data_valid: self._recreate_frb() self._colorbar_valid = True field_list = list(set(self.data_source._determine_fields(self.fields))) for f in field_list: axis_index = self.data_source.axis xc, yc = self._setup_origin() if self.ds._uses_code_length_unit: # this should happen only if the dataset was initialized with # argument unit_system="code" or if it's set to have no CGS # equivalent. This only needs to happen here in the specific # case that we're doing a computationally intense operation # like using cartopy, but it prevents crashes in that case. (unit_x, unit_y) = ("code_length", "code_length") elif self._axes_unit_names is None: unit = self.ds.get_smallest_appropriate_unit( self.xlim[1] - self.xlim[0] ) unit_x = unit_y = unit coords = self.ds.coordinates if hasattr(coords, "image_units"): # check for special cases defined in # non cartesian CoordinateHandler subclasses image_units = coords.image_units[coords.axis_id[axis_index]] if image_units[0] in ("deg", "rad"): unit_x = "code_length" elif image_units[0] == 1: unit_x = "dimensionless" if image_units[1] in ("deg", "rad"): unit_y = "code_length" elif image_units[1] == 1: unit_y = "dimensionless" else: (unit_x, unit_y) = self._axes_unit_names # For some plots we may set aspect by hand, such as for spectral cube data. 
# This will likely be replaced at some point by the coordinate handler # setting plot aspect. if self.aspect is None: self.aspect = float( (self.ds.quan(1.0, unit_y) / self.ds.quan(1.0, unit_x)).in_cgs() ) extentx = (self.xlim - xc)[:2] extenty = (self.ylim - yc)[:2] # extentx/y arrays inherit units from xlim and ylim attributes # and these attributes are always length even for angular and # dimensionless axes so we need to strip out units for consistency if unit_x == "dimensionless": extentx = extentx / extentx.units else: extentx.convert_to_units(unit_x) if unit_y == "dimensionless": extenty = extenty / extenty.units else: extenty.convert_to_units(unit_y) extent = [*extentx, *extenty] image = self.frb.get_image(f) mask = self.frb.get_mask(f) assert mask is None or mask.dtype == bool font_size = self._font_properties.get_size() if f in self.plots.keys(): pnh = self.plots[f].norm_handler cbh = self.plots[f].colorbar_handler else: pnh, cbh = self._get_default_handlers( field=f, default_display_units=image.units ) if pnh.display_units != image.units: equivalency, equivalency_kwargs = self._equivalencies[f] image.convert_to_units( pnh.display_units, equivalency, **equivalency_kwargs ) fig = None axes = None cax = None draw_axes = True draw_frame = None if f in self.plots: draw_axes = self.plots[f]._draw_axes draw_frame = self.plots[f]._draw_frame if self.plots[f].figure is not None: fig = self.plots[f].figure axes = self.plots[f].axes cax = self.plots[f].cax # This is for splatting particle positions with a single # color instead of a colormap if self._splat_color is not None: # make image a rgba array, using the splat color greyscale_image = self.frb[f] ia = np.zeros((greyscale_image.shape[0], greyscale_image.shape[1], 4)) ia[:, :, 3] = 0.0 # set alpha to 0.0 locs = greyscale_image > 0.0 to_rgba = matplotlib.colors.colorConverter.to_rgba color_tuple = to_rgba(self._splat_color) ia[locs] = color_tuple ia = ImageArray(ia) else: ia = image swap_axes = self._has_swapped_axes aspect = self.aspect if swap_axes: extent = _swap_axes_extents(extent) ia = ia.transpose() aspect = 1.0 / aspect # aspect ends up passed to imshow(aspect=aspect) self.plots[f] = WindowPlotMPL( ia, extent, self.figure_size, font_size, aspect, fig, axes, cax, self._projection, self._transform, norm_handler=pnh, colorbar_handler=cbh, alpha=mask.astype("float64") if mask is not None else None, ) axes_unit_labels = self._get_axes_unit_labels(unit_x, unit_y) if self.oblique: labels = [ r"$\rm{Image\ x" + axes_unit_labels[0] + "}$", r"$\rm{Image\ y" + axes_unit_labels[1] + "}$", ] else: coordinates = self.ds.coordinates axis_names = coordinates.image_axis_name[axis_index] xax = coordinates.x_axis[axis_index] yax = coordinates.y_axis[axis_index] if hasattr(coordinates, "axis_default_unit_name"): axes_unit_labels = [ coordinates.axis_default_unit_name[xax], coordinates.axis_default_unit_name[yax], ] labels = [ r"$\rm{" + axis_names[0] + axes_unit_labels[0] + r"}$", r"$\rm{" + axis_names[1] + axes_unit_labels[1] + r"}$", ] if hasattr(coordinates, "axis_field"): if xax in coordinates.axis_field: xmin, xmax = coordinates.axis_field[xax]( 0, self.xlim, self.ylim ) else: xmin, xmax = (float(x) for x in extentx) if yax in coordinates.axis_field: ymin, ymax = coordinates.axis_field[yax]( 1, self.xlim, self.ylim ) else: ymin, ymax = (float(y) for y in extenty) new_extent = (xmin, xmax, ymin, ymax) if swap_axes: new_extent = _swap_axes_extents(new_extent) self.plots[f].image.set_extent(new_extent) self.plots[f].axes.set_aspect("auto") 
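# The block below applies user label overrides from _get_axes_labels and,
# when the axes have been swapped, reverses the x/y labels so they stay
# attached to the transposed image axes.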
x_label, y_label, colorbar_label = self._get_axes_labels(f) if x_label is not None: labels[0] = x_label if y_label is not None: labels[1] = y_label if swap_axes: labels.reverse() self.plots[f].axes.set_xlabel(labels[0]) self.plots[f].axes.set_ylabel(labels[1]) # Determine the units of the data units = Unit(image.units, registry=self.ds.unit_registry) units = units.latex_representation() if colorbar_label is None: colorbar_label = image.info["label"] if getattr(self, "moment", 1) == 2: colorbar_label = f"{colorbar_label} \\rm{{Standard Deviation}}" if hasattr(self, "projected"): colorbar_label = f"$\\rm{{Projected }}$ {colorbar_label}" if units is not None and units != "": colorbar_label += _get_units_label(units) parser = MathTextParser("Agg") try: parser.parse(colorbar_label) except Exception as err: # unspecified exceptions might be raised from matplotlib via its own dependencies raise YTCannotParseUnitDisplayName(f, colorbar_label, str(err)) from err self.plots[f].cb.set_label(colorbar_label) # x-y axes minorticks if f not in self._minorticks: self._minorticks[f] = True if self._minorticks[f]: self.plots[f].axes.minorticks_on() else: self.plots[f].axes.minorticks_off() if not draw_axes: self.plots[f]._toggle_axes(draw_axes, draw_frame) self._set_font_properties() self.run_callbacks() if self._flip_horizontal or self._flip_vertical: # some callbacks (e.g., streamlines) fail when applied to a # flipped axis, so flip only at the end. for f in field_list: if self._flip_horizontal: ax = self.plots[f].axes ax.invert_xaxis() if self._flip_vertical: ax = self.plots[f].axes ax.invert_yaxis() self._plot_valid = True def setup_callbacks(self): issue_deprecation_warning( "The PWViewer.setup_callbacks method is a no-op.", since="4.1", stacklevel=3, ) @invalidate_plot def clear_annotations(self, index: int | None = None): """ Clear callbacks from the plot. If index is not set, clear all callbacks. If index is set, clear that index (ie 0 is the first one created, 1 is the 2nd one created, -1 is the last one created, etc.) """ if index is None: self._callbacks.clear() else: self._callbacks.pop(index) return self def list_annotations(self): """ List the current callbacks for the plot, along with their index. This index can be used with `clear_annotations` to remove a callback from the current plot. """ for i, cb in enumerate(self._callbacks): print(i, cb) def run_callbacks(self): for f in self.fields: keys = self.frb.keys() for callback in self._callbacks: # need to pass _swap_axes and adjust all the callbacks cbw = CallbackWrapper( self, self.plots[f], self.frb, f, self._font_properties, self._font_color, ) try: callback(cbw) except (NotImplementedError, YTDataTypeUnsupported): raise except Exception as e: raise YTPlotCallbackError(callback._type_name) from e for key in self.frb.keys(): if key not in keys: del self.frb[key] def export_to_mpl_figure( self, nrows_ncols, axes_pad=1.0, label_mode="L", cbar_location="right", cbar_size="5%", cbar_mode="each", cbar_pad="0%", ): r""" Creates a matplotlib figure object with the specified axes arrangement, nrows_ncols, and maps the underlying figures to the matplotlib axes. Note that all of these parameters are fed directly to the matplotlib ImageGrid class to create the new figure layout. 
Parameters ---------- nrows_ncols : tuple the number of rows and columns of the axis grid (e.g., nrows_ncols=(2,2,)) axes_pad : float padding between axes in inches label_mode : one of "L", "1", "all" arrangement of axes that are labeled cbar_location : one of "left", "right", "bottom", "top" where to place the colorbar cbar_size : string (percentage) scaling of the colorbar (e.g., "5%") cbar_mode : one of "each", "single", "edge", None how to represent the colorbar cbar_pad : string (percentage) padding between the axis and colorbar (e.g. "5%") Returns ------- A matplotlib figure object. Examples -------- >>> import yt >>> ds = yt.load_sample("IsolatedGalaxy") >>> fields = ["density", "velocity_x", "velocity_y", "velocity_magnitude"] >>> p = yt.SlicePlot(ds, "z", fields) >>> p.set_log("velocity_x", False) >>> p.set_log("velocity_y", False) >>> fig = p.export_to_mpl_figure((2, 2)) >>> fig.tight_layout() >>> fig.savefig("test.png") """ import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import ImageGrid fig = plt.figure() grid = ImageGrid( fig, 111, nrows_ncols=nrows_ncols, axes_pad=axes_pad, label_mode=label_mode, cbar_location=cbar_location, cbar_size=cbar_size, cbar_mode=cbar_mode, cbar_pad=cbar_pad, ) fields = self.fields if len(fields) > len(grid): raise IndexError("not enough axes for the number of fields") for i, f in enumerate(self.fields): plot = self.plots[f] plot.figure = fig plot.axes = grid[i].axes plot.cax = grid.cbar_axes[i] self._setup_plots() return fig class NormalPlot: """This is the abstraction for SlicePlot and ProjectionPlot, where we define the common sanitizing mechanism for user input (normal direction). It is implemented as a mixin class. """ @staticmethod def sanitize_normal_vector(ds, normal) -> str | np.ndarray: """Return the name of a cartesian axis whenever possible, or a 3-element 1D ndarray of float64 in any other valid case. Fail with a descriptive error message otherwise. """ axis_names = ds.coordinates.axis_order if isinstance(normal, str): if normal not in axis_names: names_str = ", ".join(f"'{name}'" for name in axis_names) raise ValueError( f"'{normal}' is not a valid axis name. Expected one of {names_str}." ) return normal if isinstance(normal, (int, np.integer)): if normal not in (0, 1, 2): raise ValueError( f"{normal} is not a valid axis identifier. Expected either 0, 1, or 2." ) return axis_names[normal] if not is_sequence(normal): raise TypeError( f"{normal} is not a valid normal vector identifier. " "Expected a string, integer or sequence of 3 floats." ) if len(normal) != 3: raise ValueError( f"{normal} with length {len(normal)} is not a valid normal vector. " "Expected a 3-element sequence." ) try: retv = np.array(normal, dtype="float64") if retv.shape != (3,): raise ValueError(f"{normal} is incorrectly shaped.") except ValueError as exc: raise TypeError(f"{normal} is not a valid normal vector.") from exc nonzero_idx = np.nonzero(retv)[0] if len(nonzero_idx) == 0: raise ValueError(f"A null vector {normal} isn't a valid normal vector.") if len(nonzero_idx) == 1: return axis_names[nonzero_idx[0]] return retv class SlicePlot(NormalPlot): r""" A dispatch class for :class:`yt.visualization.plot_window.AxisAlignedSlicePlot` and :class:`yt.visualization.plot_window.OffAxisSlicePlot` objects. This essentially allows for a single entry point to both types of slice plots, the distinction being determined by the specified normal vector to the slice.
The returned plot object can be updated using one of the many helper functions defined in PlotWindow. Parameters ---------- ds : :class:`yt.data_objects.static_output.Dataset` This is the dataset object corresponding to the simulation output to be plotted. normal : int, str, or 3-element sequence of floats This specifies the normal vector to the slice. Valid int values are 0, 1 and 2. Corresponding str values depend on the geometry of the dataset and are generally given by `ds.coordinates.axis_order`. E.g. in cartesian they are 'x', 'y' and 'z'. An arbitrary normal vector may be specified as a 3-element sequence of floats. This returns a :class:`OffAxisSlicePlot` object or a :class:`AxisAlignedSlicePlot` object, depending on whether the requested normal direction corresponds to a natural axis of the dataset's geometry. fields : a (or a list of) 2-tuple of strings (ftype, fname) The name of the field(s) to be plotted. The following are nominally keyword arguments passed onto the respective slice plot objects generated by this function. Keyword Arguments ----------------- center : 'center', 'c', 'left', 'l', 'right', 'r', id of a global extremum, or array-like The coordinate of the selection's center. Defaults to the 'center', i.e. center of the domain. Centering on the min or max of a field is supported by passing a tuple such as ('min', ('gas', 'density')) or ('max', ('gas', 'temperature')). A single string may also be used (e.g. "min_density" or "max_temperature"), though it's not as flexible and does not allow selecting an exact field/particle type. With this syntax, the first field matching the provided name is selected. 'max' or 'm' can be used as a shortcut for ('max', ('gas', 'density')) 'min' can be used as a shortcut for ('min', ('gas', 'density')) One can also select an exact point as a 3 element coordinate sequence, e.g. [0.5, 0.5, 0] Units can be specified by passing in *center* as a tuple containing a 3-element coordinate sequence and string unit name, e.g. ([0, 0.5, 0.5], "cm"), or by passing in a YTArray. Code units are assumed if unspecified. The domain edges along the selected *axis* can be selected with 'left'/'l' and 'right'/'r' respectively. width : tuple or a float. Width can have four different formats to support windows with variable x and y widths. They are: ================================== ======================= format example ================================== ======================= (float, string) (10,'kpc') ((float, string), (float, string)) ((10,'kpc'),(15,'kpc')) float 0.2 (float, float) (0.2, 0.3) ================================== ======================= For example, (10, 'kpc') requests a plot window that is 10 kiloparsecs wide in the x and y directions, ((10,'kpc'),(15,'kpc')) requests a window that is 10 kiloparsecs wide along the x axis and 15 kiloparsecs wide along the y axis. In the other two examples, code units are assumed, for example (0.2, 0.3) requests a plot that has an x width of 0.2 and a y width of 0.3 in code units. If units are provided the resulting plot axis labels will use the supplied units. axes_unit : string The name of the unit for the tick labels on the x and y axes. Defaults to None, which automatically picks an appropriate unit. If axes_unit is '1', 'u', or 'unitary', it will not display the units, and only show the axes name. origin : string or length 1, 2, or 3 sequence. The location of the origin of the plot coordinate system for `AxisAlignedSlicePlot` objects; for `OffAxisSlicePlot` objects this parameter is discarded.
This is typically represented by a '-' separated string or a tuple of strings. In the first index the y-location is given by 'lower', 'upper', or 'center'. The second index is the x-location, given as 'left', 'right', or 'center'. Finally, whether the origin is applied in 'domain' space, plot 'window' space or 'native' simulation coordinate system is given. For example, both 'upper-right-domain' and ['upper', 'right', 'domain'] place the origin in the upper right hand corner of domain space. If x or y are not given, a value is inferred. For instance, 'left-domain' corresponds to the lower-left hand corner of the simulation domain, 'center-domain' corresponds to the center of the simulation domain, or 'center-window' for the center of the plot window. In the event that none of these options place the origin in a desired location, a sequence of tuples and a string specifying the coordinate space can be given. If plain numeric types are input, units of `code_length` are assumed. Further examples: =============================================== =============================== format example =============================================== =============================== '{space}' 'domain' '{xloc}-{space}' 'left-window' '{yloc}-{space}' 'upper-domain' '{yloc}-{xloc}-{space}' 'lower-right-window' ('{space}',) ('window',) ('{xloc}', '{space}') ('right', 'domain') ('{yloc}', '{space}') ('lower', 'window') ('{yloc}', '{xloc}', '{space}') ('lower', 'right', 'window') ((yloc, '{unit}'), (xloc, '{unit}'), '{space}') ((0, 'm'), (.4, 'm'), 'window') (xloc, yloc, '{space}') (0.23, 0.5, 'domain') =============================================== =============================== north_vector : a sequence of floats A vector defining the 'up' direction in the `OffAxisSlicePlot`; not used in `AxisAlignedSlicePlot`. This option sets the orientation of the slicing plane. If not set, an arbitrary grid-aligned north-vector is chosen. fontsize : integer The size of the fonts for the axis, colorbar, and tick labels. field_parameters : dictionary A dictionary of field parameters that can be accessed by derived fields. data_source : YTSelectionContainer Object Object to be used for data selection. Defaults to a region covering the entire simulation. swap_axes : bool If True, swap the plot's horizontal and vertical axes (see the ``swap_axes`` method). Raises ------ ValueError or TypeError If `normal` cannot be interpreted as a valid normal direction. Examples -------- >>> from yt import load >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> slc = SlicePlot(ds, "x", ("gas", "density"), center=[0.2, 0.3, 0.4]) >>> slc = SlicePlot( ... ds, [0.4, 0.2, -0.1], ("gas", "pressure"), north_vector=[0.2, -0.3, 0.1] ... ) """ # ignoring type check here, because mypy doesn't allow __new__ methods to # return instances of subclasses. The design we use here is however based # on the pathlib.Path class from the standard library # https://github.com/python/mypy/issues/1020 def __new__( # type: ignore cls, ds, normal, fields, *args, **kwargs ) -> Union["AxisAlignedSlicePlot", "OffAxisSlicePlot"]: if cls is SlicePlot: normal = cls.sanitize_normal_vector(ds, normal) if isinstance(normal, str): cls = AxisAlignedSlicePlot else: cls = OffAxisSlicePlot self = object.__new__(cls) return self # type: ignore [return-value] class ProjectionPlot(NormalPlot): r""" A dispatch class for :class:`yt.visualization.plot_window.AxisAlignedProjectionPlot` and :class:`yt.visualization.plot_window.OffAxisProjectionPlot` objects.
This essentially allows for a single entry point to both types of projection plots, the distinction being determined by the specified normal vector to the projection. The returned plot object can be updated using one of the many helper functions defined in PlotWindow. Parameters ---------- ds : :class:`yt.data_objects.static_output.Dataset` This is the dataset object corresponding to the simulation output to be plotted. normal : int, str, or 3-element sequence of floats This specifies the normal vector to the projection. Valid int values are 0, 1 and 2. Corresponding str values depend on the geometry of the dataset and are generally given by `ds.coordinates.axis_order`. E.g. in cartesian they are 'x', 'y' and 'z'. An arbitrary normal vector may be specified as a 3-element sequence of floats. This function will return a :class:`OffAxisProjectionPlot` object or a :class:`AxisAlignedProjectionPlot` object, depending on whether the requested normal direction corresponds to a natural axis of the dataset's geometry. fields : a (or a list of) 2-tuple of strings (ftype, fname) The name of the field(s) to be plotted. Any additional positional and keyword arguments are passed down to the appropriate return class. See :class:`yt.visualization.plot_window.AxisAlignedProjectionPlot` and :class:`yt.visualization.plot_window.OffAxisProjectionPlot`. Raises ------ ValueError or TypeError If `normal` cannot be interpreted as a valid normal direction. """ # ignoring type check here, because mypy doesn't allow __new__ methods to # return instances of subclasses. The design we use here is however based # on the pathlib.Path class from the standard library # https://github.com/python/mypy/issues/1020 def __new__( # type: ignore cls, ds, normal, fields, *args, **kwargs ) -> Union["AxisAlignedProjectionPlot", "OffAxisProjectionPlot"]: if cls is ProjectionPlot: normal = cls.sanitize_normal_vector(ds, normal) if isinstance(normal, str): cls = AxisAlignedProjectionPlot else: cls = OffAxisProjectionPlot self = object.__new__(cls) return self # type: ignore [return-value] class AxisAlignedSlicePlot(SlicePlot, PWViewerMPL): r"""Creates a slice plot from a dataset. Given a ds object, an axis to slice along, and a field name string, this will return a PWViewerMPL object containing the plot. The plot can be updated using one of the many helper functions defined in PlotWindow. Parameters ---------- ds : `Dataset` This is the dataset object corresponding to the simulation output to be plotted. normal : int or one of 'x', 'y', 'z' An int corresponding to the axis to slice along (0=x, 1=y, 2=z) or the axis name itself fields : string The name of the field(s) to be plotted. center : 'center', 'c', 'left', 'l', 'right', 'r', id of a global extremum, or array-like The coordinate of the selection's center. Defaults to the 'center', i.e. center of the domain. Centering on the min or max of a field is supported by passing a tuple such as ('min', ('gas', 'density')) or ('max', ('gas', 'temperature')). A single string may also be used (e.g. "min_density" or "max_temperature"), though it's not as flexible and does not allow selecting an exact field/particle type. With this syntax, the first field matching the provided name is selected. 'max' or 'm' can be used as a shortcut for ('max', ('gas', 'density')) 'min' can be used as a shortcut for ('min', ('gas', 'density')) One can also select an exact point as a 3 element coordinate sequence, e.g.
[0.5, 0.5, 0] Units can be specified by passing in *center* as a tuple containing a 3-element coordinate sequence and string unit name, e.g. ([0, 0.5, 0.5], "cm"), or by passing in a YTArray. Code units are assumed if unspecified. The domain edges along the selected *axis* can be selected with 'left'/'l' and 'right'/'r' respectively. width : tuple or a float. Width can have four different formats to support windows with variable x and y widths. They are: ================================== ======================= format example ================================== ======================= (float, string) (10,'kpc') ((float, string), (float, string)) ((10,'kpc'),(15,'kpc')) float 0.2 (float, float) (0.2, 0.3) ================================== ======================= For example, (10, 'kpc') requests a plot window that is 10 kiloparsecs wide in the x and y directions, ((10,'kpc'),(15,'kpc')) requests a window that is 10 kiloparsecs wide along the x axis and 15 kiloparsecs wide along the y axis. In the other two examples, code units are assumed, for example (0.2, 0.3) requests a plot that has an x width of 0.2 and a y width of 0.3 in code units. If units are provided the resulting plot axis labels will use the supplied units. origin : string or length 1, 2, or 3 sequence. The location of the origin of the plot coordinate system. This is typically represented by a '-' separated string or a tuple of strings. In the first index the y-location is given by 'lower', 'upper', or 'center'. The second index is the x-location, given as 'left', 'right', or 'center'. Finally, whether the origin is applied in 'domain' space, plot 'window' space or 'native' simulation coordinate system is given. For example, both 'upper-right-domain' and ['upper', 'right', 'domain'] place the origin in the upper right hand corner of domain space. If x or y are not given, a value is inferred. For instance, 'left-domain' corresponds to the lower-left hand corner of the simulation domain, 'center-domain' corresponds to the center of the simulation domain, or 'center-window' for the center of the plot window. In the event that none of these options place the origin in a desired location, a sequence of tuples and a string specifying the coordinate space can be given. If plain numeric types are input, units of `code_length` are assumed. Further examples: =============================================== =============================== format example =============================================== =============================== '{space}' 'domain' '{xloc}-{space}' 'left-window' '{yloc}-{space}' 'upper-domain' '{yloc}-{xloc}-{space}' 'lower-right-window' ('{space}',) ('window',) ('{xloc}', '{space}') ('right', 'domain') ('{yloc}', '{space}') ('lower', 'window') ('{yloc}', '{xloc}', '{space}') ('lower', 'right', 'window') ((yloc, '{unit}'), (xloc, '{unit}'), '{space}') ((0, 'm'), (.4, 'm'), 'window') (xloc, yloc, '{space}') (0.23, 0.5, 'domain') =============================================== =============================== axes_unit : string The name of the unit for the tick labels on the x and y axes. Defaults to None, which automatically picks an appropriate unit. If axes_unit is '1', 'u', or 'unitary', it will not display the units, and only show the axes name. fontsize : integer The size of the fonts for the axis, colorbar, and tick labels. field_parameters : dictionary A dictionary of field parameters that can be accessed by derived fields. data_source: YTSelectionContainer object Object to be used for data selection.
Defaults to ds.all_data(), a region covering the full domain buff_size: length 2 sequence Size of the buffer to use for the image, i.e. the number of resolution elements used. Effectively sets a resolution limit to the image if buff_size is smaller than the finest gridding. Examples -------- This will save an image in the file 'sliceplot_Density.png' >>> from yt import load >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> p = SlicePlot(ds, 2, "density", "c", (20, "kpc")) >>> p.save("sliceplot") """ _plot_type = "Slice" _frb_generator = FixedResolutionBuffer def __init__( self, ds, normal, fields, center="center", width=None, axes_unit=None, origin="center-window", fontsize=18, field_parameters=None, window_size=8.0, aspect=None, data_source=None, buff_size=(800, 800), *, north_vector=None, ): if north_vector is not None: # this kwarg exists only for symmetry reasons with OffAxisSlicePlot mylog.warning( "Ignoring 'north_vector' keyword as it is ill-defined for " "an AxisAlignedSlicePlot object." ) del north_vector normal = self.sanitize_normal_vector(ds, normal) # this will handle time series data and controllers axis = fix_axis(normal, ds) (bounds, center, display_center) = get_window_parameters( axis, center, width, ds ) if field_parameters is None: field_parameters = {} if isinstance(ds, YTSpatialPlotDataset): slc = ds.all_data() slc.axis = axis if slc.axis != ds.parameters["axis"]: raise RuntimeError(f"Original slice axis is {ds.parameters['axis']}.") else: slc = ds.slice( axis, center[axis], field_parameters=field_parameters, center=center, data_source=data_source, ) slc.get_data(fields) validate_mesh_fields(slc, fields) PWViewerMPL.__init__( self, slc, bounds, origin=origin, fontsize=fontsize, fields=fields, window_size=window_size, aspect=aspect, buff_size=buff_size, geometry=ds.geometry, ) if axes_unit is None: axes_unit = get_axes_unit(width, ds) self.set_axes_unit(axes_unit) class AxisAlignedProjectionPlot(ProjectionPlot, PWViewerMPL): r"""Creates a projection plot from a dataset. Given a ds object, an axis to project along, and a field name string, this will return a PWViewerMPL object containing the plot. The plot can be updated using one of the many helper functions defined in PlotWindow. Parameters ---------- ds : `Dataset` This is the dataset object corresponding to the simulation output to be plotted. normal : int or one of 'x', 'y', 'z' An int corresponding to the axis to project along (0=x, 1=y, 2=z) or the axis name itself fields : string The name of the field(s) to be plotted. center : 'center', 'c', 'left', 'l', 'right', 'r', id of a global extremum, or array-like The coordinate of the selection's center. Defaults to the 'center', i.e. center of the domain. Centering on the min or max of a field is supported by passing a tuple such as ('min', ('gas', 'density')) or ('max', ('gas', 'temperature')). A single string may also be used (e.g. "min_density" or "max_temperature"), though it's not as flexible and does not allow selecting an exact field/particle type. With this syntax, the first field matching the provided name is selected. 'max' or 'm' can be used as a shortcut for ('max', ('gas', 'density')) 'min' can be used as a shortcut for ('min', ('gas', 'density')) One can also select an exact point as a 3 element coordinate sequence, e.g.
[0.5, 0.5, 0] Units can be specified by passing in *center* as a tuple containing a 3-element coordinate sequence and string unit name, e.g. ([0, 0.5, 0.5], "cm"), or by passing in a YTArray. Code units are assumed if unspecified. The domain edges along the selected *axis* can be selected with 'left'/'l' and 'right'/'r' respectively. width : tuple or a float. Width can have four different formats to support windows with variable x and y widths. They are: ================================== ======================= format example ================================== ======================= (float, string) (10,'kpc') ((float, string), (float, string)) ((10,'kpc'),(15,'kpc')) float 0.2 (float, float) (0.2, 0.3) ================================== ======================= For example, (10, 'kpc') requests a plot window that is 10 kiloparsecs wide in the x and y directions, ((10,'kpc'),(15,'kpc')) requests a window that is 10 kiloparsecs wide along the x axis and 15 kiloparsecs wide along the y axis. In the other two examples, code units are assumed, for example (0.2, 0.3) requests a plot that has an x width of 0.2 and a y width of 0.3 in code units. If units are provided the resulting plot axis labels will use the supplied units. axes_unit : string The name of the unit for the tick labels on the x and y axes. Defaults to None, which automatically picks an appropriate unit. If axes_unit is '1', 'u', or 'unitary', it will not display the units, and only show the axes name. origin : string or length 1, 2, or 3 sequence. The location of the origin of the plot coordinate system. This is typically represented by a '-' separated string or a tuple of strings. In the first index the y-location is given by 'lower', 'upper', or 'center'. The second index is the x-location, given as 'left', 'right', or 'center'. Finally, whether the origin is applied in 'domain' space, plot 'window' space or 'native' simulation coordinate system is given. For example, both 'upper-right-domain' and ['upper', 'right', 'domain'] place the origin in the upper right hand corner of domain space. If x or y are not given, a value is inferred. For instance, 'left-domain' corresponds to the lower-left hand corner of the simulation domain, 'center-domain' corresponds to the center of the simulation domain, or 'center-window' for the center of the plot window. In the event that none of these options place the origin in a desired location, a sequence of tuples and a string specifying the coordinate space can be given. If plain numeric types are input, units of `code_length` are assumed. Further examples: =============================================== =============================== format example =============================================== =============================== '{space}' 'domain' '{xloc}-{space}' 'left-window' '{yloc}-{space}' 'upper-domain' '{yloc}-{xloc}-{space}' 'lower-right-window' ('{space}',) ('window',) ('{xloc}', '{space}') ('right', 'domain') ('{yloc}', '{space}') ('lower', 'window') ('{yloc}', '{xloc}', '{space}') ('lower', 'right', 'window') ((yloc, '{unit}'), (xloc, '{unit}'), '{space}') ((0, 'm'), (.4, 'm'), 'window') (xloc, yloc, '{space}') (0.23, 0.5, 'domain') =============================================== =============================== data_source : YTSelectionContainer Object Object to be used for data selection. Defaults to a region covering the entire simulation. weight_field : string The name of the weighting field. Set to None for no weight. max_level: int The maximum level to project to. 
fontsize : integer The size of the fonts for the axis, colorbar, and tick labels. method : string The method of projection. Valid methods are: "integrate" with no weight_field specified : integrate the requested field along the line of sight. "integrate" with a weight_field specified : weight the requested field by the weighting field and integrate along the line of sight. "max" : pick out the maximum value of the field in the line of sight. "min" : pick out the minimum value of the field in the line of sight. "sum" : This method is the same as integrate, except that it does not multiply by a path length when performing the integration, and is just a straight summation of the field along the given axis. WARNING: This should only be used for uniform resolution grid datasets, as other datasets may result in unphysical images. window_size : float The size of the window in inches. Set to 8 by default. aspect : float The aspect ratio of the plot. Set to None for 1. field_parameters : dictionary A dictionary of field parameters that can be accessed by derived fields. data_source: YTSelectionContainer object Object to be used for data selection. Defaults to ds.all_data(), a region covering the full domain buff_size: length 2 sequence Size of the buffer to use for the image, i.e. the number of resolution elements used. Effectively sets a resolution limit to the image if buff_size is smaller than the finest gridding. moment : integer, optional For a weighted projection, moment = 1 (the default) corresponds to a weighted average. moment = 2 corresponds to a weighted standard deviation. Examples -------- Create a projection plot with a width of 20 kiloparsecs centered on the center of the simulation box: >>> from yt import load >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> p = AxisAlignedProjectionPlot(ds, "z", ("gas", "density"), width=(20, "kpc")) """ _plot_type = "Projection" _frb_generator = FixedResolutionBuffer def __init__( self, ds, normal, fields, center="center", width=None, axes_unit=None, weight_field=None, max_level=None, origin="center-window", fontsize=18, field_parameters=None, data_source=None, method="integrate", window_size=8.0, buff_size=(800, 800), aspect=None, *, moment=1, ): if method == "mip": issue_deprecation_warning( "'mip' method is a deprecated alias for 'max'. " "Please use method='max' directly.", since="4.1", stacklevel=3, ) method = "max" normal = self.sanitize_normal_vector(ds, normal) axis = fix_axis(normal, ds) # For a non-weighted integral projection, ensure the field label reflects that if weight_field is None and method == "integrate": self.projected = True (bounds, center, display_center) = get_window_parameters( axis, center, width, ds ) if field_parameters is None: field_parameters = {} # We don't use the plot's data source for validation like in the other # plotting classes to avoid an exception test_data_source = ds.all_data() validate_mesh_fields(test_data_source, fields) if isinstance(ds, YTSpatialPlotDataset): proj = ds.all_data() proj.axis = axis if proj.axis != ds.parameters["axis"]: raise RuntimeError( f"Original projection axis is {ds.parameters['axis']}."
) if weight_field is not None: proj.weight_field = proj._determine_fields(weight_field)[0] else: proj.weight_field = weight_field proj.center = center else: proj = ds.proj( fields, axis, weight_field=weight_field, center=center, data_source=data_source, field_parameters=field_parameters, method=method, max_level=max_level, moment=moment, ) self.moment = moment PWViewerMPL.__init__( self, proj, bounds, fields=fields, origin=origin, fontsize=fontsize, window_size=window_size, aspect=aspect, buff_size=buff_size, geometry=ds.geometry, ) if axes_unit is None: axes_unit = get_axes_unit(width, ds) self.set_axes_unit(axes_unit) class OffAxisSlicePlot(SlicePlot, PWViewerMPL): r"""Creates an off axis slice plot from a dataset Given a ds object, a normal vector defining a slicing plane, and a field name string, this will return a PWViewerMPL object containing the plot. The plot can be updated using one of the many helper functions defined in PlotWindow. Parameters ---------- ds : :class:`yt.data_objects.static_output.Dataset` This is the dataset object corresponding to the simulation output to be plotted. normal : a sequence of floats The vector normal to the slicing plane. fields : string The name of the field(s) to be plotted. center : 'center', 'c', id of a global extremum, or array-like The coordinate of the selection's center. Defaults to the 'center', i.e. center of the domain. Centering on the min or max of a field is supported by passing a tuple such as ('min', ('gas', 'density')) or ('max', ('gas', 'temperature')). A single string may also be used (e.g. "min_density" or "max_temperature"), though it's not as flexible and does not allow selecting an exact field/particle type. With this syntax, the first field matching the provided name is selected. 'max' or 'm' can be used as a shortcut for ('max', ('gas', 'density')) 'min' can be used as a shortcut for ('min', ('gas', 'density')) One can also select an exact point as a 3 element coordinate sequence, e.g. [0.5, 0.5, 0] Units can be specified by passing in *center* as a tuple containing a 3-element coordinate sequence and string unit name, e.g. ([0, 0.5, 0.5], "cm"), or by passing in a YTArray. Code units are assumed if unspecified. width : tuple or a float. Width can have four different formats to support windows with variable x and y widths. They are: ================================== ======================= format example ================================== ======================= (float, string) (10,'kpc') ((float, string), (float, string)) ((10,'kpc'),(15,'kpc')) float 0.2 (float, float) (0.2, 0.3) ================================== ======================= For example, (10, 'kpc') requests a plot window that is 10 kiloparsecs wide in the x and y directions, ((10,'kpc'),(15,'kpc')) requests a window that is 10 kiloparsecs wide along the x axis and 15 kiloparsecs wide along the y axis. In the other two examples, code units are assumed, for example (0.2, 0.3) requests a plot that has an x width of 0.2 and a y width of 0.3 in code units. If units are provided the resulting plot axis labels will use the supplied units. axes_unit : string The name of the unit for the tick labels on the x and y axes. Defaults to None, which automatically picks an appropriate unit. If axes_unit is '1', 'u', or 'unitary', it will not display the units, and only show the axes name. north_vector : a sequence of floats A vector defining the 'up' direction in the plot. This option sets the orientation of the slicing plane.
If not set, an arbitrary grid-aligned north-vector is chosen. fontsize : integer The size of the fonts for the axis, colorbar, and tick labels. field_parameters : dictionary A dictionary of field parameters that can be accessed by derived fields. data_source : YTSelectionContainer Object Object to be used for data selection. Defaults to ds.all_data(), a region covering the full domain. buff_size: length 2 sequence Size of the buffer to use for the image, i.e. the number of resolution elements used. Effectively sets a resolution limit to the image if buff_size is smaller than the finest gridding. """ _plot_type = "OffAxisSlice" _frb_generator = FixedResolutionBuffer _supported_geometries = ("cartesian", "spectral_cube") def __init__( self, ds, normal, fields, center="center", width=None, axes_unit=None, north_vector=None, fontsize=18, field_parameters=None, data_source=None, buff_size=(800, 800), *, origin=None, ): if origin is not None: # this kwarg exists only for symmetry reasons with AxisAlignedSlicePlot # in OffAxisSlicePlot, the origin is hardcoded mylog.warning( "Ignoring 'origin' keyword as it is ill-defined for " "an OffAxisSlicePlot object." ) del origin if ds.geometry not in self._supported_geometries: raise NotImplementedError( f"off-axis slices are not supported for {ds.geometry!r} geometry\n" f"currently supported geometries: {self._supported_geometries!r}" ) # bounds are in cutting plane coordinates, centered on 0: # [xmin, xmax, ymin, ymax]. Can derive width/height back # from these. unit is code_length (bounds, center_rot) = get_oblique_window_parameters(normal, center, width, ds) if field_parameters is None: field_parameters = {} if isinstance(ds, YTSpatialPlotDataset): cutting = ds.all_data() cutting.axis = None cutting._inv_mat = ds.parameters["_inv_mat"] else: cutting = ds.cutting( normal, center, north_vector=north_vector, field_parameters=field_parameters, data_source=data_source, ) cutting.get_data(fields) validate_mesh_fields(cutting, fields) # Hard-coding the origin keyword since the other two options # aren't well-defined for off-axis data objects PWViewerMPL.__init__( self, cutting, bounds, fields=fields, origin="center-window", periodic=False, oblique=True, fontsize=fontsize, buff_size=buff_size, ) if axes_unit is None: axes_unit = get_axes_unit(width, ds) self.set_axes_unit(axes_unit) class OffAxisProjectionDummyDataSource: _type_name = "proj" _key_fields: list[str] = [] def __init__( self, center, ds, normal_vector, width, fields, interpolated, weight=None, volume=None, no_ghost=False, le=None, re=None, north_vector=None, depth=None, method="integrate", data_source=None, *, moment=1, ): validate_moment(moment, weight) self.center = center self.ds = ds self.axis = None # always true for oblique data objects self.normal_vector = normal_vector self.width = width self.depth = depth if data_source is None: self.dd = ds.all_data() else: self.dd = data_source fields = self.dd._determine_fields(fields) self.fields = fields self.interpolated = interpolated if weight is not None: weight = self.dd._determine_fields(weight)[0] self.weight_field = weight self.volume = volume self.no_ghost = no_ghost self.le = le self.re = re self.north_vector = north_vector self.method = method self.orienter = Orientation(normal_vector, north_vector=north_vector) self.moment = moment def _determine_fields(self, *args): return self.dd._determine_fields(*args) class OffAxisProjectionPlot(ProjectionPlot, PWViewerMPL): r"""Creates an off axis projection plot from a dataset Given a ds object, a
normal vector to project along, and a field name string, this will return a PWViewerMPL object containing the plot. The plot can be updated using one of the many helper functions defined in PlotWindow. Parameters ---------- ds : :class:`yt.data_objects.static_output.Dataset` This is the dataset object corresponding to the simulation output to be plotted. normal : a sequence of floats The vector normal to the slicing plane. fields : string The name of the field(s) to be plotted. center : 'center', 'c', id of a global extremum, or array-like The coordinate of the selection's center. Defaults to the 'center', i.e. center of the domain. Centering on the min or max of a field is supported by passing a tuple such as ('min', ('gas', 'density')) or ('max', ('gas', 'temperature')). A single string may also be used (e.g. "min_density" or "max_temperature"), though it's not as flexible and does not allow selecting an exact field/particle type. With this syntax, the first field matching the provided name is selected. 'max' or 'm' can be used as a shortcut for ('max', ('gas', 'density')) 'min' can be used as a shortcut for ('min', ('gas', 'density')) One can also select an exact point as a 3 element coordinate sequence, e.g. [0.5, 0.5, 0] Units can be specified by passing in *center* as a tuple containing a 3-element coordinate sequence and string unit name, e.g. ([0, 0.5, 0.5], "cm"), or by passing in a YTArray. Code units are assumed if unspecified. width : tuple or a float. Width can have four different formats to support windows with variable x and y widths. They are: ================================== ======================= format example ================================== ======================= (float, string) (10,'kpc') ((float, string), (float, string)) ((10,'kpc'),(15,'kpc')) float 0.2 (float, float) (0.2, 0.3) ================================== ======================= For example, (10, 'kpc') requests a plot window that is 10 kiloparsecs wide in the x and y directions, ((10,'kpc'),(15,'kpc')) requests a window that is 10 kiloparsecs wide along the x axis and 15 kiloparsecs wide along the y axis. In the other two examples, code units are assumed, for example (0.2, 0.3) requests a plot that has an x width of 0.2 and a y width of 0.3 in code units. If units are provided the resulting plot axis labels will use the supplied units. depth : A tuple or a float A tuple containing the depth to project through and the string key of the unit: (depth, 'unit'). If set to a float, code units are assumed. weight_field : string The name of the weighting field. Set to None for no weight. max_level: int The maximum level to project to. axes_unit : string The name of the unit for the tick labels on the x and y axes. Defaults to None, which automatically picks an appropriate unit. If axes_unit is '1', 'u', or 'unitary', it will not display the units, and only show the axes name. north_vector : a sequence of floats A vector defining the 'up' direction in the plot. This option sets the orientation of the slicing plane. If not set, an arbitrary grid-aligned north-vector is chosen. fontsize : integer The size of the fonts for the axis, colorbar, and tick labels. method : string The method of projection. Valid methods are: "integrate" with no weight_field specified : integrate the requested field along the line of sight. "integrate" with a weight_field specified : weight the requested field by the weighting field and integrate along the line of sight.
"sum" : This method is the same as integrate, except that it does not multiply by a path length when performing the integration, and is just a straight summation of the field along the given axis. WARNING: This should only be used for uniform resolution grid datasets, as other datasets may result in unphysical images. moment : integer, optional for a weighted projection, moment = 1 (the default) corresponds to a weighted average. moment = 2 corresponds to a weighted standard deviation. data_source: YTSelectionContainer object Object to be used for data selection. Defaults to ds.all_data(), a region covering the full domain buff_size: length 2 sequence Size of the buffer to use for the image, i.e. the number of resolution elements used. Effectively sets a resolution limit to the image if buff_size is smaller than the finest gridding. """ _plot_type = "OffAxisProjection" _frb_generator = OffAxisProjectionFixedResolutionBuffer _supported_geometries = ("cartesian", "spectral_cube") def __init__( self, ds, normal, fields, center="center", width=None, depth=None, axes_unit=None, weight_field=None, max_level=None, north_vector=None, volume=None, no_ghost=False, le=None, re=None, interpolated=False, fontsize=18, method="integrate", moment=1, data_source=None, buff_size=(800, 800), ): if ds.geometry not in self._supported_geometries: raise NotImplementedError( "off-axis slices are not supported" f" for {ds.geometry!r} geometry\n" "currently supported geometries:" f" {self._supported_geometries!r}" ) # center_rot normalizes the center to (0,0), # units match bounds # for SPH data, we want to input the original center # the cython backend handles centering to this point and # rotation. # get3bounds gets a depth 0.5 * diagonal + margin in the # depth=None case. 
(bounds, center_rot) = get_oblique_window_parameters( normal, center, width, ds, depth=depth, get3bounds=True, ) # will probably fail if you try to project an SPH and non-SPH # field in a single call # checks for SPH fields copied from the # _ortho_pixelize method in cartesian_coordinates.py ## data_source might be None here ## (OffAxisProjectionDummyDataSource gets used later) if data_source is None: data_source = ds.all_data() field = data_source._determine_fields(fields)[0] finfo = data_source.ds.field_info[field] is_sph_field = finfo.is_sph_field particle_datasets = (ParticleDataset, StreamParticlesDataset) if isinstance(data_source.ds, particle_datasets) and is_sph_field: center_use = parse_center_array(center, ds=data_source.ds, axis=None) else: center_use = center_rot fields = list(iter_fields(fields))[:] # oap_width = ds.arr( # (bounds[1] - bounds[0], # bounds[3] - bounds[2]) # ) OffAxisProj = OffAxisProjectionDummyDataSource( center_use, ds, normal, width, fields, interpolated, weight=weight_field, volume=volume, no_ghost=no_ghost, le=le, re=re, north_vector=north_vector, depth=depth, method=method, data_source=data_source, moment=moment, ) validate_mesh_fields(OffAxisProj, fields) if max_level is not None: OffAxisProj.dd.max_level = max_level # If a non-weighted, integral projection, assure field label # reflects that if weight_field is None and OffAxisProj.method == "integrate": self.projected = True self.moment = moment # Hard-coding the origin keyword since the other two options # aren't well-defined for off-axis data objects PWViewerMPL.__init__( self, OffAxisProj, bounds, fields=fields, origin="center-window", periodic=False, oblique=True, fontsize=fontsize, buff_size=buff_size, ) if axes_unit is None: axes_unit = get_axes_unit(width, ds) self.set_axes_unit(axes_unit) class WindowPlotMPL(ImagePlotMPL): """A container for a single PlotWindow matplotlib figure and axes""" def __init__( self, data, extent, figure_size, fontsize, aspect, figure, axes, cax, mpl_proj, mpl_transform, *, norm_handler: NormHandler, colorbar_handler: ColorbarHandler, alpha: AlphaT = None, ): self._projection = mpl_proj self._transform = mpl_transform self._setup_layout_constraints(figure_size, fontsize) self._draw_frame = True self._aspect = ((extent[1] - extent[0]) / (extent[3] - extent[2])).in_cgs() self._unit_aspect = aspect # Compute layout self._figure_size = figure_size self._draw_axes = True fontscale = float(fontsize) / self.__class__._default_font_size if fontscale < 1.0: fontscale = np.sqrt(fontscale) if is_sequence(figure_size): self._cb_size = 0.0375 * figure_size[0] else: self._cb_size = 0.0375 * figure_size self._ax_text_size = [1.2 * fontscale, 0.9 * fontscale] self._top_buff_size = 0.30 * fontscale super().__init__( figure=figure, axes=axes, cax=cax, norm_handler=norm_handler, colorbar_handler=colorbar_handler, ) self._init_image(data, extent, aspect, alpha=alpha) def _create_axes(self, axrect): self.axes = self.figure.add_axes(axrect, projection=self._projection) def plot_2d( ds, fields, center="center", width=None, axes_unit=None, origin="center-window", fontsize=18, field_parameters=None, window_size=8.0, aspect=None, data_source=None, ) -> AxisAlignedSlicePlot: r"""Creates a plot of a 2D dataset Given a ds object and a field name string, this will return a PWViewerMPL object containing the plot. The plot can be updated using one of the many helper functions defined in PlotWindow. 
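As a minimal sketch (assuming ``ds`` is any 2D dataset already loaded with ``yt.load``): >>> p = plot_2d(ds, ("gas", "density"), width=(10, "kpc")) >>> p.save()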
Parameters ---------- ds : `Dataset` This is the dataset object corresponding to the simulation output to be plotted. fields : string The name of the field(s) to be plotted. center : 'center', 'c', id of a global extremum, or array-like The coordinate of the selection's center. Defaults to the 'center', i.e. center of the domain. Centering on the min or max of a field is supported by passing a tuple such as ('min', ('gas', 'density')) or ('max', ('gas', 'temperature')). A single string may also be used (e.g. "min_density" or "max_temperature"), though it's not as flexible and does not allow selecting an exact field/particle type. With this syntax, the first field matching the provided name is selected. 'max' or 'm' can be used as a shortcut for ('max', ('gas', 'density')) 'min' can be used as a shortcut for ('min', ('gas', 'density')) One can also select an exact point as a 3 element coordinate sequence, e.g. [0.5, 0.5, 0] Units can be specified by passing in *center* as a tuple containing a 3-element coordinate sequence and string unit name, e.g. ([0, 0.5, 0.5], "cm"), or by passing in a YTArray. Code units are assumed if unspecified. plot_2d also accepts a coordinate in two dimensions. width : tuple or a float. Width can have four different formats to support windows with variable x and y widths. They are: ================================== ======================= format example ================================== ======================= (float, string) (10,'kpc') ((float, string), (float, string)) ((10,'kpc'),(15,'kpc')) float 0.2 (float, float) (0.2, 0.3) ================================== ======================= For example, (10, 'kpc') requests a plot window that is 10 kiloparsecs wide in the x and y directions, ((10,'kpc'),(15,'kpc')) requests a window that is 10 kiloparsecs wide along the x axis and 15 kiloparsecs wide along the y axis. In the other two examples, code units are assumed, for example (0.2, 0.3) requests a plot that has an x width of 0.2 and a y width of 0.3 in code units. If units are provided the resulting plot axis labels will use the supplied units. origin : string or length 1, 2, or 3 sequence. The location of the origin of the plot coordinate system. This is typically represented by a '-' separated string or a tuple of strings. In the first index the y-location is given by 'lower', 'upper', or 'center'. The second index is the x-location, given as 'left', 'right', or 'center'. Finally, whether the origin is applied in 'domain' space, plot 'window' space or 'native' simulation coordinate system is given. For example, both 'upper-right-domain' and ['upper', 'right', 'domain'] place the origin in the upper right hand corner of domain space. If x or y are not given, a value is inferred. For instance, 'left-domain' corresponds to the lower-left hand corner of the simulation domain, 'center-domain' corresponds to the center of the simulation domain, or 'center-window' for the center of the plot window. In the event that none of these options place the origin in a desired location, a sequence of tuples and a string specifying the coordinate space can be given. If plain numeric types are input, units of `code_length` are assumed.
Further examples: =============================================== =============================== format example =============================================== =============================== '{space}' 'domain' '{xloc}-{space}' 'left-window' '{yloc}-{space}' 'upper-domain' '{yloc}-{xloc}-{space}' 'lower-right-window' ('{space}',) ('window',) ('{xloc}', '{space}') ('right', 'domain') ('{yloc}', '{space}') ('lower', 'window') ('{yloc}', '{xloc}', '{space}') ('lower', 'right', 'window') ((yloc, '{unit}'), (xloc, '{unit}'), '{space}') ((0, 'm'), (.4, 'm'), 'window') (xloc, yloc, '{space}') (0.23, 0.5, 'domain') =============================================== =============================== axes_unit : string The name of the unit for the tick labels on the x and y axes. Defaults to None, which automatically picks an appropriate unit. If axes_unit is '1', 'u', or 'unitary', it will not display the units, and only show the axes name. fontsize : integer The size of the fonts for the axis, colorbar, and tick labels. field_parameters : dictionary A dictionary of field parameters that can be accessed by derived fields. data_source: YTSelectionContainer object Object to be used for data selection. Defaults to ds.all_data(), a region covering the full domain. """ if ds.dimensionality != 2: raise RuntimeError("plot_2d only plots 2D datasets!") if ( ds.geometry is Geometry.CARTESIAN or ds.geometry is Geometry.POLAR or ds.geometry is Geometry.SPECTRAL_CUBE ): axis = "z" elif ds.geometry is Geometry.CYLINDRICAL: axis = "theta" elif ds.geometry is Geometry.SPHERICAL: axis = "phi" elif ( ds.geometry is Geometry.GEOGRAPHIC or ds.geometry is Geometry.INTERNAL_GEOGRAPHIC ): raise NotImplementedError( f"plot_2d does not yet support datasets with {ds.geometry} geometries" ) else: assert_never(ds.geometry) # Part of the convenience of plot_2d is to eliminate the use of the # superfluous coordinate, so we do that also with the center argument if not isinstance(center, str) and obj_length(center) == 2: c0_string = isinstance(center[0], str) c1_string = isinstance(center[1], str) if not c0_string and not c1_string: if obj_length(center[0]) == 2 and c1_string: # turning off type checking locally because center arg is hard to type correctly center = ds.arr(center[0], center[1]) # type: ignore [unreachable] elif not isinstance(center, YTArray): center = ds.arr(center, "code_length") center.convert_to_units("code_length") center = ds.arr([center[0], center[1], ds.domain_center[2]]) return AxisAlignedSlicePlot( ds, axis, fields, center=center, width=width, axes_unit=axes_unit, origin=origin, fontsize=fontsize, field_parameters=field_parameters, window_size=window_size, aspect=aspect, data_source=data_source, ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/profile_plotter.py0000644000175100001770000014463014714401662020715 0ustar00runnerdockerimport base64 import os from functools import wraps from typing import TYPE_CHECKING, Any import matplotlib import numpy as np from more_itertools.more import always_iterable, unzip from yt._maintenance.ipython_compat import IS_IPYTHON from yt._typing import FieldKey from yt.data_objects.profiles import create_profile, sanitize_field_tuple_keys from yt.data_objects.static_output import Dataset from yt.frontends.ytdata.data_structures import YTProfileDataset from yt.funcs import iter_fields, matplotlib_style_context from yt.utilities.exceptions import YTNotInsideNotebook from yt.visualization._commons import
_get_units_label from yt.visualization._handlers import ColorbarHandler, NormHandler from yt.visualization.base_plot_types import ImagePlotMPL, PlotMPL from ..data_objects.selection_objects.data_selection_objects import YTSelectionContainer from ._commons import validate_image_name from .plot_container import ( BaseLinePlot, ImagePlotContainer, invalidate_plot, validate_plot, ) if TYPE_CHECKING: from collections.abc import Iterable from yt._typing import FieldKey def invalidate_profile(f): @wraps(f) def newfunc(*args, **kwargs): rv = f(*args, **kwargs) args[0]._profile_valid = False return rv return newfunc def sanitize_label(labels, nprofiles): labels = list(always_iterable(labels)) or [None] if len(labels) == 1: labels = labels * nprofiles if len(labels) != nprofiles: raise ValueError( f"Number of labels {len(labels)} must match number of profiles {nprofiles}" ) invalid_data = [ (label, type(label)) for label in labels if label is not None and not isinstance(label, str) ] if invalid_data: invalid_labels, types = unzip(invalid_data) raise TypeError( "All labels must be None or a string, " f"received {invalid_labels} with type {types}" ) return labels def data_object_or_all_data(data_source): if isinstance(data_source, Dataset): data_source = data_source.all_data() if not isinstance(data_source, YTSelectionContainer): raise RuntimeError("data_source must be a yt selection data object") return data_source class ProfilePlot(BaseLinePlot): r""" Create a 1d profile plot from a data source or from a list of profile objects. Given a data object (all_data, region, sphere, etc.), an x field, and a y field (or fields), this will create a one-dimensional profile of the average (or total) value of the y field in bins of the x field. This can be used to create profiles from given fields or to plot multiple profiles created from `yt.data_objects.profiles.create_profile`. Parameters ---------- data_source : YTSelectionContainer Object The data object to be profiled, such as all_data, region, or sphere. If a dataset is passed in instead, an all_data data object is generated internally from the dataset. x_field : str The binning field for the profile. y_fields : str or list The field or fields to be profiled. weight_field : str The weight field for calculating weighted averages. If None, the profile values are the sum of the field values within the bin. Otherwise, the values are a weighted average. Default : ("gas", "mass") n_bins : int The number of bins in the profile. Default: 64. accumulation : bool If True, the profile values for a bin N are the cumulative sum of all the values from bin 0 to N. Default: False. fractional : If True the profile values are divided by the sum of all the profile data such that the profile represents a probability distribution function. label : str or list of strings If a string, the label to be put on the line plotted. If a list, this should be a list of labels for each profile to be overplotted. Default: None. plot_spec : dict or list of dicts A dictionary or list of dictionaries containing plot keyword arguments. For example, dict(color="red", linestyle=":"). Default: None. x_log : bool Whether the x_axis should be plotted with a logarithmic scaling (True), or linear scaling (False). Default: True. y_log : dict or bool A dictionary containing field:boolean pairs, setting the logarithmic property for that field. May be overridden after instantiation using set_log A single boolean can be passed to signify all fields should use logarithmic (True) or linear scaling (False). 
Default: True. Examples -------- This creates profiles of a single dataset. >>> import yt >>> ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046") >>> ad = ds.all_data() >>> plot = yt.ProfilePlot( ... ad, ... ("gas", "density"), ... [("gas", "temperature"), ("gas", "velocity_x")], ... weight_field=("gas", "mass"), ... plot_spec=dict(color="red", linestyle="--"), ... ) >>> plot.save() This creates profiles from a time series object. >>> es = yt.load_simulation("AMRCosmology.enzo", "Enzo") >>> es.get_time_series() >>> profiles = [] >>> labels = [] >>> plot_specs = [] >>> for ds in es[-4:]: ... ad = ds.all_data() ... profiles.append( ... yt.create_profile( ... ad, ... [("gas", "density")], ... fields=[("gas", "temperature"), ("gas", "velocity_x")], ... ) ... ) ... labels.append(ds.current_redshift) ... plot_specs.append(dict(linestyle="--", alpha=0.7)) >>> plot = yt.ProfilePlot.from_profiles( ... profiles, labels=labels, plot_specs=plot_specs ... ) >>> plot.save() Use set_line_property to change line properties of one or all profiles. """ _default_figure_size = (10.0, 8.0) _default_font_size = 18.0 x_log = None y_log = None x_title = None y_title = None _plot_valid = False def __init__( self, data_source, x_field, y_fields, weight_field=("gas", "mass"), n_bins=64, accumulation=False, fractional=False, label=None, plot_spec=None, x_log=True, y_log=True, ): data_source = data_object_or_all_data(data_source) y_fields = list(iter_fields(y_fields)) logs = {x_field: bool(x_log)} if isinstance(y_log, bool): y_log = {y_field: y_log for y_field in y_fields} if isinstance(data_source.ds, YTProfileDataset): profiles = [data_source.ds.profile] else: profiles = [ create_profile( data_source, [x_field], n_bins=[n_bins], fields=y_fields, weight_field=weight_field, accumulation=accumulation, fractional=fractional, logs=logs, ) ] if plot_spec is None: plot_spec = [{} for p in profiles] if not isinstance(plot_spec, list): plot_spec = [plot_spec.copy() for p in profiles] ProfilePlot._initialize_instance( self, data_source, profiles, label, plot_spec, y_log ) @classmethod def _initialize_instance( cls, obj, data_source, profiles, labels, plot_specs, y_log, ): obj._plot_title = {} obj._plot_text = {} obj._text_xpos = {} obj._text_ypos = {} obj._text_kwargs = {} super(ProfilePlot, obj).__init__(data_source) obj.profiles = list(always_iterable(profiles)) obj.x_log = None obj.y_log = sanitize_field_tuple_keys(y_log, data_source) or {} obj.y_title = {} obj.x_title = None obj.label = sanitize_label(labels, len(obj.profiles)) if plot_specs is None: plot_specs = [{} for p in obj.profiles] obj.plot_spec = plot_specs obj._xlim = (None, None) obj._setup_plots() obj._plot_valid = False # see https://github.com/yt-project/yt/issues/4489 return obj def _get_axrect(self): return (0.1, 0.1, 0.8, 0.8) @validate_plot def save( self, name: str | None = None, suffix: str | None = None, mpl_kwargs: dict[str, Any] | None = None, ): r""" Saves a 1d profile plot. Parameters ---------- name : str, optional The output file keyword. suffix : string, optional Specify the image type by its suffix. If not specified, the output type will be inferred from the filename. Defaults to '.png'. mpl_kwargs : dict, optional A dict of keyword arguments to be passed to matplotlib.
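Examples -------- A minimal sketch (assuming ``plot`` is an existing ProfilePlot instance; the output name here is hypothetical): >>> plot.save("my_profiles", suffix=".pdf") >>> plot.save(mpl_kwargs={"bbox_inches": "tight"})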
""" if not self._plot_valid: self._setup_plots() # Mypy is hardly convinced that we have a `profiles` attribute # at this stage, so we're lasily going to deactivate it locally unique = set(self.plots.values()) iters: Iterable[tuple[int | FieldKey, PlotMPL]] if len(unique) < len(self.plots): iters = enumerate(sorted(unique)) else: iters = self.plots.items() if name is None: if len(self.profiles) == 1: # type: ignore name = str(self.profiles[0].ds) # type: ignore else: name = "Multi-data" name = validate_image_name(name, suffix) prefix, suffix = os.path.splitext(name) xfn = self.profiles[0].x_field # type: ignore if isinstance(xfn, tuple): xfn = xfn[1] names = [] for uid, plot in iters: if isinstance(uid, tuple): uid = uid[1] # type: ignore uid_name = f"{prefix}_1d-Profile_{xfn}_{uid}{suffix}" names.append(uid_name) with matplotlib_style_context(): plot.save(uid_name, mpl_kwargs=mpl_kwargs) return names @validate_plot def show(self): r"""This will send any existing plots to the IPython notebook. If yt is being run from within an IPython session, and it is able to determine this, this function will send any existing plots to the notebook for display. If yt can't determine if it's inside an IPython session, it will raise YTNotInsideNotebook. Examples -------- >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> pp = ProfilePlot(ds.all_data(), ("gas", "density"), ("gas", "temperature")) >>> pp.show() """ if IS_IPYTHON: from IPython.display import display display(self) else: raise YTNotInsideNotebook @validate_plot def _repr_html_(self): """Return an html representation of the plot object. Will display as a png for each WindowPlotMPL instance in self.plots""" ret = "" unique = set(self.plots.values()) if len(unique) < len(self.plots): iters = sorted(unique) else: iters = self.plots.values() for plot in iters: with matplotlib_style_context(): img = plot._repr_png_() img = base64.b64encode(img).decode() ret += ( r'
' ) return ret def _setup_plots(self): if self._plot_valid: return for f, p in self.plots.items(): p.axes.cla() if f in self._plot_text: p.axes.text( self._text_xpos[f], self._text_ypos[f], self._plot_text[f], fontproperties=self._font_properties, **self._text_kwargs[f], ) self._set_font_properties() for i, profile in enumerate(self.profiles): for field, field_data in profile.items(): plot = self._get_plot_instance(field) plot.axes.plot( np.array(profile.x), np.array(field_data), label=self.label[i], **self.plot_spec[i], ) for profile in self.profiles: for fname in profile.keys(): axes = self.plots[fname].axes xscale, yscale = self._get_field_log(fname, profile) xtitle, ytitle = self._get_field_title(fname, profile) axes.set_xscale(xscale) axes.set_yscale(yscale) axes.set_ylabel(ytitle) axes.set_xlabel(xtitle) pnh = self.plots[fname].norm_handler axes.set_ylim(pnh.vmin, pnh.vmax) axes.set_xlim(*self._xlim) if fname in self._plot_title: axes.set_title(self._plot_title[fname]) if any(self.label): axes.legend(loc="best") self._set_font_properties() self._plot_valid = True @classmethod def from_profiles(cls, profiles, labels=None, plot_specs=None, y_log=None): r""" Instantiate a ProfilePlot object from a list of profiles created with :func:`~yt.data_objects.profiles.create_profile`. Parameters ---------- profiles : a profile or list of profiles A single profile or list of profile objects created with :func:`~yt.data_objects.profiles.create_profile`. labels : list of strings A list of labels for each profile to be overplotted. Default: None. plot_specs : list of dicts A list of dictionaries containing plot keyword arguments. For example, [dict(color="red", linestyle=":")]. Default: None. Examples -------- >>> from yt import ProfilePlot, create_profile, load_simulation >>> es = load_simulation("AMRCosmology.enzo", "Enzo") >>> es.get_time_series() >>> profiles = [] >>> labels = [] >>> plot_specs = [] >>> for ds in es[-4:]: ... ad = ds.all_data() ... profiles.append( ... create_profile( ... ad, ... [("gas", "density")], ... fields=[("gas", "temperature"), ("gas", "velocity_x")], ... ) ... ) ... labels.append(ds.current_redshift) ... plot_specs.append(dict(linestyle="--", alpha=0.7)) >>> plot = ProfilePlot.from_profiles( ... profiles, labels=labels, plot_specs=plot_specs ... ) >>> plot.save() """ if labels is not None and len(profiles) != len(labels): raise RuntimeError("Profiles list and labels list must be the same size.") if plot_specs is not None and len(plot_specs) != len(profiles): raise RuntimeError( "Profiles list and plot_specs list must be the same size." ) obj = cls.__new__(cls) profiles = list(always_iterable(profiles)) return cls._initialize_instance( obj, profiles[0].data_source, profiles, labels, plot_specs, y_log ) @invalidate_plot def set_line_property(self, property, value, index=None): r""" Set properties for one or all lines to be plotted. Parameters ---------- property : str The line property to be set. value : str, int, float The value to set for the line property. index : int The index of the profile in the list of profiles to be changed. If None, change all plotted lines. Default : None. Examples -------- Change all the lines in a plot plot.set_line_property("linestyle", "-") Change a single line. plot.set_line_property("linewidth", 4, index=0) """ if index is None: specs = self.plot_spec else: specs = [self.plot_spec[index]] for spec in specs: spec[property] = value return self @invalidate_plot def set_log(self, field, log): """set a field to log or linear.
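For example (a sketch, assuming ``plot`` is an existing ProfilePlot), ``plot.set_log(("gas", "temperature"), False)`` switches a single field to linear scaling, while ``plot.set_log("all", True)`` applies log scaling to the bin field and every profiled field.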
Parameters ---------- field : string the field to set a transform log : boolean Log on/off. """ if field == "all": self.x_log = log for field in list(self.profiles[0].field_data.keys()): self.y_log[field] = log else: (field,) = self.profiles[0].data_source._determine_fields([field]) if field == self.profiles[0].x_field: self.x_log = log elif field in self.profiles[0].field_data: self.y_log[field] = log else: raise KeyError(f"Field {field} not in profile plot!") return self @invalidate_plot def set_ylabel(self, field, label): """Sets a new ylabel for the specified fields Parameters ---------- field : string The name of the field that is to be changed. label : string The label to be placed on the y-axis """ if field == "all": for field in self.profiles[0].field_data: self.y_title[field] = label else: (field,) = self.profiles[0].data_source._determine_fields([field]) if field in self.profiles[0].field_data: self.y_title[field] = label else: raise KeyError(f"Field {field} not in profile plot!") return self @invalidate_plot def set_xlabel(self, label): """Sets a new xlabel for all profiles Parameters ---------- label : string The label to be placed on the x-axis """ self.x_title = label return self @invalidate_plot def set_unit(self, field, unit): """Sets a new unit for the requested field Parameters ---------- field : string The name of the field that is to be changed. unit : string or Unit object The name of the new unit. """ fd = self.profiles[0].data_source._determine_fields(field)[0] for profile in self.profiles: if fd == profile.x_field: profile.set_x_unit(unit) elif fd[1] in self.profiles[0].field_map: profile.set_field_unit(field, unit) else: raise KeyError(f"Field {field} not in profile plot!") return self @invalidate_plot def set_xlim(self, xmin=None, xmax=None): """Sets the limits of the bin field Parameters ---------- xmin : float or None The new x minimum. Defaults to None, which leaves the xmin unchanged. xmax : float or None The new x maximum. Defaults to None, which leaves the xmax unchanged. Examples -------- >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> pp = yt.ProfilePlot( ... ds.all_data(), ("gas", "density"), ("gas", "temperature") ... ) >>> pp.set_xlim(1e-29, 1e-24) >>> pp.save() """ self._xlim = (xmin, xmax) for i, p in enumerate(self.profiles): if xmin is None: xmi = p.x_bins.min() else: xmi = xmin if xmax is None: xma = p.x_bins.max() else: xma = xmax extrema = {p.x_field: ((xmi, str(p.x.units)), (xma, str(p.x.units)))} units = {p.x_field: str(p.x.units)} if self.x_log is None: logs = None else: logs = {p.x_field: self.x_log} for field in p.field_map.values(): units[field] = str(p.field_data[field].units) self.profiles[i] = create_profile( p.data_source, p.x_field, n_bins=len(p.x_bins) - 1, fields=list(p.field_map.values()), weight_field=p.weight_field, accumulation=p.accumulation, fractional=p.fractional, logs=logs, extrema=extrema, units=units, ) return self @invalidate_plot def set_ylim(self, field, ymin=None, ymax=None): """Sets the plot limits for the specified field we are binning. Parameters ---------- field : string or field tuple The field that we want to adjust the plot limits for. ymin : float or None The new y minimum. Defaults to None, which leaves the ymin unchanged. ymax : float or None The new y maximum. Defaults to None, which leaves the ymax unchanged. Examples -------- >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> pp = yt.ProfilePlot( ... ds.all_data(), ... ("gas", "density"), ... 
[("gas", "temperature"), ("gas", "velocity_x")], ... ) >>> pp.set_ylim(("gas", "temperature"), 1e4, 1e6) >>> pp.save() """ fields = list(self.plots.keys()) if field == "all" else field for profile in self.profiles: for field in profile.data_source._determine_fields(fields): if field in profile.field_map: field = profile.field_map[field] pnh = self.plots[field].norm_handler pnh.vmin = ymin pnh.vmax = ymax # Continue on to the next profile. break return self def _set_font_properties(self): for f in self.plots: self.plots[f]._set_font_properties(self._font_properties, self._font_color) def _get_field_log(self, field_y, profile): yfi = profile.field_info[field_y] if self.x_log is None: x_log = profile.x_log else: x_log = self.x_log y_log = self.y_log.get(field_y, yfi.take_log) scales = {True: "log", False: "linear"} return scales[x_log], scales[y_log] def _get_field_label(self, field, field_info, field_unit, fractional=False): field_unit = field_unit.latex_representation() field_name = field_info.display_name if isinstance(field, tuple): field = field[1] if field_name is None: field_name = field_info.get_latex_display_name() elif field_name.find("$") == -1: field_name = field_name.replace(" ", r"\ ") field_name = r"$\rm{" + field_name + r"}$" if fractional: label = field_name + r"$\rm{\ Probability\ Density}$" elif field_unit is None or field_unit == "": label = field_name else: label = field_name + _get_units_label(field_unit) return label def _get_field_title(self, field_y, profile): field_x = profile.x_field xfi = profile.field_info[field_x] yfi = profile.field_info[field_y] x_unit = profile.x.units y_unit = profile.field_units[field_y] fractional = profile.fractional x_title = self.x_title or self._get_field_label(field_x, xfi, x_unit) y_title = self.y_title.get(field_y, None) or self._get_field_label( field_y, yfi, y_unit, fractional ) return (x_title, y_title) @invalidate_plot def annotate_title(self, title, field="all"): r"""Set a title for the plot. Parameters ---------- title : str The title to add. field : str or list of str The field name for which title needs to be set. Examples -------- >>> # To set title for all the fields: >>> plot.annotate_title("This is a Profile Plot") >>> # To set title for specific fields: >>> plot.annotate_title("Profile Plot for Temperature", ("gas", "temperature")) >>> # Setting same plot title for both the given fields >>> plot.annotate_title( ... "Profile Plot: Temperature-Dark Matter Density", ... [("gas", "temperature"), ("deposit", "dark_matter_density")], ... ) """ fields = list(self.plots.keys()) if field == "all" else field for profile in self.profiles: for field in profile.data_source._determine_fields(fields): if field in profile.field_map: field = profile.field_map[field] self._plot_title[field] = title return self @invalidate_plot def annotate_text(self, xpos=0.0, ypos=0.0, text=None, field="all", **text_kwargs): r"""Allow the user to insert text onto the plot The x-position and y-position must be given as well as the text string. Add *text* to plot at location *xpos*, *ypos* in plot coordinates for the given fields or by default for all fields. (see example below). Parameters ---------- xpos : float Position on plot in x-coordinates. ypos : float Position on plot in y-coordinates. text : str The text to insert onto the plot. field : str or tuple The name of the field to add text to. 
**text_kwargs : dict Extra keyword arguments will be passed to the matplotlib text instance. >>> import yt >>> from yt.units import kpc >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> my_galaxy = ds.disk(ds.domain_center, [0.0, 0.0, 1.0], 10 * kpc, 3 * kpc) >>> plot = yt.ProfilePlot( ... my_galaxy, ("gas", "density"), [("gas", "temperature")] ... ) >>> # Annotate text for all the fields >>> plot.annotate_text(1e-26, 1e5, "This is annotated text in the plot area.") >>> plot.save() >>> # Annotate text for a given field >>> plot.annotate_text(1e-26, 1e5, "Annotated text", ("gas", "temperature")) >>> plot.save() >>> # Annotate text for multiple fields >>> fields = [("gas", "temperature"), ("gas", "density")] >>> plot.annotate_text(1e-26, 1e5, "Annotated text", fields) >>> plot.save() """ fields = list(self.plots.keys()) if field == "all" else field for profile in self.profiles: for field in profile.data_source._determine_fields(fields): if field in profile.field_map: field = profile.field_map[field] self._plot_text[field] = text self._text_xpos[field] = xpos self._text_ypos[field] = ypos self._text_kwargs[field] = text_kwargs return self class PhasePlot(ImagePlotContainer): r""" Create a 2d profile (phase) plot from a data source or from a profile object created with `yt.data_objects.profiles.create_profile`. Given a data object (all_data, region, sphere, etc.), an x field, y field, and z field (or fields), this will create a two-dimensional profile of the average (or total) value of the z field in bins of the x and y fields. Parameters ---------- data_source : YTSelectionContainer Object The data object to be profiled, such as all_data, region, or sphere. If a dataset is passed in instead, an all_data data object is generated internally from the dataset. x_field : str The x binning field for the profile. y_field : str The y binning field for the profile. z_fields : str or list The field or fields to be profiled. weight_field : str The weight field for calculating weighted averages. If None, the profile values are the sum of the field values within the bin. Otherwise, the values are a weighted average. Default : ("gas", "mass") x_bins : int The number of bins in x field for the profile. Default: 128. y_bins : int The number of bins in y field for the profile. Default: 128. accumulation : bool or list of bools If True, the profile values for a bin n are the cumulative sum of all the values from bin 0 to n. If -True, the sum is reversed so that the value for bin n is the cumulative sum from bin N (total bins) to n. A list of values can be given to control the summation in each dimension independently. Default: False. fractional : If True the profile values are divided by the sum of all the profile data such that the profile represents a probability distribution function. fontsize : int Font size for all text in the plot. Default: 18. figure_size : int Size in inches of the image. Default: 8 (8x8) shading : str This argument is directly passed down to matplotlib.axes.Axes.pcolormesh see https://matplotlib.org/3.3.1/gallery/images_contours_and_fields/pcolormesh_grids.html#sphx-glr-gallery-images-contours-and-fields-pcolormesh-grids-py # noqa Default: 'nearest' Examples -------- >>> import yt >>> ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046") >>> ad = ds.all_data() >>> plot = yt.PhasePlot( ... ad, ... ("gas", "density"), ... ("gas", "temperature"), ... [("gas", "mass")], ... weight_field=None, ... ) >>> plot.save() >>> # Change plot properties.
>>> plot.set_cmap(("gas", "mass"), "jet") >>> plot.set_zlim(("gas", "mass"), 1e8, 1e13) >>> plot.annotate_title("This is a phase plot") """ x_log = None y_log = None plot_title = None _plot_valid = False _profile_valid = False _plot_type = "Phase" _xlim = (None, None) _ylim = (None, None) def __init__( self, data_source, x_field, y_field, z_fields, weight_field=("gas", "mass"), x_bins=128, y_bins=128, accumulation=False, fractional=False, fontsize=18, figure_size=8.0, shading="nearest", ): data_source = data_object_or_all_data(data_source) if isinstance(z_fields, tuple): z_fields = [z_fields] z_fields = list(always_iterable(z_fields)) if isinstance(data_source.ds, YTProfileDataset): profile = data_source.ds.profile else: profile = create_profile( data_source, [x_field, y_field], z_fields, n_bins=[x_bins, y_bins], weight_field=weight_field, accumulation=accumulation, fractional=fractional, ) type(self)._initialize_instance( self, data_source, profile, fontsize, figure_size, shading ) @classmethod def _initialize_instance( cls, obj, data_source, profile, fontsize, figure_size, shading ): obj.plot_title = {} obj.z_log = {} obj.z_title = {} obj._initfinished = False obj.x_log = None obj.y_log = None obj._plot_text = {} obj._text_xpos = {} obj._text_ypos = {} obj._text_kwargs = {} obj._profile = profile obj._shading = shading obj._profile_valid = True obj._xlim = (None, None) obj._ylim = (None, None) super(PhasePlot, obj).__init__(data_source, figure_size, fontsize) obj._setup_plots() obj._plot_valid = False # see https://github.com/yt-project/yt/issues/4489 obj._initfinished = True return obj def _get_field_title(self, field_z, profile): field_x = profile.x_field field_y = profile.y_field xfi = profile.field_info[field_x] yfi = profile.field_info[field_y] zfi = profile.field_info[field_z] x_unit = profile.x.units y_unit = profile.y.units z_unit = profile.field_units[field_z] fractional = profile.fractional x_label, y_label, z_label = self._get_axes_labels(field_z) x_title = x_label or self._get_field_label(field_x, xfi, x_unit) y_title = y_label or self._get_field_label(field_y, yfi, y_unit) z_title = z_label or self._get_field_label(field_z, zfi, z_unit, fractional) return (x_title, y_title, z_title) def _get_field_label(self, field, field_info, field_unit, fractional=False): field_unit = field_unit.latex_representation() field_name = field_info.display_name if isinstance(field, tuple): field = field[1] if field_name is None: field_name = field_info.get_latex_display_name() elif field_name.find("$") == -1: field_name = field_name.replace(" ", r"\ ") field_name = r"$\rm{" + field_name + r"}$" if fractional: label = field_name + r"$\rm{\ Probability\ Density}$" elif field_unit is None or field_unit == "": label = field_name else: label = field_name + _get_units_label(field_unit) return label def _get_field_log(self, field_z, profile): zfi = profile.field_info[field_z] if self.x_log is None: x_log = profile.x_log else: x_log = self.x_log if self.y_log is None: y_log = profile.y_log else: y_log = self.y_log if field_z in self.z_log: z_log = self.z_log[field_z] else: z_log = zfi.take_log scales = {True: "log", False: "linear"} return scales[x_log], scales[y_log], scales[z_log] @property def profile(self): if not self._profile_valid: self._recreate_profile() return self._profile @property def fields(self): return list(self.plots.keys()) def _setup_plots(self): if self._plot_valid: return for f, data in self.profile.items(): if f in self.plots: pnh = self.plots[f].norm_handler cbh = 
self.plots[f].colorbar_handler draw_axes = self.plots[f]._draw_axes if self.plots[f].figure is not None: fig = self.plots[f].figure axes = self.plots[f].axes cax = self.plots[f].cax else: fig = None axes = None cax = None else: pnh, cbh = self._get_default_handlers( field=f, default_display_units=self.profile[f].units ) fig = None axes = None cax = None draw_axes = True x_scale, y_scale, z_scale = self._get_field_log(f, self.profile) x_title, y_title, z_title = self._get_field_title(f, self.profile) font_size = self._font_properties.get_size() f = self.profile.data_source._determine_fields(f)[0] # if this is a Particle Phase Plot AND if we using a single color, # override the colorbar here. splat_color = getattr(self, "splat_color", None) if splat_color is not None: cbh.cmap = matplotlib.colors.ListedColormap(splat_color, "dummy") masked_data = data.copy() masked_data[~self.profile.used] = np.nan self.plots[f] = PhasePlotMPL( self.profile.x, self.profile.y, masked_data, x_scale, y_scale, self.figure_size, font_size, fig, axes, cax, shading=self._shading, norm_handler=pnh, colorbar_handler=cbh, ) self.plots[f]._toggle_axes(draw_axes) self.plots[f]._toggle_colorbar(cbh.draw_cbar) self.plots[f].axes.xaxis.set_label_text(x_title) self.plots[f].axes.yaxis.set_label_text(y_title) self.plots[f].cax.yaxis.set_label_text(z_title) self.plots[f].axes.set_xlim(self._xlim) self.plots[f].axes.set_ylim(self._ylim) if f in self._plot_text: self.plots[f].axes.text( self._text_xpos[f], self._text_ypos[f], self._plot_text[f], fontproperties=self._font_properties, **self._text_kwargs[f], ) if f in self.plot_title: self.plots[f].axes.set_title(self.plot_title[f]) # x-y axes minorticks if f not in self._minorticks: self._minorticks[f] = True if self._minorticks[f]: self.plots[f].axes.minorticks_on() else: self.plots[f].axes.minorticks_off() self._set_font_properties() # if this is a particle plot with one color only, hide the cbar here if hasattr(self, "use_cbar") and not self.use_cbar: self.plots[f].hide_colorbar() self._plot_valid = True @classmethod def from_profile(cls, profile, fontsize=18, figure_size=8.0, shading="nearest"): r""" Instantiate a PhasePlot object from a profile object created with :func:`~yt.data_objects.profiles.create_profile`. Parameters ---------- profile : An instance of :class:`~yt.data_objects.profiles.ProfileND` A single profile object. fontsize : float The fontsize to use, in points. figure_size : float The figure size to use, in inches. shading : str This argument is directly passed down to matplotlib.axes.Axes.pcolormesh see https://matplotlib.org/3.3.1/gallery/images_contours_and_fields/pcolormesh_grids.html#sphx-glr-gallery-images-contours-and-fields-pcolormesh-grids-py # noqa Default: 'nearest' Examples -------- >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> extrema = { ... ("gas", "density"): (1e-31, 1e-24), ... ("gas", "temperature"): (1e1, 1e8), ... ("gas", "mass"): (1e-6, 1e-1), ... } >>> profile = yt.create_profile( ... ds.all_data(), ... [("gas", "density"), ("gas", "temperature")], ... fields=[("gas", "mass")], ... extrema=extrema, ... fractional=True, ... 
) >>> ph = yt.PhasePlot.from_profile(profile) >>> ph.save() """ obj = cls.__new__(cls) data_source = profile.data_source return cls._initialize_instance( obj, data_source, profile, fontsize, figure_size, shading ) def annotate_text(self, xpos=0.0, ypos=0.0, text=None, **text_kwargs): r""" Allow the user to insert text onto the plot The x-position and y-position must be given as well as the text string. Add *text* to plot at location *xpos*, *ypos* in plot coordinates (see example below). Parameters ---------- xpos : float Position on plot in x-coordinates. ypos : float Position on plot in y-coordinates. text : str The text to insert onto the plot. **text_kwargs : dict Extra keyword arguments will be passed to the matplotlib text instance. >>> plot.annotate_text(1e-15, 5e4, "Hello YT") """ for f in self.data_source._determine_fields(list(self.plots.keys())): if self.plots[f].figure is not None and text is not None: self.plots[f].axes.text( xpos, ypos, text, fontproperties=self._font_properties, **text_kwargs, ) self._plot_text[f] = text self._text_xpos[f] = xpos self._text_ypos[f] = ypos self._text_kwargs[f] = text_kwargs return self @validate_plot def save(self, name: str | None = None, suffix: str | None = None, mpl_kwargs=None): r""" Saves a 2d profile plot. Parameters ---------- name : str, optional The output file keyword. suffix : string, optional Specify the image type by its suffix. If not specified, the output type will be inferred from the filename. Defaults to '.png'. mpl_kwargs : dict, optional A dict of keyword arguments to be passed to matplotlib. >>> plot.save(mpl_kwargs={"bbox_inches": "tight"}) """ names = [] if not self._plot_valid: self._setup_plots() if mpl_kwargs is None: mpl_kwargs = {} if name is None: name = str(self.profile.ds) name = os.path.expanduser(name) xfn = self.profile.x_field yfn = self.profile.y_field if isinstance(xfn, tuple): xfn = xfn[1] if isinstance(yfn, tuple): yfn = yfn[1] for f in self.profile.field_data: _f = f if isinstance(f, tuple): _f = _f[1] middle = f"2d-Profile_{xfn}_{yfn}_{_f}" splitname = os.path.split(name) if splitname[0] != "" and not os.path.isdir(splitname[0]): os.makedirs(splitname[0]) if os.path.isdir(name) and name != str(self.profile.ds): name = name + (os.sep if name[-1] != os.sep else "") name += str(self.profile.ds) new_name = validate_image_name(name, suffix) if new_name == name: for v in self.plots.values(): out_name = v.save(name, mpl_kwargs) names.append(out_name) return names name = new_name prefix, suffix = os.path.splitext(name) name = f"{prefix}_{middle}{suffix}" names.append(name) self.plots[f].save(name, mpl_kwargs) return names @invalidate_plot def set_title(self, field, title): """Set a title for the plot. Parameters ---------- field : str The z field of the plot to add the title. title : str The title to add. Examples -------- >>> plot.set_title(("gas", "mass"), "This is a phase plot") """ self.plot_title[self.data_source._determine_fields(field)[0]] = title return self @invalidate_plot def annotate_title(self, title): """Set a title for the plot. Parameters ---------- title : str The title to add. Examples -------- >>> plot.annotate_title("This is a phase plot") """ for f in self._profile.field_data: if isinstance(f, tuple): f = f[1] self.plot_title[self.data_source._determine_fields(f)[0]] = title return self @invalidate_plot def reset_plot(self): self.plots = {} return self @invalidate_plot def set_log(self, field, log): """set a field to log or linear.
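For example (a sketch, assuming ``plot`` is an existing PhasePlot), ``plot.set_log(("gas", "density"), False)`` switches that field to linear scaling, while ``plot.set_log("all", True)`` applies log scaling to both bin fields and every profiled field, and forces the underlying profile to be rebuilt.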
Parameters ---------- field : string the field to set a transform log : boolean Log on/off. """ p = self._profile if field == "all": self.x_log = log self.y_log = log for field in p.field_data: self.z_log[field] = log self._profile_valid = False else: (field,) = self.profile.data_source._determine_fields([field]) if field == p.x_field: self.x_log = log self._profile_valid = False elif field == p.y_field: self.y_log = log self._profile_valid = False elif field in p.field_data: super().set_log(field, log) else: raise KeyError(f"Field {field} not in phase plot!") return self @invalidate_plot def set_unit(self, field, unit): """Sets a new unit for the requested field Parameters ---------- field : string The name of the field that is to be changed. unit : string or Unit object The name of the new unit. """ fd = self.data_source._determine_fields(field)[0] if fd == self.profile.x_field: self.profile.set_x_unit(unit) elif fd == self.profile.y_field: self.profile.set_y_unit(unit) elif fd in self.profile.field_data.keys(): self.profile.set_field_unit(field, unit) self.plots[field].norm_handler.display_units = unit else: raise KeyError(f"Field {field} not in phase plot!") return self @invalidate_plot @invalidate_profile def set_xlim(self, xmin=None, xmax=None): """Sets the limits of the x bin field Parameters ---------- xmin : float or None The new x minimum in the current x-axis units. Defaults to None, which leaves the xmin unchanged. xmax : float or None The new x maximum in the current x-axis units. Defaults to None, which leaves the xmax unchanged. Examples -------- >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> pp = yt.PhasePlot(ds.all_data(), "density", "temperature", ("gas", "mass")) >>> pp.set_xlim(1e-29, 1e-24) >>> pp.save() """ p = self._profile if xmin is None: xmin = p.x_bins.min() elif not hasattr(xmin, "units"): xmin = self.ds.quan(xmin, p.x_bins.units) if xmax is None: xmax = p.x_bins.max() elif not hasattr(xmax, "units"): xmax = self.ds.quan(xmax, p.x_bins.units) self._xlim = (xmin, xmax) return self @invalidate_plot @invalidate_profile def set_ylim(self, ymin=None, ymax=None): """Sets the plot limits for the y bin field. Parameters ---------- ymin : float or None The new y minimum in the current y-axis units. Defaults to None, which leaves the ymin unchanged. ymax : float or None The new y maximum in the current y-axis units. Defaults to None, which leaves the ymax unchanged. Examples -------- >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> pp = yt.PhasePlot( ... ds.all_data(), ... ("gas", "density"), ... ("gas", "temperature"), ... ("gas", "mass"), ... 
) >>> pp.set_ylim(1e4, 1e6) >>> pp.save() """ p = self._profile if ymin is None: ymin = p.y_bins.min() elif not hasattr(ymin, "units"): ymin = self.ds.quan(ymin, p.y_bins.units) if ymax is None: ymax = p.y_bins.max() elif not hasattr(ymax, "units"): ymax = self.ds.quan(ymax, p.y_bins.units) self._ylim = (ymin, ymax) return self def _recreate_profile(self): p = self._profile units = {p.x_field: str(p.x.units), p.y_field: str(p.y.units)} zunits = {field: str(p.field_units[field]) for field in p.field_units} extrema = {p.x_field: self._xlim, p.y_field: self._ylim} if self.x_log is not None or self.y_log is not None: logs = {} else: logs = None if self.x_log is not None: logs[p.x_field] = self.x_log if self.y_log is not None: logs[p.y_field] = self.y_log deposition = getattr(p, "deposition", None) additional_kwargs = { "accumulation": p.accumulation, "fractional": p.fractional, "deposition": deposition, } self._profile = create_profile( p.data_source, [p.x_field, p.y_field], list(p.field_map.values()), n_bins=[len(p.x_bins) - 1, len(p.y_bins) - 1], weight_field=p.weight_field, units=units, extrema=extrema, logs=logs, **additional_kwargs, ) for field in zunits: self._profile.set_field_unit(field, zunits[field]) self._profile_valid = True class PhasePlotMPL(ImagePlotMPL): """A container for a single matplotlib figure and axes for a PhasePlot""" def __init__( self, x_data, y_data, data, x_scale, y_scale, figure_size, fontsize, figure, axes, cax, shading="nearest", *, norm_handler: NormHandler, colorbar_handler: ColorbarHandler, ): self._initfinished = False self._shading = shading self._setup_layout_constraints(figure_size, fontsize) # this line is added purely to prevent exact image comparison tests # from failing, but eventually we should embrace the change and # use similar values for PhasePlotMPL and WindowPlotMPL self._ax_text_size[0] *= 1.1 / 1.2 # TODO: remove this super().__init__( figure=figure, axes=axes, cax=cax, norm_handler=norm_handler, colorbar_handler=colorbar_handler, ) self._init_image(x_data, y_data, data, x_scale, y_scale) self._initfinished = True def _init_image( self, x_data, y_data, image_data, x_scale, y_scale, ): """Store the output of pcolormesh in the image variable""" norm = self.norm_handler.get_norm(image_data) self.image = None self.cb = None self.image = self.axes.pcolormesh( np.array(x_data), np.array(y_data), np.array(image_data.T), norm=norm, cmap=self.colorbar_handler.cmap, shading=self._shading, ) self._set_axes() self.axes.set_xscale(x_scale) self.axes.set_yscale(y_scale) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/streamlines.py0000644000175100001770000002115114714401662020022 0ustar00runnerdockerimport numpy as np from yt.data_objects.construction_data_containers import YTStreamline from yt.funcs import get_pbar from yt.units.yt_array import YTArray from yt.utilities.amr_kdtree.api import AMRKDTree from yt.utilities.parallel_tools.parallel_analysis_interface import ( ParallelAnalysisInterface, parallel_passthrough, ) def sanitize_length(length, ds): # Ensure that lengths passed in with units are returned as code_length # magnitudes without units if isinstance(length, YTArray): return ds.arr(length).in_units("code_length").d else: return length class Streamlines(ParallelAnalysisInterface): r"""A collection of streamlines that flow through the volume. The Streamlines object contains a collection of streamlines defined as paths that are parallel to a specified vector field.
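Each streamline is advanced with a fixed step ``dx`` through an AMR kd-tree: positions are integrated brick by brick (see ``integrate_through_volume``) until they leave the domain or the requested integration ``length`` has been covered.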
Parameters ---------- ds : ~yt.data_objects.static_output.Dataset The dataset through which the streamlines are computed. positions : array_like An array of initial starting positions of the streamlines. xfield : str or tuple of str, optional The x component of the vector field to be streamlined. Default: 'velocity_x' yfield : str or tuple of str, optional The y component of the vector field to be streamlined. Default: 'velocity_y' zfield : str or tuple of str, optional The z component of the vector field to be streamlined. Default: 'velocity_z' volume : `yt.utilities.amr_kdtree.api.AMRKDTree`, optional The volume to be streamlined. Can be specified for finer-grained control, but otherwise will be automatically generated. At this point it must use the AMRKDTree. Default: None dx : float, optional Optionally specify the step size during the integration. Default: minimum dx length : float, optional Optionally specify the length of integration. Default: np.max(self.ds.domain_right_edge-self.ds.domain_left_edge) direction : int or float, optional Specifies the direction of integration. Only the sign of this value has an effect; the magnitude does not. get_magnitude : bool, optional Specifies whether the Streamlines.magnitudes array should be filled with the magnitude of the vector field at each point in the streamline. This incurs roughly a 10% performance hit. Default: False Examples -------- >>> import matplotlib.pyplot as plt >>> import numpy as np >>> from mpl_toolkits.mplot3d import Axes3D >>> import yt >>> from yt.visualization.api import Streamlines >>> # Load the dataset and set some parameters >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> c = np.array([0.5] * 3) >>> N = 100 >>> scale = 1.0 >>> pos_dx = np.random.random((N, 3)) * scale - scale / 2.0 >>> pos = c + pos_dx >>> # Define and construct streamlines >>> streamlines = Streamlines( ... ds, pos, "velocity_x", "velocity_y", "velocity_z", length=1.0 ... ) >>> streamlines.integrate_through_volume() >>> # Make a 3D plot of the streamlines and save it to disk >>> fig = plt.figure() >>> ax = Axes3D(fig) >>> for stream in streamlines.streamlines: ... stream = stream[np.all(stream != 0.0, axis=1)] ... ax.plot3D(stream[:, 0], stream[:, 1], stream[:, 2], alpha=0.1) >>> plt.savefig("streamlines.png") """ def __init__( self, ds, positions, xfield="velocity_x", yfield="velocity_y", zfield="velocity_z", volume=None, dx=None, length=None, direction=1, get_magnitude=False, ): ParallelAnalysisInterface.__init__(self) self.ds = ds self.start_positions = sanitize_length(positions, ds) self.N = self.start_positions.shape[0] # I need a data object to resolve the field names to field tuples # via _determine_fields() ad = self.ds.all_data() self.xfield = ad._determine_fields(xfield)[0] self.yfield = ad._determine_fields(yfield)[0] self.zfield = ad._determine_fields(zfield)[0] self.get_magnitude = get_magnitude self.direction = np.sign(direction) if volume is None: volume = AMRKDTree(self.ds) volume.set_fields( [self.xfield, self.yfield, self.zfield], [False, False, False], False ) volume.join_parallel_trees() self.volume = volume if dx is None: dx = self.ds.index.get_smallest_dx() self.dx = sanitize_length(dx, ds) if length is None: length = np.max(self.ds.domain_right_edge - self.ds.domain_left_edge) self.length = sanitize_length(length, ds) self.steps = int(self.length / self.dx) + 1 # Fix up the dx.
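# Recomputing dx below guarantees that an integer number of steps exactly # spans the requested integration length (steps * dx == length).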
self.dx = 1.0 * self.length / self.steps self.streamlines = np.zeros((self.N, self.steps, 3), dtype="float64") self.magnitudes = None if self.get_magnitude: self.magnitudes = np.zeros((self.N, self.steps), dtype="float64") def integrate_through_volume(self): nprocs = self.comm.size my_rank = self.comm.rank self.streamlines[my_rank::nprocs, 0, :] = self.start_positions[my_rank::nprocs] pbar = get_pbar("Streamlining", self.N) for i, stream in enumerate(self.streamlines[my_rank::nprocs]): thismag = None if self.get_magnitude: thismag = self.magnitudes[i, :] step = self.steps while step > 1: this_node = self.volume.locate_node(stream[-step, :]) step = self._integrate_through_brick( this_node, stream, step, mag=thismag ) pbar.update(i + 1) pbar.finish() self._finalize_parallel(None) self.streamlines = self.ds.arr(self.streamlines, "code_length") if self.get_magnitude: self.magnitudes = self.ds.arr( self.magnitudes, self.ds.field_info[self.xfield].units ) @parallel_passthrough def _finalize_parallel(self, data): self.streamlines = self.comm.mpi_allreduce(self.streamlines, op="sum") if self.get_magnitude: self.magnitudes = self.comm.mpi_allreduce(self.magnitudes, op="sum") def _integrate_through_brick(self, node, stream, step, periodic=False, mag=None): LE = self.ds.domain_left_edge.d RE = self.ds.domain_right_edge.d while step > 1: self.volume.get_brick_data(node) brick = node.data stream[-step + 1] = stream[-step] if mag is None: brick.integrate_streamline( stream[-step + 1], self.direction * self.dx, None ) else: marr = [mag] brick.integrate_streamline( stream[-step + 1], self.direction * self.dx, marr ) mag[-step + 1] = marr[0] cur_stream = stream[-step + 1, :] if np.sum(np.logical_or(cur_stream < LE, cur_stream >= RE)): return 0 nLE = node.get_left_edge() nRE = node.get_right_edge() if np.sum(np.logical_or(cur_stream < nLE, cur_stream >= nRE)): return step - 1 step -= 1 return step def clean_streamlines(self): # Drop the zero padding left behind by streamlines that exited the # domain before using all of their steps. temp = np.empty(self.N, dtype="object") temp2 = np.empty(self.N, dtype="object") for i, stream in enumerate(self.streamlines): mask = np.all(stream != 0.0, axis=1) temp[i] = stream[mask] if self.magnitudes is not None: temp2[i] = self.magnitudes[i, mask] self.streamlines = temp if self.magnitudes is not None: self.magnitudes = temp2 def path(self, streamline_id): """ Returns a YTSelectionContainer1D object defined by a streamline. Parameters ---------- streamline_id : int The index of the streamline in the Streamlines object that will define the YTSelectionContainer1D object.
Returns ------- A YTStreamline YTSelectionContainer1D object Examples -------- >>> from yt.visualization.api import Streamlines >>> streamlines = Streamlines(ds, [0.5] * 3) >>> streamlines.integrate_through_volume() >>> stream = streamlines.path(0) >>> fig, ax = plt.subplots() >>> ax.set_yscale("log") >>> ax.plot(stream["t"], stream["gas", "density"], "-x") """ return YTStreamline( self.streamlines[streamline_id], ds=self.ds, length=self.length ) ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.415154 yt-4.4.0/yt/visualization/tests/0000755000175100001770000000000014714401715016263 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/__init__.py0000644000175100001770000000000014714401662020363 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/test_callbacks.py0000644000175100001770000012201014714401662021610 0ustar00runnerdockerimport contextlib import inspect import shutil import tempfile from numpy.testing import assert_array_equal, assert_raises import yt.units as u from yt.config import ytcfg from yt.loaders import load from yt.testing import ( assert_fname, fake_amr_ds, fake_hexahedral_ds, fake_random_ds, fake_tetrahedral_ds, requires_file, requires_module, ) from yt.utilities.answer_testing.framework import ( PlotWindowAttributeTest, data_dir_load, requires_ds, ) from yt.utilities.exceptions import YTDataTypeUnsupported, YTPlotCallbackError from yt.visualization.api import OffAxisSlicePlot, ProjectionPlot, SlicePlot from yt.visualization.plot_container import accepts_all_fields # This is a very simple set of tests that verifies that each callback runs. # They only check that a callback functions without an error; they do not # check that it produces correct results. Note that test_axis_manipulations # is one exception, which is a full answer test.
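# For orientation, most tests below follow one pattern (a minimal sketch, not # itself a test): build a small in-memory dataset, attach a single callback to # a plot, and check that saving it produces a file: # ds = fake_amr_ds(fields=("density",), units=("g/cm**3",)) # p = SlicePlot(ds, "z", ("gas", "density")) # p.annotate_timestamp() # assert_fname(p.save(prefix)[0]) # In the checklist that follows, an X marks callbacks that are already covered.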
# These are the callbacks still to test: # # X velocity # X magnetic_field # X quiver # X contour # X grids # X streamlines # units # X line # cquiver # clumps # X arrow # X marker # X sphere # hop_circles # hop_particles # coord_axes # X text # X particles # title # flash_ray_data # X timestamp # X scale # material_boundary # X ray # X line_integral_convolution # # X flip_horizontal, flip_vertical, swap_axes (all in test_axis_manipulations) # cylindrical data for callback test cyl_2d = "WDMerger_hdf5_chk_1000/WDMerger_hdf5_chk_1000.hdf5" cyl_3d = "MHD_Cyl3d_hdf5_plt_cnt_0100/MHD_Cyl3d_hdf5_plt_cnt_0100.hdf5" @contextlib.contextmanager def _cleanup_fname(): tmpdir = tempfile.mkdtemp() yield tmpdir shutil.rmtree(tmpdir) def test_method_signature(): ds = fake_amr_ds( fields=[("gas", "density"), ("gas", "velocity_x"), ("gas", "velocity_y")], units=["g/cm**3", "m/s", "m/s"], ) p = SlicePlot(ds, "z", ("gas", "density")) sig = inspect.signature(p.annotate_velocity) # checking the first few arguments rather than the whole signature # we just want to validate that method wrapping works assert list(sig.parameters.keys())[:4] == [ "factor", "scale", "scale_units", "normalize", ] def test_init_signature_error_callback(): ds = fake_amr_ds( fields=[("gas", "density"), ("gas", "velocity_x"), ("gas", "velocity_y")], units=["g/cm**3", "m/s", "m/s"], ) p = SlicePlot(ds, "z", ("gas", "density")) # annotate_velocity accepts only one positional argument assert_raises(TypeError, p.annotate_velocity, 1, 2, 3) def check_axis_manipulation(plot_obj, prefix): # convenience function for testing functionality of axis manipulation # callbacks. Can use in any of the other test functions. # test individual callbacks for cb in ("swap_axes", "flip_horizontal", "flip_vertical"): callback_handle = getattr(plot_obj, cb) callback_handle() # toggles on for axis operation assert_fname(plot_obj.save(prefix)[0]) callback_handle() # toggle off # test all at once for cb in ("swap_axes", "flip_horizontal", "flip_vertical"): callback_handle = getattr(plot_obj, cb) callback_handle() assert_fname(plot_obj.save(prefix)[0]) def test_timestamp_callback(): with _cleanup_fname() as prefix: ax = "z" vector = [1.0, 1.0, 1.0] ds = fake_amr_ds(fields=("density",), units=("g/cm**3",)) p = ProjectionPlot(ds, ax, ("gas", "density")) p.annotate_timestamp() assert_fname(p.save(prefix)[0]) p = SlicePlot(ds, ax, ("gas", "density")) p.annotate_timestamp() assert_fname(p.save(prefix)[0]) p = OffAxisSlicePlot(ds, vector, ("gas", "density")) p.annotate_timestamp() assert_fname(p.save(prefix)[0]) # Now we'll check a few additional minor things p = SlicePlot(ds, "x", ("gas", "density")) p.annotate_timestamp(corner="lower_right", redshift=True, draw_inset_box=True) p.save(prefix) with _cleanup_fname() as prefix: ds = fake_amr_ds(fields=("density",), units=("g/cm**3",), geometry="spherical") p = ProjectionPlot(ds, "r", ("gas", "density")) p.annotate_timestamp(coord_system="axis") assert_fname(p.save(prefix)[0]) def test_timestamp_callback_code_units(): # see https://github.com/yt-project/yt/issues/3869 with _cleanup_fname() as prefix: ds = fake_random_ds(2, unit_system="code") p = SlicePlot(ds, "z", ("gas", "density")) p.annotate_timestamp() assert_fname(p.save(prefix)[0]) def test_scale_callback(): with _cleanup_fname() as prefix: ax = "z" vector = [1.0, 1.0, 1.0] ds = fake_amr_ds(fields=("density",), units=("g/cm**3",)) p = ProjectionPlot(ds, ax, ("gas", "density")) p.annotate_scale() assert_fname(p.save(prefix)[0]) p = ProjectionPlot(ds, ax, ("gas", 
"density"), width=(0.5, 1.0)) p.annotate_scale() assert_fname(p.save(prefix)[0]) p = ProjectionPlot(ds, ax, ("gas", "density"), width=(1.0, 1.5)) p.annotate_scale() assert_fname(p.save(prefix)[0]) p = SlicePlot(ds, ax, ("gas", "density")) p.annotate_scale() assert_fname(p.save(prefix)[0]) p = OffAxisSlicePlot(ds, vector, ("gas", "density")) p.annotate_scale() assert_fname(p.save(prefix)[0]) # Now we'll check a few additional minor things p = SlicePlot(ds, "x", ("gas", "density")) p.annotate_scale(corner="upper_right", coeff=10.0, unit="kpc") assert_fname(p.save(prefix)[0]) p = SlicePlot(ds, "x", ("gas", "density")) p.annotate_scale(text_args={"size": 24}) assert_fname(p.save(prefix)[0]) p = SlicePlot(ds, "x", ("gas", "density")) p.annotate_scale(text_args={"font": 24}) assert_raises(YTPlotCallbackError) with _cleanup_fname() as prefix: ds = fake_amr_ds(fields=("density",), units=("g/cm**3",), geometry="spherical") p = ProjectionPlot(ds, "r", ("gas", "density")) assert_raises(YTDataTypeUnsupported, p.annotate_scale) assert_raises(YTDataTypeUnsupported, p.annotate_scale, coord_system="axis") def test_line_callback(): with _cleanup_fname() as prefix: ax = "z" vector = [1.0, 1.0, 1.0] ds = fake_amr_ds(fields=("density",), units=("g/cm**3",)) p = ProjectionPlot(ds, ax, ("gas", "density")) p.annotate_line([0.1, 0.1, 0.1], [0.5, 0.5, 0.5]) assert_fname(p.save(prefix)[0]) p = SlicePlot(ds, ax, ("gas", "density")) p.annotate_line([0.1, 0.1, 0.1], [0.5, 0.5, 0.5]) assert_fname(p.save(prefix)[0]) p = OffAxisSlicePlot(ds, vector, ("gas", "density")) p.annotate_line([0.1, 0.1, 0.1], [0.5, 0.5, 0.5]) assert_fname(p.save(prefix)[0]) # Now we'll check a few additional minor things p = SlicePlot(ds, "x", ("gas", "density")) p.annotate_line([0.1, 0.1], [0.5, 0.5], coord_system="axis", color="red") p.save(prefix) check_axis_manipulation(p, prefix) with _cleanup_fname() as prefix: ds = fake_amr_ds(fields=("density",), units=("g/cm**3",), geometry="spherical") p = ProjectionPlot(ds, "r", ("gas", "density")) assert_raises( YTDataTypeUnsupported, p.annotate_line, [0.1, 0.1, 0.1], [0.5, 0.5, 0.5] ) p.annotate_line([0.1, 0.1], [0.5, 0.5], coord_system="axis") assert_fname(p.save(prefix)[0]) def test_ray_callback(): with _cleanup_fname() as prefix: ax = "z" vector = [1.0, 1.0, 1.0] ds = fake_amr_ds(fields=("density",), units=("g/cm**3",)) ray = ds.ray((0.1, 0.2, 0.3), (0.6, 0.8, 0.5)) oray = ds.ortho_ray(0, (0.3, 0.4)) p = ProjectionPlot(ds, ax, ("gas", "density")) p.annotate_ray(oray) p.annotate_ray(ray) assert_fname(p.save(prefix)[0]) p = SlicePlot(ds, ax, ("gas", "density")) p.annotate_ray(oray) p.annotate_ray(ray) assert_fname(p.save(prefix)[0]) p = OffAxisSlicePlot(ds, vector, ("gas", "density")) p.annotate_ray(oray) p.annotate_ray(ray) assert_fname(p.save(prefix)[0]) # Now we'll check a few additional minor things p = SlicePlot(ds, "x", ("gas", "density")) p.annotate_ray(oray) p.annotate_ray(ray, color="red") p.save(prefix) check_axis_manipulation(p, prefix) with _cleanup_fname() as prefix: ds = fake_amr_ds(fields=("density",), units=("g/cm**3",), geometry="spherical") ray = ds.ray((0.1, 0.2, 0.3), (0.6, 0.8, 0.5)) oray = ds.ortho_ray(0, (0.3, 0.4)) p = ProjectionPlot(ds, "r", ("gas", "density")) assert_raises(YTDataTypeUnsupported, p.annotate_ray, oray) assert_raises(YTDataTypeUnsupported, p.annotate_ray, ray) def test_arrow_callback(): with _cleanup_fname() as prefix: ax = "z" vector = [1.0, 1.0, 1.0] ds = fake_amr_ds(fields=("density",), units=("g/cm**3",)) p = ProjectionPlot(ds, ax, ("gas", "density")) 
p.annotate_arrow([0.5, 0.5, 0.5]) assert_fname(p.save(prefix)[0]) p = SlicePlot(ds, ax, ("gas", "density")) p.annotate_arrow([0.5, 0.5, 0.5]) assert_fname(p.save(prefix)[0]) p = OffAxisSlicePlot(ds, vector, ("gas", "density")) p.annotate_arrow([0.5, 0.5, 0.5]) assert_fname(p.save(prefix)[0]) # Now we'll check a few additional minor things p = SlicePlot(ds, "x", ("gas", "density")) p.annotate_arrow([0.5, 0.5], coord_system="axis", length=0.05) p.annotate_arrow( [[0.5, 0.6], [0.5, 0.6], [0.5, 0.6]], coord_system="data", length=0.05 ) p.annotate_arrow( [[0.5, 0.6, 0.8], [0.5, 0.6, 0.8], [0.5, 0.6, 0.8]], coord_system="data", length=0.05, ) p.annotate_arrow( [[0.5, 0.6, 0.8], [0.5, 0.6, 0.8]], coord_system="axis", length=0.05 ) p.annotate_arrow( [[0.5, 0.6, 0.8], [0.5, 0.6, 0.8]], coord_system="figure", length=0.05 ) p.annotate_arrow( [[0.5, 0.6, 0.8], [0.5, 0.6, 0.8]], coord_system="plot", length=0.05 ) p.save(prefix) check_axis_manipulation(p, prefix) with _cleanup_fname() as prefix: ds = fake_amr_ds(fields=("density",), units=("g/cm**3",), geometry="spherical") p = ProjectionPlot(ds, "r", ("gas", "density")) assert_raises(YTDataTypeUnsupported, p.annotate_arrow, [0.5, 0.5, 0.5]) p.annotate_arrow([0.5, 0.5], coord_system="axis") assert_fname(p.save(prefix)[0]) def test_marker_callback(): with _cleanup_fname() as prefix: ax = "z" vector = [1.0, 1.0, 1.0] ds = fake_amr_ds(fields=("density",), units=("g/cm**3",)) p = ProjectionPlot(ds, ax, ("gas", "density")) p.annotate_marker([0.5, 0.5, 0.5]) assert_fname(p.save(prefix)[0]) p = SlicePlot(ds, ax, ("gas", "density")) p.annotate_marker([0.5, 0.5, 0.5]) assert_fname(p.save(prefix)[0]) p = OffAxisSlicePlot(ds, vector, ("gas", "density")) p.annotate_marker([0.5, 0.5, 0.5]) assert_fname(p.save(prefix)[0]) # Now we'll check a few additional minor things p = SlicePlot(ds, "x", ("gas", "density")) coord = ds.arr([0.75, 0.75, 0.75], "unitary") coord.convert_to_units("kpc") p.annotate_marker(coord, coord_system="data") p.annotate_marker([0.5, 0.5], coord_system="axis", marker="*") p.annotate_marker([[0.5, 0.6], [0.5, 0.6], [0.5, 0.6]], coord_system="data") p.annotate_marker( [[0.5, 0.6, 0.8], [0.5, 0.6, 0.8], [0.5, 0.6, 0.8]], coord_system="data" ) p.annotate_marker([[0.5, 0.6, 0.8], [0.5, 0.6, 0.8]], coord_system="axis") p.annotate_marker([[0.5, 0.6, 0.8], [0.5, 0.6, 0.8]], coord_system="figure") p.annotate_marker([[0.5, 0.6, 0.8], [0.5, 0.6, 0.8]], coord_system="plot") p.save(prefix) check_axis_manipulation(p, prefix) with _cleanup_fname() as prefix: ds = fake_amr_ds(fields=("density",), units=("g/cm**3",), geometry="spherical") p = ProjectionPlot(ds, "r", ("gas", "density")) assert_raises(YTDataTypeUnsupported, p.annotate_marker, [0.5, 0.5, 0.5]) p.annotate_marker([0.5, 0.5], coord_system="axis") assert_fname(p.save(prefix)[0]) def test_particles_callback(): with _cleanup_fname() as prefix: ax = "z" ds = fake_amr_ds(fields=("density",), units=("g/cm**3",), particles=1) p = ProjectionPlot(ds, ax, ("gas", "density")) p.annotate_particles((10, "Mpc")) assert_fname(p.save(prefix)[0]) p = SlicePlot(ds, ax, ("gas", "density")) p.annotate_particles((10, "Mpc")) assert_fname(p.save(prefix)[0]) # Now we'll check a few additional minor things p = SlicePlot(ds, "x", ("gas", "density")) ad = ds.all_data() p.annotate_particles( (10, "Mpc"), p_size=1.0, col="k", marker="o", stride=1, ptype="all", alpha=1.0, data_source=ad, ) p.save(prefix) check_axis_manipulation(p, prefix) with _cleanup_fname() as prefix: ds = fake_amr_ds(fields=("density",), units=("g/cm**3",), 
geometry="spherical") p = ProjectionPlot(ds, "r", ("gas", "density")) assert_raises(YTDataTypeUnsupported, p.annotate_particles, (10, "Mpc")) def test_sphere_callback(): with _cleanup_fname() as prefix: ax = "z" vector = [1.0, 1.0, 1.0] ds = fake_amr_ds(fields=("density",), units=("g/cm**3",)) p = ProjectionPlot(ds, ax, ("gas", "density")) p.annotate_sphere([0.5, 0.5, 0.5], 0.1) assert_fname(p.save(prefix)[0]) p = SlicePlot(ds, ax, ("gas", "density")) p.annotate_sphere([0.5, 0.5, 0.5], 0.1) assert_fname(p.save(prefix)[0]) p = OffAxisSlicePlot(ds, vector, ("gas", "density")) p.annotate_sphere([0.5, 0.5, 0.5], 0.1) assert_fname(p.save(prefix)[0]) # Now we'll check a few additional minor things p = SlicePlot(ds, "x", ("gas", "density")) p.annotate_sphere([0.5, 0.5], 0.1, coord_system="axis", text="blah") p.save(prefix) with _cleanup_fname() as prefix: ds = fake_amr_ds(fields=("density",), units=("g/cm**3",), geometry="spherical") p = ProjectionPlot(ds, "r", ("gas", "density")) assert_raises( YTDataTypeUnsupported, p.annotate_sphere, [0.5, 0.5, 0.5], 0.1, ) p.annotate_sphere([0.5, 0.5], 0.1, coord_system="axis", text="blah") assert_fname(p.save(prefix)[0]) def test_invalidated_annotations(): # check that annotate_sphere and annotate_arrow succeed on re-running after # an operation that invalidates the plot (set_font_size), see # https://github.com/yt-project/yt/issues/4698 ds = fake_amr_ds(fields=("density",), units=("g/cm**3",)) p = SlicePlot(ds, "z", ("gas", "density")) p.annotate_sphere([0.5, 0.5, 0.5], 0.1) p.set_font_size(24) p.render() p = SlicePlot(ds, "z", ("gas", "density")) p.annotate_arrow([0.5, 0.5, 0.5]) p.set_font_size(24) p.render() def test_text_callback(): with _cleanup_fname() as prefix: ax = "z" vector = [1.0, 1.0, 1.0] ds = fake_amr_ds(fields=("density",), units=("g/cm**3",)) p = ProjectionPlot(ds, ax, ("gas", "density")) p.annotate_text([0.5, 0.5, 0.5], "dinosaurs!") assert_fname(p.save(prefix)[0]) p = SlicePlot(ds, ax, ("gas", "density")) p.annotate_text([0.5, 0.5, 0.5], "dinosaurs!") assert_fname(p.save(prefix)[0]) p = OffAxisSlicePlot(ds, vector, ("gas", "density")) p.annotate_text([0.5, 0.5, 0.5], "dinosaurs!") assert_fname(p.save(prefix)[0]) # Now we'll check a few additional minor things p = SlicePlot(ds, "x", ("gas", "density")) p.annotate_text( [0.5, 0.5], "dinosaurs!", coord_system="axis", text_args={"color": "red"} ) p.save(prefix) with _cleanup_fname() as prefix: ds = fake_amr_ds(fields=("density",), units=("g/cm**3",), geometry="spherical") p = ProjectionPlot(ds, "r", ("gas", "density")) assert_raises( YTDataTypeUnsupported, p.annotate_text, [0.5, 0.5, 0.5], "dinosaurs!" 
) p.annotate_text( [0.5, 0.5], "dinosaurs!", coord_system="axis", text_args={"color": "red"} ) assert_fname(p.save(prefix)[0]) @requires_module("h5py") @requires_file(cyl_2d) @requires_file(cyl_3d) def test_velocity_callback(): with _cleanup_fname() as prefix: ds = fake_amr_ds( fields=("density", "velocity_x", "velocity_y", "velocity_z"), units=("g/cm**3", "cm/s", "cm/s", "cm/s"), ) for ax in "xyz": p = ProjectionPlot( ds, ax, ("gas", "density"), weight_field=("gas", "density") ) p.annotate_velocity() assert_fname(p.save(prefix)[0]) p = SlicePlot(ds, ax, ("gas", "density")) p.annotate_velocity() assert_fname(p.save(prefix)[0]) # Test for OffAxis Slice p = SlicePlot(ds, [1, 1, 0], ("gas", "density"), north_vector=[0, 0, 1]) p.annotate_velocity(factor=40, normalize=True) assert_fname(p.save(prefix)[0]) # Now we'll check a few additional minor things p = SlicePlot(ds, "x", ("gas", "density")) p.annotate_velocity(factor=8, scale=0.5, scale_units="inches", normalize=True) assert_fname(p.save(prefix)[0]) with _cleanup_fname() as prefix: ds = fake_hexahedral_ds(fields=[f"velocity_{ax}" for ax in "xyz"]) sl = SlicePlot(ds, 1, ("connect1", "test")) sl.annotate_velocity() assert_fname(sl.save(prefix)[0]) with _cleanup_fname() as prefix: ds = load(cyl_2d) slc = SlicePlot(ds, "theta", ("gas", "velocity_magnitude")) slc.annotate_velocity() assert_fname(slc.save(prefix)[0]) with _cleanup_fname() as prefix: ds = load(cyl_3d) for ax in ["r", "z", "theta"]: slc = SlicePlot(ds, ax, ("gas", "velocity_magnitude")) slc.annotate_velocity() assert_fname(slc.save(prefix)[0]) slc = ProjectionPlot(ds, ax, ("gas", "velocity_magnitude")) slc.annotate_velocity() assert_fname(slc.save(prefix)[0]) def test_velocity_callback_spherical(): ds = fake_amr_ds( fields=("density", "velocity_r", "velocity_theta", "velocity_phi"), units=("g/cm**3", "cm/s", "cm/s", "cm/s"), geometry="spherical", ) with _cleanup_fname() as prefix: p = ProjectionPlot(ds, "phi", ("stream", "density")) p.annotate_velocity(factor=40, normalize=True) assert_fname(p.save(prefix)[0]) with _cleanup_fname() as prefix: p = ProjectionPlot(ds, "r", ("stream", "density")) p.annotate_velocity(factor=40, normalize=True) assert_raises(NotImplementedError, p.save, prefix) @requires_module("h5py") @requires_file(cyl_2d) @requires_file(cyl_3d) def test_magnetic_callback(): with _cleanup_fname() as prefix: ds = fake_amr_ds( fields=( "density", "magnetic_field_x", "magnetic_field_y", "magnetic_field_z", ), units=( "g/cm**3", "G", "G", "G", ), ) for ax in "xyz": p = ProjectionPlot( ds, ax, ("gas", "density"), weight_field=("gas", "density") ) p.annotate_magnetic_field() assert_fname(p.save(prefix)[0]) p = SlicePlot(ds, ax, ("gas", "density")) p.annotate_magnetic_field() assert_fname(p.save(prefix)[0]) # Test for OffAxis Slice p = SlicePlot(ds, [1, 1, 0], ("gas", "density"), north_vector=[0, 0, 1]) p.annotate_magnetic_field(factor=40, normalize=True) assert_fname(p.save(prefix)[0]) # Now we'll check a few additional minor things p = SlicePlot(ds, "x", ("gas", "density")) p.annotate_magnetic_field( factor=8, scale=0.5, scale_units="inches", normalize=True ) assert_fname(p.save(prefix)[0]) with _cleanup_fname() as prefix: ds = load(cyl_2d) slc = SlicePlot(ds, "theta", ("gas", "magnetic_field_strength")) slc.annotate_magnetic_field() assert_fname(slc.save(prefix)[0]) with _cleanup_fname() as prefix: ds = load(cyl_3d) for ax in ["r", "z", "theta"]: slc = SlicePlot(ds, ax, ("gas", "magnetic_field_strength")) slc.annotate_magnetic_field() assert_fname(slc.save(prefix)[0]) slc 
= ProjectionPlot(ds, ax, ("gas", "magnetic_field_strength")) slc.annotate_magnetic_field() assert_fname(slc.save(prefix)[0]) check_axis_manipulation(slc, prefix) # only test the last axis with _cleanup_fname() as prefix: ds = fake_amr_ds( fields=( "density", "magnetic_field_r", "magnetic_field_theta", "magnetic_field_phi", ), units=( "g/cm**3", "G", "G", "G", ), geometry="spherical", ) p = ProjectionPlot(ds, "phi", ("gas", "density")) p.annotate_magnetic_field( factor=8, scale=0.5, scale_units="inches", normalize=True ) assert_fname(p.save(prefix)[0]) p = ProjectionPlot(ds, "theta", ("gas", "density")) p.annotate_magnetic_field( factor=8, scale=0.5, scale_units="inches", normalize=True ) assert_fname(p.save(prefix)[0]) p = ProjectionPlot(ds, "r", ("gas", "density")) p.annotate_magnetic_field( factor=8, scale=0.5, scale_units="inches", normalize=True ) assert_raises(NotImplementedError, p.save, prefix) @requires_module("h5py") @requires_file(cyl_2d) @requires_file(cyl_3d) def test_quiver_callback(): with _cleanup_fname() as prefix: ds = fake_amr_ds( fields=("density", "velocity_x", "velocity_y", "velocity_z"), units=("g/cm**3", "cm/s", "cm/s", "cm/s"), ) for ax in "xyz": p = ProjectionPlot(ds, ax, ("gas", "density")) p.annotate_quiver(("gas", "velocity_x"), ("gas", "velocity_y")) assert_fname(p.save(prefix)[0]) p = ProjectionPlot( ds, ax, ("gas", "density"), weight_field=("gas", "density") ) p.annotate_quiver(("gas", "velocity_x"), ("gas", "velocity_y")) assert_fname(p.save(prefix)[0]) p = SlicePlot(ds, ax, ("gas", "density")) p.annotate_quiver(("gas", "velocity_x"), ("gas", "velocity_y")) assert_fname(p.save(prefix)[0]) # Now we'll check a few additional minor things p = SlicePlot(ds, "x", ("gas", "density")) p.annotate_quiver( ("gas", "velocity_x"), ("gas", "velocity_y"), factor=8, scale=0.5, scale_units="inches", normalize=True, bv_x=0.5 * u.cm / u.s, bv_y=0.5 * u.cm / u.s, ) assert_fname(p.save(prefix)[0]) check_axis_manipulation(p, prefix) with _cleanup_fname() as prefix: ds = load(cyl_2d) slc = SlicePlot(ds, "theta", ("gas", "density")) slc.annotate_quiver(("gas", "velocity_r"), ("gas", "velocity_z")) assert_fname(slc.save(prefix)[0]) with _cleanup_fname() as prefix: ds = load(cyl_3d) slc = SlicePlot(ds, "r", ("gas", "velocity_magnitude")) slc.annotate_quiver(("gas", "velocity_theta"), ("gas", "velocity_z")) assert_fname(slc.save(prefix)[0]) slc = SlicePlot(ds, "z", ("gas", "velocity_magnitude")) slc.annotate_quiver( ("gas", "velocity_cartesian_x"), ("gas", "velocity_cartesian_y") ) assert_fname(slc.save(prefix)[0]) slc = SlicePlot(ds, "theta", ("gas", "velocity_magnitude")) slc.annotate_quiver(("gas", "velocity_r"), ("gas", "velocity_z")) assert_fname(slc.save(prefix)[0]) def test_quiver_callback_spherical(): ds = fake_amr_ds( fields=("density", "velocity_r", "velocity_theta", "velocity_phi"), units=("g/cm**3", "cm/s", "cm/s", "cm/s"), geometry="spherical", ) with _cleanup_fname() as prefix: p = ProjectionPlot(ds, "phi", ("gas", "density")) p.annotate_quiver( ("gas", "velocity_cylindrical_radius"), ("gas", "velocity_cylindrical_z"), factor=8, scale=0.5, scale_units="inches", normalize=True, ) assert_fname(p.save(prefix)[0]) with _cleanup_fname() as prefix: p = ProjectionPlot(ds, "r", ("gas", "density")) p.annotate_quiver( ("gas", "velocity_theta"), ("gas", "velocity_phi"), factor=8, scale=0.5, scale_units="inches", normalize=True, ) assert_fname(p.save(prefix)[0]) @requires_module("h5py") @requires_file(cyl_2d) def test_contour_callback(): with _cleanup_fname() as prefix: ds = 
fake_amr_ds(fields=("density", "temperature"), units=("g/cm**3", "K")) for ax in "xyz": p = ProjectionPlot(ds, ax, ("gas", "density")) p.annotate_contour(("gas", "temperature")) assert_fname(p.save(prefix)[0]) p = ProjectionPlot( ds, ax, ("gas", "density"), weight_field=("gas", "density") ) p.annotate_contour(("gas", "temperature")) assert_fname(p.save(prefix)[0]) p = SlicePlot(ds, ax, ("gas", "density")) p.annotate_contour(("gas", "temperature")) # BREAKS WITH ndarray assert_fname(p.save(prefix)[0]) # Now we'll check a few additional minor things p = SlicePlot(ds, "x", ("gas", "density")) p.annotate_contour( ("gas", "temperature"), levels=10, factor=8, take_log=False, clim=(0.4, 0.6), plot_args={"linewidths": 2.0}, label=True, text_args={"fontsize": "x-large"}, ) p.save(prefix) p = SlicePlot(ds, "x", ("gas", "density")) s2 = ds.slice(0, 0.2) p.annotate_contour( ("gas", "temperature"), levels=10, factor=8, take_log=False, clim=(0.4, 0.6), plot_args={"linewidths": 2.0}, label=True, text_args={"fontsize": "x-large"}, data_source=s2, ) p.save(prefix) with _cleanup_fname() as prefix: ds = load(cyl_2d) slc = SlicePlot(ds, "theta", ("gas", "plasma_beta")) slc.annotate_contour( ("gas", "plasma_beta"), levels=2, factor=7, take_log=False, clim=(1.0e-1, 1.0e1), label=True, plot_args={"colors": ("c", "w"), "linewidths": 1}, text_args={"fmt": "%1.1f"}, ) assert_fname(slc.save(prefix)[0]) check_axis_manipulation(slc, prefix) with _cleanup_fname() as prefix: ds = fake_amr_ds( fields=("density", "temperature"), units=("g/cm**3", "K"), geometry="spherical", ) p = SlicePlot(ds, "r", ("gas", "density")) kwargs = { "levels": 10, "factor": 8, "take_log": False, "clim": (0.4, 0.6), "plot_args": {"linewidths": 2.0}, "label": True, "text_args": {"fontsize": "x-large"}, } assert_raises( YTDataTypeUnsupported, p.annotate_contour, ("gas", "temperature"), **kwargs ) @requires_module("h5py") @requires_file(cyl_2d) def test_grids_callback(): with _cleanup_fname() as prefix: ds = fake_amr_ds(fields=("density",), units=("g/cm**3",)) for ax in "xyz": p = ProjectionPlot(ds, ax, ("gas", "density")) p.annotate_grids() assert_fname(p.save(prefix)[0]) p = ProjectionPlot( ds, ax, ("gas", "density"), weight_field=("gas", "density") ) p.annotate_grids() assert_fname(p.save(prefix)[0]) p = SlicePlot(ds, ax, ("gas", "density")) p.annotate_grids() assert_fname(p.save(prefix)[0]) # Now we'll check a few additional minor things p = SlicePlot(ds, "x", ("gas", "density")) p.annotate_grids( alpha=0.7, min_pix=10, min_pix_ids=30, draw_ids=True, id_loc="upper right", periodic=False, min_level=2, max_level=3, cmap="gist_stern", ) p.save(prefix) with _cleanup_fname() as prefix: ds = load(cyl_2d) slc = SlicePlot(ds, "theta", ("gas", "density")) slc.annotate_grids() assert_fname(slc.save(prefix)[0]) check_axis_manipulation(slc, prefix) with _cleanup_fname() as prefix: ds = fake_amr_ds(fields=("density",), units=("g/cm**3",), geometry="spherical") p = SlicePlot(ds, "r", ("gas", "density")) kwargs = { "alpha": 0.7, "min_pix": 10, "min_pix_ids": 30, "draw_ids": True, "id_loc": "upper right", "periodic": False, "min_level": 2, "max_level": 3, "cmap": "gist_stern", } assert_raises(YTDataTypeUnsupported, p.annotate_grids, **kwargs) @requires_module("h5py") @requires_file(cyl_2d) def test_cell_edges_callback(): with _cleanup_fname() as prefix: ds = fake_amr_ds(fields=("density",), units=("g/cm**3",)) for ax in "xyz": p = ProjectionPlot(ds, ax, ("gas", "density")) p.annotate_cell_edges() assert_fname(p.save(prefix)[0]) p = ProjectionPlot( ds, ax, 
("gas", "density"), weight_field=("gas", "density") ) p.annotate_cell_edges() assert_fname(p.save(prefix)[0]) p = SlicePlot(ds, ax, ("gas", "density")) p.annotate_cell_edges() assert_fname(p.save(prefix)[0]) # Now we'll check a few additional minor things p = SlicePlot(ds, "x", ("gas", "density")) p.annotate_cell_edges(alpha=0.7, line_width=0.9, color=(0.0, 1.0, 1.0)) p.save(prefix) check_axis_manipulation(p, prefix) with _cleanup_fname() as prefix: ds = load(cyl_2d) slc = SlicePlot(ds, "theta", ("gas", "density")) slc.annotate_cell_edges() assert_fname(slc.save(prefix)[0]) with _cleanup_fname() as prefix: ds = fake_amr_ds(fields=("density",), units=("g/cm**3",), geometry="spherical") p = SlicePlot(ds, "r", ("gas", "density")) assert_raises(YTDataTypeUnsupported, p.annotate_cell_edges) def test_mesh_lines_callback(): with _cleanup_fname() as prefix: ds = fake_hexahedral_ds() for field in ds.field_list: sl = SlicePlot(ds, 1, field) sl.annotate_mesh_lines(color="black") assert_fname(sl.save(prefix)[0]) ds = fake_tetrahedral_ds() for field in ds.field_list: sl = SlicePlot(ds, 1, field) sl.annotate_mesh_lines(color="black") assert_fname(sl.save(prefix)[0]) check_axis_manipulation(sl, prefix) # only test the final field @requires_module("h5py") @requires_file(cyl_2d) @requires_file(cyl_3d) def test_streamline_callback(): with _cleanup_fname() as prefix: ds = fake_amr_ds( fields=("density", "velocity_x", "velocity_y", "magvel"), units=("g/cm**3", "cm/s", "cm/s", "cm/s"), ) for ax in "xyz": # Projection plot tests p = ProjectionPlot(ds, ax, ("gas", "density")) p.annotate_streamlines(("gas", "velocity_x"), ("gas", "velocity_y")) assert_fname(p.save(prefix)[0]) p = ProjectionPlot( ds, ax, ("gas", "density"), weight_field=("gas", "density") ) p.annotate_streamlines(("gas", "velocity_x"), ("gas", "velocity_y")) assert_fname(p.save(prefix)[0]) # Slice plot test p = SlicePlot(ds, ax, ("gas", "density")) p.annotate_streamlines(("gas", "velocity_x"), ("gas", "velocity_y")) assert_fname(p.save(prefix)[0]) # Additional features p = SlicePlot(ds, ax, ("gas", "density")) p.annotate_streamlines( ("gas", "velocity_x"), ("gas", "velocity_y"), factor=32, density=4 ) assert_fname(p.save(prefix)[0]) p = SlicePlot(ds, ax, ("gas", "density")) p.annotate_streamlines( ("gas", "velocity_x"), ("gas", "velocity_y"), color=("stream", "magvel"), ) assert_fname(p.save(prefix)[0]) check_axis_manipulation(p, prefix) # a more thorough example involving many keyword arguments p = SlicePlot(ds, ax, ("gas", "density")) p.annotate_streamlines( ("gas", "velocity_x"), ("gas", "velocity_y"), linewidth=("gas", "density"), linewidth_upscaling=3, color=("stream", "magvel"), color_threshold=0.5, cmap="viridis", arrowstyle="->", ) assert_fname(p.save(prefix)[0]) # Axisymmetric dataset with _cleanup_fname() as prefix: ds = load(cyl_2d) slc = SlicePlot(ds, "theta", ("gas", "velocity_magnitude")) slc.annotate_streamlines(("gas", "velocity_r"), ("gas", "velocity_z")) assert_fname(slc.save(prefix)[0]) with _cleanup_fname() as prefix: ds = load(cyl_3d) slc = SlicePlot(ds, "r", ("gas", "velocity_magnitude")) slc.annotate_streamlines(("gas", "velocity_theta"), ("gas", "velocity_z")) assert_fname(slc.save(prefix)[0]) slc = SlicePlot(ds, "z", ("gas", "velocity_magnitude")) slc.annotate_streamlines( ("gas", "velocity_cartesian_x"), ("gas", "velocity_cartesian_y") ) assert_fname(slc.save(prefix)[0]) slc = SlicePlot(ds, "theta", ("gas", "velocity_magnitude")) slc.annotate_streamlines(("gas", "velocity_r"), ("gas", "velocity_z")) 
assert_fname(slc.save(prefix)[0]) # Spherical dataset with _cleanup_fname() as prefix: ds = fake_amr_ds( fields=("density", "velocity_r", "velocity_theta", "velocity_phi"), units=("g/cm**3", "cm/s", "cm/s", "cm/s"), geometry="spherical", ) slc = SlicePlot(ds, "phi", ("gas", "velocity_magnitude")) slc.annotate_streamlines( ("gas", "velocity_cylindrical_radius"), ("gas", "velocity_cylindrical_z") ) assert_fname(slc.save(prefix)[0]) slc = SlicePlot(ds, "theta", ("gas", "velocity_magnitude")) slc.annotate_streamlines( ("gas", "velocity_conic_x"), ("gas", "velocity_conic_y") ) assert_fname(slc.save(prefix)[0]) @requires_module("h5py") @requires_file(cyl_2d) @requires_file(cyl_3d) def test_line_integral_convolution_callback(): with _cleanup_fname() as prefix: ds = fake_amr_ds( fields=("density", "velocity_x", "velocity_y", "velocity_z"), units=("g/cm**3", "cm/s", "cm/s", "cm/s"), ) for ax in "xyz": p = ProjectionPlot(ds, ax, ("gas", "density")) p.annotate_line_integral_convolution( ("gas", "velocity_x"), ("gas", "velocity_y") ) assert_fname(p.save(prefix)[0]) p = ProjectionPlot( ds, ax, ("gas", "density"), weight_field=("gas", "density") ) p.annotate_line_integral_convolution( ("gas", "velocity_x"), ("gas", "velocity_y") ) assert_fname(p.save(prefix)[0]) p = SlicePlot(ds, ax, ("gas", "density")) p.annotate_line_integral_convolution( ("gas", "velocity_x"), ("gas", "velocity_y") ) assert_fname(p.save(prefix)[0]) # Now we'll check a few additional minor things p = SlicePlot(ds, "x", ("gas", "density")) p.annotate_line_integral_convolution( ("gas", "velocity_x"), ("gas", "velocity_y"), kernellen=100.0, lim=(0.4, 0.7), cmap=ytcfg.get("yt", "default_colormap"), alpha=0.9, const_alpha=True, ) p.save(prefix) with _cleanup_fname() as prefix: ds = load(cyl_2d) slc = SlicePlot(ds, "theta", ("gas", "magnetic_field_strength")) slc.annotate_line_integral_convolution( ("gas", "magnetic_field_r"), ("gas", "magnetic_field_z") ) assert_fname(slc.save(prefix)[0]) with _cleanup_fname() as prefix: ds = load(cyl_3d) slc = SlicePlot(ds, "r", ("gas", "magnetic_field_strength")) slc.annotate_line_integral_convolution( ("gas", "magnetic_field_theta"), ("gas", "magnetic_field_z") ) assert_fname(slc.save(prefix)[0]) slc = SlicePlot(ds, "z", ("gas", "magnetic_field_strength")) slc.annotate_line_integral_convolution( ("gas", "magnetic_field_cartesian_x"), ("gas", "magnetic_field_cartesian_y") ) assert_fname(slc.save(prefix)[0]) slc = SlicePlot(ds, "theta", ("gas", "magnetic_field_strength")) slc.annotate_line_integral_convolution( ("gas", "magnetic_field_r"), ("gas", "magnetic_field_z") ) assert_fname(slc.save(prefix)[0]) check_axis_manipulation(slc, prefix) with _cleanup_fname() as prefix: ds = fake_amr_ds( fields=("density", "velocity_r", "velocity_theta", "velocity_phi"), units=("g/cm**3", "cm/s", "cm/s", "cm/s"), geometry="spherical", ) slc = SlicePlot(ds, "phi", ("gas", "velocity_magnitude")) slc.annotate_line_integral_convolution( ("gas", "velocity_cylindrical_radius"), ("gas", "velocity_cylindrical_z") ) assert_fname(slc.save(prefix)[0]) slc = SlicePlot(ds, "theta", ("gas", "velocity_magnitude")) slc.annotate_line_integral_convolution( ("gas", "velocity_conic_x"), ("gas", "velocity_conic_y") ) assert_fname(slc.save(prefix)[0]) check_axis_manipulation(slc, prefix) def test_accepts_all_fields_decorator(): fields = [ ("gas", "density"), ("gas", "velocity_x"), ("gas", "pressure"), ("gas", "temperature"), ] units = ["g/cm**3", "cm/s", "dyn/cm**2", "K"] ds = fake_random_ds(16, fields=fields, units=units) plot = 
SlicePlot(ds, "z", fields=fields) # mocking a class method plot.fake_attr = {f: "not set" for f in fields} @accepts_all_fields def set_fake_field_attribute(self, field, value): self.fake_attr[field] = value return self # test on a single field plot = set_fake_field_attribute(plot, field=fields[0], value=1) assert_array_equal([plot.fake_attr[f] for f in fields], [1] + ["not set"] * 3) # test using "all" as a field plot = set_fake_field_attribute(plot, field="all", value=2) assert_array_equal(list(plot.fake_attr.values()), [2] * 4) M7 = "DD0010/moving7_0010" @requires_ds(M7) def test_axis_manipulations(): # tests flip_horizontal, flip_vertical and swap_axes in different combinations # on a SlicePlot with a velocity callback. plot_field = ("gas", "density") decimals = 12 ds = data_dir_load(M7) def simple_velocity(test_obj, plot): # test_obj: the active PlotWindowAttributeTest # plot: the actual PlotWindow plot.annotate_velocity() def swap_axes(test_obj, plot): plot.swap_axes() def flip_horizontal(test_obj, plot): plot.flip_horizontal() def flip_vertical(test_obj, plot): plot.flip_vertical() callback_tests = ( ("flip_horizontal", (simple_velocity, flip_horizontal)), ("flip_vertical", (simple_velocity, flip_vertical)), ("swap_axes", (simple_velocity, swap_axes)), ("flip_and_swap", (simple_velocity, flip_vertical, flip_horizontal, swap_axes)), ) for n, r in callback_tests: test = PlotWindowAttributeTest( ds, plot_field, "x", attr_name=None, attr_args=None, decimals=decimals, callback_id=n, callback_runners=r, ) test_axis_manipulations.__name__ = test.description yield test ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/test_callbacks_geographic.py0000644000175100001770000000407014714401662024005 0ustar00runnerdockerimport numpy as np import pytest from yt import SlicePlot, load_uniform_grid from yt.testing import fake_amr_ds, requires_module @requires_module("cartopy") @pytest.mark.parametrize("geometry", ["geographic", "internal_geographic"]) def test_quiver_callback_geographic(geometry): flds = ("density", "velocity_ew", "velocity_ns") units = ("g/cm**3", "m/s", "m/s") ds = fake_amr_ds(fields=flds, units=units, geometry=geometry) for ax in ds.coordinates.axis_order: slc = SlicePlot(ds, ax, "density", buff_size=(50, 50)) if ax == ds.coordinates.radial_axis: # avoid the exact transform bounds slc.set_width((359.99, 179.99)) slc.annotate_quiver(("stream", "velocity_ew"), ("stream", "velocity_ns")) slc.render() @pytest.fixture() def ds_geo_uni_grid(): yc = 0.0 xc = 0.0 def _vel_calculator(grid, ax): y_lat = grid.fcoords[:, 1].d x_lon = grid.fcoords[:, 2].d x_lon[x_lon > 180] = x_lon[x_lon > 180] - 360.0 dist = np.sqrt((y_lat - yc) ** 2 + (xc - x_lon) ** 2) if ax == 1: sgn = np.sign(y_lat - yc) elif ax == 2: sgn = np.sign(x_lon - xc) vel = np.exp(-((dist / 45) ** 2)) * sgn return vel.reshape(grid.shape) def _calculate_u(grid, field): return _vel_calculator(grid, 2) def _calculate_v(grid, field): return _vel_calculator(grid, 1) data = {"u_vel": _calculate_u, "v_vel": _calculate_v} bbox = [[10.0, 1000], [-90, 90], [0, 360]] bbox = np.array(bbox) ds = load_uniform_grid( data, (16, 16, 32), bbox=bbox, geometry="geographic", axis_order=("altitude", "latitude", "longitude"), ) return ds @requires_module("cartopy") @pytest.mark.mpl_image_compare def test_geoquiver_answer(ds_geo_uni_grid): slc = SlicePlot(ds_geo_uni_grid, "altitude", "u_vel") slc.set_width((359.99, 179.99)) slc.set_log("u_vel", False) 
slc.annotate_quiver("u_vel", "v_vel", scale=50) slc.render() return slc.plots["u_vel"].figure ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/test_color_maps.py0000644000175100001770000000433114714401662022034 0ustar00runnerdockerimport os import shutil import tempfile import unittest import matplotlib.pyplot as plt import numpy as np from nose.tools import assert_raises from numpy.testing import assert_almost_equal, assert_equal from yt import make_colormap, show_colormaps from yt.testing import requires_backend class TestColorMaps(unittest.TestCase): def setUp(self): self.tmpdir = tempfile.mkdtemp() self.curdir = os.getcwd() os.chdir(self.tmpdir) def tearDown(self): os.chdir(self.curdir) shutil.rmtree(self.tmpdir) @requires_backend("Agg") def test_show_colormaps(self): show_colormaps() show_colormaps(subset=["jet", "cool"]) show_colormaps(subset="yt_native", filename="yt_color_maps.png") # Test for non-existent color map with assert_raises(AttributeError) as ex: show_colormaps(subset="unknown", filename="yt_color_maps.png") desired = ( "show_colormaps requires subset attribute to be 'all', " "'yt_native', or a list of valid colormap names." ) assert_equal(str(ex.exception), desired) @requires_backend("Agg") def test_make_colormap(self): make_colormap( [([0, 0, 1], 10), ([1, 1, 1], 10), ([1, 0, 0], 10)], name="french_flag", interpolate=False, ) show_colormaps("french_flag") cmap = make_colormap( [("dred", 5), ("blue", 2.0), ("orange", 0)], name="my_cmap" ) assert_almost_equal( cmap["red"][1], np.array([0.00392157, 0.62400345, 0.62400345]) ) assert_almost_equal( cmap["blue"][2], np.array([0.00784314, 0.01098901, 0.01098901]) ) assert_almost_equal(cmap["green"][3], np.array([0.01176471, 0.0, 0.0])) def test_cmyt_integration(): for name in ["algae", "bds_highcontrast", "kelp", "arbre", "octarine", "kamae"]: cmap = plt.get_cmap(name) assert cmap.name == name name_r = name + "_r" cmap_r = plt.get_cmap(name_r) assert cmap_r.name == name_r for name in ["algae", "kelp", "arbre", "octarine", "pastel"]: cmap = plt.get_cmap("cmyt." + name) assert cmap.name == "cmyt." + name ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/test_commons.py0000644000175100001770000000401414714401662021347 0ustar00runnerdockerimport pytest from numpy.testing import assert_raises from yt.visualization._commons import ( _swap_arg_pair_order, _swap_axes_extents, validate_image_name, ) @pytest.mark.parametrize( "name, expected", [ ("noext", "noext.png"), ("nothing.png", "nothing.png"), ("nothing.pdf", "nothing.pdf"), ("version.1.2.3", "version.1.2.3.png"), ], ) def test_default(name, expected): result = validate_image_name(name) assert result == expected @pytest.mark.parametrize( "name, suffix, expected", [ ("noext", ".png", "noext.png"), ("noext", None, "noext.png"), ("nothing.png", ".png", "nothing.png"), ("nothing.png", None, "nothing.png"), ("nothing.png", ".pdf", "nothing.pdf"), ("nothing.pdf", ".pdf", "nothing.pdf"), ("nothing.pdf", None, "nothing.pdf"), ("nothing.pdf", ".png", "nothing.png"), ("version.1.2.3", ".png", "version.1.2.3.png"), ("version.1.2.3", None, "version.1.2.3.png"), ("version.1.2.3", ".pdf", "version.1.2.3.pdf"), ], ) @pytest.mark.filterwarnings( r"ignore:Received two valid image formats '\w+' \(from filename\) " r"and '\w+' \(from suffix\). 
The former is ignored.:UserWarning" ) def test_custom_valid_ext(name, suffix, expected): result1 = validate_image_name(name, suffix=suffix) assert result1 == expected if suffix is not None: alt_suffix = suffix.replace(".", "") result2 = validate_image_name(name, suffix=alt_suffix) assert result2 == expected def test_extent_swap(): input_extent = [1, 2, 3, 4] expected = [3, 4, 1, 2] assert _swap_axes_extents(input_extent) == expected assert _swap_axes_extents(tuple(input_extent)) == tuple(expected) def test_swap_arg_pair_order(): assert _swap_arg_pair_order(1, 2) == (2, 1) assert _swap_arg_pair_order(1, 2, 3, 4, 5, 6) == (2, 1, 4, 3, 6, 5) assert_raises(TypeError, _swap_arg_pair_order, 1) assert_raises(TypeError, _swap_arg_pair_order, 1, 2, 3) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/test_eps_writer.py0000644000175100001770000000131314714401662022056 0ustar00runnerdockerimport yt from yt.testing import fake_amr_ds, requires_external_executable, requires_module @requires_external_executable("tex") @requires_module("pyx") def test_eps_writer(tmp_path): import yt.visualization.eps_writer as eps fields = [ ("gas", "density"), ("gas", "temperature"), ] units = [ "g/cm**3", "K", ] ds = fake_amr_ds(fields=fields, units=units) slc = yt.SlicePlot( ds, "z", fields=fields, ) eps_fig = eps.multiplot_yt(2, 1, slc, bare_axes=True) eps_fig.scale_line(0.2, "5 cm") savefile = tmp_path / "multi" eps_fig.save_fig(savefile, format="eps") assert savefile.with_suffix(".eps").exists() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/test_export_frb.py0000644000175100001770000000171014714401662022046 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_equal from yt.testing import assert_allclose_units, fake_random_ds def setup_module(): """Test specific setup.""" from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True def test_export_frb(): test_ds = fake_random_ds(128) slc = test_ds.slice(0, 0.5) frb = slc.to_frb((0.5, "unitary"), 64) frb_ds = frb.export_dataset(fields=[("gas", "density")], nprocs=8) dd_frb = frb_ds.all_data() assert_equal(frb_ds.domain_left_edge.v, np.array([0.25, 0.25, 0.0])) assert_equal(frb_ds.domain_right_edge.v, np.array([0.75, 0.75, 1.0])) assert_equal(frb_ds.domain_width.v, np.array([0.5, 0.5, 1.0])) assert_equal(frb_ds.domain_dimensions, np.array([64, 64, 1], dtype="int64")) assert_allclose_units( frb["gas", "density"].sum(), dd_frb.quantities.total_quantity(("gas", "density")), ) assert_equal(frb_ds.index.num_grids, 8) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/test_filters.py0000644000175100001770000000275614714401662021357 0ustar00runnerdocker""" Tests for frb filters """ import numpy as np import yt from yt.testing import fake_amr_ds, requires_module @requires_module("scipy") def test_white_noise_filter(): ds = fake_amr_ds(fields=("density",), units=("g/cm**3",)) p = ds.proj(("gas", "density"), "z") frb = p.to_frb((1, "unitary"), 64) frb.apply_white_noise() frb.apply_white_noise(1e-3) frb.render(("gas", "density")) @requires_module("scipy") def test_gauss_beam_filter(): ds = fake_amr_ds(fields=("density",), units=("g/cm**3",)) p = ds.proj(("gas", "density"), "z") frb = p.to_frb((1, "unitary"), 64) frb.apply_gauss_beam(sigma=1.0) frb.render(("gas", "density")) @requires_module("scipy") def 
test_filter_wiring(): ds = fake_amr_ds(fields=[("gas", "density")], units=["g/cm**3"]) p = yt.SlicePlot(ds, "x", "density") # Note: frb is a FixedResolutionBuffer object frb1 = p.frb data_orig = frb1["density"].value sigma = 2 nbeam = 30 p.frb.apply_gauss_beam(nbeam=nbeam, sigma=sigma) frb2 = p.frb data_gauss = frb2["density"].value p.frb.apply_white_noise() frb3 = p.frb data_white = frb3["density"].value # We check the frb objects are different assert frb1 is not frb2 assert frb1 is not frb3 assert frb2 is not frb3 # We check the resulting image are different each time assert not np.allclose(data_orig, data_gauss) assert not np.allclose(data_orig, data_white) assert not np.allclose(data_gauss, data_white) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/test_fits_image.py0000644000175100001770000002461014714401662022007 0ustar00runnerdockerimport os import shutil import tempfile import numpy as np from numpy.testing import assert_allclose, assert_equal from yt.loaders import load from yt.testing import fake_random_ds, requires_file, requires_module from yt.utilities.on_demand_imports import _astropy from yt.visualization.fits_image import ( FITSImageData, FITSOffAxisProjection, FITSOffAxisSlice, FITSParticleOffAxisProjection, FITSParticleProjection, FITSProjection, FITSSlice, assert_same_wcs, ) from yt.visualization.volume_rendering.off_axis_projection import off_axis_projection @requires_module("astropy") def test_fits_image(): curdir = os.getcwd() tmpdir = tempfile.mkdtemp() os.chdir(tmpdir) fields = ("density", "temperature", "velocity_x", "velocity_y", "velocity_z") units = ("g/cm**3", "K", "cm/s", "cm/s", "cm/s") ds = fake_random_ds( 64, fields=fields, units=units, nprocs=16, length_unit=100.0, particles=10000 ) prj = ds.proj(("gas", "density"), 2) prj_frb = prj.to_frb((0.5, "unitary"), 128) fid1 = prj_frb.to_fits_data( fields=[("gas", "density"), ("gas", "temperature")], length_unit="cm" ) fits_prj = FITSProjection( ds, "z", [ds.fields.gas.density, ("gas", "temperature")], image_res=128, width=(0.5, "unitary"), ) assert_equal(fid1["density"].data, fits_prj["density"].data) assert_equal(fid1["temperature"].data, fits_prj["temperature"].data) fid1.writeto("fid1.fits", overwrite=True) new_fid1 = FITSImageData.from_file("fid1.fits") assert_equal(fid1["density"].data, new_fid1["density"].data) assert_equal(fid1["temperature"].data, new_fid1["temperature"].data) assert_equal(fid1.length_unit, new_fid1.length_unit) assert_equal(fid1.time_unit, new_fid1.time_unit) assert_equal(fid1.mass_unit, new_fid1.mass_unit) assert_equal(fid1.velocity_unit, new_fid1.velocity_unit) assert_equal(fid1.magnetic_unit, new_fid1.magnetic_unit) assert_equal(fid1.current_time, new_fid1.current_time) ds2 = load("fid1.fits") ds2.index assert ("fits", "density") in ds2.field_list assert ("fits", "temperature") in ds2.field_list dw_cm = ds2.domain_width.in_units("cm") assert dw_cm[0].v == 50.0 assert dw_cm[1].v == 50.0 slc = ds.slice(2, 0.5) slc_frb = slc.to_frb((0.5, "unitary"), 128) fid2 = slc_frb.to_fits_data( fields=[("gas", "density"), ("gas", "temperature")], length_unit="cm" ) fits_slc = FITSSlice( ds, "z", [("gas", "density"), ("gas", "temperature")], image_res=128, width=(0.5, "unitary"), ) assert_equal(fid2["density"].data, fits_slc["density"].data) assert_equal(fid2["temperature"].data, fits_slc["temperature"].data) fits_slc2 = FITSSlice( ds, "z", [("gas", "density"), ("gas", "temperature")], image_res=128, width=(0.5, 
"unitary"), origin="image", ) assert_equal(fits_slc2["density"].data, fits_slc["density"].data) assert_equal(fits_slc2["temperature"].data, fits_slc["temperature"].data) assert fits_slc.wcs.wcs.crval[0] != 0.0 assert fits_slc.wcs.wcs.crval[1] != 0.0 assert fits_slc2.wcs.wcs.crval[0] == 0.0 assert fits_slc2.wcs.wcs.crval[1] == 0.0 dens_img = fid2.pop("density") temp_img = fid2.pop("temperature") combined_fid = FITSImageData.from_images([dens_img, temp_img]) assert_equal(combined_fid.length_unit, dens_img.length_unit) assert_equal(combined_fid.time_unit, dens_img.time_unit) assert_equal(combined_fid.mass_unit, dens_img.mass_unit) assert_equal(combined_fid.velocity_unit, dens_img.velocity_unit) assert_equal(combined_fid.magnetic_unit, dens_img.magnetic_unit) assert_equal(combined_fid.current_time, dens_img.current_time) # Make sure that we can combine FITSImageData instances with more # than one image each combined_fid2 = FITSImageData.from_images([combined_fid, combined_fid]) # Writing the FITS file ensures that we have assembled the images # together correctly combined_fid2.writeto("combined.fits", overwrite=True) cut = ds.cutting([0.1, 0.2, -0.9], [0.5, 0.42, 0.6]) cut_frb = cut.to_frb((0.5, "unitary"), 128) fid3 = cut_frb.to_fits_data( fields=[("gas", "density"), ds.fields.gas.temperature], length_unit="cm" ) fits_cut = FITSOffAxisSlice( ds, [0.1, 0.2, -0.9], [("gas", "density"), ("gas", "temperature")], image_res=128, center=[0.5, 0.42, 0.6], width=(0.5, "unitary"), ) assert_equal(fid3["density"].data, fits_cut["density"].data) assert_equal(fid3["temperature"].data, fits_cut["temperature"].data) fid3.create_sky_wcs([30.0, 45.0], (1.0, "arcsec/kpc")) fid3.writeto("fid3.fits", overwrite=True) new_fid3 = FITSImageData.from_file("fid3.fits") assert_same_wcs(fid3.wcs, new_fid3.wcs) assert new_fid3.wcs.wcs.cunit[0] == "deg" assert new_fid3.wcs.wcs.cunit[1] == "deg" assert new_fid3.wcs.wcs.ctype[0] == "RA---TAN" assert new_fid3.wcs.wcs.ctype[1] == "DEC--TAN" assert new_fid3.wcsa.wcs.cunit[0] == "cm" assert new_fid3.wcsa.wcs.cunit[1] == "cm" assert new_fid3.wcsa.wcs.ctype[0] == "linear" assert new_fid3.wcsa.wcs.ctype[1] == "linear" buf = off_axis_projection( ds, ds.domain_center, [0.1, 0.2, -0.9], 0.5, 128, ("gas", "density") ).swapaxes(0, 1) fid4 = FITSImageData(buf, fields=[("gas", "density")], width=100.0) fits_oap = FITSOffAxisProjection( ds, [0.1, 0.2, -0.9], ("gas", "density"), width=(0.5, "unitary"), image_res=128, depth=(0.5, "unitary"), ) assert_equal(fid4["density"].data, fits_oap["density"].data) fid4.create_sky_wcs([30.0, 45.0], (1.0, "arcsec/kpc"), replace_old_wcs=False) assert fid4.wcs.wcs.cunit[0] == "cm" assert fid4.wcs.wcs.cunit[1] == "cm" assert fid4.wcs.wcs.ctype[0] == "linear" assert fid4.wcs.wcs.ctype[1] == "linear" assert fid4.wcsa.wcs.cunit[0] == "deg" assert fid4.wcsa.wcs.cunit[1] == "deg" assert fid4.wcsa.wcs.ctype[0] == "RA---TAN" assert fid4.wcsa.wcs.ctype[1] == "DEC--TAN" cvg = ds.covering_grid( ds.index.max_level, [0.25, 0.25, 0.25], [32, 32, 32], fields=[("gas", "density"), ("gas", "temperature")], ) fid5 = cvg.to_fits_data(fields=[("gas", "density"), ("gas", "temperature")]) assert fid5.dimensionality == 3 fid5.update_header("density", "time", 0.1) fid5.update_header("all", "units", "cgs") assert fid5["density"].header["time"] == 0.1 assert fid5["temperature"].header["units"] == "cgs" assert fid5["density"].header["units"] == "cgs" fid6 = FITSImageData.from_images(fid5) fid5.change_image_name("density", "mass_per_volume") assert fid5["mass_per_volume"].name == 
"mass_per_volume" assert fid5["mass_per_volume"].header["BTYPE"] == "mass_per_volume" assert "mass_per_volume" in fid5.fields assert "mass_per_volume" in fid5.field_units assert "density" not in fid5.fields assert "density" not in fid5.field_units assert "density" in fid6.fields assert_equal(fid6["density"].data, fid5["mass_per_volume"].data) fid7 = FITSImageData.from_images(fid4) fid7.convolve("density", (3.0, "cm")) sigma = 3.0 / fid7.wcs.wcs.cdelt[0] kernel = _astropy.conv.Gaussian2DKernel(x_stddev=sigma) data_conv = _astropy.conv.convolve(fid4["density"].data.d, kernel) assert_allclose(data_conv, fid7["density"].data.d) pfid = FITSParticleProjection(ds, "x", ("io", "particle_mass")) assert pfid["particle_mass"].name == "particle_mass" assert pfid["particle_mass"].header["BTYPE"] == "particle_mass" assert pfid["particle_mass"].units == "g" pofid = FITSParticleOffAxisProjection(ds, [1, 1, 1], ("io", "particle_mass")) assert pofid["particle_mass"].name == "particle_mass" assert pofid["particle_mass"].header["BTYPE"] == "particle_mass" assert pofid["particle_mass"].units == "g" pdfid = FITSParticleProjection(ds, "x", ("io", "particle_mass"), density=True) assert pdfid["particle_mass"].name == "particle_mass" assert pdfid["particle_mass"].header["BTYPE"] == "particle_mass" assert pdfid["particle_mass"].units == "g/cm**2" # Test moments for projections def _vysq(field, data): return data["gas", "velocity_y"] ** 2 ds.add_field(("gas", "vysq"), _vysq, sampling_type="cell", units="cm**2/s**2") fid8 = FITSProjection( ds, "y", [("gas", "velocity_y"), ("gas", "vysq")], moment=1, weight_field=("gas", "density"), ) fid9 = FITSProjection( ds, "y", ("gas", "velocity_y"), moment=2, weight_field=("gas", "density") ) sigy = np.sqrt(fid8["vysq"].data - fid8["velocity_y"].data ** 2) assert_allclose(sigy, fid9["velocity_y_stddev"].data) def _vlsq(field, data): return data["gas", "velocity_los"] ** 2 ds.add_field(("gas", "vlsq"), _vlsq, sampling_type="cell", units="cm**2/s**2") fid10 = FITSOffAxisProjection( ds, [1.0, -1.0, 1.0], [("gas", "velocity_los"), ("gas", "vlsq")], moment=1, weight_field=("gas", "density"), ) fid11 = FITSOffAxisProjection( ds, [1.0, -1.0, 1.0], ("gas", "velocity_los"), moment=2, weight_field=("gas", "density"), ) sigl = np.sqrt(fid10["vlsq"].data - fid10["velocity_los"].data ** 2) assert_allclose(sigl, fid11["velocity_los_stddev"].data) # We need to manually close all the file descriptors so # that windows can delete the folder that contains them. 
ds2.close() for fid in ( fid1, fid2, fid3, fid4, fid5, fid6, fid7, fid8, fid9, new_fid1, new_fid3, pfid, pdfid, pofid, ): fid.close() os.chdir(curdir) shutil.rmtree(tmpdir) etc = "enzo_tiny_cosmology/DD0046/DD0046" @requires_module("astropy") @requires_file(etc) def test_fits_cosmo(): ds = load(etc) fid = FITSProjection(ds, "z", ["density"]) assert fid.wcs.wcs.cunit[0] == "kpc" assert fid.wcs.wcs.cunit[1] == "kpc" assert ds.hubble_constant == fid.hubble_constant assert ds.current_redshift == fid.current_redshift assert fid["density"].header["HUBBLE"] == ds.hubble_constant assert fid["density"].header["REDSHIFT"] == ds.current_redshift fid.close() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/test_geo_projections.py0000644000175100001770000001275714714401662023102 0ustar00runnerdockerimport unittest import numpy as np import yt from yt.testing import fake_amr_ds, requires_module from yt.visualization.geo_plot_utils import get_mpl_transform, transform_list def setup_module(): """Test specific setup.""" from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True class TestGeoProjections(unittest.TestCase): @requires_module("cartopy") def setUp(self): self.ds = fake_amr_ds(geometry="geographic") # switch off the log plot to avoid some unrelated matplotlib issues f = self.ds._get_field_info(("stream", "Density")) f.take_log = False @requires_module("cartopy") def tearDown(self): del self.ds @requires_module("cartopy") def test_geo_projection_setup(self): from yt.utilities.on_demand_imports import _cartopy as cartopy axis = "altitude" self.slc = yt.SlicePlot(self.ds, axis, ("stream", "Density"), origin="native") assert isinstance(self.slc._projection, cartopy.crs.Mollweide) assert isinstance(self.slc._transform, cartopy.crs.PlateCarree) assert self.ds.coordinates.data_projection[axis] == "Mollweide" assert self.ds.coordinates.data_transform[axis] == "PlateCarree" assert isinstance( self.slc._projection, type(self.slc.plots["stream", "Density"].axes.projection), ) @requires_module("cartopy") def test_geo_projections(self): from yt.utilities.on_demand_imports import _cartopy as cartopy self.slc = yt.SlicePlot( self.ds, "altitude", ("stream", "Density"), origin="native" ) for transform in transform_list: if transform == "UTM": # this requires special arguments so let's skip it continue if transform == "OSNI": # avoid crashes, see https://github.com/SciTools/cartopy/issues/1177 continue self.slc.set_mpl_projection(transform) proj_type = type(get_mpl_transform(transform)) assert isinstance(self.slc._projection, proj_type) assert isinstance(self.slc._transform, cartopy.crs.PlateCarree) assert isinstance( self.slc.plots["stream", "Density"].axes.projection, proj_type ) @requires_module("cartopy") def test_projection_object(self): from yt.utilities.on_demand_imports import _cartopy as cartopy shortlist = ["Orthographic", "PlateCarree", "Mollweide"] for transform in shortlist: projection = get_mpl_transform(transform) proj_type = type(projection) self.slc = yt.SlicePlot( self.ds, "altitude", ("stream", "Density"), origin="native" ) self.slc.set_mpl_projection(projection) assert isinstance(self.slc._projection, proj_type) assert isinstance(self.slc._transform, cartopy.crs.PlateCarree) assert isinstance( self.slc.plots["stream", "Density"].axes.projection, proj_type ) @requires_module("cartopy") def test_nondefault_transform(self): from yt.utilities.on_demand_imports import _cartopy as cartopy axis = 
"altitude" # Note: The Miller transform has an extent of approx. +/- 180 in x, # +/-132 in y (in Miller, x is longitude, y is a factor of latitude). # So by changing the projection in this way, the dataset goes from # covering the whole globe (+/- 90 latitude), to covering part of the # globe (+/-72 latitude). Totally fine for testing that the code runs, # but good to be aware of! self.ds.coordinates.data_transform[axis] = "Miller" self.slc = yt.SlicePlot(self.ds, axis, ("stream", "Density"), origin="native") shortlist = ["Orthographic", "PlateCarree", "Mollweide"] for transform in shortlist: self.slc.set_mpl_projection(transform) proj_type = type(get_mpl_transform(transform)) assert isinstance(self.slc._projection, proj_type) assert isinstance(self.slc._transform, cartopy.crs.Miller) assert self.ds.coordinates.data_projection[axis] == "Mollweide" assert self.ds.coordinates.data_transform[axis] == "Miller" assert isinstance( self.slc.plots["stream", "Density"].axes.projection, proj_type ) @requires_module("cartopy") def test_extent(self): # checks that the axis extent is narrowed when doing a subselection axis = "altitude" slc = yt.SlicePlot(self.ds, axis, ("stream", "Density"), origin="native") ax = slc.plots["stream", "Density"].axes full_extent = np.abs(ax.get_extent()) slc = yt.SlicePlot( self.ds, axis, ("stream", "Density"), origin="native", width=(80.0, 50.0) ) ax = slc.plots["stream", "Density"].axes extent = np.abs(ax.get_extent()) assert np.all(extent < full_extent) class TestNonGeoProjections(unittest.TestCase): def setUp(self): self.ds = fake_amr_ds() def tearDown(self): del self.ds del self.slc def test_projection_setup(self): axis = "x" self.slc = yt.SlicePlot(self.ds, axis, ("stream", "Density"), origin="native") assert self.ds.coordinates.data_projection[axis] is None assert self.ds.coordinates.data_transform[axis] is None assert self.slc._projection is None assert self.slc._transform is None ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/test_image_comp_2D_plots.py0000644000175100001770000003701614714401662023552 0ustar00runnerdocker# image tests using pytest-mpl from itertools import chain import matplotlib as mpl import numpy as np import numpy.testing as npt import pytest from matplotlib.colors import SymLogNorm from yt.data_objects.profiles import create_profile from yt.loaders import load_uniform_grid from yt.testing import add_noise_fields, fake_amr_ds, fake_particle_ds, fake_random_ds from yt.visualization.api import ( LinePlot, ParticleProjectionPlot, PhasePlot, ProfilePlot, SlicePlot, ) def test_sliceplot_set_unit_and_zlim_order(): ds = fake_random_ds(16) field = ("gas", "density") p0 = SlicePlot(ds, "z", field) p0.set_unit(field, "kg/m**3") p0.set_zlim(field, zmin=0) # reversing order of operations p1 = SlicePlot(ds, "z", field) p1.set_zlim(field, zmin=0) p1.set_unit(field, "kg/m**3") p0.render() p1.render() im0 = p0.plots[field].image._A im1 = p1.plots[field].image._A npt.assert_allclose(im0, im1) @pytest.mark.mpl_image_compare def test_inf_and_finite_values_set_zlim(): # see https://github.com/yt-project/yt/issues/3901 shape = (32, 16, 1) a = np.ones(16) b = np.ones((32, 16)) c = np.reshape(a * b, shape) # injecting an inf c[0, 0, 0] = np.inf field = ("gas", "density") data = {field: c} ds = load_uniform_grid( data, shape, bbox=np.array([[0, 1], [0, 1], [0, 1]]), ) p = SlicePlot(ds, "z", field) # setting zlim manually p.set_zlim(field, -10, 10) p.render() return p.plots[field].figure 
@pytest.mark.mpl_image_compare def test_sliceplot_custom_norm(): from matplotlib.colors import TwoSlopeNorm ds = fake_random_ds(16) field = ("gas", "density") p = SlicePlot(ds, "z", field) p.set_norm(field, norm=TwoSlopeNorm(vcenter=0, vmin=-0.5, vmax=1)) p.render() return p.plots[field].figure @pytest.mark.mpl_image_compare def test_sliceplot_custom_norm_symlog_int_base(): ds = fake_random_ds(16) add_noise_fields(ds) field = "noise3" p = SlicePlot(ds, "z", field) # using integer base !=10 and >2 to exercise special case # for colorbar minor ticks p.set_norm(field, norm=SymLogNorm(linthresh=0.1, base=5)) p.render() return p.plots[field].figure @pytest.mark.mpl_image_compare def test_lineplot_set_axis_properties(): ds = fake_random_ds(16) field = ("gas", "density") p = LinePlot( ds, field, start_point=[0, 0, 0], end_point=[1, 1, 1], npoints=32, ) p.set_x_unit("cm") p.set_unit(field, "kg/cm**3") p.set_log(field, False) p.render() return p.plots[field].figure @pytest.mark.mpl_image_compare def test_profileplot_set_axis_properties(): ds = fake_random_ds(16) disk = ds.disk(ds.domain_center, [0.0, 0.0, 1.0], (10, "m"), (1, "m")) p = ProfilePlot(disk, ("gas", "density"), [("gas", "velocity_x")]) p.set_unit(("gas", "density"), "kg/cm**3") p.set_log(("gas", "density"), False) p.set_unit(("gas", "velocity_x"), "mile/hour") p.render() return p.plots["gas", "velocity_x"].figure @pytest.mark.mpl_image_compare def test_particleprojectionplot_set_colorbar_properties(): ds = fake_particle_ds(npart=100) field = ("all", "particle_mass") p = ParticleProjectionPlot(ds, 2, field) p.set_buff_size(10) p.set_unit(field, "Msun") p.set_zlim(field, zmax=1e-35) p.set_log(field, False) p.render() return p.plots[field].figure class TestMultipanelPlot: @classmethod def setup_class(cls): cls.fields = [ ("gas", "density"), ("gas", "velocity_x"), ("gas", "velocity_y"), ("gas", "velocity_magnitude"), ] cls.ds = fake_random_ds(16) @pytest.mark.skipif( mpl.__version_info__ < (3, 7), reason="colorbar cannot currently be set horizontal in multi-panel plot with matplotlib older than 3.7.0", ) @pytest.mark.parametrize("cbar_location", ["top", "bottom", "left", "right"]) @pytest.mark.mpl_image_compare def test_multipanelplot_colorbar_orientation_simple(self, cbar_location): p = SlicePlot(self.ds, "z", self.fields) return p.export_to_mpl_figure((2, 2), cbar_location=cbar_location) @pytest.mark.parametrize("cbar_location", ["top", "bottom"]) def test_multipanelplot_colorbar_orientation_warning(self, cbar_location): p = SlicePlot(self.ds, "z", self.fields) if mpl.__version_info__ < (3, 7): with pytest.warns( UserWarning, match="Cannot properly set the orientation of colorbar.", ): p.export_to_mpl_figure((2, 2), cbar_location=cbar_location) else: p.export_to_mpl_figure((2, 2), cbar_location=cbar_location) class TestProfilePlot: @classmethod def setup_class(cls): fields = ("density", "temperature", "velocity_x", "velocity_y", "velocity_z") units = ("g/cm**3", "K", "cm/s", "cm/s", "cm/s") cls.ds = fake_random_ds(16, fields=fields, units=units) regions = [cls.ds.region([0.5] * 3, [0.4] * 3, [0.6] * 3), cls.ds.all_data()] pr_fields = [ [("gas", "density"), ("gas", "temperature")], [("gas", "density"), ("gas", "velocity_x")], [("gas", "temperature"), ("gas", "mass")], [("gas", "density"), ("index", "radius")], [("gas", "velocity_magnitude"), ("gas", "mass")], ] cls.profiles: dict[str, ProfilePlot] = {} for i_reg, reg in enumerate(regions): id_prefix = str(i_reg) for x_field, y_field in pr_fields: id_suffix = "_".join([*x_field, *y_field]) 
base_id = f"{id_prefix}_{id_suffix}" cls.profiles[base_id] = ProfilePlot(reg, x_field, y_field) cls.profiles[f"{base_id}_fractional_accumulation"] = ProfilePlot( reg, x_field, y_field, fractional=True, accumulation=True ) p1d = create_profile(reg, x_field, y_field) cls.profiles[f"{base_id}_from_profiles"] = ProfilePlot.from_profiles( p1d ) p1 = create_profile( cls.ds.all_data(), ("gas", "density"), ("gas", "temperature") ) p2 = create_profile( cls.ds.all_data(), ("gas", "density"), ("gas", "velocity_x") ) cls.profiles["from_multiple_profiles"] = ProfilePlot.from_profiles( [p1, p2], labels=["temperature", "velocity"] ) @pytest.mark.parametrize( "suffix", [None, "from_profiles", "fractional_accumulation"], ) @pytest.mark.parametrize("region", ["0", "1"]) @pytest.mark.parametrize( "xax, yax", [ (("gas", "density"), ("gas", "temperature")), (("gas", "density"), ("gas", "velocity_x")), (("gas", "temperature"), ("gas", "mass")), (("gas", "density"), ("index", "radius")), (("gas", "velocity_magnitude"), ("gas", "mass")), ], ) @pytest.mark.mpl_image_compare def test_profileplot_simple(self, region, xax, yax, suffix): key = "_".join(chain(region, xax, yax)) if suffix is not None: key += f"_{suffix}" plots = list(self.profiles[key].plots.values()) assert len(plots) == 1 return plots[0].figure @pytest.mark.mpl_image_compare def test_profileplot_from_multiple_profiles_0(self): plots = list(self.profiles["from_multiple_profiles"].plots.values()) assert len(plots) == 2 return plots[0].figure @pytest.mark.mpl_image_compare def test_profileplot_from_multiple_profiles_1(self): plots = list(self.profiles["from_multiple_profiles"].plots.values()) assert len(plots) == 2 return plots[1].figure class TestPhasePlot: @classmethod def setup_class(cls): fields = ("density", "temperature", "velocity_x", "velocity_y", "velocity_z") units = ("g/cm**3", "K", "cm/s", "cm/s", "cm/s") cls.ds = fake_random_ds(16, fields=fields, units=units) regions = [cls.ds.region([0.5] * 3, [0.4] * 3, [0.6] * 3), cls.ds.all_data()] pr_fields = [ [("gas", "density"), ("gas", "temperature"), ("gas", "mass")], [("gas", "density"), ("gas", "velocity_x"), ("gas", "mass")], [ ("index", "radius"), ("gas", "temperature"), ("gas", "velocity_magnitude"), ], ] cls.profiles: dict[str, PhasePlot] = {} for i_reg, reg in enumerate(regions): id_prefix = str(i_reg) for x_field, y_field, z_field in pr_fields: id_suffix = "_".join([*x_field, *y_field, *z_field]) base_id = f"{id_prefix}_{id_suffix}" cls.profiles[base_id] = PhasePlot( reg, x_field, y_field, z_field, x_bins=16, y_bins=16 ) cls.profiles[f"{base_id}_fractional_accumulation"] = PhasePlot( reg, x_field, y_field, z_field, fractional=True, accumulation=True, x_bins=16, y_bins=16, ) p2d = create_profile(reg, [x_field, y_field], z_field, n_bins=[16, 16]) cls.profiles[f"{base_id}_from_profiles"] = PhasePlot.from_profile(p2d) @pytest.mark.parametrize( "suffix", [None, "from_profiles", "fractional_accumulation"], ) @pytest.mark.parametrize("region", ["0", "1"]) @pytest.mark.parametrize( "xax, yax, zax", [ (("gas", "density"), ("gas", "temperature"), ("gas", "mass")), (("gas", "density"), ("gas", "velocity_x"), ("gas", "mass")), ( ("index", "radius"), ("gas", "temperature"), ("gas", "velocity_magnitude"), ), ], ) @pytest.mark.mpl_image_compare def test_phaseplot(self, region, xax, yax, zax, suffix): key = "_".join(chain(region, xax, yax, zax)) if suffix is not None: key += f"_{suffix}" plots = list(self.profiles[key].plots.values()) assert len(plots) == 1 return plots[0].figure class 
TestPhasePlotSetZlim: @classmethod def setup_class(cls): cls.ds = fake_random_ds(16) add_noise_fields(cls.ds) cls.data = cls.ds.sphere("c", 1) @pytest.mark.mpl_image_compare def test_phaseplot_set_zlim_with_implicit_units(self): p = PhasePlot( self.data, ("gas", "noise1"), ("gas", "noise3"), [("gas", "density")], weight_field=None, ) field = ("gas", "density") p.set_zlim(field, zmax=10) p.render() return p.plots[field].figure @pytest.mark.mpl_image_compare def test_phaseplot_set_zlim_with_explicit_units(self): p = PhasePlot( self.data, ("gas", "noise1"), ("gas", "noise3"), [("gas", "density")], weight_field=None, ) field = ("gas", "density") # using explicit units, we expect the colorbar units to stay unchanged p.set_zlim(field, zmin=(1e36, "kg/AU**3")) p.render() return p.plots[field].figure class TestSetBackgroundColor: # see https://github.com/yt-project/yt/issues/3854 @classmethod def setup_class(cls): cls.ds = fake_random_ds(16) def some_nans_field(field, data): ret = data["gas", "density"] ret[::2] *= np.nan return ret cls.ds.add_field( name=("gas", "polluted_field"), function=some_nans_field, sampling_type="local", ) @pytest.mark.mpl_image_compare def test_sliceplot_set_background_color_linear(self): field = ("gas", "density") p = SlicePlot(self.ds, "z", field, width=1.5) p.set_background_color(field, color="C0") p.set_log(field, False) p.render() return p.plots[field].figure @pytest.mark.mpl_image_compare def test_sliceplot_set_background_color_log(self): field = ("gas", "density") p = SlicePlot(self.ds, "z", field, width=1.5) p.set_background_color(field, color="C0") p.render() return p.plots[field].figure @pytest.mark.mpl_image_compare def test_sliceplot_set_background_color_and_bad_value(self): # see https://github.com/yt-project/yt/issues/4639 field = ("gas", "polluted_field") p = SlicePlot(self.ds, "z", field, width=1.5) p.set_background_color(field, color="black") # copy the default colormap cmap = mpl.colormaps["cmyt.arbre"] cmap.set_bad("red") p.set_cmap(field, cmap) p.render() return p.plots[field].figure class TestCylindricalZSlicePlot: @classmethod def setup_class(cls): cls.ds = fake_amr_ds(geometry="cylindrical") add_noise_fields(cls.ds) fields = ["noise%d" % i for i in range(4)] cls.plot = SlicePlot(cls.ds, "z", fields) @pytest.mark.parametrize("field", ["noise0", "noise1", "noise2", "noise3"]) @pytest.mark.mpl_image_compare def test_cylindrical_z_log(self, field): return self.plot.plots[field].figure @pytest.mark.parametrize("field", ["noise0", "noise1", "noise2", "noise3"]) @pytest.mark.mpl_image_compare def test_cylindrical_z_linear(self, field): self.plot.set_log("noise0", False) return self.plot.plots[field].figure @pytest.mark.parametrize( "theta_min, theta_max", [ pytest.param(0, 2 * np.pi, id="full_azimuthal_domain"), pytest.param(3 / 4 * np.pi, 5 / 4 * np.pi, id="restricted_sector"), ], ) @pytest.mark.mpl_image_compare def test_exclude_pixels_with_partial_bbox_intersection(self, theta_min, theta_max): rmin = 1.0 rmax = 2.0 ds = fake_amr_ds( geometry="cylindrical", domain_left_edge=[rmin, 0, theta_min], domain_right_edge=[rmax, 1, theta_max], ) add_noise_fields(ds) plot = SlicePlot(ds, "z", ("gas", "noise0")) for radius in [rmin - 0.01, rmax]: plot.annotate_sphere( center=[0, 0, 0], radius=radius, circle_args={"color": "red", "alpha": 0.4, "linewidth": 3}, ) plot.annotate_title("all pixels beyond (or on) red lines should be white") plot.render() return plot.plots["gas", "noise0"].figure class TestSphericalPhiSlicePlot: @classmethod def setup_class(cls): cls.ds 
= fake_amr_ds(geometry="spherical")
        add_noise_fields(cls.ds)
        fields = ["noise%d" % i for i in range(4)]
        cls.plot = SlicePlot(cls.ds, "phi", fields)

    @pytest.mark.parametrize("field", ["noise0", "noise1", "noise2", "noise3"])
    @pytest.mark.mpl_image_compare
    def test_spherical_phi_log(self, field):
        return self.plot.plots[field].figure


class TestSphericalThetaSlicePlot:
    @classmethod
    def setup_class(cls):
        cls.ds = fake_amr_ds(geometry="spherical")
        add_noise_fields(cls.ds)
        fields = ["noise%d" % i for i in range(4)]
        cls.plot = SlicePlot(cls.ds, "theta", fields)

    @pytest.mark.parametrize("field", ["noise0", "noise1", "noise2", "noise3"])
    @pytest.mark.mpl_image_compare
    def test_spherical_theta_log(self, field):
        return self.plot.plots[field].figure


# === yt-4.4.0/yt/visualization/tests/test_image_comp_geo.py ===

import pytest

import yt
from yt.testing import fake_amr_ds, requires_module_pytest as requires_module


class TestGeoTransform:
    # Cartopy requires pykdtree *or* scipy to enable the projections
    # we test here. We require scipy for simplicity because it offers
    # better support for various platforms via PyPI at the time of writing.
    # The following projections are skipped (reason):
    # - UTM (requires additional arguments)
    # - OSNI (avoid crashes, see https://github.com/SciTools/cartopy/issues/1177)
    @classmethod
    def setup_class(cls):
        cls.ds = fake_amr_ds(geometry="geographic")

    @requires_module("cartopy", "scipy")
    @pytest.mark.parametrize(
        "transform",
        [
            "PlateCarree",
            "LambertConformal",
            "LambertCylindrical",
            "Mercator",
            "Miller",
            "Mollweide",
            "Orthographic",
            "Robinson",
            "Stereographic",
            "TransverseMercator",
            "InterruptedGoodeHomolosine",
            "RotatedPole",
            "OSGB",
            "EuroPP",
            "Geostationary",
            "Gnomonic",
            "NorthPolarStereo",
            "SouthPolarStereo",
            "AlbersEqualArea",
            "AzimuthalEquidistant",
            "Sinusoidal",
            "NearsidePerspective",
            "LambertAzimuthalEqualArea",
        ],
    )
    @pytest.mark.mpl_image_compare
    def test_geo_transform(self, transform):
        field = ("stream", "Density")
        sl = yt.SlicePlot(self.ds, "altitude", field, origin="native")
        sl.set_mpl_projection(transform)
        sl.set_log(field, False)
        sl.render()
        return sl.plots[field].figure


# === yt-4.4.0/yt/visualization/tests/test_image_writer.py ===

import os
import shutil
import tempfile
import unittest

import numpy as np
from nose.tools import assert_raises
from numpy.testing import assert_equal

from yt.testing import fake_random_ds
from yt.visualization.image_writer import (
    apply_colormap,
    multi_image_composite,
    splat_points,
    write_bitmap,
)


class TestImageWriter(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.tmpdir = tempfile.mkdtemp()
        cls.curdir = os.getcwd()
        os.chdir(cls.tmpdir)

    @classmethod
    def tearDownClass(cls):
        os.chdir(cls.curdir)
        shutil.rmtree(cls.tmpdir)

    def test_multi_image_composite(self):
        ds = fake_random_ds(64, nprocs=4, particles=16**3)
        center = [0.5, 0.5, 0.5]
        normal = [1, 1, 1]
        cut = ds.cutting(normal, center)
        frb = cut.to_frb((0.75, "unitary"), 64)
        multi_image_composite(
            "multi_channel1.png", frb["index", "x"], frb["index", "y"]
        )

        # Test multi_image_composite with user specified scaling values
        mi = ds.quan(0.1, "code_length")
        ma = ds.quan(0.9, "code_length")
        multi_image_composite(
            "multi_channel2.png",
            (frb["index", "x"], mi, ma),
            [frb["index",
"y"], mi, None], green_channel=frb["index", "z"], alpha_channel=frb["gas", "density"], ) # Test with numpy integer array x = np.array(np.random.randint(0, 256, size=(10, 10)), dtype="uint8") y = np.array(np.random.randint(0, 256, size=(10, 10)), dtype="uint8") multi_image_composite("multi_channel3.png", x, y) def test_write_bitmap(self): image = np.zeros([16, 16, 4], dtype="uint8") xs = np.random.rand(100) ys = np.random.rand(100) image = splat_points(image, xs, ys) png_str = write_bitmap(image.copy(), None) image_trans = image.swapaxes(0, 1).copy(order="C") png_str_trans = write_bitmap(image_trans, None, transpose=True) assert_equal(png_str, png_str_trans) with assert_raises(RuntimeError) as ex: write_bitmap(np.ones([16, 16]), None) desired = "Expecting image array of shape (N,M,3) or (N,M,4), received (16, 16)" assert_equal(str(ex.exception)[:50], desired[:50]) def test_apply_colormap(self): x = np.array(np.random.randint(0, 256, size=(10, 10)), dtype="uint8") apply_colormap(x, color_bounds=None, cmap_name=None, func=lambda x: x**2) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/test_invalid_origin.py0000644000175100001770000001162614714401662022700 0ustar00runnerdockerimport re import pytest from yt.testing import fake_amr_ds from yt.visualization.plot_window import SlicePlot @pytest.mark.parametrize( ("origin", "msg"), [ ( "ONE", ( "Invalid origin argument. " "Single element specification must be 'window', 'domain', or 'native'. " "Got 'ONE'" ), ), ( "ONE-TWO", ( "Invalid origin argument. " "Using 2 elements:\n" " - the first one must be 'left', 'right', 'lower', 'upper' or 'center'\n" " - the second one must be 'window', 'domain', or 'native'\n" "Got 'ONE-TWO'" ), ), ( "ONE-window", ( "Invalid origin argument. " "Using 2 elements:\n" " - the first one must be 'left', 'right', 'lower', 'upper' or 'center'\n" "Got 'ONE-window'" ), ), ( "left-TWO", ( "Invalid origin argument. " "Using 2 elements:\n" " - the second one must be 'window', 'domain', or 'native'\n" "Got 'left-TWO'" ), ), ( "ONE-TWO-THREE", ( "Invalid origin argument. " "Using 3 elements:\n" " - the first one must be 'lower', 'upper' or 'center' or a distance\n" " - the second one must be 'left', 'right', 'center' or a distance\n" " - the third one must be 'window', 'domain', or 'native'\n" "Got 'ONE-TWO-THREE'" ), ), ( "ONE-TWO-window", ( "Invalid origin argument. " "Using 3 elements:\n" " - the first one must be 'lower', 'upper' or 'center' or a distance\n" " - the second one must be 'left', 'right', 'center' or a distance\n" "Got 'ONE-TWO-window'" ), ), ( "ONE-left-window", ( "Invalid origin argument. " "Using 3 elements:\n" " - the first one must be 'lower', 'upper' or 'center' or a distance\n" "Got 'ONE-left-window'" ), ), ( "ONE-left-THREE", ( "Invalid origin argument. " "Using 3 elements:\n" " - the first one must be 'lower', 'upper' or 'center' or a distance\n" " - the third one must be 'window', 'domain', or 'native'\n" "Got 'ONE-left-THREE'" ), ), ( "lower-left-THREE", ( "Invalid origin argument. " "Using 3 elements:\n" " - the third one must be 'window', 'domain', or 'native'\n" "Got 'lower-left-THREE'" ), ), ( ("ONE", "TWO", "THREE"), ( "Invalid origin argument. 
" "Using 3 elements:\n" " - the first one must be 'lower', 'upper' or 'center' or a distance\n" " - the second one must be 'left', 'right', 'center' or a distance\n" " - the third one must be 'window', 'domain', or 'native'\n" "Got ('ONE', 'TWO', 'THREE')" ), ), ( ("ONE", "TWO", (1, 1, 3)), ( "Invalid origin argument. " "Using 3 elements:\n" " - the first one must be 'lower', 'upper' or 'center' or a distance\n" " - the second one must be 'left', 'right', 'center' or a distance\n" " - the third one must be 'window', 'domain', or 'native'\n" "Got ('ONE', 'TWO', (1, 1, 3))" ), ), ( "ONE-TWO-THREE-FOUR", ( "Invalid origin argument with too many elements; " "expected 1, 2 or 3 elements, got 'ONE-TWO-THREE-FOUR', counting 4 elements. " "Use '-' as a separator for string arguments." ), ), ], ) def test_invalidate_origin_value(origin, msg): ds = fake_amr_ds(fields=[("gas", "density")], units=["g*cm**-3"]) with pytest.raises(ValueError, match=re.escape(msg)): SlicePlot(ds, "z", ("gas", "density"), origin=origin) @pytest.mark.parametrize( "origin", # don't attempt to match exactly a TypeError message because it should be # emitted by unyt, not yt [ ((50, 50, 50), "TWO", "THREE"), ("ONE", (50, 50, 50), "THREE"), ], ) def test_invalidate_origin_type(origin): ds = fake_amr_ds(fields=[("gas", "density")], units=["g*cm**-3"]) with pytest.raises(TypeError): SlicePlot(ds, "z", ("gas", "density"), origin=origin) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/test_line_annotation_unit.py0000644000175100001770000000173314714401662024121 0ustar00runnerdockerimport numpy as np from yt.loaders import load_uniform_grid from yt.visualization.plot_window import ProjectionPlot def test_ds_arr_invariance_under_projection_plot(tmp_path): data_array = np.random.random((10, 10, 10)) bbox = np.array([[-100, 100], [-100, 100], [-100, 100]]) data = {("gas", "density"): (data_array, "g*cm**(-3)")} ds = load_uniform_grid(data, data_array.shape, length_unit="kpc", bbox=bbox) start_source = np.array((0, 0, -0.5)) end_source = np.array((0, 0, 0.5)) start = ds.arr(start_source, "unitary") end = ds.arr(end_source, "unitary") start_i = start.copy() end_i = end.copy() p = ProjectionPlot(ds, 0, "number_density") p.annotate_line(start, end) p.save(tmp_path) # for lack of a unyt.testing.assert_unit_array_equal function np.testing.assert_array_equal(start_i, start) assert start_i.units == start.units np.testing.assert_array_equal(end_i, end) assert end_i.units == end.units ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/test_line_plots.py0000644000175100001770000000655514714401662022060 0ustar00runnerdockerimport pytest from numpy.testing import assert_equal import yt from yt.testing import fake_random_ds from yt.visualization.line_plot import _validate_point def setup_module(): """Test specific setup.""" from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True class TestLinePlotSimple: @classmethod def setup_class(cls): cls.ds = fake_random_ds(4) fields = [field for field in cls.ds.field_list if field[0] == "stream"] field_labels = {f: f[1] for f in fields} plot = yt.LinePlot( cls.ds, fields, (0, 0, 0), (1, 1, 0), 1000, field_labels=field_labels ) plot.annotate_legend(fields[0]) plot.annotate_legend(fields[1]) plot.set_x_unit("cm") plot.set_unit(fields[0], "kg/cm**3") plot.annotate_title(fields[0], "Density Plot") plot.render() cls.plot = plot 
@pytest.mark.mpl_image_compare def test_lineplot_simple_density(self): return self.plot.plots["stream", "density"].figure @pytest.mark.mpl_image_compare def test_lineplot_simple_velocity_x(self): return self.plot.plots["stream", "velocity_x"].figure class TestLinePlotMulti: @classmethod def setup_class(cls): cls.ds = fake_random_ds(4) fields = [field for field in cls.ds.field_list if field[0] == "stream"] field_labels = {f: f[1] for f in fields} lines = [] lines.append( yt.LineBuffer(cls.ds, [0.25, 0, 0], [0.25, 1, 0], 100, label="x = 0.5") ) lines.append( yt.LineBuffer(cls.ds, [0.5, 0, 0], [0.5, 1, 0], 100, label="x = 0.5") ) plot = yt.LinePlot.from_lines(cls.ds, fields, lines, field_labels=field_labels) plot.annotate_legend(fields[0]) plot.annotate_legend(fields[1]) plot.set_x_unit("cm") plot.set_unit(fields[0], "kg/cm**3") plot.annotate_title(fields[0], "Density Plot") plot.render() cls.plot = plot @pytest.mark.mpl_image_compare def test_lineplot_multi_density(self): return self.plot.plots["stream", "density"].figure @pytest.mark.mpl_image_compare def test_lineplot_multi_velocity_x(self): return self.plot.plots["stream", "velocity_x"].figure def test_line_buffer(): ds = fake_random_ds(32) lb = yt.LineBuffer(ds, (0, 0, 0), (1, 1, 1), 512, label="diag") lb["gas", "density"] lb["gas", "velocity_x"] assert_equal(lb["gas", "density"].size, 512) lb["gas", "density"] = 0 assert_equal(lb["gas", "density"], 0) assert_equal(set(lb.keys()), {("gas", "density"), ("gas", "velocity_x")}) del lb["gas", "velocity_x"] assert_equal(set(lb.keys()), {("gas", "density")}) def test_validate_point(): ds = fake_random_ds(3) with pytest.raises(RuntimeError, match="Input point must be array-like"): _validate_point(0, ds, start=True) with pytest.raises(RuntimeError, match="Input point must be a 1D array"): _validate_point(ds.arr([[0], [1]], "code_length"), ds, start=True) with pytest.raises( RuntimeError, match="Input point must have an element for each dimension" ): _validate_point(ds.arr([0, 1], "code_length"), ds, start=True) ds = fake_random_ds([32, 32, 1]) _validate_point(ds.arr([0, 1], "code_length"), ds, start=True) _validate_point(ds.arr([0, 1], "code_length"), ds) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/test_mesh_slices.py0000644000175100001770000001072514714401662022200 0ustar00runnerdockerimport numpy as np import pytest import yt from yt.testing import fake_hexahedral_ds, fake_tetrahedral_ds, small_fake_hexahedral_ds from yt.utilities.lib.geometry_utils import triangle_plane_intersect from yt.utilities.lib.mesh_triangulation import triangulate_indices def setup_module(): """Test specific setup.""" from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True class TestTetrahedral: @classmethod def setup_class(cls): cls.ds = fake_tetrahedral_ds() cls.mesh = cls.ds.index.meshes[0] cls.ad = cls.ds.all_data() cls.slices = [ cls.ds.slice(idir, cls.ds.domain_center[idir]) for idir in range(3) ] cls.sps = [yt.SlicePlot(cls.ds, idir, cls.ds.field_list) for idir in range(3)] for sp in cls.sps: sp.annotate_mesh_lines() sp.set_log("all", False) sp.render() @pytest.mark.parametrize("idir", range(3)) def test_mesh_selection(self, idir): sl_obj = self.slices[idir] for field in self.ds.field_list: assert sl_obj[field].shape[0] == self.mesh.count(sl_obj.selector) assert sl_obj[field].shape[0] < self.ad[field].shape[0] @pytest.mark.parametrize("ax", range(3)) @pytest.mark.mpl_image_compare def 
test_mesh_slice_tetrahedral_all_elem(self, ax):
        return self.sps[ax].plots["all", "elem"].figure

    @pytest.mark.parametrize("ax", range(3))
    @pytest.mark.mpl_image_compare
    def test_mesh_slice_tetrahedral_all_test(self, ax):
        return self.sps[ax].plots["all", "test"].figure

    @pytest.mark.parametrize("ax", range(3))
    @pytest.mark.mpl_image_compare
    def test_mesh_slice_tetrahedral_connect1_elem(self, ax):
        return self.sps[ax].plots["connect1", "elem"].figure

    @pytest.mark.parametrize("ax", range(3))
    @pytest.mark.mpl_image_compare
    def test_mesh_slice_tetrahedral_connect1_test(self, ax):
        return self.sps[ax].plots["connect1", "test"].figure


class TestHexahedral:
    @classmethod
    def setup_class(cls):
        cls.ds = fake_hexahedral_ds()
        cls.mesh = cls.ds.index.meshes[0]
        cls.ad = cls.ds.all_data()
        cls.slices = [
            cls.ds.slice(idir, cls.ds.domain_center[idir]) for idir in range(3)
        ]
        cls.sps = [yt.SlicePlot(cls.ds, idir, cls.ds.field_list) for idir in range(3)]
        for sp in cls.sps:
            sp.annotate_mesh_lines()
            sp.set_log("all", False)
            sp.render()

    @pytest.mark.parametrize("idir", range(3))
    def test_mesh_selection(self, idir):
        sl_obj = self.slices[idir]
        for field in self.ds.field_list:
            assert sl_obj[field].shape[0] == self.mesh.count(sl_obj.selector)
            assert sl_obj[field].shape[0] < self.ad[field].shape[0]

    @pytest.mark.parametrize("ax", range(3))
    @pytest.mark.mpl_image_compare
    def test_mesh_slice_hexahedral_all_elem(self, ax):
        return self.sps[ax].plots["all", "elem"].figure

    @pytest.mark.parametrize("ax", range(3))
    @pytest.mark.mpl_image_compare
    def test_mesh_slice_hexahedral_all_test(self, ax):
        return self.sps[ax].plots["all", "test"].figure

    @pytest.mark.parametrize("ax", range(3))
    @pytest.mark.mpl_image_compare
    def test_mesh_slice_hexahedral_connect1_elem(self, ax):
        return self.sps[ax].plots["connect1", "elem"].figure

    @pytest.mark.parametrize("ax", range(3))
    @pytest.mark.mpl_image_compare
    def test_mesh_slice_hexahedral_connect1_test(self, ax):
        return self.sps[ax].plots["connect1", "test"].figure


def test_perfect_element_intersection():
    # This tests mesh-line annotation where a z=0 slice perfectly
    # intersects the top of a hexahedral element with node z-coordinates
    # containing both -0 and +0. Before
    # https://github.com/yt-project/yt/pull/1437 this test falsely
    # yielded three annotation lines, whereas the correct result is four,
    # corresponding to the four edges of the top hex face.
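    # The norm check below discards degenerate (zero-length) intersection
    # segments before counting the annotation lines.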
ds = small_fake_hexahedral_ds() indices = ds.index.meshes[0].connectivity_indices coords = ds.index.meshes[0].connectivity_coords tri_indices = triangulate_indices(indices) tri_coords = coords[tri_indices] lines = triangle_plane_intersect(2, 0, tri_coords) non_zero_lines = 0 for i in range(lines.shape[0]): norm = np.linalg.norm(lines[i][0] - lines[i][1]) if norm > 1e-8: non_zero_lines += 1 np.testing.assert_equal(non_zero_lines, 4) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/test_normal_plot_api.py0000644000175100001770000000332614714401662023060 0ustar00runnerdockerfrom itertools import product import numpy as np import pytest from yt.testing import fake_amr_ds from yt.visualization.plot_window import ProjectionPlot, SlicePlot @pytest.fixture(scope="module") def ds(): return fake_amr_ds(geometry="cartesian") @pytest.mark.parametrize("plot_cls", (SlicePlot, ProjectionPlot)) def test_normalplot_all_positional_args(ds, plot_cls): plot_cls(ds, "z", ("stream", "Density")) @pytest.mark.parametrize("plot_cls", (SlicePlot, ProjectionPlot)) def test_normalplot_normal_kwarg(ds, plot_cls): plot_cls(ds, normal="z", fields=("stream", "Density")) @pytest.mark.parametrize("plot_cls", (SlicePlot, ProjectionPlot)) def test_error_with_missing_fields_and_normal(ds, plot_cls): with pytest.raises( TypeError, match="missing 2 required positional arguments: 'normal' and 'fields'", ): plot_cls(ds) @pytest.mark.parametrize("plot_cls", (SlicePlot, ProjectionPlot)) def test_error_with_missing_fields_with_normal_kwarg(ds, plot_cls): with pytest.raises( TypeError, match=r"missing (1 )?required positional argument: 'fields'$" ): plot_cls(ds, normal="z") @pytest.mark.parametrize("plot_cls", (SlicePlot, ProjectionPlot)) def test_error_with_missing_fields_with_positional(ds, plot_cls): with pytest.raises( TypeError, match=r"missing (1 )?required positional argument: 'fields'$" ): plot_cls(ds, "z") @pytest.mark.parametrize( "plot_cls, normal", product([SlicePlot, ProjectionPlot], [(0, 0, 1), [0, 0, 1], np.array((0, 0, 1))]), ) def test_normalplot_normal_array(ds, plot_cls, normal): # see regression https://github.com/yt-project/yt/issues/3736 plot_cls(ds, normal, fields=("stream", "Density")) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/test_offaxisprojection.py0000644000175100001770000002246714714401662023444 0ustar00runnerdockerimport itertools as it import os import shutil import tempfile import unittest import numpy as np from numpy.testing import assert_equal from yt.testing import ( assert_fname, assert_rel_equal, fake_octree_ds, fake_random_ds, ) from yt.visualization.api import ( OffAxisProjectionPlot, OffAxisSlicePlot, ) from yt.visualization.image_writer import write_projection from yt.visualization.volume_rendering.api import off_axis_projection # TODO: replace this with pytest.mark.parametrize def expand_keywords(keywords, full=False): """ expand_keywords is a means for testing all possible keyword arguments in the nosetests. Simply pass it a dictionary of all the keyword arguments and all of the values for these arguments that you want to test. It will return a list of kwargs dicts containing combinations of the various kwarg values you passed it. These can then be passed to the appropriate function in nosetests. 
If full=True, then every possible combination of keywords is produced, otherwise, every keyword option is included at least once in the output list. Be careful, by using full=True, you may be in for an exponentially larger number of tests! Parameters ---------- keywords : dict a dictionary where the keys are the keywords for the function, and the values of each key are the possible values that this key can take in the function full : bool if set to True, every possible combination of given keywords is returned Returns ------- array of dicts An array of dictionaries to be individually passed to the appropriate function matching these kwargs. Examples -------- >>> keywords = {} >>> keywords["dpi"] = (50, 100, 200) >>> keywords["cmap"] = ("cmyt.arbre", "cmyt.kelp") >>> list_of_kwargs = expand_keywords(keywords) >>> print(list_of_kwargs) array([{'cmap': 'cmyt.arbre', 'dpi': 50}, {'cmap': 'cmyt.kelp', 'dpi': 100}, {'cmap': 'cmyt.arbre', 'dpi': 200}], dtype=object) >>> list_of_kwargs = expand_keywords(keywords, full=True) >>> print(list_of_kwargs) array([{'cmap': 'cmyt.arbre', 'dpi': 50}, {'cmap': 'cmyt.arbre', 'dpi': 100}, {'cmap': 'cmyt.arbre', 'dpi': 200}, {'cmap': 'cmyt.kelp', 'dpi': 50}, {'cmap': 'cmyt.kelp', 'dpi': 100}, {'cmap': 'cmyt.kelp', 'dpi': 200}], dtype=object) >>> for kwargs in list_of_kwargs: ... write_projection(*args, **kwargs) """ # if we want every possible combination of keywords, use iter magic if full: keys = sorted(keywords) list_of_kwarg_dicts = np.array( [ dict(zip(keys, prod, strict=True)) for prod in it.product(*(keywords[key] for key in keys)) ] ) # if we just want to probe each keyword, but not necessarily every # combination else: # Determine the maximum number of values any of the keywords has num_lists = 0 for val in keywords.values(): if isinstance(val, str): num_lists = max(1.0, num_lists) else: num_lists = max(len(val), num_lists) # Construct array of kwargs dicts, each element of the list is a different # **kwargs dict. 
each kwargs dict gives a different combination of # the possible values of the kwargs # initialize array list_of_kwarg_dicts = np.array([{} for x in range(num_lists)]) # fill in array for i in np.arange(num_lists): list_of_kwarg_dicts[i] = {} for key in keywords.keys(): # if it's a string, use it (there's only one) if isinstance(keywords[key], str): list_of_kwarg_dicts[i][key] = keywords[key] # if there are more options, use the i'th val elif i < len(keywords[key]): list_of_kwarg_dicts[i][key] = keywords[key][i] # if there are not more options, use the 0'th val else: list_of_kwarg_dicts[i][key] = keywords[key][0] return list_of_kwarg_dicts class TestOffAxisProjection(unittest.TestCase): @classmethod def setUpClass(cls): cls.tmpdir = tempfile.mkdtemp() cls.curdir = os.getcwd() os.chdir(cls.tmpdir) @classmethod def tearDownClass(cls): os.chdir(cls.curdir) shutil.rmtree(cls.tmpdir) def test_oap(self): """Tests functionality of off_axis_projection and write_projection.""" # args for off_axis_projection test_ds = fake_random_ds(64) c = test_ds.domain_center norm = [0.5, 0.5, 0.5] W = test_ds.arr([0.5, 0.5, 1.0], "unitary") N = 256 field = ("gas", "density") oap_args = [test_ds, c, norm, W, N, field] # kwargs for off_axis_projection oap_kwargs = {} oap_kwargs["weight"] = (None, "cell_mass") oap_kwargs["no_ghost"] = (True, False) oap_kwargs["interpolated"] = (False,) oap_kwargs["north_vector"] = ((1, 0, 0), (0, 0.5, 1.0)) oap_kwargs_list = expand_keywords(oap_kwargs) # args or write_projection fn = "test_%d.png" # kwargs for write_projection wp_kwargs = {} wp_kwargs["colorbar"] = (True, False) wp_kwargs["colorbar_label"] = "test" wp_kwargs["title"] = "test" wp_kwargs["vmin"] = (None,) wp_kwargs["vmax"] = (1e3, 1e5) wp_kwargs["take_log"] = (True, False) wp_kwargs["figsize"] = ((8, 6), [1, 1]) wp_kwargs["dpi"] = (100, 50) wp_kwargs["cmap_name"] = ("cmyt.arbre", "cmyt.kelp") wp_kwargs_list = expand_keywords(wp_kwargs) # test all off_axis_projection kwargs and write_projection kwargs # make sure they are able to be projected, then remove and try next # iteration for i, oap_kwargs in enumerate(oap_kwargs_list): image = off_axis_projection(*oap_args, **oap_kwargs) for wp_kwargs in wp_kwargs_list: write_projection(image, fn % i, **wp_kwargs) assert_fname(fn % i) # Test remaining parameters of write_projection write_projection(image, "test_2", xlabel="x-axis", ylabel="y-axis") assert_fname("test_2.png") write_projection(image, "test_3.pdf", xlabel="x-axis", ylabel="y-axis") assert_fname("test_3.pdf") write_projection(image, "test_4.eps", xlabel="x-axis", ylabel="y-axis") assert_fname("test_4.eps") def test_field_cut_off_axis_octree(): ds = fake_octree_ds() cut = ds.all_data().cut_region('obj["gas", "density"]>0.5') p1 = OffAxisProjectionPlot(ds, [1, 0, 0], ("gas", "density")) p2 = OffAxisProjectionPlot(ds, [1, 0, 0], ("gas", "density"), data_source=cut) assert_equal(p2.frb["gas", "density"].min() == 0.0, True) # Lots of zeros assert_equal((p1.frb["gas", "density"] == p2.frb["gas", "density"]).all(), False) p3 = OffAxisSlicePlot(ds, [1, 0, 0], ("gas", "density")) p4 = OffAxisSlicePlot(ds, [1, 0, 0], ("gas", "density"), data_source=cut) assert_equal((p3.frb["gas", "density"] == p4.frb["gas", "density"]).all(), False) p4rho = p4.frb["gas", "density"] assert_equal(np.nanmin(p4rho[p4rho > 0.0]) >= 0.5, True) def test_offaxis_moment(): ds = fake_random_ds(64) def _vlos_sq(field, data): return data["gas", "velocity_los"] ** 2 ds.add_field( ("gas", "velocity_los_squared"), _vlos_sq, sampling_type="local", 
units="cm**2/s**2", ) p1 = OffAxisProjectionPlot( ds, [1, 1, 1], [("gas", "velocity_los"), ("gas", "velocity_los_squared")], weight_field=("gas", "density"), moment=1, buff_size=(400, 400), ) p2 = OffAxisProjectionPlot( ds, [1, 1, 1], ("gas", "velocity_los"), weight_field=("gas", "density"), moment=2, buff_size=(400, 400), ) ## this failed because some - **2 values come out ## marginally < 0, resulting in unmatched NaN values in the ## first assert_rel_equal argument. The compute_stddev_image ## function used in OffAxisProjectionPlot checks for and deals ## with these cases. # assert_rel_equal( # np.sqrt( # p1.frb["gas", "velocity_los_squared"] - p1.frb["gas", "velocity_los"] ** 2 # ), # p2.frb["gas", "velocity_los"], # 10, # ) p1_expsq = p1.frb["gas", "velocity_los_squared"] p1_sqexp = p1.frb["gas", "velocity_los"] ** 2 # set values to zero that have **2 - **2 < 0, but # the absolute values are much smaller than the smallest # postive values of **2 and **2 # (i.e., the difference is pretty much zero) mindiff = 1e-10 * min( np.min(p1_expsq[p1_expsq > 0]), np.min(p1_sqexp[p1_sqexp > 0]) ) # print(mindiff) safeorbad = np.logical_not( np.logical_and(p1_expsq - p1_sqexp < 0, p1_expsq - p1_sqexp > -1.0 * mindiff) ) # avoid errors from sqrt(negative) # sqrt in zeros_like insures correct units p1res = np.zeros_like(np.sqrt(p1_expsq)) p1res[safeorbad] = np.sqrt(p1_expsq[safeorbad] - p1_sqexp[safeorbad]) p2res = p2.frb["gas", "velocity_los"] assert_rel_equal(p1res, p2res, 10) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/test_offaxisprojection_pytestonly.py0000644000175100001770000001243014714401662025743 0ustar00runnerdockerimport numpy as np import pytest import unyt from yt.testing import ( assert_rel_equal, cubicspline_python, fake_sph_flexible_grid_ds, integrate_kernel, ) from yt.visualization.api import ProjectionPlot @pytest.mark.parametrize("weighted", [True, False]) @pytest.mark.parametrize("periodic", [True, False]) @pytest.mark.parametrize("depth", [None, (1.0, "cm"), (0.5, "cm")]) @pytest.mark.parametrize("shiftcenter", [False, True]) @pytest.mark.parametrize("northvector", [None, (1.0e-4, 1.0, 0.0)]) def test_sph_proj_general_offaxis( northvector: tuple[float, float, float] | None, shiftcenter: bool, depth: tuple[float, str] | None, periodic: bool, weighted: bool, ) -> None: """ Same as the on-axis projections, but we rotate the basis vectors to test whether roations are handled ok. the rotation is chosen to be small so that in/exclusion of particles within bboxes, etc. works out the same way. We just send lines of sight through pixel centers for convenience. Particles at [0.5, 1.5, 2.5] (in each coordinate) smoothing lengths 0.25 all particles have mass 1., density 1.5, except the single center particle, with mass 2., density 3. Parameters: ----------- northvector: tuple y-axis direction in the final plot (direction vector) shiftcenter: bool shift the coordinates to center the projection on. 
(The grid is offset to this same center) depth: float or None depth of the projection slice periodic: bool assume periodic boundary conditions, or not weighted: bool make a weighted projection (density-weighted density), or not Returns: -------- None """ if shiftcenter: center = np.array((0.625, 0.625, 0.625)) # cm else: center = np.array((1.5, 1.5, 1.5)) # cm bbox = unyt.unyt_array(np.array([[0.0, 3.0], [0.0, 3.0], [0.0, 3.0]]), "cm") hsml_factor = 0.5 unitrho = 1.5 # test correct centering, particle selection def makemasses(i, j, k): if i == j == k == 1: return 2.0 else: return 1.0 # result shouldn't depend explicitly on the center if we re-center # the data, unless we get cut-offs in the non-periodic case # *almost* the z-axis # try to make sure dl differences from periodic wrapping are small epsilon = 1e-4 projaxis = np.array([epsilon, 0.00, np.sqrt(1.0 - epsilon**2)]) e1dir = projaxis / np.sqrt(np.sum(projaxis**2)) # TODO: figure out other (default) axes for basis vectors here if northvector is None: e2dir = np.array([0.0, 1.0, 0.0]) else: e2dir = np.asarray(northvector) e2dir = e2dir - np.sum(e1dir * e2dir) * e2dir # orthonormalize e2dir /= np.sqrt(np.sum(e2dir**2)) e3dir = np.cross(e1dir, e2dir) ds = fake_sph_flexible_grid_ds( hsml_factor=hsml_factor, nperside=3, periodic=periodic, offsets=np.full(3, 0.5), massgenerator=makemasses, unitrho=unitrho, bbox=bbox.v, recenter=center, e1hat=e1dir, e2hat=e2dir, e3hat=e3dir, ) source = ds.all_data() # couple to dataset -> right unit registry center = ds.arr(center, "cm") # print('position:\n', source['gas','position']) # m / rho, factor 1. / hsml**2 is included in the kernel integral # (density is adjusted, so same for center particle) prefactor = 1.0 / unitrho # / (0.5 * 0.5)**2 dl_cen = integrate_kernel(cubicspline_python, 0.0, 0.25) if weighted: toweight_field = ("gas", "density") else: toweight_field = None # we don't actually want a plot, it's just a straightforward, # common way to get an frb / image array prj = ProjectionPlot( ds, projaxis, ("gas", "density"), width=(2.5, "cm"), weight_field=toweight_field, buff_size=(5, 5), center=center, data_source=source, north_vector=northvector, depth=depth, ) img = prj.frb.data[("gas", "density")] if weighted: # periodic shifts will modify the (relative) dl values a bit expected_out = np.zeros( ( 5, 5, ), dtype=img.v.dtype, ) expected_out[::2, ::2] = unitrho if depth is None: expected_out[2, 2] *= 1.5 else: # only 2 * unitrho element included expected_out[2, 2] *= 2.0 else: expected_out = np.zeros( ( 5, 5, ), dtype=img.v.dtype, ) expected_out[::2, ::2] = dl_cen * prefactor * unitrho if depth is None: # 3 particles per l.o.s., including the denser one expected_out *= 3.0 expected_out[2, 2] *= 4.0 / 3.0 else: # 1 particle per l.o.s., including the denser one expected_out[2, 2] *= 2.0 # grid is shifted to the left -> 'missing' stuff at the left if (not periodic) and shiftcenter: expected_out[:1, :] = 0.0 expected_out[:, :1] = 0.0 # print(axis, shiftcenter, depth, periodic, weighted) # print("expected:\n", expected_out) # print("recovered:\n", img.v) assert_rel_equal(expected_out, img.v, 4) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/test_particle_plot.py0000644000175100001770000003531014714401662022540 0ustar00runnerdockerimport os import shutil import tempfile import unittest from unittest import mock import numpy as np from numpy.testing import assert_allclose, assert_array_almost_equal from 
yt.data_objects.particle_filters import add_particle_filter from yt.data_objects.profiles import create_profile from yt.loaders import load from yt.testing import fake_particle_ds, requires_file from yt.units.yt_array import YTArray from yt.utilities.answer_testing.framework import ( PhasePlotAttributeTest, PlotWindowAttributeTest, data_dir_load, requires_ds, ) from yt.visualization.api import ParticlePhasePlot, ParticlePlot, ParticleProjectionPlot from yt.visualization.tests.test_plotwindow import ATTR_ARGS, WIDTH_SPECS def setup_module(): """Test specific setup.""" from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True # override some of the plotwindow ATTR_ARGS PROJ_ATTR_ARGS = ATTR_ARGS.copy() PROJ_ATTR_ARGS["set_cmap"] = [ ((("all", "particle_mass"), "RdBu"), {}), ((("all", "particle_mass"), "cmyt.pastel"), {}), ] PROJ_ATTR_ARGS["set_log"] = [((("all", "particle_mass"), False), {})] PROJ_ATTR_ARGS["set_zlim"] = [ ((("all", "particle_mass"), 1e39, 1e42), {}), ((("all", "particle_mass"),), {"zmin": 1e39, "dynamic_range": 4}), ] PHASE_ATTR_ARGS = { "annotate_text": [ (((5e-29, 5e7), "Hello YT"), {}), (((5e-29, 5e7), "Hello YT"), {"color": "b"}), ], "set_title": [((("all", "particle_mass"), "A phase plot."), {})], "set_log": [((("all", "particle_mass"), False), {})], "set_unit": [((("all", "particle_mass"), "Msun"), {})], "set_xlim": [((-4e7, 4e7), {})], "set_ylim": [((-4e7, 4e7), {})], } TEST_FLNMS = [None, "test", "test.png", "test.eps", "test.ps", "test.pdf"] CENTER_SPECS = ( "c", "C", "center", "Center", [0.5, 0.5, 0.5], [[0.2, 0.3, 0.4], "cm"], YTArray([0.3, 0.4, 0.7], "cm"), ) WEIGHT_FIELDS = (None, ("all", "particle_ones"), ("all", "particle_mass")) PHASE_FIELDS = [ ( ("all", "particle_velocity_x"), ("all", "particle_position_z"), ("all", "particle_mass"), ), ( ("all", "particle_position_x"), ("all", "particle_position_y"), ("all", "particle_ones"), ), ( ("all", "particle_velocity_x"), ("all", "particle_velocity_y"), [("all", "particle_mass"), ("all", "particle_ones")], ), ] g30 = "IsolatedGalaxy/galaxy0030/galaxy0030" @requires_ds(g30, big_data=True) def test_particle_projection_answers(): """ This iterates over the all the plot modification functions in PROJ_ATTR_ARGS. Each time, it compares the images produced by ParticleProjectionPlot to the gold standard. """ plot_field = ("all", "particle_mass") decimals = 12 ds = data_dir_load(g30) for ax in "xyz": for attr_name in PROJ_ATTR_ARGS.keys(): for args in PROJ_ATTR_ARGS[attr_name]: test = PlotWindowAttributeTest( ds, plot_field, ax, attr_name, args, decimals, "ParticleProjectionPlot", ) test_particle_projection_answers.__name__ = test.description yield test @requires_ds(g30, big_data=True) def test_particle_offaxis_projection_answers(): plot_field = ("all", "particle_mass") decimals = 12 ds = data_dir_load(g30) attr_name = "set_cmap" attr_args = ((("all", "particle_mass"), "RdBu"), {}) L = [1, 1, 1] test = PlotWindowAttributeTest( ds, plot_field, L, attr_name, attr_args, decimals, "ParticleProjectionPlot", ) test_particle_offaxis_projection_answers.__name__ = test.description yield test @requires_ds(g30, big_data=True) def test_particle_projection_filter(): """ This tests particle projection plots for filter fields. 
""" def formed_star(pfilter, data): filter = data["all", "creation_time"] > 0 return filter add_particle_filter( "formed_star", function=formed_star, filtered_type="all", requires=["creation_time"], ) plot_field = ("formed_star", "particle_mass") decimals = 12 ds = data_dir_load(g30) ds.add_particle_filter("formed_star") for ax in "xyz": attr_name = "set_log" test = PlotWindowAttributeTest( ds, plot_field, ax, attr_name, ((plot_field, False), {}), decimals, "ParticleProjectionPlot", ) test_particle_projection_filter.__name__ = test.description yield test @requires_ds(g30, big_data=True) def test_particle_phase_answers(): """ This iterates over the all the plot modification functions in PHASE_ATTR_ARGS. Each time, it compares the images produced by ParticlePhasePlot to the gold standard. """ decimals = 12 ds = data_dir_load(g30) x_field = ("all", "particle_velocity_x") y_field = ("all", "particle_velocity_y") z_field = ("all", "particle_mass") for attr_name in PHASE_ATTR_ARGS.keys(): for args in PHASE_ATTR_ARGS[attr_name]: test = PhasePlotAttributeTest( ds, x_field, y_field, z_field, attr_name, args, decimals, "ParticlePhasePlot", ) test_particle_phase_answers.__name__ = test.description yield test class TestParticlePhasePlotSave(unittest.TestCase): def setUp(self): self.tmpdir = tempfile.mkdtemp() self.curdir = os.getcwd() os.chdir(self.tmpdir) def tearDown(self): os.chdir(self.curdir) shutil.rmtree(self.tmpdir) def test_particle_phase_plot(self): test_ds = fake_particle_ds() data_sources = [ test_ds.region([0.5] * 3, [0.4] * 3, [0.6] * 3), test_ds.all_data(), ] particle_phases = [] for source in data_sources: for x_field, y_field, z_fields in PHASE_FIELDS: particle_phases.append( ParticlePhasePlot( source, x_field, y_field, z_fields, x_bins=16, y_bins=16, ) ) particle_phases.append( ParticlePhasePlot( source, x_field, y_field, z_fields, x_bins=16, y_bins=16, deposition="cic", ) ) pp = create_profile( source, [x_field, y_field], z_fields, weight_field=("all", "particle_ones"), n_bins=[16, 16], ) particle_phases.append(ParticlePhasePlot.from_profile(pp)) particle_phases[0]._repr_html_() with ( mock.patch("matplotlib.backends.backend_agg.FigureCanvasAgg.print_figure"), mock.patch("matplotlib.backends.backend_pdf.FigureCanvasPdf.print_figure"), mock.patch("matplotlib.backends.backend_ps.FigureCanvasPS.print_figure"), ): for p in particle_phases: for fname in TEST_FLNMS: p.save(fname) tgal = "TipsyGalaxy/galaxy.00300" @requires_file(tgal) def test_particle_phase_plot_semantics(): ds = load(tgal) ad = ds.all_data() dens_ex = ad.quantities.extrema(("Gas", "density")) temp_ex = ad.quantities.extrema(("Gas", "temperature")) plot = ParticlePlot( ds, ("Gas", "density"), ("Gas", "temperature"), ("Gas", "particle_mass") ) plot.set_log(("Gas", "density"), True) plot.set_log(("Gas", "temperature"), True) p = plot.profile # bin extrema are field extrema assert dens_ex[0] - np.spacing(dens_ex[0]) == p.x_bins[0] assert dens_ex[-1] + np.spacing(dens_ex[-1]) == p.x_bins[-1] assert temp_ex[0] - np.spacing(temp_ex[0]) == p.y_bins[0] assert temp_ex[-1] + np.spacing(temp_ex[-1]) == p.y_bins[-1] # bins are evenly spaced in log space logxbins = np.log10(p.x_bins) dxlogxbins = logxbins[1:] - logxbins[:-1] assert_allclose(dxlogxbins, dxlogxbins[0]) logybins = np.log10(p.y_bins) dylogybins = logybins[1:] - logybins[:-1] assert_allclose(dylogybins, dylogybins[0]) plot.set_log(("Gas", "density"), False) plot.set_log(("Gas", "temperature"), False) p = plot.profile # bin extrema are field extrema assert dens_ex[0] - 
np.spacing(dens_ex[0]) == p.x_bins[0] assert dens_ex[-1] + np.spacing(dens_ex[-1]) == p.x_bins[-1] assert temp_ex[0] - np.spacing(temp_ex[0]) == p.y_bins[0] assert temp_ex[-1] + np.spacing(temp_ex[-1]) == p.y_bins[-1] # bins are evenly spaced in log space dxbins = p.x_bins[1:] - p.x_bins[:-1] assert_allclose(dxbins, dxbins[0]) dybins = p.y_bins[1:] - p.y_bins[:-1] assert_allclose(dybins, dybins[0]) @requires_file(tgal) def test_set_units(): ds = load(tgal) sp = ds.sphere("max", (1.0, "Mpc")) pp = ParticlePhasePlot( sp, ("Gas", "density"), ("Gas", "temperature"), ("Gas", "particle_mass") ) # make sure we can set the units using the tuple without erroring out pp.set_unit(("Gas", "particle_mass"), "Msun") @requires_file(tgal) def test_switch_ds(): """ Tests the _switch_ds() method for ParticleProjectionPlots that as of 25th October 2017 requires a specific hack in plot_container.py """ ds = load(tgal) ds2 = load(tgal) plot = ParticlePlot( ds, ("Gas", "particle_position_x"), ("Gas", "particle_position_y"), ("Gas", "density"), ) plot._switch_ds(ds2) return class TestParticleProjectionPlotSave(unittest.TestCase): def setUp(self): self.tmpdir = tempfile.mkdtemp() self.curdir = os.getcwd() os.chdir(self.tmpdir) def tearDown(self): os.chdir(self.curdir) shutil.rmtree(self.tmpdir) def test_particle_plot(self): test_ds = fake_particle_ds() particle_projs = [] for dim in range(3): particle_projs += [ ParticleProjectionPlot(test_ds, dim, ("all", "particle_mass")), ParticleProjectionPlot( test_ds, dim, ("all", "particle_mass"), deposition="cic" ), ParticleProjectionPlot( test_ds, dim, ("all", "particle_mass"), density=True ), ] particle_projs[0]._repr_html_() with ( mock.patch("matplotlib.backends.backend_agg.FigureCanvasAgg.print_figure"), mock.patch("matplotlib.backends.backend_pdf.FigureCanvasPdf.print_figure"), mock.patch("matplotlib.backends.backend_ps.FigureCanvasPS.print_figure"), ): for p in particle_projs: for fname in TEST_FLNMS: p.save(fname)[0] def test_particle_plot_ds(self): test_ds = fake_particle_ds() ds_region = test_ds.region([0.5] * 3, [0.4] * 3, [0.6] * 3) for dim in range(3): pplot_ds = ParticleProjectionPlot( test_ds, dim, ("all", "particle_mass"), data_source=ds_region ) with mock.patch( "matplotlib.backends.backend_agg.FigureCanvasAgg.print_figure" ): pplot_ds.save() def test_particle_plot_c(self): test_ds = fake_particle_ds() for center in CENTER_SPECS: for dim in range(3): pplot_c = ParticleProjectionPlot( test_ds, dim, ("all", "particle_mass"), center=center ) with mock.patch( "matplotlib.backends.backend_agg.FigureCanvasAgg.print_figure" ): pplot_c.save() def test_particle_plot_wf(self): test_ds = fake_particle_ds() for dim in range(3): for weight_field in WEIGHT_FIELDS: pplot_wf = ParticleProjectionPlot( test_ds, dim, ("all", "particle_mass"), weight_field=weight_field ) with mock.patch( "matplotlib.backends.backend_agg.FigureCanvasAgg.print_figure" ): pplot_wf.save() def test_particle_plot_offaxis(self): test_ds = fake_particle_ds() Ls = [[1, 1, 1], [0, 1, -0.5]] Ns = [None, [1, 1, 1]] for L, N in zip(Ls, Ns, strict=True): for weight_field in WEIGHT_FIELDS: pplot_off = ParticleProjectionPlot( test_ds, L, ("all", "particle_mass"), north_vector=N, weight_field=weight_field, ) with mock.patch( "matplotlib.backends.backend_agg.FigureCanvasAgg.print_figure" ): pplot_off.save() def test_creation_with_width(self): test_ds = fake_particle_ds() for width, (xlim, ylim, pwidth, _aun) in WIDTH_SPECS.items(): plot = ParticleProjectionPlot( test_ds, 0, ("all", "particle_mass"), 
width=width ) xlim = [plot.ds.quan(el[0], el[1]) for el in xlim] ylim = [plot.ds.quan(el[0], el[1]) for el in ylim] pwidth = [plot.ds.quan(el[0], el[1]) for el in pwidth] for px, x in zip(plot.xlim, xlim, strict=True): assert_array_almost_equal(px, x, 14) for py, y in zip(plot.ylim, ylim, strict=True): assert_array_almost_equal(py, y, 14) for pw, w in zip(plot.width, pwidth, strict=True): assert_array_almost_equal(pw, w, 14) def test_particle_plot_instance(): """ Tests the type of plot instance returned by ParticlePlot. If x_field and y_field are any combination of valid particle_position in x, y or z axis,then ParticleProjectionPlot instance is expected. """ ds = fake_particle_ds() x_field = ("all", "particle_position_x") y_field = ("all", "particle_position_y") z_field = ("all", "particle_velocity_x") plot = ParticlePlot(ds, x_field, y_field) assert isinstance(plot, ParticleProjectionPlot) plot = ParticlePlot(ds, y_field, x_field) assert isinstance(plot, ParticleProjectionPlot) plot = ParticlePlot(ds, x_field, z_field) assert isinstance(plot, ParticlePhasePlot) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/test_plot_modifications.py0000644000175100001770000000100514714401662023557 0ustar00runnerdockerfrom yt.testing import requires_file from yt.utilities.answer_testing.framework import data_dir_load from yt.visualization.plot_window import SlicePlot @requires_file("amrvac/bw_3d0000.dat") def test_code_units_xy_labels(): ds = data_dir_load("amrvac/bw_3d0000.dat", kwargs={"unit_system": "code"}) p = SlicePlot(ds, "x", ("gas", "density")) ax = p.plots["gas", "density"].axes assert "code length" in ax.get_xlabel().replace("\\", "") assert "code length" in ax.get_ylabel().replace("\\", "") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/test_plotwindow.py0000644000175100001770000006702414714401662022114 0ustar00runnerdockerimport os import shutil import tempfile import unittest from collections import OrderedDict import numpy as np from matplotlib.colors import LogNorm, Normalize, SymLogNorm from numpy.testing import ( assert_array_almost_equal, assert_array_equal, assert_equal, assert_raises, ) from unyt import unyt_array from yt.loaders import load_uniform_grid from yt.testing import ( assert_allclose_units, assert_fname, assert_rel_equal, fake_amr_ds, fake_random_ds, requires_file, requires_module, ) from yt.units import kboltz from yt.units.yt_array import YTArray, YTQuantity from yt.utilities.answer_testing.framework import ( PlotWindowAttributeTest, data_dir_load, requires_ds, ) from yt.utilities.exceptions import YTInvalidFieldType from yt.visualization.plot_window import ( AxisAlignedProjectionPlot, AxisAlignedSlicePlot, NormalPlot, OffAxisProjectionPlot, OffAxisSlicePlot, ProjectionPlot, SlicePlot, plot_2d, ) def setup_module(): """Test specific setup.""" from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True TEST_FLNMS = ["test.png"] M7 = "DD0010/moving7_0010" FPROPS = {"family": "sans-serif", "style": "italic", "weight": "bold", "size": 24} ATTR_ARGS = { "pan": [(((0.1, 0.1),), {})], "pan_rel": [(((0.1, 0.1),), {})], "set_axes_unit": [ (("kpc",), {}), (("Mpc",), {}), ((("kpc", "kpc"),), {}), ((("kpc", "Mpc"),), {}), ], "set_buff_size": [((1600,), {}), (((600, 800),), {})], "set_center": [(((0.4, 0.3),), {})], "set_cmap": [(("density", "RdBu"), {}), (("density", "cmyt.pastel"), {})], "set_font": 
[((OrderedDict(sorted(FPROPS.items(), key=lambda t: t[0])),), {})], "set_log": [(("density", False), {})], "set_figure_size": [((7.0,), {})], "set_zlim": [ (("density", 1e-25, 1e-23), {}), (("density",), {"zmin": 1e-25, "dynamic_range": 4}), ], "zoom": [((10,), {})], } CENTER_SPECS = ( "m", "M", "max", "Max", "min", "Min", "c", "C", "center", "Center", "left", "right", [0.5, 0.5, 0.5], [[0.2, 0.3, 0.4], "cm"], YTArray([0.3, 0.4, 0.7], "cm"), ("max", ("gas", "density")), ("min", ("gas", "density")), ) WIDTH_SPECS = { # Width choices map to xlim, ylim, width, axes_unit_name 4-tuples None: ( ((0, "code_length"), (1, "code_length")), ((0, "code_length"), (1, "code_length")), ((1, "code_length"), (1, "code_length")), None, ), 0.2: ( ((0.4, "code_length"), (0.6, "code_length")), ((0.4, "code_length"), (0.6, "code_length")), ((0.2, "code_length"), (0.2, "code_length")), None, ), (0.4, 0.3): ( ((0.3, "code_length"), (0.7, "code_length")), ((0.35, "code_length"), (0.65, "code_length")), ((0.4, "code_length"), (0.3, "code_length")), None, ), (1.2, "cm"): ( ((-0.1, "code_length"), (1.1, "code_length")), ((-0.1, "code_length"), (1.1, "code_length")), ((1.2, "code_length"), (1.2, "code_length")), ("cm", "cm"), ), ((1.2, "cm"), (2.0, "cm")): ( ((-0.1, "code_length"), (1.1, "code_length")), ((-0.5, "code_length"), (1.5, "code_length")), ((1.2, "code_length"), (2.0, "code_length")), ("cm", "cm"), ), ((1.2, "cm"), (0.02, "m")): ( ((-0.1, "code_length"), (1.1, "code_length")), ((-0.5, "code_length"), (1.5, "code_length")), ((1.2, "code_length"), (2.0, "code_length")), ("cm", "m"), ), } WEIGHT_FIELDS = ( None, ("gas", "density"), ) PROJECTION_METHODS = ("integrate", "sum", "min", "max") BUFF_SIZES = [(800, 800), (1600, 1600), (1254, 1254), (800, 600)] def simple_contour(test_obj, plot): plot.annotate_contour(test_obj.plot_field) def simple_velocity(test_obj, plot): plot.annotate_velocity() def simple_streamlines(test_obj, plot): ax = test_obj.plot_axis xax = test_obj.ds.coordinates.x_axis[ax] yax = test_obj.ds.coordinates.y_axis[ax] xn = test_obj.ds.coordinates.axis_name[xax] yn = test_obj.ds.coordinates.axis_name[yax] plot.annotate_streamlines(("gas", f"velocity_{xn}"), ("gas", f"velocity_{yn}")) CALLBACK_TESTS = ( ("simple_contour", (simple_contour,)), ("simple_velocity", (simple_velocity,)), # ("simple_streamlines", (simple_streamlines,)), # ("simple_all", (simple_contour, simple_velocity, simple_streamlines)), ) @requires_ds(M7) def test_attributes(): """Test plot member functions that aren't callbacks""" plot_field = ("gas", "density") decimals = 12 ds = data_dir_load(M7) for ax in "xyz": for attr_name in ATTR_ARGS.keys(): for args in ATTR_ARGS[attr_name]: test = PlotWindowAttributeTest( ds, plot_field, ax, attr_name, args, decimals ) test_attributes.__name__ = test.description yield test for n, r in CALLBACK_TESTS: yield PlotWindowAttributeTest( ds, plot_field, ax, attr_name, args, decimals, callback_id=n, callback_runners=r, ) class TestHideAxesColorbar(unittest.TestCase): ds = None def setUp(self): if self.ds is None: self.ds = fake_random_ds(64) self.slc = SlicePlot(self.ds, 0, ("gas", "density")) self.tmpdir = tempfile.mkdtemp() self.curdir = os.getcwd() os.chdir(self.tmpdir) def tearDown(self): os.chdir(self.curdir) shutil.rmtree(self.tmpdir) del self.ds del self.slc def test_hide_show_axes(self): self.slc.hide_axes() self.slc.save() self.slc.show_axes() self.slc.save() def test_hide_show_colorbar(self): self.slc.hide_colorbar() self.slc.save() self.slc.show_colorbar() self.slc.save() def 
test_hide_axes_colorbar(self): self.slc.hide_colorbar() self.slc.hide_axes() self.slc.save() class TestSetWidth(unittest.TestCase): ds = None def setUp(self): if self.ds is None: self.ds = fake_random_ds(64) self.slc = SlicePlot(self.ds, 0, ("gas", "density")) def tearDown(self): del self.ds del self.slc def _assert_05cm(self): assert_array_equal( [self.slc.xlim, self.slc.ylim, self.slc.width], [ (YTQuantity(0.25, "cm"), YTQuantity(0.75, "cm")), (YTQuantity(0.25, "cm"), YTQuantity(0.75, "cm")), (YTQuantity(0.5, "cm"), YTQuantity(0.5, "cm")), ], ) def _assert_05_075cm(self): assert_array_equal( [self.slc.xlim, self.slc.ylim, self.slc.width], [ (YTQuantity(0.25, "cm"), YTQuantity(0.75, "cm")), (YTQuantity(0.125, "cm"), YTQuantity(0.875, "cm")), (YTQuantity(0.5, "cm"), YTQuantity(0.75, "cm")), ], ) def test_set_width_one(self): assert_equal( [self.slc.xlim, self.slc.ylim, self.slc.width], [(0.0, 1.0), (0.0, 1.0), (1.0, 1.0)], ) assert self.slc._axes_unit_names is None def test_set_width_nonequal(self): self.slc.set_width((0.5, 0.8)) assert_rel_equal( [self.slc.xlim, self.slc.ylim, self.slc.width], [(0.25, 0.75), (0.1, 0.9), (0.5, 0.8)], 15, ) assert self.slc._axes_unit_names is None def test_twoargs_eq(self): self.slc.set_width(0.5, "cm") self._assert_05cm() assert self.slc._axes_unit_names == ("cm", "cm") def test_tuple_eq(self): self.slc.set_width((0.5, "cm")) self._assert_05cm() assert self.slc._axes_unit_names == ("cm", "cm") def test_tuple_of_tuples_neq(self): self.slc.set_width(((0.5, "cm"), (0.75, "cm"))) self._assert_05_075cm() assert self.slc._axes_unit_names == ("cm", "cm") class TestPlotWindowSave(unittest.TestCase): def setUp(self): self.tmpdir = tempfile.mkdtemp() self.curdir = os.getcwd() os.chdir(self.tmpdir) def tearDown(self): os.chdir(self.curdir) shutil.rmtree(self.tmpdir) def test_slice_plot(self): test_ds = fake_random_ds(16) for dim in range(3): slc = SlicePlot(test_ds, dim, ("gas", "density")) for fname in TEST_FLNMS: assert_fname(slc.save(fname)[0]) def test_repr_html(self): test_ds = fake_random_ds(16) slc = SlicePlot(test_ds, 0, ("gas", "density")) slc._repr_html_() def test_projection_plot(self): test_ds = fake_random_ds(16) for dim in range(3): proj = ProjectionPlot(test_ds, dim, ("gas", "density")) for fname in TEST_FLNMS: assert_fname(proj.save(fname)[0]) def test_projection_plot_ds(self): test_ds = fake_random_ds(16) reg = test_ds.region([0.5] * 3, [0.4] * 3, [0.6] * 3) for dim in range(3): proj = ProjectionPlot(test_ds, dim, ("gas", "density"), data_source=reg) proj.save() def test_projection_plot_c(self): test_ds = fake_random_ds(16) for center in CENTER_SPECS: proj = ProjectionPlot(test_ds, 0, ("gas", "density"), center=center) proj.save() def test_projection_plot_wf(self): test_ds = fake_random_ds(16) for wf in WEIGHT_FIELDS: proj = ProjectionPlot(test_ds, 0, ("gas", "density"), weight_field=wf) proj.save() def test_projection_plot_m(self): test_ds = fake_random_ds(16) for method in PROJECTION_METHODS: proj = ProjectionPlot(test_ds, 0, ("gas", "density"), method=method) proj.save() def test_projection_plot_bs(self): test_ds = fake_random_ds(16) for bf in BUFF_SIZES: proj = ProjectionPlot(test_ds, 0, ("gas", "density"), buff_size=bf) image = proj.frb["gas", "density"] # note that image.shape is inverted relative to the passed in buff_size assert_equal(image.shape[::-1], bf) def test_offaxis_slice_plot(self): test_ds = fake_random_ds(16) slc = OffAxisSlicePlot(test_ds, [1, 1, 1], ("gas", "density")) for fname in TEST_FLNMS: assert_fname(slc.save(fname)[0]) def 
test_offaxis_projection_plot(self): test_ds = fake_random_ds(16) prj = OffAxisProjectionPlot(test_ds, [1, 1, 1], ("gas", "density")) for fname in TEST_FLNMS: assert_fname(prj.save(fname)[0]) def test_creation_with_width(self): test_ds = fake_random_ds(16) for width in WIDTH_SPECS: xlim, ylim, pwidth, aun = WIDTH_SPECS[width] plot = ProjectionPlot(test_ds, 0, ("gas", "density"), width=width) xlim = [plot.ds.quan(el[0], el[1]) for el in xlim] ylim = [plot.ds.quan(el[0], el[1]) for el in ylim] pwidth = [plot.ds.quan(el[0], el[1]) for el in pwidth] for px, x in zip(plot.xlim, xlim, strict=True): assert_array_almost_equal(px, x, 14) for py, y in zip(plot.ylim, ylim, strict=True): assert_array_almost_equal(py, y, 14) for pw, w in zip(plot.width, pwidth, strict=True): assert_array_almost_equal(pw, w, 14) assert aun == plot._axes_unit_names class TestPerFieldConfig(unittest.TestCase): ds = None def setUp(self): from yt.config import ytcfg newConfig = { ("yt", "default_colormap"): "viridis", ("plot", "gas", "log"): False, ("plot", "gas", "density", "units"): "lb/yard**3", ("plot", "gas", "density", "path_length_units"): "mile", ("plot", "gas", "density", "cmap"): "plasma", ("plot", "gas", "temperature", "log"): True, ("plot", "gas", "temperature", "linthresh"): 100, ("plot", "gas", "temperature", "cmap"): "hot", ("plot", "gas", "pressure", "log"): True, ("plot", "index", "radius", "linthresh"): 1e3, } # Backup the old config oldConfig = {} for key in newConfig.keys(): try: val = ytcfg[key] oldConfig[key] = val except KeyError: pass for key, val in newConfig.items(): ytcfg[key] = val self.oldConfig = oldConfig self.newConfig = newConfig fields = [("gas", "density"), ("gas", "temperature"), ("gas", "pressure")] units = ["g/cm**3", "K", "dyn/cm**2"] fields_to_plot = fields + [("index", "radius")] if self.ds is None: self.ds = fake_random_ds(16, fields=fields, units=units) self.proj = ProjectionPlot(self.ds, 0, fields_to_plot) def tearDown(self): from yt.config import ytcfg del self.ds del self.proj for key in self.newConfig.keys(): ytcfg.remove(*key) for key, val in self.oldConfig.items(): ytcfg[key] = val def test_units(self): from unyt import Unit assert_equal(self.proj.frb["gas", "density"].units, Unit("mile*lb/yd**3")) assert_equal(self.proj.frb["gas", "temperature"].units, Unit("cm*K")) assert_equal(self.proj.frb["gas", "pressure"].units, Unit("dyn/cm")) def test_scale(self): assert_equal( self.proj.plots["gas", "density"].norm_handler.norm_type, Normalize ) assert_equal( self.proj.plots["gas", "temperature"].norm_handler.norm_type, SymLogNorm ) assert_allclose_units( self.proj.plots["gas", "temperature"].norm_handler.linthresh, unyt_array(100, "K*cm"), ) assert_equal(self.proj.plots["gas", "pressure"].norm_handler.norm_type, LogNorm) assert_equal( self.proj.plots["index", "radius"].norm_handler.norm_type, SymLogNorm ) def test_cmap(self): assert_equal( self.proj.plots["gas", "density"].colorbar_handler.cmap.name, "plasma" ) assert_equal( self.proj.plots["gas", "temperature"].colorbar_handler.cmap.name, "hot" ) assert_equal( self.proj.plots["gas", "pressure"].colorbar_handler.cmap.name, "viridis" ) def test_on_off_compare(): # fake density field that varies in the x-direction only den = np.arange(32**3) / 32**2 + 1 den = den.reshape(32, 32, 32) den = np.array(den, dtype=np.float64) data = {"density": (den, "g/cm**3")} bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]]) ds = load_uniform_grid(data, den.shape, length_unit="Mpc", bbox=bbox, nprocs=64) sl_on = SlicePlot(ds, "z", [("gas", 
"density")]) L = [0, 0, 1] north_vector = [0, 1, 0] sl_off = OffAxisSlicePlot( ds, L, ("gas", "density"), center=[0, 0, 0], north_vector=north_vector ) assert_array_almost_equal(sl_on.frb["gas", "density"], sl_off.frb["gas", "density"]) sl_on.set_buff_size((800, 400)) sl_on._recreate_frb() sl_off.set_buff_size((800, 400)) sl_off._recreate_frb() assert_array_almost_equal(sl_on.frb["gas", "density"], sl_off.frb["gas", "density"]) def test_plot_particle_field_error(): ds = fake_random_ds(32, particles=100) field_names = [ ("all", "particle_mass"), [("all", "particle_mass"), ("gas", "density")], [("gas", "density"), ("all", "particle_mass")], ] objects_normals = [ (SlicePlot, 2), (SlicePlot, [1, 1, 1]), (ProjectionPlot, 2), (OffAxisProjectionPlot, [1, 1, 1]), ] for object, normal in objects_normals: for field_name_list in field_names: assert_raises(YTInvalidFieldType, object, ds, normal, field_name_list) def test_setup_origin(): origin_inputs = ( "domain", "left-window", "center-domain", "lower-right-window", ("window",), ("right", "domain"), ("lower", "window"), ("lower", "right", "window"), (0.5, 0.5, "domain"), ((50, "cm"), (50, "cm"), "domain"), ) w = (10, "cm") ds = fake_random_ds(32, length_unit=100.0) generated_limits = [] # lower limit -> llim # upper limit -> ulim # xllim xulim yllim yulim correct_limits = [ 45.0, 55.0, 45.0, 55.0, 0.0, 10.0, 0.0, 10.0, -5.0, 5.0, -5.0, 5.0, -10.0, 0, 0, 10.0, 0.0, 10.0, 0.0, 10.0, -55.0, -45.0, -55.0, -45.0, -5.0, 5.0, 0.0, 10.0, -10.0, 0, 0, 10.0, -5.0, 5.0, -5.0, 5.0, -5.0, 5.0, -5.0, 5.0, ] for o in origin_inputs: slc = SlicePlot(ds, 2, ("gas", "density"), width=w, origin=o) ax = slc.plots["gas", "density"].axes xlims = ax.get_xlim() ylims = ax.get_ylim() lims = [xlims[0], xlims[1], ylims[0], ylims[1]] for l in lims: generated_limits.append(l) assert_array_almost_equal(correct_limits, generated_limits) def test_frb_regen(): ds = fake_random_ds(32) slc = SlicePlot(ds, 2, ("gas", "density")) slc.set_buff_size(1200) assert_equal(slc.frb["gas", "density"].shape, (1200, 1200)) slc.set_buff_size((400.0, 200.7)) assert_equal(slc.frb["gas", "density"].shape, (200, 400)) def test_set_background_color(): ds = fake_random_ds(32) plot = SlicePlot(ds, 2, ("gas", "density")) plot.set_background_color(("gas", "density"), "red") plot.render() ax = plot.plots["gas", "density"].axes assert_equal(ax.get_facecolor(), (1.0, 0.0, 0.0, 1.0)) def test_set_unit(): ds = fake_random_ds(32, fields=(("gas", "temperature"),), units=("K",)) slc = SlicePlot(ds, 2, ("gas", "temperature")) orig_array = slc.frb["gas", "temperature"].copy() slc.set_unit(("gas", "temperature"), "degF") assert str(slc.frb["gas", "temperature"].units) == "°F" assert_array_almost_equal( np.array(slc.frb["gas", "temperature"]), np.array(orig_array) * 1.8 - 459.67 ) # test that a plot modifying function that destroys the frb preserves the # new unit slc.set_buff_size(1000) assert str(slc.frb["gas", "temperature"].units) == "°F" slc.set_buff_size(800) slc.set_unit(("gas", "temperature"), "K") assert str(slc.frb["gas", "temperature"].units) == "K" assert_array_almost_equal(slc.frb["gas", "temperature"], orig_array) slc.set_unit(("gas", "temperature"), "keV", equivalency="thermal") assert str(slc.frb["gas", "temperature"].units) == "keV" assert_array_almost_equal( slc.frb["gas", "temperature"], (orig_array * kboltz).to("keV") ) # test that a plot modifying function that destroys the frb preserves the # new unit with an equivalency slc.set_buff_size(1000) assert str(slc.frb["gas", "temperature"].units) == 
"keV" # test that destroying the FRB then changing the unit using an equivalency # doesn't error out, see issue #1316 slc = SlicePlot(ds, 2, ("gas", "temperature")) slc.set_buff_size(1000) slc.set_unit(("gas", "temperature"), "keV", equivalency="thermal") assert str(slc.frb["gas", "temperature"].units) == "keV" WD = "WDMerger_hdf5_chk_1000/WDMerger_hdf5_chk_1000.hdf5" blast_wave = "amrvac/bw_2d0000.dat" @requires_file(WD) @requires_file(blast_wave) def test_plot_2d(): # Cartesian ds = fake_random_ds((32, 32, 1), fields=("temperature",), units=("K",)) slc = SlicePlot( ds, "z", [("gas", "temperature")], width=(0.2, "unitary"), center=[0.4, 0.3, 0.5], ) slc2 = plot_2d( ds, ("gas", "temperature"), width=(0.2, "unitary"), center=[0.4, 0.3] ) slc3 = plot_2d( ds, ("gas", "temperature"), width=(0.2, "unitary"), center=ds.arr([0.4, 0.3], "cm"), ) assert_array_equal(slc.frb["gas", "temperature"], slc2.frb["gas", "temperature"]) assert_array_equal(slc.frb["gas", "temperature"], slc3.frb["gas", "temperature"]) # Cylindrical ds = data_dir_load(WD) slc = SlicePlot(ds, "theta", [("gas", "density")], width=(30000.0, "km")) slc2 = plot_2d(ds, ("gas", "density"), width=(30000.0, "km")) assert_array_equal(slc.frb["gas", "density"], slc2.frb["gas", "density"]) # Spherical ds = data_dir_load(blast_wave) slc = SlicePlot(ds, "phi", [("gas", "density")], width=(1, "unitary")) slc2 = plot_2d(ds, ("gas", "density"), width=(1, "unitary")) assert_array_equal(slc.frb["gas", "density"], slc2.frb["gas", "density"]) def test_symlog_colorbar(): ds = fake_random_ds(16) def _thresh_density(field, data): wh = data["gas", "density"] < 0.5 ret = data["gas", "density"] ret[wh] = 0 return ret def _neg_density(field, data): return -data["gas", "threshold_density"] ds.add_field( ("gas", "threshold_density"), function=_thresh_density, units="g/cm**3", sampling_type="cell", ) ds.add_field( ("gas", "negative_density"), function=_neg_density, units="g/cm**3", sampling_type="cell", ) for field in [ ("gas", "density"), ("gas", "threshold_density"), ("gas", "negative_density"), ]: plot = SlicePlot(ds, 2, field) plot.set_log(field, linthresh=0.1) with tempfile.NamedTemporaryFile(suffix="png") as f: plot.save(f.name) def test_symlog_min_zero(): # see https://github.com/yt-project/yt/issues/3791 shape = (32, 16, 1) a = np.linspace(0, 1, 16) b = np.ones((32, 16)) c = np.reshape(a * b, shape) data = {("gas", "density"): c} ds = load_uniform_grid( data, shape, bbox=np.array([[0.0, 5.0], [0, 1], [-0.1, +0.1]]), ) p = SlicePlot(ds, "z", "density") im_arr = p["gas", "density"].image.get_array() # check that no data value was mapped to a NaN (log(0)) assert np.all(~np.isnan(im_arr)) # 0 should be mapped to itself since we expect a symlog norm assert np.min(im_arr) == 0.0 def test_symlog_extremely_small_vals(): # check that the plot can be constructed without crashing # see https://github.com/yt-project/yt/issues/3858 # and https://github.com/yt-project/yt/issues/3944 shape = (64, 64, 1) arr = np.full(shape, 5.0e-324) arr[0, 0] = -1e12 arr[1, 1] = 200 arr2 = np.full(shape, 5.0e-324) arr2[0, 0] = -1e12 arr3 = arr.copy() arr3[4, 4] = 0.0 d = {"scalar_spans_0": arr, "tiny_vmax": arr2, "scalar_tiny_with_0": arr3} ds = load_uniform_grid(d, shape) for field in d: p = SlicePlot(ds, "z", field) p["stream", field] def test_symlog_linthresh_gt_vmax(): # check that some more edge cases do not crash # linthresh will end up being larger than vmax here. This is OK. 
    shape = (64, 64, 1)
    arr = np.full(shape, -1e30)
    arr[1, 1] = -1e27
    arr[2, 2] = 1e-12
    arr[3, 3] = 1e-10
    arr2 = -1 * arr.copy()  # also check the reverse
    d = {"linthresh_gt_vmax": arr, "linthresh_lt_vmin": arr2}
    ds = load_uniform_grid(d, shape)
    for field in d:
        p = SlicePlot(ds, "z", field)
        p["stream", field]


def test_symlog_symmetric():
    # should run ok when abs(min negative) == abs(pos max)
    shape = (64, 64, 1)
    arr = np.full(shape, -1e30)
    arr[1, 1] = -1e27
    arr[2, 2] = 1e10
    arr[3, 3] = 1e30
    d = {"linthresh_symmetric": arr}
    ds = load_uniform_grid(d, shape)
    p = SlicePlot(ds, "z", "linthresh_symmetric")
    p["stream", "linthresh_symmetric"]


def test_nan_data():
    data = np.random.random((16, 16, 16)) - 0.5
    data[:9, :9, :9] = np.nan
    data = {"density": data}
    ds = load_uniform_grid(data, [16, 16, 16])
    plot = SlicePlot(ds, "z", ("gas", "density"))
    with tempfile.NamedTemporaryFile(suffix="png") as f:
        plot.save(f.name)


def test_sanitize_valid_normal_vector():
    # note: we don't test against non-cartesian geometries
    # because the way normal "vectors" work isn't clearly
    # specified and works more as an implementation detail
    # at the moment
    ds = fake_amr_ds(geometry="cartesian")
    # We allow maximal polymorphism for axis-aligned directions:
    # even if a 3-component vector is received, we want to use the
    # AxisAligned* plotting class (as opposed to OffAxis*) because
    # it's much easier to optimize, so it's expected to be more
    # performant.
    axis_label_from_inputs = {
        "x": ["x", 0, [1, 0, 0], [0.1, 0.0, 0.0], [-10, 0, 0]],
        "y": ["y", 1, [0, 1, 0], [0.0, 0.1, 0.0], [0, -10, 0]],
        "z": ["z", 2, [0, 0, 1], [0.0, 0.0, 0.1], [0, 0, -10]],
    }
    for expected, user_inputs in axis_label_from_inputs.items():
        for ui in user_inputs:
            assert NormalPlot.sanitize_normal_vector(ds, ui) == expected
    # arbitrary 3-float sequences are also valid input.
    # They should be returned as np.ndarrays, but the norm and orientation
    # could be altered. What's important is that their direction is preserved.
    for ui in [(1, 1, 1), [0.0, -3, 1e9], np.ones(3, dtype="int8")]:
        res = NormalPlot.sanitize_normal_vector(ds, ui)
        assert isinstance(res, np.ndarray)
        assert res.dtype == np.float64
        assert_array_equal(
            np.cross(ui, res),
            [0, 0, 0],
        )


def test_reject_invalid_normal_vector():
    ds = fake_amr_ds(geometry="cartesian")
    for ui in [0.0, 1.0, 2.0, 3.0]:
        # acceptable scalar numeric values are restricted to integers.
        # Floats might be a sign that something went wrong upstream
        # e.g., rounding errors, parsing error...
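# (Sketch of the sanitizer contract: the accepted forms from
# test_sanitize_valid_normal_vector above, plus the rejections this test
# checks below, as a standalone snippet.)
import numpy as np
from yt.testing import fake_amr_ds
from yt.visualization.plot_window import NormalPlot

demo_ds = fake_amr_ds(geometry="cartesian")
assert NormalPlot.sanitize_normal_vector(demo_ds, [0, -10, 0]) == "y"
res = NormalPlot.sanitize_normal_vector(demo_ds, (1, 1, 1))
assert isinstance(res, np.ndarray)  # arbitrary directions pass through
# NormalPlot.sanitize_normal_vector(demo_ds, 1.0)        # -> TypeError
# NormalPlot.sanitize_normal_vector(demo_ds, (0, 0, 0))  # -> ValueError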
assert_raises(TypeError, NormalPlot.sanitize_normal_vector, ds, ui) for ui in [ "X", "xy", "not-an-axis", (0, 0, 0), [0, 0, 0], np.zeros(3), [1, 0, 0, 0], [1, 0], [1], [0], 3, 10, ]: assert_raises(ValueError, NormalPlot.sanitize_normal_vector, ds, ui) def test_dispatch_plot_classes(): ds = fake_random_ds(16) p1 = ProjectionPlot(ds, "z", ("gas", "density")) p2 = ProjectionPlot(ds, (1, 2, 3), ("gas", "density")) s1 = SlicePlot(ds, "z", ("gas", "density")) s2 = SlicePlot(ds, (1, 2, 3), ("gas", "density")) assert isinstance(p1, AxisAlignedProjectionPlot) assert isinstance(p2, OffAxisProjectionPlot) assert isinstance(s1, AxisAlignedSlicePlot) assert isinstance(s2, OffAxisSlicePlot) @requires_module("cartopy") def test_invalid_swap_projection(): # projections and transforms will not work ds = fake_amr_ds(geometry="geographic") slc = SlicePlot(ds, "altitude", ds.field_list[0], origin="native") slc.set_mpl_projection("Robinson") slc.swap_axes() # should raise mylog.warning and not toggle _swap_axes assert slc._has_swapped_axes is False def test_set_font(): # simply check that calling the set_font method doesn't raise an error # https://github.com/yt-project/yt/issues/4263 ds = fake_amr_ds() slc = SlicePlot(ds, "x", "Density") slc.set_font( { "family": "sans-serif", "style": "italic", "weight": "bold", "size": 24, "color": "blue", } ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/test_profile_plots.py0000644000175100001770000001571314714401662022565 0ustar00runnerdockerimport os import shutil import tempfile import unittest import pytest import yt from yt.testing import assert_allclose_units, fake_random_ds from yt.visualization.api import PhasePlot class TestPhasePlotAPI: @classmethod def setup_class(cls): cls.ds = fake_random_ds( 16, fields=("density", "temperature"), units=("g/cm**3", "K") ) def get_plot(self): return PhasePlot( self.ds, ("gas", "density"), ("gas", "temperature"), ("gas", "mass") ) @pytest.mark.parametrize("kwargs", [{}, {"color": "b"}]) @pytest.mark.mpl_image_compare def test_phaseplot_annotate_text(self, kwargs): p = self.get_plot() p.annotate_text(1e-4, 1e-2, "Test text annotation", **kwargs) p.render() return p.plots["gas", "mass"].figure @pytest.mark.mpl_image_compare def test_phaseplot_set_title(self): p = self.get_plot() p.set_title(("gas", "mass"), "Test Title") p.render() return p.plots["gas", "mass"].figure @pytest.mark.mpl_image_compare def test_phaseplot_set_log(self): p = self.get_plot() p.set_log(("gas", "mass"), False) p.render() return p.plots["gas", "mass"].figure @pytest.mark.mpl_image_compare def test_phaseplot_set_unit(self): p = self.get_plot() p.set_unit(("gas", "mass"), "Msun") p.render() return p.plots["gas", "mass"].figure @pytest.mark.mpl_image_compare def test_phaseplot_set_xlim(self): p = self.get_plot() p.set_xlim(1e-3, 1e0) p.render() return p.plots["gas", "mass"].figure @pytest.mark.mpl_image_compare def test_phaseplot_set_ylim(self): p = self.get_plot() p.set_ylim(1e-2, 1e0) p.render() return p.plots["gas", "mass"].figure def test_set_units(): fields = ("density", "temperature") units = ( "g/cm**3", "K", ) ds = fake_random_ds(16, fields=fields, units=units) sp = ds.sphere("max", (1.0, "Mpc")) p1 = yt.ProfilePlot(sp, ("index", "radius"), ("gas", "density")) p2 = yt.PhasePlot(sp, ("gas", "density"), ("gas", "temperature"), ("gas", "mass")) # make sure we can set the units using the tuple without erroring out p1.set_unit(("gas", "density"), "Msun/kpc**3") 
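# (Aside -- this test's companion PhasePlot call continues below. The same
# set_unit workflow in a minimal standalone form, built on the fake-dataset
# helpers used throughout this module:)
import yt
from yt.testing import fake_random_ds

demo_ds = fake_random_ds(16, fields=("density",), units=("g/cm**3",))
demo_plot = yt.ProfilePlot(
    demo_ds.all_data(), ("index", "radius"), ("gas", "density")
)
demo_plot.set_unit(("gas", "density"), "Msun/kpc**3")  # relabels the y axis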
p2.set_unit(("gas", "temperature"), "R") def test_set_labels(): ds = fake_random_ds(16) ad = ds.all_data() plot = yt.ProfilePlot( ad, ("index", "radius"), [("gas", "velocity_x"), ("gas", "density")], weight_field=None, ) # make sure we can set the labels without erroring out plot.set_ylabel("all", "test ylabel") plot.set_xlabel("test xlabel") def test_create_from_dataset(): ds = fake_random_ds(16) plot1 = yt.ProfilePlot( ds, ("index", "radius"), [("gas", "velocity_x"), ("gas", "density")], weight_field=None, ) plot2 = yt.ProfilePlot( ds.all_data(), ("index", "radius"), [("gas", "velocity_x"), ("gas", "density")], weight_field=None, ) assert_allclose_units( plot1.profiles[0]["gas", "density"], plot2.profiles[0]["gas", "density"] ) assert_allclose_units( plot1.profiles[0]["velocity_x"], plot2.profiles[0]["velocity_x"] ) plot1 = yt.PhasePlot(ds, ("gas", "density"), ("gas", "velocity_x"), ("gas", "mass")) plot2 = yt.PhasePlot( ds.all_data(), ("gas", "density"), ("gas", "velocity_x"), ("gas", "mass") ) assert_allclose_units(plot1.profile["mass"], plot2.profile["mass"]) class TestAnnotations(unittest.TestCase): @classmethod def setUpClass(cls): cls.tmpdir = tempfile.mkdtemp() cls.curdir = os.getcwd() os.chdir(cls.tmpdir) ds = fake_random_ds(16) ad = ds.all_data() cls.fields = [ ("gas", "velocity_x"), ("gas", "velocity_y"), ("gas", "velocity_z"), ] cls.plot = yt.ProfilePlot( ad, ("index", "radius"), cls.fields, weight_field=None ) @classmethod def tearDownClass(cls): os.chdir(cls.curdir) shutil.rmtree(cls.tmpdir) def test_annotations(self): # make sure we can annotate without erroring out # annotate the plot with only velocity_x self.plot.annotate_title("velocity_x plot", self.fields[0]) self.plot.annotate_text(1e-1, 1e1, "Annotated velocity_x") # annotate the plots with velocity_y and velocity_z with # the same annotations self.plot.annotate_title("Velocity Plots (Y or Z)", self.fields[1:]) self.plot.annotate_text(1e-1, 1e1, "Annotated vel_y, vel_z", self.fields[1:]) self.plot.save() def test_annotations_wrong_fields(self): from yt.utilities.exceptions import YTFieldNotFound with self.assertRaises(YTFieldNotFound): self.plot.annotate_title("velocity_x plot", "wrong_field_name") with self.assertRaises(YTFieldNotFound): self.plot.annotate_text(1e-1, 1e1, "Annotated text", "wrong_field_name") def test_phaseplot_set_log(): fields = ("density", "temperature") units = ( "g/cm**3", "K", ) ds = fake_random_ds(16, fields=fields, units=units) sp = ds.sphere("max", (1.0, "Mpc")) p1 = yt.ProfilePlot(sp, ("index", "radius"), ("gas", "density")) p2 = yt.PhasePlot(sp, ("gas", "density"), ("gas", "temperature"), ("gas", "mass")) # make sure we can set the log-scaling using the tuple without erroring out p1.set_log(("gas", "density"), False) p2.set_log(("gas", "temperature"), False) assert not p1.y_log["gas", "density"] assert not p2.y_log # make sure we can set the log-scaling using a string without erroring out p1.set_log(("gas", "density"), True) p2.set_log(("gas", "temperature"), True) assert p1.y_log["gas", "density"] assert p2.y_log # make sure we can set the log-scaling using a field object p1.set_log(ds.fields.gas.density, False) p2.set_log(ds.fields.gas.temperature, False) assert not p1.y_log["gas", "density"] assert not p2.y_log def test_phaseplot_showhide_colorbar_axes(): fields = ("density", "temperature") units = ( "g/cm**3", "K", ) ds = fake_random_ds(16, fields=fields, units=units) ad = ds.all_data() plot = yt.PhasePlot(ad, ("gas", "density"), ("gas", "temperature"), ("gas", "mass")) # make sure 
we can hide colorbar
    plot.hide_colorbar()
    with tempfile.NamedTemporaryFile(suffix="png") as f1:
        plot.save(f1.name)
    # make sure we can show colorbar
    plot.show_colorbar()
    with tempfile.NamedTemporaryFile(suffix="png") as f2:
        plot.save(f2.name)
    # make sure we can hide axes
    plot.hide_axes()
    with tempfile.NamedTemporaryFile(suffix="png") as f3:
        plot.save(f3.name)
    # make sure we can show axes
    plot.show_axes()
    with tempfile.NamedTemporaryFile(suffix="png") as f4:
        plot.save(f4.name)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0
yt-4.4.0/yt/visualization/tests/test_raw_field_slices.py0000644000175100001770000000161414714401662023175 0ustar00runnerdocker
import pytest

import yt
from yt.testing import requires_file
from yt.utilities.answer_testing.framework import data_dir_load_v2


class TestRawFieldSlice:
    @requires_file("Laser/plt00015")
    @classmethod
    def setup_class(cls):
        cls.ds = data_dir_load_v2("Laser/plt00015")
        fields = [
            ("raw", "Bx"),
            ("raw", "By"),
            ("raw", "Bz"),
            ("raw", "Ex"),
            ("raw", "Ey"),
            ("raw", "Ez"),
            ("raw", "jx"),
            ("raw", "jy"),
            ("raw", "jz"),
        ]
        cls.sl = yt.SlicePlot(cls.ds, "z", fields)
        cls.sl.set_log("all", False)
        cls.sl.render()

    @pytest.mark.parametrize(
        "fname", ["Bx", "By", "Bz", "Ex", "Ey", "Ez", "jx", "jy", "jz"]
    )
    @pytest.mark.mpl_image_compare
    def test_raw_field_slice(self, fname):
        # return the figure for the parametrized field
        return self.sl.plots["raw", fname].figure
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0
yt-4.4.0/yt/visualization/tests/test_save.py0000644000175100001770000000670114714401662020637 0ustar00runnerdocker
import os
import re

import pytest
from PIL import Image

import yt
from yt.testing import fake_amr_ds

# avoid testing every supported format as some backends may be buggy on testing platforms
FORMATS_TO_TEST = [".eps", ".pdf", ".svg"]


@pytest.fixture(scope="session")
def simple_sliceplot():
    ds = fake_amr_ds()
    p = yt.SlicePlot(ds, "z", "Density")
    yield p


def test_save_to_path(simple_sliceplot, tmp_path):
    p = simple_sliceplot
    p.save(f"{tmp_path}/")
    assert len(list(tmp_path.glob("*.png"))) == 1


def test_metadata(simple_sliceplot, tmp_path):
    simple_sliceplot.save(tmp_path / "ala.png")
    with Image.open(tmp_path / "ala.png") as img:
        assert "Software" in img.info
        assert "yt-" in img.info["Software"]
    simple_sliceplot.save(tmp_path / "ala.pdf")
    with open(tmp_path / "ala.pdf", "rb") as f:
        assert b"|yt-" in f.read()


def test_save_to_missing_path(simple_sliceplot, tmp_path):
    # the missing layer should be created
    p = simple_sliceplot
    # using forward slashes should work even on windows !
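# (Aside -- test_save_to_missing_path's body continues below. A sketch of
# the directory-target convention relied on here and in test_save_to_path:
# a trailing separator makes save() treat its argument as a directory,
# created if missing, with the file name auto-generated inside it. The
# directory name here is hypothetical.)
import yt
from yt.testing import fake_amr_ds

demo_p = yt.SlicePlot(fake_amr_ds(), "z", "Density")
saved = demo_p.save("demo_plots/")  # returns the list of files written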
save_path = os.path.join(tmp_path / "out") + "/" p.save(save_path) assert os.path.exists(save_path) assert len(list((tmp_path / "out").glob("*.png"))) == 1 def test_save_to_missing_path_with_file_prefix(simple_sliceplot, tmp_path): # see issue # https://github.com/yt-project/yt/issues/3210 p = simple_sliceplot p.save(tmp_path.joinpath("out", "saymyname")) assert (tmp_path / "out").exists() output_files = list((tmp_path / "out").glob("*.png")) assert len(output_files) == 1 assert output_files[0].stem.startswith("saymyname") # you're goddamn right @pytest.mark.parametrize("ext", FORMATS_TO_TEST) def test_suffix_from_filename(ext, simple_sliceplot, tmp_path): p = simple_sliceplot target = (tmp_path / "myfile").with_suffix(ext) # this shouldn't raise a warning, see issue # https://github.com/yt-project/yt/issues/3667 p.save(target) assert target.is_file() @pytest.mark.parametrize("ext", FORMATS_TO_TEST) def test_suffix_clashing(ext, simple_sliceplot, tmp_path): if ext == ".png": pytest.skip() p = simple_sliceplot target = (tmp_path / "myfile").with_suffix(ext) expected_warning = re.compile( rf"Received two valid image formats {ext.removeprefix('.')!r} " r"\(from filename\) and 'png' \(from suffix\)\. The former is ignored\." ) with pytest.warns(UserWarning, match=expected_warning): p.save(target, suffix="png") output_files = list(tmp_path.glob("*.png")) assert len(output_files) == 1 assert output_files[0].stem.startswith("myfile") assert not list((tmp_path / "out").glob(f"*.{ext}")) def test_invalid_format_from_filename(simple_sliceplot, tmp_path): p = simple_sliceplot target = (tmp_path / "myfile").with_suffix(".nope") p.save(target) output_files = list(tmp_path.glob("*")) assert len(output_files) == 1 # the output filename may contain a generated part # it's not exactly clear if it's desirable or intended in this case # so we just check conditions that should hold in any case assert output_files[0].name.startswith("myfile.nope") assert output_files[0].name.endswith(".png") def test_invalid_format_from_suffix(simple_sliceplot, tmp_path): p = simple_sliceplot target = tmp_path / "myfile" with pytest.raises(ValueError, match=r"Unsupported file format 'nope'"): p.save(target, suffix="nope") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/test_set_zlim.py0000644000175100001770000000715414714401662021532 0ustar00runnerdockerimport numpy as np import numpy.testing as npt import pytest from yt.testing import fake_amr_ds from yt.visualization.api import SlicePlot def test_float_vmin_then_set_unit(): field = ("gas", "density") ds = fake_amr_ds(fields=[field], units=["g/cm**3"]) p = SlicePlot(ds, "x", field) p.set_buff_size(16) p.render() cb = p.plots[field].image.colorbar raw_lims = np.array((cb.vmin, cb.vmax)) desired_lims = raw_lims.copy() desired_lims[0] = 1e-2 p.set_zlim(field, zmin=desired_lims[0]) p.render() cb = p.plots[field].image.colorbar new_lims = np.array((cb.vmin, cb.vmax)) npt.assert_almost_equal(new_lims, desired_lims) # 1 g/cm**3 == 1000 kg/m**3 p.set_unit(field, "kg/m**3") p.render() cb = p.plots[field].image.colorbar new_lims = np.array((cb.vmin, cb.vmax)) npt.assert_almost_equal(new_lims, 1000 * desired_lims) def test_set_unit_then_float_vmin(): field = ("gas", "density") ds = fake_amr_ds(fields=[field], units=["g/cm**3"]) p = SlicePlot(ds, "x", field) p.set_buff_size(16) p.set_unit(field, "kg/m**3") p.set_zlim(field, zmin=1) p.render() cb = p.plots[field].image.colorbar assert cb.vmin == 1.0 def 
test_reset_zlim(): field = ("gas", "density") ds = fake_amr_ds(fields=[field], units=["g/cm**3"]) p = SlicePlot(ds, "x", field) p.set_buff_size(16) p.render() cb = p.plots[field].image.colorbar raw_lims = np.array((cb.vmin, cb.vmax)) # set a new zmin value delta = np.diff(raw_lims)[0] p.set_zlim(field, zmin=raw_lims[0] + delta / 2) # passing "min" should restore default limit p.set_zlim(field, zmin="min") p.render() cb = p.plots[field].image.colorbar new_lims = np.array((cb.vmin, cb.vmax)) npt.assert_array_equal(new_lims, raw_lims) def test_set_dynamic_range_with_vmin(): field = ("gas", "density") ds = fake_amr_ds(fields=[field], units=["g/cm**3"]) p = SlicePlot(ds, "x", field) p.set_buff_size(16) zmin = 1e-2 p.set_zlim(field, zmin=zmin, dynamic_range=2) p.render() cb = p.plots[field].image.colorbar new_lims = np.array((cb.vmin, cb.vmax)) npt.assert_almost_equal(new_lims, (zmin, 2 * zmin)) def test_set_dynamic_range_with_vmax(): field = ("gas", "density") ds = fake_amr_ds(fields=[field], units=["g/cm**3"]) p = SlicePlot(ds, "x", field) p.set_buff_size(16) zmax = 1 p.set_zlim(field, zmax=zmax, dynamic_range=2) p.render() cb = p.plots[field].image.colorbar new_lims = np.array((cb.vmin, cb.vmax)) npt.assert_almost_equal(new_lims, (zmax / 2, zmax)) def test_set_dynamic_range_with_min(): field = ("gas", "density") ds = fake_amr_ds(fields=[field], units=["g/cm**3"]) p = SlicePlot(ds, "x", field) p.set_buff_size(16) p.render() cb = p.plots[field].image.colorbar vmin = cb.vmin p.set_zlim(field, zmin="min", dynamic_range=2) p.render() cb = p.plots[field].image.colorbar new_lims = np.array((cb.vmin, cb.vmax)) npt.assert_almost_equal(new_lims, (vmin, 2 * vmin)) def test_set_dynamic_range_with_None(): field = ("gas", "density") ds = fake_amr_ds(fields=[field], units=["g/cm**3"]) p = SlicePlot(ds, "x", field) p.set_buff_size(16) p.render() cb = p.plots[field].image.colorbar vmin = cb.vmin with pytest.deprecated_call(match="Passing `zmin=None` explicitly is deprecated"): p.set_zlim(field, zmin=None, dynamic_range=2) p.render() cb = p.plots[field].image.colorbar new_lims = np.array((cb.vmin, cb.vmax)) npt.assert_almost_equal(new_lims, (vmin, 2 * vmin)) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/tests/test_splat.py0000644000175100001770000000203014714401662021013 0ustar00runnerdockerimport os import os.path import shutil import tempfile import matplotlib as mpl import numpy as np from numpy.testing import assert_equal import yt from yt.utilities.lib.api import add_rgba_points_to_image # type: ignore def setup_module(): """Test specific setup.""" from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True def test_splat(): # Perform I/O in safe place instead of yt main dir tmpdir = tempfile.mkdtemp() curdir = os.getcwd() os.chdir(tmpdir) prng = np.random.RandomState(0x4D3D3D3) N = 16 Np = int(1e2) image = np.zeros([N, N, 4]) xs = prng.random_sample(Np) ys = prng.random_sample(Np) cbx = mpl.colormaps["RdBu"] cs = cbx(prng.random_sample(Np)) add_rgba_points_to_image(image, xs, ys, cs) before_hash = image.copy() fn = "tmp.png" yt.write_bitmap(image, fn) assert_equal(os.path.exists(fn), True) os.remove(fn) assert_equal(before_hash, image) os.chdir(curdir) # clean up shutil.rmtree(tmpdir) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.4191542 yt-4.4.0/yt/visualization/volume_rendering/0000755000175100001770000000000014714401715020465 
5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/UBVRI.py0000644000175100001770000003475714714401662021747 0ustar00runnerdockerimport numpy as np johnson_filters = { "B": { "wavelen": np.array( [ 3600, 3650, 3700, 3750, 3800, 3850, 3900, 3950, 4000, 4050, 4100, 4150, 4200, 4250, 4300, 4350, 4400, 4450, 4500, 4550, 4600, 4650, 4700, 4750, 4800, 4850, 4900, 4950, 5000, 5050, 5100, 5150, 5200, 5250, 5300, 5350, 5400, 5450, 5500, 5550, ], dtype="float64", ), "trans": np.array( [ 0.0, 0.0, 0.02, 0.05, 0.11, 0.18, 0.35, 0.55, 0.92, 0.95, 0.98, 0.99, 1.0, 0.99, 0.98, 0.96, 0.94, 0.91, 0.87, 0.83, 0.79, 0.74, 0.69, 0.63, 0.58, 0.52, 0.46, 0.41, 0.36, 0.3, 0.25, 0.2, 0.15, 0.12, 0.09, 0.06, 0.04, 0.02, 0.01, 0.0, ], dtype="float64", ), }, "I": { "wavelen": np.array( [ 6800, 6850, 6900, 6950, 7000, 7050, 7100, 7150, 7200, 7250, 7300, 7350, 7400, 7450, 7500, 7550, 7600, 7650, 7700, 7750, 7800, 7850, 7900, 7950, 8000, 8050, 8100, 8150, 8200, 8250, 8300, 8350, 8400, 8450, 8500, 8550, 8600, 8650, 8700, 8750, 8800, 8850, 8900, 8950, 9000, 9050, 9100, 9150, 9200, 9250, 9300, 9350, 9400, 9450, 9500, 9550, 9600, 9650, 9700, 9750, 9800, 9850, 9900, 9950, 10000, 10050, 10100, 10150, 10200, 10250, 10300, 10350, 10400, 10450, 10500, 10550, 10600, 10650, 10700, 10750, 10800, 10850, 10900, 10950, 11000, 11050, 11100, 11150, 11200, 11250, 11300, 11350, 11400, 11450, 11500, 11550, 11600, 11650, 11700, 11750, 11800, 11850, ], dtype="float64", ), "trans": np.array( [ 0.0, 0.0, 0.01, 0.01, 0.01, 0.04, 0.08, 0.13, 0.17, 0.21, 0.26, 0.3, 0.36, 0.4, 0.44, 0.49, 0.56, 0.6, 0.65, 0.72, 0.76, 0.84, 0.9, 0.93, 0.96, 0.97, 0.97, 0.98, 0.98, 0.99, 0.99, 0.99, 0.99, 1.0, 1.0, 1.0, 1.0, 1.0, 0.99, 0.98, 0.98, 0.97, 0.96, 0.94, 0.93, 0.9, 0.88, 0.86, 0.84, 0.8, 0.76, 0.74, 0.71, 0.68, 0.65, 0.61, 0.58, 0.56, 0.52, 0.5, 0.47, 0.44, 0.42, 0.39, 0.36, 0.34, 0.32, 0.3, 0.28, 0.26, 0.24, 0.22, 0.2, 0.19, 0.17, 0.16, 0.15, 0.13, 0.12, 0.11, 0.1, 0.09, 0.09, 0.08, 0.08, 0.07, 0.06, 0.05, 0.05, 0.04, 0.04, 0.03, 0.03, 0.02, 0.02, 0.02, 0.02, 0.02, 0.01, 0.01, 0.01, 0.0, ], dtype="float64", ), }, "R": { "wavelen": np.array( [ 5200, 5250, 5300, 5350, 5400, 5450, 5500, 5550, 5600, 5650, 5700, 5750, 5800, 5850, 5900, 5950, 6000, 6050, 6100, 6150, 6200, 6250, 6300, 6350, 6400, 6450, 6500, 6550, 6600, 6650, 6700, 6750, 6800, 6850, 6900, 6950, 7000, 7050, 7100, 7150, 7200, 7250, 7300, 7350, 7400, 7450, 7500, 7550, 7600, 7650, 7700, 7750, 7800, 7850, 7900, 7950, 8000, 8050, 8100, 8150, 8200, 8250, 8300, 8350, 8400, 8450, 8500, 8550, 8600, 8650, 8700, 8750, 8800, 8850, 8900, 8950, 9000, 9050, 9100, 9150, 9200, 9250, 9300, 9350, 9400, 9450, 9500, ], dtype="float64", ), "trans": np.array( [ 0.0, 0.01, 0.02, 0.04, 0.06, 0.11, 0.18, 0.23, 0.28, 0.34, 0.4, 0.46, 0.5, 0.55, 0.6, 0.64, 0.69, 0.71, 0.74, 0.77, 0.79, 0.81, 0.84, 0.86, 0.88, 0.9, 0.91, 0.92, 0.94, 0.95, 0.96, 0.97, 0.98, 0.99, 0.99, 1.0, 1.0, 0.99, 0.98, 0.96, 0.94, 0.92, 0.9, 0.88, 0.85, 0.83, 0.8, 0.77, 0.73, 0.7, 0.66, 0.62, 0.57, 0.53, 0.49, 0.45, 0.42, 0.39, 0.36, 0.34, 0.31, 0.27, 0.22, 0.19, 0.17, 0.15, 0.13, 0.12, 0.11, 0.1, 0.08, 0.07, 0.06, 0.06, 0.05, 0.04, 0.04, 0.03, 0.03, 0.02, 0.02, 0.02, 0.01, 0.01, 0.01, 0.01, 0.0, ], dtype="float64", ), }, "U": { "wavelen": np.array( [ 3000, 3050, 3100, 3150, 3200, 3250, 3300, 3350, 3400, 3450, 3500, 3550, 3600, 3650, 3700, 3750, 3800, 3850, 3900, 3950, 4000, 4050, 4100, 4150, ], dtype="float64", ), "trans": np.array( [ 0.0, 0.04, 
0.1, 0.25, 0.61, 0.75, 0.84, 0.88, 0.93, 0.95, 0.97, 0.99, 1.0, 0.99, 0.97, 0.92, 0.73, 0.56, 0.36, 0.23, 0.05, 0.03, 0.01, 0.0, ], dtype="float64", ), }, "V": { "wavelen": np.array( [ 4600, 4650, 4700, 4750, 4800, 4850, 4900, 4950, 5000, 5050, 5100, 5150, 5200, 5250, 5300, 5350, 5400, 5450, 5500, 5550, 5600, 5650, 5700, 5750, 5800, 5850, 5900, 5950, 6000, 6050, 6100, 6150, 6200, 6250, 6300, 6350, 6400, 6450, 6500, 6550, 6600, 6650, 6700, 6750, 6800, 6850, 6900, 6950, 7000, 7050, 7100, 7150, 7200, 7250, 7300, 7350, ], dtype="float64", ), "trans": np.array( [ 0.0, 0.0, 0.01, 0.01, 0.02, 0.05, 0.11, 0.2, 0.38, 0.67, 0.78, 0.85, 0.91, 0.94, 0.96, 0.98, 0.98, 0.95, 0.87, 0.79, 0.72, 0.71, 0.69, 0.65, 0.62, 0.58, 0.52, 0.46, 0.4, 0.34, 0.29, 0.24, 0.2, 0.17, 0.14, 0.11, 0.08, 0.06, 0.05, 0.03, 0.02, 0.02, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.0, ], dtype="float64", ), }, } for vals in johnson_filters.values(): wavelen = vals["wavelen"] trans = vals["trans"] vals["Lchar"] = wavelen[np.argmax(trans)] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/__init__.py0000644000175100001770000000006514714401662022600 0ustar00runnerdocker""" API for yt.visualization.volume_rendering """ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/_cuda_caster.cu0000644000175100001770000002225714714401662023443 0ustar00runnerdocker// An attempt at putting the ray-casting operation into CUDA //extern __shared__ float array[]; #define NUM_SAMPLES 25 #define VINDEX(A,B,C) tg.data[((((A)+ci[0])*(tg.dims[1]+1)+((B)+ci[1]))*(tg.dims[2]+1)+ci[2]+(C))] #define fmin(A, B) ( A * (A < B) + B * (B < A) ) #define fmax(A, B) ( A * (A > B) + B * (B > A) ) #define fclip(A, B, C) ( fmax( fmin(A, C), B) ) struct transfer_function { float vs[4][256]; float dbin; float bounds[2]; }; __shared__ struct transfer_function tf; struct grid { float left_edge[3]; float right_edge[3]; float dds[3]; int dims[3]; float *data; }; __shared__ struct grid tg; __device__ float interpolate(float *cache, int *ds, int *ci, float *dp) { int i; float dv, dm[3]; for(i=0;i<3;i++)dm[i] = (1.0 - dp[i]); dv = 0.0; dv += cache[0] * (dm[0]*dm[1]*dm[2]); dv += cache[1] * (dm[0]*dm[1]*dp[2]); dv += cache[2] * (dm[0]*dp[1]*dm[2]); dv += cache[3] * (dm[0]*dp[1]*dp[2]); dv += cache[4] * (dp[0]*dm[1]*dm[2]); dv += cache[5] * (dp[0]*dm[1]*dp[2]); dv += cache[6] * (dp[0]*dp[1]*dm[2]); dv += cache[7] * (dp[0]*dp[1]*dp[2]); return dv; } __device__ void eval_transfer(float dt, float dv, float *rgba, transfer_function &tf) { int i, bin_id; float temp, bv, dy, dd, ta; bin_id = (int) ((dv - tf.bounds[0]) / tf.dbin); bv = tf.vs[3][bin_id ]; dy = tf.vs[3][bin_id+1] - bv; dd = dv - (tf.bounds[0] + bin_id*tf.dbin); temp = bv+dd*(dy/tf.dbin); ta = temp; for (i = 0; i < 3; i++) { bv = tf.vs[i][bin_id ]; dy = tf.vs[i][bin_id+1]; dd = dv - (tf.bounds[0] + bin_id*tf.dbin); temp = bv+dd*(dy/tf.dbin); rgba[i] += (1.0 - rgba[3])*ta*temp*dt; } //rgba[3] += (1.0 - rgba[3])*ta*dt; } __device__ void sample_values(float v_pos[3], float v_dir[3], float enter_t, float exit_t, int ci[3], float rgba[4], transfer_function &tf, grid &tg) { float cp[3], dp[3], dt, t, dv; int dti, i; float cache[8]; cache[0] = VINDEX(0,0,0); cache[1] = VINDEX(0,0,1); cache[2] = VINDEX(0,1,0); cache[3] = VINDEX(0,1,1); cache[4] = VINDEX(1,0,0); cache[5] = VINDEX(1,0,1); cache[6] = VINDEX(1,1,0); cache[7] = 
VINDEX(1,1,1); dt = (exit_t - enter_t) / (NUM_SAMPLES-1); for (dti = 0; dti < NUM_SAMPLES - 1; dti++) { t = enter_t + dt*dti; for (i = 0; i < 3; i++) { cp[i] = v_pos[i] + t * v_dir[i]; dp[i] = fclip(fmod(cp[i], tg.dds[i])/tg.dds[i], 0.0, 1.0); } dv = interpolate(cache, tg.dims, ci, dp); eval_transfer(dt, dv, rgba, tf); } } /* We need to know several things if we want to ray cast through a grid. We need the grid spatial information, as well as its values. We also need the transfer function, which defines what our image will look like. */ __global__ void ray_cast(int ngrids, float *grid_data, int *dims, float *left_edge, float *right_edge, float *tf_r, float *tf_g, float *tf_b, float *tf_a, float *tf_bounds, float *v_dir, float *av_pos, float *image_r, float *image_g, float *image_b, float *image_a) { int cur_ind[3], step[3], x, y, i, direction; float intersect_t = 1.0, intersect_ts[3]; float tmax[3]; float tl, tr, temp_xl, temp_yl, temp_xr, temp_yr; int offset; //transfer_function tf; for (i = 0; i < 4; i++) { x = 4 * (8 * threadIdx.x + threadIdx.y) + i; tf.vs[0][x] = tf_r[x]; tf.vs[1][x] = tf_g[x]; tf.vs[2][x] = tf_b[x]; tf.vs[3][x] = tf_a[x]; } tf.bounds[0] = tf_bounds[0]; tf.bounds[1] = tf_bounds[1]; tf.dbin = (tf.bounds[1] - tf.bounds[0])/255.0; /* Set up the grid, just for convenience */ //grid tg; int grid_i; int tidx = (blockDim.x * gridDim.x) * ( blockDim.y * blockIdx.y + threadIdx.y) + (blockDim.x * blockIdx.x + threadIdx.x); float rgba[4]; //rgba[0] = image_r[tidx]; //rgba[1] = image_g[tidx]; //rgba[2] = image_b[tidx]; //rgba[3] = image_a[tidx]; float v_pos[3]; v_pos[0] = av_pos[tidx + 0]; v_pos[1] = av_pos[tidx + 1]; v_pos[2] = av_pos[tidx + 2]; tg.data = grid_data; int skip; for (i = 0; i < 3; i++) { step[i] = 0; step[i] += (v_dir[i] > 0); step[i] += -1 * (v_dir[i] < 0); } for(grid_i = 0; grid_i < ngrids; grid_i++) { skip = 0; if (threadIdx.x == 0) { if (threadIdx.y == 0) tg.dims[0] = dims[3*grid_i + 0]; if (threadIdx.y == 1) tg.dims[1] = dims[3*grid_i + 1]; if (threadIdx.y == 2) tg.dims[2] = dims[3*grid_i + 2]; } if (threadIdx.x == 1) { if (threadIdx.y == 0) tg.left_edge[0] = left_edge[3*grid_i + 0]; if (threadIdx.y == 1) tg.left_edge[1] = left_edge[3*grid_i + 1]; if (threadIdx.y == 2) tg.left_edge[2] = left_edge[3*grid_i + 2]; } if (threadIdx.x == 2) { if (threadIdx.y == 0) tg.right_edge[0] = right_edge[3*grid_i + 0]; if (threadIdx.y == 1) tg.right_edge[1] = right_edge[3*grid_i + 1]; if (threadIdx.y == 2) tg.right_edge[2] = right_edge[3*grid_i + 2]; } if (threadIdx.x == 3) { if (threadIdx.y == 0) tg.dds[0] = (tg.right_edge[0] - tg.left_edge[0])/tg.dims[0]; if (threadIdx.y == 1) tg.dds[1] = (tg.right_edge[1] - tg.left_edge[1])/tg.dims[1]; if (threadIdx.y == 2) tg.dds[2] = (tg.right_edge[2] - tg.left_edge[2])/tg.dims[2]; } /* We integrate our ray */ for (i = 0; i < 3; i++) { x = (i + 1) % 3; y = (i + 2) % 3; tl = (tg.left_edge[i] - v_pos[i])/v_dir[i]; temp_xl = (v_pos[i] + tl*v_dir[x]); temp_yr = (v_pos[i] + tl*v_dir[y]); tr = (tg.right_edge[i] - v_pos[i])/v_dir[i]; temp_xr = (v_pos[x] + tr*v_dir[x]); temp_yr = (v_pos[y] + tr*v_dir[y]); intersect_ts[i] = 1.0; intersect_ts[i] += ( (tg.left_edge[x] <= temp_xl) && (temp_xl <= tg.right_edge[x]) && (tg.left_edge[y] <= temp_yl) && (temp_yl <= tg.right_edge[y]) && (0.0 <= tl) && (tl < intersect_ts[i]) && (tl < tr) ) * tl; intersect_ts[i] += ( (tg.left_edge[x] <= temp_xr) && (temp_xr <= tg.right_edge[x]) && (tg.left_edge[y] <= temp_yr) && (temp_yr <= tg.right_edge[y]) && (0.0 <= tr) && (tr < intersect_ts[i]) && (tr < tl) ) * tr; 
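/* Sketch (not the shipped code): the per-axis slab test above written out
 * explicitly. The candidate entry/exit points must be evaluated along the
 * two transverse axes x and y -- note v_pos[x]/v_pos[y] -- and both temp_yl
 * and temp_yr are needed (the block above assigns temp_yr twice and reads
 * temp_yl uninitialized):
 *
 *   tl      = (tg.left_edge[i]  - v_pos[i]) / v_dir[i];
 *   temp_xl = v_pos[x] + tl * v_dir[x];
 *   temp_yl = v_pos[y] + tl * v_dir[y];
 *   tr      = (tg.right_edge[i] - v_pos[i]) / v_dir[i];
 *   temp_xr = v_pos[x] + tr * v_dir[x];
 *   temp_yr = v_pos[y] + tr * v_dir[y];
 *
 * A candidate t is accepted only when its projected hit point lies within
 * the face bounds and it is the nearest non-negative intersection.
 */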
intersect_t = ( intersect_ts[i] < intersect_t) * intersect_ts[i]; } intersect_t *= (!( (tg.left_edge[0] <= v_pos[0]) && (v_pos[0] <= tg.right_edge[0]) && (tg.left_edge[0] <= v_pos[0]) && (v_pos[0] <= tg.right_edge[0]) && (tg.left_edge[0] <= v_pos[0]) && (v_pos[0] <= tg.right_edge[0]))); skip = ((intersect_t < 0) || (intersect_t > 1.0)); for (i = 0; i < 3; i++) { cur_ind[i] = (int) floor(((v_pos[i] + intersect_t * v_dir[i]) + step[i]*1e-7*tg.dds[i] - tg.left_edge[i])/tg.dds[i]); tmax[i] = (((cur_ind[i]+step[i])*tg.dds[i])+ tg.left_edge[i]-v_pos[i])/v_dir[i]; cur_ind[i] -= ((cur_ind[i] == tg.dims[i]) && (step[i] < 0)); skip = ((cur_ind[i] < 0) || (cur_ind[i] >= tg.dims[i])); offset = (step[i] > 0); tmax[i] = (((cur_ind[i]+offset)*tg.dds[i])+tg.left_edge[i]-v_pos[i])/v_dir[i]; } /* This is the primary grid walking loop */ while(!( (skip) ||((cur_ind[0] < 0) || (cur_ind[0] >= tg.dims[0]) || (cur_ind[1] < 0) || (cur_ind[1] >= tg.dims[1]) || (cur_ind[2] < 0) || (cur_ind[2] >= tg.dims[2])))) { direction = 0; direction += 2 * (tmax[0] < tmax[1]) * (tmax[0] >= tmax[2]); direction += 1 * (tmax[0] >= tmax[1]) * (tmax[1] < tmax[2]); direction += 2 * (tmax[0] >= tmax[1]) * (tmax[1] >= tmax[2]); sample_values(v_pos, v_dir, intersect_t, tmax[direction], cur_ind, rgba, tf, tg); cur_ind[direction] += step[direction]; intersect_t = tmax[direction]; tmax[direction] += abs(tg.dds[direction]/v_dir[direction]); } tg.data += (tg.dims[0]+1) * (tg.dims[1]+1) * (tg.dims[2]+1); } int iy = threadIdx.y + blockDim.y * blockIdx.y; int ix = threadIdx.x + blockDim.x * blockIdx.x; __syncthreads(); image_r[tidx] = rgba[0]; image_g[tidx] = rgba[1]; image_b[tidx] = rgba[2]; image_a[tidx] = rgba[3]; } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/api.py0000644000175100001770000000132514714401662021612 0ustar00runnerdockerfrom .camera import Camera from .image_handling import export_rgba, import_rgba, plot_channel, plot_rgb from .off_axis_projection import off_axis_projection from .render_source import ( BoxSource, CoordinateVectorSource, GridSource, LineSource, MeshSource, OpaqueSource, PointSource, create_volume_source, set_raytracing_engine, ) from .scene import Scene from .transfer_function_helper import TransferFunctionHelper from .transfer_functions import ( ColorTransferFunction, MultiVariateTransferFunction, PlanckTransferFunction, ProjectionTransferFunction, TransferFunction, ) from .volume_rendering import create_scene, volume_render from .zbuffer_array import ZBuffer ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/blenders.py0000644000175100001770000000135314714401662022640 0ustar00runnerdockerimport numpy as np def enhance(im, stdval=6.0, just_alpha=True): if just_alpha: nz = im[im > 0.0] im[:] = im[:] / (nz.mean() + stdval * np.std(nz)) else: for c in range(3): nz = im[:, :, c][im[:, :, c] > 0.0] im[:, :, c] = im[:, :, c] / (nz.mean() + stdval * np.std(nz)) del nz np.clip(im, 0.0, 1.0, im) def enhance_rgba(im, stdval=6.0): nzc = im[:, :, :3][im[:, :, :3] > 0.0] cmax = nzc.mean() + stdval * nzc.std() nza = im[:, :, 3][im[:, :, 3] > 0.0] if len(nza) == 0: im[:, :, 3] = 1.0 amax = 1.0 else: amax = nza.mean() + stdval * nza.std() im.rescale(amax=amax, cmax=cmax, inline=True) np.clip(im, 0.0, 1.0, im) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 
yt-4.4.0/yt/visualization/volume_rendering/camera.py0000644000175100001770000006317314714401662022302 0ustar00runnerdockerimport weakref from numbers import Number as numeric_type import numpy as np from yt.funcs import ensure_numpy_array, is_sequence from yt.units.yt_array import YTArray, YTQuantity from yt.utilities.math_utils import get_rotation_matrix from yt.utilities.orientation import Orientation from .lens import Lens, lenses from .utils import data_source_or_all def _sanitize_camera_property_units(value, scene): if is_sequence(value): if len(value) == 1: return _sanitize_camera_property_units(value[0], scene) elif isinstance(value, YTArray) and len(value) == 3: return scene.arr(value).in_units("unitary") elif ( len(value) == 2 and isinstance(value[0], numeric_type) and isinstance(value[1], str) ): return scene.arr([scene.arr(value[0], value[1]).in_units("unitary")] * 3) if len(value) == 3: if all(is_sequence(v) for v in value): if all( isinstance(v[0], numeric_type) and isinstance(v[1], str) for v in value ): return scene.arr([scene.arr(v[0], v[1]) for v in value]) else: raise RuntimeError( f"Cannot set camera width to invalid value '{value}'" ) return scene.arr(value, "unitary") else: if isinstance(value, (YTQuantity, YTArray)): return scene.arr([value.d] * 3, value.units).in_units("unitary") elif isinstance(value, numeric_type): return scene.arr([value] * 3, "unitary") raise RuntimeError(f"Cannot set camera width to invalid value '{value}'") class Camera(Orientation): r"""A representation of a point of view into a Scene. It is defined by a position (the location of the camera in the simulation domain,), a focus (the point at which the camera is pointed), a width (the width of the snapshot that will be taken, a resolution (the number of pixels in the image), and a north_vector (the "up" direction in the resulting image). A camera can use a variety of different Lens objects. Parameters ---------- scene: A :class:`yt.visualization.volume_rendering.scene.Scene` object A scene object that the camera will be attached to. data_source: :class:`AMR3DData` or :class:`Dataset`, optional This is the source to be rendered, which can be any arbitrary yt data object or dataset. lens_type: string, optional This specifies the type of lens to use for rendering. Current options are 'plane-parallel', 'perspective', and 'fisheye'. See :class:`yt.visualization.volume_rendering.lens.Lens` for details. Default: 'plane-parallel' auto: boolean If True, build smart defaults using the data source extent. This can be time-consuming to iterate over the entire dataset to find the positional bounds. Default: False Examples -------- In this example, the camera is set using defaults that are chosen to be reasonable for the argument Dataset. 
>>> import yt >>> from yt.visualization.volume_rendering.api import Scene >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> sc = Scene() >>> cam = sc.add_camera(ds) Here, we set the camera properties manually: >>> import yt >>> from yt.visualization.volume_rendering.api import Scene >>> sc = Scene() >>> cam = sc.add_camera() >>> cam.position = np.array([0.5, 0.5, -1.0]) >>> cam.focus = np.array([0.5, 0.5, 0.0]) >>> cam.north_vector = np.array([1.0, 0.0, 0.0]) Finally, we create a camera with a non-default lens: >>> import yt >>> from yt.visualization.volume_rendering.api import Scene >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> sc = Scene() >>> cam = sc.add_camera(ds, lens_type="perspective") """ _moved = True _width = None _focus = None _position = None _resolution = None def __init__(self, scene, data_source=None, lens_type="plane-parallel", auto=False): # import this here to avoid an import cycle from .scene import Scene if not isinstance(scene, Scene): raise RuntimeError( "The first argument passed to the Camera initializer is a " f"{type(scene)} object, expected a {Scene} object" ) self.scene = weakref.proxy(scene) self.lens = None self.north_vector = None self.normal_vector = None self.light = None self.data_source = data_source_or_all(data_source) self._resolution = (512, 512) if self.data_source is not None: self.scene._set_new_unit_registry(self.data_source.ds.unit_registry) self._focus = self.data_source.ds.domain_center self._position = self.data_source.ds.domain_right_edge self._width = self.data_source.ds.arr( [1.5 * self.data_source.ds.domain_width.max()] * 3 ) self._domain_center = self.data_source.ds.domain_center self._domain_width = self.data_source.ds.domain_width else: self._focus = scene.arr([0.0, 0.0, 0.0], "unitary") self._width = scene.arr([1.0, 1.0, 1.0], "unitary") self._position = scene.arr([1.0, 1.0, 1.0], "unitary") if auto: self.set_defaults_from_data_source(data_source) super().__init__( self.focus - self.position, self.north_vector, steady_north=False ) self.set_lens(lens_type) @property def position(self): r""" The location of the camera. Parameters ---------- position : number, YTQuantity, :obj:`!iterable`, or 3 element YTArray If a scalar, assumes that the position is the same in all three coordinates. If an iterable, must contain only scalars or (length, unit) tuples. """ return self._position @position.setter def position(self, value): position = _sanitize_camera_property_units(value, self.scene) if np.array_equal(position, self.focus): raise RuntimeError( "Cannot set the camera focus and position to the same value" ) self._position = position self.switch_orientation( normal_vector=self.focus - self._position, north_vector=self.north_vector, ) @position.deleter def position(self): del self._position @property def width(self): r"""The width of the region that will be seen in the image. Parameters ---------- width : number, YTQuantity, :obj:`!iterable`, or 3 element YTArray The width of the volume rendering in the horizontal, vertical, and depth directions. If a scalar, assumes that the width is the same in all three directions. If an iterable, must contain only scalars or (length, unit) tuples. """ return self._width @width.setter def width(self, value): width = _sanitize_camera_property_units(value, self.scene) self._width = width self.switch_orientation() @width.deleter def width(self): del self._width self._width = None @property def focus(self): r""" The focus defines the point the Camera is pointed at. 
Parameters ---------- focus : number, YTQuantity, :obj:`!iterable`, or 3 element YTArray The width of the volume rendering in the horizontal, vertical, and depth directions. If a scalar, assumes that the width is the same in all three directions. If an iterable, must contain only scalars or (length, unit) tuples. """ return self._focus @focus.setter def focus(self, value): focus = _sanitize_camera_property_units(value, self.scene) if np.array_equal(focus, self.position): raise RuntimeError( "Cannot set the camera focus and position to the same value" ) self._focus = focus self.switch_orientation( normal_vector=self.focus - self._position, north_vector=None ) @focus.deleter def focus(self): del self._focus @property def resolution(self): r"""The resolution is the number of pixels in the image that will be produced. Must be a 2-tuple of integers or an integer.""" return self._resolution @resolution.setter def resolution(self, value): if is_sequence(value): if len(value) != 2: raise RuntimeError else: value = (value, value) self._resolution = value @resolution.deleter def resolution(self): del self._resolution self._resolution = None def set_resolution(self, resolution): """ The resolution is the number of pixels in the image that will be produced. Must be a 2-tuple of integers or an integer. """ self.resolution = resolution def get_resolution(self): """ Returns the resolution of the volume rendering """ return self.resolution def _get_sampler_params(self, render_source): lens_params = self.lens._get_sampler_params(self, render_source) lens_params.update(width=self.width) pos = self.position.in_units("code_length").d width = self.width.in_units("code_length").d lens_params.update(camera_data=np.vstack((pos, width, self.unit_vectors.d))) return lens_params def set_lens(self, lens_type): r"""Set the lens to be used with this camera. Parameters ---------- lens_type : string Must be one of the following: 'plane-parallel' 'perspective' 'stereo-perspective' 'fisheye' 'spherical' 'stereo-spherical' """ if isinstance(lens_type, Lens): self.lens = lens_type elif lens_type not in lenses: raise RuntimeError( f"Lens type {lens_type} not in available list of available lens " "types ({})".format(", ".join([f"{_!r}" for _ in lenses])) ) else: self.lens = lenses[lens_type]() self.lens.set_camera(self) def set_defaults_from_data_source(self, data_source): """Resets the camera attributes to their default values""" position = data_source.ds.domain_right_edge width = 1.5 * data_source.ds.domain_width.max() (xmi, xma), (ymi, yma), (zmi, zma) = data_source.quantities["Extrema"]( ["x", "y", "z"] ) width = np.sqrt((xma - xmi) ** 2 + (yma - ymi) ** 2 + (zma - zmi) ** 2) focus = data_source.get_field_parameter("center") if is_sequence(width) and len(width) > 1 and isinstance(width[1], str): width = data_source.ds.quan(width[0], units=width[1]) # Now convert back to code length for subsequent manipulation width = width.in_units("code_length") # .value if not is_sequence(width): width = data_source.ds.arr([width, width, width], units="code_length") # left/right, top/bottom, front/back if not isinstance(width, YTArray): width = data_source.ds.arr(width, units="code_length") if not isinstance(focus, YTArray): focus = data_source.ds.arr(focus, units="code_length") # We can't use the property setters yet, since they rely on attributes # that will not be set up until the base class initializer is called. # See Issue #1131. 
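        # (For comparison, a user-facing assignment such as
        # ``cam.width = (1.5, "Mpc")`` would round-trip through
        # _sanitize_camera_property_units into a 3-element "unitary"
        # YTArray; the private assignments below skip that machinery on
        # purpose.)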
self._width = width self._focus = focus self._position = position self._domain_center = data_source.ds.domain_center self._domain_width = data_source.ds.domain_width super().__init__( self.focus - self.position, self.north_vector, steady_north=False ) self._moved = True def set_width(self, width): r"""Set the width of the image that will be produced by this camera. Parameters ---------- width : number, YTQuantity, :obj:`!iterable`, or 3 element YTArray The width of the volume rendering in the horizontal, vertical, and depth directions. If a scalar, assumes that the width is the same in all three directions. If an iterable, must contain only scalars or (length, unit) tuples. """ self.width = width self.switch_orientation() def get_width(self): """Return the current camera width""" return self.width def set_position(self, position, north_vector=None): r"""Set the position of the camera. Parameters ---------- position : number, YTQuantity, :obj:`!iterable`, or 3 element YTArray If a scalar, assumes that the position is the same in all three coordinates. If an iterable, must contain only scalars or (length, unit) tuples. north_vector : array_like, optional The 'up' direction for the plane of rays. If not specific, calculated automatically. """ if north_vector is not None: self.north_vector = north_vector self.position = position def get_position(self): """Return the current camera position""" return self.position def set_focus(self, new_focus): """Sets the point the Camera is pointed at. Parameters ---------- new_focus : number, YTQuantity, :obj:`!iterable`, or 3 element YTArray If a scalar, assumes that the focus is the same is all three coordinates. If an iterable, must contain only scalars or (length, unit) tuples. """ self.focus = new_focus def get_focus(self): """Returns the current camera focus""" return self.focus def switch_orientation(self, normal_vector=None, north_vector=None): r"""Change the view direction based on any of the orientation parameters. This will recalculate all the necessary vectors and vector planes related to an orientable object. Parameters ---------- normal_vector: array_like, optional The new looking vector from the camera to the focus. north_vector : array_like, optional The 'up' direction for the plane of rays. If not specific, calculated automatically. """ if north_vector is None: north_vector = self.north_vector if normal_vector is None: normal_vector = self.normal_vector self._setup_normalized_vectors(normal_vector, north_vector) self.lens.setup_box_properties(self) def switch_view(self, normal_vector=None, north_vector=None): r"""Change the view based on any of the view parameters. This will recalculate the orientation and width based on any of normal_vector, width, center, and north_vector. Parameters ---------- normal_vector: array_like, optional The new looking vector from the camera to the focus. north_vector : array_like, optional The 'up' direction for the plane of rays. If not specific, calculated automatically. """ if north_vector is None: north_vector = self.north_vector if normal_vector is None: normal_vector = self.normal_vector self.switch_orientation(normal_vector=normal_vector, north_vector=north_vector) self._moved = True def rotate(self, theta, rot_vector=None, rot_center=None): r"""Rotate by a given angle Rotate the view. If `rot_vector` is None, rotation will occur around the `north_vector`. Parameters ---------- theta : float, in radians Angle (in radians) by which to rotate the view. 
rot_vector : array_like, optional Specify the rotation vector around which rotation will occur. Defaults to None, which sets rotation around `north_vector` rot_center : array_like, optional Specify the center around which rotation will occur. Defaults to None, which sets rotation around the original camera position (i.e. the camera position does not change) Examples -------- >>> import yt >>> import numpy as np >>> from yt.visualization.volume_rendering.api import Scene >>> sc = Scene() >>> cam = sc.add_camera() >>> # rotate the camera by pi / 4 radians: >>> cam.rotate(np.pi / 4.0) >>> # rotate the camera about the y-axis instead of cam.north_vector: >>> cam.rotate(np.pi / 4.0, np.array([0.0, 1.0, 0.0])) >>> # rotate the camera about the origin instead of its own position: >>> cam.rotate(np.pi / 4.0, rot_center=np.array([0.0, 0.0, 0.0])) """ rotate_all = rot_vector is not None if rot_vector is None: rot_vector = self.north_vector if rot_center is None: rot_center = self._position rot_vector = ensure_numpy_array(rot_vector) rot_vector = rot_vector / np.linalg.norm(rot_vector) new_position = self._position - rot_center R = get_rotation_matrix(theta, rot_vector) new_position = np.dot(R, new_position) + rot_center if (new_position == self._position).all(): normal_vector = self.unit_vectors[2] else: normal_vector = rot_center - new_position normal_vector = normal_vector / np.sqrt((normal_vector**2).sum()) if rotate_all: self.switch_view( normal_vector=np.dot(R, normal_vector), north_vector=np.dot(R, self.unit_vectors[1]), ) else: self.switch_view(normal_vector=np.dot(R, normal_vector)) if (new_position != self._position).any(): self.set_position(new_position) def pitch(self, theta, rot_center=None): r"""Rotate by a given angle about the horizontal axis Pitch the view. Parameters ---------- theta : float, in radians Angle (in radians) by which to pitch the view. rot_center : array_like, optional Specify the center around which rotation will occur. Examples -------- >>> import yt >>> import numpy as np >>> from yt.visualization.volume_rendering.api import Scene >>> sc = Scene() >>> sc.add_camera() >>> # pitch the camera by pi / 4 radians: >>> cam.pitch(np.pi / 4.0) >>> # pitch the camera about the origin instead of its own position: >>> cam.pitch(np.pi / 4.0, rot_center=np.array([0.0, 0.0, 0.0])) """ self.rotate(theta, rot_vector=self.unit_vectors[0], rot_center=rot_center) def yaw(self, theta, rot_center=None): r"""Rotate by a given angle about the vertical axis Yaw the view. Parameters ---------- theta : float, in radians Angle (in radians) by which to yaw the view. rot_center : array_like, optional Specify the center around which rotation will occur. Examples -------- >>> import yt >>> import numpy as np >>> from yt.visualization.volume_rendering.api import Scene >>> sc = Scene() >>> cam = sc.add_camera() >>> # yaw the camera by pi / 4 radians: >>> cam.yaw(np.pi / 4.0) >>> # yaw the camera about the origin instead of its own position: >>> cam.yaw(np.pi / 4.0, rot_center=np.array([0.0, 0.0, 0.0])) """ self.rotate(theta, rot_vector=self.unit_vectors[1], rot_center=rot_center) def roll(self, theta, rot_center=None): r"""Rotate by a given angle about the view normal axis Roll the view. Parameters ---------- theta : float, in radians Angle (in radians) by which to roll the view. rot_center : array_like, optional Specify the center around which rotation will occur. 
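            Defaults to None, which rotates about the camera's position
            (i.e. the camera position does not change).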
Examples -------- >>> import yt >>> import numpy as np >>> from yt.visualization.volume_rendering.api import Scene >>> sc = Scene() >>> cam = sc.add_camera(ds) >>> # roll the camera by pi / 4 radians: >>> cam.roll(np.pi / 4.0) >>> # roll the camera about the origin instead of its own position: >>> cam.roll(np.pi / 4.0, rot_center=np.array([0.0, 0.0, 0.0])) """ self.rotate(theta, rot_vector=self.unit_vectors[2], rot_center=rot_center) def iter_rotate(self, theta, n_steps, rot_vector=None, rot_center=None): r"""Loop over rotate, creating a rotation This will rotate `n_steps` until the current view has been rotated by an angle `theta`. Parameters ---------- theta : float, in radians Angle (in radians) by which to rotate the view. n_steps : int The number of snapshots to make. rot_vector : array_like, optional Specify the rotation vector around which rotation will occur. Defaults to None, which sets rotation around the original `north_vector` rot_center : array_like, optional Specify the center around which rotation will occur. Defaults to None, which sets rotation around the original camera position (i.e. the camera position does not change) Examples -------- >>> import yt >>> import numpy as np >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> im, sc = yt.volume_render(ds) >>> cam = sc.camera >>> for i in cam.iter_rotate(np.pi, 10): ... im = sc.render() ... sc.save("rotation_%04i.png" % i) """ dtheta = (1.0 * theta) / n_steps for i in range(n_steps): self.rotate(dtheta, rot_vector=rot_vector, rot_center=rot_center) yield i def iter_move(self, final, n_steps, exponential=False): r"""Loop over an iter_move and return snapshots along the way. This will yield `n_steps` until the current view has been moved to a final center of `final`. Parameters ---------- final : YTArray The final center to move to after `n_steps` n_steps : int The number of snapshots to make. exponential : boolean Specifies whether the move/zoom transition follows an exponential path toward the destination or linear. Default is False. Examples -------- >>> import yt >>> import numpy as np >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> final_position = ds.arr([0.2, 0.3, 0.6], "unitary") >>> im, sc = yt.volume_render(ds) >>> cam = sc.camera >>> for i in cam.iter_move(final_position, 10): ... sc.render() ... sc.save("move_%04i.png" % i) """ assert isinstance(final, YTArray) if exponential: position_diff = (final / self.position) * 1.0 dx = position_diff ** (1.0 / n_steps) else: dx = (final - self.position) * 1.0 / n_steps for i in range(n_steps): if exponential: self.set_position(self.position * dx) else: self.set_position(self.position + dx) yield i def zoom(self, factor): r"""Change the width of the FOV of the camera. This will appear to zoom the camera in by some `factor` toward the focal point along the current view direction, but really it's just changing the width of the field of view. Parameters ---------- factor : float The factor by which to divide the width Examples -------- >>> import yt >>> from yt.visualization.volume_rendering.api import Scene >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> sc = Scene() >>> cam = sc.add_camera(ds) >>> cam.zoom(1.1) """ self.width[:2] = self.width[:2] / factor def iter_zoom(self, final, n_steps): r"""Loop over a iter_zoom and return snapshots along the way. This will yield `n_steps` snapshots until the current view has been zooming in to a final factor of `final`. 
Parameters ---------- final : float The zoom factor, with respect to current, desired at the end of the sequence. n_steps : int The number of zoom snapshots to make. Examples -------- >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> im, sc = yt.volume_render(ds) >>> cam = sc.camera >>> for i in cam.iter_zoom(100.0, 10): ... sc.render() ... sc.save("zoom_%04i.png" % i) """ f = final ** (1.0 / n_steps) for i in range(n_steps): self.zoom(f) yield i def __repr__(self): disp = ( ":\n\tposition:%s\n\tfocus:%s\n\t" + "north_vector:%s\n\twidth:%s\n\tlight:%s\n\tresolution:%s\n" ) % ( self.position, self.focus, self.north_vector, self.width, self.light, self.resolution, ) disp += f"Lens: {self.lens}" return disp ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/camera_path.py0000644000175100001770000003005314714401662023305 0ustar00runnerdockerimport random import numpy as np from yt.visualization.volume_rendering.create_spline import create_spline class Keyframes: def __init__( self, x, y, z=None, north_vectors=None, up_vectors=None, times=None, niter=50000, init_temp=10.0, alpha=0.999, fixed_start=False, ): r"""Keyframes for camera path generation. From a set of keyframes with position and optional up and north vectors, an interpolated camera path is generated. Parameters ---------- x : array_like The x positions of the keyframes y : array_like The y positions of the keyframes z : array_like, optional The z positions of the keyframes. Default: 0.0 north_vectors : array_like, optional The north vectors of the keyframes. Default: None up_vectors : array_like, optional The up vectors of the keyframes. Default: None times : array_like, optional The times of the keyframes. Default: arange(N) niter : integer, optional Maximum number of iterations to find solution. Default: 50000 init_temp : float, optional Initial temperature for simulated annealing when finding a solution. Lower initial temperatures result in an initial solution in first several iterations that changes more rapidly. Default: 10.0 alpha : float, optional Exponent in cooling function in simulated annealing. Must be < 1. In each iteration, the temperature_new = temperature_old * alpha. Default: 0.999 fixed_start: boolean, optional If true, the first point never changes when searching for shortest path. Default: False Examples -------- >>> import matplotlib.pyplot as plt ... import numpy as np ... 
from yt.visualization.volume_rendering.camera_path import * >>> # Make a camera path from 10 random (x, y, z) keyframes >>> data = np.random.random(10, 3) >>> kf = Keyframes(data[:, 0], data[:, 1], data[:, 2]) >>> path = kf.create_path(250, shortest_path=False) >>> # Plot the keyframes in the x-y plane and camera path >>> plt.plot(kf.pos[:, 0], kf.pos[:, 1], "ko") >>> plt.plot(path["position"][:, 0], path["position"][:, 1]) >>> plt.savefig("path.png") """ Nx = len(x) Ny = len(y) if z is not None: Nz = len(z) ndims = 3 else: Nz = 1 ndims = 2 if Nx * Ny * Nz != Nx**ndims: print("Need Nx (%d) == Ny (%d) == Nz (%d)" % (Nx, Ny, Nz)) raise RuntimeError self.nframes = Nx self.pos = np.zeros((Nx, 3)) self.pos[:, 0] = x self.pos[:, 1] = y if z is not None: self.pos[:, 2] = z else: self.pos[:, 2] = 0.0 self.north_vectors = north_vectors self.up_vectors = up_vectors if times is None: self.times = np.arange(self.nframes) else: self.times = times self.cartesian_matrix() self.setup_tsp(niter, init_temp, alpha, fixed_start) def setup_tsp(self, niter=50000, init_temp=10.0, alpha=0.999, fixed_start=False): r"""Setup parameters for Travelling Salesman Problem. Parameters ---------- niter : integer, optional Maximum number of iterations to find solution. Default: 50000 init_temp : float, optional Initial temperature for simulated annealing when finding a solution. Lower initial temperatures result in an initial solution in first several iterations that changes more rapidly. Default: 10.0 alpha : float, optional Exponent in cooling function in simulated annealing. Must be < 1. In each iteration, the temperature_new = temperature_old * alpha. Default: 0.999 fixed_start: boolean, optional If true, the first point never changes when searching for shortest path. Default: False """ # randomize tour self.tour = list(range(self.nframes)) rng = np.random.default_rng() rng.shuffle(self.tour) if fixed_start: first = self.tour.index(0) self.tour[0], self.tour[first] = self.tour[first], self.tour[0] self.max_iterations = niter self.initial_temp = init_temp self.alpha = alpha self.fixed_start = fixed_start self.best_score = None self.best = None def set_times(self, times): self.times = times def rand_seq(self): r""" Generates values in random order, equivalent to using shuffle in random without generation all values at once. """ values = list(range(self.nframes)) for i in range(self.nframes): # pick a random index into remaining values j = i + int(random.random() * (self.nframes - i)) # swap the values values[j], values[i] = values[i], values[j] # return the swapped value yield values[i] def all_pairs(self): r""" Generates all (i,j) pairs for (i,j) for 0-size """ for i in self.rand_seq(): for j in self.rand_seq(): yield (i, j) def reversed_sections(self, tour): r""" Generator to return all possible variations where a section between two cities are swapped. 
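
        For example, reversing the section between positions 1 and 2 of the
        tour ``[0, 1, 2, 3]`` yields the candidate ``[0, 2, 1, 3]``; the
        generator visits such candidates in random order.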
""" for i, j in self.all_pairs(): if i == j: continue copy = tour[:] if i < j: copy[i : j + 1] = reversed(tour[i : j + 1]) else: copy[i + 1 :] = reversed(tour[:j]) copy[:j] = reversed(tour[i + 1 :]) if self.fixed_start: ind = copy.index(0) copy[0], copy[ind] = copy[ind], copy[0] if copy != tour: # no point return the same tour yield copy def cartesian_matrix(self): r""" Create a distance matrix for the city coords that uses straight line distance """ self.dist_matrix = np.zeros((self.nframes, self.nframes)) xmat = np.zeros((self.nframes, self.nframes)) xmat[:, :] = self.pos[:, 0] dx = xmat - xmat.T ymat = np.zeros((self.nframes, self.nframes)) ymat[:, :] = self.pos[:, 1] dy = ymat - ymat.T zmat = np.zeros((self.nframes, self.nframes)) zmat[:, :] = self.pos[:, 2] dz = zmat - zmat.T self.dist_matrix = np.sqrt(dx * dx + dy * dy + dz * dz) def tour_length(self, tour): r""" Calculate the total length of the tour based on the distance matrix """ total = 0 num_cities = len(tour) for i in range(num_cities): j = (i + 1) % num_cities city_i = tour[i] city_j = tour[j] total += self.dist_matrix[city_i, city_j] return -total def cooling(self): T = self.initial_temp while True: yield T T = self.alpha * T def prob(self, prev, next, temperature): if next > prev: return 1.0 else: return np.exp(-abs(next - prev) / temperature) def get_shortest_path(self): """ Determine shortest path between all keyframes. """ # this obviously doesn't work. When someone fixes it, remove the NOQA self.setup_tsp(niter, init_temp, alpha, fixed_start) # NOQA num_eval = 1 cooling_schedule = self.cooling() current = self.tour current_score = self.tour_length(current) for temperature in cooling_schedule: done = False # Examine moves around the current position for next in self.reversed_sections(current): if num_eval >= self.max_iterations: done = True break next_score = self.tour_length(next) num_eval += 1 # Anneal. Accept new solution if a random number is # greater than our "probability". p = self.prob(current_score, next_score, temperature) if random.random() < p: current = next self.current_score = next_score if self.current_score > self.best_score: # print(num_eval, self.current_score, self.best_score, current) self.best_score = self.current_score self.best = current break if done: break self.pos = self.pos[self.tour, :] if self.north_vectors is not None: self.north_vectors = self.north_vectors[self.tour] if self.up_vectors is not None: self.up_vectors = self.up_vectors[self.tour] def create_path(self, npoints, path_time=None, tension=0.5, shortest_path=False): r"""Create a interpolated camera path from keyframes. Parameters ---------- npoints : integer Number of points to interpolate from keyframes path_time : array_like, optional Times of interpolated points. Default: Linearly spaced tension : float, optional Controls how sharp of a curve the spline takes. A higher tension allows for more sharp turns. Default: 0.5 shortest_path : boolean, optional If true, estimate the shortest path between the keyframes. Default: False Returns ------- path : dict Dictionary (time, position, north_vectors, up_vectors) of camera path. Also saved to self.path. 
""" self.npoints = npoints self.path = { "time": np.zeros(npoints), "position": np.zeros((npoints, 3)), "north_vectors": np.zeros((npoints, 3)), "up_vectors": np.zeros((npoints, 3)), } if shortest_path: self.get_shortest_path() if path_time is None: path_time = np.linspace(0, self.nframes, npoints) self.path["time"] = path_time for dim in range(3): self.path["position"][:, dim] = create_spline( self.times, self.pos[:, dim], path_time, tension=tension ) if self.north_vectors is not None: self.path["north_vectors"][:, dim] = create_spline( self.times, self.north_vectors[:, dim], path_time, tension=tension ) if self.up_vectors is not None: self.path["up_vectors"][:, dim] = create_spline( self.times, self.up_vectors[:, dim], path_time, tension=tension ) return self.path def write_path(self, filename="path.dat"): r"""Writes camera path to ASCII file Parameters ---------- filename : string, optional Filename containing the camera path. Default: path.dat """ fp = open(filename, "w") fp.write( "#%11s %12s %12s %12s %12s %12s %12s %12s %12s\n" % ("x", "y", "z", "north_x", "north_y", "north_z", "up_x", "up_y", "up_z") ) for i in range(self.npoints): fp.write( "{:.12f} {:.12f} {:.12f} {:.12f} {:.12f} {:.12f} {:.12f} {:.12f} {:.12f}\n".format( self.path["position"][i, 0], self.path["position"][i, 1], self.path["position"][i, 2], self.path["north_vectors"][i, 0], self.path["north_vectors"][i, 1], self.path["north_vectors"][i, 2], self.path["up_vectors"][i, 0], self.path["up_vectors"][i, 1], self.path["up_vectors"][i, 2], ) ) fp.close() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/create_spline.py0000644000175100001770000000351214714401662023656 0ustar00runnerdockerimport sys import numpy as np def create_spline(old_x, old_y, new_x, tension=0.5, sorted=False): """ Inputs: old_x: array of floats Original x-data to be fit with a Catmull-Rom spline old_y: array of floats Original y-data to be fit with a Catmull-Rom spline new_x: array of floats interpolate to these x-coordinates tension: float, optional controls the tension at the specified coordinates sorted: boolean, optional If True, then the old_x and old_y arrays are sorted, and then this routine does not try to sort the coordinates Outputs: result: array of floats interpolated y-coordinates """ ndata = len(old_x) N = len(new_x) result = np.zeros(N) if not sorted: isort = np.argsort(old_x) old_x = old_x[isort] old_y = old_y[isort] # Floor/ceiling of values outside of the original data new_x = np.minimum(new_x, old_x[-1]) new_x = np.maximum(new_x, old_x[0]) ind = np.searchsorted(old_x, new_x) im2 = np.maximum(ind - 2, 0) im1 = np.maximum(ind - 1, 0) ip1 = np.minimum(ind + 1, ndata - 1) for i in range(N): if ind[i] != im1[i]: u = (new_x[i] - old_x[im1[i]]) / (old_x[ind[i]] - old_x[im1[i]]) elif ind[i] == im1[i]: u = 0 else: print("Bad index during interpolation?") sys.exit() b0 = -tension * u + 2 * tension * u**2 - tension * u**3 b1 = 1.0 + (tension - 3) * u**2 + (2 - tension) * u**3 b2 = tension * u + (3 - 2 * tension) * u**2 + (tension - 2) * u**3 b3 = -tension * u**2 + tension * u**3 result[i] = ( b0 * old_y[im2[i]] + b1 * old_y[im1[i]] + b2 * old_y[ind[i]] + b3 * old_y[ip1[i]] ) return result ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/image_handling.py0000644000175100001770000001005314714401662023765 0ustar00runnerdockerimport numpy as np from yt.funcs import mylog 
from yt.utilities.on_demand_imports import _h5py as h5py


def export_rgba(
    image,
    fn,
    h5=True,
    fits=False,
):
    """
    This function accepts an *image*, of shape (N,M,4) corresponding to
    r, g, b, and a, and saves to *fn*.  If *h5* is True, then it will save
    in hdf5 format.  If *fits* is True, it will save in fits format.
    """
    if (not h5 and not fits) or (h5 and fits):
        raise ValueError("Choose either HDF5 or FITS format!")
    if h5:
        f = h5py.File(f"{fn}.h5", mode="w")
        f.create_dataset("R", data=image[:, :, 0])
        f.create_dataset("G", data=image[:, :, 1])
        f.create_dataset("B", data=image[:, :, 2])
        f.create_dataset("A", data=image[:, :, 3])
        f.close()
    if fits:
        from yt.visualization.fits_image import FITSImageData

        data = {}
        data["r"] = image[:, :, 0]
        data["g"] = image[:, :, 1]
        data["b"] = image[:, :, 2]
        data["a"] = image[:, :, 3]
        fib = FITSImageData(data)
        fib.writeto(f"{fn}.fits", overwrite=True)


def import_rgba(name, h5=True):
    """
    This function will read back in an HDF5 file, as saved by export_rgba,
    and return the frames to the user.  *name* is the name of the file to
    be read in.
    """
    if not h5:
        mylog.error("No support for fits import.")
        raise ValueError("Only HDF5 import is supported.")
    f = h5py.File(name, mode="r")
    r = f["R"][()]
    g = f["G"][()]
    b = f["B"][()]
    a = f["A"][()]
    f.close()
    return np.array([r, g, b, a]).swapaxes(0, 2).swapaxes(0, 1)


def plot_channel(
    image,
    name,
    cmap="gist_heat",
    log=True,
    dex=3,
    zero_factor=1.0e-10,
    label=None,
    label_color="w",
    label_size="large",
):
    """
    This function will plot a single channel.  *image* is an array shaped
    like (N,M), *name* is the prefix for the output filename.  *cmap* is
    the name of the colormap to apply, *log* is whether or not the channel
    should be logged.  Additionally, you may optionally specify the
    minimum-value cutoff for scaling as *dex*, which is taken with respect
    to the minimum value of the image.  *zero_factor* applies a minimum
    value to all zero-valued elements.  Optionally, *label*, *label_color*
    and *label_size* may be specified.
    """
    import matplotlib as mpl
    from matplotlib import pyplot as plt
    from matplotlib.colors import LogNorm

    Nvec = image.shape[0]
    image[np.isnan(image)] = 0.0
    ma = image[image > 0.0].max()
    image[image == 0.0] = ma * zero_factor
    if log:
        mynorm = LogNorm(ma / (10.0**dex), ma)
    fig = plt.gcf()
    ax = plt.gca()
    fig.clf()
    fig.set_dpi(100)
    fig.set_size_inches((Nvec / 100.0, Nvec / 100.0))
    fig.subplots_adjust(
        left=0.0, right=1.0, bottom=0.0, top=1.0, wspace=0.0, hspace=0.0
    )
    mycm = mpl.colormaps[cmap]
    if log:
        ax.imshow(image, cmap=mycm, norm=mynorm, interpolation="nearest")
    else:
        ax.imshow(image, cmap=mycm, interpolation="nearest")
    if label is not None:
        ax.text(20, 20, label, color=label_color, size=label_size)
    fig.savefig(f"{name}_{cmap}.png")
    fig.clf()


def plot_rgb(image, name, label=None, label_color="w", label_size="large"):
    """
    This will plot the r,g,b channels of an *image* of shape (N,M,3) or
    (N,M,4).  *name* is the prefix of the file name, which will be
    supplemented with "_rgb.png."  *label*, *label_color* and *label_size*
    may also be specified.
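
    Examples
    --------
    A hypothetical frame written straight to disk:

    >>> import numpy as np
    >>> image = np.random.random((256, 256, 4))
    >>> plot_rgb(image, "frame", label="t = 0")  # writes frame_rgb.png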
""" import matplotlib.pyplot as plt Nvec = image.shape[0] image[np.isnan(image)] = 0.0 if image.shape[2] >= 4: image = image[:, :, :3] fig = plt.gcf() ax = plt.gca() fig.clf() fig.set_dpi(100) fig.set_size_inches((Nvec / 100.0, Nvec / 100.0)) fig.subplots_adjust( left=0.0, right=1.0, bottom=0.0, top=1.0, wspace=0.0, hspace=0.0 ) ax.imshow(image, interpolation="nearest") if label is not None: ax.text(20, 20, label, color=label_color, size=label_size) fig.savefig(f"{name}_rgb.png") fig.clf() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/lens.py0000644000175100001770000007520114714401662022006 0ustar00runnerdockerimport numpy as np from yt.data_objects.image_array import ImageArray from yt.units._numpy_wrapper_functions import uhstack, unorm, uvstack from yt.utilities.lib.grid_traversal import arr_fisheye_vectors from yt.utilities.math_utils import get_rotation_matrix from yt.utilities.parallel_tools.parallel_analysis_interface import ( ParallelAnalysisInterface, ) class Lens(ParallelAnalysisInterface): """A Lens is used to define the set of rays for rendering.""" def __init__(self): super().__init__() self.viewpoint = None self.sub_samples = 5 self.num_threads = 0 self.box_vectors = None self.origin = None self.back_center = None self.front_center = None self.sampler = None def set_camera(self, camera): """Set the properties of the lens based on the camera. This is a proxy for setup_box_properties """ self.setup_box_properties(camera) def new_image(self, camera): """Initialize a new ImageArray to be used with this lens.""" self.current_image = ImageArray( np.zeros( (camera.resolution[0], camera.resolution[1], 4), dtype="float64", order="C", ), info={"imtype": "rendering"}, ) return self.current_image def setup_box_properties(self, camera): """Set up the view and stage based on the properties of the camera.""" unit_vectors = camera.unit_vectors width = camera.width center = camera.focus self.box_vectors = camera.scene.arr( [ unit_vectors[0] * width[0], unit_vectors[1] * width[1], unit_vectors[2] * width[2], ] ) self.origin = center - 0.5 * width.dot(unit_vectors) self.back_center = center - 0.5 * width[2] * unit_vectors[2] self.front_center = center + 0.5 * width[2] * unit_vectors[2] self.set_viewpoint(camera) def set_viewpoint(self, camera): """ Set the viewpoint used for AMRKDTree traversal such that you yield bricks from back to front or front to back from with respect to this point. Must be implemented for each Lens type. """ raise NotImplementedError("Need to choose viewpoint for this class") class PlaneParallelLens(Lens): r"""The lens for orthographic projections. All rays emerge parallel to each other, arranged along a plane. The initializer takes no parameters. 
""" def __init__(self): super().__init__() def _get_sampler_params(self, camera, render_source): if render_source.zbuffer is not None: image = render_source.zbuffer.rgba else: image = self.new_image(camera) vp_pos = np.concatenate( [ camera.inv_mat.ravel("F").d, self.back_center.ravel().in_units("code_length").d, ] ) sampler_params = { "vp_pos": vp_pos, "vp_dir": self.box_vectors[2], # All the same "center": self.back_center, "bounds": ( -camera.width[0] / 2.0, camera.width[0] / 2.0, -camera.width[1] / 2.0, camera.width[1] / 2.0, ), "x_vec": camera.unit_vectors[0], "y_vec": camera.unit_vectors[1], "width": np.array(camera.width, dtype="float64"), "image": image, "lens_type": "plane-parallel", } return sampler_params def set_viewpoint(self, camera): """Set the viewpoint based on the camera""" # This is a hack that should be replaced by an alternate plane-parallel # traversal. Put the camera really far away so that the effective # viewpoint is infinitely far away, making for parallel rays. self.viewpoint = ( self.front_center + camera.unit_vectors[2] * 1.0e6 * camera.width[2] ) def project_to_plane(self, camera, pos, res=None): if res is None: res = camera.resolution origin = self.origin.in_units("code_length").d front_center = self.front_center.in_units("code_length").d width = camera.width.in_units("code_length").d dx = np.array(np.dot(pos - origin, camera.unit_vectors[1])) dy = np.array(np.dot(pos - origin, camera.unit_vectors[0])) dz = np.array(np.dot(pos - front_center, -camera.unit_vectors[2])) # Transpose into image coords. px = (res[0] * (dy / width[0])).astype("int64") py = (res[1] * (dx / width[1])).astype("int64") return px, py, dz def __repr__(self): return ( ":\n" "\tlens_type:plane-parallel\n" f"\tviewpoint:{self.viewpoint}" ) class PerspectiveLens(Lens): r"""A lens for viewing a scene with a set of rays within an opening angle. The scene will have an element of perspective to it since the rays are not parallel. 
""" def __init__(self): super().__init__() def new_image(self, camera): self.current_image = ImageArray( np.zeros( (camera.resolution[0], camera.resolution[1], 4), dtype="float64", order="C", ), info={"imtype": "rendering"}, ) return self.current_image def _get_sampler_params(self, camera, render_source): # Enforce width[1] / width[0] = resolution[1] / resolution[0] camera.width[1] = camera.width[0] * ( camera.resolution[1] / camera.resolution[0] ) if render_source.zbuffer is not None: image = render_source.zbuffer.rgba else: image = self.new_image(camera) east_vec = camera.unit_vectors[0] north_vec = camera.unit_vectors[1] normal_vec = camera.unit_vectors[2] px = np.linspace(-0.5, 0.5, camera.resolution[0])[np.newaxis, :] py = np.linspace(-0.5, 0.5, camera.resolution[1])[np.newaxis, :] sample_x = camera.width[0] * np.array(east_vec.reshape(3, 1) * px) sample_x = sample_x.transpose() sample_y = camera.width[1] * np.array(north_vec.reshape(3, 1) * py) sample_y = sample_y.transpose() vectors = np.zeros( (camera.resolution[0], camera.resolution[1], 3), dtype="float64", order="C" ) sample_x = np.repeat( sample_x.reshape(camera.resolution[0], 1, 3), camera.resolution[1], axis=1 ) sample_y = np.repeat( sample_y.reshape(1, camera.resolution[1], 3), camera.resolution[0], axis=0 ) normal_vecs = np.tile(normal_vec, camera.resolution[0] * camera.resolution[1]) normal_vecs = normal_vecs.reshape(camera.resolution[0], camera.resolution[1], 3) # The maximum possible length of ray max_length = unorm(camera.position - camera._domain_center) + 0.5 * unorm( camera._domain_width ) # Rescale the ray to be long enough to cover the entire domain vectors = (sample_x + sample_y + normal_vecs * camera.width[2]) * ( max_length / camera.width[2] ) positions = np.tile( camera.position, camera.resolution[0] * camera.resolution[1] ) positions = positions.reshape(camera.resolution[0], camera.resolution[1], 3) uv = np.ones(3, dtype="float64") image = self.new_image(camera) sampler_params = { "vp_pos": positions, "vp_dir": vectors, "center": self.back_center, "bounds": (0.0, 1.0, 0.0, 1.0), "x_vec": uv, "y_vec": uv, "width": np.zeros(3, dtype="float64"), "image": image, "lens_type": "perspective", } return sampler_params def set_viewpoint(self, camera): """ For a PerspectiveLens, the viewpoint is the front center. 
""" self.viewpoint = self.front_center def project_to_plane(self, camera, pos, res=None): if res is None: res = camera.resolution width = camera.width.in_units("code_length").d position = camera.position.in_units("code_length").d width[1] = width[0] * res[1] / res[0] sight_vector = pos - position pos1 = sight_vector for i in range(0, sight_vector.shape[0]): sight_vector_norm = np.sqrt(np.dot(sight_vector[i], sight_vector[i])) if sight_vector_norm != 0: sight_vector[i] = sight_vector[i] / sight_vector_norm sight_center = camera.position + camera.width[2] * camera.unit_vectors[2] sight_center = sight_center.in_units("code_length").d for i in range(0, sight_vector.shape[0]): sight_angle_cos = np.dot(sight_vector[i], camera.unit_vectors[2]) # clip sight_angle_cos since floating point noise might # go outside the domain of arccos sight_angle_cos = np.clip(sight_angle_cos, -1.0, 1.0) if np.arccos(sight_angle_cos) < 0.5 * np.pi: sight_length = width[2] / sight_angle_cos else: # If the corner is on the backwards, then we put it outside of # the image It can not be simply removed because it may connect # to other corner within the image, which produces visible # domain boundary line sight_length = np.sqrt(width[0] ** 2 + width[1] ** 2) sight_length = sight_length / np.sqrt(1 - sight_angle_cos**2) pos1[i] = position + sight_length * sight_vector[i] dx = np.dot(pos1 - sight_center, camera.unit_vectors[0]) dy = np.dot(pos1 - sight_center, camera.unit_vectors[1]) dz = np.dot(pos - position, camera.unit_vectors[2]) # Transpose into image coords. px = (res[0] * 0.5 + res[0] / width[0] * dx).astype("int64") py = (res[1] * 0.5 + res[1] / width[1] * dy).astype("int64") return px, py, dz def __repr__(self): disp = f":\n\tlens_type:perspective\n\tviewpoint:{self.viewpoint}" return disp class StereoPerspectiveLens(Lens): """A lens that includes two sources for perspective rays, for 3D viewing The degree of differences between the left and right images is controlled by the disparity (the maximum distance between corresponding points in the left and right images). By default, the disparity is set to be 3 pixels. """ def __init__(self): super().__init__() self.disparity = None def new_image(self, camera): """Initialize a new ImageArray to be used with this lens.""" self.current_image = ImageArray( np.zeros( (camera.resolution[0], camera.resolution[1], 4), dtype="float64", order="C", ), info={"imtype": "rendering"}, ) return self.current_image def _get_sampler_params(self, camera, render_source): # Enforce width[1] / width[0] = 2 * resolution[1] / resolution[0] # For stereo-type lens, images for left and right eye are pasted together, # so the resolution of single-eye image will be 50% of the whole one. 
camera.width[1] = camera.width[0] * ( 2.0 * camera.resolution[1] / camera.resolution[0] ) if self.disparity is None: self.disparity = 3.0 * camera.width[0] / camera.resolution[0] if render_source.zbuffer is not None: image = render_source.zbuffer.rgba else: image = self.new_image(camera) vectors_left, positions_left = self._get_positions_vectors( camera, -self.disparity ) vectors_right, positions_right = self._get_positions_vectors( camera, self.disparity ) uv = np.ones(3, dtype="float64") image = self.new_image(camera) vectors_comb = uvstack([vectors_left, vectors_right]) positions_comb = uvstack([positions_left, positions_right]) image.shape = (camera.resolution[0], camera.resolution[1], 4) vectors_comb.shape = (camera.resolution[0], camera.resolution[1], 3) positions_comb.shape = (camera.resolution[0], camera.resolution[1], 3) sampler_params = { "vp_pos": positions_comb, "vp_dir": vectors_comb, "center": self.back_center, "bounds": (0.0, 1.0, 0.0, 1.0), "x_vec": uv, "y_vec": uv, "width": np.zeros(3, dtype="float64"), "image": image, "lens_type": "stereo-perspective", } return sampler_params def _get_positions_vectors(self, camera, disparity): single_resolution_x = int(np.floor(camera.resolution[0]) / 2) east_vec = camera.unit_vectors[0] north_vec = camera.unit_vectors[1] normal_vec = camera.unit_vectors[2] angle_disparity = -np.arctan2( disparity.in_units(camera.width.units), camera.width[2] ) R = get_rotation_matrix(angle_disparity, north_vec) east_vec_rot = np.dot(R, east_vec) normal_vec_rot = np.dot(R, normal_vec) px = np.linspace(-0.5, 0.5, single_resolution_x)[np.newaxis, :] py = np.linspace(-0.5, 0.5, camera.resolution[1])[np.newaxis, :] sample_x = camera.width[0] * np.array(east_vec_rot.reshape(3, 1) * px) sample_x = sample_x.transpose() sample_y = camera.width[1] * np.array(north_vec.reshape(3, 1) * py) sample_y = sample_y.transpose() vectors = np.zeros( (single_resolution_x, camera.resolution[1], 3), dtype="float64", order="C" ) sample_x = np.repeat( sample_x.reshape(single_resolution_x, 1, 3), camera.resolution[1], axis=1 ) sample_y = np.repeat( sample_y.reshape(1, camera.resolution[1], 3), single_resolution_x, axis=0 ) normal_vecs = np.tile( normal_vec_rot, single_resolution_x * camera.resolution[1] ) normal_vecs = normal_vecs.reshape(single_resolution_x, camera.resolution[1], 3) east_vecs = np.tile(east_vec_rot, single_resolution_x * camera.resolution[1]) east_vecs = east_vecs.reshape(single_resolution_x, camera.resolution[1], 3) # The maximum possible length of ray max_length = ( unorm(camera.position - camera._domain_center) + 0.5 * unorm(camera._domain_width) + np.abs(self.disparity) ) # Rescale the ray to be long enough to cover the entire domain vectors = (sample_x + sample_y + normal_vecs * camera.width[2]) * ( max_length / camera.width[2] ) positions = np.tile(camera.position, single_resolution_x * camera.resolution[1]) positions = positions.reshape(single_resolution_x, camera.resolution[1], 3) # Here the east_vecs is non-rotated one positions = positions + east_vecs * disparity return vectors, positions def project_to_plane(self, camera, pos, res=None): if res is None: res = camera.resolution # Enforce width[1] / width[0] = 2 * resolution[1] / resolution[0] # For stereo-type lens, images for left and right eye are pasted together, # so the resolution of single-eye image will be 50% of the whole one. 
camera.width[1] = camera.width[0] * (2.0 * res[1] / res[0]) if self.disparity is None: self.disparity = 3.0 * camera.width[0] / camera.resolution[0] px_left, py_left, dz_left = self._get_px_py_dz( camera, pos, res, -self.disparity ) px_right, py_right, dz_right = self._get_px_py_dz( camera, pos, res, self.disparity ) px = uvstack([px_left, px_right]) py = uvstack([py_left, py_right]) dz = uvstack([dz_left, dz_right]) return px, py, dz def _get_px_py_dz(self, camera, pos, res, disparity): res0_h = np.floor(res[0]) / 2 east_vec = camera.unit_vectors[0] north_vec = camera.unit_vectors[1] normal_vec = camera.unit_vectors[2] angle_disparity = -np.arctan2(disparity.d, camera.width[2].d) R = get_rotation_matrix(angle_disparity, north_vec) east_vec_rot = np.dot(R, east_vec) normal_vec_rot = np.dot(R, normal_vec) camera_position_shift = camera.position + east_vec * disparity camera_position_shift = camera_position_shift.in_units("code_length").d width = camera.width.in_units("code_length").d sight_vector = pos - camera_position_shift pos1 = sight_vector for i in range(0, sight_vector.shape[0]): sight_vector_norm = np.sqrt(np.dot(sight_vector[i], sight_vector[i])) sight_vector[i] = sight_vector[i] / sight_vector_norm sight_center = camera_position_shift + camera.width[2] * normal_vec_rot for i in range(0, sight_vector.shape[0]): sight_angle_cos = np.dot(sight_vector[i], normal_vec_rot) # clip sight_angle_cos since floating point noise might # cause it go outside the domain of arccos sight_angle_cos = np.clip(sight_angle_cos, -1.0, 1.0) if np.arccos(sight_angle_cos) < 0.5 * np.pi: sight_length = width[2] / sight_angle_cos else: # If the corner is on the backwards, then we put it outside of # the image It can not be simply removed because it may connect # to other corner within the image, which produces visible # domain boundary line sight_length = np.sqrt(width[0] ** 2 + width[1] ** 2) sight_length = sight_length / np.sqrt(1 - sight_angle_cos**2) pos1[i] = camera_position_shift + sight_length * sight_vector[i] dx = np.dot(pos1 - sight_center, east_vec_rot) dy = np.dot(pos1 - sight_center, north_vec) dz = np.dot(pos - camera_position_shift, normal_vec_rot) # Transpose into image coords. if disparity > 0: px = (res0_h * 0.5 + res0_h / camera.width[0].d * dx).astype("int64") else: px = (res0_h * 1.5 + res0_h / camera.width[0].d * dx).astype("int64") py = (res[1] * 0.5 + res[1] / camera.width[1].d * dy).astype("int64") return px, py, dz def set_viewpoint(self, camera): """ For a PerspectiveLens, the viewpoint is the front center. """ self.viewpoint = self.front_center def __repr__(self): disp = f":\n\tlens_type:perspective\n\tviewpoint:{self.viewpoint}" return disp class FisheyeLens(Lens): r"""A lens for dome-based renderings This lens type accepts a field-of-view property, fov, that describes how wide an angle the fisheye can see. Fisheye images are typically used for dome-based presentations; the Hayden planetarium for instance has a field of view of 194.6. The images returned by this camera will be flat pixel images that can and should be reshaped to the resolution. 
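
    Examples
    --------
    A sketch of a full-dome setup (the 360 degree field of view and the
    resolution are arbitrary illustrative choices):

    >>> sc.camera.set_lens("fisheye")
    >>> sc.camera.lens.fov = 360.0
    >>> sc.camera.resolution = (512, 512)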
""" def __init__(self): super().__init__() self.fov = 180.0 self.radius = 1.0 self.center = None self.rotation_matrix = np.eye(3) def setup_box_properties(self, camera): """Set up the view and stage based on the properties of the camera.""" self.radius = camera.width.max() super().setup_box_properties(camera) self.set_viewpoint(camera) def new_image(self, camera): """Initialize a new ImageArray to be used with this lens.""" self.current_image = ImageArray( np.zeros( (camera.resolution[0], camera.resolution[0], 4), dtype="float64", order="C", ), info={"imtype": "rendering"}, ) return self.current_image def _get_sampler_params(self, camera, render_source): vp = -arr_fisheye_vectors(camera.resolution[0], self.fov) vp.shape = (camera.resolution[0], camera.resolution[0], 3) vp = vp.dot(np.linalg.inv(self.rotation_matrix)) vp *= self.radius uv = np.ones(3, dtype="float64") positions = ( np.ones((camera.resolution[0], camera.resolution[0], 3), dtype="float64") * camera.position ) if render_source.zbuffer is not None: image = render_source.zbuffer.rgba else: image = self.new_image(camera) sampler_params = { "vp_pos": positions, "vp_dir": vp, "center": self.center, "bounds": (0.0, 1.0, 0.0, 1.0), "x_vec": uv, "y_vec": uv, "width": np.zeros(3, dtype="float64"), "image": image, "lens_type": "fisheye", } return sampler_params def set_viewpoint(self, camera): """For a FisheyeLens, the viewpoint is the camera's position""" self.viewpoint = camera.position def __repr__(self): disp = ( f":\n\tlens_type:fisheye\n\tviewpoint:{self.viewpoint}" f"\nt\tfov:{self.fov}\n\tradius:{self.radius}" ) return disp def project_to_plane(self, camera, pos, res=None): if res is None: res = camera.resolution # the return values here need to be px, py, dz # these are the coordinates and dz for the resultant image. # Basically, what we need is an inverse projection from the fisheye # vectors back onto the plane. arr_fisheye_vectors goes from px, py to # vector, and we need the reverse. # First, we transform lpos into *relative to the camera* coordinates. position = camera.position.in_units("code_length").d lpos = position - pos lpos = lpos.dot(self.rotation_matrix) mag = (lpos * lpos).sum(axis=1) ** 0.5 # screen out NaN values that would result from dividing by mag mag[mag == 0] = 1 lpos /= mag[:, None] dz = (mag / self.radius).in_units("1/code_length").d theta = np.arccos(lpos[:, 2]) fov_rad = self.fov * np.pi / 180.0 r = 2.0 * theta / fov_rad phi = np.arctan2(lpos[:, 1], lpos[:, 0]) px = r * np.cos(phi) py = r * np.sin(phi) # dz is distance the ray would travel px = (px + 1.0) * res[0] / 2.0 py = (py + 1.0) * res[1] / 2.0 # px and py should be dimensionless px = np.rint(px, dtype="int64") py = np.rint(py, dtype="int64") return px, py, dz class SphericalLens(Lens): r"""A lens for cylindrical-spherical projection. Movies rendered in this way can be displayed in head-tracking devices or in YouTube 360 view. 
""" def __init__(self): super().__init__() self.radius = 1.0 self.center = None self.rotation_matrix = np.eye(3) def setup_box_properties(self, camera): """Set up the view and stage based on the properties of the camera.""" self.radius = camera.width.max() super().setup_box_properties(camera) self.set_viewpoint(camera) def _get_sampler_params(self, camera, render_source): px = np.linspace(-np.pi, np.pi, camera.resolution[0], endpoint=True)[:, None] py = np.linspace( -np.pi / 2.0, np.pi / 2.0, camera.resolution[1], endpoint=True )[None, :] vectors = np.zeros( (camera.resolution[0], camera.resolution[1], 3), dtype="float64", order="C" ) vectors[:, :, 0] = np.cos(px) * np.cos(py) vectors[:, :, 1] = np.sin(px) * np.cos(py) vectors[:, :, 2] = np.sin(py) # The maximum possible length of ray max_length = unorm(camera.position - camera._domain_center) + 0.5 * unorm( camera._domain_width ) # Rescale the ray to be long enough to cover the entire domain vectors = vectors * max_length positions = np.tile( camera.position, camera.resolution[0] * camera.resolution[1] ).reshape(camera.resolution[0], camera.resolution[1], 3) R1 = get_rotation_matrix(0.5 * np.pi, [1, 0, 0]) R2 = get_rotation_matrix(0.5 * np.pi, [0, 0, 1]) uv = np.dot(R1, camera.unit_vectors) uv = np.dot(R2, uv) vectors.reshape((camera.resolution[0] * camera.resolution[1], 3)) vectors = np.dot(vectors, uv) vectors.reshape((camera.resolution[0], camera.resolution[1], 3)) if render_source.zbuffer is not None: image = render_source.zbuffer.rgba else: image = self.new_image(camera) dummy = np.ones(3, dtype="float64") image.shape = (camera.resolution[0], camera.resolution[1], 4) vectors.shape = (camera.resolution[0], camera.resolution[1], 3) positions.shape = (camera.resolution[0], camera.resolution[1], 3) sampler_params = { "vp_pos": positions, "vp_dir": vectors, "center": self.back_center, "bounds": (0.0, 1.0, 0.0, 1.0), "x_vec": dummy, "y_vec": dummy, "width": np.zeros(3, dtype="float64"), "image": image, "lens_type": "spherical", } return sampler_params def set_viewpoint(self, camera): """For a SphericalLens, the viewpoint is the camera's position""" self.viewpoint = camera.position def project_to_plane(self, camera, pos, res=None): if res is None: res = camera.resolution # Much of our setup here is the same as in the fisheye, except for the # actual conversion back to the px, py values. position = camera.position.in_units("code_length").d lpos = position - pos mag = (lpos * lpos).sum(axis=1) ** 0.5 # screen out NaN values that would result from dividing by mag mag[mag == 0] = 1 lpos /= mag[:, None] # originally: # the x vector is cos(px) * cos(py) # the y vector is sin(px) * cos(py) # the z vector is sin(py) # y / x = tan(px), so arctan2(lpos[:,1], lpos[:,0]) => px # z = sin(py) so arcsin(z) = py # px runs from -pi to pi # py runs from -pi/2 to pi/2 px = np.arctan2(lpos[:, 1], lpos[:, 0]) py = np.arcsin(lpos[:, 2]) dz = mag / self.radius # dz is distance the ray would travel px = ((-px + np.pi) / (2.0 * np.pi)) * res[0] py = ((-py + np.pi / 2.0) / np.pi) * res[1] # px and py should be dimensionless px = np.rint(px).astype("int64") py = np.rint(py).astype("int64") return px, py, dz class StereoSphericalLens(Lens): r"""A lens for a stereo cylindrical-spherical projection. Movies rendered in this way can be displayed in VR devices or stereo youtube 360 degree movies. 
""" def __init__(self): super().__init__() self.radius = 1.0 self.center = None self.disparity = None self.rotation_matrix = np.eye(3) def setup_box_properties(self, camera): self.radius = camera.width.max() super().setup_box_properties(camera) self.set_viewpoint(camera) def _get_sampler_params(self, camera, render_source): if self.disparity is None: self.disparity = camera.width[0] / 1000.0 single_resolution_y = int(np.floor(camera.resolution[1]) / 2) px = np.linspace(-np.pi, np.pi, camera.resolution[0], endpoint=True)[:, None] py = np.linspace(-np.pi / 2.0, np.pi / 2.0, single_resolution_y, endpoint=True)[ None, : ] vectors = np.zeros( (camera.resolution[0], single_resolution_y, 3), dtype="float64", order="C" ) vectors[:, :, 0] = np.cos(px) * np.cos(py) vectors[:, :, 1] = np.sin(px) * np.cos(py) vectors[:, :, 2] = np.sin(py) # The maximum possible length of ray max_length = ( unorm(camera.position - camera._domain_center) + 0.5 * unorm(camera._domain_width) + np.abs(self.disparity) ) # Rescale the ray to be long enough to cover the entire domain vectors = vectors * max_length R1 = get_rotation_matrix(0.5 * np.pi, [1, 0, 0]) R2 = get_rotation_matrix(0.5 * np.pi, [0, 0, 1]) uv = np.dot(R1, camera.unit_vectors) uv = np.dot(R2, uv) vectors.reshape((camera.resolution[0] * single_resolution_y, 3)) vectors = np.dot(vectors, uv) vectors.reshape((camera.resolution[0], single_resolution_y, 3)) vectors2 = np.zeros( (camera.resolution[0], single_resolution_y, 3), dtype="float64", order="C" ) vectors2[:, :, 0] = -np.sin(px) * np.ones((1, single_resolution_y)) vectors2[:, :, 1] = np.cos(px) * np.ones((1, single_resolution_y)) vectors2[:, :, 2] = 0 vectors2.reshape((camera.resolution[0] * single_resolution_y, 3)) vectors2 = np.dot(vectors2, uv) vectors2.reshape((camera.resolution[0], single_resolution_y, 3)) positions = np.tile(camera.position, camera.resolution[0] * single_resolution_y) positions = positions.reshape(camera.resolution[0], single_resolution_y, 3) # The left and right are switched here since VR is in LHS. positions_left = positions + vectors2 * self.disparity positions_right = positions + vectors2 * (-self.disparity) if render_source.zbuffer is not None: image = render_source.zbuffer.rgba else: image = self.new_image(camera) dummy = np.ones(3, dtype="float64") vectors_comb = uhstack([vectors, vectors]) positions_comb = uhstack([positions_left, positions_right]) image.shape = (camera.resolution[0], camera.resolution[1], 4) vectors_comb.shape = (camera.resolution[0], camera.resolution[1], 3) positions_comb.shape = (camera.resolution[0], camera.resolution[1], 3) sampler_params = { "vp_pos": positions_comb, "vp_dir": vectors_comb, "center": self.back_center, "bounds": (0.0, 1.0, 0.0, 1.0), "x_vec": dummy, "y_vec": dummy, "width": np.zeros(3, dtype="float64"), "image": image, "lens_type": "stereo-spherical", } return sampler_params def set_viewpoint(self, camera): """ For a PerspectiveLens, the viewpoint is the front center. 
""" self.viewpoint = camera.position lenses = { "plane-parallel": PlaneParallelLens, "perspective": PerspectiveLens, "stereo-perspective": StereoPerspectiveLens, "fisheye": FisheyeLens, "spherical": SphericalLens, "stereo-spherical": StereoSphericalLens, } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/off_axis_projection.py0000644000175100001770000004762514714401662025110 0ustar00runnerdockerimport numpy as np from yt.data_objects.api import ImageArray from yt.funcs import is_sequence, mylog from yt.geometry.oct_geometry_handler import OctreeIndex from yt.units.unit_object import Unit # type: ignore from yt.utilities.lib.image_utilities import add_cells_to_image_offaxis from yt.utilities.lib.partitioned_grid import PartitionedGrid from yt.utilities.lib.pixelization_routines import ( normalization_2d_utility, off_axis_projection_SPH, ) from yt.visualization.volume_rendering.lens import PlaneParallelLens from .render_source import KDTreeVolumeSource from .scene import Scene from .transfer_functions import ProjectionTransferFunction from .utils import data_source_or_all def off_axis_projection( data_source, center, normal_vector, width, resolution, item, weight=None, volume=None, no_ghost=False, interpolated=False, north_vector=None, depth=None, num_threads=1, method="integrate", ): r"""Project through a dataset, off-axis, and return the image plane. This function will accept the necessary items to integrate through a volume at an arbitrary angle and return the integrated field of view to the user. Note that if a weight is supplied, it will multiply the pre-interpolated values together, then create cell-centered values, then interpolate within the cell to conduct the integration. Parameters ---------- data_source : ~yt.data_objects.static_output.Dataset or ~yt.data_objects.data_containers.YTSelectionDataContainer This is the dataset or data object to volume render. center : array_like The current 'center' of the view port -- the focal point for the camera. normal_vector : array_like The vector between the camera position and the center. width : float or list of floats The current width of the image. If a single float, the volume is cubical, but if not, it is left/right, top/bottom, front/back resolution : int or list of ints The number of pixels in each direction. item: string The field to project through the volume weight : optional, default None If supplied, the field will be pre-multiplied by this, then divided by the integrated value of this field. This returns an average rather than a sum. volume : `yt.extensions.volume_rendering.AMRKDTree`, optional The volume to ray cast through. Can be specified for finer-grained control, but otherwise will be automatically generated. no_ghost: bool, optional Optimization option. If True, homogenized bricks will extrapolate out from grid instead of interpolating from ghost zones that have to first be calculated. This can lead to large speed improvements, but at a loss of accuracy/smoothness in resulting image. The effects are less notable when the transfer function is smooth and broad. Default: True interpolated : optional, default False If True, the data is first interpolated to vertex-centered data, then tri-linearly interpolated along the ray. Not suggested for quantitative studies. north_vector : optional, array_like, default None A vector that, if specified, restricts the orientation such that the north vector dotted into the image plane points "up". 
yt-4.4.0/yt/visualization/volume_rendering/off_axis_projection.py

import numpy as np

from yt.data_objects.api import ImageArray
from yt.funcs import is_sequence, mylog
from yt.geometry.oct_geometry_handler import OctreeIndex
from yt.units.unit_object import Unit  # type: ignore
from yt.utilities.lib.image_utilities import add_cells_to_image_offaxis
from yt.utilities.lib.partitioned_grid import PartitionedGrid
from yt.utilities.lib.pixelization_routines import (
    normalization_2d_utility,
    off_axis_projection_SPH,
)
from yt.visualization.volume_rendering.lens import PlaneParallelLens

from .render_source import KDTreeVolumeSource
from .scene import Scene
from .transfer_functions import ProjectionTransferFunction
from .utils import data_source_or_all


def off_axis_projection(
    data_source,
    center,
    normal_vector,
    width,
    resolution,
    item,
    weight=None,
    volume=None,
    no_ghost=False,
    interpolated=False,
    north_vector=None,
    depth=None,
    num_threads=1,
    method="integrate",
):
    r"""Project through a dataset, off-axis, and return the image plane.

    This function will accept the necessary items to integrate through a
    volume at an arbitrary angle and return the integrated field of view to
    the user. Note that if a weight is supplied, it will multiply the
    pre-interpolated values together, then create cell-centered values, then
    interpolate within the cell to conduct the integration.

    Parameters
    ----------
    data_source : ~yt.data_objects.static_output.Dataset
                  or ~yt.data_objects.data_containers.YTSelectionDataContainer
        This is the dataset or data object to volume render.
    center : array_like
        The current 'center' of the view port -- the focal point for the
        camera.
    normal_vector : array_like
        The vector between the camera position and the center.
    width : float or list of floats
        The current width of the image. If a single float, the volume is
        cubical, but if not, it is left/right, top/bottom, front/back.
    resolution : int or list of ints
        The number of pixels in each direction.
    item : string
        The field to project through the volume.
    weight : optional, default None
        If supplied, the field will be pre-multiplied by this, then divided
        by the integrated value of this field. This returns an average
        rather than a sum.
    volume : `yt.extensions.volume_rendering.AMRKDTree`, optional
        The volume to ray cast through. Can be specified for finer-grained
        control, but otherwise will be automatically generated.
    no_ghost : bool, optional
        Optimization option. If True, homogenized bricks will extrapolate
        out from grid instead of interpolating from ghost zones that have to
        first be calculated. This can lead to large speed improvements, but
        at a loss of accuracy/smoothness in resulting image. The effects are
        less notable when the transfer function is smooth and broad.
        Default: True
    interpolated : optional, default False
        If True, the data is first interpolated to vertex-centered data,
        then tri-linearly interpolated along the ray. Not suggested for
        quantitative studies.
    north_vector : optional, array_like, default None
        A vector that, if specified, restricts the orientation such that the
        north vector dotted into the image plane points "up". Useful for
        rotations or camera movements.
    depth : float, tuple[float, str], or unyt_array of size 1.
        specify the depth of the projection region (size along the
        line of sight). If no units are given (unyt_array or second
        tuple element), code units are assumed.
    num_threads : integer, optional, default 1
        Use this many OpenMP threads during projection.
    method : string
        The method of projection. Valid methods are:

        "integrate" with no weight_field specified : integrate the requested
        field along the line of sight.

        "integrate" with a weight_field specified : weight the requested
        field by the weighting field and integrate along the line of sight.

        "sum" : This method is the same as integrate, except that it does
        not multiply by a path length when performing the integration, and
        is just a straight summation of the field along the given axis.
        WARNING: This should only be used for uniform resolution grid
        datasets, as other datasets may result in unphysical images.

    Returns
    -------
    image : array
        An (N,N) array of the final integrated values, in float64 form.

    Examples
    --------
    >>> image = off_axis_projection(
    ...     ds,
    ...     [0.5, 0.5, 0.5],
    ...     [0.2, 0.3, 0.4],
    ...     0.2,
    ...     N,
    ...     ("gas", "temperature"),
    ...     ("gas", "density"),
    ... )
    >>> write_image(np.log10(image), "offaxis.png")
    """
    if method not in ("integrate", "sum"):
        raise NotImplementedError(
            "Only 'integrate' or 'sum' methods are valid for off-axis-projections"
        )

    if interpolated:
        raise NotImplementedError(
            "Only interpolated=False methods are currently implemented "
            "for off-axis-projections"
        )

    data_source = data_source_or_all(data_source)

    item = data_source._determine_fields([item])[0]

    # Assure vectors are numpy arrays as expected by cython code
    normal_vector = np.array(normal_vector, dtype="float64")
    if north_vector is not None:
        north_vector = np.array(north_vector, dtype="float64")
    # Add the normal as a field parameter to the data source
    # so line of sight fields can use it
    data_source.set_field_parameter("axis", normal_vector)

    # Sanitize units
    if not hasattr(center, "units"):
        center = data_source.ds.arr(center, "code_length")
    if not hasattr(width, "units"):
        width = data_source.ds.arr(width, "code_length")
    if depth is not None:
        # handle units (intrinsic or as a tuple),
        # then convert to code length
        # float -> assumed to be in code units
        if isinstance(depth, tuple):
            depth = data_source.ds.arr(np.array([depth[0]]), depth[1])
        if hasattr(depth, "units"):
            depth = depth.to("code_length").d
        # depth = data_source.ds.arr(depth, "code_length")

    if hasattr(data_source.ds, "_sph_ptypes"):
        if method != "integrate":
            raise NotImplementedError("SPH Only allows 'integrate' method")

        sph_ptypes = data_source.ds._sph_ptypes
        fi = data_source.ds.field_info[item]

        raise_error = False

        ptype = sph_ptypes[0]
        ppos = [f"particle_position_{ax}" for ax in "xyz"]
        # Assure that the field we're trying to off-axis project
        # has a field type as the SPH particle type or if the field is an
        # alias to an SPH field or is a 'gas' field
        if item[0] in data_source.ds.known_filters:
            if item[0] not in sph_ptypes:
                raise_error = True
            else:
                ptype = item[0]
                ppos = ["x", "y", "z"]
        elif fi.is_alias:
            if fi.alias_name[0] not in sph_ptypes:
                raise_error = True
            elif item[0] != "gas":
                ptype = item[0]
        else:
            if fi.name[0] not in sph_ptypes and fi.name[0] != "gas":
                raise_error = True
        if raise_error:
            raise RuntimeError(
                "Can only perform off-axis projections for SPH fields, "
                f"Received {item!r}"
            )

        normal = np.array(normal_vector)
        normal = normal / np.linalg.norm(normal)

        # If north_vector is None, we set the
default here. # This is chosen so that if normal_vector is one of the # cartesian coordinate axes, the projection will match # the corresponding on-axis projection. if north_vector is None: vecs = np.identity(3) t = np.cross(vecs, normal).sum(axis=1) ax = t.argmax() east_vector = np.cross(vecs[ax, :], normal).ravel() north = np.cross(normal, east_vector).ravel() else: north = np.array(north_vector) north = north / np.linalg.norm(north) east_vector = np.cross(north, normal).ravel() # if weight is None: buf = np.zeros((resolution[0], resolution[1]), dtype="float64") mask = np.ones_like(buf, dtype="uint8") ## width from fixed_resolution.py is just the size of the domain # x_min = center[0] - width[0] / 2 # x_max = center[0] + width[0] / 2 # y_min = center[1] - width[1] / 2 # y_max = center[1] + width[1] / 2 # z_min = center[2] - width[2] / 2 # z_max = center[2] + width[2] / 2 periodic = data_source.ds.periodicity le = data_source.ds.domain_left_edge.to("code_length").d re = data_source.ds.domain_right_edge.to("code_length").d x_min, y_min, z_min = le x_max, y_max, z_max = re bounds = [x_min, x_max, y_min, y_max, z_min, z_max] # only need (rotated) x/y widths _width = (width.to("code_length").d)[:2] finfo = data_source.ds.field_info[item] ounits = finfo.output_units kernel_name = None if hasattr(data_source.ds, "kernel_name"): kernel_name = data_source.ds.kernel_name if kernel_name is None: kernel_name = "cubic" if weight is None: for chunk in data_source.chunks([], "io"): off_axis_projection_SPH( chunk[ptype, ppos[0]].to("code_length").d, chunk[ptype, ppos[1]].to("code_length").d, chunk[ptype, ppos[2]].to("code_length").d, chunk[ptype, "mass"].to("code_mass").d, chunk[ptype, "density"].to("code_density").d, chunk[ptype, "smoothing_length"].to("code_length").d, bounds, center.to("code_length").d, _width, periodic, chunk[item].in_units(ounits), buf, mask, normal_vector, north, depth=depth, kernel_name=kernel_name, ) # Assure that the path length unit is in the default length units # for the dataset by scaling the units of the smoothing length, # which in the above calculation is set to be code_length path_length_unit = Unit( "code_length", registry=data_source.ds.unit_registry ) default_path_length_unit = data_source.ds.unit_system["length"] buf *= data_source.ds.quan(1, path_length_unit).in_units( default_path_length_unit ) item_unit = data_source.ds._get_field_info(item).units item_unit = Unit(item_unit, registry=data_source.ds.unit_registry) funits = item_unit * default_path_length_unit else: # if there is a weight field, take two projections: # one of field*weight, the other of just weight, and divide them weight_buff = np.zeros((resolution[0], resolution[1]), dtype="float64") wounits = data_source.ds.field_info[weight].output_units for chunk in data_source.chunks([], "io"): off_axis_projection_SPH( chunk[ptype, ppos[0]].to("code_length").d, chunk[ptype, ppos[1]].to("code_length").d, chunk[ptype, ppos[2]].to("code_length").d, chunk[ptype, "mass"].to("code_mass").d, chunk[ptype, "density"].to("code_density").d, chunk[ptype, "smoothing_length"].to("code_length").d, bounds, center.to("code_length").d, _width, periodic, chunk[item].in_units(ounits), buf, mask, normal_vector, north, weight_field=chunk[weight].in_units(wounits), depth=depth, kernel_name=kernel_name, ) for chunk in data_source.chunks([], "io"): off_axis_projection_SPH( chunk[ptype, ppos[0]].to("code_length").d, chunk[ptype, ppos[1]].to("code_length").d, chunk[ptype, ppos[2]].to("code_length").d, chunk[ptype, 
"mass"].to("code_mass").d, chunk[ptype, "density"].to("code_density").d, chunk[ptype, "smoothing_length"].to("code_length").d, bounds, center.to("code_length").d, _width, periodic, chunk[weight].to(wounits), weight_buff, mask, normal_vector, north, depth=depth, kernel_name=kernel_name, ) normalization_2d_utility(buf, weight_buff) item_unit = data_source.ds._get_field_info(item).units item_unit = Unit(item_unit, registry=data_source.ds.unit_registry) funits = item_unit myinfo = { "field": item, "east_vector": east_vector, "north_vector": north_vector, "normal_vector": normal_vector, "width": width, "depth": depth, "units": funits, "type": "SPH smoothed projection", } return ImageArray( buf, funits, registry=data_source.ds.unit_registry, info=myinfo ) sc = Scene() data_source.ds.index if item is None: field = data_source.ds.field_list[0] mylog.info("Setting default field to %s", field.__repr__()) funits = data_source.ds._get_field_info(item).units vol = KDTreeVolumeSource(data_source, item) vol.num_threads = num_threads if weight is None: vol.set_field(item) else: # This is a temporary field, which we will remove at the end. weightfield = ("index", "temp_weightfield") def _make_wf(f, w): def temp_weightfield(field, data): tr = data[f].astype("float64") * data[w] return tr.d return temp_weightfield data_source.ds.field_info.add_field( weightfield, sampling_type="cell", function=_make_wf(item, weight), units="", ) # Now we have to tell the dataset to add it and to calculate # its dependencies.. deps, _ = data_source.ds.field_info.check_derived_fields([weightfield]) data_source.ds.field_dependencies.update(deps) vol.set_field(weightfield) vol.set_weight_field(weight) ptf = ProjectionTransferFunction() vol.set_transfer_function(ptf) camera = sc.add_camera(data_source) camera.set_width(width) if not is_sequence(resolution): resolution = [resolution] * 2 camera.resolution = resolution if not is_sequence(width): width = data_source.ds.arr([width] * 3) normal = np.array(normal_vector) normal = normal / np.linalg.norm(normal) camera.position = center - width[2] * normal camera.focus = center # If north_vector is None, we set the default here. # This is chosen so that if normal_vector is one of the # cartesian coordinate axes, the projection will match # the corresponding on-axis projection. if north_vector is None: vecs = np.identity(3) t = np.cross(vecs, normal).sum(axis=1) ax = t.argmax() east_vector = np.cross(vecs[ax, :], normal).ravel() north = np.cross(normal, east_vector).ravel() else: north = np.array(north_vector) north = north / np.linalg.norm(north) camera.switch_orientation(normal, north) sc.add_source(vol) vol.set_sampler(camera, interpolated=False) assert vol.sampler is not None fields = [vol.field] if vol.weight_field is not None: fields.append(vol.weight_field) mylog.debug("Casting rays") index = data_source.ds.index lens = camera.lens # This implementation is optimized for octrees with plane-parallel lenses # and implicitely assumes that the cells are cubic. # NOTE: we should be able to relax the cubic assumption to a rectangular # assumption (if all cells have the same aspect ratio) with some # renormalization of the coordinates and the projection axes. # This is NOT done in the following. 
    dom_width = data_source.ds.domain_width
    cubic_domain = dom_width.max() == dom_width.min()

    if (
        isinstance(index, OctreeIndex)
        and isinstance(lens, PlaneParallelLens)
        and cubic_domain
    ):
        fields.extend(("index", k) for k in "xyz")
        fields.append(("index", "dx"))
        data_source.get_data(fields)

        # We need the width of the plot window in projected coordinates,
        # i.e. we ignore the z-component
        wmax = width[:2].max()

        # Normalize the positions & dx so that they are in the range [-0.5, 0.5]
        xyz = np.stack(
            [
                ((data_source["index", k] - center[i]) / wmax).to("1").d
                for i, k in enumerate("xyz")
            ],
            axis=-1,
        )
        for idim, periodic in enumerate(data_source.ds.periodicity):
            if not periodic:
                continue
            # Wrap into [-0.5, +0.5]
            xyz[..., idim] = (xyz[..., idim] + 0.5) % 1 - 0.5
        dx = (data_source["index", "dx"] / wmax).to("1").d

        if vol.weight_field is None:
            weight_field = np.ones_like(dx)
        else:
            weight_field = data_source[vol.weight_field]

        projected_weighted_qty = np.zeros(resolution)
        projected_weight = np.zeros(resolution)
        add_cells_to_image_offaxis(
            Xp=xyz,
            dXp=dx,
            qty=data_source[vol.field],
            weight=weight_field,
            rotation=camera.inv_mat.T,
            buffer=projected_weighted_qty,
            buffer_weight=projected_weight,
            Nx=resolution[0],
            Ny=resolution[1],
        )

        image = ImageArray(
            data_source.ds.arr(
                np.stack([projected_weighted_qty, projected_weight], axis=-1),
                "dimensionless",
            ),
            funits,
            registry=data_source.ds.unit_registry,
            info={"imtype": "rendering"},
        )
    else:
        for grid, mask in data_source.blocks:
            data = []
            for f in fields:
                # strip units before multiplying by mask for speed
                grid_data = grid[f]
                units = grid_data.units
                data.append(
                    data_source.ds.arr(grid_data.d * mask, units, dtype="float64")
                )
            pg = PartitionedGrid(
                grid.id,
                data,
                mask.astype("uint8"),
                grid.LeftEdge,
                grid.RightEdge,
                grid.ActiveDimensions.astype("int64"),
            )
            grid.clear_data()
            vol.sampler(pg, num_threads=num_threads)

        image = vol.finalize_image(camera, vol.sampler.aimage)
        image = ImageArray(
            image, funits, registry=data_source.ds.unit_registry, info=image.info
        )

    if weight is not None:
        data_source.ds.field_info.pop(("index", "temp_weightfield"))

    if method == "integrate":
        if weight is None:
            dl = width[2].in_units(data_source.ds.unit_system["length"])
            image *= dl
        else:
            mask = image[:, :, 1] == 0
            nmask = np.logical_not(mask)
            image[:, :, 0][nmask] /= image[:, :, 1][nmask]
            image[mask] = 0

    return image[:, :, 0]

yt-4.4.0/yt/visualization/volume_rendering/old_camera.py

from copy import deepcopy

import numpy as np

from yt._maintenance.ipython_compat import IS_IPYTHON
from yt.config import ytcfg
from yt.data_objects.api import ImageArray
from yt.funcs import ensure_numpy_array, get_num_threads, get_pbar, is_sequence, mylog
from yt.units.yt_array import YTArray
from yt.utilities.amr_kdtree.api import AMRKDTree
from yt.utilities.exceptions import YTNotInsideNotebook
from yt.utilities.lib.grid_traversal import (
    arr_fisheye_vectors,
    arr_pix2vec_nest,
    pixelize_healpix,
)
from yt.utilities.lib.image_samplers import (
    InterpolatedProjectionSampler,
    LightSourceRenderSampler,
    ProjectionSampler,
    VolumeRenderSampler,
)
from yt.utilities.lib.misc_utilities import lines
from yt.utilities.lib.partitioned_grid import PartitionedGrid
from yt.utilities.math_utils import get_rotation_matrix
from yt.utilities.object_registries import data_object_registry
from yt.utilities.orientation import Orientation
from
yt.utilities.parallel_tools.parallel_analysis_interface import ( ParallelAnalysisInterface, parallel_objects, ) from yt.visualization.image_writer import apply_colormap, write_bitmap, write_image from yt.visualization.volume_rendering.blenders import enhance_rgba from .transfer_functions import ProjectionTransferFunction def get_corners(le, re): return np.array( [ [le[0], le[1], le[2]], [re[0], le[1], le[2]], [re[0], re[1], le[2]], [le[0], re[1], le[2]], [le[0], le[1], re[2]], [re[0], le[1], re[2]], [re[0], re[1], re[2]], [le[0], re[1], re[2]], ], dtype="float64", ) class Camera(ParallelAnalysisInterface): r"""A viewpoint into a volume, for volume rendering. The camera represents the eye of an observer, which will be used to generate ray-cast volume renderings of the domain. Parameters ---------- center : array_like The current "center" of the view port -- the focal point for the camera. normal_vector : array_like The vector between the camera position and the center. width : float or list of floats The current width of the image. If a single float, the volume is cubical, but if not, it is left/right, top/bottom, front/back. resolution : int or list of ints The number of pixels in each direction. transfer_function : `yt.visualization.volume_rendering.TransferFunction` The transfer function used to map values to colors in an image. If not specified, defaults to a ProjectionTransferFunction. north_vector : array_like, optional The 'up' direction for the plane of rays. If not specific, calculated automatically. steady_north : bool, optional Boolean to control whether to normalize the north_vector by subtracting off the dot product of it and the normal vector. Makes it easier to do rotations along a single axis. If north_vector is specified, is switched to True. Default: False volume : `yt.extensions.volume_rendering.AMRKDTree`, optional The volume to ray cast through. Can be specified for finer-grained control, but otherwise will be automatically generated. fields : list of fields, optional This is the list of fields we want to volume render; defaults to Density. log_fields : list of bool, optional Whether we should take the log of the fields before supplying them to the volume rendering mechanism. sub_samples : int, optional The number of samples to take inside every cell per ray. ds : ~yt.data_objects.static_output.Dataset For now, this is a require parameter! But in the future it will become optional. This is the dataset to volume render. max_level: int, optional Specifies the maximum level to be rendered. Also specifies the maximum level used in the kd-Tree construction. Defaults to None (all levels), and only applies if use_kd=True. no_ghost: bool, optional Optimization option. If True, homogenized bricks will extrapolate out from grid instead of interpolating from ghost zones that have to first be calculated. This can lead to large speed improvements, but at a loss of accuracy/smoothness in resulting image. The effects are less notable when the transfer function is smooth and broad. Default: True data_source: data container, optional Optionally specify an arbitrary data source to the volume rendering. All cells not included in the data source will be ignored during ray casting. By default this will get set to ds.all_data(). 
Examples -------- >>> import yt.visualization.volume_rendering.api as vr >>> ds = load("DD1701") # Load a dataset >>> c = [0.5] * 3 # Center >>> L = [1.0, 1.0, 1.0] # Viewpoint >>> W = np.sqrt(3) # Width >>> N = 1024 # Pixels (1024^2) # Get density min, max >>> mi, ma = ds.all_data().quantities["Extrema"]("Density")[0] >>> mi, ma = np.log10(mi), np.log10(ma) # Construct transfer function >>> tf = vr.ColorTransferFunction((mi - 2, ma + 2)) # Sample transfer function with 5 gaussians. Use new col_bounds keyword. >>> tf.add_layers(5, w=0.05, col_bounds=(mi + 1, ma), colormap="nipy_spectral") # Create the camera object >>> cam = vr.Camera(c, L, W, (N, N), transfer_function=tf, ds=ds) # Ray cast, and save the image. >>> image = cam.snapshot(fn="my_rendering.png") """ _sampler_object = VolumeRenderSampler _tf_figure = None _render_figure = None def __init__( self, center, normal_vector, width, resolution, transfer_function=None, north_vector=None, steady_north=False, volume=None, fields=None, log_fields=None, sub_samples=5, ds=None, min_level=None, max_level=None, no_ghost=True, data_source=None, use_light=False, ): ParallelAnalysisInterface.__init__(self) if ds is not None: self.ds = ds if not is_sequence(resolution): resolution = (resolution, resolution) self.resolution = resolution self.sub_samples = sub_samples self.rotation_vector = north_vector if is_sequence(width) and len(width) > 1 and isinstance(width[1], str): width = self.ds.quan(width[0], units=width[1]) # Now convert back to code length for subsequent manipulation width = width.in_units("code_length").value if not is_sequence(width): width = (width, width, width) # left/right, top/bottom, front/back if not isinstance(width, YTArray): width = self.ds.arr(width, units="code_length") if not isinstance(center, YTArray): center = self.ds.arr(center, units="code_length") # Ensure that width and center are in the same units # Cf. https://bitbucket.org/yt_analysis/yt/issue/1080 width = width.in_units("code_length") center = center.in_units("code_length") self.orienter = Orientation( normal_vector, north_vector=north_vector, steady_north=steady_north ) if not steady_north: self.rotation_vector = self.orienter.unit_vectors[1] self._setup_box_properties(width, center, self.orienter.unit_vectors) if fields is None: fields = [("gas", "density")] self.fields = fields if transfer_function is None: transfer_function = ProjectionTransferFunction() self.transfer_function = transfer_function self.log_fields = log_fields dd = self.ds.all_data() efields = dd._determine_fields(self.fields) if self.log_fields is None: self.log_fields = [self.ds._get_field_info(f).take_log for f in efields] self.no_ghost = no_ghost self.use_light = use_light self.light_dir = None self.light_rgba = None if self.no_ghost: mylog.warning( "no_ghost is currently True (default). " "This may lead to artifacts at grid boundaries." 
) if data_source is None: data_source = self.ds.all_data() self.data_source = data_source if volume is None: volume = AMRKDTree( self.ds, min_level=min_level, max_level=max_level, data_source=self.data_source, ) self.volume = volume def _setup_box_properties(self, width, center, unit_vectors): self.width = width self.center = center self.box_vectors = YTArray( [ unit_vectors[0] * width[0], unit_vectors[1] * width[1], unit_vectors[2] * width[2], ] ) self.origin = center - 0.5 * width.dot(YTArray(unit_vectors, "")) self.back_center = center - 0.5 * width[2] * unit_vectors[2] self.front_center = center + 0.5 * width[2] * unit_vectors[2] def update_view_from_matrix(self, mat): pass def project_to_plane(self, pos, res=None): if res is None: res = self.resolution dx = np.dot(pos - self.origin, self.orienter.unit_vectors[1]) dy = np.dot(pos - self.origin, self.orienter.unit_vectors[0]) dz = np.dot(pos - self.center, self.orienter.unit_vectors[2]) # Transpose into image coords. py = (res[0] * (dx / self.width[0])).astype("int64") px = (res[1] * (dy / self.width[1])).astype("int64") return px, py, dz def draw_grids(self, im, alpha=0.3, cmap=None, min_level=None, max_level=None): r"""Draws Grids on an existing volume rendering. By mapping grid level to a color, draws edges of grids on a volume rendering using the camera orientation. Parameters ---------- im: Numpy ndarray Existing image that has the same resolution as the Camera, which will be painted by grid lines. alpha : float, optional The alpha value for the grids being drawn. Used to control how bright the grid lines are with respect to the image. Default : 0.3 cmap : string, optional Colormap to be used mapping grid levels to colors. min_level : int, optional Optional parameter to specify the min level grid boxes to overplot on the image. max_level : int, optional Optional parameters to specify the max level grid boxes to overplot on the image. 
Returns ------- None Examples -------- >>> im = cam.snapshot() >>> cam.add_grids(im) >>> write_bitmap(im, "render_with_grids.png") """ if cmap is None: cmap = ytcfg.get("yt", "default_colormap") region = self.data_source corners = [] levels = [] for block, _mask in region.blocks: block_corners = np.array( [ [block.LeftEdge[0], block.LeftEdge[1], block.LeftEdge[2]], [block.RightEdge[0], block.LeftEdge[1], block.LeftEdge[2]], [block.RightEdge[0], block.RightEdge[1], block.LeftEdge[2]], [block.LeftEdge[0], block.RightEdge[1], block.LeftEdge[2]], [block.LeftEdge[0], block.LeftEdge[1], block.RightEdge[2]], [block.RightEdge[0], block.LeftEdge[1], block.RightEdge[2]], [block.RightEdge[0], block.RightEdge[1], block.RightEdge[2]], [block.LeftEdge[0], block.RightEdge[1], block.RightEdge[2]], ], dtype="float64", ) corners.append(block_corners) levels.append(block.Level) corners = np.dstack(corners) levels = np.array(levels) if max_level is not None: subset = levels <= max_level levels = levels[subset] corners = corners[:, :, subset] if min_level is not None: subset = levels >= min_level levels = levels[subset] corners = corners[:, :, subset] colors = ( apply_colormap( levels * 1.0, color_bounds=[0, self.ds.index.max_level], cmap_name=cmap )[0, :, :] * 1.0 / 255.0 ) colors[:, 3] = alpha order = [0, 1, 1, 2, 2, 3, 3, 0] order += [4, 5, 5, 6, 6, 7, 7, 4] order += [0, 4, 1, 5, 2, 6, 3, 7] vertices = np.empty([corners.shape[2] * 2 * 12, 3]) vertices = self.ds.arr(vertices, "code_length") for i in range(3): vertices[:, i] = corners[order, i, ...].ravel(order="F") px, py, dz = self.project_to_plane(vertices, res=im.shape[:2]) # Must normalize the image nim = im.rescale(inline=False) enhance_rgba(nim) nim.add_background_color("black", inline=True) # we flipped it in snapshot to get the orientation correct, so # flip the lines lines(nim.d, px.d, py.d, colors, 24, flip=1) return nim def draw_coordinate_vectors(self, im, length=0.05, thickness=1): r"""Draws three coordinate vectors in the corner of a rendering. Modifies an existing image to have three lines corresponding to the coordinate directions colored by {x,y,z} = {r,g,b}. Currently only functional for plane-parallel volume rendering. Parameters ---------- im: Numpy ndarray Existing image that has the same resolution as the Camera, which will be painted by grid lines. length: float, optional The length of the lines, as a fraction of the image size. Default : 0.05 thickness : int, optional Thickness in pixels of the line to be drawn. Returns ------- None Modifies -------- im: The original image. Examples -------- >>> im = cam.snapshot() >>> cam.draw_coordinate_vectors(im) >>> im.write_png("render_with_grids.png") """ length_pixels = length * self.resolution[0] # Put the starting point in the lower left px0 = int(length * self.resolution[0]) # CS coordinates! 
py0 = int((1.0 - length) * self.resolution[1]) alpha = im[:, :, 3].max() if alpha == 0.0: alpha = 1.0 coord_vectors = [ np.array([length_pixels, 0.0, 0.0]), np.array([0.0, length_pixels, 0.0]), np.array([0.0, 0.0, length_pixels]), ] colors = [ np.array([1.0, 0.0, 0.0, alpha]), np.array([0.0, 1.0, 0.0, alpha]), np.array([0.0, 0.0, 1.0, alpha]), ] # we flipped it in snapshot to get the orientation correct, so # flip the lines for vec, color in zip(coord_vectors, colors, strict=True): dx = int(np.dot(vec, self.orienter.unit_vectors[0])) dy = int(np.dot(vec, self.orienter.unit_vectors[1])) px = np.array([px0, px0 + dx], dtype="int64") py = np.array([py0, py0 + dy], dtype="int64") lines(im.d, px, py, np.array([color, color]), 1, thickness, flip=1) def draw_line(self, im, x0, x1, color=None): r"""Draws a line on an existing volume rendering. Given starting and ending positions x0 and x1, draws a line on a volume rendering using the camera orientation. Parameters ---------- im : ImageArray or 2D ndarray Existing image that has the same resolution as the Camera, which will be painted by grid lines. x0 : YTArray or ndarray Starting coordinate. If passed in as an ndarray, assumed to be in code units. x1 : YTArray or ndarray Ending coordinate, in simulation coordinates. If passed in as an ndarray, assumed to be in code units. color : array like, optional Color of the line (r, g, b, a). Defaults to white. Returns ------- None Examples -------- >>> im = cam.snapshot() >>> cam.draw_line(im, np.array([0.1, 0.2, 0.3]), np.array([0.5, 0.6, 0.7])) >>> write_bitmap(im, "render_with_line.png") """ if color is None: color = np.array([1.0, 1.0, 1.0, 1.0]) if not hasattr(x0, "units"): x0 = self.ds.arr(x0, "code_length") if not hasattr(x1, "units"): x1 = self.ds.arr(x1, "code_length") dx0 = ((x0 - self.origin) * self.orienter.unit_vectors[1]).sum() dx1 = ((x1 - self.origin) * self.orienter.unit_vectors[1]).sum() dy0 = ((x0 - self.origin) * self.orienter.unit_vectors[0]).sum() dy1 = ((x1 - self.origin) * self.orienter.unit_vectors[0]).sum() py0 = int(self.resolution[0] * (dx0 / self.width[0])) py1 = int(self.resolution[0] * (dx1 / self.width[0])) px0 = int(self.resolution[1] * (dy0 / self.width[1])) px1 = int(self.resolution[1] * (dy1 / self.width[1])) px = np.array([px0, px1], dtype="int64") py = np.array([py0, py1], dtype="int64") # we flipped it in snapshot to get the orientation correct, so # flip the lines lines(im.d, px, py, np.array([color, color]), flip=1) def draw_domain(self, im, alpha=0.3): r"""Draws domain edges on an existing volume rendering. Draws a white wireframe on the domain edges. Parameters ---------- im: Numpy ndarray Existing image that has the same resolution as the Camera, which will be painted by grid lines. alpha : float, optional The alpha value for the wireframe being drawn. Used to control how bright the lines are with respect to the image. Default : 0.3 Returns ------- nim: Numpy ndarray A new image with the domain lines drawn Examples -------- >>> im = cam.snapshot() >>> nim = cam.draw_domain(im) >>> write_bitmap(nim, "render_with_domain_boundary.png") """ # Must normalize the image nim = im.rescale(inline=False) enhance_rgba(nim) nim.add_background_color("black", inline=True) self.draw_box( nim, self.ds.domain_left_edge, self.ds.domain_right_edge, color=np.array([1.0, 1.0, 1.0, alpha]), ) return nim def draw_box(self, im, le, re, color=None): r"""Draws a box on an existing volume rendering. 
Draws a box defined by a left and right edge by modifying an existing volume rendering Parameters ---------- im: Numpy ndarray Existing image that has the same resolution as the Camera, which will be painted by grid lines. le: Numpy ndarray Left corner of the box re : Numpy ndarray Right corner of the box color : array like, optional Color of the box (r, g, b, a). Defaults to white. Returns ------- None Examples -------- >>> im = cam.snapshot() >>> cam.draw_box(im, np.array([0.1, 0.2, 0.3]), np.array([0.5, 0.6, 0.7])) >>> write_bitmap(im, "render_with_box.png") """ if color is None: color = np.array([1.0, 1.0, 1.0, 1.0]) corners = get_corners(le, re) order = [0, 1, 1, 2, 2, 3, 3, 0] order += [4, 5, 5, 6, 6, 7, 7, 4] order += [0, 4, 1, 5, 2, 6, 3, 7] vertices = np.empty([24, 3]) vertices = self.ds.arr(vertices, "code_length") for i in range(3): vertices[:, i] = corners[order, i, ...].ravel(order="F") px, py, dz = self.project_to_plane(vertices, res=im.shape[:2]) # we flipped it in snapshot to get the orientation correct, so # flip the lines lines( im.d, px.d.astype("int64"), py.d.astype("int64"), color.reshape(1, 4), 24, flip=1, ) def look_at(self, new_center, north_vector=None): r"""Change the view direction based on a new focal point. This will recalculate all the necessary vectors and vector planes to orient the image plane so that it points at a new location. Parameters ---------- new_center : array_like The new "center" of the view port -- the focal point for the camera. north_vector : array_like, optional The "up" direction for the plane of rays. If not specific, calculated automatically. """ normal_vector = self.front_center - new_center self.orienter.switch_orientation( normal_vector=normal_vector, north_vector=north_vector ) def switch_orientation(self, normal_vector=None, north_vector=None): r""" Change the view direction based on any of the orientation parameters. This will recalculate all the necessary vectors and vector planes related to an orientable object. Parameters ---------- normal_vector: array_like, optional The new looking vector. north_vector : array_like, optional The 'up' direction for the plane of rays. If not specific, calculated automatically. """ if north_vector is None: north_vector = self.north_vector if normal_vector is None: normal_vector = self.normal_vector self.orienter._setup_normalized_vectors(normal_vector, north_vector) def switch_view( self, normal_vector=None, width=None, center=None, north_vector=None ): r"""Change the view based on any of the view parameters. This will recalculate the orientation and width based on any of normal_vector, width, center, and north_vector. Parameters ---------- normal_vector: array_like, optional The new looking vector. width: float or array of floats, optional The new width. Can be a single value W -> [W,W,W] or an array [W1, W2, W3] (left/right, top/bottom, front/back) center: array_like, optional Specifies the new center. north_vector : array_like, optional The 'up' direction for the plane of rays. If not specific, calculated automatically. 
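
        Examples
        --------
        An editor's sketch (untested):

        >>> cam.switch_view(
        ...     normal_vector=[1.0, 0.0, 0.0],
        ...     width=[0.3, 0.3, 0.3],
        ...     center=[0.5, 0.5, 0.5],
        ... )
        >>> im = cam.snapshot()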
""" if width is None: width = self.width if not is_sequence(width): width = (width, width, width) # left/right, tom/bottom, front/back self.width = width if center is not None: self.center = center if north_vector is None: north_vector = self.orienter.north_vector if normal_vector is None: normal_vector = self.orienter.normal_vector self.switch_orientation(normal_vector=normal_vector, north_vector=north_vector) self._setup_box_properties(width, self.center, self.orienter.unit_vectors) def new_image(self): image = np.zeros( (self.resolution[0], self.resolution[1], 4), dtype="float64", order="C" ) return image def get_sampler_args(self, image): rotp = np.concatenate( [self.orienter.inv_mat.ravel("F"), self.back_center.ravel().ndview] ) args = ( np.atleast_3d(rotp), np.atleast_3d(self.box_vectors[2]), self.back_center, ( -self.width[0] / 2.0, self.width[0] / 2.0, -self.width[1] / 2.0, self.width[1] / 2.0, ), image, self.orienter.unit_vectors[0], self.orienter.unit_vectors[1], np.array(self.width, dtype="float64"), "KDTree", self.transfer_function, self.sub_samples, ) kwargs = { "lens_type": "plane-parallel", } return args, kwargs def get_sampler(self, args, kwargs): if self.use_light: if self.light_dir is None: self.set_default_light_dir() temp_dir = np.empty(3, dtype="float64") temp_dir = ( self.light_dir[0] * self.orienter.unit_vectors[1] + self.light_dir[1] * self.orienter.unit_vectors[2] + self.light_dir[2] * self.orienter.unit_vectors[0] ) if self.light_rgba is None: self.set_default_light_rgba() sampler = LightSourceRenderSampler( *args, light_dir=temp_dir, light_rgba=self.light_rgba, **kwargs ) else: sampler = self._sampler_object(*args, **kwargs) return sampler def finalize_image(self, image): view_pos = ( self.front_center + self.orienter.unit_vectors[2] * 1.0e6 * self.width[2] ) image = self.volume.reduce_tree_images(image, view_pos) if not self.transfer_function.grey_opacity: image[:, :, 3] = 1.0 return image def _render(self, double_check, num_threads, image, sampler): ncells = sum(b.source_mask.size for b in self.volume.bricks) pbar = get_pbar("Ray casting", ncells) total_cells = 0 if double_check: for brick in self.volume.bricks: for data in brick.my_data: if np.any(np.isnan(data)): raise RuntimeError view_pos = ( self.front_center + self.orienter.unit_vectors[2] * 1.0e6 * self.width[2] ) for brick in self.volume.traverse(view_pos): sampler(brick, num_threads=num_threads) total_cells += brick.source_mask.size pbar.update(total_cells) pbar.finish() image = sampler.aimage image = self.finalize_image(image) return image def _pyplot(self): from matplotlib import pyplot return pyplot def show_tf(self): if self._tf_figure is None: self._tf_figure = self._pyplot.figure(2) self.transfer_function.show(ax=self._tf_figure.axes) self._pyplot.draw() def annotate(self, ax, enhance=True, label_fmt=None): ax.get_xaxis().set_visible(False) ax.get_xaxis().set_ticks([]) ax.get_yaxis().set_visible(False) ax.get_yaxis().set_ticks([]) cb = self._pyplot.colorbar( ax.images[0], pad=0.0, fraction=0.05, drawedges=True, shrink=0.9 ) label = self.ds._get_field_info(self.fields[0]).get_label() if self.log_fields[0]: label = r"$\rm{log}\ $" + label self.transfer_function.vert_cbar(ax=cb.ax, label=label, label_fmt=label_fmt) def show_mpl(self, im, enhance=True, clear_fig=True): if self._render_figure is None: self._render_figure = self._pyplot.figure(1) if clear_fig: self._render_figure.clf() if enhance: nz = im[im > 0.0] nim = im / (nz.mean() + 6.0 * np.std(nz)) nim[nim > 1.0] = 1.0 nim[nim < 0.0] = 0.0 del 
nz else: nim = im ax = self._pyplot.imshow(nim[:, :, :3] / nim[:, :, :3].max(), origin="upper") return ax def draw(self): self._pyplot.draw() def save_annotated( self, fn, image, enhance=True, dpi=100, clear_fig=True, label_fmt=None ): """ Save an image with the transfer function represented as a colorbar. Parameters ---------- fn : str The output filename image : ImageArray The image to annotate enhance : bool, optional Enhance the contrast (default: True) dpi : int, optional Dots per inch in the output image (default: 100) clear_fig : bool, optional Reset the figure (through matplotlib.pyplot.clf()) before drawing. Setting this to false can allow us to overlay the image onto an existing figure label_fmt : str, optional A format specifier (e.g., label_fmt="%.2g") to use in formatting the data values that label the transfer function colorbar. """ image = image.swapaxes(0, 1) ax = self.show_mpl(image, enhance=enhance, clear_fig=clear_fig) self.annotate(ax.axes, enhance, label_fmt=label_fmt) self._pyplot.savefig(fn, bbox_inches="tight", facecolor="black", dpi=dpi) def save_image(self, image, fn=None, clip_ratio=None, transparent=False): if self.comm.rank == 0 and fn is not None: background = None if transparent else "black" image.write_png(fn, rescale=True, background=background) def initialize_source(self): return self.volume.initialize_source( self.fields, self.log_fields, self.no_ghost ) def get_information(self): info_dict = { "fields": self.fields, "type": self.__class__.__name__, "east_vector": self.orienter.unit_vectors[0], "north_vector": self.orienter.unit_vectors[1], "normal_vector": self.orienter.unit_vectors[2], "width": self.width, "dataset": self.ds.directory, } return info_dict def snapshot( self, fn=None, clip_ratio=None, double_check=False, num_threads=0, transparent=False, ): r"""Ray-cast the camera. This method instructs the camera to take a snapshot -- i.e., call the ray caster -- based on its current settings. Parameters ---------- fn : string, optional If supplied, the image will be saved out to this before being returned. Scaling will be to the maximum value. clip_ratio : float, optional If supplied, the 'max_val' argument to write_bitmap will be handed clip_ratio * image.std() double_check : bool, optional Optionally makes sure that the data contains only valid entries. Used for debugging. num_threads : int, optional If supplied, will use 'num_threads' number of OpenMP threads during the rendering. Defaults to 0, which uses the environment variable OMP_NUM_THREADS. transparent: bool, optional Optionally saves out the 4-channel rgba image, which can appear empty if the alpha channel is low everywhere. Default: False Returns ------- image : array An (N,M,3) array of the final returned values, in float64 form. """ if num_threads is None: num_threads = get_num_threads() image = self.new_image() args, kwargs = self.get_sampler_args(image) sampler = self.get_sampler(args, kwargs) self.initialize_source() image = ImageArray( self._render(double_check, num_threads, image, sampler), info=self.get_information(), ) # flip it up/down to handle how the png orientation is done image = image[:, ::-1, :] self.save_image(image, fn=fn, clip_ratio=clip_ratio, transparent=transparent) return image def show(self, clip_ratio=None): r"""This will take a snapshot and display the resultant image in the IPython notebook. 
If yt is being run from within an IPython session, and it is able to determine this, this function will snapshot and send the resultant image to the IPython notebook for display. If yt can't determine if it's inside an IPython session, it will raise YTNotInsideNotebook. Parameters ---------- clip_ratio : float, optional If supplied, the 'max_val' argument to write_bitmap will be handed clip_ratio * image.std() Examples -------- >>> cam.show() """ if IS_IPYTHON: from IPython.core.displaypub import publish_display_data image = self.snapshot()[:, :, :3] if clip_ratio is not None: clip_ratio *= image.std() data = write_bitmap(image, None, clip_ratio) publish_display_data( data={"image/png": data}, source="yt.visualization.volume_rendering.camera.Camera", ) else: raise YTNotInsideNotebook def set_default_light_dir(self): self.light_dir = [1.0, 1.0, 1.0] def set_default_light_rgba(self): self.light_rgba = [1.0, 1.0, 1.0, 1.0] def zoom(self, factor): r"""Change the distance to the focal point. This will zoom the camera in by some `factor` toward the focal point, along the current view direction, modifying the left/right and up/down extents as well. Parameters ---------- factor : float The factor by which to reduce the distance to the focal point. Notes ----- You will need to call snapshot() again to get a new image. """ self.width /= factor self._setup_box_properties(self.width, self.center, self.orienter.unit_vectors) def zoomin(self, final, n_steps, clip_ratio=None): r"""Loop over a zoomin and return snapshots along the way. This will yield `n_steps` snapshots until the current view has been zooming in to a final factor of `final`. Parameters ---------- final : float The zoom factor, with respect to current, desired at the end of the sequence. n_steps : int The number of zoom snapshots to make. clip_ratio : float, optional If supplied, the 'max_val' argument to write_bitmap will be handed clip_ratio * image.std() Examples -------- >>> for i, snapshot in enumerate(cam.zoomin(100.0, 10)): ... iw.write_bitmap(snapshot, "zoom_%04i.png" % i) """ f = final ** (1.0 / n_steps) for _ in range(n_steps): self.zoom(f) yield self.snapshot(clip_ratio=clip_ratio) def move_to( self, final, n_steps, final_width=None, exponential=False, clip_ratio=None ): r"""Loop over a look_at This will yield `n_steps` snapshots until the current view has been moved to a final center of `final` with a final width of final_width. Parameters ---------- final : array_like The final center to move to after `n_steps` n_steps : int The number of look_at snapshots to make. final_width: float or array_like, optional Specifies the final width after `n_steps`. Useful for moving and zooming at the same time. exponential : boolean Specifies whether the move/zoom transition follows an exponential path toward the destination or linear clip_ratio : float, optional If supplied, the 'max_val' argument to write_bitmap will be handed clip_ratio * image.std() Examples -------- >>> for i, snapshot in enumerate(cam.move_to([0.2, 0.3, 0.6], 10)): ... 
iw.write_bitmap(snapshot, "move_%04i.png" % i) """ dW = None if not isinstance(final, YTArray): final = self.ds.arr(final, units="code_length") if exponential: if final_width is not None: if not is_sequence(final_width): final_width = [final_width, final_width, final_width] if not isinstance(final_width, YTArray): final_width = self.ds.arr(final_width, units="code_length") # left/right, top/bottom, front/back if (self.center == 0.0).all(): self.center += (final - self.center) / (10.0 * n_steps) final_zoom = final_width / self.width dW = final_zoom ** (1.0 / n_steps) else: dW = self.ds.arr([1.0, 1.0, 1.0], "code_length") position_diff = final / self.center dx = position_diff ** (1.0 / n_steps) else: if final_width is not None: if not is_sequence(final_width): final_width = [final_width, final_width, final_width] if not isinstance(final_width, YTArray): final_width = self.ds.arr(final_width, units="code_length") # left/right, top/bottom, front/back dW = (1.0 * final_width - self.width) / n_steps else: dW = self.ds.arr([0.0, 0.0, 0.0], "code_length") dx = (final - self.center) * 1.0 / n_steps for _ in range(n_steps): if exponential: self.switch_view(center=self.center * dx, width=self.width * dW) else: self.switch_view(center=self.center + dx, width=self.width + dW) yield self.snapshot(clip_ratio=clip_ratio) def rotate(self, theta, rot_vector=None): r"""Rotate by a given angle Rotate the view. If `rot_vector` is None, rotation will occur around the `north_vector`. Parameters ---------- theta : float, in radians Angle (in radians) by which to rotate the view. rot_vector : array_like, optional Specify the rotation vector around which rotation will occur. Defaults to None, which sets rotation around `north_vector` Examples -------- >>> cam.rotate(np.pi / 4) """ rotate_all = rot_vector is not None if rot_vector is None: rot_vector = self.rotation_vector else: rot_vector = ensure_numpy_array(rot_vector) rot_vector = rot_vector / np.linalg.norm(rot_vector) R = get_rotation_matrix(theta, rot_vector) normal_vector = self.front_center - self.center normal_vector = normal_vector / np.sqrt((normal_vector**2).sum()) if rotate_all: self.switch_view( normal_vector=np.dot(R, normal_vector), north_vector=np.dot(R, self.orienter.unit_vectors[1]), ) else: self.switch_view(normal_vector=np.dot(R, normal_vector)) def pitch(self, theta): r"""Rotate by a given angle about the horizontal axis Pitch the view. Parameters ---------- theta : float, in radians Angle (in radians) by which to pitch the view. Examples -------- >>> cam.pitch(np.pi / 4) """ rot_vector = self.orienter.unit_vectors[0] R = get_rotation_matrix(theta, rot_vector) self.switch_view( normal_vector=np.dot(R, self.orienter.unit_vectors[2]), north_vector=np.dot(R, self.orienter.unit_vectors[1]), ) if self.orienter.steady_north: self.orienter.north_vector = self.orienter.unit_vectors[1] def yaw(self, theta): r"""Rotate by a given angle about the vertical axis Yaw the view. Parameters ---------- theta : float, in radians Angle (in radians) by which to yaw the view. Examples -------- >>> cam.yaw(np.pi / 4) """ rot_vector = self.orienter.unit_vectors[1] R = get_rotation_matrix(theta, rot_vector) self.switch_view(normal_vector=np.dot(R, self.orienter.unit_vectors[2])) def roll(self, theta): r"""Rotate by a given angle about the view normal axis Roll the view. Parameters ---------- theta : float, in radians Angle (in radians) by which to roll the view. 
Examples -------- >>> cam.roll(np.pi / 4) """ rot_vector = self.orienter.unit_vectors[2] R = get_rotation_matrix(theta, rot_vector) self.switch_view( normal_vector=np.dot(R, self.orienter.unit_vectors[2]), north_vector=np.dot(R, self.orienter.unit_vectors[1]), ) if self.orienter.steady_north: self.orienter.north_vector = np.dot(R, self.orienter.north_vector) def rotation(self, theta, n_steps, rot_vector=None, clip_ratio=None): r"""Loop over rotate, creating a rotation This will yield `n_steps` snapshots until the current view has been rotated by an angle `theta` Parameters ---------- theta : float, in radians Angle (in radians) by which to rotate the view. n_steps : int The number of look_at snapshots to make. rot_vector : array_like, optional Specify the rotation vector around which rotation will occur. Defaults to None, which sets rotation around the original `north_vector` clip_ratio : float, optional If supplied, the 'max_val' argument to write_bitmap will be handed clip_ratio * image.std() Examples -------- >>> for i, snapshot in enumerate(cam.rotation(np.pi, 10)): ... iw.write_bitmap(snapshot, "rotation_%04i.png" % i) """ dtheta = (1.0 * theta) / n_steps for _ in range(n_steps): self.rotate(dtheta, rot_vector=rot_vector) yield self.snapshot(clip_ratio=clip_ratio) data_object_registry["camera"] = Camera class InteractiveCamera(Camera): frames: list[ImageArray] = [] def snapshot(self, fn=None, clip_ratio=None): self._pyplot.figure(2) self.transfer_function.show() self._pyplot.draw() im = Camera.snapshot(self, fn, clip_ratio) self._pyplot.figure(1) self._pyplot.imshow(im / im.max()) self._pyplot.draw() self.frames.append(im) def rotation(self, theta, n_steps, rot_vector=None): for frame in Camera.rotation(self, theta, n_steps, rot_vector): if frame is not None: self.frames.append(frame) def zoomin(self, final, n_steps): for frame in Camera.zoomin(self, final, n_steps): if frame is not None: self.frames.append(frame) def clear_frames(self): del self.frames self.frames = [] def save(self, fn): self._pyplot.savefig(fn, bbox_inches="tight", facecolor="black") def save_frames(self, basename, clip_ratio=None): for i, frame in enumerate(self.frames): fn = basename + "_%04i.png" % i if clip_ratio is not None: write_bitmap(frame, fn, clip_ratio * frame.std()) else: write_bitmap(frame, fn) data_object_registry["interactive_camera"] = InteractiveCamera class PerspectiveCamera(Camera): r"""A viewpoint into a volume, for perspective volume rendering. The camera represents the eye of an observer, which will be used to generate ray-cast volume renderings of the domain. The rays start from the camera and end on the image plane, which generates a perspective view. Note: at the moment, this results in a left-handed coordinate system view Parameters ---------- center : array_like The location of the camera normal_vector : array_like The vector from the camera position to the center of the image plane width : float or list of floats width[0] and width[1] give the width and height of the image plane, and width[2] gives the depth of the image plane (distance between the camera and the center of the image plane). The view angles thus become: 2 * arctan(0.5 * width[0] / width[2]) in horizontal direction 2 * arctan(0.5 * width[1] / width[2]) in vertical direction (The following parameters are identical with the definitions in Camera class) resolution : int or list of ints The number of pixels in each direction. 
transfer_function : `yt.visualization.volume_rendering.TransferFunction` The transfer function used to map values to colors in an image. If not specified, defaults to a ProjectionTransferFunction. north_vector : array_like, optional The 'up' direction for the plane of rays. If not specific, calculated automatically. steady_north : bool, optional Boolean to control whether to normalize the north_vector by subtracting off the dot product of it and the normal vector. Makes it easier to do rotations along a single axis. If north_vector is specified, is switched to True. Default: False volume : `yt.extensions.volume_rendering.AMRKDTree`, optional The volume to ray cast through. Can be specified for finer-grained control, but otherwise will be automatically generated. fields : list of fields, optional This is the list of fields we want to volume render; defaults to Density. log_fields : list of bool, optional Whether we should take the log of the fields before supplying them to the volume rendering mechanism. sub_samples : int, optional The number of samples to take inside every cell per ray. ds : ~yt.data_objects.static_output.Dataset For now, this is a require parameter! But in the future it will become optional. This is the dataset to volume render. use_kd: bool, optional Specifies whether or not to use a kd-Tree framework for the Homogenized Volume and ray-casting. Default to True. max_level: int, optional Specifies the maximum level to be rendered. Also specifies the maximum level used in the kd-Tree construction. Defaults to None (all levels), and only applies if use_kd=True. no_ghost: bool, optional Optimization option. If True, homogenized bricks will extrapolate out from grid instead of interpolating from ghost zones that have to first be calculated. This can lead to large speed improvements, but at a loss of accuracy/smoothness in resulting image. The effects are less notable when the transfer function is smooth and broad. Default: True data_source: data container, optional Optionally specify an arbitrary data source to the volume rendering. All cells not included in the data source will be ignored during ray casting. By default this will get set to ds.all_data(). 
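
    Examples
    --------
    An editor's sketch (untested), mirroring the ``Camera`` example above;
    ``tf`` is a transfer function built as in that example, and
    ``PerspectiveCamera`` is assumed to be importable the same way.

    >>> import yt.visualization.volume_rendering.api as vr
    >>> ds = load("DD1701")  # Load a dataset
    >>> c = [0.5] * 3  # Camera location
    >>> L = [1.0, 1.0, 1.0]  # Vector from the camera toward the image plane
    >>> W = [0.4, 0.4, 0.4]  # Plane width and height, plus camera-plane distance
    >>> N = 1024  # Pixels (1024^2)
    >>> cam = vr.PerspectiveCamera(c, L, W, (N, N), transfer_function=tf, ds=ds)
    >>> image = cam.snapshot(fn="perspective_rendering.png")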
""" def __init__(self, *args, **kwargs): Camera.__init__(self, *args, **kwargs) def get_sampler_args(self, image): east_vec = self.orienter.unit_vectors[0].reshape(3, 1) north_vec = self.orienter.unit_vectors[1].reshape(3, 1) px = np.linspace(-0.5, 0.5, self.resolution[0])[np.newaxis, :] py = np.linspace(-0.5, 0.5, self.resolution[1])[np.newaxis, :] sample_x = self.width[0] * np.array(east_vec * px).transpose() sample_y = self.width[1] * np.array(north_vec * py).transpose() vectors = np.zeros( (self.resolution[0], self.resolution[1], 3), dtype="float64", order="C" ) sample_x = np.repeat( sample_x.reshape(self.resolution[0], 1, 3), self.resolution[1], axis=1 ) sample_y = np.repeat( sample_y.reshape(1, self.resolution[1], 3), self.resolution[0], axis=0 ) normal_vec = np.zeros( (self.resolution[0], self.resolution[1], 3), dtype="float64", order="C" ) normal_vec[:, :, 0] = self.orienter.unit_vectors[2, 0] normal_vec[:, :, 1] = self.orienter.unit_vectors[2, 1] normal_vec[:, :, 2] = self.orienter.unit_vectors[2, 2] vectors = sample_x + sample_y + normal_vec * self.width[2] positions = np.zeros( (self.resolution[0], self.resolution[1], 3), dtype="float64", order="C" ) positions[:, :, 0] = self.center[0] positions[:, :, 1] = self.center[1] positions[:, :, 2] = self.center[2] positions = self.ds.arr(positions, units="code_length") dummy = np.ones(3, dtype="float64") image.shape = (self.resolution[0], self.resolution[1], 4) args = ( positions, vectors, self.back_center, (0.0, 1.0, 0.0, 1.0), image, dummy, dummy, np.zeros(3, dtype="float64"), "KDTree", self.transfer_function, self.sub_samples, ) kwargs = { "lens_type": "perspective", } return args, kwargs def _render(self, double_check, num_threads, image, sampler): ncells = sum(b.source_mask.size for b in self.volume.bricks) pbar = get_pbar("Ray casting", ncells) total_cells = 0 if double_check: for brick in self.volume.bricks: for data in brick.my_data: if np.any(np.isnan(data)): raise RuntimeError for brick in self.volume.traverse(self.front_center): sampler(brick, num_threads=num_threads) total_cells += brick.source_mask.size pbar.update(total_cells) pbar.finish() image = self.finalize_image(sampler.aimage) return image def finalize_image(self, image): view_pos = self.front_center image.shape = self.resolution[0], self.resolution[1], 4 image = self.volume.reduce_tree_images(image, view_pos) if not self.transfer_function.grey_opacity: image[:, :, 3] = 1.0 return image def project_to_plane(self, pos, res=None): if res is None: res = self.resolution sight_vector = pos - self.center pos1 = sight_vector for i in range(0, sight_vector.shape[0]): sight_vector_norm = np.sqrt(np.dot(sight_vector[i], sight_vector[i])) sight_vector[i] = sight_vector[i] / sight_vector_norm sight_vector = self.ds.arr(sight_vector.value, units="dimensionless") sight_center = self.center + self.width[2] * self.orienter.unit_vectors[2] for i in range(0, sight_vector.shape[0]): sight_angle_cos = np.dot(sight_vector[i], self.orienter.unit_vectors[2]) if np.arccos(sight_angle_cos) < 0.5 * np.pi: sight_length = self.width[2] / sight_angle_cos else: # The corner is on the backwards, then put it outside of the # image It can not be simply removed because it may connect to # other corner within the image, which produces visible domain # boundary line sight_length = np.sqrt( self.width[0] ** 2 + self.width[1] ** 2 ) / np.sqrt(1 - sight_angle_cos**2) pos1[i] = self.center + sight_length * sight_vector[i] dx = np.dot(pos1 - sight_center, self.orienter.unit_vectors[0]) dy = np.dot(pos1 - 
sight_center, self.orienter.unit_vectors[1]) dz = np.dot(pos1 - sight_center, self.orienter.unit_vectors[2]) # Transpose into image coords. px = (res[0] * 0.5 + res[0] / self.width[0] * dx).astype("int64") py = (res[1] * 0.5 + res[1] / self.width[1] * dy).astype("int64") return px, py, dz def yaw(self, theta, rot_center): r"""Rotate by a given angle about the vertical axis through the point center. This is accomplished by rotating the focal point and then setting the looking vector to point to the center. Yaw the view. Parameters ---------- theta : float, in radians Angle (in radians) by which to yaw the view. rot_center : a tuple (x, y, z) The point to rotate about Examples -------- >>> cam.yaw(np.pi / 4, (0.0, 0.0, 0.0)) """ rot_vector = self.orienter.unit_vectors[1] focal_point = self.center - rot_center R = get_rotation_matrix(theta, rot_vector) focal_point = np.dot(R, focal_point) + rot_center normal_vector = rot_center - focal_point normal_vector = normal_vector / np.sqrt((normal_vector**2).sum()) self.switch_view(normal_vector=normal_vector, center=focal_point) data_object_registry["perspective_camera"] = PerspectiveCamera def corners(left_edge, right_edge): return np.array( [ [left_edge[:, 0], left_edge[:, 1], left_edge[:, 2]], [right_edge[:, 0], left_edge[:, 1], left_edge[:, 2]], [right_edge[:, 0], right_edge[:, 1], left_edge[:, 2]], [right_edge[:, 0], right_edge[:, 1], right_edge[:, 2]], [left_edge[:, 0], right_edge[:, 1], right_edge[:, 2]], [left_edge[:, 0], left_edge[:, 1], right_edge[:, 2]], [right_edge[:, 0], left_edge[:, 1], right_edge[:, 2]], [left_edge[:, 0], right_edge[:, 1], left_edge[:, 2]], ], dtype="float64", ) class HEALpixCamera(Camera): _sampler_object = None def __init__( self, center, radius, nside, transfer_function=None, fields=None, sub_samples=5, log_fields=None, volume=None, ds=None, use_kd=True, no_ghost=False, use_light=False, inner_radius=10, ): mylog.error("I am sorry, HEALpix Camera does not work yet in 3.0") raise NotImplementedError def new_image(self): image = np.zeros((12 * self.nside**2, 1, 4), dtype="float64", order="C") return image def get_sampler_args(self, image): nv = 12 * self.nside**2 vs = arr_pix2vec_nest(self.nside, np.arange(nv)) vs.shape = (nv, 1, 3) vs += 1e-8 uv = np.ones(3, dtype="float64") positions = np.ones((nv, 1, 3), dtype="float64") * self.center dx = min(g.dds.min() for g in self.ds.index.find_point(self.center)[0]) positions += self.inner_radius * dx * vs vs *= self.radius args = ( positions, vs, self.center, (0.0, 1.0, 0.0, 1.0), image, uv, uv, np.zeros(3, dtype="float64"), "KDTree", ) if self._needs_tf: args += (self.transfer_function,) args += (self.sub_samples,) return args, {} def _render(self, double_check, num_threads, image, sampler): pbar = get_pbar( "Ray casting", (self.volume.brick_dimensions + 1).prod(axis=-1).sum() ) total_cells = 0 if double_check: for brick in self.volume.bricks: for data in brick.my_data: if np.any(np.isnan(data)): raise RuntimeError view_pos = self.center for brick in self.volume.traverse(view_pos): sampler(brick, num_threads=num_threads) total_cells += np.prod(brick.my_data[0].shape) pbar.update(total_cells) pbar.finish() image = sampler.aimage self.finalize_image(image) return image def finalize_image(self, image): view_pos = self.center image = self.volume.reduce_tree_images(image, view_pos) return image def get_information(self): info_dict = { "fields": self.fields, "type": self.__class__.__name__, "center": self.center, "radius": self.radius, "dataset": self.ds.directory, } return 
info_dict def snapshot( self, fn=None, clip_ratio=None, double_check=False, num_threads=0, clim=None, label=None, ): r"""Ray-cast the camera. This method instructs the camera to take a snapshot -- i.e., call the ray caster -- based on its current settings. Parameters ---------- fn : string, optional If supplied, the image will be saved out to this before being returned. Scaling will be to the maximum value. clip_ratio : float, optional If supplied, the 'max_val' argument to write_bitmap will be handed clip_ratio * image.std() Returns ------- image : array An (N,M,3) array of the final returned values, in float64 form. """ if num_threads is None: num_threads = get_num_threads() image = self.new_image() args, kwargs = self.get_sampler_args(image) sampler = self.get_sampler(args, kwargs) self.volume.initialize_source() image = ImageArray( self._render(double_check, num_threads, image, sampler), info=self.get_information(), ) self.save_image(image, fn=fn, clim=clim, label=label) return image def save_image(self, image, fn=None, clim=None, label=None): if self.comm.rank == 0 and fn is not None: # This assumes Density; this is a relatively safe assumption. if label is None: label = f"Projected {self.fields[0]}" if clim is not None: cmin, cmax = clim else: cmin = cmax = None plot_allsky_healpix( image[:, 0, 0], self.nside, fn, label, cmin=cmin, cmax=cmax ) class StereoPairCamera(Camera): def __init__(self, original_camera, relative_separation=0.005): ParallelAnalysisInterface.__init__(self) self.original_camera = original_camera self.relative_separation = relative_separation def split(self): oc = self.original_camera uv = oc.orienter.unit_vectors c = oc.center fc = oc.front_center wx, wy, wz = oc.width left_normal = fc + uv[1] * 0.5 * self.relative_separation * wx - c right_normal = fc - uv[1] * 0.5 * self.relative_separation * wx - c left_camera = Camera( c, left_normal, oc.width, oc.resolution, oc.transfer_function, north_vector=uv[0], volume=oc.volume, fields=oc.fields, log_fields=oc.log_fields, sub_samples=oc.sub_samples, ds=oc.ds, ) right_camera = Camera( c, right_normal, oc.width, oc.resolution, oc.transfer_function, north_vector=uv[0], volume=oc.volume, fields=oc.fields, log_fields=oc.log_fields, sub_samples=oc.sub_samples, ds=oc.ds, ) return (left_camera, right_camera) class FisheyeCamera(Camera): def __init__( self, center, radius, fov, resolution, transfer_function=None, fields=None, sub_samples=5, log_fields=None, volume=None, ds=None, no_ghost=False, rotation=None, use_light=False, ): ParallelAnalysisInterface.__init__(self) self.use_light = use_light self.light_dir = None self.light_rgba = None if rotation is None: rotation = np.eye(3) self.rotation_matrix = rotation self.no_ghost = no_ghost if ds is not None: self.ds = ds self.center = np.array(center, dtype="float64") self.radius = radius self.fov = fov if is_sequence(resolution): raise RuntimeError("Resolution must be a single int") self.resolution = resolution if transfer_function is None: transfer_function = ProjectionTransferFunction() self.transfer_function = transfer_function if fields is None: fields = [("gas", "density")] dd = self.ds.all_data() fields = dd._determine_fields(fields) self.fields = fields if log_fields is None: log_fields = [self.ds._get_field_info(f).take_log for f in fields] self.log_fields = log_fields self.sub_samples = sub_samples if volume is None: volume = AMRKDTree(self.ds) volume.set_fields(fields, log_fields, no_ghost) self.volume = volume def get_information(self): return {} def new_image(self): 
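        # The fisheye image is built as a flat (resolution**2, 1, 4) RGBA
        # buffer, one ray per pixel; finalize_image later reshapes it into
        # a square (resolution, resolution, 4) image.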
image = np.zeros((self.resolution**2, 1, 4), dtype="float64", order="C") return image def get_sampler_args(self, image): vp = arr_fisheye_vectors(self.resolution, self.fov) vp.shape = (self.resolution**2, 1, 3) vp2 = vp.copy() for i in range(3): vp[:, :, i] = (vp2 * self.rotation_matrix[:, i]).sum(axis=2) del vp2 vp *= self.radius uv = np.ones(3, dtype="float64") positions = np.ones((self.resolution**2, 1, 3), dtype="float64") * self.center args = ( positions, vp, self.center, (0.0, 1.0, 0.0, 1.0), image, uv, uv, np.zeros(3, dtype="float64"), "KDTree", self.transfer_function, self.sub_samples, ) return args, {} def finalize_image(self, image): image.shape = self.resolution, self.resolution, 4 def _render(self, double_check, num_threads, image, sampler): pbar = get_pbar( "Ray casting", (self.volume.brick_dimensions + 1).prod(axis=-1).sum() ) total_cells = 0 if double_check: for brick in self.volume.bricks: for data in brick.my_data: if np.any(np.isnan(data)): raise RuntimeError view_pos = self.center for brick in self.volume.traverse(view_pos): sampler(brick, num_threads=num_threads) total_cells += np.prod(brick.my_data[0].shape) pbar.update(total_cells) pbar.finish() image = sampler.aimage self.finalize_image(image) return image class MosaicCamera(Camera): def __init__( self, center, normal_vector, width, resolution, transfer_function=None, north_vector=None, steady_north=False, volume=None, fields=None, log_fields=None, sub_samples=5, ds=None, use_kd=True, l_max=None, no_ghost=True, tree_type="domain", expand_factor=1.0, le=None, re=None, nimx=1, nimy=1, procs_per_wg=None, preload=True, use_light=False, ): ParallelAnalysisInterface.__init__(self) self.procs_per_wg = procs_per_wg if ds is not None: self.ds = ds if not is_sequence(resolution): resolution = (int(resolution / nimx), int(resolution / nimy)) self.resolution = resolution self.nimx = nimx self.nimy = nimy self.sub_samples = sub_samples if not is_sequence(width): width = (width, width, width) # front/back, left/right, top/bottom self.width = np.array([width[0], width[1], width[2]]) self.center = center self.steady_north = steady_north self.expand_factor = expand_factor # This seems to be necessary for now. Not sure what goes wrong when not true. if north_vector is not None: self.steady_north = True self.north_vector = north_vector self.normal_vector = normal_vector if fields is None: fields = [("gas", "density")] self.fields = fields if transfer_function is None: transfer_function = ProjectionTransferFunction() self.transfer_function = transfer_function self.log_fields = log_fields self.use_kd = use_kd self.l_max = l_max self.no_ghost = no_ghost self.preload = preload self.use_light = use_light self.light_dir = None self.light_rgba = None self.le = le self.re = re self.width[0] /= self.nimx self.width[1] /= self.nimy self.orienter = Orientation( normal_vector, north_vector=north_vector, steady_north=steady_north ) self.rotation_vector = self.orienter.north_vector # self._setup_box_properties(width, center, self.orienter.unit_vectors) if self.no_ghost: mylog.warning( "no_ghost is currently True (default). " "This may lead to artifacts at grid boundaries." 
) self.tree_type = tree_type self.volume = volume # self.cameras = np.empty(self.nimx*self.nimy) def build_volume( self, volume, fields, log_fields, l_max, no_ghost, tree_type, le, re ): if volume is None: if self.use_kd: raise NotImplementedError volume = AMRKDTree( self.ds, l_max=l_max, fields=self.fields, no_ghost=no_ghost, tree_type=tree_type, log_fields=log_fields, le=le, re=re, ) else: self.use_kd = isinstance(volume, AMRKDTree) return volume def new_image(self): image = np.zeros( (self.resolution[0], self.resolution[1], 4), dtype="float64", order="C" ) return image def _setup_box_properties(self, width, center, unit_vectors): owidth = deepcopy(width) self.width = width self.origin = ( self.center - 0.5 * self.nimx * self.width[0] * self.orienter.unit_vectors[0] - 0.5 * self.nimy * self.width[1] * self.orienter.unit_vectors[1] - 0.5 * self.width[2] * self.orienter.unit_vectors[2] ) dx = self.width[0] dy = self.width[1] offi = self.imi + 0.5 offj = self.imj + 0.5 mylog.info("Mosaic offset: %f %f", offi, offj) global_center = self.center self.center = self.origin self.center += offi * dx * self.orienter.unit_vectors[0] self.center += offj * dy * self.orienter.unit_vectors[1] self.box_vectors = np.array( [ self.orienter.unit_vectors[0] * dx * self.nimx, self.orienter.unit_vectors[1] * dy * self.nimy, self.orienter.unit_vectors[2] * self.width[2], ] ) self.back_center = ( self.center - 0.5 * self.width[0] * self.orienter.unit_vectors[2] ) self.front_center = ( self.center + 0.5 * self.width[0] * self.orienter.unit_vectors[2] ) self.center = global_center self.width = owidth def snapshot(self, fn=None, clip_ratio=None, double_check=False, num_threads=0): my_storage = {} offx, offy = np.meshgrid(range(self.nimx), range(self.nimy)) offxy = zip(offx.ravel(), offy.ravel(), strict=True) for sto, xy in parallel_objects( offxy, self.procs_per_wg, storage=my_storage, dynamic=True ): self.volume = self.build_volume( self.volume, self.fields, self.log_fields, self.l_max, self.no_ghost, self.tree_type, self.le, self.re, ) self.initialize_source() self.imi, self.imj = xy mylog.debug("Working on: %i %i", self.imi, self.imj) self._setup_box_properties( self.width, self.center, self.orienter.unit_vectors ) image = self.new_image() args, kwargs = self.get_sampler_args(image) sampler = self.get_sampler(args, kwargs) image = self._render(double_check, num_threads, image, sampler) sto.id = self.imj * self.nimx + self.imi sto.result = image image = self.reduce_images(my_storage) self.save_image(image, fn=fn, clip_ratio=clip_ratio) return image def reduce_images(self, im_dict): final_image = 0 if self.comm.rank == 0: offx, offy = np.meshgrid(range(self.nimx), range(self.nimy)) offxy = zip(offx.ravel(), offy.ravel(), strict=True) nx, ny = self.resolution final_image = np.empty( (nx * self.nimx, ny * self.nimy, 4), dtype="float64", order="C" ) for xy in offxy: i, j = xy ind = j * self.nimx + i final_image[i * nx : (i + 1) * nx, j * ny : (j + 1) * ny, :] = im_dict[ ind ] return final_image data_object_registry["mosaic_camera"] = MosaicCamera def plot_allsky_healpix( image, nside, fn, label="", rotation=None, take_log=True, resolution=512, cmin=None, cmax=None, ): import matplotlib.backends.backend_agg import matplotlib.figure if rotation is None: rotation = np.eye(3, dtype="float64") img, count = pixelize_healpix(nside, image, resolution, resolution, rotation) fig = matplotlib.figure.Figure((10, 5)) ax = fig.add_subplot(1, 1, 1, projection="aitoff") if take_log: func = np.log10 else: def _identity(x): return x 
func = _identity implot = ax.imshow( func(img), extent=(-np.pi, np.pi, -np.pi / 2, np.pi / 2), clip_on=False, aspect=0.5, vmin=cmin, vmax=cmax, ) cb = fig.colorbar(implot, orientation="horizontal") cb.set_label(label) ax.xaxis.set_ticks(()) ax.yaxis.set_ticks(()) canvas = matplotlib.backends.backend_agg.FigureCanvasAgg(fig) canvas.print_figure(fn) return img, count class ProjectionCamera(Camera): def __init__( self, center, normal_vector, width, resolution, field, weight=None, volume=None, no_ghost=False, north_vector=None, ds=None, interpolated=False, method="integrate", ): if not interpolated: volume = 1 self.interpolated = interpolated self.field = field self.weight = weight self.resolution = resolution self.method = method fields = [field] if self.weight is not None: # This is a temporary field, which we will remove at the end # it is given a unique name to avoid conflicting with other # class instances self.weightfield = ("index", "temp_weightfield_%u" % (id(self),)) def _make_wf(f, w): def temp_weightfield(field, data): tr = data[f].astype("float64") * data[w] return data.apply_units(tr, field.units) return temp_weightfield ds.field_info.add_field( self.weightfield, function=_make_wf(self.field, self.weight) ) # Now we have to tell the dataset to add it and to calculate # its dependencies.. deps, _ = ds.field_info.check_derived_fields([self.weightfield]) ds.field_dependencies.update(deps) fields = [self.weightfield, self.weight] self.fields = fields self.log_fields = [False] * len(self.fields) Camera.__init__( self, center, normal_vector, width, resolution, None, fields=fields, ds=ds, volume=volume, log_fields=self.log_fields, north_vector=north_vector, no_ghost=no_ghost, ) # this would be better in an __exit__ function, but that would require # changes in code that uses this class def __del__(self): if hasattr(self, "weightfield") and hasattr(self, "ds"): try: self.ds.field_info.pop(self.weightfield) self.ds.field_dependencies.pop(self.weightfield) except KeyError: pass try: Camera.__del__(self) except AttributeError: pass def get_sampler(self, args, kwargs): if self.interpolated: sampler = InterpolatedProjectionSampler(*args, **kwargs) else: sampler = ProjectionSampler(*args, **kwargs) return sampler def initialize_source(self): if self.interpolated: Camera.initialize_source(self) else: pass def get_sampler_args(self, image): rotp = np.concatenate( [self.orienter.inv_mat.ravel("F"), self.back_center.ravel().ndview] ) args = ( np.atleast_3d(rotp), np.atleast_3d(self.box_vectors[2]), self.back_center, ( -self.width[0] / 2.0, self.width[0] / 2.0, -self.width[1] / 2.0, self.width[1] / 2.0, ), image, self.orienter.unit_vectors[0], self.orienter.unit_vectors[1], np.array(self.width, dtype="float64"), "KDTree", self.sub_samples, ) kwargs = {"lens_type": "plane-parallel"} return args, kwargs def finalize_image(self, image): ds = self.ds dd = ds.all_data() field = dd._determine_fields([self.field])[0] finfo = ds._get_field_info(field) dl = 1.0 if self.method == "integrate": if self.weight is None: dl = self.width[2].in_units(ds.unit_system["length"]) else: image[:, :, 0] /= image[:, :, 1] return ImageArray(image[:, :, 0], finfo.units, registry=ds.unit_registry) * dl def _render(self, double_check, num_threads, image, sampler): # Calculate the eight corners of the box # Back corners ... 
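        # Interpolated projections reuse the generic Camera ray-caster; the
        # non-interpolated path below walks the region's grid blocks directly.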
        if self.interpolated:
            return Camera._render(self, double_check, num_threads, image, sampler)
        ds = self.ds
        width = self.width[2]
        north_vector = self.orienter.unit_vectors[0]
        east_vector = self.orienter.unit_vectors[1]
        normal_vector = self.orienter.unit_vectors[2]
        fields = self.fields
        mi = ds.domain_right_edge.copy()
        ma = ds.domain_left_edge.copy()
        for off1 in [-1, 1]:
            for off2 in [-1, 1]:
                for off3 in [-1, 1]:
                    this_point = (
                        self.center
                        + width / 2.0 * off1 * north_vector
                        + width / 2.0 * off2 * east_vector
                        + width / 2.0 * off3 * normal_vector
                    )
                    np.minimum(mi, this_point, mi)
                    np.maximum(ma, this_point, ma)
        # Now we have a bounding box.
        data_source = ds.region(self.center, mi, ma)
        for grid, mask in data_source.blocks:
            data = [(grid[field] * mask).astype("float64") for field in fields]
            pg = PartitionedGrid(
                grid.id,
                data,
                mask.astype("uint8"),
                grid.LeftEdge,
                grid.RightEdge,
                grid.ActiveDimensions.astype("int64"),
            )
            grid.clear_data()
            sampler(pg, num_threads=num_threads)
        image = self.finalize_image(sampler.aimage)
        return image

    def save_image(self, image, fn=None, clip_ratio=None):
        dd = self.ds.all_data()
        field = dd._determine_fields([self.field])[0]
        finfo = self.ds._get_field_info(field)
        if finfo.take_log:
            im = np.log10(image)
        else:
            im = image
        if self.comm.rank == 0 and fn is not None:
            # Note: write_image does not accept a clipping value, so
            # clip_ratio is currently ignored here (both branches of the
            # previous conditional were identical).
            write_image(im, fn)

    def snapshot(self, fn=None, clip_ratio=None, double_check=False, num_threads=0):
        if num_threads is None:
            num_threads = get_num_threads()
        image = self.new_image()
        args, kwargs = self.get_sampler_args(image)
        sampler = self.get_sampler(args, kwargs)
        self.initialize_source()
        image = ImageArray(
            self._render(double_check, num_threads, image, sampler),
            info=self.get_information(),
        )
        self.save_image(image, fn=fn, clip_ratio=clip_ratio)
        return image

    snapshot.__doc__ = Camera.snapshot.__doc__


data_object_registry["projection_camera"] = ProjectionCamera


class SphericalCamera(Camera):
    def __init__(self, *args, **kwargs):
        Camera.__init__(self, *args, **kwargs)
        if self.resolution[0] / self.resolution[1] != 2:
            mylog.warning("It's recommended to set the aspect ratio to 2:1")
        self.resolution = np.asarray(self.resolution) + 2

    def get_sampler_args(self, image):
        px = np.linspace(-np.pi, np.pi, self.resolution[0], endpoint=True)[:, None]
        py = np.linspace(-np.pi / 2.0, np.pi / 2.0, self.resolution[1], endpoint=True)[
            None, :
        ]
        vectors = np.zeros(
            (self.resolution[0], self.resolution[1], 3), dtype="float64", order="C"
        )
        vectors[:, :, 0] = np.cos(px) * np.cos(py)
        vectors[:, :, 1] = np.sin(px) * np.cos(py)
        vectors[:, :, 2] = np.sin(py)
        vectors = vectors * self.width[0]
        positions = self.center + vectors * 0
        R1 = get_rotation_matrix(0.5 * np.pi, [1, 0, 0])
        R2 = get_rotation_matrix(0.5 * np.pi, [0, 0, 1])
        uv = np.dot(R1, self.orienter.unit_vectors)
        uv = np.dot(R2, uv)
        # The reshape results were previously discarded (no-op calls); assign
        # them so the rotation is applied over the intended (N, 3) layout.
        vectors = vectors.reshape((self.resolution[0] * self.resolution[1], 3))
        vectors = np.dot(vectors, uv)
        vectors = vectors.reshape((self.resolution[0], self.resolution[1], 3))
        dummy = np.ones(3, dtype="float64")
        image.shape = (self.resolution[0] * self.resolution[1], 1, 4)
        vectors.shape = (self.resolution[0] * self.resolution[1], 1, 3)
        positions.shape = (self.resolution[0] * self.resolution[1], 1, 3)
        args = (
            positions,
            vectors,
            self.back_center,
            (0.0, 1.0, 0.0, 1.0),
            image,
            dummy,
            dummy,
            np.zeros(3, dtype="float64"),
            self.transfer_function,
            self.sub_samples,
        )
        return args, {"lens_type": "spherical"}

    def _render(self, double_check, num_threads, image, sampler):
        ncells = sum(b.source_mask.size for b in self.volume.bricks)
pbar = get_pbar("Ray casting", ncells) total_cells = 0 if double_check: for brick in self.volume.bricks: for data in brick.my_data: if np.any(np.isnan(data)): raise RuntimeError for brick in self.volume.traverse(self.front_center): sampler(brick, num_threads=num_threads) total_cells += brick.source_mask.size pbar.update(total_cells) pbar.finish() image = self.finalize_image(sampler.aimage) return image def finalize_image(self, image): view_pos = self.front_center image.shape = self.resolution[0], self.resolution[1], 4 image = self.volume.reduce_tree_images(image, view_pos) if not self.transfer_function.grey_opacity: image[:, :, 3] = 1.0 image = image[1:-1, 1:-1, :] return image data_object_registry["spherical_camera"] = SphericalCamera class StereoSphericalCamera(Camera): def __init__(self, *args, **kwargs): self.disparity = kwargs.pop("disparity", 0.0) Camera.__init__(self, *args, **kwargs) self.disparity = self.ds.arr(self.disparity, units="code_length") self.disparity_s = self.ds.arr(0.0, units="code_length") if self.resolution[0] / self.resolution[1] != 2: mylog.info("Warning: It's recommended to set the aspect ratio to be 2:1") self.resolution = np.asarray(self.resolution) + 2 if self.disparity <= 0.0: self.disparity = self.width[0] / 1000.0 mylog.info( "Warning: Invalid value of disparity; now reset it to %f", self.disparity, ) def get_sampler_args(self, image): px = np.linspace(-np.pi, np.pi, self.resolution[0], endpoint=True)[:, None] py = np.linspace(-np.pi / 2.0, np.pi / 2.0, self.resolution[1], endpoint=True)[ None, : ] vectors = np.zeros( (self.resolution[0], self.resolution[1], 3), dtype="float64", order="C" ) vectors[:, :, 0] = np.cos(px) * np.cos(py) vectors[:, :, 1] = np.sin(px) * np.cos(py) vectors[:, :, 2] = np.sin(py) vectors2 = np.zeros( (self.resolution[0], self.resolution[1], 3), dtype="float64", order="C" ) vectors2[:, :, 0] = -np.sin(px) * np.ones((1, self.resolution[1])) vectors2[:, :, 1] = np.cos(px) * np.ones((1, self.resolution[1])) vectors2[:, :, 2] = 0 positions = self.center + vectors2 * self.disparity_s vectors = vectors * self.width[0] R1 = get_rotation_matrix(0.5 * np.pi, [1, 0, 0]) R2 = get_rotation_matrix(0.5 * np.pi, [0, 0, 1]) uv = np.dot(R1, self.orienter.unit_vectors) uv = np.dot(R2, uv) vectors.reshape((self.resolution[0] * self.resolution[1], 3)) vectors = np.dot(vectors, uv) vectors.reshape((self.resolution[0], self.resolution[1], 3)) dummy = np.ones(3, dtype="float64") image.shape = (self.resolution[0] * self.resolution[1], 1, 4) vectors.shape = (self.resolution[0] * self.resolution[1], 1, 3) positions.shape = (self.resolution[0] * self.resolution[1], 1, 3) args = ( positions, vectors, self.back_center, (0.0, 1.0, 0.0, 1.0), image, dummy, dummy, np.zeros(3, dtype="float64"), "KDTree", self.transfer_function, self.sub_samples, ) kwargs = {"lens_type": "stereo-spherical"} return args, kwargs def snapshot( self, fn=None, clip_ratio=None, double_check=False, num_threads=0, transparent=False, ): if num_threads is None: num_threads = get_num_threads() self.disparity_s = self.disparity image1 = self.new_image() args1, kwargs1 = self.get_sampler_args(image1) sampler1 = self.get_sampler(args1, kwargs1) self.initialize_source() image1 = self._render(double_check, num_threads, image1, sampler1, "(Left) ") self.disparity_s = -self.disparity image2 = self.new_image() args2, kwargs2 = self.get_sampler_args(image2) sampler2 = self.get_sampler(args2, kwargs2) self.initialize_source() image2 = self._render(double_check, num_threads, image2, sampler2, "(Right)") 
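        # Stitch the left- and right-eye renderings side by side into a
        # single stereo frame before the final tree reduction.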
image = np.hstack([image1, image2]) image = self.volume.reduce_tree_images(image, self.center) image = ImageArray(image, info=self.get_information()) self.save_image(image, fn=fn, clip_ratio=clip_ratio, transparent=transparent) return image def _render(self, double_check, num_threads, image, sampler, msg): ncells = sum(b.source_mask.size for b in self.volume.bricks) pbar = get_pbar("Ray casting " + msg, ncells) total_cells = 0 if double_check: for brick in self.volume.bricks: for data in brick.my_data: if np.any(np.isnan(data)): raise RuntimeError for brick in self.volume.traverse(self.front_center): sampler(brick, num_threads=num_threads) total_cells += brick.source_mask.size pbar.update(total_cells) pbar.finish() image = sampler.aimage.copy() image.shape = self.resolution[0], self.resolution[1], 4 if not self.transfer_function.grey_opacity: image[:, :, 3] = 1.0 image = image[1:-1, 1:-1, :] return image data_object_registry["stereospherical_camera"] = StereoSphericalCamera # replaced in volume_rendering API by the function of the same name in # yt/visualization/volume_rendering/off_axis_projection def off_axis_projection( ds, center, normal_vector, width, resolution, field, weight=None, volume=None, no_ghost=False, interpolated=False, north_vector=None, method="integrate", ): r"""Project through a dataset, off-axis, and return the image plane. This function will accept the necessary items to integrate through a volume at an arbitrary angle and return the integrated field of view to the user. Note that if a weight is supplied, it will multiply the pre-interpolated values together, then create cell-centered values, then interpolate within the cell to conduct the integration. Parameters ---------- ds : ~yt.data_objects.static_output.Dataset This is the dataset to volume render. center : array_like The current 'center' of the view port -- the focal point for the camera. normal_vector : array_like The vector between the camera position and the center. width : float or list of floats The current width of the image. If a single float, the volume is cubical, but if not, it is left/right, top/bottom, front/back resolution : int or list of ints The number of pixels in each direction. field : string The field to project through the volume weight : optional, default None If supplied, the field will be pre-multiplied by this, then divided by the integrated value of this field. This returns an average rather than a sum. volume : `yt.extensions.volume_rendering.AMRKDTree`, optional The volume to ray cast through. Can be specified for finer-grained control, but otherwise will be automatically generated. no_ghost: bool, optional Optimization option. If True, homogenized bricks will extrapolate out from grid instead of interpolating from ghost zones that have to first be calculated. This can lead to large speed improvements, but at a loss of accuracy/smoothness in resulting image. The effects are less notable when the transfer function is smooth and broad. Default: True interpolated : optional, default False If True, the data is first interpolated to vertex-centered data, then tri-linearly interpolated along the ray. Not suggested for quantitative studies. method : string The method of projection. Valid methods are: "integrate" with no weight_field specified : integrate the requested field along the line of sight. "integrate" with a weight_field specified : weight the requested field by the weighting field and integrate along the line of sight. 
"sum" : This method is the same as integrate, except that it does not multiply by a path length when performing the integration, and is just a straight summation of the field along the given axis. WARNING: This should only be used for uniform resolution grid datasets, as other datasets may result in unphysical images. Returns ------- image : array An (N,N) array of the final integrated values, in float64 form. Examples -------- >>> image = off_axis_projection( ... ds, [0.5, 0.5, 0.5], [0.2, 0.3, 0.4], 0.2, N, "temperature", "density" ... ) >>> write_image(np.log10(image), "offaxis.png") """ projcam = ProjectionCamera( center, normal_vector, width, resolution, field, weight=weight, ds=ds, volume=volume, no_ghost=no_ghost, interpolated=interpolated, north_vector=north_vector, method=method, ) image = projcam.snapshot() return image[:, :] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/render_source.py0000644000175100001770000014515514714401662023712 0ustar00runnerdockerimport abc import warnings from functools import wraps from types import ModuleType from typing import Literal import numpy as np from yt.config import ytcfg from yt.data_objects.image_array import ImageArray from yt.funcs import ensure_numpy_array, is_sequence, mylog from yt.geometry.grid_geometry_handler import GridIndex from yt.geometry.oct_geometry_handler import OctreeIndex from yt.utilities.amr_kdtree.api import AMRKDTree from yt.utilities.configure import YTConfig, configuration_callbacks from yt.utilities.lib.bounding_volume_hierarchy import BVH from yt.utilities.lib.misc_utilities import zlines, zpoints from yt.utilities.lib.octree_raytracing import OctreeRayTracing from yt.utilities.lib.partitioned_grid import PartitionedGrid from yt.utilities.on_demand_imports import NotAModule from yt.utilities.parallel_tools.parallel_analysis_interface import ( ParallelAnalysisInterface, ) from yt.visualization.image_writer import apply_colormap from .transfer_function_helper import TransferFunctionHelper from .transfer_functions import ( ColorTransferFunction, ProjectionTransferFunction, TransferFunction, ) from .utils import ( data_source_or_all, get_corners, new_interpolated_projection_sampler, new_mesh_sampler, new_projection_sampler, new_volume_render_sampler, ) from .zbuffer_array import ZBuffer OptionalModule = ModuleType | NotAModule mesh_traversal: OptionalModule = NotAModule("pyembree") mesh_construction: OptionalModule = NotAModule("pyembree") def set_raytracing_engine( engine: Literal["yt", "embree"], ) -> None: """ Safely switch raytracing engines at runtime. Parameters ---------- engine: 'yt' or 'embree' - 'yt' selects the default engine. - 'embree' requires extra installation steps, see https://yt-project.org/doc/visualizing/unstructured_mesh_rendering.html?highlight=pyembree#optional-embree-installation Raises ------ UserWarning Raised if the required engine is not available. In this case, the default engine is restored. """ from yt.config import ytcfg global mesh_traversal, mesh_construction if engine == "embree": try: from yt.utilities.lib.embree_mesh import ( # type: ignore mesh_construction, mesh_traversal, ) except (ImportError, ValueError) as exc: # Catch ValueError in case size of objects in Cython change warnings.warn( "Failed to switch to embree raytracing engine. 
" f"The following error was raised:\n{exc}", stacklevel=2, ) mesh_traversal = NotAModule("pyembree") mesh_construction = NotAModule("pyembree") ytcfg["yt", "ray_tracing_engine"] = "yt" else: ytcfg["yt", "ray_tracing_engine"] = "embree" else: mesh_traversal = NotAModule("pyembree") mesh_construction = NotAModule("pyembree") ytcfg["yt", "ray_tracing_engine"] = "yt" def _init_raytracing_engine(ytcfg: YTConfig) -> None: # validate option from configuration file or fall back to default engine set_raytracing_engine(engine=ytcfg["yt", "ray_tracing_engine"]) configuration_callbacks.append(_init_raytracing_engine) def invalidate_volume(f): @wraps(f) def wrapper(*args, **kwargs): ret = f(*args, **kwargs) obj = args[0] if isinstance(obj._transfer_function, ProjectionTransferFunction): obj.sampler_type = "projection" obj._log_field = False obj._use_ghost_zones = False del obj.volume obj._volume_valid = False return ret return wrapper def validate_volume(f): @wraps(f) def wrapper(*args, **kwargs): obj = args[0] fields = [obj.field] log_fields = [obj.log_field] if obj.weight_field is not None: fields.append(obj.weight_field) log_fields.append(obj.log_field) if not obj._volume_valid: obj.volume.set_fields( fields, log_fields, no_ghost=(not obj.use_ghost_zones) ) obj._volume_valid = True return f(*args, **kwargs) return wrapper class RenderSource(ParallelAnalysisInterface, abc.ABC): """Base Class for Render Sources. Will be inherited for volumes, streamlines, etc. """ volume_method: str | None = None def __init__(self): super().__init__() self.opaque = False self.zbuffer = None @abc.abstractmethod def render(self, camera, zbuffer=None): pass @abc.abstractmethod def _validate(self): pass class OpaqueSource(RenderSource): """A base class for opaque render sources. Will be inherited from for LineSources, BoxSources, etc. """ def __init__(self): super().__init__() self.opaque = True def set_zbuffer(self, zbuffer): self.zbuffer = zbuffer def create_volume_source(data_source, field): data_source = data_source_or_all(data_source) ds = data_source.ds index_class = ds.index.__class__ if issubclass(index_class, GridIndex): return KDTreeVolumeSource(data_source, field) elif issubclass(index_class, OctreeIndex): return OctreeVolumeSource(data_source, field) else: raise NotImplementedError class VolumeSource(RenderSource, abc.ABC): """A class for rendering data from a volumetric data source Examples of such sources include a sphere, cylinder, or the entire computational domain. A :class:`VolumeSource` provides the framework to decompose an arbitrary yt data source into bricks that can be traversed and volume rendered. Parameters ---------- data_source: :class:`AMR3DData` or :class:`Dataset`, optional This is the source to be rendered, which can be any arbitrary yt data object or dataset. field : string The name of the field to be rendered. Examples -------- The easiest way to make a VolumeSource is to use the volume_render function, so that the VolumeSource gets created automatically. This example shows how to do this and then access the resulting source: >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> im, sc = yt.volume_render(ds) >>> volume_source = sc.get_source(0) You can also create VolumeSource instances by hand and add them to Scenes. This example manually creates a VolumeSource, adds it to a scene, sets the camera, and renders an image. >>> import yt >>> from yt.visualization.volume_rendering.api import ( ... 
Camera, Scene, create_volume_source) >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> sc = Scene() >>> source = create_volume_source(ds.all_data(), "density") >>> sc.add_source(source) >>> sc.add_camera() >>> im = sc.render() """ _image = None data_source = None def __init__(self, data_source, field): r"""Initialize a new volumetric source for rendering.""" super().__init__() self.data_source = data_source_or_all(data_source) field = self.data_source._determine_fields(field)[0] self.current_image = None self.check_nans = False self.num_threads = 0 self.num_samples = 10 self.sampler_type = "volume-render" self._volume_valid = False # these are caches for properties, defined below self._volume = None self._transfer_function = None self._field = field self._log_field = self.data_source.ds.field_info[field].take_log self._use_ghost_zones = False self._weight_field = None self.tfh = TransferFunctionHelper(self.data_source.pf) self.tfh.set_field(self.field) @property def transfer_function(self): """The transfer function associated with this VolumeSource""" if self._transfer_function is not None: return self._transfer_function if self.tfh.tf is not None: self._transfer_function = self.tfh.tf return self._transfer_function mylog.info("Creating transfer function") self.tfh.set_field(self.field) self.tfh.set_log(self.log_field) self.tfh.build_transfer_function() self.tfh.setup_default() self._transfer_function = self.tfh.tf return self._transfer_function @transfer_function.setter def transfer_function(self, value): self.tfh.tf = None valid_types = ( TransferFunction, ColorTransferFunction, ProjectionTransferFunction, type(None), ) if not isinstance(value, valid_types): raise RuntimeError( "transfer_function not a valid type, " f"received object of type {type(value)}" ) if isinstance(value, ProjectionTransferFunction): self.sampler_type = "projection" if self._volume is not None: fields = [self.field] if self.weight_field is not None: fields.append(self.weight_field) self._volume_valid = False self._transfer_function = value @property def volume(self): """The abstract volume associated with this VolumeSource This object does the heavy lifting to access data in an efficient manner using a KDTree """ return self._get_volume() @volume.setter def volume(self, value): assert isinstance(value, AMRKDTree) del self._volume self._field = value.fields self._log_field = value.log_fields self._volume = value assert self._volume_valid @volume.deleter def volume(self): del self._volume self._volume = None @property def field(self): """The field to be rendered""" return self._field @field.setter @invalidate_volume def field(self, value): field = self.data_source._determine_fields(value) if len(field) > 1: raise RuntimeError( "VolumeSource.field can only be a single field but received " f"multiple fields: {field}" ) field = field[0] if self._field != field: log_field = self.data_source.ds.field_info[field].take_log self.tfh.bounds = None else: log_field = self._log_field self._log_field = log_field self._field = value self.transfer_function = None self.tfh.set_field(value) self.tfh.set_log(log_field) @property def log_field(self): """Whether or not the field rendering is computed in log space""" return self._log_field @log_field.setter @invalidate_volume def log_field(self, value): self.transfer_function = None self.tfh.set_log(value) self._log_field = value @property def use_ghost_zones(self): """Whether or not ghost zones are used to estimate vertex-centered data values at grid boundaries""" return 
self._use_ghost_zones @use_ghost_zones.setter @invalidate_volume def use_ghost_zones(self, value): self._use_ghost_zones = value @property def weight_field(self): """The weight field for the rendering Currently this is only used for off-axis projections. """ return self._weight_field @weight_field.setter @invalidate_volume def weight_field(self, value): self._weight_field = value def set_transfer_function(self, transfer_function): """Set transfer function for this source""" self.transfer_function = transfer_function return self def _validate(self): """Make sure that all dependencies have been met""" if self.data_source is None: raise RuntimeError("Data source not initialized") def set_volume(self, volume): """Associates an AMRKDTree with the VolumeSource""" self.volume = volume return self def set_field(self, field): """Set the source's field to render Parameters ---------- field: field name The field to render """ self.field = field return self def set_log(self, log_field): """Set whether the rendering of the source's field is done in log space Generally volume renderings of data whose values span a large dynamic range should be done on log space and volume renderings of data with small dynamic range should be done in linear space. Parameters ---------- log_field: boolean If True, the volume rendering will be done in log space, and if False will be done in linear space. """ self.log_field = log_field return self def set_weight_field(self, weight_field): """Set the source's weight field .. note:: This is currently only used for renderings using the ProjectionTransferFunction Parameters ---------- weight_field: field name The weight field to use in the rendering """ self.weight_field = weight_field return self def set_use_ghost_zones(self, use_ghost_zones): """Set whether or not interpolation at grid edges uses ghost zones Parameters ---------- use_ghost_zones: boolean If True, the AMRKDTree estimates vertex centered data using ghost zones, which can eliminate seams in the resulting volume rendering. Defaults to False for performance reasons. """ self.use_ghost_zones = use_ghost_zones return self def set_sampler(self, camera, interpolated=True): """Sets a volume render sampler The type of sampler is determined based on the ``sampler_type`` attribute of the VolumeSource. Currently the ``volume_render`` and ``projection`` sampler types are supported. The 'interpolated' argument is only meaningful for projections. If True, the data is first interpolated to the cell vertices, and then tri-linearly interpolated to the ray sampling positions. If False, then the cell-centered data is simply accumulated along the ray. Interpolation is always performed for volume renderings. 
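        Examples
        --------
        A minimal sketch (assuming ``source`` is a VolumeSource and ``cam``
        is the camera of the scene it belongs to):

        >>> source.set_sampler(cam)
        >>> source.sampler is not None
        True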
""" if self.sampler_type == "volume-render": sampler = new_volume_render_sampler(camera, self) elif self.sampler_type == "projection" and interpolated: sampler = new_interpolated_projection_sampler(camera, self) elif self.sampler_type == "projection": sampler = new_projection_sampler(camera, self) else: NotImplementedError(f"{self.sampler_type} not implemented yet") self.sampler = sampler assert self.sampler is not None @abc.abstractmethod def _get_volume(self): """The abstract volume associated with this VolumeSource This object does the heavy lifting to access data in an efficient manner using a KDTree """ pass @abc.abstractmethod @validate_volume def render(self, camera, zbuffer=None): """Renders an image using the provided camera Parameters ---------- camera: :class:`yt.visualization.volume_rendering.camera.Camera` instance A volume rendering camera. Can be any type of camera. zbuffer: :class:`yt.visualization.volume_rendering.zbuffer_array.Zbuffer` instance # noqa: E501 A zbuffer array. This is used for opaque sources to determine the z position of the source relative to other sources. Only useful if you are manually calling render on multiple sources. Scene.render uses this internally. Returns ------- A :class:`yt.data_objects.image_array.ImageArray` instance containing the rendered image. """ pass def finalize_image(self, camera, image): """Parallel reduce the image. Parameters ---------- camera: :class:`yt.visualization.volume_rendering.camera.Camera` instance The camera used to produce the volume rendering image. image: :class:`yt.data_objects.image_array.ImageArray` instance A reference to an image to fill """ image.shape = camera.resolution[0], camera.resolution[1], 4 # If the call is from VR, the image is rotated by 180 to get correct # up direction if not self.transfer_function.grey_opacity: image[:, :, 3] = 1 return image def __repr__(self): disp = f":{str(self.data_source)} " disp += f"transfer_function:{str(self._transfer_function)}" return disp class KDTreeVolumeSource(VolumeSource): volume_method = "KDTree" def _get_volume(self): """The abstract volume associated with this VolumeSource This object does the heavy lifting to access data in an efficient manner using a KDTree """ if self._volume is None: mylog.info("Creating volume") volume = AMRKDTree(self.data_source.ds, data_source=self.data_source) self._volume = volume return self._volume @validate_volume def render(self, camera, zbuffer=None): """Renders an image using the provided camera Parameters ---------- camera: :class:`yt.visualization.volume_rendering.camera.Camera` A volume rendering camera. Can be any type of camera. zbuffer: :class:`yt.visualization.volume_rendering.zbuffer_array.Zbuffer` A zbuffer array. This is used for opaque sources to determine the z position of the source relative to other sources. Only useful if you are manually calling render on multiple sources. Scene.render uses this internally. Returns ------- A :class:`yt.data_objects.image_array.ImageArray` containing the rendered image. 
""" self.zbuffer = zbuffer self.set_sampler(camera) assert self.sampler is not None mylog.debug("Casting rays") total_cells = 0 if self.check_nans: for brick in self.volume.bricks: for data in brick.my_data: if np.any(np.isnan(data)): raise RuntimeError for brick in self.volume.traverse(camera.lens.viewpoint): mylog.debug("Using sampler %s", self.sampler) self.sampler(brick, num_threads=self.num_threads) total_cells += np.prod(brick.my_data[0].shape) mylog.debug("Done casting rays") self.current_image = self.finalize_image(camera, self.sampler.aimage) if zbuffer is None: self.zbuffer = ZBuffer( self.current_image, np.full(self.current_image.shape[:2], np.inf) ) return self.current_image def finalize_image(self, camera, image): if self._volume is not None: image = self.volume.reduce_tree_images( image, camera.lens.viewpoint, use_opacity=self.transfer_function.grey_opacity, ) return super().finalize_image(camera, image) class OctreeVolumeSource(VolumeSource): volume_method = "Octree" def __init__(self, *args, **kwa): super().__init__(*args, **kwa) self.set_use_ghost_zones(True) def _get_volume(self): """The abstract volume associated with this VolumeSource This object does the heavy lifting to access data in an efficient manner using an octree. """ if self._volume is None: mylog.info("Creating volume") volume = OctreeRayTracing(self.data_source) self._volume = volume return self._volume @validate_volume def render(self, camera, zbuffer=None): """Renders an image using the provided camera Parameters ---------- camera: :class:`yt.visualization.volume_rendering.camera.Camera` instance A volume rendering camera. Can be any type of camera. zbuffer: :class:`yt.visualization.volume_rendering.zbuffer_array.Zbuffer` instance # noqa: E501 A zbuffer array. This is used for opaque sources to determine the z position of the source relative to other sources. Only useful if you are manually calling render on multiple sources. Scene.render uses this internally. Returns ------- A :class:`yt.data_objects.image_array.ImageArray` instance containing the rendered image. """ self.zbuffer = zbuffer self.set_sampler(camera) if self.sampler is None: raise RuntimeError( "No sampler set. This is likely a bug as it should never happen." ) data = self.data_source dx = data["dx"].to_value("unitary")[:, None] xyz = np.stack([data[_].to_value("unitary") for _ in "xyz"], axis=-1) LE = xyz - dx / 2 RE = xyz + dx / 2 mylog.debug("Gathering data") dt = np.stack(list(self.volume.data) + [*LE.T, *RE.T], axis=-1).reshape( 1, len(dx), 14, 1 ) mask = np.full(dt.shape[1:], 1, dtype=np.uint8) dims = np.array([1, 1, 1], dtype="int64") pg = PartitionedGrid(0, dt, mask, LE.flatten(), RE.flatten(), dims, n_fields=1) mylog.debug("Casting rays") self.sampler(pg, oct=self.volume.octree) mylog.debug("Done casting rays") self.current_image = self.finalize_image(camera, self.sampler.aimage) if zbuffer is None: self.zbuffer = ZBuffer( self.current_image, np.full(self.current_image.shape[:2], np.inf) ) return self.current_image class MeshSource(OpaqueSource): """A source for unstructured mesh data. This functionality requires the embree ray-tracing engine and the associated pyembree python bindings to be installed in order to function. A :class:`MeshSource` provides the framework to volume render unstructured mesh data. Parameters ---------- data_source: :class:`AMR3DData` or :class:`Dataset`, optional This is the source to be rendered, which can be any arbitrary yt data object or dataset. field : string The name of the field to be rendered. 
Examples -------- >>> source = MeshSource(ds, ("connect1", "convected")) """ _image = None data_source = None def __init__(self, data_source, field): r"""Initialize a new unstructured mesh source for rendering.""" super().__init__() self.data_source = data_source_or_all(data_source) field = self.data_source._determine_fields(field)[0] self.field = field self.volume = None self.current_image = None self.engine = ytcfg.get("yt", "ray_tracing_engine") # default color map self._cmap = ytcfg.get("yt", "default_colormap") self._color_bounds = None # default mesh annotation options self._annotate_mesh = False self._mesh_line_color = None self._mesh_line_alpha = 1.0 # Error checking assert self.field is not None assert self.data_source is not None if self.field[0] == "all": raise NotImplementedError( "Mesh unions are not implemented for 3D rendering" ) if self.engine == "embree": self.volume = mesh_traversal.YTEmbreeScene() self.build_volume_embree() elif self.engine == "yt": self.build_volume_bvh() else: raise NotImplementedError( "Invalid ray-tracing engine selected. Choices are 'embree' and 'yt'." ) @property def cmap(self): """ This is the name of the colormap that will be used when rendering this MeshSource object. Should be a string, like 'cmyt.arbre', or 'cmyt.dusk'. """ return self._cmap @cmap.setter def cmap(self, cmap_name): self._cmap = cmap_name if hasattr(self, "data"): self.current_image = self.apply_colormap() @property def color_bounds(self): """ These are the bounds that will be used with the colormap to the display the rendered image. Should be a (vmin, vmax) tuple, like (0.0, 2.0). If None, the bounds will be automatically inferred from the max and min of the rendered data. """ return self._color_bounds @color_bounds.setter def color_bounds(self, bounds): self._color_bounds = bounds if hasattr(self, "data"): self.current_image = self.apply_colormap() def _validate(self): """Make sure that all dependencies have been met""" if self.data_source is None: raise RuntimeError("Data source not initialized.") if self.volume is None: raise RuntimeError("Volume not initialized.") def build_volume_embree(self): """ This constructs the mesh that will be ray-traced by pyembree. """ ftype, fname = self.field mesh_id = int(ftype[-1]) - 1 index = self.data_source.ds.index offset = index.meshes[mesh_id]._index_offset field_data = self.data_source[self.field].d # strip units vertices = index.meshes[mesh_id].connectivity_coords indices = index.meshes[mesh_id].connectivity_indices - offset # if this is an element field, promote to 2D here if len(field_data.shape) == 1: field_data = np.expand_dims(field_data, 1) # Here, we decide whether to render based on high-order or # low-order geometry. Right now, high-order geometry is only # implemented for 20-point hexes. if indices.shape[1] == 20 or indices.shape[1] == 10: self.mesh = mesh_construction.QuadraticElementMesh( self.volume, vertices, indices, field_data ) else: # if this is another type of higher-order element, we demote # to 1st order here, for now. if indices.shape[1] == 27: # hexahedral mylog.warning("27-node hexes not yet supported, dropping to 1st order.") field_data = field_data[:, 0:8] indices = indices[:, 0:8] self.mesh = mesh_construction.LinearElementMesh( self.volume, vertices, indices, field_data ) def build_volume_bvh(self): """ This constructs the mesh that will be ray-traced. 
""" ftype, fname = self.field mesh_id = int(ftype[-1]) - 1 index = self.data_source.ds.index offset = index.meshes[mesh_id]._index_offset field_data = self.data_source[self.field].d # strip units vertices = index.meshes[mesh_id].connectivity_coords indices = index.meshes[mesh_id].connectivity_indices - offset # if this is an element field, promote to 2D here if len(field_data.shape) == 1: field_data = np.expand_dims(field_data, 1) # Here, we decide whether to render based on high-order or # low-order geometry. if indices.shape[1] == 27: # hexahedral mylog.warning("27-node hexes not yet supported, dropping to 1st order.") field_data = field_data[:, 0:8] indices = indices[:, 0:8] self.volume = BVH(vertices, indices, field_data) def render(self, camera, zbuffer=None): """Renders an image using the provided camera Parameters ---------- camera: :class:`yt.visualization.volume_rendering.camera.Camera` A volume rendering camera. Can be any type of camera. zbuffer: :class:`yt.visualization.volume_rendering.zbuffer_array.Zbuffer` A zbuffer array. This is used for opaque sources to determine the z position of the source relative to other sources. Only useful if you are manually calling render on multiple sources. Scene.render uses this internally. Returns ------- A :class:`yt.data_objects.image_array.ImageArray` containing the rendered image. """ shape = (camera.resolution[0], camera.resolution[1], 4) if zbuffer is None: empty = np.empty(shape, dtype="float64") z = np.empty(empty.shape[:2], dtype="float64") empty[:] = 0.0 z[:] = np.inf zbuffer = ZBuffer(empty, z) elif zbuffer.rgba.shape != shape: zbuffer = ZBuffer(zbuffer.rgba.reshape(shape), zbuffer.z.reshape(shape[:2])) self.zbuffer = zbuffer self.sampler = new_mesh_sampler(camera, self, engine=self.engine) mylog.debug("Casting rays") self.sampler(self.volume) mylog.debug("Done casting rays") self.finalize_image(camera) self.current_image = self.apply_colormap() zbuffer += ZBuffer(self.current_image.astype("float64"), self.sampler.azbuffer) zbuffer.rgba = ImageArray(zbuffer.rgba) self.zbuffer = zbuffer self.current_image = self.zbuffer.rgba if self._annotate_mesh: self.current_image = self.annotate_mesh_lines( self._mesh_line_color, self._mesh_line_alpha ) return self.current_image def finalize_image(self, camera): sam = self.sampler # reshape data Nx = camera.resolution[0] Ny = camera.resolution[1] self.data = sam.aimage[:, :, 0].reshape(Nx, Ny) def annotate_mesh_lines(self, color=None, alpha=1.0): r""" Modifies this MeshSource by drawing the mesh lines. This modifies the current image by drawing the element boundaries and returns the modified image. Parameters ---------- color: array_like of shape (4,), optional The RGBA value to use to draw the mesh lines. Default is black. alpha : float, optional The opacity of the mesh lines. Default is 255 (solid). """ self.annotate_mesh = True self._mesh_line_color = color self._mesh_line_alpha = alpha if color is None: color = np.array([0, 0, 0, alpha]) locs = (self.sampler.amesh_lines == 1,) self.current_image[:, :, 0][locs] = color[0] self.current_image[:, :, 1][locs] = color[1] self.current_image[:, :, 2][locs] = color[2] self.current_image[:, :, 3][locs] = color[3] return self.current_image def apply_colormap(self): """ Applies a colormap to the current image without re-rendering. Returns ------- current_image : A new image with the specified color scale applied to the underlying data. 
""" image = ( apply_colormap( self.data, color_bounds=self._color_bounds, cmap_name=self._cmap ) / 255.0 ) alpha = image[:, :, 3] alpha[self.sampler.aimage_used == -1] = 0.0 image[:, :, 3] = alpha return image def __repr__(self): disp = f":{str(self.data_source)} " return disp class PointSource(OpaqueSource): r"""A rendering source of opaque points in the scene. This class provides a mechanism for adding points to a scene; these points will be opaque, and can also be colored. Parameters ---------- positions: array_like of shape (N, 3) The positions of points to be added to the scene. If specified with no units, the positions will be assumed to be in code units. colors : array_like of shape (N, 4), optional The colors of the points, including an alpha channel, in floating point running from 0..1. color_stride : int, optional The stride with which to access the colors when putting them on the scene. radii : array_like of shape (N), optional The radii of the points in the final image, in pixels (int) Examples -------- This example creates a volume rendering and adds 1000 random points to the image: >>> import yt >>> import numpy as np >>> from yt.visualization.volume_rendering.api import PointSource >>> from yt.units import kpc >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> im, sc = yt.volume_render(ds) >>> npoints = 1000 >>> vertices = np.random.random([npoints, 3]) * 1000 * kpc >>> colors = np.random.random([npoints, 4]) >>> colors[:, 3] = 1.0 >>> points = PointSource(vertices, colors=colors) >>> sc.add_source(points) >>> im = sc.render() """ _image = None data_source = None def __init__(self, positions, colors=None, color_stride=1, radii=None): assert positions.ndim == 2 and positions.shape[1] == 3 if colors is not None: assert colors.ndim == 2 and colors.shape[1] == 4 assert colors.shape[0] == positions.shape[0] if not is_sequence(radii): if radii is not None: # broadcast the value radii = radii * np.ones(positions.shape[0], dtype="int64") else: # default radii to 0 pixels (i.e. point is 1 pixel wide) radii = np.zeros(positions.shape[0], dtype="int64") else: assert radii.ndim == 1 assert radii.shape[0] == positions.shape[0] self.positions = positions # If colors aren't individually set, make black with full opacity if colors is None: colors = np.ones((len(positions), 4)) self.colors = colors self.color_stride = color_stride self.radii = radii def _validate(self): pass def render(self, camera, zbuffer=None): """Renders an image using the provided camera Parameters ---------- camera: :class:`yt.visualization.volume_rendering.camera.Camera` A volume rendering camera. Can be any type of camera. zbuffer: :class:`yt.visualization.volume_rendering.zbuffer_array.Zbuffer` A zbuffer array. This is used for opaque sources to determine the z position of the source relative to other sources. Only useful if you are manually calling render on multiple sources. Scene.render uses this internally. Returns ------- A :class:`yt.data_objects.image_array.ImageArray` containing the rendered image. 
""" vertices = self.positions if zbuffer is None: empty = camera.lens.new_image(camera) z = np.empty(empty.shape[:2], dtype="float64") empty[:] = 0.0 z[:] = np.inf zbuffer = ZBuffer(empty, z) else: empty = zbuffer.rgba z = zbuffer.z # DRAW SOME POINTS camera.lens.setup_box_properties(camera) px, py, dz = camera.lens.project_to_plane(camera, vertices) zpoints(empty, z, px, py, dz, self.colors, self.radii, self.color_stride) self.zbuffer = zbuffer return zbuffer def __repr__(self): disp = "" return disp class LineSource(OpaqueSource): r"""A render source for a sequence of opaque line segments. This class provides a mechanism for adding lines to a scene; these points will be opaque, and can also be colored. .. note:: If adding a LineSource to your rendering causes the image to appear blank or fades a VolumeSource, try lowering the values specified in the alpha channel of the ``colors`` array. Parameters ---------- positions: array_like of shape (N, 2, 3) The positions of the starting and stopping points for each line. For example,positions[0][0] and positions[0][1] would give the (x, y, z) coordinates of the beginning and end points of the first line, respectively. If specified with no units, assumed to be in code units. colors : array_like of shape (N, 4), optional The colors of the points, including an alpha channel, in floating point running from 0..1. The four channels correspond to r, g, b, and alpha values. Note that they correspond to the line segment succeeding each point; this means that strictly speaking they need only be (N-1) in length. color_stride : int, optional The stride with which to access the colors when putting them on the scene. Examples -------- This example creates a volume rendering and then adds some random lines to the image: >>> import yt >>> import numpy as np >>> from yt.visualization.volume_rendering.api import LineSource >>> from yt.units import kpc >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> im, sc = yt.volume_render(ds) >>> nlines = 4 >>> vertices = np.random.random([nlines, 2, 3]) * 600 * kpc >>> colors = np.random.random([nlines, 4]) >>> colors[:, 3] = 1.0 >>> lines = LineSource(vertices, colors) >>> sc.add_source(lines) >>> im = sc.render() """ _image = None data_source = None def __init__(self, positions, colors=None, color_stride=1): super().__init__() assert positions.ndim == 3 assert positions.shape[1] == 2 assert positions.shape[2] == 3 if colors is not None: assert colors.ndim == 2 assert colors.shape[1] == 4 # convert the positions to the shape expected by zlines, below N = positions.shape[0] self.positions = positions.reshape((2 * N, 3)) # If colors aren't individually set, make black with full opacity if colors is None: colors = np.ones((len(positions), 4)) self.colors = colors self.color_stride = color_stride def _validate(self): pass def render(self, camera, zbuffer=None): """Renders an image using the provided camera Parameters ---------- camera: :class:`yt.visualization.volume_rendering.camera.Camera` A volume rendering camera. Can be any type of camera. zbuffer: :class:`yt.visualization.volume_rendering.zbuffer_array.Zbuffer` z position of the source relative to other sources. Only useful if you are manually calling render on multiple sources. Scene.render uses this internally. Returns ------- A :class:`yt.data_objects.image_array.ImageArray` containing the rendered image. 
""" vertices = self.positions if zbuffer is None: empty = camera.lens.new_image(camera) z = np.empty(empty.shape[:2], dtype="float64") empty[:] = 0.0 z[:] = np.inf zbuffer = ZBuffer(empty, z) else: empty = zbuffer.rgba z = zbuffer.z # DRAW SOME LINES camera.lens.setup_box_properties(camera) px, py, dz = camera.lens.project_to_plane(camera, vertices) px = px.astype("int64") py = py.astype("int64") if len(px.shape) == 1: zlines( empty, z, px, py, dz, self.colors.astype("float64"), self.color_stride ) else: # For stereo-lens, two sets of pos for each eye are contained # in px...pz zlines( empty, z, px[0, :], py[0, :], dz[0, :], self.colors.astype("float64"), self.color_stride, ) zlines( empty, z, px[1, :], py[1, :], dz[1, :], self.colors.astype("float64"), self.color_stride, ) self.zbuffer = zbuffer return zbuffer def __repr__(self): disp = "" return disp class BoxSource(LineSource): r"""A render source for a box drawn with line segments. This render source will draw a box, with transparent faces, in data space coordinates. This is useful for annotations. Parameters ---------- left_edge: array-like of shape (3,), float The left edge coordinates of the box. right_edge : array-like of shape (3,), float The right edge coordinates of the box. color : array-like of shape (4,), float, optional The colors (including alpha) to use for the lines. Default is black with an alpha of 1.0. Examples -------- This example shows how to use BoxSource to add an outline of the domain boundaries to a volume rendering. >>> import yt >>> from yt.visualization.volume_rendering.api import BoxSource >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> im, sc = yt.volume_render(ds) >>> box_source = BoxSource( ... ds.domain_left_edge, ds.domain_right_edge, [1.0, 1.0, 1.0, 1.0] ... ) >>> sc.add_source(box_source) >>> im = sc.render() """ def __init__(self, left_edge, right_edge, color=None): assert left_edge.shape == (3,) assert right_edge.shape == (3,) if color is None: color = np.array([1.0, 1.0, 1.0, 1.0]) color = ensure_numpy_array(color) color.shape = (1, 4) corners = get_corners(left_edge.copy(), right_edge.copy()) order = [0, 1, 1, 2, 2, 3, 3, 0] order += [4, 5, 5, 6, 6, 7, 7, 4] order += [0, 4, 1, 5, 2, 6, 3, 7] vertices = np.empty([24, 3]) for i in range(3): vertices[:, i] = corners[order, i, ...].ravel(order="F") vertices = vertices.reshape((12, 2, 3)) super().__init__(vertices, color, color_stride=24) def _validate(self): pass class GridSource(LineSource): r"""A render source for drawing grids in a scene. This render source will draw blocks that are within a given data source, by default coloring them by their level of resolution. Parameters ---------- data_source: :class:`~yt.data_objects.api.DataContainer` The data container that will be used to identify grids to draw. alpha : float The opacity of the grids to draw. cmap : color map name The color map to use to map resolution levels to color. 
min_level : int, optional Minimum level to draw max_level : int, optional Maximum level to draw Examples -------- This example makes a volume rendering and adds outlines of all the AMR grids in the simulation: >>> import yt >>> from yt.visualization.volume_rendering.api import GridSource >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> im, sc = yt.volume_render(ds) >>> grid_source = GridSource(ds.all_data(), alpha=1.0) >>> sc.add_source(grid_source) >>> im = sc.render() This example does the same thing, except it only draws the grids that are inside a sphere of radius (0.1, "unitary") located at the domain center: >>> import yt >>> from yt.visualization.volume_rendering.api import GridSource >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> im, sc = yt.volume_render(ds) >>> dd = ds.sphere("c", (0.1, "unitary")) >>> grid_source = GridSource(dd, alpha=1.0) >>> sc.add_source(grid_source) >>> im = sc.render() """ def __init__( self, data_source, alpha=0.3, cmap=None, min_level=None, max_level=None ): self.data_source = data_source_or_all(data_source) corners = [] levels = [] for block, _mask in self.data_source.blocks: block_corners = np.array( [ [block.LeftEdge[0], block.LeftEdge[1], block.LeftEdge[2]], [block.RightEdge[0], block.LeftEdge[1], block.LeftEdge[2]], [block.RightEdge[0], block.RightEdge[1], block.LeftEdge[2]], [block.LeftEdge[0], block.RightEdge[1], block.LeftEdge[2]], [block.LeftEdge[0], block.LeftEdge[1], block.RightEdge[2]], [block.RightEdge[0], block.LeftEdge[1], block.RightEdge[2]], [block.RightEdge[0], block.RightEdge[1], block.RightEdge[2]], [block.LeftEdge[0], block.RightEdge[1], block.RightEdge[2]], ], dtype="float64", ) corners.append(block_corners) levels.append(block.Level) corners = np.dstack(corners) levels = np.array(levels) if cmap is None: cmap = ytcfg.get("yt", "default_colormap") if max_level is not None: subset = levels <= max_level levels = levels[subset] corners = corners[:, :, subset] if min_level is not None: subset = levels >= min_level levels = levels[subset] corners = corners[:, :, subset] colors = ( apply_colormap( levels * 1.0, color_bounds=[0, self.data_source.ds.index.max_level], cmap_name=cmap, )[0, :, :] / 255.0 ) colors[:, 3] = alpha order = [0, 1, 1, 2, 2, 3, 3, 0] order += [4, 5, 5, 6, 6, 7, 7, 4] order += [0, 4, 1, 5, 2, 6, 3, 7] vertices = np.empty([corners.shape[2] * 2 * 12, 3]) for i in range(3): vertices[:, i] = corners[order, i, ...].ravel(order="F") vertices = vertices.reshape((corners.shape[2] * 12, 2, 3)) super().__init__(vertices, colors, color_stride=24) class CoordinateVectorSource(OpaqueSource): r"""Draw coordinate vectors on the scene. This will draw a set of coordinate vectors on the camera image. They will appear in the lower right of the image. Parameters ---------- colors: array-like of shape (3,4), optional The RGBA values to use to draw the x, y, and z vectors. The default is [[1, 0, 0, alpha], [0, 1, 0, alpha], [0, 0, 1, alpha]] where ``alpha`` is set by the parameter below. If ``colors`` is set then ``alpha`` is ignored. alpha : float, optional The opacity of the vectors. thickness : int, optional The line thickness Examples -------- >>> import yt >>> from yt.visualization.volume_rendering.api import \ ... 
CoordinateVectorSource >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> im, sc = yt.volume_render(ds) >>> coord_source = CoordinateVectorSource() >>> sc.add_source(coord_source) >>> im = sc.render() """ def __init__(self, colors=None, alpha=1.0, *, thickness=1): super().__init__() # If colors aren't individually set, make black with full opacity if colors is None: colors = np.zeros((3, 4)) colors[0, 0] = 1.0 # x is red colors[1, 1] = 1.0 # y is green colors[2, 2] = 1.0 # z is blue colors[:, 3] = alpha self.colors = colors self.thick = thickness def _validate(self): pass def render(self, camera, zbuffer=None): """Renders an image using the provided camera Parameters ---------- camera: :class:`yt.visualization.volume_rendering.camera.Camera` A volume rendering camera. Can be any type of camera. zbuffer: :class:`yt.visualization.volume_rendering.zbuffer_array.Zbuffer` A zbuffer array. This is used for opaque sources to determine the z position of the source relative to other sources. Only useful if you are manually calling render on multiple sources. Scene.render uses this internally. Returns ------- A :class:`yt.data_objects.image_array.ImageArray` containing the rendered image. """ camera.lens.setup_box_properties(camera) center = camera.focus # Get positions at the focus positions = np.zeros([6, 3]) positions[:] = center # Create vectors in the x,y,z directions for i in range(3): positions[2 * i + 1, i] += camera.width.in_units("code_length").d[i] / 16.0 # Project to the image plane px, py, dz = camera.lens.project_to_plane(camera, positions) if len(px.shape) == 1: dpx = px[1::2] - px[::2] dpy = py[1::2] - py[::2] # Set the center of the coordinates to be in the lower left of the image lpx = camera.resolution[0] / 8 lpy = camera.resolution[1] - camera.resolution[1] / 8 # Upside-downsies # Offset the pixels according to the projections above px[::2] = lpx px[1::2] = lpx + dpx py[::2] = lpy py[1::2] = lpy + dpy dz[:] = 0.0 else: # For stereo-lens, two sets of pos for each eye are contained in px...pz dpx = px[:, 1::2] - px[:, ::2] dpy = py[:, 1::2] - py[:, ::2] lpx = camera.resolution[0] / 16 lpy = camera.resolution[1] - camera.resolution[1] / 8 # Upside-downsies # Offset the pixels according to the projections above px[:, ::2] = lpx px[:, 1::2] = lpx + dpx px[1, :] += camera.resolution[0] / 2 py[:, ::2] = lpy py[:, 1::2] = lpy + dpy dz[:, :] = 0.0 # Create a zbuffer if needed if zbuffer is None: empty = camera.lens.new_image(camera) z = np.empty(empty.shape[:2], dtype="float64") empty[:] = 0.0 z[:] = np.inf zbuffer = ZBuffer(empty, z) else: empty = zbuffer.rgba z = zbuffer.z # Draw the vectors px = px.astype("int64") py = py.astype("int64") if len(px.shape) == 1: zlines( empty, z, px, py, dz, self.colors.astype("float64"), thick=self.thick ) else: # For stereo-lens, two sets of pos for each eye are contained # in px...pz zlines( empty, z, px[0, :], py[0, :], dz[0, :], self.colors.astype("float64"), thick=self.thick, ) zlines( empty, z, px[1, :], py[1, :], dz[1, :], self.colors.astype("float64"), thick=self.thick, ) # Set the new zbuffer self.zbuffer = zbuffer return zbuffer def __repr__(self): disp = "" return disp ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/scene.py0000644000175100001770000010366014714401662022143 0ustar00runnerdockerimport functools from collections import OrderedDict import numpy as np from yt._maintenance.ipython_compat import IS_IPYTHON from yt.config import ytcfg 
from yt.funcs import mylog from yt.units.dimensions import length # type: ignore from yt.units.unit_registry import UnitRegistry # type: ignore from yt.units.yt_array import YTArray, YTQuantity from yt.utilities.exceptions import YTNotInsideNotebook from yt.visualization._commons import get_canvas, validate_image_name from .camera import Camera from .render_source import ( BoxSource, CoordinateVectorSource, GridSource, LineSource, MeshSource, OpaqueSource, PointSource, RenderSource, VolumeSource, ) from .zbuffer_array import ZBuffer class Scene: """A virtual landscape for a volume rendering. The Scene class is meant to be the primary container for the new volume rendering framework. A single scene may contain several Camera and RenderSource instances, and is the primary driver behind creating a volume rendering. This sets up the basics needed to add sources and cameras. This does very little setup, and requires additional input to do anything useful. Examples -------- This example shows how to create an empty scene and add a VolumeSource and a Camera. >>> import yt >>> from yt.visualization.volume_rendering.api import ( ... Camera, Scene, create_volume_source) >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> sc = Scene() >>> source = create_volume_source(ds.all_data(), "density") >>> sc.add_source(source) >>> cam = sc.add_camera() >>> im = sc.render() Alternatively, you can use the create_scene function to set up defaults and then modify the Scene later: >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> sc = yt.create_scene(ds) >>> # Modify camera, sources, etc... >>> im = sc.render() """ _current = None _camera = None _unit_registry = None def __init__(self): r"""Create a new Scene instance""" super().__init__() self.sources = OrderedDict() self._last_render = None # A non-public attribute used to get around the fact that we can't # pass kwargs into _repr_png_() self._sigma_clip = None def get_source(self, source_num=0): """Returns the volume rendering source indexed by ``source_num``""" return list(self.sources.values())[source_num] def __getitem__(self, item): if item in self.sources: return self.sources[item] return self.get_source(item) @property def opaque_sources(self): """ Iterate over opaque RenderSource objects, returning a tuple of (key, source) """ for k, source in self.sources.items(): if isinstance(source, OpaqueSource) or issubclass( OpaqueSource, type(source) ): yield k, source @property def transparent_sources(self): """ Iterate over transparent RenderSource objects, returning a tuple of (key, source) """ for k, source in self.sources.items(): if not isinstance(source, OpaqueSource): yield k, source def add_source(self, render_source, keyname=None): """Add a render source to the scene. This will autodetect the type of source. Parameters ---------- render_source: :class:`yt.visualization.volume_rendering.render_source.RenderSource` A source to contribute to the volume rendering scene. keyname: string (optional) The dictionary key used to reference the source in the sources dictionary. 
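        Examples
        --------
        A short sketch (assumes ``source`` is an existing RenderSource):

        >>> sc = Scene()  # doctest: +SKIP
        >>> sc.add_source(source, keyname="my_source")  # doctest: +SKIP

        The source can then be retrieved with ``sc["my_source"]``, and
        since the scene itself is returned, calls can be chained.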
""" if keyname is None: keyname = "source_%02i" % len(self.sources) data_sources = (VolumeSource, MeshSource, GridSource) if isinstance(render_source, data_sources): self._set_new_unit_registry(render_source.data_source.ds.unit_registry) line_annotation_sources = (GridSource, BoxSource, CoordinateVectorSource) if isinstance(render_source, line_annotation_sources): lens_str = str(self.camera.lens) if "fisheye" in lens_str or "spherical" in lens_str: raise NotImplementedError( "Line annotation sources are not supported " f"for {type(self.camera.lens).__name__}." ) if isinstance(render_source, (LineSource, PointSource)): if isinstance(render_source.positions, YTArray): render_source.positions = ( self.arr(render_source.positions).in_units("code_length").d ) self.sources[keyname] = render_source return self def __setitem__(self, key, value): return self.add_source(value, key) def _set_new_unit_registry(self, input_registry): self.unit_registry = UnitRegistry( add_default_symbols=False, lut=input_registry.lut ) # Validate that the new unit registry makes sense current_scaling = self.unit_registry["unitary"][0] if current_scaling != input_registry["unitary"][0]: for source in self.sources.items(): data_source = getattr(source, "data_source", None) if data_source is None: continue scaling = data_source.ds.unit_registry["unitary"][0] if scaling != current_scaling: raise NotImplementedError( "Simultaneously rendering data from datasets with " "different units is not supported" ) def render(self, camera=None): r"""Render all sources in the Scene. Use the current state of the Scene object to render all sources currently in the scene. Returns the image array. If you want to save the output to a file, call the save() function. Parameters ---------- camera: :class:`Camera`, optional If specified, use a different :class:`Camera` to render the scene. Returns ------- A :class:`yt.data_objects.image_array.ImageArray` instance containing the current rendering image. Examples -------- >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> sc = yt.create_scene(ds) >>> # Modify camera, sources, etc... >>> im = sc.render() >>> sc.save(sigma_clip=4.0, render=False) Altneratively, if you do not need the image array, you can just call ``save`` as follows. >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> sc = yt.create_scene(ds) >>> # Modify camera, sources, etc... >>> sc.save(sigma_clip=4.0) """ mylog.info("Rendering scene (Can take a while).") if camera is None: camera = self.camera assert camera is not None self._validate() bmp = self.composite(camera=camera) self._last_render = bmp return bmp def _render_on_demand(self, render): # checks for existing render before rendering, in most cases we want to # render every time, but in some cases pulling the previous render is # desirable (e.g., if only changing sigma_clip or # saving after a call to sc.show()). if self._last_render is not None and not render: mylog.info("Found previously rendered image to save.") return if self._last_render is None: mylog.warning("No previously rendered image found, rendering now.") elif render: mylog.warning( "Previously rendered image exists, but rendering anyway. " "Supply 'render=False' to save previously rendered image directly." 
) self.render() def _get_render_sources(self): return [s for s in self.sources.values() if isinstance(s, RenderSource)] def _setup_save(self, fname, render) -> str: self._render_on_demand(render) rensources = self._get_render_sources() if fname is None: # if a volume source present, use its affiliated ds for fname if len(rensources) > 0: rs = rensources[0] basename = rs.data_source.ds.basename if isinstance(rs.field, str): field = rs.field else: field = rs.field[-1] fname = f"{basename}_Render_{field}" # if no volume source present, use a default filename else: fname = "Render_opaque" fname = validate_image_name(fname) mylog.info("Saving rendered image to %s", fname) return fname def save( self, fname: str | None = None, sigma_clip: float | None = None, render: bool = True, ): r"""Saves a rendered image of the Scene to disk. Once you have created a scene, this saves an image array to disk with an optional filename. This function calls render() to generate an image array, unless the render parameter is set to False, in which case the most recently rendered scene is used if it exists. Parameters ---------- fname: string, optional If specified, save the rendering as to the file "fname". If unspecified, it creates a default based on the dataset filename. The file format is inferred from the filename's suffix. Supported formats depend on which version of matplotlib is installed. Default: None sigma_clip: float, optional Image values greater than this number times the standard deviation plus the mean of the image will be clipped before saving. Useful for enhancing images as it gets rid of rare high pixel values. Default: None floor(vals > std_dev*sigma_clip + mean) render: boolean, optional If True, will always render the scene before saving. If False, will use results of previous render if it exists. Default: True Returns ------- Nothing Examples -------- >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> sc = yt.create_scene(ds) >>> # Modify camera, sources, etc... >>> sc.save("test.png", sigma_clip=4) When saving multiple images without modifying the scene (camera, sources,etc.), render=False can be used to avoid re-rendering. This is useful for generating images at a range of sigma_clip values: >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> sc = yt.create_scene(ds) >>> # save with different sigma clipping values >>> sc.save("raw.png") # The initial render call happens here >>> sc.save("clipped_2.png", sigma_clip=2, render=False) >>> sc.save("clipped_4.png", sigma_clip=4, render=False) """ fname = self._setup_save(fname, render) # We can render pngs natively but for other formats we defer to # matplotlib. 
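        # (".png" is written directly via ImageArray.write_png below; any
        # other suffix is rasterized through a matplotlib Figure instead)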
if fname.endswith(".png"): self._last_render.write_png(fname, sigma_clip=sigma_clip) else: from matplotlib.figure import Figure shape = self._last_render.shape fig = Figure((shape[0] / 100.0, shape[1] / 100.0)) canvas = get_canvas(fig, fname) ax = fig.add_axes((0, 0, 1, 1)) ax.set_axis_off() out = self._last_render if sigma_clip is not None: max_val = out._clipping_value(sigma_clip) else: max_val = out[:, :, :3].max() alpha = 255 * out[:, :, 3].astype("uint8") out = np.clip(out[:, :, :3] / max_val, 0.0, 1.0) * 255 out = np.concatenate([out.astype("uint8"), alpha[..., None]], axis=-1) # not sure why we need rot90, but this makes the orientation # match the png writer ax.imshow(np.rot90(out), origin="lower") canvas.print_figure(fname, dpi=100) def save_annotated( self, fname: str | None = None, label_fmt: str | None = None, text_annotate=None, dpi: int = 100, sigma_clip: float | None = None, render: bool = True, tf_rect: list[float] | None = None, *, label_fontsize: float | str = 10, ): r"""Saves the most recently rendered image of the Scene to disk, including an image of the transfer function and and user-defined text. Once you have created a scene and rendered that scene to an image array, this saves that image array to disk with an optional filename. If an image has not yet been rendered for the current scene object, it forces one and writes it out. Parameters ---------- fname: string, optional If specified, save the rendering as a bitmap to the file "fname". If unspecified, it creates a default based on the dataset filename. Default: None sigma_clip: float, optional Image values greater than this number times the standard deviation plus the mean of the image will be clipped before saving. Useful for enhancing images as it gets rid of rare high pixel values. Default: None floor(vals > std_dev*sigma_clip + mean) dpi: integer, optional By default, the resulting image will be the same size as the camera parameters. If you supply a dpi, then the image will be scaled accordingly (from the default 100 dpi) label_fmt : str, optional A format specifier (e.g., label_fmt="%.2g") to use in formatting the data values that label the transfer function colorbar. label_fontsize : float or string, optional The fontsize used to display the numbers on the transfer function colorbar. This can be any matplotlib font size specification, e.g., "large" or 12. (default: 10) text_annotate : list of iterables Any text that you wish to display on the image. This should be an list containing a tuple of coordinates (in normalized figure coordinates), the text to display, and, optionally, a dictionary of keyword/value pairs to pass through to the matplotlib text() function. Each item in the main list is a separate string to write. render: boolean, optional If True, will render the scene before saving. If False, will use results of previous render if it exists. Default: True tf_rect : sequence of floats, optional A rectangle that defines the location of the transfer function legend. This is only used for the case where there are multiple volume sources with associated transfer functions. tf_rect is of the form [x0, y0, width, height], in figure coordinates. Returns ------- Nothing Examples -------- >>> sc.save_annotated( ... "fig.png", ... text_annotate=[ ... [ ... (0.05, 0.05), ... f"t = {ds.current_time.d}", ... dict(horizontalalignment="left"), ... ], ... [ ... (0.5, 0.95), ... "simulation title", ... dict(color="y", fontsize="24", horizontalalignment="center"), ... ], ... ], ... 
) """ fname = self._setup_save(fname, render) ax = self._show_mpl( self._last_render.swapaxes(0, 1), sigma_clip=sigma_clip, dpi=dpi ) # number of transfer functions? num_trans_func = 0 for rs in self._get_render_sources(): if hasattr(rs, "transfer_function"): num_trans_func += 1 # which transfer function? if num_trans_func == 1: rs = self._get_render_sources()[0] tf = rs.transfer_function label = rs.data_source.ds._get_field_info(rs.field).get_label() self._annotate( ax.axes, tf, rs, label=label, label_fmt=label_fmt, label_fontsize=label_fontsize, ) else: # set the origin and width and height of the colorbar region if tf_rect is None: tf_rect = [0.80, 0.12, 0.12, 0.9] cbx0, cby0, cbw, cbh = tf_rect cbh_each = cbh / num_trans_func for i, rs in enumerate(self._get_render_sources()): ax = self._render_figure.add_axes( [cbx0, cby0 + i * cbh_each, 0.8 * cbw, 0.8 * cbh_each] ) try: tf = rs.transfer_function except AttributeError: pass else: label = rs.data_source.ds._get_field_info(rs.field).get_label() self._annotate_multi( ax, tf, rs, label=label, label_fmt=label_fmt, label_fontsize=label_fontsize, ) # any text? if text_annotate is not None: f = self._render_figure for t in text_annotate: xy = t[0] string = t[1] if len(t) == 3: opt = t[2] else: opt = {} # sane default if "color" not in opt: opt["color"] = "w" ax.axes.text(xy[0], xy[1], string, transform=f.transFigure, **opt) self._render_figure.canvas = get_canvas(self._render_figure, fname) self._render_figure.tight_layout() self._render_figure.savefig(fname, facecolor="black", pad_inches=0) def _show_mpl(self, im, sigma_clip=None, dpi=100): from matplotlib.figure import Figure s = im.shape self._render_figure = Figure( figsize=(s[1] / float(dpi), s[0] / float(dpi)), dpi=dpi ) self._render_figure.clf() ax = self._render_figure.add_subplot(111) ax.set_position([0, 0, 1, 1]) if sigma_clip is not None: nim = im / im._clipping_value(sigma_clip) nim[nim > 1.0] = 1.0 nim[nim < 0.0] = 0.0 else: nim = im axim = ax.imshow(nim[:, :, :3] / nim[:, :, :3].max(), interpolation="bilinear") return axim def _annotate(self, ax, tf, source, label="", label_fmt=None, label_fontsize=10): ax.get_xaxis().set_visible(False) ax.get_xaxis().set_ticks([]) ax.get_yaxis().set_visible(False) ax.get_yaxis().set_ticks([]) cb = self._render_figure.colorbar( ax.images[0], pad=0.0, fraction=0.05, drawedges=True ) tf.vert_cbar( ax=cb.ax, label=label, label_fmt=label_fmt, label_fontsize=label_fontsize, resolution=self.camera.resolution[0], log_scale=source.log_field, ) def _annotate_multi(self, ax, tf, source, label, label_fmt, label_fontsize=10): ax.yaxis.set_label_position("right") ax.yaxis.tick_right() tf.vert_cbar( ax=ax, label=label, label_fmt=label_fmt, label_fontsize=label_fontsize, resolution=self.camera.resolution[0], log_scale=source.log_field, size=6, ) def _validate(self): r"""Validate the current state of the scene.""" for source in self.sources.values(): source._validate() return def composite(self, camera=None): r"""Create a composite image of the current scene. First iterate over the opaque sources and set the ZBuffer. Then iterate over the transparent sources, rendering from the value of the zbuffer to the front of the box. Typically this function is accessed through the .render() command. Parameters ---------- camera: :class:`Camera`, optional If specified, use a specific :class:`Camera` to render the scene. Returns ------- im: :class:`ImageArray` ImageArray instance of the current rendering image. 
Examples -------- >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> sc = yt.create_scene(ds) >>> # Modify camera, sources, etc... >>> im = sc.composite() """ if camera is None: camera = self.camera empty = camera.lens.new_image(camera) opaque = ZBuffer(empty, np.full(empty.shape[:2], np.inf)) for _, source in self.opaque_sources: source.render(camera, zbuffer=opaque) im = source.zbuffer.rgba for _, source in self.transparent_sources: im = source.render(camera, zbuffer=opaque) opaque.rgba = im # rotate image 180 degrees so orientation agrees with e.g. # a PlotWindow plot return np.rot90(im, k=2) def add_camera(self, data_source=None, lens_type="plane-parallel", auto=False): r"""Add a new camera to the Scene. The camera is defined by a position (the location of the camera in the simulation domain,), a focus (the point at which the camera is pointed), a width (the width of the snapshot that will be taken, a resolution (the number of pixels in the image), and a north_vector (the "up" direction in the resulting image). A camera can use a variety of different Lens objects. If the scene already has a camera associated with it, this function will create a new camera and discard the old one. Parameters ---------- data_source: :class:`AMR3DData` or :class:`Dataset`, optional This is the source to be rendered, which can be any arbitrary yt data object or dataset. lens_type: string, optional This specifies the type of lens to use for rendering. Current options are 'plane-parallel', 'perspective', and 'fisheye'. See :class:`yt.visualization.volume_rendering.lens.Lens` for details. Default: 'plane-parallel' auto: boolean If True, build smart defaults using the data source extent. This can be time-consuming to iterate over the entire dataset to find the positional bounds. Default: False Examples -------- In this example, the camera is set using defaults that are chosen to be reasonable for the argument Dataset. >>> import yt >>> from yt.visualization.volume_rendering.api import Camera, Scene >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> sc = Scene() >>> sc.add_camera() Here, we set the camera properties manually: >>> import yt >>> from yt.visualization.volume_rendering.api import Camera, Scene >>> sc = Scene() >>> cam = sc.add_camera() >>> cam.position = np.array([0.5, 0.5, -1.0]) >>> cam.focus = np.array([0.5, 0.5, 0.0]) >>> cam.north_vector = np.array([1.0, 0.0, 0.0]) Finally, we create a camera with a non-default lens: >>> import yt >>> from yt.visualization.volume_rendering.api import Camera >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> sc = Scene() >>> sc.add_camera(ds, lens_type="perspective") """ self._camera = Camera(self, data_source, lens_type, auto) return self.camera @property def camera(self): r"""The camera property. This is the default camera that will be used when rendering. Can be set manually, but Camera type will be checked for validity. 
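        Examples
        --------
        A short sketch (assumes ``sc`` is a Scene and ``ds`` a loaded
        dataset):

        >>> cam = sc.add_camera(ds)  # doctest: +SKIP
        >>> sc.camera.resolution = (512, 512)  # doctest: +SKIP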
""" return self._camera @camera.setter def camera(self, value): value.width = self.arr(value.width) value.focus = self.arr(value.focus) value.position = self.arr(value.position) self._camera = value @camera.deleter def camera(self): del self._camera self._camera = None @property def unit_registry(self): ur = self._unit_registry if ur is None: ur = UnitRegistry() # This will be updated when we add a volume source ur.add("unitary", 1.0, length) self._unit_registry = ur return self._unit_registry @unit_registry.setter def unit_registry(self, value): self._unit_registry = value if self.camera is not None: self.camera.width = YTArray( self.camera.width.in_units("unitary"), registry=value ) self.camera.focus = YTArray( self.camera.focus.in_units("unitary"), registry=value ) self.camera.position = YTArray( self.camera.position.in_units("unitary"), registry=value ) @unit_registry.deleter def unit_registry(self): del self._unit_registry self._unit_registry = None def set_camera(self, camera): r""" Set the camera to be used by this scene. """ self.camera = camera def get_camera(self): r""" Get the camera currently used by this scene. """ return self.camera def annotate_domain(self, ds, color=None): r""" Modifies this scene by drawing the edges of the computational domain. This adds a new BoxSource to the scene corresponding to the domain boundaries and returns the modified scene object. Parameters ---------- ds : :class:`yt.data_objects.static_output.Dataset` This is the dataset object corresponding to the simulation being rendered. Used to get the domain bounds. color : array_like of shape (4,), optional The RGBA value to use to draw the domain boundaries. Default is black with an alpha of 1.0. Examples -------- >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> sc = yt.create_scene(ds) >>> sc.annotate_domain(ds) >>> im = sc.render() """ box_source = BoxSource(ds.domain_left_edge, ds.domain_right_edge, color=color) self.add_source(box_source) return self def annotate_grids( self, data_source, alpha=0.3, cmap=None, min_level=None, max_level=None ): r""" Modifies this scene by drawing the edges of the AMR grids. This adds a new GridSource to the scene that represents the AMR grid and returns the resulting Scene object. Parameters ---------- data_source: :class:`~yt.data_objects.api.DataContainer` The data container that will be used to identify grids to draw. alpha : float The opacity of the grids to draw. cmap : color map name The color map to use to map resolution levels to color. min_level : int, optional Minimum level to draw max_level : int, optional Maximum level to draw Examples -------- >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> sc = yt.create_scene(ds) >>> sc.annotate_grids(ds.all_data()) >>> im = sc.render() """ if cmap is None: cmap = ytcfg.get("yt", "default_colormap") grids = GridSource( data_source, alpha=alpha, cmap=cmap, min_level=min_level, max_level=max_level, ) self.add_source(grids) return self def annotate_mesh_lines(self, color=None, alpha=1.0): """ Modifies this Scene by drawing the mesh line boundaries on all MeshSources. Parameters ---------- color : array_like of shape (4,), optional The RGBA value to use to draw the mesh lines. Default is black with an alpha of 1.0. alpha : float, optional The opacity of the mesh lines. Default is 255 (solid). 
""" for _, source in self.opaque_sources: if isinstance(source, MeshSource): source.annotate_mesh_lines(color=color, alpha=alpha) return self def annotate_axes(self, colors=None, alpha=1.0, *, thickness=1): r""" Modifies this scene by drawing the coordinate axes. This adds a new CoordinateVectorSource to the scene and returns the modified scene object. Parameters ---------- colors: array-like of shape (3,4), optional The RGBA values to use to draw the x, y, and z vectors. The default is [[1, 0, 0, alpha], [0, 1, 0, alpha], [0, 0, 1, alpha]] where ``alpha`` is set by the parameter below. If ``colors`` is set then ``alpha`` is ignored. alpha : float, optional The opacity of the vectors. thickness : int, optional The line thickness Examples -------- >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> sc = yt.create_scene(ds) >>> sc.annotate_axes(alpha=0.5) >>> im = sc.render() """ coords = CoordinateVectorSource(colors, alpha, thickness=thickness) self.add_source(coords) return self def show(self, sigma_clip=None): r"""This will send the most recently rendered image to the IPython notebook. If yt is being run from within an IPython session, and it is able to determine this, this function will send the current image of this Scene to the notebook for display. If there is no current image, it will run the render() method on this Scene before sending the result to the notebook. If yt can't determine if it's inside an IPython session, this will raise YTNotInsideNotebook. Examples -------- >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> sc = yt.create_scene(ds) >>> sc.show() """ if IS_IPYTHON: from IPython.display import display self._sigma_clip = sigma_clip display(self) else: raise YTNotInsideNotebook _arr = None @property def arr(self): """Converts an array into a :class:`yt.units.yt_array.YTArray` The returned YTArray will be dimensionless by default, but can be cast to arbitrary units using the ``units`` keyword argument. Parameters ---------- input_array : Iterable A tuple, list, or array to attach units to units: String unit specification, unit symbol object, or astropy units object The units of the array. Powers must be specified using python syntax (cm**3, not cm^3). dtype : string or NumPy dtype object The dtype of the returned array data Examples -------- >>> a = sc.arr([1, 2, 3], "cm") >>> b = sc.arr([4, 5, 6], "m") >>> a + b YTArray([ 401., 502., 603.]) cm >>> b + a YTArray([ 4.01, 5.02, 6.03]) m Arrays returned by this function know about the scene's unit system >>> a = sc.arr(np.ones(5), "unitary") >>> a.in_units("Mpc") YTArray([ 1.00010449, 1.00010449, 1.00010449, 1.00010449, 1.00010449]) Mpc """ if self._arr is not None: return self._arr self._arr = functools.partial(YTArray, registry=self.unit_registry) return self._arr _quan = None @property def quan(self): """Converts an scalar into a :class:`yt.units.yt_array.YTQuantity` The returned YTQuantity will be dimensionless by default, but can be cast to arbitrary units using the ``units`` keyword argument. Parameters ---------- input_scalar : an integer or floating point scalar The scalar to attach units to units : String unit specification, unit symbol object, or astropy units input_units : deprecated in favor of 'units' The units of the quantity. Powers must be specified using python syntax (cm**3, not cm^3). dtype : string or NumPy dtype object The dtype of the array data. 
Examples -------- >>> a = sc.quan(1, "cm") >>> b = sc.quan(2, "m") >>> a + b 201.0 cm >>> b + a 2.01 m Quantities created this way automatically know about the unit system of the scene >>> a = ds.quan(5, "unitary") >>> a.in_cgs() 1.543e+25 cm """ if self._quan is not None: return self._quan self._quan = functools.partial(YTQuantity, registry=self.unit_registry) return self._quan def _repr_png_(self): if self._last_render is None: self.render() png = self._last_render.write_png( filename=None, sigma_clip=self._sigma_clip, background="black" ) self._sigma_clip = None return png def __repr__(self): disp = ":" disp += "\nSources: \n" for k, v in self.sources.items(): disp += f" {k}: {v}\n" disp += "Camera: \n" disp += f" {self.camera}" return disp ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.423154 yt-4.4.0/yt/visualization/volume_rendering/tests/0000755000175100001770000000000014714401715021627 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/tests/__init__.py0000644000175100001770000000000014714401662023727 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/tests/test_camera_attributes.py0000644000175100001770000001022714714401662026741 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_almost_equal, assert_equal import yt.units as u from yt.testing import fake_random_ds from yt.visualization.volume_rendering.api import Scene valid_lens_types = [ "plane-parallel", "perspective", "stereo-perspective", "fisheye", "spherical", "stereo-spherical", ] def test_scene_and_camera_attributes(): ds = fake_random_ds(64, length_unit=2, bbox=np.array([[-1, 1], [-1, 1], [-1, 1]])) sc = Scene() cam = sc.add_camera(ds) # test that initial values are correct in code units assert_equal(cam.width, ds.arr([3, 3, 3], "code_length")) assert_equal(cam.position, ds.arr([1, 1, 1], "code_length")) assert_equal(cam.focus, ds.arr([0, 0, 0], "code_length")) # test setting the attributes in various ways attribute_values = [ ( 1, ds.arr([2, 2, 2], "code_length"), ), ( [1], ds.arr([2, 2, 2], "code_length"), ), ( [1, 2], RuntimeError, ), ( [1, 1, 1], ds.arr([2, 2, 2], "code_length"), ), ( (1, "code_length"), ds.arr([1, 1, 1], "code_length"), ), ( ((1, "code_length"), (1, "code_length")), RuntimeError, ), ( ((1, "cm"), (2, "cm"), (3, "cm")), ds.arr([0.5, 1, 1.5], "code_length"), ), ( 2 * u.cm, ds.arr([1, 1, 1], "code_length"), ), ( ds.arr(2, "cm"), ds.arr([1, 1, 1], "code_length"), ), ( [2 * u.cm], ds.arr([1, 1, 1], "code_length"), ), ( [1, 2, 3] * u.cm, ds.arr([0.5, 1, 1.5], "code_length"), ), ( [1, 2] * u.cm, RuntimeError, ), ( [u.cm * w for w in [1, 2, 3]], ds.arr([0.5, 1, 1.5], "code_length"), ), ] # define default values to avoid accidentally setting focus = position default_values = { "focus": [0, 0, 0], "position": [4, 4, 4], "width": [1, 1, 1], } attribute_list = list(default_values.keys()) for attribute in attribute_list: for other_attribute in [a for a in attribute_list if a != attribute]: setattr(cam, other_attribute, default_values[other_attribute]) for attribute_value, expected_result in attribute_values: try: # test properties setattr(cam, attribute, attribute_value) assert_almost_equal(getattr(cam, attribute), expected_result) except RuntimeError: assert expected_result is RuntimeError try: # test setters/getters getattr(cam, 
f"set_{attribute}")(attribute_value) assert_almost_equal(getattr(cam, f"get_{attribute}")(), expected_result) except RuntimeError: assert expected_result is RuntimeError resolution_values = ( ( 512, (512, 512), ), ( (512, 512), (512, 512), ), ( (256, 512), (256, 512), ), ((256, 256, 256), RuntimeError), ) for resolution_value, expected_result in resolution_values: try: # test properties cam.resolution = resolution_value assert_equal(cam.resolution, expected_result) except RuntimeError: assert expected_result is RuntimeError try: # test setters/getters cam.set_resolution(resolution_value) assert_equal(cam.get_resolution(), expected_result) except RuntimeError: assert expected_result is RuntimeError for lens_type in valid_lens_types: cam.set_lens(lens_type) # See issue #1287 cam.focus = [0, 0, 0] cam_pos = [1, 0, 0] north_vector = [0, 1, 0] cam.set_position(cam_pos, north_vector) cam_pos = [0, 1, 0] north_vector = [0, 0, 1] cam.set_position(cam_pos, north_vector) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/tests/test_composite.py0000644000175100001770000000464214714401662025251 0ustar00runnerdockerimport os import shutil import tempfile from unittest import TestCase import numpy as np from yt.data_objects.api import ImageArray from yt.testing import fake_random_ds from yt.visualization.volume_rendering.api import ( BoxSource, LineSource, Scene, create_volume_source, ) def setup_module(): """Test specific setup.""" from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True class CompositeVRTest(TestCase): # This toggles using a temporary directory. Turn off to examine images. use_tmpdir = True def setUp(self): np.random.seed(0) if self.use_tmpdir: self.curdir = os.getcwd() # Perform I/O in safe place instead of yt main dir self.tmpdir = tempfile.mkdtemp() os.chdir(self.tmpdir) else: self.curdir, self.tmpdir = None, None def tearDown(self): if self.use_tmpdir: os.chdir(self.curdir) shutil.rmtree(self.tmpdir) def test_composite_vr(self): ds = fake_random_ds(64) dd = ds.sphere(ds.domain_center, 0.45 * ds.domain_width[0]) ds.field_info[ds.field_list[0]].take_log = False sc = Scene() cam = sc.add_camera(ds) cam.resolution = (512, 512) vr = create_volume_source(dd, field=ds.field_list[0]) vr.transfer_function.clear() vr.transfer_function.grey_opacity = True vr.transfer_function.map_to_colormap(0.0, 1.0, scale=3.0, colormap="Reds") sc.add_source(vr) cam.set_width(1.8 * ds.domain_width) cam.lens.setup_box_properties(cam) # DRAW SOME LINES npoints = 100 vertices = np.random.random([npoints, 2, 3]) colors = np.random.random([npoints, 4]) colors[:, 3] = 0.10 box_source = BoxSource( ds.domain_left_edge, ds.domain_right_edge, color=[1.0, 1.0, 1.0, 1.0] ) sc.add_source(box_source) LE = ds.domain_left_edge + np.array([0.1, 0.0, 0.3]) * ds.domain_left_edge.uq RE = ds.domain_right_edge - np.array([0.1, 0.2, 0.3]) * ds.domain_left_edge.uq color = np.array([0.0, 1.0, 0.0, 0.10]) box_source = BoxSource(LE, RE, color=color) sc.add_source(box_source) line_source = LineSource(vertices, colors) sc.add_source(line_source) im = sc.render() im = ImageArray(im.d) im.write_png("composite.png") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/tests/test_lenses.py0000644000175100001770000000776114714401662024545 0ustar00runnerdockerimport os import shutil import tempfile from unittest import TestCase import numpy as np from 
yt.testing import fake_random_ds from yt.visualization.volume_rendering.api import Scene, create_volume_source def setup_module(): """Test specific setup.""" from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True class LensTest(TestCase): # This toggles using a temporary directory. Turn off to examine images. use_tmpdir = True def setUp(self): if self.use_tmpdir: self.curdir = os.getcwd() # Perform I/O in safe place instead of yt main dir self.tmpdir = tempfile.mkdtemp() os.chdir(self.tmpdir) else: self.curdir, self.tmpdir = None, None self.field = ("gas", "density") self.ds = fake_random_ds(32, fields=(self.field,), units=("g/cm**3",)) self.ds.index def tearDown(self): if self.use_tmpdir: os.chdir(self.curdir) shutil.rmtree(self.tmpdir) def test_perspective_lens(self): sc = Scene() cam = sc.add_camera(self.ds, lens_type="perspective") cam.position = self.ds.arr(np.array([1.0, 1.0, 1.0]), "code_length") vol = create_volume_source(self.ds, field=self.field) tf = vol.transfer_function tf.grey_opacity = True sc.add_source(vol) sc.save(f"test_perspective_{self.field[1]}.png", sigma_clip=6.0) def test_stereoperspective_lens(self): sc = Scene() cam = sc.add_camera(self.ds, lens_type="stereo-perspective") cam.resolution = [256, 128] cam.position = self.ds.arr(np.array([0.7, 0.7, 0.7]), "code_length") vol = create_volume_source(self.ds, field=self.field) tf = vol.transfer_function tf.grey_opacity = True sc.add_source(vol) sc.save(f"test_stereoperspective_{self.field[1]}.png", sigma_clip=6.0) def test_fisheye_lens(self): dd = self.ds.sphere(self.ds.domain_center, self.ds.domain_width[0] / 10) sc = Scene() cam = sc.add_camera(dd, lens_type="fisheye") cam.lens.fov = 360.0 cam.set_width(self.ds.domain_width) v, c = self.ds.find_max(("gas", "density")) cam.set_position(c - 0.0005 * self.ds.domain_width) vol = create_volume_source(dd, field=self.field) tf = vol.transfer_function tf.grey_opacity = True sc.add_source(vol) sc.save(f"test_fisheye_{self.field[1]}.png", sigma_clip=6.0) def test_plane_lens(self): dd = self.ds.sphere(self.ds.domain_center, self.ds.domain_width[0] / 10) sc = Scene() cam = sc.add_camera(dd, lens_type="plane-parallel") cam.set_width(self.ds.domain_width * 1e-2) v, c = self.ds.find_max(("gas", "density")) vol = create_volume_source(dd, field=self.field) tf = vol.transfer_function tf.grey_opacity = True sc.add_source(vol) sc.save(f"test_plane_{self.field[1]}.png", sigma_clip=6.0) def test_spherical_lens(self): sc = Scene() cam = sc.add_camera(self.ds, lens_type="spherical") cam.resolution = [256, 128] cam.position = self.ds.arr(np.array([0.6, 0.5, 0.5]), "code_length") vol = create_volume_source(self.ds, field=self.field) tf = vol.transfer_function tf.grey_opacity = True sc.add_source(vol) sc.save(f"test_spherical_{self.field[1]}.png", sigma_clip=6.0) def test_stereospherical_lens(self): w = (self.ds.domain_width).in_units("code_length") w = self.ds.arr(w, "code_length") sc = Scene() cam = sc.add_camera(self.ds, lens_type="stereo-spherical") cam.resolution = [256, 256] cam.position = self.ds.arr(np.array([0.6, 0.5, 0.5]), "code_length") vol = create_volume_source(self.ds, field=self.field) tf = vol.transfer_function tf.grey_opacity = True sc.add_source(vol) sc.save(f"test_stereospherical_{self.field[1]}.png", sigma_clip=6.0) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/tests/test_mesh_render.py0000644000175100001770000003171114714401662025537 0ustar00runnerdockerimport 
matplotlib.pyplot as plt import pytest from yt.config import ytcfg from yt.testing import ( fake_hexahedral_ds, fake_tetrahedral_ds, requires_file, requires_module, ) from yt.utilities.answer_testing.framework import data_dir_load, data_dir_load_v2 from yt.visualization.volume_rendering.api import MeshSource, Scene, create_scene from yt.visualization.volume_rendering.render_source import set_raytracing_engine @pytest.fixture @requires_module("pyembree") def with_pyembree_ray_tracing_engine(): old = ytcfg["yt", "ray_tracing_engine"] # the @requires_module decorator only guards against pyembree not being # available for import, but it might not be installed properly regardless # so we need to be extra careful not to run tests with the default engine try: set_raytracing_engine("embree") except UserWarning as exc: pytest.skip(str(exc)) else: if ytcfg["yt", "ray_tracing_engine"] != "embree": pytest.skip("Error while setting embree raytracing engine") yield set_raytracing_engine(old) @pytest.fixture def with_default_ray_tracing_engine(): old = ytcfg["yt", "ray_tracing_engine"] set_raytracing_engine("yt") yield set_raytracing_engine(old) @pytest.mark.usefixtures("with_default_ray_tracing_engine") class TestMeshRenderDefaultEngine: @classmethod def setup_class(cls): cls.images = {} cls.ds_t = fake_tetrahedral_ds() for field in cls.ds_t.field_list: if field[0] == "all": continue sc = Scene() sc.add_source(MeshSource(cls.ds_t, field)) sc.add_camera() im = sc.render() cls.images["tetrahedral_" + "_".join(field)] = im cls.ds_h = fake_hexahedral_ds() for field in cls.ds_t.field_list: if field[0] == "all": continue sc = Scene() sc.add_source(MeshSource(cls.ds_h, field)) sc.add_camera() im = sc.render() cls.images["hexahedral_" + "_".join(field)] = im @pytest.mark.parametrize("kind", ["tetrahedral", "hexahedral"]) @pytest.mark.parametrize("fname", ["elem", "test"]) @pytest.mark.mpl_image_compare(remove_text=True) def test_mesh_render_default_engine(self, kind, fname): fig, ax = plt.subplots() ax.imshow(self.images[f"{kind}_connect1_{fname}"]) return fig @pytest.mark.usefixtures("with_pyembree_ray_tracing_engine") class TestMeshRenderPyembreeEngine: @classmethod def setup_class(cls): cls.images = {} cls.ds_t = fake_tetrahedral_ds() for field in cls.ds_t.field_list: if field[0] == "all": continue sc = Scene() sc.add_source(MeshSource(cls.ds_t, field)) sc.add_camera() im = sc.render() cls.images["tetrahedral_" + "_".join(field)] = im cls.ds_h = fake_hexahedral_ds() for field in cls.ds_t.field_list: if field[0] == "all": continue sc = Scene() sc.add_source(MeshSource(cls.ds_h, field)) sc.add_camera() im = sc.render() cls.images["hexahedral_" + "_".join(field)] = im @pytest.mark.parametrize("kind", ["tetrahedral", "hexahedral"]) @pytest.mark.parametrize("fname", ["elem", "test"]) @pytest.mark.mpl_image_compare(remove_text=True) def test_mesh_render_pyembree_engine(self, kind, fname): fig, ax = plt.subplots() ax.imshow(self.images[f"{kind}_connect1_{fname}"]) return fig hex8 = "MOOSE_sample_data/out.e-s010" @pytest.mark.usefixtures("with_default_ray_tracing_engine") class TestHex8DefaultEngine: @requires_file(hex8) @classmethod def setup_class(cls): cls.ds = data_dir_load_v2(hex8, step=-1) cls.images = {} for field in [("connect1", "diffused"), ("connect2", "convected")]: sc = create_scene(cls.ds, field) im = sc.render() cls.images["_".join(field)] = im @pytest.mark.mpl_image_compare(remove_text=True) def test_mesh_render_default_engine_hex8_connect1_diffused(self): fig, ax = plt.subplots() 
ax.imshow(self.images["connect1_diffused"]) return fig @pytest.mark.mpl_image_compare(remove_text=True) def test_mesh_render_default_engine_hex8_connect2_convected(self): fig, ax = plt.subplots() ax.imshow(self.images["connect2_convected"]) return fig @pytest.mark.usefixtures("with_pyembree_ray_tracing_engine") class TestHex8PyembreeEngine: @requires_file(hex8) @classmethod def setup_class(cls): cls.ds = data_dir_load_v2(hex8, step=-1) cls.images = {} for field in [("connect1", "diffused"), ("connect2", "convected")]: sc = create_scene(cls.ds, field) im = sc.render() cls.images["_".join(field)] = im @pytest.mark.mpl_image_compare(remove_text=True) def test_mesh_render_pyembree_engine_hex8_connect1_diffused(self): fig, ax = plt.subplots() ax.imshow(self.images["connect1_diffused"]) return fig @pytest.mark.mpl_image_compare(remove_text=True) def test_mesh_render_pyembree_engine_hex8_connect2_convected(self): fig, ax = plt.subplots() ax.imshow(self.images["connect2_convected"]) return fig tet4 = "MOOSE_sample_data/high_order_elems_tet4_refine_out.e" @pytest.mark.usefixtures("with_default_ray_tracing_engine") class TestTet4DefaultEngine: @requires_file(tet4) @classmethod def setup_class(cls): cls.ds = data_dir_load_v2(tet4, step=-1) cls.images = {} for field in [("connect1", "u")]: sc = create_scene(cls.ds, field) im = sc.render() cls.images["_".join(field)] = im @pytest.mark.mpl_image_compare(remove_text=True) def test_mesh_render_default_engine_tet4(self): fig, ax = plt.subplots() ax.imshow(self.images["connect1_u"]) return fig @pytest.mark.usefixtures("with_pyembree_ray_tracing_engine") class TestTet4PyembreeEngine: @requires_file(tet4) @classmethod def setup_class(cls): cls.ds = data_dir_load_v2(tet4, step=-1) cls.images = {} for field in [("connect1", "u")]: sc = create_scene(cls.ds, field) im = sc.render() cls.images["_".join(field)] = im @pytest.mark.mpl_image_compare(remove_text=True) def test_mesh_render_pyembree_engine_tet4(self): fig, ax = plt.subplots() ax.imshow(self.images["connect1_u"]) return fig hex20 = "MOOSE_sample_data/mps_out.e" @pytest.mark.usefixtures("with_default_ray_tracing_engine") class TestHex20DefaultEngine: @requires_file(hex20) @classmethod def setup_class(cls): cls.ds = data_dir_load_v2(hex20, step=-1) cls.images = {} for field in [("connect2", "temp")]: sc = create_scene(cls.ds, field) im = sc.render() cls.images["_".join(field)] = im @pytest.mark.mpl_image_compare(remove_text=True) def test_mesh_render_default_engine_hex20(self): fig, ax = plt.subplots() ax.imshow(self.images["connect2_temp"]) return fig @pytest.mark.usefixtures("with_pyembree_ray_tracing_engine") class TestHex20PyembreeEngine: @requires_file(hex20) @classmethod def setup_class(cls): cls.ds = data_dir_load_v2(hex20, step=-1) cls.images = {} for field in [("connect2", "temp")]: sc = create_scene(cls.ds, field) im = sc.render() cls.images["_".join(field)] = im @pytest.mark.mpl_image_compare(remove_text=True) def test_mesh_render_pyembree_engine_hex20(self): fig, ax = plt.subplots() ax.imshow(self.images["connect2_temp"]) return fig wedge6 = "MOOSE_sample_data/wedge_out.e" @pytest.mark.usefixtures("with_default_ray_tracing_engine") class TestWedge6DefaultEngine: @requires_file(wedge6) @classmethod def setup_class(cls): cls.ds = data_dir_load_v2(wedge6, step=-1) cls.images = {} for field in [("connect1", "diffused")]: sc = create_scene(cls.ds, field) im = sc.render() cls.images["_".join(field)] = im @pytest.mark.mpl_image_compare(remove_text=True) def test_mesh_render_default_engine_wedge6(self): 
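        # Display the image rendered once in setup_class and hand the figure
        # to pytest-mpl for comparison against the stored baseline.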
        fig, ax = plt.subplots()
        ax.imshow(self.images["connect1_diffused"])
        return fig


@pytest.mark.usefixtures("with_pyembree_ray_tracing_engine")
class TestWedge6PyembreeEngine:
    @requires_file(wedge6)
    @classmethod
    def setup_class(cls):
        cls.ds = data_dir_load_v2(wedge6, step=-1)
        cls.images = {}
        for field in [("connect1", "diffused")]:
            sc = create_scene(cls.ds, field)
            im = sc.render()
            cls.images["_".join(field)] = im

    @pytest.mark.mpl_image_compare(remove_text=True)
    def test_mesh_render_pyembree_engine_wedge6(self):
        fig, ax = plt.subplots()
        ax.imshow(self.images["connect1_diffused"])
        return fig


tet10 = "SecondOrderTets/tet10_unstructured_out.e"


@pytest.mark.usefixtures("with_default_ray_tracing_engine")
class TestTet10DefaultEngine:
    @requires_file(tet10)
    @classmethod
    def setup_class(cls):
        cls.ds = data_dir_load_v2(tet10, step=-1)
        cls.images = {}
        for field in [("connect1", "uz")]:
            sc = create_scene(cls.ds, field)
            im = sc.render()
            cls.images["_".join(field)] = im

    @pytest.mark.mpl_image_compare(remove_text=True)
    def test_mesh_render_default_engine_tet10(self):
        fig, ax = plt.subplots()
        ax.imshow(self.images["connect1_uz"])
        return fig


@pytest.mark.usefixtures("with_pyembree_ray_tracing_engine")
class TestTet10PyembreeEngine:
    @requires_file(tet10)
    @classmethod
    def setup_class(cls):
        cls.ds = data_dir_load_v2(tet10, step=-1)
        cls.images = {}
        for field in [("connect1", "uz")]:
            sc = create_scene(cls.ds, field)
            im = sc.render()
            cls.images["_".join(field)] = im

    @pytest.mark.mpl_image_compare(remove_text=True)
    def test_mesh_render_pyembree_engine_tet10(self):
        fig, ax = plt.subplots()
        ax.imshow(self.images["connect1_uz"])
        return fig


@requires_file(hex8)
@pytest.mark.usefixtures("with_default_ray_tracing_engine")
@pytest.mark.mpl_image_compare
def test_perspective_mesh_render_default():
    ds = data_dir_load(hex8)
    sc = create_scene(ds, ("connect2", "diffused"))
    cam = sc.add_camera(ds, lens_type="perspective")
    cam.focus = ds.arr([0.0, 0.0, 0.0], "code_length")
    cam_pos = ds.arr([-4.5, 4.5, -4.5], "code_length")
    north_vector = ds.arr([0.0, -1.0, -1.0], "dimensionless")
    cam.set_position(cam_pos, north_vector)
    cam.resolution = (800, 800)
    im = sc.render()
    fig, ax = plt.subplots()
    ax.imshow(im)
    return fig


@requires_file(hex8)
@pytest.mark.usefixtures("with_pyembree_ray_tracing_engine")
@pytest.mark.mpl_image_compare
def test_perspective_mesh_render_pyembree():
    ds = data_dir_load(hex8)
    sc = create_scene(ds, ("connect2", "diffused"))
    cam = sc.add_camera(ds, lens_type="perspective")
    cam.focus = ds.arr([0.0, 0.0, 0.0], "code_length")
    cam_pos = ds.arr([-4.5, 4.5, -4.5], "code_length")
    north_vector = ds.arr([0.0, -1.0, -1.0], "dimensionless")
    cam.set_position(cam_pos, north_vector)
    cam.resolution = (800, 800)
    im = sc.render()
    fig, ax = plt.subplots()
    ax.imshow(im)
    return fig


@requires_file(hex8)
@pytest.mark.usefixtures("with_default_ray_tracing_engine")
@pytest.mark.mpl_image_compare
def test_composite_mesh_render_default():
    ds = data_dir_load(hex8)
    sc = Scene()
    cam = sc.add_camera(ds)
    cam.focus = ds.arr([0.0, 0.0, 0.0], "code_length")
    cam.set_position(
        ds.arr([-3.0, 3.0, -3.0], "code_length"),
        ds.arr([0.0, -1.0, 0.0], "dimensionless"),
    )
    cam.set_width(ds.arr([8.0, 8.0, 8.0], "code_length"))
    cam.resolution = (800, 800)
    ms1 = MeshSource(ds, ("connect1", "diffused"))
    ms2 = MeshSource(ds, ("connect2", "diffused"))
    sc.add_source(ms1)
    sc.add_source(ms2)
    im = sc.render()
    fig, ax = plt.subplots()
    ax.imshow(im)
    return fig


@requires_file(hex8)
@pytest.mark.usefixtures("with_pyembree_ray_tracing_engine")
@pytest.mark.mpl_image_compare
def test_composite_mesh_render_pyembree():
    ds = data_dir_load(hex8)
    sc = Scene()
    cam = sc.add_camera(ds)
    cam.focus = ds.arr([0.0, 0.0, 0.0], "code_length")
    cam.set_position(
        ds.arr([-3.0, 3.0, -3.0], "code_length"),
        ds.arr([0.0, -1.0, 0.0], "dimensionless"),
    )
    cam.set_width(ds.arr([8.0, 8.0, 8.0], "code_length"))
    cam.resolution = (800, 800)
    ms1 = MeshSource(ds, ("connect1", "diffused"))
    ms2 = MeshSource(ds, ("connect2", "diffused"))
    sc.add_source(ms1)
    sc.add_source(ms2)
    im = sc.render()
    fig, ax = plt.subplots()
    ax.imshow(im)
    return fig

././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/tests/test_off_axis_SPH.py0000644000175100001770000002470314714401662025557 0ustar00runnerdocker
import numpy as np
from numpy.testing import assert_almost_equal

from yt.testing import fake_sph_orientation_ds, requires_module
from yt.utilities.lib.pixelization_routines import pixelize_sph_kernel_projection
from yt.utilities.on_demand_imports import _scipy
from yt.visualization.volume_rendering import off_axis_projection as OffAP

spatial = _scipy.spatial
ndimage = _scipy.ndimage


def test_no_rotation():
    """Determine whether a projection processed through off_axis_projection
    with no rotation gives the same image buffer as one processed directly
    through pixelize_sph_kernel_projection.
    """
    normal_vector = [0.0, 0.0, 1.0]
    resolution = (64, 64)
    ds = fake_sph_orientation_ds()
    ad = ds.all_data()
    left_edge = ds.domain_left_edge
    right_edge = ds.domain_right_edge
    center = (left_edge + right_edge) / 2
    width = right_edge - left_edge
    px = ad["all", "particle_position_x"]
    py = ad["all", "particle_position_y"]
    pz = ad["all", "particle_position_z"]
    hsml = ad["all", "smoothing_length"]
    quantity_to_smooth = ad["gas", "density"]
    density = ad["io", "density"]
    mass = ad["io", "particle_mass"]
    bounds = [-4, 4, -4, 4, -4, 4]
    buf2 = np.zeros(resolution)
    mask = np.ones_like(buf2, dtype="uint8")
    buf1 = OffAP.off_axis_projection(
        ds, center, normal_vector, width, resolution, ("gas", "density")
    )
    pixelize_sph_kernel_projection(
        buf2, mask, px, py, pz, hsml, mass, density, quantity_to_smooth, bounds
    )
    assert_almost_equal(buf1.ndarray_view(), buf2)


@requires_module("scipy")
def test_basic_rotation_1():
    """All particles on the z-axis should now be on the negative y-axis.

    fake_sph_orientation has three z-axis particles, so there should be
    three y-axis particles after rotation:
        (0, 0, 1) -> (0, -1)
        (0, 0, 2) -> (0, -2)
        (0, 0, 3) -> (0, -3)
    In addition, we should find a local maximum at (0, 0) due to:
        (0, 0, 0) -> (0, 0)
        (0, 1, 0) -> (0, 0)
        (0, 2, 0) -> (0, 0)
    and the one particle on the x-axis should not change its position:
        (1, 0, 0) -> (1, 0)
    """
    expected_maxima = ([0.0, 0.0, 0.0, 0.0, 1.0], [0.0, -1.0, -2.0, -3.0, 0.0])
    normal_vector = [0.0, 1.0, 0.0]
    north_vector = [0.0, 0.0, -1.0]
    resolution = (64, 64)
    ds = fake_sph_orientation_ds()
    left_edge = ds.domain_left_edge
    right_edge = ds.domain_right_edge
    center = (left_edge + right_edge) / 2
    width = right_edge - left_edge
    buf1 = OffAP.off_axis_projection(
        ds,
        center,
        normal_vector,
        width,
        resolution,
        ("gas", "density"),
        north_vector=north_vector,
    )
    find_compare_maxima(expected_maxima, buf1, resolution, width)


@requires_module("scipy")
def test_basic_rotation_2():
    """Rotation of x-axis onto z-axis.
All particles on z-axis should now be on the negative x-Axis fake_sph_orientation has three z-axis particles, so there should be three x-axis particles after rotation (0, 0, 1) -> (-1, 0) (0, 0, 2) -> (-2, 0) (0, 0, 3) -> (-3, 0) In addition, we should find a local maxima at (0, 0) due to: (0, 0, 0) -> (0, 0) (1, 0, 0) -> (0, 0) and the two particles on the y-axis should not change its position: (0, 1, 0) -> (0, 1) (0, 2, 0) -> (0, 2) """ expected_maxima = ( [-1.0, -2.0, -3.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 1.0, 2.0], ) normal_vector = [1.0, 0.0, 0.0] north_vector = [0.0, 1.0, 0.0] resolution = (64, 64) ds = fake_sph_orientation_ds() left_edge = ds.domain_left_edge right_edge = ds.domain_right_edge center = (left_edge + right_edge) / 2 width = right_edge - left_edge buf1 = OffAP.off_axis_projection( ds, center, normal_vector, width, resolution, ("gas", "density"), north_vector=north_vector, ) find_compare_maxima(expected_maxima, buf1, resolution, width) @requires_module("scipy") def test_basic_rotation_3(): """Rotation of z-axis onto negative z-axis. All fake particles on z-axis should now be of the negative z-Axis. fake_sph_orientation has three z-axis particles, so we should have a local maxima at (0, 0) (0, 0, 1) -> (0, 0) (0, 0, 2) -> (0, 0) (0, 0, 3) -> (0, 0) In addition, (0, 0, 0) should also contribute to the local maxima at (0, 0): (0, 0, 0) -> (0, 0) x-axis particles should be rotated as such: (1, 0, 0) -> (0, -1) and same goes for y-axis particles: (0, 1, 0) -> (-1, 0) (0, 2, 0) -> (-2, 0) """ expected_maxima = ([0.0, 0.0, -1.0, -2.0], [0.0, -1.0, 0.0, 0.0]) normal_vector = [0.0, 0.0, -1.0] resolution = (64, 64) ds = fake_sph_orientation_ds() left_edge = ds.domain_left_edge right_edge = ds.domain_right_edge center = (left_edge + right_edge) / 2 width = right_edge - left_edge buf1 = OffAP.off_axis_projection( ds, center, normal_vector, width, resolution, ("gas", "density") ) find_compare_maxima(expected_maxima, buf1, resolution, width) @requires_module("scipy") def test_basic_rotation_4(): """Rotation of x-axis to z-axis and original z-axis to y-axis with the use of the north_vector. All fake particles on z-axis should now be on the y-Axis. All fake particles on the x-axis should now be on the z-axis, and all fake particles on the y-axis should now be on the x-axis. 
(0, 0, 1) -> (0, 1) (0, 0, 2) -> (0, 2) (0, 0, 3) -> (0, 3) In addition, (0, 0, 0) should contribute to the local maxima at (0, 0): (0, 0, 0) -> (0, 0) x-axis particles should be rotated and contribute to the local maxima at (0, 0): (1, 0, 0) -> (0, 0) and the y-axis particles shift into the positive x direction: (0, 1, 0) -> (1, 0) (0, 2, 0) -> (2, 0) """ expected_maxima = ([0.0, 0.0, 0.0, 0.0, 1.0, 2.0], [1.0, 2.0, 3.0, 0.0, 0.0, 0.0]) normal_vector = [1.0, 0.0, 0.0] north_vector = [0.0, 0.0, 1.0] resolution = (64, 64) ds = fake_sph_orientation_ds() left_edge = ds.domain_left_edge right_edge = ds.domain_right_edge center = (left_edge + right_edge) / 2 width = right_edge - left_edge buf1 = OffAP.off_axis_projection( ds, center, normal_vector, width, resolution, ("gas", "density"), north_vector=north_vector, ) find_compare_maxima(expected_maxima, buf1, resolution, width) @requires_module("scipy") def test_center_1(): """Change the center to [0, 3, 0] Every point will be shifted by 3 in the y-domain With this, we should not be able to see any of the y-axis particles (0, 1, 0) -> (0, -2) (0, 2, 0) -> (0, -1) (0, 0, 1) -> (0, -3) (0, 0, 2) -> (0, -3) (0, 0, 3) -> (0, -3) (0, 0, 0) -> (0, -3) (1, 0, 0) -> (1, -3) """ expected_maxima = ([0.0, 0.0, 0.0, 1.0], [-2.0, -1.0, -3.0, -3.0]) normal_vector = [0.0, 0.0, 1.0] resolution = (64, 64) ds = fake_sph_orientation_ds() left_edge = ds.domain_left_edge right_edge = ds.domain_right_edge # center = [(left_edge[0] + right_edge[0])/2, # left_edge[1], # (left_edge[2] + right_edge[2])/2] center = [0.0, 3.0, 0.0] width = right_edge - left_edge buf1 = OffAP.off_axis_projection( ds, center, normal_vector, width, resolution, ("gas", "density") ) find_compare_maxima(expected_maxima, buf1, resolution, width) @requires_module("scipy") def test_center_2(): """Change the center to [0, -1, 0] Every point will be shifted by 1 in the y-domain With this, we should not be able to see any of the y-axis particles (0, 1, 0) -> (0, 2) (0, 2, 0) -> (0, 3) (0, 0, 1) -> (0, 1) (0, 0, 2) -> (0, 1) (0, 0, 3) -> (0, 1) (0, 0, 0) -> (0, 1) (1, 0, 0) -> (1, 1) """ expected_maxima = ([0.0, 0.0, 0.0, 1.0], [2.0, 3.0, 1.0, 1.0]) normal_vector = [0.0, 0.0, 1.0] resolution = (64, 64) ds = fake_sph_orientation_ds() left_edge = ds.domain_left_edge right_edge = ds.domain_right_edge center = [0.0, -1.0, 0.0] width = right_edge - left_edge buf1 = OffAP.off_axis_projection( ds, center, normal_vector, width, resolution, ("gas", "density") ) find_compare_maxima(expected_maxima, buf1, resolution, width) @requires_module("scipy") def test_center_3(): """Change the center to the left edge, or [0, -8, 0] Every point will be shifted by 8 in the y-domain With this, we should not be able to see anything ! 
""" expected_maxima = ([], []) normal_vector = [0.0, 0.0, 1.0] resolution = (64, 64) ds = fake_sph_orientation_ds() left_edge = ds.domain_left_edge right_edge = ds.domain_right_edge center = [0.0, -1.0, 0.0] width = [ (right_edge[0] - left_edge[0]), left_edge[1], (right_edge[2] - left_edge[2]), ] buf1 = OffAP.off_axis_projection( ds, center, normal_vector, width, resolution, ("gas", "density") ) find_compare_maxima(expected_maxima, buf1, resolution, width) @requires_module("scipy") def find_compare_maxima(expected_maxima, buf, resolution, width): buf_ndarray = buf.ndarray_view() max_filter_buf = ndimage.maximum_filter(buf_ndarray, size=5) maxima = np.isclose(max_filter_buf, buf_ndarray, rtol=1e-09) # ignore contributions from zones of no smoothing for i in range(len(maxima)): for j in range(len(maxima[i])): if np.isclose(buf_ndarray[i, j], 0.0, 1e-09): maxima[i, j] = False coords = ([], []) for i in range(len(maxima)): for j in range(len(maxima[i])): if maxima[i, j]: coords[0].append(i) coords[1].append(j) pixel_tolerance = 2.0 x_scaling_factor = resolution[0] / width[0] y_scaling_factor = resolution[1] / width[1] for i in range(len(expected_maxima[0])): found_match = False for j in range(len(coords[0])): # normalize coordinates x_coord = coords[0][j] y_coord = coords[1][j] x_coord -= resolution[0] / 2 y_coord -= resolution[1] / 2 x_coord /= x_scaling_factor y_coord /= y_scaling_factor if ( spatial.distance.euclidean( [x_coord, y_coord], [expected_maxima[0][i], expected_maxima[1][i]] ) < pixel_tolerance ): found_match = True break if found_match is not True: raise AssertionError pass ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/tests/test_points.py0000644000175100001770000000357314714401662024565 0ustar00runnerdockerimport os import shutil import tempfile from unittest import TestCase import numpy as np from yt.testing import fake_random_ds from yt.visualization.volume_rendering.api import ( PointSource, Scene, create_volume_source, ) def setup_module(): """Test specific setup.""" from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True class PointsVRTest(TestCase): # This toggles using a temporary directory. Turn off to examine images. 
    use_tmpdir = True

    def setUp(self):
        np.random.seed(0)
        if self.use_tmpdir:
            self.curdir = os.getcwd()
            # Perform I/O in safe place instead of yt main dir
            self.tmpdir = tempfile.mkdtemp()
            os.chdir(self.tmpdir)
        else:
            self.curdir, self.tmpdir = None, None

    def tearDown(self):
        if self.use_tmpdir:
            os.chdir(self.curdir)
            shutil.rmtree(self.tmpdir)

    def test_points_vr(self):
        ds = fake_random_ds(64)
        dd = ds.sphere(ds.domain_center, 0.45 * ds.domain_width[0])
        ds.field_info[ds.field_list[0]].take_log = False

        sc = Scene()
        cam = sc.add_camera(ds)
        cam.resolution = (512, 512)
        vr = create_volume_source(dd, field=ds.field_list[0])
        vr.transfer_function.clear()
        vr.transfer_function.grey_opacity = False
        vr.transfer_function.map_to_colormap(0.0, 1.0, scale=10.0, colormap="Reds")
        sc.add_source(vr)

        cam.set_width(1.8 * ds.domain_width)
        cam.lens.setup_box_properties(cam)

        # DRAW SOME POINTS
        npoints = 1000
        vertices = np.random.random([npoints, 3])
        colors = np.random.random([npoints, 4])
        colors[:, 3] = 0.10

        points_source = PointSource(vertices, colors=colors)
        sc.add_source(points_source)
        im = sc.render()
        im.write_png("points.png")

././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/tests/test_save_render.py0000644000175100001770000000261114714401662025536 0ustar00runnerdocker
import os
import shutil
import tempfile
from unittest import TestCase

import yt
from yt.testing import fake_random_ds


def setup_module():
    """Test specific setup."""
    from yt.config import ytcfg

    ytcfg["yt", "internals", "within_testing"] = True


class SaveRenderTest(TestCase):
    # This toggles using a temporary directory. Turn off to examine images.
    use_tmpdir = True
    tmpdir = "./"

    def setUp(self):
        if self.use_tmpdir:
            self.tmpdir = tempfile.mkdtemp()

    def tearDown(self):
        if self.use_tmpdir:
            shutil.rmtree(self.tmpdir)

    def test_save_render(self):
        ds = fake_random_ds(ndims=32)
        sc = yt.create_scene(ds)

        # make sure it renders if nothing exists, even if render = False
        sc.save(os.path.join(self.tmpdir, "raw.png"), render=False)
        # make sure it re-renders
        sc.save(os.path.join(self.tmpdir, "raw_2.png"), render=True)
        # make sure sigma clip does not re-render
        sc.save(os.path.join(self.tmpdir, "clip_2.png"), sigma_clip=2.0, render=False)
        sc.save(os.path.join(self.tmpdir, "clip_4.png"), sigma_clip=4.0, render=False)
        # save a different format with/without sigma clips
        sc.save(os.path.join(self.tmpdir, "no_clip.jpg"), render=False)
        sc.save(os.path.join(self.tmpdir, "clip_2.jpg"), sigma_clip=2, render=False)

././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/tests/test_scene.py0000644000175100001770000000624214714401662024342 0ustar00runnerdocker
import os
import shutil
import tempfile
from unittest import TestCase

import numpy as np

from yt.testing import assert_fname, fake_random_ds, fake_vr_orientation_test_ds
from yt.visualization.volume_rendering.api import (
    create_scene,
    create_volume_source,
    volume_render,
)


def setup_module():
    """Test specific setup."""
    from yt.config import ytcfg

    ytcfg["yt", "internals", "within_testing"] = True


class RotationTest(TestCase):
    # This toggles using a temporary directory. Turn off to examine images.
use_tmpdir = True def setUp(self): if self.use_tmpdir: self.curdir = os.getcwd() # Perform I/O in safe place instead of yt main dir self.tmpdir = tempfile.mkdtemp() os.chdir(self.tmpdir) else: self.curdir, self.tmpdir = None, None def tearDown(self): if self.use_tmpdir: os.chdir(self.curdir) shutil.rmtree(self.tmpdir) def test_rotation(self): ds = fake_random_ds(32) ds2 = fake_random_ds(32) dd = ds.sphere(ds.domain_center, ds.domain_width[0] / 2) dd2 = ds2.sphere(ds2.domain_center, ds2.domain_width[0] / 2) im, sc = volume_render(dd, field=("gas", "density")) im.write_png("test.png") vol = sc.get_source(0) tf = vol.transfer_function tf.clear() mi, ma = dd.quantities.extrema(("gas", "density")) mi = np.log10(mi) ma = np.log10(ma) mi_bound = ((ma - mi) * (0.10)) + mi ma_bound = ((ma - mi) * (0.90)) + mi tf.map_to_colormap(mi_bound, ma_bound, scale=0.01, colormap="Blues_r") vol2 = create_volume_source(dd2, field=("gas", "density")) sc.add_source(vol2) tf = vol2.transfer_function tf.clear() mi, ma = dd2.quantities.extrema(("gas", "density")) mi = np.log10(mi) ma = np.log10(ma) mi_bound = ((ma - mi) * (0.10)) + mi ma_bound = ((ma - mi) * (0.90)) + mi tf.map_to_colormap(mi_bound, ma_bound, scale=0.01, colormap="Reds_r") fname = "test_scene.pdf" sc.save(fname, sigma_clip=6.0) assert_fname(fname) fname = "test_rot.png" sc.camera.pitch(np.pi) sc.render() sc.save(fname, sigma_clip=6.0, render=False) assert_fname(fname) def test_annotations(): from matplotlib.image import imread curdir = os.getcwd() tmpdir = tempfile.mkdtemp() os.chdir(tmpdir) ds = fake_vr_orientation_test_ds(N=16) sc = create_scene(ds) sc.annotate_axes() sc.annotate_domain(ds) sc.render() # ensure that there are actually red, green, blue, and white pixels # in the image. see Issue #1595 im = sc._last_render for c in ([1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1], [1, 1, 1, 1]): assert np.where((im == c).all(axis=-1))[0].shape[0] > 0 sc[0].tfh.tf.add_layers(10, colormap="cubehelix") sc.save_annotated( "test_scene_annotated.png", text_annotate=[[(0.1, 1.05), "test_string"]], ) image = imread("test_scene_annotated.png") assert image.shape == sc.camera.resolution + (4,) os.chdir(curdir) shutil.rmtree(tmpdir) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/tests/test_sigma_clip.py0000644000175100001770000000166714714401662025362 0ustar00runnerdockerimport os import shutil import tempfile from unittest import TestCase import yt from yt.testing import fake_random_ds def setup_module(): """Test specific setup.""" from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True class SigmaClipTest(TestCase): # This toggles using a temporary directory. Turn off to examine images. 
use_tmpdir = True def setUp(self): if self.use_tmpdir: self.curdir = os.getcwd() # Perform I/O in safe place instead of yt main dir self.tmpdir = tempfile.mkdtemp() os.chdir(self.tmpdir) else: self.curdir, self.tmpdir = None, None def tearDown(self): if self.use_tmpdir: os.chdir(self.curdir) shutil.rmtree(self.tmpdir) def test_sigma_clip(self): ds = fake_random_ds(32) sc = yt.create_scene(ds) sc.save("clip_2.png", sigma_clip=2) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/tests/test_simple_vr.py0000644000175100001770000000166214714401662025246 0ustar00runnerdockerimport os import shutil import tempfile from unittest import TestCase import yt from yt.testing import fake_random_ds def setup_module(): """Test specific setup.""" from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True class SimpleVRTest(TestCase): # This toggles using a temporary directory. Turn off to examine images. use_tmpdir = True def setUp(self): if self.use_tmpdir: self.curdir = os.getcwd() # Perform I/O in safe place instead of yt main dir self.tmpdir = tempfile.mkdtemp() os.chdir(self.tmpdir) else: self.curdir, self.tmpdir = None, None def tearDown(self): if self.use_tmpdir: os.chdir(self.curdir) shutil.rmtree(self.tmpdir) def test_simple_vr(self): ds = fake_random_ds(32) _im, _sc = yt.volume_render(ds, fname="test.png", sigma_clip=4.0) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/tests/test_varia.py0000644000175100001770000000734014714401662024347 0ustar00runnerdockerimport os import shutil import tempfile from unittest import TestCase import numpy as np import yt from yt.testing import fake_random_ds from yt.visualization.volume_rendering.api import Scene, create_volume_source def setup_module(): """Test specific setup.""" from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True class VariousVRTests(TestCase): # This toggles using a temporary directory. Turn off to examine images. 
use_tmpdir = True def setUp(self): if self.use_tmpdir: self.curdir = os.getcwd() # Perform I/O in safe place instead of yt main dir self.tmpdir = tempfile.mkdtemp() os.chdir(self.tmpdir) else: self.curdir, self.tmpdir = None, None self.ds = fake_random_ds(32) def tearDown(self): if self.use_tmpdir: os.chdir(self.curdir) shutil.rmtree(self.tmpdir) del self.ds def test_simple_scene_creation(self): yt.create_scene(self.ds) def test_modify_transfer_function(self): im, sc = yt.volume_render(self.ds) volume_source = sc.get_source(0) tf = volume_source.transfer_function tf.clear() tf.grey_opacity = True tf.add_layers(3, colormap="RdBu") sc.render() def test_multiple_fields(self): im, sc = yt.volume_render(self.ds) volume_source = sc.get_source(0) volume_source.set_field(("gas", "velocity_x")) volume_source.set_weight_field(("gas", "density")) sc.render() def test_rotation_volume_rendering(self): im, sc = yt.volume_render(self.ds) sc.camera.yaw(np.pi) sc.render() def test_simple_volume_rendering(self): im, sc = yt.volume_render(self.ds, sigma_clip=4.0) def test_lazy_volume_source_construction(self): sc = Scene() source = create_volume_source(self.ds.all_data(), ("gas", "density")) assert source._volume is None assert source._transfer_function is None source.tfh.bounds = (0.1, 1) source.set_log(False) assert not source.log_field assert source.transfer_function.x_bounds == [0.1, 1] assert source._volume is None source.set_log(True) assert source.log_field assert source.transfer_function.x_bounds == [-1, 0] assert source._volume is None source.transfer_function = None source.tfh.bounds = None ad = self.ds.all_data() np.testing.assert_allclose( source.transfer_function.x_bounds, np.log10(ad.quantities.extrema(("gas", "density"))), ) assert source.tfh.log == source.log_field source.set_field(("gas", "velocity_x")) source.set_log(False) assert source.transfer_function.x_bounds == list( ad.quantities.extrema(("gas", "velocity_x")) ) assert source._volume is None source.set_field(("gas", "density")) assert source.volume is not None assert not source.volume._initialized assert source.volume.fields is None del source.volume assert source._volume is None sc.add_source(source) sc.add_camera() sc.render() assert source.volume is not None assert source.volume._initialized assert source.volume.fields == [("gas", "density")] assert source.volume.log_fields == [True] source.set_field(("gas", "velocity_x")) source.set_log(False) sc.render() assert source.volume is not None assert source.volume._initialized assert source.volume.fields == [("gas", "velocity_x")] assert source.volume.log_fields == [False] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/tests/test_vr_cameras.py0000644000175100001770000001325014714401662025364 0ustar00runnerdockerimport os import os.path import shutil import tempfile from unittest import TestCase import numpy as np from yt.testing import assert_fname, fake_random_ds from yt.visualization.volume_rendering.api import ( ColorTransferFunction, ProjectionTransferFunction, ) from yt.visualization.volume_rendering.old_camera import ( FisheyeCamera, InteractiveCamera, PerspectiveCamera, ProjectionCamera, StereoPairCamera, ) def setup_module(): """Test specific setup.""" from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True class CameraTest(TestCase): # This toggles using a temporary directory. Turn off to examine images. 
use_tmpdir = True def setUp(self): if self.use_tmpdir: self.curdir = os.getcwd() # Perform I/O in safe place instead of yt main dir self.tmpdir = tempfile.mkdtemp() os.chdir(self.tmpdir) else: self.curdir, self.tmpdir = None, None self.ds = fake_random_ds(64) self.c = self.ds.domain_center self.L = np.array([0.5, 0.5, 0.5]) self.W = 1.5 * self.ds.domain_width self.N = 64 self.field = ("gas", "density") def tearDown(self): if self.use_tmpdir: os.chdir(self.curdir) shutil.rmtree(self.tmpdir) def setup_transfer_function(self, camera_type): if camera_type in ["perspective", "camera", "stereopair", "interactive"]: mi, ma = self.ds.all_data().quantities["Extrema"](self.field) tf = ColorTransferFunction((mi, ma), grey_opacity=True) tf.map_to_colormap(mi, ma, scale=10.0, colormap="RdBu_r") return tf elif camera_type in ["healpix"]: return ProjectionTransferFunction() else: pass def test_camera(self): tf = self.setup_transfer_function("camera") cam = self.ds.camera( self.c, self.L, self.W, self.N, transfer_function=tf, log_fields=[False] ) cam.snapshot("camera.png") assert_fname("camera.png") im = cam.snapshot() im = cam.draw_domain(im) cam.draw_coordinate_vectors(im) cam.draw_line(im, [0, 0, 0], [1, 1, 0]) def test_data_source_camera(self): ds = self.ds tf = self.setup_transfer_function("camera") data_source = ds.sphere(ds.domain_center, ds.domain_width[0] * 0.5) cam = ds.camera( self.c, self.L, self.W, self.N, log_fields=[False], transfer_function=tf, data_source=data_source, ) cam.snapshot("data_source_camera.png") assert_fname("data_source_camera.png") def test_perspective_camera(self): ds = self.ds tf = self.setup_transfer_function("camera") cam = PerspectiveCamera( self.c, self.L, self.W, self.N, ds=ds, transfer_function=tf, log_fields=[False], ) cam.snapshot("perspective.png") assert_fname("perspective.png") def test_interactive_camera(self): ds = self.ds tf = self.setup_transfer_function("camera") cam = InteractiveCamera( self.c, self.L, self.W, self.N, ds=ds, transfer_function=tf, log_fields=[False], ) del cam # Can't take a snapshot here since IC uses matplotlib.' 
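    # ProjectionCamera integrates the field along each ray directly, so it is
    # constructed with a field instead of a transfer function.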
def test_projection_camera(self): ds = self.ds cam = ProjectionCamera(self.c, self.L, self.W, self.N, ds=ds, field=self.field) cam.snapshot("projection.png") assert_fname("projection.png") def test_stereo_camera(self): ds = self.ds tf = self.setup_transfer_function("camera") cam = ds.camera( self.c, self.L, self.W, self.N, transfer_function=tf, log_fields=[False] ) stereo_cam = StereoPairCamera(cam) # Take image cam1, cam2 = stereo_cam.split() cam1.snapshot(fn="stereo1.png") cam2.snapshot(fn="stereo2.png") assert_fname("stereo1.png") assert_fname("stereo2.png") def test_camera_movement(self): ds = self.ds tf = self.setup_transfer_function("camera") cam = ds.camera( self.c, self.L, self.W, self.N, transfer_function=tf, log_fields=[False], north_vector=[0.0, 0.0, 1.0], ) cam.zoom(0.5) for snap in cam.zoomin(2.0, 3): snap for snap in cam.move_to( np.array(self.c) + 0.1, 3, final_width=None, exponential=False ): snap for snap in cam.move_to( np.array(self.c) - 0.1, 3, final_width=2.0 * self.W, exponential=False ): snap for snap in cam.move_to( np.array(self.c), 3, final_width=1.0 * self.W, exponential=True ): snap cam.rotate(np.pi / 10) cam.pitch(np.pi / 10) cam.yaw(np.pi / 10) cam.roll(np.pi / 10) for snap in cam.rotation(np.pi, 3, rot_vector=None): snap for snap in cam.rotation(np.pi, 3, rot_vector=np.random.random(3)): snap cam.snapshot("final.png") assert_fname("final.png") def test_fisheye(self): ds = self.ds tf = self.setup_transfer_function("camera") cam = FisheyeCamera( ds.domain_center, ds.domain_width[0], 360.0, 256, transfer_function=tf, ds=ds, ) cam.snapshot("fisheye.png") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/tests/test_vr_orientation.py0000644000175100001770000000661114714401662026307 0ustar00runnerdockerimport os import tempfile import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np import pytest from yt.testing import fake_vr_orientation_test_ds from yt.visualization.volume_rendering.api import ( Scene, create_volume_source, off_axis_projection, ) def scene_to_mpl_figure(scene): """helper function to convert a scene image rendering to matplotlib so we can rely on pytest-mpl to compare images """ tmpfd, tmpname = tempfile.mkstemp(suffix=".png") os.close(tmpfd) scene.save(tmpname, sigma_clip=1.0) image = mpl.image.imread(tmpname) os.remove(tmpname) fig, ax = plt.subplots() ax.set(aspect="equal") ax.imshow(image) return fig class TestOrientation: @classmethod def setup_class(cls): cls.ds = fake_vr_orientation_test_ds() cls.scene = Scene() vol = create_volume_source(cls.ds, field=("gas", "density")) cls.scene.add_source(vol) cls._last_lense_type = None @classmethod def set_camera(cls, lens_type): # this method isn't thread-safe # if lens_type == cls._last_lense_type: # return cls._last_lense_type = lens_type cam = cls.scene.add_camera(cls.ds, lens_type=lens_type) cam.resolution = (1000, 1000) cam.position = cls.ds.arr(np.array([-4.0, 0.0, 0.0]), "code_length") cam.switch_orientation( normal_vector=[1.0, 0.0, 0.0], north_vector=[0.0, 0.0, 1.0] ) cam.set_width(cls.ds.domain_width * 2.0) cls.camera = cam @pytest.mark.parametrize("lens_type", ["perspective", "plane-parallel"]) @pytest.mark.mpl_image_compare(remove_text=True) def test_vr_orientation_lense_type(self, lens_type): # note that a previous version of this test proved flaky # and required a much lower precision for plane-parallel # https://github.com/yt-project/yt/issue/3069 # 
https://github.com/yt-project/yt/pull/3068 # https://github.com/yt-project/yt/pull/3294 self.set_camera(lens_type) return scene_to_mpl_figure(self.scene) @pytest.mark.mpl_image_compare(remove_text=True) def test_vr_orientation_yaw(self): self.set_camera("plane-parallel") center = self.ds.arr([0, 0, 0], "code_length") self.camera.yaw(np.pi, rot_center=center) return scene_to_mpl_figure(self.scene) @pytest.mark.mpl_image_compare(remove_text=True) def test_vr_orientation_pitch(self): self.set_camera("plane-parallel") center = self.ds.arr([0, 0, 0], "code_length") self.camera.pitch(np.pi, rot_center=center) return scene_to_mpl_figure(self.scene) @pytest.mark.mpl_image_compare(remove_text=True) def test_vr_orientation_roll(self): self.set_camera("plane-parallel") center = self.ds.arr([0, 0, 0], "code_length") self.camera.roll(np.pi, rot_center=center) return scene_to_mpl_figure(self.scene) @pytest.mark.mpl_image_compare(remove_text=True) def test_vr_orientation_off_axis_projection(self): image = off_axis_projection( self.ds, center=[0.5, 0.5, 0.5], normal_vector=[-0.3, -0.1, 0.8], width=[1.0, 1.0, 1.0], resolution=512, item=("gas", "density"), no_ghost=False, ) fig, ax = plt.subplots() ax.imshow(image) return fig ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/tests/test_zbuff.py0000644000175100001770000000741514714401662024364 0ustar00runnerdockerimport os import shutil import tempfile from unittest import TestCase import numpy as np from numpy.testing import assert_almost_equal from yt.testing import fake_random_ds from yt.visualization.volume_rendering.api import ( OpaqueSource, Scene, ZBuffer, create_volume_source, ) class FakeOpaqueSource(OpaqueSource): # A minimal (mock) concrete implementation of OpaqueSource def render(self, camera, zbuffer=None): pass def _validate(self): pass def setup_module(): """Test specific setup.""" from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True class ZBufferTest(TestCase): # This toggles using a temporary directory. Turn off to examine images. 
use_tmpdir = True def setUp(self): np.random.seed(0) if self.use_tmpdir: self.curdir = os.getcwd() # Perform I/O in safe place instead of yt main dir self.tmpdir = tempfile.mkdtemp() os.chdir(self.tmpdir) else: self.curdir, self.tmpdir = None, None def tearDown(self): if self.use_tmpdir: os.chdir(self.curdir) shutil.rmtree(self.tmpdir) def test_composite_vr(self): ds = fake_random_ds(64) dd = ds.sphere(ds.domain_center, 0.45 * ds.domain_width[0]) ds.field_info[ds.field_list[0]].take_log = False sc = Scene() cam = sc.add_camera(ds) cam.resolution = (512, 512) vr = create_volume_source(dd, field=ds.field_list[0]) vr.transfer_function.clear() vr.transfer_function.grey_opacity = True vr.transfer_function.map_to_colormap(0.0, 1.0, scale=10.0, colormap="Reds") sc.add_source(vr) cam.set_width(1.8 * ds.domain_width) cam.lens.setup_box_properties(cam) # Create Arbitrary Z-buffer empty = cam.lens.new_image(cam) z = np.empty(empty.shape[:2], dtype="float64") # Let's put a blue plane right through the center z[:] = cam.width[2] / 2.0 empty[:, :, 2] = 1.0 # Set blue to 1's empty[:, :, 3] = 1.0 # Set alpha to 1's zbuffer = ZBuffer(empty, z) zsource = FakeOpaqueSource() zsource.set_zbuffer(zbuffer) sc.add_source(zsource) im = sc.render() im.write_png("composite.png") def test_nonrectangular_add(self): rgba1 = np.ones((64, 1, 4)) z1 = np.expand_dims(np.arange(64.0), 1) rgba2 = np.zeros((64, 1, 4)) z2 = np.expand_dims(np.arange(63.0, -1.0, -1.0), 1) exact_rgba = np.concatenate((np.ones(32), np.zeros(32))) exact_rgba = np.expand_dims(exact_rgba, 1) exact_rgba = np.dstack((exact_rgba, exact_rgba, exact_rgba, exact_rgba)) exact_z = np.concatenate((np.arange(32.0), np.arange(31.0, -1.0, -1.0))) exact_z = np.expand_dims(exact_z, 1) buff1 = ZBuffer(rgba1, z1) buff2 = ZBuffer(rgba2, z2) buff = buff1 + buff2 assert_almost_equal(buff.rgba, exact_rgba) assert_almost_equal(buff.z, exact_z) def test_rectangular_add(self): rgba1 = np.ones((8, 8, 4)) z1 = np.arange(64.0) z1 = z1.reshape((8, 8)) buff1 = ZBuffer(rgba1, z1) rgba2 = np.zeros((8, 8, 4)) z2 = np.arange(63.0, -1.0, -1.0) z2 = z2.reshape((8, 8)) buff2 = ZBuffer(rgba2, z2) buff = buff1 + buff2 exact_rgba = np.empty((8, 8, 4), dtype=np.float64) exact_rgba[0:4, 0:8, :] = 1.0 exact_rgba[4:8, 0:8, :] = 0.0 exact_z = np.concatenate((np.arange(32.0), np.arange(31.0, -1.0, -1.0))) exact_z = np.expand_dims(exact_z, 1) exact_z = exact_z.reshape(8, 8) assert_almost_equal(buff.rgba, exact_rgba) assert_almost_equal(buff.z, exact_z) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/transfer_function_helper.py0000644000175100001770000001531114714401662026131 0ustar00runnerdockerfrom io import BytesIO import numpy as np from yt.data_objects.profiles import create_profile from yt.funcs import mylog from yt.visualization.volume_rendering.transfer_functions import ColorTransferFunction class TransferFunctionHelper: r"""A transfer function helper. This attempts to help set up a good transfer function by finding bounds, handling linear/log options, and displaying the transfer function combined with 1D profiles of rendering quantity. Parameters ---------- ds: A Dataset instance A static output that is currently being rendered. This is used to help set up data bounds. 
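
    Examples
    --------
    A minimal sketch of the intended workflow; here ``ds`` stands in for any
    already-loaded dataset, and the field choice is illustrative:

    >>> tfh = TransferFunctionHelper(ds)
    >>> tfh.set_field(("gas", "density"))
    >>> tfh.set_log(True)
    >>> tfh.build_transfer_function()
    >>> tfh.setup_default()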
    """

    profiles = None

    def __init__(self, ds):
        self.ds = ds
        self.field = None
        self.log = False
        self.tf = None
        self.bounds = None
        self.grey_opacity = False
        self.profiles = {}

    def set_bounds(self, bounds=None):
        """
        Set the bounds of the transfer function.

        Parameters
        ----------
        bounds: array-like, length 2, optional
            A length 2 list/array in the form [min, max]. These should be
            the raw values and not the logarithm of the min and max. If
            bounds is None, the bounds of the data are calculated from all
            of the data in the dataset. This can be slow for very large
            datasets.
        """
        if bounds is None:
            bounds = self.ds.all_data().quantities["Extrema"](self.field, non_zero=True)
            bounds = [b.ndarray_view() for b in bounds]
        self.bounds = bounds

        # Do some error checking.
        assert len(self.bounds) == 2
        if self.log:
            assert self.bounds[0] > 0.0
            assert self.bounds[1] > 0.0
        return

    def set_field(self, field):
        """
        Set the field to be rendered.

        Parameters
        ----------
        field: string
            The field to be rendered.
        """
        if field != self.field:
            self.log = self.ds._get_field_info(field).take_log
        self.field = field

    def set_log(self, log):
        """
        Set whether or not the transfer function should be in log or linear
        space.

        Parameters
        ----------
        log: boolean
            Sets whether the transfer function should use log or linear
            space.
        """
        self.log = log

    def build_transfer_function(self):
        """
        Builds the transfer function according to the current state of the
        TransferFunctionHelper.

        Returns
        -------
        A ColorTransferFunction object.
        """
        if self.bounds is None:
            mylog.info(
                "Calculating data bounds. This may take a while. "
                "Set the TransferFunctionHelper.bounds to avoid this."
            )
            self.set_bounds()

        if self.log:
            mi, ma = np.log10(self.bounds[0]), np.log10(self.bounds[1])
        else:
            mi, ma = self.bounds
        self.tf = ColorTransferFunction(
            (mi, ma), grey_opacity=self.grey_opacity, nbins=512
        )
        return self.tf

    def setup_default(self):
        """Set up a default colormap.

        Creates a ColorTransferFunction including 10 Gaussian layers whose
        colors sample the 'nipy_spectral' colormap. Also attempts to scale
        the transfer function to produce a natural contrast ratio.
        """
        self.tf.add_layers(10, colormap="nipy_spectral")
        factor = self.tf.funcs[-1].y.size / self.tf.funcs[-1].y.sum()
        self.tf.funcs[-1].y *= 2 * factor

    def plot(self, fn=None, profile_field=None, profile_weight=None):
        """
        Save the current transfer function to a bitmap, or display it inline.

        Parameters
        ----------
        fn: string, optional
            Filename to save the image to. If None, an image is returned
            for inline display (e.g. in an IPython session).
        profile_field: field name, optional
            If given, a 1D profile of this field is overplotted on the
            transfer function.
        profile_weight: field name, optional
            The weight field used when constructing the overplotted profile.

        Returns
        -------
        If fn is None, an IPython Image object is returned for inline
        display; otherwise nothing is returned.
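
        Examples
        --------
        Illustrative only; assumes ``tfh`` is a TransferFunctionHelper whose
        field and bounds have already been set:

        >>> tfh.plot(fn="transfer_function.png")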
""" from matplotlib.backends.backend_agg import FigureCanvasAgg from matplotlib.figure import Figure if self.tf is None: self.build_transfer_function() self.setup_default() tf = self.tf if self.log: xfunc = np.logspace xmi, xma = np.log10(self.bounds[0]), np.log10(self.bounds[1]) else: xfunc = np.linspace xmi, xma = self.bounds x = xfunc(xmi, xma, tf.nbins) y = tf.funcs[3].y w = np.append(x[1:] - x[:-1], x[-1] - x[-2]) colors = np.array( [tf.funcs[0].y, tf.funcs[1].y, tf.funcs[2].y, np.ones_like(x)] ).T fig = Figure(figsize=[6, 3]) canvas = FigureCanvasAgg(fig) ax = fig.add_axes([0.2, 0.2, 0.75, 0.75]) ax.bar( x, tf.funcs[3].y, w, edgecolor=[0.0, 0.0, 0.0, 0.0], log=self.log, color=colors, bottom=[0], ) if profile_field is not None: try: prof = self.profiles[self.field] except KeyError: self.setup_profile(profile_field, profile_weight) prof = self.profiles[self.field] try: prof[profile_field] except KeyError: prof.add_fields([profile_field]) xplot = prof.x yplot = ( prof[profile_field] * tf.funcs[3].y.max() / prof[profile_field].max() ) ax.plot(xplot, yplot, color="w", linewidth=3) ax.plot(xplot, yplot, color="k") ax.set_xscale({True: "log", False: "linear"}[self.log]) ax.set_xlim(x.min(), x.max()) ax.set_xlabel(self.ds._get_field_info(self.field).get_label()) ax.set_ylabel(r"$\mathrm{alpha}$") ax.set_ylim(y.max() * 1.0e-3, y.max() * 2) if fn is None: from IPython.core.display import Image f = BytesIO() canvas.print_figure(f) f.seek(0) img = f.read() return Image(img) else: fig.savefig(fn) def setup_profile(self, profile_field=None, profile_weight=None): if profile_field is None: profile_field = "cell_volume" prof = create_profile( self.ds.all_data(), self.field, profile_field, n_bins=128, extrema={self.field: self.bounds}, weight_field=profile_weight, logs={self.field: self.log}, ) self.profiles[self.field] = prof return ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/transfer_functions.py0000644000175100001770000011257114714401662024763 0ustar00runnerdockerimport numpy as np from more_itertools import always_iterable from yt.funcs import mylog from yt.utilities.physical_constants import clight, hcgs, kboltz class TransferFunction: r"""A transfer function governs the transmission of emission and absorption through a volume. Transfer functions are defined by boundaries, bins, and the value that governs transmission through that bin. This is scaled between 0 and 1. When integrating through a volume the value through a given cell is defined by the value calculated in the transfer function. Parameters ---------- x_bounds : tuple of floats The min and max for the transfer function. Values below or above these values are discarded. nbins : int How many bins to calculate; in between, linear interpolation is used, so low values are typically fine. Notes ----- Typically, raw transfer functions are not generated unless particular and specific control over the integration is desired. Usually either color transfer functions, where the color values are calculated from color tables, or multivariate transfer functions are used. 
""" def __init__(self, x_bounds, nbins=256): self.pass_through = 0 self.nbins = nbins # Strip units off of x_bounds, if any x_bounds = [np.float64(xb) for xb in x_bounds] self.x_bounds = x_bounds self.x = np.linspace(x_bounds[0], x_bounds[1], nbins, dtype="float64") self.y = np.zeros(nbins, dtype="float64") self.grad_field = -1 self.light_source_v = self.light_source_c = np.zeros(3, "float64") self.features = [] def add_gaussian(self, location, width, height): r"""Add a Gaussian distribution to the transfer function. Typically, when rendering isocontours, a Gaussian distribution is the easiest way to draw out features. The spread provides a softness. The values are calculated as :math:`f(x) = h \exp{-(x-x_0)^2 / w}`. Parameters ---------- location : float The centroid of the Gaussian (:math:`x_0` in the above equation.) width : float The relative width (:math:`w` in the above equation.) height : float The peak height (:math:`h` in the above equation.) Note that while values greater 1.0 will be accepted, the values of the transmission function are clipped at 1.0. Examples -------- >>> tf = TransferFunction((-10.0, -5.0)) >>> tf.add_gaussian(-9.0, 0.01, 1.0) """ vals = height * np.exp(-((self.x - location) ** 2.0) / width) self.y = np.clip(np.maximum(vals, self.y), 0.0, np.inf) self.features.append( ( "gaussian", f"location(x):{location:3.2g}", f"width(x):{width:3.2g}", f"height(y):{height:3.2g}", ) ) def add_line(self, start, stop): r"""Add a line between two points to the transmission function. This will accept a starting point in (x,y) and an ending point in (x,y) and set the values of the transmission function between those x-values to be along the line connecting the y values. Parameters ---------- start : tuple of floats (x0, y0), the starting point. x0 is between the bounds of the transfer function and y0 must be between 0.0 and 1.0. stop : tuple of floats (x1, y1), the ending point. x1 is between the bounds of the transfer function and y1 must be between 0.0 and 1.0. Examples -------- This will set the transfer function to be linear from 0.0 to 1.0, across the bounds of the function. >>> tf = TransferFunction((-10.0, -5.0)) >>> tf.add_line((-10.0, 0.0), (-5.0, 1.0)) """ x0, y0 = start x1, y1 = stop slope = (y1 - y0) / (x1 - x0) # We create a whole new set of values and then backout the ones that do # not satisfy our bounding box arguments vals = slope * (self.x - x0) + y0 vals[~((self.x >= x0) & (self.x <= x1))] = 0.0 self.y = np.clip(np.maximum(vals, self.y), 0.0, np.inf) self.features.append( ( "line", f"start(x,y):({start[0]:3.2g}, {start[1]:3.2g})", f"stop(x,y):({stop[0]:3.2g}, {stop[1]:3.2g})", ) ) def add_step(self, start, stop, value): r"""Adds a step function to the transfer function. This accepts a `start` and a `stop`, and then in between those points the transfer function is set to the maximum of the transfer function and the `value`. Parameters ---------- start : float This is the beginning of the step function; must be within domain of the transfer function. stop : float This is the ending of the step function; must be within domain of the transfer function. value : float The value the transfer function will be set to between `start` and `stop`. Note that the transfer function will *actually* be set to max(y, value) where y is the existing value of the transfer function. Examples -------- Note that in this example, we have added a step function, but the Gaussian that already exists will "win" where it exceeds 0.5. 
>>> tf = TransferFunction((-10.0, -5.0)) >>> tf.add_gaussian(-7.0, 0.01, 1.0) >>> tf.add_step(-8.0, -6.0, 0.5) """ vals = np.zeros(self.x.shape, "float64") vals[(self.x >= start) & (self.x <= stop)] = value self.y = np.clip(np.maximum(vals, self.y), 0.0, np.inf) self.features.append( ( "step", f"start(x):{start:3.2g}", f"stop(x):{stop:3.2g}", f"value(y):{value:3.2g}", ) ) def add_filtered_planck(self, wavelength, trans): from yt._maintenance.numpy2_compat import trapezoid vals = np.zeros(self.x.shape, "float64") nu = clight / (wavelength * 1e-8) nu = nu[::-1] for i, logT in enumerate(self.x): T = 10**logT # Black body at this nu, T Bnu = ((2.0 * hcgs * nu**3) / clight**2.0) / ( np.exp(hcgs * nu / (kboltz * T)) - 1.0 ) # transmission f = Bnu * trans[::-1] # integrate transmission over nu vals[i] = trapezoid(f, nu) # normalize by total transmission over filter self.y = vals / trans.sum() # self.y = np.clip(np.maximum(vals, self.y), 0.0, 1.0) def plot(self, filename): r"""Save an image file of the transfer function. This function loads up matplotlib, plots the transfer function and saves. Parameters ---------- filename : string The file to save out the plot as. Examples -------- >>> tf = TransferFunction((-10.0, -5.0)) >>> tf.add_gaussian(-9.0, 0.01, 1.0) >>> tf.plot("sample.png") """ import matplotlib import matplotlib.pyplot as plt matplotlib.use("Agg") plt.clf() plt.plot(self.x, self.y, "xk-") plt.xlim(*self.x_bounds) plt.ylim(0.0, 1.0) plt.savefig(filename) def show(self): r"""Display an image of the transfer function This function loads up matplotlib and displays the current transfer function. Parameters ---------- Examples -------- >>> tf = TransferFunction((-10.0, -5.0)) >>> tf.add_gaussian(-9.0, 0.01, 1.0) >>> tf.show() """ import matplotlib.pyplot as plt plt.clf() plt.plot(self.x, self.y, "xk-") plt.xlim(*self.x_bounds) plt.ylim(0.0, 1.0) plt.draw() def clear(self): self.y[:] = 0.0 self.features = [] def __repr__(self): disp = ( ": " f"x_bounds:({self.x_bounds[0]:3.2g}, {self.x_bounds[1]:3.2g}) " f"nbins:{self.nbins:3.2g} features:{self.features}" ) return disp class MultiVariateTransferFunction: r"""This object constructs a set of field tables that allow for multiple field variables to control the integration through a volume. The integration through a volume typically only utilizes a single field variable (for instance, Density) to set up and control the values returned at the end of the integration. For things like isocontours, this is fine. However, more complicated schema are possible by using this object. For instance, density-weighted emission that produces colors based on the temperature of the fluid. Parameters ---------- grey_opacity : bool Should opacity be calculated on a channel-by-channel basis, or overall? Useful for opaque renderings. Default: False """ def __init__(self, grey_opacity=False): self.n_field_tables = 0 self.tables = [] # Tables are interpolation tables self.field_ids = [0] * 6 # This correlates fields with tables self.weight_field_ids = [-1] * 6 # This correlates self.field_table_ids = [0] * 6 self.weight_table_ids = [-1] * 6 self.grad_field = -1 self.light_source_v = self.light_source_c = np.zeros(3, "float64") self.grey_opacity = grey_opacity def add_field_table(self, table, field_id, weight_field_id=-1, weight_table_id=-1): r"""This accepts a table describing integration. A "field table" is a tabulated set of values that govern the integration through a given field. 
These are defined not only by the transmission coefficient, interpolated from the table itself, but the `field_id` that describes which of several fields the integration coefficient is to be calculated from. Parameters ---------- table : `TransferFunction` The integration table to be added to the set of tables used during the integration. field_id : int Each volume has an associated set of fields. This identifies which of those fields will be used to calculate the integration coefficient from this table. weight_field_id : int, optional If specified, the value of the field this identifies will be multiplied against the integration coefficient. weight_table_id : int, optional If specified, the value from the *table* this identifies will be multiplied against the integration coefficient. Notes ----- This can be rather complicated. It's recommended that if you are interested in manipulating this in detail that you examine the source code, specifically the function FIT_get_value in yt/_amr_utils/VolumeIntegrator.pyx. Examples -------- This example shows how to link a new transfer function against field 0. Note that this by itself does not link a *channel* for integration against a field. This is because the weighting system does not mandate that all tables contribute to a channel, only that they contribute a value which may be used by other field tables. >>> mv = MultiVariateTransferFunction() >>> tf = TransferFunction((-10.0, -5.0)) >>> tf.add_gaussian(-7.0, 0.01, 1.0) >>> mv.add_field_table(tf, 0) """ self.tables.append(table) self.field_ids[self.n_field_tables] = field_id self.weight_field_ids[self.n_field_tables] = weight_field_id self.weight_table_ids[self.n_field_tables] = weight_table_id self.n_field_tables += 1 def link_channels(self, table_id, channels=0): r"""Link an image channel to a field table. Once a field table has been added, it can be linked against a channel (any one of the six -- red, green, blue, red absorption, green absorption, blue absorption) and then the value calculated for that field table will be added to the integration for that channel. Not all tables must be linked against channels. Parameters ---------- table_id : int The 0-indexed table to link. channels : int or list of ints The channel or channels to link with this table's calculated value. Examples -------- This example shows how to link a new transfer function against field 0, and then link that table against all three RGB channels. Typically an absorption (or 'alpha') channel is also linked. >>> mv = MultiVariateTransferFunction() >>> tf = TransferFunction((-10.0, -5.0)) >>> tf.add_gaussian(-7.0, 0.01, 1.0) >>> mv.add_field_table(tf, 0) >>> mv.link_channels(0, [0, 1, 2]) """ for c in always_iterable(channels): self.field_table_ids[c] = table_id class ColorTransferFunction(MultiVariateTransferFunction): r"""A complete set of transfer functions for standard color-mapping. This is the best and easiest way to set up volume rendering. It creates field tables for all three colors, their alphas, and has support for sampling color maps and adding independent color values at all locations. It will correctly set up the `MultiVariateTransferFunction`. Parameters ---------- x_bounds : tuple of floats The min and max for the transfer function. Values below or above these values are discarded. nbins : int How many bins to calculate; in between, linear interpolation is used, so low values are typically fine. grey_opacity : bool Should opacity be calculated on a channel-by-channel basis, or overall? 
Useful for opaque renderings. """ def __init__(self, x_bounds, nbins=256, grey_opacity=False): MultiVariateTransferFunction.__init__(self) # Strip units off of x_bounds, if any x_bounds = [np.float64(xb) for xb in x_bounds] self.x_bounds = x_bounds self.nbins = nbins # This is all compatibility and convenience. self.red = TransferFunction(x_bounds, nbins) self.green = TransferFunction(x_bounds, nbins) self.blue = TransferFunction(x_bounds, nbins) self.alpha = TransferFunction(x_bounds, nbins) self.funcs = (self.red, self.green, self.blue, self.alpha) self.grey_opacity = grey_opacity self.features = [] # Now we do the multivariate stuff # We assign to Density, but do not weight for i, tf in enumerate(self.funcs[:3]): self.add_field_table(tf, 0, weight_table_id=3) self.link_channels(i, i) self.add_field_table(self.funcs[3], 0) self.link_channels(3, 3) # We don't have a fifth table, so the value will *always* be zero. # self.link_channels(4, [3,4,5]) def add_gaussian(self, location, width, height): r"""Add a Gaussian distribution to the transfer function. Typically, when rendering isocontours, a Gaussian distribution is the easiest way to draw out features. The spread provides a softness. The values are calculated as :math:`f(x) = h \exp{-(x-x_0)^2 / w}`. Parameters ---------- location : float The centroid of the Gaussian (:math:`x_0` in the above equation.) width : float The relative width (:math:`w` in the above equation.) height : list of 4 float The peak height (:math:`h` in the above equation.) Note that while values greater 1.0 will be accepted, the values of the transmission function are clipped at 1.0. This must be a list, and it is in the order of (red, green, blue, alpha). Examples -------- This adds a red spike. >>> tf = ColorTransferFunction((-10.0, -5.0)) >>> tf.add_gaussian(-9.0, 0.01, [1.0, 0.0, 0.0, 1.0]) """ for tf, v in zip(self.funcs, height, strict=True): tf.add_gaussian(location, width, v) self.features.append( ( "gaussian", f"location(x):{location:3.2g}", f"width(x):{width:3.2g}", f"height(y):({height[0]:3.2g}, {height[1]:3.2g}, {height[2]:3.2g}, {height[3]:3.2g})", ) ) def add_step(self, start, stop, value): r"""Adds a step function to the transfer function. This accepts a `start` and a `stop`, and then in between those points the transfer function is set to the maximum of the transfer function and the `value`. Parameters ---------- start : float This is the beginning of the step function; must be within domain of the transfer function. stop : float This is the ending of the step function; must be within domain of the transfer function. value : list of 4 floats The value the transfer function will be set to between `start` and `stop`. Note that the transfer function will *actually* be set to max(y, value) where y is the existing value of the transfer function. This must be a list, and it is in the order of (red, green, blue, alpha). Examples -------- This adds a step function that will produce a white value at > -6.0. >>> tf = ColorTransferFunction((-10.0, -5.0)) >>> tf.add_step(-6.0, -5.0, [1.0, 1.0, 1.0, 1.0]) """ for tf, v in zip(self.funcs, value, strict=True): tf.add_step(start, stop, v) self.features.append( ( "step", f"start(x):{start:3.2g}", f"stop(x):{stop:3.2g}", f"value(y):({value[0]:3.2g}, {value[1]:3.2g}, {value[2]:3.2g}, {value[3]:3.2g})", ) ) def plot(self, filename): r"""Save an image file of the transfer function. This function loads up matplotlib, plots all of the constituent transfer functions and saves. 
Parameters ---------- filename : string The file to save out the plot as. Examples -------- >>> tf = ColorTransferFunction((-10.0, -5.0)) >>> tf.add_layers(8) >>> tf.plot("sample.png") """ from matplotlib import pyplot from matplotlib.ticker import FuncFormatter pyplot.clf() ax = pyplot.axes() i_data = np.zeros((self.alpha.x.size, self.funcs[0].y.size, 3)) i_data[:, :, 0] = np.outer(np.ones(self.alpha.x.size), self.funcs[0].y) i_data[:, :, 1] = np.outer(np.ones(self.alpha.x.size), self.funcs[1].y) i_data[:, :, 2] = np.outer(np.ones(self.alpha.x.size), self.funcs[2].y) ax.imshow(i_data, origin="lower") ax.fill_between( np.arange(self.alpha.y.size), self.alpha.x.size * self.alpha.y, y2=self.alpha.x.size, color="white", ) ax.set_xlim(0, self.alpha.x.size) xticks = ( np.arange(np.ceil(self.alpha.x[0]), np.floor(self.alpha.x[-1]) + 1, 1) - self.alpha.x[0] ) xticks *= (self.alpha.x.size - 1) / (self.alpha.x[-1] - self.alpha.x[0]) ax.xaxis.set_ticks(xticks) def x_format(x, pos): return "%.1f" % ( x * (self.alpha.x[-1] - self.alpha.x[0]) / (self.alpha.x.size - 1) + self.alpha.x[0] ) ax.xaxis.set_major_formatter(FuncFormatter(x_format)) yticks = np.linspace(0, 1, 5) * self.alpha.y.size ax.yaxis.set_ticks(yticks) def y_format(y, pos): return y / self.alpha.y.size ax.yaxis.set_major_formatter(FuncFormatter(y_format)) ax.set_ylabel("Transmission") ax.set_xlabel("Value") pyplot.savefig(filename) def show(self, ax=None): r"""Display an image of the transfer function This function loads up matplotlib and displays the current transfer function. Parameters ---------- Examples -------- >>> tf = TransferFunction((-10.0, -5.0)) >>> tf.add_gaussian(-9.0, 0.01, 1.0) >>> tf.show() """ from matplotlib import pyplot from matplotlib.ticker import FuncFormatter pyplot.clf() ax = pyplot.axes() i_data = np.zeros((self.alpha.x.size, self.funcs[0].y.size, 3)) i_data[:, :, 0] = np.outer(np.ones(self.alpha.x.size), self.funcs[0].y) i_data[:, :, 1] = np.outer(np.ones(self.alpha.x.size), self.funcs[1].y) i_data[:, :, 2] = np.outer(np.ones(self.alpha.x.size), self.funcs[2].y) ax.imshow(i_data, origin="lower") ax.fill_between( np.arange(self.alpha.y.size), self.alpha.x.size * self.alpha.y, y2=self.alpha.x.size, color="white", ) ax.set_xlim(0, self.alpha.x.size) xticks = ( np.arange(np.ceil(self.alpha.x[0]), np.floor(self.alpha.x[-1]) + 1, 1) - self.alpha.x[0] ) xticks *= (self.alpha.x.size - 1) / (self.alpha.x[-1] - self.alpha.x[0]) if len(xticks) > 5: xticks = xticks[:: len(xticks) // 5] ax.xaxis.set_ticks(xticks) def x_format(x, pos): return "%.1f" % ( x * (self.alpha.x[-1] - self.alpha.x[0]) / (self.alpha.x.size - 1) + self.alpha.x[0] ) ax.xaxis.set_major_formatter(FuncFormatter(x_format)) yticks = np.linspace(0, 1, 5) * self.alpha.y.size ax.yaxis.set_ticks(yticks) def y_format(y, pos): s = f"{y:0.2f}" return s ax.yaxis.set_major_formatter(FuncFormatter(y_format)) ax.set_ylabel("Opacity") ax.set_xlabel("Value") def vert_cbar( self, resolution, log_scale, ax, label=None, label_fmt=None, *, label_fontsize=10, size=10, ): r"""Display an image of the transfer function This function loads up matplotlib and displays the current transfer function. 
        Parameters
        ----------
        resolution : int
            The resolution of the rendered image; together with `size`, this
            sets the font size of the colorbar label.
        log_scale : bool
            If True, tick values are treated as log10 of the data and are
            displayed exponentiated (as 10**value).
        ax : matplotlib.axes.Axes
            The axes onto which the colorbar is drawn.
        label : string, optional
            The label to draw alongside the colorbar.
        label_fmt : string, optional
            An old-style ("%"-based) format string used to render the tick
            values.  If not supplied, a general-purpose format is used.
        label_fontsize : int, optional
            The font size of the tick labels.
        size : int, optional
            The base size of the colorbar label.

        Examples
        --------

        >>> import matplotlib.pyplot as plt
        >>> tf = ColorTransferFunction((-10.0, -5.0))
        >>> tf.add_layers(8)
        >>> fig, ax = plt.subplots()
        >>> tf.vert_cbar(512, False, ax, label="value")
        """
        from matplotlib.ticker import FuncFormatter

        if label is None:
            label = ""
        alpha = self.alpha.y
        max_alpha = alpha.max()
        i_data = np.zeros((self.alpha.x.size, self.funcs[0].y.size, 3))
        i_data[:, :, 0] = np.outer(self.funcs[0].y, np.ones(self.alpha.x.size))
        i_data[:, :, 1] = np.outer(self.funcs[1].y, np.ones(self.alpha.x.size))
        i_data[:, :, 2] = np.outer(self.funcs[2].y, np.ones(self.alpha.x.size))

        ax.imshow(i_data, origin="lower", aspect="auto")
        ax.plot(alpha, np.arange(self.alpha.y.size), "w")

        # Set TF limits based on what is visible
        visible = np.argwhere(self.alpha.y > 1.0e-3 * self.alpha.y.max())

        # Display colorbar values
        xticks = (
            np.arange(np.ceil(self.alpha.x[0]), np.floor(self.alpha.x[-1]) + 1, 1)
            - self.alpha.x[0]
        )
        xticks *= (self.alpha.x.size - 1) / (self.alpha.x[-1] - self.alpha.x[0])
        if len(xticks) > 5:
            xticks = xticks[:: len(xticks) // 5]

        # Add colorbar limits to the ticks (may not give ideal results)
        xticks = np.append(visible[0], xticks)
        xticks = np.append(visible[-1], xticks)
        # remove dupes
        xticks = list(set(xticks))
        ax.yaxis.set_ticks(xticks)

        def x_format(x, pos):
            val = (
                x * (self.alpha.x[-1] - self.alpha.x[0]) / (self.alpha.x.size - 1)
                + self.alpha.x[0]
            )
            if log_scale:
                val = 10**val
            if label_fmt is None:
                if abs(val) < 1.0e-3 or abs(val) > 1.0e4:
                    if not val == 0.0:
                        e = np.floor(np.log10(abs(val)))
                        return rf"${val / 10.0**e:.2f}\times 10^{{ {int(e):d} }}$"
                    else:
                        return r"$0$"
                else:
                    return f"{val:.1g}"
            else:
                return label_fmt % (val)

        ax.yaxis.set_major_formatter(FuncFormatter(x_format))

        yticks = np.linspace(0, 1, 2, endpoint=True) * max_alpha
        ax.xaxis.set_ticks(yticks)

        def y_format(y, pos):
            s = f"{y:0.2f}"
            return s

        ax.xaxis.set_major_formatter(FuncFormatter(y_format))
        ax.set_xlim(0.0, max_alpha)
        ax.get_xaxis().set_ticks([])
        ax.set_ylim(visible[0].item(), visible[-1].item())
        ax.tick_params(axis="y", colors="white", labelsize=label_fontsize)
        ax.set_ylabel(label, color="white", size=size * resolution / 512.0)

    def sample_colormap(self, v, w, alpha=None, colormap="gist_stern", col_bounds=None):
        r"""Add a Gaussian based on an existing colormap.

        Constructing pleasing Gaussians in a transfer function can pose some
        challenges, so this function will add a single Gaussian whose colors
        are taken from a colormap scaled between the bounds of the transfer
        function.  As with `TransferFunction.add_gaussian`, the value is
        calculated as :math:`f(x) = h \exp{-(x-x_0)^2 / w}`, but with the
        height for each color calculated from the colormap.

        Parameters
        ----------
        v : float
            The value at which the Gaussian is to be added.
        w : float
            The relative width (:math:`w` in the above equation.)
        alpha : float, optional
            The alpha value height for the Gaussian.
        colormap : string, optional
            An acceptable colormap.  See either yt.visualization.color_maps or
            https://scipy-cookbook.readthedocs.io/items/Matplotlib_Show_colormaps.html .
        col_bounds : array_like, optional
            If provided, limits the range of values over which the colormap
            spans to [min, max].  Useful for sampling an entire colormap over
            a range smaller than the transfer function bounds.
See Also -------- ColorTransferFunction.add_layers : Many-at-a-time adder Examples -------- >>> tf = ColorTransferFunction((-10.0, -5.0)) >>> tf.sample_colormap(-7.0, 0.01, colormap="cmyt.arbre") """ import matplotlib as mpl v = np.float64(v) if col_bounds is None: rel = (v - self.x_bounds[0]) / (self.x_bounds[1] - self.x_bounds[0]) else: rel = (v - col_bounds[0]) / (col_bounds[1] - col_bounds[0]) cmap = mpl.colormaps[colormap] r, g, b, a = cmap(rel) if alpha is None: alpha = a self.add_gaussian(v, w, [r, g, b, alpha]) mylog.debug( "Adding gaussian at %s with width %s and colors %s", v, w, (r, g, b, alpha) ) def map_to_colormap( self, mi, ma, scale=1.0, colormap="gist_stern", scale_func=None ): r"""Map a range of values to a full colormap. Given a minimum and maximum value in the TransferFunction, map a full colormap over that range at an alpha level of `scale`. Optionally specify a scale_func function that modifies the alpha as a function of the transfer function value. Parameters ---------- mi : float The start of the TransferFunction to map the colormap ma : float The end of the TransferFunction to map the colormap scale: float, optional The alpha value to be used for the height of the transfer function. Larger values will be more opaque. colormap : string, optional An acceptable colormap. See either yt.visualization.color_maps or https://scipy-cookbook.readthedocs.io/items/Matplotlib_Show_colormaps.html . scale_func: function(:obj:`!value`, :obj:`!minval`, :obj:`!maxval`), optional A user-defined function that can be used to scale the alpha channel as a function of the TransferFunction field values. Function maps value to somewhere between minval and maxval. Examples -------- >>> def linramp(vals, minval, maxval): ... return (vals - vals.min()) / (vals.max() - vals.min()) >>> tf = ColorTransferFunction((-10.0, -5.0)) >>> tf.map_to_colormap(-8.0, -6.0, scale=10.0, colormap="cmyt.arbre") >>> tf.map_to_colormap( ... -6.0, -5.0, scale=10.0, colormap="cmyt.arbre", scale_func=linramp ... ) """ import matplotlib as mpl mi = np.float64(mi) ma = np.float64(ma) rel0 = int( self.nbins * (mi - self.x_bounds[0]) / (self.x_bounds[1] - self.x_bounds[0]) ) rel1 = int( self.nbins * (ma - self.x_bounds[0]) / (self.x_bounds[1] - self.x_bounds[0]) ) rel0 = max(rel0, 0) rel1 = min(rel1, self.nbins - 1) + 1 tomap = np.linspace(0.0, 1.0, num=rel1 - rel0) cmap = mpl.colormaps[colormap] cc = cmap(tomap) if scale_func is None: scale_mult = 1.0 else: scale_mult = scale_func(tomap, 0.0, 1.0) self.red.y[rel0:rel1] = cc[:, 0] * scale_mult self.green.y[rel0:rel1] = cc[:, 1] * scale_mult self.blue.y[rel0:rel1] = cc[:, 2] * scale_mult self.alpha.y[rel0:rel1] = scale * cc[:, 3] * scale_mult self.features.append( ( "map_to_colormap", f"start(x):{mi:3.2g}", f"stop(x):{ma:3.2g}", f"value(y):{scale:3.2g}", ) ) def add_layers( self, N, w=None, mi=None, ma=None, alpha=None, colormap="gist_stern", col_bounds=None, ): r"""Add a set of Gaussians based on an existing colormap. Constructing pleasing Gaussians in a transfer function can pose some challenges, so this function will add several evenly-spaced Gaussians whose colors are taken from a colormap scaled between the bounds of the transfer function. For each Gaussian to be added, `ColorTransferFunction.sample_colormap` is called. Parameters ---------- N : int How many Gaussians to add w : float The relative width of each Gaussian. 
            If not supplied, it is calculated as 0.001 * (max_val - min_val) / N.
        mi : float, optional
            If only a subset of the data range is to have the Gaussians added,
            this is the minimum for that subset.
        ma : float, optional
            If only a subset of the data range is to have the Gaussians added,
            this is the maximum for that subset.
        alpha : list of floats, optional
            The alpha value height for each Gaussian.  If not supplied, it is
            set to 1.0 everywhere.
        colormap : string, optional
            An acceptable colormap.  See either yt.visualization.color_maps or
            https://scipy-cookbook.readthedocs.io/items/Matplotlib_Show_colormaps.html .
        col_bounds : array_like, optional
            If provided, limits the range of values over which the colormap
            spans to [min, max].  Useful for sampling an entire colormap over
            a range smaller than the transfer function bounds.

        See Also
        --------
        ColorTransferFunction.sample_colormap : Single Gaussian adder

        Examples
        --------

        >>> tf = ColorTransferFunction((-10.0, -5.0))
        >>> tf.add_layers(8)
        """
        if col_bounds is None:
            dist = self.x_bounds[1] - self.x_bounds[0]
            if mi is None:
                mi = self.x_bounds[0] + dist / (10.0 * N)
            if ma is None:
                ma = self.x_bounds[1] - dist / (10.0 * N)
        else:
            dist = col_bounds[1] - col_bounds[0]
            if mi is None:
                mi = col_bounds[0] + dist / (10.0 * N)
            if ma is None:
                ma = col_bounds[1] - dist / (10.0 * N)
        if w is None:
            w = 0.001 * (ma - mi) / N
        w = max(w, 1.0 / self.nbins)
        if alpha is None and self.grey_opacity:
            alpha = np.ones(N, dtype="float64")
        elif alpha is None and not self.grey_opacity:
            alpha = np.logspace(-3, 0, N)
        for v, a in zip(np.mgrid[mi : ma : N * 1j], alpha, strict=True):
            self.sample_colormap(v, w, a, colormap=colormap, col_bounds=col_bounds)

    def get_colormap_image(self, height, width):
        image = np.zeros((height, width, 3), dtype="uint8")
        hvals = np.mgrid[self.x_bounds[0] : self.x_bounds[1] : height * 1j]
        for i, f in enumerate(self.funcs[:3]):
            vals = np.interp(hvals, f.x, f.y)
            image[:, :, i] = (vals[:, None] * 255).astype("uint8")
        image = image[::-1, :, :]
        return image

    def clear(self):
        for f in self.funcs:
            f.clear()
        self.features = []

    def __repr__(self):
        disp = (
            ":\n"
            + "x_bounds:[%3.2g, %3.2g] nbins:%i features:\n"
            % (self.x_bounds[0], self.x_bounds[1], self.nbins)
        )
        for f in self.features:
            disp += f"\t{str(f)}\n"
        return disp


class ProjectionTransferFunction(MultiVariateTransferFunction):
    r"""A transfer function that defines a simple projection.

    To generate an interpolated, off-axis projection through a dataset, this
    transfer function should be used.  It will create a very simple table that
    merely sums along each ray.  Note that the end product will need to be
    scaled by the total width through which the rays were cast, a piece of
    information inaccessible to the transfer function.

    Parameters
    ----------
    x_bounds : tuple of floats, optional
        If any of your values lie outside this range, they will be truncated.
    n_fields : int, optional
        The number of fields to project and pass through.

    Notes
    -----
    When you use this transfer function, you may need to explicitly disable
    logging of fields.

    """

    def __init__(self, x_bounds=(-1e60, 1e60), n_fields=1):
        if n_fields > 3:
            raise NotImplementedError(
                f"supplied {n_fields}, but n_fields > 3 is not implemented."
) MultiVariateTransferFunction.__init__(self) # Strip units off of x_bounds, if any x_bounds = [np.float64(xb) for xb in x_bounds] self.x_bounds = x_bounds self.nbins = 2 self.linear_mapping = TransferFunction(x_bounds, 2) self.linear_mapping.pass_through = 1 self.link_channels(0, [0, 1, 2]) # same emission for all rgb, default for i in range(n_fields): self.add_field_table(self.linear_mapping, i) self.link_channels(i, i) self.link_channels(n_fields, [3, 4, 5]) # this will remove absorption class PlanckTransferFunction(MultiVariateTransferFunction): """ This sets up a planck function for multivariate emission and absorption. We assume that the emission is black body, which is then convolved with appropriate Johnson filters for *red*, *green* and *blue*. *T_bounds* and *rho_bounds* define the limits of tabulated emission and absorption functions. *anorm* is a "fudge factor" that defines the somewhat arbitrary normalization to the scattering approximation: because everything is done largely unit-free, and is really not terribly accurate anyway, feel free to adjust this to change the relative amount of reddening. Maybe in some future version this will be unitful. """ def __init__( self, T_bounds, rho_bounds, nbins=256, red="R", green="V", blue="B", anorm=1e6 ): MultiVariateTransferFunction.__init__(self) mscat = -1 from .UBVRI import johnson_filters for i, f in enumerate([red, green, blue]): jf = johnson_filters[f] tf = TransferFunction(T_bounds) tf.add_filtered_planck(jf["wavelen"], jf["trans"]) self.add_field_table(tf, 0, 1) self.link_channels(i, i) # 0 => 0, 1 => 1, 2 => 2 mscat = max(mscat, jf["Lchar"] ** -4) for i, f in enumerate([red, green, blue]): # Now we set up the scattering scat = (johnson_filters[f]["Lchar"] ** -4 / mscat) * anorm tf = TransferFunction(rho_bounds) mylog.debug("Adding: %s with relative scattering %s", f, scat) tf.y *= 0.0 tf.y += scat self.add_field_table(tf, 1, weight_field_id=1) self.link_channels(i + 3, i + 3) self._normalize() self.grey_opacity = False def _normalize(self): fmax = np.array([f.y for f in self.tables[:3]]) normal = fmax.max(axis=0) for f in self.tables[:3]: f.y = f.y / normal if __name__ == "__main__": tf = ColorTransferFunction((-20, -5)) tf.add_gaussian(-16.0, 0.4, [0.2, 0.3, 0.1]) tf.add_gaussian(-14.0, 0.8, [0.4, 0.1, 0.2]) tf.add_gaussian(-10.0, 1.0, [0.0, 0.0, 1.0]) tf.plot("tf.png") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/utils.py0000644000175100001770000001275414714401662022211 0ustar00runnerdockerimport numpy as np from yt.data_objects.selection_objects.data_selection_objects import ( YTSelectionContainer3D, ) from yt.data_objects.static_output import Dataset from yt.utilities.lib import bounding_volume_hierarchy from yt.utilities.lib.image_samplers import ( InterpolatedProjectionSampler, ProjectionSampler, VolumeRenderSampler, ) from yt.utilities.on_demand_imports import NotAModule try: from yt.utilities.lib.embree_mesh import mesh_traversal # type: ignore # Catch ValueError in case size of objects in Cython change except (ImportError, ValueError): mesh_traversal = NotAModule("pyembree") def data_source_or_all(data_source): if isinstance(data_source, Dataset): data_source = data_source.all_data() if not isinstance(data_source, (YTSelectionContainer3D, type(None))): raise RuntimeError( "The data_source is not a valid 3D data container.\n" "Expected an object of type YTSelectionContainer3D but received " f"an object of type 
{type(data_source)}." ) return data_source def new_mesh_sampler(camera, render_source, engine): params = ensure_code_unit_params(camera._get_sampler_params(render_source)) args = ( np.atleast_3d(params["vp_pos"]), np.atleast_3d(params["vp_dir"]), params["center"], params["bounds"], np.atleast_3d(params["image"]).astype("float64"), params["x_vec"], params["y_vec"], params["width"], render_source.volume_method, ) kwargs = {"lens_type": params["lens_type"]} if engine == "embree": sampler = mesh_traversal.EmbreeMeshSampler(*args, **kwargs) elif engine == "yt": sampler = bounding_volume_hierarchy.BVHMeshSampler(*args, **kwargs) return sampler def new_volume_render_sampler(camera, render_source): params = ensure_code_unit_params(camera._get_sampler_params(render_source)) params.update(transfer_function=render_source.transfer_function) params.update(transfer_function=render_source.transfer_function) params.update(num_samples=render_source.num_samples) args = ( np.atleast_3d(params["vp_pos"]), np.atleast_3d(params["vp_dir"]), params["center"], params["bounds"], params["image"], params["x_vec"], params["y_vec"], params["width"], render_source.volume_method, params["transfer_function"], params["num_samples"], ) kwargs = { "lens_type": params["lens_type"], } if "camera_data" in params: kwargs["camera_data"] = params["camera_data"] if render_source.zbuffer is not None: kwargs["zbuffer"] = render_source.zbuffer.z args[4][:] = np.reshape( render_source.zbuffer.rgba[:], (camera.resolution[0], camera.resolution[1], 4), ) else: kwargs["zbuffer"] = np.ones(params["image"].shape[:2], "float64") sampler = VolumeRenderSampler(*args, **kwargs) return sampler def new_interpolated_projection_sampler(camera, render_source): params = ensure_code_unit_params(camera._get_sampler_params(render_source)) params.update(transfer_function=render_source.transfer_function) params.update(num_samples=render_source.num_samples) args = ( np.atleast_3d(params["vp_pos"]), np.atleast_3d(params["vp_dir"]), params["center"], params["bounds"], params["image"], params["x_vec"], params["y_vec"], params["width"], render_source.volume_method, params["num_samples"], ) kwargs = {"lens_type": params["lens_type"]} if render_source.zbuffer is not None: kwargs["zbuffer"] = render_source.zbuffer.z else: kwargs["zbuffer"] = np.ones(params["image"].shape[:2], "float64") sampler = InterpolatedProjectionSampler(*args, **kwargs) return sampler def new_projection_sampler(camera, render_source): params = ensure_code_unit_params(camera._get_sampler_params(render_source)) params.update(transfer_function=render_source.transfer_function) params.update(num_samples=render_source.num_samples) args = ( np.atleast_3d(params["vp_pos"]), np.atleast_3d(params["vp_dir"]), params["center"], params["bounds"], params["image"], params["x_vec"], params["y_vec"], params["width"], render_source.volume_method, params["num_samples"], ) kwargs = { "lens_type": params["lens_type"], } if render_source.zbuffer is not None: kwargs["zbuffer"] = render_source.zbuffer.z else: kwargs["zbuffer"] = np.ones(params["image"].shape[:2], "float64") sampler = ProjectionSampler(*args, **kwargs) return sampler def get_corners(le, re): return np.array( [ [le[0], le[1], le[2]], [re[0], le[1], le[2]], [re[0], re[1], le[2]], [le[0], re[1], le[2]], [le[0], le[1], re[2]], [re[0], le[1], re[2]], [re[0], re[1], re[2]], [le[0], re[1], re[2]], ], dtype="float64", ) def ensure_code_unit_params(params): for param_name in ["center", "vp_pos", "vp_dir", "width"]: param = params[param_name] if 
hasattr(param, "in_units"): params[param_name] = param.in_units("code_length") bounds = params["bounds"] if hasattr(bounds[0], "units"): params["bounds"] = tuple(b.in_units("code_length").d for b in bounds) return params ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/volume_rendering.py0000644000175100001770000001140114714401662024401 0ustar00runnerdockerfrom yt.funcs import mylog from yt.utilities.exceptions import YTSceneFieldNotFound from .api import MeshSource, Scene, create_volume_source from .utils import data_source_or_all def create_scene(data_source, field=None, lens_type="plane-parallel"): r"""Set up a scene object with sensible defaults for use in volume rendering. A helper function that creates a default camera view, transfer function, and image size. Using these, it returns an instance of the Scene class, allowing one to further modify their rendering. This function is the same as volume_render() except it doesn't render the image. Parameters ---------- data_source : :class:`yt.data_objects.data_containers.AMR3DData` This is the source to be rendered, which can be any arbitrary yt 3D object field: string, tuple, optional The field to be rendered. If unspecified, this will use the default_field for your dataset's frontend--usually ('gas', 'density'). A default transfer function will be built that spans the range of values for that given field, and the field will be logarithmically scaled if the field_info object specifies as such. lens_type: string, optional This specifies the type of lens to use for rendering. Current options are 'plane-parallel', 'perspective', and 'fisheye'. See :class:`yt.visualization.volume_rendering.lens.Lens` for details. Default: 'plane-parallel' Returns ------- sc: Scene A :class:`yt.visualization.volume_rendering.scene.Scene` object that was constructed during the rendering. Useful for further modifications, rotations, etc. Examples -------- >>> import yt >>> ds = yt.load("Enzo_64/DD0046/DD0046") >>> sc = yt.create_scene(ds) """ data_source = data_source_or_all(data_source) sc = Scene() if field is None: field = data_source.ds.default_field if field not in data_source.ds.derived_field_list: raise YTSceneFieldNotFound( f"""Could not find field '{field}' in {data_source.ds}. Please specify a field in create_scene()""" ) mylog.info("Setting default field to %s", field.__repr__()) if hasattr(data_source.ds.index, "meshes"): source = MeshSource(data_source, field=field) else: source = create_volume_source(data_source, field=field) sc.add_source(source) sc.add_camera(data_source=data_source, lens_type=lens_type) return sc def volume_render( data_source, field=None, fname=None, sigma_clip=None, lens_type="plane-parallel" ): r"""Create a simple volume rendering of a data source. A helper function that creates a default camera view, transfer function, and image size. Using these, it returns an image and an instance of the Scene class, allowing one to further modify their rendering. Parameters ---------- data_source : :class:`yt.data_objects.data_containers.AMR3DData` This is the source to be rendered, which can be any arbitrary yt 3D object field: string, tuple, optional The field to be rendered. If unspecified, this will use the default_field for your dataset's frontend--usually ('gas', 'density'). 
        A default transfer function will be built that spans the range of
        values for that given field, and the field will be logarithmically
        scaled if the field_info object specifies as such.
    fname: string, optional
        If specified, the resulting rendering will be saved to this filename
        in png format.
    sigma_clip: float, optional
        If specified, the resulting image will be clipped before saving,
        using a threshold based on sigma_clip multiplied by the standard
        deviation of the pixel values. Recommended values are between 2 and 6.
        Default: None
    lens_type: string, optional
        This specifies the type of lens to use for rendering. Current
        options are 'plane-parallel', 'perspective', and 'fisheye'. See
        :class:`yt.visualization.volume_rendering.lens.Lens` for details.
        Default: 'plane-parallel'

    Returns
    -------
    im: ImageArray
        The resulting image, stored as an ImageArray object.
    sc: Scene
        A :class:`yt.visualization.volume_rendering.scene.Scene` object
        that was constructed during the rendering. Useful for further
        modifications, rotations, etc.

    Examples
    --------

    >>> import yt
    >>> ds = yt.load("Enzo_64/DD0046/DD0046")
    >>> im, sc = yt.volume_render(ds, fname="test.png", sigma_clip=4.0)
    """
    sc = create_scene(data_source, field=field)
    im = sc.render()
    sc.save(fname=fname, sigma_clip=sigma_clip, render=False)
    return im, sc
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/visualization/volume_rendering/zbuffer_array.py0000644000175100001770000000476014714401662023710 0ustar00runnerdockerimport numpy as np


class ZBuffer:
    """A container object for z-buffer arrays

    A zbuffer is a companion array for an image that allows the volume
    rendering infrastructure to determine whether one opaque source is in
    front of another opaque source.  The z buffer encodes the distance to the
    opaque source relative to the camera position.

    Parameters
    ----------
    rgba: MxNx4 image
        The image the z buffer corresponds to
    z: MxN image
        The z depth of each pixel in the image. The shape of the image must
        be the same as each RGBA channel in the original image.

    Examples
    --------
    >>> import numpy as np
    >>> shape = (64, 64)
    >>> rng = np.random.default_rng()
    >>> b1 = ZBuffer(rng.random(shape), np.ones(shape))
    >>> b2 = ZBuffer(rng.random(shape), np.zeros(shape))
    >>> c = b1 + b2
    >>> np.all(c.rgba == b2.rgba)
    True
    >>> np.all(c.z == b2.z)
    True
    >>> np.all(c == b2)
    True
    """

    def __init__(self, rgba, z):
        super().__init__()
        assert rgba.shape[: len(z.shape)] == z.shape
        self.rgba = rgba
        self.z = z
        self.shape = z.shape

    def __add__(self, other):
        assert self.shape == other.shape
        f = self.z < other.z
        if self.z.shape[1] == 1:  # Non-rectangular
            rgba = self.rgba * f[:, None, :]
            rgba += other.rgba * (1.0 - f)[:, None, :]
        else:
            b = self.z > other.z
            rgba = np.zeros(self.rgba.shape)
            rgba[f] = self.rgba[f]
            rgba[b] = other.rgba[b]
        z = np.min([self.z, other.z], axis=0)
        return ZBuffer(rgba, z)

    def __iadd__(self, other):
        tmp = self + other
        self.rgba = tmp.rgba
        self.z = tmp.z
        return self

    def __eq__(self, other):
        equal = True
        equal *= np.all(self.rgba == other.rgba)
        equal *= np.all(self.z == other.z)
        return equal

    def paint(self, ind, value, z):
        if z < self.z[ind]:
            self.rgba[ind] = value
            self.z[ind] = z


if __name__ == "__main__":
    shape: tuple[int, ...]
= (64, 64) shapes: list[tuple[int, ...]] = [(64, 64), (16, 16, 4), (128,), (16, 32)] rng = np.random.default_rng() for shape in shapes: b1 = ZBuffer(rng.random(shape), np.ones(shape)) b2 = ZBuffer(rng.random(shape), np.zeros(shape)) c = b1 + b2 assert np.all(c.rgba == b2.rgba) assert np.all(c.z == b2.z) assert np.all(c == b2) ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1731331021.423154 yt-4.4.0/yt.egg-info/0000755000175100001770000000000014714401715013712 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731331020.0 yt-4.4.0/yt.egg-info/PKG-INFO0000644000175100001770000003534514714401714015020 0ustar00runnerdockerMetadata-Version: 2.1 Name: yt Version: 4.4.0 Summary: An analysis and visualization toolkit for volumetric data Author-email: The yt project License: BSD 3-Clause Project-URL: Homepage, https://yt-project.org/ Project-URL: Documentation, https://yt-project.org/doc/ Project-URL: Source, https://github.com/yt-project/yt/ Project-URL: Tracker, https://github.com/yt-project/yt/issues Keywords: astronomy astrophysics visualization amr adaptivemeshrefinement Classifier: Development Status :: 5 - Production/Stable Classifier: Environment :: Console Classifier: Framework :: Matplotlib Classifier: Intended Audience :: Science/Research Classifier: License :: OSI Approved :: BSD License Classifier: Operating System :: MacOS :: MacOS X Classifier: Operating System :: POSIX :: AIX Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: C Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3 :: Only Classifier: Programming Language :: Python :: 3.10 Classifier: Programming Language :: Python :: 3.11 Classifier: Programming Language :: Python :: 3.12 Classifier: Programming Language :: Python :: 3.13 Classifier: Topic :: Scientific/Engineering :: Astronomy Classifier: Topic :: Scientific/Engineering :: Physics Classifier: Topic :: Scientific/Engineering :: Visualization Requires-Python: >=3.10.3 Description-Content-Type: text/markdown License-File: COPYING.txt Requires-Dist: cmyt>=1.1.2 Requires-Dist: ewah-bool-utils>=1.2.0 Requires-Dist: matplotlib>=3.5 Requires-Dist: more-itertools>=8.4 Requires-Dist: numpy<3,>=1.21.3 Requires-Dist: numpy!=2.0.1; platform_machine == "arm64" and platform_system == "Darwin" Requires-Dist: packaging>=20.9 Requires-Dist: pillow>=8.3.2 Requires-Dist: tomli-w>=0.4.0 Requires-Dist: tqdm>=3.4.0 Requires-Dist: unyt>=2.9.2 Requires-Dist: tomli>=1.2.3; python_version < "3.11" Requires-Dist: typing-extensions>=4.4.0; python_version < "3.12" Provides-Extra: hdf5 Requires-Dist: h5py!=3.12.0,>=3.1.0; platform_system == "Windows" and extra == "hdf5" Provides-Extra: netcdf4 Requires-Dist: netCDF4!=1.6.1,>=1.5.3; extra == "netcdf4" Provides-Extra: fortran Requires-Dist: f90nml>=1.1; extra == "fortran" Provides-Extra: adaptahop Provides-Extra: ahf Provides-Extra: amrex Provides-Extra: amrvac Requires-Dist: yt[Fortran]; extra == "amrvac" Provides-Extra: art Provides-Extra: arepo Requires-Dist: yt[HDF5]; extra == "arepo" Provides-Extra: artio Provides-Extra: athena Provides-Extra: athena-pp Provides-Extra: boxlib Provides-Extra: cf-radial Requires-Dist: xarray>=0.16.1; extra == "cf-radial" Requires-Dist: arm-pyart>=1.19.2; extra == "cf-radial" Provides-Extra: chimera Requires-Dist: yt[HDF5]; extra == "chimera" Provides-Extra: chombo Requires-Dist: yt[HDF5]; extra == "chombo" Provides-Extra: cholla Requires-Dist: yt[HDF5]; 
extra == "cholla" Provides-Extra: eagle Requires-Dist: yt[HDF5]; extra == "eagle" Provides-Extra: enzo-e Requires-Dist: yt[HDF5]; extra == "enzo-e" Requires-Dist: libconf>=1.0.1; extra == "enzo-e" Provides-Extra: enzo Requires-Dist: yt[HDF5]; extra == "enzo" Requires-Dist: libconf>=1.0.1; extra == "enzo" Provides-Extra: exodus-ii Requires-Dist: yt[netCDF4]; extra == "exodus-ii" Provides-Extra: fits Requires-Dist: astropy>=4.0.1; extra == "fits" Requires-Dist: regions>=0.7; extra == "fits" Provides-Extra: flash Requires-Dist: yt[HDF5]; extra == "flash" Provides-Extra: gadget Requires-Dist: yt[HDF5]; extra == "gadget" Provides-Extra: gadget-fof Requires-Dist: yt[HDF5]; extra == "gadget-fof" Provides-Extra: gamer Requires-Dist: yt[HDF5]; extra == "gamer" Provides-Extra: gdf Requires-Dist: yt[HDF5]; extra == "gdf" Provides-Extra: gizmo Requires-Dist: yt[HDF5]; extra == "gizmo" Provides-Extra: halo-catalog Requires-Dist: yt[HDF5]; extra == "halo-catalog" Provides-Extra: http-stream Requires-Dist: requests>=2.20.0; extra == "http-stream" Provides-Extra: idefix Requires-Dist: yt_idefix[HDF5]>=2.3.0; extra == "idefix" Provides-Extra: moab Requires-Dist: yt[HDF5]; extra == "moab" Provides-Extra: nc4-cm1 Requires-Dist: yt[netCDF4]; extra == "nc4-cm1" Provides-Extra: open-pmd Requires-Dist: yt[HDF5]; extra == "open-pmd" Provides-Extra: owls Requires-Dist: yt[HDF5]; extra == "owls" Provides-Extra: owls-subfind Requires-Dist: yt[HDF5]; extra == "owls-subfind" Provides-Extra: parthenon Requires-Dist: yt[HDF5]; extra == "parthenon" Provides-Extra: ramses Requires-Dist: yt[Fortran]; extra == "ramses" Requires-Dist: scipy; extra == "ramses" Provides-Extra: rockstar Provides-Extra: sdf Requires-Dist: requests>=2.20.0; extra == "sdf" Provides-Extra: stream Provides-Extra: swift Requires-Dist: yt[HDF5]; extra == "swift" Provides-Extra: tipsy Provides-Extra: ytdata Requires-Dist: yt[HDF5]; extra == "ytdata" Provides-Extra: full Requires-Dist: cartopy>=0.22.0; extra == "full" Requires-Dist: firefly>=3.2.0; extra == "full" Requires-Dist: glueviz>=0.13.3; extra == "full" Requires-Dist: ipython>=7.16.2; extra == "full" Requires-Dist: ipywidgets>=8.0.0; extra == "full" Requires-Dist: miniballcpp>=0.2.1; extra == "full" Requires-Dist: mpi4py>=3.0.3; extra == "full" Requires-Dist: pandas>=1.1.2; extra == "full" Requires-Dist: pooch>=0.7.0; extra == "full" Requires-Dist: pyaml>=17.10.0; extra == "full" Requires-Dist: pykdtree>=1.3.1; extra == "full" Requires-Dist: pyx>=0.15; extra == "full" Requires-Dist: scipy>=1.5.0; extra == "full" Requires-Dist: glue-core!=1.2.4; python_version >= "3.10" and extra == "full" Requires-Dist: ratarmount~=0.8.1; (platform_system != "Windows" and platform_system != "Darwin") and extra == "full" Requires-Dist: yt[adaptahop]; extra == "full" Requires-Dist: yt[ahf]; extra == "full" Requires-Dist: yt[amrex]; extra == "full" Requires-Dist: yt[amrvac]; extra == "full" Requires-Dist: yt[art]; extra == "full" Requires-Dist: yt[arepo]; extra == "full" Requires-Dist: yt[artio]; extra == "full" Requires-Dist: yt[athena]; extra == "full" Requires-Dist: yt[athena_pp]; extra == "full" Requires-Dist: yt[boxlib]; extra == "full" Requires-Dist: yt[cf_radial]; extra == "full" Requires-Dist: yt[chimera]; extra == "full" Requires-Dist: yt[chombo]; extra == "full" Requires-Dist: yt[cholla]; extra == "full" Requires-Dist: yt[eagle]; extra == "full" Requires-Dist: yt[enzo_e]; extra == "full" Requires-Dist: yt[enzo]; extra == "full" Requires-Dist: yt[exodus_ii]; extra == "full" Requires-Dist: yt[fits]; extra 
== "full" Requires-Dist: yt[flash]; extra == "full" Requires-Dist: yt[gadget]; extra == "full" Requires-Dist: yt[gadget_fof]; extra == "full" Requires-Dist: yt[gamer]; extra == "full" Requires-Dist: yt[gdf]; extra == "full" Requires-Dist: yt[gizmo]; extra == "full" Requires-Dist: yt[halo_catalog]; extra == "full" Requires-Dist: yt[http_stream]; extra == "full" Requires-Dist: yt[idefix]; extra == "full" Requires-Dist: yt[moab]; extra == "full" Requires-Dist: yt[nc4_cm1]; extra == "full" Requires-Dist: yt[open_pmd]; extra == "full" Requires-Dist: yt[owls]; extra == "full" Requires-Dist: yt[owls_subfind]; extra == "full" Requires-Dist: yt[parthenon]; extra == "full" Requires-Dist: yt[ramses]; extra == "full" Requires-Dist: yt[rockstar]; extra == "full" Requires-Dist: yt[sdf]; extra == "full" Requires-Dist: yt[stream]; extra == "full" Requires-Dist: yt[swift]; extra == "full" Requires-Dist: yt[tipsy]; extra == "full" Requires-Dist: yt[ytdata]; extra == "full" Provides-Extra: mapserver Requires-Dist: bottle; extra == "mapserver" Provides-Extra: test Requires-Dist: pyaml>=17.10.0; extra == "test" Requires-Dist: pytest>=6.1; extra == "test" Requires-Dist: pytest-mpl>=0.16.1; extra == "test" Requires-Dist: sympy!=1.10,!=1.9; extra == "test" Requires-Dist: imageio!=2.35.0; extra == "test" # The yt Project [![PyPI](https://img.shields.io/pypi/v/yt)](https://pypi.org/project/yt) [![Supported Python Versions](https://img.shields.io/pypi/pyversions/yt)](https://pypi.org/project/yt/) [![Latest Documentation](https://img.shields.io/badge/docs-latest-brightgreen.svg)](http://yt-project.org/docs/dev/) [![Users' Mailing List](https://img.shields.io/badge/Users-List-lightgrey.svg)](https://mail.python.org/archives/list/yt-users@python.org//) [![Devel Mailing List](https://img.shields.io/badge/Devel-List-lightgrey.svg)](https://mail.python.org/archives/list/yt-dev@python.org//) [![Data Hub](https://img.shields.io/badge/data-hub-orange.svg)](https://hub.yt/) [![Powered by NumFOCUS](https://img.shields.io/badge/powered%20by-NumFOCUS-orange.svg?style=flat&colorA=E1523D&colorB=007D8A)](http://numfocus.org) [![Sponsor our Project](https://img.shields.io/badge/donate-to%20yt-blueviolet)](https://numfocus.org/donate-to-yt) [![Build and Test](https://github.com/yt-project/yt/actions/workflows/build-test.yaml/badge.svg)](https://github.com/yt-project/yt/actions/workflows/build-test.yaml) [![CI (bleeding edge)](https://github.com/yt-project/yt/actions/workflows/bleeding-edge.yaml/badge.svg)](https://github.com/yt-project/yt/actions/workflows/bleeding-edge.yaml) [![pre-commit.ci status](https://results.pre-commit.ci/badge/github/yt-project/yt/main.svg)](https://results.pre-commit.ci/latest/github/yt-project/yt/main) [![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v2.json)](https://github.com/charliermarsh/ruff)
yt is an open-source, permissively-licensed Python library for analyzing and visualizing volumetric data. yt supports structured, variable-resolution meshes, unstructured meshes, and discrete or sampled data such as particles. Focused on driving physically-meaningful inquiry, yt has been applied in domains such as astrophysics, seismology, nuclear engineering, molecular dynamics, and oceanography. Composed of a friendly community of users and developers, we want to make it easy to use and develop - we'd love it if you got involved! We've written a [method paper](https://ui.adsabs.harvard.edu/abs/2011ApJS..192....9T) you may be interested in; if you use yt in the preparation of a publication, please consider citing it. ## Code of Conduct yt abides by a code of conduct partially modified from the PSF code of conduct, and is found [in our contributing guide](http://yt-project.org/docs/dev/developing/developing.html#yt-community-code-of-conduct). ## Installation You can install the most recent stable version of yt either with conda from [conda-forge](https://conda-forge.org/): ```shell conda install -c conda-forge yt ``` or with pip: ```shell python -m pip install yt ``` More information on the various ways to install yt, and in particular to install from source, can be found on [the project's website](https://yt-project.org/docs/dev/installing.html). ## Getting Started yt is designed to provide meaningful analysis of data. We have some Quickstart example notebooks in the repository: * [Introduction](https://github.com/yt-project/yt/tree/main/doc/source/quickstart/1\)_Introduction.ipynb) * [Data Inspection](https://github.com/yt-project/yt/tree/main/doc/source/quickstart/2\)_Data_Inspection.ipynb) * [Simple Visualization](https://github.com/yt-project/yt/tree/main/doc/source/quickstart/3\)_Simple_Visualization.ipynb) * [Data Objects and Time Series](https://github.com/yt-project/yt/tree/main/doc/source/quickstart/4\)_Data_Objects_and_Time_Series.ipynb) * [Derived Fields and Profiles](https://github.com/yt-project/yt/tree/main/doc/source/quickstart/5\)_Derived_Fields_and_Profiles.ipynb) * [Volume Rendering](https://github.com/yt-project/yt/tree/main/doc/source/quickstart/6\)_Volume_Rendering.ipynb) If you'd like to try these online, you can visit our [yt Hub](https://hub.yt/) and run a notebook next to some of our example data. ## Contributing We love contributions! yt is open source, built on open source, and we'd love to have you hang out in our community. We have developed some [guidelines](CONTRIBUTING.rst) for contributing to yt. **Imposter syndrome disclaimer**: We want your help. No, really. There may be a little voice inside your head that is telling you that you're not ready to be an open source contributor; that your skills aren't nearly good enough to contribute. What could you possibly offer a project like this one? We assure you - the little voice in your head is wrong. If you can write code at all, you can contribute code to open source. Contributing to open source projects is a fantastic way to advance one's coding skills. Writing perfect code isn't the measure of a good developer (that would disqualify all of us!); it's trying to create something, making mistakes, and learning from those mistakes. That's how we all improve, and we are happy to help others learn. Being an open source contributor doesn't just mean writing code, either. 
You can help out by writing documentation, tests, or even giving feedback
about the project (and yes - that includes giving feedback about the
contribution process). Some of these contributions may be the most valuable
to the project as a whole, because you're coming to the project with fresh
eyes, so you can see the errors and assumptions that seasoned contributors
have glossed over.

(This disclaimer was originally written by
[Adrienne Lowe](https://github.com/adriennefriend) for a
[PyCon talk](https://www.youtube.com/watch?v=6Uj746j9Heo), and was adapted by
yt based on its use in the README file for the
[MetPy project](https://github.com/Unidata/MetPy))

## Resources

We have some community and documentation resources available.

* Our latest documentation is always at http://yt-project.org/docs/dev/ and it
  includes recipes, tutorials, and API documentation
* The [discussion mailing list](https://mail.python.org/archives/list/yt-users@python.org/)
  should be your first stop for general questions
* The [development mailing list](https://mail.python.org/archives/list/yt-dev@python.org/)
  is better suited for development discussions
* You can also join us on Slack at yt-project.slack.com
  ([request an invite](https://yt-project.org/slack.html))

Is your code compatible with yt? Great! Please consider giving us a shoutout
with a shiny badge in your README

- markdown

  ```markdown
  [![yt-project](https://img.shields.io/static/v1?label="works%20with"&message="yt"&color="blueviolet")](https://yt-project.org)
  ```

- rst

  ```reStructuredText
  |yt-project|

  .. |yt-project| image:: https://img.shields.io/static/v1?label="works%20with"&message="yt"&color="blueviolet"
     :target: https://yt-project.org
  ```

## Powered by NumFOCUS

yt is a fiscally sponsored project of [NumFOCUS](https://numfocus.org/).
If you're interested in supporting the active maintenance and development of
this project, consider [donating to the project](https://numfocus.salsalabs.org/donate-to-yt/index.html).
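## Try It Out

If you want a quick end-to-end check of your installation, a script along
these lines loads one of the yt sample datasets and saves a slice image.
This is a minimal sketch: `yt.load_sample` fetches data over the network on
first use (via the optional `pooch` dependency), and the dataset and field
names here are just common examples.

```python
import yt

# Download (on first use) and load a small sample dataset.
ds = yt.load_sample("IsolatedGalaxy")

# Slice through the domain along the z-axis and save a density image.
slc = yt.SlicePlot(ds, "z", ("gas", "density"))
slc.save("density_slice.png")
```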
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731331021.0 yt-4.4.0/yt.egg-info/SOURCES.txt0000644000175100001770000013065514714401715015610 0ustar00runnerdockerCITATION CONTRIBUTING.rst COPYING.txt CREDITS MANIFEST.in README.md conftest.py pyproject.toml setup.py setupext.py doc/Makefile doc/README doc/activate doc/activate.csh doc/cheatsheet.tex doc/docstring_idioms.txt doc/extensions/README doc/extensions/config_help.py doc/extensions/pythonscript_sphinxext.py doc/extensions/yt_colormaps.py doc/extensions/yt_cookbook.py doc/extensions/yt_showfields.py doc/helper_scripts/code_support.py doc/helper_scripts/parse_cb_list.py doc/helper_scripts/parse_dq_list.py doc/helper_scripts/parse_object_list.py doc/helper_scripts/run_recipes.py doc/helper_scripts/show_fields.py doc/helper_scripts/split_auto.py doc/helper_scripts/table.py doc/source/conf.py doc/source/index.rst doc/source/installing.rst doc/source/yt3differences.rst doc/source/yt4differences.rst doc/source/_static/apiKey01.jpg doc/source/_static/apiKey02.jpg doc/source/_static/apiKey03.jpg doc/source/_static/apiKey04.jpg doc/source/_static/custom.css doc/source/_static/yt_icon.png doc/source/_static/yt_logo.png doc/source/_static/yt_logo.svg doc/source/_templates/layout.html doc/source/_templates/autosummary/class.rst doc/source/about/index.rst doc/source/analyzing/Particle_Trajectories.ipynb doc/source/analyzing/astropy_integrations.rst doc/source/analyzing/fields.rst doc/source/analyzing/filtering.rst doc/source/analyzing/generating_processed_data.rst doc/source/analyzing/index.rst doc/source/analyzing/ionization_cube.py doc/source/analyzing/mesh_filter.ipynb doc/source/analyzing/objects.rst doc/source/analyzing/parallel_computation.rst doc/source/analyzing/particle_filter.ipynb doc/source/analyzing/saving_data.rst doc/source/analyzing/time_series_analysis.rst doc/source/analyzing/units.rst doc/source/analyzing/_images/fields_ipywidget.png doc/source/analyzing/_static/axes.c doc/source/analyzing/_static/axes.h doc/source/analyzing/_static/axes_calculator.pyx doc/source/analyzing/_static/axes_calculator_setup.txt doc/source/analyzing/domain_analysis/XrayEmissionFields.ipynb doc/source/analyzing/domain_analysis/clump_finding.rst doc/source/analyzing/domain_analysis/cosmology_calculator.rst doc/source/analyzing/domain_analysis/index.rst doc/source/analyzing/domain_analysis/xray_data_README.rst doc/source/cookbook/amrkdtree_downsampling.py doc/source/cookbook/annotate_timestamp_and_scale.py doc/source/cookbook/annotations.py doc/source/cookbook/average_value.py doc/source/cookbook/calculating_information.rst doc/source/cookbook/camera_movement.py doc/source/cookbook/changing_label_formats.py doc/source/cookbook/colormaps.py doc/source/cookbook/complex_plots.rst doc/source/cookbook/constructing_data_objects.rst doc/source/cookbook/contours_on_slice.py doc/source/cookbook/count.sh doc/source/cookbook/custom_camera_volume_rendering.py doc/source/cookbook/custom_colorbar_tickmarks.ipynb doc/source/cookbook/custom_transfer_function_volume_rendering.py doc/source/cookbook/customized_phase_plot.py doc/source/cookbook/customized_profile_plot.py doc/source/cookbook/derived_field.py doc/source/cookbook/downsampling_amr.py doc/source/cookbook/extract_fixed_resolution_data.py doc/source/cookbook/find_clumps.py doc/source/cookbook/fits_radio_cubes.ipynb doc/source/cookbook/fits_xray_images.ipynb doc/source/cookbook/geographic_xforms_and_projections.ipynb doc/source/cookbook/global_phase_plots.py 
doc/source/cookbook/hse_field.py doc/source/cookbook/image_background_colors.py doc/source/cookbook/image_resolution.py doc/source/cookbook/index.rst doc/source/cookbook/matplotlib-animation.py doc/source/cookbook/multi_plot_3x2_FRB.py doc/source/cookbook/multi_plot_slice_and_proj.py doc/source/cookbook/multi_width_image.py doc/source/cookbook/multiplot_2x2.py doc/source/cookbook/multiplot_2x2_coordaxes_slice.py doc/source/cookbook/multiplot_2x2_time_series.py doc/source/cookbook/multiplot_export_to_mpl.py doc/source/cookbook/multiplot_phaseplot.py doc/source/cookbook/notebook_tutorial.rst doc/source/cookbook/offaxis_projection.py doc/source/cookbook/offaxis_projection_colorbar.py doc/source/cookbook/opaque_rendering.py doc/source/cookbook/overplot_grids.py doc/source/cookbook/overplot_particles.py doc/source/cookbook/particle_filter.py doc/source/cookbook/particle_filter_sfr.py doc/source/cookbook/particle_one_color_plot.py doc/source/cookbook/particle_xvz_plot.py doc/source/cookbook/particle_xy_plot.py doc/source/cookbook/power_spectrum_example.py doc/source/cookbook/profile_with_standard_deviation.py doc/source/cookbook/rad_velocity.py doc/source/cookbook/radial_profile_styles.py doc/source/cookbook/render_two_fields.py doc/source/cookbook/render_two_fields_tf.py doc/source/cookbook/rendering_with_box_and_grids.py doc/source/cookbook/show_hide_axes_colorbar.py doc/source/cookbook/sigma_clip.py doc/source/cookbook/simple_1d_line_plot.py doc/source/cookbook/simple_contour_in_slice.py doc/source/cookbook/simple_off_axis_projection.py doc/source/cookbook/simple_off_axis_slice.py doc/source/cookbook/simple_pdf.py doc/source/cookbook/simple_phase.py doc/source/cookbook/simple_plots.rst doc/source/cookbook/simple_profile.py doc/source/cookbook/simple_projection.py doc/source/cookbook/simple_projection_methods.py doc/source/cookbook/simple_projection_stddev.py doc/source/cookbook/simple_projection_weighted.py doc/source/cookbook/simple_radial_profile.py doc/source/cookbook/simple_slice.py doc/source/cookbook/simple_slice_matplotlib_example.py doc/source/cookbook/simple_slice_with_multiple_fields.py doc/source/cookbook/simple_volume_rendering.py doc/source/cookbook/simulation_analysis.py doc/source/cookbook/streamlines.py doc/source/cookbook/streamlines_isocontour.py doc/source/cookbook/sum_mass_in_sphere.py doc/source/cookbook/surface_plot.py doc/source/cookbook/thin_slice_projection.py doc/source/cookbook/time_series.py doc/source/cookbook/time_series_profiles.py doc/source/cookbook/tipsy_and_yt.ipynb doc/source/cookbook/various_lens.py doc/source/cookbook/velocity_vectors_on_slice.py doc/source/cookbook/vol-annotated.py doc/source/cookbook/vol-lines.py doc/source/cookbook/vol-points.py doc/source/cookbook/yt_gadget_analysis.ipynb doc/source/cookbook/yt_gadget_owls_analysis.ipynb doc/source/cookbook/zoomin_frames.py doc/source/cookbook/tests/test_cookbook.py doc/source/developing/building_the_docs.rst doc/source/developing/creating_datatypes.rst doc/source/developing/creating_derived_fields.rst doc/source/developing/creating_frontend.rst doc/source/developing/debugdrive.rst doc/source/developing/deprecating_features.rst doc/source/developing/developing.rst doc/source/developing/extensions.rst doc/source/developing/external_analysis.rst doc/source/developing/index.rst doc/source/developing/releasing.rst doc/source/developing/testing.rst doc/source/examining/Loading_Data_via_Functions.ipynb doc/source/examining/Loading_Generic_Array_Data.ipynb 
doc/source/examining/Loading_Generic_Particle_Data.ipynb doc/source/examining/Loading_Spherical_Data.ipynb doc/source/examining/index.rst doc/source/examining/loading_data.rst doc/source/examining/low_level_inspection.rst doc/source/faq/index.rst doc/source/help/index.rst doc/source/intro/index.rst doc/source/quickstart/1)_Introduction.ipynb doc/source/quickstart/2)_Data_Inspection.ipynb doc/source/quickstart/3)_Simple_Visualization.ipynb doc/source/quickstart/4)_Data_Objects_and_Time_Series.ipynb doc/source/quickstart/5)_Derived_Fields_and_Profiles.ipynb doc/source/quickstart/6)_Volume_Rendering.ipynb doc/source/quickstart/index.rst doc/source/reference/changelog.rst doc/source/reference/code_support.rst doc/source/reference/command-line.rst doc/source/reference/configuration.rst doc/source/reference/demeshening.rst doc/source/reference/field_list.rst doc/source/reference/index.rst doc/source/reference/python_introduction.rst doc/source/reference/_images/yt3_p0010_proj_density_None_x_z002.png doc/source/reference/_images/yt4_p0010_proj_density_None_x_z002.png doc/source/reference/api/api.rst doc/source/visualizing/FITSImageData.ipynb doc/source/visualizing/TransferFunctionHelper_Tutorial.ipynb doc/source/visualizing/Volume_Rendering_Tutorial.ipynb doc/source/visualizing/callbacks.rst doc/source/visualizing/geographic_projections_and_transforms.rst doc/source/visualizing/index.rst doc/source/visualizing/interactive_data_visualization.rst doc/source/visualizing/manual_plotting.rst doc/source/visualizing/mapserver.rst doc/source/visualizing/plots.rst doc/source/visualizing/sketchfab.rst doc/source/visualizing/streamlines.rst doc/source/visualizing/unstructured_mesh_rendering.rst doc/source/visualizing/visualizing_particle_datasets_with_firefly.rst doc/source/visualizing/volume_rendering.rst doc/source/visualizing/_images/all_colormaps.png doc/source/visualizing/_images/allsky.png doc/source/visualizing/_images/firefly_example.png doc/source/visualizing/_images/idv.jpg doc/source/visualizing/_images/ip_saver_0.png doc/source/visualizing/_images/ip_saver_1.png doc/source/visualizing/_images/ip_saver_2.png doc/source/visualizing/_images/mapserver.png doc/source/visualizing/_images/native_yt_colormaps.png doc/source/visualizing/_images/scene_diagram.svg doc/source/visualizing/_images/surfaces_blender.png doc/source/visualizing/_images/vr_sample.jpg doc/source/visualizing/colormaps/cmap_images.py doc/source/visualizing/colormaps/index.rst yt/__init__.py yt/_typing.py yt/_version.py yt/api.py yt/arraytypes.py yt/config.py yt/default.mplstyle yt/exthook.py yt/funcs.py yt/loaders.py yt/py.typed yt/sample_data_registry.json yt/startup_tasks.py yt/testing.py yt.egg-info/PKG-INFO yt.egg-info/SOURCES.txt yt.egg-info/dependency_links.txt yt.egg-info/entry_points.txt yt.egg-info/not-zip-safe yt.egg-info/requires.txt yt.egg-info/top_level.txt yt/_maintenance/__init__.py yt/_maintenance/backports.py yt/_maintenance/deprecation.py yt/_maintenance/ipython_compat.py yt/_maintenance/numpy2_compat.py yt/analysis_modules/__init__.py yt/analysis_modules/absorption_spectrum/__init__.py yt/analysis_modules/absorption_spectrum/api.py yt/analysis_modules/cosmological_observation/__init__.py yt/analysis_modules/cosmological_observation/api.py yt/analysis_modules/halo_analysis/__init__.py yt/analysis_modules/halo_analysis/api.py yt/analysis_modules/halo_finding/__init__.py yt/analysis_modules/halo_finding/api.py yt/analysis_modules/halo_mass_function/__init__.py yt/analysis_modules/halo_mass_function/api.py 
yt/analysis_modules/level_sets/__init__.py yt/analysis_modules/level_sets/api.py yt/analysis_modules/particle_trajectories/__init__.py yt/analysis_modules/particle_trajectories/api.py yt/analysis_modules/photon_simulator/__init__.py yt/analysis_modules/photon_simulator/api.py yt/analysis_modules/ppv_cube/__init__.py yt/analysis_modules/ppv_cube/api.py yt/analysis_modules/radmc3d_export/__init__.py yt/analysis_modules/radmc3d_export/api.py yt/analysis_modules/spectral_integrator/__init__.py yt/analysis_modules/spectral_integrator/api.py yt/analysis_modules/star_analysis/__init__.py yt/analysis_modules/star_analysis/api.py yt/analysis_modules/sunrise_export/__init__.py yt/analysis_modules/sunrise_export/api.py yt/analysis_modules/sunyaev_zeldovich/__init__.py yt/analysis_modules/sunyaev_zeldovich/api.py yt/analysis_modules/two_point_functions/__init__.py yt/analysis_modules/two_point_functions/api.py yt/data_objects/__init__.py yt/data_objects/analyzer_objects.py yt/data_objects/api.py yt/data_objects/construction_data_containers.py yt/data_objects/data_containers.py yt/data_objects/derived_quantities.py yt/data_objects/field_data.py yt/data_objects/image_array.py yt/data_objects/particle_filters.py yt/data_objects/particle_trajectories.py yt/data_objects/particle_unions.py yt/data_objects/profiles.py yt/data_objects/region_expression.py yt/data_objects/static_output.py yt/data_objects/time_series.py yt/data_objects/unions.py yt/data_objects/index_subobjects/__init__.py yt/data_objects/index_subobjects/grid_patch.py yt/data_objects/index_subobjects/octree_subset.py yt/data_objects/index_subobjects/particle_container.py yt/data_objects/index_subobjects/stretched_grid.py yt/data_objects/index_subobjects/unstructured_mesh.py yt/data_objects/level_sets/__init__.py yt/data_objects/level_sets/api.py yt/data_objects/level_sets/clump_handling.py yt/data_objects/level_sets/clump_info_items.py yt/data_objects/level_sets/clump_tools.py yt/data_objects/level_sets/clump_validators.py yt/data_objects/level_sets/contour_finder.py yt/data_objects/level_sets/tests/__init__.py yt/data_objects/level_sets/tests/test_clump_finding.py yt/data_objects/selection_objects/__init__.py yt/data_objects/selection_objects/boolean_operations.py yt/data_objects/selection_objects/cut_region.py yt/data_objects/selection_objects/data_selection_objects.py yt/data_objects/selection_objects/disk.py yt/data_objects/selection_objects/object_collection.py yt/data_objects/selection_objects/point.py yt/data_objects/selection_objects/ray.py yt/data_objects/selection_objects/region.py yt/data_objects/selection_objects/slices.py yt/data_objects/selection_objects/spheroids.py yt/data_objects/tests/__init__.py yt/data_objects/tests/test_add_field.py yt/data_objects/tests/test_bbox.py yt/data_objects/tests/test_boolean_regions.py yt/data_objects/tests/test_center_squeeze.py yt/data_objects/tests/test_chunking.py yt/data_objects/tests/test_clone.py yt/data_objects/tests/test_compose.py yt/data_objects/tests/test_connected_sets.py yt/data_objects/tests/test_covering_grid.py yt/data_objects/tests/test_cutting_plane.py yt/data_objects/tests/test_data_collection.py yt/data_objects/tests/test_data_containers.py yt/data_objects/tests/test_dataset_access.py yt/data_objects/tests/test_derived_quantities.py yt/data_objects/tests/test_disks.py yt/data_objects/tests/test_ellipsoid.py yt/data_objects/tests/test_exclude_functions.py yt/data_objects/tests/test_extract_regions.py yt/data_objects/tests/test_firefly.py yt/data_objects/tests/test_fluxes.py 
yt/data_objects/tests/test_glue.py yt/data_objects/tests/test_image_array.py yt/data_objects/tests/test_io_geometry.py yt/data_objects/tests/test_numpy_ops.py yt/data_objects/tests/test_octree.py yt/data_objects/tests/test_ortho_rays.py yt/data_objects/tests/test_particle_filter.py yt/data_objects/tests/test_particle_trajectories.py yt/data_objects/tests/test_particle_trajectories_pytest.py yt/data_objects/tests/test_pickling.py yt/data_objects/tests/test_points.py yt/data_objects/tests/test_profiles.py yt/data_objects/tests/test_projection.py yt/data_objects/tests/test_rays.py yt/data_objects/tests/test_refinement.py yt/data_objects/tests/test_regions.py yt/data_objects/tests/test_registration.py yt/data_objects/tests/test_slice.py yt/data_objects/tests/test_sph_data_objects.py yt/data_objects/tests/test_spheres.py yt/data_objects/tests/test_time_series.py yt/data_objects/tests/test_units_override.py yt/extensions/__init__.py yt/fields/__init__.py yt/fields/angular_momentum.py yt/fields/api.py yt/fields/astro_fields.py yt/fields/astro_simulations.py yt/fields/cosmology_fields.py yt/fields/derived_field.py yt/fields/domain_context.py yt/fields/field_aliases.py yt/fields/field_detector.py yt/fields/field_exceptions.py yt/fields/field_functions.py yt/fields/field_info_container.py yt/fields/field_plugin_registry.py yt/fields/field_type_container.py yt/fields/fluid_fields.py yt/fields/fluid_vector_fields.py yt/fields/geometric_fields.py yt/fields/interpolated_fields.py yt/fields/local_fields.py yt/fields/magnetic_field.py yt/fields/my_plugin_fields.py yt/fields/particle_fields.py yt/fields/species_fields.py yt/fields/tensor_fields.py yt/fields/vector_operations.py yt/fields/xray_emission_fields.py yt/fields/tests/__init__.py yt/fields/tests/test_ambiguous_fields.py yt/fields/tests/test_angular_momentum.py yt/fields/tests/test_field_access.py yt/fields/tests/test_field_access_pytest.py yt/fields/tests/test_field_name_container.py yt/fields/tests/test_fields.py yt/fields/tests/test_fields_plugins.py yt/fields/tests/test_magnetic_fields.py yt/fields/tests/test_particle_fields.py yt/fields/tests/test_species_fields.py yt/fields/tests/test_sph_fields.py yt/fields/tests/test_vector_fields.py yt/fields/tests/test_xray_fields.py yt/frontends/__init__.py yt/frontends/api.py yt/frontends/adaptahop/__init__.py yt/frontends/adaptahop/api.py yt/frontends/adaptahop/data_structures.py yt/frontends/adaptahop/definitions.py yt/frontends/adaptahop/fields.py yt/frontends/adaptahop/io.py yt/frontends/adaptahop/tests/__init__.py yt/frontends/adaptahop/tests/test_outputs.py yt/frontends/ahf/__init__.py yt/frontends/ahf/api.py yt/frontends/ahf/data_structures.py yt/frontends/ahf/fields.py yt/frontends/ahf/io.py yt/frontends/ahf/tests/__init__.py yt/frontends/ahf/tests/test_outputs.py yt/frontends/amrex/__init__.py yt/frontends/amrex/api.py yt/frontends/amrex/data_structures.py yt/frontends/amrex/definitions.py yt/frontends/amrex/fields.py yt/frontends/amrex/io.py yt/frontends/amrex/misc.py yt/frontends/amrex/tests/__init__.py yt/frontends/amrex/tests/test_field_parsing.py yt/frontends/amrex/tests/test_outputs.py yt/frontends/amrvac/__init__.py yt/frontends/amrvac/api.py yt/frontends/amrvac/data_structures.py yt/frontends/amrvac/datfile_utils.py yt/frontends/amrvac/definitions.py yt/frontends/amrvac/fields.py yt/frontends/amrvac/io.py yt/frontends/amrvac/tests/__init__.py yt/frontends/amrvac/tests/test_outputs.py yt/frontends/amrvac/tests/test_read_amrvac_namelist.py yt/frontends/amrvac/tests/test_units_override.py 
yt/frontends/amrvac/tests/sample_parfiles/bw_3d.par yt/frontends/amrvac/tests/sample_parfiles/tvdlf_scheme.par yt/frontends/arepo/__init__.py yt/frontends/arepo/api.py yt/frontends/arepo/data_structures.py yt/frontends/arepo/fields.py yt/frontends/arepo/io.py yt/frontends/arepo/tests/__init__.py yt/frontends/arepo/tests/test_outputs.py yt/frontends/art/__init__.py yt/frontends/art/api.py yt/frontends/art/data_structures.py yt/frontends/art/definitions.py yt/frontends/art/fields.py yt/frontends/art/io.py yt/frontends/art/tests/__init__.py yt/frontends/art/tests/test_outputs.py yt/frontends/artio/__init__.py yt/frontends/artio/_artio_caller.pyx yt/frontends/artio/api.py yt/frontends/artio/data_structures.py yt/frontends/artio/definitions.py yt/frontends/artio/fields.py yt/frontends/artio/io.py yt/frontends/artio/artio_headers/LICENSE yt/frontends/artio/artio_headers/artio.c yt/frontends/artio/artio_headers/artio.h yt/frontends/artio/artio_headers/artio_endian.c yt/frontends/artio/artio_headers/artio_endian.h yt/frontends/artio/artio_headers/artio_file.c yt/frontends/artio/artio_headers/artio_grid.c yt/frontends/artio/artio_headers/artio_internal.h yt/frontends/artio/artio_headers/artio_mpi.c yt/frontends/artio/artio_headers/artio_mpi.h yt/frontends/artio/artio_headers/artio_parameter.c yt/frontends/artio/artio_headers/artio_particle.c yt/frontends/artio/artio_headers/artio_posix.c yt/frontends/artio/artio_headers/artio_selector.c yt/frontends/artio/artio_headers/artio_sfc.c yt/frontends/artio/artio_headers/cosmology.c yt/frontends/artio/artio_headers/cosmology.h yt/frontends/artio/tests/__init__.py yt/frontends/artio/tests/test_outputs.py yt/frontends/athena/__init__.py yt/frontends/athena/api.py yt/frontends/athena/data_structures.py yt/frontends/athena/definitions.py yt/frontends/athena/fields.py yt/frontends/athena/io.py yt/frontends/athena/misc.py yt/frontends/athena/tests/__init__.py yt/frontends/athena/tests/test_outputs.py yt/frontends/athena_pp/__init__.py yt/frontends/athena_pp/api.py yt/frontends/athena_pp/data_structures.py yt/frontends/athena_pp/definitions.py yt/frontends/athena_pp/fields.py yt/frontends/athena_pp/io.py yt/frontends/athena_pp/misc.py yt/frontends/athena_pp/tests/__init__.py yt/frontends/athena_pp/tests/test_outputs.py yt/frontends/boxlib/__init__.py yt/frontends/boxlib/_deprecation.py yt/frontends/boxlib/api.py yt/frontends/boxlib/data_structures/__init__.py yt/frontends/boxlib/fields/__init__.py yt/frontends/boxlib/io/__init__.py yt/frontends/boxlib/tests/__init__.py yt/frontends/boxlib/tests/test_boxlib_deprecation.py yt/frontends/boxlib/tests/test_outputs.py yt/frontends/cf_radial/__init__.py yt/frontends/cf_radial/api.py yt/frontends/cf_radial/data_structures.py yt/frontends/cf_radial/fields.py yt/frontends/cf_radial/io.py yt/frontends/cf_radial/tests/__init__.py yt/frontends/cf_radial/tests/test_cf_radial_pytest.py yt/frontends/cf_radial/tests/test_outputs.py yt/frontends/chimera/__init__.py yt/frontends/chimera/api.py yt/frontends/chimera/data_structures.py yt/frontends/chimera/definitions.py yt/frontends/chimera/fields.py yt/frontends/chimera/io.py yt/frontends/chimera/misc.py yt/frontends/chimera/tests/__init__.py yt/frontends/chimera/tests/test_outputs.py yt/frontends/cholla/__init__.py yt/frontends/cholla/api.py yt/frontends/cholla/data_structures.py yt/frontends/cholla/definitions.py yt/frontends/cholla/fields.py yt/frontends/cholla/io.py yt/frontends/cholla/misc.py yt/frontends/cholla/tests/__init__.py yt/frontends/cholla/tests/test_outputs.py 
yt/frontends/chombo/__init__.py yt/frontends/chombo/api.py yt/frontends/chombo/data_structures.py yt/frontends/chombo/definitions.py yt/frontends/chombo/fields.py yt/frontends/chombo/io.py yt/frontends/chombo/misc.py yt/frontends/chombo/tests/__init__.py yt/frontends/chombo/tests/test_outputs.py yt/frontends/eagle/__init__.py yt/frontends/eagle/api.py yt/frontends/eagle/data_structures.py yt/frontends/eagle/definitions.py yt/frontends/eagle/fields.py yt/frontends/eagle/io.py yt/frontends/eagle/tests/__init__.py yt/frontends/eagle/tests/test_outputs.py yt/frontends/enzo/__init__.py yt/frontends/enzo/answer_testing_support.py yt/frontends/enzo/api.py yt/frontends/enzo/data_structures.py yt/frontends/enzo/definitions.py yt/frontends/enzo/fields.py yt/frontends/enzo/io.py yt/frontends/enzo/misc.py yt/frontends/enzo/simulation_handling.py yt/frontends/enzo/tests/__init__.py yt/frontends/enzo/tests/test_outputs.py yt/frontends/enzo_e/__init__.py yt/frontends/enzo_e/api.py yt/frontends/enzo_e/data_structures.py yt/frontends/enzo_e/definitions.py yt/frontends/enzo_e/fields.py yt/frontends/enzo_e/io.py yt/frontends/enzo_e/misc.py yt/frontends/enzo_e/tests/__init__.py yt/frontends/enzo_e/tests/test_misc.py yt/frontends/enzo_e/tests/test_outputs.py yt/frontends/exodus_ii/__init__.py yt/frontends/exodus_ii/api.py yt/frontends/exodus_ii/data_structures.py yt/frontends/exodus_ii/definitions.py yt/frontends/exodus_ii/fields.py yt/frontends/exodus_ii/io.py yt/frontends/exodus_ii/misc.py yt/frontends/exodus_ii/simulation_handling.py yt/frontends/exodus_ii/util.py yt/frontends/exodus_ii/tests/__init__.py yt/frontends/exodus_ii/tests/test_outputs.py yt/frontends/fits/__init__.py yt/frontends/fits/api.py yt/frontends/fits/data_structures.py yt/frontends/fits/definitions.py yt/frontends/fits/fields.py yt/frontends/fits/io.py yt/frontends/fits/misc.py yt/frontends/fits/tests/__init__.py yt/frontends/fits/tests/test_outputs.py yt/frontends/flash/__init__.py yt/frontends/flash/api.py yt/frontends/flash/data_structures.py yt/frontends/flash/definitions.py yt/frontends/flash/fields.py yt/frontends/flash/io.py yt/frontends/flash/misc.py yt/frontends/flash/tests/__init__.py yt/frontends/flash/tests/test_outputs.py yt/frontends/gadget/__init__.py yt/frontends/gadget/api.py yt/frontends/gadget/data_structures.py yt/frontends/gadget/definitions.py yt/frontends/gadget/fields.py yt/frontends/gadget/io.py yt/frontends/gadget/simulation_handling.py yt/frontends/gadget/testing.py yt/frontends/gadget/tests/__init__.py yt/frontends/gadget/tests/test_gadget_pytest.py yt/frontends/gadget/tests/test_outputs.py yt/frontends/gadget_fof/__init__.py yt/frontends/gadget_fof/api.py yt/frontends/gadget_fof/data_structures.py yt/frontends/gadget_fof/fields.py yt/frontends/gadget_fof/io.py yt/frontends/gadget_fof/tests/__init__.py yt/frontends/gadget_fof/tests/test_outputs.py yt/frontends/gamer/__init__.py yt/frontends/gamer/api.py yt/frontends/gamer/cfields.pyx yt/frontends/gamer/data_structures.py yt/frontends/gamer/definitions.py yt/frontends/gamer/fields.py yt/frontends/gamer/io.py yt/frontends/gamer/misc.py yt/frontends/gamer/tests/__init__.py yt/frontends/gamer/tests/test_outputs.py yt/frontends/gdf/__init__.py yt/frontends/gdf/api.py yt/frontends/gdf/data_structures.py yt/frontends/gdf/definitions.py yt/frontends/gdf/fields.py yt/frontends/gdf/io.py yt/frontends/gdf/misc.py yt/frontends/gdf/tests/__init__.py yt/frontends/gdf/tests/conftest.py yt/frontends/gdf/tests/test_outputs.py yt/frontends/gdf/tests/test_outputs_nose.py 
yt/frontends/gizmo/__init__.py yt/frontends/gizmo/api.py yt/frontends/gizmo/data_structures.py yt/frontends/gizmo/fields.py yt/frontends/gizmo/tests/__init__.py yt/frontends/gizmo/tests/test_outputs.py yt/frontends/halo_catalog/__init__.py yt/frontends/halo_catalog/api.py yt/frontends/halo_catalog/data_structures.py yt/frontends/halo_catalog/fields.py yt/frontends/halo_catalog/io.py yt/frontends/halo_catalog/tests/__init__.py yt/frontends/halo_catalog/tests/test_outputs.py yt/frontends/http_stream/__init__.py yt/frontends/http_stream/api.py yt/frontends/http_stream/data_structures.py yt/frontends/http_stream/io.py yt/frontends/moab/__init__.py yt/frontends/moab/api.py yt/frontends/moab/data_structures.py yt/frontends/moab/definitions.py yt/frontends/moab/fields.py yt/frontends/moab/io.py yt/frontends/moab/misc.py yt/frontends/moab/tests/__init__.py yt/frontends/moab/tests/test_c5.py yt/frontends/nc4_cm1/__init__.py yt/frontends/nc4_cm1/api.py yt/frontends/nc4_cm1/data_structures.py yt/frontends/nc4_cm1/fields.py yt/frontends/nc4_cm1/io.py yt/frontends/nc4_cm1/tests/__init__.py yt/frontends/nc4_cm1/tests/test_outputs.py yt/frontends/open_pmd/__init__.py yt/frontends/open_pmd/api.py yt/frontends/open_pmd/data_structures.py yt/frontends/open_pmd/definitions.py yt/frontends/open_pmd/fields.py yt/frontends/open_pmd/io.py yt/frontends/open_pmd/misc.py yt/frontends/open_pmd/tests/__init__.py yt/frontends/open_pmd/tests/test_outputs.py yt/frontends/owls/__init__.py yt/frontends/owls/api.py yt/frontends/owls/data_structures.py yt/frontends/owls/fields.py yt/frontends/owls/io.py yt/frontends/owls/owls_ion_tables.py yt/frontends/owls/simulation_handling.py yt/frontends/owls/tests/__init__.py yt/frontends/owls/tests/test_outputs.py yt/frontends/owls_subfind/__init__.py yt/frontends/owls_subfind/api.py yt/frontends/owls_subfind/data_structures.py yt/frontends/owls_subfind/fields.py yt/frontends/owls_subfind/io.py yt/frontends/owls_subfind/tests/__init__.py yt/frontends/owls_subfind/tests/test_outputs.py yt/frontends/parthenon/__init__.py yt/frontends/parthenon/api.py yt/frontends/parthenon/data_structures.py yt/frontends/parthenon/definitions.py yt/frontends/parthenon/fields.py yt/frontends/parthenon/io.py yt/frontends/parthenon/misc.py yt/frontends/parthenon/tests/__init__.py yt/frontends/parthenon/tests/test_outputs.py yt/frontends/ramses/__init__.py yt/frontends/ramses/api.py yt/frontends/ramses/data_structures.py yt/frontends/ramses/definitions.py yt/frontends/ramses/field_handlers.py yt/frontends/ramses/fields.py yt/frontends/ramses/hilbert.py yt/frontends/ramses/io.py yt/frontends/ramses/io_utils.pyx yt/frontends/ramses/particle_handlers.py yt/frontends/ramses/tests/__init__.py yt/frontends/ramses/tests/test_file_sanitizer.py yt/frontends/ramses/tests/test_hilbert.py yt/frontends/ramses/tests/test_outputs.py yt/frontends/ramses/tests/test_outputs_pytest.py yt/frontends/rockstar/__init__.py yt/frontends/rockstar/api.py yt/frontends/rockstar/data_structures.py yt/frontends/rockstar/definitions.py yt/frontends/rockstar/fields.py yt/frontends/rockstar/io.py yt/frontends/rockstar/tests/__init__.py yt/frontends/rockstar/tests/test_outputs.py yt/frontends/sdf/__init__.py yt/frontends/sdf/api.py yt/frontends/sdf/data_structures.py yt/frontends/sdf/definitions.py yt/frontends/sdf/fields.py yt/frontends/sdf/io.py yt/frontends/sdf/misc.py yt/frontends/sph/__init__.py yt/frontends/sph/api.py yt/frontends/sph/data_structures.py yt/frontends/sph/fields.py yt/frontends/sph/io.py yt/frontends/stream/__init__.py 
yt/frontends/stream/api.py yt/frontends/stream/data_structures.py yt/frontends/stream/definitions.py yt/frontends/stream/fields.py yt/frontends/stream/io.py yt/frontends/stream/misc.py yt/frontends/stream/sample_data/__init__.py yt/frontends/stream/sample_data/hexahedral_mesh.py yt/frontends/stream/sample_data/tetrahedral_mesh.py yt/frontends/stream/tests/__init__.py yt/frontends/stream/tests/test_callable_grids.py yt/frontends/stream/tests/test_outputs.py yt/frontends/stream/tests/test_stream_amrgrids.py yt/frontends/stream/tests/test_stream_hexahedral.py yt/frontends/stream/tests/test_stream_octree.py yt/frontends/stream/tests/test_stream_particles.py yt/frontends/stream/tests/test_stream_species.py yt/frontends/stream/tests/test_stream_stretched.py yt/frontends/stream/tests/test_stream_unstructured.py yt/frontends/stream/tests/test_update_data.py yt/frontends/swift/__init__.py yt/frontends/swift/api.py yt/frontends/swift/data_structures.py yt/frontends/swift/fields.py yt/frontends/swift/io.py yt/frontends/swift/tests/__init__.py yt/frontends/swift/tests/test_outputs.py yt/frontends/tipsy/__init__.py yt/frontends/tipsy/api.py yt/frontends/tipsy/data_structures.py yt/frontends/tipsy/definitions.py yt/frontends/tipsy/fields.py yt/frontends/tipsy/io.py yt/frontends/tipsy/tests/__init__.py yt/frontends/tipsy/tests/test_outputs.py yt/frontends/ytdata/__init__.py yt/frontends/ytdata/api.py yt/frontends/ytdata/data_structures.py yt/frontends/ytdata/fields.py yt/frontends/ytdata/io.py yt/frontends/ytdata/utilities.py yt/frontends/ytdata/tests/__init__.py yt/frontends/ytdata/tests/test_data_reload.py yt/frontends/ytdata/tests/test_old_outputs.py yt/frontends/ytdata/tests/test_outputs.py yt/frontends/ytdata/tests/test_unit.py yt/geometry/__init__.py yt/geometry/api.py yt/geometry/fake_octree.pyx yt/geometry/geometry_enum.py yt/geometry/geometry_handler.py yt/geometry/grid_container.pxd yt/geometry/grid_container.pyx yt/geometry/grid_geometry_handler.py yt/geometry/grid_visitors.pxd yt/geometry/grid_visitors.pyx yt/geometry/oct_container.pxd yt/geometry/oct_container.pyx yt/geometry/oct_geometry_handler.py yt/geometry/oct_visitors.pxd yt/geometry/oct_visitors.pyx yt/geometry/particle_deposit.pxd yt/geometry/particle_deposit.pyx yt/geometry/particle_geometry_handler.py yt/geometry/particle_oct_container.pyx yt/geometry/particle_smooth.pxd yt/geometry/particle_smooth.pyx yt/geometry/selection_routines.pxd yt/geometry/selection_routines.pyx yt/geometry/unstructured_mesh_handler.py yt/geometry/vectorized_ops.h yt/geometry/_selection_routines/always_selector.pxi yt/geometry/_selection_routines/boolean_selectors.pxi yt/geometry/_selection_routines/compose_selector.pxi yt/geometry/_selection_routines/cut_region_selector.pxi yt/geometry/_selection_routines/cutting_plane_selector.pxi yt/geometry/_selection_routines/data_collection_selector.pxi yt/geometry/_selection_routines/disk_selector.pxi yt/geometry/_selection_routines/ellipsoid_selector.pxi yt/geometry/_selection_routines/grid_selector.pxi yt/geometry/_selection_routines/halo_particles_selector.pxi yt/geometry/_selection_routines/indexed_octree_subset_selector.pxi yt/geometry/_selection_routines/octree_subset_selector.pxi yt/geometry/_selection_routines/ortho_ray_selector.pxi yt/geometry/_selection_routines/point_selector.pxi yt/geometry/_selection_routines/ray_selector.pxi yt/geometry/_selection_routines/region_selector.pxi yt/geometry/_selection_routines/selector_object.pxi yt/geometry/_selection_routines/slice_selector.pxi 
yt/geometry/_selection_routines/sphere_selector.pxi yt/geometry/coordinates/__init__.py yt/geometry/coordinates/api.py yt/geometry/coordinates/cartesian_coordinates.py yt/geometry/coordinates/coordinate_handler.py yt/geometry/coordinates/cylindrical_coordinates.py yt/geometry/coordinates/geographic_coordinates.py yt/geometry/coordinates/polar_coordinates.py yt/geometry/coordinates/spec_cube_coordinates.py yt/geometry/coordinates/spherical_coordinates.py yt/geometry/coordinates/tests/__init__.py yt/geometry/coordinates/tests/test_axial_pixelization.py yt/geometry/coordinates/tests/test_cartesian_coordinates.py yt/geometry/coordinates/tests/test_cylindrical_coordinates.py yt/geometry/coordinates/tests/test_geographic_coordinates.py yt/geometry/coordinates/tests/test_polar_coordinates.py yt/geometry/coordinates/tests/test_sanitize_center.py yt/geometry/coordinates/tests/test_sph_pixelization.py yt/geometry/coordinates/tests/test_sph_pixelization_pytestonly.py yt/geometry/coordinates/tests/test_spherical_coordinates.py yt/geometry/tests/__init__.py yt/geometry/tests/fake_octree.py yt/geometry/tests/test_ewah_write_load.py yt/geometry/tests/test_geometries.py yt/geometry/tests/test_grid_container.py yt/geometry/tests/test_grid_index.py yt/geometry/tests/test_particle_deposit.py yt/geometry/tests/test_particle_octree.py yt/geometry/tests/test_pickleable_selections.py yt/sample_data/__init__.py yt/sample_data/api.py yt/tests/__init__.py yt/tests/test_external_frontends.py yt/tests/test_funcs.py yt/tests/test_load_archive.py yt/tests/test_load_errors.py yt/tests/test_load_sample.py yt/tests/test_testing.py yt/tests/test_version.py yt/units/__init__.py yt/units/_numpy_wrapper_functions.py yt/units/dimensions.py yt/units/equivalencies.py yt/units/physical_constants.py yt/units/unit_lookup_table.py yt/units/unit_object.py yt/units/unit_registry.py yt/units/unit_symbols.py yt/units/unit_systems.py yt/units/yt_array.py yt/units/tests/__init__.py yt/units/tests/test_magnetic_code_units.py yt/utilities/__init__.py yt/utilities/api.py yt/utilities/chemical_formulas.py yt/utilities/command_line.py yt/utilities/configuration_tree.py yt/utilities/configure.py yt/utilities/cosmology.py yt/utilities/cython_fortran_utils.pxd yt/utilities/cython_fortran_utils.pyx yt/utilities/decompose.py yt/utilities/definitions.py yt/utilities/exceptions.py yt/utilities/file_handler.py yt/utilities/flagging_methods.py yt/utilities/fortran_utils.py yt/utilities/hierarchy_inspection.py yt/utilities/initial_conditions.py yt/utilities/io_handler.py yt/utilities/linear_interpolators.py yt/utilities/lodgeit.py yt/utilities/logger.py yt/utilities/math_utils.py yt/utilities/mesh_code_generation.py yt/utilities/mesh_types.yaml yt/utilities/metadata.py yt/utilities/minimal_representation.py yt/utilities/nodal_data_utils.py yt/utilities/object_registries.py yt/utilities/on_demand_imports.py yt/utilities/operator_registry.py yt/utilities/orientation.py yt/utilities/parameter_file_storage.py yt/utilities/particle_generator.py yt/utilities/performance_counters.py yt/utilities/periodic_table.py yt/utilities/physical_constants.py yt/utilities/physical_ratios.py yt/utilities/png_writer.py yt/utilities/rpdb.py yt/utilities/sdf.py yt/utilities/tree_container.py yt/utilities/voropp.pyx yt/utilities/amr_kdtree/__init__.py yt/utilities/amr_kdtree/amr_kdtools.py yt/utilities/amr_kdtree/amr_kdtree.py yt/utilities/amr_kdtree/api.py yt/utilities/answer_testing/__init__.py yt/utilities/answer_testing/answer_tests.py yt/utilities/answer_testing/api.py 
yt/utilities/answer_testing/framework.py yt/utilities/answer_testing/level_sets_tests.py yt/utilities/answer_testing/testing_utilities.py yt/utilities/grid_data_format/__init__.py yt/utilities/grid_data_format/writer.py yt/utilities/grid_data_format/conversion/__init__.py yt/utilities/grid_data_format/conversion/conversion_abc.py yt/utilities/grid_data_format/conversion/conversion_athena.py yt/utilities/grid_data_format/docs/IRATE_notes.txt yt/utilities/grid_data_format/docs/gdf_specification.txt yt/utilities/grid_data_format/scripts/convert_distributed_athena.py yt/utilities/grid_data_format/scripts/convert_single_athena.py yt/utilities/grid_data_format/tests/__init__.py yt/utilities/grid_data_format/tests/test_writer.py yt/utilities/lib/__init__.py yt/utilities/lib/_octree_raytracing.hpp yt/utilities/lib/_octree_raytracing.pxd yt/utilities/lib/_octree_raytracing.pyx yt/utilities/lib/allocation_container.pxd yt/utilities/lib/allocation_container.pyx yt/utilities/lib/alt_ray_tracers.pyx yt/utilities/lib/amr_kdtools.pxd yt/utilities/lib/amr_kdtools.pyx yt/utilities/lib/api.py yt/utilities/lib/autogenerated_element_samplers.pxd yt/utilities/lib/autogenerated_element_samplers.pyx yt/utilities/lib/basic_octree.pyx yt/utilities/lib/bitarray.pxd yt/utilities/lib/bitarray.pyx yt/utilities/lib/bounded_priority_queue.pxd yt/utilities/lib/bounded_priority_queue.pyx yt/utilities/lib/bounding_volume_hierarchy.pxd yt/utilities/lib/bounding_volume_hierarchy.pyx yt/utilities/lib/contour_finding.pxd yt/utilities/lib/contour_finding.pyx yt/utilities/lib/cosmology_time.pyx yt/utilities/lib/cyoctree.pyx yt/utilities/lib/depth_first_octree.pyx yt/utilities/lib/distance_queue.pxd yt/utilities/lib/distance_queue.pyx yt/utilities/lib/element_mappings.pxd yt/utilities/lib/element_mappings.pyx yt/utilities/lib/endian_swap.h yt/utilities/lib/field_interpolation_tables.pxd yt/utilities/lib/fixed_interpolator.cpp yt/utilities/lib/fixed_interpolator.hpp yt/utilities/lib/fixed_interpolator.pxd yt/utilities/lib/fnv_hash.pxd yt/utilities/lib/fnv_hash.pyx yt/utilities/lib/fortran_reader.pyx yt/utilities/lib/fp_utils.pxd yt/utilities/lib/geometry_utils.pxd yt/utilities/lib/geometry_utils.pyx yt/utilities/lib/grid_traversal.pxd yt/utilities/lib/grid_traversal.pyx yt/utilities/lib/healpix_interface.pxd yt/utilities/lib/image_samplers.pxd yt/utilities/lib/image_samplers.pyx yt/utilities/lib/image_utilities.pyx yt/utilities/lib/interpolators.pyx yt/utilities/lib/lenses.pxd yt/utilities/lib/lenses.pyx yt/utilities/lib/line_integral_convolution.pyx yt/utilities/lib/marching_cubes.h yt/utilities/lib/marching_cubes.pyx yt/utilities/lib/mesh_triangulation.h yt/utilities/lib/mesh_triangulation.pyx yt/utilities/lib/mesh_utilities.pyx yt/utilities/lib/misc_utilities.pyx yt/utilities/lib/octree_raytracing.py yt/utilities/lib/origami.pyx yt/utilities/lib/origami_tags.c yt/utilities/lib/origami_tags.h yt/utilities/lib/particle_kdtree_tools.pxd yt/utilities/lib/particle_kdtree_tools.pyx yt/utilities/lib/particle_mesh_operations.pyx yt/utilities/lib/partitioned_grid.pxd yt/utilities/lib/partitioned_grid.pyx yt/utilities/lib/pixelization_constants.cpp yt/utilities/lib/pixelization_constants.hpp yt/utilities/lib/pixelization_routines.pyx yt/utilities/lib/platform_dep.h yt/utilities/lib/platform_dep_math.hpp yt/utilities/lib/points_in_volume.pyx yt/utilities/lib/primitives.pxd yt/utilities/lib/primitives.pyx yt/utilities/lib/quad_tree.pyx yt/utilities/lib/ragged_arrays.pyx yt/utilities/lib/tsearch.c yt/utilities/lib/tsearch.h 
yt/utilities/lib/vec3_ops.pxd yt/utilities/lib/volume_container.pxd yt/utilities/lib/write_array.pyx yt/utilities/lib/cykdtree/__init__.py yt/utilities/lib/cykdtree/c_kdtree.cpp yt/utilities/lib/cykdtree/c_kdtree.hpp yt/utilities/lib/cykdtree/c_utils.cpp yt/utilities/lib/cykdtree/c_utils.hpp yt/utilities/lib/cykdtree/kdtree.pxd yt/utilities/lib/cykdtree/kdtree.pyx yt/utilities/lib/cykdtree/plot.py yt/utilities/lib/cykdtree/utils.pxd yt/utilities/lib/cykdtree/utils.pyx yt/utilities/lib/cykdtree/tests/__init__.py yt/utilities/lib/cykdtree/tests/scaling.py yt/utilities/lib/cykdtree/tests/test_kdtree.py yt/utilities/lib/cykdtree/tests/test_plot.py yt/utilities/lib/cykdtree/tests/test_utils.py yt/utilities/lib/cykdtree/windows/stdint.h yt/utilities/lib/embree_mesh/__init__.py yt/utilities/lib/embree_mesh/mesh_construction.pxd yt/utilities/lib/embree_mesh/mesh_construction.pyx yt/utilities/lib/embree_mesh/mesh_intersection.pxd yt/utilities/lib/embree_mesh/mesh_intersection.pyx yt/utilities/lib/embree_mesh/mesh_samplers.pxd yt/utilities/lib/embree_mesh/mesh_samplers.pyx yt/utilities/lib/embree_mesh/mesh_traversal.pxd yt/utilities/lib/embree_mesh/mesh_traversal.pyx yt/utilities/lib/tests/__init__.py yt/utilities/lib/tests/test_allocation_container.py yt/utilities/lib/tests/test_alt_ray_tracers.py yt/utilities/lib/tests/test_bitarray.py yt/utilities/lib/tests/test_bounding_volume_hierarchy.py yt/utilities/lib/tests/test_element_mappings.py yt/utilities/lib/tests/test_fill_region.py yt/utilities/lib/tests/test_geometry_utils.py yt/utilities/lib/tests/test_nn.py yt/utilities/lib/tests/test_ragged_arrays.py yt/utilities/lib/tests/test_sample.py yt/utilities/parallel_tools/__init__.py yt/utilities/parallel_tools/controller_system.py yt/utilities/parallel_tools/io_runner.py yt/utilities/parallel_tools/parallel_analysis_interface.py yt/utilities/parallel_tools/task_queue.py yt/utilities/tests/__init__.py yt/utilities/tests/cosmology_answers.yml yt/utilities/tests/test_amr_kdtree.py yt/utilities/tests/test_chemical_formulas.py yt/utilities/tests/test_config.py yt/utilities/tests/test_coordinate_conversions.py yt/utilities/tests/test_cosmology.py yt/utilities/tests/test_cython_fortran_utils.py yt/utilities/tests/test_decompose.py yt/utilities/tests/test_flagging_methods.py yt/utilities/tests/test_hierarchy_inspection.py yt/utilities/tests/test_interpolators.py yt/utilities/tests/test_minimal_representation.py yt/utilities/tests/test_on_demand_imports.py yt/utilities/tests/test_particle_generator.py yt/utilities/tests/test_periodic_table.py yt/utilities/tests/test_periodicity.py yt/utilities/tests/test_selectors.py yt/utilities/tests/test_set_log_level.py yt/visualization/__init__.py yt/visualization/_colormap_data.py yt/visualization/_commons.py yt/visualization/_handlers.py yt/visualization/api.py yt/visualization/base_plot_types.py yt/visualization/color_maps.py yt/visualization/eps_writer.py yt/visualization/fits_image.py yt/visualization/fixed_resolution.py yt/visualization/fixed_resolution_filters.py yt/visualization/geo_plot_utils.py yt/visualization/image_writer.py yt/visualization/line_plot.py yt/visualization/particle_plots.py yt/visualization/plot_container.py yt/visualization/plot_modifications.py yt/visualization/plot_window.py yt/visualization/profile_plotter.py yt/visualization/streamlines.py yt/visualization/mapserver/__init__.py yt/visualization/mapserver/pannable_map.py yt/visualization/mapserver/html/Leaflet.Coordinates-0.1.5.css 
yt/visualization/mapserver/html/Leaflet.Coordinates-0.1.5.src.js yt/visualization/mapserver/html/__init__.py yt/visualization/mapserver/html/map.js yt/visualization/mapserver/html/map_index.html yt/visualization/tests/__init__.py yt/visualization/tests/test_callbacks.py yt/visualization/tests/test_callbacks_geographic.py yt/visualization/tests/test_color_maps.py yt/visualization/tests/test_commons.py yt/visualization/tests/test_eps_writer.py yt/visualization/tests/test_export_frb.py yt/visualization/tests/test_filters.py yt/visualization/tests/test_fits_image.py yt/visualization/tests/test_geo_projections.py yt/visualization/tests/test_image_comp_2D_plots.py yt/visualization/tests/test_image_comp_geo.py yt/visualization/tests/test_image_writer.py yt/visualization/tests/test_invalid_origin.py yt/visualization/tests/test_line_annotation_unit.py yt/visualization/tests/test_line_plots.py yt/visualization/tests/test_mesh_slices.py yt/visualization/tests/test_normal_plot_api.py yt/visualization/tests/test_offaxisprojection.py yt/visualization/tests/test_offaxisprojection_pytestonly.py yt/visualization/tests/test_particle_plot.py yt/visualization/tests/test_plot_modifications.py yt/visualization/tests/test_plotwindow.py yt/visualization/tests/test_profile_plots.py yt/visualization/tests/test_raw_field_slices.py yt/visualization/tests/test_save.py yt/visualization/tests/test_set_zlim.py yt/visualization/tests/test_splat.py yt/visualization/volume_rendering/UBVRI.py yt/visualization/volume_rendering/__init__.py yt/visualization/volume_rendering/_cuda_caster.cu yt/visualization/volume_rendering/api.py yt/visualization/volume_rendering/blenders.py yt/visualization/volume_rendering/camera.py yt/visualization/volume_rendering/camera_path.py yt/visualization/volume_rendering/create_spline.py yt/visualization/volume_rendering/image_handling.py yt/visualization/volume_rendering/lens.py yt/visualization/volume_rendering/off_axis_projection.py yt/visualization/volume_rendering/old_camera.py yt/visualization/volume_rendering/render_source.py yt/visualization/volume_rendering/scene.py yt/visualization/volume_rendering/transfer_function_helper.py yt/visualization/volume_rendering/transfer_functions.py yt/visualization/volume_rendering/utils.py yt/visualization/volume_rendering/volume_rendering.py yt/visualization/volume_rendering/zbuffer_array.py yt/visualization/volume_rendering/tests/__init__.py yt/visualization/volume_rendering/tests/test_camera_attributes.py yt/visualization/volume_rendering/tests/test_composite.py yt/visualization/volume_rendering/tests/test_lenses.py yt/visualization/volume_rendering/tests/test_mesh_render.py yt/visualization/volume_rendering/tests/test_off_axis_SPH.py yt/visualization/volume_rendering/tests/test_points.py yt/visualization/volume_rendering/tests/test_save_render.py yt/visualization/volume_rendering/tests/test_scene.py yt/visualization/volume_rendering/tests/test_sigma_clip.py yt/visualization/volume_rendering/tests/test_simple_vr.py yt/visualization/volume_rendering/tests/test_varia.py yt/visualization/volume_rendering/tests/test_vr_cameras.py yt/visualization/volume_rendering/tests/test_vr_orientation.py yt/visualization/volume_rendering/tests/test_zbuff.py

yt-4.4.0/yt.egg-info/dependency_links.txt
yt-4.4.0/yt.egg-info/entry_points.txt
[console_scripts]
yt = yt.utilities.command_line:run_main

[nose.plugins.0.10]
answer-testing = yt.utilities.answer_testing.framework:AnswerTesting

yt-4.4.0/yt.egg-info/not-zip-safe

yt-4.4.0/yt.egg-info/requires.txt
cmyt>=1.1.2
ewah-bool-utils>=1.2.0
matplotlib>=3.5
more-itertools>=8.4
numpy<3,>=1.21.3
packaging>=20.9
pillow>=8.3.2
tomli-w>=0.4.0
tqdm>=3.4.0
unyt>=2.9.2

[:platform_machine == "arm64" and platform_system == "Darwin"]
numpy!=2.0.1

[:python_version < "3.11"]
tomli>=1.2.3

[:python_version < "3.12"]
typing-extensions>=4.4.0

[Fortran]
f90nml>=1.1

[HDF5]

[HDF5:platform_system == "Windows"]
h5py!=3.12.0,>=3.1.0

[adaptahop]

[ahf]

[amrex]

[amrvac]
yt[Fortran]

[arepo]
yt[HDF5]

[art]

[artio]

[athena]

[athena-pp]

[boxlib]

[cf-radial]
xarray>=0.16.1
arm-pyart>=1.19.2

[chimera]
yt[HDF5]

[cholla]
yt[HDF5]

[chombo]
yt[HDF5]

[eagle]
yt[HDF5]

[enzo]
yt[HDF5]
libconf>=1.0.1

[enzo-e]
yt[HDF5]
libconf>=1.0.1

[exodus-ii]
yt[netCDF4]

[fits]
astropy>=4.0.1
regions>=0.7

[flash]
yt[HDF5]

[full]
cartopy>=0.22.0
firefly>=3.2.0
glueviz>=0.13.3
ipython>=7.16.2
ipywidgets>=8.0.0
miniballcpp>=0.2.1
mpi4py>=3.0.3
pandas>=1.1.2
pooch>=0.7.0
pyaml>=17.10.0
pykdtree>=1.3.1
pyx>=0.15
scipy>=1.5.0
yt[adaptahop]
yt[ahf]
yt[amrex]
yt[amrvac]
yt[art]
yt[arepo]
yt[artio]
yt[athena]
yt[athena_pp]
yt[boxlib]
yt[cf_radial]
yt[chimera]
yt[chombo]
yt[cholla]
yt[eagle]
yt[enzo_e]
yt[enzo]
yt[exodus_ii]
yt[fits]
yt[flash]
yt[gadget]
yt[gadget_fof]
yt[gamer]
yt[gdf]
yt[gizmo]
yt[halo_catalog]
yt[http_stream]
yt[idefix]
yt[moab]
yt[nc4_cm1]
yt[open_pmd]
yt[owls]
yt[owls_subfind]
yt[parthenon]
yt[ramses]
yt[rockstar]
yt[sdf]
yt[stream]
yt[swift]
yt[tipsy]
yt[ytdata]

[full:platform_system != "Windows" and platform_system != "Darwin"]
ratarmount~=0.8.1

[full:python_version >= "3.10"]
glue-core!=1.2.4

[gadget]
yt[HDF5]

[gadget-fof]
yt[HDF5]

[gamer]
yt[HDF5]

[gdf]
yt[HDF5]

[gizmo]
yt[HDF5]

[halo-catalog]
yt[HDF5]

[http-stream]
requests>=2.20.0

[idefix]
yt_idefix[HDF5]>=2.3.0

[mapserver]
bottle

[moab]
yt[HDF5]

[nc4-cm1]
yt[netCDF4]

[netCDF4]
netCDF4!=1.6.1,>=1.5.3

[open-pmd]
yt[HDF5]

[owls]
yt[HDF5]

[owls-subfind]
yt[HDF5]

[parthenon]
yt[HDF5]

[ramses]
yt[Fortran]
scipy

[rockstar]

[sdf]
requests>=2.20.0

[stream]

[swift]
yt[HDF5]

[test]
pyaml>=17.10.0
pytest>=6.1
pytest-mpl>=0.16.1
sympy!=1.10,!=1.9
imageio!=2.35.0

[tipsy]

[ytdata]
yt[HDF5]

yt-4.4.0/yt.egg-info/top_level.txt
yt

7ȯ@lSz۹&xynLy٫mt6/:@u !~&>UQ(KL!b \@\ϲ_>yV]*K)suɨTHSwx%_Y=^G 4|*(3\mX[B}Or2f'yյccJ0[ŚV%com꩖B;އ&OaMJ(pk#aEHHNoTIF==6_e.aT{RtbAn$m}qEڤ~GғBҶc}ryoj5݁e܋6*~wDy\7QCEB}lMI>{<#,fyvS w.nVbߔ= E(XϽR -R=Y:&/ ^ic$N]8{91ZE)FuVm$$y%!Zt cV ]~šrO6}Z\z&u|q-h6`bN*-}}IxCK]e>sE6~KuFJv?y@rg~h+^H| v9ULf ؇gP]-=U~W6 wXF-CxI$(Јw["5,fm.0Ưj}{0oťu[ ܿ+FV(0SכVѕnO؅: \A'Д5 &ҹ.w+L&d Cbap@h\dipYTmn8җ!%eGi & bOQ̫bҤ{Yi5<.YG?$rӍKO2͉"i>iq=wH΍.P/7I_HvX}DSA6ŏh?vП(Dl*k!Z~l_&gHeX rFM}ɚ~$A/\q 9P+^TNƘt]On+">x=*~Aٱ-$=ͮ-=mlmr)؆=_fo:8˲F4N 2ѱv0ibKNGJC@z&s@|J(¶xIN/6qAg:e^ ~5~o߭x)+S/|WmHGRx_֔8\ѕ`#ª# )TFTۅHi=lsEeP9xѕA9u%}shl WJ\O2&HV rc,i* >ZNBE:ɾdDil+W?n8pm6EUYdTGׇV;v{f]zvt ;7Ys'"͙z,1$Ӹ5|W|Гlw)E7f5C// Sc#c6]dwpԆ>.?<6vnfV,Is^ۅ~6":?2xIglwƑ`3wD3m{/gVJƝmO |Gz^]~];:6@{ц'[3Xd/۸I8D] yzѭ{iy|YT ߣ(1{OמUSǡ{rnJk $dCۣG=9}lhW{=s [p_l{ Jێ9ƬPpu'}_Գi|Nzs%az`c{{L he^.O9nh툂1iәDi vf9" .cPfr,*^(?R˓pUKCRΑ+} ɇe}ґ7|HLMwmPf56~SE _zVжBAɺ>O>}Dǣ YtHd xm3_i{<ŬqC봷,8֢'Z;Y-@ "x1HsvED ϒH F9M$$܉`Е#FOfxЩ9R~i N,%Ng#2aY۫oUm*qݙ\v |">[,a<-IΫ /7*Wi*?5ż3X[z Ef<`3>yφ'_ڶ^F|^] {"y904燖\ CYƀ+]Oz7r?4۠jNߊ[-^82^|J-K.[nb+ID J{=UPV2)^a3[N*V60S Jufi'Ev%o[E 4nv$URpbǦ-.}$$]N{o{|?bVA.JX=nHhԋDjߔ׆Ww#IO}cǬ0߆k#.=R D>jQM'El~C{.e 2Kކ<[=ꏶwf,({xfَE5( U)u}_ @}q+]o[ lmM`UW=W5E(#I{=*"gi-e` Ju.?݉ |?jkK3ⴉņQ"ͤrzp%K<% 6.ye L~?Z;mmMxF<}qA{K: 9׽LJh!Cqk# A\K{;b9]eN}z4xŐk).FV%+Sd|v"mwh,8UNE@_y ;NU _)c'p%+3ft=Dž/_zCۓ@s<_{$rґSQQ'G6^k/(f5o2eEXATSetr+apӖu!_j( aSM7|&P|W_ͽ|oQ_ 'I 3u2=]̦%Axh & K.'0; xW}Z}qoܕ._I.9)5R&Lc}*Rul0E2}>4;Cl$$~XT>m~b2(v]1a37 X0dig4=u9"Pަ #dyX5Pc>҂xsv|ENt'ex]>A*z7s sCuaǎ1ɥmE j&(m,c'q3ߐM$ّOwv#j\Ͽ3z-&Z;L_B 0-jca'RhM7R`6%d轻tZ q;``*L]"/:IOŞqE JId=Eᶊ'^#صW?GH9ጜD?w'zۻ rğ\];@>: Ľw[ ~90MW3 %@/Sd8)&Tp#YYdc 2 ԡ;ڼJ@#@8GRT3FN$2.c%s WK#< 3 ( /Yu/gqob_F9u'@RaɄ?nnÞ6[klAH|RGY*[|olFcA!q2m4\`,'>K#"{֝b+6H ~lxMs.2ΓUweZ-$*K,:Ϭ}[s* &7Oy;M:`jik2ɘ1DHi-]bZw7I:29@jWX9\]ߕ31?g.ܾ MXj[:[(hP{U9nz՞^)56,yhIi/"-WWlH r:] Z]%IO"G"IUlL Dz{} \t/wM"TelԳc޷RI3,#p`nNʎRmTKԝFRҧjYvl_о2d"|SvSygT wz/S'~Py$ADkpypw+1$i@-Gm(cXfFr.=̮q$ſIGujll|JeC*9V $j91z2mb(ig (ل21.u>cz{1ik+^/JҎK)~^|>L?Ck IfݨI[]Pr!^\#m5tr4zk$q_pL0C?P֐9~чNfs.!z`^ߢ,`,n92F?ߟs._t#zk.I*WKNLʠZ~_wТ(/s%Wu|$Q~f6AÆ%ŧrDf[/:o }?OVکo8xt*K/eC"3M;jN콃.+$|3~i hCrGx ?a$K ^n ԥ֒ޚ'㑌p#x Tߩ{lDݯ)aЬmQd]|b tJ!<( \.Kx4?u)_]Pl@ͩ?BqnRSbsfKKM*[-i">DX/Qsw7yxܟD ._D$fm)LfjZPNZiR;3YT9$DhRъG]o8t@7쵌xa=,٣H? Т& :Rep]厊jtmƁBy ߅;/oSho]"4`ѵ {kuȪٯv9UMȏTCdzBBw#F/ 3iT9v! smv:%U_cJ~M|g7(CVё2_31lfseklwG{u~`Jj8=E_On0x֝?!lx]] o Ǥ MC맥VOc7% 3sr;/6:yf?)qJ{ =p__P~']@y/3NG] ^~Fw I݆7! 
/T/&a?: KStnB|&  p̂;}l{dS'PgKF[)yUmTۣŸ`،LPD,RT[(6d,`FGy?9#Ct#&j\?UGsmZ*@P$V>`OJFaC#e=|b˨ee Fecinc\F@<$6{G# z !I^'d<|>(C$^Ճ{RUVRbGخ%ۑҨzOժvfxA:pYj\9@}VIK@S(k,MWS/Ru3-]܎0Wew 3@5=ۭvۃC]n` YɜW~ܜH@øH)ǘ xs{<;>)q*#UI6&]p%_=3{%-[g}hf&y;,9c|K_ir975 }7B I`便SqO6Φ~Fv|9IHTڈO< =I$.uljsa`X[TZu \A?ʘ4o<' ˛rK"фD$En{`^6Zo74AL[OԎ# Ce`6O}Hh`/Lz՚פ!J'>h*V}SnPwQ𷙬,>g݅_$|# h߃4)$M.tLpѢ{¤KUۃs* C+^_v]&)IMS1jwsY::9ghW^V\~'i#ν:{#c-%u'iAw-jkJGм #Rμ~t[: CzŰq 4g):HWV s-NStI ǒt䔂(ӏE@@`$cGDz"T,R)9)(L i#sp &¸fBGq8@Jb $8~I-?[ڮFDs-vmct|p:8 ⟥GHhxFd)q Z_{ aD~%?`v7y8$` )Twf%r' Ϩ+׆q7s0LJ]lժ _}jLV3-zF]m֞R.z/+܍VkA͢q>NŸH+yud'>~5\Hvy{&=߫eORcHٜj16ؔ\ AXX|%xLm!nS׋I\͝z pfy+3zj $.zؗ/ۤ qt!˧g-y>Mܼ'v&k >&=D6$k7?#͇t!Ej4Ol:X4mUS7l5R%1~CX5?q4=|:ŐQ;]np1X5^wu$tI/@M-jZWukTn'}REޢxl-)_-^ Zh "lXHWe~;X(my[Vu"&!f%/v9@1|ia2|K$ kHK~5DlH"ƣo͑A:JI&ͷ+ &v-NB#r7B+5tLst_o3)n^LǤnXeq$簞x Ly~#0wO- os?:$ g|1O -_RI6Llʳ$P#AC#S8Zz[;y7KA$W{dNGHM`q)҉Ͷ=::j- Hp-p nEXHDFQe%))PZlR|!U|R"ӂ643%r]gX})-_TN 6)RN}L<#V -lml?쩆ؑ!"3q 6\X ݚmx% X+<Nz ~*Iu.Kf}Mo߀޿VXt#}G>N/1 zaEJaTlQ}O|Sv6*f -\aG[;ҦUl!F 틊= S.wx(!?,(06V=|g>`Q[.=AM %:Wc% ʨFw_ʬ}|V6^hgK.4h`?m$2{>5ړӷ:{/`U'gk;͋b Hv1:G-BHhOOʢ]J2tiǒ,e6:q`J`g߻] 8]F7wgǏAIRmg>Iĥ[; 6Oʞ|_k-}L͏ ֱ:%d\7 @ܽR7W$ʯ& =>']U݃>Μ \PHٌ&K"1Stg}߇|vMp8Nm AcCLrHIV\zŧ6+B.Kl1v" /Q?[?= @ӵyhôtȞu:m iETZ}0ԩOpt@۽D1z~R$}K*F<1)uJata#FXZә,`O>iŸ]rY􍲐 _ zVҐ3X!IןZ}~;1౨,[ke-zGOeBoG/a8-!N:gi[UF>aA.5ѕzL $2>0==2he#9eeH|SaE?ާn,};> ߻8mqGz&I=B|=p2o0' fZ$)(Is4zf9k3ϱ;R]StK!7b \YD)cx#L&, 4{y̎o,qR ,Y|!4֮EBW._mvv,tv6)`J'l$O$y DεjGDq_visG)}z29쪤>1 MOHOR+v_q .{zPlߤc‰_|BoZ5'bOlpfJ0za 0~faAkJ9^V06A=8I!T~1}xf 4oos\vBWpeU;p<p/H24g}.& ?]+ʷ6r2Jo%W-tᕲ79N l[EH:z,b// B c liĬ#U([)}u^^k&vwpߚ_gBXw?gL?hz [}C7HoQ /[mm];,5ZT VUԇ־i{=T.umD]3gڢ6d~EJmE(!/ٮy[oFf'T~`ҝ4撩Իc+=Nr֎]a-, j$\O7!=s @AD~G8{?eۂr8'TRWp>}?q!O=Wdm/vf4~ۛoH6^ģ7dJ+N@eɥn¾2<c >q9U6iR_'|zQ`z38(qH[\j*3# &q mO v6e?Di~)JhqkA{q]z[[Ϫ^!фA}OaM!!eK 4~[cN20O=+j9&eF"y}ۏFZFֹK⌲D5gA_xcT5hï*vuwE H -ވv> =$ov`?;/d[muD0{>H6@|.JTJ#y6MԿiRl!]HWjcϧOsQ^t"n<L429V 䢽4c[DjT{  ɛ8E΁W. Dj7nzӷhaso[ʾ nџ}Ao-Ҙet<޽z_֭~qق ܹsM/KJR'Q-E%Y?(]4T[†Mυ.伐f .RKro෾䫙Au6\K"%8ku0lpTxْjB U\h&\%fp^z "I::d&@ΙI~ I&/J&>䑱sHqvRglIu,ef)L28ELe/>] j7GФI$!J~ ߍYrRނA͍&xSRަ ŐrYidCR?(% k9z$d8g'ݴM)N.,+t5K;r#"}t/K˾e ǛkQXXn)Jk"<0 ghbh_,Sfv^D|&,#yvYUk'ԍ4x@P璖b6 @̼B{ A<Ls=[pIЗ-cq߯uj#83mĞfoSnj$*Z|jsA9@|N[=҃ґ]RAyЈ$ݔ`M`\x?A5C}j/5؈:)H mf9Q5.WMbVuCrݖ-Pfʾy`oD>G"(ZATѦ%!{w <+?ͦ`=6tq!8?œ`رbէ|#FoE b"?eۺt:~q $!.~p4$a>[b{֮+_hOK 9[f{?J9.Ҕun k`e jq )B`\̶Ѹ痼v+=#j'XUS{6@՘:MT^PmQN'uj 'I{cR[w`FsQ;5BM,5)JOeRvU_oƎ3C~$**?ްH޻FIĥ/N}˓u%nv0|$Rs0c(Pu-Ll0ej*JG{GKɃ$l=mh(IU#[9gnK럷lňq!B3>휊i6f0zEw*esn6Gҥ y&;X$U#,vY:sS{aJyNЏIw`]f鑼')E<%%Ox7v 'R/5c: ^O;!>fį'ڃ7RQzy  XmQ38ж\^IkewusMV:zi䁙rY90FH /X5BE5HuwHu%Q*l 5?}鉴:>|$zVKoaCs90)!ż_+?8?74SSP`ɼ,6)9 3Eh!u黍#Ȱ( d4ЭYylY0M⹐jY$Q Dy@s&Mpc"1RHB(£BG~ ʣ0ODf{wf?O>?(v쓺[ kc_.dEE xbhj퍐16̧O(A#,\im3VSC=h{;զ8~ cđ.]Ec8nbidA'kb*np̫cr\?52ٰV> . m 6Z:}g$_Is+$ -&4:ZGv @"܏jtkD=4_ zN@:euVuH{=_V"C1:& w钭;*\~ ϫ 9L4}Jzvͪo=̜o@D @2l]rEJ ,Z 785 Vͫ>)fCvvA~+^NMyR0Ɖ@~}PU\m: M o,E4$xߪ[(~%i`߸.DUU'¡h7 Ò0e64EyP%veA eg$"_&cmP`;>+lٽp#|U9WqOC}{v/\vv {VIH?%lޑ0Zذk\W<_\I 'qa@j7#0){E2#zYfjg߃NL9f$L}:+ Tp:ШdH[h"+{d)7h3O]7=@Pߑ0'j3KxϑjyO$ Y~y*n:CRP>BAY;D9iLt ]w\cGLS3h1]Hו&؊~wh,Lڗ_~ ]uU[nX~ŵ;G}FVJXjzam۶usqq*iӸD"a;v 6i)S믿һ뮻vmg%%%E7ak8,TjH%ʰ]vvȡZ^T?Ґ!6b/vu+z~l!5oI5km{ݚL*믝Yl뭷^l-%ꫯڻKR״>ܚNZzoXVcSDwtXjf⯐ k@;07$\ǒSߵwCα&-9 hi.% g. K ^a&g?p&˷s .} § HL Wp T씕f#l{9?& drG -ki+^ثrR93l]ZVo.b!Mpy_4*V(lqv{M~bu'ǃN.ib>l\@T`jŻ bflJ1iU4YLj®,1o+I; %5fƘtı,o7VDm5xAqiǗ^n ? <ՙ$˟$|ox<Gɣ1vIv_n6.aij' ᓕz~ dBeN'6f#g,D ާPUH#״ul OZwKߚr4*"Y;4}BW̃gJU{)[75RFz ^IN8;񮅧OIo։Pu ɠ(}'Qa_Y\Z}(Mig.ɠ|b)XemIuKҊo-a*[]*T נJ'M0B8yyߙw2t["<*ߧ)DW FN9ɣ(q /?_3&_%݂?t.iItn&},]}wik ܑ7Gb}NW϶1:h?׵! 
=6`~1&IL+IIR2~]"$ǝGFlۃNfzMU7Ұ(8KvKOU"Tt{^@EqK{:AXY5Dꎺ!6GU5c2${vrMy1ǼDyox֪/"ھϤ(("`A1`|9 #9(D,0`s1gEEEPXED3wfU==3;,svup*]uu}w8o~oU/WU P&c%چNxDծ| kMmք}/œm8~\+,hgwF_YvM)6t1`>J>*Ea6vaEՍCH#ԍ.s'I=5TT^ / d ԎGAcu-X@JV"ɁQx)w:ˀݛD=U58椽:pKfxļRs.ϥ?8Ʌ_/Jeoq^Q5-Uv¦7f';i[p c2@Do`KW٩DZ7pā}l[ɃFi#OlO7P,ãmjD`jlbCy'uѢVVu-okio@04> fPTGA4QƌT*5lZ.Dw nQtzw"m I'lcNB~H<ì;aֺ*S!]M `@?> ]#MR7q EFw-kF}zd8-L 6nE-N'6/J O~RfݥQ5n"JSOю6תcs^Rݗ㠥)WIN χ1wdjcj >l nU֥6ҢdR7pۃE/_GeOʉrZu|}xI63//O v!\1<ĜN'K3Y7i !yX={3Hi%OK6ttMn6~ph=>Sp mֳy9Q}iEEi@O?EPcg '>0vW[m5 \,_Dz_|EoM6.R)!W (*p(7>څ:~x;c]ڳh{̦MʏC?%toZ :dѢEOwСC5X.xsꩧڭ\A;sDu8`BGܶ8vM4)g9 ߷Ֆ;}Y4X=e,RFY}nCO(m@_y ]_#ouyV]53@ܦ ي'j4Qh*8|pt5]LPvm>,r̃/:c:Q=+I6:N-AȗD850\ՔPBIV$;IN&#>DY7W(Ǖ)0ħeuջ)sQ; w,3,|4I7B7RMvB:;}t7H,0&Ih3$}3 jE6|WU'^}Ϧ]gPDƝrXt*Y:cgN_yХv[SlC&"ICm*3X"Yԉ;pQtA9ÕbZF;<'T`\=]'+)S-|ژ_`1"`$`@7p,ωO#c9 - r9sU_+&^K9h!>o qZƝ*Ix?籣N49 :f} ׼D~T5g2WyPGk,O9xRt?}1ԋ ,$-{ xISEo{TbHj^љm*-)dqW6B5dS.ϸjG+'6'yLMƖ-jࣺWp:oI|Nڻ|֎68v?w~ŏK%_6 )(Ej9ٮRYK$OCSִ]%It8U9l}6чTKƯp>8G]zB7i۰qXIE YL?=^S_2^ex.X)T}dI]}`O>N-NAIJR5֜AnloTxݟZMg6#QN|؎^sUf]@^UV*2m=6{/:L> LשxhK_C?t ٮH8 `H oM O3`_zqUNԧgֱ_uR&?PB Zub X{#0.{cs2ۜ^ӷAR[|:B^ouZ5::~s+{'ϯsxʴZIc9n_@IDATOOڇtϸv1(ґIUSx(?kt2*~#z^^URJŭYo2ʍT_$@X _ J)5gG'rN'vcTW?mN=$~,r 2l"xCcu7]NUԁDRs*.-,JH/٨.DgDpjvjѤB$ׁH^J^wM9N!%A];%?g- FZm+ϩvMy@rP8JYIRW~Ԟy3g]w]/+)XIve'+Wҿ,4oN^x6ܰjlݽNEo.Z_Yo H?MQDiӧn9'}K$;aV)~p An`}kOyg<uߓN.=\۷/+07t_?<T#tMu]6n8SFYM7o])]~6l0ĵ6{=of4S3fuy]!`jgd9ߑDL&IGHM >uRЬu&V 3+^.QP wG{kb!݆7bvht5Tۮ,Z27Q(p IoAT{H2o`Moݔ<ܖVې۷w_>Hk_0K RnOEDהN5]hSXXHǑ\k틔H@ Ui)c6}dqbe]9$bbיo$pKEs׻lߟlݮ9)^&۩͖$G8U)İS`]t,VVLBR{yO:@<4s~ޔܽzJx2eޞ4q1rb jE#Y?R}N-՟8_|g Wn[-%m݋pV wbO@L:B _Em_k 8~ /%u5[&KpePx7һ#VI,܅ߖ/! $-K'I%qkDIr .xMm@4;ԏ ;K~0q#MMY%5uQVVkx_)HZ^qMRy;N G_YWg^rǮ<|O15Dpkɏ(J ? a9/q=P+;}ޝ.p/ۥ}:f![IfUl.D"^RY |QS|}ۋ8LG".a~C_C`V} a>`P2Tс P`:6^bŠܙ>I ]'K-yMݫ-vJd#1?ޕz"6uDϑ(l͆ y6y/m/2TJc0Sqݚw$3 Z'BRmh_.f*7Z*x0Q ы(~Lj5?f*o$v$RЭ,mEfp{b6϶#p:ŸMٴ9{XRTzyD'Q5o3vlU^6W'>k}{WUpNkz:;K6Бw0>cn;BZѲ޶V:M^k׌l8/zU ?Xžbҿ,s_P=[tNiQ53_5X;QH?>eG $Y75Ͼe[~Կ9٭v^lsP^sUJ|m4^@FUnm7MO8:IuC2{&Qi^8S>inR[I1Gb^5z½6Gʷfv`ͷn$6ΧTc"ezoS=L}/pHfDF}~b%ԢMlp)U;%7Hp;Я J{}|\Gj:0WJ`Dj:#-wR6RE5Į{9E/8զ&Rǁ:OϼIS%H07#eHY6غ=hB;& ozUCng͚~dMR"]&K@̵Z˚7J(T?#oIN '{5_%%,uٗȵ݈;RU:,KMASN0IQٴ<1b*z;R?O!B ER Qޗi"IfK5Z$o΂oeHXyukmYYwN{ի|'Cj)"24Tߝd"$(nC$mW]W kңg]|-|?@ P{4Sw"aNʑ <ydO)ꖧ| wx< ᱑2ҽtݒ .IO/,~MW.;woc8 3JR^hv93/3ty`}u -~'eݖco1!O0k8߸z{ fП, smǐmND{$d)kh`s:=ZHL"m'_8{xԗKp>vUb tp֏L+7 \?f{OK=>ZJj0y>Oٕ[j!&Y0Kh§ucY,#t9ћm%-8y 9i{M>bX;?]I-_Q5Џ|c 6N6AV"HϞސ>@9<̯[<UG]v/a ys(^t\.nB+:_۰y\\tIcϣSw kl=Kvs 1IƊY{z 5@ter1٦ Y}R]|3t6HܗİؽkZW%Sl+[suF'/i+2\&eH%.CDr֢ywp:dF o((N=>T~ɯ8PwῴKb t|=%~6ړcU!#c 1;޳js]/`uC= <~nXM.2T4^B+˶tNK,Dj2^Bʴ}oP<1A{*Kjߴ⢯lTHvCXX?P=ZmSoš]sQCىgj%_mhWR E։oϗ GVo)?ʛ| ?IF#uNWA=WlJ.CW32#8; ԋrFH'~<9'zU]~~]LZ  #ٛjCry $5}mcVz!^b/t}:%ߎHbR0*90VM} 67k,vv90| x_J 5rC6"~f30MW('? 
d=Ҷ&N?/y]Ufoógu7J|r\GעলKrq2Z#iI5au3.euaNAr^Ah* wTbXC$ 'Ox9oP?9=إ-_mݴo)GKuy_8asA\ƲUxz/I*Z~z[s0.`c^Vw v19QʛҠ2Ij?dXͭx_C83؏{.$V,UW7 8hY)lHKեعW/u[dEGBRA~mh_$i)7 qgjɧ/JPFkFT6nwq)ތ˺-gځ |b%ɉ̟`}>VKSsR4a<`u(FRqQGesBI7G"[}խ·j'|Ҥ@RJUj~@#$WSKG=vNQFٴ>Az?ьkv>G mحn0I-tHQ=P+ R$Y;հ)z0q !u<-:ʉ 0yͽgENgGIqy.X(tZe@0ub.!}階׻,={ eT-DvFAz% |tH$w=n[&i42,2 ZU(buE/0gзv>]*xG^z@s_ qxV"sKyҤL9_ʡ6%ei:=tPo%?fAt3 @sjg/Zޖxgۼ ?Wh ߩl2|T*jv-w?®o[Ÿc /D=o&$pwV8} `k]>'IRM& k-sXM(:b y@ ւRս('A2A;;#|XՉ"1ھvr f#s>(ҳ;cw:1o5:'ֲOvO3ʧI(z3%\L%!+ *: SMl|wT䒹u[f uS|"{kΨ~ŹgfcC;ţK1K%a|Q{ta0|K%E]\7Cm+mwpA6.;K|ۡC <+9cB HY^־jwW+HuRx"iI|mݽԀ>uVVw>%/aEaYJ'seF)xҜG^RE QI]M)==Fu^?ދM۫NkPJ-Nwd H}HFv}"l"jSϐ^YWc涀lMmߺ]]в5j3$ze,pZ#!u.fFrYSiI'+OͅIDl݉vB-y˶Ǽ)S1wq3W /#>jivb9Rk:>XM99!ue$ lQ溑KׇT?w#sZ'Bp)FnOX~kj`q?0Ֆsnmbh gjA7jz$i,ZS"#Ƌ/?<|)T|w ".w9Ͼ\&yj.$Ϋ>9G6M#Gl@8\-F|HY~}2󌙤w<-8GؾͯSv+'`>Ri-Er'ʑ^/@pҙ^ 6L jΰ \9M3oK Qs LQ>wL}0gIQsY6qMm$~WA@֍7E' ї ruzS"o&3DZ]nv!8\sۺLAOzp]HY$[V>`+Gvh<\~Du]cƌiX}]|b,fVNzUV+K7]x&!uY@ԃ>x>ќ3wY TT7ZXr Ŀ~w*!O+̪"4fypM4VNae0Z%l\p8zT&]ͥo=GzԣT ]@o8.37̪<^+?,҆i>IQN:tjbH>kj sbQq?`ٸ9sٰ`?1<H[%d3=kg+,J^2N,;Na*ݢQ{C?QmI+RݫM)WL^Cwn= qْ_yalAt{ƾ`!}6=8srN6j>8^Ev_ArM₾lG_[V+V؃v;yz2XvcԝIJIr?B-̰뢻ou\&==x3);Ym%3 ]^LUdH],79e9-IHŶfI@ޒ8qp5n|ʺp->ʌ.6?P-|ڣw_bOvJj+:ߝ:v4"TXX Y%>qr06٣)>u[DACdw<4v]_V=@xwilO՞MkqJmM5UTڋo9^]:wb9pO~HݍCZ% k-RMK ]"tNOTJ?|9l~AzliWdx}Ψ%w#> &Yƞ݊׳Prjv" ŵYк iM+XAl#9p,Ce_EO6#?_EjmfS" 6?|@k3y:_bQһf}&eT - -("HBS'95)~_M+]*T-Ș&<6PLgȎS󙃹Tq;4x}6F_|!:=Ui)'YtTnhI}bIWeIXaM;9T$>G~rj=%t"FvAZا<3'MNӑ}N/ x 'qh3x~%@o Q8^Q3Y]B!Runa=.o^wMqnZ<[p./Qg!/t+CEDI¶4s}ߑh7ۻ8˝o|(qPUi ]$L$,RT[Hjcǎy?eٺH1cS _HZ6KגpJG]FnfA:qGi3gt?S4~8p=W$ e. ߿o9_򕿟6.ڇ|Ё$x.Nv%#_J޸_S>ąz3O%S֫@3&Q&\Iٟ`S)4XDRmٴn.ػBL, J "dzlɗMqJeށI.&$82a[7hk*G!Io$`loaa|ԧBRoejL"^ "Q٤w2!ʴ1%cth,JiSEuMg86s̪tc>Vv H-]v(T+O}:HZeRwU\1`G1DuvL$5nv}NgKHUmqsm0"?N|m[&? +>^;~v\6/žmr:"Y(ID}2fw;s,#x`7/Fbw܅:Zx9Z_wOv~`R ©X1͝=I_B} Vw088.Kp{'~K/Buk[;pzl3zsoXnʢNsGTvP!ߏݝ4鿤iT;X(; KQEo9W^@\A+DXl; .usWx{;~hOܯp!AWرFSm̓TL`9E;De"mx*u/Qz)&Clav`3vR{ےOچ/2fy,>uےH_۷8 puY}+ (fm``sW7&xxt{ܵXF!Ye[>\im&Aht5:'H|K]| 3}a?#=0f7ڗm;utwod%l>vU}9^mI/vt`tFwlz[n:GHLKFZ2Q;'j>X ?j~gH},v!f(<=ՒWl*H"Rmjޖ@9|w1Ulݚg?'Ůq_7۞㢽ѱg[Ķٍ[ݽ)+m_7?eQiNԊgڅ"Xc8;WEkrgw^mk\':T={;V2Qn+z%eByR۵y}pNj ja.зVs33|=KCt!3<;m"r۬ԥ_$?pcwKz;(7ZK7+һT1%pG?~RV@u4~<au>IV^FGrxuVqmܦyp@WѨPҞOGh:5㻱˭mLGupFWl@Z:33jIחX(r<)I\8"M)gh7/›$ / ϓL&mGԎ~Rp6֛Z<6×Vj0\VB#i^IU;<ei>pk ۬]Y6NsqڢHZgݖHnɧOHokWfܓ=,XÜh:KucyoO uZut+TB"O)H=KP qd=e>pD0TTd\dHxfiφ|Wžmfa$ v>OH9{;~˪?m3F_uS _'9}7g.L ,2/$ŬK#mHC$Q]0fS~C)_Տ1,brtjlg4D r2EJ/mKJʿMi-Z>C/KkJZ]&Z` SVVfS%"\G-[o/s/+OI(Cu˔)SX/Z߯ܳ ,NbOҭڵk.ݓܹsͿϷ >>P'"|5׸ t۪\/}~KCO^۶m})!Cd Q|N.,uhUQѫԯb9k#W`?=7A+lN V&v#Zh|]nT,/nqmcXFDq(M?z0!_JI.oн#6I0蘠O|/gdO00aWuўE ,*$C_?&%-/9^?,G,@0ϢyuᦤW~TNb/#?YE$B@rѱes$%IᓎRIfSɯF=RnM )ixi`6MeoÒ;ÔC-mS9ZչL R`)_fxqkzjUC=nS 1L_dA&zL Xxc:ܒtJM9AOGf}BkYxCvh lR(eQ]ñѓq\FP[L{L'yvm =T[,8Jȁ|~~X<=5ޕrH|؀<%o1&C8aiIbA+~NhDʪ~BR*OdRӋQaʮ!Qw'XoFGBWYV`k$HK*QUHNx8G7tF ת`3i铼^vxTpK<(' A; Vm_͉ike^1~g=I@.o{wXhOWHS% 'ǝm/+I/wu߲-O|`=fqVIԝBD&#I_D}F!u]uo)v|:݁. UpC-߯;׼"S mˊ+iʛl]F)+yE.:{`ڕtkTRFJ/U=%[mDrGmAǽ-O2UPod;Fη t6`n}ꄋzwlyq-yHv*; +Wx&g_M_3a}gZ }[u!y?F?=OHz,ZCK)؍o ~'0S{=垪R}v'vpؒm@qW|%=H8HAө̍W'wj.H%Yɳ @y,ԉK@E=~0 ojrT%Dfc-ɢ ~}iwL2 ~8uSszvU|ayN,v nݛZ}<9N:;%-˲rX:u I(͹y Zچ$`/]\TB [#kdc\S{r-_Ң>Ŧ]~\/v,},md؟iHu|]tcݭ@M@j^\H8t=I_YJ {K,q)t_:KO/yQœ|[klEyMv4y$#P2]>G:f`So5/jC|nb!,zI}=\)C .PӚ醶Ms'/>i 쀾wY4u᯷TڄOf!ZE߲]6T= CNe[}(RMU^jWl2/j]@wz'ބ4 nhcm#'aV9[mS,3(n'rc.Ez[~:kΨ鱔ƀ/F^(>  50}SQ qt$^Wa"=]Q{  ~s4uBE[nfWW >noR*[º7o?FڏcTISOr,PLkOR= 5ǯ(G?>4l ouXO{tX}g%qL-QqJ4&ڻOgѷ^eGϩՑ0M:DUԥ; སEb{Ǫ_,c JNJJֻh*v$oԏ59OTat;6ن?_=^»s:(4쑊ڃ6B5cy! 
p!}nuJyO (!r{n"9)^Fұ{ԑ>啼RwYXʇk_s?A4 TGT!Qrt;pZ_vlc3ecGu}Ďj#spQ ˘v^ Pߵ8i-n#~)BԒ/Ғz_.H 2ҸR.[dŪ6Xs=)C!`0YLԷ?8=ݼTKY^.@ƀїg,6%9Ϥ>l7tm7^*xN(B.ӏM 7Жp;?8n<  N^=6Pn}pGR~Gc1Iz}mjQVn4usI?W7 7/(j5?e-~ȣE& (sT6 Fu>^u~} ;Jt2- PIWSIemg6)iWWg8+{}Qqe.p'$I]ҭ+@NiKWnQ~}U%U_q߃q,]/,Gҭ}d_S PUjz٨w1#]t3<+/D,3"%W?;t@w]ަD;/R{ꩧ2?Hb[Ij׏, &LpEQl..>o+@tΨrNemwn7kY˖+R}Ib@KU+R܃=B151XT@|92s|WWH#ȿfGvWیN;hWӞ_RiJyI?XSlzHhoIzTF>@w+i,xJe* %I*5#9uLW$ 5V`[(Lgi+]p2eًI֌@˒~U]+4h ~41;Α,8`I 䓔r;3М+'G0o:}n i)a6uAaw pk]iЭ1hڋ~Cĝ555m,\S%!/Eδ (':&EAik-o8gݲ{;#O@jjF.:(A6α{Vz,w&Z: JF$Ka[N`n ~$jr瑎%=[?HhFbM..@0@aHCxX ^8Akhe;@u~EvuvE%=&״HMDˋ,hp)Ԭ_|9@IDATEsz՜K?:MZx/KoI"(ղ+4/ 3 .?31O>\6CR;M&]!5lH-.{K+XH\ 87%?ʸw+,_>.)E#qʒ㼦[EpMp:J݁3tIuk]{)h [_ڑ<_h=-r8_p``fPN^.]3ƅ$%wze@9 $6Q\wxBC8pj=⛩oO=v(r딗#xFۚuo.r% \EG'=عc=>צ“|u1:A~7A()]j$6Qח^ރ>l xRMmCi~ʫ'釷XK +}:|`lsJ5ta'gF`.s! vahsjgcc-ԏAmG[LI3(2zg+Wkk*mZ6L~eֵaa#_[zBQ_P6]yWzi߻;f O:9D hư)=|" xP{KcJlڞ|)fEv,TܥE{xb7uFǜ$ko-⸊)!|j*z71vI3Z.9~s(-w;MVmΧ\s^ְʑ2Ae#ALshܗ`G{Aw.@:w/V(0e r>"i+R a[r 6JG7&dpg6 |>&= zߏs>["z fƝ,0>ޡ#q=ߜyL Av4S-57*D>5k"<_5'iy{RGm[ d jaXl$ȟ5o9"P;*tњN-ȞN_t P'DqJJ?w=Y[woxGY R\WV"U9ZlȌ>U37S7CN[OS풞dt6.PW+~5~4ry״l;| ڔg]/nz lԀפKwUnVя|*Vo4^o6-|1x$_ lOw0YSЋ f%L|) ߯7.I,x*ϲz<ʔ&褗IKw|i펅Ayϼ5w)LDOc",\UKK"\ˋg*uk O7<v5- ;כTi\yJb" HҊD -}~PǪkUk"5yƒx2h. CUMTMd+ X|4N121|?2F3}Dq&[MD . c(7!މm>Pn6^c y|:&K_s8`oީ˜[9VyEu6`-/MX`1uѹ3>֞ :BM;RX`K"*]/^3 9|[HPv[)kԲe1#-=^cُܖv_A9A_,q?"6|x 0et\ު"; @$j> tlf򳴝xGt6B(W;3ӈqʻ kfMB҅Q^okox;,LOc\6ΔSj?_$zv?B,lލ5t]sCUR݃֝s{"i342WC70 Ȇ2oqRf|#rx=ѯHR_ns.$Q03fF״1IvrU;`$]kaTwIZ)L=P,L~+(;w:&=tg#{݅##[ȩ!lsQv@֗3(ةD^>Z7]ۦtFzWk"!5% ,8I;==i{l=s%ƙ/ڄ Y6bGfnBEĸԓ K\9 iDխvdJ\<ƭP~>k']dOT#%5!ljllx8lʷ}dQ.B@RQ#/'~OegN7v%=Z">,0rjI⩥6 ~&Et͈ESO}6t}oR^axޭ=7gևMRޓ293к!~Ҝ.J4IϨ7ar/˝鴁DvM1ѕYd{X!S?pԞ߆$$SDV~{]*x < b-|*|< oWƯ!i~W$0oVӿ[yYΎIi(6H6.MNeԉ95~' cUxQ)V92bzceדc ) HZSW_8oջ{RTRR0} y@߼ WCɶ=mggtMlз=Ouzyw7k@>`"MG0ͦGre;K з9vPU^} }e%&1q,˚\hpG*BP*QRlIv}%)8+ }6&GЋ3/td44mu&6"菕-xc:Qq> XoCڄI) keK&Z EUsO$(1h ؄EL@s ԷWjӤBMd痫nAjr' 6kLi3R HpCpz۹i=;2ڙoxu*($2U0g.\ҎYXV~a,mH<Y%֥\Ä z=2ciϿ?.$Z k> 2 +9 \F&< )wAjZcEѳ0uXP6(&<zLHǖ 7pLD h+gj/@@pLJGL|B@.\[zT#=C{/n;窟?߻p_ CkV5ob}@_+&[WHI.sv(Zۂl+N<:\@34w'"O쨂x"0wX~.e+սu@N@K9]C^?&ғR$[5=@ $=l-m= V}+<4g4{z=wI{TlոSD}x϶mO–1Zy¶"B"Ȼ=Vt ڿvz{U^ sU;;!WC4y]KBjt԰*/mCրJ'|g̙T/ I?*+7ud˨okbP?ao[+<٩2'W:G?2| <$ůǐ$'*QڱYa#$ҿ5ѺO}}xqb#(d-C' qJ%g`yEGw;suX-z&e=v(#̚Ы\H^#mJ ;%POi_ABJ^vlԄ?lS/oV.܈K Xע:o=[;/ӆ.Y/@n&FR%(~~Ҧl4y㕝sm=7|@%u+ ښx2|FڞUAad:YxAl3&ί')TW請|7z ξ#_;'6 Paw$nZ@9$ƴ\CDW1ۜA%TFY[0e!|; 7]uGq"CX ߟ(dpkZNhHHݢ#7AR{Om` A f$˄{8&? ս]$nT8{Q},w^P]`yp$$Eu@ڣ񧩾GO'2^ =G} F -E2e.K=Y;$P7R}eX,w厷̓c0uT?]_zd];Xk]cmGvh$7$*Rn_f;rlzr%!Xكǹn`?TҷkfTQQPTPb1g# Fs@E1cFD# U1"$ "lޙsf,K~{kOӡuA8:#I_}l^Nʓ@:8, ߃:E:;\zRE׋_|61NdijGɝ^H/p q@Ǿ݉ -ۈ+ۉRV]|k)Y .+о&֝]N:v)s~7:׮]T}1!Qp 9h V`>=ZF(ȳz.^ZWQrhܜa M[wz1T{ՓM$wbrߩs5\Ry'17bp9T6u}RN)o۸ƺ0.(ƾ.>Uo] mi 8.rPOۮ\<6TO\3)Ϛ{K{ s=2LXCS?oOj j7SݚWֳb=(*#߶~^u ki>fen[:S ~$_v?<:L-]b.Sݕ *J+&*6pQhivfP"{Z2wπ̋JyxF\Z%25"I쐂vo1k/vqɓ|2 Q/k۬ljIyb'*]i&NXké[a^kpBBފ*cBZN?&ySyaz&SX =c&!M6OIr8ʲˍЫo /y@->'zkw&_/I6rG5s eDg~6^pD/Z?ߴ1$B]@!H\< Bu|%nrz'LGV9?V)FJ)@tç`'` *F Jh+!k=Y} y>k)]6s FҚЅeٙމ:A@>_tqV. 
# T%Mt' |i3qZmxlB}ڡ1ްy^=DMe01]ZRȱ 0t\ SH{YgզzHix3EmPf?Ge`gVyCr¸ι0gޑQ[uBP1E֓1W{- (&-Ϟ5=%E5ݦlDkʭÃ#60Í *l@Z߈[t/$gG#SL4š!{M/\ Z|DnmFl Pp`Ҳ=KƜF BHrWm_Ge.jiŎñь]SotB%"@c'XuEа'pI]SahB 3Y4op0u{J[ \sw_m^Q=%ٕcEg3?mi_q5V`?j8?ɝP{`lT??1GJevSHCdR%QZMq<i o U͗LE>^n$Bӂ8h]sxC v8\.R.&4VtS~"f?K 1qS}ߗDrN.kz ut?`|4C`҈&DE&E;$47dWe {iarBSLad0_K i .ੋ4Ѝm d% 7aJA [ٰZ(o1jgqn.HH}C A.Ⱦ5F6i'ULc#r?.toj9qmaȥAz Bc|4USl\{F :<]u.I\.6BWpߨOX: ryM `}8%{&i߆u+0I^g&a|.`VaMNYcB7pgF_8^bQ 8y OGѱ;޵tI b֯ưs,͉])EplG>N .+zjC>mpb,6iԣY/0m .>2 Bm8QMZ^y(ȤIpP~y14)|9M 3!j-hv╶D8au Ѕ4Ju@NIX~mXq[|$4T BA tFyvqpF)[;DccvI!'\鄔?ϑv@u$z9r跎V>ۛE[cT +nܠw鲰z@d#n#YY (v9zTd<#Yv?7+G`?t8eCbϴqnW@~s%$`XG 6`!-mPႶMW"YĽ l FbGqM1LVij#06dd+¡k[<6qitB -NLafVXaWXDzGcE@H2~x G+j4gg_ZGvŴ1WcVj1kS]Sw)S \,܀6q5|{bx p2po$JuDR:zˣfzף9:i]*e42Z!Xۢsѝ훂Sl_ZI_[r.؛k0ϖwO͔h9T5m2!2žVo"Ⱦ W[+K>꿮v;+ɔi.)0aj@ζt?,xh,և 0!26%Yuu]HXyM!hk{^OAC^4O0c>}|)K،nf"6w1a2KM$6jgZj(d{!3Ж@ݎxrdOb_؂nŁY)ZWAG*W[7/؞*h-ػ#)glB&nNx8.[K~<ׁ1A=+.;q}sp9-&Nh؄2гk̫.:yT+?/\@|h fEٕ?*臏GA{'9ڵ=给 ץ_V*}"Ʊa\mX+הuَ9R`P 9/n4g>'+ڙz2]|WDZ_4ˑ̵_`I',^.*~к븡>J>1NhS'/=X Ng<Ӿ;T̙݁07>_'`= o@#zF6:V4O[*p6vf$sڠA𐢕g~ c Y5z9/I?r7 좮w]O&oR3-u|ϔxCi ZLK]ZPKY%]<*"qr w6V<Sr4lx)J+ =B d,ܕL'K#(q s1 &|?׍$ H,h-6G# , /Ҿr=!,n xIK3ʥ: (?֕|{(n [w#]ohuWGAZ=slB<ήW2zy^4bV;ӟ K邳,?u Zx9ؓ3 FGXV{VP^*_fKV l:ҴpjoS@?;BAœSy6۝Tu䒩#Tc=Ѽ)$h0""\YdFƘO}[G1b"ySܓB+)hHS7J=Ui c탒!^%{@[ %_#BG u_E,O4KxߎngוhgUh4wH<.Ogl}QWef6uR+})m[lζ/SpY I[FK/Ρ,w¢7ڗz,B~YB`FІ`sߑ]& gJ;Km h}usyY|vSX$ (g N=Du1Ӽ~<.}Ψʾ^@Yn$M9W塔w6]=χ7KS;7˒{gsߊ'޴5}IuCNw(yoGoDs~1FU]wh$0]G&G{iG,eI߳ls8FI|j#24;n݊Fopm~忰!}s*ЊMyw[)JOqPZٳ)g ПQٓK,_A{ EIօI6J;U 2 dX4ěۃمRI+7.yws\~93%SEB^*^ MCOY6~_>dmw.b'nгho}ߓędGֳu0q%*8F6Ug|;OSPdMxB4?1Z S9r 4 8 uN= Jhc1 :JHiG Ss +66rK .] %;ooGsnS:w'GKOytNJmwgޓЮ B 3{:q n{C^gk*hmItqvȝotm @m 4>>A梀!jQȯ1Le6ٿUNIۋ Q,j.w#sAوKsp0?L LutrFE +IZ]6\9v3a׳}n=М:h w:?#C O}.r:÷ރ,%rtl4$B,%|G{& ^3hB73b;۳|% `[0EƗt9aE/WH}qIi섾zMMUUB[nC\{3ҭ)igwTP-}X;S6%B6!^7VBw7H$~\w٭h h7 gDU .#Yk5sSEmŻ5={rb+v4o} kÂR1ιlڸ1"#ga5JSCsGZK Zt|m?8} IۥPN=|1ܳw_!҂'M ` ~ׁ{ne0T_ێw>RgFdFަ`E?Gk2K;Cb^;8/aY%?SmZٺC+ALuMae_ҟm26[ScW3 qYKe={_C{֞oUiVg'փ}5"t.%%5kZ-B6vGs|Vv_G[D6 n70bۑdݍ|-VVɯo8r}"6|)@<9|ųδm4ċĿ%e&ipҩvJhh2㘣SSWߚ^6.0!L6YE;px36`4尮d :Vn@.Fzng0ˏn6,z3pZPu٤tB__t2ᔭ6Cs&| HI'z> g _ꞱU><Ň:l.E#(.Bg /bdD63FD6(%yh[J<ǷoSW^rhMP~N`J ?06 .lE"lU&t߆>iUp;ܦM`mNbG{fl|.ytz1ި`y酾ʞ{sy8O%17 !W!| MO$Nܙ&A.z.pFsgCڞSKAD|n]26{CE7X\}'B3!(`.pb*bx\Fu?:H:5$nl J;؎Q"۝u(A~zo6F}wseON%N2J>;#Zzkj!S{0aXnSt\Mq%Vz1W/Y[nW {^w_DL8*K&U.N,0dYX\wi;TVpsIoq?S]Zy7039P/Ÿ`!?bL䘌p,ÿ`t, .8$i֮_Ӟ ” LZ[#gq0 Q:f$-eV@Bh#o?ʠK wP~I4s2MC}z˻%!׫1~;(hu$ w'nw]f5C6w$;+U\</8_ۻ!ʼnql\~xK3߶slZ,BVHb[ʦ-E%R8[eƤ.)m |=|)J"zL@AÒ іn@B/! ӱ57-+ȾbbW#辞!?eT$2XG Prm.\kG2ܝ@ebUiz^@G.?hjFȧDqBa:VlpڅӨo?,1$G,RQ>>I@긐@IDATĮϜ#ƲJL9q!!$v򣑑v tkO}Wȓ^AK+a_<1AѾp8mvDC|4H?+Ğ W1y>FYamdRvBЁNmň:;\bM%c%V@wTGL,Z.s۫jƸNrI6:rApIh~#f MRxXCҶkA 1oo { mpa'mCвI=mfd;KL;_D,pWG1%":nD̐8/bOѥ}G++ xӼ?tA pYa6뷷9s2o pVf_$U͖0ԝGǧ}.硻Q̀CТW/Ieݧ6KEK4.F=]GZA~b.*I $[ P|0|4e֤q}}¥TRA].9}iįuaݠW [ݬ=|v4Y]B#V)~hܮ< iO4 ~wIZFؾ~zQ~.NO(tMB`?ؾү VA4#L=6`5N ԩ<0*Rhgkc`#2 ct&u?'!۷ul"5[d7?2KX maƛgdWK?Iz [^m6E LquIѕ(q$m/>+/GIG֟eֻټUdShpiK7?]kB{-Z~6ܟD|W>?s'sNcwpTGl-+կG3C6q?/p\[0_#+x"g*C!RAzw\$N&K5A}>";觰M]u4W]>(-^fJ)s *`HL/Ƒje _+s:ƺ#W~M{NkÆ,xE3mHտf*nfA&_̞İy E%r MhK_]3 -Mknv!\)OEX@&+/esaূ s0Ӣw4mJ?̶U[{q&>Uoh ⵖ8ncv$h 6fMS}Y 6:Nx#1chО7k 44w "BIowNw\4|8" ֦]|c\N-16S.Ă d˚MEUH :>E G@epZ'%b$@P-#D")h %s}/7xS~hs]G76ڻ/wld X`[鱘'NkGX A4߅MiЛts BvYH8?g wgLo֠d/yJ;J %D >8TdMЈr$΃hk̿ fD4`Z2V69~lޥ/֗4&3?B XzpNPOVwuتٜ16_Zک7֋44T2ek/}W}=-*r5>j'_r}q[8pcV7WQ{S.? 
w}>wd3N^:{h#m:PniؤLH~I7O˫esHOpI*qz@:፥MKڸw5(z< b||nQN6+i]xV" >O}OWH2EzW3K'#CAiDټ\#%Э~; W \IsrLhp{#.2;AO2Bz#+x4Zق uOgr$qi:b\Zü ~wAᯄ nx;OUWF'qCws4nɻm)uŤI:M=-poú\S}٦^ߔ{=7ׁR .Jsg~;_M^gEinJu]zi_C2h+&0N6L׹k|I j+CY/s35G})h"6bh·Ň".wh&6|Xߒ@+~y4 ӵ8} (i[=c&M2 M` a)\2Xa“@c &(7X\a2NhK:Ђ_,.:4Q6FxJPmI dxlޖ Xpɺb%f$M;b0;ٌ_[&{vi"LsOnJ>63 )mK*w.,֌cY#%Ri7裥C\d~pn^ Ry}U,mP/+!2)&d9r1;|N -Vʦ.i[.ָDA;yݖ9PS£iYvyQh4ՠɉaڜ6IC3|A {A[horվE(] M6ktQjW«Q6>] K ",z8TB\莎h nUⷣ]H"9S&&% #u T$co.Ws߅%N͸?,"ywĆx?Vjm,+@kK}Vz ߞ/`׌ŃmZ_>y;D-nFǘءviͅ R'- nvWG9%p1zjk2>lQ@=GolZ|>`qB!wyV;z{)F!TimDl lQ8W l4:"< CFmœ"].''4JGq8;P=L.Kt$@.N<;Ďn'u͈"vu9^'M/{w-弞OXqEGa^/]OCX|`TH˹r g NE ͞?v@}|SVx5tXc ȿl8ؚ׭^mw'b6c[\>Ҋ?GI@Z׹ siFo$_X-pw0tM *YMhfD7ῴ{ :b9Wk']Qu }\kz6E|%#`98AXR8S|ñ( i iFj|ÿaZr7,oMU?4Z C:-} 0p& a!x+RaLԺTN 0tgV-561[(!TB !UFva0~/.ʒ2QTr-U"-*G⃌dRՍ1A {̓%B_|Xi]qnnh3 O.f#8jF`6[uGwn ~(bM=E;%4غI7kpy\qun#Hs/@S#X0ʶw=hѨWKN(JtCyںA߃( ul[&gpW+jj \'g`s~{Y^Cq;Fr+ Yo;,D`X(Ҝ'6km8 %U v[d9Gwo81,:>T7Cb_X$mP܂-!H:{;wodqr< BUƦ:;Q=)ڮj|AZ эCCq8(G_#ĚU{8W<\_>nW_ԽS!A_똓oPjVۛ,_ixgxtat[?UfSP. 6Flm⿤S.PDֹ2K;9y!nTO<$ h}I2 7fq%?Sݣ{Yy ›F0/P'>B뼟~/\3.QmUڬV5!RnR0N4O[sI9;+q0}K0ݹ D^d&@`!WT1!ZD 26rdSqo|fC:Hny@,39i!&B71k v@ثK2;8gW 7aAmD{-BbXdnH0!AxEhxN\wuoKT>iϋu\vr=$Jۚ>Bm"#F揄.ư(@諑{ 4.Ed.\ZZE6ZTN=aA4]O8} x>ȬcuG@}+2] 2-Ԓnqϴzt/,}e-&{Ihܱ ȯFߦ^$L~p,/K;6mtwBikݻ~P<l/L ,>vQOuLkh3ti#nm>s@tAt}|W;ҶFRv^J0 AMuTϏ|ϻw{zm!]#ٴuOLƭPx'§;k&;kcIp6W$Rh' ]l޺G[-|s;yr\1X9V~>{[n\N>c/;Mh@sCdR7%^ \%27`-'QPqZW1fÌ 61K.iOA/'KM;>fDڢ|mA{M( ýjl΢hW-ێ9D8pm,UzM` -`צ,"|7M8 l[ p^FXv.~//j{A9~xm*­s#jO nqW@c$.VGunG$ֈĶHYaSY%sۄ֦8/-Pyߪ? /#\c#=G?Pē %wC]> - Wܙ. Z|oT5O ~G Ь|C iMyI xHhyO!Ul1!~'N| ?e Wg 4uR07n,A0r5(j=e"l\"rq% F!y gO1`~`YŜ*ZMJ@Dsm.iN2Xe/{ҥN9#{IԴ+iV,!B˅e˽jْc˘?l=X^_ P"~(Tm{KM.eA|1\47BRG {yNKMӊbs[1M5]Yn Dc[:I#\KڗCpXDTx ٗՋqa}zLL¥#gk&.є_ث}AO8a082o c1g6CkDiK/ w OvWKnEdlDjaQ#3 t֛'R /ybg`!f>M5.Ps! ra\:3ɔ%Zc, Zg*RF&#MBqW[\Б|Z5XT0:o"ZI61oqtw2O܊~qcáfi!hJI>A" dۯqhϢ:!g,6,02mmsi 'i^\舷ؔ6=h>Jضfl;m6ny[UP^ײ>N JJtғ6:Pa؇&!gfIK~Ph 0 Ӕ<ޥk.<>'5z&aE[ݨQ$HcI:W!L_ӛ,vd(p8 Ό.e],|n&6=@tI=ya/?q຺{a 5,G#ko:"]}uxW` &+HvGͳwn%Ώ€ڷ@ -<,2φa#="~`HSՒS,,;%jP8*;ڲ痜DFm之Y{7mqyiH6!kU#h 9mԋk6\%k8[kFaw[=~-i9'P/@?6Iu#]|%L3g~1vz7v[yyfƯίڎCKPsʵcsSxiw2yGyF\OP.xXnKpAG(9-,\ |su-` o3ҵ)é0p.ag?2u_u }4%L~ضьl 曁91̇O L)v"焁z<;uQQ\wG5i ۥ`;Z+ٽ aOAl"e.</BZ7“~Wt*t3r 0+#8 z)[ U.FB_œMBrD ݥ}o8r<Z( ƾ㾄'i۰1(MN::.l?$^҇atuO6Q1܋8hS>z !j=o&ŲEH9[ІeC/&4i$=TQ`fY2ibG,91!lØs>rZ~B7i5Sv0᷐X|YA? Bm!0BYq6ZZ} `_cG<1VZ.HpHv [q[3_8`x1e1'FalnƢiq̎>-H9c. dov6$q+Ҋv`>K70IAuj~Z Кg;6Dcy7>Ү.>k0`m먗~*Y#/FMmv 4;Y6bgOUl2[*޴ A݆u9AKI)m[܅go<GO`p|\! hSd^iF``jt\_۠Ye:DHAH+,ނŤ@O]&QI\ 0N+;}=G-x A|{>i^D:iɦr9~Dq n9y#N"l}xZ34'v?|(եgy!N")(^/B"vb/ (ƒ3]Z_&jϖ P4bhBѓooۯ;є^Ԃo M~Y~c}VM]ii>ū8oE]c+t3?-|bA-[Kƺ&*$㥕[ Ok,iG' f嫓v`Wg[_):\q.[=i7H_}+/̏6j'huDBUGLdyM_cPvoWC{8EyK7 ABfgQkl6qN6 B[҂OWgs~+#<~>)V\KGM; [B XsPq$Y]KD;;lغ7qn Ws߂(8)n΢5J3*ִOmk>&_Sķ1} z ?ƞ4(%O<9{^|ٱmfh}kN9P:EPy>o)N;mx1K\ݍ_?陟%IO)5[5ě66 /ZXo.m;hAnG nݎ OQ@'P·%n}6o ok񅦹`E"b,?G+ޓ1_]Tlbۿbgͅ!-'@ Wg%˪E7j3؁6=]BЇ5Ibفݢbwb`v7&vcBQ BP$$n;޹3 9X{^{m Yvt G <ߠm#ne$,݅flYl=葬WY !ӽv~AN0.e+&ג H7?!a/پw_^'d*hK~;Lۖ7r%բMM\AXܢF`:w7 }Q)wt[8& +72m kס. A3;Ȗm"˹ʅ_)4}4OҌCƒjOi!v0_mM6*eVuf|WLtmncK/#x@)o'ڋSK)u~"#ژjᑸU>U5ϫpp D[SF#$#dl[$FmdZ+̱_?zV7,4t!8zi ,ݍ6y|YHw}zSaVw%GPҖ/{~3BTtebӯE}^A݊,q ߜ/.‡hV㰾jǥy2kua57aEF2pa#2~Cت/Qz7 A-@n}m݊7loy{SS=l[-tH;J.O!<`əi5w8ryE/4:WcvD(h? 4^&c -R@'CQk&vL ;Ys/yYžBPo^&*HpZ_7G[u(=@IDAT@٣h3 W;D6=OMfSZ04.f|~)s|ϖD?M|b÷//f츼'}hƻX<ڠ_{i,9Ue2fZ\s IG04t'j9Eꊺ6︲&@ Pӹ ij8)X `> 1. 
k"ҍ 2-Ce_ R &1?@#5: ^a% P5X)ZB]l}9| ƅybfGpyϻI`OOPT{6Dw=`}K6\rf5@=thh^'= &MFoNbnBpzgs8& KC L4iFtE d`vkpO.1'ŏ.`6Η(9P&L+yעɃ.f!|9"[4:* C_&ʝ;UvThQ,9{N2w6/>A~by$ '*BkzXo+ٲd:t } c>fF.&ϧXIh(&ۣ7'~RY=12 kk$q&L^Llf#gΖ\#WbS>3 YhwzyYaYAI>Mb7&F9Ҧt /GМZU&h8#"4҃ ߁V~ ۂ\EOG /܍vW7d zI:Zz"6Z^9PI `;p6(0xaixB8ebiځ<䅖?߳up"90=9Oz.D|o#<#N^vsЏ+lad;FDPj^jpKCf"=cC=~/)KiF|C"gb%h;iQJ=l+nAp81 Tl. ol(QVMEcq,[ ->v)^M^j捾Qiʎ2{Րޞ\#ϡ nϜO|;;w&a(xRWNGwW&\j{g_kCڭ8JHC:f/&*J>ã2vxzPlxsyɓFv]Ho֜@ -I}z'#ܾݹ'$KWd ܨ|N*~I.ؑώ~ٶĹU&}&i_[P7g3 `N(S3tn# :)hZGlŁm9*g.ps !Pd Z0>6}$r&#WR$i- nсf[?_gP| #Z o j dgfOS.v/ػshI 6 []]D6U悴Ou}]j+|iw͝ڢ4Uypۿ'%N;wRC@8;4AaZ (ʼn2V]CgK`PqC w >-{얥8^Aws\Y q BPlZ tC8;9Sa=].#Yd1[n"x V=$nj%s$mvcXAT]HG?tj)8P0,7T&Uھ2{}}Re:PP|\SBoikq 8b"<'i]yr0%yQ _Ċ_nľ(L5Nma🡝tA@OY4!:SM`}G4oP4Hɟ&I}'4 Og>aw;}m\ۤ9Mt@&K_t[cu+uv4'_D7vQ7BƶC0NYZEƪ4tG9!=]/kY5"}ɤuj{BC]B?VG>.r|&9 ٚCz˷3y Iq$u5Ql;t}I5 -.ay(q(-GT.BG'+!'A6kf(}$u?ȿŋܸPOJ(e_nW:Zk3BҾ p'|$bRi͆¬CMc\ >} Mr;L4銞AѼLat}-_XA /Qp  0+xk J9>@sd]VB(LutP6K'|v ;KiQݖw4̍ C's鱶Uav|qB\ (vV~iЈꍝJ@mN0Y,!ܩ$ ~X\.~dz~ԎvTcgPfc+4 d4u?RNhxB W_N}<SH('_:eoOh3ҤΫl11..J9 ʞoU%p)@P~w mHEPgO.%D+OG!\ˑt'6Xc+'u-ҩCKI6IzZ(P'~6{?RmUi2+ }Pe4ƻV)Ёl+Ē+qO@WÏ: C\?mQMl0sƺS9Mqpd' };P_)u> G.ňP/o6'>]zڍl \8L!tk9hSftљS{{LHleXܜSQ5 6Ό({6=sQۡ-{`M M<[ w2}8߭v(24K3zе ։a}lcϭlNk9jx,:W)/F y!_dhvqw> xguӏh N G6jƁ]&=]꠭!;r{~=a!02[4{Wa郩[A"ǿ |g?zu{5f6~z%uOUF9ޞ.kk{. %Y_)$k^nO}uɿ3'`x^]{wmbs){z$x^àI.Q'`k/QܖD,-DNJP&K/̞b͛4I[j\Z bK@[lA Mab%7n"*uJ|`g iK#^,"7 ΏP-#tϦۆF#X @+ :p"hpLCP63.0h]>;-mJĵr/?oueAE<g 3==C!A Jy%Ttۆ, jUPbn_&X-CfkF%!Z>LBPP n36潫>O\~>}qLƪabK߫n) Mp~z~l$[ë[% Z/Ybo/LDL:@sE _dN 71a55S Iq7;߾~g(wvUFm>01~; (Njƻaٷ.fs?qw^Em^,g~K [bxEy P[xh&ZAytbM 3ɝן !pƤׇǏoGE½E y CBq?2,mjvBewҕW߀fޟB{ڱHmAN#PW&׆OƽĎɑ:~\nwFV ;k#hqA'r_;lD@:$Ramc ؋G!oLX4P9mjKut]BԘgmDYhNXdу ぷH~.N( GD, 躊H_mtd%R'<*}vݏX|6` moaKɃ.?ZYJ#DɾLv!$֞C; ۜoJʳB=Q"L`n#~htc!>Ԃ]bdؑBuW[@NzZp`^ ~5=;e[v+ .?mv7ӎ7?[e#xLeH.d.ew-y _B8&3Bp.aXruKifKkV9[J@m9Gw,ڏ[H]Pނ4j8,n!:&kvlgK曪۠?hR(NNeԿq‚SBAk̅owB)yI~ , LIk0\$&-H}LJ8JH&Lح)B(Hs2ϚVW@/tzV>Uӗ'a[ւZɓa%APғxI/%VfwNs9voI`I\{3Gǖ"ؔ@/8v>?>&:(|,-9|ܑ/s)h ~si@BC_ s~~ HKQ; i131sǶ;\B|WM[kdGDj ve47gڑNPѕ#ԍz4KSGrGdU;'t:\gfݡuW څp0@j1pG`Ro~@N7ߕJةzN_lz yo"DQ9X6RI%mZ-Ng"o⌇ޖ$V֪8c%}%=+~${j}R{Lcg]| wP?6C՜l=w04#R6;k7} ia [J/O*uO<>{N%y 3jw0ϲE^i{ $hұ EMS@$APކ:.ҵO"B:6&Ak$QL_ Ť;$O¿Т4}@t~ b`zPEH?H8It$~s)3;GGWX\(ǪVGwXm\0RC߷a6/yJ@wBAA@X7csNȋ[4>;"1PEZ8į$9_QB5nSsmB97Dߒ76xg@.K tx'. /trx0gcE/4_J?{AW]mx%<򌅭xwf:vB XL4R- FOj7#OĴ siAcf7їRׁ@fB{1u^2|K?K4fW \GC%Q Nn<鬂y5I8Ƒ5[%b]`Gy,.h?C \!# 4ff1 cW3x?j6/P5,Xu&eKMB-fϰ{m1/{I4j:l;FH=2`bze{4WhjeJTg`B}I>LJ[Ymjwv{}ĥ\ۦ`sh6>:gۃK*Pi a3 ڄ'2n1n֒Կ+s~ϲ٩ZSoJYcҚ־2CwxLT=:VvIL_ZW4ſkcզ.vw^6mٞk*kmǯ~RE tZ4T$. dbayMʊ4"`cgDX?VN!/|(?#RKNC'X^ԝ(8&s;yE̘Ҵ=EB~l]R2ʒ]!$oi_1X*C2% pVm{Q6u%\Xh5NL4xWh灄SWR=׆$vν-*ypigʄϕ0Wgԫ a-M6=&iI% q;ҹL_ѰI.: 7ٮ'7_mY :JËi{d@m_ vI2asXo jB6 ШEv y{'1Ͽqúv!|=ff-?]6ڵ05:0kNA2gZn\P  lgA84]P0av)$u?g3j]atgt;$@N x]{Ře?zvaR{NxJ-^9Ύî6Љu>jC'|W-/"V{E잲X\\z: \L-6.Xͻi 6_d/1*tBAp(δwgM9G1#X. 
khy{s/ZW@@FH升s29mۘ-v-,4/6 4NO.j`Ac iF~,cs7luS[;>dߦG_v*"ͯAojOSۡ^J:=Bg<ɦ3жִ!4 K Xxʉ;my@Nt9yN^lGpĺ͝Uv?MojQ1k:p-ૡanL:F M}P`Ǚ3UK5m MP.F!n3 ![RNJR ^KvmC=>dӖנi?Dь7E!#A~-[QM?vgd (}%UέBi8x_8مq_mis04[h)9 <7 mn@Ok lO>sC5v̀I{2*R,:G|݈0@7a\w7ʼnE܋81b\ .mٍ| ':CGS-;lؾ/֎b py_qպA<ǖԃuېZu%}z~93T&3#z$$)5OAB[8}7m &nO}9;: `F;)bѸ3E0% iͳQ2!fklN+:oy*4.-aRs PWgNtx$PW5SW_=:gWG/c W(ke۴_!^[F%*4WK9/lbA1ˢW=:(o>{~#L_TgcCj2Xdbw}Dp+Mm:Nk6O `cAu[Z_ۀ֝].Wijhםؤa7!o }5Ai!A|!TKRB_T{g=6#:zӆ I+Wu (@i}ǽ 7O輋HfMi)lj%  K9>)"tѷn1Ӂh{HDOζ7LNܛ Am)-ٓ&XϮ .1~ }CYa@-$bЫ0A9_׵`42æ2TG=`]BO=Hpx1m&:+qMVKp!ܺ_9o]bYӘؼ{B_#a6V?d2M"o{a0 @XH.-1' ̉WPJ-T!+mnaoL:D'gԏu+}nS=Ywc`3\"lY U>35 mE k{Ǘ8IdgN j}.~h>^-Qw Dۻm, 4Au&=^KIu6 ?Ҍ򞘴`e]LGWB SM}j+;b>t2M {uD6$~/ųy<2~SNI;: ~:D_veT{;RUSNWFSJ߳^vLH:P\Bl Hsһv}Wr$6VwY$ Lζo2Q MI7c|Cy8(%aĦ"l)gc,p;-EX\y 2M>''q{ /"ke[Kx_$ͫ`hjLBtD*7J.!Fñy5iWm/F[JJI6vS>q'cgإ5Q>OkS3o;샒 R{7v!?bcu+%ẼJn?ʟ:#[ϻf]af3{] jʮ2С$eH?h'cWvu1 q69Zp41  <޷_gywѬ<;Z({¹>mob68ǼR:'vg.$}ݜϼZvCjӏHk7B'LmThvv@\JA=ow.ErrS- ΍ BqIvYίlv}r ޹joyػ#N]n]0xfPI">AQa=˺ثF4GW$,meL9v+<WRҘi{!kVѪZDDeu"aXzE,~ %T^i* .@ƴP;xq-9Lq@h+Xa'@S"?۪zKZ\h^ΖK۟h ^s͈ە+cR.N]pE ,lܹ[l ޗyi>ȸPcjiP')e2i2y!Vh4 L/[87RXkjk^2YF jM4Ũ/ {ϒC@sYfEԽgzdu]bgCxvn|%C1^pa& 7av48gΰh7ǣi +9 < Sax! yK HZ p{ 2:^HL&M `p3aC';jAwQkL %^&%N9`zc6F3y!RɋOQFA~tl`d/' $tIʳ֗rvͳc8홠̩yI< >_ZWe[h&r:0^>Yt8ipuqUzI]Ș2>u)VC0?Iϯ݃G:L\Gs&\*Vr1sEy.Dz~!փѤ=X~uf[ 1U$ܮ6tkiSyǮe,M*`:C6d VóƮ`a {/3pYvT?2ypok❚-kg2E~>kӳ+qè&*(?jg웂fiE6!=VU[OCY4\^9I5ŠqaK>}Bp7, DEm'?๫Ўؚ8~%W/R«[gFNDaw  i]|J wHY;W,p &p-Y4OuY(p( %te_bv'hOi=u4a*(3n~<k OP9r@ݗ>E'[/mX}`U{#[_l0ڝt);H/JO+/\Vs;nK@E~@-2<=aԜ!cƶ'vyͅdvz]yz!mse73錖flFf} c ^vKv6 Yt).3҆'RSAxD5u^~ RG~yhYe ڴ<[5;Nb~m_c k'sdWx+l~UG0G`%_F֡-Enrn>L GO&c!lF6=-tXʷ;G!Or-mfc`3ӕCГ s%kFq,A3 KShٕ~|^!//aǧ@ NۀXaAu!K~8惡fyn AG{I/?" Ns?PB?cz}-Zcz (b:㝾A;h xIGgSŜq&e{.k]lCAi(2KtG~1)Act#J1nvߎ^ qaZӍ@ן ^ChG w1ڹZLiw H`XdkM27,4m"Nؤ`8d2%.4ԋ-FK <4E?~C`uJBՂ6gU9A>,MSKTJ~v8wOGߓ:KWdtYnq&h7&(&4O5:D6]k`Jcm]U"㷰 vs.7/ e|IMd_W:TE_dZ6-19Zy3ܽ> s7DG:[r gx"@f%P_ i"3tQBߠ<7x/o CP+aR;)w| A`&aFj EmB|&mVۆԥ)-΍&F=H`6|D>C%HH }eKZlmo%ZLZI@ $&]y}Xe!ڎkroAd7wߎ|0L t0g00?eZPSFuoiv胴V?R`!@'4KA GB=0Vn!Sy1/ PeBp Yɋܤ4sppҶ<ONDC O8miBpJ"vhߕGmWP^鴬URg59ڒχ܈h Rp:f2+0I?HӡfOĄnPL. 
_?1)d1 -a+~ڟ1v`Ci_o #|#s<6a#hǁG6#XZN?Θİ;]0&|*.nFx\v3uqc'\dS{n~?͌,/?GR^?..ξp3cRm(& u΄eF-w?3֞z@:j=KS\Ӟ$}OBs˾On}H{/!G+ߚ ccI^D$8#u1H?~{ ?!.',2X2ڣ ,ڭ>lK =T\i=T@0f ԁWqqzj4lZ $0T# @=D$a^ ]3y##FR 6VGymstiPow0acazvB- ]AW_+?18WS"M/<7 :4.n/JEB_ [`p!e&mmi^A1Lspιz1ng[rc;ǹ ^A{XB%Lsp5!U: #\ ?׊>;9y}ctEAzZkZة&2i8Pr?K0C<Ƙ .,{w|; h kWɵuLx?sLROa dTY; t *zAŭZθy.68+IL/g4YFw}$q!S䷵lJ>䨛gÝԕBHy(5ڡ\4Ec"[d-иإ]ԡE:MMlJFH+;oL&̩] B<}a։zՇ޻4CZ1l>kw$'/v k8y)4{j=o߁ G Lr$6$5ni.pD/E6a9#g'< OO;r>zҼPOwX¬H?)'\gڹGrKhAT2a B\ZwT-{6 zJA:5@|o(Rs&;t'NOP,­燣_Tek;>C'Fo)?ԧٝz~"a'ζlF: 4U~M)Т0•;a$)mz((ƹÈW mL^E,XœA뿶l݌kԾG|J=c4FlՓw!.4 {wwWrؘqHuvEku CMwڧ'S-l g+6?;Yeb nP}mR4k+P&- E>n-G˼_7}vtsh>!m(O[?']DKEn{ʼC} _v<}^ m̌bQ[4"_iM_ [!q *Cr*Iisr q8A7w؆Q?zADSaƆ§;s=hy>Y;X #{1<9։ADK|]b.p^uꗟᄺc/mSК* }Y =g'5 o2ppI&=Z N͋tL!(/lДlBsQ3N `|{Bc }GY6:/ԎEK" @i|rfOz~lOgcTƯK0KŸ*VI& XY]ډ%w\?SCgVw˙3WTÄXTDH_هz5I7A5r$4yXQ<:usli.L,]KI]a%~oEps]ZzcyD "_}gn- !otKwB_PZhB4ITDX,B; (pYG rJÔ|d# W Őim_ղ{ t [m4ƃB<ФYv?W֖E /&{04} @sAx0 эN7e5](gP=y( 4o徼;b_ ~(>fZLlHP4c(⟄D\WF(w =,'cax(9AM<BO>{zpVla1)8yQ a5fک |( m&,` šI훅d_$&~[3(Ci %j)NBޒLitC eCNutiDN;ƣ:SJ#~S͏N`hX+S'-xF{;6WSNgFqcfALDOIVYV}m7u, aaɏwB5K 'sEzmU萶{ɾ3}}BX,9h8>qTl_Ƙ40ZL2y1ԇ*ƿ)˶C!y-mc0xHom#Q+[Wgdx%9i0ٙW튆ѬZVeQ}ZYv&[XDPI| ~<@6v'@{vfae;C(%y?L;ۙD{K+^M嵊 ø!DXkwnI9l9 ݵ8ؔ;?_3hYlʲMCڍ j(#M6KiJU9^?6oM9z'w] _{+L#HK7ޟz=hH]x춺Yܜ7z<߉וlHPvO:JQeF_!y4|.K_OdB&4]lYw\ЕӯPx:pM#M|$x&a!}Bx}3{UWov'!k37qpݟ=B+Gy|k"ab&VWhjP?uU#!+~@tU7X,s;fl쨠>#R;i@ͲV_q1htv{ֺ el|O%vFZ{'[mB^u}%-ĥ4Q[^POH: ^ji:E/`>E{LSEf{!xW نqq"D9=B0c&:}طmo'mp臖tٽ{}m{M;~lux7MdM&m'=@J;L-`?ƺDwHlʆwүuX/ +38݋zMߴmɸg<-gE݁co |s:ROx@{V㱗{-u}kiOkS,^V =) ^Q/~l{,V5egvG)"`yD؉1;÷ưk@?Ƌc8zѷQ6%T!/- ΢v~GfoBaE霄#(rtoh^мs}P#<}_;&JM@i_/˞=֫X-pߐ-Zi~pM*H )XI&Aiᠪ$Fi,]!d4@\qQu؞tW+T *в -pW plX|r8b.: [V_#hڶop\Ҧ>"):M} |Hu&e#k滾; Wew:Rg"1[vy@V:tn? ngH_۳f;=ƵVH?gY Y}hX4\%NczV]8@BK9-~r=y8(8ȥ ^";KA8^!ڥ=zյe}D 9w_Kb~`ށ-vm« s h!ft(n+^vF^Xp$Hu&>!x)sbuv"x2v`6UTw{^O%+&{_ ;8@f^O-v;cs8Yn;h>mwRّh݊|Hx8^+se@(⧠XbUc~i|٦v4wMhե.oH?dCnQie7z ݀}c!huA!y=/bְS5*mZt6sr ȆP&AzS!sPG@Og>FN/Z=f0s {V'?_]p3v!D ޸C iGDé?7y>%vL8`CK8V{t@C%g~*#i_㧡Cl"Mչg㶉3"AajjXG &֏hrPe1aȖ`Fb T?*k:L|SOi=/t%LAM ҁoC_;8xăǦ6+hw'W ' H +Њv ̠VE/E@W[ -Vv(ߔfaIG6ʤ#|u.odZ u&,X8|T5% 50?\Ϥ4ow$jv<0SY?D2mm_ eI QڀuCip@Yݾu'[-նEmO-8l<<s, AIЫ@ YINna"gR/1ƵЦ7γ8 @j /Qi`MZ@ΊFPa%'p=Y02s`0xR],.@0<|}Fb(;DA1pGf6\>6f@%@D )| Q}#66CRxM(b #.[+8qX-s[D1<~f7o:!O?_rތZPL ]#oER" "ԍ=g>W1{ZΞ7t㙶ad-[_ϰqc2eE6瓵 T˜&`e,BuMS_"͜{!7AhMW_љ~ i|F,:xvħyqHZ%}) 9wWHR;C2P5Ridvzr4}wʅ{T-~j5k^"Nr4mW}JbjH2{h un1S~݅s tnz6"DjôIm)6 Nޙ}/}o}wOՂQ=8™O Fڈ#9*:TUz%Oઞ` +0?FrFrխHn%J_ - 7rMy/ R+F'O-pZ&3/䷟ׅgur1׭N%|Ӫj נaaiƤ}^b<' -D!AeSGU~eȡntWUϗ"_ʢ#+?{ivK|~ԩv gҐ(;}R)`َwwWC=m.v a57H ½Js!liK> ,* j{=# PqA#/ ~FrP,1AӪH́a%*/ !]c )CQ3^͗~[M x&ݗ>MbE!1NX+Fj$s g9DCzޣ( |S̠Z6 `2Ӝ `Ş5 >[bn Vm : mᝠIg.(4qAIPkU0߿9OX'"*;E<]"j){^;@ L{i+ 6O1\KEgj$opHjU/]4=tUWF`P<6(P$Eo3?=?ٜgR ҊLJ:&u}eIj~â2;2LLn8bbd:4[q(ȥD}7&qVeO`Ý?݈ɺ)g_'^[c ҙ-. %I pHSop?)/N&~QUKNlS R,Lhw2 ̒6& ^ͮЧB zO /v5Fuc;ށj= zv,}ܴynh% g8s|Wi?IT#ޑ8 ~g&ѬjZ0߂2L2'QyËd_&a`G|.}A0=8[k/r-2 PN={Om(~_g|U.p[r8> L89u@P~{2"X7-9KO>YUaATV/hx>YnBK'#0[|]88U=~0:M=>Fcϫ(W][Ӎt\Op+=&'I(7,frɁ:,HAMDZp:L:kׁ#ƽPc9,-]cE&~i w&]B<\;ZXߛ_?|,ߕSlz-,*{)8DI; OHgC@k*pׂ\rի=IeU¡i#fN\}7$nfP/vExcOoyT\X}T. 
0/+$^w|{Ɠ7r%ӣ -TAR#UNb9syU@)af#9A;!EKp($ [>BZ0Y 0kkZ@L`F '+AUiWiZP7;{m~B@_\H(uT~U/Uv 0Uokݟ$5D;`04|Waz'(#=/H| G:`uUK!X$(=ei֪s&4R!W>+ mEҟ @MtT}roA[e1|ms}pSvdr| e, .ݑ/Mh$,W^U'[,_W+U wD_Cj?$EH~Itڪ:bB o:u {6/zϥ6VYqȩnM7FrSEYִbL`HԵgvL]>(1H>,VA >4x>] /"ՈBtB EF00YO_k9 &%MejC=m# 4o{uc:怾 Ci0#38?ʱa1V,HbuCdrD$ARF=k짧Vx/t ,Aߔ6~P$].}BL\dZ'mv Ӄr"IX%R}It5܏kiMxT݄ q=bH"^1uəHfW2ߖ!k`JhBb\!Ogj1$g:sC#\I"_wÓ_r8A) X2^yMIy.`g};xnS7٘pvnC9!% ȭyvz:IޖA<▰ y+4 U}$);!.If nWͼqQ( c1I}3L__߄Pg>*2\wSk<șߗ6,y=C٭C˟Wǡux un+___ՑuƗ8u&3y: "y&=Ŭ-s;ORXDP{t;Ϥ0ǯO-%A0__:IX*wcΰQ`.Uo6WF/0+<ݥ|Cu x/Y39ykoI}h]3?%oL>:8Qp{&vʸWO/X vziNoM{Hj̫R$v3VU I'}6 #s%"۱G> Dt'KچJ+O)|$ʠf?Q^ ܎ϲ! -A|] %W~5 +'Q{ ](˥TV:ЁF}HfA_)1y`fWW[7$!7j I{@_ûG2iLN9mMVLLr=482~cvB&3ݙ7:ħ?yIclYnvk謀|WNd3{T:/(210!US5TTL';BchʂB3nhGGQ[!%@ 2]TMN |$ѕO}:\*!ߝއٮoHhKm=_L4|i_Vp+g[rq@1^,FStJr |40+MNTC"޲> ey~V|q[~Z?-MSKqAZ?V@Q!JSLd!tJˤqIxm7;| #q+ mE s6f0sLS0M^ʄL¯6VVJO[墭ZS8%R\L%]J$U`uDZ"2zۊDu.(EIv" zO ť aeb3 i[8b[~;&yQGWF7F4f{M>hYLMh[26<] #=sɳ_W*.k*-?j9syq,8=LNw-ti of'7-c )©Hv#{̲? I[VS$U @$ +t :ӿR*!7iTy¼6 cɒkMG:SZh{0^C'ױSl*Q>{vKֱ{a=K:$iZ!iq0Us{ۻD$uD@a>  .M//uj!m&pS "B=v^ q=|%~p]Ҥq% >E5S5.m H'.Xe{wf*Ҭzp}[@ލ1BvაvgOF-߂Pf8"A<"ǟzG-Ne%y#.` #$=ezU%$52Yr6>D:?\9<Tw%}* ?pCJRʍI~w8 /j5}ә4ݍ^&\JOR% !56mp4в(D2ϩLT64Nk'҆@ktK%$݂@+xgam ?>;T\F^E)TROgG|ٛ'Ga< x RN߃ۢ^iRCke\KBҬ&ba8-}p- =wc<X>yJe3#Xb5[/6Y6FdcYhW//N %j9v硺IV]PrqHn;>neQxExn@QF)ԓHl˽pv%'ۀZWV_|N&h6̀fߴb*z ~6F>-Tuq= ~Xþ`+="4m.)?i{WGdXƴ#'Yy pa6 }`n1Tz/Y9am,\ΘPB$HN߹{,ka〩@!K>j7rҾP}j)2fiC>$*gl 9 aYGnyk7#ʭ({,l|SY$ri^bz "s"\eW} R]>96;ۚW@+$ߒO IZI:6w +k8<}?r17u;jԞLNSmTg{KYmJjjiѯp,"$`{ t]ۇYĤmX6C14zzPz~n΁ ;^lx]C( /1L9 `ur B#'g} m$咦 s7sD$H[f 7g?a̗B/,.p 3st謚v#Y QI:z(_%oC'fUۦQ7Y\N?yBe)}"*owv W>|T[3 J2pC2iqTIsҁoL)˥Њ~1j:P9=~uJƿ/)֒;mwV5(.T|7օ601 x` 'xv}\bjF,iTš+CIKE#9ӛ^pL$291x>>@RnhK%E,&H%)~{w֋ "Ҋw&r KC7 N䰣&mϧӻ jPu}X;"y7vy}P)HcȮ@}ŇlkNS8dw0[w0Rϑm$(>(#̓HC]N{._$QԳֈt}* p3CK5gϩ71@gx:oej"=騵-'RĘ@8F}*g+[ҮJRp!OҡщtFQG4Ršf|}3ݮI"|{gn !ۑ׋ v8#GRφ4:KGQZ5Ȼ31|5HkuÝ>/}E6AN bW:,hS/EyJ2qZ툺X1$aZYp/$ܕSȗ8+z0A@t ł7jfrXPWd[n]S oĥfyX Yt"!vMD_!}L:I}JwoRu훳H9"ݫI@5 -S: T@44RҷS|XXt.s~vf;k<_Q>$v@4",pRp I &$KR `Uk]/'t\o{W;D qϻ-A'Mw!,`,iA|&poW8A U/cvLrv%- X |鲍7[=l7"ym{5˙2݃ ?br\Vi,@'_zg?,wSm:Jʩv N 'bA7 ܛZѷd5*lOygHӎ-N-fQ&Wr Ph%^ҧ4Ñ )~환697nWy4Y8K`Bt࿲XVeK5c]j>'EZǤֲ$ɵ1<Mv"iCa|ɤdYxpWONT<iZ*2<ҮvжX@!:Q;,ם1NMj[x~NͼJca" $*[?긹S&!uG"q (CL_M܇Q~>Kqf=XZ$^ q͹jp,wxM}7s̼6GkT{T l5[uϵL%夢pT73zlI6<\mHnv{[~\qܜFq'ʋ~؏.+wS9q4 +٠ԒiK7%f^̿Q^-vһpן=J+f>FD{4o8Ke{5kb1-DXY]1#usE m >!)^`&A/- SՄ5ɩb0 E 8UJz0<¯^<8\8a0)&62>އvb:YsO{f x;@c _ jR5vvc>(9g˳Nv(l5 IQH+ԕGz7@D m|AitX3Ͽ#?FhɓPM(Fuԋ=$vx`;äQ}Y7N$8c,WiI" 'O׮UԫX%կsSmtvBsFZ&Gzކm}Ut6G:@omp3Ot Û ³TVKzj"hPOus'd1>A:Z}t1#esH¬;;?*q֑Ko'i ;9mpB`8RIJvWtK!_jv {Jɍ3mIZzJهT# Op?p:vfڙыīcZ?m+=t;)bWTf/wUhX[@:r.rJD?MHuGUcl=d"&k)-ѥ9`@d;/uJ@l_Iӧ\i  fPv&¦z!rʻRύ ) jItS˜{7P]4еmDpW1"@R;„U~acn &Ml9RJ@I$[\ .tR]y$ ߵNJ*UԤxBe(KQnLFX\x|څ`/isvcQGx*ha E !ItVb^0~sy𫤵$q];;<@}lk-_HIWk=-w-R I_QXFD3(o,x$5vpkiBq*pP|rW~ O uwZ]bՕ]qDnfJy]$@llĩIڋύmY`agۭl?h/QƁ6)q`^UcZvu+EYnjbvx!ԇvy1Rl Jԋ?;#fYY{K/{n}@* \F&Ϸw~==%ݮri֟uNΤ/F{^M*]{{H7ᜎloB}*މb幸ڜ2z&e`Y j{77߱ ysk/;+Q hKKT"mםM]YThng8J6MOeGXcek݇k"Ԝiu.#lI YBfC> л'ՌjIsfNI\|jgIU҃.a)r9~]ɁrTp GdGÌ]ҭQ. ?Y.tՀ<-^LM D*NXu[68t|z^x%R'+%XN(DGbAU]|Pjfk}0NM4.+Fzb:LnK:tI{QtJ@[Y[[" d%@{ `6 \d]SxNQt XI7 yzt耹 VU ,OCϝg> K eQ}m'1ME8GM\P.-qg@_ŜK:MmF搀M 1K'I⌓Vխ |? 
݈BL|̽P1jW֗gw.@9yŗt7;ۀL^ٹ_g!нY<{(!~/ey*q_h66eh*fpA9(()7__S;Z8sjZݹT exxv)fЇLe*<3)"bGqhԣ)mf{o0Fl;fwq;bXśF[ΘC޳,=_-N׭l4ʝ ._g=|>#_M#+%dwRD 4K]ml:}싔뇴+;E;ipGG?aK}ǒ]xtޓC`>_w nZq8m ؁ R)vVT!5vmP!ׯf}lкp ׉Kl [ׯ'R"/}_^(r*듌w;NJ”CD#Xh=f+>u>`\sAKkqr֟ٔ0KLPYNF^gX蕼Hu|D{q)s:)*Hsӓ `Z2[LImQ{eRǢh"|2c/a$=eo#v]U~p~?Ձ͛p4QC:'fahnwڦ{+&9'gˬU O^4.DК~ +f^iA`]nOjU}`bkYA)sߞFQ֋u dZ @'4;v vثn1TtI@thWK$^a Lۘx㭒>wи- a--_V\X%B'Sf7JH7ȱG,I*IALJLG)B-sԪvPN3:Qic΋w礧}9@C-Tc9ah3@q@ka^yvy#nq6~PBe1D:ʜ-W\Oev-Mݺp6Jީ xAO3~8{\m-܂EP1M QgnEPo'|o3Ozt.|8ҹ+ue0rr h j|]a-tH_s4u~HHHL L~%$,6\緱UࣚS'!; _7rlSOvBLjAg;l仨ѡ: l˭p]V&zmJFQ/|3~ZGGB3sK@z/N{| rL`ҴÌ,u'`Y[aype L&NT.-럹F7UI~(}zC-m pxc v-w_kbiF@ܧwІP1\slM<Rɢ:W`n Ys%W_B_!|zϟkrhhs\pj<6" $\|WktB{tC\ XX]ALtjlWboc~Q"IS Џ|rO*E3Մnb?r@6zqRB.&+D -dV G!4cծh]ݨmİET}ΖUdv b>A_ pu S$Ii.m%+/՚8鸼pd7`И;n0!ەj}%rnO=N3wu@ *ҩgꁿ2Aq#k" J3|IZZ>+nS->GJ=x>YL4pHjK ˛ Ȁ̿"B3|Y g1.`bcaT #,>?/d'R]As* q&wy4wmO`k$k@5+ʛCr&MDd=C$SƽZ F"=s_4 }%ѣףnܬɧKߥ?霍 p  [9ӾRy 矿Qgr9 $gESb$6MHui=IULmxIm{e)w]oЭM( rmNo?}dz7j> 8phRVDG 4:c?uFE UӶ}E4sYۦ@0-`: Ew{}wTmG@9?hH`he-Oҥn֓Bڔ@z01 O^Ki,,6!qU7X%#WϳTÖ] PgvTM/(o B?ϕ.l$36,>A5POcFߏԭ[\@k9t"f(OTBw꫉wlsyqPů Y<3/Hl>RғAohńGK68zqszkTӖLj9f=#eދ>b '|, 5BҁV9ގ;zCbNNM/Rn.vĥmccM7vs& jB/:R,RXSg,/~TU z X-~}=_ׇzVg y?^XcX_-#rouwnހGGpKYX-$pzh8s*`}Ni[9k.y ^x]48^9•M2.M9ڽ;RE#U]xv.u^Pve"jF$)q<~| K l4.S]!>8kOSp7Uo=Zv팆8Z:;<냟]8 Ң\'+ϠpB;8x ؚ%@YmG&0.,\\8Qs YUr?~9'x-t⛗.%ޜWq*cվTgL*Ko^š5' YM_S4-_hg-u3ڵ a!rz*"A Ss(wnFI#ڇeqo9cO:+߾л`e>W=Yeh 7dYaƓI{W'ړR6-ih%pCCs?s+|[@jK:TomفM%ye_dIovg<1ROEri'W8xtxf\A[vG^V:4 x|9=Z_;iDnh? 1_I%UKj:]"e5I懅d|]+l|όKoQ$-ԛ<%]DӸjK!@HF\>T|PVI}癈PܜWuWRQMh%&LHjnHO8!oFnֱedR}+3=@=⧮i=N#mlxh.v'9MWZi)UÇ?l[fmen/aj#Lײ*;`<+PŜh2Vn(Qcؐ/mrmO!'?Oۈ6b {"3:St>SQU5WƖuCj$ Sv # ~}X#T$I8)mitU3uUڰ/\qs9iz̵@al.518|4I6 WIvUSknVT|;i>PE5s+JH].@>+}inH ?x%>PO^}lVl7g﫢M#E[r5Y>IyGu1*ME"@,%=T M3sLNL.|r0|-]Z;Kr7} r-!H3Q{;{' |=MOA{L ʪo\Gbٜ`F_U1_5Xҵ]hOQP>0tM 9vl-dȧ[@OӇNi@Ob$IݙT۫%mYt|c@v&#=AVkk0Rh'Y8&oE}& ~ @wYWB$U[ S>mTCҒEJt ,U" JH i t rkWk p*-yV#$U\Y-k@Fȟa t"xV~íŘ8jҗG UАbgp9,nIHӥ#s"%OWCZR)Ršɕ/\~:m`PjU O٤ ҾmT^|tjg*kX*ålrGt#Y?=o\TqLJ.r3ob0>xGvq)>GI޻t2v0f~U*`dGS mgV?񴱎ɻŶ1V|wԁ=(w@˞onV@W6\:Jmwl *.a;<e= =N8Ot2`h!J=g qxړ-Ŕ]v|:0ԇWt{@k!1 @IDAT)'`̛5U "7Vޝ{۰Ŀṃlxt}^ 򄯇6WG^/DFP&V^=FY~]*ҿqXX[ $Ip`cRv xFȮegQ==̨z69e/M %ٓU9e d0ET IU`?8f`KZ+q\EﱉKoWH布'ĥAb.v%@?S[C_PC{3 $# m\u+i#?u`9I'fA`U @2ַV:H"ŶDYrV9*tzn4KA$uj_yf+@#O/m+/+.7 f3)H_,Յz;iS:1TU-@Q BRA lP lzTl1;QR*('sf̝߂ݱk,~F 󹅽 ~. !L-ܢܷe"!\^.b;hA{}M(PAI$Hh~,^>m턌uMχoF6%tϭ`*Juw}dSI<$-4354THi0%lZ3#u=ɰσMYWvyY߾- ; w Ǣ1:}D:wyOL9aA^]Ʊh<0myNhSג h'2m$ ,,wZq}' P?>Gr{L~Ħn+Msa{5P1\nF})X7ϓzAr9t% ܑZVzw.tLKQ:f݄U_ םius?"qKڽo7ʉO*;+9fm Gc͙kA=޹%Gp @Y-6Uac̖uB< W^'ߴ `_#'V^7QwUO|4K^rBh{ic'ůqJYО[>ߗyi l{!HOInDngy^+a,h#p_/&َ6hܻzD4fz_ØmMluy@{}p79y}¤_{J6qc9IJn,h0T i3;ot;BϙO#DPuI媿o%jG0-n&gW %H>xm(Ց\zंE 0S m#y~pㄿ43Q,t,g%5φȸ=8%X{PR` ޅڍyA}p;p`Lo)||fνUkx? 5|`#b_ =S؝A[7&IxոoОc Xݩtp B$#ѵR^]DFeW=Ͽ XGMN81VliLJ[Un| a`Jڋu,Thn/T\p.雽x^sKgl*4́4[(xQi꾐_s5̹r6C;gObGfIYͼցSn|>DO#΍x0FGCvK̵K_?\o]fl:,JF;-V$%s{RTTU~6i/wIiyКZrCk46E/j0Y{+.JFaP, LyyJkc/mWY[ {( *:i]e|L?WGP#>q4|:a.i K]FDOI~ټ*QuxY*蜊2˶Vroʧ@j7oVj.Kk8>WG@ɦAs4?HЊYOH0jE/ Z:Cg@cdRMhxўTO?G]hVhP|3pq}]ڮgW ^~xt$Y9ɯHtm]=?хQQSi؋ͅ:!Ϗ9]:~8v)6Fsoʚvojioe^G}M6)u16 u"PҼFBvB )!: B.ڈ*̈́]Q8q+Of^hi ^gտl;69?]I'_#gсq 1 4tm&ptn&Pmc,loWl 1Pu7qu_Ndmj,9>^?5\ Aah=pO8;7&D-e./Di>`ͲlxA w9sg-q_n2)*g}Ғ4gKsTrm6(2] DkHuAe@&<`y2߳FRƱ/&(v,.ݎ _zd[[`G[p@&o^pC=4XߋhIZPh@iG} ~_u*u +`]"~՚ l`=z@UB߳:pW\SVG2~۸ߑ6*y@^n0L XXvq|@i2#X@ df9vsҩ X ^`•Czl򈸟Aʭ{s.E 'o?mԮeTaQSrfZs/> U Lc%Ai_ 7燓&v%8m6|lך4GeJ6d#n˖]S] A#X sKgwY*E V)Q{üu#,?vHpIc'7`%WAKJ.< } H+EX/,%σ|,Ľ* O&܂W(uLS΢Q`ƿBhs1Td",(|@ds}b0GV"r(ef),@o緑ZƦǔAo.Cx5!hl; M.vAl>4f{)-aV[4l# {>$dӉUl3"6g6z0~s'G^dw"d0^OKU? 
{2ȯqn7%}J6T~chZUMz 2˸͌mmJD&T1C0E4Tjr/<~hEke%(SQYE9$<_+Sh.uE[.N֍,a'nyJ@~A))ES}>Ѻ dsđumHctG\ iTt~/Kz 6?-g楇Az&FtyVF~yMv=>T|K@=]x!',}r7նjwQ {ư7)5o2cQhoz%w"G .sYв j`#(T2A3E}dn7۾1]golmП֥]~.!AJ2ioKCWg pv{0zY|S6@{h.<$&=mƖBHF[qd?I0 ze-)\- p>ń+ummU GhgmV&^ӿ57tI;;كGY akfnd_~6N[߉IN߿7)߂cջ7@_ 񓎱lnekr>m C_d,ʾeۗ\bSק\ G6>fga_362vA1Us qaقL=(\<h #۲|"̬6^ΚvZl\fGaXO7YnL), 89&)׿:v6MXPBM 9MB_%!!RL2XcH+U$IE{7([sF˷)i2bӭ[MbB_f'M C*6 }.B`m[!4ku: B($&Rv|Y<]D`{6 nSWL;b}Oi J }:+k*zo y,a4ECn_=0}pX< esNЇ0H+Z88,9vUN`q3w O^L|df)b,1`WX4<愾rE?>m 7evBǛt5ѕ1LkQCPA=?~ }X`x~R re{{`ezF#x -4:a.iRyėW۲8dq+}E^@MV5@BX'-Mp?WFiZر)m~ؼG٭,D琝4CMٛX&mɤǯ_cddċ<oв2ϡ>: 7"=>2u<>0G97 Y.FX$愘Qd{-؎p:GEkǮ~mr杕SqYR{H8/ v+y_t1ϥM7=s(7QٛG`HIζ.:ōJVݷ! vkp[ϧ -ϮI+Om(Ctu•W|7p:B߶SpNI;MO9I7nm% 5TcSn-TZnZ3ϗ܌69uOMAםU)vOIq@t۪&Ȧ׫)RYU+ C}!lyp l8>vݐn%mfqqWt 5F4eC+-:_aag(q?. ʮ'5pqxDsE|Bg1M&ْ\Ic (5$- 6$ƭpںrG[fᆓM~I~H}L[򔩏"vtpN25[<怠"=C/p' %Ʒ^:Hz ,o]. \ (,UaFo-$U m7˹ I_ Ւ d_"N 3xF/4Ɲ(]wZf+7行m$=@MY&8 < v[!nOk m5i8y+\Ռאw"2-`"aS|thOKfa-lȦurI%>69LE@3wg^NO.xW(avh'|37_o" H ϴ %kXlxA!o&4ߠM=ǏǡEs ӭ g&^*$&XdwXK&3e[Jcؚ(P?'fg Aᩡ4o.kO&˯p|h&*h6bqi-ZG7$!OYViͧn0%Ƌ&Q O~4R% 4Gpͽtw^se.H{請Mӎy b z C fc~fqs.މ`HmG@;3?4[ ۠]~?x. ga}f20B/α>Rt^; 1]!SiTgC5XuݕmE /q],]{p(qqidGE,^hzЇU~"xx>;="?tO#,VzRw#@WiY6ddCiK<.m^銯G[f_yܹ9Ȗ A2<GSvO(67Z-W"*}1I|{ YOr\?F6/޹X!鬌|DR@݉m3`Ȟ5}1d LFG;=ة`LB6=%(ftb"aKmkKR%?%CEەn˜;"cgޔC~RѲ@9Gm4>[Qd{*h6?1zuړvxRssvMMhG $/2mZpKhC,B[cj³ EզEԈ#yDZrüȟ?"ҵ[%FLYAqwlDwŘ 衶|ҾwـmM8~E?1u6Ֆ(r~(ƕ%i7 ]@||((5)LυO 0txm1O6b{=ۃ;/Խw/<(58N=FC_Z.vPd\<ç>e:e/ |NK+B_iѯ(GіLߧ6:%co-yi>$E>ls%`@'%¯vs|ɯ%(=q)7>讎əsI7?NhL%;4~> Y`6+B(-{[K,B6 vD{؃é\^xTB1ЊsM|Z;Gߒ$bEhle(~-/{aL!:X&ϕ(%N#4owV%*Vf3+?V0d7{9b&}'W@6kDzD2goȸ)Ӵe/\md-ü pX ,1 1Vϡ.CWVjxV-mHh%kM$JO.I~ѳ8-U?;tCiH }t`rHh,SnC}ͯ!mr㻐do9<k㾣9H} Jз NSlX:vS9kH }iN]&dLW-Z'NUp }#ބ} Wg;Mt%yf^9je݌PRdoT؎vK+/z'zE( m,Qz"⶜ƴ̵DOy$ ẍ́_ݒ1RSmPrVlƔOϤj/愾JD+1ֶ }LП8zK(Z=wI+Eog ScMToG\,{0|]aXf!Tޖ_n8]5a+j@k@Sd桌#Hbhg'wau<TQ+nUrtew?ֵ~?{x'a|! l }yJƒ e'DfToN{.ҲHq(!?dЎCS dwa`q\?,L0V}ت߶w؄ j2Z:tv)*).9 (D1n8LF tƍ7x@Z@oU7`~}X k5x &3Й] ~Պ? ƀY{t%p,\[u0MR2F@=h4l>63уoHh2 t"ZJH=`cZh2.Uf%TAF L8oi_su[^( \'xYxO}їn2|"&](m)8E!붔ÇFem.2 Vi81z6hSi؞>(44LUcy;,y/pWi駚@lLg<5Ugɋf4xR{ZQ;y.9EPІYP_Jlc& ˰ (hCpV݋y.LX^ka7-vWÍXE#pVY hߓi?e=ʞ&/a Ѣ*_#;o{G<ʠ˱ {Ņޞv8 m T%w+\x\ ֠=d ]yIKyև^L Ӵib.`Fmn'y^r]#2ES^a_kO%'=Va<^?xƑ>(9*%-i^EB3ga`s3BOփcY|g{m^'B鱳G\sEaU.l 놀rK^sFmyđet>",56f|t'<|^=~}a,L^'}0ɏIH=#hגi7/AG۟˲KNl띪tڽ"tk{[cN>ԝq`5'оzhζRGlIL>~ q7fSt. `=waVߤŰxLNᝲi wb6 (M-8Q&Pҽض׻HWP1?@xR>oe8Ҏ^6Mr6(S#4#IcK}hk5ˆ\[6* |~pKu~*1 X;\yAVxh>pj1?8؜ч6&^_~~R^ L!&G+ӷ{M'lBJN%oa4_S'b+ے 좱@#pt㗙)28CnXV;StT`oi<|{4&c =,JǬ)}dfd0ZE}ꫦ#~SN-tCCG/8w\[#z[V?z /n2^}CH@se2 *uuuOK߮*1sfpr,:5kV__m]wKsɖ _Dm&w}@DRB.im}YFszlHĴ; A &J*bOPЄ6D\waF g/gDH 4Q\hz{y1. HCg.{&ز݇, l? 
SO2C81{h?-?3ൎGZl%$t :cޖY8e-c *ejh~dSx§ذ,8ʌU$sgyc4sCz,&lG#5ݛǣםC.8eΔаmhJ ,~.Eij?vë#_k a}BMioi=Vs]қ HS6qgd3Ͱy"e_fٓz|ܻ '84As*ߴsl!:-{FJb:K&9YyV@` r4*HKrlczvY굉_kAGuٗlֽLB9b}*0~7=v:mp!-;ۓ|z|8v] Si{Ю;ȧN.zB6T%)ۈ FlemHS\yd6`.N_},ߍ ؕkl&^+3Z6uQ| 4bR7DǮʒg`^tIl֬'?nu&=iF*KBdgF[,66VF0yx/?oPz@t(@DR?wgQX ;Rd9O hH/ل?f.#\h%rl5|j8=1C\p<¹Y&4+MV.%e57Zx+)?fCe#H;?y<Alo8gSTHhkV+WݠsرsKY߲G_w.Zݑ(s+XpkiI^r=bz :`*Als/'~ߑןЁV^x-_.=~^ ?ԏu$n_%u9 o7DiI>dz4O<-G 6/H[לLpsi _& zy.ǖH|.ڏ g&A9{ljv~t)^ B̎H2fk4ף=%b np߹c/!~\g}ES> L_TI,ǥKQP{Ҥ^4Db4Ds>k-tDx'7{CS- mcKe':]]-KGڶddɠI7iE#ػfGq\hBg}۵o3l|V݆4ZF|pXugg8І'LvO"Pߐ8`t.g!s[?FVUm%F[='e C;n< ~Zsy&#-knI4#y.Di`{$p4ˎ Z{"q pڮhS: $Mg|{1.碒@d}hn&l/ױ,C@㢢̷U)%Y(ߕixMwՙАڻdL߈k>6}ë͉3]c]@t@IDAT,|blk{Vڮ66i6ZeB@cmπlJ ܊kKۗaa BSء.IkJp&>!1x&r֞散~ .zdAz~- (vidz&~L~G6n[Ə~g( `yv,>ۼqI՞»?l@yᅉO]#j$nyw~^XXV;,iԫUw*vهTK5%Ǒlڍl7 *ň[Yt8efq2[?@7L`@Kv>k/Vf7c)紓+)xUKKODz\>dZ_AiXעFPv{Ci\4i I Q&COHCW"ďƿ 8n@wM&n 71.ʒٱ5W…-gS qם_@bh_I%Z×@XpitWWVw>,(޳}L)lͳFgLGGڤ3mg/DH;SLO{sfTJֱblHL/kImtIb"Y æ[K |42, ԯ[P\WlP=r}l~. F[35Q m^qWd .^[뫂b ;da>QKI{,qx,/n/~51oUܦW0fW&FkE|L_)2<-5G d4ק_OͳxQ'^~_$0RƢkqAwCow 5Dt>\ollI_N & 6eg՗p!@we뻐[e1͗XwCKއ%vPBEgaˡW:ꬰwfx8tuqV>|#LBaÆ.+H[W#<Фu7xcǎK/N>dMGiO~}Ǚ "Vg!8isx>cahˋlFZJCKk(i{v͏},L`"~#>5YkM5'Ʒ4Bʫ8θT,) ҘHH(KeI%4E.dDZ2p&"r/`σ3pRy" @!7р `b)wY֤tXi.kd.op;#1hM|wh᷄CbA}/Gg*]SJl֣\Cww:^r;OiFқj/]O/A4i9bq l.ȅiPXt gXPH/*GKݒSqHw.qo>S 鎋 mH$HcF6͓Ci/GWOS: |Gߤ/I) h^rH(4ZˍhԨ N8N0&~n SN{͏q%:3"2 w]iaxR^>ϰ0^>$^7%Wrɇ aإ GoF۽jӝv^}Gԅ\ .*{0wb#xP$uχ1k%SKd#4Q ugQo?.<Z͗Q7G9 cAE'^:h8|v9ӱ],!i͉jwL^i6'Q)C/<3mtK]ay7{=\6CH#(8h㫌e\5@0|]Ur7 ueK;!mA3<ޣ낓ј>\G92#EK}Ђ8qO}i56=O'Qjc>Mʶ oN?7Q'd'І^ÎP_П7 JTEυ/6A9I^yVf$Hm60Yʼn[g e][bhG<xB߀)pk'P?6YnDkceKl_fymr- j~[ ӴubAS/Gsf"?g/ 'KXiO6ۨ~7~m2tГ&;ڹ 5bi'rqoEm1d#?XעQ_|o_s,X 8y?s2j<}&g"D՝YVO`ΙumD|EHA4Mƨl3'P̄,p\6Šb;5~ .Q x64n <=q~8zNcGba)EHYFïj/͙3i N0i N4ɦMA7|O|`ͳ*{eo :ko;ᕄM3, PyT>?͛0AI(d ;-ؿ #ԢgTrz >y/AynnB@3 泵 AI }~b ~ 2mЮX}4YGئ꥾|6Q퀐<q'Z繂sI[ ~G; l-&su3BY5(Mښ,*OiQ? Bg^tќR@3dGlce+LFvH՚oߔo>KXQ97Ih;2.,V i(-M"0}_ .lBˀz'G x[tW 4 .Ljg^2IѣlcW! A.LZ{a{'zQ54Ab$ҼBjIՕZ>8f+abÉv4Np,YD >J>n:9I4>~=JDm6N>El .vnμÉ(05>> 4]h8U%9i/$.zړ U~njT zzvBh) koe}C.ey7vzRX`O֝n{a/ / Z  `=hg+ N\BٯEJh .'y4 |e zqoiHM:YO%be7pۧ{uSF;LZD{SҾ%.ƫwKb8i M4VAgR6A۲a}8^ w^7XYePx,zA0?-(3-e賵y<_SIhuON zC ƨ5N=N{=n'kҏ7EA(cwWn{p]BI惬 tqe5iFxK;@vJ7r[9Ư!}[A).I/YG??LN찚8Ɲ62QocD\9s:s>^ s'Ʒ>{w6X+?_y%aԎ 2v՟wRۊ:zN/cQsf !e6jT_8E0aAcL/3ekCgj7Yzڙ:O8Id\ϨUsPMƉhVS."HMD2qV QۦKaK@mCa`$hT"yec)8ݒqkŋcvW[DN%ZU㧩Qr㧏 @ Y$CXz4!~Uhqf)Ey6̍YN WSjfl _Rsr~J)E-M!'#͝j ${n8^*}~K+S wH1综=swYǎm/ |ea%A^ZZjmb)_Z(Ar0=z؟iM|~ ?lƌ3ˍVXNi Z‚L@'9 2pK.B՜ ҩæxzjI1l-h<.rfILo39餓FJK"P:moa[=EJ R˩\ @ ob2|з]ð*`uw|ZًM`T6p EMnH@V dB6[ c, c]_6Uld,_p/u yhf0w vy(سTq JD$;Z_郍[! 
DZ h܊憄 v4tD#-,`dždSײiubR2Pm j;ĝ=f9-kvZHo*?-evn{ @(rY2q I|"!AH>>|`[YAv$"[&EAVSɻm Boj'Rmg/r Z-Ltv>:E;Ks_ GfLb4UFԍ@ *y͐l `븐Zwq%̂m_ޖmt_aq0IB_Ҙg FsS_x꼒~7Xw Nǡh5/PeFO9`wDe+;qϨۙloOR#E Ah`AZ GM}F>g]ThO:HCn*}Ҟ;[&!@7ޞ23i/̸[J@'Vn֏N@'qvTc"i 0Y_|/;hխ(6fFssb% ltGӉV[5|"E%*# f_u_9k }J^vsh9!x)]m1J]3:l cOZNOA 4LNRl̐߿3 s|t{0~&\"~+N_iJ6-0,/wz6nmf6`ۘwiJ8,wL K;Xwav[}1Ddb.\~Ahʶi({}OMBIwar¼ªieA9TFb}VڊѾI[VWW;{RjSEd[.*npXi'\vTxg:-8&Y ([Ύ4ܚb@?H]n 1ϛQ+lj|B5}L8Bٙlzyj#0Mrcp^Bރ*t`*%9>+lQfZpP -Iڷ,gak‡'thmzWl{[wޤ(  zߊ]̕~/{W=:PX$rd$4A\1u X%0 Dk q6 ]tBjϥY%D慑VhmAv\*"-uEƐ2~m8.IBc-~F8Sg3>&#ݛ1YBomQv+O^J}mU޵^+i;CL=vjݕ'uba`CXb8 k!B0H6r.D f՛wJ^aw~PВ{yw,.VEg<њ$]#xӅۑ,V#!}do~txqR# >qx^Xs(jW|j+%ElEރR7 %4X^D~Bc~o7YԲ4?mޖI@✆-4A'$G'Y\u^_jGC'^pkk+M%(ϥ םJECaH^ >liYr[&E^YU$ i=pOhV.E=Y/;%Q#~.F 'Ts&;֎n$$[V^ֹ@[Px`ViG>s^[@_Rr~EfN<5)i;7(_81;qZCe\цQۏ ~ ya,$I_%?N96Y%lF{2:drld(Ɂ: 9ݓ\8YenfeRuI<d(|Gw/cCwzc[hEs3e)^TlI#7NyI˅k9Q >>x&>}DJ KJG;vx4q@6K\hS׵w%EkAedK=phZНzQ7blhӫEtuBuy1Ks{Om֟{N+HH'>_%Юӌ4Ww|><ǘ 'N5U lЩ^u\^dU_?FO+Y[[ |`L8=3.r73^T]\բ&3ζESy귂u;qhfƼ]՞p5VzIM1|M%<ٜ\W,aܣMK|;ukߞI&螣h쩉̓\'@wU~d>}d>ME '41$ Y[dd3 h]00ْg_}ADXveF|>`Jk dQ6JX>A8SzQeb BE Qs_ O/'` a8LЁi —i11s1f\i_2wm2p}kwo&wq쿎L2>52 dսѧ5Ag =OVWyHXu^&ţhK A. @} ) p HST_`AL(Ʒ zxz:>]!2ŀE̿q8<~ogdlC5ԗ96Fi,ƠG4AZ.z{r6fojMh9*/>'9tﳐ9HO.|Or1z\AT'L4:f2&exoI!,]Ϧnmx]$J]iZȤD3^f_HkY8a^9raVY僣!E}!B6,VsR8r1WpT d+sUG*smB#hI.jV0\CTtmy2,%pvaYpMSm}%#we J$D[XU:#D,I/ ED^RJ 3sQ B>ihÑcMA0c<\vB9o r J]Eư 7Aͨ^Z{{h;!]u78Z6믷<JM{0v1f ERʹmeِIU<>|]!9Ӥ ] ;˰|.H*X '(Wz'*gt(GmbJ6KMڭxfKSN'Ns.4h1\AV˸Ѭ:1 VV^lZv팁ni o{7# C Wl= vcSVW1Mы +Co`mx4-;>I{wmvUw[~;PxW9v^_PL9{ >~^|, ^d*%c,b]W6gcE6Ѕ{pOyVRV@%Gn^7>BSn*,t<2O) ![;o#h3m&r݂+"]ʓdI(oN͌g"R\DN=#xSǸN/-l>^۟;hwNµ!4Ǹ2Spr {J4to'26 @8q[p`L$/GO٭n)OQF{FMK_c nS[d^(a̒t\jq% )\a2'5`z.w{̋R] YH鬒uS7  GI>&<|V dnr~{I<8,aA]J+e<)2#DKJ.*kaؖ4r ti#ŇoZɔni vӦJ~>-ͣ[.QdFt]r;0) פè EĈfEfĬ0"bD"f+挨 ( 0ٞYv1<=өrWJ_:PڌߔF DJMWyk|-qKxuM/D/p dW܃ʈ逸=" i5X78zȚ䠎qI}IfߌϞLw{!)͉e&MM&y`H"MR$Kjzc ] vS٪/X=oMskPZH@&]s%% l?J]L e{OGPT[3Χ2E;E+bUxn5$$>s|tr:`L:lÒ7U~1Q=n/Kk79*" Sp9ͺdZ&N(ٯ?BLfr)&˹?q`þj'V?A:ٳYfyOz!/ 0AP˲S?7!Rf (J$Uy9o5!3il\ 1 uԞ Hj}*L:h6 al{~inַG=<;k{R 14]τenx.$%8~zDRcvº]} ~켗}Tjhf7x<wmAlłqs.Ik]xڪvP6~EۤM<߆4KMi[)i֨A^NqQ!*\yO>$$npIⶤYĬjtH풀%n$HPC$gI!y/ ᷪyNNo-i?W;}.DE;X&cm,"ewXL#BS.`|q~tIxL@ IOTIH;NosҰ룬Cna]'eQ՛oH;Cyʵ8i Q˅ԹB:]E{x_yZ\([),Ϩ壭{O?b|K_ݮMK:84iʶMFz g}Eo۹+V{=ny?h>_[!{ #7NOxNihBS~rlY.:E]w!lǡ޽1bʙC=Ԍ;^ cذaUVYŪh 0^ jH7$oh,KP Ww:>L;'H ֕ܺw/[Ru$v\gtʶAbX&F&z9+"u8s@miڟo[E#!%:V^ݽNT=@8LhEhH^K]1B[{ͣ˩3oLp ] 4{@1|;X~!(:EWgnwzZ"I>PŒb8|DՂ\5=1ӀIJ-ͥnX -89{LOӑsD]f&i\-HeU6Dp9`. +kQ{:2P|`yLKo%#- 1Yg_|%jUl+U:1lkJԙ&q3ʇvrm HΗ3ҖG[Z*6S߽h?`v;t+wtON3*tG{P/ gOy6qĵ;^A~Hw}ȯ'||ӗnkTCȟ0::qR$tMHvt >[GmF:|jĵ{=V;ZRvKN~vZW Ͳ1]RN!҉~}53`8$O^jNѳ KS @!">4"kOq[)1 B\>Um/ FiȡHW)ΏqkqyԒvbә9gS%/# ts_b]fB)w >ڄ&?ӖH{\u0Hثqs 㦇ܮÏYC۸dI{,~zTQI1Kނg$ϗWυO~y1Ẳɢt?[8Vu oڙ2&vmUggY!9KƦ7IkVћ?UbFx}کOМyiZ7>>@>Y_Y&pc̗ߔ_)yڭ? JUl`H|Y_bܲ|;7f~'o(\n݌k۔Xo~*>RÙM?4Xܩo3)He_Z\ bfweb:iKLj$q,i4iZLߑE\R# jJ &R\}&'A$@eOaW:8mم䳸)}NVCe^ Y(\ `1j_fҧ2-my(d'3X@I݃ފf ҷ{}oi/@ H|}ǃ tKDH&ҏwS5p^ˬޥR,$/"=@2|G´X~y6e!Eˌ5'3L.', #`>[ANP4s){ Zi2\Ecܩ݈g-EmLg_`oR o%_.Aa~h~HL  E+`J['/sxED Ӆ5u7ݖ7 NzI4#mYç7|), =&l}5H@j\?TyMk^V%Sؿȏ̽ߠʤr |h$ g~b_kkn})X\N̩Wԯ'Q>>? 
R9v_ȭ&H;wӸUЦ=Kn||ovj>]}r:ў?P \֖4܆?ƔU)O/e `j(PYf$ބ6BFD>HcjN3Efyf1/"5{jst[Rpx ]af4)/P)೯!Vg56F_{]O〲~Z$G@{h9|`nNL1QwqkQ#B ڀmښ;/jZb6W;9 {ݹчJ)3z&o[Q^x`/&?-K)Ik#&"r\fvny4#x rr%IIj(q)Yo@CܽA8+qG1]H ZUQoK'̲_51':8ou%!;yIыo 4̛VvBM!R5%A2n/@>5O 7S gYf)S' +M+\Vܹsou[iAx'LBVQ$fk38LCjߤIچ>mƼkvqW_5{,[ @J.2K07Ǐ7ó@o7o)(Ri u+HBҩw\K9YrGzk;$M5.t(X.I$ $HJ5Kk&4w$ UȄ27 I6ּKqVH8 LxU+S 5&f&ʮxT5A[R :"=jlcSS 1Z_:[ 56^ǽtiVxi{Již6~\?cr]gˬN[Ӷ!χ\\O?Ozuwcl48yI%Krmm |y]O4B8)c Y5_c nI}8x/ R,V8k"6IzcB@IDATeBќ|~83 g7t;lsS{[`*ARo;s:bz74$2k!59~p3B.8 +[T},;a=DPԬgG8_y2׹|iϘg1W1O֮_r(*7_[m$=aOC7Wz)ђF>%kvx6>_ΘQӶ2#9kg+ Mwа쌛r!ɑm) -64W d)<:]:47۾T+ \Oٚ +<tTviQ}9!yQMҭQ*poIMc (/ ?3E#&>OfPK~GL5Gtz$unneƕ}^ {P5\?{Puط3Msouk#ϼUڍw14sF79PT/௚ASnɑ> %{)(n|qi+5C̯KOf"]1͑`Vg/ԽHWw˗ "%U8ijvɉ(0m/uQ,P ".pM2$jv=DO4Quja3NA#uĖJibV>D#Cp1};<ێ{9/f!mXs{jBϕEC Kj^DHoYR)mm}4j" 5θA;=l1T"mi+cꫛ]va `I:vh$Fxu;l zW?l:wlZn]tKY62P*5gKV"Y݉U ԑ`#:0*̬?VV81x$x;hA:ErSlAZ4fZ~gLcuil ~tm%ik-zItRҷWA ŘnGvK_OGȀN.յ|y{TwIÓ.7 0 ~.`)="D߱>s#KI^oT[^Ջ[C ?Y2uc; 4O#]`]8/sF1rCȴqASߧ~M-wrzT Sq+3 %ms8,Ul896}Ũpk3)6~_ўKZB^Id 4o΄s+IAi?%z@ݍW"p@l{W|mD=w[=z39=OVRNU s@"ic sIIl_l&inc.K4WZt5gZ=/HB:G\\n9eQCsHzh 05S`f x&‘5}ȇ"CNף'.bNi*fv R5d)s~YW>0{s仌]uOՏ@بN󲤖]k½[ò$-ovݤ2͚oKwi|_5Ow͘W|I>滩c(ճ3:.֭Ma&yS+ Ǡյ_/c1kaNN|e/cGpǒYE}P&YEң,UiN3^/z~j>{փv=Pv؅n$dY] !M euǙV ] O/jdzMNu7>9rIz ιY,%pEɽ h6NMbҎ_\R4Zza#HG oii,= ;Ԙ?we3 2ZJo2&RuG$9JUr%s$vF2Ot +{稈|:pY| '$7O|"T ^%m$p#| I/ap"(xT⿯>`Jn u[||Vh!]5+|IiqVr iqu,f0 oNݒ?I3k%\^p<`pxW:~kqMTǫh7v:8#| q򱌿ȵE[QGeܽ$6V`68WxWV#hĐ?ۤMKhX)l+Dl<mO74+OSC\a^=R`;yMZWE m!@O;{h-2s|mr!,KSQs3u)yƴ܅+|"g#0T{LU3 k56(+H}7 xtJߵ@[$)%"3gB/7>/yM,1<8zdä\YlqvR4:9=u[Mq9QKwjg/uYꏭx*~&K>M>մ5ǘJ 9 @ѿ+%Qg )S艫y<; |CK(j|R)SXٚ }~bpA;rx8̔fI:s8+-%#d._l_$1-zI:Ws[gO̓@][b -|U*%t`~>d MUє<WvIE/&]wC-Il򩒶u_[c':x}`RB]WRݓ^ҖE "m 1 r܎ߦfׂn6cɻ㓫)MRm_^ʁ:-p73ٚ}ִc' )A?ӹr(:5L~-&v<|*ڐ(<:ﴖ)ǡzc Т@o[Aiisxr ?V"}sNh^G$r%6GqD+Oĩ@ {4"yɫS7\dֹ6u~w)yxns\hNp}G{H"uŲaXq6/(6@ob_X>2$-oYT `~4,ڷh'uX-\u5A`Z ԃe}c)CюozN0FATmQb>s͋v<96R'jvTR/K@ygJ犒 ?Der4tHߪE1䀤r%9[ + m۶Md ncJb#@,aZ^ [@.g&)ÙTdNzahZ{Aj smJ {'@V3$Ȃ!nCyIVAkm h& ((0BCe10Ke$=סrC@)orܖp-z-R氲ͽHAYQO3oj%.wk,|\B]?f;9$iTx}L}&d{sIo b0@_L bjlVf0ۋ&4\\۽90&. ff5 cCmIWE2jE2΢v (0R%߇Z~E7^ K!4c:wZ\(4* nJ{h6X_7IJ'vo 8 KjG&[)|Ru ų ז]-GL,3~2;q|y' u% I0%~W aÐ)h,:xKr\# \ {SgݨWitpWsb)ߔpn 7h é}/5=ÇG&Y :=SOT\kBK6񟟳`TiM8A<Wpz]`3$iGe-WR~ZԘ_E *̮f懒s3.Shg6̪VR˕3ic"7yTXsZ^7B ei<BN_j:q6.F+i? 
)Wgms&]łXgzqwkҏC'"/ 2[MzUwim=h7ZEv9E{  DV8%R5٠g:O#ȝТ/Ey)_/9f3AG%h.Z4FIZ.n%?=M=l(q.W>,h6SGt=u`-{nP4}ѳ-$@1<[L+|(ORj/R碅G/ҟUeKْ1%ȎKXq=ܒ"iG4]Lġd/,z$,.ڀ %7I[+2-h B+2U:6u'Xg}@=JV2^2~V-A*'+?tr(vhdkKz 2m/5m+ Ҩ])OZy@癈d&G}Gi7daw 4KX(\0JCubzliI()hvRUHtB웹@9a{:xSd5[>D1E¦蜴o7'VN?N$U_Idq$0'ՀIGV8t-D l)D3l3<`:n.9ɴcaA}lZ}V>t,kVvV/f36L;m1c,?I3&b =ͯOv{l9~@o[tH)@n#;#6DVu#4 $6&JЋG~vҎ˺C7gL5 G+F2 VPb|C-zɽCNo(PgXM<'HяZn2^30I}o*zcXˆH^am&c v % r mXj%7X* _=IfP~_+߰Ԏ}|gO~THxJ;vL);-CW!<*G7WlReKو*K.?f(A*X kq%6&\Q~)&ZVX2N߈~WxmM\{[8VP"Mh3کymѥ̍l+DAR@K+Rb4հT(Zb|~aK:<2N8,ړqw{A%Xx0}Wwg>lɖh3ۋLwj, pB~^A[,ny2bpm4ZcƆXuBM=mG,-Æ$~g(G/_b1͛ABђL,iA! '>A2]2tx}g2?,1`x-c;Zz|S,W10E=ŀЦ"VûjYj΄׵B7 GӋ)J&g-r%Or^Kv, >E3"yvP-&њ"49+2R =%`V &ֻ)QvirMi $B4.m>&20p,?m\S~t X\WVJ|\*t퍀CdBnCIo[^ Fy ~CL0;r嗆)r KnM9naf7t4#U@nI)6ibC83hc3 \w"BK.rfbW+8)I{,]╿r ^uN8,d=i)s9|rE H-;,EL:[Wy..^$xy:v% `#'~}3u) X{;}\siJݨ}>hTגQjڌYcX1 %"tn'_n 4H_Ht6S@)w{}E|- e>}_NFv-Ü /C׵"8U.)wːq@&ȴ[,l }v'!.Y4ֹ@jJMpuyc=Xzsf=9x'G遲jl~d]DhSc_bkVg.4NJMڪ&ȾyB O5 !|AnjIxPri&ua pK$0} kSZ($SS:3軻vU6-b LsYUR|_^ŊyO%&sK{ϏL &hSJ:ٲR1T{u6da9Uݎ2h2cpbrΧU]&%xǦ Z\GoԂXVr('> XlK%8/AuG#hf#҅KO3Ȃu@N9~Kj{0ƻ4%"~ԍ)9tzf螜ґ%ݕ-z#3V]]F9-%Ϯ7C%;m1ք/E0 )M`"Բ%%L8, !x̷3 "C[+NV*ƽŜk5VrCǝB.(:'Z{],"+!/1VF\fsqI_J&✲_2QV XoX9VPΑVTѻH[G~Mߦub4Z,1uOJP/#dJM.cRUJ@ 2yaQ%8DŽPrmɹ`6g\. p4HN"!MҠ'ռ~Ϝozҳ<=8GvV#h0i =ﶓdxM `u ݅y'?Q?@jϧګY:tjuS@jl݅e9 ;i߱߭UOx'2ꈅҖ<<]O.ZA0?19? ˙7[ӂH_H#dĚ\GqhƞߚH<;qlj֋T܋;Af_3u2k"I,*rܝ| nP%mΊ7ݲ pL&J)O%'-%L;.Rgh<|/P ~K?_|Z$WU{IK2!!y֝_\n]y?KU&6fq/R0'OwaO`xLU#Xu8m|Z@l;ږ0D=~}O``& 4/=REi1G4kAZ8dabsGͅDX2|k#~2Q3sa]1ߺOuվW_M&.\$mWBәh Xp$t,ѐqNlA^ H"1~&H%RhīZbf8O<绋PwO|f !iI}"G;79{g,xΜlSߪ T仹\DaL2AZs40?ɘB+ˋȁa tv皾>1,3&ųZ߳Ij䟙>%GJ|+BH] ťlj{Q7fhv !ϰ.MNPf&/s[\߯댅S?I#sW(|#]~Ƙ\.nǻ$ӵO,b5;aS~=Q7z6W_W>:4*FfucK{ ` ynjFIDRĨb W6CҨFwWYwY ۆ4 {WCˋ5VrlDzSSScMбcGLcc2SP0o~|d$|ɀEV|e)Mf1$gi7AŒ1Qmb7_5.3AMܑ7bw}ղ#s?&` dlx< 08o{kۢ.O,$:q,nj'=c NYwbnDx^FX h:<Nv#7'>wddf2 a;I?nB]tk.oCGO ܖ7!zWWdu?tZIfG[XYc;8嵙9.q? ojZlSMl/W;Qщ s,5b^29q /9*P{m{?Ft{V;hZvv8\GgRMduqbIZ\5bcmwYi5X+ВTL[, la옝1r;8{ՓC6Ǔv3Æ U/K&؞w:-CѠ'ҝ P!H:b); pOb)S'=5Um&( :%Ksev;΀Q:Y:*)4]miI^ubxIxשԲ%$)_7'%fr:qCSү< LYgel:s7R~M vh'\p6kݼ.b49Y"nBoևO<#JG>}w2bqH^5>X# LjօGF{#&H%Vc >3QϾѾ8 CX6G;T̡l5bV);y0B(%Ƃ`&=eqDC?%,˥XVB"S} Bs#-B&n/#GǩS7R7~FEsf .I$~|ɧ!k 6PE5f,y2K#h0R4Fʊ3o*TWlFV i7/ +׽_yr e?{^@x$ dʙMzuӟ{7Q40oD_Hښ_ w)PwXt{ۿJZ8_`!}uA~)Ȍȍ)T)/D/ 0 ⢴_-=ͭ5 ΓdJRO_( 0j?a:Tʍ[ъ+hVZiߢ.jjkkMNL]] 8裏СC'W{U&a *sU Wɭ@VW()zI:Em|m&Mm+O Hj:aڈf\$h6?iۦs4}Àpa4+u/0w מo20mO@GXL#i ؼ~J)M1rb:hh?XpA.дZ6ъnlg'LX"}s]:P EXqƯI&u9{>8Vӧ}ɢ'JlMru-IO@V#V5SU(7p%)(&t#މg?AּCI#8Qyy ]ī|@~EͤD4K#=̖oge'k+\0/}@Z5ip& ;d.iK!M됿%uP1P/YIiR;>lL)s3ڗuI}y>+C|^w}Wv}\d1K&9Ezqdw:hLYZ :6o3&nDƵ %>_=/շ3FH}+8mdCƩh(}|[_#rA^!i5 ;RIESq:BWk'}M@'И[{ eJ/"^!|4 £aQGY\Z Kso/o*on |pQ_X y.E2 ߉\ꦓk@cŔcO>#DjFzVr2-:'GPM|H ~5z͘Dz}Er^|x0RPZ:$߇|8?3IL5>AyC(.ބ~>o<:;̳*uWcҎT0d'|nM)N۞7--~L|4e3bk%~f_7Zh!đ`v2,cV_}4駟Z&K-Mnjf~G3vX3yd [ W^9`;myLuն7.A xV6P[q Y ,  G R8bCUШ:cu-ڹp Sf%4kv:ui iǾi^R]ͬV2/nۘL觶:mj`,NK+̍5}e*L䷽~oWb'b@[k#Mh6oXWZHIE5qK~jZ8~-G['4ec `nh nv]Ԙj]i$S$k:6 9(8͟55N} >O‡n!c E^2Ud(2SEg. 4PG$" ? _bGZݱ#|w2>@-̦ΞK~>-J-eߚlLJfE'K3(Ml;߹>\U_CP%LS%5YlH?f%>c@=ZaB=zYMRUkN=O :tpy Jv.s|p3;ŮKݭC9},"aT?+[ԃZdk5'm^56ヾ#r,dj @(W[N?p3픾 ߭ UFw^m0ۿsi3BC. 
b4s!4c`В~'^Hjj磱(%S15ћ3TSoa(>VGڅR| +X-@j2e?1/N:)y% .RgR Gƌc'o8+V'N4/#@K&"JkT= ȥ&#wnΝk>lj:Jlã s};k>ls駛_l.B.wKwfE81~rx?afͪ`}€$MZG0 !vfO.Vt/j3}=<ɝڂԊH䑯ڥp8/_MS){g[{& D\m_ͫY\7_Z[ M5%ީ2`\yvLV# :m q+X :@BiG?4eD&HޖgBQ<- 2iQ;9Yld:BEMF#~&)CimXn4GD8&uk&0~}WN3o?2vhCiOkIu0"2b#Y {DVS(.i8 tacpU̞@ߍ` :yO[  }܁m~S||y!," RvaFhq'fEߌKlP/#O1kaCjQ.v-%t3bޣNl=KΙA%M/0-N?Ի꾵nΘV@g0C6!K_sO|ž"gAf\@ JȜ>S91VXaF6I@\i6 8@*ў{iu/F[ h)JKXZk#",b̙35\mjK{_|.`1ba9؜7ˆC+0瞳mAwnK`"x4vUW^zi{/-={7ȮO3Ϙ#Ge#O/Ƽm Wk ̝5:tT|6U> s59i=t:w&10_ҳg@5ח H gQ~@ \{m6c +gv&f(hr0s!RMTm mY'I|9W>ĭE3T=}HW.vV[%/7Kg=IIH&/Szx&,x;o_+`D7M1l=EX'ži25 7#`˾zL氺6FͶʭ'nF{@UGc]q6/n&>NoN=Ij; ?7};,ji(y:zu~u]w?]ɭYuAG]Ki l۝wruo%8*m%ҏ[@йY=> ư;}8TvhWSsƙt D>sDŽ81e4.ǧ3}EYS p\:u ??rL-otmm}~Qn/i.Jù$ }Kb/4 ʒ//n3̃=uj6$Lњ)T`r8"yOv'y2Ԛ@_[cr}Y"ʉ޸l٠SQp T6G%i.dqܺxDC5Kcu'Vxږ9$U'%}Sb&we&c@j;PG..f<>|LBeEm@lT_ڍd ]CҼ} L;U\u{p4ONF{cj=)@wy;K7 Y ¬&:w!tjsKb9 h03iW}F!E2ɖ``e4o0=b,w"WgQ*e5K2r](9,,d?&?!ޞ n< &=ntQS\imOq*WދuZ4az%O*2yל33G*jD`OФEbi.2 'E?"FvuY.c ^#k\_%l"Nf"\,9sQ!vA뎦kz1 1?AL`!1L8lvog9bO8^ U*l@a޴Tjrd Wڊ2%A$2 Kv mҠAYit'`U #-WiO?mo^3_2qWZ UB'ZeᐴeY25! V4vBCͣ>63hm5todvCX!?iww}VUZđH>V[fsJkY(Tɶ7 ¿Ƿ駟,{Ͻo2Agֽ{P?oir[\e^Of4;.i氅!VC;!VCqUjmH "}biDظ̃e#/pg oXȚ.9vJ`/ l?m~>nW2Yp#c+)3q-LV Z7_<+[6C2![OlwEϑ_AM$xo}ν6+8O;I_g/XfP\9f%^pY~Lt@#Cz0"`8֬ HG:q qvς pӁ,;AJ|0]M9M20s7|;쥃h256?m+=p% ;>VDÞ\J>=M]J̗yhf]B ^h1TbI~.L.G_<pdf墔&-r> hR7^O}y]4& Ƴ2p3TjȣFf~8k ^3yg~!IArWىsvd{cg&0>$BcsPMjK ' ~ƼK%S믿/[uYjJVUk9^$-d7rYW;w:e~" iL+b`TL,¶ڷdxq-`Ofb%*W$"%Qv1/gսBc{juFy2Cx+/"ϥ&~;L1;M&MG&?{9#dW~+- r]Oy뙷n7r\hP)k1.bG@{w72555f뭷0*N߽m -StT*ȓCۘ Nu6iesnCi$oKW"l |4 K/6jԒoz2)K{V4G?t5OgТ݂ۀDc+ꥳ@4%9DA_# pէD3=7P½x]0Q"a{2ThABLQ'`٘?7MRBnc!A$ h?6}8ͶK:ip 42.G8*p  ?ǢjU+qRpFI>zkZz]`~\uy6tjsiYWBx)|QsK?̷Z]o#?iA~Ė#߳s5wh(pCBW$gl02"K&LGi_ Qod_[{~Т/Eo]nwR3Phkn#Mޥ8,^O0Y&@XJ[S_WoΨi)\>ڗ ;DfU]xx4S:12ƇG-#"j,;1kRsKtd0;X>=[cZ.zWTH>EJ-׈QҼTQ9[iUBTOS ILCC@!T:tJ Xiʖo(pS{(X*%k7 a)W D/_zI" &:@huh vH+ݯ)/xn ޔFC&"&m&lbn#Sn0@ 2CIt=hP@k"o=zdUXWI}‚a^W+W~9ÐINB-hV^yeW_>l5kp{XȘо^^޶V$ Zq(NᵸQI, \y4E { Q\9Zz bZZH8h<,+گFۭdt.tM3Ui؉$0%O*w ?Q8g!EQFA;]NJcly plԘ}<.Av{ @pgTg?ɚ@[17dB`LйZs$6_AH}8܂ԏصYdI@7L_߻ bɭ鞎A4i”M@^qٟ$w;E&Sf]x-,V:rZNC\Ȱ _-w&ڞ/0QWOEKRZ~EI-v|`JIpsڢ% 7 +Fku2O@HJt>X.(r|B^%SKFژ}gq$ٯQOlv^ >l‚H64 }ìcܐo))PrŴGYGG > ]3% Xve12IeuTuYƠMI]OJ -T 、~L@_qo/;.kmv?3`/4Ҏb-Ҧ ͔)?l!.0W|eAIZ{>Г_nLZz ̔ @'*|ǃ'J[JՍ=g@IDATMvדR޽GՆUIxoZ`KQs˳{XZ[wLY3 nZ$iu>f~6K|mUo̫J߻ݿ#h u<þׂ?xsi Ђp @ұ+맸kи`LmiLHkWqqSg'd/MЀMkiR fbIN,jr/K8:H9.C#@AvQÏ~]aI6dС橻ka=um\d,C &@0-KojvcMRk3gKA_MRџѓ]òzAm}1grS-rH&ɡ̱~ƎOh? 4r^ `jږQ=Z*. 2|&'xA_zqx:h"u8lG h5aUՇ4P2>n.Gуi}Z8Ӄ.B#PLmEF=(Ǹv7Hw#t ȶ|P 8Y?݌}T _"zePl-/*SiwhrWR7BJjbQ+vypxW|Ji9Nц6i2NM"T5]I3,$ʄvX-AS+ ʠ4R #;n޽ͱ_;oYfyCF<'ww&&#U爵j[qo첋5'{*9*K,\u[ZbGC:$NCw] @tt]weɆL:Ll9"Zi_xHd#F0{e! kIMpq fE]dZ[n/>!C,~D>|CO;l66EʿV[Ёt"/ OK!ʖM+FKmEuEJ[lfAu&~k,tMO;ك*hi/лqȿ9-Af':٥]:ꍧgAQ?oiY];dq4gСR߹N:J_i(o"_iP%{ ocW-p~n&{YWxGq`i Efh10'<>.Y8hn{ZSÄf^_?_){!oF@" nLNh2Ϣb72'Bۗ"85/O^v͛O%^JkзȢ0{ޟva"Q#ܝVD=. ȽȐ]~)eX.]u3o҉ |ӆ!CmיּoةBX CAT3  b"<6nu4+c&d8>av(5 se#OzLɂN4\۳iR:)ݧYzVW1P5S-ZZteqR,iV]\JiZ$w9^ HRL bA`nڵ7ѵH&!}a& ̘\w s6Ŧ̨nkb#%.$܃2%S9ZDԌˇ;k_^væҽ?,>5F"كE3Mم{BUX$b߳8`ƫ??Pk-6d$lG9s(W!5 (2s?[]v`CZй PMse׽5ޝ^Npսqe2Ayl>2'n҄f?ly$[VdOYZ2 W='oU(o:qP4}\d&CݻwZގ{@{FHVB(0+U?S;JkYxVlF@ZUq́o&Ia%:J|OR9:L1I]2 I~ =$4PW~4) ̍.㖭!EdɝqIuIspE)VjGV%ؑa+m-F>0hJ9e_Vsetz1^ z؊ɏ 5˘І:tVZ"cҢB)%?WAH&!"W 84=B| 8Q%KN[I1ʅQ2IZ2Y'EڏŒc}_KX&HfGWHnftj._J}oc"es>lfESJ2rޅ~a.2Oe+@061?F80& ZNSj06{?NP77}d I-ArKP> 1;v𼌱W-:H_? 
Cx?1NBJ-#\.-Dޏ<;qP|i\ c6o3Kݍ  )W(>|R9Uo*۳'HڵM.!7`TIFH~N4ƏJ|<* #jwr`jЮUQ0AR$h_DWB樅b\9X̱;\+ع)6;^BXfcNU~$ TJh Ag21!nf~{?9nqsy+5jԨVYdYMm'@H|ChҨ^mE &ewX~+]#a@NiskBqDֻ̓1` 'o;m®`Z#B Cݨ*SPгTDqw:'FB+397?;}Tw3iS"[?TfM`kk(j:0b8̀a֩U\3-2 ['+VJe.K@?oXYZQ{CH֤}l_z7œt h+;|ߑ}:x;Eݟf{VF,J %NoD<%w]5g.KW2~ ?]0})z~@BΫ9 Cz,['PӲcO ?tOƭg-y8c;b&@Oe2HP/(Ǟ󈬚j̽/y)қI|+ӂ9?Ţ,7̏jM. ݁2x/۽5=%>!-ʵsmd?{]LL·pBN[EEu_GJWծ׋nToQBN?xKPB["?./Ýq4FN@'%Byؑ+ёƕNk!OL[P*Տ<h/$ Yd*wPo6A>{t:c[6$iAcgxj٬CZ ZփyD, Mshs#x`^V/73He!!`>eC c{Nٗy{ȮGc!sGOrf+!gCB?oD; Ww0CZ?L? S KjǗ¡8=zPdBhF] ycLӪCp}|Bʋt3O;.Oܼ ݷ*R^WWЊ:sM[ip~4iK0 E9Sp\n_@hGgreJk5謎*sPH ;mt"[ҥ:0/#DGOw:]nߒ4̌Zu˅IerV;v}Mk)tu: J&! jx( Ɵ#H6˷`' Q+UgOՂXjN)L=ÎeA&~Aۺv<I݌s+`RKvz:$Nʢ.`a?B|'ϱ}hJS2=֟ pd3Sav=h@ҏ mbW!OX{-4hOpc"t"rjۖ42qj;9+GP/_M.nd=8Z)cҬt~Ly1#[:K%#~ leGSps'e `f2Q&B𽽦~&G#;D IJK%CKiA0Zx (̕1,`JI@UB0T1?v\ef( ?YI,_\wN|@'RgOdH:S}oD_<_}HD1<6^)(ऱ~2`uG=,~l R J5$S//whC_X/R1n~$߁~,^E_Z4|ik?w[WJ7 .j)p cHnG&L+(`rK [ǚK#ٛ!HJ<'_O"MOB؏>!Hps ݖʺ_20 d<d.x 7`/}a D}( hB9:VI@%FRدqKQ>r5P Yl)*):W_k82зo_kv.Zcկ kh&Ud}nRm|]B 0ͯ@HJZF,Ki JM5fiVŌ"r:AhJ%Lћ \uwmvH@ѧk v70PMg0դ*IaL8BЇ E_Ly7adf[=M4m px@9H΁wpHuE{S)۠ې5k )H  >6u*&(1Pg-{rZwհmEiq"VlQ <=*&p~``F~n9V"?|s2J+M [>jvfDF7LM=CA!S SRXYoљ)M]zpXAd:_jE?pHwz2.Iq)Orw9+n5~]Yo)XYw@#, C+Jj-]({aA.ci7H{SiqЛi?kOR/ ]9RBNE(6YliI[`<5xɴA!hq,Wd2_"[tFVOG`nBJd&?hNs;aߡ\vlT0¹ushG3cWj$ٷڴ!i!#CQ(眇v }]P:99CD%r.;߂HbO(-:ܵReM Om&AlZ~IgWIZ<-=, v7G6)[pЖo-\TVķk|RM袭V)!i1#۪i}O޴MSg0ɤՔ8<mᗿ4`4_&H;&NADc.⾏KS-3mϼ\0 @f;}IwUji4%2}W##H@iJ䣑jA >M.:lLLÜ9'1i6AIo"ȥ&aHaWcBT9/>~:ojM:ҺC6m^6k]83qh=,ˣ7ɶan[ *ʸ˞y;Ql|KQ w{Y.b}T|*bA,`gER.`omdMr/EKvg̙s~sLGIkyB촹Ƿ.Mg ؍y̌Ӽݏ`AlvIo\>U*xac7v&` 8r-|jeUюTU|eFs67eO+0vܹ BW1/'1V)2WYG k ;6%n!/}RЗ+֎<9<*2+vXŞd"*A_²-wkk:F-+B lR-O36*#q( mzvjJ@^r ! Ǧ~0-?:BtΘ|'/S'hz~+Kn_sjwےRxR%Ү58W;-4_&X P 3Oa@k DAK}Ekmݢ|S {@ۓoy.veE-2w]׸=31A +&PrKVW,_o(_s{"nDS)+"n^fTW]G.TktFjgV&G!YػϢl}6&|$r4u?/?keUoX_~5ٻXH>=Y5Ar\? YU20^y7E"Z͘ So1U}P"=6%Ƙ ٝ!pP,A^r jnmN@&LJf/u(=Z(vg~ $[\Aőr>4Z<YFa1mmy#~Y`#RTZoފÿK`.WM0+*yzE1ן@)S8BNAx=mk/ɰH"JnX8ݚ2O A,v:AL" `t_U̳+kʹ4}9&AuY78G}=N<*Lj-K½} I`o35~2mTF *%}PO-DE{[ߨj7*2gRէf*\Ҕ͸ H܋8f9@>=9`;vό.{)Z1 97U|x.~ muIt\QUWJӦ|ą+`T-25 \ Ҋc]갯/)TC.wwהY2ʊ.eX\nϒjuloYkާ;!9ιG=yNvW~rzX$_0VyfY>2a9?͋}Џg~|N$mpY,:͌*YR=}1ߎ\[r,>?D;bqyo A/ﮥ,Nѩx$"g,Ə7cƌ13gbے)nVU1K/Y\m,+l&?&qIbQ]Dh"@\12L?I_1J}fw .HM{.y0U'@y1޵W}/ k-W\m5_G"8[EŅ+cjQ<cpSjݣ A<0d99i_9ضGȂNjhox(W ݩ+evh%rw,Cב, 9_i5AB(glVQc՝Ǩ%$AAaUo%sA-(X}߯V%-k=O.>^s܋S ҖB*(ovm7AIcKY '/|͠wf"؇OU$WfL@ CV Q2#y1O]H#x܍,R ]O~wQ:flԘ٢_EzCoہybV韢˖g,<L±=sM-sr<,=߂MMR:I,BXHXjD:DWʝf/h%yo<2XXgSOf7EG=+^ }JƒJ󎗮ģ}"AzD<=-ye֝CfSZAKƭsϲF&w< ~PNՎɅO1[2\ƸdGCG;gg3fA\J ^"16pabvz)UAͯ54ulX+[R9dnm`5a`M#K S ]uSʲ$?wk =dMJ+n\68ˋz]8x6t{jBhψ{i/9hE,u"? б"зCEn9 }5 ꥢ3m5vl *SI{=KZ /iNgd6+Â4oB0&XT]-6{W[H`k;w+/FD'Kct)) ,; WZ)b S@O"8%dY{`'ܠA[¹"TŚGԴC.ψK!Jb}"mHu3#t9A$@#g^E;ǥd7-Re|; x;hRE'2ZL.hn[۔,n6"$3}W)a7n | WWhA*t9W\87[O^H۔ZIJsXè/D= #~|MA_MEV_FFمr][vj c2k_c]=k86Eԡl'uJ@\9A)cqJ,ۈ V/\"b.!:Xn"s% ݁l?U֛,0y,.`+o/-0G %Bx~q/8ijfA-}D2hqxgI%fi^j jlij]#|sllC&f%os6Fv]KXt hhP8H~F>Mĸqʂz~[P+kFq:[ C#j+ukI{KX,Snr0Uڑ'vo]2ԿV_bIƓWS O *G~+O춊{`OV7q1?ǿZe^sEKQHD}S4wv C#zuC<C<~jQ/3:B҃. 2-hfny87EN@a"` '?!`sXA}U664#B_,-6JHXi-ְ3?!YƔ\ w(~ɸw)voK/ܝ}G G_j)|?% /ɋ̾)̧ՓjM B&k" ozTkp2лS4,S; Qsj~1Ň7ItU@| Mfve+ or?l98YUX>GGCw¿gl=- ttd]Iߍl_G$@=Q Lg"' ju >F}׼ڞر?8G$WfXpu$0˦6BNb*>(Jq|T\6ġ 3+@_7o)jWB9 C>%\G)(@A^Q މ I{cGBA}d\!s$.*D\fX#?P`~7e_~]զڑ>{2wtw, 2m_, $k`:YoqW'ؚ4gy9Z5fXW Bɶq:07v_QefZlO9H)C%iѺ;4$PqnIu\yE%,TOHK30{6Y6;$W#Ոd͞_Ԋ2mG-u5 TG}l|㻉ueS"G<tXCztVKf]p #Vbqd|B+bvݭygߦ;k9^VrCA M/OTr.(w;䡕d[4yds%%{3-0ٸ|ӼI#ˀ@>om>rGv#Fa(lBcQ"M]1$gxe$"O+g?~oeAG*I.ʂDKㆍdA9>cǍSwGݴq}G~X&~VAnc*:'uoPo:,lW<5&-3h >xIsٵ#r5Fyȵ|4>mC~t'~|RyђBjR>F#moSStQuשez|\S ή9W9[rLG3)Zw!`W2Z0|6(ؼٜv!Af@ &5X?E. 
t\1["V .8h=Yj.W/xH6f|)mWб^pO!,QM~tT0qO{MwL32Y WYӯt8ۡȍM6zJ}Ϋ/ޱTTX!Aנǧh3;\ɷgA!"54c:DMtA'z [MgBYO҅8NH;5 H QaQrB>yeτ!J q=X#U#wLήIJp.uKQ$E>"`yj$hpiUU GӥbҘeyOiăB413QˑoO4%WU]ʷC~:dU$expkTF$k2tr~ǽɼ;_Ƀ'2v8]3ZPP.r-vqqBz$,7,6Pߖ0~7i t``;ܱm.fB|txY`)y nu?ITndKo+dy+ؑK \ELWeׄG%r%Gm n݂~C-bo\{~7VNlzQfz#?Ly/&YUU+h`r֒?>SzuV}V%ܬ:B@|: %"m}y4C;:XYϫ/`pQ ~W4gFB33Pݖ68FO]@x[GQaڇ~@IDATck$Ւ=f9ة%-lsJչ(p^0G ɟ6DOF-(|iFB.K)ǀX f8{T%\ DS_:8ùđK}eU@%0cbEڥ(CeYٯE#is^l~uAi =oz;s `}[1]Zd>psHb9}y.ӖZQBG+D;}"3 =Ľk\qHʢ/Ƽ~/ޏ`,˲y $e= 9b6cƳ"M1}]ikZI7b x%>¬E]]Ý҇}fDz{Eٌ"0sXɈ=Z ߐk)%J4vm1'˽Ґ"THE-*: з=;3w8U+ɭ6E|n %I|?OӇb=,Tx,ǣo;{FzmJ^ڿJˌ"D:BE1/Աlt݌+`q5:rŮIZ\ݜs ꛔý*ᐯ"oq쬘1/^9sG288n,*\o\s5GE T/l,R栃6+,|>fϞ=s5 ^14h(9T}Ȇ튫$a5@xqk1VOb-IƿLI=j bKj\ 4 <Ȭf]w5K/-գt?lƏo^{mv#jhfڴiN>o$Hg3Gi<_gui."Rś>֡Eg~$0h<;m\Hє 8sJk)6op-ׇO~?@V1%&i=EYCtFP'[%id}P$9ť3 י}f_ PVu~ΤͬDZ0VBnP:Pdç Y:ae)HMKȯc:%Omͩ+TqP4uJ>)ʰM~]̆#u CY1ϗ#{n@BTH墩>ؗoW,}_>$Kί'D|W{};4cz[`|4 u<1d&פ?Ua%@BsJ̓:1_ń*V}+5+tb`Ql76~q[8a0>5ƼKjW|6jV/.}[1Qz\Yt+&7r Q|'6+ 5p +,D&C$fwdyb~l^f'qtŴ=ss,f]񌱋|U UTK2}o;=exSEwԵls02**KTr)FJ!^}N$=Xz1uEnaޓ,F^89s,wjh'Y/S^|6VKelB{H"BA}md)7I|›V:MFhK0{nKCZږqq;_[Fx_xܘOXH,Q4M<ٜyP+< "·vYr%[oe9h~ی=L4;5k_=LqHłeP ׫-\ek u9N[ i/"mᓙ] $HIկ~R`+djgEc*YdQf2GyygͲ.= N0I[l1sE9 [ohg3Gu(WBS0s*}D}˖ٲ7Q:e`-㝵*lMcQ;,1mMkYbeiJPBދg׀ly,ʑn\ҶaE+xq/8<Lm wÉ 5tkd9p-U(u,H %*COόFZ s۪H]w ?߬NO;_u:$ɢ't㇔H33ߦ4}e%3OɫgV=qX΅JA/n,w$ɊR&?q%Z6tw=Cy>(}* O\X6'3uokr!ACrr"mxq}x>|g~ʇ" +Gjј r/64;\QbUF.徎t0jӽ攒v3l:<"il\lHsahn@e]ɽͺTRu0sKnJKLح,<]DdਠoNJ % ?aBM ƻewN,jb,Hdg6 ʾǝKZ~, UKo6{4 (mr} <˕KuQg{1ΤKw}#&V6 Ɏloa:ܗ~#yߗN~apކwG\cӂp&G~dI0hUrtΩ)ާit-NƛK[Ϸ6ҎGvCvOe;G;$eYEg~-`XY}Jv_)WQ, 0~~8D܋L",gkCbd 0zꩧP{~X Y)4Jˏ*x-X}aOx[]KT|BG k8vvUM`$z="vx۴IEރ?x{yl6,4_ӟ`s$|XoBh3%e,2^Ke;VvN>тθ ,~މ*O&84O~oWp$Itif\4y:[A[\M+{S*RX,~\Gq=P/[.ꢒvx9>//aAT3FGD'鏚 r}&oAmLO !ܐ-*KnZ?/g?-LreKַH'{}1ni7TckiGXus9v=][l޲eKÇ;v;oo7~رtMgK^{g^|ѤR)[֭[v~gKv:ZS#e2z:74 AJq!Z?+9{RTg)_0=h!P?AsO  ZX.Yw,#<"#L}W??$):dC$*N XpS8 Lآ^f|q`[a1nVнE gN'$z|f[N4ϱb|;~z+$KmE1-a%7A|8wai~g u^iRpKڒXfI$eY $Z ,.i7Y\{?O˸ER>']7^dن/$wϢ%>o :; XJG%у4&Kk&ͺ|tRʐ μN/-J\ٍEWù5+>y"K~njYW@Ǚ(h]Գ|Vq}@hߡ)Q2l벜?m|}3d~7nZH}mY1[J.oen|, _EXi. `Y}ƈW$z?_hITpie,[FcKr=i׉,8VpĶ|+- zWS+]_肉B^8Mo-*ֹQ{Kdz†%_#}k:M,T/n]3'eo,jęȬ3͙^RN#L)$)=Zd_fQI8>OU|wL(>e}cU-Y ?::e݆XK^(C2ozc ܪ|x-XoWt$̹oDžWX(銎wG^c _,~ ܲ.wޏ]vi{/,m۶/P7#M"xof.BeU]wԩW`QPK #0GoN!O$@"Fb~'I<]?&a-bڠi^AǂxrtG> D$O}[4HO.p]ɖ/$7on.s>cKrv9nkwjFu;Xj *ESNu@;VZɱֵCѴVoQ>vi mh~iDZe̯} Gr## ^$Wm /x:.']fg1ThE3w5ҹKDZM :1©C/O ldN~/kcp-/7ѕz*Ru, ߋMbߌALFRs U"gI:}8K+3ιx0d mkgEM[N_=2 ƪ삞ӛb syfz#\@ '2v9u7lrw, 960XZZ$%:>.R&-w&( F} \4'=<@>/a<DY ,&SnTP=)>{-!0J$&.~96Nk2@?~\ЮXoVӇI]BV{Gy㛹ڼ=38ַHr᤹v$?xs҄ʇ z2Wf ~e|OL2_v+vt vQ9pWP.mև+^[瘒lgMFaN!mL#4X\Zܵ!ll?yT <> 2Ҏdp9/x%'~sNS!'Hƚ:#`xnSSs;2}3@E'ic}OA-Bopζp. 0w,aɦ0q3s6q&Zwu͉'h?y饗1cZ:8j}5OR6L4~J\|y qi]B6?W0_b U;E=s?#d;-\Y5ݺusb꫷>nj1~<@'C=&~٭b f^*P!ꜽIwj "MrDR%tqxZ͏1@qJ遂VϤߗU#{ 諃3-%OZ.I-^vBP / qSlh:}I*}N5yjp/[IꁗNJ+P ֗2h_bO_f)ɷO0IY@Snc'=kV1b/veDz3Xk|d!ʕ3-<`yUU7<}[ю(wHOnqfKHsQh~/p ~5|b}e_oT5ɱFv~$-1?Zqt@2M*ФaI5I8kruJJ@DLE@VkQ":_1LqWf2Eޛ吜?ֱq91w! 
KB[RSayd#] %eޅ() q۶qcE%y_>WBYyZ Y>cJ|+ߩV!w%Խ y,~$fBH.TK܊LtK@eyڜ:G('q^h\ͣqoȗS{_\ckRsxJ.TM%\vॷ-4)w :WpA4wH)EnDt H/uEqpQa!E%ٚ7]PПScV>z6m믎P~衇 E>:ֽtKK-<ʯ>+NqM%j⠖dxWSJjPnIesۻ犕ԛ *2:U>Hy-T'[%w;B4:pOM%}Y Bl'c=f\+n9r ^'rǏDyg{a.r^ZN>d ^tQQ(cӽYs5MΝ0?ez87}XiBbA1jL--$d)BC5$^w0`ѣquTXnh(:M6By]T^Eq~#&KđX?JO~An!п"?Uu(^7gD,7c\d = #`92(J/Ho+PNOYecuvI!Ҟ奊.|df>'ruDras|RhW"vȕ4NoqkTl|!'iO\tũYhOi)BVy ̇ԑF!>|ۇ]߾r*'8ӎݚTpY̾VY{7}O?/YgHMR:id 5; ʧw% XYܛ&w@($}3i=)2Rؕ1쐺n?m2#_ n=ϝF)Zb:)͡j,AF6B@ I{MPd|W,v,Y<.ϵf`̫̂}JJG;'I=%/TD۴d33O}bkGօ副2 VV_Jh7tތ⳱FqPC)skG (lQ7y) bMZ INSe3 Ybr OB{L8X6Cy+'nkZNSz=QZ.D^9P .cFfx=s%(?ah7K-yb-ݟaHi %IրS_([r69S]ZrSA߼DL;FA9(SDP>.(/|ujeC/(a6[.&&{waR@$]JdW[MoYe(FsXF/&)mGdZ}L]Ky̱dYNl%+8 O!pK%&A1Ձri11+4b2;kK3N+@ uCz4O3SIp&֝jkvMMCyהf~ #N|h\iՅx!rSk=Si:bd 'ƣʼn.ߓd^ww UPb)S*2&dQ^pFzI&Zhhhl=nn 6Zk{ӱ]>2e,ck! C`mEؙ}1I5uP\/Aou&;~a1!ݔX`pi W(#s^ th[IO,DWȒxڦ!;:uO@=6zUy2cg~lTP t=T5 Dx'}9FsFAWyoK :u @}[1"kw箴wp Lc-tQ߰ʏ7)G x9Mփvis=N|Y Z7wD?i͔l2o&=,Q3&oZkOڂ_M#4Zd "a2|:ml<嗸S{6s]44X[nDNGyYh{({\LWI,rpD念lZ;mnyM/G4ʠl\qERW`Job 4b:E5m@FCKJn?2VPw;:ebGӯ}/tS'fqm;ĎQUbU.vUEg:[,Ʈm(`n}fǷ@2!:j~"@d7w|]{\YJIn3\g)PD -RcA+]=@JR}ޱ^dl)܂<{5ưOu, /U#IXʓѓE) b޹e$0 ;rmql_iz- "C. 0$ +p{:,g;@(T:,N , o'砛+t檫2|[L=P,lnYTFfPaaQ'J#ƖB{׎e$sd BD3 :8R뀳g3—#tE1g}68k5:tpC:-E6|-8 7-_²r@LM"D6aU F˄=LTqSan|m]qFO7PűNK RQPwwVl-n%YgS\&2%{t@ŕXĜ sek˯ƟKa nD`(-:sɯj}Ltȗ}~th-zqqs Ʌp#;ܧ~|5 DŀkVŵ y["(`ߥ^?N8g?-ɻa9jeq:1# j;s8 6Z^|/A; %3p;^]h"ߙyĊw8''y~ Zs,/jw$%{1 ԡQrxWg݈_nf)ɲ;iaכ5*O$(@<--RMEMqfB$poQXi?XX/WW WK؋1abYuWD+D;UogiǓ; 1s79Z0"^sImAݓor6Lze0 Gܺq;@%WhE j^K+|!ĈNF*/LN*GYNIԸ0^]*$ٜ|HW*zW_G0ь3yVɠ|oT>ml֬YPk!8æ+ o, >;gn|$k`Yx3kGxT{_v;br 1of,Vgc$]+܎ @DIP;ud6|sq8Mӎ-jU~J8Ao~Ŝӧ;E;ԩSM[_Z5 d6eYY> wo) Ʒcf-auq0}TPjy?(lppwܢǓ&Jҁ:9B_u?p O=&Mհ.αZ1e7! µъ_sE)  Rݠ&;D,rWIew"dye\<\Ƕkqf/x-ݰD~Zf˚JɳJ[.θ㲂`^R PO5T,(`gePVn qn%) /|l.RN t{w`:DQL:HKr]"]p0Ґ+7$)XMNUBRc^}yW/HṄNKOo!ʗ<Ŋs63h2ī忲AL;3XeHc8{mRGp}~BeUx5wN<ŵĉMǢVQZz| ^qk%ZثIuuf xpȼ!zi,Χ:s̬XX0qƔkvY*<oZKÁ|'dwB !>vbVd(0CY&3 >M2jژSC'S-|'\WK׼z@nծVDX0۷,eΛ_/biQܾN1ZnUJk.|Z&X,3d7݃V\>z;\7|Hڜ$HyALj0U嵕\::+ Y}+/(Wβt Y{A_[ t6"5 Eަ&}xRS]׵$(HcN* Wϴq깞-ڢ.iAx0`ڱ̀$$iWU;⤜qsU'WKWy94l%"Q9 JtCIdAQrD[h]}5g4x3}֌Uqv}+due[X \fIV:~=ģs4*k/ʑ2[K:06 %M7kz.^!]3Xi{-oj63EKW A-kx RW-ϵ@_I)C39_AOJ)lTlGq,/-JXlhW?/?<+ 㒐1N}8iuKty KrW}|j;Qꕓ K>ݹX+Y>f7/HVÇ|Sg<諊^( Пv} E$U"*}|Hry\sCr}|][+3)ycY?8+\^XD2}&3uEk#UR=UG"#gO;tP.؂(]:W" i}gOf0J.I1<"ݙ{'E}}7ϩy8c[@I6{s#)G8$w;xvI]ֶXۋOHfj {iTfǖ{[yTGK[r}c7DP4ֈ{/DBB_9RV .q跓oFe9Эf5yd9Wg"Z-K+`jAӊ?Mm4(@y:)$J]<.]-D!? G(2~ʰ2mՂwfXWK[=\s`joYX i9Htu~5 HQ<צnl'8JC?̌q'„7N` {7f&]pDpO9K.>gmijz'A?յò=>IX/|܉ @=| zOR-}k]&|,B:Z JؚۙwzϣtVL=`;iOw`靱oP_v$˥WAb7 򸓀@ߺs_ , s w1#uhӢ֧gbZ&GM=!T<'Q?yUK2](C(8TVJχ `°lRE,a֞xK7?~83eM )?B,|+WW$LA]ϽI&o6Dœy<!w lsB|=T&4P`ﺹv|Mh3}u-?^>^\F]\UjEE"yB:aX[rS+P@wN@N[8+hWJ3jKX5ۼMP΀}" ܁iGAR}tϏ7M7y$,N٪١gc< rGXϏ$~WK+2X~!$ѯOct,eE#oUnߠ']tۑIzP#'d{m>K]yō9}Ft6)UK:oB?JHOr4~/l"> e4JǺ)\ -w״>A9-f6y(b ꒛sJP40sӿ_c،f˫]I3+4qxgu-`x&imWorRJDkT.^=ׇbE]|Xx ljKxdm](~iJ"4EjI¿0c.Io'4N5UdYbpU*[V(sFݞ,"x;xT_E :tAާI#t'Eʦy3*XyԧD#T?c^k;i3 煿{\K%i¹vm'W;&m%5|4Ӓ~q ^We%lݹ灁Y=o=&T%#q](a· +WJڶ($Հ .zݍd,av`ȣ¥Bcn$*TC7c-w! 
S^=JiKsMHcsgxi ,c_V訸a$PW[jJNPWo\$*mbW|ZP^Ϙۂ¸B`ZdN}6mfYXs[X1%YK}TY],Ir8dnn"1/qKmQ,Ի e5ƖPY vpGd}Ί_-QA_51g{hD@*?=<99H >&#f90MdN_nȻ>$QZrirԙϡD> ٹ s{;Wfl-B蜑V=:v5j0a9;D&?1%iFu$?R,Y_nc:X?֔%yF OfjX12Ar=H7[X~r[~kap·ϲV*OvˎYuBTYI*ɛj7T>@yς`kBwAo#˧+XٽqxX:Q9kkV~퀜/}I}Zor Qz|g7=gbnbVCA0RtkMSpY.1L60ehQVg KXat~+NpW92W7b<fBVМ$9mm#۾az`(9,[W Jժ|gazm%;繀'-/Ow#9,娭qh-js:8 ָޟ{[C͡DtlQt-iۿK!\X9$ϤM0dO:R܃cJd' ^rɽ,Z|xw4!00>U#Tub"DzdcOL뉞|{쌑l@Ldnb'r#k2}IgmaEꛛPr+)ofv*b |!1Bq=ĽU/+ ~mv(v[r.|ɡQxƼ˥𪑙)ïXcQx@k$Q"I|E^f,^2:%,m ZGͻ"#; E!(ՐVׇ!ćQ)5tu?&ʙ:3ȆqzL#TAFbkn lp8Si/3(<Wsoq`PCR/*bNJ.Nrڑj.2BFH_ u˚3[#dPɪ2Mf:)[8斩m'$ٺ|rƌ{""ٟkkIRк >!i+aY!8=ub%)˪.=an^r3َ$OJE_Orc (CځqaX`귲Ղ@}YQ=م0Y7!9Ւ#_ڪ jB/a?1e$"EY _pPhI>پҵKWH1BUV&=[ +-[]J9Z{,7#@,o0hAqǂE _\]GD7$ 7eeE}qBins6G \P~U9]oH }hS,x<|m[hK/lM<9ıf,.\ ce T4ze>y19QFߝۙ3 \R\ _?T|ТWbKrß1ߺɼ r9e h:;2,q%iWVYA¬e-pckmmY#GMeXd9Z_x.8ė%@bMe{ jMn ŖA@AOcMs\{GO"},I%Җ{ tr;c%ީiyWB' @~Қ6l!\D<ݲIizCn1Pfs7'䝾CF܁kah^o'W@7Ͻ欦Y?ӏËZФ2u(4?j~GX *-? f%5&{KpUnkd1hae)y$Q>+/?stʮH};MصzL$V~sbH )$ tQֈ:3EVʌ**QC=lC|_fș7Ob;!C+o ĽLۢ\H^AҠ萷}n6aw 7'Q$}h{ M]}ޣlA&w`Rv'v< =iBv&Fu=ozfnéߥ>_YQbU-$ 'u1Ko(I%5_4h8 vg|niȌfQ%n"HJ`YŇ4B7Ұaٌi8'nqPO,xkq'qa/$Xvzr 1/ Jp12÷_DGۄzbsm>Qpw8cw&'2Kn7O%wZluGg7e+%IlsJR1$ +.~8t O/-x|_-& 94&Nv]"}ʬŦg,nm39Y^ʹXCtRS飮DV䲧~ OεjEc'tWݏkQS0ɏPk}j7m'P8+[d9MO!y$0j'd-hwR'PzfԵdM_f,yd[dbp2\LSk< K<K^ŹĽӻ^ g;ovl3>DDŒ:޽oj:3dsQJ3#`U43ŏ-[o=$s$LZC*!@t;+nxٹ|{7sv؂˷0a磭O`e",7-Jo'Ra6\כ/^~cI,GI-z$Z1Qα^ښ=PiAҧ0ѫ^j|^%3ۖ(-*?SƸOf @cש(h:%z ۰?=0NtBZ Xg% n1~fXf>q.DˠX,g iX%{}fg_>}H}U3!U+9mv8Rrc܀oHMӔC8f:I\b J(*b kTĮE[blkFK,{(] 7{of}>fms˹LaCp~-H+A\v- &}w<ޡǟ&;{gmؔsc~dٴj C-^ @Sjzw[2#61[֬I“M Վg1 ^( ɚV\OߤU!j:J}vާ 9^I#ArˆɱҘU:tTGxvDxiS@n% ~t/?V\cMᾱF c:TAKveE݂_w PRft.]]u52~\( O|b 3+)G),٥($TZO:Qj=w;-Ƹ}5~|KdعUZ7.MQwA4KKaʛ Cnɛd6疀#Xd@yW1ek Mؤ&4u,zZUL6+ 7g^3kHӣ,#h[>L&B&Xg m jA$lxQ֠ChF|VN yՌPlg&D-i΁B}׆ նT M3[T["x_iX%k]'Dpo5qH.',5Ґ6 n9{Zymɣ&vO)K(E(a -N.*2`=[hHpO/6]`lWfv[oIrh:`?\eNW խ=i*[B|$QRVӼ$yLBR.ƩU堳9 ٮvKEE WmnBH+NZSzLn?z~ mSL:|Q"(_ $8|i]\ߨqmٷ4\9hb9^5JkKc^#s!NBz zr+n4ڥH &SɖԘG!xf)(Gy@L'(L![V] fŖzEo-"lUH#FE0!cEg|qäVm.zek5S.MrζtLxBf] 1\GY%^LI^/N;+Ǐ>Yn}fo!lR(88s1%j)⢕GSQ?%Cߑz)`e>?Ǹ66`d$aFqR<)ąeܔ\(!xs-nxFrs]5 Ͱന,1NAr]a)c7/2\T?qlGua@+ f׺ׁSiy1EƜ}@vɌv ;w^⠹Rp6p8S\ѫqdY :r?iT!9s'OoovMOu&ǹȾ2xZEqF2]C!_Iiۨ@'pm5-d8 ,`D#(0kK.HnT&BA1`bs!3[s^?8 ]ڒҒưX[P )~i6Ыtl 7>i0\PX{uTHNV~/ߛi̕r:f^` <~_C,6 ȶ͑(e#1xc r+tM\)ӲWBd9|s|C~KݶQ|K%G'f.E8{VyVnĿ\7~C$<-Ve/ը-ތ["`ve dѩ:dN[4j {x" Z4w~' NvxTp2rW` rģ#C6w_Fi^KLEX]挘G)Ay2'd+FWԻ@w1.K0|[N4($eL8ߋxLg\df; OQ$e#sv(g%;zZxTa>I\TU9iq ɩ2yvVې&bF7lbԹ_軓75VWQ;ۀsu潽nWސX1dX).8s0`ǎv GeXl!snTҢ[@h7`soEf!o*d<.1"{W;5f$ }d3AD_~RAxo6hSF{Ҿ 8@SG?KRQiEҶж zHS΃_\x92ˏ)VƦF-/7c]6!T^SpOgFM9?|A3 =?֏|JKډ7 :`eI,GRF)I}x{mMD3i>A>fuǤoϬJڱ-D:H1,HYsQNxU]]6׆uW6hdvޚ5kfMֹ@Eݧ-`D1RڍJrswtêP#?H+XYȥзK[ JA/v,KM΋/oY?KT%PID#@3h2rLJjwfPl҈0~j |g  %g|bE/>N̛ڿkIk:ak "&yHf0|BOA6QzQhhx__ވqNU?(PՍ|S䥷4~OmT'S_2>%$7"ܖ|U3X4hE9ЙV܅& "tU{9ti+v0} a*E]/8v(m@)yOb{?j-K·>FQ9b]j.iS@lozL/Vk|E[(Ϩv>ZN P%!Oڐs1wOae#E 1m䌣/flkKЌO W̯i"?`W-@,|毅ʗN;Je*zkʼoH$ 1$X``g>w=J1qWnw~-F{փ4rj8eVgRkL3K5=}UhfO-Gsb̸k>rchF-ߒ ϵwp $움l̝;7'ƦMۛw]`ݻ֭zk*}Qwnڶmk~L|E 5k֘7|>fw"ۋ@=`1b9rd_Df|<Q#ʴ ;+iarҊ 6NqP^')5Y k1'N]Y+dW?ƴ~%3$܉4z%X=~+}PR)8[a yiQ$xLJ+aPBZ'0~Gs7WGVu|sSNbހZIǞL}U&A%ƥKuAuΓ-0`̥Bil@[r`6~-kiWi܋̔+֘ ټf 2&/\G5X7r"ۦe&xpD{35 -BG{ yۅ.agŐ<ی!Fpn\x/(I0v>R0{39?mHbouie-dPe o}r+;ԡE2wVXuܱSfv c6FZ7gjӔƨ?P- dSpKQ h)T?m')Eyw"2[W+CJcL"H3fcLSaލ֏9IL;vt>T,f1gz=/IDNZeIVD;S׾}SYYiFeR+.L*S­*/E3{eNbҤwb#"W4+jM e#ʑyu2Yŭy !A$m^z70:'+=3Wd)Me&r9H-+i[>/&w+ۂvd, T Ի;U' <' Adrt#Ƚ@`d:L <";m]ͳ* >04e}h|gHC`|{;`r8̉~¬F zc\j{cg\Ӫ3 W $_EAVќ݊'jʌ<\~ lHn;(D2)( %1?K]Ief<%רZiZ`dӦM33|ӤiѢcNAfĉf,Fm.]jv8&#|;gʴj 8DUUUN\ ,0+V0oiݺaH$̘1ck׮5mڴ1ڵ3[m^̙3xve'&M2lcB(ˉ㠜<^rBKOpol ! 
2WR@Wk.kơ l';Wb2 trmjC09G[;%h̭.0ʑ i;8NH+Qe$@ҷ#6p+3mY?IſEmێFM]q F51kC6#a[.D<ipmlT]A+~xl4~7F IDX\?-v `OTǯ{ij\}K׽씜bU6yi C&mڀ U^PMM[9>'QeUidtN\`$XިmN`Vy)cW&8Mt |gif Omj:_L)"L_VnIzPiҮIޗ%Ar(R+099RWq%tlehZca#4}9QfOמ$Mg(SqG^bΓ Sw/RcdW4-TlnLKk8ņ+9(=e5gBpNI ̸~xmlЁJ&ƀ8WN҄'P_a!BXr.3˒c>*,&BRy1m|D`n5aE-_Fվt4g WX7nN_ GEopw(]#teD/w;f>(ф e ⛙Y%ߎ'i}I07kDxC^TٿwI9=ҳs0Z〾ns?A6}vmM:uرc7U̙3'%W~֭+VFI>}8`>19!p}F6{K3avhРfܸqfM6q[3Kdw:f 3h 'j*#03 n4i/RZėq1C }`7!YSao*ri 87 ߪdI_ؾzj1B(QӬ}USmi:"ˬHCK;@_J*eUqQD; P{bZ9@{Hd8wh]i'j/`S̨YM_:uY^DP߅1?Ph< iKyPݭz>?ՈWَґIUgM)?~lHmo"f*(4v t!EZT949UW#^gU}n&ƿ*ZuT,ϧLQ}~pFqJem|'mViXy91_ZuoY(M姐<&׾4b99:SGz~2kFBAO^P]Da#Tq1L<JjT? HIwHc:U[ɵ#:70 RKVE\ezL&_ 2cWLZAgN-hv,6/2I^0r=Qc>6Wz,=6"^ S NjUI5!i;q%L| Qe~ł(mrsڌcuuލshi)I'FbrD2J]37Jk~ھ"a{hHf=w4uD ;&7ݚ3>A:LY$\̫%=oOD٩Ԗ($[Y=dkwfѢE3s;k,ӨQ#s '5o{hʗn6lhN/dEqg7ڤq@]i 0)Sv02?+sQ(SWh^V+b1tL|,Sć.Wt ҄SR՜v_N= R;7[;GYLoA1&M'\ x1)bI$2I'84[2 /tXfS9.h!>,lȷF3 52o):JMnG5^x4$L -VjBܾ(iAx3ES;Qs1 .S9sJcRю6Ld?3,druPINºO;_ p8{Og8:C!'vuOMgU(HPsv!96cKpiA*2 ct{>E޿R# M`G" -i鞌:8@@^շl;Qv=IC.&=G)⸄~zXWN=Z<|Ck#./!QmFzj{}+oGj&>,;#Ϝt5dP-5$  +GHͼȹi[O we2=a\9dF>+|XH؁6ՋR\k]݇-ޏMD[vf\AC6Sx3] 9~%L +72q߲RO2Cz(hgJWS yyK#vb{( 'hBmpD-bRQߪWXpN=eqCl"s&h:Fc ,mu I/$| %ɡX\BNXjRw:FXr:xlZ8@_xZ׭CRǜr$L-F 3e8ܝ.2_zqɴR q ̥ExbEo)nh/7"Oʟ˙sL7q.*"}! g(sߒV glO@_LfG|P/K-k/F S[])1WD]j|OHՕ& ЅĬE|~zo-7$R5Y@[kmCh~H .Hsbq0jca(_,-^Uȁ=?h9]u%J׀̥͸7痮TJG|?ĘWMB3r,{m.<ЯrE9"" SIӰwd:nK/ϥ||1}w_@ƍ1ٵÐoWGTAfU?cnM~}cd0@z rM.ϥU}D{$ d!0j=9f2߿!\Ø1qzCKuh *\ |҆TXgʯ4}C-Y>YƆEwʫ>= YH,'c2v9չPpʋ!2R4~j78 J4Lͅ]()8J!;E] A++IEOcOmZ]<-b]tw**xJӻsOF9݈sqzͱw*t֑%ʬvv!At@(>IV6e^SN+XftJ Efm+ھv SYh(eYff޼yh`,`W+Vd jZ m>L< ߽ ?:ebg1cx>}!m6X`bt9uQvT$lR{wGLf*Iǟx#폝nB61 af5Vwn "`34e#g sص\N:e͇ԗeK:|M t )Dކ7і 0z&` `mKΏ` _51 TdO3q/mCKC>hNM؂lN|A_^X57)+Ae'S |OYj$ -m|g|-u|5|D'dz_=u'4\f["\ǿQ`^dL=r҆tPD2?{/`:7@mhL0sOZѡxRxB̑E/Ѧbȏ 1LV>F&Ixaѡgn ձcGy:iv;93g4:PF{g̘Ak[T9dךdW:u}IL8r+K,1ƍszth.<чj h֯_9ENV{ekxԨQfժU+Ǐ721! Pqras޽|7/Y-mח{/-hUU*`#tI+t0U+5.'- P V8X#wXS7S\NN+:bT\uG[L8$m-yƬWkUÍN.:T AG-%5Չ~`G=OSz/m /Z*ԋVT*_o{PW 1cǷoI5uH]o[\_d2dQL/ߣI>/%ͤobu=S~dw.c1^Ɏjn*%39"jY)|T(5؜mhPT]h{PrZ݇I;A\NbkD{=EA]JKIjD}zp_ț32gJ0?Vr:}bw=WڴOltzi5{ؓrORi|)Li7F8l}U@%_¯C@x?A۹Dj0?-VG4~_J8 -*_"X)1&yc3LsЕUޒB}+chί,7}k;TY.ofv`?g6;{Q.SҥCmgW!o4&?$I<̙^ CI@?jYn!#r1J#g#kߖ~bw-MiQW&ϤF ԧ3<7߬H_KfhhLTwZ`<@k`k7ME-2V!?Z\i,;3կZx}/&:qC-C+߷yf* ,pb, cA+dMUt+_\pdBtewe:3 ׳g-2Vڵk瘔+;am4'p\ o [8qyV&۷4h[>hyw_u5%0P{L,QZ]f㒉u<; !횰k`X4s]MD2C-*zmT ; jR&XہSjcEw3[aD wO|h[z,mjveUkroc ySĶwX=V;+V?g< Ax@dj㽪.oC2vϗ,&}L&i`vhCjE-l.V|*DPYt>o$6$0%`;a]?SVHq->E7 l~O'(/xe?L@Rd3)\Ik%_ 7cR3RA-HUd/)>-Z<~!P:aE eәE[|x)wS{}q/CC?PC/fRB <- G2I-_>U54\(m%'2N͈gܜ\q}:ևPs `˞իlMLN`GΛE_As0FlelӼȸ}Qd>cJ ?*ɞU<ԆFO_i % 9o!g2Tpra q8d ?ݷŲ %fzcnnz(ڼI/HU k4Ps:͙T'ƥ۽<9 C6, y-m6D[Ka"}%?HnY&w^"EIV_my7TQϚHgs32#4ȓ+gI.{ʔ#HӶL3Ph_$ޤ?#^6jv2k< ?oiwLU__a2O/sg?,7^yE[ӡtI2xa^>-- py$irL1Sp9ܜB;vrǮ󬐐Β=z_IЫ9'4~{/VڵUx ]1 Q_P8:}X\;K:xNxStWfl6lm_U5j;[ ٮO8VءJKMvQ+ene$ g ^Ved:VNBٯwI֒pzsݠd-DSbP,=H{WI%pI/vfj$ׇE 穝:?}̪mi뻴0JW^ (e~E ꩈͥ%գ3pCSZ˅nCJLtȀԾ棋Ѥ{abac&#ZjRgC i^k4'4nFdwZG%{H1)NZ@70Y4 BU&4ȵg|>䯙xIqj{^:NX rXj|\R=TໞŷB~#(h&F?R[i(։|>NKy^;C1 /p,=Yg[UR.&:u\ ~Eh7gĎpuQ%ܩeDڝNeD HIM2}XG-y~E BAu!4ld8lE}e];15Q4^wlGfVh|CK/ӗ(UT7%3&giWE=9Rf?kmdx,9#ޤ6mT0P$< '8 xQ udTt>p<>G1dC`ӳmg8؝1oq\}F.#^JȷP>qo|b[4w7|(9S+Ey-y@zVߐ+hm#n/ߥn|)+* L~m:2+%Е7/2OVQ7';쁯Cȸd q_`Ki3tMh.ez}[}Wn:x WLʪyF|rF5lDu@[I%- :Lͻydw問KJgd.ɌlF%[i_aI^#;T \ŹނW؃-@ӷYۂ0;T ?<;m~ʮVv&w}:"J;7 악;忇OMUѤ6bieCu_G}b~֖K|Y$¶utn:ۙo W^;W,³p-}D31+~lk i,ڥYuu|_kk|R[嗏1b/zw19 MTy}K^%}\s'C/C dgKI^sl;P;sJՅq ?7I{_t}<hS"hn8&j/׷뢖Y~泴%X(ƼZjKk?"~aI\[? 
YJ(ɩ9(c'j_Oͅo tƟÓj4y'#S!hW ms:ho$rxL]үrqA|a-XI.x?]F\o?!-eLɎ.7=xP#b6_*47!=ؠTOlM cfٱ.ZQFn d9{i2:ʽp܇1ωIV"\ 4uvf~J *&ti;*\>loLV ,?oٞAT25[* ~Q90DUjSC(lɞ,m?ڪ3E;hWLNdL|^?< w>-Ic03-$^~0cI{ ' W!FvzgxIG~o;,Lv(0Rs_q['/:O2 ;H+&V.MފMYݐhjBj0lCǐv{:5PL~Nڋ2&P'vc]:e쭩zM1m*P4fDL|׳p\8#zS6NDiUqy56ǬC#hvԵw 2~a`z/=w@TlLM6e[~K?J-)@%!j ]G2g\ pݺ O6_e%C"~LvԓY"{/Ђ:6X> /'+%&;hdi?bgf iٲF[9~:-j9Voy0F1L=0O cDZt0~h%>o- hg3+MxmF Rn_G+/|= 2b\ow r[qۗ>;I(3- ?o4ZR'ZQBϤ~35O~F8|//PVoS❢^R<^vNr&|;\"mZ sWG'? ?uSXL0Ch7xl'љiݒ_q0c ),+-7Y oo1p_KBSR)\j:TfRfR֒[# =t l׺},d2ޛ|k<{;HH`4iJROy)UBuf:A|\ۤmaAo2ZK'hK_[36JfDx}v (t[G+u+)=Pv߮G*~њjSTR!Y/Y-%}im9;ف{1O$v N'˔ٛԁ@_J/aȐd(P 'brXL@Qdk>:onCCB #a 4]H-QC';#/[S :0O@̟δ%)e}bgީ C?pR8ERwnͤz2wQ!> p Q={@W9OVXa6mqκt,A?kԊX<֔GI}'Ο"֪Fi}~tVS'!ԨS:kK:kN=\XF?<[lgfɞen_&7cǎ5^sQމ: r2%&Kz!igVi<$Ei/e*|o*" @hKni|Et@ "ѹjP4ЇQ7ě%tyv| #- |Ꞹe&cf68ۘXHXVyơ(o]FJQ~fQvd~Y$꡸/4.}V:1,he£%MFĘ'{c-GLNU;ơ?7KHUztd<HiH9ɜ&Xchb^DC`ztAJ'awSr ٨>Yۙ~8i=N {צ@BhV hDLeGp6$>C~:uoo橗7,iB'm}bȬ=+C}_Dv=ePS1uɼ@<-TVuV?6=zSC7o?5H&?=2ѨxbߝX^fgnSaNenO8ھ6Fee&6IUOjMsFL&7)̵%]%2+EIw6 0KusR[ƃHç6S`A\/s<1Ш~>/ uLu'Ād!%,l$Jڷ9b87)Z YX_ɵx )uUpΊfa`W16, fI~5NM"41(&y;ԗR'lKu&UM{%(v8M/Tf)V#a6%b|6$ɡK)ޔJ2[ O֔H5( eRi.})[*]bV)L}mS60ix?ۉ;cX5csx`P[JH 8WO}lKQ[ML0! 0(-HI:}jTmd`T$xC.wÅ}J5垩 3_&+`-c"v׽ƗhqK1 s91kI_daYP%RT[OO)0=|X&-Z}_oGKʎwva|LVzTby#6SjT.tn;rᔯ#mSEL%M^Xqu->IQN5vÓb,.%>U:(c;Rψ$5&e3on< vI[u@Y: ZAMp<l>ˆWv/X@̌zLc-+ޅФ9 HC\dyЃƝħ 7jk~^W2뽴ABѕHfߑB4ޟYK-~/i δ=7~e۫׆r- U]p׭tz&^גO ܑ},&]HW~Z`꤭QYs}oWMIe}Svte!Bf;)_bmښJ8R> -7٤U(gn4)YYUi`W}mܲբcG1<50}|xϱ) /^Hk϶gm^űf9_Yl%e@_@*_p* +3M }/#,f{>;bxn$}#bi{H & yuu(l +R.]MN[Zn$m *r"&:hRﲝ=~B@N)v ЦJN9tai4wɣ|}gg_CR¹p6Nv4n#@to?b>^k^&%yߙ|'Sٓm $њq]g%OV";TSrqCZRN#LcuM&m)->L::GmS5f!.zu0Ӥh=e}8od0opD}l~m1=yؒE?z`fVD\$+*2CnIX2[ķ]\RqON:DK#_} |fvXB;ɴGnBlh 2Uf04{p:~Iӎ8`&K/2@IDAT:PDΊ/s'(ġ/Y&|5_[~J2?? qxx㶯Ե#3 М^qG6D;iNlG"?먅[ߏ쾌MBSWi۬us}Oi·|g[d'5и3o^-Rg L5,\-EIyIQdffYA$`Œ9y|f,&3a9É9L(* $NIxNOwz_z8A 7Qou߇T't-W HObDp˯WhwW "-#]~Jg_ʛOe>CV!8m0[?/Z`){ƒp}+C,Aj9и<#α * W%Ooʴ B@~|7ǧIwƾ]sqBƒZtECc'4 syG%TmjԎRIg^_JjX=x;X.WFig3۩tmI9 z!F >r8>I$09Fd;~%KZ('u=OieF;~lf'u(z0J:aÇ^Pg*O3HgzDe40`^ޭ%u_7/]pq.,]Mʡ-щZһV/;5UJ:Ic<}MbQ+qy/A$ywt"L9ڤXo_,k_[ΎCӜ4띒hѤeDymv/yzahzz]O=<:"{X ǪŻ_ľ\W3d0O}قgGzF =4w1%UKt=|prkԒtgR_y(a>\~⟼ _&2y37`]sqϊg[!;qР K۝Pf]G}3['%ZzKsi$zԳ0Β[ʴk9~yc䍢QF C!$k/Vyl-[5Si&OB9-+*KXCRd+rd% el&y P2v_ݗ:C.>-גyEӔWrqW1U^(9w}xUed;eŏ{ru{{A\BٝJH5i.O4םPr^7?xp(ZQCJi5Ov8稪6t{۶~wCg #Ϧ+ JF+70|PPs`uC^z%|%! `cA*/ >ˉ%%y[/}9+c(X9X;c_<`hb4Kp}^ֵd%glD#I[,]#Y8g5xc|7~c';I,"`6>-zow4*:ԧOϸr9:S NkMŞYSHs'8Vb"vV5v RʊKLڢtk֗CT~ր^ \YM\SaX“Ԃɖo!a$Y9^eH ]NygXVaj܄8HlorʯEyPوvܠ=$TmwR0*}cHZLv %d\PVD6:eIiUP ɓ " gJ& r]a.1 e2rqE9 #+rߕkz'$X㊃7e,Jy: ):Ō0DR-.$lQ@bR ]nZ^F~<3@;auI'I=7$=FLrW{ok@g&rbFS;4tܨ}dΛN`T*u#9dsII#C [YA`״dCޚS@VseM/Sloy.oŧTsOVEWRM%&"F^u ǩw)𾒼*ȆZEh%EZ~G=1aEMBZxeo6u'Xœw)1]pdJӤߘ`B@q-) ]Po[fwWS?hpWY]gOc]%KYAUof;2lzd@TٲV [D+yO`$J,d. j k燔RsͣB)sqg&Y #vwbou1.%v=ûBHK]w<`9#SYGuQqe ,1j_VBg"µ tK$v&-<>Xѓp-0#oku,%p(H~h-SV_yXF,)TJ!wA+>;um>ȋG-DVa-ZJ%8R.{zWRVȴ cJ8#e ;u>%Ǫơ(h]˓K#Hfg^˓w8ms] ; JS)]]:3о,)|qWQȍwV{ W}{U-꾳FM;o/^%C T!jvԻZ=B.F ӗ1"IƀzvoBޜBp֮|*[KlHu).SE45͓ʿX%6Š3To1i26a[^FO-4?YU}.zM\^/ʩĻM/%j\hu|ē/2y=֟[^5 5wW??x;':D7fW77@KA['}:>v *ޥ k64wVIRGhR(Ű2e.rJ2(FD47ŕUՑhQ}:Q~n\m=[K{ cxEvʅS%e$_^(y<&e>5-(eׂ J(*[xdr<1r?J_νe-Wtӈ‹)y83P[Wϭ;lfߊJAV6̖}{ >򝫬[! 
Zeh ˣɾAVPPvqGd0VĥqT]5}MX5e )Vqk- 3Z+a_u[V^.@HZ]T9ʘXVڅRuef<~t9|2e.gV [c޷bʒі-asOвqGwl, h0qeQ( 2-,u.;:&dE)AzŗugO.kivꉵӚ /y)?>aɧ%z{VAW@YYA"ꣴa#h F&y,@INݥjޑ-R-)?j#7Mk)OJƌ~)Da^,=\' @L jXZ徜lpwgwb+hWiQ_{C근q1[krDٲC]Osi|\=B:uđb.UF|{&$|5Қ[1؊QXq"+Eݙ$5h[n-KBgr#lК,h4 1Eޭ2H/ٶj>je [@I,ȝEcYq0?j va{wd۰{޴n5o)U`mŞt^kկ§WcO uWmRKtHX88-TY/ c?%MK1+P5;N|3N蒏9|8_J/z.'/zv]mxJMuTXlA'W_~kL\ffcdTff^LELg3yCohv&itdVj Ax45f1 6q>!NctLe45lN]fK^ۮhfFxq@dB;lI}]Aާc{{;Ĩ:?PW]wbB3*LJu2-꧆R?__+>{)u:}CEH[aYʚCPo|ta<ؖw͘[}4%ebĘCN$,94FX&-hGܘX]/'qg ӼH3ICn(tĢL`c7 5,N5GDqaND;y,BGg Hf>^:ȶex{YȚ+mLfhe,x8Qe"(WOl RKOh? xϡUmy&4q5t2$Xx൙_Sx9#:qp7=[5*\tj'EE_DJ4fWh$~X$a>:j%3 {99U s*?ԛm^d+sl3<,_%nO{KL*1!OP: C FQQZqE̻wܜl ՙQe%zS9n65k09":Ё0N4nWTkzR;@\̜<1jg3MW#u?Ec%%ٹ c2!c:271n2&j W{03_Oo\ |=[}D~ZCQw2013NBy.d4/.̽ъƤ{z>|A)κRr//Цݙo/j~FfoNJiޙXӻnB.xJKtE;0s{Cc8Ie/mMrE&_vƘk혟~6ʘ ™6'mޫy4E'y鋋(&fQ,Vd JYa)ͨ'fϼ`f3{1/n$!X@{f HvWZ)Ѱl\{C [8ZbWtLqlP[irٹgzk܂6.V򲡜x_)Q}N&$JYz[%XR?C+P1}03τ(~c4) q-j r7*&!ą4r]w"R->k@wo%7##fh<=EΤijK#C=C#N|5O4 "0g#\hBw sp @G  A ʔ*u K׈5 s*BpP ~Ǥ1Gh։$Ix0AuR3HhvghM_p eB*ݸ 鏅1]= p=ã w N)3ڥ; %F=6h}p k-#6euĜh=:XAxA+| p+:WL; )py{ğd;+!HcԆ_̽XOؙ3t^* ;bϷz0m<+ ,Ur_chsi9f%֘/x/aXKwrr˸z|C/rdŨN(ňL/n5iKl̋Aŵ6fY$eox<яcNm}oy#@IKط2^,z_KRk~׼4Vxzz-SOhb@VEAfЛW7o13H@[޵+= /W/i݌}.џP ~=K\ yҶR[S/(GV:5Ù7G8OQ e )\U({fZے&ކƘ P'k+#|nþU%yO@JKBP;h/w*ef]b-zGCumX-Y1IKx]374_ =m+WX/]ҍzaU$gЈVhhq׈{e@xB`6Gz7WH]J4/@ sOBҮMܓ;e(8Zk?J6H͍# ]OEޓ~y\J,-ԒPw?gE> ʌwHWRwpըLI[ Q-)z͠ Ɏ>4EJ=ٞOZs|cgB]:(,K#FR>A>#3}#0yz鴐zf&?^-fHb.f>;3ARǢf %iX7U *Z\"N f[%,m.6a-e]B?eX#,}₳$g& _'I ^9I@b(5|S[ Hu-Ԝ:Fsk@S< D0,Z|}*pX˟sp4tcYi'BhR4,C5TRՋfgh _VkQI;X|dGՠ!쏑ϣ޸lQO$ow^L^Tj)!A V\"CL+h.JJ\IҹEǐzEMUT>f7y{)Ah e斎Pʄ{۹DW;:]gYߊk|2wܮ{"0huAz #EjW;5UGє;%L sw].=͏W{H>?5U[wirU~K1626/m)xGcHlͮ6-.2㳭dn%ҦNM.a4TP[+$ّ\"s/ǂY}}#?H6ٖmo!rדHLXoY1&c[)&C<{MPЅǓQ4\UNl? 'ݙ&Yi̙@7=xϡ#RjG`\~.|\ȦgP.\ZR[e%p>}Ç .7bmgŠ +`>NS? m=l}riE moEp=[ojoB4s^5˾˓\|v2>2MFBqT|:R<}SƏB!洅iXlKL4Ѯ*LuKnI`cs%--w'R9g#7҅|jHwh綀 k62mK )ergяDqcUc_ +ҦQb&\Y􊹏-dp!GB>U-7=L9r*KN-kЈx$> W`YU~a豟§ T_L?CѯgۭZT4\,hPbj["<$9]6O"WF{\ݕ 0b|[^vWZ5& Ƭ(xUlY@> حZ`/Kvyrjn8%5noF(:^??7Wg_aAy_@b]T{uC|͊]\.Gnx"=ˬ;.^8`(gm D-*60MSiqB~ }]dh-\9Q}m/cNX>0?ڃU}/O+bu?p1|kRRN26ТtT 'J[GrGt1i h&)vo9ws%49+}[GȃmL+]vɝ3΢0r+&m|SzYӵE^({L/ڏ)Qy8<b]_ky6!3Uq"xa`nȧ%' c;Lg?)% LƁJ9"Ih;| 2[@%8?G}IB…eϯ;֤-K@/'jҀ3w ~ L&Mrap. YΪ8k^Xp UtuRA[]bw.6"lO\zXߨ6qҋ4](>7W6a%ޠ:ˡR$QŊ߀B+ ¢ 'YT幝i3I|Fn[pӞLC)k<b.D7]xkPOGG7/gV@Lr> RvSMaѡ<'d" ?Jn0vޥ$[p9c}*Σ{?)Z~R?;FECQHPzjFHnGmv ꧳xZ6+eĊ͖-(>l*2w_VkocgGA=i'۟nzZ ՘ǢB̂?YuL"cb|oλ:۷aۗw dكeYK%)SqAk2DgEn(}'}81sY)e[m Cm\ԛ}A/ѵsK'M?3}Р1AfI;).eqN s#8Ki/h]sO@ʙRs=9_1FR81FhBh7ʖ R|X|?wPk;rb2*p\\'tiV6<،Ķ_G񞻱~?s@Kkoci|*iKVQ':4OO4;0s^f?u)R*2E,#)h))(z!@)T{g_ Y3R"_Al?"I3#_b;XskLKS';gdng=S _ǝ*8ˏl>Z֔>5gu[Hx!~yr;Z f/Y RN$SD<%Eu7`ziɌUYv@cDs'Q;~.&U}{3?Cf伳|`Px7u(Gei: T6uӧT:ߺ;?ȓǦ_4?_|>FqX Z =z1b)Wsۉ;Xg#ػ4﯀ ; 枩Y'7 Ohr[oy_$zaf| *zr,5zdbdlINv?_>]++&0.&rn{d@ޥpޗdݦP~KvH dAڅ< WWZ ]xxDuM ~.vҔz~o O҆D lA|W|DGО@Igw.xV $gz4O4Z~7?!f)ߜ: //1w^H$c\P١ܟ ](1殯e"v eՉi`rGz}F <[?g֢Жr:(X;Dكȕx{?#*tpS}xܨ̎f'Չaf$`X`mkwG~:@yo9NC$WtbZT@%i1,"R|i(e4~}?o9F.#fA07GUrG-җ:t8I7s!*{:MXҦ$?'5G M!>f_ڬ3&VB@=kנY):ۺ+k?/ T ZA([iɿ4&3q wAnv/zL\7_ZJ 3 vK3c1>ᮅ-%q >:hN&ʗ&I)d3j{ wtN47]Xخƒ==.Ƽ'LzK(`1ˊKk-?e"_A ^ԸP\Xھe,2 /~I? *1K|{̖ŰS TA⎃6d| 8 k{l9|%L}nyTyf(x^s)|#?k2C~DxRUw*md1jY|;WJe?J[&, \o{~>8¥'~Wӵ"#ȍ6$%ƂQ/ct Z;I3%Ya1:Iɇ7!(̄HI vdhXxռ#fX3ح4;ЬfwCբZDZJ:JK?]aw.u1j֢AvtOa,̼>3wyg'?Pg1l vڒo<+oCan|߄̡aVW7 _7niܻahܻ f>ׅ )ݦº|HK^6lAk= R?5QObE<4 W 6f̥P.U铧W3,({*eA'i+vd5e]uZqgxƮkH;2TP{?42DJ){?n~c? w w~=)=ί_zRG ~Ss}$Q ?`M!#/vb /wo>aOcmJ6JIru]o G#-bAutg5y%@IDATeL R$~VMvR gP(}QTZNB9!B5U!-y8~CuIDJk䉪n*FdW(ʪBّ:, }FrrJ;! 
+Z>h >mЗ1v(<%Wbih8Z'XGE{Uh{+68|VpNƜxTyd+9f01ǟuP'g(D5xɤ6faE-Rp5!</\z .H2w2t+Ѯ[D}&7sXφ1۶sǧPV=vv\ s'L9} Xorn!h|?|!Y>ndqf%գT867ϡsUKciQ=ttļM:ׯ+Ȫpe@KM 돲6ݛs#?}2pCb{ ^bvQXq͔ kr(pC-,Z}%l2Yeq艹Q3d%/Ũ ֛ԯENܐ܏zh;7@kg/FBr5a3ܨki/oZX@nGqQ?ZnEXxO5__?N ıxOQ~mj2gqjӔݙZ|܆1I;M64: Üõ3#-&]zz CwׯYo #~ I*W9IDϹ,* ۝w7Џ͜s/ϻЊ+xa,1kFE-oj.)NG9:Ё;1_ w ݴC99M[Ry /ugڇnG/Pj19Zd7Us,ID |G0sT1o%z"dô1sS\iU_Y7ŅK /Φ>{W4ɭ.+II9a.$ï,8HգiQ) ==?/k=VYvhG;Jb\ `Yb(Wdw1FaΟH|qf.-ܹlI3sLӾ}{by,/OJ?TɫgN>~]-;7ǽv5gU -':h&WVbe#Oa!/rZy| K4QU 7aÆ5\Ay+#okC&(Zhl-_~ڴicZ(RihT0},EJ&-<f3BKO?楚??$$MȌH+qUK iÊ` =ॱ^(7Wb=pG+,цfAPdqցloh@$)Rzt[9lQ5ܬS58\w5d)kAYky'~| \VL?K,a._TL߰bcKBɩ!0=X.ye<S6M+*FR's-!%s3Y _a9ʂsf3Ch)UH X (tn\ve6@)'v 8?^:4DcK l,R1~eGjU:߆ e,Zo{cbi Si-W*s'0P~I?q 0XP9^Kߴlq|?JO:q/er?v?47_/ԘB\s\,œs85bWڭSs2>Qk˭lP2/jN6w:}+|t?⹼J@՛#ѢZe}؇`X GrHnW|_^dEt-r߲>P$iFW5^ 6tXݓho,Y Ԗ d=EYڽ Nj>q w-̯_&{0lZ;ByTгHꩱ > 2\Cx0f'R.EKX| ;vw1ޒ˃~F\IGo!1@8JmwӫM%gcnZĸ`' >*=e=eʔ4e9Q{ƷvtjhQxe1_,(_8{BU~U;LtFo ݎP8;-JgVތUZʺee>M|AU Fef˒7GQ٬z ICd?\A} ҥlBa(PzA+A%Rko&d!%3Jo\K9ˡ'LV3ʕ-<'LL^_p޿LTks뼻Gםot`$2dy:[x ~q[%ZH'tkt^H?rnL=ㆴT%H4(ـ,Gԡ32|C#Whpz}O"o9$_nmd 4IIXzGp\|->.7n"QYo0 ?7\n%;qТmb[qn+ ފDόE! o4ZP`bj1rԂjׁ\܇,$vc1ym 0M.b*cqSs?kڱ ,4'#M;1SgƍxwS-mqz+ {r8AwhLQ٠vrs^[. 7Ate))s3uȡ'GYh ꛚm̭^7'pR3s8}7#MY}?|Ai4oDJ?> DcWwGY ʭmi:791=Ldw <_ }| r-?3sa|Z+y_Z˥xְ+˦^uܹ}qgvygߚk|7>&E]dXve b0dSBȂz"p8*^~A:MB|i-e͝zgDz@mt&JY7mpEXȋA ]4a#jw+-QU%A_g[?ߜ~k5_3.|`.jsyw'dH K+׬:k]K% \=A~e%9iDջo|l56b%%"FYaX7nWpE_aCN>$nsEex(%!^s@?}UHŒX= B٫&Ȝq v U^6VXzQ [ts|Oϒ +2vAɟݩdA_z}z+JKk 2o9A#qorT)d WQ.2 e{w>-87ִsJe15VG@_r/yϲК5T> hynca쏒 ܿ zBGWPUQGXxw nK8HGA+\o%܊goYġ߲*+E 1&_;9"/Oae&?nH;?;X0Cր,}s˔rRH%zvBb_~dxsإXҡ x%۠{6dvhdB8F6$h'Q uA |DWP yZW{ (RϿg0pUźr!][ lw* NdKbIĒ/m#=":Ssvw1/䏷DZ7] MQL1)x> \0b -ZZ _Q8}/w3{6ȝC,2'k !)?C,@ZCt.pyQamZ1QoyK=>4!,O`z~²Ji-¤.key*\3u^."? QAwub-KJ'1N[y,id 7򎺷5ct'az|:6JJ6=Rg1"K=@ȶ6?Xk~Ղܿh,Qporɥk?,$=.ټ5G $X+ lA Q{̅VV׆Jh'z[}{҃CYÅwC^ϗ/O^k̵ӿ66𒳰k0 rHI|BR'oZ`hsŊt0Y?+H΍ V{u_x!Bˬ4os5=z0ooߗ@M`6 Vfs(.N)p2BpM_RCZ, }W'pVN܉^|Uo[w[΂-j\EU[ILUk8UO5p ;+RN dl%"CS}g.8㏃ V 0^+BQ)"GMO"f\cTB<(q T.<1$Y۸t&A+F=I*0#7$Vn|}ȆdoM$޶Z=L˕TxDv9=4$ԗwGK.>Ztn<`Hm{IVAnt@/{-Q-m G+e3U;K+8 kJ>ɨrp=cGJ(a; s-Vw’Ry@hJXU]tXGhi.7u}pؓ>׽(Oq%"<[7 &r@۠/k-=  [V2gx"0BONY@-Cm g&{S2'  Z],|m;>rA*򋓀vToj0Z;V>J?mn4, qz:SjuQ0^y*-i, Сp-ͬ,.~;i?Sy[0*]4ҟȴ}dSn.]Xاz0v\zeg_f2鋅uBSvuйWkj򛯹X? ?h9̝NcodW-54ẠoY)'tyL着A2`z3ο[.F#G-ZŹ>}I': {i=zxW2-V[l&LjvWݻӧCc=,F@e^zMϞ=+Gt&Ì 2tU j9!۰ҠUawr kTp>ioZ+ ATbqX*w]~m\aa;*'O>馛|Z>.H9 g}[}_p[mes<[PoL6 Esè?"O R7b&"ݨC5ryjjc3dal 2ړ0痐, +\{ü{&u]5A{"5,t!d?6@]p\w:|.7Zw6ors\|RX&~7ݕхk#K37УWr! J0eX7f $PE[c,o& Au̍fqcIvwRl( 9` }G n||XgT[uFA Ce;oESI˪l n*u/ӖSʿlNT ;E\H9՟" t$ ]K$6:f뀷v|.Ѣ䢤{haVְw8KSnܕYdlb+%gb- ZK#Z|Iw0:tE ~&5 /Ŵ˵Po,/ey޲!m& M'Nv{ O(prFÎn<͚7`JD@Dٍ°,T>O&{"\lK ,SHO7'> 5򉸜k!z; PA Lk K&D"DE%_@K3e e!<)&'(g ia@<&Qo˒\P>4.-RG'#cʧ/Ļ+_ N߾S#[[p+0CnyG,6ąs$:5Š{4'Z^p^b%&=h9ŧ$QFWI׺mWˍg\ߪǾ5Tޒ% 0G=ƝklԙJ(C~YuK&۲LM[Zf%HV`0j9&#/ %:]^Е6L!sq>ƒrT",mn;߲< h82cEWYE;P5U p$Zi1v1dEc(t9snJi"`˝V[J F009S3Vfrb ~g^d?:re)hQѯ4ףx<T=me#kg0z? ^Le% WՄP-!;d8wy2p ,.Y^,dj'[)NUi ~k ɔՎ*߲I: 9$sӁxCy$1>[A:+>mub/b@%`l7o֭9a8p '?k,sG P Ù`#_Cݻ﫯2W]uYpP?M.KL'71TWl#|& D'!JXv5ɨAJX W8au3*.?Y[X'v"/`kh5:I248Ŏ Ͷ5v$Վ(31wŃx0"cLQd4vwmʇ9 gkh2hEdaEI^֥x:24ģl+waa)cY5 {3a܅eS+ZsWqX`5,&s8Hvշ+:K <,ҦAU_g\Ќ%~}/#lݮy@svM4q<~sk*n͂űЦh-r5Ǯy(3@> J#m)eXcL T?_hAP;?|O 2dٛzޛa~k|o(Q0ܒ?ZƜoFg| nrm,חHw '?Eo S?Y/JuӣYhWyJw%/SU'ruˇ|>Xρ{[?{/-nko(5Lg=vm3uLC]Xʛ3]$p֏TҥK6?D5<޾|V51%Z\9i.1f'<,{Q(P0RDnl~I( q/Ę]@kے: - JVk2lME c}-L CSZ`{xѭ\ rCnvAJPoUD5gJ~Ɯ ?6ۭ瘅}'=~AYĔY W1タP;q=g_M?jY,Q^H+<ٴ#3`F]kƛae~Tm5-\Z{iho|*6z;ץ`hy#t& pTn/яY}-p<bȽ<ĪQr-8CVw?ZNEeЍ=d<1N*%gQb) oM;Η6Brs:Z߷.{A? 
GhP]9v-w;,N9ZNwJ[(3%]^Ķ~ܣ[#(O>$)b1ExX.!c^ H}'}!sZr&ϝA7Ka]ynb<8usQEtׅ 2GD@*f4m2Ju72 _3Gϐ8q3Sm#+~wA;>{]W·=Cpwz8Eӻ}qv59~,TYy&Nr#Omv,ϠUu]npyC9) \pA'ݼ0,Tk1-8p5$ӧoyyg=, x5A3@̴ kYP^m,]:V u5B¢A;%@rP\Tȑ#г~ܟ$fyxԨQa rKGv^?˻4@Rfc#OViWk4BaN<)u ܎ r'I )>YzO%V-lMP>Dˡ8^>]Zĭ)vmk%pы(A{QfI<)/x]yٴJn:Tl$y`ap Vaao ĻhY}<0-¡Q;pVjUTKt>I>(Ĭ:naeWQ6Ӡ|Og X?L*y^:x4}=HKN֦}d]?5mp9raMjE0Oj´a2ts үPAGJHz>ڻQ@s mVpMrvp'q vJōќlqPŃXn^," tpvKPJr5ej RCE0ޙ:6;PΡ AOWl۾6C^-X;L9X$Özخe}w«TW6+)~uQx1DO-@w:hcICgIsZCOm`|oh'u pMu& O񦋙*ŲWW iϳ>=z} y }*JN3{?lO篁n]: :9G3.lG-\DzErnA͘ &\«&sCv'D=O{_1{,cC>bn-N+*n+[^cufafK hOPo,`~*h&~0˨faej0n5qtd>}>נUI.*(hKjt$EZP{7'y-v-sFoa<׋Ky.6t\r|qnx%gp~:p ;V  3(9d7xs4Uֈ1Ck[$ ϳvJs Qt{I'X/z`1N<Ī/?]H}щ 70 27^d<A`WRMp= *aݤa*KЙa-A9ss`J @uA~7&o0"AK]~ywqGqV1kaN 30CJ|,_zߥ㌫kEG!*uu{!xX rZzR+*b*l» -U hUn#r arnER(-ņ:Ce+6%=R{.i,//Fv|%-欀0[D%֒YkBԤ_oکY=(T{zN^P;1(lw }v:vy)`AP*gɻzL|z/eߜ:Du1W"'8j[5)T?ZlZBv+_}ͰPcn$;yD @9xjvV8KGZ\u1ir~# KK-{qN?irM@i-*t[XZNYL9P߯3`a*߅YWKȨV?SУRXG^G W~Ǐ%ѕy(vb̧<|ߛQ=jn'sS-;̛Xkꫴ7O+K0W̝E2bH +tAZl_#5 GǸyƜ&U~jCy|T@ߘz?hUm|Xr. ]$1 Ew5BGvrM@9s~8k%c _c,8 tO< hACv|q_xx9yh3?jk ~cjubLfi"~?vl0fܕd3TNp!fh%{y%m" /ZXяkp"~]N{ʻhv@IDATb _t4I:ʆRZl^?o)-Gs9w.uSV oj1_Ǹ_϶g*Wݦ܍s!scYwqbyU^ b{zk9Mo)rT}V[m_^z1Rq:rUV p@ 9ݼ2(L.sa):P6tPOQ)Mi*!je5L4qCyO@K':b6RI}x'1灗xuOb9ep*tZ;V=BȩBYuU8чp '$c+wuWX~哸kf+t9ZZVaTm  WVM(i8Uu77&s%>kT ?L-Tu>_\Z#Lm \|?|&^߱ftޅ0sV* k TZ KkI}ꦁӸ=s)oQޏD },ɬ)s\5@ƉC_MM)*6I uSDΆ =ƿCp.V$#d,aس3}mFuTݪspm̭e5E C |vvHZ ?u2y'ZvKoWE9x 38qM[B"Svڌ[={P^K'>^+l{1e[2KZP&~K XՂr[O-ݐUZJisͫp(ۓ[uBVpnw2býƒ}zN.-m}]C2|,iRy5|Rь˛fҭ4==D%PzcߊTi_ك|~A?dam^skbs7u&-о07Q/y_\i} iz`o .MjY|lԴ gNho TbB*1汤KҮWIEZ6_= Ku?|f  sxs=.m 3r(@<cT;`a+w@䯝]+*s QeNX:EŔr}>%Oh_)?a]ZwRם _@n6EQ?|s؆ÙbEG^7\ K0dϒX'w-4'eZ=1AVqc^!wV9"8b-↾Qwb.2M/UyM?a=wR)lC^L91, {]gvpJR󚐮L7A,];eh<{ cg F5z'rRr#%&tV+/' >uy 64*VD Oޚm,M9;N;GlIV_}xwz뭗D)~O>dxG|q}kIK/46x)"WxX 3Tfܺ(8>ʴ 7AZgkeFSۅ Q9JV ,Vi:^B "# =uvU`|TnHU-ߩD-uOrA}'ݫvۭ:udf|p"eVH` +\뢉qV_}tQtRL,vlN!O˴ɍe|M$nj0O|Pۣぁj?5[}ח$ "@m<IJ!ȵ~=Xa0:|FqџI*"m)}ƶ _]7Gu_ZW{KOV`vAh~/h$\#اoR~?OEpVfԻӜm~Lg,-緰>rXkm7бi\h)n gjC+v~I<UMXp <(~Kt =C;]'29nNI`$Vol~]u} ƥׇOJUتEZmaPwPƙtv\JކOQ`=sD/Ig"dHHm M7ayxY} ug"J9C W^I.H\DzC~ ǰ&e'<@oSw$*HxaQ|篻`@dk e?2^:܉U[I5 f=~_yPD}BZ>8MștCBLϺ}sR>B- CF7u$v+kf,/ġ*$O#Vb Z(&2Ql؊PRVdu?'6_vD]*$b~moh4`}ɕ9*H~6;Rt?q*4!\|-@&Øh1"MRRFnۑ7QH''[=5n/U𧥉}Dؽ@?p˷}Z_(/??ccK ,\PYto&%kluս$$//mekw\$Qy]]i6n̷1sd*X+}Uɫayv4kXؔo4p?@W]G]+r}:c/ V=]a'X5l=-J6h]ԪS8_b[)øH\-V4biA<,+0sm-mʾ5Jζ(*je &/_qoz:@ia-(Ԯ(1|wrEj~um@6fᇺi*D8@}DСXסQzGG F&w2c)Wia\ߝ@?J2Ňe=Oi?9iUb0CEq|iX y>yxDkYen{Uy7+Uysa0mwn˸i߯.Z=[=CAkʭAUF$tvea]FC= e4w-BoUyb㘖Z|>j4*XZ2x}ڻK޺1^ܬAUWjkxZ}Oco,1AW@_-N+awxaJÝk>#cA*ΫFtQZ ҞY|%hwؾAqri]$'[/AYsD\> +;;4|N虪:fBمot jBsråqgi-w)Xsj6O\=0o7"Xoüky^|}|+sHu8' KjU< mp4{P>ILȿl+Ƽˑ-;2ѿvat^w 0T#n X@a|; 2ǴNҤANvyh}q+aQoW!%^?9L{*Kz6w̛c%"^ VcE ^̺̱ܿAҶǶ,Ϙe\W5_^̤ei H ofߤ}_nx} .dS{ -ukqw4L9o@0:Npzg*{cmcIq47>0[ւ:{a)'`10zwkl}ӍA_ JJ'Gn3 gD lP0ֿJP=[6~M^SChwaY;; ?~5^X&|Cҷg^P]q`N/Vheo$MfY:%y POuKyEgdUM Ht1JFuy<{$VS绎gGț{SM-H.އAf{9w~k7'/A~=uiӜ&S|sz]!5hYnaaԾOV;mje:1fzZ>wsg1ÿ"MfcEcsUr([6wqS7)TG1ЮӴ!n -PR9_ո/̳V:.@%dE ;2 Sv!dF>߀Z@AS(\ʌQh>F{3v>EROY h5qX]wD@[2J\j%$JcOqտR;@(LOWtIb7SSl+ƎS|/{gkJ:lVH$*K` K1n> ;IP^FkJ`pyCb89δ2i,_{Rꠈt/ou^D Idi4ص|fatma=x>0 ɶA6!̯g0ľJ-P7z@F9WLdB-G=pmPjp_A@[,NC#JM(SN+=@忋t2u8Ja݇0oR];2[PK=%e~NEB[zG7Dq$"}d;N??9ԜΦD,VJ$\`(lŦr%ki'~ gg2:+6&杖X7 +ag/M^ T'-sROiVfl1]D.r_ڇf`(ୋ- f`1Zzq|31D6wjP]a2`&|}m`PyZVw(Gli]X~-~#2鄗cڅ5OcŶ+ʇs6\Xj'[b4w /<[/"l}4;HnY/Vj<{W'Q~0 >ln+ Ŕn 9D?9vE#9hݪK/i}+>djS: ۀ65.t d 7RD鬸~qQaK';58Zn@!AnF.vHxY,;q?d(.wF>dg=޺r-d6/yu6{0bLWAE ,ߺO]4, `,? 
1nu+EN z94*ĄQVOo_ -{sL N_ %.C+B7-Ul<ѱz.h7~32g{ֈo]!Eпds~} @@CXz̆QfB ̳vEe KwLy)O~{ƪޘy~xW/q',(TKO*ݗp:|:%#dm Fžy]^W<=B .Rv0O_bAK%[f}HfޗP^CJN>GW޻=wéX+y n&*@eu˙jƾvI)37m D*˂M }86ظ0_8| ]_g4MCQs$zL%/-֎ɋ$CJ@t> o$׫AP^QSI.;}X?h@'blʻ '4m{QrKWgur}i?Gע??׎4pȓp~|OېrojZ^|?.WzYh#w@ku*ͷ5d0t2o=W4k:3>/P=lC85}h(]&\mVH7q,"HXy *mYkcI\F`%_оn.pQr]x4iyRvJU@*.m<29=P(, n+]N=NMZ~9!0l"Ji!iLX-FO`SzrE딿`G5)+W 6;;?f el~:yEƴx-wM"m F݌?4qX]тoo6>r4ꁃtU!wp,Xh~p  N)M~~#@WYpUF~{A{VQy(vu- V?.i?rL], \ʼZHQXA8ÂCG3 )wX+1O. ==߼=Yl3XuWס'oCqʖf㲔d0n7DТutdM"tIՃRN_kPb_mnnM'[WE(`k(- |UY$jޯlJRRxIs.<-T!< Zs-_Ov?CvQUrj8)tGj_9O[lp‹Dx|&'hrG+߫ZfwJ*;zZ[dR|3oQ뭛Pd=i $@e <=Aj%*pwikL--(^г -ⅾWE1<Ӫ><Ǔcߧ 81m57|ǫq5VF˻n#j6i\݌]ۍ<!ETmLOV?wX77$ZZ0e6v tp[G~v(.l7lZ\5?C߇+t"uu * Z!my e Q更0CHu*VgUT̛[sw8I[G'V&{c vkg3obY$l :Q.YD~y(fU+X8}ƷfܫY[eZK*ݘH`Kh7E=0r<{%[oG5 PgQΜaVF`O\9PPcπ8;!{]zH :z"|->/DJc+; ?MhWWI@7(snM :^y@<9Uy!>g{>/2%1oPjk  _[; 6ޔ|K|9qH(aO8V"~/y<"*Qw2H[L ݃6=+d|:<.ۇ.N>ߴmʚըa?O8[_JAYܠPn7H)OpZ';˹x:((}d ^҃=H!/٧R.Dߘ܇A*0K~߮WOu+s(e,H1lN7%摦t9~YA.;VXaWulW`X<hߞnB_ed_c׍B7AWBW! Q|W_=,w!Ձ sS4/M5hE6$*m&:o;ѡؐ>b|}ƜC=rj-W%&ӴfN*~#lT-zoVgFV>kj94* n­J[J*eKOHvΌF 7&˫yu{+Ƨs[~/jZkdž|gKP:=B>Y=T]@߼E=n!n[ݺmtX*ǖF^kq՞ɍV}+(V LrC:Q㩞Y9opES%TSʒX]W*?:iMdcC>v Vs<܈eXnl-*P}E$P;wu5_cs3@~M-W'&wP Kׂ=uw_iWg*O{'_\8܍z"ݤun0ge|2 > 'Îu(/uk9WY\NŅsiB闌;:k*t E9` bą?)TJ-ZXZНnj#cΓ@I.sġy &Doҹɴ:k'I )5-X5>@=~[igk>:|] /M\Iamϓb1Ao+nlw~@9::C}NC9*VHdؗ84ѷdmU5n9\~٩Ǭƺח"m>}gpmG]Ood61neM~4ZZe3AhaFn`0S9ҥ[ } (:mC_Gv<N-+A/=puqfnn0*XE`i33mO}˝5!{U3#&@8uXwH"uI SûLA!_VbN_ccNam,A_c) cy'ǡ-A_95XU`Nxzxp+, ]9̏/$nGҋv6|#LOvvNjjPwD!;a4u/>xb^XQvw :>UNʭQoj5Ff 9Ah 1Ht9“cONڃcK2vwK4֏R}~\سDs2o1!׋?태RO>ݞ6zaˊܴ>[c ր5P[3ߙH`3a 򌗲~h'̍էBfq}+u+ΗsodRP: ゘xvvR/R<1>wBO<_a;N^ڑRf>/sTĸRr;e`@1ly+PF~uo"87K+}/`۲@_?j iv-떷w Ipp^hzH З)>A~8:HWdst?À0݇^Vȫ} 1Z?z6`q \}tRG u@z[r{{V}gua{(l@5[:R.YMoP *zlB<Oc[JvZߥ1Q3 u?Wޮ.$CoR0ͮ@{O=(s%/kAA7P^ }670ܾ -Ro/~ e;RnattO~fLfQPY{_$9.2,Ӡ% {B{$.4eÀx[TJǭ9WA' z/xIUVc *݁qelϳE廠`veXhDbp2Ow٨j!m;\E` i; Z9Ks wdeypmx˗%@IDATް]mjG ;KnW*iݥ¢>u>m y l–bٱWXjr]&Uo]\lG_dKX-쳾x0Tgc!E?V| N'.榟{U)glo*$x7#hO}šnEg]e+wn|c^X9DZ{^@wf@%јK%X_E[s=@۳O?{/ sUU~jssJP?+̼~32>7|E Ҫu@Y1YIJgM°~(b@;Su119նyU8F90(0ey`r#=0E anV'߄-΅>~PܢtC[XUMNGآW@TکMj_-爖UxHg(S1]RcWyO[zOxs;}L[ rUz8M ˱^lcK\+f*V/wq=UmQ?@|\tu?OZ޵0qe>XEN sVez:liS҇=ӖKq*(Kro/w:oq8XMAR 9̹I9wIUB/y:Ri7TJ yG-9DZ8,c{gi̺گx,/LVRO?ZY޺ƴGZ/(a=6~.{Hśk;ȣrqгau={i7m pVQ-ň&*#$A\Iȟu\gt}\=KZڻ73i.>3,OWX:Z8eG2}Ggde~ .ٟPX( ?kFhBɈeD6 [Us>+q`җ}ۼۺJra!MZI6@itZis V؛hZ+NpB*~K=5y K`1L_gClMG@)E륇W+[N6s1c }$mc>cA))KC;dR]޶ۤ\i*w.DޢZ7lV]K 2hԳwi!$.~־lႎ|>l 6+y@B+g`E3[Wr$4-|<]tѦ$ka)u/[~.اq/?%--C7#v8xiV^P0]޹{yﰴOW;|^P˧gmX%UfE 85l~`O?P{'9IYj:*)l+OסkrDgml4䷴1C<_osWS[_aeaoYUQȾ>bK'7zm&\n'p-懦TT[YZjw 5=hji:yrrtPRIgp>>N5o叵e'kDi Ŵ_<%b^p ViULK?ZE- 5K%; -U\zq-L-jwVS;Bϡ{~T+/w'񴶋qT/']$ا0{iג}&~vZ&S_(K#]gy*u CD9)`Nsm(Z~6 K ݷ!GuqsVHgƱMu#`%%H.t;fyt  )G6Hy^CaEXggպiQb>}aMkay@^WGG ,JSV)?PfwlLƍ}.ͫ٢D6+^ժ̦{1(m'BV!k8cS ̺9g~uuVv|: V<篎vGQO,CajZ[FۅV5o w&Kϣ/"Eo5tWwQ]\pRJ =ysʺ5ei=UMWp,g8_KK)lwdi9&QFIurni9A]9Y]`qk4۽Ahq)B7RlIyf< .4's"e/ w˥] >mJ]<v]AYg=O:)o\'NqOi\D>MPe\` Awbiw4pO23gnmq5Lו*jfJVɳl^^!V8%VSߞa\lÍmc|%v\tZ+tgUtd:|˭{Z_wb ۵BYvi>﵆nZ tTԧyQ0Uv\o=Mz EPJH}%[ĈJ%r'9UNųVv˳-5<I:b9_acͬƺ6LK O?~my*Qw xZ 57HkqmUӺx~C3g.! h̦;A0"k8p/`4yQ'nEU>˹B@nRy${6-}0Om5yb| u'.iRX:6ZoڕX,}9HÏKV߹дu_iQuQ>ܞ~sOXhﲻ8]uپtd&-^WUp'sz^\UO;|טDZu#Oε-v3AV-zi>Γ.J@J~G-4\_PE8.+8'ݼżv1Ww~R:Yu>E,LVEni:\Y>,~i4?(_Aq' 2MVH*}0.@j8" Wj#wWOߟkzڣ?6K6}f=wkVuStWyN*1V~0~s#Ƽ,,.<|+@Yv#?ߪ)˞FeTK0ouezk{:5$ u؅,ܒ`M[KϬ 0eUd;^JPv9˭Q\L?cTG~:S7LkOzu;j^VI, VN~=߅dMU>q>tTRr֓;`'?\ɷؓxX^4wPOKn-{g<.f͜n*0CUKϢ\a)CEQtu敷n3:tk*ȃs󣀍[ŏiӝaVlS2F_]<MU7iq5nϓ#P?ý˰l bpC|VWF^YL°x.HتwQʋ/hicݭ |8]!Ӽ<}fѨCf}{D繢HriG|9?pG[=p!G(/~;:F!. ՟vFYhm/J|3XmۤIƑv\pWV#e*NH?p7&_j% M1(-26~0A@0M? 
(Ҙp.H ,Db(IދrS~׀aᑞ{0}%9E./ (و *"¿)ӓֱ Q .9<ѷ︼3,aޡᇜ1z/EҺi盁8W7affaxؓS mZ Kq?LFp3ip:wVN_ZaH8߯|R&O#o'8P*S$ p|ap;si&2O (IԞ}I\<{PBxB_l'Bg_ A?a1 #n Zy31_JrLS}¬aw²c*D mqN^3TIwl͆ş0 m齄WafwNR0yV屛 =ߔ 8P <,iũ89T7W+~=4$. BËnalm'$G?%wcx =8ۼZyI WX>pGϢm+ $/cn/V5>/ |D}́aa7'֮a\}*` p؉|cϩٝ6&#ϑ=Z5V ЯV/|9bN9ϸi1,ݨӥOY:|67˫= :cO>L].it^;E MJ,+I=;h>_2 {k#tOҤc\eK{ނqptg"> ΀HFP='zQ|%484ŗI~TX ~>'s$3ԇb턻 -0&r"3JZ(Yw್"͂J0G'=Vay [R TZ }U1*xE' S`-\?2; &oZM=$IڹcA߼g{ْ]`k$ʵ_n2G&]eeMcR9OPW%`Twl^WX$bB_zQڑ᷑6zy k]r6|ƻgoy. 0N: @$uQ׳yyGzOY@w!]!9BFDppEޣh$ & ;cf>9 _5Sn'y$3q_ou!bG"KAwkd,; .inۗy:k!r>Kz45<ǓV?ܷgL П[kߧN֧t_K_*ǭۚM"Z-ߺd+{|mvkVyvWҬl,|)=~Vߢ nvN?b1l6~] lz}f45ui+=ܳFs߱i?SOvf?}YŶY]=G7B7!{8w"T 9EԾ탺d<| 9TIB[y L&lWmvSNzvn}yٞUZs>,CnSt^wke 9Sc]wT\e[] Xv*nQ ~̶sth2V7yc>v|[q+&~|>7o0 jʀnSD/6km!O|[J;,2vq=x΃zz8V3!o#4薎࿹v_x?3myMǸ3}f]U?U+u7 yˢOn97X=(Ou_Յn"Muy%yEWO-ϷuFfٟ}\j]W^b;\adӶʤe/U.,*mcwjN9  4N NV|{oV G|/#SCO^ݤQ_4=9>Pope4W_7 x'QN:n;eq'|{GTN|R Zd,N<8 drH 7escT{ٍs-C҇G v"!VPX+dqˡ` WH|zc".T/n^*hZ1XJ@䠖.O6Qu>X] ~3.WFHRq̷{vΥs{ZZg(Ur@_y7A:([(gyVAŹ1ZBzYߣ^bI`ii>zC~z'h%7jڮW^\ iԢTTg%c­fc}gbիu肵URtiZ;7Vqq ؛]MHstsLy; Wc4ytӆ|5 `P<\:tF{~<";xciYeKa/R-IZ̫'4&d|')||mn@D6!f}?&t~|@jzv n}GR`.]cBi10mPu>?=/1^}#ˏ\u] s?-)4 SWO6ɳրp}֏.WRi+L˃y|{@a0$B0M %QY`ZmoN7= ,TJ C3cbiI, ϱ oƣLYg$ZI/XX,Z1 @ @dqakRկBmӧ5O[6n֏gIȄL"T.4Og?vdM^,+,Y˱R˜04.%htgBcAPdNg?{*@_beg}ŠP)1z3NH[(j M2'@8f^4(vT]kX"`VeLwkj ,# DWaR BjKH3i\a pg?s-۝ ̗ܺӗiOnV$7rF0'O֧3z7`J.0[+|wg@,Kxo+1Maag FrxV%Yi3|W5@sT] pAtx@bh@}?u0X`msXpV申4e(Kկ!A2k8wcŃ DXgc|uDv~e։.ĜNÄl|\ɇ>+x5]e>6>dE:Z+/w&TSc~|fm\8KJJ_^{0֦f jZGK؏Zۋ@1nVis{ȪK:qSq^QN1wwFWjh椛?s(k%Q+^'}DQxBiO{f=n~35 xmtDR5-yd¿|Sӗ4bRY߱`'0XkG(͉z6y]wn--텡EӚap Cw=q'FҞaq6DzW`nT&L 򵥉$CPh/K i7˾ p%DlۤAŬ>U}pj^XDU-SW|/_Ҹe-~75 >>hyCIA :'BviD=\a9R{9-A/y~V?9x3CuF ұM;g,o zj\/P~خeo=5X%N\Zĕ@I) J_+hkqjJ Nߓ_izmp0<*q2~ڡ:6,__p[cUZC"Jr@р=<]s ϮA8M> +NhG]X'ŝes׭y{sr8[7sV7>'ό4t1|.0(ᝲA u*K0aZea7?`L{ڍw Od b-0Q|9sti@M1FN‚A4X$NNqE HB7@&ͭwGualM͗vsML.< b Չ5ekFK B;qY$-I:gey{BgZrTFS(]#j HӗԎ183nt v:yX!"'(W%O|xoBiN+ثn>v0|YUj5yUX)M 8ԃm]7Jᶟ {S{Jjy9{ϗWjFlYY8ϕj_)/,xbO5~:Q 6\w=6ߦ9xW[_eP?k%PwcAag?U Q/يjr6xe7|(b{0\O>w!kͫi?qN;:ga,Rg`E𿅑FXYƇ޵}P͝ cA\3^ 34&pi:pM_M}#hPH^~jnuE\v;u8ooe0˪'>jwuʩ]'O s~\+_?ug'װ:S|XhP P?cLh @ x$yzQJh_a @?My"ci iWM`@-K/.Vf5_X8bI$4Ty]5bJynGb5k("n0 ѽo3fx7{_׻3ng3\27ǫ8<1c%ke$4O%B6~V?v;1x,M95\Qz ûB㫻ٵ~;;ֳ!P~> Onuy/zy#- > K wk1t~w(.+;}sдx y>eECx0~pf}Q2jߌU*ҙo6WA+|o}`0>YBI3&{Dy2 ;|K'c_K? zsxƷXx 4&GE;ؿD)_sUv=G]?P&#W1v@kˤj|\ssO} ŭJ?c|ֱؑ}e? >BO6.#c[X3om76ߐ= ҳ#T(sct.@qosNX5{CS.-}ܡm\Z9j}ҨXooSbc!a2?Ȓ}]5 w(@'VBD8flcɋ.1 7:/Tn|\Q>[.`3?` 2~[ýd*d)h/q}$: Q^}:hfs7_/A5{~ɚw]QFqO[\[aF"A_B5X g1ީ繲n `+>c|5}oamH8"EbK`1`OsTV+OY|w=o2cyHhDjGl*x&4=~5bi硹zzVs*ʻ4N3^)SXIЇe}-*M}!4V׭ZBOG,CIDATQc<~ϪVÀ&, -wo/c b6(+;St^tn.ֶQ5Y$#x[2Kyڄ H}XcaE˳5<5/x')SuF&LyA}+(2ܞ\9iY࿪kX`4 vgsEpeۢ~Zm~J/޸r;ˮB._QyV}v^=IGKCW IAegqT,md L?a6 1|s'm{shS/\rBtMXwԇSlЏP0_&o`=n034{x(@ toz,m#5eԁ:,a穵9ߩWo[ Nh)}+\LYޕ8 g:&O,3]sb_(miyq`>r}5~.~W6'Μ3~Kt}P3^C S|Rq hVJ޼Ky` @#L/Q ]uqax&P92P8gCBu6Bx\ m8+iS[m]'a~dD}D[Pz!S1n=P,Zb)b|hK:;~օwVeS&k[uzdAR>u\շ(P~^e4=Caqc>.P aRσo1 ~~mkQlB4Ru{?pϑzT] m U Gu/ei?}h݃Wsatt\'`cύKyk϶y/PrB_K0lҸs5?@ r5LuǢ$Fkocf۷$ʈx-n1Oy'Y? 
{4vq7 CEZxpK?<tJ= c4]p2PB(`i"6Fs9*\2c ̜=-RzI1 JЀ2d3 1֨8pI?Ebi|5{kƞc$Ax:&ȿ=׷ޣygCT[T dz+K7pݳڅcy3~>,2nohzՒm+P8nY읾*rg!<66G;-0trȣEY}q.Ծ}F{uzǂ|nf )x [Y۹c۠qx̤[vѰoBޭ5uI.1,lr㘅@vKYҺthM2Tm-WQ 2yK7ECҦ۫.PYY!+Ղ/j?F4 (w:$nQ:y4/?ϜPm(0&pl[E {@VĨ`K&Bb]-]4_/6]{7Ƀ ^%İ up{0Fd1s?xa@I#0ϱy@«Zh/T`wfO۹{6tsuK~P6_nRu.,!Hr@N<DžC# `?p71y~OY,U/ Ưl|Fe|UX2s',/QXJ^ޡSVU.謢7(-lj޷"O{ #ɳE9awsAˏO_6 SoJ`O>m*2釭QܱB+w(#bz'&טx،^ow~>wx#Ï"~-LnJQ5gٿ2l?2%ֻ̂>;[Ö[~ѹwAoJG ֡_u}ċԁxҔ.1CDTv}gf[vZbub/9@nt9SִnPmXk7~Ԫ,j/Јؾr &% @`CQ} (G=*Ю{d>&ˊl4;[ce3 (P>Y*Vl@> /# T}Nrl T3ŕ{O|6|WW{3aކ|F /#ܱml0ެX>Zr,4Oxk9ބr<2< |}?s,j$oY˜ֽw7ߪ򮊖0C '9Fimguc|3^}Kȶ@A[Y|{2 u7ǜP)Mv吼t ;\JB43y$BҠ+ XCCn&a|?U Yޗ 'ρ1&pU'N[BUݐR8n䦄AB/Ȫ60:q<(P( )PGh4'PPy QN,>*LN$ۢeV0 m@pp #aBNkkv7 'K%tm5kmg[pC֭*::6tt|,mXwcz1Fۈ%u]}i9S(9+p~$.1@zX`H_/˿GZe@xqӆ5uU8I:=+쥭<Í,jQF+}ø`ˁn;>rBZܘ & h)n^љx3@8҄Nw5A2ˡ, *<yϚl{4( B'gN Zu›% s!A\M iD`(s,0~|Ph1/k(S!+'_Ӿ:-Bk8۠g=A3WH[o ÅT]RXێC+1O{W_hVWUZ]2G(B)`8܇*@/q ~")͜ سҍ}CPy4f3T{&7먔b(-_Zp9 '=a|TS1;XA x|oh9PA_ܳ5'~,.(q}KP KWGKjُޫ8 W[> XbakĚ`:vj$k`:`b?xg+Vn?q,5,M+BO(A|Pwrf;e͂C4zz|c!8P Xa]L6 (fϜi:@CUsT ||F';`4\?`H/J{_Ґڧy, AɵZs!X#8'$[# a(sUo\Qv5(-rat9w:!vȗGx|րp®Fu&y捐J\:5]99{4fƲ ҍЖ-{:m֪ى`(ee7G?:7vesqA&rXenG3,(6V_zL.,Cp]5'⃸ *׽c/tcbk+Xp}ꓵHZ6@}be{;h ̬c rP/M3ڔ*# c cTˤ ,CхeLxA_% VKzh><nVg]r3N7Qa,N+hc!NA 똃!pk2pJ:ѩ#b ߸2[.A߼R S~Ҁ2UDk/P)aZ&Me{S: PFxʜhǏP(ϋۆiqQjt=wۤx-bSGj8̣m?nk[@ok۶2=[0DAa.τ.Yv@,TY~o:s$uopat핗Sw3^uIWyO9|\ms|aBT "wu-Q7Ym"|RX^nmNEdAtݦRԩIJ͵ъb~]OC+=HjtItmc-5۔J +0N:=CkDD qкZt6M7`x՘Aӷڏ3?SqrC@f2E!nLi=<&dtgߥw ;/ϪYBv|ɒc:p*,4Fm`doZ4$(,TKхu wL7n];<A"Bț wjf.[)rn3Wo:Tz(Ar[#E@? Vܚn>,W*N hݎĠ=@l*XP,Xs,'OY= wZ+RnxE#0č; @oawVbmK{ z?StfîO0@Ċ\q_uG GxPn X\C>be0 #o 2Z"xaMTyzPOG=).9ii-,an3G;gw(!8ͧ83=K{tV4D"=쵌g~v8!Hq80viK}pp-v#kA Xjo[k U)YO8'*tZL.$]|Ky0zW֟Rk~XY5xCQ,xn>qU\vgd^Dʥt6BUF*//}dmϤT!tOxocwsTYw,$M^R+ɹ(RkbJvDSX]Е`uXϱ\_MSguƢ}w쬪*P oä]qXy po ֏|XJsniyW7 k-J^wh /;E06(B1Z&$V1l[ *k7aarZz3DUG/^)o>7b@{}3G[9y]>eaK+5c'9M6w]XY?s=pV>e3Mz8֘H95-7￀YĘLPaYXџ%ڶ}8+f9-CYn!嬳1/f,֮5˴&QLQ2>8-[򦡿= /dp<[i59~BHyq}GXce_,?b}@s2 qyG}V<e^2yC "iCa 4=˵Cproߗ| oc (>&{(jp2W~=C.ncCjPZw͉\ϫ\P`HqF l9k }csK{ߔ}v`6R26@X!F.b)?"X+6</h Dߺ_"5Յ}XהEW چMtf'C`2t9!bҥ8pu[pqڛڳOHUt,V_YDae`k2p@AXV.Yrv[K7 eH}WNUJHw?8KWE (3}"389emb_3k{/ CfW5jzξ VScah]iOFK*&ϚI\c Cy!z4Es }I~/JkSw3;k)/W\~}]N{yoCyM:a;|om>mq3O&vud-Sf8kA(XvNnm܉xq۪ ʢNԉsCEpn4Yrm3Q_0lf#sa97fpwGnx@е^a@l@G鰶MK\-kye,) 2˯^YoFJ|G?`cX/[ Q>ë'sg(O9X9}>gLl ~ 59gz[rM{{L#|~&]$0&>v[`Ô1 Yg:9wz1S2ēgmݦ\jaMݐu[Q?Gۓ톢.`-ܺ#z{d+UEv̷~ {F|N.܁փaݑ1nŻz% 3yY+j:d%IENDB`././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/visualizing/_images/idv.jpg0000644000175100001770000027154014714401662021077 0ustar00runnerdockerJFIFHHExifII*JR(1 Z2fHHGIMP 2.9.22016:03:10 12:21:10:Photoshop 3.08BIM720160310< 122110-1221C   %# , #&')*)-0-(0%()(C   ((((((((((((((((((((((((((((((((((((((((((((((((((( 4y#eGxy冃3 F'zg\}`*Pxgn<.<0 Ld),(>B Xyފw;V@;F<3y&=PzΞ=/=$f8f7A59q^g&^Re$DD 'ZRT]ў@v{ǝXyy};F<3h l3&f Ae=ފ#ӏ:q٤; aǗw  7AG8јLf z*PhN<Ǜfyy}я/i H\R{}_teyIyy}X;Fe:h8`Fg=ފ<=Xs|Z^Wjvyygxi(1 !g.- d5J|1L@z*P%evڥ=D\DQqQ"$O4<).0|<9̤{}_0$DͳIzVa5JX-2ΨC{{-/3z138-.&pXHf54 eRF#ya2%{}__KT{JHtXrħrR;Ojcҗ&ܼ)qѕՐatc=OLGh.46o8^tO0g3I@i<0qU~}.Nub - Bʙ8ani=/q_]ϞB}MKw+Ya:z1Æ|`ʏ4LK1m*:Py o5-+0;>0!ioc;Z:̳asҾh0}7ʗliiy]Xя/ ֟$L枉DQ0O<3(8h*+-+<4蘊@=^.o/^tSo󳳖ϞQ ,ꅜtdGhTL5=-%Dut_qѕ(T=i͆ˊMGPtd=>|)Pn<>a>U(z\`Ҽjv"YEYRʲ; gT# Ϟ!1S=F"YE}WMhtڮ򚲿ke%;KKR^"$fF"AqgA <U~&#pJWydKIF>Af:y'zǠk:yϜ>D`GLy£a<=ފu1w0yRco;H'l㬩;Iv,^Q$uahaWmʻ4vgnX ^^ʞM3A9ak1 M%e%fCyq锠 Κ4Tb=3X^PHAiyP=ފ/X<59gts±c (jrҔuoh{mrW]K=XLH;FLzf=¢)Q1 L'g4AVcGk=3atxUa#ra9kY:]g,ZxW9kj8,ůR)o1 DzBb^LӴCyiD8GIE]5:)eev=p 5 fiYN*4 ʊ`I=Ғ!R{n)0p("Ro0oEW  @$yӌK 4w 
[unrecoverable binary image data omitted; tail of a preceding archive member]

yt-4.4.0/doc/source/visualizing/_images/ip_saver_0.png
[binary PNG image data omitted]

yt-4.4.0/doc/source/visualizing/_images/ip_saver_1.png
[binary PNG image data omitted]

yt-4.4.0/doc/source/visualizing/_images/mapserver.png
[binary PNG image data omitted]

yt-4.4.0/doc/source/visualizing/_images/native_yt_colormaps.png
[binary PNG image data omitted]
Ea u0 gDRmz$47-H¥K~v6 +.a8y˛o%zG&醙V!C" 4f A!mS@!BwShVG(Q, % wrᾕva5ka W(.yY=k&O0h=O>AHvt'xc}CJ{ =bL ެ0g2:ns!zAO7)&Z/;LȘEuM;C0lt|39[Ľ*j<pTV.fF*c-{_!RMx;z|%}١!;pwo~(tD hv\Xe{~FB[[a"ʠ⭅(C(9hab4)dل&ZXPxq7ϧȪi9:͡hGi/Ð:%Ƨ/a0׍6)58X̐.9zSB|j0DږKED~E>ׁ928Df4^B6h- e8j9^ /ZQS`M>vL^@^4"2) +Fy5  V⌚!9/xȉ #N3\]{Z[ړzvR5,)(yb)(Wv0#T1Օ9Ԋ絶T>v0!##Pă KPsC=ZD= zzFbǧ!Q=Zϩ[0>ο HCAnwh4-Cn=obB"(t+CR_ri9ּĺܪFtJ^c ;#~̮@gn] dGA'Ȟ̔bqs U'T@@uE 0YJ`yrc#ZF>{zW, )Iv\Y[˞1˓~y Ҕc<{zq8DͻDne'Ռ9{1TB[`5D#߿-v87SB?eƼø@#rEj{G?)4]в2KL.5w9o id_O-~ =:>(!*b8ȵl@עWUX 3QmpTx7+V@]a}9z0w\rGtRoQEgzM.ayWw aVzoSQ;jeiə8PBRf|_|:q IM!i9[}u2P tN)#zhNAD/Jh[ Spv{ZPCU2eKmEX3hD$@p4DP7`?b0 <.-{(fƐؓr RA%bO04>~ 瓁(Р8&,_gQzH2%V;zA1@ V4De*,r jSFFpal8.\FCٵ DTv q&8(==B%*n2bqJt Jb>_ǹQ9B1nl iNv$: X+%ւGw"ue1AZעR5wJ32EءC;+ =uX@4(aJa+w=;os@f4IXnozOzjnរ\Ou#H5%UC3m[$,bISt5NgjYnG] yJr͘l 2,p}!uTUm+p JȈ#%_kRNt0\#Qӵ zBg0LP,;/*<9CġF63:ˇӭϫLKLe,KCdOf*yj/g R: _+ \XzrE#G5nR ƃ-9v pN2wTU\Qh끏d^ǂ.n\sl/4 `?9mFPj;nʷ-rLu.Rn{ Ƭ3ouuaՍ -:NZ=ӍQ&'?#-r ۧ3A lm;YjpQJ6ϱFS3L64cn@'*AWLRvLtrX)$F~oU'4%R'X;ƞ5#>3?a䰫{J0P0r0Rh'K8ujSB`2japȲP_3q[ew0|0RSP}q"^Qb V'\[,b .]|ݠ$z(Dcgd6ރQz B$R~9ڟmźFHjRK!Oc,Y$-z1 LpG؍{Ŏkӳ{M > ԌC98K@n^krPrx˔"Y8^v:܌/.λ-_x~%y/ƻ/hO׽C7;|+ /}x链D-||O^%O}^oo]~o~?w~ o{ߌ{0_\O|3g=g5ј/9K2\D8v,腁':?Ão M bښc[o=cpqB*~FGZ& Z8 Ch$5oKçke)+Z@Ĺy݋j*80pڼ;jpt d9F)JGoB({{+,K읺Q$|PDA&tre͕=vk>W94R$-V\`TCև`v2I12fnstp(}l:ojO-Ф^s[<+xu7YAg5JLbOc`4RWR<..257Yo:uӁf- Rm ^w%,L {|CA~ ݵkBP\ƀ"ar喞򨅂*6U;fo(vP]3PP"li`לQE >TS zA;޼{5!y?I-]3N WGJ@FϵH_Gv_fǿ /X/Oy =onŝ_n#M;?'I| ^>uO`p}?xy_{[/앸;s8?O߄_~7ׯ\\d[ߎٳg7|3>ц=yo{]#JX{L \pơ-AMI*햀 ܃Xʂ\)h"E"2I-j7$98B/'q]"ӿ59Eqx =h2CqH-OcƾtAAN!EC48>teYJ\ZÁa"Dqz͝Z.PFBk}կӾ+K pׇ=rjGft1PuM$]ɪ;.KP!StdP??B5 !gEdJ)ԛ[(|`r๧9Rs]GvEc:`\o7uPy 2Z#|[m<۱6SqWC1(J1'-z8mxw܇}oOx7,>r?׾ex >~-}id/K |֙cvBW(>pso>O|qM7Y| mw^1 K>-4)OyOݿt<n6xY'wq/͜<BR"(%CUb"8Mu9v X-R{RnUK\zI,nU[l"Sp$ൠY=oT8rA|[<{@%nD(SQZ&pΤ/jf:V8WL#~1bIq&EIoX1k6^w$]C ,Z߅1N0gNCh6LFYWE͸ {a^|'$nؠvHmq8<ُULӧNќ=_+fa>!cQ;yHKZm@7;lmEHc֍k_}Gqbvyg3ݠivZ2[ݯ۶QQU))rM'7##ESajtTRPS@ l%  'HeڿYv*'p6O0£/Nu5/2 sś_7qk_4W^j\y7.>>gox!wqWE@<6v?zͯ/78}%|dN^\xۮy>f%4Hu]/e)$ IڇϟS$ Ι+W-HɁmgO_%7aؒ="[⡇D#j]xRDVPK~3PQJ6ZN^)]|JQCtHҤ[]12]rIq`aRO#ީػ٬8Z_пao2)t== 8GH988$@ WJ{}_P!f5ri% Z%Q.{VI [9ѣHH'Ƭ\jK xN0NScsK' q*`6\ u#@uѕw݀5VCHi1ؙE0/T3M j|bϐVϩ ]2in-U׆ Ɋ)498¡y\3֮K=3&j) fM<yp#CҏG@i`=>{ќj9э{\_LMѷO<1q q ~㞶RȠ!`WKT&A7SOh+m` TVF^1.n 9RB4.Sop=f%*p-"^=yxg$3Aœ'TdtZNb9~MT."xQXbDZ*._[nPP2:(3tD#Id|Ccַco|{@xW55w~v.?KqoF>3; t/| 1V7kS Xs&o=ދ眼E J,x+^>I/,{UB``[ϠyDu)8sIx䮻Ҍ@zJ/Hm0%k ]ye;fzl%+Svld]mTI I*DBӘ):vᑋ F牋Iȼu]≍]عNcPj@& s9 l'yrܕs~u~+wӾZL% '$.ת%suqޡzԻ'[OУ>펉^CUHp.-V)]ĩZ`r~%x4E]z1dG'BuuY9I^4T҉U-b hq&7 ϳpAHhIsA[y niZtHMFۦT)4OkFUhl]ޭo0@ZEZvQ+΋)bfvnXBfҀz kwl)OCo)IISU+ܼNpO*퇆^o5ntљV=+`,G]uFG:ZV `eo5:`T #uR4UDOs6̴rg3|LN=Bcϟ}[H[O f}w@w}7i_xO=fesO#0xar{BwDeq{+6>̥ucdϐzbqMp>"LT*`=b/qD!HB3h덁}0 = bC`vϳ~^0e3D5a`J, v9Lzqc$s4mP[-fWn4<ӆ X>T &iG6Wȗ4FJO^m3rQMjK"-T=|{$ݱUc܃V (8 b8R2tш#*w+U3VJh! UXDV~S|w}: ;:&A2I~)&=cSG16 s0l' F̝aϬMeƸ3sG?G\ {EՒB _rE? r5|7ϹF>g~RjXT@Bh;MӈUmcSuw׊.\zF#4xW+řgp%Px\?4@hkJlރ?=}6𪧞«57O_zW/ƛky]~Q| ":zv6 Cc>1 $u:o>O @lyċ=q( 5q!/͢{ٳxeŊ Ȩ5sLsziڢuP5 PA*PHQ~ Jb%=c_64oY k.|2o3 Xso'TY`@$z cLF3Qd}.^VG.wH19]Kx64=ps_m%4~VpZub)M:/7`t.\^ͽj)F'OJ_q`'޹^-!ȼ5OyK$t!m[l<%\C؇{en;NW/\@ j8} -a\u?WDHH `o7zvC9S2k_38q{mEjjD;ژk/;XNV_*'ɼAg)5Qȵ`dϭ1':f赾)P_t4|'~9*r[m"4|<{cħb;jGSєY#=EB?CSnN &IIli e6H2xf:[Ƀ+S#D0d? }1tmhK.=I)ʱ1P,,Q5qL=lJKEN *1ԙ?Bۓi>s_U4:@n/by&M:. 
Dܹo OOΗVZ~ƹ7Ն,sCYi\}LNM26ȐnZEYs ETzXFpcjTacʭ'~zߵZJ߬|S(塇UDtcmU{䝯-Oy+4 )uA{>?[~OXU3 ҫEZ4љa>6Dz8Ο?e׺!E,{~*YΟyfOQ PxdLN& s*$2r-<" ,VKJ9-e&ŰTO2 (DK\ ]JMJu֘H|'GDKPpT,6d״41,#n4:0P s#OG眳'X!Ž`pFVb*I¹&rɹtCxO@W@{@FӼ͓Ő̻7nx`Z0x1l b50C5jA t zl muOŌ#C熠z @ȘN-IS K:oV:L<2 5K1:j574H8|{DңY44+~D9Q$kJ&әKИhm=;BsF @exrd h}[3nf[3`NQQ;jet" ,!6f]ӴfE=(`)H @,*WTގ #&sL\ ]zay&A&ED@̤0W-dƓ<6cd=sQ 3/ pEj-y ?zR0?>"W =88KѕACǡ`o;7I\ -`k0'tz\t z`Hg yTmtZDp>GV>^F#jtۯ"v;IƇ -}&P$ITn"(0~A`mdTW6a}n^VȻs/.[=1ѼtḛxI 0cB(\t@dUJ|]Xb@V}p5Wat.)]DN狣'^c;& K J1b&לղԩƼfYFD8a/mU׭5sf&[OxQʙFozRO*^ Ja+ē_JHaXrػ6sXR-cSB]oHb(J Z|]ct yF6B'H4 O1]ΕS mn'':dhNƯ xYqbnܑoiG\:ͥ]AK1CN:% FL l3L߬]ӟ3YBNc@TL.o*4\$9)2!x# Zc8f P33< * I<^ $U#v$f;#q9ckk! 38{{eHÔ1]YS_{70y+W+* 7<.WUZE fہ t{RB- :dD/j4X#z - 'x؏`3 `D分ܽR9YZbQz:v5ܚ0mt #05jUW 0[p `J%5Hͦ}*ZWp ?݊U(M+tlǜ8Vj0wd5`BQ>ж@2>ш"sS] )znr&oR_醵~},;VN-A`w&Qw3Yf2 !@H"$a&Խ(EDEu]jk]ZEKZԪE6x]Z @! 6Yg32#$};ss'>wKw{~?7v.,ZpѱĐ$Z"JPHJ $8Xn1L<>KII {mmmm_A9R]FR$=%EUzl̀q@n~dzsđ|}v*+⛜9{6Vkٴiz@bG1cp8''N8e9PA#[gɿ"˄To]-߅!GN9s =)J^_[ g3z`RƔS6Vy&: 5%`GDo/?Aki+Ҽpz!Qҿ/+Jnr㫪RQ17˼Z^fTk~TwQ͡Ꚛ胺Fi:MMt~ں:!"uc;vh\1_K\DA]@GDĆDt8EHtϓSՐܰ'|i&"I|>4USg0~Z0fA5mo$B5|Zl2f G.!NZ^l"س7~1gƺs &>o= y3g6.{zC TFCyaM`!q[[t k9m"ò>WR^Qi̹u6ǟN reg9se˖8(B^^^_iox+fIDV+q}/VNyغJ]A׉@gg9#hih1g/Qo~f[|6K\(=Ѽl;unÍ-gśNZ8 Vg0~uV/fN?;еǏ8qlݾ[qQfHLdPAl(>MP Y/v}/vc !q=YۗѣF_@(~W3AFˊ+I=|=" 4#GM/Ckmb8g>Fϓ "#Q284c/ X<6o:@E蝈fO? ©w8Nj8qƙ,m}zhlw~] !5+koqQ"B"-S4g{vhݻrsq8$[OZ +s)tY>7Ä!2z7yTwRod|>_v_#}WXN74N_M6zqZi;TŅXZQ6Y܇3 M bnmt~?vT) xj3jLFB) (in5kD65%VfϞeub g?:=t<ŝܜq#邏.%?/G>BRRo"i@0"4[n5&!=lttVF]Y3gH1i'pq%+q 0^˃CH|tmm- 駟*荽͙Omi1ƒ nZΨ%1iQ3Gz0`3r&~B ۹9`/Ϸ\:vnڴ[Dz';č%xCpqNPRgN˷I hgՇg]=i] 6RlnN%t8INIA@ApTa u*~oHqڵ,$&&4OڵvztU%e={$181_<ӉP sjj^Qe`ªAoiF%7Z#T7}(Q9v9Ҙe5knRL1/ZAS2c4SOO*& "$5\' QH<6( 5oGvH#eQnSW_'W(})\B7CVVh !|nXq7"\nן&JШ]k^ȼ$p4(iJbB;wA}(Ǟ/냀3 _n\@2eB>gYe4.?ĉgİ,0yxo'|k/?RNQ-`aRR{L0S1}AONM b©)裉|m-8qЈ$ }~[~`0{7̛5S]yvj :#fXljzpIz,^Z@ȳ4ҬfLԈ5$a ~& Qs;.PI.캆j4ĔRdl'_#?$axHQD((}Ўy&u]&LkiDw~=? yM&SuW9O_{C]弘ƳZAwWtX\Ν;EPηic 9 -Nu uuJbTB%L\TGvA0g* /=,k+^ϊp cc<ĉGpV<)+%ŝY@9(.> B⣏,jI*:EQ!𑇥U*gF䄾>d}1*2{f2c4YD`Z:5r |rF( B:*YB:@ՁCHT8/ב؉fm.|!NKHgMSʛˀ:nk8]y}9sX{2,9}0պMPӏ5+et$E5;?NKOq|.š&w)b铯l۱[/m,OD+nÝ4{HvP[4|0RxmS? cE s׬[ZEsKsL $fBƒWb:͔qlfw=JS)af, 4`2c$k; C8r$G9\?!PP!i\D3A̫$B_"}"qц@0ĮEo}{~+&M8opX4( 鸈c[ zz@/1[ii8#m6"=#|̥ HZ|aʤL:5 tzr\nVZ8eF%BF7rHm8"p9n~HB1/}fl`˽ "6bB{PpM}^OgrPYɟPb=s/a*x])Q0`w&sǭ>{3p\Α];~'ζt)) ^`p 9n;v8Gh@D4Gcs7ݻvS[[CMm*##g2ViJa~ϚXquVtFca|(VEi`^áCzA6N I7X ă Z0He4[ #fmGCEDiZ2LW"k;L +V+#7ƏΧ~"i֓9\^zAGKüRPE%MM7aCGǃn"b|Gx Vڱa 6VWgOYzp5uI_Oua?\k їՎp]6Ϛ_:mĂI=F{N;U h"d+K҃sݑ~_.&O( ъw9SWCWډUQU3 ?w^瓟Gaag}\+jc,!N8qp/gA "}aq:Z6sj&n7)TP?AKi&y"|#?QpnXeI%l0eOªDXQ!8@YU@T-Ci< r%[P{י}ΌĎv)uڂ`nEڲYj⪗H.qn%GѰ!HPWfے-HVUBv{<#I>yQq#{gLRJ666Ҕ~8g~^vdMʸ~_QRTӥ&3Ҥ&mvt+ܼyMևlRJI h<իWկ|%=FX#cII8&dMߚ[`azF<q֍>M>'S @':xK+<'S\ftY^}/pKpbVWߛy`܅niK^+=cq|/~1 90Ln|myyٻO{&y D s}b9k/|<_½^ȳ>[G[^G?'C4VM!hK)96\p'D @O="Lq,Sv5a̯SeXIl4%UQ0UF}~嗿8 spk=\ @щ>| D Շh Y_~XY@ƛg?|CуtVݍ^΅1BtIRJɾiӵ<ʷWrsC8+ז7v]`J1Os:ɍ촼f*ӭ$s9sI{_&t?Ȯ.)$ ;ÇA˧=*T::M+D UJSYߦJ7ߎ+G2O~_Ll_ySes5$> L~,UO|Koz-b|b'HW\mўW~^?e@7|<^}|fsTATשY =HH|=b@[txd⡘ev[pl^p3ߍ5ц1,3HA4Q.{aaf >V 6/rgoP2ɜb<2u$e?ޭ8`Q]ge4|5& HØS0HXYwi*ZNfb\$ Lc6duD&@dWV06 0 La`UCu=A%~l]->ċLumL8 }sBD}ƣ)U`ۄY&&T`Lk ͬV6BkK@ K ΃}T071Y7mefu2$Iځuaa&ڰ&Ⱦ V24L3TGbi!kF~YRНffkW6tfבNF%L.y|ߺ\6_CA2̀ǹni7kc >ɜ,r#x:(Wiw86C!QBQC+cW0 0D֙VTbftf*k>1VL'@e `Es|qҏY8Ex h!}HKYk;eb)YFx XY+ 0U!3}kKvht ڂX ywf0 0 86>g?s$o|)[(׵330T fvj3=Qbԭ@ Wot%#\ZC0P7\'6Mΐ(2~Z(pa2$(_]㸳vHݟ@CP=RD֊}}O\Ӝ)Z>;Z/Dr:~GCh0ZOƋ߆6 pMds/?96?3aygVi<~h;2q|zAy5t?Ͽw^ .!b,myI<(8ouHsB5o[߾To>m<۸q{fŬg _jqI5k&{7z0*;s{`S[9!g,4 0?d# 0D! 
Yi&;1ѣ=/`cqwh@}:=^3|>u9XjˠZdG'L :o2o}saL^w Nʀh8&aaIpxσI=aAahԘF kxÏ4F~Mf&;h W52l`>ϵKudu=t$G_2^hfy'f{8' k 0 )NtB}was8ΉA%XA Rh4cF9@1NR0(x.F:SIOst`;#L}X^=iݮ秘'Uy"m/0 #JIVg0 3цi gˍگy(5lqf1B鮑6.S1'%Hu–qUFs+?i>3 W4vF8ScRڍh0{"R}?0 цQjLS\ 3/So< DD Nџ ̯pmhVᕌZө]?O7*!fBBSNg0].7FChG$ws9 àY``-%) 3 0m 㢘]YǴ~1Wͻ]&¸mf5n`\ PiEMletKh _Tv ~urZ)3ү\a6 eafڐlkW% 0D?f郡d׶ME 3AS(7'նr̿ Nfq>`<)Dߚ?wuec+͹`83-7c`6̆祉;K}h0 c3 0h? gDc96~F;B muɧWkr>ʤQcltV 8DW0_a9u[uWD=;c~6xNv5F{r T.i=Zskr+Ϲ92.ǧu 7ʸyA,B&bПk!B>A+`&t 0 /Hǐ`0|s&0(~kܪhfQ 4 2PLWnhV]^_m5f 0 -3u a:o7Iiv!17dF&sp[ Ek)4돨Jq+e)U"Cqrз~~!7f|^ݎ0qF0Ϗ37maqdzM"uvL0D6FͥC:}1@AxAfӖj5󋶸dPԻ GcߙZHO`*W2!ܰys\̈bsqMc󺌴a&0 @gmtf aDNFfs|d8,4L튨Q`MLb=|q1QG.Ӹmty^>f<$2*{f+)ؚKľ3.bȃ' Laa&kI, |E3aD8B)f,nCNb;!T`E]/hC}/2h=)os>O}o"CKQFLIiDiws6:Ky 0g74$ƣ}:h0Dv`mdla$X!a .KZ'^gAfJv$vfovq(9SE` &[imt}RAr)L[TZlIӨ_uDa8& NJW d~k0`h( \c'fNJZ76 2 3 EL59]T󠓝ST$xq|,\\ͤΣ߇nDE3Z}L/R 3цa92n`'e sۀ"/;0 3f"O1E@i5Afh@W^b[Lv2-(&Zib53+jV|hNg_?Txkя|i=/=& C9E=x}#ӄyiS\F7  Mz~̊:5vkк(օx|8zatm)2ѦZu2=%U3 pM`ƸwLV7+[P0?: 9`_jw{"j?\J-tvRϤ JSRLe">wxq# FI]5d}oՂN*D}vՠ5g0 BlXī9Hgԅg,qL_a6Hby fPx.5tp hͼH5/K PyS*|xR}%7pE^Rh>18|O40y{#؝0 Fʫ0TEKE| 0m8![}׷6fWҡK̈́j3@wcuwǡx諁soFyVk5ծA1~T=kc&a fCg"0G0 Sh ?]j>`TGu4'zu N¬5^+A췚O3lJ33cvQs`K806 + 4Hte MWj3 ø0m'??_[;a9+|,Í(s@1К p]N9>C|QXq;AOy]iT7JR fE fJr^IΚ]4ơ3^4} 0 Cd)}2@Sٿ=l$Y1ﮅ7 Aa4j/$n'Q!4ʔܩe.oDZKh7_4=[lO_Y1̴h0q ZTOb 0_Q_;k[3*w_NP0*4qy`t;KsEPT{=tYǘM.e&yy|].}k܆9 Tī-\oa_~=6~*6,c㗹79?yz?:W]32~~Q/Qop<jgf:QAW:3㎌XdMBwVL3S#4d|_4^c9k(8ZR]E= _Gq@@_G`\>]?Da;ώu0 ڰa/Ozeꢙg0D+2s1P0L9XAh*|=z"ס3o溁d0Q"ra $?pwj}^tmJ~ĕ{ϛ)0 pm8V s4`'?gqǩֹ42Đ:)_+ iM)jf!4=PDcn9qHk}Yj saM|֎Yga&ڰ_ծZh. 6i0K*:}UvjF8!m/w6j}.=Ux*fyQ 7 ̓}9G/!X+Kh .9 ]0:JⲴ -3i|(7o_q_"2) ?U^>Ҥ5цa@Tkƚ@)A6 0Do= xpVQ~ƂY֨1ЏLТߔ7`4<wq)w$$ɋC~a0BO9j- 〞v-b.0 D/ 4|Ӥ,Kn2Q/wq$s\r;5A$*ؠ.f/adsFUOC a86 Cҩ~K%)H m '-4 p]]!玟p^ ]v}{ؖ8Tj< s]g+AipR>4 &z]B(-vzNWs_(!]etDFTKE;O.A`ejCKϷRw%_U^Y/QWdAωu ۾g"# G|aZ6U c (WdDh3N 0DgdxvϜ]>%kIV|zW|dM5NX%Zd2\PTR<ʆq<\˼qDv`͡A{!i 'υr0 q:?mD$k i>hNL/n35άe֏;QVgf!p`wG]%7-[цa(DCױT5j 0 A6<@*S:fX+px@Q- ;QyP3e;(3u:fuHfyW96a k  yp!XO]JfMD2P8G0.z>`|s1B+̋k:klD!d[=8Aaקf]tW;Ձ!}m ;_ށ6 &qnEZ kWۗ4au\<plkvSӜgXD.-e=Pg̓$@s#. ̷rI]bmQgdj\KZi"[;:](i2Z'uMxцa&}Bb t07e/}+ bۨx79DGXswi5Ґ $Ϣ,Ɵ><7}/D;ż0i/avft§a6éؼݚu0Bӫ_@0D;w}~ǣ$;[o2Ί6XloF_G5jLׁa&0 3QO}7}S - 0m8%F7d;_%se p1Ղڋ92+h !dԊn djG@尜_iߗc/س߄6 ekm*qaf }Z>@3C訋ab!k!LT 4Ɖqoݛ[83&AbW=Ki?y6Cq]qe"}LaLafW2 ~,_ZcF@[lḣ/oil4:2m23Ьv-y5p)Fm& Z~e #qS]cJ#KDbq-WFvTu]]-4z<=ag,`~+iɕah?t j܈F,O?㱼P}2348㤩pq5.9!j 9qCL>|1piiLa\:KU)u0 3tE4ʤf<4xm:;bT` 6@s$pF<156xn0' qO܈Tve9#"|9zU*cac1fqfZ/j]P96 t;Ywamoh-#axӾo-z1'oʱTLamvF0go ,5m3+}Yoy>/{3z0HL=臮;Vm*RφYh 6 0mw'Z;3|o?T7Kh\9PXX>g}k( G2R6?;ICh-n<j)Jh}{"_P1dص;c;=wblAs@6 HjF#^W9[aD5}uF;5;'g@d4k oE 4rඌ>T={QqA&h@6[rL!DET/N+Ɠ|AюQ7 Ίϭ~?>Ї'f 0 5|!^'\wm{af =zYӐi\K5nQ@QVZ"JcF]B%+WG3LS0Ã[FTNR/} rS/> *q}| GZh>~={XOUF,bTjMAT1[[/^6zDx@[&OjycR<tDa&1e$~ƈFF`f: 0DcAɯ[}ii%cY3OF!`҉mn5 Y9\Ф׀pK`?XwԲ> yƒ1u`^ooD1Nu ؓ׸PPuK.Da-tDTm?^u p0 ѻ}4eagd(}@ jDyj]c ElCA탸`*Jb; 3ʱhW T;?Ո0?2}Kdە:`wg}~pь :U]3D̅ +ul ̾/3Y?1 pmơ5ؙac>PL`F,% fF*&i[ ``ײ|Lm km'~_@?vgҨI5sh< ͰDyzRI:kANu l%cqB jbڧD=0>2ќsϷ!TQ ־ǔkq]5Қ1:a9뿦?? 6 È~E;w-W[:^'s z'n ,ƗL 3a?駃6n_Cl`C,|cA/+pS!&#-=)fzcI'_g(g%}ߞp̿5ffQvmlg5Rܧ[k 3`e6ԔFɀz|[9_zC&0 e!avK d!e`Q>8GU"Qnblclw"tqEt~a 0 >o!K/__9J~eVyx#0@JK agr˂+G B l>7;p._Dz@OL`;&u { t&OOQN&Ph5-3cèx򍯇9cF\*js`:ޗ~^kf=̓ilP0v}Ya؍#)A2D•ONRZ{fk龓d(;s]BDy9q+…Uð;?,0!,jh m0Eh |vj|A{Τ fQyyCB0 q}P t < f1 ^F|Q>[ [K̋#:,`޿"^o)sf\(;a/Uii>]2j_{aQ_5LagK',!q7Q'4`I@k;=N0K3eoЗvvWT:?K@멧~׌yjfO<^O\1;G2Ac:38Uc|8-`8D0ή|RfQ|:Mt_c5r[BīO[;shh1jV-^cf? 
r_*jK9M?~\$P:H^C0 8 6wv[O{1odE;#W]9yКj` іLRֆ/?e¥"Q{3Q˶`ʕ&{h0!f~짃h0"rX &ug͟+p{#طp"Ґ"Zr5L~"Kt0 Ѷpot]>()v[3B+_,4N )ꬍfu{Wn/}Џ Бؿ\V;L;5#1h1k] k/Ú^tEVV>oTZZ_Lժۇf~7jž)M2": lDs?3цadL@Ȳog:~mCK%瀚Y<$SHݵ!K[S}`6?oW2G hGCFM2 fbqu_l58#ݮﯖn'HQZ)ޯzL→Źsq% ZTC8tC _U:H2,3/(}/!syhJt\fYwD_bGzK>hk}hD/@s?pD:tTQoHXË#,(ДT렩Bf U)L:Ȳ25֙JT\;sK7zn]GȊ`xMzYa'-A]/?j!cOtm.NqNm'c`Ff 4N?2ܧ0I@n!Z[TƑZtX8*uL#!bJ%1 0 њ 9xwۑdY<2w˹CJԌ@ i@OjA/iFEٗ]6Yn0O3TT]8,re¼rWS\alx1++xDžyFqh6@bź`Ǖ%{ol]3BŇY3)P3dauMZ^Ͳܷ$#W8u:tFCN F;G.|n)Hl2.7>x?;͚ dc~ӽ})dg̘ahIhm Ym@3g }m ǘEZŞ j0? @\IF>qШ~HގC3fL=cj5>^3+o% L0\P2ʸ-B A eIwh*ǚ񓈑 BeWԨ$#2ΙǬ|#gy57wfcuVZhPmٔĢI+ bz抍c聖yĜFvl_n!k\! uŌ9Zѵly?Kjqn![}:ֲ %ےzZnKmbpKֱR`UQ=YL(e[˺&]͒=JK3fL=c2;6IhNMvxǞs޵ϠJhT1 <аRb;1{MIڡI1o˒"˲^D<4++ $'$YI`5P[y}q-^s +NM4=cƌ͏3mA G DB &U%~!ԘtfZV\.ׇwJNЖHvA9ab֭S"\Wr,Ǎ.|ƌi|3envJu-WjZq䴁Aؠ}+纖,Sfy"#~T1NtAX;, "'b(\Vhu@4*6h`T tTh LKπy[ S6?TL;ٜ1e\?2ݢUpW^y9o_'O[m> xDϘ1 ]H/#xut 鉚 pW@4z@?4M3њ} 1' GYvz=A/h&s$~6:n>_ܨ3fL=c=kʰKfꔖnf.asu29ۙKoqsv:dJHV,: 7Vb8ݖI陥g mǬtoY4˥T&&p/Cf I;  h,T,D5>`Re|ߝ2~ƌ(tg;@^nW;s̘1/C;F:KwXk ب)CEkTc(kَ3?,(Xj_;j?c'k - Kzpb L҈R*DϘ1c9fo4e…G-nU eAdlo@_c9.n .WSa Kv&Y^S[Ωo]TlsʡVޤhMy[1u/okgAhqd,kQTTtKG>eO5Όtc_r~/{~ʍ:XWzGǚwwd)Nc9ti^lWdT,%34T4t xK8vX&ZڒW mK+4qJ6[@L|nQONhV%~+= p^w&ƍLo:s3[)xy7fꇺ E^|Nu)@O 5ng$ P6 E}2Rݝ#`jZ jcE$&SHSC ^ m̵0@ NO F`& 9;Q0uƌDϘaw, 3wggY<Х*nq fSa0G tT1ж=d񵀚48Pt5dBs&a9DtOҰyZB˕#7W.lBj 66Bv>k؂^yB]#I=UHuĜ8* Z/PʼN7Gmıp X, Hr.Y ($ DW'َd-Ү:[Fj/4Ѐ~,bygeS23f̘ z rhP QRVZjW]L9= Ajin6h~-5c--aQ3kEqNY@tIShaD9,g]F']:0K0ۀϻCeM-ՏO Xgev"/uv/ܴ~eبX-g+pL<۬yW+d&؍}Wf8X/M&zƔB4>nm5h h3WFr;V}r W3zɈx(NVGtN*&WcSf/y>`_]qI'ϯ 6B,OL9a o S`ćBq,lhYžWQЧA Um4s{\c5ᾮ K;vвBuؾ(1*=gP>nT~ۂOK ƌI(HltE'ʺH~^ˮVmxIQl֎D /gۻ9o>3!%` U Q»H>A T@g!K9A21\ha0Vv=.iM+~=iV?9 K,e;:1c2?%6A8՟xY LV"r mƛNQ j`1 ĝ9nۺ>;\C PYXg <3>ޞ"`cF >˼:;0I2w0d|x'pvk]\/ :5X{យNV$ Y[\K(@9U+\c|(ZjqH/`bˠ\I /;aϮч;.1=`!l:2kfzk^ ~=M&tӶg 763ʍScQ'1i,2E  @w@êj!`b'ښM1NI:OY~7_`D߭A1%#BQVKX~RJt!‚GG6@d ̊Z)79G@-Ф! $ $Ǿֶ1a<1fǟXY41>ZiMYsɰV7: rٰuOC .tHŀCZ\w1>|yO5FéAc2֭c{Я}L40bz7|_<5~eu89>˘1\`ܱՁ^:p~dT')Mw\)S & eb*mYD69q.ogmbD#2R;8hmP1D#Е^&irIs)ߍ5b@R~c9 &2hE| e]PZ,[F|WE=^-˵A G@z3d߽=m2KnjB}x }P2vQt8o؂IKobWV j93􂺉sN°RS Z 8Qk:k4JA̠WcH[pgf/| L #}94҇WH[Fm>΀Wی$T B2ǶA7hH'cm }>DϘhN˚S=NwپS+юM(MjB'h_I Ccpz\H@'D^'9,Yv,tv C@B&ΑXog,,v12z$2TiЬd5Вvb>D"r fύCI n3v~~/zDϘijٶD[q`SpDgZAeuBFL54F{u6lFD,}'@$!nSv5K:x 3dsvti.Vʓ YG]`hO$s%KY-:qVL4JeuCǰ:W&iNlX'ǹ/WB΄݂q_V*ӗ5P29_T} þɵCh_Z{ToGsѐS%98 fǩ5XfV*PAP9Șf2²t]9@Ut<%:-dgǀ!VAL!n#wsx| D5ms}+il@:5f+t>Ja.Dqd†ks56dH.uPn9;4~b\If44S隠b)ioQ%jF&62VlPnjfAI3͇Vu0L2"ݥ-~sK5!.%s!j+wG`Ùo8{H\4E1.d)NJ͆en<ޘLhN*VzSL; ݭj#T CΧq^9ZJT' vWB=R {`rD@$Y^Od|㼼 zƌiGذXѯBlC/3Re@vrbP+<;Kc1Y}&ygT"@` )}rIi}z!L$,4 ;cnHӆ8rΎ4Ha֭Cb)8G,甌%SQ:_bt9iCOL?}% gVH! 
΅.(/L4Br/=ȝckhU_x{F߬a.JQyn||XD11p`De/VV ,/ᔃ}`,ѩH=%GAJ(, y%,㦍3 룈 Rp(cA]-zҌZRhJE0c1J12+P%cA48 8 v)r/z ':&bw 1Qpyl޹ΝǍ}PSys9$H(xL%g`jFLw)q+l'̠s=|03C3iB tFem9s_2tx_'aac1dm*ټaϪJNyh>5z輷9)&D?G˜9smF<'sWћ)vہ, cLcD-F{Ԡ` 8N(^h>fg}_>ye;ǓϜ Rj]_# џvD@/q{N^_=f1J\O:D ^({ކҍ,Y つrs%"s1AԻJ1etn_obU_=Og%cpc1VoL \U#3T,{NY(BUne<g<=Myu9uC_.#by]wWQ_xȟ<_rSz=؎?\J&)\{)_y. }C֫p}!WQ{MXt./±7{G0 1X63)%o^vFOʴT6|ט(6|Yr5yS5;eGz=/a%c1+2 (3؝+бFp )d= ~q>x7c{542\N?Dɍ=&/s}\v8 U97~e!IX61X6Vиm&1B9Q_ogFL߁=#mu68̐Q];פwm1chcl6xpTyTp'-7?QM{k̰gΑ 1|'K}Ҕ;y=t͝hc1+X(9`INۘ@C{*yZQ"R1 +<Ƀn\aPnD=Y?w֣:_U?y%GI% XNb5e)V@>b=d>>'a ۻC,/⧕hc1cD{3Xx įM9PN "<홐3Mhs,KtUN1{Dc1c%(Ffgj-vlBA#GgJ@sn º>WZ쏡qNt| R1^F^X&DqB1+mnQ G+ZapHa9@"M!1q|z T <*^C XqNOc_X6&V͜E'N)Q qrtxe([Ut:*s]e$Sgn-zk;J*wZO_?m1܅g S6F+Zzx?gS4$CLxYNdj1Z)TtmS@qPy%7knDc13S6&E O{r'&E>xPP4?vN 盜!rLkY;ĥic1N헸ъNЊzɤ'Z)yz&Z )_L(8n:R<&zuntEO粚~CMh deS9>}1ox&1$D%F+(+m#G5edVPf'I9,'zt]c)6LDhO?8ywy"V1n5cl0KaLze `!׏)1ȵƖZ˔':hαfB3FymuR@q:1j1m1V,D4rOP yt~23R6)^ZxVxiO4tyi=~%3 H)dfR4|}bt|֏_s<e1c%=H Ń]SJBۆ|9 lǽx' tNLzLxb1c pJ1X, t0_dŽ_<^98q<\Nn$%׻T =NƤ0E?Z+Uuz a1cm#c1OS~{4%l.9Ŏ ,Wc)%e>f]{u?R7 A2)N֟TO-nS͝{T5L攣d`wٴ6c1$hcDRLDfD* O*'![^ur/9S'+c1X6UF_QI#>$燍!7y1p%UV?.y+, }CY\w '?G:':D{|!lȭX6c1J1,¸d?"b<Ў`N0&{L&^H|[cm1cMJ))Ko%b9 %Z(?BR*QN\9o?~?}*DE[hs9I.gN$3rLԟ 'f$LO2"#_@+{b1c H6D/rZHLQ6Ofeόm1c+ƌ)MgR<73Z/+,OVqDn&FJO`{c;'dsswΉ >?$gLMو!O!S(8.1c1]!.JO3z\y]%;?#S)yz읎c Q磏3./|+c1jyi==Tf$ԊeGEB``NRq#%0v}<ݞ 7p#y7X3E"9ZRey8YX,#;+wOu2w((R~~->F2YDc&1ƞh/,04O`0 ꞩL*YEׅR^L7[Wx{C4+p\&ڻXΫ Z{f·:vzzJD(ud) $%,UjŇH"A H ? ATB HGU!V[PZJBiCR@Q!r;񙙽kcoٓ9g2~.iioOOƑ"y2È&"'DDs&M]@3^[Z<;1߹d'|jYx8Y&R%^GX,coo,b~C?/wnv-pg['ٟy&!v'.Eu,*){Ej"ݻ*~7N~C|q6gϓCDDD$N{"-XoU!> ~ DMyU5d64)OpRt Ӱ#3NCD"8/U}|P;uu<}Oone^ď6Ϋ_>_½~9>~ٟ"~px {i\wp˗z??w`e[>7^{pc-u|ci<?,`='⟏}YƏ,-[7K[S|#xGqwO/I3ӋB(WW^~cK_T_A|z_"/b {kx9ry'~{)XW߭݋'CT Qp C2=O܅Ͽj|(~]W/&z:כÓ򫯾s (b2c2"'\=P㫁P_tNKkQi^߿י[nyw5?Z{v67߼Qqը*^~jp؏#՚uԯTgEg춺qB#C5s/] hl7Ft{̿sA""""CUѣuwMjӀS}ODDDD aD7+#""""6#/xmck`˙aRG@w׼""crhK(Γə%DDDD isGM|tV#"k"""w˓Cz[1$;SDDwqǚDD tDO[H1y=6˘fDDDD4y% u~N8n&1 rcI>|ѣ[9ޔڥ%h1'e }0(!Sdt9.^ <2zTD]QH{u,""""s̚Uy{qAWVDͳՎt"Zшhf0@|? tDDDD4Ϊڬrd ? 7swj%=MHD 9[-1De!xm#NJw0HU@ם'{=G~Yx݁è bn 5(NRzOVn!F؉1n!/l^os]~Ne^3vH,CWXIU=he^}(1&nң@G4-`q ..p2w=dW2sܼsIDDDD ȇ7O j!/X4E-E)Ji WOͧ$ڛiC1sq8)ťdh2 EЬܺ!@::enn/YDDDDDBH,˼?a? O],ARjĔő̚M:@J_F,]sQX4/,Qfvn2$""""J1%)r0 WNY) X慇$I$"jRM`@6flq@;pqbcX_8XPW+4^4t;̃j"䪧۾)!2ut{a&%b2z) YL1YJ*jopBF Vtg3y wWsM^sRT (\(ZXZtV2ŬF4,yLI,b)M@W!m!T!ZjNBCKTi1ΖBFHd`RA炋txQL3\/H&,WDDDDDbfI=y 1M@{ j,JLИT$C5B9|wCZ.;\ T\z%@K2YYj뺠iW`6QOLb$yjV8ʲ 萊 !"IjD2[Q5 ݉?|u 8.8Gh" @PG).Md!v<ӘnDAIzR2,F)"0k Lg7 |6_tHEq4h6Eu`VKLn<%͒'9>Кg/2svӗt\!P5T$ mHqb<ԭ^v3DGQTMD m:h:Rq4*ONt{kWT/AҼSĨsU\#z^ f+rVXaĤJ@7+m@Nqc6 z""""bd3\DDDD3IENDB`././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/visualizing/_images/native_yt_colormaps.png0000644000175100001770000026714014714401662024403 0ustar00runnerdockerPNG  IHDRX(bKGD pHYsaa?itIME M IDATxypU?.w, 'Ѹ1:*Li,oRA1)Z*Ơ←DЇ$@ !!$Kw6܄,|?U|Ϸ9}tJKK-AA^C*AAKAA ,AA1AA1AAAAKAAKAA ,AA .-- HCB.!z$Bݛ3zYC{+8a~i;ӧ@9P{T ig xhg4,E@/ʶtȝk^};~<3f>s&3f`%t'!+64p2Un7SJ]V~,% Çc~'N>=ĉ;khh`۶mb&CeŊЫ`߾}<#q׭[Ç ]kXhӟ;2eJ7n?d(;c3|p-[& .  \z|>Çg֬Yq466??\s5v9~8DžTaaaTvsUWQYY||7,]Ԗ3tPx(KM1A((( @%Ka]$%%%n̙3yWinn%o,SFSҘb`]&4x*eY߿s /P__OFFϏ;؛_rr2UUU,\bҤI$$$H㈁WW4pe'K*[.3gرc/fҥꫯ2z sNvV\\ԩS4ӧGX9r?*ϱcXzuTشi(**KA68q"s`=z iӦ1)Ù3g'p!<dddxrʨ)N-'%BfO2E( WIIIhFNNN'ѣ%v:?ٌ=*ƌ_5\)N00dAL ţ:yyyF{zzmn_'??Áؿ?VV茶>*r٣V~N]bb}|Wi&yƍGZZZrL$:钒WX]# Bo'"ͅAG]]?&MDQQ17o&;;_5[ٍwT_R߽{wǍ}g;NLݻ9sfrXv}lYÞ={`b`'2 \I܌3k0s. 
x,ZEuXVqq1T%d YAAAg`,p'"AY O2E(      E8  B/"#X  Lz&ERqU|wi.̞ʹ=խ'_H\}ko~bt~_bKVoĆ?rWdKr piAF+˲ :klذ*811<.]ʰaú?9s0o<;l֭ر|1c֭###o=R&N)'|xСO>onVZZ)SSSLߧNu4I&IKAt:b֬Yî]Xx1v~ jZl̘1l޼6~'F ѣٵk$zkŋٸq#gώ&))q۷4ٹsnY,^8nvm/^Ȓ`#UjjjxcdgglXo555deeu7O8_5iii򗿌 |gƽR̟?-[0uTY2#/ DnƂpizaJCC6m3a„neffΗ_~Uzz:ڵǣWN>|>%a&%KPYYiEs ; 'O̐!C:tfΜə3gְ4/R:eFYAF<zFMM k׮b {ʯ;<՝^d=z4=2RlٲSUʱIƇ~ȇ~v50k,;ƭ)OBBcǎeE4fϞ͎;:- \‡>YE"8{ƾeE({/zs??ًP2E(ZDOU2RA ,AAA ,AA1AAAAg_  VdKA鳅FR:tZx2,5-P\Caa?6M8L#ǪGw.~k1U\,="_Ic`fL\V(#҄[qlŤ7i(,4 8Mp c"~LpE w :=w:=;p828myfbmq */DWyb.?bөKQYNW7'oQEօGsE]kzRН.+ns]֓>MxR+ 2%啤iAAaDXt)Æ "s'N씿͛7Ovw)}(J_#GJcb` }|.ѯFFLAA%%%XEkk+K<ZR￟!CD'%%I#L @Pau]'99μyhiiwF,DRRRRK;[McIUKGz&++KF~L rgBF_a ~V^ #55+W^uF~L9*PTTeYtttsNxIOOղnrrroL rgRWv߼t:$++\ڵyn7~SxGG QiiideeE9AKAbLiOQuuu(>0IQdPAO^;X2E8|0hmm/τ 4Nq5tPƎƍ馛ĉlٲ)SzZ,$$$p-SK;4%yy+V~MM k׮rÊ+=zt+Ŗ-[:[j}TTT;piҘ4i ,V^2RyL2E  08())4ﻍOHH`ٲe,[4#=EzIXM#S7A ,A V_>0|AKO‚4 X  `[ZZ*  `  2}LCiii&JENaEptוNLL$//K2l0 8+r=0qNؼy3O>d|NgF)ůkFW ***} SSS)((`ɒ%$&&y;ɾ7o^].39sD(% MAA%%%XEkk+K<ZR￟!CD'%%uoС<ICC7nr]wE^x1f͊v9~89_ʕ+x<2E(%G.]INN&%%Ç3o(1cyyyXŧ~ʋ/~;\.eQVV;$33'OFϸ;x7y;)0L9߿իWHMMeʕݻ˲[z+eܸqx<(yEEEvUW]#++ ̨#{㡨˲`Νs1m,L  ~o3g`Y~f;FZZm\tF])))9sFb` 9b֬Yî]|rwbY[-Bzz:saFf8PӢW6RJ#<Zy|I6l{G ,Az^_t)7ݪ*811<.]ʰa~?`ؐQJaYsM7ERoQ=R(HKKꫯG:   ()),Z[[)//祗^'nܧJɓ'yyyG/ȱ,ύp{A]]ׯG)%ߋ%:=j/kpUb at]'99>o<֭[G[[IIIWVVe|I;l߾}+FLG۷۷@jj*ӧOghſDBB))),\7|FϨD&LҥKq\@pȑ#Aff&K.eȐ!/J)֬YRӧ'|®]hmm%'' tZ#--qq("??˗iTUUi3gWFX5n H_x^ʊk\ASafr-=F~mR,\wo30l=n222hjjwe֭?w4Mz!N' \./~k~;\.q̞={(..&++Çolx)8vG V IDATy1j(z!4M㣏>G}]O>SۇK!5큮j_WNhի|rʋ}vϟȠ[ui>S`Μ9v|FF/wޱ &OСCJLLΎ>cxFi9r_}UX0M@ i| eq뭷az+eܸq|̟?'PTTDMM͠b` o7 }[_2E( R<EEEXEGG;w~ Y__ѣG裏041 o 9|0/}\\\ԩS_Oe>|8+V@u<'|‰'z~;CMM cǎeQ/؈{Ta1G;_O=9gi&MF(hjjӧOgiFnnX<˴*5S\\̚5kصk/\7qsa|>QAdy#)))]Rn77p7oF~' {pWi_~%ӦM#))W^yÇġCشi---g?O>a߾}8qw}A_eϞᴛ5t/CI sPPSSڵkp\bŊњiDnݻ;v,vdzrJoΎ;uf͚um:UÇofǎ|=%K~z;i{v)((G(,,d۶mlܸ^a$'''DBB#F`=Fl4ٳgc:N'Vb۶mkx^x< 7@kk+6l@)̙34iҠ0TiiiV9p]umlCN8U]*Sl#[AAA a)+T XmXL_U  Wj}2 b`  6lKFmzo`$\?}<k/RAAElГAh#@;bT>L|=Fc |Cu`h^Lx` `)S0 ,e҃#JC)4Nפ = 2B>XXZҔ-W) M9єu\hʍphʉ$tД :NP>I$Dtp$4ˍN8 rn9q[~4|  jGqXh,?C|t[΋nyi, 㬳h("d&*NhL@,cS K),aiY_s`jc)cP\! COݘzF8挈Km9COĈ 16z?k^ -v =a_cP`\ t},?&^,|x1f3 `Y\ٲSCzY:R`zX:; iY`eZ`XXM`akaD7hf)e:LF3J@7ߚ {+x8B>5\ӕJDWI8DtޔMs+n,tġpzG;E|h/;vtGGEw9hN@J73rМ-nBi!_`\W +,SS24,+4,KǴt,ˉi9:=[T. TnL\;047r}Ԝ>5F $00D ÅeNfeEWN:ų>#</\  6DmsH^^K.eذavZo>^yJ#ٷo۷oTO΂ дί)\.rss Z,Zf̘a9#x/ĉ\.<˖-#99˲xgX`^{]O?_Wq`˖-|wAnn.7|=%Z__͛C)Evv6EEEJK5 E UFKuu5YYYq+z02ì_[nѣGoR ?f>e̘144M/^Lvv6gΜa˖-lذ{SLaϞ=Q֞={'==^{ ů~+n7_}oKbb"o&#F.] ,ACKwOO >}>\dn߾3}t222(,,d֭]XO>VRRReܹW_}@cc#͌37sLwff&˖-r6m}ͤcY{8|0uuuۿm4tM۷okfΝKvv6YYYAx<aYܹ2~a{|ѣ|GviۨQ4Z F `ĈI[[Nȑ#|uuuTTTp1Ś2dÇ'''={0oL4)ny]鑗ǡChkk#??3j(:di|1~xNhnn,JiӦ֞={?~{quuu 2D:O #XFHX555].Ê+=zsLLL;`֭޽cRXXom?~<+Wdر]ɉ/F_1bara)))>/`Ĉ|ͼ˝dO:{ӧe{ظq#gΜ!%%ѣGi~zΜ9CRR&MbѢE>D&f}PRRm ~GM8X4n8ƍw~7Oq8)S쵳 puquŕrX|9˗/]wI'  Xp)BibA^iAKAA ,AA^qh9<^UV OrWdKAbA ,Aap)JKK;m<(--e߾}Ҡ*GA5.WS6lڤ911<.]ʰaÀ=/v͂ :m#b`/MZA1`YK/O ;}3n8^/;w^GaСPXKy-t]'99͛Ǻuhkk딶-[OaW^ڷo۷oTO΂ дߞqݤŋ9tPu.'OdƍՑɲeˤ% }Lz&++$|>_tadÇY~=r G~ .ik.l˲xWIII7 l޼Ms b` g.AV`^Gjj*+W`y۷ogL> ٺu9 7x~˲ꫯrI~RRR)++KZA.3㡨˲`Ν_z=G}da~l_m=Se˖xhjjb˖-,_>qiiiq0rH 4iRܲ ꒜LVVYYYvm?Xw튁%\T ’z~N'@5Fɓ'҈ LݑLj#裏X|yd2Z[[Q~AdB} U+#~SR{z{ /_./>c_=;enuw\@=fϲٳl,= GǒA1A~-b`  \%RJ_~YBfᦐ+'p  B/Ӈ#Xdee1f% è¨"PDΏ~0BOwL.,ܶ3PB=1| 7&n,0qF|{0B2!15n, A*g]$}A?.HuGyMpf+;P:#\:+Ntnps:3;^p\vEu{A=pVpk$$8BL$gh`!ϓA7XK~f2#:ۯ~z r /]`0 >3BanuV]oqb 59"t,hY N+Zw Ӻ-cu ltũPע#wQM߁`A:Wmpq9E8܆?؍,GQodY#d~Lsƒȫ~J8{ B?},䅾~G@:e{.wÆ TUU²+7nw%-?Kv? 
bK+aw?>0 ()) ,<'WVV2o<*++D,18/#XCo5Ϻu y8tJ)Ə ټy3uuu(Φ.e(,,G2j($KAbq,_~ͪU0Mw}_|7|#FP\\Rzt8*++:u*1uTv-  0xؿ?W ?><㤥p?:rssinnfܹdggػL6uֱ|r\.4  0x<E'HOO+!C@CC\lܸjƎɓ5CVVÆ `ᤧ70sLiA ,Aat:̼-bԩ8pPQQ]wĉ㦯pw9eb|AA fv|^pkC9},#aAL2"t0|5;Td˶Lgq6K ֓ފ)a^`Y5C75 sY! p x:B]8W-kEʻki"&V?:"UU ׌d9RZ*weA ,AYcLZޝ_lذ*RXVuR73l馛HLLFA>qncJ)/^̬Y,'O[oyfnviA ,!Okr:]777{q!R?˗@}}=7oۥLeOMMeƌݻWCK. }vdZ/fժUɻ믿΃>oɈ#(..F)E}}=;F)@KwWb)KgQag|?~4nv?SWWGnn.̝;lYm(//4M#G䦛n %MV_<EEEQ`_6?qq0dhhh 77믿7R]]رc_|zSK.-2E(z_]rN οh"Nʁ8pu]L8#`  %\O"TzK/b` hi-(b` < үAAA j2+ Ћ  B/g`ƏP.|MxWi.L.n=Bbd\QJ"}Zı #N:x)'&m-8%sFlѽz=PR(ThtTiJJi(_-$#tZ DהWr#2ϖWnuzjQQYbuWQУt^r;kO?ܕaDUAZB  |8pbԩ̘13>ٳgS\\|{UP>AMKifçw2%u|RWWǩuΜ9CuuuMMMx^ˡi)zX1ロ'radPo(Y#^a!A;!›xXv-c׮]466p8HJJb….Ĕ)Sسg}:J)-[Fvv6>'ORTTD\\SN(S/EQ1ϓYzE P)d̚53}MӸx"[lj*3P9pL={pA~1eL` #y< i)B/lݺ /4n7ӧOȺ\.SQz)--ܹs ::l3rs=>}#Gҟ={6ϟӦu`|>-p84'lڴ5kֈr5K!]Hvah:,;yyy{n~?mթ|ۿ/!!+WXnE{5-L̛72<ŋ=VWW^F\6lz:u*wuw~2'OݻwO{~ 6RGR᯲H_]x<RY;7Æx<<O=?~'O8NEDAF'[nn /t:M-.|AJKK9w\`Cff' 82]Q”¨jfΟ?ς ,0 'O@)Au:$PpĂ%( Qo c+Y׮]0 -@5*//bvލGu-s:q:,Y]v1w\"""3 ` 0شih,\,^/1tFvv6dJ   # ^ ߎ!BǃR: FGGx뮻xꩧؾ};?m6-[ơCqtFA,AIu>:: >S|>_3{l{={6IIIHgDAF{ jkka8̙3TWWV4//G[9XˆBQokײo>vEcc#$.\{Ĕ)SسgtFa!BAAQgz%~ (XpQo  (X  BQb_AA@Ă% 0 :X'р'կ >n"z8\3p8nwa Svy8îY+S68<8nylyhzݥ,N׮-[ ,. \+@a` NJق@.7\>JfMϚwݷt|𲫐2v_z N*s+}"̶-{/Z`]t?:_9:znsraw="%ksXl4M wEtmsiwv*`Guկ,K05 ZFjAzٷo466!33,N'ׯ_tv!;;LדZyWYr%YYY="""HLLd̘1?v[lA)ep_-[p1R("%%żu5΂aوe֬Yp8rرW^y4֯_Ojj*+W,ÇyW} yW뮻lVTTon^urrrɱYRR©SXv-: ݿ(Ŝ9sxEa(q'7F` bS__ϛoITTyyy$''co8rqqqL>˖-#;;ɓ')**"..Sy^x'2e mmm|glܸkגlDFF/ZPiڴi<躎ٳl߾'O׽nb\]שa˖-\..]z[#-- Guu5'N… _M%nwu/qF^|E\.<uD jd#CˆCUa֭v^xv3}t"&&|RΝ;`-X={nWy"""!&&e˖QVVFEEE3fLvS&66q1qD~m;fV8f̘˗o{{;*++M3fPQQKHKK3;JttyeEFF>+sk1t"#yz߄fΟ?ς e0 'O@)Au:˦:G1 ==T~ūTxn7iiiTVVםT믿&==O5 7"J@uϵk0 Dk~?aݻwun-AN%Kk.ΝKDDD˴i&R|> v3{lLkk+֭M}:wHct]QSSCZZJŋ4-̂FhFRR(X?8 0i>c6m2- ?fxGy$""kZ,3}aK9 ڵkر'|r… 4iBbb"ѤQXX鱗M||%6 СC\pAQ?D7\ajd"41H-c܁Vnx<RY;'M*-x<<O=?~|Ʋe(,,dKpn4f3#˜+T&vSWWGBBBH[[6mbٲe=*J`[[[kk9i] x&O t%ŋTVVv;|7nX^oix^|2{we̙3zq|'$&&2vXSs ࣶsUoΕ+W|=Z͂IKK+W)XGA>Mp0/CCV Cwed{<k׮e߾}ڵFIII,\7)))SgsyzT餧sS7 N6 ޽{7%A)??lgϞ7 6HRSSyG6RB:&&4-[fQ8}>o%楗^vfv;#i;L2eHOO3vXɓioo7s 5z*((gE({^ag;:ս{k[ߋeڋ0%J" r!BaL2D8[F~A< 0iAKd% `  W. 
0R  3h`Eؔk=ljE9Pv݇rC9;mlG9Bphӏa``F밣v݆a:Q6'2]- C+ڜZvڜv݁nNGӍ8lNl6\`B#  İm.HH4[~BSʉvv§(D+Q8T6\(""PÀH " H " 1F1}D>>"u.q~lZ;~ Mkïihz;864ߊt۰[kom~?Gh~?M>?;.U3hj-N7tn7Fɩhqhhqhw8NN'vAttF #p.ڝHtG#fsٝ.v'Z8"hh9i;h9l"\v'vcl6vl6l6()([xlQ:Q RDӬ\OрFN#ܠm3|_༕V66hG3|F;z+-6Ca IDATގډۈZۈډڈ3FiDњF8bm ZG]U\&͎WӢىowssw9;E$¦PnGaTy* b12vDT41%BDAJAAN2444PPP@MM4(X°@>k'8k̐N[lw 󯬬k1T# !BwF4m܆J WfґtQ4ː+VwVٛٶm.\͢E4e֯_Orr26'Nիz*TWWvYreX|ǜ;wORRL0k׮Gq%|>cǎ%// 3믿ܹsv_~%QQQ,^s2.]b֭Ց̢E,eo˼yXpwex ^z%<i̚5Kk` Bq:t̋ꤴC#dǎaC>=(?*?#***xx<86nfL)Il޼+Wr1ouOG" q˄%Qx9Xõ/^Ȍ3$%%իWos(~.]}JIIbƌ$''3fU$\﷔#iӦr8{,w>OHH`ί;y]r=q0r@3Μ9sy& y !^COSS%%%̞=1cH% )P8"*OqXA|2n8I% ­EVVTpy+((  "_  06Dxw & e<$q2 0 00 9  zSknya`=]n]}ݦCpzzF걷v.ʮʄkBZF=n'n^ի e@6epï4e` ;δ݆um-8sU?ˎ5n54e^ײ۾]vwXu ,k((op|R­vIQ0]9w0;Tp+]A!(,aPSoزe nee%J'Dob#(-vI-/AA=l۶ .ڊfѢEdff믿R2fnsvEuu5cƌaƌrQAA?~<-rQ^^͛xk_r͛7|$^/^pEmFMM W^ԩSl۶M*WVk0ˋXFNfnŋNs2sLs˗/ٻw/{5M<Ǥfv[oax<fϞ-+%,a4iFBo,c+Y?xiii4~"++|Əϳ>+RlR  `  %   Igܿ꺸8JNU]j6'`C*`WQXAAA`KB<ptrvrrrrb({G8s[Gasd3ÿ蔳2fzp{еC^[K S;0 M@ :A::)";ºƒ\: r5[ 8vfxP=iбX2UE&]a4On:\#SOW]᪣lUpAJi]G57Tj+ 0to WaaIǚV_gZXCUr]z]wU5W!a>*,]˽%p{C±A6}W= O;ZTvX:T=*_}֮]Kjj(X 0J?:O<nKEE}NcÎ3 ]wٳtYEAV-uXN:ڵkM2xM#GpAHHHgݦJUUk֬a3a9z(DEE1}t/_墲BRbɒ%sv;ͳ>ӧOiǏgŊe;vP]]RDVZihh`۶mTUUin˗3mڴn}Μ9444 [F^;ba)e6'NPRRB~~>\|>e>q\\.N:ń p8=ԑG%!!z>CvI~~>&Mbʕ/b.k@`ʹfl6}MMMar| ˗/СClڴ ..lܸ^z ˅4?~x/e.222ظq#.g}>;Ë/HTT>ƍ.J)jjj!?]O~餶vE~Č0lJn1:p(qj03gXnOoeZb9;!!Z>ܭex'(**>cܸqq=b[Xl[n%??nm3f̀Imm-ϟg?ƍ㷿mlVVs:tSÎasRRRXd O?ϓ MԩSw燐6t]G4|>"4\zn*W Lw󴂕AEDDDĘ\B[[jPXX`֬YuuVΞ=k' P V!dFF|!T2t:,! /i :RRRp\TWWsSIx>ӧO3m4Ng&O2I&|rill3gΝ;5k 466R]]ͬYرcSN%11*++1_Ncc#aPWWg*111`η]b 3G eGoKDRR۷{2k,.\hQvq:sNN')))I\.&NHYY׮]Cu;w.- 55+VPZZʮ]ovK/~{ߣzrsso,_;LيcҥSXXh.yvEaa!MMM0ydbbbl477yff̙}ζmhll$""iӦY>}-[t찠6mK^~YPP0(:l#[rBeserucodyXz5R!bATTTNrr27n`ΝnsQTA[7!CbGPG) vdiv&MēO>yӯ"Qc, 0;tTbQ.SNeԩRC Qoa-B/u$U VDAA`"AAKAaI_ Z7 ȅ.o^.;}D!uWZq@:=[nta`aa` 7 ]Ur֪d8 ,G7P陮L<:[ t_!\r;uKP]n7rh*4MкMWcn׹B7)[92c{ jwf޶k{ 3KneaQM7PzQ5FZ uJ kFLCNi8atF<]]~en/S_TV\8<VPdQ?OlٲV~iee%ׯW^17455gzDEEʒ%K4iR8v;vW^%ށŝ[A4vMyy9DFFA^^` €܁O?:O<nKEE}Nc6z3 coᡇ"%%oΆ xDzRɕAHJJJ8uk׮5(++_69ihh !!￟wfkk+UUUY\=>> &X<ѣG'**ӧ|r\.Է?/tƍQQQAll,c_w}sˊ+>}: (xWQJ{_9vyYG}ӧ4dzb RSSaǎTWW"11UV1~xضmUUUhfL6-###yg-~>( EMo1o 6gz>;reawC%Zr JJJ'55˗/r3gNrq)&L`D9裏@}}=~!;w$??I&rJJJJx1 5 uyfZZZXf 6>0O>˗s!6m??C6nK/t?KDDw_$**q~555~'?tR[[ۯzimm5/`?4c8h.0ԴdAxba[t;1zΜ9úu,~++׊+ rn,OHbbF'̚5dnwS\\Lff&` Qk%CYjeҥKl޼is5 )**(jYAfΜɴiӨҥKSZZc=FVVΝcֆhXWbMSnV2r[<3ޕ+Whkk__Ôzx 9~8̚5T~nٳgͰC46n܈R!_ep(X1dpB0 ("t:aư@w`:f#;222`QRRBVVlذ瓛KTTUUUimSC>3MvbccYfMXZ C=Dff&唗SRR“O>Ɍ3fԩ9sTPWX zU{=Yz^8KtUh8=epX!CEtt4^WSScR__Off-5vXN:˗1 +V_|Enؗeu˗/VWүt:[qzQJ:Hbb"999ѣm\\c޼ysȑN媾իWߖ/*LBAa\v~)gϞ/]}q!^ʕ+W8z(6f~mN8+W/xu29~8G@{{;ϟ=NMM娭VJOO˗/uVNg]4551e&Md~@UUvǶmۨ4êIJJ`ǎ={z4úS6n˗x^^J9T!¡`0YhS$))|޽{5k .(;8NJKKٹs'N IDAT$`\.'Nk׮:qqq̝;Eʊ+(--e׮]L<7xvy饗dQXX[oELL |7olűtR),,4i < v&bbb9H#zFF2%%%:ukך~eee/~G444M*֬Ycτ ,rѣY|9.J QJQPPR۽njwѣGcǎ1c (--f~3f //}~~~۶m{Teر,Y3gXYYY\vOJ(Xz-bIĉOjj*/_>r1gΜ0y˅ԩSL0!lά"xGIHH?;wϤIXr%%%%ѣ"suM… <쳦n'??xٺu+J)yo]| 'OPQQMXzuBQ/ʝ~CYOexPzit(YCy0}9su!u]w:%%%X3f@mm-Vl<g1n8Ҹ{HII1rrr-[֭[nsƌkLBll,G5-\ǎ#>>[t);v 磴5k0~x>.\ÇEkب-t'[ΪU,VK.y>εk(,,Ȣ6I~̙L6*.]Dyy9c޽̚5 ZlN'ܹIJJex0.'RVVƵku8ΝˢEHMMeŊitǙTw߫ $f1bǀ!0 c33^28sf&&c2gƓs̙33! 
x` 0Hl${{W{66Z:]vUuoWڽ۷3rH/^}Ù1c*]NInn.[lA4אk#++%KPRR;èQXx17n6Ǔ϶m۰,iӦ1ydjjj\ŋHII uuuƒgWRW^y5dΝ;pj羱TAFʑrTA..#KrR9ŞAA&!M\zU(C 0-_Ue.b` mA ,AAtE( _,AA1AAz7l&mt\q]dOQ]!-:Gɫ]ێ9B#eeqH2Qq2NľQ.XS܊u4<{օ\.㵈H9ݡu!hQm~hBP '4%l};;d:Ʒ}:"#"12vT dp^64DzľaUDiHDC8Zxo;ΈH#69qllu*M da3,oEYDGt7EZ#dҌJC|HS/~;wF#: ,\Yfv1M{'NK3]rG ,AAƻx饗m￟d9y$MMMם\#//۷SVVƤI=eY.|+\x3dٲe /"?Hby'ݰ{w^~ӟa`Ϟ=\|ϬY;LSN#0rH:thܞ={8x uuuqmd^/((**B)b ?v2JKK!..ݻws!7/^<^{5L;e.]?%%%|466Ƽy2x~D-_'V?yMWo@W+b_vQ}6qAzb 222/x7zuzx^:thG{SWW͛ywXbÇgٲeq5M /@}}=IIIRUUC=Ί+HJJ7|˗/駟RPP@JJ 'O/ k֬ЖR_됭XޙH}"^^ާw&W>cڵQamp:,]~?|G]Xqi&>C2335j'NdȐ!ܝwn~.\țoɊ+u58z,ߘ1c|# ,NF<5adggܹsٴiL2:?swh"8u6m²͙:u*vɓض͕+W\CBC5k/&66J|n 뭛zS/[wâK466f͚n?곯2D8{2dW&M||| .PWWѣGRRRm{RWWǡC8p@T:~@ NSS`6meeewLII,O]]1G̙3>|{=.]r]w[oq!jkk/طo_ƛeY\xx˲hll1wl "C=>—!¯%q)AkUYb%%%ܹ &0{(cgڴix<v;#aȐ!QGz6l{۶4hӧO'?? .]ݻپ};#GdQ߇ >3fꫯ4 e4Ms 6Xd %%%;5ŋqn?~~ךuu5ׯ;dx<jjj(++sBޅ1z CeAL(++cҤI;X'N`Ν\xÇgٲe2W\a۶mTTT`СC;]QQcnXrr2cǎB[:J)PJ1o:-sqԩS9xk`o}[PUUkFBBBy b`݂bo~8ά7=(0sN*VzN4w}zӬZձ_\xsax'B=R1a***Xn 6l򈉉Au78˘1c8xk`>\MM ;vٳ4558J)qx,+31yd&O… o~G}=nCu4 ۶*؆/M ^sS!›SԀ@C+ }nǎeYhƘ1c:755QSS}Lj#2d/UX|>k7\u'vk,c2(  |shƏЫ2G||< 11˗/}(I&QRR‹/ȢE|| 4)>ϓKrr2ir!{>s222x͛oICC^zji̘1W_}f͛2|p" ]#66LE5q{rW[K#TT,#K K~0sLN>KAիW9rL2E*D ރ z72DZIju+"_$$$PPPpBb`[)e R_"&or  @KA/qˆ=E&TCUuϚ KPAAܲԗ:N@h*a5[f3ٌa]0۝n7cv݂DsLA4L4Z @SA ͆,n1>  쀁4pLp,ҰZc1ol<ئl aB.L3ˌò1X,;ӊbl0f, jt:L ҈v-U ք5ajM KkZC `l- s)@at 2ZC #8[8B)R:єhġv1Ġ ;I0tNjf{ќX>ObF ,pݘ1c/ddC,A;MAKAA ,A=ЧٰaEEEڵ+*Zn[n ؐ B+aݴt,KE&20RΦ,Y)-[PUUEKK 3i$6nHee%UUUݻO?4'Oy5d IDATgq*//_t{NJ)//g̙ܹz}Ya׮]8pFҘ;w.&Ldga v $Xh*fbРAQiE~~>^ǏaRRR:t(˖-`^[[KYYWvJJJOַEJJ UUUk$$$0rHi41BA䐑Aqq1Qq b̙39qGeСƢ:۲,~]5k0l09u}X  W@ K,C1`6%%%=z,²,<3w+hA?v2233"A#GdرL2 ߽{7c <֭[AzWC]P B:=|!VAtd BbѢE'nӧq?jw]]qB||<A׈:5ːap0 ,y bg BcȐ!L<}a|駜>}XCccc9s /_ϰax<̚53gPZZz20{ln݊mی1VN:Ell,yyyPb` Bb9rݟ;w.uuu_͍7klܸ4y<;|dgg`xka…$$$k.ꈍ%33|i Atd BS?ի{L'55{SxNN999QaӦMsϟLs֬Y̚5KID?]  %7AAAEEE2& p,AA-mkA):: Xt0NsbЉArN :^phtGӌ4;}iF#@wZќVt'F @9 r( a_9jRTpоm[!_`k^l݃{,=['yL#K08l͋4҃zK`XZތ( +֌7|Ղ*v-XZ Y6!Yؘ8q1g;&tq=:8*l (lE\X!߱ƱоmeX!_YYl(@9:AsBa urv >5VވtESۺ18t-]3Дbzs71(BSJY(B7ZB5=҃hz{Z<`An +JvxF6JAs};ԶmmZM8Zc`;Fq0pб mʋ[yTQ1Z B׀>b"ty8,-SPG-5ݹs7`׍3_TKj~wO/J 7#K>:+FpgǎTTTDbb"999̛7/j!fAk H -,M]]MZZVS]]Ͷm8q?K!\}VUӨòwyf ࡇ0B$222{=VXiرO>WD~~>SN… ;:u Ø1cXlv vŋQJ1|p-[FJJ /_.ٳrJ. 
%7X B ׸j#11ɓ'nذ#Gp^/---$++'xzW+.x ֬YR^zS{=n|IRSS_mXX BߠqHKK2>--Ξ=ѣG),,$''dFd…AAA'O &Krr.^ٳ7n,Xzjkk"A/_F4Fe9y$k׮ WJQWWGjj*555رgԄ8(g1C qqWvk b` B"%%t&66aGvK,q| $''SPPqy,ˊ4-@:) ldPƐ[p @||<|ᇘ'|ĉ2dPYYe:\x$RRR㡩ΝѣIKKS:J_1z-2M -,ϽދeYOz?Ο'Xp!~<^u˩G0sLyW9{,8q78qqqsjkkٶm['JzAo[L4*#KU㏳c^yILL$77ys`\۷yfi 4 c;~zL3vX׈ZjooIKKc[ YA  =X IIIP3 .]ҥKOIIngggQa>bcc; ުހ~s} o'      X  b`  >Cxb FKK WիرǏH\\̛7Ç׳j*&NUj 2e4  7nټy33l0R'?mx;t]2^z ۶INN'O(HJJ4:s x^i[dPJ%Kt]'!!$f̘Avv6ǎsILL$33+Wpҥ.jiiԩS,YQFСC3gv[ɓʕ+n|5ʺz)//4IHH`ر 9rSNFzQA/QQQAEEf ÿۿ!--MgӦM|dff2j(&NVSLa۶m̝;GBFF4  dP|g]4iٳ(xGx<9sV\cz7SNq?ݻ)((y7iӦIb` BѣYr%:}%55F^yy䑞Avv6̝;M6Q\\4ɓ'S\\ٳg" BO7X B񐜜LRR5?.9s&/^HKKs;2uT!66VD&҃% "t2ƦMƎ;$+ԩS2d^st)~3<T  GKx?sLѣGۣ^/Æ c޽b6 bwO\\T  ?(,,6nԨQ<쳝“eah"-Zc?O{gqnoKA"C b`  %  \ UTT$}  7AAܲihOaV|W2דv)/ƗFF뮫ѲN8 ׫lBRw܎IA řB~Gn\Ldű,PVL agGʴa?!"jv[^GzݕF ;rz7M]n;n]czy6aJFƷyq(+ =X p]Y:6nȋ/(m+,AA^cD^^\znii1A>M+ոq(,,4M?͛u9sH b` }uzUSHBt]'!!3fPVVƱcǘ>}:7ogҤIGߧCff&ؽ{3VTTR5k0j(ٶm(1b˗/wYšk~[p{HҤ 4 fL$++|^/ǏgÆ 0tP_=CNN@*aTWW(,,q,3|p}Q4McΝ_z ]ףSX ' ***5k>ٳgq3gĉ=zC؈8撔]yǃeYnÇq7_WTVV2f̘\+A ,A^?$ &|g]˲4iǶmJJJ8z( XeYܐ!C=z4o3f cƌa„ u穭eڵQiRWWI!%$LI8zhV\i|>߃%%%۷˗3x`<[nu 1Mx9}4߿{{@ @VV; Ete?2 x<$''55ӧaҤI 2djjj:?|pϟO<딕;QOJJJ鶌!%_^ikfKMMӧOS]]oAccJJJ8wDzz:~ .p%,ɓ'ϋ/HUUuuu!!zSxbbbF]+a`"C)W'At`5"XprgԮ % 7AA{EQQ0  DKA&s˦i(*:JҀt ҸDդq /@.LOḐtHtʁ|`.0VĸDQM~yW62hĐA:4H[x2ݴ!rdQNSJrl\D.a0"\\ ݸ4."c%v4.Qtr!59Ee-AeK"QUSFQNQU5!R鵹c/J}D3"*Qre\\WQC&qK"z6m{%ON%QLwe!TS9pKK|d4FPҨjzaD[Aii)/"02 Ыٸq#(4$ϏZ?1qDƍwCǬ[ -[&J#psf¸q(,,4M?͛u9s2[_l<z 2?Q4[uf̘AYYǎcΜ9TUU}vΝ;GBB999,Z s=ԩSرcƒ̙3􋊊Xbǎǒ%K0a+S__϶mۨ@)ň#X|9~Ň~a<|8ӦMDQn2"Om"ѣGk.&O̬YHIIa,[RLt1bw}7̚5 &gϞtovNJjj* .$++}pǡFAATVVix^ HOO'== 7nӧO'%%yСC0a̙3K.Έ#sXt]񐐐@bb"JɕsK_ 2"Om>3֮]M4/rv}qBse6lXTzÆ sȰ.\… ֲv(4s wWC q@ ӹ;]ê9Dv|ھ x<]տ#z:SJ1t SECrrrL̙3z"333|=J||<111t]!`+6f 1gN>͖-[8<555e˖(ӧO{njjjؿ?~)wyģ~aǎ;wɓ_:N<[oŕ+WzE]~Μ9˗ijjc#=X BeȐ!<#l߾8CJJ ~{]wŹs(..&66e˖1f̘(s6oތcժUGyw}_~V ѣTVWՏgϞƍy1MSiKy@?Q4{`SXXc|VV=P2111_U2>t{,Kwq?O;=Q~?*lʔ)L2%?~1wygT/\jj*=(L/A,OmA ,AAcȜQFA~MWCt8d'_h/**wAA   % лe`\INiJQ4еi((ehNꨘm۫ 1xP^ k!2VV Ph8Zov?L  oX*4T@jmU oj !@xPV 0 w[FQ.(t ;tfF3(t2,at 5# ;o6x쐼Bym ʰQҝء0-* p}ж,LetP6 9t `6ڨC jB*hA(DYA rLcc9 J(eߍѴ9(c"y̰DV ӢA>r%}MEo ***=zt ONzz:))),Xd;@}}=^ǓDFFfܹsIIIaΜ9A||<ӦM#%%yą P3i$LѣG0҃% ނg( VC}k׮Ų,&M;1@;vpqm4 ;;${9Ǝرci 2Bӈ kիn9x eYdddr%@f<=z4+WD4|>oɓ'{HII0 ^~e0' )..'6650NuO>a۶m,[aÆzٽ{7gϞ @ ,AAx<)wiLBNN\|9JF4f޼yɓ~>}#F0c 7NM ,A菽WVg( D_ѼT?~<;vp{ 4XWWȑ#}<>̉'HNNСC={ BA ,Ac3"I,C7K??3gZ[[X(..4MRSSYj7֮6OyWQJ1qDfΜoX (,,6nʔ)L2Y&J;pG__w^Wq?O;=RYh4Fiio  f- ]gx i +& N{3lǕŲ XM`4tY>&r?z#EEE^ڝ!qFZZZXzu0>JKKٺu+<X |΂e@&%%G~~~sPJ1h nv,Xaؿ?uuuhgĉ̙3熍'2n8A[͸q(,,4M?͛u}(e-9˲8w6l@)ŋrRZZ㤫|lݺ{w#GbY.\ŋ_ FXB3(++رc̙3;vp1|IW~޽ݻם%++?0xy:u*;vX9sfeg۶mTTTbĈ,_0h ƌCEEEumf۶miSNZo>cĉL: \^+**B)Ś5kCQXXHZZ|=kD\æ#gϦ@ @aa!GCC۷o{!3308qD4jkkOyiii_g<>|bVXAFF_|o^}={ti`9rq(((p կ~Eee%cƌ'hEu{iil0Q,!bLƧ:ƭ$3fLYf'ݝo억캳2lsO}s=g&n7NUUYl.(( ++Kp|򠂥y-^>`˖-rĉQ .Kk&## &dժUDDDtR}]}QL&?;**/W_Fll,cǎ 22rpN&ÁqrGGGb nJqq6FX<}7|7>鄅N||<̚53g??vVkn՚-.{&@BB/gΜܹs|ᇌ7% "yw4iBZy'5˗/pBm[,a8y$FMLL̍BVV`~َ$'']8uuu$$$>y GZ|zďN *;>NkŪAUU, DDDܬ#44tә0a.׮]xhd&wAacZr _|MMMԸjt:uaL¨Qx >T9s/(++{~[wϞ=0<<{U__Occ#W^Z… TVVȑ#p%)..&??|N8Acc#S>:::0ͤk+2{l5kVlgffrerrrPiӦ1a„*l6|W\vQFڵk53}tv;Ԧio VLL}^>}:<<``њ !//&m\ڏA"~饋pѤPJvyʕ7 3sLfΜs{v_6BCCYfM}M&Sԟs;w\M7*/f߻_jW;;v,֭UTT?DGGKe% {ɓ C4V\墶;w( -tvvRZZܹs)** XAoz=dffsobEQxgrQUUŮ]`ڴi^?!qqqpa~jZd2qyN&v;QQQ"!E(#`IĖ0LTՁAlfqY0'O$>>sRYYI[[ Z,V+!,, Ddd$'Ofرia:˗Yv-DFFO~t:4Lv-n'**VU*,AP'UP__OUU:xqq1SL!444JJJeoѢE:u~i-PRLBxxOXY8s ׮]j,sϓ"}"]y얦Ai&n7NUUYl%jjjx'2e #Oygz o( . 
͌32e W\4n&RRRؿ?nN.\jr_P]],Xˆ}D<֗:0|rʄ 4bRSS  55]vQQQѯ ꫯrـ֬Ybv}0>8k׮˨QZڵ Ӊn'::Z/A`0hWX֭[)**bn?˗yW?y<K`0}t>S{1HAAAߏիW|2mmm>}RRR d2osiZ[[;n.]cjRUUE}}8dkn` cKA~ٳ9x z@q(L0~eff_joyz? jaF?ϟg˗ #--GcRRRp:X,q_8m:a>mܸqP*d!س&bϽI{F{ŞegAbRBP<'AA%,d`0LTUA%  ă`AA[,AAfЦi"JS8z9_z ߭sޛ uҡ|Ӌze4y:}lJSw;PU/*ἷKGM/ #77zm{7 s!E.Vs~U]/q)}套tT5xnl{S}faX]"y,H2999( `6IOO';;^û(^I&aٶm63{HHl6222~nn.(tEQΝ;9~O8EQl'???immgҤI>v^}U.^ʕ+:uOxZ2b0w\̛boox7DFF2uTo͹sxHLLv+Y$%%[o_0ܹs={fP, ӎ{tCBBկ~%K >dY);)4V\墶;w(Jb˫Vf^*GQ^|EBBB蠮}:=z۷c6{gڵ>iccƌ,\IEEw&,,Z[[fΜ9iKUU~_hN8#Gذanhh(AիWBGG|_8-̨Q|9Kt:1l6Ξ=.44N d2ˤIOJxx8{ vHH=UU-)S`ikkӎ3ea~ll,˖-CS^^#5j @xxz5f̘W\\}̙3)--tj~{N`0u^,\˅n~޳ߋõUE _N}}=UUUtg09s&\r?<ڰaCZX,ZZZnu'deew:u*| ɓ0f̘>m;NY> )S|rm/˅`0ٳtvv(**bʔ)~(Bgg'&Nի痒“O>)KAn/_ ??UU0aB@ŋxJ|z 7~xكn$`<7~)t¢EHKKO=O=]xނk޼y\v>{w$0L4I+I&'LttmK,jYlY@jXX?|tGA? v_b[n8[-<<E,}*SL!77O633Sjct7L344hYf oIJJbܸq\v2n7_~,..f…:a0`=@cEmO  xp8p'O~tvvRTTDrrrɴigҤIÛQF}/!!!̙3O>̓O>RtYW^O>kkk}[hllpk{wx }a]GE` 0dxF@wIOOooo5pN'rQ^g+Rq\髪Tvv0sL>sN:ĉ)..fĉ>] IDAT#f>SΜ9-aaaL:( qqqrA{͞,a?)J^: RCUU={6yyy̚5 (ڵ+ ?ܹs[Q~ hRSS3go~s,66V3 U.^蓦A} x\ g47nEEEX˖-ȑ#|'b2HMM f "v}a69rK,/OQᎶ8z{rdY*GʑrdY*GkiA%adQ΃;dQAD`  "AAD`  AAq 4  w+҂% 0 LzOhz)ؗ xח.?;=#8y} 9ɇ*A܋gQQo?^nzm/W$HJΟB ΍,w k/͞ҾQ2ȌeT3,J[xj,wEeSZ{KuytJन*] q8*{-=8jw{h+]w |t>J|˿]Y,An eҕY-XsAZAaɡVQ0ͤ^u (^I&aٶmbֳf###~AUU"##yBUUJJJؿ?/nܸ'x￟l٢… INN֎撛s? .$$$D~zƌfII 999(s}'D` °3Hm<Ҋ% ʕ+q\ֲsNEaѢEZUVa|F/BGGuuuQTTij>K|||t:9}4{EiHEQxgի|/Kxx.>>kr&''5;}a4y}oN!] $0tkat:õٳg}„b2|\O WFd2ˤIOJxx8{w"##9s&ƍoyd">>yAMM ZUsNOOgʔ)>y )o'zu\gSUUNe[3gR]]͕+Wn*^rR( g E(xK.BAjn(//gӦMnN'l20;v۰a}ڶX,ٳ={9s|y7PN<sƍ5|mm-'N %%ii&c?j,K H ;BSRRX|9|TUe„ >a/^ @n4xw)Bee%ɓ'Vɓ'`u>k֬bYb۩T__JYti e>]A~n"zUYS X[RTTӵ0ܴ/EHHׯL&SSUAJgggvw=l&&&\.>6l&X,񫮮 1XeOf V3HWz{F+ ҂u'! ­kvSql±cǤՂ% !''IvLzz:eff2g)LA72 _adʕ+q\ֲsNEaѢEh arPJrCNGxx8fٳg5uAhkkd21yd,Xvͥ]b;Gr4iK,@z~۷ooh4ٳ5VGEEʒ%K0Lyf^xxƍܹs444K/a6Xj*$&&n'33ݎf*RSS̝;W.,A#]wU|rʄ 4R hnnpv f||] sQ͛Gvv6gΜaX,ƍ#44g}޽{پ};֭h4n'11zE… 8BBB0}t~Q[[ݻ9O*(,](ٻw/dٲe[o1}t,YN`/ {'--mDQX F(lڴ ۍDUUfüyQQQdffRZZڧ cҥ(ba;wGx;VKuu57sK/l`ժU֒Hrr2vLv;6FHMMn3wZ~ڀXU`yvb…ٳG+GҥKp%Naa!橧"99yWX ° %%˗p8GUU&Lq8nBCC\obi&)))`F"##5qhŋ$&&bZ)..PYYfd2a=z4MMM}`9r#Gh԰o>؆ FsgϞ/n7.N .\ ==ϲ=uW\#11qDWX !]C @tt4+V`֭3m4裏fa49q}k]]c69&''p8~_|ѣ1̙>bfǎL8G`FDD X~[ZZx5k?0aaaTUU{n\._oo&$$PWWGQQшX2Ѩ BoHվBVVtj+**,e@ҫ X,֦744N\\Fx tX,]gaaah`0s_(ss zU[[?IIIѣ9w\iFGGvZ[6X 0DIOOGUU qEıc(++ҥKr)mf#>>;vPWWGMM 999XVN808ym]o:8pFN8Aaaav9v?~'LVVݻz.^ȗ_~իW}ƲvZ?bt L2Dm ]-ٳc֬Yw}dddo>\.iii̟?[N\F#/fiO>$1o&ƒ%K|lXV HII9V__G<33f ?СC$''h"v#~i: sO), < QUG}ظq4\R9T,#KR9R9٥r=H  A3H "AyPX  "AAJ-BAAiAA% 0eQhT&FeQFDם+rۓͥ;Ζ-[uY/^O***PT,Yd͛7 /aX,y?ΡCx裏'E… k̝;W+]vrя~;s 吝cDzrJL&O=`۩رc(sTUUa0l,^QFAR9Cz hYYj&vMhh(=P***`ݺu455ӵ0Ge޼ydggkb0n8<<쳸nYnFv;kBpBee%Vرcq8Ցn'<<ݮ$++K$??իWG'/^̥Kg…x<ioo筷bƌ,YN<ȇ~ڵk:*o ,3(//gӦM+[nիi޼y$%%ɓ} cҥX,Ə9w\Hy衇eΜ9L8|Ν;GCCW&!!{UVa۩ 99Y4v͆bJ;vhd>gddh&8n˗@BBg(o:@xx8& EQ(,,$!! 
˘1cxǨҥKw}},AaXq8磪*&L SZZJAA8n ǣ(כ{ZIJJ /((Hfhŋ$&&bZ)..PYYfd2a=z4MMM} #Gpmۧ۰aA[Vv;TUUh"NWp8xGʳAۑ.¡},8ɓTWW3e@ҫ طX,X,Z[[ikkZhoo'..FP8|ᇔ }2oƎKGGZZV/v|[*N o l#% KQUBbccimm&;FYYـU]]M^^.]SNil6ٱcuuuԐCJJ ʉ'4F\\'Om㯼ѣ9q№Vrr2uuu\tӏ^ ٳv۷o455qrrrP °j&2{l###}SSS$|Z^{59ŋlO>IXXo&o6111<>6V+c[`KGL[;---DDDsxxwغu+ ,,{u97rdY*GʑrrR9 -X  "A2PX E(,AAX  w%  ݊`  0TNr:" &#/nw $!D0d2 IDATKH@L%f0SE*:85/7}P.%֘HۥگD hݟ:pPۡhfN2s q3~õ%UH pדkvSql±cǤ,, irrr())AQEl6Nvv6z}nc̙3gȟӧp:_aΟ?ϧ~J]]s<cƌ"KAOZZ+WrQ[[Ν;QE+~HHȰ8OMzz:III;y~/_>w_FUcJ  NGxx8Zٌfٳ:x eeea2}fرXgܸq;w^z%5iժU֒Hrr2vLv;6FHMMn3w~Chh(֭~ZE` ͑q8磪*&LKKK)((Á&44O(^L&|$%%]uh4rEZxfa2=jd׮];ѣGyw~AG 01 DGG3zhVXAMM6>SO~zp0ͿGQ<]$99AmmhZvfM:q\DXz5͔IE% EQ8N"++Dbbbhiijjj- V4ۉaSXXNbL]]$''T~:;;}Zn8D` #tEXZ[[)--cǎ XkNuu5yyy\tBN:EFF6xvA]]555䐒Bbbfjr MLɓ'[[[p---x<.\3(fq5ŋihh`׮]JJJT!t ïu@U={6yyyKdddo>\.iii̟?[N\F#/fiO>$1o&ƒ%K|lXV |j>@`}g?~\Xv-VSO=Enn.oO?d1P6n8(m >EYZ"k ­AAX  "AAD`  ߏA. p"-X  ̠̓UMҭ@UNQPUQJ;Rẽ};((:UUP9wz C~vW>6EUAv+>۞۞8 zPuo7н (}nwm0\Qz@\~=?ӓ@|k{POto^h =wpas{<]iu9> j ~o^Aqo{\.vƽnNO`yqNO:}=1up\K/m-qq7x<׷n֣ŹgZoeܕiHHDť(R)AKA6nȅ 0AaH,6a$2*gPdbYHw)999hf3dgg~^-[h:HNʼy󤐅S`4av_U-͛e\KY.wZZ+WrQ[[Ν;QE=g!..EUUv"""iӦI=>5ZޓYf MCCh49K2M 01 DGG3zhVXAMM 7ml6baĉdddցO>.K.0<Hh.ۗaE1XãZ,:$..zM$Aנ`sݚ5j}%R`.B0P_xn87E8\'nV{P!O騪Jaa!'O`\xr=D7{z*/_ӧOSPP@JJ @WWӧ)//{.jAid 0ZTٳgǬYxꩧسg:qqq<#|~Y־GDD0~x.\6m䠪*H 7qAyɋ=k7ŞEs9bϊBZ[^kq!3_/ܳp,= {^Y{}"H۰"ֹ)(wIF";dKQR0  Ġ rO-n'bAz3`i LS#-X  ̠=mܸGv=O@z/sL ߞ>Ho[}ٻټ7_՜.Tpu;'z:D7jwc=]wx.\^΍ n0~(8]'_^Kqn|z~=Ǎz<\]A}^NWdo#~$U SUom| mOU@u :w][_8ݮ17 U:NASUzAESt:UtNj:mUz&`i}]ׇ`yټW^ K`ZAʕ+|g>}˗/Ƙ1c?>.m۶(6}O ) k׮mFPbcc?~W+Uy=/53xHر&` m6`f-h!q=W [B{~Ows}nY`'OFumt:u;L;?gff2sL~i~?ne8tfѢE撒Bnn.sO7 MƁz;c >bUU3f#G͂ rɈ% WF#-\.\.1E9vXS&33I&I_~I0ӧsa@7RSS/hiirUUUL>MII K&!S  4V^ͦMOɡrƌ/ydffioo'!![|cc#1a[nGæ٤qȑGf̘aB^WFΜ9c;5}ty-Z$f[AaSZZs{1JJJ??mD&2n`E3pwpѣG>}6}^ OΙ3ghnn>c֬Y̷UUUL8ĉzPWWs=gCIGGdKA,bYh6m>᫯v:rKb㉏6 n\cc#o 8q">ײE4M>3b,*?~<<(L ,AaIuu7Ç2eJ2_cǎ1ykꫪb z*nifΜ/̂ 4v='Nygb/^țo%..XBF?J{{;:3gd̘1\.Ν;Ǟ={i]ˠ^ahD6mG/ ,`„ $&&ÇѴ7ϟ<ݾ*//?dŶ|~_vtד?p%%%ݾQb֭:t[nE:XB#O[Akyxre߾}455a&̞= ސ.?dƌz5n ~Gv9Xt)f\A;-[p &Omd+hkkɓ<{Rr1ʬ!*Gʑrrrd*GȿWAK蟛|%@ȵ,b`  %  %    bЖiAA  B?3h+ &Y@v x/Hx/dQ^bWA{SZ]"MF|+;zww劒sh=ѿ;qЁ Nq8 7N\8l@GGÁ=*.V/ln;+D~;_j>8;KY W9]Zwor-2Z%*uz^2}t.ZRr#uzvT衭{¯W8uBitӣB:Ҵ0YVM(-j!#tp8Cg\$ wqNW ;;tNWXosG؏pB`  CpKAիWٱc'N Ø1c(((G)euX)Oc4TY`4  0xW1MիWF[[555dee_–۲e ~UVن㡥8~ӟX]7P.))aժUAN8[o,XuV/_NAAaŋ1"5$FzORPP@JJ Ǐ&t:1 uAu[fΜ9=zcǎ`?Nyy93gδ峲 ="` YQDjHrp\TWW oj0HLL̙3Hw A!=i^M6'Caa!3z^/=\LXAAk֬QԩS:u[o;^{kגA^^%%%uj1a襥P__ϙ3g8q}+V`ƌ}vygbr:12Ǐ瞳GMƝw @RRO?4/^ӧOa8?.}QKԐc#pP\\Lqq1-bӦM|}6R]SMHJJ=Mvv6r-̙3_|Z ? 6 0,NIZZ)))=W=r#@@>Ԑ0_3g2f\.ΝcϞ=L2tu KLLS͛7DQQ\r;w@^^EA ,a"`RCB>,Enn. 4INNf,\z|>?|gYJ)~Ț0aUUU|tttOnn.O<G  0Tw}7w}ueWZc3Voi#RZZ* " KFԐ .b` B?#`RC4 %    Q2, Џ  B?3h4TVV 4"Yi$Wx4IxO$^iġSqJVAAaP EP)p|ȅpx$MЖ=șF JƈgvɫKY=1Q6˹XRH PJƥѾvד]LO}s|lO>L2|(etGwc":;EW*';:M|]%ヤR`eLth(>L{9֣ =ez/{Oe^G:7P6%k׿  Cko_~S7WcN8A[[crw`֭C)sgW<qnr_Wlܸ_~L: 3Hp8OJJJ?k~>c4TY`||ALLŋq\v駟&770ػw/ Iff&3gΤMظq#^G}4Fӧy)))30vX<A+*izjhkkv&N/~ [v˖-~VZeROc`.o%%%Z 08w6l@)=͠ٱcu(lݺ˗SPPa444plx lܸ`0o}tq?/^L^^n3gΰgrrriUUUq뭷RUUŕ+WHJJvX0ʸV,KFS륾'|RRR?~-Ս0 z[x]md&LSXse޽̟?>)//g̙vXW# @4 SN/رceY-~K}}=?b 4Nasa~mIHХ TI.d5rp\TWW oj ףz<&M/113grc!5:tG4Mtdeeө k #X]~7_y@Gµ_N;_4V^ͦMOɡrƌsC^/=\LXAAk֬?s=iA4M[Vn~Q__O~~u;xXv-QRRBYYYsq!b[|_bL8Gmm-r%Q?` }j(--zΜ9É'裏Xbu7qv3Lqu)**w^4Mu第,***ؾ};O=uOJJ駟ŋqi6lxm3MI&|]?0.]ٳg5McԩTUU%2E(}N*\CᠸF IDATb-ZĦMnRJvMiˬ\wTUU|7͝wP]]rdgg-œ9sxcF233yPJerr2=ҥK7\,p]Y|9n[:XBp{! 
\:)›Mff 5_s…;L6-f)))̝;{[oˣ k=yڴi\pwXa`.+&4M>s,YBqqqLܟg:Ĝ9s[ ,AAtyי9s&cƌrq9Ô)SnX_[[[|̟?G ~)//U͛IJJd\Ν;IHH //^|ۻK\\yq ~s]wٳ|G\u1^/3g6RUZZʁl˲GȲ4XBFڛKZ@UCO 4.\GSSiٳ-p=|>_Wd?yFi̝;>[nxyyϜ0aUUU|tttOnn.O<ǃ>Hyy9}{ݻt:`֬Y1JD[UUEqqqӀeeeٳn7J)xnrHrr͞e*GrzھsN=Ul׭r.DzU \AAKÜ2E8̸0@)BAKoD0ZJ+א %@innA ,AA-  TdKAFw &24KSb8ZK ~Lj L҃aԭ2)-q:ci,MMR_͉BrܠA@9C'.BqBaBrUVHFFBY ; e(Kfo@@YL 4fь`7(3f(3nLkfv ; / ?oL?EtË#E~t#D78tv9u@7L#h~+phCN@5Cwp8ttM;݅Sw]8t7N͍C 9͍h ZA-P. ͉\+D┃rbarTNJ'p>Ca(@X`baDe `-$hZV `XA !4}a%h1,? Kbxt>7b>,Ïe.܇t3n# aυ|ݲM8tPBia;t݉puJs8tݍpq!4Jw|r('JsJw47Fp#F@ai(-n"xY6 YNTUb)?"G `Y  X,XbY,훦bб4ClYAL ,4 Lia?&faA1a pbNLӉi80MÈ ;a:9_O΋/Hnn.w<|+m'f/K/c\]]444HA2k,K._-3f(aH!{ ! uaҤIsA-Zd~ ,Y@MM . 3c /^b`ÇNJJ .dƌ͛Ν˭j_~=`qƱo>Ldڴi,Y𸣣~'N`,[t$iQm")H/84Mu,ˢ?1gx+6mڄIoַŘ1chii=d'Z)))<#x<_Jrr2SN믿NNN3f̰V֬Yw_?m #뛜p#m9&`̙3ijj;x eeen>Yt)rwgϞ]xGzj&OLjj*FNrrrHMMeTTTpȑ= ,_L&OLII _~%_}'N`ժU1vXxZZZ8~\#"C233ˣBc\t4yyy|>\BRRRL\CCk8x A `ܸq121lj455er81iHOO端bʔ)rY5Z0TeCS~2k,l·-c,]rؽ{7.\:ͧ #"SnMdP `2ExMNR?>Yfq>}:FnWcƌ4Mz0{lƎKzz=2W2331 gammm455%X 0\.N{G[[vܹsinnf˖-\tG2u3m46nHuu5XeddpN:Ecc#ϟfeeQRR¦M8}4.\`1i$iLʩ 7&4L.3gΤ֬Yömؿ?[n kŊl߾zRRRr-\p_ӦMcΜ9PyW^͖-[ӟic?m7|Tm"SV375:Ws^^>lq?5SO=t8Xt)K.pt8Xz5t [|y̱QN - 0 ,"zEGE5 XʔЈjiNAz CUIIKDAA ڿ^Я 0)]Zą]ת@ M"vs_7ŒH%v\vMk;vZ-7+  % 0t)€7dYXNo:4L]p~ Caz~Ѥc6kM hA ӣaƇckFa9U(fjW46;BR!V%),4,T$Mҗj%QZB}V&BX!]W,P?RW-ϊicPqxJPi*݂4P>/G !梃prIgďp_j ]k֕UtG+Z&3CS螁?t0u-仴JR), *̶н-r3[;e(p+[(D #Eu ^av1Zt{=#T#ciXz 4EqTdKJNHN[z駟sa~~_nݺٚ*++innA(,,$p9;DΞ=K0kkkIMM%--M*NTR%'$-7M.XBff&֒kRSL3gPXXhG~ݻ*x('N:z(6l%NVmƩSPJϲeHMMB#f۷oŋNvv6> )))p4}urPީNkO-,,>R@gRTT:WX|9?OXz5555n&MġCb9t8N ॗ^vSOOvy饗0 4yW),,?1={<1Nf C: 7أp$a5`%4Z:=zLq Ϸ ӧOc=5o< IMMŋs[ө&8~8ӧOXŊ+&33+VJmm->ǤIHKK#33 ";>!9!^w2E(D ),,7$ R[[KZZm:uݻws%|>ib@III q1/Ghjj瞋)K0 &PQQG)..fԩ$%%I b` }qկF{ܬ+SڃkOMOO')):::((( ))N>Mmmm+p-pwxgӦMDu8t:tSiϸqxZKHH`ժƯ7'Orv?n+&^dpn2E8FVdpP\(">EEE|PPP'8{~9,bɒ%撑˗6m'OŋӃ999466OzzzsݶرcY`O?4YYYݾA!Kaa!444t3ߏiVzz:io>ؿ:Y~=iii?ގ>}:fjjjزe /_۷siZZZ8y$MMMdeeIc 2E8xj("PTwƒQ!Py2Ex}dffSt9@hTiɒ%|G{p=aÆnzٳgwqGL'dk|>)**vt}$&&2w\̙#?A w#;E{tLnmzp<}7o͋ prgbb"V1v裏ʳN"AAkP9QШ50ksT,4*b` o}I^'7 Q;ePUQ}崇JI   zAcAA~DFAAA[  3쇜N=*>"̘|TXzV>e__Kh,Ќ 2@a?U7{)]uEi+Ge1ttѩ]vFLR&J 9Mphk}9 t:84Tg|JEŅE;4eGdh踮bUgٵ.e.[t#y+ kPϵBB)eL LbXVgy wgk~VeSv[}zuu-Y27ZvfRWag8AY7#"`u,ErMJy* 2%A J.\ !% IJ&c ,xB"9oRJ asIvŋQJҥKIOOQm۶qe󩨨`Æ ?3qqqoSWW%-- 2mڴm߿6233Yheee XM` TrA tGvm;ǎ;xWя~M_y1k,Ο?϶mb ƍc…\.N8 HOOgڵCotX~= Hb` ؊7ѰJX\x~233{ŋڵ˖INNfܹs9y$GaAvO8A9466rJU>33'NĄ={6L2ݲ,Ѱ,2(5 Q[}(u#j,axg$&&{Wgf޽G p N>M\\{6n7g֭I~~>>z⨨Sm@T'Qwx衇زe /,[uSwm7Xhoh"y饗p:̞=R^g$$${n#'' Jb` ßb~Ą==qO}}={CIKڐR5z)=HI466sN:::HIIa,X@*FKC p/ZKtRaА͞AAAY9srCZ=oH9WWA6   V˿X;O^@}&x%^~P|wμT|qT|[*6%ގ#6/փ:ꢯ󸫼% lh7S]E/eҽ\R֮_lVOt}+e2C~.,xW]µo-a}euZx ]hoZď-"qN=\[Fa2FGl.{\3;h}*쇜BPn #X SԨX-wA~H b`  %u ø+ 7p_sQ&z]zv*O<Ν;x"J)Xt)鴴vZz!>Ξ=of۶m\|VXAJJ |̝;;wʳ>eY޽Fff&-L.A F7p(hxb[_3c&w kc]~/u7U;v,>;vꫯȖyppIv@u6ooSO=ekjjѣ<ѻvСC|&==:֯_OBBL hGWh2JKKIKKcرX.^h̛7RRSSILL4M/_Nnn.999^zΞ=k3 իW3vXƌC0d׮]\ &ƌ3>}:~tOpD"SY2E8/;EȎ;8{,XRV7n\QMcqff&qqq\tOMM%>>ޖijj"?!Fa/5efN-S S/2iiiX$L^(t~-]~֬YCRRRC!MA ,AaNcc#+W$??>5MgڣU.]ڣ^=ᠵURXdk/C"_^A2E_]u7Uxg$&&{g~-4Mc˖-,]Mxq:1flݺ4Q__O\\8),2E8XOl"M`O*x衇زe /,[uFVoƖo7ʕ+2 c$$${n#'' L[;E({^ao^!{#q Ya8"HaHo0/4XSû(fA Q2`S@d &in\" IDATA ,A6je,H X  ÝAa?bUu ;!Z˴ޫ]! *CAAgm]Stˡth;Qt'i mtˣpŅ\0h "&&5.(bJbF @0 Bm tq@O00 a~FH. 
bf elgee0MGP: @ ٿA4 ?h .`10m3L~o 䜞DϚG̹Lq+bڰ[FաkwmtuEt1'9dܒ;IrW:;!hh r6?+mJ (-(1)K߱.@v.D2IOMB)ko3>d⒓ | R1-iTLKcDڛ66z -h=߆i$Sqe )[@jZ.47\S\.@CColP|K Wz®òka?: >9)us;᫯hm ΋i^ư ntZyu{.F \>r:ZjصZ,~o@|:wQH p6ʙftp񠗆\>hQNu7ͺ&Ms{2mΊؚu9,.M) ,.MdNA:τ 蘉es6+!?ϱk.25Pr.@4 ȺuغuT HĒs]aAm0" #0쐽!2% xH~͛7S]]n`˖-?~0(((`ٲeddd2_|;v젩$Νk2k,9z(˗˦M!--+W^shAA۶m~ .+WuV^~e à5kFXNN?0;v`Ν$%%xb***&b`肜0" #\.V^ի5<OL\ORZZk~na>lqjjj0A " r.@4 X  b`  b`  % 0P  B?"#X  ̠ܯ؀b?T?:^%^C ѥb: i4VHxq(vD~w4l N8..W1r))GO9TvCoMu zMu%uޮWntTV;~ĩnzQQ2f,PV\cuiCPc|8|U$ޖU2vXδJu;"Cز"PA۽Ky=ZRO:izEꥇ5>6OVʂ` 0(SL~߰tRf̘alݺGzILLdΜ9,Xv;ƌ#7A ,a FcdYs'5#Y>(uFIY0YcW۶m~$$$}vΟ?رc~߾}?~|;˗{}9qO=iii 2E(^ OUUwEEEdggzjL쳎˗/A~~>)))S^^#c&o555b\ b` 7cA"SMMM0CFFFu̘1lٲSNuٺu+gϞ'$))I.zA ,aG05Ԍg5L ;,j kv,aؿsrr~ŋ k1&Lʕ+=_2=ZA)3j(@ J(eCebᰈt e:LߩLײctw_}Fe.C]m Z 2=+򕆊LUo$}4.[|l^2zή:{׳Έ.u^{._Kʂ` 0TJ>O ֮]˾}1A~ٳo1V[[Kee%^WAAǵ?N; ãO f,+foi1+ b` Ð|}ai9(?n͛psQƎҥKYv-c޼yرF||}=EEE\G˗/Dqq1x2f͚ĉ9~8Nb,Ys(Xf III]Gc72M2*,y nA!MZZq9;IXlO<Ou d̙#{dž[n4M|GEEE7LKK"n/"f)cҥl޼W^yNkkk)7}{nmۆi3{ bۿa\w4y|2n,YF.oΛoi/Ӱxbؽ{7ő… 竦rPLhًP"j/ž܋SE({u/B`EW,Y̙3i.0GefL heqΟ?ϥK?~<^?ɓ'KRGKfu@ٳg޽Fq{WcZC !TBpHN iUNPDꪺRUn{noڊ8RF`b0 v^ι>l^Hx朝9so,3* /O+rqʕizD E3ۦffî]"-[`|| UUcaݺu!455B,YUUUHOOǙ3g ! -[rJVbEDD}ֆ'x ؈"c޽X,x饗i{=Fmm-o_|v;HNN.-Bk,+h駟Fvv6VXB|Wꫯׇ{6 GMM pU$&&")) -- HH`1"^lX (~z,t͛y\$''ƍu(엕bB@J^],kܕ@s drrr044۷o0::< ƈ:WJ Fxb塡׮]COOQ\\ z{{q\.R8ҏbڋ/ÇcBUUUz#<81 xec%nڠy۷o0M}%$?!w#("Y~jݲ`nR"XDDDDѾ, """i,""""XDD;"(&D1 ;)TÐ|y~60Oa:py aXV`<*< j(aN!}y0UJ).XMwmSN< P jPcEP>VC|'FNN***l2 bϞ=B%XbnJ$X3(Ϫ'Ng}g}ٸ|2ߏ4^}#)) 91gZ"3rĉX` ++ W\SpBRSS,b/ge}"6[@Mߏq4wݰlq_)DD<- я~x3orW4EBBpyTU|^Nx0NfCb`ڵhnni(**\dX/HJJN%XDDD|EZZ>C 996 y桲Gbk9(D$eZf ֬Y6EE***#)*]Js1 l8j> //;`ݯ>wC^^{1V"-B*6/C(V [ "Xf'"YM"j3^h=-]0ΪYwܬ{9K҈aSM`E$@$XPT EPg@O4 a+1EϫzT|_}/Oӌ ò>.g`ٲeZx'188˗cAYY3XjrssJdeeŋ!9s?}N`׮]g}@~YYYذaE3<Fuu5,YsΝ;b+n\3 iIDATK.appI)122" `c"g qvT7X;Q" @ZZ.]>1YaDy0m,''v{쁢(ذaGGYY!܌~_<,""\g?ӧKKKQZZ򳙙x7^cR7XNh5nJCToH[Ov'X4KX:qrͩ`0Z}=gl|7!"$+wEeH4؃EDDD4͢փO6 CN2M?^MPlDD,"""i{|/=4j'¥ H8t~>Q(aZ.OAJaZ/=plS:B V6HC"lYӃMSXv@~!pz'5y4'<[{ZPӴN @ ;Ϙ0H}~<ҽ5-'],RQL!4ÛW?9t 擝0l{`YD@W % oӅ0K!Hνoeb$XYVV_6IJA&9477 [Es*Ҏ1Ԋ1::۷O^x>-,b$cKY+rPU)))lo{f[Y+&IS|O=]"I‘#Gp97n}}}8z(._ )%l6n݊69sBAݎE.\۷R[Lkk+.\Ǐx ޽6 6m׿ƪUߏs!%%Xj^!9r_~%(**BUU233`E/AFF>#_8vލbbaڵqN'n )eXPSStСCX,x'۷oןI'ODee%qyXhV+n7_(8~8`<ӉSNK,eggOLLFZߙXv-MFMM RSS',cII V^ xꩧpItvvjRJl޼YϿyfWBWW/^̝hf @4<ɓ'MN}Daa!{9}O`1""iƷEQpܺz*t:MBbtww#33fܹsHMM !eQ#ZV46IJAIJJի҂/BA8}Q<ß'\znB[[nݺsy&ܹ Պ!q…HMM;#˗/c``8|0n߾?G1 ,*@lAS~zH)q1 ?񏑜 hii(**<[oqv<(++CSSn7JJJOu˞^z Gž}066y桸=Zs.*QIdEY^L[xEwww徧wpfE=L ?E(YYVVM؃u}?r6"9`1""""bEDDDD fT?ZڧYY40 Z'U""""bEDDD1""""bEDDD`1""""bEDDDD `1""""bEDDDD `1"""""XDDDD `,""""XDDDD ,""""XDDDDؑb%IENDB`././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/visualizing/_images/scene_diagram.svg0000644000175100001770000010771714714401662023121 0ustar00runnerdocker ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/doc/source/visualizing/_images/surfaces_blender.png0000644000175100001770000065415014714401662023631 0ustar00runnerdockerPNG  IHDR?P, pHYs  TtEXtFile/Users/jillnaiman/blenderScience/science_blender/blenderFiles/forblogpost.blend^|tEXtDate2013/03/30 13:51:17QtEXtTime00:00:00.01-n tEXtFrame001Q" tEXtCameraCamerah tEXtSceneScene!]tEXtRenderTime01:04.63Fq4 IDATxyeU[{o|9geP@nI `$ p4 x"la&:pc "7`ahaɖXR{9{}Te /^Tw3ޗY[0 xn%ʃK)/Ui0^0 d'j:|z(pWl?qr1maM ]i 8q6 0Ouf͹tGX#0 a0 xFad[)Oپ4 da0 xzgÁS; 8Y6 xRYs[oZ OѧpcucG0 h0&{|+Gqx_\6}azd0 h0 w7^OrF`%~.kBvϱ6xBn=;^a,L@ax2BI(тIy}ιZ֗Fwf`[h|a U 4S3”z6/w=e~u[U[;76lqr0m^)w/a 
NYKGW'0/<jvϑsW}%$:ph ':1QU e3(.۟2{4 I P%izad^s0N& 0gS#KA.pP"xnE&>,1l+P:pI$ڈ Df.? +]$NҠ7P֛9n š 0g|w3<b`wa<1mq $W>I~~|tAipKbGpd U@ ²(݅!ȤcH؈P,jq*tU)ڃ䔖sS h&0 $`0 c| /𒼼-?^#'jٯ{^z_EJ7hRi'w- hEWBp-5e 7AvG sKv r}`#8Bbb\#Y@b{ xn`(lwxSاװc>d ?:nDhJC;nִ;LJ\l{eUB'4R7Ƭ= {ǽ!0 haƊYкdpx ]$p|?topu,[:}Lm:q;}ߵe?UhԳOubtAhyvE.@AO`ΚUm; RsRدM4EI=jV- 0 h0wISA8p j(a>s0]1[!r!#u6EMr ݿ=_Ov6e!$>(Cw܂'芹lObzF5tX r,Zgւ&.6G9*ɿ|6Pnsbd.o4Iލ}#c( f̫Y}?Jx)-,HY(.0{=q-wXvV alTH*6[(J7ھmgmZQOl ]&pOÄ1nL8VN/A um-ɶqg{˖~aPA2J"S, ڛL +sUChZϪw\d//TYsguqcӎ|:K_d o~wwKp4JS=| ,)x$F]MWb "x(.[|\\d_pip >;8_+ CU:}9VtqlnMkpj҆a<6 sHX\=Z;uXcB+?-z(%e6a H+0j싽ˬMΒ~pNC-n{F;|ϕp !(}xbHuH<;I"2ְ5lc*Jpz;0j>knHΘydQKTB) 5"gT3q ,Yd${31=zJ~Qb{&ȝ9Y8:嗢}3zW@[c)\A;h+u0z ^Mčt_^EM^0 9*Іa0\ Q@TmSq6o(Z9':A޳p `ws=L YYsc\u^)1SxWw(IaKSvtp{v$5T,rA*:Gr7֎]Ep;I)wqqv0%6 !BfE6k6ZAEanxt VisDiŸ!!u v*OrxJ ȌY+ Q'P̘?[H|,@Yd¸ѱ" F;`d׬O;]V~%5C0mp@szN+k.n)qBQ'ˡ2Z8a^ gU2YCH}de 2pORS%5@AQA'8akslÇ+I~p0\iLeA=4p.]/>[g]7MvVor ^W!b`0 &Q mEo`":TԑY*K3p8~R(~%!n"ydk tP gJ+QK߮G%Z =Pa %pBBgJ !Zccn4􈬛R/3r_kv0%& 8be t-h8dw@lW^] QDP>E+j<ˊBHS#愘VzDBNG&@e|K '%"6"8bIU\)i}=Sy:I3aOzҺ,/dqdV"WGsrb%Vw'0g h0N,Wq#" ?nm%^#^cѲ~ Э,[:tb@Z)[M+Dk|MT q+f΅3+VQP8\fBtuGd1pdB5t0WyȤ;ѕOŐTM q0" <)4~=FFA*o2H ]ѣG1 xv6 $2ʿaUT4/7$-7>>&;חI B!P(D;*I@JľRveZ JInjum3BɵS}xDE*fgWy׵:`"(Khm]%݀f|}$"cdΡ7Vw4wf͐1>?a}_ε?G/utJ("\p :B9 S #k@!Pt=n<1;k_^drj ʘXPd_ Zk\ˢ2vZe`[ Mi>Lޜ5CವWLs?dՖ[u~\kqqˏ0g h0N){L P~*a $vY,Z8pKX>|];p'KPDzFɧVW* H)lU.z\xo q#xD}3S]:u΁?@;&tܬ: D%'?wMh/k:$X Fi*@>>07@bV۰j1F?*>{R/cGW00mI7h">ӭ2ej'L 8N,#QX*84/KNvȆhpWW'D^j2?+Iǎm4Et3 7oP+~d-VA:G=qܓ<1U4؃*@'xXt<J*sKS]X7>&m? ~olCWT+^F`'0& 8Q̿*dyIRփ ru /,iO KcC(1 ),@J)YM[dX掂hGѱItmZTwd6#8B]ŊnIP„з],Ac'u[RszA.T:MܽQCt.[Q+{eqr[!W(z|fлsZAp-~wSea'S_chGWC߫xXR+h7ij*7;8hv8z޿w%q!1-WفNX*D5PO)C154%BPX*twR9PB]Ajdzo IDAT B+YGq:TX@2=9Ogxe+o[t7~pH5]gL^R FAn@ KZKbM %q-.J"eDA{sR}RՊGN]5@hG|&@H$UX;$QgBL\!4ޣTUR.P%$"jWC{ybu>>Unc>~qZnחak#d)j;_}?OaO Іa|nW1938s˗ >:Wx4A^.3זݹϑ$Y&dƃ"ݤ<@H@Yf5W YKan:>4aOjmOt_E6f1Z{(8-Tه."H|(=z7eJX[tb͐]Y{dGrU:dvZnq`"4L֫$U BES1^ǯw?nT0no0O7e.T!EB5Ҝ_g~>W.qx]{sݽwME *c5?ռ@ف~5HINn$(ӒP :GG ֱt%: %jqv%aTzrdut;C_cA(z^%ԧ:vOzoSl{c,ip؋Ò"ĂXQdx\CJCV_g}`a~HP~ .y.ٷB]Rz YqmW0n@ηsC¦)άKIŖ<ݼzpcsOeޕk:5if'%'PLPA}i8 xDVYBɘ`h*n(mBZrjP]@lȐ^xd %xCVq-oC(ц5}BɵN " &ȓ&clGKL۸iجCy8dpsa$ٷ4Je0 h0kfʝ?̶_}7IITy*bM{}LEJMLV ) u_-$tizS;J7U:OyOHBh" B'Ęc@Y&L>Ab EvRWy'^`Hj:G{ba!7GRYFE>)Y\:f>a&& xN/eYH [ܥ&/)a,.nyߺ{^ƒ7ݚPHF# &QY:Y!/g,H&&=HbG)"_D+ľ ꬆ+D f*9;(3f\aՐXa]YnmBR}ϝ(i"u: Ny1xP eAPT|XTݍ![r} @$ILo]f뎑ywmOI;pB 2Dhk9ۻ`0a*`m~./o-;9n6zIn;#NP6t@w~Sf۱iq nRॻD1tLP^7 Dŧ-u).E\@7ܽPMU3qHZGE;qK.-H͖HY};{ʍRNYేX  Huc\[_k} .R\b($̈́ Ɇ2<$p¾}*=aQRSgQuOa h0#}go&GŖc/q2}%qT!\k'yɷ߿ˇNiQ4t~exZnxvB{ U7c' F4 %M?F\Z">Z%.VPHh\HWsYtȁ;q*"2UB\9{DѥQaϯZ=.et&0Z>=HC]Z2A 1t#zJ(*HtYY41 ;2y~Z7"ï)1R8&[{@TrT_51S DRC ^.Ѐs87`ƭ{i3@J+?#{x֡_-Ys.g0,aY9$w:$"%T"46N͐?=9?9@fRX)G9wK("(q(Rw{;IK ðCHo7!靏l7ͮtG?;ol0n@Ƨs)x7Wy~p=U\dZvDk8qx}êٹo.~:Ӛɓ0F>ҵ;٭ݢmGe-hK4R!TnK7%]I!p8zCD{x;c=Xxfqˆ2ܙ^RTD.15H* 9|8[[6lE5EED)z h6TQ;>wWLNչ.$0ng0>}W)iDQt6*B\nטx *:q3pc?dP:G ЗEqãQ(׎|osYO.!e󪌺9[2QR(y;) 霭3),$ɇxdeT|M ?`:Xe{|8*sɾ7\o2xmBվ ޶?Ha&& S?dYZo≠roé{~&ƅtLF5odؑ>1 ,:$ժ#mqngOp=g`%W8:!"Su64KJr[x%VԾq0$1Bw :Fh|2tDaQEB? 
!:#tBˬ/n}u~ Q[tue )#K1׸/+O'Is$':"0k˧ G+4}.t0*![\"^ƘViA˶n0Edmb0>ul|k;;ξgy5Md(]?\"t$%}WᆧPѮvqR8ԙ%c`yUB/tB uM] "L:ٔ[V!۷GP|:ɰI˜/(js(~8Po|3·r;ՔU +.)_1M{tM/6\Gpy(E_˪--N_ 0nka"N$g>hF*zz4P(6Q\48dO\f;os {[q9w | ^̇T ;Τ^7;|aYI՗7?VW(I2h*qR >UIGGЏvC&m'Ăf~MBO㘍)ɟ)t!zN<9i;v291 (  %MdE# dsqe aCE/W Ru)ŷ!n]tU%5C'0@Ƨ=P~ƨݐI꘩-?POjϾm9q=L1YvI%'d^ _ r qZEo[ޤ]+s-trq8Ǵb{SJCCW#8@AjNp%t#^ )(܉R6v~eRl͹ C C8,I#DwzW"k*K?Hy=iM4Ԅԡ.J\xF8'r84;0T>@ ;0 0m"FCXKr8dwqOs?5}$ʛ\GgݤKzR+bHXʨ.FeW.|nڕ-2i[E"e Ss]2sH$DGh"7Eڎ6HiN<[mSttBۣӀn |$T'摻gչg 9w>^88ln56oX]%)ѡ2-HX˰TdIIG: Q8L<_-SqwD"l acVg'?d-F^0bN72 _ãϣ'HR۰Fv>ɣ|iꓷ³}G='qf찹v绬C~[DѐL=˩ڡ'E* )+ȍOY @;4~NA0t_G'r0FwtwH`bԾqt~L36;vX~"BVo+3 DFTo*@czlȑ3m~Av_z. {ԏq{i`x㿘;~,3? ע`Tz)L})W?Yª1ߒ#\lvztd&ERgS{Mq7 [W4DgUkQ_E+*E!*}[*KL+MX+)1) %eS8%q}ZqLHS<@yN6&xdhLk?Au'#^{(,J `t%mޥ64teڧ+{ >} %!ȥV-AMi}6J&ew܅fS{:s-N]8#cq&4 mf)nNyk!h?`o?>G|s3UF֫3QG>Iyj>c1o\UT/<ipw]>!vNW<@ EIqr۟KYY45TR2UrFEDktBCƾtHD;4Z w VNрwE{}Kk:o36 q-b}?g.0P2eB;E+\D$G~A(Y amLDwÕR<ز$2,Z^UN0̰E B,XF j[a` ,Bȁ84h: CtW/vs 6M-6z̮)€Su1!\g:\itXh4'\9ڜ21,XCx`j ? ęIzO_M|r~w['xYv.v\bXOf iJdHnhL1U, 3α[pض7VوϤy Az< aWG#'X3O:4珛񜯩g9W&6!6znTաޅ~F0>v27ӷ߀{c=b0'~Wb_=ovih,+)kWV8"Y`0LVlYZF, ࣘce#T LWNuN$ bZ+@ ce(fqM`yWN745mA85pbh<ύ$$jOK:Zɨ` IDATc#O3+4Bh(P9v֤E%Ofܠ Lאc `8*`CjλzP] SaƤmﴙцPU*;bLQz`pP,ޢmȔ-҄cSy5 ;"t `:tFi*B.d*Rz`&"Ȕ*rtOʃV'D)K;kcO.f(rr\"h6YR+;Nƹ(ehAibfM7!Y.\o,jQ&-R^60swxrǧoea.}#: u?ͩ嶛Mń۟JcP|:'j&656zn*x[*e@]${;wS}W,q"K{q%|nrjOWK%O߯$_ұ)h+9GvWL56Sw' u-f`,t yNSb3VݹJ FJt R[4[v0Lējb]^جH%Џ>qM'Z's4f+B0Ԡ6*L,1Ǽp"\PG)r0Fed c*ƭ A[1 T|ND8Zr_ s?׌<N8e3`>6ʧs^T~ _ 3 o&6&6zn^G'H=Vl',\Hmkb`Qg6jW#ϱy[[w=hd_(`yqĒ'$EH^* VnƁI!8d) 16l0%c 'ΒYT2C<|R@["ko0xGЃ#+"JOm3~~ *)iv2c!K0([$)$w}wEv[66av~uq_mƓMlb)6znd$HV?-Vugp.jܳ!g`P)ɥd:ow5M5; `f]|r#EW0]"0Irx;: 0Mi ,&ϼzmvnA2` FaX0[FQ%"4<.txnl!PTS!P>1 LrFCX=4 |.~Y:=kpٔ3TY'Wx6.,(mʎY,v|+ ;cvSB$N?ǧR$!>03!҅wSDVb\I[b؋L9?FdS/=K.=CE 4'?!eHc7FdOb߀]OeOskޖFU.)>(Βmoq|o  #.DOcq=;YZ[G &YE6z% F0wm0md>YZ37T#Hs&yV1P` P}4W-XBv#2HG54,R}bڭXO9AW8!X) 8'6$evO!RUbq|N0`F X۱lg#R}<}k;6b CSK#=[Ԣ.}:ԋ2>V2p4 ,Y ZL)u Lp/Sw" kG1&:b!WzU6`aʊ} qH.j9v?G.7KAym5XѫlSGomcYc {k||fwLe_u߼r{3O 3S{Z"*=ub e#CLjYs²[;`1p  #dh+L 8GЀX22 Euj}23Y2s]v|:&a[ [k^&tag؃W2.Z.v-zDim{^Ԡ.Qq9 K?d`R!Vj S<%I|7A(Q*$]ҟфV^~e϶׃~ wdzuq *wF!!lbMlA/i>LvMnE(|v$lᏲ=3?^F[n9ݦ.^`맸Sco꒞`F<{$LnJN&Il7IJ2$ҁ! ɷLت8z9BٹL%orO.Q )wsXF),sˠ䡜|4ѼMYRw~!< 5Kc Vl! OMWay&鴮d݄ v[w;KSSA=Y3+2Dq Π!AQ)-Jria&VŬ-QgqːThߝgi'9 A]Ͽ|\'KK5!>/KKԇ|q|Toa$ _fT&M@MnȄcW@ӊ)Ŋ;P$/tydom$}zڶ˄ ϮyXVB[l; SW^qϲiE?7M{&6>tYXlhR8b`7=5tJ9mk-H%O(WW?c ވ1ʖ$ZK5gK^6rx·E) R!luGh׫g-WE҂1 l 69!Nm>Gc,r2mǬ2\ -c݂؆mtϨaiP]"WAr$)yEf09L)Vȅ\(\Q-^G43S`rX\F)r21lX3XaB1 NZ{"LQaw=(6 -( K Is$Ya Sf <~b8wu>iY3ޒrZU.}\&i3lbKpl&c#S)[P#ix]1 Җ\>tSoNiM"6>v/.\9 c=93V!'z$xk-! 
058P/&"MglG-hWSVBscm W aKKHN6}Y:ҼS\sZnƓMlb)6zn)NSDgqpOZ>r5P?7db{R > QNWP!)x Om8  XցG0LvZ,ޣ: p Ѩvx CG=i*\D^M?E.ަ6Hwx1 Y' a7+۔ϱJ[g BZA9qX9&n,q >elճNr!bUosӗ\L[OqrW>lE%9{E+ҕvUI~GK/ujLs#=EoƓMlb)6zn#AۇnUo@6IsLmw|CO형Z aKKի$Zj)lV[=sތm7YiAa} jB)S1D)E{A:].JCh uQVJG(s9O\I5Qh6]lڀ%daͭÝCs0 NK6D`15Z ǭxʌ̜ɖQ-B,qGpl`W<Ď̜$`VipBh]8d΍C Lkoos}ѡ0>2"959 a\X2UI'i-y&k$Cs:lSպ YBH.GTj[їcj2)޽&`M@o%t z.3'Z|-x/MZd|B(+T}-i䭕8^V{W#t]k'UMyh(&:<;j%q@Z4(Hh'HWh`$ꦀ՞qt]vCe@7JdeycWdca0 :j@Z#40j` ˱9aڕ,!N{y6E .pʯ]71;g<9v^J Hm6Ŭ6b4a܊@W>G\؅Y4i挔Y]ooK|~=#@oFMlbml&!i-˯mÖQ7ÿC_eRv&UF} լkųS赘`i ~xt]-Ft\X3q 3YFc) PRC9\t6,%kLO$$5()Pie&T2\*ys EÐрANݖ])d:54ҩ/̢QrB*#fQ~5{zG&Gpww^q\5]nXK`,֐9l)-tsjb!Â-:.I^\nFMlbml&Ⱦĕ~PXҸ?KB#oJlY!ggmY9Ճ5_dO@#XD($~6Q:q 4PS (BmY A؂#E0S4XIFhLvAQx@i8h`u: )russ5హmtb& ~J #AGe4pp tmvTh!:+t`.0p:`OPC1ITs0arĵlSD+"L*~ l'8unU8,Dqa<'%K%KOf Gp.yD=w|l'z<]9:ܓ"mCﯛ/a}<Ν1=$]$@1Gsw3lbg@o⁎b]\ƌ>kKx:x|jyqN3kSZ'h"“d(%]'щM '4F- I`ڻ"DnM9ׁV o:^VN;ul4طt-+qJf;$*0)%HPCc(+Ǽ4@wp}%ddwe6zΫOBQ1*hڢF0ҝϘQ .s5%̚x 7>sRD%^=A83>foơJ)`,Ha81(T3n+Wy!ww 45p8:GIŽ`\,/0"ghdUI 褚}pI# `n=\S=OVvXy IDAT5*3|=ZV0O}EVK$raZ^xs3zn‚ eQ :2e.p=ïc>{Nl!4]((ڀ%3\ӏA& )NӜ ^S%x&A;F3:\KqfkItlDg<6.",,Ruيs&E"c-n OS&e>*(/2^1rYH^fuGסtjrf&'6zzl)c:<({Cןzí7!㸬7iMS?zgX,*mz8@`U8rәi [p  8nYv9H(-2G1(J !v` X|xLuhm=\;<8  ⺜8 P  r0mݚz7M\owF缨zFI0aFIM]1 {ƭeS~Wyعy@5H7-=1S+aP5nYyj¹m{DALS1e:v?0x P$Ze%JUbj0&ru|OaIDzmޔMaW2X/ gu6}m| >ϓz*8*Isv])3͈Mgl&??D@f)̅{J-*C-$|Ͽ֎/~7 ta݊jNk^rIr%RXJC `,A2 Bۖc.Wj-r<cSyT( c#ba1leIփR7Ts);24J$pH T:_̢ jc0g`PYn4#HIF3B:DsZ鉷ٺ=khJC%^8iCVQ:͡+Æ*>p޴\ūŎ=`[~K7yd2F ~W=ODyq[GS|Gjռ6j`.EE9xq1 a+R<Amput\[wtm{}rP?I5rxvvr1a}(/0g!'0pMl8Л$e~W2b.`z^6o"d{9.ַ*v-ȭb=#+R$6l_.jɝ*M:ĵ)4'q92@f ..\ j,[S!3l "uA2LAi#F<`BX erQFC8R0@F#CfhR;ޠcH &:YJGQfd[SnDq%1҅pkmA} u7N05vBvDV 饢vb^є9ձ35$Uf.9K{Mu:#jE/o`ւGkg9w4ͳsw.ÐpuL3WOHCSZ$"n-8S% &p Gp^IBh"{'5Q| ~1âs6yx@o#} $U>Z=K.=ĵˍYkZ MȚ]˒ T̀V׶ pSZrFrD0is5%J4糄3=іT@1 ,s(A yAHS6dRh j I=te)ΐEh "8An;9`;-xIAExe\dŗWRX[բK G=J:~z4ƝC@c &c}*!n^eT"# 30o Be]e܈X^,f c8>%}G_}g񻫓 ݐ)5)%l=ypKZ7~]!r1蘦Vb/G`e:BVtMίs7&6&b7GoX2420hگKIc[yQyiuѽ5+dblW$k>9&>(NX|y sl ڿZp]ƅNB B\rjvZ&X66J>""d9xBMf0S0>bG vDu1BACf{k& #(0&$(\})'h#7*kS4W*q5NQhqhBIq!sk0 dž}Mw~ȳLuhZt ٯ}~9ͰxF*K ?"DڰƆ\dW gf 7l$֬PmDl I>9 CG]枾BNh0'(ZQ 00akS֏~j0Ƙ2WatAN05HgdܚqUz"PsW,(yoc$'VaCq5PʜPl90Ȼ?qSJi]rP(M^͕#JR*[ It6+-M}Jc+K¾Y7+bp%JӜ[WҷjCś's }p}o#j{he$N&pC)aaDb7I/!r:6JuP*ok~ u]>sWo?5xo9NŪ9"g]p~i$Y`4!`kJb*A^&GmsJ!k( vPD|ѮܤiIQ=}dq-\bDC90-h̥ 6LJc]pmŇr[YcH(ќLϾp,P+.C,!1LOxE>BfP.OGALqc 7H`U g <0ʥN1 _ 'O/pO:t k5Z%iUA/k\{hG_5$TNX,y"J\{@zd(L=ud Tr wspbՔJo: LP+61ǰ2W&31tV-/FS:Xꆹgm$H;oBh e`` N!w8h*NfEG10fjo~x[f'dV! NZ:f呼nPh348;;TS[[*5yse> ׌Sٝ (R"qJσ 9yFP2e$]} tn| 6kPEkD:,pa,N-Z;qd-y.d#kBo(3BÝ#&b1hDTsB@ jUaVݽьYVَ5K2/\m.C-xRI = 7 ņ ~_(|.Pϸq9"um2C_#~;Üa]UXGv_a6g#:MR~-b' }6 #5.B7*ؘfa~>C v":ե jC0+\'iC>tMl 'YoMՄUgêȅTu/uäOFxLusku}Q&9}?moƻ*zOX_ų?rZS]+˳KX\nE1Mx8~av&6s̏7^N/\>;3z My֬|Jihro_彇@ C'`C:vSi 8#)#EJ7ڌuDC=lj7k(15A1\ s#$rC_zU a VB .d"BcޝC4UK#eI`29<~}S Nʚ,'/S܅ f-,d*wOB%D}ʪr" kCik%wd%̑T83Wy_u<4O||(=Ì_n;>#hDu10bNծF>99tX o:l]>rS!zjYZO\OLamoƻ*zO^~Or/~ 4q3ϵXKg&7!ŦAe+ rPкZyUMol&kE[|~yaV`-bs\$hm#fBPaV7f~xZ;bМU&T\U\SjRa/r(T1( X1)9ZoQNy/3 b,5*r 'x"at_eQKSY_Aۛh_dȫtE\I0;_EO@ ؜X{3RgW˶xxwD2lK;3fyPqHqoR y~W~o~gZ6zJE-w#nc/M׀S'%k}gmo7FYE /k/0[:MbՎ^Vu~q{6mx"di w~îvؾr(VkV8~O6sDc'OhONЍC !FV8G>x1N 8G-@ T;v`5P3U`$BںZiD0VєEĴuȾy£NQS2ϻ Hzk\!iH';Te7?d67?FbHLAj!'D2*2ŧh3 {Y ;Gmv 7Nj NS}_y? 
񡏒+Vҁ=d$].RƉpMɑՉMs*Ɵfd(HeͷY@.2kER/17pk[{EZkr3b@RT̴<;&׀1rwH>d?pN5b j<> V@MkVE$#x9#%!i#R$4cF)*gMzxI3yES0#e2De1f+ T}Z:o.p$`޸],j}\"1$Eo*]ֵ]*K 59&) N{%n bNm`W_QW' D꫐ułk%,v=tVU؅6ע{˻k>z.&96/_Ū^Dh ={x6'5Nc?ڷ}*eHzl8lþdٴT`o2Y[ȱK; IDAT;Ka)?S{ [{<;&?Ds6"2r+k0tT),;k *f`{@4s`b ziŃl@cxTCpԑEPద?G*LDE^l+|ڔr&)G&ٛfI\԰dV9*v!P gHD<0N,nDo )F!?]X˙U5^><͸0fBii.n0:4ѐY[89TA䘛ԶN J3*k 2:d0HdmjB5$9 ɰiB]'ѿA *1@F;C~mW[S,|#m'AhT^x' *Ehqt{r` Ŕ>( fp[ ؉HENz0?aG3jZ5'Q؅jz>vI ZN1ʮ!ѡ*{< @.^#p=8@|*g:R٪_f_ުwal6k%fT~e9zYBz&yS6~eoEp[./\rQ<ЌlG5-Tx$mj˄6>idںf%1jxFHL :GL8(+n%9zrx3;72f ǣ#8JeXcݢ*3b 47\Y8Ood^1|w$Z`&>y7L% 'iw,⹁ajB xÄ3,ol\1jRHbpc1d$Yz6 WiHn:1g4gFםέ:&7jsM&{w:zx͉JYpmϗ䥳 r:_ÿUocm<9bOVrͬVx lr1eH冷Bd."k@k>mT6dFXռkDEy* yiJ4;k5l\yju2L1m'$"HNR6=U _"\QҤ×gI0&9^jjZ@u hj$v{l)&A?uɉh]x6ex'X253KUfb(jVZ=>P7Б%bhS\bV!R p Hvז5?i"VZWCektGwta av $rq!bN]fwlΐ㖟gb8YM:BZmzS|b0ޑ]s>m c wג5)gG[/uYe,+6zBd$ ןýf4,mՃ"T{PTaGN鵖ۤAFYUg!ZCBZ/bM>ۓwNQeƒQ( b=vz3%u5!ҁ-Y{Cqg#ćNC!iL%:WY4-&"MjnrTINmT \JF<#VXlsX|zyJhbr~7 x2, ybk/]Ntf57:TW@ͽS=p{^Y(&u_]4j-3==zxw@o㉏sV"}3 AWX[fjyQ6ԏ1o`7SU=n2M= ΁\qE#Srq2Xx62A2!cюAZG".c8@=4dhPJǣHH5Ѩ,2v҄ 2k/)469,-OUA(>:fsBQe+0wyP'Ή;V RŔYbGk}[$B+R7i.u.˶;0{cpaF*U+טZȭW+#rԬ'+`HD8T[r=hȸm4]$WãCocmlYn軉͂z.HʦX"PRἾv/S,LIzM:W.d7~vCݗhN!{#(" hEP\C$0Vh96- ’XԤD0ɣbǦUoh~ Y*D5L X#5,_P_FEL'W$~ 똬N! M~.XySΡwH/Au{Ƣ'_*(=4ܬޜKDqAHK z?΀ {}p&et3>9қ![8BF0W6kk {ѩ R m!Kv0|?'_yc6-6?˛sLL= /#"_mj6Pea\ &"_UZ-Z/&ȇεS#8*&IHɢBЇ#ΐSzvo\ %::FuAw0y.pw88I Ho(mwx%ڋei#beNUL;<52uO!QŠ[I3KpNv33gM#ͱk5VcxPnXtrY ί5D0`4=q.G[fppwrH(/jrj̚ea̻lJ̪JL%paS#;֙r8Bh5RR{F=>vWGWG>xn~ Ww/Qȇ#.Gy_GorO߮Omlc .Ccg5P֍GBכr 9k UvS0t+mWQЇռ{O ;9pgﴀ8v&[jo⚌Vd0`8nXK@k Ǝ[)Y#ꩍWM86 w8; h:[Fn} 3NֶR9(%?-ձCErMY÷5C2!\-OIS,4~QeP``(x- f$ye&o}ioCJF.P{E!ySo(7Bg0$u/Jq3 S|I-Ċ41[o+ oB=^Ԝ p 7wC"2ŽF~̐)ZQ aaz{7)t{껲¿>4<&X!JR2]CX=w~zU^)'w?ί'6mloG-vGTeXӗ m&|ئ V7\o}׿z ~7x VDtW%=Lٲa!szI?]@wBq-5)gMk״w6]qLR~bC;„UhR9تD7e0$Aym(me`)D8'%q {?B6(Shږv  (h̽/ WGdỈT0VSĢ>tU(& դTA2yp&*! &QHw )L;e6۴1(JE>|}õy˽Va[֛:)]>wނ>ُ{l%'ΐy$#FtX3)(Thj'"E19;pV`m2gߊ}aF/JGpiO{ kapEo{^q,2NGTmU B!$'p N}'xzOzl6ыx\^@6m^4!}Fj2w@.Z7=嵝[~W3q.y&kIn2r 0-mnOxe4νk\ aDu%-e&E"kp2 !'xԣ9ق)&P'!y S-~XyvFK=!r^N*`X2k LS%,3Q!Gq(-e FTDEM5I FÎ +RH3PY5*e'Vm-z'wު#5@WHP`W[9D'>4ΐWCm':~|`Eedu |#3@ f a|!BAP#!a{ve.ȍq{wc9JڂT6Jӻ.g_ I;D%4?|zϋgFӳ['gro\ (cA>u(q#Iϊk1ֳ~ap|m<ѱF<>N2?^vϮ>ceX/sIx#m3d1֩`blVͷY=YZ\V1ȗ뱷6RshdIk~]ر12CGq5Zi8l 8%FX ؐq#P1bٺq!"d 8 _0u@u J ma8 `m=y8*c!UJELD0"I|x$.E⬪vy'*IceJ"L,^/DJĒ U(COGAj%Icg9}=b_]pK 6u9{HgZ[zyRutQۿ_giDI deڭwL[ƈ`AyƽrՂW_C Q30R/Ppu2lŪ *2TB-L^ԆUł>%86>Ǿ{Qve^gz{^#|j2b\T㍷Ӎ}=`v<]Y .ecY!1rHvs=?5[m\孿KE5K,/4 \/El6V7.-_׳֞Lt{[_Ĵnlp^:tK9Udd>0r8xuAXH @DaC$C=NaF`^!JpJV7Bz4=WL(_mG iuԷ{հ;èXjO<٩$&I*"s6`P+><¥$u)Ή8$D''u&IT2;R$E]_a118bv@[kJd:z0씷%c]_^сnPuz8~1x2BN9)w@yjѕ y6c nxC+!a軣.3%ƿAE;Ri8c4|>Lj) 쒆GwRla^$,M| I;v}?¿ǫ`Ɠ\_Fj;J?U&]rD\dž-b t/B%_r_Z<@GRwoS/Vm{Z`غsBΕ1 gp@ۼ}[CXѐS3.x IFԖj 92O(yw9Y(*JްA 'PϩcN^5(ZK4 A t.H zM0/^Zpb)kiJ˔2KfN$FMq{^'i!$Ԋ^\vuI9H vDWX\h}םQs3:!Ȼp}c]sώvkUL/ f sBA,w|/? α46}a 9%RO3td؂E~ O;G =1uTq2UYO2}d }bۣ:cv}]jലt,eH;Am(uq׏Om<cއI7 Ħgz߮iE vm='~_>c c}\_0`׽XZ`4(1y>5p^[Q2sچ SmayT"*dga]FLa u)NAJpB!)!u <'a ܭt ^ˢZ ggPk̬Til%#*!൨onTřӽk1ө(`&I#AT v4OT%JVPw3yj;8ܜy-=- l2ZV95C\)yairgT)Ioбk.+o/!9Cx KSZucB, nF]ΈKWӫT,~[>CSxn$Yjui%{QI"f̕,eAj4 {qn8GAˆ.oxc ?*|}[۴%*l7kUg65U998=;qeukұK-n'ucOF/ N aFϭ^,I(h1))ⱬ1Dm n7EcQ*]CSWܜjTLNyG! 
Wa5>)v":YJ.29bu*t K~/q|ԎS֜xxg܎> `L!'& HorU9H }( \y Fiu$wսP'Ր4H͚qR#2i% ]@Jf\=3tJXhڹQ]fY@HheLaOrl6 Dy#?ϓGz1Lciel^MlF*Y%uw᪚XzG@*'[Syʕw>bo8%+w|+.lj=?"C8bHyr%*d: ajC*N 8O^a!93#ucT e]eq0qSԋDDH#āK$;m腰Ox8RsT)jb!NO'|^l"@ctmQDĮFBB#)ɷ_e675Myͪuphvp>+;X3nEh5u'0f~ꋘ!*C.o\g/ 7 1f쐽VLbrƤ3%N6XYV0gj 8ÝRC)V-fiA=" 2E#b@XE as IDAT~&P{BFqRC!ՆTXd~X %".Yb>@ocX?9Ojr_jViU(gf}kHTjzH`V1.驃qQ}~;rô 6Wg_;xn^y{7^N3@vTd`/;]x)GJxC}L$( @'G0 ʐ"Co4("V>*pS K__&[|jSr;1p y RtnR'QJ5_:GUtwg&,JIT\-Zba o?؀aI:,j!IT[(R9^=mN(ҵ i98ɹ36\!8VzeV5Mlmõyo yNxcs+q`2#&0#sCr1 B@=dxaЕ 8(#UP[f:s_ж!a;_" ۼ6[*-sVix$$ 2dP1 2 >UUg Ի$>2Z"Ou2'Br/ۤ= #*rELZq:*lwDY[,SnΏ'M ߹tt5ܴ!xFnN:9 Z3_v= G? cwYϒf/\sC?U' bXt7 s\sTs9N VvS/͉.e2cW]TI z@G;͜ʳ.4 b֪j !Q(.#S03jd.MFAN,%]_> @oc|}k\y9XVccMI6^/;<7[]&d..9Ƶ2=ߨֻ#or:ËZ%6fGX"Ω2x0ENq֢D JGT2u5C Bؑ JAj Yґ{ ٹ!`֢g 煆'YbHKrzd 3sPI9K ;%{ qjzջIR,8Qi]_=r<󨯺)v& / `ayES#"FT)*|wwTi]"EKf&ƥ^/Q4uZخIMSi--6g%]jg2sP6vyǤ:ss|v&; fTJk7e+0n#X(ZF3C Mx.x3JLt)xbN EnQ7=%HUT Jb;1L*r''B{_C`ml_*{VKnޱd/c9_HX.Vikb^iUnOV]# $Gk dBi̅e2:d|_ #mNvϙ|(X$0"8m~5hX 5yxD0Wȳs0(DcQ)cҰ5:f"xGjZoW1;?f=7Ԓɘ )ws2qҺEcDs  \ 6-Dxf D"|R++j`rN?殓FHĦQǬH4*a#"25ĢNE-{ؖ! \;.v[(n[9\ U}}쯊ɔz5֞~tY5\G3>'.2k8cWy*L,f$LԆEBu$xQTy:5̝Fڡgq 3Bjㅓwa2B(H!80cRC!J%!+~y׶Omlc (72^6&z3u0̅h4dU.Y#! x+>'hO2XY4ԮM~ x,óhhF^NA"'ERl):p!V獱j0g?$v{?{;xA#LhgI}+nNy`IlDŀ{sS2\y ΛJqE)FbFE|RWԇL]Y*bb3!JecKfZONyc_>^|8y WlAZ<k<_I EMl̽J.=T"†Ӝ؄\!Ш @~ɔ+w8S۰14L6cv a2S:f'J(2DYQ{DIYƼchl HMK NtV{fEٸVXҳUo&/S^Ekrk匶W6M+qv4эBU1#RVL؇k6^a&H%tPq,s9)"{X` Ch-k,1wc%xS)~ ءs%Lq㋦%/Kk].oW2's,d2Sҩ*ԸRe&Ni1u3ETHbY!.&>*ޫp]tz+W?>zƵǡ84b#H! Fg-ɛ)VQ6 M`kR^YiiVcF7WhJFZF5!XxP-oΠgӉp'ձ].ܩۇ?҄KP4Aí[FfFdTZY6&Eg`mݡW%[">tafxk~mڑCuֿ=S_iBtt v’c g+젞!E/,Tw`,p 0' m_gMX1F 耑p {e):Gt&[;̴Œ??bYj D2ˀ(8Ō_-9 4֘ypʙB<TfJ\6ijeǙS׬-?0[gJ9tXGa#8AΠ9rBaLhZn&- D8BTS (K0 =GZp Y::.̹~%L[2 ƶq5+^)1FPLS#{yxk}*y%TήQeU cj|i:L\&⟭O̦/ǟ}݈IM)g%!z\(3~*jc_#tCaY"D7W~_6x&R#92C(%i`FA5s+ ĕQM4L-XbJڍN܆rs(=\[?eޯ<sQ3zS]~xuP|%1#8H&y]..Dca2DYhri-)‚E=\Q.hu_1i@ U2$gMa¸w4`reɖr^pzl Q._<]x Px0*sωQ1pjs64֑ǯ_w?,g 3T>?M\ &މxs!A&u! #nԎ|QY(m]8j.5JF3T僭HgYQ{0BxmvQdJO0(^~;ۋk 7-DY. 
fW;8%xkGWQ\ )(ꬊa۫ ~MQ veA8eABdl?#.aX*%6*ն6QzAT,uKiy|~m^:ڻ;Ѱ(, dHlY1O|VRQ-9A`X [)4fEn)34oM {Fdt R`H+Sh P5& ż|!xCeT{z ™p\'^{pAi\ThK|9xa#3؇8!9.ES4yN\2 kHχ:bE9h c]!|Ol#c7w$9zs uӾpK/ Ձ t}{/@˪^!#2Gzh<TCmK(Y$M @0SR`;-ܓ)ra XHZ"˪~DkP{q4/t΅EvC<d6¹K.$Ƀ$%0ar]؛t{D=.*E< %|MÀm ݥ1sK)GZHn ()`׏fAgc_:R(4 z7a/GXٞ_JuPxa>/P̒8y9J Ŀ?<7?$6zx"|R:*][(2WE{=B ?h믕²;݀`zgYfqR !w,*\cmERmu2Y+꒪Ӽ op}|U\Ńҵt- ʝޣ$KRߋZ!%Za52F&&g/V2]ɃTƍ%eIYSx3z+h sFnXU_.kqexuz(ԅe|Zu[O[i:%9ІFar(4eVqi$$JFh pvriHg0LErfK8>B+  YK5tyO͜գ@$e4n1蘔hv&Qm^aթҊIU%D'T9DޘYR#~'8eP}2+فd}bV 0 :Qy*!Lh\Q}V>chz{9$ H$SҁD*UC Bpt$\ݺu>YOqhwy5 R3d fGܗ<}.'~x<7?Jl&6Ɲ}nޤZCA߿2Țymnо#蠀:iw)2 PRgEePeU`S.)f0O'R1ΧLca"Bhқi$pU[R}_S#Z'lwp_#oS|H.t7F.ЄҗK+G@`fTC>0JYXm 4uL8 JSѥҎMp MF4렏2-αHqk?@)tD"r  X_fNpmVsf-!Sh*AK5c/1p ul9܈.{P (aI wX$aY1ݓG^GW4M^I %!E2VMg= YL|:ϸ9fxiI.S?Nu(xEK<7?bl&6N[~"vC]"o^v M˓;Oى-&#c~de J2S@mF){'P"8 LAAmf-#G,U U-YnXDp>D-Jp-pb>(J[^}[T(&0]d /*MiXw!L81E D*͞ӋeV  cVdٍƈ#,t;|@0 իJW&4m)EԣE p%xbFV2?pҾ$` /|pɠ8֤/ ϓ𨕹 RG7OMl_*6zxm.O>lO)8vٿ Ty(_7UGU/Xp&_dl3NC^6c=ӽ}Д5CzȢt 1nrxU>\ P\V|wo ,DVWc,1l6- p/ފDƂ5`x)֥<`Wr>.y}\2kaUC,c-s-"0FbY% ]ꊰKP"E:LzWGq,QM8[Yh-!+B)Wf8} u+b̥{I-AiKQ EK9-( t6Nvx 4sz$-+X\sxJIm}i4r2l>H=C/Hxop?lٛĿll&6[="S>rԽ}:cO<Kt W`/I/]T]jo,u4m Q=tAV80B#s#ccj r3f'" [ng?ǡdܡ6G_Ox= ,0qƪrC32q6cؙ3q@nJن-%CLv*]ipԠ,]A fzB\D3Ul3ߐ@e 1k4Cuouy.3C&vؚ0[+Dܔ`]^MM`60ߍ^'Ҧui4`xW '-eJURRvΣ5]6[4G ed Swvby&RO[|dϲ$ r"JAbL넇5n}3g8{QI+#t|9ڲ}~m ]`6~9v_ [5W,4ߙ6r>+bF;-Cb7‹Pj3;zeY%V9CK؃+fYeaic9\Hԝ!;OkB3FyZ%E7, b8t(2jLƖ1.::0Z'q Gg8 rj<6xߜR^~=oѭ0+Ut_O=Or2Ҡ(c҇7MlMm'FP?Ba@Lp;$x3z )cGCp'W%+>qVsuI'If`!k $frUdm!,1)<ڢ>iH]ˀ!n''fEx'g4-/a%q Aq"ɿ")EVeq7_KQvlƈ,`!;2 \s֎=YXacSN:^OOGSF5 MHwR[Kp8FD7Uֳh ro^e\݀5b+>M s2N_ſ)cI܆edI~M:lɢ0JYY>oO"Ybl+eܬT4<;bx4zsBTS c8'9Ych8Sp4h:G !.Ӳ *.7 z&ށMX^_׿ ycR򖁛 9|hámǽ} /f9Ɲ)aS~Ͽƽg.+m*-mH p 5vڍ;ǒ e .I k+QgBcT{T0.10Ox( HN@ a:&a-[ qc vh&!4&EA.OKoG'&&)z[G٢{n@ǣid)L)xhKWE?mbWkl= pu #&JTs(C #U:lBb`Iݭ6,Ѽr& >onT͋tt嫅XFj0 b=e(WehJGϰrc?t]A'GЩqEv L<3axGb7 gÌlJ ICo~38,5mO~~V ?;d͋n|R+ᕣn7?D́J Z@WFl%xFtI1Y%KfcդlNAm% s=9O?&0Yj/*:K)' @\(PK6PYG`5X_0ƢuYn3m(60l%RDՒr¢G¶-i ,߷3yAK* *{ rɶьhkh*[̗X4cy7Q_F֭r1J`z)a!#Da' Btifexl`.TQY\q-c7`2$[^3&ܱR H 7,!P(*T {S;)n&.7>S2mfe}7=V#DƐKyr#x|G6BIN!q>H2Krr6YI۪8ЉIu/%݌DoO80D[ԤC;FUi[ C)J9e-AM^RJT6/3\1J/Ou,dP(XKwJ8c,re0F0ʺ%V @.pDeQ"GZ|" K JRE@4j(? Ѐ$G:C\hvh@L)6qXP!%R+̕C#r~54[3Y"d=Y2 azcQmco7w66*xv1-uW2RGX?5jUiN(A^Ȭh?d0tGsn r|y ) o { \qFq_!EQNr::ymp X %h&Vytǽda[o0&ۯ!Zݏ̜d 1RR(@q2X.'P h^q iO ;A 56Jlu0}7^ݮQ}c"eѴfDUife~p,i7T/ǬJuO ːjEIxcdTPgq$мJ]}Ucy\.L"SbydUG舻ޣ-dn2'eo[;dꔝ?V6QE3+ Ɯ~:C:Ӓ 0K ] 8(97^51` ٺ  E( |fm'o ; &ޣ I" `ӳO581kѰ}nL@f(mld8«wx6ʑJ!k_Sǰh qߦt `,%VH ::ۼOto_=E PCjJD4㫷Ϫx'xr c/e W\ &8<anSxEAn`72t6Yڮ;fF)U.,XEQG R!+M67$K1UM-b.(7)5AqC0C02b_39T#YD$]YV}e1f @-Ba, ,qS'`&[ݿo=⒣f i-c萓\I9]JF`[{zd%GNweRGp 5Z@3QCd#$}+Y99]yE<+svB]2r,C>Ck/a]un=,o ݱjj>Z Cl低̈́MЛx YVn _rjõo8/_|[M>w j۲>7gb:",Oy*O=QH66"\G 9-wޜsj24"Pfֻ2=Na:OxrIy~V,շ<Y!0'`_ݘ_}xb Hܐ$R>/&D1]"SL!X$TNRf:Ji,b/`1B٘ RHܨ+уEDDPI`4hآY8-Pf4P_axDyb75Q(RQcZrpJ,"3-q̊ 9h0Cu yM0Z`_ymeMP?k #A* 9wTAb9ʚ _\I>x#4W͠Ka@A`\:n4~0Ԡm&6n@o=o^ss c}i S~sx8Vc?Η?[8mnX?Zk`v1(?gxp~a]mWx| & ̣mr[dI18^7xujOP0 3TllhOwVeY5`_31]0!v=!) [hUf'^ڢя3bQV-tUh0=wd<ٺOR20.]R0,cc=JOeC\UaS(`Neچ]KLvP"}:!rF=6GB%WX*|vI=rbњ1d<[`)#.1SO#ٹ'T R/),i%a]1^fv^- h-IZq"%Z?|#8(̄W Q!}pv rF]n`뀔YM/1;˺0'SP u b""SIQUYV. 
For a brief demonstration of a few of these callbacks in action together,
see the cookbook recipe: :ref:`annotations-recipe`.

Also note that new ``annotate_`` methods can be defined without modifying
yt's source code, see :ref:`extend-annotations`.
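Before jumping to :ref:`extend-annotations`, here is a minimal sketch of what
such a user-defined annotation can look like.  It assumes the registration
mechanism described in that section (subclassing
:class:`~yt.visualization.plot_modifications.PlotCallback` with a
``_type_name`` attribute and a ``__call__(self, plot)`` method); the
``HelloCallback`` name and the use of the internal ``plot._axes`` matplotlib
axes are illustrative assumptions, not a fixed public API.

.. python-script::

   import yt
   from yt.visualization.plot_modifications import PlotCallback


   class HelloCallback(PlotCallback):
       # Assumption: setting _type_name on a PlotCallback subclass registers
       # the callback and exposes it as plot.annotate_hello(...); see the
       # extend-annotations section for the authoritative recipe.
       _type_name = "hello"

       def __init__(self, text="hello"):
           self.text = text

       def __call__(self, plot):
           # plot._axes is the underlying matplotlib Axes (an internal
           # attribute); drawing in axes coordinates pins the label in place.
           plot._axes.text(
               0.05, 0.95, self.text, transform=plot._axes.transAxes, color="white"
           )


   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   p = yt.SlicePlot(ds, "z", ("gas", "density"), width=(20, "kpc"))
   p.annotate_hello("my custom annotation")
   p.save()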
The valid coordinate systems are:

``data`` – the 3D dataset coordinates

``plot`` – the 2D coordinates defined by the actual plot limits

``axis`` – the MPL axis coordinates: (0,0) is lower left; (1,1) is upper right

``figure`` – the MPL figure coordinates: (0,0) is lower left, (1,1) is upper right

Here we will demonstrate these different coordinate systems for a slice along the x-axis (i.e. with axes in the y and z directions): .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") s = yt.SlicePlot(ds, "x", ("gas", "density")) s.set_axes_unit("kpc") # Plot marker and text in data coords s.annotate_marker((0.2, 0.5, 0.9), coord_system="data") s.annotate_text((0.2, 0.5, 0.9), "data: (0.2, 0.5, 0.9)", coord_system="data") # Plot marker and text in plot coords s.annotate_marker((200, -300), coord_system="plot") s.annotate_text((200, -300), "plot: (200, -300)", coord_system="plot") # Plot marker and text in axis coords s.annotate_marker((0.1, 0.2), coord_system="axis") s.annotate_text((0.1, 0.2), "axis: (0.1, 0.2)", coord_system="axis") # Plot marker and text in figure coords # N.B. marker will not render outside of axis bounds s.annotate_marker((0.1, 0.2), coord_system="figure", color="black") s.annotate_text( (0.1, 0.2), "figure: (0.1, 0.2)", coord_system="figure", text_args={"color": "black"}, ) s.save() Note that for non-cartesian geometries and ``coord_system="data"``, the coordinates are still interpreted in the corresponding cartesian system. For instance, using a polar dataset from AMRVAC: .. python-script:: import yt ds = yt.load("amrvac/bw_polar_2D0000.dat") s = yt.plot_2d(ds, ("gas", "density")) s.set_background_color("density", "black") # Plot marker and text in data coords s.annotate_marker((0.2, 0.5, 0.9), coord_system="data") s.annotate_text((0.2, 0.5, 0.9), "data: (0.2, 0.5, 0.9)", coord_system="data") # Plot marker and text in plot coords s.annotate_marker((0.4, -0.5), coord_system="plot") s.annotate_text((0.4, -0.5), "plot: (0.4, -0.5)", coord_system="plot") # Plot marker and text in axis coords s.annotate_marker((0.1, 0.2), coord_system="axis") s.annotate_text((0.1, 0.2), "axis: (0.1, 0.2)", coord_system="axis") # Plot marker and text in figure coords # N.B. marker will not render outside of axis bounds s.annotate_marker((0.6, 0.2), coord_system="figure") s.annotate_text((0.6, 0.2), "figure: (0.6, 0.2)", coord_system="figure") s.save() Available Callbacks ------------------- The underlying functions are more thoroughly documented in :ref:`callback-api`. .. _clear-annotations: Clear Callbacks (Some or All) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. function:: clear_annotations(index=None) This function will clear previous annotations (callbacks) in the plot. If no index is provided, it will clear all annotations from the plot. If an index is provided, it will clear only the Nth annotation added to the plot. Note that the index goes from 0..N, and you can specify the index of the last added annotation as -1. (This is a proxy for :func:`~yt.visualization.plot_window.clear_annotations`.) .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") p = yt.SlicePlot(ds, "z", ("gas", "density"), center="c", width=(20, "kpc")) p.annotate_scale() p.annotate_timestamp() # Oops, I didn't want any of that. p.clear_annotations() p.save() .. _annotate-list: List Currently Applied Callbacks ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. function:: list_annotations() This function will print a list of each of the currently applied callbacks together with their index.
The index can be used with the :ref:`clear_annotations() function <clear-annotations>` to remove a specific callback. (This is a proxy for :func:`~yt.visualization.plot_window.list_annotations`.) .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") p = yt.SlicePlot(ds, "z", ("gas", "density"), center="c", width=(20, "kpc")) p.annotate_scale() p.annotate_timestamp() p.list_annotations() .. _annotate-arrow: Overplot Arrow ~~~~~~~~~~~~~~ .. function:: annotate_arrow(self, pos, length=0.03, coord_system='data', **kwargs) (This is a proxy for :class:`~yt.visualization.plot_modifications.ArrowCallback`.) Overplot an arrow pointing at a position for highlighting a specific feature. The arrow points from lower left to the designated position with arrow length ``length``. .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"), center="c") slc.annotate_arrow((0.5, 0.5, 0.5), length=0.06, color="blue") slc.save() .. _annotate-clumps: Clump Finder Callback ~~~~~~~~~~~~~~~~~~~~~ .. function:: annotate_clumps(self, clumps, **kwargs) (This is a proxy for :class:`~yt.visualization.plot_modifications.ClumpContourCallback`.) Take a list of ``clumps`` and plot them as a set of contours. .. python-script:: import numpy as np import yt from yt.data_objects.level_sets.api import Clump, find_clumps ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") data_source = ds.disk([0.5, 0.5, 0.5], [0.0, 0.0, 1.0], (8.0, "kpc"), (1.0, "kpc")) c_min = 10 ** np.floor(np.log10(data_source["gas", "density"]).min()) c_max = 10 ** np.floor(np.log10(data_source["gas", "density"]).max() + 1) master_clump = Clump(data_source, ("gas", "density")) master_clump.add_validator("min_cells", 20) find_clumps(master_clump, c_min, c_max, 2.0) leaf_clumps = master_clump.leaves prj = yt.ProjectionPlot(ds, "z", ("gas", "density"), center="c", width=(20, "kpc")) prj.annotate_clumps(leaf_clumps) prj.save("clumps") .. _annotate-contours: Overplot Contours ~~~~~~~~~~~~~~~~~ .. function:: annotate_contour(self, field, levels=5, factor=4, take_log=False,\ clim=None, plot_args=None, label=False, \ text_args=None, data_source=None) (This is a proxy for :class:`~yt.visualization.plot_modifications.ContourCallback`.) Add contours in ``field`` to the plot. ``levels`` governs the number of contours generated, ``factor`` governs the number of points used in the interpolation, ``take_log`` governs how it is contoured and ``clim`` gives the (lower, upper) limits for contouring. .. python-script:: import yt ds = yt.load("Enzo_64/DD0043/data0043") s = yt.SlicePlot(ds, "x", ("gas", "density"), center="max") s.annotate_contour(("gas", "temperature")) s.save() .. _annotate-quivers: Overplot Quivers ~~~~~~~~~~~~~~~~ Axis-Aligned Data Sources ^^^^^^^^^^^^^^^^^^^^^^^^^ .. function:: annotate_quiver(self, field_x, field_y, field_c=None, *, factor=16, scale=None, \ scale_units=None, normalize=False, **kwargs) (This is a proxy for :class:`~yt.visualization.plot_modifications.QuiverCallback`.) Adds a 'quiver' plot to any plot, using the ``field_x`` and ``field_y`` from the associated data, skipping every ``factor`` pixels in the discretization. A third field, ``field_c``, can be used as a color field; it is the counterpart of ``matplotlib.axes.Axes.quiver``'s final positional argument ``C``. ``scale`` is the data units per arrow length unit using ``scale_units``.
If ``normalize`` is ``True``, the fields will be scaled by their local (in-plane) length, allowing morphological features to be more clearly seen for fields with substantial variation in field strength. All additional keyword arguments are passed down to ``matplotlib.axes.Axes.quiver``. Example using a constant color: .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") p = yt.ProjectionPlot( ds, "z", ("gas", "density"), center=[0.5, 0.5, 0.5], weight_field="density", width=(20, "kpc"), ) p.annotate_quiver( ("gas", "velocity_x"), ("gas", "velocity_y"), factor=16, color="purple", ) p.save() And now using a continuous colormap: .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") p = yt.ProjectionPlot( ds, "z", ("gas", "density"), center=[0.5, 0.5, 0.5], weight_field="density", width=(20, "kpc"), ) p.annotate_quiver( ("gas", "velocity_x"), ("gas", "velocity_y"), ("gas", "vorticity_z"), factor=16, cmap="inferno_r", ) p.save() Off-Axis Data Sources ^^^^^^^^^^^^^^^^^^^^^ .. function:: annotate_cquiver(self, field_x, field_y, field_c=None, *, factor=16, scale=None, \ scale_units=None, normalize=False, **kwargs) (This is a proxy for :class:`~yt.visualization.plot_modifications.CuttingQuiverCallback`.) Get a quiver plot on top of a cutting plane, using the ``field_x`` and ``field_y`` from the associated data, skipping every ``factor`` datapoints in the discretization. ``scale`` is the data units per arrow length unit using ``scale_units``. If ``normalize`` is ``True``, the fields will be scaled by their local (in-plane) length, allowing morphological features to be more clearly seen for fields with substantial variation in field strength. Additional keyword arguments are passed down to ``matplotlib.axes.Axes.quiver``; see its documentation for more info. .. python-script:: import yt ds = yt.load("Enzo_64/DD0043/data0043") s = yt.OffAxisSlicePlot(ds, [1, 1, 0], [("gas", "density")], center="c") s.annotate_cquiver( ("gas", "cutting_plane_velocity_x"), ("gas", "cutting_plane_velocity_y"), factor=10, color="orange", ) s.zoom(1.5) s.save() .. _annotate-grids: Overplot Grids ~~~~~~~~~~~~~~ .. function:: annotate_grids(self, alpha=0.7, min_pix=1, min_pix_ids=20, \ draw_ids=False, id_loc="lower left", \ periodic=True, min_level=None, \ max_level=None, cmap='B-W Linear_r', \ edgecolors=None, linewidth=1.0) (This is a proxy for :class:`~yt.visualization.plot_modifications.GridBoundaryCallback`.) Adds grid boundaries to a plot, optionally with alpha-blending via the ``alpha`` keyword. The cutoff for display is at ``min_pix`` wide. ``draw_ids`` puts the grid id in the ``id_loc`` corner of the grid. (``id_loc`` can be upper/lower left/right. ``draw_ids`` is not so great in projections...) .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"), center="max") slc.annotate_grids() slc.save() .. _annotate-cell-edges: Overplot Cell Edges ~~~~~~~~~~~~~~~~~~~ .. function:: annotate_cell_edges(line_width=0.002, alpha=1.0, color='black') (This is a proxy for :class:`~yt.visualization.plot_modifications.CellEdgesCallback`.) Annotate the edges of cells, where the ``line_width`` relative to the size of the longest plot axis is specified. The ``alpha`` of the overlaid image and the ``color`` of the lines are also specifiable. Note that because the lines are drawn from both sides of a cell, the image sometimes has the effect of doubling the line width.
Color here is a matplotlib color name or a 3-tuple of RGB float values. .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"), center="max") slc.annotate_cell_edges() slc.save() .. _annotate-image-line: Overplot a Straight Line ~~~~~~~~~~~~~~~~~~~~~~~~ .. function:: annotate_line(self, p1, p2, *, coord_system='data', **kwargs) (This is a proxy for :class:`~yt.visualization.plot_modifications.LinePlotCallback`.) Overplot a line with endpoints at p1 and p2. p1 and p2 should be 2D or 3D coordinates consistent with the coordinate system denoted in the ``coord_system`` keyword. .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") p = yt.ProjectionPlot(ds, "z", ("gas", "density"), center="m", width=(10, "kpc")) p.annotate_line((0.3, 0.4), (0.8, 0.9), coord_system="axis") p.save() .. _annotate-magnetic-field: Overplot Magnetic Field Quivers ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. function:: annotate_magnetic_field(self, factor=16, *, scale=None, \ scale_units=None, normalize=False, \ **kwargs) (This is a proxy for :class:`~yt.visualization.plot_modifications.MagFieldCallback`.) Adds a 'quiver' plot of the magnetic field to the plot, skipping every ``factor`` datapoints in the discretization. ``scale`` is the data units per arrow length unit using ``scale_units``. If ``normalize`` is ``True``, the magnetic fields will be scaled by their local (in-plane) length, allowing morphological features to be more clearly seen for fields with substantial variation in field strength. Additional keyword arguments are passed down to ``matplotlib.axes.Axes.quiver``; see its documentation for more info. .. python-script:: import yt ds = yt.load( "MHDSloshing/virgo_low_res.0054.vtk", units_override={ "time_unit": (1, "Myr"), "length_unit": (1, "Mpc"), "mass_unit": (1e17, "Msun"), }, ) p = yt.ProjectionPlot(ds, "z", ("gas", "density"), center="c", width=(300, "kpc")) p.annotate_magnetic_field(headlength=3) p.save() .. _annotate-marker: Annotate a Point With a Marker ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. function:: annotate_marker(self, pos, marker='x', *, coord_system='data', **kwargs) (This is a proxy for :class:`~yt.visualization.plot_modifications.MarkerAnnotateCallback`.) Overplot a marker on a position for highlighting specific features. .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") s = yt.SlicePlot(ds, "z", ("gas", "density"), center="c", width=(10, "kpc")) s.annotate_marker((-2, -2), coord_system="plot", color="blue", s=500) s.save() .. _annotate-particles: Overplotting Particle Positions ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. function:: annotate_particles(self, width, p_size=1.0, col='k', marker='o',\ stride=1, ptype='all', alpha=1.0, data_source=None) (This is a proxy for :class:`~yt.visualization.plot_modifications.ParticleCallback`.) Adds particle positions, based on a thick slab along ``axis`` with a ``width`` along the line of sight. ``p_size`` controls the number of pixels per particle, and ``col`` governs the color. ``ptype`` will restrict plotted particles to only those that are of a given type. ``data_source`` will only plot particles contained within the data_source object. WARNING: if ``data_source`` is a :class:`yt.data_objects.selection_data_containers.YTCutRegion` then the ``width`` parameter is ignored. ..
python-script:: import yt ds = yt.load("Enzo_64/DD0043/data0043") p = yt.ProjectionPlot(ds, "x", ("gas", "density"), center="m", width=(10, "Mpc")) p.annotate_particles((10, "Mpc")) p.save() To plot only the central particles: .. python-script:: import yt ds = yt.load("Enzo_64/DD0043/data0043") p = yt.ProjectionPlot(ds, "x", ("gas", "density"), center="m", width=(10, "Mpc")) sp = ds.sphere(p.data_source.center, ds.quan(1, "Mpc")) p.annotate_particles((10, "Mpc"), data_source=sp) p.save() .. _annotate-sphere: Overplot a Circle on a Plot ~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. function:: annotate_sphere(self, center, radius, circle_args=None, \ coord_system='data', text=None, text_args=None) (This is a proxy for :class:`~yt.visualization.plot_modifications.SphereCallback`.) Overplot a circle with designated center and radius with optional text. .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") p = yt.ProjectionPlot(ds, "z", ("gas", "density"), center="c", width=(20, "kpc")) p.annotate_sphere([0.5, 0.5, 0.5], radius=(2, "kpc"), circle_args={"color": "black"}) p.save() .. _annotate-streamlines: Overplot Streamlines ~~~~~~~~~~~~~~~~~~~~ .. function:: annotate_streamlines(self, field_x, field_y, *, linewidth=1.0, linewidth_upscaling=1.0, \ color=None, color_threshold=float('-inf'), factor=16, **kwargs) (This is a proxy for :class:`~yt.visualization.plot_modifications.StreamlineCallback`.) Add streamlines to any plot, using the ``field_x`` and ``field_y`` from the associated data, skipping every ``factor`` datapoints in the discretization. A line with the mean vector magnitude will cover 1.0/``factor`` of the image. Additional keyword arguments are passed down to `matplotlib.axes.Axes.streamplot <https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.streamplot.html>`_ .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") s = yt.SlicePlot(ds, "z", ("gas", "density"), center="c", width=(20, "kpc")) s.annotate_streamlines(("gas", "velocity_x"), ("gas", "velocity_y")) s.save() .. _annotate-line-integral-convolution: Overplot Line Integral Convolution ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. function:: annotate_line_integral_convolution(self, field_x, field_y, \ texture=None, kernellen=50., \ lim=(0.5,0.6), cmap='binary', \ alpha=0.8, const_alpha=False) (This is a proxy for :class:`~yt.visualization.plot_modifications.LineIntegralConvolutionCallback`.) Add line integral convolution to any plot, using the ``field_x`` and ``field_y`` from the associated data. A white noise background will be used for ``texture`` by default. Adjust the bounds of ``lim`` in the range of ``[0, 1]``, which applies upper and lower bounds to the values of the line integral convolution and enhances the visibility of plots. When ``const_alpha=False``, alpha will be weighted spatially by the values of the line integral convolution; otherwise a constant value of the given alpha is used. .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") s = yt.SlicePlot(ds, "z", ("gas", "density"), center="c", width=(20, "kpc")) s.annotate_line_integral_convolution(("gas", "velocity_x"), ("gas", "velocity_y"), lim=(0.5, 0.65)) s.save() .. _annotate-text: Overplot Text ~~~~~~~~~~~~~ .. function:: annotate_text(self, pos, text, coord_system='data', \ text_args=None, inset_box_args=None) (This is a proxy for :class:`~yt.visualization.plot_modifications.TextLabelCallback`.)
Overplot text on the plot at a specified position. If you desire an inset box around your text, set one with the ``inset_box_args`` dictionary keyword. .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") s = yt.SlicePlot(ds, "z", ("gas", "density"), center="max", width=(10, "kpc")) s.annotate_text((2, 2), "Galaxy!", coord_system="plot") s.save() .. _annotate-title: Add a Title ~~~~~~~~~~~ .. function:: annotate_title(self, title='Plot') (This is a proxy for :class:`~yt.visualization.plot_modifications.TitleCallback`.) Accepts a ``title`` and adds it to the plot. .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") p = yt.ProjectionPlot(ds, "z", ("gas", "density"), center="c", width=(20, "kpc")) p.annotate_title("Density Plot") p.save() .. _annotate-velocity: Overplot Quivers for the Velocity Field ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. function:: annotate_velocity(self, factor=16, *, scale=None, scale_units=None, \ normalize=False, **kwargs) (This is a proxy for :class:`~yt.visualization.plot_modifications.VelocityCallback`.) Adds a 'quiver' plot of velocity to the plot, skipping every ``factor`` datapoints in the discretization. ``scale`` is the data units per arrow length unit using ``scale_units``. If ``normalize`` is ``True``, the velocity fields will be scaled by their local (in-plane) length, allowing morphological features to be more clearly seen for fields with substantial variation in field strength. Additional keyword arguments are passed down to ``matplotlib.axes.Axes.quiver``; see its documentation for more info. .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") p = yt.SlicePlot(ds, "z", ("gas", "density"), center="m", width=(10, "kpc")) p.annotate_velocity(headwidth=4) p.save() .. _annotate-timestamp: Add the Current Time and/or Redshift ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. function:: annotate_timestamp(x_pos=None, y_pos=None, corner='lower_left',\ time=True, redshift=False, \ time_format='t = {time:.1f} {units}', \ time_unit=None, time_offset=None, \ redshift_format='z = {redshift:.2f}', \ draw_inset_box=False, coord_system='axis', \ text_args=None, inset_box_args=None) (This is a proxy for :class:`~yt.visualization.plot_modifications.TimestampCallback`.) Annotates the timestamp and/or redshift of the data output at a specified location in the image (either in a preset corner, or by specifying (x,y) image coordinates with the ``x_pos`` and ``y_pos`` arguments). If no time units are specified, it will automatically choose appropriate units. It allows for custom formatting of the time and redshift information, the specification of an inset box around the text, and changing the value of the timestamp via a constant offset. .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") p = yt.SlicePlot(ds, "z", ("gas", "density"), center="c", width=(20, "kpc")) p.annotate_timestamp() p.save() .. _annotate-scale: Add a Physical Scale Bar ~~~~~~~~~~~~~~~~~~~~~~~~ .. function:: annotate_scale(corner='lower_right', coeff=None, \ unit=None, pos=None, \ scale_text_format="{scale} {units}", \ max_frac=0.16, min_frac=0.015, \ coord_system='axis', text_args=None, \ size_bar_args=None, draw_inset_box=False, \ inset_box_args=None) (This is a proxy for :class:`~yt.visualization.plot_modifications.ScaleCallback`.) Annotates the scale of the plot at a specified location in the image (either in a preset corner, or by specifying (x,y) image coordinates with the ``pos`` argument). ``coeff`` and ``unit`` (e.g.
1 Mpc or 100 kpc) refer to the distance scale you desire to show on the plot. If no ``coeff`` and ``unit`` are specified, an appropriate pair will be determined such that your scale bar is never smaller than ``min_frac`` or greater than ``max_frac`` of your plottable axis length. Additional customization of the scale bar is possible by adjusting the ``text_args`` and ``size_bar_args`` dictionaries. The ``text_args`` dictionary accepts matplotlib's font_properties arguments to override the default font_properties for the current plot. The ``size_bar_args`` dictionary accepts keyword arguments for the AnchoredSizeBar class in matplotlib's axes_grid toolkit. Finally, the format of the scale bar text can be adjusted using the ``scale_text_format`` keyword argument. .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") p = yt.SlicePlot(ds, "z", ("gas", "density"), center="c", width=(20, "kpc")) p.annotate_scale() p.save() .. _annotate-triangle-facets: Annotate Triangle Facets Callback ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. function:: annotate_triangle_facets(triangle_vertices, **kwargs) (This is a proxy for :class:`~yt.visualization.plot_modifications.TriangleFacetsCallback`.) This adds a line collection of a SlicePlot's plane-intersection with the triangles to the plot. This callback is ideal for a dataset representing a geometric model of triangular facets. .. python-script:: import os import h5py import yt # Load data file ds = yt.load("MoabTest/fng_usrbin22.h5m") # Create the desired slice plot s = yt.SlicePlot(ds, "z", ("moab", "TALLY_TAG")) # get triangle vertices from file (in this case hdf5) # setup file path for yt test directory filename = os.path.join( yt.config.ytcfg.get("yt", "test_data_dir"), "MoabTest/mcnp_n_impr_fluka.h5m" ) f = h5py.File(filename, mode="r") coords = f["/tstt/nodes/coordinates"][:] conn = f["/tstt/elements/Tri3/connectivity"][:] points = coords[conn - 1] # Annotate slice-triangle intersection contours to the plot s.annotate_triangle_facets(points, colors="black") s.save() .. _annotate-mesh-lines: Annotate Mesh Lines Callback ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. function:: annotate_mesh_lines(**kwargs) (This is a proxy for :class:`~yt.visualization.plot_modifications.MeshLinesCallback`.) This draws the mesh line boundaries over a plot using a matplotlib line collection. This callback is only useful for unstructured or semi-structured mesh datasets. .. python-script:: import yt ds = yt.load("MOOSE_sample_data/out.e") sl = yt.SlicePlot(ds, "z", ("connect1", "nodal_aux")) sl.annotate_mesh_lines(color="black") sl.save() .. _annotate-ray: Overplot the Path of a Ray ~~~~~~~~~~~~~~~~~~~~~~~~~~ .. function:: annotate_ray(ray, *, arrow=False, **kwargs) (This is a proxy for :class:`~yt.visualization.plot_modifications.RayCallback`.) Adds a line representing the projected path of a ray across the plot. The ray can be either a :class:`~yt.data_objects.selection_objects.ray.YTOrthoRay`, a :class:`~yt.data_objects.selection_objects.ray.YTRay`, or a Trident :class:`~trident.light_ray.LightRay` object. annotate_ray() will properly account for periodic rays across the volume. ..
python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") oray = ds.ortho_ray(0, (0.3, 0.4)) ray = ds.ray((0.1, 0.2, 0.3), (0.6, 0.7, 0.8)) p = yt.ProjectionPlot(ds, "z", ("gas", "density")) p.annotate_ray(oray) p.annotate_ray(ray) p.save() Applying filters on the final image ----------------------------------- It is also possible to operate on the plotted image directly by using one of the fixed resolution buffer filters as described in :ref:`frb-filters`. Note that it is necessary to call the plot object's ``refresh`` method to apply filters. .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") p = yt.SlicePlot(ds, "z", ("gas", "density")) p.frb.apply_gauss_beam(sigma=30) p.refresh() p.save() .. _extend-annotations: Extending annotations methods ----------------------------- New ``annotate_`` methods can be added to plot objects at runtime (i.e., without modifying yt's source code) by subclassing the base ``PlotCallback`` class. This is the recommended way to add custom and unique annotations to yt plots, as it can be done through local plugins, individual scripts, or even external packages. Here's a minimal example: .. python-script:: import yt from yt.visualization.api import PlotCallback class TextToPositionCallback(PlotCallback): # bind a new `annotate_text_to_position` plot method _type_name = "text_to_position" def __init__(self, text, x, y): # this method can have arbitrary arguments # and should store them without alteration, # but not run expensive computations self.text = text self.position = (x, y) def __call__(self, plot): # this method's signature is required # this is where we perform potentially expensive operations # the plot argument exposes matplotlib objects: # - plot._axes is a matplotlib.axes.Axes object # - plot._figure is a matplotlib.figure.Figure object plot._axes.annotate( self.text, xy=self.position, xycoords="data", xytext=(0.2, 0.6), textcoords="axes fraction", color="white", fontsize=30, arrowprops=dict(facecolor="black", shrink=0.05), ) ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") p = yt.SlicePlot(ds, "z", "density") p.annotate_text_to_position("Galactic center !", x=0, y=0) p.save()

[file: yt-4.4.0/doc/source/visualizing/colormaps/cmap_images.py]

import matplotlib as mpl import yt # Load the dataset. ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # Create projections using each colormap available. p = yt.ProjectionPlot(ds, "z", "density", weight_field="density", width=0.4) for cmap in mpl.colormaps: if cmap.startswith("idl"): continue p.set_cmap(field="density", cmap=cmap) p.annotate_title(cmap) p.save(f"Projection_{cmap.replace(' ', '_')}.png")

[file: yt-4.4.0/doc/source/visualizing/colormaps/index.rst]

.. _colormaps: Colormaps ========= There are several colormaps available for yt. yt includes all of the matplotlib colormaps as well, for nearly all functions. Individual visualization functions usually allow you to specify a colormap with the ``cmap`` flag.
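For example, a minimal sketch of swapping the colormap on a plot via its ``set_cmap`` method (assuming the ``IsolatedGalaxy`` sample dataset used throughout these docs; the colormap name ``"magma"`` is just one illustrative matplotlib choice):

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    slc = yt.SlicePlot(ds, "z", ("gas", "density"))
    # any yt (cmyt.*) or matplotlib colormap name is accepted here
    slc.set_cmap(field=("gas", "density"), cmap="magma")
    slc.save()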
In yt 3.3, we changed the default yt colormap from ``algae`` to ``arbre``. This colormap was designed and voted on by the yt community, and it is meant to be easier to interpret for people with different color sensitivities as well as when printed in black and white. In 3.3, additional colormaps ``dusk``, ``kelp`` and ``octarine`` were also added, following the same guidelines. For a deeper dive into colormaps, see the SciPy 2015 talk by Stéfan van der Walt and Nathaniel Smith about the new matplotlib colormap ``viridis`` at https://www.youtube.com/watch?v=xAoljeRJ3lU . To specify a different default colormap (including ``viridis``), in your yt configuration file (see :ref:`configuration-file`) you can set the value ``default_colormap`` to the name of the colormap you would like. In contrast to previous versions of yt, starting in 3.3 yt no longer overrides any matplotlib defaults and instead only applies the colormap to yt-produced plots. .. _install-palettable: Palettable and ColorBrewer2 ~~~~~~~~~~~~~~~~~~~~~~~~~~~ While colormaps that employ a variety of colors often look attractive, they are not always the best choice to convey information to one's audience. There are numerous `articles `_ and `presentations `_ that discuss how rainbow-based colormaps fail with regard to black-and-white reproductions, colorblind audience members, and confusing color ordering. Depending on the application, the consensus seems to be that gradients between one or two colors are the best way for the audience to extract information from one's figures. Many such colormaps are found in palettable. If you have installed `palettable `_ (formerly brewer2mpl), you can also access the discrete colormaps available to that package, including those from `colorbrewer `_. Install `palettable `_ with ``pip install palettable``. To access these maps in yt, instead of supplying the colormap name, specify a tuple of the form (name, type, number), for example ``('RdBu', 'Diverging', 9)``. These discrete colormaps will not be interpolated, and can be useful for creating colorblind/printer/grayscale-friendly plots. For more information, visit `http://colorbrewer2.org <http://colorbrewer2.org>`_. .. _custom-colormaps: Making and Viewing Custom Colormaps ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ yt can also accommodate custom colormaps using the :func:`~yt.visualization.color_maps.make_colormap` function. These custom colormaps can be made to an arbitrary level of complexity. You can make these on the fly for each yt session, or you can store them in your :ref:`plugin-file` for access to them in every future yt session. The example below creates two custom colormaps, one that has three equally spaced bars of blue, white and red, and the other that interpolates in intervals of increasing length from black to red, to green, to blue. These will be accessible for the rest of the yt session as 'french_flag' and 'weird'. See :func:`~yt.visualization.color_maps.make_colormap` and :func:`~yt.visualization.color_maps.show_colormaps` for more details. .. code-block:: python yt.make_colormap( [("blue", 20), ("white", 20), ("red", 20)], name="french_flag", interpolate=False, ) yt.make_colormap( [("black", 5), ("red", 10), ("green", 20), ("blue", 0)], name="weird", interpolate=True, ) yt.show_colormaps(subset=["french_flag", "weird"], filename="cmaps.png") All Colormaps (including matplotlib) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This is a chart of all of the yt and matplotlib colormaps available.
In addition to each colormap displayed here, you can access its "reverse" by simply appending a ``"_r"`` to the end of the colormap name. .. image:: ../_images/all_colormaps.png :width: 512 Native yt Colormaps ~~~~~~~~~~~~~~~~~~~ .. image:: ../_images/native_yt_colormaps.png :width: 512 Displaying Colormaps Locally ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To display the most up-to-date colormaps locally, you can use the :func:`~yt.visualization.color_maps.show_colormaps` function. By default, you'll see every colormap available to you, but you can specify subsets of colormaps to display, either as just the ``yt_native`` colormaps, or by specifying a list of colormap names. This will display all the colormaps available in a local window: .. code-block:: python import yt yt.show_colormaps() or to output the original yt colormaps to an image file, try: .. code-block:: python import yt yt.show_colormaps( subset=[ "cmyt.algae", "cmyt.arbre", "cmyt.dusk", "cmyt.kelp", "cmyt.octarine", "cmyt.pastel", ], filename="yt_native.png", ) .. note:: Since yt 4.1, yt native colormaps are shipped as a separate package `cmyt `_ that can be used outside yt itself. Within ``yt`` functions, these colormaps can still be referenced without the ``"cmyt."`` prefix. However, there is no guarantee that this will work in upcoming versions of matplotlib, so our recommendation is to keep the prefix at all times to retain forward compatibility. yt also retains compatibility with the names these colormaps were formerly known by (for instance, ``cmyt.pastel`` used to be named ``kamae``). Applying a Colormap to your Rendering ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ All of the visualization functions in yt have a keyword allowing you to manually specify a specific colormap. For example: .. code-block:: python yt.write_image(im, "output.png", cmap_name="jet") If you're using the Plot Window interface (e.g. SlicePlot, ProjectionPlot, etc.), it's even easier than that. Simply create your rendering, and you can quickly swap the colormap on the fly after the fact with the ``set_cmap`` callback: .. code-block:: python ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") p = yt.ProjectionPlot(ds, "z", ("gas", "density")) p.set_cmap(field=("gas", "density"), cmap="turbo") p.save("proj_with_turbo_cmap.png") p.set_cmap(field=("gas", "density"), cmap="hot") p.save("proj_with_hot_cmap.png") For more information about the callbacks available to Plot Window objects, see :ref:`callbacks`. Examples of Each Colormap ~~~~~~~~~~~~~~~~~~~~~~~~~ To give the reader a better feel for how a colormap appears once it is applied to a dataset, below we provide a library of identical projections of an isolated galaxy where only the colormap has changed. They use the sample dataset "IsolatedGalaxy" available at `https://yt-project.org/data <https://yt-project.org/data>`_. .. yt_colormaps:: cmap_images.py

[file: yt-4.4.0/doc/source/visualizing/geographic_projections_and_transforms.rst]

.. _geographic_projections_and_transforms: Geographic Projections and Transforms ===================================== Geographic data that is on a sphere can be visualized by projecting that data onto a representation of that sphere flattened into 2d space. There exist a number of projection types, which can be found in `the cartopy documentation `_. With support from `cartopy `_, ``yt`` now supports these projection types for geographically loaded data.
Underlying data is assumed to have a transform of `PlateCarree `__, which is data on a flattened, rectangular, latitude/longitude grid. This is a typical format for geographic data. The distinction between the data transform and projection is worth noting. The data transform describes the coordinate system your data is defined in, while the data projection determines what the resulting plot will display. For more information on this difference, refer to `the cartopy documentation on these differences `_. If your data is not of this form, feel free to open an issue or file a pull request on the ``yt`` github page for this feature. It should be noted that these projections are not the same as yt's ProjectionPlot. For more information on yt's projection plots, see :ref:`projection-types`. .. _install-cartopy: Installing Cartopy ^^^^^^^^^^^^^^^^^^ In order to access the geographic projection functionality, you will need to have an installation of ``cartopy`` available on your machine. Please refer to `Cartopy's documentation for detailed instructions `_. Using Basic Transforms ^^^^^^^^^^^^^^^^^^^^^^ As mentioned above, the default data transform is assumed to be `PlateCarree `__, which is data on a flattened, rectangular, latitude/longitude grid. To set something other than ``PlateCarree``, the user can access the dictionary in the coordinate handler that defines the coordinate transform, and change the default transform type. Because the transform describes the underlying data coordinate system, the loaded dataset will carry this newly set attribute and all future plots will have the user-defined data transform. Also note that the dictionary is ordered by axis type. Because slicing along the altitude may differ from, say, the latitude axis, we may choose to have different transforms for each axis. .. code-block:: python ds = yt.load_uniform_grid(data, sizes, 1.0, geometry=("geographic", dims), bbox=bbox) ds.coordinates.data_transform["altitude"] = "Miller" p = yt.SlicePlot(ds, "altitude", "AIRDENS") In this example, the ``data_transform`` kwarg has been changed from its default of ``PlateCarree`` to ``Miller``. You can check that you have successfully changed the defaults by inspecting the ``data_transform`` and ``data_projection`` dictionaries in the coordinate handler. For this dataset, that would be accessed by: .. code-block:: python print(ds.coordinates.data_transform["altitude"]) print(ds.coordinates.data_projection["altitude"]) Using Basic Projections ^^^^^^^^^^^^^^^^^^^^^^^ All of the transforms available in ``Cartopy`` v0.15 and above are accessible with this functionality. The next few examples will use a GEOS dataset accessible from the ``yt`` data downloads page. For details about loading this data, please see :doc:`../cookbook/geographic_xforms_and_projections`. If a geographic dataset is loaded without any defined projection, the default option of ``Mollweide`` will be displayed. .. code-block:: python ds = yt.load_uniform_grid(data, sizes, 1.0, geometry=("geographic", dims), bbox=bbox) p = yt.SlicePlot(ds, "altitude", "AIRDENS") If an option other than ``Mollweide`` is desired, the plot projection type can be set with the ``set_mpl_projection`` function. The next code block illustrates how to set the projection to a ``Robinson`` projection from the default ``PlateCarree``. ..
code-block:: python ds = yt.load_uniform_grid(data, sizes, 1.0, geometry=("geographic", dims), bbox=bbox) p = yt.SlicePlot(ds, "altitude", "AIRDENS") p.set_mpl_projection("Robinson") p.show() The axes attributes of the plot can be accessed to add in annotations, such as coastlines. The axes are matplotlib ``GeoAxes``, so any of the annotations available with matplotlib should be available for customization. Here a ``Robinson`` plot is made with coastline annotations. .. code-block:: python p.set_mpl_projection("Robinson") p.render() p.plots["AIRDENS"].axes.set_global() p.plots["AIRDENS"].axes.coastlines() p.show() ``p.render()`` is required here to access the plot axes. When a new projection is called, the plot axes are reset and are not available unless set up again. Additional arguments can be passed to the projection function for further customization. If additional arguments are desired, then rather than passing a string of the projection name, one would pass a 2- or 3-item tuple, the first item of the tuple corresponding to a string of the transform name, and the second and third items corresponding to the args and kwargs of the transform, respectively. Alternatively, a user can pass a transform object rather than a string or tuple. This allows users to create and define their own transforms, beyond what is available in cartopy. The type must be a cartopy GeoAxes object or a matplotlib transform object. For creating custom transforms, see `the matplotlib example `_. The function ``set_mpl_projection()`` accepts several input types for varying levels of customization: .. code-block:: python set_mpl_projection("ProjectionType") set_mpl_projection(("ProjectionType", (args))) set_mpl_projection(("ProjectionType", (args), {kwargs})) set_mpl_projection(cartopy.crs.PlateCarree()) Further examples of using the geographic transforms with this dataset can be found in :doc:`../cookbook/geographic_xforms_and_projections`.

[file: yt-4.4.0/doc/source/visualizing/index.rst]

.. _visualizing: Visualizing Data ================ yt comes with a number of ways for visualizing one's data, including slices, projections, line plots, profiles, phase plots, volume rendering, 3D surfaces, streamlines, and a google-maps-like interface for exploring one's dataset interactively. .. toctree:: :maxdepth: 2 plots callbacks manual_plotting volume_rendering unstructured_mesh_rendering interactive_data_visualization visualizing_particle_datasets_with_firefly sketchfab mapserver streamlines colormaps/index geographic_projections_and_transforms FITSImageData

[file: yt-4.4.0/doc/source/visualizing/interactive_data_visualization.rst]

.. _interactive_data_visualization: Interactive Data Visualization ============================== The interactive, OpenGL-based volume rendering system for yt has been exported into its own package, called ``yt_idv``. Documentation, including installation instructions, can be found at `its website `_, and the source code is hosted under the yt-project organization on github at `yt_idv `_.
[file: yt-4.4.0/doc/source/visualizing/manual_plotting.rst]

.. _manual-plotting: Using the Manual Plotting Interface =================================== Sometimes you need a lot of flexibility in creating plots. While the :class:`~yt.visualization.plot_window.PlotWindow` provides an easy to use object that can create nice looking, publication quality plots with a minimum of effort, there are often times when its ease of use conflicts with your need to change the font only on the x-axis, or whatever your need/desire/annoying coauthor requires. To that end, yt provides a number of ways of getting the raw data that goes into a plot to you in the form of a one or two dimensional dataset that you can plot using any plotting method you like. matplotlib or another python library is easiest, but these methods also allow you to take your data and plot it in gnuplot, or any unnamed commercial plotting package. Note that the index object associated with your snapshot file contains a list of plots you've made in ``ds.plots``. .. _fixed-resolution-buffers: Slice, Projections, and other Images: The Fixed Resolution Buffer ----------------------------------------------------------------- For slices and projections, yt provides a manual plotting interface based on the :class:`~yt.visualization.fixed_resolution.FixedResolutionBuffer` (hereafter referred to as FRB) object. Despite its somewhat unwieldy name, at its heart, an FRB is a very simple object: it's essentially a window into your data: you give it a center and a width or a left and right edge, and an image resolution, and the FRB returns a fully pixelized image. The simplest way to generate an FRB is to use the ``.to_frb(width, resolution, center=None)`` method of any two-dimensional data object: .. python-script:: import matplotlib matplotlib.use("Agg") import numpy as np from matplotlib import pyplot as plt import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") _, c = ds.find_max(("gas", "density")) proj = ds.proj(("gas", "density"), 0) width = (10, "kpc") # we want a 10 kpc view res = [1000, 1000] # create an image with 1000x1000 pixels frb = proj.to_frb(width, res, center=c) plt.imshow(np.array(frb["gas", "density"])) plt.savefig("my_perfect_figure.png") Note that in the above example the axes tick marks indicate pixel indices. If you want to represent physical distances on your plot axes, you will need to use the ``extent`` keyword of the ``imshow`` function. The FRB is a very small object that can be deleted and recreated quickly (in fact, this is how ``PlotWindow`` plots work behind the scenes). Furthermore, you can add new fields in the same "window", and each of them can be plotted with their own zlimit. This is quite useful for creating a mosaic of the same region in space with Density, Temperature, and x-velocity, for example. Each of these quantities requires a substantially different set of limits. A more complex example, showing a few yt helper functions that can make setting up multiple axes with colorbars easier than it would be using only matplotlib, can be found in the :ref:`advanced-multi-panel` cookbook recipe. .. _frb-filters: Fixed Resolution Buffer Filters ------------------------------- The FRB can be modified by using a set of predefined filters in order to, e.g., create realistic-looking mock observation images out of simulation data.
Applying a filter is an irreversible operation, hence the order in which filters are applied matters. .. python-script:: import matplotlib matplotlib.use("Agg") from matplotlib import pyplot as plt import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") slc = ds.slice("z", 0.5) frb = slc.to_frb((20, "kpc"), 512) frb.apply_gauss_beam(nbeam=30, sigma=2.0) frb.apply_white_noise(5e-23) plt.imshow(frb["gas", "density"].d) plt.savefig("frb_filters.png") Currently available filters: Gaussian Smoothing ~~~~~~~~~~~~~~~~~~ .. function:: apply_gauss_beam(self, nbeam=30, sigma=2.0) (This is a proxy for :class:`~yt.visualization.fixed_resolution_filters.FixedResolutionBufferGaussBeamFilter`.) This filter convolves the FRB with a 2D Gaussian that is ``nbeam`` pixels wide and has a standard deviation of ``sigma``. White Noise ~~~~~~~~~~~ .. function:: apply_white_noise(self, bg_lvl=None) (This is a proxy for :class:`~yt.visualization.fixed_resolution_filters.FixedResolutionBufferWhiteNoiseFilter`.) This filter adds white noise with the amplitude ``bg_lvl`` to the FRB. If ``bg_lvl`` is not present, the 10th percentile of the FRB's values is used instead. .. _manual-line-plots: Line Plots ---------- This is perhaps the simplest thing to do. yt provides a number of one dimensional objects, and these return a 1-D numpy array of their contents with direct dictionary access. As a simple example, take a :class:`~yt.data_objects.selection_data_containers.YTOrthoRay` object, which can be created from an index by calling ``ds.ortho_ray(axis, center)``. .. python-script:: import matplotlib matplotlib.use("Agg") import numpy as np from matplotlib import pyplot as plt import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") _, c = ds.find_max(("gas", "density")) ax = 0 # take a line cut along the x axis # cutting through the y0,z0 such that we hit the max density ray = ds.ortho_ray(ax, (c[1], c[2])) # Sort the ray values by 'x' so there are no discontinuities # in the line plot srt = np.argsort(ray["index", "x"]) plt.subplot(211) plt.semilogy(np.array(ray["index", "x"][srt]), np.array(ray["gas", "density"][srt])) plt.ylabel("density") plt.subplot(212) plt.semilogy(np.array(ray["index", "x"][srt]), np.array(ray["gas", "temperature"][srt])) plt.xlabel("x") plt.ylabel("temperature") plt.savefig("den_temp_xsweep.png") Of course, you'll likely want to do something more sophisticated than using the matplotlib defaults, but this gives the general idea.

[file: yt-4.4.0/doc/source/visualizing/mapserver.rst]

.. _mapserver: Mapserver - A Google-Maps-like Interface to your Data ----------------------------------------------------- The mapserver is an experimental feature. It's based on `Leaflet `_, a library written to create zoomable, map-tile interfaces. (Similar to Google Maps.) yt provides everything you need to start up a web server that will interactively re-pixelize an adaptive image. This means you can explore your datasets in a fully pan-n-zoom interface. .. note:: Previous versions of yt bundled the necessary dependencies, but with more recent releases you will need to install the package ``bottle`` via pip or conda. To start up the mapserver, you can use the command yt (see :ref:`command-line`) with the ``mapserver`` subcommand. It takes several of the same options and arguments as the ``plot`` subcommand. For instance: ..
code-block:: bash yt mapserver DD0050/DD0050 That will take a slice along the x axis at the center of the domain. The field, projection, weight and axis can all be specified on the command line. When you do this, it will spawn a micro-webserver on your localhost, and print the URL to connect to on standard output. You can connect to it (or create an SSH tunnel to connect to it) and explore your data. Double-clicking zooms, and dragging drags. .. image:: _images/mapserver.png :scale: 50% This is also functional on touch-capable devices such as Android Tablets and iPads/iPhones.

[file: yt-4.4.0/doc/source/visualizing/plots.rst]

.. _how-to-make-plots: How to Make Plots ================= .. note:: In this document, and the rest of the yt documentation, we use field tuples; for instance, we specify density as ``("gas", "density")`` whereas in previous versions of this document we typically just used ``"density"``. While the latter will still work in many or most cases, and may suffice for your purposes, to ensure we explicitly avoid ambiguity, we use field tuples here. In this section we explain how to use yt to create visualizations of simulation data, derived fields, and the data produced by yt analysis objects. For details about the data extraction and algorithms used to produce the image and analysis data, please see the yt `method paper `_. There are also many example scripts in :ref:`cookbook`. The :class:`~yt.visualization.plot_window.PlotWindow` interface is useful for taking a quick look at simulation outputs. Simple mechanisms exist for making plots of slices, projections, 1D spatial line plots, 1D profiles, and 2D profiles (phase plots), all of which are described below. .. _viewing-plots: Viewing Plots ------------- yt uses an environment-neutral plotting mechanism that detects the appropriate matplotlib configuration for a given environment; however, it defaults to a basic renderer. To utilize interactive plots in matplotlib-supported environments (Qt, GTK, WX, etc.) simply call the ``toggle_interactivity()`` function. Below is an example in a jupyter notebook environment, but the same command should work in other environments as well: .. code-block:: IPython %matplotlib notebook import yt yt.toggle_interactivity() .. _simple-inspection: Slices & Projections -------------------- If you need to take a quick look at a single simulation output, yt provides the :class:`~yt.visualization.plot_window.PlotWindow` interface for generating annotated 2D visualizations of simulation data. You can create a :class:`~yt.visualization.plot_window.PlotWindow` plot by supplying a dataset, a list of fields to plot, and a plot center to create a :class:`~yt.visualization.plot_window.AxisAlignedSlicePlot`, :class:`~yt.visualization.plot_window.OffAxisSlicePlot`, :class:`~yt.visualization.plot_window.ProjectionPlot`, or :class:`~yt.visualization.plot_window.OffAxisProjectionPlot`. Plot objects use yt data objects to extract the maximum resolution data available to render a 2D image of a field. Whenever a two-dimensional image is created, the plotting object first obtains the necessary data at the *highest resolution*. Every time an image is requested of it -- for instance, when the width or field is changed -- this high-resolution data is then pixelized and placed in a buffer of fixed size.
This is accomplished behind the scenes using :class:`~yt.visualization.fixed_resolution.FixedResolutionBuffer`. The :class:`~yt.visualization.plot_window.PlotWindow` class exposes the underlying matplotlib `figure `_ and `axes `_ objects, making it easy to customize your plots and add new annotations. See :ref:`matplotlib-customization` for more information. .. _slice-plots: Slice Plots ~~~~~~~~~~~ The quickest way to plot a slice of a field through your data is via :class:`~yt.visualization.plot_window.SlicePlot`. These plots are generally quicker than projections because they only need to read and process a slice through the dataset. The following script plots a slice through the density field along the z-axis centered on the center of the simulation box in a simulation dataset we've opened and stored in ``ds``: .. code-block:: python slc = yt.SlicePlot(ds, "z", ("gas", "density")) slc.save() These two commands will create a slice object and store it in a variable we've called ``slc``. Since this plot is aligned with the simulation coordinate system, ``slc`` is an instance of :class:`~yt.visualization.plot_window.AxisAlignedSlicePlot`. We then call the ``save()`` function, which automatically saves the plot in png image format with an automatically generated filename. If you don't want the slice object to stick around, you can accomplish the same thing in one line: .. code-block:: python yt.SlicePlot(ds, "z", ("gas", "density")).save() It's nice to keep the slice object around if you want to modify the plot. By default, the plot width will be set to the size of the simulation box. To zoom in by a factor of ten, you can call the zoom function attached to the slice object: .. code-block:: python slc = yt.SlicePlot(ds, "z", ("gas", "density")) slc.zoom(10) slc.save("zoom") This will save a new plot to disk with a different filename - prepended with 'zoom' instead of the name of the dataset. If you want to set the width manually, you can do that as well. For example, the following sequence of commands will create a slice, set the width of the plot to 10 kiloparsecs, and save it to disk, with the filename prefix being ``10kpc`` and the rest determined by the field, visualization method, etc. .. code-block:: python from yt.units import kpc slc = yt.SlicePlot(ds, "z", ("gas", "density")) slc.set_width(10 * kpc) slc.save("10kpc") The plot width can be specified independently along the x and y direction by passing a tuple of widths. An individual width can also be represented using a ``(value, unit)`` tuple. The following sequence of commands all equivalently set the width of the plot to 200 kiloparsecs in the ``x`` and ``y`` direction. .. code-block:: python from yt.units import kpc slc.set_width(200 * kpc) slc.set_width((200, "kpc")) slc.set_width((200 * kpc, 200 * kpc)) The ``SlicePlot`` also optionally accepts the coordinate to center the plot on and the width of the plot: .. code-block:: python yt.SlicePlot( ds, "z", ("gas", "density"), center=[0.2, 0.3, 0.8], width=(10, "kpc") ).save() Note that, by default, :class:`~yt.visualization.plot_window.SlicePlot` shifts the coordinates on the axes such that the origin is at the center of the slice. To instead use the coordinates as defined in the dataset, use the optional argument ``origin="native"``. If supplied without units, the center is assumed to be in code units.
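As a short sketch of both behaviors (again using the ``IsolatedGalaxy`` sample dataset; the center value here is purely illustrative):

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    # a center supplied without units is interpreted in code units;
    # origin="native" keeps the dataset's own coordinates on the axes
    slc = yt.SlicePlot(
        ds, "z", ("gas", "density"), center=[0.5, 0.5, 0.5], origin="native"
    )
    slc.save()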
There are also the following alternative options for the ``center`` keyword:

* ``"center"``, ``"c"``: the domain center
* ``"left"``, ``"l"``, ``"right"``, ``"r"``: the domain's left/right edge along the normal direction (``SlicePlot``'s second argument). Remaining axes use their respective domain center values.
* ``"min"``: the position of the minimum density
* ``"max"``, ``"m"``: the position of the maximum density
* ``"min_<field name>"`` / ``"max_<field name>"``: the position of the minimum/maximum in the first field matching the field name
* ``("min", field)``: the position of the minimum of ``field``
* ``("max", field)``: the position of the maximum of ``field``

where for the last two objects any spatial field, such as ``"density"``, ``"velocity_z"``, etc., may be used, e.g. ``center=("min", ("gas", "temperature"))``. ``"left"`` and ``"right"`` are not allowed for off-axis slices. The effective resolution of the plot (i.e. the number of resolution elements in the image itself) can be controlled with the ``buff_size`` argument: .. code-block:: python yt.SlicePlot(ds, "z", ("gas", "density"), buff_size=(1000, 1000)) Here is an example that combines all of the options we just discussed. .. python-script:: import yt from yt.units import kpc ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") slc = yt.SlicePlot( ds, "z", ("gas", "density"), center=[0.5, 0.5, 0.5], width=(20, "kpc"), buff_size=(1000, 1000), ) slc.save() The above example will display an annotated plot of a slice of the Density field in a 20 kpc square window centered on the coordinate (0.5, 0.5, 0.5) in the x-y plane. The axis to slice along is keyed to the letter 'z', corresponding to the z-axis. Finally, the image is saved to a png file. Conceptually, you can think of the plot object as an adjustable window into the data. For example: .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") slc = yt.SlicePlot(ds, "z", ("gas", "pressure"), center="c") slc.save() slc.zoom(30) slc.save("zoom") will save a plot of the pressure field in a slice along the z axis across the entire simulation domain followed by another plot that is zoomed in by a factor of 30 with respect to the original image. Both plots will be centered on the center of the simulation box. With these sorts of manipulations, one can easily pan and zoom onto an interesting region in the simulation and adjust the boundaries of the region to visualize on the fly. If you want to slice through a subset of the full dataset volume, you can use the ``data_source`` keyword with a :ref:`data object <data-objects>` or a :ref:`cut region <cut-regions>`. See :class:`~yt.visualization.plot_window.AxisAlignedSlicePlot` for the full class description. .. _plot-2d: Plots of 2D Datasets ~~~~~~~~~~~~~~~~~~~~ If you have a two-dimensional cartesian, cylindrical, or polar dataset, :func:`~yt.visualization.plot_window.plot_2d` is a way to make a plot within the dataset's plane without having to specify the axis, which in this case is redundant. Otherwise, ``plot_2d`` accepts the same arguments as ``SlicePlot``. The one other difference is that the ``center`` keyword argument can be a two-dimensional coordinate instead of a three-dimensional one: .. python-script:: import yt ds = yt.load("WindTunnel/windtunnel_4lev_hdf5_plt_cnt_0030") p = yt.plot_2d(ds, ("gas", "density"), center=[1.0, 0.4]) p.set_log(("gas", "density"), False) p.save() See :func:`~yt.visualization.plot_window.plot_2d` for the full description of the function and its keywords. ..
.. _off-axis-slices:

Off Axis Slices
~~~~~~~~~~~~~~~

Off axis slice plots can be generated in much the same way as grid-aligned
slices. Off axis slices use
:class:`~yt.data_objects.selection_data_containers.YTCuttingPlane` to slice
through simulation domains at an arbitrary oblique angle. A
:class:`~yt.visualization.plot_window.OffAxisSlicePlot` can be instantiated by
specifying a dataset, the normal to the cutting plane, and the name of the
fields to plot. Just like an
:class:`~yt.visualization.plot_window.AxisAlignedSlicePlot`, an
:class:`~yt.visualization.plot_window.OffAxisSlicePlot` can be created via the
:class:`~yt.visualization.plot_window.SlicePlot` class. For example:

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   L = [1, 1, 0]  # vector normal to cutting plane
   north_vector = [-1, 1, 0]
   cut = yt.SlicePlot(
       ds, L, ("gas", "density"), width=(25, "kpc"), north_vector=north_vector
   )
   cut.save()

In this case, a normal vector for the cutting plane is supplied in the second
argument. Optionally, a ``north_vector`` can be specified to fix the
orientation of the image plane.

.. note:: Not all data types support off-axis slices yet. Currently, this
   operation is supported for grid-based and SPH data with cartesian geometry.
   In some cases an off-axis projection over a thin region might be used
   instead.

.. _projection-plots:

Projection Plots
~~~~~~~~~~~~~~~~

Using a fast adaptive projection, yt is able to quickly project simulation
data along the coordinate axes.

Projection plots are created by instantiating a
:class:`~yt.visualization.plot_window.ProjectionPlot` object. For example:

.. python-script::

   import yt
   from yt.units import kpc

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   prj = yt.ProjectionPlot(
       ds,
       "z",
       ("gas", "temperature"),
       width=25 * kpc,
       weight_field=("gas", "density"),
       buff_size=(1000, 1000),
   )
   prj.save()

will create a density-weighted projection of the temperature field along the
z axis with 1000 resolution elements per side, plot it, and then save the plot
to a png image file.

Like :ref:`slice-plots`, annotations and modifications can be applied after
creating the ``ProjectionPlot`` object. Annotations are described in
:ref:`callbacks`. See :class:`~yt.visualization.plot_window.ProjectionPlot`
for the full class description.

If you want to project through a subset of the full dataset volume, you can
use the ``data_source`` keyword with a :ref:`data object <data-objects>`. The
:ref:`thin-slice-projections` recipe demonstrates this functionality.

.. note:: Not all data types support off-axis projections yet. Currently,
   this operation is supported for grid-based data with cartesian geometry, as
   well as SPH particle data.

.. _projection-types:

Types of Projections
""""""""""""""""""""

There are several different methods of projection that can be made either when
creating a projection with :meth:`~yt.static_output.Dataset.proj` or when
making a :class:`~yt.visualization.plot_window.ProjectionPlot`. In either
construction method, set the ``method`` keyword to be one of the following:

``integrate`` (unweighted):
+++++++++++++++++++++++++++

This is the default projection method. It simply integrates the requested
field :math:`f({\bf x})` along a line of sight :math:`\hat{\bf n}`, given by
the axis parameter (e.g. :math:`\hat{\bf i},\hat{\bf j},` or
:math:`\hat{\bf k}`).
The units of the projected field :math:`g({\bf X})` will be the units of the
unprojected field :math:`f({\bf x})` multiplied by the appropriate length
unit, e.g., density in :math:`\mathrm{g\ cm^{-3}}` will be projected to
:math:`\mathrm{g\ cm^{-2}}`.

.. math::

   g({\bf X}) = {\int\ {f({\bf x})\hat{\bf n}\cdot{d{\bf x}}}}

``integrate`` (weighted):
+++++++++++++++++++++++++

When using the ``integrate`` method, a ``weight_field`` argument may also be
specified, which will produce a weighted projection. :math:`w({\bf x})` is the
field used as a weight. One common example would be to weight the
"temperature" field by the "density" field. In this case, the units of the
projected field are the same as the unprojected field.

.. math::

   g({\bf X}) = \int\ {f({\bf x})\tilde{w}({\bf x})\hat{\bf n}\cdot{d{\bf x}}}

where the "~" over :math:`w({\bf x})` reflects the fact that it has been
normalized like so:

.. math::

   \tilde{w}({\bf x}) = \frac{w({\bf x})}{\int\ {w({\bf x})\hat{\bf n}\cdot{d{\bf x}}}}

For weighted projections using the ``integrate`` method, it is also possible
to project the standard deviation of a field. In this case, the projected
field is mathematically given by:

.. math::

   g({\bf X}) = \left[\int\ {f({\bf x})^2\tilde{w}({\bf x})\hat{\bf n}\cdot{d{\bf x}}} -
   \left(\int\ {f({\bf x})\tilde{w}({\bf x})\hat{\bf n}\cdot{d{\bf x}}}\right)^2\right]^{1/2}

In order to make a weighted projection of the standard deviation of a field
along a line of sight, the ``moment`` keyword argument should be set to 2.

``max``:
++++++++

This method picks out the maximum value of a field along the line of sight
given by the axis parameter.

``min``:
++++++++

This method picks out the minimum value of a field along the line of sight
given by the axis parameter.

``sum``:
++++++++

This method is the same as ``integrate``, except that it does not multiply by
a path length when performing the integration, and is just a straight
summation of the field along the given axis. The units of the projected field
will be the same as those of the unprojected field. This method is typically
only useful for datasets such as 3D FITS cubes where the third axis of the
dataset is something like velocity or frequency, and should *only* be used
with fixed-resolution grid-based datasets.

.. _off-axis-projections:

Off Axis Projection Plots
~~~~~~~~~~~~~~~~~~~~~~~~~

Internally, off axis projections are created using :ref:`camera` by applying
the
:class:`~yt.visualization.volume_rendering.transfer_functions.ProjectionTransferFunction`.
In this use case, the volume renderer casts a set of plane-parallel rays, one
for each pixel in the image. The data values along each ray are summed,
creating the final image buffer. For SPH datasets, the coordinates are instead
simply rotated before the axis-aligned projection function is applied.

.. _off-axis-projection-function:

To avoid manually creating a camera and setting the transfer function, yt
provides the
:func:`~yt.visualization.volume_rendering.off_axis_projection.off_axis_projection`
function, which wraps the camera interface to create an off axis projection
image buffer. These images can be saved to disk or used in custom plots. This
snippet creates an off axis projection through a simulation.
.. python-script::

   import numpy as np

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   L = [1, 1, 0]  # vector normal to cutting plane
   W = [0.02, 0.02, 0.02]
   c = [0.5, 0.5, 0.5]
   N = 512
   image = yt.off_axis_projection(ds, c, L, W, N, ("gas", "density"))
   yt.write_image(np.log10(image), "%s_offaxis_projection.png" % ds)

Here, ``W`` is the width of the projection in the x, y, *and* z directions.

One can also generate annotated off axis projections using
:class:`~yt.visualization.plot_window.ProjectionPlot`. These plots can be
created in much the same way as an ``OffAxisSlicePlot``, requiring only an
open dataset, a direction to project along, and a field to project. For
example:

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   L = [1, 1, 0]  # vector normal to cutting plane
   north_vector = [-1, 1, 0]
   prj = yt.ProjectionPlot(
       ds, L, ("gas", "density"), width=(25, "kpc"), north_vector=north_vector
   )
   prj.save()

``OffAxisProjectionPlot`` objects can also be created with a number of keyword
arguments, as described in
:class:`~yt.visualization.plot_window.OffAxisProjectionPlot`.

Like on-axis projections, the projection of the standard deviation of a
weighted field can be created by setting ``moment=2`` in the call to
:class:`~yt.visualization.plot_window.ProjectionPlot`.

.. _slices-and-projections-in-spherical-geometry:

Slice Plots and Projection Plots in Spherical Geometry
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

What should you expect when plotting data in spherical geometry? Here we
explain the notation and the projection system yt uses to render 2D images of
spherical data.

The native spherical coordinates are

- the spherical radius :math:`r`
- the colatitude :math:`\theta`, defined between :math:`0` and :math:`\pi`
- the azimuth :math:`\varphi`, defined between :math:`0` and :math:`2\pi`

:math:`\varphi`-normal slices are represented in the poloidal plane, with axes
:math:`R, z`, where

- :math:`R = r \sin \theta` is the cylindrical radius
- :math:`z = r \cos \theta` is the elevation

.. python-script::

   import yt

   ds = yt.load_sample("KeplerianDisk", unit_system="cgs")
   slc = yt.SlicePlot(ds, "phi", ("gas", "density"))
   slc.save()

:math:`\theta`-normal slices are represented in a :math:`x/\sin(\theta)` vs.
:math:`y/\sin(\theta)` plane, where

- :math:`x = R \cos \varphi`
- :math:`y = R \sin \varphi`

are the cartesian plane coordinates

.. python-script::

   import yt

   ds = yt.load_sample("KeplerianDisk", unit_system="cgs")
   slc = yt.SlicePlot(ds, "theta", ("gas", "density"))
   slc.save()

Finally, :math:`r`-normal slices are represented following an
`Aitoff-Hammer projection <https://en.wikipedia.org/wiki/Hammer_projection>`_.
We denote

- the latitude :math:`\bar\theta = \frac{\pi}{2} - \theta`
- the longitude :math:`\lambda = \varphi - \pi`

.. python-script::

   import yt

   ds = yt.load_sample("KeplerianDisk", unit_system="cgs")
   slc = yt.SlicePlot(ds, "r", ("gas", "density"))
   slc.save()

.. _unstructured-mesh-slices:

Unstructured Mesh Slices
------------------------

Unstructured mesh datasets can be sliced using the same syntax as above. Here
is an example script using a publicly available MOOSE dataset:

.. python-script::

   import yt

   ds = yt.load("MOOSE_sample_data/out.e-s010")
   sl = yt.SlicePlot(ds, "x", ("connect1", "diffused"))
   sl.zoom(0.75)
   sl.save()

Here, we plot the ``'diffused'`` variable, using a slice normal to the ``'x'``
direction, through the mesh labelled by ``'connect1'``. By default, the slice
goes through the center of the domain.
We have also zoomed out a bit to get a better view of the resulting structure.
To instead plot the ``'convected'`` variable, using a slice normal to the
``'z'`` direction through the mesh labelled by ``'connect2'``, we do:

.. python-script::

   import yt

   ds = yt.load("MOOSE_sample_data/out.e-s010")
   sl = yt.SlicePlot(ds, "z", ("connect2", "convected"))
   sl.zoom(0.75)
   sl.save()

These slices are made by sampling the finite element solution at the points
corresponding to each pixel of the image. The ``'convected'`` and
``'diffused'`` variables are node-centered, so this interpolation is performed
by converting the sample point to the reference coordinate system of the
element and evaluating the appropriate shape functions. You can also plot
element-centered fields:

.. python-script::

   import yt

   ds = yt.load("MOOSE_sample_data/out.e-s010")
   sl = yt.SlicePlot(ds, "y", ("connect1", "conv_indicator"))
   sl.zoom(0.75)
   sl.save()

We can also annotate the mesh lines, as follows:

.. python-script::

   import yt

   ds = yt.load("MOOSE_sample_data/out.e-s010")
   sl = yt.SlicePlot(ds, "z", ("connect1", "diffused"))
   sl.annotate_mesh_lines(color="black")
   sl.zoom(0.75)
   sl.save()

The ``plot_args`` parameter is a dictionary of keyword arguments that will be
passed to matplotlib. It can be used to control the mesh line color,
thickness, etc.

The above examples all involve 8-node hexahedral mesh elements. Here is
another example from a dataset that uses 6-node wedge elements:

.. python-script::

   import yt

   ds = yt.load("MOOSE_sample_data/wedge_out.e")
   sl = yt.SlicePlot(ds, "z", ("connect2", "diffused"))
   sl.save()

Slices can also be used to examine 2D unstructured mesh datasets, but the
slices must be taken to be normal to the ``'z'`` axis, or you'll get an error.
Here is an example using another MOOSE dataset that uses triangular mesh
elements:

.. python-script::

   import yt

   ds = yt.load("MOOSE_sample_data/out.e")
   sl = yt.SlicePlot(ds, "z", ("connect1", "nodal_aux"))
   sl.save()

You may run into situations where you have a variable you want to visualize
that exists on multiple mesh blocks. To view the variable on ``all`` mesh
blocks, simply pass ``all`` as the first argument of the field tuple:

.. python-script::

   import yt

   ds = yt.load("MultiRegion/two_region_example_out.e", step=-1)
   sl = yt.SlicePlot(ds, "z", ("all", "diffused"))
   sl.save()

.. _particle-plotting-workarounds:

Additional Notes for Plotting Particle Data
-------------------------------------------

Since version 4.2.0, off-axis projections are supported for non-SPH particle
data. Prior to that, this operation was only supported for SPH particles. Two
historical workaround methods were available for plotting non-SPH particles
with off-axis projections.

1. :ref:`smooth-non-sph` - this method involves extracting particle data to be
   reloaded with :func:`~yt.loaders.load_particles` and using the
   :meth:`~yt.frontends.stream.data_structures.StreamParticlesDataset.add_sph_fields`
   function to create smoothing lengths. This works well for relatively small
   datasets, but is not parallelized and may take too long for larger data.
2. Plot from a saved
   :class:`~yt.data_objects.construction_data_containers.YTCoveringGrid`,
   :class:`~yt.data_objects.construction_data_containers.YTSmoothedCoveringGrid`,
   or :class:`~yt.data_objects.construction_data_containers.YTArbitraryGrid`
   dataset. This second method is illustrated below. First, construct one of
   the grid data objects listed above.
Then, use the
:meth:`~yt.data_objects.data_containers.YTDataContainer.save_as_dataset`
function (see :ref:`saving_data`) to save a deposited particle field (see
:ref:`deposited-particle-fields`) as a reloadable dataset. This dataset can
then be loaded and visualized using both off-axis projections and slices. Note
the change in the field name from ``("deposit", "nbody_mass")`` to
``("grid", "nbody_mass")`` after reloading.

.. python-script::

   import yt

   ds = yt.load("gizmo_cosmology_plus/snap_N128L16_132.hdf5")

   # create a 128^3 covering grid over the entire domain
   L = 7
   cg = ds.covering_grid(level=L, left_edge=ds.domain_left_edge, dims=[2**L] * 3)

   fn = cg.save_as_dataset(fields=[("deposit", "nbody_mass")])

   ds_grid = yt.load(fn)
   p = yt.ProjectionPlot(ds_grid, [1, 1, 1], ("grid", "nbody_mass"))
   p.save()

Plot Customization: Recentering, Resizing, Colormaps, and More
--------------------------------------------------------------

You can customize each of the four plot types above in identical ways. We'll
go over each of the customization methods below. For each of the examples
below we will modify the following plot.

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
   slc.save()

Panning and zooming
~~~~~~~~~~~~~~~~~~~

There are three methods to dynamically pan around the data.

:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.pan` accepts x and y
deltas.

.. python-script::

   import yt
   from yt.units import kpc

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
   slc.pan((2 * kpc, 2 * kpc))
   slc.save()

:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.pan_rel` accepts
deltas in units relative to the field of view of the plot.

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
   slc.pan_rel((0.1, -0.1))
   slc.save()

:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.zoom` accepts a
factor to zoom in by.

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
   slc.zoom(2)
   slc.save()

Set axes units
~~~~~~~~~~~~~~

:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.set_axes_unit`
allows the customization of the axes unit labels.

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
   slc.set_axes_unit("Mpc")
   slc.save()

The same result could have been accomplished by explicitly setting the
``width`` to ``(.01, 'Mpc')``.

.. _set-image-units:

Set image units
~~~~~~~~~~~~~~~

:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.set_unit` allows the
customization of the units used for the image and colorbar.

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
   slc.set_unit(("gas", "density"), "Msun/pc**3")
   slc.save()

If the unit you would like to convert to needs an equivalency, this can be
specified via the ``equivalency`` keyword argument of ``set_unit``. For
example, let's make a plot of the temperature field, but present it using an
energy unit instead of a temperature unit:
.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   slc = yt.SlicePlot(ds, "z", ("gas", "temperature"), width=(10, "kpc"))
   slc.set_unit(("gas", "temperature"), "keV", equivalency="thermal")
   slc.save()

Set the plot center
~~~~~~~~~~~~~~~~~~~

The :meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.set_center`
function accepts a new center for the plot, in code units. New centers must be
two element tuples.

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
   slc.set_center((0.5, 0.503))
   slc.save()

Adjusting the plot view axes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are a number of ways in which the initial orientation of a
:class:`~yt.visualization.plot_window.PlotWindow` object can be adjusted.

The first two axis orientation modifications,
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.flip_horizontal` and
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.flip_vertical`, are
equivalent to the ``invert_xaxis`` and ``invert_yaxis`` methods of matplotlib
``Axes`` objects. ``flip_horizontal`` will invert the plot's x-axis, while the
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.flip_vertical`
method will invert the plot's y-axis.

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")

   # slicing with standard view (right-handed)
   slc = yt.SlicePlot(ds, "z", ("gas", "velocity_x"), width=(20, "kpc"))
   slc.annotate_title("Standard Horizontal (Right Handed)")
   slc.save("Standard.png")

   # flip the horizontal axis (not right handed)
   slc.flip_horizontal()
   slc.annotate_title("Horizontal Flipped (Not Right Handed)")
   slc.save("NotRightHanded.png")

   # flip the vertical axis
   slc = yt.SlicePlot(ds, "z", ("gas", "velocity_x"), width=(20, "kpc"))
   slc.flip_vertical()
   slc.annotate_title("Flipped vertical")
   slc.save("FlippedVertical.png")

In addition to inverting the direction of each axis,
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.swap_axes` will
exchange the plot's vertical and horizontal axes:

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")

   # slicing with non right-handed coordinates
   slc = yt.SlicePlot(ds, "z", ("gas", "velocity_x"), width=(20, "kpc"))
   slc.swap_axes()
   slc.annotate_title("Swapped axes")
   slc.save("SwappedAxes.png")

   # toggle swap_axes (return to standard view)
   slc.swap_axes()
   slc.annotate_title("Standard Axes")
   slc.save("StandardAxes.png")

When using ``flip_horizontal`` and ``flip_vertical`` with ``swap_axes``, it is
important to remember that any ``flip_horizontal`` and ``flip_vertical``
operations are applied to the image axes (not underlying dataset coordinates)
after any ``swap_axes`` calls, regardless of the order in which the callbacks
are added. Also note that when using ``swap_axes``, any plot modifications
relating to limits, image width or resolution should still be supplied in
reference to the standard (unswapped) orientation rather than the swapped
view. Finally, it's worth mentioning that these three methods can be used in
combination to rotate the view:
.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")

   # initial view
   slc = yt.SlicePlot(ds, "z", ("gas", "velocity_x"), width=(20, "kpc"))
   slc.annotate_title("Initial View")
   slc.save("InitialOrientation.png")

   # swap + vertical flip = 90 degree rotation (clockwise)
   slc.swap_axes()
   slc.flip_vertical()
   slc.annotate_title("90 Degree Clockwise Rotation")
   slc.save("SwappedAxes90CW.png")

   # vertical flip + horizontal flip = 180 degree rotation
   slc = yt.SlicePlot(ds, "z", ("gas", "velocity_x"), width=(20, "kpc"))
   slc.flip_horizontal()
   slc.flip_vertical()
   slc.annotate_title("180 Degree Rotation")
   slc.save("FlipAxes180.png")

   # swap + horizontal flip = 90 degree rotation (counterclockwise)
   slc = yt.SlicePlot(ds, "z", ("gas", "velocity_x"), width=(20, "kpc"))
   slc.swap_axes()
   slc.flip_horizontal()
   slc.annotate_title("90 Degree Counter Clockwise Rotation")
   slc.save("SwappedAxes90CCW.png")

.. _hiding-colorbar-and-axes:

Hiding the Colorbar and Axis Labels
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The :class:`~yt.visualization.plot_window.PlotWindow` class has functions
attached for hiding/showing the colorbar and axes. This allows for making
minimal plots that focus on the data:

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
   slc.hide_colorbar()
   slc.hide_axes()
   slc.save()

See the cookbook recipe :ref:`show-hide-axes-colorbar` and the full function
description :class:`~yt.visualization.plot_window.PlotWindow` for more
information.

Fonts
~~~~~

:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.set_font` allows
font customization.

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
   slc.set_font(
       {"family": "sans-serif", "style": "italic", "weight": "bold", "size": 24}
   )
   slc.save()

Colormaps
~~~~~~~~~

Each of these functions accepts at least two arguments. In all cases the first
argument is a field name. This makes it possible to use different custom
colormaps for different fields tracked by the plot object.

To change the colormap for the plot, call the
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.set_cmap` function.
Use any of the colormaps listed in the :ref:`colormaps` section.

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
   slc.set_cmap(("gas", "density"), "RdBu_r")
   slc.save()

Colorbar Normalization / Scaling
""""""""""""""""""""""""""""""""

For a general introduction to the topic of colorbar scaling, see
`matplotlib's colormap norms tutorial
<https://matplotlib.org/stable/tutorials/colors/colormapnorms.html>`_. Here we
will focus on the defaults, and the ways to customize them, of yt plot
classes.

In this section, "norm" is used as short for "normalization", and is
interchangeable with "scaling".

Map-like plots, e.g. ``SlicePlot``, ``ProjectionPlot`` and ``PhasePlot``,
default to `logarithmic (log)
<https://matplotlib.org/stable/tutorials/colors/colormapnorms.html#logarithmic>`_
normalization when all values are strictly positive, and `symmetric log
(symlog)
<https://matplotlib.org/stable/tutorials/colors/colormapnorms.html#symmetric-logarithmic>`_
otherwise.

yt supports two different interfaces to move away from the defaults. See
**constrained norms** and **arbitrary norm** hereafter.

.. note:: Defaults can be configured on a per-field basis, see
   :ref:`per-field-plotconfig`

**Constrained norms**

This is the standard way to change colorbar scaling between linear, log, and
symmetric log (symlog).
Colorbar properties can be constrained via two methods:

- :meth:`~yt.visualization.plot_container.PlotContainer.set_zlim` controls the
  limits of the colorbar range: ``zmin`` and ``zmax``.
- :meth:`~yt.visualization.plot_container.ImagePlotContainer.set_log` allows
  switching to linear or symlog normalization. With symlog, the linear
  threshold can be set explicitly. Otherwise, yt will dynamically determine a
  reasonable value.

Use the :meth:`~yt.visualization.plot_container.PlotContainer.set_zlim` method
to set a custom colormap range.

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
   slc.set_zlim(("gas", "density"), zmin=(1e-30, "g/cm**3"), zmax=(1e-25, "g/cm**3"))
   slc.save()

Units can be left out, in which case they implicitly match the current display
units of the colorbar (controlled with the ``set_unit`` method, see
:ref:`set-image-units`).

It is not required to specify both ``zmin`` and ``zmax``. Left unset, they
will default to the extreme values in the current view. This default behavior
can be enforced or restored by passing ``zmin="min"`` (resp. ``zmax="max"``)
explicitly.

:meth:`~yt.visualization.plot_container.ImagePlotContainer.set_log` takes a
boolean argument to select log (``True``) or linear (``False``) scaling.

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
   slc.set_log(("gas", "density"), False)  # switch to linear scaling
   slc.save()

One can switch to `symlog
<https://matplotlib.org/stable/tutorials/colors/colormapnorms.html#symmetric-logarithmic>`_
by providing a "linear threshold" (``linthresh``) value. With
``linthresh="auto"``, yt will switch to symlog norm and guess an appropriate
value automatically, with different behavior depending on the dynamic range of
the data. When the dynamic range of the symlog scale is less than 15 orders of
magnitude, the linthresh value will be the minimum absolute nonzero value, as
in:

.. python-script::

   import yt

   ds = yt.load_sample("IsolatedGalaxy")
   slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
   slc.set_log(("gas", "density"), linthresh="auto")
   slc.save()

When the dynamic range of the symlog scale exceeds 15 orders of magnitude, the
linthresh value is calculated as 1/10\ :sup:`15` of the maximum nonzero value
in order to avoid possible floating point precision issues. The following plot
triggers the dynamic range cutoff:

.. python-script::

   import yt

   ds = yt.load_sample("FIRE_M12i_ref11")
   p = yt.ProjectionPlot(ds, "x", ("gas", "density"), width=(30, "Mpc"))
   p.set_log(("gas", "density"), linthresh="auto")
   p.save()

In the previous example, it is actually safe to expand the dynamic range; in
other cases, you may find that the selected linear threshold is not well
suited to your dataset. To pass an explicit value instead:

.. python-script::

   import yt

   ds = yt.load_sample("FIRE_M12i_ref11")
   p = yt.ProjectionPlot(ds, "x", ("gas", "density"), width=(30, "Mpc"))
   p.set_log(("gas", "density"), linthresh=(1e-22, "g/cm**2"))
   p.save()

Similar to the ``zmin`` and ``zmax`` arguments of the ``set_zlim`` method,
units can be left out in ``linthresh``.

**Arbitrary norms**

Alternatively, arbitrary `matplotlib norms
<https://matplotlib.org/stable/api/colors_api.html>`_ can be passed via the
:meth:`~yt.visualization.plot_container.PlotContainer.set_norm` method. In
that case, any numeric value is treated as having implicit units, matching the
current display units. This alternative interface is more flexible, but
considered experimental as of yt 4.1.
Don't forget that with great power comes great responsibility.

.. python-script::

   import yt
   from matplotlib.colors import TwoSlopeNorm

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   slc = yt.SlicePlot(ds, "z", ("gas", "velocity_x"), width=(30, "kpc"))
   slc.set_norm(("gas", "velocity_x"), TwoSlopeNorm(vcenter=0))

   # using a diverging colormap to emphasize that vcenter corresponds to the
   # middle value in the color range
   slc.set_cmap(("gas", "velocity_x"), "RdBu")
   slc.save()

.. note:: When calling
   :meth:`~yt.visualization.plot_container.PlotContainer.set_norm`, any
   constraints previously set with
   :meth:`~yt.visualization.plot_container.PlotContainer.set_log` or
   :meth:`~yt.visualization.plot_container.PlotContainer.set_zlim` will be
   dropped. Conversely, calling ``set_log`` or ``set_zlim`` will have the
   effect of dropping any norm previously set via ``set_norm``.

The
:meth:`~yt.visualization.plot_container.ImagePlotContainer.set_background_color`
function accepts a field name and, optionally, a color. If a color is given,
the function sets the plot's background color to it. If not, the background is
set to the bottom value of the colormap.

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(1.5, "Mpc"))
   slc.set_background_color(("gas", "density"))
   slc.save("bottom_colormap_background")
   slc.set_background_color(("gas", "density"), color="black")
   slc.save("black_background")

Annotations
~~~~~~~~~~~

A slice object can also add annotations like a title, an overlaid quiver plot,
the location of grid boundaries, halo-finder annotations, and many other
annotations, including user-customizable annotations. For example:

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
   slc.annotate_grids()
   slc.save()

will plot the density field in a 10 kiloparsec slice through the z-axis
centered on the center of the simulation domain. Before saving the plot, the
script annotates it with the grid boundaries, which are drawn as lines in the
plot, with colors going from black to white depending on the AMR level of the
grid. Annotations are described in :ref:`callbacks`.

Set the size and resolution of the plot
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To set the size of the plot, use the
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.set_figure_size`
function. The argument is the size of the longest edge of the plot in inches.
View the full resolution image to see the difference more clearly.

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
   slc.set_figure_size(10)
   slc.save()

To change the resolution of the image, call the
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.set_buff_size`
function.

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
   slc.set_buff_size(1600)
   slc.save()

Also see cookbook recipe :ref:`image-resolution-primer` for more information
about the parameters that determine the resolution of your images.

Turning off minorticks
~~~~~~~~~~~~~~~~~~~~~~

By default, minorticks for the x and y axes are turned on.
The minorticks may be removed using the
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.set_minorticks`
function, which accepts a specific field name (including the ``"all"`` alias)
and a boolean for the desired state of the plot. There is also an analogous
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.set_colorbar_minorticks`
function for the colorbar axis.

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
   slc.set_minorticks("all", False)
   slc.set_colorbar_minorticks("all", False)
   slc.save()

.. _matplotlib-customization:

Further customization via matplotlib
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Each :class:`~yt.visualization.plot_window.PlotWindow` object is really a
container for plots - one plot for each field specified in the list of fields
supplied when the plot object is created. The individual plots can be accessed
via the ``plots`` dictionary attached to each
:class:`~yt.visualization.plot_window.PlotWindow` object:

.. code-block:: python

   slc = SlicePlot(ds, 2, [("gas", "density"), ("gas", "temperature")])
   dens_plot = slc.plots["gas", "density"]

In this example ``dens_plot`` is an instance of
:class:`~yt.visualization.plot_window.WindowPlotMPL`, an object that wraps the
matplotlib `figure <https://matplotlib.org/stable/api/figure_api.html>`_ and
`axes <https://matplotlib.org/stable/api/axes_api.html>`_ objects. We can
access these matplotlib primitives via attributes of ``dens_plot``.

.. code-block:: python

   figure = dens_plot.figure
   axes = dens_plot.axes
   colorbar_axes = dens_plot.cax

These are the `figure <https://matplotlib.org/stable/api/figure_api.html>`_
and `axes <https://matplotlib.org/stable/api/axes_api.html>`_ objects that
control the actual drawing of the plot. Arbitrary plot customizations are
possible by manipulating these objects. See :ref:`matplotlib-primitives` for
an example.

.. _how-to-make-1d-profiles:

1D Profile Plots
----------------

1D profiles are used to calculate the average or the sum of a given quantity
with respect to a second quantity. Two common examples are the "average
density as a function of radius" and "the total mass within a given set of
density bins." When created, they default to the average: in fact, they
default to the average as weighted by the total cell mass. However, this can
be modified to take either the total value or the average with respect to a
different quantity.

Profiles operate on :ref:`data objects <data-objects>`; they will take the
entire data contained in a sphere, a prism, an extracted region and so on, and
they will calculate and use that as input to their calculation. To make a 1D
profile plot, create a
:class:`~yt.visualization.profile_plotter.ProfilePlot` object, supplying the
data object, the field for binning, and a list of fields to be profiled.

.. python-script::

   import yt
   from yt.units import kpc

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   my_galaxy = ds.disk(ds.domain_center, [0.0, 0.0, 1.0], 10 * kpc, 3 * kpc)
   plot = yt.ProfilePlot(my_galaxy, ("gas", "density"), [("gas", "temperature")])
   plot.save()

This will create a
:class:`~yt.data_objects.selection_data_containers.YTDisk` centered at
[0.5, 0.5, 0.5], with a normal vector of [0.0, 0.0, 1.0], radius of 10
kiloparsecs and height of 3 kiloparsecs and will then make a plot of the
mass-weighted average temperature as a function of density for all of the gas
contained in the cylinder.

We could also have made a profile considering only the gas in a sphere. For
instance:
.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   my_sphere = ds.sphere([0.5, 0.5, 0.5], (100, "kpc"))
   plot = yt.ProfilePlot(
       my_sphere, ("gas", "temperature"), [("gas", "mass")], weight_field=None
   )
   plot.save()

Note that because we have specified the weighting field to be ``None``, the
profile plot will display the accumulated cell mass as a function of
temperature rather than the average. Also note the use of a ``(value, unit)``
tuple. These can be used interchangeably with units explicitly imported from
``yt.units`` when creating yt plots.

We can also accumulate along the bin field of a ``ProfilePlot`` (the bin field
is the x-axis in a ``ProfilePlot``; in the last example the bin field is
``temperature``) by setting the ``accumulation`` keyword argument to ``True``.
The following example uses ``weight_field = None`` and ``accumulation = True``
to generate a plot of the enclosed mass in a sphere:

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   my_sphere = ds.sphere([0.5, 0.5, 0.5], (100, "kpc"))
   plot = yt.ProfilePlot(
       my_sphere, "radius", [("gas", "mass")], weight_field=None, accumulation=True
   )
   plot.save()

Notably, above we have specified the field tuple for the mass, but not for the
``radius`` field. The ``radius`` field will not be ambiguous, but if you want
to ensure that it refers to the radius of the cells on which the "gas" field
type is defined, you can specify it using the field tuple
``("index", "radius")``.

You can also access the data generated by profiles directly, which can be
useful for overplotting average quantities on top of phase plots, or for
exporting and plotting multiple profiles simultaneously from a time series.
The ``profiles`` attribute contains a list of all profiles that have been
made. For each item in the list, the x field data can be accessed with ``x``.
The profiled fields can be accessed from the dictionary ``field_data``.

.. code-block:: python

   plot = ProfilePlot(
       my_sphere, ("gas", "temperature"), [("gas", "mass")], weight_field=None
   )
   profile = plot.profiles[0]
   # print the bin field, in this case temperature
   print(profile.x)
   # print the profiled mass field
   print(profile["gas", "mass"])

Other options, such as the number of bins, are also configurable. See the
documentation for :class:`~yt.visualization.profile_plotter.ProfilePlot` for
more information.

Overplotting Multiple 1D Profiles
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is often desirable to overplot multiple 1D profiles to show evolution with
time. This is supported with the ``from_profiles`` class method. 1D profiles
are created with the :func:`~yt.data_objects.profiles.create_profile` function
and then given to the ProfilePlot object.

.. python-script::

   import yt

   # Create a time-series object.
   es = yt.load_simulation("enzo_tiny_cosmology/32Mpc_32.enzo", "Enzo")
   es.get_time_series(redshifts=[5, 4, 3, 2, 1, 0])

   # Lists to hold profiles and labels.
   profiles = []
   labels = []

   # Loop over each dataset in the time-series.
   for ds in es:
       # Create a data container to hold the whole dataset.
       ad = ds.all_data()
       # Create a 1d profile of density vs. temperature.
       profiles.append(
           yt.create_profile(
               ad,
               [("gas", "temperature")],
               fields=[("gas", "mass")],
               weight_field=None,
               accumulation=True,
           )
       )
       # Add labels
       labels.append("z = %.2f" % ds.current_redshift)

   # Create the profile plot from the list of profiles.
   plot = yt.ProfilePlot.from_profiles(profiles, labels=labels)

   # Save the image.
   plot.save()

Customizing axis limits
~~~~~~~~~~~~~~~~~~~~~~~

By default the x and y limits for ``ProfilePlot`` are determined using the
:class:`~yt.data_objects.derived_quantities.Extrema` derived quantity. If you
want to create a plot with custom axis limits, you have two options.

First, you can create a custom profile object using
:func:`~yt.data_objects.profiles.create_profile`. This function accepts a
dictionary of ``(min, max)`` tuples keyed to field names.

.. python-script::

   import yt
   import yt.units as u

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   sp = ds.sphere("m", 10 * u.kpc)
   profiles = yt.create_profile(
       sp,
       ("gas", "temperature"),
       ("gas", "density"),
       weight_field=None,
       extrema={("gas", "temperature"): (1e3, 1e7), ("gas", "density"): (1e-26, 1e-22)},
   )
   plot = yt.ProfilePlot.from_profiles(profiles)
   plot.save()

You can also make use of the
:meth:`~yt.visualization.profile_plotter.ProfilePlot.set_xlim` and
:meth:`~yt.visualization.profile_plotter.ProfilePlot.set_ylim` functions to
customize the axes limits of a plot that has already been created. Note that
calling ``set_xlim`` is much slower than calling ``set_ylim``. This is because
``set_xlim`` must recreate the profile object using the specified extrema.
Creating a profile directly via
:func:`~yt.data_objects.profiles.create_profile` might be significantly
faster. Note that since there is only one bin field, ``set_xlim`` does not
accept a field name as the first argument.

.. python-script::

   import yt
   import yt.units as u

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   sp = ds.sphere("m", 10 * u.kpc)
   plot = yt.ProfilePlot(
       sp, ("gas", "temperature"), ("gas", "density"), weight_field=None
   )
   plot.set_xlim(1e3, 1e7)
   plot.set_ylim(("gas", "density"), 1e-26, 1e-22)
   plot.save()

Customizing Units
~~~~~~~~~~~~~~~~~

Units for both the x and y axis can be controlled via the
:meth:`~yt.visualization.profile_plotter.ProfilePlot.set_unit` method.
Adjusting the plot units does not require recreating the histogram, so
adjusting units will always be inexpensive, requiring only an in-place unit
conversion.

In the following example we create a plot of the average density in solar
masses per cubic parsec as a function of radius in kiloparsecs.

.. python-script::

   import yt
   import yt.units as u

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   sp = ds.sphere("m", 10 * u.kpc)
   plot = yt.ProfilePlot(sp, "radius", ("gas", "density"), weight_field=None)
   plot.set_unit(("gas", "density"), "msun/pc**3")
   plot.set_unit("radius", "kpc")
   plot.save()

Linear and Logarithmic Scaling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The axis scaling can be manipulated via the
:meth:`~yt.visualization.profile_plotter.ProfilePlot.set_log` function. This
function accepts a field name and a boolean. If the boolean is ``True``, the
field is plotted in log scale. If ``False``, the field is plotted in linear
scale.

In the following example we create a plot of the average x velocity as a
function of radius. Since the x component of the velocity vector can be
negative, we set the scaling to be linear for this field.
.. python-script::

   import yt
   import yt.units as u

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   sp = ds.sphere("m", 10 * u.kpc)
   plot = yt.ProfilePlot(sp, "radius", ("gas", "velocity_x"), weight_field=None)
   plot.set_log(("gas", "velocity_x"), False)
   plot.save()

Setting axis labels
~~~~~~~~~~~~~~~~~~~

The axis labels can be manipulated via the
:meth:`~yt.visualization.profile_plotter.ProfilePlot.set_ylabel` and
:meth:`~yt.visualization.profile_plotter.ProfilePlot.set_xlabel` functions.
The :meth:`~yt.visualization.profile_plotter.ProfilePlot.set_ylabel` function
accepts a field name and a string with the desired label. The
:meth:`~yt.visualization.profile_plotter.ProfilePlot.set_xlabel` function just
accepts the desired label and applies this to all of the plots.

In the following example we create a plot of the average x-velocity and
density as a function of radius. The xlabel is set to "Radius" for all plots,
and the ylabel is set to "velocity in x direction" for the x-velocity plot.

.. python-script::

   import yt

   ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
   ad = ds.all_data()
   plot = yt.ProfilePlot(
       ad, "radius", [("gas", "temperature"), ("gas", "velocity_x")], weight_field=None
   )
   plot.set_xlabel("Radius")
   plot.set_ylabel(("gas", "velocity_x"), "velocity in x direction")
   plot.save()

Adding plot title
~~~~~~~~~~~~~~~~~

The plot title can be set via the
:meth:`~yt.visualization.profile_plotter.ProfilePlot.annotate_title` function.
It accepts a string argument which is the plot title and an optional ``field``
parameter which specifies the field for which the plot title should be added.
``field`` could be a string or a list of strings. If ``field`` is not passed,
the plot title will be added for all the fields.

In the following example we create a plot and set the plot title.

.. python-script::

   import yt

   ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
   ad = ds.all_data()
   plot = yt.ProfilePlot(
       ad, ("gas", "density"), [("gas", "temperature")], weight_field=None
   )
   plot.annotate_title("Temperature vs Density Plot")
   plot.save()

Here is another example, where we create plots from profiles. By specifying
the fields, we can add a plot title to a specific plot.

.. python-script::

   import yt

   ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
   sphere = ds.sphere("max", (1.0, "Mpc"))
   profiles = []
   profiles.append(
       yt.create_profile(sphere, ["radius"], fields=[("gas", "density")], n_bins=64)
   )
   profiles.append(
       yt.create_profile(sphere, ["radius"], fields=["dark_matter_density"], n_bins=64)
   )
   plot = yt.ProfilePlot.from_profiles(profiles)
   plot.annotate_title("Plot Title: Density", ("gas", "density"))
   plot.annotate_title("Plot Title: Dark Matter Density", "dark_matter_density")
   plot.save()

Here, ``plot.annotate_title("Plot Title: Density", ("gas", "density"))`` will
only set the plot title for the ``"density"`` field, allowing us to have
different plot titles for different fields.

Annotating plot with text
~~~~~~~~~~~~~~~~~~~~~~~~~

Plots can be annotated at a desired (x, y) coordinate using the
:meth:`~yt.visualization.profile_plotter.ProfilePlot.annotate_text` function.
This function accepts the x-position, y-position, a text string to be
annotated in the plot area, and an optional list of fields for annotating
plots with the specified field. Furthermore, any keyword argument accepted by
the matplotlib ``axes.text`` function can also be passed, which can be useful
to change the fontsize, text alignment, text color, or other such properties
of annotated text.
In the following example we create a plot and add a simple annotation.

.. python-script::

   import yt

   ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
   ad = ds.all_data()
   plot = yt.ProfilePlot(
       ad, ("gas", "density"), [("gas", "temperature")], weight_field=None
   )
   plot.annotate_text(1e-30, 1e7, "Annotated Text")
   plot.save()

To add annotations to a particular set of fields we need to pass in the list
of fields as follows, where ``"ftype1"`` and ``"ftype2"`` are the field types
(and may be the same):

.. code-block:: python

   plot.annotate_text(
       1e-30, 1e7, "Annotation", [("ftype1", "field1"), ("ftype2", "field2")]
   )

To change the annotated text properties, we need to pass the matplotlib
``axes.text`` arguments as follows:

.. code-block:: python

   plot.annotate_text(
       1e-30,
       1e7,
       "Annotation",
       fontsize=20,
       bbox=dict(facecolor="red", alpha=0.5),
       horizontalalignment="center",
       verticalalignment="center",
   )

The above example will set the fontsize of the annotation to 20, add a
bounding box of red color, and center align the text horizontally and
vertically. This is just an example of modifying the text properties; for
further options, please check
`matplotlib.axes.Axes.text
<https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.text.html>`_.

Altering Line Properties
~~~~~~~~~~~~~~~~~~~~~~~~

Line properties for any and all of the profiles can be changed with the
:func:`~yt.visualization.profile_plotter.set_line_property` function. The two
arguments given are the line property and desired value.

.. code-block:: python

   plot.set_line_property("linestyle", "--")

With no additional arguments, all of the lines plotted will be altered. To
change the property of a single line, give also the index of the profile.

.. code-block:: python

   # change only the first line
   plot.set_line_property("linestyle", "--", 0)

.. _how-to-1d-unstructured-mesh:

1D Line Sampling
----------------

yt has the ability to sample datasets along arbitrary lines and plot the
result. You must supply five arguments to the ``LinePlot`` class. They are
enumerated below:

1. Dataset
2. A list of fields or a single field you wish to plot
3. The starting point of the sampling line. This should be an n-element list,
   tuple, ndarray, or YTArray with the elements corresponding to the
   coordinates of the starting point. (n should equal the dimension of the
   dataset)
4. The ending point of the sampling line. This should also be an n-element
   list, tuple, ndarray, or YTArray with the elements corresponding to the
   coordinates of the ending point.
5. The number of sampling points along the line, e.g. if 1000 is specified,
   then data will be sampled at 1000 points evenly spaced between the starting
   and ending points.

The below code snippet illustrates how this is done:

.. code-block:: python

   import yt

   ds = yt.load("SecondOrderTris/RZ_p_no_parts_do_nothing_bcs_cone_out.e", step=-1)
   plot = yt.LinePlot(ds, [("all", "v"), ("all", "u")], (0, 0, 0), (0, 1, 0), 1000)
   plot.save()

If working in a Jupyter Notebook, ``LinePlot`` also has the ``show()`` method.

You can add a legend to a 1D sampling plot. The legend process takes two
steps:

1. When instantiating the ``LinePlot``, pass a dictionary of labels with keys
   corresponding to the field names
2. Call the ``LinePlot`` ``annotate_legend`` method

X- and y-axis units can be set with the ``set_x_unit`` and ``set_unit``
methods, respectively. The below code snippet combines all the features we've
discussed:
.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   plot = yt.LinePlot(ds, ("gas", "density"), [0, 0, 0], [1, 1, 1], 512)
   plot.annotate_legend(("gas", "density"))
   plot.set_x_unit("cm")
   plot.set_unit(("gas", "density"), "kg/cm**3")
   plot.save()

If a list of fields is passed to ``LinePlot``, yt will create a number of
individual figures equal to the number of different dimensional quantities.
E.g. if ``LinePlot`` receives two fields with units of "length/time" and a
field with units of "temperature", two different figures will be created, one
with plots of the "length/time" fields and another with the plot of the
"temperature" field. It is only necessary to call ``annotate_legend`` for one
field of a multi-field plot to produce a legend containing all the labels
passed in the initial construction of the ``LinePlot`` instance. Example:

.. python-script::

   import yt

   ds = yt.load("SecondOrderTris/RZ_p_no_parts_do_nothing_bcs_cone_out.e", step=-1)
   plot = yt.LinePlot(
       ds,
       [("all", "v"), ("all", "u")],
       [0, 0, 0],
       [0, 1, 0],
       100,
       field_labels={("all", "u"): r"v$_x$", ("all", "v"): r"v$_y$"},
   )
   plot.annotate_legend(("all", "u"))
   plot.save()

``LinePlot`` is a bit different from yt ray objects, which are data
containers. ``LinePlot`` is a plotting class that may use yt ray objects to
supply field plotting information. However, perhaps the most important
difference to highlight between rays and ``LinePlot`` is that rays return data
elements that intersect with the ray and make no guarantee about the spacing
between data elements. ``LinePlot`` sampling points are guaranteed to be
evenly spaced. In the case of cell data where multiple points fall within the
same cell, the ``LinePlot`` object will show the same field value for each
sampling point that falls within the same cell.

.. _how-to-make-2d-profiles:

2D Phase Plots
--------------

2D phase plots function in much the same way as 1D profile plots, but with a
:class:`~yt.visualization.profile_plotter.PhasePlot` object. Much like 1D
profiles, 2D profiles (phase plots) are best thought of as plotting a
distribution of points, either taking the average or the accumulation in a
bin. The default behavior is to average, using the cell mass as the weighting,
but this behavior can be controlled through the ``weight_field`` parameter.
For example, to generate a 2D distribution of mass enclosed in density and
temperature bins, you can do:

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   my_sphere = ds.sphere("c", (50, "kpc"))
   plot = yt.PhasePlot(
       my_sphere,
       ("gas", "density"),
       ("gas", "temperature"),
       [("gas", "mass")],
       weight_field=None,
   )
   plot.save()

If you would rather see the average value of a field as a function of two
other fields, leave off the ``weight_field`` argument, and it will average by
the cell mass. This would look something like:

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   my_sphere = ds.sphere("c", (50, "kpc"))
   plot = yt.PhasePlot(
       my_sphere, ("gas", "density"), ("gas", "temperature"), [("gas", "H_p0_fraction")]
   )
   plot.save()

Customizing Phase Plots
~~~~~~~~~~~~~~~~~~~~~~~

Similarly to 1D profile plots,
:class:`~yt.visualization.profile_plotter.PhasePlot` can be customized via
``set_unit``, ``set_xlim``, ``set_ylim``, and ``set_zlim``. The following
example illustrates how to manipulate these functions.
:class:`~yt.visualization.profile_plotter.PhasePlot` can also be customized in
a similar manner as :class:`~yt.visualization.plot_window.SlicePlot`, such as
with ``hide_colorbar`` and ``show_colorbar``.

.. python-script::

   import yt

   ds = yt.load("sizmbhloz-clref04SNth-rs9_a0.9011/sizmbhloz-clref04SNth-rs9_a0.9011.art")
   center = ds.arr([64.0, 64.0, 64.0], "code_length")
   rvir = ds.quan(1e-1, "Mpccm/h")
   sph = ds.sphere(center, rvir)
   plot = yt.PhasePlot(
       sph, ("gas", "density"), ("gas", "temperature"), ("gas", "mass"), weight_field=None
   )
   plot.set_unit(("gas", "density"), "Msun/pc**3")
   plot.set_unit(("gas", "mass"), "Msun")
   plot.set_xlim(1e-5, 1e1)
   plot.set_ylim(1, 1e7)
   plot.save()

It is also possible to construct a custom 2D profile object and then use the
:meth:`~yt.visualization.profile_plotter.PhasePlot.from_profile` function to
create a ``PhasePlot`` using the profile object. This will sometimes be
faster, especially if you need custom x and y axes limits. The following
example illustrates this workflow:

.. python-script::

   import yt

   ds = yt.load("sizmbhloz-clref04SNth-rs9_a0.9011/sizmbhloz-clref04SNth-rs9_a0.9011.art")
   center = ds.arr([64.0, 64.0, 64.0], "code_length")
   rvir = ds.quan(1e-1, "Mpccm/h")
   sph = ds.sphere(center, rvir)
   units = {("gas", "density"): "Msun/pc**3", ("gas", "mass"): "Msun"}
   extrema = {("gas", "density"): (1e-5, 1e1), ("gas", "temperature"): (1, 1e7)}

   profile = yt.create_profile(
       sph,
       [("gas", "density"), ("gas", "temperature")],
       n_bins=[128, 128],
       fields=[("gas", "mass")],
       weight_field=None,
       units=units,
       extrema=extrema,
   )

   plot = yt.PhasePlot.from_profile(profile)
   plot.save()

Probability Distribution Functions and Accumulation
---------------------------------------------------

Both 1D and 2D profiles which show the total amount of some field, such as
mass, in a bin (done by setting the ``weight_field`` keyword to ``None``) can
be turned into probability distribution functions (PDFs) by setting the
``fractional`` keyword to ``True``. When set to ``True``, the value in each
bin is divided by the sum total from all bins. These can be turned into
cumulative distribution functions (CDFs) by setting the ``accumulation``
keyword to ``True``. This will make it so that the value in any bin N is the
cumulative sum of all bins from 0 to N. The direction of the summation can be
reversed by setting ``accumulation`` to ``-True``. For ``PhasePlot``, the
accumulation can be set independently for each axis by setting
``accumulation`` to a list of ``True``/``-True``/``False`` values.

.. _particle-plots:

Particle Plots
--------------

Slice and projection plots both provide a callback for over-plotting particle
positions onto gas fields. However, sometimes you want to plot the particle
quantities by themselves, perhaps because the gas fields are not relevant to
your use case, or perhaps because your dataset doesn't contain any gas fields
in the first place. Additionally, you may want to plot your particles with a
third field, such as particle mass or age, mapped to a colorbar.
:class:`~yt.visualization.particle_plots.ParticlePlot` provides a convenient
way to do this in yt.

The easiest way to make a
:class:`~yt.visualization.particle_plots.ParticlePlot` is to use the
convenience routine. This has the syntax:

.. code-block:: python

   p = yt.ParticlePlot(ds, ("all", "particle_position_x"), ("all", "particle_position_y"))
   p.save()

Here, ``ds`` is a dataset we've previously opened.
The commands create a particle plot that shows the x and y positions of all
the particles in ``ds`` and save the result to a file on the disk. The type of
plot returned depends on the fields you pass in; in this case, ``p`` will be
an :class:`~yt.visualization.particle_plots.ParticleProjectionPlot`, because
the fields are aligned to the coordinate system of the simulation.

The above example is equivalent to the following:

.. code-block:: python

   p = yt.ParticleProjectionPlot(ds, "z")
   p.save()

Most of the callbacks that work for slice and projection plots also work for
:class:`~yt.visualization.particle_plots.ParticleProjectionPlot`. For
instance, we can zoom in:

.. code-block:: python

   p = yt.ParticlePlot(ds, ("all", "particle_position_x"), ("all", "particle_position_y"))
   p.zoom(10)
   p.save("zoom")

change the width:

.. code-block:: python

   p.set_width((500, "kpc"))

or change the axis units:

.. code-block:: python

   p.set_unit(("all", "particle_position_x"), "Mpc")

Here is a full example that shows the simplest way to use
:class:`~yt.visualization.particle_plots.ParticlePlot`:

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   p = yt.ParticlePlot(ds, ("all", "particle_position_x"), ("all", "particle_position_y"))
   p.save()

In the above examples, we are simply splatting particle x and y positions onto
a plot using some color. Colors can be applied to the plotted particles by
providing a ``z_field``, which will be summed along the line of sight in a
manner similar to a projection.

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   p = yt.ParticlePlot(
       ds,
       ("all", "particle_position_x"),
       ("all", "particle_position_y"),
       ("all", "particle_mass"),
   )
   p.set_unit(("all", "particle_mass"), "Msun")
   p.zoom(32)
   p.save()

Additionally, a ``weight_field`` can be given such that the value in each
pixel is the weighted average along the line of sight.

.. python-script::

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   p = yt.ParticlePlot(
       ds,
       ("all", "particle_position_x"),
       ("all", "particle_position_y"),
       ("all", "particle_mass"),
       weight_field=("all", "particle_ones"),
   )
   p.set_unit(("all", "particle_mass"), "Msun")
   p.zoom(32)
   p.save()

Note the difference in the above two plots. The first shows the total mass
along the line of sight. The density is higher in the inner regions, and hence
there are more particles and more mass along the line of sight. The second
plot shows the average mass per particle along the line of sight. The inner
region is dominated by low mass star particles, whereas the outer region is
comprised of higher mass dark matter particles.

Both :class:`~yt.visualization.particle_plots.ParticleProjectionPlot` and
:class:`~yt.visualization.particle_plots.ParticlePhasePlot` objects accept a
``deposition`` argument which controls the order of the "splatting" of the
particles onto the pixels in the plot. The default option, ``"ngp"``,
corresponds to the "Nearest-Grid-Point" (0th-order) method, which simply finds
the pixel the particle is located in and deposits 100% of the particle or its
plotted quantity into that pixel. The other option, ``"cic"``, corresponds to
the "Cloud-In-Cell" (1st-order) method, which linearly interpolates the
particle or its plotted quantity into the four nearest pixels in the plot.

Here is a complete example that uses the ``particle_mass`` field to set the
colorbar and shows off some of the modification functions for
:class:`~yt.visualization.particle_plots.ParticleProjectionPlot`:
python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") p = yt.ParticlePlot( ds, ("all", "particle_position_x"), ("all", "particle_position_y"), ("all", "particle_mass"), width=(0.5, 0.5), ) p.set_unit(("all", "particle_mass"), "Msun") p.zoom(32) p.annotate_title("Zoomed-in Particle Plot") p.save() If the fields passed in to :class:`~yt.visualization.particle_plots.ParticlePlot` do not correspond to a valid :class:`~yt.visualization.particle_plots.ParticleProjectionPlot`, a :class:`~yt.visualization.particle_plots.ParticlePhasePlot` will be returned instead. :class:`~yt.visualization.particle_plots.ParticlePhasePlot` is used to plot arbitrary particle fields against each other, and does not support some of the callbacks available in :class:`~yt.visualization.particle_plots.ParticleProjectionPlot` - for instance, :meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.pan` and :meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.zoom` don't make much sense when one of your axes is a position and the other is a velocity. The modification functions defined for :class:`~yt.visualization.profile_plotter.PhasePlot` should all work, however. Here is an example of making a :class:`~yt.visualization.particle_plots.ParticlePhasePlot` of ``particle_position_x`` versus ``particle_velocity_z``, with the ``particle_mass`` on the colorbar: .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") p = yt.ParticlePlot(ds, ("all", "particle_position_x"), ("all", "particle_velocity_z"), ("all", "particle_mass")) p.set_unit(("all", "particle_position_x"), "Mpc") p.set_unit(("all", "particle_velocity_z"), "km/s") p.set_unit(("all", "particle_mass"), "Msun") p.save() and here is one with the particle x and y velocities on the plot axes: .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") p = yt.ParticlePlot(ds, ("all", "particle_velocity_x"), ("all", "particle_velocity_y"), ("all", "particle_mass")) p.set_unit(("all", "particle_velocity_x"), "km/s") p.set_unit(("all", "particle_velocity_y"), "km/s") p.set_unit(("all", "particle_mass"), "Msun") p.set_ylim(-400, 400) p.set_xlim(-400, 400) p.save() If you want more control over the details of the :class:`~yt.visualization.particle_plots.ParticleProjectionPlot` or :class:`~yt.visualization.particle_plots.ParticlePhasePlot`, you can always use these classes directly. For instance, here is an example of using the ``depth`` argument to :class:`~yt.visualization.particle_plots.ParticleProjectionPlot` to only plot the particles that live in a thin slice around the center of the domain: .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") p = yt.ParticleProjectionPlot(ds, 2, [("all", "particle_mass")], width=(0.5, 0.5), depth=0.01) p.set_unit(("all", "particle_mass"), "Msun") p.save() Using :class:`~yt.visualization.particle_plots.ParticleProjectionPlot`, you can also plot particles along an off-axis direction: .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") L = [1, 1, 1] # normal or "line of sight" vector N = [0, 1, 0] # north or "up" vector p = yt.ParticleProjectionPlot( ds, L, [("all", "particle_mass")], width=(0.05, 0.05), depth=0.3, north_vector=N ) p.set_unit(("all", "particle_mass"), "Msun") p.save() Here is an example of using the ``data_source`` argument to :class:`~yt.visualization.particle_plots.ParticlePhasePlot` to only consider the particles that lie within a 50 kpc sphere around the domain center: ..
python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") my_sphere = ds.sphere("c", (50.0, "kpc")) p = yt.ParticlePhasePlot( my_sphere, ("all", "particle_velocity_x"), ("all", "particle_velocity_y"), ("all", "particle_mass") ) p.set_unit(("all", "particle_velocity_x"), "km/s") p.set_unit(("all", "particle_velocity_y"), "km/s") p.set_unit(("all", "particle_mass"), "Msun") p.set_ylim(-400, 400) p.set_xlim(-400, 400) p.save() :class:`~yt.visualization.particle_plots.ParticleProjectionPlot` objects also admit a ``density`` flag, which allows one to plot the surface density of a projected quantity. This simply divides the quantity in each pixel of the plot by the area of that pixel. It also changes the label on the colorbar to reflect the new units and the fact that it is a density. This may make most sense in the case of plotting the projected particle mass, in which case you can plot the projected particle mass density: .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") p = yt.ParticleProjectionPlot(ds, 2, [("all", "particle_mass")], width=(0.5, 0.5), density=True) p.set_unit(("all", "particle_mass"), "Msun/kpc**2") # Note that the dimensions reflect the density flag p.save() Finally, as with 1D and 2D profiles, you can create a :class:`~yt.data_objects.profiles.ParticleProfile` object separately using the :func:`~yt.data_objects.profiles.create_profile` function, and then use it to create a :class:`~yt.visualization.particle_plots.ParticlePhasePlot` object using the :meth:`~yt.visualization.particle_plots.ParticlePhasePlot.from_profile` method. In this example, we have also used the ``weight_field`` argument to compute the average ``particle_mass`` in each pixel, instead of the total: .. python-script:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") ad = ds.all_data() profile = yt.create_profile( ad, [("all", "particle_velocity_x"), ("all", "particle_velocity_y")], [("all", "particle_mass")], n_bins=800, weight_field=("all", "particle_ones"), ) p = yt.ParticlePhasePlot.from_profile(profile) p.set_unit(("all", "particle_velocity_x"), "km/s") p.set_unit(("all", "particle_velocity_y"), "km/s") p.set_unit(("all", "particle_mass"), "Msun") p.set_ylim(-400, 400) p.set_xlim(-400, 400) p.save() Under the hood, the :class:`~yt.data_objects.profiles.ParticleProfile` class works a lot like a :class:`~yt.data_objects.profiles.Profile2D` object, except that instead of just binning the particle field, you can also use higher-order deposition functions like the cloud-in-cell interpolant to spread out the particle quantities over a few cells in the profile. The :func:`~yt.data_objects.profiles.create_profile` function will automatically detect when all the fields you pass in are particle fields, and return a :class:`~yt.data_objects.profiles.ParticleProfile` if that is the case. For a complete description of the :class:`~yt.data_objects.profiles.ParticleProfile` class, please consult the reference documentation. .. _interactive-plotting: Interactive Plotting -------------------- The best way to interactively plot data is through the IPython notebook. Many detailed tutorials on using the IPython notebook can be found at :ref:`notebook-tutorial`. The simplest way to launch the notebook is to type: .. code-block:: bash jupyter lab at the command line. This will prompt you for a password (so that if you're on a shared user machine no one else can pretend to be you!) and then spawn an IPython notebook you can connect to.
If you want to see yt plots inline inside your notebook, you need only create a plot and then call ``.show()`` and the image will appear inline: .. notebook-cell:: import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") p = yt.ProjectionPlot(ds, "z", ("gas", "density"), center='m', width=(10,'kpc'), weight_field=("gas", "density")) p.set_figure_size(5) p.show() .. _saving_plots: Saving Plots ------------ If you want to save your yt plots, you have a couple of options for customizing the plot filenames. If you don't care what the filenames are, just calling the ``save`` method with no additional arguments usually suffices: .. code-block:: python import yt ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100") slc = yt.SlicePlot(ds, "z", [("gas", "kT"), ("gas", "density")], width=(500.0, "kpc")) slc.save() which will yield PNG plots with the filenames .. code-block:: bash $ ls *.png sloshing_nomag2_hdf5_plt_cnt_0100_Slice_z_density.png sloshing_nomag2_hdf5_plt_cnt_0100_Slice_z_kT.png which has the general form .. code-block:: bash [dataset name]_[plot type]_[axis]_[field name].[suffix] Calling ``save`` with a single argument or the ``name`` keyword argument specifies an alternative name for the plot: .. code-block:: python slc.save("bananas") or .. code-block:: python slc.save(name="bananas") yields .. code-block:: bash $ ls *.png bananas_Slice_z_kT.png bananas_Slice_z_density.png If you call ``save`` with a full filename including a file suffix, the plot will be saved with that filename: .. code-block:: python slc.save("sloshing.png") Since this will take any field and plot it with this filename, it is typically only useful if you are plotting one field. If you want to simply change the image format of the plotted file, use the ``suffix`` keyword: .. code-block:: python slc.save(name="bananas", suffix="eps") yielding .. code-block:: bash $ ls *.eps bananas_Slice_z_kT.eps bananas_Slice_z_density.eps .. _remaking-plots: Remaking Figures from Plot Datasets ----------------------------------- When working with datasets that are too large to be stored locally, making figures just right can be cumbersome as it requires continuously moving images somewhere they can be viewed. However, image creation is actually a two-step process of first creating the projection, slice, or profile object, and then converting that object into an actual image. Fortunately, the hard part (creating slices, projections, profiles) can be separated from the easy part (generating images). The intermediate slice, projection, and profile objects can be saved as reloadable datasets, then handed back to the plotting machinery discussed here. For slices and projections, the saveable object is associated with the plot object as ``data_source``. This can be saved with the :func:`~yt.data_objects.data_containers.YTDataContainer.save_as_dataset` function. For more information, see :ref:`saving_data`. .. code-block:: python p = yt.ProjectionPlot(ds, "x", ("gas", "density"), weight_field=("gas", "density")) fn = p.data_source.save_as_dataset() This function will optionally take a ``filename`` keyword that follows the same logic as discussed above in :ref:`saving_plots`. The filename to which the dataset was written will be returned. Once saved, this file can be reloaded completely independently of the original dataset and given back to the plot function with the same arguments. One can now continue to tweak the figure to one's liking. ..
code-block:: python new_ds = yt.load(fn) new_p = yt.ProjectionPlot( new_ds, "x", ("gas", "density"), weight_field=("gas", "density") ) new_p.save() The same functionality is available for profile and phase plots. In each case, a special data container, ``data``, is given to the plotting functions. For ``ProfilePlot``: .. code-block:: python ad = ds.all_data() p1 = yt.ProfilePlot( ad, ("gas", "density"), ("gas", "temperature"), weight_field=("gas", "mass") ) # note that ProfilePlots can hold a list of profiles fn = p1.profiles[0].save_as_dataset() new_ds = yt.load(fn) p2 = yt.ProfilePlot( new_ds.data, ("gas", "density"), ("gas", "temperature"), weight_field=("gas", "mass"), ) p2.save() For ``PhasePlot``: .. code-block:: python ad = ds.all_data() p1 = yt.PhasePlot( ad, ("gas", "density"), ("gas", "temperature"), ("gas", "mass"), weight_field=None ) fn = p1.profile.save_as_dataset() new_ds = yt.load(fn) p2 = yt.PhasePlot( new_ds.data, ("gas", "density"), ("gas", "temperature"), ("gas", "mass"), weight_field=None, ) p2.save() .. _eps-writer: Publication-ready Figures ------------------------- While the routines above give a convenient method to inspect and visualize your data, publishers often require figures to be in PDF or EPS format. While matplotlib supports vector graphics and image compression in PDF format, it does not support compression in EPS format. The :class:`~yt.visualization.eps_writer.DualEPS` module provides an interface to `PyX `_, which is a Python abstraction of the PostScript drawing model with a LaTeX interface. It is optimal for publications to provide figures with vector graphics to avoid rasterization of the lines and text, along with compression to produce figures that do not have a large filesize. .. note:: PyX must be installed, which can be accomplished with ``python -m pip install pyx``. This module can take any of the plots mentioned above and create an EPS or PDF figure. For example, .. code-block:: python import yt.visualization.eps_writer as eps slc = yt.SlicePlot(ds, "z", ("gas", "density")) slc.set_width(25, "kpc") eps_fig = eps.single_plot(slc) eps_fig.save_fig("zoom", format="eps") eps_fig.save_fig("zoom-pdf", format="pdf") The ``eps_fig`` object exposes all of the low-level functionality of ``PyX`` for further customization (see the `PyX documentation `_). There are a few convenience routines in ``eps_writer``, such as drawing a circle, .. code-block:: python eps_fig.circle(radius=0.2, loc=(0.5, 0.5)) eps_fig.save_fig("zoom-circle", format="eps") with a radius of 0.2 at a center of (0.5, 0.5), both of which are in units of the figure's field of view. The :func:`~yt.visualization.eps_writer.multiplot_yt` routine also provides a convenient method to produce multi-panel figures from a PlotWindow. For example, .. code-block:: python import yt import yt.visualization.eps_writer as eps slc = yt.SlicePlot( ds, "z", [ ("gas", "density"), ("gas", "temperature"), ("gas", "pressure"), ("gas", "velocity_magnitude"), ], ) slc.set_width(25, "kpc") eps_fig = eps.multiplot_yt(2, 2, slc, bare_axes=True) eps_fig.scale_line(0.2, "5 kpc") eps_fig.save_fig("multi", format="eps") will produce a 2x2 panel figure with a scale bar indicating 5 kpc. The routine will try its best to place the colorbars in the optimal margin, but this can be overridden by providing the keyword ``cb_location`` with a dict mapping the fields (as keys) to one of ``right``, ``left``, ``top``, or ``bottom``; a short sketch follows below. You can also combine slices, projections, and phase plots.
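As an illustration of ``cb_location``, here is a minimal sketch that reuses ``slc`` and ``eps`` from the previous snippet; the exact form of the dict keys (field tuples here) is an assumption, following the statement above that the fields are the keys:

.. code-block:: python

    # hypothetical override of the automatic colorbar placement;
    # keys are the plotted fields, values are the desired margins
    eps_fig = eps.multiplot_yt(
        2,
        2,
        slc,
        bare_axes=True,
        cb_location={
            ("gas", "density"): "left",
            ("gas", "temperature"): "right",
            ("gas", "pressure"): "top",
            ("gas", "velocity_magnitude"): "bottom",
        },
    )
    eps_fig.save_fig("multi_cb", format="eps")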
Here is an example that includes slices and phase plots: .. code-block:: python import yt from yt import PhasePlot, SlicePlot from yt.visualization.eps_writer import multiplot_yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") p1 = SlicePlot(ds, "x", ("gas", "density")) p1.set_width(10, "kpc") p2 = SlicePlot(ds, "x", ("gas", "temperature")) p2.set_width(10, "kpc") p2.set_cmap(("gas", "temperature"), "hot") sph = ds.sphere(ds.domain_center, (10, "kpc")) p3 = PhasePlot( sph, "radius", ("gas", "density"), ("gas", "temperature"), weight_field=("gas", "mass"), ) p4 = PhasePlot( sph, "radius", ("gas", "density"), ("gas", "pressure"), weight_field=("gas", "mass") ) mp = multiplot_yt( 2, 2, [p1, p2, p3, p4], savefig="yt", shrink_cb=0.9, bare_axes=False, yt_nocbar=False, margins=(0.5, 0.5), ) mp.save_fig("multi_slice_phase") Using yt's style with matplotlib -------------------------------- It is possible to use yt's plot style outside of yt itself, with the :func:`~yt.funcs.matplotlib_style_context` context manager: .. code-block:: python import matplotlib.pyplot as plt import numpy as np import yt plt.rcParams["font.size"] = 14 x = np.linspace(-np.pi, np.pi, 100) y = np.sin(x) with yt.funcs.matplotlib_style_context(): fig, ax = plt.subplots() ax.plot(x, y) ax.set( xlabel=r"$x$", ylabel=r"$y$", title="A yt-styled matplotlib figure", ) Note that :func:`~yt.funcs.matplotlib_style_context` doesn't control the font size, so we adjust it manually in the preamble. With matplotlib 3.7 and newer, you can avoid importing yt altogether: .. code-block:: python # requires matplotlib>=3.7 import matplotlib.pyplot as plt import numpy as np plt.rcParams["font.size"] = 14 x = np.linspace(-np.pi, np.pi, 100) y = np.sin(x) with plt.style.context("yt.default"): fig, ax = plt.subplots() ax.plot(x, y) ax.set( xlabel=r"$x$", ylabel=r"$y$", title="A yt-styled matplotlib figure", ) and you can also enable yt's style without a context manager as .. code-block:: python # requires matplotlib>=3.7 import matplotlib.pyplot as plt import numpy as np plt.style.use("yt.default") plt.rcParams["font.size"] = 14 x = np.linspace(-np.pi, np.pi, 100) y = np.sin(x) fig, ax = plt.subplots() ax.plot(x, y) ax.set( xlabel=r"$x$", ylabel=r"$y$", title="A yt-styled matplotlib figure", ) For more details, see `matplotlib's documentation `_ .. _extracting-isocontour-information: .. _surfaces: 3D Surfaces and Sketchfab ========================= .. sectionauthor:: Jill Naiman and Matthew Turk Surface Objects and Extracting Isocontour Information ----------------------------------------------------- yt contains an implementation of the `Marching Cubes `__ algorithm, which can operate on 3D data objects. This provides two things. The first is to identify isocontours and return either the geometry of those isocontours or another field value sampled along that isocontour. The second piece of functionality is to calculate the flux of a field over an isocontour. Note that these isocontours are not guaranteed to be topologically connected. In fact, inside a given data object, the marching cubes algorithm will return all isocontours, not just a single connected one. This means if you encompass two clumps of a given density in your data object and extract an isocontour at that density, it will include both of the clumps.
This means that with adaptive mesh refinement data, you *will* see cracks across refinement boundaries unless a "crack-fixing" step is applied to match up these boundaries. yt does not perform such an operation, and so there will be seams visible in 3D views of your isosurfaces. Surfaces can be exported in `OBJ format `_, values can be sampled at the center of each face of the surface, and the flux of a given field can be calculated over the surface. This means you can, for instance, extract an isocontour in density and calculate the mass flux over that isocontour. It also means you can export a surface from yt and view it in something like `Blender `__, `MeshLab `__, or even on your Android or iOS device in `MeshPad `__. To extract geometry or sample a field, call :meth:`~yt.data_objects.data_containers.YTSelectionContainer3D.extract_isocontours`. To calculate a flux, call :meth:`~yt.data_objects.data_containers.YTSelectionContainer3D.calculate_isocontour_flux`. Both of these operations will run in parallel. For more information on enabling parallelism in yt, see :ref:`parallel-computation`. Alternatively, you can make an object called ``YTSurface`` that makes this process much easier. You can create one of these objects by specifying a source data object and a field over which to identify a surface at a given value. For example: .. code-block:: python import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") sphere = ds.sphere("max", (1.0, "Mpc")) surface = ds.surface(sphere, ("gas", "density"), 1e-27) This object, ``surface``, can be queried for values on the surface. For instance: .. code-block:: python print(surface["gas", "temperature"].min(), surface["gas", "temperature"].max()) will return the values 11850.7476943 and 13641.0663899. These values are interpolated to the face centers of every triangle that constitutes a portion of the surface. Note that reading a new field requires re-calculating the entire surface, so it's not the fastest operation. You can get the vertices of the triangles by looking at the property ``.vertices``. Exporting to a File ------------------- If you want to export this to a `PLY file `_ you can call the routine ``export_ply``, which will write to a file and optionally sample a field at every face or vertex, outputting a color value to the file as well. This file can then be viewed in MeshLab, Blender or on the website `Sketchfab.com `__. But if you want to view it on Sketchfab, there's an even easier way! Exporting to Sketchfab ---------------------- `Sketchfab `__ is a website that uses WebGL, a relatively new technology for displaying 3D graphics in any browser. It's very fast and typically requires no plugins. Plus, it means that you can share data with anyone and they can view it immersively without having to download the data or any software packages! Sketchfab provides a free tier for up to 10 models, and these models can be embedded in websites. There are lots of reasons to want to export to Sketchfab. For instance, if you're looking at a galaxy formation simulation and you publish a paper, you can include a link to the model in that paper (or in the arXiv listing) so that people can explore and see what the data looks like. You can also embed a model in a website with other supplemental data, or you can use Sketchfab to discuss morphological properties of a dataset with collaborators. It's also just plain cool. The ``YTSurface`` object includes a method to upload directly to Sketchfab, but it requires that you get an API key first.
You can get this API key by creating an account and then going to your "dashboard," where it will be listed on the right hand side. Once you've obtained it, put it into your ``~/.config/yt/yt.toml`` file under the heading ``[yt]`` as the variable ``sketchfab_api_key``. If you don't want to do this, you can also supply it as an argument to the function ``export_sketchfab``. Now you can run a script like this: .. code-block:: python import yt from yt.units import kpc ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") dd = ds.sphere(ds.domain_center, (500, "kpc")) rho = 1e-28 bounds = [[dd.center[i] - 250 * kpc, dd.center[i] + 250 * kpc] for i in range(3)] surf = ds.surface(dd, ("gas", "density"), rho) upload_id = surf.export_sketchfab( title="galaxy0030 - 1e-28", description="Extraction of Density (colored by temperature) at 1e-28 g/cc", color_field=("gas", "temperature"), color_map="hot", color_log=True, bounds=bounds, ) and yt will extract a surface, convert it to a format that Sketchfab.com understands (PLY, in a zip file) and then upload it using your API key. For this demo, I've used data kindly provided by Ryan Joung from a simulation of galaxy formation. Here's what my newly-uploaded model looks like, using the embed code from Sketchfab: .. raw:: html As a note, Sketchfab has a maximum model size of 50MB for the free account. 50MB is pretty hefty, though, so it shouldn't be a problem for most needs. Additionally, if you have an eligible e-mail address associated with a school or university, you can request a free professional account, which allows models up to 200MB. See https://sketchfab.com/education for details. OBJ and MTL Files ----------------- If the ability to maneuver around an isosurface of your 3D simulation in `Sketchfab `__ cost you half a day of work (let's be honest, 2 days), prepare to be even less productive. With a new `OBJ file `__ exporter, you can now upload multiple surfaces of different transparencies in the same file. The following code snippet produces two files which contain the vertex info (surfaces.obj) and color/transparency info (surfaces.mtl) for a 3D galaxy simulation: .. code-block:: python import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") rho = [2e-27, 1e-27] trans = [1.0, 0.5] filename = "./surfaces" sphere = ds.sphere("max", (1.0, "Mpc")) for i, r in enumerate(rho): surf = ds.surface(sphere, ("gas", "density"), r) surf.export_obj( filename, transparency=trans[i], color_field=("gas", "temperature"), plot_index=i, ) The calling sequence is fairly similar to the ``export_ply`` function `previously used `__ to export 3D surfaces. However, one can now specify a transparency for each surface of interest, and each surface is enumerated in the OBJ files with ``plot_index``. This means one could potentially add surfaces to a previously created file by setting ``plot_index`` to the number of previously written surfaces. One tricky thing: the header of the OBJ file points to the MTL file (with the header command ``mtllib``). This means if you move one or both of the files you may have to change the header to reflect their new directory location. A Few More Options ------------------ There are a few extra inputs for formatting the surface files you may want to use. (1) Setting ``dist_fac`` will divide all the vertex coordinates by this factor. The default is to scale the vertices by the physical bounds of your sphere. (2) Setting ``color_field_max`` and/or ``color_field_min`` will scale the colors of all surfaces between this min and max.
The default is to scale the colors of each surface to its own min and max values. Uploading to SketchFab ---------------------- To upload to `Sketchfab `__ one only needs to zip the OBJ and MTL files together, and then upload via your dashboard prompts in the usual way. For example, the above script produces: .. raw:: html Importing to MeshLab and Blender -------------------------------- The new OBJ formatting will produce multi-colored surfaces in both `MeshLab `__ and `Blender `__, a feature not possible with the `previous PLY exporter `__. To see colors in MeshLab go to the "Render" tab and select "Color -> Per Face". Note in both MeshLab and Blender, unlike Sketchfab, you can't see transparencies until you render. ...One More Option ------------------ If you've started poking around the actual code instead of skipping off to lose a few days running around your own simulations, you may have noticed there are a few more options than those listed above, specifically, a few related to something called "Emissivity." This allows you to output one more type of variable on your surfaces. For example: .. code-block:: python import numpy as np import yt ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") rho = [2e-27, 1e-27] trans = [1.0, 0.5] filename = "./surfaces" def _emissivity(field, data): return data["gas", "density"] ** 2 * np.sqrt(data["gas", "temperature"]) ds.add_field(("gas", "emissivity"), function=_emissivity, sampling_type="cell", units="g**2*K**0.5/cm**6") sphere = ds.sphere("max", (1.0, "Mpc")) for i, r in enumerate(rho): surf = ds.surface(sphere, ("gas", "density"), r) surf.export_obj( filename, transparency=trans[i], color_field=("gas", "temperature"), emit_field=("gas", "emissivity"), plot_index=i, ) will output the same OBJ and MTL as in our previous example, but it will scale an emissivity parameter by our new field. Technically, this makes our outputs not really OBJ files at all, but a new sort of hybrid file; however, we needn't worry too much about that for now. This parameter is useful if you want to upload your files in Blender and have the embedded rendering engine do some approximate ray-tracing on your transparencies and emissivities. This does require some slight modifications to the OBJ importer scripts in Blender. For example, on a Mac, you would modify the file "/Applications/Blender/blender.app/Contents/MacOS/2.65/scripts/addons/io_scene_obj/import_obj.py", in the function "create_materials" with: .. code-block:: diff # ... elif line_lower.startswith(b'tr'): # translucency context_material.translucency = float_func(line_split[1]) elif line_lower.startswith(b'tf'): # rgb, filter color, blender has no support for this. pass elif line_lower.startswith(b'em'): # MODIFY: ADD THIS LINE context_material.emit = float_func(line_split[1]) # MODIFY: THIS LINE TOO elif line_lower.startswith(b'illum'): illum = int(line_split[1]) # ... To use this in Blender, you might create a `Blender script `__ like the following: .. code-block:: python from math import radians import bpy bpy.ops.import_scene.obj(filepath="./surfaces.obj") # will use new importer # set up lighting = indirect bpy.data.worlds["World"].light_settings.use_indirect_light = True bpy.data.worlds["World"].horizon_color = [0.0, 0.0, 0.0] # background = black # have to use approximate, not ray tracing for emitting objects ... # ... for now...
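# (assumption: the settings below target the legacy "Blender Internal"
# renderer that shipped with Blender 2.6x, matching the import_obj.py path
# patched above; Blender 2.80+ replaced it with Eevee/Cycles, whose
# lighting and material properties differ)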
bpy.data.worlds["World"].light_settings.gather_method = "APPROXIMATE" bpy.data.worlds["World"].light_settings.indirect_factor = 20.0 # turn up all emiss # set up camera to be on -x axis, facing toward your object scene = bpy.data.scenes["Scene"] scene.camera.location = [-0.12, 0.0, 0.0] # location scene.camera.rotation_euler = [ radians(90.0), 0.0, radians(-90.0), ] # face to (0,0,0) # render scene.render.filepath = "/Users/jillnaiman/surfaces_blender" # needs full path bpy.ops.render.render(write_still=True) The above bit of code would produce an image like so: .. image:: _images/surfaces_blender.png Note that the hottest stuff is brightly shining, while the cool stuff is less so (making the inner isodensity contour barely visible from the outside of the surfaces). If the Blender image caught your fancy, you'll be happy to know there is a greater integration of Blender and yt in the works, so stay tuned! .. _streamlines: Streamlines: Tracking the Trajectories of Tracers in your Data ============================================================== Streamlines, as implemented in yt, are defined as being parallel to a vector field at all points. While commonly used to follow the velocity flow or magnetic field lines, they can be defined to follow any three-dimensional vector field. Once an initial condition and total length of the streamline are specified, the streamline is uniquely defined. Relatedly, yt also has the ability to follow :doc:`../analyzing/Particle_Trajectories`. Method ------ Streamlining through a volume is useful for a variety of analysis tasks. By specifying a set of starting positions, the user is returned a set of 3D positions that can, in turn, be used to visualize the 3D path of the streamlines. Additionally, individual streamlines can be converted into :class:`~yt.data_objects.construction_data_containers.YTStreamline` objects, and queried for all the available fields along the streamline. The implementation of streamlining in yt is described below. #. Decompose the volume into a set of non-overlapping, fully domain-tiling bricks, using the :class:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree` homogenized volume. #. For every streamline starting position: #. While the length of the streamline is less than the requested length: #. Find the brick that contains the current position #. If not already present, generate vertex-centered data for the vector fields defining the streamline. #. While inside the brick #. Integrate the streamline path using a Runge-Kutta 4th order method and the vertex-centered data. #. During the intermediate steps of each RK4 step, if the position is updated to outside the current brick, interrupt the integration and locate a new brick at the intermediate position. #. The set of streamline positions is stored in the :class:`~yt.visualization.streamlines.Streamlines` object. Example Script ++++++++++++++ ..
python-script:: import matplotlib.pyplot as plt import numpy as np from mpl_toolkits.mplot3d import Axes3D import yt from yt.units import Mpc from yt.visualization.api import Streamlines # Load the dataset ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") # Define c: the center of the box, N: the number of streamlines, # scale: the spatial scale of the streamlines relative to the boxsize, # and then pos: the random positions of the streamlines. c = ds.domain_center N = 100 scale = ds.domain_width[0] pos_dx = np.random.random((N, 3)) * scale - scale / 2.0 pos = c + pos_dx # Create streamlines of the 3D vector velocity and integrate them through # the box defined above streamlines = Streamlines( ds, pos, ("gas", "velocity_x"), ("gas", "velocity_y"), ("gas", "velocity_z"), length=1.0 * Mpc, get_magnitude=True, ) streamlines.integrate_through_volume() # Create a 3D plot, trace the streamlines through the 3D volume of the plot fig = plt.figure() ax = Axes3D(fig, auto_add_to_figure=False) fig.add_axes(ax) for stream in streamlines.streamlines: stream = stream[np.all(stream != 0.0, axis=1)] ax.plot3D(stream[:, 0], stream[:, 1], stream[:, 2], alpha=0.1) # Save the plot to disk. plt.savefig("streamlines.png") Data Access Along the Streamline -------------------------------- .. note:: This functionality has not been implemented yet in the 3.x series of yt. If you are interested in working on this and have questions, please let us know on the yt-dev mailing list. Once the streamlines are found, a :class:`~yt.data_objects.construction_data_containers.YTStreamline` object can be created using the :meth:`~yt.visualization.streamlines.Streamlines.path` function, which takes as input the index of the streamline requested. This conversion is done by creating a mask that defines where the streamline is, and creating 't' and 'dts' fields that define the dimensionless streamline integration coordinate and integration step size. Once defined, fields can be accessed in the standard manner. Example Script ++++++++++++++++ .. code-block:: python import matplotlib.pyplot as plt import yt from yt.visualization.api import Streamlines ds = yt.load("DD1701") # Load ds streamlines = Streamlines(ds, ds.domain_center) streamlines.integrate_through_volume() stream = streamlines.path(0) plt.semilogy(stream["t"], stream["gas", "density"], "-x") Running in Parallel -------------------- The integration of the streamline paths is "embarrassingly" parallelized by splitting the streamlines up between the processors. Upon completion, each processor has access to all of the streamlines through the use of a reduction operation. For more information on enabling parallelism in yt, see :ref:`parallel-computation`. .. _unstructured_mesh_rendering: Unstructured Mesh Rendering =========================== Beginning with version 3.3, yt has the ability to volume render unstructured mesh data like that created by finite element calculations. No additional dependencies are required in order to use this feature. However, it is possible to speed up the rendering operation by installing with `Embree `_ support. Embree is a fast ray-tracing library from Intel that can substantially speed up the mesh rendering operation on large datasets.
You can read about how to install yt with Embree support below, or you can skip to the examples. Optional Embree Installation ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ You'll need to `install Python bindings for netCDF4 `_. Then you'll need to get Embree itself and its corresponding Python bindings (pyembree). For conda-based systems, this is trivial; see `pyembree's doc `_. For systems other than conda, you will need to install Embree first, either by `compiling from source `_ or by using one of the pre-built binaries available at Embree's `releases `_ page. Then you'll want to install pyembree from source as follows. .. code-block:: bash git clone https://github.com/scopatz/pyembree To install, navigate to the root directory and run the setup script. If Embree was installed to some location that is not in your path by default, you will need to pass in CFLAGS and LDFLAGS to the setup.py script. For example, the Mac OS X package installer puts the installation at /opt/local/ instead of /usr/local. To account for this, you would do: .. code-block:: bash CFLAGS='-I/opt/local/include' LDFLAGS='-L/opt/local/lib' python setup.py install Once Embree and pyembree are installed, in order to use the unstructured mesh rendering capability you must :ref:`rebuild yt from source `. Once again, if Embree is installed in a location that is not part of your default search path, you must tell yt where to find it. There are a number of ways to do this. One way is to again manually pass in the flags when running the setup script in the yt-git directory: .. code-block:: bash CFLAGS='-I/opt/local/include' LDFLAGS='-L/opt/local/lib' python setup.py develop You can also set the ``EMBREE_DIR`` environment variable to '/opt/local', in which case you could just run .. code-block:: bash python setup.py develop as usual. Finally, if you create a file called embree.cfg in the yt-git directory with the location of the Embree installation, the setup script will find this and use it, provided ``EMBREE_DIR`` is not set. An example embree.cfg file could look like this: .. code-block:: bash /opt/local/ We recommend one of the latter two methods, especially if you plan on re-compiling the Cython extensions regularly. Note that none of this is necessary if you installed Embree into a location that is in your default path, such as /usr/local. Examples ^^^^^^^^ First, here is an example of rendering an 8-node, hexahedral MOOSE dataset. .. python-script:: import yt ds = yt.load("MOOSE_sample_data/out.e-s010") # create a default scene sc = yt.create_scene(ds) # override the default colormap ms = sc.get_source() ms.cmap = "Eos A" # adjust the camera position and orientation cam = sc.camera cam.focus = ds.arr([0.0, 0.0, 0.0], "code_length") cam_pos = ds.arr([-3.0, 3.0, -3.0], "code_length") north_vector = ds.arr([0.0, -1.0, -1.0], "dimensionless") cam.set_position(cam_pos, north_vector) # increase the default resolution cam.resolution = (800, 800) # render and save sc.save() You can also overplot the mesh boundaries: ..
python-script:: import yt ds = yt.load("MOOSE_sample_data/out.e-s010") # create a default scene sc = yt.create_scene(ds) # override the default colormap ms = sc.get_source() ms.cmap = "Eos A" # adjust the camera position and orientation cam = sc.camera cam.focus = ds.arr([0.0, 0.0, 0.0], "code_length") cam_pos = ds.arr([-3.0, 3.0, -3.0], "code_length") north_vector = ds.arr([0.0, -1.0, -1.0], "dimensionless") cam.set_position(cam_pos, north_vector) # increase the default resolution cam.resolution = (800, 800) # render, draw the element boundaries, and save sc.render() sc.annotate_mesh_lines() sc.save() As with slices, you can visualize different meshes and different fields. For example, here is a script similar to the above that plots the "diffused" variable using the mesh labelled by "connect2": .. python-script:: import yt ds = yt.load("MOOSE_sample_data/out.e-s010") # create a default scene sc = yt.create_scene(ds, ("connect2", "diffused")) # override the default colormap ms = sc.get_source() ms.cmap = "Eos A" # adjust the camera position and orientation cam = sc.camera cam.focus = ds.arr([0.0, 0.0, 0.0], "code_length") cam_pos = ds.arr([-3.0, 3.0, -3.0], "code_length") north_vector = ds.arr([0.0, -1.0, -1.0], "dimensionless") cam.set_position(cam_pos, north_vector) # increase the default resolution cam.resolution = (800, 800) # render and save sc.save() Next, here is an example of rendering a dataset with tetrahedral mesh elements. Note that in this dataset, there are multiple "steps" per file, so we specify that we want to look at the last one. .. python-script:: import yt filename = "MOOSE_sample_data/high_order_elems_tet4_refine_out.e" ds = yt.load(filename, step=-1) # we look at the last time frame # create a default scene sc = yt.create_scene(ds, ("connect1", "u")) # override the default colormap ms = sc.get_source() ms.cmap = "Eos A" # adjust the camera position and orientation cam = sc.camera camera_position = ds.arr([3.0, 3.0, 3.0], "code_length") cam.set_width(ds.arr([2.0, 2.0, 2.0], "code_length")) north_vector = ds.arr([0.0, -1.0, 0.0], "dimensionless") cam.set_position(camera_position, north_vector) # increase the default resolution cam.resolution = (800, 800) # render and save sc.save() Here is an example using 6-node wedge elements: .. python-script:: import yt ds = yt.load("MOOSE_sample_data/wedge_out.e") # create a default scene sc = yt.create_scene(ds, ("connect2", "diffused")) # override the default colormap ms = sc.get_source() ms.cmap = "Eos A" # adjust the camera position and orientation cam = sc.camera cam.set_position(ds.arr([1.0, -1.0, 1.0], "code_length")) cam.width = ds.arr([1.5, 1.5, 1.5], "code_length") # render and save sc.save() Another example, this time plotting the temperature field from a 20-node hex MOOSE dataset: .. python-script:: import yt # We load the last time frame ds = yt.load("MOOSE_sample_data/mps_out.e", step=-1) # create a default scene sc = yt.create_scene(ds, ("connect2", "temp")) # override the default colormap.
This time we also override # the default color bounds ms = sc.get_source() ms.cmap = "hot" ms.color_bounds = (500.0, 1700.0) # adjust the camera position and orientation cam = sc.camera camera_position = ds.arr([-1.0, 1.0, -0.5], "code_length") north_vector = ds.arr([0.0, -1.0, -1.0], "dimensionless") cam.width = ds.arr([0.04, 0.04, 0.04], "code_length") cam.set_position(camera_position, north_vector) # increase the default resolution cam.resolution = (800, 800) # render, draw the element boundaries, and save sc.render() sc.annotate_mesh_lines() sc.save() The dataset in the above example contains displacement fields, so this is a good opportunity to demonstrate their use. The following example is exactly like the above, except we scale the displacements by a factor of 10.0, and additionally add an offset to the mesh by 1.0 unit in the x-direction: .. python-script:: import yt # We load the last time frame ds = yt.load( "MOOSE_sample_data/mps_out.e", step=-1, displacements={"connect2": (10.0, [0.01, 0.0, 0.0])}, ) # create a default scene sc = yt.create_scene(ds, ("connect2", "temp")) # override the default colormap. This time we also override # the default color bounds ms = sc.get_source() ms.cmap = "hot" ms.color_bounds = (500.0, 1700.0) # adjust the camera position and orientation cam = sc.camera camera_position = ds.arr([-1.0, 1.0, -0.5], "code_length") north_vector = ds.arr([0.0, -1.0, -1.0], "dimensionless") cam.width = ds.arr([0.05, 0.05, 0.05], "code_length") cam.set_position(camera_position, north_vector) # increase the default resolution cam.resolution = (800, 800) # render, draw the element boundaries, and save sc.render() sc.annotate_mesh_lines() sc.save() As with other volume renderings in yt, you can swap out different lenses. Here is an example that uses a "perspective" lens, for which the rays diverge from the camera position according to some opening angle: .. python-script:: import yt ds = yt.load("MOOSE_sample_data/out.e-s010") # create a default scene sc = yt.create_scene(ds, ("connect2", "diffused")) # override the default colormap ms = sc.get_source() ms.cmap = "Eos A" # Create a perspective Camera cam = sc.add_camera(ds, lens_type="perspective") cam.focus = ds.arr([0.0, 0.0, 0.0], "code_length") cam_pos = ds.arr([-4.5, 4.5, -4.5], "code_length") north_vector = ds.arr([0.0, -1.0, -1.0], "dimensionless") cam.set_position(cam_pos, north_vector) # increase the default resolution cam.resolution = (800, 800) # render, draw the element boundaries, and save sc.render() sc.annotate_mesh_lines() sc.save() You can also create scenes that have multiple meshes. The ray-tracing infrastructure will keep track of the depth information for each source separately, and composite the final image accordingly. In the next example, we show how to render a scene with two meshes on it: ..
python-script:: import yt from yt.visualization.volume_rendering.api import MeshSource, Scene ds = yt.load("MOOSE_sample_data/out.e-s010") # this time we create an empty scene and add sources to it one-by-one sc = Scene() # set up our Camera cam = sc.add_camera(ds) cam.focus = ds.arr([0.0, 0.0, 0.0], "code_length") cam.set_position( ds.arr([-3.0, 3.0, -3.0], "code_length"), ds.arr([0.0, -1.0, 0.0], "dimensionless"), ) cam.set_width(ds.arr([8.0, 8.0, 8.0], "code_length")) cam.resolution = (800, 800) # create two distinct MeshSources from 'connect1' and 'connect2' ms1 = MeshSource(ds, ("connect1", "diffused")) ms2 = MeshSource(ds, ("connect2", "diffused")) sc.add_source(ms1) sc.add_source(ms2) # render and save im = sc.render() sc.save() However, in the rendered image above, we note that the color is discontinuous in the middle and upper part of the cylinder's side. In the original data there are two parts, but the value of ``diffused`` is continuous at the interface. This discontinuous color is due to independent colormap settings for the two mesh sources. To fix it, we can explicitly specify the colormap bounds for each mesh source as follows: .. python-script:: import yt from yt.visualization.volume_rendering.api import MeshSource, Scene ds = yt.load("MOOSE_sample_data/out.e-s010") # this time we create an empty scene and add sources to it one-by-one sc = Scene() # set up our Camera cam = sc.add_camera(ds) cam.focus = ds.arr([0.0, 0.0, 0.0], "code_length") cam.set_position( ds.arr([-3.0, 3.0, -3.0], "code_length"), ds.arr([0.0, -1.0, 0.0], "dimensionless"), ) cam.set_width(ds.arr([8.0, 8.0, 8.0], "code_length")) cam.resolution = (800, 800) # create two distinct MeshSources from 'connect1' and 'connect2' ms1 = MeshSource(ds, ("connect1", "diffused")) ms2 = MeshSource(ds, ("connect2", "diffused")) # add the following lines to set the range of the two mesh sources ms1.color_bounds = (0.0, 3.0) ms2.color_bounds = (0.0, 3.0) sc.add_source(ms1) sc.add_source(ms2) # render and save im = sc.render() sc.save() Making Movies ^^^^^^^^^^^^^ Here are a couple of example scripts that show how to create image frames that can later be stitched together into a movie. In the first example, we look at a single dataset at a fixed time, but we move the camera around to get a different vantage point. We call the rotate() method once per frame, saving a new image to disk each time. .. code-block:: python import numpy as np import yt ds = yt.load("MOOSE_sample_data/out.e-s010") # create a default scene sc = yt.create_scene(ds) # override the default colormap ms = sc.get_source() ms.cmap = "Eos A" # adjust the camera position and orientation cam = sc.camera cam.focus = ds.arr([0.0, 0.0, 0.0], "code_length") cam_pos = ds.arr([-3.0, 3.0, -3.0], "code_length") north_vector = ds.arr([0.0, -1.0, -1.0], "dimensionless") cam.set_position(cam_pos, north_vector) # increase the default resolution cam.resolution = (800, 800) # set the camera to use "steady_north" cam.steady_north = True # make movie frames num_frames = 301 for i in range(num_frames): cam.rotate(2.0 * np.pi / num_frames) sc.render() sc.save("movie_frames/surface_render_%.4d.png" % i) Finally, this example demonstrates how to loop over the time steps in a single file with a fixed camera position: ..
code-block:: python import matplotlib.pyplot as plt import yt from yt.visualization.volume_rendering.api import MeshSource, Scene NUM_STEPS = 127 CMAP = "hot" VMIN = 300.0 VMAX = 2000.0 for step in range(NUM_STEPS): ds = yt.load("MOOSE_sample_data/mps_out.e", step=step) time = ds._get_current_time() # the field name is a tuple of strings. The first string # specifies which mesh will be plotted, the second string # specifies the name of the field. field_name = ('connect2', 'temp') # this initializes the render source ms = MeshSource(ds, field_name) # set up the camera here. these values were arrived at by # calling pitch, yaw, and roll in the notebook until I # got the angle I wanted. sc = Scene() cam = sc.add_camera(ds) camera_position = ds.arr([0.1, 0.0, 0.1], 'code_length') cam.focus = ds.domain_center north_vector = ds.arr([-0.3032476, -0.71782557, 0.62671153], 'dimensionless') cam.width = ds.arr([ 0.04, 0.04, 0.04], 'code_length') cam.resolution = (800, 800) cam.set_position(camera_position, north_vector) # actually make the image here im = ms.render(cam, cmap=CMAP, color_bounds=(VMIN, VMAX)) # Plot the result using matplotlib and save. # Note that we are setting the upper and lower # bounds of the colorbar to be the same for all # frames of the image. # must clear the image between frames plt.clf() fig = plt.gcf() ax = plt.gca() ax.imshow(im, interpolation='nearest', origin='lower') # Add the colorbar using a fake (not shown) image. p = ax.imshow(ms.data, visible=False, cmap=CMAP, vmin=VMIN, vmax=VMAX) cb = fig.colorbar(p) cb.set_label(field_name[1]) ax.text(25, 750, 'time = %.2e' % time, color='k') ax.axes.get_xaxis().set_visible(False) ax.axes.get_yaxis().set_visible(False) plt.savefig('movie_frames/test_%.3d' % step) .. _visualizing_particle_datasets_with_firefly: Visualizing Particle Datasets with Firefly ========================================== `Firefly `_ is an interactive, browser-based, particle visualization platform that allows you to filter, colormap, and fly through your data. The Python frontend allows users to both load in their own datasets and customize every aspect of the user interface. yt offers the ability to export your data to Firefly's ffly or JSON format through the :meth:`~yt.data_objects.data_containers.YTDataContainer.create_firefly_object` method. You can adjust the interface settings, particle colors, decimation factors, and other `Firefly settings `_ through the returned ``Firefly.reader`` object. Once the settings are tuned to your liking, calling the ``reader.writeToDisk()`` method will produce the final ffly files. Note that ``reader.clean_datadir`` defaults to true when using :meth:`~yt.data_objects.data_containers.YTDataContainer.create_firefly_object`, so if you would like to manage multiple datasets, make sure to pass different ``datadir`` keyword arguments. .. image:: _images/firefly_example.png :width: 85% :align: center :alt: Screenshot of a sample Firefly visualization Exporting an Example Dataset to Firefly ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Here is an example of how to use yt to export data to Firefly using some `sample data `_. ..
code-block:: python import yt ramses_ds = yt.load("DICEGalaxyDisk_nonCosmological/output_00002/info_00002.txt") region = ramses_ds.sphere(ramses_ds.domain_center, (1000, "kpc")) reader = region.create_firefly_object( "IsoGalaxyRamses", fields_to_include=["particle_extra_field_1", "particle_extra_field_2"], fields_units=["dimensionless", "dimensionless"], ) ## adjust some of the options reader.settings["color"]["io"] = [1, 1, 0, 1] ## set default color reader.particleGroups[0].decimation_factor = 100 ## increase the decimation factor ## dump files to ## ~/IsoGalaxyRamses/Dataio000.ffly ## ~/IsoGalaxyRamses/filenames.json ## ~/IsoGalaxyRamses/DataSettings.json reader.writeToDisk() .. _volume_rendering: 3D Visualization and Volume Rendering ===================================== yt has the ability to create 3D visualizations using a process known as *volume rendering* (oftentimes abbreviated VR). This volume rendering code differs from the standard yt infrastructure for generating :ref:`simple-inspection` in that it evaluates the radiative transfer equations through the volume with user-defined transfer functions for each ray. Thus it can accommodate both opaque and transparent structures appropriately. Currently all of the rendering capabilities are implemented in software, requiring no specialized hardware. Optimized versions implemented with OpenGL and utilizing graphics processors are being actively developed. .. note:: There is a Jupyter notebook containing a volume rendering tutorial: :doc:`Volume_Rendering_Tutorial`. Volume Rendering Introduction ----------------------------- Constructing a 3D visualization is a process of describing the "scene" that will be rendered. This includes the location of the viewing point (i.e., where the "camera" is placed), the method by which a system would be viewed (i.e., the "lens," which may be orthographic, perspective, fisheye, spherical, and so on) and the components that will be rendered (render "sources," such as volume elements, lines, annotations, and opaque surfaces). The 3D plotting infrastructure then develops a resultant image from this scene, which can be saved to a file or viewed inline. By constructing the scene in this programmatic way, full control can be had over each component in the scene as well as the method by which the scene is rendered; this can be used to prototype visualizations, inject annotation such as grid or continent lines, and then to render a production-quality visualization. By changing the "lens" used, a single camera path can output images suitable for planetarium domes, immersive and head tracking systems (such as the Oculus Rift or recent 360-degree/virtual reality movie viewers such as the mobile YouTube app), as well as standard screens. .. image:: _images/scene_diagram.svg :width: 50% :align: center :alt: Diagram of a 3D Scene .. _scene-description: Volume Rendering Components --------------------------- The Scene class and its subcomponents are organized as follows. Indented objects *hang* off of their parent object.
* :ref:`Scene ` - container object describing a volume and its contents * :ref:`Sources ` - objects to be rendered * :ref:`VolumeSource ` - simulation volume tied to a dataset * :ref:`TransferFunction ` - mapping of simulation field values to color, brightness, and transparency * :ref:`OpaqueSource ` - Opaque structures like lines, dots, etc. * :ref:`Annotations ` - Annotated structures like grid cells, simulation boundaries, etc. * :ref:`Camera ` - object for rendering; consists of a location, focus, orientation, and resolution * :ref:`Lens ` - object describing method for distributing rays through Sources .. _scene: Scene ^^^^^ The :class:`~yt.visualization.volume_rendering.scene.Scene` is the container class which encompasses the whole of the volume rendering interface. At its base level, it describes an infinite volume, with a series of :class:`~yt.visualization.volume_rendering.render_source.RenderSource` objects hanging off of it that describe the contents of that volume. It also contains a :class:`~yt.visualization.volume_rendering.camera.Camera` for rendering that volume. All of its classes can be accessed and modified as properties hanging off of the scene. The scene's most important functions are :meth:`~yt.visualization.volume_rendering.scene.Scene.render` for casting rays through the scene and :meth:`~yt.visualization.volume_rendering.scene.Scene.save` for saving the resulting rendered image to disk (see note on :ref:`when_to_render`). The easiest way to create a scene with sensible defaults is to use the functions: :func:`~yt.visualization.volume_rendering.volume_rendering.create_scene` (creates the scene) or :func:`~yt.visualization.volume_rendering.volume_rendering.volume_render` (creates the scene and then triggers ray tracing to produce an image). See the :ref:`annotated-vr-example` for details. .. _render-sources: RenderSources ^^^^^^^^^^^^^ :class:`~yt.visualization.volume_rendering.render_source.RenderSource` objects comprise the contents of what is actually *rendered*. One can add several different RenderSources to a Scene and the ray-tracing step will pass rays through all of them to produce the final rendered image. .. _volume-sources: VolumeSources +++++++++++++ :class:`~yt.visualization.volume_rendering.render_source.VolumeSource` objects are 3D :ref:`geometric-objects` of individual datasets placed into the scene for rendering. Each VolumeSource requires a :ref:`TransferFunction ` to describe how the fields in the VolumeSource dataset produce different colors and brightnesses in the resulting image. .. _opaque-sources: OpaqueSources +++++++++++++ In addition to semi-transparent objects, fully opaque structures can be added to a scene as :class:`~yt.visualization.volume_rendering.render_source.OpaqueSource` objects including :class:`~yt.visualization.volume_rendering.render_source.LineSource` objects and :class:`~yt.visualization.volume_rendering.render_source.PointSource` objects. These are useful if you want to annotate locations or particles in an image, or if you want to draw lines connecting different regions or vertices. For instance, lines can be used to draw outlines of regions or continents. Worked examples of using the ``LineSource`` and ``PointSource`` are available at :ref:`cookbook-vol-points` and :ref:`cookbook-vol-lines`. .. _volume_rendering_annotations: Annotations +++++++++++ Similar to OpaqueSources, annotations enable the user to highlight certain information with opaque structures. 
Examples include :class:`~yt.visualization.volume_rendering.api.BoxSource`, :class:`~yt.visualization.volume_rendering.api.GridSource`, and :class:`~yt.visualization.volume_rendering.api.CoordinateVectorSource`. These annotations will operate in data space and can draw boxes, grid information, and also provide a vector orientation within the image. For example scripts using these features, see :ref:`cookbook-volume_rendering_annotations`. .. _transfer_functions: Transfer Functions ^^^^^^^^^^^^^^^^^^ A transfer function describes how rays that pass through the domain of a :class:`~yt.visualization.volume_rendering.render_source.VolumeSource` are mapped from simulation field values to color, brightness, and opacity in the resulting rendered image. A transfer function consists of an array over the x and y dimensions. The x dimension typically represents field values in your underlying dataset to which you want your rendering to be sensitive (e.g. density from 1e20 to 1e23). The y dimension consists of 4 channels for red, green, blue, and alpha (opacity). A transfer function starts with all zeros for its y dimension values, implying that rays traversing the VolumeSource will not show up at all in the final image. However, you can add features to the transfer function that will highlight certain field values in your rendering. .. _transfer-function-helper: TransferFunctionHelper ++++++++++++++++++++++ Because good transfer functions can be difficult to generate, the :class:`~yt.visualization.volume_rendering.transfer_function_helper.TransferFunctionHelper` exists in order to help create and modify transfer functions with smart defaults for your datasets. To ease constructing transfer functions, each ``VolumeSource`` instance has a ``TransferFunctionHelper`` instance associated with it. This is the easiest way to construct and customize a ``ColorTransferFunction`` for a volume rendering. In the following example, we make use of the ``TransferFunctionHelper`` associated with a scene's ``VolumeSource`` to create an appealing transfer function between a physically motivated range of densities in a cosmological simulation: .. python-script:: import yt ds = yt.load("Enzo_64/DD0043/data0043") sc = yt.create_scene(ds, lens_type="perspective") # Get a reference to the VolumeSource associated with this scene # It is the first source associated with the scene, so we can refer to it # using index 0. source = sc[0] # Set the bounds of the transfer function source.tfh.set_bounds((3e-31, 5e-27)) # set that the transfer function should be evaluated in log space source.tfh.set_log(True) # Make underdense regions appear opaque source.tfh.grey_opacity = True # Plot the transfer function, along with the CDF of the density field to # see how the transfer function corresponds to structure in the CDF source.tfh.plot("transfer_function.png", profile_field=("gas", "density")) # save the image, flooring especially bright pixels for better contrast sc.save("rendering.png", sigma_clip=6.0) For fun, let's make the same volume rendering, but this time setting ``grey_opacity=False``, which will make overdense regions stand out more: ..
.. python-script::

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")

   sc = yt.create_scene(ds, lens_type="perspective")

   source = sc[0]

   # Set transfer function properties
   source.tfh.set_bounds((3e-31, 5e-27))
   source.tfh.set_log(True)
   source.tfh.grey_opacity = False

   source.tfh.plot("transfer_function.png", profile_field=("gas", "density"))

   sc.save("rendering.png", sigma_clip=4.0)

To see a full example of how to use the ``TransferFunctionHelper`` interface,
follow the annotated :doc:`TransferFunctionHelper_Tutorial`.

Color Transfer Functions
++++++++++++++++++++++++

A :class:`~yt.visualization.volume_rendering.transfer_functions.ColorTransferFunction`
is the standard way to map dataset field values to colors, brightnesses, and
opacities in the rendered rays. One can add discrete features to the transfer
function, which will render isocontours in the field data and works well for
visualizing nested structures in a simulation. Alternatively, one can also add
continuous features to the transfer function.

See :ref:`cookbook-custom-transfer-function` for an annotated, runnable
tutorial explaining usage of the ColorTransferFunction.

There are several methods to create a
:class:`~yt.visualization.volume_rendering.transfer_functions.ColorTransferFunction`
for a volume rendering. We will describe the low-level interface for
constructing color transfer functions here, and provide examples for each
option.

add_layers
""""""""""

The easiest way to create a ColorTransferFunction is to use the
:meth:`~yt.visualization.volume_rendering.transfer_functions.ColorTransferFunction.add_layers`
function, which will add evenly spaced isocontours along the transfer
function, sampling a colormap to determine the colors of the layers.

.. python-script::

   import numpy as np

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")

   sc = yt.create_scene(ds, lens_type="perspective")

   source = sc[0]

   source.set_field(("gas", "density"))
   source.set_log(True)

   bounds = (3e-31, 5e-27)

   # Since this rendering is done in log space, the transfer function needs
   # to be specified in log space.
   tf = yt.ColorTransferFunction(np.log10(bounds))

   tf.add_layers(5, colormap="cmyt.arbre")

   source.tfh.tf = tf
   source.tfh.bounds = bounds

   source.tfh.plot("transfer_function.png", profile_field=("gas", "density"))

   sc.save("rendering.png", sigma_clip=6)

sample_colormap
"""""""""""""""

To add a single gaussian layer with a color determined by a colormap value,
use
:meth:`~yt.visualization.volume_rendering.transfer_functions.ColorTransferFunction.sample_colormap`.

.. python-script::

   import numpy as np

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")

   sc = yt.create_scene(ds, lens_type="perspective")

   source = sc[0]

   source.set_field(("gas", "density"))
   source.set_log(True)

   bounds = (3e-31, 5e-27)

   # Since this rendering is done in log space, the transfer function needs
   # to be specified in log space.
   tf = yt.ColorTransferFunction(np.log10(bounds))

   tf.sample_colormap(np.log10(1e-30), w=0.01, colormap="cmyt.arbre")

   source.tfh.tf = tf
   source.tfh.bounds = bounds

   source.tfh.plot("transfer_function.png", profile_field=("gas", "density"))

   sc.save("rendering.png", sigma_clip=6)

add_gaussian
""""""""""""

If you would like to add a gaussian with a customized color, or no color, use
:meth:`~yt.visualization.volume_rendering.transfer_functions.ColorTransferFunction.add_gaussian`.
.. python-script::

   import numpy as np

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")

   sc = yt.create_scene(ds, lens_type="perspective")

   source = sc[0]

   source.set_field(("gas", "density"))
   source.set_log(True)

   bounds = (3e-31, 5e-27)

   # Since this rendering is done in log space, the transfer function needs
   # to be specified in log space.
   tf = yt.ColorTransferFunction(np.log10(bounds))

   tf.add_gaussian(np.log10(1e-29), width=0.005, height=[0.753, 1.0, 0.933, 1.0])

   source.tfh.tf = tf
   source.tfh.bounds = bounds

   source.tfh.plot("transfer_function.png", profile_field=("gas", "density"))

   sc.save("rendering.png", sigma_clip=6)

map_to_colormap
"""""""""""""""

Finally, to map a colormap directly to a range in densities, use
:meth:`~yt.visualization.volume_rendering.transfer_functions.ColorTransferFunction.map_to_colormap`.
This makes it possible to map a segment of the transfer function space to a
colormap at a single alpha value. Where the above options produced layered
volume renderings, this allows all of the density values in a dataset to
contribute to the volume rendering.

.. python-script::

   import numpy as np

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")

   sc = yt.create_scene(ds, lens_type="perspective")

   source = sc[0]

   source.set_field(("gas", "density"))
   source.set_log(True)

   bounds = (3e-31, 5e-27)

   # Since this rendering is done in log space, the transfer function needs
   # to be specified in log space.
   tf = yt.ColorTransferFunction(np.log10(bounds))


   def linramp(vals, minval, maxval):
       return (vals - vals.min()) / (vals.max() - vals.min())


   tf.map_to_colormap(
       np.log10(3e-31), np.log10(5e-27), colormap="cmyt.arbre", scale_func=linramp
   )

   source.tfh.tf = tf
   source.tfh.bounds = bounds

   source.tfh.plot("transfer_function.png", profile_field=("gas", "density"))

   sc.save("rendering.png", sigma_clip=6)

Projection Transfer Function
++++++++++++++++++++++++++++

This is designed to allow you to generate projections like what you obtain
from the standard :ref:`projection-plots`, and it forms the basis of
:ref:`off-axis-projections`. See :ref:`cookbook-offaxis_projection` for a
simple example. Note that the integration here is scaled to a width of 1.0;
this means that if you want to apply a colorbar, you will have to multiply by
the integration width (specified when you initialize the volume renderer) in
whatever units are appropriate.

Planck Transfer Function
++++++++++++++++++++++++

This transfer function is designed to apply a semi-realistic color field based
on temperature, emission weighted by density, and approximate scattering based
on the density. This class is currently under-documented, and it may be best
to examine the source code to use it.

More Complicated Transfer Functions
+++++++++++++++++++++++++++++++++++

For more complicated transfer functions, you can use the
:class:`~yt.visualization.volume_rendering.transfer_functions.MultiVariateTransferFunction`
object. This allows for a set of weightings, linkages and so on. All of the
information about how transfer functions are used and how values are extracted
is contained in the source file ``utilities/lib/grid_traversal.pyx``. For more
information on how the transfer function is actually applied, look over the
source code there.

.. _camera:

Camera
^^^^^^

The :class:`~yt.visualization.volume_rendering.camera.Camera` object is what
it sounds like: a camera within the Scene.
It possesses the following quantities:

* :meth:`~yt.visualization.volume_rendering.camera.Camera.position` - the
  position of the camera in scene-space
* :meth:`~yt.visualization.volume_rendering.camera.Camera.width` - the width
  of the plane the camera can see
* :meth:`~yt.visualization.volume_rendering.camera.Camera.focus` - the point
  in space the camera is looking at
* :meth:`~yt.visualization.volume_rendering.camera.Camera.resolution` - the
  image resolution
* ``north_vector`` - a vector defining the "up" direction in an image
* :ref:`lens <lenses>` - an object controlling how rays traverse the Scene

.. _camera_movement:

Moving and Orienting the Camera
+++++++++++++++++++++++++++++++

There are multiple ways to manipulate the camera viewpoint and orientation.
One can set the properties listed above explicitly, or one can use the
:class:`~yt.visualization.volume_rendering.camera.Camera` helper methods. In
either case, any change triggers an update of all of the other properties.
Note that the camera exists in a right-handed coordinate system centered on
the camera.

Rotation-related methods:

* :meth:`~yt.visualization.volume_rendering.camera.Camera.pitch` - rotate
  about the lateral axis
* :meth:`~yt.visualization.volume_rendering.camera.Camera.yaw` - rotate about
  the vertical axis (i.e. ``north_vector``)
* :meth:`~yt.visualization.volume_rendering.camera.Camera.roll` - rotate about
  the longitudinal axis (i.e. ``normal_vector``)
* :meth:`~yt.visualization.volume_rendering.camera.Camera.rotate` - rotate
  about an arbitrary axis
* :meth:`~yt.visualization.volume_rendering.camera.Camera.iter_rotate` -
  iteratively rotate about an arbitrary axis

For the rotation methods, the camera pivots around the ``rot_center`` rotation
center. By default, this is the camera position, which means that the camera
doesn't change its position at all; it just changes its orientation.

Zoom-related methods:

* :meth:`~yt.visualization.volume_rendering.camera.Camera.set_width` - change
  the width of the FOV
* :meth:`~yt.visualization.volume_rendering.camera.Camera.zoom` - change the
  width of the FOV
* :meth:`~yt.visualization.volume_rendering.camera.Camera.iter_zoom` -
  iteratively change the width of the FOV

Perhaps counterintuitively, the camera does not get closer to the focus during
a zoom; it simply reduces the width of the field of view.

Translation-related methods:

* :meth:`~yt.visualization.volume_rendering.camera.Camera.set_position` -
  change the location of the camera, keeping the focus fixed
* :meth:`~yt.visualization.volume_rendering.camera.Camera.iter_move` -
  iteratively change the location of the camera, keeping the focus fixed

The iterative methods provide iteration over a series of changes in the
position or orientation of the camera. These can be used within a loop. For an
example of how to use all of these camera movement functions, see
:ref:`cookbook-camera_movement`.

.. _lenses:

Camera Lenses
^^^^^^^^^^^^^

Cameras possess :class:`~yt.visualization.volume_rendering.lens.Lens` objects,
which control the geometric path along which rays travel to the camera. These
lenses can be swapped in and out of an existing camera to produce different
views of the same Scene. For a full demonstration of a Scene object rendered
with different lenses, see :ref:`cookbook-various_lens`.

Plane Parallel
++++++++++++++

The :class:`~yt.visualization.volume_rendering.lens.PlaneParallelLens` is the
standard lens type used for orthographic projections. All rays emerge parallel
to each other, arranged along a plane.
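As a quick illustration of swapping lenses on an existing camera (the lens
types are described below), here is a minimal sketch via
:meth:`~yt.visualization.volume_rendering.camera.Camera.set_lens`, reusing the
``Enzo_64`` sample dataset from the examples above; the parameters are
illustrative, not tuned:

.. code-block:: python

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")

   # a new scene's default camera uses a plane-parallel lens
   sc = yt.create_scene(ds)
   sc.save("plane_parallel.png", sigma_clip=6.0)

   # swap in a perspective lens on the same camera and render again
   sc.camera.set_lens("perspective")
   sc.save("perspective.png", sigma_clip=6.0)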
Perspective and Stereo Perspective
++++++++++++++++++++++++++++++++++

The :class:`~yt.visualization.volume_rendering.lens.PerspectiveLens` adjusts
for an opening view angle, so that the scene will have an element of
perspective to it.
:class:`~yt.visualization.volume_rendering.lens.StereoPerspectiveLens` is
identical to PerspectiveLens, but it produces two images from nearby camera
positions for use in 3D viewing. How three-dimensional the image appears when
viewed will depend upon the value of
:attr:`~yt.visualization.volume_rendering.lens.StereoPerspectiveLens.disparity`,
which is half the maximum distance between two corresponding points in the
left and right images. By default, it is set to 3 pixels.

Fisheye or Dome
+++++++++++++++

The :class:`~yt.visualization.volume_rendering.lens.FisheyeLens` is
appropriate for viewing an arbitrary field of view. Fisheye images are
typically used for dome-based presentations; the Hayden Planetarium, for
instance, has a field of view of 194.6 degrees. The images returned by this
camera will be flat pixel images that can and should be reshaped to the
resolution.

Spherical and Stereo Spherical
++++++++++++++++++++++++++++++

The :class:`~yt.visualization.volume_rendering.lens.SphericalLens` produces a
cylindrical-spherical projection. Movies rendered in this way can be displayed
as YouTube 360-degree videos (for more information see `the YouTube help:
Upload 360-degree videos `_).
:class:`~yt.visualization.volume_rendering.lens.StereoSphericalLens` is
identical to :class:`~yt.visualization.volume_rendering.lens.SphericalLens`
but it produces two images from nearby camera positions for virtual reality
movies, which can be displayed in head-tracking devices (e.g. Oculus Rift) or
in the mobile YouTube app with Google Cardboard (for more information see
`the YouTube help: Upload virtual reality videos `_). `This virtual reality
video `_ on YouTube is an example produced with
:class:`~yt.visualization.volume_rendering.lens.StereoSphericalLens`. As in
the case of
:class:`~yt.visualization.volume_rendering.lens.StereoPerspectiveLens`, the
difference between the two images can be controlled by changing the value of
:attr:`~yt.visualization.volume_rendering.lens.StereoSphericalLens.disparity`
(see above).

.. _annotated-vr-example:

Annotated Examples
------------------

.. warning:: 3D visualizations can be fun but frustrating! Tuning the
             parameters to both look nice and convey useful scientific
             information can be hard. We've provided information about best
             practices and tried to make the interface easy to use for
             developing nice visualizations, but getting them *just right* is
             often time-consuming. It's usually best to start out simple and
             expand and tweak as needed.

The scene interface provides a modular interface for creating renderings of
arbitrary data sources. As such, manual composition of a scene can require a
bit more work, but we also provide several helper functions that attempt to
create satisfactory default volume renderings.

When the
:func:`~yt.visualization.volume_rendering.volume_rendering.volume_render`
function is called, first an empty
:class:`~yt.visualization.volume_rendering.scene.Scene` object is created.
Next, a :class:`~yt.visualization.volume_rendering.api.VolumeSource` object is
created, which decomposes the volume elements into a tree structure to provide
back-to-front rendering of fixed-resolution blocks of data. (If the volume
elements are grids, this uses a
:class:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree` object.)
When the :class:`~yt.visualization.volume_rendering.api.VolumeSource` object
is created, by default it will create a transfer function based on the extrema
of the field that you are rendering. The transfer function describes how rays
that pass through the domain are "transferred" and thus how brightness and
color correlate to the field values. Modifying and adjusting the transfer
function is the primary way to modify the appearance of an image based on
volumes.

Once the basic set of objects to be rendered is constructed (e.g.
:class:`~yt.visualization.volume_rendering.scene.Scene`,
:class:`~yt.visualization.volume_rendering.render_source.RenderSource`, and
:class:`~yt.visualization.volume_rendering.api.VolumeSource` objects), a
:class:`~yt.visualization.volume_rendering.camera.Camera` is created and added
to the scene. By default the creation of a camera also creates a
plane-parallel :class:`~yt.visualization.volume_rendering.lens.Lens` object.
The analog to a real camera is intentional -- a camera can take a picture of a
scene from a particular point in time and space, but different lenses can be
swapped in and out. For example, this might include a fisheye lens, a
spherical lens, or some other method of describing the direction and origin of
rays for rendering. Once the camera is added to the scene object, we call the
main methods of the :class:`~yt.visualization.volume_rendering.scene.Scene`
class, :meth:`~yt.visualization.volume_rendering.scene.Scene.render` and
:meth:`~yt.visualization.volume_rendering.scene.Scene.save`. When rendered,
the scene will loop through all of the
:class:`~yt.visualization.volume_rendering.render_source.RenderSource` objects
that have been added and integrate the radiative transfer equations through
the volume. Finally, the image and the scene object are returned to the user.
An example script that uses the high-level
:func:`~yt.visualization.volume_rendering.volume_rendering.volume_render`
function to quickly set up defaults is:

.. python-script::

   import yt

   # load the data
   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")

   # volume render the ("gas", "density") field, and save the resulting image
   im, sc = yt.volume_render(ds, ("gas", "density"), fname="rendering.png")

   # im is the image array generated. it is also saved to 'rendering.png'.
   # sc is an instance of a Scene object, which allows you to further refine
   # your renderings and later save them.

   # Let's zoom in and take a closer look
   sc.camera.width = (300, "kpc")
   sc.camera.switch_orientation()

   # Save the zoomed in rendering
   sc.save("zoomed_rendering.png")

Alternatively, if you don't want to immediately generate an image of your
volume rendering, and you just want access to the default scene object, you
can skip the expensive operation of rendering by just running the
:func:`~yt.visualization.volume_rendering.volume_rendering.create_scene`
function in lieu of the
:func:`~yt.visualization.volume_rendering.volume_rendering.volume_render`
function. Example:
.. python-script::

   import numpy as np

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")

   sc = yt.create_scene(ds, ("gas", "density"))

   source = sc[0]

   source.transfer_function = yt.ColorTransferFunction(
       np.log10((1e-30, 1e-23)),
       grey_opacity=True,
   )


   def linramp(vals, minval, maxval):
       return (vals - vals.min()) / (vals.max() - vals.min())


   source.transfer_function.map_to_colormap(
       np.log10(1e-25), np.log10(8e-24), colormap="cmyt.arbre", scale_func=linramp
   )

   # For this low resolution dataset it's very important to use interpolated
   # vertex centered data to avoid artifacts. For high resolution data this
   # setting may cause a substantial slowdown for marginal visual improvement.
   source.set_use_ghost_zones(True)

   cam = sc.camera

   cam.width = 15 * yt.units.kpc
   cam.focus = ds.domain_center
   cam.normal_vector = [-0.3, -0.3, 1]
   cam.switch_orientation()

   sc.save("rendering.png")

For an in-depth tutorial on how to create a Scene and modify its contents, see
this annotated :doc:`Volume_Rendering_Tutorial`.

.. _volume-rendering-method:

Volume Rendering Method
-----------------------

Direct ray casting through a volume enables the generation of new types of
visualizations and images describing a simulation. yt has the facility to
generate volume renderings by a direct ray casting method. However, the
ability to create volume renderings informed by analysis by other mechanisms
-- for instance, halo location, angular momentum, spectral energy
distributions -- is useful.

The volume rendering in yt follows a relatively straightforward approach.

#. Create a set of transfer functions governing the emission and absorption as
   a function of one or more variables (:math:`f(v) \rightarrow (r,g,b,a)`).
   These can be functions of any field variable, weighted by independent
   fields, and even weighted by other evaluated transfer functions. (See
   :ref:`transfer_functions`.)
#. Partition all chunks into non-overlapping, fully domain-tiling "bricks."
   Each of these "bricks" contains the finest available data at any location.
#. Generate vertex-centered data for all grids in the volume rendered domain.
#. Order the bricks from front-to-back.
#. Construct a plane of rays parallel to the image plane, with initial values
   set to zero and located at the back of the region to be rendered.
#. For every brick, identify which rays intersect. These are then each "cast"
   through the brick.
#. Every cell a ray intersects is sampled 5 times (adjustable by parameter),
   and data values at each sampling point are trilinearly interpolated from
   the vertex-centered data.
#. Each transfer function is evaluated at each sample point. This gives us,
   for each channel, both emission (:math:`j`) and absorption
   (:math:`\alpha`) values.
#. The value for the pixel corresponding to the current ray is updated with
   new values calculated by rectangular integration over the path length:

   :math:`v^{n+1}_{i} = j_{i}\Delta s + (1 - \alpha_{i}\Delta s)v^{n}_{i}`

   where :math:`n` and :math:`n+1` represent the pixel before and after
   passing through a sample, :math:`i` is the color (red, green, blue) and
   :math:`\Delta s` is the path length between samples.
#. Determine whether any additional integration will change the sample value;
   if not, terminate integration. (This reduces integration time when
   rendering front-to-back.)
#. The image is returned to the user:

.. image:: _images/vr_sample.jpg
   :width: 512
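As a concrete, runnable illustration of the rectangular integration update
rule above, here is a minimal sketch in pure NumPy, with made-up emission and
absorption samples for a single ray and a single color channel:

.. code-block:: python

   import numpy as np

   # made-up per-sample emission (j) and absorption (alpha) values
   j = np.array([0.1, 0.5, 0.2])
   alpha = np.array([0.05, 0.3, 0.1])
   delta_s = 0.5  # path length between samples

   v = 0.0  # the ray accumulates intensity starting from zero
   for j_i, alpha_i in zip(j, alpha):
       # v^{n+1} = j * ds + (1 - alpha * ds) * v^n
       v = j_i * delta_s + (1.0 - alpha_i * delta_s) * v
   print(v)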
Parallelism
-----------

yt can utilize both MPI and OpenMP parallelism for volume rendering. Both, and
their combination, are described below.

MPI Parallelization
^^^^^^^^^^^^^^^^^^^

Currently the volume renderer is parallelized using MPI to decompose the
volume by attempting to split up the
:class:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree` in a balanced way. This
has two advantages:

#. The :class:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree` construction is
   parallelized, since each MPI task only needs to know about the part of the
   tree it will traverse.
#. Each MPI task will only read data for the portion of the volume that it has
   been assigned.

Once the :class:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree` has been
constructed, each MPI task begins the rendering phase until all of its bricks
are completed. At that point, each MPI task holds a full image plane; we then
use a tree reduction to construct the final image, using alpha blending to add
the images together at each reduction phase.

Caveats:

#. At this time, the :class:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree`
   can only be decomposed by a power-of-2 number of MPI tasks. If a number of
   tasks that is not a power of 2 is used, the largest power of 2 below that
   number is used, and the remaining cores will be idle. This issue is being
   actively addressed by current development.
#. Each MPI task, currently, holds the entire image plane. Therefore when
   image plane sizes get large (>2048^2), the memory usage can also get large,
   limiting the number of MPI tasks you can use. This is also being addressed
   in current development by using image plane decomposition.

For more information about enabling parallelism, see
:ref:`parallel-computation`.

OpenMP Parallelization
^^^^^^^^^^^^^^^^^^^^^^

The volume rendering is also parallelized using the OpenMP interface in
Cython. While the MPI parallelization is done using domain decomposition, the
OpenMP threading parallelizes the rays intersecting a given brick of data. As
the average brick size relative to the image plane increases, the parallel
efficiency increases.

By default, the volume renderer will use the total number of cores available
on the symmetric multiprocessing (SMP) compute platform. For example, if you
have a shiny new laptop with 8 cores, you'll by default launch 8 OpenMP
threads. The number of threads can be controlled with the ``num_threads``
keyword in :meth:`~yt.visualization.volume_rendering.camera.Camera.snapshot`.
You may also restrict the number of OpenMP threads used by default by
modifying the environment variable ``OMP_NUM_THREADS``.

Running in Hybrid MPI + OpenMP
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The two methods for volume rendering parallelization can be used together to
leverage large supercomputing resources. When choosing how to balance the
number of MPI tasks vs. OpenMP threads, there are a few things to keep in
mind. For these examples, we will assume you are using ``Nmpi`` MPI tasks and
``Nmp`` OpenMP threads, on a total of ``P`` cores. We will assume that the
machine has ``Nnode`` SMP nodes, each with ``cores_per_node`` cores per node.

#. For each MPI task, ``num_threads`` (or ``OMP_NUM_THREADS``) OpenMP threads
   will be used. Therefore you should usually make sure that
   ``Nmpi * Nmp = P``.
#. For simulations with many grids/AMRKDTree bricks, you generally want to
   increase ``Nmpi``.
#. For simulations with large image planes (>2048^2), you generally want to
   decrease ``Nmpi`` and increase ``Nmp``. This is because, currently, each
   MPI task stores the entire image plane, and doing so can approach the
   memory limits of a given SMP node.
#. Please make sure you understand the (super)computer topology in terms of
   the numbers of cores per socket, node, etc. when making these decisions.
#. For many cases when rendering using your laptop/desktop, OpenMP will
   provide a good enough speedup by default that it is preferable to launching
   MPI tasks.

For more information about enabling parallelism, see
:ref:`parallel-computation`.

.. _vr-faq:

Volume Rendering Frequently Asked Questions
-------------------------------------------

.. _opaque_rendering:

Opacity
^^^^^^^

There are currently two models for opacity when rendering a volume, which are
controlled in the ``ColorTransferFunction`` with the keyword
``grey_opacity=False`` (the default) or ``grey_opacity=True``. The first will
act such that for each of the red, green, and blue channels, each channel is
only opaque to itself. This means that if a ray that has some amount of red
then encounters material that emits blue, the red will still exist, and in the
end that pixel will be a combination of blue and red. However, if the
ColorTransferFunction is set up with ``grey_opacity=True``, then blue will be
opaque to red, and only the blue emission will remain.

For an in-depth example, please see the cookbook example on opaque renders
here: :ref:`cookbook-opaque_rendering`.

.. _sigma_clip:

Improving Image Contrast with Sigma Clipping
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If your images appear to be too dark, you can try using the ``sigma_clip``
keyword in the :meth:`~yt.visualization.volume_rendering.scene.Scene.save` or
:func:`~yt.visualization.volume_rendering.volume_rendering.volume_render`
functions. Because the brightness range in an image is scaled to match the
range of emissivity values of the underlying rendering, if you have a few
really high-emissivity points, they will scale the rest of your image to be
quite dark. ``sigma_clip = N`` can address this by removing values that are
more than ``N`` standard deviations brighter than the mean of your image.
Typically, a choice of 4 to 6 will help dramatically with your resulting
image. See the cookbook recipe :ref:`cookbook-sigma_clip` for a demonstration.

.. _when_to_render:

When to Render
^^^^^^^^^^^^^^

The rendering of a scene is the most computationally demanding step in
creating a final image, and there are a number of ways to control at which
point a scene is actually rendered. The default behavior of the
:meth:`~yt.visualization.volume_rendering.scene.Scene.save` function includes
a call to :meth:`~yt.visualization.volume_rendering.scene.Scene.render`. This
means that in most cases (including the above examples), after you set up your
scene and volumes, you can simply call
:meth:`~yt.visualization.volume_rendering.scene.Scene.save` without first
calling :meth:`~yt.visualization.volume_rendering.scene.Scene.render`. If you
wish to save the most recently rendered image without rendering again, set
``render=False`` in the call to
:meth:`~yt.visualization.volume_rendering.scene.Scene.save`. Cases where you
may wish to use ``render=False`` include saving images at different
``sigma_clip`` values (see :ref:`cookbook-sigma_clip`) or when saving an image
that has already been rendered in a Jupyter notebook using
:meth:`~yt.visualization.volume_rendering.scene.Scene.show`. Changes to the
scene, including adding sources, modifying transfer functions, or adjusting
camera settings, generally require rendering again.
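For example, here is a minimal sketch (using the ``IsolatedGalaxy`` sample
dataset with otherwise default settings) of the ``render=False`` pattern
described above, rendering once and then saving at several ``sigma_clip``
values:

.. code-block:: python

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   sc = yt.create_scene(ds, ("gas", "density"))

   sc.render()  # cast the rays once; this is the expensive step

   for clip in (2.0, 4.0, 6.0):
       # reuse the most recently rendered image at different contrast levels
       sc.save(f"rendering_clip_{clip}.png", sigma_clip=clip, render=False)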
yt-4.4.0/doc/source/yt3differences.rst

.. _yt3differences:

What's New and Different in yt 3.0?
===================================

If you are new to yt, welcome! If you're coming to yt 3.0 from an older
version, however, there may be a few things in this version that are different
than what you are used to. We have tried to build compatibility layers to
minimize disruption to existing scripts, but necessarily things will be
different in some ways.

.. contents::
   :depth: 2
   :local:
   :backlinks: none

Updating to yt 3.0 from Old Versions (and going back)
-----------------------------------------------------

First off, you need to update your version of yt to yt 3.0. If you're
installing yt for the first time, please visit :ref:`installing-yt`. If you
already have a version of yt installed, you should just need one command:

.. code-block:: bash

   $ yt update

This will update yt to the most recent version and rebuild the source base. If
you installed using the installer script, it will ensure you have all of the
latest dependencies as well. This step may take a few minutes. To test that yt
is correctly installed, try:

.. code-block:: bash

   $ python -c "import yt"

.. _transitioning-to-3.0:

Converting Old Scripts to Work with yt 3.0
------------------------------------------

After installing yt-3.0, you'll want to change your old scripts in a few key
ways. After accounting for the changes described in the list below, try
running your script. If it still fails, the Python tracebacks are fairly
descriptive, and it may be possible to deduce what remaining changes are
necessary. If you continue to have trouble, please don't hesitate to
:ref:`request help `.

The list below is arranged in order of most important changes to least
important changes.

* **Replace** ``from yt.mods import *`` **with** ``import yt`` **and prepend
  yt classes and functions with** ``yt.``

  We have reworked yt's import system so that most commonly-used yt functions
  and classes live in the top-level yt namespace. That means you can now
  import yt with ``import yt``, load a dataset with ``ds = yt.load(filename)``
  and create a plot with ``yt.SlicePlot``. See :ref:`api-reference` for a full
  API listing. You can still import using ``from yt.mods import *`` to get a
  pylab-like experience.

* **Unit conversions are different**

  Fields and metadata for data objects and datasets now have units. The unit
  system keeps you from making weird things like ``ergs`` + ``g`` and can
  handle things like ``g`` + ``kg`` or ``kg*m/s**2 == Newton``. See
  :ref:`units` and :ref:`conversion-factors` for more information.

* **Change field names from CamelCase to lower_case_with_underscores**

  Previously, yt would use "Enzo-isms" for field names. We now very
  specifically define fields as lowercase with underscores. For instance, what
  used to be ``VelocityMagnitude`` would now be ``velocity_magnitude``. Axis
  names are now at the *end* of field names, not the beginning.
  ``x-velocity`` is now ``velocity_x``. For a full list of all of the fields,
  see :ref:`field-list`.

* **Full field names have two parts now**

  Fields can be accessed by a single name, but they are named internally as
  ``(field_type, field_name)`` for more explicit designation which can address
  particles, deposited fluid quantities, and more. See :ref:`fields`.
* **Code-specific field names can be accessed by the name defined by the
  external code**

  Mesh fields that exist on-disk in an output file can be read in using
  whatever name is used by the output file. On-disk fields are always returned
  in code units. The full field name will be ``(code_name, field_name)``. See
  :ref:`field-list`.

* **Particle fields are now more obviously different than mesh fields**

  Particle fields on-disk will also be in code units, and will be named
  ``(particle_type, field_name)``. If there is only one particle type in the
  output file, all particles will use ``io`` as the particle type. See
  :ref:`fields`.

* **Change** ``pf`` **to** ``ds``

  The objects we used to refer to as "parameter files" we now refer to as
  datasets. Instead of ``pf``, we now suggest you use ``ds`` to refer to an
  object returned by ``yt.load``.

* **Remove any references to** ``pf.h`` **with** ``ds``

  You can now create data objects without referring to the hierarchy. Instead
  of ``pf.h.all_data()``, you can now say ``ds.all_data()``. The hierarchy is
  still there, but it is now called the index: ``ds.index``.

* **Use** ``yt.enable_parallelism()`` **to make a script parallel-compatible**

  Command line arguments are only parsed when yt is imported using
  ``from yt.mods import *``. Since command line arguments are not parsed when
  using ``import yt``, it is no longer necessary to specify ``--parallel`` at
  the command line when running a parallel computation. Use
  ``yt.enable_parallelism()`` in your script instead. See
  :ref:`parallel-computation` for more details.

* **Change your derived quantities to the new syntax**

  Derived quantities have been reworked. You can now do
  ``dd.quantities.total_mass()`` instead of ``dd.quantities['TotalMass']()``.
  See :ref:`derived-quantities`.

* **Change your method of accessing the** ``grids`` **attribute**

  The ``grids`` attribute of data objects no longer exists. To get this
  information, you have to use spatial chunking and then access them. See
  :ref:`here <grid-chunking>` for an example. For datasets that use grid
  hierarchies, you can also access the grids for the entire dataset via
  ``ds.index.grids``. This attribute is not defined for particle or octree
  datasets.

Cool New Things
---------------

Lots of new things have been added in yt 3.0! Below we summarize a handful of
these.

Lots of New Codes are Supported
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Because of the additions of **Octrees**, **Particle Deposition**, and
**Irregular Grids**, we now support a bunch more codes. See
:ref:`code-support` for more information.

Octrees
^^^^^^^

Octree datasets such as RAMSES, ART and ARTIO are now supported -- without any
regridding! We have a native, lightweight octree indexing system.

Irregular Grids
^^^^^^^^^^^^^^^

MOAB Hex8 format is supported, and non-regular grids can be added relatively
easily.

Better Particle Support
^^^^^^^^^^^^^^^^^^^^^^^

Particle Codes and SPH
""""""""""""""""""""""

yt 3.0 features particle selection, smoothing, and deposition. This utilizes a
combination of coarse-grained indexing and octree indexing for particles.

Particle Deposition
"""""""""""""""""""

In yt-3.0, we provide mechanisms for describing and creating fields generated
by depositing particles into one or a handful of zones. This could include
deposited mass or density, average values, and the like. For instance, the
total stellar mass in some region can be deposited and averaged.

Particle Filters and Unions
"""""""""""""""""""""""""""

Throughout yt, the notion of "particle types" has been more deeply embedded.
These particle types can be dynamically defined at runtime, for instance by
taking a filter of a given type or the union of several different types. This
might be, for instance, defining a new type called ``young_stars`` that is a
filtering of ``star_age`` to be less than a given threshold, or ``fast`` that
filters based on the velocity of a particle. Unions could be the joining of
multiple types of particles -- the default union of which is ``all``,
representing all particle types in the simulation.

Units
^^^^^

yt now has a unit system. This is one of the bigger features, and in essence
it means that you can convert units between anything. In practice, it makes it
much easier to define fields and convert data between different unit systems.
See :ref:`units` for more information.

Non-Cartesian Coordinates
^^^^^^^^^^^^^^^^^^^^^^^^^

Preliminary support for non-Cartesian coordinates has been added. We expect
this to be considerably solidified and expanded in yt 3.1.

Reworked Import System
^^^^^^^^^^^^^^^^^^^^^^

It's now possible to import all yt functionality using ``import yt``. Rather
than using ``from yt.mods import *``, we suggest using ``import yt`` in new
scripts. Most commonly used yt functionality is attached to the ``yt`` module.
Load a dataset with ``yt.load()``, create a phase plot using ``yt.PhasePlot``,
and much more; see :ref:`the api docs <api-reference>` to learn more about
what's in the ``yt`` namespace, or just use tab completion in IPython:
``yt.<tab>``. It's still possible to use ``from yt.mods import *`` to create
an interactive pylab-like experience. Importing yt this way has several side
effects, most notably that command line argument parsing and other startup
tasks will run.

API Changes
-----------

These are the items that have already changed in *user-facing* API:

Field Naming
^^^^^^^^^^^^

.. warning:: Field naming is probably the single biggest change you will
             encounter in yt 3.0.

Fields can be accessed by their short names, but yt now has an explicit
mechanism of distinguishing between field types and particle types. This is
expressed through a two-key description. For example::

   my_object["gas", "density"]

will return the gas field density. In this example "gas" is the field type and
"density" is the field name. Field types are a bit like a namespace. This
system extends to particle types as well. By default you do *not* need to use
the field "type" key, but in case of ambiguity it will utilize the default
value in its place. This should therefore be identical to::

   my_object["density"]

To enable a compatibility layer, on the dataset you simply need to call the
method ``setup_deprecated_fields`` like so:

.. code-block:: python

   ds = yt.load("MyData")
   ds.setup_deprecated_fields()

This sets up aliases from the old names to the new. See :ref:`fields` and
:ref:`field-list` for more information.

Units of Fields
^^^^^^^^^^^^^^^

Fields are now all subclasses of NumPy arrays, the ``YTArray``, which carries
along with it units. This means that if you want to manipulate fields, you
have to modify them in a unitful way. See :ref:`units`.

Parameter Files are Now Datasets
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Wherever possible, we have attempted to replace the term "parameter file"
(i.e., ``pf``) with the term "dataset." In yt-3.0, all of the ``pf``
attributes of objects are now ``ds`` or ``dataset`` attributes.

Hierarchy is Now Index
^^^^^^^^^^^^^^^^^^^^^^

The hierarchy object (``pf.h``) is now referred to as an index (``ds.index``).
It is no longer necessary to directly refer to the ``index`` as often, since
data objects are now attached to the ``dataset`` object. Before, you would say
``pf.h.sphere()``; now you can say ``ds.sphere()``.

New derived quantities interface
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Derived quantities can now be accessed via a function that hangs off of the
``quantities`` attribute of data objects. Instead of
``dd.quantities['TotalMass']()``, you can now use
``dd.quantities.total_mass()`` to do the same thing. All derived quantities
can be accessed via a function that hangs off of the ``quantities`` attribute
of data objects. Any derived quantities that *always* returned lists (like
``Extrema``, which would return a list even if you only ask for one field) now
only return a single result if you only ask for one field. Results for
particle and mesh fields will also be returned separately. See
:ref:`derived-quantities` for more information.

Field Info
^^^^^^^^^^

In previous versions of yt, the ``dataset`` object (what we used to call a
parameter file) had a ``field_info`` attribute which was a dictionary leading
to derived field definitions. At the present time, because of the field naming
changes (i.e., access-by-tuple) it is better to utilize the function
``_get_field_info`` than to directly access the ``field_info`` dictionary. For
example::

   finfo = ds._get_field_info("gas", "density")

This function respects the special "field type" ``unknown`` and will search
all field types for the field name.

Projection Argument Order
^^^^^^^^^^^^^^^^^^^^^^^^^

Previously, projections were inconsistent with the other data objects. (The
API for Plot Windows is the same.) The argument order is now ``field`` then
``axis`` as seen here:
:class:`~yt.data_objects.construction_data_containers.YTQuadTreeProj`.

Field Parameters
^^^^^^^^^^^^^^^^

All data objects now accept an explicit list of ``field_parameters`` rather
than accepting ``kwargs`` and supplying them to field parameters. See
:ref:`field_parameters`.

Object Renaming
^^^^^^^^^^^^^^^

Nearly all internal objects have been renamed. Typically this means either
removing ``AMR`` from the prefix or replacing it with ``YT``. All names of
objects remain the same for the purposes of selecting data and creating them;
i.e., ``sphere`` objects are still called ``sphere`` -- you can access or
create one via ``ds.sphere``. For a detailed description and index see
:ref:`available-objects`.

Boolean Regions
^^^^^^^^^^^^^^^

Boolean regions are not yet implemented in yt 3.0.

.. _grid-chunking:

Grids
^^^^^

It used to be that one could get access to the grids that belonged to a data
object. Because we no longer have just grid-based data in yt, this attribute
does not make sense. If you need to determine which grids contribute to a
given object, you can either query the ``grid_indices`` field, or mandate
spatial chunking like so:

.. code-block:: python

   for chunk in obj.chunks([], "spatial"):
       for grid in chunk._current_chunk.objs:
           print(grid)

This will "spatially" chunk the ``obj`` object and print out all the grids
included.

Halo Catalogs
^^^^^^^^^^^^^

The ``Halo Profiler`` infrastructure has been fundamentally rewritten and now
exists using the ``Halo Catalog`` framework. See :ref:`halo-analysis`.

Analysis Modules
^^^^^^^^^^^^^^^^

While we're trying to port over all of the old analysis modules, we have not
gotten all of them working in 3.0 yet. The docs pages for those modules
not-yet-functioning are clearly marked.
yt-4.4.0/doc/source/yt4differences.rst

.. _yt4differences:

What's New and Different in yt 4.0?
===================================

If you are new to yt, welcome! If you're coming to yt 4.0 from an older
version, however, there may be a few things in this version that are different
than what you are used to. We have tried to build compatibility layers to
minimize disruption to existing scripts, but necessarily things will be
different in some ways.

.. contents::
   :depth: 2
   :local:
   :backlinks: none

Updating to yt 4.0 from Old Versions (and going back)
-----------------------------------------------------

.. _transitioning-to-4.0:

Converting Old Scripts to Work with yt 4.0
------------------------------------------

After installing yt-4.0, you'll want to change your old scripts in a few key
ways. After accounting for the changes described in the list below, try
running your script. If it still fails, the Python tracebacks should be fairly
descriptive and it may be possible to deduce what remaining changes are
necessary. If you continue to have trouble, please don't hesitate to
:ref:`request help `.

The list below is arranged in order of most to least important changes.

* **Fields should be specified as tuples not as strings**

  In the past, you could specify fields as strings like ``"density"``, but
  with the growth of yt and its many derived fields, there can sometimes be
  overlapping field names (e.g., ``("gas", "density")`` and
  ``("PartType0", "density")``), where yt doesn't know which to use. To remove
  any ambiguity, it is now strongly recommended to explicitly specify the full
  tuple form of all fields. Just search for all field accesses in your
  scripts, and replace strings with tuples (e.g. replace ``"a"`` with
  ``("gas", "a")``). There is a compatibility rule in yt-4.0 to allow strings
  to continue to work until yt-4.1, but you may get unexpected behavior. Any
  field specifications that are ambiguous will throw an error in future
  versions of yt. See our :ref:`fields`, and
  :ref:`available field list <field-list>` documentation for more information.

* **Use Newer Versions of Python**

  The yt-4.0 release will be the final release of yt to support Python 3.6.
  Starting with yt-4.1, Python 3.6 will no longer be supported, so please
  start using 3.7+ as soon as possible.

* **Particle-based datasets no longer accept n_ref and over_refine_factor**

  One of the major upgrades in yt-4 is native treatment of particle-based
  datasets. This is in contrast to previous yt behavior, which loaded
  particle-based datasets as octrees, which could then be treated like
  grid-based datasets. In order to define the octrees, users were required to
  specify ``n_ref`` and ``over_refine_factor`` values at load time. Please
  remove any reference to ``n_ref`` and ``over_refine_factor`` in your
  scripts.

* **Neutral ion fields changing format**

  In previous versions, neutral ion fields were specified as
  ``ELEMENT_number_density`` (e.g., ``H_number_density`` to represent H I
  number density). This led to a lot of confusion, because some people assumed
  these fields were the total hydrogen density, not neutral hydrogen density.
  In yt-4.0, we have resolved this issue by explicitly calling total hydrogen
  number density ``H_nuclei_density`` and neutral hydrogen density
  ``H_p0_number_density`` (where ``p0`` refers to plus 0 charge).
  This syntax follows the rule for other ions: H II = ``H_p1`` = ionized
  hydrogen. Change your scripts accordingly. See :ref:`species-fields` for
  more information.

* **Change in energy and momentum field names**

  Fields representing energy and momentum quantities are now given names which
  reflect their dimensionality. For example, the
  ``("gas", "kinetic_energy")`` field was actually a field for kinetic energy
  density, and so it has been renamed to
  ``("gas", "kinetic_energy_density")``. The old name still exists as an alias
  as of yt v4.0.0, but it will be removed in yt v4.1.0. See the next item
  below for more information. Other examples include
  ``("gas", "specific_thermal_energy")`` for thermal energy per unit mass, and
  ``("gas", "momentum_density_x")`` for the x-axis component of momentum
  density. See :ref:`efields` for more information.

* **Deprecated field names**

  Certain field names are deprecated within yt v4.0.x and removed in yt v4.1.
  For example, ``("gas", "kinetic_energy")`` has been renamed to
  ``("gas", "kinetic_energy_density")``, though the former name has been added
  as an alias. Other fields, such as
  ``("gas", "cylindrical_tangential_velocity_absolute")``, are being removed
  entirely. When the deprecated field names are used for the first time in a
  session, a warning will be logged, so it is advisable to set your logging
  level to ``WARNING`` (``yt.set_log_level("warning")``) at a minimum to catch
  these. See :ref:`faq-log-level` for more information on setting your log
  level and :ref:`available-fields` to see all available fields.

* ``cmocean`` **colormaps need prefixing**

  yt used to automatically load and register external colormaps from the
  ``cmocean`` package unprefixed (e.g., ``set_cmap(FIELD, "balance")``). This
  became unsustainable with the 3.4 release of Matplotlib, in which colormaps
  with colliding names raise errors. The fix is to explicitly import the
  ``cmocean`` module and prefix ``cmocean`` colormaps (like ``balance``) with
  ``cmo.`` (e.g., ``cmo.balance``). Note that this solution works with any
  yt-supported version of Matplotlib, but is not backward compatible with
  earlier versions of yt.

* Position and velocity fields now default to using linear scaling in profiles
  and phase plots, whereas previously the behavior was determined by whether
  the dataset was particle- or grid-based. Efforts have been made to
  standardize the treatment of other fields in profile and phase plots for
  particle and grid datasets.

Important New Aliases
^^^^^^^^^^^^^^^^^^^^^

With the advent of supporting SPH data at the particle level instead of
smoothing onto an octree (see below), a new alias for both gas particle masses
and cell masses has been created: ``("gas", "mass")``, which aliases to
``("gas", "cell_mass")`` for grid-based frontends and to the gas particle mass
for SPH frontends. In a number of places in yt, code that used
``("gas", "cell_mass")`` has been replaced by ``("gas", "mass")``. Since the
latter is an alias for the former, old scripts which use
``("gas", "cell_mass")`` should not break.
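As a quick check of this alias, here is a minimal sketch (using the
grid-based ``IsolatedGalaxy`` sample dataset as an assumed example):

.. code-block:: python

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   ad = ds.all_data()

   # on a grid-based frontend, ("gas", "mass") aliases ("gas", "cell_mass"),
   # so both spellings return the same array
   assert (ad["gas", "mass"] == ad["gas", "cell_mass"]).all()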
Deprecations
^^^^^^^^^^^^

The following methods and method arguments are deprecated as of yt 4.0 and
will be removed in yt 4.1:

* :meth:`~yt.visualization.plot_window.PlotWindow.set_window_size` is
  deprecated in favor of
  :meth:`~yt.visualization.plot_container.PlotContainer.set_figure_size`
* :meth:`~yt.visualization.eps_writer.return_cmap` is deprecated in favor of
  :meth:`~yt.visualization.eps_writer.return_colormap`
* :meth:`~yt.data_objects.derived_quantities.WeightedVariance` is deprecated
  in favor of
  :meth:`~yt.data_objects.derived_quantities.WeightedStandardDeviation`
* :meth:`~yt.visualization.plot_window.PWViewerMPL.annotate_clear` is
  deprecated in favor of
  :meth:`~yt.visualization.plot_window.PWViewerMPL.clear_annotations`
* :meth:`~yt.visualization.color_maps.add_cmap` is deprecated in favor of
  :meth:`~yt.visualization.color_maps.add_colormap`
* :meth:`~yt.loaders.simulation` is deprecated in favor of
  :meth:`~yt.loaders.load_simulation`
* :meth:`~yt.data_objects.index_subobjects.octree_subset.OctreeSubset.get_vertex_centered_data`
  now takes a list of fields as input; passing a single field is deprecated
* manually updating the ``periodicity`` attribute of a
  :class:`~yt.data_objects.static_output.Dataset` object is deprecated. Use
  the :meth:`~yt.data_objects.static_output.Dataset.force_periodicity` method
  if you need to force periodicity to ``True`` or ``False`` along all axes.
* the :meth:`~yt.data_objects.static_output.Dataset.add_smoothed_particle_field`
  method is deprecated and already has no effect in yt 4.0. See
  :ref:`sph-data`
* the :meth:`~yt.data_objects.static_output.Dataset.add_gradient_fields`
  method used to accept an ``input_field`` keyword argument, now deprecated in
  favor of ``fields``
* :meth:`~yt.data_objects.time_series.DatasetSeries.from_filenames` is
  deprecated because its functionality is now included in the basic
  ``__init__`` method. Use
  :class:`~yt.data_objects.time_series.DatasetSeries` directly.
* the ``particle_type`` keyword argument of ``yt.add_field()``
  (:meth:`~yt.fields.field_info_container.FieldInfoContainer.add_field`) and
  ``ds.add_field()``
  (:meth:`~yt.data_objects.static_output.Dataset.add_field`) is now deprecated
  in favor of the ``sampling_type`` keyword argument.
* the :meth:`~yt.fields.particle_fields.add_volume_weighted_smoothed_field`
  method is deprecated and already has no effect in yt 4.0. See
  :ref:`sph-data`
* the :meth:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree.locate_brick`
  method is deprecated in favor of, and is now an alias for,
  :meth:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree.locate_node`
* the :class:`~yt.utilities.exceptions.YTOutputNotIdentified` error is a
  deprecated alias for
  :class:`~yt.utilities.exceptions.YTUnidentifiedDataType`
* the ``limits`` argument of
  :meth:`~yt.visualization.image_writer.write_projection` is deprecated in
  favor of ``vmin`` and ``vmax``
* :meth:`~yt.visualization.plot_container.ImagePlotContainer.set_cbar_minorticks`
  is a deprecated alias for
  :meth:`~yt.visualization.plot_container.ImagePlotContainer.set_colorbar_minorticks`
* the ``axis`` argument of :meth:`yt.visualization.plot_window.SlicePlot` is a
  deprecated alias for the ``normal`` argument
* the old configuration file ``ytrc`` is deprecated in favor of the new
  ``yt.toml`` format. In yt 4.0, you'll get a warning every time you import yt
  if you're still using the old configuration file, which will instruct you to
  invoke the yt command line interface to convert automatically to the new
  format.
* the ``load_field_plugins`` parameter is deprecated from the configuration
  file (note that it is already not used as of yt 4.0)

Cool New Things
---------------

Changes for Working with SPH Data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In yt-3.0 most user-facing operations on SPH data were produced by
interpolating SPH data onto a volume-filling octree mesh. Historically this
was easier to implement when support for SPH data was added to yt, as it
allowed re-using a lot of the existing infrastructure. This had some
downsides: because the octree was a single, global object, the memory and CPU
overhead of smoothing SPH data onto the octree could be prohibitive on
particle datasets produced by large simulations. Constructing the octree
during the initial indexing phase also required each particle (albeit, in a
64-bit integer) to be present in memory simultaneously for a sorting
operation, which was memory prohibitive. Visualizations of slices and
projections produced by yt using the default settings were also somewhat
blocky, since by default we used a relatively coarse octree to preserve
memory.

In yt-4.0 this has all changed! Over the past two years, Nathan Goldbaum,
Meagan Lang and Matt Turk implemented a new approach for handling I/O of
particle data, based on storing compressed bitmaps containing Morton indices
instead of an in-memory octree. This new capability means that the global
octree index is no longer necessary to enable I/O chunking and spatial
indexing of particle data in yt.

The new I/O method has opened up a new way of dealing with the particle data
and in particular, SPH data.

.. _sph-data:

Scatter and Gather approach for SPH data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned, previously operations such as slices, projections and arbitrary
grids would smooth the particle data onto the global octree. As this is no
longer used, a different approach was required to visualize the SPH data.
Using SPLASH as inspiration, SPH smoothing pixelization operations were
created using smoothing operations via "scatter" and "gather" approaches. We
estimate the contributions of a particle to a single pixel by considering the
point at the centre of the pixel and using the standard SPH smoothing formula.
The heavy lifting in these functions is undertaken by Cython functions.

It is now possible to generate slice plots, projection plots, covering grids
and arbitrary grids of smoothed quantities using these operations. The
following code demonstrates how this could be achieved, using the scatter
method:

.. code-block:: python

   import yt

   ds = yt.load("snapshot_033/snap_033.0.hdf5")

   plot = yt.SlicePlot(ds, 2, ("gas", "density"))
   plot.save()

   plot = yt.ProjectionPlot(ds, 2, ("gas", "density"))
   plot.save()

   arbitrary_grid = ds.arbitrary_grid(
       [0.0, 0.0, 0.0], [25, 25, 25], dims=[16, 16, 16]
   )
   ag_density = arbitrary_grid["gas", "density"]

   covering_grid = ds.covering_grid(4, 0, 16)
   cg_density = covering_grid["gas", "density"]

In the above example the ``covering_grid`` and the ``arbitrary_grid`` will
return the same data. In fact, these containers are very similar but provide a
slightly different API.

The above code can be modified to use the gather approach by changing a global
setting for the dataset. This can be achieved with
``ds.sph_smoothing_style = "gather"``. So far, the gather approach is not
supported for projections.

The default behaviour for SPH interpolation is that the values are normalized
inline with Eq. 9 in `SPLASH, Price (2009) `_.
This can be disabled with ``ds.use_sph_normalization = False``. This will
disable the normalization for all future interpolations.

The gather approach requires finding nearest neighbors using the KDTree. The
first call will generate a KDTree for the entire dataset, which will be stored
in a sidecar file. This will be loaded whenever necessary.

Off-Axis Projection for SPH Data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``OffAxisProjectionPlot`` class now supports SPH projection plots. The
following is a code example:

.. code-block:: python

   import yt

   ds = yt.load("Data/GadgetDiskGalaxy/snapshot_200.hdf5")

   smoothing_field = ("gas", "density")
   _, center = ds.find_max(smoothing_field)
   sp = ds.sphere(center, (10, "kpc"))
   normal_vector = sp.quantities.angular_momentum_vector()

   prj = yt.OffAxisProjectionPlot(
       ds, normal_vector, smoothing_field, center, (20, "kpc")
   )
   prj.save()

Smoothing Data onto an Octree
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Whilst the move away from the global octree is a promising one in terms of
performance and dealing with SPH data in a more intuitive manner, it does
remove a useful feature. We are aware that many users will have older scripts
which take advantage of the global octree. As such, we have added support to
smooth SPH data onto an octree when desired by the user. The new octree is
designed to give results consistent with those of the previous octree, but it
takes advantage of the scatter and gather machinery also added.

.. code-block:: python

   import numpy as np

   import yt

   ds = yt.load("GadgetDiskGalaxy/snapshot_200.hdf5")

   left = np.array([0, 0, 0], dtype="float64")
   right = np.array([64000, 64000, 64000], dtype="float64")

   # generate an octree
   octree = ds.octree(left, right, n_ref=64)

   # Scatter deposition is the default now, and thus this will print scatter
   print(octree.sph_smoothing_style)

   # the density will be calculated using SPH scatter
   density = octree["PartType0", "density"]

   # this will return the x positions of the octs
   x = octree["index", "x"]

The above code can be modified to use the gather approach by setting
``ds.sph_smoothing_style = "gather"`` before any field access. The octree just
uses the smoothing style and number of neighbors defined by the dataset.

The octree implementation is very simple. It uses a recursive algorithm to
build the tree depth-first, in a manner consistent with the results from yt-3.
Depth-first search (DFS) means that the tree starts refining at the root node
(the largest node, which contains every particle) and refines as far as
possible along each branch before backtracking.

.. _yt-units-is-now-unyt:

``yt.units`` Is Now a Wrapper for ``unyt``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

We have extracted ``yt.units`` into ``unyt``, its own library that you can
install separately from yt, from ``pypi`` and ``conda-forge``. You can find
out more about using ``unyt`` in `its documentation `_ and in `a paper in the
Journal of Open Source Software `_.

From the perspective of a user of yt, very little should change. While things
in ``unyt`` have different names -- for example ``YTArray`` is now called
``unyt_array`` -- we have provided wrappers in ``yt.units`` so imports in your
old scripts should continue to work without issue. If you have any old scripts
that don't work due to issues with how yt is using ``unyt`` or units issues in
general, please let us know by `filing an issue on GitHub `_.

Moving ``unyt`` into its own library has made it much easier to add some cool
new features, which we detail below.
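As a quick compatibility check, here is a minimal sketch (assuming both yt 4
and, therefore, ``unyt`` are installed) showing that the old ``YTArray``
import still works and is now backed by ``unyt``:

.. code-block:: python

   from unyt import unyt_array

   from yt.units.yt_array import YTArray

   a = YTArray([1, 2, 3], "km")

   # YTArray is now a thin wrapper around unyt's unyt_array
   assert isinstance(a, unyt_array)
   print(a.to("m"))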
``ds.units``
~~~~~~~~~~~~

Each dataset now has a set of unit symbols and physical constants associated
with it, allowing easier customization and smoother interaction, especially
in workflows that need to use code units or cosmological units. The
``ds.units`` object has a large number of attributes corresponding to the
names of units and physical constants. All units known to the dataset will be
available, including custom units. In situations where you might have used
``ds.arr`` or ``ds.quan`` before, you can now safely use ``ds.units``:

>>> ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
>>> u = ds.units
>>> ad = ds.all_data()
>>> data = ad['Enzo', 'Density']
>>> data + 12*u.code_mass/u.code_length**3
unyt_array([1.21784693e+01, 1.21789148e+01, 1.21788494e+01, ...,
            4.08936836e+04, 5.78006836e+04, 3.97766906e+05],
           'code_mass/code_length**3')
>>> data + .0001*u.mh/u.cm**3
unyt_array([6.07964513e+01, 6.07968968e+01, 6.07968314e+01, ...,
            4.09423016e+04, 5.78493016e+04, 3.97815524e+05],
           'code_mass/code_length**3')

Automatic Unit Simplification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Often an operation will result in a unit expression that can be simplified by
cancelling pairs of factors. Before yt 4.0, these pairs of factors were only
cancelled if the same unit appeared in both the numerator and denominator of
an expression. Now, all pairs of factors that have inverse dimensions are
cancelled, and the appropriate scaling factor is incorporated into the
result. For example, ``Hz`` and ``s`` will now appropriately be recognized as
inverses:

>>> from yt.units import Hz, s
>>> frequency = 60*Hz
>>> time = 60*s
>>> frequency*time
unyt_quantity(3600, '(dimensionless)')

Similar simplifications happen even if the units aren't reciprocals of each
other; for example, here ``hour`` and ``minute`` automatically cancel each
other:

>>> from yt.units import erg, minute, hour
>>> power = [20, 40, 80] * erg / minute
>>> elapsed_time = 3*hour
>>> print(power*elapsed_time)
[ 3600.  7200. 14400.] erg

Alternate Unit Name Resolution
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It's now possible to use a number of common alternate spellings for unit
names; if ``unyt`` knows about an alternate spelling, it will automatically
be resolved to the canonical name. For example, it's now possible to do
things like this:

>>> import yt.units as u
>>> d = 20*u.mile
>>> d.to('km')
unyt_quantity(32.18688, 'km')
>>> d.to('kilometer')
unyt_quantity(32.18688, 'km')
>>> d.to('kilometre')
unyt_quantity(32.18688, 'km')

You can also use alternate unit names in more complex algebraic unit
expressions:

>>> v = d / (20*u.minute)
>>> v.to('kilometre/hour')
unyt_quantity(96.56064, 'km/hr')

In this example the common British spelling ``"kilometre"`` is resolved to
``"km"`` and ``"hour"`` is resolved to ``"hr"``.

Field-Specific Configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can now set configuration values on a per-field basis. For instance, this
means that if you always want a particular colormap associated with a
particular field, you can do so! This is documented under
:ref:`per-field-plotconfig`, and was added in `PR 1931 `_.

New Method for Accessing Sample Datasets
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There is now a function, ``load_sample()``, that allows the user to
automatically load sample data from the yt hub in a local yt session.
Previously, users would have to explicitly download these data directly from
`https://yt-project.org/data `_, unpack them, and load them into a yt
session; now all of this occurs from within a Python session, as the sketch
below demonstrates.
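Here is a minimal sketch of the new workflow. ``IsolatedGalaxy`` is one of
the sample datasets hosted on https://yt-project.org/data; the first call
downloads and caches the data locally (via the optional ``pooch``
dependency), and subsequent calls load straight from the cache:

.. code-block:: python

    import yt

    # First call downloads the archive and caches it locally;
    # later calls reuse the cached copy.
    ds = yt.load_sample("IsolatedGalaxy")

    yt.SlicePlot(ds, "z", ("gas", "density")).save()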
For more information see: :ref:`Loading Sample Data `

Some Widgets
^^^^^^^^^^^^

In yt, we now have some simple display wrappers for objects when you are
running in a Jupyter environment with the `ipywidgets `_ package installed.
For instance, the ``ds.fields`` object will now display field information in
an interactive widget, and three-element unyt arrays (such as
``ds.domain_left_edge``) will be displayed interactively as well. The package
`widgyts `_ provides interactive, yt-specific visualization of slices and
projections, as well as additional dataset display information.

New External Packages
^^^^^^^^^^^^^^^^^^^^^

As noted above (:ref:`yt-units-is-now-unyt`), unyt has been extracted from
yt, and we now use it as an external library. In addition, other parts of yt
such as :ref:`interactive_data_visualization` have been extracted, and we are
working toward a more modular approach for things such as Jupyter widgets and
other "value-added" integrations.
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0
yt-4.4.0/pyproject.toml0000644000175100001770000004001314714401662014477 0ustar00runnerdocker
[build-system]
# keep in sync with .github/workflows/wheels.yaml
requires = [
    "setuptools>=61.2",
    "Cython>=3.0.3",
    "numpy>=2.0.0",
    "ewah-bool-utils>=1.2.0",
]
build-backend = "setuptools.build_meta"

[project]
name = "yt"
version = "4.4.0"
description = "An analysis and visualization toolkit for volumetric data"
authors = [
    { name = "The yt project", email = "yt-dev@python.org" },
]
classifiers = [
    "Development Status :: 5 - Production/Stable",
    "Environment :: Console",
    "Framework :: Matplotlib",
    "Intended Audience :: Science/Research",
    "License :: OSI Approved :: BSD License",
    "Operating System :: MacOS :: MacOS X",
    "Operating System :: POSIX :: AIX",
    "Operating System :: POSIX :: Linux",
    "Programming Language :: C",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3 :: Only",
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11",
    "Programming Language :: Python :: 3.12",
    "Programming Language :: Python :: 3.13",
    "Topic :: Scientific/Engineering :: Astronomy",
    "Topic :: Scientific/Engineering :: Physics",
    "Topic :: Scientific/Engineering :: Visualization",
]
keywords = [
    "astronomy astrophysics visualization amr adaptivemeshrefinement",
]
requires-python = ">=3.10.3"

# keep in sync with minimal_requirements.txt
dependencies = [
    "cmyt>=1.1.2",
    "ewah-bool-utils>=1.2.0",
    "matplotlib>=3.5",
    "more-itertools>=8.4",
    "numpy>=1.21.3, <3", # keep minimal requirement in sync with NPY_TARGET_VERSION
    # https://github.com/numpy/numpy/issues/27037
    "numpy!=2.0.1 ; platform_machine=='arm64' and platform_system=='Darwin'",
    "packaging>=20.9",
    "pillow>=8.3.2",
    "tomli-w>=0.4.0",
    "tqdm>=3.4.0",
    "unyt>=2.9.2",
    "tomli>=1.2.3;python_version < '3.11'",
    "typing-extensions>=4.4.0;python_version < '3.12'",
]

[project.readme]
file = "README.md"
content-type = "text/markdown"

[project.license]
text = "BSD 3-Clause"

[project.urls]
Homepage = "https://yt-project.org/"
Documentation = "https://yt-project.org/doc/"
Source = "https://github.com/yt-project/yt/"
Tracker = "https://github.com/yt-project/yt/issues"

[project.entry-points."nose.plugins.0.10"]
# this section can be cleaned when nose tests on GHA are removed
answer-testing =
"yt.utilities.answer_testing.framework:AnswerTesting" [project.optional-dependencies] # some generic, reusable constraints on optional-deps HDF5 = ["h5py>=3.1.0,!=3.12.0; platform_system=='Windows'"] # see https://github.com/h5py/h5py/issues/2505 netCDF4 = ["netCDF4!=1.6.1,>=1.5.3"] # see https://github.com/Unidata/netcdf4-python/issues/1192 Fortran = ["f90nml>=1.1"] # frontend-specific requirements # all frontends should have a target, even if no additional requirements are needed # note that, because pip normalizes underscores to hyphens, we need to apply this transformation # in target names here so that recursive dependencies link correctly. # This does *not* prevent end-users to write, say `pip install yt[enzo_e]`. # We also normalize all target names to lower case for consistency. adaptahop = [] ahf = [] amrex = [] amrvac = ["yt[Fortran]"] art = [] arepo = ["yt[HDF5]"] artio = [] athena = [] athena-pp = [] boxlib = [] cf-radial = ["xarray>=0.16.1", "arm-pyart>=1.19.2",] chimera = ["yt[HDF5]"] chombo = ["yt[HDF5]"] cholla = ["yt[HDF5]"] eagle = ["yt[HDF5]"] enzo-e = ["yt[HDF5]", "libconf>=1.0.1"] enzo = ["yt[HDF5]", "libconf>=1.0.1"] exodus-ii = ["yt[netCDF4]"] fits = ["astropy>=4.0.1", "regions>=0.7"] flash = ["yt[HDF5]"] gadget = ["yt[HDF5]"] gadget-fof = ["yt[HDF5]"] gamer = ["yt[HDF5]"] gdf = ["yt[HDF5]"] gizmo = ["yt[HDF5]"] halo-catalog = ["yt[HDF5]"] http-stream = ["requests>=2.20.0"] idefix = ["yt_idefix[HDF5]>=2.3.0"] # externally packaged frontend moab = ["yt[HDF5]"] nc4-cm1 = ["yt[netCDF4]"] open-pmd = ["yt[HDF5]"] owls = ["yt[HDF5]"] owls-subfind = ["yt[HDF5]"] parthenon = ["yt[HDF5]"] ramses = ["yt[Fortran]", "scipy"] rockstar = [] sdf = ["requests>=2.20.0"] stream = [] swift = ["yt[HDF5]"] tipsy = [] ytdata = ["yt[HDF5]"] # "full" should contain all optional dependencies intended for users (not devs) # in particular it should enable support for all frontends full = [ "cartopy>=0.22.0", "firefly>=3.2.0", "glueviz>=0.13.3", "ipython>=7.16.2", "ipywidgets>=8.0.0", "miniballcpp>=0.2.1", "mpi4py>=3.0.3", "pandas>=1.1.2", "pooch>=0.7.0", "pyaml>=17.10.0", "pykdtree>=1.3.1", "pyx>=0.15", "scipy>=1.5.0", "glue-core!=1.2.4;python_version >= '3.10'", # see https://github.com/glue-viz/glue/issues/2263 "ratarmount~=0.8.1;platform_system!='Windows' and platform_system!='Darwin'", "yt[adaptahop]", "yt[ahf]", "yt[amrex]", "yt[amrvac]", "yt[art]", "yt[arepo]", "yt[artio]", "yt[athena]", "yt[athena_pp]", "yt[boxlib]", "yt[cf_radial]", "yt[chimera]", "yt[chombo]", "yt[cholla]", "yt[eagle]", "yt[enzo_e]", "yt[enzo]", "yt[exodus_ii]", "yt[fits]", "yt[flash]", "yt[gadget]", "yt[gadget_fof]", "yt[gamer]", "yt[gdf]", "yt[gizmo]", "yt[halo_catalog]", "yt[http_stream]", "yt[idefix]", "yt[moab]", "yt[nc4_cm1]", "yt[open_pmd]", "yt[owls]", "yt[owls_subfind]", "yt[parthenon]", "yt[ramses]", "yt[rockstar]", "yt[sdf]", "yt[stream]", "yt[swift]", "yt[tipsy]", "yt[ytdata]", ] # dev-only extra targets mapserver = [ "bottle", ] test = [ "pyaml>=17.10.0", "pytest>=6.1", "pytest-mpl>=0.16.1", "sympy!=1.10,!=1.9", # see https://github.com/sympy/sympy/issues/22241 "imageio!=2.35.0", # see https://github.com/yt-project/yt/issues/4966 ] [project.scripts] yt = "yt.utilities.command_line:run_main" [tool.setuptools] include-package-data = true zip-safe = false [tool.setuptools.packages.find] namespaces = false [tool.black] # TODO: drop this section when ruff supports embedded python blocks # see https://github.com/astral-sh/ruff/issues/8237 include = '\.pyi?$' exclude = ''' /( \.eggs | \.git | \.hg | \.mypy_cache 
| \.tox | \.venv | _build | buck-out | build | dist | yt/frontends/stream/sample_data )/ | yt/visualization/_colormap_data.py ''' [tool.ruff] exclude = [ "doc", "benchmarks", "*/api.py", "*/__init__.py", "*/__config__.py", "yt/units", "yt/frontends/stream/sample_data", "yt/utilities/fits_image.py", "yt/utilities/lodgeit.py", "yt/mods.py", "yt/visualization/_colormap_data.py", "yt/exthook.py", ] [tool.ruff.lint] select = [ "E", "F", "W", "C4", # flake8-comprehensions "B", # flake8-bugbear "G", # flake8-logging-format "TCH", # flake8-type-checking "YTT", # flake8-2020 "UP", # pyupgrade "I", # isort "NPY", # numpy specific rules "RUF031"# incorrectly-parenthesized-tuple-in-subscript ] ignore = [ "E501", # line too long "E741", # Do not use variables named 'I', 'O', or 'l' "B018", # Found useless expression. # disabled because ds.index is idiomatic "UP038", # non-pep604-isinstance ] [tool.ruff.lint.per-file-ignores] "test_*" = ["NPY002"] [tool.ruff.lint.isort] combine-as-imports = true known-third-party = [ "IPython", "nose", "numpy", "sympy", "matplotlib", "unyt", "git", "yaml", "dateutil", "requests", "coverage", "pytest", "pyx", "glue", ] known-first-party = ["yt"] # The -s option prevents pytest from capturing output sent to stdout # -v runs pytest in verbose mode # -rsfE: The -r tells pytest to provide extra test summary info on the events # specified by the characters following the r. s: skipped, f: failed, E: error [tool.pytest.ini_options] addopts = ''' -s -v -rsfE --ignore-glob='/*_nose.py' --ignore-glob='/*/yt/data_objects/level_sets/tests/test_clump_finding.py' --ignore-glob='/*/yt/data_objects/tests/test_connected_sets.py' --ignore-glob='/*/yt/data_objects/tests/test_data_containers.py' --ignore-glob='/*/yt/data_objects/tests/test_dataset_access.py' --ignore-glob='/*/yt/data_objects/tests/test_particle_filter.py' --ignore-glob='/*/yt/data_objects/tests/test_particle_trajectories.py' --ignore-glob='/*/yt/data_objects/tests/test_pickling.py' --ignore-glob='/*/yt/data_objects/tests/test_regions.py' --ignore-glob='/*/yt/fields/tests/test_particle_fields.py' --ignore-glob='/*/yt/fields/tests/test_vector_fields.py' --ignore-glob='/*/yt/fields/tests/test_xray_fields.py' --ignore-glob='/*/yt/frontends/adaptahop/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/ahf/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/amrex/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/amrvac/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/amrvac/tests/test_units_override.py' --ignore-glob='/*/yt/frontends/arepo/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/art/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/artio/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/athena/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/athena_pp/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/boxlib/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/cf_radial/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/chimera/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/cholla/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/chombo/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/eagle/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/enzo/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/enzo_e/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/exodus_ii/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/fits/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/flash/tests/test_outputs.py' 
--ignore-glob='/*/yt/frontends/gadget/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/gadget_fof/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/gamer/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/gdf/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/gdf/tests/test_outputs_nose.py' --ignore-glob='/*/yt/frontends/gizmo/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/halo_catalog/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/moab/tests/test_c5.py' --ignore-glob='/*/yt/frontends/nc4_cm1/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/open_pmd/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/owls/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/owls_subfind/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/parthenon/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/ramses/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/rockstar/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/tipsy/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/ytdata/tests/test_old_outputs.py' --ignore-glob='/*/yt/frontends/ytdata/tests/test_outputs.py' --ignore-glob='/*/yt/frontends/ytdata/tests/test_unit.py' --ignore-glob='/*/yt/geometry/coordinates/tests/test_axial_pixelization.py' --ignore-glob='/*/yt/geometry/coordinates/tests/test_cylindrical_coordinates.py' --ignore-glob='/*/yt/geometry/coordinates/tests/test_spherical_coordinates.py' --ignore-glob='/*/yt/tests/test_funcs.py' --ignore-glob='/*/yt/utilities/lib/cykdtree/tests/__init__.py' --ignore-glob='/*/yt/utilities/lib/cykdtree/tests/test_kdtree.py' --ignore-glob='/*/yt/utilities/lib/cykdtree/tests/test_plot.py' --ignore-glob='/*/yt/utilities/lib/cykdtree/tests/test_utils.py' --ignore-glob='/*/yt/utilities/tests/test_cosmology.py' --ignore-glob='/*/yt/visualization/tests/test_callbacks.py' --ignore-glob='/*/yt/visualization/tests/test_color_maps.py' --ignore-glob='/*/yt/visualization/tests/test_geo_projections.py' --ignore-glob='/*/yt/visualization/tests/test_image_writer.py' --ignore-glob='/*/yt/visualization/tests/test_line_plots.py' --ignore-glob='/*/yt/visualization/tests/test_mesh_slices.py' --ignore-glob='/*/yt/visualization/tests/test_norm_api_custom_norm.py' --ignore-glob='/*/yt/visualization/tests/test_norm_api_inf_zlim.py' --ignore-glob='/*/yt/visualization/tests/test_norm_api_lineplot.py' --ignore-glob='/*/yt/visualization/tests/test_norm_api_particleplot.py' --ignore-glob='/*/yt/visualization/tests/test_norm_api_phaseplot_set_colorbar_explicit.py' --ignore-glob='/*/yt/visualization/tests/test_norm_api_phaseplot_set_colorbar_implicit.py' --ignore-glob='/*/yt/visualization/tests/test_norm_api_profileplot.py' --ignore-glob='/*/yt/visualization/tests/test_norm_api_set_background_color.py' --ignore-glob='/*/yt/visualization/tests/test_particle_plot.py' --ignore-glob='/*/yt/visualization/tests/test_plot_modifications.py' --ignore-glob='/*/yt/visualization/tests/test_plotwindow.py' --ignore-glob='/*/yt/visualization/tests/test_profile_plots.py' --ignore-glob='/*/yt/visualization/tests/test_raw_field_slices.py' --ignore-glob='/*/yt/visualization/volume_rendering/tests/test_mesh_render.py' --ignore-glob='/*/yt/visualization/volume_rendering/tests/test_vr_orientation.py' ''' [tool.check-manifest] # ignore generated C/C++ files, otherwise reported as "missing from VCS" (Version Control System) # Please resist the temptation to use patterns instead of exact file names here. 
ignore = [ "yt/frontends/artio/_artio_caller.c", "yt/frontends/gamer/cfields.c", "yt/frontends/ramses/io_utils.c", "yt/geometry/fake_octree.c", "yt/geometry/grid_container.c", "yt/geometry/grid_visitors.c", "yt/geometry/oct_container.c", "yt/geometry/oct_visitors.c", "yt/geometry/particle_deposit.c", "yt/geometry/particle_oct_container.cpp", "yt/geometry/particle_smooth.c", "yt/geometry/selection_routines.c", "yt/utilities/cython_fortran_utils.c", "yt/utilities/lib/_octree_raytracing.cpp", "yt/utilities/lib/allocation_container.c", "yt/utilities/lib/alt_ray_tracers.c", "yt/utilities/lib/amr_kdtools.c", "yt/utilities/lib/autogenerated_element_samplers.c", "yt/utilities/lib/basic_octree.c", "yt/utilities/lib/bitarray.c", "yt/utilities/lib/bounded_priority_queue.c", "yt/utilities/lib/bounding_volume_hierarchy.cpp", "yt/utilities/lib/contour_finding.c", "yt/utilities/lib/cosmology_time.c", "yt/utilities/lib/cykdtree/kdtree.cpp", "yt/utilities/lib/cykdtree/utils.cpp", "yt/utilities/lib/cyoctree.c", "yt/utilities/lib/cyoctree.cpp", "yt/utilities/lib/depth_first_octree.c", "yt/utilities/lib/distance_queue.c", "yt/utilities/lib/element_mappings.c", "yt/utilities/lib/ewah_bool_wrap.cpp", "yt/utilities/lib/fnv_hash.c", "yt/utilities/lib/fortran_reader.c", "yt/utilities/lib/geometry_utils.cpp", "yt/utilities/lib/grid_traversal.cpp", "yt/utilities/lib/image_samplers.cpp", "yt/utilities/lib/image_utilities.c", "yt/utilities/lib/interpolators.c", "yt/utilities/lib/lenses.c", "yt/utilities/lib/line_integral_convolution.c", "yt/utilities/lib/marching_cubes.cpp", "yt/utilities/lib/mesh_triangulation.c", "yt/utilities/lib/mesh_utilities.c", "yt/utilities/lib/misc_utilities.cpp", "yt/utilities/lib/origami.c", "yt/utilities/lib/particle_kdtree_tools.cpp", "yt/utilities/lib/particle_mesh_operations.c", "yt/utilities/lib/partitioned_grid.cpp", "yt/utilities/lib/pixelization_routines.cpp", "yt/utilities/lib/points_in_volume.c", "yt/utilities/lib/primitives.c", "yt/utilities/lib/quad_tree.c", "yt/utilities/lib/ragged_arrays.c", "yt/utilities/lib/write_array.c", ] [tool.mypy] python_version = '3.10' show_error_codes = true ignore_missing_imports = true warn_unused_configs = true warn_unused_ignores = true warn_unreachable = true show_error_context = true exclude = "(test_*|lodgeit)" [tool.cibuildwheel] build = "cp310-* cp311-* cp312-* cp313-*" build-verbosity = 1 test-skip = "*-musllinux*" test-extras = "test" test-command = [ "pytest -c {project}/pyproject.toml --rootdir . 
--color=yes --pyargs yt -ra", ] [tool.cibuildwheel.linux] archs = "x86_64" [tool.cibuildwheel.macos] archs = "auto" [tool.cibuildwheel.windows] archs = "auto64" ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.4351544 yt-4.4.0/setup.cfg0000644000175100001770000000004614714401715013405 0ustar00runnerdocker[egg_info] tag_build = tag_date = 0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/setup.py0000644000175100001770000000741514714401662013306 0ustar00runnerdockerimport glob import os import sys from collections import defaultdict from distutils.ccompiler import get_default_compiler from importlib import resources as importlib_resources from setuptools import Distribution, setup # ensure enclosing directory is in PYTHON_PATH to allow importing from setupext.py if (script_dir := os.path.dirname(__file__)) not in sys.path: sys.path.insert(0, script_dir) from setupext import ( NUMPY_MACROS, check_CPP14_flags, check_for_openmp, check_for_pyembree, create_build_ext, get_python_include_dirs, install_ccompiler, ) install_ccompiler() if os.path.exists("MANIFEST"): os.remove("MANIFEST") with open("README.md") as file: long_description = file.read() CPP14_CONFIG = defaultdict( lambda: check_CPP14_flags(["-std=c++14", "-std=c++1y", "-std=gnu++0x"]), {"msvc": ["/std:c++14"]}, ) CPP11_CONFIG = defaultdict(lambda: ["-std=c++11"], {"msvc": ["/std:c++11"]}) _COMPILER = get_default_compiler() omp_args, _ = check_for_openmp() if os.name == "nt": std_libs = [] else: std_libs = ["m"] CPP14_FLAG = CPP14_CONFIG[_COMPILER] CPP11_FLAG = CPP11_CONFIG[_COMPILER] FIXED_INTERP = "fixed_interpolator" cythonize_aliases = { "LIB_DIR": "yt/utilities/lib/", "LIB_DIR_GEOM": ["yt/utilities/lib/", "yt/geometry/"], "LIB_DIR_GEOM_ARTIO": [ "yt/utilities/lib/", "yt/geometry/", "yt/frontends/artio/artio_headers/", ], "STD_LIBS": std_libs, "EWAH_LIBS": std_libs + [os.path.abspath(importlib_resources.files("ewah_bool_utils"))], "OMP_ARGS": omp_args, "FIXED_INTERP": FIXED_INTERP, "ARTIO_SOURCE": sorted(glob.glob("yt/frontends/artio/artio_headers/*.c")), "CPP14_FLAG": CPP14_FLAG, "CPP11_FLAG": CPP11_FLAG, } lib_exts = [ "yt/geometry/*.pyx", "yt/utilities/cython_fortran_utils.pyx", "yt/frontends/ramses/io_utils.pyx", "yt/frontends/gamer/cfields.pyx", "yt/utilities/lib/cykdtree/kdtree.pyx", "yt/utilities/lib/cykdtree/utils.pyx", "yt/frontends/artio/_artio_caller.pyx", "yt/utilities/lib/*.pyx", ] embree_libs, embree_aliases = check_for_pyembree(std_libs) cythonize_aliases.update(embree_aliases) lib_exts += embree_libs # This overrides using lib_exts, so it has to happen after lib_exts is fully defined build_ext, sdist = create_build_ext(lib_exts, cythonize_aliases) # Force setuptools to consider that there are ext modules, even if empty. # See https://github.com/yt-project/yt/issues/2922 and # https://stackoverflow.com/a/62668026/2601223 for the fix. class BinaryDistribution(Distribution): """Distribution which always forces a binary package with platform name.""" def has_ext_modules(self): return True if __name__ == "__main__": # Avoid a race condition on fixed_interpolator.o during parallel builds by # building it only once and storing it in a static library. # See https://github.com/yt-project/yt/issues/4278 and # https://github.com/pypa/setuptools/issues/3119#issuecomment-2076922303 # for the inspiration for this fix. 
# build_clib doesn't add the Python include dirs (for Python.h) by default, # as opposed to build_ext, so we need to add them manually. clib_include_dirs = get_python_include_dirs() # fixed_interpolator.cpp uses Numpy types import numpy clib_include_dirs.append(numpy.get_include()) fixed_interp_lib = ( FIXED_INTERP, { "sources": ["yt/utilities/lib/fixed_interpolator.cpp"], "include_dirs": clib_include_dirs, "define_macros": NUMPY_MACROS, }, ) setup( cmdclass={"sdist": sdist, "build_ext": build_ext}, distclass=BinaryDistribution, libraries=[fixed_interp_lib], ext_modules=[], # !!! We override this inside build_ext above ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/setupext.py0000644000175100001770000003523314714401662014026 0ustar00runnerdockerimport contextlib import glob import logging import os import platform import shutil import subprocess import sys import tempfile from textwrap import dedent from concurrent.futures import ThreadPoolExecutor from distutils import sysconfig from distutils.ccompiler import CCompiler, new_compiler from distutils.sysconfig import customize_compiler from subprocess import PIPE, Popen from sys import platform as _platform import ewah_bool_utils from setuptools.command.build_ext import build_ext as _build_ext from setuptools.command.sdist import sdist as _sdist from setuptools.errors import CompileError, LinkError import importlib.resources as importlib_resources log = logging.getLogger("setupext") @contextlib.contextmanager def stdchannel_redirected(stdchannel, dest_filename): """ A context manager to temporarily redirect stdout or stderr e.g.: with stdchannel_redirected(sys.stderr, os.devnull): if compiler.has_function('clock_gettime', libraries=['rt']): libraries.append('rt') Code adapted from https://stackoverflow.com/a/17752455/1382869 """ try: oldstdchannel = os.dup(stdchannel.fileno()) dest_file = open(dest_filename, "w") os.dup2(dest_file.fileno(), stdchannel.fileno()) yield finally: if oldstdchannel is not None: os.dup2(oldstdchannel, stdchannel.fileno()) if dest_file is not None: dest_file.close() def check_for_openmp(): """Returns OpenMP compiler and linker flags if local setup supports OpenMP or [], [] otherwise Code adapted from astropy_helpers, originally written by Tom Robitaille and Curtis McCully. """ # Create a temporary directory ccompiler = new_compiler() customize_compiler(ccompiler) tmp_dir = tempfile.mkdtemp() start_dir = os.path.abspath(".") CCODE = dedent("""\ #include #include int main() { omp_set_num_threads(2); #pragma omp parallel printf("nthreads=%d\\n", omp_get_num_threads()); return 0; }""" ) # TODO: test more known compilers: # MinGW, AppleClang with libomp, MSVC, ICC, XL, PGI, ... 
if os.name == "nt": # TODO: make this work with mingw # AFAICS there's no easy way to get the compiler distutils # will be using until compilation actually happens compile_flags = ["-openmp"] link_flags = [""] else: compile_flags = ["-fopenmp"] link_flags = ["-fopenmp"] try: os.chdir(tmp_dir) with open("test_openmp.c", "w") as f: f.write(CCODE) os.mkdir("objects") # Compile, link, and run test program with stdchannel_redirected(sys.stderr, os.devnull): ccompiler.compile( ["test_openmp.c"], output_dir="objects", extra_postargs=compile_flags ) ccompiler.link_executable( glob.glob(os.path.join("objects", "*")), "test_openmp", extra_postargs=link_flags, ) output = ( subprocess.check_output("./test_openmp") .decode(sys.stdout.encoding or "utf-8") .splitlines() ) if "nthreads=" in output[0]: nthreads = int(output[0].strip().split("=")[1]) if len(output) == nthreads: using_openmp = True else: log.warning( "Unexpected number of lines from output of test " "OpenMP program (output was %s)", output, ) using_openmp = False else: log.warning( "Unexpected output from test OpenMP program (output was %s)", output ) using_openmp = False except (CompileError, LinkError): using_openmp = False finally: os.chdir(start_dir) if using_openmp: log.warning("Using OpenMP to compile parallel extensions") else: log.warning( "Unable to compile OpenMP test program so Cython\n" "extensions will be compiled without parallel support" ) if using_openmp: return compile_flags, link_flags else: return [], [] def check_CPP14_flag(compile_flags): # Create a temporary directory ccompiler = new_compiler() customize_compiler(ccompiler) tmp_dir = tempfile.mkdtemp() start_dir = os.path.abspath(".") # Note: This code requires C++14 functionalities (also required to compile yt) # It compiles on gcc 4.7.4 (together with the entirety of yt) with the flag "-std=gnu++0x". # It does not compile on gcc 4.6.4 (neither does yt). CPPCODE = dedent("""\ #include struct node { std::vector vic; bool visited = false; }; int main() { return 0; }""" ) os.chdir(tmp_dir) try: with open("test_cpp14.cpp", "w") as f: f.write(CPPCODE) os.mkdir("objects") # Compile, link, and run test program with stdchannel_redirected(sys.stderr, os.devnull): ccompiler.compile( ["test_cpp14.cpp"], output_dir="objects", extra_postargs=compile_flags ) return True except CompileError: return False finally: os.chdir(start_dir) def check_CPP14_flags(possible_compile_flags): for flags in possible_compile_flags: if check_CPP14_flag([flags]): return flags log.warning( "Your compiler seems to be too old to support C++14. " "yt may not be able to compile. Please use a newer version." 
) return [] def check_for_pyembree(std_libs): embree_libs = [] embree_aliases = {} try: importlib_resources.files("pyembree") except ImportError: return embree_libs, embree_aliases embree_prefix = os.path.abspath(read_embree_location()) embree_inc_dir = os.path.join(embree_prefix, "include") embree_lib_dir = os.path.join(embree_prefix, "lib") if _platform == "darwin": embree_lib_name = "embree.2" else: embree_lib_name = "embree" embree_aliases["EMBREE_INC_DIR"] = ["yt/utilities/lib/", embree_inc_dir] embree_aliases["EMBREE_LIB_DIR"] = [embree_lib_dir] embree_aliases["EMBREE_LIBS"] = std_libs + [embree_lib_name] embree_libs += ["yt/utilities/lib/embree_mesh/*.pyx"] if in_conda_env(): conda_basedir = os.path.dirname(os.path.dirname(sys.executable)) embree_aliases["EMBREE_INC_DIR"].append(os.path.join(conda_basedir, "include")) embree_aliases["EMBREE_LIB_DIR"].append(os.path.join(conda_basedir, "lib")) return embree_libs, embree_aliases def in_conda_env(): return any(s in sys.version for s in ("Anaconda", "Continuum", "conda-forge")) def read_embree_location(): """ Attempts to locate the embree installation. First, we check for an EMBREE_DIR environment variable. If one is not defined, we look for an embree.cfg file in the root yt source directory. Finally, if that is not present, we default to /usr/local. If embree is installed in a non-standard location and none of the above are set, the compile will not succeed. This only gets called if check_for_pyembree() returns something other than None. """ rd = os.environ.get("EMBREE_DIR") if rd is None: try: rd = open("embree.cfg").read().strip() except IOError: rd = "/usr/local" fail_msg = ( "I attempted to find Embree headers in %s. \n" "If this is not correct, please set your correct embree location \n" "using EMBREE_DIR environment variable or your embree.cfg file. \n" "Please see http://yt-project.org/docs/dev/visualizing/unstructured_mesh_rendering.html " "for more information. \n" % rd ) # Create a temporary directory tmpdir = tempfile.mkdtemp() curdir = os.getcwd() try: os.chdir(tmpdir) # Get compiler invocation compiler = os.getenv("CXX", "c++") compiler = compiler.split(" ") # Attempt to compile a test script. filename = r"test.cpp" file = open(filename, "wt", 1) CCODE = dedent("""\ #include "embree2/rtcore.h int main() { return 0; }""" ) file.write(CCODE) file.flush() p = Popen( compiler + ["-I%s/include/" % rd, filename], stdin=PIPE, stdout=PIPE, stderr=PIPE, ) output, err = p.communicate() exit_code = p.returncode if exit_code != 0: log.warning( "Pyembree is installed, but I could not compile Embree test code." ) log.warning("The error message was: ") log.warning(err) log.warning(fail_msg) # Clean up file.close() except OSError: log.warning( "read_embree_location() could not find your C compiler. " "Attempted to use '%s'.", compiler, ) return False finally: os.chdir(curdir) shutil.rmtree(tmpdir) return rd def get_cpu_count(): if platform.system() == "Windows": return 0 cpu_count = os.cpu_count() try: user_max_cores = int(os.getenv("MAX_BUILD_CORES", cpu_count)) except ValueError as e: raise ValueError( "MAX_BUILD_CORES must be set to an integer. " + "See above for original error." 
) from e max_cores = min(cpu_count, user_max_cores) return max_cores def install_ccompiler(): def _compile( self, sources, output_dir=None, macros=None, include_dirs=None, debug=0, extra_preargs=None, extra_postargs=None, depends=None, ): """Function to monkey-patch distutils.ccompiler.CCompiler""" macros, objects, extra_postargs, pp_opts, build = self._setup_compile( output_dir, macros, include_dirs, sources, depends, extra_postargs ) cc_args = self._get_cc_args(pp_opts, debug, extra_preargs) for obj in objects: try: src, ext = build[obj] except KeyError: continue self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts) # Return *all* object filenames, not just the ones we just built. return objects CCompiler.compile = _compile def get_python_include_dirs(): """Extracted from distutils.command.build_ext.build_ext.finalize_options(), https://github.com/python/cpython/blob/812245ecce2d8344c3748228047bab456816180a/Lib/distutils/command/build_ext.py#L148-L167 """ include_dirs = [] # Make sure Python's include directories (for Python.h, pyconfig.h, # etc.) are in the include search path. py_include = sysconfig.get_python_inc() plat_py_include = sysconfig.get_python_inc(plat_specific=1) # If in a virtualenv, add its include directory # Issue 16116 if sys.exec_prefix != sys.base_exec_prefix: include_dirs.append(os.path.join(sys.exec_prefix, 'include')) # Put the Python "system" include dir at the end, so that # any local include dirs take precedence. include_dirs.extend(py_include.split(os.path.pathsep)) if plat_py_include != py_include: include_dirs.extend(plat_py_include.split(os.path.pathsep)) return include_dirs NUMPY_MACROS = [ ("NPY_NO_DEPRECATED_API", "NPY_1_7_API_VERSION"), # keep in sync with runtime requirements (pyproject.toml) ("NPY_TARGET_VERSION", "NPY_1_21_API_VERSION"), ] def create_build_ext(lib_exts, cythonize_aliases): class build_ext(_build_ext): # subclass setuptools extension builder to avoid importing cython and numpy # at top level in setup.py. 
See http://stackoverflow.com/a/21621689/1382869 # NOTE: this is likely not necessary anymore since # pyproject.toml was introduced in the project def finalize_options(self): from Cython.Build import cythonize # Override the list of extension modules self.distribution.ext_modules[:] = cythonize( lib_exts, aliases=cythonize_aliases, compiler_directives={"language_level": 3}, nthreads=get_cpu_count(), ) _build_ext.finalize_options(self) # Prevent numpy from thinking it is still in its setup process # see http://stackoverflow.com/a/21621493/1382869 if isinstance(__builtins__, dict): # sometimes this is a dict so we need to check for that # https://docs.python.org/3/library/builtins.html __builtins__["__NUMPY_SETUP__"] = False else: __builtins__.__NUMPY_SETUP__ = False import numpy self.include_dirs.append(numpy.get_include()) self.include_dirs.append(ewah_bool_utils.get_include()) define_macros = NUMPY_MACROS if self.define is None: self.define = define_macros else: self.define.extend(define_macros) def build_extensions(self): self.check_extensions_list(self.extensions) ncpus = get_cpu_count() if ncpus > 0: with ThreadPoolExecutor(ncpus) as executor: results = { executor.submit(self.build_extension, extension): extension for extension in self.extensions } for result in results: result.result() else: super().build_extensions() def build_extension(self, extension): try: super().build_extension(extension) except CompileError as exc: print(f"While building '{extension.name}' following error was raised:\n {exc}") raise class sdist(_sdist): # subclass setuptools source distribution builder to ensure cython # generated C files are included in source distribution. # See http://stackoverflow.com/a/18418524/1382869 def run(self): # Make sure the compiled Cython files in the distribution are up-to-date from Cython.Build import cythonize cythonize( lib_exts, aliases=cythonize_aliases, compiler_directives={"language_level": 3}, nthreads=get_cpu_count(), ) _sdist.run(self) return build_ext, sdist ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2551513 yt-4.4.0/yt/0000755000175100001770000000000014714401715012220 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/__init__.py0000644000175100001770000000721414714401662014336 0ustar00runnerdocker""" yt is a toolkit for analyzing and visualizing volumetric data. 
* Website: https://yt-project.org * Documentation: https://yt-project.org/doc * Data hub: https://girder.hub.yt * Contribute: https://github.com/yt-project/yt """ from ._version import __version__, version_info # isort: skip import yt.units as units import yt.utilities.physical_constants as physical_constants from yt.data_objects.api import ( DatasetSeries, ImageArray, ParticleProfile, Profile1D, Profile2D, Profile3D, add_particle_filter, create_profile, particle_filter, ) from yt.fields.api import ( DerivedField, FieldDetector, FieldInfoContainer, ValidateDataField, ValidateGridType, ValidateParameter, ValidateProperty, ValidateSpatial, add_field, add_xray_emissivity_field, derived_field, field_plugins, ) from yt.funcs import ( enable_plugins, get_memory_usage, get_pbar, get_version_stack, get_yt_version, insert_ipython, is_root, is_sequence, memory_checker, only_on_root, parallel_profile, print_tb, rootonly, toggle_interactivity, ) from yt.units import ( YTArray, YTQuantity, display_ytarray, loadtxt, savetxt, uconcatenate, ucross, udot, uhstack, uintersect1d, unorm, ustack, uunion1d, uvstack, ) from yt.units.unit_object import define_unit # type: ignore from yt.utilities.logger import set_log_level, ytLogger as mylog from yt import frontends import yt.visualization.volume_rendering.api as volume_rendering from yt.frontends.stream.api import hexahedral_connectivity from yt.frontends.ytdata.api import save_as_dataset from yt.loaders import ( load, load_amr_grids, load_archive, load_hdf5_file, load_hexahedral_mesh, load_octree, load_particles, load_sample, load_simulation, load_uniform_grid, load_unstructured_mesh, ) def run_nose(*args, **kwargs): # we hide this function behind a closure so we # don't make pytest a hard dependency for end users # see https://github.com/yt-project/yt/issues/3771 from yt.testing import run_nose return run_nose(*args, **kwargs) from yt.config import _setup_postinit_configuration from yt.units.unit_systems import UnitSystem, unit_system_registry # type: ignore # Import some helpful math utilities from yt.utilities.math_utils import ortho_find, periodic_position, quartiles from yt.utilities.parallel_tools.parallel_analysis_interface import ( communication_system, enable_parallelism, parallel_objects, ) # Now individual component imports from the visualization API from yt.visualization.api import ( AxisAlignedProjectionPlot, AxisAlignedSlicePlot, FITSImageData, FITSOffAxisProjection, FITSOffAxisSlice, FITSParticleOffAxisProjection, FITSParticleProjection, FITSProjection, FITSSlice, FixedResolutionBuffer, LineBuffer, LinePlot, OffAxisProjectionPlot, OffAxisSlicePlot, ParticleImageBuffer, ParticlePhasePlot, ParticlePlot, ParticleProjectionPlot, PhasePlot, ProfilePlot, ProjectionPlot, SlicePlot, add_colormap, apply_colormap, make_colormap, plot_2d, scale_image, show_colormaps, write_bitmap, write_image, write_projection, ) from yt.visualization.volume_rendering.api import ( ColorTransferFunction, TransferFunction, create_scene, off_axis_projection, volume_render, ) # TransferFunctionHelper, MultiVariateTransferFunction # off_axis_projection # run configuration callbacks _setup_postinit_configuration() del _setup_postinit_configuration ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2591512 yt-4.4.0/yt/_maintenance/0000755000175100001770000000000014714401715014641 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 
yt-4.4.0/yt/_maintenance/__init__.py0000644000175100001770000000000014714401662016741 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/_maintenance/backports.py0000644000175100001770000000442014714401662017204 0ustar00runnerdocker# A namespace to store simple backports from most recent Python versions # Backports should be contained in (if/else) blocks, checking on the runtime # Python version, e.g. # # if sys.version_info < (3, 8): # ... # insert backported definitions # else: # pass # # while the else part if meant as a no-op, it is required to allow automated # cleanups with pyupgrade, when our minimal supported Python version reaches # the requirement for backports to be unneeded. # # Likewise, imports from this module should be guarded by a similar condition, # e.g., # # if sys.version_info >= (3, 8): # from functools import cached_property # else: # from yt._maintenance.backports import cached_property import sys if sys.version_info >= (3, 11): pass else: from enum import Enum # backported from Python 3.11.0 class ReprEnum(Enum): """ Only changes the repr(), leaving str() and format() to the mixed-in type. """ # backported from Python 3.11.0 class StrEnum(str, ReprEnum): """ Enum where members are also (and must be) strings """ def __new__(cls, *values): "values must already be of type `str`" if len(values) > 3: raise TypeError(f"too many arguments for str(): {values!r}") if len(values) == 1: # it must be a string if not isinstance(values[0], str): raise TypeError(f"{values[0]!r} is not a string") if len(values) >= 2: # check that encoding argument is a string if not isinstance(values[1], str): raise TypeError(f"encoding must be a string, not {values[1]!r}") if len(values) == 3: # check that errors argument is a string if not isinstance(values[2], str): raise TypeError("errors must be a string, not %r" % (values[2])) # noqa: UP031 value = str(*values) member = str.__new__(cls, value) member._value_ = value return member def _generate_next_value_(name, start, count, last_values): # noqa B902 """ Return the lower-cased version of the member name. """ return name.lower() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/_maintenance/deprecation.py0000644000175100001770000000435514714401662017520 0ustar00runnerdockerimport warnings from functools import wraps from types import FunctionType def issue_deprecation_warning( msg: str, *, stacklevel: int, since: str, removal: str | None = None, ): """ Parameters ---------- msg : str A text message explaining that the code surrounding the call to this function is deprecated, and what should be changed on the user side to avoid it. stacklevel: int Number of stack frames to be skipped when pointing at caller code, starting from *this* function's frame. In general 3 is a minimum. since and removal: str version numbers, indicating the anticipated removal date Notes ----- removal can be left empty if it is not clear how many minor releases are expected to happen before the next major. removal and since arguments are keyword-only to forbid accidentally swapping them. Examples -------- >>> issue_deprecation_warning( ... "This code is deprecated.", stacklevel=3, since="4.0" ... 
) """ msg += f"\nDeprecated since yt {since}" if removal is not None: msg += f"\nThis feature is planned for removal in yt {removal}" warnings.warn(msg, DeprecationWarning, stacklevel=stacklevel) def future_positional_only(positions2names: dict[int, str], /, **depr_kwargs): """Warn users when using a future positional-only argument as keyword. Note that positional-only arguments are available from Python 3.8 See https://www.python.org/dev/peps/pep-0570/ """ def outer(func: FunctionType): @wraps(func) def inner(*args, **kwargs): for no, name in sorted(positions2names.items()): if name not in kwargs: continue value = kwargs[name] issue_deprecation_warning( f"Using the {name!r} argument as keyword (on position {no}) " "is deprecated. " "Pass the argument as positional to suppress this warning, " f"i.e., use {func.__name__}({value!r}, ...)", stacklevel=3, **depr_kwargs, ) return func(*args, **kwargs) return inner return outer ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/_maintenance/ipython_compat.py0000644000175100001770000000120314714401662020245 0ustar00runnerdockerfrom importlib.metadata import version from importlib.util import find_spec from packaging.version import Version __all__ = [ "IS_IPYTHON", "IPYWIDGETS_ENABLED", ] IS_IPYTHON: bool HAS_IPYWIDGETS_GE_8: bool IPYWIDGETS_ENABLED: bool try: # this name is only defined if running within ipython/jupyter __IPYTHON__ # type: ignore [name-defined] # noqa: B018 except NameError: IS_IPYTHON = False else: IS_IPYTHON = True HAS_IPYWIDGETS_GE_8 = ( Version(version("ipywidgets")) >= Version("8.0.0") if find_spec("ipywidgets") is not None else False ) IPYWIDGETS_ENABLED = IS_IPYTHON and HAS_IPYWIDGETS_GE_8 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/_maintenance/numpy2_compat.py0000644000175100001770000000041414714401662020010 0ustar00runnerdocker# avoid deprecation warnings in numpy >= 2.0 import numpy as np if hasattr(np, "trapezoid"): # np.trapz is deprecated in numpy 2.0 in favor of np.trapezoid trapezoid = np.trapezoid else: trapezoid = np.trapz # type: ignore [attr-defined] # noqa: NPY201 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/_typing.py0000644000175100001770000000206714714401662014251 0ustar00runnerdockerfrom typing import Any, Optional, TypeAlias import numpy as np import unyt as un FieldDescT = tuple[str, tuple[str, list[str], str | None]] KnownFieldsT = tuple[FieldDescT, ...] ParticleType = str FieldType = str FieldName = str FieldKey = tuple[FieldType, FieldName] ImplicitFieldKey = FieldName AnyFieldKey = FieldKey | ImplicitFieldKey DomainDimensions = tuple[int, ...] | list[int] | np.ndarray ParticleCoordinateTuple = tuple[ str, # particle type tuple[np.ndarray, np.ndarray, np.ndarray], # xyz float | np.ndarray, # hsml ] # Geometry specific types AxisName = str AxisOrder = tuple[AxisName, AxisName, AxisName] # types that can be converted to un.Unit Unit: TypeAlias = un.Unit | str # types that can be converted to un.unyt_quantity Quantity = un.unyt_quantity | tuple[float, Unit] # np.ndarray[...] 
syntax is runtime-valid from numpy 1.22, we quote it until our minimal # runtime requirement is bumped to, or beyond this version MaskT = Optional["np.ndarray[Any, np.dtype[np.bool_]]"] AlphaT = Optional["np.ndarray[Any, np.dtype[np.float64]]"] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/_version.py0000644000175100001770000000332214714401662014417 0ustar00runnerdockerfrom typing import NamedTuple from packaging.version import Version __all__ = [ "__version__", "version_info", ] __version__ = "4.4.0" # keep in sync with pyproject.toml class VersionTuple(NamedTuple): """ A minimal representation of the current version number that can be used downstream to check the runtime version simply by comparing with builtin tuples, as can be done with the runtime Python version using sys.version_info https://docs.python.org/3/library/sys.html#sys.version_info """ major: int minor: int micro: int releaselevel: str serial: int def _parse_to_version_info(version_str: str) -> VersionTuple: # adapted from matplotlib 3.5 """ Parse a version string to a namedtuple analogous to sys.version_info. See: https://packaging.pypa.io/en/latest/version.html#packaging.version.parse https://docs.python.org/3/library/sys.html#sys.version_info """ v = Version(version_str) if v.pre is None and v.post is None and v.dev is None: return VersionTuple(v.major, v.minor, v.micro, "final", 0) elif v.dev is not None: return VersionTuple(v.major, v.minor, v.micro, "alpha", v.dev) elif v.pre is not None: releaselevel = {"a": "alpha", "b": "beta", "rc": "candidate"}.get( v.pre[0], "alpha" ) return VersionTuple(v.major, v.minor, v.micro, releaselevel, v.pre[1]) elif v.post is not None: # fallback for v.post: guess-next-dev scheme from setuptools_scm return VersionTuple(v.major, v.minor, v.micro + 1, "alpha", v.post) else: return VersionTuple(v.major, v.minor, v.micro + 1, "alpha", 0) version_info = _parse_to_version_info(__version__) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2591512 yt-4.4.0/yt/analysis_modules/0000755000175100001770000000000014714401715015573 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/analysis_modules/__init__.py0000644000175100001770000000000014714401662017673 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2591512 yt-4.4.0/yt/analysis_modules/absorption_spectrum/0000755000175100001770000000000014714401715021675 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/analysis_modules/absorption_spectrum/__init__.py0000644000175100001770000000000014714401662023775 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/analysis_modules/absorption_spectrum/api.py0000644000175100001770000000030114714401662023013 0ustar00runnerdockerfrom yt.utilities.exceptions import YTModuleRemoved raise YTModuleRemoved( "AbsorptionSpectrum", "https://github.com/trident-project/trident", "https://trident.readthedocs.io/", ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2591512 yt-4.4.0/yt/analysis_modules/cosmological_observation/0000755000175100001770000000000014714401715022661 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 
yt-4.4.0/yt/analysis_modules/cosmological_observation/__init__.py0000644000175100001770000000000014714401662024761 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/analysis_modules/cosmological_observation/api.py0000644000175100001770000000033314714401662024004 0ustar00runnerdockerfrom yt.utilities.exceptions import YTModuleRemoved raise YTModuleRemoved( "CosmologySplice and LightCone", "https://github.com/yt-project/yt_astro_analysis", "https://yt-astro-analysis.readthedocs.io/", ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2591512 yt-4.4.0/yt/analysis_modules/halo_analysis/0000755000175100001770000000000014714401715020421 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/analysis_modules/halo_analysis/__init__.py0000644000175100001770000000000014714401662022521 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/analysis_modules/halo_analysis/api.py0000644000175100001770000000031314714401662021542 0ustar00runnerdockerfrom yt.utilities.exceptions import YTModuleRemoved raise YTModuleRemoved( "halo_analysis", "https://github.com/yt-project/yt_astro_analysis", "https://yt-astro-analysis.readthedocs.io/", ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2591512 yt-4.4.0/yt/analysis_modules/halo_finding/0000755000175100001770000000000014714401715020214 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/analysis_modules/halo_finding/__init__.py0000644000175100001770000000000014714401662022314 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/analysis_modules/halo_finding/api.py0000644000175100001770000000031214714401662021334 0ustar00runnerdockerfrom yt.utilities.exceptions import YTModuleRemoved raise YTModuleRemoved( "halo_finding", "https://github.com/yt-project/yt_astro_analysis", "https://yt-astro-analysis.readthedocs.io/", ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2591512 yt-4.4.0/yt/analysis_modules/halo_mass_function/0000755000175100001770000000000014714401715021446 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/analysis_modules/halo_mass_function/__init__.py0000644000175100001770000000000014714401662023546 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/analysis_modules/halo_mass_function/api.py0000644000175100001770000000027614714401662022577 0ustar00runnerdockerfrom yt.utilities.exceptions import YTModuleRemoved raise YTModuleRemoved( "halo_mass_function", "https://github.com/yt-project/yt_attic", "https://yt-attic.readthedocs.io/", ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2591512 yt-4.4.0/yt/analysis_modules/level_sets/0000755000175100001770000000000014714401715017740 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/analysis_modules/level_sets/__init__.py0000644000175100001770000000000014714401662022040 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 
xustar0022 mtime=1731330994.0 yt-4.4.0/yt/analysis_modules/level_sets/api.py0000644000175100001770000000036014714401662021063 0ustar00runnerdockerraise RuntimeError( "The level_sets module has been moved to yt.data_objects.level_sets." "Please, change the import in your scripts from " "'from yt.analysis_modules.level_sets' to " "'from yt.data_objects.level_sets.'." ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2631514 yt-4.4.0/yt/analysis_modules/particle_trajectories/0000755000175100001770000000000014714401715022154 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/analysis_modules/particle_trajectories/__init__.py0000644000175100001770000000000014714401662024254 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/analysis_modules/particle_trajectories/api.py0000644000175100001770000000030514714401662023276 0ustar00runnerdockerraise RuntimeError( "Particle trajectories are now available from DatasetSeries " "objects as ts.particle_trajectories. The ParticleTrajectories " "analysis module has been removed." ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2631514 yt-4.4.0/yt/analysis_modules/photon_simulator/0000755000175100001770000000000014714401715021201 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/analysis_modules/photon_simulator/__init__.py0000644000175100001770000000000014714401662023301 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/analysis_modules/photon_simulator/api.py0000644000175100001770000000024114714401662022322 0ustar00runnerdockerfrom yt.utilities.exceptions import YTModuleRemoved raise YTModuleRemoved( "photon_simulator", "pyXSIM", "http://hea-www.cfa.harvard.edu/~jzuhone/pyxsim" ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2631514 yt-4.4.0/yt/analysis_modules/ppv_cube/0000755000175100001770000000000014714401715017376 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/analysis_modules/ppv_cube/__init__.py0000644000175100001770000000000014714401662021476 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/analysis_modules/ppv_cube/api.py0000644000175100001770000000030514714401662020520 0ustar00runnerdockerfrom yt.utilities.exceptions import YTModuleRemoved raise YTModuleRemoved( "PPVCube", "https://github.com/yt-project/yt_astro_analysis", "https://yt-astro-analysis.readthedocs.io/", ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2631514 yt-4.4.0/yt/analysis_modules/radmc3d_export/0000755000175100001770000000000014714401715020511 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/analysis_modules/radmc3d_export/__init__.py0000644000175100001770000000000014714401662022611 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/analysis_modules/radmc3d_export/api.py0000644000175100001770000000031414714401662021633 0ustar00runnerdockerfrom yt.utilities.exceptions import YTModuleRemoved raise 
yt-4.4.0/yt/analysis_modules/radmc3d_export/__init__.py (empty file)

yt-4.4.0/yt/analysis_modules/radmc3d_export/api.py:

from yt.utilities.exceptions import YTModuleRemoved

raise YTModuleRemoved(
    "radmc3d_export",
    "https://github.com/yt-project/yt_astro_analysis",
    "https://yt-astro-analysis.readthedocs.io/",
)

yt-4.4.0/yt/analysis_modules/spectral_integrator/__init__.py (empty file)

yt-4.4.0/yt/analysis_modules/spectral_integrator/api.py:

raise RuntimeError(
    "The spectral_integrator module has been moved to yt.fields. "
    "'add_xray_emissivity_field' can now be imported "
    "from the yt module."
)

yt-4.4.0/yt/analysis_modules/star_analysis/__init__.py (empty file)

yt-4.4.0/yt/analysis_modules/star_analysis/api.py:

from yt.utilities.exceptions import YTModuleRemoved

raise YTModuleRemoved(
    "star_analysis",
    "https://github.com/yt-project/yt_attic",
    "https://yt-attic.readthedocs.io/",
)

yt-4.4.0/yt/analysis_modules/sunrise_export/__init__.py (empty file)

yt-4.4.0/yt/analysis_modules/sunrise_export/api.py:

from yt.utilities.exceptions import YTModuleRemoved

raise YTModuleRemoved(
    "sunrise_export",
    "https://github.com/yt-project/yt_attic",
    "https://yt-attic.readthedocs.io/",
)

yt-4.4.0/yt/analysis_modules/sunyaev_zeldovich/__init__.py (empty file)

yt-4.4.0/yt/analysis_modules/sunyaev_zeldovich/api.py:

from yt.utilities.exceptions import YTModuleRemoved

raise YTModuleRemoved(
    "sunyaev_zeldovich",
    "https://github.com/yt-project/yt_astro_analysis",
    "https://yt-astro-analysis.readthedocs.io/",
)
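The particle_trajectories stub above points at the DatasetSeries method that replaced it. A minimal sketch of that usage, assuming a hypothetical set of outputs and particle indices:

import yt

# Loading a glob of outputs yields a DatasetSeries (paths are hypothetical).
ts = yt.load("DD????/DD????")
# Previously ParticleTrajectories(ts, indices); now a method on the series:
trajs = ts.particle_trajectories([1, 2, 3])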
yt-4.4.0/yt/analysis_modules/two_point_functions/__init__.py (empty file)

yt-4.4.0/yt/analysis_modules/two_point_functions/api.py:

from yt.utilities.exceptions import YTModuleRemoved

raise YTModuleRemoved(
    "two_point_functions",
    "https://github.com/yt-project/yt_attic",
    "https://yt-attic.readthedocs.io/",
)

yt-4.4.0/yt/api.py:

"""
API for yt

"""

yt-4.4.0/yt/arraytypes.py:

import numpy as np

# Now define convenience functions


def blankRecordArray(desc, elements):
    """
    Accept a descriptor describing a recordarray, and return one that's full
    of zeros

    This seems like it should be in the numpy distribution...
    """
    blanks = []
    for atype in desc["formats"]:
        blanks.append(np.zeros(elements, dtype=atype))
    return np.rec.fromarrays(blanks, **desc)
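A quick usage sketch for blankRecordArray; the descriptor follows NumPy's record-array convention with "names" and "formats" keys, and the field names below are made up:

import numpy as np

from yt.arraytypes import blankRecordArray

desc = {"names": ["mass", "id"], "formats": [np.float64, np.int64]}
rec = blankRecordArray(desc, 5)
# rec.mass and rec.id are zero-filled arrays of length 5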
yt-4.4.0/yt/config.py:

import os

from yt.utilities.configure import YTConfig, config_dir, configuration_callbacks

ytcfg_defaults = {}

ytcfg_defaults["yt"] = {
    "serialize": False,
    "only_deserialize": False,
    "time_functions": False,
    "colored_logs": False,
    "suppress_stream_logging": False,
    "stdout_stream_logging": False,
    "log_level": 20,
    "inline": False,
    "num_threads": -1,
    "store_parameter_files": False,
    "parameter_file_store": "parameter_files.csv",
    "maximum_stored_datasets": 500,
    "skip_dataset_cache": True,
    "load_field_plugins": False,
    "plugin_filename": "my_plugins.py",
    "parallel_traceback": False,
    "pasteboard_repo": "",
    "reconstruct_index": True,
    "test_storage_dir": "/does/not/exist",
    "test_data_dir": "/does/not/exist",
    "enzo_db": "",
    "answer_testing_tolerance": 3,
    "answer_testing_bitwise": False,
    "gold_standard_filename": "gold311",
    "local_standard_filename": "local001",
    "answer_tests_url": "http://answers.yt-project.org/{1}_{2}",
    "sketchfab_api_key": "None",
    "imagebin_api_key": "e1977d9195fe39e",
    "imagebin_upload_url": "https://api.imgur.com/3/image",
    "imagebin_delete_url": "https://api.imgur.com/3/image/{delete_hash}",
    "curldrop_upload_url": "http://use.yt/upload",
    "thread_field_detection": False,
    "ignore_invalid_unit_operation_errors": False,
    "chunk_size": 1000,
    "xray_data_dir": "/does/not/exist",
    "supp_data_dir": "/does/not/exist",
    "default_colormap": "cmyt.arbre",
    "ray_tracing_engine": "yt",
    "internals": {
        "within_testing": False,
        "parallel": False,
        "strict_requires": False,
        "global_parallel_rank": 0,
        "global_parallel_size": 1,
        "topcomm_parallel_rank": 0,
        "topcomm_parallel_size": 1,
        "command_line": False,
    },
}

# For backward compatibility, do not use these vars internally in yt
CONFIG_DIR = config_dir()

_global_config_file = YTConfig.get_global_config_file()
_local_config_file = YTConfig.get_local_config_file()

# Load the config
ytcfg = YTConfig()
ytcfg.update(ytcfg_defaults, metadata={"source": "defaults"})

# Try loading the local config first, otherwise fall back to global config
if os.path.exists(_local_config_file):
    ytcfg.read(_local_config_file)
elif os.path.exists(_global_config_file):
    ytcfg.read(_global_config_file)


def _setup_postinit_configuration():
    """This is meant to be run last in yt.__init__"""
    for callback in configuration_callbacks:
        callback(ytcfg)

yt-4.4.0/yt/data_objects/__init__.py (empty file)

yt-4.4.0/yt/data_objects/analyzer_objects.py:

import inspect

from yt.utilities.object_registries import analysis_task_registry


class AnalysisTask:
    def __init_subclass__(cls, *args, **kwargs):
        if hasattr(cls, "skip") and not cls.skip:
            return
        analysis_task_registry[cls.__name__] = cls

    def __init__(self, *args, **kwargs):
        # This should only get called if the subclassed object
        # does not override
        if len(args) + len(kwargs) != len(self._params):
            raise RuntimeError
        self.__dict__.update(zip(self._params, args, strict=False))
        self.__dict__.update(kwargs)

    def __repr__(self):
        # Stolen from YTDataContainer.__repr__
        s = f"{self.__class__.__name__}: "
        s += ", ".join(f"{i}={getattr(self, i)}" for i in self._params)
        return s


def analysis_task(params=None):
    if params is None:
        params = ()

    def create_new_class(func):
        cls = type(func.__name__, (AnalysisTask,), {"eval": func, "_params": params})
        return cls

    return create_new_class


@analysis_task(("field",))
def MaximumValue(params, data_object):
    v = data_object.quantities["MaxLocation"](params.field)[0]
    return v


@analysis_task()
def CurrentTimeYears(params, ds):
    return ds.current_time * ds["years"]


class SlicePlotDataset(AnalysisTask):
    _params = ["field", "axis", "center"]

    def __init__(self, *args, **kwargs):
        from yt.visualization.api import SlicePlot

        self.SlicePlot = SlicePlot
        AnalysisTask.__init__(self, *args, **kwargs)

    def eval(self, ds):
        slc = self.SlicePlot(ds, self.axis, self.field, center=self.center)
        return slc.save()


class QuantityProxy(AnalysisTask):
    _params = None
    quantity_name = None

    def __repr__(self):
        # Stolen from YTDataContainer.__repr__
        s = f"{self.__class__.__name__}: "
        s += ", ".join([str(list(self.args))])
        s += ", ".join(f"{k}={v}" for k, v in self.kwargs.items())
        return s

    def __init__(self, *args, **kwargs):
        self.args = args
        self.kwargs = kwargs

    def eval(self, data_object):
        rv = data_object.quantities[self.quantity_name](*self.args, **self.kwargs)
        return rv


class ParameterValue(AnalysisTask):
    _params = ["parameter"]

    def __init__(self, parameter, cast=None):
        self.parameter = parameter
        if cast is None:

            def _identity(x):
                return x

            cast = _identity
        self.cast = cast

    def eval(self, ds):
        return self.cast(ds.get_parameter(self.parameter))


def create_quantity_proxy(quantity_object):
    # inspect.getargspec was removed in Python 3.11; getfullargspec carries
    # the same information (varkw is the old "keywords" slot).
    spec = inspect.getfullargspec(quantity_object[1])
    args, kwargs = spec.args, spec.varkw
    # Strip off 'data' which is on every quantity function
    params = args[1:]
    if kwargs is not None:
        # kwargs is the name of the ** parameter, so append it as one entry
        params += [kwargs]
    dd = {"_params": params, "quantity_name": quantity_object[0]}
    cls = type(quantity_object[0], (QuantityProxy,), dd)
    return cls
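The analysis_task decorator above builds AnalysisTask subclasses dynamically: the declared parameter names are bound as instance attributes at construction, and the decorated function becomes the task's eval method. A minimal sketch (MinimumValue is a made-up task; "MinLocation" assumes the derived-quantity name used in the module's own MaximumValue example):

from yt.data_objects.analyzer_objects import analysis_task


@analysis_task(("field",))
def MinimumValue(params, data_object):
    # "params" is the task instance itself, carrying the bound .field
    return data_object.quantities["MinLocation"](params.field)[0]


task = MinimumValue(("gas", "density"))  # binds task.field
# A time-series analysis pipeline would then call task.eval(data_object).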
yt-4.4.0/yt/data_objects/api.py:

from . import construction_data_containers as __cdc, selection_objects as __sdc
from .analyzer_objects import AnalysisTask, analysis_task
from .image_array import ImageArray
from .index_subobjects import AMRGridPatch, OctreeSubset
from .particle_filters import add_particle_filter, particle_filter
from .profiles import ParticleProfile, Profile1D, Profile2D, Profile3D, create_profile
from .static_output import Dataset
from .time_series import DatasetSeries, DatasetSeriesObject

yt-4.4.0/yt/data_objects/construction_data_containers.py:

import fileinput
import io
import os
import warnings
import zipfile
from functools import partial, wraps
from re import finditer
from tempfile import NamedTemporaryFile, TemporaryFile

import numpy as np
from more_itertools import always_iterable

from yt._maintenance.deprecation import issue_deprecation_warning
from yt._typing import FieldKey
from yt.config import ytcfg
from yt.data_objects.field_data import YTFieldData
from yt.data_objects.selection_objects.data_selection_objects import (
    YTSelectionContainer1D,
    YTSelectionContainer2D,
    YTSelectionContainer3D,
)
from yt.fields.field_exceptions import NeedsGridType, NeedsOriginalGrid
from yt.frontends.sph.data_structures import ParticleDataset
from yt.funcs import (
    get_memory_usage,
    is_sequence,
    iter_fields,
    mylog,
    only_on_root,
    validate_moment,
)
from yt.geometry import particle_deposit as particle_deposit
from yt.geometry.coordinates.cartesian_coordinates import all_data
from yt.loaders import load_uniform_grid
from yt.units._numpy_wrapper_functions import uconcatenate
from yt.units.unit_object import Unit  # type: ignore
from yt.units.yt_array import YTArray
from yt.utilities.exceptions import (
    YTNoAPIKey,
    YTNotInsideNotebook,
    YTParticleDepositionNotImplemented,
    YTTooManyVertices,
)
from yt.utilities.grid_data_format.writer import write_to_gdf
from yt.utilities.lib.cyoctree import CyOctree
from yt.utilities.lib.interpolators import ghost_zone_interpolate
from yt.utilities.lib.marching_cubes import march_cubes_grid, march_cubes_grid_flux
from yt.utilities.lib.misc_utilities import fill_region, fill_region_float
from yt.utilities.lib.pixelization_routines import (
    interpolate_sph_grid_gather,
    interpolate_sph_positions_gather,
    normalization_1d_utility,
    normalization_3d_utility,
    pixelize_sph_kernel_arbitrary_grid,
)
from yt.utilities.lib.quad_tree import QuadTree
from yt.utilities.math_utils import compute_stddev_image
from yt.utilities.minimal_representation import MinimalProjectionData
from yt.utilities.parallel_tools.parallel_analysis_interface import (
    communication_system,
    parallel_objects,
    parallel_root_only,
)
from yt.visualization.color_maps import get_colormap_lut


class YTStreamline(YTSelectionContainer1D):
    """
    This is a streamline, which is a set of points defined as being
    parallel to some vector field.

    This object is typically accessed through the Streamlines.path
    function.  The resulting arrays have their dimensionality reduced to
    one, and an ordered list of points at an (x,y) tuple along `axis` are
    available, as is the `t` field, which corresponds to a unitless
    measurement along the ray from start to end.
    Parameters
    ----------
    positions : array-like
        List of streamline positions
    length : float
        The magnitude of the distance; dts will be divided by this
    fields : list of strings, optional
        If you want the object to pre-retrieve a set of fields, supply them
        here.  This is not necessary.
    ds : dataset object
        Passed in to access the index
    kwargs : dict of items
        Any additional values are passed as field parameters that can be
        accessed by generated fields.

    Examples
    --------

    >>> from yt.visualization.api import Streamlines
    >>> streamlines = Streamlines(ds, [0.5] * 3)
    >>> streamlines.integrate_through_volume()
    >>> stream = streamlines.path(0)
    >>> fig, ax = plt.subplots()
    >>> ax.set_yscale("log")
    >>> ax.plot(stream["t"], stream["gas", "density"], "-x")
    """

    _type_name = "streamline"
    _con_args = ("positions",)
    sort_by = "t"

    def __init__(self, positions, length=1.0, fields=None, ds=None, **kwargs):
        YTSelectionContainer1D.__init__(self, ds, fields, **kwargs)
        self.positions = positions
        self.dts = np.empty_like(positions[:, 0])
        self.dts[:-1] = np.sqrt(
            np.sum((self.positions[1:] - self.positions[:-1]) ** 2, axis=1)
        )
        # The final point has no following interval, so repeat the previous
        # one; the original "self.dts[-1] = self.dts[-1]" was a no-op that
        # left the last slot of the empty_like allocation uninitialized.
        self.dts[-1] = self.dts[-2]
        self.length = length
        self.dts /= length
        self.ts = np.add.accumulate(self.dts)
        self._set_center(self.positions[0])
        self.set_field_parameter("center", self.positions[0])
        self._dts, self._ts = {}, {}
        # self._refresh_data()

    def _get_list_of_grids(self):
        # Get the value of the line at each LeftEdge and RightEdge
        LE = self.ds.grid_left_edge
        RE = self.ds.grid_right_edge
        # Check left faces first
        min_streampoint = np.min(self.positions, axis=0)
        max_streampoint = np.max(self.positions, axis=0)
        p = np.all((min_streampoint <= RE) & (max_streampoint > LE), axis=1)
        self._grids = self.index.grids[p]

    def _get_data_from_grid(self, grid, field):
        # No child masking here; it happens inside the mask cut
        mask = self._get_cut_mask(grid)
        if field == "dts":
            return self._dts[grid.id]
        if field == "t":
            return self._ts[grid.id]
        return grid[field].flat[mask]

    def _get_cut_mask(self, grid):
        points_in_grid = np.all(self.positions > grid.LeftEdge, axis=1) & np.all(
            self.positions <= grid.RightEdge, axis=1
        )
        pids = np.where(points_in_grid)[0]
        mask = np.zeros(points_in_grid.sum(), dtype="int64")
        dts = np.zeros(points_in_grid.sum(), dtype="float64")
        ts = np.zeros(points_in_grid.sum(), dtype="float64")
        for mi, (i, pos) in enumerate(
            zip(pids, self.positions[points_in_grid], strict=True)
        ):
            if not points_in_grid[i]:
                continue
            ci = ((pos - grid.LeftEdge) / grid.dds).astype("int64")
            if grid.child_mask[ci[0], ci[1], ci[2]] == 0:
                continue
            for j in range(3):
                ci[j] = min(ci[j], grid.ActiveDimensions[j] - 1)
            mask[mi] = np.ravel_multi_index(ci, grid.ActiveDimensions)
            dts[mi] = self.dts[i]
            ts[mi] = self.ts[i]
        self._dts[grid.id] = dts
        self._ts[grid.id] = ts
        return mask


class YTProj(YTSelectionContainer2D):
    _key_fields = YTSelectionContainer2D._key_fields + ["weight_field"]
    _con_args = ("axis", "field", "weight_field")
    _container_fields = ("px", "py", "pdx", "pdy", "weight_field")

    def __init__(
        self,
        field,
        axis,
        weight_field=None,
        center=None,
        ds=None,
        data_source=None,
        method="integrate",
        field_parameters=None,
        max_level=None,
        *,
        moment=1,
    ):
        super().__init__(axis, ds, field_parameters)
        if method == "mip":
            issue_deprecation_warning(
                "The 'mip' method value is a deprecated alias for 'max'. 
" "Please use max directly.", since="4.1", stacklevel=4, ) method = "max" if method == "sum": self.method = "integrate" self._sum_only = True else: self.method = method self._sum_only = False if self.method in ["max", "mip"]: self.func = np.max elif self.method == "min": self.func = np.min elif self.method == "integrate": self.func = np.sum # for the future else: raise NotImplementedError(self.method) validate_moment(moment, weight_field) self.moment = moment self._set_center(center) self._projected_units = {} if data_source is None: data_source = self.ds.all_data() if max_level is not None: data_source.max_level = max_level for k, v in data_source.field_parameters.items(): if k not in self.field_parameters or self._is_default_field_parameter(k): self.set_field_parameter(k, v) self.data_source = data_source if weight_field is None: self.weight_field = weight_field else: self.weight_field = self._determine_fields(weight_field)[0] for f in self._determine_fields(field): nodal_flag = self.ds._get_field_info(f).nodal_flag if any(nodal_flag): raise RuntimeError( "Nodal fields are currently not supported for projections." ) @property def blocks(self): return self.data_source.blocks @property def field(self): return [k for k in self.field_data.keys() if k not in self._container_fields] def get_data(self, fields=None): fields = self._determine_fields(fields) sfields = [] if self.moment == 2: def _sq_field(field, data, fname: FieldKey): return data[fname] ** 2 for field in fields: fd = self.ds._get_field_info(field) ftype, fname = field self.ds.add_field( (ftype, f"tmp_{fname}_squared"), partial(_sq_field, fname=field), sampling_type=fd.sampling_type, units=f"({fd.units})*({fd.units})", ) sfields.append((ftype, f"tmp_{fname}_squared")) nfields = len(fields) nsfields = len(sfields) # We need a new tree for every single set of fields we add if nfields == 0: return if isinstance(self.ds, ParticleDataset): return tree = self._get_tree(nfields + nsfields) # This only needs to be done if we are in parallel; otherwise, we can # safely build the mesh as we go. 
if communication_system.communicators[-1].size > 1: for chunk in self.data_source.chunks([], "io", local_only=False): self._initialize_chunk(chunk, tree) _units_initialized = False with self.data_source._field_parameter_state(self.field_parameters): for chunk in parallel_objects( self.data_source.chunks([], "io", local_only=True) ): if not _units_initialized: self._initialize_projected_units(fields, chunk) _units_initialized = True self._handle_chunk(chunk, fields + sfields, tree) # if there's less than nprocs chunks, units won't be initialized # on all processors, so sync with _projected_units on rank 0 projected_units = self.comm.mpi_bcast(self._projected_units) self._projected_units = projected_units # Note that this will briefly double RAM usage if self.method in ["max", "mip"]: merge_style = -1 op = "max" elif self.method == "min": merge_style = -2 op = "min" elif self.method == "integrate": merge_style = 1 op = "sum" else: raise NotImplementedError # TODO: Add the combine operation xax = self.ds.coordinates.x_axis[self.axis] yax = self.ds.coordinates.y_axis[self.axis] ix, iy, ires, nvals, nwvals = tree.get_all(False, merge_style) px, pdx = self.ds.index._icoords_to_fcoords( ix[:, None], ires // self.ds.ires_factor, axes=(xax,) ) py, pdy = self.ds.index._icoords_to_fcoords( iy[:, None], ires // self.ds.ires_factor, axes=(yax,) ) px = px.ravel() py = py.ravel() pdx = pdx.ravel() pdy = pdy.ravel() np.multiply(pdx, 0.5, pdx) np.multiply(pdy, 0.5, pdy) nvals = self.comm.mpi_allreduce(nvals, op=op) nwvals = self.comm.mpi_allreduce(nwvals, op=op) if self.weight_field is not None: # If there are 0s remaining in the weight vals # this will not throw an error, but silently # return nans for vals where dividing by 0 # Leave as NaNs to be auto-masked by Matplotlib with np.errstate(invalid="ignore"): np.divide(nvals, nwvals[:, None], nvals) # We now convert to half-widths and center-points data = {} code_length = self.ds.domain_width.units data["px"] = self.ds.arr(px, code_length) data["py"] = self.ds.arr(py, code_length) data["weight_field"] = nwvals data["pdx"] = self.ds.arr(pdx, code_length) data["pdy"] = self.ds.arr(pdy, code_length) data["fields"] = nvals # Now we run the finalizer, which is ignored if we don't need it field_data = np.hsplit(data.pop("fields"), nfields + nsfields) for fi, field in enumerate(fields): mylog.debug("Setting field %s", field) input_units = self._projected_units[field] fvals = field_data[fi].ravel() if self.moment == 2: fvals = compute_stddev_image(field_data[fi + nfields].ravel(), fvals) self[field] = self.ds.arr(fvals, input_units) for i in list(data.keys()): self[i] = data.pop(i) mylog.info("Projection completed") if self.moment == 2: for field in sfields: self.ds.field_info.pop(field) self.tree = tree def to_pw(self, fields=None, center="center", width=None, origin="center-window"): r"""Create a :class:`~yt.visualization.plot_window.PWViewerMPL` from this object. This is a bare-bones mechanism of creating a plot window from this object, which can then be moved around, zoomed, and on and on. All behavior of the plot window is relegated to that routine. 
""" pw = self._get_pw(fields, center, width, origin, "Projection") return pw def plot(self, fields=None): if hasattr(self.data_source, "left_edge") and hasattr( self.data_source, "right_edge" ): left_edge = self.data_source.left_edge right_edge = self.data_source.right_edge center = (left_edge + right_edge) / 2.0 width = right_edge - left_edge xax = self.ds.coordinates.x_axis[self.axis] yax = self.ds.coordinates.y_axis[self.axis] lx, rx = left_edge[xax], right_edge[xax] ly, ry = left_edge[yax], right_edge[yax] width = (rx - lx), (ry - ly) else: width = self.ds.domain_width center = self.ds.domain_center pw = self._get_pw(fields, center, width, "native", "Projection") try: pw.show() except YTNotInsideNotebook: pass return pw def _initialize_projected_units(self, fields, chunk): for field in self.data_source._determine_fields(fields): if field in self._projected_units: continue finfo = self.ds._get_field_info(field) if finfo.units is None: # First time calling a units="auto" field, infer units and cache # for future field accesses. finfo.units = str(chunk[field].units) field_unit = Unit(finfo.output_units, registry=self.ds.unit_registry) if self.method in ("min", "max") or self._sum_only: path_length_unit = Unit(registry=self.ds.unit_registry) else: ax_name = self.ds.coordinates.axis_name[self.axis] path_element_name = ("index", f"path_element_{ax_name}") path_length_unit = self.ds.field_info[path_element_name].units path_length_unit = Unit( path_length_unit, registry=self.ds.unit_registry ) # Only convert to appropriate unit system for path # elements that aren't angles if not path_length_unit.is_dimensionless: path_length_unit = path_length_unit.get_base_equivalent( unit_system=self.ds.unit_system ) if self.weight_field is None: self._projected_units[field] = field_unit * path_length_unit else: self._projected_units[field] = field_unit class YTParticleProj(YTProj): """ A projection operation optimized for SPH particle data. """ _type_name = "particle_proj" def __init__( self, field, axis, weight_field=None, center=None, ds=None, data_source=None, method="integrate", field_parameters=None, max_level=None, *, moment=1, ): super().__init__( field, axis, weight_field, center, ds, data_source, method, field_parameters, max_level, moment=moment, ) def _handle_chunk(self, chunk, fields, tree): raise NotImplementedError("Particle projections have not yet been implemented") class YTQuadTreeProj(YTProj): """ This is a data object corresponding to a line integral through the simulation domain. This object is typically accessed through the `proj` object that hangs off of index objects. YTQuadTreeProj is a projection of a `field` along an `axis`. The field can have an associated `weight_field`, in which case the values are multiplied by a weight before being summed, and then divided by the sum of that weight; the two fundamental modes of operating are direct line integral (no weighting) and average along a line of sight (weighting.) What makes `proj` different from the standard projection mechanism is that it utilizes a quadtree data structure, rather than the old mechanism for projections. It will not run in parallel, but serial runs should be substantially faster. Note also that lines of sight are integrated at every projected finest-level cell. Parameters ---------- field : string This is the field which will be "projected" along the axis. If multiple are specified (in a list) they will all be projected in the first pass. axis : int The axis along which to slice. Can be 0, 1, or 2 for x, y, z. 
    weight_field : string
        If supplied, the field being projected will be multiplied by this
        weight value before being integrated, and at the conclusion of the
        projection the resultant values will be divided by the projected
        `weight_field`.
    center : array_like, optional
        The 'center' supplied to fields that use it.  Note that this does
        not have to have `coord` as one value.  Strictly optional.
    data_source : `yt.data_objects.data_containers.YTSelectionContainer`, optional
        If specified, this will be the data source used for selecting regions
        to project.
    method : string, optional
        The method of projection to be performed.
        "integrate" : integration along the axis
        "mip" : maximum intensity projection (deprecated)
        "max" : maximum intensity projection
        "min" : minimum intensity projection
        "sum" : same as "integrate", except that we don't multiply by the
        path length
        WARNING: The "sum" option should only be used for uniform resolution
        grid datasets, as other datasets may result in unphysical images.
    style : string, optional
        The same as the method keyword.  Deprecated as of version 3.0.2.
        Please use method keyword instead.
    field_parameters : dict of items
        Values to be passed as field parameters that can be accessed by
        generated fields.
    moment : integer, optional
        for a weighted projection, moment = 1 (the default) corresponds to a
        weighted average. moment = 2 corresponds to a weighted standard
        deviation.

    Examples
    --------

    >>> ds = load("RedshiftOutput0005")
    >>> prj = ds.proj(("gas", "density"), 0)
    >>> print(prj["gas", "density"])
    """

    _type_name = "quad_proj"

    def __init__(
        self,
        field,
        axis,
        weight_field=None,
        center=None,
        ds=None,
        data_source=None,
        method="integrate",
        field_parameters=None,
        max_level=None,
        *,
        moment=1,
    ):
        super().__init__(
            field,
            axis,
            weight_field,
            center,
            ds,
            data_source,
            method,
            field_parameters,
            max_level,
            moment=moment,
        )
        if not self.deserialize(field):
            self.get_data(field)
            self.serialize()

    @property
    def _mrep(self):
        return MinimalProjectionData(self)

    def deserialize(self, fields):
        if not ytcfg.get("yt", "serialize"):
            return False
        for field in fields:
            self[field] = None
        deserialized_successfully = False
        store_file = self.ds.parameter_filename + ".yt"
        if os.path.isfile(store_file):
            deserialized_successfully = self._mrep.restore(store_file, self.ds)
            if deserialized_successfully:
                mylog.info("Using previous projection data from %s", store_file)
                for field, field_data in self._mrep.field_data.items():
                    self[field] = field_data
        if not deserialized_successfully:
            for field in fields:
                del self[field]
        return deserialized_successfully

    def serialize(self):
        if not ytcfg.get("yt", "serialize"):
            return
        self._mrep.store(self.ds.parameter_filename + ".yt")

    def _get_tree(self, nvals):
        xax = self.ds.coordinates.x_axis[self.axis]
        yax = self.ds.coordinates.y_axis[self.axis]
        xd = self.ds.domain_dimensions[xax]
        yd = self.ds.domain_dimensions[yax]
        bounds = (
            self.ds.domain_left_edge[xax],
            self.ds.domain_right_edge[xax],
            self.ds.domain_left_edge[yax],
            self.ds.domain_right_edge[yax],
        )
        return QuadTree(
            np.array([xd, yd], dtype="int64"), nvals, bounds, method=self.method
        )

    def _initialize_chunk(self, chunk, tree):
        icoords = chunk.icoords
        xax = self.ds.coordinates.x_axis[self.axis]
        yax = self.ds.coordinates.y_axis[self.axis]
        i1 = icoords[:, xax]
        i2 = icoords[:, yax]
        ilevel = chunk.ires * self.ds.ires_factor
        tree.initialize_chunk(i1, i2, ilevel)

    def _handle_chunk(self, chunk, fields, tree):
        mylog.debug(
            "Adding chunk (%s) to tree (%0.3e GB RAM)",
            chunk.ires.size,
            get_memory_usage() / 1024.0,
        )
        if self.method in ("min", 
"max") or self._sum_only: dl = self.ds.quan(1.0, "") else: ax_name = self.ds.coordinates.axis_name[self.axis] dl = chunk["index", f"path_element_{ax_name}"] # This is done for cases where our path element does not have a CGS # equivalent. Once "preferred units" have been implemented, this # will not be necessary at all, as the final conversion will occur # at the display layer. if not dl.units.is_dimensionless: dl.convert_to_units(self.ds.unit_system["length"]) v = np.empty((chunk.ires.size, len(fields)), dtype="float64") for i, field in enumerate(fields): d = chunk[field] * dl v[:, i] = d if self.weight_field is not None: w = chunk[self.weight_field] np.multiply(v, w[:, None], v) np.multiply(w, dl, w) else: w = np.ones(chunk.ires.size, dtype="float64") icoords = chunk.icoords xax = self.ds.coordinates.x_axis[self.axis] yax = self.ds.coordinates.y_axis[self.axis] i1 = icoords[:, xax] i2 = icoords[:, yax] ilevel = chunk.ires * self.ds.ires_factor tree.add_chunk_to_tree(i1, i2, ilevel, v, w) class YTCoveringGrid(YTSelectionContainer3D): """A 3D region with all data extracted to a single, specified resolution. Left edge should align with a cell boundary, but defaults to the closest cell boundary. Parameters ---------- level : int The resolution level data to which data will be gridded. Level 0 is the root grid dx for that dataset. (The grid resolution will be simulation size / 2**level along each grid axis.) left_edge : array_like The left edge of the region to be extracted. Specify units by supplying a YTArray, otherwise code length units are assumed. dims : array_like Number of cells along each axis of resulting covering_grid fields : array_like, optional A list of fields that you'd like pre-generated for your object num_ghost_zones : integer, optional The number of padding ghost zones used when accessing fields. data_source : An existing data object to intersect with the covering grid. Grid points outside the data_source will exist as empty values. 
Examples -------- >>> cube = ds.covering_grid(2, left_edge=[0.0, 0.0, 0.0], dims=[128, 128, 128]) """ _spatial = True _type_name = "covering_grid" _con_args = ("level", "left_edge", "ActiveDimensions") _container_fields = ( ("index", "dx"), ("index", "dy"), ("index", "dz"), ("index", "x"), ("index", "y"), ("index", "z"), ) _base_grid = None def __init__( self, level, left_edge, dims, fields=None, ds=None, num_ghost_zones=0, use_pbar=True, field_parameters=None, *, data_source=None, ): if field_parameters is None: center = None else: center = field_parameters.get("center", None) super().__init__(center, ds, field_parameters, data_source=data_source) self.level = level self.left_edge = self._sanitize_edge(left_edge) self.ActiveDimensions = self._sanitize_dims(dims) rdx = self.ds.domain_dimensions * self.ds.relative_refinement(0, level) # normalize dims as a non-zero dim array dims = np.array(list(always_iterable(dims))) rdx[np.where(dims - 2 * num_ghost_zones <= 1)] = 1 # issue 602 self.base_dds = self.ds.domain_width / self.ds.domain_dimensions self.dds = self.ds.domain_width / rdx.astype("float64") self.right_edge = self.left_edge + self.ActiveDimensions * self.dds self._num_ghost_zones = num_ghost_zones self._use_pbar = use_pbar self.global_startindex = ( np.rint((self.left_edge - self.ds.domain_left_edge) / self.dds).astype( "int64" ) + self.ds.domain_offset ) self._setup_data_source() self.get_data(fields) def get_global_startindex(self): r"""Get the global start index of the covering grid.""" return self.global_startindex def to_xarray(self, fields=None): r"""Export this fixed-resolution object to an xarray Dataset This function will take a regularized grid and optionally a list of fields and return an xarray Dataset object. If xarray is not importable, this will raise ImportError. Parameters ---------- fields : list of strings or tuple field names, default None If this is supplied, it is the list of fields to be exported into the data frame. If not supplied, whatever fields presently exist will be used. Returns ------- arr : Dataset The data contained in the object. Examples -------- >>> dd = ds.r[::256j, ::256j, ::256j] >>> xf1 = dd.to_xarray([("gas", "density"), ("gas", "temperature")]) >>> dd["gas", "velocity_magnitude"] >>> xf2 = dd.to_xarray() """ import xarray as xr data = {} coords = {} for f in fields or self.field_data.keys(): data[f] = { "dims": ( "x", "y", "z", ), "data": self[f], "attrs": {"units": str(self[f].uq)}, } # We have our data, so now we generate both our coordinates and our metadata. 
LE = self.LeftEdge + self.dds / 2.0 RE = self.RightEdge - self.dds / 2.0 N = self.ActiveDimensions u = str(LE.uq) for i, ax in enumerate("xyz"): coords[ax] = { "dims": (ax,), "data": np.mgrid[LE[i] : RE[i] : N[i] * 1j], "attrs": {"units": u}, } return xr.Dataset.from_dict({"data_vars": data, "coords": coords}) @property def icoords(self): ic = np.indices(self.ActiveDimensions).astype("int64") return np.column_stack( [ i.ravel() + gi for i, gi in zip(ic, self.get_global_startindex(), strict=True) ] ) @property def fwidth(self): fw = np.ones((self.ActiveDimensions.prod(), 3), dtype="float64") fw *= self.dds return fw @property def fcoords(self): LE = self.LeftEdge + self.dds / 2.0 RE = self.RightEdge - self.dds / 2.0 N = self.ActiveDimensions fc = np.mgrid[ LE[0] : RE[0] : N[0] * 1j, LE[1] : RE[1] : N[1] * 1j, LE[2] : RE[2] : N[2] * 1j, ] return np.column_stack([f.ravel() for f in fc]) @property def ires(self): tr = np.ones(self.ActiveDimensions.prod(), dtype="int64") tr *= self.level return tr def set_field_parameter(self, name, val): super().set_field_parameter(name, val) if self._data_source is not None: self._data_source.set_field_parameter(name, val) def _sanitize_dims(self, dims): if not is_sequence(dims): dims = [dims] * len(self.ds.domain_left_edge) if len(dims) != len(self.ds.domain_left_edge): raise RuntimeError( "Length of dims must match the dimensionality of the dataset" ) return np.array(dims, dtype="int32") def _sanitize_edge(self, edge): if not is_sequence(edge): edge = [edge] * len(self.ds.domain_left_edge) if len(edge) != len(self.ds.domain_left_edge): raise RuntimeError( "Length of edges must match the dimensionality of the dataset" ) if hasattr(edge, "units"): if edge.units.registry is self.ds.unit_registry: return edge edge_units = edge.units.copy() edge_units.registry = self.ds.unit_registry else: edge_units = "code_length" return self.ds.arr(edge, edge_units, dtype="float64") def _reshape_vals(self, arr): if len(arr.shape) == 3: return arr return arr.reshape(self.ActiveDimensions, order="C") @property def shape(self): return tuple(self.ActiveDimensions.tolist()) def _setup_data_source(self): reg = self.ds.region(self.center, self.left_edge, self.right_edge) if self._data_source is None: # note: https://github.com/yt-project/yt/pull/4063 implemented # a data_source kwarg for YTCoveringGrid, but not YTArbitraryGrid # so as of 4063, this will always be True for YTArbitraryGrid # instances. self._data_source = reg else: self._data_source = self.ds.intersection([self._data_source, reg]) self._data_source.min_level = 0 self._data_source.max_level = self.level # This triggers "special" behavior in the RegionSelector to ensure we # select *cells* whose bounding boxes overlap with our region, not just # their cell centers. self._data_source.loose_selection = True def get_data(self, fields=None): if fields is None: return fields = self._determine_fields(fields) fields_to_get = [f for f in fields if f not in self.field_data] fields_to_get = self._identify_dependencies(fields_to_get) if len(fields_to_get) == 0: return try: fill, gen, part, alias = self._split_fields(fields_to_get) except NeedsGridType as e: if self._num_ghost_zones == 0: num_ghost_zones = self._num_ghost_zones raise RuntimeError( "Attempting to access a field that needs ghost zones, but " f"{num_ghost_zones = }. You should create the covering grid " "with nonzero num_ghost_zones." 
) from e else: raise # checking if we have a sph particles if len(part) == 0: is_sph_field = False else: is_sph_field = self.ds.field_info[part[0]].is_sph_field if len(part) > 0 and len(alias) == 0: if is_sph_field: self._fill_sph_particles(fields) for field in fields: if field in gen: gen.remove(field) else: self._fill_particles(part) if len(fill) > 0: self._fill_fields(fill) for a, f in sorted(alias.items()): if f.sampling_type == "particle" and not is_sph_field: self[a] = self._data_source[f] else: self[a] = f(self) self.field_data[a].convert_to_units(f.output_units) if len(gen) > 0: part_gen = [] cell_gen = [] for field in gen: finfo = self.ds.field_info[field] if finfo.sampling_type == "particle": part_gen.append(field) else: cell_gen.append(field) self._generate_fields(cell_gen) for p in part_gen: self[p] = self._data_source[p] def _split_fields(self, fields_to_get): fill, gen = self.index._split_fields(fields_to_get) particles = [] alias = {} for field in gen: finfo = self.ds._get_field_info(field) if finfo.is_alias: alias[field] = finfo continue try: finfo.check_available(self) except NeedsOriginalGrid: fill.append(field) for field in fill: finfo = self.ds._get_field_info(field) if finfo.sampling_type == "particle": particles.append(field) gen = [f for f in gen if f not in fill and f not in alias] fill = [f for f in fill if f not in particles] return fill, gen, particles, alias def _fill_particles(self, part): for p in part: self[p] = self._data_source[p] def _check_sph_type(self, finfo): """ Check if a particle field has an SPH type. There are several ways that this can happen, checked in this order: 1. If the field type is a known particle filter, and is in the list of SPH ptypes, use this type 2. If the field is an alias of an SPH field, but its type is not "gas", use this type 3. Otherwise, if the field type is not in the SPH types list and it is not "gas", we fail If we get through without erroring out, we either have a known SPH particle filter, an alias of an SPH field, the default SPH ptype, or "gas" for an SPH field. Then we return the particle type. 
""" ftype, fname = finfo.name sph_ptypes = self.ds._sph_ptypes ptype = sph_ptypes[0] err = KeyError(f"{ftype} is not a SPH particle type!") if ftype in self.ds.known_filters: if ftype not in sph_ptypes: raise err else: ptype = ftype elif finfo.is_alias: if finfo.alias_name[0] not in sph_ptypes: raise err elif ftype != "gas": ptype = ftype elif ftype not in sph_ptypes and ftype != "gas": raise err return ptype def _fill_sph_particles(self, fields): from tqdm import tqdm # checks that we have the field and gets information fields = [f for f in fields if f not in self.field_data] if len(fields) == 0: return smoothing_style = getattr(self.ds, "sph_smoothing_style", "scatter") normalize = getattr(self.ds, "use_sph_normalization", True) kernel_name = getattr(self.ds, "kernel_name", "cubic") bounds, size = self._get_grid_bounds_size() period = self.ds.coordinates.period.copy() if hasattr(period, "in_units"): period = period.in_units("code_length").d # check periodicity per dimension is_periodic = self.ds.periodicity if smoothing_style == "scatter": for field in fields: fi = self.ds._get_field_info(field) ptype = self._check_sph_type(fi) buff = np.zeros(size, dtype="float64") if normalize: buff_den = np.zeros(size, dtype="float64") pbar = tqdm(desc=f"Interpolating SPH field {field}") for chunk in self._data_source.chunks([field], "io"): px = chunk[ptype, "particle_position_x"].in_base("code").d py = chunk[ptype, "particle_position_y"].in_base("code").d pz = chunk[ptype, "particle_position_z"].in_base("code").d hsml = chunk[ptype, "smoothing_length"].in_base("code").d mass = chunk[ptype, "particle_mass"].in_base("code").d dens = chunk[ptype, "density"].in_base("code").d field_quantity = chunk[field].d pixelize_sph_kernel_arbitrary_grid( buff, px, py, pz, hsml, mass, dens, field_quantity, bounds, pbar=pbar, check_period=is_periodic, period=period, kernel_name=kernel_name, ) if normalize: pixelize_sph_kernel_arbitrary_grid( buff_den, px, py, pz, hsml, mass, dens, np.ones(dens.shape[0]), bounds, pbar=pbar, check_period=is_periodic, period=period, kernel_name=kernel_name, ) if normalize: normalization_3d_utility(buff, buff_den) self[field] = self.ds.arr(buff, fi.units) pbar.close() if smoothing_style == "gather": num_neighbors = getattr(self.ds, "num_neighbors", 32) for field in fields: fi = self.ds._get_field_info(field) ptype = self._check_sph_type(fi) buff = np.zeros(size, dtype="float64") fields_to_get = [ "particle_position", "density", "particle_mass", "smoothing_length", field[1], ] all_fields = all_data(self.ds, ptype, fields_to_get, kdtree=True) interpolate_sph_grid_gather( buff, all_fields["particle_position"], bounds, all_fields["smoothing_length"], all_fields["particle_mass"], all_fields["density"], all_fields[field[1]].in_units(fi.units), self.ds.index.kdtree, use_normalization=normalize, num_neigh=num_neighbors, ) self[field] = self.ds.arr(buff, fi.units) def _fill_fields(self, fields): fields = [f for f in fields if f not in self.field_data] if len(fields) == 0: return output_fields = [ np.zeros(self.ActiveDimensions, dtype="float64") for field in fields ] domain_dims = self.ds.domain_dimensions.astype( "int64" ) * self.ds.relative_refinement(0, self.level) refine_by = self.ds.refine_by if not is_sequence(self.ds.refine_by): refine_by = [refine_by, refine_by, refine_by] refine_by = np.array(refine_by, dtype="i8") for chunk in parallel_objects(self._data_source.chunks(fields, "io")): input_fields = [chunk[field] for field in fields] # NOTE: This usage of "refine_by" is actually *okay*, 
because it's # being used with respect to iref, which is *already* scaled! fill_region( input_fields, output_fields, self.level, self.global_startindex, chunk.icoords, chunk.ires, domain_dims, refine_by, ) if self.comm.size > 1: for i in range(len(fields)): output_fields[i] = self.comm.mpi_allreduce(output_fields[i], op="sum") for field, v in zip(fields, output_fields, strict=True): fi = self.ds._get_field_info(field) self[field] = self.ds.arr(v, fi.units) def _generate_container_field(self, field): rv = self.ds.arr(np.ones(self.ActiveDimensions, dtype="float64"), "") axis_name = self.ds.coordinates.axis_name if field == ("index", f"d{axis_name[0]}"): np.multiply(rv, self.dds[0], rv) elif field == ("index", f"d{axis_name[1]}"): np.multiply(rv, self.dds[1], rv) elif field == ("index", f"d{axis_name[2]}"): np.multiply(rv, self.dds[2], rv) elif field == ("index", axis_name[0]): x = np.mgrid[ self.left_edge[0] + 0.5 * self.dds[0] : self.right_edge[0] - 0.5 * self.dds[0] : self.ActiveDimensions[0] * 1j ] np.multiply(rv, x[:, None, None], rv) elif field == ("index", axis_name[1]): y = np.mgrid[ self.left_edge[1] + 0.5 * self.dds[1] : self.right_edge[1] - 0.5 * self.dds[1] : self.ActiveDimensions[1] * 1j ] np.multiply(rv, y[None, :, None], rv) elif field == ("index", axis_name[2]): z = np.mgrid[ self.left_edge[2] + 0.5 * self.dds[2] : self.right_edge[2] - 0.5 * self.dds[2] : self.ActiveDimensions[2] * 1j ] np.multiply(rv, z[None, None, :], rv) else: raise KeyError(field) return rv @property def LeftEdge(self): return self.left_edge @property def RightEdge(self): return self.right_edge def deposit(self, positions, fields=None, method=None, kernel_name="cubic"): cls = getattr(particle_deposit, f"deposit_{method}", None) if cls is None: raise YTParticleDepositionNotImplemented(method) # We allocate number of zones, not number of octs. Everything # inside this is Fortran ordered because of the ordering in the # octree deposit routines, so we reverse it here to match the # convention there nvals = tuple(self.ActiveDimensions[::-1]) # append a dummy dimension because we are only depositing onto # one grid op = cls(nvals + (1,), kernel_name) op.initialize() op.process_grid(self, positions, fields) # Fortran-ordered, so transpose. vals = op.finalize().transpose() # squeeze dummy dimension we appended above return np.squeeze(vals, axis=0) def write_to_gdf(self, gdf_path, fields, nprocs=1, field_units=None, **kwargs): r""" Write the covering grid data to a GDF file. Parameters ---------- gdf_path : string Pathname of the GDF file to write. fields : list of strings Fields to write to the GDF file. nprocs : integer, optional Split the covering grid into *nprocs* subgrids before writing to the GDF file. Default: 1 field_units : dictionary, optional Dictionary of units to convert fields to. If not set, fields are in their default units. All remaining keyword arguments are passed to yt.utilities.grid_data_format.writer.write_to_gdf. Examples -------- >>> cube.write_to_gdf( ... "clumps.h5", ... [("gas", "density"), ("gas", "temperature")], ... nprocs=16, ... overwrite=True, ... 
        ...     )
        """
        data = {}
        for field in fields:
            if field in field_units:
                units = field_units[field]
            else:
                units = str(self[field].units)
            data[field] = (self[field].in_units(units).v, units)
        le = self.left_edge.v
        re = self.right_edge.v
        bbox = np.array([[l, r] for l, r in zip(le, re, strict=True)])
        ds = load_uniform_grid(
            data,
            self.ActiveDimensions,
            bbox=bbox,
            length_unit=self.ds.length_unit,
            time_unit=self.ds.time_unit,
            mass_unit=self.ds.mass_unit,
            nprocs=nprocs,
            sim_time=self.ds.current_time.v,
        )
        write_to_gdf(ds, gdf_path, **kwargs)

    def _get_grid_bounds_size(self):
        dd = self.ds.domain_width / 2**self.level
        bounds = np.zeros(6, dtype="float64")
        bounds[0] = self.left_edge[0].in_base("code")
        bounds[1] = bounds[0] + dd[0].d * self.ActiveDimensions[0]
        bounds[2] = self.left_edge[1].in_base("code")
        bounds[3] = bounds[2] + dd[1].d * self.ActiveDimensions[1]
        bounds[4] = self.left_edge[2].in_base("code")
        bounds[5] = bounds[4] + dd[2].d * self.ActiveDimensions[2]
        size = np.ones(3, dtype="int64") * 2**self.level
        return bounds, size

    def to_fits_data(self, fields, length_unit=None):
        r"""Export a set of gridded fields to a FITS file.

        This will export a set of FITS images of either the fields specified
        or all the fields already in the object.

        Parameters
        ----------
        fields : list of strings
            These fields will be pixelized and output. If "None", the keys
            of the FRB will be used.
        length_unit : string, optional
            the length units that the coordinates are written in. The
            default is to use the default length unit of the dataset.
        """
        from yt.visualization.fits_image import FITSImageData

        if length_unit is None:
            length_unit = self.ds.length_unit
        fields = list(iter_fields(fields))
        fid = FITSImageData(self, fields, length_unit=length_unit)
        return fid


class YTArbitraryGrid(YTCoveringGrid):
    """A 3D region with arbitrary bounds and dimensions.

    In contrast to the Covering Grid, this object accepts a left edge, a
    right edge, and dimensions.  This allows it to be used for creating 3D
    particle deposition fields that are independent of the underlying mesh,
    whether that is yt-generated or from the simulation data.  For example,
    arbitrary boxes around particles can be drawn and particle deposition
    fields can be created.  This object will refuse to generate any fluid
    fields.

    Parameters
    ----------
    left_edge : array_like
        The left edge of the region to be extracted
    right_edge : array_like
        The right edge of the region to be extracted
    dims : array_like
        Number of cells along each axis of resulting grid.

    Examples
    --------
    >>> obj = ds.arbitrary_grid(
    ...     [0.0, 0.0, 0.0], [0.99, 0.99, 0.99], dims=[128, 128, 128]
    ... 
) """ _spatial = True _type_name = "arbitrary_grid" _con_args = ("left_edge", "right_edge", "ActiveDimensions") _container_fields = ( ("index", "dx"), ("index", "dy"), ("index", "dz"), ("index", "x"), ("index", "y"), ("index", "z"), ) def __init__(self, left_edge, right_edge, dims, ds=None, field_parameters=None): if field_parameters is None: center = None else: center = field_parameters.get("center", None) YTSelectionContainer3D.__init__(self, center, ds, field_parameters) self.left_edge = self._sanitize_edge(left_edge) self.right_edge = self._sanitize_edge(right_edge) self.ActiveDimensions = self._sanitize_dims(dims) self.dds = self.base_dds = ( self.right_edge - self.left_edge ) / self.ActiveDimensions self.level = 99 self._setup_data_source() def _fill_fields(self, fields): fields = [f for f in fields if f not in self.field_data] if len(fields) == 0: return # It may be faster to adapt fill_region_float to fill multiple fields # instead of looping here for field in fields: dest = np.zeros(self.ActiveDimensions, dtype="float64") for chunk in self._data_source.chunks(fields, "io"): fill_region_float( chunk.fcoords, chunk.fwidth, chunk[field], self.left_edge, self.right_edge, dest, 1, self.ds.domain_width, int(any(self.ds.periodicity)), ) fi = self.ds._get_field_info(field) self[field] = self.ds.arr(dest, fi.units) def _get_grid_bounds_size(self): bounds = np.empty(6, dtype="float64") bounds[0] = self.left_edge[0].in_base("code") bounds[2] = self.left_edge[1].in_base("code") bounds[4] = self.left_edge[2].in_base("code") bounds[1] = self.right_edge[0].in_base("code") bounds[3] = self.right_edge[1].in_base("code") bounds[5] = self.right_edge[2].in_base("code") size = self.ActiveDimensions return bounds, size class LevelState: current_dx = None current_dims = None current_level = None global_startindex = None old_global_startindex = None fields = None data_source = None # These are all cached here as numpy arrays, without units, in # code_lengths. domain_width = None domain_left_edge = None domain_right_edge = None left_edge = None right_edge = None base_dx = None dds = None class YTSmoothedCoveringGrid(YTCoveringGrid): """A 3D region with all data extracted and interpolated to a single, specified resolution. (Identical to covering_grid, except that it interpolates.) Smoothed covering grids start at level 0, interpolating to fill the region to level 1, replacing any cells actually covered by level 1 data, and then recursively repeating this process until it reaches the specified `level`. Parameters ---------- level : int The resolution level data is uniformly gridded at left_edge : array_like The left edge of the region to be extracted dims : array_like Number of cells along each axis of resulting covering_grid. 
fields : array_like, optional A list of fields that you'd like pre-generated for your object Example ------- cube = ds.smoothed_covering_grid(2, left_edge=[0.0, 0.0, 0.0], \ dims=[128, 128, 128]) """ _type_name = "smoothed_covering_grid" filename = None _min_level = None @wraps(YTCoveringGrid.__init__) # type: ignore [misc] def __init__(self, *args, **kwargs): ds = kwargs["ds"] self._base_dx = ( ds.domain_right_edge - ds.domain_left_edge ) / ds.domain_dimensions.astype("float64") self.global_endindex = None YTCoveringGrid.__init__(self, *args, **kwargs) self._final_start_index = self.global_startindex def _setup_data_source(self, level_state=None): if level_state is None: super()._setup_data_source() return # We need a buffer region to allow for zones that contribute to the # interpolation but are not directly inside our bounds level_state.data_source = self.ds.region( self.center, level_state.left_edge - level_state.current_dx, level_state.right_edge + level_state.current_dx, ) level_state.data_source.min_level = level_state.current_level level_state.data_source.max_level = level_state.current_level self._pdata_source = self.ds.region( self.center, level_state.left_edge - level_state.current_dx, level_state.right_edge + level_state.current_dx, ) self._pdata_source.min_level = level_state.current_level self._pdata_source.max_level = level_state.current_level def _compute_minimum_level(self): # This attempts to determine the minimum level that we should be # starting on for this box. It does this by identifying the minimum # level that could contribute to the minimum bounding box at that # level; that means that all cells from coarser levels will be replaced. if self._min_level is not None: return self._min_level ils = LevelState() min_level = 0 for l in range(self.level, 0, -1): dx = self._base_dx / self.ds.relative_refinement(0, l) start_index, end_index, dims = self._minimal_box(dx) ils.left_edge = start_index * dx + self.ds.domain_left_edge ils.right_edge = ils.left_edge + dx * dims ils.current_dx = dx ils.current_level = l self._setup_data_source(ils) # Reset the max_level ils.data_source.min_level = 0 ils.data_source.max_level = l ils.data_source.loose_selection = False min_level = self.level for chunk in ils.data_source.chunks([], "io"): # With our odd selection methods, we can sometimes get no-sized ires. ir = chunk.ires if ir.size == 0: continue min_level = min(ir.min(), min_level) if min_level >= l: break self._min_level = min_level return min_level def _fill_fields(self, fields): fields = [f for f in fields if f not in self.field_data] if len(fields) == 0: return ls = self._initialize_level_state(fields) min_level = self._compute_minimum_level() # NOTE: This usage of "refine_by" is actually *okay*, because it's # being used with respect to iref, which is *already* scaled! 
refine_by = self.ds.refine_by if not is_sequence(self.ds.refine_by): refine_by = [refine_by, refine_by, refine_by] refine_by = np.array(refine_by, dtype="i8") runtime_errors_count = 0 for level in range(self.level + 1): if level < min_level: self._update_level_state(ls) continue nd = self.ds.dimensionality refinement = np.zeros_like(ls.base_dx) refinement += self.ds.relative_refinement(0, ls.current_level) refinement[nd:] = 1 domain_dims = self.ds.domain_dimensions * refinement domain_dims = domain_dims.astype("int64") tot = ls.current_dims.prod() for chunk in ls.data_source.chunks(fields, "io"): chunk[fields[0]] input_fields = [chunk[field] for field in fields] tot -= fill_region( input_fields, ls.fields, ls.current_level, ls.global_startindex, chunk.icoords, chunk.ires, domain_dims, refine_by, ) if level == 0 and tot != 0: runtime_errors_count += 1 self._update_level_state(ls) if runtime_errors_count: warnings.warn( "Something went wrong during field computation. " "This is likely due to missing ghost-zones support " f"in class {type(self.ds)}", category=RuntimeWarning, stacklevel=1, ) mylog.debug("Caught %d runtime errors.", runtime_errors_count) for field, v in zip(fields, ls.fields, strict=True): if self.level > 0: v = v[1:-1, 1:-1, 1:-1] fi = self.ds._get_field_info(field) self[field] = self.ds.arr(v, fi.units) def _initialize_level_state(self, fields): ls = LevelState() ls.domain_width = self.ds.domain_width ls.domain_left_edge = self.ds.domain_left_edge ls.domain_right_edge = self.ds.domain_right_edge ls.base_dx = self._base_dx ls.dds = self.dds ls.left_edge = self.left_edge ls.right_edge = self.right_edge for att in ( "domain_width", "domain_left_edge", "domain_right_edge", "left_edge", "right_edge", "base_dx", "dds", ): setattr(ls, att, getattr(ls, att).in_units("code_length").d) ls.current_dx = ls.base_dx ls.current_level = 0 ls.global_startindex, end_index, idims = self._minimal_box(ls.current_dx) ls.current_dims = idims.astype("int32") ls.left_edge = ls.global_startindex * ls.current_dx + self.ds.domain_left_edge.d ls.right_edge = ls.left_edge + ls.current_dims * ls.current_dx ls.fields = [np.zeros(idims, dtype="float64") - 999 for field in fields] self._setup_data_source(ls) return ls def _minimal_box(self, dds): LL = self.left_edge.d - self.ds.domain_left_edge.d # Nudge in case we're on the edge LL += np.finfo(np.float64).eps LS = self.right_edge.d - self.ds.domain_left_edge.d LS += np.finfo(np.float64).eps cell_start = LL / dds # This is the cell we're inside cell_end = LS / dds if self.level == 0: start_index = np.array(np.floor(cell_start), dtype="int64") end_index = np.array(np.ceil(cell_end), dtype="int64") dims = np.rint((self.ActiveDimensions * self.dds.d) / dds).astype("int64") else: # Give us one buffer start_index = np.rint(cell_start).astype("int64") - 1 # How many root cells do we occupy? 
end_index = np.rint(cell_end).astype("int64") dims = end_index - start_index + 1 return start_index, end_index, dims.astype("int32") def _update_level_state(self, level_state): ls = level_state if ls.current_level >= self.level: return rf = float(self.ds.relative_refinement(ls.current_level, ls.current_level + 1)) ls.current_level += 1 nd = self.ds.dimensionality refinement = np.zeros_like(ls.base_dx) refinement += self.ds.relative_refinement(0, ls.current_level) refinement[nd:] = 1 ls.current_dx = ls.base_dx / refinement ls.old_global_startindex = ls.global_startindex ls.global_startindex, end_index, ls.current_dims = self._minimal_box( ls.current_dx ) ls.left_edge = ls.global_startindex * ls.current_dx + self.ds.domain_left_edge.d ls.right_edge = ls.left_edge + ls.current_dims * ls.current_dx input_left = (level_state.old_global_startindex) * rf + 1 new_fields = [] for input_field in level_state.fields: output_field = np.zeros(ls.current_dims, dtype="float64") output_left = level_state.global_startindex + 0.5 ghost_zone_interpolate( rf, input_field, input_left, output_field, output_left ) new_fields.append(output_field) level_state.fields = new_fields self._setup_data_source(ls) class YTSurface(YTSelectionContainer3D): r"""This surface object identifies isocontours on a cell-by-cell basis, with no consideration of global connectedness, and returns the vertices of the triangles in that isocontour. This object simply returns the vertices of all the triangles calculated by the `marching cubes <https://en.wikipedia.org/wiki/Marching_cubes>`_ algorithm; for more complex operations, such as identifying connected sets of cells above a given threshold, see the extract_connected_sets function. This is more useful for calculating, for instance, total isocontour area, or visualizing in an external program (such as `MeshLab <http://www.meshlab.net>`_). The object exposes a .vertices property and will sample values if a field is requested. The values are interpolated to the center of a given face. Parameters ---------- data_source : YTSelectionContainer This is the object which will be used as a source surface_field : string Any field that can be obtained in a data object. This is the field which will be isocontoured. field_value : float, YTQuantity, or unit tuple The value at which the isocontour should be calculated. Examples -------- This will create a data object, find a nice value in the center, and output the vertices to "triangles.obj" after rescaling them. >>> from yt.units import kpc >>> sp = ds.sphere("max", (10, "kpc")) >>> surf = ds.surface(sp, ("gas", "density"), 5e-27) >>> print(surf["gas", "temperature"]) >>> print(surf.vertices) >>> bounds = [ ... (sp.center[i] - 5.0 * kpc, sp.center[i] + 5.0 * kpc) for i in range(3) ...
] >>> surf.export_ply("my_galaxy.ply", bounds=bounds) """ _type_name = "surface" _con_args = ("data_source", "surface_field", "field_value") _container_fields = ( ("index", "dx"), ("index", "dy"), ("index", "dz"), ("index", "x"), ("index", "y"), ("index", "z"), ) def __init__(self, data_source, surface_field, field_value, ds=None): self.data_source = data_source self.surface_field = data_source._determine_fields(surface_field)[0] finfo = data_source.ds.field_info[self.surface_field] try: self.field_value = field_value.to(finfo.units) except AttributeError: if isinstance(field_value, tuple): self.field_value = data_source.ds.quan(*field_value) self.field_value = self.field_value.to(finfo.units) else: self.field_value = data_source.ds.quan(field_value, finfo.units) self.vertex_samples = YTFieldData() center = data_source.get_field_parameter("center") super().__init__(center=center, ds=ds) def _generate_container_field(self, field): self.get_data(field) return self[field] def get_data(self, fields=None, sample_type="face", no_ghost=False): if isinstance(fields, list) and len(fields) > 1: for field in fields: self.get_data(field) return elif isinstance(fields, list): fields = fields[0] # Now we have a "fields" value that is either a string or None if fields is not None: mylog.info("Extracting (sampling: %s)", fields) verts = [] samples = [] for _io_chunk in parallel_objects(self.data_source.chunks([], "io")): for block, mask in self.data_source.blocks: my_verts = self._extract_isocontours_from_grid( block, self.surface_field, self.field_value, mask, fields, sample_type, no_ghost=no_ghost, ) if fields is not None: my_verts, svals = my_verts samples.append(svals) verts.append(my_verts) verts = np.concatenate(verts).transpose() verts = self.comm.par_combine_object(verts, op="cat", datatype="array") # verts is an ndarray here and will always be in code units, so we # expose it in the public API as a YTArray self._vertices = self.ds.arr(verts, "code_length") if fields is not None: samples = uconcatenate(samples) samples = self.comm.par_combine_object(samples, op="cat", datatype="array") if sample_type == "face": self[fields] = samples elif sample_type == "vertex": self.vertex_samples[fields] = samples def _extract_isocontours_from_grid( self, grid, field, value, mask, sample_values=None, sample_type="face", no_ghost=False, ): # TODO: check if multiple fields can be passed here vals = grid.get_vertex_centered_data([field], no_ghost=no_ghost)[field] if sample_values is not None: # TODO: is no_ghost=False correct here? svals = grid.get_vertex_centered_data([sample_values])[sample_values] else: svals = None sample_type = {"face": 1, "vertex": 2}[sample_type] my_verts = march_cubes_grid( value, vals, mask, grid.LeftEdge, grid.dds, svals, sample_type ) return my_verts def calculate_flux(self, field_x, field_y, field_z, fluxing_field=None): r"""This calculates the flux over the surface. This function will conduct `Marching Cubes`_ on all the cells in a given data container (grid-by-grid), and then for each identified triangular segment of an isocontour in a given cell, calculate the gradient (i.e., normal) in the isocontoured field, interpolate the local value of the "fluxing" field, the area of the triangle, and then return: area * local_flux_value * (n dot v) Where area, local_value, and the vector v are interpolated at the barycenter (weighted by the vertex values) of the triangle. 
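A rough NumPy sketch of this accumulation (illustrative only -- the real
work happens in compiled marching-cubes routines, and these array names are
assumptions): ``flux = np.sum(areas * f * np.einsum("ij,ij->i", n_hat, v))``,
where ``areas``, ``f``, ``n_hat``, and ``v`` hold the per-triangle area,
fluxing-field value, unit normal, and velocity at the barycenter.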
Note that this specifically allows for the field fluxing across the surface to be *different* from the field being contoured. If the fluxing_field is not specified, it is assumed to be 1.0 everywhere, and the raw flux with no local-weighting is returned. Additionally, the returned flux is defined as flux *into* the surface, not flux *out of* the surface. Parameters ---------- field_x : string The x-component field field_y : string The y-component field field_z : string The z-component field fluxing_field : string, optional The field whose passage over the surface is of interest. If not specified, assumed to be 1.0 everywhere. Returns ------- flux : YTQuantity The summed flux. References ---------- .. _Marching Cubes: https://en.wikipedia.org/wiki/Marching_cubes Examples -------- This will create a data object, find a nice value in the center, and calculate the metal flux over it. >>> sp = ds.sphere("max", (10, "kpc")) >>> surf = ds.surface(sp, ("gas", "density"), 5e-27) >>> flux = surf.calculate_flux( ... ("gas", "velocity_x"), ... ("gas", "velocity_y"), ... ("gas", "velocity_z"), ... ("gas", "metal_density"), ... ) """ flux = 0.0 mylog.info("Fluxing %s", fluxing_field) for _io_chunk in parallel_objects(self.data_source.chunks([], "io")): for block, mask in self.data_source.blocks: flux += self._calculate_flux_in_grid( block, mask, field_x, field_y, field_z, fluxing_field ) flux = self.comm.mpi_allreduce(flux, op="sum") return flux def _calculate_flux_in_grid( self, grid, mask, field_x, field_y, field_z, fluxing_field=None ): vc_fields = [self.surface_field, field_x, field_y, field_z] if fluxing_field is not None: vc_fields.append(fluxing_field) vc_data = grid.get_vertex_centered_data(vc_fields) if fluxing_field is None: ff = self.ds.arr( np.ones_like(vc_data[self.surface_field].d, dtype="float64"), "dimensionless", ) else: ff = vc_data[fluxing_field] surf_vals = vc_data[self.surface_field] field_x_vals = vc_data[field_x] field_y_vals = vc_data[field_y] field_z_vals = vc_data[field_z] ret = march_cubes_grid_flux( self.field_value, surf_vals, field_x_vals, field_y_vals, field_z_vals, ff, mask, grid.LeftEdge, grid.dds, ) # assumes all the fluxing fields have the same units ret_units = field_x_vals.units * ff.units * grid.dds.units**2 ret = self.ds.arr(ret, ret_units) ret.convert_to_units(self.ds.unit_system[ret_units.dimensions]) return ret _vertices = None @property def vertices(self): if self._vertices is None: self.get_data() return self._vertices @property def triangles(self): vv = np.empty((self.vertices.shape[1] // 3, 3, 3), dtype="float64") vv = self.ds.arr(vv, self.vertices.units) for i in range(3): for j in range(3): vv[:, i, j] = self.vertices[j, i::3] return vv _surface_area = None @property def surface_area(self): if self._surface_area is not None: return self._surface_area tris = self.triangles x = tris[:, 1, :] - tris[:, 0, :] y = tris[:, 2, :] - tris[:, 0, :] areas = (x[:, 1] * y[:, 2] - x[:, 2] * y[:, 1]) ** 2 np.add(areas, (x[:, 2] * y[:, 0] - x[:, 0] * y[:, 2]) ** 2, out=areas) np.add(areas, (x[:, 0] * y[:, 1] - x[:, 1] * y[:, 0]) ** 2, out=areas) np.sqrt(areas, out=areas) self._surface_area = 0.5 * areas.sum() return self._surface_area def export_obj( self, filename, transparency=1.0, dist_fac=None, color_field=None, emit_field=None, color_map=None, color_log=True, emit_log=True, plot_index=None, color_field_max=None, color_field_min=None, emit_field_max=None, emit_field_min=None, ): r"""Export the surface to the OBJ format, suitable for visualization in many different
programs (e.g., Blender). NOTE: this exports an .obj file and an .mtl file, both with the general 'filename' as a prefix. The .obj file points to the .mtl file in its header, so if you move the 2 files, make sure you change the .obj header to account for this. ALSO NOTE: the emit_field needs to be a combination of the other 2 fields used to have the emissivity track with the color. Parameters ---------- filename : string The file this will be exported to. This cannot be a file-like object. The filename should not include an extension; both the .obj and .mtl files are created from it. transparency : float This gives the transparency of the output surface plot. Values from 0.0 (invisible) to 1.0 (opaque). dist_fac : float Divide the axes distances by this amount. color_field : string Should a field be sampled and colormapped? emit_field : string Should we track the emissivity of a field? This should be a combination of the other 2 fields being used. color_map : string Which color map should be applied? color_log : bool Should the color field be logged before being mapped? emit_log : bool Should the emitting field be logged before being mapped? plot_index : integer Index of plot for multiple plots. If None, then only 1 plot. color_field_max : float Maximum value of the color field across all surfaces. color_field_min : float Minimum value of the color field across all surfaces. emit_field_max : float Maximum value of the emitting field across all surfaces. emit_field_min : float Minimum value of the emitting field across all surfaces. Examples -------- >>> sp = ds.sphere("max", (10, "kpc")) >>> trans = 1.0 >>> surf = ds.surface(sp, ("gas", "density"), 5e-27) >>> surf.export_obj("my_galaxy", transparency=trans) >>> sp = ds.sphere("max", (10, "kpc")) >>> mi, ma = sp.quantities.extrema("temperature") >>> rhos = [1e-24, 1e-25] >>> trans = [0.5, 1.0] >>> for i, r in enumerate(rhos): ... surf = ds.surface(sp, "density", r) ... surf.export_obj( ... "my_galaxy", ... transparency=trans[i], ... color_field="temperature", ... plot_index=i, ... color_field_max=ma, ... color_field_min=mi, ... ) >>> sp = ds.sphere("max", (10, "kpc")) >>> rhos = [1e-24, 1e-25] >>> trans = [0.5, 1.0] >>> def _Emissivity(field, data): ... return ( ... data["gas", "density"] ... * data["gas", "density"] ... * np.sqrt(data["gas", "temperature"]) ... ) >>> ds.add_field( ... ("gas", "emissivity"), ... function=_Emissivity, ... sampling_type="cell", ... units=r"g**2*sqrt(K)/cm**6", ... ) >>> for i, r in enumerate(rhos): ... surf = ds.surface(sp, "density", r) ... surf.export_obj( ... "my_galaxy", ... transparency=trans[i], ... color_field="temperature", ... emit_field="emissivity", ... plot_index=i, ...
) """ if color_map is None: color_map = ytcfg.get("yt", "default_colormap") if self.vertices is None: if color_field is not None: self.get_data(color_field, "face") elif color_field is not None: if color_field not in self.field_data: self[color_field] if color_field is None: self.get_data(self.surface_field, "face") if emit_field is not None: if color_field not in self.field_data: self[emit_field] only_on_root( self._export_obj, filename, transparency, dist_fac, color_field, emit_field, color_map, color_log, emit_log, plot_index, color_field_max, color_field_min, emit_field_max, emit_field_min, ) def _color_samples_obj( self, cs, em, color_log, emit_log, color_map, arr, color_field_max, color_field_min, color_field, emit_field_max, emit_field_min, emit_field, ): # this now holds for obj files if color_field is not None: if color_log: cs = np.log10(cs) if emit_field is not None: if emit_log: em = np.log10(em) if color_field is not None: if color_field_min is None: cs = [float(field) for field in cs] cs = np.array(cs) mi = cs.min() else: mi = color_field_min if color_log: mi = np.log10(mi) if color_field_max is None: cs = [float(field) for field in cs] cs = np.array(cs) ma = cs.max() else: ma = color_field_max if color_log: ma = np.log10(ma) cs = (cs - mi) / (ma - mi) else: cs[:] = 1.0 # to get color indices for OBJ formatting lut = get_colormap_lut(color_map) x = np.mgrid[0.0 : 1.0 : lut[0].shape[0] * 1j] arr["cind"][:] = (np.interp(cs, x, x) * (lut[0].shape[0] - 1)).astype("uint8") # now, get emission if emit_field is not None: if emit_field_min is None: em = [float(field) for field in em] em = np.array(em) emi = em.min() else: emi = emit_field_min if emit_log: emi = np.log10(emi) if emit_field_max is None: em = [float(field) for field in em] em = np.array(em) ema = em.max() else: ema = emit_field_max if emit_log: ema = np.log10(ema) em = (em - emi) / (ema - emi) x = np.mgrid[0.0:255.0:2j] # assume 1 emissivity per color arr["emit"][:] = ( np.interp(em, x, x) ) * 2.0 # for some reason, max emiss = 2 else: arr["emit"][:] = 0.0 @parallel_root_only def _export_obj( self, filename, transparency, dist_fac=None, color_field=None, emit_field=None, color_map=None, color_log=True, emit_log=True, plot_index=None, color_field_max=None, color_field_min=None, emit_field_max=None, emit_field_min=None, ): if color_map is None: color_map = ytcfg.get("yt", "default_colormap") if plot_index is None: plot_index = 0 if isinstance(filename, io.IOBase): fobj = filename + ".obj" fmtl = filename + ".mtl" else: if plot_index == 0: fobj = open(filename + ".obj", "w") fmtl = open(filename + ".mtl", "w") cc = 1 else: # read in last vertex linesave = "" for line in fileinput.input(filename + ".obj"): if line[0] == "f": linesave = line p = [m.start() for m in finditer(" ", linesave)] cc = int(linesave[p[len(p) - 1] :]) + 1 fobj = open(filename + ".obj", "a") fmtl = open(filename + ".mtl", "a") ftype = [("cind", "uint8"), ("emit", "float")] vtype = [("x", "float"), ("y", "float"), ("z", "float")] if plot_index == 0: fobj.write("# yt OBJ file\n") fobj.write("# www.yt-project.org\n") fobj.write( f"mtllib {filename}.mtl\n\n" ) # use this material file for the faces fmtl.write("# yt MLT file\n") fmtl.write("# www.yt-project.org\n\n") # (0) formulate vertices nv = self.vertices.shape[1] # number of groups of vertices f = np.empty( nv // self.vertices.shape[0], dtype=ftype ) # store sets of face colors v = np.empty(nv, dtype=vtype) # stores vertices if color_field is not None: cs = self[color_field] else: cs = 
np.empty(self.vertices.shape[1] // self.vertices.shape[0]) if emit_field is not None: em = self[emit_field] else: em = np.empty(self.vertices.shape[1] // self.vertices.shape[0]) self._color_samples_obj( cs, em, color_log, emit_log, color_map, f, color_field_max, color_field_min, color_field, emit_field_max, emit_field_min, emit_field, ) # map color values to color scheme lut = get_colormap_lut(color_map) # interpolate emissivity to enumerated colors emiss = np.interp( np.mgrid[0 : lut[0].shape[0]], np.mgrid[0 : len(cs)], f["emit"][:] ) if dist_fac is None: # then normalize by bounds DLE = self.ds.domain_left_edge DRE = self.ds.domain_right_edge bounds = [(DLE[i], DRE[i]) for i in range(3)] for i, ax in enumerate("xyz"): # Do the bounds first since we cast to f32 tmp = self.vertices[i, :] np.subtract(tmp, bounds[i][0], tmp) w = bounds[i][1] - bounds[i][0] np.divide(tmp, w, tmp) np.subtract(tmp, 0.5, tmp) # Center at origin. v[ax][:] = tmp else: for i, ax in enumerate("xyz"): tmp = self.vertices[i, :] np.divide(tmp, dist_fac, tmp) v[ax][:] = tmp # (1) write all colors per surface to mtl file for i in range(0, lut[0].shape[0]): omname = f"material_{i}_{plot_index}" # name of the material fmtl.write( f"newmtl {omname}\n" ) # the specific material (color) for this face fmtl.write(f"Ka {0.0:.6f} {0.0:.6f} {0.0:.6f}\n") # ambient color, keep off fmtl.write( f"Kd {lut[0][i]:.6f} {lut[1][i]:.6f} {lut[2][i]:.6f}\n" ) # color of face fmtl.write( f"Ks {0.0:.6f} {0.0:.6f} {0.0:.6f}\n" ) # specular color, keep off fmtl.write(f"d {transparency:.6f}\n") # transparency fmtl.write(f"em {emiss[i]:.6f}\n") # emissivity per color fmtl.write("illum 2\n") # not relevant, 2 means highlights on? fmtl.write(f"Ns {0:.6f}\n\n") # keep off, some other specular thing # (2) write vertices for i in range(0, self.vertices.shape[1]): fobj.write(f"v {v['x'][i]:.6f} {v['y'][i]:.6f} {v['z'][i]:.6f}\n") fobj.write("#done defining vertices\n\n") # (3) define faces and materials for each face for i in range(0, self.triangles.shape[0]): omname = f"material_{f['cind'][i]}_{plot_index}" # which color to use fobj.write( f"usemtl {omname}\n" ) # which material to use for this face (color) fobj.write(f"f {cc} {cc+1} {cc+2}\n\n") # vertices to color cc = cc + 3 fmtl.close() fobj.close() def export_blender( self, transparency=1.0, dist_fac=None, color_field=None, emit_field=None, color_map=None, color_log=True, emit_log=True, plot_index=None, color_field_max=None, color_field_min=None, emit_field_max=None, emit_field_min=None, ): r"""This prepares the surface data for visualization in Blender, returning the vertex positions, color data, transparency, emissivity, and color-index arrays rather than writing .obj/.mtl files to disk. NOTE: the emit_field needs to be a combination of the other 2 fields used to have the emissivity track with the color. Parameters ---------- transparency : float This gives the transparency of the output surface plot. Values from 0.0 (invisible) to 1.0 (opaque). dist_fac : float Divide the axes distances by this amount. color_field : string Should a field be sampled and colormapped? emit_field : string Should we track the emissivity of a field? NOTE: this should be a combination of the other 2 fields being used. color_map : string Which color map should be applied? color_log : bool Should the color field be logged before being mapped?
emit_log : bool Should the emitting field be logged before being mapped? plot_index : integer Index of plot for multiple plots. If None, then only 1 plot. color_field_max : float Maximum value of the color field across all surfaces. color_field_min : float Minimum value of the color field across all surfaces. emit_field_max : float Maximum value of the emitting field across all surfaces. emit_field_min : float Minimum value of the emitting field across all surfaces. Examples -------- >>> sp = ds.sphere("max", (10, "kpc")) >>> trans = 1.0 >>> surf = ds.surface(sp, ("gas", "density"), 5e-27) >>> verts, colors, alpha, emisses, colorindex = surf.export_blender( ... transparency=trans ... ) >>> sp = ds.sphere("max", (10, "kpc")) >>> mi, ma = sp.quantities.extrema("temperature") >>> rhos = [1e-24, 1e-25] >>> trans = [0.5, 1.0] >>> for i, r in enumerate(rhos): ... surf = ds.surface(sp, "density", r) ... surf.export_blender( ... transparency=trans[i], ... color_field="temperature", ... plot_index=i, ... color_field_max=ma, ... color_field_min=mi, ... ) >>> sp = ds.sphere("max", (10, "kpc")) >>> rhos = [1e-24, 1e-25] >>> trans = [0.5, 1.0] >>> def _Emissivity(field, data): ... return ( ... data["gas", "density"] ... * data["gas", "density"] ... * np.sqrt(data["gas", "temperature"]) ... ) >>> ds.add_field( ... ("gas", "emissivity"), ... function=_Emissivity, ... sampling_type="cell", ... units=r"g**2*sqrt(K)/cm**6", ... ) >>> for i, r in enumerate(rhos): ... surf = ds.surface(sp, "density", r) ... surf.export_blender( ... transparency=trans[i], ... color_field="temperature", ... emit_field="emissivity", ... plot_index=i, ... ) """ if color_map is None: color_map = ytcfg.get("yt", "default_colormap") if self.vertices is None: if color_field is not None: self.get_data(color_field, "face") elif color_field is not None: if color_field not in self.field_data: self[color_field] if color_field is None: self.get_data(self.surface_field, "face") if emit_field is not None: if emit_field not in self.field_data: self[emit_field] fullverts, colors, alpha, emisses, colorindex = only_on_root( self._export_blender, transparency, dist_fac, color_field, emit_field, color_map, color_log, emit_log, plot_index, color_field_max, color_field_min, emit_field_max, emit_field_min, ) return fullverts, colors, alpha, emisses, colorindex def _export_blender( self, transparency, dist_fac=None, color_field=None, emit_field=None, color_map=None, color_log=True, emit_log=True, plot_index=None, color_field_max=None, color_field_min=None, emit_field_max=None, emit_field_min=None, ): if color_map is None: color_map = ytcfg.get("yt", "default_colormap") if plot_index is None: plot_index = 0 ftype = [("cind", "uint8"), ("emit", "float")] vtype = [("x", "float"), ("y", "float"), ("z", "float")] # (0) formulate vertices nv = self.vertices.shape[1] # number of groups of vertices f = np.empty( nv // self.vertices.shape[0], dtype=ftype ) # store sets of face colors v = np.empty(nv, dtype=vtype) # stores vertices if color_field is not None: cs = self[color_field] else: cs = np.empty(self.vertices.shape[1] // self.vertices.shape[0]) if emit_field is not None: em = self[emit_field] else: em = np.empty(self.vertices.shape[1] // self.vertices.shape[0]) self._color_samples_obj( cs, em, color_log, emit_log, color_map, f, color_field_max, color_field_min, color_field, emit_field_max, emit_field_min, emit_field, ) # map color values to color scheme lut = get_colormap_lut(color_map) # interpolate emissivity to enumerated colors emiss = np.interp( np.mgrid[0 : lut[0].shape[0]], np.mgrid[0 : len(cs)], f["emit"][:] ) if
dist_fac is None: # then normalize by bounds DLE = self.ds.domain_left_edge DRE = self.ds.domain_right_edge bounds = [(DLE[i], DRE[i]) for i in range(3)] for i, ax in enumerate("xyz"): # Do the bounds first since we cast to f32 tmp = self.vertices[i, :] np.subtract(tmp, bounds[i][0], tmp) w = bounds[i][1] - bounds[i][0] np.divide(tmp, w, tmp) np.subtract(tmp, 0.5, tmp) # Center at origin. v[ax][:] = tmp else: for i, ax in enumerate("xyz"): tmp = self.vertices[i, :] np.divide(tmp, dist_fac, tmp) v[ax][:] = tmp return v, lut, transparency, emiss, f["cind"] def export_ply( self, filename, bounds=None, color_field=None, color_map=None, color_log=True, sample_type="face", no_ghost=False, *, color_field_max=None, color_field_min=None, ): r"""This exports the surface to the PLY format, suitable for visualization in many different programs (e.g., MeshLab). Parameters ---------- filename : string The file this will be exported to. This cannot be a file-like object. bounds : list of tuples The bounds the vertices will be normalized to. This is of the format: [(xmin, xmax), (ymin, ymax), (zmin, zmax)]. Defaults to the full domain. color_field : string Should a field be sampled and colormapped? color_map : string Which color map should be applied? color_log : bool Should the color field be logged before being mapped? color_field_max : float Maximum value of the color field across all surfaces. color_field_min : float Minimum value of the color field across all surfaces. Examples -------- >>> from yt.units import kpc >>> sp = ds.sphere("max", (10, "kpc")) >>> surf = ds.surface(sp, ("gas", "density"), 5e-27) >>> print(surf["gas", "temperature"]) >>> print(surf.vertices) >>> bounds = [ ... (sp.center[i] - 5.0 * kpc, sp.center[i] + 5.0 * kpc) for i in range(3) ... ] >>> surf.export_ply("my_galaxy.ply", bounds=bounds) """ if color_map is None: color_map = ytcfg.get("yt", "default_colormap") if self.vertices is None: self.get_data(color_field, sample_type, no_ghost=no_ghost) elif color_field is not None: if sample_type == "face" and color_field not in self.field_data: self[color_field] elif sample_type == "vertex" and color_field not in self.vertex_samples: self.get_data(color_field, sample_type, no_ghost=no_ghost) self._export_ply( filename, bounds, color_field, color_map, color_log, sample_type, color_field_max=color_field_max, color_field_min=color_field_min, ) def _color_samples( self, cs, color_log, color_map, arr, *, color_field_max=None, color_field_min=None, ): cs = np.asarray(cs) if color_log: cs = np.log10(cs) if color_field_min is None: mi = cs.min() else: mi = color_field_min if color_log: mi = np.log10(mi) if color_field_max is None: ma = cs.max() else: ma = color_field_max if color_log: ma = np.log10(ma) cs = (cs - mi) / (ma - mi) from yt.visualization.image_writer import map_to_colors cs = map_to_colors(cs, color_map) arr["red"][:] = cs[0, :, 0] arr["green"][:] = cs[0, :, 1] arr["blue"][:] = cs[0, :, 2] @parallel_root_only def _export_ply( self, filename, bounds=None, color_field=None, color_map=None, color_log=True, sample_type="face", *, color_field_max=None, color_field_min=None, ): if color_map is None: color_map = ytcfg.get("yt", "default_colormap") if hasattr(filename, "read"): f = filename else: f = open(filename, "wb") if bounds is None: DLE = self.ds.domain_left_edge DRE = self.ds.domain_right_edge bounds = [(DLE[i], DRE[i]) for i in range(3)] elif any(not all(isinstance(be, YTArray) for be in b) for b in bounds): bounds = [ tuple( be if isinstance(be, YTArray) else self.ds.quan(be,
"code_length") for be in b ) for b in bounds ] nv = self.vertices.shape[1] vs = [ ("x", ">> from yt.units import kpc >>> dd = ds.sphere("max", (200, "kpc")) >>> rho = 5e-27 >>> bounds = [ ... (dd.center[i] - 100.0 * kpc, dd.center[i] + 100.0 * kpc) ... for i in range(3) ... ] ... >>> surf = ds.surface(dd, ("gas", "density"), rho) >>> rv = surf.export_sketchfab( ... title="Testing Upload", ... description="A simple test of the uploader", ... color_field="temperature", ... color_map="hot", ... color_log=True, ... bounds=bounds, ... ) ... """ if color_map is None: color_map = ytcfg.get("yt", "default_colormap") api_key = api_key or ytcfg.get("yt", "sketchfab_api_key") if api_key in (None, "None"): raise YTNoAPIKey("SketchFab.com", "sketchfab_api_key") ply_file = TemporaryFile() self.export_ply( ply_file, bounds, color_field, color_map, color_log, sample_type="vertex", no_ghost=no_ghost, color_field_max=color_field_max, color_field_min=color_field_min, ) ply_file.seek(0) # Greater than ten million vertices and we throw an error but dump # to a file. if self.vertices.shape[1] > 1e7: tfi = 0 fn = "temp_model_%03i.ply" % tfi while os.path.exists(fn): fn = "temp_model_%03i.ply" % tfi tfi += 1 open(fn, "wb").write(ply_file.read()) raise YTTooManyVertices(self.vertices.shape[1], fn) zfs = NamedTemporaryFile(suffix=".zip") with zipfile.ZipFile(zfs, "w", zipfile.ZIP_DEFLATED) as zf: zf.writestr("yt_export.ply", ply_file.read()) zfs.seek(0) zfs.seek(0) data = { "token": api_key, "name": title, "description": description, "tags": "yt", } files = {"modelFile": zfs} upload_id = self._upload_to_sketchfab(data, files) upload_id = self.comm.mpi_bcast(upload_id, root=0) return upload_id @parallel_root_only def _upload_to_sketchfab(self, data, files): from yt.utilities.on_demand_imports import _requests as requests SKETCHFAB_DOMAIN = "sketchfab.com" SKETCHFAB_API_URL = f"https://api.{SKETCHFAB_DOMAIN}/v2/models" SKETCHFAB_MODEL_URL = f"https://{SKETCHFAB_DOMAIN}/models/" try: r = requests.post(SKETCHFAB_API_URL, data=data, files=files, verify=False) except requests.exceptions.RequestException: mylog.exception("An error has occurred") return result = r.json() if r.status_code != requests.codes.created: mylog.error("Upload to SketchFab failed with error: %s", result) return model_uid = result["uid"] model_url = SKETCHFAB_MODEL_URL + model_uid if model_uid: mylog.info("Model uploaded to: %s", model_url) else: mylog.error("Problem uploading.") return model_uid class YTOctree(YTSelectionContainer3D): """A 3D region with all the data filled into an octree. This container will deposit particle fields onto octs using a kernel and SPH smoothing. The octree is built in a depth-first fashion. Depth-first search (DFS) means that tree starts refining at the root node (this is the largest node which contains every particles) and refines as far as possible along each branch before backtracking. Parameters ---------- right_edge : array_like The right edge of the region to be extracted. Specify units by supplying a YTArray, otherwise code length units are assumed. left_edge : array_like The left edge of the region to be extracted. Specify units by supplying a YTArray, otherwise code length units are assumed. n_ref: int This is the maximum number of particles per leaf in the resulting octree. ptypes: list This is the type of particles to include when building the tree. This will default to all particles. 
Examples -------- >>> octree = ds.octree(n_ref=64) >>> x_positions_of_cells = octree["index", "x"] >>> y_positions_of_cells = octree["index", "y"] >>> z_positions_of_cells = octree["index", "z"] >>> density_of_gas_in_cells = octree["gas", "density"] """ _spatial = True _type_name = "octree" _con_args = ("left_edge", "right_edge", "n_ref") _container_fields = ( ("index", "dx"), ("index", "dy"), ("index", "dz"), ("index", "x"), ("index", "y"), ("index", "z"), ("index", "depth"), ("index", "refined"), ("index", "sizes"), ("index", "positions"), ) def __init__( self, left_edge=None, right_edge=None, n_ref=32, ptypes=None, ds=None, field_parameters=None, ): if field_parameters is None: center = None else: center = field_parameters.get("center", None) YTSelectionContainer3D.__init__(self, center, ds, field_parameters) self.left_edge = self._sanitize_edge(left_edge, ds.domain_left_edge) self.right_edge = self._sanitize_edge(right_edge, ds.domain_right_edge) self.n_ref = n_ref self.ptypes = self._sanitize_ptypes(ptypes) self._setup_data_source() self.tree # accessing the property builds the octree eagerly def _generate_tree(self): positions = [] for ptype in self.ptypes: positions.append( self._data_source[ptype, "particle_position"].in_units("code_length").d ) positions = ( np.concatenate(positions) if len(positions) > 0 else np.array(positions) ) if not positions.size: mylog.info("No particles found!") self._octree = None return mylog.info("Allocating Octree for %s particles", positions.shape[0]) self._octree = CyOctree( positions, left_edge=self.left_edge.to("code_length").d, right_edge=self.right_edge.to("code_length").d, n_ref=self.n_ref, ) mylog.info("Allocated %s nodes in octree", self._octree.num_nodes) mylog.info("Octree bound %s particles", self._octree.bound_particles) # Now we store the index data about the octree in the python container ds = self.ds pos = ds.arr(self._octree.node_positions, "code_length") self["index", "positions"] = pos self["index", "x"] = pos[:, 0] self["index", "y"] = pos[:, 1] self["index", "z"] = pos[:, 2] self["index", "refined"] = self._octree.node_refined sizes = ds.arr(self._octree.node_sizes, "code_length") self["index", "sizes"] = sizes self["index", "dx"] = sizes[:, 0] self["index", "dy"] = sizes[:, 1] self["index", "dz"] = sizes[:, 2] self["index", "depth"] = self._octree.node_depth @property def tree(self): """ The Cython+Python octree instance """ if hasattr(self, "_octree"): return self._octree self._generate_tree() return self._octree @property def sph_smoothing_style(self): smoothing_style = getattr(self.ds, "sph_smoothing_style", "scatter") return smoothing_style @property def sph_normalize(self): normalize = getattr(self.ds, "use_sph_normalization", "normalize") return normalize @property def sph_num_neighbors(self): num_neighbors = getattr(self.ds, "num_neighbors", 32) return num_neighbors def _sanitize_ptypes(self, ptypes): if ptypes is None: return ["all"] if not isinstance(ptypes, list): ptypes = [ptypes] for ptype in ptypes: if ptype not in self.ds.particle_types: mess = f"{ptype} not found. Particle type must " mess += "be in the dataset!"
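# A hedged illustration of this validation (the particle-type name is an
# assumption about the dataset): ds.octree(ptypes="PartType0") is normalized
# to ["PartType0"] above, while a ptype absent from ds.particle_types falls
# through to the TypeError raised just below.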
raise TypeError(mess) return ptypes def _setup_data_source(self): mylog.info( ( "Allocating octree with spatial range " "[%.4e, %.4e, %.4e] code_length to " "[%.4e, %.4e, %.4e] code_length" ), *self.left_edge.to("code_length").d, *self.right_edge.to("code_length").d, ) self._data_source = self.ds.region(self.center, self.left_edge, self.right_edge) def _sanitize_edge(self, edge, default): if edge is None: return default.copy() if not is_sequence(edge): edge = [edge] * len(self.ds.domain_left_edge) if len(edge) != len(self.ds.domain_left_edge): raise RuntimeError( "Length of edges must match the dimensionality of the dataset" ) if hasattr(edge, "units"): if edge.units.registry is self.ds.unit_registry: return edge edge_units = edge.units else: edge_units = "code_length" return self.ds.arr(edge, edge_units) def get_data(self, fields=None): if fields is None: return if hasattr(self.ds, "_sph_ptypes"): sph_ptypes = self.ds._sph_ptypes else: sph_ptypes = tuple( value for value in self.ds.particle_types_raw if value in ["PartType0", "Gas", "gas", "io"] ) if len(sph_ptypes) == 0: raise RuntimeError smoothing_style = self.sph_smoothing_style normalize = self.sph_normalize if fields[0] in sph_ptypes: units = self.ds._get_field_info(fields).units if smoothing_style == "scatter": self._scatter_smooth(fields, units, normalize) else: self._gather_smooth(fields, units, normalize) elif fields[0] == "index": return self[fields] else: raise NotImplementedError def _gather_smooth(self, fields, units, normalize): buff = np.zeros(self.tree.num_nodes, dtype="float64") # Again, attempt to load num_neighbors from the octree num_neighbors = self.sph_num_neighbors # For the gather approach we load up all of the data; like other gather # approaches this is not memory conservative, but spatial chunking could # fix that fields_to_get = [ "particle_position", "density", "particle_mass", "smoothing_length", fields[1], ] all_fields = all_data(self.ds, fields[0], fields_to_get, kdtree=True) interpolate_sph_positions_gather( buff, all_fields["particle_position"], self["index", "positions"], all_fields["smoothing_length"], all_fields["particle_mass"], all_fields["density"], all_fields[fields[1]].in_units(units), self.ds.index.kdtree, use_normalization=normalize, num_neigh=num_neighbors, ) self[fields] = self.ds.arr(buff[~self["index", "refined"]], units) def _scatter_smooth(self, fields, units, normalize): from tqdm import tqdm buff = np.zeros(self.tree.num_nodes, dtype="float64") if normalize: buff_den = np.zeros(buff.shape[0], dtype="float64") else: buff_den = np.empty(0) ptype = fields[0] pbar = tqdm(desc=f"Interpolating (scatter) SPH field {fields[0]}") for chunk in self._data_source.chunks([fields], "io"): px = chunk[ptype, "particle_position_x"].to("code_length").d py = chunk[ptype, "particle_position_y"].to("code_length").d pz = chunk[ptype, "particle_position_z"].to("code_length").d hsml = chunk[ptype, "smoothing_length"].to("code_length").d pmass = chunk[ptype, "particle_mass"].to("code_mass").d pdens = chunk[ptype, "density"].to("code_mass/code_length**3").d field_quantity = chunk[fields].to(units).d if px.shape[0] > 0: self.tree.interpolate_sph_cells( buff, buff_den, px, py, pz, pmass, pdens, hsml, field_quantity, use_normalization=normalize, ) pbar.update(1) pbar.close() if normalize: normalization_1d_utility(buff, buff_den) self[fields] = self.ds.arr(buff[~self["index", "refined"]], units)
yt-4.4.0/yt/data_objects/data_containers.py import abc import weakref from collections import defaultdict from contextlib import contextmanager from typing import TYPE_CHECKING, Optional import numpy as np from yt._maintenance.deprecation import issue_deprecation_warning from yt._typing import AnyFieldKey, FieldKey, FieldName from yt.config import ytcfg from yt.data_objects.field_data import YTFieldData from yt.data_objects.profiles import create_profile from yt.fields.field_exceptions import NeedsGridType from yt.frontends.ytdata.utilities import save_as_dataset from yt.funcs import get_output_filename, iter_fields, mylog, parse_center_array from yt.units._numpy_wrapper_functions import uconcatenate from yt.utilities.amr_kdtree.api import AMRKDTree from yt.utilities.exceptions import ( YTCouldNotGenerateField, YTException, YTFieldNotFound, YTFieldTypeNotFound, YTNonIndexedDataContainer, YTSpatialFieldUnitError, ) from yt.utilities.object_registries import data_object_registry from yt.utilities.on_demand_imports import _firefly as firefly from yt.utilities.parameter_file_storage import ParameterFileStore if TYPE_CHECKING: from yt.data_objects.static_output import Dataset def sanitize_weight_field(ds, field, weight): if weight is None: field_object = ds._get_field_info(field) if field_object.sampling_type == "particle": if field_object.name[0] == "gas": ptype = ds._sph_ptypes[0] else: ptype = field_object.name[0] weight_field = (ptype, "particle_ones") else: weight_field = ("index", "ones") else: weight_field = weight return weight_field def _get_ipython_key_completion(ds): # tuple-completion (ftype, fname) was added in IPython 8.0.0 # with earlier versions, completion works with fname only # this implementation should work transparently with all IPython versions tuple_keys = ds.field_list + ds.derived_field_list fnames = list({k[1] for k in tuple_keys}) return tuple_keys + fnames class YTDataContainer(abc.ABC): """ Generic YTDataContainer container. By itself, will attempt to generate fields, read fields (method defined by derived classes) and deal with passing back and forth field parameters. """ _chunk_info = None _num_ghost_zones = 0 _con_args: tuple[str, ...] = () _skip_add = False _container_fields: tuple[AnyFieldKey, ...] = () _tds_attrs: tuple[str, ...] = () _tds_fields: tuple[str, ...] = () _field_cache = None _index = None _key_fields: list[str] def __init__(self, ds: Optional["Dataset"], field_parameters) -> None: """ Typically this is never called directly, but only due to inheritance. It associates a :class:`~yt.data_objects.static_output.Dataset` with the class, sets its initial set of fields, and the remainder of the arguments are passed as field_parameters. """ # ds is typically set in the new object type created in # Dataset._add_object_class but it can also be passed as a parameter to the # constructor, in which case it will override the default. # This code ensures it is always set.
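# A hedged sketch of the two construction paths (names illustrative, not
# prescriptive):
#   ad = ds.all_data()                      # ds injected via Dataset._add_object_class
#   reg = YTRegion(center, le, re, ds=ds)   # ds passed explicitly to the constructor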
self.ds: Dataset if ds is not None: self.ds = ds else: if not hasattr(self, "ds"): raise RuntimeError( "Error: ds must be set either through class type " "or parameter to the constructor" ) self._current_particle_type = "all" self._current_fluid_type = self.ds.default_fluid_type self.ds.objects.append(weakref.proxy(self)) mylog.debug("Appending object to %s (type: %s)", self.ds, type(self)) self.field_data = YTFieldData() if self.ds.unit_system.has_current_mks: mag_unit = "T" else: mag_unit = "G" self._default_field_parameters = { "center": self.ds.arr(np.zeros(3, dtype="float64"), "cm"), "bulk_velocity": self.ds.arr(np.zeros(3, dtype="float64"), "cm/s"), "bulk_magnetic_field": self.ds.arr(np.zeros(3, dtype="float64"), mag_unit), "normal": self.ds.arr([0.0, 0.0, 1.0], ""), } if field_parameters is None: field_parameters = {} self._set_default_field_parameters() for key, val in field_parameters.items(): self.set_field_parameter(key, val) def __init_subclass__(cls, *args, **kwargs): super().__init_subclass__(*args, **kwargs) if hasattr(cls, "_type_name") and not cls._skip_add: name = getattr(cls, "_override_selector_name", cls._type_name) data_object_registry[name] = cls @property def pf(self): return getattr(self, "ds", None) @property def index(self): if self._index is not None: return self._index self._index = self.ds.index return self._index def _debug(self): """ When called from within a derived field, this will run pdb. However, during field detection, it will not. This allows you to more easily debug fields that are being called on actual objects. """ import pdb pdb.set_trace() def _set_default_field_parameters(self): self.field_parameters = {} for k, v in self._default_field_parameters.items(): self.set_field_parameter(k, v) def _is_default_field_parameter(self, parameter): if parameter not in self._default_field_parameters: return False return ( self._default_field_parameters[parameter] is self.field_parameters[parameter] ) def apply_units(self, arr, units): try: arr.units.registry = self.ds.unit_registry return arr.to(units) except AttributeError: return self.ds.arr(arr, units=units) def _first_matching_field(self, field: FieldName) -> FieldKey: for ftype, fname in self.ds.derived_field_list: if fname == field: return (ftype, fname) raise YTFieldNotFound(field, self.ds) def _set_center(self, center): if center is None: self.center = None return else: axis = getattr(self, "axis", None) self.center = parse_center_array(center, ds=self.ds, axis=axis) self.set_field_parameter("center", self.center) def get_field_parameter(self, name, default=None): """ This is typically only used by derived field functions, but it returns parameters used to generate fields. """ if name in self.field_parameters: return self.field_parameters[name] else: return default def set_field_parameter(self, name, val): """ Here we set up dictionaries that get passed up and down and ultimately to derived fields. """ self.field_parameters[name] = val def has_field_parameter(self, name): """ Checks if a field parameter is set. """ return name in self.field_parameters def clear_data(self): """ Clears out all data from the YTDataContainer instance, freeing memory. """ self.field_data.clear() def has_key(self, key): """ Checks if a data field already exists. """ return key in self.field_data def keys(self): return self.field_data.keys() def _reshape_vals(self, arr): return arr def __getitem__(self, key): """ Returns a single field. Will add if necessary. 
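A hedged usage sketch (assumes ``ds`` is an already-loaded dataset): >>> ad = ds.all_data() >>> rho = ad["gas", "density"] # generated on first access, then cached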
""" f = self._determine_fields([key])[0] if f not in self.field_data and key not in self.field_data: if f in self._container_fields: self.field_data[f] = self.ds.arr(self._generate_container_field(f)) return self.field_data[f] else: self.get_data(f) # fi.units is the unit expression string. We depend on the registry # hanging off the dataset to define this unit object. # Note that this is less succinct so that we can account for the case # when there are, for example, no elements in the object. try: rv = self.field_data[f] except KeyError: fi = self.ds._get_field_info(f) rv = self.ds.arr(self.field_data[key], fi.units) return rv def _ipython_key_completions_(self): return _get_ipython_key_completion(self.ds) def __setitem__(self, key, val): """ Sets a field to be some other value. """ self.field_data[key] = val def __delitem__(self, key): """ Deletes a field """ if key not in self.field_data: key = self._determine_fields(key)[0] del self.field_data[key] def _generate_field(self, field): ftype, fname = field finfo = self.ds._get_field_info(field) with self._field_type_state(ftype, finfo): if fname in self._container_fields: tr = self._generate_container_field(field) if finfo.sampling_type == "particle": tr = self._generate_particle_field(field) else: tr = self._generate_fluid_field(field) if tr is None: raise YTCouldNotGenerateField(field, self.ds) return tr def _generate_fluid_field(self, field): # First we check the validator finfo = self.ds._get_field_info(field) if self._current_chunk is None or self._current_chunk.chunk_type != "spatial": gen_obj = self else: gen_obj = self._current_chunk.objs[0] gen_obj.field_parameters = self.field_parameters try: finfo.check_available(gen_obj) except NeedsGridType as ngt_exception: rv = self._generate_spatial_fluid(field, ngt_exception.ghost_zones) else: rv = finfo(gen_obj) return rv def _generate_spatial_fluid(self, field, ngz): finfo = self.ds._get_field_info(field) if finfo.units is None: raise YTSpatialFieldUnitError(field) units = finfo.units try: rv = self.ds.arr(np.zeros(self.ires.size, dtype="float64"), units) accumulate = False except YTNonIndexedDataContainer: # In this case, we'll generate many tiny arrays of unknown size and # then concatenate them. outputs = [] accumulate = True ind = 0 if ngz == 0: deps = self._identify_dependencies([field], spatial=True) deps = self._determine_fields(deps) for _io_chunk in self.chunks([], "io", cache=False): for _chunk in self.chunks([], "spatial", ngz=0, preload_fields=deps): o = self._current_chunk.objs[0] if accumulate: rv = self.ds.arr(np.empty(o.ires.size, dtype="float64"), units) outputs.append(rv) ind = 0 # Does this work with mesh? 
with o._activate_cache(): ind += o.select( self.selector, source=self[field], dest=rv, offset=ind ) else: chunks = self.index._chunk(self, "spatial", ngz=ngz) for chunk in chunks: with self._chunked_read(chunk): gz = self._current_chunk.objs[0] gz.field_parameters = self.field_parameters wogz = gz._base_grid if accumulate: rv = self.ds.arr( np.empty(wogz.ires.size, dtype="float64"), units ) outputs.append(rv) ind += wogz.select( self.selector, source=gz[field][ngz:-ngz, ngz:-ngz, ngz:-ngz], dest=rv, offset=ind, ) if accumulate: rv = uconcatenate(outputs) return rv def _generate_particle_field(self, field): # First we check the validator ftype, fname = field if self._current_chunk is None or self._current_chunk.chunk_type != "spatial": gen_obj = self else: gen_obj = self._current_chunk.objs[0] try: finfo = self.ds._get_field_info(field) finfo.check_available(gen_obj) except NeedsGridType as ngt_exception: if ngt_exception.ghost_zones != 0: raise NotImplementedError from ngt_exception size = self._count_particles(ftype) rv = self.ds.arr(np.empty(size, dtype="float64"), finfo.units) ind = 0 for _io_chunk in self.chunks([], "io", cache=False): for _chunk in self.chunks(field, "spatial"): x, y, z = (self[ftype, f"particle_position_{ax}"] for ax in "xyz") if x.size == 0: continue mask = self._current_chunk.objs[0].select_particles( self.selector, x, y, z ) if mask is None: continue # This requests it from the grid and does NOT mask it data = self[field][mask] rv[ind : ind + data.size] = data ind += data.size else: with self._field_type_state(ftype, finfo, gen_obj): rv = self.ds._get_field_info(field)(gen_obj) return rv def _count_particles(self, ftype): for (f1, _f2), val in self.field_data.items(): if f1 == ftype: return val.size size = 0 for _io_chunk in self.chunks([], "io", cache=False): for _chunk in self.chunks([], "spatial"): x, y, z = (self[ftype, f"particle_position_{ax}"] for ax in "xyz") if x.size == 0: continue size += self._current_chunk.objs[0].count_particles( self.selector, x, y, z ) return size def _generate_container_field(self, field): raise NotImplementedError def _parameter_iterate(self, seq): for obj in seq: old_fp = obj.field_parameters obj.field_parameters = self.field_parameters yield obj obj.field_parameters = old_fp def write_out(self, filename, fields=None, format="%0.16e"): """Write out the YTDataContainer object in a text file. This function will take a data object and produce a tab delimited text file containing the fields presently existing and the fields given in the ``fields`` list. Parameters ---------- filename : String The name of the file to write to. fields : List of string, Default = None If this is supplied, these fields will be added to the list of fields to be saved to disk. If not supplied, whatever fields presently exist will be used. format : String, Default = "%0.16e" Format of numbers to be written in the file. Raises ------ ValueError Raised when there is no existing field. YTException Raised when field_type of supplied fields is inconsistent with the field_type of existing fields. 
Examples -------- >>> ds = fake_particle_ds() >>> sp = ds.sphere(ds.domain_center, 0.25) >>> sp.write_out("sphere_1.txt") >>> sp.write_out("sphere_2.txt", fields=["cell_volume"]) """ if fields is None: fields = sorted(self.field_data.keys()) field_order = [("index", k) for k in self._key_fields] diff_fields = [field for field in fields if field not in field_order] field_order += diff_fields field_order = sorted(self._determine_fields(field_order)) field_shapes = defaultdict(list) for field in field_order: shape = self[field].shape field_shapes[shape].append(field) # Check all fields have the same shape if len(field_shapes) != 1: err_msg = ["Got fields with different number of elements:\n"] for shape, these_fields in field_shapes.items(): err_msg.append(f"\t {these_fields} with shape {shape}") raise YTException("\n".join(err_msg)) with open(filename, "w") as fid: field_header = [str(f) for f in field_order] fid.write("\t".join(["#"] + field_header + ["\n"])) field_data = np.array([self.field_data[field] for field in field_order]) for line in range(field_data.shape[1]): field_data[:, line].tofile(fid, sep="\t", format=format) fid.write("\n") def to_dataframe(self, fields): r"""Export a data object to a :class:`~pandas.DataFrame`. This function will take a data object and an optional list of fields and export them to a :class:`~pandas.DataFrame` object. If pandas is not importable, this will raise ImportError. Parameters ---------- fields : list of strings or tuple field names This is the list of fields to be exported into the DataFrame. Returns ------- df : :class:`~pandas.DataFrame` The data contained in the object. Examples -------- >>> dd = ds.all_data() >>> df = dd.to_dataframe([("gas", "density"), ("gas", "temperature")]) """ from yt.utilities.on_demand_imports import _pandas as pd data = {} fields = self._determine_fields(fields) for field in fields: data[field[-1]] = self[field] df = pd.DataFrame(data) return df def to_astropy_table(self, fields): """ Export region data to a :class:~astropy.table.table.QTable, which is a Table object which is unit-aware. The QTable can then be exported to an ASCII file, FITS file, etc. See the AstroPy Table docs for more details: http://docs.astropy.org/en/stable/table/ Parameters ---------- fields : list of strings or tuple field names This is the list of fields to be exported into the QTable. Examples -------- >>> sp = ds.sphere("c", (1.0, "Mpc")) >>> t = sp.to_astropy_table([("gas", "density"), ("gas", "temperature")]) """ from astropy.table import QTable t = QTable() fields = self._determine_fields(fields) for field in fields: t[field[-1]] = self[field].to_astropy() return t def save_as_dataset(self, filename=None, fields=None): r"""Export a data object to a reloadable yt dataset. This function will take a data object and output a dataset containing either the fields presently existing or fields given in the ``fields`` list. The resulting dataset can be reloaded as a yt dataset. Parameters ---------- filename : str, optional The name of the file to be written. If None, the name will be a combination of the original dataset and the type of data container. fields : list of string or tuple field names, optional If this is supplied, it is the list of fields to be saved to disk. If not supplied, all the fields that have been queried will be saved. Returns ------- filename : str The name of the file that has been created. 
Examples -------- >>> import yt >>> ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046") >>> sp = ds.sphere(ds.domain_center, (10, "Mpc")) >>> fn = sp.save_as_dataset(fields=[("gas", "density"), ("gas", "temperature")]) >>> sphere_ds = yt.load(fn) >>> # the original data container is available as the data attribute >>> print(sphere_ds.data["gas", "density"]) [ 4.46237613e-32 4.86830178e-32 4.46335118e-32 ..., 6.43956165e-30 3.57339907e-30 2.83150720e-30] g/cm**3 >>> ad = sphere_ds.all_data() >>> print(ad["gas", "temperature"]) [ 1.00000000e+00 1.00000000e+00 1.00000000e+00 ..., 4.40108359e+04 4.54380547e+04 4.72560117e+04] K """ keyword = f"{str(self.ds)}_{self._type_name}" filename = get_output_filename(filename, keyword, ".h5") data = {} if fields is not None: for f in self._determine_fields(fields): data[f] = self[f] else: data.update(self.field_data) # get the extra fields needed to reconstruct the container tds_fields = tuple(("index", t) for t in self._tds_fields) for f in [f for f in self._container_fields + tds_fields if f not in data]: data[f] = self[f] data_fields = list(data.keys()) need_grid_positions = False need_particle_positions = False ptypes = [] ftypes = {} for field in data_fields: if field in self._container_fields: ftypes[field] = "grid" need_grid_positions = True elif self.ds.field_info[field].sampling_type == "particle": if field[0] not in ptypes: ptypes.append(field[0]) ftypes[field] = field[0] need_particle_positions = True else: ftypes[field] = "grid" need_grid_positions = True # projections and slices use px and py, so don't need positions if self._type_name in ["cutting", "proj", "slice", "quad_proj"]: need_grid_positions = False if need_particle_positions: for ax in self.ds.coordinates.axis_order: for ptype in ptypes: p_field = (ptype, f"particle_position_{ax}") if p_field in self.ds.field_info and p_field not in data: data_fields.append(p_field) ftypes[p_field] = p_field[0] data[p_field] = self[p_field] if need_grid_positions: for ax in self.ds.coordinates.axis_order: g_field = ("index", ax) if g_field in self.ds.field_info and g_field not in data: data_fields.append(g_field) ftypes[g_field] = "grid" data[g_field] = self[g_field] g_field = ("index", "d" + ax) if g_field in self.ds.field_info and g_field not in data: data_fields.append(g_field) ftypes[g_field] = "grid" data[g_field] = self[g_field] extra_attrs = { arg: getattr(self, arg, None) for arg in self._con_args + self._tds_attrs } extra_attrs["con_args"] = repr(self._con_args) extra_attrs["data_type"] = "yt_data_container" extra_attrs["container_type"] = self._type_name extra_attrs["dimensionality"] = self._dimensionality save_as_dataset( self.ds, filename, data, field_types=ftypes, extra_attrs=extra_attrs ) return filename def to_glue(self, fields, label="yt", data_collection=None): """ Takes specific *fields* in the container and exports them to Glue (http://glueviz.org) for interactive analysis. Optionally add a *label*. If you are already within the Glue environment, you can pass a *data_collection* object, otherwise Glue will be started.
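A hedged example (assumes a loaded dataset ``ds`` and a working Glue installation; the label is arbitrary): >>> ad = ds.all_data() >>> ad.to_glue([("gas", "density"), ("gas", "temperature")], label="yt data")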
""" from glue.core import Data, DataCollection if ytcfg.get("yt", "internals", "within_testing"): from glue.core.application_base import Application as GlueApplication else: try: from glue.app.qt.application import GlueApplication except ImportError: from glue.qt.glue_application import GlueApplication gdata = Data(label=label) for component_name in fields: gdata.add_component(self[component_name], component_name) if data_collection is None: dc = DataCollection([gdata]) app = GlueApplication(dc) try: app.start() except AttributeError: # In testing we're using a dummy glue application object # that doesn't have a start method pass else: data_collection.append(gdata) def create_firefly_object( self, datadir=None, fields_to_include=None, fields_units=None, default_decimation_factor=100, velocity_units="km/s", coordinate_units="kpc", show_unused_fields=0, *, JSONdir=None, match_any_particle_types=True, **kwargs, ): r"""This function links a region of data stored in a yt dataset to the Python frontend API for [Firefly](http://github.com/ageller/Firefly), a browser-based particle visualization tool. Parameters ---------- datadir : string Path to where any `.json` files should be saved. If a relative path will assume relative to `${HOME}`. A value of `None` will default to `${HOME}/Data`. fields_to_include : array_like of strings or field tuples A list of fields that you want to include in your Firefly visualization for on-the-fly filtering and colormapping. default_decimation_factor : integer The factor by which you want to decimate each particle group by (e.g. if there are 1e7 total particles in your simulation you might want to set this to 100 at first). Randomly samples your data like `shuffled_data[::decimation_factor]` so as to not overtax a system. This is adjustable on a per particle group basis by changing the returned reader's `reader.particleGroup[i].decimation_factor` before calling `reader.writeToDisk()`. velocity_units : string The units that the velocity should be converted to in order to show streamlines in Firefly. Defaults to km/s. coordinate_units : string The units that the coordinates should be converted to. Defaults to kpc. show_unused_fields : boolean A flag to optionally print the fields that are available, in the dataset but were not explicitly requested to be tracked. match_any_particle_types : boolean If True, when any of the fields_to_include match multiple particle groups then the field will be added for all matching particle groups. If False, an error is raised when encountering an ambiguous field. Default is True. Any additional keyword arguments are passed to firefly.data_reader.Reader.__init__ Returns ------- reader : Firefly.data_reader.Reader object A reader object from the Firefly, configured to output the current region selected Examples -------- >>> ramses_ds = yt.load( ... "/Users/agurvich/Desktop/yt_workshop/" ... + "DICEGalaxyDisk_nonCosmological/output_00002/info_00002.txt" ... ) >>> region = ramses_ds.sphere(ramses_ds.domain_center, (1000, "kpc")) >>> reader = region.create_firefly_object( ... "IsoGalaxyRamses", ... fields_to_include=[ ... "particle_extra_field_1", ... "particle_extra_field_2", ... ], ... fields_units=["dimensionless", "dimensionless"], ... 
        """
        ## handle default arguments
        if fields_to_include is None:
            fields_to_include = []
        if fields_units is None:
            fields_units = []

        ## handle input validation, if any
        if len(fields_units) != len(fields_to_include):
            raise RuntimeError("Each requested field must have units.")

        ## for safety, in case someone passes a float just cast it
        default_decimation_factor = int(default_decimation_factor)

        if JSONdir is not None:
            issue_deprecation_warning(
                "The 'JSONdir' keyword argument is a deprecated alias for 'datadir'. "
                "Please use 'datadir' directly.",
                stacklevel=3,
                since="4.1",
            )
            datadir = JSONdir

        ## initialize a firefly reader instance
        reader = firefly.data_reader.Reader(
            datadir=datadir, clean_datadir=True, **kwargs
        )

        ## Ensure at least one field type contains every field requested
        if match_any_particle_types:
            # Need to keep previous behavior: single string field names that
            # are ambiguous should bring in any matching ParticleGroups instead
            # of raising an error
            # This can be expanded/changed in the future to include field
            # tuples containing some sort of special "any" ParticleGroup
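            # Illustrative (hypothetical field names): a bare name such as
            # "Masses" could match both ("PartType0", "Masses") and
            # ("PartType4", "Masses"); in that case every matching particle
            # group is kept rather than raising an error.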
            unambiguous_fields_to_include = []
            unambiguous_fields_units = []
            for field, field_unit in zip(fields_to_include, fields_units, strict=True):
                if isinstance(field, tuple):
                    # skip tuples, they'll be checked with _determine_fields
                    unambiguous_fields_to_include.append(field)
                    unambiguous_fields_units.append(field_unit)
                    continue
                _, candidates = self.ds._get_field_info_helper(field)
                if len(candidates) == 1:
                    # Field is unambiguous, add in tuple form
                    # This should be equivalent to _tupleize_field
                    unambiguous_fields_to_include.append(candidates[0])
                    unambiguous_fields_units.append(field_unit)
                else:
                    # Field has multiple candidates, add all of them instead
                    # of original field. Note this may bring in aliases and
                    # equivalent particle fields
                    for c in candidates:
                        unambiguous_fields_to_include.append(c)
                        unambiguous_fields_units.append(field_unit)
            fields_to_include = unambiguous_fields_to_include
            fields_units = unambiguous_fields_units

        # error if any requested field is unknown or (still) ambiguous
        # This is also sufficient if match_any_particle_types=False
        fields_to_include = self._determine_fields(fields_to_include)

        ## Also generate equivalent of particle_fields_by_type including
        ## derived fields
        kysd = defaultdict(list)
        for k, v in self.ds.derived_field_list:
            kysd[k].append(v)

        ## create a ParticleGroup object that contains *every* field
        for ptype in sorted(self.ds.particle_types_raw):
            ## skip this particle type if it has no particles in this dataset
            if self[ptype, "relative_particle_position"].shape[0] == 0:
                continue

            ## loop through the fields and print them to the screen
            if show_unused_fields:
                ## read the available extra fields from yt
                this_ptype_fields = self.ds.particle_fields_by_type[ptype]

                ## load the extra fields and print them
                for field in this_ptype_fields:
                    if field not in fields_to_include:
                        mylog.warning(
                            "detected (but did not request) %s %s", ptype, field
                        )

            field_arrays = []
            field_names = []

            ## explicitly go after the fields we want
            for field, units in zip(fields_to_include, fields_units, strict=True):
                ## Only interested in fields with the current particle type,
                ## whether that means general fields or field tuples
                ftype, fname = field
                if ftype not in (ptype, "all"):
                    continue

                ## determine if you want to take the log of the field for Firefly
                log_flag = "log(" in units

                ## read the field array from the dataset
                this_field_array = self[ptype, fname]

                ## fix the units string and prepend 'log' to the field for
                ## the UI name
                if log_flag:
                    units = units[len("log(") : -1]
                    fname = f"log{fname}"

                ## perform the unit conversion and take the log if
                ## necessary.
                this_field_array = this_field_array.in_units(units)
                if log_flag:
                    this_field_array = np.log10(this_field_array)

                ## add this array to the tracked arrays
                field_arrays += [this_field_array]
                field_names = np.append(field_names, [fname], axis=0)

            ## flag whether we want to filter and/or color by these fields
            ## we'll assume yes for both cases, this can be changed after
            ## the reader object is returned to the user.
            field_filter_flags = np.ones(len(field_names))
            field_colormap_flags = np.ones(len(field_names))

            ## field_* needs to be explicitly set None if empty
            ## so that Firefly will correctly compute the binary
            ## headers
            if len(field_arrays) == 0:
                if len(fields_to_include) > 0:
                    mylog.warning("No additional fields specified for %s", ptype)
                field_arrays = None
                field_names = None
                field_filter_flags = None
                field_colormap_flags = None

            ## Check if particles have velocities
            if "relative_particle_velocity" in kysd[ptype]:
                velocities = self[ptype, "relative_particle_velocity"].in_units(
                    velocity_units
                )
            else:
                velocities = None

            ## create a firefly ParticleGroup for this particle type
            pg = firefly.data_reader.ParticleGroup(
                UIname=ptype,
                coordinates=self[ptype, "relative_particle_position"].in_units(
                    coordinate_units
                ),
                velocities=velocities,
                field_arrays=field_arrays,
                field_names=field_names,
                field_filter_flags=field_filter_flags,
                field_colormap_flags=field_colormap_flags,
                decimation_factor=default_decimation_factor,
            )

            ## bind this particle group to the firefly reader object
            reader.addParticleGroup(pg)

        return reader

    # Numpy-like Operations
    def argmax(self, field, axis=None):
        r"""Return the values at which the field is maximized.
        This will, in a parallel-aware fashion, find the maximum value and
        then return the values of the fields requested in "axis" at that
        maximum location. By default it will return the spatial positions
        (in the natural coordinate system), but it can be any field.

        Parameters
        ----------
        field : string or tuple field name
            The field to maximize.
        axis : string or list of strings, optional
            If supplied, the fields to sample along; if not supplied, defaults
            to the coordinate fields. This can be the name of the coordinate
            fields (i.e., 'x', 'y', 'z') or a list of fields, but cannot be 0,
            1, 2.

        Returns
        -------
        A list of YTQuantities as specified by the axis argument.

        Examples
        --------
        >>> temp_at_max_rho = reg.argmax(
        ...     ("gas", "density"), axis=("gas", "temperature")
        ... )
        >>> max_rho_xyz = reg.argmax(("gas", "density"))
        >>> t_mrho, v_mrho = reg.argmax(
        ...     ("gas", "density"),
        ...     axis=[("gas", "temperature"), ("gas", "velocity_magnitude")],
        ... )
        >>> x, y, z = reg.argmax(("gas", "density"))
        """
        if axis is None:
            mv, pos0, pos1, pos2 = self.quantities.max_location(field)
            return pos0, pos1, pos2
        if isinstance(axis, str):
            axis = [axis]
        rv = self.quantities.sample_at_max_field_values(field, axis)
        if len(rv) == 2:
            return rv[1]
        return rv[1:]

    def argmin(self, field, axis=None):
        r"""Return the values at which the field is minimized.

        This will, in a parallel-aware fashion, find the minimum value and
        then return the values of the fields requested in "axis" at that
        minimum location. By default it will return the spatial positions
        (in the natural coordinate system), but it can be any field.

        Parameters
        ----------
        field : string or tuple field name
            The field to minimize.
        axis : string or list of strings, optional
            If supplied, the fields to sample along; if not supplied, defaults
            to the coordinate fields. This can be the name of the coordinate
            fields (i.e., 'x', 'y', 'z') or a list of fields, but cannot be 0,
            1, 2.

        Returns
        -------
        A list of YTQuantities as specified by the axis argument.

        Examples
        --------
        >>> temp_at_min_rho = reg.argmin(
        ...     ("gas", "density"), axis=("gas", "temperature")
        ... )
        >>> min_rho_xyz = reg.argmin(("gas", "density"))
        >>> t_mrho, v_mrho = reg.argmin(
        ...     ("gas", "density"),
        ...     axis=[("gas", "temperature"), ("gas", "velocity_magnitude")],
        ... )
        >>> x, y, z = reg.argmin(("gas", "density"))
        """
        if axis is None:
            mv, pos0, pos1, pos2 = self.quantities.min_location(field)
            return pos0, pos1, pos2
        if isinstance(axis, str):
            axis = [axis]
        rv = self.quantities.sample_at_min_field_values(field, axis)
        if len(rv) == 2:
            return rv[1]
        return rv[1:]

    def _compute_extrema(self, field):
        if self._extrema_cache is None:
            self._extrema_cache = {}
        if field not in self._extrema_cache:
            # Note we still need to call extrema for each field, as of right
            # now
            mi, ma = self.quantities.extrema(field)
            self._extrema_cache[field] = (mi, ma)
        return self._extrema_cache[field]

    _extrema_cache = None

    def max(self, field, axis=None):
        r"""Compute the maximum of a field, optionally along an axis.

        This will, in a parallel-aware fashion, compute the maximum of the
        given field. Supplying an axis will result in a return value of a
        YTProjection, with method 'max' for maximum intensity. If the max has
        already been requested, it will use the cached extrema value.

        Parameters
        ----------
        field : string or tuple field name
            The field to maximize.
        axis : string, optional
            If supplied, the axis to project the maximum along.

        Returns
        -------
        Either a scalar or a YTProjection.
        Examples
        --------
        >>> max_temp = reg.max(("gas", "temperature"))
        >>> max_temp_proj = reg.max(("gas", "temperature"), axis=("index", "x"))
        """
        if axis is None:
            rv = tuple(self._compute_extrema(f)[1] for f in iter_fields(field))
            if len(rv) == 1:
                return rv[0]
            return rv
        elif axis in self.ds.coordinates.axis_name:
            return self.ds.proj(field, axis, data_source=self, method="max")
        else:
            raise NotImplementedError(f"Unknown axis {axis}")

    def min(self, field, axis=None):
        r"""Compute the minimum of a field, optionally along an axis.

        This will, in a parallel-aware fashion, compute the minimum of the
        given field. Supplying an axis will result in a return value of a
        YTProjection, with method 'min' for minimum intensity. If the min has
        already been requested, it will use the cached extrema value.

        Parameters
        ----------
        field : string or tuple field name
            The field to minimize.
        axis : string, optional
            If supplied, the axis to compute the minimum along.

        Returns
        -------
        Either a scalar or a YTProjection.

        Examples
        --------
        >>> min_temp = reg.min(("gas", "temperature"))
        >>> min_temp_proj = reg.min(("gas", "temperature"), axis=("index", "x"))
        """
        if axis is None:
            rv = tuple(self._compute_extrema(f)[0] for f in iter_fields(field))
            if len(rv) == 1:
                return rv[0]
            return rv
        elif axis in self.ds.coordinates.axis_name:
            return self.ds.proj(field, axis, data_source=self, method="min")
        else:
            raise NotImplementedError(f"Unknown axis {axis}")

    def std(self, field, axis=None, weight=None):
        """Compute the standard deviation of a field, optionally along
        an axis, with a weight.

        This will, in a parallel-aware fashion, compute the standard
        deviation of the given field. If an axis is supplied, it will return a
        projection, where the weight is also supplied.

        By default the weight field will be "ones" or "particle_ones",
        depending on the field, resulting in an unweighted standard
        deviation.

        Parameters
        ----------
        field : string or tuple field name
            The field to calculate the standard deviation of.
        axis : string, optional
            If supplied, the axis to compute the standard deviation along
            (i.e., to project along).
        weight : string, optional
            The field to use as a weight.

        Returns
        -------
        Scalar or YTProjection.
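        Examples
        --------
        A sketch of both forms (the field names are illustrative):

        >>> rho_std = reg.std(("gas", "density"), weight=("gas", "mass"))
        >>> rho_std_proj = reg.std(
        ...     ("gas", "density"), axis=("index", "x"), weight=("gas", "mass")
        ... )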
        """
        weight_field = sanitize_weight_field(self.ds, field, weight)
        if axis in self.ds.coordinates.axis_name:
            r = self.ds.proj(
                field, axis, data_source=self, weight_field=weight_field, moment=2
            )
        elif axis is None:
            r = self.quantities.weighted_standard_deviation(field, weight_field)[0]
        else:
            raise NotImplementedError(f"Unknown axis {axis}")
        return r

    def ptp(self, field):
        r"""Compute the range of values (maximum - minimum) of a field.

        This will, in a parallel-aware fashion, compute the "peak-to-peak" of
        the given field.

        Parameters
        ----------
        field : string or tuple field name
            The field to average.

        Returns
        -------
        Scalar

        Examples
        --------
        >>> rho_range = reg.ptp(("gas", "density"))
        """
        ex = self._compute_extrema(field)
        return ex[1] - ex[0]

    def profile(
        self,
        bin_fields,
        fields,
        n_bins=64,
        extrema=None,
        logs=None,
        units=None,
        weight_field=("gas", "mass"),
        accumulation=False,
        fractional=False,
        deposition="ngp",
        *,
        override_bins=None,
    ):
        r"""
        Create a 1, 2, or 3D profile object from this data_source.

        The dimensionality of the profile object is chosen by the number of
        fields given in the bin_fields argument. This simply calls
        :func:`yt.data_objects.profiles.create_profile`.

        Parameters
        ----------
        bin_fields : list of strings
            List of the binning fields for profiling.
        fields : list of strings
            The fields to be profiled.
        n_bins : int or list of ints
            The number of bins in each dimension. If None, 64 bins are used
            for each bin field. Default: 64.
        extrema : dict of min, max tuples
            Minimum and maximum values of the bin_fields for the profiles.
            The keys correspond to the field names. Defaults to the extrema
            of the bin_fields of the dataset. If a units dict is provided,
            extrema are understood to be in the units specified in the
            dictionary.
        logs : dict of boolean values
            Whether or not to log the bin_fields for the profiles.
            The keys correspond to the field names. Defaults to the take_log
            attribute of the field.
        units : dict of strings
            The units of the fields in the profiles, including the bin_fields.
        weight_field : str or tuple field identifier
            The weight field for computing weighted average for the profile
            values. If None, the profile values are sums of the data in
            each bin.
        accumulation : bool or list of bools
            If True, the profile values for a bin n are the cumulative sum of
            all the values from bin 0 to n. If the value is negative (e.g.
            ``-True``), the sum is reversed so that the value for bin n is the
            cumulative sum from bin N (total bins) to n. If the profile is 2D
            or 3D, a list of values can be given to control the summation in
            each dimension independently. Default: False.
        fractional : bool
            If True the profile values are divided by the sum of all the
            profile data such that the profile represents a probability
            distribution function.
        deposition : string
            Controls the type of deposition used for ParticlePhasePlots.
            Valid choices are 'ngp' and 'cic'. Default is 'ngp'. This
            parameter is ignored if the input fields are not of particle
            type.
        override_bins : dict of bins to profile plot with
            If set, ignores n_bins and extrema settings and uses the
            supplied bins to profile the field. If a units dict is provided,
            bins are understood to be in the units specified in the
            dictionary.

        Examples
        --------
        Create a 1d profile. Access bin field from profile.x and field
        data from profile[].

        >>> ds = load("DD0046/DD0046")
        >>> ad = ds.all_data()
        >>> profile = ad.profile(
        ...     [("gas", "density")],
        ...     [("gas", "temperature"), ("gas", "velocity_x")],
        ... )
        >>> print(profile.x)
        >>> print(profile["gas", "temperature"])
        >>> plot = profile.plot()
        """
        p = create_profile(
            self,
            bin_fields,
            fields,
            n_bins,
            extrema,
            logs,
            units,
            weight_field,
            accumulation,
            fractional,
            deposition,
            override_bins=override_bins,
        )
        return p

    def mean(self, field, axis=None, weight=None):
        r"""Compute the mean of a field, optionally along an axis, with a
        weight.

        This will, in a parallel-aware fashion, compute the mean of the
        given field. If an axis is supplied, it will return a projection,
        where the weight is also supplied.

        By default the weight field will be "ones" or "particle_ones",
        depending on the field being averaged, resulting in an unweighted
        average.

        Parameters
        ----------
        field : string or tuple field name
            The field to average.
        axis : string, optional
            If supplied, the axis to compute the mean along (i.e., to project
            along).
        weight : string, optional
            The field to use as a weight.

        Returns
        -------
        Scalar or YTProjection.

        Examples
        --------
        >>> avg_rho = reg.mean(("gas", "density"), weight="cell_volume")
        >>> rho_weighted_T = reg.mean(
        ...     ("gas", "temperature"), axis=("index", "y"), weight=("gas", "density")
        ...
) """ weight_field = sanitize_weight_field(self.ds, field, weight) if axis in self.ds.coordinates.axis_name: r = self.ds.proj(field, axis, data_source=self, weight_field=weight_field) elif axis is None: r = self.quantities.weighted_average_quantity(field, weight_field) else: raise NotImplementedError(f"Unknown axis {axis}") return r def sum(self, field, axis=None): r"""Compute the sum of a field, optionally along an axis. This will, in a parallel-aware fashion, compute the sum of the given field. If an axis is specified, it will return a projection (using method type "sum", which does not take into account path length) along that axis. Parameters ---------- field : string or tuple field name The field to sum. axis : string, optional If supplied, the axis to sum along. Returns ------- Either a scalar or a YTProjection. Examples -------- >>> total_vol = reg.sum("cell_volume") >>> cell_count = reg.sum(("index", "ones"), axis=("index", "x")) """ # Because we're using ``sum`` to specifically mean a sum or a # projection with the method="sum", we do not utilize the ``mean`` # function. if axis in self.ds.coordinates.axis_name: with self._field_parameter_state({"axis": axis}): r = self.ds.proj(field, axis, data_source=self, method="sum") elif axis is None: r = self.quantities.total_quantity(field) else: raise NotImplementedError(f"Unknown axis {axis}") return r def integrate(self, field, weight=None, axis=None, *, moment=1): r"""Compute the integral (projection) of a field along an axis. This projects a field along an axis. Parameters ---------- field : string or tuple field name The field to project. weight : string or tuple field name The field to weight the projection by axis : string The axis to project along. moment : integer, optional for a weighted projection, moment = 1 (the default) corresponds to a weighted average. moment = 2 corresponds to a weighted standard deviation. Returns ------- YTProjection Examples -------- >>> column_density = reg.integrate(("gas", "density"), axis=("index", "z")) """ if weight is not None: weight_field = sanitize_weight_field(self.ds, field, weight) else: weight_field = None if axis in self.ds.coordinates.axis_name: r = self.ds.proj( field, axis, data_source=self, weight_field=weight_field, moment=moment ) else: raise NotImplementedError(f"Unknown axis {axis}") return r @property def _hash(self): s = f"{self}" try: import hashlib return hashlib.md5(s.encode("utf-8")).hexdigest() except ImportError: return s def __reduce__(self): args = tuple( [self.ds._hash(), self._type_name] + [getattr(self, n) for n in self._con_args] + [self.field_parameters] ) return (_reconstruct_object, args) def clone(self): r"""Clone a data object. This will make a duplicate of a data object; note that the `field_parameters` may not necessarily be deeply-copied. If you modify the field parameters in-place, it may or may not be shared between the objects, depending on the type of object that that particular field parameter is. Notes ----- One use case for this is to have multiple identical data objects that are being chunked over in different orders. 
Examples -------- >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> sp = ds.sphere("c", 0.1) >>> sp_clone = sp.clone() >>> sp["gas", "density"] >>> print(sp.field_data.keys()) [("gas", "density")] >>> print(sp_clone.field_data.keys()) [] """ args = self.__reduce__() return args[0](self.ds, *args[1][1:]) def __repr__(self): # We'll do this the slow way to be clear what's going on s = f"{self.__class__.__name__} ({self.ds}): " for i in self._con_args: try: s += ( f", {i}={getattr(self, i).in_base(unit_system=self.ds.unit_system)}" ) except AttributeError: s += f", {i}={getattr(self, i)}" return s @contextmanager def _field_parameter_state(self, field_parameters): # What we're doing here is making a copy of the incoming field # parameters, and then updating it with our own. This means that we'll # be using our own center, if set, rather than the supplied one. But # it also means that any additionally set values can override it. old_field_parameters = self.field_parameters new_field_parameters = field_parameters.copy() new_field_parameters.update(old_field_parameters) self.field_parameters = new_field_parameters yield self.field_parameters = old_field_parameters @contextmanager def _field_type_state(self, ftype, finfo, obj=None): if obj is None: obj = self old_particle_type = obj._current_particle_type old_fluid_type = obj._current_fluid_type fluid_types = self.ds.fluid_types if finfo.sampling_type == "particle" and ftype not in fluid_types: obj._current_particle_type = ftype else: obj._current_fluid_type = ftype yield obj._current_particle_type = old_particle_type obj._current_fluid_type = old_fluid_type def _determine_fields(self, fields): if str(fields) in self.ds._determined_fields: return self.ds._determined_fields[str(fields)] explicit_fields = [] for field in iter_fields(fields): if field in self._container_fields: explicit_fields.append(field) continue finfo = self.ds._get_field_info(field) ftype, fname = finfo.name # really ugly check to ensure that this field really does exist somewhere, # in some naming convention, before returning it as a possible field type if ( (ftype, fname) not in self.ds.field_info and (ftype, fname) not in self.ds.field_list and fname not in self.ds.field_list and (ftype, fname) not in self.ds.derived_field_list and fname not in self.ds.derived_field_list and (ftype, fname) not in self._container_fields ): raise YTFieldNotFound((ftype, fname), self.ds) # these tests are really insufficient as a field type may be valid, and the # field name may be valid, but not the combination (field type, field name) particle_field = finfo.sampling_type == "particle" local_field = finfo.local_sampling if local_field: pass elif particle_field and ftype not in self.ds.particle_types: raise YTFieldTypeNotFound(ftype, ds=self.ds) elif not particle_field and ftype not in self.ds.fluid_types: raise YTFieldTypeNotFound(ftype, ds=self.ds) explicit_fields.append((ftype, fname)) self.ds._determined_fields[str(fields)] = explicit_fields return explicit_fields _tree = None @property def tiles(self): if self._tree is not None: return self._tree self._tree = AMRKDTree(self.ds, data_source=self) return self._tree @property def blocks(self): for _io_chunk in self.chunks([], "io"): for _chunk in self.chunks([], "spatial", ngz=0): # For grids this will be a grid object, and for octrees it will # be an OctreeSubset. Note that we delegate to the sub-object. 
o = self._current_chunk.objs[0] cache_fp = o.field_parameters.copy() o.field_parameters.update(self.field_parameters) for b, m in o.select_blocks(self.selector): if m is None: continue yield b, m o.field_parameters = cache_fp # PR3124: Given that save_as_dataset is now the recommended method for saving # objects (see Issue 2021 and references therein), the following has been re-written. # # Original comments (still true): # # In the future, this would be better off being set up to more directly # reference objects or retain state, perhaps with a context manager. # # One final detail: time series or multiple datasets in a single pickle # seems problematic. def _get_ds_by_hash(hash): from yt.data_objects.static_output import Dataset if isinstance(hash, Dataset): return hash from yt.data_objects.static_output import _cached_datasets for ds in _cached_datasets.values(): if ds._hash() == hash: return ds return None def _reconstruct_object(*args, **kwargs): # returns a reconstructed YTDataContainer. As of PR 3124, we now return # the actual YTDataContainer rather than a (ds, YTDataContainer) tuple. # pull out some arguments dsid = args[0] # the hash id dtype = args[1] # DataContainer type (e.g., 'region') field_parameters = args[-1] # the field parameters # re-instantiate the base dataset from the hash and ParameterFileStore ds = _get_ds_by_hash(dsid) override_weakref = False if not ds: override_weakref = True datasets = ParameterFileStore() ds = datasets.get_ds_hash(dsid) # instantiate the class with remainder of the args and adjust the state cls = getattr(ds, dtype) obj = cls(*args[2:-1]) obj.field_parameters.update(field_parameters) # any nested ds references are weakref.proxy(ds), so need to ensure the ds # we just loaded persists when we leave this function (nosetests fail without # this) if we did not have an actual dataset as an argument. 
if hasattr(obj, "ds") and override_weakref: obj.ds = ds return obj ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/derived_quantities.py0000644000175100001770000006663414714401662021124 0ustar00runnerdockerimport numpy as np from yt.funcs import camelcase_to_underscore, iter_fields from yt.units.yt_array import array_like_field from yt.utilities.exceptions import YTParticleTypeNotFound from yt.utilities.object_registries import derived_quantity_registry from yt.utilities.parallel_tools.parallel_analysis_interface import ( ParallelAnalysisInterface, parallel_objects, ) from yt.utilities.physical_constants import gravitational_constant_cgs from yt.utilities.physical_ratios import HUGE def get_position_fields(field, data): axis_names = [data.ds.coordinates.axis_name[num] for num in [0, 1, 2]] field = data._determine_fields(field)[0] finfo = data.ds.field_info[field] if finfo.sampling_type == "particle": if finfo.is_alias: ftype = finfo.alias_name[0] else: ftype = finfo.name[0] position_fields = [(ftype, f"particle_position_{d}") for d in axis_names] else: position_fields = [("index", ax_name) for ax_name in axis_names] return position_fields class DerivedQuantity(ParallelAnalysisInterface): num_vals = -1 def __init__(self, data_source): self.data_source = data_source def __init_subclass__(cls, *args, **kwargs): super().__init_subclass__(*args, **kwargs) if cls.__name__ != "DerivedQuantity": derived_quantity_registry[cls.__name__] = cls def count_values(self, *args, **kwargs): return def __call__(self, *args, **kwargs): """Calculate results for the derived quantity""" # create the index if it doesn't exist yet self.data_source.ds.index self.count_values(*args, **kwargs) chunks = self.data_source.chunks( [], chunking_style=self.data_source._derived_quantity_chunking ) storage = {} for sto, ds in parallel_objects(chunks, -1, storage=storage): sto.result = self.process_chunk(ds, *args, **kwargs) # Now storage will have everything, and will be done via pickling, so # the units will be preserved. (Credit to Nathan for this # idea/implementation.) values = [[] for i in range(self.num_vals)] for key in sorted(storage): for i in range(self.num_vals): values[i].append(storage[key][i]) # These will be YTArrays values = [self.data_source.ds.arr(values[i]) for i in range(self.num_vals)] values = self.reduce_intermediate(values) return values def process_chunk(self, data, *args, **kwargs): raise NotImplementedError def reduce_intermediate(self, values): raise NotImplementedError class DerivedQuantityCollection: def __new__(cls, data_source, *args, **kwargs): inst = object.__new__(cls) inst.data_source = data_source for f in inst.keys(): setattr(inst, camelcase_to_underscore(f), inst[f]) return inst def __getitem__(self, key): dq = derived_quantity_registry[key] # Instantiate here, so we can pass it the data object # Note that this means we instantiate every time we run help, etc # I have made my peace with this. return dq(self.data_source) def keys(self): return derived_quantity_registry.keys() class WeightedAverageQuantity(DerivedQuantity): r""" Calculates the weight average of a field or fields. Returns a YTQuantity for each field requested; if one, it returns a single YTQuantity, if many, it returns a list of YTQuantities in order of the listed fields. Where f is the field and w is the weight, the weighted average is Sum_i(f_i \* w_i) / Sum_i(w_i). 
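    For example, for field values f = (1, 3) with weights w = (1, 2), the
    weighted average is (1*1 + 3*2) / (1 + 2) = 7/3.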
Parameters ---------- fields : string / tuple, or list of strings / tuples The field or fields of which the average value is to be calculated. weight : string or tuple The weight field. Examples -------- >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> ad = ds.all_data() >>> print( ... ad.quantities.weighted_average_quantity( ... [("gas", "density"), ("gas", "temperature")], ("gas", "mass") ... ) ... ) """ def count_values(self, fields, weight): # This is a list now self.num_vals = len(fields) + 1 def __call__(self, fields, weight): fields = list(iter_fields(fields)) rv = super().__call__(fields, weight) if len(rv) == 1: rv = rv[0] return rv def process_chunk(self, data, fields, weight): vals = [(data[field] * data[weight]).sum(dtype=np.float64) for field in fields] wv = data[weight].sum(dtype=np.float64) return vals + [wv] def reduce_intermediate(self, values): w = values.pop(-1).sum(dtype=np.float64) return [v.sum(dtype=np.float64) / w for v in values] class TotalQuantity(DerivedQuantity): r""" Calculates the sum of the field or fields. Parameters ---------- fields The field or list of fields to be summed. Examples -------- >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> ad = ds.all_data() >>> print(ad.quantities.total_quantity([("gas", "mass")])) """ def count_values(self, fields): # This is a list now self.num_vals = len(fields) def __call__(self, fields): fields = list(iter_fields(fields)) rv = super().__call__(fields) if len(rv) == 1: rv = rv[0] return rv def process_chunk(self, data, fields): vals = [data[field].sum(dtype=np.float64) for field in fields] return vals def reduce_intermediate(self, values): return [v.sum(dtype=np.float64) for v in values] class TotalMass(TotalQuantity): r""" Calculates the total mass of the object. Returns a YTArray where the first element is total gas mass and the second element is total particle mass. Examples -------- >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> ad = ds.all_data() >>> print(ad.quantities.total_mass()) """ def __call__(self): self.data_source.ds.index fi = self.data_source.ds.field_info if ("gas", "mass") in fi: gas = super().__call__([("gas", "mass")]) else: gas = self.data_source.ds.quan(0.0, "g") if ("nbody", "particle_mass") in fi: part = super().__call__([("nbody", "particle_mass")]) else: part = self.data_source.ds.quan(0.0, "g") return self.data_source.ds.arr([gas, part]) class CenterOfMass(DerivedQuantity): r""" Calculates the center of mass, using gas and/or particles. The center of mass is the mass-weighted mean position. Parameters ---------- use_gas : bool Flag to include gas in the calculation. Gas is ignored if not present. Default: True use_particles : bool Flag to include particles in the calculation. Particles are ignored if not present. Default: False particle_type: string Flag to specify the field type of the particles to use. Useful for particle-based codes where you don't want to use all of the particles in your calculation. 
Default: 'all' Examples -------- >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> ad = ds.all_data() >>> print(ad.quantities.center_of_mass()) """ def count_values(self, use_gas=True, use_particles=False, particle_type="nbody"): finfo = self.data_source.ds.field_info includes_gas = ("gas", "mass") in finfo includes_particles = (particle_type, "particle_mass") in finfo self.use_gas = use_gas & includes_gas self.use_particles = use_particles & includes_particles self.num_vals = 0 if self.use_gas: self.num_vals += 4 if self.use_particles: self.num_vals += 4 def process_chunk( self, data, use_gas=True, use_particles=False, particle_type="nbody" ): vals = [] if self.use_gas: vals += [ (data["gas", ax] * data["gas", "mass"]).sum(dtype=np.float64) for ax in "xyz" ] vals.append(data["gas", "mass"].sum(dtype=np.float64)) if self.use_particles: vals += [ ( data[particle_type, f"particle_position_{ax}"] * data[particle_type, "particle_mass"] ).sum(dtype=np.float64) for ax in "xyz" ] vals.append(data[particle_type, "particle_mass"].sum(dtype=np.float64)) return vals def reduce_intermediate(self, values): if len(values) not in (4, 8): raise RuntimeError x = values.pop(0).sum(dtype=np.float64) y = values.pop(0).sum(dtype=np.float64) z = values.pop(0).sum(dtype=np.float64) w = values.pop(0).sum(dtype=np.float64) if len(values) > 0: # Note that this could be shorter if we pre-initialized our x,y,z,w # values as YTQuantity objects. x += values.pop(0).sum(dtype=np.float64) y += values.pop(0).sum(dtype=np.float64) z += values.pop(0).sum(dtype=np.float64) w += values.pop(0).sum(dtype=np.float64) return self.data_source.ds.arr([v / w for v in [x, y, z]]) class BulkVelocity(DerivedQuantity): r""" Calculates the bulk velocity, using gas and/or particles. The bulk velocity is the mass-weighted mean velocity. Parameters ---------- use_gas : bool Flag to include gas in the calculation. Gas is ignored if not present. Default: True use_particles : bool Flag to include particles in the calculation. Particles are ignored if not present. Default: True particle_type: string Flag to specify the field type of the particles to use. Useful for particle-based codes where you don't want to use all of the particles in your calculation. 
        Default: 'nbody'

    Examples
    --------
    >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
    >>> ad = ds.all_data()
    >>> print(ad.quantities.bulk_velocity())
    """

    def count_values(self, use_gas=True, use_particles=False, particle_type="nbody"):
        if use_particles and particle_type not in self.data_source.ds.particle_types:
            raise YTParticleTypeNotFound(particle_type, self.data_source.ds)
        # This is a list now
        self.num_vals = 0
        if use_gas:
            self.num_vals += 4
        if use_particles and "nbody" in self.data_source.ds.particle_types:
            self.num_vals += 4

    def process_chunk(
        self, data, use_gas=True, use_particles=False, particle_type="nbody"
    ):
        vals = []
        if use_gas:
            vals += [
                (data["gas", f"velocity_{ax}"] * data["gas", "mass"]).sum(
                    dtype=np.float64
                )
                for ax in "xyz"
            ]
            vals.append(data["gas", "mass"].sum(dtype=np.float64))
        if use_particles and "nbody" in data.ds.particle_types:
            vals += [
                (
                    data[particle_type, f"particle_velocity_{ax}"]
                    * data[particle_type, "particle_mass"]
                ).sum(dtype=np.float64)
                for ax in "xyz"
            ]
            vals.append(data[particle_type, "particle_mass"].sum(dtype=np.float64))
        return vals

    def reduce_intermediate(self, values):
        if len(values) not in (4, 8):
            raise RuntimeError
        x = values.pop(0).sum(dtype=np.float64)
        y = values.pop(0).sum(dtype=np.float64)
        z = values.pop(0).sum(dtype=np.float64)
        w = values.pop(0).sum(dtype=np.float64)
        if len(values) > 0:
            # Note that this could be shorter if we pre-initialized our x,y,z,w
            # values as YTQuantity objects.
            x += values.pop(0).sum(dtype=np.float64)
            y += values.pop(0).sum(dtype=np.float64)
            z += values.pop(0).sum(dtype=np.float64)
            w += values.pop(0).sum(dtype=np.float64)
        return self.data_source.ds.arr([v / w for v in [x, y, z]])


class WeightedStandardDeviation(DerivedQuantity):
    r"""
    Calculates the weighted standard deviation and weighted mean for a field
    or list of fields. Returns a YTArray for each field requested; if one,
    it returns a single YTArray, if many, it returns a list of YTArrays
    in order of the listed fields. The first element of each YTArray is
    the weighted standard deviation, and the second element is the weighted
    mean.

    Where f is the field, w is the weight, and <f> is the weighted mean,
    the weighted standard deviation is
    sqrt( Sum_i( (f_i - <f>)^2 \* w_i ) / Sum_i(w_i) ).
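    For example, for f = (1, 3) with w = (1, 2), the weighted mean is 7/3,
    so the weighted standard deviation is
    sqrt( (1*(1 - 7/3)^2 + 2*(3 - 7/3)^2) / 3 ) = sqrt(8/9) ~ 0.94.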
) """ def count_values(self, fields, weight): # This is a list now self.num_vals = 2 * len(fields) + 1 def __call__(self, fields, weight): fields = list(iter_fields(fields)) units = [self.data_source.ds._get_field_info(field).units for field in fields] rv = super().__call__(fields, weight) rv = [self.data_source.ds.arr(v, u) for v, u in zip(rv, units, strict=True)] if len(rv) == 1: rv = rv[0] return rv def process_chunk(self, data, fields, weight): my_weight = data[weight].d.sum(dtype=np.float64) if my_weight == 0: return [0.0 for field in fields] + [0.0 for field in fields] + [0.0] my_means = [ (data[field].d * data[weight].d).sum(dtype=np.float64) / my_weight for field in fields ] my_var2s = [ (data[weight].d * (data[field].d - my_mean) ** 2).sum(dtype=np.float64) / my_weight for field, my_mean in zip(fields, my_means, strict=True) ] return my_means + my_var2s + [my_weight] def reduce_intermediate(self, values): my_weight = values.pop(-1) all_weight = my_weight.sum(dtype=np.float64) rvals = [] for i in range(int(len(values) / 2)): my_mean = values[i] my_var2 = values[i + int(len(values) / 2)] all_mean = (my_weight * my_mean).sum(dtype=np.float64) / all_weight ret = [ ( np.sqrt( (my_weight * (my_var2 + (my_mean - all_mean) ** 2)).sum( dtype=np.float64 ) / all_weight ) ), all_mean, ] rvals.append(np.array(ret)) return rvals class AngularMomentumVector(DerivedQuantity): r""" Calculates the angular momentum vector, using gas (grid-based) and/or particles. The angular momentum vector is the mass-weighted mean specific angular momentum. Returns a YTArray of the vector. Parameters ---------- use_gas : bool Flag to include grid-based gas in the calculation. Gas is ignored if not present. Default: True use_particles : bool Flag to include particles in the calculation. Particles are ignored if not present. Default: True particle_type: string Flag to specify the field type of the particles to use. Useful for particle-based codes where you don't want to use all of the particles in your calculation. Default: 'all' Examples -------- Find angular momentum vector of galaxy in grid-based isolated galaxy dataset >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") ... ad = ds.all_data() ... print(ad.quantities.angular_momentum_vector()) [-7.50868209e+26 1.06032596e+27 2.19274002e+29] cm**2/s >>> # Find angular momentum vector of gas disk in particle-based dataset >>> ds = yt.load("FIRE_M12i_ref11/snapshot_600.hdf5") ... _, c = ds.find_max(("gas", "density")) ... sp = ds.sphere(c, (10, "kpc")) ... search_args = dict(use_gas=False, use_particles=True, particle_type="PartType0") ... 
print(sp.quantities.angular_momentum_vector(**search_args)) [4.88104442e+28 7.38463362e+28 6.20030135e+28] cm**2/s """ def count_values(self, use_gas=True, use_particles=True, particle_type="all"): if use_particles and particle_type not in self.data_source.ds.particle_types: raise YTParticleTypeNotFound(particle_type, self.data_source.ds) num_vals = 0 # create the index if it doesn't exist yet self.data_source.ds.index self.particle_type = particle_type self.use_gas = use_gas & (("gas", "mass") in self.data_source.ds.field_info) self.use_particles = use_particles & ( (self.particle_type, "particle_mass") in self.data_source.ds.field_info ) if self.use_gas: num_vals += 4 if self.use_particles: num_vals += 4 self.num_vals = num_vals def process_chunk( self, data, use_gas=True, use_particles=False, particle_type="all" ): rvals = [] if self.use_gas: rvals.extend( [ ( data["gas", f"specific_angular_momentum_{axis}"] * data["gas", "mass"] ).sum(dtype=np.float64) for axis in "xyz" ] ) rvals.append(data["gas", "mass"].sum(dtype=np.float64)) if self.use_particles: rvals.extend( [ ( data[ self.particle_type, f"particle_specific_angular_momentum_{axis}", ] * data[self.particle_type, "particle_mass"] ).sum(dtype=np.float64) for axis in "xyz" ] ) rvals.append( data[self.particle_type, "particle_mass"].sum(dtype=np.float64) ) return rvals def reduce_intermediate(self, values): jx = values.pop(0).sum(dtype=np.float64) jy = values.pop(0).sum(dtype=np.float64) jz = values.pop(0).sum(dtype=np.float64) m = values.pop(0).sum(dtype=np.float64) if values: jx += values.pop(0).sum(dtype=np.float64) jy += values.pop(0).sum(dtype=np.float64) jz += values.pop(0).sum(dtype=np.float64) m += values.pop(0).sum(dtype=np.float64) return self.data_source.ds.arr([jx / m, jy / m, jz / m]) class Extrema(DerivedQuantity): r""" Calculates the min and max value of a field or list of fields. Returns a YTArray for each field requested. If one, a single YTArray is returned, if many, a list of YTArrays in order of field list is returned. The first element of each YTArray is the minimum of the field and the second is the maximum of the field. Parameters ---------- fields The field or list of fields over which the extrema are to be calculated. non_zero : bool If True, only positive values are considered in the calculation. Default: False check_finite : bool If True, non-finite values will be explicitly excluded. Default: False Examples -------- >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> ad = ds.all_data() >>> print(ad.quantities.extrema([("gas", "density"), ("gas", "temperature")])) """ def count_values(self, fields, non_zero, *, check_finite=False): self.num_vals = len(fields) * 2 def __call__(self, fields, non_zero=False, *, check_finite=False): fields = list(iter_fields(fields)) rv = super().__call__(fields, non_zero, check_finite=check_finite) if len(rv) == 1: rv = rv[0] return rv def process_chunk(self, data, fields, non_zero, *, check_finite=False): vals = [] for field in fields: field = data._determine_fields(field)[0] fd = data[field] if non_zero: fd = fd[fd > 0.0] if check_finite: fd = fd[np.isfinite(fd)] if fd.size > 0: vals += [fd.min(), fd.max()] else: vals += [ array_like_field(data, HUGE, field), array_like_field(data, -HUGE, field), ] return vals def reduce_intermediate(self, values): # The values get turned into arrays here. 
return [ self.data_source.ds.arr([mis.min(), mas.max()]) for mis, mas in zip(values[::2], values[1::2], strict=True) ] class SampleAtMaxFieldValues(DerivedQuantity): _sign = -1 r""" Calculates the maximum value and returns whichever fields are asked to be sampled. Parameters ---------- field : tuple or string The field over which the extrema are to be calculated. sample_fields : list of fields The fields to sample and return at the minimum value. Examples -------- >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> ad = ds.all_data() >>> print(ad.quantities.sample_at_max_field_values(("gas", "density"), ... [("gas", "temperature"), ("gas", "velocity_magnitude")])) """ def count_values(self, field, sample_fields): # field itself, then index, then the number of sample fields self.num_vals = 1 + len(sample_fields) def __call__(self, field, sample_fields): rv = super().__call__(field, sample_fields) if len(rv) == 1: rv = rv[0] return rv def process_chunk(self, data, field, sample_fields): field = data._determine_fields(field)[0] ma = array_like_field(data, self._sign * HUGE, field) vals = [array_like_field(data, -1, sf) for sf in sample_fields] maxi = -1 if data[field].size > 0: maxi = self._func(data[field]) ma = data[field][maxi] vals = [data[sf][maxi] for sf in sample_fields] return (ma,) + tuple(vals) def reduce_intermediate(self, values): i = self._func(values[0]) # ma is values[0] return [val[i] for val in values] def _func(self, arr): return np.argmax(arr) class MaxLocation(SampleAtMaxFieldValues): r""" Calculates the maximum value plus the x, y, and z position of the maximum. Parameters ---------- field : tuple or string The field over which the extrema are to be calculated. Examples -------- >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> ad = ds.all_data() >>> print(ad.quantities.max_location(("gas", "density"))) """ def __call__(self, field): # Make sure we have an index self.data_source.index sample_fields = get_position_fields(field, self.data_source) rv = super().__call__(field, sample_fields) if len(rv) == 1: rv = rv[0] return rv class SampleAtMinFieldValues(SampleAtMaxFieldValues): _sign = 1 r""" Calculates the minimum value and returns whichever fields are asked to be sampled. Parameters ---------- field : tuple or string The field over which the extrema are to be calculated. sample_fields : list of fields The fields to sample and return at the minimum value. Examples -------- >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> ad = ds.all_data() >>> print(ad.quantities.sample_at_min_field_values(("gas", "density"), ... [("gas", "temperature"), ("gas", "velocity_magnitude")])) """ def _func(self, arr): return np.argmin(arr) class MinLocation(SampleAtMinFieldValues): r""" Calculates the minimum value plus the x, y, and z position of the minimum. Parameters ---------- field : tuple or string The field over which the extrema are to be calculated. Examples -------- >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> ad = ds.all_data() >>> print(ad.quantities.min_location(("gas", "density"))) """ def __call__(self, field): # Make sure we have an index self.data_source.index sample_fields = get_position_fields(field, self.data_source) rv = super().__call__(field, sample_fields) if len(rv) == 1: rv = rv[0] return rv class SpinParameter(DerivedQuantity): r""" Calculates the dimensionless spin parameter. Given by Equation 3 of Peebles (1971, A&A, 11, 377), the spin parameter is defined as .. 
math:: \lambda = (L * |E|^(1/2)) / (G * M^5/2), where L is the total angular momentum, E is the total energy (kinetic and potential), G is the gravitational constant, and M is the total mass. Parameters ---------- use_gas : bool Flag to include gas in the calculation. Gas is ignored if not present. Default: True use_particles : bool Flag to include particles in the calculation. Particles are ignored if not present. Default: True particle_type : str Particle type to be used for Center of mass calculation when use_particle = True. Default: all Examples -------- >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> ad = ds.all_data() >>> print(ad.quantities.spin_parameter()) """ def count_values(self, **kwargs): self.num_vals = 3 def process_chunk( self, data, use_gas=True, use_particles=True, particle_type="nbody" ): if use_particles and particle_type not in self.data_source.ds.particle_types: raise YTParticleTypeNotFound(particle_type, self.data_source.ds) use_gas &= ("gas", "mass") in self.data_source.ds.field_info use_particles &= ( particle_type, "particle_mass", ) in self.data_source.ds.field_info e = data.ds.quan(0.0, "erg") j = data.ds.quan(0.0, "g*cm**2/s") m = data.ds.quan(0.0, "g") if use_gas: e += (data["gas", "kinetic_energy_density"] * data["gas", "volume"]).sum( dtype=np.float64 ) j += data["gas", "angular_momentum_magnitude"].sum(dtype=np.float64) m += data["gas", "mass"].sum(dtype=np.float64) if use_particles: e += ( data[particle_type, "particle_velocity_magnitude"] ** 2 * data[particle_type, "particle_mass"] ).sum(dtype=np.float64) j += data[particle_type, "particle_angular_momentum_magnitude"].sum( dtype=np.float64 ) m += data[particle_type, "particle_mass"].sum(dtype=np.float64) return (e, j, m) def reduce_intermediate(self, values): e = values.pop(0).sum(dtype=np.float64) j = values.pop(0).sum(dtype=np.float64) m = values.pop(0).sum(dtype=np.float64) return j * np.sqrt(np.abs(e)) / m**2.5 / gravitational_constant_cgs ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/field_data.py0000644000175100001770000000017714714401662017276 0ustar00runnerdockerclass YTFieldData(dict): """ A Container object for field data, instead of just having it be a dict. """ pass ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/image_array.py0000644000175100001770000003257414714401662017510 0ustar00runnerdockerimport numpy as np from unyt import unyt_array from yt.config import ytcfg from yt.visualization.image_writer import write_bitmap, write_image class ImageArray(unyt_array): r"""A custom Numpy ndarray used for images. This differs from ndarray in that you can optionally specify an info dictionary which is used later in saving, and can be accessed with ImageArray.info. Parameters ---------- input_array: array_like A numpy ndarray, or list. Other Parameters ---------------- info: dictionary Contains information to be stored with image. Returns ------- obj: ImageArray object Raises ------ None See Also -------- numpy.ndarray : Inherits Notes ----- References ---------- Examples -------- These are written in doctest format, and should illustrate how to use the function. Use the variables 'ds' for the dataset, 'pc' for a plot collection, 'c' for a center, and 'L' for a vector. >>> im = np.zeros([64, 128, 3]) >>> for i in range(im.shape[0]): ... for k in range(im.shape[2]): ... im[i, :, k] = np.linspace(0.0, 0.3 * k, im.shape[1]) >>> myinfo = { ... 
"field": "dinosaurs", ... "east_vector": np.array([1.0, 0.0, 0.0]), ... "north_vector": np.array([0.0, 0.0, 1.0]), ... "normal_vector": np.array([0.0, 1.0, 0.0]), ... "width": 0.245, ... "units": "cm", ... "type": "rendering", ... } >>> im_arr = ImageArray(im, info=myinfo) >>> im_arr.save("test_ImageArray") Numpy ndarray documentation appended: """ def __new__( cls, input_array, units=None, registry=None, info=None, bypass_validation=False, ): obj = super().__new__( cls, input_array, units, registry, bypass_validation=bypass_validation ) if info is None: info = {} obj.info = info return obj def __array_finalize__(self, obj): # see InfoArray.__array_finalize__ for comments super().__array_finalize__(obj) self.info = getattr(obj, "info", None) def write_hdf5(self, filename, dataset_name=None): r"""Writes ImageArray to hdf5 file. Parameters ---------- filename : string The filename to create and write a dataset to dataset_name : string The name of the dataset to create in the file. Examples -------- >>> im = np.zeros([64, 128, 3]) >>> for i in range(im.shape[0]): ... for k in range(im.shape[2]): ... im[i, :, k] = np.linspace(0.0, 0.3 * k, im.shape[1]) >>> myinfo = { ... "field": "dinosaurs", ... "east_vector": np.array([1.0, 0.0, 0.0]), ... "north_vector": np.array([0.0, 0.0, 1.0]), ... "normal_vector": np.array([0.0, 1.0, 0.0]), ... "width": 0.245, ... "units": "cm", ... "type": "rendering", ... } >>> im_arr = ImageArray(im, info=myinfo) >>> im_arr.write_hdf5("test_ImageArray.h5") """ if dataset_name is None: dataset_name = self.info.get("name", "image") super().write_hdf5(filename, dataset_name=dataset_name, info=self.info) def add_background_color(self, background="black", inline=True): r"""Adds a background color to a 4-channel ImageArray This adds a background color to a 4-channel ImageArray, by default doing so inline. The ImageArray must already be normalized to the [0,1] range. Parameters ---------- background: This can be used to set a background color for the image, and can take several types of values: * ``white``: white background, opaque * ``black``: black background, opaque * ``None``: transparent background * 4-element array [r,g,b,a]: arbitrary rgba setting. Default: 'black' inline : boolean, optional If True, original ImageArray is modified. If False, a copy is first created, then modified. Default: True Returns ------- out: ImageArray The modified ImageArray with a background color added. Examples -------- >>> im = np.zeros([64, 128, 4]) >>> for i in range(im.shape[0]): ... for k in range(im.shape[2]): ... im[i, :, k] = np.linspace(0.0, 10.0 * k, im.shape[1]) >>> im_arr = ImageArray(im) >>> im_arr.rescale() >>> new_im = im_arr.add_background_color([1.0, 0.0, 0.0, 1.0], inline=False) >>> new_im.write_png("red_bg.png") >>> im_arr.add_background_color("black") >>> im_arr.write_png("black_bg.png") """ assert self.shape[-1] == 4 if background is None: background = (0.0, 0.0, 0.0, 0.0) elif background == "white": background = (1.0, 1.0, 1.0, 1.0) elif background == "black": background = (0.0, 0.0, 0.0, 1.0) # Alpha blending to background if inline: out = self else: out = self.copy() for i in range(3): out[:, :, i] = self[:, :, i] * self[:, :, 3] out[:, :, i] += background[i] * background[3] * (1.0 - self[:, :, 3]) out[:, :, 3] = self[:, :, 3] + background[3] * (1.0 - self[:, :, 3]) return out def rescale(self, cmax=None, amax=None, inline=True): r"""Rescales the image to be in [0,1] range. Parameters ---------- cmax : float, optional Normalization value to use for rgb channels. 
Defaults to None, corresponding to using the maximum value in the rgb channels. amax : float, optional Normalization value to use for alpha channel. Defaults to None, corresponding to using the maximum value in the alpha channel. inline : boolean, optional Specifies whether or not the rescaling is done inline. If false, a new copy of the ImageArray will be created, returned. Default:True. Returns ------- out: ImageArray The rescaled ImageArray, clipped to the [0,1] range. Notes ----- This requires that the shape of the ImageArray to have a length of 3, and for the third dimension to be >= 3. If the third dimension has a shape of 4, the alpha channel will also be rescaled. Examples -------- >>> im = np.zeros([64, 128, 4]) >>> for i in range(im.shape[0]): ... for k in range(im.shape[2]): ... im[i, :, k] = np.linspace(0.0, 0.3 * k, im.shape[1]) >>> im = ImageArray(im) >>> im.write_png("original.png") >>> im.rescale() >>> im.write_png("normalized.png") """ assert len(self.shape) == 3 assert self.shape[2] >= 3 if inline: out = self else: out = self.copy() if cmax is None: cmax = self[:, :, :3].sum(axis=2).max() if cmax > 0.0: np.multiply(self[:, :, :3], 1.0 / cmax, out[:, :, :3]) if self.shape[2] == 4: if amax is None: amax = self[:, :, 3].max() if amax > 0.0: np.multiply(self[:, :, 3], 1.0 / amax, out[:, :, 3]) np.clip(out, 0.0, 1.0, out) return out def write_png( self, filename, sigma_clip=None, background="black", rescale=True, ): r"""Writes ImageArray to png file. Parameters ---------- filename : string Filename to save to. If None, PNG contents will be returned as a string. sigma_clip : float, optional Image will be clipped before saving to the standard deviation of the image multiplied by this value. Useful for enhancing images. Default: None background: This can be used to set a background color for the image, and can take several types of values: * ``white``: white background, opaque * ``black``: black background, opaque * ``None``: transparent background * 4-element array [r,g,b,a]: arbitrary rgba setting. Default: 'black' rescale : boolean, optional If True, will write out a rescaled image (without modifying the original image). Default: True Examples -------- >>> im = np.zeros([64, 128, 4]) >>> for i in range(im.shape[0]): ... for k in range(im.shape[2]): ... im[i, :, k] = np.linspace(0.0, 10.0 * k, im.shape[1]) >>> im_arr = ImageArray(im) >>> im_arr.write_png("standard.png") >>> im_arr.write_png("non-scaled.png", rescale=False) >>> im_arr.write_png("black_bg.png", background="black") >>> im_arr.write_png("white_bg.png", background="white") >>> im_arr.write_png("green_bg.png", background=[0, 1, 0, 1]) >>> im_arr.write_png("transparent_bg.png", background=None) """ if rescale: scaled = self.rescale(inline=False) else: scaled = self if self.shape[-1] == 4: out = scaled.add_background_color(background, inline=False) else: out = scaled if filename is not None and filename[-4:] != ".png": filename += ".png" if sigma_clip is not None: clip_value = self._clipping_value(sigma_clip, im=out) return write_bitmap(out.swapaxes(0, 1), filename, clip_value) else: return write_bitmap(out.swapaxes(0, 1), filename) def write_image( self, filename, color_bounds=None, channel=None, cmap_name=None, func=lambda x: x, ): r"""Writes a single channel of the ImageArray to a png file. Parameters ---------- filename : string Note filename not be modified. Other Parameters ---------------- channel: int Which channel to write out as an image. Defaults to 0 cmap_name: string Name of the colormap to be used. 
color_bounds : tuple of floats, optional The min and max to scale between. Outlying values will be clipped. cmap_name : string, optional An acceptable colormap. See either yt.visualization.color_maps or https://scipy-cookbook.readthedocs.io/items/Matplotlib_Show_colormaps.html . func : function, optional A function to transform the buffer before applying a colormap. Returns ------- scaled_image : uint8 image that has been saved Examples -------- >>> im = np.zeros([64, 128]) >>> for i in range(im.shape[0]): ... im[i, :] = np.linspace(0.0, 0.3 * i, im.shape[1]) >>> myinfo = { ... "field": "dinosaurs", ... "east_vector": np.array([1.0, 0.0, 0.0]), ... "north_vector": np.array([0.0, 0.0, 1.0]), ... "normal_vector": np.array([0.0, 1.0, 0.0]), ... "width": 0.245, ... "units": "cm", ... "type": "rendering", ... } >>> im_arr = ImageArray(im, info=myinfo) >>> im_arr.write_image("test_ImageArray.png") """ if cmap_name is None: cmap_name = ytcfg.get("yt", "default_colormap") if filename is not None and filename[-4:] != ".png": filename += ".png" # TODO: Write info dict as png metadata if channel is None: return write_image( self.swapaxes(0, 1).to_ndarray(), filename, color_bounds=color_bounds, cmap_name=cmap_name, func=func, ) else: return write_image( self.swapaxes(0, 1)[:, :, channel].to_ndarray(), filename, color_bounds=color_bounds, cmap_name=cmap_name, func=func, ) def save(self, filename, png=True, hdf5=True, dataset_name=None): """ Saves ImageArray. Arguments: filename: string This should not contain the extension type (.png, .h5, ...) Optional Arguments: png: boolean, default True Save to a png hdf5: boolean, default True Save to hdf5 file, including info dictionary as attributes. """ if png: if not filename.endswith(".png"): filename = filename + ".png" if len(self.shape) > 2: self.write_png(filename) else: self.write_image(filename) if hdf5: if not filename.endswith(".h5"): filename = filename + ".h5" self.write_hdf5(filename, dataset_name) def _clipping_value(self, sigma_clip, im=None): # return the max value to clip with given a sigma_clip value. 
If im is None, the current instance is used. if im is None: im = self nz = im[:, :, :3][im[:, :, :3].nonzero()] return nz.mean() + sigma_clip * nz.std()

yt-4.4.0/yt/data_objects/index_subobjects/__init__.py

from .grid_patch import AMRGridPatch from .octree_subset import OctreeSubset

yt-4.4.0/yt/data_objects/index_subobjects/grid_patch.py

import weakref import numpy as np import yt.geometry.particle_deposit as particle_deposit from yt._typing import FieldKey from yt.config import ytcfg from yt.data_objects.selection_objects.data_selection_objects import ( YTSelectionContainer, ) from yt.funcs import is_sequence from yt.geometry.selection_routines import convert_mask_to_indices from yt.units.yt_array import YTArray from yt.utilities.exceptions import ( YTFieldTypeNotFound, YTParticleDepositionNotImplemented, ) from yt.utilities.lib.interpolators import ghost_zone_interpolate from yt.utilities.lib.mesh_utilities import clamp_edges from yt.utilities.nodal_data_utils import get_nodal_slices class AMRGridPatch(YTSelectionContainer): _spatial = True _num_ghost_zones = 0 _grids = None _id_offset = 1 _cache_mask = True _type_name = "grid" _skip_add = True _con_args = ("id", "filename") _container_fields = ( ("index", "dx"), ("index", "dy"), ("index", "dz"), ("index", "x"), ("index", "y"), ("index", "z"), ) OverlappingSiblings = None def __init__(self, id, filename=None, index=None): super().__init__(index.dataset, None) self.id = id self._child_mask = self._child_indices = self._child_index_mask = None self.ds = index.dataset self._index = weakref.proxy(index) self.start_index = None self.filename = filename self._last_mask = None self._last_count = -1 self._last_selector_id = None def get_global_startindex(self): """ Return the integer starting index for each dimension at the current level. """ if self.start_index is not None: return self.start_index if self.Parent is None: left = self.LeftEdge.d - self.ds.domain_left_edge.d start_index = left / self.dds.d return np.rint(start_index).astype("int64").ravel() pdx = self.Parent.dds.d di = np.rint((self.LeftEdge.d - self.Parent.LeftEdge.d) / pdx) start_index = self.Parent.get_global_startindex() + di self.start_index = (start_index * self.ds.refine_by).astype("int64").ravel() return self.start_index def __getitem__(self, key): tr = super().__getitem__(key) try: fields = self._determine_fields(key) except YTFieldTypeNotFound: return tr finfo = self.ds._get_field_info(fields[0]) if not finfo.sampling_type == "particle": num_nodes = 2 ** sum(finfo.nodal_flag) new_shape = list(self.ActiveDimensions) if num_nodes > 1: new_shape += [num_nodes] return tr.reshape(new_shape) return tr def convert(self, datatype): """ This will attempt to convert a given unit to cgs from code units. It either returns the multiplicative factor or throws a KeyError.
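For example, ``factor = grid.convert("Density")`` (a hypothetical field key) simply looks the key up on the dataset via ``self.ds[datatype]``, as the body below shows; a missing key raises ``KeyError``.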
""" return self.ds[datatype] @property def shape(self): return self.ActiveDimensions def _reshape_vals(self, arr): if len(arr.shape) == 3: return arr return arr.reshape(self.ActiveDimensions, order="C") def _generate_container_field(self, field): if self._current_chunk is None: self.index._identify_base_chunk(self) if field == ("index", "dx"): tr = self._current_chunk.fwidth[:, 0] elif field == ("index", "dy"): tr = self._current_chunk.fwidth[:, 1] elif field == ("index", "dz"): tr = self._current_chunk.fwidth[:, 2] elif field == ("index", "x"): tr = self._current_chunk.fcoords[:, 0] elif field == ("index", "y"): tr = self._current_chunk.fcoords[:, 1] elif field == ("index", "z"): tr = self._current_chunk.fcoords[:, 2] return self._reshape_vals(tr) def _setup_dx(self): # So first we figure out what the index is. We don't assume # that dx=dy=dz, at least here. We probably do elsewhere. id = self.id - self._id_offset ds = self.ds index = self.index if self.Parent is not None: if not hasattr(self.Parent, "dds"): self.Parent._setup_dx() self.dds = self.Parent.dds.d / self.ds.refine_by else: LE, RE = (index.grid_left_edge[id, :].d, index.grid_right_edge[id, :].d) self.dds = (RE - LE) / self.ActiveDimensions if self.ds.dimensionality < 3: self.dds[2] = ds.domain_right_edge[2] - ds.domain_left_edge[2] elif self.ds.dimensionality < 2: self.dds[1] = ds.domain_right_edge[1] - ds.domain_left_edge[1] self.dds = self.dds.view(YTArray) self.dds.units = self.index.grid_left_edge.units def __repr__(self): cls_name = self.__class__.__name__ return f"{cls_name}_{self.id:04d} ({self.ActiveDimensions})" def __int__(self): return self.id def clear_data(self): """ Clear out the following things: child_mask, child_indices, all fields, all field parameters. """ super().clear_data() self._setup_dx() def _prepare_grid(self): """Copies all the appropriate attributes from the index.""" # This is definitely the slowest part of generating the index # Now we give it pointers to all of its attributes # Note that to keep in line with Enzo, we have broken PEP-8 h = self.index # cache it my_ind = self.id - self._id_offset self.ActiveDimensions = h.grid_dimensions[my_ind] self.LeftEdge = h.grid_left_edge[my_ind] self.RightEdge = h.grid_right_edge[my_ind] # This can be expensive so we allow people to disable this behavior # via a config option if ytcfg.get("yt", "reconstruct_index"): if is_sequence(self.Parent) and len(self.Parent) > 0: p = self.Parent[0] else: p = self.Parent if p is not None and p != []: # clamp grid edges to an integer multiple of the parent cell # width clamp_edges(self.LeftEdge, p.LeftEdge, p.dds) clamp_edges(self.RightEdge, p.RightEdge, p.dds) h.grid_levels[my_ind, 0] = self.Level # This might be needed for streaming formats # self.Time = h.gridTimes[my_ind,0] self.NumberOfParticles = h.grid_particle_count[my_ind, 0] def get_position(self, index): """Returns center position of an *index*.""" pos = (index + 0.5) * self.dds + self.LeftEdge return pos def _fill_child_mask(self, child, mask, tofill, dlevel=1): rf = self.ds.refine_by if dlevel != 1: rf = rf**dlevel gi, cgi = self.get_global_startindex(), child.get_global_startindex() startIndex = np.maximum(0, cgi // rf - gi) endIndex = np.minimum( (cgi + child.ActiveDimensions) // rf - gi, self.ActiveDimensions ) endIndex += startIndex == endIndex mask[ startIndex[0] : endIndex[0], startIndex[1] : endIndex[1], startIndex[2] : endIndex[2], ] = tofill @property def child_mask(self): """ Generates self.child_mask, which is zero where child grids exist (and 
thus, where higher resolution data is available). """ child_mask = np.ones(self.ActiveDimensions, "bool") for child in self.Children: self._fill_child_mask(child, child_mask, 0) for sibling in self.OverlappingSiblings or []: self._fill_child_mask(sibling, child_mask, 0, dlevel=0) return child_mask @property def child_indices(self): return self.child_mask == 0 @property def child_index_mask(self): """ Generates self.child_index_mask, which is -1 where there is no child, and otherwise has the ID of the grid that resides there. """ child_index_mask = np.zeros(self.ActiveDimensions, "int32") - 1 for child in self.Children: self._fill_child_mask(child, child_index_mask, child.id) for sibling in self.OverlappingSiblings or []: self._fill_child_mask(sibling, child_index_mask, sibling.id, dlevel=0) return child_index_mask def retrieve_ghost_zones(self, n_zones, fields, all_levels=False, smoothed=False): # We will attempt this by creating a datacube that is exactly bigger # than the grid by nZones*dx in each direction nl = self.get_global_startindex() - n_zones new_left_edge = nl * self.dds + self.ds.domain_left_edge # Something different needs to be done for the root grid, though level = self.Level if all_levels: level = self.index.max_level + 1 kwargs = { "dims": self.ActiveDimensions + 2 * n_zones, "num_ghost_zones": n_zones, "use_pbar": False, "fields": fields, } # This should update the arguments to set the field parameters to be # those of this grid. field_parameters = {} field_parameters.update(self.field_parameters) if smoothed: cube = self.ds.smoothed_covering_grid( level, new_left_edge, field_parameters=field_parameters, **kwargs ) else: cube = self.ds.covering_grid( level, new_left_edge, field_parameters=field_parameters, **kwargs ) cube._base_grid = self return cube def get_vertex_centered_data( self, fields: list[FieldKey], smoothed: bool = True, no_ghost: bool = False, ): # Make sure the field list has only unique entries fields = list(set(fields)) new_fields = {} for field in fields: finfo = self.ds._get_field_info(field) new_fields[field] = self.ds.arr( np.zeros(self.ActiveDimensions + 1), finfo.output_units ) if no_ghost: for field in fields: # Ensure we have the native endianness in this array. Avoid making # a copy if possible. old_field = np.asarray(self[field], dtype="=f8") # We'll use the ghost zone routine, which will naturally # extrapolate here. 
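# Cell-centered values sit at (i + 0.5) * dds while vertex values sit at # i * dds, hence the half-cell shift between input_left and output_left below.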
input_left = np.array([0.5, 0.5, 0.5], dtype="float64") output_left = np.array([0.0, 0.0, 0.0], dtype="float64") # rf = 1 here ghost_zone_interpolate( 1, old_field, input_left, new_fields[field], output_left ) else: cg = self.retrieve_ghost_zones(1, fields, smoothed=smoothed) for field in fields: src = cg[field].in_units(new_fields[field].units).d dest = new_fields[field].d np.add(dest, src[1:, 1:, 1:], dest) np.add(dest, src[:-1, 1:, 1:], dest) np.add(dest, src[1:, :-1, 1:], dest) np.add(dest, src[1:, 1:, :-1], dest) np.add(dest, src[:-1, 1:, :-1], dest) np.add(dest, src[1:, :-1, :-1], dest) np.add(dest, src[:-1, :-1, 1:], dest) np.add(dest, src[:-1, :-1, :-1], dest) np.multiply(dest, 0.125, dest) return new_fields def select_icoords(self, dobj): mask = self._get_selector_mask(dobj.selector) if mask is None: return np.empty((0, 3), dtype="int64") coords = convert_mask_to_indices(mask, self._last_count) coords += self.get_global_startindex()[None, :] return coords def select_fcoords(self, dobj): mask = self._get_selector_mask(dobj.selector) if mask is None: return np.empty((0, 3), dtype="float64") coords = convert_mask_to_indices(mask, self._last_count).astype("float64") coords += 0.5 coords *= self.dds.d[None, :] coords += self.LeftEdge.d[None, :] return coords def select_fwidth(self, dobj): count = self.count(dobj.selector) if count == 0: return np.empty((0, 3), dtype="float64") coords = np.empty((count, 3), dtype="float64") for axis in range(3): coords[:, axis] = self.dds.d[axis] return coords def select_ires(self, dobj): mask = self._get_selector_mask(dobj.selector) if mask is None: return np.empty(0, dtype="int64") coords = np.empty(self._last_count, dtype="int64") coords[:] = self.Level return coords def select_tcoords(self, dobj): dt, t = dobj.selector.get_dt(self) return dt, t def smooth(self, *args, **kwargs): raise NotImplementedError def particle_operation(self, *args, **kwargs): raise NotImplementedError def deposit(self, positions, fields=None, method=None, kernel_name="cubic"): # Here we perform our particle deposition. cls = getattr(particle_deposit, f"deposit_{method}", None) if cls is None: raise YTParticleDepositionNotImplemented(method) # We allocate number of zones, not number of octs. Everything # inside this is Fortran ordered because of the ordering in the # octree deposit routines, so we reverse it here to match the # convention there nvals = tuple(self.ActiveDimensions[::-1]) # append a dummy dimension because we are only depositing onto # one grid op = cls(nvals + (1,), kernel_name) op.initialize() if positions.size > 0: op.process_grid(self, positions, fields) vals = op.finalize() if vals is None: return # Fortran-ordered, so transpose. 
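# (the deposit buffer was allocated with reversed dimensions above, so # transposing restores the grid's native axis ordering)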
vals = vals.transpose() # squeeze dummy dimension we appended above return np.squeeze(vals, axis=0) def select_blocks(self, selector): mask = self._get_selector_mask(selector) yield self, mask def _get_selector_mask(self, selector): if self._cache_mask and hash(selector) == self._last_selector_id: mask = self._last_mask else: mask, count = selector.fill_mask_regular_grid(self) if self._cache_mask: self._last_mask = mask self._last_selector_id = hash(selector) if mask is None: self._last_count = 0 else: self._last_count = count return mask def select(self, selector, source, dest, offset): mask = self._get_selector_mask(selector) count = self.count(selector) if count == 0: return 0 dim = np.squeeze(self.ds.dimensionality) nodal_flag = source.shape[:dim] - self.ActiveDimensions[:dim] if sum(nodal_flag) == 0: dest[offset : offset + count] = source[mask] else: slices = get_nodal_slices(source.shape, nodal_flag, dim) for i, sl in enumerate(slices): dest[offset : offset + count, i] = source[tuple(sl)][np.squeeze(mask)] return count def count(self, selector): mask = self._get_selector_mask(selector) if mask is None: return 0 return self._last_count def count_particles(self, selector, x, y, z): # We don't cache the selector results count = selector.count_points(x, y, z, 0.0) return count def select_particles(self, selector, x, y, z): mask = selector.select_points(x, y, z, 0.0) return mask ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/index_subobjects/octree_subset.py0000644000175100001770000006134214714401662023423 0ustar00runnerdockerimport abc from contextlib import contextmanager from functools import cached_property from itertools import product, repeat import numpy as np from unyt import unyt_array import yt.geometry.particle_deposit as particle_deposit import yt.geometry.particle_smooth as particle_smooth from yt.data_objects.selection_objects.data_selection_objects import ( YTSelectionContainer, ) from yt.geometry.particle_oct_container import ParticleOctreeContainer from yt.units.dimensions import length # type: ignore from yt.utilities.exceptions import ( YTFieldTypeNotFound, YTInvalidPositionArray, YTParticleDepositionNotImplemented, ) from yt.utilities.lib.geometry_utils import compute_morton from yt.utilities.logger import ytLogger as mylog def cell_count_cache(func): def cc_cache_func(self, dobj): if hash(dobj.selector) != self._last_selector_id: self._cell_count = -1 rv = func(self, dobj) self._cell_count = rv.shape[0] self._last_selector_id = hash(dobj.selector) return rv return cc_cache_func class OctreeSubset(YTSelectionContainer, abc.ABC): _spatial = True _num_ghost_zones = 0 _type_name = "octree_subset" _skip_add = True _con_args: tuple[str, ...] 
= ("base_region", "domain", "ds") _domain_offset = 0 _cell_count = -1 _block_order = "C" def __init__(self, base_region, domain, ds, num_zones=2, num_ghost_zones=0): super().__init__(ds, None) self._num_zones = num_zones self._num_ghost_zones = num_ghost_zones self.domain = domain self.domain_id = domain.domain_id self.ds = domain.ds self._index = self.ds.index self._last_mask = None self._last_selector_id = None self.base_region = base_region self.base_selector = base_region.selector @property @abc.abstractmethod def oct_handler(self): # In charge of returning the oct_handler pass def __getitem__(self, key): tr = super().__getitem__(key) try: fields = self._determine_fields(key) except YTFieldTypeNotFound: return tr finfo = self.ds._get_field_info(fields[0]) if not finfo.sampling_type == "particle": # We may need to reshape the field, if it is being queried from # field_data. If it's already cached, it just passes through. if len(tr.shape) < 4: tr = self._reshape_vals(tr) return tr return tr @property def nz(self): return self._num_zones + 2 * self._num_ghost_zones def _reshape_vals(self, arr): nz = self.nz if len(arr.shape) <= 2: n_oct = arr.shape[0] // (nz**3) elif arr.shape[-1] == 3: n_oct = arr.shape[-2] else: n_oct = arr.shape[-1] if arr.size == nz * nz * nz * n_oct: new_shape = (nz, nz, nz, n_oct) elif arr.size == nz * nz * nz * n_oct * 3: new_shape = (nz, nz, nz, n_oct, 3) else: raise RuntimeError # Note that if arr is already F-contiguous, this *shouldn't* copy the # data. But, it might. However, I can't seem to figure out how to # make the assignment to .shape, which *won't* copy the data, make the # resultant array viewed in Fortran order. arr = arr.reshape(new_shape, order="F") return arr def mask_refinement(self, selector): mask = self.oct_handler.mask(selector, domain_id=self.domain_id) return mask def select_blocks(self, selector): mask = self.oct_handler.mask(selector, domain_id=self.domain_id) slicer = OctreeSubsetBlockSlice(self, self.ds) for i, sl in slicer: yield sl, np.atleast_3d(mask[i, ...]) def select_tcoords(self, dobj): # These will not be pre-allocated, which can be a problem for speed and # memory usage. dts, ts = [], [] for sl, mask in self.select_blocks(dobj.selector): sl.child_mask = np.asfortranarray(mask) dt, t = dobj.selector.get_dt(sl) dts.append(dt) ts.append(t) if len(dts) == len(ts) == 0: return np.empty(0, "f8"), np.empty(0, "f8") return np.concatenate(dts), np.concatenate(ts) @cached_property def domain_ind(self): return self.oct_handler.domain_ind(self.selector) def deposit(self, positions, fields=None, method=None, kernel_name="cubic"): r"""Operate on the mesh, in a particle-against-mesh fashion, with exclusively local input. This uses the octree indexing system to call a "deposition" operation (defined in yt/geometry/particle_deposit.pyx) that can take input from several particles (local to the mesh) and construct some value on the mesh. The canonical example is to sum the total mass in a mesh cell and then divide by its volume. Parameters ---------- positions : array_like (Nx3) The positions of all of the particles to be examined. A new indexed octree will be constructed on these particles. fields : list of arrays All the necessary fields for computing the particle operation. For instance, this might include mass, velocity, etc. method : string This is the "method name" which will be looked up in the `particle_deposit` namespace as `methodname_deposit`. 
Current methods include `count`, `simple_smooth`, `sum`, `std`, `cic`, `weighted_mean`, `mesh_id`, and `nearest`. kernel_name : string, default 'cubic' This is the name of the smoothing kernel to use. Current supported kernel names include `cubic`, `quartic`, `quintic`, `wendland2`, `wendland4`, and `wendland6`. Returns ------- List of fortran-ordered, mesh-like arrays. """ # Here we perform our particle deposition. if fields is None: fields = [] cls = getattr(particle_deposit, f"deposit_{method}", None) if cls is None: raise YTParticleDepositionNotImplemented(method) nz = self.nz nvals = (nz, nz, nz, (self.domain_ind >= 0).sum()) if np.max(self.domain_ind) >= nvals[-1]: print( f"nocts, domain_ind >= 0, max {self.oct_handler.nocts} {nvals[-1]} {np.max(self.domain_ind)}" ) raise Exception() # We allocate number of zones, not number of octs op = cls(nvals, kernel_name) op.initialize() mylog.debug( "Depositing %s (%s^3) particles into %s Octs", positions.shape[0], positions.shape[0] ** 0.3333333, nvals[-1], ) positions.convert_to_units("code_length") pos = positions.d # We should not need the following if we know in advance all our fields # need no casting. fields = [np.ascontiguousarray(f, dtype="float64") for f in fields] op.process_octree( self.oct_handler, self.domain_ind, pos, fields, self.domain_id, self._domain_offset, ) vals = op.finalize() if vals is None: return return np.asfortranarray(vals) def mesh_sampling_particle_field(self, positions, mesh_field, lvlmax=None): r"""Operate on the particles, in a mesh-against-particle fashion, with exclusively local input. This uses the octree indexing system to call a "mesh sampling" operation (defined in yt/geometry/particle_deposit.pyx). For each particle, the function returns the value of the cell containing the particle. Parameters ---------- positions : array_like (Nx3) The positions of all of the particles to be examined. mesh_field : array_like (M,) The value of the field to deposit. lvlmax : array_like (N), optional If provided, the maximum level where to look for cells Returns ------- List of fortran-ordered, particle-like arrays containing the value of the mesh at the location of the particles. """ # Here we perform our particle deposition. npart = positions.shape[0] nocts = (self.domain_ind >= 0).sum() # We allocate number of zones, not number of octs op = particle_deposit.CellIdentifier(npart, "none") op.initialize(npart) mylog.debug( "Depositing %s Octs onto %s (%s^3) particles", nocts, positions.shape[0], positions.shape[0] ** 0.3333333, ) pos = positions.to_value("code_length").astype("float64", copy=False) op.process_octree( self.oct_handler, self.domain_ind, pos, None, self.domain_id, self._domain_offset, lvlmax=lvlmax, ) igrid, icell = op.finalize() igrid = np.asfortranarray(igrid) icell = np.asfortranarray(icell) # Some particle may fall outside of the local domain, so we # need to be careful here ids = igrid * 8 + icell mask = ids >= 0 # Fill the return array ret = np.empty(npart) ret[mask] = mesh_field[ids[mask]] ret[~mask] = np.nan return ret def smooth( self, positions, fields=None, index_fields=None, method=None, create_octree=False, nneighbors=64, kernel_name="cubic", ): r"""Operate on the mesh, in a particle-against-mesh fashion, with non-local input. This uses the octree indexing system to call a "smoothing" operation (defined in yt/geometry/particle_smooth.pyx) that can take input from several (non-local) particles and construct some value on the mesh. 
The canonical example is to conduct a smoothing kernel operation on the mesh. Parameters ---------- positions : array_like (Nx3) The positions of all of the particles to be examined. A new indexed octree will be constructed on these particles. fields : list of arrays All the necessary fields for computing the particle operation. For instance, this might include mass, velocity, etc. index_fields : list of arrays All of the fields defined on the mesh that may be used as input to the operation. method : string This is the "method name" which will be looked up in the `particle_smooth` namespace as `methodname_smooth`. Current methods include `volume_weighted`, `nearest`, `idw`, `nth_neighbor`, and `density`. create_octree : bool Should we construct a new octree for indexing the particles? In cases where we are applying an operation on a subset of the particles used to construct the mesh octree, this will ensure that we are able to find and identify all relevant particles. nneighbors : int, default 64 The number of neighbors to examine during the process. kernel_name : string, default 'cubic' This is the name of the smoothing kernel to use. Current supported kernel names include `cubic`, `quartic`, `quintic`, `wendland2`, `wendland4`, and `wendland6`. Returns ------- List of fortran-ordered, mesh-like arrays. """ # Here we perform our particle deposition. positions.convert_to_units("code_length") if create_octree: morton = compute_morton( positions[:, 0], positions[:, 1], positions[:, 2], self.ds.domain_left_edge, self.ds.domain_right_edge, ) morton.sort() particle_octree = ParticleOctreeContainer( [1, 1, 1], self.ds.domain_left_edge, self.ds.domain_right_edge, num_zones=self._nz, ) # This should ensure we get everything within one neighbor of home. particle_octree.n_ref = nneighbors * 2 particle_octree.add(morton) particle_octree.finalize(self.domain_id) pdom_ind = particle_octree.domain_ind(self.selector) else: particle_octree = self.oct_handler pdom_ind = self.domain_ind if fields is None: fields = [] if index_fields is None: index_fields = [] cls = getattr(particle_smooth, f"{method}_smooth", None) if cls is None: raise YTParticleDepositionNotImplemented(method) nz = self.nz mdom_ind = self.domain_ind nvals = (nz, nz, nz, (mdom_ind >= 0).sum()) op = cls(nvals, len(fields), nneighbors, kernel_name) op.initialize() mylog.debug( "Smoothing %s particles into %s Octs", positions.shape[0], nvals[-1] ) # Pointer operations within 'process_octree' require arrays to be # contiguous cf. https://bitbucket.org/yt_analysis/yt/issues/1079 fields = [np.ascontiguousarray(f, dtype="float64") for f in fields] op.process_octree( self.oct_handler, mdom_ind, positions, self.fcoords, fields, self.domain_id, self._domain_offset, self.ds.periodicity, index_fields, particle_octree, pdom_ind, self.ds.geometry, ) # If there are 0s in the smoothing field this will not throw an error, # but silently return nans for vals where dividing by 0 # Same as what is currently occurring, but suppressing the div by zero # error. with np.errstate(invalid="ignore"): vals = op.finalize() if isinstance(vals, list): vals = [np.asfortranarray(v) for v in vals] else: vals = np.asfortranarray(vals) return vals def particle_operation( self, positions, fields=None, method=None, nneighbors=64, kernel_name="cubic" ): r"""Operate on particles, in a particle-against-particle fashion. 
This uses the octree indexing system to call a "smoothing" operation (defined in yt/geometry/particle_smooth.pyx) that expects to be called in a particle-by-particle fashion. For instance, the canonical example of this would be to compute the Nth nearest neighbor, or to compute the density for a given particle based on some kernel operation. Many of the arguments to this are identical to those used in the smooth and deposit functions. Note that the `fields` argument must not be empty, as these fields will be modified in place. Parameters ---------- positions : array_like (Nx3) The positions of all of the particles to be examined. A new indexed octree will be constructed on these particles. fields : list of arrays All the necessary fields for computing the particle operation. For instance, this might include mass, velocity, etc. One of these will likely be modified in place. method : string This is the "method name" which will be looked up in the `particle_smooth` namespace as `methodname_smooth`. nneighbors : int, default 64 The number of neighbors to examine during the process. kernel_name : string, default 'cubic' This is the name of the smoothing kernel to use. Current supported kernel names include `cubic`, `quartic`, `quintic`, `wendland2`, `wendland4`, and `wendland6`. Returns ------- Nothing. """ # Here we perform our particle deposition. positions.convert_to_units("code_length") morton = compute_morton( positions[:, 0], positions[:, 1], positions[:, 2], self.ds.domain_left_edge, self.ds.domain_right_edge, ) morton.sort() particle_octree = ParticleOctreeContainer( [1, 1, 1], self.ds.domain_left_edge, self.ds.domain_right_edge, num_zones=2, ) particle_octree.n_ref = nneighbors * 2 particle_octree.add(morton) particle_octree.finalize() pdom_ind = particle_octree.domain_ind(self.selector) if fields is None: fields = [] cls = getattr(particle_smooth, f"{method}_smooth", None) if cls is None: raise YTParticleDepositionNotImplemented(method) nz = self.nz mdom_ind = self.domain_ind nvals = (nz, nz, nz, (mdom_ind >= 0).sum()) op = cls(nvals, len(fields), nneighbors, kernel_name) op.initialize() mylog.debug( "Smoothing %s particles into %s Octs", positions.shape[0], nvals[-1] ) op.process_particles( particle_octree, pdom_ind, positions, fields, self.domain_id, self._domain_offset, self.ds.periodicity, self.ds.geometry, ) vals = op.finalize() if vals is None: return if isinstance(vals, list): vals = [np.asfortranarray(v) for v in vals] else: vals = np.asfortranarray(vals) return vals @cell_count_cache def select_icoords(self, dobj): return self.oct_handler.icoords( dobj.selector, domain_id=self.domain_id, num_cells=self._cell_count ) @cell_count_cache def select_fcoords(self, dobj): fcoords = self.oct_handler.fcoords( dobj.selector, domain_id=self.domain_id, num_cells=self._cell_count ) return self.ds.arr(fcoords, "code_length") @cell_count_cache def select_fwidth(self, dobj): fwidth = self.oct_handler.fwidth( dobj.selector, domain_id=self.domain_id, num_cells=self._cell_count ) return self.ds.arr(fwidth, "code_length") @cell_count_cache def select_ires(self, dobj): return self.oct_handler.ires( dobj.selector, domain_id=self.domain_id, num_cells=self._cell_count ) def select(self, selector, source, dest, offset): n = self.oct_handler.selector_fill( selector, source, dest, offset, domain_id=self.domain_id ) return n def count(self, selector): return -1 def count_particles(self, selector, x, y, z): # We don't cache the selector results count = selector.count_points(x, y, z, 0.0) return count 
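# A minimal usage sketch for the deposition machinery above, with hypothetical # placeholder names (``subset`` for an OctreeSubset, ``ad`` for a data object # on the same dataset): # pos = ad["all", "particle_position"] # deposited = subset.deposit(pos, fields=[ad["all", "particle_mass"]], method="sum") # As the ``deposit`` docstring notes, the result is a Fortran-ordered, # mesh-like array with one (nz, nz, nz) block per oct.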
def select_particles(self, selector, x, y, z): mask = selector.select_points(x, y, z, 0.0) return mask def get_vertex_centered_data(self, fields): # Make sure the field list has only unique entries fields = list(set(fields)) new_fields = {} cg = self.retrieve_ghost_zones(1, fields) for field in fields: new_fields[field] = cg[field][1:, 1:, 1:].copy() np.add(new_fields[field], cg[field][:-1, 1:, 1:], new_fields[field]) np.add(new_fields[field], cg[field][1:, :-1, 1:], new_fields[field]) np.add(new_fields[field], cg[field][1:, 1:, :-1], new_fields[field]) np.add(new_fields[field], cg[field][:-1, 1:, :-1], new_fields[field]) np.add(new_fields[field], cg[field][1:, :-1, :-1], new_fields[field]) np.add(new_fields[field], cg[field][:-1, :-1, 1:], new_fields[field]) np.add(new_fields[field], cg[field][:-1, :-1, :-1], new_fields[field]) np.multiply(new_fields[field], 0.125, new_fields[field]) return new_fields class OctreeSubsetBlockSlicePosition: def __init__(self, ind, block_slice): self.ind = ind self.block_slice = block_slice nz = self.block_slice.octree_subset.nz self.ActiveDimensions = np.array([nz, nz, nz], dtype="int64") self.ds = block_slice.ds def __getitem__(self, key): bs = self.block_slice rv = np.require( bs.octree_subset[key][:, :, :, self.ind].T, requirements=bs.octree_subset._block_order, ) return rv @property def id(self): return self.ind @property def Level(self): return self.block_slice._ires[0, 0, 0, self.ind] @property def LeftEdge(self): LE = ( self.block_slice._fcoords[0, 0, 0, self.ind, :].d - self.block_slice._fwidth[0, 0, 0, self.ind, :].d * 0.5 ) return self.block_slice.octree_subset.ds.arr( LE, self.block_slice._fcoords.units ) @property def RightEdge(self): RE = ( self.block_slice._fcoords[-1, -1, -1, self.ind, :].d + self.block_slice._fwidth[-1, -1, -1, self.ind, :].d * 0.5 ) return self.block_slice.octree_subset.ds.arr( RE, self.block_slice._fcoords.units ) @property def dds(self): return self.block_slice._fwidth[0, 0, 0, self.ind, :] def clear_data(self): pass def get_vertex_centered_data(self, fields, smoothed=False, no_ghost=False): field = fields[0] new_field = self.block_slice.get_vertex_centered_data(fields)[field] return {field: new_field[..., self.ind]} @contextmanager def _field_parameter_state(self, field_parameters): yield self.block_slice.octree_subset._field_parameter_state(field_parameters) class OctreeSubsetBlockSlice: def __init__(self, octree_subset, ds): self.octree_subset = octree_subset self.ds = ds self._vertex_centered_data = {} # Cache some attributes for attr in ["ires", "icoords", "fcoords", "fwidth"]: v = getattr(octree_subset, attr) setattr(self, f"_{attr}", octree_subset._reshape_vals(v)) @property def octree_subset_with_gz(self): subset_with_gz = getattr(self, "_octree_subset_with_gz", None) if not subset_with_gz: self._octree_subset_with_gz = self.octree_subset.retrieve_ghost_zones(1, []) return self._octree_subset_with_gz def get_vertex_centered_data(self, fields, smoothed=False, no_ghost=False): if no_ghost is True: raise NotImplementedError( "get_vertex_centered_data without ghost zones for " "oct-based datasets has not been implemented." 
) # Make sure the field list has only unique entries fields = list(set(fields)) new_fields = {} cg = self.octree_subset_with_gz for field in fields: if field in self._vertex_centered_data: new_fields[field] = self._vertex_centered_data[field] else: finfo = self.ds._get_field_info(field) orig_field = cg[field] nocts = orig_field.shape[-1] new_field = np.zeros((3, 3, 3, nocts), order="F") # Compute vertex-centred data as mean of 8 neighbours cell data slices = (slice(1, None), slice(None, -1)) for slx, sly, slz in product(*repeat(slices, 3)): new_field += orig_field[slx, sly, slz] new_field *= 0.125 new_fields[field] = self.ds.arr(new_field, finfo.output_units) self._vertex_centered_data[field] = new_fields[field] return new_fields def __iter__(self): for i in range(self._ires.shape[-1]): yield i, OctreeSubsetBlockSlicePosition(i, self) class YTPositionArray(unyt_array): @property def morton(self): self.validate() eps = np.finfo(self.dtype).eps LE = self.min(axis=0) LE -= np.abs(LE) * eps RE = self.max(axis=0) RE += np.abs(RE) * eps morton = compute_morton(self[:, 0], self[:, 1], self[:, 2], LE, RE) return morton def to_octree(self, num_zones=2, dims=(1, 1, 1), n_ref=64): mi = self.morton mi.sort() eps = np.finfo(self.dtype).eps LE = self.min(axis=0) LE -= np.abs(LE) * eps RE = self.max(axis=0) RE += np.abs(RE) * eps octree = ParticleOctreeContainer(dims, LE, RE, num_zones=num_zones) octree.n_ref = n_ref octree.add(mi) octree.finalize() return octree def validate(self): if ( len(self.shape) != 2 or self.shape[1] != 3 or self.units.dimensions != length ): raise YTInvalidPositionArray(self.shape, self.units.dimensions) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/index_subobjects/particle_container.py0000644000175100001770000000671514714401662024425 0ustar00runnerdockerimport contextlib from more_itertools import always_iterable from yt.data_objects.data_containers import YTFieldData from yt.data_objects.selection_objects.data_selection_objects import ( YTSelectionContainer, ) from yt.utilities.exceptions import ( YTDataSelectorNotImplemented, YTNonIndexedDataContainer, ) def _non_indexed(name): def _func_non_indexed(self, *args, **kwargs): raise YTNonIndexedDataContainer(self) return _func_non_indexed class ParticleContainer(YTSelectionContainer): _spatial = False _type_name = "particle_container" _skip_add = True _con_args = ("base_region", "base_selector", "data_files", "overlap_files") def __init__( self, base_region, base_selector, data_files, overlap_files=None, domain_id=-1 ): if overlap_files is None: overlap_files = [] self.field_data = YTFieldData() self.field_parameters = {} self.data_files = list(always_iterable(data_files)) self.overlap_files = list(always_iterable(overlap_files)) self.ds = self.data_files[0].ds self._last_mask = None self._last_selector_id = None self._current_particle_type = "all" # self._current_fluid_type = self.ds.default_fluid_type self.base_region = base_region self.base_selector = base_selector self._octree = None self._temp_spatial = False if isinstance(base_region, ParticleContainer): self._temp_spatial = base_region._temp_spatial self._octree = base_region._octree # To ensure there are not domains if global octree not used self.domain_id = -1 def __reduce__(self): # we need to override the __reduce__ from data_containers as this method is not # a registered dataset method (i.e., ds.particle_container does not exist) arg_tuple = tuple(getattr(self, attr) for attr in self._con_args) 
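# unpickling will rebuild the container by calling self.__class__(*arg_tuple)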
return (self.__class__, arg_tuple) @property def selector(self): raise YTDataSelectorNotImplemented(self.oc_type_name) def select_particles(self, selector, x, y, z): mask = selector.select_points(x, y, z) return mask @contextlib.contextmanager def _expand_data_files(self): old_data_files = self.data_files old_overlap_files = self.overlap_files self.data_files = list(set(self.data_files + self.overlap_files)) self.data_files.sort() self.overlap_files = [] yield self self.data_files = old_data_files self.overlap_files = old_overlap_files def retrieve_ghost_zones(self, ngz, coarse_ghosts=False): gz_oct = self.octree.retrieve_ghost_zones(ngz, coarse_ghosts=coarse_ghosts) gz = ParticleContainer( gz_oct.base_region, gz_oct.base_selector, gz_oct.data_files, overlap_files=gz_oct.overlap_files, selector_mask=gz_oct.selector_mask, domain_id=gz_oct.domain_id, ) gz._octree = gz_oct return gz select_blocks = _non_indexed("select_blocks") deposit = _non_indexed("deposit") smooth = _non_indexed("smooth") select_icoords = _non_indexed("select_icoords") select_fcoords = _non_indexed("select_fcoords") select_fwidth = _non_indexed("select_fwidth") select_ires = _non_indexed("select_ires") select = _non_indexed("select") count = _non_indexed("count") count_particles = _non_indexed("count_particles") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/index_subobjects/stretched_grid.py0000644000175100001770000000420714714401662023544 0ustar00runnerdockerimport numpy as np from yt.geometry.selection_routines import convert_mask_to_indices from .grid_patch import AMRGridPatch class StretchedGrid(AMRGridPatch): def __init__(self, id, cell_widths, *, filename=None, index=None): self.cell_widths = [np.array(_) for _ in cell_widths] super().__init__(id, filename, index) def _check_consistency(self): computed_right_edge = self.LeftEdge + [ _.sum() for _ in self.cell_widths * self.LeftEdge.uq ] assert (computed_right_edge == self.RightEdge).all() def _get_selector_mask(self, selector): if self._cache_mask and hash(selector) == self._last_selector_id: mask = self._last_mask else: mask = selector.fill_mask(self) if self._cache_mask: self._last_mask = mask self._last_selector_id = hash(selector) if mask is None: self._last_count = 0 else: self._last_count = mask.sum() return mask def select_fwidth(self, dobj): mask = self._get_selector_mask(dobj.selector) if mask is None: return np.empty((0, 3), dtype="float64") indices = convert_mask_to_indices(mask, self._last_count) coords = np.array( [ self.cell_widths[0][indices[:, 0]], self.cell_widths[1][indices[:, 1]], self.cell_widths[2][indices[:, 2]], ] ).T return coords def select_fcoords(self, dobj): mask = self._get_selector_mask(dobj.selector) if mask is None: return np.empty((0, 3), dtype="float64") cell_centers = [ self.LeftEdge[i].d + np.add.accumulate(self.cell_widths[i]) - 0.5 * self.cell_widths[i] for i in range(3) ] indices = convert_mask_to_indices(mask, self._last_count) coords = np.array( [ cell_centers[0][indices[:, 0]], cell_centers[1][indices[:, 1]], cell_centers[2][indices[:, 2]], ] ).T return coords ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/index_subobjects/unstructured_mesh.py0000644000175100001770000001553214714401662024340 0ustar00runnerdockerimport numpy as np from yt.data_objects.selection_objects.data_selection_objects import ( YTSelectionContainer, ) from yt.funcs import mylog from 
yt.utilities.lib.mesh_utilities import fill_fcoords, fill_fwidths class UnstructuredMesh(YTSelectionContainer): # This is a base class, not meant to be used directly. _spatial = False _connectivity_length = -1 _type_name = "unstructured_mesh" _skip_add = True _index_offset = 0 _con_args = ("mesh_id", "filename", "connectivity_indices", "connectivity_coords") def __init__( self, mesh_id, filename, connectivity_indices, connectivity_coords, index ): super().__init__(index.dataset, None) self.filename = filename self.mesh_id = mesh_id # This is where we set up the connectivity information self.connectivity_indices = connectivity_indices if connectivity_indices.shape[1] != self._connectivity_length: if self._connectivity_length == -1: self._connectivity_length = connectivity_indices.shape[1] else: raise RuntimeError self.connectivity_coords = connectivity_coords self.ds = index.dataset self._index = index self._last_mask = None self._last_count = -1 self._last_selector_id = None def _check_consistency(self): if self.connectivity_indices.shape[1] != self._connectivity_length: raise RuntimeError for gi in range(self.connectivity_indices.shape[0]): ind = self.connectivity_indices[gi, :] - self._index_offset coords = self.connectivity_coords[ind, :] for i in range(3): assert np.unique(coords[:, i]).size == 2 mylog.debug("Connectivity is consistent.") def __repr__(self): return "UnstructuredMesh_%04i" % (self.mesh_id) def get_global_startindex(self): """ Return the integer starting index for each dimension at the current level. """ raise NotImplementedError def convert(self, datatype): """ This will attempt to convert a given unit to cgs from code units. It either returns the multiplicative factor or throws a KeyError. """ return self.ds[datatype] @property def shape(self): raise NotImplementedError def _generate_container_field(self, field): raise NotImplementedError def select_fcoords(self, dobj=None): # This computes centroids! mask = self._get_selector_mask(dobj.selector) if mask is None: return np.empty((0, 3), dtype="float64") centers = fill_fcoords( self.connectivity_coords, self.connectivity_indices, self._index_offset ) return centers[mask, :] def select_fwidth(self, dobj): raise NotImplementedError def select_icoords(self, dobj): raise NotImplementedError def select_ires(self, dobj): raise NotImplementedError def select_tcoords(self, dobj): raise NotImplementedError def deposit(self, positions, fields=None, method=None, kernel_name="cubic"): raise NotImplementedError def select_blocks(self, selector): mask = self._get_selector_mask(selector) yield self, mask def select(self, selector, source, dest, offset): mask = self._get_selector_mask(selector) count = self.count(selector) if count == 0: return 0 dest[offset : offset + count] = source[mask, ...] 
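# the boolean element mask copies only the selected elements; ``offset`` lets # the caller pack several meshes into one destination buffer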
return count def count(self, selector): mask = self._get_selector_mask(selector) if mask is None: return 0 return self._last_count def count_particles(self, selector, x, y, z): # We don't cache the selector results count = selector.count_points(x, y, z, 0.0) return count def select_particles(self, selector, x, y, z): mask = selector.select_points(x, y, z, 0.0) return mask def _get_selector_mask(self, selector): if hash(selector) == self._last_selector_id: mask = self._last_mask else: self._last_mask = mask = selector.fill_mesh_cell_mask(self) self._last_selector_id = hash(selector) if mask is None: self._last_count = 0 else: self._last_count = mask.sum() return mask def select_fcoords_vertex(self, dobj=None): mask = self._get_selector_mask(dobj.selector) if mask is None: return np.empty((0, self._connectivity_length, 3), dtype="float64") vertices = self.connectivity_coords[self.connectivity_indices - 1] return vertices[mask, :, :] class SemiStructuredMesh(UnstructuredMesh): _connectivity_length = 8 _type_name = "semi_structured_mesh" _container_fields = ("dx", "dy", "dz") def __repr__(self): return "SemiStructuredMesh_%04i" % (self.mesh_id) def _generate_container_field(self, field): if self._current_chunk is None: self.index._identify_base_chunk(self) if field == "dx": return self._current_chunk.fwidth[:, 0] elif field == "dy": return self._current_chunk.fwidth[:, 1] elif field == "dz": return self._current_chunk.fwidth[:, 2] def select_fwidth(self, dobj): mask = self._get_selector_mask(dobj.selector) if mask is None: return np.empty((0, 3), dtype="float64") widths = fill_fwidths( self.connectivity_coords, self.connectivity_indices, self._index_offset ) return widths[mask, :] def select_ires(self, dobj): ind = np.zeros(self.connectivity_indices.shape[0]) mask = self._get_selector_mask(dobj.selector) if mask is None: return np.empty(0, dtype="int32") return ind[mask] def select_tcoords(self, dobj): mask = self._get_selector_mask(dobj.selector) if mask is None: return np.empty(0, dtype="float64") dt, t = dobj.selector.get_dt_mesh(self, mask.sum(), self._index_offset) return dt, t def _get_selector_mask(self, selector): if hash(selector) == self._last_selector_id: mask = self._last_mask else: self._last_mask = mask = selector.fill_mesh_cell_mask(self) self._last_selector_id = hash(selector) if mask is None: self._last_count = 0 else: self._last_count = mask.sum() return mask def select(self, selector, source, dest, offset): mask = self._get_selector_mask(selector) count = self.count(selector) if count == 0: return 0 # Note: this likely will not work with vector fields. 
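# ``source.flat`` applies the 1D cell mask to a flattened view of the source, # which is precisely why multi-component (vector) fields would be scrambled here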
dest[offset : offset + count] = source.flat[mask] return count ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2711515 yt-4.4.0/yt/data_objects/level_sets/0000755000175100001770000000000014714401715017007 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/level_sets/__init__.py0000644000175100001770000000000014714401662021107 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/level_sets/api.py0000644000175100001770000000052014714401662020130 0ustar00runnerdockerfrom .clump_handling import Clump, find_clumps from .clump_info_items import add_clump_info from .clump_tools import ( clump_list_sort, recursive_all_clumps, recursive_bottom_clumps, return_all_clumps, return_bottom_clumps, ) from .clump_validators import add_validator from .contour_finder import identify_contours ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/level_sets/clump_handling.py0000644000175100001770000004005614714401662022353 0ustar00runnerdockerimport uuid import numpy as np from yt.fields.derived_field import ValidateSpatial from yt.frontends.ytdata.utilities import save_as_dataset from yt.funcs import get_output_filename, mylog from yt.utilities.tree_container import TreeContainer from .clump_info_items import clump_info_registry from .clump_validators import clump_validator_registry from .contour_finder import identify_contours def add_contour_field(ds, contour_key): def _contours(field, data): fd = data.get_field_parameter(f"contour_slices_{contour_key}") vals = data["index", "ones"] * -1 if fd is None or fd == 0.0: return vals for sl, v in fd.get(data.id, []): vals[sl] = v return vals ds.add_field( ("index", f"contours_{contour_key}"), function=_contours, validators=[ValidateSpatial(0)], take_log=False, display_field=False, sampling_type="cell", units="", ) class Clump(TreeContainer): def __init__( self, data, field, parent=None, clump_info=None, validators=None, base=None, contour_key=None, contour_id=None, ): self.data = data self.field = field self.parent = parent self.quantities = data.quantities self.min_val = self.data[field].min() self.max_val = self.data[field].max() self.info = {} self.children = [] # is this the parent clump? if base is None: base = self self.total_clumps = 0 if clump_info is None: self.set_default_clump_info() else: self.clump_info = clump_info for ci in self.clump_info: ci(self) self.base = base self.clump_id = self.base.total_clumps self.base.total_clumps += 1 self.contour_key = contour_key self.contour_id = contour_id if parent is not None: self.data.parent = self.parent.data if validators is None: validators = [] self.validators = validators # Return value of validity function. self.valid = None _leaves = None @property def leaves(self): if self._leaves is not None: return self._leaves self._leaves = [] for clump in self: if not clump.children: self._leaves.append(clump) return self._leaves def add_validator(self, validator, *args, **kwargs): """ Add a validating function to determine whether the clump should be kept. 
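For example (mirroring the usage shown in ``save_as_dataset`` below): >>> master_clump.add_validator("min_cells", 20)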
""" callback = clump_validator_registry.find(validator, *args, **kwargs) self.validators.append(callback) for child in self.children: child.add_validator(validator) def add_info_item(self, info_item, *args, **kwargs): "Adds an entry to clump_info list and tells children to do the same." callback = clump_info_registry.find(info_item, *args, **kwargs) callback(self) self.clump_info.append(callback) for child in self.children: child.add_info_item(info_item) def set_default_clump_info(self): "Defines default entries in the clump_info array." # add_info_item is recursive so this function does not need to be. self.clump_info = [] self.add_info_item("total_cells") self.add_info_item("cell_mass") if any("jeans" in f for f in self.data.pf.field_list): self.add_info_item("mass_weighted_jeans_mass") self.add_info_item("volume_weighted_jeans_mass") self.add_info_item("max_grid_level") if any("number_density" in f for f in self.data.pf.field_list): self.add_info_item("min_number_density") self.add_info_item("max_number_density") def clear_clump_info(self): """ Clears the clump_info array and passes the instruction to its children. """ self.clump_info = [] for child in self.children: child.clear_clump_info() def find_children(self, min_val, max_val=None): if self.children: mylog.info("Wiping out existing children clumps: %d.", len(self.children)) self.children = [] if max_val is None: max_val = self.max_val nj, cids = identify_contours(self.data, self.field, min_val, max_val) # Here, cids is the set of slices and values, keyed by the # parent_grid_id, that defines the contours. So we can figure out all # the unique values of the contours by examining the list here. unique_contours = set() for sl_list in cids.values(): for _sl, ff in sl_list: unique_contours.update(np.unique(ff)) contour_key = uuid.uuid4().hex base_object = getattr(self.data, "base_object", self.data) add_contour_field(base_object.ds, contour_key) for cid in sorted(unique_contours): if cid == -1: continue new_clump = base_object.cut_region( [f"obj['contours_{contour_key}'] == {cid}"], {(f"contour_slices_{contour_key}"): cids}, ) if new_clump["index", "ones"].size == 0: # This is to skip possibly duplicate clumps. # Using "ones" here will speed things up. continue self.children.append( Clump( new_clump, self.field, parent=self, validators=self.validators, base=self.base, clump_info=self.clump_info, contour_key=contour_key, contour_id=cid, ) ) def __iter__(self): yield self for child in self.children: yield from child def save_as_dataset(self, filename=None, fields=None): r"""Export clump tree to a reloadable yt dataset. This function will take a clump object and output a dataset containing the fields given in the ``fields`` list and all info items. The resulting dataset can be reloaded as a yt dataset. Parameters ---------- filename : str, optional The name of the file to be written. If None, the name will be a combination of the original dataset and the clump index. fields : list of strings or tuples, optional If this is supplied, it is the list of fields to be saved to disk. Returns ------- filename : str The name of the file that has been created. Examples -------- >>> import yt >>> from yt.data_objects.level_sets.api import Clump, find_clumps >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> data_source = ds.disk( ... [0.5, 0.5, 0.5], [0.0, 0.0, 1.0], (8, "kpc"), (1, "kpc") ... 
) >>> field = ("gas", "density") >>> step = 2.0 >>> c_min = 10 ** np.floor(np.log10(data_source[field]).min()) >>> c_max = 10 ** np.floor(np.log10(data_source[field]).max() + 1) >>> master_clump = Clump(data_source, field) >>> master_clump.add_info_item("center_of_mass") >>> master_clump.add_validator("min_cells", 20) >>> find_clumps(master_clump, c_min, c_max, step) >>> fn = master_clump.save_as_dataset( ... fields=[("gas", "density"), ("all", "particle_mass")] ... ) >>> new_ds = yt.load(fn) >>> print(ds.tree["clump", "cell_mass"]) 1296926163.91 Msun >>> print(ds.tree["grid", "density"]) [ 2.54398434e-26 2.46620353e-26 2.25120154e-26 ..., 1.12879234e-25 1.59561490e-25 1.09824903e-24] g/cm**3 >>> print(ds.tree["all", "particle_mass"]) [ 4.25472446e+38 4.25472446e+38 4.25472446e+38 ..., 2.04238266e+38 2.04523901e+38 2.04770938e+38] g >>> print(ds.tree.children[0]["clump", "cell_mass"]) 909636495.312 Msun >>> print(ds.leaves[0]["clump", "cell_mass"]) 3756566.99809 Msun >>> print(ds.leaves[0]["grid", "density"]) [ 6.97820274e-24 6.58117370e-24 7.32046082e-24 6.76202430e-24 7.41184837e-24 6.76981480e-24 6.94287213e-24 6.56149658e-24 6.76584569e-24 6.94073710e-24 7.06713082e-24 7.22556526e-24 7.08338898e-24 6.78684331e-24 7.40647040e-24 7.03050456e-24 7.12438678e-24 6.56310217e-24 7.23201662e-24 7.17314333e-24] g/cm**3 """ ds = self.data.ds keyword = "%s_clump_%d" % (str(ds), self.clump_id) filename = get_output_filename(filename, keyword, ".h5") # collect clump info fields clump_info = {ci.name: [] for ci in self.base.clump_info} clump_info.update( { field: [] for field in ["clump_id", "parent_id", "contour_key", "contour_id"] } ) for clump in self: clump_info["clump_id"].append(clump.clump_id) if clump.parent is None: parent_id = -1 else: parent_id = clump.parent.clump_id clump_info["parent_id"].append(parent_id) contour_key = clump.contour_key if contour_key is None: contour_key = -1 clump_info["contour_key"].append(contour_key) contour_id = clump.contour_id if contour_id is None: contour_id = -1 clump_info["contour_id"].append(contour_id) for ci in self.base.clump_info: clump_info[ci.name].append(clump.info[ci.name][1]) for ci in clump_info: if hasattr(clump_info[ci][0], "units"): clump_info[ci] = ds.arr(clump_info[ci]) else: clump_info[ci] = np.array(clump_info[ci]) ftypes = {ci: "clump" for ci in clump_info} # collect data fields if fields is not None: contour_fields = [ ("index", f"contours_{ckey}") for ckey in np.unique(clump_info["contour_key"]) if str(ckey) != "-1" ] ptypes = [] field_data = {} need_grid_positions = False for f in self.base.data._determine_fields(fields) + contour_fields: if ds.field_info[f].sampling_type == "particle": if f[0] not in ptypes: ptypes.append(f[0]) ftypes[f] = f[0] else: need_grid_positions = True if f[1] in ("x", "y", "z", "dx", "dy", "dz"): # skip 'xyz' if a user passes that in because they # will be added to ftypes below continue ftypes[f] = "grid" field_data[f] = self.base[f] if len(ptypes) > 0: for ax in "xyz": for ptype in ptypes: p_field = (ptype, f"particle_position_{ax}") if p_field in ds.field_info and p_field not in field_data: ftypes[p_field] = p_field[0] field_data[p_field] = self.base[p_field] for clump in self: if clump.contour_key is None: continue for ptype in ptypes: cfield = (ptype, f"contours_{clump.contour_key}") if cfield not in field_data: field_data[cfield] = clump.data._part_ind(ptype).astype( np.int64 ) ftypes[cfield] = ptype field_data[cfield][clump.data._part_ind(ptype)] = ( clump.contour_id ) if need_grid_positions: for ax in 
"xyz": g_field = ("index", ax) if g_field in ds.field_info and g_field not in field_data: field_data[g_field] = self.base[g_field] ftypes[g_field] = "grid" g_field = ("index", "d" + ax) if g_field in ds.field_info and g_field not in field_data: ftypes[g_field] = "grid" field_data[g_field] = self.base[g_field] if self.contour_key is not None: cfilters = {} for field in field_data: if ftypes[field] == "grid": ftype = "index" else: ftype = field[0] cfield = (ftype, f"contours_{self.contour_key}") if cfield not in cfilters: cfilters[cfield] = field_data[cfield] == self.contour_id field_data[field] = field_data[field][cfilters[cfield]] clump_info.update(field_data) extra_attrs = {"data_type": "yt_clump_tree", "container_type": "yt_clump_tree"} save_as_dataset( ds, filename, clump_info, field_types=ftypes, extra_attrs=extra_attrs ) return filename def pass_down(self, operation): """ Performs an operation on a clump with an exec and passes the instruction down to clump children. """ # Call if callable, otherwise do an exec. if callable(operation): operation() else: exec(operation) for child in self.children: child.pass_down(operation) def _validate(self): "Apply all user specified validator functions." # Only call functions if not done already. if self.valid is not None: return self.valid self.valid = True for validator in self.validators: self.valid &= validator(self) if not self.valid: break return self.valid def __reduce__(self): raise RuntimeError( "Pickling Clump instances is not supported. Please use " "Clump.save_as_dataset instead" ) def __getitem__(self, request): return self.data[request] def find_clumps(clump, min_val, max_val, d_clump): mylog.info("Finding clumps: min: %e, max: %e, step: %f", min_val, max_val, d_clump) if min_val >= max_val: return clump.find_children(min_val, max_val=max_val) if len(clump.children) == 1: find_clumps(clump, min_val * d_clump, max_val, d_clump) elif len(clump.children) > 0: these_children = [] mylog.info("Investigating %d children.", len(clump.children)) for child in clump.children: find_clumps(child, min_val * d_clump, max_val, d_clump) if len(child.children) > 0: these_children.append(child) elif child._validate(): these_children.append(child) else: mylog.info( "Eliminating invalid, childless clump with %d cells.", len(child.data["index", "ones"]), ) if len(these_children) > 1: mylog.info( "%d of %d children survived.", len(these_children), len(clump.children) ) clump.children = these_children elif len(these_children) == 1: mylog.info( "%d of %d children survived, linking its children to parent.", len(these_children), len(clump.children), ) clump.children = these_children[0].children for child in clump.children: child.parent = clump child.data.parent = clump.data else: mylog.info( "%d of %d children survived, erasing children.", len(these_children), len(clump.children), ) clump.children = [] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/level_sets/clump_info_items.py0000644000175100001770000000632414714401662022723 0ustar00runnerdockerimport numpy as np from yt.utilities.operator_registry import OperatorRegistry clump_info_registry = OperatorRegistry() def add_clump_info(name, function): clump_info_registry[name] = ClumpInfoCallback(name, function) class ClumpInfoCallback: r""" A ClumpInfoCallback is a function that takes a clump, computes a quantity, and returns a string to be printed out for writing clump info. 
""" def __init__(self, name, function, args=None, kwargs=None): self.name = name self.function = function self.args = args if self.args is None: self.args = [] self.kwargs = kwargs if self.kwargs is None: self.kwargs = {} def __call__(self, clump): if self.name not in clump.info: clump.info[self.name] = self.function(clump, *self.args, **self.kwargs) rv = clump.info[self.name] return rv[0] % rv[1] def _center_of_mass(clump, units="code_length", **kwargs): p = clump.quantities.center_of_mass(**kwargs) return "Center of mass: %s.", p.to(units) add_clump_info("center_of_mass", _center_of_mass) def _total_cells(clump): n_cells = clump.data["index", "ones"].size return "Cells: %d.", n_cells add_clump_info("total_cells", _total_cells) def _cell_mass(clump): cell_mass = clump.data["gas", "cell_mass"].sum().in_units("Msun") return "Mass: %e Msun.", cell_mass add_clump_info("cell_mass", _cell_mass) def _mass_weighted_jeans_mass(clump): jeans_mass = clump.data.quantities.weighted_average_quantity( "jeans_mass", ("gas", "cell_mass") ).in_units("Msun") return "Jeans Mass (mass-weighted): %.6e Msolar.", jeans_mass add_clump_info("mass_weighted_jeans_mass", _mass_weighted_jeans_mass) def _volume_weighted_jeans_mass(clump): jeans_mass = clump.data.quantities.weighted_average_quantity( "jeans_mass", ("index", "cell_volume") ).in_units("Msun") return "Jeans Mass (volume-weighted): %.6e Msolar.", jeans_mass add_clump_info("volume_weighted_jeans_mass", _volume_weighted_jeans_mass) def _max_grid_level(clump): max_level = clump.data["index", "grid_level"].max() return "Max grid level: %d.", max_level add_clump_info("max_grid_level", _max_grid_level) def _min_number_density(clump): min_n = clump.data["gas", "number_density"].min().in_units("cm**-3") return "Min number density: %.6e cm^-3.", min_n add_clump_info("min_number_density", _min_number_density) def _max_number_density(clump): max_n = clump.data["gas", "number_density"].max().in_units("cm**-3") return "Max number density: %.6e cm^-3.", max_n add_clump_info("max_number_density", _max_number_density) def _distance_to_main_clump(clump, units="pc"): master = clump while master.parent is not None: master = master.parent master_com = clump.data.ds.arr(master.data.quantities.center_of_mass()) my_com = clump.data.ds.arr(clump.data.quantities.center_of_mass()) distance = np.sqrt(((master_com - my_com) ** 2).sum()) distance.convert_to_units("pc") return ( f"Distance from master center of mass: %.6e {units}.", distance.in_units(units), ) add_clump_info("distance_to_main_clump", _distance_to_main_clump) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/level_sets/clump_tools.py0000644000175100001770000000433514714401662021727 0ustar00runnerdockerimport numpy as np nar = np.array counter = 0 def recursive_all_clumps(clump, list, level, parentnumber): """A recursive function to flatten the index in *clump*. Not to be called directly: please call return_all_clumps, below.""" global counter counter += 1 clump.number = counter clump.parentnumber = parentnumber counter += 1 list.append(clump) clump.level = level if clump.children is not None: for child in clump.children: recursive_all_clumps(child, list, level + 1, clump.number) return list def return_all_clumps(clump): """Flatten the index defined by *clump*. 
Additionally adds three variables to the clump: level = depth of index number = index of clump in the final array parentnumber = index of this clumps parent """ global counter counter = 0 list = [] level = 0 clump.level = level parentnumber = -1 recursive_all_clumps(clump, list, level, parentnumber) return list def return_bottom_clumps(clump, dbg=0): """ Recursively return clumps at the bottom of the index. This gives a list of clumps similar to what one would get from a CLUMPFIND routine """ global counter counter = 0 list = [] level = 0 recursive_bottom_clumps(clump, list, dbg, level) return list def recursive_bottom_clumps(clump, clump_list, dbg=0, level=0): """Loops over a list of clumps (clumps) and fills clump_list with the bottom most. Recursive. Prints the level and the number of cores to the screen.""" global counter if (clump.children is None) or (len(clump.children) == 0): counter += 1 clump_list.append(clump) else: for child in clump.children: recursive_bottom_clumps(child, clump_list, dbg=dbg, level=level + 1) def clump_list_sort(clump_list): """Returns a copy of clump_list, sorted by ascending minimum density. This eliminates overlap when passing to yt.visualization.plot_modification.ClumpContourCallback""" minDensity = [c["gas", "density"].min() for c in clump_list] args = np.argsort(minDensity) list = nar(clump_list)[args] reverse = range(list.size - 1, -1, -1) return list[reverse] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/level_sets/clump_validators.py0000644000175100001770000000652514714401662022742 0ustar00runnerdockerimport numpy as np from yt.utilities.lib.misc_utilities import gravitational_binding_energy from yt.utilities.operator_registry import OperatorRegistry from yt.utilities.physical_constants import gravitational_constant_cgs as G clump_validator_registry = OperatorRegistry() def add_validator(name, function): clump_validator_registry[name] = ClumpValidator(function) class ClumpValidator: r""" A ClumpValidator is a function that takes a clump and returns True or False as to whether the clump is valid and shall be kept. """ def __init__(self, function, args=None, kwargs=None): self.function = function self.args = args if self.args is None: self.args = [] self.kwargs = kwargs if self.kwargs is None: self.kwargs = {} def __call__(self, clump): return self.function(clump, *self.args, **self.kwargs) def _gravitationally_bound( clump, use_thermal_energy=True, use_particles=True, truncate=True, num_threads=0 ): "True if clump is gravitationally bound." 
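    # Outline of the check implemented below: accumulate the clump's kinetic
    # energy (gas motion measured relative to the bulk velocity, optionally
    # plus thermal energy and the kinetic energy of any particles) and
    # compare it to the gravitational potential energy returned by
    # gravitational_binding_energy. The clump is considered bound when the
    # potential energy is at least the kinetic energy; with truncate=True,
    # the potential sum stops early as soon as that threshold is reached.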
use_particles &= ("all", "particle_mass") in clump.data.ds.field_info bulk_velocity = clump.quantities.bulk_velocity(use_particles=use_particles) kinetic = ( 0.5 * ( clump["gas", "mass"] * ( (bulk_velocity[0] - clump["gas", "velocity_x"]) ** 2 + (bulk_velocity[1] - clump["gas", "velocity_y"]) ** 2 + (bulk_velocity[2] - clump["gas", "velocity_z"]) ** 2 ) ).sum() ) if use_thermal_energy: kinetic += ( clump["gas", "mass"] * clump["gas", "specific_thermal_energy"] ).sum() if use_particles: kinetic += ( 0.5 * ( clump["all", "particle_mass"] * ( (bulk_velocity[0] - clump["all", "particle_velocity_x"]) ** 2 + (bulk_velocity[1] - clump["all", "particle_velocity_y"]) ** 2 + (bulk_velocity[2] - clump["all", "particle_velocity_z"]) ** 2 ) ).sum() ) if use_particles: m = np.concatenate( [clump["gas", "mass"].in_cgs(), clump["all", "particle_mass"].in_cgs()] ) px = np.concatenate( [clump["index", "x"].in_cgs(), clump["all", "particle_position_x"].in_cgs()] ) py = np.concatenate( [clump["index", "y"].in_cgs(), clump["all", "particle_position_y"].in_cgs()] ) pz = np.concatenate( [clump["index", "z"].in_cgs(), clump["all", "particle_position_z"].in_cgs()] ) else: m = clump["gas", "mass"].in_cgs() px = clump["index", "x"].in_cgs() py = clump["index", "y"].in_cgs() pz = clump["index", "z"].in_cgs() potential = clump.data.ds.quan( G * gravitational_binding_energy( m, px, py, pz, truncate, (kinetic / G).in_cgs(), num_threads=num_threads ), kinetic.in_cgs().units, ) if truncate and potential >= kinetic: return True return potential >= kinetic add_validator("gravitationally_bound", _gravitationally_bound) def _min_cells(clump, n_cells): "True if clump has a minimum number of cells." return clump["index", "ones"].size >= n_cells add_validator("min_cells", _min_cells) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/level_sets/contour_finder.py0000644000175100001770000000521114714401662022401 0ustar00runnerdockerfrom collections import defaultdict import numpy as np from yt.funcs import get_pbar, mylog from yt.utilities.lib.contour_finding import ( ContourTree, TileContourTree, link_node_contours, update_joins, ) from yt.utilities.lib.partitioned_grid import PartitionedGrid def identify_contours(data_source, field, min_val, max_val, cached_fields=None): tree = ContourTree() gct = TileContourTree(min_val, max_val) total_contours = 0 contours = {} node_ids = [] DLE = data_source.ds.domain_left_edge masks = {g.id: m for g, m in data_source.blocks} for g, node, (sl, dims, gi) in data_source.tiles.slice_traverse(): g.field_parameters.update(data_source.field_parameters) node.node_ind = len(node_ids) nid = node.node_id node_ids.append(nid) values = g[field][sl].astype("float64") contour_ids = np.zeros(dims, "int64") - 1 mask = masks[g.id][sl].astype("uint8") total_contours += gct.identify_contours( values, contour_ids, mask, total_contours ) new_contours = tree.cull_candidates(contour_ids) tree.add_contours(new_contours) # Now we can create a partitioned grid with the contours. 
LE = (DLE + g.dds * gi).in_units("code_length").ndarray_view() RE = LE + (dims * g.dds).in_units("code_length").ndarray_view() pg = PartitionedGrid( g.id, [contour_ids.view("float64")], mask, LE, RE, dims.astype("int64") ) contours[nid] = (g.Level, node.node_ind, pg, sl) node_ids = np.array(node_ids, dtype="int64") if node_ids.size == 0: return 0, {} trunk = data_source.tiles.tree.trunk mylog.info("Linking node (%s) contours.", len(contours)) link_node_contours(trunk, contours, tree, node_ids) mylog.info("Linked.") # joins = tree.cull_joins(bt) # tree.add_joins(joins) joins = tree.export() contour_ids = defaultdict(list) pbar = get_pbar("Updating joins ... ", len(contours)) final_joins = np.unique(joins[:, 1]) for i, nid in enumerate(sorted(contours)): level, node_ind, pg, sl = contours[nid] ff = pg.my_data[0].view("int64") update_joins(joins, ff, final_joins) contour_ids[pg.parent_grid_id].append((sl, ff)) pbar.update(i + 1) pbar.finish() rv = {} rv.update(contour_ids) # NOTE: Because joins can appear in both a "final join" and a subsequent # "join", we can't know for sure how many unique joins there are without # checking if no cells match or doing an expensive operation checking for # the unique set of final join values. return final_joins.size, rv ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2711515 yt-4.4.0/yt/data_objects/level_sets/tests/0000755000175100001770000000000014714401715020151 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/level_sets/tests/__init__.py0000644000175100001770000000000014714401662022251 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/level_sets/tests/test_clump_finding.py0000644000175100001770000001366214714401662024411 0ustar00runnerdockerimport os import shutil import tempfile import numpy as np from numpy.testing import assert_array_equal, assert_equal from yt.data_objects.level_sets.api import Clump, add_clump_info, find_clumps from yt.data_objects.level_sets.clump_info_items import clump_info_registry from yt.fields.derived_field import ValidateParameter from yt.loaders import load, load_uniform_grid from yt.testing import requires_file, requires_module from yt.utilities.answer_testing.framework import data_dir_load def test_clump_finding(): n_c = 8 n_p = 1 dims = (n_c, n_c, n_c) density = np.ones(dims) high_rho = 10.0 # add a couple disconnected density enhancements density[2, 2, 2] = high_rho density[6, 6, 6] = high_rho # put a particle at the center of one of them dx = 1.0 / n_c px = 2.5 * dx * np.ones(n_p) data = { "density": density, "particle_mass": np.ones(n_p), "particle_position_x": px, "particle_position_y": px, "particle_position_z": px, } ds = load_uniform_grid(data, dims) ad = ds.all_data() master_clump = Clump(ad, ("gas", "density")) master_clump.add_validator("min_cells", 1) def _total_volume(clump): total_vol = clump.data.quantities.total_quantity( [("index", "cell_volume")] ).in_units("cm**3") return "Cell Volume: %6e cm**3.", total_vol add_clump_info("total_volume", _total_volume) master_clump.add_info_item("total_volume") find_clumps(master_clump, 0.5, 2.0 * high_rho, 10.0) # there should be two children assert_equal(len(master_clump.children), 2) leaf_clumps = master_clump.leaves for l in leaf_clumps: keys = l.info.keys() assert "total_cells" in keys assert "cell_mass" in keys assert "max_grid_level" in keys 
assert "total_volume" in keys # two leaf clumps assert_equal(len(leaf_clumps), 2) # check some clump fields assert_equal(master_clump.children[0]["gas", "density"][0].size, 1) assert_equal( master_clump.children[0]["gas", "density"][0], ad["gas", "density"].max() ) assert_equal(master_clump.children[0]["all", "particle_mass"].size, 1) assert_array_equal( master_clump.children[0]["all", "particle_mass"], ad["all", "particle_mass"] ) assert_equal(master_clump.children[1]["gas", "density"][0].size, 1) assert_equal( master_clump.children[1]["gas", "density"][0], ad["gas", "density"].max() ) assert_equal(master_clump.children[1]["all", "particle_mass"].size, 0) # clean up global registry to avoid polluting other tests del clump_info_registry["total_volume"] i30 = "IsolatedGalaxy/galaxy0030/galaxy0030" @requires_module("h5py") @requires_file(i30) def test_clump_tree_save(): tmpdir = tempfile.mkdtemp() curdir = os.getcwd() os.chdir(tmpdir) ds = data_dir_load(i30) data_source = ds.disk([0.5, 0.5, 0.5], [0.0, 0.0, 1.0], (8, "kpc"), (1, "kpc")) field = ("gas", "density") step = 2.0 c_min = 10 ** np.floor(np.log10(data_source[field]).min()) c_max = 10 ** np.floor(np.log10(data_source[field]).max() + 1) master_clump = Clump(data_source, field) master_clump.add_info_item("center_of_mass") master_clump.add_validator("min_cells", 20) find_clumps(master_clump, c_min, c_max, step) leaf_clumps = master_clump.leaves fn = master_clump.save_as_dataset( fields=[ ("gas", "density"), ("index", "x"), ("index", "y"), ("index", "z"), ("all", "particle_mass"), ] ) ds2 = load(fn) # compare clumps in the tree t1 = list(master_clump) t2 = list(ds2.tree) mt1 = ds.arr([c.info["cell_mass"][1] for c in t1]) mt2 = ds2.arr([c["clump", "cell_mass"] for c in t2]) it1 = np.argsort(mt1).astype("int64") it2 = np.argsort(mt2).astype("int64") assert_array_equal(mt1[it1], mt2[it2]) for i1, i2 in zip(it1, it2, strict=True): ct1 = t1[i1] ct2 = t2[i2] assert_array_equal(ct1["gas", "density"], ct2["grid", "density"]) assert_array_equal(ct1["all", "particle_mass"], ct2["all", "particle_mass"]) # compare leaf clumps c1 = list(leaf_clumps) c2 = list(ds2.leaves) mc1 = ds.arr([c.info["cell_mass"][1] for c in c1]) mc2 = ds2.arr([c["clump", "cell_mass"] for c in c2]) ic1 = np.argsort(mc1).astype("int64") ic2 = np.argsort(mc2).astype("int64") assert_array_equal(mc1[ic1], mc2[ic2]) os.chdir(curdir) shutil.rmtree(tmpdir) i30 = "IsolatedGalaxy/galaxy0030/galaxy0030" @requires_module("h5py") @requires_file(i30) def test_clump_field_parameters(): """ Make sure clump finding on fields with field parameters works. 
""" def _also_density(field, data): factor = data.get_field_parameter("factor") return factor * data["gas", "density"] ds = data_dir_load(i30) ds.add_field( ("gas", "also_density"), function=_also_density, units=ds.fields.gas.density.units, sampling_type="cell", validators=[ValidateParameter("factor")], ) data_source = ds.disk([0.5, 0.5, 0.5], [0.0, 0.0, 1.0], (8, "kpc"), (1, "kpc")) data_source.set_field_parameter("factor", 1) step = 2.0 field = ("gas", "density") c_min = 10 ** np.floor(np.log10(data_source[field]).min()) c_max = 10 ** np.floor(np.log10(data_source[field]).max() + 1) master_clump_1 = Clump(data_source, ("gas", "density")) master_clump_1.add_validator("min_cells", 20) master_clump_2 = Clump(data_source, ("gas", "also_density")) master_clump_2.add_validator("min_cells", 20) find_clumps(master_clump_1, c_min, c_max, step) find_clumps(master_clump_2, c_min, c_max, step) leaf_clumps_1 = master_clump_1.leaves leaf_clumps_2 = master_clump_2.leaves for c1, c2 in zip(leaf_clumps_1, leaf_clumps_2, strict=True): assert_array_equal(c1["gas", "density"], c2["gas", "density"]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/particle_filters.py0000644000175100001770000001455514714401662020562 0ustar00runnerdockerimport copy from contextlib import contextmanager from yt.fields.field_info_container import NullFunc, TranslationFunc from yt.funcs import mylog from yt.utilities.exceptions import YTIllDefinedFilter # One to one mapping filter_registry: dict[str, "ParticleFilter"] = {} class DummyFieldInfo: particle_type = True sampling_type = "particle" dfi = DummyFieldInfo() class ParticleFilter: def __init__(self, name, function, requires, filtered_type): self.name = name self.function = function self.requires = requires[:] self.filtered_type = filtered_type @contextmanager def apply(self, dobj): with dobj._chunked_read(dobj._current_chunk): with dobj._field_type_state(self.filtered_type, dfi): # We won't be storing the field data from the whole read, so we # start by filtering now. filter = self.function(self, dobj) yield # Retain a reference here, and we'll filter all appropriate fields # later. fd = dobj.field_data for f, tr in fd.items(): if f[0] != self.filtered_type: continue if tr.shape != filter.shape and tr.shape[0] != filter.shape[0]: raise YTIllDefinedFilter(self, tr.shape, filter.shape) else: d = tr[filter] dobj.field_data[self.name, f[1]] = d def available(self, field_list): # Note that this assumes that all the fields in field_list have the # same form as the 'requires' attributes. This won't be true if the # fields are implicitly "all" or something. return all((self.filtered_type, field) in field_list for field in self.requires) def missing(self, field_list): return [ (self.filtered_type, field) for field in self.requires if (self.filtered_type, field) not in field_list ] def wrap_func(self, field_name, old_fi): new_fi = copy.copy(old_fi) new_fi.name = (self.name, field_name[1]) if old_fi._function == NullFunc: new_fi._function = TranslationFunc(old_fi.name) # Marking the field as inherited new_fi._inherited_particle_filter = True return new_fi def add_particle_filter(name, function, requires=None, filtered_type="all"): r"""Create a new particle filter in the global namespace of filters A particle filter is a short name that corresponds to an algorithm for filtering a set of particles into a subset. 
This is useful for creating new particle types based on a cut on a particle field, such as particle mass, ID or type. After defining a new filter, it still needs to be added to the dataset by calling :func:`~yt.data_objects.static_output.add_particle_filter`. .. note:: Alternatively, you can make use of the :func:`~yt.data_objects.particle_filters.particle_filter` decorator to define a new particle filter. Parameters ---------- name : string The name of the particle filter. New particle fields with particle type set by this name will be added to any dataset that enables this particle filter. function : reference to a function The function that defines the particle filter. The function should accept two arguments: a reference to a particle filter object and a reference to an abstract yt data object. See the example below. requires : a list of field names A list of field names required by the particle filter definition. filtered_type : string The name of the particle type to be filtered. Examples -------- >>> import yt >>> def _stars(pfilter, data): ... return data[pfilter.filtered_type, "particle_type"] == 2 >>> yt.add_particle_filter( ... "stars", function=_stars, filtered_type="all", requires=["particle_type"] ... ) >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> ds.add_particle_filter("stars") >>> ad = ds.all_data() >>> print(ad["stars", "particle_mass"]) [ 1.68243760e+38 1.65690882e+38 1.65813321e+38 ..., 2.04238266e+38 2.04523901e+38 2.04770938e+38] g """ if requires is None: requires = [] filter = ParticleFilter(name, function, requires, filtered_type) if filter_registry.get(name, None) is not None: mylog.warning("The %s particle filter already exists. Overriding.", name) filter_registry[name] = filter def particle_filter(name=None, requires=None, filtered_type="all"): r"""A decorator that adds a new particle filter A particle filter is a short name that corresponds to an algorithm for filtering a set of particles into a subset. This is useful for creating new particle types based on a cut on a particle field, such as particle mass, ID or type. .. note:: Alternatively, you can make use of the :func:`~yt.data_objects.particle_filters.add_particle_filter` function to define a new particle filter using a more declarative syntax. Parameters ---------- name : string The name of the particle filter. New particle fields with particle type set by this name will be added to any dataset that enables this particle filter. If not set, the name will be inferred from the name of the filter function. requires : a list of field names A list of field names required by the particle filter definition. filtered_type : string The name of the particle type to be filtered. Examples -------- >>> import yt >>> # define a filter named "stars" >>> @yt.particle_filter(requires=["particle_type"], filtered_type="all") ... def stars(pfilter, data): ... 
return data[pfilter.filtered_type, "particle_type"] == 2 >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> ds.add_particle_filter("stars") >>> ad = ds.all_data() >>> print(ad["stars", "particle_mass"]) [ 1.68243760e+38 1.65690882e+38 1.65813321e+38 ..., 2.04238266e+38 2.04523901e+38 2.04770938e+38] g """ def wrapper(function): if name is None: used_name = function.__name__ else: used_name = name return add_particle_filter(used_name, function, requires, filtered_type) return wrapper ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/particle_trajectories.py0000644000175100001770000003504314714401662021603 0ustar00runnerdockerimport numpy as np from yt.config import ytcfg from yt.data_objects.field_data import YTFieldData from yt.funcs import get_pbar, mylog from yt.units.yt_array import array_like_field from yt.utilities.exceptions import YTIllDefinedParticleData from yt.utilities.lib.particle_mesh_operations import CICSample_3 from yt.utilities.on_demand_imports import _h5py as h5py from yt.utilities.parallel_tools.parallel_analysis_interface import parallel_root_only class ParticleTrajectories: r"""A collection of particle trajectories in time over a series of datasets. Parameters ---------- outputs : ~yt.data_objects.time_series.DatasetSeries DatasetSeries object from which to draw the particles. indices : array_like An integer array of particle indices whose trajectories we want to track. If they are not sorted they will be sorted. fields : list of strings, optional A set of fields that is retrieved when the trajectory collection is instantiated. Default: None (will default to the fields 'particle_position_x', 'particle_position_y', 'particle_position_z') suppress_logging : boolean Suppress yt's logging when iterating over the simulation time series. Default: False ptype : str, optional Only use this particle type. Default: None, which uses all particle type. Examples -------- >>> my_fns = glob.glob("orbit_hdf5_chk_00[0-9][0-9]") >>> my_fns.sort() >>> fields = [ ... ("all", "particle_position_x"), ... ("all", "particle_position_y"), ... ("all", "particle_position_z"), ... ("all", "particle_velocity_x"), ... ("all", "particle_velocity_y"), ... ("all", "particle_velocity_z"), ... ] >>> ds = load(my_fns[0]) >>> init_sphere = ds.sphere(ds.domain_center, (0.5, "unitary")) >>> indices = init_sphere["all", "particle_index"].astype("int64") >>> ts = DatasetSeries(my_fns) >>> trajs = ts.particle_trajectories(indices, fields=fields) >>> for t in trajs: ... print( ... t["all", "particle_velocity_x"].max(), ... t["all", "particle_velocity_x"].min(), ... 
) """ def __init__( self, outputs, indices, fields=None, suppress_logging=False, ptype=None ): indices.sort() # Just in case the caller wasn't careful self.field_data = YTFieldData() self.data_series = outputs self.masks = [] self.sorts = [] self.array_indices = [] self.indices = indices self.num_indices = len(indices) self.num_steps = len(outputs) self.times = [] self.suppress_logging = suppress_logging self.ptype = ptype if ptype else "all" if fields is None: fields = [] if self.suppress_logging: old_level = int(ytcfg.get("yt", "log_level")) mylog.setLevel(40) ds_first = self.data_series[0] dd_first = ds_first.all_data() fds = {} for field in ( "particle_index", "particle_position_x", "particle_position_y", "particle_position_z", ): fds[field] = dd_first._determine_fields((self.ptype, field))[0] # Note: we explicitly pass dynamic=False to prevent any change in piter from # breaking the assumption that the same processors load the same datasets my_storage = {} pbar = get_pbar("Constructing trajectory information", len(self.data_series)) for i, (sto, ds) in enumerate( self.data_series.piter(storage=my_storage, dynamic=False) ): dd = ds.all_data() newtags = dd[fds["particle_index"]].d.astype("int64") mask = np.isin(newtags, indices, assume_unique=True) sort = np.argsort(newtags[mask]) array_indices = np.where(np.isin(indices, newtags, assume_unique=True))[0] self.array_indices.append(array_indices) self.masks.append(mask) self.sorts.append(sort) pfields = {} for field in (f"particle_position_{ax}" for ax in "xyz"): pfields[field] = dd[fds[field]].ndarray_view()[mask][sort] sto.result_id = ds.parameter_filename sto.result = (ds.current_time, array_indices, pfields) pbar.update(i + 1) pbar.finish() if self.suppress_logging: mylog.setLevel(old_level) sorted_storage = sorted(my_storage.items()) _fn, (time, *_) = sorted_storage[0] time_units = time.units times = [time.to(time_units) for _fn, (time, *_) in sorted_storage] self.times = self.data_series[0].arr([time.value for time in times], time_units) self.particle_fields = [] output_field = np.empty((self.num_indices, self.num_steps)) output_field.fill(np.nan) for field in (f"particle_position_{ax}" for ax in "xyz"): for i, (_fn, (_time, indices, pfields)) in enumerate(sorted_storage): try: # This will fail if particles ids are # duplicate. This is due to the fact that the rhs # would then have a different shape as the lhs output_field[indices, i] = pfields[field] except ValueError as e: raise YTIllDefinedParticleData( "This dataset contains duplicate particle indices!" ) from e self.field_data[fds[field]] = array_like_field( dd_first, output_field.copy(), fds[field] ) self.particle_fields.append(field) # Instantiate fields the caller requested self._get_data(fields) def has_key(self, key): return key in self.field_data def keys(self): return self.field_data.keys() def __getitem__(self, key): """ Get the field associated with key. """ if key == "particle_time": return self.times if key not in self.field_data: self._get_data([key]) return self.field_data[key] def __setitem__(self, key, val): """ Sets a field to be some other value. 
""" self.field_data[key] = val def __delitem__(self, key): """ Delete the field from the trajectory """ del self.field_data[key] def __iter__(self): """ This iterates over the trajectories for the different particles, returning dicts of fields for each trajectory """ for idx in range(self.num_indices): traj = {} traj["particle_index"] = self.indices[idx] traj["particle_time"] = self.times for field in self.field_data.keys(): traj[field] = self[field][idx, :] yield traj def __len__(self): """ The number of individual trajectories """ return self.num_indices def add_fields(self, fields): """ Add a list of fields to an existing trajectory Parameters ---------- fields : list of strings A list of fields to be added to the current trajectory collection. Examples ________ >>> trajs = ParticleTrajectories(my_fns, indices) >>> trajs.add_fields([("all", "particle_mass"), ("all", "particle_gpot")]) """ self._get_data(fields) def _get_data(self, fields): """ Get a list of fields to include in the trajectory collection. The trajectory collection itself is a dict of 2D numpy arrays, with shape (num_indices, num_steps) """ missing_fields = [field for field in fields if field not in self.field_data] if not missing_fields: return if self.suppress_logging: old_level = int(ytcfg.get("yt", "log_level")) mylog.setLevel(40) ds_first = self.data_series[0] dd_first = ds_first.all_data() fds = {} new_particle_fields = [] for field in missing_fields: fds[field] = dd_first._determine_fields(field)[0] if field not in self.particle_fields: ftype = fds[field][0] if ftype in ds_first.particle_types: self.particle_fields.append(field) new_particle_fields.append(field) grid_fields = [ field for field in missing_fields if field not in self.particle_fields ] step = 0 fields_str = ", ".join(str(f) for f in missing_fields) pbar = get_pbar( f"Generating [{fields_str}] fields in trajectories", self.num_steps, ) # Note: we explicitly pass dynamic=False to prevent any change in piter from # breaking the assumption that the same processors load the same datasets my_storage = {} for i, (sto, ds) in enumerate( self.data_series.piter(storage=my_storage, dynamic=False) ): mask = self.masks[i] sort = self.sorts[i] pfield = {} if new_particle_fields: # there's at least one particle field dd = ds.all_data() for field in new_particle_fields: # This is easy... just get the particle fields pfield[field] = dd[fds[field]].d[mask][sort] if grid_fields: # This is hard... 
must loop over grids for field in grid_fields: pfield[field] = np.zeros(self.num_indices) x = self["particle_position_x"][:, step].d y = self["particle_position_y"][:, step].d z = self["particle_position_z"][:, step].d particle_grids, particle_grid_inds = ds.index._find_points(x, y, z) # This will fail for non-grid index objects for grid in particle_grids: cube = grid.retrieve_ghost_zones(1, grid_fields) for field in grid_fields: CICSample_3( x, y, z, pfield[field], self.num_indices, cube[fds[field]], np.array(grid.LeftEdge, dtype="float64"), np.array(grid.ActiveDimensions, dtype="int32"), grid.dds[0], ) sto.result_id = ds.parameter_filename sto.result = (self.array_indices[i], pfield) pbar.update(step) step += 1 pbar.finish() output_field = np.empty((self.num_indices, self.num_steps)) output_field.fill(np.nan) for field in missing_fields: fd = fds[field] for i, (_fn, (indices, pfield)) in enumerate(sorted(my_storage.items())): output_field[indices, i] = pfield[field] self.field_data[field] = array_like_field(dd_first, output_field.copy(), fd) if self.suppress_logging: mylog.setLevel(old_level) def trajectory_from_index(self, index): """ Retrieve a single trajectory corresponding to a specific particle index Parameters ---------- index : int This defines which particle trajectory from the ParticleTrajectories object will be returned. Returns ------- A dictionary corresponding to the particle's trajectory and the fields along that trajectory Examples -------- >>> import matplotlib.pyplot as plt >>> trajs = ParticleTrajectories(my_fns, indices) >>> traj = trajs.trajectory_from_index(indices[0]) >>> plt.plot( ... traj["all", "particle_time"], ... traj["all", "particle_position_x"], ... "-x", ... ) >>> plt.savefig("orbit") """ mask = np.isin(self.indices, (index,), assume_unique=True) if not np.any(mask): print("The particle index %d is not in the list!" % (index)) raise IndexError fields = sorted(self.field_data.keys()) traj = {} traj[self.ptype, "particle_time"] = self.times traj[self.ptype, "particle_index"] = index for field in fields: traj[field] = self[field][mask, :][0] return traj @parallel_root_only def write_out(self, filename_base): """ Write out particle trajectories to tab-separated ASCII files (one for each trajectory) with the field names in the file header. Each file is named with a basename and the index number. Parameters ---------- filename_base : string The prefix for the outputted ASCII files. 
Examples -------- >>> trajs = ParticleTrajectories(my_fns, indices) >>> trajs.write_out("orbit_trajectory") """ fields = sorted(self.field_data.keys()) num_fields = len(fields) first_str = "# particle_time\t" + "\t".join(fields) + "\n" template_str = "%g\t" * num_fields + "%g\n" for ix in range(self.num_indices): outlines = [first_str] for it in range(self.num_steps): outlines.append( template_str % tuple( [self.times[it]] + [self[field][ix, it] for field in fields] ) ) fid = open(filename_base + "_%d.dat" % self.indices[ix], "w") fid.writelines(outlines) fid.close() del fid @parallel_root_only def write_out_h5(self, filename): """ Write out all the particle trajectories to a single HDF5 file that contains the indices, the times, and the 2D array for each field individually Parameters ---------- filename : string The output filename for the HDF5 file Examples -------- >>> trajs = ParticleTrajectories(my_fns, indices) >>> trajs.write_out_h5("orbit_trajectories") """ fid = h5py.File(filename, mode="w") fid.create_dataset("particle_indices", dtype=np.int64, data=self.indices) fid.close() self.times.write_hdf5(filename, dataset_name="particle_times") fields = sorted(self.field_data.keys()) for field in fields: self[field].write_hdf5(filename, dataset_name=f"{field}") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/particle_unions.py0000644000175100001770000000051314714401662020412 0ustar00runnerdockerfrom yt._maintenance.deprecation import issue_deprecation_warning from .unions import ParticleUnion # noqa: F401 issue_deprecation_warning( "Importing ParticleUnion from yt.data_objects.particle_unions is deprecated. " "Please import this class from yt.data_objects.unions instead", stacklevel=3, since="4.2", ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/profiles.py0000644000175100001770000015616514714401662017056 0ustar00runnerdockerimport numpy as np from more_itertools import collapse from yt.data_objects.field_data import YTFieldData from yt.fields.derived_field import DerivedField from yt.frontends.ytdata.utilities import save_as_dataset from yt.funcs import get_output_filename, is_sequence, iter_fields, mylog from yt.units.unit_object import Unit # type: ignore from yt.units.yt_array import YTQuantity, array_like_field from yt.utilities.exceptions import ( YTIllDefinedBounds, YTIllDefinedProfile, YTProfileDataShape, ) from yt.utilities.lib.misc_utilities import ( new_bin_profile1d, new_bin_profile2d, new_bin_profile3d, ) from yt.utilities.lib.particle_mesh_operations import CICDeposit_2, NGPDeposit_2 from yt.utilities.parallel_tools.parallel_analysis_interface import ( ParallelAnalysisInterface, parallel_objects, ) def _sanitize_min_max_units(amin, amax, finfo, registry): # returns a copy of amin and amax, converted to finfo's output units umin = getattr(amin, "units", None) umax = getattr(amax, "units", None) if umin is None: umin = Unit(finfo.output_units, registry=registry) rmin = YTQuantity(amin, umin) else: rmin = amin.in_units(finfo.output_units) if umax is None: umax = Unit(finfo.output_units, registry=registry) rmax = YTQuantity(amax, umax) else: rmax = amax.in_units(finfo.output_units) return rmin, rmax def preserve_source_parameters(func): def save_state(*args, **kwargs): # Temporarily replace the 'field_parameters' for a # grid with the 'field_parameters' for the data source prof = args[0] source = args[1] if hasattr(source, 
"field_parameters"): old_params = source.field_parameters source.field_parameters = prof._data_source.field_parameters tr = func(*args, **kwargs) source.field_parameters = old_params else: tr = func(*args, **kwargs) return tr return save_state class ProfileFieldAccumulator: def __init__(self, n_fields, size): shape = size + (n_fields,) self.values = np.zeros(shape, dtype="float64") self.mvalues = np.zeros(shape, dtype="float64") self.qvalues = np.zeros(shape, dtype="float64") self.used = np.zeros(size, dtype="bool") self.weight_values = np.zeros(size, dtype="float64") class ProfileND(ParallelAnalysisInterface): """The profile object class""" def __init__(self, data_source, weight_field=None): self.data_source = data_source self.ds = data_source.ds self.field_map = {} self.field_info = {} self.field_data = YTFieldData() if weight_field is not None: self.standard_deviation = YTFieldData() weight_field = self.data_source._determine_fields(weight_field)[0] else: self.standard_deviation = None self.weight_field = weight_field self.field_units = {} ParallelAnalysisInterface.__init__(self, comm=data_source.comm) def add_fields(self, fields): """Add fields to profile Parameters ---------- fields : list of field names A list of fields to create profile histograms for """ fields = self.data_source._determine_fields(fields) for f in fields: self.field_info[f] = self.data_source.ds.field_info[f] temp_storage = ProfileFieldAccumulator(len(fields), self.size) citer = self.data_source.chunks([], "io") for chunk in parallel_objects(citer): self._bin_chunk(chunk, fields, temp_storage) self._finalize_storage(fields, temp_storage) def set_field_unit(self, field, new_unit): """Sets a new unit for the requested field Parameters ---------- field : string or field tuple The name of the field that is to be changed. new_unit : string or Unit object The name of the new unit. """ if field in self.field_units: self.field_units[field] = Unit(new_unit, registry=self.ds.unit_registry) else: fd = self.field_map[field] if fd in self.field_units: self.field_units[fd] = Unit(new_unit, registry=self.ds.unit_registry) else: raise KeyError(f"{field} not in profile!") def _finalize_storage(self, fields, temp_storage): # We use our main comm here # This also will fill _field_data for i, _field in enumerate(fields): # q values are returned as q * weight but we want just q temp_storage.qvalues[..., i][temp_storage.used] /= ( temp_storage.weight_values[temp_storage.used] ) # get the profile data from all procs all_store = {self.comm.rank: temp_storage} all_store = self.comm.par_combine_object(all_store, "join", datatype="dict") all_val = np.zeros_like(temp_storage.values) all_mean = np.zeros_like(temp_storage.mvalues) all_std = np.zeros_like(temp_storage.qvalues) all_weight = np.zeros_like(temp_storage.weight_values) all_used = np.zeros_like(temp_storage.used, dtype="bool") # Combine the weighted mean and standard deviation from each processor. # For two samples with total weight, mean, and standard deviation # given by w, m, and s, their combined mean and standard deviation are: # m12 = (m1 * w1 + m2 * w2) / (w1 + w2) # s12 = (m1 * (s1**2 + (m1 - m12)**2) + # m2 * (s2**2 + (m2 - m12)**2)) / (w1 + w2) # Here, the mvalues are m and the qvalues are s**2. 
for p in sorted(all_store.keys()): all_used += all_store[p].used old_mean = all_mean.copy() old_weight = all_weight.copy() all_weight[all_store[p].used] += all_store[p].weight_values[ all_store[p].used ] for i, _field in enumerate(fields): all_val[..., i][all_store[p].used] += all_store[p].values[..., i][ all_store[p].used ] all_mean[..., i][all_store[p].used] = ( all_mean[..., i] * old_weight + all_store[p].mvalues[..., i] * all_store[p].weight_values )[all_store[p].used] / all_weight[all_store[p].used] all_std[..., i][all_store[p].used] = ( old_weight * (all_std[..., i] + (old_mean[..., i] - all_mean[..., i]) ** 2) + all_store[p].weight_values * ( all_store[p].qvalues[..., i] + (all_store[p].mvalues[..., i] - all_mean[..., i]) ** 2 ) )[all_store[p].used] / all_weight[all_store[p].used] all_std = np.sqrt(all_std) del all_store self.used = all_used blank = ~all_used self.weight = all_weight self.weight[blank] = 0.0 for i, field in enumerate(fields): if self.weight_field is None: self.field_data[field] = array_like_field( self.data_source, all_val[..., i], field ) else: self.field_data[field] = array_like_field( self.data_source, all_mean[..., i], field ) self.standard_deviation[field] = array_like_field( self.data_source, all_std[..., i], field ) self.standard_deviation[field][blank] = 0.0 self.weight = array_like_field( self.data_source, self.weight, self.weight_field ) self.field_data[field][blank] = 0.0 self.field_units[field] = self.field_data[field].units if isinstance(field, tuple): self.field_map[field[1]] = field else: self.field_map[field] = field def _bin_chunk(self, chunk, fields, storage): raise NotImplementedError def _filter(self, bin_fields): # cut_points is set to be everything initially, but # we also want to apply a filtering based on min/max pfilter = np.ones(bin_fields[0].shape, dtype="bool") for (mi, ma), data in zip(self.bounds, bin_fields, strict=True): pfilter &= data > mi pfilter &= data < ma return pfilter, [data[pfilter] for data in bin_fields] def _get_data(self, chunk, fields): # We are using chunks now, which will manage the field parameters and # the like. 
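        # Sketch of what follows: fetch the bin fields for this chunk, check
        # that they share a shape, and apply the min/max bounds filter,
        # returning None if no points survive. Otherwise return an
        # (npoints, nfields) array of field values in their output units,
        # the matching weights (ones when there is no weight field), and the
        # filtered bin fields.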
bin_fields = [chunk[bf] for bf in self.bin_fields] for i in range(1, len(bin_fields)): if bin_fields[0].shape != bin_fields[i].shape: raise YTProfileDataShape( self.bin_fields[0], bin_fields[0].shape, self.bin_fields[i], bin_fields[i].shape, ) # We want to make sure that our fields are within the bounds of the # binning pfilter, bin_fields = self._filter(bin_fields) if not np.any(pfilter): return None arr = np.zeros((bin_fields[0].size, len(fields)), dtype="float64") for i, field in enumerate(fields): if pfilter.shape != chunk[field].shape: raise YTProfileDataShape( self.bin_fields[0], bin_fields[0].shape, field, chunk[field].shape ) units = chunk.ds.field_info[field].output_units arr[:, i] = chunk[field][pfilter].in_units(units) if self.weight_field is not None: if pfilter.shape != chunk[self.weight_field].shape: raise YTProfileDataShape( self.bin_fields[0], bin_fields[0].shape, self.weight_field, chunk[self.weight_field].shape, ) units = chunk.ds.field_info[self.weight_field].output_units weight_data = chunk[self.weight_field].in_units(units) else: weight_data = np.ones(pfilter.shape, dtype="float64") weight_data = weight_data[pfilter] # So that we can pass these into return arr, weight_data, bin_fields def __getitem__(self, field): if field in self.field_data: fname = field else: # deal with string vs tuple field names and attempt to guess which field # we are supposed to be talking about fname = self.field_map.get(field, None) if isinstance(field, tuple): fname = self.field_map.get(field[1], None) if fname != field: raise KeyError( f"Asked for field '{field}' but only have data for " f"fields '{list(self.field_data.keys())}'" ) elif isinstance(field, DerivedField): fname = self.field_map.get(field.name[1], None) if fname is None: raise KeyError(field) if getattr(self, "fractional", False): return self.field_data[fname] else: return self.field_data[fname].in_units(self.field_units[fname]) def items(self): return [(k, self[k]) for k in self.field_data.keys()] def keys(self): return self.field_data.keys() def __iter__(self): return sorted(self.items()) def _get_bins(self, mi, ma, n, take_log): if take_log: ret = np.logspace(np.log10(mi), np.log10(ma), n + 1) # at this point ret[0] and ret[-1] are not exactly equal to # mi and ma due to round-off error. Let's force them to be # mi and ma exactly to avoid incorrectly discarding cells near # the edges. See Issue #1300. ret[0], ret[-1] = mi, ma return ret else: return np.linspace(mi, ma, n + 1) def save_as_dataset(self, filename=None): r"""Export a profile to a reloadable yt dataset. This function will take a profile and output a dataset containing all relevant fields. The resulting dataset can be reloaded as a yt dataset. Parameters ---------- filename : str, optional The name of the file to be written. If None, the name will be a combination of the original dataset plus the type of object, e.g., Profile1D. Returns ------- filename : str The name of the file that has been created. Examples -------- >>> import yt >>> ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046") >>> ad = ds.all_data() >>> profile = yt.create_profile( ... ad, ... [("gas", "density"), ("gas", "temperature")], ... ("gas", "mass"), ... weight_field=None, ... n_bins=(128, 128), ... ) >>> fn = profile.save_as_dataset() >>> prof_ds = yt.load(fn) >>> print(prof_ds.data["gas", "mass"]) (128, 128) >>> print(prof_ds.data["index", "x"].shape) # x bins as 1D array (128,) >>> print(prof_ds.data["gas", "density"]) # x bins as 2D array (128, 128) >>> p = yt.PhasePlot( ... 
prof_ds.data, ... ("gas", "density"), ... ("gas", "temperature"), ... ("gas", "mass"), ... weight_field=None, ... ) >>> p.save() """ keyword = f"{str(self.ds)}_{self.__class__.__name__}" filename = get_output_filename(filename, keyword, ".h5") args = ("field", "log") extra_attrs = { "data_type": "yt_profile", "profile_dimensions": self.size, "weight_field": self.weight_field, "fractional": self.fractional, "accumulation": self.accumulation, } data = {} data.update(self.field_data) data["weight"] = self.weight data["used"] = self.used.astype("float64") std = "standard_deviation" if self.weight_field is not None: std_data = getattr(self, std) data.update({(std, field[1]): std_data[field] for field in self.field_data}) dimensionality = 0 bin_data = [] for ax in "xyz": if hasattr(self, ax): dimensionality += 1 data[ax] = getattr(self, ax) bin_data.append(data[ax]) bin_field_name = f"{ax}_bins" data[bin_field_name] = getattr(self, bin_field_name) extra_attrs[f"{ax}_range"] = self.ds.arr( [data[bin_field_name][0], data[bin_field_name][-1]] ) for arg in args: key = f"{ax}_{arg}" extra_attrs[key] = getattr(self, key) bin_fields = np.meshgrid(*bin_data) for i, ax in enumerate("xyz"[:dimensionality]): data[getattr(self, f"{ax}_field")] = bin_fields[i] extra_attrs["dimensionality"] = dimensionality ftypes = {field: "data" for field in data if field[0] != std} if self.weight_field is not None: ftypes.update({(std, field[1]): std for field in self.field_data}) save_as_dataset( self.ds, filename, data, field_types=ftypes, extra_attrs=extra_attrs ) return filename class ProfileNDFromDataset(ProfileND): """ An ND profile object loaded from a ytdata dataset. """ def __init__(self, ds): ProfileND.__init__(self, ds.data, ds.parameters.get("weight_field", None)) self.fractional = ds.parameters.get("fractional", False) self.accumulation = ds.parameters.get("accumulation", False) exclude_fields = ["used", "weight"] for ax in "xyz"[: ds.dimensionality]: setattr(self, ax, ds.data["data", ax]) ax_bins = f"{ax}_bins" ax_field = f"{ax}_field" ax_log = f"{ax}_log" setattr(self, ax_bins, ds.data["data", ax_bins]) field_name = tuple(ds.parameters.get(ax_field, (None, None))) setattr(self, ax_field, field_name) self.field_info[field_name] = ds.field_info[field_name] setattr(self, ax_log, ds.parameters.get(ax_log, False)) exclude_fields.extend([ax, ax_bins, field_name[1]]) self.weight = ds.data["data", "weight"] self.used = ds.data["data", "used"].d.astype(bool) profile_fields = [ f for f in ds.field_list if f[1] not in exclude_fields and f[0] != "standard_deviation" ] for field in profile_fields: self.field_map[field[1]] = field self.field_data[field] = ds.data[field] self.field_info[field] = ds.field_info[field] self.field_units[field] = ds.data[field].units if ("standard_deviation", field[1]) in ds.field_list: self.standard_deviation[field] = ds.data["standard_deviation", field[1]] class Profile1D(ProfileND): """An object that represents a 1D profile. Parameters ---------- data_source : AMD3DData object The data object to be profiled x_field : string field name The field to profile as a function of x_n : integer The number of bins along the x direction. x_min : float The minimum value of the x profile field. If supplied without units, assumed to be in the output units for x_field. x_max : float The maximum value of the x profile field. If supplied without units, assumed to be in the output units for x_field. 
x_log : boolean Controls whether or not the bins for the x field are evenly spaced in linear (False) or log (True) space. weight_field : string field name The field to weight the profiled fields by. override_bins_x : array Array to set as xbins and ignore other parameters if set """ def __init__( self, data_source, x_field, x_n, x_min, x_max, x_log, weight_field=None, override_bins_x=None, ): super().__init__(data_source, weight_field) self.x_field = data_source._determine_fields(x_field)[0] self.field_info[self.x_field] = self.data_source.ds.field_info[self.x_field] self.x_log = x_log x_min, x_max = _sanitize_min_max_units( x_min, x_max, self.field_info[self.x_field], self.ds.unit_registry ) self.x_bins = array_like_field( data_source, self._get_bins(x_min, x_max, x_n, x_log), self.x_field ) if override_bins_x is not None: self.x_bins = array_like_field(data_source, override_bins_x, self.x_field) self.size = (self.x_bins.size - 1,) self.bin_fields = (self.x_field,) self.x = 0.5 * (self.x_bins[1:] + self.x_bins[:-1]) def _bin_chunk(self, chunk, fields, storage): rv = self._get_data(chunk, fields) if rv is None: return fdata, wdata, (bf_x,) = rv bf_x.convert_to_units(self.field_info[self.x_field].output_units) bin_ind = np.digitize(bf_x, self.x_bins) - 1 new_bin_profile1d( bin_ind, wdata, fdata, storage.weight_values, storage.values, storage.mvalues, storage.qvalues, storage.used, ) # We've binned it! def set_x_unit(self, new_unit): """Sets a new unit for the x field Parameters ---------- new_unit : string or Unit object The name of the new unit. """ self.x_bins.convert_to_units(new_unit) self.x = 0.5 * (self.x_bins[1:] + self.x_bins[:-1]) @property def bounds(self): return ((self.x_bins[0], self.x_bins[-1]),) def plot(self): r""" This returns a :class:`~yt.visualization.profile_plotter.ProfilePlot` with the fields that have been added to this object. """ from yt.visualization.profile_plotter import ProfilePlot return ProfilePlot.from_profiles(self) def _export_prep(self, fields, only_used): if only_used: idxs = self.used else: idxs = slice(None, None, None) if not only_used and not np.all(self.used): masked = True else: masked = False if fields is None: fields = self.field_data.keys() else: fields = self.data_source._determine_fields(fields) return idxs, masked, fields def to_dataframe(self, fields=None, only_used=False, include_std=False): r"""Export a profile object to a pandas DataFrame. This function will take a data object and construct from it and optionally a list of fields a pandas DataFrame object. If pandas is not importable, this will raise ImportError. Parameters ---------- fields : list of strings or tuple field names, default None If this is supplied, it is the list of fields to be exported into the DataFrame. If not supplied, whatever fields exist in the profile, along with the bin field, will be exported. only_used : boolean, default False If True, only the bins which have data will be exported. If False, all the bins will be exported, but the elements for those bins in the data arrays will be filled with NaNs. include_std : boolean, optional If True, include the standard deviation of the profile in the pandas DataFrame. It will appear in the table as the field name with "_stddev" appended, e.g. "velocity_x_stddev". Default: False Returns ------- df : :class:`~pandas.DataFrame` The data contained in the profile. Examples -------- >>> sp = ds.sphere("c", (0.1, "unitary")) >>> p = sp.profile( ... ("index", "radius"), [("gas", "density"), ("gas", "temperature")] ... 
) >>> df1 = p.to_dataframe() >>> df2 = p.to_dataframe(fields=("gas", "density"), only_used=True) """ from yt.utilities.on_demand_imports import _pandas as pd idxs, masked, fields = self._export_prep(fields, only_used) pdata = {self.x_field[-1]: self.x[idxs]} for field in fields: pdata[field[-1]] = self[field][idxs] if include_std: pdata[f"{field[-1]}_stddev"] = self.standard_deviation[field][idxs] df = pd.DataFrame(pdata) if masked: mask = np.zeros(df.shape, dtype="bool") mask[~self.used, 1:] = True df.mask(mask, inplace=True) return df def to_astropy_table(self, fields=None, only_used=False, include_std=False): """ Export the profile data to a :class:`~astropy.table.table.QTable`, which is a Table object which is unit-aware. The QTable can then be exported to an ASCII file, FITS file, etc. See the AstroPy Table docs for more details: http://docs.astropy.org/en/stable/table/ Parameters ---------- fields : list of strings or tuple field names, default None If this is supplied, it is the list of fields to be exported into the DataFrame. If not supplied, whatever fields exist in the profile, along with the bin field, will be exported. only_used : boolean, optional If True, only the bins which are used are copied to the QTable as rows. If False, all bins are copied, but the bins which are not used are masked. Default: False include_std : boolean, optional If True, include the standard deviation of the profile in the AstroPy QTable. It will appear in the table as the field name with "_stddev" appended, e.g. "velocity_x_stddev". Default: False Returns ------- qt : :class:`~astropy.table.QTable` The data contained in the profile. Examples -------- >>> sp = ds.sphere("c", (0.1, "unitary")) >>> p = sp.profile( ... ("index", "radius"), [("gas", "density"), ("gas", "temperature")] ... ) >>> qt1 = p.to_astropy_table() >>> qt2 = p.to_astropy_table(fields=("gas", "density"), only_used=True) """ from astropy.table import QTable idxs, masked, fields = self._export_prep(fields, only_used) qt = QTable(masked=masked) qt[self.x_field[-1]] = self.x[idxs].to_astropy() if masked: qt[self.x_field[-1]].mask = self.used for field in fields: qt[field[-1]] = self[field][idxs].to_astropy() if masked: qt[field[-1]].mask = self.used if include_std: qt[f"{field[-1]}_stddev"] = self.standard_deviation[field][ idxs ].to_astropy() if masked: qt[f"{field[-1]}_stddev"].mask = self.used return qt class Profile1DFromDataset(ProfileNDFromDataset, Profile1D): """ A 1D profile object loaded from a ytdata dataset. """ def __init(self, ds): ProfileNDFromDataset.__init__(self, ds) class Profile2D(ProfileND): """An object that represents a 2D profile. Parameters ---------- data_source : AMD3DData object The data object to be profiled x_field : string field name The field to profile as a function of along the x axis. x_n : integer The number of bins along the x direction. x_min : float The minimum value of the x profile field. If supplied without units, assumed to be in the output units for x_field. x_max : float The maximum value of the x profile field. If supplied without units, assumed to be in the output units for x_field. x_log : boolean Controls whether or not the bins for the x field are evenly spaced in linear (False) or log (True) space. y_field : string field name The field to profile as a function of along the y axis y_n : integer The number of bins along the y direction. y_min : float The minimum value of the y profile field. If supplied without units, assumed to be in the output units for y_field. 
y_max : float The maximum value of the y profile field. If supplied without units, assumed to be in the output units for y_field. y_log : boolean Controls whether or not the bins for the y field are evenly spaced in linear (False) or log (True) space. weight_field : string field name The field to weight the profiled fields by. override_bins_x : array Array to set as xbins and ignore other parameters if set override_bins_y : array Array to set as ybins and ignore other parameters if set """ def __init__( self, data_source, x_field, x_n, x_min, x_max, x_log, y_field, y_n, y_min, y_max, y_log, weight_field=None, override_bins_x=None, override_bins_y=None, ): super().__init__(data_source, weight_field) # X self.x_field = data_source._determine_fields(x_field)[0] self.x_log = x_log self.field_info[self.x_field] = self.data_source.ds.field_info[self.x_field] x_min, x_max = _sanitize_min_max_units( x_min, x_max, self.field_info[self.x_field], self.ds.unit_registry ) self.x_bins = array_like_field( data_source, self._get_bins(x_min, x_max, x_n, x_log), self.x_field ) if override_bins_x is not None: self.x_bins = array_like_field(data_source, override_bins_x, self.x_field) # Y self.y_field = data_source._determine_fields(y_field)[0] self.y_log = y_log self.field_info[self.y_field] = self.data_source.ds.field_info[self.y_field] y_min, y_max = _sanitize_min_max_units( y_min, y_max, self.field_info[self.y_field], self.ds.unit_registry ) self.y_bins = array_like_field( data_source, self._get_bins(y_min, y_max, y_n, y_log), self.y_field ) if override_bins_y is not None: self.y_bins = array_like_field(data_source, override_bins_y, self.y_field) self.size = (self.x_bins.size - 1, self.y_bins.size - 1) self.bin_fields = (self.x_field, self.y_field) self.x = 0.5 * (self.x_bins[1:] + self.x_bins[:-1]) self.y = 0.5 * (self.y_bins[1:] + self.y_bins[:-1]) def _bin_chunk(self, chunk, fields, storage): rv = self._get_data(chunk, fields) if rv is None: return fdata, wdata, (bf_x, bf_y) = rv bf_x.convert_to_units(self.field_info[self.x_field].output_units) bin_ind_x = np.digitize(bf_x, self.x_bins) - 1 bf_y.convert_to_units(self.field_info[self.y_field].output_units) bin_ind_y = np.digitize(bf_y, self.y_bins) - 1 new_bin_profile2d( bin_ind_x, bin_ind_y, wdata, fdata, storage.weight_values, storage.values, storage.mvalues, storage.qvalues, storage.used, ) # We've binned it! def set_x_unit(self, new_unit): """Sets a new unit for the x field Parameters ---------- new_unit : string or Unit object The name of the new unit. """ self.x_bins.convert_to_units(new_unit) self.x = 0.5 * (self.x_bins[1:] + self.x_bins[:-1]) def set_y_unit(self, new_unit): """Sets a new unit for the y field Parameters ---------- new_unit : string or Unit object The name of the new unit. """ self.y_bins.convert_to_units(new_unit) self.y = 0.5 * (self.y_bins[1:] + self.y_bins[:-1]) @property def bounds(self): return ((self.x_bins[0], self.x_bins[-1]), (self.y_bins[0], self.y_bins[-1])) def plot(self): r""" This returns a :class:~yt.visualization.profile_plotter.PhasePlot with the fields that have been added to this object. """ from yt.visualization.profile_plotter import PhasePlot return PhasePlot.from_profile(self) class Profile2DFromDataset(ProfileNDFromDataset, Profile2D): """ A 2D profile object loaded from a ytdata dataset. """ def __init(self, ds): ProfileNDFromDataset.__init__(self, ds) class ParticleProfile(Profile2D): """An object that represents a *deposited* 2D profile. 
This is like a Profile2D, except that it is intended for particle data. Instead of just binning the particles, the added fields will be deposited onto the mesh using either the nearest-grid-point or cloud-in-cell interpolation kernels. Parameters ---------- data_source : AMR3DData object The data object to be profiled x_field : string field name The field to profile as a function of along the x axis. x_n : integer The number of bins along the x direction. x_min : float The minimum value of the x profile field. If supplied without units, assumed to be in the output units for x_field. x_max : float The maximum value of the x profile field. If supplied without units, assumed to be in the output units for x_field. y_field : string field name The field to profile as a function of along the y axis y_n : integer The number of bins along the y direction. y_min : float The minimum value of the y profile field. If supplied without units, assumed to be in the output units for y_field. y_max : float The maximum value of the y profile field. If supplied without units, assumed to be in the output units for y_field. weight_field : string field name The field to use for weighting. Default is None. deposition : string, optional The interpolation kernel to be used for deposition. Valid choices: "ngp" : nearest grid point interpolation "cic" : cloud-in-cell interpolation """ accumulation = False fractional = False def __init__( self, data_source, x_field, x_n, x_min, x_max, x_log, y_field, y_n, y_min, y_max, y_log, weight_field=None, deposition="ngp", ): x_field = data_source._determine_fields(x_field)[0] y_field = data_source._determine_fields(y_field)[0] if deposition not in ["ngp", "cic"]: raise NotImplementedError(deposition) elif (x_log or y_log) and deposition != "ngp": mylog.warning( "cic deposition is only supported for linear axis " "scales, falling back to ngp deposition" ) deposition = "ngp" self.deposition = deposition # log-scaled axes are only supported with ngp deposition (enforced # above); the binning parameters and weight field pass through to Profile2D. super().__init__( data_source, x_field, x_n, x_min, x_max, x_log, y_field, y_n, y_min, y_max, y_log, weight_field=weight_field, ) # Either stick the particle field in the nearest bin, # or spread it out using the 2D CIC deposition function def _bin_chunk(self, chunk, fields, storage): rv = self._get_data(chunk, fields) if rv is None: return fdata, wdata, (bf_x, bf_y) = rv # make sure everything has the same units before deposition. # the units will be scaled to the correct values later. if self.deposition == "ngp": func = NGPDeposit_2 elif self.deposition == "cic": func = CICDeposit_2 for fi, _field in enumerate(fields): if self.weight_field is None: deposit_vals = fdata[:, fi] else: deposit_vals = wdata * fdata[:, fi] field_mask = np.zeros(self.size, dtype="uint8") func( bf_x, bf_y, deposit_vals, fdata[:, fi].size, storage.values[:, :, fi], field_mask, self.x_bins, self.y_bins, ) locs = field_mask > 0 storage.used[locs] = True if self.weight_field is not None: func( bf_x, bf_y, wdata, fdata[:, fi].size, storage.weight_values, field_mask, self.x_bins, self.y_bins, ) else: storage.weight_values[locs] = 1.0 storage.mvalues[locs, fi] = ( storage.values[locs, fi] / storage.weight_values[locs] ) # We've binned it! class Profile3D(ProfileND): """An object that represents a 3D profile. Parameters ---------- data_source : AMR3DData object The data object to be profiled x_field : string field name The field to profile as a function of along the x axis.
x_n : integer The number of bins along the x direction. x_min : float The minimum value of the x profile field. If supplied without units, assumed to be in the output units for x_field. x_max : float The maximum value of the x profile field. If supplied without units, assumed to be in the output units for x_field. x_log : boolean Controls whether or not the bins for the x field are evenly spaced in linear (False) or log (True) space. y_field : string field name The field to profile as a function of along the y axis y_n : integer The number of bins along the y direction. y_min : float The minimum value of the y profile field. If supplied without units, assumed to be in the output units for y_field. y_max : float The maximum value of the y profile field. If supplied without units, assumed to be in the output units for y_field. y_log : boolean Controls whether or not the bins for the y field are evenly spaced in linear (False) or log (True) space. z_field : string field name The field to profile as a function of along the z axis z_n : integer The number of bins along the z direction. z_min : float The minimum value of the z profile field. If supplied without units, assumed to be in the output units for z_field. z_max : float The maximum value of the z profile field. If supplied without units, assumed to be in the output units for z_field. z_log : boolean Controls whether or not the bins for the z field are evenly spaced in linear (False) or log (True) space. weight_field : string field name The field to weight the profiled fields by. override_bins_x : array Array to set as xbins and ignore other parameters if set override_bins_y : array Array to set as ybins and ignore other parameters if set override_bins_z : array Array to set as zbins and ignore other parameters if set """ def __init__( self, data_source, x_field, x_n, x_min, x_max, x_log, y_field, y_n, y_min, y_max, y_log, z_field, z_n, z_min, z_max, z_log, weight_field=None, override_bins_x=None, override_bins_y=None, override_bins_z=None, ): super().__init__(data_source, weight_field) # X self.x_field = data_source._determine_fields(x_field)[0] self.x_log = x_log self.field_info[self.x_field] = self.data_source.ds.field_info[self.x_field] x_min, x_max = _sanitize_min_max_units( x_min, x_max, self.field_info[self.x_field], self.ds.unit_registry ) self.x_bins = array_like_field( data_source, self._get_bins(x_min, x_max, x_n, x_log), self.x_field ) if override_bins_x is not None: self.x_bins = array_like_field(data_source, override_bins_x, self.x_field) # Y self.y_field = data_source._determine_fields(y_field)[0] self.y_log = y_log self.field_info[self.y_field] = self.data_source.ds.field_info[self.y_field] y_min, y_max = _sanitize_min_max_units( y_min, y_max, self.field_info[self.y_field], self.ds.unit_registry ) self.y_bins = array_like_field( data_source, self._get_bins(y_min, y_max, y_n, y_log), self.y_field ) if override_bins_y is not None: self.y_bins = array_like_field(data_source, override_bins_y, self.y_field) # Z self.z_field = data_source._determine_fields(z_field)[0] self.z_log = z_log self.field_info[self.z_field] = self.data_source.ds.field_info[self.z_field] z_min, z_max = _sanitize_min_max_units( z_min, z_max, self.field_info[self.z_field], self.ds.unit_registry ) self.z_bins = array_like_field( data_source, self._get_bins(z_min, z_max, z_n, z_log), self.z_field ) if override_bins_z is not None: self.z_bins = array_like_field(data_source, override_bins_z, self.z_field) self.size = (self.x_bins.size - 1, self.y_bins.size -
1, self.z_bins.size - 1) self.bin_fields = (self.x_field, self.y_field, self.z_field) self.x = 0.5 * (self.x_bins[1:] + self.x_bins[:-1]) self.y = 0.5 * (self.y_bins[1:] + self.y_bins[:-1]) self.z = 0.5 * (self.z_bins[1:] + self.z_bins[:-1]) def _bin_chunk(self, chunk, fields, storage): rv = self._get_data(chunk, fields) if rv is None: return fdata, wdata, (bf_x, bf_y, bf_z) = rv bf_x.convert_to_units(self.field_info[self.x_field].output_units) bin_ind_x = np.digitize(bf_x, self.x_bins) - 1 bf_y.convert_to_units(self.field_info[self.y_field].output_units) bin_ind_y = np.digitize(bf_y, self.y_bins) - 1 bf_z.convert_to_units(self.field_info[self.z_field].output_units) bin_ind_z = np.digitize(bf_z, self.z_bins) - 1 new_bin_profile3d( bin_ind_x, bin_ind_y, bin_ind_z, wdata, fdata, storage.weight_values, storage.values, storage.mvalues, storage.qvalues, storage.used, ) # We've binned it! @property def bounds(self): return ( (self.x_bins[0], self.x_bins[-1]), (self.y_bins[0], self.y_bins[-1]), (self.z_bins[0], self.z_bins[-1]), ) def set_x_unit(self, new_unit): """Sets a new unit for the x field Parameters ---------- new_unit : string or Unit object The name of the new unit. """ self.x_bins.convert_to_units(new_unit) self.x = 0.5 * (self.x_bins[1:] + self.x_bins[:-1]) def set_y_unit(self, new_unit): """Sets a new unit for the y field Parameters ---------- new_unit : string or Unit object The name of the new unit. """ self.y_bins.convert_to_units(new_unit) self.y = 0.5 * (self.y_bins[1:] + self.y_bins[:-1]) def set_z_unit(self, new_unit): """Sets a new unit for the z field Parameters ---------- new_unit : string or Unit object The name of the new unit. """ self.z_bins.convert_to_units(new_unit) self.z = 0.5 * (self.z_bins[1:] + self.z_bins[:-1]) class Profile3DFromDataset(ProfileNDFromDataset, Profile3D): """ A 3D profile object loaded from a ytdata dataset. """ def __init__(self, ds): ProfileNDFromDataset.__init__(self, ds) def sanitize_field_tuple_keys(input_dict, data_source): if input_dict is not None: dummy = {} for item in input_dict: dummy[data_source._determine_fields(item)[0]] = input_dict[item] return dummy else: return input_dict def create_profile( data_source, bin_fields, fields, n_bins=64, extrema=None, logs=None, units=None, weight_field=("gas", "mass"), accumulation=False, fractional=False, deposition="ngp", override_bins=None, ): r""" Create a 1, 2, or 3D profile object. The dimensionality of the profile object is chosen by the number of fields given in the bin_fields argument. Parameters ---------- data_source : YTSelectionContainer Object The data object to be profiled. bin_fields : list of strings List of the binning fields for profiling. fields : list of strings The fields to be profiled. n_bins : int or list of ints The number of bins in each dimension. If None, 64 bins are used for each bin field. Default: 64. extrema : dict of min, max tuples Minimum and maximum values of the bin_fields for the profiles. The keys correspond to the field names. Defaults to the extrema of the bin_fields of the dataset. If a units dict is provided, extrema are understood to be in the units specified in the dictionary. logs : dict of boolean values Whether or not to log the bin_fields for the profiles. The keys correspond to the field names. Defaults to the take_log attribute of the field. units : dict of strings The units of the fields in the profiles, including the bin_fields.
weight_field : str or tuple field identifier The weight field for computing weighted average for the profile values. If None, the profile values are sums of the data in each bin. Defaults to ("gas", "mass"). accumulation : bool or list of bools If True, the profile values for a bin n are the cumulative sum of all the values from bin 0 to n. If the value is negative (e.g. -True, which equals -1), the sum is reversed so that the value for bin n is the cumulative sum from bin N (total bins) to n. If the profile is 2D or 3D, a list of values can be given to control the summation in each dimension independently. Default: False. fractional : bool If True, the profile values are divided by the sum of all the profile data such that the profile represents a probability distribution function. deposition : string Controls the type of deposition used for ParticlePhasePlots. Valid choices are 'ngp' and 'cic'. Default is 'ngp'. This parameter is ignored if the input fields are not of particle type. override_bins : dict of bins to profile plot with If set, ignores n_bins and extrema settings and uses the supplied bins to profile the field. If a units dict is provided, bins are understood to be in the units specified in the dictionary. Examples -------- Create a 1d profile. Access bin field from profile.x and field data from profile[]. >>> ds = load("DD0046/DD0046") >>> ad = ds.all_data() >>> profile = create_profile( ... ad, [("gas", "density")], [("gas", "temperature"), ("gas", "velocity_x")] ... ) >>> print(profile.x) >>> print(profile["gas", "temperature"]) """ bin_fields = data_source._determine_fields(bin_fields) fields = list(iter_fields(fields)) is_pfield = [ data_source.ds._get_field_info(f).sampling_type == "particle" for f in bin_fields + fields ] wf = None if weight_field is not None: wf = data_source.ds._get_field_info(weight_field) is_pfield.append(wf.sampling_type == "particle") wf = wf.name if len(bin_fields) > 1 and isinstance(accumulation, bool): accumulation = [accumulation for _ in range(len(bin_fields))] bin_fields = data_source._determine_fields(bin_fields) fields = data_source._determine_fields(fields) units = sanitize_field_tuple_keys(units, data_source) extrema = sanitize_field_tuple_keys(extrema, data_source) logs = sanitize_field_tuple_keys(logs, data_source) override_bins = sanitize_field_tuple_keys(override_bins, data_source) if any(is_pfield) and not all(is_pfield): if hasattr(data_source.ds, "_sph_ptypes"): is_local = [ data_source.ds.field_info[f].sampling_type == "local" for f in bin_fields + fields ] if wf is not None: is_local.append(wf.sampling_type == "local") is_local_or_pfield = [ pf or lf for (pf, lf) in zip(is_pfield, is_local, strict=True) ] if not all(is_local_or_pfield): raise YTIllDefinedProfile( bin_fields, data_source._determine_fields(fields), wf, is_pfield ) else: raise YTIllDefinedProfile( bin_fields, data_source._determine_fields(fields), wf, is_pfield ) if len(bin_fields) == 1: cls = Profile1D elif len(bin_fields) == 2 and all(is_pfield): if deposition == "cic": if logs is not None: if (bin_fields[0] in logs and logs[bin_fields[0]]) or ( bin_fields[1] in logs and logs[bin_fields[1]] ): raise RuntimeError( "CIC deposition is only implemented for linear-scaled axes" ) else: logs = {bin_fields[0]: False, bin_fields[1]: False} if any(accumulation) or fractional: raise RuntimeError( "The accumulation and fractional keyword arguments must be " "False for CIC deposition" ) cls = ParticleProfile elif len(bin_fields) == 2: cls = Profile2D elif len(bin_fields) == 3: cls = Profile3D else: raise
NotImplementedError if weight_field is not None and cls == ParticleProfile: (weight_field,) = data_source._determine_fields([weight_field]) wf = data_source.ds._get_field_info(weight_field) if not wf.sampling_type == "particle": weight_field = None if not is_sequence(n_bins): n_bins = [n_bins] * len(bin_fields) if not is_sequence(accumulation): accumulation = [accumulation] * len(bin_fields) if logs is None: logs = {} logs_list = [] for bin_field in bin_fields: if bin_field in logs: logs_list.append(logs[bin_field]) else: logs_list.append(data_source.ds.field_info[bin_field].take_log) logs = logs_list # Are the extrema all Nones? Then treat them as though extrema was set as None if extrema is None or not any(collapse(extrema.values())): ex = [ data_source.quantities["Extrema"](f, non_zero=l) for f, l in zip(bin_fields, logs, strict=True) ] # pad extrema by epsilon so cells at bin edges are not excluded for i, (mi, ma) in enumerate(ex): mi = mi - np.spacing(mi) ma = ma + np.spacing(ma) ex[i][0], ex[i][1] = mi, ma else: ex = [] for bin_field in bin_fields: bf_units = data_source.ds.field_info[bin_field].output_units try: field_ex = list(extrema[bin_field[-1]]) except KeyError as e: try: field_ex = list(extrema[bin_field]) except KeyError: raise RuntimeError( f"Could not find field {bin_field[-1]} or {bin_field} in extrema" ) from e if isinstance(field_ex[0], tuple): field_ex = [data_source.ds.quan(*f) for f in field_ex] if any(exi is None for exi in field_ex): try: ds_extrema = data_source.quantities.extrema(bin_field) except AttributeError: # ytdata profile datasets don't have data_source.quantities bf_vals = data_source[bin_field] ds_extrema = data_source.ds.arr([bf_vals.min(), bf_vals.max()]) for i, exi in enumerate(field_ex): if exi is None: field_ex[i] = ds_extrema[i] # pad extrema by epsilon so cells at bin edges are # not excluded field_ex[i] -= (-1) ** i * np.spacing(field_ex[i]) if units is not None and bin_field in units: for i, exi in enumerate(field_ex): if hasattr(exi, "units"): field_ex[i] = exi.to(units[bin_field]) else: field_ex[i] = data_source.ds.quan(exi, units[bin_field]) fe = data_source.ds.arr(field_ex) else: if hasattr(field_ex, "units"): fe = field_ex.to(bf_units) else: fe = data_source.ds.arr(field_ex, bf_units) fe.convert_to_units(bf_units) field_ex = [fe[0].v, fe[1].v] if is_sequence(field_ex[0]): field_ex[0] = data_source.ds.quan(field_ex[0][0], field_ex[0][1]) field_ex[0] = field_ex[0].in_units(bf_units) if is_sequence(field_ex[1]): field_ex[1] = data_source.ds.quan(field_ex[1][0], field_ex[1][1]) field_ex[1] = field_ex[1].in_units(bf_units) ex.append(field_ex) if override_bins is not None: o_bins = [] for bin_field in bin_fields: bf_units = data_source.ds.field_info[bin_field].output_units try: field_obin = override_bins[bin_field[-1]] except KeyError: field_obin = override_bins[bin_field] if field_obin is None: o_bins.append(None) continue if isinstance(field_obin, tuple): field_obin = data_source.ds.arr(*field_obin) if units is not None and bin_field in units: fe = data_source.ds.arr(field_obin, units[bin_field]) else: if hasattr(field_obin, "units"): fe = field_obin.to(bf_units) else: fe = data_source.ds.arr(field_obin, bf_units) fe.convert_to_units(bf_units) field_obin = fe.d o_bins.append(field_obin) args = [data_source] for f, n, (mi, ma), l in zip(bin_fields, n_bins, ex, logs, strict=True): if mi <= 0 and l: raise YTIllDefinedBounds(mi, ma) args += [f, n, mi, ma, l] kwargs = {"weight_field": weight_field} if cls is ParticleProfile: kwargs["deposition"] = 
deposition if override_bins is not None: for o_bin, ax in zip(o_bins, ["x", "y", "z"], strict=False): kwargs[f"override_bins_{ax}"] = o_bin obj = cls(*args, **kwargs) obj.accumulation = accumulation obj.fractional = fractional if fields is not None: obj.add_fields(list(fields)) for field in fields: if fractional: obj.field_data[field] /= obj.field_data[field].sum() for axis, acc in enumerate(accumulation): if not acc: continue temp = obj.field_data[field] temp = np.rollaxis(temp, axis) if weight_field is not None: temp_weight = obj.weight temp_weight = np.rollaxis(temp_weight, axis) if acc < 0: temp = temp[::-1] if weight_field is not None: temp_weight = temp_weight[::-1] if weight_field is None: temp = temp.cumsum(axis=0) else: # avoid 0-division warnings by nan-masking _denom = temp_weight.cumsum(axis=0) _denom[_denom == 0.0] = np.nan temp = (temp * temp_weight).cumsum(axis=0) / _denom if acc < 0: temp = temp[::-1] if weight_field is not None: temp_weight = temp_weight[::-1] temp = np.rollaxis(temp, axis) obj.field_data[field] = temp if weight_field is not None: temp_weight = np.rollaxis(temp_weight, axis) obj.weight = temp_weight if units is not None: for field, unit in units.items(): field = data_source._determine_fields(field)[0] if field == obj.x_field: obj.set_x_unit(unit) elif field == getattr(obj, "y_field", None): obj.set_y_unit(unit) elif field == getattr(obj, "z_field", None): obj.set_z_unit(unit) else: obj.set_field_unit(field, unit) return obj
yt-4.4.0/yt/data_objects/region_expression.py
import weakref from functools import cached_property from yt.funcs import obj_length from yt.units.yt_array import YTQuantity from yt.utilities.exceptions import ( YTDimensionalityError, YTFieldNotFound, YTFieldNotParseable, ) from yt.visualization.line_plot import LineBuffer from .data_containers import _get_ipython_key_completion class RegionExpression: def __init__(self, ds): self.ds = weakref.proxy(ds) @cached_property def all_data(self): return self.ds.all_data() def __getitem__(self, item): # At first, we will only implement this as accepting a slice that is # (optionally) unitful corresponding to a specific set of coordinates # that result in a rectangular prism or a slice. try: return self.all_data[item] except (YTFieldNotParseable, YTFieldNotFound): # any error raised by self.ds._get_field_info # signals a type error (not a field), however we don't want to # catch plain TypeErrors as this may create subtle bugs very hard # to decipher, like broken internal function calls. pass if isinstance(item, slice): if obj_length(item.start) == 3 and obj_length(item.stop) == 3: # This is for a ray that is not orthogonal to an axis. # it's straightforward to do this, so we create a ray # and drop out here. return self._create_ray(item) else: # This is for the case where we give a slice as an index; one # possible use case of this would be where we supply something # like ds.r[::256j] . This would be expanded, implicitly into # ds.r[::256j, ::256j, ::256j]. Other cases would be if we do # ds.r[0.1:0.9] where it will be expanded along all dimensions. item = tuple(item for _ in range(self.ds.dimensionality)) if item is Ellipsis: item = (Ellipsis,) # from this point, item is implicitly assumed to be iterable if Ellipsis in item: # expand "..."
into the appropriate number of ":" item = list(item) idx = item.index(Ellipsis) item.pop(idx) if Ellipsis in item: # this error mimics numpy's raise IndexError("an index can only have a single ellipsis ('...')") while len(item) < self.ds.dimensionality: item.insert(idx, slice(None)) if len(item) != self.ds.dimensionality: # Not the right specification, and we don't want to do anything # implicitly. Note that this happens *after* the implicit expansion # of a single slice. raise YTDimensionalityError(len(item), self.ds.dimensionality) # OK, now we need to look at our slices. How many are a specific # coordinate? nslices = sum(isinstance(v, slice) for v in item) if nslices == 0: return self._create_point(item) elif nslices == 1: return self._create_ortho_ray(item) elif nslices == 2: return self._create_slice(item) else: if all(s.start is s.stop is s.step is None for s in item): return self.all_data return self._create_region(item) def _ipython_key_completions_(self): return _get_ipython_key_completion(self.ds) def _spec_to_value(self, input): if isinstance(input, tuple): v = self.ds.quan(input[0], input[1]).to("code_length") elif isinstance(input, YTQuantity): v = self.ds.quan(input).to("code_length") else: v = self.ds.quan(input, "code_length") return v def _create_slice(self, slice_tuple): # This is somewhat more complex because we want to allow for slicing # in one dimension but also *not* using the entire domain; for instance # this means we allow something like ds.r[0.5, 0.1:0.4, 0.1:0.4]. axis = None new_slice = [] for ax, v in enumerate(slice_tuple): if not isinstance(v, slice): if axis is not None: raise RuntimeError axis = ax coord = self._spec_to_value(v) new_slice.append(slice(None, None, None)) else: new_slice.append(v) # This new slice doesn't need to be a tuple dim = self.ds.dimensionality if dim < 2: raise ValueError( "Can not create a slice from data with dimensionality '%d'" % dim ) if dim == 2: coord = self.ds.domain_center[2] axis = 2 source = self._create_region(new_slice) sl = self.ds.slice(axis, coord, data_source=source) # Now, there's the possibility that what we're also seeing here # includes some steps, which would be for getting back a fixed # resolution buffer. We check for that by checking if we have # exactly two imaginary steps. xax = self.ds.coordinates.x_axis[axis] yax = self.ds.coordinates.y_axis[axis] if ( getattr(new_slice[xax].step, "imag", 0.0) != 0.0 and getattr(new_slice[yax].step, "imag", 0.0) != 0.0 ): # We now need to convert to a fixed res buffer. # We'll do this by getting the x/y axes, and then using that. 
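# Illustrative sketch (not part of this method): assuming a loaded
# dataset ``ds``, indexing ds.r with one scalar coordinate and two
# imaginary-step slices goes through this branch and comes back as a
# fixed resolution buffer, e.g.
#   frb = ds.r[0.5, ::800j, ::800j]
#   image = frb["gas", "density"]  # an 800x800 buffer of the slice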
width = source.right_edge[xax] - source.left_edge[xax] height = source.right_edge[yax] - source.left_edge[yax] # Make a resolution tuple with resolution = (int(new_slice[xax].step.imag), int(new_slice[yax].step.imag)) sl = sl.to_frb(width=width, resolution=resolution, height=height) return sl def _slice_to_edges(self, ax, val): if val.start is None: l = self.ds.domain_left_edge[ax] else: l = self._spec_to_value(val.start) if val.stop is None: r = self.ds.domain_right_edge[ax] else: r = self._spec_to_value(val.stop) if r < l: raise RuntimeError return l, r def _create_region(self, bounds_tuple): left_edge = self.ds.domain_left_edge.copy() right_edge = self.ds.domain_right_edge.copy() dims = [1, 1, 1] for ax, b in enumerate(bounds_tuple): l, r = self._slice_to_edges(ax, b) left_edge[ax] = l right_edge[ax] = r d = getattr(b.step, "imag", None) if d is not None: d = int(d) dims[ax] = d center = (left_edge + right_edge) / 2.0 if None not in dims: return self.ds.arbitrary_grid(left_edge, right_edge, dims) return self.ds.region(center, left_edge, right_edge) def _create_point(self, point_tuple): coord = [self._spec_to_value(p) for p in point_tuple] return self.ds.point(coord) def _create_ray(self, ray_slice): start_point = [self._spec_to_value(v) for v in ray_slice.start] end_point = [self._spec_to_value(v) for v in ray_slice.stop] if getattr(ray_slice.step, "imag", 0.0) != 0.0: return LineBuffer(self.ds, start_point, end_point, int(ray_slice.step.imag)) else: return self.ds.ray(start_point, end_point) def _create_ortho_ray(self, ray_tuple): axis = None new_slice = [] coord = [] npoints = 0 start_point = [] end_point = [] for ax, v in enumerate(ray_tuple): if not isinstance(v, slice): val = self._spec_to_value(v) coord.append(val) new_slice.append(slice(None, None, None)) start_point.append(val) end_point.append(val) else: if axis is not None: raise RuntimeError if getattr(v.step, "imag", 0.0) != 0.0: npoints = int(v.step.imag) xi = self._spec_to_value(v.start) xf = self._spec_to_value(v.stop) dx = (xf - xi) / npoints start_point.append(xi + 0.5 * dx) end_point.append(xf - 0.5 * dx) else: axis = ax new_slice.append(v) if npoints > 0: ray = LineBuffer(self.ds, start_point, end_point, npoints) else: if axis == 1: coord = [coord[1], coord[0]] source = self._create_region(new_slice) ray = self.ds.ortho_ray(axis, coord, data_source=source) return ray
yt-4.4.0/yt/data_objects/selection_objects/__init__.py
from .boolean_operations import ( YTBooleanContainer, YTDataObjectUnion, YTIntersectionContainer3D, ) from .cut_region import YTCutRegion from .disk import YTDisk from .object_collection import YTDataCollection from .point import YTPoint from .ray import YTOrthoRay, YTRay from .region import YTRegion from .slices import YTCuttingPlane, YTSlice from .spheroids import YTEllipsoid, YTMinimalSphere, YTSphere
yt-4.4.0/yt/data_objects/selection_objects/boolean_operations.py
import numpy as np from more_itertools import always_iterable import yt.geometry from
yt.data_objects.selection_objects.data_selection_objects import ( YTSelectionContainer, YTSelectionContainer3D, ) from yt.data_objects.static_output import Dataset from yt.funcs import validate_object, validate_sequence class YTBooleanContainer(YTSelectionContainer3D): r""" This is a boolean operation, accepting AND, OR, XOR, and NOT for combining multiple data objects. This object is not designed to be created directly; it is designed to be created implicitly by using one of the bitwise operations (&, \|, ^, \~) on one or two other data objects. These correspond to the appropriate boolean operations, and the resultant object can be nested. Parameters ---------- op : string Can be AND, OR, XOR, NOT or NEG. dobj1 : yt.data_objects.selection_objects.data_selection_objects.YTSelectionContainer The first selection object dobj2 : yt.data_objects.selection_objects.data_selection_objects.YTSelectionContainer The second selection object Examples -------- >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> sp = ds.sphere("c", 0.1) >>> dd = ds.r[:, :, :] >>> new_obj = sp ^ dd >>> print(new_obj.sum("cell_volume"), dd.sum("cell_volume") - sp.sum("cell_volume")) """ _type_name = "bool" _con_args = ("op", "dobj1", "dobj2") def __init__( self, op, dobj1, dobj2, ds=None, field_parameters=None, data_source=None ): YTSelectionContainer3D.__init__(self, None, ds, field_parameters, data_source) self.op = op.upper() self.dobj1 = dobj1 self.dobj2 = dobj2 name = f"Boolean{self.op}Selector" sel_cls = getattr(yt.geometry.selection_routines, name) self._selector = sel_cls(self) def _get_bbox(self): le1, re1 = self.dobj1._get_bbox() if self.op == "NOT": return le1, re1 else: le2, re2 = self.dobj2._get_bbox() return np.minimum(le1, le2), np.maximum(re1, re2) class YTIntersectionContainer3D(YTSelectionContainer3D): """ This is a more efficient method of selecting the intersection of multiple data selection objects. Creating one of these objects returns the intersection of all of the sub-objects; it is designed to be a faster method than chaining & ("and") operations to create a single, large intersection. Parameters ---------- data_objects : Iterable of YTSelectionContainer The data objects to intersect Examples -------- >>> import yt >>> ds = yt.load("RedshiftOutput0005") >>> sp1 = ds.sphere((0.4, 0.5, 0.6), 0.15) >>> sp2 = ds.sphere((0.38, 0.51, 0.55), 0.1) >>> sp3 = ds.sphere((0.35, 0.5, 0.6), 0.15) >>> new_obj = ds.intersection((sp1, sp2, sp3)) >>> print(new_obj.sum("cell_volume")) """ _type_name = "intersection" _con_args = ("data_objects",) def __init__(self, data_objects, ds=None, field_parameters=None, data_source=None): validate_sequence(data_objects) for obj in data_objects: validate_object(obj, YTSelectionContainer) validate_object(ds, Dataset) validate_object(field_parameters, dict) validate_object(data_source, YTSelectionContainer) YTSelectionContainer3D.__init__(self, None, ds, field_parameters, data_source) self.data_objects = list(always_iterable(data_objects)) class YTDataObjectUnion(YTSelectionContainer3D): """ This is a more efficient method of selecting the union of multiple data selection objects. Creating one of these objects returns the union of all of the sub-objects; it is designed to be a faster method than chaining | (or) operations to create a single, large union.
Parameters ---------- data_objects : Iterable of YTSelectionContainer The data objects to union Examples -------- >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> sp1 = ds.sphere((0.4, 0.5, 0.6), 0.1) >>> sp2 = ds.sphere((0.3, 0.5, 0.15), 0.1) >>> sp3 = ds.sphere((0.5, 0.5, 0.9), 0.1) >>> new_obj = ds.union((sp1, sp2, sp3)) >>> print(new_obj.sum("cell_volume")) """ _type_name = "union" _con_args = ("data_objects",) def __init__(self, data_objects, ds=None, field_parameters=None, data_source=None): validate_sequence(data_objects) for obj in data_objects: validate_object(obj, YTSelectionContainer) validate_object(ds, Dataset) validate_object(field_parameters, dict) validate_object(data_source, YTSelectionContainer) YTSelectionContainer3D.__init__(self, None, ds, field_parameters, data_source) self.data_objects = list(always_iterable(data_objects))
yt-4.4.0/yt/data_objects/selection_objects/cut_region.py
import re import numpy as np from more_itertools import always_iterable from yt.data_objects.selection_objects.data_selection_objects import ( YTSelectionContainer, YTSelectionContainer3D, ) from yt.data_objects.static_output import Dataset from yt.funcs import iter_fields, validate_object, validate_sequence from yt.geometry.selection_routines import points_in_cells from yt.utilities.exceptions import YTIllDefinedCutRegion from yt.utilities.on_demand_imports import _scipy class YTCutRegion(YTSelectionContainer3D): """ This is a data object designed to allow individuals to apply logical operations to fields and filter as a result of those cuts. Parameters ---------- data_source : YTSelectionContainer3D The object to which cuts will be applied. conditionals : list of strings A list of conditionals that will be evaluated. In the namespace available, these conditionals will have access to 'obj' which is a data object of unknown shape, and they must generate a boolean array. For instance, conditionals = ["obj['gas', 'temperature'] < 1e3"] Examples -------- >>> import yt >>> ds = yt.load("RedshiftOutput0005") >>> sp = ds.sphere("max", (1.0, "Mpc")) >>> cr = ds.cut_region(sp, ["obj['gas', 'temperature'] < 1e3"]) """ _type_name = "cut_region" _con_args = ("base_object", "conditionals") _derived_quantity_chunking = "all" def __init__( self, data_source, conditionals, ds=None, field_parameters=None, base_object=None, locals=None, ): if locals is None: locals = {} validate_object(data_source, YTSelectionContainer) validate_sequence(conditionals) for condition in conditionals: validate_object(condition, str) validate_object(ds, Dataset) validate_object(field_parameters, dict) validate_object(base_object, YTSelectionContainer) self.conditionals = list(always_iterable(conditionals)) if isinstance(data_source, YTCutRegion): # If the source is also a cut region, add its conditionals # and set the source to be its source. # Preserve order of conditionals.
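# Illustrative sketch (assumes a data object ``ad``, e.g. ds.all_data()):
# cutting a cut region chains the conditionals onto the original source,
#   cr1 = ad.cut_region(["obj['gas', 'temperature'] > 1e4"])
#   cr2 = cr1.cut_region(["obj['gas', 'density'] < 1e-28"])
# so cr2 applies both conditions, in that order, directly to ``ad``.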
self.conditionals = data_source.conditionals + self.conditionals data_source = data_source.base_object super().__init__( data_source.center, ds, field_parameters, data_source=data_source ) self.filter_fields = self._check_filter_fields() self.base_object = data_source self.locals = locals self._selector = None # Need to interpose for __getitem__, fwidth, fcoords, icoords, iwidth, # ires and get_data def _check_filter_fields(self): fields = [] for cond in self.conditionals: for field in re.findall(r"\[([A-Za-z0-9_,.'\"\(\)]+)\]", cond): fd = field.replace('"', "").replace("'", "") if "," in fd: fd = tuple(fd.strip("()").split(",")) fd = self.ds._get_field_info(fd) if fd.sampling_type == "particle" or fd.is_sph_field: raise RuntimeError( f"cut_region requires a mesh-based field, " f"but {fd.name} is a particle field! Use " f"a particle filter instead. " ) fields.append(fd.name) return fields def chunks(self, fields, chunking_style, **kwargs): # We actually want to chunk the sub-chunk, not ourselves. We have no # chunks to speak of, as we do not do data IO. for chunk in self.index._chunk(self.base_object, chunking_style, **kwargs): with self.base_object._chunked_read(chunk): with self._chunked_read(chunk): self.get_data(fields) yield self def get_data(self, fields=None): fields = list(iter_fields(fields)) self.base_object.get_data(fields) ind = self._cond_ind for field in fields: f = self.base_object[field] if f.shape != ind.shape: parent = getattr(self, "parent", self.base_object) self.field_data[field] = parent[field][self._part_ind(field[0])] else: self.field_data[field] = self.base_object[field][ind] @property def blocks(self): # We have to take a slightly different approach here. Note that all # that .blocks has to yield is a 3D array and a mask. for obj, m in self.base_object.blocks: m = m.copy() with obj._field_parameter_state(self.field_parameters): for cond in self.conditionals: ss = eval(cond) m = np.logical_and(m, ss, m) if not np.any(m): continue yield obj, m @property def _cond_ind(self): ind = None obj = self.base_object locals = self.locals.copy() if "obj" in locals: raise RuntimeError( '"obj" has been defined in "locals"; ' "this is not supported, please rename the variable."
) locals["obj"] = obj with obj._field_parameter_state(self.field_parameters): for cond in self.conditionals: res = eval(cond, locals) if ind is None: ind = res if ind.shape != res.shape: raise YTIllDefinedCutRegion(self.conditionals) np.logical_and(res, ind, ind) return ind def _part_ind_KDTree(self, ptype): """Find the particles in cells using a KDTree approach.""" parent = getattr(self, "parent", self.base_object) units = "code_length" pos = np.stack( [ self["index", "x"].to(units), self["index", "y"].to(units), self["index", "z"].to(units), ], axis=1, ).value dx = np.stack( [ self["index", "dx"].to(units), self["index", "dy"].to(units), self["index", "dz"].to(units), ], axis=1, ).value ppos = np.stack( [ parent[ptype, "particle_position_x"], parent[ptype, "particle_position_y"], parent[ptype, "particle_position_z"], ], axis=1, ).value mask = np.zeros(ppos.shape[0], dtype=bool) levels = self["index", "grid_level"].astype("int32").value if levels.size == 0: return mask levelmin = levels.min() levelmax = levels.max() for lvl in range(levelmax, levelmin - 1, -1): # Filter out cells not in the current level lvl_mask = levels == lvl dx_loc = dx[lvl_mask] pos_loc = pos[lvl_mask] grid_tree = _scipy.spatial.cKDTree(pos_loc, boxsize=1) # Compute closest cell for all remaining particles dist, icell = grid_tree.query( ppos[~mask], distance_upper_bound=dx_loc.max(), p=np.inf ) mask_loc = np.isfinite(dist[:]) # Check that particles within dx of a cell are in it i = icell[mask_loc] dist = np.abs(ppos[~mask][mask_loc, :] - pos_loc[i]) tmp_mask = np.all(dist <= (dx_loc[i] / 2), axis=1) mask_loc[mask_loc] = tmp_mask # Update the particle mask with particles found at this level mask[~mask] |= mask_loc return mask def _part_ind_brute_force(self, ptype): parent = getattr(self, "parent", self.base_object) units = "code_length" mask = points_in_cells( self["index", "x"].to(units), self["index", "y"].to(units), self["index", "z"].to(units), self["index", "dx"].to(units), self["index", "dy"].to(units), self["index", "dz"].to(units), parent[ptype, "particle_position_x"].to(units), parent[ptype, "particle_position_y"].to(units), parent[ptype, "particle_position_z"].to(units), ) return mask def _part_ind(self, ptype): # If scipy is installed, use the fast KD tree # implementation. Else, fall back onto the direct # brute-force algorithm. try: _scipy.spatial.KDTree return self._part_ind_KDTree(ptype) except ImportError: return self._part_ind_brute_force(ptype) @property def icoords(self): return self.base_object.icoords[self._cond_ind, :] @property def fcoords(self): return self.base_object.fcoords[self._cond_ind, :] @property def ires(self): return self.base_object.ires[self._cond_ind] @property def fwidth(self): return self.base_object.fwidth[self._cond_ind, :] def _get_bbox(self): """ Get the bounding box for the cut region. Here we just use the bounding box for the source region. 
""" return self.base_object._get_bbox() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/selection_objects/data_selection_objects.py0000644000175100001770000015271114714401662025411 0ustar00runnerdockerimport abc import itertools import sys import uuid from collections import defaultdict from contextlib import contextmanager import numpy as np from more_itertools import always_iterable from unyt import unyt_array from unyt.exceptions import UnitConversionError, UnitParseError import yt.geometry from yt.data_objects.data_containers import YTDataContainer from yt.data_objects.derived_quantities import DerivedQuantityCollection from yt.data_objects.field_data import YTFieldData from yt.fields.field_exceptions import NeedsGridType from yt.funcs import fix_axis, is_sequence, iter_fields, validate_width_tuple from yt.geometry.api import Geometry from yt.geometry.selection_routines import compose_selector from yt.units import YTArray from yt.utilities.exceptions import ( GenerationInProgress, YTBooleanObjectError, YTBooleanObjectsWrongDataset, YTDataSelectorNotImplemented, YTDimensionalityError, YTFieldUnitError, YTFieldUnitParseError, ) from yt.utilities.lib.marching_cubes import march_cubes_grid, march_cubes_grid_flux from yt.utilities.logger import ytLogger as mylog from yt.utilities.parallel_tools.parallel_analysis_interface import ( ParallelAnalysisInterface, ) if sys.version_info >= (3, 11): from typing import assert_never else: from typing_extensions import assert_never class YTSelectionContainer(YTDataContainer, ParallelAnalysisInterface, abc.ABC): _locked = False _sort_by = None _selector = None _current_chunk = None _data_source = None _dimensionality: int _max_level = None _min_level = None _derived_quantity_chunking = "io" def __init__(self, ds, field_parameters, data_source=None): ParallelAnalysisInterface.__init__(self) super().__init__(ds, field_parameters) self._data_source = data_source if data_source is not None: if data_source.ds != self.ds: raise RuntimeError( "Attempted to construct a DataContainer with a data_source " "from a different Dataset", ds, data_source.ds, ) if data_source._dimensionality < self._dimensionality: raise RuntimeError( "Attempted to construct a DataContainer with a data_source " "of lower dimensionality (%u vs %u)" % (data_source._dimensionality, self._dimensionality) ) self.field_parameters.update(data_source.field_parameters) self.quantities = DerivedQuantityCollection(self) @property def selector(self): if self._selector is not None: return self._selector s_module = getattr(self, "_selector_module", yt.geometry.selection_routines) sclass = getattr(s_module, f"{self._type_name}_selector", None) if sclass is None: raise YTDataSelectorNotImplemented(self._type_name) if self._data_source is not None: self._selector = compose_selector( self, self._data_source.selector, sclass(self) ) else: self._selector = sclass(self) return self._selector def chunks(self, fields, chunking_style, **kwargs): # This is an iterator that will yield the necessary chunks. self.get_data() # Ensure we have built ourselves if fields is None: fields = [] # chunk_ind can be supplied in the keyword arguments. If it's a # scalar, that'll be the only chunk that gets returned; if it's a list, # those are the ones that will be. 
chunk_ind = kwargs.pop("chunk_ind", None) if chunk_ind is not None: chunk_ind = list(always_iterable(chunk_ind)) for ci, chunk in enumerate(self.index._chunk(self, chunking_style, **kwargs)): if chunk_ind is not None and ci not in chunk_ind: continue with self._chunked_read(chunk): self.get_data(fields) # NOTE: we yield before releasing the context yield self def _identify_dependencies(self, fields_to_get, spatial=False): inspected = 0 fields_to_get = fields_to_get[:] for field in itertools.cycle(fields_to_get): if inspected >= len(fields_to_get): break inspected += 1 fi = self.ds._get_field_info(field) fd = self.ds.field_dependencies.get( field, None ) or self.ds.field_dependencies.get(field[1], None) # This is long overdue. Any time we *can't* find a field # dependency -- for instance, if the derived field has been added # after dataset instantiation -- let's just try to # recalculate it. if fd is None: try: fd = fi.get_dependencies(ds=self.ds) self.ds.field_dependencies[field] = fd except Exception: continue requested = self._determine_fields(list(set(fd.requested))) deps = [d for d in requested if d not in fields_to_get] fields_to_get += deps return sorted(fields_to_get) def get_data(self, fields=None): if self._current_chunk is None: self.index._identify_base_chunk(self) if fields is None: return nfields = [] apply_fields = defaultdict(list) for field in self._determine_fields(fields): # We need to create the field on the raw particle types # for particle types (when the field is not defined directly # for the derived particle type) finfo = self.ds.field_info[field] if ( field[0] in self.ds.filtered_particle_types and finfo._inherited_particle_filter ): f = self.ds.known_filters[field[0]] apply_fields[field[0]].append((f.filtered_type, field[1])) else: nfields.append(field) for filter_type in apply_fields: f = self.ds.known_filters[filter_type] with f.apply(self): self.get_data(apply_fields[filter_type]) fields = nfields if len(fields) == 0: return # Now we collect all our fields # Here is where we need to perform a validation step, so that if we # have a field requested that we actually *can't* yet get, we put it # off until the end. This prevents double-reading fields that will # need to be used in spatial fields later on. fields_to_get = [] # This will be pre-populated with spatial fields fields_to_generate = [] for field in self._determine_fields(fields): if field in self.field_data: continue finfo = self.ds._get_field_info(field) try: finfo.check_available(self) except NeedsGridType: fields_to_generate.append(field) continue fields_to_get.append(field) if len(fields_to_get) == 0 and len(fields_to_generate) == 0: return elif self._locked: raise GenerationInProgress(fields) # Track which ones we want in the end ofields = set(list(self.field_data.keys()) + fields_to_get + fields_to_generate) # At this point, we want to figure out *all* our dependencies. fields_to_get = self._identify_dependencies(fields_to_get, self._spatial) # We now split up into readers for the types of fields fluids, particles = [], [] finfos = {} for field_key in fields_to_get: finfo = self.ds._get_field_info(field_key) finfos[field_key] = finfo if finfo.sampling_type == "particle": particles.append(field_key) elif field_key not in fluids: fluids.append(field_key) # The _read method will figure out which fields it needs to get from # disk, and return a dict of those fields along with the fields that # need to be generated.
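# Illustrative shape of the result (values are hypothetical): read_fluids
# might be {("gas", "density"): <ndarray>}, while gen_fluids would list
# derived fields, e.g. [("gas", "temperature")], generated further below.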
read_fluids, gen_fluids = self.index._read_fluid_fields( fluids, self, self._current_chunk ) for f, v in read_fluids.items(): self.field_data[f] = self.ds.arr(v, units=finfos[f].units) self.field_data[f].convert_to_units(finfos[f].output_units) read_particles, gen_particles = self.index._read_particle_fields( particles, self, self._current_chunk ) for f, v in read_particles.items(): self.field_data[f] = self.ds.arr(v, units=finfos[f].units) self.field_data[f].convert_to_units(finfos[f].output_units) fields_to_generate += gen_fluids + gen_particles self._generate_fields(fields_to_generate) for field in list(self.field_data.keys()): if field not in ofields: self.field_data.pop(field) def _generate_fields(self, fields_to_generate): index = 0 def dimensions_compare_equal(a, b, /) -> bool: if a == b: return True try: if (a == 1 and b.is_dimensionless) or (a.is_dimensionless and b == 1): return True except AttributeError: return False return False with self._field_lock(): # At this point, we assume that any fields that are necessary to # *generate* a field are in fact already available to us. Note # that we do not make any assumption about whether or not the # fields have a spatial requirement. This will be checked inside # _generate_field, at which point additional dependencies may # actually be noted. while any(f not in self.field_data for f in fields_to_generate): field = fields_to_generate[index % len(fields_to_generate)] index += 1 if field in self.field_data: continue fi = self.ds._get_field_info(field) try: fd = self._generate_field(field) if hasattr(fd, "units"): fd.units.registry = self.ds.unit_registry if fd is None: raise RuntimeError if fi.units is None: # first time calling a field with units='auto', so we # infer the units from the units of the data we get back # from the field function and use these units for future # field accesses units = getattr(fd, "units", "") if units == "": sunits = "" dimensions = 1 else: sunits = str( units.get_base_equivalent(self.ds.unit_system.name) ) dimensions = units.dimensions if fi.dimensions is None: mylog.warning( "Field %s was added without specifying units or dimensions, " "auto setting units to %r", fi.name, sunits or "dimensionless", ) elif not dimensions_compare_equal(fi.dimensions, dimensions): raise YTDimensionalityError(fi.dimensions, dimensions) fi.units = sunits fi.dimensions = dimensions self.field_data[field] = self.ds.arr(fd, units) if fi.output_units is None: fi.output_units = fi.units try: fd.convert_to_units(fi.units) except AttributeError: # If the field returns an ndarray, coerce to a # dimensionless YTArray and verify that field is # supposed to be unitless fd = self.ds.arr(fd, "") if fi.units != "": raise YTFieldUnitError(fi, fd.units) from None except UnitConversionError as e: raise YTFieldUnitError(fi, fd.units) from e except UnitParseError as e: raise YTFieldUnitParseError(fi) from e self.field_data[field] = fd except GenerationInProgress as gip: for f in gip.fields: if f not in fields_to_generate: fields_to_generate.append(f) def __or__(self, other): if not isinstance(other, YTSelectionContainer): raise YTBooleanObjectError(other) if self.ds is not other.ds: raise YTBooleanObjectsWrongDataset() # Should maybe do something with field parameters here from yt.data_objects.selection_objects.boolean_operations import ( YTBooleanContainer, ) return YTBooleanContainer("OR", self, other, ds=self.ds) def __invert__(self): # ~obj asel = yt.geometry.selection_routines.AlwaysSelector(self.ds) from 
yt.data_objects.selection_objects.boolean_operations import ( YTBooleanContainer, ) return YTBooleanContainer("NOT", self, asel, ds=self.ds) def __xor__(self, other): if not isinstance(other, YTSelectionContainer): raise YTBooleanObjectError(other) if self.ds is not other.ds: raise YTBooleanObjectsWrongDataset() from yt.data_objects.selection_objects.boolean_operations import ( YTBooleanContainer, ) return YTBooleanContainer("XOR", self, other, ds=self.ds) def __and__(self, other): if not isinstance(other, YTSelectionContainer): raise YTBooleanObjectError(other) if self.ds is not other.ds: raise YTBooleanObjectsWrongDataset() from yt.data_objects.selection_objects.boolean_operations import ( YTBooleanContainer, ) return YTBooleanContainer("AND", self, other, ds=self.ds) def __add__(self, other): return self.__or__(other) def __sub__(self, other): if not isinstance(other, YTSelectionContainer): raise YTBooleanObjectError(other) if self.ds is not other.ds: raise YTBooleanObjectsWrongDataset() from yt.data_objects.selection_objects.boolean_operations import ( YTBooleanContainer, ) return YTBooleanContainer("NEG", self, other, ds=self.ds) @contextmanager def _field_lock(self): self._locked = True yield self._locked = False @contextmanager def _ds_hold(self, new_ds): """ This contextmanager is used to take a data object and preserve its attributes but allow the dataset that underlies it to be swapped out. This is typically only used internally, and differences in unit systems may present interesting possibilities. """ old_ds = self.ds old_index = self._index self.ds = new_ds self._index = new_ds.index old_chunk_info = self._chunk_info old_chunk = self._current_chunk old_size = self.size self._chunk_info = None self._current_chunk = None self.size = None self._index._identify_base_chunk(self) with self._chunked_read(None): yield self._index = old_index self.ds = old_ds self._chunk_info = old_chunk_info self._current_chunk = old_chunk self.size = old_size @contextmanager def _chunked_read(self, chunk): # There are several items that need to be swapped out # field_data, size, shape obj_field_data = [] if hasattr(chunk, "objs"): for obj in chunk.objs: obj_field_data.append(obj.field_data) obj.field_data = YTFieldData() old_field_data, self.field_data = self.field_data, YTFieldData() old_chunk, self._current_chunk = self._current_chunk, chunk old_locked, self._locked = self._locked, False yield self.field_data = old_field_data self._current_chunk = old_chunk self._locked = old_locked if hasattr(chunk, "objs"): for obj in chunk.objs: obj.field_data = obj_field_data.pop(0) @contextmanager def _activate_cache(self): cache = self._field_cache or {} old_fields = {} for field in (f for f in cache if f in self.field_data): old_fields[field] = self.field_data[field] self.field_data.update(cache) yield for field in cache: self.field_data.pop(field) if field in old_fields: self.field_data[field] = old_fields.pop(field) self._field_cache = None def _initialize_cache(self, cache): # Wipe out what came before self._field_cache = {} self._field_cache.update(cache) @property def icoords(self): if self._current_chunk is None: self.index._identify_base_chunk(self) return self._current_chunk.icoords @property def fcoords(self): if self._current_chunk is None: self.index._identify_base_chunk(self) return self._current_chunk.fcoords @property def ires(self): if self._current_chunk is None: self.index._identify_base_chunk(self) return self._current_chunk.ires @property def fwidth(self): if self._current_chunk is None: 
self.index._identify_base_chunk(self) return self._current_chunk.fwidth @property def fcoords_vertex(self): if self._current_chunk is None: self.index._identify_base_chunk(self) return self._current_chunk.fcoords_vertex @property def max_level(self): if self._max_level is None: try: return self.ds.max_level except AttributeError: return None return self._max_level @max_level.setter def max_level(self, value): if self._selector is not None: del self._selector self._selector = None self._current_chunk = None self.size = None self.shape = None self.field_data.clear() self._max_level = value @property def min_level(self): if self._min_level is None: return 0 return self._min_level @min_level.setter def min_level(self, value): if self._selector is not None: del self._selector self._selector = None self.field_data.clear() self.size = None self.shape = None self._current_chunk = None self._min_level = value class YTSelectionContainer0D(YTSelectionContainer): _spatial = False _dimensionality = 0 def __init__(self, ds, field_parameters=None, data_source=None): super().__init__(ds, field_parameters, data_source) class YTSelectionContainer1D(YTSelectionContainer): _spatial = False _dimensionality = 1 def __init__(self, ds, field_parameters=None, data_source=None): super().__init__(ds, field_parameters, data_source) self._grids = None self._sortkey = None self._sorted = {} class YTSelectionContainer2D(YTSelectionContainer): _key_fields = ["px", "py", "pdx", "pdy"] _dimensionality = 2 """ Prepares the YTSelectionContainer2D, normal to *axis*. If *axis* is 4, we are not aligned with any axis. """ _spatial = False def __init__(self, axis, ds, field_parameters=None, data_source=None): super().__init__(ds, field_parameters, data_source) # We need the ds, which will exist by now, for fix_axis. self.axis = fix_axis(axis, self.ds) self.set_field_parameter("axis", axis) def _convert_field_name(self, field): return field def _get_pw(self, fields, center, width, origin, plot_type): from yt.visualization.fixed_resolution import FixedResolutionBuffer as frb from yt.visualization.plot_window import PWViewerMPL, get_window_parameters axis = self.axis skip = self._key_fields skip += list(set(frb._exclude_fields).difference(set(self._key_fields))) self.fields = [k for k in self.field_data if k not in skip] if fields is not None: self.fields = list(iter_fields(fields)) + self.fields if len(self.fields) == 0: raise ValueError("No fields found to plot in get_pw") (bounds, center, display_center) = get_window_parameters( axis, center, width, self.ds ) pw = PWViewerMPL( self, bounds, fields=self.fields, origin=origin, frb_generator=frb, plot_type=plot_type, geometry=self.ds.geometry, ) pw._setup_plots() return pw def to_frb(self, width, resolution, center=None, height=None, periodic=False): r"""This function returns a FixedResolutionBuffer generated from this object. A FixedResolutionBuffer is an object that accepts a variable-resolution 2D object and transforms it into an NxM bitmap that can be plotted, examined or processed. This is a convenience function to return an FRB directly from an existing 2D data object. Parameters ---------- width : width specifier This can either be a floating point value, in the native domain units of the simulation, or a tuple of the (value, unit) style. This will be the width of the FRB. height : height specifier This will be the physical height of the FRB, by default it is equal to width.
Note that this will not make any corrections to resolution for the aspect ratio. resolution : int or tuple of ints The number of pixels on a side of the final FRB. If iterable, this will be the width then the height. center : array-like of floats, optional The center of the FRB. If not specified, defaults to the center of the current object. periodic : bool Should the returned Fixed Resolution Buffer be periodic? (default: False). Returns ------- frb : :class:`~yt.visualization.fixed_resolution.FixedResolutionBuffer` A fixed resolution buffer, which can be queried for fields. Examples -------- >>> proj = ds.proj(("gas", "density"), 0) >>> frb = proj.to_frb((100.0, "kpc"), 1024) >>> write_image(np.log10(frb["gas", "density"]), "density_100kpc.png") """ if (self.ds.geometry is Geometry.CYLINDRICAL and self.axis == 1) or ( self.ds.geometry is Geometry.POLAR and self.axis == 2 ): if center is not None and center != (0.0, 0.0): raise NotImplementedError( "Currently we only support images centered at R=0. " + "We plan to generalize this in the near future" ) from yt.visualization.fixed_resolution import ( CylindricalFixedResolutionBuffer, ) validate_width_tuple(width) if is_sequence(resolution): resolution = max(resolution) frb = CylindricalFixedResolutionBuffer(self, width, resolution) return frb if center is None: center = self.center if center is None: center = (self.ds.domain_right_edge + self.ds.domain_left_edge) / 2.0 elif is_sequence(center) and not isinstance(center, YTArray): center = self.ds.arr(center, "code_length") if is_sequence(width): w, u = width if isinstance(w, tuple) and isinstance(u, tuple): height = u w, u = w width = self.ds.quan(w, units=u) elif not isinstance(width, YTArray): width = self.ds.quan(width, "code_length") if height is None: height = width elif is_sequence(height): h, u = height height = self.ds.quan(h, units=u) elif not isinstance(height, YTArray): height = self.ds.quan(height, "code_length") if not is_sequence(resolution): resolution = (resolution, resolution) from yt.visualization.fixed_resolution import FixedResolutionBuffer xax = self.ds.coordinates.x_axis[self.axis] yax = self.ds.coordinates.y_axis[self.axis] bounds = ( center[xax] - width * 0.5, center[xax] + width * 0.5, center[yax] - height * 0.5, center[yax] + height * 0.5, ) frb = FixedResolutionBuffer(self, bounds, resolution, periodic=periodic) return frb class YTSelectionContainer3D(YTSelectionContainer): """ Returns an instance of YTSelectionContainer3D, or prepares one. Usually only used as a base class. Note that *center* is supplied, but only used for fields and quantities that require it. """ _key_fields = ["x", "y", "z", "dx", "dy", "dz"] _spatial = False _num_ghost_zones = 0 _dimensionality = 3 def __init__(self, center, ds, field_parameters=None, data_source=None): super().__init__(ds, field_parameters, data_source) self._set_center(center) self.coords = None self._grids = None def cut_region(self, field_cuts, field_parameters=None, locals=None): """ Return a YTCutRegion, where a cell is identified as being inside the cut region based on the value of one or more fields. Note that in previous versions of yt the name 'grid' was used to represent the data object used to construct the field cut, as of yt 3.0, this has been changed to 'obj'. Parameters ---------- field_cuts : list of strings A list of conditionals that will be evaluated. In the namespace available, these conditionals will have access to 'obj' which is a data object of unknown shape, and they must generate a boolean array.
        """
        if locals is None:
            locals = {}
        cr = self.ds.cut_region(
            self, field_cuts, field_parameters=field_parameters, locals=locals
        )
        return cr

    def _build_operator_cut(self, operation, field, value, units=None):
        """
        Given an operation (>, >=, etc.), a field and a value,
        return the cut_region implementing it.

        This is only meant to be used internally.

        Examples
        --------
        >>> ds._build_operator_cut(">", ("gas", "density"), 1e-24)
        ... # is equivalent to
        ... ds.cut_region(['obj["gas", "density"] > 1e-24'])
        """
        ftype, fname = self._determine_fields(field)[0]
        if units is None:
            field_cuts = f'obj["{ftype}", "{fname}"] {operation} {value}'
        else:
            field_cuts = (
                f'obj["{ftype}", "{fname}"].in_units("{units}") {operation} {value}'
            )
        return self.cut_region(field_cuts)

    def _build_function_cut(self, function, field, units=None, **kwargs):
        """
        Given a function (np.abs, np.all) and a field,
        return the cut_region implementing it.

        This is only meant to be used internally.

        Examples
        --------
        >>> ds._build_function_cut("np.isnan", ("gas", "density"), locals={"np": np})
        ... # is equivalent to
        ... ds.cut_region(['np.isnan(obj["gas", "density"])'], locals={"np": np})
        """
        ftype, fname = self._determine_fields(field)[0]
        if units is None:
            field_cuts = f'{function}(obj["{ftype}", "{fname}"])'
        else:
            field_cuts = f'{function}(obj["{ftype}", "{fname}"].in_units("{units}"))'
        return self.cut_region(field_cuts, **kwargs)

    def exclude_above(self, field, value, units=None):
        """
        This function will return a YTCutRegion where all of the regions
        whose field is above a given value are masked.

        Parameters
        ----------
        field : string
            The field to which the conditional will be applied.
        value : float
            The maximum value that will not be masked in the output
            YTCutRegion.
        units : string or None
            The units of the value threshold.  None will use the default
            units given in the field.

        Returns
        -------
        cut_region : YTCutRegion
            The YTCutRegion with the field above the given value masked.

        Examples
        --------
        To find the total mass of gas with temperature colder than 10^6 K
        in your volume:

        >>> ds = yt.load("RedshiftOutput0005")
        >>> ad = ds.all_data()
        >>> cr = ad.exclude_above(("gas", "temperature"), 1e6)
        >>> print(cr.quantities.total_quantity(("gas", "cell_mass")).in_units("Msun"))
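
        The same cut with the threshold given in explicit units (a sketch;
        any unit convertible to the field's default units should work here):

        >>> cr = ad.exclude_above(("gas", "temperature"), 1e6, units="K")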
        """
        return self._build_operator_cut("<=", field, value, units)

    def include_above(self, field, value, units=None):
        """
        This function will return a YTCutRegion where only the regions
        whose field is above a given value are included.

        Parameters
        ----------
        field : string
            The field to which the conditional will be applied.
        value : float
            The threshold value; only regions with the field above this
            value are included.
        units : string or None
            The units of the value threshold.  None will use the default
            units given in the field.

        Returns
        -------
        cut_region : YTCutRegion
            The YTCutRegion with only the field above the given value
            included.

        Examples
        --------
        To find the total mass of hot gas with temperature warmer than 10^6 K
        in your volume:

        >>> ds = yt.load("RedshiftOutput0005")
        >>> ad = ds.all_data()
        >>> cr = ad.include_above(("gas", "temperature"), 1e6)
        >>> print(cr.quantities.total_quantity(("gas", "cell_mass")).in_units("Msun"))
        """
        return self._build_operator_cut(">", field, value, units)

    def exclude_equal(self, field, value, units=None):
        """
        This function will return a YTCutRegion where all of the regions
        whose field is equal to a given value are masked.

        Parameters
        ----------
        field : string
            The field to which the conditional will be applied.
        value : float
            The value to be masked in the output YTCutRegion.
        units : string or None
            The units of the value threshold.  None will use the default
            units given in the field.

        Returns
        -------
        cut_region : YTCutRegion
            The YTCutRegion with the field equal to the given value masked.

        Examples
        --------
        >>> ds = yt.load("RedshiftOutput0005")
        >>> ad = ds.all_data()
        >>> cr = ad.exclude_equal(("gas", "temperature"), 1e6)
        >>> print(cr.quantities.total_quantity(("gas", "cell_mass")).in_units("Msun"))
        """
        return self._build_operator_cut("!=", field, value, units)

    def include_equal(self, field, value, units=None):
        """
        This function will return a YTCutRegion where only the regions
        whose field is equal to a given value are included.

        Parameters
        ----------
        field : string
            The field to which the conditional will be applied.
        value : float
            The value to be included in the output YTCutRegion.
        units : string or None
            The units of the value threshold.  None will use the default
            units given in the field.

        Returns
        -------
        cut_region : YTCutRegion
            The YTCutRegion with the field equal to the given value included.

        Examples
        --------
        >>> ds = yt.load("RedshiftOutput0005")
        >>> ad = ds.all_data()
        >>> cr = ad.include_equal(("gas", "temperature"), 1e6)
        >>> print(cr.quantities.total_quantity(("gas", "cell_mass")).in_units("Msun"))
        """
        return self._build_operator_cut("==", field, value, units)

    def exclude_inside(self, field, min_value, max_value, units=None):
        """
        This function will return a YTCutRegion where all of the regions
        whose field is inside the interval from min_value to max_value are
        masked.

        Parameters
        ----------
        field : string
            The field to which the conditional will be applied.
        min_value : float
            The minimum value of the interval to be excluded.
        max_value : float
            The maximum value of the interval to be excluded.
        units : string or None
            The units of the value threshold.  None will use the default
            units given in the field.

        Returns
        -------
        cut_region : YTCutRegion
            The YTCutRegion with the field inside the given interval
            excluded.

        Examples
        --------
        >>> ds = yt.load("RedshiftOutput0005")
        >>> ad = ds.all_data()
        >>> cr = ad.exclude_inside(("gas", "temperature"), 1e5, 1e6)
        >>> print(cr.quantities.total_quantity(("gas", "cell_mass")).in_units("Msun"))
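
        The call above builds the following compound conditional internally
        (a sketch of the equivalent explicit cut):

        >>> cr = ad.cut_region(
        ...     [
        ...         "(obj['gas', 'temperature'] <= 1e5) |"
        ...         " (obj['gas', 'temperature'] >= 1e6)"
        ...     ]
        ... )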
        """
        ftype, fname = self._determine_fields(field)[0]
        if units is None:
            field_cuts = (
                f'(obj["{ftype}", "{fname}"] <= {min_value}) | '
                f'(obj["{ftype}", "{fname}"] >= {max_value})'
            )
        else:
            field_cuts = (
                f'(obj["{ftype}", "{fname}"].in_units("{units}") <= {min_value}) | '
                f'(obj["{ftype}", "{fname}"].in_units("{units}") >= {max_value})'
            )
        cr = self.cut_region(field_cuts)
        return cr

    def include_inside(self, field, min_value, max_value, units=None):
        """
        This function will return a YTCutRegion where only the regions
        whose field is inside the interval from min_value to max_value are
        included.

        Parameters
        ----------
        field : string
            The field to which the conditional will be applied.
        min_value : float
            The minimum value of the interval to be included.
        max_value : float
            The maximum value of the interval to be included.
        units : string or None
            The units of the value threshold.  None will use the default
            units given in the field.

        Returns
        -------
        cut_region : YTCutRegion
            The YTCutRegion with the field inside the given interval
            included.

        Examples
        --------
        >>> ds = yt.load("RedshiftOutput0005")
        >>> ad = ds.all_data()
        >>> cr = ad.include_inside(("gas", "temperature"), 1e5, 1e6)
        >>> print(cr.quantities.total_quantity(("gas", "cell_mass")).in_units("Msun"))
        """
        ftype, fname = self._determine_fields(field)[0]
        if units is None:
            field_cuts = (
                f'(obj["{ftype}", "{fname}"] > {min_value}) & '
                f'(obj["{ftype}", "{fname}"] < {max_value})'
            )
        else:
            field_cuts = (
                f'(obj["{ftype}", "{fname}"].in_units("{units}") > {min_value}) & '
                f'(obj["{ftype}", "{fname}"].in_units("{units}") < {max_value})'
            )
        cr = self.cut_region(field_cuts)
        return cr

    def exclude_outside(self, field, min_value, max_value, units=None):
        """
        This function will return a YTCutRegion where all of the regions
        whose field is outside the interval from min_value to max_value are
        masked.

        Parameters
        ----------
        field : string
            The field to which the conditional will be applied.
        min_value : float
            The minimum value of the interval to be retained.
        max_value : float
            The maximum value of the interval to be retained.
        units : string or None
            The units of the value threshold.  None will use the default
            units given in the field.

        Returns
        -------
        cut_region : YTCutRegion
            The YTCutRegion with the field outside the given interval
            excluded.

        Examples
        --------
        >>> ds = yt.load("RedshiftOutput0005")
        >>> ad = ds.all_data()
        >>> cr = ad.exclude_outside(("gas", "temperature"), 1e5, 1e6)
        >>> print(cr.quantities.total_quantity(("gas", "cell_mass")).in_units("Msun"))
        """
        cr = self.exclude_below(field, min_value, units)
        cr = cr.exclude_above(field, max_value, units)
        return cr

    def include_outside(self, field, min_value, max_value, units=None):
        """
        This function will return a YTCutRegion where only the regions
        whose field is outside the interval from min_value to max_value are
        included.

        Parameters
        ----------
        field : string
            The field to which the conditional will be applied.
        min_value : float
            The minimum value of the interval to be excluded.
        max_value : float
            The maximum value of the interval to be excluded.
        units : string or None
            The units of the value threshold.  None will use the default
            units given in the field.

        Returns
        -------
        cut_region : YTCutRegion
            The YTCutRegion with the field outside the given interval
            included.

        Examples
        --------
        >>> ds = yt.load("RedshiftOutput0005")
        >>> ad = ds.all_data()
        >>> cr = ad.include_outside(("gas", "temperature"), 1e5, 1e6)
        >>> print(cr.quantities.total_quantity(("gas", "cell_mass")).in_units("Msun"))
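
        Because include_outside delegates to exclude_inside, the call above
        is equivalent to (a sketch):

        >>> cr = ad.exclude_inside(("gas", "temperature"), 1e5, 1e6)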
        """
        cr = self.exclude_inside(field, min_value, max_value, units)
        return cr

    def exclude_below(self, field, value, units=None):
        """
        This function will return a YTCutRegion where all of the regions
        whose field is below a given value are masked.

        Parameters
        ----------
        field : string
            The field to which the conditional will be applied.
        value : float
            The minimum value that will not be masked in the output
            YTCutRegion.
        units : string or None
            The units of the value threshold.  None will use the default
            units given in the field.

        Returns
        -------
        cut_region : YTCutRegion
            The YTCutRegion with the field below the given value masked.

        Examples
        --------
        >>> ds = yt.load("RedshiftOutput0005")
        >>> ad = ds.all_data()
        >>> cr = ad.exclude_below(("gas", "temperature"), 1e6)
        >>> print(cr.quantities.total_quantity(("gas", "cell_mass")).in_units("Msun"))
        """
        return self._build_operator_cut(">=", field, value, units)

    def exclude_nan(self, field, units=None):
        """
        This function will return a YTCutRegion where all of the regions
        whose field is NaN are masked.

        Parameters
        ----------
        field : string
            The field to which the conditional will be applied.
        units : string or None
            The units in which the field will be expressed before checking
            for NaNs.  None will use the default units given in the field.

        Returns
        -------
        cut_region : YTCutRegion
            The YTCutRegion with the NaN entries of the field masked.

        Examples
        --------
        >>> ds = yt.load("RedshiftOutput0005")
        >>> ad = ds.all_data()
        >>> cr = ad.exclude_nan(("gas", "temperature"))
        >>> print(cr.quantities.total_quantity(("gas", "cell_mass")).in_units("Msun"))
        """
        return self._build_function_cut("~np.isnan", field, units, locals={"np": np})

    def include_below(self, field, value, units=None):
        """
        This function will return a YTCutRegion where only the regions
        whose field is below a given value are included.

        Parameters
        ----------
        field : string
            The field to which the conditional will be applied.
        value : float
            The threshold value; only regions with the field below this
            value are included.
        units : string or None
            The units of the value threshold.  None will use the default
            units given in the field.

        Returns
        -------
        cut_region : YTCutRegion
            The YTCutRegion with only regions with the field below the given
            value included.

        Examples
        --------
        >>> ds = yt.load("RedshiftOutput0005")
        >>> ad = ds.all_data()
        >>> cr = ad.include_below(("gas", "temperature"), 1e5)
        >>> print(cr.quantities.total_quantity(("gas", "cell_mass")).in_units("Msun"))
        """
        return self._build_operator_cut("<", field, value, units)

    def extract_isocontours(
        self, field, value, filename=None, rescale=False, sample_values=None
    ):
        r"""This identifies isocontours on a cell-by-cell basis, with no
        consideration of global connectedness, and returns the vertices of
        the triangles in that isocontour.

        This function simply returns the vertices of all the triangles
        calculated by the `marching cubes
        <https://en.wikipedia.org/wiki/Marching_cubes>`_ algorithm; for more
        complex operations, such as identifying connected sets of cells above
        a given threshold, see the extract_connected_sets function.  This is
        more useful for calculating, for instance, total isocontour area, or
        visualizing in an external program (such as `MeshLab
        <http://www.meshlab.net>`_.)

        Parameters
        ----------
        field : string
            Any field that can be obtained in a data object.  This is the
            field which will be isocontoured.
        value : float
            The value at which the isocontour should be calculated.
        filename : string, optional
            If supplied, this file will be filled with the vertices in .obj
            format.  Suitable for loading into MeshLab.
        rescale : bool, optional
            If True, the vertices will be rescaled within their min/max.
        sample_values : string, optional
            Any field whose value should be extracted at the center of each
            triangle.

        Returns
        -------
        verts : array of floats
            The array of vertices, x,y,z.  Taken in threes, these are the
            triangle vertices.
        samples : array of floats
            If `sample_values` is specified, this will be returned and will
            contain the values of the field specified at the center of each
            triangle.

        Examples
        --------
        This will create a data object, find a nice value in the center, and
        output the vertices to "triangles.obj" after rescaling them.

        >>> dd = ds.all_data()
        >>> rho = dd.quantities["WeightedAverageQuantity"](
        ...
("gas", "density"), weight=("gas", "cell_mass") ... ) >>> verts = dd.extract_isocontours( ... ("gas", "density"), rho, "triangles.obj", True ... ) """ from yt.data_objects.static_output import ParticleDataset from yt.frontends.stream.data_structures import StreamParticlesDataset verts = [] samples = [] if isinstance(self.ds, (ParticleDataset, StreamParticlesDataset)): raise NotImplementedError for block, mask in self.blocks: my_verts = self._extract_isocontours_from_grid( block, mask, field, value, sample_values ) if sample_values is not None: my_verts, svals = my_verts samples.append(svals) verts.append(my_verts) verts = np.concatenate(verts).transpose() verts = self.comm.par_combine_object(verts, op="cat", datatype="array") verts = verts.transpose() if sample_values is not None: samples = np.concatenate(samples) samples = self.comm.par_combine_object(samples, op="cat", datatype="array") if rescale: mi = np.min(verts, axis=0) ma = np.max(verts, axis=0) verts = (verts - mi) / (ma - mi).max() if filename is not None and self.comm.rank == 0: if hasattr(filename, "write"): f = filename else: f = open(filename, "w") for v1 in verts: f.write(f"v {v1[0]:0.16e} {v1[1]:0.16e} {v1[2]:0.16e}\n") for i in range(len(verts) // 3): f.write(f"f {i * 3 + 1} {i * 3 + 2} {i * 3 + 3}\n") if not hasattr(filename, "write"): f.close() if sample_values is not None: return verts, samples return verts def _extract_isocontours_from_grid( self, grid, mask, field, value, sample_values=None ): vc_fields = [field] if sample_values is not None: vc_fields.append(sample_values) vc_data = grid.get_vertex_centered_data(vc_fields, no_ghost=False) try: svals = vc_data[sample_values] except KeyError: svals = None my_verts = march_cubes_grid( value, vc_data[field], mask, grid.LeftEdge, grid.dds, svals ) return my_verts def calculate_isocontour_flux( self, field, value, field_x, field_y, field_z, fluxing_field=None ): r"""This identifies isocontours on a cell-by-cell basis, with no consideration of global connectedness, and calculates the flux over those contours. This function will conduct `marching cubes `_ on all the cells in a given data container (grid-by-grid), and then for each identified triangular segment of an isocontour in a given cell, calculate the gradient (i.e., normal) in the isocontoured field, interpolate the local value of the "fluxing" field, the area of the triangle, and then return: area * local_flux_value * (n dot v) Where area, local_value, and the vector v are interpolated at the barycenter (weighted by the vertex values) of the triangle. Note that this specifically allows for the field fluxing across the surface to be *different* from the field being contoured. If the fluxing_field is not specified, it is assumed to be 1.0 everywhere, and the raw flux with no local-weighting is returned. Additionally, the returned flux is defined as flux *into* the surface, not flux *out of* the surface. Parameters ---------- field : string Any field that can be obtained in a data object. This is the field which will be isocontoured and used as the "local_value" in the flux equation. value : float The value at which the isocontour should be calculated. field_x : string The x-component field field_y : string The y-component field field_z : string The z-component field fluxing_field : string, optional The field whose passage over the surface is of interest. If not specified, assumed to be 1.0 everywhere. Returns ------- flux : float The summed flux. 
Note that it is not currently scaled; this is simply the code-unit area times the fields. Examples -------- This will create a data object, find a nice value in the center, and calculate the metal flux over it. >>> dd = ds.all_data() >>> rho = dd.quantities["WeightedAverageQuantity"]( ... ("gas", "density"), weight=("gas", "cell_mass") ... ) >>> flux = dd.calculate_isocontour_flux( ... ("gas", "density"), ... rho, ... ("gas", "velocity_x"), ... ("gas", "velocity_y"), ... ("gas", "velocity_z"), ... ("gas", "metallicity"), ... ) """ flux = 0.0 for block, mask in self.blocks: flux += self._calculate_flux_in_grid( block, mask, field, value, field_x, field_y, field_z, fluxing_field ) flux = self.comm.mpi_allreduce(flux, op="sum") return flux def _calculate_flux_in_grid( self, grid, mask, field, value, field_x, field_y, field_z, fluxing_field=None ): vc_fields = [field, field_x, field_y, field_z] if fluxing_field is not None: vc_fields.append(fluxing_field) vc_data = grid.get_vertex_centered_data(vc_fields) if fluxing_field is None: ff = np.ones_like(vc_data[field], dtype="float64") else: ff = vc_data[fluxing_field] return march_cubes_grid_flux( value, vc_data[field], vc_data[field_x], vc_data[field_y], vc_data[field_z], ff, mask, grid.LeftEdge, grid.dds, ) def extract_connected_sets( self, field, num_levels, min_val, max_val, log_space=True, cumulative=True ): """ This function will create a set of contour objects, defined by having connected cell structures, which can then be studied and used to 'paint' their source grids, thus enabling them to be plotted. Note that this function *can* return a connected set object that has no member values. """ if log_space: cons = np.logspace(np.log10(min_val), np.log10(max_val), num_levels + 1) else: cons = np.linspace(min_val, max_val, num_levels + 1) contours = {} for level in range(num_levels): contours[level] = {} if cumulative: mv = max_val else: mv = cons[level + 1] from yt.data_objects.level_sets.api import identify_contours from yt.data_objects.level_sets.clump_handling import add_contour_field nj, cids = identify_contours(self, field, cons[level], mv) unique_contours = set() for sl_list in cids.values(): for _sl, ff in sl_list: unique_contours.update(np.unique(ff)) contour_key = uuid.uuid4().hex # In case we're a cut region already... base_object = getattr(self, "base_object", self) add_contour_field(base_object.ds, contour_key) for cid in sorted(unique_contours): if cid == -1: continue contours[level][cid] = base_object.cut_region( [f"obj['contours_{contour_key}'] == {cid}"], {f"contour_slices_{contour_key}": cids}, ) return cons, contours def _get_bbox(self): """ Return the bounding box for this data container. This generic version will return the bounds of the entire domain. """ return self.ds.domain_left_edge, self.ds.domain_right_edge def get_bbox(self) -> tuple[unyt_array, unyt_array]: """ Return the bounding box for this data container. """ geometry: Geometry = self.ds.geometry if geometry is Geometry.CARTESIAN: le, re = self._get_bbox() le.convert_to_units("code_length") re.convert_to_units("code_length") return le, re elif ( geometry is Geometry.CYLINDRICAL or geometry is Geometry.POLAR or geometry is Geometry.SPHERICAL or geometry is Geometry.GEOGRAPHIC or geometry is Geometry.INTERNAL_GEOGRAPHIC or geometry is Geometry.SPECTRAL_CUBE ): raise NotImplementedError( f"get_bbox is currently not implemented for {geometry=}!" ) else: assert_never(geometry) def volume(self): """ Return the volume of the data container. 
This is found by adding up the volume of the cells with centers in the container, rather than using the geometric shape of the container, so this may vary very slightly from what might be expected from the geometric volume. """ return self.quantities.total_quantity(("index", "cell_volume")) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/selection_objects/disk.py0000644000175100001770000000610014714401662021642 0ustar00runnerdockerimport numpy as np from yt.data_objects.selection_objects.data_selection_objects import ( YTSelectionContainer, YTSelectionContainer3D, ) from yt.data_objects.static_output import Dataset from yt.funcs import ( fix_length, validate_3d_array, validate_center, validate_float, validate_object, validate_sequence, ) class YTDisk(YTSelectionContainer3D): """ By providing a *center*, a *normal*, a *radius* and a *height* we can define a cylinder of any proportion. Only cells whose centers are within the cylinder will be selected. Parameters ---------- center : array_like coordinate to which the normal, radius, and height all reference normal : array_like the normal vector defining the direction of lengthwise part of the cylinder radius : float the radius of the cylinder height : float the distance from the midplane of the cylinder to the top and bottom planes fields : array of fields, optional any fields to be pre-loaded in the cylinder object ds: ~yt.data_objects.static_output.Dataset, optional An optional dataset to use rather than self.ds field_parameters : dictionary A dictionary of field parameters than can be accessed by derived fields. data_source: optional Draw the selection from the provided data source rather than all data associated with the data_set Examples -------- >>> import yt >>> ds = yt.load("RedshiftOutput0005") >>> c = [0.5, 0.5, 0.5] >>> disk = ds.disk(c, [1, 0, 0], (1, "kpc"), (10, "kpc")) """ _type_name = "disk" _con_args = ("center", "_norm_vec", "radius", "height") def __init__( self, center, normal, radius, height, fields=None, ds=None, field_parameters=None, data_source=None, ): validate_center(center) validate_3d_array(normal) validate_float(radius) validate_float(height) validate_sequence(fields) validate_object(ds, Dataset) validate_object(field_parameters, dict) validate_object(data_source, YTSelectionContainer) YTSelectionContainer3D.__init__(self, center, ds, field_parameters, data_source) self._norm_vec = np.array(normal) / np.sqrt(np.dot(normal, normal)) self.set_field_parameter("normal", self._norm_vec) self.set_field_parameter("center", self.center) self.height = fix_length(height, self.ds) self.radius = fix_length(radius, self.ds) self._d = -1.0 * np.dot(self._norm_vec, self.center) def _get_bbox(self): """ Return the minimum bounding box for the disk. 
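
        Following the reference linked in the code below: for cap centers
        pa = center + n * height and pb = center - n * height, with
        a = pa - pb, the half-extent of the bounding box along axis i is
        radius * sqrt(1 - a_i**2 / |a|**2), applied at both caps.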
""" # http://www.iquilezles.org/www/articles/diskbbox/diskbbox.htm pa = self.center + self._norm_vec * self.height pb = self.center - self._norm_vec * self.height a = pa - pb db = self.radius * np.sqrt(1.0 - a.d * a.d / np.dot(a.d, a.d)) return np.minimum(pa - db, pb - db), np.maximum(pa + db, pb + db) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/selection_objects/object_collection.py0000644000175100001770000000207014714401662024373 0ustar00runnerdockerimport numpy as np from yt.data_objects.selection_objects.data_selection_objects import ( YTSelectionContainer, YTSelectionContainer3D, ) from yt.data_objects.static_output import Dataset from yt.funcs import validate_center, validate_object, validate_sequence class YTDataCollection(YTSelectionContainer3D): """ By selecting an arbitrary *object_list*, we can act on those grids. Child cells are not returned. """ _type_name = "data_collection" _con_args = ("_obj_list",) def __init__( self, obj_list, ds=None, field_parameters=None, data_source=None, center=None ): validate_sequence(obj_list) validate_object(ds, Dataset) validate_object(field_parameters, dict) validate_object(data_source, YTSelectionContainer) if center is not None: validate_center(center) YTSelectionContainer3D.__init__(self, center, ds, field_parameters, data_source) self._obj_ids = np.array([o.id - o._id_offset for o in obj_list], dtype="int64") self._obj_list = obj_list ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/selection_objects/point.py0000644000175100001770000000327414714401662022052 0ustar00runnerdockerfrom yt.data_objects.selection_objects.data_selection_objects import ( YTSelectionContainer, YTSelectionContainer0D, ) from yt.data_objects.static_output import Dataset from yt.funcs import validate_3d_array, validate_object from yt.units import YTArray class YTPoint(YTSelectionContainer0D): """ A 0-dimensional object defined by a single point Parameters ---------- p: array_like A points defined within the domain. If the domain is periodic its position will be corrected to lie inside the range [DLE,DRE) to ensure one and only one cell may match that point ds: ~yt.data_objects.static_output.Dataset, optional An optional dataset to use rather than self.ds field_parameters : dictionary A dictionary of field parameters than can be accessed by derived fields. 
data_source: optional Draw the selection from the provided data source rather than all data associated with the data_set Examples -------- >>> import yt >>> ds = yt.load("RedshiftOutput0005") >>> c = [0.5, 0.5, 0.5] >>> point = ds.point(c) """ _type_name = "point" _con_args = ("p",) def __init__(self, p, ds=None, field_parameters=None, data_source=None): validate_3d_array(p) validate_object(ds, Dataset) validate_object(field_parameters, dict) validate_object(data_source, YTSelectionContainer) super().__init__(ds, field_parameters, data_source) if isinstance(p, YTArray): # we pass p through ds.arr to ensure code units are attached self.p = self.ds.arr(p) else: self.p = self.ds.arr(p, "code_length") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/selection_objects/ray.py0000644000175100001770000002130114714401662021503 0ustar00runnerdockerimport numpy as np from yt.data_objects.selection_objects.data_selection_objects import ( YTSelectionContainer, YTSelectionContainer1D, ) from yt.data_objects.static_output import Dataset from yt.frontends.sph.data_structures import SPHDataset from yt.funcs import ( fix_axis, validate_3d_array, validate_axis, validate_float, validate_object, validate_sequence, ) from yt.units import YTArray, YTQuantity from yt.units._numpy_wrapper_functions import udot, unorm from yt.utilities.lib.pixelization_routines import SPHKernelInterpolationTable from yt.utilities.logger import ytLogger as mylog class YTOrthoRay(YTSelectionContainer1D): """ This is an orthogonal ray cast through the entire domain, at a specific coordinate. This object is typically accessed through the `ortho_ray` object that hangs off of index objects. The resulting arrays have their dimensionality reduced to one, and an ordered list of points at an (x,y) tuple along `axis` are available. Parameters ---------- axis : int or char The axis along which to slice. Can be 0, 1, or 2 for x, y, z. coords : tuple of floats The (plane_x, plane_y) coordinates at which to cast the ray. Note that this is in the plane coordinates: so if you are casting along x, this will be (y, z). If you are casting along y, this will be (z, x). If you are casting along z, this will be (x, y). ds: ~yt.data_objects.static_output.Dataset, optional An optional dataset to use rather than self.ds field_parameters : dictionary A dictionary of field parameters than can be accessed by derived fields. data_source: optional Draw the selection from the provided data source rather than all data associated with the data_set Examples -------- >>> import yt >>> ds = yt.load("RedshiftOutput0005") >>> oray = ds.ortho_ray(0, (0.2, 0.74)) >>> print(oray["gas", "density"]) Note: The low-level data representation for rays are not guaranteed to be spatially ordered. In particular, with AMR datasets, higher resolution data is tagged on to the end of the ray. If you want this data represented in a spatially ordered manner, manually sort it by the "t" field, which is the value of the parametric variable that goes from 0 at the start of the ray to 1 at the end: >>> my_ray = ds.ortho_ray(...) 
>>> ray_sort = np.argsort(my_ray["t"]) >>> density = my_ray["gas", "density"][ray_sort] """ _key_fields = ["x", "y", "z", "dx", "dy", "dz"] _type_name = "ortho_ray" _con_args = ("axis", "coords") def __init__(self, axis, coords, ds=None, field_parameters=None, data_source=None): validate_axis(ds, axis) validate_sequence(coords) for c in coords: validate_float(c) validate_object(ds, Dataset) validate_object(field_parameters, dict) validate_object(data_source, YTSelectionContainer) super().__init__(ds, field_parameters, data_source) self.axis = fix_axis(axis, self.ds) xax = self.ds.coordinates.x_axis[self.axis] yax = self.ds.coordinates.y_axis[self.axis] self.px_ax = xax self.py_ax = yax # Even though we may not be using x,y,z we use them here. self.px_dx = f"d{'xyz'[self.px_ax]}" self.py_dx = f"d{'xyz'[self.py_ax]}" # Convert coordinates to code length. if isinstance(coords[0], YTQuantity): self.px = self.ds.quan(coords[0]).to("code_length") else: self.px = self.ds.quan(coords[0], "code_length") if isinstance(coords[1], YTQuantity): self.py = self.ds.quan(coords[1]).to("code_length") else: self.py = self.ds.quan(coords[1], "code_length") self.sort_by = "xyz"[self.axis] @property def coords(self): return (self.px, self.py) class YTRay(YTSelectionContainer1D): """ This is an arbitrarily-aligned ray cast through the entire domain, at a specific coordinate. This object is typically accessed through the `ray` object that hangs off of index objects. The resulting arrays have their dimensionality reduced to one, and an ordered list of points at an (x,y) tuple along `axis` are available, as is the `t` field, which corresponds to a unitless measurement along the ray from start to end. Parameters ---------- start_point : array-like set of 3 floats The place where the ray starts. end_point : array-like set of 3 floats The place where the ray ends. ds: ~yt.data_objects.static_output.Dataset, optional An optional dataset to use rather than self.ds field_parameters : dictionary A dictionary of field parameters than can be accessed by derived fields. data_source: optional Draw the selection from the provided data source rather than all data associated with the data_set Examples -------- >>> import yt >>> ds = yt.load("RedshiftOutput0005") >>> ray = ds.ray((0.2, 0.74, 0.11), (0.4, 0.91, 0.31)) >>> print(ray["gas", "density"], ray["t"], ray["dts"]) Note: The low-level data representation for rays are not guaranteed to be spatially ordered. In particular, with AMR datasets, higher resolution data is tagged on to the end of the ray. If you want this data represented in a spatially ordered manner, manually sort it by the "t" field, which is the value of the parametric variable that goes from 0 at the start of the ray to 1 at the end: >>> my_ray = ds.ray(...) 
>>> ray_sort = np.argsort(my_ray["t"]) >>> density = my_ray["gas", "density"][ray_sort] """ _type_name = "ray" _con_args = ("start_point", "end_point") _container_fields = ("t", "dts") def __init__( self, start_point, end_point, ds=None, field_parameters=None, data_source=None ): validate_3d_array(start_point) validate_3d_array(end_point) validate_object(ds, Dataset) validate_object(field_parameters, dict) validate_object(data_source, YTSelectionContainer) super().__init__(ds, field_parameters, data_source) if isinstance(start_point, YTArray): self.start_point = self.ds.arr(start_point).to("code_length") else: self.start_point = self.ds.arr(start_point, "code_length", dtype="float64") if isinstance(end_point, YTArray): self.end_point = self.ds.arr(end_point).to("code_length") else: self.end_point = self.ds.arr(end_point, "code_length", dtype="float64") if (self.start_point < self.ds.domain_left_edge).any() or ( self.end_point > self.ds.domain_right_edge ).any(): mylog.warning( "Ray start or end is outside the domain. " "Returned data will only be for the ray section inside the domain." ) self.vec = self.end_point - self.start_point self._set_center(self.start_point) self.set_field_parameter("center", self.start_point) self._dts, self._ts = None, None def _generate_container_field(self, field): # What should we do with `ParticleDataset`? if isinstance(self.ds, SPHDataset): return self._generate_container_field_sph(field) else: return self._generate_container_field_grid(field) def _generate_container_field_grid(self, field): if self._current_chunk is None: self.index._identify_base_chunk(self) if field == "dts": return self._current_chunk.dtcoords elif field == "t": return self._current_chunk.tcoords else: raise KeyError(field) def _generate_container_field_sph(self, field): if field not in ["dts", "t"]: raise KeyError(field) length = unorm(self.vec) pos = self[self.ds._sph_ptypes[0], "particle_position"] r = pos - self.start_point l = udot(r, self.vec / length) if field == "t": return l / length hsml = self[self.ds._sph_ptypes[0], "smoothing_length"] mass = self[self.ds._sph_ptypes[0], "particle_mass"] dens = self[self.ds._sph_ptypes[0], "density"] # impact parameter from particle to ray b = np.sqrt(np.sum(r**2, axis=1) - l**2) # Use an interpolation table to evaluate the integrated 2D # kernel from the dimensionless impact parameter b/hsml. # The table tabulates integrals for values of (b/hsml)**2 itab = SPHKernelInterpolationTable(self.ds.kernel_name) dl = itab.interpolate_array((b / hsml) ** 2) * mass / dens / hsml**2 return dl / length ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/selection_objects/region.py0000644000175100001770000000452314714401662022202 0ustar00runnerdockerfrom yt.data_objects.selection_objects.data_selection_objects import ( YTSelectionContainer, YTSelectionContainer3D, ) from yt.data_objects.static_output import Dataset from yt.funcs import ( validate_3d_array, validate_center, validate_object, validate_sequence, ) from yt.units import YTArray class YTRegion(YTSelectionContainer3D): """A 3D region of data with an arbitrary center. Takes an array of three *left_edge* coordinates, three *right_edge* coordinates, and a *center* that can be anywhere in the domain. If the selected region extends past the edges of the domain, no data will be found there, though the object's `left_edge` or `right_edge` are not modified. 
Parameters ---------- center : array_like The center of the region left_edge : array_like The left edge of the region right_edge : array_like The right edge of the region """ _type_name = "region" _con_args = ("center", "left_edge", "right_edge") def __init__( self, center, left_edge, right_edge, fields=None, ds=None, field_parameters=None, data_source=None, ): if center is not None: validate_center(center) validate_3d_array(left_edge) validate_3d_array(right_edge) validate_sequence(fields) validate_object(ds, Dataset) validate_object(field_parameters, dict) validate_object(data_source, YTSelectionContainer) YTSelectionContainer3D.__init__(self, center, ds, field_parameters, data_source) if not isinstance(left_edge, YTArray): self.left_edge = self.ds.arr(left_edge, "code_length", dtype="float64") else: # need to assign this dataset's unit registry to the YTArray self.left_edge = self.ds.arr(left_edge.copy(), dtype="float64") if not isinstance(right_edge, YTArray): self.right_edge = self.ds.arr(right_edge, "code_length", dtype="float64") else: # need to assign this dataset's unit registry to the YTArray self.right_edge = self.ds.arr(right_edge.copy(), dtype="float64") def _get_bbox(self): """ Return the minimum bounding box for the region. """ return self.left_edge.copy(), self.right_edge.copy() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/selection_objects/slices.py0000644000175100001770000003364714714401662022212 0ustar00runnerdockerimport numpy as np from yt.data_objects.selection_objects.data_selection_objects import ( YTSelectionContainer, YTSelectionContainer2D, ) from yt.data_objects.static_output import Dataset from yt.funcs import ( fix_length, is_sequence, iter_fields, validate_3d_array, validate_axis, validate_center, validate_float, validate_object, validate_width_tuple, ) from yt.utilities.exceptions import YTNotInsideNotebook from yt.utilities.minimal_representation import MinimalSliceData from yt.utilities.orientation import Orientation class YTSlice(YTSelectionContainer2D): """ This is a data object corresponding to a slice through the simulation domain. This object is typically accessed through the `slice` object that hangs off of index objects. Slice is an orthogonal slice through the data, taking all the points at the finest resolution available and then indexing them. It is more appropriately thought of as a slice 'operator' than an object, however, as its field and coordinate can both change. Parameters ---------- axis : int or char The axis along which to slice. Can be 0, 1, or 2 for x, y, z. coord : float The coordinate along the axis at which to slice. This is in "domain" coordinates. center : array_like, optional The 'center' supplied to fields that use it. Note that this does not have to have `coord` as one value. optional. ds: ~yt.data_objects.static_output.Dataset, optional An optional dataset to use rather than self.ds field_parameters : dictionary A dictionary of field parameters than can be accessed by derived fields. 
data_source: optional Draw the selection from the provided data source rather than all data associated with the data_set Examples -------- >>> import yt >>> ds = yt.load("RedshiftOutput0005") >>> slice = ds.slice(0, 0.25) >>> print(slice["gas", "density"]) """ _top_node = "/Slices" _type_name = "slice" _con_args = ("axis", "coord") _container_fields = ("px", "py", "pz", "pdx", "pdy", "pdz") def __init__( self, axis, coord, center=None, ds=None, field_parameters=None, data_source=None ): validate_axis(ds, axis) validate_float(coord) # center is an optional parameter if center is not None: validate_center(center) validate_object(ds, Dataset) validate_object(field_parameters, dict) validate_object(data_source, YTSelectionContainer) YTSelectionContainer2D.__init__(self, axis, ds, field_parameters, data_source) self._set_center(center) self.coord = fix_length(coord, ds) def _generate_container_field(self, field): xax = self.ds.coordinates.x_axis[self.axis] yax = self.ds.coordinates.y_axis[self.axis] if self._current_chunk is None: self.index._identify_base_chunk(self) if field == "px": return self._current_chunk.fcoords[:, xax] elif field == "py": return self._current_chunk.fcoords[:, yax] elif field == "pz": return self._current_chunk.fcoords[:, self.axis] elif field == "pdx": return self._current_chunk.fwidth[:, xax] * 0.5 elif field == "pdy": return self._current_chunk.fwidth[:, yax] * 0.5 elif field == "pdz": return self._current_chunk.fwidth[:, self.axis] * 0.5 else: raise KeyError(field) @property def _mrep(self): return MinimalSliceData(self) def to_pw(self, fields=None, center="center", width=None, origin="center-window"): r"""Create a :class:`~yt.visualization.plot_window.PWViewerMPL` from this object. This is a bare-bones mechanism of creating a plot window from this object, which can then be moved around, zoomed, and on and on. All behavior of the plot window is relegated to that routine. """ pw = self._get_pw(fields, center, width, origin, "Slice") return pw def plot(self, fields=None): if hasattr(self._data_source, "left_edge") and hasattr( self._data_source, "right_edge" ): left_edge = self._data_source.left_edge right_edge = self._data_source.right_edge center = (left_edge + right_edge) / 2.0 width = right_edge - left_edge xax = self.ds.coordinates.x_axis[self.axis] yax = self.ds.coordinates.y_axis[self.axis] lx, rx = left_edge[xax], right_edge[xax] ly, ry = left_edge[yax], right_edge[yax] width = (rx - lx), (ry - ly) else: width = self.ds.domain_width center = self.ds.domain_center pw = self._get_pw(fields, center, width, "native", "Slice") try: pw.show() except YTNotInsideNotebook: pass return pw class YTCuttingPlane(YTSelectionContainer2D): """ This is a data object corresponding to an oblique slice through the simulation domain. This object is typically accessed through the `cutting` object that hangs off of index objects. A cutting plane is an oblique plane through the data, defined by a normal vector and a coordinate. It attempts to guess an 'north' vector, which can be overridden, and then it pixelizes the appropriate data onto the plane without interpolation. Parameters ---------- normal : array_like The vector that defines the desired plane. For instance, the angular momentum of a sphere. center : array_like The center of the cutting plane, where the normal vector is anchored. north_vector: array_like, optional An optional vector to describe the north-facing direction in the resulting plane. 
ds: ~yt.data_objects.static_output.Dataset, optional An optional dataset to use rather than self.ds field_parameters : dictionary A dictionary of field parameters than can be accessed by derived fields. data_source: optional Draw the selection from the provided data source rather than all data associated with the dataset Notes ----- This data object in particular can be somewhat expensive to create. It's also important to note that unlike the other 2D data objects, this object provides px, py, pz, as some cells may have a height from the plane. Examples -------- >>> import yt >>> ds = yt.load("RedshiftOutput0005") >>> cp = ds.cutting([0.1, 0.2, -0.9], [0.5, 0.42, 0.6]) >>> print(cp["gas", "density"]) """ _plane = None _top_node = "/CuttingPlanes" _key_fields = YTSelectionContainer2D._key_fields + ["pz", "pdz"] _type_name = "cutting" _con_args = ("normal", "center") _tds_attrs = ("_inv_mat",) _tds_fields = ("x", "y", "z", "dx") _container_fields = ("px", "py", "pz", "pdx", "pdy", "pdz") def __init__( self, normal, center, north_vector=None, ds=None, field_parameters=None, data_source=None, ): validate_3d_array(normal) validate_center(center) if north_vector is not None: validate_3d_array(north_vector) validate_object(ds, Dataset) validate_object(field_parameters, dict) validate_object(data_source, YTSelectionContainer) YTSelectionContainer2D.__init__(self, None, ds, field_parameters, data_source) self._set_center(center) self.set_field_parameter("center", self.center) # Let's set up our plane equation # ax + by + cz + d = 0 self.orienter = Orientation(normal, north_vector=north_vector) self._norm_vec = self.orienter.normal_vector self._d = -1.0 * np.dot(self._norm_vec, self.center) self._x_vec = self.orienter.unit_vectors[0] self._y_vec = self.orienter.unit_vectors[1] # First we try all three, see which has the best result: self._rot_mat = np.array([self._x_vec, self._y_vec, self._norm_vec]) self._inv_mat = np.linalg.pinv(self._rot_mat) self.set_field_parameter("cp_x_vec", self._x_vec) self.set_field_parameter("cp_y_vec", self._y_vec) self.set_field_parameter("cp_z_vec", self._norm_vec) @property def normal(self): return self._norm_vec def _generate_container_field(self, field): if self._current_chunk is None: self.index._identify_base_chunk(self) if field == "px": x = self._current_chunk.fcoords[:, 0] - self.center[0] y = self._current_chunk.fcoords[:, 1] - self.center[1] z = self._current_chunk.fcoords[:, 2] - self.center[2] tr = np.zeros(x.size, dtype="float64") tr = self.ds.arr(tr, "code_length") tr += x * self._x_vec[0] tr += y * self._x_vec[1] tr += z * self._x_vec[2] return tr elif field == "py": x = self._current_chunk.fcoords[:, 0] - self.center[0] y = self._current_chunk.fcoords[:, 1] - self.center[1] z = self._current_chunk.fcoords[:, 2] - self.center[2] tr = np.zeros(x.size, dtype="float64") tr = self.ds.arr(tr, "code_length") tr += x * self._y_vec[0] tr += y * self._y_vec[1] tr += z * self._y_vec[2] return tr elif field == "pz": x = self._current_chunk.fcoords[:, 0] - self.center[0] y = self._current_chunk.fcoords[:, 1] - self.center[1] z = self._current_chunk.fcoords[:, 2] - self.center[2] tr = np.zeros(x.size, dtype="float64") tr = self.ds.arr(tr, "code_length") tr += x * self._norm_vec[0] tr += y * self._norm_vec[1] tr += z * self._norm_vec[2] return tr elif field == "pdx": return self._current_chunk.fwidth[:, 0] * 0.5 elif field == "pdy": return self._current_chunk.fwidth[:, 1] * 0.5 elif field == "pdz": return self._current_chunk.fwidth[:, 2] * 0.5 else: raise 
KeyError(field)

    def to_pw(self, fields=None, center="center", width=None, axes_unit=None):
        r"""Create a :class:`~yt.visualization.plot_window.PWViewerMPL` from
        this object.

        This is a bare-bones mechanism of creating a plot window from this
        object, which can then be moved around, zoomed, and on and on.  All
        behavior of the plot window is relegated to that routine.
        """
        normal = self.normal
        center = self.center
        self.fields = list(iter_fields(fields)) + [
            k for k in self.field_data.keys() if k not in self._key_fields
        ]
        from yt.visualization.fixed_resolution import FixedResolutionBuffer
        from yt.visualization.plot_window import (
            PWViewerMPL,
            get_oblique_window_parameters,
        )

        (bounds, center_rot) = get_oblique_window_parameters(
            normal, center, width, self.ds
        )
        pw = PWViewerMPL(
            self,
            bounds,
            fields=self.fields,
            origin="center-window",
            periodic=False,
            oblique=True,
            frb_generator=FixedResolutionBuffer,
            plot_type="OffAxisSlice",
        )
        if axes_unit is not None:
            pw.set_axes_unit(axes_unit)
        pw._setup_plots()
        return pw

    def to_frb(self, width, resolution, height=None, periodic=False):
        r"""This function returns a FixedResolutionBuffer generated from this
        object.

        A FixedResolutionBuffer is an object that accepts a
        variable-resolution 2D object and transforms it into an NxM bitmap
        that can be plotted, examined or processed.  This is a convenience
        function to return an FRB directly from an existing 2D data object.
        Unlike the corresponding to_frb function for other
        YTSelectionContainer2D objects, this does not accept a 'center'
        parameter as it is assumed to be centered at the center of the
        cutting plane.

        Parameters
        ----------
        width : width specifier
            This can either be a floating point value, in the native domain
            units of the simulation, or a tuple of the (value, unit) style.
            This will be the width of the FRB.
        resolution : int or tuple of ints
            The number of pixels on a side of the final FRB.
        height : height specifier, optional
            This will be the height of the FRB, by default it is equal to
            width.
        periodic : boolean
            Whether the pixelization will span the domain boundaries.

        Returns
        -------
        frb : :class:`~yt.visualization.fixed_resolution.FixedResolutionBuffer`
            A fixed resolution buffer, which can be queried for fields.
Examples -------- >>> v, c = ds.find_max(("gas", "density")) >>> sp = ds.sphere(c, (100.0, "au")) >>> L = sp.quantities.angular_momentum_vector() >>> cutting = ds.cutting(L, c) >>> frb = cutting.to_frb((1.0, "pc"), 1024) >>> write_image(np.log10(frb["gas", "density"]), "density_1pc.png") """ if is_sequence(width): validate_width_tuple(width) width = self.ds.quan(width[0], width[1]) if height is None: height = width elif is_sequence(height): validate_width_tuple(height) height = self.ds.quan(height[0], height[1]) if not is_sequence(resolution): resolution = (resolution, resolution) from yt.visualization.fixed_resolution import FixedResolutionBuffer bounds = (-width / 2.0, width / 2.0, -height / 2.0, height / 2.0) frb = FixedResolutionBuffer(self, bounds, resolution, periodic=periodic) return frb ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/selection_objects/spheroids.py0000644000175100001770000001717414714401662022725 0ustar00runnerdockerimport numpy as np from yt.data_objects.selection_objects.data_selection_objects import ( YTSelectionContainer, YTSelectionContainer3D, ) from yt.data_objects.static_output import Dataset from yt.funcs import ( fix_length, validate_3d_array, validate_center, validate_float, validate_object, validate_sequence, ) from yt.units import YTArray from yt.utilities.exceptions import YTEllipsoidOrdering, YTException, YTSphereTooSmall from yt.utilities.logger import ytLogger as mylog from yt.utilities.math_utils import get_rotation_matrix from yt.utilities.on_demand_imports import _miniball class YTSphere(YTSelectionContainer3D): """ A sphere of points defined by a *center* and a *radius*. Parameters ---------- center : array_like The center of the sphere. radius : float, width specifier, or YTQuantity The radius of the sphere. If passed a float, that will be interpreted in code units. Also accepts a (radius, unit) tuple or YTQuantity instance with units attached. Examples -------- >>> import yt >>> ds = yt.load("RedshiftOutput0005") >>> c = [0.5, 0.5, 0.5] >>> sphere = ds.sphere(c, (1.0, "kpc")) """ _type_name = "sphere" _con_args = ("center", "radius") def __init__( self, center, radius, ds=None, field_parameters=None, data_source=None ): validate_center(center) validate_float(radius) validate_object(ds, Dataset) validate_object(field_parameters, dict) validate_object(data_source, YTSelectionContainer) super().__init__(center, ds, field_parameters, data_source) # Unpack the radius, if necessary radius = fix_length(radius, self.ds) if radius < self.index.get_smallest_dx(): raise YTSphereTooSmall( ds, radius.in_units("code_length"), self.index.get_smallest_dx().in_units("code_length"), ) self.set_field_parameter("radius", radius) self.set_field_parameter("center", self.center) self.radius = radius def _get_bbox(self): """ Return the minimum bounding box for the sphere. """ return -self.radius + self.center, self.radius + self.center class YTMinimalSphere(YTSelectionContainer3D): """ Build the smallest sphere that encompasses a set of points. Parameters ---------- points : YTArray The points that the sphere will contain. 
    Examples
    --------
    >>> import yt
    >>> ds = yt.load("output_00080/info_00080.txt")
    >>> points = ds.r["particle_position"]
    >>> sphere = ds.minimal_sphere(points)
    """

    _type_name = "sphere"
    _override_selector_name = "minimal_sphere"
    _con_args = ("center", "radius")

    def __init__(self, points, ds=None, field_parameters=None, data_source=None):
        validate_object(ds, Dataset)
        validate_object(field_parameters, dict)
        validate_object(data_source, YTSelectionContainer)
        validate_object(points, YTArray)

        points = fix_length(points, ds)
        if len(points) < 2:
            raise YTException(
                f"Not enough points. Expected at least 2, got {len(points)}"
            )
        mylog.debug("Building minimal sphere around points.")
        mb = _miniball.Miniball(points)
        if not mb.is_valid():
            raise YTException("Could not build valid sphere around points.")

        center = ds.arr(mb.center(), points.units)
        radius = ds.quan(np.sqrt(mb.squared_radius()), points.units)
        super().__init__(center, ds, field_parameters, data_source)
        self.set_field_parameter("radius", radius)
        self.set_field_parameter("center", self.center)
        self.radius = radius


class YTEllipsoid(YTSelectionContainer3D):
    """
    By providing a *center*, *A*, *B*, *C*, *e0*, and *tilt*, we can define
    an ellipsoid of any proportion.  Only cells whose centers are within the
    ellipsoid will be selected.

    Parameters
    ----------
    center : array_like
        The center of the ellipsoid.
    A : float
        The magnitude of the largest axis (semi-major) of the ellipsoid.
    B : float
        The magnitude of the medium axis (semi-medium) of the ellipsoid.
    C : float
        The magnitude of the smallest axis (semi-minor) of the ellipsoid.
    e0 : array_like (automatically normalized)
        The direction of the largest semi-major axis of the ellipsoid.
    tilt : float
        After the rotation about the z-axis to align e0 to x in the x-y
        plane, and then rotating about the y-axis to align e0 completely to
        the x-axis, tilt is the angle in radians remaining to rotate about
        the x-axis to align both e1 to the y-axis and e2 to the z-axis.
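
    Notes
    -----
    The axis magnitudes must satisfy A >= B >= C; the constructor raises
    YTEllipsoidOrdering otherwise, and YTSphereTooSmall if C is smaller than
    the smallest cell width (see the constructor below).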
Examples -------- >>> import yt >>> ds = yt.load("RedshiftOutput0005") >>> c = [0.5, 0.5, 0.5] >>> ell = ds.ellipsoid(c, 0.1, 0.1, 0.1, np.array([0.1, 0.1, 0.1]), 0.2) """ _type_name = "ellipsoid" _con_args = ("center", "_A", "_B", "_C", "_e0", "_tilt") def __init__( self, center, A, B, C, e0, tilt, fields=None, ds=None, field_parameters=None, data_source=None, ): validate_center(center) validate_float(A) validate_float(B) validate_float(C) validate_3d_array(e0) validate_float(tilt) validate_sequence(fields) validate_object(ds, Dataset) validate_object(field_parameters, dict) validate_object(data_source, YTSelectionContainer) YTSelectionContainer3D.__init__(self, center, ds, field_parameters, data_source) # make sure the magnitudes of semi-major axes are in order if A < B or B < C: raise YTEllipsoidOrdering(ds, A, B, C) # make sure the smallest side is not smaller than dx self._A = self.ds.quan(A, "code_length") self._B = self.ds.quan(B, "code_length") self._C = self.ds.quan(C, "code_length") if self._C < self.index.get_smallest_dx(): raise YTSphereTooSmall(self.ds, self._C, self.index.get_smallest_dx()) self._e0 = e0 = e0 / (e0**2.0).sum() ** 0.5 self._tilt = tilt # find the t1 angle needed to rotate about z axis to align e0 to x t1 = np.arctan(e0[1] / e0[0]) # rotate e0 by -t1 RZ = get_rotation_matrix(t1, (0, 0, 1)).transpose() r1 = (e0 * RZ).sum(axis=1) # find the t2 angle needed to rotate about y axis to align e0 to x t2 = np.arctan(-r1[2] / r1[0]) """ calculate the original e1 given the tilt about the x axis when e0 was aligned to x after t1, t2 rotations about z, y """ RX = get_rotation_matrix(-tilt, (1, 0, 0)).transpose() RY = get_rotation_matrix(-t2, (0, 1, 0)).transpose() RZ = get_rotation_matrix(-t1, (0, 0, 1)).transpose() e1 = ((0, 1, 0) * RX).sum(axis=1) e1 = (e1 * RY).sum(axis=1) e1 = (e1 * RZ).sum(axis=1) e2 = np.cross(e0, e1) self._e1 = e1 self._e2 = e2 self.set_field_parameter("A", A) self.set_field_parameter("B", B) self.set_field_parameter("C", C) self.set_field_parameter("e0", e0) self.set_field_parameter("e1", e1) self.set_field_parameter("e2", e2) def _get_bbox(self): """ Get the bounding box for the ellipsoid. NOTE that in this case it is not the *minimum* bounding box. 
""" radius = self.ds.arr(np.max([self._A, self._B, self._C]), "code_length") return -radius + self.center, radius + self.center ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/static_output.py0000644000175100001770000023324514714401662020135 0ustar00runnerdockerimport abc import functools import hashlib import itertools import os import pickle import sys import time import warnings import weakref from collections import defaultdict from collections.abc import MutableMapping from functools import cached_property from importlib.util import find_spec from stat import ST_CTIME from typing import TYPE_CHECKING, Any, Literal, Optional import numpy as np import unyt as un from more_itertools import unzip from unyt import Unit, UnitSystem, unyt_quantity from unyt.exceptions import UnitConversionError, UnitParseError from yt._maintenance.deprecation import issue_deprecation_warning from yt._maintenance.ipython_compat import IPYWIDGETS_ENABLED from yt._typing import ( AnyFieldKey, AxisOrder, FieldKey, FieldType, ImplicitFieldKey, ParticleType, ) from yt.config import ytcfg from yt.data_objects.particle_filters import ParticleFilter, filter_registry from yt.data_objects.region_expression import RegionExpression from yt.data_objects.unions import ParticleUnion from yt.fields.derived_field import DerivedField, ValidateSpatial from yt.fields.field_type_container import FieldTypeContainer from yt.fields.fluid_fields import setup_gradient_fields from yt.funcs import iter_fields, mylog, set_intersection, setdefaultattr from yt.geometry.api import Geometry from yt.geometry.coordinates.api import ( CartesianCoordinateHandler, CoordinateHandler, CylindricalCoordinateHandler, GeographicCoordinateHandler, InternalGeographicCoordinateHandler, PolarCoordinateHandler, SpectralCubeCoordinateHandler, SphericalCoordinateHandler, ) from yt.geometry.geometry_handler import Index from yt.units import UnitContainer, _wrap_display_ytarray, dimensions from yt.units.dimensions import current_mks from yt.units.unit_object import define_unit # type: ignore from yt.units.unit_registry import UnitRegistry # type: ignore from yt.units.unit_systems import ( # type: ignore create_code_unit_system, unit_system_registry, ) from yt.units.yt_array import YTArray, YTQuantity from yt.utilities.configure import YTConfig, configuration_callbacks from yt.utilities.cosmology import Cosmology from yt.utilities.exceptions import ( YTFieldNotFound, YTFieldNotParseable, YTIllDefinedParticleFilter, YTObjectNotImplemented, ) from yt.utilities.lib.fnv_hash import fnv_hash from yt.utilities.minimal_representation import MinimalDataset from yt.utilities.object_registries import data_object_registry, output_type_registry from yt.utilities.parallel_tools.parallel_analysis_interface import parallel_root_only from yt.utilities.parameter_file_storage import NoParameterShelf, ParameterFileStore if TYPE_CHECKING: from sympy import Symbol if sys.version_info >= (3, 11): from typing import assert_never else: from typing_extensions import assert_never # We want to support the movie format in the future. # When such a thing comes to pass, I'll move all the stuff that is constant up # to here, and then have it instantiate EnzoDatasets as appropriate. 
_cached_datasets: MutableMapping[int | str, "Dataset"] = weakref.WeakValueDictionary() # we set this global to None as a place holder # its actual instantiation is delayed until after yt.__init__ # is completed because we need yt.config.ytcfg to be instantiated first _ds_store: ParameterFileStore | None = None def _setup_ds_store(ytcfg: YTConfig) -> None: global _ds_store _ds_store = ParameterFileStore() configuration_callbacks.append(_setup_ds_store) def _unsupported_object(ds, obj_name): def _raise_unsupp(*args, **kwargs): raise YTObjectNotImplemented(ds, obj_name) return _raise_unsupp class MutableAttribute: """A descriptor for mutable data""" def __init__(self, display_array=False): self.data = weakref.WeakKeyDictionary() # We can assume that ipywidgets will not be *added* to the system # during the course of execution, and if it is, we will not wrap the # array. self.display_array = display_array and IPYWIDGETS_ENABLED def __get__(self, instance, owner): return self.data.get(instance, None) def __set__(self, instance, value): if self.display_array: try: value._ipython_display_ = functools.partial( _wrap_display_ytarray, value ) except AttributeError: # If they have slots, we can't assign a new item. So, let's catch # the error! pass if isinstance(value, np.ndarray): value.flags.writeable = False self.data[instance] = value def requires_index(attr_name): @property def ireq(self): self.index # By now it should have been set attr = self.__dict__[attr_name] return attr @ireq.setter def ireq(self, value): self.__dict__[attr_name] = value return ireq class Dataset(abc.ABC): _load_requirements: list[str] = [] default_fluid_type = "gas" default_field = ("gas", "density") fluid_types: tuple[FieldType, ...] = ("gas", "deposit", "index") particle_types: tuple[ParticleType, ...] = ("io",) # By default we have an 'all' particle_types_raw: tuple[ParticleType, ...] | None = ("io",) geometry: Geometry = Geometry.CARTESIAN coordinates = None storage_filename = None particle_unions: dict[ParticleType, ParticleUnion] | None = None known_filters: dict[ParticleType, ParticleFilter] | None = None _index_class: type[Index] field_units: dict[AnyFieldKey, Unit] | None = None derived_field_list = requires_index("derived_field_list") fields = requires_index("fields") conversion_factors: dict[str, float] | None = None # _instantiated represents an instantiation time (since Epoch) # the default is a place holder sentinel, falsy value _instantiated: float = 0 _particle_type_counts = None _proj_type = "quad_proj" _ionization_label_format = "roman_numeral" _determined_fields: dict[str, list[FieldKey]] | None = None fields_detected = False # these are set in self._parse_parameter_file() domain_left_edge = MutableAttribute(True) domain_right_edge = MutableAttribute(True) domain_dimensions = MutableAttribute(True) # the point in index space "domain_left_edge" doesn't necessarily # map to (0, 0, 0) domain_offset = np.zeros(3, dtype="int64") _periodicity = MutableAttribute() _force_periodicity = False # these are set in self._set_derived_attrs() domain_width = MutableAttribute(True) domain_center = MutableAttribute(True) def __new__(cls, filename=None, *args, **kwargs): if not isinstance(filename, str): obj = object.__new__(cls) # The Stream frontend uses a StreamHandler object to pass metadata # to __init__. 
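            # (StreamHandler objects come from the in-memory loaders such as
            # yt.load_uniform_grid and yt.load_amr_grids, which build the
            # dataset from arrays rather than from a file on disk.)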
is_stream = hasattr(filename, "get_fields") and hasattr( filename, "get_particle_type" ) if not is_stream: obj.__init__(filename, *args, **kwargs) return obj apath = os.path.abspath(os.path.expanduser(filename)) cache_key = (apath, pickle.dumps(args), pickle.dumps(kwargs)) if ytcfg.get("yt", "skip_dataset_cache"): obj = object.__new__(cls) elif cache_key not in _cached_datasets: obj = object.__new__(cls) if not obj._skip_cache: _cached_datasets[cache_key] = obj else: obj = _cached_datasets[cache_key] return obj def __init_subclass__(cls, *args, **kwargs): super().__init_subclass__(*args, **kwargs) if cls.__name__ in output_type_registry: warnings.warn( f"Overwriting {cls.__name__}, which was previously registered. " "This is expected if you're importing a yt extension with a " "frontend that was already migrated to the main code base.", stacklevel=2, ) output_type_registry[cls.__name__] = cls mylog.debug("Registering: %s as %s", cls.__name__, cls) def __init__( self, filename: str, dataset_type: str | None = None, units_override: dict[str, str] | None = None, # valid unit_system values include all keys from unyt.unit_systems.unit_systems_registry + "code" unit_system: Literal[ "cgs", "mks", "imperial", "galactic", "solar", "geometrized", "planck", "code", ] = "cgs", default_species_fields: Optional[ "Any" ] = None, # Any used as a placeholder here *, axis_order: AxisOrder | None = None, ) -> None: """ Base class for generating new output types. Principally consists of a *filename* and a *dataset_type* which will be passed on to children. """ # We return early and do NOT initialize a second time if this file has # already been initialized. if self._instantiated != 0: return self.dataset_type = dataset_type self.conversion_factors = {} self.parameters: dict[str, Any] = {} self.region_expression = self.r = RegionExpression(self) self.known_filters = self.known_filters or {} self.particle_unions = self.particle_unions or {} self.field_units = self.field_units or {} self._determined_fields = {} self.units_override = self.__class__._sanitize_units_override(units_override) self.default_species_fields = default_species_fields self._input_filename: str = os.fspath(filename) # to get the timing right, do this before the heavy lifting self._instantiated = time.time() self.no_cgs_equiv_length = False if unit_system == "code": # create a fake MKS unit system which we will override later to # avoid chicken/egg issue of the unit registry needing a unit system # but code units need a unit registry to define the code units on used_unit_system = "mks" else: used_unit_system = unit_system self._create_unit_registry(used_unit_system) self._parse_parameter_file() self.set_units() self.setup_cosmology() self._assign_unit_system(unit_system) self._setup_coordinate_handler(axis_order) self.print_key_parameters() self._set_derived_attrs() # Because we need an instantiated class to check the ds's existence in # the cache, we move that check to here from __new__. This avoids # double-instantiation. 
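        # One practical consequence of registering with the store: datasets
        # can be round-tripped through pickle by their hash (see __reduce__
        # below), e.g.
        #
        #   >>> import pickle
        #   >>> ds2 = pickle.loads(pickle.dumps(ds))  # reloaded via ParameterFileStore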
# PR 3124: _set_derived_attrs() can change the hash, check store here if _ds_store is None: raise RuntimeError( "Something went wrong during yt's initialization: " "dataset cache isn't properly initialized" ) try: _ds_store.check_ds(self) except NoParameterShelf: pass self._setup_classes() @property def filename(self): if self._input_filename.startswith("http"): return self._input_filename else: return os.path.abspath(os.path.expanduser(self._input_filename)) @property def parameter_filename(self): # historic alias return self.filename @property def basename(self): return os.path.basename(self.filename) @property def directory(self): return os.path.dirname(self.filename) @property def fullpath(self): issue_deprecation_warning( "the Dataset.fullpath attribute is now aliased to Dataset.directory, " "and all path attributes are now absolute. " "Please use the directory attribute instead", stacklevel=3, since="4.1", ) return self.directory @property def backup_filename(self): name, _ext = os.path.splitext(self.filename) return name + "_backup.gdf" @cached_property def unique_identifier(self) -> str: retv = int(os.stat(self.parameter_filename)[ST_CTIME]) name_as_bytes = bytearray(map(ord, self.parameter_filename)) retv += fnv_hash(name_as_bytes) return str(retv) @property def periodicity(self): if self._force_periodicity: return (True, True, True) elif getattr(self, "_domain_override", False): # dataset loaded with a bounding box return (False, False, False) return self._periodicity def force_periodicity(self, val=True): """ Override box periodicity to (True, True, True). Use ds.force_periodicty(False) to use the actual box periodicity. """ # This is a user-facing method that embrace a long-standing # workaround in yt user codes. if not isinstance(val, bool): raise TypeError("force_periodicity expected a boolean.") self._force_periodicity = val @classmethod def _missing_load_requirements(cls) -> list[str]: # return a list of optional dependencies that are # needed for the present class to function, but currently missing. 
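        # (e.g. a frontend declaring _load_requirements = ["h5py"] would
        # report ["h5py"] here whenever h5py is not importable)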
# returning an empty list means that all requirements are met return [name for name in cls._load_requirements if not find_spec(name)] # abstract methods require implementation in subclasses @classmethod @abc.abstractmethod def _is_valid(cls, filename: str, *args, **kwargs) -> bool: # A heuristic test to determine if the data format can be interpreted # with the present frontend return False @abc.abstractmethod def _parse_parameter_file(self): # set up various attributes from self.parameter_filename # for a full description of what is required here see # yt.frontends._skeleton.SkeletonDataset pass @abc.abstractmethod def _set_code_unit_attributes(self): # set up code-units to physical units normalization factors # for a full description of what is required here see # yt.frontends._skeleton.SkeletonDataset pass def _set_derived_attrs(self): if self.domain_left_edge is None or self.domain_right_edge is None: self.domain_center = np.zeros(3) self.domain_width = np.zeros(3) else: self.domain_center = 0.5 * (self.domain_right_edge + self.domain_left_edge) self.domain_width = self.domain_right_edge - self.domain_left_edge if not isinstance(self.current_time, YTQuantity): self.current_time = self.quan(self.current_time, "code_time") for attr in ("center", "width", "left_edge", "right_edge"): n = f"domain_{attr}" v = getattr(self, n) if not isinstance(v, YTArray) and v is not None: # Note that we don't add on _ipython_display_ here because # everything is stored inside a MutableAttribute. v = self.arr(v, "code_length") setattr(self, n, v) def __reduce__(self): args = (self._hash(),) return (_reconstruct_ds, args) def __repr__(self): return f"{self.__class__.__name__}: {self.parameter_filename}" def __str__(self): return self.basename def _hash(self): s = f"{self.basename};{self.current_time};{self.unique_identifier}" return hashlib.md5(s.encode("utf-8")).hexdigest() @cached_property def checksum(self): """ Computes md5 sum of a dataset. Note: Currently this property is unable to determine a complete set of files that are a part of a given dataset. As a first approximation, the checksum of :py:attr:`~parameter_file` is calculated. In case :py:attr:`~parameter_file` is a directory, checksum of all files inside the directory is calculated. """ def generate_file_md5(m, filename, blocksize=2**20): with open(filename, "rb") as f: while True: buf = f.read(blocksize) if not buf: break m.update(buf) m = hashlib.md5() if os.path.isdir(self.parameter_filename): for root, _, files in os.walk(self.parameter_filename): for fname in files: fname = os.path.join(root, fname) generate_file_md5(m, fname) elif os.path.isfile(self.parameter_filename): generate_file_md5(m, self.parameter_filename) else: m = "notafile" if hasattr(m, "hexdigest"): m = m.hexdigest() return m @property def _mrep(self): return MinimalDataset(self) @property def _skip_cache(self): return False @classmethod def _guess_candidates(cls, base, directories, files): """ This is a class method that accepts a directory (base), a list of files in that directory, and a list of subdirectories. It should return a list of filenames (defined relative to the supplied directory) and a boolean as to whether or not further directories should be recursed. This function doesn't need to catch all possibilities, nor does it need to filter possibilities. 
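        A frontend might, for instance, return every file with its expected
        extension and allow recursion (the extension here is illustrative):
        ``return [f for f in files if f.endswith(".dat")], True``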
""" return [], True def close(self): # noqa: B027 pass def __getitem__(self, key): """Returns units, parameters, or conversion_factors in that order.""" return self.parameters[key] def __iter__(self): yield from self.parameters def get_smallest_appropriate_unit( self, v, quantity="distance", return_quantity=False ): """ Returns the largest whole unit smaller than the YTQuantity passed to it as a string. The quantity keyword can be equal to `distance` or `time`. In the case of distance, the units are: 'Mpc', 'kpc', 'pc', 'au', 'rsun', 'km', etc. For time, the units are: 'Myr', 'kyr', 'yr', 'day', 'hr', 's', 'ms', etc. If return_quantity is set to True, it finds the largest YTQuantity object with a whole unit and a power of ten as the coefficient, and it returns this YTQuantity. """ good_u = None if quantity == "distance": unit_list = [ "Ppc", "Tpc", "Gpc", "Mpc", "kpc", "pc", "au", "rsun", "km", "m", "cm", "mm", "um", "nm", "pm", ] elif quantity == "time": unit_list = [ "Yyr", "Zyr", "Eyr", "Pyr", "Tyr", "Gyr", "Myr", "kyr", "yr", "day", "hr", "s", "ms", "us", "ns", "ps", "fs", ] else: raise ValueError( "Specified quantity must be equal to 'distance' or 'time'." ) for unit in unit_list: uq = self.quan(1.0, unit) if uq <= v: good_u = unit break if good_u is None and quantity == "distance": good_u = "cm" if good_u is None and quantity == "time": good_u = "s" if return_quantity: unit_index = unit_list.index(good_u) # This avoids indexing errors if unit_index == 0: return self.quan(1, unit_list[0]) # Number of orders of magnitude between unit and next one up OOMs = np.ceil( np.log10( self.quan(1, unit_list[unit_index - 1]) / self.quan(1, unit_list[unit_index]) ) ) # Backwards order of coefficients (e.g. [100, 10, 1]) coeffs = 10 ** np.arange(OOMs)[::-1] for j in coeffs: uq = self.quan(j, good_u) if uq <= v: return uq else: return good_u def has_key(self, key): """ Checks units, parameters, and conversion factors. Returns a boolean. """ return key in self.parameters _instantiated_index = None @property def index(self): if self._instantiated_index is None: self._instantiated_index = self._index_class( self, dataset_type=self.dataset_type ) # Now we do things that we need an instantiated index for # ...first off, we create our field_info now. oldsettings = np.geterr() np.seterr(all="ignore") self.create_field_info() np.seterr(**oldsettings) return self._instantiated_index @parallel_root_only def print_key_parameters(self): for a in [ "current_time", "domain_dimensions", "domain_left_edge", "domain_right_edge", "cosmological_simulation", ]: if not hasattr(self, a): mylog.error("Missing %s in parameter file definition!", a) continue v = getattr(self, a) mylog.info("Parameters: %-25s = %s", a, v) if hasattr(self, "cosmological_simulation") and self.cosmological_simulation: for a in [ "current_redshift", "omega_lambda", "omega_matter", "omega_radiation", "hubble_constant", ]: if not hasattr(self, a): mylog.error("Missing %s in parameter file definition!", a) continue v = getattr(self, a) mylog.info("Parameters: %-25s = %s", a, v) if getattr(self, "_domain_override", False): if any(self._periodicity): mylog.warning( "A bounding box was explicitly specified, so we " "are disabling periodicity." 
) @parallel_root_only def print_stats(self): self.index.print_stats() @property def field_list(self): return self.index.field_list def create_field_info(self): self.field_dependencies = {} self.derived_field_list = [] self.filtered_particle_types = [] self.field_info = self._field_info_class(self, self.field_list) self.coordinates.setup_fields(self.field_info) self.field_info.setup_fluid_fields() for ptype in self.particle_types: self.field_info.setup_particle_fields(ptype) self.field_info.setup_fluid_index_fields() if "all" not in self.particle_types: mylog.debug("Creating Particle Union 'all'") pu = ParticleUnion("all", list(self.particle_types_raw)) nfields = self.add_particle_union(pu) if nfields == 0: mylog.debug("zero common fields: skipping particle union 'all'") if "nbody" not in self.particle_types: mylog.debug("Creating Particle Union 'nbody'") ptypes = list(self.particle_types_raw) if hasattr(self, "_sph_ptypes"): for sph_ptype in self._sph_ptypes: if sph_ptype in ptypes: ptypes.remove(sph_ptype) if ptypes: nbody_ptypes = [] for ptype in ptypes: if (ptype, "particle_mass") in self.field_info: nbody_ptypes.append(ptype) pu = ParticleUnion("nbody", nbody_ptypes) nfields = self.add_particle_union(pu) if nfields == 0: mylog.debug("zero common fields, skipping particle union 'nbody'") self.field_info.setup_extra_union_fields() self.field_info.load_all_plugins(self.default_fluid_type) deps, unloaded = self.field_info.check_derived_fields() self.field_dependencies.update(deps) self.fields = FieldTypeContainer(self) self.index.field_list = sorted(self.field_list) # Now that we've detected the fields, set this flag so that # deprecated fields will be logged if they are used self.fields_detected = True def set_field_label_format(self, format_property, value): """ Set format properties for how fields will be written out. Accepts format_property : string indicating what property to set value: the value to set for that format_property """ available_formats = {"ionization_label": ("plus_minus", "roman_numeral")} if format_property in available_formats: if value in available_formats[format_property]: setattr(self, f"_{format_property}_format", value) else: raise ValueError( f"{value} not an acceptable value for format_property " f"{format_property}. Choices are {available_formats[format_property]}." ) else: raise ValueError( f"{format_property} not a recognized format_property. Available " f"properties are: {list(available_formats.keys())}" ) def setup_deprecated_fields(self): from yt.fields.field_aliases import _field_name_aliases added = [] for old_name, new_name in _field_name_aliases: try: fi = self._get_field_info(new_name) except YTFieldNotFound: continue self.field_info.alias(("gas", old_name), fi.name) added.append(("gas", old_name)) self.field_info.find_dependencies(added) def _setup_coordinate_handler(self, axis_order: AxisOrder | None) -> None: # backward compatibility layer: # turning off type-checker on a per-line basis cls: type[CoordinateHandler] if isinstance(self.geometry, tuple): # type: ignore [unreachable] issue_deprecation_warning( # type: ignore [unreachable] f"Dataset object {self} has a tuple for its geometry attribute. " "This is interpreted as meaning the first element is the actual geometry string, " "and the second represents an arbitrary axis order. 
" "This will stop working in a future version of yt.\n" "If you're loading data using yt.load_* functions, " "you should be able to clear this warning by using the axis_order keyword argument.\n" "Otherwise, if your code relies on this behaviour, please reach out and open an issue:\n" "https://github.com/yt-project/yt/issues/new\n" "Also see https://github.com/yt-project/yt/pull/4244#discussion_r1063486520 for reference", since="4.2", stacklevel=2, ) self.geometry, axis_order = self.geometry elif callable(self.geometry): issue_deprecation_warning( f"Dataset object {self} has a class for its geometry attribute. " "This was accepted in previous versions of yt but leads to undefined behaviour. " "This will stop working in a future version of yt.\n" "If you are relying on this behaviour, please reach out and open an issue:\n" "https://github.com/yt-project/yt/issues/new", since="4.2", stacklevel=2, ) cls = self.geometry # type: ignore [assignment] if type(self.geometry) is str: # noqa: E721 issue_deprecation_warning( f"Dataset object {self} has a raw string for its geometry attribute. " "In yt>=4.2, a yt.geometry.geometry_enum.Geometry member is expected instead. " "This will stop working in a future version of yt.\n", since="4.2", stacklevel=2, ) self.geometry = Geometry(self.geometry.lower()) if isinstance(self.geometry, CoordinateHandler): issue_deprecation_warning( f"Dataset object {self} has a CoordinateHandler object for its geometry attribute. " "In yt>=4.2, a yt.geometry.geometry_enum.Geometry member is expected instead. " "This will stop working in a future version of yt.\n", since="4.2", stacklevel=2, ) _class_name = type(self.geometry).__name__ if not _class_name.endswith("CoordinateHandler"): raise RuntimeError( "Expected CoordinateHandler child class name to end with CoordinateHandler" ) _geom_str = _class_name[: -len("CoordinateHandler")] self.geometry = Geometry(_geom_str.lower()) del _class_name, _geom_str # end compatibility layer if not isinstance(self.geometry, Geometry): raise TypeError( "Expected dataset.geometry attribute to be of " "type yt.geometry.geometry_enum.Geometry\n" f"Got {self.geometry=} with type {type(self.geometry)}" ) if self.geometry is Geometry.CARTESIAN: cls = CartesianCoordinateHandler elif self.geometry is Geometry.CYLINDRICAL: cls = CylindricalCoordinateHandler elif self.geometry is Geometry.POLAR: cls = PolarCoordinateHandler elif self.geometry is Geometry.SPHERICAL: cls = SphericalCoordinateHandler # It shouldn't be required to reset self.no_cgs_equiv_length # to the default value (False) here, but it's still necessary # see https://github.com/yt-project/yt/pull/3618 self.no_cgs_equiv_length = False elif self.geometry is Geometry.GEOGRAPHIC: cls = GeographicCoordinateHandler self.no_cgs_equiv_length = True elif self.geometry is Geometry.INTERNAL_GEOGRAPHIC: cls = InternalGeographicCoordinateHandler self.no_cgs_equiv_length = True elif self.geometry is Geometry.SPECTRAL_CUBE: cls = SpectralCubeCoordinateHandler else: assert_never(self.geometry) self.coordinates = cls(self, ordering=axis_order) def add_particle_union(self, union): # No string lookups here, we need an actual union. 
f = self.particle_fields_by_type # find fields common to all particle types in the union fields = set_intersection([f[s] for s in union if s in self.particle_types_raw]) if len(fields) == 0: # don't create this union if no fields are common to all # particle types return len(fields) for field in fields: units = set() for s in union: # First we check our existing fields for units funits = self._get_field_info((s, field)).units # Then we override with field_units settings. funits = self.field_units.get((s, field), funits) units.add(funits) if len(units) == 1: self.field_units[union.name, field] = list(units)[0] self.particle_types += (union.name,) self.particle_unions[union.name] = union fields = [(union.name, field) for field in fields] new_fields = [_ for _ in fields if _ not in self.field_list] self.field_list.extend(new_fields) new_field_info_fields = [ _ for _ in fields if _ not in self.field_info.field_list ] self.field_info.field_list.extend(new_field_info_fields) self.index.field_list = sorted(self.field_list) # Give ourselves a chance to add them here, first, then... # ...if we can't find them, we set them up as defaults. new_fields = self._setup_particle_types([union.name]) self.field_info.find_dependencies(new_fields) return len(new_fields) def add_particle_filter(self, filter): """Add particle filter to the dataset. Add ``filter`` to the dataset and set up relevant derived_field. It will also add any ``filtered_type`` that the ``filter`` depends on. """ # This requires an index self.index # This is a dummy, which we set up to enable passthrough of "all" # concatenation fields. n = getattr(filter, "name", filter) self.known_filters[n] = None if isinstance(filter, str): used = False f = filter_registry.get(filter, None) if f is None: return False used = self._setup_filtered_type(f) if used: filter = f else: used = self._setup_filtered_type(filter) if not used: self.known_filters.pop(n, None) return False self.known_filters[filter.name] = filter return True def _setup_filtered_type(self, filter): # Check if the filtered_type of this filter is known, # otherwise add it first if it is in the filter_registry if filter.filtered_type not in self.known_filters.keys(): if filter.filtered_type in filter_registry: add_success = self.add_particle_filter(filter.filtered_type) if add_success: mylog.info( "Added filter dependency '%s' for '%s'", filter.filtered_type, filter.name, ) if not filter.available(self.derived_field_list): raise YTIllDefinedParticleFilter( filter, filter.missing(self.derived_field_list) ) fi = self.field_info fd = self.field_dependencies available = False for fn in self.derived_field_list: if fn[0] == filter.filtered_type: # Now we can add this available = True self.derived_field_list.append((filter.name, fn[1])) fi[filter.name, fn[1]] = filter.wrap_func(fn, fi[fn]) # Now we append the dependencies fd[filter.name, fn[1]] = fd[fn] if available: if filter.name not in self.particle_types: self.particle_types += (filter.name,) if filter.name not in self.filtered_particle_types: self.filtered_particle_types.append(filter.name) if hasattr(self, "_sph_ptypes"): if filter.filtered_type == self._sph_ptypes[0]: mylog.warning( "It appears that you are filtering on an SPH field " "type. It is recommended to use 'gas' as the " "filtered particle type in this case instead." 
) if filter.filtered_type in (self._sph_ptypes + ("gas",)): self._sph_ptypes = self._sph_ptypes + (filter.name,) new_fields = self._setup_particle_types([filter.name]) deps, _ = self.field_info.check_derived_fields(new_fields) self.field_dependencies.update(deps) return available def _setup_particle_types(self, ptypes=None): df = [] if ptypes is None: ptypes = self.ds.particle_types_raw for ptype in set(ptypes): df += self._setup_particle_type(ptype) return df def _get_field_info( self, field: FieldKey | ImplicitFieldKey | DerivedField, /, ) -> DerivedField: field_info, candidates = self._get_field_info_helper(field) if field_info.name[1] in ("px", "py", "pz", "pdx", "pdy", "pdz"): # escape early as a bandaid solution to # https://github.com/yt-project/yt/issues/3381 return field_info def _are_ambiguous(candidates: list[FieldKey]) -> bool: if len(candidates) < 2: return False ftypes, fnames = (list(_) for _ in unzip(candidates)) assert all(name == fnames[0] for name in fnames) fi = self.field_info all_aliases: bool = all( fi[c].is_alias_to(fi[candidates[0]]) for c in candidates ) all_equivalent_particle_fields: bool if ( not self.particle_types or not self.particle_unions or not self.particle_types_raw ): all_equivalent_particle_fields = False elif all(ft in self.particle_types for ft in ftypes): ptypes = ftypes sub_types_list: list[set[str]] = [] for pt in ptypes: if pt in self.particle_types_raw: sub_types_list.append({pt}) elif pt in self.particle_unions: sub_types_list.append(set(self.particle_unions[pt].sub_types)) all_equivalent_particle_fields = all( st == sub_types_list[0] for st in sub_types_list ) else: all_equivalent_particle_fields = False return not (all_aliases or all_equivalent_particle_fields) if _are_ambiguous(candidates): ft, fn = field_info.name possible_ftypes = [c[0] for c in candidates] raise ValueError( f"The requested field name {fn!r} " "is ambiguous and corresponds to any one of " f"the following field types:\n {possible_ftypes}\n" "Please specify the requested field as an explicit " "tuple (, ).\n" ) return field_info def _get_field_info_helper( self, field: FieldKey | ImplicitFieldKey | DerivedField, /, ) -> tuple[DerivedField, list[FieldKey]]: self.index ftype: str fname: str if isinstance(field, str): ftype, fname = "unknown", field elif isinstance(field, tuple) and len(field) == 2: ftype, fname = field elif isinstance(field, DerivedField): ftype, fname = field.name else: raise YTFieldNotParseable(field) if ftype == "unknown": candidates: list[FieldKey] = [ (ft, fn) for ft, fn in self.field_info if fn == fname ] # We also should check "all" for particles, which can show up if you're # mixing deposition/gas fields with particle fields. 
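            # E.g. a bare "density" arrives here as ("unknown", "density") and
            # is tried against each field type in turn; passing an explicit
            # tuple such as ("gas", "density") skips the guessing entirely.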
if hasattr(self, "_sph_ptype"): to_guess = [self.default_fluid_type, "all"] else: to_guess = ["all", self.default_fluid_type] to_guess += list(self.fluid_types) + list(self.particle_types) for ftype in to_guess: if (ftype, fname) in self.field_info: return self.field_info[ftype, fname], candidates elif (ftype, fname) in self.field_info: return self.field_info[ftype, fname], [] raise YTFieldNotFound(field, ds=self) def _setup_classes(self): # Called by subclass self.object_types = [] self.objects = [] self.plots = [] for name, cls in sorted(data_object_registry.items()): if name in self._index_class._unsupported_objects: setattr(self, name, _unsupported_object(self, name)) continue self._add_object_class(name, cls) self.object_types.sort() def _add_object_class(self, name, base): # skip projection data objects that don't make sense # for this type of data if "proj" in name and name != self._proj_type: return elif "proj" in name: name = "proj" self.object_types.append(name) obj = functools.partial(base, ds=weakref.proxy(self)) obj.__doc__ = base.__doc__ setattr(self, name, obj) def _find_extremum(self, field, ext, source=None, to_array=True): """ Find the extremum value of a field in a data object (source) and its position. Parameters ---------- field : str or tuple(str, str) ext : str 'min' or 'max', select an extremum source : a Yt data object to_array : bool select the return type. Returns ------- val, coords val: unyt.unyt_quantity extremum value detected coords: unyt.unyt_array or list(unyt.unyt_quantity) Conversion to a single unyt_array object is only possible for coordinate systems with homogeneous dimensions across axes (i.e. cartesian). """ ext = ext.lower() if source is None: source = self.all_data() method = { "min": source.quantities.min_location, "max": source.quantities.max_location, }[ext] val, x1, x2, x3 = method(field) coords = [x1, x2, x3] mylog.info("%s value is %0.5e at %0.16f %0.16f %0.16f", ext, val, *coords) if to_array: if any(x.units.is_dimensionless for x in coords): mylog.warning( "dataset `%s` has angular coordinates. " "Use 'to_array=False' to preserve " "dimensionality in each coordinate.", str(self), ) # force conversion to length alt_coords = [] for x in coords: alt_coords.append( self.quan(x.v, "code_length") if x.units.is_dimensionless else x.to("code_length") ) coords = self.arr(alt_coords, dtype="float64").to("code_length") return val, coords def find_max(self, field, source=None, to_array=True): """ Returns (value, location) of the maximum of a given field. This is a wrapper around _find_extremum """ mylog.debug("Searching for maximum value of %s", field) return self._find_extremum(field, "max", source=source, to_array=to_array) def find_min(self, field, source=None, to_array=True): """ Returns (value, location) for the minimum of a given field. This is a wrapper around _find_extremum """ mylog.debug("Searching for minimum value of %s", field) return self._find_extremum(field, "min", source=source, to_array=to_array) def find_field_values_at_point(self, fields, coords): """ Returns the values [field1, field2,...] of the fields at the given coordinates. Returns a list of field values in the same order as the input *fields*. """ point = self.point(coords) ret = [point[f] for f in iter_fields(fields)] if len(ret) == 1: return ret[0] else: return ret def find_field_values_at_points(self, fields, coords): """ Returns the values [field1, field2,...] of the fields at the given [(x1, y1, z2), (x2, y2, z2),...] points. 
Returns a list of field values in the same order as the input *fields*. """ # If an optimized version exists on the Index object we'll use that try: return self.index._find_field_values_at_points(fields, coords) except AttributeError: pass fields = list(iter_fields(fields)) out = [] # This may be slow because it creates a data object for each point for field_index, field in enumerate(fields): funit = self._get_field_info(field).units out.append(self.arr(np.empty((len(coords),)), funit)) for coord_index, coord in enumerate(coords): out[field_index][coord_index] = self.point(coord)[field] if len(fields) == 1: return out[0] else: return out # Now all the object related stuff def all_data(self, find_max=False, **kwargs): """ all_data is a wrapper to the Region object for creating a region which covers the entire simulation domain. """ self.index if find_max: c = self.find_max("density")[1] else: c = (self.domain_right_edge + self.domain_left_edge) / 2.0 return self.region(c, self.domain_left_edge, self.domain_right_edge, **kwargs) def box(self, left_edge, right_edge, **kwargs): """ box is a wrapper to the Region object for creating a region without having to specify a *center* value. It assumes the center is the midpoint between the left_edge and right_edge. Keyword arguments are passed to the initializer of the YTRegion object (e.g. ds.region). """ # we handle units in the region data object # but need to check if left_edge or right_edge is a # list or other non-array iterable before calculating # the center if isinstance(left_edge[0], YTQuantity): left_edge = YTArray(left_edge) right_edge = YTArray(right_edge) left_edge = np.asanyarray(left_edge, dtype="float64") right_edge = np.asanyarray(right_edge, dtype="float64") c = (left_edge + right_edge) / 2.0 return self.region(c, left_edge, right_edge, **kwargs) def _setup_particle_type(self, ptype): orig = set(self.field_info.items()) self.field_info.setup_particle_fields(ptype) return [n for n, v in set(self.field_info.items()).difference(orig)] @property def particle_fields_by_type(self): fields = defaultdict(list) for field in self.field_list: if field[0] in self.particle_types_raw: fields[field[0]].append(field[1]) return fields @property def particles_exist(self): for pt, f in itertools.product(self.particle_types_raw, self.field_list): if pt == f[0]: return True return False @property def particle_type_counts(self): self.index if not self.particles_exist: return {} # frontends or index implementation can populate this dict while # creating the index if they know particle counts at that time if self._particle_type_counts is not None: return self._particle_type_counts self._particle_type_counts = self.index._get_particle_type_counts() return self._particle_type_counts @property def ires_factor(self): o2 = np.log2(self.refine_by) if o2 != int(o2): raise RuntimeError # In the case that refine_by is 1 or 0 or something, we just # want to make it a non-operative number, so we set to 1. return max(1, int(o2)) def relative_refinement(self, l0, l1): return self.refine_by ** (l1 - l0) def _assign_unit_system( self, # valid unit_system values include all keys from unyt.unit_systems.unit_systems_registry + "code" unit_system: Literal[ "cgs", "mks", "imperial", "galactic", "solar", "geometrized", "planck", "code", ], ) -> None: # we need to determine if the requested unit system # is mks-like: i.e., it has a current with the same # dimensions as amperes. 
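        # E.g. "mks"-like systems define an ampere base unit, so magnetic
        # fields are expressed in tesla, while "cgs"-like systems have no
        # separate current dimension and express them in gauss; the logic
        # below bridges the two.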
mks_system = False mag_unit: unyt_quantity | None = getattr(self, "magnetic_unit", None) mag_dims: set[Symbol] | None if mag_unit is not None: mag_dims = mag_unit.units.dimensions.free_symbols else: mag_dims = None if unit_system != "code": # if the unit system is known, we can check if it # has a "current_mks" unit us = unit_system_registry[str(unit_system).lower()] mks_system = us.base_units[current_mks] is not None elif mag_dims and current_mks in mag_dims: # if we're using the not-yet defined code unit system, # then we check if the magnetic field unit has a SI # current dimension in it mks_system = True # Now we get to the tricky part. If we have an MKS-like system but # we asked for a conversion to something CGS-like, or vice-versa, # we have to convert the magnetic field if mag_dims is not None: self.magnetic_unit: unyt_quantity if mks_system and current_mks not in mag_dims: self.magnetic_unit = self.quan( self.magnetic_unit.to_value("gauss") * 1.0e-4, "T" ) # The following modification ensures that we get the conversion to # mks correct self.unit_registry.modify( "code_magnetic", self.magnetic_unit.value * 1.0e3 * 0.1**-0.5 ) elif not mks_system and current_mks in mag_dims: self.magnetic_unit = self.quan( self.magnetic_unit.to_value("T") * 1.0e4, "gauss" ) # The following modification ensures that we get the conversion to # cgs correct self.unit_registry.modify( "code_magnetic", self.magnetic_unit.value * 1.0e-4 ) current_mks_unit = "A" if mks_system else None us = create_code_unit_system( self.unit_registry, current_mks_unit=current_mks_unit ) if unit_system != "code": us = unit_system_registry[str(unit_system).lower()] self._unit_system_name: str = unit_system self.unit_system: UnitSystem = us self.unit_registry.unit_system = self.unit_system @property def _uses_code_length_unit(self) -> bool: return self._unit_system_name == "code" or self.no_cgs_equiv_length @property def _uses_code_time_unit(self) -> bool: return self._unit_system_name == "code" def _create_unit_registry(self, unit_system): from yt.units import dimensions # yt assumes a CGS unit system by default (for back compat reasons). # Since unyt is MKS by default we specify the MKS values of the base # units in the CGS system. So, for length, 1 cm = .01 m. And so on. 
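        # (e.g. "code_length" is registered below with an MKS base value of
        # 0.01, i.e. 1 code_length == 1 cm until the frontend overrides it)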
# Note that the values associated with the code units here will be # modified once we actually determine what the code units are from # the dataset # NOTE that magnetic fields are not done here yet, see set_code_units self.unit_registry = UnitRegistry(unit_system=unit_system) # 1 cm = 0.01 m self.unit_registry.add("code_length", 0.01, dimensions.length) # 1 g = 0.001 kg self.unit_registry.add("code_mass", 0.001, dimensions.mass) # 1 g/cm**3 = 1000 kg/m**3 self.unit_registry.add("code_density", 1000.0, dimensions.density) # 1 erg/g = 1.0e-4 J/kg self.unit_registry.add( "code_specific_energy", 1.0e-4, dimensions.energy / dimensions.mass ) # 1 s = 1 s self.unit_registry.add("code_time", 1.0, dimensions.time) # 1 K = 1 K self.unit_registry.add("code_temperature", 1.0, dimensions.temperature) # 1 dyn/cm**2 = 0.1 N/m**2 self.unit_registry.add("code_pressure", 0.1, dimensions.pressure) # 1 cm/s = 0.01 m/s self.unit_registry.add("code_velocity", 0.01, dimensions.velocity) # metallicity self.unit_registry.add("code_metallicity", 1.0, dimensions.dimensionless) # dimensionless hubble parameter self.unit_registry.add("h", 1.0, dimensions.dimensionless, r"h") # cosmological scale factor self.unit_registry.add("a", 1.0, dimensions.dimensionless) def set_units(self): """ Creates the unit registry for this dataset. """ if getattr(self, "cosmological_simulation", False): # this dataset is cosmological, so add cosmological units. self.unit_registry.modify("h", self.hubble_constant) if getattr(self, "current_redshift", None) is not None: # Comoving lengths for my_unit in ["m", "pc", "AU", "au"]: new_unit = f"{my_unit}cm" my_u = Unit(my_unit, registry=self.unit_registry) self.unit_registry.add( new_unit, my_u.base_value / (1 + self.current_redshift), dimensions.length, f"\\rm{{{my_unit}}}/(1+z)", prefixable=True, ) self.unit_registry.modify("a", 1 / (1 + self.current_redshift)) self.set_code_units() def setup_cosmology(self): """ If this dataset is cosmological, add a cosmology object. """ if not getattr(self, "cosmological_simulation", False): return # Set dynamical dark energy parameters use_dark_factor = getattr(self, "use_dark_factor", False) w_0 = getattr(self, "w_0", -1.0) w_a = getattr(self, "w_a", 0.0) # many frontends do not set this setdefaultattr(self, "omega_radiation", 0.0) self.cosmology = Cosmology( hubble_constant=self.hubble_constant, omega_matter=self.omega_matter, omega_lambda=self.omega_lambda, omega_radiation=self.omega_radiation, use_dark_factor=use_dark_factor, w_0=w_0, w_a=w_a, ) if not hasattr(self, "current_time"): self.current_time = self.cosmology.t_from_z(self.current_redshift) if getattr(self, "current_redshift", None) is not None: self.critical_density = self.cosmology.critical_density( self.current_redshift ) self.scale_factor = 1.0 / (1.0 + self.current_redshift) def get_unit_from_registry(self, unit_str): """ Creates a unit object matching the string expression, using this dataset's unit registry. Parameters ---------- unit_str : str string that we can parse for a sympy Expr. """ new_unit = Unit(unit_str, registry=self.unit_registry) return new_unit def set_code_units(self): # here we override units, if overrides have been provided. 
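        # For example (the file name and values are illustrative):
        #
        #   >>> ds = yt.load(
        #   ...     "my_data",
        #   ...     units_override={"length_unit": (1.0, "kpc"), "time_unit": (1.0, "Myr")},
        #   ... )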
self._override_code_units() # set attributes like ds.length_unit self._set_code_unit_attributes() self.unit_registry.modify("code_length", self.length_unit) self.unit_registry.modify("code_mass", self.mass_unit) self.unit_registry.modify("code_time", self.time_unit) vel_unit = getattr(self, "velocity_unit", self.length_unit / self.time_unit) pressure_unit = getattr( self, "pressure_unit", self.mass_unit / (self.length_unit * (self.time_unit) ** 2), ) temperature_unit = getattr(self, "temperature_unit", 1.0) density_unit = getattr( self, "density_unit", self.mass_unit / self.length_unit**3 ) specific_energy_unit = getattr(self, "specific_energy_unit", vel_unit**2) self.unit_registry.modify("code_velocity", vel_unit) self.unit_registry.modify("code_temperature", temperature_unit) self.unit_registry.modify("code_pressure", pressure_unit) self.unit_registry.modify("code_density", density_unit) self.unit_registry.modify("code_specific_energy", specific_energy_unit) # Defining code units for magnetic fields are tricky because # they have different dimensions in different unit systems, so we have # to handle them carefully if hasattr(self, "magnetic_unit"): if self.magnetic_unit.units.dimensions == dimensions.magnetic_field_cgs: # We have to cast this explicitly to MKS base units, otherwise # unyt will convert it automatically to Tesla value = self.magnetic_unit.to_value("sqrt(kg)/(sqrt(m)*s)") dims = dimensions.magnetic_field_cgs else: value = self.magnetic_unit.to_value("T") dims = dimensions.magnetic_field_mks else: # Fallback to gauss if no magnetic unit is specified # 1 gauss = 1 sqrt(g)/(sqrt(cm)*s) = 0.1**0.5 sqrt(kg)/(sqrt(m)*s) value = 0.1**0.5 dims = dimensions.magnetic_field_cgs self.unit_registry.add("code_magnetic", value, dims) # domain_width does not yet exist if self.domain_left_edge is not None and self.domain_right_edge is not None: DW = self.arr(self.domain_right_edge - self.domain_left_edge, "code_length") self.unit_registry.add( "unitary", float(DW.max() * DW.units.base_value), DW.units.dimensions ) @classmethod def _validate_units_override_keys(cls, units_override): valid_keys = set(cls.default_units.keys()) invalid_keys_found = set(units_override.keys()) - valid_keys if invalid_keys_found: raise ValueError( f"units_override contains invalid keys: {invalid_keys_found}" ) default_units = { "length_unit": "cm", "time_unit": "s", "mass_unit": "g", "velocity_unit": "cm/s", "magnetic_unit": "gauss", "temperature_unit": "K", } @classmethod def _sanitize_units_override(cls, units_override): """ Convert units_override values to valid input types for unyt. Throw meaningful errors early if units_override is ill-formed. Parameters ---------- units_override : dict keys should be strings with format "_unit" (e.g. "mass_unit"), and need to match a key in cls.default_units values should be mappable to unyt.unyt_quantity objects, and can be any combinations of: - unyt.unyt_quantity - 2-long sequence (tuples, list, ...) with types (number, str) e.g. (10, "km"), (0.1, "s") - number (in which case the associated is taken from cls.default_unit) Raises ------ TypeError If unit_override has invalid types ValueError If provided units do not match the intended dimensionality, or in case of a zero scaling factor. """ uo = {} if units_override is None: return uo cls._validate_units_override_keys(units_override) for key in cls.default_units: try: val = units_override[key] except KeyError: continue # Now attempt to instantiate a unyt.unyt_quantity from val ... try: # ... 
directly (valid if val is a number, or a unyt_quantity) uo[key] = YTQuantity(val) continue except RuntimeError: # note that unyt.unyt_quantity throws RuntimeError in lieu of TypeError pass try: # ... with tuple unpacking (valid if val is a sequence) uo[key] = YTQuantity(*val) continue except (RuntimeError, TypeError, UnitParseError): pass raise TypeError( "units_override values should be 2-sequence (float, str), " "YTQuantity objects or real numbers; " f"received {val} with type {type(val)}." ) for key, q in uo.items(): if q.units.is_dimensionless: uo[key] = YTQuantity(q, cls.default_units[key]) try: uo[key].to(cls.default_units[key]) except UnitConversionError as err: raise ValueError( "Inconsistent dimensionality in units_override. " f"Received {key} = {uo[key]}" ) from err if uo[key].value == 0.0: raise ValueError( f"Invalid 0 normalisation factor in units_override for {key}." ) return uo def _override_code_units(self): if not self.units_override: return mylog.warning( "Overriding code units: Use this option only if you know that the " "dataset doesn't define the units correctly or at all." ) for ukey, val in self.units_override.items(): mylog.info("Overriding %s: %s.", ukey, val) setattr(self, ukey, self.quan(val)) _units = None _unit_system_id = None @property def units(self): current_uid = self.unit_registry.unit_system_id if self._units is not None and self._unit_system_id == current_uid: return self._units self._unit_system_id = current_uid self._units = UnitContainer(self.unit_registry) return self._units _arr = None @property def arr(self): """Converts an array into a :class:`yt.units.yt_array.YTArray` The returned YTArray will be dimensionless by default, but can be cast to arbitrary units using the ``units`` keyword argument. Parameters ---------- input_array : Iterable A tuple, list, or array to attach units to units: String unit specification, unit symbol or astropy object The units of the array. Powers must be specified using python syntax (cm**3, not cm^3). input_units : Deprecated in favor of 'units' dtype : string or NumPy dtype object The dtype of the returned array data Examples -------- >>> import yt >>> import numpy as np >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> a = ds.arr([1, 2, 3], "cm") >>> b = ds.arr([4, 5, 6], "m") >>> a + b YTArray([ 401., 502., 603.]) cm >>> b + a YTArray([ 4.01, 5.02, 6.03]) m Arrays returned by this function know about the dataset's unit system >>> a = ds.arr(np.ones(5), "code_length") >>> a.in_units("Mpccm/h") YTArray([ 1.00010449, 1.00010449, 1.00010449, 1.00010449, 1.00010449]) Mpc """ if self._arr is not None: return self._arr self._arr = functools.partial(YTArray, registry=self.unit_registry) return self._arr _quan = None @property def quan(self): """Converts an scalar into a :class:`yt.units.yt_array.YTQuantity` The returned YTQuantity will be dimensionless by default, but can be cast to arbitrary units using the ``units`` keyword argument. Parameters ---------- input_scalar : an integer or floating point scalar The scalar to attach units to units: String unit specification, unit symbol or astropy object The units of the quantity. Powers must be specified using python syntax (cm**3, not cm^3). input_units : Deprecated in favor of 'units' dtype : string or NumPy dtype object The dtype of the array data. 
Examples -------- >>> import yt >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> a = ds.quan(1, "cm") >>> b = ds.quan(2, "m") >>> a + b 201.0 cm >>> b + a 2.01 m Quantities created this way automatically know about the unit system of the dataset. >>> a = ds.quan(5, "code_length") >>> a.in_cgs() 1.543e+25 cm """ if self._quan is not None: return self._quan self._quan = functools.partial(YTQuantity, registry=self.unit_registry) return self._quan def add_field( self, name, function, sampling_type, *, force_override=False, **kwargs ): """ Dataset-specific call to add_field Add a new field, along with supplemental metadata, to the list of available fields. This respects a number of arguments, all of which are passed on to the constructor for :class:`~yt.data_objects.api.DerivedField`. Parameters ---------- name : str is the name of the field. function : callable A function handle that defines the field. Should accept arguments (field, data) sampling_type: str "cell" or "particle" or "local" force_override: bool If False (default), an error will be raised if a field of the same name already exists. units : str A plain text string encoding the unit. Powers must be in python syntax (** instead of ^). take_log : bool Describes whether the field should be logged validators : list A list of :class:`FieldValidator` objects vector_field : bool Describes the dimensionality of the field. Currently unused. display_name : str A name used in the plots force_override : bool Whether to override an existing derived field. Does not work with on-disk fields. """ from yt.fields.field_functions import validate_field_function validate_field_function(function) self.index if force_override and name in self.index.field_list: raise RuntimeError( "force_override is only meant to be used with " "derived fields, not on-disk fields." ) if not force_override and name in self.field_info: mylog.warning( "Field %s already exists. To override use `force_override=True`.", name, ) self.field_info.add_field( name, function, sampling_type, force_override=force_override, **kwargs ) self.field_info._show_field_errors.append(name) deps, _ = self.field_info.check_derived_fields([name]) self.field_dependencies.update(deps) def add_mesh_sampling_particle_field(self, sample_field, ptype="all"): """Add a new mesh sampling particle field Creates a new particle field which has the value of the *deposit_field* at the location of each particle of type *ptype*. Parameters ---------- sample_field : tuple The field name tuple of the mesh field to be deposited onto the particles. This must be a field name tuple so yt can appropriately infer the correct particle type. ptype : string, default 'all' The particle type onto which the deposition will occur. Returns ------- The field name tuple for the newly created field. Examples -------- >>> ds = yt.load("output_00080/info_00080.txt") ... ds.add_mesh_sampling_particle_field(("gas", "density"), ptype="all") >>> print("The density at the location of the particle is:") ... print(ds.r["all", "cell_gas_density"]) The density at the location of the particle is: [9.33886124e-30 1.22174333e-28 1.20402333e-28 ... 
2.77410331e-30 8.79467609e-31 3.50665136e-30] g/cm**3 >>> len(ds.r["all", "cell_gas_density"]) == len(ds.r["all", "particle_ones"]) True """ if isinstance(sample_field, tuple): ftype, sample_field = sample_field[0], sample_field[1] else: raise RuntimeError return self.index._add_mesh_sampling_particle_field(sample_field, ftype, ptype) def add_deposited_particle_field( self, deposit_field, method, kernel_name="cubic", weight_field=None ): """Add a new deposited particle field Creates a new deposited field based on the particle *deposit_field*. Parameters ---------- deposit_field : tuple The field name tuple of the particle field the deposited field will be created from. This must be a field name tuple so yt can appropriately infer the correct particle type. method : string This is the "method name" which will be looked up in the `particle_deposit` namespace as `methodname_deposit`. Current methods include `simple_smooth`, `sum`, `std`, `cic`, `weighted_mean`, `nearest` and `count`. kernel_name : string, default 'cubic' This is the name of the smoothing kernel to use. It is only used for the `simple_smooth` method and is otherwise ignored. Current supported kernel names include `cubic`, `quartic`, `quintic`, `wendland2`, `wendland4`, and `wendland6`. weight_field : (field_type, field_name) or None Weighting field name for deposition method `weighted_mean`. If None, use the particle mass. Returns ------- The field name tuple for the newly created field. """ self.index if isinstance(deposit_field, tuple): ptype, deposit_field = deposit_field[0], deposit_field[1] else: raise RuntimeError if weight_field is None: weight_field = (ptype, "particle_mass") units = self.field_info[ptype, deposit_field].output_units take_log = self.field_info[ptype, deposit_field].take_log name_map = { "sum": "sum", "std": "std", "cic": "cic", "weighted_mean": "avg", "nearest": "nn", "simple_smooth": "ss", "count": "count", } field_name = "%s_" + name_map[method] + "_%s" field_name = field_name % (ptype, deposit_field.replace("particle_", "")) if method == "count": field_name = f"{ptype}_count" if ("deposit", field_name) in self.field_info: mylog.warning("The deposited field %s already exists", field_name) return ("deposit", field_name) else: units = "dimensionless" take_log = False def _deposit_field(field, data): """ Create a grid field for particle quantities using given method. """ pos = data[ptype, "particle_position"] fields = [data[ptype, deposit_field]] if method == "weighted_mean": fields.append(data[ptype, weight_field]) fields = [np.ascontiguousarray(f) for f in fields] d = data.deposit(pos, fields, method=method, kernel_name=kernel_name) d = data.ds.arr(d, units=units) if method == "weighted_mean": d[np.isnan(d)] = 0.0 return d self.add_field( ("deposit", field_name), function=_deposit_field, sampling_type="cell", units=units, take_log=take_log, validators=[ValidateSpatial()], ) return ("deposit", field_name) def add_gradient_fields(self, fields=None): """Add gradient fields. Creates four new grid-based fields that represent the components of the gradient of an existing field, plus an extra field for the magnitude of the gradient. The gradient is computed using second-order centered differences. Parameters ---------- fields : str or tuple(str, str), or a list of the previous Label(s) for at least one field. Can either represent a tuple (, ) or simply the field name. Warning: several field types may match the provided field name, in which case the first one discovered internally is used. 
Returns ------- A list of field name tuples for the newly created fields. Raises ------ YTFieldNotParsable If fields are not parsable to yt field keys. YTFieldNotFound : If at least one field can not be identified. Examples -------- >>> grad_fields = ds.add_gradient_fields(("gas", "density")) >>> print(grad_fields) ... [ ... ("gas", "density_gradient_x"), ... ("gas", "density_gradient_y"), ... ("gas", "density_gradient_z"), ... ("gas", "density_gradient_magnitude"), ... ] Note that the above example assumes ds.geometry == 'cartesian'. In general, the function will create gradient components along the axes of the dataset coordinate system. For instance, with cylindrical data, one gets 'density_gradient_' """ if fields is None: raise TypeError("Missing required positional argument: fields") self.index data_obj = self.all_data() explicit_fields = data_obj._determine_fields(fields) grad_fields = [] for ftype, fname in explicit_fields: units = self.field_info[ftype, fname].units setup_gradient_fields(self.field_info, (ftype, fname), units) # Now we make a list of the fields that were just made, to check them # and to return them grad_fields += [ (ftype, f"{fname}_gradient_{suffix}") for suffix in self.coordinates.axis_order ] grad_fields.append((ftype, f"{fname}_gradient_magnitude")) deps, _ = self.field_info.check_derived_fields(grad_fields) self.field_dependencies.update(deps) return grad_fields _max_level = None @property def max_level(self): if self._max_level is None: self._max_level = self.index.max_level return self._max_level @max_level.setter def max_level(self, value): self._max_level = value _min_level = None @property def min_level(self): if self._min_level is None: self._min_level = self.index.min_level return self._min_level @min_level.setter def min_level(self, value): self._min_level = value def define_unit(self, symbol, value, tex_repr=None, offset=None, prefixable=False): """ Define a new unit and add it to the dataset's unit registry. Parameters ---------- symbol : string The symbol for the new unit. value : tuple or ~yt.units.yt_array.YTQuantity The definition of the new unit in terms of some other units. For example, one would define a new "mph" unit with (1.0, "mile/hr") tex_repr : string, optional The LaTeX representation of the new unit. If one is not supplied, it will be generated automatically based on the symbol string. offset : float, optional The default offset for the unit. If not set, an offset of 0 is assumed. prefixable : bool, optional Whether or not the new unit can use SI prefixes. 
Default: False Examples -------- >>> ds.define_unit("mph", (1.0, "mile/hr")) >>> two_weeks = YTQuantity(14.0, "days") >>> ds.define_unit("fortnight", two_weeks) """ define_unit( symbol, value, tex_repr=tex_repr, offset=offset, prefixable=prefixable, registry=self.unit_registry, ) def _is_within_domain(self, point) -> bool: assert len(point) == len(self.domain_left_edge) assert point.units.dimensions == un.dimensions.length for i, x in enumerate(point): if self.periodicity[i]: continue if x < self.domain_left_edge[i]: return False if x > self.domain_right_edge[i]: return False return True def _reconstruct_ds(*args, **kwargs): datasets = ParameterFileStore() ds = datasets.get_ds_hash(*args) return ds @functools.total_ordering class ParticleFile: filename: str file_id: int start: int | None = None end: int | None = None total_particles: defaultdict[str, int] | None = None def __init__(self, ds, io, filename, file_id, range=None): self.ds = ds self.io = weakref.proxy(io) self.filename = filename self.file_id = file_id if range is None: range = (None, None) self.start, self.end = range self.total_particles = self.io._count_particles(self) # Now we adjust our start/end, in case there are fewer particles than # we realized if self.start is None: self.start = 0 self.end = max(self.total_particles.values()) + self.start def select(self, selector): # noqa: B027 pass def count(self, selector): # noqa: B027 pass def _calculate_offsets(self, fields, pcounts): # noqa: B027 pass def __lt__(self, other): if self.filename != other.filename: return self.filename < other.filename return self.start < other.start def __eq__(self, other): if self.filename != other.filename: return False return self.start == other.start def __hash__(self): return hash((self.filename, self.file_id, self.start, self.end)) class ParticleDataset(Dataset): _unit_base = None filter_bbox = False _proj_type = "particle_proj" def __init__( self, filename, dataset_type=None, units_override=None, unit_system="cgs", index_order=None, index_filename=None, default_species_fields=None, ): self.index_order = index_order self.index_filename = index_filename super().__init__( filename, dataset_type=dataset_type, units_override=units_override, unit_system=unit_system, default_species_fields=default_species_fields, ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2791517 yt-4.4.0/yt/data_objects/tests/0000755000175100001770000000000014714401715016004 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/__init__.py0000644000175100001770000000000014714401662020104 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_add_field.py0000644000175100001770000000726214714401662021320 0ustar00runnerdockerfrom functools import partial import pytest import unyt import yt from yt import derived_field from yt.fields import local_fields from yt.testing import fake_random_ds def test_add_field_lambda(): ds = fake_random_ds(16) ds.add_field( ("gas", "spam"), lambda field, data: data["gas", "density"], sampling_type="cell", ) # check access ds.all_data()["gas", "spam"] def test_add_field_partial(): ds = fake_random_ds(16) def _spam(field, data, factor): return factor * data["gas", "density"] ds.add_field( ("gas", "spam"), partial(_spam, factor=1), sampling_type="cell", ) # check access ds.all_data()["gas", "spam"] def 
test_add_field_arbitrary_callable(): ds = fake_random_ds(16) class Spam: def __call__(self, field, data): return data["gas", "density"] ds.add_field(("gas", "spam"), Spam(), sampling_type="cell") # check access ds.all_data()["gas", "spam"] def test_add_field_uncallable(): ds = fake_random_ds(16) class Spam: pass with pytest.raises(TypeError, match=r"(is not a callable object)$"): ds.add_field(("bacon", "spam"), Spam(), sampling_type="cell") def test_add_field_wrong_signature(): ds = fake_random_ds(16) def _spam(data, field): return data["gas", "density"] with pytest.raises( TypeError, match=( r"Received field function with invalid signature\. " r"Expected exactly 2 positional parameters \('field', 'data'\), got \('data', 'field'\)" ), ): ds.add_field(("bacon", "spam"), _spam, sampling_type="cell") def test_add_field_keyword_only(): ds = fake_random_ds(16) def _spam(field, *, data): return data["gas", "density"] with pytest.raises( TypeError, match=( r"Received field function .* with invalid signature\. " r"Parameters 'field' and 'data' must accept positional values \(they cannot be keyword-only\)" ), ): ds.add_field( ("bacon", "spam"), _spam, sampling_type="cell", ) def test_derived_field(monkeypatch): tmp_field_info = local_fields.LocalFieldInfoContainer(None, [], None) monkeypatch.setattr(local_fields, "local_fields", tmp_field_info) @derived_field(name="pressure", sampling_type="cell", units="dyne/cm**2") def _pressure(field, data): return ( (data.ds.gamma - 1.0) * data["gas", "density"] * data["gas", "specific_thermal_energy"] ) @pytest.mark.parametrize( "add_field_kwargs", [ # full default: auto unit detection, no (in)validation {}, # explicit "auto", should be identical to default behaviour {"units": "auto"}, # explicitly requesting dimensionless units {"units": "dimensionless"}, # explicitly requesting dimensionless units (short hand) {"units": ""}, # explicitly requesting no dimensions {"dimensions": yt.units.dimensionless}, # should work with unyt.dimensionless too {"dimensions": unyt.dimensionless}, # supported short hand {"dimensions": "dimensionless"}, ], ) def test_dimensionless_field(add_field_kwargs): ds = fake_random_ds(16) def _dimensionless_field(field, data): return data["gas", "density"] / data["gas", "density"].units ds.add_field( name=("gas", "dimensionless_density"), function=_dimensionless_field, sampling_type="local", **add_field_kwargs, ) # check access ds.all_data()["gas", "dimensionless_density"]
yt-4.4.0/yt/data_objects/tests/test_bbox.py
# Some tests for finding bounding boxes import numpy as np from numpy.testing import assert_equal from yt.testing import assert_allclose_units, fake_amr_ds def test_object_bbox(): ds = fake_amr_ds() reg = ds.box( ds.domain_left_edge + 0.5 * ds.domain_width, ds.domain_right_edge - 0.5 * ds.domain_width, ) le, re = reg.get_bbox() assert_equal(le, ds.domain_left_edge + 0.5 * ds.domain_width) assert_equal(re, ds.domain_right_edge - 0.5 * ds.domain_width) sp = ds.sphere("c", (0.1, "unitary")) le, re = sp.get_bbox() assert_equal(le, -sp.radius + sp.center) assert_equal(re, sp.radius + sp.center) dk = ds.disk("c", [1, 1, 0], (0.25, "unitary"), (0.25, "unitary")) le, re = dk.get_bbox() le0 = ds.arr( [0.5 - 0.25 * np.sqrt(2.0), 0.5 - 0.25 * np.sqrt(2.0), 0.25], "code_length" ) re0 = ds.arr( [0.5 + 0.25 * np.sqrt(2.0), 0.5 + 0.25 * np.sqrt(2.0), 0.75], "code_length" )
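# Clarifying note on the expected values above: the disk normal [1, 1, 0]
# tilts the disk equally into x and y, so along each of those axes the
# bounding-box half-extent is radius * sqrt(2)/2 from the circular face plus
# height * sqrt(2)/2 from the thickness (the height appears to be measured
# from the midplane), i.e. the 0.25 * np.sqrt(2.0) used in le0/re0; along z
# only the face contributes, giving the [0.25, 0.75] span.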
assert_allclose_units(le, le0) assert_allclose_units(re, re0) ep = ds.ellipsoid("c", 0.3, 0.2, 0.1, np.array([0.1, 0.1, 0.1]), 0.2) le, re = ep.get_bbox() assert_equal(le, -ds.quan(0.3, "code_length") + sp.center) assert_equal(re, ds.quan(0.3, "code_length") + sp.center) spb = ds.sphere( ds.domain_center - ds.quan(0.1, "code_length"), (0.1, "code_length") ) regb = ds.box(ds.domain_center, ds.domain_center + ds.quan(0.2, "code_length")) br1 = spb & regb br2 = spb | regb br3 = spb ^ regb br4 = ~regb le1, re1 = br1.get_bbox() le2, re2 = br2.get_bbox() le3, re3 = br3.get_bbox() le4, re4 = br4.get_bbox() le0 = ds.arr([0.3, 0.3, 0.3], "code_length") re0 = ds.arr([0.7, 0.7, 0.7], "code_length") assert_allclose_units(le1, le0) assert_allclose_units(re1, re0) assert_allclose_units(le2, le0) assert_allclose_units(re2, re0) assert_allclose_units(le3, le0) assert_allclose_units(re3, re0) assert_equal(le4, regb.left_edge) assert_equal(re4, regb.right_edge) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_boolean_regions.py0000644000175100001770000006123414714401662022571 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_array_equal from yt.testing import fake_amr_ds # We use morton indices in this test because they are single floating point # values that uniquely identify each cell. That's a convenient way to compare # inclusion in set operations, since there are no duplicates. def test_boolean_spheres_no_overlap(): r"""Test to make sure that boolean objects (spheres, no overlap) behave the way we expect. Test non-overlapping spheres. This also checks that the original spheres don't change as part of constructing the booleans. """ ds = fake_amr_ds() sp1 = ds.sphere([0.25, 0.25, 0.25], 0.15) sp2 = ds.sphere([0.75, 0.75, 0.75], 0.15) # Store the original indices i1 = sp1["index", "morton_index"] i1.sort() i2 = sp2["index", "morton_index"] i2.sort() ii = np.concatenate((i1, i2)) ii.sort() # Make some booleans bo1 = sp1 & sp2 bo2 = sp1 - sp2 bo3 = sp1 | sp2 # also works with + bo4 = ds.union([sp1, sp2]) bo5 = ds.intersection([sp1, sp2]) # This makes sure the original containers didn't change. new_i1 = sp1["index", "morton_index"] new_i1.sort() new_i2 = sp2["index", "morton_index"] new_i2.sort() assert_array_equal(new_i1, i1) assert_array_equal(new_i2, i2) # Now make sure the indices also behave as we expect. empty = np.array([]) assert_array_equal(bo1["index", "morton_index"], empty) assert_array_equal(bo5["index", "morton_index"], empty) b2 = bo2["index", "morton_index"] b2.sort() assert_array_equal(b2, i1) b3 = bo3["index", "morton_index"] b3.sort() assert_array_equal(b3, ii) b4 = bo4["index", "morton_index"] b4.sort() assert_array_equal(b4, ii) bo6 = sp1 ^ sp2 b6 = bo6["index", "morton_index"] b6.sort() assert_array_equal(b6, np.setxor1d(i1, i2)) def test_boolean_spheres_overlap(): r"""Test to make sure that boolean objects (spheres, overlap) behave the way we expect. Test overlapping spheres. """ ds = fake_amr_ds() sp1 = ds.sphere([0.45, 0.45, 0.45], 0.15) sp2 = ds.sphere([0.55, 0.55, 0.55], 0.15) # Get indices of both. i1 = sp1["index", "morton_index"] i2 = sp2["index", "morton_index"] # Make some booleans bo1 = sp1 & sp2 bo2 = sp1 - sp2 bo3 = sp1 | sp2 bo4 = ds.union([sp1, sp2]) bo5 = ds.intersection([sp1, sp2]) # Now make sure the indices also behave as we expect. 
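# Each boolean object should select exactly the cells of the matching NumPy
# set operation on the morton indices: '&' <-> np.intersect1d,
# '-' <-> np.setdiff1d, '|' <-> np.union1d, '^' <-> np.setxor1d. The variable
# names below describe the geometry: two overlapping spheres intersect in a
# lens, and a sphere with that lens removed is apple-shaped.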
lens = np.intersect1d(i1, i2) apple = np.setdiff1d(i1, i2) both = np.union1d(i1, i2) b1 = bo1["index", "morton_index"] b1.sort() b2 = bo2["index", "morton_index"] b2.sort() b3 = bo3["index", "morton_index"] b3.sort() assert_array_equal(b1, lens) assert_array_equal(b2, apple) assert_array_equal(b3, both) b4 = bo4["index", "morton_index"] b4.sort() b5 = bo5["index", "morton_index"] b5.sort() assert_array_equal(b3, b4) assert_array_equal(b1, b5) bo6 = sp1 ^ sp2 b6 = bo6["index", "morton_index"] b6.sort() assert_array_equal(b6, np.setxor1d(i1, i2)) def test_boolean_regions_no_overlap(): r"""Test to make sure that boolean objects (regions, no overlap) behave the way we expect. Test non-overlapping regions. This also checks that the original regions don't change as part of constructing the booleans. """ ds = fake_amr_ds() re1 = ds.region([0.25] * 3, [0.2] * 3, [0.3] * 3) re2 = ds.region([0.65] * 3, [0.6] * 3, [0.7] * 3) # Store the original indices i1 = re1["index", "morton_index"] i1.sort() i2 = re2["index", "morton_index"] i2.sort() ii = np.concatenate((i1, i2)) ii.sort() # Make some booleans bo1 = re1 & re2 bo2 = re1 - re2 bo3 = re1 | re2 bo4 = ds.union([re1, re2]) bo5 = ds.intersection([re1, re2]) # This makes sure the original containers didn't change. new_i1 = re1["index", "morton_index"] new_i1.sort() new_i2 = re2["index", "morton_index"] new_i2.sort() assert_array_equal(new_i1, i1) assert_array_equal(new_i2, i2) # Now make sure the indices also behave as we expect. empty = np.array([]) assert_array_equal(bo1["index", "morton_index"], empty) assert_array_equal(bo5["index", "morton_index"], empty) b2 = bo2["index", "morton_index"] b2.sort() assert_array_equal(b2, i1) b3 = bo3["index", "morton_index"] b3.sort() assert_array_equal(b3, ii) b4 = bo4["index", "morton_index"] b4.sort() b5 = bo5["index", "morton_index"] b5.sort() assert_array_equal(b3, b4) bo6 = re1 ^ re2 b6 = bo6["index", "morton_index"] b6.sort() assert_array_equal(b6, np.setxor1d(i1, i2)) def test_boolean_regions_overlap(): r"""Test to make sure that boolean objects (regions, overlap) behave the way we expect. Test overlapping regions. """ ds = fake_amr_ds() re1 = ds.region([0.55] * 3, [0.5] * 3, [0.6] * 3) re2 = ds.region([0.6] * 3, [0.55] * 3, [0.65] * 3) # Get indices of both. i1 = re1["index", "morton_index"] i2 = re2["index", "morton_index"] # Make some booleans bo1 = re1 & re2 bo2 = re1 - re2 bo3 = re1 | re2 bo4 = ds.union([re1, re2]) bo5 = ds.intersection([re1, re2]) # Now make sure the indices also behave as we expect. cube = np.intersect1d(i1, i2) bite_cube = np.setdiff1d(i1, i2) both = np.union1d(i1, i2) b1 = bo1["index", "morton_index"] b1.sort() b2 = bo2["index", "morton_index"] b2.sort() b3 = bo3["index", "morton_index"] b3.sort() assert_array_equal(b1, cube) assert_array_equal(b2, bite_cube) assert_array_equal(b3, both) b4 = bo4["index", "morton_index"] b4.sort() b5 = bo5["index", "morton_index"] b5.sort() assert_array_equal(b3, b4) assert_array_equal(b1, b5) bo6 = re1 ^ re2 b6 = bo6["index", "morton_index"] b6.sort() assert_array_equal(b6, np.setxor1d(i1, i2)) def test_boolean_cylinders_no_overlap(): r"""Test to make sure that boolean objects (cylinders, no overlap) behave the way we expect. Test non-overlapping cylinders. This also checks that the original cylinders don't change as part of constructing the booleans. 
""" ds = fake_amr_ds() cyl1 = ds.disk([0.25] * 3, [1, 0, 0], 0.1, 0.1) cyl2 = ds.disk([0.75] * 3, [1, 0, 0], 0.1, 0.1) # Store the original indices i1 = cyl1["index", "morton_index"] i1.sort() i2 = cyl2["index", "morton_index"] i2.sort() ii = np.concatenate((i1, i2)) ii.sort() # Make some booleans bo1 = cyl1 & cyl2 bo2 = cyl1 - cyl2 bo3 = cyl1 | cyl2 bo4 = ds.union([cyl1, cyl2]) bo5 = ds.intersection([cyl1, cyl2]) # This makes sure the original containers didn't change. new_i1 = cyl1["index", "morton_index"] new_i1.sort() new_i2 = cyl2["index", "morton_index"] new_i2.sort() assert_array_equal(new_i1, i1) assert_array_equal(new_i2, i2) # Now make sure the indices also behave as we expect. empty = np.array([]) assert_array_equal(bo1["index", "morton_index"], empty) assert_array_equal(bo5["index", "morton_index"], empty) b2 = bo2["index", "morton_index"] b2.sort() assert_array_equal(b2, i1) b3 = bo3["index", "morton_index"] b3.sort() assert_array_equal(b3, ii) b4 = bo4["index", "morton_index"] b4.sort() b5 = bo5["index", "morton_index"] b5.sort() assert_array_equal(b3, b4) bo6 = cyl1 ^ cyl2 b6 = bo6["index", "morton_index"] b6.sort() assert_array_equal(b6, np.setxor1d(i1, i2)) def test_boolean_cylinders_overlap(): r"""Test to make sure that boolean objects (cylinders, overlap) behave the way we expect. Test overlapping cylinders. """ ds = fake_amr_ds() cyl1 = ds.disk([0.45] * 3, [1, 0, 0], 0.2, 0.2) cyl2 = ds.disk([0.55] * 3, [1, 0, 0], 0.2, 0.2) # Get indices of both. i1 = cyl1["index", "morton_index"] i2 = cyl2["index", "morton_index"] # Make some booleans bo1 = cyl1 & cyl2 bo2 = cyl1 - cyl2 bo3 = cyl1 | cyl2 bo4 = ds.union([cyl1, cyl2]) bo5 = ds.intersection([cyl1, cyl2]) # Now make sure the indices also behave as we expect. vlens = np.intersect1d(i1, i2) bite_disk = np.setdiff1d(i1, i2) both = np.union1d(i1, i2) b1 = bo1["index", "morton_index"] b1.sort() b2 = bo2["index", "morton_index"] b2.sort() b3 = bo3["index", "morton_index"] b3.sort() assert_array_equal(b1, vlens) assert_array_equal(b2, bite_disk) assert_array_equal(b3, both) b4 = bo4["index", "morton_index"] b4.sort() b5 = bo5["index", "morton_index"] b5.sort() assert_array_equal(b3, b4) assert_array_equal(b1, b5) bo6 = cyl1 ^ cyl2 b6 = bo6["index", "morton_index"] b6.sort() assert_array_equal(b6, np.setxor1d(i1, i2)) del ds def test_boolean_ellipsoids_no_overlap(): r"""Test to make sure that boolean objects (ellipsoids, no overlap) behave the way we expect. Test non-overlapping ellipsoids. This also checks that the original ellipsoids don't change as part of constructing the booleans. """ ds = fake_amr_ds() ell1 = ds.ellipsoid([0.25] * 3, 0.05, 0.05, 0.05, np.array([0.1] * 3), 0.1) ell2 = ds.ellipsoid([0.75] * 3, 0.05, 0.05, 0.05, np.array([0.1] * 3), 0.1) # Store the original indices i1 = ell1["index", "morton_index"] i1.sort() i2 = ell2["index", "morton_index"] i2.sort() ii = np.concatenate((i1, i2)) ii.sort() # Make some booleans bo1 = ell1 & ell2 bo2 = ell1 - ell2 bo3 = ell1 | ell2 bo4 = ds.union([ell1, ell2]) bo5 = ds.intersection([ell1, ell2]) # This makes sure the original containers didn't change. new_i1 = ell1["index", "morton_index"] new_i1.sort() new_i2 = ell2["index", "morton_index"] new_i2.sort() assert_array_equal(new_i1, i1) assert_array_equal(new_i2, i2) # Now make sure the indices also behave as we expect. 
empty = np.array([]) assert_array_equal(bo1["index", "morton_index"], empty) assert_array_equal(bo5["index", "morton_index"], empty) b2 = bo2["index", "morton_index"] b2.sort() assert_array_equal(b2, i1) b3 = bo3["index", "morton_index"] b3.sort() assert_array_equal(b3, ii) b4 = bo4["index", "morton_index"] b4.sort() b5 = bo5["index", "morton_index"] b5.sort() assert_array_equal(b3, b4) bo6 = ell1 ^ ell2 b6 = bo6["index", "morton_index"] b6.sort() assert_array_equal(b6, np.setxor1d(i1, i2)) def test_boolean_ellipsoids_overlap(): r"""Test to make sure that boolean objects (ellipsoids, overlap) behave the way we expect. Test overlapping ellipsoids. """ ds = fake_amr_ds() ell1 = ds.ellipsoid([0.45] * 3, 0.05, 0.05, 0.05, np.array([0.1] * 3), 0.1) ell2 = ds.ellipsoid([0.55] * 3, 0.05, 0.05, 0.05, np.array([0.1] * 3), 0.1) # Get indices of both. i1 = ell1["index", "morton_index"] i2 = ell2["index", "morton_index"] # Make some booleans bo1 = ell1 & ell2 bo2 = ell1 - ell2 bo3 = ell1 | ell2 bo4 = ds.union([ell1, ell2]) bo5 = ds.intersection([ell1, ell2]) # Now make sure the indices also behave as we expect. overlap = np.intersect1d(i1, i2) diff = np.setdiff1d(i1, i2) both = np.union1d(i1, i2) b1 = bo1["index", "morton_index"] b1.sort() b2 = bo2["index", "morton_index"] b2.sort() b3 = bo3["index", "morton_index"] b3.sort() assert_array_equal(b1, overlap) assert_array_equal(b2, diff) assert_array_equal(b3, both) b4 = bo4["index", "morton_index"] b4.sort() b5 = bo5["index", "morton_index"] b5.sort() assert_array_equal(b3, b4) assert_array_equal(b1, b5) bo6 = ell1 ^ ell2 b6 = bo6["index", "morton_index"] b6.sort() assert_array_equal(b6, np.setxor1d(i1, i2)) def test_boolean_mix_periodicity(): r"""Test that a hybrid boolean region behaves as we expect. This also tests nested logic and that periodicity works. """ ds = fake_amr_ds() re = ds.region([0.5] * 3, [0.0] * 3, [1] * 3) # whole thing sp = ds.sphere([0.95] * 3, 0.3) # wraps around cyl = ds.disk([0.05] * 3, [1, 1, 1], 0.1, 0.4) # wraps around # Get original indices rei = re["index", "morton_index"] spi = sp["index", "morton_index"] cyli = cyl["index", "morton_index"] # Make some booleans # whole box minus spherical bites at corners bo1 = re - sp # sphere plus cylinder bo2 = sp | cyl # a jumble, the region minus the sp+cyl bo3 = re - (sp | cyl) # Now make sure the indices also behave as we expect. bo4 = ds.union([re, sp, cyl]) bo5 = ds.intersection([re, sp, cyl]) expect = np.setdiff1d(rei, spi) ii = bo1["index", "morton_index"] ii.sort() assert_array_equal(expect, ii) # expect = np.union1d(spi, cyli) ii = bo2["index", "morton_index"] ii.sort() assert_array_equal(expect, ii) # expect = np.union1d(spi, cyli) expect = np.setdiff1d(rei, expect) ii = bo3["index", "morton_index"] ii.sort() assert_array_equal(expect, ii) b4 = bo4["index", "morton_index"] b4.sort() b5 = bo5["index", "morton_index"] b5.sort() ii = np.union1d(np.union1d(rei, cyli), spi) ii.sort() assert_array_equal(ii, b4) ii = np.intersect1d(np.intersect1d(rei, cyli), spi) ii.sort() assert_array_equal(ii, b5) bo6 = (re ^ sp) ^ cyl b6 = bo6["index", "morton_index"] b6.sort() assert_array_equal(b6, np.setxor1d(np.setxor1d(rei, spi), cyli)) def test_boolean_ray_region_no_overlap(): r"""Test to make sure that boolean objects (ray, region, no overlap) behave the way we expect. Test non-overlapping ray and region. This also checks that the original objects don't change as part of constructing the booleans.
""" ds = fake_amr_ds() re = ds.box([0.25] * 3, [0.75] * 3) ra = ds.ray([0.1] * 3, [0.1, 0.1, 0.9]) # Store the original indices i1 = re["index", "morton_index"] i1.sort() i2 = ra["index", "morton_index"] i2.sort() ii = np.concatenate((i1, i2)) ii.sort() # Make some booleans bo1 = re & ra bo2 = re - ra bo3 = re | ra bo4 = ds.union([re, ra]) bo5 = ds.intersection([re, ra]) # This makes sure the original containers didn't change. new_i1 = re["index", "morton_index"] new_i1.sort() new_i2 = ra["index", "morton_index"] new_i2.sort() assert_array_equal(new_i1, i1) assert_array_equal(new_i2, i2) # Now make sure the indices also behave as we expect. empty = np.array([]) assert_array_equal(bo1["index", "morton_index"], empty) assert_array_equal(bo5["index", "morton_index"], empty) b2 = bo2["index", "morton_index"] b2.sort() assert_array_equal(b2, i1) b3 = bo3["index", "morton_index"] b3.sort() assert_array_equal(b3, ii) b4 = bo4["index", "morton_index"] b4.sort() b5 = bo5["index", "morton_index"] b5.sort() assert_array_equal(b3, b4) bo6 = re ^ ra b6 = bo6["index", "morton_index"] b6.sort() assert_array_equal(b6, np.setxor1d(i1, i2)) def test_boolean_ray_region_overlap(): r"""Test to make sure that boolean objects (ray, region, overlap) behave the way we expect. Test overlapping ray and region. This also checks that the original objects don't change as part of constructing the booleans. """ ds = fake_amr_ds() re = ds.box([0.25] * 3, [0.75] * 3) ra = ds.ray([0] * 3, [1] * 3) # Get indices of both. i1 = re["index", "morton_index"] i2 = ra["index", "morton_index"] # Make some booleans bo1 = re & ra bo2 = re - ra bo3 = re | ra bo4 = ds.union([re, ra]) bo5 = ds.intersection([re, ra]) # Now make sure the indices also behave as we expect. short_line = np.intersect1d(i1, i2) cube_minus_line = np.setdiff1d(i1, i2) both = np.union1d(i1, i2) b1 = bo1["index", "morton_index"] b1.sort() b2 = bo2["index", "morton_index"] b2.sort() b3 = bo3["index", "morton_index"] b3.sort() assert_array_equal(b1, short_line) assert_array_equal(b2, cube_minus_line) assert_array_equal(b3, both) b4 = bo4["index", "morton_index"] b4.sort() b5 = bo5["index", "morton_index"] b5.sort() assert_array_equal(b3, b4) assert_array_equal(b1, b5) bo6 = re ^ ra b6 = bo6["index", "morton_index"] b6.sort() assert_array_equal(b6, np.setxor1d(i1, i2)) def test_boolean_rays_no_overlap(): r"""Test to make sure that boolean objects (rays, no overlap) behave the way we expect. Test non-overlapping rays. """ ds = fake_amr_ds() ra1 = ds.ray([0, 0, 0], [0, 0, 1]) ra2 = ds.ray([1, 0, 0], [1, 0, 1]) # Store the original indices i1 = ra1["index", "morton_index"] i1.sort() i2 = ra2["index", "morton_index"] i2.sort() ii = np.concatenate((i1, i2)) ii.sort() # Make some booleans bo1 = ra1 & ra2 bo2 = ra1 - ra2 bo3 = ra1 | ra2 bo4 = ds.union([ra1, ra2]) bo5 = ds.intersection([ra1, ra2]) # This makes sure the original containers didn't change. new_i1 = ra1["index", "morton_index"] new_i1.sort() new_i2 = ra2["index", "morton_index"] new_i2.sort() assert_array_equal(new_i1, i1) assert_array_equal(new_i2, i2) # Now make sure the indices also behave as we expect. 
empty = np.array([]) assert_array_equal(bo1["index", "morton_index"], empty) assert_array_equal(bo5["index", "morton_index"], empty) b2 = bo2["index", "morton_index"] b2.sort() assert_array_equal(b2, i1) b3 = bo3["index", "morton_index"] b3.sort() assert_array_equal(b3, ii) b4 = bo4["index", "morton_index"] b4.sort() b5 = bo5["index", "morton_index"] b5.sort() assert_array_equal(b3, b4) bo6 = ra1 ^ ra2 b6 = bo6["index", "morton_index"] b6.sort() assert_array_equal(b6, np.setxor1d(i1, i2)) def test_boolean_rays_overlap(): r"""Test to make sure that boolean objects (rays, overlap) behave the way we expect. Test overlapping rays. """ ds = fake_amr_ds() ra1 = ds.ray([0] * 3, [1] * 3) ra2 = ds.ray([0] * 3, [0.5] * 3) # Get indices of both. i1 = ra1["index", "morton_index"] i1.sort() i2 = ra2["index", "morton_index"] i2.sort() ii = np.concatenate((i1, i2)) ii.sort() # Make some booleans bo1 = ra1 & ra2 bo2 = ra1 - ra2 bo3 = ra1 | ra2 bo4 = ds.union([ra1, ra2]) bo5 = ds.intersection([ra1, ra2]) # Now make sure the indices also behave as we expect. short_line = np.intersect1d(i1, i2) short_line_b = np.setdiff1d(i1, i2) full_line = np.union1d(i1, i2) b1 = bo1["index", "morton_index"] b1.sort() b2 = bo2["index", "morton_index"] b2.sort() b3 = bo3["index", "morton_index"] b3.sort() assert_array_equal(b1, short_line) assert_array_equal(b2, short_line_b) assert_array_equal(b3, full_line) b4 = bo4["index", "morton_index"] b4.sort() b5 = bo5["index", "morton_index"] b5.sort() assert_array_equal(b3, i1) assert_array_equal(b3, b4) assert_array_equal(b1, b5) bo6 = ra1 ^ ra2 b6 = bo6["index", "morton_index"] b6.sort() assert_array_equal(b6, np.setxor1d(i1, i2)) def test_boolean_slices_no_overlap(): r"""Test to make sure that boolean objects (slices, no overlap) behave the way we expect. Test non-overlapping slices. This also checks that the original regions don't change as part of constructing the booleans. """ ds = fake_amr_ds() sl1 = ds.r[:, :, 0.25] sl2 = ds.r[:, :, 0.75] # Store the original indices i1 = sl1["index", "morton_index"] i1.sort() i2 = sl2["index", "morton_index"] i2.sort() ii = np.concatenate((i1, i2)) ii.sort() # Make some booleans bo1 = sl1 & sl2 bo2 = sl1 - sl2 bo3 = sl1 | sl2 bo4 = ds.union([sl1, sl2]) bo5 = ds.intersection([sl1, sl2]) # This makes sure the original containers didn't change. new_i1 = sl1["index", "morton_index"] new_i1.sort() new_i2 = sl2["index", "morton_index"] new_i2.sort() assert_array_equal(new_i1, i1) assert_array_equal(new_i2, i2) # Now make sure the indices also behave as we expect. empty = np.array([]) assert_array_equal(bo1["index", "morton_index"], empty) assert_array_equal(bo5["index", "morton_index"], empty) b2 = bo2["index", "morton_index"] b2.sort() assert_array_equal(b2, i1) b3 = bo3["index", "morton_index"] b3.sort() assert_array_equal(b3, ii) b4 = bo4["index", "morton_index"] b4.sort() b5 = bo5["index", "morton_index"] b5.sort() assert_array_equal(b3, b4) bo6 = sl1 ^ sl2 b6 = bo6["index", "morton_index"] b6.sort() assert_array_equal(b6, np.setxor1d(i1, i2)) def test_boolean_slices_overlap(): r"""Test to make sure that boolean objects (slices, overlap) behave the way we expect. Test overlapping slices. """ ds = fake_amr_ds() sl1 = ds.r[:, :, 0.25] sl2 = ds.r[:, 0.75, :] # Get indices of both. i1 = sl1["index", "morton_index"] i2 = sl2["index", "morton_index"] # Make some booleans bo1 = sl1 & sl2 bo2 = sl1 - sl2 bo3 = sl1 | sl2 bo4 = ds.union([sl1, sl2]) bo5 = ds.intersection([sl1, sl2]) # Now make sure the indices also behave as we expect.
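# sl1 is a sheet of cells at z == 0.25 and sl2 a sheet at y == 0.75, so the
# two slices cross in a single line of cells; 'line' captures exactly that.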
line = np.intersect1d(i1, i2) orig = np.setdiff1d(i1, i2) both = np.union1d(i1, i2) b1 = bo1["index", "morton_index"] b1.sort() b2 = bo2["index", "morton_index"] b2.sort() b3 = bo3["index", "morton_index"] b3.sort() assert_array_equal(b1, line) assert_array_equal(b2, orig) assert_array_equal(b3, both) b4 = bo4["index", "morton_index"] b4.sort() b5 = bo5["index", "morton_index"] b5.sort() assert_array_equal(b3, b4) assert_array_equal(b1, b5) bo6 = sl1 ^ sl2 b6 = bo6["index", "morton_index"] b6.sort() assert_array_equal(b6, np.setxor1d(i1, i2)) def test_boolean_ray_slice_no_overlap(): r"""Test to make sure that boolean objects (ray, slice, no overlap) behave the way we expect. Test non-overlapping ray and slice. This also checks that the original regions don't change as part of constructing the booleans. """ ds = fake_amr_ds() sl = ds.r[:, :, 0.25] ra = ds.ray([0] * 3, [0, 1, 0]) # Store the original indices i1 = sl["index", "morton_index"] i1.sort() i2 = ra["index", "morton_index"] i2.sort() ii = np.concatenate((i1, i2)) ii.sort() # Make some booleans bo1 = sl & ra bo2 = sl - ra bo3 = sl | ra bo4 = ds.union([sl, ra]) bo5 = ds.intersection([sl, ra]) # This makes sure the original containers didn't change. new_i1 = sl["index", "morton_index"] new_i1.sort() new_i2 = ra["index", "morton_index"] new_i2.sort() assert_array_equal(new_i1, i1) assert_array_equal(new_i2, i2) # Now make sure the indices also behave as we expect. empty = np.array([]) assert_array_equal(bo1["index", "morton_index"], empty) assert_array_equal(bo5["index", "morton_index"], empty) b2 = bo2["index", "morton_index"] b2.sort() assert_array_equal(b2, i1) b3 = bo3["index", "morton_index"] b3.sort() assert_array_equal(b3, ii) b4 = bo4["index", "morton_index"] b4.sort() b5 = bo5["index", "morton_index"] b5.sort() assert_array_equal(b3, b4) bo6 = sl ^ ra b6 = bo6["index", "morton_index"] b6.sort() assert_array_equal(b6, np.setxor1d(i1, i2)) def test_boolean_ray_slice_overlap(): r"""Test to make sure that boolean objects (rays and slices, overlap) behave the way we expect. Test overlapping rays and slices. """ ds = fake_amr_ds() sl = ds.r[:, :, 0.25] ra = ds.ray([0, 0, 0.25], [0, 1, 0.25]) # Get indices of both. i1 = sl["index", "morton_index"] i1.sort() i2 = ra["index", "morton_index"] i2.sort() ii = np.concatenate((i1, i2)) ii.sort() # Make some booleans bo1 = sl & ra bo2 = sl - ra bo3 = sl | ra bo4 = ds.union([sl, ra]) bo5 = ds.intersection([sl, ra]) # Now make sure the indices also behave as we expect.
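# Here the ray lies entirely within the slice plane (both sit at z == 0.25),
# so the intersection is the ray itself and the union is just the slice --
# which is why b3 is also compared against i1 below.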
line = np.intersect1d(i1, i2) sheet_minus_line = np.setdiff1d(i1, i2) sheet = np.union1d(i1, i2) b1 = bo1["index", "morton_index"] b1.sort() b2 = bo2["index", "morton_index"] b2.sort() b3 = bo3["index", "morton_index"] b3.sort() assert_array_equal(b1, line) assert_array_equal(b2, sheet_minus_line) assert_array_equal(b3, sheet) b4 = bo4["index", "morton_index"] b4.sort() b5 = bo5["index", "morton_index"] b5.sort() assert_array_equal(b3, i1) assert_array_equal(b3, b4) assert_array_equal(b1, b5) bo6 = sl ^ ra b6 = bo6["index", "morton_index"] b6.sort() assert_array_equal(b6, np.setxor1d(i1, i2)) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_center_squeeze.py0000644000175100001770000000163514714401662022444 0ustar00runnerdockerfrom numpy.testing import assert_equal from yt.testing import fake_amr_ds, fake_particle_ds, fake_random_ds def test_center_squeeze(): # checks that the center is reshaped correctly # create and test amr, random and particle data check_single_ds(fake_amr_ds(fields=("Density",), units=("g/cm**3",))) check_single_ds(fake_random_ds(16, fields=("Density",), units=("g/cm**3",))) check_single_ds(fake_particle_ds(npart=100)) def check_single_ds(ds): # checks that the center center = ds.domain_center # reference center value for test_shape in [(1, 3), (1, 1, 3)]: new_center = center.reshape(test_shape) assert_equal(ds.sphere(new_center, 0.25).center, center) assert_equal(ds.slice(0, 0.25, center=new_center).center, center) assert_equal( ds.region(new_center, [-0.25, -0.25, -0.25], [0.25, 0.25, 0.25]).center, center, ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_chunking.py0000644000175100001770000000427414714401662021233 0ustar00runnerdockerfrom numpy.testing import assert_equal from yt.testing import fake_random_ds from yt.units._numpy_wrapper_functions import uconcatenate def _get_dobjs(c): dobjs = [ ("sphere", ("center", (1.0, "unitary"))), ("sphere", ("center", (0.1, "unitary"))), ("ortho_ray", (0, (c[1], c[2]))), ("slice", (0, c[0])), # ("disk", ("center", [0.1, 0.3, 0.6], # (0.2, 'unitary'), (0.1, 'unitary'))), ("cutting", ([0.1, 0.3, 0.6], "center")), ("all_data", ()), ] return dobjs def test_chunking(): for nprocs in [1, 2, 4, 8]: ds = fake_random_ds(64, nprocs=nprocs) c = (ds.domain_right_edge + ds.domain_left_edge) / 2.0 c += ds.arr(0.5 / ds.domain_dimensions, "code_length") for dobj in _get_dobjs(c): obj = getattr(ds, dobj[0])(*dobj[1]) coords = {"f": {}, "i": {}} for t in ["io", "all", "spatial"]: coords["i"][t] = [] coords["f"][t] = [] for chunk in obj.chunks(None, t): coords["f"][t].append(chunk.fcoords[:, :]) coords["i"][t].append(chunk.icoords[:, :]) coords["f"][t] = uconcatenate(coords["f"][t]) coords["i"][t] = uconcatenate(coords["i"][t]) coords["f"][t].sort() coords["i"][t].sort() assert_equal(coords["f"]["io"], coords["f"]["all"]) assert_equal(coords["f"]["io"], coords["f"]["spatial"]) assert_equal(coords["i"]["io"], coords["i"]["all"]) assert_equal(coords["i"]["io"], coords["i"]["spatial"]) def test_ds_hold(): ds1 = fake_random_ds(64) ds2 = fake_random_ds(128) dd = ds1.all_data() # dd.ds is a weakref, so can't use "is" assert dd.ds.__hash__() == ds1.__hash__() assert dd.index is ds1.index assert_equal(dd["index", "ones"].size, 64**3) with dd._ds_hold(ds2): assert dd.ds.__hash__() == ds2.__hash__() assert dd.index is ds2.index assert_equal(dd["index", "ones"].size, 128**3) assert 
dd.ds.__hash__() == ds1.__hash__() assert dd.index is ds1.index assert_equal(dd["index", "ones"].size, 64**3) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_clone.py0000644000175100001770000000216114714401662020516 0ustar00runnerdockerfrom numpy.testing import assert_array_equal, assert_equal from yt.testing import fake_random_ds def test_clone_sphere(): # Now we test that we can get different radial velocities based on field # parameters. fields = ("density", "velocity_x", "velocity_y", "velocity_z") units = ("g/cm**3", "cm/s", "cm/s", "cm/s") # Get the first sphere ds = fake_random_ds(16, fields=fields, units=units) sp0 = ds.sphere(ds.domain_center, 0.25) assert_equal(list(sp0.keys()), []) sp1 = sp0.clone() sp0["gas", "density"] assert_equal(list(sp0.keys()), (("gas", "density"),)) assert_equal(list(sp1.keys()), []) sp1["gas", "density"] assert_array_equal(sp0["gas", "density"], sp1["gas", "density"]) def test_clone_cut_region(): fields = ("density", "temperature") units = ("g/cm**3", "K") ds = fake_random_ds(64, nprocs=4, fields=fields, units=units) dd = ds.all_data() reg1 = dd.cut_region( ["obj['gas', 'temperature'] > 0.5", "obj['gas', 'density'] < 0.75"] ) reg2 = reg1.clone() assert_array_equal(reg1["gas", "density"], reg2["gas", "density"]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_compose.py0000644000175100001770000001344714714401662021074 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_array_equal from yt.testing import fake_amr_ds, fake_random_ds from yt.units._numpy_wrapper_functions import uintersect1d from yt.units.yt_array import YTArray def setup_module(): from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True # Copied from test_boolean for computing a unique identifier for # each cell from cell positions def _IDFIELD(field, data): width = data.ds.domain_right_edge - data.ds.domain_left_edge min_dx = YTArray(1.0 / 8192, units="code_length", registry=data.ds.unit_registry) delta = width / min_dx x = data["index", "x"] - min_dx / 2.0 y = data["index", "y"] - min_dx / 2.0 z = data["index", "z"] - min_dx / 2.0 xi = x / min_dx yi = y / min_dx zi = z / min_dx index = xi + delta[0] * (yi + delta[1] * zi) return index def test_compose_no_overlap(): r"""Test to make sure that composed data objects that don't overlap behave the way we expect (return empty collections) """ empty = np.array([]) for n in [1, 2, 4, 8]: ds = fake_random_ds(64, nprocs=n) ds.add_field(("index", "ID"), sampling_type="cell", function=_IDFIELD) # position parameters for initial region center = [0.25] * 3 left_edge = [0.1] * 3 right_edge = [0.4] * 3 normal = [1, 0, 0] radius = height = 0.15 # initial 3D regions sources = [ ds.sphere(center, radius), ds.region(center, left_edge, right_edge), ds.disk(center, normal, radius, height), ] # position parameters for non-overlapping regions center = [0.75] * 3 left_edge = [0.6] * 3 right_edge = [0.9] * 3 # subselect non-overlapping 0, 1, 2, 3D regions for data1 in sources: data2 = ds.sphere(center, radius, data_source=data1) assert_array_equal(data2["index", "ID"], empty) data2 = ds.region(center, left_edge, right_edge, data_source=data1) assert_array_equal(data2["index", "ID"], empty) data2 = ds.disk(center, normal, radius, height, data_source=data1) assert_array_equal(data2["index", "ID"], empty) for d in range(3): data2 = ds.slice(d, center[d], 
data_source=data1) assert_array_equal(data2["index", "ID"], empty) for d in range(3): data2 = ds.ortho_ray( d, center[0:d] + center[d + 1 :], data_source=data1 ) assert_array_equal(data2["index", "ID"], empty) data2 = ds.point(center, data_source=data1) assert_array_equal(data2["index", "ID"], empty) def test_compose_overlap(): r"""Test to make sure that composed data objects that do overlap behave the way we expect """ for n in [1, 2, 4, 8]: ds = fake_random_ds(64, nprocs=n) ds.add_field(("index", "ID"), sampling_type="cell", function=_IDFIELD) # position parameters for initial region center = [0.4, 0.5, 0.5] left_edge = [0.1] * 3 right_edge = [0.7] * 3 normal = [1, 0, 0] radius = height = 0.15 # initial 3D regions sources = [ ds.sphere(center, radius), ds.region(center, left_edge, right_edge), ds.disk(center, normal, radius, height), ] # position parameters for overlapping regions center = [0.6, 0.5, 0.5] left_edge = [0.3] * 3 right_edge = [0.9] * 3 # subselect overlapping 0, 1, 2, 3D regions for data1 in sources: id1 = data1["index", "ID"] data2 = ds.sphere(center, radius) data3 = ds.sphere(center, radius, data_source=data1) id2 = data2["index", "ID"] id3 = data3["index", "ID"] id3.sort() assert_array_equal(uintersect1d(id1, id2), id3) data2 = ds.region(center, left_edge, right_edge) data3 = ds.region(center, left_edge, right_edge, data_source=data1) id2 = data2["index", "ID"] id3 = data3["index", "ID"] id3.sort() assert_array_equal(uintersect1d(id1, id2), id3) data2 = ds.disk(center, normal, radius, height) data3 = ds.disk(center, normal, radius, height, data_source=data1) id2 = data2["index", "ID"] id3 = data3["index", "ID"] id3.sort() assert_array_equal(uintersect1d(id1, id2), id3) for d in range(3): data2 = ds.slice(d, center[d]) data3 = ds.slice(d, center[d], data_source=data1) id2 = data2["index", "ID"] id3 = data3["index", "ID"] id3.sort() assert_array_equal(uintersect1d(id1, id2), id3) for d in range(3): data2 = ds.ortho_ray(d, center[0:d] + center[d + 1 :]) data3 = ds.ortho_ray( d, center[0:d] + center[d + 1 :], data_source=data1 ) id2 = data2["index", "ID"] id3 = data3["index", "ID"] id3.sort() assert_array_equal(uintersect1d(id1, id2), id3) data2 = ds.point(center) data3 = ds.point(center, data_source=data1) id2 = data2["index", "ID"] id3 = data3["index", "ID"] id3.sort() assert_array_equal(uintersect1d(id1, id2), id3) def test_compose_max_level_min_level(): ds = fake_amr_ds() ad = ds.all_data() ad.max_level = 2 slc = ds.slice("x", 0.5, data_source=ad) assert slc["index", "grid_level"].max() == 2 frb = slc.to_frb(1.0, 128) assert np.all(frb["stream", "Density"] > 0) assert frb["index", "grid_level"].max() == 2
yt-4.4.0/yt/data_objects/tests/test_connected_sets.py
from yt.testing import fake_random_ds from yt.utilities.answer_testing.level_sets_tests import ExtractConnectedSetsTest def test_connected_sets(): ds = fake_random_ds(16, nprocs=8, particles=16**3) data_source = ds.disk([0.5, 0.5, 0.5], [0.0, 0.0, 1.0], (8, "kpc"), (1, "kpc")) field = ("gas", "density") min_val, max_val = data_source[field].min() / 2, data_source[field].max() / 2 data_source.extract_connected_sets( field, 5, min_val, max_val, log_space=True, cumulative=True ) yield ExtractConnectedSetsTest(ds, data_source, field, 5, min_val, max_val)
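# A minimal usage sketch of the composition idiom exercised by the tests
# above (added for illustration; not a file from the yt distribution). It
# assumes only the fake_random_ds helper already imported in those tests;
# names like `sp` and `sl` are arbitrary.
from yt.testing import fake_random_ds

ds = fake_random_ds(64)
sp = ds.sphere([0.4, 0.5, 0.5], 0.15)  # base selection
sl = ds.slice(0, 0.4, data_source=sp)  # slice restricted to the sphere
# the composed object selects the intersection of the two selections,
# mirroring the uintersect1d() checks in test_compose_overlap
assert sl["index", "x"].size <= sp["index", "x"].size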
yt-4.4.0/yt/data_objects/tests/test_covering_grid.py0000644000175100001770000003421414714401662022243 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_almost_equal, assert_array_equal, assert_equal from yt.fields.derived_field import ValidateParameter from yt.loaders import load, load_particles from yt.testing import ( fake_octree_ds, fake_random_ds, requires_file, requires_module, ) from yt.units import kpc # cylindrical data for covering_grid test cyl_2d = "WDMerger_hdf5_chk_1000/WDMerger_hdf5_chk_1000.hdf5" cyl_3d = "MHD_Cyl3d_hdf5_plt_cnt_0100/MHD_Cyl3d_hdf5_plt_cnt_0100.hdf5" def setup_module(): from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True @requires_module("h5py") @requires_file(cyl_2d) @requires_file(cyl_3d) def test_covering_grid(): # We decompose in different ways for level in [0, 1, 2]: for nprocs in [1, 2, 4, 8]: ds = fake_random_ds(16, nprocs=nprocs) axis_name = ds.coordinates.axis_name dn = ds.refine_by**level cg = ds.covering_grid(level, [0.0, 0.0, 0.0], dn * ds.domain_dimensions) # Test coordinate generation assert_equal(np.unique(cg["index", f"d{axis_name[0]}"]).size, 1) xmi = cg["index", axis_name[0]].min() xma = cg["index", axis_name[0]].max() dx = cg["index", f"d{axis_name[0]}"].flat[0:1] edges = ds.arr([[0, 1], [0, 1], [0, 1]], "code_length") assert_equal(xmi, edges[0, 0] + dx / 2.0) assert_equal(xmi, cg["index", axis_name[0]][0, 0, 0]) assert_equal(xmi, cg["index", axis_name[0]][0, 1, 1]) assert_equal(xma, edges[0, 1] - dx / 2.0) assert_equal(xma, cg["index", axis_name[0]][-1, 0, 0]) assert_equal(xma, cg["index", axis_name[0]][-1, 1, 1]) assert_equal(np.unique(cg["index", f"d{axis_name[1]}"]).size, 1) ymi = cg["index", axis_name[1]].min() yma = cg["index", axis_name[1]].max() dy = cg["index", f"d{axis_name[1]}"][0] assert_equal(ymi, edges[1, 0] + dy / 2.0) assert_equal(ymi, cg["index", axis_name[1]][0, 0, 0]) assert_equal(ymi, cg["index", axis_name[1]][1, 0, 1]) assert_equal(yma, edges[1, 1] - dy / 2.0) assert_equal(yma, cg["index", axis_name[1]][0, -1, 0]) assert_equal(yma, cg["index", axis_name[1]][1, -1, 1]) assert_equal(np.unique(cg["index", f"d{axis_name[2]}"]).size, 1) zmi = cg["index", axis_name[2]].min() zma = cg["index", axis_name[2]].max() dz = cg["index", f"d{axis_name[2]}"][0] assert_equal(zmi, edges[2, 0] + dz / 2.0) assert_equal(zmi, cg["index", axis_name[2]][0, 0, 0]) assert_equal(zmi, cg["index", axis_name[2]][1, 1, 0]) assert_equal(zma, edges[2, 1] - dz / 2.0) assert_equal(zma, cg["index", axis_name[2]][0, 0, -1]) assert_equal(zma, cg["index", axis_name[2]][1, 1, -1]) # Now we test other attributes assert_equal(cg["index", "ones"].max(), 1.0) assert_equal(cg["index", "ones"].min(), 1.0) assert_equal(cg["index", "grid_level"], level) assert_equal(cg["index", "cell_volume"].sum(), ds.domain_width.prod()) for g in ds.index.grids: di = g.get_global_startindex() dd = g.ActiveDimensions for i in range(dn): f = cg["gas", "density"][ dn * di[0] + i : dn * (di[0] + dd[0]) + i : dn, dn * di[1] + i : dn * (di[1] + dd[1]) + i : dn, dn * di[2] + i : dn * (di[2] + dd[2]) + i : dn, ] assert_equal(f, g["gas", "density"]) # More tests for cylindrical geometry for fn in [cyl_2d, cyl_3d]: ds = load(fn) ad = ds.all_data() upper_ad = ad.cut_region(["obj['index', 'z'] > 0"]) sp = ds.sphere((0, 0, 0), 0.5 * ds.domain_width[0], data_source=upper_ad) sp.quantities.total_mass() @requires_module("h5py") @requires_file(cyl_2d) @requires_file(cyl_3d) def test_covering_grid_data_source(): # test the data_source kwarg (new with 
PR 4063) for level in [0, 1, 2]: for nprocs in [1, 2, 4, 8]: ds = fake_random_ds(16, nprocs=nprocs) dn = ds.refine_by**level cg = ds.covering_grid(level, [0.0, 0.0, 0.0], dn * ds.domain_dimensions) # check that the regularized sphere is centered where it should be, # to within the tolerance of the grid. Use a center offset from # the domain center by a bit center = ds.domain_center + np.min(ds.domain_width) * 0.15 sp = ds.sphere(center, (0.1, "code_length")) dims = dn * ds.domain_dimensions cg_sp = ds.covering_grid(level, [0.0, 0.0, 0.0], dims, data_source=sp) # find the discrete center of the sphere by averaging the position # along each dimension where values are non-zero cg_mask = cg_sp["gas", "density"] != 0 discrete_c = ds.arr( [ np.mean(cg_sp["index", dim][cg_mask]) for dim in ds.coordinates.axis_order ] ) # should be centered to within the tolerance of the grid at least grid_tol = ds.arr( [cg_sp["index", dim][0, 0, 0] for dim in ds.coordinates.axis_order] ) assert np.all(np.abs(discrete_c - center) <= grid_tol) # check that a region covering the whole domain matches no data_source reg = ds.region(ds.domain_center, ds.domain_left_edge, ds.domain_right_edge) cg_reg = ds.covering_grid(level, [0.0, 0.0, 0.0], dims, data_source=reg) assert np.all(cg["gas", "density"] == cg_reg["gas", "density"]) # check that a box covering a subset of the domain is the right volume right_edge = ds.domain_left_edge + ds.domain_width * 0.5 c = (right_edge + ds.domain_left_edge) / 2 reg = ds.region(c, ds.domain_left_edge, right_edge) cg_reg = ds.covering_grid(level, [0.0, 0.0, 0.0], dims, data_source=reg) box_vol = cg_reg["index", "cell_volume"][ cg_reg["gas", "density"] != 0 ].sum() actual_vol = np.prod(right_edge - ds.domain_left_edge) assert box_vol == actual_vol @requires_module("xarray") def test_xarray_export(): def _run_tests(cg): xarr = cg.to_xarray(fields=[("gas", "density"), ("gas", "temperature")]) assert ("gas", "density") in xarr.variables assert ("gas", "temperature") in xarr.variables assert ("gas", "specific_thermal_energy") not in xarr.variables assert "x" in xarr.coords assert "y" in xarr.coords assert "z" in xarr.coords assert xarr.sizes["x"] == dn * ds.domain_dimensions[0] assert xarr.sizes["y"] == dn * ds.domain_dimensions[1] assert xarr.sizes["z"] == dn * ds.domain_dimensions[2] assert_equal(xarr.x, cg["index", "x"][:, 0, 0]) assert_equal(xarr.y, cg["index", "y"][0, :, 0]) assert_equal(xarr.z, cg["index", "z"][0, 0, :]) fields = ("density", "temperature", "specific_thermal_energy") units = ("g/cm**3", "K", "erg/g") for level in [0, 1, 2]: ds = fake_random_ds(16, fields=fields, units=units) dn = ds.refine_by**level rcg = ds.covering_grid(level, [0.0, 0.0, 0.0], dn * ds.domain_dimensions) _run_tests(rcg) scg = ds.smoothed_covering_grid( level, [0.0, 0.0, 0.0], dn * ds.domain_dimensions ) _run_tests(scg) ag1 = ds.arbitrary_grid( [0.0, 0.0, 0.0], [1.0, 1.0, 1.0], dn * ds.domain_dimensions ) _run_tests(ag1) ag2 = ds.arbitrary_grid( [0.1, 0.3, 0.2], [0.4, 1.0, 0.9], dn * ds.domain_dimensions ) _run_tests(ag2) def test_smoothed_covering_grid(): # We decompose in different ways for level in [0, 1, 2]: for nprocs in [1, 2, 4, 8]: ds = fake_random_ds(16, nprocs=nprocs) dn = ds.refine_by**level cg = ds.smoothed_covering_grid( level, [0.0, 0.0, 0.0], dn * ds.domain_dimensions ) assert_equal(cg["index", "ones"].max(), 1.0) assert_equal(cg["index", "ones"].min(), 1.0) assert_equal(cg["index", "cell_volume"].sum(), ds.domain_width.prod()) for g in ds.index.grids: if level != g.Level: continue di =
g.get_global_startindex() dd = g.ActiveDimensions for i in range(dn): f = cg["gas", "density"][ dn * di[0] + i : dn * (di[0] + dd[0]) + i : dn, dn * di[1] + i : dn * (di[1] + dd[1]) + i : dn, dn * di[2] + i : dn * (di[2] + dd[2]) + i : dn, ] assert_equal(f, g["gas", "density"]) def test_arbitrary_grid(): for ncells in [32, 64]: for px in [0.125, 0.25, 0.55519]: particle_data = { "particle_position_x": np.array([px]), "particle_position_y": np.array([0.5]), "particle_position_z": np.array([0.5]), "particle_mass": np.array([1.0]), } ds = load_particles(particle_data) for dims in ([ncells] * 3, [ncells, ncells / 2, ncells / 4]): LE = np.array([0.05, 0.05, 0.05]) RE = np.array([0.95, 0.95, 0.95]) dims = np.array(dims) dds = (RE - LE) / dims volume = ds.quan(np.prod(dds), "cm**3") obj = ds.arbitrary_grid(LE, RE, dims) deposited_mass = obj["deposit", "all_density"].sum() * volume assert_equal(deposited_mass, ds.quan(1.0, "g")) LE = np.array([0.00, 0.00, 0.00]) RE = np.array([0.05, 0.05, 0.05]) obj = ds.arbitrary_grid(LE, RE, dims) deposited_mass = obj["deposit", "all_density"].sum() assert_equal(deposited_mass, 0) # Test that we get identical results to the covering grid for unigrid data. # Testing AMR data is much harder. for nprocs in [1, 2, 4, 8]: ds = fake_random_ds(32, nprocs=nprocs) for ref_level in [0, 1, 2]: cg = ds.covering_grid( ref_level, [0.0, 0.0, 0.0], 2**ref_level * ds.domain_dimensions ) ag = ds.arbitrary_grid( [0.0, 0.0, 0.0], [1.0, 1.0, 1.0], 2**ref_level * ds.domain_dimensions ) assert_almost_equal(cg["gas", "density"], ag["gas", "density"]) def test_octree_cg(): ds = fake_octree_ds(num_zones=1, partial_coverage=0) cgrid = ds.covering_grid( 0, left_edge=ds.domain_left_edge, dims=ds.domain_dimensions ) density_field = cgrid["gas", "density"] assert_equal((density_field == 0.0).sum(), 0) def test_smoothed_covering_grid_2d_dataset(): ds = fake_random_ds([32, 32, 1], nprocs=4) ds.force_periodicity() scg = ds.smoothed_covering_grid(1, [0.0, 0.0, 0.0], [32, 32, 1]) assert_equal(scg["gas", "density"].shape, [32, 32, 1]) def test_arbitrary_grid_derived_field(): def custom_metal_density(field, data): # Calculating some random value return data["gas", "density"] * np.random.random_sample() ds = fake_random_ds(64, nprocs=8, particles=16**2) ds.add_field( ("gas", "Metal_Density"), units="g/cm**3", function=custom_metal_density, sampling_type="cell", ) def _tracerf(field, data): return data["gas", "Metal_Density"] / data["gas", "density"] ds.add_field( ("gas", "tracerf"), function=_tracerf, units="dimensionless", sampling_type="cell", take_log=False, ) galgas = ds.arbitrary_grid([0.4, 0.4, 0.4], [0.99, 0.99, 0.99], dims=[32, 32, 32]) galgas["gas", "tracerf"] def test_arbitrary_field_parameters(): def _test_field(field, data): par = data.get_field_parameter("test_parameter") return par * data["all", "particle_mass"] ds = fake_random_ds(64, nprocs=8, particles=16**2) ds.add_field( ("all", "test_field"), units="g", function=_test_field, sampling_type="particle", validators=[ValidateParameter("test_parameter")], ) agrid = ds.arbitrary_grid([0.4, 0.4, 0.4], [0.99, 0.99, 0.99], dims=[32, 32, 32]) agrid.set_field_parameter("test_parameter", 2) assert_array_equal(2 * agrid["all", "particle_mass"], agrid["all", "test_field"]) def test_arbitrary_grid_edge(): # Tests bug fix for issue #2087 # Regardless of how left_edge and right_edge are passed, the result should be # a YTArray with a unit registry that matches that of the dataset. 
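# The four entries in each list below cover the accepted input styles --
# plain list, NumPy array, YTArray in the dataset's code units, and YTArray
# in kpc -- and the *_ans lists spell out the units each one is expected to
# round-trip in.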
dims = [32, 32, 32] ds = fake_random_ds(dims) # Test when edge is a list, numpy array, YTArray with dataset units, and # YTArray with non-dataset units ledge = [ [0.0, 0.0, 0.0], np.array([0.0, 0.0, 0.0]), [0.0, 0.0, 0.0] * ds.length_unit, [0.0, 0.0, 0.0] * kpc, ] redge = [ [1.0, 1.0, 1.0], np.array([1.0, 1.0, 1.0]), [1.0, 1.0, 1.0] * ds.length_unit, [1.0, 1.0, 1.0] * kpc, ] ledge_ans = [ [0.0, 0.0, 0.0] * ds.length_unit.to("code_length"), np.array([0.0, 0.0, 0.0]) * ds.length_unit.to("code_length"), [0.0, 0.0, 0.0] * ds.length_unit, [0.0, 0.0, 0.0] * kpc, ] redge_ans = [ [1.0, 1.0, 1.0] * ds.length_unit.to("code_length"), np.array([1.0, 1.0, 1.0]) * ds.length_unit.to("code_length"), [1.0, 1.0, 1.0] * ds.length_unit, [1.0, 1.0, 1.0] * kpc, ] for le, re, le_ans, re_ans in zip(ledge, redge, ledge_ans, redge_ans, strict=True): ag = ds.arbitrary_grid(left_edge=le, right_edge=re, dims=dims) assert np.array_equal(ag.left_edge, le_ans) assert np.array_equal(ag.right_edge, re_ans) assert ag.left_edge.units.registry == ds.unit_registry assert ag.right_edge.units.registry == ds.unit_registry ag["gas", "density"] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_cutting_plane.py0000644000175100001770000000361614714401662022260 0ustar00runnerdockerimport os import tempfile from numpy.testing import assert_equal from yt.testing import fake_random_ds from yt.units.unit_object import Unit def setup_module(): from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True def teardown_func(fns): for fn in fns: try: os.remove(fn) except OSError: pass def test_cutting_plane(): fns = [] for nprocs in [8, 1]: # We want to test both 1 proc and 8 procs, to make sure that # parallelism isn't broken ds = fake_random_ds(64, nprocs=nprocs) center = [0.5, 0.5, 0.5] normal = [1, 1, 1] cut = ds.cutting(normal, center) assert_equal(cut["index", "ones"].sum(), cut["index", "ones"].size) assert_equal(cut["index", "ones"].min(), 1.0) assert_equal(cut["index", "ones"].max(), 1.0) pw = cut.to_pw(fields=("gas", "density")) for p in pw.plots.values(): tmpfd, tmpname = tempfile.mkstemp(suffix=".png") os.close(tmpfd) p.save(name=tmpname) fns.append(tmpname) for width in [(1.0, "unitary"), 1.0, ds.quan(0.5, "code_length")]: frb = cut.to_frb(width, 64) for cut_field in [("index", "ones"), ("gas", "density")]: fi = ds._get_field_info(cut_field) data = frb[cut_field] assert_equal(data.info["data_source"], cut.__str__()) assert_equal(data.info["axis"], None) assert_equal(data.info["field"], str(cut_field)) assert_equal(data.units, Unit(fi.units)) assert_equal(data.info["xlim"], frb.bounds[:2]) assert_equal(data.info["ylim"], frb.bounds[2:]) assert_equal(data.info["length_to_cm"], ds.length_unit.in_cgs()) assert_equal(data.info["center"], cut.center) teardown_func(fns) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_data_collection.py0000644000175100001770000000235314714401662022545 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_equal from yt.testing import assert_rel_equal, fake_random_ds def setup_module(): from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True def test_data_collection(): # We decompose in different ways for nprocs in [1, 2, 4, 8]: ds = fake_random_ds(16, nprocs=nprocs) coll = ds.data_collection(ds.index.grids) crho = coll["gas", "density"].sum(dtype="float64").to_ndarray() grho = np.sum( 
[g["gas", "density"].sum(dtype="float64") for g in ds.index.grids], dtype="float64", ) assert_rel_equal(np.array([crho]), np.array([grho]), 12) assert_equal(coll.size, ds.domain_dimensions.prod()) for gi in range(ds.index.num_grids): grids = ds.index.grids[: gi + 1] coll = ds.data_collection(grids) crho = coll["gas", "density"].sum(dtype="float64") grho = np.sum( [g["gas", "density"].sum(dtype="float64") for g in grids], dtype="float64", ) assert_rel_equal(np.array([crho]), np.array([grho]), 12) assert_equal(coll.size, sum(g.ActiveDimensions.prod() for g in grids)) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_data_containers.py0000644000175100001770000001467314714401662022567 0ustar00runnerdockerimport os import shutil import tempfile import unittest import numpy as np from nose.tools import assert_raises from numpy.testing import assert_array_equal, assert_equal from yt.data_objects.data_containers import YTDataContainer from yt.data_objects.particle_filters import particle_filter from yt.testing import ( fake_amr_ds, fake_particle_ds, fake_random_ds, requires_module, ) from yt.utilities.exceptions import YTException, YTFieldNotFound class TestDataContainers(unittest.TestCase): @classmethod def setUpClass(cls): cls.tmpdir = tempfile.mkdtemp() cls.curdir = os.getcwd() os.chdir(cls.tmpdir) @classmethod def tearDownClass(cls): os.chdir(cls.curdir) shutil.rmtree(cls.tmpdir) def test_yt_data_container(self): # Test if ds could be None with assert_raises(RuntimeError) as err: YTDataContainer(None, None) desired = ( "Error: ds must be set either through class" " type or parameter to the constructor" ) assert_equal(str(err.exception), desired) # Test if field_data key exists ds = fake_random_ds(5) proj = ds.proj(("gas", "density"), 0, data_source=ds.all_data()) assert_equal("px" in proj.keys(), True) assert_equal("pz" in proj.keys(), False) # Delete the key and check if exits del proj["px"] assert_equal("px" in proj.keys(), False) del proj["gas", "density"] assert_equal("density" in proj.keys(), False) # Delete a non-existent field with assert_raises(YTFieldNotFound) as ex: del proj["p_mass"] desired = "Could not find field 'p_mass' in UniformGridData." 
assert_equal(str(ex.exception), desired) def test_write_out(self): filename = "sphere.txt" ds = fake_random_ds(16, particles=10) sp = ds.sphere(ds.domain_center, 0.25) sp.write_out(filename, fields=[("gas", "cell_volume")]) with open(filename) as file: file_row_1 = file.readline() file_row_2 = file.readline() file_row_2 = np.array(file_row_2.split("\t"), dtype=np.float64) sorted_keys = sorted(sp.field_data.keys()) keys = [str(k) for k in sorted_keys] keys = "\t".join(["#"] + keys + ["\n"]) data = [sp.field_data[k][0] for k in sorted_keys] assert_equal(keys, file_row_1) assert_array_equal(data, file_row_2) def test_invalid_write_out(self): filename = "sphere.txt" ds = fake_random_ds(16, particles=10) sp = ds.sphere(ds.domain_center, 0.25) with assert_raises(YTException): sp.write_out(filename, fields=[("all", "particle_ones")]) @requires_module("pandas") def test_to_dataframe(self): fields = [("gas", "density"), ("gas", "velocity_z")] ds = fake_random_ds(6) dd = ds.all_data() df = dd.to_dataframe(fields) assert_array_equal(dd[fields[0]], df[fields[0][1]]) assert_array_equal(dd[fields[1]], df[fields[1][1]]) @requires_module("astropy") def test_to_astropy_table(self): from yt.units.yt_array import YTArray fields = [("gas", "density"), ("gas", "velocity_z")] ds = fake_random_ds(6) dd = ds.all_data() at1 = dd.to_astropy_table(fields) assert_array_equal(dd[fields[0]].d, at1[fields[0][1]].value) assert_array_equal(dd[fields[1]].d, at1[fields[1][1]].value) assert dd[fields[0]].units == YTArray.from_astropy(at1[fields[0][1]]).units assert dd[fields[1]].units == YTArray.from_astropy(at1[fields[1][1]]).units def test_std(self): ds = fake_random_ds(3) ds.all_data().std(("gas", "density"), weight=("gas", "velocity_z")) def test_to_frb(self): # Test cylindrical geometry fields = ["density", "cell_mass"] units = ["g/cm**3", "g"] ds = fake_amr_ds( fields=fields, units=units, geometry="cylindrical", particles=16**3 ) dd = ds.all_data() proj = ds.proj( ("gas", "density"), weight_field=("gas", "cell_mass"), axis=1, data_source=dd, ) frb = proj.to_frb((1.0, "unitary"), 64) assert_equal(frb.radius, (1.0, "unitary")) assert_equal(frb.buff_size, 64) def test_extract_isocontours(self): # Test isocontour properties for AMRGridData fields = ["density", "cell_mass"] units = ["g/cm**3", "g"] ds = fake_amr_ds(fields=fields, units=units, particles=16**3) dd = ds.all_data() q = dd.quantities["WeightedAverageQuantity"] rho = q(("gas", "density"), weight=("gas", "cell_mass")) dd.extract_isocontours(("gas", "density"), rho, "triangles.obj", True) dd.calculate_isocontour_flux( ("gas", "density"), rho, ("index", "x"), ("index", "y"), ("index", "z"), ("index", "dx"), ) # Test error in case of ParticleData ds = fake_particle_ds() dd = ds.all_data() q = dd.quantities["WeightedAverageQuantity"] rho = q(("all", "particle_velocity_x"), weight=("all", "particle_mass")) with assert_raises(NotImplementedError): dd.extract_isocontours("density", rho, sample_values="x") def test_derived_field(self): # Test that derived field on filtered particles do not require # their parent field to be created ds = fake_particle_ds() dd = ds.all_data() dd.set_field_parameter("axis", 0) @particle_filter(requires=["particle_mass"], filtered_type="io") def massive(pfilter, data): return data[pfilter.filtered_type, "particle_mass"].to("code_mass") > 0.5 ds.add_particle_filter("massive") def fun(field, data): return data[field.name[0], "particle_mass"] # Add the field to the massive particles ds.add_field( ("massive", "test"), function=fun, 
sampling_type="particle", units="code_mass", ) expected_size = (dd["io", "particle_mass"].to("code_mass") > 0.5).sum() fields_to_test = [f for f in ds.derived_field_list if f[0] == "massive"] def test_this(fname): data = dd[fname] assert_equal(data.shape[0], expected_size) for fname in fields_to_test: test_this(fname) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_dataset_access.py0000644000175100001770000001535014714401662022370 0ustar00runnerdockerimport numpy as np from nose.tools import assert_raises from numpy.testing import assert_almost_equal, assert_equal from yt.testing import ( fake_amr_ds, fake_particle_ds, fake_random_ds, requires_file, requires_module, ) from yt.utilities.answer_testing.framework import data_dir_load from yt.utilities.exceptions import YTDimensionalityError from yt.visualization.line_plot import LineBuffer # This will test the "dataset access" method. def test_box_creation(): ds = fake_random_ds(32, length_unit=2) left_edge = ds.arr([0.2, 0.2, 0.2], "cm") right_edge = ds.arr([0.6, 0.6, 0.6], "cm") center = (left_edge + right_edge) / 2 boxes = [ ds.box(left_edge, right_edge), ds.box(0.5 * np.array(left_edge), 0.5 * np.array(right_edge)), ds.box((0.5 * left_edge).tolist(), (0.5 * right_edge).tolist()), ] region = ds.region(center, left_edge, right_edge) for b in boxes: assert_almost_equal(b.left_edge, region.left_edge) assert_almost_equal(b.right_edge, region.right_edge) assert_almost_equal(b.center, region.center) def test_region_from_d(): ds = fake_amr_ds(fields=["density"], units=["g/cm**3"]) # We'll do a couple here # First, no string units reg1 = ds.r[0.2:0.3, 0.4:0.6, :] reg2 = ds.region([0.25, 0.5, 0.5], [0.2, 0.4, 0.0], [0.3, 0.6, 1.0]) assert_equal(reg1["gas", "density"], reg2["gas", "density"]) # Now, string units in some -- 1.0 == cm reg1 = ds.r[(0.1, "cm") : (0.5, "cm"), :, (0.25, "cm") : (0.35, "cm")] reg2 = ds.region([0.3, 0.5, 0.3], [0.1, 0.0, 0.25], [0.5, 1.0, 0.35]) assert_equal(reg1["gas", "density"], reg2["gas", "density"]) # Now, string units in some -- 1.0 == cm reg1 = ds.r[(0.1, "cm") : (0.5, "cm"), :, 0.25:0.35] reg2 = ds.region([0.3, 0.5, 0.3], [0.1, 0.0, 0.25], [0.5, 1.0, 0.35]) assert_equal(reg1["gas", "density"], reg2["gas", "density"]) # And, lots of : usage! reg1 = ds.r[:, :, :] reg2 = ds.all_data() assert_equal(reg1["gas", "density"], reg2["gas", "density"]) # Test slice as an index reg1 = ds.r[0.1:0.8] reg2 = ds.region([0.45, 0.45, 0.45], [0.1, 0.1, 0.1], [0.8, 0.8, 0.8]) assert_equal(reg1["gas", "density"], reg2["gas", "density"]) # Test with bad boundary initialization with assert_raises(RuntimeError): ds.r[0.3:0.1, 0.4:0.6, :] # Test region by creating an arbitrary grid reg1 = ds.r[0.15:0.55:16j, 0.25:0.65:32j, 0.35:0.75:64j] left_edge = np.array([0.15, 0.25, 0.35]) right_edge = np.array([0.55, 0.65, 0.75]) dims = np.array([16.0, 32.0, 64.0]) reg2 = ds.arbitrary_grid(left_edge, right_edge, dims) assert_equal(reg1["gas", "density"], reg2["gas", "density"]) def test_accessing_all_data(): # This will test first that we can access all_data, and next that we can # access it multiple times and get the *same object*. 
ds = fake_amr_ds(fields=["density"], units=["g/cm**3"]) dd = ds.all_data() assert_equal(ds.r["gas", "density"], dd["gas", "density"]) # Now let's assert that it's the same object rho = ds.r["gas", "density"] rho *= 2.0 assert_equal(dd["gas", "density"] * 2.0, ds.r["gas", "density"]) assert_equal(dd["gas", "density"] * 2.0, ds.r["gas", "density"]) def test_slice_from_r(): ds = fake_amr_ds(fields=["density"], units=["g/cm**3"]) sl1 = ds.r[0.5, :, :] sl2 = ds.slice("x", 0.5) assert_equal(sl1["gas", "density"], sl2["gas", "density"]) frb1 = sl1.to_frb(width=1.0, height=1.0, resolution=(1024, 512)) frb2 = ds.r[0.5, ::1024j, ::512j] assert_equal(frb1["gas", "density"], frb2["gas", "density"]) # Test slice which doesn't cover the whole domain box = ds.box([0.0, 0.25, 0.25], [1.0, 0.75, 0.75]) sl3 = ds.r[0.5, 0.25:0.75, 0.25:0.75] sl4 = ds.slice("x", 0.5, data_source=box) assert_equal(sl3["gas", "density"], sl4["gas", "density"]) frb3 = sl3.to_frb(width=0.5, height=0.5, resolution=(1024, 512)) frb4 = ds.r[0.5, 0.25:0.75:1024j, 0.25:0.75:512j] assert_equal(frb3["gas", "density"], frb4["gas", "density"]) def test_point_from_r(): ds = fake_amr_ds(fields=["density"], units=["g/cm**3"]) pt1 = ds.r[0.5, 0.3, 0.1] pt2 = ds.point([0.5, 0.3, 0.1]) assert_equal(pt1["gas", "density"], pt2["gas", "density"]) # Test YTDimensionalityError with assert_raises(YTDimensionalityError) as ex: ds.r[0.5, 0.1] assert_equal(str(ex.exception), "Dimensionality specified was 2 but we need 3") def test_ray_from_r(): ds = fake_amr_ds(fields=["density"], units=["g/cm**3"]) ray1 = ds.r[(0.1, 0.2, 0.3) : (0.4, 0.5, 0.6)] ray2 = ds.ray((0.1, 0.2, 0.3), (0.4, 0.5, 0.6)) assert_equal(ray1["gas", "density"], ray2["gas", "density"]) ray3 = ds.r[0.5 * ds.domain_left_edge : 0.5 * ds.domain_right_edge] ray4 = ds.ray(0.5 * ds.domain_left_edge, 0.5 * ds.domain_right_edge) assert_equal(ray3["gas", "density"], ray4["gas", "density"]) start = [(0.1, "cm"), 0.2, (0.3, "cm")] end = [(0.5, "cm"), (0.4, "cm"), 0.6] ray5 = ds.r[start:end] start_arr = [ds.quan(0.1, "cm"), ds.quan(0.2, "cm"), ds.quan(0.3, "cm")] end_arr = [ds.quan(0.5, "cm"), ds.quan(0.4, "cm"), ds.quan(0.6, "cm")] ray6 = ds.ray(start_arr, end_arr) assert_equal(ray5["gas", "density"], ray6["gas", "density"]) ray7 = ds.r[start:end:500j] ray8 = LineBuffer(ds, [0.1, 0.2, 0.3], [0.5, 0.4, 0.6], 500) assert_equal(ray7["gas", "density"], ray8["gas", "density"]) def test_ortho_ray_from_r(): ds = fake_amr_ds(fields=["density"], units=["g/cm**3"]) ray1 = ds.r[:, 0.3, 0.2] ray2 = ds.ortho_ray("x", [0.3, 0.2]) assert_equal(ray1["gas", "density"], ray2["gas", "density"]) # the y-coord is funny so test it too ray3 = ds.r[0.3, :, 0.2] ray4 = ds.ortho_ray("y", [0.2, 0.3]) assert_equal(ray3["gas", "density"], ray4["gas", "density"]) # Test ray which doesn't cover the whole domain box = ds.box([0.25, 0.0, 0.0], [0.75, 1.0, 1.0]) ray5 = ds.r[0.25:0.75, 0.3, 0.2] ray6 = ds.ortho_ray("x", [0.3, 0.2], data_source=box) assert_equal(ray5["gas", "density"], ray6["gas", "density"]) # Test fixed-resolution rays ray7 = ds.r[0.25:0.75:100j, 0.3, 0.2] ray8 = LineBuffer(ds, [0.2525, 0.3, 0.2], [0.7475, 0.3, 0.2], 100) assert_equal(ray7["gas", "density"], ray8["gas", "density"]) def test_particle_counts(): ds = fake_random_ds(16, particles=100) assert ds.particle_type_counts == {"io": 100} pds = fake_particle_ds(npart=128) assert pds.particle_type_counts == {"io": 128} g30 = "IsolatedGalaxy/galaxy0030/galaxy0030" @requires_module("h5py") @requires_file(g30) def test_checksum(): assert fake_random_ds(16).checksum 
== "notafile" assert data_dir_load(g30).checksum == "6169536e4b9f737ce3d3ad440df44c58" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_derived_quantities.py0000644000175100001770000002354614714401662023320 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_almost_equal, assert_equal import yt from yt import particle_filter from yt.testing import ( assert_rel_equal, fake_particle_ds, fake_random_ds, fake_sph_orientation_ds, requires_file, ) def setup_module(): from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True def test_extrema(): for nprocs in [1, 2, 4, 8]: ds = fake_random_ds( 16, nprocs=nprocs, fields=("density", "velocity_x", "velocity_y", "velocity_z"), units=("g/cm**3", "cm/s", "cm/s", "cm/s"), ) for sp in [ds.sphere("c", (0.25, "unitary")), ds.r[0.5, :, :]]: mi, ma = sp.quantities["Extrema"](("gas", "density")) assert_equal(mi, np.nanmin(sp["gas", "density"])) assert_equal(ma, np.nanmax(sp["gas", "density"])) dd = ds.all_data() mi, ma = dd.quantities["Extrema"](("gas", "density")) assert_equal(mi, np.nanmin(dd["gas", "density"])) assert_equal(ma, np.nanmax(dd["gas", "density"])) sp = ds.sphere("max", (0.25, "unitary")) assert_equal(np.any(np.isnan(sp["gas", "radial_velocity"])), False) mi, ma = dd.quantities["Extrema"](("gas", "radial_velocity")) assert_equal(mi, np.nanmin(dd["gas", "radial_velocity"])) assert_equal(ma, np.nanmax(dd["gas", "radial_velocity"])) def test_extrema_with_nan(): dens = np.ones((16, 16, 16)) dens[0, 0, 0] = np.nan data = {"density": dens} ds = yt.load_uniform_grid(data, data["density"].shape) ad = ds.all_data() mi, ma = ad.quantities.extrema(("stream", "density")) assert np.isnan(mi) assert np.isnan(ma) mi, ma = ad.quantities.extrema(("stream", "density"), check_finite=True) assert mi == 1.0 assert ma == 1.0 def test_average(): for nprocs in [1, 2, 4, 8]: ds = fake_random_ds(16, nprocs=nprocs, fields=("density",), units=("g/cm**3",)) for ad in [ds.all_data(), ds.r[0.5, :, :]]: my_mean = ad.quantities["WeightedAverageQuantity"]( ("gas", "density"), ("index", "ones") ) assert_rel_equal(my_mean, ad["gas", "density"].mean(), 12) my_mean = ad.quantities["WeightedAverageQuantity"]( ("gas", "density"), ("gas", "cell_mass") ) a_mean = (ad["gas", "density"] * ad["gas", "cell_mass"]).sum() / ad[ ("gas", "cell_mass") ].sum() assert_rel_equal(my_mean, a_mean, 12) def test_standard_deviation(): for nprocs in [1, 2, 4, 8]: ds = fake_random_ds(16, nprocs=nprocs, fields=("density",), units=("g/cm**3",)) for ad in [ds.all_data(), ds.r[0.5, :, :]]: my_std, my_mean = ad.quantities["WeightedStandardDeviation"]( ("gas", "density"), ("index", "ones") ) assert_rel_equal(my_mean, ad["gas", "density"].mean(), 12) assert_rel_equal(my_std, ad["gas", "density"].std(), 12) my_std, my_mean = ad.quantities["WeightedStandardDeviation"]( ("gas", "density"), ("gas", "cell_mass") ) a_mean = (ad["gas", "density"] * ad["gas", "cell_mass"]).sum() / ad[ ("gas", "cell_mass") ].sum() assert_rel_equal(my_mean, a_mean, 12) a_std = np.sqrt( (ad["gas", "cell_mass"] * (ad["gas", "density"] - a_mean) ** 2).sum() / ad["gas", "cell_mass"].sum() ) assert_rel_equal(my_std, a_std, 12) def test_max_location(): for nprocs in [1, 2, 4, 8]: ds = fake_random_ds(16, nprocs=nprocs, fields=("density",), units=("g/cm**3",)) for ad in [ds.all_data(), ds.r[0.5, :, :]]: mv, x, y, z = ad.quantities.max_location(("gas", "density")) assert_equal(mv, ad["gas", "density"].max()) mi = 
np.argmax(ad["gas", "density"]) assert_equal(ad["index", "x"][mi], x) assert_equal(ad["index", "y"][mi], y) assert_equal(ad["index", "z"][mi], z) def test_min_location(): for nprocs in [1, 2, 4, 8]: ds = fake_random_ds(16, nprocs=nprocs, fields=("density",), units=("g/cm**3",)) for ad in [ds.all_data(), ds.r[0.5, :, :]]: mv, x, y, z = ad.quantities.min_location(("gas", "density")) assert_equal(mv, ad["gas", "density"].min()) mi = np.argmin(ad["gas", "density"]) assert_equal(ad["index", "x"][mi], x) assert_equal(ad["index", "y"][mi], y) assert_equal(ad["index", "z"][mi], z) def test_sample_at_min_field_values(): for nprocs in [1, 2, 4, 8]: ds = fake_random_ds( 16, nprocs=nprocs, fields=("density", "temperature", "velocity_x"), units=("g/cm**3", "K", "cm/s"), ) for ad in [ds.all_data(), ds.r[0.5, :, :]]: mv, temp, vm = ad.quantities.sample_at_min_field_values( ("gas", "density"), [("gas", "temperature"), ("gas", "velocity_x")] ) assert_equal(mv, ad["gas", "density"].min()) mi = np.argmin(ad["gas", "density"]) assert_equal(ad["gas", "temperature"][mi], temp) assert_equal(ad["gas", "velocity_x"][mi], vm) def test_sample_at_max_field_values(): for nprocs in [1, 2, 4, 8]: ds = fake_random_ds( 16, nprocs=nprocs, fields=("density", "temperature", "velocity_x"), units=("g/cm**3", "K", "cm/s"), ) for ad in [ds.all_data(), ds.r[0.5, :, :]]: mv, temp, vm = ad.quantities.sample_at_max_field_values( ("gas", "density"), [("gas", "temperature"), ("gas", "velocity_x")] ) assert_equal(mv, ad["gas", "density"].max()) mi = np.argmax(ad["gas", "density"]) assert_equal(ad["gas", "temperature"][mi], temp) assert_equal(ad["gas", "velocity_x"][mi], vm) def test_in_memory_sph_derived_quantities(): ds = fake_sph_orientation_ds() ad = ds.all_data() ang_mom = ad.quantities.angular_momentum_vector() assert_equal(ang_mom, [0, 0, 0]) bv = ad.quantities.bulk_velocity() assert_equal(bv, [0, 0, 0]) com = ad.quantities.center_of_mass() assert_equal(com, [1 / 7, (1 + 2) / 7, (1 + 2 + 3) / 7]) ex = ad.quantities.extrema([("io", "x"), ("io", "y"), ("io", "z")]) for fex, ans in zip(ex, [[0, 1], [0, 2], [0, 3]], strict=True): assert_equal(fex, ans) for d, v, l in [ ("x", 1, [1, 0, 0]), ("y", 2, [0, 2, 0]), ("z", 3, [0, 0, 3]), ]: max_d, x, y, z = ad.quantities.max_location(("io", d)) assert_equal(max_d, v) assert_equal([x, y, z], l) for d in "xyz": min_d, x, y, z = ad.quantities.min_location(("io", d)) assert_equal(min_d, 0) assert_equal([x, y, z], [0, 0, 0]) tot_m = ad.quantities.total_mass() assert_equal(tot_m, [7, 0]) weighted_av_z = ad.quantities.weighted_average_quantity(("io", "z"), ("io", "z")) assert_equal(weighted_av_z, 7 / 3) iso_collapse = "IsothermalCollapse/snap_505" tipsy_gal = "TipsyGalaxy/galaxy.00300" @requires_file(iso_collapse) @requires_file(tipsy_gal) def test_sph_datasets_derived_quantities(): for fname in [tipsy_gal, iso_collapse]: ds = yt.load(fname) ad = ds.all_data() use_particles = "nbody" in ds.particle_types ad.quantities.angular_momentum_vector() ad.quantities.bulk_velocity(True, use_particles) ad.quantities.center_of_mass(True, use_particles) ad.quantities.extrema([("gas", "density"), ("gas", "temperature")]) ad.quantities.min_location(("gas", "density")) ad.quantities.max_location(("gas", "density")) ad.quantities.total_mass() ad.quantities.weighted_average_quantity(("gas", "density"), ("gas", "mass")) def test_derived_quantities_with_particle_types(): ds = fake_particle_ds() @particle_filter(requires=["particle_position_x"], filtered_type="all") def low_x(pfilter, data): return ( 
data[pfilter.filtered_type, "particle_position_x"].in_units("code_length") < 0.5 ) ds.add_particle_filter("low_x") ad = ds.all_data() for ptype in ["all", "low_x"]: # Check bulk velocity bulk_vx = ( ad[ptype, "particle_mass"] * ad[ptype, "particle_velocity_x"] / ad[ptype, "particle_mass"].sum() ).sum() assert_almost_equal( ad.quantities.bulk_velocity( use_gas=False, use_particles=True, particle_type=ptype )[0], bulk_vx, 5, ) # Check center of mass com_x = ( ad[ptype, "particle_mass"] * ad[ptype, "particle_position_x"] / ad[ptype, "particle_mass"].sum() ).sum() assert_almost_equal( ad.quantities.center_of_mass( use_gas=False, use_particles=True, particle_type=ptype )[0], com_x, 5, ) # Check angular momentum vector l_x = ( ad[ptype, "particle_specific_angular_momentum_x"] * ad[ptype, "particle_mass"] / ad[ptype, "particle_mass"].sum() ).sum() assert_almost_equal( ad.quantities.angular_momentum_vector( use_gas=False, use_particles=True, particle_type=ptype )[0], l_x, 5, ) # Check spin parameter values assert_almost_equal( ad.quantities.spin_parameter(use_gas=False, use_particles=True), 655.7311454765503, ) assert_almost_equal( ad.quantities.spin_parameter( use_gas=False, use_particles=True, particle_type="low_x" ), 1309.164886405665, ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_disks.py0000644000175100001770000000347614714401662020545 0ustar00runnerdockerimport pytest from yt import YTQuantity from yt.testing import fake_random_ds def test_bad_disk_input(): # Fixes 1768 ds = fake_random_ds(16) # Test invalid 3d array with pytest.raises( TypeError, match=r"^Expected an array of size \(3,\), received 'list' of length 4$", ): ds.disk(ds.domain_center, [0, 0, 1, 1], (10, "kpc"), (20, "kpc")) # Test invalid float with pytest.raises( TypeError, match=( r"^Expected a numeric value \(or size-1 array\), " r"received 'unyt.array.unyt_array' of length 3$" ), ): ds.disk(ds.domain_center, [0, 0, 1], ds.domain_center, (20, "kpc")) # Test invalid float with pytest.raises( TypeError, match=( r"^Expected a numeric value \(or tuple of format \(float, String\)\), " r"received an inconsistent tuple '\(10, 10\)'.$" ), ): ds.disk(ds.domain_center, [0, 0, 1], (10, 10), (20, "kpc")) # Test invalid iterable with pytest.raises( TypeError, match=r"^Expected an iterable object, received 'unyt\.array\.unyt_quantity'$", ): ds.disk( ds.domain_center, [0, 0, 1], (10, "kpc"), (20, "kpc"), fields=YTQuantity(1, "kpc"), ) # Test invalid object with pytest.raises( TypeError, match=( r"^Expected an object of 'yt\.data_objects\.static_output\.Dataset' type, " r"received 'yt\.data_objects\.selection_objects\.region\.YTRegion'$" ), ): ds.disk(ds.domain_center, [0, 0, 1], (10, "kpc"), (20, "kpc"), ds=ds.all_data()) # Test valid disk ds.disk(ds.domain_center, [0, 0, 1], (10, "kpc"), (20, "kpc")) ds.disk(ds.domain_center, [0, 0, 1], 10, (20, "kpc")) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_ellipsoid.py0000644000175100001770000000414014714401662021401 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_array_less from yt.testing import fake_random_ds def setup_module(): from yt.config import ytcfg ytcfg["yt", "log_level"] = 50 ytcfg["yt", "internals", "within_testing"] = True def _difference(x1, x2, dw): rel = x1 - x2 rel[rel > dw / 2.0] -= dw rel[rel < -dw / 2.0] += dw return rel def test_ellipsoid(): # We decompose in different ways cs = [ 
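        # candidate ellipsoid centers: the domain center, a point near a
        # corner, and an off-center point; _difference above folds distances
        # back into the domain so these work under periodicity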
np.array([0.5, 0.5, 0.5]), np.array([0.1, 0.2, 0.3]), np.array([0.8, 0.8, 0.8]), ] np.random.seed(0x4D3D3D3) for nprocs in [1, 2, 4, 8]: ds = fake_random_ds(64, nprocs=nprocs) DW = ds.domain_right_edge - ds.domain_left_edge min_dx = 2.0 / ds.domain_dimensions ABC = np.random.random((3, 12)) * 0.1 e0s = np.random.random((3, 12)) tilts = np.random.random(12) ABC[:, 0] = 0.1 for i in range(12): for c in cs: A, B, C = sorted(ABC[:, i], reverse=True) A = max(A, min_dx[0]) B = max(B, min_dx[1]) C = max(C, min_dx[2]) e0 = e0s[:, i] tilt = tilts[i] ell = ds.ellipsoid(c, A, B, C, e0, tilt) assert_array_less(ell["index", "radius"], A) p = np.array([ell["index", ax] for ax in "xyz"]) dot_evec = [np.zeros_like(ell["index", "radius"]) for i in range(3)] vecs = [ell._e0, ell._e1, ell._e2] mags = [ell._A, ell._B, ell._C] my_c = np.array([c] * p.shape[1]).transpose() dot_evec = [de.to_ndarray() for de in dot_evec] mags = [m.to_ndarray() for m in mags] for ax_i in range(3): dist = _difference(p[ax_i, :], my_c[ax_i, :], DW[ax_i]) for ax_j in range(3): dot_evec[ax_j] += dist * vecs[ax_j][ax_i] dist = 0 for ax_i in range(3): dist += dot_evec[ax_i] ** 2.0 / mags[ax_i] ** 2.0 assert_array_less(dist, 1.0) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_exclude_functions.py0000644000175100001770000000724614714401662023150 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_equal from yt.loaders import load_uniform_grid from yt.testing import fake_random_ds def test_exclude_above(): the_ds = fake_random_ds(ndims=3) all_data = the_ds.all_data() new_ds = all_data.exclude_above(("gas", "density"), 1) assert_equal(new_ds["gas", "density"], all_data["gas", "density"]) new_ds = all_data.exclude_above(("gas", "density"), 1e6, "g/m**3") assert_equal(new_ds["gas", "density"], all_data["gas", "density"]) new_ds = all_data.exclude_above(("gas", "density"), 0) assert_equal(new_ds["gas", "density"], []) def test_exclude_below(): the_ds = fake_random_ds(ndims=3) all_data = the_ds.all_data() new_ds = all_data.exclude_below(("gas", "density"), 1) assert_equal(new_ds["gas", "density"], []) new_ds = all_data.exclude_below(("gas", "density"), 1e6, "g/m**3") assert_equal(new_ds["gas", "density"], []) new_ds = all_data.exclude_below(("gas", "density"), 0) assert_equal(new_ds["gas", "density"], all_data["gas", "density"]) def test_exclude_nan(): test_array = np.nan * np.ones((10, 10, 10)) test_array[1, 1, :] = 1 data = {"density": test_array} ds = load_uniform_grid(data, test_array.shape, length_unit="cm", nprocs=1) ad = ds.all_data() no_nan_ds = ad.exclude_nan(("gas", "density")) assert_equal(no_nan_ds["gas", "density"], np.array(np.ones(10))) def test_equal(): test_array = np.ones((10, 10, 10)) test_array[1, 1, :] = 2.0 test_array[2, 1, :] = 3.0 data = {"density": test_array} ds = load_uniform_grid(data, test_array.shape, length_unit="cm", nprocs=1) ad = ds.all_data() no_ones = ad.exclude_equal(("gas", "density"), 1.0) assert np.all(no_ones["gas", "density"] != 1.0) only_ones = ad.include_equal(("gas", "density"), 1.0) assert np.all(only_ones["gas", "density"] == 1.0) def test_inside_outside(): test_array = np.ones((10, 10, 10)) test_array[1, 1, :] = 2.0 test_array[2, 1, :] = 3.0 data = {"density": test_array} ds = load_uniform_grid(data, test_array.shape, length_unit="cm", nprocs=1) ad = ds.all_data() only_ones_and_twos = ad.include_inside(("gas", "density"), 0.9, 2.1) assert np.all(only_ones_and_twos["gas", "density"] != 3.0) 
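    # 990 = 10 * 10 * 10 total cells minus the 10 cells set to 3.0;
    # include_inside keeps the values between 0.9 and 2.1 (the ones and
    # twos), and exclude_outside below must select exactly the same cells.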
assert len(only_ones_and_twos["gas", "density"]) == 990 only_ones_and_twos = ad.exclude_outside(("gas", "density"), 0.9, 2.1) assert len(only_ones_and_twos["gas", "density"]) == 990 assert np.all(only_ones_and_twos["gas", "density"] != 3.0) only_threes = ad.include_outside(("gas", "density"), 0.9, 2.1) assert np.all(only_threes["gas", "density"] == 3) assert len(only_threes["gas", "density"]) == 10 only_threes = ad.include_outside(("gas", "density"), 0.9, 2.1) assert np.all(only_threes["gas", "density"] == 3) assert len(only_threes["gas", "density"]) == 10 # Repeat, but convert units to g/m**3 only_ones_and_twos = ad.include_inside(("gas", "density"), 0.9e6, 2.1e6, "g/m**3") assert np.all(only_ones_and_twos["gas", "density"] != 3.0) assert len(only_ones_and_twos["gas", "density"]) == 990 only_ones_and_twos = ad.exclude_outside(("gas", "density"), 0.9e6, 2.1e6, "g/m**3") assert len(only_ones_and_twos["gas", "density"]) == 990 assert np.all(only_ones_and_twos["gas", "density"] != 3.0) only_threes = ad.include_outside(("gas", "density"), 0.9e6, 2.1e6, "g/m**3") assert np.all(only_threes["gas", "density"] == 3) assert len(only_threes["gas", "density"]) == 10 only_threes = ad.include_outside(("gas", "density"), 0.9e6, 2.1e6, "g/m**3") assert np.all(only_threes["gas", "density"] == 3) assert len(only_threes["gas", "density"]) == 10 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_extract_regions.py0000644000175100001770000000737514714401662022632 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_almost_equal, assert_equal from yt.loaders import load from yt.testing import ( fake_amr_ds, fake_random_ds, requires_file, requires_module, ) def setup_module(): from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True def test_cut_region(): # We decompose in different ways for nprocs in [1, 2, 4, 8]: ds = fake_random_ds( 64, nprocs=nprocs, fields=("density", "temperature", "velocity_x"), units=("g/cm**3", "K", "cm/s"), ) # We'll test two objects dd = ds.all_data() r = dd.cut_region( [ "obj['gas', 'temperature'] > 0.5", "obj['gas', 'density'] < 0.75", "obj['gas', 'velocity_x'] > 0.25", ] ) t = ( (dd["gas", "temperature"] > 0.5) & (dd["gas", "density"] < 0.75) & (dd["gas", "velocity_x"] > 0.25) ) assert_equal(np.all(r["gas", "temperature"] > 0.5), True) assert_equal(np.all(r["gas", "density"] < 0.75), True) assert_equal(np.all(r["gas", "velocity_x"] > 0.25), True) assert_equal(np.sort(dd["gas", "density"][t]), np.sort(r["gas", "density"])) assert_equal(np.sort(dd["index", "x"][t]), np.sort(r["index", "x"])) r2 = r.cut_region(["obj['gas', 'temperature'] < 0.75"]) t2 = r["gas", "temperature"] < 0.75 assert_equal( np.sort(r2["gas", "temperature"]), np.sort(r["gas", "temperature"][t2]) ) assert_equal(np.all(r2["gas", "temperature"] < 0.75), True) # Now we can test some projections dd = ds.all_data() cr = dd.cut_region(["obj['index', 'ones'] > 0"]) for weight in [None, ("gas", "density")]: p1 = ds.proj(("gas", "density"), 0, data_source=dd, weight_field=weight) p2 = ds.proj(("gas", "density"), 0, data_source=cr, weight_field=weight) for f in p1.field_data: assert_almost_equal(p1[f], p2[f]) cr = dd.cut_region(["obj['gas', 'density'] > 0.25"]) p2 = ds.proj(("gas", "density"), 2, data_source=cr) assert_equal(p2["gas", "density"].max() > 0.25, True) p2 = ds.proj( ("gas", "density"), 2, data_source=cr, weight_field=("gas", "density") ) assert_equal(p2["gas", "density"].max() > 0.25, True) def 
test_region_and_particles(): ds = fake_amr_ds(particles=10000) ad = ds.all_data() reg = ad.cut_region('obj["index", "x"] < .5') mask = ad["all", "particle_position_x"] < 0.5 expected = np.sort(ad["all", "particle_position_x"][mask].value) result = np.sort(reg["all", "particle_position_x"]) assert_equal(expected.shape, result.shape) assert_equal(expected, result) ISOGAL = "IsolatedGalaxy/galaxy0030/galaxy0030" @requires_module("h5py") @requires_file(ISOGAL) def test_region_chunked_read(): # see #2104 ds = load("IsolatedGalaxy/galaxy0030/galaxy0030") sp = ds.sphere((0.5, 0.5, 0.5), (2, "kpc")) dense_sp = sp.cut_region(['obj["gas", "H_p0_number_density"]>= 1e-2']) dense_sp.quantities.angular_momentum_vector() @requires_module("h5py") @requires_file(ISOGAL) def test_chained_cut_region(): # see Issue #2233 ds = load(ISOGAL) base = ds.disk([0.5, 0.5, 0.5], [0, 0, 1], (4, "kpc"), (10, "kpc")) c1 = "(obj['index', 'cylindrical_radius'].in_units('kpc') > 2.0)" c2 = "(obj['gas', 'density'].to('g/cm**3') > 1e-26)" cr12 = base.cut_region([c1, c2]) cr1 = base.cut_region([c1]) cr12c = cr1.cut_region([c2]) field = ("index", "cell_volume") assert_equal( cr12.quantities.total_quantity(field), cr12c.quantities.total_quantity(field) ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_firefly.py0000644000175100001770000002076214714401662021065 0ustar00runnerdockerimport numpy as np import pytest from numpy.testing import assert_array_equal from yt.testing import fake_particle_ds, requires_module from yt.utilities.exceptions import YTFieldNotFound @requires_module("firefly") def test_firefly_JSON_string(): ds = fake_particle_ds() ad = ds.all_data() reader = ad.create_firefly_object( None, velocity_units="cm/s", coordinate_units="cm", ) reader.writeToDisk(write_to_disk=False, file_extension=".json") ## reader.JSON was not output to string correctly ## either Firefly is damaged or needs a hotfix-- try reinstalling. ## if that doesn't work contact the developers ## at github.com/ageller/Firefly/issues. 
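    # With write_to_disk=False, the call above is expected to leave the JSON
    # payload on the reader object itself; a non-empty reader.JSON is all
    # this smoke test checks for.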
assert len(reader.JSON) > 0 @requires_module("firefly") def test_firefly_write_to_disk(tmp_path): tmpdir = str(tmp_path) # create_firefly_object needs a str, not PosixPath ds = fake_particle_ds() ad = ds.all_data() reader = ad.create_firefly_object( tmpdir, velocity_units="cm/s", coordinate_units="cm", match_any_particle_types=True, # Explicitly specifying to avoid deprecation warning ) reader.writeToDisk() @pytest.fixture def firefly_test_dataset(): # create dataset ds_fields = [ # Assumed present ("pt1", "particle_position_x"), ("pt1", "particle_position_y"), ("pt1", "particle_position_z"), ("pt2", "particle_position_x"), ("pt2", "particle_position_y"), ("pt2", "particle_position_z"), # User input ("pt1", "common_field"), ("pt2", "common_field"), ("pt2", "pt2only_field"), ] ds_field_units = ["code_length"] * 9 ds_negative = [0] * 9 ds = fake_particle_ds( fields=ds_fields, units=ds_field_units, negative=ds_negative, ) return ds @requires_module("firefly") @pytest.mark.parametrize( "fields_to_include,fields_units", [ (None, None), # Test default values ([], []), # Test empty fields ], ) def test_field_empty_specification( firefly_test_dataset, fields_to_include, fields_units ): dd = firefly_test_dataset.all_data() reader = dd.create_firefly_object( fields_to_include=fields_to_include, fields_units=fields_units, coordinate_units="code_length", ) assert_array_equal( dd["pt1", "relative_particle_position"].d, reader.particleGroups[0].coordinates, ) assert_array_equal( dd["pt2", "relative_particle_position"].d, reader.particleGroups[1].coordinates, ) @requires_module("firefly") def test_field_unique_string_specification(firefly_test_dataset): # Test unique field (pt2only_field) dd = firefly_test_dataset.all_data() # Unique field string will fallback to "all" field type and fail # as nonexistent ("all", "pt2only_field") unless we set # match_any_particle_types=True reader = dd.create_firefly_object( fields_to_include=["pt2only_field"], fields_units=["code_length"], coordinate_units="code_length", match_any_particle_types=True, ) pt1 = reader.particleGroups[0] pt2 = reader.particleGroups[1] assert_array_equal( dd["pt1", "relative_particle_position"].d, pt1.coordinates, ) assert_array_equal( dd["pt2", "relative_particle_position"].d, pt2.coordinates, ) assert "pt2only_field" not in pt1.field_names assert "pt2only_field" in pt2.field_names arrind = np.flatnonzero(pt2.field_names == "pt2only_field")[0] assert_array_equal(dd["pt2", "pt2only_field"].d, pt2.field_arrays[arrind]) @requires_module("firefly") def test_field_common_string_specification(firefly_test_dataset): # Test common field (common_field) dd = firefly_test_dataset.all_data() # Common field string will be ambiguous and fail # unless we set match_any_particle_types=True reader = dd.create_firefly_object( fields_to_include=["common_field"], fields_units=["code_length"], coordinate_units="code_length", match_any_particle_types=True, ) pt1 = reader.particleGroups[0] pt2 = reader.particleGroups[1] assert_array_equal( dd["pt1", "relative_particle_position"].d, pt1.coordinates, ) assert_array_equal( dd["pt2", "relative_particle_position"].d, pt2.coordinates, ) assert "common_field" in pt1.field_names assert "common_field" in pt2.field_names arrind = np.flatnonzero(pt1.field_names == "common_field")[0] assert_array_equal(dd["pt1", "common_field"].d, pt1.field_arrays[arrind]) arrind = np.flatnonzero(pt2.field_names == "common_field")[0] assert_array_equal(dd["pt2", "common_field"].d, pt2.field_arrays[arrind]) @requires_module("firefly") 
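# The parametrized cases below exercise tuple-style field specifications: an
# exact (ptype, fname) tuple should attach the field only to that particle
# group, while ("all", fname) should attach it to every group.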
@pytest.mark.parametrize( "fields_to_include,fields_units", [ ( [("pt2", "pt2only_field")], ["code_length"], ), # Test existing field tuple (pt2, pt2only_field) ( [("pt1", "common_field")], ["code_length"], ), # Test that tuples only bring in referenced particleGroup ( [("all", "common_field")], ["code_length"], ), # Test that "all" brings in all particleGroups ], ) def test_field_tuple_specification( firefly_test_dataset, fields_to_include, fields_units, ): dd = firefly_test_dataset.all_data() reader = dd.create_firefly_object( fields_to_include=fields_to_include, fields_units=fields_units, coordinate_units="code_length", ) assert_array_equal( dd["pt1", "relative_particle_position"].d, reader.particleGroups[0].coordinates, ) assert_array_equal( dd["pt2", "relative_particle_position"].d, reader.particleGroups[1].coordinates, ) all_pgs = reader.particleGroups all_pgs_names = ["pt1", "pt2"] for field in fields_to_include: ftype, fname = field for pgi in range(2): pg = all_pgs[pgi] if ftype == all_pgs_names[pgi]: assert fname in pg.field_names arrind = np.flatnonzero(pg.field_names == fname)[0] assert_array_equal(dd[field].d, pg.field_arrays[arrind]) elif ftype == "all": assert fname in pg.field_names this_pg_name = all_pgs_names[pgi] arrind = np.flatnonzero(pg.field_names == fname)[0] assert_array_equal(dd[this_pg_name, fname].d, pg.field_arrays[arrind]) else: assert fname not in pg.field_names @requires_module("firefly") @pytest.mark.parametrize( "fields_to_include,fields_units,ErrorType", [ ( ["dinos"], ["code_length"], YTFieldNotFound, ), # Test nonexistent field (dinos) ( ["common_field"], ["code_length"], ValueError, ), # Test ambiguous field (match_any_particle_types=False) ( [("pt1", "pt2only_field")], ["code_length"], YTFieldNotFound, ), # Test nonexistent field tuple (pt1, pt2only_field) ], ) def test_field_invalid_specification( firefly_test_dataset, fields_to_include, fields_units, ErrorType ): dd = firefly_test_dataset.all_data() # Note that we have specified match_any_particle_types as False since # that is the behavior expected in the future with pytest.raises(ErrorType): dd.create_firefly_object( fields_to_include=fields_to_include, fields_units=fields_units, coordinate_units="code_length", match_any_particle_types=False, ) @requires_module("firefly") def test_field_mixed_specification(firefly_test_dataset): dd = firefly_test_dataset.all_data() reader = dd.create_firefly_object( fields_to_include=["pt2only_field", ("pt1", "common_field")], fields_units=["code_length", "code_length"], ) pt1 = reader.particleGroups[0] pt2 = reader.particleGroups[1] assert "common_field" in pt1.field_names assert "common_field" not in pt2.field_names arrind = np.flatnonzero(pt1.field_names == "common_field")[0] assert_array_equal(dd["pt1", "common_field"].d, pt1.field_arrays[arrind]) assert "pt2only_field" not in pt1.field_names assert "pt2only_field" in pt2.field_names arrind = np.flatnonzero(pt2.field_names == "pt2only_field")[0] assert_array_equal(dd["pt2", "pt2only_field"].d, pt2.field_arrays[arrind]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_fluxes.py0000644000175100001770000001217314714401662020730 0ustar00runnerdockerimport os import shutil import tempfile from unittest import TestCase import numpy as np from numpy.testing import assert_almost_equal, assert_equal from yt.testing import fake_random_ds def setup_module(): from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True def 
test_flux_calculation(): ds = fake_random_ds(64, nprocs=4) dd = ds.all_data() surf = ds.surface(dd, ("index", "x"), 0.51) assert_equal(surf["index", "x"], 0.51) flux = surf.calculate_flux( ("index", "ones"), ("index", "zeros"), ("index", "zeros"), ("index", "ones") ) assert_almost_equal(flux.value, 1.0, 12) assert_equal(str(flux.units), "cm**2") flux2 = surf.calculate_flux( ("index", "ones"), ("index", "zeros"), ("index", "zeros") ) assert_almost_equal(flux2.value, 1.0, 12) assert_equal(str(flux2.units), "cm**2") def test_sampling(): ds = fake_random_ds(64, nprocs=4) dd = ds.all_data() for i, ax in enumerate([("index", "x"), ("index", "y"), ("index", "z")]): surf = ds.surface(dd, ax, 0.51) surf.get_data(ax, sample_type="vertex") assert_equal(surf.vertex_samples[ax], surf.vertices[i, :]) assert_equal(str(surf.vertices.units), "code_length") dens = surf["gas", "density"] vert_shape = surf.vertices.shape assert_equal(dens.shape[0], vert_shape[1] // vert_shape[0]) assert_equal(str(dens.units), "g/cm**3") class ExporterTests(TestCase): def setUp(self): self.curdir = os.getcwd() self.tmpdir = tempfile.mkdtemp() os.chdir(self.tmpdir) def tearDown(self): os.chdir(self.curdir) shutil.rmtree(self.tmpdir) def test_export_ply(self): ds = fake_random_ds(64, nprocs=4) dd = ds.all_data() surf = ds.surface(dd, ("index", "x"), 0.51) surf.export_ply("my_ply.ply", bounds=[(0, 1), (0, 1), (0, 1)]) assert os.path.exists("my_ply.ply") surf.export_ply( "my_ply2.ply", bounds=[(0, 1), (0, 1), (0, 1)], sample_type="vertex", color_field=("gas", "density"), ) assert os.path.exists("my_ply2.ply") def test_export_obj(self): ds = fake_random_ds( 16, nprocs=4, particles=16**3, fields=("density", "temperature"), units=("g/cm**3", "K"), ) sp = ds.sphere("max", (1.0, "cm")) surf = ds.surface(sp, ("gas", "density"), 0.5) surf.export_obj("my_galaxy", transparency=1.0, dist_fac=1.0) assert os.path.exists("my_galaxy.obj") assert os.path.exists("my_galaxy.mtl") mi, ma = sp.quantities.extrema(("gas", "temperature")) rhos = [0.5, 0.25] trans = [0.5, 1.0] for i, r in enumerate(rhos): basename = "my_galaxy_color" surf = ds.surface(sp, ("gas", "density"), r) surf.export_obj( basename, transparency=trans[i], color_field=("gas", "temperature"), dist_fac=1.0, plot_index=i, color_field_max=ma, color_field_min=mi, ) assert os.path.exists(f"{basename}.obj") assert os.path.exists(f"{basename}.mtl") def _Emissivity(field, data): return ( data["gas", "density"] * data["gas", "density"] * np.sqrt(data["gas", "temperature"]) ) ds.add_field( ("gas", "emissivity"), sampling_type="cell", function=_Emissivity, units=r"g**2*sqrt(K)/cm**6", ) for i, r in enumerate(rhos): basename = "my_galaxy_emis" surf = ds.surface(sp, ("gas", "density"), r) surf.export_obj( basename, transparency=trans[i], color_field=("gas", "temperature"), emit_field=("gas", "emissivity"), dist_fac=1.0, plot_index=i, ) basename = "my_galaxy_emis" assert os.path.exists(f"{basename}.obj") assert os.path.exists(f"{basename}.mtl") def test_correct_output_unit_fake_ds(): # see issue #1368 ds = fake_random_ds(64, nprocs=4, particles=16**3) x = y = z = 0.5 sp1 = ds.sphere((x, y, z), (300, "kpc")) Nmax = sp1.max(("gas", "density")) sur = ds.surface(sp1, ("gas", "density"), 0.5 * Nmax) sur["index", "x"][0] def test_radius_surface(): # see #1407 ds = fake_random_ds(64, nprocs=4, particles=16**3, length_unit=10.0) reg = ds.all_data() sp = ds.sphere(ds.domain_center, (0.5, "code_length")) for obj in [reg, sp]: for rad in [0.05, 0.1, 0.4]: surface = ds.surface(obj, ("index", "radius"), (rad, 
"code_length")) assert_almost_equal(surface.surface_area.v, 4 * np.pi * rad**2, decimal=2) verts = surface.vertices for i in range(3): assert_almost_equal(verts[i, :].min().v, 0.5 - rad, decimal=2) assert_almost_equal(verts[i, :].max().v, 0.5 + rad, decimal=2) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_glue.py0000644000175100001770000000104014714401662020345 0ustar00runnerdockerimport pytest from yt.config import ytcfg from yt.testing import fake_random_ds, requires_module_pytest as requires_module @pytest.fixture def within_testing(): old_value = ytcfg["yt", "internals", "within_testing"] ytcfg["yt", "internals", "within_testing"] = True yield ytcfg["yt", "internals", "within_testing"] = old_value @pytest.mark.usefixtures("within_testing") @requires_module("glue", "astropy") def test_glue_data_object(): ds = fake_random_ds(16) ad = ds.all_data() ad.to_glue([("gas", "density")]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_image_array.py0000644000175100001770000001112114714401662021672 0ustar00runnerdockerimport os import shutil import tempfile import unittest import numpy as np from numpy.testing import assert_equal from yt.data_objects.image_array import ImageArray from yt.testing import requires_module old_settings = None def setup_module(): global old_settings from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True old_settings = np.geterr() np.seterr(all="ignore") def teardown_module(): np.seterr(**old_settings) def dummy_image(kstep, nlayers): im = np.zeros([64, 128, nlayers]) for i in range(im.shape[0]): for k in range(im.shape[2]): im[i, :, k] = np.linspace(0.0, kstep * k, im.shape[1]) return im def test_rgba_rescale(): im_arr = ImageArray(dummy_image(10.0, 4)) new_im = im_arr.rescale(inline=False) assert_equal(im_arr[:, :, :3].max(), 2 * 10.0) assert_equal(im_arr[:, :, 3].max(), 3 * 10.0) assert_equal(new_im[:, :, :3].sum(axis=2).max(), 1.0) assert_equal(new_im[:, :, 3].max(), 1.0) im_arr.rescale() assert_equal(im_arr[:, :, :3].sum(axis=2).max(), 1.0) assert_equal(im_arr[:, :, 3].max(), 1.0) im_arr.rescale(cmax=0.0, amax=0.0) assert_equal(im_arr[:, :, :3].sum(axis=2).max(), 1.0) assert_equal(im_arr[:, :, 3].max(), 1.0) class TestImageArray(unittest.TestCase): tmpdir = None curdir = None def setUp(self): self.tmpdir = tempfile.mkdtemp() self.curdir = os.getcwd() os.chdir(self.tmpdir) def test_image_arry_units(self): im_arr = ImageArray(dummy_image(0.3, 3), units="cm") assert str(im_arr.units) == "cm" new_im = im_arr.in_units("km") assert str(new_im.units) == "km" @requires_module("h5py") def test_image_array_hdf5(self): myinfo = { "field": "dinosaurs", "east_vector": np.array([1.0, 0.0, 0.0]), "north_vector": np.array([0.0, 0.0, 1.0]), "normal_vector": np.array([0.0, 1.0, 0.0]), "width": 0.245, "type": "rendering", } im_arr = ImageArray(dummy_image(0.3, 3), units="cm", info=myinfo) im_arr.save("test_3d_ImageArray", png=False) im = np.zeros([64, 128]) for i in range(im.shape[0]): im[i, :] = np.linspace(0.0, 0.3 * 2, im.shape[1]) myinfo = { "field": "dinosaurs", "east_vector": np.array([1.0, 0.0, 0.0]), "north_vector": np.array([0.0, 0.0, 1.0]), "normal_vector": np.array([0.0, 1.0, 0.0]), "width": 0.245, "type": "rendering", } im_arr = ImageArray(im, info=myinfo, units="cm") im_arr.save("test_2d_ImageArray", png=False) im_arr.save("test_2d_ImageArray_ds", png=False, dataset_name="Random_DS") 
def test_image_array_rgb_png(self): im = np.zeros([64, 128]) for i in range(im.shape[0]): im[i, :] = np.linspace(0.0, 0.3 * 2, im.shape[1]) im_arr = ImageArray(im) im_arr.save("standard-image", hdf5=False) im_arr = ImageArray(dummy_image(10.0, 3)) im_arr.save("standard-png", hdf5=False) def test_image_array_rgba_png(self): im_arr = ImageArray(dummy_image(10.0, 4)) im_arr.write_png("standard") im_arr.write_png("non-scaled.png", rescale=False) im_arr.write_png("black_bg.png", background="black") im_arr.write_png("white_bg.png", background="white") im_arr.write_png("green_bg.png", background=[0.0, 1.0, 0.0, 1.0]) im_arr.write_png("transparent_bg.png", background=None) def test_image_array_background(self): im_arr = ImageArray(dummy_image(10.0, 4)) im_arr.rescale() new_im = im_arr.add_background_color([1.0, 0.0, 0.0, 1.0], inline=False) new_im.write_png("red_bg.png") im_arr.add_background_color("black") im_arr.write_png("black_bg2.png") def test_write_image(self): im_arr = ImageArray(dummy_image(10.0, 4)) im_arr.write_image("with_cmap", cmap_name="hot") im_arr.write_image("channel_1.png", channel=1) def test_clipping_value(self): im_arr = ImageArray(dummy_image(10.0, 4)) clip_val1 = im_arr._clipping_value(1) clip_val2 = im_arr._clipping_value(1, im=im_arr) assert clip_val2 == clip_val1 clip_val3 = im_arr._clipping_value(6) assert clip_val3 > clip_val2 im_arr[:] = 1.0 # std will be 0, mean will be 1, so clip value will be 1 assert im_arr._clipping_value(1) == 1.0 def tearDown(self): os.chdir(self.curdir) # clean up shutil.rmtree(self.tmpdir) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_io_geometry.py0000644000175100001770000000261414714401662021743 0ustar00runnerdockerimport os from tempfile import TemporaryDirectory import numpy as np from yt.frontends.ytdata.api import save_as_dataset from yt.frontends.ytdata.data_structures import YTDataContainerDataset from yt.loaders import load from yt.testing import fake_amr_ds, requires_module from yt.units import YTQuantity @requires_module("h5py") def test_preserve_geometric_properties(): for geom in ("cartesian", "cylindrical", "spherical"): ds1 = fake_amr_ds(fields=[("gas", "density")], units=["g/cm**3"], geometry=geom) ad = ds1.all_data() with TemporaryDirectory() as tmpdir: tmpf = os.path.join(tmpdir, "savefile.h5") fn = ad.save_as_dataset(tmpf, fields=[("gas", "density")]) ds2 = load(fn) assert isinstance(ds2, YTDataContainerDataset) dfl = ds2.derived_field_list assert ds1.geometry == ds2.geometry == geom expected = set(ds1.coordinates.axis_order) actual = {fname for ftype, fname in dfl} assert expected.difference(actual) == set() @requires_module("h5py") def test_default_to_cartesian(): data = {"density": np.random.random(128)} ds_attrs = {"current_time": YTQuantity(10, "Myr")} with TemporaryDirectory() as tmpdir: tmpf = os.path.join(tmpdir, "savefile.h5") fn = save_as_dataset(ds_attrs, tmpf, data) ds2 = load(fn) assert ds2.geometry == "cartesian" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_numpy_ops.py0000644000175100001770000001524514714401662021456 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_equal from yt.testing import fake_amr_ds, fake_random_ds def setup_module(): from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True def test_mean_sum_integrate(): for nprocs in [-1, 1, 2, 16]: if nprocs == -1: ds = 
fake_amr_ds(fields=("density",), units=("g/cm**3",), particles=20) else: ds = fake_random_ds( 32, nprocs=nprocs, fields=("density",), units=("g/cm**3",), particles=20 ) ad = ds.all_data() # Sums q = ad.sum(("gas", "density")) q1 = ad.quantities.total_quantity(("gas", "density")) assert_equal(q, q1) q = ad.sum(("all", "particle_ones")) q1 = ad.quantities.total_quantity(("all", "particle_ones")) assert_equal(q, q1) # Weighted Averages w = ad.mean(("gas", "density")) w1 = ad.quantities.weighted_average_quantity( ("gas", "density"), ("index", "ones") ) assert_equal(w, w1) w = ad.mean(("gas", "density"), weight=("gas", "density")) w1 = ad.quantities.weighted_average_quantity( ("gas", "density"), ("gas", "density") ) assert_equal(w, w1) w = ad.mean(("all", "particle_mass")) w1 = ad.quantities.weighted_average_quantity( ("all", "particle_mass"), ("all", "particle_ones") ) assert_equal(w, w1) w = ad.mean(("all", "particle_mass"), weight=("all", "particle_mass")) w1 = ad.quantities.weighted_average_quantity( ("all", "particle_mass"), ("all", "particle_mass") ) assert_equal(w, w1) # Projections p = ad.sum(("gas", "density"), axis=0) p1 = ds.proj(("gas", "density"), 0, data_source=ad, method="sum") assert_equal(p["gas", "density"], p1["gas", "density"]) # Check by axis-name p = ad.sum(("gas", "density"), axis="x") assert_equal(p["gas", "density"], p1["gas", "density"]) # Now we check proper projections p = ad.integrate(("gas", "density"), axis=0) p1 = ds.proj(("gas", "density"), 0, data_source=ad) assert_equal(p["gas", "density"], p1["gas", "density"]) # Check by axis-name p = ad.integrate(("gas", "density"), axis="x") assert_equal(p["gas", "density"], p1["gas", "density"]) def test_min_max(): for nprocs in [-1, 1, 2, 16]: fields = ["density", "temperature"] units = ["g/cm**3", "K"] if nprocs == -1: ds = fake_amr_ds(fields=fields, units=units, particles=20) else: ds = fake_random_ds( 32, nprocs=nprocs, fields=fields, units=units, particles=20 ) ad = ds.all_data() q = ad.min(("gas", "density")).v assert_equal(q, ad["gas", "density"].min()) q = ad.max(("gas", "density")).v assert_equal(q, ad["gas", "density"].max()) q = ad.min(("all", "particle_mass")).v assert_equal(q, ad["all", "particle_mass"].min()) q = ad.max(("all", "particle_mass")).v assert_equal(q, ad["all", "particle_mass"].max()) ptp = ad.ptp(("gas", "density")).v assert_equal(ptp, ad["gas", "density"].max() - ad["gas", "density"].min()) ptp = ad.ptp(("all", "particle_mass")).v assert_equal( ptp, ad["all", "particle_mass"].max() - ad["all", "particle_mass"].min() ) p = ad.max(("gas", "density"), axis=1) p1 = ds.proj(("gas", "density"), 1, data_source=ad, method="max") assert_equal(p["gas", "density"], p1["gas", "density"]) p = ad.min(("gas", "density"), axis=1) p1 = ds.proj(("gas", "density"), 1, data_source=ad, method="min") assert_equal(p["gas", "density"], p1["gas", "density"]) p = ad.max(("gas", "density"), axis="y") p1 = ds.proj(("gas", "density"), 1, data_source=ad, method="max") assert_equal(p["gas", "density"], p1["gas", "density"]) p = ad.min(("gas", "density"), axis="y") p1 = ds.proj(("gas", "density"), 1, data_source=ad, method="min") assert_equal(p["gas", "density"], p1["gas", "density"]) # Test that we can get multiple in a single pass qrho, qtemp = ad.max([("gas", "density"), ("gas", "temperature")]) assert_equal(qrho, ad["gas", "density"].max()) assert_equal(qtemp, ad["gas", "temperature"].max()) qrho, qtemp = ad.min([("gas", "density"), ("gas", "temperature")]) assert_equal(qrho, ad["gas", "density"].min()) 
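        # As with max above, a single ad.min() call over a field list
        # returns one value per field, each matching the corresponding
        # per-field .min() computed directly.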
assert_equal(qtemp, ad["gas", "temperature"].min()) def test_argmin(): fields = ["density", "temperature"] units = ["g/cm**3", "K"] for nprocs in [-1, 1, 2, 16]: if nprocs == -1: ds = fake_amr_ds(fields=fields, units=units) else: ds = fake_random_ds( 32, nprocs=nprocs, fields=fields, units=units, ) ad = ds.all_data() q = ad.argmin(("gas", "density"), axis=[("gas", "density")]) assert_equal(q, ad["gas", "density"].min()) q1, q2 = ad.argmin( ("gas", "density"), axis=[("gas", "density"), ("gas", "temperature")] ) mi = np.argmin(ad["gas", "density"]) assert_equal(q1, ad["gas", "density"].min()) assert_equal(q2, ad["gas", "temperature"][mi]) pos = ad.argmin(("gas", "density")) mi = np.argmin(ad["gas", "density"]) assert_equal(pos[0], ad["index", "x"][mi]) assert_equal(pos[1], ad["index", "y"][mi]) assert_equal(pos[2], ad["index", "z"][mi]) def test_argmax(): fields = ["density", "temperature"] units = ["g/cm**3", "K"] for nprocs in [-1, 1, 2, 16]: if nprocs == -1: ds = fake_amr_ds(fields=fields, units=units) else: ds = fake_random_ds( 32, nprocs=nprocs, fields=fields, units=units, ) ad = ds.all_data() q = ad.argmax(("gas", "density"), axis=[("gas", "density")]) assert_equal(q, ad["gas", "density"].max()) q1, q2 = ad.argmax( ("gas", "density"), axis=[("gas", "density"), ("gas", "temperature")] ) mi = np.argmax(ad["gas", "density"]) assert_equal(q1, ad["gas", "density"].max()) assert_equal(q2, ad["gas", "temperature"][mi]) pos = ad.argmax(("gas", "density")) mi = np.argmax(ad["gas", "density"]) assert_equal(pos[0], ad["index", "x"][mi]) assert_equal(pos[1], ad["index", "y"][mi]) assert_equal(pos[2], ad["index", "z"][mi]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_octree.py0000644000175100001770000000556314714401662020710 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_almost_equal, assert_equal from yt.testing import fake_sph_grid_ds n_ref = 4 def test_building_tree(): """ Build an octree and make sure correct number of particles """ ds = fake_sph_grid_ds() octree = ds.octree(n_ref=n_ref) assert octree["index", "x"].shape[0] == 17 def test_sph_interpolation_scatter(): """ Generate an octree, perform some SPH interpolation and check with some answer testing """ ds = fake_sph_grid_ds(hsml_factor=26.0) ds._sph_ptypes = ("io",) ds.use_sph_normalization = False octree = ds.octree(n_ref=n_ref) density = octree["io", "density"] answers = np.array( [ 1.00434706, 1.00434706, 1.00434706, 1.00434706, 1.00434706, 1.00434706, 1.00434706, 0.7762907, 0.89250848, 0.89250848, 0.97039088, 0.89250848, 0.97039088, 0.97039088, 1.01156175, ] ) assert_almost_equal(density.d, answers) def test_sph_interpolation_gather(): """ Generate an octree, perform some SPH interpolation and check with some answer testing """ ds = fake_sph_grid_ds(hsml_factor=26.0) ds.index ds._sph_ptypes = ("io",) ds.sph_smoothing_style = "gather" ds.num_neighbors = 5 ds.use_sph_normalization = False octree = ds.octree(n_ref=n_ref) density = octree["io", "density"] answers = np.array( [ 0.59240874, 0.59240874, 0.59240874, 0.59240874, 0.59240874, 0.59240874, 0.59240874, 0.10026846, 0.77014968, 0.77014968, 0.96127825, 0.77014968, 0.96127825, 0.96127825, 1.21183996, ] ) assert_almost_equal(density.d, answers) def test_octree_properties(): """ Generate an octree, and test the refinement, depth and sizes of the cells. 
""" ds = fake_sph_grid_ds() octree = ds.octree(n_ref=n_ref) depth = octree["index", "depth"] depth_ans = np.array([0] + [1] * 8 + [2] * 8, dtype=np.int64) assert_equal(depth, depth_ans) size_ans = np.zeros((depth.shape[0], 3), dtype=np.float64) for i in range(size_ans.shape[0]): size_ans[i, :] = (ds.domain_right_edge - ds.domain_left_edge) / 2.0 ** depth[i] dx = octree["index", "dx"].d assert_almost_equal(dx, size_ans[:, 0]) dy = octree["index", "dy"].d assert_almost_equal(dy, size_ans[:, 1]) dz = octree["index", "dz"].d assert_almost_equal(dz, size_ans[:, 2]) refined = octree["index", "refined"] refined_ans = np.array([True] + [False] * 7 + [True] + [False] * 8, dtype=np.bool_) assert_equal(refined, refined_ans) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_ortho_rays.py0000644000175100001770000000164314714401662021613 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_equal from yt.testing import fake_random_ds def test_ortho_ray(): ds = fake_random_ds(64, nprocs=8) dx = (ds.domain_right_edge - ds.domain_left_edge) / ds.domain_dimensions axes = ["x", "y", "z"] for ax in range(3): ocoord = ds.arr(np.random.random(2), "code_length") my_oray = ds.ortho_ray(ax, ocoord) my_axes = ds.coordinates.x_axis[ax], ds.coordinates.y_axis[ax] # find the cells intersected by the ortho ray my_all = ds.all_data() my_cells = ( np.abs(my_all["index", axes[my_axes[0]]] - ocoord[0]) <= 0.5 * dx[my_axes[0]] ) & ( np.abs(my_all["index", axes[my_axes[1]]] - ocoord[1]) <= 0.5 * dx[my_axes[1]] ) assert_equal( my_oray["gas", "density"].sum(), my_all["gas", "density"][my_cells].sum(), ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_particle_filter.py0000644000175100001770000001572514714401662022600 0ustar00runnerdockerimport os import shutil import tempfile import numpy as np from nose.tools import assert_raises from numpy.testing import assert_equal from yt.data_objects.particle_filters import add_particle_filter, particle_filter from yt.testing import fake_random_ds, fake_sph_grid_ds from yt.utilities.exceptions import YTIllDefinedFilter, YTIllDefinedParticleFilter from yt.visualization.plot_window import ProjectionPlot def test_add_particle_filter(): """Test particle filters created via add_particle_filter This accesses a deposition field using the particle filter, which was a problem in previous versions on this dataset because there are chunks with no stars in them. """ def stars(pfilter, data): filter_field = (pfilter.filtered_type, "particle_mass") return data[filter_field] > 0.5 add_particle_filter( "stars1", function=stars, filtered_type="all", requires=["particle_mass"] ) ds = fake_random_ds(16, nprocs=8, particles=16) ds.add_particle_filter("stars1") assert ("deposit", "stars1_cic") in ds.derived_field_list # Test without requires field add_particle_filter("stars2", function=stars) ds = fake_random_ds(16, nprocs=8, particles=16) ds.add_particle_filter("stars2") assert ("deposit", "stars2_cic") in ds.derived_field_list # Test adding filter with fields not defined on the ds with assert_raises(YTIllDefinedParticleFilter) as ex: add_particle_filter( "bad_stars", function=stars, filtered_type="all", requires=["wrong_field"] ) ds.add_particle_filter("bad_stars") actual = str(ex.exception) desired = ( "\nThe fields\n\t('all', 'wrong_field'),\nrequired by the" ' "bad_stars" particle filter, are not defined for this dataset.' 
) assert_equal(actual, desired) def test_add_particle_filter_overriding(): """Test the add_particle_filter overriding""" from yt.data_objects.particle_filters import filter_registry from yt.funcs import mylog def star_0(pfilter, data): pass def star_1(pfilter, data): pass # Use a closure to store whether the warning was called def closure(status): def warning_patch(*args, **kwargs): status[0] = True def was_called(): return status[0] return warning_patch, was_called ## Test 1: we add a dummy particle filter add_particle_filter( "dummy", function=star_0, filtered_type="all", requires=["creation_time"] ) assert "dummy" in filter_registry assert_equal(filter_registry["dummy"].function, star_0) ## Test 2: we add another dummy particle filter. ## a warning is expected. We use the above closure to ## check that. # Store the original warning function warning = mylog.warning monkey_warning, monkey_patch_was_called = closure([False]) mylog.warning = monkey_warning add_particle_filter( "dummy", function=star_1, filtered_type="all", requires=["creation_time"] ) assert_equal(filter_registry["dummy"].function, star_1) assert_equal(monkey_patch_was_called(), True) # Restore the original warning function mylog.warning = warning def test_particle_filter_decorator(): """Test the particle_filter decorator""" @particle_filter(filtered_type="all", requires=["particle_mass"]) def heavy_stars(pfilter, data): filter_field = (pfilter.filtered_type, "particle_mass") return data[filter_field] > 0.5 ds = fake_random_ds(16, nprocs=8, particles=16) ds.add_particle_filter("heavy_stars") assert "heavy_stars" in ds.particle_types assert ("deposit", "heavy_stars_cic") in ds.derived_field_list # Test name of particle filter @particle_filter(name="my_stars", filtered_type="all", requires=["particle_mass"]) def custom_stars(pfilter, data): filter_field = (pfilter.filtered_type, "particle_mass") return data[filter_field] == 0.5 ds = fake_random_ds(16, nprocs=8, particles=16) ds.add_particle_filter("my_stars") assert "my_stars" in ds.particle_types assert ("deposit", "my_stars_cic") in ds.derived_field_list def test_particle_filter_exceptions(): @particle_filter(filtered_type="all", requires=["particle_mass"]) def filter1(pfilter, data): return data ds = fake_random_ds(16, nprocs=8, particles=16) ds.add_particle_filter("filter1") ad = ds.all_data() with assert_raises(YTIllDefinedFilter): ad["filter1", "particle_mass"].shape[0] @particle_filter(filtered_type="all", requires=["particle_mass"]) def filter2(pfilter, data): filter_field = ("io", "particle_mass") return data[filter_field] > 0.5 ds.add_particle_filter("filter2") ad = ds.all_data() ad["filter2", "particle_mass"].min() def test_particle_filter_dependency(): """ Test dataset add_particle_filter which should automatically add the dependency of the filter. 
""" @particle_filter(filtered_type="all", requires=["particle_mass"]) def h_stars(pfilter, data): filter_field = (pfilter.filtered_type, "particle_mass") return data[filter_field] > 0.5 @particle_filter(filtered_type="h_stars", requires=["particle_mass"]) def hh_stars(pfilter, data): filter_field = (pfilter.filtered_type, "particle_mass") return data[filter_field] > 0.9 ds = fake_random_ds(16, nprocs=8, particles=16) ds.add_particle_filter("hh_stars") assert "hh_stars" in ds.particle_types assert "h_stars" in ds.particle_types assert ("deposit", "hh_stars_cic") in ds.derived_field_list assert ("deposit", "h_stars_cic") in ds.derived_field_list def test_covering_grid_particle_filter(): @particle_filter(filtered_type="all", requires=["particle_mass"]) def heavy_stars(pfilter, data): filter_field = (pfilter.filtered_type, "particle_mass") return data[filter_field] > 0.5 ds = fake_random_ds(16, nprocs=8, particles=16) ds.add_particle_filter("heavy_stars") for grid in ds.index.grids: cg = ds.covering_grid(grid.Level, grid.LeftEdge, grid.ActiveDimensions) assert_equal( cg["heavy_stars", "particle_mass"].shape[0], grid["heavy_stars", "particle_mass"].shape[0], ) assert_equal( cg["heavy_stars", "particle_mass"].shape[0], grid["heavy_stars", "particle_mass"].shape[0], ) def test_sph_particle_filter_plotting(): ds = fake_sph_grid_ds() @particle_filter("central_gas", requires=["particle_position"], filtered_type="io") def _filter(pfilter, data): coords = np.abs(data[pfilter.filtered_type, "particle_position"]) return (coords[:, 0] < 1.6) & (coords[:, 1] < 1.6) & (coords[:, 2] < 1.6) ds.add_particle_filter("central_gas") plot = ProjectionPlot(ds, "z", ("central_gas", "density")) tmpdir = tempfile.mkdtemp() curdir = os.getcwd() os.chdir(tmpdir) plot.save() os.chdir(curdir) shutil.rmtree(tmpdir) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_particle_trajectories.py0000644000175100001770000000360114714401662023777 0ustar00runnerdockerimport glob import os from yt.config import ytcfg from yt.data_objects.time_series import DatasetSeries from yt.utilities.answer_testing.framework import GenericArrayTest, requires_ds def setup_module(): ytcfg["yt", "internals", "within_testing"] = True data_path = ytcfg.get("yt", "test_data_dir") pfields = [ ("all", "particle_position_x"), ("all", "particle_position_y"), ("all", "particle_position_z"), ] vfields = [ ("all", "particle_velocity_x"), ("all", "particle_velocity_y"), ("all", "particle_velocity_z"), ] @requires_ds("Orbit/orbit_hdf5_chk_0000") def test_orbit_traj(): fields = ["particle_velocity_x", "particle_velocity_y", "particle_velocity_z"] my_fns = glob.glob(os.path.join(data_path, "Orbit/orbit_hdf5_chk_00[0-9][0-9]")) my_fns.sort() ts = DatasetSeries(my_fns) ds = ts[0] traj = ts.particle_trajectories([1, 2], fields=fields, suppress_logging=True) for field in pfields + vfields: def field_func(name): return traj[field] # noqa: B023 yield GenericArrayTest(ds, field_func, args=[field]) @requires_ds("enzo_tiny_cosmology/DD0000/DD0000") def test_etc_traj(): fields = [ ("all", "particle_velocity_x"), ("all", "particle_velocity_y"), ("all", "particle_velocity_z"), ] my_fns = glob.glob( os.path.join(data_path, "enzo_tiny_cosmology/DD000[0-9]/*.hierarchy") ) my_fns.sort() ts = DatasetSeries(my_fns) ds = ts[0] sp = ds.sphere("max", (0.5, "Mpc")) indices = sp["particle_index"][sp["particle_type"] == 1][:5] traj = ts.particle_trajectories(indices, fields=fields, suppress_logging=True) 
traj.add_fields([("gas", "density")]) for field in pfields + vfields + [("gas", "density")]: def field_func(name): return traj[field] # noqa: B023 yield GenericArrayTest(ds, field_func, args=[field]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_particle_trajectories_pytest.py0000644000175100001770000001110414714401662025404 0ustar00runnerdockerimport numpy as np import pytest from numpy.testing import assert_raises from yt.data_objects.particle_filters import particle_filter from yt.data_objects.time_series import DatasetSeries from yt.testing import fake_particle_ds from yt.utilities.exceptions import YTIllDefinedParticleData pfields = [ ("all", "particle_position_x"), ("all", "particle_position_y"), ("all", "particle_position_z"), ] vfields = [ ("all", "particle_velocity_x"), ("all", "particle_velocity_y"), ("all", "particle_velocity_z"), ] @pytest.fixture def particle_trajectories_test_dataset(): n_particles = 2 n_steps = 2 ids = np.arange(n_particles, dtype="int64") data = {"particle_index": ids} fields = [ "particle_position_x", "particle_position_y", "particle_position_z", "particle_velocity_x", # adding a non-default field "particle_index", ] negative = [False, False, False, True, False] units = ["cm", "cm", "cm", "cm/s", "1"] ts = DatasetSeries( [ fake_particle_ds( fields=fields, negative=negative, units=units, npart=n_particles, data=data, ) for i in range(n_steps) ] ) return ts def test_uniqueness(): n_particles = 2 n_steps = 2 ids = np.arange(n_particles, dtype="int64") % (n_particles // 2) data = {"particle_index": ids} fields = [ "particle_position_x", "particle_position_y", "particle_position_z", "particle_index", ] negative = [False, False, False, False] units = ["cm", "cm", "cm", "1"] ts = DatasetSeries( [ fake_particle_ds( fields=fields, negative=negative, units=units, npart=n_particles, data=data, ) for i in range(n_steps) ] ) assert_raises(YTIllDefinedParticleData, ts.particle_trajectories, [0]) def test_ptype(): n_particles = 100 fields = [ "particle_position_x", "particle_position_y", "particle_position_z", "particle_index", "particle_dummy", ] negative = [False, False, False, False, False] units = ["cm", "cm", "cm", "1", "1"] # Setup filters on the 'particle_dummy' field, keeping only the first 50 @particle_filter(name="dummy", requires=["particle_dummy"]) def dummy(pfilter, data): return data[pfilter.filtered_type, "particle_dummy"] <= n_particles // 2 # Setup fake particle datasets with repeated ids. 
yt-4.4.0/yt/data_objects/tests/test_particle_trajectories_pytest.py

import numpy as np
import pytest
from numpy.testing import assert_raises

from yt.data_objects.particle_filters import particle_filter
from yt.data_objects.time_series import DatasetSeries
from yt.testing import fake_particle_ds
from yt.utilities.exceptions import YTIllDefinedParticleData

pfields = [
    ("all", "particle_position_x"),
    ("all", "particle_position_y"),
    ("all", "particle_position_z"),
]
vfields = [
    ("all", "particle_velocity_x"),
    ("all", "particle_velocity_y"),
    ("all", "particle_velocity_z"),
]


@pytest.fixture
def particle_trajectories_test_dataset():
    n_particles = 2
    n_steps = 2
    ids = np.arange(n_particles, dtype="int64")
    data = {"particle_index": ids}
    fields = [
        "particle_position_x",
        "particle_position_y",
        "particle_position_z",
        "particle_velocity_x",  # adding a non-default field
        "particle_index",
    ]
    negative = [False, False, False, True, False]
    units = ["cm", "cm", "cm", "cm/s", "1"]
    ts = DatasetSeries(
        [
            fake_particle_ds(
                fields=fields,
                negative=negative,
                units=units,
                npart=n_particles,
                data=data,
            )
            for i in range(n_steps)
        ]
    )
    return ts


def test_uniqueness():
    n_particles = 2
    n_steps = 2
    ids = np.arange(n_particles, dtype="int64") % (n_particles // 2)
    data = {"particle_index": ids}
    fields = [
        "particle_position_x",
        "particle_position_y",
        "particle_position_z",
        "particle_index",
    ]
    negative = [False, False, False, False]
    units = ["cm", "cm", "cm", "1"]
    ts = DatasetSeries(
        [
            fake_particle_ds(
                fields=fields,
                negative=negative,
                units=units,
                npart=n_particles,
                data=data,
            )
            for i in range(n_steps)
        ]
    )
    assert_raises(YTIllDefinedParticleData, ts.particle_trajectories, [0])


def test_ptype():
    n_particles = 100
    fields = [
        "particle_position_x",
        "particle_position_y",
        "particle_position_z",
        "particle_index",
        "particle_dummy",
    ]
    negative = [False, False, False, False, False]
    units = ["cm", "cm", "cm", "1", "1"]

    # Setup filters on the 'particle_dummy' field, keeping only the first 50
    @particle_filter(name="dummy", requires=["particle_dummy"])
    def dummy(pfilter, data):
        return data[pfilter.filtered_type, "particle_dummy"] <= n_particles // 2

    # Setup fake particle datasets with repeated ids. This should work because
    # the ids are unique among `dummy_particles`, so let's test this
    data = {
        "particle_index": np.arange(n_particles) % (n_particles // 2),
        "particle_dummy": np.arange(n_particles),
    }
    all_ds = [
        fake_particle_ds(
            fields=fields, negative=negative, units=units, npart=n_particles, data=data
        )
    ]
    for ds in all_ds:
        ds.add_particle_filter("dummy")
    ts = DatasetSeries(all_ds)

    # Select all dummy particles
    ids = ts[0].all_data()["dummy", "particle_index"]

    # Build trajectories
    ts.particle_trajectories(ids, ptype="dummy")


@pytest.mark.parametrize("ptype", [None, "io"])
def test_default_field_tuple(particle_trajectories_test_dataset, ptype):
    ds = particle_trajectories_test_dataset[0]
    ids = ds.all_data()[("all", "particle_index")]
    trajs = particle_trajectories_test_dataset.particle_trajectories(
        ids, ptype=ptype, suppress_logging=True
    )
    ptype = ptype if ptype else "all"  # ptype defaults to "all"
    for k in trajs.field_data.keys():
        assert isinstance(k, tuple), f"Expected key to be tuple, received {type(k)}"
        assert (
            k[0] == ptype
        ), f"Default field type ({k[0]}) does not match expected ({ptype})"
        assert ("all", k[1]) in pfields, f"Unexpected field: {k[1]}"


@pytest.mark.parametrize("ptype", [None, "io"])
def test_time_and_index(particle_trajectories_test_dataset, ptype):
    ds = particle_trajectories_test_dataset[0]
    ids = ds.all_data()[("all", "particle_index")]
    trajs = particle_trajectories_test_dataset.particle_trajectories(
        ids, ptype=ptype, suppress_logging=True
    )
    ptype = ptype if ptype else "all"  # ptype defaults to "all"
    traj = trajs.trajectory_from_index(1)
    for field in ["particle_time", "particle_index"]:
        assert (ptype, field) in traj.keys(), f"Missing ({ptype},{field})"
        assert (field) not in traj.keys(), f"{field} present as bare string"
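
def _unique_index_sketch():
    # Illustrative sketch, not part of the original test suite: trajectories
    # are stitched together across outputs by particle_index, so the indices
    # must be unique within a single output -- the property test_uniqueness
    # above relies on when it expects YTIllDefinedParticleData.
    ids = np.arange(4, dtype="int64") % 2  # [0, 1, 0, 1]: repeated ids
    assert len(np.unique(ids)) < ids.size  # ill-defined input for trajectories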
assert_equal(p1["index", fname], p2["index", fname]) assert_equal(p1["index", fname], p3["index", fname]) def test_domain_point(): nparticles = 3 ds = fake_random_ds(16, particles=nparticles) p = ds.point(ds.domain_center) # ensure accessing one field works, store for comparison later point_den = p["gas", "density"] point_vel = p["gas", "velocity_x"] ad = ds.all_data() ppos = ad["all", "particle_position"] fpoint_den = ds.find_field_values_at_point(("gas", "density"), ds.domain_center) fpoint_den_vel = ds.find_field_values_at_point( [("gas", "density"), ("gas", "velocity_x")], ds.domain_center ) assert_equal(point_den, fpoint_den) assert_equal(point_den, fpoint_den_vel[0]) assert_equal(point_vel, fpoint_den_vel[1]) ppos_den = ds.find_field_values_at_points(("gas", "density"), ppos) ppos_vel = ds.find_field_values_at_points(("gas", "velocity_x"), ppos) ppos_den_vel = ds.find_field_values_at_points( [("gas", "density"), ("gas", "velocity_x")], ppos ) assert_equal(ppos_den.shape, (nparticles,)) assert_equal(ppos_vel.shape, (nparticles,)) assert_equal(len(ppos_den_vel), 2) assert_equal(ppos_den_vel[0], ppos_den) assert_equal(ppos_den_vel[1], ppos_vel) def test_fast_find_field_values_at_points(): ds = fake_random_ds(64, nprocs=8, particles=16**3) ad = ds.all_data() # right now this is slow for large numbers of particles, so randomly # sample 100 particles nparticles = 100 ppos = ad["all", "particle_position"] ppos = ppos[np.random.randint(low=0, high=len(ppos), size=nparticles)] ppos_den = ds.find_field_values_at_points(("gas", "density"), ppos) ppos_vel = ds.find_field_values_at_points(("gas", "velocity_x"), ppos) ppos_den_vel = ds.find_field_values_at_points( [("gas", "density"), ("gas", "velocity_x")], ppos ) assert_equal(ppos_den.shape, (nparticles,)) assert_equal(ppos_vel.shape, (nparticles,)) assert_equal(len(ppos_den_vel), 2) assert_equal(ppos_den_vel[0], ppos_den) assert_equal(ppos_den_vel[1], ppos_vel) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_profiles.py0000644000175100001770000005471714714401662021257 0ustar00runnerdockerimport os import shutil import tempfile import unittest import numpy as np from numpy.testing import assert_equal, assert_raises import yt from yt.data_objects.particle_filters import add_particle_filter from yt.data_objects.profiles import Profile1D, Profile2D, Profile3D, create_profile from yt.testing import ( assert_rel_equal, fake_random_ds, fake_sph_orientation_ds, requires_module, ) from yt.utilities.exceptions import YTIllDefinedProfile, YTProfileDataShape from yt.visualization.profile_plotter import PhasePlot, ProfilePlot _fields = ("density", "temperature", "dinosaurs", "tribbles") _units = ("g/cm**3", "K", "dyne", "erg") def test_profiles(): ds = fake_random_ds(64, nprocs=8, fields=_fields, units=_units) nv = ds.domain_dimensions.prod() dd = ds.all_data() rt, tt, dt = dd.quantities["TotalQuantity"]( [("gas", "density"), ("gas", "temperature"), ("stream", "dinosaurs")] ) e1, e2 = 0.9, 1.1 for nb in [8, 16, 32, 64]: for input_units in ["mks", "cgs"]: (rmi, rma), (tmi, tma), (dmi, dma) = ( getattr(ex, f"in_{input_units}")() for ex in dd.quantities["Extrema"]( [ ("gas", "density"), ("gas", "temperature"), ("stream", "dinosaurs"), ] ) ) # We log all the fields or don't log 'em all. No need to do them # individually. 
yt-4.4.0/yt/data_objects/tests/test_profiles.py

import os
import shutil
import tempfile
import unittest

import numpy as np
from numpy.testing import assert_equal, assert_raises

import yt
from yt.data_objects.particle_filters import add_particle_filter
from yt.data_objects.profiles import Profile1D, Profile2D, Profile3D, create_profile
from yt.testing import (
    assert_rel_equal,
    fake_random_ds,
    fake_sph_orientation_ds,
    requires_module,
)
from yt.utilities.exceptions import YTIllDefinedProfile, YTProfileDataShape
from yt.visualization.profile_plotter import PhasePlot, ProfilePlot

_fields = ("density", "temperature", "dinosaurs", "tribbles")
_units = ("g/cm**3", "K", "dyne", "erg")


def test_profiles():
    ds = fake_random_ds(64, nprocs=8, fields=_fields, units=_units)
    nv = ds.domain_dimensions.prod()
    dd = ds.all_data()
    rt, tt, dt = dd.quantities["TotalQuantity"](
        [("gas", "density"), ("gas", "temperature"), ("stream", "dinosaurs")]
    )

    e1, e2 = 0.9, 1.1
    for nb in [8, 16, 32, 64]:
        for input_units in ["mks", "cgs"]:
            (rmi, rma), (tmi, tma), (dmi, dma) = (
                getattr(ex, f"in_{input_units}")()
                for ex in dd.quantities["Extrema"](
                    [
                        ("gas", "density"),
                        ("gas", "temperature"),
                        ("stream", "dinosaurs"),
                    ]
                )
            )
            # We log all the fields or don't log 'em all. No need to do them
            # individually.
            for lf in [True, False]:
                direct_profile = Profile1D(
                    dd, ("gas", "density"), nb, rmi * e1, rma * e2, lf,
                    weight_field=None,
                )
                direct_profile.add_fields([("index", "ones"), ("gas", "temperature")])

                indirect_profile_s = create_profile(
                    dd,
                    ("gas", "density"),
                    [("index", "ones"), ("gas", "temperature")],
                    n_bins=nb,
                    extrema={("gas", "density"): (rmi * e1, rma * e2)},
                    logs={("gas", "density"): lf},
                    weight_field=None,
                )
                indirect_profile_t = create_profile(
                    dd,
                    ("gas", "density"),
                    [("index", "ones"), ("gas", "temperature")],
                    n_bins=nb,
                    extrema={("gas", "density"): (rmi * e1, rma * e2)},
                    logs={("gas", "density"): lf},
                    weight_field=None,
                )

                for p1d in [direct_profile, indirect_profile_s, indirect_profile_t]:
                    assert_equal(p1d["index", "ones"].sum(), nv)
                    assert_rel_equal(tt, p1d["gas", "temperature"].sum(), 7)

                p2d = Profile2D(
                    dd,
                    ("gas", "density"), nb, rmi * e1, rma * e2, lf,
                    ("gas", "temperature"), nb, tmi * e1, tma * e2, lf,
                    weight_field=None,
                )
                p2d.add_fields([("index", "ones"), ("gas", "temperature")])
                assert_equal(p2d["index", "ones"].sum(), nv)
                assert_rel_equal(tt, p2d["gas", "temperature"].sum(), 7)

                p3d = Profile3D(
                    dd,
                    ("gas", "density"), nb, rmi * e1, rma * e2, lf,
                    ("gas", "temperature"), nb, tmi * e1, tma * e2, lf,
                    ("stream", "dinosaurs"), nb, dmi * e1, dma * e2, lf,
                    weight_field=None,
                )
                p3d.add_fields([("index", "ones"), ("gas", "temperature")])
                assert_equal(p3d["index", "ones"].sum(), nv)
                assert_rel_equal(tt, p3d["gas", "temperature"].sum(), 7)

    p1d = Profile1D(dd, ("index", "x"), nb, 0.0, 1.0, False, weight_field=None)
    p1d.add_fields(("index", "ones"))
    av = nv / nb
    assert_equal(p1d["index", "ones"], np.ones(nb) * av)

    # We re-bin ones with a weight now
    p1d = Profile1D(
        dd, ("index", "x"), nb, 0.0, 1.0, False, weight_field=("gas", "temperature")
    )
    p1d.add_fields([("index", "ones")])
    assert_equal(p1d["index", "ones"], np.ones(nb))

    # Verify we can access "ones" after adding a new field
    # See issue 988
    p1d.add_fields([("gas", "density")])
    assert_equal(p1d["index", "ones"], np.ones(nb))

    p2d = Profile2D(
        dd,
        ("index", "x"), nb, 0.0, 1.0, False,
        ("index", "y"), nb, 0.0, 1.0, False,
        weight_field=None,
    )
    p2d.add_fields(("index", "ones"))
    av = nv / nb**2
    assert_equal(p2d["index", "ones"], np.ones((nb, nb)) * av)

    # We re-bin ones with a weight now
    p2d = Profile2D(
        dd,
        ("index", "x"), nb, 0.0, 1.0, False,
        ("index", "y"), nb, 0.0, 1.0, False,
        weight_field=("gas", "temperature"),
    )
    p2d.add_fields([("index", "ones")])
    assert_equal(p2d["index", "ones"], np.ones((nb, nb)))

    p3d = Profile3D(
        dd,
        ("index", "x"), nb, 0.0, 1.0, False,
        ("index", "y"), nb, 0.0, 1.0, False,
        ("index", "z"), nb, 0.0, 1.0, False,
        weight_field=None,
    )
    p3d.add_fields(("index", "ones"))
    av = nv / nb**3
    assert_equal(p3d["index", "ones"], np.ones((nb, nb, nb)) * av)

    # We re-bin ones with a weight now
    p3d = Profile3D(
        dd,
        ("index", "x"), nb, 0.0, 1.0, False,
        ("index", "y"), nb, 0.0, 1.0, False,
        ("index", "z"), nb, 0.0, 1.0, False,
        weight_field=("gas", "temperature"),
    )
    p3d.add_fields([("index", "ones")])
    assert_equal(p3d["index", "ones"], np.ones((nb, nb, nb)))

    p2d = create_profile(
        dd,
        ("gas", "density"),
        ("gas", "temperature"),
        weight_field=("gas", "cell_mass"),
        extrema={("gas", "density"): (None, rma * e2)},
    )
    assert_equal(p2d.x_bins[0], rmi - np.spacing(rmi))
    assert_equal(p2d.x_bins[-1], rma * e2)
    assert str(ds.field_info["gas", "cell_mass"].units) == str(p2d.weight.units)

    p2d = create_profile(
        dd,
        ("gas", "density"),
        ("gas", "temperature"),
        weight_field=("gas", "cell_mass"),
        extrema={("gas", "density"): (rmi * e2, None)},
    )
    assert_equal(p2d.x_bins[0], rmi * e2)
    assert_equal(p2d.x_bins[-1], rma + np.spacing(rma))


extrema_s = {("all", "particle_position_x"): (0, 1)}
logs_s = {("all", "particle_position_x"): False}

extrema_t = {("all", "particle_position_x"): (0, 1)}
logs_t = {("all", "particle_position_x"): False}
def test_particle_profiles():
    for nproc in [1, 2, 4, 8]:
        ds = fake_random_ds(32, nprocs=nproc, particles=32**3)
        dd = ds.all_data()

        p1d = Profile1D(
            dd, ("all", "particle_position_x"), 128, 0.0, 1.0, False, weight_field=None
        )
        p1d.add_fields([("all", "particle_ones")])
        assert_equal(p1d["all", "particle_ones"].sum(), 32**3)

        p1d = create_profile(
            dd,
            [("all", "particle_position_x")],
            [("all", "particle_ones")],
            weight_field=None,
            n_bins=128,
            extrema=extrema_s,
            logs=logs_s,
        )
        assert_equal(p1d["all", "particle_ones"].sum(), 32**3)

        p1d = create_profile(
            dd,
            [("all", "particle_position_x")],
            [("all", "particle_ones")],
            weight_field=None,
            n_bins=128,
            extrema=extrema_t,
            logs=logs_t,
        )
        assert_equal(p1d["all", "particle_ones"].sum(), 32**3)

        p2d = Profile2D(
            dd,
            ("all", "particle_position_x"), 128, 0.0, 1.0, False,
            ("all", "particle_position_y"), 128, 0.0, 1.0, False,
            weight_field=None,
        )
        p2d.add_fields([("all", "particle_ones")])
        assert_equal(p2d["all", "particle_ones"].sum(), 32**3)

        p3d = Profile3D(
            dd,
            ("all", "particle_position_x"), 128, 0.0, 1.0, False,
            ("all", "particle_position_y"), 128, 0.0, 1.0, False,
            ("all", "particle_position_z"), 128, 0.0, 1.0, False,
            weight_field=None,
        )
        p3d.add_fields([("all", "particle_ones")])
        assert_equal(p3d["all", "particle_ones"].sum(), 32**3)


def test_mixed_particle_mesh_profiles():
    ds = fake_random_ds(32, particles=10)
    ad = ds.all_data()
    assert_raises(
        YTIllDefinedProfile,
        ProfilePlot,
        ad,
        ("index", "radius"),
        ("all", "particle_mass"),
    )
    assert_raises(
        YTIllDefinedProfile,
        ProfilePlot,
        ad,
        "radius",
        [("all", "particle_mass"), ("all", "particle_ones")],
    )
    assert_raises(
        YTIllDefinedProfile,
        ProfilePlot,
        ad,
        ("index", "radius"),
        [("all", "particle_mass"), ("index", "ones")],
    )
    assert_raises(
        YTIllDefinedProfile,
        ProfilePlot,
        ad,
        ("all", "particle_radius"),
        ("all", "particle_mass"),
        ("gas", "cell_mass"),
    )
    assert_raises(
        YTIllDefinedProfile,
        ProfilePlot,
        ad,
        ("index", "radius"),
        ("gas", "cell_mass"),
        ("all", "particle_ones"),
    )
    assert_raises(
        YTIllDefinedProfile,
        PhasePlot,
        ad,
        ("index", "radius"),
        ("all", "particle_mass"),
        ("gas", "velocity_x"),
    )
    assert_raises(
        YTIllDefinedProfile,
        PhasePlot,
        ad,
        ("all", "particle_radius"),
        ("all", "particle_mass"),
        ("gas", "cell_mass"),
    )
    assert_raises(
        YTIllDefinedProfile,
        PhasePlot,
        ad,
        ("index", "radius"),
        ("gas", "cell_mass"),
        ("all", "particle_ones"),
    )
    assert_raises(
        YTIllDefinedProfile,
        PhasePlot,
        ad,
        ("all", "particle_radius"),
        ("all", "particle_mass"),
        ("all", "particle_ones"),
    )
def test_particle_profile_negative_field():
    # see Issue #1340
    n_particles = int(1e4)

    ppx, ppy, ppz = np.random.normal(size=[3, n_particles])
    pvx, pvy, pvz = -np.ones((3, n_particles))

    data = {
        "particle_position_x": ppx,
        "particle_position_y": ppy,
        "particle_position_z": ppz,
        "particle_velocity_x": pvx,
        "particle_velocity_y": pvy,
        "particle_velocity_z": pvz,
    }

    bbox = 1.1 * np.array(
        [[min(ppx), max(ppx)], [min(ppy), max(ppy)], [min(ppz), max(ppz)]]
    )
    ds = yt.load_particles(data, bbox=bbox)
    ad = ds.all_data()

    profile = yt.create_profile(
        ad,
        [("all", "particle_position_x"), ("all", "particle_position_y")],
        ("all", "particle_velocity_x"),
        logs={
            ("all", "particle_position_x"): True,
            ("all", "particle_position_y"): True,
            ("all", "particle_position_z"): True,
        },
        weight_field=None,
    )
    assert profile["all", "particle_velocity_x"].min() < 0
    assert profile.x_bins.min() > 0
    assert profile.y_bins.min() > 0

    profile = yt.create_profile(
        ad,
        [("all", "particle_position_x"), ("all", "particle_position_y")],
        ("all", "particle_velocity_x"),
        weight_field=None,
    )
    assert profile["all", "particle_velocity_x"].min() < 0
    assert profile.x_bins.min() < 0
    assert profile.y_bins.min() < 0

    # can't use CIC deposition with log-scaled bin fields
    with assert_raises(RuntimeError):
        yt.create_profile(
            ad,
            [("all", "particle_position_x"), ("all", "particle_position_y")],
            ("all", "particle_velocity_x"),
            logs={
                ("all", "particle_position_x"): True,
                ("all", "particle_position_y"): False,
                ("all", "particle_position_z"): False,
            },
            weight_field=None,
            deposition="cic",
        )

    # can't use CIC deposition with accumulation or fractional
    with assert_raises(RuntimeError):
        yt.create_profile(
            ad,
            [("all", "particle_position_x"), ("all", "particle_position_y")],
            ("all", "particle_velocity_x"),
            logs={
                ("all", "particle_position_x"): False,
                ("all", "particle_position_y"): False,
                ("all", "particle_position_z"): False,
            },
            weight_field=None,
            deposition="cic",
            accumulation=True,
            fractional=True,
        )


def test_profile_zero_weight():
    def DMparticles(pfilter, data):
        filter = data[pfilter.filtered_type, "particle_type"] == 1
        return filter

    def DM_in_cell_mass(field, data):
        return data["deposit", "DM_density"] * data["index", "cell_volume"]

    add_particle_filter(
        "DM", function=DMparticles, filtered_type="io", requires=["particle_type"]
    )

    _fields = (
        "particle_position_x",
        "particle_position_y",
        "particle_position_z",
        "particle_mass",
        "particle_velocity_x",
        "particle_velocity_y",
        "particle_velocity_z",
        "particle_type",
    )
    _units = ("cm", "cm", "cm", "g", "cm/s", "cm/s", "cm/s", "dimensionless")
    ds = fake_random_ds(
        32, particle_fields=_fields, particle_field_units=_units, particles=16
    )

    ds.add_particle_filter("DM")
    ds.add_field(
        ("gas", "DM_cell_mass"),
        units="g",
        function=DM_in_cell_mass,
        sampling_type="cell",
    )

    sp = ds.sphere(ds.domain_center, (10, "kpc"))

    profile = yt.create_profile(
        sp,
        [("gas", "density")],
        [("gas", "radial_velocity")],
        weight_field=("gas", "DM_cell_mass"),
    )

    assert not np.any(np.isnan(profile["gas", "radial_velocity"]))


def test_profile_sph_data():
    ds = fake_sph_orientation_ds()
    # test we create a profile without raising YTIllDefinedProfile
    yt.create_profile(
        ds.all_data(),
        [("gas", "density"), ("gas", "temperature")],
        [("gas", "kinetic_energy_density")],
        weight_field=None,
    )


def test_profile_override_limits():
    ds = fake_random_ds(64, nprocs=8, fields=_fields, units=_units)

    sp = ds.sphere(ds.domain_center, (10, "kpc"))
    obins = np.linspace(-5, 5, 10)
    profile = yt.create_profile(
        sp,
        [("gas", "density")],
        [("gas", "temperature")],
        override_bins={("gas", "density"): (obins, "g/cm**3")},
    )
    assert_equal(ds.arr(obins, "g/cm**3"), profile.x_bins)

    profile = yt.create_profile(
        sp,
        [("gas", "density"), ("stream", "dinosaurs")],
        [("gas", "temperature")],
        override_bins={
            ("gas", "density"): (obins, "g/cm**3"),
            ("stream", "dinosaurs"): obins,
        },
    )
    assert_equal(ds.arr(obins, "g/cm**3"), profile.x_bins)
    assert_equal(ds.arr(obins, "dyne"), profile.y_bins)

    profile = yt.create_profile(
        sp,
        [("gas", "density"), ("stream", "dinosaurs"), ("stream", "tribbles")],
        [("gas", "temperature")],
        override_bins={
            ("gas", "density"): (obins, "g/cm**3"),
            ("stream", "dinosaurs"): obins,
            ("stream", "tribbles"): (obins, "erg"),
        },
    )
    assert_equal(ds.arr(obins, "g/cm**3"), profile.x_bins)
    assert_equal(ds.arr(obins, "dyne"), profile.y_bins)
    assert_equal(ds.arr(obins, "erg"), profile.z_bins)
class TestBadProfiles(unittest.TestCase):
    tmpdir = None
    curdir = None

    def setUp(self):
        self.tmpdir = tempfile.mkdtemp()
        self.curdir = os.getcwd()
        os.chdir(self.tmpdir)

    def tearDown(self):
        os.chdir(self.curdir)
        # clean up
        shutil.rmtree(self.tmpdir)

    @requires_module("h5py")
    def test_unequal_data_shape_profile(self):
        density = np.random.random(128)
        temperature = np.random.random(128)
        mass = np.random.random((128, 128))

        my_data = {
            ("gas", "density"): density,
            ("gas", "temperature"): temperature,
            ("gas", "mass"): mass,
        }
        fake_ds_med = {"current_time": yt.YTQuantity(10, "Myr")}
        field_types = {field: "gas" for field in my_data.keys()}
        yt.save_as_dataset(fake_ds_med, "mydata.h5", my_data, field_types=field_types)

        ds = yt.load("mydata.h5")

        with assert_raises(YTProfileDataShape):
            yt.PhasePlot(
                ds.data, ("gas", "temperature"), ("gas", "density"), ("gas", "mass")
            )

    @requires_module("h5py")
    def test_unequal_bin_field_profile(self):
        density = np.random.random(128)
        temperature = np.random.random(127)
        mass = np.random.random((128, 128))

        my_data = {
            ("gas", "density"): density,
            ("gas", "temperature"): temperature,
            ("gas", "mass"): mass,
        }
        fake_ds_med = {"current_time": yt.YTQuantity(10, "Myr")}
        field_types = {field: "gas" for field in my_data.keys()}
        yt.save_as_dataset(fake_ds_med, "mydata.h5", my_data, field_types=field_types)

        ds = yt.load("mydata.h5")

        with assert_raises(YTProfileDataShape):
            yt.PhasePlot(
                ds.data, ("gas", "temperature"), ("gas", "density"), ("gas", "mass")
            )

    def test_set_linear_scaling_for_none_extrema(self):
        # See Issue #3431
        # Ensures that extrema are calculated in the same way on subsequent
        # passes through the PhasePlot machinery.
        ds = fake_sph_orientation_ds()
        p = yt.PhasePlot(
            ds,
            ("all", "particle_position_spherical_theta"),
            ("all", "particle_position_spherical_radius"),
            ("all", "particle_mass"),
            weight_field=None,
        )
        p.set_log(("all", "particle_position_spherical_theta"), False)
        p.save()


def test_index_field_units():
    # see #1849
    ds = fake_random_ds(16, length_unit=2)
    ad = ds.all_data()
    icv_units = ad["index", "cell_volume"].units
    assert str(icv_units) == "code_length**3"
    gcv_units = ad["gas", "cell_volume"].units
    assert str(gcv_units) == "cm**3"
    prof = ad.profile(
        [("gas", "density"), ("gas", "velocity_x")],
        [("gas", "cell_volume"), ("index", "cell_volume")],
        weight_field=None,
    )
    assert str(prof["index", "cell_volume"].units) == "code_length**3"
    assert str(prof["gas", "cell_volume"].units) == "cm**3"


@requires_module("astropy")
def test_export_astropy():
    from yt.units.yt_array import YTArray

    ds = fake_random_ds(64)
    ad = ds.all_data()
    prof = ad.profile(
        ("index", "radius"),
        [("gas", "density"), ("gas", "velocity_x")],
        weight_field=("index", "ones"),
        n_bins=32,
    )
    # export to AstroPy table
    at1 = prof.to_astropy_table()
    assert "radius" in at1.colnames
    assert "density" in at1.colnames
    assert "velocity_x" in at1.colnames
    assert_equal(prof.x.d, at1["radius"].value)
    assert_equal(prof["gas", "density"].d, at1["density"].value)
    assert_equal(prof["gas", "velocity_x"].d, at1["velocity_x"].value)
    assert prof.x.units == YTArray.from_astropy(at1["radius"]).units
    assert prof["gas", "density"].units == YTArray.from_astropy(at1["density"]).units
    assert (
        prof["gas", "velocity_x"].units
        == YTArray.from_astropy(at1["velocity_x"]).units
    )
    assert np.all(at1.mask["density"] == prof.used)

    at2 = prof.to_astropy_table(fields=("gas", "density"), only_used=True)
    assert "radius" in at2.colnames
    assert "velocity_x" not in at2.colnames
    assert_equal(prof.x.d[prof.used], at2["radius"].value)
    assert_equal(prof["gas", "density"].d[prof.used], at2["density"].value)

    at3 = prof.to_astropy_table(fields=("gas", "density"), include_std=True)
    assert_equal(prof["gas", "density"].d, at3["density"].value)
    assert_equal(
        prof.standard_deviation["gas", "density"].d, at3["density_stddev"].value
    )
    assert (
        prof.standard_deviation["gas", "density"].units
        == YTArray.from_astropy(at3["density_stddev"]).units
    )


@requires_module("pandas")
def test_export_pandas():
    ds = fake_random_ds(64)
    ad = ds.all_data()
    prof = ad.profile(
        "radius",
        [("gas", "density"), ("gas", "velocity_x")],
        weight_field=("index", "ones"),
        n_bins=32,
    )
    # export to pandas DataFrame
    df1 = prof.to_dataframe()
    assert "radius" in df1.columns
    assert "density" in df1.columns
    assert "velocity_x" in df1.columns
    assert_equal(prof.x.d, df1["radius"])
    assert_equal(prof["gas", "density"].d, np.nan_to_num(df1["density"]))
    assert_equal(prof["velocity_x"].d, np.nan_to_num(df1["velocity_x"]))

    df2 = prof.to_dataframe(fields=("gas", "density"), only_used=True)
    assert "radius" in df2.columns
    assert "velocity_x" not in df2.columns
    assert_equal(prof.x.d[prof.used], df2["radius"])
    assert_equal(prof["gas", "density"].d[prof.used], df2["density"])

    df3 = prof.to_dataframe(fields=("gas", "density"), include_std=True)
    assert_equal(
        prof.standard_deviation["gas", "density"].d,
        np.nan_to_num(df3["density_stddev"]),
    )
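
def _weighted_profile_average_sketch():
    # Illustrative sketch, not part of the original test suite: the "ones"
    # checks in test_profiles above reduce to simple arithmetic. Without a
    # weight field, a binned profile of ones sums to the number of cells per
    # bin; with a weight field it becomes a weighted average, which is
    # identically 1 for a field of ones. In plain NumPy:
    values = np.ones(10)
    weights = np.random.random(10)
    unweighted = values.sum()  # a count: 10.0
    weighted = (weights * values).sum() / weights.sum()  # an average: 1.0
    assert unweighted == 10.0
    assert np.isclose(weighted, 1.0)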
yt-4.4.0/yt/data_objects/tests/test_projection.py

import os
import tempfile
from unittest import mock

import numpy as np
from numpy.testing import assert_equal

from yt.testing import assert_rel_equal, fake_amr_ds, fake_random_ds
from yt.units.unit_object import Unit

LENGTH_UNIT = 2.0


def setup_module():
    from yt.config import ytcfg

    ytcfg["yt", "internals", "within_testing"] = True


def teardown_func(fns):
    for fn in fns:
        try:
            os.remove(fn)
        except OSError:
            pass


@mock.patch("matplotlib.backends.backend_agg.FigureCanvasAgg.print_figure")
def test_projection(pf):
    fns = []
    for nprocs in [8, 1]:
        # We want to test both 1 proc and 8 procs, to make sure that
        # parallelism isn't broken
        fields = ("density", "temperature", "velocity_x", "velocity_y", "velocity_z")
        units = ("g/cm**3", "K", "cm/s", "cm/s", "cm/s")
        ds = fake_random_ds(
            64, fields=fields, units=units, nprocs=nprocs, length_unit=LENGTH_UNIT
        )
        dims = ds.domain_dimensions
        xn, yn, zn = ds.domain_dimensions
        xi, yi, zi = ds.domain_left_edge.to_ndarray() + 1.0 / (
            ds.domain_dimensions * 2
        )
        xf, yf, zf = ds.domain_right_edge.to_ndarray() - 1.0 / (
            ds.domain_dimensions * 2
        )
        dd = ds.all_data()
        coords = np.mgrid[xi : xf : xn * 1j, yi : yf : yn * 1j, zi : zf : zn * 1j]
        uc = [np.unique(c) for c in coords]

        # test if projections inherit the field parameters of their data sources
        dd.set_field_parameter("bulk_velocity", np.array([0, 1, 2]))
        proj = ds.proj(("gas", "density"), 0, data_source=dd)
        assert_equal(
            dd.field_parameters["bulk_velocity"],
            proj.field_parameters["bulk_velocity"],
        )

        # Some simple projection tests with single grids
        for ax, an in enumerate("xyz"):
            xax = ds.coordinates.x_axis[ax]
            yax = ds.coordinates.y_axis[ax]
            for wf in [("gas", "density"), None]:
                proj = ds.proj(
                    [("index", "ones"), ("gas", "density")], ax, weight_field=wf
                )
                if wf is None:
                    assert_equal(
                        proj["index", "ones"].sum(),
                        LENGTH_UNIT * proj["index", "ones"].size,
                    )
                    assert_equal(proj["index", "ones"].min(), LENGTH_UNIT)
                    assert_equal(proj["index", "ones"].max(), LENGTH_UNIT)
                else:
                    assert_equal(
                        proj["index", "ones"].sum(), proj["index", "ones"].size
                    )
                    assert_equal(proj["index", "ones"].min(), 1.0)
                    assert_equal(proj["index", "ones"].max(), 1.0)
                assert_equal(np.unique(proj["px"]), uc[xax])
                assert_equal(np.unique(proj["py"]), uc[yax])
                assert_equal(np.unique(proj["pdx"]), 1.0 / (dims[xax] * 2.0))
                assert_equal(np.unique(proj["pdy"]), 1.0 / (dims[yax] * 2.0))
                plots = [proj.to_pw(fields=("gas", "density")), proj.to_pw()]
                for pw in plots:
                    for p in pw.plots.values():
                        tmpfd, tmpname = tempfile.mkstemp(suffix=".png")
                        os.close(tmpfd)
                        p.save(name=tmpname)
                        fns.append(tmpname)
                frb = proj.to_frb((1.0, "unitary"), 64)
                for proj_field in [
                    ("index", "ones"),
                    ("gas", "density"),
                    ("gas", "temperature"),
                ]:
                    fi = ds._get_field_info(proj_field)
                    assert_equal(frb[proj_field].info["data_source"], proj.__str__())
                    assert_equal(frb[proj_field].info["axis"], ax)
                    assert_equal(frb[proj_field].info["field"], str(proj_field))
                    field_unit = Unit(fi.units)
                    if wf is not None:
                        assert_equal(
                            frb[proj_field].units,
                            Unit(field_unit, registry=ds.unit_registry),
                        )
                    else:
                        if frb[proj_field].units.is_code_unit:
                            proj_unit = "code_length"
                        else:
                            proj_unit = "cm"
                        if field_unit != "" and field_unit != Unit():
                            proj_unit = f"({field_unit}) * {proj_unit}"
                        assert_equal(
                            frb[proj_field].units,
                            Unit(proj_unit, registry=ds.unit_registry),
                        )
                    assert_equal(frb[proj_field].info["xlim"], frb.bounds[:2])
                    assert_equal(frb[proj_field].info["ylim"], frb.bounds[2:])
                    assert_equal(frb[proj_field].info["center"], proj.center)
                    if wf is None:
                        assert_equal(frb[proj_field].info["weight_field"], wf)
                    else:
                        assert_equal(
                            frb[proj_field].info["weight_field"],
                            proj.data_source._determine_fields(wf)[0],
                        )
        # wf == None
        assert_equal(wf, None)
        v1 = proj["gas", "density"].sum()
        v2 = (dd["gas", "density"] * dd["index", f"d{an}"]).sum()
        assert_rel_equal(v1, v2.in_units(v1.units), 10)

    # Test moment projections
    def make_vsq_field(aname):
        def _vsquared(field, data):
            return data["gas", f"velocity_{aname}"] ** 2

        return _vsquared

    for ax, an in enumerate("xyz"):
        ds.add_field(
            ("gas", f"velocity_{an}_squared"),
            make_vsq_field(an),
            sampling_type="local",
            units="cm**2/s**2",
        )
        proj1 = ds.proj(
            [("gas", f"velocity_{an}"), ("gas", f"velocity_{an}_squared")],
            ax,
            weight_field=("gas", "density"),
            moment=1,
        )
        proj2 = ds.proj(
            ("gas", f"velocity_{an}"), ax, weight_field=("gas", "density"), moment=2
        )
        assert_rel_equal(
            np.sqrt(
                proj1["gas", f"velocity_{an}_squared"]
                - proj1["gas", f"velocity_{an}"] ** 2
            ),
            proj2["gas", f"velocity_{an}"],
            10,
        )
    teardown_func(fns)
assert_equal(proj["index", "ones"].max(), LENGTH_UNIT) else: assert_equal( proj["index", "ones"].sum(), proj["index", "ones"].size ) assert_equal(proj["index", "ones"].min(), 1.0) assert_equal(proj["index", "ones"].max(), 1.0) assert_equal(np.unique(proj["px"]), uc[xax]) assert_equal(np.unique(proj["py"]), uc[yax]) assert_equal(np.unique(proj["pdx"]), 1.0 / (dims[xax] * 2.0)) assert_equal(np.unique(proj["pdy"]), 1.0 / (dims[yax] * 2.0)) plots = [proj.to_pw(fields=("gas", "density")), proj.to_pw()] for pw in plots: for p in pw.plots.values(): tmpfd, tmpname = tempfile.mkstemp(suffix=".png") os.close(tmpfd) p.save(name=tmpname) fns.append(tmpname) frb = proj.to_frb((1.0, "unitary"), 64) for proj_field in [ ("index", "ones"), ("gas", "density"), ("gas", "temperature"), ]: fi = ds._get_field_info(proj_field) assert_equal(frb[proj_field].info["data_source"], proj.__str__()) assert_equal(frb[proj_field].info["axis"], ax) assert_equal(frb[proj_field].info["field"], str(proj_field)) field_unit = Unit(fi.units) if wf is not None: assert_equal( frb[proj_field].units, Unit(field_unit, registry=ds.unit_registry), ) else: if frb[proj_field].units.is_code_unit: proj_unit = "code_length" else: proj_unit = "cm" if field_unit != "" and field_unit != Unit(): proj_unit = f"({field_unit}) * {proj_unit}" assert_equal( frb[proj_field].units, Unit(proj_unit, registry=ds.unit_registry), ) assert_equal(frb[proj_field].info["xlim"], frb.bounds[:2]) assert_equal(frb[proj_field].info["ylim"], frb.bounds[2:]) assert_equal(frb[proj_field].info["center"], proj.center) if wf is None: assert_equal(frb[proj_field].info["weight_field"], wf) else: assert_equal( frb[proj_field].info["weight_field"], proj.data_source._determine_fields(wf)[0], ) # wf == None assert_equal(wf, None) v1 = proj["gas", "density"].sum() v2 = (dd["gas", "density"] * dd["index", f"d{an}"]).sum() assert_rel_equal(v1, v2.in_units(v1.units), 10) # Test moment projections def make_vsq_field(aname): def _vsquared(field, data): return data["gas", f"velocity_{aname}"] ** 2 return _vsquared for ax, an in enumerate("xyz"): ds.add_field( ("gas", f"velocity_{an}_squared"), make_vsq_field(an), sampling_type="local", units="cm**2/s**2", ) proj1 = ds.proj( [("gas", f"velocity_{an}"), ("gas", f"velocity_{an}_squared")], ax, weight_field=("gas", "density"), moment=1, ) proj2 = ds.proj( ("gas", f"velocity_{an}"), ax, weight_field=("gas", "density"), moment=2 ) assert_rel_equal( np.sqrt( proj1["gas", f"velocity_{an}_squared"] - proj1["gas", f"velocity_{an}"] ** 2 ), proj2["gas", f"velocity_{an}"], 10, ) teardown_func(fns) def test_max_level(): ds = fake_amr_ds(fields=[("gas", "density")], units=["mp/cm**3"]) proj = ds.proj(("gas", "density"), 2, method="max", max_level=2) assert proj["index", "grid_level"].max() == 2 proj = ds.proj(("gas", "density"), 2, method="max") assert proj["index", "grid_level"].max() == ds.index.max_level def test_min_level(): ds = fake_amr_ds(fields=[("gas", "density")], units=["mp/cm**3"]) proj = ds.proj(("gas", "density"), 2, method="min") assert proj["index", "grid_level"].min() == 0 proj = ds.proj(("gas", "density"), 2, method="max") assert proj["index", "grid_level"].min() == ds.index.min_level ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_rays.py0000644000175100001770000001415114714401662020376 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_equal from yt import load from yt.testing import ( assert_rel_equal, cubicspline_python, 
yt-4.4.0/yt/data_objects/tests/test_rays.py

import numpy as np
from numpy.testing import assert_equal

from yt import load
from yt.testing import (
    assert_rel_equal,
    cubicspline_python,
    fake_random_ds,
    fake_sph_grid_ds,
    integrate_kernel,
    requires_file,
    requires_module,
)
from yt.units._numpy_wrapper_functions import uconcatenate


def test_ray():
    for nproc in [1, 2, 4, 8]:
        ds = fake_random_ds(64, nprocs=nproc)
        dx = (ds.domain_right_edge - ds.domain_left_edge) / ds.domain_dimensions
        # Three we choose, to get varying vectors, and ten random
        pp1 = np.random.random((3, 13))
        pp2 = np.random.random((3, 13))
        pp1[:, 0] = [0.1, 0.2, 0.3]
        pp2[:, 0] = [0.8, 0.1, 0.4]
        pp1[:, 1] = [0.9, 0.2, 0.3]
        pp2[:, 1] = [0.8, 0.1, 0.4]
        pp1[:, 2] = [0.9, 0.2, 0.9]
        pp2[:, 2] = [0.8, 0.1, 0.4]
        unitary = ds.arr(1.0, "")
        for i in range(pp1.shape[1]):
            p1 = ds.arr(pp1[:, i] + 1e-8 * np.random.random(3), "code_length")
            p2 = ds.arr(pp2[:, i] + 1e-8 * np.random.random(3), "code_length")

            my_ray = ds.ray(p1, p2)
            assert_rel_equal(my_ray["dts"].sum(), unitary, 14)
            ray_cells = my_ray["dts"] > 0

            # find cells intersected by the ray
            my_all = ds.all_data()

            dt = np.abs(dx / (p2 - p1))
            tin = uconcatenate(
                [
                    [(my_all["index", "x"] - p1[0]) / (p2 - p1)[0] - 0.5 * dt[0]],
                    [(my_all["index", "y"] - p1[1]) / (p2 - p1)[1] - 0.5 * dt[1]],
                    [(my_all["index", "z"] - p1[2]) / (p2 - p1)[2] - 0.5 * dt[2]],
                ]
            )
            tout = uconcatenate(
                [
                    [(my_all["index", "x"] - p1[0]) / (p2 - p1)[0] + 0.5 * dt[0]],
                    [(my_all["index", "y"] - p1[1]) / (p2 - p1)[1] + 0.5 * dt[1]],
                    [(my_all["index", "z"] - p1[2]) / (p2 - p1)[2] + 0.5 * dt[2]],
                ]
            )
            tin = tin.max(axis=0)
            tout = tout.min(axis=0)
            my_cells = (tin < tout) & (tin < 1) & (tout > 0)

            assert_equal(ray_cells.sum(), my_cells.sum())
            assert_rel_equal(
                my_ray["gas", "density"][ray_cells].sum(),
                my_all["gas", "density"][my_cells].sum(),
                14,
            )
            assert_rel_equal(my_ray["dts"].sum(), unitary, 14)


@requires_module("h5py")
@requires_file("GadgetDiskGalaxy/snapshot_200.hdf5")
def test_ray_particle():
    ds = load("GadgetDiskGalaxy/snapshot_200.hdf5")
    ray = ds.ray(ds.domain_left_edge, ds.domain_right_edge)
    assert_equal(ray["t"].shape, (1451,))
    assert ray["dts"].sum(dtype="f8") > 0
## test that kernels integrate correctly
# (1) including the right particles
# (2) correct t and dts values for those particles
## fake_sph_grid_ds:
# This dataset should have 27 particles with the particles arranged
# uniformly on a 3D grid. The bottom left corner is (0.5,0.5,0.5) and
# the top right corner is (2.5,2.5,2.5). All particles will have
# non-overlapping smoothing regions with a radius of 0.05, masses of
# 1, and densities of 1, and zero velocity.
def test_ray_particle2():
    kernelfunc = cubicspline_python
    ds = fake_sph_grid_ds(hsml_factor=1.0)
    ds.kernel_name = "cubic"

    ## Ray through the one particle at (0.5, 0.5, 0.5):
    ## test basic kernel integration
    eps = 0.0  # 1e-7
    start0 = np.array((1.0 + eps, 0.0, 0.5))
    end0 = np.array((0.0, 1.0 + eps, 0.5))
    ray0 = ds.ray(start0, end0)
    b0 = np.array([np.sqrt(2.0) * eps])
    hsml0 = np.array([0.05])
    len0 = np.sqrt(np.sum((end0 - start0) ** 2))
    # for a ParticleDataset like this one, the Ray object attempts
    # to generate the 't' and 'dts' fields using the grid method
    ray0.field_data["t"] = ray0.ds.arr(ray0._generate_container_field_sph("t"))
    ray0.field_data["dts"] = ray0.ds.arr(ray0._generate_container_field_sph("dts"))
    # not demanding too much precision;
    # from kernel volume integrals, the linear interpolation
    # restricts you to 4 -- 5 digits precision
    assert_equal(ray0["t"].shape, (1,))
    assert_rel_equal(ray0["t"], np.array([0.5]), 5)
    assert_rel_equal(ray0[("gas", "position")].v, np.array([[0.5, 0.5, 0.5]]), 5)
    dl0 = integrate_kernel(kernelfunc, b0, hsml0)
    dl0 *= ray0[("gas", "mass")].v / ray0[("gas", "density")].v
    assert_rel_equal(ray0["dts"].v, dl0 / len0, 4)

    ## Ray in the middle of the box:
    ## test end points, >1 particle
    start1 = np.array((1.53, 0.53, 1.0))
    end1 = np.array((1.53, 0.53, 3.0))
    ray1 = ds.ray(start1, end1)
    b1 = np.array([np.sqrt(2.0) * 0.03] * 2)
    hsml1 = np.array([0.05] * 2)
    len1 = np.sqrt(np.sum((end1 - start1) ** 2))
    # for a ParticleDataset like this one, the Ray object attempts
    # to generate the 't' and 'dts' fields using the grid method
    ray1.field_data["t"] = ray1.ds.arr(ray1._generate_container_field_sph("t"))
    ray1.field_data["dts"] = ray1.ds.arr(ray1._generate_container_field_sph("dts"))
    # not demanding too much precision;
    # from kernel volume integrals, the linear interpolation
    # restricts you to 4 -- 5 digits precision
    assert_equal(ray1["t"].shape, (2,))
    assert_rel_equal(ray1["t"], np.array([0.25, 0.75]), 5)
    assert_rel_equal(
        ray1[("gas", "position")].v, np.array([[1.5, 0.5, 1.5], [1.5, 0.5, 2.5]]), 5
    )
    dl1 = integrate_kernel(kernelfunc, b1, hsml1)
    dl1 *= ray1[("gas", "mass")].v / ray1[("gas", "density")].v
    assert_rel_equal(ray1["dts"].v, dl1 / len1, 4)

    ## Ray missing all particles:
    ## test handling of size-0 selections
    start2 = np.array((1.0, 2.0, 0.0))
    end2 = np.array((1.0, 2.0, 3.0))
    ray2 = ds.ray(start2, end2)
    # for a ParticleDataset like this one, the Ray object attempts
    # to generate the 't' and 'dts' fields using the grid method
    ray2.field_data["t"] = ray2.ds.arr(ray2._generate_container_field_sph("t"))
    ray2.field_data["dts"] = ray2.ds.arr(ray2._generate_container_field_sph("dts"))
    assert_equal(ray2["t"].shape, (0,))
    assert_equal(ray2["dts"].shape, (0,))
    assert_equal(ray2[("gas", "position")].v.shape, (0, 3))
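
def _kernel_column_sketch():
    # Illustrative sketch, not part of the original test suite: the dl values
    # checked in test_ray_particle2 are line integrals of the SPH kernel,
    # scaled by each particle's mass over density and normalized by the ray
    # length. integrate_kernel takes the kernel function, the impact
    # parameter b, and the smoothing length; the column should shrink as the
    # ray passes farther from the particle center.
    b = np.array([0.0, 0.03])
    hsml = np.array([0.05, 0.05])
    col = integrate_kernel(cubicspline_python, b, hsml)
    assert col[0] > col[1] > 0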
"g/cm**3") bbox = np.array([[0.0, 1.0], [0.0, np.pi], [0.0, np.pi * 2]]) ds = yt.load_amr_grids( grid_data, top_grid_dim, bbox=bbox, geometry="spherical", refine_by=refine_by, length_unit="kpc", ) return ds def test_refine_by(): ds = setup_fake_refby() dd = ds.all_data() # This checks that we always refine_by 1 in dimensions 2 and 3 dims = ds.domain_dimensions * ds.refine_by**ds.max_level for i in range(1, 3): # Check the refine_by == 1 ncoords = np.unique(dd.icoords[:, i]).size assert_equal(ncoords, dims[i]) for g in ds.index.grids: dims = ds.domain_dimensions * ds.refine_by**g.Level # Now we can check converting back to the reference space v = ((g.icoords + 1) / dims.astype("f8")).max(axis=0) v *= ds.domain_width assert_array_equal(v, g.RightEdge.d) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_regions.py0000644000175100001770000000410014714401662021057 0ustar00runnerdockerfrom numpy.testing import assert_array_equal, assert_raises from yt.testing import fake_amr_ds, fake_random_ds from yt.units import cm def test_box_creation(): # test that creating a region with left and right edge # with units works ds = fake_random_ds(32, length_unit=2) reg = ds.box([0, 0, 0] * cm, [2, 2, 2] * cm) dens_units = reg["gas", "density"] reg = ds.box([0, 0, 0], [1, 1, 1]) dens_no_units = reg["gas", "density"] assert_array_equal(dens_units, dens_no_units) def test_max_level_min_level_semantics(): ds = fake_amr_ds() ad = ds.all_data() assert ad["index", "grid_level"].max() == 4 ad.max_level = 2 assert ad["index", "grid_level"].max() == 2 ad.max_level = 8 assert ad["index", "grid_level"].max() == 4 ad.min_level = 2 assert ad["index", "grid_level"].min() == 2 ad.min_level = 0 assert ad["index", "grid_level"].min() == 0 def test_ellipsis_selection(): ds = fake_amr_ds() reg = ds.r[:, :, :] ereg = ds.r[...] assert_array_equal(reg.fwidth, ereg.fwidth) reg = ds.r[(0.5, "cm"), :, :] ereg = ds.r[(0.5, "cm"), ...] assert_array_equal(reg.fwidth, ereg.fwidth) reg = ds.r[:, :, (0.5, "cm")] ereg = ds.r[..., (0.5, "cm")] assert_array_equal(reg.fwidth, ereg.fwidth) reg = ds.r[:, :, (0.5, "cm")] ereg = ds.r[..., (0.5, "cm")] assert_array_equal(reg.fwidth, ereg.fwidth) assert_raises(IndexError, ds.r.__getitem__, (..., (0.5, "cm"), ...)) # this test will fail until "arbitrary_grid" selector is implemented for 2D datasets # see issue https://github.com/yt-project/yt/issues/3437 """ from yt.utilities.answer_testing.framework import data_dir_load, requires_ds @requires_ds("castro_sedov_2d_cyl_in_cart_plt00150") def test_complex_slicing_2D_consistency(): # see https://github.com/yt-project/yt/issues/3429 ds = data_dir_load("castro_sedov_2d_cyl_in_cart_plt00150") reg = ds.r[0.1:0.2:8j, :] reg["gas", "density"] reg = ds.r[:, 1:2:8j] reg["gas", "density"] reg = ds.r[0.1:0.2:8j, 1:2:8j] reg["gas", "density"] """ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_registration.py0000644000175100001770000000135114714401662022130 0ustar00runnerdockerimport pytest from yt.data_objects.static_output import Dataset from yt.utilities.object_registries import output_type_registry def test_reregistration_warning(): from yt.frontends import enzo # noqa true_EnzoDataset = output_type_registry["EnzoDataset"] try: with pytest.warns( UserWarning, match=( "Overwriting EnzoDataset, which was previously registered. 
" "This is expected if you're importing a yt extension with a " "frontend that was already migrated to the main code base." ), ): class EnzoDataset(Dataset): pass finally: output_type_registry["EnzoDataset"] = true_EnzoDataset ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_slice.py0000644000175100001770000000704414714401662020522 0ustar00runnerdockerimport os import tempfile from unittest import mock import numpy as np from numpy.testing import assert_equal from yt.testing import fake_random_ds from yt.units.unit_object import Unit def setup_module(): from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True def teardown_func(fns): for fn in fns: try: os.remove(fn) except OSError: pass @mock.patch("matplotlib.backends.backend_agg.FigureCanvasAgg.print_figure") def test_slice(pf): fns = [] grid_eps = np.finfo(np.float64).eps for nprocs in [8, 1]: # We want to test both 1 proc and 8 procs, to make sure that # parallelism isn't broken ds = fake_random_ds(64, nprocs=nprocs) dims = ds.domain_dimensions xn, yn, zn = ds.domain_dimensions dx = ds.arr(1.0 / (ds.domain_dimensions * 2), "code_length") xi, yi, zi = ds.domain_left_edge + dx xf, yf, zf = ds.domain_right_edge - dx coords = np.mgrid[xi : xf : xn * 1j, yi : yf : yn * 1j, zi : zf : zn * 1j] uc = [np.unique(c) for c in coords] slc_pos = 0.5 # Some simple slice tests with single grids for ax in range(3): xax = ds.coordinates.x_axis[ax] yax = ds.coordinates.y_axis[ax] slc = ds.slice(ax, slc_pos) shifted_slc = ds.slice(ax, slc_pos + grid_eps) assert_equal(slc["index", "ones"].sum(), slc["index", "ones"].size) assert_equal(slc["index", "ones"].min(), 1.0) assert_equal(slc["index", "ones"].max(), 1.0) assert_equal(np.unique(slc["px"]), uc[xax]) assert_equal(np.unique(slc["py"]), uc[yax]) assert_equal(np.unique(slc["pdx"]), 0.5 / dims[xax]) assert_equal(np.unique(slc["pdy"]), 0.5 / dims[yax]) pw = slc.to_pw(fields=("gas", "density")) for p in pw.plots.values(): tmpfd, tmpname = tempfile.mkstemp(suffix=".png") os.close(tmpfd) p.save(name=tmpname) fns.append(tmpname) for width in [(1.0, "unitary"), 1.0, ds.quan(0.5, "code_length")]: frb = slc.to_frb(width, 64) shifted_frb = shifted_slc.to_frb(width, 64) for slc_field in [("index", "ones"), ("gas", "density")]: fi = ds._get_field_info(slc_field) assert_equal(frb[slc_field].info["data_source"], slc.__str__()) assert_equal(frb[slc_field].info["axis"], ax) assert_equal(frb[slc_field].info["field"], str(slc_field)) assert_equal(frb[slc_field].units, Unit(fi.units)) assert_equal(frb[slc_field].info["xlim"], frb.bounds[:2]) assert_equal(frb[slc_field].info["ylim"], frb.bounds[2:]) assert_equal(frb[slc_field].info["center"], slc.center) assert_equal(frb[slc_field].info["coord"], slc_pos) assert_equal(frb[slc_field], shifted_frb[slc_field]) teardown_func(fns) def test_slice_over_edges(): ds = fake_random_ds( 64, nprocs=8, fields=("density",), units=("g/cm**3",), negative=[False] ) slc = ds.slice(0, 0.0) slc["gas", "density"] slc = ds.slice(1, 0.5) slc["gas", "density"] def test_slice_over_outer_boundary(): ds = fake_random_ds( 64, nprocs=8, fields=("density",), units=("g/cm**3",), negative=[False] ) slc = ds.slice(2, 1.0) slc["gas", "density"] assert_equal(slc["gas", "density"].size, 0) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_sph_data_objects.py0000644000175100001770000002575214714401662022725 
yt-4.4.0/yt/data_objects/tests/test_sph_data_objects.py

import numpy as np
from numpy.testing import assert_allclose, assert_almost_equal, assert_equal

from yt import SlicePlot, add_particle_filter
from yt.loaders import load
from yt.testing import fake_sph_grid_ds, fake_sph_orientation_ds, requires_file


def test_point():
    ds = fake_sph_orientation_ds()
    field_data = ds.stream_handler.fields["stream_file"]
    ppos = [field_data["io", f"particle_position_{d}"] for d in "xyz"]
    ppos = np.array(ppos).T
    for pos in ppos:
        for i in range(-1, 2):
            offset = 0.1 * np.array([i, 0, 0])
            pt = ds.point(pos + offset)
            assert_equal(pt["gas", "density"].shape[0], 1)

            for j in range(-1, 2):
                offset = 0.1 * np.array([0, j, 0])
                pt = ds.point(pos + offset)
                assert_equal(pt["gas", "density"].shape[0], 1)

                for k in range(-1, 2):
                    offset = 0.1 * np.array([0, 0, k])
                    pt = ds.point(pos + offset)
                    assert_equal(pt["gas", "density"].shape[0], 1)


# The number of particles along each slice axis at that coordinate
SLICE_ANSWERS = {
    ("x", 0): 6,
    ("x", 0.5): 0,
    ("x", 1): 1,
    ("y", 0): 5,
    ("y", 1): 1,
    ("y", 2): 1,
    ("z", 0): 4,
    ("z", 1): 1,
    ("z", 2): 1,
    ("z", 3): 1,
}


def test_slice():
    ds = fake_sph_orientation_ds()
    for (ax, coord), answer in SLICE_ANSWERS.items():
        # test that we can still select particles even if we offset the slice
        # within each particle's smoothing volume
        for i in range(-1, 2):
            sl = ds.slice(ax, coord + i * 0.1)
            assert_equal(sl["gas", "density"].shape[0], answer)


REGION_ANSWERS = {
    ((-4, -4, -4), (4, 4, 4)): 7,
    ((0, 0, 0), (4, 4, 4)): 7,
    ((1, 0, 0), (4, 4, 4)): 1,
    ((0, 1, 0), (4, 4, 4)): 2,
    ((0, 0, 1), (4, 4, 4)): 3,
    ((0, 0, 0), (4, 4, 2)): 6,
    ((0, 0, 0), (4, 4, 1)): 5,
    ((0, 0, 0), (4, 1, 4)): 6,
    ((0, 0, 0), (1, 1, 4)): 6,
}


def test_slice_to_frb():
    ds = fake_sph_orientation_ds()
    frb = ds.slice(0, 0.5).to_frb(ds.domain_width[0], (64, 64))
    ref_vals = frb["gas", "density"]
    for center in ((0.5, "code_length"), (0.5, "cm"), ds.quan(0.5, "code_length")):
        frb = ds.slice(0, center).to_frb(ds.domain_width[0], (64, 64))
        vals = frb["gas", "density"]
        assert_equal(vals, ref_vals)


def test_region():
    ds = fake_sph_orientation_ds()
    for (left_edge, right_edge), answer in REGION_ANSWERS.items():
        # test that regions enclosing a particle's smoothing region
        # correctly select SPH particles
        for i in range(-1, 2):
            for j in range(-1, 2):
                le = np.array([le + i * 0.1 for le in left_edge])
                re = np.array([re + j * 0.1 for re in right_edge])

                # check if we went off the edge of the domain
                whl = le < ds.domain_left_edge
                le[whl] = ds.domain_left_edge[whl]
                whr = re > ds.domain_right_edge
                re[whr] = ds.domain_right_edge[whr]

                reg = ds.box(le, re)
                assert_equal(reg["gas", "density"].shape[0], answer)


def test_periodic_region():
    ds = fake_sph_grid_ds(10.0)
    coords = [0.7, 1.4, 2.8]

    for x in coords:
        for y in coords:
            for z in coords:
                center = np.array([x, y, z])
                for n, w in [(8, 1.0), (27, 2.0)]:
                    le = center - 0.5 * w
                    re = center + 0.5 * w
                    box = ds.box(le, re)
                    assert box["io", "particle_ones"].size == n


SPHERE_ANSWERS = {
    ((0, 0, 0), 4): 7,
    ((0, 0, 0), 3): 7,
    ((0, 0, 0), 2): 6,
    ((0, 0, 0), 1): 4,
    ((0, 0, 0), 0.5): 1,
    ((1, 0, 0), 0.5): 1,
    ((1, 0, 0), 1.0): 2,
    ((0, 1, 0), 1.0): 3,
    ((0, 0, 1), 1.0): 3,
}


def test_sphere():
    ds = fake_sph_orientation_ds()
    for (center, radius), answer in SPHERE_ANSWERS.items():
        # test that spheres enclosing a particle's smoothing region
        # correctly select SPH particles
        for i in range(-1, 2):
            for j in range(-1, 2):
                cent = np.array([c + i * 0.1 for c in center])
                rad = radius + 0.1 * j
                sph = ds.sphere(cent, rad)
                assert_equal(sph["gas", "density"].shape[0], answer)
DISK_ANSWERS = {
    ((0, 0, 0), (0, 0, 1), 4, 3): 7,
    ((0, 0, 0), (0, 0, 1), 4, 2): 6,
    ((0, 0, 0), (0, 0, 1), 4, 1): 5,
    ((0, 0, 0), (0, 0, 1), 4, 0.5): 4,
    ((0, 0, 0), (0, 1, 0), 4, 3): 7,
    ((0, 0, 0), (0, 1, 0), 4, 2): 7,
    ((0, 0, 0), (0, 1, 0), 4, 1): 6,
    ((0, 0, 0), (0, 1, 0), 4, 0.5): 5,
    ((0, 0, 0), (1, 0, 0), 4, 3): 7,
    ((0, 0, 0), (1, 0, 0), 4, 2): 7,
    ((0, 0, 0), (1, 0, 0), 4, 1): 7,
    ((0, 0, 0), (1, 0, 0), 4, 0.5): 6,
    ((0, 0, 0), (1, 1, 1), 1, 1): 4,
    ((-0.5, -0.5, -0.5), (1, 1, 1), 4, 4): 7,
}


def test_disk():
    ds = fake_sph_orientation_ds()
    for (center, normal, radius, height), answer in DISK_ANSWERS.items():
        # test that disks enclosing a particle's smoothing region
        # correctly select SPH particles
        for i in range(-1, 2):
            cent = np.array([c + i * 0.1 for c in center])
            disk = ds.disk(cent, normal, radius, height)
            assert_equal(disk["gas", "density"].shape[0], answer)


RAY_ANSWERS = {
    ((0, 0, 0), (3, 0, 0)): 2,
    ((0, 0, 0), (0, 3, 0)): 3,
    ((0, 0, 0), (0, 0, 3)): 4,
    ((0, 1, 0), (0, 2, 0)): 2,
    ((1, 0, 0), (0, 2, 0)): 2,
    ((0.5, 0.5, 0.5), (0.5, 0.5, 3.5)): 0,
}


def test_ray():
    ds = fake_sph_orientation_ds()
    for (start_point, end_point), answer in RAY_ANSWERS.items():
        for i in range(-1, 2):
            start = np.array([s + i * 0.1 for s in start_point])
            end = np.array([e + i * 0.1 for e in end_point])
            ray = ds.ray(start, end)
            assert_equal(ray["gas", "density"].shape[0], answer)


CUTTING_ANSWERS = {
    ((1, 0, 0), (0, 0, 0)): 6,
    ((0, 1, 0), (0, 0, 0)): 5,
    ((0, 0, 1), (0, 0, 0)): 4,
    ((1, 1, 1), (1.0 / 3, 1.0 / 3, 1.0 / 3)): 3,
    ((1, 1, 1), (2.0 / 3, 2.0 / 3, 2.0 / 3)): 2,
    ((1, 1, 1), (1, 1, 1)): 1,
}


def test_cutting():
    ds = fake_sph_orientation_ds()
    for (normal, center), answer in CUTTING_ANSWERS.items():
        for i in range(-1, 2):
            cen = [c + 0.1 * i for c in center]
            cut = ds.cutting(normal, cen)
            assert_equal(cut["gas", "density"].shape[0], answer)


def test_chained_selection():
    ds = fake_sph_orientation_ds()
    for (center, radius), answer in SPHERE_ANSWERS.items():
        sph = ds.sphere(center, radius)
        region = ds.box(ds.domain_left_edge, ds.domain_right_edge, data_source=sph)
        assert_equal(region["gas", "density"].shape[0], answer)


def test_boolean_selection():
    ds = fake_sph_orientation_ds()
    sph = ds.sphere([0, 0, 0], 0.5)
    sph2 = ds.sphere([1, 0, 0], 0.5)
    reg = ds.all_data()

    neg = reg - sph
    assert_equal(neg["gas", "density"].shape[0], 6)

    plus = sph + sph2
    assert_equal(plus["gas", "density"].shape[0], 2)

    intersect = sph & sph2
    assert_equal(intersect["gas", "density"].shape[0], 0)

    intersect = reg & sph2
    assert_equal(intersect["gas", "density"].shape[0], 1)

    exclusive = sph ^ sph2
    assert_equal(exclusive["gas", "density"].shape[0], 2)

    exclusive = sph ^ reg
    assert_equal(exclusive["gas", "density"].shape[0], 6)

    intersect = ds.intersection([sph, sph2])
    assert_equal(intersect["gas", "density"].shape[0], 0)

    intersect = ds.intersection([reg, sph2])
    assert_equal(intersect["gas", "density"].shape[0], 1)

    union = ds.union([sph, sph2])
    assert_equal(union["gas", "density"].shape[0], 2)

    union = ds.union([sph, reg])
    assert_equal(union["gas", "density"].shape[0], 7)
dens = agrid["gas", "density"] answers = np.ones(shape=(3, 3, 3)) assert_equal(dens, answers) def test_compare_arbitrary_grid_slice(): ds = fake_sph_orientation_ds() c = np.array([0.0, 0.0, 0.0]) width = 1.5 buff_size = 51 field = ("gas", "density") # buffer from arbitrary grid ag = ds.arbitrary_grid(c - width / 2, c + width / 2, [buff_size] * 3) buff_ag = ag[field][:, :, int(np.floor(buff_size / 2))].d.T # buffer from slice p = SlicePlot(ds, "z", field, center=c, width=width) p.set_buff_size(51) buff_slc = p.frb.data[field].d assert_equal(buff_slc, buff_ag) def test_gather_slice(): ds = fake_sph_grid_ds() ds.num_neighbors = 5 field = ("gas", "density") c = np.array([1.5, 1.5, 0.5]) width = 3.0 p = SlicePlot(ds, "z", field, center=c, width=width) p.set_buff_size(3) buff_scatter = p.frb.data[field].d ds.sph_smoothing_style = "gather" p = SlicePlot(ds, "z", field, center=c, width=width) p.set_buff_size(3) buff_gather = p.frb.data[field].d assert_allclose(buff_scatter, buff_gather, rtol=3e-16) def test_gather_grid(): ds = fake_sph_grid_ds() ds.num_neighbors = 5 field = ("gas", "density") ag = ds.arbitrary_grid([0, 0, 0], [3, 3, 3], dims=[3, 3, 3]) scatter = ag[field] ds.sph_smoothing_style = "gather" ag = ds.arbitrary_grid([0, 0, 0], [3, 3, 3], dims=[3, 3, 3]) gather = ag[field] assert_allclose(gather, scatter, rtol=3e-16) def test_covering_grid_scatter(): ds = fake_sph_grid_ds() field = ("gas", "density") buff_size = 8 ag = ds.arbitrary_grid(0, 3, [buff_size] * 3) ag_dens = ag[field].to("g*cm**-3").d cg = ds.covering_grid(3, 0, 8) cg_dens = cg[field].to("g*cm**-3").d assert_equal(ag_dens, cg_dens) def test_covering_grid_gather(): ds = fake_sph_grid_ds() ds.sph_smoothing_style = "gather" ds.num_neighbors = 5 field = ("gas", "density") buff_size = 8 ag = ds.arbitrary_grid(0, 3, [buff_size] * 3) ag_dens = ag[field].to("g*cm**-3").d cg = ds.covering_grid(3, 0, 8) cg_dens = cg[field].to("g*cm**-3").d assert_equal(ag_dens, cg_dens) @requires_file("TNGHalo/halo_59.hdf5") def test_covering_grid_derived_fields(): def hot_gas(pfilter, data): return data[pfilter.filtered_type, "temperature"] > 1.0e6 add_particle_filter( "hot_gas", function=hot_gas, filtered_type="gas", requires=["temperature"], ) bbox = [[40669.34, 56669.34], [45984.04, 61984.04], [54114.9, 70114.9]] ds = load("TNGHalo/halo_59.hdf5", bounding_box=bbox) ds.add_particle_filter("hot_gas") w = ds.quan(0.2, "Mpc") le = ds.domain_center - 0.5 * w re = ds.domain_center + 0.5 * w g = ds.r[le[0] : re[0] : 128j, le[1] : re[1] : 128j, le[2] : re[2] : 128j] T1 = g["gas", "temperature"].to("keV", "thermal") T2 = g["gas", "kT"] assert_almost_equal(T1, T2) T3 = g["hot_gas", "temperature"].to("keV", "thermal") T4 = g["hot_gas", "kT"] assert_almost_equal(T3, T4) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_spheres.py0000644000175100001770000000764014714401662021076 0ustar00runnerdockerimport numpy as np from numpy.testing import assert_array_equal, assert_equal, assert_raises from yt.data_objects.profiles import create_profile from yt.testing import fake_random_ds, periodicity_cases, requires_module from yt.utilities.exceptions import YTException, YTFieldNotFound def setup_module(): from yt.config import ytcfg ytcfg["yt", "internals", "within_testing"] = True _fields_to_compare = ( "spherical_radius", "cylindrical_radius", "spherical_theta", "cylindrical_theta", "spherical_phi", "cylindrical_z", ) def test_domain_sphere(): # Now we test that we can get different radial 
velocities based on field # parameters. # Get the first sphere ds = fake_random_ds( 16, fields=("density", "velocity_x", "velocity_y", "velocity_z"), units=("g/cm**3", "cm/s", "cm/s", "cm/s"), ) sp0 = ds.sphere(ds.domain_center, 0.25) # Compute the bulk velocity from the cells in this sphere bulk_vel = sp0.quantities.bulk_velocity() # Get the second sphere sp1 = ds.sphere(ds.domain_center, 0.25) # Set the bulk velocity field parameter sp1.set_field_parameter("bulk_velocity", bulk_vel) assert_equal( np.any(sp0["gas", "radial_velocity"] == sp1["gas", "radial_velocity"]), False, ) # Radial profile without correction # Note we set n_bins = 8 here. rp0 = create_profile( sp0, "radius", "radial_velocity", units={"radius": "kpc"}, logs={"radius": False}, n_bins=8, ) # Radial profile with correction for bulk velocity rp1 = create_profile( sp1, "radius", "radial_velocity", units={"radius": "kpc"}, logs={"radius": False}, n_bins=8, ) assert_equal(rp0.x_bins, rp1.x_bins) assert_equal(rp0.used, rp1.used) assert_equal(rp0.used.sum() > rp0.used.size / 2.0, True) assert_equal( np.any(rp0["radial_velocity"][rp0.used] == rp1["radial_velocity"][rp1.used]), False, ) ref_sp = ds.sphere("c", 0.25) for f in _fields_to_compare: ref_sp[f].sort() for center in periodicity_cases(ds): sp = ds.sphere(center, 0.25) for f in _fields_to_compare: sp[f].sort() assert_equal(sp[f], ref_sp[f]) def test_sphere_center(): ds = fake_random_ds( 16, nprocs=8, fields=("density", "temperature", "velocity_x"), units=("g/cm**3", "K", "cm/s"), ) # Test if we obtain same center in different ways sp1 = ds.sphere("max", (0.25, "unitary")) sp2 = ds.sphere("max_density", (0.25, "unitary")) assert_array_equal(sp1.center, sp2.center) sp1 = ds.sphere("min", (0.25, "unitary")) sp2 = ds.sphere("min_density", (0.25, "unitary")) assert_array_equal(sp1.center, sp2.center) def test_center_error(): ds = fake_random_ds(16, nprocs=16) with assert_raises(YTFieldNotFound): ds.sphere("min_non_existing_field_name", (0.25, "unitary")) with assert_raises(YTFieldNotFound): ds.sphere("max_non_existing_field_name", (0.25, "unitary")) @requires_module("miniball") def test_minimal_sphere(): ds = fake_random_ds(16, nprocs=8, particles=100) pos = ds.r["all", "particle_position"] sp1 = ds.minimal_sphere(pos) N0 = len(pos) # Check all particles have been found N1 = len(sp1["all", "particle_ones"]) assert_equal(N0, N1) # Check that any smaller sphere is missing some particles sp2 = ds.sphere(sp1.center, sp1.radius * 0.9) N2 = len(sp2["all", "particle_ones"]) assert N2 < N0 @requires_module("miniball") def test_minimal_sphere_bad_inputs(): ds = fake_random_ds(16, nprocs=8, particles=100) pos = ds.r["all", "particle_position"] ## Check number of points >= 2 # -> should fail assert_raises(YTException, ds.minimal_sphere, pos[:1, :]) # -> should not fail ds.minimal_sphere(pos[:2, :]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_time_series.py0000644000175100001770000001133214714401662021726 0ustar00runnerdockerimport tempfile from pathlib import Path import pytest from yt.data_objects.static_output import Dataset from yt.data_objects.time_series import DatasetSeries from yt.utilities.exceptions import YTUnidentifiedDataType from yt.utilities.object_registries import output_type_registry def test_pattern_expansion(): file_list = [f"fake_data_file_{str(i).zfill(4)}" for i in range(10)] with tempfile.TemporaryDirectory() as tmpdir: tmp_path = Path(tmpdir) for file in file_list: (tmp_path / 
file).touch() pattern = tmp_path / "fake_data_file_*" expected = [str(tmp_path / file) for file in file_list] found = DatasetSeries._get_filenames_from_glob_pattern(pattern) assert found == expected found2 = DatasetSeries._get_filenames_from_glob_pattern(Path(pattern)) assert found2 == expected def test_no_match_pattern(): with tempfile.TemporaryDirectory() as tmpdir: pattern = Path(tmpdir).joinpath("fake_data_file_*") with pytest.raises(FileNotFoundError): DatasetSeries._get_filenames_from_glob_pattern(pattern) @pytest.fixture def FakeDataset(): class _FakeDataset(Dataset): """A minimal loadable fake dataset subclass""" def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) @classmethod def _is_valid(cls, *args, **kwargs): return True def _parse_parameter_file(self): return def _set_code_unit_attributes(self): return def set_code_units(self): i = int(Path(self.filename).name.split("_")[-1]) self.current_time = i self.current_redshift = 1 / (i + 1) return def _hash(self): return def _setup_classes(self): return try: yield _FakeDataset finally: output_type_registry.pop("_FakeDataset") @pytest.fixture def fake_datasets(): file_list = [f"fake_data_file_{i:04d}" for i in range(10)] with tempfile.TemporaryDirectory() as tmpdir: pfile_list = [Path(tmpdir) / file for file in file_list] sfile_list = [str(file) for file in pfile_list] for file in pfile_list: file.touch() pattern = Path(tmpdir) / "fake_data_file_*" yield file_list, pfile_list, sfile_list, pattern def test_init_fake_dataseries(fake_datasets): file_list, pfile_list, sfile_list, pattern = fake_datasets # init from str pattern ts = DatasetSeries(pattern) assert ts._pre_outputs == sfile_list # init from Path pattern ppattern = Path(pattern) ts = DatasetSeries(ppattern) assert ts._pre_outputs == sfile_list # init from str list ts = DatasetSeries(sfile_list) assert ts._pre_outputs == sfile_list # init from Path list ts = DatasetSeries(pfile_list) assert ts._pre_outputs == pfile_list # rejected input type (str repr of a list) "[file1, file2, ...]" with pytest.raises(FileNotFoundError): DatasetSeries(str(file_list)) # finally, check that ts[0] fails to actually load with pytest.raises(YTUnidentifiedDataType): ts[0] def test_init_fake_dataseries2(FakeDataset, fake_datasets): _file_list, _pfile_list, _sfile_list, pattern = fake_datasets ds = DatasetSeries(pattern)[0] assert isinstance(ds, FakeDataset) ts = DatasetSeries(pattern, my_unsupported_kwarg=None) with pytest.raises(TypeError): ts[0] def test_get_by_key(FakeDataset, fake_datasets): _file_list, _pfile_list, sfile_list, pattern = fake_datasets ts = DatasetSeries(pattern) Ntot = len(sfile_list) t = ts[0].quan(1, "code_time") assert sfile_list[0] == ts.get_by_time(-t).filename assert sfile_list[0] == ts.get_by_time(t - t).filename assert sfile_list[1] == ts.get_by_time((0.8, "code_time")).filename assert sfile_list[1] == ts.get_by_time((1.2, "code_time")).filename assert sfile_list[Ntot - 1] == ts.get_by_time(t * (Ntot - 1)).filename assert sfile_list[Ntot - 1] == ts.get_by_time(t * Ntot).filename with pytest.raises(ValueError): ts.get_by_time(-2 * t, tolerance=0.1 * t) with pytest.raises(ValueError): ts.get_by_time(1000 * t, tolerance=0.1 * t) assert sfile_list[1] == ts.get_by_redshift(1 / 2.2).filename assert sfile_list[1] == ts.get_by_redshift(1 / 2).filename assert sfile_list[1] == ts.get_by_redshift(1 / 1.6).filename with pytest.raises(ValueError): ts.get_by_redshift(1000, tolerance=0.1) zmid = (ts[0].current_redshift + ts[1].current_redshift) / 2 assert sfile_list[1]
== ts.get_by_redshift(zmid, prefer="smaller").filename assert sfile_list[0] == ts.get_by_redshift(zmid, prefer="larger").filename ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/tests/test_units_override.py0000644000175100001770000000441114714401662022457 0ustar00runnerdockerfrom functools import partial from numpy.testing import assert_raises from yt.data_objects.static_output import Dataset from yt.units import YTQuantity from yt.units.unit_registry import UnitRegistry mock_quan = partial(YTQuantity, registry=UnitRegistry()) def test_schema_validation(): valid_schemas = [ {"length_unit": 1.0}, {"length_unit": [1.0]}, {"length_unit": (1.0,)}, {"length_unit": int(1.0)}, {"length_unit": (1.0, "m")}, {"length_unit": [1.0, "m"]}, {"length_unit": YTQuantity(1.0, "m")}, ] for schema in valid_schemas: uo = Dataset._sanitize_units_override(schema) for v in uo.values(): q = mock_quan(v) # check that no error (TypeError) is raised q.to("pc") # check that q is a length def test_invalid_schema_detection(): invalid_key_schemas = [ {"len_unit": 1.0}, # plain invalid key {"lenght_unit": 1.0}, # typo ] for invalid_schema in invalid_key_schemas: assert_raises(ValueError, Dataset._sanitize_units_override, invalid_schema) invalid_val_schemas = [ {"length_unit": [1, 1, 1]}, # len(val) > 2 {"length_unit": [1, 1, 1, 1, 1]}, # "data type not understood" in unyt ] for invalid_schema in invalid_val_schemas: assert_raises(TypeError, Dataset._sanitize_units_override, invalid_schema) # 0 shouldn't make sense invalid_number_schemas = [ {"length_unit": 0}, {"length_unit": [0]}, {"length_unit": (0,)}, {"length_unit": (0, "cm")}, ] for invalid_schema in invalid_number_schemas: assert_raises(ValueError, Dataset._sanitize_units_override, invalid_schema) def test_typing_error_detection(): invalid_schema = {"length_unit": "1m"} # this is the error that is raised by unyt on bad input assert_raises(RuntimeError, mock_quan, invalid_schema["length_unit"]) # check that the sanitizer function is able to catch the # type issue before passing down to unyt assert_raises(TypeError, Dataset._sanitize_units_override, invalid_schema) def test_dimensionality_error_detection(): invalid_schema = {"length_unit": YTQuantity(1.0, "s")} assert_raises(ValueError, Dataset._sanitize_units_override, invalid_schema) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/time_series.py0000644000175100001770000006571414714401662017542 0ustar00runnerdockerimport functools import glob import inspect import os import weakref from abc import ABC, abstractmethod from functools import wraps from typing import TYPE_CHECKING, Literal import numpy as np from more_itertools import always_iterable from unyt import Unit, unyt_quantity from yt.config import ytcfg from yt.data_objects.analyzer_objects import AnalysisTask, create_quantity_proxy from yt.data_objects.particle_trajectories import ParticleTrajectories from yt.funcs import is_sequence, mylog from yt.units.yt_array import YTArray, YTQuantity from yt.utilities.exceptions import YTException from yt.utilities.object_registries import ( analysis_task_registry, data_object_registry, derived_quantity_registry, simulation_time_series_registry, ) from yt.utilities.parallel_tools.parallel_analysis_interface import ( communication_system, parallel_objects, parallel_root_only, ) if TYPE_CHECKING: from yt.data_objects.static_output import Dataset class AnalysisTaskProxy: def 
__init__(self, time_series): self.time_series = time_series def __getitem__(self, key): task_cls = analysis_task_registry[key] @wraps(task_cls.__init__) def func(*args, **kwargs): task = task_cls(*args, **kwargs) return self.time_series.eval(task) return func def keys(self): return analysis_task_registry.keys() def __contains__(self, key): return key in analysis_task_registry def get_ds_prop(propname): def _eval(params, ds): return getattr(ds, propname) cls = type(propname, (AnalysisTask,), {"eval": _eval, "_params": ()}) return cls attrs = ( "refine_by", "dimensionality", "current_time", "domain_dimensions", "domain_left_edge", "domain_right_edge", "unique_identifier", "current_redshift", "cosmological_simulation", "omega_matter", "omega_lambda", "omega_radiation", "hubble_constant", ) class TimeSeriesParametersContainer: def __init__(self, data_object): self.data_object = data_object def __getattr__(self, attr): if attr in attrs: return self.data_object.eval(get_ds_prop(attr)()) raise AttributeError(attr) class DatasetSeries: r"""The DatasetSeries object is a container of multiple datasets, allowing easy iteration and computation on them. DatasetSeries objects are designed to provide easy ways to access, analyze, parallelize and visualize multiple datasets sequentially. This is primarily expressed through iteration, but can also be constructed via analysis tasks (see :ref:`time-series-analysis`). Note that contained datasets are lazily loaded and weakly referenced. This means that in order to perform follow-up operations on data it's best to define handles on these datasets during iteration. Parameters ---------- outputs : list of filenames, or pattern A list of filenames, for instance ["DD0001/DD0001", "DD0002/DD0002"], or a glob pattern (i.e. containing wildcards '[]?!*') such as "DD*/DD*.index". In the latter case, results are sorted automatically. Filenames and patterns can be of type str, os.Pathlike or bytes. parallel : True, False or int This parameter governs the behavior when .piter() is called on the resultant DatasetSeries object. If this is set to False, the time series will not iterate in parallel when .piter() is called. If this is set to True, one processor will be allocated for each iteration of the loop. If this is set to an integer, the loop will be parallelized over this many workgroups. If the integer value is less than the total number of available processors, more than one processor will be allocated to a given loop iteration, causing the functionality within the loop to be run in parallel. setup_function : callable, accepts a ds This function will be called whenever a dataset is loaded. mixed_dataset_types : True or False, default False Set to True if the DatasetSeries will load different dataset types, set to False if loading datasets of a single type as this will result in a considerable speed up from not having to figure out the dataset type. Examples -------- >>> ts = DatasetSeries( ... "GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0[0-6][0-9]0" ... ) >>> for ds in ts: ... SlicePlot(ds, "x", ("gas", "density")).save() ... >>> def print_time(ds): ... print(ds.current_time) ... >>> ts = DatasetSeries( ... "GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0[0-6][0-9]0", ... setup_function=print_time, ... ) ... >>> for ds in ts: ... 
SlicePlot(ds, "x", ("gas", "density")).save() """ _dataset_cls: type["Dataset"] | None = None def __init_subclass__(cls, *args, **kwargs): super().__init_subclass__(*args, **kwargs) code_name = cls.__name__[: cls.__name__.find("Simulation")] if code_name: simulation_time_series_registry[code_name] = cls mylog.debug("Registering simulation: %s as %s", code_name, cls) def __new__(cls, outputs, *args, **kwargs): try: outputs = cls._get_filenames_from_glob_pattern(outputs) except TypeError: pass ret = super().__new__(cls) ret._pre_outputs = outputs[:] ret.kwargs = {} return ret def __init__( self, outputs, parallel=True, setup_function=None, mixed_dataset_types=False, **kwargs, ): # This is needed to properly set _pre_outputs for Simulation subclasses. self._mixed_dataset_types = mixed_dataset_types if is_sequence(outputs) and not isinstance(outputs, str): self._pre_outputs = outputs[:] self.tasks = AnalysisTaskProxy(self) self.params = TimeSeriesParametersContainer(self) if setup_function is None: def _null(x): return None setup_function = _null self._setup_function = setup_function for type_name in data_object_registry: setattr( self, type_name, functools.partial(DatasetSeriesObject, self, type_name) ) self.parallel = parallel self.kwargs = kwargs @staticmethod def _get_filenames_from_glob_pattern(outputs): """ Helper function to DatasetSeries.__new__ handle a special case where "outputs" is assumed to be really a pattern string """ pattern = outputs epattern = os.path.expanduser(pattern) data_dir = ytcfg.get("yt", "test_data_dir") # if no match if found from the current work dir, # we try to match the pattern from the test data dir file_list = glob.glob(epattern) or glob.glob(os.path.join(data_dir, epattern)) if not file_list: raise FileNotFoundError(f"No match found for pattern : {pattern}") return sorted(file_list) def __getitem__(self, key): if isinstance(key, slice): if isinstance(key.start, float): return self.get_range(key.start, key.stop) # This will return a sliced up object! return DatasetSeries( self._pre_outputs[key], parallel=self.parallel, **self.kwargs ) o = self._pre_outputs[key] if isinstance(o, (str, os.PathLike)): o = self._load(o, **self.kwargs) self._setup_function(o) return o def __len__(self): return len(self._pre_outputs) @property def outputs(self): return self._pre_outputs def piter(self, storage=None, dynamic=False): r"""Iterate over time series components in parallel. This allows you to iterate over a time series while dispatching individual components of that time series to different processors or processor groups. If the parallelism strategy was set to be multi-processor (by "parallel = N" where N is an integer when the DatasetSeries was created) this will issue each dataset to an N-processor group. For instance, this would allow you to start a 1024 processor job, loading up 100 datasets in a time series and creating 8 processor groups of 128 processors each, each of which would be assigned a different dataset. This could be accomplished as shown in the examples below. The *storage* option is as seen in :func:`~yt.utilities.parallel_tools.parallel_analysis_interface.parallel_objects` which is a mechanism for storing results of analysis on an individual dataset and then combining the results at the end, so that the entire set of processors have access to those results. Note that supplying a *store* changes the iteration mechanism; see below. 
Parameters ---------- storage : dict This is a dictionary, which will be filled with results during the course of the iteration. The keys will be the dataset indices and the values will be whatever is assigned to the *result* attribute on the storage during iteration. dynamic : boolean This governs whether or not dynamic load balancing will be enabled. This requires one dedicated processor; if this is enabled with a set of 128 processors available, only 127 will be available to iterate over objects as one will be load balancing the rest. Examples -------- Here is an example of iteration when the results do not need to be stored. One processor will be assigned to each dataset. >>> ts = DatasetSeries("DD*/DD*.index") >>> for ds in ts.piter(): ... SlicePlot(ds, "x", ("gas", "density")).save() ... This demonstrates how one might store results: >>> def print_time(ds): ... print(ds.current_time) ... >>> ts = DatasetSeries("DD*/DD*.index", setup_function=print_time) ... >>> my_storage = {} >>> for sto, ds in ts.piter(storage=my_storage): ... v, c = ds.find_max(("gas", "density")) ... sto.result = (v, c) ... >>> for i, (v, c) in sorted(my_storage.items()): ... print("% 4i %0.3e" % (i, v)) ... This shows how to dispatch 4 processors to each dataset: >>> ts = DatasetSeries("DD*/DD*.index", parallel=4) >>> for ds in ts.piter(): ... ProjectionPlot(ds, "x", ("gas", "density")).save() ... """ if not self.parallel: njobs = 1 elif not dynamic: if self.parallel is True: njobs = -1 else: njobs = self.parallel else: my_communicator = communication_system.communicators[-1] nsize = my_communicator.size if nsize == 1: self.parallel = False dynamic = False njobs = 1 else: njobs = nsize - 1 for output in parallel_objects( self._pre_outputs, njobs=njobs, storage=storage, dynamic=dynamic ): if storage is not None: sto, output = output if isinstance(output, str): ds = self._load(output, **self.kwargs) self._setup_function(ds) else: ds = output if storage is not None: next_ret = (sto, ds) else: next_ret = ds yield next_ret def eval(self, tasks, obj=None): return_values = {} for store, ds in self.piter(return_values): store.result = [] for task in always_iterable(tasks): try: style = inspect.getfullargspec(task.eval)[0][1] if style == "ds": arg = ds elif style == "data_object": if obj is None: obj = DatasetSeriesObject(self, "all_data") arg = obj.get(ds) rv = task.eval(arg) # We catch and store YT-originating exceptions # This fixes the standard problem of having a sphere that's too # small. except YTException: pass store.result.append(rv) return [v for k, v in sorted(return_values.items())] @classmethod def from_output_log(cls, output_log, line_prefix="DATASET WRITTEN", parallel=True): filenames = [] for line in open(output_log): if not line.startswith(line_prefix): continue cut_line = line[len(line_prefix) :].strip() fn = cut_line.split()[0] filenames.append(fn) obj = cls(filenames, parallel=parallel) return obj def _load(self, output_fn, *, hint: str | None = None, **kwargs): from yt.loaders import load if self._dataset_cls is not None: return self._dataset_cls(output_fn, **kwargs) elif self._mixed_dataset_types: return load(output_fn, hint=hint, **kwargs) ds = load(output_fn, hint=hint, **kwargs) self._dataset_cls = ds.__class__ return ds def particle_trajectories( self, indices, fields=None, suppress_logging=False, ptype=None ): r"""Create a collection of particle trajectories in time over a series of datasets. 
Parameters ---------- indices : array_like An integer array of particle indices whose trajectories we want to track. If they are not sorted they will be sorted. fields : list of strings, optional A set of fields that is retrieved when the trajectory collection is instantiated. Default: None (will default to the fields 'particle_position_x', 'particle_position_y', 'particle_position_z') suppress_logging : boolean Suppress yt's logging when iterating over the simulation time series. Default: False ptype : str, optional Only use this particle type. Default: None, which uses all particle types. Examples -------- >>> my_fns = glob.glob("orbit_hdf5_chk_00[0-9][0-9]") >>> my_fns.sort() >>> fields = [ ... ("all", "particle_position_x"), ... ("all", "particle_position_y"), ... ("all", "particle_position_z"), ... ("all", "particle_velocity_x"), ... ("all", "particle_velocity_y"), ... ("all", "particle_velocity_z"), ... ] >>> ds = load(my_fns[0]) >>> init_sphere = ds.sphere(ds.domain_center, (0.5, "unitary")) >>> indices = init_sphere["all", "particle_index"].astype("int64") >>> ts = DatasetSeries(my_fns) >>> trajs = ts.particle_trajectories(indices, fields=fields) >>> for t in trajs: ... print( ... t["all", "particle_velocity_x"].max(), ... t["all", "particle_velocity_x"].min(), ... ) Notes ----- This function will fail if there are duplicate particle ids or if some of the particles disappear. """ return ParticleTrajectories( self, indices, fields=fields, suppress_logging=suppress_logging, ptype=ptype ) def _get_by_attribute( self, attribute: str, value: unyt_quantity | tuple[float, Unit | str], tolerance: None | unyt_quantity | tuple[float, Unit | str] = None, prefer: Literal["nearest", "smaller", "larger"] = "nearest", ) -> "Dataset": r""" Get a dataset at or near to a given value. Parameters ---------- attribute : str The key by which to retrieve an output, usually 'current_time' or 'current_redshift'. The key must be an attribute of the dataset and monotonic. value : unyt_quantity or (value, unit) The value to search for. tolerance : unyt_quantity or (value, unit), optional If not None, do not return a dataset unless the value is within the tolerance value. If None, simply return the nearest dataset. Default: None. prefer : str The side of the value to return. Can be 'nearest', 'smaller' or 'larger'. Default: 'nearest'. """ if prefer not in ("nearest", "smaller", "larger"): raise ValueError( f"Side must be 'nearest', 'smaller' or 'larger', got {prefer}." ) # Use a binary search to find the closest value iL = 0 iR = len(self._pre_outputs) - 1 if iL == iR: ds = self[0] if ( tolerance is not None and abs(getattr(ds, attribute) - value) > tolerance ): raise ValueError( f"No dataset found with {attribute} within {tolerance} of {value}." ) return ds # Check signedness dsL = self[iL] dsR = self[iR] vL = getattr(dsL, attribute) vR = getattr(dsR, attribute) if vL < vR: sign = 1 elif vL > vR: sign = -1 else: raise ValueError( f"{dsL} and {dsR} both have {attribute}={vL}, cannot perform search." 
) if isinstance(value, tuple): value = dsL.quan(*value) if isinstance(tolerance, tuple): tolerance = dsL.quan(*tolerance) # Short-circuit if value is out-of-range if not (vL * sign < value * sign < vR * sign): iL = iR = 0 while iR - iL > 1: iM = (iR + iL) // 2 dsM = self[iM] vM = getattr(dsM, attribute) if sign * value < sign * vM: iR = iM dsR = dsM elif sign * value > sign * vM: iL = iM dsL = dsM else: # Exact match dsL = dsR = dsM break if prefer == "smaller": ds_best = dsL if sign > 0 else dsR elif prefer == "larger": ds_best = dsR if sign > 0 else dsL elif abs(value - getattr(dsL, attribute)) < abs( value - getattr(dsR, attribute) ): ds_best = dsL else: ds_best = dsR if tolerance is not None: if abs(value - getattr(ds_best, attribute)) > tolerance: raise ValueError( f"No dataset found with {attribute} within {tolerance} of {value}." ) return ds_best def get_by_time( self, time: unyt_quantity | tuple[float, Unit | str], tolerance: None | unyt_quantity | tuple[float, Unit | str] = None, prefer: Literal["nearest", "smaller", "larger"] = "nearest", ) -> "Dataset": """ Get a dataset at or near to a given time. Parameters ---------- time : unyt_quantity or (value, unit) The time to search for. tolerance : unyt_quantity or (value, unit) If not None, do not return a dataset unless the time is within the tolerance value. If None, simply return the nearest dataset. Default: None. prefer : str The side of the value to return. Can be 'nearest', 'smaller' or 'larger'. Default: 'nearest'. Examples -------- >>> ds = ts.get_by_time((12, "Gyr")) >>> t = ts[0].quan(12, "Gyr") ... ds = ts.get_by_time(t, tolerance=(100, "Myr")) """ return self._get_by_attribute( "current_time", time, tolerance=tolerance, prefer=prefer ) def get_by_redshift( self, redshift: float, tolerance: float | None = None, prefer: Literal["nearest", "smaller", "larger"] = "nearest", ) -> "Dataset": """ Get a dataset at or near to a given redshift. Parameters ---------- redshift : float The redshift to search for. tolerance : float If not None, do not return a dataset unless the redshift is within the tolerance value. If None, simply return the nearest dataset. Default: None. prefer : str The side of the value to return. Can be 'nearest', 'smaller' or 'larger'. Default: 'nearest'. 
Examples -------- >>> ds = ts.get_by_redshift(0.0) """ return self._get_by_attribute( "current_redshift", redshift, tolerance=tolerance, prefer=prefer ) class TimeSeriesQuantitiesContainer: def __init__(self, data_object, quantities): self.data_object = data_object self.quantities = quantities def __getitem__(self, key): if key not in self.quantities: raise KeyError(key) q = self.quantities[key] def run_quantity_wrapper(quantity, quantity_name): @wraps(derived_quantity_registry[quantity_name][1]) def run_quantity(*args, **kwargs): to_run = quantity(*args, **kwargs) return self.data_object.eval(to_run) return run_quantity return run_quantity_wrapper(q, key) class DatasetSeriesObject: def __init__(self, time_series, data_object_name, *args, **kwargs): self.time_series = weakref.proxy(time_series) self.data_object_name = data_object_name self._args = args self._kwargs = kwargs qs = { qn: create_quantity_proxy(qv) for qn, qv in derived_quantity_registry.items() } self.quantities = TimeSeriesQuantitiesContainer(self, qs) def eval(self, tasks): return self.time_series.eval(tasks, self) def get(self, ds): # We get the type name, which corresponds to an attribute of the # index cls = getattr(ds, self.data_object_name) return cls(*self._args, **self._kwargs) class SimulationTimeSeries(DatasetSeries, ABC): def __init__(self, parameter_filename, find_outputs=False): """ Base class for generating simulation time series types. Principally consists of a *parameter_filename*. """ if not os.path.exists(parameter_filename): raise FileNotFoundError(parameter_filename) self.parameter_filename = parameter_filename self.basename = os.path.basename(parameter_filename) self.directory = os.path.dirname(parameter_filename) self.parameters = {} self.key_parameters = [] # Set some parameter defaults. self._set_parameter_defaults() # Read the simulation dataset. self._parse_parameter_file() # Set units self._set_units() # Figure out the starting and stopping times and redshift. self._calculate_simulation_bounds() # Get all possible datasets. self._get_all_outputs(find_outputs=find_outputs) self.print_key_parameters() def _set_parameter_defaults(self): # noqa: B027 pass @abstractmethod def _parse_parameter_file(self): pass @abstractmethod def _set_units(self): pass @abstractmethod def _calculate_simulation_bounds(self): pass @abstractmethod def _get_all_outputs(self, *, find_outputs=False): pass def __repr__(self): return self.parameter_filename _arr = None @property def arr(self): if self._arr is not None: return self._arr self._arr = functools.partial(YTArray, registry=self.unit_registry) return self._arr _quan = None @property def quan(self): if self._quan is not None: return self._quan self._quan = functools.partial(YTQuantity, registry=self.unit_registry) return self._quan @parallel_root_only def print_key_parameters(self): """ Print out some key parameters for the simulation. """ if self.simulation_type == "grid": for a in ["domain_dimensions", "domain_left_edge", "domain_right_edge"]: self._print_attr(a) for a in ["initial_time", "final_time", "cosmological_simulation"]: self._print_attr(a) if getattr(self, "cosmological_simulation", False): for a in [ "box_size", "omega_matter", "omega_lambda", "omega_radiation", "hubble_constant", "initial_redshift", "final_redshift", ]: self._print_attr(a) for a in self.key_parameters: self._print_attr(a) mylog.info("Total datasets: %d.", len(self.all_outputs)) def _print_attr(self, a): """ Print the attribute or warn about it missing. 
""" if not hasattr(self, a): mylog.error("Missing %s in dataset definition!", a) return v = getattr(self, a) mylog.info("Parameters: %-25s = %s", a, v) def _get_outputs_by_key(self, key, values, tolerance=None, outputs=None): r""" Get datasets at or near to given values. Parameters ---------- key : str The key by which to retrieve outputs, usually 'time' or 'redshift'. values : array_like A list of values, given as floats. tolerance : float If not None, do not return a dataset unless the value is within the tolerance value. If None, simply return the nearest dataset. Default: None. outputs : list The list of outputs from which to choose. If None, self.all_outputs is used. Default: None. Examples -------- >>> datasets = es.get_outputs_by_key("redshift", [0, 1, 2], tolerance=0.1) """ if not isinstance(values, YTArray): if isinstance(values, tuple) and len(values) == 2: values = self.arr(*values) else: values = self.arr(values) values = values.in_base() if outputs is None: outputs = self.all_outputs my_outputs = [] if not outputs: return my_outputs for value in values: outputs.sort(key=lambda obj, value=value: np.abs(value - obj[key])) if ( tolerance is None or np.abs(value - outputs[0][key]) <= tolerance ) and outputs[0] not in my_outputs: my_outputs.append(outputs[0]) else: mylog.error("No dataset added for %s = %f.", key, value) outputs.sort(key=lambda obj: obj["time"]) return my_outputs ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/data_objects/unions.py0000644000175100001770000000112014714401662016522 0ustar00runnerdockerfrom abc import ABC, abstractmethod from more_itertools import always_iterable class Union(ABC): @property @abstractmethod def _union_type(self) -> str: ... def __init__(self, name, sub_types): self.name = name self.sub_types = list(always_iterable(sub_types)) def __iter__(self): yield from self.sub_types def __repr__(self): return f"{self._union_type.capitalize()} Union: '{self.name}' composed of: {self.sub_types}" class MeshUnion(Union): _union_type = "mesh" class ParticleUnion(Union): _union_type = "particle" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/default.mplstyle0000644000175100001770000000043114714401662015436 0ustar00runnerdocker# basic usage (requires matplotlib 3.7+) # >>> import matplotlib as mpl # >>> mpl.style.use("yt.default") xtick.top: True ytick.right: True xtick.minor.visible: True ytick.minor.visible: True xtick.direction: in ytick.direction: in font.family: stixgeneral mathtext.fontset: cm ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2791517 yt-4.4.0/yt/extensions/0000755000175100001770000000000014714401715014417 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/extensions/__init__.py0000644000175100001770000000160114714401662016527 0ustar00runnerdocker""" yt.extensions ~~~~~~~~~~~~~ Redirect imports for extensions. This module basically makes it possible for us to transition from ytext.foo to ytext_foo without having to force all extensions to upgrade at the same time. When a user does ``from yt.extensions.foo import bar`` it will attempt to import ``from yt_foo import bar`` first and when that fails it will try to import ``from ytext.foo import bar``. We're switching from namespace packages because it was just too painful for everybody involved. :copyright: (c) 2015 by Armin Ronacher. 
:license: BSD, see LICENSE for more details. """ # This source code is originally from flask, in the flask/ext/__init__.py file. def setup(): from ..exthook import ExtensionImporter importer = ExtensionImporter(["yt_%s", "ytext.%s"], __name__) importer.install() setup() del setup ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/exthook.py0000644000175100001770000001176414714401662014261 0ustar00runnerdocker""" yt.exthook ~~~~~~~~~~ Redirect imports for extensions. This module basically makes it possible for us to transition from ytext.foo to yt_foo without having to force all extensions to upgrade at the same time. When a user does ``from yt.extensions.foo import bar`` it will attempt to import ``from yt_foo import bar`` first and when that fails it will try to import ``from ytext.foo import bar``. We're switching from namespace packages because it was just too painful for everybody involved. This is used by `yt.extensions`. :copyright: (c) 2015 by Armin Ronacher. :license: BSD, see LICENSE for more details. """ # This source code was originally in flask/exthook.py import os import sys class ExtensionImporter: """This importer redirects imports from this submodule to other locations. This makes it possible to transition from the old flaskext.name to the newer flask_name without people having a hard time. """ def __init__(self, module_choices, wrapper_module): self.module_choices = module_choices self.wrapper_module = wrapper_module self.prefix = wrapper_module + "." self.prefix_cutoff = wrapper_module.count(".") + 1 def __eq__(self, other): return ( self.__class__.__module__ == other.__class__.__module__ and self.__class__.__name__ == other.__class__.__name__ and self.wrapper_module == other.wrapper_module and self.module_choices == other.module_choices ) def __ne__(self, other): return not self.__eq__(other) def install(self): sys.meta_path[:] = [x for x in sys.meta_path if self != x] + [self] def find_module(self, fullname, path=None): if fullname.startswith(self.prefix): return self def load_module(self, fullname): if fullname in sys.modules: return sys.modules[fullname] modname = fullname.split(".", self.prefix_cutoff)[self.prefix_cutoff] for path in self.module_choices: realname = path % modname try: __import__(realname) except ImportError: exc_type, exc_value, tb = sys.exc_info() # since we only establish the entry in sys.modules at the # very end, this seems to be redundant, but if recursive imports # happen we will call into the same import a second time. # On the second invocation we still don't have an entry for # fullname in sys.modules, but we will end up with the same # fake module name and that import will succeed since this # one already has a temporary entry in the modules dict. # Since this one "succeeded" temporarily that second # invocation now will have created a fullname entry in # sys.modules which we have to kill. sys.modules.pop(fullname, None) # If it's an important traceback we reraise it, otherwise # we swallow it and try the next choice. The skipped frame # is the one from __import__ above which we don't care about if self.is_important_traceback(realname, tb): raise exc_value.with_traceback(tb.tb_next) # noqa: B904 continue module = sys.modules[fullname] = sys.modules[realname] if "." 
not in modname: setattr(sys.modules[self.wrapper_module], modname, module) return module raise ImportError(f"No module named {fullname}") def is_important_traceback(self, important_module, tb): """Walks a traceback's frames and checks if any of the frames originated in the given important module. If that is the case then we were able to import the module itself but apparently something went wrong when the module was imported. (Eg: import of an import failed). """ while tb is not None: if self.is_important_frame(important_module, tb): return True tb = tb.tb_next return False def is_important_frame(self, important_module, tb): """Checks a single frame if it's important.""" g = tb.tb_frame.f_globals if "__name__" not in g: return False module_name = g["__name__"] # Python 2.7 Behavior. Modules are cleaned up late so the # name shows up properly here. Success! if module_name == important_module: return True # Some python versions will clean up modules so early that the # module name at that point is no longer set. Try guessing from # the filename then. filename = os.path.abspath(tb.tb_frame.f_code.co_filename) test_string = os.path.sep + important_module.replace(".", os.path.sep) return ( test_string + ".py" in filename or test_string + os.path.sep + "__init__.py" in filename ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1731331021.2831516 yt-4.4.0/yt/fields/0000755000175100001770000000000014714401715013466 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/__init__.py0000644000175100001770000000000014714401662015566 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/angular_momentum.py0000644000175100001770000000742614714401662017424 0ustar00runnerdockerimport numpy as np from yt.utilities.lib.misc_utilities import ( obtain_position_vector, obtain_relative_velocity_vector, ) from .derived_field import ValidateParameter from .field_plugin_registry import register_field_plugin from .vector_operations import create_magnitude_field @register_field_plugin def setup_angular_momentum(registry, ftype="gas", slice_info=None): # Angular momentum defined here needs to be consistent with # _particle_specific_angular_momentum in particle_fields.py unit_system = registry.ds.unit_system def _specific_angular_momentum_x(field, data): xv, yv, zv = obtain_relative_velocity_vector(data) rv = obtain_position_vector(data) units = rv.units rv = np.rollaxis(rv, 0, len(rv.shape)) rv = data.ds.arr(rv, units=units) return rv[..., 1] * zv - rv[..., 2] * yv def _specific_angular_momentum_y(field, data): xv, yv, zv = obtain_relative_velocity_vector(data) rv = obtain_position_vector(data) units = rv.units rv = np.rollaxis(rv, 0, len(rv.shape)) rv = data.ds.arr(rv, units=units) return rv[..., 2] * xv - rv[..., 0] * zv def _specific_angular_momentum_z(field, data): xv, yv, zv = obtain_relative_velocity_vector(data) rv = obtain_position_vector(data) units = rv.units rv = np.rollaxis(rv, 0, len(rv.shape)) rv = data.ds.arr(rv, units=units) return rv[..., 0] * yv - rv[..., 1] * xv registry.add_field( (ftype, "specific_angular_momentum_x"), sampling_type="local", function=_specific_angular_momentum_x, units=unit_system["specific_angular_momentum"], validators=[ValidateParameter("center"), ValidateParameter("bulk_velocity")], ) registry.add_field( (ftype, "specific_angular_momentum_y"), sampling_type="local", 
function=_specific_angular_momentum_y, units=unit_system["specific_angular_momentum"], validators=[ValidateParameter("center"), ValidateParameter("bulk_velocity")], ) registry.add_field( (ftype, "specific_angular_momentum_z"), sampling_type="local", function=_specific_angular_momentum_z, units=unit_system["specific_angular_momentum"], validators=[ValidateParameter("center"), ValidateParameter("bulk_velocity")], ) create_magnitude_field( registry, "specific_angular_momentum", unit_system["specific_angular_momentum"], ftype=ftype, ) def _angular_momentum_x(field, data): return data[ftype, "mass"] * data[ftype, "specific_angular_momentum_x"] registry.add_field( (ftype, "angular_momentum_x"), sampling_type="local", function=_angular_momentum_x, units=unit_system["angular_momentum"], validators=[ValidateParameter("center"), ValidateParameter("bulk_velocity")], ) def _angular_momentum_y(field, data): return data[ftype, "mass"] * data[ftype, "specific_angular_momentum_y"] registry.add_field( (ftype, "angular_momentum_y"), sampling_type="local", function=_angular_momentum_y, units=unit_system["angular_momentum"], validators=[ValidateParameter("center"), ValidateParameter("bulk_velocity")], ) def _angular_momentum_z(field, data): return data[ftype, "mass"] * data[ftype, "specific_angular_momentum_z"] registry.add_field( (ftype, "angular_momentum_z"), sampling_type="local", function=_angular_momentum_z, units=unit_system["angular_momentum"], validators=[ValidateParameter("center"), ValidateParameter("bulk_velocity")], ) create_magnitude_field( registry, "angular_momentum", unit_system["angular_momentum"], ftype=ftype ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/api.py0000644000175100001770000000131614714401662014613 0ustar00runnerdocker# from . import species_fields from . import ( angular_momentum, astro_fields, cosmology_fields, fluid_fields, fluid_vector_fields, geometric_fields, local_fields, magnetic_field, my_plugin_fields, particle_fields, vector_operations, ) from .derived_field import ( DerivedField, ValidateDataField, ValidateGridType, ValidateParameter, ValidateProperty, ValidateSpatial, ) from .field_detector import FieldDetector from .field_info_container import FieldInfoContainer from .field_plugin_registry import field_plugins, register_field_plugin from .local_fields import add_field, derived_field from .xray_emission_fields import add_xray_emissivity_field ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/astro_fields.py0000644000175100001770000001173614714401662016527 0ustar00runnerdockerimport numpy as np from .derived_field import ValidateParameter from .field_plugin_registry import register_field_plugin from .vector_operations import create_magnitude_field @register_field_plugin def setup_astro_fields(registry, ftype="gas", slice_info=None): unit_system = registry.ds.unit_system pc = registry.ds.units.physical_constants # slice_info would be the left, the right, and the factor. # For example, with the old Enzo-ZEUS fields, this would be: # slice(None, -2, None) # slice(1, -1, None) # 1.0 # Otherwise, we default to a centered difference. 
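# (The (sl_left, sl_right, div_fac) triple selects the zones one cell to the left and right along an axis plus the finite-difference divisor, 2 for a centered stencil; none of the purely local fields defined below actually use them, but the plugin keeps the same signature and setup as the stencil-based field plugins.)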
if slice_info is None: sl_left = slice(None, -2, None) sl_right = slice(2, None, None) div_fac = 2.0 else: sl_left, sl_right, div_fac = slice_info def _dynamical_time(field, data): """ sqrt(3 pi / (16 G rho)) """ return np.sqrt(3.0 * np.pi / (16.0 * pc.G * data[ftype, "density"])) registry.add_field( (ftype, "dynamical_time"), sampling_type="local", function=_dynamical_time, units=unit_system["time"], ) def _jeans_mass(field, data): MJ_constant = (((5.0 * pc.kboltz) / (pc.G * pc.mh)) ** 1.5) * ( 3.0 / (4.0 * np.pi) ) ** 0.5 u = ( MJ_constant * ( (data[ftype, "temperature"] / data[ftype, "mean_molecular_weight"]) ** 1.5 ) * (data[ftype, "density"] ** (-0.5)) ) return u registry.add_field( (ftype, "jeans_mass"), sampling_type="local", function=_jeans_mass, units=unit_system["mass"], ) def _emission_measure(field, data): dV = data[ftype, "mass"] / data[ftype, "density"] nenhdV = data[ftype, "H_nuclei_density"] * dV nenhdV *= data[ftype, "El_number_density"] return nenhdV registry.add_field( (ftype, "emission_measure"), sampling_type="local", function=_emission_measure, units=unit_system["number_density"], ) def _mazzotta_weighting(field, data): # Spectroscopic-like weighting field for galaxy clusters # Only useful as a weight_field for temperature, metallicity, velocity ret = data[ftype, "El_number_density"].d ** 2 ret *= data[ftype, "kT"].d ** -0.75 return ret registry.add_field( (ftype, "mazzotta_weighting"), sampling_type="local", function=_mazzotta_weighting, units="", ) def _optical_depth(field, data): return data[ftype, "El_number_density"] * pc.sigma_thompson registry.add_field( (ftype, "optical_depth"), sampling_type="local", function=_optical_depth, units=unit_system["length"] ** -1, ) def _sz_kinetic(field, data): # minus sign is because radial velocity is WRT viewer # See issue #1225 return -data[ftype, "velocity_los"] * data[ftype, "optical_depth"] / pc.clight registry.add_field( (ftype, "sz_kinetic"), sampling_type="local", function=_sz_kinetic, units=unit_system["length"] ** -1, validators=[ValidateParameter("axis", {"axis": [0, 1, 2]})], ) def _szy(field, data): kT = data[ftype, "kT"] / (pc.me * pc.clight * pc.clight) return data[ftype, "optical_depth"] * kT registry.add_field( (ftype, "szy"), sampling_type="local", function=_szy, units=unit_system["length"] ** -1, ) def _entropy(field, data): mgammam1 = -2.0 / 3.0 tr = data[ftype, "kT"] * data[ftype, "El_number_density"] ** mgammam1 return data.apply_units(tr, field.units) registry.add_field( (ftype, "entropy"), sampling_type="local", units="keV*cm**2", function=_entropy ) def _lorentz_factor(field, data): b2 = data[ftype, "velocity_magnitude"].to_value("c") b2 *= b2 return 1.0 / np.sqrt(1.0 - b2) registry.add_field( (ftype, "lorentz_factor"), sampling_type="local", units="", function=_lorentz_factor, ) # 4-velocity spatial components def four_velocity_xyz(u): def _four_velocity(field, data): return data["gas", f"velocity_{u}"] * data["gas", "lorentz_factor"] return _four_velocity for u in registry.ds.coordinates.axis_order: registry.add_field( ("gas", f"four_velocity_{u}"), sampling_type="local", function=four_velocity_xyz(u), units=unit_system["velocity"], ) # 4-velocity t-component def _four_velocity_t(field, data): return data["gas", "lorentz_factor"] * pc.clight registry.add_field( ("gas", "four_velocity_t"), sampling_type="local", function=_four_velocity_t, units=unit_system["velocity"], ) create_magnitude_field( registry, "four_velocity", unit_system["velocity"], ftype=ftype, ) 
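# A minimal usage sketch for the fields registered above (an illustration,
# not part of the module itself): the dataset path below is hypothetical, and
# any dataset whose frontend provides ("gas", "density") and
# ("gas", "temperature") should work. Guarded so nothing runs at import time.
if __name__ == "__main__":
    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # hypothetical path
    ad = ds.all_data()
    # dynamical_time = sqrt(3 pi / (16 G rho)), as defined in setup_astro_fields
    print(ad["gas", "dynamical_time"].to("Myr"))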
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/astro_simulations.py0000644000175100001770000000316114714401662017621 0ustar00runnerdockerfrom .domain_context import DomainContext # Here's how this all works: # # 1. We have a mapping here that defines fields we might expect to find in an # astrophysical simulation to units (not necessarily definitive) that we # may want to use them in. # 2. Simulations and frontends will register aliases from fields (which can # utilize units) to the fields enumerated here. # 3. This plugin can call additional plugins on the registry. class AstroSimulation(DomainContext): # This is an immutable of immutables. Note that we do not specify the # fluid type here, although in most cases we expect it to be "gas". _known_fluid_fields = ( ("density", "g/cm**3"), ("number_density", "1/cm**3"), ("pressure", "dyne / cm**2"), ("specific_thermal_energy", "erg / g"), ("temperature", "K"), ("velocity_x", "cm / s"), ("velocity_y", "cm / s"), ("velocity_z", "cm / s"), ("magnetic_field_x", "gauss"), ("magnetic_field_y", "gauss"), ("magnetic_field_z", "gauss"), ("radiation_acceleration_x", "cm / s**2"), ("radiation_acceleration_y", "cm / s**2"), ("radiation_acceleration_z", "cm / s**2"), ) # This set of fields can be applied to any particle type. _known_particle_fields = ( ("particle_position_x", "cm"), ("particle_position_y", "cm"), ("particle_position_z", "cm"), ("particle_velocity_x", "cm / s"), ("particle_velocity_y", "cm / s"), ("particle_velocity_z", "cm / s"), ("particle_mass", "g"), ("particle_index", ""), ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/cosmology_fields.py0000644000175100001770000001304114714401662017401 0ustar00runnerdockerfrom .derived_field import ValidateParameter from .field_exceptions import NeedsConfiguration, NeedsParameter from .field_plugin_registry import register_field_plugin @register_field_plugin def setup_cosmology_fields(registry, ftype="gas", slice_info=None): unit_system = registry.ds.unit_system # slice_info would be the left, the right, and the factor. # For example, with the old Enzo-ZEUS fields, this would be: # slice(None, -2, None) # slice(1, -1, None) # 1.0 # Otherwise, we default to a centered difference. if slice_info is None: sl_left = slice(None, -2, None) sl_right = slice(2, None, None) div_fac = 2.0 else: sl_left, sl_right, div_fac = slice_info def _matter_density(field, data): return data[ftype, "density"] + data[ftype, "dark_matter_density"] registry.add_field( (ftype, "matter_density"), sampling_type="local", function=_matter_density, units=unit_system["density"], ) def _matter_mass(field, data): return data[ftype, "matter_density"] * data["index", "cell_volume"] registry.add_field( (ftype, "matter_mass"), sampling_type="local", function=_matter_mass, units=unit_system["mass"], ) # rho_total / rho_cr(z). 
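# (i.e. the gas plus dark matter density divided by the critical density evaluated at the dataset's current redshift; dimensionless by construction)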
def _overdensity(field, data): if ( not hasattr(data.ds, "cosmological_simulation") or not data.ds.cosmological_simulation ): raise NeedsConfiguration("cosmological_simulation", 1) co = data.ds.cosmology return data[ftype, "matter_density"] / co.critical_density( data.ds.current_redshift ) registry.add_field( (ftype, "overdensity"), sampling_type="local", function=_overdensity, units="" ) # rho_baryon / <rho_baryon> def _baryon_overdensity(field, data): if ( not hasattr(data.ds, "cosmological_simulation") or not data.ds.cosmological_simulation ): raise NeedsConfiguration("cosmological_simulation", 1) omega_baryon = data.get_field_parameter("omega_baryon") if omega_baryon is None: raise NeedsParameter("omega_baryon") co = data.ds.cosmology # critical_density(z) ~ omega_lambda + omega_matter * (1 + z)^3 # mean matter density(z) ~ omega_matter * (1 + z)^3 return ( data[ftype, "density"] / omega_baryon / co.critical_density(0.0) / (1.0 + data.ds.current_redshift) ** 3 ) registry.add_field( (ftype, "baryon_overdensity"), sampling_type="local", function=_baryon_overdensity, units="", validators=[ValidateParameter("omega_baryon")], ) # rho_matter / <rho_matter> def _matter_overdensity(field, data): if ( not hasattr(data.ds, "cosmological_simulation") or not data.ds.cosmological_simulation ): raise NeedsConfiguration("cosmological_simulation", 1) co = data.ds.cosmology # critical_density(z) ~ omega_lambda + omega_matter * (1 + z)^3 # mean density(z) ~ omega_matter * (1 + z)^3 return ( data[ftype, "matter_density"] / data.ds.omega_matter / co.critical_density(0.0) / (1.0 + data.ds.current_redshift) ** 3 ) registry.add_field( (ftype, "matter_overdensity"), sampling_type="local", function=_matter_overdensity, units="", ) # r / r_vir def _virial_radius_fraction(field, data): virial_radius = data.get_field_parameter("virial_radius") if virial_radius == 0.0: ret = 0.0 else: ret = data["index", "radius"] / virial_radius return ret registry.add_field( ("index", "virial_radius_fraction"), sampling_type="local", function=_virial_radius_fraction, validators=[ValidateParameter("virial_radius")], units="", ) # Weak lensing convergence. # Eqn 4 of Metzler, White, & Loken (2001, ApJ, 547, 560). # This needs to be checked for accuracy. def _weak_lensing_convergence(field, data): if ( not hasattr(data.ds, "cosmological_simulation") or not data.ds.cosmological_simulation ): raise NeedsConfiguration("cosmological_simulation", 1) co = data.ds.cosmology pc = data.ds.units.physical_constants observer_redshift = data.get_field_parameter("observer_redshift") source_redshift = data.get_field_parameter("source_redshift") # observer to lens dl = co.angular_diameter_distance(observer_redshift, data.ds.current_redshift) # observer to source ds = co.angular_diameter_distance(observer_redshift, source_redshift) # lens to source dls = co.angular_diameter_distance(data.ds.current_redshift, source_redshift) # removed the factor of 1 / a to account for the fact that we are projecting # with a proper distance. 
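# Eqn 4 integrand: 1.5 * (H0 / c)**2 * (D_l * D_ls / D_s) * matter_overdensity, returned per unit proper length (1/cm) so that projecting it along the line of sight with a proper-distance path length yields the dimensionless convergence.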
return ( 1.5 * (co.hubble_constant / pc.clight) ** 2 * (dl * dls / ds) * data[ftype, "matter_overdensity"] ).in_units("1/cm") registry.add_field( (ftype, "weak_lensing_convergence"), sampling_type="local", function=_weak_lensing_convergence, units=unit_system["length"] ** -1, validators=[ ValidateParameter("observer_redshift"), ValidateParameter("source_redshift"), ], ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/derived_field.py0000644000175100001770000005065514714401662016641 0ustar00runnerdockerimport contextlib import inspect import re from collections.abc import Iterable from typing import Optional from more_itertools import always_iterable import yt.units.dimensions as ytdims from yt._maintenance.deprecation import issue_deprecation_warning from yt._typing import FieldKey from yt.funcs import iter_fields, validate_field_key from yt.units.unit_object import Unit # type: ignore from yt.utilities.exceptions import YTFieldNotFound from yt.utilities.logger import ytLogger as mylog from yt.visualization._commons import _get_units_label from .field_detector import FieldDetector from .field_exceptions import ( FieldUnitsError, NeedsDataField, NeedsGridType, NeedsOriginalGrid, NeedsParameter, NeedsProperty, ) def TranslationFunc(field_name): def _TranslationFunc(field, data): # We do a bunch of in-place modifications, so we will copy this. return data[field_name].copy() _TranslationFunc.alias_name = field_name return _TranslationFunc def NullFunc(field, data): raise YTFieldNotFound(field.name) def DeprecatedFieldFunc(ret_field, func, since, removal): def _DeprecatedFieldFunc(field, data): # Only log a warning if we've already done # field detection if data.ds.fields_detected: args = [field.name, since] msg = "The Derived Field %s is deprecated as of yt v%s " if removal is not None: msg += "and will be removed in yt v%s " args.append(removal) if ret_field != field.name: msg += ", use %s instead" args.append(ret_field) mylog.warning(msg, *args) return func(field, data) return _DeprecatedFieldFunc class DerivedField: """ This is the base class used to describe a cell-by-cell derived field. Parameters ---------- name : str is the name of the field. function : callable A function handle that defines the field. Should accept arguments (field, data) units : str A plain text string encoding the unit, or a query to a unit system of a dataset. Powers must be in Python syntax (** instead of ^). If set to 'auto' or None (default), units will be inferred from the return value of the field function. take_log : bool Describes whether the field should be logged validators : list A list of :class:`FieldValidator` objects sampling_type : string, default = "cell" How is the field sampled? This can be one of the following options at present: "cell" (cell-centered), "discrete" (or "particle") for discretely sampled data. vector_field : bool Describes the dimensionality of the field. Currently unused. display_field : bool Governs its appearance in the dropdowns in Reason not_in_all : bool Used for baryon fields from the data that are not in all the grids display_name : str A name used in the plots output_units : str For fields that exist on disk, which we may want to convert to other fields or that get aliased to themselves, we can specify a different desired output unit than the unit found on disk. dimensions : str or object from yt.units.dimensions The dimensions of the field, only used for error checking with units='auto'. 
nodal_flag : array-like with three components This describes how the field is centered within a cell. If nodal_flag is [0, 0, 0], then the field is cell-centered. If any of the components of nodal_flag are 1, then the field is nodal in that direction, meaning it is defined at the lo and hi sides of the cell rather than at the center. For example, a field with nodal_flag = [1, 0, 0] would be defined at the middle of the 2 x-faces of each cell. nodal_flag = [0, 1, 1] would mean that the field is defined at the centers of the 4 edges that are normal to the x axis, while nodal_flag = [1, 1, 1] would be defined at the 8 cell corners. """ _inherited_particle_filter = False def __init__( self, name: FieldKey, sampling_type, function, units: str | bytes | Unit | None = None, take_log=True, validators=None, vector_field=False, display_field=True, not_in_all=False, display_name=None, output_units=None, dimensions=None, ds=None, nodal_flag=None, *, alias: Optional["DerivedField"] = None, ): validate_field_key(name) self.name = name self.take_log = take_log self.display_name = display_name self.not_in_all = not_in_all self.display_field = display_field self.sampling_type = sampling_type self.vector_field = vector_field self.ds = ds if self.ds is not None: self._ionization_label_format = self.ds._ionization_label_format else: self._ionization_label_format = "roman_numeral" if nodal_flag is None: self.nodal_flag = [0, 0, 0] else: self.nodal_flag = nodal_flag self._function = function self.validators = list(always_iterable(validators)) # handle units self.units: str | bytes | Unit | None if units in (None, "auto"): self.units = None elif isinstance(units, str): self.units = units elif isinstance(units, Unit): self.units = str(units) elif isinstance(units, bytes): self.units = units.decode("utf-8") else: raise FieldUnitsError( f"Cannot handle units {units!r} (type {type(units)}). " "Please provide a string or Unit object." ) if output_units is None: output_units = self.units self.output_units = output_units if isinstance(dimensions, str): dimensions = getattr(ytdims, dimensions) self.dimensions = dimensions if alias is None: self._shared_aliases_list = [self] else: self._shared_aliases_list = alias._shared_aliases_list self._shared_aliases_list.append(self) def _copy_def(self): dd = {} dd["name"] = self.name dd["units"] = self.units dd["take_log"] = self.take_log dd["validators"] = list(self.validators) dd["sampling_type"] = self.sampling_type dd["vector_field"] = self.vector_field dd["display_field"] = True dd["not_in_all"] = self.not_in_all dd["display_name"] = self.display_name return dd @property def is_sph_field(self): if self.sampling_type == "cell": return False is_sph_field = False if self.is_alias: name = self.alias_name else: name = self.name if hasattr(self.ds, "_sph_ptypes"): is_sph_field |= name[0] in (self.ds._sph_ptypes + ("gas",)) return is_sph_field @property def local_sampling(self): return self.sampling_type in ("discrete", "particle", "local") def get_units(self): if self.ds is not None: u = Unit(self.units, registry=self.ds.unit_registry) else: u = Unit(self.units) return u.latex_representation() def get_projected_units(self): if self.ds is not None: u = Unit(self.units, registry=self.ds.unit_registry) else: u = Unit(self.units) return (u * Unit("cm")).latex_representation() def check_available(self, data): """ This raises an exception of the appropriate type if the set of validation mechanisms are not met, and otherwise returns True. 
""" for validator in self.validators: validator(data) # If we don't get an exception, we're good to go return True def get_dependencies(self, *args, **kwargs): """ This returns a list of names of fields that this field depends on. """ e = FieldDetector(*args, **kwargs) e[self.name] return e def _get_needed_parameters(self, fd): params = [] values = [] permute_params = {} vals = [v for v in self.validators if isinstance(v, ValidateParameter)] for val in vals: if val.parameter_values is not None: permute_params.update(val.parameter_values) else: params.extend(val.parameters) values.extend([fd.get_field_parameter(fp) for fp in val.parameters]) return dict(zip(params, values, strict=True)), permute_params _unit_registry = None @contextlib.contextmanager def unit_registry(self, data): old_registry = self._unit_registry if hasattr(data, "unit_registry"): ur = data.unit_registry elif hasattr(data, "ds"): ur = data.ds.unit_registry else: ur = None self._unit_registry = ur yield self._unit_registry = old_registry def __call__(self, data): """Return the value of the field in a given *data* object.""" self.check_available(data) original_fields = data.keys() # Copy if self._function is NullFunc: raise RuntimeError( "Something has gone terribly wrong, _function is NullFunc " + f"for {self.name}" ) with self.unit_registry(data): dd = self._function(self, data) for field_name in data.keys(): if field_name not in original_fields: del data[field_name] return dd def get_source(self): """ Return a string containing the source of the function (if possible.) """ return inspect.getsource(self._function) def get_label(self, projected=False): """ Return a data label for the given field, including units. """ name = self.name[1] if self.display_name is not None: name = self.display_name # Start with the field name data_label = rf"$\rm{{{name}}}" # Grab the correct units if projected: raise NotImplementedError else: if self.ds is not None: units = Unit(self.units, registry=self.ds.unit_registry) else: units = Unit(self.units) # Add unit label if not units.is_dimensionless: data_label += _get_units_label(units.latex_representation()).strip("$") data_label += r"$" return data_label @property def alias_field(self) -> bool: issue_deprecation_warning( "DerivedField.alias_field is a deprecated equivalent to DerivedField.is_alias ", stacklevel=3, since="4.1", ) return self.is_alias @property def is_alias(self) -> bool: return self._shared_aliases_list.index(self) > 0 def is_alias_to(self, other: "DerivedField") -> bool: return self._shared_aliases_list is other._shared_aliases_list @property def alias_name(self) -> FieldKey | None: if self.is_alias: return self._shared_aliases_list[0].name return None def __repr__(self): if self._function is NullFunc: s = "On-Disk Field " elif self.is_alias: s = f"Alias Field for {self.alias_name!r} " else: s = "Derived Field " s += f"{self.name!r}: (units: {self.units!r}" if self.display_name is not None: s += f", display_name: {self.display_name!r}" if self.sampling_type == "particle": s += ", particle field" s += ")" return s def _is_ion(self): p = re.compile("_p[0-9]+_") result = False if p.search(self.name[1]) is not None: result = True return result def _ion_to_label(self): # check to see if the output format has changed if self.ds is not None: self._ionization_label_format = self.ds._ionization_label_format pnum2rom = { "0": "I", "1": "II", "2": "III", "3": "IV", "4": "V", "5": "VI", "6": "VII", "7": "VIII", "8": "IX", "9": "X", "10": "XI", "11": "XII", "12": "XIII", "13": "XIV", 
"14": "XV", "15": "XVI", "16": "XVII", "17": "XVIII", "18": "XIX", "19": "XX", "20": "XXI", "21": "XXII", "22": "XXIII", "23": "XXIV", "24": "XXV", "25": "XXVI", "26": "XXVII", "27": "XXVIII", "28": "XXIX", "29": "XXX", } # first look for charge to decide if it is an ion p = re.compile("_p[0-9]+_") m = p.search(self.name[1]) if m is not None: # Find the ionization state pstr = m.string[m.start() + 1 : m.end() - 1] segments = self.name[1].split("_") # find the ionization index for i, s in enumerate(segments): if s == pstr: ipstr = i for i, s in enumerate(segments): # If its the species we don't want to change the capitalization if i == ipstr - 1: continue segments[i] = s.capitalize() species = segments[ipstr - 1] # If there is a number in the species part of the label # that indicates part of a molecule symbols = [] for symb in species: # can't just use underscore b/c gets replaced later with space if symb.isdigit(): symbols.append("latexsub{" + symb + "}") else: symbols.append(symb) species_label = "".join(symbols) # Use roman numerals for ionization if self._ionization_label_format == "roman_numeral": roman = pnum2rom[pstr[1:]] label = ( species_label + r"\ " + roman + r"\ " + r"\ ".join(segments[ipstr + 1 :]) ) # use +/- for ionization else: sign = "+" * int(pstr[1:]) label = ( "{" + species_label + "}" + "^{" + sign + "}" + r"\ " + r"\ ".join(segments[ipstr + 1 :]) ) else: label = self.name[1] return label def get_latex_display_name(self): label = self.display_name if label is None: if self._is_ion(): fname = self._ion_to_label() label = r"$\rm{" + fname.replace("_", r"\ ") + r"}$" label = label.replace("latexsub", "_") else: label = r"$\rm{" + self.name[1].replace("_", r"\ ").title() + r"}$" elif label.find("$") == -1: label = label.replace(" ", r"\ ") label = r"$\rm{" + label + r"}$" return label def __copy__(self): # a shallow copy doesn't copy the _shared_alias_list attr # This method is implemented in support to ParticleFilter.wrap_func return type(self)( name=self.name, sampling_type=self.sampling_type, function=self._function, units=self.units, take_log=self.take_log, validators=self.validators, vector_field=self.vector_field, display_field=self.display_field, not_in_all=self.not_in_all, display_name=self.display_name, output_units=self.output_units, dimensions=self.dimensions, ds=self.ds, nodal_flag=self.nodal_flag, ) class FieldValidator: """ Base class for FieldValidator objects. Available subclasses include: """ def __init_subclass__(cls, **kwargs): # add the new subclass to the list of subclasses in the docstring class_str = f":class:`{cls.__name__}`" if ":class:" in FieldValidator.__doc__: class_str = ", " + class_str FieldValidator.__doc__ += class_str class ValidateParameter(FieldValidator): """ A :class:`FieldValidator` that ensures the dataset has a given parameter. Parameters ---------- parameters: str, iterable[str] a single parameter or list of parameters to require parameter_values: dict If *parameter_values* is supplied, this dict should map from field parameter to a value or list of values. It will ensure that the field is available for all permutations of the field parameter. 
""" def __init__( self, parameters: str | Iterable[str], parameter_values: dict | None = None, ): FieldValidator.__init__(self) self.parameters = list(always_iterable(parameters)) self.parameter_values = parameter_values def __call__(self, data): doesnt_have = [] for p in self.parameters: if not data.has_field_parameter(p): doesnt_have.append(p) if len(doesnt_have) > 0: raise NeedsParameter(doesnt_have) return True class ValidateDataField(FieldValidator): """ A :class:`FieldValidator` that ensures the output file has a given data field stored in it. Parameters ---------- field: str, tuple[str, str], or any iterable of the previous types. the field or fields to require """ def __init__(self, field): FieldValidator.__init__(self) self.fields = list(iter_fields(field)) def __call__(self, data): doesnt_have = [] if isinstance(data, FieldDetector): return True for f in self.fields: if f not in data.index.field_list: doesnt_have.append(f) if len(doesnt_have) > 0: raise NeedsDataField(doesnt_have) return True class ValidateProperty(FieldValidator): """ A :class:`FieldValidator` that ensures the data object has a given python attribute. Parameters ---------- prop: str, iterable[str] the required property or properties to require """ def __init__(self, prop: str | Iterable[str]): FieldValidator.__init__(self) self.prop = list(always_iterable(prop)) def __call__(self, data): doesnt_have = [p for p in self.prop if not hasattr(data, p)] if len(doesnt_have) > 0: raise NeedsProperty(doesnt_have) return True class ValidateSpatial(FieldValidator): """ A :class:`FieldValidator` that ensures the data handed to the field is of spatial nature -- that is to say, 3-D. Parameters ---------- ghost_zones: int If supplied, will validate that the number of ghost zones required for the field is <= the available ghost zones. Default is 0. fields: Optional str, tuple[str, str], or any iterable of the previous types. The field or fields to validate. """ def __init__(self, ghost_zones: int | None = 0, fields=None): FieldValidator.__init__(self) self.ghost_zones = ghost_zones self.fields = fields def __call__(self, data): # When we say spatial information, we really mean # that it has a three-dimensional data structure if not getattr(data, "_spatial", False): raise NeedsGridType(self.ghost_zones, self.fields) if self.ghost_zones <= data._num_ghost_zones: return True raise NeedsGridType(self.ghost_zones, self.fields) class ValidateGridType(FieldValidator): """ A :class:`FieldValidator` that ensures the data handed to the field is an actual grid patch, not a covering grid of any kind. Does not accept parameters. """ def __init__(self): FieldValidator.__init__(self) def __call__(self, data): # We need to make sure that it's an actual AMR grid if isinstance(data, FieldDetector): return True if getattr(data, "_type_name", None) == "grid": return True raise NeedsOriginalGrid() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/domain_context.py0000644000175100001770000000063514714401662017060 0ustar00runnerdockerimport abc from yt._typing import FieldKey domain_context_registry = {} class DomainContext(abc.ABC): class __metaclass__(type): def __init__(cls, name, b, d): type.__init__(cls, name, b, d) domain_context_registry[name] = cls _known_fluid_fields: tuple[FieldKey, ...] _known_particle_fields: tuple[FieldKey, ...] 
def __init__(self, ds): self.ds = ds ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/field_aliases.py0000644000175100001770000001572514714401662016637 0ustar00runnerdocker_field_name_aliases = [ ("GridLevel", "grid_level"), ("GridIndices", "grid_indices"), ("OnesOverDx", "ones_over_dx"), ("Ones", "ones"), # ("CellsPerBin", "cells_per_bin"), ("SoundSpeed", "sound_speed"), ("RadialMachNumber", "radial_mach_number"), ("MachNumber", "mach_number"), ("CourantTimeStep", "courant_time_step"), # ("ParticleVelocityMagnitude", "particle_velocity_magnitude"), ("VelocityMagnitude", "velocity_magnitude"), ("TangentialOverVelocityMagnitude", "tangential_over_velocity_magnitude"), ("Pressure", "pressure"), ("Entropy", "entropy"), ("sph_r", "spherical_r"), ("sph_theta", "spherical_theta"), ("sph_phi", "spherical_phi"), ("cyl_R", "cylindrical_radius"), ("cyl_z", "cylindrical_z"), ("cyl_theta", "cylindrical_theta"), ("cyl_RadialVelocity", "cylindrical_radial_velocity"), ("cyl_RadialVelocityABS", "cylindrical_radial_velocity_absolute"), ("cyl_TangentialVelocity", "velocity_cylindrical_theta"), ("cyl_TangentialVelocityABS", "velocity_cylindrical_theta"), ("DynamicalTime", "dynamical_time"), ("JeansMassMsun", "jeans_mass"), ("CellMass", "cell_mass"), ("TotalMass", "total_mass"), ("StarMassMsun", "star_mass"), ("Matter_Density", "matter_density"), ("ComovingDensity", "comoving_density"), ("Overdensity", "overdensity"), ("DensityPerturbation", "density_perturbation"), ("Baryon_Overdensity", "baryon_overdensity"), ("WeakLensingConvergence", "weak_lensing_convergence"), ("CellVolume", "cell_volume"), ("ChandraEmissivity", "chandra_emissivity"), ("XRayEmissivity", "xray_emissivity"), ("SZKinetic", "sz_kinetic"), ("SZY", "szy"), ("AveragedDensity", "averaged_density"), ("DivV", "div_v"), ("AbsDivV", "div_v_absolute"), ("Contours", "contours"), ("tempContours", "temp_contours"), ("SpecificAngularMomentumX", "specific_angular_momentum_x"), ("SpecificAngularMomentumY", "specific_angular_momentum_y"), ("SpecificAngularMomentumZ", "specific_angular_momentum_z"), ("AngularMomentumX", "angular_momentum_x"), ("AngularMomentumY", "angular_momentum_y"), ("AngularMomentumZ", "angular_momentum_z"), # ("ParticleSpecificAngularMomentumX", "particle_specific_angular_momentum_x"), # ("ParticleSpecificAngularMomentumY", "particle_specific_angular_momentum_y"), # ("ParticleSpecificAngularMomentumZ", "particle_specific_angular_momentum_z"), # ("ParticleAngularMomentumX", "particle_angular_momentum_x"), # ("ParticleAngularMomentumY", "particle_angular_momentum_y"), # ("ParticleAngularMomentumZ", "particle_angular_momentum_z"), # ("ParticleRadius", "particle_radius"), ("Radius", "radius"), ("RadialVelocity", "radial_velocity"), ("RadialVelocityABS", "radial_velocity_absolute"), ("TangentialVelocity", "tangential_velocity"), ("CuttingPlaneVelocityX", "cutting_plane_velocity_x"), ("CuttingPlaneVelocityY", "cutting_plane_velocity_y"), ("CuttingPlaneBX", "cutting_plane_magnetic_field_x"), ("CuttingPlaneBy", "cutting_plane_magnetic_field_y"), ("MeanMolecularWeight", "mean_molecular_weight"), ("particle_density", "particle_density"), ("ThermalEnergy", "specific_thermal_energy"), ("TotalEnergy", "specific_total_energy"), ("MagneticEnergy", "magnetic_energy_density"), ("GasEnergy", "specific_thermal_energy"), ("Gas_Energy", "specific_thermal_energy"), ("BMagnitude", "b_magnitude"), ("PlasmaBeta", "plasma_beta"), ("MagneticPressure", "magnetic_pressure"), ("BPoloidal", 
"b_poloidal"), ("BToroidal", "b_toroidal"), ("BRadial", "b_radial"), ("VorticitySquared", "vorticity_squared"), ("gradPressureX", "grad_pressure_x"), ("gradPressureY", "grad_pressure_y"), ("gradPressureZ", "grad_pressure_z"), ("gradPressureMagnitude", "grad_pressure_magnitude"), ("gradDensityX", "grad_density_x"), ("gradDensityY", "grad_density_y"), ("gradDensityZ", "grad_density_z"), ("gradDensityMagnitude", "grad_density_magnitude"), ("BaroclinicVorticityX", "baroclinic_vorticity_x"), ("BaroclinicVorticityY", "baroclinic_vorticity_y"), ("BaroclinicVorticityZ", "baroclinic_vorticity_z"), ("BaroclinicVorticityMagnitude", "baroclinic_vorticity_magnitude"), ("VorticityX", "vorticity_x"), ("VorticityY", "vorticity_y"), ("VorticityZ", "vorticity_z"), ("VorticityMagnitude", "vorticity_magnitude"), ("VorticityStretchingX", "vorticity_stretching_x"), ("VorticityStretchingY", "vorticity_stretching_y"), ("VorticityStretchingZ", "vorticity_stretching_z"), ("VorticityStretchingMagnitude", "vorticity_stretching_magnitude"), ("VorticityGrowthX", "vorticity_growth_x"), ("VorticityGrowthY", "vorticity_growth_y"), ("VorticityGrowthZ", "vorticity_growth_z"), ("VorticityGrowthMagnitude", "vorticity_growth_magnitude"), ("VorticityGrowthMagnitudeABS", "vorticity_growth_magnitude_absolute"), ("VorticityGrowthTimescale", "vorticity_growth_timescale"), ("VorticityRadPressureX", "vorticity_radiation_pressure_x"), ("VorticityRadPressureY", "vorticity_radiation_pressure_y"), ("VorticityRadPressureZ", "vorticity_radiation_pressure_z"), ("VorticityRadPressureMagnitude", "vorticity_radiation_pressure_magnitude"), ("VorticityRPGrowthX", "vorticity_radiation_pressure_growth_x"), ("VorticityRPGrowthY", "vorticity_radiation_pressure_growth_y"), ("VorticityRPGrowthZ", "vorticity_radiation_pressure_growth_z"), ("VorticityRPGrowthMagnitude", "vorticity_radiation_pressure_growth_magnitude"), ("VorticityRPGrowthTimescale", "vorticity_radiation_pressure_growth_timescale"), ("DiskAngle", "theta"), ("Height", "height"), ("HI density", "H_density"), ("HII density", "H_p1_density"), ("HeI density", "He_density"), ("HeII density", "He_p1_density"), ("HeIII density", "He_p2_density"), ] _field_units_aliases = [ ("cyl_RCode", "code_length"), ("HeightAU", "au"), ("cyl_RadialVelocityKMS", "km/s"), ("cyl_RadialVelocityKMSABS", "km/s"), ("cyl_TangentialVelocityKMS", "km/s"), ("cyl_TangentialVelocityKMSABS", "km/s"), ("CellMassMsun", "msun"), ("CellMassCode", "code_mass"), ("TotalMassMsun", "msun"), ("CellVolumeCode", "code_length"), ("CellVolumeMpc", "Mpc**3"), ("ParticleSpecificAngularMomentumXKMSMPC", "km/s/Mpc"), ("ParticleSpecificAngularMomentumYKMSMPC", "km/s/Mpc"), ("ParticleSpecificAngularMomentumZKMSMPC", "km/s/Mpc"), ("RadiusMpc", "Mpc"), ("ParticleRadiusMpc", "Mpc"), ("ParticleRadiuskpc", "kpc"), ("Radiuskpc", "kpc"), ("ParticleRadiuskpch", "kpc"), ("Radiuskpch", "kpc"), ("ParticleRadiuspc", "pc"), ("Radiuspc", "pc"), ("ParticleRadiusAU", "au"), ("RadiusAU", "au"), ("ParticleRadiusCode", "code_length"), ("RadiusCode", "code_length"), ("RadialVelocityKMS", "km/s"), ("RadialVelocityKMSABS", "km/s"), ("JeansMassMsun", "msun"), ] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/field_detector.py0000644000175100001770000002476214714401662017030 0ustar00runnerdockerfrom collections import defaultdict import numpy as np from yt.units.yt_array import YTArray from yt.utilities.io_handler import io_registry from .field_exceptions import NeedsGridType fp_units = { 
"bulk_velocity": "cm/s", "center": "cm", "normal": "", "cp_x_vec": "", "cp_y_vec": "", "cp_z_vec": "", "x_hat": "", "y_hat": "", "z_hat": "", "omega_baryon": "", "virial_radius": "cm", "observer_redshift": "", "source_redshift": "", } class FieldDetector(defaultdict): Level = 1 NumberOfParticles = 1 _read_exception = None _id_offset = 0 domain_id = 0 def __init__(self, nd=16, ds=None, flat=False, field_parameters=None): self.nd = nd self.flat = flat self._spatial = not flat self.ActiveDimensions = [nd, nd, nd] self.shape = tuple(self.ActiveDimensions) self.size = np.prod(self.ActiveDimensions) self.LeftEdge = [0.0, 0.0, 0.0] self.RightEdge = [1.0, 1.0, 1.0] self.dds = np.ones(3, "float64") if field_parameters is None: self.field_parameters = {} else: self.field_parameters = field_parameters class fake_dataset(defaultdict): pass if ds is None: # required attrs ds = fake_dataset(lambda: 1) ds["Massarr"] = np.ones(6) ds.current_redshift = 0.0 ds.omega_lambda = 0.0 ds.omega_matter = 0.0 ds.cosmological_simulation = 0 ds.gamma = 5.0 / 3.0 ds.hubble_constant = 0.7 ds.domain_left_edge = np.zeros(3, "float64") ds.domain_right_edge = np.ones(3, "float64") ds.dimensionality = 3 ds.periodicity = (True, True, True) self.ds = ds class fake_index: class fake_io: def _read_data_set(io_self, data, field): # noqa: B902 return self._read_data(field) _read_exception = RuntimeError io = fake_io() def get_smallest_dx(self): return 1.0 self.index = fake_index() self.requested = [] self.requested_parameters = [] rng = np.random.default_rng() if not self.flat: defaultdict.__init__( self, lambda: np.ones((nd, nd, nd), dtype="float64") + 1e-4 * rng.random((nd, nd, nd)), ) else: defaultdict.__init__( self, lambda: np.ones((nd * nd * nd), dtype="float64") + 1e-4 * rng.random(nd * nd * nd), ) def _reshape_vals(self, arr): if not self._spatial: return arr if len(arr.shape) == 3: return arr return arr.reshape(self.ActiveDimensions, order="C") def __missing__(self, item: tuple[str, str] | str): from yt.fields.derived_field import NullFunc if not isinstance(item, tuple): field = ("unknown", item) else: field = item finfo = self.ds._get_field_info(field) params, permute_params = finfo._get_needed_parameters(self) self.field_parameters.update(params) # For those cases where we are guessing the field type, we will # need to re-update -- otherwise, our item will always not have the # field type. This can lead to, for instance, "unknown" particle # types not getting correctly identified. # Note that the *only* way this works is if we also fix our field # dependencies during checking. Bug #627 talks about this. 
_item: tuple[str, str] = finfo.name if finfo is not None and finfo._function is not NullFunc: try: for param, param_v in permute_params.items(): for v in param_v: self.field_parameters[param] = v vv = finfo(self) if not permute_params: vv = finfo(self) except NeedsGridType as exc: ngz = exc.ghost_zones nfd = FieldDetector( self.nd + ngz * 2, ds=self.ds, field_parameters=self.field_parameters.copy(), ) nfd._num_ghost_zones = ngz vv = finfo(nfd) if ngz > 0: vv = vv[ngz:-ngz, ngz:-ngz, ngz:-ngz] for i in nfd.requested: if i not in self.requested: self.requested.append(i) for i in nfd.requested_parameters: if i not in self.requested_parameters: self.requested_parameters.append(i) if vv is not None: if not self.flat: self[_item] = vv else: self[_item] = vv.ravel() return self[_item] elif finfo is not None and finfo.sampling_type == "particle": io = io_registry[self.ds.dataset_type](self.ds) if hasattr(io, "_vector_fields") and ( _item in io._vector_fields or _item[1] in io._vector_fields ): try: cols = io._vector_fields[_item] except KeyError: cols = io._vector_fields[_item[1]] # A vector self[_item] = YTArray( np.ones((self.NumberOfParticles, cols)), finfo.units, registry=self.ds.unit_registry, ) else: # Not a vector self[_item] = YTArray( np.ones(self.NumberOfParticles), finfo.units, registry=self.ds.unit_registry, ) if _item == ("STAR", "BIRTH_TIME"): # hack for the artio frontend so we pass valid times to # the artio functions for calculating physical times # from internal times self[_item] *= -0.1 self.requested.append(_item) return self[_item] self.requested.append(_item) if _item not in self: self[_item] = self._read_data(_item) return self[_item] def _debug(self): # We allow this to pass through. return def deposit(self, *args, **kwargs): from yt.data_objects.static_output import ParticleDataset from yt.frontends.stream.data_structures import StreamParticlesDataset if kwargs["method"] == "mesh_id": if isinstance(self.ds, (StreamParticlesDataset, ParticleDataset)): raise ValueError rng = np.random.default_rng() return rng.random((self.nd, self.nd, self.nd)) def mesh_sampling_particle_field(self, *args, **kwargs): pos = args[0] npart = len(pos) rng = np.random.default_rng() return rng.random(npart) def smooth(self, *args, **kwargs): rng = np.random.default_rng() tr = rng.random((self.nd, self.nd, self.nd)) if kwargs["method"] == "volume_weighted": return [tr] return tr def particle_operation(self, *args, **kwargs): return None def _read_data(self, field_name): self.requested.append(field_name) finfo = self.ds._get_field_info(field_name) if finfo.sampling_type == "particle": self.requested.append(field_name) return np.ones(self.NumberOfParticles) return YTArray( defaultdict.__missing__(self, field_name), units=finfo.units, registry=self.ds.unit_registry, ) def get_field_parameter(self, param, default=0.0): if self.field_parameters and param in self.field_parameters: return self.field_parameters[param] self.requested_parameters.append(param) if param in ["center", "normal"] or param.startswith("bulk"): if param == "bulk_magnetic_field": if self.ds.unit_system.has_current_mks: unit = "T" else: unit = "G" else: unit = fp_units[param] rng = np.random.default_rng() return self.ds.arr(rng.random(3) * 1e-2, unit) elif param in ["surface_height"]: return self.ds.quan(0.0, "code_length") elif param in ["axis"]: return 0 elif param.startswith("cp_"): ax = param[3] rv = self.ds.arr((0.0, 0.0, 0.0), fp_units[param]) rv["xyz".index(ax)] = 1.0 return rv elif param.endswith("_hat"): ax = param[0] rv = 
YTArray((0.0, 0.0, 0.0), fp_units[param]) rv["xyz".index(ax)] = 1.0 return rv elif param == "fof_groups": return None elif param == "mu": return 1.0 else: return default _num_ghost_zones = 0 id = 1 def apply_units(self, arr, units): return self.ds.arr(arr, units=units) def has_field_parameter(self, param): return param in self.field_parameters @property def fcoords(self): fc = np.array( np.mgrid[0 : 1 : self.nd * 1j, 0 : 1 : self.nd * 1j, 0 : 1 : self.nd * 1j] ) if self.flat: fc.shape = (self.nd * self.nd * self.nd, 3) else: fc = fc.transpose() return self.ds.arr(fc, units="code_length") @property def fcoords_vertex(self): rng = np.random.default_rng() fc = rng.random((self.nd, self.nd, self.nd, 8, 3)) if self.flat: fc.shape = (self.nd * self.nd * self.nd, 8, 3) return self.ds.arr(fc, units="code_length") @property def icoords(self): ic = np.mgrid[ 0 : self.nd - 1 : self.nd * 1j, 0 : self.nd - 1 : self.nd * 1j, 0 : self.nd - 1 : self.nd * 1j, ] if self.flat: ic.shape = (self.nd * self.nd * self.nd, 3) else: ic = ic.transpose() return ic @property def ires(self): ir = np.ones(self.nd**3, dtype="int64") if not self.flat: ir.shape = (self.nd, self.nd, self.nd) return ir @property def fwidth(self): fw = np.ones((self.nd**3, 3), dtype="float64") / self.nd if not self.flat: fw.shape = (self.nd, self.nd, self.nd, 3) return self.ds.arr(fw, units="code_length") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/field_exceptions.py0000644000175100001770000000251714714401662017372 0ustar00runnerdockerclass ValidationException(Exception): pass class NeedsGridType(ValidationException): def __init__(self, ghost_zones=0, fields=None): self.ghost_zones = ghost_zones self.fields = fields def __str__(self): s = "s" if self.ghost_zones != 1 else "" return f"fields {self.fields} require {self.ghost_zones} ghost zone{s}." class NeedsOriginalGrid(NeedsGridType): def __init__(self): self.ghost_zones = 0 class NeedsDataField(ValidationException): def __init__(self, missing_fields): self.missing_fields = missing_fields def __str__(self): return f"({self.missing_fields})" class NeedsProperty(ValidationException): def __init__(self, missing_properties): self.missing_properties = missing_properties def __str__(self): return f"({self.missing_properties})" class NeedsParameter(ValidationException): def __init__(self, missing_parameters): self.missing_parameters = missing_parameters def __str__(self): return f"({self.missing_parameters})" class NeedsConfiguration(ValidationException): def __init__(self, parameter, value): self.parameter = parameter self.value = value def __str__(self): return f"(Needs {self.parameter} = {self.value})" class FieldUnitsError(Exception): pass ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/field_functions.py0000644000175100001770000000712214714401662017216 0ustar00runnerdockerfrom collections.abc import Callable from inspect import signature import numpy as np from yt.utilities.lib.misc_utilities import obtain_position_vector def get_radius(data, field_prefix, ftype): center = data.get_field_parameter("center").to("code_length") DW = (data.ds.domain_right_edge - data.ds.domain_left_edge).to("code_length") # This is in code_length so it can be the destination for our r later. 
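# The loop below accumulates the squared minimum-image distance axis by
# axis, entirely in place: r = |x_i - c_i| is reduced to
# min(r, |r - DW_i|) on periodic axes, squared, and summed into radius2,
# and a final in-place sqrt converts the accumulated sum into a radius.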
radius2 = data.ds.arr( np.zeros(data[ftype, field_prefix + "x"].shape, dtype="float64"), "code_length" ) r = np.empty_like(radius2, subok=False) if any(data.ds.periodicity): rdw = radius2.v for i, ax in enumerate("xyz"): pos = data[ftype, f"{field_prefix}{ax}"] if str(pos.units) != "code_length": pos = pos.to("code_length") np.subtract( pos.d, center[i].d, r, ) if data.ds.periodicity[i]: np.abs(r, r) np.subtract(r, DW.d[i], rdw) np.abs(rdw, rdw) np.minimum(r, rdw, r) np.multiply(r, r, r) np.add(radius2.d, r, radius2.d) if data.ds.dimensionality < i + 1: break # Using the views into the array is not changing units and as such keeps # from having to do symbolic manipulations np.sqrt(radius2.d, radius2.d) # Alias it, just for clarity. radius = radius2 return radius def get_periodic_rvec(data): coords = obtain_position_vector(data).d if sum(data.ds.periodicity) == 0: return coords le = data.ds.domain_left_edge.in_units("code_length").d dw = data.ds.domain_width.in_units("code_length").d for i in range(coords.shape[0]): if not data.ds.periodicity[i]: continue coords[i, ...] -= le[i] # figure out which measure is less mins = np.argmin( [ np.abs(np.mod(coords[i, ...], dw[i])), np.abs(np.mod(coords[i, ...], -dw[i])), ], axis=0, ) temp_coords = np.mod(coords[i, ...], dw[i]) # Where second measure is better, updating temporary coords ii = mins == 1 temp_coords[ii] = np.mod(coords[i, ...], -dw[i])[ii] # Putting the temporary coords into the actual storage coords[i, ...] = temp_coords # shift back by the left edge that was subtracted above coords[i, ...] += le[i] return coords def validate_field_function(function: Callable) -> None: """ Inspect signature, raise a TypeError if invalid, return None otherwise. """ # This is a helper function to user-intended field registration methods # (e.g. Dataset.add_field and yt.derived_field) # it is not used in FieldInfoContainer.add_field to optimize performance # (inspect.signature is quite expensive and we don't want to validate yt's # internal code every time a dataset's fields are defined). # lookup parameters that do not have default values fparams = signature(function).parameters nodefaults = tuple(p.name for p in fparams.values() if p.default is p.empty) if nodefaults != ("field", "data"): raise TypeError( f"Received field function {function} with invalid signature. " f"Expected exactly 2 positional parameters ('field', 'data'), got {nodefaults!r}" ) if any( fparams[name].kind == fparams[name].KEYWORD_ONLY for name in ("field", "data") ): raise TypeError( f"Received field function {function} with invalid signature. 
" "Parameters 'field' and 'data' must accept positional values " "(they cannot be keyword-only)" ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/field_info_container.py0000644000175100001770000006526214714401662020214 0ustar00runnerdockerimport sys from collections import UserDict from collections.abc import Callable from unyt.exceptions import UnitConversionError from yt._maintenance.deprecation import issue_deprecation_warning from yt._typing import FieldKey, FieldName, FieldType, KnownFieldsT from yt.config import ytcfg from yt.fields.field_exceptions import NeedsConfiguration from yt.funcs import mylog, obj_length, only_on_root from yt.geometry.api import Geometry from yt.units.dimensions import dimensionless # type: ignore from yt.units.unit_object import Unit # type: ignore from yt.utilities.exceptions import ( YTCoordinateNotImplemented, YTDomainOverflow, YTFieldNotFound, ) from .derived_field import DeprecatedFieldFunc, DerivedField, NullFunc, TranslationFunc from .field_plugin_registry import FunctionName, field_plugins from .particle_fields import ( add_union_field, particle_deposition_functions, particle_scalar_functions, particle_vector_functions, sph_whitelist_fields, standard_particle_fields, ) if sys.version_info >= (3, 11): from typing import assert_never else: from typing_extensions import assert_never class FieldInfoContainer(UserDict): """ This is a generic field container. It contains a list of potential derived fields, all of which know how to act on a data object and return a value. This object handles converting units as well as validating the availability of a given field. """ fallback = None known_other_fields: KnownFieldsT = () known_particle_fields: KnownFieldsT = () extra_union_fields: tuple[FieldKey, ...] = () def __init__(self, ds, field_list: list[FieldKey], slice_info=None): super().__init__() self._show_field_errors: list[Exception] = [] self.ds = ds # Now we start setting things up. self.field_list = field_list self.slice_info = slice_info self.field_aliases: dict[FieldKey, FieldKey] = {} self.species_names: list[FieldName] = [] self.setup_fluid_aliases() @property def curvilinear(self) -> bool: issue_deprecation_warning( "FieldInfoContainer.curvilinear attribute is deprecated. " "Please compare the internal dataset geometry directly to known Geometry enum members instead. 
", stacklevel=3, since="4.2", ) geometry = self.ds.geometry return ( geometry is Geometry.POLAR or geometry is Geometry.CYLINDRICAL or geometry is Geometry.SPHERICAL ) def setup_fluid_fields(self): pass def setup_fluid_index_fields(self): # Now we get all our index types and set up aliases to them if self.ds is None: return index_fields = {f for _, f in self if _ == "index"} for ftype in self.ds.fluid_types: if ftype in ("index", "deposit"): continue for f in index_fields: if (ftype, f) in self: continue self.alias((ftype, f), ("index", f)) def setup_particle_fields(self, ptype, ftype="gas", num_neighbors=64): skip_output_units = ("code_length",) for f, (units, aliases, dn) in sorted(self.known_particle_fields): units = self.ds.field_units.get((ptype, f), units) output_units = units if ( f in aliases or ptype not in self.ds.particle_types_raw ) and units not in skip_output_units: u = Unit(units, registry=self.ds.unit_registry) if u.dimensions is not dimensionless: output_units = str(self.ds.unit_system[u.dimensions]) if (ptype, f) not in self.field_list: continue self.add_output_field( (ptype, f), sampling_type="particle", units=units, display_name=dn, output_units=output_units, ) for alias in aliases: self.alias((ptype, alias), (ptype, f), units=output_units) # We'll either have particle_position or particle_position_[xyz] if (ptype, "particle_position") in self.field_list or ( ptype, "particle_position", ) in self.field_aliases: particle_scalar_functions( ptype, "particle_position", "particle_velocity", self ) else: # We need to check to make sure that there's a "known field" that # overlaps with one of the vector fields. For instance, if we are # in the Stream frontend, and we have a set of scalar position # fields, they will overlap with -- and be overridden by -- the # "known" vector field that the frontend creates. So the easiest # thing to do is to simply remove the on-disk field (which doesn't # exist) and replace it with a derived field. if (ptype, "particle_position") in self and self[ ptype, "particle_position" ]._function == NullFunc: self.pop((ptype, "particle_position")) particle_vector_functions( ptype, [f"particle_position_{ax}" for ax in "xyz"], [f"particle_velocity_{ax}" for ax in "xyz"], self, ) particle_deposition_functions(ptype, "particle_position", "particle_mass", self) standard_particle_fields(self, ptype) # Now we check for any leftover particle fields for field in sorted(self.field_list): if field in self: continue if not isinstance(field, tuple): raise RuntimeError if field[0] not in self.ds.particle_types: continue units = self.ds.field_units.get(field, None) if units is None: try: units = ytcfg.get("fields", *field, "units") except KeyError: units = "" self.add_output_field( field, sampling_type="particle", units=units, ) self.setup_smoothed_fields(ptype, num_neighbors=num_neighbors, ftype=ftype) def setup_extra_union_fields(self, ptype="all"): if ptype != "all": raise RuntimeError( "setup_extra_union_fields is currently" + 'only enabled for particle type "all".' ) for units, field in self.extra_union_fields: add_union_field(self, ptype, field, units) def setup_smoothed_fields(self, ptype, num_neighbors=64, ftype="gas"): # We can in principle compute this, but it is not yet implemented. 
if (ptype, "density") not in self or not hasattr(self.ds, "_sph_ptypes"): return new_aliases = [] for ptype2, alias_name in list(self): if ptype2 != ptype: continue if alias_name not in sph_whitelist_fields: if alias_name.startswith("particle_"): pass else: continue uni_alias_name = alias_name if "particle_position_" in alias_name: uni_alias_name = alias_name.replace("particle_position_", "") elif "particle_" in alias_name: uni_alias_name = alias_name.replace("particle_", "") new_aliases.append( ( (ftype, uni_alias_name), (ptype, alias_name), ) ) if "particle_position_" in alias_name: new_aliases.append( ( (ftype, alias_name), (ptype, alias_name), ) ) new_aliases.append( ( (ptype, uni_alias_name), (ptype, alias_name), ) ) for alias, source in new_aliases: self.alias(alias, source) self.alias((ftype, "particle_position"), (ptype, "particle_position")) self.alias((ftype, "particle_mass"), (ptype, "particle_mass")) # Collect the names for all aliases if geometry is curvilinear def get_aliases_gallery(self) -> list[FieldName]: aliases_gallery: list[FieldName] = [] known_other_fields = dict(self.known_other_fields) if self.ds is None: return aliases_gallery geometry: Geometry = self.ds.geometry if ( geometry is Geometry.POLAR or geometry is Geometry.CYLINDRICAL or geometry is Geometry.SPHERICAL ): aliases: list[FieldName] for field in sorted(self.field_list): if field[0] in self.ds.particle_types: continue args = known_other_fields.get(field[1], ("", [], None)) units, aliases, display_name = args aliases_gallery.extend(aliases) elif ( geometry is Geometry.CARTESIAN or geometry is Geometry.GEOGRAPHIC or geometry is Geometry.INTERNAL_GEOGRAPHIC or geometry is Geometry.SPECTRAL_CUBE ): # nothing to do pass else: assert_never(geometry) return aliases_gallery def setup_fluid_aliases(self, ftype: FieldType = "gas") -> None: known_other_fields = dict(self.known_other_fields) # For non-Cartesian geometry, convert alias of vector fields to # curvilinear coordinates aliases_gallery = self.get_aliases_gallery() for field in sorted(self.field_list): if not isinstance(field, tuple) or len(field) != 2: raise RuntimeError if field[0] in self.ds.particle_types: continue args = known_other_fields.get(field[1], None) if args is not None: units, aliases, display_name = args else: try: node = ytcfg.get("fields", *field).as_dict() except KeyError: node = {} units = node.get("units", "") aliases = node.get("aliases", []) display_name = node.get("display_name", None) # We allow field_units to override this. First we check if the # field *name* is in there, then the field *tuple*. 
units = self.ds.field_units.get(field[1], units) units = self.ds.field_units.get(field, units) self.add_output_field( field, sampling_type="cell", units=units, display_name=display_name ) axis_names = self.ds.coordinates.axis_order geometry: Geometry = self.ds.geometry for alias in aliases: if ( geometry is Geometry.POLAR or geometry is Geometry.CYLINDRICAL or geometry is Geometry.SPHERICAL ): if alias[-2:] not in ["_x", "_y", "_z"]: to_convert = False else: to_convert = True for suffix in ["x", "y", "z"]: if f"{alias[:-2]}_{suffix}" not in aliases_gallery: to_convert = False break if to_convert: if alias[-2:] == "_x": alias = f"{alias[:-2]}_{axis_names[0]}" elif alias[-2:] == "_y": alias = f"{alias[:-2]}_{axis_names[1]}" elif alias[-2:] == "_z": alias = f"{alias[:-2]}_{axis_names[2]}" elif ( geometry is Geometry.CARTESIAN or geometry is Geometry.GEOGRAPHIC or geometry is Geometry.INTERNAL_GEOGRAPHIC or geometry is Geometry.SPECTRAL_CUBE ): # nothing to do pass else: assert_never(geometry) self.alias((ftype, alias), field) @staticmethod def _sanitize_sampling_type(sampling_type: str) -> str: """Detect conflicts between deprecated and new parameters to specify the sampling type in a new field. This is a helper function to add_field methods. Parameters ---------- sampling_type : str One of "cell", "particle" or "local" (case insensitive) Raises ------ ValueError For unsupported values in sampling_type """ if not isinstance(sampling_type, str): raise TypeError("sampling_type should be a string.") sampling_type = sampling_type.lower() acceptable_samplings = ("cell", "particle", "local") if sampling_type not in acceptable_samplings: raise ValueError( f"Received invalid sampling type {sampling_type!r}. " f"Expected any of {acceptable_samplings}" ) return sampling_type def add_field( self, name: FieldKey, function: Callable, sampling_type: str, *, alias: DerivedField | None = None, force_override: bool = False, **kwargs, ) -> None: """ Add a new field, along with supplemental metadata, to the list of available fields. This respects a number of arguments, all of which are passed on to the constructor for :class:`~yt.data_objects.api.DerivedField`. Parameters ---------- name : tuple[str, str] field (or particle) type, field name function : callable A function handle that defines the field. Should accept arguments (field, data) sampling_type: str "cell" or "particle" or "local" force_override: bool If False (default), the call is silently ignored when a field of the same name already exists; pass True to overwrite it. alias: DerivedField (optional): existing field to be aliased units : str A plain text string encoding the unit. Powers must be in python syntax (** instead of ^). If set to "auto" the units will be inferred from the return value of the field function. take_log : bool Describes whether the field should be logged validators : list A list of :class:`FieldValidator` objects vector_field : bool Describes the dimensionality of the field. Currently unused. display_name : str A name used in the plots """ # Handle the case where the field has already been added.
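# End users normally reach this through Dataset.add_field; a minimal
# sketch, with an illustrative (not built-in) field name:
#
#     def _pressure_squared(field, data):
#         return data["gas", "pressure"] ** 2
#
#     ds.add_field(
#         ("gas", "pressure_squared"),
#         function=_pressure_squared,
#         sampling_type="local",
#         units="dyn**2/cm**4",
#     )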
if not force_override and name in self: return kwargs.setdefault("ds", self.ds) sampling_type = self._sanitize_sampling_type(sampling_type) if ( not isinstance(name, str) and obj_length(name) == 2 and all(isinstance(e, str) for e in name) ): self[name] = DerivedField( name, sampling_type, function, alias=alias, **kwargs ) else: raise ValueError(f"Expected name to be a tuple[str, str], got {name}") def load_all_plugins(self, ftype: str | None = "gas") -> None: if ftype is None: return mylog.debug("Loading field plugins for field type: %s.", ftype) loaded = [] for n in sorted(field_plugins): loaded += self.load_plugin(n, ftype) only_on_root(mylog.debug, "Loaded %s (%s new fields)", n, len(loaded)) self.find_dependencies(loaded) def load_plugin( self, plugin_name: FunctionName, ftype: FieldType = "gas", skip_check: bool = False, ): f = field_plugins[plugin_name] orig = set(self.items()) f(self, ftype, slice_info=self.slice_info) loaded = [n for n, v in set(self.items()).difference(orig)] return loaded def find_dependencies(self, loaded): deps, unavailable = self.check_derived_fields(loaded) self.ds.field_dependencies.update(deps) # Note we may have duplicated dfl = set(self.ds.derived_field_list).union(deps.keys()) self.ds.derived_field_list = sorted(dfl) return loaded, unavailable def add_output_field(self, name, sampling_type, **kwargs): if name[1] == "density": if name in self: # this should not happen, but it does # it'd be best to raise an error here but # it may take a while to cleanup internal issues return kwargs.setdefault("ds", self.ds) self[name] = DerivedField(name, sampling_type, NullFunc, **kwargs) def alias( self, alias_name: FieldKey, original_name: FieldKey, units: str | None = None, deprecate: tuple[str, str | None] | None = None, ): """ Alias one field to another field. Parameters ---------- alias_name : tuple[str, str] The new field name. original_name : tuple[str, str] The field to be aliased. units : str A plain text string encoding the unit. Powers must be in python syntax (** instead of ^). If set to "auto" the units will be inferred from the return value of the field function. deprecate : tuple[str, str | None] | None If this is set, then the tuple contains two string version numbers: the first marking the version when the field was deprecated, and the second marking when the field will be removed. """ if original_name not in self: return if units is None: # We default to CGS here, but in principle, this can be pluggable # as well. 
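# Concretely: dimensionful units are converted to the dataset's unit
# system default for that dimension (e.g. "g/cm**3" is kept under a CGS
# unit system but becomes "kg/m**3" under MKS), while dimensionless
# units pass through unchanged.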
# self[original_name].units may be set to `None` at this point # to signal that units should be autoset later oru = self[original_name].units if oru is None: units = None else: u = Unit(oru, registry=self.ds.unit_registry) if u.dimensions is not dimensionless: units = str(self.ds.unit_system[u.dimensions]) else: units = oru self.field_aliases[alias_name] = original_name function = TranslationFunc(original_name) if deprecate is not None: self.add_deprecated_field( alias_name, function=function, sampling_type=self[original_name].sampling_type, display_name=self[original_name].display_name, units=units, since=deprecate[0], removal=deprecate[1], ret_name=original_name, ) else: self.add_field( alias_name, function=function, sampling_type=self[original_name].sampling_type, display_name=self[original_name].display_name, units=units, alias=self[original_name], ) def add_deprecated_field( self, name, function, sampling_type, since, removal=None, ret_name=None, **kwargs, ): """ Add a new field which is deprecated, along with supplemental metadata, to the list of available fields. This respects a number of arguments, all of which are passed on to the constructor for :class:`~yt.data_objects.api.DerivedField`. Parameters ---------- name : str is the name of the field. function : callable A function handle that defines the field. Should accept arguments (field, data) sampling_type : str "cell" or "particle" or "local" since : str The version string marking when this field was deprecated. removal : str The version string marking when this field will be removed. ret_name : str The name of the field which will actually be returned, used only by :meth:`~yt.fields.field_info_container.FieldInfoContainer.alias`. units : str A plain text string encoding the unit. Powers must be in python syntax (** instead of ^). If set to "auto" the units will be inferred from the return value of the field function. take_log : bool Describes whether the field should be logged validators : list A list of :class:`FieldValidator` objects vector_field : bool Describes the dimensionality of the field. Currently unused. display_name : str A name used in the plots """ if ret_name is None: ret_name = name self.add_field( name, function=DeprecatedFieldFunc(ret_name, function, since, removal), sampling_type=sampling_type, **kwargs, ) def has_key(self, key): # This gets used a lot if key in self: return True if self.fallback is None: return False return key in self.fallback def __missing__(self, key): if self.fallback is None: raise KeyError(f"No field named {key}") return self.fallback[key] @classmethod def create_with_fallback(cls, fallback, name=""): obj = cls() obj.fallback = fallback obj.name = name return obj def __contains__(self, key): if super().__contains__(key): return True if self.fallback is None: return False return key in self.fallback def __iter__(self): yield from super().__iter__() if self.fallback is not None: yield from self.fallback def keys(self): keys = list(super().keys()) if self.fallback: keys += list(self.fallback.keys()) return keys def check_derived_fields(self, fields_to_check=None): # The following exceptions lists were obtained by expanding an # all-catching `except Exception`.
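# Each candidate field is exercised against a FieldDetector (a
# defaultdict of fake, unit-aware data) purely to record what it reads;
# fields whose functions raise one of the tolerated exception types, or
# whose recorded dependencies are not all present in field_list, are
# dropped from the container instead of surfacing an error to the user.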
# We define # - a blacklist (exceptions that we know should be caught) # - a whitelist (exceptions that should be handled) # - a greylist (exceptions that may be covering bugs but should be checked) # See https://github.com/yt-project/yt/issues/2853 # in the long run, the greylist should be removed blacklist = () whitelist = (NotImplementedError,) greylist = ( YTFieldNotFound, YTDomainOverflow, YTCoordinateNotImplemented, NeedsConfiguration, TypeError, ValueError, IndexError, AttributeError, KeyError, # code smells -> those are very likely bugs UnitConversionError, # solved in GH PR 2897 ? # RecursionError is clearly a bug, and was already solved once # in GH PR 2851 RecursionError, ) deps = {} unavailable = [] fields_to_check = fields_to_check or list(self.keys()) for field in fields_to_check: fi = self[field] try: # fd: field detector fd = fi.get_dependencies(ds=self.ds) except blacklist as err: print(f"{err.__class__} raised for field {field}") raise SystemExit(1) from err except (*whitelist, *greylist) as e: if field in self._show_field_errors: raise if not isinstance(e, YTFieldNotFound): # if we're doing field tests, raise an error # see yt.fields.tests.test_fields if hasattr(self.ds, "_field_test_dataset"): raise mylog.debug( "Raises %s during field %s detection.", str(type(e)), field ) self.pop(field) continue # This next bit checks that we can't somehow generate everything. # We also manually update the 'requested' attribute missing = not all(f in self.field_list for f in fd.requested) if missing: self.pop(field) unavailable.append(field) continue fd.requested = set(fd.requested) deps[field] = fd mylog.debug("Succeeded with %s (needs %s)", field, fd.requested) # now populate the derived field list with results # this violates isolation principles and should be refactored dfl = set(self.ds.derived_field_list).union(deps.keys()) dfl = sorted(dfl) if not hasattr(self.ds.index, "meshes"): # the meshes attribute characterizes an unstructured-mesh data structure # ideally this filtering should not be required # and this could maybe be handled in fi.get_dependencies # but it's a lot easier to do here filtered_dfl = [] for field in dfl: try: ftype, fname = field if "vertex" in fname: continue except ValueError: # in very rare cases, there can be a field represented by a single # string, like "emissivity" # this try block _should_ be removed and the error fixed upstream # for reference, a test that would break is # yt/data_objects/tests/test_fluxes.py::ExporterTests pass filtered_dfl.append(field) dfl = filtered_dfl self.ds.derived_field_list = dfl self._set_linear_fields() return deps, unavailable def _set_linear_fields(self): """ Sets which fields use linear as their default scaling in Profiles and PhasePlots. Default for all fields is set to log, so this sets which are linear. For now, set linear to geometric fields: position and velocity coordinates.
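For example, ("index", "x") and ("gas", "velocity_x") default to linear
scaling, while a field such as ("gas", "density") keeps the default log
scaling.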
""" non_log_prefixes = ("", "velocity_", "particle_position_", "particle_velocity_") coords = ("x", "y", "z") non_log_fields = [ prefix + coord for prefix in non_log_prefixes for coord in coords ] for field in self.ds.derived_field_list: if field[1] in non_log_fields: self[field].take_log = False ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/field_plugin_registry.py0000644000175100001770000000066714714401662020443 0ustar00runnerdockerfrom collections.abc import Callable FunctionName = str FieldPluginMap = dict[FunctionName, Callable] field_plugins: FieldPluginMap = {} def register_field_plugin(func: Callable) -> Callable: name = func.__name__ if name.startswith("setup_"): name = name[len("setup_") :] if name.endswith("_fields"): name = name[: -len("_fields")] field_plugins[name] = func # And, we return it, too return func ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1731330994.0 yt-4.4.0/yt/fields/field_type_container.py0000644000175100001770000001264714714401662020241 0ustar00runnerdocker""" A proxy object for field descriptors, usually living as ds.fields. """ import inspect import textwrap import weakref from functools import cached_property from yt._maintenance.ipython_compat import IPYWIDGETS_ENABLED from yt.fields.derived_field import DerivedField def _fill_values(values): value = ( '